Although the question does not specify a database engine, let's assume it is sqlite3.
The following re-runnable code shows that DataFrame.to_sql() creates a
sqlite3 table and places an index on it, which contains the data from the DataFrame's index.
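A minimal sketch of that behavior, using a throwaway in-memory database and a made-up table name `t` (both just for illustration): by default to_sql() writes the DataFrame's index into the table as an extra column, and passing index=False omits it.

```python
import pandas as pd
import sqlite3

conn = sqlite3.connect(':memory:')
df = pd.DataFrame({'val': [10, 20, 30]})

# Default: the RangeIndex becomes a column (named 'index' when the index is unnamed)
df.to_sql('t', conn, if_exists='replace')
cur = conn.execute("SELECT * FROM t")
cols = [d[0] for d in cur.description]
print(cols)            # ['index', 'val']
print(cur.fetchall())  # [(0, 10), (1, 20), (2, 30)]

# With index=False, only the DataFrame's own columns are written
df.to_sql('t2', conn, if_exists='replace', index=False)
cols2 = [d[0] for d in conn.execute("SELECT * FROM t2").description]
print(cols2)           # ['val']
```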
Taking the question's code literally, the csv should import into the DataFrame with a
RangeIndex, which is a sequence of unique ordinals. Because of this, one should be surprised if the number of rows in the csv does not match the number of rows loaded into the database.
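That claim is easy to check directly. A small sketch, using a made-up three-row csv held in a string: a freshly imported DataFrame carries a RangeIndex of unique ordinals, and the row count should equal the number of data rows in the csv.

```python
import io
import pandas as pd

csv_text = "a,b\n1,x\n2,y\n3,z\n"
df = pd.read_csv(io.StringIO(csv_text))

# A freshly imported DataFrame gets a RangeIndex of unique ordinals
print(type(df.index).__name__)  # RangeIndex
print(df.index.is_unique)       # True

# Sanity check: data rows in the csv (total lines minus the header)
# should match the rows loaded into the DataFrame
n_csv_rows = csv_text.count('\n') - 1
assert len(df) == n_csv_rows
```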
So there are two things to do. First, verify that the csv is being imported correctly. This is the likely problem, since poorly formatted csv files originating from hand-edited spreadsheets frequently fail when processed by code, for a variety of reasons. That is impossible to diagnose here, because we do not know the input data.
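One common way such files misbehave is ragged rows, where lines have differing field counts, as in the question's data. A small sketch with made-up values: csv.reader preserves the unequal widths, and pandas pads the short rows with None, so counting row lengths is a quick way to spot the problem.

```python
import csv
import io
import pandas as pd

# Ragged rows: the second line has an extra field
ragged = "1,01-01-2019,724\n2,01-01-2019,233,436\n3,01-01-2019,345\n"

rows = list(csv.reader(io.StringIO(ragged)))
print([len(r) for r in rows])  # [3, 4, 3] -- unequal widths reveal the ragged row

# DataFrame pads the short rows with None rather than dropping them
df = pd.DataFrame(data=rows)
print(df.shape)  # (3, 4)
```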
Second, verify what DataFrame.to_sql() does, so it can be excluded as the cause. For that, a callable can be passed to its method parameter. It can be used to see what
DataFrame.to_sql() does with the DataFrame data prior to handing it off to the SQL engine.
import csv
import pandas as pd
import sqlite3

def dump_foo(conn):
    cur = conn.cursor()
    cur.execute("SELECT * FROM foo")
    rows = cur.fetchall()
    for row in rows:
        print(row)

conn = sqlite3.connect('example145.db')

csv_data = """1,01-01-2019,724
2,01-01-2019,233,436
3,01-01-2019,345
4,01-01-2019,803,933,943,923,954
4,01-01-2019,803,933,943,923,954
4,01-01-2019,803,933,943,923,954
4,01-01-2019,803,933,943,923,954
4,01-01-2019,803,933,943,923,954
5,01-01-2019,454
5,01-01-2019,454
5,01-01-2019,454
5,01-01-2019,454
5,01-01-2019,454"""

with open('test145.csv', 'w') as f:
    f.write(csv_data)

with open('test145.csv') as csvfile:
    data = [row for row in csv.reader(csvfile)]

df = pd.DataFrame(data=data)

def checkit(table, conn, keys, data_iter):
    print("What pandas wants to put into sqlite3")
    for row in data_iter:
        print(row)

# note: if_exists="replace" replaces the table and does not affect the data
df.to_sql('foo', conn, if_exists="replace", method=checkit)
df.to_sql('foo', conn, if_exists="replace")

print("*** What went into sqlite3")
dump_foo(conn)