SQLite Import CSV: Loading Data with .import and --csv

How to import CSV files into SQLite using the .import command — handling headers, existing tables, custom separators, and the most common errors.

CSV Import Lives in the CLI, Not in SQL

There's no IMPORT statement in SQLite's SQL dialect. CSV loading is a feature of the sqlite3 command-line shell — a dot-command called .import. That's an important mental shift if you're coming from MySQL's LOAD DATA INFILE or Postgres's COPY: those run on the server, but .import is something the client tool does for you, reading the file and issuing INSERT statements under the hood.

So everything in this page assumes you're inside the sqlite3 shell:

sqlite3 mydata.db

If you need to import from application code instead — Python, Node, Go — you'll read the CSV in your language and use parameterized INSERT statements. We'll cover that approach in the application integration chapter. Here, we're focused on the CLI.

The Basic .import

The shortest path: tell SQLite the file is CSV, then point .import at the file and a table name.

.mode csv
.import people.csv people

Two things happen depending on whether people already exists:

  • Table doesn't exist — SQLite creates it, using the first row of the CSV as column names. Every column gets TEXT affinity.
  • Table exists — SQLite inserts every row of the file as data. The header row, if there is one, gets inserted as a data row too.

That second case is where most people get bitten on the first try. If your CSV has a header and your table already exists, you need to skip it explicitly.

Skipping the Header on an Existing Table

Use --skip N to tell .import to ignore the first N lines of the file (here, --skip 1 for the header):

CREATE TABLE people (
    name TEXT,
    age  INTEGER,
    city TEXT
);

.import --csv --skip 1 people.csv people

--csv is shorthand for .mode csv scoped to this one command, so you don't have to set the mode separately. --skip 1 drops the header. The remaining lines get inserted into people in column order.

A quick check after import:

SELECT count(*) FROM people;
SELECT * FROM people LIMIT 5;

Column order in the file has to match column order in the table. There's no header-based mapping — .import just lines up the Nth field with the Nth column.
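
If the file's column order doesn't match the table, one workaround is to import into a throwaway staging table and copy the rows across with INSERT ... SELECT. A sketch, assuming a reordered people.csv whose header is city,name,age (the staging table name is arbitrary):

.import --csv people.csv staging
INSERT INTO people (name, age, city)
    SELECT name, age, city FROM staging;
DROP TABLE staging;

Because staging doesn't exist yet, .import creates it from the header, so the SELECT can pull each column by name regardless of its position in the file.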

Letting SQLite Create the Table for You

For exploratory work, the easiest path is to skip CREATE TABLE entirely and let .import build the table from the header:

.mode csv
.import sales.csv sales

.schema sales

.schema sales will show something like:

CREATE TABLE sales(
  "order_id" TEXT,
  "amount" TEXT,
  "ordered_at" TEXT
);

Notice every column is TEXT. That's deliberate — .import doesn't try to infer types. If you want amount as a real number and ordered_at as a proper timestamp, create the table yourself first with the right types, then import with --skip 1. SQLite's type affinity will coerce numeric strings into integers and reals on insert.
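
A sketch of that approach for the same file (the column names come from the header above; the types are our choice):

DROP TABLE sales;
CREATE TABLE sales (
    order_id   INTEGER,
    amount     REAL,
    ordered_at TEXT
);

.import --csv --skip 1 sales.csv sales

SELECT typeof(order_id), typeof(amount) FROM sales LIMIT 1;

The typeof() check should now report integer and real instead of text.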

Custom Separators: TSV, Pipe, Semicolon

.mode csv uses a comma. For tab-separated files, switch modes:

.mode tabs
.import data.tsv events

For other separators, use .separator after picking a mode:

.mode csv
.separator "|"
.import pipe_data.txt events

One thing to know: .mode csv follows RFC 4180 quoting rules — fields with embedded commas or newlines work as long as they're properly quoted with ". .mode tabs is a simpler split-on-character mode with no quoting. If your file has quoted fields with embedded separators, stay in .mode csv and change the separator.
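
Semicolon-separated files, a common Excel export in locales that use the comma as the decimal separator, work the same way: keep CSV mode for the quoting and swap the separator. The file and table names here are placeholders:

.mode csv
.separator ";"
.import --skip 1 invoices.csv invoices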

A Realistic Walkthrough

Say orders.csv looks like this:

order_id,customer,amount,ordered_at
1001,Ada,49.99,2026-01-12
1002,Boris,12.50,2026-01-13
1003,"Chen, Wei",199.00,2026-01-14

Note row 3 has a comma inside a quoted field. A full shell session might look like this (the column types are our choice):
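
CREATE TABLE orders (
    order_id    INTEGER,
    customer    TEXT,
    amount      REAL,
    ordered_at  TEXT
);

.import --csv --skip 1 orders.csv orders

SELECT * FROM orders;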

The "Chen, Wei" field stays intact because CSV mode honors the quotes, amount ends up as a real number, and order_id as an integer thanks to the declared column types.

Wrapping the Import in a Transaction

.import issues one INSERT per row. For a few thousand rows that's fine. For a million, it's painfully slow unless you wrap the whole thing in a transaction so SQLite isn't committing after every row:

BEGIN;
.import --csv --skip 1 big_file.csv events
COMMIT;

This single change can turn a multi-minute import into a few seconds. If anything fails mid-import, ROLLBACK undoes the partial load, which is also handy for retries.

You can speed things up further by dropping indexes before the import and recreating them after — index maintenance per row adds up.
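
A sketch of that pattern for the big_file.csv import above (the index name and column are placeholders for whatever indexes your table actually has):

DROP INDEX IF EXISTS idx_events_created_at;
BEGIN;
.import --csv --skip 1 big_file.csv events
COMMIT;
CREATE INDEX idx_events_created_at ON events(created_at);

Rebuilding the index once over the finished table is much cheaper than maintaining it row by row during the load.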

Common Errors and How to Fix Them

Error: expected N columns but found M — a row's field count doesn't match the table. Usually:

  • A stray comma in an unquoted field. Re-export with proper CSV quoting, or switch to .mode csv (RFC 4180) instead of .mode tabs.
  • A trailing blank line at the end of the file. Remove it before importing.
  • The table has more columns than the CSV. Either add the missing columns to the file or insert into a staging table with the right shape and copy into the real table.

Header row appears as data — you forgot --skip 1 on an existing table. Delete that row (DELETE FROM t WHERE rowid = 1); the rows below it imported fine. If you'd rather rerun the whole import with the flag, empty the table first so you don't duplicate every row.

Numbers stored as strings — you let .import create the table, so every column is TEXT. Drop the table, define it explicitly with INTEGER/REAL columns, and re-import.

Error: no such file — the path is relative to wherever you launched sqlite3 from, not to the database file. Use an absolute path or cd to the right directory before opening the shell.

The CLI prints line numbers on errors, which is the fastest way to find the offending row in a large file.

A Quick Recap

  • .import is a CLI dot-command, not SQL. Run it inside the sqlite3 shell.
  • Use --csv to handle quoting properly, --skip 1 to ignore a header row.
  • If the table doesn't exist, .import creates it from the header — but every column will be TEXT. Create the table yourself for proper types.
  • Wrap large imports in BEGIN/COMMIT to avoid one transaction per row.
  • Column order in the file has to match column order in the table.

Next: Exporting Data Back Out

Importing is half the story. The same shell can dump query results or whole tables back to CSV, JSON, or SQL — useful for backups, data pipelines, and handing data off to other tools. That's coming up in exporting data.

Frequently Asked Questions

How do I import a CSV file into SQLite?

Open the database with the sqlite3 CLI, switch to CSV mode with .mode csv, then run .import data.csv table_name. If the table doesn't exist yet, SQLite creates it using the first row of the file as column names. If it does exist, every row of the file is inserted as data — so you usually want .import --skip 1 to skip the header.

How do I import a CSV with a header row into an existing SQLite table?

Use .import --csv --skip 1 data.csv table_name. The --skip 1 flag tells SQLite to ignore the first line so the header row doesn't become a row of data. Without it you'll end up with a row containing the literal column names.

Why does my SQLite CSV import fail with 'expected N columns but found M'?

The file has rows with a different column count than the table — usually from embedded commas, unescaped quotes, or a trailing empty line. Use .mode csv (or --csv) instead of .mode tabs so SQLite handles quoting per RFC 4180, and check the file with a text editor for stray separators. The CLI prints the offending line number, which is the fastest way to find the bad row.
