
Export SQLite Data: CSV, JSON, SQL Dumps from the CLI

How to export data out of SQLite — CSV with headers, JSON, full SQL dumps, and single-table backups using the sqlite3 command-line shell.

Exporting Is a Shell Job, Not a SQL Statement

SQLite has no server-side export statement like Postgres's COPY ... TO or MySQL's SELECT ... INTO OUTFILE. Exporting lives in the sqlite3 command-line shell instead, driven by dot-commands: .mode, .headers, .output, .dump. Once you know those four, you can produce CSV, JSON, plain text, or a full SQL dump from any database.

The mental model: you tell the shell how to format results (.mode), whether to include column names (.headers), and where to send the output (.output). Then you run a query, and that query's results land in the file.

Let's set up a tiny database to play with:
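One way to do it from the OS shell, assuming the database file is named mydb.sqlite:

```shell
# Start from a clean file so the CREATE TABLE always succeeds.
rm -f mydb.sqlite

# Create mydb.sqlite with a small users table and three rows.
sqlite3 mydb.sqlite <<'EOF'
CREATE TABLE users (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL,
    email TEXT NOT NULL,
    signup_date TEXT NOT NULL
);
INSERT INTO users VALUES
    (1, 'Ada Lovelace',  'ada@example.com',    '2025-01-15'),
    (2, 'Boris Johnson', 'boris@example.com',  '2025-02-03'),
    (3, 'Carmen Diaz',   'carmen@example.com', '2025-03-22');
EOF
```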

Three rows, four columns. We'll export this in several formats.

CSV: .mode csv with Headers

CSV is the most common export format — spreadsheets, data pipelines, and most other tools speak it. Inside the sqlite3 shell:

sqlite> .mode csv
sqlite> .headers on
sqlite> .output users.csv
sqlite> SELECT * FROM users;
sqlite> .output stdout

What just happened:

  • .mode csv formats each row as comma-separated values, quoting fields that contain commas, quotes, or newlines.
  • .headers on adds a first row with column names. Without it, your CSV has no header — usually not what you want.
  • .output users.csv redirects results to a file. From this point on, query output goes there instead of the screen.
  • The SELECT runs and writes to the file silently.
  • .output stdout switches output back to the terminal so you can see the next query's results.

The resulting file:

id,name,email,signup_date
1,Ada Lovelace,ada@example.com,2025-01-15
2,Boris Johnson,boris@example.com,2025-02-03
3,Carmen Diaz,carmen@example.com,2025-03-22

You can export the result of any query, not just whole tables — filter, join, aggregate, then redirect:

sqlite> .output recent_users.csv
sqlite> SELECT name, email FROM users WHERE signup_date >= '2025-02-01';
sqlite> .output stdout

One-Liner from the Shell

You don't have to enter the interactive shell at all. Pipe the dot-commands and SQL into sqlite3 from your operating system shell:

sqlite3 mydb.sqlite <<EOF
.headers on
.mode csv
.output users.csv
SELECT * FROM users;
EOF

This is the form you want for scripts and cron jobs — repeatable, no manual typing. The .output is local to the session, so it doesn't leak state anywhere.
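For one-off exports there's an even shorter form: sqlite3 accepts -csv and -header command-line flags, so the mode is set up front and the shell handles the redirect. A sketch (the demo table is recreated first so the snippet runs on its own):

```shell
# Recreate the demo table so this snippet is self-contained.
sqlite3 mydb.sqlite "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT, email TEXT, signup_date TEXT); INSERT OR IGNORE INTO users VALUES (1,'Ada Lovelace','ada@example.com','2025-01-15');"

# -csv and -header replace the .mode/.headers dot-commands.
sqlite3 -csv -header mydb.sqlite "SELECT * FROM users;" > users.csv
```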

JSON: .mode json

For exports headed to a web app or a JSON-consuming tool, .mode json produces an array of objects, one per row:

sqlite> .mode json
sqlite> .output users.json
sqlite> SELECT * FROM users;
sqlite> .output stdout

The file:

[{"id":1,"name":"Ada Lovelace","email":"ada@example.com","signup_date":"2025-01-15"},
{"id":2,"name":"Boris Johnson","email":"boris@example.com","signup_date":"2025-02-03"},
{"id":3,"name":"Carmen Diaz","email":"carmen@example.com","signup_date":"2025-03-22"}]

Headers are implicit in JSON (the keys), so .headers doesn't apply here. If you want a custom shape — nested objects, renamed fields — build it inside the query with json_object():
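A sketch of that approach, assuming a SQLite build with the JSON functions (enabled by default in modern versions); the field names 'contact' and 'joined' are illustrative choices, not anything special:

```shell
# Recreate the demo table so this snippet is self-contained.
sqlite3 mydb.sqlite "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT, email TEXT, signup_date TEXT); INSERT OR IGNORE INTO users VALUES (1,'Ada Lovelace','ada@example.com','2025-01-15');"

# json_object() builds one JSON object per row, in whatever shape you ask for.
sqlite3 mydb.sqlite "SELECT json_object('name', name, 'contact', json_object('email', email), 'joined', signup_date) FROM users;"
```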

That gives you per-row JSON strings with full control over the structure. Combine with json_group_array() to roll the whole result into a single JSON document.
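For instance, a sketch that wraps every row into one top-level array (again assuming the JSON functions are available):

```shell
# Recreate the demo table so this snippet is self-contained.
sqlite3 mydb.sqlite "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT, email TEXT, signup_date TEXT); INSERT OR IGNORE INTO users VALUES (1,'Ada Lovelace','ada@example.com','2025-01-15');"

# json_group_array() aggregates the per-row objects into a single JSON array.
sqlite3 mydb.sqlite "SELECT json_group_array(json_object('name', name, 'email', email)) FROM users;"
```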

Full SQL Dump: .dump

.dump is different in kind from CSV or JSON. It produces a .sql file containing the schema and all the data as CREATE TABLE and INSERT statements — enough to rebuild the database from empty:

sqlite3 mydb.sqlite .dump > backup.sql

A snippet of what comes out:

PRAGMA foreign_keys=OFF;
BEGIN TRANSACTION;
CREATE TABLE users (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL,
    email TEXT NOT NULL,
    signup_date TEXT NOT NULL
);
INSERT INTO users VALUES(1,'Ada Lovelace','ada@example.com','2025-01-15');
INSERT INTO users VALUES(2,'Boris Johnson','boris@example.com','2025-02-03');
INSERT INTO users VALUES(3,'Carmen Diaz','carmen@example.com','2025-03-22');
COMMIT;

Restoring is the mirror operation — pipe the file back into a fresh database:

sqlite3 restored.sqlite < backup.sql
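Putting the two together as a round-trip check (the demo table is recreated so the snippet runs on its own):

```shell
# Self-contained setup of the demo table.
sqlite3 mydb.sqlite "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT, email TEXT, signup_date TEXT); INSERT OR IGNORE INTO users VALUES (1,'Ada Lovelace','ada@example.com','2025-01-15'),(2,'Boris Johnson','boris@example.com','2025-02-03'),(3,'Carmen Diaz','carmen@example.com','2025-03-22');"

# Dump, restore into a fresh file, then count the restored rows.
rm -f restored.sqlite
sqlite3 mydb.sqlite .dump > backup.sql
sqlite3 restored.sqlite < backup.sql
sqlite3 restored.sqlite "SELECT COUNT(*) FROM users;"   # prints 3
```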

.dump is the right tool for backups, version-control snapshots of test data, and migrating between machines. It preserves indexes, triggers, views — everything in the schema.

Dumping a Single Table

.dump accepts a table name (or pattern) to limit the output:

sqlite3 mydb.sqlite ".dump users" > users_only.sql

That dumps only the users table's schema and rows. Useful when you want to copy one table into another database without bringing the rest along. You can also pattern-match: .dump 'log_%' dumps every table starting with log_.

Schema Without Data

Sometimes you want the structure without the rows — for documentation, a clean dev environment, or to compare schemas across databases. .schema prints the CREATE statements only:

sqlite3 mydb.sqlite .schema > schema.sql

Pair it with a table name to get just one:

sqlite3 mydb.sqlite ".schema users" > users_schema.sql

The output is plain SQL — CREATE TABLE, CREATE INDEX, CREATE TRIGGER — ready to run against an empty database.

Other Useful Modes

.mode has more options than CSV and JSON. A few worth knowing:

.mode column        -- aligned columns, nice for reading at the terminal
.mode markdown      -- pipe-separated, GitHub-friendly tables
.mode html          -- HTML <table> output
.mode tabs          -- tab-separated values (TSV)
.mode insert users  -- emits INSERT statements for the named table
.mode quote         -- SQL-quoted values, useful for inspection

.mode markdown is great for pasting query results into a README or a pull request. .mode insert <table> is a quick way to generate seed data — run a SELECT, capture the INSERT statements, paste them into a fixture file.

sqlite> .mode insert users
sqlite> .output seed.sql
sqlite> SELECT * FROM users WHERE signup_date >= '2025-02-01';
sqlite> .output stdout
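A similar end-to-end run for .mode markdown, assuming a sqlite3 build recent enough to have it (added in 3.33):

```shell
# Recreate the demo table so this snippet is self-contained.
sqlite3 mydb.sqlite "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT, email TEXT, signup_date TEXT); INSERT OR IGNORE INTO users VALUES (1,'Ada Lovelace','ada@example.com','2025-01-15');"

# Emits a pipe-delimited table: header row, separator row, one row per record.
sqlite3 mydb.sqlite <<'EOF'
.mode markdown
SELECT name, email FROM users;
EOF
```

The output pastes directly into a README or pull request as a rendered table.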

A Few Practical Notes

  • .output stdout (or .output with no argument) restores terminal output. Forget this and the next query's results vanish silently into the file.
  • CSV exports don't preserve types. Everything becomes text in the file, and reimporting needs a target schema to interpret it. Use .dump if round-tripping into another SQLite database matters.
  • Large exports are streamed. .output writes rows as they're produced, so you can dump tables larger than RAM without trouble.
  • For a hot backup of a live database, .dump works, but the dedicated .backup command (covered later in the curriculum) is faster and safer because it uses SQLite's online backup API.

Next: Querying Data

You've now got a full picture of writing data — INSERT, UPDATE, DELETE, UPSERT, RETURNING, importing from CSV, and exporting back out. Next up is the other half of working with a database: reading data efficiently. The SELECT statement is where most of your time will go, and that's the next page.

Frequently Asked Questions

How do I export a SQLite table to CSV?

From the sqlite3 shell, switch to CSV mode, turn headers on, point output at a file, and run a query: .mode csv, .headers on, .output users.csv, then SELECT * FROM users;. Run .output stdout afterwards to send results back to the terminal.

What's the difference between .dump and exporting to CSV?

.dump produces a .sql file containing CREATE TABLE statements and INSERT statements — everything needed to rebuild the database from scratch. CSV exports only the rows of one query or table, with no schema. Use .dump for backups and migrations; use CSV for handing data to spreadsheets or other tools.

Can I export SQLite query results to JSON?

Yes. Either set .mode json in the shell and run any SELECT, or use the built-in json_object() and json_group_array() functions to build JSON inside a query. The .mode json approach is simpler for ad-hoc exports; the function approach gives you full control over the shape.
