SQLite emphatically warns against concurrent writes; it is not designed for them.
I'm seeing these numbers on my current scratch benchmark:
- Events are appended to a table with 10M+ records (the database is ~4+ GiB).
- Reads go to a separate computed table, which is kept up to date by a trigger on the append-only table (a rough sketch of this layout follows the list).
- WAL mode on, auto-vacuum on. The connection/pool configuration is:
{:dbtype "sqlite",
:auto_vacuum "INCREMENTAL",
:connectionTestQuery "PRAGMA journal_mode;",
:preferredTestQuery "PRAGMA journal_mode;",
:dataSourceProperties
{:journal_mode "WAL",
:limit_worker_threads 4,
:page_size 4096,
:busy_timeout 5000,
:enable_load_extension true,
:foreign_keys "ON",
:journal_size_limit 0,
:cache_size 15625,
:maximumPoolSize 1,
:synchronous "NORMAL"}},
- 1,600 sequential (in a single process) read-after-write transactions, append-only, no batching (the loop is sketched below).
- With a separate, sequential writer process and two concurrent reader processes, I'm seeing 400+ append transactions/second into the append-only table (no batching), and a total of 41,000 reads/second doing `select *` on the trigger-updated table.
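For context, the layout is roughly the following. This is only a sketch: the table names (`events`, `read_view`), the columns, and the payload shape are placeholders rather than my actual schema, and the trigger here just mirrors the latest event per entity into the read table.

    (require '[next.jdbc :as jdbc])

    ;; Placeholder datasource; the real one is built from the pool config above.
    (def ds (jdbc/get-datasource {:dbtype "sqlite" :dbname "bench.db"}))

    ;; Append-only event log (deliberately TEXT-heavy, like the real schema).
    (jdbc/execute! ds ["CREATE TABLE IF NOT EXISTS events (
                          id         INTEGER PRIMARY KEY,
                          entity_id  TEXT NOT NULL,
                          payload    TEXT NOT NULL,
                          created_at TEXT NOT NULL DEFAULT (datetime('now')))"])

    ;; Computed/read table that the reader processes hit with `select *`.
    (jdbc/execute! ds ["CREATE TABLE IF NOT EXISTS read_view (
                          entity_id  TEXT PRIMARY KEY,
                          payload    TEXT NOT NULL,
                          updated_at TEXT NOT NULL)"])

    ;; The trigger keeps read_view current as part of the same write transaction.
    (jdbc/execute! ds ["CREATE TRIGGER IF NOT EXISTS events_after_insert
                        AFTER INSERT ON events
                        BEGIN
                          INSERT OR REPLACE INTO read_view (entity_id, payload, updated_at)
                          VALUES (NEW.entity_id, NEW.payload, NEW.created_at);
                        END"])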
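The single-process read-after-write loop then amounts to something like this (again a sketch against the placeholder schema above, not the actual benchmark harness):

    ;; One read-after-write transaction: append an event, then read back the row
    ;; the trigger produced, all inside the same transaction.
    (defn append-and-read! [ds entity-id payload]
      (jdbc/with-transaction [tx ds]
        (jdbc/execute! tx ["INSERT INTO events (entity_id, payload) VALUES (?, ?)"
                           entity-id payload])
        (jdbc/execute-one! tx ["SELECT * FROM read_view WHERE entity_id = ?" entity-id])))

    ;; e.g. (dotimes [i 1600] (append-and-read! ds (str "entity-" (mod i 100)) "{...}"))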
My schema is (deliberately) poor --- most of it is TEXT.
Isn't that schema actually the opposite of poor for SQLite, given that SQLite uses dynamic typing and declared column types are only affinities anyway?