From my perspective on databases, two trends continued in 2025:
1: Moving everything to SQLite
2: Using mostly JSON fields
Both started a few years back and accelerated in 2025.
SQLite is just so nice and easy to deal with: no daemon, one file per database, and one type per value.
And the JSON arrow operators make it a pleasure to work with flexible JSON data.
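For reference, the arrow operators are `->` (returns JSON) and `->>` (returns a plain SQL value), available since SQLite 3.38. A minimal sketch (table and field names are made up for illustration):

```sql
-- A flexible JSON column alongside a typed key (SQLite 3.38+).
CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT);
INSERT INTO events (payload) VALUES ('{"user": "alice", "tags": ["a", "b"]}');

-- '->' yields a JSON value, '->>' yields a plain SQL value.
SELECT payload -> 'tags'   AS tags_json,   -- JSON: ["a","b"]
       payload ->> 'user'  AS user_name    -- TEXT: alice
FROM events;
```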
From my perspective - do you even need a database?
SQLite is kind of the middle ground between a full-fat database and 'writing your own object storage'. To put it another way, it provides a 'regularised' object-access API, rather than, say, a variant of types in a vector that you filter or map over.
As a backend database without a multi-user server, how many web connections that do writes can it realistically handle? Assuming writes are small, say 100+ rows each?
Any mitigation strategy for larger use cases?
Thanks in advance!
Pardon my ignorance, yet wasn't the prevailing thought a few years ago that you would never use SQLite in production? Has that school of thought changed?
I would say SQLite when possible, PostgreSQL (incl. extensions) when necessary, DuckDB for local/hobbyist data analysis and BigQuery (often TB or PB range) for enterprise business intelligence.
I think the right pattern here is edge sharding of user data. Cloudflare makes this pretty easy with D1/Hyperdrive.
For as much talk as I see about SQLite, are people actually using it or does it just have good marketers?
Man, I hope so. Bailing people out of horribly slow NoSQL databases is good business.
FWIW (and this is IMHO of course) DuckDB makes working with random JSON much nicer than SQLite, not least because I can extract JSON fields to dense columnar representations and do it in a deterministic, repeatable way.
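One way to do the extraction described here is to land the raw JSON first and then materialize fields as typed columns in a second step, which keeps the process repeatable. A sketch in DuckDB SQL (file and field names are hypothetical):

```sql
-- Stage 1: keep each newline-delimited JSON record intact.
CREATE TABLE raw_events AS
SELECT json FROM read_ndjson_objects('events.jsonl');

-- Stage 2: deterministically extract fields into dense, typed columns.
CREATE TABLE events AS
SELECT json ->> '$.user'                   AS user_name,
       CAST(json ->> '$.amount' AS DOUBLE) AS amount
FROM raw_events;
```

Since DuckDB's storage is columnar, the extracted table ends up as the dense representation the comment describes, and re-running stage 2 against the staged raw JSON is repeatable.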
The only thing I want out of DuckDB core at this point is support for overriding the columnar storage representation for certain structs. Right now, DuckDB decomposes structs into fields and stores each field in a column. I'd like to be able to say "no, please, pre-materialize this tuple subset and store this struct in an internal BLOB or something".
From my perspective, everything's DuckDB.
Single file per database, multiple ingestion formats, full-text search, S3 support, Parquet file support, columnar storage, fully typed.
WASM version for full SQL in JavaScript.
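A couple of those features combine in one query; the bucket and path below are hypothetical:

```sql
-- httpfs extension enables reading directly from S3.
INSTALL httpfs;
LOAD httpfs;

-- Query Parquet files in place, no ingestion step needed.
SELECT category, count(*) AS n
FROM read_parquet('s3://my-bucket/events/*.parquet')
GROUP BY category
ORDER BY n DESC;
```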