Hacker News

100k TPS over a billion rows: the unreasonable effectiveness of SQLite

178 points by speckx | today at 5:59 PM | 57 comments

Comments

diath | today at 6:49 PM

> Hopefully, this post helps illustrate the unreasonable effectiveness of SQLite as well as the challenges you can run in with Amdahl's law and network databases like postgres.

No, it does not. The article first says that normally you would run the application and the database on separate servers, and then starts measuring the performance of a locally embedded database. If you have to keep the initial requirement for your software, then SQLite is completely out of the equation. If you can change the requirement, then you can achieve similar performance by tuning the local PGSQL instance -- and then it also becomes an evaluation of features and not just raw throughput. I'm not saying SQLite isn't an option either, but the article is confusing in that it compares two different problems/solutions.

janci | today at 8:51 PM

How does SQLite handle HA setups? The minimum I want is reliable automatic failover within a reasonable time for a user-facing service. Ideally an active-active setup.

weitendorf | today at 7:00 PM

I've been working on a hybrid protobuf ORM/generic CRUD server based on sqlite

(code at https://github.com/accretional/collector -- forgive the documentation; I'm working on a container-based agent project and also trialling using agents heavily to write the individual features. It's working pretty well, but the agents have been very zealous about documenting things lol).

This is my first real project using sqlite and we've hit some similarly cool benchmarks:

* 5-15ms of downtime to back up a live sqlite db with a realistic amount of data for a CRUD db

* Capable of properly queueing hundreds of read/write operations when temporarily unavailable due to a backup

* e2e latency of basically 1ms for CRUD operations, including proto SerDe

* WAL lets us do continuous, streaming, chunked backups!

Previously I'd only worked with Postgres and Spanner. I absolutely love sqlite so far -- I would still use Spanner for some tasks with an infinite budget, but once we get Collector to implement partitions I don't think I would ever use Postgres again.
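
For reference, SQLite's online backup API is what makes this kind of brief-pause, chunked live backup possible. A minimal sketch in C -- the chunk size and function name are illustrative, not Collector's actual code:

```c
#include <sqlite3.h>

/* Copy a live database into a backup file in small chunks, so writers are
 * only blocked briefly while each chunk is copied. */
int backup_live_db(sqlite3 *src, const char *dest_path) {
    sqlite3 *dest;
    if (sqlite3_open(dest_path, &dest) != SQLITE_OK) return SQLITE_ERROR;

    sqlite3_backup *b = sqlite3_backup_init(dest, "main", src, "main");
    if (b == NULL) { sqlite3_close(dest); return SQLITE_ERROR; }

    int rc;
    do {
        rc = sqlite3_backup_step(b, 64);   /* copy 64 pages per step */
        /* between steps, other connections can keep reading and writing src */
    } while (rc == SQLITE_OK || rc == SQLITE_BUSY || rc == SQLITE_LOCKED);

    sqlite3_backup_finish(b);
    rc = sqlite3_errcode(dest);
    sqlite3_close(dest);
    return rc;
}
```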

bob1029 | today at 8:49 PM

This is great until you encounter a customer with a hard RPO (recovery point objective) requirement of zero. SQLite has a few replication options, but in high-stakes domains I would never trust them over PGSQL/MSSQL/DB2/Oracle/etc.

jaketoronto | today at 8:43 PM

Are you limiting your number of connections to Postgres to 8? That seems like quite a bottleneck, and it may be unnecessarily throttling your throughput. Connection pools are good when your app is overwhelming your db, but in this case you really should be trying to put more load on Postgres. I'm concerned that this whole experiment is tainted by this choke point; I would love to see it tested again with a much larger connection pool. (It might also be nice to share what the CPU and thread usage on the Postgres side was during the run.) Side note: I'm a big fan of sqlite, but I've run into lots of performance bottlenecks caused by arbitrarily low connection pools and the like.
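
To make "a much larger connection pool" concrete, here's a minimal sketch of one in C with libpq. The pool size, helper names, and checkout scheme are made up for illustration; this is not the article's setup:

```c
#include <libpq-fe.h>
#include <pthread.h>

/* A crude fixed-size libpq connection pool, sized well above 8 so the client
 * side is less likely to be the bottleneck in a benchmark like this. */
#define POOL_SIZE 64

static PGconn *pool[POOL_SIZE];
static int     in_use[POOL_SIZE];
static pthread_mutex_t pool_lock = PTHREAD_MUTEX_INITIALIZER;

/* Open all connections up front; returns -1 if any connection fails. */
int pool_open(const char *conninfo) {
    for (int i = 0; i < POOL_SIZE; i++) {
        pool[i] = PQconnectdb(conninfo);
        if (PQstatus(pool[i]) != CONNECTION_OK) return -1;
        in_use[i] = 0;
    }
    return 0;
}

/* Check out a free connection; NULL means every connection is busy. */
PGconn *pool_acquire(void) {
    PGconn *c = NULL;
    pthread_mutex_lock(&pool_lock);
    for (int i = 0; i < POOL_SIZE; i++) {
        if (!in_use[i]) { in_use[i] = 1; c = pool[i]; break; }
    }
    pthread_mutex_unlock(&pool_lock);
    return c;
}

/* Return a connection to the pool. */
void pool_release(PGconn *c) {
    pthread_mutex_lock(&pool_lock);
    for (int i = 0; i < POOL_SIZE; i++) {
        if (pool[i] == c) { in_use[i] = 0; break; }
    }
    pthread_mutex_unlock(&pool_lock);
}
```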

matthewaveryusa | today at 6:28 PM

The only caveat is that this assumes all your data can fit on a single machine, and all your processing can fit on one machine. You can get a u-24tb1.112xlarge with 448 vCPUs and 24 TB of RAM for $255/hour and attach 64 TB of EBS -- that's a lot of runway.

tomconnors | today at 7:45 PM

Cool stuff as usual, Anders. One of the nice things about running a networked DB is that it makes redeploying the application a bit simpler: you can spin up a new EC2 instance or whatever and, once it's online, kill the old one. That gets you zero or close to zero downtime. If the DB is on the same instance, replacing the instance requires loading the DB onto the new one, which seems more error-prone than just restarting the app on the original instance -- but in my experience that in-place restart typically incurs downtime or some complicated handoff logic. Have you had to deal with anything like that running sqlite in prod?

badmonster | today at 7:05 PM

The real insight here is recognizing when network latency is your bottleneck. For many workloads, even a mediocre local database beats a great remote one. The question isn't "which database is best" but "does my architecture need to cross network boundaries at all?"

ducktective | today at 8:31 PM

Does anyone have rough numbers (max daily users, etc.) on the viability of SQLite vs PostgreSQL for a typical user-facing webapp or e-commerce application?

I know that, due to some recent update, SQLite can support concurrent reads but still only a single writer. For which cases would this be a problem?

Some recommend starting with postgres anyway if you have any remote thoughts of scaling in mind...
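
On the concurrency point above: the reads-don't-block-the-single-writer behavior comes from WAL journal mode. A minimal sketch of enabling it in C, with a busy timeout so a second writer waits instead of failing immediately (the path and timeout are illustrative):

```c
#include <sqlite3.h>

/* Open a database in WAL mode so readers don't block the (single) writer,
 * and set a busy timeout so a competing writer waits instead of erroring. */
int open_wal_db(const char *path, sqlite3 **db) {
    int rc = sqlite3_open(path, db);
    if (rc != SQLITE_OK) return rc;
    sqlite3_busy_timeout(*db, 5000);  /* wait up to 5 seconds for the write lock */
    return sqlite3_exec(*db, "PRAGMA journal_mode=WAL;", NULL, NULL, NULL);
}
```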

alexwennerberg | today at 6:47 PM

Does anyone have examples of organizations that have leveraged SQLite and written about their experience? I've read a lot of theory and benchmarks about it lately and it seems extremely impressive, but I'm wondering if anyone has written about pushing it to its limits "in production".

gausswho | today at 8:32 PM

SQLite DBs do not reliably survive being synced across multiple devices with Syncthing. Until that's solved, I'm always on the lookout for a DB design that does.

pmdulaney | today at 6:50 PM

Prefer "surprising" to "unreasonable".

dmezzetti | today at 7:18 PM

I've used SQLite as the content storage engine in TxtAI for years. It works great. There are also plenty of good add-ons for it, such as sqlite-vec for storing vectors. It can take you pretty far, and maybe it's all you need in many circumstances.

koakuma-chan | today at 8:11 PM

Alternative title: SQLite is All You Need

westurner | today at 6:31 PM

That's a helpful TPS Report.

TIL `SAVEPOINT` can occur inside a BEGIN ... END SQLite transaction, and that works well for optimizing batch size on a particular node under a given load.
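
A minimal sketch of that nesting in C -- the table and the batch loop are hypothetical, just to show the SAVEPOINT/RELEASE shape inside an outer BEGIN ... END:

```c
#include <sqlite3.h>

/* One outer transaction with a SAVEPOINT per batch: a failed batch can be
 * rolled back on its own without abandoning the whole transaction. */
int insert_in_batches(sqlite3 *db, int n_batches) {
    sqlite3_exec(db, "BEGIN", NULL, NULL, NULL);
    for (int i = 0; i < n_batches; i++) {
        sqlite3_exec(db, "SAVEPOINT batch", NULL, NULL, NULL);
        int rc = sqlite3_exec(db,
            "INSERT INTO events(payload) VALUES ('...')",  /* hypothetical table */
            NULL, NULL, NULL);
        if (rc != SQLITE_OK)
            sqlite3_exec(db, "ROLLBACK TO batch", NULL, NULL, NULL);
        sqlite3_exec(db, "RELEASE batch", NULL, NULL, NULL);
    }
    return sqlite3_exec(db, "END", NULL, NULL, NULL);  /* END is a synonym for COMMIT */
}
```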

Is there a solution for SQLite WAL corruption?

From https://news.ycombinator.com/item?id=45133444 :

> "PSA: SQLite WAL checksums fail silently and may lose data" https://news.ycombinator.com/item?id=44672902

> sqlite-parquet-vtable, [...]

busymom0 | today at 7:20 PM

Previously, I had always used Postgres for the database and Rust or NodeJS for my backend. For my new website (https://limereader.com/), I used Swift for the backend, SQLite for the database, and Vapor as the web server in the Swift app, and I'm self-hosting the site on an old Mac mini.

A SQLite-related issue I ran into had to do with accessing the database from multiple threads. The solution was easy to find: for multi-threaded use, SQLite needs to be opened with the `SQLITE_OPEN_FULLMUTEX` flag. Since then, the website has been running seamlessly for about 3 weeks.
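
For anyone hunting for that flag: it's passed to the C-level open call, which is ultimately what the Swift wrappers call down into. A minimal sketch in C, with an illustrative helper name:

```c
#include <sqlite3.h>

/* Open the database in serialized (fully mutexed) mode so the same
 * connection can safely be used from multiple threads. */
int open_threadsafe(const char *path, sqlite3 **db) {
    return sqlite3_open_v2(path, db,
        SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE | SQLITE_OPEN_FULLMUTEX,
        NULL);
}
```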
