This might be true for SELECTs, but I found that individual INSERTs are massively slower than the same INSERTs grouped into a transaction.
Which should be obvious, but I could see some people reading this blog post and jumping to the wrong conclusion.
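A minimal sketch of what I mean, using Python's built-in sqlite3 module and a throwaway items table (file name and row count are arbitrary); the gap comes from paying the commit/fsync cost once per statement versus once per batch:

```python
import sqlite3
import time

conn = sqlite3.connect("test.db")
conn.execute("CREATE TABLE IF NOT EXISTS items (id INTEGER PRIMARY KEY, name TEXT)")
conn.commit()

# One transaction per INSERT: every statement pays its own commit/fsync cost.
t0 = time.perf_counter()
for i in range(1000):
    conn.execute("INSERT INTO items (name) VALUES (?)", (f"row-{i}",))
    conn.commit()
print("per-statement commits:", time.perf_counter() - t0, "s")

# The same 1000 INSERTs grouped into a single transaction: one commit at the end.
t0 = time.perf_counter()
with conn:  # BEGIN ... COMMIT
    for i in range(1000):
        conn.execute("INSERT INTO items (name) VALUES (?)", (f"row-{i}",))
print("one transaction:", time.perf_counter() - t0, "s")
```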
There is some risk that, if you design your website to use a local database (sqlite, or a traditional database over a unix socket on the same machine), then switching later to a networked database is harder. In other words, once you design a system to do 200 queries per page, you’d essentially have to redesign the whole thing to switch later.
It seems like it mostly comes down to how likely it is that the site will grow large enough to need a networked database. And people probably wildly overestimate this. HackerNews, for example, runs on a single computer.
>For a 50-entry timeline, the latency is usually less than 25 milliseconds. Profiling shows that few of those milliseconds were spent inside the database engine.
And instead were spent blocking on the disk for all of the extra queries that were made? Or is it trying to say that concatenating a handful of strings takes 22 ms? Considering how much games can render in a 16 ms budget, I don't see where that time is going when rendering HTML.
This feels like a very elaborate way of saying that doing O(N) work is not a problem, but doing O(N) network calls is.
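Back-of-the-envelope, with assumed (not measured) per-query costs, just to make the distinction concrete:

```python
# Illustrative arithmetic only: both per-query costs below are assumptions.
queries = 200
local_query_s = 0.00005   # ~50 µs for an in-process SQLite point lookup (assumed)
network_rtt_s = 0.0005    # ~0.5 ms round trip to a database on another host (assumed)

print(f"in-process:   {queries * local_query_s * 1000:.0f} ms")                    # ~10 ms
print(f"over network: {queries * (local_query_s + network_rtt_s) * 1000:.0f} ms")  # ~110 ms
```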
How does one go about deployment and backups with a local db? Like let’s say I have a web app hosted on a cloud service like App Engine or Elastic… if I redeploy my web app, how do I make sure my current local db does not get wiped? How are periodic backups handled?
I can think of many hacks to do this, but is there a best practice for this kind of stuff? I’m curious how people do this.
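One common approach (not specific to any provider): keep the .db file on a persistent volume rather than the ephemeral app filesystem, and take periodic snapshots with SQLite's online backup API, which is safe to run while the app is writing. A minimal sketch using Python's built-in sqlite3.Connection.backup(); paths and scheduling are placeholders, and tools like Litestream exist if you want continuous replication to object storage instead:

```python
import sqlite3
from datetime import datetime, timezone

def backup_sqlite(src_path: str, dest_dir: str) -> str:
    """Snapshot a live SQLite database using the online backup API."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    dest_path = f"{dest_dir}/app-{stamp}.db"
    src = sqlite3.connect(src_path)
    dest = sqlite3.connect(dest_path)
    with dest:
        src.backup(dest)   # produces a consistent copy even with concurrent writers
    dest.close()
    src.close()
    return dest_path       # upload/rotate the snapshot from here however you like

# e.g. run from cron or a scheduler:
# backup_sqlite("/data/app.db", "/backups")
```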
A lot of skepticism in the comments. Let me remind them that doing N queries against a local disk with in-memory cached pages is completely different from doing N round trips over a typical VPS network. Having said that, there is no silver bullet for dumb code! So let's not confuse the argument the author is actually trying to make.
Well, it depends. I vividly remember removing 200 small SQLite queries from a routing algorithm in a mobile app (by moving the metadata into a small in-memory data store instead) and roughly doubling its speed. :-) It was a pretty easy call after seeing sqlite3_step being the top CPU user by a large margin.
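Roughly the shape of that change, with a hypothetical edge_meta table standing in for whatever the real schema was: hydrate the metadata once into a dict, so the hot routing loop does plain dictionary lookups instead of a query per step:

```python
import sqlite3

conn = sqlite3.connect("routing.db")
# Hypothetical metadata table; the real app's schema was of course different.
conn.executescript("""
CREATE TABLE IF NOT EXISTS edge_meta (edge_id INTEGER PRIMARY KEY, cost REAL);
INSERT OR IGNORE INTO edge_meta VALUES (1, 1.5), (2, 0.7);
""")

# Before (the shape of it): one SQLite query per edge the router visits.
def edge_cost_db(edge_id: int) -> float:
    row = conn.execute("SELECT cost FROM edge_meta WHERE edge_id = ?", (edge_id,)).fetchone()
    return row[0]

# After: load the small metadata table into memory once, then it's a dict lookup per edge.
edge_cost = dict(conn.execute("SELECT edge_id, cost FROM edge_meta"))

def edge_cost_mem(edge_id: int) -> float:
    return edge_cost[edge_id]
```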
The article doesn't make it at all clear what it is comparing to - MySQL running remotely or on the same server? I'm sure SQLite still has less "latency" than MySQL on localhost or a Unix socket, but surely not meaningfully so. So, is SQLite really just that much faster at any SELECT query, or are they just comparing apples and oranges?
Or am I mistaken in thinking that talking to MySQL on localhost has comparable latency to SQLite?
Also SQLite is 35% faster than the filesystem (for reading small blobs): https://sqlite.org/fasterthanfs.html
Side note - is this post accessible from the site somewhere? I don’t see where you’d find it (along with the C is Best post [1] shared here recently).
I don't have time to test this myself right now, but it would be interesting to see a proper benchmark. We all know it's not suitable for high write concurrency, but SQLite should be quite a bit faster for reads because of the lack of overhead. But how much faster is it really?
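A rough harness for the read side (only single-row point SELECTs, which is the access pattern in question); the SQLite part runs as-is, and you can pass any DB-API connection for the server you want to compare against:

```python
import sqlite3
import time

def bench_point_selects(conn, sql, n=10_000):
    """Time n single-row point SELECTs through a DB-API connection; returns µs/query."""
    cur = conn.cursor()
    t0 = time.perf_counter()
    for i in range(n):
        cur.execute(sql, (i % 1000 + 1,))
        cur.fetchone()
    return (time.perf_counter() - t0) / n * 1e6

# In-process SQLite
lite = sqlite3.connect("bench.db")
lite.execute("CREATE TABLE IF NOT EXISTS t (id INTEGER PRIMARY KEY, v TEXT)")
lite.executemany("INSERT OR IGNORE INTO t VALUES (?, ?)", [(i, f"v{i}") for i in range(1, 1001)])
lite.commit()
print("sqlite:", bench_point_selects(lite, "SELECT v FROM t WHERE id = ?"), "µs/query")

# For MySQL/Postgres over localhost, create the same table and pass that driver's
# connection plus its paramstyle, e.g. "SELECT v FROM t WHERE id = %s".
```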
I’ve been experimenting with LiveStoreJS, which uses a custom SQLite WASM binary for event sync, so for simplicity I’ve also used it for regular application data in the browser and found no issues (yet). It surprised me that a full database engine in memory could perform well versus native JS objects at scale, but perhaps scale is exactly when it starts to shine. Just be wary of size limits beyond 16-20 MB.
Make sure you click this link https://sqlite.org/src/timeline
So the SQLite developers use their own version control system (Fossil), which uses SQLite for storage. Funny.
Has anyone tried using distributed versions of SQLite, such as rqlite? How reliable are they?
This was definitely something surprising that I discovered when building with SQLite recently. We're taught to avoid N+1 queries at almost any cost in client-server RDBMSs, but in SQLite the N+1 can actually be the best option in most cases.
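For concreteness, a sketch with hypothetical posts/comments tables: the classic N+1 shape - one query for the page, then one query per row - which is a performance smell against a networked database but just N cheap in-process B-tree probes here, as long as the foreign key is indexed:

```python
import sqlite3

conn = sqlite3.connect("app.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS posts    (id INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE IF NOT EXISTS comments (id INTEGER PRIMARY KEY, post_id INTEGER, body TEXT);
CREATE INDEX IF NOT EXISTS idx_comments_post_id ON comments(post_id);
""")

# "N+1": one query for the page of posts, then one query per post for its comments.
posts = conn.execute("SELECT id, title FROM posts ORDER BY id DESC LIMIT 50").fetchall()
page = [
    (title, conn.execute("SELECT body FROM comments WHERE post_id = ?", (post_id,)).fetchall())
    for post_id, title in posts
]
```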
I had to build some back-office tools and used Ruby on Rails with SQLite, and didn't bother with "efficient" joins or anything. Just index the foreign keys, do N+1s everywhere - you'll be fine. The app is incredibly easy to maintain and add features to because of this, and the DB is super easy to back up - literally just scp the SQLite db file somewhere else. Couldn't be happier with this setup.
Quite interesting. So SQL access patterns can be optimised differently in SQLite.
One index scan beats 200 index lookups though surely?
I.e. sometimes one query is cheaper - it's just not the network cost anymore.
Also you can run your "big" DB like postgres on the same machine too. No law against that.
I am using SQLite with paperless-ngx (an app to manage PDFs [4]). It is quite difficult to beat SQLite if you do not have a very high degree of write parallelism.
SQLite is an embedded database: there is no socket to open; you access it directly via the file system.
If you do not plan on big-data workloads with a high number of writers, you will have a hard time beating SQLite on modern hardware for average use cases.
I have written a super simple search engine [1] using Python asyncio, and SQLite is not the bottleneck so far.
If you are hitting SQLite's limits, I have happy news: upgrading to PostgreSQL will be enough for a lot of use cases [2]. You can use it as a schemaless Mongo-like database, a simple queue system [3], or a search engine with stemming. After a while you can decide if you need a specialized component (e.g. Kafka, Elasticsearch, etc.) for one of your services.
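The queue part in particular is easy to sketch with stock PostgreSQL features (this is the generic SKIP LOCKED pattern, not necessarily how pque [3] does it), and the jsonb column doubles as the "schemaless" part; connection details are placeholders:

```python
import psycopg2  # assumes a reachable PostgreSQL and the psycopg2 driver

conn = psycopg2.connect("dbname=app")  # placeholder DSN
cur = conn.cursor()
cur.execute("""
    CREATE TABLE IF NOT EXISTS jobs (
        id      bigserial PRIMARY KEY,
        payload jsonb NOT NULL,              -- schemaless, document-style data
        done    boolean NOT NULL DEFAULT false
    )
""")
conn.commit()

# Claim one pending job; SKIP LOCKED lets many workers poll without blocking each other.
cur.execute("""
    SELECT id, payload FROM jobs
    WHERE NOT done
    ORDER BY id
    FOR UPDATE SKIP LOCKED
    LIMIT 1
""")
job = cur.fetchone()
if job:
    job_id, payload = job
    # ... process payload ...
    cur.execute("UPDATE jobs SET done = true WHERE id = %s", (job_id,))
conn.commit()
```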
[1]: https://github.com/daitangio/find
[2]: https://gioorgi.com/2025/postgres-all/
[3]: https://github.com/daitangio/pque
[4]: https://docs.paperless-ngx.com