Hacker News

michaelchisari today at 6:32 AM

The single writer is less of an issue in practice than it's made out to be. Modern NVMe drives are incredible, and it's trivial to get 5k writes per second in an optimized WAL setup. Way more than most apps could ever dream of.

And even then, I've used a batch writer pattern to get 180k writes per second on a commodity vps.
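The batch writer pattern mentioned above can be sketched roughly like this: many producers enqueue rows, one writer thread drains the queue and commits whole batches, so one fsync is amortized over hundreds of inserts. A minimal sketch; the `events` table, queue-based design, and batch size are my assumptions, not the commenter's actual code.

```python
import queue
import sqlite3
import threading

def batch_writer(db_path, q, batch_size=500):
    """Single writer thread: drain the queue, insert in batches, one commit per batch."""
    conn = sqlite3.connect(db_path)
    conn.execute("PRAGMA journal_mode=WAL")
    conn.execute("PRAGMA synchronous=NORMAL")
    conn.execute("CREATE TABLE IF NOT EXISTS events (id INTEGER PRIMARY KEY, payload TEXT)")
    done = False
    while not done:
        batch = [q.get()]                     # block until at least one item arrives
        while len(batch) < batch_size:
            try:
                batch.append(q.get_nowait())  # greedily drain whatever is already queued
            except queue.Empty:
                break
        if None in batch:                     # None is the shutdown sentinel
            done = True
            batch = batch[:batch.index(None)]
        if batch:
            conn.executemany("INSERT INTO events (payload) VALUES (?)", batch)
            conn.commit()                     # one fsync covers the whole batch
    conn.close()

q = queue.Queue()
t = threading.Thread(target=batch_writer, args=("batched.db", q))
t.start()
for i in range(10_000):
    q.put((f"event-{i}",))                    # producers just enqueue parameter tuples
q.put(None)
t.join()
```

The throughput win comes entirely from the commit cadence: per-row commits pay one fsync each, while the batch pays one fsync per `batch_size` rows.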


Replies

0123456789ABCDE today at 8:15 AM

all* of that + sharding -> https://sqlite.org/lang_attach.html

ex: main.db + fts.db. reading and writing to main.db stays available; updating the fts index doesn't block the main database — it only needs to read, and the reads can be chunked and delayed. fts.db keeps the index plus a cursor table (an id or a last-changed timestamp)

could also use a shard to handle tables for metrics, or simply move old data out of main.db
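A minimal sketch of that attach-based fts shard, assuming a `docs` table in main.db, an FTS5 index in fts.db, and a one-row `cursor` table; all names are illustrative, not from the comment.

```python
import sqlite3

def setup():
    main = sqlite3.connect("main.db")
    main.execute("CREATE TABLE IF NOT EXISTS docs (id INTEGER PRIMARY KEY, body TEXT)")
    main.commit()
    main.close()
    idx = sqlite3.connect("fts.db")
    idx.execute("CREATE VIRTUAL TABLE IF NOT EXISTS docs_fts USING fts5(body)")
    idx.execute("CREATE TABLE IF NOT EXISTS cursor (last_id INTEGER NOT NULL)")
    # seed the cursor exactly once
    idx.execute("INSERT INTO cursor SELECT 0 WHERE NOT EXISTS (SELECT 1 FROM cursor)")
    idx.commit()
    idx.close()

def index_pending(chunk=1000):
    """Copy rows created since the cursor into the fts shard; main.db is only read."""
    idx = sqlite3.connect("fts.db")
    idx.execute("ATTACH DATABASE 'main.db' AS m")   # https://sqlite.org/lang_attach.html
    last = idx.execute("SELECT last_id FROM cursor").fetchone()[0]
    rows = idx.execute(
        "SELECT id, body FROM m.docs WHERE id > ? ORDER BY id LIMIT ?", (last, chunk)
    ).fetchall()
    if rows:
        idx.executemany("INSERT INTO docs_fts (rowid, body) VALUES (?, ?)", rows)
        idx.execute("UPDATE cursor SET last_id = ?", (rows[-1][0],))
    idx.commit()
    idx.execute("DETACH DATABASE m")
    idx.close()
    return len(rows)
```

The indexing pass can run on any schedule (cron, idle loop) and in chunks, so the writer on main.db never waits on the fts update.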

* some examples:

  import sqlite3

  conn = sqlite3.connect("data.db")
  conn.execute("PRAGMA journal_mode=WAL")        # concurrent reads (see above)
  conn.execute("PRAGMA synchronous=NORMAL")      # fsync at checkpoint, not every commit
  conn.execute("PRAGMA cache_size=-62500")       # ~61 MB page cache (negative = KB)
  conn.execute("PRAGMA temp_store=MEMORY")       # temp tables and indexes in RAM
  conn.execute("PRAGMA busy_timeout=5000")       # wait 5s on lock instead of failing
edit: orms will obliterate your performance — use raw parameterized queries instead. just make sure to run static analysis on your code base to catch sqli bugs.
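"Raw" doesn't have to mean string formatting: `?` placeholders bind values out-of-band, so injection payloads stay inert. A small sketch with a hypothetical `users` table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

user_input = "alice' OR '1'='1"   # classic injection payload

# DON'T: string interpolation lets the payload rewrite the query
unsafe = conn.execute(f"SELECT * FROM users WHERE name = '{user_input}'").fetchall()

# DO: parameter binding treats the payload as a literal string
safe = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
# unsafe matches every row; safe matches none
```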

my replies are being ratelimited, so let me add this

the heavy-duty server process that other databases run is doing the load-bearing work that folks complain sqlite can't do

the real DBMSs are mostly doing the same work sqlite does; you just don't have to think about it once they're set up. behind that chunky server process, the database is still writing your data to a filesystem, handling transaction locks, etc.

by default sqlite gives you a durable database file: when you see the transaction complete, the changes have been committed to storage and cannot be lost even if the machine crashes immediately after.

you can decide to waive some, or all, of those guarantees in exchange for performance, and it doesn't have to be all or nothing.
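The durability dial lives mostly in one pragma. A minimal sketch (the file name is illustrative); the comments summarize the documented trade-off at each level:

```python
import sqlite3

conn = sqlite3.connect("tunable.db")
conn.execute("PRAGMA journal_mode=WAL")

# synchronous=FULL (the default): fsync on every commit; a committed
#   transaction survives power loss — the guarantee described above.
# synchronous=NORMAL: fsync only at WAL checkpoints; a crash may lose the
#   most recent commits, but the database file stays consistent.
# synchronous=OFF: no fsyncs at all; fastest, but an OS or power failure
#   mid-write can corrupt the database.
conn.execute("PRAGMA synchronous=NORMAL")
```

NORMAL is the usual middle ground with WAL: you give up "committed means on disk" but keep "the file is never corrupt".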

Ringz today at 8:15 AM

I usually try to explain it like this: "single writer" is rarely a real problem, because the writer is not slow. It writes exclusively, but very quickly.

The "batch writer pattern" is a good way to get rid of expensive per-row commits.