Keeping the DB local cuts the worst latency spikes, but then you trade away the whole pitch of ephemeral compute and just-in-time scaling. You end up glued to old-school infra patterns in disguise, plus node affinity and warm-cache babysitting that look a lot like the stuff SQLite was supposed to let you dodge. Add a few readers on volatile nodes and it gets ugly fast.
Couldn't have said it better myself.
The tradeoff is real, but it's workload-specific. Turbolite is for read-mostly analytics on truly ephemeral compute -- Lambda or spot instances hitting a dataset that refreshes once a day. There's no persistent node to pin, so local-with-replication doesn't apply. You're comparing apples to a different fruit entirely.
What makes 250ms achievable isn't just range requests. It's the B-tree-aware page grouping. Standard S3 VFS pays 15-20ms per random page fetch. Group the pages a query actually touches into contiguous segments and you turn dozens of round trips into a few sequential reads. For cold JOIN queries on small-to-medium tables, that's the difference between 4 seconds and usable.
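The grouping idea is easy to sketch. This isn't Turbolite's actual code (I haven't read it); it's a minimal illustration of the coalescing step, assuming a fixed 4 KiB page size and S3-style byte-range GETs. The `gap` tolerance is my own assumption: fetching a couple of unneeded pages is cheaper than another 15-20ms round trip.

```python
PAGE_SIZE = 4096  # assumed SQLite page size; real DBs vary

def coalesce(pages, gap=2):
    """Merge page numbers into [start, end] runs, tolerating small
    gaps -- a few wasted pages beat an extra round trip."""
    runs = []
    for p in sorted(set(pages)):
        if runs and p - runs[-1][1] <= gap:
            runs[-1][1] = p  # extend the current run
        else:
            runs.append([p, p])  # start a new run
    return runs

def range_headers(pages):
    # One "Range: bytes=..." header per run (SQLite pages are 1-indexed,
    # so page N starts at byte (N-1) * PAGE_SIZE).
    return [
        f"bytes={(s - 1) * PAGE_SIZE}-{e * PAGE_SIZE - 1}"
        for s, e in coalesce(pages)
    ]

# Six scattered page fetches collapse into three sequential reads:
print(range_headers([3, 4, 5, 9, 10, 40]))
```

The interesting part upstream of this is knowing *which* pages a query will touch before issuing the fetches, which is where the B-tree awareness comes in; the coalescing itself is the trivial half.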
For OLTP or anything with meaningful write throughput, local WAL replication (LiteFS, haqlite, Litestream) is clearly right. These are two different problems that happen to both use SQLite.