
hylaride, yesterday at 12:22 PM

It obviously depends on how you use your data, but it really is surprising how far one can go with large tables when you implement sharding, caching, and read replicas.
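One common piece of that setup is read/write splitting: writes go to the primary, plain reads fan out across replicas. A minimal sketch of the routing idea, with hypothetical connection names standing in for a real driver:

```python
import itertools

# Minimal sketch of read/write splitting across a primary and read
# replicas. The connection handles and round-robin policy here are
# illustrative stand-ins, not a real Postgres driver API.
class Router:
    def __init__(self, primary, replicas):
        self.primary = primary
        # Cycle through replicas so reads are spread evenly.
        self._replicas = itertools.cycle(replicas)

    def connection_for(self, sql):
        # Plain SELECTs can be served by a replica; writes (and
        # anything ambiguous) go to the primary.
        verb = sql.lstrip().split(None, 1)[0].upper()
        return next(self._replicas) if verb == "SELECT" else self.primary

router = Router(primary="pg-primary", replicas=["pg-replica-1", "pg-replica-2"])
print(router.connection_for("SELECT * FROM users"))  # a replica
print(router.connection_for("UPDATE users SET name = 'x'"))  # the primary
```

Real setups also have to account for replication lag (a read right after a write may need the primary), which is why many ORMs and proxies make this routing pluggable.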

For update-heavy tables, Postgres used to fall over from data fragmentation (table and index bloat from dead tuples), but that's largely been a non-issue since SSDs became standard.

It's also easier than ever to stream data to separate "big data" DBs for those separate use cases.
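The usual pattern is change data capture: tail the primary's change stream and upsert each event into the analytics store. A toy sketch of the idea, with an in-memory change log and sink standing in for the real systems:

```python
# Illustrative sketch of streaming row changes from an OLTP database
# to a separate analytics store, in the spirit of change data capture.
# The change log and the analytics "store" below are stand-ins only.
changes = [
    {"op": "insert", "table": "orders", "row": {"id": 1, "total": 30}},
    {"op": "update", "table": "orders", "row": {"id": 1, "total": 45}},
]

analytics_store = {}  # stand-in for a columnar "big data" DB


def apply_change(event):
    # Upsert semantics: the analytics copy keeps the latest row version.
    key = (event["table"], event["row"]["id"])
    analytics_store[key] = event["row"]


for event in changes:
    apply_change(event)

print(analytics_store[("orders", 1)]["total"])  # 45
```

In practice the change stream would come from something like Postgres logical decoding feeding a message bus, with the same upsert logic on the consumer side.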


Replies

hliyan, yesterday at 2:23 PM

Thanks, I knew I forgot something: read replicas