Hacker News

mmiao 05/15/2025 · 2 replies

A 100-million-row table is fairly small, and you just don't need a distributed database. But you will need one if you hit 10 billion rows.


Replies

jacobsenscott 05/15/2025

You can partition that over 20, 30, or more tables on one PG instance and get good performance, assuming a good partitioning key exists. If you need to query all 10B rows, you'll have a bad day though.
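A minimal sketch of what this looks like with Postgres declarative partitioning, assuming a hypothetical `events` table hash-partitioned on `user_id` (table and column names are illustrative, not from the thread):

```sql
-- Hash-partition one logical table over 32 physical child tables
-- on a single Postgres instance.
CREATE TABLE events (
    id         bigint      NOT NULL,
    user_id    bigint      NOT NULL,
    created_at timestamptz NOT NULL
) PARTITION BY HASH (user_id);

-- One child table per hash bucket (repeat for remainders 1..31).
CREATE TABLE events_p0 PARTITION OF events
    FOR VALUES WITH (MODULUS 32, REMAINDER 0);
```

Queries that filter on `user_id` hash to a single bucket, so the planner only scans one child table; a query without the key has to scan all 32.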

mplanchard 05/15/2025

Depends on what you’re doing with them. We’ve currently got a Postgres DB with >100B rows in some tables. Partitioning has been totally adequate so far, but we’re also always able to include the partition keys in the query filters, so it’s easy for the query planner to do the right thing.
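The "partition keys in the filters" point is what enables partition pruning. A hedged sketch with a hypothetical range-partitioned `metrics` table (names and dates are illustrative):

```sql
-- Range-partition by month on the recorded_at column.
CREATE TABLE metrics (
    id          bigint      NOT NULL,
    recorded_at timestamptz NOT NULL
) PARTITION BY RANGE (recorded_at);

CREATE TABLE metrics_2025_05 PARTITION OF metrics
    FOR VALUES FROM ('2025-05-01') TO ('2025-06-01');

-- Because the WHERE clause constrains the partition key, the planner
-- can prune every partition except metrics_2025_05; EXPLAIN will show
-- only that child table being scanned.
EXPLAIN SELECT count(*) FROM metrics
WHERE recorded_at >= '2025-05-01' AND recorded_at < '2025-06-01';
```

Drop the `recorded_at` filter and the same query has to scan every partition, which is the "query all 10B rows" bad day from the reply above.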