It’s incredible how much Postgres can handle.
At $WORK, we write ~100M rows per day and keep years of history, all in a single database. Sure, the box is big, but I have beautiful transactional workloads and no distributed systems to worry about!
Two days ago, I'd have said the same. Yesterday, the big box went down, and because it had been so stable, the failover path was a joint left unoiled: the spare chickened out at the wrong moment and apparently even managed to mess up the database timeline. Today was the post-mortem, and it was rough.
I'm just saying: simple is nice and fast when it works, right up until it doesn't. I'm not saying to make everything complex, just to remember that life is a survivor's game.
At $WORK, we're in the range of 2 billion rows per day on one of our apps. We do have beefy hardware and ultra-fast SSD storage, though.