It obviously depends on your access patterns, but it's surprising how far you can go with large tables once you add sharding, caching, and read replicas.
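For the read-replica part, the routing can live right in the application layer. A minimal sketch, assuming psycopg2 and made-up hostnames/table names (real setups usually push this into the connection pool or ORM):

    import psycopg2

    # Hypothetical DSNs: writes must hit the primary, reads can fan out
    # to any replica (mind replication lag if you need read-your-writes).
    PRIMARY_DSN = "host=db-primary dbname=app user=app"
    REPLICA_DSN = "host=db-replica dbname=app user=app"

    def get_conn(readonly: bool):
        return psycopg2.connect(REPLICA_DSN if readonly else PRIMARY_DSN)

    with get_conn(readonly=True) as conn, conn.cursor() as cur:
        cur.execute("SELECT count(*) FROM events")  # cheap to serve off-primary
        print(cur.fetchone()[0])

Caching slots in front of that the same way: check the cache first, fall through to a replica on a miss.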
For update-heavy tables, Postgres used to fall over from bloat: under MVCC every UPDATE leaves a dead tuple behind until VACUUM reclaims it, and the resulting fragmentation meant lots of random I/O. That's been mostly moot since SSDs became standard.
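The bloat is easy to quantify if you're curious how much a given table suffers; the stats views track dead tuples, and lowering fillfactor is the usual knob for update-heavy tables. A sketch with a made-up table name:

    import psycopg2

    with psycopg2.connect("host=db-primary dbname=app user=app") as conn:
        with conn.cursor() as cur:
            # Dead tuples are the update debris VACUUM has to reclaim.
            cur.execute("""
                SELECT n_live_tup, n_dead_tup
                  FROM pg_stat_user_tables
                 WHERE relname = 'events'
            """)
            live, dead = cur.fetchone()
            print(f"live={live}, dead={dead}")
            # Leaving free space in each page makes HOT updates likelier,
            # which skips the index writes that amplify the churn.
            cur.execute("ALTER TABLE events SET (fillfactor = 70)")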
It's also easier than ever to stream changes out to a dedicated "big data" store for the analytics use cases.
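For that streaming part, Postgres's built-in logical replication is usually the starting point: create a publication and a slot, then let a CDC consumer (Debezium, a homegrown pgoutput reader, etc.) feed the analytics store. A sketch with made-up names, assuming wal_level = logical on the primary:

    import psycopg2

    with psycopg2.connect("host=db-primary dbname=app user=app") as conn:
        conn.autocommit = True  # run each admin statement in its own tx
        with conn.cursor() as cur:
            # Publish the hot table's change stream...
            cur.execute("CREATE PUBLICATION analytics_pub FOR TABLE events")
            # ...and hold a slot so the consumer never misses a change.
            cur.execute(
                "SELECT pg_create_logical_replication_slot"
                "('analytics_slot', 'pgoutput')"
            )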
Thanks, I knew I was forgetting something: read replicas.