Call me old-fashioned, but when records start reaching the 100 million range, it's usually an indication that your dataset is either too wide (consider sharding) or too deep (consider time-based archival) to fit into a monolithic schema. For context, I dealt with multiple systems that generated this volume of data between 2003 and 2013 (mostly capital markets, but also some govt/compliance work), with databases and hardware from that era, and we rarely had an issue that couldn't be solved by query optimization, caching, sharding, or archival, usually in that order.
Secondly, we did most of these things using SQL, Bash scripts, cron jobs, and some I/O logic built directly into the application code. They were robust enough to handle some extremely mission-critical systems (a failure could bring down a US primary market, and if it was bad enough, you'd hear about it on the news).
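For anyone doing the time-based archival route on Postgres today, declarative partitioning (which arrived well after the era described above) makes it nearly mechanical. A rough sketch, with a hypothetical trade-capture table:

    -- Hypothetical trade-capture table, range-partitioned by month so old
    -- data can be archived by detaching a partition instead of deleting rows.
    CREATE TABLE trades (
        trade_id  bigint        NOT NULL,
        traded_at timestamptz   NOT NULL,
        symbol    text          NOT NULL,
        price     numeric(18,6) NOT NULL
    ) PARTITION BY RANGE (traded_at);

    CREATE TABLE trades_2013_01 PARTITION OF trades
        FOR VALUES FROM ('2013-01-01') TO ('2013-02-01');

    -- Archival: detach the old partition (a cheap catalog change), dump it
    -- to cold storage, then drop it.
    ALTER TABLE trades DETACH PARTITION trades_2013_01;

Detaching is a catalog-level change, so archiving a month of data stops being a multi-hour DELETE followed by a painful vacuum.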
Depends on the read/write workload and row size, but yeah, past 100-200M rows PostgreSQL vacuums can take a while. And so can index rebuilds (which you often have to do while the table is still serving traffic).
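Both of those have gotten more manageable, for what it's worth. A minimal sketch, assuming PostgreSQL 12+ and hypothetical table/index names:

    -- Per-table autovacuum tuning so vacuums run in smaller, more frequent
    -- passes instead of one huge pass over a 100M+ row table.
    ALTER TABLE trades SET (
        autovacuum_vacuum_scale_factor  = 0.01,   -- vacuum after ~1% dead rows
        autovacuum_analyze_scale_factor = 0.005
    );

    -- Rebuild a bloated index without an exclusive lock on the live table
    -- (PostgreSQL 12+); older versions need CREATE INDEX CONCURRENTLY + swap.
    REINDEX INDEX CONCURRENTLY trades_traded_at_idx;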
It all depends, though; sometimes even 1B rows is passé.
But 100M is a good point to start thinking about what comes next.
It obviously depends on how you use your data, but it really is surprising how far one can go with large tables when you implement sharding, caching, and read replicas.
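If anyone wants a concrete picture of the sharding part in plain Postgres, here's a rough sketch using postgres_fdw with hash partitions spread across servers (all names hypothetical; real setups often reach for Citus or app-level routing instead):

    CREATE EXTENSION IF NOT EXISTS postgres_fdw;

    -- One foreign server (and user mapping) per remote shard.
    CREATE SERVER shard1 FOREIGN DATA WRAPPER postgres_fdw
        OPTIONS (host 'shard1.internal', dbname 'app');
    CREATE USER MAPPING FOR CURRENT_USER SERVER shard1
        OPTIONS (user 'app', password 'secret');

    -- Hash-partitioned table: half the keys stay local, half live on shard1.
    CREATE TABLE users (
        user_id bigint NOT NULL,
        email   text
    ) PARTITION BY HASH (user_id);

    CREATE TABLE users_h0 PARTITION OF users
        FOR VALUES WITH (MODULUS 2, REMAINDER 0);
    CREATE FOREIGN TABLE users_h1 PARTITION OF users
        FOR VALUES WITH (MODULUS 2, REMAINDER 1)
        SERVER shard1 OPTIONS (table_name 'users_h1');

On recent versions inserts route to the right shard automatically; cross-shard queries still drag rows across the network, so this only works well when the shard key shows up in most queries.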
For tables with a lot of updates, Postgres used to fall over from bloat and fragmentation (every update leaves a dead row version behind), but that's been much less of a problem since SSDs became standard.
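The other lever for update-heavy tables is leaving headroom in each page so updates stay HOT and autovacuum keeps up. A small sketch with a hypothetical table:

    -- Leave free space in each heap page so updates can stay on the same
    -- page (HOT updates, no index churn), and reclaim dead tuples on this
    -- table more aggressively than the global default.
    ALTER TABLE sessions SET (
        fillfactor = 80,
        autovacuum_vacuum_scale_factor = 0.02
    );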
It's also easier than ever to stream data out to separate "big data" DBs for those use cases.
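Logical replication is the low-friction way to do that now. A minimal sketch, assuming wal_level = logical and hypothetical names; a CDC tool (Debezium, etc.) or a warehouse loader consumes the change stream:

    -- Publish changes on the tables the analytics side cares about.
    CREATE PUBLICATION analytics_pub FOR TABLE trades, sessions;

    -- A downstream consumer reads the change stream from a logical
    -- replication slot (pg_recvlogical, a Debezium connector, or a
    -- SUBSCRIPTION on another Postgres instance).
    SELECT pg_create_logical_replication_slot('analytics_slot', 'pgoutput');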