A 100-million-row table is fairly small; you just don't need a distributed database at that scale. But you will need one if you hit 10 billion rows.
Depends on what you’re doing with them. We’ve currently got a postgres DB with >100b rows in some tables. Partitioning has been totally adequate so far, but we’re also always able to query with the partition keys as part of the filters, so it is easy for the query planner to do the right thing.
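For anyone curious, here's a minimal sketch of that pattern using Postgres declarative range partitioning (the events table, created_at column, and date ranges are made-up examples, not the actual schema):

    -- hypothetical table, range-partitioned by timestamp
    CREATE TABLE events (
        event_id   bigint      NOT NULL,
        created_at timestamptz NOT NULL,
        payload    jsonb
    ) PARTITION BY RANGE (created_at);

    -- one monthly partition; repeat for each month
    CREATE TABLE events_2024_01 PARTITION OF events
        FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');

    -- the partition key is in the WHERE clause, so the
    -- planner prunes down to the single matching partition
    SELECT count(*)
    FROM events
    WHERE created_at >= '2024-01-15'
      AND created_at <  '2024-01-16';

The key point is the last query: because the filter includes the partition key, the planner never touches the other partitions.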
You can partition that across 20, 30, or more child tables on one PG instance and get good performance, assuming a good partitioning key exists (see the sketch below). If you need to query all 10B rows, you'll have a bad day though.
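A rough sketch of that setup with hash partitioning, which Postgres supports natively since version 11 (table and column names are hypothetical):

    -- hypothetical table spread over 32 hash partitions
    CREATE TABLE big_table (
        id   bigint NOT NULL,
        data text
    ) PARTITION BY HASH (id);

    -- one of 32 partitions; repeat for REMAINDER 1..31
    CREATE TABLE big_table_p0 PARTITION OF big_table
        FOR VALUES WITH (MODULUS 32, REMAINDER 0);

    -- fast: the key routes the lookup to one partition
    SELECT * FROM big_table WHERE id = 12345;

    -- the bad day: no partition key in the filter,
    -- so all 32 partitions get scanned
    SELECT count(*) FROM big_table WHERE data LIKE '%foo%';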