With some extra admin work you can greatly increase your insert throughput, as long as the table's load consists mostly of inserts:

1. Partition the table by range of a monotonic ID or timestamp. Note that the primary key will have to contain that column; a BIGINT id column works fine.

2. Drop all other indexes from the partitioned table, then add them to every partition except the latest one. That way the latest partition can sustain a heavy write load while the older ones serve reads well.

3. Set up an admin routine (perhaps with pg_cron) that creates a new partition whenever the newest one is getting close to its range limit. Once the load moves to the new partition, add indexes concurrently to the old one.

4. You'll notice the newest partition will be optimized for writes but not reads. You can offset some of that by replacing BTREE secondary indexes with BRIN indexes [1], particularly the bloom operator classes (not to be confused with Postgres's regular Bloom indexes [2]). BRIN is a family of indexes optimized more for writes than for reads. If the partition isn't too large, reading from it shouldn't be too bad.

5. Later you can merge partitions to avoid accumulating too many of them. Postgres has commands for that, but I think they lock the whole table, so a safer bet is to copy small partitions into a new larger one and swap them manually.
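To make steps 3 and 4 concrete, here's a minimal sketch of what the admin routine could generate. The table name (`events`), column names (`id`, `user_id`), and the rows-per-partition figure are all made up for illustration; the `int8_bloom_ops` operator class is the BRIN bloom opclass for bigint columns (available since Postgres 14).

```python
# Sketch of the admin routine: emit DDL for the next range partition,
# plus the deferred index build on the partition it supersedes.
# All object names here are hypothetical.

def next_partition_ddl(parent: str, part_no: int, rows_per_part: int) -> str:
    """CREATE TABLE statement for partition number `part_no` of `parent`,
    covering ids [part_no * rows_per_part, (part_no + 1) * rows_per_part)."""
    lo = part_no * rows_per_part
    hi = lo + rows_per_part
    return (
        f"CREATE TABLE {parent}_p{part_no} PARTITION OF {parent} "
        f"FOR VALUES FROM ({lo}) TO ({hi});"
    )

def backfill_index_ddl(parent: str, part_no: int, column: str) -> str:
    """Index the now-cold previous partition without blocking writes.
    BRIN with the bloom operator class keeps writes cheap at some read cost."""
    return (
        f"CREATE INDEX CONCURRENTLY {parent}_p{part_no}_{column}_brin "
        f"ON {parent}_p{part_no} USING brin ({column} int8_bloom_ops);"
    )

# A pg_cron job could run the output of something like this each night:
print(next_partition_ddl("events", 3, 10_000_000))
print(backfill_index_ddl("events", 2, "user_id"))
```

Note that `CREATE INDEX CONCURRENTLY` is run against the individual partition, not the partitioned parent (the parent only accepts plain `CREATE INDEX`), which is exactly why indexing the old partition after the write load has moved on works without blocking inserts.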
[1] https://www.postgresql.org/docs/current/brin.html [2] https://www.postgresql.org/docs/current/bloom.html
These are good suggestions, but I'm apprehensive that they might come back and say they have 64 GB (or less) of RAM, or that they're running PostgreSQL on AWS RDS, or something.
I asked them for specifics.