Isn't this kind of the reason why teams will tend to put database proxies in front of their postgres instances, to handle massive sudden influxes of potentially short lived connections?
This sounds exactly like the problem tools like pgbouncer were designed to solve. If you're on AWS, RDS Proxy is the managed equivalent worth a look.
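For a sense of what that looks like in practice, here's a minimal pgbouncer.ini sketch (the database name, paths, and pool sizes are illustrative, not from the article):

```ini
[databases]
; clients connect to pgbouncer on 6432; it multiplexes onto real backends
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
; transaction pooling: a server connection is only held for the
; duration of a transaction, so thousands of short-lived clients
; share a small pool of real Postgres backends
pool_mode = transaction
max_client_conn = 10000
default_pool_size = 20
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
```

The key knob is `pool_mode = transaction`, which is what lets `max_client_conn` be orders of magnitude larger than `default_pool_size`.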
Also check out ProxySQL [1][2], an extremely powerful and battle-tested proxy. Originally it was MySQL/MariaDB-only, where it is very widely used at scale, even though MySQL already has excellent built-in scalable threaded connection management. ProxySQL added Postgres support in 2024, and that has since become a major focus.
The article is very well written but falls somewhat flat at the end.
The conclusion lists pgbouncer as one of the solutions, but doesn't clearly explain how it addresses the problem.
> Many pieces of wisdom in the engineering zeitgeist are well preached but poorly understood. Postgres connection pooling falls neatly into this category. In this expedition we found one of the underlying reasons that connection pooling is so widely deployed on postgres systems running at scale. [...] an artificial constraint that has warped the shape of the developer ecosystem (RDS Proxy, pgbouncer, pgcat, etc) around it.
The artificial constraint is the single core nature of postmaster.
Other points at the end of the article that can be improved:
> we can mechnically [sic] reason about a solution.
Mechanically as in letting an AI find a solution, or as in reasoning like a mechanic, or? Furthermore:
> * Implementing jitter in our fleet of EC2 instances reduced the peak connection rate
How? Did they wait a random amount of milliseconds before sending queries to the db?
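My guess (an assumption on my part, the article doesn't spell it out) is something like this: each instance sleeps a random delay before (re)connecting, so a fleet that restarts at once doesn't hammer postmaster with a synchronized wave of connection attempts.

```python
import random
import time

def connect_with_jitter(connect, max_jitter_ms=500):
    """Sleep a uniformly random delay before opening a connection.

    `connect` stands in for whatever the real connect call is
    (e.g. psycopg2.connect); `max_jitter_ms` is an illustrative knob.
    Spreading N simultaneous reconnects over max_jitter_ms turns a
    spike into a roughly flat connection rate.
    """
    time.sleep(random.uniform(0, max_jitter_ms) / 1000.0)
    return connect()
```

That would reduce the *peak* rate without reducing the total number of connections, which matches the wording in the article.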
> * Eliminating bursts of parallel queries from our API servers
How?
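Also guessing here, but the usual way to "eliminate bursts" from an API server is to cap in-flight queries with a semaphore, so a burst of incoming requests queues instead of fanning out to the database all at once (the class name and limit below are made up for illustration):

```python
import threading

class QueryGate:
    """Cap the number of queries an API server runs in parallel.

    Requests beyond `max_parallel` block until a slot frees up,
    turning a burst of parallel queries into a bounded stream.
    """

    def __init__(self, max_parallel=8):
        self._sem = threading.BoundedSemaphore(max_parallel)

    def run(self, query_fn):
        # Acquire a slot, run the query, release on exit.
        with self._sem:
            return query_fn()
```

If that's what they did, it complements the jitter change: jitter smooths the connection rate, the gate smooths the query rate.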