Sounds like a prime example of a service that naturally peaks at round hours.
We have a habit of never scheduling long-running processes at round hours, usually because those times tend to be busier (quick crontab example below).
https://hakibenita.com/sql-tricks-application-dba#dont-sched...
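For example, a minimal crontab sketch (the script path and the minute offset are made up, just to illustrate the idea): pin the job to an arbitrary non-round minute so it doesn't compete with everything else firing at the top of the hour.

    # run the nightly cleanup at 03:17 instead of 03:00
    17 3 * * * /usr/local/bin/nightly-cleanup.sh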
Note that they were running Postgres on a 32-CPU box with 256 GB of RAM.
I'm actually surprised that it handled that many connections. The data implies that they have 4000 new connections/sec...but is it 4000 connections handled/sec?
I'm a bit confused here, do they have a single database they're writing to? Wouldn't it be easier and more reliable to shard the data per customer?
Isn't this kind of the reason teams tend to put database proxies in front of their Postgres instances: to handle massive, sudden influxes of potentially short-lived connections?
This sounds exactly like the problem tools like pgbouncer were designed to solve. If you're on AWS, you could look at RDS Proxy.
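A minimal sketch of what that looks like (database name, auth file path, and pool sizes here are placeholders, not from the article): pgbouncer listens on its own port and multiplexes thousands of short-lived client connections onto a small pool of real Postgres backends. With pool_mode = transaction, a server connection is only held for the duration of a transaction.

    ; pgbouncer.ini (illustrative values)
    [databases]
    ; clients connect to pgbouncer as "appdb"; it forwards to the real server
    appdb = host=127.0.0.1 port=5432 dbname=appdb

    [pgbouncer]
    listen_addr = 0.0.0.0
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    ; release the server connection after each transaction
    pool_mode = transaction
    ; accept a flood of clients, but keep only a few real backends
    max_client_conn = 5000
    default_pool_size = 20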
Very stupid question: similar to how we got a GIL replacement in Python, can't we replace the postmaster with something better?
Maybe this is silly, but cloud resources are so cheap these days. Just spinning up instances, loading this stuff into memory, and processing it there is fast and scalable. Even if you have billions of items to process daily, you can split the work if needed.
You can keep things synced across databases easily and keep it super duper simple.
I think this is the kind of investigation that AI can really accelerate. I imagine it did. I would love to see someone walk through a challenging investigation assisted by AI.
> sudo echo $NUM_PAGES > /proc/sys/vm/nr_hugepages
This won't work :) echo runs as root, but the redirection is still performed by the unprivileged shell. It needs to be run from a privileged shell, or via something like sudo sh -c "echo $NUM_PAGES > /proc/sys/vm/nr_hugepages"
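Another common pattern, if you'd rather avoid the quoting, is to pipe through tee, which does the actual writing as root (same $NUM_PAGES variable as in the article's snippet):

    # tee opens the file as root, so the redirection problem goes away
    echo $NUM_PAGES | sudo tee /proc/sys/vm/nr_hugepages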
The point gets across, though, technicality notwithstanding.