Hacker News

rastignack — yesterday at 9:57 PM

Just monitor it and you’re done. I’ve delivered and maintained hundreds of pg instances and never faced this issue. There is so much literature about it that at some point no one even slightly skilled will face it.


Replies

johnbarron — yesterday at 10:49 PM

>> Just monitor it and you’re done.

This is anecdote colliding with documented database behavior. Transaction ID wraparound is not an issue on Oracle, SQL Server, or IBM DB2.

PostgreSQL explicitly documents xid wraparound as a failure mode that can lead to catastrophic data loss, and states that vacuuming is required to prevent it. As the limit nears, the server stops accepting new commands until a vacuum is performed.
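For reference, the documented safeguard is to watch the age of each database's oldest unfrozen xid. A minimal monitoring sketch on stock Postgres (the 2-billion denominator approximates the hard limit; where you alert is up to you):

```sql
-- Age of the oldest unfrozen transaction ID in each database.
-- Postgres refuses new write transactions as this approaches ~2 billion;
-- autovacuum's anti-wraparound pass normally runs long before that
-- (autovacuum_freeze_max_age defaults to 200 million).
SELECT datname,
       age(datfrozenxid) AS xid_age,
       round(100.0 * age(datfrozenxid) / 2000000000, 1) AS pct_toward_wraparound
FROM pg_database
ORDER BY xid_age DESC;
```

Wire that into whatever alerting you already have and page well below the autovacuum_freeze_max_age threshold, so there is time to react before the safety shutdown.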

Small sample of known outages:

- Sentry — Transaction ID Wraparound in Postgres

https://blog.sentry.io/transaction-id-wraparound-in-postgres...

- Mailchimp / Mandrill — What We Learned from the Recent Mandrill Outage

https://mailchimp.com/what-we-learned-from-the-recent-mandri...

- Joyent / Manta — Challenges deploying PostgreSQL (9.2) for high availability

https://www.davepacheco.net/blog/2024/challenges-deploying-p...

- BattleMetrics — March 27, 2022 Postgres Transaction ID Wraparound

https://learn.battlemetrics.com/article/64-march-27-2022-pos...

- Duffel — concurrency control & vacuuming in PostgreSQL

https://duffel.com/blog/understanding-outage-concurrency-vac...

- Figma — Postmortem: Service disruption on January 21–22, 2020

https://www.figma.com/blog/post-mortem-service-disruption-on...

Even AWS updated its guidance as recently as February 2025, and the problem affects Aurora PostgreSQL as well as vanilla PostgreSQL.

"Prevent transaction ID wraparound by using postgres_get_av_diag() for monitoring autovacuum" https://aws.amazon.com/blogs/database/prevent-transaction-id...
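Whatever the platform, the underlying check is the same: confirm that anti-wraparound autovacuum is actually running and not stuck behind a long transaction or a huge table. A sketch using only stock Postgres catalogs (no Aurora-specific functions assumed):

```sql
-- Tables closest to forcing an anti-wraparound vacuum.
SELECT c.oid::regclass AS table_name,
       age(c.relfrozenxid) AS xid_age
FROM pg_class c
WHERE c.relkind IN ('r', 'm', 't')
ORDER BY xid_age DESC
LIMIT 10;

-- Vacuums currently in progress and how long they have been running
-- (pg_stat_progress_vacuum is available since PostgreSQL 9.6).
SELECT p.pid,
       p.relid::regclass AS table_name,
       p.phase,
       a.query_start
FROM pg_stat_progress_vacuum p
JOIN pg_stat_activity a USING (pid);
```

If the first query shows ages climbing while the second shows no vacuum touching those tables, that is exactly the situation the postmortems above describe.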