
diarrhea 08/02/2025

Your version makes sense. I understood the OP's approach as being different.

Two very short transactions (assuming proper indexes) at the start and end of a job are a good solution. One caveat is that the worker can die after t1 but before t2 - hence jobs need a timeout concept and should be idempotent for safe retrying.

This gets you "at least once" processing.
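For illustration, a minimal sketch of that claim/complete pattern in Python with psycopg2, assuming a hypothetical `jobs` table with `id`, `payload`, `status`, and `started_at` columns and a 10-minute timeout (all names and values here are assumptions, not from the thread):

```python
import psycopg2

CLAIM_SQL = """
UPDATE jobs
   SET status = 'running', started_at = now()
 WHERE id = (
         SELECT id
           FROM jobs
          WHERE status = 'pending'
             OR (status = 'running'                      -- worker died after t1:
                 AND started_at < now() - interval '10 minutes')  -- timed out, retry
          ORDER BY id
          FOR UPDATE SKIP LOCKED                          -- don't block other workers
          LIMIT 1
       )
RETURNING id, payload;
"""

def claim_job(conn):
    """t1: claim one pending (or timed-out) job in a short transaction."""
    with conn, conn.cursor() as cur:     # psycopg2: commits on successful exit
        cur.execute(CLAIM_SQL)
        return cur.fetchone()            # None if no job is available

def complete_job(conn, job_id):
    """t2: mark the job done in a second short transaction."""
    with conn, conn.cursor() as cur:
        cur.execute("UPDATE jobs SET status = 'done' WHERE id = %s", (job_id,))
```

Because a crashed worker's job is reclaimed after the timeout, the job body itself must be idempotent.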

> this obviously has the drawback of knowing how long to sleep for; and tasks not getting "instantly" picked up, but eh, tradeoffs.

Right. I've had success with exponential backoff on the sleep. In a busy system, this means sleeps stay at zero or very short.
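As a rough sketch of that backoff loop (the `claim_job`/`process` callables are placeholders, and the base/cap values are assumptions):

```python
import time

def worker_loop(claim_job, process, base=0.05, cap=5.0):
    delay = 0.0
    while True:
        job = claim_job()
        if job is not None:
            process(job)
            delay = 0.0                                 # busy system: no sleep at all
        else:
            time.sleep(delay)
            delay = min(cap, max(base, delay * 2))      # exponential backoff, capped
```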

Another solution is Postgres LISTEN/NOTIFY: workers listen for events and PG wakes them up. On the happy path this gets you instant job pickup. It should be allowed to fail open and understood as a happy-path optimization.

As delivery can fail, this gets you "at most once" processing (which is why this approach by itself is not enough to drive a persistent job queue).
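A minimal psycopg2-style wait loop for that pattern might look like the sketch below. Note how it falls through to a regular poll even when no notification arrives, treating NOTIFY purely as a wake-up hint (the channel name, DSN, poll interval, and `drain_queue` helper are assumptions):

```python
import select
import psycopg2
import psycopg2.extensions

conn = psycopg2.connect("dbname=jobs")              # assumed DSN
conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)

with conn.cursor() as cur:
    cur.execute("LISTEN job_ready;")                # hypothetical channel name

POLL_INTERVAL = 30.0  # safety-net poll; the happy path is an instant wake-up

while True:
    # Block until PG sends a notification or the poll interval elapses.
    if select.select([conn], [], [], POLL_INTERVAL) != ([], [], []):
        conn.poll()
        del conn.notifies[:]                        # drain; payload is only a hint
    drain_queue()  # hypothetical: claim/process jobs from the table until empty
```

Because `drain_queue()` runs on every iteration regardless of whether a notification arrived, a lost NOTIFY only delays pickup by at most one poll interval instead of dropping the job.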

A caveat with LISTEN/NOTIFY is that it doesn't scale due to locking [1].

[1]: https://www.recall.ai/blog/postgres-listen-notify-does-not-s...


Replies

maxbond 08/02/2025

What are your thoughts on using Redis Streams, or a table, instead of LISTEN/NOTIFY (either a table per topic, or a table with a compound primary key that includes a topic - possibly a temporary table)?
