Hacker News

BowBun yesterday at 6:14 PM

Traditional DBs are a poor fit for high-throughput job systems in my experience. The transactions alone around fetching/updating jobs are non-trivial and can dwarf the regular data activity in your system, especially for monoliths, which Python and Ruby apps by and large still are.

Personally I've migrated 3 apps _from_ DB-backed job queues _to_ Redis/other-backed systems with great success.
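
For readers unfamiliar with the pattern being described: a DB-backed queue typically wraps each job claim in its own transaction, which is where the extra load comes from. A minimal sketch in Python with psycopg2, assuming a hypothetical jobs(id, state, payload) table; real libraries add retries, timeouts, and richer state machines on top:

    import psycopg2

    def do_work(payload):
        ...  # placeholder for whatever the job actually does

    def fetch_and_run_one(conn):
        """Claim one available job, run it, and mark it done in a single transaction."""
        with conn:  # psycopg2: commits on success, rolls back on exception
            with conn.cursor() as cur:
                # Lock one available row; SKIP LOCKED lets concurrent workers
                # claim different rows instead of blocking on each other.
                cur.execute("""
                    SELECT id, payload
                    FROM jobs
                    WHERE state = 'available'
                    ORDER BY id
                    LIMIT 1
                    FOR UPDATE SKIP LOCKED
                """)
                row = cur.fetchone()
                if row is None:
                    return False  # queue is empty
                job_id, payload = row
                do_work(payload)  # job runs while the row lock is held
                cur.execute("UPDATE jobs SET state = 'done' WHERE id = %s", (job_id,))
        return True

    conn = psycopg2.connect("dbname=app")  # hypothetical DSN
    fetch_and_run_one(conn)

Every claimed job costs at least one transaction of this shape, which is the overhead being weighed against the regular application workload.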


Replies

brightball yesterday at 6:28 PM

The way that Oban for Elixir and GoodJob for Ruby leverage PostgreSQL allows for very high throughput. It's not something that easily ports to other DBs.
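
Queues in this family generally combine cheap row claiming (e.g. FOR UPDATE SKIP LOCKED or advisory locks) with PostgreSQL LISTEN/NOTIFY so workers wake immediately instead of polling. A rough sketch of the notification side, in Python/psycopg2 purely for illustration (the actual implementations are Elixir and Ruby); the channel name "jobs" is made up:

    import select
    import psycopg2
    import psycopg2.extensions

    conn = psycopg2.connect("dbname=app")  # hypothetical DSN
    conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)

    with conn.cursor() as cur:
        cur.execute("LISTEN jobs;")  # enqueuers would run: NOTIFY jobs, '<job id>'

    while True:
        # Sleep until the server pushes a notification (or 5s elapse as a fallback),
        # then drain pending notifications and go attempt a locked job fetch.
        if select.select([conn], [], [], 5) != ([], [], []):
            conn.poll()
            while conn.notifies:
                notify = conn.notifies.pop(0)
                print("woken up for job:", notify.payload)

Because the database pushes the wakeup, latency stays low without hammering the jobs table with polling queries.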

sorentwo yesterday at 7:36 PM

Transactions around fetching/updating aren't trivial, that's true. However, the work that you're doing _is_ regular activity because it's part of your application logic. That's data about the state of your overall system and it is extremely helpful for it to stay with the app (not to mention how nice it makes testing).

Regarding overall throughput, we've written about running one million jobs a minute [1] on a single queue, and there are numerous companies running hundreds of millions of jobs a day with Oban/Postgres.

[1]: https://oban.pro/articles/one-million-jobs-a-minute-with-oba...

asa400 yesterday at 10:06 PM

How high of a throughput were you working with? I've used Oban at a few places that had pretty decent throughput and it was OK. Not disagreeing with your approach at all, just trying to get an idea of what kinds of workloads you were running to compare.
