Hacker News

alexjplant yesterday at 3:23 PM

Ah yes, and every consumer should just do this in a while (true) loop as producers write to it. Very efficient and simple with no possibility of lock contention or hot spots. Genius, really.


Replies

CharlieDigital yesterday at 4:00 PM

I've implemented a distributed worker system on top of this paradigm.

I used ZMQ to connect the nodes: worker nodes connected to an indexer/coordinator node that effectively ran a `SELECT ... FROM ... ORDER BY ... ASC`.
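As a rough sketch of what that ingest-and-group step might look like (the table, column names, and SQLite backend here are placeholders, not the original schema):

    # Hypothetical ingest step: pull ordered rows and stack them into an
    # in-memory list-of-lists, one child list per discriminator key.
    # Table and column names are placeholders, not the original schema.
    import sqlite3  # stand-in for whatever SQL database actually backed this

    def load_queue(db_path="work.db"):
        conn = sqlite3.connect(db_path)
        rows = conn.execute(
            "SELECT id, discriminator, payload FROM work_items ORDER BY id ASC"
        ).fetchall()
        buckets = {}
        for row_id, disc, payload in rows:
            buckets.setdefault(disc, []).append({"id": row_id, "payload": payload})
        # One entry per discriminator key: a lock flag plus its ordered child list.
        return [{"locked": False, "items": items} for items in buckets.values()]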

It's easier than you might think, and the bits described here came to probably < 1,000 SLOC all told.

    - Coordinator node ingests from a SQL table
    - Each row in the table has a discriminator key, used to order the rows by stacking them into an in-memory list-of-lists (one child list per key)
    - Worker nodes are started with _n_ threads
    - Each thread sends a "ready" message to the coordinator and coordinator replies with a "work" message
    - On each cycle, the coordinator advances the pointer on the outer list, locks the child list it lands on, and marks that list's first item as "pending"
    - When worker thread finishes, it sends a "completed" message to the coordinator and coordinator replies with another "work" message
    - Coordinator unlocks the list the work item originated from and dequeues the finished item.
    - When it reaches the end of the outer list, it cycles back to the beginning and starts over, skipping any child lists marked as locked (i.e., those with a pending work item)
Effectively a distributed event loop with the events queued up via a simple SQL query.
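A minimal sketch of that coordinator loop, assuming pyzmq with a ROUTER socket, JSON messages, and the list-of-lists structure from the ingest sketch above; the port, message shapes, and field names are all assumptions, not details from the original system:

    # Hypothetical coordinator: answers "ready"/"completed" messages with "work",
    # round-robining over the child lists and skipping any that are locked
    # (i.e. already have a pending item out with a worker).
    import json
    import zmq

    # Example in-memory queue; in practice it is built by the SQL ingest sketched earlier.
    queue = [
        {"locked": False, "items": [{"id": 1, "payload": "a"}, {"id": 3, "payload": "c"}]},
        {"locked": False, "items": [{"id": 2, "payload": "b"}]},
    ]
    pointer = -1

    def next_work():
        """Advance the pointer, skipping locked child lists; lock the list we land on
        and hand out its head item (it stays queued until "completed" arrives)."""
        global pointer
        for _ in range(len(queue)):
            pointer = (pointer + 1) % len(queue)
            bucket = queue[pointer]
            if not bucket["locked"] and bucket["items"]:
                bucket["locked"] = True
                return {"bucket": pointer, **bucket["items"][0]}
        return None  # every child list is locked or empty right now

    ctx = zmq.Context.instance()
    router = ctx.socket(zmq.ROUTER)
    router.bind("tcp://*:5555")

    while True:
        ident, raw = router.recv_multipart()   # [worker identity, JSON body]
        msg = json.loads(raw)

        if msg["type"] == "completed":
            # Dequeue the finished item and unlock the child list it came from.
            bucket = queue[msg["bucket"]]
            bucket["items"] = [i for i in bucket["items"] if i["id"] != msg["id"]]
            bucket["locked"] = False

        # Both "ready" and "completed" are answered with the next piece of work.
        work = next_work()
        reply = {"type": "work", **work} if work else {"type": "idle"}
        router.send_multipart([ident, json.dumps(reply).encode()])

The round-robin pointer plus the per-child-list lock is what keeps at most one pending item per discriminator in flight at a time.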

Dead simple design, extremely robust, very high throughput, and very easy to scale workers both horizontally (more nodes) and vertically (more threads). ZMQ made it easy to connect the remote threads to the centralized coordinator. It was effectively "self-balancing" because a worker would only re-queue a thread once that thread finished its work. Very easy to manage, but it did not have hot failover since we kept the materialized, "2D" work queue in memory, though we very rarely had issues with this.
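For completeness, a similarly hypothetical worker node: n threads, each with its own DEALER socket, each asking for more work only after finishing its current item, which is where the self-balancing behavior comes from (the address, thread count, and process() body are placeholders):

    # Hypothetical worker node: each thread announces "ready", then alternates
    # between doing work and reporting "completed", so a thread only re-queues
    # itself once it is actually free.
    import threading
    import time
    import zmq

    COORDINATOR = "tcp://localhost:5555"   # assumed address

    def process(payload):
        time.sleep(0.1)                    # placeholder for the real job

    def worker_thread():
        sock = zmq.Context.instance().socket(zmq.DEALER)
        sock.connect(COORDINATOR)
        sock.send_json({"type": "ready"})  # announce this thread is free
        while True:
            msg = sock.recv_json()         # coordinator's reply: work or idle
            if msg["type"] == "idle":
                time.sleep(1)              # nothing queued; ask again shortly
                sock.send_json({"type": "ready"})
                continue
            process(msg["payload"])
            # Reporting completion doubles as the request for the next item.
            sock.send_json({"type": "completed", "bucket": msg["bucket"], "id": msg["id"]})

    if __name__ == "__main__":
        threads = [threading.Thread(target=worker_thread, daemon=True) for _ in range(4)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()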

antonvs yesterday at 3:29 PM

It's one of my favorite patterns, because it's the highest-impact, lowest-hanging fruit to fix in many systems that have hit serious scaling bottlenecks.