Hacker News

pjc50 · today at 11:17 AM

Several things going on here:

- concurrency is very hard

- .. but object storage "solves" most of that for you, handing you a small set of semantics (atomic puts, conditional writes) that work reliably

- single-file throughput sucks hilariously badly

- .. because 1 GB is ridiculously large for an atomic unit

- (this whole thing resembles a project I did a decade ago for transactional consistency on TFAT on Flash, except that somehow managed faster commit times despite running on a 400 MHz MIPS CPU. Edit: maybe I should try to remember how that worked and write it up for HN)

- therefore, all of the actual work is shifted to the broker. The broker is just periodically committing its state in case it crashes

- it's not clear whether the broker ACKs requests before they're in durable storage. Is it possible to lose requests in flight anyway?

- there's a great design for a message-queue system between multiple nodes that aims for at-least-once delivery while maintaining high throughput, and it has existed for decades: SMTP. Actually, there's a whole bunch of message queue systems?
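To make the second bullet concrete: the primitive that lets object storage "solve" multi-writer concurrency is put-if-absent (in S3, a conditional write with `If-None-Match: *`). A minimal in-memory sketch, not the project's actual code, with the store and key names invented for illustration:

```python
# Illustrative simulation of object-storage put-if-absent semantics:
# an atomic create that fails if the key already exists, which is the
# compare-and-swap that makes multi-writer coordination tractable.

class ObjectStore:
    def __init__(self):
        self._objects = {}

    def put_if_absent(self, key: str, data: bytes) -> bool:
        """Atomic create: succeeds only if `key` does not exist yet.
        Real S3 signals the losing writer with 412 Precondition Failed."""
        if key in self._objects:
            return False
        self._objects[key] = data
        return True

# Two would-be writers race to claim the same log segment; exactly one
# wins, so readers never see two conflicting versions of the object.
store = ObjectStore()
won_a = store.put_if_absent("log/segment-0001", b"writer-A batch")
won_b = store.put_if_absent("log/segment-0001", b"writer-B batch")
print(won_a, won_b)  # one winner, one loser
```

The point is that the hard concurrency reasoning collapses to "did my conditional PUT succeed or not", at the cost of the throughput problems the later bullets describe.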


Replies

jitl · today at 1:48 PM

> The broker runs a single group commit loop on behalf of all clients, so no one contends for the object. Critically, it doesn't acknowledge a write until the group commit has landed in object storage. No client moves on until its data is durably committed.
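The quoted group-commit pattern can be sketched roughly like this: many producers enqueue records, a single committer thread sweeps them into one batch, "uploads" it (a plain list stands in for object storage here), and only then releases the waiting producers. All names are illustrative, not taken from the project:

```python
# Hedged sketch of a group-commit loop: one committer amortizes many
# client writes into a single object PUT, and no client is acknowledged
# before its batch has been "durably" stored.

import queue
import threading

class GroupCommitBroker:
    def __init__(self):
        self._pending = queue.Queue()
        self.storage = []  # stand-in for the object store
        threading.Thread(target=self._commit_loop, daemon=True).start()

    def write(self, record: bytes) -> None:
        """Blocks until the record is part of a committed batch."""
        done = threading.Event()
        self._pending.put((record, done))
        done.wait()  # no client moves on before the group commit lands

    def _commit_loop(self):
        while True:
            batch = [self._pending.get()]        # wait for the first record
            while not self._pending.empty():     # then sweep in the rest
                batch.append(self._pending.get_nowait())
            self.storage.append(b"".join(r for r, _ in batch))  # one "PUT"
            for _, done in batch:                # ack only after commit
                done.set()

broker = GroupCommitBroker()
threads = [threading.Thread(target=broker.write, args=(b"msg%d " % i,))
           for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(broker.storage), "object PUT(s) for 8 writes")  # typically < 8
```

This is why the broker, not object storage, carries the real work: batching turns one slow PUT per write into one slow PUT per group.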

candiddevmike · today at 12:44 PM

AFAIK you can kinda "seek" reads in S3 using a range header, WCGW? =D
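For the record, the "seek" works like this: an S3 GET with a `Range` header returns just a byte window of the object (HTTP 206 Partial Content), so a reader can jump to a known record offset without downloading the whole file. A small sketch; the bucket and key names are made up:

```python
# Building an HTTP byte-range value for a partial object read, the
# mechanism behind "seeking" within a large S3 object.

def byte_range(offset: int, length: int) -> str:
    """Build a byte-range value; S3 accepts this on GetObject.
    Note the end position is inclusive, hence the -1."""
    return f"bytes={offset}-{offset + length - 1}"

# With boto3 this would look like (not executed here):
#   s3.get_object(Bucket="logs", Key="segment-0001",
#                 Range=byte_range(1 << 20, 64 * 1024))
print(byte_range(1048576, 65536))  # "bytes=1048576-1114111"
```

The WCGW part: each ranged GET is still a full round trip to S3, so lots of small seeks get expensive fast.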
