TBH I don't think anyone can utilise S3 for the active segment. I didn't dig into WarpStream too much, but I vaguely recall they only offloaded to S3 once the segment was rolled.
The Developer Voices interview where Kris Jenkins talks to Ryan Worl is one of the best, and goes into a surprising amount of detail: https://www.youtube.com/watch?v=xgzmxe6cj6A
tl;dr: they write to S3 once every ~250ms to save costs. IIRC, their argument is that when Kafka keeps things organised by writing a separate file per topic, it's really the Linux disk cache being clever that turns the tangle of disk blocks into a clean per-file view. They wrote their own version of that, so they can cheaply checkpoint heavily interleaved chunks of data to S3 while their in-memory cache still provides a clean per-topic view. I think they clean up later asynchronously, but my memory fails me.
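To make that concrete, here's a rough sketch of the batching idea as I understood it from the interview. This is my own illustration, not WarpStream's actual code or format; the class, the index layout, and the put_object callback are all made up:

```python
# Rough sketch: producers for many topics append into one shared in-memory
# buffer, the buffer is flushed to object storage as a single interleaved
# object every ~250ms, and a small index records which byte ranges belong to
# which topic so readers still get a clean per-topic view.
import io
import json
import time
import threading
from collections import defaultdict


class InterleavedSegmentWriter:
    def __init__(self, put_object, flush_interval=0.25):
        self.put_object = put_object          # hypothetical wrapper over e.g. s3.put_object
        self.flush_interval = flush_interval  # ~250ms, as mentioned in the interview
        self.pending = defaultdict(list)      # topic -> list[bytes], the in-memory view
        self.lock = threading.Lock()

    def append(self, topic: str, record: bytes):
        with self.lock:
            self.pending[topic].append(record)

    def flush_once(self):
        with self.lock:
            batch, self.pending = self.pending, defaultdict(list)
        if not batch:
            return
        body = io.BytesIO()
        index = {}                            # topic -> [(offset, length), ...]
        for topic, records in batch.items():  # many topics share one object
            ranges = []
            for rec in records:
                off = body.tell()
                body.write(rec)
                ranges.append((off, len(rec)))
            index[topic] = ranges
        key = f"segments/{time.time_ns()}.bin"
        # One PUT per flush, no matter how many topics contributed.
        self.put_object(key, body.getvalue(), metadata=json.dumps(index))

    def run(self):
        while True:
            time.sleep(self.flush_interval)
            self.flush_once()
```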
I don't know how BufStream works.
The thing that really stuck with me from that interview is the ~10x cost reduction you can get if you're willing and able to tolerate higher latency and extra complexity in exchange for using S3. Apparently they implemented that inside Datadog ("Labrador", I think?), and then did it again with WarpStream.
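For a rough sense of where a number like that can come from, here's my own back-of-envelope (not figures from the interview; throughput is assumed, and prices are approximate AWS us-east-1 list prices):

```python
# Back-of-envelope for why batching to S3 every ~250ms can beat cross-AZ
# replication on cost. My own assumed numbers, not from the interview.
throughput_mb_s = 100                      # assumed sustained write rate
cross_az_fee_per_gb = 0.02                 # ~$0.01/GB charged on each side
s3_put_per_1000 = 0.005                    # S3 Standard PUT request price

# Classic Kafka with followers in other AZs: each byte crosses an AZ
# boundary roughly twice (two cross-AZ replicas assumed).
replication_cost_day = throughput_mb_s * 86400 / 1024 * 2 * cross_az_fee_per_gb

# S3-backed design: instance-to-S3 traffic is free; you mostly pay per PUT,
# and batching every 250ms caps the request count.
puts_per_day = 86400 / 0.25
put_cost_day = puts_per_day / 1000 * s3_put_per_1000

print(f"cross-AZ replication: ~${replication_cost_day:,.0f}/day")
print(f"S3 PUT requests:      ~${put_cost_day:,.2f}/day")
```

It ignores S3 storage and GET costs, but it shows why trading a few hundred milliseconds of latency for object storage moves the bill so much.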
I highly recommend the whole episode (and the whole podcast, really).