Neat! Having literally everything backed by object storage is The Dream, so this makes a lot of sense. So to compare this to the options that are available (that aren't Kafka or Redis streams): I can imagine you could take these items that you're writing to a stream, batch them, and write them into some sort of S3-backed data lake. Something like Delta Lake. And then query them using, I don't know, DuckDB or whatever your OLAP SQL thing is. Or you could develop your own S3 schema that's just saving these items to batched objects as they come in. So then part of what S2 is saving you from is having to write your own acknowledgement system/protocol for batching these items, and the corresponding read ("consume") queries? Cool!
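To make that last DIY option concrete, here's a rough sketch of what "your own S3 schema" tends to turn into (totally hypothetical: the bucket, key layout, and flush thresholds are made up, and it assumes plain boto3) — basically you end up hand-rolling the batching, the key ordering, and the "what have I already consumed" bookkeeping yourself:

```python
import json
import time
import boto3

s3 = boto3.client("s3")
BUCKET = "my-stream-bucket"   # hypothetical bucket
PREFIX = "events/"            # hypothetical key prefix

class Batcher:
    """Buffer records and flush them as one S3 object per batch."""
    def __init__(self, max_records=1000, max_age_s=5.0):
        self.buf = []
        self.max_records = max_records
        self.max_age_s = max_age_s
        self.started = time.monotonic()
        self.seq = 0  # monotonically increasing batch number

    def append(self, record: dict):
        self.buf.append(record)
        if (len(self.buf) >= self.max_records
                or time.monotonic() - self.started >= self.max_age_s):
            self.flush()

    def flush(self):
        if not self.buf:
            return
        # Zero-padded sequence number so lexicographic key order == batch order.
        key = f"{PREFIX}{self.seq:012d}.jsonl"
        body = "\n".join(json.dumps(r) for r in self.buf).encode()
        s3.put_object(Bucket=BUCKET, Key=key, Body=body)
        self.seq += 1
        self.buf.clear()
        self.started = time.monotonic()

def consume(after_seq: int):
    """Yield every record from batches written after `after_seq` --
    this is where you end up inventing your own ack/offset protocol."""
    start_key = f"{PREFIX}{after_seq:012d}.jsonl"
    resp = s3.list_objects_v2(Bucket=BUCKET, Prefix=PREFIX, StartAfter=start_key)
    for obj in resp.get("Contents", []):
        data = s3.get_object(Bucket=BUCKET, Key=obj["Key"])["Body"].read()
        for line in data.decode().splitlines():
            yield json.loads(line)
```

And that's before you worry about multiple writers racing on the sequence number, partial flushes on crash, or latency for "real-time" readers — which is the stuff a stream API is supposed to own for you.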
Yes, that is a reasonable way to think about it! And since s2-lite is designed as a single-node system, there is a natural source of truth for what the latest records are when consuming in real time.