This is a fair question. A stream here == a log. In every S2 implementation, a write is durable before it is acknowledged, and it can be consumed in real time or replayed from any position by multiple readers. The stream is at the granularity of discrete records, rather than a byte stream (although you can certainly layer either over the other).
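The log abstraction described above can be sketched in a few lines. This is illustrative only, assuming nothing about S2's actual API (the names `Stream`, `append`, and `read_from` are made up); it just shows durable-append-then-replay semantics with independent readers.

```python
class Stream:
    """An append-only log of discrete records, readable from any position."""

    def __init__(self):
        self._records = []

    def append(self, record: bytes) -> int:
        """Append a record and return its sequence number.
        (A real implementation would persist it before returning.)"""
        self._records.append(record)
        return len(self._records) - 1

    def read_from(self, seq_num: int):
        """Replay records starting at seq_num; multiple readers can
        iterate independently without consuming each other's data."""
        yield from self._records[seq_num:]


s = Stream()
s.append(b"first")
s.append(b"second")
assert list(s.read_from(0)) == [b"first", b"second"]
assert list(s.read_from(1)) == [b"second"]
```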
ED: no k8s required for s2-lite; it is just a single binary. That was an architectural note about our cloud service.
Your documentation needs improvement. It proudly mentions the alphabet soup of technologies you use, but it leaves me completely baffled about what s2 does, what problem s2 is trying to solve, or who the intended audience of s2 is.
So you frame the data into records, save the frame somehow (maybe with fsync if you're doing it locally, or maybe you outsource it to S3 or S3-compatible storage?), then ack and start sending it to clients. Therefore every frame that's acked or sent to clients has already been saved.
Personally I'd add an application-level hash to protect the integrity of the records, but that's just me.
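One way to do what this comment suggests: prepend each record with its own digest and verify on read. The `seal`/`unseal` helpers here are hypothetical, not part of S2.

```python
import hashlib

def seal(record: bytes) -> bytes:
    """Prefix the record with its SHA-256 digest (32 bytes)."""
    return hashlib.sha256(record).digest() + record

def unseal(framed: bytes) -> bytes:
    """Verify the digest; raise if the record was corrupted."""
    digest, record = framed[:32], framed[32:]
    if hashlib.sha256(record).digest() != digest:
        raise ValueError("record integrity check failed")
    return record

assert unseal(seal(b"hello")) == b"hello"
```

This catches bit rot or truncation end to end, regardless of what the transport or storage layer guarantees.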
At first glance I wondered if a hash chain or Merkle tree might be useful, but I think it's overkill. What exactly is the trust model? I get the sense this is a traditional client-server protocol (i.e., not p2p). Does it stream the streams over HTTP/HTTPS, or some custom protocol? Are s2 clients expected to be end-user web browsers, other instances of s2, or something else?