Hacker News

We replaced H.264 streaming with JPEG screenshots (and it worked better)

443 points | by quesobob | yesterday at 6:00 PM | 263 comments

Comments

mikepavone | yesterday at 8:00 PM

> When the network is bad, you get... fewer JPEGs. That’s it. The ones that arrive are perfect.

This would make sense... if they were using UDP, but they are using TCP. All the JPEGs they send will get there eventually (unless the connection drops). JPEG does not fix your buffering and congestion control problems. What presumably happened here is that the way they implemented their JPEG screenshots includes some mechanism that minimizes the number of frames in flight. That is not an inherent property of JPEG, though.

> And the size! A 70% quality JPEG of a 1080p desktop is like 100-150KB. A single H.264 keyframe is 200-500KB. We’re sending LESS data per frame AND getting better reliability.

H.264 has better coding efficiency than JPEG. For a given target size, you should be able to get better quality from an H.264 IDR frame than from a JPEG. There is no fixed size for an IDR frame.

Ultimately, the problem here is a lack of bandwidth estimation (apart from the sort of binary "good network"/"cafe mode" thing they ultimately implemented). To be fair, this is difficult to do and being stuck with TCP makes it a bit more difficult. Still, you can do an initial bandwidth probe and then look for increasing transmission latency as a sign that the network is congested. Back off your bitrate (and if needed reduce frame rate to maintain sufficient quality) until transmission latency starts to decrease again.
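A toy version of that back-off loop might look like the following. This is only a sketch of the idea (probe, watch transmission latency, back off, creep back up); every constant and threshold here is invented for illustration, not taken from the post:

```python
class BitrateController:
    """Toy additive-increase / multiplicative-decrease bitrate controller.

    Tracks a smoothed baseline of per-frame transmission latency. While
    latency stays near the baseline, the bitrate creeps up; once latency
    climbs well above it (congestion building up in buffers), the bitrate
    is cut sharply. All numbers are illustrative.
    """

    def __init__(self, bitrate_kbps=4000, floor=500, ceiling=40000):
        self.bitrate = bitrate_kbps
        self.floor = floor
        self.ceiling = ceiling
        self.baseline = None  # smoothed "uncongested" latency in ms

    def on_frame_sent(self, send_latency_ms):
        """Feed the measured send latency of one frame; returns the
        bitrate (kbps) to use for the next frame."""
        if self.baseline is None:
            self.baseline = send_latency_ms  # initial probe
            return self.bitrate
        if send_latency_ms < self.baseline * 1.5:
            # latency stable: update baseline, additive increase
            self.baseline = 0.9 * self.baseline + 0.1 * send_latency_ms
            self.bitrate = min(self.ceiling, self.bitrate + 100)
        else:
            # latency climbing: multiplicative decrease (x0.7)
            self.bitrate = max(self.floor, self.bitrate * 7 // 10)
        return self.bitrate
```

A real controller would also fold in frame-rate reduction, as the comment suggests, but the latency-trend signal is the core of it.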

WebRTC will do this for you if you can use it, which actually suggests a different solution to this problem: use WebSockets for dumb corporate network firewall rules and just use WebRTC for everything else.

adamjs | yesterday at 7:30 PM

They might want to check out what VNC has been doing since 1998: keep the client-pull model, break the framebuffer up into tiles, and, when the client requests an update, diff against the last frame sent and composite the updated tiles client-side. (This is what VNC falls back to when it doesn’t have damage tracking from the OS compositor.)

This would really cut down on the bandwidth of static coding terminals where 90% of screen is just cursor flashing or small bits of text moving.

If they really wanted to be ambitious they could also detect scrolling and do an optimization client-side where it translates some of the existing areas (look up CopyRect command in VNC).
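A minimal sketch of the tile-diff idea, in pure Python over flat pixel arrays. A real implementation would run on the actual framebuffer and re-encode each changed tile (JPEG/PNG) before sending; this only shows the bookkeeping:

```python
def diff_tiles(prev, curr, width, height, tile=16):
    """Return [(x, y, tile_rows), ...] for tiles that changed.

    prev/curr are flat row-major pixel lists of length width*height
    (any comparable pixel value). Only changed tiles need to be
    re-encoded and sent; the client composites them over the last
    full frame it has.
    """
    changed = []
    for ty in range(0, height, tile):
        for tx in range(0, width, tile):
            block_prev, block_curr = [], []
            for y in range(ty, min(ty + tile, height)):
                row = y * width
                end = row + min(tx + tile, width)
                block_prev.append(prev[row + tx:end])
                block_curr.append(curr[row + tx:end])
            if block_prev != block_curr:
                changed.append((tx, ty, block_curr))
    return changed
```

For a static coding terminal, almost every call returns one or two tiles (the cursor, a line of text), which is the bandwidth win described above.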

qbow883 | yesterday at 9:38 PM

Setting aside the various formatting problems and the LLM writing style, this just seems all kinds of wrong throughout.

> “Just lower the bitrate,” you say. Great idea. Now it’s 10Mbps of blocky garbage that’s still 30 seconds behind.

10Mbps should be way more than enough for a mostly static image with some scrolling text. (And 40Mbps is ridiculous.) This is very likely caused by bad encoding settings and/or a bad encoder.

> “What if we only send keyframes?”

The post goes on to explain how this does not work because some other component needs to see P-frames. If that is the case, just configure your encoder to have very short keyframe intervals.

> And the size! A 70% quality JPEG of a 1080p desktop is like 100-150KB. A single H.264 keyframe is 200-500KB.

A single H.264 keyframe can be whatever size you want, *depending on how you configure your encoder*, which was apparently never seriously attempted. Why are we badly reinventing MJPEG instead of configuring the tools we already have? Lower the bitrate and keyint, use a better encoder for higher quality, lower the frame rate if you need to. (If 10 fps JPEGs are acceptable, surely you should try 10 fps H.264 too?)
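A configuration along those lines (lower bitrate, short keyint, modest frame rate) can be expressed as an ffmpeg command. The flags below are standard ffmpeg/x264 options; the input/output endpoints are placeholders and the specific numbers are illustrative, not a recommendation from the article:

```python
def ffmpeg_screen_stream_args(fps=10, kbps=1000, keyint_sec=1):
    """Build an illustrative ffmpeg argument list for a low-latency
    desktop stream: capped bitrate, small VBV buffer, short GOP.
    "INPUT"/"OUTPUT" are placeholders for the capture source and sink.
    """
    g = fps * keyint_sec  # GOP length in frames (keyframe interval)
    return [
        "ffmpeg",
        "-r", str(fps),               # capture/encode frame rate
        "-i", "INPUT",                # placeholder capture source
        "-c:v", "libx264",
        "-preset", "veryfast",
        "-tune", "zerolatency",       # disables B-frames and lookahead
        "-b:v", f"{kbps}k",
        "-maxrate", f"{kbps}k",
        "-bufsize", f"{kbps // 2}k",  # small VBV buffer -> low latency
        "-g", str(g),                 # short keyframe interval
        "-f", "mpegts", "OUTPUT",     # placeholder sink
    ]
```

At 10 fps and 1000 kbps this is an order of magnitude below the post's 10-40 Mbps figures, which is the comment's point.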

But all in all, the main problem seems to be squeezing an entire video stream through a single TCP connection. There are plenty of existing solutions for this; for example, the article never mentions DASH, which was made for exactly these purposes.

Dylan16807 | yesterday at 6:48 PM

> When the network is bad, you get... fewer JPEGs. That’s it. The ones that arrive are perfect.

You can still have weird broken stallouts though.

I dunno, this article has some good problem solving but the biggest and mostly untouched issue is that they set the minimum h.264 bandwidth too high. H.264 can do a lot better than JPEG with a lot less bandwidth. But if you lock it at 40Mbps of course it's flaky. Try 1Mbps and iterate from there.

And going keyframe-only is the opposite of how you optimize video bandwidth.

kccqzy | yesterday at 7:47 PM

There are so many things that I would have done differently.

> We added a keyframes_only flag. We modified the video decoder to check FrameType::Idr. We set GOP to 60 (one keyframe per second at 60fps). We tested.

Why muck around with P-frames and keyframes? Just make your video 1fps.

> Now it’s 10Mbps of blocky garbage that’s still 30 seconds behind.

10 Mbps is way too much. I occasionally watch YouTube videos where someone writes code. I set my quality to 1080p to be comparable with the article and YouTube serves me the video at way less than 1Mbps. I did a quick napkin math for a random coding video and it was 0.6Mbps. It’s not blocky garbage at all.

andai | yesterday at 8:34 PM

Many moons ago I was using this software which would take a screenshot every five seconds and give you a little time lapse at the end of the day. So you could see how you were spending your computer time.

My hard disk ended up filling up with tens of gigabytes of screenshots.

I lowered the quality. I lowered the resolution, but this only delayed the inevitable.

One day I was looking through the folder and noticed that almost all the image data in almost all of these screenshots was identical.

What if I created some sort of algorithm which would allow me to preserve only the changes?

I spent embarrassingly long thinking about this before realizing that I had begun to reinvent video compression!

So I just wrote a ffmpeg one-liner and got like 98% disk usage reduction :)
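The one-liner itself isn't quoted, but it was presumably something in this spirit: feed the numbered screenshots to ffmpeg and let inter-frame compression erase the redundancy between near-identical frames. The filename pattern, frame rate, and CRF below are made up for illustration; the flags are standard ffmpeg options:

```python
def timelapse_args(pattern="shot_%05d.jpg", fps=2, crf=30):
    """Illustrative ffmpeg invocation packing periodic screenshots
    into an H.264 timelapse. %05d matches zero-padded frame numbers;
    CRF 30 trades quality for size, fine for a "what did I do today"
    review. All names/numbers are hypothetical.
    """
    return ["ffmpeg", "-framerate", str(fps), "-i", pattern,
            "-c:v", "libx264", "-crf", str(crf), "timelapse.mp4"]
```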

nemothekid | yesterday at 10:44 PM

I'm very familiar with the stack and the pain of trying to livestream video to a browser. If JPEG screenshots work for your clients, then I would just stick with that.

The problem with Wolf, GStreamer, Moonlight, $third_party is that you need to be familiar with how the underlying stack handles backpressure and error propagation, or else things will just "not work" and you will have no idea why. I've worked on three projects in the last three years where I started with GStreamer and got up and running, and while things worked in the happy path, the unhappy path was incredibly brittle and painful to debug. All three times I opted to just use the lower-level libraries myself.

Given all of OP's requirements, I'd try something like NVIDIA Video Codec SDK to a websocket to MediaSource Extensions.

However, given that even this post seems to be LLM generated, I don't think the author would care to learn about the actual internals. I don't think this is a solution that could be vibe coded.

somehnguy | yesterday at 10:48 PM

40mbps for video of an LLM typing text didn't immediately fire off alarm bells in anyone's head that their approach was horribly wrong? That's an insane amount of bandwidth for what they're trying to do.

Tarean | yesterday at 7:49 PM

Having pair programmed over some truly awful and locked down connections before, dropped frames are infinitely better than blurred frames which make text unreadable whenever the mouse is moved. But 40mbps seems an awful lot for 1080p 60fps.

Temporal SVC (reduce framerate if bandwidth constrained) is pretty widely supported by now, right? Though maybe not for H.264, so it probably would have scaled nicely but only on Webrtc?

dotancohen | yesterday at 7:11 PM

They're just streaming a video feed of an LLM running in a terminal? Why not stream the actual text? Or fetch it piecemeal over AJAX requests? They complain that corporate networks support only HTTPS and nothing else. Do they not understand what the first T stands for?

lewq | today at 3:19 AM

Hi, author of the post here. Just fixed up some formatting issues from when we copied it into Substack, sorry about that. Yeah, I used Opus 4.5 to help me write it (and it actually made me laugh!). But the struggle was real.

Something I didn't make clear enough in the post is that JPEG works because each screenshot is taken exactly when it's requested, whereas streaming video pushes at a fixed frame rate. The client driving the frame rate is exactly what makes it not queue frames.

Yes, I wish we could use UDP in enterprise networks too, but we can't. The problem actually isn't opening the UDP port, it's hosting UDP on their Kubernetes cluster. "You want to what?? We have ingress. For HTTPS."

Join our discord for private beta in January! https://discord.gg/VJftd844GE

(This post written by human)

keerthiko | yesterday at 7:50 PM

> The fix was embarrassingly simple: once you fall back to screenshots, stay there until the user explicitly clicks to retry.

There is another recovery option:

- increase the JPEG framerate every couple seconds until the bandwidth consumption approaches the H264 stream bandwidth estimate

- keep track of latency changes. If the client reports a stable latency range, and it is acceptable (<1s latency, <200ms variance?), and bandwidth use has reached 95% of the H.264 estimate, re-activate the stream
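The re-activation test in these two bullets could be as simple as the following sketch. The thresholds (95%, <1s, <200ms) are the ones suggested above and purely illustrative:

```python
def should_reactivate_stream(jpeg_kbps, h264_estimate_kbps, latencies_ms,
                             max_latency_ms=1000, max_variance_ms=200):
    """Decide whether to switch back from JPEG polling to the H.264
    stream: only when JPEG polling already consumes ~95% of the
    estimated stream bandwidth AND reported latencies are stable and
    acceptable. Thresholds are illustrative, per the comment above.
    """
    if not latencies_ms:
        return False  # no telemetry yet: stay on JPEGs
    if jpeg_kbps < 0.95 * h264_estimate_kbps:
        return False  # link hasn't proven it can carry the stream
    if max(latencies_ms) > max_latency_ms:
        return False  # latency not acceptable
    if max(latencies_ms) - min(latencies_ms) > max_variance_ms:
        return False  # latency not stable
    return True
```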

Given that text/code is what is being viewed, lower res and adaptive streaming (HLS) are not really viable solutions since they become unreadable at lower res.

If remote screen sharing is a core feature of the service, I think this is a reasonable next step for the product.

That said, IMO at a higher level, if you know what you're streaming is human-readable text, it's better to send application data to the client than to encode screen-space video. That does, however, require building bespoke decoders and client viewers if real-time collaboration clients don't already exist for the tools (but SSH and RTC code editors exist).

toledocavani | yesterday at 11:57 PM

This thread is great; truly, the only way to get great answers on HN is to post a wrong blog post. But stupid wrong blog posts are unlikely to make the HN front page, so kudos to the writer for striking the right balance: easy to understand, working, interesting, but faulty.

robrain | yesterday at 6:35 PM

"Think “screen share, but the thing being shared is a robot writing code.”"

Thinks: why not send text instead of graphics, then? I'm sure it's more complicated than that...

laurencerowe | yesterday at 7:39 PM

If you are OK with a second or so of latency, then MPEG-DASH (the MPEG-standardized counterpart of HTTP Live Streaming) is likely the best bet. You simply serve the video chunks over HTTP, so it should be just as compatible as the JPEG solution used here but provide 60fps video rather than crappy JPEGs.

The standard supports adaptive bit rate playback so you can provide both low quality and high quality videos and players can switch depending on bandwidth available.

rekshaw | yesterday at 11:43 PM

I remember 12 years ago, while the Flash vs Html war was still raging on (pre-html5), I created a framework to create web video playback using CSS and JPEGs. It would expect a set of big JPEGs, each containing the frames of the video in a grid (a "reel"), and play it by changing the css background position (and swap out the background with the next jpeg once a "reel" was complete).

It worked really well, and I also cloned the (at the time) Youtube player UI. Seeking, keyframes, flexible framerate, etc were all supported out of the box thanks to the simple underlying architecture.

https://github.com/VAS/animite

plqbfbv | today at 1:49 AM

I dabbled a bit with recoding/encoding videos in the past: 40Mbps is basically Blu-ray quality (1080p/4K depending on content), and it's being used to stream a mostly static background with some text scrolling in front of it.

A 3-minute chat with Claude suggests 30fps should be plenty (perhaps minor cursor lag can be noticed if the cursor is drawn), with a GOP of 2s (60 frames) for fast recovery, VBR at 1Mbps average with a max bitrate of 1.2Mbps for crappy connections, and B-frames to minimize bandwidth usage (because we have hardware encoding).

The crappiest of internet cafes should still be able to guarantee 1.2Mbps (150KB/s). If they can do 5-10fps with 150KB frames, they have 6-12Mbps available. Worst case, the GOP can be reduced to 15 frames, so there are two I-frames every second and the latency is 500ms tops.

tcherasaro | today at 4:10 AM

Reminds me of when I was working on the video system for a mast on a submarine 20 years ago.

The customer had an impossible set of latency, resolution, processing, and storage requirements for their video. They also insisted we use this new H.264 standard that had just come out, though it wasn't a formal requirement.

We quickly found MJPEG was superior for meeting their requirements in every way. It took a lot of convincing though. H.264 was and would still be a complete non-starter for them.

karhuton | yesterday at 7:08 PM

I made this because I got tired of screensharing issues in corporate environments: https://bluescreen.live (code via github).

Screenshot once per second. Works everywhere.

I’m still waiting for mobile screenshare api support, so I could quickly use it to show stuff from my phone to other phones with the QR link.

zipy124 | today at 10:01 AM

This is just poor engineering. H.264 streaming is obviously superior to JPEG streaming, else MJPEG (motion jpeg) would be standard for screen sharing. In addition if all you're sharing is a picture of text, and you have access to the text, you can just send the damn text instead and render it locally.

MBCook | yesterday at 7:32 PM

So it’s video of an AI typing text?

Why not just send text? Why do you need video at all?

Jakob | yesterday at 7:07 PM

Yes, this is unfortunately still the way and was very common back when iOS Safari did not allow embedded video.

For a fast start of the video, reverse the implementation: instead of downgrading from WebSockets to polling when the connection fails, upgrade from polling to WebSockets when the network allows.

Socket.io was one of the first libraries to do that switching, and it had it backwards at first, too. They learned from enterprise network behaviour and switched the implementation.

abujazar | today at 10:13 AM

This reminds me of https://github.com/memvid/memvid

andai | yesterday at 8:32 PM

I recognize this voice :) This is Claude.

rcarmo | yesterday at 6:41 PM

This was the most entertaining thing I read all day. Kudos.

I've had similar experiences in the past when trying to do remote desktop streaming for digital signage (which is not particularly demanding in bandwidth terms). Multicast streaming video was the most efficient, but annoying to decode when you dropped data. I now wonder how far I could have gone with JPEGs...

jayd16 | yesterday at 7:11 PM

So they replaced a TCP stream with no application-level flow control with a synchronous poll of an endpoint, which is inherently flow-controlled.

I wonder if they just tried restarting the stream at a lower bitrate once it got too delayed.

The talk about how the images look crisper at a lower FPS is just tuning that I guess they didn't bother with.

vincepaulushook | today at 8:16 AM

Hi, I would concur with some of the comments. A keyframe in H.264 is already encoded in a similar way to JPEG. The major differences are the "defaults": JPEG is more flexible in terms of color depth and color maps, but that can be addressed with a video codec too. And a video codec like H.264 will also produce differential frames that send only the differences. It depends on the content, but these frames can be significantly smaller than a keyframe, like 10x.

So the math is that H.264 can hardly be worse than JPEG, assuming proper parameters for the type of content, the targeted transmission challenges, and the transport.

Using JPEG only is close to using only keyframes from a compression standpoint (not to say it is exactly like that), which is close to intra-frame-only codecs (like the intermediate formats used for editing or preservation). And the difference in size is a no-brainer; ultimately this is the amount of data that needs to be sent to every user.

In my opinion, the first consequence of using JPEG only is the cost per device, the number of concurrent streams from a server and what not.

If the perceived quality of H.264 is low compared to JPEG, some parameters need adjusting. And ultimately, H.264 is an old codec anyway, not the one I would recommend; newer ones address visual perception and bandwidth much better. The VP8/VP9/AV1 family reduces the "macroblock" effect of the H.26x codecs. Using higher bit depths (10-bit/HDR) will dramatically improve quality and crush any benefit from JPEG's poor 8-bit color maps, with much higher efficiency.

Should the volume of users and the cost per user be of any consideration, a lossy video codec will prevail.

Video projects are challenging in the details: wish you the best.

any1 | yesterday at 8:30 PM

I have some experience with pushing video frames over TCP.

It appears that the writer has jumped to conclusions at every turn and it's usually the wrong one.

The reason that the simple "poll for jpeg" method works is that polling is actually a very crude congestion control mechanism. The sender only sends the next frame when the receiver has received the last frame and asks for more. The downside of this is that network latency affects the frame rate.

The frame rate issue with the polling method can be solved by sending multiple frame requests at a time, but only as many as will fit within one RTT, so the client needs to know the minimum RTT and the sender's maximum frame rate.
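The arithmetic for "as many requests as will fit within one RTT" is small enough to write down. A sketch, assuming the client knows rtt_min and the sender's maximum frame rate as described:

```python
import math

def max_outstanding_requests(rtt_min_ms, max_fps):
    """How many frame requests to keep in flight so the sender is never
    idle waiting for the next poll: enough to cover one round trip at
    the sender's maximum frame rate, but no more (extras would only
    queue up and add latency).
    """
    frames_per_rtt = (rtt_min_ms / 1000.0) * max_fps
    return max(1, math.ceil(frames_per_rtt))
```

With a 100 ms round trip and a 30 fps sender, three requests in flight keep the pipe full; on a LAN one request at a time is already enough.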

The RFB (VNC) protocol does this, by the way. Well, the thing about rtt_min and frame rate isn't in the spec though.

Now, I will not go through every wrong assumption, but as for this nonsense about P-frames and I-frames: with TCP, you only need one I-frame. The rest can be all P-frames. I don't understand how they came to the conclusion that sending only I-frames over TCP might help with their latency problem. Just turn off B-frames and you should be OK.

The actual problem with the latency was that they had frames piling up in buffers between the sender and the receiver. If you're pushing video frames over TCP, you need feedback. The server needs to know how fast it can send. Otherwise, you get pile-up and a bunch of latency. That's all there is to it.

The simplest, absolutely foolproof way to do this is to use TCP's own congestion control. Spin up a thread that does two things: encodes video frames and sends them out on the socket using a blocking send/write call. Set SO_SNDBUF on that socket to a value that's proportional to your maximum latency tolerance and the rough size of your video frames.
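A sketch of that setup. The buffer-sizing formula and all the numbers are illustrative; real code would derive them from the actual frame sizes and latency budget:

```python
import socket

def make_backpressure_socket(max_latency_s=0.2, frame_bytes=100_000, fps=10):
    """Size SO_SNDBUF to roughly the number of bytes we are willing to
    have queued in the kernel at our latency budget. A blocking send()
    on this socket then throttles the encoder thread for free: when the
    buffer is full, the encoder simply stalls instead of piling up
    frames. (The kernel may round or cap the value we request.)
    """
    sndbuf = int(max_latency_s * fps * frame_bytes)  # latency budget in bytes
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, sndbuf)
    return s
```

The encode-and-blocking-send loop described above would then run in its own thread against this socket; no explicit feedback protocol is needed because TCP's own congestion control propagates the backpressure.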

One final bit of advice: use ffmpeg (libavcodec, libavformat, etc). It's much simpler to actually understand what you're doing with that than some convoluted gstreamer pipeline.

josephernest | today at 6:45 AM

Related: for some hardware project, I have a backend server (either C++ or python) receiving frames from an industrial camera, uncompressed.

And I need these frames displayed in a web browser client but on the same computer (instead of network trip like in this article).

How would you do this?

I eventually did more or less like OP with uncompressed frames.

My goal is to minimize CPU usage on the computer. Would h264 compression be a good thing here given source and destination are the same machine?

Other ideas?

NB: this camera cannot be directly accessed by the browser.

egorfine | yesterday at 6:40 PM

> The constraint that ruined everything: It has to work on enterprise networks.

> You know what enterprise networks love? HTTP. HTTPS. Port 443. That’s it. That’s the list.

That's not enough.

Corporate networks also love to MITM their own workstations and reinterpret HTTP traffic. So, no WebSockets and no Server-Sent Events either, because their corporate firewall is a piece of software no one in the world wants and everyone in the world hates, including its own developers. Thus it only supports a subset of HTTP/1.1, and sometimes it likes to change the content while keeping Content-Length intact.

And you have to work around that, because the corporation's IT department will never lift the restrictions.

I wish I was kidding.

didibus | today at 12:17 AM

What I'm wondering is, why couldn't the AI generate this solution? And implement it all?

Why did they need to spend human time and effort to experiment, arrive at this solution and implement it?

I'm asking genuinely. I use GenAI a lot, every day, multiple times a day. It helps me write emails, documents, produce code, make configuration changes, create diagrams, research topics, etc.

Still, it's all assisted, I never use its output as is, the asks from me to the AI are small, so small, I wouldn't ever assign someone else a task this small. We're not talking 1 story point, we're talking 0.1 story point. And even with those, I have to review, re-prompt, dissect, and often manually fix up or complete the work.

Are there use-cases where this isn't true that I'm simply not tackling? Are there context engineering techniques that I simply fail to grasp? Are there agentic workflows that I don't have the patience to try?

How, then, do models score so high on some of those benchmarks? Are the prompts for each question hand-crafted, rewritten multiple times until they find one that one-shots the problem? Do they not count all that human babysitting as the model failing to truly solve the problem? Do they run the models with a GPU budget 100x what they sell us?

benterix | yesterday at 7:53 PM

> And the size! A 70% quality JPEG of a 1080p desktop is like 100-150KB. A single H.264 keyframe is 200-500KB.

I believe the latter can be adjusted in codec settings.

rezonant | yesterday at 11:51 PM

I guess their LLM doesn't have much training data on how to do video engineering. The result? A "video" stack that looks like a junior engineer wrote it.

algesten | yesterday at 7:00 PM

WebSockets over TCP is probably always going to cause problems for streaming media.

WebRTC over UDP is one choice for lossy situations. Media over Quic might be another (is the future here?), and it might be more enterprise firewall friendly since HTTP3 is over Quic.

petcat | yesterday at 6:37 PM

so did they reinvent mjpeg

socketcluster | today at 1:59 AM

Next phase would be to do diffs between the JPEGs and if the diff is smaller than the next JPEG, only send the (gzipped) diff and reconstruct the next JPEG on the client side.
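A sketch of that decision. Note the delta here is over decoded pixel bytes rather than the JPEG bytes themselves, since two compressed streams don't diff usefully; everything about this (the XOR delta, zlib for the "gzipped" step, the message tagging) is illustrative:

```python
import zlib

def choose_payload(prev_raw, curr_raw, curr_jpeg):
    """Send whichever is smaller: a compressed per-byte delta of the
    decoded frames, or a fresh JPEG. prev_raw/curr_raw are equal-length
    bytes of decoded pixels; a real protocol would tag the message type
    so the client knows whether to patch or replace.
    """
    delta = bytes(a ^ b for a, b in zip(prev_raw, curr_raw))
    packed = zlib.compress(delta)
    if len(packed) < len(curr_jpeg):
        return ("delta", packed)   # mostly-static frame: delta wins
    return ("jpeg", curr_jpeg)     # big change: fresh JPEG wins
```

For a static coding terminal the delta is almost all zero bytes, which compresses to nearly nothing, so the delta branch wins most of the time.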

TBH, the obsession with standards is kind of nutty. It's not that hard to implement custom solutions that are better adapted to specific problems. Standards make sense when you want maximum interoperability, but not everything requires that degree of interoperability these days. It's not such a hassle to just provide a lightweight client in those cases.

For example, it's not ideal to use HTTP2 server push for realtime chat use cases. It was primarily intended for file push to avoid round-trip latency but HTTP is such a powerful and widespread protocol that people feel the need to use it for everything.

wewewedxfgdf | yesterday at 7:23 PM

webp is smaller than jpeg

https://developers.google.com/speed/webp/docs/webp_study

ALSO - the blog author could simplify - you don't need any code at all at the web browser.

The <img> tag automatically does motion jpeg streaming.
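What makes this work is the multipart/x-mixed-replace response type: the server holds the response open and pushes one JPEG part per frame, and browsers that support it (most do) replace the displayed image in the <img> element each time. A sketch of how one such part is framed; the boundary name is arbitrary:

```python
BOUNDARY = b"frame"

def mjpeg_part(jpeg_bytes):
    """One part of a multipart/x-mixed-replace response. The enclosing
    HTTP response must declare:
        Content-Type: multipart/x-mixed-replace; boundary=frame
    and the server then writes one of these per frame, never closing
    the response. Sketch only; error handling omitted.
    """
    return (b"--" + BOUNDARY + b"\r\n"
            b"Content-Type: image/jpeg\r\n"
            b"Content-Length: " + str(len(jpeg_bytes)).encode() + b"\r\n"
            b"\r\n" + jpeg_bytes + b"\r\n")
```

On the page, `<img src="/stream">` is genuinely all the client code there is.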

monus | today at 7:27 AM

Well, we are serving latency-sensitive remote control to <one of the biggest banks in the US> via WebRTC, which uses TURN over TLS, so the whole traffic goes over 443/HTTPS.

No NAT, no UDP, just pure TURN traffic over Cloudflare TURN with TLS.

Terretta | today at 4:20 AM

Helix is a commercial multi-protocol streaming server:

https://en.wikipedia.org/wiki/Helix_Universal_Server

HTTP Live Streaming is already a thing:

https://en.wikipedia.org/wiki/HTTP_Live_Streaming

See also DASH, M-JPEG, progressive download, etc.

> "Who knew?"

Everyone in the streaming industry, and not so long ago that it's been forgotten.

throwaway173738 | yesterday at 9:55 PM

This article reminds me so much of so many hardware providers I deal with at work who want to put equipment on-site and then spend the next year not understanding that our customers manage their own firewall. No, you can’t just add a new protocol or completely change where your stuff is deployed because then our support team has to contact hundreds of customers about thousands of sites.

epx | yesterday at 6:46 PM

Would HLS be an option? I publish my home security cameras via WebRTC, but I keep HLS as an escape hatch for hotel/cafe WiFi situations (MediaMTX makes it easy to offer both).

binocarlos | yesterday at 6:43 PM

> I mashed F5 like a degenerate.

I love the style of this blog-post, you can really tell that Luke has been deep down in the rabbit hole, encountered the Balrog and lived to tell the tale.

Eduard | yesterday at 9:17 PM

> A JPEG screenshot is self-contained. It either arrives complete, or it doesn’t. There’s no “partial decode.”

What about Progressive JPEG?

avsn | yesterday at 7:29 PM

We did something similar at one of the places I've worked. We sent x/y coordinates and pointer events from our frontend app to our backend 3D renderer and received JPEG frames back, all of it wrapped in protobuf messages and sent over a WebSocket connection. Surprisingly, it kinda worked, though obviously not "60fps worked".

dehrmann | today at 1:45 AM

> What if we only send keyframes?

I think the author reached this conclusion, but individual JPEGs are essentially keyframes only.

> We don’t spam HTTP requests for individual frames like it’s 2009.

Uncompressed frames are huge, somewhere between 5 MB and 50 MB. The overhead of a request is negligible. It's also different when you're optimizing for latency and reliability, where dropped frames are OK. Really, the lesson is they should have tried the easy thing first to see how good it was.
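The napkin math behind that range is easy to check, taking 1080p RGB24 and 4K RGBA as two illustrative pixel formats:

```python
def raw_frame_bytes(width, height, bytes_per_pixel=3):
    """Size of one uncompressed frame. Next to payloads this large,
    per-request HTTP overhead (a few hundred bytes of headers) is noise.
    """
    return width * height * bytes_per_pixel

# 1080p RGB24: 1920*1080*3 = 6,220,800 bytes (~6.2 MB)
# 4K RGBA:     3840*2160*4 = 33,177,600 bytes (~33 MB)
```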

saagarjha | today at 1:31 AM

I’m currently doing this in one of my side projects: https://github.com/saagarjha/Ensemble. It works, kinda; it’s good enough for demos at least and I haven’t had much time to improve it. At some point you would really want to use an actual video encoder though because JPEGs are not cheap to encode and send even with hardware acceleration.

nico | yesterday at 8:17 PM

Super interesting. Some time ago I wrote some code that breaks down a jpeg image into smaller frames of itself, then creates an h.264 video with the frames, outputting a smaller file than the original image

You can then extract the frames from the video and reconstruct the original jpeg

Additionally, instead of converting to video, you can use the smaller images of the original, to progressively load the bigger image, ie. when you get the first frame, you have a lower quality version of the whole image, then as you get more frames, the code progressively adds detail with the extra pixels contained in each frame

It was a fun project, but the extra compression doesn’t work for all images, and I also discovered how amazing jpeg is - you can get amazing compression just by changing the quality/size ratio parameter when creating a file

dehrmann | today at 1:50 AM

> We’re building Helix, an AI platform where autonomous coding agents work in cloud sandboxes. Users need to watch their AI assistants work. Think “screen share, but the thing being shared is a robot writing code.”

This feels like a fast dead end. Agents will get much faster pretty quickly, so synchronous human supervision isn't going to scale. I'd focus on systems that make high-signal asks of humans asynchronously.

wood_spirit | yesterday at 6:37 PM

A long time ago I was trying to get video multiplexing to work over 3G mobile. We struggled with H.264, which had broad enough hardware support but almost no tooling and software support on the phones we were targeting. Even with engineers from the phone manufacturer as liaisons, we struggled to get access to any kind of SDK. We ended up doing JPEG streaming instead, much like the article describes, and it worked great. But we discovered we were getting a fraction of the frame rate reported in Flash players: the call to refresh the screen was async, and the act of receiving and decoding the next frame starved the redraw, so the phone spent more time receiving frames than showing them. Super annoying, and I don't think the project survived long enough for us to find a fix.
