Hacker News

mikepavone yesterday at 8:00 PM

> When the network is bad, you get... fewer JPEGs. That’s it. The ones that arrive are perfect.

This would make sense... if they were using UDP, but they are using TCP. Every JPEG they send will get there eventually (unless the connection drops), and JPEG does not fix your buffering and congestion-control problems. What presumably happened is that the way they implemented their JPEG screenshots involves some mechanism that minimizes the number of frames in flight. That is not an inherent property of JPEG, though.

> And the size! A 70% quality JPEG of a 1080p desktop is like 100-150KB. A single H.264 keyframe is 200-500KB. We’re sending LESS data per frame AND getting better reliability.

H.264 has better coding efficiency than JPEG. For a given target size, you should be able to get better quality from an H.264 IDR frame than from a JPEG; there is no fixed size for an IDR frame.

Ultimately, the problem here is a lack of bandwidth estimation (apart from the binary "good network"/"cafe mode" switch they ultimately implemented). To be fair, this is hard to do, and being stuck with TCP makes it harder. Still, you can do an initial bandwidth probe and then look for increasing transmission latency as a sign that the network is congested. Back off your bitrate (and if needed reduce frame rate to maintain sufficient quality) until transmission latency starts to decrease again.
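As a sketch, assuming the sender timestamps each frame and the receiver acks it back over the same connection (every name and threshold below is illustrative, not anything from the article):

    // Latency-driven bitrate back-off: rising transmission latency means
    // queues are building, so we treat it as a congestion signal.
    const MIN_BITRATE = 250_000;   // bits/sec floor before trading frame rate
    const MAX_BITRATE = 8_000_000; // ceiling from the initial bandwidth probe
    let bitrate = 2_000_000;       // current encoder target
    let baseline = Infinity;       // best transmission latency seen so far

    function onFrameAcked(sentAtMs: number, ackedAtMs: number): void {
      const latency = ackedAtMs - sentAtMs;
      baseline = Math.min(baseline, latency);
      if (latency > baseline * 1.5) {
        // latency creeping up => send queue is building => back off
        bitrate = Math.max(bitrate * 0.85, MIN_BITRATE);
        // once at the floor, drop frame rate instead of per-frame quality
      } else if (latency < baseline * 1.1) {
        bitrate = Math.min(bitrate * 1.02, MAX_BITRATE); // probe back up slowly
      }
    }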

WebRTC will do this for you if you can use it, which actually suggests a different solution to this problem: keep websockets as a fallback for dumb corporate firewall rules and use WebRTC everywhere else.
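Roughly like this (the STUN server, URLs, and timeout are placeholders, and the offer/answer signalling is elided; a sketch, not a complete implementation):

    // Try WebRTC first; fall back to a websocket if ICE never connects.
    async function openTransport(): Promise<RTCDataChannel | WebSocket> {
      try {
        const pc = new RTCPeerConnection({
          iceServers: [{ urls: "stun:stun.example.com:3478" }], // placeholder
        });
        const channel = pc.createDataChannel("frames");
        // ...exchange offer/answer through your signalling server here...
        await new Promise<void>((resolve, reject) => {
          channel.onopen = () => resolve();
          setTimeout(() => reject(new Error("WebRTC blocked")), 3000);
        });
        return channel; // congestion control and pacing come for free
      } catch {
        // the firewall ate UDP/ICE: fall back to a plain websocket
        return new WebSocket("wss://example.invalid/frames"); // placeholder
      }
    }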


Replies

auxiliarymoose yesterday at 9:06 PM

They shared the polling code in the article: it doesn't request another JPEG until the previous one finishes downloading. UDP is not necessary to write a loop.
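For anyone who hasn't read it, the pattern is roughly this (a minimal sketch; the endpoint name and canvas wiring are my guesses, not the article's actual code):

    // Each request starts only after the previous JPEG has fully arrived
    // and been drawn, so at most one frame is ever in flight.
    async function pollScreenshots(canvas: HTMLCanvasElement): Promise<never> {
      const ctx = canvas.getContext("2d")!;
      while (true) {
        const res = await fetch("/screenshot.jpg", { cache: "no-store" });
        const blob = await res.blob();               // wait for the whole JPEG
        const frame = await createImageBitmap(blob); // decode it
        ctx.drawImage(frame, 0, 0, canvas.width, canvas.height);
        frame.close();
        // no timer needed: a slow network throttles the loop by itself
      }
    }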

eichin yesterday at 8:33 PM

Probably either (1) they don't request another JPEG until they have the previous one on-screen (so everything is completely serialized and there are never frames "in-flight"), or (2) they're doing a fresh GET for each frame and getting a new connection each time (unless that kind of thing is pipelined these days, in which case it falls back to (1)).

chrisweekly today at 12:55 AM

Related tangent: it's remarkable to me how one JPEG can be visually indistinguishable from another (to a human on a decent monitor) yet weigh only 10-15% as many bytes. I got pretty deep into web performance and image optimization in the late 2000s, and it was gratifying to have so much low-hanging fruit.

lelanthran today at 9:02 AM

> Still, you can do an initial bandwidth probe and then look for increasing transmission latency as a sign that the network is congested. Back off your bitrate (and if needed reduce frame rate to maintain sufficient quality) until transmission latency starts to decrease again.

They said playing around with the bitrate didn't reduce the latency; all that happened was the video got blocky while the latency stayed the same.

afiori today at 7:16 AM

I am fairly sure the ideal solution would involve a proper video codec; the issue is implementation complexity, and having to run a production encoder yourself if your use case is unusual.

This is exactly the point of the article: they tried keyframes-only, but their library had a bug that broke it.

nazgul17 today at 4:27 AM

Regarding the encoding efficiency: I imagine the problem is that the compromise in quality shows up in the spatial dimension (fewer or blurrier pixels) rather than in time. Users need to read text clearly, so a compromise in the time dimension (fewer frames) sounds just fine.
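A quick back-of-envelope on that trade-off, using the article's ~125 KB average frame (purely illustrative numbers):

    // How many crisp, full-quality frames/sec can a given link sustain?
    function sustainableFps(linkBitsPerSec: number, frameBytes = 125_000): number {
      return linkBitsPerSec / (frameBytes * 8);
    }
    sustainableFps(2_000_000); // a 2 Mbps link => 2 fps of readable frames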
