Hacker News

kllrnohj, yesterday at 5:43 PM

> With only 8 bits of precision, it really sucks for HDR, gainmap or no gainmap. You just get too much banding.

This is simply not true. In fact, you get less banding than you do with 10-bit BT.2020 PQ.
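One way to put a number on "banding" is the relative luminance step between adjacent code values (the Weber fraction per step). Below is a rough sketch for checking that kind of claim yourself; it assumes an sRGB transfer function for the 8-bit base layer, a 203-nit SDR white, and treats the gain map purely as a per-pixel linear multiplier (which leaves relative step sizes unchanged):

```python
import numpy as np

def srgb_eotf(v):
    """sRGB transfer function: encoded [0,1] -> linear relative luminance [0,1]."""
    v = np.asarray(v, dtype=np.float64)
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

def pq_eotf(v):
    """SMPTE ST 2084 (PQ) EOTF: encoded [0,1] -> absolute luminance in cd/m^2."""
    m1, m2 = 2610 / 16384, 2523 / 4096 * 128
    c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32
    p = np.maximum(np.asarray(v, dtype=np.float64), 0.0) ** (1.0 / m2)
    return 10000.0 * (np.maximum(p - c1, 0.0) / (c2 - c3 * p)) ** (1.0 / m1)

def step_table(eotf, bits, scale=1.0):
    """Luminance of each code value and relative step to the next one."""
    codes = np.arange(2 ** bits) / (2 ** bits - 1)
    y = eotf(codes) * scale
    rel = np.diff(y) / np.maximum(y[1:], 1e-9)   # Weber fraction per step
    return y[1:], rel

SDR_WHITE = 203.0  # assumed nits for the 8-bit base; the gain map only multiplies
                   # linear values, so it does not change these relative steps
y8, rel8 = step_table(srgb_eotf, 8, scale=SDR_WHITE)
y10, rel10 = step_table(pq_eotf, 10)

for nits in (1, 10, 50, 100, 200):
    i8 = np.argmin(np.abs(y8 - nits))
    i10 = np.argmin(np.abs(y10 - nits))
    print(f"{nits:>4} nits: 8-bit gamma step {rel8[i8]:6.2%} | 10-bit PQ step {rel10[i10]:6.2%}")
```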

> JXL otoh is completely immune to banding

Nonsense. It has a lossy mode (which is its primary mode so to speak), so of course it has banding. Only lossless codecs can plausibly be claimed to be "immune to banding".

> The JXL spec already has gainmaps...

Ah, looks like they added that sometime last year but decided to call it "JHGM", made almost no mention of it in the issue tracker, and didn't bother updating the previous feature requests asking for this, which are still open.


Replies

spaceducks, yesterday at 8:15 PM

> Nonsense. It has a lossy mode (which is its primary mode so to speak), so of course it has banding. Only lossless codecs can plausibly be claimed to be "immune to banding".

Color banding is not a result of lossy compression*; it results from not having enough precision in the color channels to represent slow gradients. VarDCT, JPEG XL's lossy mode, encodes values as 32-bit floats. In fact, image bit depth in VarDCT is just a single value that tells the decoder what bit depth it should output at, not what bit depth the image is encoded as internally. Optionally, the decoder can even blue-noise dither the output for you if the image's nominal bit depth is higher than your display or software supports.
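As a minimal illustration of that last point (not libjxl's actual dithering, which uses blue noise; plain white noise is used here for brevity): rounding a high-precision gradient straight to 8 bits produces flat bands, while dithered rounding keeps the local average tracking the true gradient.

```python
import numpy as np

rng = np.random.default_rng(0)

# A slow gradient at float precision (stand-in for the decoder's internal samples).
width = 4096
gradient = np.linspace(0.30, 0.34, width)        # spans only ~10 8-bit code values

# Plain rounding to 8 bits: the gradient collapses into about a dozen flat bands.
banded = np.round(gradient * 255).astype(np.uint8)

# Dithered rounding: add +/- half a code value of noise before rounding.
noise = rng.uniform(-0.5, 0.5, width)
dithered = np.clip(np.round(gradient * 255 + noise), 0, 255).astype(np.uint8)

longest_band = np.diff(np.flatnonzero(np.diff(banded))).max()
print("longest flat run without dither:", longest_band, "pixels")

def worst_window_error(quantized, window=256):
    """Worst deviation of a window's mean from the true gradient, in code values."""
    n = width // window * window
    q_means = quantized[:n].reshape(-1, window).mean(axis=1)
    ref_means = (gradient[:n] * 255).reshape(-1, window).mean(axis=1)
    return np.abs(q_means - ref_means).max()

print("worst 256-px average error, no dither:", round(worst_window_error(banded), 3))
print("worst 256-px average error, dithered: ", round(worst_window_error(dithered), 3))
```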

This is more than enough precision to prevent any color banding (assuming, of course, that the source data encoded into the JXL didn't have banding to begin with). If you still want more precision for whatever reason, the spec simply defines the values in the XYB color channels as real numbers between 0 and 1, and the header supports signaling an internal depth of up to 64 bits per channel.

* Technically, color banding could result from "lossy compression" if high-bit-depth values are quantized down to lower bit depths. However, with sophisticated compression, higher bit depths often compress better, because transitions are less harsh and therefore need fewer high-frequency coefficients to represent. Even in lossless images, slow gradients can compress better at high bit depth, because frequent, consistent changes in pixel values are easier to predict than sudden occasional ones (like the jump from one color band to the next).
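A quick sketch of the high-frequency-coefficient point, using an ordinary DCT rather than the actual VarDCT transform or quantizer: the same slow ramp needs far fewer significant coefficients when it hasn't already been quantized into a staircase.

```python
import numpy as np
from scipy.fft import dct

n = 256
smooth = np.linspace(0.30, 0.34, n)       # high-precision slow gradient
banded = np.round(smooth * 255) / 255     # same ramp, pre-quantized to 8 bits

def significant_coeffs(x, threshold=1e-3):
    """Count DCT-II coefficients above a small threshold (DC excluded)."""
    c = dct(x - x.mean(), norm="ortho")
    return int(np.sum(np.abs(c[1:]) > threshold))

print("significant coefficients, smooth ramp:", significant_coeffs(smooth))
print("significant coefficients, banded ramp:", significant_coeffs(banded))
```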