Hacker News

Issues with Color Spaces and Perceptual Brightness

111 points by todsacerdoti yesterday at 7:01 AM | 63 comments

Comments

jorvi yesterday at 11:50 AM

I've always found these "perceptual vs absolute" things about human senses very interesting.

Hearing has a few quirks too:

- When we measure sound pressure, we measure it on a log scale (every 3 dB is a doubling of sound power; every 6 dB a doubling of sound pressure), but our hearing perceives that log scale as roughly linear. If you make a linear volume slider, the upper part will seem as if it barely does anything.

- The lower the volume, the less perceivable the upper and lower ranges are compared to the midrange. This is what "loudness" is intended to fix, although poor implementations have made many people assume it is a V-curve button. A proper loudness implementation will lessen its impact as volume increases, petering out completely somewhere around 33% of maximum volume.

- For the most "natural" perceived sound, you don't try to get as flat a frequency response as possible but instead aim for a Harman curve.

- Bass frequencies (<110Hz, depending on who you ask) are omnidirectional, which means we cannot accurately perceive which direction the sound is coming from. Subwoofers exploit this fact, making it seem as if deep rich bass is coming from your puny soundbar and not the sub hidden behind the couch :).
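The first point above can be sketched in code. This is an illustrative mapping only; the 60 dB range is an assumption, not a standard (real players pick various ranges and taper shapes):

```python
import math

def slider_to_gain(position, range_db=60.0):
    """Map a slider position in [0, 1] to a linear amplitude gain.

    A naive linear slider maps position directly to gain, which crams
    most of the audible change into the bottom of the travel. A
    perceptual slider instead sweeps evenly in decibels, here over an
    assumed 60 dB range, with position 0 treated as silence.
    """
    if position <= 0.0:
        return 0.0
    db = (position - 1.0) * range_db   # 0 dB at the top, -60 dB near the bottom
    return 10.0 ** (db / 20.0)         # dB -> linear amplitude
```

With this mapping, the halfway point sits at -30 dB (a gain of about 0.032) rather than a gain of 0.5, which tracks perceived loudness much more evenly.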

Animats yesterday at 9:22 AM

This is part of "tone mapping"[1] in high dynamic range rendering. The idea is that pixels are computed with a much larger range of values than screens can display: 16 bits per color per pixel, or even a floating point value. Then, to generate the displayed image, there's a final step where the pixel values are run through a perceptual transformation to map them into 8-bit RGB (or more, if the hardware is available).
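A minimal sketch of such a final mapping step, using the classic Reinhard operator plus a rough gamma encode. This is a stand-in for the perceptual transformation described above, not what any particular engine actually ships:

```python
def tonemap_reinhard(hdr_rgb):
    """Map an HDR linear-light pixel (floats, possibly much greater
    than 1.0) into displayable 8-bit values.

    Reinhard's x / (1 + x) compresses the unbounded HDR range into
    [0, 1), and a gamma-2.2 encode approximates the display transfer
    curve. Real engines add exposure, color grading, etc. on top.
    """
    out = []
    for x in hdr_rgb:
        mapped = x / (1.0 + x)           # compress unbounded range into [0, 1)
        encoded = mapped ** (1.0 / 2.2)  # rough gamma encode for display
        out.append(round(encoded * 255))
    return out
```

Simulating iris adaptation would then just be an exposure multiplier on the HDR values that eases toward the scene's average brightness over a second or two.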

This has issues. When you go from a dark space to a bright space, the eye's iris stops down. But not instantaneously. It takes a second or two. This can be simulated. Cyberpunk 2077 does this. Go from a dark place in the game to bright sunlight and, for a moment, the screen becomes blinding, then adjusts.

In the other direction, go into a dark space, and it's dark at first, then seems to lighten up after a while. Dark adaptation is slower than light adaptation.

Tone mapping is not just an intensity adjustment. It has to compensate for the color space intensity problems the OP mentions. Human eyes are not equally sensitive to the primary colors.

Some visually impaired people hate this kind of adjustment, it turns out.

Here's a clip from Cyberpunk 2077.[2] Watch what happens to screen brightness as the car goes into the tunnel and then emerges into daylight.

[1] https://en.wikipedia.org/wiki/Tone_mapping

[2] https://youtu.be/aWlX793ACUY?t=145

kookamamie yesterday at 8:00 AM

See Oklab colorspace for an attempt at fairer perceptual brightness: https://bottosson.github.io/posts/oklab/

PaulHoule yesterday at 2:58 PM

There's the strange phenomenon that people like to say "bright red" even though it is something of an oxymoron.

#ff0000 is, in terms of brightness, pretty dark compared to #ffffff, yet there is a way it seems to "pop out" psychologically. It is unusual for something red to really be the brightest color in a natural scene unless the red is something self-luminous, like an LED on a dark night.
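The "pretty dark" claim is easy to check numerically with the standard sRGB relative-luminance computation (linearize each channel, then apply the Rec. 709 luma weights):

```python
def relative_luminance(hex_color):
    """Relative luminance (Y) of an sRGB hex color like "#ff0000".

    Channels are linearized with the standard sRGB transfer function,
    then weighted by the Rec. 709 coefficients. White is 1.0.
    """
    r, g, b = (int(hex_color[i:i + 2], 16) / 255.0 for i in (1, 3, 5))

    def lin(c):
        # sRGB electro-optical transfer function
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b)
```

Pure red comes out at about 0.21 versus 1.0 for white, i.e. roughly a fifth of white's luminance, despite how vivid it looks.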

weinzierl yesterday at 8:20 AM

"Unfortunately, I haven’t been able to find any perceptually uniform color spaces that seem to include these transformations in the final output space. If you’re aware of one, I would love to know."

OSA-UCS takes the Helmholtz-Kohlrausch effect into consideration.

boulos yesterday at 9:01 PM

Amy Gooch's Color2Gray and various follow-on works have better coverage of the OP's actual goal:

> evaluate relative brightnesses between art assets, and improve overall game readability

The method in Color2Gray is trying to enhance salience, but the paper does a good job of comparing the problems (including red / blue examples in particular).

Like other commenters, I think oklab would look better than CIELAB on the example given in the OP. https://bottosson.github.io/posts/oklab/#comparison-with-oth... and the Munsell data below it show it to be a lot more uniform than either CIELAB or CIELUV.

vanderZwan yesterday at 2:15 PM

So as a guy with protanomaly, the biggest shock for me when I got my colorlite glasses¹ was that the traffic signs with bright blue and dark red colors suddenly looked dark(er) blue with very bright red. I asked my (normal vision) friends how they experienced those traffic signs and it was the latter. The lenses corrected for that.

It was actually quite shocking how much more sense most color choices in art and design made to me, which was a much bigger reason for me to keep wearing the glasses than being able to distinguish red, green and brown better than before. The world just looks "more balanced" color-wise with them.

While it was very obvious early on in my life that I experienced most green, red and brown colors as ambiguously the same (I did not know peanut butter was not green until my early thirties), the fact that there also were differences in perceptual brightness had stayed completely under the radar.

¹ And yes, these lenses do work, at least for me. They're not as scummy as enchroma or other colorblind-"correcting" lenses; for starters, you can only buy them after trying them out in person with an optometrist, who tests which type of correction you need at which strength. Ironically, their website is a broken mess that looks untrustworthy[0]. And their list of scientific publications doesn't even show up on Google Scholar, so they probably have near-zero citations[1]. But the lenses work great for me.

[0] https://www.colorlitelens.com/

[1] https://www.colorlitelens.com/color-blindness-correction-inf...

andrewla yesterday at 6:37 PM

It seems that this would be well-suited to a simple online test -- show a square with one color and a square inside of that with a different color, and ask the user whether the inner square is brighter (or too close to call). Aggregate this across users and assess the fit to CIELAB or other color spaces. It seems like you could get almost all HN users to take a stab at this for a bit before they get sick of it.

qwertox yesterday at 8:47 AM

If this submission has put you into a mindset of wanting to know more about colormaps, here's a good video about how Matplotlib ended up having `viridis` as its default colormap:

A Better Default Colormap for Matplotlib | SciPy 2015 | Nathaniel Smith and Stéfan van der Walt https://www.youtube.com/watch?v=xAoljeRJ3lU

seanwilson yesterday at 11:10 AM

> This process means that there is some error. For example, ideally, a color with L=50 looks twice as bright as a color with L=25. Except, with very strongly saturated colors like red, this isn’t actually the case in any of these color spaces.

A benefit of doing it this way is you account for color blindness and accessibility e.g. all colors at L=50 will have the same WCAG contrast ratio against all colors at L=25. This helps when finding colors with the contrast you want.
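The WCAG contrast ratio mentioned here is defined directly on relative luminances, so the "same L means same ratio" property falls out of the formula. A minimal sketch (the luminance inputs would come from the standard sRGB linearization):

```python
def contrast_ratio(y1, y2):
    """WCAG 2.x contrast ratio between two relative luminances in [0, 1].

    The ratio is (L_lighter + 0.05) / (L_darker + 0.05), ranging from
    1:1 (identical) to 21:1 (pure white vs. pure black). Because it
    depends only on luminance, any two colors with equal luminance have
    the same ratio against a given background.
    """
    lighter, darker = max(y1, y2), min(y1, y2)
    return (lighter + 0.05) / (darker + 0.05)
```

For example, white (Y=1.0) against black (Y=0.0) gives the maximum 21:1, and WCAG AA asks for at least 4.5:1 for normal body text.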

Related, I'm working on a color palette editor based around creating accessible palettes where I use the HSLuv color space which has the above property:

https://www.inclusivecolors.com/

You can try things like maxing out the saturation of each swatch to see how some hues get more bold looking at the same lightness (the Helmholtz-Kohlrausch effect mentioned in the article, I think). You can also explore examples of open source palettes (Tailwind, IBM Carbon, USWDS), where it's interesting to compare how they vary their saturation and lightness curves per swatch e.g. red-700 and green-700 in Tailwind v3 have different lightnesses but are the same in IBM Carbon (the "Contrast > View colors by luminance only" option is interesting to see this).

mark-r yesterday at 12:57 PM

There's another problem nobody ever talks about. The way our RGB monitors work, some colors will be more intense than others at the same brightness. A red or blue patch will only emit half as many photons as a magenta patch, because magenta includes both red and blue. The non-linear response of the eye helps here, but not completely.

leoc yesterday at 3:26 PM

This is something photographers and filmmakers had to (and sometimes still have to) deal with when shooting on black-and-white film, surely? (Though maybe B&W film sometimes has noticeably different responses to light at different visible frequencies, which might happen to counter this, or to heighten it?) There’s a long history of using colour lens filters with B&W film.

hatthew yesterday at 7:30 PM

Hmm, the red definitely looks more vivid to me, but I'm not sure I would say it's lighter. In an HSL-type colorspace I would say it has more S, but not more L.

mxfh yesterday at 10:32 AM

Desaturation methods are tricky as well. So are the transformations the image data undergoes on its way to the hardware, and the display hardware's characteristics themselves.

Accurate color reproduction on uncalibrated consumer devices is just wishful thinking and will not be fixed in the foreseeable future.

So unless you work in a color-controlled and calibrated environment, it's hard to make any reliable statements about perception.

I simply would not worry too much about optimizing perceptual color spaces at this point.

https://library.imaging.org/cic/articles/31/1/36

RealStickman_ yesterday at 10:28 AM

For a slightly humorous explanation and exploration of color spaces, I highly recommend this video.

https://youtu.be/fv-wlo8yVhk

Aeolun yesterday at 8:40 AM

LCH works pretty well as a perceptually uniform space. At least in my experience.

_kb yesterday at 11:50 AM

For some adjacent work in the analogue domain, there's an exceptional set of paintings here that play with the fuzziness of this perception: https://www.pstruycken.nl/EnS20.html

kevingadd yesterday at 7:33 AM

I've definitely struggled with this trying to do generalized stuff with colors in game visuals and UI. Using a color space like cielab or okhsl helps some but it's tough to come up with values/formulas that work just as well for red as they do for yellow, blue and purple.

refulgentis yesterday at 3:49 PM

Not even wrong, in the Pauli sense: lightness is not brightness, red is dark in terms of lightness, and you're asking for chroma to be represented in lightness, which would make it not-lightness. Anything different would result in color blind people seeing it differently.

viggity yesterday at 2:56 PM

I once had an app that used a lot of colored bars with text labels on top of them, and I wanted a programmatic way to determine if a color should use black text or white text, because R+G+B over some threshold was not working. I stumbled upon the YIQ color space, where Y is the perceived luminance. Under 128 got white text, over 128 got black text. Worked like a charm.

Y = 0.299 * R + 0.587 * G + 0.114 * B
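Wrapped up as a function, the trick described above looks like this (a sketch; the 128 threshold and the "black"/"white" return values are just the commenter's convention):

```python
def text_color_for(r, g, b):
    """Pick black or white label text for a background color.

    Uses the YIQ luma Y = 0.299R + 0.587G + 0.114B on 8-bit sRGB
    channels: light backgrounds (Y >= 128) get black text, dark
    backgrounds get white text.
    """
    y = 0.299 * r + 0.587 * g + 0.114 * b  # perceived luminance, 0..255
    return "black" if y >= 128 else "white"
```

Note that pure red (255, 0, 0) lands at Y ≈ 76, well under the threshold, which is consistent with the thread's point that saturated red is perceptually dark.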

bnetd yesterday at 2:38 PM

Yes, don't confuse color, chroma, and value.

> Unfortunately, I haven’t been able to find any perceptually uniform color spaces that seem to include these transformations in the final output space. If you’re aware of one, I would love to know.

Traditional painting.

Also, to the author on the same blog, came across this: https://johnaustin.io/articles/2023/how-we-can-actually-move...

Get off the internet.