January 17th, 2025

Issues with Color Spaces and Perceptual Brightness

John Austin addresses challenges in color spaces like CIELab and its variants, highlighting issues with brightness perception in saturated colors and introducing L_EAL for improved accuracy in color representation.


John Austin discusses the challenges associated with color spaces and perceptual brightness, particularly focusing on the CIELab color space and its modern variants like CIECAM02 and Oklab. These color spaces aim to be perceptually uniform, meaning that numerical changes should correspond to human perception of color changes. However, they often fail to accurately represent the brightness of highly saturated colors, such as red, due to the Helmholtz-Kohlrausch effect, which causes these colors to appear brighter than their calculated lightness values suggest. Austin highlights recent research that introduces the "Predicted Equivalent Achromatic Lightness" (L_EAL), which better aligns perceived lightness with actual color saturation. This value is particularly useful for desaturating images, as it provides a more accurate gray representation for the final output. Austin notes that while working on a tool to desaturate game screenshots, he observed that red assets appeared darker than expected, leading to potential misjudgments in asset brightness. He expresses a need for perceptually uniform color spaces that incorporate these transformations to improve accuracy in color representation.

- CIELab and its variants aim for perceptual uniformity but struggle with saturated colors.

- The Helmholtz-Kohlrausch effect causes discrepancies in perceived brightness.

- The "Predicted Equivalent Achromatic Lightness" (L_EAL) offers a better measure for perceived lightness.

- Accurate desaturation is crucial for evaluating game asset brightness.

- There is a lack of perceptually uniform color spaces that account for saturation effects.
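The gap these bullets describe can be made concrete. A standard CIELAB desaturation maps a color to the gray with the same L*, and for pure sRGB red that L* comes out around 53, a middling gray, even though red tends to read brighter than that to most viewers (the Helmholtz-Kohlrausch effect). A minimal sketch of that conventional L* computation, not the author's tool:

```python
def srgb_to_linear(c):
    """Undo the sRGB transfer function for one channel in 0..1."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def cielab_lightness(r, g, b):
    """CIE L* (0..100) of an sRGB color via relative luminance Y (D65 white)."""
    y = (0.2126 * srgb_to_linear(r) +
         0.7152 * srgb_to_linear(g) +
         0.0722 * srgb_to_linear(b))
    # CIE lightness function: cube root above the linear toe
    f = y ** (1 / 3) if y > (6 / 29) ** 3 else y * (29 / 6) ** 2 / 3 + 4 / 29
    return 116 * f - 16

# Pure red lands near L* = 53, which is the "too dark" gray the article
# observes; L_EAL-style corrections would assign it a lighter gray.
```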

AI: What people are saying
The comments discuss various aspects of color perception and representation, particularly in relation to color spaces and their challenges.
  • Many commenters express interest in the complexities of color perception, including the differences between brightness, lightness, and chroma.
  • Several users mention alternative color spaces like Oklab, LCH, and HSLuv as potential solutions for more accurate color representation.
  • There are discussions about the impact of color blindness on perception and the effectiveness of corrective lenses.
  • Some comments highlight the challenges of achieving accurate color reproduction on uncalibrated devices and the limitations of current color spaces.
  • Practical applications, such as color palette editors and tone mapping in rendering, are also mentioned as relevant to the topic.
22 comments
By @jorvi - about 14 hours
I've always found these "perceptual vs absolute" things about human senses very interesting.

Hearing has a few quirks too:

- Sound level is measured on a log scale, in decibels: a doubling of sound power is about +3 dB (a doubling of sound pressure is +6 dB), yet our ears judge equal dB steps as roughly equal loudness steps. If you make a linear volume slider, the upper part will seem as if it barely does anything.
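The usual fix for the linear-slider problem is to interpolate the slider position in decibels and only then convert to a linear gain. A sketch (the -60 dB floor is an illustrative assumption, not a standard):

```python
import math

def slider_to_gain(position, floor_db=-60.0):
    """Map a 0..1 slider position to a linear amplitude gain.

    A perceptually even volume control moves in equal decibel steps,
    so we interpolate in dB and convert via gain = 10^(dB / 20).
    Positions at or below 0 mute entirely.
    """
    if position <= 0.0:
        return 0.0
    db = floor_db * (1.0 - position)  # 0 dB at the top, floor_db at the bottom
    return 10.0 ** (db / 20.0)
```

With this mapping, the halfway point sits at -30 dB (a gain of about 0.03) rather than at half amplitude, which matches how the control feels.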

- The lower the volume, the less perceivable the upper and lower ranges are compared to the midrange. This is what "loudness" compensation intends to fix, although poor implementations have made many people assume it is a V-curve button. A proper loudness implementation lessens its impact as volume increases, petering out entirely somewhere around 33% of maximum volume.

- For the most "natural" perceived sound, you don't try to get as flat a frequency response as possible but instead aim for a Harman curve.

- Bass frequencies (<110Hz, depending on who you ask) are omnidirectional, which means we cannot accurately perceive which direction the sound is coming from. Subwoofers exploit this fact, making it seem as if deep rich bass is coming from your puny soundbar and not the sub hidden behind the couch :).

By @Animats - about 17 hours
This is part of "tone mapping"[1] in high dynamic range rendering. The idea is that pixels are computed with a much larger range of values than screens can display: 16 bits per color per pixel, or even a floating point value. Then, to generate the displayed image, there's a final step where the pixel values are run through a perceptual transformation to map them into 8-bit RGB (or more, if the hardware is available).

This has issues. When you go from a dark space to a bright space, the eye's iris stops down. But not instantaneously. It takes a second or two. This can be simulated. Cyberpunk 2077 does this. Go from a dark place in the game to bright sunlight and, for a moment, the screen becomes blinding, then adjusts.

In the other direction, go into a dark space, and it's dark at first, then seems to lighten up after a while. Dark adaptation is slower than light adaptation.
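The two ideas in this comment, range compression plus gradual, asymmetric eye adaptation, can be sketched together. The operator and the speed constants below are illustrative, not Cyberpunk's actual pipeline:

```python
import math

def reinhard(luminance):
    """Classic Reinhard operator: compress HDR luminance [0, inf) into [0, 1)."""
    return luminance / (1.0 + luminance)

def adapt_exposure(current, target, dt):
    """Exponentially approach the target exposure over a frame of dt seconds.

    The asymmetric speeds (illustrative values) mimic the eye: adapting to
    brightness is fast, dark adaptation is much slower, so a tunnel exit
    briefly blinds while a tunnel entrance stays murky for longer.
    """
    speed = 3.0 if target > current else 0.5
    return target + (current - target) * math.exp(-speed * dt)
```

Each frame the renderer would scale scene luminance by the adapted exposure before applying the curve, reproducing the momentary blinding/murky transitions described above.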

Tone mapping is not just an intensity adjustment. It has to compensate for the color space intensity problems the OP mentions. Human eyes are not equally sensitive to the primary colors.

Some visually impaired people hate this kind of adjustment, it turns out.

Here's a clip from Cyberpunk 2077.[2] Watch what happens to screen brightness as the car goes into the tunnel and then emerges into daylight.

[1] https://en.wikipedia.org/wiki/Tone_mapping

[2] https://youtu.be/aWlX793ACUY?t=145

By @kookamamie - about 18 hours
See Oklab colorspace for an attempt at fairer perceptual brightness: https://bottosson.github.io/posts/oklab/
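For reference, the forward transform from linear sRGB to Oklab is short; the coefficients below are the ones published in Björn Ottosson's post linked above:

```python
import math

def linear_srgb_to_oklab(r, g, b):
    """Convert linear sRGB (0..1) to Oklab (L, a, b).

    Matrix coefficients are taken from Björn Ottosson's Oklab post.
    """
    # Linear sRGB -> approximate cone responses (LMS)
    l = 0.4122214708 * r + 0.5363325363 * g + 0.0514459929 * b
    m = 0.2119034982 * r + 0.6806995451 * g + 0.1073969566 * b
    s = 0.0883024619 * r + 0.2817188376 * g + 0.6299787005 * b
    # Cube-root nonlinearity (copysign keeps out-of-gamut negatives sane)
    l_, m_, s_ = (math.copysign(abs(x) ** (1 / 3), x) for x in (l, m, s))
    return (
        0.2104542553 * l_ + 0.7936177850 * m_ - 0.0040720468 * s_,
        1.9779984951 * l_ - 2.4285922050 * m_ + 0.4505937099 * s_,
        0.0259040371 * l_ + 0.7827717662 * m_ - 0.8086757660 * s_,
    )
```

Note that even Oklab assigns pure red an L of about 0.63, so it still does not model the Helmholtz-Kohlrausch boost the article is after.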
By @PaulHoule - about 11 hours
There's the strange phenomenon that people like to say "bright red" even though it is something of an oxymoron.

#ff0000 is, in terms of brightness, pretty dark compared to #ffffff, yet there is a way it seems to "pop out" psychologically. It is unusual for something red to really be the brightest color in a natural scene unless the red is something self-luminous, like an LED on a dark night.

By @weinzierl - about 18 hours
"Unfortunately, I haven’t been able to find any perceptually uniform color spaces that seem to include these transformations in the final output space. If you’re aware of one, I would love to know."

OSA-UCS takes the Helmholtz-Kohlrausch effect into consideration.

By @boulos - about 5 hours
Amy Gooch's Color2Gray and various follow-on work has better coverage of the OP's actual goal:

> evaluate relative brightnesses between art assets, and improve overall game readability

The method in Color2Gray is trying to enhance salience, but the paper does a good job of comparing the problems (including red / blue examples in particular).

Like other commenters, I think oklab would look better than CIELAB on the example given in the OP. https://bottosson.github.io/posts/oklab/#comparison-with-oth... and the Munsell data below it show it to be a lot more uniform than either CIELAB or CIELUV.

By @vanderZwan - about 12 hours
So as a guy with protanomaly, the biggest shock for me when I got my colorlite glasses¹ was that the traffic signs with bright blue and dark red colors suddenly looked dark(er) blue with very bright red. I asked my (normal vision) friends how they experienced those traffic signs and it was the latter. The lenses corrected for that.

It was actually quite shocking how much more sense most color choices in art and design made to me, which was a much bigger reason for me to keep wearing the glasses than being able to distinguish red, green and brown better than before. The world just looks "more balanced" color-wise with them.

While it was very obvious early on in my life that I experienced most green, red and brown colors as ambiguously the same (I did not know peanut butter was not green until my early thirties), the fact that there also were differences in perceptual brightness had stayed completely under the radar.

¹ And yes, these lenses do work, at least for me. They're not as scummy as EnChroma or other colorblind-"correcting" lenses; for starters, you can only buy them after trying them out in person with an optometrist, who tests which type of correction you need at which strength. Ironically their website is a broken mess that looks untrustworthy[0], and their list of scientific publications doesn't even show up on Google Scholar, so they probably have near-zero citations[1]. But the lenses work great for me.

[0] https://www.colorlitelens.com/

[1] https://www.colorlitelens.com/color-blindness-correction-inf...

By @andrewla - about 7 hours
It seems that this would be well-suited to a simple online test: show a square with one color and a square inside of that with a different color, and ask the user whether the inner square is brighter (or too close to call). Aggregate this across users and assess the fit to CIELAB or other color spaces. It seems like you could get almost all HN users to take a stab at this for a bit before they get sick of it.
By @qwertox - about 17 hours
If this submission has put you into a mindset of wanting to know more about colormaps, here's a good video about how Matplotlib ended up having `viridis` as its default colormap:

A Better Default Colormap for Matplotlib | SciPy 2015 | Nathaniel Smith and Stéfan van der Walt https://www.youtube.com/watch?v=xAoljeRJ3lU

By @seanwilson - about 15 hours
> This process means that there is some error. For example, ideally, a color with L=50 looks twice as bright as a color with L=25. Except, with very strongly saturated colors like red, this isn’t actually the case in any of these color spaces.

A benefit of doing it this way is you account for color blindness and accessibility e.g. all colors at L=50 will have the same WCAG contrast ratio against all colors at L=25. This helps when finding colors with the contrast you want.
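The WCAG contrast ratio this comment relies on is easy to compute directly; a sketch using the WCAG 2.x formulas, with 8-bit sRGB inputs:

```python
def relative_luminance(r, g, b):
    """WCAG 2.x relative luminance of an sRGB color, channels in 0..255."""
    def lin(c):
        c /= 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b)

def contrast_ratio(rgb1, rgb2):
    """WCAG contrast ratio, from 1:1 (identical) up to 21:1 (black on white)."""
    l1, l2 = relative_luminance(*rgb1), relative_luminance(*rgb2)
    lighter, darker = max(l1, l2), min(l1, l2)
    return (lighter + 0.05) / (darker + 0.05)
```

Since the ratio depends only on the two luminances, any two colors with equal luminance-derived lightness do have identical contrast against a third color, which is the property HSLuv exploits.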

Related, I'm working on a color palette editor based around creating accessible palettes where I use the HSLuv color space which has the above property:

https://www.inclusivecolors.com/

You can try things like maxing out the saturation of each swatch to see how some hues get more bold looking at the same lightness (the Helmholtz-Kohlrausch effect mentioned in the article, I think). You can also explore examples of open source palettes (Tailwind, IBM Carbon, USWDS), where it's interesting to compare how they vary their saturation and lightness curves per swatch, e.g. red-700 and green-700 in Tailwind v3 have different lightnesses but are the same in IBM Carbon (the "Contrast > View colors by luminance only" option is interesting to see this).

By @mark-r - about 13 hours
There's another problem nobody ever talks about. The way our RGB monitors work, some colors will be more intense than others at the same brightness. A red or blue patch will only emit half as many photons as a magenta patch, because magenta includes both red and blue. The non-linear response of the eye helps here, but not completely.
By @leoc - about 11 hours
This is something photographers and filmmakers had to (and sometimes still have to) deal with when shooting on black-and-white film, surely? (Though maybe B&W film sometimes has noticeably different responses to light at different visible frequencies, which might happen to counter this, or to heighten it?) There’s a long history of using colour lens filters with B&W film.
By @mxfh - about 15 hours
Desaturation methods are tricky as well. So is the transformation of image information on its way to the display, and the display hardware's characteristics themselves.

Accurate color reproduction on uncalibrated consumer devices is just wishful thinking and will not be fixed in the foreseeable future.

So unless you work in a color controlled and calibrated environment it's hard to make any reliable statements about perception.

I simply would not worry too much about optimizing perceptual color spaces at this point.

https://library.imaging.org/cic/articles/31/1/36

By @hatthew - about 7 hours
Hmm, the red definitely looks more vivid to me, but I'm not sure I would say it's lighter. In an HSL-type colorspace I would say it has more S, but not more L.
By @RealStickman_ - about 16 hours
For a slightly humorous explanation and exploration of color spaces, I highly recommend this video.

https://youtu.be/fv-wlo8yVhk

By @Aeolun - about 17 hours
LCH works pretty well for perceptually uniform. At least in my experience.
By @_kb - about 14 hours
For some adjacent work in the analogue domain, there's an exceptional set of paintings here that play with the fuzziness of this perception: https://www.pstruycken.nl/EnS20.html
By @kevingadd - about 18 hours
I've definitely struggled with this trying to do generalized stuff with colors in game visuals and UI. Using a color space like cielab or okhsl helps some but it's tough to come up with values/formulas that work just as well for red as they do for yellow, blue and purple.
By @refulgentis - about 10 hours
Not even wrong, in the Pauli sense: lightness is not brightness, red is dark in terms of lightness, you're looking for chroma to be represented in lightness which would make it not-lightness. Anything different would result in color blind people seeing it differently.
By @viggity - about 11 hours
I once had an app that used a lot of colored bars with text labels on top of them, and I wanted a programmatic way to determine if a color should use black text or white text because R+G+B over some threshold was not working. I stumbled upon the YIQ color space where Y is the perceived luminance. Under 128 got a white text, over 128 got black text. Worked like a charm.

Y = 0.299 * R + 0.587 * G + 0.114 * B
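The rule in this comment can be sketched directly (a sketch of the same heuristic, inputs assumed to be 8-bit channels):

```python
def text_color_for(r, g, b):
    """Pick black or white label text for a background color (0..255 channels).

    Uses the YIQ luma Y as perceived brightness: light backgrounds
    (Y >= 128) get black text, dark backgrounds get white text.
    """
    y = 0.299 * r + 0.587 * g + 0.114 * b
    return "black" if y >= 128 else "white"
```

Note that pure red comes out around Y = 76, so it gets white text, consistent with the article's point that red's computed brightness is low even though it looks vivid.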

By @bnetd - about 11 hours
Yes, don't confuse color, chroma, and value.

> Unfortunately, I haven’t been able to find any perceptually uniform color spaces that seem to include these transformations in the final output space. If you’re aware of one, I would love to know.

Traditional painting.

Also, to the author on the same blog, came across this: https://johnaustin.io/articles/2023/how-we-can-actually-move...

Get off the internet.