Gamut Mapping Part 2: Getting to the Display

Yes, that is what I was attempting to demonstrate in my above render.

Hi Derek,
thanks for downloading the images and using them for further tests.

Before your post you asked me this:
“Am I understanding correctly that the issue you are observing (Red spheres appearing less bright than white ground) is with the renders not with the RED footage? If that is correct are you able to reproduce the same with a camera?”

I have been thinking for some days about how to answer that question.
In my post I used a very simple and graphical 3D generated example that shows an issue:
Even though I raise the exposure of the overall image, at some point the red spheres appear darker than their surroundings. And this feels odd.

At first I did not like this simple rendering; that’s why I created the other rendering with the red bulbs. That result looks more similar to the Red XMas footage. But the image also has more detail that distracts the viewer (and myself, of course) from the issue. It is less obvious.

Next, you suggested avoiding very pure and intense values in 3D animation to avoid the image results you are seeing in the red spheres example. It’s true; I think such intense pure primary colors could only be achieved with a laser.
I set up the red spheres example in such a simple way so that I would not get distracted. If you just take the Red XMas footage and examine it, there is so much going on in the image and in the process of how it was captured and is now displayed that you might miss what is happening. I certainly did.

I needed this simplified example to realize that maybe something else is going on that I was not aware of.

Still thinking about how to answer your question:
On Friday morning, on the way to work, I crossed a street and saw a gas station with red LED lights displaying the gasoline prices. I pulled out my iPhone and took some photos with a manual exposure override in the default Photos app. I know there is a lot of processing happening in images from an iPhone, but I still gave it a try. I took four photos, simply by raising the exposure slider in the app. Later I aligned the photos in Affinity Photo so that they are easier to compare.

Inspect the red lights of the gas prices:

I suspect the same thing is happening in these photos as in the red spheres rendering:

  • In the darkest exposure the red numbers appear to be the brightest element in the photo.
  • In the second exposure I could argue that the bike lights already feel brighter than the red numbers.
  • The third exposure feels to me like a fairly normal exposure of how I saw the scene that morning, but the numbers no longer feel like the brightest element in the photo, as they did in the first two exposures. Yet to my eye they were clearly the brightest element in my view. That’s why I took these photos!
  • And in the brightest exposure I could argue that the red numbers actually appear darker than most of the surrounding scene. To me, the yellow sign or the green leaves of the tree now appear to be the brightest element.

To answer your question: yes, I can see the problem in other footage too.
As I raise the overall exposure of a scene, at a certain exposure some colors start to appear darker than their surroundings, even though, as you can see in the darkest exposure, the red lights are very bright.

Maybe someone can share some more thoughts on that?




Super interesting, thanks for the images! I’m curious how much of this appearance phenomenon comes from the software image processing the photos went through on the iPhone, and how much from the Helmholtz-Kohlrausch effect in our visual system.

Dev Update

I’ve pushed a few more changes to my open-display-transform git repo which is now at v0.0.81b3.

  • Add an alt version of the DCTL with more user parameters exposed. This may be useful for expert users who wish to play around with the various parameters.
  • Simplify and reduce the model, which allows for continuously varying adjustment of the curve based on a single parameter representing white luminance Lw (see this post for more details).
  • Add quasi-perceptual highlight dechroma (as I’m now calling “path to white”), or hue-bias. This is mainly to counteract Abney effects, with blues turning magenta and reds turning pink as they are dechroma’d. It helps a little to resolve perceptual differences between SDR and HDR appearance… though it’s definitely not perfect.
  • Little tweaks to the norm weights.
  • Remove the piecewise hyperbolic compression function. The appearance is pretty similar with correctly adjusted parameters, and I don’t think the extra complexity is worth it.
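As a rough illustration of the kind of single-parameter curve described above (the actual repo is DCTL; this is a toy Python sketch, and the scaling choice is an assumption, not OpenDRT’s real formula), a simple hyperbolic tonescale whose shoulder varies with peak white luminance Lw could look like:

```python
def tonescale(x, Lw=100.0):
    """Toy hyperbolic tonescale: maps scene-linear x >= 0 into [0, 1).

    The single parameter Lw (peak white luminance in nits) shifts the
    shoulder: higher-Lw displays compress highlights less. Illustrative
    sketch only, not OpenDRT's actual curve.
    """
    s = 100.0 / Lw  # assumed scaling: knee moves inversely with Lw
    return x / (x + s)
```

With these assumed numbers, scene-linear 1.0 maps to 0.5 at Lw = 100, but to about 0.91 at Lw = 1000, leaving more room above mid-grey before the shoulder engages.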


Thanks for posting these; they really help to further illustrate the phenomenon. I also have been wandering about with my camera looking at colored lights! This is really fascinating stuff!

If I’m understanding correctly, this is not something you are seeing specific to ACES per se, but really speaks to the inherent limitations of a camera to capture what our eyes see. Is that right?

More specifically, I think the issue is related to the limitations of photography when capturing practical lights in film. That is, where there is a practical light (a lamp, for instance), cinematographers have to do lots of workarounds: putting dimmers on lights so they will not blow out, and then lighting the subject with other lights off camera, motivated by the practical light that is now too dim to have much effect on the subject. In other words, it takes a lot of extra lights to make practical lights (a lamp shade, a camp fire) appear in a photograph the way they do to our eyes. I think that is even more the case with colored lights, where it is often not feasible to put lights on a dimmer (a neon sign, or car lights in traffic); instead the exposure needs to be darkened, or the surroundings do. For example, in this shot from Rain Man, the black surround of the traffic lights helps them appear bright, in contrast to the white background of your sign.

In other words, I’m wondering if what you are observing is a current limitation on cameras that one needs to work around?

I’m curious whether you could create in Photoshop an image of this scene that looks the way it did to your eye? That is, is it possible to represent this on an SDR display at all? Along the same lines, are you able to find an example in a movie still of what you are wanting to see: a red light that looks brighter than its surrounding white background?


Here are some test images with OpenDRT 0.81b3. FWIW I have the saturation at 1.2 for all of these cause I like it that way. :slight_smile:

The highlight dechroma (cool name) looks pretty sweet.

[updated to AP0. The OpenDRT also has a gamma node before it set to 0.85]

Skin tones are looking nice in comparison to the “ghoulish green” skin tones of the ACES RRT. The OpenDRT feels a lot more faithful to the texture colors (in this case in sRGB primaries) as well as to the subject’s skin tone.


Yeah you’re not wrong. I actually boosted the default saturation for SDR to 1.2 (and reduced saturation to 1.0 for HDR) in my latest commits to compensate a bit better for the Hunt Effect, which is actually quite visible in these SDR vs HDR tests that I did.


In the tests that I was doing I saw it with inv. EOTF (“sRGB gamma”), ACES and OpenDRT. So yes, it is not specific to ACES at all.

And no: if it were a camera limitation, then the Red XMas footage would also show “red bulbs”. But since the sensor was fully saturated, the bulbs can only be shown at the display’s RGB maximum.

I think it is important to separate the issues and look at them one by one.


Here’s a sign showing something similar. Because of the exposure the colored numbers do appear brighter than the sign’s white background, but the green appears brighter than the red because the red is clipping.

If I draw in Photoshop something approximating what my eye sees I get this:

I’m making the red act more like the green in that it is retaining saturation, doing a “bokeh” on the edge, and a “highlight dechroma” at the core. That is, it was not doing any of these things in the photo, so I Photoshopped it.

It would appear to me that red acts differently from green or yellow. Green and yellow appear brighter, while red and blue (pure indigo blue) appear dark. If I understand correctly, @jedsmith has therefore made his DRT, in its latest incarnation, dechroma red and blue more to address this, so they will better approximate what our eye sees. Here’s an example of that on red using @ChrisBrejon’s render. Note it clips and goes magenta in 81b1 but goes to white in 81b3.

Question for @jedsmith: should green also be going to white as well? The idea being that all colors would behave similarly in their dechroma?
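To make the question concrete, here is a toy Python sketch of what hue-biased dechroma means: pull RGB toward its max-channel norm, with a per-primary gain so red and blue collapse toward white faster than green. The gains and the blending scheme are purely illustrative assumptions, not OpenDRT’s actual parameters:

```python
def dechroma(rgb, base=0.6, hue_gain=(1.0, 0.5, 1.0)):
    """Toy hue-biased path-to-white.

    Blends each channel toward the max-channel norm. The gain is picked
    by which primary dominates, so red and blue (gain 1.0) desaturate
    faster than green (gain 0.5). Illustrative values only.
    """
    n = max(rgb)
    if n <= 0.0:
        return rgb
    s = base * hue_gain[rgb.index(n)]  # stronger pull for dominant R or B
    return tuple(c + (n - c) * s for c in rgb)
```

With these assumed gains, pure red `(1, 0, 0)` becomes `(1.0, 0.6, 0.6)` while pure green `(0, 1, 0)` only becomes `(0.3, 1.0, 0.3)`, i.e. green keeps more of its chroma on the way to white.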

Here is a test render with a light set to pure red (ACEScg primaries) and pure green, at different exposure values, viewed with OpenDRT v0.81b3. Is this behaving as expected/desired?

I would rather assume that the colored emissive numbers are in fact brighter than the white paint of the sign, because they actually emit light; the white sign only reflects the environment.

I somewhat doubt that you see a desaturated core in the red numbers with your eyes. I assume you saw the numbers as bright red.
Your interpretation, with the tools you have at hand, is to add more (green and blue) emission overall to the red numbers, because the red does not appear bright enough.
Although, at least on my iMac display, I can read out these values digitally:

It seems the red is not clipping yet.

Picture’s worth a thousand words. Give us a picture of what your eyes see.


It is absolutely fantastic to see this discussion, as it seems critically foundational to the most simple idea of a “tonality compression”.

One minor point I’d highlight:

While this is accepted and conventional wisdom, HKE / Evans actually defies this wisdom in an almost completely inverse manner: the percentages of influence differ for green-yellow versus reddish and bluish stimuli.

Since some folks have fallen into this rabbit hole, there’s a really simple test that can be conducted at home using a display referred digital content creation application of your choosing:

  1. Fill the canvas with a middling grey value. Somewhere around 46% code value.
  2. Put a fully emissive BT.709 or Display P3 square on the screen.
  3. Place an achromatic swatch adjacent to it, with a gap in between.
  4. Try to match that maximal emission red with the achromatic value to create a “sensation of brightness” that matches it.
  5. Rinse and repeat for fully emissive pure blue light in BT.709 / Display P3.

The first glaring effect will be “Wow this is damn challenging”, which is reflected in the experiments done on this front from Wyszecki & Stiles all the way up to more recent testing. Spatial matching is helluva challenging!

However, assuming one accepts the challenge, it should end up being a rather mind popping experiment, especially if one works backwards and takes the resultant code values and calculates how much achromatic light emission is required to create an “equivalent” sensation of highly chrominous emissions.
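Working backwards can be done with plain BT.709 luminance weights. The sketch below (assuming linear light and the standard BT.709 coefficients) shows that a fully emissive red patch measures only about 21% of peak luminance, so an achromatic patch that matches it photometrically sits far below white; the brightness match observers actually make lands well above that:

```python
REC709_Y = (0.2126, 0.7152, 0.0722)  # BT.709 luminance coefficients

def luminance(rgb_linear):
    """Photometric luminance of a linear BT.709 triplet (peak white = 1.0)."""
    return sum(w * c for w, c in zip(REC709_Y, rgb_linear))

red_Y = luminance((1.0, 0.0, 0.0))  # fully emissive red: ~0.2126
gray = (red_Y,) * 3                 # achromatic patch of equal luminance
# Photometrically identical patches -- yet the red one typically *looks*
# far brighter, which is exactly what the matching exercise exposes.
```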

This all nicely ties up with the idea that tristimulus “colour management” using discrete three light “mapping” via curves might not work.

It should also call into question the loose tossing around of terms like “nits”; remember that a nit is a measure of luminance, tied exclusively to the singular achromatic R=G=B case. That means that any attempt at compressing tonality shifts dramatically as the chrominous component deviates away from achromatic.

Food for thought.


A naive question: should the “path to white” for red light possibly be red > yellow > white, rather than red > pink > white?

I know that’s exactly what people have been trying to avoid (along with blue going to magenta), but I’m wondering whether, following HKE, the pink feels wrong because we perceive pink as less bright than saturated yellow?

I would not be so confident that this is as simple as the way it is framed.

Oh I’m not confident at all about saying that! I get the sense that the whole issue is quite complicated. Can you help unpack it?

Just to be 100% clear on this specific point: we try to avoid it as a default, un-modifiable look embedded in the output transform. We need some kind of ground truth (a chromaticity-linear thingy) to have a neutral base, and then apply whatever look you may find suitable to your project/taste. I am very fond of this idea.




And the fact that the whole perceptual fire mumbo jumbo is typically just a bunch of device dependent hand waving rubbish that is completely ahistorical.

At best it’s an aesthetic flourish. At worst, it’s a nightmare trend due to mishandling digital RGB colour for at least two decades.

Here’s a pic; note the tail lights on the car going ever so slightly towards yellow. The visual effect is that they appear brighter than pure red while maintaining saturation. I sort of doubt that James Cameron chose this intentionally as a “look”; I rather think this is just what the camera did. It’s how the camera represents (within the limitations of the medium) what the human eye sees. Is this saturated hue shift the “right” way to represent this? Is a dechroma to pink the right way? Since it’s all simplifications/approximations/translations of what our eye sees, there’s not really a “right” answer.

@Chris, I hear you on the need for a neutral starting point! I also think the idea of giving the image-maker control to get the image they want is a really good approach to take, and so fully agree on that point too!

Regarding the practicality of applying this “red-increases-in-perceptual-luminescence-with-a-hue-shift-before-going-to-white look” (or RIPLWAHSBGTW look for short), it’s not really clear to me how an indie filmmaker without a color scientist would do that. Can you say more?


You sure? Is that a bad transfer? Remember that film has to be projected and then scanned by a digital sensor, which takes us right back to the start of skews.

Ha! No, I’m really not sure at all! I’m just naively asking questions. Appreciate your wisdom and insight!

I would not call it wisdom, just having suffered through too many digital scans.

Here is a simple example of the identical sequence, iterated / scanned / manipulated in subtly different ways. Which one represents the print stimulus that would end up projected on the wall?