Aesthetic intent - human perception vs filmic reproduction

Thank you for pointing that out and helping make the distinction clear. While they may yield the same result, they have different intentions and it’s important to separate them; I know I’ve been guilty of lumping them together in some of these conversations.

Assuming the source colors were intended to stay fully saturated, just at increased luminance, hypothetically the right side would land at the display max, and everything in-between would get remapped between that and the darkest chip. This strategy of course falls apart if the brightest chip is 1,000% of the dark one and you can’t represent the ratios appropriately.
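
To make that remapping concrete, here is a minimal sketch with made-up chip luminances (the values and the 1.0 display max are just placeholders):

```python
import numpy as np

# Hypothetical linear chip luminances; only their ratios matter here.
chips = np.array([0.18, 0.36, 0.72, 1.44, 2.88])
display_max = 1.0

# Pure scaling: the brightest chip lands exactly at display max, and
# every other chip keeps its ratio to it.
remapped = chips / chips.max() * display_max
print(remapped)  # [0.0625 0.125  0.25   0.5    1.    ]
```

This works while the bright/dark ratio fits within the display's contrast range; once it doesn't, the dark end gets crushed, which is exactly the failure described above.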

Just talking out loud here (or typing out loud?): I’m trying to wrap my head around whether, and why, color luminance should be treated differently from achromatic luminance (greyscale; what we typically define as dynamic range: black to white). We have strategies for mapping luminance to both standard and high dynamic range displays: some form of gamma curve is applied and the values eventually get clipped at a pre-determined level. When dealing with black and white, we knowingly accept that some values (especially high luminance) will get clipped. With color, however, I guess the issue is that any clipping usually looks bad? Looking at the ACES render of the blue pub/bar, the entire staircase area turns into just a blue blob. I really don’t think anyone will say it looks good OR that it looks like the original scene. But then I wonder if it could have been graded back into range to maintain the detail. I don’t know.
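
To illustrate that “blue blob” failure, here is a toy example (made-up scene-linear values) of what per-channel clipping does to a saturated blue gradient:

```python
import numpy as np

# Three made-up scene-linear blues from the staircase area: same hue,
# increasing luminance, with the blue channel well above display range.
blues = np.array([
    [0.05, 0.10, 1.5],
    [0.08, 0.16, 2.4],
    [0.12, 0.24, 3.6],
])

clipped = np.clip(blues, 0.0, 1.0)
print(clipped)
# Every row now has B = 1.0: the channel carrying most of the detail has
# flattened, so three distinct steps collapse towards one blue blob.
```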

Another thing I had rattling around in my head today is whether part of our perception of white being brighter than a color has to do with current display technologies (and even print). Most current displays use an additive mix of red, green, and blue. So yellow (a mix of red and green) will be brighter than either of the single colors because you basically have two lights on instead of one. White is the combination of all three, so it is the brightest. However, there is some newer technology like WOLED that has a discrete white light source. In that case white is not a mixture of colors, so feasibly a fully saturated red could have the same luminance as white. Would we still choose a path to white if it is no brighter than a color?
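
The additive point is easy to check with the Rec.709 relative-luminance weights:

```python
# Rec.709 relative-luminance weights for linear R, G, B.
R, G, B = 0.2126, 0.7152, 0.0722

print("red   :", R)          # 0.2126 -- one emitter on
print("green :", G)          # 0.7152
print("yellow:", R + G)      # 0.9278 -- two emitters on, brighter than either
print("white :", R + G + B)  # 1.0    -- all three on, the brightest mix
```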

You need to have a defined maximum value at the input, with everything above that clipping. From that defined maximum you map it to the display, just like greyscale. The problem with this approach of course is these clipped values. Frankly, I don’t know what to do with them, other than to grade them back “into range” before you ever hit the transform. Is it feasible? Does it look good? I have no idea at this stage.
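
As a sketch of that “defined input maximum, then map like greyscale” idea (the maximum and gamma below are arbitrary placeholders, not a proposal):

```python
import numpy as np

def to_display(scene_linear, input_max=16.0, display_gamma=2.4):
    """Clip scene-linear values at a chosen input maximum, then map that
    maximum to display peak, just as we would for greyscale."""
    clipped = np.minimum(scene_linear, input_max)  # everything above clips
    normalized = clipped / input_max               # defined max -> 1.0
    return normalized ** (1.0 / display_gamma)     # simple display encoding

print(to_display(np.array([0.18, 1.0, 16.0, 100.0])))  # 100.0 clips to peak
```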

I’ll argue with myself here for a moment. What if we had a magic camera that could actually capture these incredibly luminant (spell check tells me that’s not a real word, ha), but very saturated light bulbs? How would we ever hope to display that correctly? Right now we couldn’t. In the case of a (relative) chromaticity-preserving transform, do they just become red circles? What about the rest of the red in the image? Does it stay brilliant/saturated like it is, or should it be lowered to help preserve the ratio of how bright these bulbs are? Do they take a path to white? These are also questions I don’t have answers to.

Likewise! Makes me wish I had Nuke AND knew how to use it.

I would highly recommend that you watch this awesome talk by @daniele which goes into the spectral characteristics of reflective surface colors, and explains the physics of why a surface color with more color purity appears darker.
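
The gist of that “purer means darker” physics can be sketched numerically: a purer surface color reflects a narrower band of wavelengths, so less of its reflectance overlaps the eye’s luminous-efficiency curve. The curve below is a crude Gaussian stand-in for V(lambda), not the real CIE data:

```python
import numpy as np

wl = np.arange(400.0, 701.0, 10.0)  # wavelengths in nm

# Crude Gaussian stand-in for the luminous-efficiency curve V(lambda),
# peaking near 555 nm (an assumption, not the real CIE table).
V = np.exp(-0.5 * ((wl - 555.0) / 50.0) ** 2)

desaturated_red = np.where(wl > 580, 0.9, 0.2)   # broad reflectance band
pure_red        = np.where(wl > 620, 0.9, 0.02)  # narrower, purer band

for name, refl in (("desaturated red", desaturated_red),
                   ("pure red", pure_red)):
    Y = (refl * V).sum() / V.sum()  # relative luminance
    print(name, round(Y, 3))        # the purer red comes out darker
```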

All of the Nuke setups I’ve posted work fine in the free Nuke Non-Commercial. Knowing how to use it is another challenge! :slight_smile: I will say you don’t need to know the full software to be able to compare images and play around with view transforms though.


Thanks Jed for the explanations! 100% agreed on gamut mapping and its importance.

Interesting to observe that the term “gamut compression/mapping” is nowhere to be found in the paper Dropbox from the Output Transform VWG. Is this intentional? :stuck_out_tongue:

I double-checked in the aces_rae PDF from 2017 and the term is indeed present (in the D. Artifacts paragraph):

Gamut compression or mapping based on IPT or ICtCp color spaces would be a fruitful research axis to overcome these issues.

On the other hand, the term “gamut mapping” is mentioned in the Background Information Dropbox paper:

This is a simple, fast gamut mapping algorithm. It maps RGB values onto the 0 to 1 cube using line/plane intersection math which has been optimized to take advantage of the fact that the planes are the [0,1] cube faces. Out-of-gamut points are mapped towards a value on the neutral axis. If the RGB values are linear tristimulus values for arbitrary RGB primaries then the algorithm preserves dominant wavelength on a chromaticity diagram. It also preserves hue in the HSV sense. Light out-of-gamut colors are darkened as they approach the gamut, while dark colors are lightened (i.e. some lightness is traded off to preserve chroma). There are certainly many more sophisticated algorithms for gamut mapping, but this is simple, fast, robust, and useful as a point of comparison.
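
For what it’s worth, here is a rough Python sketch of my reading of that description (the neutral-axis anchor choice below, the mean of the channels, is my assumption, not taken from the paper):

```python
import numpy as np

def map_to_cube(rgb, anchor_luma=None):
    """Slide an out-of-gamut linear RGB point towards a point on the
    neutral axis until it lands on a face of the [0,1] cube."""
    rgb = np.asarray(rgb, dtype=float)
    if np.all((rgb >= 0.0) & (rgb <= 1.0)):
        return rgb  # already inside the gamut cube
    y = np.clip(anchor_luma if anchor_luma is not None else rgb.mean(), 0.0, 1.0)
    anchor = np.array([y, y, y])   # point on the neutral axis
    direction = rgb - anchor
    # Largest t in (0, 1] keeping anchor + t*direction inside every face.
    t = 1.0
    for d, a in zip(direction, anchor):
        if d > 0.0:
            t = min(t, (1.0 - a) / d)  # intersection with a face at 1
        elif d < 0.0:
            t = min(t, (0.0 - a) / d)  # intersection with a face at 0
    return anchor + t * direction

print(map_to_cube([1.4, -0.2, 0.1]))  # out-of-gamut red, pulled towards grey
```

Moving along a straight line towards the neutral axis in linear RGB keeps the HSV hue unchanged, which matches the quoted behaviour.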

Maybe it would be nice to see how this algorithm handles our dataset?

Garrett, at this point I don’t have anything further to add to the debate, so I’ll just watch Daniele’s video and try to learn something. :wink: Thanks for your answers and patience!

Update: sorry, this is too interesting! Minute 16:25 of the video:

Film produces the colours very much in a similar way to how nature does it.

Boom, my mind just exploded! :wink: I’ll keep watching!

Regards,
Chris