Thank you for pointing that out and helping make the distinction clear. While they may yield the same result, they have different intentions and it’s important to separate them; I know I’ve been guilty of lumping them together in some of these conversations.
Assuming the source colors were intended to stay fully saturated, just at increased luminance, hypothetically the right side would land at the display max, and everything in between would get remapped between that and the darkest chip. This strategy of course falls apart if the brightest chip is 1,000% the luminance of the dark one and you can't represent the ratios appropriately.
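To make the idea concrete, here's a minimal sketch of that remapping, under the assumption that it's just a linear scale: the brightest "chip" is pinned to display max and everything below it keeps its ratio. The function name and sample values are my own, purely illustrative.

```python
# Illustrative sketch only: scale a set of scene luminances so the
# brightest chip lands exactly at display max, preserving the ratios
# between chips. Names and values are hypothetical.

def remap_preserving_ratios(scene_lums, display_max=1.0):
    """Linearly scale scene luminances so the brightest hits display_max."""
    brightest = max(scene_lums)
    scale = display_max / brightest
    return [lum * scale for lum in scene_lums]

chips = [0.1, 0.4, 2.0]   # darkest chip .. brightest chip, scene-linear
mapped = remap_preserving_ratios(chips)
print(mapped)             # brightest chip now sits at display max (1.0)
```

This also shows where the approach strains: if the brightest chip is 10x the darkest, the dark chip ends up at a tenth of display max, which may be too low to read as the color it was.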
Just talking out loud here (or typing out loud?): I'm trying to wrap my head around whether or why color luminance should be treated differently from achromatic luminance (greyscale; what we typically define as dynamic range: black to white). We have strategies for mapping luminance to both standard and high dynamic range displays: some form of gamma curve is applied and eventually the values get clipped at a pre-determined level. When dealing with black and white, we knowingly accept that some values (especially high luminance) will get clipped. With color, however, I guess the issue is that any clipping usually looks bad? Looking at the ACES render of the blue pub/bar, the entire staircase area turns into just a blue blob. I really don't think anyone would say it looks good OR that it looks like the original scene. But then I wonder if it could have been graded out/back to maintain the detail. I don't know.
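A toy version of that "blue blob" effect is easy to demonstrate: gamma-encode, clip each channel at display max, and a graded run of bright saturated blues collapses to a single flat value. The gamma value and pixel values here are illustrative assumptions, not the actual ACES transform.

```python
# Toy illustration of the clipping problem: a simple gamma curve with
# a hard clip at display max. The gamma and pixel values are made up;
# this is not the ACES transform, just the general shape of the issue.

def display_encode(rgb, gamma=2.2, display_max=1.0):
    """Gamma-encode scene-linear RGB and clip each channel at display max."""
    return tuple(min(c ** (1.0 / gamma), display_max) for c in rgb)

# Three distinct scene-linear blues of increasing intensity (a "staircase"):
staircase = [(0.0, 0.0, 1.5), (0.0, 0.0, 2.5), (0.0, 0.0, 4.0)]
encoded = [display_encode(px) for px in staircase]
print(encoded)  # all three clip to (0.0, 0.0, 1.0) -- the gradient is gone
```

With greyscale, the same clip happens, but an all-channels clip to white reads as "bright"; a single-channel clip to pure blue reads as a flat wrong-looking patch.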
Another thing I had rattling around in my head today is whether part of our perception of white being brighter than a color has to do with current display technologies (and even print). Most current displays use an additive mix of red, green, and blue. So yellow (a mix of red and green) will be brighter than either of the single colors because you basically have two lights on instead of one. White is the combination of all three, so it is the brightest. However, there is some newer technology like WOLED that has a discrete white light source. In that case white is not a mixture of colors, so feasibly fully saturated red could have the same luminance as white. Would we still choose a path to white if it is no brighter than a color?
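The additive-mixing point can be put in numbers using the Rec. 709 relative-luminance weights (those coefficients are real; the rest is a simplified sketch that ignores actual display behavior):

```python
# The Rec. 709 weights for converting linear RGB to relative luminance.
# These are real published coefficients; the example itself is just a
# sketch of the additive-mixing argument, ignoring real display details.

REC709 = (0.2126, 0.7152, 0.0722)

def rel_luminance(rgb):
    """Relative luminance of a linear RGB triplet."""
    return sum(w * c for w, c in zip(REC709, rgb))

red    = rel_luminance((1, 0, 0))  # ~0.21, one "light" on
green  = rel_luminance((0, 1, 0))  # ~0.72
yellow = rel_luminance((1, 1, 0))  # ~0.93, two "lights" on
white  = rel_luminance((1, 1, 1))  # 1.0, all three on
print(red, green, yellow, white)
```

So on an additive RGB display, fully saturated red tops out at roughly a fifth of white's luminance, which is exactly the constraint a discrete white emitter wouldn't necessarily have.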
You need to have a defined maximum value at the input, with everything above that clipping. From that defined maximum you map to the display, just like greyscale. The problem with this approach, of course, is those clipped values. Frankly, I don't know what to do with them, other than to grade them back "into range" before you ever hit the transform. Is it feasible? Does it look good? I have no idea at this stage.
I’ll argue with myself here for a moment. What if we had a magic camera that could actually capture these incredibly luminant (spell check tells me that’s not a real word, ha), but very saturated light bulbs? How would we ever hope to display that correctly? Right now we couldn’t. In the case of (relative) chromaticity-preserving, do they just become red circles? What about the rest of the red in the image? Does it stay brilliant/saturated like it is, or should it be lowered to help preserve the ratio of how bright these bulbs are? Do they take a path to white? These are also questions I don’t have answers to.
Likewise! Makes me wish I had Nuke AND knew how to use it.