Re-Framing and Clarifying the Goals of an Output Transform

Welcome @christopher.cook!

Yes, the HVS is more sensitive to brightness changes than to chroma changes; this is the basis of chroma sub-sampling and of YUV, Y’CbCr, YCoCg & co.
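For illustration, here is a minimal numpy sketch of why Y’CbCr enables chroma sub-sampling (4:2:0-style): luma stays at full resolution while the chroma planes are averaged over 2×2 blocks. The BT.601 coefficients and the block-averaging scheme are just one common choice, not the only one:

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    # BT.601 luma and analog colour-difference signals.
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = (b - y) / 1.772
    cr = (r - y) / 1.402
    return np.stack([y, cb, cr], axis=-1)

def subsample_chroma(ycbcr, factor=2):
    # Keep luma untouched, average chroma over factor x factor blocks.
    out = ycbcr.copy()
    for c in (1, 2):
        plane = ycbcr[..., c]
        h, w = plane.shape
        small = plane.reshape(
            h // factor, factor, w // factor, factor
        ).mean(axis=(1, 3))
        out[..., c] = np.repeat(np.repeat(small, factor, axis=0), factor, axis=1)
    return out

img = np.random.rand(4, 4, 3)
ycbcr = rgb_to_ycbcr(img)
sub = subsample_chroma(ycbcr)
# Half the chroma samples are discarded, yet the image usually still
# looks fine because the HVS is much less acute for chroma.
```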

This sentence is interesting because, depending on how you read it, it is incorrect: in theory, if you design a system where the stimuli are generated in an “ideal” perceptually uniform space, e.g. “JzAzBz” or “CAM16-LCD” (quotes because they are not perfect), a difference of some delta units along the lightness axis should be perceptually equivalent to a difference of the same delta units along any other axis. Actually, no matter the vectors, provided they have the same length, an observer should not be able to perceive one pair as being more different than the other.
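To make that concrete, a toy sketch assuming a hypothetical, perfectly uniform space where perceived difference is simply the Euclidean distance between stimuli, so any two delta vectors of equal length yield the same delta E by construction:

```python
import numpy as np

def delta_e_uniform(a, b):
    # In an ideal perceptually uniform space, perceived difference
    # reduces to Euclidean distance.
    return np.linalg.norm(np.asarray(a) - np.asarray(b))

ref = np.array([0.5, 0.0, 0.0])  # arbitrary reference (lightness, a, b)

# Same-length deltas, one along lightness, one purely chromatic.
d = 0.1
pair_lightness = (ref, ref + np.array([d, 0.0, 0.0]))
pair_chroma = (ref, ref + np.array([0.0, d / np.sqrt(2), d / np.sqrt(2)]))

de_l = delta_e_uniform(*pair_lightness)
de_c = delta_e_uniform(*pair_chroma)
# Both distances equal 0.1: an observer in this idealised space should
# find the two pairs equally different.
```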

Goes back to what I just wrote: which metric is being used for your system?

Agreed! Unfortunately, in our case, the white luminance coming from the display will always be the sum of the luminances of the primaries, so unless we artificially limit peak white, each primary will always have less luminance than white.
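A quick back-of-the-envelope check with BT.709 luminance coefficients (the Y row of the RGB-to-XYZ matrix) shows the summation:

```python
# BT.709 luminance (Y) contributions of the red, green and blue primaries.
Y_r, Y_g, Y_b = 0.2126, 0.7152, 0.0722

# White is the additive mixture of the three primaries at full drive,
# so its luminance is the sum of the primary luminances.
Y_white = Y_r + Y_g + Y_b  # 1.0, normalised

# Consequently, any single primary is necessarily dimmer than white.
brightest_primary = max(Y_r, Y_g, Y_b)  # green, 0.7152
```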

On a very much related topic, Samsung engineers are trying to leverage the Helmholtz–Kohlrausch effect to increase the brightness of displays while reducing power consumption: A New Approach for Measuring Perceived Brightness in HDR Displays

Is there anything suggesting that the VWG does not keep it in mind? I could be wrong, but I certainly don’t have the feeling that the transforms produced so far, or last year’s discussions, have ignored the rendering medium. Irrespective of whether they do it successfully or not, all the current ACES ODTs acknowledge the destination gamut; they purposely target a specific device. Sure, the gamut mapping is crude, i.e. clipping, and colours skew, but the transforms have not been engineered without thinking about a target device.

Cheers,

Thomas
