Display Transform Based on JzAzBz LMS

Hi all!

I’m a random developer who fell down the fascinating rabbit hole of color management about 2 weeks ago!

Thanks to Christophe Brejon, Jed Smith, Troy Sobotka, Daniele Siragusano, and many others, I went from blissful ignorance to compulsive thinking about colors, day and night :wink:

I would like to share some thoughts I have about an algorithm similar to JzDT.

My understanding of all those concepts is still very fresh so I might say stupid things…

Ignoring gamut compression, here is what I expect from the DRT:

1. Luminance to be compressed from scene values (~unbounded) to display values (bounded)
2. Hue to be preserved
3. Relative lightness to be preserved across pixels (if, in scene-space, object A is perceived as lighter/brighter than object B, that relationship should still hold in display-space)

1. is the main point of tone mapping, 2. is AFAIU the main point of ACESNext’s DRT, and 3. is something I haven’t seen anywhere so far but find very interesting.

I believe OpenDRT and JzDT do not satisfy 3.
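
To make 3. concrete and testable: what I mean is that display lightness should be a non-decreasing function of scene lightness. Here is the kind of minimal check I have in mind (a numpy sketch that is a bit naive about ties; `scene_jz` and `display_jz` are assumed to be per-pixel lightness values, e.g. Jz, computed from the same frame before and after the transform):

```python
import numpy as np

def preserves_lightness_order(scene_jz, display_jz, tol=1e-6):
    """Constraint 3: if pixel A reads lighter than pixel B in the scene,
    it must not read darker than B on the display. Equivalently, display
    lightness must be a non-decreasing function of scene lightness."""
    order = np.argsort(np.ravel(scene_jz))
    return bool(np.all(np.diff(np.ravel(display_jz)[order]) >= -tol))
```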

I suspect 3. might be an interesting constraint, as it removes some degrees of freedom from the space of possible solutions.

Most notably, I believe it makes the path-to-white an inevitable consequence of the constraints: it does not need to be “engineered”, nor does it need to be parameterizable.

Here’s how to construct the algorithm (a rough code sketch follows the list):

  • To satisfy 1. and 3., the tone curve needs to be applied to the lightness (Jz if we use JzAzBz as our CAM).

  • If we keep the chromaticity values (Az and Bz) constant, we can deduce corresponding display-referred RGB values.
    Assuming we choose the output range of our tone curve correctly, those display-referred RGB values can all be kept within their respective valid range (ie. no clamping).
    All constraints are then satisfied, and this algorithm is also chroma-preserving (not a goal, though), but it leads to an issue:
    the brightest displayed white can only be as bright as the dimmest fully saturated primary color of the display (ie. the blue primary).
    This would make images generally dimmer than one would expect from the display hardware.

  • We can introduce a new constraint:
    4. Output colors must be able to span the full range of colors of the display-referred colorspace.

  • To get back the full luminance range of the display-referred space, we need to expand the tone scale’s output range accordingly, but then some very bright and pure scene-referred colors will end up outside of the display-referred colorspace.

  • The solution is to let the chroma (obtained by converting Az and Bz to chroma and hue) vary: scale it down by exactly as much as needed to satisfy all the other constraints (incl. 4.).
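
To make the above concrete, here is a rough, untested numpy sketch of the per-pixel transform as I picture it. Everything scene/display-specific is an arbitrary placeholder of mine: Rec.709 primaries, a 100 nits peak white, an exposure that maps a scene-linear value of 1.0 to 100 nits, a toy tone curve and 24 bisection steps for the chroma solve. The JzAzBz constants are the ones from the Safdar et al. (2017) paper.

```python
import numpy as np

# JzAzBz model (constants from Safdar et al. 2017); XYZ is absolute, in cd/m2.
B_, G_ = 1.15, 0.66
D_, D0 = -0.56, 1.6295499532821566e-11
N_, P_ = 2610.0 / 16384.0, 1.7 * 2523.0 / 32.0
C1, C2, C3 = 3424.0 / 4096.0, 2413.0 / 128.0, 2392.0 / 128.0
M_XYZP_TO_LMS = np.array([[ 0.41478972, 0.579999, 0.0146480],
                          [-0.20151000, 1.120649, 0.0531008],
                          [-0.01660080, 0.264800, 0.6684799]])
M_LMSP_TO_IAB = np.array([[0.500000,  0.500000,  0.000000],
                          [3.524000, -4.066708,  0.542708],
                          [0.199076,  1.096799, -1.295875]])

def xyz_to_jzazbz(xyz):
    x, y, z = xyz
    xp, yp = B_ * x - (B_ - 1) * z, G_ * y - (G_ - 1) * x
    lms = M_XYZP_TO_LMS @ np.array([xp, yp, z])
    lmsp = ((C1 + C2 * (lms / 10000) ** N_) / (1 + C3 * (lms / 10000) ** N_)) ** P_
    iz, az, bz = M_LMSP_TO_IAB @ lmsp
    return np.array([(1 + D_) * iz / (1 + D_ * iz) - D0, az, bz])

def jzazbz_to_xyz(jab):
    jz, az, bz = jab
    iz = (jz + D0) / (1 + D_ - D_ * (jz + D0))
    lmsp = np.linalg.inv(M_LMSP_TO_IAB) @ np.array([iz, az, bz])
    ratio = (C1 - lmsp ** (1 / P_)) / (C3 * lmsp ** (1 / P_) - C2)
    lms = 10000 * np.maximum(ratio, 0.0) ** (1 / N_)  # clamp tiny negatives near black
    xp, yp, zp = np.linalg.inv(M_XYZP_TO_LMS) @ lms
    x = (xp + (B_ - 1) * zp) / B_
    return np.array([x, (yp + (G_ - 1) * x) / G_, zp])

# Scene / display placeholders.
RGB_TO_XYZ = np.array([[0.4124564, 0.3575761, 0.1804375],   # Rec.709 / D65
                       [0.2126729, 0.7151522, 0.0721750],
                       [0.0193339, 0.1191920, 0.9503041]])
XYZ_TO_RGB = np.linalg.inv(RGB_TO_XYZ)
PEAK_NITS = 100.0      # assumed display peak white
EXPOSURE_NITS = 100.0  # assumed luminance of a scene-linear value of 1.0

# Constraint 4: the tone curve output range goes all the way up to the
# lightness of display peak white, not just to that of the dimmest primary.
JZ_DISPLAY_MAX = xyz_to_jzazbz(RGB_TO_XYZ @ np.ones(3) * PEAK_NITS)[0]

def tonescale(jz):
    """Placeholder monotonic compression of scene Jz into [0, JZ_DISPLAY_MAX).
    Monotonicity is what keeps constraint 3 satisfied."""
    x = max(jz, 0.0) / 0.1  # 0.1 is an arbitrary pivot, placeholder only
    return JZ_DISPLAY_MAX * x / (1 + x)

def in_display_gamut(jz, az, bz, eps=1e-6):
    with np.errstate(invalid="ignore"):  # out-of-range intermediates become NaN
        rgb = XYZ_TO_RGB @ jzazbz_to_xyz(np.array([jz, az, bz])) / PEAK_NITS
    return bool(np.all(rgb >= -eps) and np.all(rgb <= 1 + eps))  # NaN counts as out

def drt_pixel(rgb_scene, steps=24):
    """Scene-linear Rec.709 -> display-linear Rec.709. No input gamut
    compression, no surround or flare handling; just constraints 1-4."""
    xyz = RGB_TO_XYZ @ (np.asarray(rgb_scene, dtype=float) * EXPOSURE_NITS)
    jz, az, bz = xyz_to_jzazbz(xyz)
    jz_d = tonescale(jz)           # 1./3. compress lightness monotonically
    lo, hi, s = 0.0, 1.0, 1.0      # 2./4. keep hue, fit chroma to the display
    if not in_display_gamut(jz_d, az, bz):
        for _ in range(steps):     # bisection on the chroma scale factor
            mid = 0.5 * (lo + hi)
            if in_display_gamut(jz_d, mid * az, mid * bz):
                lo = mid
            else:
                hi = mid
        s = lo
    xyz_d = jzazbz_to_xyz(np.array([jz_d, s * az, s * bz]))
    return np.clip(XYZ_TO_RGB @ xyz_d / PEAK_NITS, 0.0, 1.0)
```

Scaling Az and Bz by the same factor keeps the hue angle constant while reducing the chroma, so the desaturation (the path-to-white) only happens when, and by as much as, the display forces it.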

Beyond that rough sketch, I haven’t tested any of this, but I believe that once a CAM and a tone curve are chosen, the rest of the implementation should be pretty tightly defined.

I guess that solving the equation that gives the chroma might not be trivial within the framework of a non-linear color space like JzAzBz, but it should be doable (numerically if nothing else, e.g. with a 1-D search on the chroma scale as in the sketch above).

To conclude, I believe that

  • 3. is a nice and intuitive property

  • it is useful to explicitly state 4.

  • they can both lead to a tightly defined algorithm with a native and unparameterized path-to-white.

PS1: I guess hue might be preserved in a more perceptually accurate way, since it is handled in the JzAzBz space instead of the LMS space.

PS2: Another consequence is that the tone function is no longer applied in linear LMS space but in non-linear JzAzBz space. I guess that means applying an adapted S-curve directly, without embedding it in the Michaelis-Menten model, since JzAzBz’s PQ function is already doing a similar job.
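
For instance, something in this spirit (purely illustrative numbers; `jz_display_max` would be the Jz of the display’s peak white, like `JZ_DISPLAY_MAX` in the sketch above):

```python
import numpy as np

def jz_s_curve(jz, jz_display_max, pivot=0.10, slope=6.0):
    """Toy S-curve applied directly to Jz (already PQ-encoded), mapping scene
    Jz smoothly into [0, jz_display_max). The heavy range compression is
    already done by the PQ step inside JzAzBz, so this only has to shape
    contrast around a pivot; pivot and slope are arbitrary placeholders."""
    y = 1.0 / (1.0 + np.exp(-slope * (np.maximum(jz, 0.0) - pivot)))  # logistic in Jz
    y0 = 1.0 / (1.0 + np.exp(slope * pivot))                          # value at Jz = 0
    return jz_display_max * (y - y0) / (1.0 - y0)                     # anchor black, reach display white
```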
