Display Transform Based on JzAzBz LMS

That is indeed true for normal reflective surface colours, but it does not fix high-luminance emissive colours. Maybe a max(R, G, B) norm, compensated with a per-channel gamma adjustment on the source RGB, would do it.
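For what it's worth, the max(R, G, B) norm idea can be sketched in a few lines. This is a toy illustration of the principle only, not code from any of the DRTs discussed here, and the extended-Reinhard tonescale is just a placeholder curve: the compression is applied to the norm, and all three channels are scaled by the same factor, so the RGB ratios (and hence the hue) are untouched.

```python
# Toy sketch of a max(R,G,B)-normed tonescale. The curve itself is a
# placeholder (extended Reinhard); the point is that the same scale factor
# is applied to all three channels, preserving the RGB ratios.

def tonescale(x, w=16.0):
    """Extended-Reinhard-style compressive curve: maps [0, w] to [0, 1]."""
    return min(x * (1.0 + x / (w * w)) / (1.0 + x), 1.0)

def tonemap_maxrgb(rgb, w=16.0):
    """Apply the tonescale to max(R, G, B) and rescale all channels uniformly."""
    norm = max(rgb)
    if norm <= 0.0:
        return (0.0, 0.0, 0.0)
    scale = tonescale(norm, w) / norm
    return tuple(c * scale for c in rgb)
```

Because the per-channel ratios survive, a per-channel gamma adjustment on the source RGB (as suggested above) would then be the only place where hue-dependent behaviour enters.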

For context, I test and adjust all DRT parameters on both the Rec. 709 and Rec. 2020 primaries images from @ChrisBrejon . I also put a saturation-to-zero node right after the DRT to get an idea of what they do to luminance. Once I’m satisfied, I test on real content.

To explain better, I’d add a more accurate version of this sentence (by Jed):

the appearance of the specular and diffuse is rendered differently by the different display transforms.

Thanks Jed for the help! Because, as Troy explained several times:

Light stimulus is light stimulus and the image is something else. The “specular” thing is a product of the surface, and the “how bright” it ends up is a byproduct of the image.

Sorry for the confusion!



Hello again, as promised, here are some more examples comparing the various prototypes from Jed.

An ACEScg render of the Eisko Louise model displayed in sRGB (ACES) :

FYI, the color of the blue lights used is ACEScg (0.1, 0, 1) and for the red light it is ACEScg (1, 0.1, 0).

The same ACEScg render displayed in sRGB (OpenDRT v0.0.83b2) :

The same ACEScg render displayed in sRGB (OpenDRT v0.0.90b2) :

The same ACEScg render displayed in sRGB (JzDT) :

I will focus here on the blue spotlight in the volumetric and the lips of Louise. Here are some close-ups where I removed the red lights.

A close-up of the same ACEScg render displayed in sRGB (ACES) :

Same close-up displayed in (OpenDRT v0.0.83b2) :

Same close-up displayed in (OpenDRT v0.0.90b2) :

Same close-up displayed in (JzDT) :

Two things caught my eye with OpenDRT v0.0.90b2 and JzDT :

  1. The “weird” shape around the spotlight (the light is visible to camera, and I used an oval-shaped bokeh in Guerilla Render).
  2. The harsh rendering of the lower lip.

So I did the same little luminance test (not that it proves anything)…

A close-up of the desaturated ACEScg render displayed in sRGB (ACES) :

A close-up of the desaturated ACEScg render displayed in sRGB (OpenDRT v0.0.83b2) :

A close-up of the desaturated ACEScg render displayed in sRGB (OpenDRT v0.0.90b2) :

A close-up of the desaturated ACEScg render displayed in sRGB (JzDT) :

The same observations can be made as with the Lego renders. With the different prototypes from Jed, the grayscale versions are quite similar. But unlike the Lego renders, I personally feel that this time OpenDRT v0.0.83b2 looks closer to the luminance tests. Or at least the smoother transitions on the lips and the spotlight look better (god, what a terrible word to use).

So, this got me thinking that maybe in some scenarios OpenDRT v0.0.90b2 and JzDT look closer to some kind of luminance ground truth, while in other scenarios OpenDRT v0.0.83b2 actually wins. We compared some red Lego bricks first and then a blue spotlight, so this may well be related to the actual colours used.

More tests to follow soon if you guys find these tests interesting…



Is there a reflective surface somewhere camera right in the Louise scene?

In the ACES and OpenDRT 0.0.83b2 render it appears there is red light mixing into her face, noticeable particularly on the jaw line, but in the ACES render also seen on the cheekbone and the lips. In the OpenDRT 0.0.90b2 and JzDT it’s almost non-existent, especially in the JzDT. I can’t tell if that is just errors in the highlights (like the magenta halo around the spotlight in the ACES render), or if it’s part of the scene.

Hey @garrett.strudler , long time no see. Yes, you’re absolutely right! There is a light “somewhere camera right” in the Louise scene.

  • The color of the blue lights is ACEScg (0.1, 0, 1).
  • For the red lights it is ACEScg (1, 0.1, 0).

The scene itself has four lights, and to avoid any ambiguity, here is the lighting breakdown with the four display transforms.

“key” light of the ACEScg render displayed in sRGB (ACES) :

“key” light of the ACEScg render displayed in sRGB (OpenDRT v0.0.83b2) :

“key” light of the ACEScg render displayed in sRGB (OpenDRT v0.0.90b2) :

“key” light of the ACEScg render displayed in sRGB (JzDT) :

“kick” light of the ACEScg render displayed in sRGB (ACES) :

“kick” light of the ACEScg render displayed in sRGB (OpenDRT v0.0.83b2) :

“kick” light of the ACEScg render displayed in sRGB (OpenDRT v0.0.90b2) :

“kick” light of the ACEScg render displayed in sRGB (JzDT) :

“rim” light of the ACEScg render displayed in sRGB (ACES) :

“rim” light of the ACEScg render displayed in sRGB (OpenDRT v0.0.83b2) :

“rim” light of the ACEScg render displayed in sRGB (OpenDRT v0.0.90b2) :

“rim” light of the ACEScg render displayed in sRGB (JzDT) :

“fill” light of the ACEScg render displayed in sRGB (ACES) :

“fill” light of the ACEScg render displayed in sRGB (OpenDRT v0.0.83b2) :

“fill” light of the ACEScg render displayed in sRGB (OpenDRT v0.0.90b2) :

“fill” light of the ACEScg render displayed in sRGB (JzDT) :

Hope that clarifies things a bit!



Hello again,

here are some more tests on a Cornell box. The setup: linear sRGB textures, rendered in ACEScg and displayed in sRGB. Here we go!

ACEScg render displayed in sRGB (ACES)

ACEScg render displayed in sRGB (OpenDRT v0.0.83b2)

ACEScg render displayed in sRGB (OpenDRT v0.0.90b2)

ACEScg render displayed in sRGB (JzDT)

And some luminance tests :

desaturated ACEScg render displayed in sRGB (ACES)

desaturated ACEScg render displayed in sRGB (OpenDRT v0.0.83b2)

desaturated ACEScg render displayed in sRGB (OpenDRT v0.0.90b2)

desaturated ACEScg render displayed in sRGB (JzDT)

I think the same observations can be made as for the previous renders. I still believe there is, somewhere, some ground truth that we can rely on rather than subjective/aesthetic judgement. I believe this is what is hinted at here and here.

Hope it helps… As always, nice work Jed !



And finally… Some light sabers tests comparing the same display transforms.

ACEScg render using ACEScg primaries, displayed in sRGB (ACES).

Same render, displayed in sRGB (OpenDRT v0.0.83b2).

Same render, displayed in sRGB (OpenDRT v0.0.90b2).

Same render, displayed in sRGB (JzDT).

In the following examples, I completely desaturate the scene-referred values.

Desaturated ACEScg render, displayed in sRGB (ACES).

Same desaturated render, displayed in sRGB (OpenDRT v0.0.83b2).

Same desaturated render, displayed in sRGB (OpenDRT v0.0.90b2).

Same desaturated render, displayed in sRGB (JzDT).

And for comparison, I have applied the desaturation after the Output Transform.

Display-referred desaturation in sRGB (ACES).

Display-referred desaturation in sRGB (OpenDRT v0.0.83b2).

Display-referred desaturation in sRGB (OpenDRT v0.0.90b2).

Display-referred desaturation in sRGB (JzDT).

That’s it for me. Happy rendering !


Hi @ChrisBrejon ,

I think it would be possible to get a kind of ground truth for the LEGO sailor scene by reproducing it with physical LEGO bricks and lights, and shooting it with different brands of low-tech point-and-shoot and/or cellphone cameras that all go straight to sRGB (with default auto settings). They all have their own magic-sauce recipes to try to get an image close to what you’re seeing, but averaging their results, and avoiding getting married to any one of them, could give us an idea of how much light each colour of LEGO brick should be reflecting and what average hue it should have in Rec.709. Next, with the physical rig as a reference, one could take a raw photo with a high-end camera and grade it from log to PQ/Rec.2020 @ 1000 nits on a reference monitor, trying to match what one sees as closely as possible. This could give us a ground-truth reference for the HDR version.

Currently, @jedsmith 's DRTs are the closest thing we have to a ground-truth reference, but we can’t know that, as they were evaluated on reference images that are either full CG or pre-existing footage that no one here saw in person when it was shot. Essentially, Jed’s DRTs are almost perfect, and we’re in the last 10% stretch where we need to tweak, adjust, or just plain throw alternative algorithms that look similar but slightly improved at the wall until something sticks and everybody is happy.

I, personally, will be happy when emissive reds and blues (fires and skies) are under control without sacrificing too much saturation in the diffuse reds, greens and yellows (skin tones and grass), i.e. when memory colours are under control. That doesn’t mean we should bake in a red->orange skew and/or blue->cyan skew and/or excessive contrast, though; those can easily be achieved with simple hue shifts that work well for all targets, instead of requiring different LUTs per target. Jed’s perceptual correction is an example of a hue shift that “just works”.


Can’t we just encode an image with a display encoding and no highlight roll-off and treat that as our reference? For example, encode with a power-law gamma of 1/2.4 and display it on a gamma 2.4 display? All the colors that are not clipped are our reference. And the same for HDR. What am I missing? Or are those clipped colors actually what I’m missing?

Those clipped colours are indeed what you are missing :slight_smile:

Rec.709 at exposure +2 or +3 stops will give values >1 very fast. Take a non-pure red with a bit of green in it and you will notice that it skews progressively towards orange as you raise exposure, because the red channel has already been clipped to 1 while the green channel is still rising. It will also get stuck at (1, 1, 0), which is pretty useless if your source has a higher dynamic range than 0–1 diffuse reflectance.
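A minimal sketch of that skew, with made-up display-linear values:

```python
# Demonstrates the skew described above: per-channel clipping of an
# exposure-raised non-pure red. Values are illustrative display-linear
# Rec.709 triplets, not taken from any of the renders in this thread.

def expose_and_clip(rgb, stops):
    """Raise exposure by the given number of stops, then clip each channel to 1."""
    gain = 2.0 ** stops
    return tuple(min(c * gain, 1.0) for c in rgb)

base = (0.9, 0.3, 0.0)            # a red with some green: G/R = 1/3
one = expose_and_clip(base, 1)    # red channel clips first, G/R ratio rises
three = expose_and_clip(base, 3)  # both clip: stuck on pure yellow (1, 1, 0)
```

At +1 stop the red channel is already pinned at 1.0 while green keeps rising, so the G/R ratio (and thus the hue) drifts toward yellow; by +3 stops every further exposure increase is invisible.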


Hi all!

I’m a random developer who fell down the fascinating rabbit hole of color management about 2 weeks ago!

Thanks to Christophe Brejon, Jed Smith, Troy Sobotka, Daniele Siragusano, and many others, I went from blissful ignorance to compulsive thinking about colors, day and night :wink:

I would like to share with you some thoughts I have about a similar algorithm to JzDT.

My understanding of all those concepts is still very fresh so I might say stupid things…

Ignoring gamut compression, I expect from the DRT:

1. Luminance to be compressed from scene values (~unbounded) to display values (bounded)
2. Hue to be preserved
3. Relative lightness to be preserved across pixels (if, in scene-space, object A is perceived as lighter/brighter than object B, that relationship should still hold in display-space)

1. is the main point of tone mapping, 2. is AFAIU the main point of ACESNext’s DRT, and 3. is something I haven’t seen anywhere so far but find very interesting.

I believe OpenDRT and JzDT do not satisfy 3.

I suspect 3. might be an interesting constraint as it removes some degree of freedom from the space of possible solutions.

Most notably, I believe it makes the path-to-white an inevitable consequence of the constraints: it does not need to be “engineered”, nor does it need to be parameterizable.

Here’s how to construct the algorithm:

  • To satisfy 1. and 3., the tone curve needs to be applied to the lightness (Jz if we use JzAzBz as our CAM).

  • If we keep the chromaticity values constant (Az and Bz), we can deduce corresponding display-referred RGB values.
    Assuming we choose the output range of our tone curve correctly, those display-referred RGB values can all be made to lie within their respective possible ranges (i.e. no clamping).
    All constraints are then satisfied, and this algorithm is also chroma-preserving (not a goal, though), but it leads to an issue:
    the brightest displayable white can only be as bright as the dimmest fully saturated primary colour of the display (i.e. the blue primary).
    This would lead to images generally dimmer than one would expect from the display hardware.

  • We can introduce a new constraint:
    4. Output colors must be able to span the full range of colors of the display-referred colorspace.

  • To get back the full luminance range of the display-referred space, we need to expand the tone scale output range accordingly, but then some overly bright and pure scene-referred colors will fall outside the display-referred colorspace.

  • The solution is to allow the chroma (computed by converting Az and Bz to chroma and hue) to vary, and to scale it down by exactly as much as needed to satisfy all the other constraints (incl. 4.).

I haven’t put this algorithm into code, but I believe that once a CAM and a tone curve are chosen, the rest of the implementation is pretty tightly defined.

I guess that solving for the value of the chroma might not be trivial within the framework of non-linear color spaces like JzAzBz, but it should be doable.
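The chroma scale-down step above can at least be sketched. This is my own toy illustration, using CIELAB as a stand-in for JzAzBz purely to keep the example self-contained (the 4-decimal sRGB matrices are the textbook ones); the structure would be identical with Jz/Az/Bz and a real tone scale in front. Since there is no closed form, it simply bisects the chroma scale factor, as hinted at above.

```python
# Toy gamut fit: hold lightness (L) and hue constant, shrink chroma (a, b)
# until the colour fits in linear sRGB. CIELAB stands in for JzAzBz here.

WHITE = (0.95047, 1.0, 1.08883)  # D65 reference white

def lab_to_xyz(L, a, b):
    fy = (L + 16.0) / 116.0
    fx = fy + a / 500.0
    fz = fy - b / 200.0
    def finv(t):
        return t ** 3 if t > 6.0 / 29.0 else 3.0 * (6.0 / 29.0) ** 2 * (t - 4.0 / 29.0)
    return tuple(w * finv(f) for w, f in zip(WHITE, (fx, fy, fz)))

def xyz_to_linear_srgb(x, y, z):
    return (
        3.2406 * x - 1.5372 * y - 0.4986 * z,
        -0.9689 * x + 1.8758 * y + 0.0415 * z,
        0.0557 * x - 0.2040 * y + 1.0570 * z,
    )

def in_gamut(rgb, eps=1e-3):
    """Tolerance absorbs the rounding of the 4-decimal matrices."""
    return all(-eps <= c <= 1.0 + eps for c in rgb)

def fit_chroma(L, a, b, iterations=40):
    """Bisect a chroma scale s in [0, 1]; hue angle and L are untouched."""
    if in_gamut(xyz_to_linear_srgb(*lab_to_xyz(L, a, b))):
        return L, a, b
    lo, hi = 0.0, 1.0  # invariant: lo*chroma fits, hi*chroma does not
    for _ in range(iterations):
        s = 0.5 * (lo + hi)
        if in_gamut(xyz_to_linear_srgb(*lab_to_xyz(L, s * a, s * b))):
            lo = s
        else:
            hi = s
    return L, lo * a, lo * b
```

With a vivid out-of-gamut red such as (L=60, a=90, b=0), `fit_chroma` leaves L and the hue direction untouched and only shrinks the chroma until the colour fits; in-gamut colours pass through unchanged.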

To conclude, I believe that

  • 3. is a nice and intuitive property

  • that it is useful to explicitly state 4.

  • they can both lead to a tightly defined algorithm with a native and unparameterized path-to-white.

PS1: I guess the hue may be preserved in a more perceptually accurate way because it is managed in the JzAzBz space instead of the LMS space.

PS2: Another consequence is that the tone function is no longer applied in linear LMS space but in the non-linear JzAzBz space. I guess that means directly applying an adapted S-curve without embedding it in the Michaelis-Menten model, since JzAzBz’s PQ function is already doing a similar job.
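For reference, the compressive form usually meant by “Michaelis-Menten” (a.k.a. Naka-Rushton) in this context is, as I understand it, something like the following; the parameter names are mine, not from any particular DRT:

```python
def naka_rushton(x, k=1.0, p=1.2):
    """General Michaelis-Menten / Naka-Rushton compression.

    Maps scene-linear [0, inf) to [0, 1), with semi-saturation constant k
    (the input that maps to exactly 0.5) and contrast exponent p.
    """
    xp = x ** p
    return xp / (xp + k ** p)
```

Applied in linear space this already rolls highlights off smoothly; the PS2 point is that applying a curve like this *after* JzAzBz’s PQ non-linearity would compress twice, hence the suggestion of a plain S-curve instead.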


Hello Paul, and welcome to ACESCentral!

This is exactly what I have been through as well… You’re in the right place! :wink:

This is not the first time I have heard this, and I think some people on this forum would be especially interested in this part.

This is a really interesting point of view. Thanks for sharing it! Maybe it could be discussed during the next meeting… I particularly like how you deconstruct the mechanics point by point.



JzDT is much better at this than OpenDRT. From my art-technical-director perspective, it maintains lighting relationships much better than OpenDRT does (esp. w.r.t. brightly lit blues), although it does not produce as nice colours. I would agree with him. Once I have all the comments from my team in their final form, I will post them here.

Hi Paul,

Interesting post. Great to read and think about new ideas and approaches.

A few comments on first digestion:

Lightness is a tricky scale to predict, and I would not try to base a requirement on lightness.
An excellent summary paper about the difficulties with lightness:

But basically, your intuition about the monotonicity of the transform is not wrong; just the dimensions need to be discussed.

Also, I am not sure how you deduce:

[quote=“paulgdpr, post:14, topic:4008”]
Assuming we choose correctly the output range of our tone curve, those display-referred RGB values can be all made to be within their respective possible range (ie. no clamping).
[/quote]

while maintaining the (~unbounded) condition:

Further, I am wondering about:

I think you cannot easily relate directly from a CAM tone mapping to display units (this is one fundamental issue with CAM-inspired spaces). Display gamuts are oddly shaped in there.
This needs a bit of thought, I guess, and it also applies to the rest of the algorithm you propose.

You also foresee this here, I think:

And I would like to mention that one crucial attribute for the DRT is “simplicity” and that the algorithm needs to be GPU friendly.

This all depends on the definition of “hue”.
There is some degree of freedom in designing your experiments and models and then fitting the models to the experimental results.

Be aware that PQ was fitted to JNDs (just noticeable differences), so encoding images with the minimal possible consumption of bits is exactly what PQ is good at. That was its design goal, I guess, and it is where it shines.
However, equal distances at larger scales are an entirely different perceptual matter and might not agree with JNDs.
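For reference, the PQ curve under discussion can be written down compactly. This is a minimal sketch using the published SMPTE ST 2084 constants:

```python
# SMPTE ST 2084 (PQ) encode/decode, absolute luminance <-> [0, 1] signal.
# Constants are the exact rational values from the specification.
M1 = 2610.0 / 16384.0
M2 = 2523.0 / 4096.0 * 128.0
C1 = 3424.0 / 4096.0
C2 = 2413.0 / 4096.0 * 32.0
C3 = 2392.0 / 4096.0 * 32.0

def pq_encode(nits):
    """Absolute luminance in cd/m^2 -> PQ signal in [0, 1] (inverse EOTF)."""
    y = max(nits, 0.0) / 10000.0
    yn = y ** M1
    return ((C1 + C2 * yn) / (1.0 + C3 * yn)) ** M2

def pq_decode(v):
    """PQ signal in [0, 1] -> absolute luminance in cd/m^2 (EOTF)."""
    vm = v ** (1.0 / M2)
    return 10000.0 * (max(vm - C1, 0.0) / (C2 - C3 * vm)) ** (1.0 / M1)
```

The JND fitting shows in how the curve spends its code values: roughly half of the signal range is used below about 100 nits, which is ideal for bit-efficient encoding but, as noted above, says nothing about larger-scale perceptual spacing.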

I hope some of this helps.


So if it works fast on a GPU and looks like ass, this is a design win?

I cannot believe that this sort of metric should hold any weight whatsoever in the act of image formation. Solve the problem of image formation first; worry about the performance and compute implications second. Having priorities this distorted is shocking.

L* is essentially an observer sensation metric derived from linear observer stimulus^1, which can be taken back to display colourimetry. The path from an observer sensation appearance metric back to currents and displays is perhaps feasible.

1 - Yes we can all acknowledge the fundamental brokenness of datasets derived from flicker photometry. Just using it as an example that we can indeed have a potential fluidity between observer sensation and observer stimulus that would allow us to get back to display colourimetry.

I haven’t read that in what @daniele said; he only highlighted that simplicity should be a critical requirement, which is reasonable given that it prevents the creation of a steam factory. He never said it should take precedence over everything else.

Any research on this particular topic?

Of course I appreciate and understand the general intention of what Daniele said here. There are some problems with uttering the word “simplicity”, however; much like “beautiful”, it’s just a seductive word.

At some point, a model will sufficiently model what it is attempting to achieve, and fail at other potential things. My point is that, to some degree, “simplicity” is just a meaningless word. How many steps do we need to take colourimetry from one display to another? Why? What is “simpler” here?

The critical thing here is a reevaluation of the process of image formation. It’s as much a brainstorming session as anything. I’d hope that such an attempt wasn’t hindered by chasing mythical dragons and leprechauns.

There’s an absolute proverbial metric ton of interesting stuff out there that laser-focuses on specific aspects. I appreciate Lennie / Pokorny / Smith’s Luminance for a decent wide-lens summation of the surface. 1931’s busted-up aspect is nicely stated in Stockman’s Cone Fundamentals and CIE Standards, page 90.

I’m interested in citations stating that the datasets obtained by flicker photometry are broken. You link Stockman, but the cone fundamentals are tightly coupled to the CMFs by a mere linear relationship; if the datasets were as fundamentally broken as you imply, you would not be able to do that at all.

The goal of flicker photometry is to derive “equivalence” in brightness / relative lightness of chromatic stimuli.

Under that basic definition, if the testing does not actually model what it is attempting to measure, I would say that’s pretty broken. See how almost every CAM fails miserably at this basic task.

Luminance, as a metric connecting observer stimulus response to observer sensation, only becomes more accurate as the stimulus approaches achromatic. Seems to be a bit of a tautological problem.


I still don’t see any citation, just an opinion (which is fine in itself) :slight_smile:

The purpose of heterochromatic flicker photometry is to measure the spectral sensitivity of the HVS. The datasets produced by this method are verified by other approaches, for example electro-retinograms, as shown by Aiba, Alpern and Maaseidvaag (1967):

If one takes two radiometrically matched colour stimuli and scales them so that they match photometrically, they will certainly appear much closer in apparent brightness, but not necessarily be perceived to match, which, indeed, hints at more complex mechanisms happening in the eye and cortex.

Does it mean that the datasets are broken? Certainly not! It means that we are far from having all the answers. Heterochromatic brightness-matching experiments show that it is a complex business, e.g. CIE 1988. To quote Conway et al. (2017):

We can now predict, with reasonable precision, how the three classes of cones react to any given physical stimulus. Yet many mysteries about how that code leads to the perception of color remain.