Display Transform Based on JzAzBz LMS

I’ll post here another of my experiments since it might be an interesting point of comparison to OpenDRT:

JzDT: Nuke | Resolve DCTL
A simple display transform based on JzAzBz LMS

During one evening of randomly testing things, I tried applying a chromaticity-linear tonescale to the LMS colorspace used in hdr-IPT. I discovered that the result is very similar to CIE 2006 LMS (or at least the approximation we get using the CIE 1931 XYZ to CIE 2006 LMS conversion matrix from the paper "Chromaticity Coordinates for Graphic Arts Based on CIE 2006 LMS with Even Spacing of Munsell Colours" by Richard Kirk).

While working on updating the perceptual model used in OpenDRT from ICtCp to JzAzBz, I thought I would make a quick sketch to see what it would look like just using this color model for rendering. I decided to use the max(r,g,b) norm for this one. High purity reds and blues render less saturated, which looks less good out of the box on normal reflective surface colors, but this approach has fewer issues with higher purity colors being pushed out of gamut. Pros and cons.
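
For anyone who wants to poke at the idea, here is a minimal sketch (Python/NumPy) of what a chromaticity-preserving tonescale driven by a max(r, g, b) norm looks like. The curve and its constants below are placeholders for illustration, not the actual JzDT tonescale.

```python
import numpy as np

def tonescale(x):
    # Placeholder compressive curve (Michaelis-Menten style), NOT the JzDT curve.
    return 1.2 * x / (1.2 * x + 1.0)

def norm_tonescale(rgb):
    """Chromaticity-preserving tonescale: scale all three channels by the
    ratio of the tonescaled norm to the original norm, so the r:g:b ratios
    (and therefore purity) are untouched by the tonescale itself."""
    rgb = np.asarray(rgb, dtype=float)
    norm = np.maximum(np.max(rgb, axis=-1, keepdims=True), 1e-10)  # max(r, g, b)
    return rgb * (tonescale(norm) / norm)

# A bright, high-purity scene-referred red is scaled down as a whole
# rather than having its largest channel clipped or skewed.
print(norm_tonescale([4.0, 0.2, 0.1]))
```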

I also put a display-referred tint control in there for @Troy_James_Sobotka :smiley:

Happy experimenting.


I wonder if there will ever be a time when we manage to see a difference between the replication of an image on a display or output medium like paper, and the image itself?

The idea that stimulus is an image is absolutely ridiculous. An image is manifested as stimulus, but not all stimulus is an image.

This seems to rest at the foot of so much of this horrific mess we are wallowing in…

Respect though; image-referred tinting is a real thing. Anything less is an ahistorical overfit.


Hi everyone,

I have been testing several of my CG frames through the different prototypes and I would like to share some of the results. I think the JzAzBz LMS prototype is worth a look, for reasons I will detail below. I should mention that I am using all these display transforms (ACES, OpenDRT and JzDT) with default values.

This is an ACEScg render, displayed in sRGB (ACES) :

Same ACEScg render, displayed in sRGB (OpenDRT v0.0.83b2) :

Same ACEScg render, displayed in sRGB (OpenDRT v0.0.90b2) :

Same ACEScg render, displayed in sRGB (JzDT) :

There are two things that caught my eye. First is the red lego in the back.

With ACES, it is kinda orange.

This is the result with OpenDRT v0.0.83b2.

This is the result with OpenDRT v0.0.90b2. You can see an outline on the left screen side of the lego.

JzDT has a similar rendering but the outline (fringing ?) is gone.

The second thing I noticed is about the Lego brick in the lower-left corner.

Close-up of the ACEScg render, displayed in sRGB (ACES) :

Same close-up of the ACEScg render, displayed in sRGB (OpenDRT v0.0.83b2) :

Same close-up of the ACEScg render, displayed in sRGB (OpenDRT v0.0.90b2) :

Same close-up of the ACEScg render, displayed in sRGB (JzDT) :

If you look only at the red brick, you can see that the specular and diffuse responses are completely different between the several display transforms. This has been explained several times by Thomas in the past.

But what I found interesting (I have seen it on several other renders of mine, not only this Lego one) is that OpenDRT v0.0.90b2 and JzDT are converging towards a similar rendering. Is this coincidental?

Secondly, after comparing the different renders of the red brick, I asked myself if there was some kind of “ground truth” somewhere, because I don’t want to rely only on my eyes, especially on a full-CG render. So I did a little test: completely desaturating the renders to check the luminance.
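
For reference, the desaturation check boils down to replacing each pixel with its luminance so that only the tonal rendering is compared. A minimal sketch, assuming Rec.709/sRGB luminance weights on linear RGB; the exact saturation node used may differ.

```python
import numpy as np

# Rec.709 / sRGB luminance weights, for linear-light RGB.
LUMA_WEIGHTS = np.array([0.2126, 0.7152, 0.0722])

def desaturate(rgb, amount=1.0):
    """Mix each pixel towards its luminance; amount=1.0 is a full
    'saturation to 0' check. rgb is linear, shape (..., 3)."""
    rgb = np.asarray(rgb, dtype=float)
    luminance = rgb @ LUMA_WEIGHTS            # per-pixel Y
    return rgb * (1.0 - amount) + luminance[..., None] * amount

# A saturated red brick value and its luminance-only version.
print(desaturate([[0.5, 0.05, 0.03]]))
```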

Close-up of the ACEScg render, displayed in sRGB (ACES) :

Same close-up of the ACEScg render, displayed in sRGB (OpenDRT v0.0.83b2) :

Same close-up of the ACEScg render, displayed in sRGB (OpenDRT v0.0.90b2) :

Same close-up of the ACEScg render, displayed in sRGB (JzDT) :

What really got me interested is that, once the renders are completely desaturated, they all look quite similar, especially the sun hitting the red brick. And comparing them closely, focusing only on the red brick and the sun hit, I thought JzDT was the closest one to the luminance test.

I’ll do more tests in the coming days. I have several things to point out that I would like to share.

Chris

That is indeed true for normal reflective surface colours, but it does fix high-luminance emissive colours. Maybe a max(r, g, b) norm compensated with a per-channel gamma adjustment on the source RGB would do it.

For context, I test and adjust all DRT parameters on both the Rec. 709 and Rec. 2020 primaries images from @ChrisBrejon . I also put a saturation to 0 node right after the DRT to give me an idea of what they do to luminance. Once I’m satisfied, I test on real content.

To explain better, I’d add a more accurate version of this sentence (by Jed) :

the appearance of the specular and diffuse is rendered differently by the different display transforms.

Thanks Jed for the help ! Because as Troy explained several times :

Light stimulus is light stimulus and the image is something else. The “specular” thing is a product of the surface, and the “how bright” it ends up is a byproduct of the image.

Sorry for the confusion !

Chris


Hello again, as promised, here are some more examples comparing the various prototypes from Jed.

An ACEScg render of the Eisko Louise model displayed in sRGB (ACES) :


FYI, the color of the blue lights used is ACEScg (0.1, 0, 1) and for the red light it is ACEScg (1, 0.1, 0).

The same ACEScg render displayed in sRGB (OpenDRT v0.0.83b2) :

The same ACEScg render displayed in sRGB (OpenDRT v0.0.90b2) :

The same ACEScg render displayed in sRGB (JzDT) :

I will focus here on the blue spotlight in the volumetric and the lips of Louise. Here are some close-ups where I removed the red lights.

A close-up of the same ACEScg render displayed in sRGB (ACES) :

Same close-up displayed in (OpenDRT v0.0.83b2) :

Same close-up displayed in (OpenDRT v0.0.90b2) :

Same close-up displayed in (JzDT) :

Two things caught my eye with OpenDRT v0.0.90b2 and JzDT :

  1. The “weird” shape around the spotlight (the light is visible to camera, and I used an oval-shaped bokeh as well in Guerilla Render).
  2. The hard light impact on the lower lip.

So I did the same little test of luminance (not that it proves anything)…

A close-up of the desaturated ACEScg render displayed in sRGB (ACES) :

A close-up of the desaturated ACEScg render displayed in sRGB (OpenDRT v0.0.83b2) :

A close-up of the desaturated ACEScg render displayed in sRGB (OpenDRT v0.0.90b2) :

A close-up of the desaturated ACEScg render displayed in sRGB (JzDT) :

The same observations can be made as with the Lego renders. With the different prototypes from Jed, the grayscale versions are quite similar. But unlike the Lego renders, I personally feel that this time OpenDRT v0.0.83b2 looks closer to the luminance tests. Or at least the smoother transitions on the lips and the spotlight look better (god, what a terrible word to use).

So this got me thinking that maybe in some scenarios OpenDRT v0.0.90b2 and JzDT look closer to some kind of luminance ground truth, and in other scenarios OpenDRT v0.0.83b2 actually wins. We compared some red Lego bricks first and then a blue spotlight, so this may well be related to the actual colours used.

More tests to follow soon if you guys find these tests interesting…

Chris


Is there a reflective surface somewhere camera right in the Louise scene?

In the ACES and OpenDRT 0.0.83b2 renders it appears there is red light mixing into her face, noticeable particularly on the jaw line, and in the ACES render also on the cheekbone and the lips. In OpenDRT 0.0.90b2 and JzDT it’s almost non-existent, especially in JzDT. I can’t tell if that is just errors in the highlights (like the magenta halo around the spotlight in the ACES render), or if it’s part of the scene.

Hey @garrett.strudler , long time no see. Yes, you’re absolutely right ! There is a light “somewhere camera right” in the Louise scene.

  • The color of the blue lights is ACEScg (0.1, 0, 1).
  • For the red lights it is ACEScg (1, 0.1, 0).

The scene itself has 4 lights and, to avoid any ambiguity, here is the lighting breakdown with the 4 display transforms.

“key” light of the ACEScg render displayed in sRGB (ACES) :

“key” light of the ACEScg render displayed in sRGB (OpenDRT v0.0.83b2) :

“key” light of the ACEScg render displayed in sRGB (OpenDRT v0.0.90b2) :

“key” light of the ACEScg render displayed in sRGB (JzDT) :

“kick” light of the ACEScg render displayed in sRGB (ACES) :

“kick” light of the ACEScg render displayed in sRGB (OpenDRT v0.0.83b2) :

“kick” light of the ACEScg render displayed in sRGB (OpenDRT v0.0.90b2) :

“kick” light of the ACEScg render displayed in sRGB (JzDT) :

“rim” light of the ACEScg render displayed in sRGB (ACES) :

“rim” light of the ACEScg render displayed in sRGB (OpenDRT v0.0.83b2) :

“rim” light of the ACEScg render displayed in sRGB (OpenDRT v0.0.90b2) :

“rim” light of the ACEScg render displayed in sRGB (JzDT) :

“fill” light of the ACEScg render displayed in sRGB (ACES) :

“fill” light of the ACEScg render displayed in sRGB (OpenDRT v0.0.83b2) :

“fill” light of the ACEScg render displayed in sRGB (OpenDRT v0.0.90b2) :

“fill” light of the ACEScg render displayed in sRGB (JzDT) :

Hope it clarifies a bit !

Thanks


Hello again,

some more tests on a Cornell Box. Here is the setup : linear-sRGB textures, rendered in ACEScg and displayed in sRGB. Here we go !

ACEScg render displayed in sRGB (ACES)

ACEScg render displayed in sRGB (OpenDRT v0.0.83b2)

ACEScg render displayed in sRGB (OpenDRT v0.0.90b2)

ACEScg render displayed in sRGB (JzDT)

And some luminance tests :

desaturated ACEScg render displayed in sRGB (ACES)

desaturated ACEScg render displayed in sRGB (OpenDRT v0.0.83b2)

desaturated ACEScg render displayed in sRGB (OpenDRT v0.0.90b2)

desaturated ACEScg render displayed in sRGB (JzDT)

I think the same observations can be made as with the previous renders. I still believe there is some kind of ground truth somewhere that we can rely on rather than subjective/aesthetic judgement. I believe this is what is hinted at here and here.

Hope it helps… As always, nice work Jed !

Chris


And finally… Some light sabers tests comparing the same display transforms.

ACEScg render using ACEScg primaries, displayed in sRGB (ACES).

Same render, displayed in sRGB (OpenDRT v0.0.83b2).

Same render, displayed in sRGB (OpenDRT v0.0.90b2).

Same render, displayed in sRGB (JzDT).

In the following examples, I completely desaturate the scene-referred values.

Desaturated ACEScg render, displayed in sRGB (ACES).

Same desaturated render, displayed in sRGB (OpenDRT v0.0.83b2).

Same desaturated render, displayed in sRGB (OpenDRT v0.0.90b2).

Same desaturated render, displayed in sRGB (JzDT).

And for comparison, I have applied the desaturation after the Output Transform.

Display-referred desaturation in sRGB (ACES).

Display-referred desaturation in sRGB (OpenDRT v0.0.83b2).

Display-referred desaturation in sRGB (OpenDRT v0.0.90b2).

Display-referred desaturation in sRGB (JzDT).

That’s it for me. Happy rendering !


Hi @ChrisBrejon ,

I think it would be possible to get a kind of ground truth for the LEGO sailor scene by reproducing it with physical LEGO bricks and lights and shooting it with different brands of lo-tech point-and-shoot and/or cellphone cameras that all go straight to sRGB (with default auto settings). They all have their own magic-sauce recipes to try and get an image close to what you’re seeing, but taking an average of those, and trying to avoid getting married to one result or another, could give us an idea of how much light each colour of LEGO brick should be reflecting and what average hue it should have in Rec.709. Next, with the physical rig as a reference, one could take a raw photo with a high-end camera and do a grade from log to PQ/Rec.2020 @ 1000 nits on a reference monitor, trying to match what one sees as closely as possible. This could give us a ground truth reference for the HDR version.

Currently, @jedsmith 's DRTs are the closest thing we have to a ground truth reference, but we can’t know that, as they were evaluated on reference images that are either full CG or pre-existing footage for which nobody here knows what the scene actually looked like when it was shot. Essentially, Jed’s DRTs are almost perfect and we’re in the last 10% stretch where we need to tweak, adjust or just plain throw alternative algorithms that look similar but slightly improved at the wall until something sticks and everybody is happy.

I, personally, will be happy when emissive reds and blues (fires and skies) are under control without sacrificing too much saturation in the diffuse reds, greens and yellows (skin tones and grass), i.e. when memory colours are under control. That doesn’t mean we should bake in a red→orange skew and/or a blue→cyan skew and/or contrast to an excessive level though, just that those can easily be achieved with simple hue shifts that work well in all targets instead of requiring different LUTs per target. Jed’s perceptual correction is an example of a hue shift that “just works”.


Can’t we just encode an image with the display encoding and no highlight roll-off and treat that as our reference? For example, encode with a power-law gamma of 1/2.4 and display it on a gamma 2.4 display? All the colors that are not clipped are our reference. And the same for HDR. What am I missing? Or are those clipped colors actually what I’m missing?

Those clipped colours are indeed what you are missing :slight_smile:

Rec.709 at exposure level +2 or +3 will give values >1 very fast. Take a non-pure red with a bit of green in it and you will notice that it skews progressively towards orange as you raise exposure, because the red channel has already been clipped to 1 while the green channel is still being raised. It will also get stuck at (1, 1, 0), which is pretty useless if your source has a higher dynamic range than 0…1 diffuse reflectance.
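
A quick numeric illustration of that skew, assuming nothing more than a naive gamma 2.4 encode with a hard clip (no DRT at all):

```python
# Raise exposure, hard-clip to the display range, then gamma-encode.
def naive_encode(rgb, stops, gamma=2.4):
    exposed = [c * 2.0 ** stops for c in rgb]
    clipped = [min(max(c, 0.0), 1.0) for c in exposed]   # hard clip at 1.0
    return [round(c ** (1.0 / gamma), 3) for c in clipped]

red = [0.9, 0.2, 0.0]        # a non-pure red with a bit of green in it
for stops in range(5):
    print(f"+{stops} stops:", naive_encode(red, stops))
# The red channel clips almost immediately while green keeps rising,
# so the colour skews towards orange and finally parks at (1, 1, 0).
```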


Hi all!

I’m a random developer who fell down the fascinating rabbit hole of color management about 2 weeks ago!

Thanks to Christophe Brejon, Jed Smith, Troy Sobotka, Daniele Siragusano, and many others, I went from blissful ignorance to compulsive thinking about colors, day and night :wink:

I would like to share with you some thoughts I have about a similar algorithm to JzDT.

My understanding of all those concepts is still very fresh so I might say stupid things…

Ignoring gamut compression, I expect from the DRT:

1. Luminance to be compressed from scene values (~unbounded) to display values (bounded)
2. Hue to be preserved
3. Relative lightness to be preserved across pixels (if, in scene-space, object A is perceived as lighter/brighter than object B, that relationship should still hold in display-space)

1. is the main point of tone mapping, 2. is AFAIU the main point of ACESNext’s DRT and 3. is something I haven’t seen anywhere so far but that I find very interesting.

I believe OpenDRT and JzDT do not satisfy 3.

I suspect 3. might be an interesting constraint as it removes some degree of freedom from the space of possible solutions.

Most notably, I believe it makes the path-to-white an inevitable consequence of the constraints: it does not need to be “engineered”, nor does it need to be parameterizable.

Here’s how to construct the algorithm:

  • To satisfy 1. and 3., the tone curve needs to be applied to the lightness (Jz if we use JzAzBz as our CAM).

  • If we keep the chromaticity values constant (Az and Bz), we can deduce some corresponding display-referred RGB values.
    Assuming we choose the output range of our tone curve correctly, those display-referred RGB values can all be made to fall within their respective possible ranges (i.e. no clamping).
    All constraints are then satisfied and this algorithm is also chroma-preserving (which was not a goal), but this leads to an issue:
    the brightest displayed white can only be as bright as the dimmest fully saturated primary color of the display (i.e. primary blue).
    This would lead to images generally dimmer than one would expect from their display hardware.

  • We can introduce a new constraint:
    4. Output colors must be able to span the full range of colors of the display-referred colorspace.

  • To get back the full range of luminance of the display-referred space, we need to expand the tone scale output range accordingly but then some overly bright and pure scene-referred colors will get outside of their display-referred colorspace.

  • The solution is to allow the chroma (computed by converting Az and Bz to chroma and hue) to be variable, and to scale it down by exactly as much as needed to satisfy all the other constraints (incl. 4.).

I haven’t put this algorithm to code but I believe that once a CAM and a tone curve are chosen, the rest of the implementation should be pretty tightly defined.

I guess that solving the equation to find the value of the chroma might not be trivial within the framework of non-linear color spaces like JzAzBz but it should be doable.
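
To make the structure concrete, here is a rough Python sketch of how I read the construction above. The JzAzBz and display conversions are assumed helpers (any implementation could be dropped in, and the module name is hypothetical), the tone curve is a placeholder, and the chroma solve is done with a simple search instead of an exact solution, so treat this strictly as an illustration of the constraints, not as a reference implementation.

```python
import numpy as np

# Assumed helpers (hypothetical module): any JzAzBz implementation and
# any XYZ -> linear display RGB matrix could be substituted here.
from some_colour_lib import xyz_to_jzazbz, jzazbz_to_xyz, xyz_to_display_rgb

def tonescale_jz(jz):
    # Placeholder monotonic tone curve on lightness (constraints 1 and 3).
    return jz / (jz + 0.03)

def in_gamut(rgb, eps=1e-6):
    return np.all(rgb >= -eps) and np.all(rgb <= 1.0 + eps)

def render_pixel(xyz):
    jz, az, bz = xyz_to_jzazbz(xyz)
    jz_out = tonescale_jz(jz)

    # Hold hue constant (constraint 2); chroma becomes the free variable.
    hue = np.arctan2(bz, az)
    chroma = np.hypot(az, bz)

    def to_display(c):
        jab = (jz_out, c * np.cos(hue), c * np.sin(hue))
        return xyz_to_display_rgb(jzazbz_to_xyz(jab))

    # Find the largest chroma scale in [0, 1] that stays inside the display
    # gamut (constraint 4): scale chroma down only as much as needed.
    lo, hi = 0.0, 1.0
    for _ in range(24):            # bisection stand-in for the exact solve
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if in_gamut(to_display(chroma * mid)) else (lo, mid)

    return to_display(chroma * lo)
```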

To conclude, I believe that

  • 3. is a nice and intuitive property

  • it is useful to explicitly state 4.

  • they can both lead to a tightly defined algorithm with a native and unparameterized path-to-white.

PS1: I guess that maybe the hue will be preserved in a more perceptually accurate way because it is managed in the JzAzBz space instead of the LMS space.

PS2: Another consequence is that the tone function is no longer applied in linear LMS space but in non-linear JzAzBz space. I guess that means applying an adapted S-curve directly, without embedding it in the Michaelis-Menten model, since JzAzBz’s PQ function is already doing a similar job.
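
For readers less familiar with the terminology: the Michaelis-Menten (Naka-Rushton) family mentioned above is, up to parameterisation, the compressive hyperbolic curve below, and the PQ non-linearity inside JzAzBz is built from a closely related rational form, which is why stacking both could be redundant.

```latex
% Michaelis-Menten / Naka-Rushton compressive response;
% n = 1 gives the classic Michaelis-Menten saturation curve.
f(x) = f_{\max}\,\frac{x^{n}}{x^{n} + k^{n}}
```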


Hello Paul and welcome to ACESCentral !

This is exactly what I have been through as well… You’re in the right place ! :wink:

This is not the first time I have heard this, and I think some people on this forum would be especially interested in this part.

This is a really interesting point of view. Thanks for sharing it ! Maybe it could be discussed during the next meeting… I particularly like how you’re deconstructing the mechanics point by point.

Chris


JzDT is much better at this than OpenDRT. From my art technical director’s perspective, it maintains lighting relationships much better than OpenDRT does (esp. w.r.t. brightly lit blues), although it does not produce as nice colours. I would agree with him. Once I have all the comments from my team in their final form, I will post them here.

Hi Paul,

Interesting post. Great to read and think about new ideas and approaches.

A few comments on first digestion:

Lightness is a tricky scale to predict, and I would not try to base a requirement on lightness.
An excellent summary paper about the difficulties with lightness:
https://www.sciencedirect.com/science/article/pii/S0960982299002493

But basically, your intuition about the monotonicity of the transform is not wrong; just the dimensions need to be discussed.

Also, I am not sure how you deduce:

“Assuming we choose correctly the output range of our tone curve, those display-referred RGB values can be all made to be within their respective possible range (ie. no clamping).”

while maintaining the (~unbounded) condition:

Further, I am wondering about:

I think you cannot easily relate directly from a CAM tone mapping to display units (this is one fundamental issue with CAM-inspired spaces). Display gamuts are oddly shaped in there.
This needs a bit of thought, I guess, and this also applies to the rest of the algorithm you propose.

You also foresee this here, I think:

And I would like to mention that one crucial attribute for the DRT is “simplicity”, and that the algorithm needs to be GPU-friendly.

This all depends on the definition of “hue”.
There is some degree of freedom in designing your experiments and models and then fitting the models to the experimental results.

Be aware that PQ was fitted to JNDs (just noticeable differences), so encoding images with the minimal possible consumption of bits is perfect in PQ. That is what its design goal was, I guess, and also where it shines.
However, larger-scale equal distances are an entirely different perceptual task and might not agree with JNDs.

I hope some of this helps.
Daniele


So if it works fast on a GPU and looks like ass, this is a design win?

I cannot believe that this sort of metric should hold any weight whatsoever in the act of image formation. Solve the problem of image formation first, worry about the performance and compute implications second. Having priorities this distorted is shocking.

L* is essentially an observer sensation metric derived from linear observer stimulus^1, which can be taken back to display colourimetry. The path from an observer sensation appearance metric back to currents and displays is perhaps feasible.

1 - Yes we can all acknowledge the fundamental brokenness of datasets derived from flicker photometry. Just using it as an example that we can indeed have a potential fluidity between observer sensation and observer stimulus that would allow us to get back to display colourimetry.
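
For reference, since L* keeps coming up: CIE 1976 lightness is an invertible function of relative luminance, which is what makes the round trip from that sensation-side metric back to display colourimetry straightforward.

```latex
% CIE 1976 lightness as a function of relative luminance Y / Y_n
L^{*} =
\begin{cases}
  116\,(Y/Y_n)^{1/3} - 16, & Y/Y_n > (6/29)^{3}\\[4pt]
  (29/3)^{3}\,(Y/Y_n),     & \text{otherwise}
\end{cases}
```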

I haven’t read that in what @daniele said; he only highlighted that simplicity should be a critical requirement, which is reasonable given that it prevents the creation of a steam factory. He never said it should take precedence over everything else.

Any research on this particular topic?

Of course I appreciate and understand the general intention of what Daniele said here. There are some problems with uttering the word “simplicity” however, much like “beautiful”; it’s just a seductive word.

At some point, a model will sufficiently model what it is attempting to achieve, and fail at other potential things. My point is that, to some degree, “simplicity” is just a meaningless word. How many steps do we need to take colourimetry from one display to another? Why? What is “simpler” here?

The critical thing here is a reevaluation of the process of image formation. It’s as much a brainstorming session as anything. I’d hope that such an attempt wasn’t hindered by chasing mythical dragons and leprechauns.

There’s an absolute proverbial metric ton of interesting stuff out there that laser-focuses on specific aspects. I appreciate Lennie / Pokorny / Smith Luminance for a decent wide-lens summation of the surface. 1931’s busted-up aspect is nicely stated in Stockman’s Cone Fundamentals and CIE Standards, page 90.