Gamut Mapping Part 2: Getting to the Display

It may be worth noting that ARRI say in the Alexa LogC Curve – Usage in VFX white paper, with regard to the difference between the 3x3 matrix for “tone-mapped ALEXA wide gamut RGB” and the standard calculated matrix:

Comparing this matrix with the one given above, you may recognize that the latter generates less saturated RGB values (smaller values along the main diagonal). This deviation from the correct conversion is done to compensate for the increase of saturation produced by the tone-map curve.
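To see why a per-channel tone curve can add saturation that the matrix then has to give back, here is a toy numerical sketch (my own illustration, not ARRI's actual curve or matrix): a toe-like curve segment pushes the channel ratios of a dark colour apart, and a matrix with slightly smaller values on its main diagonal pulls them back toward neutral.

```python
import numpy as np

# Toy illustration (not ARRI's actual curve or matrix): a per-channel
# tone curve with a "toe" (steeper-than-linear region near black)
# spreads the RGB channel ratios apart, i.e. it increases saturation.
def toe_curve(x):
    return x ** 1.5  # hypothetical toe-like segment of an s-curve

def sat(c):
    # simple HSV-style saturation: (max - min) / max
    return (c.max() - c.min()) / c.max()

rgb = np.array([0.40, 0.10, 0.10])  # a dark reddish value
mapped = toe_curve(rgb)
print(sat(rgb), sat(mapped))        # saturation rises after the curve

# A matrix with slightly smaller values on the main diagonal
# (mild desaturation toward the neutral axis) can pre-compensate:
desat = 0.9
M = desat * np.eye(3) + (1 - desat) / 3 * np.ones((3, 3))
pre = M @ rgb
print(sat(pre))                     # lower than sat(rgb)
```

The numbers are arbitrary; the point is only the direction of the effect the white paper describes.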

I do not know if the modification to the matrix is made based on objective or subjective criteria. Perhaps @hbrendel or @joseph can comment.

Thanks @jedsmith! I personally think this is going in the right direction, and your tests are really valuable to this group.

I will use the latest version of your DRT to test it thoroughly on 100+ renders from our movies, show them to the supervisors and CTO, and try to provide an official answer from our studio.

Regards,
Chris

Hey,

so I ran Jed’s DRT and the ACES OT on a vast selection of frames and shots to compare. So far the general agreement in the studio is the following:

  • All the Cornell box, sphere and lightsaber images look better. We are pretty pleased with the hue preservation and overall it looks more correct. Even the GI (or the display of the GI) looks better. This makes the prototype really promising to our eyes.
  • We have mixed results on lighting/comp frames coming from our past movies. Sometimes it looks better, sometimes it doesn’t. My take on this is that all those frames/assets have been fine-tuned using a certain DRT and displaying them through a completely different transform/technology doesn’t make them magically look better.

I’ll probably be able to give an official response from the studio next week, but so far we’re still very interested in the possibility of having a hue-preserving Output Transform for ACES 2.0.

Regards,
Chris


@jedsmith

This is super interesting stuff. I’m still trying to get my mind around the algorithm and the parameters but they certainly look comprehensive.

Take this with a fistful of salt, but I did play with the defaults a bit and anecdotally found the following to be a slightly better-balanced starting point, based on my image set on my totally uncalibrated monitor. It also felt like a bit of saturation was needed, so I tossed in a saturation node in the completely wrong place.

naive.2020.02.01_altParams.nk (16.0 KB)

Feel free to throw up on this while I continue to try to figure out the algorithm and all the nasty side effects my parameter changes probably had.

May have gotten a little ahead of myself. The parameter changes I made help some images and hurt others, for sure. In general I think the direction I moved the highlight luminance is better and restores some of the sparkle we were chatting about. Overall saturation seems to help skin tones. I may have gone a touch high on the contrast though, and the sweet spot on the path to white really seems image-dependent.

I promise I’ll stop with the subjective play now and analyze what the heck is actually happening to the pixels :wink:

Cherry-picked a few images and Juxtaposed them: https://academy-vwg-odt-compare-images.netlify.app/

These are really interesting.

I feel the frame of the CG woman’s face looks far less ‘cinematic’ through the ‘naive DRT’, yet provides a clearer view of what’s actually going on with the asset. More true to ‘the eye’ than ‘the camera’. This feels like a win for surfacing and look-dev, where creating a robust and versatile asset is the goal, but a loss for Lighting and Comp, where ‘make my movie look like a movie’ is the unsaid note hiding under everything.

The explosion on the other hand feels far less realistic to me (take that for what it’s worth, my real world exposure to explosions is pretty limited). My assumption here is that the look was developed in a way that takes advantage of the existing RRT/ODT to give the illusion of additional colour complexity. (Would need some real world unclipped explosion frames to really know here, which are hard to come by)

Once you get to the live action stuff, I feel like the ‘naive DRT’ is generally less attractive across the board. Hurting the ‘looks good out of the box’ angle, but helping the ‘allow the LMT to do the work’ angle.


This echoes my current thinking looking at those images.

A possible explanation could be the Bezold-Brücke effect: as luminance increases, hues at wavelengths roughly above 500nm shift toward yellow, while hues at wavelengths under 500nm shift toward blue.

Bezold-Brücke Effect

Now that we have opened the can of worms, shall we talk about colour appearance modeling?

Cheers,

Thomas


Just for reference, here’s the comparison of @jedsmith’s naive DRT with the default parameters vs. the parameter adjustments I made plus saturation. The Nuke node is above.



I made some progress today. I figured out a method for biasing the path to white based on hue, and adjusting output brightness based on hue using rgb ratios. This update has a lot of experimentation, including an attempt at gamut mapping out of gamut colors. I think it’s headed in the right direction, but still lots of work to do.
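For anyone following along, the general shape of a norm-plus-ratios transform like the one described above can be sketched as follows. The tonescale and blend functions here are hypothetical placeholders for illustration only; this is not the code inside the actual node.

```python
import numpy as np

# Sketch of the general "norm + rgb ratios" approach: tonescale a norm,
# carry hue in the ratios, and blend the ratios toward 1.0 (white) as
# the tonescaled norm approaches the display maximum.
def tonescale(x):
    # hypothetical filmic-ish shoulder mapping [0, inf) -> [0, 1)
    return x / (x + 1.0)

def render(rgb):
    rgb = np.asarray(rgb, dtype=float)
    norm = rgb.max()                     # a simple max-RGB norm
    if norm <= 0.0:
        return np.zeros(3)
    ratios = rgb / norm                  # hue/chroma carrier
    out_norm = tonescale(norm)
    blend = out_norm ** 2                # hypothetical path-to-white bias
    ratios = ratios * (1 - blend) + blend
    return out_norm * ratios

# A saturated red desaturates toward white as exposure increases:
print(render([0.5, 0.05, 0.05]))
print(render([8.0, 0.8, 0.8]))
print(render([64.0, 6.4, 6.4]))
```

Biasing `blend` per hue (e.g. letting yellows reach white sooner than blues) would be one way to implement the hue-dependent behaviour mentioned above.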

Here are the same test images run through the new version
https://mega.nz/folder/W65DEaZJ#9Ede0e0Ex2v8aYlB91oS1w

And here is the updated node.


Yeah sorry I was going to do an in-depth demonstration video of how everything is working, but I got distracted today implementing new features and experimenting. More info to come!

No worries at all. I’ll take a look at the new stuff, but I think this is really doing an interesting job of merging the hue-maintaining tone scale with a more natural roll-off in the highlights.

And Juxtaposed here (with numbering trap avoidance :smiley: ): https://academy-vwg-odt-compare-images-v02.netlify.app/

Cool! I have done a quick test with ACEScg colour wheels for comparison.

ACES 1.2 - Rec.709 (ACES) in ctl

Naive DRT V01 (default options)

Naive DRT V02 (default options)

So it makes me wonder:

  • What should be part of the DRT?
  • What should be part of the LMT?
  • What could/should be done in grading?

Regards,
Chris

That colour wheel is a very useful way of looking at what is happening. The v2 seems to exhibit some strange bands of alternating light and dark in the yellow-to-magenta quadrant. These distortions are even more obvious on a vectorscope:

Am I right, @jedsmith, in thinking that v2 is using some hue qualified adjustments, based on a similar method to that used by the red modifier in the RRT sweeteners?

I don’t intend to be negative. You are doing great work. These are really interesting experiments and visualisations, and will give us plenty to talk about in today’s meeting.

Hello, I am the one who made this render.
I can confirm it has been lookdev’ed to look good under the ACES ODT. Though, no manual color values were used, only blackbody values (which I think were sRGB blackbody values).

I would love to try lookdeving the same explosion under the naive DRT to see how it behaves.


Hey,

just wanted to share a couple of thoughts post-meeting#6. After carefully comparing images from the v01 and v02 naive transforms this afternoon, I noticed some weird shifts in the oranges (hence my question in meeting#6). Here is an example of why v01 looks more neutral to me:

Naive transform v01 :
cc24_v1

Naive transform v02 :
cc24_v2

And here are a couple of frames that are worth comparing, I think (if you didn’t have time to look at hundreds of frames): on the left, the Rec.709 (ACES) ctl; on the right, Jed’s DRT. Since I am the person who did this render, I feel that the “render” on the right is much closer to what I expected while working on it:

Same thing here. I was pretty pleased with these results as well:

I think that the red Xmas picture is also a clear example that we’re heading in the right direction. :wink:

And the blue bar of course:

I think the GM VWG pictures were added to the mega.nz link a bit later, so I completely missed them when I downloaded the link two days ago! I always wondered how a more robust output transform would handle this kind of imagery… I guess I have an answer now. :wink: Anyway, just a quick post in case you missed the images added later, and to emphasize the excellent work from @jedsmith.

Update: my little blue sphere also looks very nice with the white specular on it. Even the exposure level on the light itself looks more consistent to me. Sweet! :wink:

Regards,
Chris


I have to say, I struggle to visualise what I would expect from something lit with ACEScg primaries. Since it’s not something any of us have experienced in reality, it’s tricky. There is no doubt the image on the right looks more pleasing. But is it truly representative of an image lit by monochromatic lights which are on (or in fact slightly outside) the spectral locus?

I would say it feels to me like the right hand image is lit with fairly bright, saturated red and green lights, and I can try to imagine what the lights I think are lighting it look like. Maybe like traffic lights, but definitely real world, plausible lights. So if a DRT has to map non-physically-realisable ACEScg lights so they look like the effect of real lights, what would it do with those real lights? It would by definition have to map them to something less saturated. So everything gets desaturated to make room at the boundaries. I think this is the reason a lot of stuff looks rather “muted” under @jedsmith’s DRT.

I really am just thinking aloud here. I’m not saying the mutedness is necessarily a bad thing, if it can be overridden with an LMT. It could be compared to the complaints that go round in circles here where people say ACES is too dark because it doesn’t map diffuse white to display white like the sRGB VLUT that they are used to (it’s a similar “making space near the boundary”). But I am slightly wary of judging a DRT as being good because it makes pleasing looking renderings of CGI with super-saturated lighting where we have no real world reference for what it “should” look like.


Hey Nick,

thanks for thinking out loud like that! I think it is great that people are reacting and asking questions without fear of pushback. :wink: I am very eager to learn with the group and explore the fascinating facets of this display rendering transform. I agree that my statement can be misleading, and I hope I will be able to shine some light on that.

There is maybe a trap I set for myself, which is “what I would expect to see”. For the record, when I was working on this scene, I kept in mind this reference from Star Wars, where the lightsabers are white and the glow appears colorful/saturated.

In a sense, lighting with ACEScg primaries does not make much sense. As you stated, they are outside the spectral locus, i.e. invisible to the human eye. But I can tell you that lighting this scene with BT.2020 primaries gives the same issue. Here is the same scene where I used BT.2020 primaries for the lights:

So one may think that the issue comes from the primaries themselves and that using BT.709 primaries would be the solution. But by doing that, I think we are only offsetting/bypassing the problem.

It is also interesting to notice that this technique (using BT.709 primaries) makes the lights skew: blue goes purple, red goes orange, and green goes yellowish. So it may look as though using BT.709 primaries in an ACEScg working space fixes the issue. But that’s not really the case, as you can see:

This skew could be part of a look, but it is mostly accidental. Some might like it, some might not. But for the sake of the argument, let’s say that as an artist I want control. No happy accidents. The fact that complementary light appears on the path to white lacks control and engineering, in my opinion.

For the record, BT.709 primaries expressed in ACEScg are:

  • (1, 0, 0) → (0.61312, 0.07020, 0.02062)
  • (0, 1, 0) → (0.33951, 0.91636, 0.10958)
  • (0, 0, 1) → (0.04737, 0.01345, 0.86980)
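As a quick sanity check, the three mappings above are just the columns of the linear BT.709-to-ACEScg conversion matrix, so we can verify that white maps to white, and see why a “pure” BT.709 red already carries energy in all three ACEScg channels:

```python
import numpy as np

# The three mappings above, stacked as the columns of the
# linear BT.709 -> ACEScg conversion matrix.
cols = np.array([
    [0.61312, 0.07020, 0.02062],   # BT.709 (1, 0, 0)
    [0.33951, 0.91636, 0.10958],   # BT.709 (0, 1, 0)
    [0.04737, 0.01345, 0.86980],   # BT.709 (0, 0, 1)
]).T

# White maps to white: each row of the matrix sums to ~1.
print(cols.sum(axis=1))

# A "pure" BT.709 red is non-zero in all three ACEScg channels,
# which is part of why BT.709-primary lights behave differently
# on the path to white than the ACEScg primaries themselves.
print(cols @ np.array([1.0, 0.0, 0.0]))
```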

Fun fact: if I were to do a render in an sRGB/BT.709 linear rendering space, with sRGB/BT.709 primaries for the lasers, displayed through an sRGB or BT.1886 EOTF (or even the spi-anim OCIO config), I would get the exact same issue!

Here is an example of an sRGB/BT.709 render displayed with the spi-anim config in Film (sRGB):

Apart from the lightsabers not going to white, the same issues are present here. Fascinating, isn’t it? It is like we are chasing our tails, and I guess this leads to the question: what should happen when the red hits the display maximum?

So maybe the issue here is not whether the laser primaries are inside or outside the spectral locus. I believe the issue we are facing is entirely bound to the display’s limitations.

A couple of days ago, something along these lines was said on Rocket Chat:

In digital RGB there’s just more emission on single R,G,B channels. The channels of a display do not go magically to white. The path to white for overexposure in digital RGB has to be engineered, as opposed to film.

This explanation helped me a lot with what we are trying to achieve. I never thought of that before but it makes so much sense to me.

Here is an example with ACES 1.2 to illustrate how the energy is reflected in the medium up to a certain value, and how it falls apart because of display limitations.

I believe all of the issues we are facing here are related to the display volume and its limitations:

  • Why does the blue skew to purple?
  • Why do high-emission colours skew?
  • Why do we get massive patches of identical colours?

And I think that what Jed is doing with his DRT is the beginning of a solution : compressing the volume. Because we have display limitations. And how we negotiate these limitations is the trickiest part.

As Lars Borg said yesterday :

You’re going to have a detail loss unless you’re very clever on the gamut mapping […].

Here is another exposure sweep through Jed’s DRT to get a clear sense of what’s happening:

And I guess that with this technique there should be a way to be “objective”, because we should be able to say “displaying this blue is beyond the display’s capability”. Either we can display a color on our monitors, or we can’t.

In a way, our conceptual framework is to figure out the range we have on the canvas. We should be focusing on the limitations of the medium, so that the light colors in the working/rendering space can be displayed as well as they can be:

  • If we overcompress, we will get posterization/gamut clipping again.
  • If we undercompress, we may not maximize the image density coming out of the display.
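That over/under-compression trade-off can be made concrete with a minimal threshold-based compressor (an illustrative sketch only, not the curve used in the DRT): distances below a threshold pass through untouched, and everything above rolls off asymptotically toward 1, so nothing clips but the most saturated values get squeezed together.

```python
import numpy as np

# Illustrative threshold-based compressor. Values below `threshold`
# pass through untouched; values above roll off asymptotically toward
# 1.0, with `limit` controlling how aggressive the roll-off is.
def compress(dist, threshold=0.8, limit=1.5):
    dist = np.asarray(dist, dtype=float)
    out = dist.copy()
    over = dist > threshold
    # simple asymptotic (Reinhard-style) roll-off above the threshold
    x = (dist[over] - threshold) / (limit - threshold)
    out[over] = threshold + (1 - threshold) * (x / (1 + x))
    return out

d = np.array([0.5, 0.9, 1.2, 1.5])
print(compress(d))                   # everything lands at or below 1.0
print(compress(d, threshold=0.95))   # higher threshold: less is touched
```

Raising the threshold preserves more in-gamut values unchanged (less overcompression) at the cost of squeezing the out-of-gamut values harder near the boundary, which is exactly the posterization-versus-density negotiation described above.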

Exactly like what Lars said in meeting#6:

The source is unlimited and only when you hit display space, you know what the bounds are.

And also Jed :

It’s tricky, because the path to white only kicks in when one channel hits maximum (a bit before actually), which is tied to the display.

So, here is a potpourri of what I have been trying to understand for the past six months; hopefully it will help make things a bit clearer. Or not. :wink: But I do believe from the bottom of my heart that we should now focus on getting values to the display (broad strokes) and only later talk about sparkling skin and other nice details.

Regards,
Chris

On the subject of display gamut volume, I recorded a video with some plots and visualizations.

I’m sure these are all well-understood concepts for most of the people here, but for a color science neophyte like myself I found it to be a useful visualization exercise. I’ll put the video here in case it helps any other curious folks who might be lurking and are interested in better understanding (one of the) problems at hand.

And here is the Nuke script used in the video in case anyone wants to play with it.
20210204_display-referred_gamut-volume_visualize.nk (61.7 KB)

Edit

Here’s another quick demo visualizing Christopher’s lightsaber render through the same process. Again, there’s no gamut mapping or conversion happening here: it’s just scene-linear RGB in, mapping to display-linear RGB out.


Just watched the video. Probably the best demonstration I have ever seen on the topic and a much better explanation than my clumsy post.

Congrats Jed,
Chris
