Gamut Mapping Part 2: Getting to the Display

Nice stuff!

I have been playing with the Nuke script this week and it has given me interesting results. I think it is great to go back to basics in a way (at least for me!) in order to fully understand what we are dealing with.

I have made a quick image of the tool’s options/parameters if anyone is interested. It helped me grasp these concepts better.

I am looking forward to future updates of the tool. Let me know if my “mockup” is incorrect, I’d be happy to update it.

Chris

1 Like

Thanks @ChrisBrejon! Now that you’ve made this excellent diagram explaining everything, I reworked the parameters a bit to hopefully make it clearer what is going on. I realized after reading my post again with a fresh brain that there were some things that could be clarified and simplified.


I reworked the parameters so that the naming is more consistent, and so that the linear value that is calculated is displayed right below the value you are adjusting.

Everything functions the same way except for the limit parameter, which I’ve renamed to compression and altered how it works. Instead of mapping infinity to the value that you specify in stops above lin_white, you now specify a value in stops above lin_white, and that value is compressed to display maximum. In other words, compression calculates the max value: the maximum scene-linear value that will be represented on the display.
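To make the relationship concrete, here is a toy sketch of that kind of curve. This is not the actual curve in the Nuke script, just an illustrative extended-Reinhard shape; the `lin_white` and `compression` names mirror the parameters described above:

```python
def tonescale(x, lin_white=1.0, compression=4.0):
    """Toy compression curve: the scene-linear value sitting
    `compression` stops above `lin_white` maps exactly to display max (1.0)."""
    # maximum scene-linear value that will be represented on the display
    max_val = lin_white * 2.0 ** compression
    # extended Reinhard: maps 0 -> 0 and max_val -> 1.0, rolling off smoothly
    return x * (1.0 + x / max_val ** 2) / (1.0 + x)

# with compression = 4 stops, scene-linear 16.0 lands on display maximum
print(tonescale(16.0))  # -> 1.0
```

Raising `compression` pushes `max_val` higher, so more of the scene range is squeezed under display maximum with a gentler shoulder.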

I’ve also exposed the strength parameter, so you can adjust the slope of the compression curve. It seems like you need control over this to get the best results, depending on the dwhite and compression values you have set.

As before here’s a little video demo of the updates and changes I’ve made.

EDIT - Here is the nuke script described in the above screenshot
NaiveDisplayTransform_v02.nk (6.9 KB)

I also made some further usability improvements and additional parameters, which I did not make a post about. Here is that updated version:
NaiveDisplayTransform_v03.nk (7.6 KB)

EOTF should be EOTF⁻¹ or “Inverse EOTF”, and we should try to remove gamma if possible too!
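For reference, the inverse EOTF Thomas is pointing at is the encoding step; for an sRGB display it is the familiar piecewise function from IEC 61966-2-1 (a sketch, not the Nuke script’s implementation):

```python
def srgb_inverse_eotf(v):
    """sRGB inverse EOTF: linear display value in [0, 1] -> code value."""
    if v <= 0.0031308:
        # linear segment near black avoids an infinite slope at zero
        return 12.92 * v
    # power segment with offset, roughly gamma 2.4 overall
    return 1.055 * v ** (1.0 / 2.4) - 0.055

print(srgb_inverse_eotf(0.18))  # mid grey encodes to roughly 0.461
```

This is why labelling the node “gamma” is misleading: the encoding is not a pure power function.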

3 Likes

Thanks @Thomas_Mansencal! Indeed, that’d be much better this way! I’ll update the sketch ASAP.

Cool @jedsmith! I’ll give it a try next week! Great update and video explanation. Thanks for sharing!

Jed,

Thanks so much for the work and thoughts. There’s a lot here to react to. I’ll just throw out a few points to consider.

  • I don’t know if I’d characterize highlight desaturation as a problem. Sometimes it’s the effect you want, and sometimes it isn’t.

  • I think there’s value in recognizing, either conceptually or in practice, that the rendering transform, at some point, should yield display colorimetry, and that turning that display colorimetry into a signal is a process that’s pretty straightforward and objective (e.g. convert to display primaries, encode with the inverse EOTF, etc). The only wrinkle is gamut mapping. We’ve talked about the separation of rendering and creation of the display encoding a bunch in the past, and it’s fine to separate that out, but it’s not really part of the rendering discussion per se. We discussed the concept in this document: http://j.mp/TB-2014-013.

  • Displaying linear scene data on a display, within the limits of that display’s capability, is a helpful tool at times. With the ACES 1.0 work we did that a lot. I also suggested doing that during the gamut mapping work to help visualize the ACES data. Generally speaking, to make a reasonable reproduction on an output medium you’re going to need to compensate for viewing flare and the surround associated with the display environment (among other things). But those two usually end up increasing the slope of the tone scale. I’d highly recommend Chapter 5 of Digital Color Management: Encoding Solutions Second Edition which does the topic way more justice than I ever could.

3 Likes

Hey Alex, thanks for the thoughts!

Thank you! I’m in this to learn, and I appreciate the reference. I’ll take a look at this. As I mentioned in my disclaimer above, this experiment is purely to better understand the problem at hand, not something I am putting forward to be considered an aesthetically pleasing display rendering.

My apologies for the imprecise terminology. As I’ve mentioned in the past, I could more accurately be referred to as a “Color Pragmatist” than a “Color Scientist”.

The problem that I’m attempting to understand in my ramblings above could perhaps be more precisely expressed.

What I’m trying to understand is what happens to color when approaching and passing the “top end” or maximum luminance of a display-referred gamut boundary. That is, for an RGB triplet which contains a color that is too bright to be represented on a display device, what happens to hue when one channel clips to display maximum and the other channels do not. These unnatural renderings of hue in the upper portion of the luminance range are why I think gamut reduction as luminance approaches display maximum is critically important to rendering a good looking picture, rather than an aesthetic preference. The example photos that you posted here show the issue pretty clearly.

Of course the implicit assumption in all of this is that we would want a chromaticity preserving tonescale rather than a per-channel rgb approach which applies this luminance-gamut limiting as a byproduct.
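The skew described above is easy to reproduce numerically. A toy comparison of a naive per-channel clip against a norm-based (chromaticity preserving) scale, assuming display maximum is 1.0:

```python
def clip(rgb):
    # naive per-channel clip to display maximum
    return [min(c, 1.0) for c in rgb]

def scale_to_max(rgb):
    # chromaticity preserving: divide the whole triplet by its largest channel
    m = max(rgb)
    return [c / m for c in rgb] if m > 1.0 else list(rgb)

bright_orange = [4.0, 2.0, 0.5]      # R:G ratio is 2:1
print(clip(bright_orange))           # [1.0, 1.0, 0.5] -> R:G becomes 1:1, skews yellow
print(scale_to_max(bright_orange))   # [1.0, 0.5, 0.125] -> ratios (hue) kept, but darker
```

The second approach keeps the channel ratios intact at the cost of luminance, which is exactly the trade-off being debated in this thread.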

Can you elaborate on why gamut mapping should not be a part of the rendering discussion? Surely there should be some type of handling, in the default display rendering transform, for colorimetry that cannot be reproduced on the display device? I’m sure it’s something silly I didn’t think of, but like I said I’m in this to learn :slight_smile:

2 Likes

Hi Jed,

can you post again a link to your latest version of your Nuke Gizmo? Thanks.

Hey @TooDee - Sorry I didn’t specify in my post: it’s updated at the same link.

1 Like

Totally understand … thank you for playing with this and sharing what you’ve learned.

I think that’s the part I’m reacting to really. I don’t know if we do or don’t. A chromaticity preserving tone scale sounds great, makes great plots, looks great on some images and absolutely horrible on others. We spent a ton of time in the ACES 1.0 development process looking at a chromaticity preserving tone scale and it was like an onion. We’d peel back layers and just run into more and more issues. The models kept getting more complex and eventually we pulled the plug. I think it’s worth looking at again but I’m not sure we want to make it a hard requirement.

This is just the way my brain splits up the problem, but going from scene-referred colorimetry to the colorimetry as we want the image to appear on the display is the rendering. For the sake of argument let’s call that output XYZ. There’s a separate, and relatively easy, task that needs to occur to take those output XYZ values and turn them into a set of code values. When those code values drive the display in question, measurements of the light coming from the display should match the rendering’s output XYZ values.


Obviously, depending on the details of the rendering, the output XYZ values may exceed what the display can actually show. If that happens, choices need to be made in the signal generation stage. With this sort of conceptual split though, you can do interesting things. For instance, you can make the rendering create output XYZ values for a limited dynamic range and a small set of primaries (e.g. Rec.709) and then generate the corresponding code values to correctly display those output XYZ values on a display with a much higher dynamic range and a larger set of primaries. This effectively lets you use an HDR monitor as an SDR monitor without changing the monitor calibration. @sdyer and I do this all the time with our BVM-X300.
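The signal-generation stage described here could be sketched as follows. This is an illustrative Rec.709/sRGB example, not anyone’s production pipeline: the matrix is the standard D65 XYZ-to-Rec.709 one, a plain gamma 2.4 stands in for the display’s inverse EOTF, and a naive clip stands in for the gamut-mapping “wrinkle”:

```python
XYZ_TO_REC709 = [  # XYZ (D65 white) -> linear Rec.709/sRGB primaries
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
]

def xyz_to_code_values(xyz):
    # 1. convert output XYZ to the display's primaries
    rgb = [sum(m * c for m, c in zip(row, xyz)) for row in XYZ_TO_REC709]
    # 2. values outside the display volume need gamut mapping; naive clip here
    rgb = [min(max(c, 0.0), 1.0) for c in rgb]
    # 3. encode with the inverse EOTF (gamma 2.4 as a stand-in)
    return [c ** (1.0 / 2.4) for c in rgb]

# D65 white maps to full code values
print(xyz_to_code_values([0.9505, 1.0, 1.089]))  # ~[1.0, 1.0, 1.0]
```

Everything upstream of this function is “rendering” in the sense used above; everything inside it is the objective encoding step, apart from step 2, which is where the gamut-mapping choices live.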

1 Like

The simple fact that it would be such a departure from the current rendering makes it a no-go as a default for me. It does not mean we should not provide a way to do it though; the advantage is that it could be introduced alongside, letting people test and migrate their workflows to a hue-preserving one.

100% agreed on the gamut-mapping point!

Cheers,

Thomas

Just to complement the hue-preserving discussion, I remembered the presentation Alex Fry (EA) gave at the GDC 2017: https://www.gdcvault.com/play/1024466/High-Dynamic-Range-Color-Grading

2 Likes

Sorry, I may not have responded to this directly. I wasn’t trying to say gamut mapping wasn’t part of the rendering, but rather of the conversion from output colorimetry to display code values. If the rendering produces values outside the display’s capability, obviously gamut mapping is needed. Probably better to make the rendering not produce out-of-gamut values to begin with though.

Thanks for clarifying. As is so common in conversations about color, differently understood terminology seems to be the source of my misunderstanding here. As your diagram shows, I believe you are thinking about “Rendering” as a separate step from the transform to get to display colorimetry, where I was thinking of it as the whole process to get from scene-linear to display.

I do wonder though - is it really possible to separate rendering from transformation to display colorimetry?

Isn’t this impossible? - (This may be another stupid question - like I said, color pragmatist)

What if we need to create a rendering for Rec.2020 and for Rec.709? Are we going to use the same gamut mapping approach? Would we limit the colors to the smaller one? Or would there be two different approaches specialized for each display gamut?

Certainly we could conceptually move the gamut mapping problem into a box between “Output XYZ” and “Display Code Values” but it seems to me like this would still need to be a specialized process for each output display gamut?

Please let me know if I’m missing something here, I am genuinely curious.

There has been a very interesting conversation on slack and I thought it would be worth doing a summary on acescentral.

@Thomas_Mansencal :

my only point is that removing hue shifts as a default is not desirable because of the arguments Alex Forsythe brought up, and the simple fact that it would change the look massively. […] It reminded me of Alex Fry’s GDC presentation from 2017 where he explained that he was adamant about removing them but it did not work for all cases, quite the opposite!

To which @Troy_James_Sobotka replied :

Optionally, I think the point is being missed.

  1. The display can’t represent the colour of the flame, so it skews.
  2. It is possible, if one were so inclined, to add a saturation adjustment to return more of the “yellowish” flame.
  3. It is impossible to remove the saturation surface of the image once the skews are introduced.
    […] But again, the crux of this discussion is essentially about why bother with colour science at all if it never comes out of the display?

The conversation continued with @Thomas_Mansencal explaining :

Without entering into subjective discussions, changing something like the default tonescale contrast has much less impact than making the tonescale hue preserving. So making the latter change a default in a new ACES rendering transform has much deeper consequences, so I don’t think it is a good idea; it should be optional. If you have built a ton of assets under the current rendering transform and you end up in a situation where the new one forces you to rebalance all of them, it is not a great change. […] Not saying there is no problem, just saying that the medicine should not be the nuclear option.

@daniele then added :

If the skew is part of the Display Rendering it needs to do the same skew for all deliverables. In fact your cg assets will look completely different in HDR under current transforms. […] Well, the starting point should be same appearance, I guess.

To which Thomas replied :

They will (look different) and it is acknowledged; it entirely depends on the deliverables. And the client might also either be fine with a different look in HDR or even request it. […] Just look at the Haywire trailer by Steven Soderbergh… Not saying that I like it, but the client pays the bills here and he is certainly pushing for that look. I don’t know if the appearance should be the same across vastly different classes of displays: are clients expecting that the appearance at 100 nits, at 10000 nits and in the real world are the same? It is a fundamental question to which I have no answer. Say I’m paying extra bucks to go to HDR cinema, what is the point of getting there if things look the same as SDR exhibition? That is the type of question producers ask themselves.

Then Daniele explained :

I am not saying HDR and SDR should be identical. But a yellow candle in SDR should not be orange in HDR, I guess. At least I know many colourists who are confused when that happens.

To which Thomas replied :

I certainly could see that happening, but all your pyro effects turning red/white under a hue-preserving rendering transform is equally bad, and it does not affect a single person but hundreds of people. All of a sudden, all your Lookdev, FX, Lighting & Rendering teams are affected. Sometimes, broken by force of adoption is what is correct, but we are entering philosophical discussions here. And again, what feels broken for you might be a feature or the expected behaviour for others. […] I can certainly point at the dangers of changing the rendering transform in such a dramatic way that it preserves hues. […] Changes to a large system like ACES must be incremental; you just can’t modify something as critical as the RRT without triggering a storm. Instead you should give people the option to change their workflow, at their will and gradually. Happy to be corrected but it seems like the only sane thing to do to avoid having an angry mob knocking at the Academy’s door!

Jed entered the conversation :

2.0 seems like a good version to make a change to the default display rendering transform. I’m not sure there would be an angry mob if it looked better than the current version. I think there are a lot of people, especially in VFX, who use ACES because it is easy and off the shelf, who think that it is a “neutral” view transform, and are trusting it to represent their asset work faithfully, and they do not realise what is happening to the color they are so careful to create once it goes through the view transform to the display. Certainly a chromaticity preserving rendering method is not the easiest path forward, but I do not think an approach should be discarded because it is difficult or inconvenient. As professional image makers we should be able to make intentional decisions about the appearance of color, to have control over what is happening. I think there is benefit in the approach of trying to pull apart the reasons why certain things are happening, why we are seeing certain artifacts. It is very difficult to break down problems like this because there are so many factors at play. But I think if we try to isolate problem areas maybe there are solutions we haven’t thought of.

And Daniele suggested :

You can make an LMT to match ACES 1.0?

@nick also shared his thoughts :

If we offer a few options out of the box, then I think the default should feel close to what we have now, to keep the people who already use ACES comfortable. But we could have a hue preserving alternate, and maybe people would gradually move over. Then we can deprecate it in 3.0.

A big question brought up by Thomas and Troy was about objectivity :

The problem is that looking better is highly subjective and any large DRT change has potentially a large cost.

I can’t help but find this “highly subjective” line of discourse quite disturbing. What’s our ground truth? We have a working space. We have values. We are not displaying them. At all. Not even remotely.

Interestingly enough it looks like the famous K1S1 DRT is not hue-preserving.

Then @sdyer entered the conversation :

[…] all options are on the table - if they meet the requirements. Which we are trying to define. Everyone here seems to be acting like decisions have already been made. Can we focus on defining absolute requirements, and then maybe even things that “would be nice to have” (if we can get them without breaking other stuff)?

We then agreed that we would need to see examples of the chromaticity preserving tone scale that “sounds great, makes great plots, looks great on some images and absolutely horrible on others” to better understand what happened during the creation of ACES 1.0.

I also tried to give my tuppence :

Keeping skews for a major release doesn’t sound right to me. For a major release, we should not be shy about changing stuff. This is the moment to do things “right” or “better”… not 3.0. By definition, a major change means stuff will change and eventually break.

To which Scott replied :

I can tell you I am 100% in favor of changing stuff, even drastically, IF it fixes the things we consider broken and still delivers on the other things we want it to keep doing. The requirements define the solution. So figure out what it must do and then we can focus our efforts on making something that does. We can’t sell a big change just for the sake of changing things. But if we can show the many ways in which it improves known issues with v1, we will have a much easier time convincing the stalwarts to change and the holdouts to revisit it.

Finally Daniele shared some thoughts :

I still have difficulty accepting the “cg assets” argument. If the assets are so fragile to the DRT in use, how do they survive a normal grading session? All the cg and VFX elements I got my hands on work very well with different DRTs and looks. They look different, but so does all the live action too. […] In a way a cleaner DRT should actually help to produce neutral assets.

Final words from Thomas about CG issues :

Not all assets are equally fragile, pyro elements certainly would be. You have a whole class of people using pre-rendered texture sheets and such in games. And they tweak the look through the DRT directly.

I have tried to reproduce this conversation in the most faithful way. If you think I haven’t done a good job, please let me know and I will modify/delete the post. But I thought it would be important to share it with everyone on acescentral.

  • Should the new Output Transforms be hue-preserving?
  • Should there be a default behaviour and then an optional one?
  • Do we want a smooth transition or a fresh start?
  • Should we consult (again) with studios/people using ACES to see what they think?

Regards,
Chris

4 Likes

Thanks @ChrisBrejon,

To complement the CG assets point, I put an emphasis on pyro, but it can be extended to any emissive source generally, or to assets with high reflectance or that we are super picky about, e.g. CG skin.

In support of what the widescale effect could be, here is an old public thread that highlights the type of challenges that a change of DRT can induce.

The worst that could happen would be people trying again to apply the inverse of the new RRT with some custom massaging to maintain the previous look. Notably, a significant portion of the time I spend on ACEScentral has been either explaining to people the ACES look compared to their “Old sRGB” one BUT also discouraging, sometimes with friction, their attempts at maintaining it by applying the inverse RRT on their textures.

I have been fortunate enough to work in a few studios and experienced DRT changes from the first or second row, and no, it is not as simple as flicking a switch.

Cheers,

Thomas

1 Like

@Thomas_Mansencal I have seen the effects of DRT change on cg asset look development. I agree that it is not a simple or well-understood problem, and it can cause serious headaches if the workflow is not understood and planned properly from start to finish.

My perspective here will be from feature film VFX - and from my personal experiences.

A conflict often arises in VFX studios when there is a desire for a consistent internal DRT in cg asset look development. An internal DRT has benefits for workflow and re-usability, and makes things easier for lookdev and texture. It can help a lot of things.

However, on different shows, the DRT from the client DI house is very rarely consistent. Every show has a customized display rendering transform.

At some point the cg asset has to be composited into a shot and sent to the client. If significant dialing of lookdev has happened under an internal DRT, and that asset gets rendered and comped, there is a real danger that the appearance will change significantly - especially on more saturated colors like pyro, and more sensitive subjects like human skin, as you mention.

So my question is: if a studio is using an internal DRT for cg asset work, don’t they already face the problem you mention? And as @daniele pointed out, wouldn’t a more chromaticity-accurate display rendering transform actually help solve some of these issues?

In the past, I have suggested workflows where initial cg asset work happens with the internal DRT, but at a certain point, evaluation of that work should happen under the show DRT, to avoid a big surprise when the work goes into comp.

Maybe cg asset look development work should even be checked under multiple DRTs in order to verify that assumptions are correct: an internal DRT, the show DRT, and a simple linear-to-linear display as you were talking about in your earlier post.

2 Likes

This is a good point; here is my take on it: the Lookdev cost to fine-tune any reused assets from a facility asset library is almost always factored into the bid for the show. If you are smart, you develop your reusable assets under the facility DRT and you delegate show-specific changes as late as possible, e.g. Lighting and Compositing tweaks.

When you get to the point where you have a consistent set of changes or improvements that look good under the show DRT, you can merge them into the show variant as a base, and you might also decide to merge them back into the asset library variant if they look good under the facility DRT.

The idea is to try branching from the asset library/templates and merge back in show improvements if deemed appropriate.

This works, and I lookdev’ed a lot of assets without ever looking at the show DRT. Depending on your workflow, fine-tuning as mentioned above can be delegated to the Lighting artists. It is also extremely hard for an asset that worked well under the constrained setup of the Lookdev lighting scenarios to systematically behave correctly under the thousands of shot Lighting variants, and artists probably have hooks to tweak the shaders at this stage of work.

On paper, it should, but see EA, which is a public experience trivial to quote :slight_smile: At any rate, I’m not saying that we should not have such a feature; I’m saying that we should not make it a default for the new rendering transforms: it has not only a dramatic impact on the appearance of everything but also on the behaviour of the tools themselves, i.e. people are used to working with hue skewing.

Cheers,

Thomas

1 Like

Hum… I don’t know to be honest. From what I understand:

  • Few studios actually use the ACES Output Transforms. So the impact would not be as dramatic as it may sound.
  • Assets are probably lookdev’ed already with different lighting/viewing conditions and should be robust enough to survive this stress test.
  • Lighting artists tweak assets on a per-shot basis all the time anyway.
  • CG assets in my experience are never re-used in their exact original state. Between an original movie and a sequel, a couple of years have generally passed, with the introduction of new technologies. So we always end up adapting and tweaking the assets for each show for this particular reason.
  • I think this Working Group has some of the best people in the industry in it and I fear that these limitations will inhibit us. The Academy should show the way rather than trying to respect a legacy behavior.
  • It is also important to think of the scope of this group, and I do think that the people who actually use and care about ACES are on acescentral. Will they manifest their concerns by reading this post? I’m not sure. What I am trying to say is: can we reach out to MPC (for example) and see if they’d be concerned?
  • People who like the skewing look could/should use an LMT in my opinion. Or even stick to ACES 1.2 if needed?

I just asked a VFX supervisor about this specific topic. The short answer is:

It does not change anything for me if the Output Transforms are modified. We will adapt the assets, especially if it is to reach a better look.

And I asked another CG supervisor about this :

In fact, I compare this issue to the way we manage projects. Each project has a version configuration of particular software and plugins. This project is supposed to work with this version of the software. And we know very well that intermediate releases should not break backward compatibility, while major releases potentially would. It seems a little illusory to me to imagine a version system where you never break anything. And then hey… we’re not talking about making assets unusable… just a little effort to adapt.

It is interesting to notice that they both used the word adapt. Because it is probably what they are used to doing already. :wink:

And I will check with our CTO tomorrow to see if I can get an official response from our studio. But so far my personal take is that any animation studio or school I have worked with would prefer a better Output Transform, even if it means tweaking some assets, rather than the contrary. We should aim at the highest quality possible in my opinion.

Happy to discuss,
Chris

PS: I agree that this is the limited point of view of a lighting artist working on animated features. Especially the argument that a couple of years have passed between movies.

2 Likes

Do we have any statistics about this? Seems quite important! :slight_smile:

But it is certainly not mutually exclusive with having a main area, either facility or show centric, where you define 98% of the Look, right?

How many shows are you working on concurrently at Illumination, 2-5? Scale that to 15-20 and then you will have a very different perspective to the problem :slight_smile:

As I read it, you have already decided that the current per-channel processing is legacy, which is entirely fine (although the VWG and the TAC will need to be convinced), while I’m suggesting, again, that such drastic changes, if adopted, should be introduced incrementally with all the carefulness required.

I think you either misrepresented the topic here or your CG supervisor did not understand it properly. This group would simply not exist if the Academy was not willing to implement any changes or have ACES evolve. Also, we would not have written the ACES RAE paper 3 or 4 years ago if we knew the Academy would not be responsive to our proposals. The crux of the problem is how you introduce those changes to your userbase. The Academy, and us, have a responsibility toward them and we simply cannot act like cowboys. If we want to introduce critical look changes, e.g. ACES 0.1.x to ACES 1.x.x, we have the responsibility to introduce them gracefully; a hot-swap is anything but graceful IMHO.

It is a given :slight_smile: but you don’t necessarily need to resort to the BFG to get there.

Cheers,

Thomas

Thanks Thomas! As always, your help and insight are much appreciated.

A few disclaimers:

  • I owe you guys everything! I have been part of the ACES community for more than two years and it has been a great journey and learning experience.
  • Apologies if I ask too many questions or post too much, but I like to get to the bottom of things, as you guys have probably noticed.

My assumption that few studios actually use the ACES Output Transforms comes from personal conversations. I was able to discuss this topic with several people and this is the answer I have been given several times. I’d love to help organize an official poll among VFX and animation studios to come up with accurate data.

Totally! But may I ask which studio? Is there actually a studio with 15 concurrent shows using the ACES Output Transforms for lookdev and rendering? You’re not talking about Weta, right? Since you guys do not use ACES.

Not trying to convince anyone here. :wink: If per-channel lookup is responsible for blue spheres going purple, I can only encourage looking at other options. About the drastic changes and the carefulness they require: I have asked another CG sup who uses ACES, and our CTO. They both gave me the same answer: if there is a jump in quality, we’d rather patch the assets. I guess without seeing any images or examples, it is impossible to take a decision anyway.

I don’t necessarily disagree with you, but I think it would only be fair to ask the userbase in my opinion. I shared the link to meeting #1 on LinkedIn so people could express their needs. It got 3169 views but 0 answers. My take on this is that people/companies/studios will adapt to what ACES 2.0 offers. And that the userbase that will be affected by ACES 2.0 is already present on acescentral… Isn’t that the case?

More thoughts on the topic. I have been thinking about the audience after reading this very interesting post. From our conversation, I see two different audiences for ACES:

  • Expert users/companies, whose main request for ACES 2.0 would be Daniele’s idea: modularity and flexibility (I hate to put words into other people’s mouths but that’s the way I currently see things, especially after the last meeting).
  • Non-expert users/studios (a group to which I belong), and I guess it would be fair to say that our request would be a more robust Output Transform, since we do not necessarily have access to color scientists. That’s at least the feedback I got from my VFX sup friends…

As far as I know, Illumination (with our current slate of movies) is currently the biggest studio using ACES from input to final output. I’d love to be proven wrong on this one though. :wink: But I have never heard of a studio that uses ACES the way we do, except On Animation Montreal.

I was able to talk to our CTO today and he was absolutely positive that if we ever made a sequel to a movie done with ACES 1.2, he’d rather use ACES 2.0 to get a quality jump and patch thousands of assets if needed. So I think it would be fair to say that this is the official position of our studio. I’d love to hear from the userbase if anyone disagrees. And of course, you could still have an LMT emulating ACES 1.2 if needed. Or, worst case scenario, stick to ACES 1.2 until you can make the jump.

Furthermore, we also think that hue skews should be part of a look. If a client wants a hue skew, it should be done through an LMT as well, not through the default look. Don’t we agree that the ACES 2.0 Output Transform should be as neutral as possible? And that hue skews are not neutral by definition? Anyway, I just want to emphasize that all the examples I have provided come from real production situations. The saturated spheres, characters and volumetric lights come from our extensive testing with ACES.

A final word from another CG sup friend (I have asked seven supervisors so far):

As long as ACES 2.0 does not change the whole workflow, it will be easily accepted I think. Let’s not kid ourselves: when you do a sequel, you have to re-check every single asset, because a lot of project features change from one project to another. Look at Random Walk SSS or Ptex…

I think that doing the right thing (if there’s such a thing) is a great way to be innovative and that it will make more people want to use ACES. But I hear you: we cannot act like cowboys and should consult with the userbase if that’s an option.

Looking forward to your future answers,
Chris

2 Likes