Gamut Mapping Part 2: Getting to the Display

I think I might modify the statement about ‘studios using the ODT’ slightly. First, what do we mean by “studio”? It could be many different groups of people, so I’ll interpret it as applying to a specific film/project.

I would agree with the statement that I’ve rarely (possibly never) seen a project use the ODT as-is with no LMT, but I do have 1-2 out of many current projects using an LMT combined with stock ODTs for making their editorial media (no idea what happens in DI). These projects tend to be from the same “film studio”. It is not uncommon for these projects to emulate another look via the inverse of the RRT+ODT.

However, more of a problem for me are the “15-20” other projects which don’t deliver an ACES workflow, but instead hand me a single baked LUT, which makes adapting the look to any of my other output devices more work than it could be. I don’t know if these are baked-down versions of an LMT+RRT+ODT, a print stock emulation, a simple tonescale plus gamut adjustment, or something totally creative (I’ve reverse engineered examples of at least all of these).

Supporting these and other creative choices that are in conflict with each other (e.g. desaturating highlights vs. preserving pure high-luminance colours vs. hue preservation) within the overall framework is a suitable goal to try to achieve.

This is separate from what the stock rendering should be. As Scott suggests, we can break things if it makes sense. As can be seen in previous releases, attempts have been made to preserve historical backwards compatibility by supplying LMTs and/or other emulations “under” the current rendering; we should of course follow suit.

Perhaps we need to categorise the current wish list by stating whether each problem can be agreeably solved under the current framework or not. To me this means that modifications to the current rendering need to be made in the direction of facilitating more possible outputs, moving the “restrictions/constraints” to the LMT.

If we have enough of a case on the ‘not’ side, then it makes sense to consider what adjustments to the framework need to be made to allow for the desired flexibility, whilst minimising the other valid concerns the content owners have, such as wanting to limit the scope of black-box/secret-sauce components like baked LUTs. I think it is OK to have such a component for creativity, but it shouldn’t prevent anybody from retargeting to an alternate output device; i.e. I do not think we can go as far as “providing a set of LUTs” that replace the whole Output Transform.

Kevin


Thanks for the clarification, Kevin! This reinforces my impression that we’re one of the few using ACES as a color management system without any modification/LMT.

Chris


Maybe, or they will move on :slight_smile:

I don’t know, I have not seen a lot of people from MPC or ILM roaming around here. I certainly see a lot of the same faces, but saying we are representative of the entire ACES user base would be wrong. When in doubt, think about all the studios in China, Vietnam or India that are using ACES but do not come here because of the language barrier; maybe we don’t care, I won’t be the judge here. With that said, we are certainly the people interested in steering the project direction, which gives us a ton of power BUT also a lot of responsibility.

Here is an experiment for you guys: take a dozen or more shots with as varied content as possible from your current shows, process them through 0.1.1, and see if it addresses your wish list; if not, report why. You should also try with Jed’s nuggets above.

You probably wanted to say “NOT be part” :wink: You guys are totally entitled to wish for a hue-preserving rendering, but I would like to point out that non-hue-preserving rendering accounts for the majority of all the content produced by the entertainment industry. Given that, would I say that all the movies shot with ARRI and their rendering pipeline are broken, or that the rendering of Planet of the Apes is busted because of hue skews? I would not dare. Whether those productions would have benefited from hue-preserving rendering, I cannot say; at any rate, they are out and made their producers and consumers happy.

Broken could be the normal, the neutral. Nobody would be foolish enough to type on a keyboard optimized for Morse code, right?

To me, innovative would be to offer more choice to our user base, not impose it. Quite coincidentally, I will finish by quoting @KevinJW:

Thanks Thomas for the answer!

I understand that there might be studios using ACES which are not present on ACEScentral. But I also think we should not forget about the people who are active here and try to point out real issues from productions. Not saying that is the case, though. :wink:

I’ll try the ACES 0.1.1 experiment out of curiosity. Probably this week when I find some time.

And just for the record, I meant in a clumsy way that hue skews should be part of a separate look, aka an LMT. Sorry for the inaccuracy.

Take care,
Chris

From my standpoint, you are adamant that hue-preserving rendering is a requirement for your workflows, but at the same time I have the feeling, and I could be wrong, that you have not tested it thoroughly.

If this is the direction you would like to go, I would not treat it as a curiosity experiment but as a truly objective process, to come up with results that can inform the VWG direction.

Some of the issues with ACES 0.1.x pertained to noise, camera black levels and negative values; with pure CG rendering you should not really suffer from those, and should be in a position where, hopefully, the DRT does not exhibit too many artefacts.

My above suggestion is valid for any changes we make: they must be motivated, and we ideally need to show, objectively, that they fix problems or make the work easier. I’m quite glad that @jedsmith is spearheading the effort on hue-preserving rendering and has been producing images; please keep them coming!

Cheers,

Thomas


At the risk of repeating myself, I am not adamant about anything. I work for people who are adamant about certain things, as you have certainly understood by now. :wink: I hope you realize that I am currently in a tricky situation and am a tiny bit desperate for solutions.

I personally feel that I have done nothing but test ACES upside-down for the past two years (which has been a great experience), hence my questions, posts and the images I have provided to this group. Only today I shared three more images comparing RED IPP2 with the ACES Rec.709 Output Transform. You should also see the amount of documentation about ACES at work; it’s insane. :scream:

The main reason 0.1.1 was discarded was its lack of invertibility, which, if I have understood correctly, is a requirement for the ACES 2.0 Output Transform. Nonetheless, I did some tests with 0.1.1 not more than ten days ago. I was just curious about why some people liked it and others did not.

I do understand that changes must be motivated, but I think that the examples and explanations from my numerous posts speak for themselves. I have also tried to provide, for each render, an example of what I think it should look like, as a point of comparison.

I am more than happy to do more testing but I would ask you to point me in the right direction if possible. :slight_smile:

@sdyer I have a question for you: is a production using the 0.1.1 Output Transform currently possible? What would be the cons? No HDR Output Transform? No Resolve implementation for DI?

Thanks guys!
Chris

Far be it from me to shoot the messenger! Please invite them here, the more the merrier! I think we have distracted the thread enough, so I will stop; we can continue on Rocket/Slack anyway.

Possible, yes … suggested, I wouldn’t say. As Thomas mentioned, there are a bunch of reasons we moved past 0.1.1. I’d do a lot of testing before even thinking about it. Yes, it’s not invertible, but you can also get some strange stuff happening due to the per-hue adjustments. Is the show CG-only, or is there any live action?

Full CG stuff. We have tried many options:

  • Gamut Compress algorithm: not suitable for our needs.
  • Switching the ODT: we don’t have the knowledge to do that properly, and it may cause DI issues (HDR?).
  • Using 0.1.1: I just checked and it is lacking many things, like ACEScg?

Hence my multiple posts and examples, in order to share what I have been through for the past two years. But don’t sweat it, I need to sleep on it and see things more clearly. At this point, I feel like I have hijacked this thread in an inappropriate way. Sorry about that, @jedsmith!

Thanks @Alexander_Forsythe!

Chris

Getting my thread back on track! :slight_smile:

I have built an updated, slightly less naive display transform which incorporates a pivoted slope adjustment and a compression curve for shadows and highlights.

EDIT: the link above points to a gist that has since been updated; here is the Nuke script in question:
NaiveDisplayTransformPivoted_v01.nk (14.8 KB)

Again, this is not intended for production use, nor as a proposal for a display rendering transform. It should be treated as an experiment, and tested as such.

I’ll try to do a more in-depth video on how it works, but here’s a quick text-based rundown:

  • Middle grey in → out specifies the pivot.
  • Contrast adjusts the slope of the linear section of the curve.
  • Black is the number of stops below middle grey that maps to display 0.
  • White is the number of stops above middle grey that maps to display 1 after compression.
  • The compression curves use the very simple power( p ) compression function.
  • dlin start and end specify where the linear section starts and ends (in display-linear values).
  • p for shoulder and toe adjusts the slope of the compression curve, i.e. how aggressive it is. (A toy sketch of the curve follows below.)
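
For the curious, here is a rough Python sketch of the idea, using parameter names that mirror the list above. It is a simplification, not a line-for-line port of the Nuke script: the toe is omitted, and the white point is pinned to display 1.0 by a simple normalisation.

```python
import numpy as np

def power_compress(x, p):
    # Very simple power(p) compression: maps [0, inf) asymptotically onto [0, 1).
    # A larger p gives a harder knee, i.e. a more aggressive roll-off.
    return x / (1.0 + x ** p) ** (1.0 / p)

def pivoted_tonescale(x, mg_in=0.18, mg_out=0.18, contrast=1.2,
                      white_stops=6.0, p_shoulder=1.2):
    x = np.maximum(x, 1e-10)
    # Contrast pivoted at middle grey: a straight line of slope `contrast`
    # in log-log space passing through (mg_in, mg_out).
    y = mg_out * (x / mg_in) ** contrast
    # Pre-compression display-linear value reached `white_stops` above grey;
    # dividing by its compressed value pins that point exactly to display 1.0.
    y_white = mg_out * (2.0 ** white_stops) ** contrast
    out = power_compress(y, p_shoulder) / power_compress(y_white, p_shoulder)
    return np.clip(out, 0.0, 1.0)

# Middle grey lands near mg_out (the shoulder pulls it down slightly),
# and a value six stops above middle grey maps to display 1.0.
print(pivoted_tonescale(np.array([0.18, 0.18 * 2.0 ** 6])))
```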

Path to White
I’m going to adopt Timothy Lottes’ terminology here in an attempt to improve on the rather unsophisticated phrase “highlight desaturation”: I will use the phrase “path to white above display maximum”. I described this a bit in my last posts. With a chromaticity-preserving display rendering transform, when one channel clips and the hue in question is a saturated color, we need to guide the other channels towards the achromatic axis, so that the appearance of a bright highlight is maintained. Without this behavior, brightly colored highlights look unnaturally dark.

To achieve this I’m using a technique similar to the one in the Lottes presentation: moving the RGB Ratios towards a neutral color before multiplying them back into the norm. I calculate the factor from the compression amount. The path-to-white start is basically a blackpoint adjustment on that factor, the bias is a gamma adjustment on it, and the huebias is an additional per-channel gamma adjustment. The latter introduces hue skews in the path from saturated color to the achromatic axis, but it is interesting to play with, so I left a control there for it.
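
A stripped-down sketch of that structure in Python (again simplified: here the tonescaled norm just stands in for the actual compression amount, which the Nuke script derives from the curve itself):

```python
import numpy as np

def render_with_path_to_white(rgb, tonescale, start=0.4, bias=1.5,
                              huebias=(1.0, 1.0, 1.0)):
    norm = max(float(np.max(rgb)), 1e-10)
    ratios = rgb / norm                   # hue and purity carried by the RGB ratios
    ts = tonescale(norm)                  # tonescaled norm, display linear in [0, 1)
    f = ts                                # stand-in for the "compression amount"
    f = np.clip((f - start) / (1.0 - start), 0.0, 1.0)  # 'start': blackpoint on f
    f = f ** bias                                       # 'bias': gamma on f
    f_rgb = f ** np.asarray(huebias)      # 'huebias': per-channel gamma (hue skews)
    ratios = ratios + (1.0 - ratios) * f_rgb  # move ratios toward neutral (1,1,1)
    return ratios * ts                    # multiply back into the tonescaled norm

# A saturated scene-linear highlight is guided toward the achromatic axis.
toy_tonescale = lambda v: v / (1.0 + v)   # placeholder curve for the demo
print(render_with_path_to_white(np.array([8.0, 1.0, 0.5]), toy_tonescale))
```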

Since we are now compressing shadows as well, there is also a set of controls for desaturating those compressed pixels. One of the things that has always bothered me about per-channel RGB renderings is how highlights get desaturated but shadows get more saturated, leading to unnatural colors in shadow areas.

The techniques used here for the path-to-white adjustments are very much still in progress. Ideally I’d like to figure out how to modify the path to white based on hue without losing chromaticity invariance. Currently, in my aesthetic opinion, red and blue desaturate too quickly, and I would like to be able to control this. If anyone has any ideas here, please chime in.

Some Pictures
I’ve decided to upload the source images I’m using for testing to a Dropbox folder. It contains 2K EXR images from a variety of open sources. I thought it would be convenient for people to grab them, but I won’t submit them to the official dropbox in case there are issues with rights. If this is not okay, please let me know and I’ll take them down.

I’ve uploaded 200 comparison images of the Naive Display Transform vs the ACES Rec.709 rendering here (sorry, my Dropbox is full, so it’s a mega.nz link):
https://mega.nz/folder/7mJiyDBT#Mt1gJcgvAtRj45VNqTzRaQ

Skin tones could still use some modification, and I think some brightening of reddish-orange hues would help the fire and explosion frames. Overall I think it’s looking pretty decent: a neutral, lower-contrast rendering which preserves chromaticity values.

Gamut Mapping
Another thing is that this transform still does not have any sort of gamut mapping. You can still see artifacts from this with chromaticities that are outside of the display gamut, for example the visible lines in this hue sweep:

Or the dark fringing around the lights in this portrait:

Interestingly, there are also issues caused by the 3x3 matrix converting from ACEScg to Rec.709, which happens in display linear after the tonescale.

In this hue sweep, which is a sweep of Rec.709 primaries instead of the ACEScg primaries you saw above, you can see some subtle lines and artifacting from yellow to magenta. This is because the 3x3 matrix boosts the brightness of red when converting to Rec.709, and the result then gets clamped. This clamping causes a hue shift towards cyan.
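
A quick numerical illustration of the clamping problem (the matrix below is the commonly quoted approximate ACEScg to Rec.709 conversion, rounded):

```python
import numpy as np

# Approximate ACEScg (AP1, ~D60) to Rec.709 (D65) matrix, Bradford-adapted.
AP1_TO_REC709 = np.array([
    [ 1.70505, -0.62179, -0.08326],
    [-0.13026,  1.14080, -0.01055],
    [-0.02400, -0.12897,  1.15297],
])

# A saturated red, already tonescaled into display linear.
acescg = np.array([0.8, 0.3, 0.1])
rec709 = AP1_TO_REC709 @ acescg
print(rec709)                      # ~[1.17, 0.24, 0.06]: red pushed above 1.0

clamped = np.clip(rec709, 0.0, 1.0)
# Clamping only the red channel raises G and B relative to R, i.e. the
# clamped colour drifts toward red's complement: cyan.
print(rec709 / rec709[0])          # channel ratios before the clamp
print(clamped / clamped[0])        # ratios after: G and B proportionally larger
```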

Should the 3x3 matrix to display primaries happen before the tonescale? Should it include some type of gamut mapping? Should gamut mapping be used instead of a 3x3 matrix? I have a growing hunch that the reason blues look a little purple and oranges look a little red in Rec.709 is the perceptually non-uniform nature of a 3x3 chromaticity transformation. Can a real color scientist tell me whether this is “color pragmatist bullshit”? I would be very curious to know.

Happy to hear thoughts and feedback!


For those who are interested in going back to the future, I’ve put together a pure Nuke node implementation of the ACES 0.1.1 Output Transform.


The intent here is to allow people to explore what they do and don’t like about the old 0.1.1 transform, and to make it easier to compare with the newer stuff (like running it alongside Jed’s 1.0.3 → 1.2 Nuke nodes).

I’ve also exposed toggles for most of the parts of the RRT and ODT that have a material effect on the look, so you can turn them on and off to see what does what.


The bases of BT.2020/ACEScg and BT.709 are not aligned; if you look at the BT.2407 Annexes, great care is taken to align the hues using gamut mapping.


What is the destination gamut volume here?

Is the result mapped to a target volume?

If the destination target can be generated from the source, as you have here for volume results, might it be possible to lean on a perceptually uniform model as the final translation step?

Cherry-picked a few images and juxtaposed them: https://academy-vwg-odt-compare-images-v01.netlify.app/

I would be interested to see a closer fit to the current ACES SDR Output Transform.


It may be worth noting what ARRI say in the Alexa LogC Curve – Usage in VFX white paper, with regard to the difference between the 3x3 matrix for “tone-mapped ALEXA wide gamut RGB” and the standard calculated matrix:

“Comparing this matrix with the one given above, you may recognize that the latter generates less saturated RGB values (smaller values along the main diagonal). This deviation from the correct conversion is done to compensate for the increase of saturation produced by the tone-map curve.”

I do not know if the modification to the matrix is made based on objective or subjective criteria. Perhaps @hbrendel or @joseph can comment.
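
One way to see the effect described in the quote: for a normalised matrix whose rows sum to one, blending it toward a matrix built from luminance weights shrinks the main diagonal and desaturates the output. A toy Python sketch (purely illustrative, not ARRI’s actual construction):

```python
import numpy as np

def desaturate_matrix(m, amount, weights=(0.2126, 0.7152, 0.0722)):
    # Blend a normalised colour matrix toward one whose every row is the
    # Rec.709 luma weights. Rows still sum to 1.0 (white stays white), but the
    # main diagonal shrinks and the off-diagonals grow, reducing saturation.
    w = np.tile(np.asarray(weights), (3, 1))
    return (1.0 - amount) * np.asarray(m) + amount * w

softer = desaturate_matrix(np.eye(3), 0.25)
print(softer)                                # smaller values on the main diagonal
print(softer @ np.array([1.0, 0.2, 0.1]))    # the red is pulled toward neutral
```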

Thanks @jedsmith! I personally think that this is going in the right direction, and your tests are completely necessary to this group.

I will use the latest version of your DRT to test it thoroughly on 100+ renders from our movies, submit them to the supervisors and CTO, and will try to provide an official answer from our studio.

Regards,
Chris

Hey,

So I ran Jed’s DRT and the ACES Output Transform on a vast selection of frames and shots to compare. So far, the general agreement in the studio is the following:

  • All the Cornell box, sphere and light-saber images look better. We are pretty pleased with the hue preservation, and overall it looks more correct. Even the GI (or the display of the GI) looks better. This makes this prototype really promising to our eyes.
  • We have mixed results on lighting/comp frames coming from our past movies. Sometimes it looks better, sometimes it doesn’t. My take on this is that all those frames/assets were fine-tuned using a certain DRT, and displaying them through a completely different transform/technology doesn’t magically make them look better.

I’ll probably be able to give an official response from the studio next week, but so far we’re still very interested in the possibility of having a hue-preserving Output Transform for ACES 2.0.

Regards,
Chris


@jedsmith

This is super interesting stuff. I’m still trying to get my mind around the algorithm and the parameters but they certainly look comprehensive.

Take this with a fistful of salt, but I did play with the defaults a bit, and anecdotally found the following to be a slightly better-balanced starting point, based on my image set on my totally uncalibrated monitor. It also felt like a bit of saturation was needed, so I tossed in a saturation node in the completely wrong place.

naive.2020.02.01_altParams.nk (16.0 KB)

Feel free to throw up on this while I continue to try to figure out the algorithm and all the nasty side effects my parameter changes probably had.

I may have gotten a little ahead of myself. The parameter changes I made help some images and hurt others, for sure. In general I think the direction I moved with the highlight luminance is better and restores some of the sparkle we were chatting about. Overall saturation seems to help skin tones. I may have gone a touch high on the contrast, though, and the sweet spot on the path to white really seems image-dependent.

I promise I’ll stop with the subjective play now and analyze what the heck is actually happening to the pixels :wink:

Cherry-picked a few images and juxtaposed them: https://academy-vwg-odt-compare-images.netlify.app/

These are really interesting.

I feel the frame of the CG woman’s face looks far less ‘cinematic’ through the ‘naive DRT’, yet provides a clearer view of what’s actually going on with the asset: more true to ‘the eye’ than ‘the camera’. This feels like a win for surfacing and look-dev, where creating a robust and versatile asset is the goal, but a loss for lighting and comp, where ‘make my movie look like a movie’ is the unsaid note hiding under everything.

The explosion, on the other hand, feels far less realistic to me (take that for what it’s worth; my real-world exposure to explosions is pretty limited). My assumption here is that the look was developed in a way that takes advantage of the existing RRT/ODT to give the illusion of additional colour complexity. (We would need some real-world unclipped explosion frames to really know, and those are hard to come by.)

Once you get to the live-action stuff, I feel like the ‘naive DRT’ is generally less attractive across the board, hurting the ‘looks good out of the box’ angle but helping the ‘allow the LMT to do the work’ angle.
