Nuke Gamut Compression node

I’m looking to get some clarity on which version of the Nuke gamut compression node to use.

There is the VWG repo on GitHub (ampas/aces-vwg-gamut-mapping-2020, in the model directory), which I had assumed would be the official place. However, this appears to be the "kitchen sink" version, and I had thought the group's intention was not to present so many options to users. So I'm hesitant to give this to my VFX students.

There was a much more pared-down version provided for the "gamut compression user testing" that only has a drop-down for power, which seems like a good option to me.

Just wanted to check in and see what the group would recommend. Thanks!

There is currently no official release version for Nuke (or any other DCC). The hope is that with the release of OCIOv2.1, which includes the Reference Gamut Compression, this will soon become available in Nuke, and will become the proper way to apply the RGC.

Other application developers are also working on native implementations of the RGC.

It may prove necessary to provide stop-gap versions for Nuke and other applications before an OCIOv2.1 implementation is released, and for people working with older versions of Nuke.

The version used for testing was exactly the same as the "kitchen sink" version, but with the parameter controls hidden. It is of course perfectly possible to do this to prevent people from using the tool creatively.
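For anyone curious what those hidden parameters actually do: below is a minimal Python sketch of the per-channel distance compression the RGC is built around, using what I believe are the published ACES 1.3 defaults (per-channel thresholds and limits, shared power of 1.2). It's an illustration of the curve, not a reference implementation.

```python
# Sketch of the ACES 1.3 Reference Gamut Compression distance curve.
# Constants are the published RGC defaults (assumed here, not normative).
THR = (0.815, 0.803, 0.880)  # per-channel compression thresholds
LIM = (1.147, 1.264, 1.312)  # per-channel distance limits
PWR = 1.2                    # shared power of the compression curve

def compress_distance(d, thr, lim, pwr):
    """Compress a normalized distance d.

    Distances at or below thr pass through unchanged; d == lim maps
    to exactly 1.0; larger distances approach an asymptote just above 1.0.
    """
    if d <= thr:
        return d
    # Scale factor chosen so the curve passes through (lim, 1.0).
    s = (lim - thr) / (((1.0 - thr) / (lim - thr)) ** -pwr - 1.0) ** (1.0 / pwr)
    x = (d - thr) / s
    return thr + s * x / (1.0 + x ** pwr) ** (1.0 / pwr)

def gamut_compress(rgb):
    """Apply the compression to a linear ACEScg triplet."""
    ach = max(rgb)  # achromatic axis
    if ach == 0.0:
        return rgb
    out = []
    for v, thr, lim in zip(rgb, THR, LIM):
        d = (ach - v) / abs(ach)  # normalized distance from achromatic
        out.append(ach - compress_distance(d, thr, lim, PWR) * abs(ach))
    return tuple(out)
```

The key property is that in-gamut values (distance below threshold) are passed through untouched, which is why, as noted above, the compression is invisible on most footage and only bites on values near or outside the AP1 boundary.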


Thanks that’s quite helpful Nick.

Reading through the group's papers, I see there seems to be agreement to have gamut mapping "always on" (but not baked in) for things like on-set monitoring for VFX. I'm thinking it could likewise be good to have it "always on" in CG rendering, for example in Maya, again only affecting the display and not baked in. Has there been any discussion of this by the group?

Hi Derek!

When rendering in CG, your shaders/textures etc. should already be within a defined range based on your working space (i.e. ACEScg or sRGB), so theoretically the values should all be in range, and the gamut compression should not be necessary, nor should it meaningfully affect the render if applied. The exception is values that sit at fully saturated primaries, where you might notice a difference.

Unless you are talking about a way to have the RGC applied to, say, a background plate brought into Maya for reference?

Hi Carol!

Here’s an example render. The top row is rendered with ACEScg primaries, the bottom row with sRGB primaries. Especially with the ACEScg primaries, with highly saturated colors on the borders of the gamut, the primaries shift into secondaries as brightness increases: red goes to yellow, green to cyan, blue to magenta. I believe this is because in CG it’s quite easy to pick colors that would normally only be possible in a laser or neon light. The effect is less pronounced with sRGB primaries (scene-linear sRGB), but it still causes the blue to shift to magenta. In all cases the gamut compression really improves it.
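The mechanism behind that primary-to-secondary skew can be sketched with a toy per-channel curve. To be clear, this uses a simple Reinhard curve as a stand-in, not the actual ACES Output Transform, which is considerably more involved; but any curve applied independently per channel shows the same behavior:

```python
# Illustration of why per-channel tone mapping skews saturated primaries
# toward secondaries as exposure rises. The curve below is a simple
# Reinhard stand-in, NOT the ACES Output Transform.

def reinhard(x):
    """Toy per-channel tone curve: compresses [0, inf) into [0, 1)."""
    return x / (x + 1.0)

def display(rgb, exposure):
    """Apply the curve independently to each channel."""
    return tuple(reinhard(c * exposure) for c in rgb)

saturated_red = (1.0, 0.05, 0.0)  # hypothetical near-primary red

for exposure in (1.0, 10.0, 100.0):
    r, g, b = display(saturated_red, exposure)
    print(f"exposure {exposure:6.1f}: green/red ratio = {g / r:.3f}")
```

With this stand-in curve the green/red ratio climbs from roughly 0.10 at exposure 1 to roughly 0.84 at exposure 100: the red channel saturates first while green keeps rising, so the red skews toward yellow, exactly the behavior described above.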

I’d be happy to upload the EXRs if desired, but I believe the images by @ChrisBrejon basically show the same thing, just with light sabers instead of jelly beans :slight_smile:

P.S. I’ve been reading through the Netflix Partner Help docs on ACES, and they are an amazing resource!

Totally. What you’re seeing is compression around the boundaries of AP1, which comes from where our safety gamut is defined (the CC24 values), well inside AP1. This is intended, but it is not the main purpose of the gamut compression, since how you define “improve” in this case is relatively subjective.

All that to say, if you find in your workflows that applying the GC to renders improves your work, there is no reason not to use it. We just won’t be recommending it as an official workflow :slight_smile:

Also, keep an eye on the work being done in the Output Transform group, as some of the qualities you are observing here (the blue-to-magenta shift, etc.) are actually due to the OT and not necessarily a gamut problem, even if the gamut compression can help.

Thanks for the kind words on the docs - always open to feedback if you have it!


That sounds like an extremely reasonable approach. Thank you!


This has nothing to do with heavily chroma-laden imagery, and everything to do with incorrectly mapping on a per-channel stimulus basis. Think of it as mapping fret positions on a per-string basis on a guitar, without a care in the world about the original chord being formed.

Chroma-laden mixtures come with some specific sensation-based issues, but literally none of those are even remotely possible to tackle with such a broken underlying system. That is, if the goal is colour constancy, that goal is based on a robust sensation framework, which is in turn firmly anchored on a robust stimulus framework.

In fact, it is clearly demonstrable that using the mechanic employed in systems such as ACES will fail even the most basic principles of “colour management”, largely because “colour” is a sensation metric. Even if we ignore that, and try for “stimulus management”, this too fails, because every single stimulus mixture that cannot be expressed at a display becomes device-dependent.

And to be clear, even the most rudimentary stimulus management isn’t employed. Will it be in 2.0? Maybe.


Hey, this reminds me of some tests I did a year ago. I gave up on this approach because using the Gamut Mapping Compressor as an LMT was not possible with OCIOv1. And having learned about the “chromaticity preserving” approach in the Output Transform VWG, I was no longer interested in this hack for my full CG renders.

Hopefully! Looking forward to this stimulus/sensation conversation in our next VWG meeting, then!


