ZCAM for Nuke

As I wanted to look at the way the ZCAM M correlate compression moves chromaticities around, I did it on a full polar grid:

It is not just the blue that slides toward cyan, red turns more orange too, etc…


Red only turns orange in the brighter ranges though, and it's a hue shift which everybody is used to and which is useful for fire VFX. Dark reds don't do this. Personally, I don't mind that much when bright blues turn to cyan (although those who want to nail sky colour 100% will mind more). It's dark blues turning dark cyan that are disturbing, and also the fact that the shift is very sensitive to slight variations in the input, e.g. the noise that's present in the sRGB sphere image.

Agreed. Now, something that kind of disturbs me is that we are currently mainly using ZCAM as a perceptually uniform space, whereas it is much more than that, i.e. it also offers viewing conditions compensation.

If we only use it for its perceptual uniformity, I think we might be better off looking at simpler models like IPT / ICtCp / Jzazbz / Oklab that might behave better with hue uniformity when gamut mapping. We are more sensitive to hue changes compared to chroma changes, so it would make sense to me to privilege a model that is optimized for hue uniformity. I don't think ZCAM has been optimized to be particularly good with hue uniformity compared to, say, IPT. ZCAM is a jack-of-all-trades, and when you favour one correlate, e.g. hue, the other ones are impacted negatively. Put another way: you cannot have it all.

TLDR: Not much point using ZCAM if we don’t use its viewing conditions modeling capabilities.




I have the same questions, although I would say that one of the things the 0.07 version of Matthias' ZCAM model clearly demonstrated by exposing all the possible parameters is that, although we can agree on basic principles, there is no pleasing everyone with a single set of default values for a DRT. Our favoured settings are very far from the default settings and were chosen with constraints very specific to what we're trying to achieve. I find it refreshing though to be able to explicitly set reference white to 200 for SDR, as that is the basic assumption we're working with, instead of sRGB's standardized 80 nits.

(Pseudo) Analysis of the Correlation of the Hue and Colourfulness ZCAM Attributes using Dominant Wavelengths

I wanted to quantify how much the hue attribute of ZCAM is affected by the compression of its colourfulness attribute, i.e. quantify the correlation between the two attributes. Ideally, a perceptually uniform model should exhibit no correlation; unfortunately, such a model does not really exist!

We can quantify the change of a single hue in terms of its Dominant Wavelength shift, i.e. how much the Dominant Wavelength has changed after the colourfulness has been reduced.

Given the Spectral Locus, we can trivially compute all the trajectories its chromaticities follow when the ZCAM colourfulness is reduced:

With the trajectories obtained, we need to find the intersections with the gamut of interest, e.g. sRGB. Raycasting in curved space is not trivial, but fortunately, we have generated a lot of points to draw the trajectories, thus we can find the closest point for any given trajectory to the sRGB gamut segments:
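For what it's worth, that closest-point search can be sketched with plain NumPy (the segment array layout and helper name here are mine for illustration, not from the actual analysis code):

```python
import numpy as np

def closest_point_on_segments(p, segs):
    """Return the closest point to p over a set of line segments.
    segs has shape (N, 2, 2): N segments, each with two 2D endpoints.
    Used here to snap a trajectory sample to the sRGB gamut boundary."""
    p = np.asarray(p, dtype=float)
    a, b = segs[:, 0], segs[:, 1]
    ab = b - a
    # Project p onto each segment, clamping to the segment extent.
    t = np.clip(np.einsum('ij,ij->i', p - a, ab) /
                np.einsum('ij,ij->i', ab, ab), 0.0, 1.0)
    cand = a + t[:, None] * ab
    # Keep the candidate with the smallest distance to p.
    return cand[np.argmin(np.linalg.norm(cand - p, axis=1))]
```

With enough trajectory samples, running this per sample gives a good approximation of the true intersection without having to raycast in the curved space.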

And finally trace a ray between that point and the whitepoint to find the output Dominant Wavelength:

We can then quantify visually and numerically the shifts.
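The final ray-tracing step, intersecting the whitepoint ray with the spectral locus to read off a Dominant Wavelength, can be sketched like this. The locus samples below are a coarse approximation of the CIE 1931 xy table, and the helper is illustrative rather than the actual notebook code:

```python
import numpy as np

# Approximate CIE 1931 xy chromaticities of the spectral locus at a few
# sample wavelengths (nm). A real analysis would use the full 1 nm table.
LOCUS = np.array([
    [450.0, 0.1566, 0.0177],
    [475.0, 0.1096, 0.0868],
    [500.0, 0.0082, 0.5384],
    [525.0, 0.1142, 0.8262],
    [550.0, 0.3016, 0.6923],
    [575.0, 0.4788, 0.5202],
    [600.0, 0.6270, 0.3725],
    [650.0, 0.7260, 0.2740],
])

def dominant_wavelength(xy, white=(0.3127, 0.3290)):
    """Trace a ray from the whitepoint through xy and return the
    wavelength where it crosses the piecewise-linear spectral locus."""
    w = np.asarray(white)
    d = np.asarray(xy) - w
    for (wl0, *a), (wl1, *b) in zip(LOCUS[:-1], LOCUS[1:]):
        a, b = np.array(a), np.array(b)
        # Solve w + t*d = a + s*(b - a), i.e. t*d + s*(a - b) = a - w.
        m = np.column_stack([d, a - b])
        if abs(np.linalg.det(m)) < 1e-12:
            continue  # ray parallel to this segment
        t, s = np.linalg.solve(m, a - w)
        if t > 0 and 0.0 <= s <= 1.0:
            return wl0 + s * (wl1 - wl0)
    return None  # ray exits through the purple line instead
```

With the fine sampling used for the trajectories, this gives the output Dominant Wavelength for every input one.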

Here, each column is split diagonally like a saw: The upper left triangles are the input Dominant Wavelengths, the bottom right triangles are the output Dominant Wavelengths.

We can confirm that the blues are turning cyan, there is a very noticeable slant. Here are some cherry-picked Dominant Wavelengths in close-up:

Doing so, we have also obtained a function that we could use to potentially correct the hue shifts:

The next step is to clean up the code so that I can run it easily on other models!
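As a sketch of how such a correction function could be applied: once input → output Dominant Wavelength pairs have been measured, the shift can be inverted by simple interpolation. The sample values below are made up purely for illustration:

```python
import numpy as np

# Hypothetical samples of the measured mapping: for each input Dominant
# Wavelength (nm), the output Dominant Wavelength after M compression.
# Real values would come from the trajectory analysis described above.
wl_in  = np.array([460.0, 470.0, 480.0, 490.0, 500.0])
wl_out = np.array([468.0, 479.0, 488.0, 496.0, 503.0])  # blues pushed toward cyan

def corrected_wavelength(target_wl):
    """Return the input wavelength that, after compression, lands on
    target_wl, i.e. invert the measured shift by interpolation.
    Requires wl_out to be monotonically increasing."""
    return np.interp(target_wl, wl_out, wl_in)
```

Pre-biasing the hue by this inverse mapping before compression would, in principle, cancel the measured shift, at least within the sampled range.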




Agreed. Now, something that kind of disturbs me is that we are currently mainly using ZCAM as a perceptually uniform space, whereas it is much more than that, i.e. it also offers viewing conditions compensation.

I agree here, and I think it is worth noting that even if you end up using the viewing condition bits of ZCAM, you don't necessarily have to use the full model in the final implementation. It will probably be fairly simple to produce a simplified model that captures that particular part of its behavior at lower computational cost and with better behavior outside the visual gamut (if required).

Regarding hue lines


It is correct for the lines to not line up here though. All models designed to model the perception of hue are explicitly designed to not have straight lines in chromaticity space, in order to model the Abney effect. For the attributes to not be correlated perceptually, they have to have some correlation in linear space.

Regarding the blue primary → 1:1:1 interpolations:

Hue and lightness not being accurate for colors far from the experimental data used to derive the models (and, in the case of AP0, beyond real colors altogether) is of course a part of the problem, but I don't think it is the only reason for problems in those plots.

I think this plot by Alex Fry is a great way of showing why gamut mapping along straight hue lines to the exact boundary of RGB gamuts is problematic in general:


In all these cases the original line is a smooth line of increasing chroma. As the line gets projected to the RGB gamut, the deep blue lines follow a model of a perceptual hue line, which takes a path through cyan to model the Abney effect. When compressing that to the RGB gamut, the large change in available gamut between blue and cyan results in the line bending backwards, preserving hue but resulting in a really strong distortion to chroma.

This is with Rec.2020 to sRGB and with Oklab, since that is what I had set up, but it behaves very similarly in this case.

First image is gamut compression to a smooth approximation of the gamut, followed by soft clipping to RGB. Second image is compression in straight hue lines.

The first example will have more hue distortions, but does not get the strange chroma reversal. Arguably, maintaining a smooth curve and avoiding chroma reversals like this at the expense of perfect hue preservation is the better choice in cases like this.

Maybe a bit more clear in this blue to purple gradient:


This is what I was thinking we should probably try to do: a simpler perceptually uniform model, e.g. ICtCp or Oklab, with the viewing conditions adaptation layer.

Yes! However, it is arguably not artist friendly that changing colourfulness changes the hue so dramatically, the blue → cyan shift specifically is quite problematic (and has been pointed out a few times already). I’m almost tempted to say that it produces the opposite of what people are used to, i.e. hues are changing toward cyan rather than purple.

A long time ago, in this VWG, I asked which colour appearance phenomena we want to model/use, it is still a relevant question.


Any chance you could describe what you mean by this in a little more depth? What I’m understanding from it is

  1. Gamut compress in a perceptual space to an approximation of the target gamut hull that is slightly larger but for which said hull is more smoothly varying in said perceptual space
  2. Not as sure what you mean by “soft clip to RGB”

Any chance you could describe what you mean by this in a little more depth?


You got 1. exactly right.

After that you convert to the target RGB gamut. Since we compressed to a larger space, you can still have colors outside the gamut, with RGB values less than zero or larger than one. So we need some way to map those values into the [0,1] range. Just clipping to that range does OK, but will introduce some non-smoothness into your mapping. So instead you can use a curve like this to get a smooth transition near 0.0 and 1.0:


Then there are some more details to avoid the effect near white and black (since this would distort grayscale values) and to use the curve at both 0.0 and 1.0. The full implementation is here:
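As a rough sketch of the idea (not the linked implementation, and without the near-white/black exemption mentioned above), a C1-continuous soft clip can be built from quadratic transitions around 0 and 1:

```python
import numpy as np

def soft_clip(x, w=0.1):
    """Smoothly clamp x to [0, 1]: identity away from the bounds, with a
    quadratic transition of half-width w around 0 and 1 so the mapping
    stays C1-continuous. A sketch, not the actual implementation."""
    x = np.asarray(x, dtype=float)
    # Smooth transition around 0: hard zero below -w, identity above +w.
    lo = np.where(x < -w, 0.0,
         np.where(x > w, x, (x + w) ** 2 / (4.0 * w)))
    # Mirror the same construction around 1.
    return np.where(lo > 1.0 + w, 1.0,
           np.where(lo < 1.0 - w, lo, 1.0 - (1.0 + w - lo) ** 2 / (4.0 * w)))
```

Values well inside [w, 1-w] pass through unchanged, while out-of-gamut values land exactly on 0 or 1, with matching slopes at the joins instead of the hard kink a plain clip would produce.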

This is the difference between soft and hard clipping, when looking at the hull of the gamut approximation:

It would be nice to try this with a tighter gamut approximation as well, so that it touches/almost touches most of the RGB gamut corners. That would make the flat areas smaller. I haven't had time to do that yet though.


For those who are interested, I have now pushed version 9 of the DRT ZCAM prototype to my GitHub repo:


Based on the insights provided by @bottosson, this version adds a feature to apply smoothing to the edges of the target RGB cube used for finding the gamut boundary when compressing the colorfulness (M) correlate:

The result is a reduction in the color fringing that was previously observed by @jmgilbert on the Rec.2020 luminous spheres test image:


Since the smoothing comes at the expense of saturation at the edges and corners of the target gamut, the "limit" parameter of the compression function has been reduced to compensate.
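One common way to build such a smoothed cube, shown here only to illustrate the trade-off (the actual DRT code may differ), is to replace the cube's max-norm with a finite p-norm, which rounds off exactly the edges and corners where saturation is lost:

```python
import numpy as np

def in_smoothed_cube(rgb, p=8.0):
    """Inside test for a unit RGB cube with rounded edges/corners: replace
    the max-norm (the exact cube) with a finite p-norm around the cube
    centre. Larger p hugs the cube more tightly; p -> inf recovers the
    hard cube. An illustrative stand-in, not the DRT's exact smoothing."""
    c = 2.0 * np.asarray(rgb, dtype=float) - 1.0  # centre the unit cube on 0
    return float(np.sum(np.abs(c) ** p)) ** (1.0 / p) <= 1.0
```

Note that face centres like pure-ish primaries at (1, 0.5, 0.5) survive, while the fully saturated corner (1, 1, 1-adjacent corners such as pure blue at a cube vertex) gets shaved off, which is exactly why the "limit" parameter needs adjusting.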

In addition, the option to toggle between projecting the lightness (J) correlate towards either the lightness of the gamut cusp or SSTS mid-grey has been replaced by a slider to blend between the two.


Great work @matthias.scharfenber!

Worth also noting that the hue skew toward cyan is still present:

Notice the clumping of cyan.


As mentioned in the last VWG meeting, I have now pushed version 10 of the ZCAM IzMh DRT to GitHub:

This version is now a pure Blink script node (the Blink code is available as a separate file as well) and runs significantly faster than the previous ones.
v10 also replaces the 2-stage gamut compression approach with a single stage that compresses both the J & M correlates at the same time towards a focal point.
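A minimal sketch of the single-stage idea, with an illustrative Reinhard-style curve (the parameter names and the curve itself are assumptions for illustration, not the actual Blink code):

```python
import numpy as np

def compress_jm(J, M, J_focal, boundary_t, limit=1.2, threshold=0.75):
    """Single-stage sketch: move (J, M) along the straight line toward the
    focal point (J_focal, 0), compressing only the portion of the distance
    beyond `threshold` of the gamut boundary (located at parametric
    distance `boundary_t` along that line)."""
    d = np.hypot(J - J_focal, M)   # distance from the focal point
    t = d / boundary_t             # 1.0 == exactly on the gamut boundary
    if t <= threshold:
        return J, M                # inside the protected core: unchanged
    # Reinhard-style compression of the overshoot, asymptoting at `limit`.
    over = (t - threshold) / (limit - threshold)
    t_c = threshold + (limit - threshold) * over / (1.0 + over)
    s = t_c / t
    return J_focal + (J - J_focal) * s, M * s
```

Because J and M are scaled by the same factor `s`, the point slides along one straight line toward the focal point, which is what makes this a single stage rather than separate J and M passes.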

I’ve also added some tooltips to the Nuke node and hopefully helpful comments to the code.


Jed has been working on a number of look tools, one of the more recent ones being ShadowContrast. OpenDRT is not intended to “look good” without an added LMT, so I thought it could be interesting to show ZCAM and OpenDRT + LMT side by side. The “look” consists simply of a ShadowContrast node with settings intended to approximate a tone scale similar to ACES.

Of course there are the familiar differences in the two, but I found it striking how similar they look side by side with the similar tone scales.


Interesting that the gradient blending is really different here:

Maybe the compression does not help and produces some artefacts too.


Thanks for those tests, they are quite interesting. I still think the biggest issue I have with the ZCAM DRT is this:

We can also notice this “cyan” effect on the Light Sabers and Blue Bar. Otherwise, indeed, results are not too far apart.


I still don’t understand what “lit with ACEScg blue primary” really means. That aside…

If we say that the scene objects have the chromaticity of the ACEScg blue primary, why would one expect that to end up as being blue (e.g. rgb=[0,0,1]) in display code value space?

The chromaticity of the ACEScg “blue” primary has a hue that’s defined by JzAzBz in ZCAM. We can’t see the ACEScg blue, so we don’t know its hue. Assuming we believe ZCAM works (a big assumption), if one follows that hue line into a gamut that’s smaller than ACEScg and able to be reproduced on a display, that’s probably the real hue of the ACEScg primary.


In addition to the CG renders of blue light, here’s a photo of blue light (coming from the gamut mapping test images)

Something else perhaps worth noting: the green dude looks less saturated in ZCAM. That might be from the highlight desaturation though, rather than a hue shift.

Anyone want to explain to me how a fit model (replace the F with SH) based around a display EOTF JND experiment could ever possibly even remotely work even faintly like anything related to the HVS?



It is a hue shift in the greens toward cyan. For example:
