ZCAM for Nuke

Isn’t that exactly what colours with CIExy coordinates outside the spectrum locus do? They are “meaningless” but we preserve them through processing so we can decide what best to do with them.

2 Likes

I don’t think so? It depends on what it is relative to. Given ARRI dominates this space, it’s probably fair to say that folks push around AWG values, and then go back to LogC, and onward. Relative to the camera observer stimulus encoding, those values hold meaning, and given the vast majority of the efforts out there that use ARRI pipes, it’s fair to say that they begin and end life as AWG, not CIE xy.

With that said, it doesn’t address the point that if we are specifying meaningless values in an encoding, we’d end up knee deep right back in device dependency again, right smack dab where the mess is now.

I’m open to the idea if someone is able to explain how that works in a system that is attempting to appearance match. What is the intended stimulus? How does such a system appearance match something that doesn’t actually mean the same thing across different devices?

If the ultimate input is xy Foo, and medium A can’t represent it, so it represents it as something else, and medium B also can’t represent it, so it represents it as something else again, and medium C also can’t… so it represents something else… rinse and repeat. Seems to me it’s right back where the mess started, which is where it is now.

2 Likes

Well… In all fairness, it is impossible to completely eliminate device dependency unless the limiting gamut is smaller than about 70% of sRGB, as there are laptop screens that can’t even display that. With the pandemic, streaming services have become the new normal, and that makes it far more likely that content is seen in less-than-ideal conditions, e.g. on bad laptop screens in extremely bright surrounds.

Thanks to a suggestion from @TooDee I’ve now got this working:

Please note, this is still my 0.6 version of ZCAMishDRT, not Matthias’s more developed version. I’ll try and run out something similar with it before the next meeting.

3 Likes

Can you also share the file again on iCloud? Thanks.

I got some interesting results after adding an optional Michaelis-Menten/Naka-Rushton tone scale to @nick's DCTL implementation of the ZCAM model and comparing our whole set of frame captures against SSTS under different settings:

  • Default settings for everything
  • SSTS + Viewing Conditions = dim
  • MM + Viewing Conditions = dim
  • SSTS + Highlight Desat = 1.75 + GC Threshold = 0.7 + Ref lum = 200 + Y Mid = 8 + Y Max = 120 + Viewing Conditions = dim
  • MM under same modified settings
  • MM under modified settings but with Y Mid back to 10
  • OpenDRT 0.90b4 with sRGB gamma 2.2 preset
  • OpenDRT 0.90b4 with sRGB gamma 2.2 preset but corrected with an additional node to use piecewise sRGB EOTF

Please note that I’m using the version from this commit: Add full Scharfenberg ZCAM DRT with partially implemented inverse, and that I’ve fixed the DCTL+CUDA errors myself. I stuck with it because it gave better results with our subtle red volumetric fog in a very dark area than the later one with high boundary gamut compression.

Please also note that I had to tweak blues to remove the cyan hue shift, as it ruined sky colour (a memory colour), and also tweak the saturation in the high ranges because the DRT was desaturating way too fast at high EVs; that led to a small exposure compensation so the saturation increase wouldn’t kill the brightness sensation too much.

Final note is that I haven’t had time to fully test the model in HDR yet as I need to test it on multiple monitors with different Y_MAX and different Rec.2020 coverage.

With those caveats out of the way, here are the results I got:

  • Dark viewing conditions are too dark for game content in SDR because games are usually played in bright environments, but this is not new and we already knew it, which is why I moved to dim for further tests.
  • Dim viewing conditions with default settings and SSTS give a good result with almost all of my footage, except the scenes with too much bright VFX in them (especially fire). By viewing our test footage under this DRT, I actually learned that there was a subtle red volumetric fog in our tutorial area that I didn’t know about, because it was completely lost under OpenDRT (crushed to gray under gamma 2.2 and crushed to black under piecewise sRGB).
  • In general, Michaelis-Menten/Naka-Rushton tone scale makes everything darker but can sometimes make brights brighter.
  • The tweaked settings with SSTS hit a very sweet spot. Lowering Y_MID to 8 allows us to increase chroma (through increasing ref white and Y_MAX) and reduce highlight desat, which, in turn, makes scenes with very bright VFX look way better. It also matches blacks to the darker piecewise sRGB curve, which is non-negotiable with our art department.
  • As it makes things darker, MM tone scale completely crushes blacks under the same conditions.
  • However, darks can be matched with MM by raising Y_MID back to 10. With this setting, we end up with more contrast when using the Michaelis-Menten/Naka-Rushton tone scale from OpenDRT, due to the different Y_MIDs.
  • OpenDRT 0.90b4 with the sRGB preset and pure gamma 2.2 kinda matches default settings with SSTS in brightness but loses our subtle red volumetric fog (it kinda makes it gray).
  • OpenDRT 0.90b4 with the sRGB preset corrected to use the piecewise sRGB EOTF crushes darks to black a lot. SDR fire VFX also look way too pink. I tried correcting that with a hue shift in an experimental branch but it unfortunately twisted all red assets to orange.

Final conclusion from someone who actually shipped a product using OpenDRT (which was seen as a pre-alpha of ACES 2): the ZCAM model with SSTS looks very, very promising. Next step for me is running that by our leads.

4 Likes

Trying to get a better handle on the issue @ChrisBrejon was pointing out in the other thread.

The ramps below are:
Top: sRGB 0:0:1 → 1:1:1
Middle: AP1 0:0:1 → 1:1:1
Bottom: AP0 0:0:1 → 1:1:1
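
For reference, a minimal sketch of how such ramps can be generated and inspected, assuming colour-science (≥ 0.4) for the conversions; the colourspace names and sample indices are just illustrative:

```python
import numpy as np
import colour

# Blue -> white ramp as generic RGB triplets.
t = np.linspace(0.0, 1.0, 1024)[..., np.newaxis]
ramp = (1.0 - t) * np.array([0.0, 0.0, 1.0]) + t * np.array([1.0, 1.0, 1.0])

for name in ("sRGB", "ACEScg", "ACES2065-1"):  # sRGB, AP1 and AP0 primaries
    cs = colour.RGB_COLOURSPACES[name]
    # Interpret the same triplets as linear values in each colourspace and
    # look at the CIE xy chromaticities that end up feeding the DRT.
    XYZ = colour.algebra.vector_dot(cs.matrix_RGB_to_XYZ, ramp)
    print(name, colour.XYZ_to_xy(XYZ[0]))  # chromaticity of the blue end
```

The AP0 blue end lands outside the spectrum locus, which is what the AP0 ramp is probing.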

My assumption is that the big drop into darkness in the AP0 ramp is down to the ZCAM model not really being able to make sense of imaginary colours that are, at best, mostly near-ultraviolet.

But the little dip towards the bottom of the AP1 ramp is what we’re seeing in his ramp examples.

Pre-transform they plot out like this:

As the M compression is wound in, you see something like this.

compressionWind_v001

My guess is that the model can’t really keep J/Iz perfectly stable as you yank the M and h values around?

2 Likes

I think we can see that in this hue correlate plot as well. The green line is the JzAzBz hue correlate in scene linear and the blue one is the ZCAM hue correlate in display linear, after the gamut mapping.

Input values are ACES2065-1 AP0.

In the top row the horizontal axis is chroma from 0 to 100%. In the bottom row the horizontal axis is exposure, approximately from -7 to +8, at 100% chroma.

The hue deviates significantly from JzAzBz over the whole exposure range (100% chroma), and that deviation is that kink we see in the top row (>90% chroma).
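
For anyone wanting to reproduce this kind of plot, here is a minimal sketch of how the two hue correlates can be computed for a single stimulus, assuming colour-science's Jzazbz and ZCAM implementations; the absolute luminances and viewing parameters are illustrative, not the DRT's actual settings:

```python
import numpy as np
import colour
from colour.appearance import VIEWING_CONDITIONS_ZCAM, XYZ_to_ZCAM

XYZ = np.array([0.2066, 0.1220, 0.0514])  # an arbitrary stimulus

# JzAzBz hue correlate in scene linear.
Jz, az, bz = colour.XYZ_to_Jzazbz(XYZ)
h_jzazbz = np.degrees(np.arctan2(bz, az)) % 360

# ZCAM hue correlate; ZCAM expects absolute values in cd/m^2.
XYZ_w = colour.xy_to_XYZ(np.array([0.3127, 0.3290])) * 100  # D65 white
spec = XYZ_to_ZCAM(XYZ * 100, XYZ_w, L_A=64, Y_b=20,
                   surround=VIEWING_CONDITIONS_ZCAM["Dim"])

print(h_jzazbz, spec.h)  # the two hue correlates to compare
```

Sweeping the stimulus across chroma and exposure gives curves like the ones above.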

Edit: corrected the JzAzBz hue correlate in the image, my first post used wrong input values.

1 Like

Hey Alex,

QQ: Are the AP0 colours gamut mapped before entering ZCAM?

AP0 exhibits rather dramatic singularities, and the fact that its basis is rotated so much compared to the usual colourspaces probably does not help.

Blue to White ramp, looking at the various correlates only (with AP0 correlates clipped to [-inf, max(sRGB correlate)]):

The most interesting ones here are obviously M, C and h: they point out that we cannot afford not to gamut map before ZCAM, as non-physically-realisable colours behave in a rather unpredictable way.
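
As a quick sanity check, here is a minimal sketch of how one could flag AP0 values that are not physically realisable before they reach ZCAM, assuming colour-science's is_within_visible_spectrum; the actual pre-gamut-mapping strategy is left open:

```python
import numpy as np
import colour

RGB_AP0 = np.array([[0.0, 0.0, 1.0],   # AP0 blue primary, imaginary
                    [0.5, 0.5, 0.5]])  # achromatic, obviously fine

cs = colour.RGB_COLOURSPACES["ACES2065-1"]
XYZ = colour.algebra.vector_dot(cs.matrix_RGB_to_XYZ, RGB_AP0)

# True where the tristimulus values lie within the visible spectrum volume.
print(colour.is_within_visible_spectrum(XYZ))  # e.g. [False, True]
```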

Cheers,

Thomas

2 Likes

Not necessarily advocating this for our use case, but I just wanted to share this paper. It serves as a reasonable review of calculation methods for gamut boundary descriptors (GBDs).

2 Likes

I came across it a few days ago and kind of dismissed it because the required sampling is not adequate for our needs. Something I haven’t had time to look at in detail, but posted on Slack (and which is referenced in the paper you linked), is Herzog and Mueller - 1997 - Gamut mapping using an analytical color gamut representation; the claim of an analytical representation is highly interesting because it would mean no LUTs or anything else to represent the GBD.

Oh, and it might be useful to fork the thread into a GBD Computation / Gamut Mapping one, as it is a rather big problem that is not specific to ZCAM?

As I wanted to look at the way the ZCAM M correlate compression moves chromaticities around, I did it on a full polar grid:

It is not just the blue that slides toward cyan; red turns more orange too, etc.
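
A minimal sketch of the underlying operation, compressing the ZCAM colourfulness correlate and converting back, assuming colour-science's ZCAM implementation and that its inverse accepts J, M and h; viewing parameters are illustrative:

```python
import numpy as np
import colour
from colour.appearance import (
    CAM_Specification_ZCAM,
    VIEWING_CONDITIONS_ZCAM,
    XYZ_to_ZCAM,
    ZCAM_to_XYZ,
)

XYZ_w = colour.xy_to_XYZ(np.array([0.3127, 0.3290])) * 100  # D65 white
cond = dict(XYZ_w=XYZ_w, L_A=64, Y_b=20,
            surround=VIEWING_CONDITIONS_ZCAM["Dim"])

XYZ = np.array([12.0, 8.0, 45.0])  # an arbitrary blue-ish stimulus in cd/m^2
spec = XYZ_to_ZCAM(XYZ, **cond)

# Multiplicative compression of colourfulness; lightness and hue held still.
spec_c = CAM_Specification_ZCAM(J=spec.J, M=spec.M * 0.5, h=spec.h)
XYZ_c = ZCAM_to_XYZ(spec_c, **cond)

# In an ideal model the chromaticity would slide straight toward the white
# point; the skews shown above are the deviation from that.
print(colour.XYZ_to_xy(XYZ), "->", colour.XYZ_to_xy(XYZ_c))
```

Doing this over a full polar grid of M and h values produces the plots above.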

2 Likes

Red only turns orange in the brighter ranges though, and it’s a hue shift which everybody is used to and which is useful for fire VFX. Dark reds don’t do this. Personally, I don’t mind that much when bright blues turn to cyan (although those who want to nail sky colour 100% will mind more). It’s dark blues turning dark cyan that is disturbing, and also the fact that the shift is very sensitive to slight variations in the input, e.g. the noise that’s present in the sRGB sphere image.

Agreed. Now, something that kind of disturbs me is that we are currently mainly using ZCAM as a perceptually uniform space when it is much more than that, i.e. it also offers viewing conditions compensation.

At the point where we only use it for its perceptual uniformity, I think we might be better off looking at simpler models like IPT / ICtCp / JzAzBz / Oklab that might behave better with hue uniformity when gamut mapping. We are more sensitive to hue changes than to chroma changes, so it would make sense to me to favour a model that is optimized for hue uniformity. I don’t think ZCAM has been optimized to be particularly good at hue uniformity compared to, say, IPT. ZCAM is a jack-of-all-trades, and when you favour one correlate, e.g. hue, the other ones are impacted negatively. Put another way: you cannot have it all.

TLDR: Not much point using ZCAM if we don’t use its viewing conditions modeling capabilities.

Cheers,

Thomas

2 Likes

I have the same questions, although I would say that one of the things the 0.07 version of Matthias’ ZCAM model clearly demonstrated by exposing all the possible parameters is that, although we can agree on basic principles, there is no pleasing everyone with a single set of default values for a DRT. Our favored settings are very far from the default settings and were chosen with constraints very specific to what we’re trying to achieve. I find it refreshing, though, to be able to explicitly set reference white to 200 nits for SDR, as that is the basic assumption we’re working with, instead of the sRGB-standardized 80 nits.

(Pseudo) Analysis of the Correlation of the Hue and Colourfulness ZCAM Attributes using Dominant Wavelengths

I wanted to quantify how much the hue attribute of ZCAM is affected by the compression of its colourfulness attribute, i.e. quantify the correlation between the two attributes. Ideally, a perceptually uniform model should exhibit no correlation; unfortunately, such a model does not really exist!

We can quantify the change of a single hue in terms of its Dominant Wavelength shift, i.e. how much the Dominant Wavelength has changed after the colourfulness has been reduced.

Given the Spectral Locus, we can trivially compute all the trajectories its chromaticities follow when the ZCAM colourfulness is reduced:

With the trajectories obtained, we need to find their intersections with the gamut of interest, e.g. sRGB. Raycasting in curved space is not trivial but, fortunately, we have generated a lot of points to draw the trajectories, so we can find the closest point of any given trajectory to the sRGB gamut segments:

And finally trace a ray between that point and the whitepoint to find the output Dominant Wavelength:

We can then quantify visually and numerically the shifts.
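
The final measurement step can be sketched as follows, assuming colour-science's dominant_wavelength definition; the xy values are placeholders for a trajectory endpoint and its intersection with the sRGB hull:

```python
import numpy as np
import colour

xy_n = np.array([0.3127, 0.3290])  # D65 whitepoint

xy_in = np.array([0.15, 0.05])   # chromaticity before colourfulness reduction
xy_out = np.array([0.17, 0.12])  # closest point on the sRGB hull afterwards

wl_in, _, _ = colour.dominant_wavelength(xy_in, xy_n)
wl_out, _, _ = colour.dominant_wavelength(xy_out, xy_n)

print(wl_in, wl_out, wl_out - wl_in)  # shift in nm, e.g. blue drifting to cyan
```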

Here, each column is split diagonally like a saw: The upper left triangles are the input Dominant Wavelengths, the bottom right triangles are the output Dominant Wavelengths.

We can confirm that the blues are turning cyan, there is a very noticeable slant. Here are some cherry-picked Dominant Wavelengths in close-up:

Doing so, we have also obtained a function that we could use to potentially correct the hue shifts:

The next step is to clean up the code so that I can run it easily on other models!

Cheers,

Thomas

4 Likes

Agreed. Now, something that kind of disturbs me is that we are currently mainly using ZCAM as a perceptually uniform space when it is much more than that, i.e. it also offers viewing conditions compensation.

I agree here, and I think it is worth noting that even if you end up using the viewing condition bits of ZCAM, you don’t necessarily have to use the full model in the final implementation. It would probably be fairly simple to produce a simplified model that captures that particular part of its behavior at lower computational cost and with better behavior outside the visual gamut (if required).

Regarding hue lines

It is correct for the lines not to line up here, though. All models designed to model the perception of hue are explicitly designed not to have straight hue lines in chromaticity space, in order to model the Abney effect. For the attributes not to be correlated perceptually, they have to have some correlation in linear space.

Regarding the blue primary → 1:1:1 interpolations:

Hue and lightness not being accurate for colors far from the experimental data used to derive the models (and, in the case of AP0, beyond real colors altogether) is of course part of the problem, but I don’t think it is the only reason for the problems in those plots.

I think this plot by Alex Fry is a great way of showing why gamut mapping along straight hue lines to the exact boundary of RGB gamuts is problematic in general:

compressionWind_v001

In all these cases the original line is a smooth line of increasing chroma. As the line gets projected to the RGB gamut, the deep blue lines follow a model of a perceptual hue line, which takes a path through cyan to model the Abney effect. When compressing that to the RGB gamut, the large change in available gamut between blue and cyan results in the line bending back on itself, preserving hue but causing a really strong distortion of chroma.

This is with Rec.2020 to sRGB and with Oklab, since that is what I had set up, but it behaves very similarly in this case.

The first image is gamut compression to a smooth approximation of the gamut, followed by soft clipping to RGB. The second image is compression along straight hue lines.

The first example will have more hue distortions, but does not get the strange chroma reversal. Arguably, maintaining a smooth curve and avoiding chroma reversals like this at the expense of perfect hue preservation is the better choice in cases like this.
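
To make the first approach concrete, here is a minimal sketch of the kind of smooth chroma compression used toward a gamut approximation, as opposed to a hard projection onto the boundary; the function and its parameters are illustrative, not the exact ones used for the images:

```python
import numpy as np

def compress_chroma(C, C_limit, threshold=0.75):
    """Leave chroma untouched below `threshold * C_limit`, then roll off
    asymptotically toward `C_limit`, with a continuous slope at the knee."""
    t = threshold * C_limit
    d = C_limit - t                     # range available for compression
    x = np.maximum(C - t, 0.0)
    compressed = t + d * x / (x + d)    # approaches C_limit, never exceeds it
    return np.where(C <= t, C, compressed)
```

Because the mapping is smooth and monotonic in chroma, a smooth input line stays smooth; values just outside the true RGB hull are then handled by the soft clip discussed below.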

Maybe a bit more clear in this blue to purple gradient:

6 Likes

This is what I was thinking we should probably try to do: a simpler perceptually uniform model, e.g. ICtCp or Oklab, with the viewing conditions adaptation layer.

Yes! However, it is arguably not artist-friendly that changing colourfulness changes the hue so dramatically; the blue → cyan shift specifically is quite problematic (and has been pointed out a few times already). I’m almost tempted to say that it produces the opposite of what people are used to, i.e. hues changing toward cyan rather than purple.

A long time ago, in this VWG, I asked which colour appearance phenomena we want to model/use, it is still a relevant question.

2 Likes

Any chance you could describe what you mean by this in a little more depth? What I’m understanding from it is

  1. Gamut compress in a perceptual space to an approximation of the target gamut hull that is slightly larger but for which said hull is more smoothly varying in said perceptual space
  2. Not as sure what you mean by “soft clip to RGB”

Any chance you could describe what you mean by this in a little more depth?

Sure!

You got 1. exactly right.

After that you convert to the target RGB gamut. Since we compressed to a larger space, you can still have colors outside the gamut, with RGB values less than zero or larger than one. So you need some way to map those values into the [0,1] range. Just clipping to that range does OK, but will introduce some non-smoothness into your mapping. So instead you can use a curve like this to get a smooth transition near 0.0 and 1.0:

Then there are some more details to avoid having the effect near white and black (since this would distort grayscale values) and to use the curve at both 0.0 and 1.0. The full implementation is here:
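
For illustration, a minimal sketch of one way such a smooth clip can be built, a quadratic blend of width `w` around 0.0 and 1.0; this is not necessarily the linked implementation and omits the grayscale protection mentioned above:

```python
import numpy as np

def smooth_max0(x, w):
    """C1-continuous version of max(x, 0): quadratic over [-w, w]."""
    return np.where(x < -w, 0.0,
           np.where(x > w, x, (x + w) ** 2 / (4.0 * w)))

def soft_clip_01(x, w=0.05):
    """Smoothly clip to [0, 1]; matches hard clipping outside [-w, 1 + w]."""
    x = smooth_max0(x, w)                  # smooth transition at 0.0
    return 1.0 - smooth_max0(1.0 - x, w)   # same curve, mirrored at 1.0
```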

This is the difference between soft and hard clipping, when looking at the hull of the gamut approximation:

It would be nice to try this with a tighter gamut approximation as well, so that it touches/almost touches most of the RGB gamut corners. That would make the flat areas smaller. I haven’t had time to do that yet, though.

4 Likes