ZCAM for Nuke

After Alex Forsythe’s suggestion about using ZCAM in last week’s meeting, I thought I’d better try to get my head around it. Generally I find the best way to do that is to try to make it work in Nuke, so I’ve given that a crack.

I’m not going to pretend I fully understand what’s going on here, but hopefully there is enough here for people to have a play around with, or improve on, or integrate into something larger.

The node has two modes: forward and inverse.

forward dumps as many of the attributes as I could produce into layers with a zcam_X naming convention (the same info repeated in each layer’s rgba channels), whilst leaving the XYZ data in the main layer. You can view them by looking at the stream with a LayerContactSheet node.
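
If it helps, here’s a quick way to sanity-check which layers the forward mode has produced, a minimal sketch for Nuke’s Script Editor (it assumes the ZCAM node, or anything downstream of it, is currently selected):

    # List the zcam_* layers on the selected node in Nuke's Script Editor.
    import nuke

    node = nuke.selectedNode()
    zcam_layers = [layer for layer in nuke.layers(node) if layer.startswith('zcam_')]
    print(zcam_layers)  # e.g. ['zcam_J', 'zcam_aM', 'zcam_bM', ...]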

inverse reconstructs XYZ, but only from the J, aM, bM attributes (this could/should change in the future).

There are a bunch of params to play around with; the only ones I’ve really touched so far are the different surround constants (which seem to do… something).

The guts of the node are pretty gory, but it seems to do what I expect: forward → inverse behaves as a null op, and pushing around the zcam_J layer in the middle pushes things around. Hopefully there is some value here for people who want to experiment and try to understand it (which is all I’m attempting to do here).

I’ve based my implementation on the Python version found in luxpy here:

Nice! I haven’t poked at it too much, but it seems like you don’t have the first chromatic adaptation step, i.e. Step 0.

It is under-documented, and I spent a bit of time trying to get the numbers to match the supplemental document: you effectively need to do a Von Kries chromatic adaptation, but it needs to include the degree of adaptation D, which can be computed with the CIECAM02 equation. Because the values fed to the model are absolute, you will also want to normalise the D65 whitepoint with respect to the reference whitepoint XYZ_w. There is, again, no documentation on how to do that, but XYZ_{D65} / Y_{D65} * Y_w seems to do the job when doing the chromatic adaptation.
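
For illustration, a minimal sketch of that reading: a one-step Von Kries transform in CAT02 space with the CIECAM02 degree of adaptation blended in, plus the whitepoint normalisation described above. The function names and the default F, L_A values are just illustrative, not from the paper:

    import numpy as np

    # CAT02 matrix from CIECAM02.
    M_CAT02 = np.array([
        [ 0.7328,  0.4296, -0.1624],
        [-0.7036,  1.6975,  0.0061],
        [ 0.0030,  0.0136,  0.9834],
    ])

    def degree_of_adaptation(F, L_A):
        # CIECAM02 degree of adaptation D.
        return F * (1 - (1 / 3.6) * np.exp((-L_A - 42) / 92))

    def cat_to_D65(XYZ, XYZ_w, XYZ_D65, F=1.0, L_A=100):
        # Normalise the D65 whitepoint w.r.t. the reference white so that
        # the absolute scale is preserved: XYZ_D65 / Y_D65 * Y_w.
        XYZ_D65 = XYZ_D65 / XYZ_D65[1] * XYZ_w[1]
        D = degree_of_adaptation(F, L_A)
        RGB, RGB_w, RGB_D65 = (M_CAT02 @ v for v in (XYZ, XYZ_w, XYZ_D65))
        # Von Kries gains with the degree of adaptation blended in.
        gains = D * RGB_D65 / RGB_w + (1 - D)
        return np.linalg.inv(M_CAT02) @ (gains * RGB)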

The Colour implementation is not finished but getting there 🙂

Cheers,

Thomas

Ahh yes! I forgot to mention that.
I’m assuming you’ve already got your data in D65.

I’ve made some updates to allow the forward mode to output aS and aC (as well as aM), and added support for multiple reconstruction modes to the inverse path.

The first version only supported J, aM, bM.
It now supports J or Q (lightness or brightness), along with aM/bM, aC/bC, aS/bS, and M/h or C/h.

This is based on what I see in the luxpy implementation:

        :outin:
            | 'J,aM,bM', optional
            | String with requested output (e.g. "J,aM,bM,M,h") [Forward mode]
            | - attributes: 'J': lightness,'Q': brightness,
            |               'M': colorfulness,'C': chroma, 's': saturation,
            |               'h': hue angle, 'H': hue quadrature/composition,
            |               'Wz': whiteness, 'Kz':blackness, 'Sz': saturation, 'V': vividness
            | String with inputs in data [inverse mode]. 
            | Input must have data.shape[-1]==3 and last dim of data must have 
            | the following structure for inverse mode: 
            |  * data[...,0] = J or Q,
            |  * data[...,1:] = (aM,bM) or (aC,bC) or (aS,bS) or (M,h) or (C, h), ...
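
Spelling those out, the inverse path now accepts these channel combinations, just restating the docstring above as a data structure (names match the layers the node writes):

    # First channel is J (lightness) or Q (brightness); the remaining two
    # carry one of the correlate pairs.
    LIGHTNESS = ('J', 'Q')
    PAIRS = (('aM', 'bM'), ('aC', 'bC'), ('aS', 'bS'), ('M', 'h'), ('C', 'h'))
    MODES = [(l,) + p for l in LIGHTNESS for p in PAIRS]
    # 10 combinations in total, e.g. ('J', 'aM', 'bM'), ('Q', 'C', 'h'), ...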

The controls only affect the inverse mode, as the forward mode still just dumps everything out.

As @Thomas_Mansencal noted, this still does not contain the CAT to D65 step.

This is actually more complicated than I thought. We have the model implemented in a PR; here are some relevant notes:

  • Safdar, Hardeberg and Luo (2021) does not specify how the chromatic adaptation to CIE Standard Illuminant D65 in Step 0 should be performed. A one-step Von Kries chromatic adaptation transform is neither symmetric nor transitive when a degree of adaptation is involved. Safdar, Hardeberg and Luo (2018) uses the Zhai and Luo (2018) two-step chromatic adaptation transform, thus it seems sensible to adopt this transform for the ZCAM colour appearance model until more information is available. It is worth noting that a one-step Von Kries chromatic adaptation transform with support for degree of adaptation produces values closer to the supplemental document than the Zhai and Luo (2018) two-step chromatic adaptation transform, but then the ZCAM colour appearance model does not round-trip properly.
  • Step 4 of the inverse model uses a rounded exponent of 1.3514, preventing the model from round-tripping properly. Given that this implementation takes some liberties with respect to the chromatic adaptation transform to use, it was deemed appropriate to use an exponent value, i.e. 50 / 37, that enables the ZCAM colour appearance model to round-trip (see the quick numerical check after this list).
  • The values in the third column of the supplemental document are likely incorrect:
    • Hue quadrature H_z is significantly different for this test, i.e. 47.748252 vs 43.8258.
    • F_L as reported in the supplemental document has the same value as for L_a = 264 instead of 150.
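
To make the exponent point concrete, a quick illustrative check; the 37 / 50 forward power here is assumed purely for illustration, not lifted from the paper:

    # 50 / 37 = 1.351351..., which rounds to the paper's 1.3514. Pairing a
    # forward power of 37 / 50 with each inverse exponent shows the drift:
    x = 0.5
    exact = (x ** (37 / 50)) ** (50 / 37)    # recovers 0.5 to float precision
    rounded = (x ** (37 / 50)) ** 1.3514     # ~0.4999875: a residual error
    print(exact, rounded)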

Am I missing something? Does it not simply specify CAT02?
[Screenshot of Step 0 from the ZCAM paper]

Well, a simple Von Kries transform with CAT02 does not work because the illuminant values are absolute, i.e. it will scale the input tristimulus values in an undesirable way. At which point you are opening a can of worms: Is it a Von Kries transform, or the CIECAM02 transform from which CAT02 originates? Should it support a degree of adaptation, which cannot be inverted with a one-step Von Kries transform? And so on…
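
As a toy illustration of the scaling issue (the absolute white below is made up):

    import numpy as np

    # If the D65 white is normalised (Y = 100) but the reference white is
    # absolute, naive Von Kries gains pick up a global scale factor.
    XYZ_D65 = np.array([95.047, 100.0, 108.883])  # normalised D65
    XYZ_w = np.array([30.3, 31.83, 34.67])        # hypothetical absolute white
    print(XYZ_D65 / XYZ_w)  # ~[3.14, 3.14, 3.14]: an exposure shift, not a CAT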

I’m dealing with similar complexities on my end.

Other issues that I’m not completely sure how to deal with are the default parameters for our application.

Regarding round-tripping, would it make sense to slightly alter the model from the paper’s description so it round-trips, then retest against the LUCHI dataset to see whether it significantly impacts the color appearance predictions?

The repo has now been updated with the bodged-together ZCAMishDRT I showed in the meeting today.

For the code-minded ones, here are some ZCAM Shadertoys: https://www.shadertoy.com/results?query=zcam

They set the sRGB reference white to 200 nits in their calculations, which really makes sense.

As it currently stands, the Colour implementation round-trips perfectly with the two-step Von Kries transform from Zhai et al. (2018) and the slight exponent change. I don’t think it would change any predictions, as the values are really close to the supplemental paper. I’m still meant to contact the authors; it’s been a hectic week down here!
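
If anyone wants to poke at it, a sketch of the round-trip check; the XYZ_to_ZCAM / ZCAM_to_XYZ names and signatures are my assumption of where the PR will land rather than a released API:

    import numpy as np
    import colour

    # Example absolute tristimulus values and viewing conditions.
    XYZ = np.array([185.0, 206.0, 163.0])
    XYZ_w = np.array([256.0, 264.0, 202.0])
    L_A, Y_b = 264, 100

    specification = colour.XYZ_to_ZCAM(XYZ, XYZ_w, L_A, Y_b)
    XYZ_rt = colour.ZCAM_to_XYZ(specification, XYZ_w, L_A, Y_b)
    print(np.allclose(XYZ, XYZ_rt))  # True if the round trip holds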

Just playing with a few ramps and gradients here.

The image below represents:

  • J ramping from 0 → 100 on the y axis.
  • M at a constant value of 25.
  • h ramping from -180 → 180 on the x axis.

This then passes from ZCAM (scaled down by 100) → XYZ → sRGB (display linear).
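
For anyone wanting to reproduce it, a minimal sketch of the ramp construction (the ZCAM → XYZ → sRGB step is left to whichever implementation you are using):

    import numpy as np

    H, W = 512, 512
    J = np.tile(np.linspace(0, 100, H)[:, None], (1, W))     # lightness, y axis
    M = np.full((H, W), 25.0)                                # constant colourfulness
    h = np.tile(np.linspace(-180, 180, W)[None, :], (H, 1))  # hue angle, x axis

    JMh = np.stack([J, M, h], axis=-1)
    # From here: ZCAM inverse (output scaled down by 100) -> XYZ -> sRGB
    # (display linear), using e.g. the Colour PR or the Nuke node above.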

When plotted in 3D it looks like this (the cube is 0.0 → 1.0):

As (sort of) expected, the flat plane at the top of the cone seems to intersect the 1.0, 1.0, 1.0 corner of the cube, although it does not form a circle centered on that point.

The interesting lump around yellow can be seen here.

My assumption is that the swelling at the bottom is the model attempting to maintain colourfulness (M) as lightness (J) drops, pushing values that aren’t particularly saturated in absolute terms outside of the sRGB display gamut volume.

