ZCAM for Nuke

Interesting: when you start with the Macbeth colours and ramp M, the hook that seems to be exacerbating the shift from blue to cyan becomes much clearer.

I also got M ramps from the Rec.709 and AP1 primary positions.

The “hook” should only be in CIE xy. That’s kind of the point. These hooked lines should be uniform in hue but varying in Chroma.

Worth noting that M is colourfulness, which includes the contribution of lightness, while C is chroma, which is independent of lightness. Probably doesn’t matter in this illustration, but worth noting.
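For anyone who wants to poke at these ramps outside of Nuke, here is a minimal sketch of the idea, assuming colour-science >= 0.4.0 (which exposes XYZ_to_ZCAM / ZCAM_to_XYZ). The viewing conditions and luminance scaling are illustrative guesses, not the settings baked into the ZCAMishDRT:

```python
import numpy as np
import colour

# Illustrative viewing conditions (assumed, not the DRT's): D65 white,
# ~100 cd/m2 adapting luminance, 20% background.
XYZ_w = np.array([95.05, 100.0, 108.88])
L_A, Y_b = 100.0, 20.0
surround = colour.VIEWING_CONDITIONS_ZCAM["Average"]

# Start from the Rec.709 blue primary, expressed as roughly absolute XYZ.
bt709 = colour.RGB_COLOURSPACES["ITU-R BT.709"]
XYZ_blue = np.dot(bt709.matrix_RGB_to_XYZ, [0.0, 0.0, 1.0]) * 100.0

spec = colour.XYZ_to_ZCAM(XYZ_blue, XYZ_w, L_A, Y_b, surround=surround)

# Hold lightness J and hue h constant, ramp colourfulness M, invert back
# to XYZ, and look at where the constant-hue line lands in CIE xy.
for M in np.linspace(5.0, float(spec.M), 8):
    ramped = colour.CAM_Specification_ZCAM(J=spec.J, M=M, h=spec.h)
    XYZ = colour.ZCAM_to_XYZ(ramped, XYZ_w, L_A, Y_b, surround=surround)
    print(f"M = {M:5.1f} -> xy = {colour.XYZ_to_xy(XYZ)}")
```

Plotting the printed xy values over a chromaticity diagram should reproduce the hooked constant-hue lines described above.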

I’ll put together more examples


Yeah, I agree. This frame has been remapped to simulate it being rendered with a Rec.709 blue area light, rather than a 100% AP1 blue, which looks a bit saner. (1.2 is another story :/)

As opposed to:

I’m not totally sure if this is “wrong” or “right”, but it definitely breaks the long-held expectation (especially in the CG domain) that when you grab a colour picker and crank it to the “most blue” position, you get “maximum blueness”.

Have we been avoiding this issue because of the way the existing transform clips?
Do we need to think about our colour pickers a bit differently?
I’m not sure…


I think this is definitely something that needs considering. Picking a colour which is only ever seen through a display transform makes the issue complicated to begin with. But also lighting with a pure primary which is actually not a real colour makes the concept of how that “should” look tricky.

And what about going to the BT.2020 primary as a nearby real colour? I know CGI doesn’t always aim for realism. But a cinematographer would never normally light a scene with a laser!
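As a quick sanity check of how “nearby” the BT.2020 blue actually is (and how far out the AP1 blue sits), here is a small colour-science sketch comparing the blue primaries against the spectral locus (colourspace names as keyed in colour.RGB_COLOURSPACES):

```python
import colour
from colour.plotting import plot_RGB_colourspaces_in_chromaticity_diagram_CIE1931

# xy chromaticities of the blue primaries (rows of .primaries are R, G, B).
for name in ("ACEScg", "ITU-R BT.2020", "ITU-R BT.709"):
    blue_xy = colour.RGB_COLOURSPACES[name].primaries[2]
    print(f"{name:>14} blue primary xy: {blue_xy}")

# Overlay the three gamut triangles on the CIE 1931 diagram: the BT.2020
# blue is a monochromatic ~467 nm primary sitting on the locus, while the
# AP1 blue sits just outside it.
plot_RGB_colourspaces_in_chromaticity_diagram_CIE1931(
    ["ACEScg", "ITU-R BT.2020", "ITU-R BT.709"])
```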


Yes. At least as someone who has spent way too long thinking about UI and pickers.

I believe there are two cases here with respect to “colour” picking that are both legitimate and form “correct” answers, depending on the context:

  1. A stimulus-linear value. This would closely march alongside simulated stimulus-based light transport and albedo needs. Aka the “simulate complementary light in a stimulus light transport model” pick.
  2. An observer-sensation-uniform value. The “If a virtual observer were to see something inexpressible in a medium, what would be a reasonable approximation?” question. This would lean heavily on appearance-based models, with more or less success depending on which model is chosen.

Both provide unique answers that are legitimate solutions, depending on the problem at hand.


Not picking on @nick, as his point is valid; this is more of a general rant. I’m always a little uneasy with this idea of lighting with a primary in CG. Light sources have spectral power distributions. Those spectral power distributions can be classified as having a color, which can be represented as a point on a CIE chromaticity diagram, but it is the spectral power distribution of the source that defines the CIE XYZ values of an object in a scene. For any light source with a chromaticity not exactly on the boundary of the spectrum locus there are an infinite number of spectral power distributions, so saying an object in a scene was lit by a chromaticity coordinate is inadequate to describe how the scene colorimetry was created.

Understanding that CG packages are not spectral for the most part, it’s probably best to just describe the CIE XYZ values of the final scene objects, regardless of how they were created.
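To make that concrete, here is a rough sketch of the same idea using colour-science: two sources with nominally matching chromaticity, CIE D65 and a fluorescent D65 simulator, render a synthetic narrow-band blue reflectance to noticeably different XYZ. The “FL3.15” dataset key and the Gaussian reflectance are my own assumptions for illustration:

```python
import numpy as np
import colour

cmfs = colour.MSDS_CMFS["CIE 1931 2 Degree Standard Observer"]
wl = cmfs.wavelengths

# Hypothetical narrow-band blue reflectance (Gaussian centred on 450 nm).
reflectance = colour.SpectralDistribution(
    dict(zip(wl, 0.04 + 0.9 * np.exp(-0.5 * ((wl - 450.0) / 15.0) ** 2))),
    name="synthetic blue sample")

# Two sources with nominally matching chromaticity but very different SPDs:
# CIE D65 and the FL3.15 D65 simulator (assuming that key is available).
for key in ("D65", "FL3.15"):
    illuminant = colour.SDS_ILLUMINANTS[key]
    xy_source = colour.XYZ_to_xy(
        colour.sd_to_XYZ(illuminant, cmfs, method="Integration"))
    XYZ_sample = colour.sd_to_XYZ(
        reflectance, cmfs, illuminant, method="Integration")
    print(f"{key:>7}: source xy {xy_source}, sample XYZ {XYZ_sample}")
```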

Hey guys, great conversation so far! Thanks for that!

I have been wondering during the whole VWG whether my examples were valid or not… Yes, most of them use ACEScg primaries, but for reasons that I am happy to detail here.

  • When you load an ACES OCIO config in most DCC software, you have access to these primaries, since most of them use the “scene_linear” or “rendering” role, which is set to “ACES - ACEScg” by default.
  • Only Autodesk Maya has a way to choose/pick a colour in a different colour space than the working/rendering space. I have pushed for this feature to be implemented in several DCCs, without any success unfortunately.
  • In a studio of 600–800 artists, it is hard to check what every artist does and to be sure they do not reach those primaries. Even with our working space set to “Utility - Linear - sRGB”, I have seen some clipping in production because an artist was adding a “saturation” node to tweak the colour.
  • If you think it would be helpful to re-render those CG examples with BT.2020 primaries, for instance, I’d be happy to do so. I think that my latest iteration of those renders was using achromatic light sources anyway, for faster testing.

Two things, though, that I would like to point out:

  • From my early testing (like a year ago or so at the beginning of the VWG), I did not notice major changes using BT.2020 primaries in my renders instead of ACEScg ones (with the ACES Output Transform).
  • I thought, based on several discussions I had last year, that the “real stress test” for a Display Transform was on the working space’s primaries. That was my understanding of it. If we set BT.2020 primaries in an ACEScg working space, we’re adding “complementary light” and are no longer working on the primaries (see the quick conversion sketch below).
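A minimal sketch of that last point, using colour-science (colourspace names as keyed in colour.RGB_COLOURSPACES, with the default chromatic adaptation between the two white points left in place): expressing the BT.2020 “most blue” pick in the ACEScg working space shows the “complementary light” it picks up.

```python
import colour

# The BT.2020 blue primary, re-expressed in the ACEScg working space.
# The red and green channels come out non-zero, i.e. the working-space
# blue primary itself is no longer being exercised directly.
rgb_acescg = colour.RGB_to_RGB(
    [0.0, 0.0, 1.0],
    colour.RGB_COLOURSPACES["ITU-R BT.2020"],
    colour.RGB_COLOURSPACES["ACEScg"])
print(rgb_acescg)
```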

My tuppence,
Chris

There’s also the question that isn’t stated, based on “Who cares?”

At the end of the day, we have stimulus, either via camera or render, that cannot be expressed in the medium.

It is nonsensical, fundamentally problematic, and goes against many centuries of creative image making, for any stimulus to grow in magnitude and not have a representation in the medium.

This strikes me as part of the entire problem outlined above, and one that no CAM can “Just Work” for.


[EDIT - Mistakes were made, see post here]

I think good old “Blue Bar” can show some of what seems to be going on here.

Through the ZCAMishDRT the infamous blue stairs render very very green.

But when you look at the input data, and see where the light coming down the stairs sits (way out in imaginary space on the bottom left), and then look at where the model extrapolates out that curve (it must be extrapolated, as none of the test data can be out there in imaginary space), you can see why it’s ending up where it does.

If I drop in the Gamut Compressor, it pulls the input data back into a much more plausible position, which helps a lot (although it does not eliminate it).
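For anyone wanting to reproduce that pull-back outside of Nuke, here is a rough Python sketch of the ACES 1.3 Reference Gamut Compression applied directly to ACEScg working-space values. The official transform takes ACES2065-1 in and out and operates on AP1 internally; the thresholds, limits and power below are the published defaults, but treat this as an illustration rather than a reference implementation:

```python
import numpy as np

# Published ACES 1.3 RGC defaults, per distance direction (cyan, magenta, yellow),
# i.e. one set of parameters per R, G, B channel distance.
THRESHOLD = np.array([0.815, 0.803, 0.880])
LIMIT = np.array([1.147, 1.264, 1.312])
POWER = 1.2

def compress_distance(dist, thr, lim, power=POWER):
    """Parameterised compression curve: identity below the threshold,
    asymptotically approaching the limit above it."""
    scale = (lim - thr) / (((1.0 - thr) / (lim - thr)) ** -power - 1.0) ** (1.0 / power)
    normalised = np.maximum(0.0, dist - thr) / scale
    compressed = thr + scale * normalised / (1.0 + normalised ** power) ** (1.0 / power)
    return np.where(dist < thr, dist, compressed)

def reference_gamut_compress(rgb_acescg):
    """Pull out-of-gamut ACEScg values back towards the achromatic axis."""
    rgb = np.asarray(rgb_acescg, dtype=float)
    ach = np.max(rgb, axis=-1, keepdims=True)
    # Per-channel distance from the achromatic axis, normalised by |ach|.
    with np.errstate(divide="ignore", invalid="ignore"):
        dist = np.where(ach != 0.0, (ach - rgb) / np.abs(ach), 0.0)
    cdist = compress_distance(dist, THRESHOLD, LIMIT)
    return ach - cdist * np.abs(ach)

# A "blue bar"-ish value with a negative green channel (out of gamut).
print(reference_gamut_compress([0.02, -0.24, 1.2]))
```

Running it on a value with a strongly negative channel pulls it back to a small positive one, which is the “much more plausible position” referred to above.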


Super interesting and useful views!!!

It seems obvious, to me at least, that the issue here is the starting ACES values. Before the compression they are in no man’s land and follow the ZCAM arc for that “hue” into cyan. When they compress in, they are closer to something we’d consider blue, but still follow the arc of that hue into a final cyan-ish color. The question is whether the saturated, compressed blue is something we’d see as the same hue as where it ultimately ends up, or if ZCAM is failing to predict the perception of hue in this case.


Bear in mind the “gamut” compressor operates on neither stimulus nor plausible sensation.

It is, in fact, a skewy tool that deviates to digital RGB defaults. In this case, it would artificially collapse values to digital RGB complements.

Yeah, it’s tricky, isn’t it.

There was some real colour that existed on the day.
The camera has done what it can.
The IDT is landing those values way off in imaginary space.
The Gamut compressor is doing what it can, without knowing anything about what caused the initial error.
Then ZCAM’s definition of hue is sliding stuff back in, but it’s presumably going to perform at its worst out near the edge.


I think that sums up what the situation is on that image. Well summarized!!

Yes, the data ending up after the IDT is neither stimulus nor plausible sensation. Classical CAMs are fragile along that dimension.


This is now in a place where it’s a little bit easier for other people to play with.

There are sRGB and P3 limited SDR modes, and a P3 limited HDR mode.

The output of the node is designed to be used with Nuke’s slightly oddball Display P3 Extended HDR mode.

Please note: There are 4 images inside the group that will need to be repathed to your local file system for it to work.

The file with just the node is called ZCAMishDRT_v004.nk


Yes, it is not totally unexpected that the model will behave badly outside the visible spectrum and outside the training dataset.

For what it’s worth, here are some plots of the LUTCHI stimuli. I haven’t sourced the other datasets used for ZCAM, but it gives an idea of how the core dataset is scattered:


Notebook: Google Colab

Cheers,

Thomas


Pretty thin/non-existent around the problem area…

:thinking:


That is key, isn’t it. None of us (except maybe @martin.smekal?) knows what the bar looked like on the day of the shoot. The closest we have is what the image looks like with the current default ALEXA Rec.709 LUT, which is what it would have looked like on a Rec.709 monitor on the shoot (assuming they just used the default mon out LUT).

To my eye that does skew a bit cyan. But nowhere near as much as the ZCAM rendering.

But I do think that, as with the gamut compression operator, we can’t really worry too much about what the rendered appearance of image data outside the spectrum locus is. All we can really say about that data is that it is not colorimetrically correct. So if it gets rendered in a way which is devoid of artefacts, and a colourist is able to grade it to where they want creatively (through the display rendering), is that enough?


I can’t actually get the same result as @alexfry with his latest version. This is without RGC:

It does do some weird things, especially in the blues:

And this is without RGC too:

These were all with ZCAMishDRT v0.4: input ACEScg, gamut bounds Rec.709/sRGB - 100 nits, output sRGB - D65 - sRGB.