ZCAM for Nuke

Did you miss the point where AP1 blue is slightly beyond the border? We are not talking about AP0 here, or anything off the border by any large xy distance; the context is that particular primary. The difference between BT.2020 and AP1 is not significant, which is why I was saying the test should use the former instead, and then we would not be having this conversation.

“Not significant” implies that your measurement has led to a conclusion based on… something?

What outside-of-the-spectral-locus values mean in terms of general image formation is nonsense, and as a result, that is just one chunk of something-or-other that needs to be brought back into the working tristimulus model if there is validity placed on the pre-transform values.

I do think that we are actually likely describing the same problem: some values can be meaningful, and some are absolutely rubbish relative to the model in question.

What I take issue with is the idea that we can plot anything outside the locus.

I believe the Asano model used LMS plots for this reason? Using the CIE xy chromaticity plot as a stimulus specification is gravely misleading, and leads to completely incoherent, yet seductively logical, questions and problematic conclusions.



We are all waiting to see your Colab notebook with a proposal.


It’s a question. Is this the answer?

I’m legitimately curious about the question and the opinions on the subject:

  1. Should there be a mapping into a working space?
  2. If so, which version? Perceptual-like or Psychophysical / light-transport-like?
  3. Why?
  4. What is important about this transform in terms of qualia? Smoothness of tristimulus results to avoid posterization kinks? “Accuracy” of tristimulus values in the psychophysical tristimulus sense, or “accuracy” in terms of “colourfulness” in relation to other values?
  5. How could testing be designed to probe for undesirable facets? Sine patterns, etc.?

Dunno. No Colab from me. I’m genuinely curious as to what the minds here think on the rather valuable question you brought up above.


Here are some more images of blue stuff to peruse…

CG renders of blue stuff

Lights by Chris in both sRGB and ACEScg primaries. I can see the cyan in the ACEScg, but not really in the sRGB:


Film of blue stuff looking blue

Next we have a bunch of blue stuff, none of which is looking particularly cyan in ZCAM to me…

A blue screen:

Blue sky:


Blue light:




Film of blue stuff looking cyan

We do have this picture that looks cyan, again from the gamut mapping group test images:

I find it interesting that while some images, like this one, look cyan in ZCAM, many do not.

From a purely pragmatic perspective, could we say that “it works” means we can point a camera at a scene and expect to see the colors on screen looking like they did to our eyes? Then the question would be whether the scene looked blue or cyan to the eye. If it looked cyan, then we could say ZCAM “works” and OpenDRT does not. Conversely, if it looked blue, then we could say OpenDRT “works” and ZCAM does not. The one doing the better job of faithfully reproducing the color the eye saw on set is the one that “works” practically. The file name contains “f matas”: is that a name, or do we know who shot this? I wonder if we could confirm with them how it appeared on set to the eye?

This green bottle is an interesting example.

The raw data sits well outside of both 709 and P3:

The image below shows:

  • Left - ACES → 709 matrix | Exp -0.7 (to level match) | EOTF
  • Middle - DRT_ZCAM_IzMh_v10_Blink
  • Right - OpenDRT 0.0.90b4 (clamp disabled)

[image: greenBottle_DRT3up_v001]
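For reference, the left-hand pathway is simple enough to sketch in Python with the colour library (colour >= 0.4 assumed; the example pixel, the clip, and the use of the sRGB encoding for the “EOTF” step are my stand-ins):

```python
import numpy as np
import colour

# A linear ACES2065-1 (AP0) pixel; stand-in for the actual image data:
rgb_ap0 = np.array([0.18, 0.35, 0.12])

# ACES -> Rec.709 matrix (colour applies CAT02 adaptation by default):
rgb_709 = colour.RGB_to_RGB(rgb_ap0, "ACES2065-1", "ITU-R BT.709")

# Exposure offset of -0.7 stops to level match the other renderings:
rgb_709 *= 2.0 ** -0.7

# Clip and apply the display encoding (sRGB here):
print(colour.cctf_encoding(np.clip(rgb_709, 0.0, 1.0), function="sRGB"))
```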

One thing I’d note here is that in both the simple Matrix and OpenDRT renderings, the hues of the liquid and of the label on the front of the bottle appear fairly similar. But if we look at the plot, they actually sit in pretty different positions.


Neither the simple Matrix+EOTF nor OpenDRT makes an explicit effort to compress down to the target display gamut, beyond clipping anything negative, which I assume is what causes those two different greens to collapse into each other. DRT_ZCAM_IzMh, by contrast, explicitly tries to pull them back along the M line to inside the display volume (although note: not completely inside with the current settings), and it maintains the separation between the liquid and the label, which are clearly different colours in reality.
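To illustrate that collapse with toy numbers (mine, not taken from the actual plot):

```python
import numpy as np

# Two distinct scene greens, both outside Rec.709 (negative blue channel):
liquid = np.array([0.10, 0.80, -0.15])
label = np.array([0.05, 0.80, -0.40])

# Per-channel clip to the display gamut:
print(np.clip(liquid, 0.0, None))  # [0.1  0.8  0.  ]
print(np.clip(label, 0.0, None))   # [0.05 0.8  0.  ]
# The separation carried by the negative blue channel is discarded,
# so the two greens land nearly on top of each other.
```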

As always, the question comes back to: what did that liquid really look like to a human observer on the day?

I don’t think the issue here is the one we see with blues going cyan because of the way ZCAM’s hue lines bend around towards cyan, as the ZCAM model’s hue lines around the bottle’s greens run pretty straight when viewed like this:


Hello, a while ago I rendered the biped jelly with an achromatic light in ACEScg but never bothered to upload it to the Dropbox folder. With a simple Nuke setup such as the one below, you can colour the light in any possible way:

I basically just multiply the achromatic render by a blue constant colour, and by using an OCIOColorSpace node I can set this blue primary to be the sRGB blue primary. Here is the same setup with the BT.2020 blue primary:
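In Python terms, the setup is roughly this (a sketch with the colour library, >= 0.4 assumed; the stand-in image data is mine):

```python
import numpy as np
import colour

# Stand-in for the achromatic linear ACEScg render, shape (H, W, 3):
achromatic = np.full((4, 4, 3), 0.18)

# Express the sRGB blue primary in ACEScg...
blue = colour.RGB_to_RGB(np.array([0.0, 0.0, 1.0]), "sRGB", "ACEScg")

# ...and tint the render with it. Because the source is achromatic,
# tinting in sRGB and then converting gives the same result. Swap
# "sRGB" for "ITU-R BT.2020" to reproduce the second setup:
tinted = achromatic * blue
```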

Here are the results using ACES 1.2:
ACEScg render using a blue sRGB primary Area Light:

ACEScg render using a blue BT.2020 primary Area Light:

I have uploaded the (achromatic) AP0 EXR here. For several reasons, I currently only have access to NC Nuke, so unfortunately I cannot test these setups with ZCAM IzMh v10.

I know that a biped render can be hard to judge, so I also did a more realistic render using 3D scans. It uses one achromatic light for the whole scene, so the same Nuke setup can be used for different tests. Here are the results using ACES 1.2:

Linear-sRGB achromatic render:

Linear-sRGB render using a blue sRGB primary Light Bulb:

Linear-sRGB render using a blue BT.2020 primary Light Bulb:

I have uploaded the (achromatic) AP0 EXR here. Kudos to Victor Pajot for the scans and @jmbihorel for the Asian CG model (beautiful original artwork available here).

If anyone has time and a Nuke license to run those four tests through ZCAM IzMh v10, that would be greatly appreciated (maybe you, @Derek?).

Thank you. This explanation makes more sense to me than the previous one. Much appreciated.

Out of curiosity, did anyone have a look at this? It was posted by Romain Guy on Slack some weeks ago. It is not necessarily related to the ZCAM conversation we are having, but could it be of interest for our ACES Output Transforms? Maybe?

Finally, if everyone thinks that it makes sense for a blue ACES primary to look “cyan”, I will shut up. :wink: Maybe there are some logical/scientific reasons that I don’t get. But I think that CG artists will freak out.

At some point (and maybe you guys will hate me for writing this), I wonder how a “Chromaticity Linear” Display Transform using Alex or Matthias’ Gamut Compression would look? I am not even sure that the perceptual hue paths should be part of the DRT (I guess this was maybe hinted at by Jed at the last meeting? And maybe this brings us back to the Miro Board from six months ago?). There is a nice “colourfulness” to the ZCAM DRTs that (if I understood properly) comes more from the Gamut Compression itself than from the ZCAM model, and that I would like to keep.

Chris


ZCAM DRT and OpenDRT for Rec.2020 red, green and blue.







The Asano model provides both RGB and XYZ CMFs, and anyone reading his thesis will find that he has no problem comparing the observers in CIE Lab space, for example. I would suggest asking Mark and Asano why you think they are both wrong in doing that!


Now, assuming that it is possible to compare observers, only a few percent increase in blue sensitivity is required to reach the AP1 blue primary for the Standard Observer. Somewhere out there, there is likely an observer that would see it:

It is not based on “nothing/something” but on simple tests: one can encode an image with BT.2020 and another with AP1 and assess whether a visual difference can be seen. Here is a random blind-test example:

I honestly would not be able to say which one is which! They might very well be the same image, who knows?
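For what it’s worth, the two blue primaries are only a few thousandths apart in xy; a quick check with the colour library (>= 0.4 assumed):

```python
import numpy as np
import colour

blue_ap1 = colour.RGB_COLOURSPACES["ACEScg"].primaries[2]
blue_2020 = colour.RGB_COLOURSPACES["ITU-R BT.2020"].primaries[2]

print(blue_ap1, blue_2020)
print(np.linalg.norm(blue_ap1 - blue_2020))  # a few thousandths of xy
```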

We can also compute Delta E on the colour checker, and directly measure the colour difference between the patches encoded in both RGB colourspaces:

[('dark skin', 0.0011419524692043341),
('light skin', 0.0033962236416911372),
('blue sky', 0.0036090355653960724),
('foliage', 0.0021312144064404947),
('blue flower', 0.0050948318536764899),
('bluish green', 0.0027797102041388065),
('orange', 0.0072347586036700063),
('purplish blue', 0.0071736115979037377),
('moderate red', 0.0033920583758162036),
('purple', 0.0021555866734024091),
('yellow green', 0.010202657482306884),
('orange yellow', 0.010127672244120979),
('blue', 0.0058280109434577397),
('green', 0.0047691284180629741),
('red', 0.0031879290573101839),
('yellow', 0.014413642744676864),
('magenta', 0.0044688435898245539),
('cyan', 0.0051040729312890141),
('white 9.5 (.05 D)', 0.00047139703700771376),
('neutral 8 (.23 D)', 0.00012077587856360507),
('neutral 6.5 (.44 D)', 0.00011996350929703285),
('neutral 5 (.70 D)', 4.1345601686671215e-05),
('neutral 3.5 (1.05 D)', 0.00010587071067308681),
('black 2 (1.5 D)', 4.2559679481020816e-05)]
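For reproducibility, here is a sketch of the kind of computation that produces numbers like the above, using colour (>= 0.4.3 assumed for the signatures used). My reading of the comparison, which may differ from the exact script used, is that the same RGB triplets are interpreted under both sets of primaries and the results compared in Lab:

```python
import numpy as np
import colour

cc = colour.CCS_COLOURCHECKERS["ColorChecker 2005"]
XYZ = colour.xyY_to_XYZ(np.array(list(cc.data.values())))

# Encode the patches as BT.2020 RGB, then decode the *same* triplets
# as if they were ACEScg (AP1):
RGB = colour.XYZ_to_RGB(XYZ, "ITU-R BT.2020", illuminant=cc.illuminant)
XYZ_as_ap1 = colour.RGB_to_XYZ(RGB, "ACEScg", illuminant=cc.illuminant)

# Compare in Lab with a Delta E metric:
Lab_ref = colour.XYZ_to_Lab(XYZ, cc.illuminant)
Lab_tst = colour.XYZ_to_Lab(XYZ_as_ap1, cc.illuminant)

for patch, dE in zip(cc.data.keys(), colour.delta_E(Lab_ref, Lab_tst)):
    print(patch, dE)
```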

With that in mind, I don’t think that talking about the AP1 blue primary colour is heretical: the two colourspaces are, for practical purposes, the same, and if the blue laser of BT.2020 is visible and can produce a colour, I will certainly not blame anyone for discussing AP1. Can an admin s/AP1/BT.2020/g so that we can move on…


I found some green liquid so I took a picture:



OpenDRT and ACES are closer to what I saw with my eyes; the ZCAM DRT is entirely the wrong color. Lit 100% with sunlight.

The data is within P3.


Interesting… I wonder what’s going on here then?

If the colours are not clipped and are within the target display gamut, it should be easy to produce the ground truth: simply encode for your target display, no need for anything else.

ZCAM, vanilla, should produce the same values in output as in input, i.e. it round-trips, so I’m assuming that either there is a problem with the current implementation or the bolted-on changes do more than they should. It might be worth looking at the chromatic adaptation too.
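That round-trip is straightforward to check against a reference implementation, e.g. the ZCAM in colour (>= 0.4 assumed; the viewing-condition numbers below are arbitrary):

```python
import numpy as np
import colour

XYZ = np.array([185.0, 206.0, 163.0])    # test stimulus, absolute values
XYZ_w = np.array([256.0, 264.0, 202.0])  # reference white
L_A, Y_b = 264.0, 100.0                  # adapting luminance, background

spec = colour.XYZ_to_ZCAM(XYZ, XYZ_w, L_A, Y_b)
XYZ_back = colour.ZCAM_to_XYZ(spec, XYZ_w, L_A, Y_b)

print(np.allclose(XYZ, XYZ_back))  # True: the vanilla model round-trips
```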


The hue shift happens as a result of the gamut compression in the DRT ZCAM IzMh. Disabling that (or reducing the compression to around 1.03 from the default 1.2) makes the green match OpenDRT very closely.

1.2 compression:
[image: compr12]

1.03 compression:
[image: compr103]

So really, the difference here is just that the 1.2 compression pulls more stuff into the gamut.
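For intuition on that knob, here is a toy Reinhard-style distance compressor in the spirit of the gamut-mapping VWG curve, not the actual DRT_ZCAM_IzMh code (names and numbers are mine):

```python
def compress(d, limit=1.2, threshold=0.8):
    # Toy compressor on a normalised gamut distance (1.0 = boundary):
    # distances below `threshold` pass through; `limit` is the distance
    # that maps exactly onto the boundary.
    if d <= threshold:
        return d
    s = (limit - threshold) * (1.0 - threshold) / (limit - 1.0)
    return threshold + (d - threshold) / (1.0 + (d - threshold) / s)

# With limit=1.2 a distance of 1.2 lands on the boundary (1.0); with
# limit=1.03 the same distance stays well outside (~1.12), i.e. less
# stuff is pulled into the gamut:
for limit in (1.03, 1.2):
    print(limit, [round(compress(d, limit), 3) for d in (0.9, 1.0, 1.1, 1.2)])
```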

Yeah, my read on it is that what we’re seeing here in the clamp variations is in fact a “skew”. Both “Perceptual Hue Preserving” and “Chromaticity Angle Preserving” approaches would show results similar to what we see with ZCAM in this situation.

The plot below animates the compression used by the ZCAM DRT (1.0 → 1.2) and OpenDRT (clamp mixed from 0 → 1.0):

[image: greenBottle_DRT2up_chromaPlot_animated_v001]


So then, the hue lines in the model are either not correct at all, or it’s an IDT problem?

The hue lines are more or less correct as far as prediction goes: if you increase or decrease colourfulness (or chroma), the perceived hue will shift, i.e. the Abney effect. The model could also be over-predicting. The question (which I asked a few posts above and a few meetings ago) is whether this is desirable, i.e. do we want hues to shift as we compress colourfulness? We are not trying to predict the appearance of a colour, which is what the model does; we are trying to implement a DRT that behaves predictably.
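As a concrete check on those hue lines, one can trace a constant-ZCAM-hue line with the colour implementation (>= 0.4 assumed; viewing conditions arbitrary): holding h fixed and varying colourfulness M, the xy chromaticity drifts along a curve rather than a straight line, which is the model’s account of the Abney effect.

```python
import numpy as np
import colour
from colour.appearance import CAM_Specification_ZCAM

XYZ_w = np.array([256.0, 264.0, 202.0])  # reference white (absolute)
L_A, Y_b = 264.0, 100.0

# Walk down a constant-hue line (a blue-ish h) at varying colourfulness:
for M in (5.0, 15.0, 30.0, 45.0):
    spec = CAM_Specification_ZCAM(J=50.0, M=M, h=250.0)
    XYZ = colour.ZCAM_to_XYZ(spec, XYZ_w, L_A, Y_b)
    print(M, colour.XYZ_to_xy(XYZ))  # xy drifts as M changes
```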


This, to me, is a very telling animation.

The ZCAM model appears to be doing exactly what we hoped it would from the perspective of not rotating hues (I think; we should confirm the numbers in JCh). The OpenDRT model’s behavior is different, but are the visual results more desirable?

Also, it’s worth thinking about how this will translate to other devices. Will the behavior of the ZCAM model yield more consistent results across display devices with different primaries and dynamic ranges than the behavior we’re seeing in OpenDRT, regardless of which produces a more desirable result on one particular display?


I can see it in the sRGB too. It’s even more obvious in a scene with fake night post-processing applied.

False.


I see how this works.

A discussion about how to form an image, about the relationship to values with no meaning to a standard observer, and about what those values mean in terms of relationships, is countered with a formed image.

My goodness. I give up.


Thank you. It is OK to disagree, provided it is done respectfully; it won’t be the first time nor the last!