ZCAM for Nuke

Anyone want to explain to me how a fit model (replace the F with SH) based around a display EOTF JND experiment could ever possibly even remotely work even faintly like anything related to the HVS?

Thanks.

2 Likes

It is a hue shift in the greens toward cyan. For example:


1 Like

I’ve always thought of it that same way. Saying that a reproduction of something lit with an AP1 primary “looks wrong” is making a big assumption about what we think the AP1 primary actually “looks like”.

1 Like

I’m sorry… this is pure madness.

Imagine laying out a 2D map of the 3D Earth globe, pointing beyond it to the table, and speculating about what it looks like.

“Hey honey… want to go to Wyoming?”

“No… let’s go north of the North Pole!”

Way out into astronaut architecturism.

2 Likes

I would use a laser-like primary such as that of BT.2020, and then we can continue the discussion whilst having everyone comfortable with the idea that this time it can be seen by the CIE 1931 2 Degree Standard Observer.

That the Standard Observer does not see it does not mean that you won’t. We are using a hard frontier, the average of a few observers, because it is simpler mathematically (who wants to carry probabilities around?), when in reality it is statistically much fuzzier.

Should we instead use the probability that this particular stimulus is visible to a “probabilistic observer”, that probability could very well be high. I will compute that for the Asano Observers when I have spare cycles.

@ChrisBrejon: What about converting your image from BT.2020 to AP0? You haven’t rendered spectrally anyway, so it does not really matter. The result won’t be much different though, as you might rightly expect.
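Something along these lines with the colour Python library would do it (the pixel value below is just a placeholder, any float image array works):

```python
import numpy as np
import colour

# Placeholder: a single linear BT.2020 pixel.
rgb_bt2020 = np.array([0.2, 0.4, 0.9])

# Linear BT.2020 to linear ACES2065-1 (AP0), with chromatic adaptation
# between the two whitepoints.
rgb_ap0 = colour.RGB_to_RGB(
    rgb_bt2020,
    colour.RGB_COLOURSPACES["ITU-R BT.2020"],
    colour.RGB_COLOURSPACES["ACES2065-1"],  # AP0 primaries
    chromatic_adaptation_transform="CAT02",
)
print(rgb_ap0)
```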

Cheers,

Thomas

Until I get to do it, here is something relevant: About issues and terminology - #7 by Thomas_Mansencal

Sorry, you are conflating issues, in the same way your Asano diagram seductively creates the idea that all projections live on the same CIE xy projection, as opposed to appreciating that the horizon for each observer there is singular and a closed domain.

The space is bounded. Suggesting that anything exists beyond the spectral locus for the observer is absolute nonsense and rubbish.

All we have is the standard observer model, and the moment we step outside that, all bets are off and we are into nonsense land. Can a standard observer, as per Asano et al., be calibrated for a specific observer? Absolutely. Suggesting that the spectral locus is somehow different, and meaningful beyond the locus, as opposed to a psychophysical representation in each observer, is pure nonsense.

It’s a physical wall of visible electromagnetic radiation.

Trying to suggest that AP1 blue might be visible to some other observer is hilarious.

2 Likes

I was mistaken to use AP1 as the example in my previous post. It was a bad example.

The point I was trying to make is that there’s no reason to assume a primary in a larger RGB space should map to the primary of a smaller RGB space when converting between them. Further, the hue of the larger space’s primary may very well be reproduced with a mixture of red, green and blue in the smaller space.
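A quick way to see this numerically, with the colour Python library and BT.2020 → sRGB as a stand-in example (since AP1 was a bad choice on my part):

```python
import numpy as np
import colour

# The BT.2020 blue primary expressed in linear sRGB: the result is not
# (0, 0, 1), and its negative components mean any in-gamut reproduction
# of that hue has to be a mixture of all three channels.
blue = colour.RGB_to_RGB(
    np.array([0.0, 0.0, 1.0]),
    colour.RGB_COLOURSPACES["ITU-R BT.2020"],
    colour.RGB_COLOURSPACES["sRGB"],
)
print(blue)  # roughly [-0.07, -0.01, 1.12]
```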

2 Likes

This is a discussion I tried to bring up at the “gamut” mapping VWG way back when. What is a reasonable and sane approach here for getting values into a working model?

  1. Perceptual “hue”.
  2. Tristimulus linear-energy-like.

Could the working space mapping be different and subject to different requirements to the image formation mapping?

2 Likes

Yeah well, no. The CIE xyY projective transformation is valid for any observer. You can also design a transformation that maps one observer to another.

The space is indeed bounded but there are as many spaces as there are observers, for the same reason that there is a sensitivity space for every single camera.

Again, it is a wall for the particular observer you use; nothing says that a stimulus this observer cannot perceive will not be seen by another one. We have plenty of observers, e.g. the Asano Individual Observers, and even standardised ones, e.g. CIE 2012, proving that this is the case.

If you make the border probabilistic instead of an average/mean, you can come up with a probability that it can be seen by a distribution of observers. Put another way, we are only considering the central slice of the distribution of observers that served to build the Standard Observer.

You could then certainly find a value that is visible to some observer and that is mapped exactly where AP1 blue is located for the Standard Observer. We do that all the time with cameras and, surprise, many values are mapped outside the spectral locus!
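To make that concrete, a toy sketch (the 3×3 “camera fit” matrix below is entirely made up for illustration):

```python
import numpy as np
import colour

# A made-up "camera RGB to XYZ" 3x3 fit: a pure camera-blue exposure maps
# to a negative y chromaticity, i.e. outside the spectral locus of the
# CIE 1931 Standard Observer.
M = np.array([
    [0.52, 0.34, 0.14],
    [0.26, 0.88, -0.14],
    [0.02, -0.08, 1.06],
])
XYZ = M @ np.array([0.0, 0.0, 1.0])
print(colour.XYZ_to_xy(XYZ))  # ~[0.132, -0.132]
```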

Cheers,

Thomas

1 Like

Super.

How does that relate to the erroneous notion of what AP1 blue looks like again? I’m confused.

Wait… are you telling me that some observers are looking at 500nm and seeing something outside / beyond / magically-not-electromagnetic radiation of 500nm?

Put another way, this has nothing to do with the meaning of the cones firing for a specific slice of electromagnetic radiation. I’m still confused as to how any of this relates to what AP1 blue looks like, or how that Asano diagram relates to it.

So we’ve come around to suggesting that indeed AP1 blue, which is beyond the visible spectrum wall event horizon, is now meaningful to an observer.

And those values beyond the standard observer spectral locus are math garbage from a rubbish 3x3 fit. Meaningless to a standard observer. Meaningful to the camera observer.

Way up into space now…

2 Likes

Did you miss the point where AP1 blue is only slightly beyond the border? We are not talking about AP0 here, or anything off the border by any large xy distance. The context is that particular primary. The difference between BT.2020 and AP1 is not significant, which is why I was saying that the test should use the former instead; then we would basically not be having this conversation.

“Not significant” implies that your measurement has led to a conclusion based on… something?

What outside-of-the-spectral-locus means in terms of general image formation is nonsense, and as a result, that is just one chunk of something-or-other that needs to be brought back into the working tristimulus model if there is validity placed on the pre-transform values.

I do think that we are actually likely making a case for the same problem: some values can be meaningful, and some are absolutely rubbish relative to the model in question.

What I take issue with is the idea that we can plot anything outside the locus.

I believe the Asano model used LMS plots for this reason? Using the CIE xy tristimulus plot, as a stimulus specification, is gravely misleading, and leads to completely incoherent, yet seductively logical, questions and problematic conclusions.


3 Likes

We are all waiting to see your Colab notebook with a proposal.

1 Like

It’s a question. Is this the answer?

I’m legitimately curious about the question and the opinions on the subject:

  1. Should there be a mapping into a working space?
  2. If so, which version? Perceptual-like or Psychophysical / light-transport-like?
  3. Why?
  4. What is important about this transform in terms of qualia? Smoothness of tristimulus results to avoid posterization kinks? “Accuracy” of tristimulus values in the psychophysical tristimulus sense, or “accuracy” in terms of “colourfulness” in relation to other values?
  5. How could testing be designed to test for undesirable facets? Sine patterns, etc.?

Dunno. No Colab from me. I’m genuinely curious as to what the minds here think on the rather valuable question you brought up above.

3 Likes

Here are some more images of blue stuff to peruse…

CG renders of blue stuff

Lights by Chris in both sRGB and ACEScg primaries. I can see the cyan in the ACEScg, but not really in the sRGB:


Film of blue stuff looking blue

Next we have a bunch of blue stuff, none of which is looking particularly cyan in zCAM to me…

A blue screen:

A blue sky:


A blue light:




Film of blue stuff looking cyan

We do have this picture that looks cyan, again from the gamut mapping group test images:

I find it interesting that while some images like this one are looking cyan in zCAM, many are not.

From a purely pragmatic perspective, could we say that “it works” means we can point a camera at a scene and expect to see the colors on screen looking like they did to our eyes? Then the question would be whether the scene looked blue or cyan to the eye. If it looked cyan, then we could say zCAM “works” and OpenDRT does not. Conversely, if it looked blue, then we could say OpenDRT “works” and zCAM does not. The one doing the better job of faithfully reproducing the color the eye saw on set is the one that “works” practically.

The file name contains “f matas”. Is that a name, or do we know who shot this? I wonder if we could confirm with them how it appeared on set to the eye?

This green bottle is an interesting example.

The raw data sits well outside of both 709 and P3:
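(If you want to sanity check that sort of thing numerically, here is a sketch with the colour Python library; the ACEScg triplet is made up for illustration, not sampled from the actual frame.)

```python
import numpy as np
import colour

# A hypothetical saturated ACEScg green: negative components after the
# conversion mean the value sits outside the destination gamut.
green_acescg = np.array([0.05, 0.45, 0.10])
for name in ("ITU-R BT.709", "P3-D65"):
    rgb = colour.RGB_to_RGB(
        green_acescg,
        colour.RGB_COLOURSPACES["ACEScg"],
        colour.RGB_COLOURSPACES[name],
    )
    print(name, rgb, "out of gamut:", bool(np.any(rgb < 0)))
```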

The image below shows:

  • Left - ACES → 709 matrix | Exp -0.7 (to level match) | EOTF
  • Middle - DRT_ZCAM_IzMh_v10_Blink
  • Right - OpenDRT 0.0.90b4 (clamp disabled)

greenBottle_DRT3up_v001

One thing I’d note here is that in both the simple Matrix and OpenDRT renderings, the hues of the liquid and of the label on the front of the bottle appear fairly similar. But if we look at the plot, they actually sit in pretty different positions.


Neither the simple Matrix+EOTF nor OpenDRT really makes an explicit effort to compress down to the target display gamut; they just clip anything negative, which I assume is what causes those two different greens to collapse into each other. DRT_ZCAM_IzMh, on the other hand, is explicitly trying to pull them back in along the M line to inside the display volume (although note, not completely inside with the current settings), while still maintaining separation between the liquid and the label, which are clearly different colours in reality.
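A toy numeric illustration of that collapse (values made up):

```python
import numpy as np

# Two clearly distinct out-of-gamut greens become identical once their
# negative components are simply clipped away.
liquid = np.array([-0.20, 0.80, 0.10])
label = np.array([-0.05, 0.80, 0.10])
print(np.clip(liquid, 0.0, None))  # [0.   0.8  0.1]
print(np.clip(label, 0.0, None))   # [0.   0.8  0.1] -> collapsed together
```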

As always, the question comes back to: what did that liquid really look like to a human observer on the day?

I don’t think the issue here is the one we see with blues going cyan because of the way ZCAM’s hue lines bend around towards cyan, as the model’s hue lines around the bottle’s values run pretty straight when viewed like this:

3 Likes

Hello, a while ago I rendered the biped jelly with an achromatic light in ACEScg but never bothered to upload it to the Dropbox folder. With a simple Nuke setup such as the one below, you can colour the light in any possible way:

I basically just multiply the achromatic render by a blue constant colour and, using an OCIOColorSpace node, I can set this blue primary to be the sRGB blue primary. Here is the same setup with the BT.2020 blue primary:

Here are the results using ACES 1.2:
ACEScg render using a blue sRGB primary Area Light:

ACEScg render using a blue BT.2020 primary Area Light:

I have uploaded the (achromatic) AP0 exr here. For several reasons, I currently only have access to NC Nuke, so unfortunately I cannot test these setups with ZCAM IzMh v10.
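For reference, here is the same trick outside of Nuke, as a small sketch with the colour Python library (the array values are placeholders):

```python
import numpy as np
import colour

# Placeholder achromatic ACEScg render: R = G = B everywhere.
achromatic = np.full((4, 4, 3), 0.18)

# The sRGB blue primary expressed in ACEScg, standing in for the
# OCIOColorSpace node; swap "sRGB" for "ITU-R BT.2020" for the other test.
blue = colour.RGB_to_RGB(
    np.array([0.0, 0.0, 1.0]),
    colour.RGB_COLOURSPACES["sRGB"],
    colour.RGB_COLOURSPACES["ACEScg"],
)

# "Colouring" the light is then just a per-pixel multiply, as in Nuke.
coloured = achromatic * blue
```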

I know that a biped render can be hard to judge, so I also did a more realistic render using 3D scans. It uses one achromatic light for the whole scene, so the same Nuke setup can be used for different tests. Here are the results using ACES 1.2:

Linear-sRGB achromatic render:

Linear-sRGB render using a blue sRGB primary Light Bulb:

Linear-sRGB render using a blue BT.2020 primary Light Bulb:

I have uploaded the (achromatic) AP0 exr here. Kudos to Victor Pajot for the scans and @jmbihorel for the Asian CG model (beautiful original artwork available here).

If anyone has time and a Nuke license to run those four tests through ZCAM IzMh v10, that would be greatly appreciated (maybe you, @Derek?).

Thank you. This explanation makes more sense to me than the previous one. Much appreciated.

Out of curiosity, did anyone have a look at this? It was posted by Romain Guy on Slack some weeks ago. It is not necessarily related to the ZCAM conversation we are having, but it could be of interest for our ACES Output Transforms? Maybe?

Finally, if everyone thinks that it makes sense for a blue ACES primary to look “cyan”, I will shut up. :wink: Maybe there are some logical/scientific reasons that I don’t get. But I think that CG artists will freak out.

At some point (and maybe you guys will hate me for writing this), I wonder what a “Chromaticity Linear” Display Transform using Alex or Matthias’ Gamut Compression would look like. I am not even sure that the perceptual hue paths should be part of the DRT (I guess this was maybe hinted at by Jed at the last meeting? And maybe this brings us back to the Miro Board from six months ago?). There is a nice “colourfulness” (sorry for butchering the words) to the ZCAM DRTs that, if I understood properly, comes more from the Gamut Compression itself than from the ZCAM model, and that I would like to “keep”.

Chris

2 Likes

ZCAM DRT and OpenDRT for Rec.2020 red, green and blue.






1 Like

The Asano model provides both RGB and XYZ CMFs, and anyone reading the thesis will find that Asano has no problem comparing observers in CIE Lab space, for example. If you think they are both wrong in doing that, I would suggest asking Mark and Asano directly!


Now, assuming that it is possible to compare observers, only a few percent of an increase in blue sensitivity is required to reach the AP1 blue primary for the Standard Observer. Somewhere out there is likely an observer that would see it:

It is not based on “nothing/something” but on simple tests: one can encode an image with BT.2020 and another with AP1 and assess whether a visual difference can be seen. Here is a random blind-test example:

I honestly would not be able to say which one is which! They might very well be the same image, who knows?

We can also compute Delta E on the colour checker, and directly measure the colour difference between the patches encoded in both RGB colourspaces:

[('dark skin', 0.0011419524692043341),
('light skin', 0.0033962236416911372),
('blue sky', 0.0036090355653960724),
('foliage', 0.0021312144064404947),
('blue flower', 0.0050948318536764899),
('bluish green', 0.0027797102041388065),
('orange', 0.0072347586036700063),
('purplish blue', 0.0071736115979037377),
('moderate red', 0.0033920583758162036),
('purple', 0.0021555866734024091),
('yellow green', 0.010202657482306884),
('orange yellow', 0.010127672244120979),
('blue', 0.0058280109434577397),
('green', 0.0047691284180629741),
('red', 0.0031879290573101839),
('yellow', 0.014413642744676864),
('magenta', 0.0044688435898245539),
('cyan', 0.0051040729312890141),
('white 9.5 (.05 D)', 0.00047139703700771376),
('neutral 8 (.23 D)', 0.00012077587856360507),
('neutral 6.5 (.44 D)', 0.00011996350929703285),
('neutral 5 (.70 D)', 4.1345601686671215e-05),
('neutral 3.5 (1.05 D)', 0.00010587071067308681),
('black 2 (1.5 D)', 4.2559679481020816e-05)]
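Roughly, the computation is something like this with a recent version of colour (from memory, so treat it as a sketch rather than the exact script that produced the list above):

```python
import numpy as np
import colour

# ColorChecker patch chromaticities (xyY) converted to tristimulus values.
cc = colour.CCS_COLOURCHECKERS["ColorChecker 2005"]
names, xyY = zip(*cc.data.items())
XYZ = colour.xyY_to_XYZ(np.array(xyY))

bt2020 = colour.RGB_COLOURSPACES["ITU-R BT.2020"]
acescg = colour.RGB_COLOURSPACES["ACEScg"]  # AP1 primaries

# Encode each patch in BT.2020 RGB, then decode the *same* triplets as if
# they were AP1, and measure the resulting error in CIE Lab.
RGB = colour.XYZ_to_RGB(XYZ, bt2020, illuminant=cc.illuminant)
XYZ_as_AP1 = colour.RGB_to_XYZ(RGB, acescg, illuminant=cc.illuminant)

Lab_reference = colour.XYZ_to_Lab(XYZ, illuminant=cc.illuminant)
Lab_confused = colour.XYZ_to_Lab(XYZ_as_AP1, illuminant=cc.illuminant)

for name, dE in zip(names, colour.delta_E(Lab_reference, Lab_confused)):
    print(name, dE)
```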

With that in mind, I don’t think that talking about the colour of the AP1 blue primary is heretical: the two colourspaces are, for practical purposes, the same, and if the blue laser of BT.2020 is visible and can produce a colour, I will certainly not blame anyone for discussing AP1. Can an admin s/AP1/BT.2020/g so that we can move on…
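For the record, the two blue primaries themselves are only a few thousandths of an xy unit apart:

```python
import numpy as np
import colour

ap1_blue = colour.RGB_COLOURSPACES["ACEScg"].primaries[2]
bt2020_blue = colour.RGB_COLOURSPACES["ITU-R BT.2020"].primaries[2]
print(ap1_blue, bt2020_blue)  # ~[0.128, 0.044] vs ~[0.131, 0.046]
print(np.linalg.norm(ap1_blue - bt2020_blue))  # ~0.004
```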

3 Likes