That is key, isn’t it. None of us (maybe @martin.smekal?) know what the bar looked like on the day of the shoot. The closest we have is what the image looks like with the current default ALEXA Rec.709 LUT, which is what it would have looked like on a Rec.709 monitor on the shoot (assuming they just used the default mon out LUT).
To my eye that does skew a bit cyan. But nowhere near as much as the ZCAM rendering.
But I do think that, as with the gamut compression operator, we can’t really worry too much about the rendered appearance of image data outside the spectrum locus. All we can really say about that data is that it is not colorimetrically correct. So if it gets rendered in a way that is devoid of artefacts, and a colourist is able to grade it to where they want creatively (through the display rendering), is that enough?
Yeah, it mirrors what we see in that M bounds image, with the pinching at blue.
It can also be useful to visualise how much of the input gamut is falling outside of the display’s gamut triangle.
Left: ACEScg ColourWheel plotted in CIE 1976 u’ v’
Middle: ACEScg ColourWheel → Rec709 with all pixels containing negative values dimmed.
Right: ZCAMishDRT result with the Rec709 gamut triangle bounds overlaid.
Left: ACES ColourWheel plotted in CIE 1976 u’ v’
Middle: ACES ColourWheel → Rec709 with all pixels containing negative values dimmed.
Right: ZCAMishDRT result with the Rec709 gamut triangle bounds overlaid.
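For anyone wanting to reproduce these plots, the CIE 1976 u’v’ projection and the Rec.709 triangle are straightforward to compute from XYZ. A minimal sketch (the function names are mine, not from the Nuke setup):

```python
import numpy as np

def XYZ_to_uv_prime(XYZ):
    """Project CIE XYZ to the CIE 1976 u'v' chromaticity plane."""
    X, Y, Z = np.moveaxis(np.asarray(XYZ, dtype=float), -1, 0)
    d = X + 15.0 * Y + 3.0 * Z
    return np.stack([4.0 * X / d, 9.0 * Y / d], axis=-1)

def xy_to_XYZ(xy, Y=1.0):
    """Lift an xy chromaticity to XYZ at luminance Y."""
    x, y = xy
    return np.array([x * Y / y, Y, (1.0 - x - y) * Y / y])

# Rec.709 primaries as xy chromaticities -> the gamut triangle in u'v'.
rec709_xy = [(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)]
triangle = np.array([XYZ_to_uv_prime(xy_to_XYZ(c)) for c in rec709_xy])
```

Plotting `triangle` (closed back to the first vertex) over the scattered image chromaticities gives the overlay shown in the right-hand images.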
According to this presentation (slide 15), JzAzBz should be much more uniform in its distribution of hues than other color spaces. Unfortunately, there’s not a lot of info on how this comparison was created, and it only focuses on JzAzBz, not the entire ZCAM model.
After much head scratching, I eventually figured out what was going on here.
The way I have to do the 2D LUT lookup means I need to pipe it into the same image stream as the image being processed. If that image is smaller than 1024x1024, the Expression node is unable to sample any positions outside the frame. The HueSweep node outputs 512x533, so J/h combinations with a 2D position outside that area could not be sampled. I’ve now added a Reformat fix inside the node to deal with this.
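The failure mode is easier to see outside Nuke. A minimal sketch of a J/h-indexed 2D LUT lookup (names and axis mapping are my assumptions, not the node’s internals) — the key point being that sample coordinates must be clamped to the LUT image bounds, which is effectively what the Reformat fix guarantees:

```python
import numpy as np

def sample_jh_lut(lut, J, h):
    """Nearest-neighbour lookup of a (height, width) LUT image, indexed by
    lightness J in [0, 1] on the x axis and hue h in degrees on the y axis.
    Coordinates are clamped so positions outside the image never sample
    undefined data -- the bug described above."""
    height, width = lut.shape[:2]
    x = np.clip(np.round(J * (width - 1)).astype(int), 0, width - 1)
    y = np.clip(np.round((h / 360.0) * (height - 1)).astype(int), 0, height - 1)
    return lut[y, x]
```

Without the clamp (or with a LUT image smaller than the positions being requested), out-of-frame samples return garbage, which is consistent with the artefacts seen before the fix.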
I’m still a little suss about the slight clustering of points around the cyan line, but the gigantic rip (and total lack of highlight compression) is gone.
Functionally, the biggest difference is that the edge of the gamut and the head (for lack of a better word) are now handled separately. For example, backing off the edge-of-gamut compression whilst turning up the top end desaturation.
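To illustrate what “handled separately” buys you, here’s a toy sketch of two independent stages acting on colourfulness M — this is not the actual node internals, just the shape of the idea, with made-up parameter names:

```python
import numpy as np

def compress_M(M, M_boundary, J, edge_strength=0.8, topend_desat=0.5, J_knee=0.8):
    """Two independently adjustable stages (illustrative only):
    1) soft-compress M toward the gamut boundary (edge-of-gamut compression);
    2) fade M toward zero as lightness J approaches the top end (head desat)."""
    # Stage 1: values below a fraction of the boundary pass through untouched;
    # values above it are rolled off so the result never exceeds the boundary.
    t = edge_strength * M_boundary
    M_c = np.where(M <= t,
                   M,
                   t + (M_boundary - t) * np.tanh((M - t) / (M_boundary - t)))
    # Stage 2: linear fade above a lightness knee, scaled by topend_desat.
    fade = np.clip((J - J_knee) / (1.0 - J_knee), 0.0, 1.0)
    return M_c * (1.0 - topend_desat * fade)
```

Because `edge_strength` and `topend_desat` are separate knobs, you can back one off while pushing the other up, exactly as described above.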
I’ve also made the process of repathing the internal gamut bounds images a bit easier. There is now just a single exposed path you can point at the directory where they’re stored.
Great stuff @alexfry. On initial testing, that looks really good out of the box.
Is there a reason you don’t just use relative paths for the images in your Nuke script? I just set the path to [python {nuke.script_directory()}]/images/gamutBounds/
Obviously that only works for Nuke scripts in the root of the GitHub repo. But defaulting to that makes it work immediately when you just open the script.
I also added a Multiply node before the final Colorspace25 node, set to (parent.Colorspace25.colorspace_out==31) ? 100 : 1 so that it converts to absolute nits prior to ST.2084 encoding if you select that EOTF.
Again, adding custom curves could break that by changing the enum from 31, but it works initially.
EDIT: The Python expression 100 if (nuke.toNode('Colorspace25').knob('colorspace_out').value() == 'st2084') else 1 should be more robust to possible enum changes.
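For context on why the ×100 is needed: ST.2084 (PQ) encodes absolute luminance, so if display-linear 1.0 is taken to mean 100 nits (an assumption that matches the Multiply above), the signal has to be scaled before encoding. A self-contained sketch of the PQ inverse EOTF from the ST 2084 constants:

```python
def pq_encode(nits):
    """SMPTE ST 2084 (PQ) inverse EOTF: absolute luminance in cd/m^2 -> signal."""
    m1, m2 = 2610.0 / 16384.0, 2523.0 / 4096.0 * 128.0
    c1, c2, c3 = 3424.0 / 4096.0, 2413.0 / 4096.0 * 32.0, 2392.0 / 4096.0 * 32.0
    y = (nits / 10000.0) ** m1
    return ((c1 + c2 * y) / (1.0 + c3 * y)) ** m2

# Display-linear 1.0 assumed to mean 100 nits, hence the x100 multiply
# before encoding.
signal = pq_encode(1.0 * 100.0)   # ~0.508
```

Without the scale, a display-linear 1.0 would be encoded as 1 nit, which is why the image looks crushed if you skip the Multiply.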
The time it took me to work this out shows how hacky and “trial and error” my coding is!
The tone scale compared to ACES 1.2 is a bit off (for sRGB output), difference is mostly the dark-to-dim surround compensation. Started looking at this because I wasn’t sure if the darkening of colors I see happens because of the model or the tone scale. Is the SDR curve meant for P3D65?
It probably makes sense that the dark-to-dim is not baked into the tone scale when using a CAM, since that is a crude attempt at an appearance match, and should probably be left for the CAM to do a better job of.
Looking quickly at the internals of @alexfry’s DRT group node, all the ZCAM transforms are set to “average” surround. Some experimenting with different surround settings for the forward and inverse ZCAM transforms is probably needed.
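For reference when experimenting, CIECAM02 parameterises surround with three canonical conditions, and my understanding is that ZCAM uses an analogous surround parameter — treat the exact values below as the CIECAM02 convention, not confirmed ZCAM internals:

```python
# CIECAM02 'c' surround exponents; assumption: ZCAM's surround factors follow
# the same three-condition scheme (average / dim / dark).
SURROUND = {"average": 0.69, "dim": 0.59, "dark": 0.525}

# The experiment suggested above: keep the forward (scene-side) transform at
# one surround and decode with the surround matching the viewing environment.
forward_surround = "average"   # current setting in the DRT group node
inverse_surround = "dim"       # candidate for typical SDR viewing
```

If the dark-to-dim compensation is meant to come from the CAM rather than the tone scale, mismatching the forward and inverse surrounds like this is where that appearance shift would be introduced.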
For those of you who are interested in the alternative Nuke implementation of a ZCAM based DRT shown in todays (2021-12-15) meeting you can find my (very much work-in-progress) model here:
I know that @Thomas_Mansencal has said that he believes a two step chromatic adaptation should really be used. But in the absence of that, do you think the Bradford matrix check-box should be set on the input Colourspace conversion to CIE XYZ? Or have I missed something, and you have a two step adaptation to D65 constructed as part of the node tree?
EDIT: I see, on further investigation that the input ZCAM conversion uses ACES, rather than D65, as the reference white, so perhaps it is handled here and the Bradford CAT should not be enabled earlier.
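For anyone wanting to check what the Bradford checkbox would actually do here, a von Kries adaptation matrix with Bradford cone primaries between the ACES white (~D60) and D65 can be built directly — a minimal sketch, independent of Nuke:

```python
import numpy as np

# Bradford cone response matrix (standard published values).
BRADFORD = np.array([[ 0.8951,  0.2664, -0.1614],
                     [-0.7502,  1.7135,  0.0367],
                     [ 0.0389, -0.0685,  1.0296]])

def xy_to_XYZ(x, y):
    """Lift an xy white chromaticity to XYZ with Y = 1."""
    return np.array([x / y, 1.0, (1.0 - x - y) / y])

def bradford_cat(src_xy, dst_xy):
    """Von Kries chromatic adaptation matrix (Bradford) between two whites."""
    s = BRADFORD @ xy_to_XYZ(*src_xy)
    d = BRADFORD @ xy_to_XYZ(*dst_xy)
    return np.linalg.inv(BRADFORD) @ np.diag(d / s) @ BRADFORD

# ACES white point (~D60) to D65 -- the adaptation under discussion.
M = bradford_cat((0.32168, 0.33767), (0.3127, 0.3290))
```

If the ZCAM conversion already references the ACES white internally, applying `M` earlier as well would double-adapt, which is consistent with the EDIT above.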
I’ve got an HDR monitor (MacBook Pro M1) and have both @alexfry and @matthias.scharfenber flavors of the ZCAMish DRT working via the “Enable macOS HDR Color Profile (Display P3) (Beta)” function. Looks amazing!
I’d like to compare this to ACES 1.2 and OpenDRT but am not having much luck with getting either of these to display in HDR.
ACES in HDR
For ACES I’m using the Nuke ACES 1.2 OCIO config with the view transform set to P3-D65 1000 nits (ACES), and it displays in SDR (i.e. it is clamping at 1). If I switch the view transform to Raw it does display in HDR.
OpenDRT
@jedsmith 's OpenDRT v90b4 is just showing pitch black. This is coming from the Display Scale node (i.e. disabling it makes the image appear again).
I’ve put together a clip with a bunch of example footage and still images running through the 3 DRTs.
Left Column: ZCAMishDRT Alpha 0.6
Middle Column: OpenDRT v0/0/90b2
Right Column: ACES 1.2
Top Row: -5 Stops
Middle Row: Standard Exposure
Bottom Row: +5 Stops
Still frames all do a -5 to +5 Exposure sweep over 51 frames.
The idea with the -5 and +5 offsets is to give you something to compare when trying to get a sense of “does this image feel like the same image, or are the proportions shuffled around as exposure changes?”
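For anyone building a similar comparison, the sweeps are just scene-linear gains applied before the DRT. A trivial sketch of the maths (the frame mapping is my assumption about how a 51-frame sweep would be laid out):

```python
def expose(rgb, stops):
    """Apply an exposure offset in stops to scene-linear RGB."""
    return [c * 2.0 ** stops for c in rgb]

# A -5 to +5 stop sweep over 51 frames steps exposure by 0.2 stops per frame.
def stops_for_frame(frame):
    return -5.0 + 10.0 * frame / 50.0
```

The point of applying the gain in scene-linear, before the rendering transform, is that it simulates a real exposure change on set rather than a display-side brightness adjustment.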
Whilst ZCAMishDRT’s approach to gamut compression helps here, by always favouring brightness over colour intensity, it can leave some images feeling like they desaturate too fast. The question there is: which matters more? The colour of the light, or the quantity of the light?
Currently, the area that looks the worst to me is ZCAMishDRT’s handling of very bright, but only moderately saturated, areas holding on to too much saturation as they approach the clipping point. This can most clearly be seen on the clips outside the Mercedes-Benz museum. Perhaps it’s going to need an explicit highlight desat tied to the tonemapping section, rather than the gamut volume compressor alone?
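One possible shape for such an explicit highlight desat — purely an illustrative sketch, with assumed names and an assumed linear blend, not a proposal for the actual DRT internals:

```python
import numpy as np

def highlight_desat(rgb, tonemapped_Y, start=0.8, clip=1.0):
    """Blend toward the achromatic axis as the tonemapped luminance
    approaches the clipping point. Below 'start', pixels pass through
    untouched; at 'clip', they are fully desaturated."""
    rgb = np.asarray(rgb, dtype=float)
    t = np.clip((tonemapped_Y - start) / (clip - start), 0.0, 1.0)
    gray = rgb.mean(axis=-1, keepdims=True)
    return rgb * (1.0 - t) + gray * t
```

Because the blend factor is driven by the tonemapped luminance rather than by gamut distance, it would directly target the bright-but-moderately-saturated regions described above, which the volume compressor alone leaves alone.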