Any chance you could describe what you mean by this in a little more depth?
You got 1. exactly right.
After that you convert to the target RGB gamut. Since we compressed to a larger space, you can still end up with colors outside the gamut, with RGB values less than zero or greater than one, so we need some way to map those values into the [0,1] range. Just clipping to that range works OK, but it will introduce some non-smoothness into your mapping. Instead you can use a curve like this to get a smooth transition near 0.0 and 1.0:
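Something like the following smooth clamp captures the idea (a sketch of the general technique, not the exact curve from the post; the transition width `w` is an assumption). Plain clipping has a derivative discontinuity at 0 and 1; here a quadratic blend keeps the mapping C1-continuous:

```python
def smooth_max0(x, w):
    # smooth version of max(x, 0.0): identity above w, zero below -w,
    # and a quadratic blend in between so the derivative is continuous
    if x >= w:
        return x
    if x <= -w:
        return 0.0
    return (x + w) ** 2 / (4.0 * w)

def soft_clamp(x, w=0.0125):
    # map a possibly out-of-range channel value into [0, 1] with a
    # smooth transition near 0.0 and 1.0 instead of a hard clip
    low = smooth_max0(x, w)
    return 1.0 - smooth_max0(1.0 - low, w)
```

Values well inside [w, 1-w] pass through unchanged, so only the region near the gamut boundary is affected; this is where the small flat areas mentioned below come from.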
It would be nice to try this with a tighter gamut approximation as well, so that it touches (or almost touches) most of the RGB gamut corners. That would make the flat areas smaller. I haven’t had time to do that yet though.
Based on the insights provided by @bottosson, this version adds a feature to apply smoothing to the edges of the target RGB cube used for finding the gamut boundary when compressing the colorfulness (M) correlate:
The result is a reduction in the color fringing that was previously observed by @jmgilbert on the Rec.2020 luminous spheres test image:
Since the smoothing comes at the expense of saturation at the edges and corners of the target gamut, the “limit” parameter of the compression function has been reduced to compensate.
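One way to picture the smoothing (my own sketch of the general idea, not the actual Blink implementation; the half-size and corner radius are assumed parameters): treat the target RGB cube as a rounded box when locating the gamut boundary. The rounded boundary still touches the face centres but pulls inside the cube at edges and corners, which is exactly why saturation is lost there and the “limit” parameter needed adjusting:

```python
import math

def rounded_cube_sdf(p, half=0.5, radius=0.1):
    # signed distance from point p (a 3-tuple in RGB space) to a unit
    # cube centred at (half, half, half) whose edges and corners are
    # rounded off with the given radius; negative inside, positive outside
    q = [abs(c - half) - (half - radius) for c in p]
    outside = math.sqrt(sum(max(c, 0.0) ** 2 for c in q))
    inside = min(max(q), 0.0)
    return outside + inside - radius
```

A face centre like (1, 0.5, 0.5) sits exactly on the smoothed boundary, while the corner (1, 1, 1) ends up outside it, so fully saturated corner colours get compressed.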
In addition, the option to toggle between projecting the lightness (J) correlate towards either the lightness of the gamut cusp or the SSTS mid-grey has been replaced by a slider that blends between the two.
As mentioned in the last WVG meeting I have now pushed version 10 of the ZCAM IzMh DRT to GitHub:
This version is now a pure Blink script node (the Blink code is available as a separate file as well) and runs significantly faster than the previous ones.
v10 also replaces the two-stage gamut compression approach with a single stage that compresses both the J and M correlates at the same time towards a focal point.
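Roughly, the single-stage idea can be sketched like this (my own reading of it, with a simple Reinhard-style rolloff standing in for whatever curve the DRT actually uses; `J_focus`, `threshold`, and the precomputed boundary distance are all assumptions here):

```python
import math

def reinhard(x, threshold=0.75):
    # identity below the threshold, then roll off asymptotically towards 1.0
    if x <= threshold:
        return x
    t = threshold
    return t + (x - t) / (1.0 + (x - t) / (1.0 - t))

def compress_JM(J, M, J_focus, boundary_dist, threshold=0.75):
    # move the sample along the ray from the focal point (J_focus, 0) in
    # the (J, M) plane, compressing its distance relative to the gamut
    # boundary distance along that same ray
    dJ, dM = J - J_focus, M
    d = math.hypot(dJ, dM)
    if d == 0.0:
        return J, M  # already at the achromatic focal point
    dn = d / boundary_dist  # 1.0 == exactly on the gamut boundary
    scale = reinhard(dn, threshold) / dn
    return J_focus + dJ * scale, dM * scale
```

Because J and M are scaled together, out-of-gamut colours move towards the focal point in lightness as they desaturate, rather than being compressed in M first and J second as in the two-stage version.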
I’ve also added some tooltips to the Nuke node and, hopefully, helpful comments to the code.
Jed has been working on a number of look tools, one of the more recent ones being ShadowContrast. OpenDRT is not intended to “look good” without an added LMT, so I thought it could be interesting to show ZCAM and OpenDRT + LMT side by side. The “look” consists simply of a ShadowContrast node with settings intended to approximate a tone scale similar to that of ACES.
I still don’t understand what “lit with ACEScg blue primary” really means. That aside …
If we say that the scene objects have the chromaticity of the ACEScg blue primary, why would one expect that to end up as being blue (e.g. rgb=[0,0,1]) in display code value space?
The chromaticity of the ACEScg “blue” primary has a hue that’s defined by JzAzBz in ZCAM. We can’t see the ACEScg blue, so we don’t know its hue. Assuming we believe ZCAM works (a big assumption), if one follows that hue line into a gamut that’s smaller than ACEScg and reproducible on a display, that’s probably the real hue of the ACEScg primary.
I’ve always thought of it the same way. Saying that a reproduction of something lit with an AP1 primary “looks wrong” is making a big assumption about what we think the AP1 primary actually “looks like”.
I would use a laser-like primary such as that of BT.2020, and then we can continue the discussion with everyone comfortable in the knowledge that this time the stimulus can be seen by the CIE 1931 2 Degree Standard Observer.
Just because the Standard Observer does not see it does not mean that you won’t. We are using a much-needed frontier, an average over a few observers, because it is simpler mathematically (who wants to carry probabilities around?), when in reality the boundary is statistically much fuzzier.
Should we instead use the probability that this particular stimulus is visible to a “probabilistic observer”, it could very well be high. I will compute that for the Asano Observers when I have spare cycles.
@ChrisBrejon: What about converting your image from BT.2020 to AP0? You haven’t rendered spectrally anyway, so it does not really matter. The result won’t be much different though, as you might rightly expect.
Sorry, you are conflating issues, in the same way your Asano diagram seductively creates the idea that all projections live on the CIE xy projection, as opposed to appreciating that the horizon for each observer there is singular and a closed domain.
The space is bounded. Suggesting that anything exists beyond the spectral locus for the observer is absolute nonsense and rubbish.
All we have is the standard observer model, and the moment we step outside it, all bets are off and we are into nonsense land. Can a standard observer, as per Asano et al., be calibrated for a specific observer? Absolutely. But suggesting that the spectral locus is somehow different, and meaningful beyond the locus, as opposed to being a psychophysical representation in each observer, is pure nonsense.
It’s a physical wall of visible electromagnetic radiation.
Trying to suggest that AP1 blue might be visible to some other observer is hilarious.
I was mistaken to use AP1 as the example in my previous post. It was a bad example.
The point I was trying to make is that there’s no reason to assume a primary in a larger RGB space should map to the primary of a smaller RGB space when converting between them. Further, the hue of the larger space’s primary may very well be reproduced by a mixture of red, green, and blue in the smaller space.
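As a concrete illustration (using BT.2020 and BT.709 rather than AP1, since both are fully specified by their xy primaries and a D65 white): building the normalized primary matrices from the chromaticities and converting the BT.2020 blue primary into BT.709 gives negative red and green and a blue above 1.0, i.e. nothing like the BT.709 blue primary:

```python
def inv3(m):
    # inverse of a 3x3 matrix via the adjugate
    (a, b, c), (d, e, f), (g, h, i) = m
    A, B, C = e * i - f * h, f * g - d * i, d * h - e * g
    det = a * A + b * B + c * C
    return [[A / det, (c * h - b * i) / det, (b * f - c * e) / det],
            [B / det, (a * i - c * g) / det, (c * d - a * f) / det],
            [C / det, (b * g - a * h) / det, (a * e - b * d) / det]]

def matvec(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def npm(primaries, white):
    # normalized primary matrix (RGB -> XYZ) from xy chromaticities
    P = [[x / y, 1.0, (1.0 - x - y) / y] for x, y in primaries]
    P = [[P[j][i] for j in range(3)] for i in range(3)]  # primaries as columns
    wx, wy = white
    W = [wx / wy, 1.0, (1.0 - wx - wy) / wy]
    S = matvec(inv3(P), W)  # scale primaries so RGB (1,1,1) hits the white point
    return [[P[i][j] * S[j] for j in range(3)] for i in range(3)]

BT2020 = [(0.708, 0.292), (0.170, 0.797), (0.131, 0.046)]
BT709 = [(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)]
D65 = (0.3127, 0.3290)

M = matmul(inv3(npm(BT709, D65)), npm(BT2020, D65))
blue = matvec(M, [0.0, 0.0, 1.0])  # BT.2020 blue expressed in BT.709
```

`blue` comes out around [-0.07, -0.01, 1.12]: to show that hue on a BT.709 display at all it has to be gamut-mapped, after which it is a mixture of all three channels, not [0, 0, 1].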
Yeah well, no. The CIE xyY projective transformation is valid for any observer. You can also design a transformation that maps an observer to another.
The space is indeed bounded but there are as many spaces as there are observers, for the same reason that there is a sensitivity space for every single camera.
Again, it is a wall for the particular observer you use; nothing says that a stimulus this observer cannot perceive will not be seen by another one. We have plenty of observers, e.g. individual observers, and even standardised ones, e.g. the CIE 2012 observer, proving that this is the case.
If you make the border probabilistic instead of an average/mean, you can come up with a probability that it can be seen by a distribution of observers. Put another way, we are only considering the central slice of the distribution of observers that served to build the Standard Observer.
You could then certainly find a value that is visible to some observer and that is mapped exactly where AP1 blue is located for the Standard Observer. We do that all the time with cameras and, surprise, many values are mapped outside the spectral locus!