ACES 2.0 CAM DRT Development

Hmm… I think that’s a bug. The inverse was clean in v27, I believe. The gamut mapper changes in v28 might be at fault.

The gamut mapper will have the biggest impact on the inverse anyway, so rather than tweaking the current gamut mapper (like I did in v28) it would be better to have a gamut-approximation-based implementation, and tweak that. I’m assuming the current implementation, which uses a LUT to find the cusp and an iterative approach to find the boundary, isn’t appropriate for the final implementation.

But the one that goes outside the spectral locus is the one with the gamut mapper enabled, which is what @alexfry showed. That makes sense, because in an inverse, the gamut compressor becomes a gamut expander.
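As a toy illustration of that last point (a simple Reinhard-style distance compression, not the actual DRT mapper; the threshold and limit values are arbitrary): the forward curve pulls distances beyond the threshold in towards the limit, and its exact inverse pushes them back out, diverging as the input approaches the limit.

```python
def compress(x, t=0.75, lim=1.2):
    # Toy Reinhard-style compression: distances above threshold t are
    # smoothly squeezed, asymptoting to lim as x grows (illustrative only).
    if x <= t:
        return x
    return t + (x - t) / (1.0 + (x - t) / (lim - t))

def expand(y, t=0.75, lim=1.2):
    # Exact inverse of compress(): values near the compressed boundary
    # map back far outside it, i.e. the compressor becomes an expander.
    if y <= t:
        return y
    return t + (y - t) / (1.0 - (y - t) / (lim - t))

print(compress(1.2))          # ~0.975: pulled inside
print(expand(compress(1.2)))  # 1.2: the round trip recovers the input
print(expand(0.999))          # ~1.31: pushed back out past the input value
```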

Actually I can’t reproduce this. Did you by any chance have RGC enabled?


The v27 version with the gamut mapper goes even further outside. Here’s v27 with and without the gamut mapper (as can be seen, v28 improves the inverse; v27 added more compression to deal with shadow clipping, but that was meant to be a temporary solution, which v28 now addresses):


No. Just v28, settings as per your repo in both directions.

I am inverting from Rec.709 / BT.1886. If I use a Rec.709 / linear inverse it appears to fill the cube, as you show. For inverting real-world display-referred imagery, I would say that the 2.4 gamma needs to be included in both directions.

EDIT: sRGB in both directions comes closer to filling the cube. I’m assuming that a small inaccuracy in inversion is exaggerated by the inverse 2.4 gamma, and less so by the linear portion at the bottom of the sRGB curve.
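A rough illustration of that effect near black (the 1e-4 residual is just an assumed round-trip error in linear light):

```python
eps = 1e-4  # assumed small linear-light residual after the round trip

# A pure 1/2.4 power encode has infinite slope at zero, so it
# exaggerates the residual dramatically:
print(eps ** (1 / 2.4))  # ~0.0215, over 2% of full scale

# The sRGB encoding is linear (12.92 * x) near black, so the same
# residual stays proportionally small:
print(12.92 * eps)       # ~0.0013
```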

Using the Python implementation with compress mode from @Thomas_Mansencal’s Colab (with a small tweak of the compress/decompress functions to trap for divide by zero when x = y = z; see the sketch after the results below) I see the same result with a BT.1886 → JMh → BT.1886 round trip. Using just the eight corners of the cube (i.e. primaries, secondaries, black and white), after the round trip I get:

[[[[-0.0000000000 0.0000000000 0.0000000000]
   [0.0365082030 0.0368818295 0.9995565545]]

  [[0.0397139681 0.9998362401 0.0348177461]
   [0.0150639031 1.0000119030 1.0000046571]]]


 [[[0.9992951472 0.0354635798 0.0291468397]
   [0.9999506298 0.0253964397 0.9999180909]]

  [[0.9998752347 0.9999066957 0.0383670810]
   [1.0000062083 1.0000225205 1.0000066041]]]]

0.0397139681 is 10-bit code value 41 (0.0397 × 1023 ≈ 40.6), which is quite a significant difference from zero.
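The divide-by-zero trap mentioned above could look something like this (a toy sketch of a compress-toward-achromatic step; the actual curve in the Colab differs, and the names here are illustrative):

```python
import numpy as np

def compress_toward_achromatic(xyz, strength=0.5):
    # Offsets from the achromatic axis are normalised by their radius R.
    # When x == y == z, R is exactly zero, so trap that case up front
    # rather than dividing 0/0.
    xyz = np.asarray(xyz, dtype=float)
    C = xyz.mean()
    d = xyz - C
    R = np.sqrt((d * d).sum())
    if R == 0.0:  # x = y = z: already achromatic, pass straight through
        return xyz
    Rc = strength * R  # stand-in for the real compression curve
    return C + d / R * Rc

print(compress_toward_achromatic([0.18, 0.18, 0.18]))  # unchanged, no 0/0
print(compress_toward_achromatic([0.4, 0.2, 0.1]))     # pulled toward the mean
```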


It does seem to be clipping of fully saturated colours, rather than distortion of the cube. If I use a cube of 0.1 and 0.9 values instead of 0 and 1, I get:

[[[[0.1000006208 0.1000022520 0.1000006604]
   [0.1001876482 0.0999380216 0.9000013549]]

  [[0.1000006208 0.9000151462 0.1000969808]
   [0.1001876482 0.9000121840 0.9000058020]]]


 [[[0.8999969470 0.1001772577 0.1000037330]
   [0.9000055875 0.1001131844 0.9000014967]]

  [[0.8999969470 0.9000232306 0.1001000493]
   [0.9000055875 0.9000202684 0.9000059437]]]]

It may not be exact, but it’s close enough to be down to calculation rounding errors. And the round-trip difference is less than one 12-bit code value (the largest deviation above is about 0.00019, or roughly 0.77 of a 12-bit code value).

I was also looking again at the Blink code: when did the spow function get changed from mirroring to clamping? Was that something @matthias.scharfenber did back in the ZCAM version? We need to be careful with things that were done to fix an issue which may no longer apply.

Clamping may make sense in many cases in a DRT (we don’t want erroneous “negative light” affecting the result), but might it be better to clamp pixels with negative luminance on input, and retain other negative values, which might simply be out-of-gamut colours?

I believe the spow() has been that way for a very long time. The commented-out code was added later, and never taken into use.

@Thomas_Mansencal’s Python implementation uses Colour’s built-in spow, which I believe defaults to mirroring.
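For reference, the difference between the two behaviours (a minimal sketch; the mirrored form matches what colour.algebra.spow() does by default):

```python
import numpy as np

def spow_mirrored(a, p):
    # Signed power: negative inputs are mirrored through the origin,
    # matching the default behaviour of colour.algebra.spow().
    a = np.asarray(a, dtype=float)
    return np.sign(a) * np.abs(a) ** p

def spow_clamped(a, p):
    # Clamped power: negative inputs are clipped to zero first, as in
    # the current Blink spow() discussed above.
    a = np.asarray(a, dtype=float)
    return np.maximum(a, 0.0) ** p

print(spow_mirrored(-0.18, 1 / 2.4))  # ~ -0.49: negatives preserved
print(spow_clamped(-0.18, 1 / 2.4))   # 0.0: negatives discarded
```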

I’ve done some testing on the path of the cusp, again using @Thomas_Mansencal’s Python. I’m not sure that my Python and Blink XYZ <-> JMh conversions give identical results, but this is a test of principle more than anything else.

[Plot: path of the Rec.709 cube cusp in JMh]

The path of the cusp (Rec.709 cube in this case) is very peculiarly shaped. I think we would need to accept a very crude match if we wanted to use a function to approximate it.

Although I suppose the six obvious cusps in the paths are the primaries and secondaries, so perhaps just joining those six points with suitable curves might be reasonable. The path of the cusp in J could even be fairly reasonably approximated with six straight lines.


I think a smooth approximation of that M path is not impossible.

As far as the cusp J goes, we might not even need that. If you look at the current mapper, it doesn’t really use the cusp J directly. The focus point J is always a blend of middle gray and the cusp J. And that choice of blend is entirely arbitrary (it’s set to 0.5 at the moment, halfway between middle gray and the cusp J). So we could just have a smooth curve that directly gives us a J we like.
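In other words, something like this (names are illustrative; 0.5 matches the current blend):

```python
def focus_J(J_cusp, J_mid_gray, blend=0.5):
    # Focus point lightness: a lerp between middle gray J and cusp J.
    # blend = 0.0 -> middle gray, 1.0 -> cusp J; fixed at 0.5 today.
    return (1.0 - blend) * J_mid_gray + blend * J_cusp
```

A smooth curve J(h) could replace this lerp entirely, or the blend could itself be made hue-dependent.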

When I was testing this, I found that having the blend closer to the cusp J for secondary colors and closer to middle gray for primary colors produced the best mapping to my eye. The current 0.5 blend darkens yellows and cyans maybe a bit too much, as their cusp J is very high, as can be seen from the plot. Middle gray sits below the cusp J of every hue.

But can “picking a curve we like” be generalised to all display targets? Or do you think the same J curve could be applied to all, just scaled with peak white?

I don’t know if it could be scaled. But the cusp J doesn’t have to be precise, so even if we still want to do a blend (lerp) between middle gray and the cusp J, an approximation should be sufficient.

But overall I assume we’ll need per-gamut curves.

That was my assumption too. In which case we need a documentable method for creating a curve for an arbitrary gamut. That can’t really be “pick one that looks nice to you”!

I’ve pushed the code to generate the cusp plots to my repo.

I need to investigate why I am not getting identical results from the Blink and Python XYZ to JMh conversions, as the former was derived from the latter. The Python does generate NaNs with some input, so I’m guessing the functions were modified in the Blink to prevent those, and perhaps this is the source of the difference.

My mistake. I was using “Average” surround in the Python, thinking that was “the middle one” to match the Blink. When I use “Dim” they match.

Changing to Dim obviously alters the plots, but the general shape is still the same.

Notice also that the implementation in the DRT is using different primaries than stock Hellwig.

I’m still using the stock matrix for now, but I have tracked down some errors in my boundary search and some accidental quantisation, and I now have smooth animated plots without glitches, and a path plot which does indeed pass through all the primary and secondary colours.

[Plot: cusp path with primaries and secondaries marked]

It looks like the kink in the bottom part of the cusp “triangle” was an error in my code. It actually does look quite close to a simple triangle now.

The LUT repo has been updated to reflect the work done on v028.


I now have a plot of the cusp path with the modified LMS matrix from v28.

Still broadly the same shape. And when I plot it for P3-D65, the shape is similar too.

I wonder what a gamut mapper would look like that was based simply on the path of the cusp, with straight lines to black and white. The data from my plot could simply be declared as two 360-entry arrays in the code, removing the need for an iterative solve.
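A sketch of what that could look like, with placeholder tables (the real values would come from the plotted data):

```python
import numpy as np

# One (J, M) cusp sample per degree of hue, filled from the plot data.
CUSP_J = np.zeros(360)  # placeholder values
CUSP_M = np.zeros(360)  # placeholder values

def cusp_at_hue(h):
    # Linearly interpolate between the two nearest whole-degree samples,
    # wrapping at 360 degrees -- no iterative boundary solve needed.
    h = h % 360.0
    i = int(h)
    j = (i + 1) % 360
    t = h - i
    J = (1.0 - t) * CUSP_J[i] + t * CUSP_J[j]
    M = (1.0 - t) * CUSP_M[i] + t * CUSP_M[j]
    return J, M
```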

In the last meeting the eccentricity factor in Hellwig came up, and @luke.hellwig thought that it doesn’t affect things much with the gamut mapper. So I quickly tested the model without it (set to 1.0), and indeed the effect is minimal. A small adjustment to the chroma compression made images a pretty much exact match to the model with the factor. The biggest impact is on the yellow/blue axis. Highly saturated blue gets a little lighter. Yellow compression is reduced, giving more saturated yellow and orange, which I think is good (and helps with the inverse too).
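For reference, this is the hue-dependent factor in question; setting it to a constant 1.0 is equivalent to removing it (a sketch assuming the CAM16-style formulation of e_t, which the DRT may apply slightly differently):

```python
import numpy as np

def eccentricity(h_deg, enabled=True):
    # CAM16/Hellwig-style eccentricity factor e_t as a function of hue.
    # Disabling it (returning 1.0) removes the hue-dependent weighting.
    if not enabled:
        return 1.0
    return 0.25 * (np.cos(np.radians(h_deg) + 2.0) + 3.8)
```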

So I’m thinking that the eccentricity factor could be removed from the model to simplify it, and to improve the inverse. Then, as necessary, the hue-dependent chroma compression could be changed to adjust things. I could show example images, but there’s really no visible difference between the images with and without it; the effect is that small.
