ACES 2.0 CAM DRT Development

I was not thinking about archival purposes, point taken. You and Anton are right, this is a whole different can of worms.

Out of curiosity, why are a 1D LUT and a 3x3 matrix not enough for IDTs? This approach satisfies the “linearity” condition.
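
For reference, a minimal sketch of the approach being asked about: a per-channel 1D LUT to linearise the camera encoding, followed by a single 3x3 matrix into AP0. The LUT shape and the matrix values below are placeholders, not any real camera's IDT.

```python
import numpy as np

# Placeholder 1D LUT: code value -> linear light (per channel)
lut_in = np.linspace(0.0, 1.0, 1024)
lut_out = lut_in ** 2.4                           # stand-in linearisation curve

# Placeholder camera-RGB -> AP0 matrix (illustrative values only)
M_cam_to_ap0 = np.array([
    [0.65, 0.25, 0.10],
    [0.05, 0.90, 0.05],
    [0.02, 0.08, 0.90],
])

def idt(rgb):
    rgb = np.asarray(rgb, dtype=float)
    linear = np.interp(rgb, lut_in, lut_out)      # 1D LUT, applied per channel
    return M_cam_to_ap0 @ linear                  # single 3x3 matrix to AP0

print(idt([0.18, 0.5, 0.9]))
```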

In my mind (and the minds of a lot of my colleagues) ACES is an established color-management system with its own set of pros and cons. There is some mental inertia which we should overcome to pivot it to a meta-framework. It is a very interesting challenge, especially because some of the concepts should be defined from the ground up, but I wonder…

What specific problem(s) does the meta-framework solve? I don’t see a lack of innovation in the industry, but I may be missing something.

P.S. I think that the latter part of the discussion can be continued in the framework thread.

Thanks for taking the star image through another test.

I am not really sure if this kind of image is a good method to test a DRT in the first place, but it is a simple test image at least. And I am pretty sure that if someone in AE with the new ACES configs were looking for an intense blue, why wouldn’t they just use 0/0/1?


I noticed that not only has the blue changed (though there is still the kink in the plot), but the yellow has also changed slightly. It now goes more straight to the gamut boundary?


Why is this kink in the plots only happening for blue and not for the other primaries and secondaries?

The gamut mapper is what creates the curvature in chromaticity space as it compresses along the perceptual hue lines. It’s not just the blue; the green and red stars show curving as well. There’s also clipping (which further skews things), as the gamut mapping result is not quite exact to the boundary.
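
To illustrate why compressing along a perceptual hue line shows up as curvature in a chromaticity plot, here is a small stand-alone sketch. It is not the CAM DRT's JMh model; it uses CIELAB LCh purely as a stand-in perceptual space, and the starting colour values are arbitrary.

```python
import numpy as np

D65 = np.array([0.95047, 1.0, 1.08883])  # D65 white, 2-degree observer

def lab_to_xyz(L, a, b, white=D65):
    def f_inv(t):
        d = 6.0 / 29.0
        return np.where(t > d, t ** 3, 3 * d * d * (t - 4.0 / 29.0))
    fy = (L + 16.0) / 116.0
    fx = fy + a / 500.0
    fz = fy - b / 200.0
    return white * np.array([f_inv(fx), f_inv(fy), f_inv(fz)])

def xyz_to_xy(xyz):
    return xyz[:2] / xyz.sum()

# Compress chroma C towards neutral while holding the CIELAB hue constant,
# then look at where the samples land in CIE xy.
L, C, h = 40.0, 120.0, np.radians(300.0)   # arbitrary saturated blue-ish start
for k in np.linspace(1.0, 0.2, 5):
    a, b = k * C * np.cos(h), k * C * np.sin(h)
    x, y = xyz_to_xy(lab_to_xyz(L, a, b))
    print(f"C scale {k:.1f} -> xy = ({x:.4f}, {y:.4f})")
# The printed xy points generally do not lie on a straight line: a straight,
# constant-hue path in a perceptual space becomes a curve in the xy projection.
```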

2 Likes

Thanks, that makes the issue very clear.

I would urge caution here.

We know from spectral paint mixing that those sorts of clefts are remarkably common, and they don’t manifest as “looking pooched”.

For example, from Evans’ An Introduction to Color [1]:

Here’s from Patton’s Pigment Handbook Volume 3:

We should be careful about inferring meaningfulness from normalized signal projections such as the Standard Observer CIE xy projection. Whether something “looks pooched” cannot be determined from an examination of the colourimetric CIE xy projection alone.

Surely we have a vast enough amount of evidence in this thread alone to disprove the idea that there is a 1:1 stimulus to cognition Cartesian-like model that can even remotely generate “perceptual hue lines”? No? Not yet? Folks are still clinging to this absolute rubbish?


[1] Evans, Ralph M. An Introduction to Color. New York: Wiley, 1948.

2 Likes

Incidentally, the parameters in that LMT default to a hue centre of 288, which was a value I found worked well visually for me, but I was only looking at the Rec.709 output. The hue value which lines up with the AP1 blue primary is 250.

UPDATE: I have just pushed a commit changing the default hue centre to 250 and adding a DCTL version.

Hi Nick,
I won’t post another plot, but the lighter “ring” inside the blue star gets more prominent with the second fix. With the first fix it was less prominent.

Is that centred on 288 or 250? The original node version was 250, and then v2 was 288 until I just changed it.

It’s getting confusing for me :slight_smile:
Your first non-Blink gizmo looks identical to the latest version with Blink for me (250).

The one I tried in the meantime was the one with 288 and this made the “ring” more visible again.

Sorry for not being clear. The original (built from multiple nodes) version was centred on 250. When I built the second (v2) pure Blink version I originally set it to default to 288 (although it is obviously a user parameter). Yesterday when I pushed the DCTL version I set that to default to 250, matching* the original version, but I also changed v2 to default to 250 as well.

* When I say “matching” they won’t match exactly, because although the original version was centred on 250, it uses spline curves in a Nuke ColorLookup node, where the pure Blink and DCTL versions use a “bump function” with similar but not identical shape.
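
For anyone wondering what is meant by a “bump function” here, the sketch below shows one common form of a smooth hue window: 1 at the centre, falling smoothly to 0 at the edges. It is not the exact function used in the Blink/DCTL LMT, and the centre/width values are purely illustrative.

```python
import numpy as np

def hue_bump(h, centre=250.0, width=90.0):
    # shortest signed angular distance from the centre, in degrees
    d = (h - centre + 180.0) % 360.0 - 180.0
    x = np.clip(np.abs(d) / (width * 0.5), 0.0, 1.0)
    # smoothstep-style falloff: 1 at the centre, 0 at the edge of the window
    return 1.0 - x * x * (3.0 - 2.0 * x)

hues = np.array([160.0, 205.0, 250.0, 295.0, 340.0])
print(hue_bump(hues))   # peaks at 250, fades to 0 by +/- 45 degrees
```

A weighting like this can then scale a hue-local adjustment so it only affects colours near the chosen hue centre, which is the same general idea as the spline lookup in the original node version.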

1 Like

I’ve tested Nick Shaw’s DCTL version of v55:

  • The noise is more noticeable compared to v52

  • Some colours still appear too bright in SDR (for example yellow and magenta in ACES OT VWG Sample Frame - 0029). I had “Encode as BT.2100 PQ” checked to compare SDR and HDR on the same monitor.

My theory is the following:

The compression of lightness J is much stronger in SDR than in HDR. Maybe some colours that tend to be bright (yellow, orange, magenta, cyan) need some additional compression of lightness J.

There is also the possibility that my monitor is just deceiving me.

Looking again at the chromaticity plot from @TooDee’s star image, it seems to me that the sharpness of the “double back” in the plot is a result of two things.

Firstly, and probably most importantly, it is not actually as sharp as it looks, because we are viewing a 2D projection of a line which in 3D space actually curves down and back in.

Secondly, the corner is sharpened by the fact that, in the blue corner, the blue primaries of the limiting Rec.709 gamut and the reach AP1 gamut are very close to each other. The compression curve uses threshold and limit parameters calculated as:

limit = \frac{M_{reach}}{M_{boundary}}
threshold = \frac{1}{limit}

This means that when the reach and boundary M values are close, as they are in the blue corner, then limit is only just above 1.0 and threshold is only just below it, resulting in quite a sharp transition into compression.
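
As a rough illustration of that, here is a sketch using the threshold/limit relations above. The compression curve is a generic asymptotic stand-in, not the DRT's actual curve, and the M_reach/M_boundary ratios are made up purely to contrast a "tight" corner like blue with a roomier one.

```python
import numpy as np

def params(m_reach_over_boundary):
    limit = m_reach_over_boundary        # limit = M_reach / M_boundary
    threshold = 1.0 / limit              # threshold = 1 / limit
    return threshold, limit

def compress(x, threshold, limit):
    # identity below the threshold, asymptotic approach to `limit` above it
    x = np.asarray(x, dtype=float)
    span = limit - threshold
    over = np.maximum(x - threshold, 0.0)
    return np.where(x <= threshold, x,
                    threshold + span * (1.0 - np.exp(-over / span)))

x = np.array([0.9, 1.0, 1.2, 2.0])       # distances relative to the boundary
for name, ratio in [("blue-like corner", 1.05), ("roomier corner", 1.6)]:
    thr, lim = params(ratio)
    print(f"{name}: threshold={thr:.3f} limit={lim:.3f} ->",
          np.round(compress(x, thr, lim), 3))
```

With the made-up 1.05 ratio, everything stays untouched until about 0.95 and is then squeezed into the narrow band up to 1.05, so the transition into compression is abrupt; with the 1.6 ratio the curve starts much earlier and bends far more gently.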

I remember @priikone added this varying threshold to address a particular issue, but I cannot remember what that issue was. In any case, because other things have changed since then, it may be worth revisiting it to see if the issue it solved still exists, and whether its negative impact in the blue corner is a good tradeoff against its benefits elsewhere.

2 Likes

The issue was that if the limit is close to 1.0 and the threshold is constant (like 0.75), it wouldn’t compress much at all. It was causing artifacting in HDR as it was just clipping rather than compressing. So even a small amount of compression fixed the artifacts (I think we tried to look at some of those artifacts in one of the meetings also).

Ah yes. I remember now.

Presumably the artefacts are caused by the imprecision of the gamut approximation, followed by final clipping. In theory, if things were precise, “not compressing much” would be the right thing to do, as only a very small amount of compression would be needed to bring the AP1 boundary to the limiting gamut boundary, since they are already close.

Or maybe moving the threshold is about affecting the slope of the compression curve as it hits 1.0.

I just opened a new pull request #42 for @alexfry for CAM DRT v057, also available from my repo.

This version fixes the white point used in the computation of various tables, especially the reach table and the AP1 gamut cusp table. The white point for them is now the ACES white rather than D65.

As a result the inverse improved, so this version now reduces the cusp smoothing value and smoothing factors without negatively impacting the inverse. The end result is slightly less clipping in the forward direction. There’s not really a visible difference to v056.

@TooDee’s stars chromaticities v057 vs v056 in sRGB output:

v057:


v056:

BT.1886 inverse v057 vs v056:

v057:


v056:

1 Like

Following @priikone’s suggestion in last night’s meeting, I’ve added optional Björn Compress Mode to the Nuke and DCTL implementations of my LMT.

They currently still use the LMS matrix from v55, but since with Björn compression they will be using a JMh space which doesn’t match the one in the DRT anyway, the exact LMS matrix didn’t seem important.

The compress mode certainly seems to reduce the darkening of the inner area of the blue star.

I also noticed that we could reduce the lower hull gamma from 1.14 to 1.139, which reduces clipping even more without impacting the inverse. So maybe in the next version.

After yesterday’s meeting I went over the Hellwig model again, and I was able to simplify the color model itself to just a few nodes. I am getting the same values as v55 in DCTL.

I wanted to do this to understand the behaviors. If you look at the nodes, the color model can be explained the following way (a rough code sketch of these steps follows the list below):

  1. convert to XYZ

  2. convert to the LMS space which is technically just an RGB space with a red primary slightly outside XYZ, a green primary so far outside that it breaks the xyY calculation, and a blue primary slightly outside XYZ.

  3. re-normalize achromatic using gains

  4. RGB values are transformed using a sigmoid function.
    The original model likely only dealt with values between 0 and 1, so this is not a huge problem for that case. We can achieve a similar effect with a simple power function plus some gain, although this gain gets canceled later on in step 7.
    This step, like any adjustment made to individual color channels, is what causes hue shifts. It breaks for values outside the RGB range, and while the DRT helps a bit, it doesn’t fully address the issue.
    Similar to other per-channel adjustments, this step pushes colors towards the primaries of the LMS/RGB color space it is applied in. When compared to common display standards (P3, Rec.709, Rec.2020), the color space used here has more magenta in the blue and red channels, and more cyan in the green channel. This significantly shifts colors towards magenta and cyan, which is the opposite of aesthetically pleasing results; it is not as noticeable because the rest of the DRT is not per-channel.
    Another drawback of this step, like with other per-channel adjustments, is a slight tendency to push colors towards achromatic.

  5. Rotate the cube to be upright, and scale the three channels so they are better spread out.

  6. convert to a cylindrical coordinate space.

  7. scale the achromatic axis so that J = 1 corresponds to luminance 1 for achromatic colors; the rest is still scaled differently.
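
Here is the rough code sketch mentioned above, following the same step numbering. It is a simplified stand-in rather than the Hellwig model as implemented in CAM DRT v55: the CAT16-style matrix and the opponent-axis weights are assumptions on my part, and the per-channel sigmoid is replaced by the simple power function mentioned in step 4.

```python
import numpy as np

# Steps 1-2: XYZ to an LMS-like "RGB" space via a 3x3 matrix (CAT16 assumed here)
M_CAT16 = np.array([
    [ 0.401288, 0.650173, -0.051461],
    [-0.250268, 1.204414,  0.045854],
    [-0.002079, 0.048952,  0.953127],
])

def xyz_to_jmh_like(xyz, white_xyz):
    lms       = M_CAT16 @ np.asarray(xyz, dtype=float)
    lms_white = M_CAT16 @ np.asarray(white_xyz, dtype=float)

    # Step 3: re-normalise so the white point is achromatic (per-channel gains)
    lms = lms / lms_white

    # Step 4: per-channel compressive non-linearity (power-function stand-in
    # for the model's sigmoid); this is the per-channel step that skews hues
    lms_c = np.sign(lms) * np.abs(lms) ** 0.42

    # Step 5: rotate so the achromatic axis is "upright" by forming an
    # achromatic signal and two opponent axes (CAM16-style weights assumed)
    A = 2.0 * lms_c[0] + lms_c[1] + 0.05 * lms_c[2]          # achromatic
    a = lms_c[0] - 12.0 * lms_c[1] / 11.0 + lms_c[2] / 11.0  # red-green
    b = (lms_c[0] + lms_c[1] - 2.0 * lms_c[2]) / 9.0         # yellow-blue

    # Step 6: cylindrical coordinates
    h = np.degrees(np.arctan2(b, a)) % 360.0
    M = np.hypot(a, b)

    # Step 7: scale the achromatic signal so that white maps to J = 1
    A_white = 2.0 + 1.0 + 0.05       # white is (1, 1, 1) after steps 3-4
    J = A / A_white
    return J, M, h

white = np.array([0.9505, 1.0, 1.089])                 # D65-ish white
print(xyz_to_jmh_like(white, white))                   # J ~ 1, M ~ 0 for white
print(xyz_to_jmh_like([0.18 * w for w in white], white))
```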

hellwigModel.nk (35.7 KB)

9 Likes

Just pushed a set of v57 DCTLs to my repo

That is an excellent analysis, @jpzambrano9. Well done! I am pretty sure many people on this forum will appreciate this deconstruction of the model.

I could not help but think that per-channel was behind this behavior when I heard Pekka’s question in yesterday’s meeting (around 30:00).

Why is it that with this transform the skew is always towards the secondaries?

If I recall correctly, the first person who explained this behavior to me was Troy, and even if I did not understand all the maths behind it, I put it in my article since it seemed relevant.

Just for fun, I will share here a meme:

Hope you don’t mind the cheeky Scooby-Doo twist :wink:

Happy Easter, everyone!

6 Likes