ACES 2.0 CAM DRT Development

If I understand it correctly, this would basically be a set of rules for building your own workflow while remaining compatible with ACES via some metadata mechanism?

What follows isn’t really a reply to your proposal, but it reminded me of a message I wrote in a Discord discussion.

And regarding the DRT, I’d prefer something like this: a standard S-curve with an option to adjust contrast, but an overall design that makes it easy to swap the path-to-white / “hue preserving” / per-channel modules.
That way there is no waiting for major updates, just different modules loaded into the DRT that provide different qualities. If someone needs “hue preserving”, they just download it or build it themselves.
A sort of simple-to-use constructor: not a choice between different DRTs, but a standard system whose qualities can be changed or modified via files containing code, or a 3D table, responsible for a particular part of the rendering.
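
To make the “constructor” idea a bit more concrete, here is a minimal sketch (Python, purely illustrative) of a DRT assembled from swappable modules. The function names and the toy tonescale are hypothetical placeholders, not anything from an actual DRT.

```python
import numpy as np

def standard_s_curve(rgb, contrast=1.2):
    """Toy tonescale standing in for the 'standard S-curve with a contrast option'."""
    rgb = np.maximum(rgb, 0.0)
    return rgb**contrast / (rgb**contrast + 1.0)

def per_channel_module(rgb, tonescale):
    """One possible rendering module: apply the tonescale per channel."""
    return tonescale(rgb)

def assemble_drt(tonescale, render_module):
    """Build a display rendering from whichever modules the user has loaded."""
    return lambda rgb: render_module(rgb, tonescale)

# Swapping render_module changes the rendering behaviour without a new DRT release.
drt = assemble_drt(standard_s_curve, per_channel_module)
print(drt(np.array([0.18, 1.0, 4.0])))
```

A “hue preserving” or path-to-white module could then be dropped in as another `render_module` implementing the same interface.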

I wouldn’t focus too much on my specific LMT. I am not proposing it as the ultimate solution. It was just a quick experiment into the concept of a JMh based LMT.

My real point is that a range of LMTs are possible, for various technical or creative purposes, and a DRT which does not limit the scope of what those LMTs can do allows the widest range of possibilities.

And ARRI Reveal doesn’t always reach the corners, whereas CAM DRT has that as a requirement we’ve chosen to have. Here’s an example sweep for v056 and ARRI Reveal:

v056:


ARRI Reveal:


Here is an updated version of my LMT, written as a single Blink kernel, rather than built from multiple DRT nodes in diagnostic modes. It also now includes parametric control of the hue to be compressed, rather than using a curve lookup, so it is easier to try the effect on different hues.

Well, if you want to create an archive master without a DRT, this does not work. Also, how do you treat graphics for different versions (SDR vs. HDR)? If it were simple, it would have been done :wink:

OCIO is a colour management system; it is not the same as defining a meta-framework. For example, IDTs are only expressed as 1D LUT and Matrix in OCIO, which is unacceptable for a meta-framework. Creating a framework would actually be quite an undertaking in practice. This working group has not even started defining the rest of the pipeline (Viewing Conditions, White Point Handling, EOTFs etc…). It is still a lot of work…

It is never too late.

I think this would only move the issue a step further up the abstraction layer. People would want a different algorithm at some point, and the system would become so complex that it would be hard to control.
The only way I have found that satisfies all use cases is to make the DRT a swappable building block.

I was not thinking about archival purposes, point taken. You and Anton are right, this is a whole different can of worms.

Out of curiosity, why are a 1D LUT and a 3x3 matrix not enough for IDTs? This approach satisfies the “linearity” condition.
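
For context, here is a minimal sketch of what such a 1D-LUT-plus-3x3-matrix IDT amounts to; the LUT and the matrix below are placeholders, not a real camera characterisation.

```python
import numpy as np

# Placeholder 1D linearisation LUT (encoded code value -> scene-linear) and 3x3 matrix.
lut_in = np.linspace(0.0, 1.0, 1024)
lut_out = lut_in**2.2
camera_to_working = np.eye(3)

def idt(rgb_encoded):
    """Apply the 1D LUT per channel, then the 3x3 matrix into the working space."""
    linear = np.interp(rgb_encoded, lut_in, lut_out)
    return linear @ camera_to_working.T

print(idt(np.array([0.5, 0.25, 0.75])))
```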

In my mind (and the minds of many of my colleagues) ACES is an established color-management system with its own set of pros and cons. There is some mental inertia we would need to overcome to pivot it to a meta-framework. It is a very interesting challenge, especially because some of the concepts would have to be defined from the ground up, but I wonder…

What specific problem(s) does a meta-framework solve? I don’t see a lack of innovation in the industry, but I may be missing something.

P.S. I think that the latter part of the discussion can be continued in the framework thread.

Thanks for taking the star image through another test.

I am not really sure this kind of image is a good method for testing a DRT in the first place, but it is a simple test image at least. And I am pretty sure that if someone in AE with the new ACES configs were looking for an intense blue, why wouldn’t they just use 0/0/1?


I noticed that not only has the blue changed (though there is still the kink in the plot), but the yellow has also changed slightly. It now goes more straight to the gamut boundary?


Why is this kink in the plots only happening for blue and not for the other primaries and secondaries?

The gamut mapper is what creates the curvature in chromaticity space as it compresses along the perceptual hue lines. It’s not just the blue; the green and red stars show curving as well. There’s also clipping (which further skews things), as the gamut mapping result is not exactly on the boundary.

Thanks, that makes the issue very clear.

I would urge caution here.

We know from spectral paint mixing that those sorts of clefts are remarkably common, and they don’t manifest as “looking pooched”.

For example, from Evans’ An Introduction to Color¹:

Here’s one from Patton’s Pigment Handbook, Volume 3:

We should be careful about inferring meaningfulness from normalized signal projections such as the Standard Observer CIE xy projection. The “hey, that looks pooched” cannot be determined from an examination of the colourimetric CIE xy projection alone.

Surely we have a vast enough amount of evidence in this thread alone to disprove the idea that there is a 1:1 stimulus to cognition Cartesian-like model that can even remotely generate “perceptual hue lines”? No? Not yet? Folks are still clinging to this absolute rubbish?


¹ Evans, Ralph M. An Introduction to Color. New York: Wiley, 1948.

Incidentally, the parameters in that LMT default to a hue centre of 288, which was a value I found worked well visually for me. But I was only looking at the Rec.709 output. The hue value which lines up with the AP1 blue primary is 250.

UPDATE: I have just pushed a commit changing the default hue centre to 250 and adding a DCTL version.

Hi Nick,
I won’t post another plot, but the lighter “ring” inside the blue star gets more prominent with the second fix. With the first fix it was less prominent.

Is that centred on 288 or 250? The original node version was 250, and then v2 was 288 until I just changed it.

It’s getting confusing for me :slight_smile:
Your first non-Blink gizmo looks identical to the latest Blink version for me (250).

The one I tried in the meantime was the 288 one, and that made the “ring” more visible again.

Sorry for not being clear. The original (built from multiple nodes) version was centred on 250. When I built the second (v2) pure Blink version I originally set it to default to 288 (although it is obviously a user parameter). Yesterday when I pushed the DCTL version I set that to default to 250, matching* the original version, but I also changed v2 to default to 250 as well.

* When I say “matching”, they won’t match exactly, because although the original version was centred on 250, it uses spline curves in a Nuke ColorLookup node, whereas the pure Blink and DCTL versions use a “bump function” with a similar but not identical shape.
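
For anyone wanting to experiment, here is a rough sketch of the kind of hue-windowed weight being described: a smooth bump centred on a chosen hue (250 by default here) that falls to zero at a given width. This uses a generic raised-cosine window, which is only an assumption; the actual bump function in the Blink and DCTL versions may have a different shape.

```python
import numpy as np

def hue_bump(h, centre=250.0, width=60.0):
    """Weight in [0, 1], peaking at `centre` and falling to 0 at `centre ± width` (hue in degrees)."""
    d = np.abs(((h - centre) + 180.0) % 360.0 - 180.0)  # shortest angular distance to the centre
    return np.where(d < width, 0.5 * (1.0 + np.cos(np.pi * d / width)), 0.0)

print(hue_bump(np.array([250.0, 280.0, 310.0, 70.0])))  # -> [1.0, 0.5, 0.0, 0.0]
```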

I’ve tested Nick Shaw’s DCTL version of v55:

  • The noise is more noticeable compared to v52

  • Some colours still appear too bright in SDR (for example yellow and magenta in ACES OT VWG Sample Frame - 0029). I had “Encode as BT.2100 PQ” checked to compare SDR and HDR on the same monitor.

My theory is the following:

The compression of lightness J is much stronger in SDR than in HDR. Maybe some colours that tend to be bright (yellow, orange, magenta, cyan) need some additional compression of lightness J.

There is also the possibility that my monitor is just deceiving me.

Looking again at the chromaticity plot from @TooDee’s star image, it seems to me that the sharpness of the “double back” in the plot is a result of two things.

Firstly, and probably most importantly, it is not actually as sharp as it looks, because we are viewing a 2D projection of a line which in 3D space actually curves down and back in.

Secondly, the corner is sharpened by the fact that in the blue corner the blue primary of the limiting Rec.709 gamut and that of the reach AP1 gamut are very close to each other. The compression curve uses threshold and limit parameters calculated as:

$$limit = \frac{M_{reach}}{M_{boundary}}$$
$$threshold = \frac{1}{limit}$$

This means that when the reach and boundary M values are close, as they are in the blue corner, then limit is only just above 1.0 and threshold is only just below it, resulting in quite a sharp transition into compression.
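
A small numeric illustration of that point (the M values below are made up purely for illustration): when the reach M and the boundary M are close, as in the blue corner, the zone between “no compression” (below threshold) and the limit becomes very narrow, which is what makes the knee into compression sharp.

```python
for label, M_reach, M_boundary in [("far apart", 1.6, 1.0), ("blue corner (close)", 1.05, 1.0)]:
    limit = M_reach / M_boundary
    threshold = 1.0 / limit
    print(f"{label:20s} limit = {limit:.3f}  threshold = {threshold:.3f}  "
          f"width = {limit - threshold:.3f}")
```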

I remember @priikone added this varying threshold to address a particular issue, but I cannot remember what that issue was. In any case, because other things have changed since then, it may be worth revisiting it to see if the issue it solved still exists, and whether its negative impact in the blue corner is a good tradeoff against its benefits elsewhere.

The issue was that if the limit is close to 1.0 and the threshold is constant (like 0.75), it wouldn’t compress much at all. It was causing artifacting in HDR, as it was just clipping rather than compressing. So even a small amount of compression fixed the artifacts (I think we also tried to look at some of those artifacts in one of the meetings).

Ah yes. I remember now.

Presumably the artefacts are caused by the imprecision of the gamut approximation, followed by the final clipping. In theory, if things were precise, “not compressing much” would be the right thing to do, as only a very small amount of compression would be needed to bring the AP1 boundary to the limiting gamut boundary, because they are already close.

Or maybe moving the threshold is about affecting the slope of the compression curve as it hits 1.0.

I just opened a new pull request #42 for @alexfry for CAM DRT v057, also available from my repo.

This version fixes the white point used in the computation of various tables, especially the reach table and the AP1 gamut cusp table. The white point for them is now the ACES white rather than D65.

As a result, the inverse improved, so this version now reduces the cusp smoothing value and smoothing factors without negatively impacting the inverse. The end result is slightly less clipping in the forward direction. There’s not really a visible difference from v056.

@TooDee’s stars chromaticities v057 vs v056 in sRGB output:

v057:


v056:

BT.1886 inverse v057 vs v056:

v057:


v056:
