ACES 2.0 CAM DRT Development

While working on simplifying the chroma compression I have come across a couple of potential issues.

1. R=G=B doesn’t produce M = 0

I noticed that when the input image is just a grey ramp, M is a small positive number. And since the model increases colorfulness as J goes higher, that small M value also keeps increasing. Given that the DRT uses the “discount illuminant” and internally works in Illuminant E, shouldn’t M be zero in this case? The DRT does not output the grey ramp with a color cast, but within JMh, M does appear to carry colorfulness.


And here is a plot of M (multiplied by 1000) for that input image (missing pixels were either 0.0 or very small values):

2. Display white is above 1.0

This is an old issue I noticed back in the ZCAM DRT days. It seems that display white comes out slightly above 1.0: it’s 1.0015 with the above ACEScct (0.0–1.0) ramp image. J coming out of the tonescale is above 100.0 (with the 100 nit curve). limitJmax is also slightly above 100.0 (limitJmax is the J value that inverts to display 1.0). I’m wondering if the DRT is using the wrong multiplier somewhere (100 vs limitJmax), or whether limitJmax is actually correct. Or is this not a real issue?
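A toy illustration of the normalisation question above: if the tonescale peaks at limitJmax (slightly above 100, as observed) and the output stage divides by 100 instead of limitJmax, display white overshoots 1.0. The value of limitJmax below is hypothetical, purely to show the mechanism.

```python
# Hypothetical limitJmax slightly above 100, as observed in the post.
limitJmax = 100.8          # J that inverts to display 1.0 (hypothetical value)
J_white = limitJmax        # tonescale output for input white

display_with_100 = J_white / 100.0        # suspected wrong multiplier -> > 1.0
display_with_limit = J_white / limitJmax  # normalising by limitJmax -> exactly 1.0

assert display_with_100 > 1.0
assert display_with_limit == 1.0
```

If the forward path really does divide by a constant 100 somewhere, a 1.0015 white would be consistent with a limitJmax of about 100.15.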

I’m not sure why all YCbCr systems are suggested as doomed? All YCC spaces, as best as I understand them, abide quite strictly by Grassmann additivity principles. The goofy “CAM” models do not.

They don’t do this, as this is an already solved problem.

We have a pretty good idea how basic global frame relations work, as outlined by MacAdam way back in 1938 [1]. I’d like to think that we can agree that this “works” for the lower-frequency spatiotemporal field case. In fact, it’s baked into every camera and every additive and subtractive medium we currently have.

What we glean from the mechanic is that in such a system, there’s a “stasis” that can be achieved for the lowest frequency analysis. But it is not strictly luminance in terms of the underlying forces. It also includes chrominance.

I’ll borrow Boynton’s [2] definition here:

We are now in a position to define a new term: chrominance. Whereas luminance refers to a weighted measure of stimulus energy, which takes into account the spectral sensitivity of the eye to brightness, chrominance refers to a weighted measure of stimulus energy, which takes into account the spectral sensitivity of the eye to color.

When we balance both luminance and chrominance… we end up with the RGB model. Or CMY. Or literally any and all stasis-based models, including but not limited to the Standard Observers themselves.

For example… and not to pick on OK*, if we look at this picture, we can see what I mean by “oopsie double up”:

Here notice how the “blue” is pegged low on the vertical axis. This is because it carries a low luminance. However, when we use this sort of approach, we are privileging luminance exclusively. Notice how the energy of the blue channel is violating the “up / down” relationship of the totality of the row? This is the double up. If we wanted to position blue at this low of a luminance vertical axis, we would simultaneously need to reduce chrominance. If we fail, the combined chrominance and luminance force at the global field level exceeds the global frame value, which would be the achromatic value at that given row position.

What I am saying is that no “CAM” works here. It’s nonsense from the lowest possible principles, given that the problem is already solved. “But what about the Abney Effect or HKE!!!111”. Sadly, no CAM can address this if the aforementioned effects are in fact cognitive field based.

For example, there are, to the best of my knowledge, exactly zero discrete quantity models that can account for the wild variations of the Bressan [3] Christmas demonstration riff below. The models are a dead end, and worse, they simply do not in any way, shape, or form work. Nonsense garbage, and we’d do well to simply accept that, as cognitive field relationships are present at all times. They don’t arbitrarily turn on or off. The “box tops” are identical tristimulus, as is the swatch below in the border field. Imagine how useful these ridiculous CAMs are at describing this.

We can couple this with some of Tse’s work [4] to assert that in cases where the field differentials are ambiguous in terms of our individual cognitive apparatus, we modulate our cognition accordingly. That is, ultimately, the “judge” is a higher-order cognition process, struggling to reify the “lightness” or “darkness”. The following can be modulated for lightness or darkness at will:

Note if we blow the “up” and “down” relationship, that the cognitive modulation is more resistant:

So as for HKE, we already have some pretty solid evidence of a relationship to MacAdam’s limit [5, 6]. Couple this with a general understanding of cognitive fields, and we can probably extrapolate some pretty reasonable “rules” without diving into the depths of field analysis.

TL;DR: maintaining Up / Down relationships is incredibly important in forming a picture.

Given we are already in a uniquely balanced system, and specifically in relation to picture forming:

Axiom #1:
Don’t f##k with “up” / “down” relationships if one is not creatively intending to do so for cognitive impact.

Corollary: Let f_{picture} be the picture forming function. Rule #1 for any balanced RGB-like system might be expressed as:

max(f_{picture}(RGB_{achromatic})) >= max(f_{picture}(RGB_{chromatic}))

I am reasonably confident this holds true for all per-channel mechanics. I am skeptical this holds true for any “CAM” mumbo jumbo, but I’m hoping to be proven wrong here.
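A minimal sketch of why the inequality holds for per-channel mechanics: any monotonic curve applied channel-by-channel preserves the ordering of channel maxima, so the achromatic value at a given frame level bounds any chromatic value with the same channel maximum. The curve here is a hypothetical stand-in for f_picture, not any actual DRT.

```python
def f(x):
    # simple monotonic shaper, a toy stand-in for the picture-forming curve
    return x / (x + 0.18)

def f_picture(rgb):
    # per-channel application, as in any per-channel mechanic
    return [f(c) for c in rgb]

achromatic = [0.9, 0.9, 0.9]
chromatic = [0.9, 0.2, 0.05]   # same channel maximum as the achromatic value

# the achromatic result bounds the chromatic one, channel-wise
assert max(f_picture(achromatic)) >= max(f_picture(chromatic))
```

Because f is monotonic and applied independently per channel, max(f_picture(rgb)) = f(max(rgb)), which is exactly the stated rule; a “CAM”-style transform that mixes channels offers no such guarantee.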

Hopefully the delusional idiot nonsense speak above is understandable here.

/taps the sign☝️

[1] MacAdam, David L. “Photometric Relationships Between Complementary Colors.” Journal of the Optical Society of America 28, no. 4 (April 1, 1938): 103.
[2] Boynton, Robert M. “Theory of Color Vision.” Journal of the Optical Society of America 50, no. 10 (October 1, 1960): 929.
[3] Bressan, Paola. “The Place of White in a World of Grays: A Double-Anchoring Theory of Lightness Perception.” Psychological Review 113, no. 3 (2006): 526–53.
[4] Tse, Peter U. “Voluntary Attention Modulates the Brightness of Overlapping Transparent Surfaces.” Vision Research 45, no. 9 (April 2005): 1095–98.
[5] Stenius, Åke S:son. “Optimal Colors and Luminous Fluorescence of Bluish Whites.” Journal of the Optical Society of America 65, no. 2 (February 1, 1975): 213.
[6] Schieber, Frank. “Modeling the Appearance of Fluorescent Colors.” Proceedings of the Human Factors and Ergonomics Society Annual Meeting 45, no. 18 (October 2001): 1324–27.



R=G=B input should only produce M = 0 if the white point matches exactly that of the RGB → XYZ matrix, and the numerical robustness of the conversion from XYZ → Jab produces a = b = 0.

What happens if, for testing, you pretend you are feeding in Illuminant E balanced data?
Next, feed in XYZ that is just the white point scaled up and down. Do you get equal values after the nonlinearity is applied?

That should help figure out which part might be slightly off.
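The suggested diagnostic can be sketched as below, assuming an Illuminant E white point (X = Y = Z for neutrals) and a generic cone-style nonlinearity. The matrix and exponent are illustrative stand-ins, not the DRT’s actual values.

```python
# Hypothetical RGB->XYZ matrix balanced to Illuminant E: each row sums to
# 1.0, so R = G = B maps to X = Y = Z (up to float rounding).
M = [
    [0.49000, 0.31000, 0.20000],
    [0.17697, 0.81240, 0.01063],
    [0.00000, 0.01000, 0.99000],
]

def rgb_to_xyz(rgb):
    return [sum(m * c for m, c in zip(row, rgb)) for row in M]

def nonlinearity(v, exponent=0.42):
    # stand-in for the model's cone response compression
    return v ** exponent

# feed the white point scaled up and down, as suggested above
for scale in (0.18, 1.0, 10.0):
    xyz = rgb_to_xyz([scale] * 3)       # neutral ramp sample
    resp = [nonlinearity(v) for v in xyz]
    # equal responses across channels => a = b = 0 => M = 0
    assert max(resp) - min(resp) < 1e-9
```

If the responses come out unequal for a neutral input, the residual M is coming from the matrix (white point mismatch or rounding), not from the opponent-axis math.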

When I implemented the base model in C++, I was careful to pre-compute the various matrices at higher precision. There are a number of cases where they can be concatenated together to reduce the number of rounding steps.


I disagree here with the word “just”.
Two values which are very close in RGB or YCC can be far apart in hue (for example, around the neutral axis).
Also, the visible change of hue depends on the magnitude of “colourfulness”.

In the model M is:

M=sqrt(a^2 + b^2) * 43 * surroundNc * et

Which we’ve already simplified by removing the eccentricity factor. We can further simplify this to the following without changing the rendering:

M=sqrt(a^2 + b^2) * surroundNc

Nothing in the DRT needs the higher scaling for M, AFAICS. The surround value is a constant we can pick (currently 0.9 for “dim”; 1.0 would be “average”, so we could get rid of that too if wanted).

Edit: correction, cusp smoothing needs the higher scaling as it’s using absolute values, but that can easily be changed to be relative (which would be better anyway). Tested: with cusp smoothing set to 0, the rendering doesn’t change with the scaling factor removed.
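A tiny sketch of why the simplification is rendering-neutral: with the eccentricity factor already removed, the factor 43 is a uniform rescale of M, so any operation driven by ratios of M values sees no difference. The sample a, b values are arbitrary test data.

```python
import math

surround_nc = 0.9  # "dim" surround, per the post

def M_full(a, b):
    # M = sqrt(a^2 + b^2) * 43 * surroundNc  (eccentricity already removed)
    return math.hypot(a, b) * 43.0 * surround_nc

def M_simple(a, b):
    # M = sqrt(a^2 + b^2) * surroundNc
    return math.hypot(a, b) * surround_nc

samples = [(0.1, 0.2), (0.5, -0.3), (-0.7, 0.9)]
ratios = [M_full(a, b) / M_simple(a, b) for a, b in samples]

# the two formulations differ by the constant 43 for every sample,
# so relative relationships between M values are identical
assert all(abs(r - 43.0) < 1e-9 for r in ratios)
```

This is also why the cusp smoothing caveat matters: anything comparing M against an absolute threshold does see the rescale, while anything relative does not.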

As long as you leave hue alone, you are fine. Then you could scale a,b directly using M as a modulator.

But if you start to tinker with hue-specific tweaks I think you are off the rails.


We “leave hue alone” insofar as we do not change the h value while we are in the JMh working space, so we come out with the same hue we went in with. But the value of h is certainly used in other operations. If nothing else we use the position of the cusp of the gamut hull at the current hue to drive other things.

But I think that what we do with that should really only significantly affect values near and beyond the edge of the gamut, where J and M are changed by gamut compression. The values near the neutral axis where, as you say, small changes of RGB can cause large hue changes should be unaffected by gamut compression, so should come out of the model the same as they went in.

What about the in-gamut compression, @priikone? Should we be concerned about the effect of hue instability near neutral on that?

I’m currently writing a ZCAM based DRT for DaVinci Resolve. Instead of porting the existing blink code I’m writing it from scratch in the hope of finding something. In doing so, I stumbled across this:

float3 monoJMh = float3(inputJMh.x,0.0f,0.0f);

You set the hue to zero, as if the image contained only colors of the red-magenta hue. But of course, the luminance in the CAM models is hue-dependent. So is it really beneficial to fake a red-magenta hue?

BTW: The results of the chroma compression algorithm by Mantiuk are quite promising. But I still need to do more SDR/HDR comparisons.


Even if it is, is there something we can do about it? As you say, we don’t change the hue, so whatever RGB changes happen by changing J and M are driven by the model. All operations, in-gamut compression included, happen in JMh.

The compress mode, though, happens in LMS, in order to avoid negative values in LMS.

When M is also zero, the value of h is not relevant. It cancels out. This part of the code is creating an achromatic colour with the same J value as the original, in order to run that backwards through the model to find the corresponding Y value and tone map that, because the tone mapping curve used operates in the linear luminance domain. The tone mapped value is then run forwards through the model, and the original M and h values “put back”. Ideally we would just apply a tone mapping curve to J, without the need for this back and forth. But we don’t have a version of the tone mapper that works in the ZCAM / Hellwig J domain.
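The back-and-forth described above can be sketched with a toy lightness model standing in for the full CAM. The exponent and tonescale below are illustrative only, not the DRT’s actual curves; the point is the structure: J → Y (backwards), tone map Y, then Y → J (forwards), with M and h put back unchanged.

```python
def J_from_Y(Y, p=0.59):
    # toy forward lightness model (stand-in for the CAM)
    return 100.0 * (Y / 100.0) ** p

def Y_from_J(J, p=0.59):
    # exact inverse of the toy model
    return 100.0 * (J / 100.0) ** (1.0 / p)

def tonescale(Y):
    # toy tone mapping curve operating in the linear luminance domain
    return 100.0 * Y / (Y + 20.0)

def tonemap_J(J):
    Y = Y_from_J(J)                 # run the model backwards on achromatic J
    return J_from_Y(tonescale(Y))   # tone map in linear, then go forwards

J_in = 60.0
J_out = tonemap_J(J_in)
# the original M and h would then be "put back" alongside J_out
assert 0.0 < J_out < 100.0
```

A tone mapper formulated directly in the J domain would collapse `tonemap_J` to a single curve and remove the round trip entirely, which is the simplification the post is pointing at.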

The hellwig_ach Blink version here is a simplified version of the back and forth, which removes all the elements unnecessary for achromatic values. This is for Hellwig, but you could probably do something similar for ZCAM.


I have found that the SDR image sometimes looks brighter / more saturated than the HDR image. It even looks brighter / more saturated than the original linear RGB input. Then I remembered that DCamProf’s neutral tone reproduction operator was very good at perceptually matching the tone-mapped image with the linear image. So I’ve looked into it and found that it mainly uses the luminance from a pure RGB curve for tone mapping. Only for more saturated colors is the tone curve applied to the input luminance and then used for the tone mapping:

  • Luminance is the same as a DCP tone curve applied in linear ProPhoto space
    • This means that the contrast and brightening will be the same as a standard DCP/RGB curve, so you can use the same curve shape.
    • Exception: for high saturation colors (flowers etc) a pure luminance curve is blended in, as it has better tonality.

Source: DCamProf


Tone curve applied on the input luminance, which is then used for the tone mapping

Luminance derived from a pure RGB curve, which is then used for tone mapping

I personally prefer the look of Anders Torger’s method (especially for red colors, like in the example above) and I hope that it can improve the SDR/HDR matching. Now I’m curious whether it’s possible to use this method for the ACES DRT? I mean, is it possible to find an inverse of the transform?


I pushed a new proto version, v042-pex, that has chroma compression without the saturation boost. It’s available from my DRT prototype repo. Please test and see how it behaves compared to the previous one, and how the Rec.709 and Rec.2100 appearance match looks. There’s not much difference to v035 in Rec.709.

I created LUTs for Rec.709, Rec.709 sim and Rec.2100 P3Limited, also available from the same repo under the LUT directory.

I also created an alternative version of Rec.2100 P3Limited LUT which maps middle gray to 13.3 nits. It’s there for testing to see what a lower middle gray mapping does to the appearance match when comparing against Rec.709.


Hi Pekka,

would it be possible to also get an OCIO config, like @alexfry usually creates for new versions?

37 posts were split to a new topic: Luminance-Chrominance Polarity Based Display Rendering Transform

OK, meeting is close, but better late than never.

New bakes:
ACES2 Candidates rev042 = CAM_DRT_v042_pex.blink
Which I think is the latest one @priikone endorses.

ACES2 Candidates rev043 = CAM_DRT_v043.blink
Which is a hybrid of CAM_DRT_v042_new_scaling.blink and my own CAM_DRT_v042.blink

rev043 also has different settings:

  • In-gamut compression is off
  • Reach primaries is Spectral Locus
  • The gamut compressor is iterative

I also need to see what’s going on with inversion in v043, but need to call it a night.


For the version that goes to testing can we have it as a DCTL?

I have tried to get Nuke non-commercial to work with BlinkScript with no success. I have a Windows system with AMD hardware.

I would also like to run the test images on the 12.9" iPad M2 Pro with DaVinci Resolve for iPad to view in both SDR and HDR. Nuke does not run on iPad.


(An additional note: be aware that some GPU cards have a setting for the graphics pixel bit format which is separate from, and differs from, the display color bit depth. From my understanding the pixel format will affect the calculations, truncating them to the selected bit depth. Make sure that the GPU is set to 10-bit in both places. The latest AMD Radeon Pro W7900 & W7800 go to 12-bit, as do the upper-end Nvidia cards.)

I’ve just pushed CAM DRT v042-pex3 to my repo.

v042-pex3 version brings the following changes:

  • New chroma scaling with the exponent suggested by @luke.hellwig.

  • New in-gamut compression algorithm that matches the look of the previous versions. It first expands colorfulness a little in the darker colors, and then compresses to create the path to white. The in-gamut compression no longer pushes values outside the spectral locus (or does so only minimally compared to previous versions).

  • New HDR/SDR appearance match technique. The old technique of rescaling the tonescale to bring the chroma scaling ratios closer to the 100 nit tonescale is no longer used. Instead, the ratios are used as-is and the tonescale is not touched. The match is created by automatically scaling the in-gamut compression parameters by the peak luminance. This is simpler than the previous approach.
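The last point might be sketched roughly as below. This is purely illustrative: the parameter names, base values, and scaling function are hypothetical, not the actual v042-pex3 values; the only idea taken from the post is that the in-gamut compression parameters are derived from peak luminance rather than the tonescale being rescaled.

```python
import math

def compression_params(peak_nits, base_limit=1.2, base_threshold=0.75):
    # hypothetical: widen the compression limit with stops above 100 nits
    stops_above_sdr = math.log2(peak_nits / 100.0)
    limit = base_limit + 0.1 * stops_above_sdr
    threshold = base_threshold  # kept fixed in this toy sketch
    return limit, threshold

sdr = compression_params(100.0)    # Rec.709-style target
hdr = compression_params(1000.0)   # Rec.2100-style target

# HDR gets a wider compression limit in this sketch, so the same ratios
# land in comparable places without touching the tonescale
assert hdr[0] > sdr[0]
```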


LUTs and DCTLs are available in my repo for Rec.709, Rec.709 sim and Rec.2100 P3Limited.

Look compared to v035



Effect of in-gamut compression to values close to spectral locus

Input image:

Input chromaticities:

Scaled chromaticities (using the new exponent):

Fully chroma compressed chromaticities:

Then comparing to v035. v035 fully chroma compressed chromaticities:

The last two images show the difference between v042-pex3 and v035, and the problem that the older versions had, which @alexfry has been talking about. In v042-pex3 the chromaticities are no longer pushed outward, or only minimally compared to older versions.


Here are the DCTL versions

Thank you Anton and Pekka both, but will we be able to have math DCTLs and not just a LUT? Thanks.

I gave a couple of reasons in:

I’ve added some new baked variations to the LUT repo:

rev044 - Which is what @priikone showed in the meeting this morning, but with some of my iterative solver fixes merged (not active in this bake though).

rev044_B - Is the identical blink code, but with some different settings on the Nuke node.

  • IterativeGamutCompressor is On.
  • cusp to mid blend is set to 0
  • Reach Compression Mode is on, and set to Spectral Locus

Both can also be found in the dev repo:

CAM_DRT_v044.blink is used by both
CAM_DRT_v044.nk and CAM_DRT_v044_B.nk (same code, different settings)

My one BIG caveat is that the inverse pathway on the iterative gamut compressor isn’t working properly, and I’m not going to be able to sort it before I go away.

The difference between the two variations is pretty negligible when looking at “normal” images, and really only manifests with extreme colours and intensities.