ACES 2.0: Seeing a few issues

Hello,

I would like to share a few updates on my end. As a personal exercise, I have developed an LMT for ACES 1.3. It is available for free on my Git with an OCIO config.

My main goal was to develop a pleasing and natural look for ACES 1.3 by tweaking purity, brilliance and hue shifts:

  • Reduce overall contrast of ACES 1.X
  • Avoid clipping as much as possible to ensure “smooth” gradients
  • Adjust the “hue-path bendings” to untangle the skews
  • Improve purity attenuation to avoid crossing the g0 threshold

I was overall pleased with the result and I thought that maybe such an LMT would be useful for CG students or freelance colorists as a starting point. I have put all my love for beautiful pictures and movies in this LMT, although I would easily admit it is far from perfect.

To perform this look development as best as I could, I looked at hundreds of images (both live-action and CG) and compared against several picture formations to avoid visual adaptation.

A bit like the ARRI Film Lab (where they mention that “the grain profiles were not designed to perfectly emulate a specific stock but instead represent a look that is characteristic of many different styles of stocks […]”), I have put a bit of all the picture formations that I like in this LMT (JP2499, openDRT, High Contrast Venice 2 LUT from Picture Shop…).

Here are some visual examples (ACES 1.3 with RGC and ACES 1.3 with my LMT):

I would love to get some feedback about the LMT if possible. If you want to use it in Resolve, it needs to be applied on “Tlog_Egamut2”. I could not find a proper shaper space within the ACES family (happy to also hear some thoughts about this).

I think my next step is to try the same exercise for ACES 2.0, although I can already share that I struggled much more than with ACES 1.X in my preliminary tests. Could it be because ACES 1.X is at its core a “simple” 3x3 matrix and an s-curve (per-channel/RGB tonescale), whereas ACES 2.0 is a much more complex model (CAM and such)? I would love to hear from anyone who has experience with this.

Biggest issues I faced so far were in the blues:

I will share updates on this LMT as soon as I can. Thanks !


I experienced similar difficulties in the context of grading a single shot, where at a certain threshold the data sort of gets vacuumed into the blue corner. The only way to stay away from it was to reduce saturation and/or luminance by such an amount that you either end up with a muted image, or drop the luminance just enough to stay under the top side of the volume in the blue area, if that makes sense. Neither solution was ideal imo, as both were very likely to fail again with subsequent grading decisions, like upping the exposure for example.

I understand grading operations tend to be much more coarse and/or local compared to algorithms for continuous smooth adjustments, but I can see how similar issues are encountered along the way: you are still working underneath the DRT in the process, and thus potentially limited by its mechanisms and, in the case of ACES, its fixed parameters.

Hi Chris,

Unfortunately I was not able to investigate the issue further, but I will once I have some more free time. It’s also a good opportunity to test Blender 5.0 with the new combined ACES 1.3, 2.0 and AgX OCIO config.

This is exactly what I have experienced with blues. I am tweaking them in every possible way and somehow I cannot find a way to make them look “right”, as if I were fighting the picture formation or something.

You actually can see it very clearly in this example:

This image is like the killer stress test for picture formation and it has helped me a lot in the past months to evaluate things.

And yes I agree with you. I see look development as a grading pass where we try to develop the aesthetics of a project at a more global level. Mine is very generic because I am merely trying to make things look a bit more natural and pleasing out-of-the-box.

I am worried that now that ACES 2.0 has been implemented basically everywhere (Nuke, Resolve, Blender…), we will have to deal with pink salmon fires for the next ten years. But maybe I am just worrying too much.

As always thanks for your answers guys !

Hello,

I have kept on with my investigations, and I think I have noticed something interesting.

This is a blue ACEScg primary lighting a face through JP2499:

Now I am going to add a red ACEScg primary backlight:

Now pay attention: this is an addition (a “plus” in Nuke), since I am adding a light. And notice the screen-right cheek; it has barely changed.

Same example with ACES 2.0. First the blue primary ACEScg light:

And adding now the red ACEScg primary backlight:

Do you guys see how the screen-right cheek got darker even though we are adding light? This is one of the issues I am facing while building my ACES 2.0 LMT. Any help or comment would be appreciated. Thanks!
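For anyone who wants to poke at this without Nuke, the non-additive response described above can be reproduced with any toy display transform that tonescales a norm of the pixel and preserves RGB ratios. This is only a sketch of the mechanism, not ACES 2.0 code (the real transform is far more involved), and the curve and values are made up for illustration:

```python
def tonescale(x):
    # simple monotonic shoulder curve standing in for a real tonescale
    return x / (x + 1.0)

def per_channel_drt(rgb):
    # ACES 1.X-style: each channel tonescaled independently
    return tuple(tonescale(c) for c in rgb)

def norm_based_drt(rgb):
    # "hue-preserving" style: tonescale the max norm, preserve RGB ratios
    m = max(rgb)
    if m == 0.0:
        return (0.0, 0.0, 0.0)
    return tuple(c * tonescale(m) / m for c in rgb)

blue          = (0.0, 0.0, 2.0)   # blue key light on the cheek
blue_plus_red = (3.0, 0.0, 2.0)   # add a red backlight (a Nuke "plus")

# Per-channel: adding red cannot darken the blue output (channels are independent).
print(per_channel_drt(blue)[2], per_channel_drt(blue_plus_red)[2])

# Norm-based: adding red lowers the blue output, i.e. the cheek gets darker.
print(norm_based_drt(blue)[2], norm_based_drt(blue_plus_red)[2])
```

The point is not that ACES 2.0 literally does this, but that any ratio-preserving tonescale trades additivity for hue preservation, which is one plausible reason the cheek darkens when light is added.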

Apologies for the multi-posting. But I am done with a first draft of my LMT for ACES 2.0.

My goal was to provide a pleasing and natural aspect out-of-the-box on all the images I could test on. As expressed a couple of times in this thread, I struggled mainly with the blues which are very hard to untangle.

But the workaround that I found was to apply the RGC after my LMT. So this LMT is made of two steps: a grading pass for the pleasing aspect and the RGC to “contain” the data. That’s a bit of duct tape, but I could not find a better solution for now.
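For those building a similar two-step LMT in OCIO, the grade-then-RGC chain maps naturally onto a single Look. The sketch below is hypothetical (the look name and LUT file are placeholders, and the builtin style string is quoted as I recall it from OCIO 2.1+):

```yaml
# Hypothetical OCIO v2 look: a baked grading pass followed by the RGC.
looks:
  - !<Look>
    name: pleasing_lmt
    process_space: ACEScct
    transform: !<GroupTransform>
      children:
        # the creative grading pass, baked to a LUT (placeholder file name)
        - !<FileTransform> {src: grade_pass.cube, interpolation: tetrahedral}
        # the ACES 1.3 Reference Gamut Compression, built into OCIO >= 2.1
        - !<BuiltinTransform> {style: "ACES-LMT - ACES 1.3 Reference Gamut Compression"}
```

Chaining the RGC as the last child keeps the “containment” step after the grade, matching the order described above.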

I will detail the fabrication process here. Maybe it will inspire others to come up with their own LMT and also start a conversation about what “pleasing” means. I am sure there are better ways to describe and analyze pictures than “it is all creative”.

Thanks for your attention.

I generally start with a hue sweep in ACEScg where you can observe (among other things):

  • Due to its “chromaticity linear” nature, ACES 2.0 will mainly go “pink” as exposure increases with reds and slightly violet as blues go brighter.
  • We may also look at how the yellows stand out and probably break the g0 threshold (the brighter yellow patches look emissive).
  • The ACEScg green primary row may also look too “desaturated/pale” (it should almost look like a laser, right?).
  • Finally, the blues are too dark compared to their surrounding hues.


So the LMT tries to improve those aspects, mainly by reducing purity and gently bending the hue paths. I also added a contrast boost to sit somewhere between ACES 1.X and ACES 2.0.

But those patches do not tell the full story and are missing a key component: gradients.

So generally my second step is to look at the gradients of the ACEScg primaries:



On a red ACEScg primary, I mainly try to improve two aspects:

  • With ACES 2.0, there is a clip-like behaviour that prevents the volumetric light from looking “smooth”.
  • We are also missing a slight hue-path bending towards orange.

But if you pay close attention, you will see that I unfortunately could not get rid of the “kink” entirely.
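A cheap way to make the “kink” objective, rather than squinting at gradients, is to scan a rendered ramp for large second differences. This is a sketch with my own hypothetical helper (not part of any ACES or Nuke tooling), comparing a soft shoulder against a hard clip:

```python
def kink_score(values):
    # largest discrete second difference: a proxy for visible breaks
    # in an otherwise smooth gradient
    return max(abs(values[i + 1] - 2.0 * values[i] + values[i - 1])
               for i in range(1, len(values) - 1))

n = 41
ramp = [i * 2.0 / (n - 1) for i in range(n)]     # 0 .. 2 exposure ramp

smooth  = [x / (x + 1.0) for x in ramp]          # soft shoulder curve
clipped = [min(x, 1.0) for x in ramp]            # hard clip at 1.0

print(kink_score(smooth), kink_score(clipped))
```

The hard clip scores an order of magnitude worse than the shoulder on the same ramp, which matches the visual impression of a break in the volumetric light.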

Let’s have a look now at a blue ACEScg primary:



We can make the same observations here: by default, we get a breakup in the light and some funky hue distortions towards magenta are happening. This example here was the most difficult to deal with. I had to desaturate quite a bit to improve on those aspects.

Once I am done with those simple examples, I generally start looking at more complex stimuli and keep refining as much as I can, because something that is missing from the previous images is the relationship between hues and how they “blend” together. This is super important.

For example:



Although I think my LMT improves overall this image, there is one specific area that is slightly worse. If you look at the penultimate row between blue and violet, the transition is actually less smooth in mine.

But that is a compromise I had to make for this kind of picture:



Hopefully, with my LMT, the blue light reads as “light” on the table and the transitions between the wood (on the border) and the bright and colorful lights are “smoother”. Although I reckon they are far from perfect.

Here are more successful examples from my “exercise”:



So here, the obvious improvement is the hue bending towards orange and yellow. But there is something else at stake. If you focus on the relationship between the grey smoke and the explosion (especially at the bottom right of the simulation), I managed to invert it; ACES 2.0 was the only DRT (that I know of) to create this effect.

We may now look at the sunset that Charles was mentioning:



Same thinking here. The hue bending helps with the pleasing aspect and I also managed to mitigate the polarity issue in the sun area.

In the following example, we can observe that peppers and apples look more natural and pleasing with my LMT. You may also observe that the contrast boost helps to make the specular reflections “sparkle”. We will come back to that.



Something I noticed very late in the image above is the blue reflections from the bottle on the red and yellow peppers (screen left). I think my LMT also improves this area.

And since we are on the topic of blue light reflections, we shall move to our next example:



I focused a lot on the blue lights here. With ACES 2.0, it looks like there is a dark ring around them. You may also see it in the blue reflection in the window (screen right). My LMT tries to give them a better energy representation.

It is worth noting here that my LMT is a bit saturated. This is mainly because I come from a “feature animation” background and we generally tend to prefer saturated images.

Another image I spent hours looking at:



Several things worth pointing out here:

  • The contrast boost helps the image overall, because ACES 2.0 looks too “flat” by default (this is by design). I was really careful not to push contrast as much as ACES 1.X.
  • A good area to focus on is the shoulder (in the sun) of the screen-left lady. With ACES 1.X, the contrast is so high that you lose all shaping. And with ACES 2.0, its “chromaticity linear” nature does not give a pleasing skin tone.
  • Somehow my LMT sits between those two. Hopefully the sunny skin looks pleasing and the contrast boost helps without losing the shaping in the “highlights” region. If you focus on the face, you will hopefully see that the skin “sparkles”.
  • Also note that the orange patterns on her shirt no longer look emissive with my LMT.

We can also compare more “extreme examples”:



Same overall thinking here. By gently tweaking the hue bending, the brightness and the purity, we recover some shaping on the faces and a pleasing aspect.

More examples about skin:



Hopefully the contrast boost helps overall, and if you look carefully at the face of the lady on the left, you may see some improved skin sparkling. I tried to retrieve the information as best I could, but in a very subtle way.

Of course we may compare my favorite stress test:



You can see how I struggled in the blue region unfortunately. But I think the rest of the hues improved with my LMT.

I checked also my Lego characters to make sure yellows behaved as expected:



The three bricks screen-left use ACEScg primaries. The green one might go slightly too much towards yellow. There was a fine balance to find which was not easy.

Finally, I also spent quite some time on the Grinch images:



A few elements to point out:

  • The contrast boost helps the overall aspect, especially the snow.
  • The shadows with my LMT look blue and not magenta. I think it helps a lot.
  • The Grinch’s colour looks closer to its original intention.
  • The relationship between green and red in the Christmas sweater is also closer to the intended look.

In conclusion:

  • If you are reading this, thank you. I spent a lot of time on this LMT and I hope I was able to explain clearly (part of) the process.
  • I took a very holistic approach here rather than focusing on specific issues. My goal was really to give ACES 2.0 a more robust starting point.
  • This LMT will be available later this month on my Git as part of an OCIOv2.4 config.
  • I intend to share it with the Blender community because I think it will give CG artists and students a better starting point.
  • Although I agree my LMT is not a solution per se, nor perfect.
  • I also acknowledge that this long block of text is missing some proper thinking on the “why” I think a picture formation is successful or not. That will be for another time.

Finally, this “LMT exercise” has triggered an interesting question for me, which I would like to share. If you look back at the first hue sweep with the squares, in which order do you think the primaries should reach “white”? I tend to think reds first, then green and finally blue (roughly thinking of LMS here…), but I would love to hear some thoughts on that.

Thanks !


Wow! @ChrisBrejon this is a ton of work and very interesting stuff - thank you for your continued efforts and keeping up with regular updates. I will need to come back to this post later to dig more into this but I just wanted to let you know I have been reading along with your progress but am very interested in exploring further on my own images so I can give you more helpful feedback. Great stuff!


Great work @ChrisBrejon! Thanks for sharing. It would be interesting to learn more about your approach for making these adjustments.

You might find the following RED sample clip quite interesting, especially with ACES 2.0.

https://www.red.com/download/sample-r3d-file-v-raptor-x-8k-vv-devil

Thanks @thomasberglund. I appreciate the kind words. I described part of the process in my previous post, but I am happy to go more in detail about my approach. Please feel free to ask me about any specific point you would like to know.

Why

The reason for this LMT is to give a less “neutral” and more “pleasing” starting point. Because of its “chromaticity linear” nature, ACES 2.0 will display “salmon pink” fires by default. I use this example because it is the most obvious one, but I actually believe that “bending the hue paths” benefits all parts of the picture formation (skin tones, skies…).

For the record, I actually tried to make a CG feature film with a “chromaticity linear” approach, and let’s say it was not my best idea. We changed the LUT halfway through the project to re-introduce some carefully engineered hue bending.

How

The LMT was generated in Nuke using some grading tools that I unfortunately cannot say much about. I will just say that they are some of the best grading tools I have ever had in my hands, and they might get released in 2026.

They allowed me to tweak different aspects of the picture formation such as “brilliance”, “purity”, “hue shift”, “contrast” and “saturation”…

What

Finally, on the approach itself, let me say first that:

  • I wish I had a better approach. We discussed at length here how we could generate test pictures that would unambiguously reveal some aspects of a picture formation and where it falls apart. Unfortunately, I am not there yet.
  • I am 100% convinced that we can come up with a better LMT for ACES 2.0. The one I released on my Git is just a first draft, and I do intend to improve it in the coming weeks.

When it comes to LMTs (and even Picture Formation), we often fall into the realm of “this is all creative”, and a few of us here think that a more rigorous approach to pictures could benefit the entire community.

Recipe

Visual Adaptation

First, I really, really try to fight “visual adaptation”. So in the Nuke script that I use, I have 5 or 6 different picture formations in my viewer to constantly compare between. This is one of the most important points. In my script, for example, I mainly use “OpenDRT”, “ACES 1.0”, “Picture Shop High Contrast”, “JP2499” and of course ACES 2.0.

And there are a few things I know about them, so I can set my eye accordingly. For instance, ACES 1.0 has too much contrast and ACES 2.0 not enough, so my LMT aims at something in-between. “JP2499” slightly pushes bright greens too much into yellows for my taste, and “Picture Shop High Contrast” desaturates the blues too much overall. Same thing with the hue-path bendings, which are necessary but generally too strong with per-channel RGB tonescales.

These observations might look random, but they come from carefully looking at hundreds of images from different sets with various picture formations. So I try to come up with some kind of “average” (for lack of a better word).

Samples

In my script, I also have access to hundreds of images, both computer-generated and from different cameras. I try to use the widest possible range: portraits, Macbeth charts, gradients, laser beams, night clubs and landscapes.

As explained earlier, I focus a lot on “gradient smoothness” in all those examples, not only monochromatic gradients (like a red ACEScg primary going to white) but also overlapping gradients of different colours (going from blue to red), because I believe gradients are basically everywhere when it comes to pictures.

Golden Rules

And finally, I have come up with a series of rules when it comes to picture formation. For example, I would always privilege “luminance” over “chrominance”. Even if I want to “maximize purity”, I would never do it in a way that sacrifices the shaping and reading of forms.

Because in the end, this is what pictures are about, right? The reading of shapes. This is what I called “it shall not break visual cognition” in my article, because in the example below, we cannot cognize the spheres properly:

Just like this example that Troy Sobotka shared five years ago on Twitter:



Look at the blue cap and blue gear of both players. I believe what is important here is not to try to reproduce the scene “as if I were standing there”, but to make sure the blue sports gear does not stand out compared to the other elements of the picture. We “read” the second picture much better than the first one.

There is more that I take into account (like “polarity”), but another key component to look for is the relationship between PBR and pictures. This one is a bit complex for me to explain, but it has been my biggest epiphany of 2025. And seeing more engineers and artists start connecting the dots between PBR and Picture Formation is just exciting:


In my article, I mention the “air material” (e.g. atmosphere and volumetrics) as a great way to evaluate a picture formation, for example when some “saturated pixels” accidentally punch through a layer of smoke (or a cloud). I believe an example of a glass of milk was also shared on this forum at some point.

But we can actually go way further when we start to think of “gloss” (or sheen/specular) as one of the most constructive cognitive mechanisms. I will not expand further because I am just parroting Troy Sobotka at this point. I’ll just say that this “gloss” theory is everywhere and simply amazing, and it might nicely relate to one of Alex Forsythe’s comments (from 3 years ago) that “skin should sparkle”.

Here is a nice example to illustrate this point using two different picture formations (compare the MacBeth chart and skintones, and think about what a sheen layer does):



I will add one last thing: I do not believe there is only one valid picture formation (if we go back to the baseball example, we could argue that both images are “valid”), but I do think there is one “correct” starting point from which we could depart. We just haven’t found it yet.

And I am sure some cleverer minds have figured out better what the “science of pictures” is actually about. Some might even say that pictures are a complete field in themselves and have nothing to do with colourimetry, for instance.

Thanks for reading.

PS: thanks for the link to the RED footage. I will add it to my sample tests!


Amazing. Thank you so much @ChrisBrejon! :slight_smile:

I often come back to your excellent CG Cinematography book and related articles, especially “What makes a good picture (formation)” which is incredibly insightful.

Below is a picture formation test I did with the RED sample clip I mentioned, which I find quite interesting. The preview still image is converted from Rec.709 BT.1886 to Gamma 2.2 for sRGB display.

I have not seen these kinds of issues before in my ACES 2.0 testing with other test clips. I might be doing something wrong here, but as far as I can tell this is what happens when using an ACES Transform in Resolve.

It would be interesting to learn more about what is happening here.

Here are the RED RAW R3D settings used in Resolve.

Here is a plot of CIE 1931 xy chromaticity for the input “scene-referred” colorimetry on the left, the picture formed by ACES 2.0 Picture Formation in the middle, and the colorimetry of the formed picture on the right.

As you can see, candle light as captured by the RED Camera Observer tends towards the camera observer spectral locus, and is placed quite far “outside” the human observer spectral locus by the colorimetry formation matrix.

All of the Picture Formation approaches you pictured except for Arri Reveal and ACES 2.0 seem to handle this pretty well though, so what’s going on here? It turns out there is a similar strange behavior in reddish-orange hues as there is in blue hues.

If we take a hue sweep of the same hue angle as the skin, we can see a strange region where the hue changes suddenly from reddish orange to yellow and then back to reddish orange:

Here’s a full screen image of the hue sweep so you can see it better:

And here is a plot of the hue as it varies from left to right. The increase 3/4 of the way across the plot is the hue distorting towards yellow and then suddenly distorting back towards red.

In 3 dimensions, we can see the same hue distortion. Note that the behavior changes over the intensity range. Higher intensity actually has the inverse behavior.

All this leads to a perplexing and unintuitive behavior of the Picture Formation, where if you increase the saturation of the input image data, the perceived saturation of the image is reduced due to the hue shift towards yellow.
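The sudden hue reversal in plots like the one above can also be flagged numerically. Below is a sketch using a crude opponent-space hue angle as a stand-in for the CAM hue actually plotted; both helpers are my own hypothetical utilities, not part of the attached Nuke script, and the “kinked” sweep is synthetic:

```python
import math

def hue_deg(r, g, b):
    # crude opponent-space hue angle; a stand-in for a proper CAM hue
    a = r - 0.5 * (g + b)
    bb = math.sqrt(3.0) / 2.0 * (g - b)
    return math.degrees(math.atan2(bb, a)) % 360.0

def max_hue_jump(hues):
    # largest sample-to-sample hue change, wrapped to (-180, 180]
    def wrap(d):
        return (d + 180.0) % 360.0 - 180.0
    return max(abs(wrap(h2 - h1)) for h1, h2 in zip(hues, hues[1:]))

# A smooth sweep versus one with a sudden kick towards yellow and back
smooth = [hue_deg(1.0, t, 0.0) for t in (0.0, 0.1, 0.2, 0.3, 0.4)]
kinked = smooth[:2] + [smooth[2] + 25.0] + smooth[3:]

print(max_hue_jump(smooth), max_hue_jump(kinked))
```

Running a real hue sweep through a picture formation and thresholding `max_hue_jump` would catch discontinuities like the reddish-orange-to-yellow flip without relying on eyeballing the plot.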

Perhaps this is one aspect of what Daniele was warning about when the ODT VWG first started to go down the path of using a human observer color appearance model in their approach?

It’s all fine though, this image is “pathological” so we don’t need to worry about it :wink:

Here is the nuke script used to generate the above images if anyone wants to take a look for themselves.

devils-in-the-details.nk (596.6 KB)


Despite what has been said a couple of times in previous posts, ACES2 is not chromaticity linear, quite the opposite. Since it has the CAM in it, it will take that straight line and bend it according to that model; the hue will vary.
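Chromaticity linearity is easy to probe in isolation: additive mixtures of two stimuli lie on a straight chromaticity line, and any per-channel nonlinearity (never mind a full CAM) bends that line. A toy sketch, with a simple power function standing in for the real transform:

```python
def chromaticity(rgb):
    # (r, g) chromaticity coordinates (projective normalisation)
    s = sum(rgb)
    return (rgb[0] / s, rgb[1] / s)

def collinearity(p0, p1, p2):
    # cross product of the two edge vectors: 0 means collinear
    return abs((p1[0] - p0[0]) * (p2[1] - p0[1])
               - (p1[1] - p0[1]) * (p2[0] - p0[0]))

a, b = (1.0, 0.2, 0.0), (0.0, 0.2, 1.0)
mix = lambda t: tuple((1 - t) * ca + t * cb for ca, cb in zip(a, b))
power = lambda rgb, g: tuple(c ** g for c in rgb)

pts_linear = [chromaticity(mix(t)) for t in (0.0, 0.5, 1.0)]
pts_bent   = [chromaticity(power(mix(t), 1.0 / 2.4)) for t in (0.0, 0.5, 1.0)]

print(collinearity(*pts_linear))  # ~0: additive mixing is chromaticity linear
print(collinearity(*pts_bent))    # clearly non-zero: the line has been bent
```

A CAM-based transform bends such lines in a hue-dependent way rather than with a single exponent, but the mechanism is the same: anything nonlinear in the channels breaks straight chromaticity paths.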

This is the Dominant Wavelength image plotted without gamut mapping through ACES2 (AP1 output):

Then again, who knows how they should bend in the end…

And here with mapped to different destination gamuts through ACES2:



Let’s look at how well ACES2 tracks the Planckian locus (Nuke node) through ACES2 Rec.709, as that is one line we kind of know how it should look, for whatever it’s worth:

In this case it is the gamut mapper (and the final hard clamp) that creates that hard transition. We can see all of these issues of hard transitions with a full hue sweep image through ACES2:

In ACES2, LMTs will have to add some compression if they want all these transitions across hues to be smoother…

I also find it interesting that we all, myself included, seem to accept without hesitation that the skin tone should come out reddish in that image. I don’t know how real that is, though. We can all test this with a candle in a bathroom with the lights off and see how red our faces actually go. :slight_smile: I dunno, might be a preference thing…


Let’s ignore the view frustum discussion in this, which would render the idea of continuity of a discretized sample space utterly meaningless…

Are you suggesting that if I, as a picture author, use a sweep of whatever I deem as being the “limits” of the stimuli in my PBR model, and wrap it around a cylinder or a sphere or a plane, that I should expect the continuity of that texture to be formed into a pictorial depiction that holds a discontinuity such as this?

Further, if I wrap my spectral pictorial reflections upon a plane, or a sphere, or a cylinder, or any continuous form, and cover it with a “roughness”, that I should expect the absolutely bizarre results of these CAM based approaches?

That’s designed behaviour?

Not suggesting anything, just showing what comes out of ACES2 with that particular image. An image like that was discussed years ago, also in Some reflections and experiments - #3 by bottosson.

Personally, I would like images like that, or any other, to have smooth gradients.

Thanks for your answer, Pekka, this is helpful.

Apologies, this one is on me. I was looking at fire images and over-simplified things by saying that ACES 2.0 was “chromaticity linear”. But hopefully my point still stands: currently, fires look “salmon pink” and ACES 2.0 would benefit from a “default” LMT.

Although I agree that this “dominant wavelength” image was very helpful in building my LMT, I think it does not tell the full story here because it only shows a single source colorimetry point’s behavior through the picture formation.

May I suggest checking the behavior on a full sweep of “excitation purity”:


The striking visual difference is because ACES 2.0 distorts towards yellow across the entire “intensity” range, whereas most picture formations will distort towards yellow above middle grey and distort the other way towards red below middle grey.

Indeed, the main culprit for this behavior (and the sudden desaturation of narrow-spectrum blue stimuli) is the “apply out of gamut compression” step.


I find this comment very interesting, and it might be worth asking ourselves why we all seem to agree on this point.

This is where I kinda disagree, if I may. I am not sure how to express this properly, but we are image makers. Our goal is not to re-create human vision or a “scene as if I were standing there”. I think the whole “scene/display” axiom is a mental trap we have all fallen into, and it would be interesting to shift our vantage towards a more image-centric approach.

I was reading this book to my kid the other day and I was honestly horrified at what I was reading:


“The eyes are like windows open to the outside world. Images enter in the form of beams of light […]” I know we discussed some years ago whether it would be interesting to define what images are, and this proposal was kind of discarded. But my feedback is that it has been helpful to me to think about it that way.


Agreed, but, I think it’s perfectly valid to have a goal (or preference) of photographic reproduction of the scene with emphasis on color accuracy and realism. Is more yellow fire realism or preference? Does it matter?

So, it’s ok to stare at yourself in the dark bathroom in candlelight and see what you see. Then again, what’s the old saying: “we don’t see the world as it is, we see the world the way we are”. Particularly spot on for color. So it’s all preference in the end. If the color looks correct to you, it’s correct. If it looks wrong to you, it’s wrong. :slight_smile:

“Accuracy” with respect to what? How do you quantify “realism” in this context?

Does it matter? If that is your goal or preference, then I think you get to decide, whatever example, reference or measurement you want to use or not use to get the reproduction you’re after… I could’ve used other words too, like “natural”, “true-to-life”, I don’t think it matters.

If this were an even remotely plausible postulate, not a single primate would be able to propel their bodies through space.

This goal has never been the case since the advent of photography, neither in the analog nor electronic eras. The extreme differences between the pictorial depiction stimuli and the stimuli in-front-of-camera should not be discarded under the dismissive and euphemistic “preferential adjustments”, as such language does a disservice to what could only be described as fundamental mechanisms.

But let us follow along the proposed ahistorical ideological trail…

I would be curious if anyone has fully embraced the concept that the CIE XYZ stimuli model of relative wattage requirements to yield an indistinct boundary within a bipartite field has absolutely nothing to do with colour cognition? It has to do with identifying a flux difference across a very specific boundary condition, nothing more. To misplace colour cognition into the domain of boundary flux is a grave a priori error of logic and judgement.

But let’s ignore this, and further pretend that the entire idea of “colour accuracy” exists under the framing implied. Despite what display and streaming service and IP vendors might propose, is it reasonable to suggest that pictorial depictions of prisms, rainbows, diffraction, iridescence, indirect interactions off of all energy fields, etc. are fundamentally impossible due to the lack of “accuracy” of stimuli metrics? Or can rainbows be depicted in black and white film? How can this be, without the entry-point a priori postulate of “colour accuracy”, upon which the entire “scene” vs “display” paradigm of Giorgianni and Kodak rests, being subject to extreme scrutiny?

And even if the generous interpretations outlined above were remotely tenable from the very first stage of the proposition, let alone the actual biology of our primate systems, we should note the additional layer of Colour Appearance Models and Uniform Colour Spaces upon which the present discourse of “colour accuracy” rests. These can be framed as deformations of the stimuli flux fields so far displaced from anything remotely patterning after “appearance” as to be equivalent to proposing Jupiter as a substitute for Earth.

Of course it matters when a proposition is resting atop layers upon layers of science fiction. None of this works. We started from a faulty proposition and executed a low-effort, dismissive interrogation of fundamental constructs. Should we be surprised that some folks might suggest that the continued dismissals and recklessly low-effort interrogations matter?


I just want to say that I really appreciate the hard work that has been done with ACES 2.0 and I hope we can keep the discussions civil and constructive. Not saying the discussion has been fully derailed yet, but I have a feeling it might happen later unless we are mindful about this moving forward.

This is subjective of course, but I agree that it currently feels like ACES 2.0 would really benefit from a default LMT, like @ChrisBrejon has shown multiple good examples of earlier in this thread.

Adding advanced hue skews in grading to get “good looking” pictures shouldn’t be needed by default.
