Thanks @thomasberglund. I appreciate the kind words. I described part of the process in my previous post, but I am happy to go into more detail about my approach. Please feel free to ask about any specific point you would like to know more about.
Why
The reason for this LMT is to give a less “neutral” and more “pleasing” starting point. Because of its “chromaticity linear” nature, ACES 2.0 will display “salmon pink” fires by default. I use this example because it is the most obvious one, but I actually believe that “bending the hue paths” benefits all parts of the picture formation (skin tones, skies…).
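To make the “salmon pink fire” point concrete, here is a minimal numpy sketch of my own (the Reinhard-style curve and the values are stand-ins, not the actual ACES 2.0 math) comparing a chromaticity-preserving tonescale with a per-channel one on a bright emissive orange:

```python
import numpy as np

def tonescale(x):
    # Stand-in Reinhard-style shoulder; any monotonic compressive
    # curve shows the same qualitative effect.
    return x / (1.0 + x)

fire = np.array([4.0, 1.0, 0.25])  # bright emissive orange, scene-linear RGB

# Chromaticity-preserving: tonescale a norm, scale all channels by one factor.
norm = fire.max()
ratio_preserving = fire * (tonescale(norm) / norm)

# Per-channel: tonescale each channel independently.
per_channel = tonescale(fire)

print(ratio_preserving / ratio_preserving[0])  # G/R ratio unchanged: hue pinned
print(per_channel / per_channel[0])            # G/R ratio rises: bends towards yellow
```

The chromaticity-preserving path keeps the fire’s hue pinned (hence the “salmon” look once it desaturates uniformly towards display white), while the per-channel path skews the bright core towards yellow, which is the kind of hue bending film-like renderings give for free.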
For the record, I actually tried to make a CG feature film with a “chromaticity linear” approach and let’s say it was not my best idea. We changed the LUT halfway through the project to re-introduce some carefully engineered hue bending.
How
The LMT was generated in Nuke using some grading tools that I unfortunately cannot say much about. I will just say that they are some of the best grading tools I have ever had in my hands, and they might get released in 2026.
They allowed me to tweak different aspects of the picture formation such as “brilliance”, “purity”, “hue shift”, “contrast” and “saturation”…
What
Finally, on the approach itself, let me say first that:
- I wish I had a better approach. We have discussed at length here how we could generate test pictures that would unambiguously reveal aspects of a picture formation and where it falls apart. Unfortunately, I am not there yet.
- I am 100% convinced that we can come up with a better LMT for ACES 2.0. The one I released on my Git is just a first draft and I do intend to improve it in the coming weeks.
When it comes to LMTs (and even Picture Formation), we often fall into the realm of “this is all creative”, but a few of us here think that a more rigorous approach to pictures could benefit the entire community.
Recipe
Visual Adaptation
First, I really, really try to fight “visual adaptation”. So in the Nuke script that I use, I have 5 or 6 different picture formations in my viewer so I can constantly compare between them. This is one of the most important points. In my script, for example, I mainly use “OpenDRT”, “ACES 1.0”, “Picture Shop High Contrast”, “JP2499” and of course ACES 2.0.
And there are a few things I know about them, so I can set my eye accordingly. For instance, ACES 1.0 has too much contrast and ACES 2.0 not enough, so my LMT would aim at something in-between. “JP2499” slightly pushes bright greens too far into yellows for my taste, and “Picture Shop High Contrast” desaturates the blues too much overall. The same goes for the hue path bendings, which are necessary but generally too strong with per-channel RGB tonescales.
These observations might look random, but they come from carefully looking at hundreds of images from different sets through various picture formations. So I try to come up with some kind of “average” (for lack of a better word).
Samples
In my script, I also have access to hundreds of images, both computer-generated and from different cameras. I try to use the widest possible range: portraits, Macbeth charts, gradients, laser beams, night clubs and landscapes.
As explained earlier, in all those examples I focus a lot on “gradient smoothness”, not only in monochromatic gradients (like a red ACEScg primary going to white) but also in overlapping gradients of different colours (going from blue to red), because I believe gradients are basically everywhere when it comes to pictures.
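As a rough sketch of how such test gradients could be generated procedurally (a hypothetical helper of my own, not my actual Nuke setup; the exposure sweep is my own addition):

```python
import numpy as np

def gradient_strip(c0, c1, steps=256, exposures=(-2, 0, 2, 4)):
    # Scene-linear mix between two RGB colours, repeated at several
    # exposures so the formation is stressed along the whole tonescale.
    t = np.linspace(0.0, 1.0, steps)[:, None]
    base = (1.0 - t) * np.asarray(c0, float) + t * np.asarray(c1, float)
    return np.stack([base * (2.0 ** ev) for ev in exposures])

# A red ACEScg primary going to white, and an overlapping blue-to-red ramp.
red_to_white = gradient_strip([1, 0, 0], [1, 1, 1])
blue_to_red  = gradient_strip([0, 0, 1], [1, 0, 0])
print(red_to_white.shape)  # (4, 256, 3): exposures x steps x RGB
```

Feeding strips like these through each picture formation and looking for kinks, flat spots or hue wobbles along the ramp is exactly the kind of “smoothness” reading I mean.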
Golden Rules
And finally, I have come up with a series of rules when it comes to picture formation. For example, I would always privilege “luminance” over “chrominance”. Even if I want to “maximize purity”, I would never do it in a way that sacrifices the shaping and reading of forms.
Because in the end, this is what pictures are about, right? The reading of shapes. This is what I called in my article “it shall not break visual cognition”, because in the example below, we cannot cognize the spheres properly:
Just like this example that Troy Sobotka shared five years ago on Twitter:
Look at the blue cap and blue gear of both players. I believe what is important here is not to try to reproduce the scene “as if I were standing there” but to make sure the blue sports gear does not stand out compared to the other elements of the picture. We “read” the second picture much better than the first one.
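The “luminance over chrominance” rule can even be made crudely testable. Here is a toy sketch (the Rec.709 weights, the ramp and both transforms are my own assumptions here, nothing from my actual tooling) that checks whether a transform keeps luminance monotone along a shading ramp, as a proxy for the reading of forms:

```python
import numpy as np

REC709_Y = np.array([0.2126, 0.7152, 0.0722])  # luminance weights

def luminance(rgb):
    return rgb @ REC709_Y

def preserves_shading(transform, base_colour, steps=64):
    # A shading ramp of one colour: brighter versions of the same chromaticity.
    ramp = np.linspace(0.01, 4.0, steps)[:, None] * np.asarray(base_colour, float)
    y = luminance(transform(ramp))
    # Luminance should keep increasing, or the form reads as flat/inverted.
    return bool(np.all(np.diff(y) > 0.0))

gentle = lambda rgb: rgb / (1.0 + rgb)               # soft monotone tonescale
harsh  = lambda rgb: np.clip(3.0 * rgb - 1.0, 0, 1)  # crushed toe + hard clip

print(preserves_shading(gentle, [1.0, 0.2, 0.1]))  # True
print(preserves_shading(harsh,  [1.0, 0.2, 0.1]))  # False: luminance flattens
```

A transform that flattens or inverts luminance along such a ramp is sacrificing exactly the shaping of forms I am talking about, whatever it does for “purity”.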
There is more that I take into account (like “polarity”), but one of the key things to look for is the relationship between PBR and pictures. This one is a bit complex for me to explain, but it has been my biggest epiphany of 2025. And seeing more engineers and artists start connecting the dots between PBR and Picture Formation is just exciting:
In my article, I mention the “air material” (e.g. atmosphere and volumetrics) as a great way to evaluate a picture formation: for example, whether some “saturated pixels” accidentally punch through a layer of smoke (or a cloud). I believe an example of a glass of milk was also shared on this forum at some point.
But we can actually go much further when we start to think of “gloss” (or sheen/specular) as one of the most critical cognitive mechanisms. I will not expand further because I am just parroting Troy Sobotka at this point. I will just say that this “gloss” theory is everywhere and simply amazing, and it might nicely relate to one of Alex Forsythe’s comments (from 3 years ago) that “skin should sparkle”.
Here is a nice example to illustrate this point using two different picture formations (compare the MacBeth chart and the skin tones, and think about what a sheen layer does):
I will add one last thing: I do not believe there is only one valid picture formation (if we go back to the baseball example, we could argue that both images are “valid”), but I do think there is one “correct” starting point which we could depart from. We just haven’t found it yet.
And I am sure some cleverer minds have better figured out what the “science of pictures” is actually about. Some might even say that pictures are a complete field of their own and have nothing to do with colourimetry, for instance.
Thanks for reading.
PS: thanks for the link to the RED footage. I will add it to my sample tests!