With ACES 2.0 being prepped for release, I was wondering what the concrete plans were regarding ACES-provided looks. I recall a desire for an ACES 1.0-like contrast curve being one of them. What are the ideas for others?
And what is the current goal for the 2.0 release? Will the looks ship with it, or come later down the line?
We currently have an LMT that will provide an exact match to the ACES 1.0 SDR tone curve. It's just a 1D LUT applied to all three channels, so it does nothing special to color.
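For anyone unfamiliar with how that kind of LMT evaluates, here is a minimal sketch of applying a single 1D LUT identically to R, G, and B. The LUT values below are an arbitrary toy curve, not the actual ACES 1.0 tone curve:

```python
import numpy as np

def apply_1d_lut(rgb, lut, domain_min=0.0, domain_max=1.0):
    """Apply the same 1D LUT to each of the R, G, B channels.

    rgb : (..., 3) array of pixel values
    lut : 1D array of output samples, evenly spaced over [domain_min, domain_max]
    """
    positions = np.linspace(domain_min, domain_max, len(lut))
    # np.interp evaluates element-wise and clamps outside the domain,
    # which mirrors typical 1D LUT behaviour
    return np.interp(rgb, positions, lut)

# Toy "contrast" LUT for illustration only (NOT the v1 tone curve)
lut = np.linspace(0.0, 1.0, 4096) ** 1.2
pixel = np.array([0.18, 0.18, 0.18])   # a neutral stays neutral
print(apply_1d_lut(pixel, lut))
```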
We will add 3D LUT matches to the entire look of ACES 1.0 once the 2.0 code is fully locked.
I would like to add other creative looks as examples, but don’t want them all to be 3D LUTs. Some can be, but I’d also like some that utilize a chain of operators in CLF to encapsulate formulaic modifiers.
The hold-up has been the lack of a 1-to-1 match between the tools and operators available in color correctors and the operators allowed in CLF. To work around this, I was going to compile some DCTL tools that provide a UI over a controlled set of functions for the technical colorists who have volunteered to make looks for us. I would ask them to limit their changes to these operators, so I can take the resulting values and push them into the CLF process nodes that accomplish the same result. Quite a few color operations can, at their core, be boiled down to matrix operators, exponent functions, and/or log functions.
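As a sketch of what I mean by "limited-control" operators, here is a hypothetical grade built only from primitives that map cleanly onto CLF process nodes (a gain, a power function about a pivot, and a 3x3 saturation matrix). The control names, defaults, and Rec.709-style luma weights are my own assumptions for illustration, not a proposed spec:

```python
import numpy as np

def limited_controls_look(rgb, exposure_stops=0.0, contrast=1.0,
                          pivot=0.18, sat=1.0):
    """Hypothetical 'limited-control' grade built only from operations that
    translate directly to CLF nodes: a scale (Matrix/Range), a power about
    a pivot (Exponent, or Log + Matrix + antilog), and a Matrix for saturation."""
    rgb = np.asarray(rgb, dtype=float)

    # Exposure: a plain gain, expressible as a CLF Matrix or Range node
    rgb = rgb * (2.0 ** exposure_stops)

    # Contrast about a pivot: pow(x / pivot, c) * pivot, sign-preserving
    rgb = np.sign(rgb) * (np.abs(rgb) / pivot) ** contrast * pivot

    # Saturation: blend toward a luma weighting via a single 3x3 matrix
    w = np.array([0.2126, 0.7152, 0.0722])          # assumed weights
    sat_mtx = (1.0 - sat) * np.outer(np.ones(3), w) + sat * np.eye(3)
    return rgb @ sat_mtx.T

print(limited_controls_look([0.5, 0.3, 0.1],
                            exposure_stops=0.5, contrast=1.1, sat=0.9))
```

The point is that once a colorist's adjustments are constrained to controls like these, the parameter values can be transcribed straight into the equivalent CLF node chain with no vendor-specific math involved.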
Obviously more complex grades can be made as well, and we can capture those in 3D LUTs just like any other look, using an Inverse Output Transform to derive the ACES → ACES' mapping.
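The baking procedure for that is roughly the following. All three transform functions here are placeholders standing in for real implementations (the ACES Output Transform, its inverse, and the colorist's look); the shaper handling and cube size are also just assumptions for the sketch:

```python
import numpy as np

# Placeholders: real versions would come from the ACES CTL/code or a vendor SDK
def output_transform(aces):          # ACES2065-1 -> display code values
    raise NotImplementedError
def inverse_output_transform(disp):  # display code values -> ACES2065-1
    raise NotImplementedError
def creative_grade(disp):            # the look, as graded on the output
    raise NotImplementedError

def bake_aces_to_aces_prime(cube_size=33,
                            shaper=lambda x: x, inv_shaper=lambda x: x):
    """Sample a (shaped) ACES grid, push each point through
    OutputTransform -> grade -> InverseOutputTransform, and collect the
    resulting ACES -> ACES' mapping as a 3D LUT table."""
    grid = np.linspace(0.0, 1.0, cube_size)
    table = np.zeros((cube_size, cube_size, cube_size, 3))
    for i, r in enumerate(grid):
        for j, g in enumerate(grid):
            for k, b in enumerate(grid):
                aces = inv_shaper(np.array([r, g, b]))   # shaper keeps HDR range addressable
                graded_disp = creative_grade(output_transform(aces))
                table[i, j, k] = shaper(inverse_output_transform(graded_disp))
    return table
```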
If you have ideas on any of this, I'd welcome participation or assistance from anybody interested in providing looks, or in helping build the tools for creating "limited-control looks" (agnostic to any vendor's under-the-hood color operator code).
(Unfortunately, right now this has been delayed on my plate since we're trying to work out issues in the 2.0 dev release before locking that at the end of the month.)
Any 1D channel-by-channel application to tristimuli will shift the gradients between the stimuli and, in turn, yield a wildly different cognitive computational inference of colour, depending on the spatiotemporal articulation.
No 1D channel-by-channel gradient operator controls "tone" any more than it controls "colour", hence my confusion.
Would you mind elaborating, because this statement sounds like absolute nonsense?
It's exactly as I said: it applies a 1D LUT to match the contrast of the tone scale from v1. It does nothing to try to match the color appearance of v1; it is intended only to match the neutral tone scale of v1. If you desaturate the images, they will match. Otherwise, you get what you get.
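A tiny numeric check of what both of us are describing, using an arbitrary curve standing in for the v1 tone scale: an identical per-channel curve keeps neutrals neutral, but it changes the ratios between channels for coloured pixels, which is why only the neutral scale is guaranteed to match.

```python
import numpy as np

curve = lambda x: x ** 0.8          # arbitrary stand-in, not the v1 curve

neutral = np.array([0.18, 0.18, 0.18])
colour  = np.array([0.40, 0.20, 0.10])

print(curve(neutral))               # still R = G = B: neutral scale preserved
print(curve(colour))                # ~[0.481, 0.276, 0.159]: channel ratios change
print(colour[0] / colour[1],        # R/G ratio before: 2.00
      curve(colour)[0] / curve(colour)[1])   # R/G ratio after: ~1.74
```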