In case it is useful for anyone here, I created a Nuke implementation of the ACES Output Transforms matching the ACES 1.2 AMPAS CTL.
There are presets for most of the ACES 1.2 view transforms, and each “module” of the output transform is customizable so you can poke around and see how each piece works.
While I had my hands deep in the innards of the CTL code, I came across a question I have been wondering about - perhaps someone here has an answer, clarification, or more information:
The older ACES 1.0.3 SDR view transforms use two steps for the tonescale / tonemapping component of the output transform: Step 1 is to transform from scene linear to OCES using the segmented_spline_c5 algorithm. Step 2 is to modify the knee and shoulder with the segmented_spline_c9 algorithm, then map to display linear.
For the newer HDR view transforms, these two steps are combined into one step: RRT+ODT, also referred to as an OutputTransform. This uses the newer Single Stage Tonescale (SSTS) algorithm.
While developing this tool I tried to match the older segmented_spline_c5 + segmented_spline_c9 tonescale using the newer SSTS algorithm, but I was not able to find parameters that exactly matched new to old. I got pretty close, but not exact. With the SSTS tonescale, the default values give you an image with a slightly softer highlight rolloff in the shoulder (which I personally actually prefer the look of, as it is aesthetically closer to many film show LUTs I have seen over the years).
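As a toy illustration of why a composed two-stage tonescale may only be approximable by a simpler single-stage family, here is a minimal Python sketch. The curves below are simple rational stand-ins invented purely for illustration; they are not the actual segmented_spline_c5/c9 or SSTS math:

```python
# Toy stand-ins (NOT the real ACES spline math) showing why a curve
# family with fewer degrees of freedom can get close to a composed
# two-stage tonescale, but usually not match it exactly.

def two_step(x):
    """Composition of two fixed curves, analogous to c5 followed by c9."""
    oces = x / (x + 0.18)                       # stand-in for segmented_spline_c5
    return oces ** 1.4 / (oces ** 1.4 + 0.02)   # stand-in for segmented_spline_c9

def one_step(x, k):
    """A single-stage curve with one free parameter, analogous to a simplified SSTS."""
    return x ** k / (x ** k + 0.05)

samples = [0.01 * i for i in range(1, 200)]
target = [two_step(x) for x in samples]

# Brute-force search for the best-fitting exponent.
best_k, best_err = None, float("inf")
for i in range(1, 400):
    k = 0.01 * i
    err = max(abs(one_step(x, k) - t) for x, t in zip(samples, target))
    if err < best_err:
        best_k, best_err = k, err

# best_err ends up small but clearly non-zero: the single-parameter
# family cannot reproduce the composed curve exactly.
```

The real curves have far more parameters on both sides, but the same principle applies: a close fit is achievable, an exact match generally is not.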
With all that info laid out, my question is twofold. First: is it possible to exactly match the older tonescale using the SSTS algorithm, if you override the SSTS parameters in just the right way? And second: is it part of the plan to use the SSTS algorithm for the SDR output transforms in the future?
Just too curious for my own good. Thanks for your help!
It is most likely not possible, and by design: you will be able to get a good fit, but not an exact one. The intent of the SSTS is to provide a model that is easier to tune. The RRT + ODT, while very elegant from a design standpoint, is hard to tweak because the two sets of curves affect each other adversely and also end up generating non-smooth derivatives.
I don’t think there is an official plan to replace the SDR ODTs with an SSTS counterpart but it is a topic that has certainly been discussed!
No. As @Thomas_Mansencal said, you can get close, but there aren’t enough degrees of freedom (by design!) to match all the weirdnesses of the SDR RRT+ODT tonescales.
I used the 1.1 HDR transform revisions to demonstrate a different, more flexible approach that makes “automatic” creation of tonescales for dynamic ranges other than those that ship with the pre-made transforms more intuitive and, more importantly, repeatable.
We couldn’t overhaul the SDR transforms to use the same algorithm without changing the “look”, which would have required a major version bump to 2.0. There are many other simplifications we want to work into 2.0, and we were not at a phase to tackle those yet, so we left the SDR transforms untouched.
This thread has a little bit of discussion around the SSTS approach when I was first proposing it to the HDR ODT Virtual Working Group:
I think yes, the SSTS in some form will be used. It almost certainly won’t use the current parameters, but I think the philosophy is more robust than the separate RRT+ODT tonescales. The SSTS is much more intuitive and flexible without allowing the user to “create their own shape”. By that, I mean it’s a technical rendering curve, not a creative one: if you put in the same display parameters, you get an Output Transform with an identical tonescale every time. Figuring all this out will be part of the ACES 2.0 Output Transform work, which I expect will be forming soon. If you have an interest in this, I suggest you look out for that announcement and join in the discussion if you want. It is open to all!
Thank you very much @Thomas_Mansencal and @sdyer for the helpful explanations. It helped me understand this more clearly. I’ll keep an eye out for the ACES 2.0 Output Transform work. I’m curious to see how this develops.
For some reason I get a BlinkScript compile error on my system, and need to switch off “Use GPU if available” for every Blink node. It might be useful to expose that as a global switch. I have also found that for per-pixel operations, the overhead of transferring to the GPU and back is not worth it, and BlinkScript is faster on the CPU.
I found that the ACES_103_OutputTransform had an artefact. I did not see it with normal photographic images, but when I tested with the Cornell Spheres image created by @Thomas_Mansencal, certain dark saturated colours clamped to black when dark to dim surround was enabled:
Some investigation suggests that this occurs when the calculated luminance value going into the dark to dim power function is negative. I was able to remove the artefact by disabling your ClampMin prior to the power function, and changing the expression to:
r > 0 ? pow(r, DIM_SURROUND_GAMMA) : -pow(-r, DIM_SURROUND_GAMMA)
This makes the power function mirror about zero rather than clamping negatives. But this does not quite match the result of ctlrender, since the CTL does clamp negatives, but does not create this artefact.
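For reference, the difference between the mirrored and the clamped behaviour can be sketched in Python. The DIM_SURROUND_GAMMA constant is the value used in the ACES CTL:

```python
DIM_SURROUND_GAMMA = 0.9811  # constant from the ACES CTL

def mirrored_pow(r, g=DIM_SURROUND_GAMMA):
    """Mirror the power function about zero, preserving the sign of r."""
    return r ** g if r > 0 else -((-r) ** g)

def clamped_pow(r, g=DIM_SURROUND_GAMMA):
    """CTL-style behaviour: clamp negatives to zero before the power."""
    return max(r, 0.0) ** g

# A small negative value survives (still negative) with the mirrored
# form, but collapses to zero with the clamp.
print(mirrored_pow(-0.01))  # small negative value
print(clamped_pow(-0.01))   # 0.0
```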
I just checked, and @alexfry’s Pure Nuke ACES does not appear to suffer from this issue. Although a quick inspection suggests that it applies the surround gamma to XYZ, rather than converting to Yxy and back, applying the surround gamma only to Y.
Thanks a lot for checking out my tools. I appreciate the bug report, and you are indeed correct that there is an issue. It turns out that in the ACES_103_OutputTransform node (and also in both InvOutputTransform nodes), I had messed up the order of operations: the dark_to_dim gamma adjustment was being applied from AP1 instead of from XYZ. This explains why there were negative values being clamped that shouldn’t have been negative in the first place.
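For anyone following along, the intended order of operations described above (convert to XYZ first, then apply the surround gamma to luminance only via xyY) can be sketched like this in Python. The xyY conversion helpers are the standard formulas, and the input is assumed to already be CIE XYZ, i.e. converted from AP1 beforehand:

```python
DIM_SURROUND_GAMMA = 0.9811  # constant from the ACES CTL

def xyz_to_xyy(X, Y, Z):
    """CIE XYZ -> xyY (chromaticity x, y plus luminance Y)."""
    s = X + Y + Z
    return X / s, Y / s, Y

def xyy_to_xyz(x, y, Y):
    """xyY -> CIE XYZ."""
    return x * Y / y, Y, (1.0 - x - y) * Y / y

def dark_to_dim(X, Y, Z):
    """Apply the surround gamma to luminance only; chromaticity is untouched."""
    x, y, lum = xyz_to_xyy(X, Y, Z)
    lum = max(lum, 0.0) ** DIM_SURROUND_GAMMA  # gamma on Y only
    return xyy_to_xyz(x, y, lum)
```

Because only Y is modified, the x, y chromaticity coordinates of the output match the input exactly; applying the gamma before the AP1-to-XYZ conversion would not have that property.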
I’ll admit I spent a lot more QC effort on the ACES_OutputTransform node, but it’s still a big oversight!
I’ve just pushed some changes which should fix this issue.
I’ve also added a couple of checkboxes for “use gpu” and “vectorize” on the BlinkScript nodes. On my machine it is waaay faster with the GPU enabled, but I have seen suspicious and highly variable performance with different GPUs, so I agree it’s a good idea to expose that option to the user in case of trouble.
Thanks again for the feedback and don’t hesitate if you see any other weirdness or have any other suggestions!