Right now we have feedback and discussion about the ACES 2.0 OT Candidate testing spread over a few different threads.
I thought it might make sense to make a separate thread that focuses specifically on the testing package itself, and related issues.
As things stand currently, I’ve got a WIP testing repo here:
(This is temporary, and will be migrated to a more official home at some point)
It contains a Nuke script with the 3 test candidates, and 3 sets of LUTs exported from it, targeting:
- Resolve
- Baselight
- OCIO
There are 3 candidate transforms, referred to as A, B and C.
Candidate A
This represents a minimal departure from the existing ACES 1.2 transforms.
Some minor colour tweaks collectively known as the RRT Sweeteners have been removed.
The c9 (SDR) and SSTS (HDR) tonescales have both been replaced with the Michaelis-Menten Spring Dual-Contrast (MMSDC) curve, still applied as a per-channel RGB lookup in the AP1 rendering space. Mapping down to the final display gamut is a simple matrix, as used in the existing ACES Display Transforms.
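For anyone who finds code easier to parse than prose, here’s a minimal sketch of that structure. The curve is a generic Michaelis-Menten-style placeholder, not the actual MMSDC maths, and the AP1 → Rec.709 matrix is just the usual approximate one, included purely for illustration:

```python
# Minimal sketch of the Candidate A structure (per-channel tonescale in AP1,
# then a 3x3 matrix to the display primaries). The curve below is a generic
# Michaelis-Menten-style placeholder, not the published MMSDC formulation.
import numpy as np

def placeholder_tonescale(x, gain=2.4, peak=1.0):
    # Hypothetical compressive curve of the y = m*x / (x + s) family.
    return peak * (gain * x) / (gain * x + 1.0)

# Approximate AP1 (ACEScg) -> linear Rec.709 matrix (D60 -> D65, Bradford).
AP1_TO_REC709 = np.array([
    [ 1.70505, -0.62179, -0.08326],
    [-0.13026,  1.14080, -0.01055],
    [-0.02400, -0.12897,  1.15297],
])

def candidate_a_like(rgb_ap1):
    rendered = placeholder_tonescale(np.asarray(rgb_ap1, dtype=np.float64))
    return rendered @ AP1_TO_REC709.T  # display-linear; EOTF encoding not shown
```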
Candidate B
Is @jedsmith’s OpenDRT version 0.1.2, from the jedypod/open-display-transform repo on GitHub (“Open Display Transform is a collection of tools and experiments for rendering wide-gamut scene-linear data into an image for an SDR or HDR display device”).
Candidate C
Is the ZCAM DRT, developed by @matthias.scharfenber and myself. For all the gory details, see the https://community.acescentral.com/t/zcam-for-nuke/ thread
Issues for discussion
Selection of Candidate A
There seems to be some growing momentum from @ChrisBrejon, @sdyer and others to swap out Candidate A, replacing the warmed-over ACES 1.2 transform (which uses AP1 as its rendering space) with something that has an improved set of rendering primaries. Potentially @jedsmith’s rgbDT could be a drop-in replacement here, or something else. Hopefully we can hash out a plan here pretty quickly.
Input shaper issues with Candidates B & C
@meleshkevich noticed he wasn’t able to hit the corners of the display gamut with either OpenDRT or the ZCAM DRT using the LUTs provided in the WIP testing repo.
This basically comes down to the shaper space I’ve used to prepare the data for the baked 3D LUTs (the ACEScct curve, with AP0 primaries) not covering all the values needed to hit the extreme corners of the target display (taking into account highlight desat and target gamut compression).
There are two potential ways we could look at solving this:
- Replacing the ACEScct curve with another 1D curve that captures a much larger range of negative values in AP0 or AP1 (see the sketch after this list)
- Coming up with a new set of shaper primaries that place all needed input values in the positive range before hitting the 1D shaper.
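To make the first option a bit more concrete, here’s a rough sketch of what an offset log2 shaper covering a larger negative range might look like (the range and offset constants are purely illustrative, not a proposal):

```python
# Rough sketch of option 1: an offset log2 shaper that encodes a larger
# negative range than ACEScct before the 3D LUT. The range and offset
# constants here are illustrative only, not a proposal.
import numpy as np

LIN_MIN, LIN_MAX = -1.0, 128.0   # hypothetical scene-linear coverage
OFFSET = 2.0                     # shifts negatives into the log curve's domain

_LO = np.log2(LIN_MIN + OFFSET)
_HI = np.log2(LIN_MAX + OFFSET)

def shaper_forward(x):
    """Map [LIN_MIN, LIN_MAX] scene-linear values to [0, 1] shaper space."""
    return (np.log2(np.clip(x, LIN_MIN, LIN_MAX) + OFFSET) - _LO) / (_HI - _LO)

def shaper_inverse(y):
    """Invert the shaper back to scene-linear."""
    return np.exp2(np.asarray(y) * (_HI - _LO) + _LO) - OFFSET
```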
Currently I’m experimenting with different sets of primaries. The plots below show an input cube run through the inverse of the P3 D65 SDR version of each transform, with a made-up set of primaries overlaid that seems like a decent minimum fit. Ideally I’d like to use the same set of primaries for both, but as you can see, they land their inverted values in different areas.
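For anyone who wants to poke at this themselves, the check is straightforward: convert the inverted scene-linear values (nominally AP0) into a candidate set of shaper primaries and see whether anything lands negative. The matrix below is just a white-preserving placeholder, not one of the sets I’m actually plotting:

```python
# Sketch of the coverage test: convert inverted AP0 values into a candidate
# set of shaper primaries and check that nothing lands negative. The matrix
# here is a hypothetical placeholder, not an actual candidate set.
import numpy as np

AP0_TO_CANDIDATE = np.array([
    [0.90, 0.05, 0.05],
    [0.04, 0.92, 0.04],
    [0.02, 0.03, 0.95],
])

def fits_in_candidate_gamut(inverted_ap0_rgb, tol=1e-6):
    """True if every inverted value is representable without negatives."""
    candidate = np.asarray(inverted_ap0_rgb) @ AP0_TO_CANDIDATE.T
    return bool(np.all(candidate >= -tol))
```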
Based on various discussions in the meetings, I’m not worrying about inverting values from a Rec2020 display, or HDR displays, as hitting the extreme corners of those seems (for now) to be less essential.
I’m curious what other people’s thoughts are about this. Even if it’s not ideal, it feels like whatever transform we go with, it needs to be implementable as a 3D LUT in a reasonable way.
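For reference, this is the chain the baked LUTs have to fit into: scene-linear, then the 1D shaper, then the 3D LUT, then display code values. A rough sketch, assuming an (N, N, N, 3) table sampled through the candidate transform and a shaper_forward function like the one sketched above:

```python
# Rough sketch of the baked chain: scene-linear -> 1D shaper -> 3D LUT ->
# display code values. Assumes lut3d is an (N, N, N, 3) table sampled through
# the candidate transform, and shaper_forward is a 1D shaper as above.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def apply_baked_transform(rgb_linear, lut3d, shaper_forward):
    n = lut3d.shape[0]
    grid = np.linspace(0.0, 1.0, n)
    shaped = np.clip(shaper_forward(np.asarray(rgb_linear)), 0.0, 1.0)
    # Trilinear lookup, one output channel at a time; input shape (..., 3).
    return np.stack(
        [RegularGridInterpolator((grid, grid, grid), lut3d[..., c])(shaped)
         for c in range(3)],
        axis=-1,
    )
```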
P3 D65 SDR Display values through the inverse ZCAM DRT
P3 D65 SDR Display values through the inverse OpenDRT
There is perhaps a related discussion to be had here about how realistic it is to expect DCC applications to actually allow you to land values out in these extreme positions. Whilst both transforms allow you to reach the extreme corners of the display gamut in theory, that might not be possible in most real-world scenarios.