ACES Output Transforms Architecture VWG Meeting #24 - August 18, 2021

Interesting meeting to watch. If I understood correctly, nine months have passed (it has been 24 meetings, time flies!) and we are halfway through the timeline?

My hope when this VWG started was that if we could come up with a better model for the Output Transforms, it would benefit many CG artists out there (using Resolve, Unreal, Unity, Maya, OCIO…). So I have always seen this VWG as a great opportunity to provide a more robust standard for users who are not experts in Color Management.

If I try to do a quick summary of these past nine months, I see three “big” challenges ahead of us:

  1. We have the flexible architecture topic brought up by @daniele . I personally support this idea because it will not change anything for non-expert users, who will be using the ACES Output Transforms by default, while expert users will most likely have their daily work simplified. I don’t know if this idea is still being discussed or if it has been discarded. I know it has been mentioned in a couple of TAC meetings (the ones from February 2021 and May 2021). Daniele’s document was made public a couple of hours after the latest TAC meeting. So, as @Thomas_Mansencal put it back in May:

I only wish it came last week as some of those points were discussed during TAC this morning but not to the extent they should have been. It is certainly worth re-raising to Arch. TAC specifically. There are good points made and it would be a shame not doing it.

The next TAC meeting will be on the 22nd of September, so maybe that is a great opportunity to discuss this topic properly.

I was also quite pleased to hear @SeanCooper point at the Miro board again as a place to brainstorm more about it. Now that I look back on it, I’d be interested to hear thoughts about Framework III or IV and whether they’re really “impossible” to achieve. I have been paying more and more attention to our directors’ notes lately, and tinting highlights is a much more common request than I used to think. So I wonder if “image-referred” grading isn’t actually a sane thing to do here.

  2. The Output Transform itself would be our second challenge. I know the tonescale has been discussed pretty much extensively, and I sometimes wonder if this list of modules by @jedsmith has been given the attention it deserves (I have tried to sketch how these modules might chain together right after the list):
  • Input Conversion - Convert input gamut into the rendering colorspace, in which we will “render” the image from scene-referred to display-referred. The rendering colorspace might be an RGB space or an LMS space, or something else entirely.
  • Gamut Mapping - We may want some handling of chromaticities which lie outside of our target output gamut.
  • Whitepoint - We may want to creatively change the whitepoint of our output in order to create warmer or cooler colors in our output image regardless of what display device whitepoint we are outputting to.
  • Rendering Transform - There are many valid approaches for this. ACES uses a per-channel approach where the tonescale curve is applied directly to the RGB input channels. Since I’m working on a chromaticity-preserving display rendering transform, I’ll outline what might go into that.
    • Separate color and grey using some norm.
    • Apply some tonescale to grey to convert from scene-referred to display-referred.
    • Apply some model for surround compensation and flare compensation.
    • Apply some chroma compression to highlights so that very bright saturated colors are nudged towards the achromatic axis as their luminance increases towards the top of the display gamut volume.
  • Display Gamut Conversion - Convert from our rendering colorspace into our display gamut. If you’re looking for a tool to do arbitrary gamut and whitepoint conversions in Nuke you could check out GamutConvert.nk.
  • Inverse EOTF - We need to apply the inverse of the Electro-Optical Transfer Function that the display will apply to the image before emitting light from your display.
  • Clamp - And lastly, maybe we want to clamp the output into a 0-1 range, to simulate what is going to happen when we write the image out into an integer data format.
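
To make the discussion a bit more concrete, here is the very rough sketch I mentioned of how I picture these modules chaining together, written in Python/NumPy. Everything in it is a placeholder on my part: the identity matrices, the simple x / (x + 0.18) tonescale, the max(R,G,B) norm, the power-law inverse EOTF and the chroma-compression term are purely illustrative assumptions, not the actual ACES or OpenDRT maths.

```python
import numpy as np

# --- Input Conversion --------------------------------------------------------
# Placeholder 3x3 matrix from the input gamut (e.g. ACES2065-1) to the
# rendering colorspace; identity is used here for illustration only.
INPUT_TO_RENDERING = np.identity(3)

def input_conversion(rgb):
    return rgb @ INPUT_TO_RENDERING.T

# --- Gamut Mapping -----------------------------------------------------------
def gamut_map(rgb):
    # Placeholder: a real gamut mapper would compress chromaticities that lie
    # outside the target gamut towards its boundary.
    return rgb

# --- Whitepoint --------------------------------------------------------------
def creative_whitepoint(rgb, gain=(1.0, 1.0, 1.0)):
    # Simple per-channel gain standing in for a creative warm/cool white shift,
    # independent of the display device whitepoint.
    return rgb * np.asarray(gain)

# --- Rendering Transform (chromaticity-preserving sketch) ---------------------
def tonescale(x):
    # Illustrative scene-to-display curve, NOT the tonescale under discussion.
    return x / (x + 0.18)

def render(rgb, eps=1e-10):
    # 1. Separate colour and grey using some norm (max(R,G,B) here).
    norm = np.maximum(rgb.max(axis=-1, keepdims=True), eps)
    # 2. Tonescale the grey from scene-referred to display-referred.
    ts = tonescale(norm)
    # 3. (Surround and flare compensation would be applied around here.)
    # 4. Re-apply the original RGB ratios so chromaticity is preserved.
    out = rgb / norm * ts
    # 5. Chroma compression: nudge values towards the achromatic axis as the
    #    tonescaled grey approaches the top of the display range (illustrative).
    return out + (ts - out) * ts**4

# --- Display Gamut Conversion --------------------------------------------------
RENDERING_TO_DISPLAY = np.identity(3)  # placeholder 3x3 matrix

def display_gamut_conversion(rgb):
    return rgb @ RENDERING_TO_DISPLAY.T

# --- Inverse EOTF --------------------------------------------------------------
def inverse_eotf(rgb, gamma=2.4):
    # Simple power-law example; a real display EOTF (e.g. PQ) would differ.
    return np.maximum(rgb, 0.0) ** (1.0 / gamma)

# --- Clamp ---------------------------------------------------------------------
def clamp(rgb):
    return np.clip(rgb, 0.0, 1.0)

def output_transform(scene_rgb):
    rgb = np.asarray(scene_rgb, dtype=float)
    rgb = input_conversion(rgb)
    rgb = gamut_map(rgb)
    rgb = creative_whitepoint(rgb)
    rgb = render(rgb)
    rgb = display_gamut_conversion(rgb)
    rgb = inverse_eotf(rgb)
    return clamp(rgb)

# Example: push scene-referred mid-grey and a bright saturated red through.
print(output_transform([0.18, 0.18, 0.18]))
print(output_transform([4.0, 0.1, 0.1]))
```

This is not meant to be correct maths, just a way to make sure we agree on the order of operations and on which modules are optional.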

I’d be curious to know if everyone agrees on this list and whether it’s missing anything. Maybe it is easier to focus on the “big picture” first, and then we can go through each module and make sure we find a proper answer for each of them. I guess I’m just interested to know other people’s opinions on what “ingredients” would be needed for an Output Transform.

Maybe a proper demo of the latest version of OpenDRT would be useful as well for one of our next meetings. I have been comparing final renders of our next movies between ACES and OpenDRT, and I was super pleased with the range of values that OpenDRT gives. If we put aside any aesthetic preferences (is that even possible?), I was really impressed by the values in the skies and in any scene lit by the sun. The hue paths to white offer a much better coverage of the display gamut (skies are not cyan, wooden floors are not yellow: two of our infamous “notorious six”!). Really impressive; it felt like the first time I was seeing these colours on a display. Just beautiful…

  3. Finally, maybe a “minor” topic would be whether or not to use “Looks/LMTs” in our deliverable. The topic has been addressed a couple of times here and here. I’d be curious to know if a default ACES Look/LMT is currently an option, whether you think it is a good idea, and what it should do (see the small sketch below). I reckon the “Look topic” is related to the first two topics mentioned above, so it is all intertwined…
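
For what it’s worth, the way I picture it: with the current ACES definition, a Look/LMT is a scene-referred ACES2065-1-to-ACES2065-1 transform sitting upstream of the Output Transform, so a “default Look” would simply be one more stage before everything sketched above. A minimal sketch (the CDL-style operation and its values are made-up placeholders):

```python
import numpy as np

def default_look(aces_rgb, slope=1.05, offset=0.0, power=1.02):
    # Hypothetical default Look: a simple ASC CDL-style slope/offset/power,
    # applied to scene-referred ACES2065-1 and returning ACES2065-1.
    aces_rgb = np.asarray(aces_rgb, dtype=float)
    return np.maximum(aces_rgb * slope + offset, 0.0) ** power

# The Look slots in before the Output Transform sketched earlier:
# display_rgb = output_transform(default_look(scene_aces_rgb))
```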

Thanks for reading. Hopefully I haven’t been over-simplifying things and have done justice to all the challenges this group is facing. I won’t go into HDR/SDR or mid-gray behaviour here, even though those are important too, of course.

Regards,
Chris
