ACES Output Transforms Architecture VWG Meeting #24 - August 18, 2021

Notice of Meeting Cancellation
Please note that we will not be meeting this Wednesday, August 3rd (tomorrow) due to some conflicts with ASWF Open Source Days and because there is not enough new work completed to conduct a productive meeting.

We encourage everyone to continue exploring their particular work items and use ACESCentral to communicate until the next meeting in two weeks.

Thanks, Scott!

Was meeting #24 announced on AC? I cannot find the link anywhere on the forum. I am currently watching the recording:

https://transcripts.gotomeeting.com/#/s/a19b241de073b781b603b442bd7fb711dff11af23452c075cf03f212001d2a22

Regards,
Chris

Interesting meeting to watch. If I understood correctly, 9 months have passed (it has been 24 meetings, time flies!) and we are halfway through the timeline?

My hope when this VWG started was that if we could come up with a better model for the Output Transforms, it would benefit many CG artists out there (using Resolve, Unreal, Unity, Maya, OCIO…). So I have always seen this VWG as a great opportunity to provide a more robust standard for non-expert users in Color Management.

If I try to do a quick summary of these past 9 months, I see three “big” challenges ahead of us:

  1. We have the flexible architecture topic brought by @daniele . I personally support this idea because it will not change anything for non-expert users, who will be using the ACES Output Transforms by default, while expert users will most likely find their daily work simplified. I don’t know if this idea is still being discussed or if it has been discarded. I know it has been mentioned in a couple of TAC meetings (the ones from February 2021 and May 2021). Daniele’s document was made public a couple of hours after the latest TAC meeting. So as @Thomas_Mansencal put it back in May:

I only wish it came last week as some of those points were discussed during TAC this morning but not to the extent they should have been. It is certainly worth re-raising to Arch. TAC specifically. There are good points made and it would be a shame not doing it.

The next TAC meeting will be on the 22nd of September, so maybe that is a great opportunity to discuss this topic properly.

I was also quite pleased to hear @SeanCooper point at the Miro board again to brainstorm more about it. Now that I look back on it, I’d be interested to hear thoughts about Framework III or IV and whether they’re really “impossible” to achieve. I have been paying more and more attention to our directors’ notes lately, and tinting highlights is a much more common request than I used to think. So I wonder if “image-referred” grading isn’t actually a sane thing to do here.

  2. The Output Transform itself would be our second challenge. I know the tonescale has been discussed pretty extensively, and I sometimes wonder if this list of modules by @jedsmith has been given the attention it deserves:
  • Input Conversion - Convert input gamut into the rendering colorspace, in which we will “render” the image from scene-referred to display-referred . The rendering colorspace might be an RGB space or an LMS space, or something else entirely.
  • Gamut Mapping - We may want some handling of chromaticities which lie outside of our target output gamut.
  • Whitepoint - We may want to creatively change the whitepoint of our output in order to create warmer or cooler colors in our output image regardless of what display device whitepoint we are outputting to.
  • Rendering Transform - There are many valid approaches for this. ACES uses a per-channel approach where the tonescale curve is applied directly to the RGB input channels. Since I’m working on a chromaticity-preserving display rendering transform, I’ll outline what might go into that.
    • Separate color and grey using some norm.
    • Apply some tonescale to grey to convert from scene-referred to display-referred.
    • Apply some model for surround compensation and flare compensation.
    • Apply some chroma compression to highlights so that very bright saturated colors are nudged towards the achromatic axis as their luminance increases towards the top of the display gamut volume.
  • Display Gamut Conversion - Convert from our rendering colorspace into our display gamut. If you’re looking for a tool to do arbitrary gamut and whitepoint conversions in Nuke, you could check out GamutConvert.nk .
  • Inverse EOTF - We need to apply the inverse of the Electro-Optical Transfer Function that the display will apply to the image before emitting light from your display.
  • Clamp - And lastly, maybe we want to clamp the output into a 0-1 range, to simulate what is going to happen when we write the image out into an integer data format.

I’d be curious to know if everyone agrees on this list and whether it is missing anything. Maybe it is easier to focus on the “big picture” first, and then we can go through each module and make sure we find a proper answer for each of them. I guess I’m just interested to know other people’s opinions on what “ingredients” would be needed for an Output Transform.
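For illustration only, here is a minimal, runnable sketch of how those modules could chain together, in the order listed. This is not OpenDRT nor any ACES candidate: every matrix, curve and constant below is a placeholder assumption, and the gamut mapping, creative whitepoint and surround/flare modules are only marked as comments.

```python
import numpy as np

# Placeholder matrices: in practice these come from the chosen rendering
# colorspace and display gamut (identity here just to keep the sketch runnable).
INPUT_TO_RENDERING = np.eye(3)     # Input Conversion
RENDERING_TO_DISPLAY = np.eye(3)   # Display Gamut Conversion

def norm(rgb):
    """Separate 'grey' from colour with a simple max(R, G, B) norm."""
    return np.max(rgb, axis=-1, keepdims=True)

def tonescale(x, peak=1.0, contrast=1.2):
    """Toy scene-to-display tonescale pinned around 0.18 mid-grey."""
    xc = np.power(np.maximum(x, 0.0), contrast)
    return peak * xc / (xc + 0.18 ** contrast)

def chroma_compress(rgb, grey, peak=1.0):
    """Crude placeholder: blend bright values towards the achromatic axis
    as grey approaches the top of the display volume."""
    f = np.clip(grey / peak, 0.0, 1.0) ** 4
    return rgb * (1.0 - f) + grey * f

def inverse_eotf(rgb, gamma=2.4):
    """Inverse of a simple gamma-2.4 display EOTF (a stand-in; HDR would
    use e.g. inverse PQ instead)."""
    return np.power(np.maximum(rgb, 0.0), 1.0 / gamma)

def render(scene_rgb):
    rgb = scene_rgb @ INPUT_TO_RENDERING.T               # Input Conversion
    # Gamut Mapping and a creative Whitepoint would slot in here.
    grey_in = norm(rgb)                                  # separate colour and grey
    grey_out = tonescale(grey_in)                        # scene- to display-referred
    # Surround and flare compensation would modify grey_out here.
    rgb = rgb * (grey_out / np.maximum(grey_in, 1e-6))   # rebuild colour at new grey
    rgb = chroma_compress(rgb, grey_out)                 # highlight chroma compression
    rgb = rgb @ RENDERING_TO_DISPLAY.T                   # Display Gamut Conversion
    rgb = inverse_eotf(rgb)                              # Inverse EOTF
    return np.clip(rgb, 0.0, 1.0)                        # Clamp for integer formats
```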

Maybe a proper demo of the latest version of OpenDRT would be worthwhile as well for one of our next meetings. I have been comparing final renders of our next movies between ACES and OpenDRT, and I was super pleased with the range of values that OpenDRT gives. If we put aside any aesthetic preferences (is that even possible?), I was super impressed by the range of values in the skies and really in any scene lit by the sun. Hue paths to white offer a much better coverage of the display gamut (skies are not cyan, wooden floors are not yellow: two of our notorious six!). Super impressive really, it’s as if I was seeing these colours on a display for the first time. Just beautiful…

  3. Finally, maybe a “minor” topic would be whether or not to use “Looks/LMT” in our deliverable. The topic has been addressed a couple of times here and here. I’d be curious to know if a default ACES Look/LMT is currently an option, whether you think it is a good idea, and what it should do. I reckon the “Look topic” is related to the first two topics mentioned above. So it is all intertwined…

Thanks for reading. Hopefully I haven’t been over-simplifying things and have done justice to all the challenges this group is facing. I won’t mention HDR/SDR here or mid-gray behaviour, even though those are important, of course.

Regards,
Chris


That looks like a pretty good summary @ChrisBrejon . On the rendering transform side, since it looks like the most likely candidate for ACES 2 is OpenDRT, we implemented it in BG:3 and it was considered mostly OK, with some caveats. What we will end up shipping in our next patch is a modified version of it that addresses artists’ concerns and performance issues. I will of course contribute all our changes back to the VWG. If you have an alternative chromaticity-preserving approach though, I would be interested to discuss it with you, as the main issues we have are matching reds and blues between SDR and HDR OpenDRT (and also between 300 nit, 1000 nit and 4000 nit HDR) without a display-referred trim pass.

For our next meeting, I’ll try to get some challenging content in which the ACES colour compensations have been removed and the grading redone, rendered with different settings with both the original and the modified OpenDRT. I won’t be able to stream it though, as it will be PQ-encoded content (and one sRGB video).

It definitely is an option, although I would only put in the extra contrast and saturation. Try applying an s-curve-shaped contrast curve to log luminance, then applying over-saturation in linear RGB space, followed by a hue correction in a decorrelated perceptual space like ICtCp or OKLab in order to restore the pre-saturation hue (look at the perceptual path in OpenDRT). Alternatively, you can just use the Resolve 17 HDR controls, which work like I just described. Don’t use the primary grade controls.
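As a hedged illustration of those three steps, here is a minimal sketch. The constants are assumptions picked for readability, the contrast is applied per channel in log space for simplicity (a true s-curve on log luminance would also roll off the ends), and `to_lab` / `from_lab` stand in for a real OKLab or ICtCp conversion that is not included here.

```python
import numpy as np

def log_contrast(rgb, pivot=0.18, contrast=1.3):
    """Step 1: contrast around mid-grey in log2 space (a stand-in for a
    true s-curve, which would additionally roll off shadows/highlights)."""
    log = np.log2(np.maximum(rgb, 1e-6) / pivot)
    return pivot * np.exp2(log * contrast)

def saturate_linear(rgb, sat=1.2):
    """Step 2: over-saturation in linear RGB, pivoting on Rec.709 luminance."""
    weights = np.array([0.2126, 0.7152, 0.0722])
    luma = np.sum(rgb * weights, axis=-1, keepdims=True)
    return luma + (rgb - luma) * sat

def restore_hue(graded, reference, to_lab, from_lab):
    """Step 3: keep the graded lightness and chroma but restore the
    reference hue, working in a decorrelated perceptual space."""
    g, r = to_lab(graded), to_lab(reference)
    chroma = np.hypot(g[..., 1], g[..., 2])          # graded chroma
    hue = np.arctan2(r[..., 2], r[..., 1])           # hue angle of the reference
    ab = np.stack([chroma * np.cos(hue), chroma * np.sin(hue)], axis=-1)
    return from_lab(np.concatenate([g[..., :1], ab], axis=-1))

# Chained: restore_hue(saturate_linear(log_contrast(rgb)), rgb, to_lab, from_lab)
```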

[Note: Updated the thread title to the date of the meeting being discussed.]

Here is the meeting summary and recording / transcript.

I just read the transcript, since I thought the meeting was today and didn’t realize I’d already missed it. I’m going to add that soft-proofing on different kinds of monitors is a very valid use case, since we do this all the time here at Larian. We start by making the source look good for the HDR 1000 target, then soft-proof by moving the window to the SDR monitor, which triggers a code path in our engine that switches both the DRT and the swapchain colourspace. We also soft-proof by moving the window from a more capable HDR monitor to a less capable one (although we don’t re-optimize the target DRT in that case). There’s also the option to simply disable Windows Advanced Color Mode if we want to compare SDR colour rendering to HDR colour rendering on the same monitor (it turns out that, at the same luminance level, SDR on a lot of monitors has a saturated, dark yellow feel while HDR feels more blue, bright and desaturated).

Thanks @sdyer for the notes and renaming of the thread.
Thanks @jmgilbert for your insight. I think your feedback on OpenDRT will be very valuable, since you are already using it in production. I’ll DM you about another chromaticity-preserving approach which may interest you.

On my side, I have been able to do a render that should illustrate the range of values I was mentioning earlier. This test was inspired by this thread on AC. I had been looking for a while to render something “iconic” for the VWG (hence my Lego renders, for instance). And reading the notes on Gamut Compression, I thought: what is more iconic than a can of Coke?

So I did a bit of research to get the Coca-Cola red values, converted them to “linear sRGB” and ended up with an EXR texture of (0.9047, 0, 0); a quick sanity check of that decode is sketched after the test steps below. I then lit the scene with an HDRI (the Treasure Island one) and a distant light. Here are the results:

Linear-sRGB render, displayed in sRGB (ACES)

The same linear-sRGB render, displayed in sRGB (OpenDRT)

I was pretty pleased with this test because it matches exactly what I saw at the studio a couple of weeks ago, especially the red bounce on the white ball. To avoid any confusion, here are the exact steps of this test:

  1. A single render was done in the BT.709 footprint.
  2. It was then taken to the wider-gamut working space of ACEScg (in Nuke).
  3. It was rendered from ACEScg through each transform (ACES and OpenDRT) respectively.
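For reference, here is a minimal sketch of the decode behind that texture value, assuming the commonly quoted #F40009 hex for Coca-Cola red (an assumption on my part): its red channel happens to decode to exactly the 0.9047 quoted above.

```python
def srgb_to_linear(c8):
    """Piecewise sRGB EOTF decode for a single 8-bit channel value."""
    c = c8 / 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

print([round(srgb_to_linear(c), 4) for c in (0xF4, 0x00, 0x09)])
# -> [0.9047, 0.0, 0.0027]; the tiny blue component was presumably
#    rounded away in the (0.9047, 0, 0) texture.
```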

Regards,
Chris