HDR10+ tonemap and ACES tonemap

Hi!

Since the ACES tone scale can be calculated from a peakLuminance parameter (aces-output/d65/rec2100/Output.Academy.Rec2100-D65_1000nit_in_Rec2100-D65_ST2084.ctl at dev · ampas/aces-output · GitHub), if I retrieve the peak luminance from DXGI_OUTPUT_DESC1 on Windows, I could theoretically adapt the tone scale to each user's display. That should account for differences in maximum luminance across user environments (provided the artists agree with this approach).
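
As a concrete sketch of what I mean (my own untested code, not from any ACES repo), the query could look roughly like this; a real implementation should match the IDXGIOutput to the monitor the window actually sits on, rather than taking the first one found:

```cpp
// Sketch: read a display's reported peak luminance via DXGI.
// Link against dxgi.lib. Error handling is kept minimal for brevity.
#include <dxgi1_6.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

float QueryPeakLuminanceNits()
{
    ComPtr<IDXGIFactory1> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return 100.0f; // conservative SDR-ish fallback

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT a = 0; factory->EnumAdapters1(a, &adapter) != DXGI_ERROR_NOT_FOUND; ++a)
    {
        ComPtr<IDXGIOutput> output;
        for (UINT o = 0; adapter->EnumOutputs(o, &output) != DXGI_ERROR_NOT_FOUND; ++o)
        {
            ComPtr<IDXGIOutput6> output6;
            DXGI_OUTPUT_DESC1 desc{};
            if (SUCCEEDED(output.As(&output6)) && SUCCEEDED(output6->GetDesc1(&desc)))
                return desc.MaxLuminance; // panel's reported peak, in nits
        }
    }
    return 100.0f;
}
```

The returned value could then be fed straight into the peakLuminance parameter of the tone scale.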

SMPTE ST 2094-40 seems to define its own tone mapping, and its metadata carries display-luminance-related values.

My question is: is HDR10+ simply an additional layer that operates independently of ACES, or could it potentially conflict with the ACES tone mapping process?

I’m not sure what the tone mapping in Section 8.7.4 of SMPTE ST 2094-40 is for, and I don’t quite understand their “scene adaptive” processing.
(Whatever their scene adaptation is, I assume proper exposure adjustment, color grading, and ACES tone mapping should produce the same sort of effect.)

I suspect SMPTE ST 2094-40 mostly exists because display capabilities today are far from ideal. Maybe I can just ignore it?

My apologies if this is more of an HDR10+ question, but my main concern is its interaction with an ACES workflow.

It is an additional layer, yes, and unfortunately it’s mostly out of your control.

ACES is for creating your masters, while HDR10+ is for an individual consumer display to adapt based on its current conditions & capabilities. If a display can show your grade as is, then HDR10+ does nothing. Same as the other metadata standards.

While they do both involve (the very loaded term) “tonemapping” to some extent, most of the heavy picture formation occurs with the ACES DRT, as you see it on a reference display, while any consumer-display HDR tonemapping is typically much simpler, taming luminance only when needed.
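
To make “taming luminance only when needed” concrete, here’s a rough illustration of the kind of soft-knee highlight roll-off a display might apply. This is my own toy curve, not the actual ST 2094-40 Bezier tone mapping; it just shows the idea of passing most of the image through untouched and compressing only the top end:

```cpp
// Illustrative soft-knee roll-off: luminance below the knee is untouched,
// everything above is compressed asymptotically toward the display peak.
float RollOffNits(float nits, float displayPeak, float kneeFraction = 0.75f)
{
    const float knee = kneeFraction * displayPeak;
    if (nits <= knee)
        return nits; // within comfortable capability: leave alone

    // Rational compression of [knee, inf) into [knee, displayPeak),
    // with a continuous slope of 1 at the knee.
    const float range = displayPeak - knee;
    const float x = nits - knee;
    return knee + range * (x / (x + range));
}
```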

It is still worth QCing the analysis of things like HDR10+ and Dolby Vision to get an idea of what they might attempt on a less capable display, and tweaking things where possible, though ultimately you’re at the whims of each manufacturer and how they implement it (if they implement it at all).

If an example scene had a MaxFALL of 150 nits and a MaxCLL of 500 nits, most HDR10+-capable displays likely wouldn’t adjust anything at all. Mobile devices might, depending on the current brightness setting and ambient light level. The more you push things, the more those additional tonemapping layers come into play, as you exceed the capabilities of downstream devices.
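
In code terms, that gating might look something like this (struct and function names are purely illustrative, not from any standard):

```cpp
// If the content's declared MaxCLL already fits within the display's peak,
// the display has no reason to touch the image; otherwise some roll-off
// (like the sketch above) kicks in.
struct ContentLightLevels { float maxCLL; float maxFALL; }; // nits

bool NeedsDisplayTonemap(const ContentLightLevels& content, float displayPeakNits)
{
    return content.maxCLL > displayPeakNits;
}

// e.g. NeedsDisplayTonemap({500.0f, 150.0f}, 600.0f) == false -> pass through
```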
