HDR Intent for Output Transform


I would say that in 2020, errr 2021 :), you should probably work in the reverse direction: SDR is probably on its way out, and I would think that it is a terrible mistake to bound your creative intent with SDR. Unfortunately, that is what will happen if your golden master is graded in SDR and you then do HDR trim passes. Clients should be walking from the HDR grading suite to the SDR one, not the other way around :slight_smile:

To take a basic example, an order of magnitude luminance increase, e.g. 120 nits → 1500 nits, will certainly increase colourfulness. If we go SDR → HDR, to maintain the appearance we would have to decrease “saturation” in one way or another, and conversely in the other direction. One fundamental question is whether we want to do that, or let the system behave and not try to bend the natural world’s forces to our will. We can obviously have processes for both paths, and we can also cherry-pick which colour appearance effects we want to model/handle.
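As a toy illustration only (not a colour appearance model), the SDR → HDR compensation described above can be sketched as a lerp toward an achromatic axis. The 12.5× scale matches the 120 → 1500 nits example; the 0.85 factor and the Rec.709 luma weights are arbitrary assumptions for the sketch:

```python
import numpy as np

# Rec.709 luma weights, used here as a crude achromatic axis.
LUMA = np.array([0.2126, 0.7152, 0.0722])

def desaturate(rgb, factor):
    """Lerp each pixel toward its luma; factor < 1 reduces 'saturation'."""
    rgb = np.asarray(rgb, dtype=float)
    luma = np.sum(rgb * LUMA, axis=-1, keepdims=True)
    return luma + factor * (rgb - luma)

sdr = np.array([0.8, 0.3, 0.2])
# 120 nits -> 1500 nits is a 12.5x luminance scale; compensate with an
# arbitrary 0.85 desaturation to counter the increased colourfulness.
hdr = desaturate(sdr * 12.5, 0.85)
```

Note that because the luma weights sum to one, the lerp leaves luma untouched and only pulls chroma in, which is the "decrease saturation one way or another" part of the argument.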

This could be caused by many sources that are intermingled. For one, you might have adopted a chromaticity-preserving DRT, but because of the Hunt and Bezold-Brücke effects, your skies or flames, which are now possibly an order of magnitude brighter, will appear different; it is a real-world effect! Where you decide to peg mid-grey and the headroom you give to highlights entirely affect the reproduction here. Then there are exhibition conditions, e.g. are your grading suite environments comparable, what are the observer adaptation states, etc…

The problem space is way larger than simply producing a hue or chromaticity-preserving DRT and we objectively don’t have the tools or knowledge to keep everything under control. There are many spatial effects that have a deep effect on the appearance that we are not even considering to start with.

Keeping that in mind, I think we need to refine what we are trying to achieve here.

“Will SDR look the same as HDR? Is the look of the image maintained?” is vague at best, and we need to define exactly what pertains to the look that we want to maintain.



I will state as a personal requirement that the system can and should allow for both. How? That falls under the “intent” switch that I keep talking about.

So to me this should be “Can SDR be made to look the same as HDR?” and “Can the look of the image be maintained?”

EDIT: But yes, agree that we need to refine our terminology and really hone in on the specifics of what we’re talking about in each scenario and use case.


I don’t have a proper HDR and SDR suite to make comparisons between, but in theory in the “real world” that would only be true in the upper exposure ranges, right? In most cases the mid-grey point and even the overall picture level should be similar between SDR and HDR (this is what HLG was designed to preserve), in which case the “colorfulness” should also not be different (color gamut/range aside).

Unfortunately, this may actually not be the case in proper grading suites, where the SDR spec is 100 nits peak brightness and HDR diffuse white is around 200 nits (which was based on consumer/end-user studies). I haven’t read the papers, but ST.2084 PQ was originally based around 100 nits diffuse white (undoubtedly to coincide with SDR), but was later revised to 203 nits, likely to align with HLG and accepted consumer use. It’s possible, then, that you may see more of a difference between SDR and HDR in a proper grading suite than you would “in the real world.”
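As a concrete reference point, the PQ (ST.2084) inverse EOTF shows where those two diffuse white candidates land on the signal range; a minimal sketch using the published ST 2084 constants:

```python
# SMPTE ST 2084 (PQ) inverse EOTF: absolute luminance in nits -> code value.
m1 = 2610 / 16384
m2 = 2523 / 4096 * 128
c1 = 3424 / 4096
c2 = 2413 / 4096 * 32
c3 = 2392 / 4096 * 32

def pq_encode(nits):
    y = (nits / 10000.0) ** m1
    return ((c1 + c2 * y) / (1.0 + c3 * y)) ** m2

# 100 nits lands at ~51% of the PQ signal range, 203 nits at ~58%.
print(pq_encode(100), pq_encode(203))
```

So the 100 → 203 nit revision shifts diffuse white by roughly 7% of the PQ code range, which is not negligible when trying to match SDR and HDR pictures side by side.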

As a differing example, I would also say that DCI is not less colorful (perceptually, anyway) than Rec.709, despite having half the peak brightness (48 nits vs 100).

Just so I understand better, are you suggesting a user-accessible parameter for “maintaining intent?” What changes/is different between the two options?

Hi Garrett,

Those are good points, but keep in mind that the context for my answer to Christophe was the appearance of skies and flames, i.e. emission sources, and those are/can be well above diffuse white for quite obvious reasons. You want emission sources to appear bright; isn’t that the entire point of HDR, after all?



It is not so easy, unfortunately.

During production, you switch all the time between SDR and HDR, even if the first grade is designed for HDR. Editing, VFX, etc… are always hybrid.

Also, if you grade HDR first, you tend to do less local relighting (fewer shapes to pull the dynamic range together), so your SDR trim will need either lots of shape and tracking work (if you can do a separate master, and you may not get the time for this work), or the global trim tools will have a very hard time.

The other way around is easier, because removing the effect of a shape, or blending it back a bit, is faster and simpler.

Don’t get me wrong, I just want to say that both directions have their pros and cons.


Yeah, the idea was to share some experience from the userbase. Since our CTO’s testimony completely matches what Joachim described, I thought it would be interesting to share it with the group.

Far be it from me to enter the should/shouldn’t debate, HDR not being my field of expertise. At some point we will be interested in switching to a full HDR workflow, but that is not the case currently. And I think that many studios are in the same situation.

I won’t go into the reasons why HDR looked different from SDR in our case. As you said, it could have many causes. But I think the key word here is “predictability”. That is what we expect when entering a DI grading suite.

The question may be vague to you, but our answers (as you stated) should not be. :wink:


It is, IMHO, critical to try to understand why the imagery looked different, otherwise how can we expect to address the problems properly?

Discounting system calibration issues, are they caused by perceptual effects, fundamental issues in the DRT or both?

Don’t get me wrong, I’m all for predictability here, but between colour appearance effects and observer metamerism alone, we could be going in circles chasing our own tails for a while.

At any rate, the more information you can share with the group wrt the issues you experienced, the better; otherwise we are left with “things looked different in HDR”. If you have an opportunity to reproduce them, please do, for the group and, ultimately, your studio.

This is a hard one: as you might have realised, we do not have a full understanding of how the HVS works; if we did, we would not be having those conversations in the first place. Expecting this group to come up with answers to all the questions puts an unreasonable amount of pressure on it, especially when research itself has not formulated proper solutions to some of the problems enumerated prior.

It is also why I’m always asking to describe the issues properly and within context so that we can design proper solutions within the constraints of our knowledge.



This will take the thread a bit sideways, but yes, it made sense in 2019; does it still in a 2021 COVID-19 world where theatres are closing at an incredible rate? I tend to look at Netflix for the trends, and unless I misunderstood when I asked @carolalynn during the GM VWG last year, they are HDR first.

Hola! You’re correct, in an ideal world, we’d be HDR first. However, as @daniele noted, the workflows to maintain that through the entire production pipeline are challenging at best, and though things are progressing quickly, it will still be years before anyone can truly be HDR first.

This is the reason I stay in the camp of: as much as can be in the LMT, should be. It should be a creative choice how “HDR” you actually want your HDR to be, as well as how that translates into SDR for accurate representation in VFX, etc. I understand our options there may reach a limit, and aspects will need to be done in the core OTs… but limiting that is something I think we all agree upon.

All of this exploration is wonderful, working towards core requirements we cannot live without, vs those that are optional, vs those we know we do not want. There are a lot of great thoughts here that I need to fully catch up on as well, so apologies for possibly missing things, but basically I agree with @sdyer on his scenario proposals and on “can” vs “must” being things we have to distinguish.


Thanks for clarifying @carolalynn, this is helpful!

I certainly won’t disagree here!



Also, especially during the pandemic, with remote workflows and remote viewers sitting in front of all kinds of displays, a simultaneous and appearance-driven translation is especially important.

When we wrote our rendering we tested remote grading sessions with different viewing conditions and it works surprisingly well.


Shouldn’t the Netflix (streaming) argument be the same as in this thread? I can’t recall who told me this during one of the mid-grey conversations:

Good luck on convincing the academy about that !


Not sure I get the connection there @ChrisBrejon - though obviously where mid gray ends up is definitely important :slight_smile:

Well, mid-grey pegs at 0.10 in display linear mostly because of dark theatres, right? If streaming and Netflix become the norm (because of COVID, for instance), is there a case where TV sets, smartphones and iPads would benefit from pegging mid-grey at 0.18 in display linear?
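For the arithmetic behind that question (taking the usual 48 nit cinema and 100 nit SDR video reference peaks as assumptions):

```python
# Mid-grey in absolute nits for the two anchors under discussion.
peaks = {"DCI cinema": 48, "SDR video": 100}

for name, peak in peaks.items():
    print(f"{name}: 0.10 -> {0.10 * peak:.1f} nits, 0.18 -> {0.18 * peak:.1f} nits")
```

That is the gap at stake: roughly 4.8 vs 8.6 nits in a dark theatre, and 10 vs 18 nits on an SDR video display, before any surround compensation.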

Joshua’s answer on this matter is brilliant.


I think these discussions are solidifying (at least in my mind) the need for multiple styles/variants of output transforms. I would hope that an acceptable outcome of this VWG is the decision (if made) that we can’t hit all the targets with one group of OTs, and to provide all the necessary OTs as required.

For HDR I am seeing two distinct variants needed:

- An HDR transform that creates an image that reasonably coincides with the SDR transform. Numerous valid reasons have been stated for why this workflow is necessary. This would be created after (or possibly alongside) work on a new SDR transform.

- Because that transform will almost certainly make compromises and restrict what could be an “optimized” HDR image, we need another OT that is less restrictive for HDR, for those that need/want it.

- There are also some specialized transforms that @sdyer has mentioned, like SDR within an HDR container, which is good for comparison work.

It is feasible that an LMT could accomplish the first goal, but I really dislike the idea of a “default” LMT having to be applied; that feels too fragile in my mind. We probably need to start a separate thread about a default LMT if that continues to be a possibility, and/or flag that over to the architecture group.

Previous ACES RRT styles are available via an LMT, so why would the first goal be any different from those, especially when it is also about emulating a look?

Assuming for a second that we are going down the road of adding a different rendering transform that makes HDR look like SDR, what do we then do about the existing emulation LMTs?

If we don’t do anything, we are left with a dirty system with transforms doing similar things but categorised differently, and if we move all those LMTs to new Rendering Transforms, we end up with an explosion of RTs and the role of the LMT becomes muddy.

This is certainly not a great situation from an architectural standpoint.



That’s a valid point; at the end of the day it’s about simulating our SDR look.

However, what I think differentiates this circumstance is that it might reasonably be the default/base look that is widely used, unlike the LMTs provided for backwards compatibility/comparison. One of the discussions a couple of meetings ago was how a fair number of people use ACES simply because “it works”; the necessity of using a default LMT to get a base look, but only when doing HDR, convolutes the workflow.

These will all have to be re-developed for ACES 2.0 anyway, since we are changing the OT (including rendering), will they not?

The inverse is also true, though: SDR would look like HDR. So in the case of a production doing an HDR master first, the SDR grade should be a simple trim pass (and possibly not even needed in some circumstances). If the “standard” HDR OT looks wildly different from the SDR OT, best case there is an LMT that gets it close and you do final tweaks from there, but worst case you’re doing a full grade again in SDR.

Just throwing it out there, but maybe this is better as a both/and approach. There are two HDR transforms available: one that “matches” the SDR OT and one that is less restrictive. There is ALSO an LMT available that can be used with the less restrictive HDR OT to match the SDR look. This would be comparable to, say, rendering directly in 0.1.1 versus using an LMT and rendering in the current version. Both should yield the same (or very similar) results, but either can be used depending on workflow.

They are provided for compatibility reasons, but with the intent of simulating a particular look; instead of having 3 or 4 RRTs, the choice was made to have a single one and use the dedicated tool in the block diagram to handle look changes. Unless it is impossible to preserve the SDR look in HDR via an LMT (which it might be, although I don’t see why), I don’t see a really compelling reason to break the entire system for that.

If you have n “old” LMTs built for RRT 1.0, you only require a single new LMT mapping from RRT 1.0 to RRT 2.0 to be able to use all n “old” ones with RRT 2.0.
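A sketch of that composition, with made-up stand-in tone curves (simple Reinhard-style functions, purely illustrative and not the actual ACES transforms; only the structure matters here):

```python
def rrt_1(x):
    # Stand-in for the old rendering transform.
    return x / (x + 0.5)

def rrt_2(x):
    # Stand-in for the new rendering transform.
    return x / (x + 0.4)

def inv_rrt_2(y):
    # Analytic inverse of rrt_2 for y in [0, 1).
    return 0.4 * y / (1.0 - y)

def bridge_lmt(x):
    # Maps scene values such that rrt_2(bridge_lmt(x)) == rrt_1(x), so
    # any "old" look LMT keeps its appearance under the new rendering:
    # rrt_2(bridge_lmt(old_lmt(x))) == rrt_1(old_lmt(x))
    return inv_rrt_2(rrt_1(x))
```

With this single bridge in place, each of the n old LMTs composes as old LMT → bridge LMT → RRT 2.0 and reproduces its RRT 1.0 output, so no transform needs to change category.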

The thing to consider (and that people wanting different RTs seem to miss) is the combinatorial explosion for a CMS with a fixed configuration, e.g. OCIO.

To put things in perspective, the current ACES 1.2 config has roughly 25 OTs for users to choose from; should we have two styles, e.g. current vs hue-preserving, we would double the count, i.e. ~50 OTs. If we wanted the SDR look maintained for HDR, we would go from 8 to 16 OTs to perform the job of what a single LMT might be able to do.
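The counting in that example is simple multiplication, but it is worth writing down because every new style axis multiplies the whole inventory (the 25 and 8 are the approximate figures quoted above, not an exact inventory):

```python
# Each extra rendering style multiplies the transforms that a fixed
# configuration (e.g. an OCIO config) must enumerate up front.
display_targets = 25   # approximate OT count in the ACES 1.2 OCIO config
styles = 2             # e.g. current vs hue-preserving
total_ots = display_targets * styles  # ~50 OTs overall

hdr_targets = 8        # HDR OTs that would gain an SDR-matched variant
sdr_matched = hdr_targets * 2         # 16 OTs doing the job of one LMT
```

A third style axis would multiply again (25 × 2 × 2 = 100), which is the fixed-config cost that a single composable LMT avoids.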

To be clear, I’m not against having multiple RTs/RRTs, I’m just extremely careful about the consequences of doing so, because I also happen to be at the other end of the barrel. We have great powers but also great responsibilities.



That’s good info, thanks @Thomas_Mansencal. I don’t do much with OCIO (although I believe a lot of people do), so that’s a good consideration.

Totally off topic, but I was just having a brief look at OCIO in AE and at the OCIO website, and it appears user-selectable parameters beyond input and output space would not be possible (for the end user); is that correct? The concept of user-selectable parameters for OT(s) in ACES has been brought up a couple of times, but it looks like that couldn’t be reproduced in OCIO… which is unfortunate, as you could greatly decrease the number of separate configs if you could, say, choose the white point (D50, D55, D65, etc.) in OCIO and have it call the correct transform in the chain, instead of having to maintain discrete libraries for every option.

The config itself is fixed, but it is possible to dynamically change parameters on various transforms, such as the CDL transform. ACES, on the other hand, has no dynamic parameterisation: you can swap one transform for another, but the transforms themselves are fixed. You cannot really change how they behave, and there is no immediate plan to handle dynamic parameterisation, as it opens a great metadata Pandora’s box nobody has dared to look inside yet! :slight_smile:

It certainly does not mean it is not something that has been talked about or considered; it is just really hard to do right.