HDR Intent for Output Transform


A topic that has been brought up a few times, but that will eventually need to be decided, is what the “intent” of HDR rendering is going to be for this group.

Two groups/camps have been defined:
- aesthetically/perceptually similar to SDR
- content is optimized/maximized for HDR

As a workflow/pipeline I strongly lean towards having a perceptual match to SDR. If I take a camera image and run it through ACES and out to a monitor, I would expect it to look the same (well, similar) on an SDR and HDR monitor. I have heard this voiced a number of times in this group. This is also consistent with other current HDR strategies/standards like HLG.

For the content producers/studios that seek an “optimized” deliverable for HDR (brighter, more saturated, “poppier”, whatever), this is almost certainly going through another grading pass anyway and can be optimized there, as long as the Output Transform is not limiting or severely negatively affecting creative choices.

Like my other post, I’m hoping we can eventually establish a “target” in this area so we have something to move forward with.


It must allow for both. There are some who want HDR to look just like their SDR - they don’t want to see more of the original scene exposure range revealed in the bigger range of output. However, there are many (myself included) who feel it would be silly to have HDR available and not use it to show more of the original exposure range than makes it into an SDR scene. There are cases one could imagine where both would be desirable. Both must be supported - in my mind via different ODTs with the rendering intent switched.


I had tried to write out these workflow/use cases on the Dropbox site because I do think this is an important workflow question that will influence how we design this thing.

  • Order of grading/mastering content
    1. SDR first as hero grade, then apply HDR OT(s) and see more of original exposure range, more color, more highlight/shadow detail - i.e. optimize source content to the display capabilities; intent is image optimization
    2. SDR first as hero grade, then apply HDR OT(s) and see “same” range of original scene exposure, just mapped into the HDR range - i.e. don’t show more color, highlight/shadow detail; intent is image match
    3. HDR first as hero grade, then apply SDR OT(s) and see “same” range of original scene exposure, tone-mapped into SDR for similar appearance; intent is image match
      • Do we try to make our own solution for this or do we defer to existing solutions?
    4. HDR first as hero grade, then apply SDR OT(s) and see less range of original scene exposure, a “window” of the larger rendering output but made to look as good as possible (to the extent possible) on the less capable device; intent is optimization and not an appearance match to HDR

What scenarios am I missing? Can people foresee a need for each of these? Can any be eliminated? Reactions?


I remember seeing this in the doc, but thank you for re-posting here for reference.

I guess a definition of terms is in order as I hadn’t contemplated that they might be interpreted differently, but I’m seeing now they certainly can be!

When I said “perceptual match” I guess I was thinking more of your first scenario, in which there may be more exposure range available, but the general image intent is still the same. I hadn’t really considered your second scenario in which people may want an exact match, which is functionally SDR inside an HDR container at that point.

When I said “optimized for HDR” originally, a better word may have been “emphasized”, as that seems to be the alternate scenario, in which the SDR image may not even be referenced or considered when creating the HDR deliverable.

Your third and fourth scenarios will need careful consideration as they will eventually become the norm.


I have a difficult time distinguishing these two things as well. I agree they’re different, so I should probably try to craft a 5th scenario in there. SDR in HDR is one thing and has its uses (like being able to show SDR vs HDR on a monitor without needing to switch the display setup). I didn’t include that here. I meant “perceptual match” as in: has the same “feel” in both, without the “wow factor” of super-brilliant highlights etc. I will try to fine-tune the language and see if we can be more specific.

For me it helps to think about HDR not as the flip of a switch or a binary state (SDR vs HDR), but as a continuum of different image states. Just looking at the peak white parameter, it varies seamlessly from 48 to 10000 nits.
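To make that continuum idea concrete, here is a toy sketch (not any actual ACES transform, and the mid-grey target of 10 nits is an arbitrary illustrative choice): a family of tone curves parameterized by peak luminance, where mid-grey is pinned to the same display luminance at every peak while the highlight rolloff stretches to use whatever range the display offers.

```python
def display_nits(scene_linear, peak_nits, mid_nits=10.0):
    """Toy Michaelis-Menten-style tone curve, parameterized by display peak.

    Illustrative only -- not the ACES Output Transform. The half-saturation
    constant 'a' is solved so that scene mid-grey (0.18) always lands on
    'mid_nits' regardless of the peak, so only the highlight handling
    changes across the 48..10000 nit continuum.
    """
    a = 0.18 * (peak_nits - mid_nits) / mid_nits
    return peak_nits * scene_linear / (scene_linear + a)

# Mid-grey is stable across peaks; a bright highlight (scene-linear 4.0) is not:
for peak in (48, 100, 1000, 4000):
    print(peak, round(display_nits(0.18, peak), 1), round(display_nits(4.0, peak), 1))
```

The point of the sketch is just that nothing special happens at any particular peak value - the same curve shape slides smoothly through the whole range.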


My opinion is that 4 could and probably should be eliminated. If HDR is the first and hero grade, then optimisation and appearance match should be the same. Trying to simulate an SDR appearance from an HDR grade seems undesirable to me. The first 3 options are the ones I would like to see implemented.

There is another possible option in my mind, which is maybe outside the scope of ACES. That is, when an SDR (display-referred) master is expanded to emulate a greater exposure range, even though the SDR source has already been graded or tonemapped and the original scene-referred (camera) data is not available. I hope I have explained that clearly and correctly.

We could use a tightening up of terms in discussions like this. What is the preferred way of saying something that is technically in an HDR format, but which looks the same as, or very similar to, the SDR version (e.g. your option 2)? I have been using “showcase HDR” for option 3, which exploits HDR to good effect, but I lack words for a deliberate limitation of the dynamic range. I don’t like to say fake HDR, since it is not fake - it is correct HDR. Color is easier - a black and white “look” is rarely taken to mean a black and white signal.



Just wanted to chime in with some feedback from our CTO about HDR and SDR.

We generally do our first hero grade in SDR, where we try to make our movie as beautiful as it can be. And when we move to the HDR room, we kinda expect to see something similar as a starting point. The worst that has happened is when HDR looks completely different from SDR in an uncontrolled way (this may be due to many reasons). And as clients, we expect, as a starting point, respect for the creative intent - rather than flames or skies changing colours.

I’ll quote @daniele and @joachim.zell from meeting #6:

I don’t think nobody really means it should only go to 100 Nits and be like, literally the same, appearance.

But what I saw just by working with clients, they feel so comfortable when they walk from the SDR room to the HDR room, and recognize the imagery, then they say, Oh, good, I mean, in a good, in a good facility here, but Now show us what else we can do can be.

Not sure if it means preserving appearance or not. :wink: But I like this quote from Colorfront:

Will SDR look the same as HDR? Is the look of the image maintained ?



I would say that in 2020, errr 2021 :), you should probably work in the reverse direction: SDR is probably on its way out, and I would think it is a terrible mistake to bound your creative intent by SDR. Unfortunately, that is what will happen if your golden master is graded in SDR and you then do HDR trim passes. Clients should be walking from the HDR grading suite to the SDR one, not the other way around :slight_smile:

To take a basic example: an order-of-magnitude luminance increase, e.g. 120 nits → 1500 nits, will certainly increase colourfulness. If we go SDR → HDR, to maintain the appearance we would have to decrease “saturation” in one way or another, and conversely in the other direction. One fundamental question is whether we want to do that, or let the system behave naturally and not try to bend the natural-world forces to our will. We can have processes for both paths obviously, and we can also cherry-pick which colour appearance effects we want to model/handle.
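To put a rough number on the Hunt effect for that example: CIECAM02 models it by scaling chroma C into colourfulness M via the luminance-level adaptation factor, M = C · F_L^0.25. A small sketch, assuming the common convention of taking the adapting luminance L_A as one fifth of display white (both the convention and the 120/1500 nit pair are illustrative choices here):

```python
def f_l(l_a):
    """CIECAM02 luminance-level adaptation factor F_L for adapting luminance l_a (cd/m^2)."""
    k = 1.0 / (5.0 * l_a + 1.0)
    return 0.2 * k**4 * (5.0 * l_a) + 0.1 * (1.0 - k**4)**2 * (5.0 * l_a)**(1.0 / 3.0)

# Adapting luminance taken as display white / 5 (a common convention).
sdr = f_l(120.0 / 5.0)    # 120 nit SDR white
hdr = f_l(1500.0 / 5.0)   # 1500 nit HDR peak

# Predicted colourfulness gain at identical chroma, since M = C * F_L**0.25:
gain = (hdr / sdr) ** 0.25
print(round(gain, 2))
```

Under these assumptions the model predicts a colourfulness increase on the order of 20-25% for the same encoded chroma, which is roughly the magnitude of “saturation” compensation being discussed.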

This could be caused by many sources that are intermingled. For one, you might have adopted a chromaticity-preserving DRT, but because of the Hunt and Bezold-Brücke effects, your skies or flames, which are now possibly an order of magnitude brighter, will appear different - it is a real-world effect! Where you decide to peg mid-grey and the headroom you give to highlights entirely affect the reproduction here. Then there are exhibition conditions, e.g. are your grading suite environments comparable, what are the observer adaptation states, etc.

The problem space is way larger than simply producing a hue- or chromaticity-preserving DRT, and we objectively don’t have the tools or knowledge to keep everything under control. There are many spatial effects with a profound impact on appearance that we are not even considering to start with.

Keeping that in mind, I think we need to refine what we are trying to achieve here.

“Will SDR look the same as HDR? Is the look of the image maintained?” is vague at best, and we need to define exactly which aspects of the look we want to maintain.



I will state as a personal requirement that the system can and should allow for both. How? That falls under the “intent” switch that I keep talking about.

So to me this should be “Can SDR be made to look the same as HDR?” and “Can the look of the image be maintained?”

EDIT: But yes, agree that we need to refine our terminology and really hone in on the specifics of what we’re talking about in each scenario and use case.


I don’t have a proper HDR and SDR suite to make comparisons between, but in theory in the “real world” that would only be true in the upper exposure ranges, right? In most cases the mid-grey point and even the overall picture level should be similar between SDR and HDR (this is what HLG was designed to preserve), in which case the “colorfulness” should also not be different (color gamut/range aside).

Unfortunately this may actually not be the case in proper grading suites where the SDR spec is 100 nits brightness and HDR diffuse white is around 200 nits (which was based on consumer/end-user studies). I haven’t read the papers, but ST.2084 PQ was originally based around 100 nits diffuse white (undoubtedly to coincide with SDR), but was later revised to 203 nits, likely to align with HLG and accepted consumer use. It’s possible, then, that you may see more of a difference between SDR and HDR in a proper grading suite than you would “in the real world.”
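For reference, the ST.2084 inverse EOTF puts those two diffuse-white candidates at noticeably different points on the PQ signal scale. This is a straightforward implementation of the published constants:

```python
def pq_inverse_eotf(nits):
    """SMPTE ST.2084 inverse EOTF: absolute luminance (nits) -> PQ signal in [0, 1]."""
    m1 = 2610.0 / 16384.0
    m2 = 2523.0 / 4096.0 * 128.0
    c1 = 3424.0 / 4096.0
    c2 = 2413.0 / 4096.0 * 32.0
    c3 = 2392.0 / 4096.0 * 32.0
    y = (nits / 10000.0) ** m1
    return ((c1 + c2 * y) / (1.0 + c3 * y)) ** m2

print(round(pq_inverse_eotf(100.0), 3))  # ~0.508 (100 nit diffuse white)
print(round(pq_inverse_eotf(203.0), 3))  # ~0.581 (203 nit reference white, per BT.2408)
```

So a 203 nit diffuse white sits about 7% higher on the signal scale than a 100 nit one - a one-stop difference in absolute luminance, consistent with the grading-suite vs real-world discrepancy described above.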

As a differing example, I would also say that DCI is not less colorful (perceptually, anyway) than Rec.709, despite having half the peak brightness (48 nits vs 100).

Just so I understand better, are you suggesting a user-accessible parameter for “maintaining intent?” What changes/is different between the two options?

Hi Garrett,

Those are good points, but keep in mind that the context for my answer to Christophe was the appearance of skies and flames, i.e. emission sources, and those are/can be well above diffuse white for quite obvious reasons. You want emission sources to appear bright - isn’t that the entire point of HDR, after all?



It is not so easy, unfortunately.

During production, you switch all the time between SDR and HDR, even if the first grade is designed for HDR. Editing, VFX, etc. are always hybrid.

Also, if you grade HDR first, you tend to do less local relighting (fewer shapes to pull the dynamic range together). So your SDR trim will need either lots of shapes and tracking work (if you can do a separate master - and you maybe don’t get the time for this work), or the global trim tools will have a very hard time.

The other way around is easier because removing the effect of a shape or blending the effect back a bit is faster and simpler.

Don’t get me wrong - I just want to say that both directions have their pros and cons.


Yeah, the idea was to share some experience from the userbase. Since our CTO’s testimony completely matches what Joachim described, I thought it would be interesting to share it with the group.

Far be it from me to enter the should/shouldn’t debate, HDR not being my field of expertise. At some point we will be interested in switching to a full HDR workflow, but that’s not the case currently. And I think that many studios are in the same situation.

I won’t go into the reasons why HDR looked different from SDR in our case. As you said, it could come from many sources. But I think the key word here is “predictability”. That’s what we expect when entering a DI grading suite.

The question may be vague to you, but our answers (as you stated) should not be. :wink:


It is, IMHO, critical to try to understand why the imagery looked different - otherwise, how can we expect to address the problems properly?

Discounting system calibration issues, are they caused by perceptual effects, fundamental issues in the DRT or both?

Don’t get me wrong, I’m all for predictability here, but between colour appearance effects and observer metamerism alone, we could be going in circles chasing our own tails for a while.

At any rate, the more information you can share with the group wrt issues you experienced, the better, otherwise we are left with “things looked different in HDR”. If you have an opportunity to reproduce them, please do, for the group, and ultimately, your studio.

This is a hard one. As you might have realised, we do not have a full understanding of how the HVS works; if we did, we would not be having these conversations in the first place. Expecting this group to come up with answers to all the questions puts an unreasonable amount of pressure on it, especially when the research itself has not formulated proper solutions to some of the problems enumerated above.

It is also why I’m always asking to describe the issues properly and within context so that we can design proper solutions within the constraints of our knowledge.



This will take the thread a bit sideways, but yes - it made sense in 2019, but does it still in a 2021 COVID-19 world where theatres are closing at an incredible rate? I tend to look at Netflix for the trends, and unless I misunderstood when I asked @carolalynn during the GM VWG last year, they are HDR first.

Hola! You’re correct, in an ideal world, we’d be HDR first. However, as @daniele noted, the workflows to maintain that through the entire production pipeline are challenging at best, and though things are progressing quickly, it will still be years before anyone can truly be HDR first.

This is the reason I stay in the camp of: as much as can be in the LMT, should be. It should be a creative choice how “HDR” you actually want your HDR to be, as well as how that translates into SDR for accurate representation in VFX, etc. I understand our options there may reach a limit, and some aspects will need to be done in the core OTs - but I think we all agree on limiting that.

All of this exploration is wonderful - working towards core requirements we cannot live without, vs those that are optional, vs those we know we do not want. There are a lot of great thoughts here I need to fully catch up on as well, so apologies for possibly missing things, but basically, I agree with @sdyer on his scenario proposals and on “can” vs “must” being things we have to distinguish.


Thanks for clarifying @carolalynn, this is helpful!

I certainly won’t disagree here!



Also, especially during the pandemic, with remote workflows and remote viewers sitting in front of all kinds of displays, a simultaneous, appearance-driven translation is especially important.

When we wrote our rendering, we tested remote grading sessions with different viewing conditions, and it works surprisingly well.


Shouldn’t the Netflix (streaming) argument be the same as in this thread? I can’t recall who told me during one of the mid-grey conversations:

Good luck convincing the Academy of that!