HDR Intent for Output Transform

Hi Garrett,

Those are good points, but keep in mind that the context for my answer to Christophe was the appearance of skies and flames, i.e. emission sources, and those are, or can be, well above diffuse white for quite obvious reasons. You want emission sources to appear bright; isn’t that the entire point of HDR, after all?

Cheers,

Thomas

It is not so easy, unfortunately.

During production, you switch all the time between SDR and HDR, even if the first grade is designed for HDR. Editing, VFX, etc. are always hybrid.

Also, if you grade HDR first, you tend to do less local relighting (fewer shapes needed to hold the dynamic range together). So your SDR trim will need either lots of shape and tracking work (if you can do a separate master, and you may not get the time for it), or the global trim tools will have a very hard time.

The other way around is easier, because removing the effect of a shape, or blending its effect back a bit, is faster and simpler.

Don’t get me wrong, I just want to say that both directions have their pros and cons.

Yeah, the idea was to share some experience from the userbase. Since our CTO’s testimony completely matches what Joachim described, I thought it would be interesting to share it with the group.

Far be it from me to enter the should/shouldn’t debate, HDR not being my field of expertise. At some point we will be interested in switching to a full HDR workflow, but that’s not the case currently. And I think that many studios are in the same situation.

I won’t go into the reasons why HDR looked different from SDR in our case. As you said, it could come from many causes. But I think the key word here is “predictability”. That’s what we expect when entering a DI grading suite.

The question may be vague to you, but our answers (as you stated) should not be. :wink:

Regards,
Chris

It is, IMHO, critical to try to understand why the imagery looked different; otherwise, how can we expect to address the problems properly?

Discounting system calibration issues, are they caused by perceptual effects, fundamental issues in the DRT, or both?

Don’t get me wrong, I’m all for predictability here, but between colour appearance effects and observer metamerism alone, we could be going in circles chasing our own tails for a while.

At any rate, the more information you can share with the group with respect to the issues you experienced, the better; otherwise we are left with “things looked different in HDR”. If you have an opportunity to reproduce them, please do, for the group and, ultimately, your studio.

This is a hard one: as you might have realised, we do not have a full understanding of how the HVS works; if we did, we would not be having these conversations in the first place. Expecting this group to come up with answers to all the questions puts an unreasonable amount of pressure on it, especially when research itself has not formulated proper solutions to some of the problems enumerated previously.

It is also why I’m always asking for the issues to be described properly and within context, so that we can design proper solutions within the constraints of our knowledge.

Cheers,

Thomas

This will take the thread a bit sideways, but yes, it makes sense in 2019; does it still in a 2021 COVID-19 world where theatres are closing at an incredible rate? I tend to look at Netflix for the trends, and unless I misunderstood when I asked @carolalynn during the GM VWG last year, they are HDR first.

Hola! You’re correct, in an ideal world, we’d be HDR first. However, as @daniele noted, the workflows to maintain that through the entire production pipeline are challenging at best, and though things are progressing quickly, it will still be years before anyone can truly be HDR first.

This is the reason I stay in the camp of: as much as can be in the LMT should be. It should be a creative choice how “HDR” you actually want your HDR to be, as well as how that translates into SDR for accurate representation in VFX, etc. I understand our options there may reach a limit, and some aspects will need to be done in the core OTs… but I think we all agree on limiting that.

All of this exploration is wonderful, working towards core requirements we cannot live without, vs. those that are optional, vs. those we know we do not want. There are a lot of great thoughts here I need to fully catch up on as well, so apologies for possibly missing things, but basically I agree with @sdyer’s scenario proposals and that “can” vs. “must” are things we have to distinguish.

Thanks for clarifying @carolalynn, this is helpful!

I certainly won’t disagree here!

Cheers,

Thomas

Also, especially during the pandemic, with remote workflows and remote viewers sitting in front of all kinds of displays, a simultaneous, appearance-driven translation is especially important.

When we wrote our rendering, we tested remote grading sessions with different viewing conditions, and it works surprisingly well.

Shouldn’t the Netflix (streaming) argument be the same as in this thread? I can’t recall who told me this during one of the mid-gray conversations:

Good luck convincing the Academy of that!

Chris

Not sure I get the connection there @ChrisBrejon - though obviously where mid gray ends up is definitely important :slight_smile:

Well, mid gray pegs at 0.10 in display linear mostly because of dark theatres, right? If streaming and Netflix become the norm (because of COVID, for instance), is there a case where TV sets, smartphones and iPads would benefit from pegging mid gray at 0.18 in display linear?

Joshua’s answer on this matter is brilliant.

Chris
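To put rough numbers on that choice, here is a hypothetical back-of-the-envelope sketch; the 0.10 and 0.18 pegging values come from the posts above, while the peak luminances (a 48 cd/m² cinema projector and a 100-nit SDR reference display) are illustrative:

```python
# Hypothetical illustration: absolute mid-gray luminance implied by
# pegging mid gray at different display-linear values.
def mid_gray_nits(peak_nits: float, display_linear_mid_gray: float) -> float:
    """Mid-gray luminance in cd/m^2 for a display with the given peak."""
    return peak_nits * display_linear_mid_gray

# Dark-surround cinema projector (48 cd/m^2 peak), mid gray pegged at 0.10:
print(mid_gray_nits(48, 0.10))   # 4.8 cd/m^2
# 100-nit SDR reference, mid gray at 0.10 vs. 0.18:
print(mid_gray_nits(100, 0.10))  # 10.0 cd/m^2
print(mid_gray_nits(100, 0.18))  # 18.0 cd/m^2
```

The arithmetic is trivial, but it makes the stakes concrete: the same pegging decision nearly doubles the absolute luminance of mid gray on a brighter viewing device.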

I think these discussions are solidifying (at least in my mind) the need for multiple styles/variants of output transforms. I would hope that an acceptable outcome of this VWG is the decision (if it comes to that) that we can’t hit all the targets with one group of OTs, and the provision of all the necessary OTs as required.

For HDR I am seeing two distinct variants needed:

- An HDR transform that creates an image that reasonably coincides with the SDR transform. Numerous valid reasons have been stated for why this workflow is necessary. This would be created after (or possibly alongside) work on a new SDR transform.

- Because that transform will almost certainly make compromises and restrict what could be an “optimized” HDR image, we need another OT that is less restrictive, for those who need/want it.

- There are also some specialized transforms that @sdyer has mentioned, like SDR within an HDR container, which is good for comparison work.

It is feasible that an LMT could accomplish the first goal, but I really dislike the idea of a “default” LMT having to be applied; that feels too fragile to me. We probably need to start a separate thread about a default LMT if that continues to be a possibility, and/or flag it to the architecture group.

Previous ACES RRT styles are available via an LMT; why would the first goal be any different from those, especially when it is also about emulating a look?

Assuming for a second, that we are going down the road of adding a different rendering transform that makes HDR look like SDR, what do we do then about the existing emulation LMTs?

If we don’t do anything, we are left with a dirty system with transforms doing similar things but categorised differently; and if we move all those LMTs into new Rendering Transforms, we end up with an explosion of RTs and the role of the LMT becomes muddy.

This is certainly not a great situation from an architectural standpoint.

Cheers,

Thomas

That’s a valid point; at the end of the day it’s about simulating our SDR look.

However, what I think differentiates this circumstance is that it might reasonably be the default/base look that is widely used, unlike the LMTs provided for backwards compatibility/comparison. One of the discussions a couple of meetings ago was about how a fair number of people use ACES simply because “it works”; the necessity of using a default LMT to get a base look, but only when doing HDR, convolutes the workflow.

These will all have to be re-developed for ACES 2.0 anyway, since we are changing the OT (including the rendering), won’t they?

The inverse is also true, though: SDR would look like HDR. So in the case of a production doing an HDR master first, the SDR grade should be a simple trim pass (and possibly not even needed in some circumstances). If the “standard” HDR OT looks wildly different from the SDR OT, best case there is an LMT that gets it close and you do final tweaks from there; worst case, you’re doing a full grade again in SDR.

Just throwing it out there, but maybe this is better as a both/and approach. There are two HDR transforms available: one that “matches” the SDR OT and one that is less restrictive. There is ALSO an LMT available that can be used with the less restrictive HDR OT to match the SDR look. This would be comparable to, say, rendering directly in 0.1.1 versus using an LMT and rendering in the current version. Both should yield the same (or very similar) results, but either can be used depending on workflow.

They are provided for compatibility reasons, but with the intent of simulating a particular look; instead of having 3 or 4 RRTs, the choice was made to have a single one and use the dedicated tool in the block diagram to handle look changes. Unless it is impossible to preserve the SDR look in HDR via an LMT, which it might be, although I don’t see why, I don’t see a really compelling reason to break the entire system for that.

You have n “Old” LMTs mapping to RRT 1.0; you only require a new one mapping from RRT 1.0 to RRT 2.0 to be able to use the n “Old” ones with RRT 2.0.
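The adapter idea can be sketched as plain function composition; the transforms below are toy numeric stand-ins (not real LMT or RRT maths), just to show that n old looks need only one adapter, not n new rendering transforms:

```python
# Hypothetical sketch: reusing "old" LMTs (authored against RRT 1.0)
# under RRT 2.0 by composing each with a single adapter transform.
def compose(*fns):
    """Right-to-left function composition: compose(f, g)(x) == f(g(x))."""
    def composed(x):
        for fn in reversed(fns):
            x = fn(x)
        return x
    return composed

def old_lmt_a(x):
    """Toy stand-in for an LMT designed for RRT 1.0."""
    return x * 1.1

def rrt1_to_rrt2(x):
    """Toy stand-in for the single RRT 1.0 -> RRT 2.0 adapter LMT."""
    return x + 0.01

# Every old LMT becomes usable with RRT 2.0 via the same adapter:
lmt_a_for_rrt2 = compose(rrt1_to_rrt2, old_lmt_a)
print(lmt_a_for_rrt2(1.0))  # old_lmt_a then the adapter
```

With n old LMTs, this means one new transform to author instead of n, which is the maintenance argument being made here.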

The thing to consider (and one that people wanting different RTs seem to miss) is the combinatorial explosion for a CMS with a fixed configuration, e.g. OCIO.

To put things in perspective, the current ACES 1.2 Config has roughly 25 OTs for users to choose from; should we have two styles, e.g. current vs. hue-preserving, we would double the count, i.e. ~50 OTs. If we wanted the SDR look maintained for HDR, we would go from 8 to 16 OTs to perform the job that a single LMT might be able to do.
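The multiplication is easy to sketch; the base count of 25 comes from the post, while the axis names below are illustrative:

```python
# Sketch of the combinatorial explosion in a fixed-configuration CMS:
# every independent style axis multiplies the number of transforms
# that must be listed explicitly in the config.
from itertools import product

base_ots = 25                                   # rough ACES 1.2 Config OT count
styles = ["current", "hue-preserving"]          # hypothetical rendering styles
print(base_ots * len(styles))                   # two styles -> ~50 OTs

# Adding another independent axis multiplies again:
looks = ["SDR-look-matched", "HDR-optimised"]   # hypothetical look variants
variants = list(product(styles, looks))
print(base_ots * len(variants))                 # ~100 explicit OT entries
```

Because an OCIO-style config enumerates transforms explicitly rather than parameterising them, each new variant axis doubles the menu users have to navigate.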

To be clear, I’m not against having multiple RTs/RRTs, I’m just extremely careful about the consequences of doing so, because I also happen to be at the other end of the barrel. We have great powers but also great responsibilities.

Cheers,

Thomas

That’s good info, thanks @Thomas_Mansencal. I don’t do much with OCIO (although I believe a lot of people do), so that’s a good consideration.

Totally off topic, but I was just having a brief look at OCIO in AE and at the OCIO website, and it appears user-selectable parameters beyond input and output space are not possible (for the end user), is that correct? The concept of user-selectable parameters for OT(s) in ACES has been brought up a couple of times, but it looks like that couldn’t be reproduced in OCIO… which is unfortunate, as you could greatly decrease the number of separate configs if you could, say, choose the white point (D50, D55, D65, etc.) in OCIO and have it call the correct transform in the chain, instead of having to maintain discrete libraries for every option.

The config itself is fixed, but it is possible to dynamically change parameters on various transforms, such as the CDL transform. ACES, on the other hand, has no dynamic parameterisation: you can swap a transform for another, but the transforms themselves are fixed. You cannot really change how they behave, and there is no immediate plan to handle dynamic parameterisation, as it opens a great metadata Pandora’s box nobody has dared to look inside yet! :slight_smile:

It certainly does not mean it is not something that was talked about or has not been considered; it is just really hard to do right.
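A hypothetical toy model of the distinction being drawn (these classes are NOT the real OCIO API, just an illustration of a fixed transform chain where only a designated property is adjustable at run time):

```python
# Toy model: the transform *chain* is fixed, like an OCIO config,
# but one transform exposes a dynamic property (like a CDL slope).
from dataclasses import dataclass

@dataclass
class ToyCDL:
    slope: float = 1.0  # dynamic: adjustable without rebuilding the config

    def apply(self, x: float) -> float:
        return x * self.slope

@dataclass
class ToyMatrix:
    gain: float         # static: baked into the config at load time

    def apply(self, x: float) -> float:
        return x * self.gain

# The chain itself cannot be re-ordered or swapped by the end user...
chain = [ToyMatrix(gain=0.9), ToyCDL()]
# ...but the CDL's dynamic property can be updated per shot:
chain[1].slope = 1.2

value = 0.5
for transform in chain:
    value = transform.apply(value)
print(value)  # 0.5 * 0.9 * 1.2
```

This mirrors Thomas’s point: parameterising which transforms are in the chain (as user-selectable OT options would require) is a different, much harder problem than tweaking a value inside a fixed chain.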

Cheers,

Thomas

There’s one thing I believe we should all consider, which is (in my humble opinion) the obsolescence of the 100-nit SDR standard. No consumer TV (or GUI monitor) is at 100 nits in SDR out of the box (200 to 350 nits seems a good average). I have several clients (broadcasters, mostly producing in HLG) saying that they are calibrating for 203 nits to have a similar feel to their HDR, but that they are struggling with greyscale tracking on those displays. I guess what I’m trying to say here is that a revision by SMPTE, ITU, etc. of the SDR standard to accommodate a higher peak luminance would be much needed, so that we are not chasing our tails too much when the artistic intent needs to be preserved in both SDR and HDR in simultaneous production (or… on a film set! where both technologies will co-exist for quite some time).
I would also suggest that the same standard of 5 nits of dim ambient light for both SDR and HDR content production should be reconsidered.
I know none of this is the task of the OT working group or ACES, but maybe raising our voices to those standardization bodies might help the work we’re trying to achieve here?

100% agree. The fact that HLG settled on 203 nits for diffuse white, and PQ guidance quickly updated from 100 nits to 203 nits, makes me hopeful there wouldn’t be too much pushback on allowing at least an “alternate” recommendation or standard for SDR.
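For context, the PQ (SMPTE ST 2084) inverse EOTF places those two reference-white levels at the signal values often quoted in this debate; a minimal sketch of the standard encoding formula:

```python
# SMPTE ST 2084 (PQ) inverse EOTF: absolute luminance (cd/m^2) -> signal.
M1 = 2610 / 16384          # 0.1593017578125
M2 = 2523 / 4096 * 128     # 78.84375
C1 = 3424 / 4096           # 0.8359375
C2 = 2413 / 4096 * 32      # 18.8515625
C3 = 2392 / 4096 * 32      # 18.6875

def pq_inverse_eotf(luminance: float) -> float:
    """Encode absolute luminance (0..10000 cd/m^2) to a PQ signal in 0..1."""
    y = luminance / 10000.0
    return ((C1 + C2 * y ** M1) / (1 + C3 * y ** M1)) ** M2

print(round(pq_inverse_eotf(100.0), 2))  # 0.51, the older 100-nit reference
print(round(pq_inverse_eotf(203.0), 2))  # 0.58, BT.2408 reference/diffuse white
```

That “203 nits ≈ 58% PQ signal” figure is the one BT.2408 uses when recommending where HDR reference white should sit.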

Scenarios 2 and 3 are what I would prefer from a User’s perspective.

If there are suitable “existing solutions” I would suggest using these as foundations to improve upon.