I’m new to ACES and want to apply it to SDR images, but I’ve noticed that with ACES, an input value of 1.0 maps to an output of only about 0.8. I understand that the 0.8–1.0 output range is reserved for the ‘over-bright’ parts of HDR images, which doesn’t seem necessary for SDR images.
I found a step in the ODT called ODT_48nits, and removing it, or changing it to ODT_1000nits, allows the brightness to reach the maximum. Is this the step that causes the 0.8 limit? I couldn’t find an ODT_100nits, though.
Currently we simply stretch the result from 0.8 up to 1.0 after mapping, which doesn’t feel quite right. Which step should I modify instead?
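For reference, our stretch looks roughly like this toy NumPy sketch (the 0.8 pivot comes from what we observed in our renders, not from the ACES spec):

```python
import numpy as np

# Toy illustration of the workaround described above: after the SDR
# output transform, content tops out around 0.8, so the result is
# linearly rescaled so that 0.8 lands on 1.0. The 0.8 pivot is an
# observed value, not a number from the ACES documentation.
SDR_PEAK = 0.8

def stretch_to_full_range(img):
    """Linearly rescale [0, SDR_PEAK] to [0, 1], clipping anything above."""
    return np.clip(img / SDR_PEAK, 0.0, 1.0)

frame = np.array([0.0, 0.4, 0.8, 0.9])
print(stretch_to_full_range(frame))  # 0.8 maps to 1.0; 0.9 clips to 1.0
```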
Hello @lysis013 and welcome to ACES Central!
I don’t think you should apply ACES to SDR images. I think it is meant to be used on high-dynamic-range “data” only.
Regards,
Chris
When I want to work with photographic SDR imagery (such as Rec. 709 video) in an ACES environment, I use what I call the “inverse ODT” method, setting the Rec. 709 ODT as the input transform.
This process is not currently 100% lossless for some very saturated colors, but it essentially promotes the SDR source to HDR by un-doing a plausible assumption of what might have been its HDR-to-SDR encoding curve.
Not all SDR sources respond well to this extreme processing. Don’t expect the highlight detail to be pretty or hold up well to downstream processing.
However, basic corrections do respond well.
If Chris or others want to strongly discourage this I’m all ears. I wrote a whole lot more about it here: Linear Light, Gamma, and ACES — Prolost
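To sketch the idea in code: the real ACES Rec. 709 output transform is far more complex, but a simple Reinhard-style curve can stand in for the HDR-to-SDR rendering, with its analytic inverse playing the role of the “inverse ODT” that promotes SDR values back to pseudo-HDR linear.

```python
import numpy as np

# Toy stand-in for the "inverse ODT" idea. tonemap() plays the role of
# an HDR-to-SDR rendering; inverse_tonemap() un-does it, promoting SDR
# display values back to pseudo scene-linear. This is NOT the actual
# ACES transform, just a curve with the same general shape.
def tonemap(x):          # scene-linear -> display-ish [0, 1)
    return x / (1.0 + x)

def inverse_tonemap(y):  # display-ish -> pseudo scene-linear
    y = np.clip(y, 0.0, 0.9999)   # guard: the inverse blows up at 1.0
    return y / (1.0 - y)

scene = np.array([0.05, 0.18, 1.0, 4.0])
roundtrip = inverse_tonemap(tonemap(scene))
print(roundtrip)  # close to the original scene values
```

As in the real workflow, the round trip is nearly lossless for mid-tones but gets fragile as values approach display white.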
Nice read! If you feel like it, you may want to update it at some point, since After Effects and Magic Bullet have had native OCIO and ACES support for a while now.
Thanks! I did add addendums to that effect, but it’s probably time for a whole new post. Maybe after 2.0 is released.
Pleased to hear that, Stu. I refer to this doc frequently. I was back and forth with Maxon asking why, in AE with ACES 1.3, pushing the blue shadows to an extreme with Looks gives quite a different result from the same extreme blue push with Colorista in log mode sandwiched between CCTs. Not really wrong, just not what I expected, so I spent days tinkering in and out of transforms in AE trying to work out what was going on. Still wondering if there is a way to stay in ACES and overlay the motion graphics, logos, etc. Thinking no; there are some odd inverts you can do, but the colours have ugly aliased edges since it’s a hack. Presently I composite and grade in ACES (sandwiched CCTs for most footage), then output to Rec. 709 and add the motion graphics there. One comp to rule them all would be nice, though.
Exploring moving to ACES for VFX work, primarily involving incorporating CG with live action.
Say you have SDR source footage for a VFX shot (10-bit DPX or even ProRes 4444). What is the preferred way to comp ACES CG into it? Is Stu’s “inverse ODT” approach (converting the SDR source into pseudo-HDR by inverting the SDR video output transform) the way to go? I have seen some minor changes in extremely saturated colors using this method; I’m not sure those colors are even in the sRGB/Rec. 709 gamut.
Or would you convert your CG elements over to already-tone-mapped SDR (either via the ODT, or manual curves, highlight compression, etc.) before comping? That seems less than ideal and defeats Nuke’s linear workflow.
If Nuke is set up for ACEScg, it seems you pretty much need to convert the source BG plate footage (even though leaving it as untouched as possible is of course ideal). Obviously the ideal would be to get ACES EXRs for the BG plate from the colorist, but not everyone is set up for this, and I find many of the other people my client works with can’t properly handle EXRs (I know… it’s 2024). So for this particular pipeline I typically deliver already-comped Rec. 709 SDR, since that is what they can deal with.
Thoughts on approaches for this? Thanks. I apologize if reviving necro-threads is taboo here, but it seemed to fit in directly with the conversation.
The point of scene-referred compositing is that the different elements are not yet colour rendered, so integration and physically plausible edits can be accomplished.
If you start with video footage, which has been colour graded and colour rendered to a display-referred state, the inverse RRT will not restore a scene-referred state in the true sense.
Also, inverse transforms have extremely steep gradients. The resulting pseudo-scene-referred data can be very fragile (meaning that if you start modifying it, it will break apart).
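The steep-gradient point can be made numerically with a toy Reinhard-style curve standing in for a real output transform (the real ACES curves differ, but share this behaviour near white): the slope of the inverse grows without bound as display values approach 1.0, so a one-code-value nudge in a bright SDR pixel becomes a huge jump in the pseudo-scene-linear data.

```python
# Toy illustration of the steep inverse gradient. With the Reinhard-style
# tonemap y = x / (1 + x), the inverse is x = y / (1 - y), whose
# derivative 1 / (1 - y)^2 explodes as y approaches display white.
def inverse_slope(y):
    # d/dy [ y / (1 - y) ] = 1 / (1 - y)^2
    return 1.0 / (1.0 - y) ** 2

for y in (0.5, 0.9, 0.99):
    print(f"slope at {y}: {inverse_slope(y):.1f}")
# slope grows from 4 at mid-grey-ish values to ~10000 near white
```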
The best place to spend your energy is educating the people around you and trying to get proper scene-referred data.
I hope this makes sense.
Thanks for the reply.
Makes perfect sense (I’ve been doing this for about 35 years). I get that we can’t magically get linear light out of 10-bit, or even 16-bit, display-referred footage. Stretching those values out will inevitably result in some posterization if they are not squeezed back in exactly the same way. I was impressed that the transformation in Nuke does seem lossless, at least with the Macbeth chips I was reading from footage that had been run through the transform and its inverse. I should probably do some tests on extreme highlights and shadows and check for banding or other artifacts.
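The highlight test I have in mind would look something like this toy round trip (with a Reinhard-style curve standing in for the actual transform, so the exact numbers are illustrative only): tone-map scene values, quantize to 10-bit as a DPX-like source would be, invert, and see where the error grows.

```python
import numpy as np

# Sketch of a round-trip quantization test: the Reinhard-style curve is
# a stand-in for a real output transform. The point is that 10-bit
# quantization costs little in the mid-tones but more in the highlights,
# where the inverse curve is steep.
def tonemap(x):
    return x / (1.0 + x)

def inverse_tonemap(y):
    y = np.clip(y, 0.0, 0.9999)
    return y / (1.0 - y)

scene = np.array([0.18, 1.0, 8.0, 500.0])
code = np.round(tonemap(scene) * 1023) / 1023          # 10-bit quantize
err = np.abs(inverse_tonemap(code) - scene) / scene    # relative error
for s, e in zip(scene, err):
    print(f"scene {s:>6}: relative error {e:.4f}")
```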
So it does sound viable if you are planning on comping into colored footage and not grading it heavily.
Of course I would much rather get scene-referred EXRs from color to begin with. That’s just not always something I can control, but I always try.
I do hope the future holds more scene-referred work. For me it depends on the client: in the commercial world everything is tight deadlines and fast turnarounds with little room for error, and few take the time to experiment with and explore new tech as they should. It is nice to see the software getting updated, and ACES/OCIO emerging as a standard we can use everywhere. It’s going to be great when everything settles in.
Thanks again.
Care to elaborate? What weaknesses are you referring to? How might you suggest it be improved?
I found some of your other postings regarding the polarity reversal. Interesting reading. Thanks.