Some confusion about ACES 1.3 in DCC software

Apologies in advance, I wrote this with translation software.
I’ve been looking for an answer to a question for two days. First of all, the built-in version of ACES in Maya 2023 is supposed to be 1.3, and the new versions of After Effects and Nuke both have 1.3 built in; the results from their configs are basically the same. Since my production doesn’t involve shot footage, I’m using the ACES 1.3 CG config.
However, in all three applications, whenever I color-convert an sRGB 8-bit image taken from a web page, the result displayed under the ACES 1.0 - SDR Video (sRGB) display looks dark, and none of the methods I tried perfectly restore the appearance the image has in the browser.
If the display is switched to Un-tone-mapped, the ACES look is lost.

Original image:


Converted image, read in as sRGB - Texture:

I would like to understand: is this result intentional?
The current result directly affects camera projection: the image used for the projection ends up inconsistent with what the artist painted in Photoshop.
Or am I missing an important step to get the correct result under the ACES 1.0 - SDR display?

My English is not very good, and I have been searching the internet for this for the past two days, to no avail.

There are Display Rendering Transforms (DRTs) in the ACES framework. They are what you usually select in the viewer. They have tone mapping built in, which makes super-bright colors appear less bright, producing a pleasing soft clip instead of a hard clip. But for input you’ve set just a plain sRGB setting called sRGB - Texture.

So your current image path is:
input image > conversion from the assumed sRGB encoding to linearized data > compositing > conversion from linearized data to an image that should look right on the display you select in the viewer settings. The last step includes tone mapping, but the first step (the sRGB - Texture input) doesn’t. So if you want to preserve the appearance of the input image (not perfectly identical for pixels at maximum saturation, but close), you should convert it to linearized data with something that inverts the Display Rendering Transform. Inverse display transforms are included in ACES. Unfortunately I don’t remember what exactly they are called in OCIO, since I use software where ACES is implemented in a different way, but they are definitely there.
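For what it’s worth, here is a minimal sketch of that inverse path using the OCIO Python bindings. The config filename and the display/view names ("sRGB - Display", "ACES 1.0 - SDR Video") are assumptions that depend on which ACES config you load; check your own config for the exact strings:

```python
# A minimal sketch, assuming an ACES 1.3 OCIO v2 config; the config path
# and the display/view names below vary between configs.
import PyOpenColorIO as ocio

config = ocio.Config.CreateFromFile("cg-config-v1.0.0_aces-v1.3_ocio-v2.1.ocio")

# Invert the viewer's Display Rendering Transform: display-referred sRGB
# pixels go back to scene-linear ACEScg, tone mapping included.
inverse_drt = ocio.DisplayViewTransform(
    src="ACEScg",                    # scene-linear working space
    display="sRGB - Display",        # assumed display name
    view="ACES 1.0 - SDR Video",     # assumed view name
    direction=ocio.TransformDirection.TRANSFORM_DIR_INVERSE,
)

cpu = config.getProcessor(inverse_drt).getDefaultCPUProcessor()
print(cpu.applyRGB([0.5, 0.5, 0.5]))  # display-referred in, scene-linear out
```

An image converted this way will round-trip back to (nearly) its original appearance when viewed through the same display/view in the viewer.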

Another thing (not related to the issue above) you should probably keep in mind: since you do animation, the final product will most likely be rendered using the Rec.709 DRT, not the sRGB one. But the display of the artist who created the image was most likely calibrated to a Gamma 2.2 transfer function rather than the inverse sRGB one. So if you choose the inverse sRGB DRT (called something different in OCIO, but I don’t know what exactly) for input, but the final video is rendered with the Rec.709 DRT, the shadows of the input image will be noticeably lifted. So selecting the inverse Rec.709 DRT (again, named differently in OCIO) for the input image may be a better option. It will still have a gamma 2.2 vs. gamma 2.4 mismatch, but that is not as noticeable.

Personally, I’d also choose the Rec.709 DRT for the viewer in Nuke, because the sRGB DRT in ACES 1.x uses the sRGB transfer function, while your monitor most likely uses a Gamma 2.2 transfer function instead of the inverse sRGB one, which leads to shadows being displayed darker than they should be. There is a small chance your monitor actually uses inverse sRGB decoding: in some displays the sRGB preset really does use the sRGB curve (which makes shadows display brighter). And if you calibrate your display using ICC profiles, I believe that also forces the display to behave like an inverse-sRGB display in most cases. But I could be wrong about ICC; I calibrate displays using LUTs, so my experience with ICC is very limited.
And since you can’t know all of these variables for sure, an acceptable option is probably to choose between the inverse sRGB and inverse Rec.709 input transforms based on which look fits the scene better. Unfortunately, there is a big mess around gamma 2.2 vs. sRGB on the internet, in display manufacturers’ specs, and so on, which ends up turning a strictly utilitarian thing like selecting the appropriate input transform into a creative tool (not always, of course, but in this particular case).
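To make the shadow mismatch concrete, here is a small self-contained Python check (plain math, no color libraries) comparing the piecewise sRGB EOTF with a pure 2.2 power function; the 0.1 test value is chosen just for illustration:

```python
# Compare how much linear light a dark pixel produces on an
# inverse-sRGB display vs. a pure gamma 2.2 display.

def srgb_eotf(v: float) -> float:
    """Piecewise sRGB decoding (IEC 61966-2-1)."""
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def gamma22_eotf(v: float) -> float:
    """Pure power-law 2.2 decoding, common on real monitors."""
    return v ** 2.2

v = 0.1  # a dark 10% code value
print(f"sRGB curve: {srgb_eotf(v):.4f}")    # ~0.0100
print(f"gamma 2.2:  {gamma22_eotf(v):.4f}") # ~0.0063, noticeably darker
```

At that code value the gamma 2.2 display emits roughly a third less light than the sRGB curve predicts, which is exactly the shadow darkening described above.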


Thank you very much! After some time trying, I found the way to invert it and can now get the correct image.
As for the monitor, I’ll pay attention to that in follow-up work too!
Thank you again for solving this problem that has been bothering me.

Hello BrokeNighT,
When you mentioned you found the way to invert it to get the correct image, could you please post the steps you took to get a working result? I am running into the same problem. Thank you!

In Nuke, sRGB images from web pages are first read in as Raw; then the following parameters can be adjusted by adding the node as shown. Sorry, I only vaguely understand the purpose of these steps, so I won’t explain them.


Alternatively, the process can be simplified to just an OCIODisplay node; the method above is just the one I find easier to understand.
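For anyone who wants to script it, here is a rough sketch of that simplified OCIODisplay setup in Nuke Python. This is a reconstruction, not the exact setup from the screenshots; the knob name for the invert-direction checkbox and the display/view strings are assumptions that may differ between Nuke versions and OCIO configs:

```python
# A sketch of the simplified setup, assuming Nuke 13+ with an ACES 1.3
# OCIO config loaded; knob values below depend on that config.
import nuke

read = nuke.createNode("Read")
read["file"].setValue("/path/to/web_image.png")  # hypothetical path
read["raw"].setValue(True)  # read the file as-is, skip the input transform

# Apply the viewer's display transform in reverse, so the display-referred
# sRGB image becomes scene-linear data that round-trips through the viewer.
display = nuke.createNode("OCIODisplay")
display["display"].setValue("sRGB - Display")    # assumed display name
display["view"].setValue("ACES 1.0 - SDR Video") # assumed view name
display["invert"].setValue(True)                 # assumed knob name for
                                                 # the invert-direction box
```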

But for EXRs, if the render was previously done in the linear-sRGB color space, the conversion process I used is a bit more complicated. I’ll also show it as a screenshot:

I don’t know if there is a better way, but this is the way I currently use.
In the current version of AE (2023), I didn’t find the invert-direction property, so I can’t get the “correct” display there.
Maya only started to support invert direction in version 2023, but I leave it unchecked unless I need camera projection, because it renders metallic materials with a high saturation that I don’t want.

A simpler approach is to set the input transform in the Read node to Linear Rec.709 (sRGB).
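In Nuke Python that would be something like the following; the exact colorspace string depends on the loaded ACES config (in the ACES 1.3 CG config it appears as "Linear Rec.709 (sRGB)"), and the node name here is hypothetical:

```python
# Assumes an ACES 1.3 config is loaded; the colorspace name may differ.
import nuke

read = nuke.toNode("Read1")  # hypothetical node name
read["colorspace"].setValue("Linear Rec.709 (sRGB)")
```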