The first OCIOColorSpace node converts from Non-Linear sRGB to Linear ACES-2065-1 by applying the sRGB Electro-Optical Transfer Function (EOTF) to your image via a 1D LUT and then applying the sRGB to ACES-2065-1 primaries conversion via two concatenated transformation matrices.
The second OCIOColorSpace node then applies the Reference Rendering Transform (RRT) plus the sRGB Output Device Transform (ODT) via a shaper LUT + a 3D LUT. That transformation is entirely different from the Inverse sRGB EOTF that you would need to use here to get back to your original image without a View Transform.
So basically, your second OCIOColorSpace node needs to re-encode your image to Non-Linear sRGB by applying the exact inverse of the transformation that the first OCIOColorSpace node applied, i.e. swap the in and out colour spaces. Note that in this particular case, if you keep everything Raw, you will see the exact same image, as no encoding or decoding will occur.
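A minimal pure-Python sketch of that round trip, assuming the commonly quoted linear Rec.709/sRGB to ACES2065-1 (AP0) matrix (chromatically adapted D65 to ~D60); the values and helper names here are illustrative, not what Nuke/OCIO actually runs internally:

```python
def srgb_eotf(v):                 # decode: display code value -> linear light
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def srgb_inverse_eotf(v):         # encode: linear light -> display code value
    return v * 12.92 if v <= 0.0031308 else 1.055 * v ** (1 / 2.4) - 0.055

# Commonly quoted linear sRGB/Rec.709 -> ACES2065-1 (AP0) matrix.
M = [[0.4397010, 0.3829780, 0.1773350],
     [0.0897923, 0.8134230, 0.0967616],
     [0.0175440, 0.1115440, 0.8707040]]

def mat_mul(m, rgb):
    return [sum(m[r][c] * rgb[c] for c in range(3)) for r in range(3)]

def invert_3x3(m):  # cofactor inverse, enough for a 3x3 sketch
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    det = a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)
    return [[(e*i - f*h)/det, (c*h - b*i)/det, (b*f - c*e)/det],
            [(f*g - d*i)/det, (a*i - c*g)/det, (c*d - a*f)/det],
            [(d*h - e*g)/det, (b*g - a*h)/det, (a*e - b*d)/det]]

def to_aces(rgb):        # first node: 1D curve, then matrix
    return mat_mul(M, [srgb_eotf(v) for v in rgb])

def from_aces(aces):     # second node with in and out swapped
    return [srgb_inverse_eotf(v) for v in mat_mul(invert_3x3(M), aces)]

pixel = [0.25, 0.5, 0.75]
round_trip = from_aces(to_aces(pixel))
assert all(abs(a - b) < 1e-9 for a, b in zip(pixel, round_trip))
```

Swapping in and out really does restore the original values, because each stage (1D curve, matrix) is individually invertible.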
Yeah, I’m always lurking around CGI production techniques and decided to jump in and test ACES in practice.
I’m not at my desk, so I can’t test for now.
So, just to be sure I get your (precise) comment, I repeat:
Your point 1: the sRGB EOTF (1D LUT) makes values linear, then the sRGB to ACES-2065-1 matrices convert the primaries. The white point doesn’t change as it’s the same between both standards (D65).
Am I getting it right? So this is not where my image becomes darker (and it makes sense). OK.
Now the reason why I get a darker output is related to the RRT + sRGB ODT, especially the RRT.
The ACES output transform applies a filmic look (S-shaped curve) and is not made to display “already OK” images (like a TV logo overlay).
So if you need to apply a logo on top of your image, you need to go through some inverted RRT/ODT.
Am I right?
I’m still investigating… Thanks for the valuable information!
Not exactly, and just for other readers, as it might not be clear: applying the sRGB EOTF will make the image darker, because the values are pushed down by the decoding curve.
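To put illustrative numbers on it (my own sketch of the piecewise sRGB EOTF, not pulled from any OCIO config), every mid-tone code value drops when decoded to linear light:

```python
# Decoding with the sRGB EOTF pushes mid-tone code values down
# towards linear light, hence the "darker" image.
def srgb_eotf(v):
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

for code in (0.25, 0.5, 0.75):
    print(f"{code:.2f} -> {srgb_eotf(code):.3f}")
# e.g. 0.25 -> 0.051 and 0.50 -> 0.214: well below the input values
```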
In practice, what should happen in the viewing chain is as follows:
1. Nuke’s Read node decodes the 8-bit Display Referred imagery using the sRGB EOTF or the relevant decoding function; it could be BT.709, a log curve for camera footage, Gamma 1.8, 2.2, or anything that decodes, i.e. a decoding Colour Component Transfer Function (CCTF).
2. Nuke’s Viewer node encodes the Scene Referred imagery with the chosen View Transform, e.g. Output - sRGB, and the graphics card sends the encoded imagery to the Display.
3. The Display circuitry applies the EOTF, e.g. the sRGB EOTF if the Display is calibrated to sRGB, and then sends photons to your eyeballs.
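The three stages above can be sketched with toy stand-in functions (these are NOT the real OCIO transforms; in particular, the real “Output - sRGB” View Transform also bakes in the RRT + ODT tone mapping, whereas here it is a plain inverse EOTF):

```python
def read_decode(code):      # 1. Read node: sRGB EOTF decodes to linear
    return code / 12.92 if code <= 0.04045 else ((code + 0.055) / 1.055) ** 2.4

def viewer_encode(linear):  # 2. Viewer: View Transform encodes for the display
    # Plain inverse sRGB EOTF stand-in for the View Transform.
    return linear * 12.92 if linear <= 0.0031308 else 1.055 * linear ** (1 / 2.4) - 0.055

def display_eotf(code):     # 3. Display circuitry decodes back to light
    return read_decode(code)

# With a pure inverse-EOTF viewer, stages 2 and 3 cancel out: the light
# the display emits matches the scene-referred value from stage 1.
scene = read_decode(0.5)
emitted = display_eotf(viewer_encode(scene))
assert abs(emitted - scene) < 1e-9
```

This also shows why the image looks darker when a stage is missing: if the Viewer encoding were skipped, the Display EOTF would be applied directly to the already-linear values.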
Thanks a lot for those curves, I will use them to train our teams!
applying the sRGB EOTF will make the image darker
Technically yes, but it’s not the “reason” why the final output is darker. It simply makes the data linear. Kinda like the ugly gamma (1/2.2). Nothing new under the sun. Am I right?
The whitepoint will also change from D65 to ACES Whitepoint (~= D60)
Wow, thanks for pointing this!
Then the ACES RRT + sRGB ODT takes the darker image and makes it brighter
This is where my image is made darker than the original. As shown in your curve, the output no longer reaches 1.0 but ≃ 0.8, in order to represent values above 1.0 (up to ~16.0) between 0.8 and 1.0, leading to the “filmic” look.
That is why my Nuke viewport output is darker than the original, and that’s also why some colour space manipulations are needed to keep the image output consistent.
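For intuition only, here is a toy rational tone curve (my own illustration; the actual RRT/ODT math is far more involved) that reproduces the behaviour described: scene-linear 1.0 lands around 0.8, and values up to ~16 are squeezed into the remaining headroom:

```python
# Toy tone curve, NOT the ACES RRT/ODT: chosen only so that 1.0 -> 0.8
# and large scene values compress into the remaining display headroom.
def toy_tonemap(x):
    return x / (x + 0.25)

assert abs(toy_tonemap(1.0) - 0.8) < 1e-12
assert 0.9 < toy_tonemap(16.0) < 1.0   # ~16x scene white still fits below 1.0
```

A real filmic curve also has a toe that darkens the shadows, which this simple fraction lacks; the headroom compression above 0.8 is the part being illustrated.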
Thanks for your Nuke script, I will dig into it as soon as I’m on my workstation.
Once again, thanks for the in-depth explanations!
Correct! But it was important to clarify for other people that this transformation in your graph has a very strong effect.
Exactly, the RRT + ODT combination is what creates the Filmic Look of ACES. Note that in OpenColorIO they are modelled as a single transformation split into a 1D + 3D LUT, but those LUTs are not discrete representations of the RRT and ODT blocks.