I am currently experimenting with CinemaDNG ACES workflows in Resolve. I’ve written a sample DCTL but I’m unsure of some things.
Does Resolve automatically and correctly debayer CinemaDNG raw to ACES? I’ve seen a number of different workflows for DJI CinemaDNG material and they all look different, so I’m unsure which is the correct method.
I’ve followed these steps for this example workflow:
Resolve YRGB project
Import media and debayer CinemaDNG to P3-D60 Linear
Apply DCTL (P3-D60 linear to AP1 linear, then run through the ACEScct encoding function; a rough sketch of this step follows the list)
Apply ACES transform on 1st node in Resolve (IDT : ACEScct CSC → R709)
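For reference, here is a rough Python sketch of what the DCTL step is meant to do numerically. The matrix is only a placeholder (identity), not a real P3-D60 → AP1 matrix, and the ACEScct constants are the published S-2016-001 values:

```python
import numpy as np

# Placeholder only: a real P3-D60 linear -> AP1 (ACEScg primaries) matrix
# would go here (see the colour-science sketch further down the thread).
M_P3D60_TO_AP1 = np.eye(3)

def lin_to_acescct(x):
    """ACEScct encoding per S-2016-001 (linear segment below 0.0078125)."""
    x = np.asarray(x, dtype=np.float64)
    return np.where(
        x <= 0.0078125,
        10.5402377416545 * x + 0.0729055341958355,
        # maximum() only silences log2 warnings for values the other branch handles
        (np.log2(np.maximum(x, 1e-10)) + 9.72) / 17.52,
    )

def p3d60_lin_to_acescct(rgb):
    """Matrix to AP1 linear, then ACEScct encode - what the DCTL does per pixel."""
    ap1_lin = M_P3D60_TO_AP1 @ np.asarray(rgb, dtype=np.float64)
    return lin_to_acescct(ap1_lin)

print(p3d60_lin_to_acescct([0.18, 0.18, 0.18]))  # mid grey -> ~0.4136
```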
The results look roughly accurate, but the colour is slightly teal tinted, and on the scopes the red channel clips at the bottom end.
I’ve used ACEScg as my target space, as it has the AP1 primaries but is still encoded scene-linear. I thought I didn’t need a CAT transform because both colourspaces have the same white point; I’ve tried with and without, and the results still look tinted.
If anyone has any ideas what I am doing wrong or can point me in the right direction that would be a big help.
Although in your Colour code you have changed the white point of your duplicated colour space, you have not set it to use a derived XYZ matrix (use_derived_matrix_RGB_to_XYZ), so it is still using the matrices from the original P3-D65 space.
Not to be pedantic, but you would actually need chromatic adaptation, because the ACES white point is not D60 (D60 has an xy chromaticity of 0.321626, 0.337737, while ACES white is 0.32168, 0.33767). Therefore a more appropriate matrix to convert from P3-D60 to ACEScg would be:
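A minimal sketch of how such a matrix can be derived with colour-science, assuming the 0.4.x API, that Resolve's "P3-D60" means the P3 primaries with a CIE D60 white, and a CAT02 adaptation (the actual CAT used for the matrix above may differ):

```python
import numpy as np
import colour

# CIE D60 chromaticity quoted above.
D60_XY = np.array([0.321626, 0.337737])

# A "P3-D60" space: P3 primaries with a D60 white, using derived matrices
# so the new white point is actually honoured.
P3_D60 = colour.RGB_Colourspace(
    "P3-D60 (derived)",
    colour.RGB_COLOURSPACES["P3-D65"].primaries,
    D60_XY,
    use_derived_matrix_RGB_to_XYZ=True,
    use_derived_matrix_XYZ_to_RGB=True,
)

# RGB -> RGB matrix into ACEScg, including a chromatic adaptation between
# the D60 white and the (slightly different) ACES white.
M = colour.matrix_RGB_to_RGB(
    P3_D60,
    colour.RGB_COLOURSPACES["ACEScg"],
    chromatic_adaptation_transform="CAT02",
)
print(np.round(M, 6))
```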
These work much better. I’ve loop-tested the DCTL against Resolve’s ACES and the two now match.
I’ll have a look at my Python code and see if I can replicate the matrix you have. What is the process for debugging and verifying/testing CTL and DCTL transforms to ensure that the output is correct?
Thanks, I thought that the DCTL needed the matrix transposed.
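On the verification question, one possible approach (just a sketch, not an official ACES test procedure) is to push a batch of values through colour's reference conversion and through the same per-channel arithmetic a DCTL would do, then compare; the transpose question is simply a row- versus column-vector convention. P3-D65 is used here as a stand-in source space:

```python
import numpy as np
import colour

SRC = colour.RGB_COLOURSPACES["P3-D65"]  # stand-in source space
DST = colour.RGB_COLOURSPACES["ACEScg"]
M = colour.matrix_RGB_to_RGB(SRC, DST, chromatic_adaptation_transform="CAT02")

rgb = np.random.default_rng(0).uniform(0.0, 1.0, (1000, 3))

# Reference conversion: colour applies the matrix as out = M @ rgb (column vectors).
ref = colour.RGB_to_RGB(rgb, SRC, DST, chromatic_adaptation_transform="CAT02")

# DCTL-style: each output channel is a dot product of a matrix *row* with
# (r, g, b). If a kernel instead multiplies a row vector by the matrix,
# then the transpose of M is what should be pasted into the DCTL.
dctl_style = rgb @ M.T

print(np.max(np.abs(ref - dctl_style)))  # expect ~0 (float rounding only)
```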
When debayering the CinemaDNG from the Inspire 3 drone to P3-D60 Linear, DJI recommend a 1.4 exposure change in the RAW tab in DaVinci Resolve. This is part of their CinemaDNG to D-Log workflow.
I assume that this exposure adjustment correctly maps the DJI sensor values to the expected D-Log values, so that middle grey falls where it should and the exposure matches the non-raw D-Log media.
Do I need to incorporate something similar to correctly map the P3-D60 linear data to ACEScct, so that middle grey falls in the expected place? If so, what is the best way to calculate this factor?
E.g.:
P3 linear 18% reflectance: 0.18
ACEScct 18% normalised CV: 0.414
0.414 / 0.18 = scaling factor of 2.300, to be applied to the R, G, B values before any transformation.
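For reference, 0.414 is just the ACEScct code value of linear 0.18 (log segment of the S-2016-001 encoding), so here is a quick check of where unscaled and pre-scaled mid grey would land:

```python
import numpy as np

# ACEScct log segment (values above 0.0078125), constants from S-2016-001.
print((np.log2(0.18) + 9.72) / 17.52)         # ~0.4136: mid grey with no pre-scale
print((np.log2(0.18 * 2.30) + 9.72) / 17.52)  # ~0.482: mid grey after the proposed 2.3x scale
```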