Here’s a question for the pros. As part of an OCIO/ACES implementation, where we are replicating what most studios are doing now for color grading, we want to grade our game in DaVinci Resolve. In short, we want to bake our CC LUTs from Resolve into the engine. To do it correctly, we have to grade before the RRT tonemap takes place, in line with the ACES pipeline.
So (correct me if I’m wrong): we are outputting the game in ACEScc without a tonemap (I’m told by the programming side that the tonemap refers directly to the RRT). In DaVinci, the idea is to take this feed and apply the RRT as a LUT, restoring that tonemap to the game output, and do our color grading session from there. I think the approach makes sense up to that point. But…
We just found out that the tonemap/RRT is somewhat already “baked” into OCIO in the form of .spi3d LUT files along with their shapers. [i.e. $OCIOPath\OpenColorIO-Configs\aces_1.2\luts\Log2_48_nits_Shaper.RRT.Rec.2020]
So my questions are:
- How would I convert this .spi3d into a .cube?
- Do I need to create an RRT LUT for every color space (e.g. RRT.sRGB, RRT.Rec.2020, etc.)?
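(For anyone else hitting this: the usual route is OCIO’s own `ociobakelut` tool, which can bake a config’s transforms, shaper included, straight to a Resolve-readable .cube. But if you just want to translate the lattice data of an existing .spi3d, here is a minimal sketch of a converter. It assumes the common .spi3d layout — a `SPILUT 1.0` header, a channels line, a size line, then one `i j k r g b` entry per lattice point — and note that it converts only the 3D cube itself; the 1D shaper that precedes the .spi3d in the OCIO config is NOT included, so the resulting .cube expects shaper-space input.)

```python
# Minimal .spi3d -> .cube converter sketch (cube data only, no shaper).
# Assumed .spi3d layout: "SPILUT 1.0" header, a channels line ("3 3"),
# a "Nx Ny Nz" size line, then "i j k r g b" entries with explicit
# lattice indices (so entry order in the file does not matter).
# The .cube format wants red varying fastest, blue slowest.

def spi3d_to_cube(spi3d_text: str, title: str = "Converted from spi3d") -> str:
    lines = [ln.strip() for ln in spi3d_text.splitlines() if ln.strip()]
    if not lines[0].upper().startswith("SPILUT"):
        raise ValueError("not a .spi3d file")
    nx, ny, nz = (int(v) for v in lines[2].split())
    if not (nx == ny == nz):
        raise ValueError(".cube only supports uniform cube sizes")

    lut = [None] * (nx * ny * nz)
    for ln in lines[3:]:
        parts = ln.split()
        i, j, k = (int(v) for v in parts[:3])   # red, green, blue indices
        r, g, b = (float(v) for v in parts[3:6])
        # place entries so red (i) varies fastest in the output
        lut[(k * ny + j) * nx + i] = (r, g, b)

    out = [f'TITLE "{title}"', f"LUT_3D_SIZE {nx}"]
    out += ["%.6f %.6f %.6f" % rgb for rgb in lut]
    return "\n".join(out) + "\n"
```

This only shuffles numbers between two text formats; it does no color math, which is why the shaper question matters — without the shaper baked in (or applied as a separate 1D LUT in Resolve), the cube will be sampled in the wrong domain.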
Whatever the default RRT/ACES filmic tonemap is, my final goal is to extract it as a LUT so that it can be applied in DaVinci.
Cheers and thanks a lot for any advice!