Howdy. I’m exploring integrating ACES into a real-time game engine. Our current (standard dynamic range) workflow is something like:
1. Take a post-tonemap 8-bit TIFF/PNG screenshot.
2. Import it into grading software such as Resolve, with settings for sRGB.
3. Generate a 3D LUT as a *.cube file.
4. Apply the *.cube to a source LUT provided by the engine inside Photoshop, in sRGB space (see the sketch after this list).
5. Import the modified source LUT into the engine and apply it post-tonemapping.
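For concreteness, here is a minimal Python sketch of step 4 done outside Photoshop. It assumes the *.cube has already been parsed into a numpy array; the `lut` layout and the `apply_cube` helper are hypothetical names of mine, not anything the engine or Resolve provides:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def apply_cube(lut, pixels):
    """Trilinearly sample a 3D LUT of shape (N, N, N, 3) at each
    RGB pixel in [0, 1], e.g. the engine's sRGB source LUT image."""
    n = lut.shape[0]
    grid = np.linspace(0.0, 1.0, n)
    # Note: *.cube files store red varying fastest; make sure the parsed
    # array's axis order matches the (r, g, b) lookup order used here.
    sampler = RegularGridInterpolator((grid, grid, grid), lut)
    flat = pixels.reshape(-1, 3)
    return sampler(flat).reshape(pixels.shape)
```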
The proposed revised HDR workflow is:
1. Take a pre-tonemap 16-bit EXR screenshot.
2. Import it into grading software such as Resolve, with settings for ACES (ACEScg IDT, most likely).
3. Grade and generate a 3D LUT as a *.cube file.
4. Apply the *.cube to a source LUT provided by the engine, in ACEScg space.
5. Import the modified source LUT into the engine and apply it pre-tonemapping (see the ordering sketch after this list).
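To make the ordering difference explicit, here is a toy, runnable sketch. Every function below is a placeholder of my own (a Reinhard-style stand-in for the tonemapper and RRT+ODT, an identity "grade", a crude log shaper); the engine's real transforms would replace them:

```python
import numpy as np

def tonemap(x):         return x / (1.0 + x)                      # toy stand-in
def rrt_odt(x):         return np.clip(x / (1.0 + x), 0.0, 1.0)   # placeholder
def apply_grade_lut(x): return x                                  # identity "grade"
def shaper(x):          return np.log2(np.maximum(x, 2.0**-16)) / 32.0 + 0.5
def inv_shaper(y):      return 2.0 ** ((y - 0.5) * 32.0)

def sdr_frame(linear_rec709):
    # Current workflow: the graded LUT runs AFTER tonemapping,
    # so it only ever sees display-referred 0-1 data.
    return apply_grade_lut(tonemap(linear_rec709))

def hdr_frame(acescg):
    # Proposed workflow: the graded LUT runs BEFORE the RRT+ODT,
    # wrapped in a log shaper so the 3D LUT only sees 0-1 input.
    return rrt_odt(inv_shaper(apply_grade_lut(shaper(acescg))))
```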
Assuming that our engine implements the ACES RRT + ODT correctly, I suppose that while grading I should be looking at the image with the standard ACES RRT applied as a final node to simulate tonemapping, correct? Will this output make sense on an sRGB monitor, or will I have to set the Resolve project to an sRGB ODT to get the right picture? This part confuses me: it seems that I want my output to be sRGB (for viewing purposes), but I need to output a LUT in ACES space.
About the LUT itself: does it even make sense? Does the proposed pipeline preserve the high dynamic range of the image, assuming the LUT is created in ACES space and applied before tonemapping?
Yes, you're correct: you grade with the RRT+ODT on. What you want to create is an LMT (Look Modification Transform). That could be a LUT, but you have to be careful, because in such a workflow there is stuff you don't see that might reveal itself on different outputs. In any case, you would need a full implementation of ACES in your engine. Also consider that the LMT stage is, as of now, not really implemented in any grading tool, though you can hack it.
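Schematically, an LMT sits in the scene-referred part of the chain, before the Output Transform. A stand-in sketch (the callables are placeholders, not real implementations):

```python
# ACES2065-1 -> LMT -> RRT -> ODT -> display code values.
def render(aces2065_1, lmt, rrt, odt):
    looked = lmt(aces2065_1)  # LMT: scene-referred in, scene-referred out
    return odt(rrt(looked))   # Output Transform last, purely for viewing
```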
If you are going to use a 3D LUT, you can't apply it directly to unclamped float image data. Some kind of shaper (usually logarithmic) is needed to convert the data into something uniform and limited to the 0-1 range. A 3D LUT saved from Resolve in ACES mode will be designed to be applied to ACEScc or ACEScct image data. So if your engine provides an ACEScg CMS pattern to apply the LUT to, you would need to account for this. Can the game engine use ACEScct as the working space to apply the LUT, and therefore supply an ACEScct CMS pattern?
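To make "shaper" concrete, here is the ACEScct lin-to-log encoding from S-2016-001 as a minimal numpy sketch (the constants come straight from the spec; the small floor before log2 is my own guard against zero/negative input):

```python
import numpy as np

ACESCCT_BREAK  = 0.0078125
ACESCCT_SLOPE  = 10.5402377416545
ACESCCT_OFFSET = 0.0729055341958355

def lin_to_acescct(lin):
    """ACEScct encoding: linear segment below the break, log2 above."""
    lin = np.asarray(lin, dtype=np.float64)
    log_part = (np.log2(np.maximum(lin, 1e-10)) + 9.72) / 17.52
    seg_part = ACESCCT_SLOPE * lin + ACESCCT_OFFSET
    return np.where(lin <= ACESCCT_BREAK, seg_part, log_part)
```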
As for the working space of the engine: right now it's linear Rec. 709, and I am trying to put together a pipeline recommendation for the engineering team. Because our engine will be outputting an image in ACEScg space, I will either need to give something back in ACEScg space or convince the engineering team to do the LUT application in ACEScc/ACEScct space. Are you saying that even if I set the output transform in Resolve to ACEScg, it will still only be able to generate a 3D LUT in ACEScc? Is there a way to confirm what space the LUT is saved in?
Resolve only outputs 3D LUTs, and 3D LUTs, as I described above, cannot take linear input. The exported LUT represents whatever is done in the node tree (excluding spatial operations) and does not include the Output Transform. The node tree operates in the working space, which in ACES mode can only be ACEScc or ACEScct.
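Since ACEScg and ACEScct share AP1 primaries, getting from the engine's ACEScg frame into and out of the LUT is just a per-channel transfer-function round trip. A sketch, reusing `lin_to_acescct` and the hypothetical `apply_cube` sampler from earlier in the thread:

```python
def acescct_to_lin(cct):
    """Inverse of lin_to_acescct above (per S-2016-001)."""
    cct = np.asarray(cct, dtype=np.float64)
    lin = np.where(
        cct <= 0.155251141552511,            # = lin_to_acescct(0.0078125)
        (cct - ACESCCT_OFFSET) / ACESCCT_SLOPE,
        2.0 ** (cct * 17.52 - 9.72),
    )
    return np.minimum(lin, 65504.0)          # spec clamps at half-float max

def grade_acescg(acescg_pixels, lut):
    # ACEScg and ACEScct share AP1 primaries, so wrapping the Resolve LUT
    # is a per-channel transfer-function round trip. `apply_cube` is the
    # hypothetical trilinear sampler sketched earlier in the thread.
    cct = lin_to_acescct(acescg_pixels)
    return acescct_to_lin(apply_cube(lut, cct))
```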
I don't know the details of the engine you are using, but if it exports an EXR CMS pattern for you to process into a LUT, that pattern is likely constrained to the 0-1 range. Therefore the image data you feed to the resulting LUT must also be in that range, or it will be clipped. For unclamped scene-linear data, that would mean clipping everything above diffuse white.
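One way around the 0-1 constraint, if the engineers go for ACEScct as the LUT working space, is to interpret the pattern's code values as ACEScct rather than linear. A sketch with an assumed 33-point cube size, reusing `acescct_to_lin` from above:

```python
import numpy as np

N = 33
codes = np.linspace(0.0, 1.0, N)
r, g, b = np.meshgrid(codes, codes, codes, indexing="ij")
pattern = np.stack([r, g, b], axis=-1)  # (N, N, N, 3) identity pattern in [0, 1]

# Interpreted as ACEScct, code value 1.0 decodes to ~222.9 scene-linear,
# so the pattern covers far above diffuse white instead of clipping at 1.0.
print(acescct_to_lin(1.0))
```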