New to this community, after reading up as much as I could over the past week or so.
I already got the gist of ACES, but recently had to dive deep(er) as I'm trying to implement it at our studio.
But my team is a bit reluctant to adopt it the way I've read it should be implemented.
How I understand it now is:
- You'd convert the live plates, in our case from an Alexa Mini.
- In Nuke, set the Read node's Colorspace to Input - ARRI - V3 LogC (EI800).
- Set up a Write node with 'write ACES compliant EXR' ticked.
- Write out EXR sequences with ACEScg data in them.
- Read these EXRs back as raw/linear and comp the VFX on top of them.
So that's the workflow as I understand it (roughly sketched in Nuke Python below).
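For reference, here's a minimal Nuke Python sketch of those steps, assuming an ACES OCIO config is active in the project settings. The file paths are hypothetical, and the `write_ACES_compliant_EXR` knob name is my guess from the UI label, so double-check it in your Nuke version:

```python
import nuke

# Read the camera plate and tag it with the ARRI input transform.
read = nuke.nodes.Read(file="/plates/shot010/plate_%04d.ari")  # hypothetical path
read["colorspace"].setValue("Input - ARRI - V3 LogC (EI800)")

# Write the converted plate out as an ACES-compliant EXR sequence.
write = nuke.nodes.Write(file="/plates_aces/shot010/plate_%04d.exr")  # hypothetical path
write.setInput(0, read)
write["file_type"].setValue("exr")
# Knob name assumed from the UI label 'write ACES compliant EXR';
# confirm the exact name in your Nuke version.
write["write_ACES_compliant_EXR"].setValue(True)
```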
But my team asked: "Why not comp straight onto the footage and not write those EXRs out first?"
So my question is whether you could indeed do just that.
Render the VFX in ACEScg space and, in Nuke, use the plate as we receive it, just converting it with the Read node.
Then put the VFX on top.
And then, I guess, use an OCIOColorSpace node to convert the result to ACES2065-1 to send to final grading.
That would skip the step of converting the plate to ACES EXR files first (see the sketch below).
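As a sketch only (same hypothetical paths; colorspace names as they appear in the ACES 1.x OCIO configs, so yours may differ), that alternative would look something like:

```python
import nuke

# Plate as delivered, converted on the fly by the Read node.
plate = nuke.nodes.Read(file="/plates/shot010/plate_%04d.ari")
plate["colorspace"].setValue("Input - ARRI - V3 LogC (EI800)")

# CG render, already in ACEScg.
cg = nuke.nodes.Read(file="/renders/shot010/vfx_%04d.exr")
cg["colorspace"].setValue("ACES - ACEScg")

# Comp the VFX (A) over the plate (B).
merge = nuke.nodes.Merge2()
merge.setInput(0, plate)  # B input
merge.setInput(1, cg)     # A input

# Convert the comp result to ACES2065-1 for delivery to grading.
to_aces = nuke.nodes.OCIOColorSpace()
to_aces.setInput(0, merge)
to_aces["in_colorspace"].setValue("ACES - ACEScg")
to_aces["out_colorspace"].setValue("ACES - ACES2065-1")
```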
Are there downsides to this?