ACES workflow: Pre-convert to ACEScg or work in ACES2065-1

Hi guys,

I work for a VFX company. We currently use the Adobe line of products, namely After Effects and Photoshop, but our artists are slowly making the switch to Nuke; I would say we are 70% of the way there. We were hoping to avoid working in ACES until this transition was complete, as I’ve been told we might run into problems working with ACES in AE. However, we finally had a client ask us for ACES.

I’ve done a fair bit of research into the workflow, and read up on the forum as much as possible, but I still have questions about the workflow that remain unanswered. We’ve tested a few workflows and settled on one that works for us, assuming it adheres to the standard.

We are relying heavily on OCIO to provide any colour transforms needed. I’m unsure yet if this will cause problems when we have heavier comps, and how render/caching times will be affected.

Also, what is currently the accepted practice? Is it pre-converting all our assets to ACEScg before ingest and converting them back to ACES2065-1 for delivery, or should we ingest everything in ACES2065-1, use OCIO to convert to ACEScg, and then apply the ODT for viewing on top of that?

Lastly, what is the proper way to convert footage to the ACES colour space? I’ve managed to do this in Resolve, but I was hoping this can also be done in AE (in 32-bit PIZ EXRs), since we already have scripts in place that facilitate this process, but that would mean OCIO would be used to provide this colour transform. Can it do the job?

Any input is greatly appreciated.

Thanks,

Mark

Hi @mrendo,

Excellent, this will make your life better :slight_smile:

I’m not sure what you mean by before ingest here, but if you are on your way to adopting ACES, I would probably recommend converting your assets to ACEScg; it will be one step less to do with your renders and will pay off in the long run. Similarly, you ideally do want to do your compositing in ACEScg as it behaves better than ACES2065-1. You will probably still have to do some operations, e.g. keying, in the camera vendor recommended colourspace or even the native camera space, but this should just be transient and temporary. If you use the OCIO config, it expects the working space to be ACEScg.
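
For example, with the OCIO v2 Python bindings, a conversion from ACES2065-1 to the ACEScg working space looks roughly like this (a sketch only; the colourspace names are the ones from the ACES 1.x config and may differ in yours):

```python
import PyOpenColorIO as OCIO

# Assumes $OCIO points at an ACES OCIO config; the colourspace names
# below are the ACES 1.x config ones and may differ in other configs.
config = OCIO.GetCurrentConfig()
processor = config.getProcessor("ACES - ACES2065-1", "ACES - ACEScg")
cpu = processor.getDefaultCPUProcessor()

pixel = [0.18, 0.05, 0.02]  # a scene-linear ACES2065-1 sample
print(cpu.applyRGB(pixel))  # the same colour encoded in ACEScg
```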

I think you gave the answer above! Nuke obviously :slight_smile: You can automate that easily, batch it on the farm, etc…
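
A minimal sketch of what such a farm job could look like with Nuke’s Python API (paths, frame range and colourspace names are placeholders to adapt to your pipeline):

```python
# convert_to_acescg.py -- run headless on the farm with:
#   nuke -t convert_to_acescg.py
import nuke

# Placeholder input path and frame range.
read = nuke.nodes.Read(file="/ingest/plate/plate.####.exr",
                       first=1001, last=1100)
read["raw"].setValue(True)  # bypass Nuke's own conversion, OCIO does it below

convert = nuke.nodes.OCIOColorSpace(in_colorspace="ACES - ACES2065-1",
                                    out_colorspace="ACES - ACEScg")
convert.setInput(0, read)

write = nuke.nodes.Write(file="/ingest/plate_acescg/plate.####.exr")
write["file_type"].setValue("exr")
write["datatype"].setValue("16 bit half")
write["raw"].setValue(True)  # write the converted pixels as-is
write.setInput(0, convert)

nuke.execute(write, 1001, 1100)
```

If Nuke is already running with the ACES OCIO config, you can equally just set the Read and Write colourspace knobs instead of the raw + OCIOColorSpace combination.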

@Thomas_Mansencal thank you for your reply,

We have coded a system (like I assume most higher-end companies have) that sorts any elements we receive from the client or shoot ourselves; that is what I mean by before/after ingest. So any elements ingested into this system will be exclusively in ACEScg. Does this work, or does it break the standard?

Ok, interesting. Can you give me some examples of what operations might require this and why? Also, can footage be reconverted using OCIO from ACEScg to the camera vendor colourspace/native camera space?

Interesting. Not sure how it would work in Nuke; is there any documentation about it? So Nuke is the only way to do this? Should we avoid AE for this, and if so, why?

Working with ACES in After Effects is pretty tricky because AE’s color management is based on ICC color profiles, which isn’t how anyone else seems to do it. So at least as of early this year, when I last did tests, the OCIO plugin for After Effects actually needs AE’s built-in color management to be turned completely off in order to work correctly, as OCIO becomes your entire color management system. You do still want 32-bit color depth in your project settings, otherwise AE will not work correctly with EXRs (which are 16-bit as standard, but use a special “half-float” numbering system, as I’m sure you’ve read; 32-bit EXRs are possible but generally unnecessary and not part of the ACES specification).
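
For what it’s worth, writing half-float EXRs outside of AE is trivial too; a minimal OpenImageIO sketch (the buffer and filename are just placeholders):

```python
import numpy as np
import OpenImageIO as oiio

# Placeholder buffer; in practice this would be your scene-linear pixels.
pixels = np.zeros((1080, 1920, 3), dtype=np.float32)

spec = oiio.ImageSpec(1920, 1080, 3, "half")  # 16-bit half-float channels
out = oiio.ImageOutput.create("plate.0001.exr")
out.open("plate.0001.exr", spec)
out.write_image(pixels)  # the float32 data is converted to half on write
out.close()
```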

Regarding the pre-ingestion transcoding, I believe that according to the strict ACES spec your EXR files are always supposed to remain ACES2065-1, and each piece of software is tasked with converting to the other ACES formats for the current step of the process, then converting back to ACES2065-1 for export. So your first step would be transcoding your camera footage to ACES2065-1 EXR file sequences, then importing those into an After Effects project, where you would use OCIO to both convert to ACEScg and apply the RRT+ODT for your calibrated monitor. Then, when you’re ready to export, you would need to use another OCIO instance in your composition to convert back to ACES2065-1. In your Output Module settings, I believe you have to specify 32-bit color depth, and you can embed an ACES2065-1 color profile, but last I looked AE was not yet able to do that in a spec-compliant way (something about the color space metadata not being recorded correctly).
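
One reassuring detail: ACES2065-1 and ACEScg are both scene-linear and related by a fixed 3x3 matrix, so the convert-to-ACEScg-and-back round trip is lossless to within float precision. A quick PyOpenColorIO check (colourspace names again from the ACES 1.x config):

```python
import PyOpenColorIO as OCIO

config = OCIO.GetCurrentConfig()  # assumes $OCIO points at an ACES config
to_cg = config.getProcessor("ACES - ACES2065-1",
                            "ACES - ACEScg").getDefaultCPUProcessor()
to_ap0 = config.getProcessor("ACES - ACEScg",
                             "ACES - ACES2065-1").getDefaultCPUProcessor()

sample = [0.18, 0.5, 1.2]  # arbitrary scene-linear values
print(to_ap0.applyRGB(to_cg.applyRGB(sample)))  # ~= sample
```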

Sorry if I was not clear; by CG assets I was implicitly meaning your textures, HDRIs, and anything you will use directly in your renders.

Yes, sure, but I’m still not sure what you were referring to as before ingest: before you have ingested any data, it can be in any state and any encoding; that is why you ingest it, to conform it to a known state and encoding as per your pipeline requirements. If you want to do things by the book, you should use ACES2065-1 at this stage, but from a practical standpoint…

…the reality is that depending on the amount of data you pipe into your compositing package, e.g. Nuke, it might still be beneficial to have everything readily available in the working colourspace, i.e. ACEScg, because you will not pay the cost of the extra conversion. Sure, a limited number of inputs can be handled with OCIO, but as you get higher resolution images and/or deep images, you will probably appreciate having as few operations as possible in your graph. Similarly, for stills for texturing, I would convert them into ACEScg so that they can be imported straight away without hassle by the texture artists.
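
For the stills/textures, a batch conversion along these lines with OpenImageIO would do; a sketch only, assuming OIIO was built with OCIO support and $OCIO is set, with placeholder paths and colourspace names:

```python
import glob
import OpenImageIO as oiio

# Placeholder paths; adjust the source colourspace to the actual encoding
# of your stills ("Utility - sRGB - Texture" is the ACES 1.x config name).
for path in glob.glob("/textures/incoming/*.tif"):
    src = oiio.ImageBuf(path)
    dst = oiio.ImageBufAlgo.colorconvert(src,
                                         "Utility - sRGB - Texture",
                                         "ACES - ACEScg")
    out_path = path.replace("/incoming/", "/acescg/").replace(".tif", ".exr")
    dst.write(out_path, "half")  # store as 16-bit half-float EXR
```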

As you build your assets library, you will find that it is easier to reuse assets if everything is on the same baseline; no conversion in shaders means that if you swap your renderer or whatever, things will be more predictable.

So while this is probably not what was intended by AMPAS originally, production experience has shown that some gamuts are more appropriate for CG than others, and BT.2020/ACEScg fare better than ACES2065-1. So from there it should make sense to adopt them as your working space and only use ACES2065-1 for exchange and archival. As a matter of fact, big VFX facilities tend to exchange ACEScg-encoded imagery directly :slight_smile:

Cameras try to be colorimetric and thus honour the Maxwell-Ives criterion; in practice, they tend to record radiant energy (let’s not call that colour since it is not visible) outside the visible spectrum and the ACES2065-1 gamut. The result is that you get negative values; typical cases are green screens, and because negative values can be a pain to deal with, this is an instance where you could have to resort to using what I call the native camera space, which is essentially the raw data after debayering and before conversion to a working RGB colourspace.
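
To make the negative values point concrete, here is a small numpy sketch using the published AP0 to AP1 matrix; the AP0 green primary is a perfectly legal ACES2065-1 value, yet it lands outside the ACEScg gamut:

```python
import numpy as np

# Published ACES2065-1 (AP0) -> ACEScg (AP1) matrix; both spaces share
# the same white point, so the conversion is a plain 3x3 matrix.
AP0_TO_AP1 = np.array([
    [ 1.4514393161, -0.2365107469, -0.2149285693],
    [-0.0765537734,  1.1762296998, -0.0996759264],
    [ 0.0083161484, -0.0060324498,  0.9977163014],
])

green_ap0 = np.array([0.0, 1.0, 0.0])  # the AP0 green primary
print(AP0_TO_AP1 @ green_ap0)  # [-0.237, 1.176, -0.006]: R and B go negative
```

A saturated green screen pushes values towards exactly this corner of the gamut, hence the pain when keying in ACEScg.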

Nuke has a good CLI: Command-Line Operations

You can use After Effects but it will not scale very well.

Cheers,

Thomas
