Archiving and future proofing

Yes, the “expected ODT” is the one used in mastering and therefore for any subsequent rendering of the archived ACES files.

Changing the ODT might be done for a number of reasons, but most likely to repurpose archived content for another display type and/or viewing environment. And changing the ODT might very well require another color correction pass, ranging from a simple trim to something more substantial depending on the target display and viewing parameters.


Hi @chuckyboilo.
First of all, the SMPTE standard ST2067-50 “IMF Application #5: ACES” (which is almost complete and will soon be published) specifies how an ACES IMP (IMF package) should be encoded. As Andy said, it will carry information about the ACES version and the ODT (filenames for now, though it may be their TransformIDs in the future).
Multiple ODTs may also be specified in an IMP (e.g. for content to be mastered on different displays and/or at different dynamic ranges, as per your use-case); technical means to visually verify that the ACES pipeline used is correct are also included within the IMF specs from that standard.
This is because we designed this not just for studio interop, but for long-term archival as well; this is why it internally uses OpenEXR encoding from ACES standard ST2065-4.

As for storage footprint concerns, the ACES IMP is expected to be huge, as the archival copy needs to be, first of all, of maximum quality and future-proofed.
That is again why, as @jim_houston said, uncompressed OpenEXR was chosen, while Apple ProRes and Avid DNxHD/R are, instead, both compressed and proprietary formats.

ACES IMF uses the usual ACES MXF container (structured as per ST2065-5), where each .mxf video file wraps a sequence of EXR frames (structured as per ST2065-4).

This has been discussed here and there and I’m quite keen to put that back on the table: OpenEXR has a few lossless compression schemes that could be used to reduce file sizes. I would be keen to see restated on ACES Central the reasons why those schemes were not adopted, and perhaps to reassess whether disallowing compression entirely in the ACES container is still the right choice.



I agree with you Thomas: at least one mathematically-lossless compression scheme should be added.

I believe the rationale behind the uncompressed choice (dating back to Standard ST2065-4 from 2013) was the poor support, among applications available at the time, for the OpenEXR file format as a whole and, in particular, for a uniform lossless compression algorithm in the EXRs ― at least from the perspective of systems that must read and decompress images for direct real-time playback.

Today, instead, ZIP and RLE lossless variants are common to most EXR engines.
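To illustrate the “mathematically lossless” point: OpenEXR’s ZIP/ZIPS schemes are built on zlib’s DEFLATE (the real library adds a byte-reordering predictor before deflating), so the bit-exact round trip can be sketched with Python’s standard zlib module. The byte buffer below is a synthetic stand-in for raw 16-bit pixel data, not a real EXR:

```python
import zlib

# Synthetic stand-in for raw 16-bit-per-channel pixel bytes; any buffer
# demonstrates the property, since DEFLATE is exactly invertible.
raw = bytes(range(256)) * 1024  # 256 KiB of sample data

compressed = zlib.compress(raw, 6)   # DEFLATE, as used inside EXR ZIP blocks
restored = zlib.decompress(compressed)

assert restored == raw               # bit-exact: nothing is lost
print(len(raw), len(compressed))
```

Unlike the lossy DWA/B44 families, decompressing a ZIP- or RLE-compressed EXR always reproduces the original samples exactly, which is what matters for an archival master.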

That would definitely help for small companies like the one I work at!

A little additional history here: when the ACES Project Committee prepared to take the ACES subset of OpenEXR to SMPTE for standardization (ST 2065-4), the group seriously considered including the various compression schemes. To do so meant it needed to come up with a compression “plug-in architecture” for the standards document, as well as taking on standardization of the compression schemes themselves. That was more than could practically have been done at the time, so the Committee decided to start with the basics: uncompressed ACES2065 image data (ST 2065-1).

The intention was (and is) to add compression support in the standards suite. Any volunteers willing to take this on?

I would love to help with that. This can be done for the compression algorithms that are currently published as standards, but it would only help in practice if they match the algorithms that the VFX/post community uses when dealing with production EXR shots.

Andy, we had the same problem when drafting ST2067-50 (IMF Application #5: ACES). Since there doesn’t seem to be a complete (and standardized) registry of output-referred color spaces, color gamuts, and color encodings, it was not possible to refer to mastering color spaces in the IMF packages simply by naming them as metadata in the XML, so an ACES Output Transform’s CTL name (TransformID) is all that could be used in the Standard document.

There are some SMPTE ULs used for a very narrow set of color spaces in certain MXF specs, but this is largely insufficient compared to today’s ever-growing color jungle. These ULs would need to expand constantly along with mastering colorimetry, which today grows faster than standardization can keep up with.

I think it would also be beneficial, as I have said on many past occasions, for the Academy to put up a public registry of ACES names/values and TransformIDs for ACES core components, standardized somehow so that it can be referenced by other documents (like the SMPTE ST2065 and ST2067 families). Things to put in the registry would ideally be:

  • ACES color-gamut ULs (AP0 and AP1)
  • ACES color space ULs (ACES2065-1, ACEScg, ACEScc, ACEScct, ACESproxy10, ACESproxy12, APD, ADX)
  • ACES version name strings / ULs (as per AMPAS Standard S-2014-002)
  • a hashing system for Academy-official CTLs, so that it is possible to retrieve and validate a CTL in an automatic, persistent way (the .ctl files may just be linked, in this registry, back to the GitHub CTL repository).
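For the hashing bullet, a minimal sketch of what registry-side validation could look like. Everything here is an assumption of mine, not anything from a draft standard: the function name, the stand-in .ctl content, and the idea of publishing a SHA-256 digest next to each TransformID are all hypothetical.

```python
import hashlib
import os
import tempfile

def ctl_digest(path):
    """SHA-256 hex digest of a CTL file's exact bytes; the registry would
    publish this alongside the TransformID and a link to the .ctl file."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# Stand-in for a downloaded CTL file (content is purely illustrative).
with tempfile.NamedTemporaryFile("w", suffix=".ctl", delete=False) as f:
    f.write("// illustrative placeholder CTL content\n")
    path = f.name

digest = ctl_digest(path)
# Validation is just re-hashing the fetched file and comparing against
# the registered digest: same bytes, same digest.
assert digest == ctl_digest(path)
os.unlink(path)
print(digest)
```

Because the digest depends only on the file’s bytes, it survives any change of hosting location, which is what makes the reference persistent.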

I will bring up CTL hashes and other UUID proposals in a separate topic.

Hi Zach - apologies for the seriously delayed reply. Somehow I missed your post.

ACESclip and the IMF packaging spec are connected, albeit loosely. ACESclip is intended to be the metadata carrier that is “born” on the set and travels with the content through the entire production pipeline (with all of its various paths that split and merge back together) so that when the IMF package is created, the essential metadata is sitting right there in the ACESclip file.

Presently, ACESclip isn’t quite at the level of adoption for this to happen, so manual metadata setting is required at the time of IMF package creation (depending on the software tool that does this). Of course, we want to see ACESclip implemented throughout the imaging pipeline and that’s why it’s part of the ACES Logo qualifications.

Heya Andy — thanks for the response! I was worried the question was a bit too open-ended :slight_smile:

Appreciate the clarification! This makes a whole lot of sense: ACESclip generation happening on set permits one to imbue CDLs created on set with meaning! Seems obvious now that I think about it.

Man, I would do anything to live in a world where tracking down what the hell a given plate “is” isn’t such a monumental effort in a mixed-vendor, mixed-camera pipeline… fingers crossed!

Better than crossing fingers - use them to type out emails/tweets/user forum postings requesting ACESclip support from equipment manufacturers and service providers! :grinning:

Thomas: “why one of the [OpenEXR lossless] schemes was not adopted”.

As far as the compression schemes in OpenEXR being used, we took an agnostic approach here. It is an operational decision and requires that you know how your OpenEXRs are being used.

But also to the point: when compressed files are sent between facilities, there is always some likelihood of something going wrong with a chunk, and the damage is lessened if the file is uncompressed ― thus the preference for it in the ACES container. A properly check-summed file using a lossless compression scheme from the EXR library can be a good thing in some places, but adding compression/decompression blocks would slow things down a little. The equation of where the pain lies is constantly changing, though, so evaluating the trade of a bigger file versus better processing is always worthwhile.

Also worth mentioning is the desire to have ACES clips be “Play Anywhere”: the compressed versions didn’t “Play Well” except on Big Iron (assuming you have the bandwidth on the machine’s I/O). The compressed formats still have issues, especially in 4K playback (which is where B44 came from ― as a lossy fix to that problem). The ACES committee always thought that OpenEXR had useful ‘other’ parts, and was always careful never to imply that they couldn’t be used. But by the time you get to interchange and archive, the benefits of uncompressed files seemed to be in the lead. As always, though, changing conditions could mean that it is time to re-evaluate this question. What does everyone think?

My feeling is that EXR files are super hard to manage in smaller studios and on smaller-budget features. Most of our clients prefer to get masters in ProRes, so it’s hard to sell them an OpenEXR archive. I love ACES, but I sometimes feel that you only have the “top tier” of productions in mind.


It could be an unconscious bias.

Certainly, if you start to consider TV, almost no one works uncompressed.

If you start to look at different levels of production from one-man shooter, up to 50+ camera shows, there are going to be different ‘best practices’.

I also agree about getting smaller files to deal with, and this should really be looked at for ACESNext.

Part of the fix may be a need for a more modern compression format in OpenEXR when you look at things like ProRes, H.265, DNx*, etc. The Project Committee didn’t have expertise in that area. It would be nice to have some.


You have a point! And, I hope I didn’t sound too harsh! :wink:

But still, most American TV shows have a lot more budget than what we have in Quebec/Canada. That being said, I doubt that TV producers would like to archive to EXR. Some might; most would find it overkill (in the current context). That will obviously change with the growth of HDR.

Thanks @jim_houston for taking the time to help us out!


I don’t think that having “less” damage in this context is a good thing. I would rather know immediately that the data is trashed, with an obvious defect, than find that we are missing some lines at the bottom when about to push a trailer out. And if it is compressed, well…, it will be faster to send back. :smiley:

From an on-set perspective, as an ICG Digital Imaging Technician, and given the current state of computing: generating lossless OpenEXR would be no larger a data footprint than original Alexa65 ARRIRAW already is for the DIT team on-set/near-set. It has just been deemed too expensive.

ACESclip, as a plug for a SMPTE Standard, in BMD DaVinci Resolve “15” sounds like a spectacular idea for us DITs to show to our DPs, especially during the prep of a show :slight_smile:

I don’t think anybody is suggesting that transcodes to EXR should be done on-set/near-set. That would mean having to transcode all of every take (or make selection decisions you wouldn’t want to be forced into in the pressured environment of a set) and do so at the highest possible quality. I don’t see that as viable or necessary for now. As available processing power, storage capabilities and transfer speeds increase, who knows.

I certainly never thought that.

EXRs are more of a deliverable to VFX in the earliest form, because they keep all of the image quality from the camera. Only rarely do images go right from set to VFX. I am aware that some places prefer to work with the RAW files directly.

I’ve been dabbling with DWAA compression in Resolve. It’s very efficient: it cuts down the size of frames by about 3.5 times (depending on the compression ratio)…

So… I’m wondering. I know that @ACES wants us to use uncompressed, but what would/could happen if I decided to use DWAA compression to enable playback in 4K+?

312 MB/s vs 1.2 GB/s is a HUGE difference.

At a compression ratio of 25 the images are visually lossless…
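For context, the quoted rates line up with simple arithmetic. A quick sketch, assuming a 4096×2160 frame, RGB half-float (16-bit) samples, and 24 fps playback ― all assumptions of mine, since the post doesn’t state them:

```python
# Back-of-the-envelope data rate for uncompressed ST2065-4-style frames.
width, height = 4096, 2160          # assumed 4K DCI resolution
channels, bytes_per_sample = 3, 2   # RGB, half-float (16-bit) samples
fps = 24                            # assumed playback rate

frame_bytes = width * height * channels * bytes_per_sample
uncompressed_rate = frame_bytes * fps   # bytes per second

print(uncompressed_rate / 1e9)      # ~1.27 GB/s, close to the 1.2 GB/s quoted
print(uncompressed_rate / 4 / 1e6)  # a ~4:1 ratio lands near the ~312 MB/s figure
```

The point of the sketch is only that the bandwidth gap scales directly with the compression ratio, so even a modest lossless ratio moves playback from RAID territory into reach of a single fast SSD.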



This thread is still entitled “Archiving and future proofing”, so, despite my being personally in favor of speccing ACES-compliant compressed OpenEXRs (as I have already said elsewhere on ACEScentral), from this thread’s perspective neither playback quality nor on-set operations are really relevant here.

That said, I agree that most facilities would rather keep original raw files up until the first post-production rendering (whether for offline editorial, VFX plates, intermediate-grade renders, mezzanines, or final masters). Despite “being allowed” in the above cases as well, the ST2065-4 format (i.e. “uncompressed, 16bpc, ACES2065-1 EXRs” for short) is not mandated to be created prior to on-set or playback operations.
As per Jim’s comments, most facilities would rather do grading/playback from raw sources to keep maximum quality (* cf. the bullet point below). If your infrastructure can sustain this (which usually means Bayer-pattern, often compressed footage), chances are ST2065-4 is well within its technological capacity as well (in terms of storage, bandwidth, and CPU/GPU power).

  • This is usually good practice, unless other concerns demand an early render from the raw files, like debayering concerns or other processes now called “digital negative development”.