ACESclip, CLF and a standard metadata archive

Hello everybody,
First of all, I have to say how glad I am that the Academy decided to create an official discussion board, where engineers from every software/hardware company can meet technicians from every stage of production and openly discuss working methods, issues and solutions.

As a major supporter of ACES and its aims, I keep a constant eye on the software and hardware currently available that supports the ACES framework. And while it is certainly true that more and more systems on the market are embracing ACES from the color-management point of view, I feel there is still some resistance to adopting the Common LUT File format and the ACESclip metadata archive.

I wonder if we should push a bit harder on these essential pillars of the ACES workflow. The reasons I believe this is really important are the same ones that brought me to support ACES in the first place: to avoid mistakes in a production pipeline by offering end users a solid, vendor-free way to manage their content; to go beyond current working methods and technical possibilities (making the system future-proof); and to standardize the way software and hardware talk to each other, changing how they work in the background to ensure smooth interoperability across all production stages.

In more detail, I believe that CLF and ACESclip need to be strongly implemented because:

  1. Finding a common, standard way to express, store and track color transformations, along with other essential metadata, across an entire pipeline is as important as standardizing the color transformations themselves. How can we expect the average end user, with perhaps limited or simply not strong enough experience in color science and workflow management, to implement ACES without any trouble using all the available color transformations that make it work? For instance, creating CDLs on set and expecting those color decisions to be consistently represented in a DI room is already possible through the ACES framework, BUT how many people in the world know how to convert data from ACESproxy to ACEScc and then to ACES2065? One could say that the ACES-compatible software should do that, but how will that ever happen if people don't know those transformations are needed in the first place? There are many tools out there that will do the math easily and perfectly, but someone still has to tell them how (or when) and what to do. I believe it is part of the Academy's aim to make ACES work seamlessly from the user's perspective, avoiding any mistake that could arise from inaccurate or poor use of the ACES color science. Ultimately, most productions and post-production houses, most on-set technicians, DPs, colorists and post supervisors who want to use ACES for its color science don't want to know what ACEScc or ACESproxy are; they are artists and technicians aiming to get the best from their footage, and that's it. Saving them the effort of reading and studying the ACES documentation and asking software vendors for support will greatly expand the number of medium, small and independent productions out there that will use ACES in the future.

  2. On the other hand, talking a bit more deeply about ACESclip, there is a technical issue we face every day when trying to standardize workflows (not only between different productions, but mostly within pieces of the same production): the different terminology and approaches that every camera and software vendor uses when defining the same metadata values. I understand that it is not the Academy's aim to standardize how software and cameras manage their metadata, but I believe it could be VERY important for the Academy to suggest a single approach for the essential metadata currently needed to develop an image pipeline properly:

  • camera vendors like ARRI, SONY and RED use different fields to express the same metadata values, such as white balance, clip name, ISO, shutter, fps, clip UUID, lens data and so on;

  • most of the time, those data are recognized only by the camera vendor's own post-production software (or SDK), and very few fields get read properly; on top of that, most post-production software introduces yet more new fields to express the same data, converting the few values it can read from the camera side into its own scheme and making the data impossible for the camera itself to recognize afterwards;

  • for this very reason, in this Tower of Babel scenario, making real use of those data within a production workflow is really hard;

  • even a single-camera-model workflow becomes hard when you try to use a simple piece of data across your pipeline, for instance using the white-balance value to automatically switch your software to the right IDT, or linking such data to some essential piece of your (I wish) automated pipeline. The whole thing becomes hopeless on multi-camera-model shows, where the possible combinations of metadata field name and actual value become limitless.
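To make the Tower of Babel problem concrete, here is a minimal sketch of the kind of field-name translation table that every facility currently has to maintain by hand. The vendor field names and the unified schema below are hypothetical, for illustration only; they are not taken from any real camera SDK.

```python
# Hypothetical vendor-specific field names mapped onto one unified schema.
# None of these keys are real SDK identifiers; they only illustrate the problem.
VENDOR_FIELD_MAP = {
    "arri": {"WhiteBalance": "white_balance", "ExposureIndex": "iso", "SensorFps": "fps"},
    "red":  {"Kelvin": "white_balance", "ISO": "iso", "FrameRate": "fps"},
    "sony": {"WhiteBalanceTemp": "white_balance", "EI": "iso", "CaptureFps": "fps"},
}

def normalize(vendor: str, raw: dict) -> dict:
    """Translate a vendor-specific metadata dict into the unified schema,
    dropping any fields the mapping does not cover."""
    mapping = VENDOR_FIELD_MAP[vendor]
    return {mapping[k]: v for k, v in raw.items() if k in mapping}

print(normalize("red", {"Kelvin": 5600, "ISO": 800, "Lens": "50mm"}))
# {'white_balance': 5600, 'iso': 800}
```

A standardized ACESclip vocabulary would make tables like this unnecessary: every tool would read and write the same field names, and the "Lens" value above would not silently fall on the floor.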

I would love to see ACESclip standardize a set of essential data (again, like white balance, lens data, clip-related data) and have the ACESclip .xml sidecar implemented directly by the camera vendors and in the post workflow, so one can start generating and tracking those data from set to post easily. Don't get me wrong, cameras and software may keep their own ways of doing things, but if there were a way to merge metadata together by using the same fields and keep them linked to the content, it would be simply amazing.
So, along with LUTs and color transformations, we would be able to use vendor-related metadata to automate certain parts of the workflow, or to make some data easier to use in other steps (e.g. lens data in VFX!).
I know that OpenEXR headers are big enough to handle all these data, and I'm currently pushing some vendors to add (or avoid losing) certain data in their .exr encoding, but the problem is more about knowing how to define those data than about actually carrying them.
I really believe we should work together to standardize this aspect and ask the camera and software vendors to use those naming conventions to define their metadata.

Looking forward to hearing back from you,

Best regards,

Francesco Giardiello

Hi Francesco.
It’s really nice to meet you here, as well as in real life, to discuss the real problems and real benefits of ACES.

I absolutely agree with you about the importance of stressing the Common LUT Format (CLF) and the ACESclip container formats as unified metadata structures ― from on-set to delivery. We conceived them from the beginning and worked really hard to come up with something that is meaningful and simple to implement and use in the existing ecosystem, in the widest possible range of workflow settings. These two elements, though ―CLF and ACESclip― understandably have a slower implementation time than the other ACES components, but they will get there.

I think this thread that you started, Francesco, is also a perfect place to introduce these two ACES components a bit more to the general public.

The Common LUT Format has been around for many years and is an XML specification for describing colour transformations based on 1D/3D LUTs and other simple formulas; its main intent is to standardise a LUT file format over the plethora of existing ones. CLF also includes a minimal description of the source and target colour spaces and their encodings (e.g. bit depth and arithmetic types for input and output code values), as well as the internals of the interpolation algorithm, which may or may not be left to the LUT-processing implementation. Some products already support this LUT format.
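For readers who have never seen one, here is a minimal, illustrative sketch of what a CLF file looks like ― a trivial identity 1D LUT. The `id` values are made up, and real files typically carry much larger arrays and additional process nodes; consult the official CLF specification for the normative schema.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Illustrative only: a trivial identity 1D LUT expressed in CLF form.
     The id attributes are placeholders, not real transform identifiers. -->
<ProcessList id="example-clf-0001" compCLFversion="2.0">
    <Description>Identity 1D LUT, 10-bit integer in and out</Description>
    <LUT1D id="lut-1" name="identity" inBitDepth="10i" outBitDepth="10i">
        <Array dim="2 1">
            0
            1023
        </Array>
    </LUT1D>
</ProcessList>
```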

I would anticipate that future versions of CLF will implement more complete descriptions of the source and target colour spaces to make LUT interpretation even less ambiguous, such as UUIDs tied to specific LUTs, including and especially those implementing the ACES factory-default Input Transforms (IDTs) and Output Transforms (ODTs).

The ACES clip-container is another XML-based file meant to “sidecar” every clip file, frame-per-file sequence, or folder-structured clip in an ACES workflow, representing its complete metadata (not just the colour-related parts). As it is an external file, it can act as a manifest for virtually any file format, including camera raws (thus not only the ACES OpenEXR sequences defined in SMPTE ST 2065-4).
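Conceptually, the sidecar pairs a clip identifier with the transforms and colour-space information that apply to it. The sketch below is hypothetical ― the element names are illustrative and are not taken from the official schema ― but it conveys the general shape of such a manifest:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical sketch of an ACESclip sidecar. Element names are
     illustrative, not the normative TB-2014-009 schema. -->
<acesMetadataFile>
    <clipID>
        <clipName>A001_C002</clipName>
        <uuid>urn:uuid:00000000-0000-0000-0000-000000000000</uuid>
    </clipID>
    <!-- which Input Transform converted the camera original into ACES -->
    <inputTransform transformID="IDT.ARRI.Alexa-v3-logC-EI800.a1.v2"/>
    <!-- current colorimetry of the associated essence file -->
    <workingSpace>ACES2065-1</workingSpace>
</acesMetadataFile>
```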

Future releases will incorporate stronger features aimed at keeping the clip-container consistent across its creation and transport stages, and across its use by different products. This feature set includes UUIDs and advanced features like the colour pedigree, which keeps track not only of the clip’s current colorimetry in its associated file form, but of some or all of the colour-pipeline stages it has undergone during processing. This may help the auditing of different colour processes, which is especially useful in at least two cases:

  • presence of clips that have had (or will have) partially out-of-ACES processing (e.g. archived/archival footage);
  • multi-facility workflows where each facility may use different internal processes, even in the context of ACES.

Thanks again for the opportunity to talk about these often overlooked but truly paramount features. As I said, I hope we can both contribute and work with other product partners to better integrate these advanced features of ACES versions 1 and up.

I would love to learn more about how the Academy suggests using ACESclip within a show’s production, and about standard metadata naming conventions moving forward.

In the case of metadata, one example I’ve seen is that while VFX vendors are expected to work in ACEScg, the recommended delivery of plates and final deliverables is ACES2065 (AP0). It would be helpful for deliverables to have a metadata key indicating the color primaries of the EXR. Is there a recommended/standard metadata key for this? I’ve seen the source color primaries of the camera indicated (such as DragonColor2 or ALEXA Wide Gamut), but this information could become even more confusing for the consumer of these plates if there isn’t a standard key indicating the current colorspace of the EXR.

On top of this, if VFX houses are using ACEScg as their working colorspace, how will a playback application know what primaries the EXR container is in within an ACES framework? If there’s a standard naming convention for including this information, I’d love to use it!

Upon reviewing the sample “golden reference” images the Academy provides, I did notice there’s an “acesImageContainerFlag” with a value of 1. Is this what’s intended to identify a “true” ACES container? I assume this flag means more than just color primaries? The ACEScg reference example does not have this metadata key, and nothing other than the folder name indicates its colorspace.

In the case of ACESclip, is this something that would store the LMT used for each production?

Hi Michael.
The acesImageContainerFlag metadata refers only to the SMPTE ST 2065-1 colour space that you mentioned first (AP0 primaries). This flag should be 1 only in OpenEXR files storing an image in that exact colour space. As for specifying colour primaries in file metadata, the OpenEXR format already has tags for the chromaticity coordinates of the colour primaries and white point. These can be used to describe other RGB-based colour gamuts ― not just the ACES ones.
This metadata behaviour is defined in SMPTE ST 2065-4.

So applications writing EXRs in a colour-managed setting (which should be the correct scenario) must also write these chromaticity coordinates into the file header. For the same reason, colour-managed applications reading EXRs are expected to honour those tags and act accordingly (e.g. applying the proper input transformations required by their own working colour spaces).

Going straight to the point on self-describing ACES colour spaces in OpenEXR:

  • for ACEScg, the AP1 primaries should be specified in the file header and the acesImageContainerFlag shall be set to 0;
  • for SMPTE2065-1, the AP0 primaries should be specified as well in the file header and the acesImageContainerFlag shall be set to 1.
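The two cases above can be sketched in code. The AP0 and AP1 chromaticity coordinates below are the published ACES values; the helper function itself is a hypothetical convenience for illustration, not an official API, and a real implementation would write these values into the EXR header through an OpenEXR library.

```python
# CIE xy chromaticities (x, y) for red, green, blue and the white point.
# These are the published ACES AP0 and AP1 values.
AP0 = {
    "red":   (0.73470, 0.26530),
    "green": (0.00000, 1.00000),
    "blue":  (0.00010, -0.07700),
    "white": (0.32168, 0.33767),
}
AP1 = {
    "red":   (0.71300, 0.29300),
    "green": (0.16500, 0.83000),
    "blue":  (0.12800, 0.04400),
    "white": (0.32168, 0.33767),
}

def exr_header_hints(colorspace: str) -> dict:
    """Return the chromaticities and acesImageContainerFlag an EXR header
    should carry for the given ACES colour space (hypothetical helper)."""
    if colorspace == "ACES2065-1":
        return {"chromaticities": AP0, "acesImageContainerFlag": 1}
    if colorspace == "ACEScg":
        return {"chromaticities": AP1, "acesImageContainerFlag": 0}
    raise ValueError(f"unknown colorspace: {colorspace}")

print(exr_header_hints("ACEScg")["acesImageContainerFlag"])  # 0
```

Note that only SMPTE ST 2065-1 (AP0) images get the flag set to 1; an ACEScg file is identified purely by its AP1 chromaticities in the header.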

And, of course, the ACESclip container can describe all of this in a separate (“sidecar”) file. The reason for using an external file is that ACESclip may be a manifest for a clip in any file format (e.g. camera raw files, or final-delivery formats), so you cannot rely a priori on any file format’s capability to internally hold specific and less-specific metadata.