ACESclip use cases

In the first ACESclip virtual working group meeting, a decision was made to focus on two use cases for ACESclip.

  1. ACES viewing pipeline documentation/setup
  2. Archival

The consensus was that use case #1 should be the initial focus and we should build up from the strawman specification rather than try to modify the existing version of ACESclip.

Current characteristics of the strawman are:

  • Supports viewing pipeline descriptions starting from camera files or ACES files.
  • References transforms only via TransformIDs and does not attempt to embed them.
  • Supports the concept of LMT application spaces via <acesToLmtWorkingSpace> and <lmtWorkingSpaceToAces> tags.
  • Does not attempt to keep versions of the ACESclip (other than noting creation and modification times).
  • Does not specify which files the LMT should be applied to (no timeline info).
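To make the strawman's shape concrete, here is a minimal, purely hypothetical sketch; the working-space tags are the ones named above, while the enclosing structure and every TransformID are invented for illustration only:

```xml
<!-- Hypothetical sketch of a strawman-style viewing pipeline; not the actual schema.
     All TransformIDs below are illustrative placeholders. -->
<ACESclip>
  <creationDateTime>2019-05-01T10:00:00Z</creationDateTime>
  <modificationDateTime>2019-05-02T16:30:00Z</modificationDateTime>
  <idt transformID="IDT.VendorX.CameraY.a1.v1"/>             <!-- starting from a camera file -->
  <acesToLmtWorkingSpace transformID="ACEScsc.ACES_to_ACEScct.a1.0.3"/>
  <lmt transformID="LMT.VendorX.ShowLook.a1.v1"/>            <!-- referenced by ID, never embedded -->
  <lmtWorkingSpaceToAces transformID="ACEScsc.ACEScct_to_ACES.a1.0.3"/>
  <rrt transformID="urn:ampas:aces:transformId:v1.5:RRT.a1.0.3"/>
  <odt transformID="urn:ampas:aces:transformId:v1.5:ODT.Academy.RGBmonitor_100nits_dim.a1.0.3"/>
  <!-- no timeline info: nothing states which files or frames the LMT applies to -->
</ACESclip>
```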

The strawman was an attempt to build the simplest possible description of an ACES pipeline but may not have taken into account various use case requirements. What use case requirements may have been missed?

Discussion during the meeting covered various methods to version the ACESclip, including a proposal to use blockchain, as well as the pros and cons of carrying timeline info.

@Wolfgang_Ruppel recently brought to our attention a new IMF mechanism to append arbitrary timeline info in IMF packages. This seems like it might have potential application for the LMTs in the ADSM (IMF App #5).

The new member of the IMF family is actually RDD 47 “Isochronous Stream of XML Documents (ISXD) Plugin” [1].
It basically defines a new Virtual Track and the associated MXF track file that carries exactly one XML instance per frame, plus zero or more static XML files providing global context.
There can be multiple ISXD Virtual Tracks per composition.

[1] https://doi.org/10.5594/SMPTE.RDD47.2018

As mentioned in my post from 1 month ago, below are two proposals for an alternative ACESclip syntax. I just renamed .xml to .html because otherwise ACEScentral denies the upload of XML files.
I am therefore re-uploading the files from that post. As you can see, they propose to implement a few things that were discussed today.

TB-2014-009_sample.ACESclip.html (2.7 KB)
The “simpler” ACESclip:

  • links to the Sony camera-native MXF file using the MediaID as ClipID;
  • references an external “non-standard” CLF as IDT;
  • stacks it first with a “balancing” CDL (explicitly declaring ACEScct as its working space);
  • then stacks it with a pre-grading LMT described as an external CLF;
  • finally describes one output pipeline made up of:
      • a CLF-referenced LMT,
      • a standard (Academy-supplied) RRT referenced by its official TransformID,
      • a standard ODT referenced by its official TransformID.
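Since the attachment has to be downloaded (and renamed back to .xml) to be read, here is a rough structural sketch of what the “simpler” file proposes; the element and attribute names below are illustrative assumptions, not a verbatim excerpt of the attached file:

```xml
<!-- Illustrative sketch of the "simpler" ACESclip described above; names and values are hypothetical. -->
<ACESclip>
  <ClipID>A001C002_190501_R1AB</ClipID>               <!-- Sony MXF MediaID used as ClipID -->
  <idt ref="file">idt_sony_custom.clf</idt>           <!-- external, "non-standard" CLF as IDT -->
  <lmt pos="1" ref="cdl" workingSpace="ACEScct">balance_onset.cdl</lmt>   <!-- balancing CDL -->
  <lmt pos="2" ref="file">pregrade_look.clf</lmt>     <!-- pre-grading LMT as external CLF -->
  <outputPipeline>
    <lmt ref="file">show_lut.clf</lmt>
    <rrt transformID="urn:ampas:aces:transformId:v1.5:RRT.a1.0.3"/>
    <odt transformID="urn:ampas:aces:transformId:v1.5:ODT.Academy.P3D60_48nits.a1.0.3"/>
  </outputPipeline>
</ACESclip>
```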

changing-metadata-sample.ACESclip.html (6.2 KB)
The more “complex” ACESclip also implements optional features (color pedigree) like:

  • authoring metadata like name, creation date, ACES version, etc.;
  • one IDT with an applied flag (which means the IDT is already baked into the footage and is not for re-application);
  • 3 different color-pipelines (one for previz, one for theatrical, one for HDR mastering), each with its own stack of LMT and Output Transform components, referenced from the <TransformLibrary> section of the XML;
  • an optional <History> section, with historical/forensic data for that clip, chronologically sorted by revisions.
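A bare skeleton of how those optional sections nest (again, the tag names are indicative only, not necessarily the exact ones used in the attachment):

```xml
<!-- Hypothetical skeleton of the "complex" ACESclip; tag names are assumptions. -->
<ACESclip>
  <Info name="shot010" creationDateTime="2019-05-02T16:30:00Z" acesVersion="1.1"/>  <!-- authoring metadata -->
  <idt applied="true" ref="transformID">IDT.VendorX.CameraY.a1.v1</idt>             <!-- already baked into the footage -->
  <TransformLibrary>
    <!-- transforms defined once, referenced by the pipelines below -->
  </TransformLibrary>
  <colorPipeline name="previz"/>
  <colorPipeline name="theatrical"/>
  <colorPipeline name="HDR-mastering"/>
  <History>
    <!-- historical/forensic revisions, chronologically sorted -->
  </History>
</ACESclip>
```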

Multiple color-pipelines in one ACESclip may be handled when it is imported/loaded/associated in a vendor product – either automatically, integrated from the show pipeline, or manually, with the artist picking the color-pipeline name from a GUI displaying all those read from the ACESclip, according to how they need the footage to be viewed.

Two alternative proposals were set forth in the VWG, particularly for mastering/archival purposes:

  • Using supplemental IMF packages to link different essences (different frame rates, formats, trims, etc.) as well as different ACESclip files (or pointing to different color-pipeline sections of one ACESclip containing all the color-pipelines).
  • Employing IMF OPL to link to different ACESclips (or parts of one ACESclip) – although this only works for differentiating Output Transforms over the same essences.

As a final note, this is the SMPTE working group on Extensible Time Label (TLX) mentioned by Jim during the last call.

Hi Walter,

Thanks for joining the call today.

On the first call, we decided it would be better to start from the Strawman spec, rather than try to pare back the complexity of TB-2014-009, in order to keep things simple and identify what features are missing based on use cases and requirements.

I think today, with your help, we identified two features that are not addressed in the Strawman:

  • How do you associate the ACESclip XML with a clip? At a bare minimum, we could leverage ALE/EDL like we do with CDL, but should we include the ClipID schema from TB-2014-009?
  • How do you store history vs. know what is ‘current’ for someone receiving a shot?

So my questions for you:

1- Is it possible to create your “simple” ACESclip example using the current Strawman spec?
2- What is missing from the Strawman spec that your “complex” ACESclip requires?

Thanks,
Chris

Hi Chris.
I read the Strawman implementation and started from its sample ACESclip, which I upload here for reference:
strawman_sample.ACESclip.html (1.7 KB)

your question 1
My first proposed, simple case can be integrated with the current Strawman by adding to it:

  • a <ClipID> element describing the bond between the present ACESclip and the footage file(s);
  • a <rrt> element;
  • the possibility for <idt>, <lmt>, <rrt> and <odt> elements to refer to [possibly a concatenation of] official ACES components (referenced by their TransformIDs), ASC CDLs, or external files (CLF, CTL, or other LUT file formats);
  • the capability for the <lmt> element and its children to specify ACES colorspaces for their components (e.g. LUTs or CDLs can be based on ACEScct, as an implicit conversion to/from ACES2065-1 is assumed via metadata).

The above simple Strawman extension also allows LMTs and ODTs to be defined as concatenations of several items. Here is a slight adaptation of the sample ACESclip from the Strawman specs, modified to include a link to the footage and the use of a custom RRT+ODT.
The adaptation of the Strawman to the first example is here:
strawman_Walter-sample_01.ACESclip.xml (1.4 KB)
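To make the concatenation idea concrete, here is a tiny hypothetical sketch (element names, attributes and the TransformID are placeholders, not lifted from the attached file) of an <odt> built from a custom CLF followed by an official ODT:

```xml
<!-- Hypothetical sketch of an output transform defined as a concatenation of items. -->
<odt>
  <transform pos="1" ref="file">custom_trim_pass.clf</transform>
  <transform pos="2" ref="transformID">urn:ampas:aces:transformId:v1.5:ODT.Academy.Rec709_100nits_dim.a1.0.3</transform>
</odt>
```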

your question 2
My second, more complex example with the color-pedigree (including multiple output pipelines, more metadata, and optional historic/forensic data) can also be derived from the Strawman by increasing its complexity. Namely:

  • introduce a <History> top-level element whose children contain historic data that are no longer current; the timeline for such data can be recovered by sorting their <modificationTime> elements.
  • introduce a <TransformLibrary> top-level element, where individual color-pipeline elements can be listed and each given a UUID internal to the ACESclip; the <idt>, <lmt>, <rrt> and <odt> elements inside the <acesPipeline> top-level element are then simple ordered lists of transform UUIDs (sketched below).
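A compact, hypothetical sketch of those two extensions (UUIDs shortened and all values invented):

```xml
<!-- Hypothetical sketch: transforms defined once in <TransformLibrary>, then referenced by UUID. -->
<ACESclip>
  <TransformLibrary>
    <transform uuid="a1b2c3d4" ref="cdl">balance_onset.cdl</transform>
    <transform uuid="e5f6a7b8" ref="file">show_lut_1.clf</transform>
    <transform uuid="c9d0e1f2" ref="transformID">urn:ampas:aces:transformId:v1.5:RRT.a1.0.3</transform>
  </TransformLibrary>
  <acesPipeline>
    <lmt>a1b2c3d4 e5f6a7b8</lmt>   <!-- ordered list of transform UUIDs -->
    <rrt>c9d0e1f2</rrt>
  </acesPipeline>
  <History>
    <Revision>
      <modificationTime>2019-04-28T09:15:00Z</modificationTime>
      <!-- superseded pipeline data, recoverable by sorting these timestamps -->
    </Revision>
  </History>
</ACESclip>
```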

@CClark
I am uploading here the reference ACESclip for use case #2, starting from the Strawman as the base XML schema.
strawman_Walter-sample_02.ACESclip.xml (10.0 KB)

First, a bird’s-eye view of a fictional use case and the invented history of its color pipelines is in order. Then, I will get down to the fine details in the XML file, including specific line numbers (in bold).

Scenario (using real product names but fictional facility names/hostnames/info):

  • A clip is shot with a RED camera, ingested and balanced on-set with a CDL.
  • There is a DI pre-grade on the original footage 3 days later. Meanwhile:
      • a proxy DNxHR is rendered for the editorial department;
      • a de-Bayered transcode to ST2065-4 OpenEXRs is prepared for neutral VFX plates (to be viewed with an additional OoG LUT helping with compositing), and
      • a similar EXR render (without the OoG LUT) is prepped out of the final cut for the DI.

The additions to Strawman are:

  • Optional <clipID> top-level element to link the ACESclip to actual footage. Its child elements are: <file> or <seq>, specifying the filename or frame sequence; <clipName>, to link to a predetermined clipname/tapename metadata (which is file-format dependent – @joseph said he would look for candidates in ARRI files); <metadata>, to link to a specific named OpenEXR metadata value (whose key is in the key= attribute); <modificationTime>, to refer to the file’s last modification time (not recommended for frame sequences).
  • Optional <transformLibrary> top-level element, to store and modularize repetitive transforms used in the ACESclip (to be referenced from there using UUIDs) – not used in this ACESclip sample.
  • Optional <History> top-level element to store historical/forensic data about past conditions of the ACES clip and its color processing (e.g. storing versions where the clip was in camera-native format and a specific IDT was applied). I proposed in the last VWG call to move inside it everything that is no longer valid, and keep all that is current outside of it.
  • The <idt>, <lmt>, <rrt> and <odt> elements now have a ref= attribute that specifies whether each component is defined via a CTL TransformID, via a LUT (file) or via a CDL. Optionally, source= and target= attributes may be used to implicitly convert to/from ACES color-spaces other than ST2065-1.
  • Multiple <lmt>s can be concatenated, in which case a mandatory pos= attribute specifies the stack ordering for each LMT (a compact sketch of these additions follows this list).
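Here is that sketch, pulling the additions together (filenames, metadata keys and values are invented; only the element and attribute names follow the list above):

```xml
<!-- Hypothetical sketch of the Strawman additions listed above; all values are invented. -->
<clipID>
  <seq idx="#" min="1001" max="1176">shot010_hdr.#.exr</seq>   <!-- frame sequence -->
  <metadata key="reelName">REEL3_SHOT010</metadata>            <!-- named OpenEXR metadata -->
  <modificationTime>2019-05-06T14:22:00Z</modificationTime>
</clipID>
<lmt pos="1" ref="cdl" source="ACEScct" target="ACEScct">balance_onset.cdl</lmt>
<lmt pos="2" ref="file" source="ACEScct">show_lut_2.clf</lmt>
<odt ref="transformID">urn:ampas:aces:transformId:v1.5:ODT.Academy.P3D60_48nits.a1.0.3</odt>
```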

The main section of the XML (lines 6−36) links the ACESclip to an actual HDR-graded EXR sequence, compliant with ST2065-4, and describes three pipelines for different viewing scenarios, all employing show LUT #2 (acting on ACEScct), externally referenced as a CLF file:

  • line 16 — The EXR frame sequence is defined by means of a filename (using the idx= attribute to specify the placeholder character for frame-sequence digits, and its range via the optional min= and max= attributes).
  • lines 17−18 — A further logical link between the ACESclip and the footage is specified using two EXR custom metadata given in the <metadata> elements.
  • lines 20−25 — Pipeline #1, for generic VFX previz sRGB@D60 monitors, using the same OoG LUT that compositors used.
  • lines 26−30 — Pipeline #2, for generic theatre screening via “PQ” (ST2084) projectors.
  • lines 31−35 — Pipeline #3, for HDR mastering in a generic Rec.2020 colorimetry.
  • line 37 — This example does not employ the <transformLibrary> section: this makes the ACESclip harder to read, but simpler to implement. A third version of this identical ACESclip will be uploaded, implementing the Transform Library.
  • lines 38−242 — The <History> section includes the optional color-pedigree of the ACESclip, in reverse order – yet sorting is governed by the <modificationDateTime> in each revision.
  • Each <Revision> element in the pedigree replicates the top-level structure as it was at some point in the past. Please read the Scenario section above to reconstruct it:
      • lines 211−241 — A 3-part R3D clip is color-preprocessed via RED SDK parameters from the camera and offloaded from the RED Mag for archival onto an LTFS tape. No viewing, just Data Management, therefore no ACES Output Transform is specified at all.
      • lines 169−210 — The R3D clip is balanced on-set via a CDL created in ACEScct and sent to a Rec.709 monitor through a LUT-box.
      • lines 133−168 — 3 days later, the original R3D clip is pre-graded in a P3@D60-calibrated theater with the DoP; both the on-set balance CDL and a prepared show LUT #1 (acting on ACEScc) are used as LMTs.
      • lines 89−132 — An Avid DNxHR™ proxy is rendered, compliant with ST2065-5, for the editorial department (using HLG monitors), with the same CDL and show LUT #1 as LMTs.
      • lines 63−88 — De-Bayered EXR plates are prepared for VFX. Compositors will use a carefully crafted OoG technical LUT (originally created in ACEScg) to help them with exposure constraints. The same show LUT #1 is used on P3@D60 monitors.
      • lines 39−62 — De-Bayered, balanced but ungraded EXR plates, from a hypothetical reel-3 final cut, are also sent for final DI grading with a P3@DCI projector. No LMT is applied, as the DoP and the DI colorist decided to reset all grading.

As I wrote above, a third ACESclip sample will be uploaded, identical to this use case, but employing the Transform Library for greater accessibility and re-use of transforms (to avoid some of them being redundantly re-written, as in the present ACESclip).

In due time, I can volunteer to write an updated Strawman specification including every detail of both the simple and complex ACESclip proposals. Please let me know what you think.

As a last comment, I can give a live explanation of either the restricted or the complete ACESclip derived from the Strawman at the next ACESclip VWG.

Answers to Chris’ questions below:

  1. Yes, it is possible to create a “simple” ACESclip using the current Strawman (re-introducing <clipID>)…
  2. In the more “complex” ACESclip representation, an (optional) <history> element is introduced, where items no longer current are timestamped and moved into it (and eventually forgotten). Only elements outside of <history> are mandatory and relevant for immediate/automatic/default processing.

I can give a live demo of ACESclip at the next VWG meeting, showing what the XML looks like and how it changes across 6-7 steps of an end-to-end pipeline: on-set, pre-grading, editorial, grading, compositing, mastering, archival.

Furthermore, attached here is a tentative draft of an ACESclip specification that I based on the Strawman, but including the clipID.

S-2019-009__ACESclip_W.Arrighetti_v1.1.1.pdf (635.0 kiB)

Please consider chapters 1, 4, 5 and app. A to be mostly copy-n-pasted from the original ACESclip document, whereas chapters 3 and 6 and apps. B and C are completely new.

The document is a preparation for the more complete ACESclip (which will include the historic/forensics optional component), as well as additional features for some of the XML elements.
However, if you stick with just Required elements (and their attributes), the implementation is essentially based on Strawman, with the only new introduction being the <clipID> element for binding with footage.

As regards the samples in my posts above from last week, you will find the “simple” ACESclip example in Appendix B (strawman_Walter-sample_01.ACESclip.xml), and the more “complex” ACESclip example (with “historic” color-pedigree) in Appendix C (strawman_Walter-sample_02.ACESclip.xml).


Here is a further update, with full historical/forensics elements included.

  • Appendix A is currently unfinished; it will contain the full ACESclip XSL.
  • Appendix B contains the “simple” ACESclip use case (one double LMT, one color pipeline, no historical data).
  • Appendix C contains a complex ACESclip use case (including the full color-pedigree); it also shows how to link to an IMF package (both an output-referred master and an App #5 ACES master).

S-2019-009__ACESclip_W.Arrighetti_v0.7.pdf (612 kiB)

Any program capable of processing history/forensic elements (either reading or writing) will do so. Otherwise, the product will simply ignore the whole <history> top-level element (but leave its content unchanged).
Whenever the ACESclip is processed, if something changes, a new <colorPipeline> element is generated, while its “old” content is appended as a new <revision> element in the historic section (i.e. the top-level <history> element); a small sketch of this follows the list below. Things that can be retrieved from there are:

  • how a specific clip was stored in the past: which filename, file format, tapename (up to the original camera name and metadata, if and when it was a camera-native file);
  • what Input Transform (and its parameters) the camera-native file was processed with (this would otherwise be lost once the camera-native file is rendered into an ACES color-managed EXR/MXF file);
  • what Output Transforms (and their parameters) the footage has been viewed through at the various stages of production/post-production (this may be recorded even for files no longer in those viewing pipelines, like the on-set ones);
  • what LMTs (including CDLs) the footage was processed with, which may have been baked into a later render of the file;
  • whether we are looking at a baselight-grade, a technical-grade, a creative-grade or a finished version of the film;
  • who touched that file, from which system and application (and their versions), and from which domain/hostname.
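As a hedged illustration of that revision mechanism (tag names follow the discussion above; all values and TransformIDs are invented), after a re-grade the XML might evolve like this:

```xml
<!-- Hypothetical before/after sketch of the history mechanism described above. -->
<!-- Current state: the active pipeline lives outside <history>. -->
<colorPipeline name="DI-grade">
  <lmt ref="file">show_lut_2.clf</lmt>
  <odt ref="transformID">urn:ampas:aces:transformId:v1.5:ODT.Academy.P3DCI_48nits.a1.0.3</odt>
</colorPipeline>
<history>
  <!-- the superseded on-set pipeline was appended here, timestamped, when the DI grade replaced it -->
  <revision>
    <modificationDateTime>2019-04-28T19:05:00Z</modificationDateTime>
    <colorPipeline name="on-set">
      <lmt ref="cdl">balance_onset.cdl</lmt>
      <odt ref="transformID">urn:ampas:aces:transformId:v1.5:ODT.Academy.Rec709_100nits_dim.a1.0.3</odt>
    </colorPipeline>
  </revision>
</history>
```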