What is ACESclip?

ACESclip needs some love. The original 2014 spec was not widely adopted, and a simpler straw-man version was circulated in 2017, but there remains a larger question most of the industry has when I bring up ACESclip, which is: “what is that?”

That brings us here, to take a fresh look and answer the initial questions:

  1. What should ACESclip do? What should ACESclip not do?
  2. What software or hardware should be responsible for creating or handling it?

The original intent was to store clip-level metadata which, in its simplest form, describes how an ACES clip should be viewed. The word ‘clip’ here could mean a shot, VFX plate, or an entire movie.

As an example, if someone receives an ACES clip, there are unknowns such as the following (a rough sketch of the corresponding metadata appears after the list):

  • Which version of ACES should be used?
  • Is there an LMT I should be viewing this clip with?
  • Which ODT represents the ‘creative intent’?
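
Purely for illustration, here is a minimal, hypothetical sketch of the kind of clip-level metadata that would answer those questions. The element names are made up for this sketch and are not taken from the 2014 spec or the 2017 straw man:

```xml
<!-- Hypothetical ACESclip sidecar; names and values are illustrative only -->
<ACESclip>
  <acesVersion>1.1</acesVersion>                 <!-- which version of ACES -->
  <lookTransform file="show_look.clf"/>          <!-- LMT the clip should be viewed with -->
  <outputTransform>(creative-intent ODT name)</outputTransform>
</ACESclip>
```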

As you can see, this metadata is important, especially as we move towards ACES Next, where (I think) there will be more LMTs or ‘looks’ which deviate from the standard RRT, and more ACES projects will need to be archived.

Please have a happy and relaxing holiday, and start gathering your thoughts for discussion here. We will likely have our first VWG call in late January - more details to follow.

Thanks,
Chris


As Chris points out, version 1 tries to represent the transform stack, including both which IDT was used (for forensics) and the full ACES Viewing Transform used while grading. Because LMTs are custom and not widely distributed, a mechanism for transmitting the LUTs is provided, whether that is a merged LUT with everything or just an LMT component LUT. Transporting LMTs was one of the significant goals (and shortcomings) of the system, but it collided directly with the fact that Looks are a competitive component between color corrector vendors. The lack of support is not just because of an XML format (which does everything it was intended to do) but rather because there is no agreement that some things should be portable. There are too many options, and the instructions for assembling a viewing kit are unclear. Is it better to have a single LUT representing the mastering display as used, or an LMT component directly in a LUT which can be sampled with the ACES system version?

Also, what about carrying the ST 2086 and dynamic metadata within the ACES system now? There is no standard archival carrier for this metadata when it is created or saved.

And is XML still the right choice? Should JSON be used instead, or alongside it?

Lots of things to think about.

Jim

Hello Chris.
To me, the most important steps to develop for the ACESNext ACESclip are:

  • Standardising the logical bonding between an ACESclip’s ClipID key and the UID of the footage it refers to, identifying specific header metadata to use as UIDs for every major image/video file format (particularly camera-native ones)
  • Possibility to specify the IT (and chosen parameter values), the LMT or LMTs, and the OT (and chosen parameter values) to use with the clip
  • Allow for different OTs according to pre-determined output devices
  • Allow several customized components (IT, LMTs, RRT, ODTs), i.e. not those in any Academy-released ACES distribution, to be embedded into the ACESclip (as either CLFs or CTLs), or hard-linked to it as external files via digest-UIDs
  • Allow an additional log of which application made which change to the clip (log-style)

My extended comments follow in the next post.

p.s.: I totally second Jim as to JSON (or, even better, JWTs, i.e. _JSON Web Tokens_) being explored as a valid alternative or complement to XML.

Here I try to reply to your questions one by one with my personal views.
In the next reply I will draft my personal view on immediate actions.
“ACESclips” should be a sidecar to any clip – whether stored in an ACES format (ST2065-4/-5) or not. It is in the latter case (an ACESclip describing non-ACES footage), however, that their power becomes even more useful.

Just think of camera-native raw files (ARRIRAW, R3D, etc.). They are usually brought deep into a postproduction pipeline “as is”, because of the time / computation / storage cost of transcoding into EXR, and to preserve the original picture and metadata.
A notable exception is de-Bayering camera RAWs at the source to ensure pixel-precise plates are delivered to all departments (e.g. VFX).

When such camera-native files are used extensively deep into a workflow, they may undergo many ACES technical and creative processes until a master is finally produced in another file format. Along the pipeline, color and imaging metadata are stacked up, which different production software (conforming, compositing, color, video localization) may refer to.
Such information is, roughly:

  • which ACES version is being used overall
  • which exact IT (and its parameters) was used to enter the clip into ACES
  • which CDL was used to monitor on-set (it might never be used again, yet it is still relevant)
  • which LMT(s) were applied at different steps (possibly stacked one on top of the other)
  • multiple OTs (and their parameters) that may have been pre-determined for specific, generic or particular devices/environments to view the clip with

The most important aspect, to me, is how to logically link the ACESclip to its footage. Despite <ClipID> and other tags already drafted in the strawman, a very important piece is still missing: without a recommended practice (actually honored by all Product Partners) for linking an ACESclip to the UID specific to every major image file format, there is no way to maintain such a logical link.
It is Ye Same Ol’ problem of “which camera metadata do I put in an EDL tapename?”, as well as “which ALE column do I conform an edit against?”.
Once it is clear which metadata in a file header the ACESclip links to, no loose bonds based on naming conventions or filesystem proximity are needed.
I suggest, however, keeping in the ACESclip specification that, as far as possible, an ACESclip should also be bound to its video clip or file-per-frame sequence by naming-convention enforcement. For example (exploring both XML and JSON options; a sketch follows the examples):

  • To a RED clip A174_C001_1901FF_001.R3D the ACESclip should be A174_C001_1901FF.ACESclip.xml (hint: the RED tapename A174_C001_1901FF, on its own, is not sufficiently unique to serve as ClipID)
  • To a CinemaDNG file sequence reel_[0000000-9999999].dng the ACESclip should be reel.ACESclip.json
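
A rough sketch of what the XML flavour of that bond could look like; the element and attribute names are invented for this example (not the strawman’s actual schema), and the UID value is a dummy placeholder:

```xml
<!-- A174_C001_1901FF.ACESclip.xml, illustrative sketch only -->
<ACESclip>
  <ClipID>
    <!-- primary bond: the UID taken from the format-specific header field
         (which field to use per file format is exactly what needs standardising) -->
    <SourceMediaUID format="R3D">urn:uuid:00000000-0000-0000-0000-000000000000</SourceMediaUID>
    <!-- secondary, looser bond via naming convention -->
    <ClipName>A174_C001_1901FF</ClipName>
  </ClipID>
</ACESclip>
```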

Places to specify ACES version as well as IT, LMT and OT are already in place in the strawman. They should only be “extended” to allow:

  • parameters in case a parametric Input Transform is used
  • parameters in case parametric Output Transforms are used (read below)
  • multiple Output Transforms in case a team at another step of the pipeline (e.g. camera, pre-grading, or an OTT distributor who will later stream the content) pre-arranged a series of predetermined viewing devices to view the clip with, for example:
    • 1 standardized on-set monitor
    • 1 standardized HDR monitor for editorial
    • 1 DCI projector
    • 1 primary grading HDR monitor
    • 1 standardized monitor for compositing
    • 1 common Rec709 output for production-people previz on iPads/etc
    • 2 standardized profile HD TV models for HD deliveries
    • 1 standardized profile for DolbyVision HDR deliveries
    • 1 standardized profile for HLG HDR deliveries
  • different LMTs in case different creative looks are to be compared. In this case the ACESclip may reference different external CLFs, or they may be included in the XML/JSON in a separate area of the ACESclip, each with its UUID. In the color-pedigree section of the ACESclip they are simply recalled by that UUID (either stacked one on top of the other, or as alternatives that may be selected from a UI); a sketch of this mechanism follows after this list. This is the same mechanism used for referencing files in a DCP: there is an XML namespace listing all the assets, where a UUID is linked to every file (the PKL), and another XML namespace where the timeline is described in terms of those assets (the CPL).
  • While ACES components contained in Academy-distributed, official ACES releases may be referenced by just their name (or, better, via the hash of their original files – see my earlier proposal on a TransferHash field in CTLs), the possibility to link customized, non-official components should be given. They may either be referenced as external files (so, again, a UUID/hash mechanism realizes the logical bond), or be embedded inside the ACESclip (which is useful particularly for XML ACESclips and external files, as namespaces can be reliably used)
  • Examples of such externally- or internally-referenced ACES components may be:
    • customized ITs (e.g. non-Product Partner cameras, or inverses of unofficial OTs).
    • customized RRTs (of course this should be avoided as much as possible),
    • customized ODTs (e.g. exotic or particular monitors where the ODT may also account for their profiling)
    • other forms of creative or technical transforms to be considered as (parts of) LMTs
  • ACESclip may be generated as early as near-set (or upon ingestion by the post facility) and may be modified in later stages (image pre-processing, pre-grading, color grading, compositing, finishing, mastering). Again, particularly for XML ACESclips, where attributes can be added to XML elements, I suggest every update is associated with a UUID that is reported as an attribute on any element added or modified during that update. In a final part of the ACESclip a list indexed by those UUIDs reports logged data for each update, such as, at least:
    • date/time of the change (according to the system’s local time, of course)
    • hostname, username, OS (+version) and software application (+ version) doing the change
    • optional comments/notes that can be hand-filled
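
To make the UUID-referencing and update-log mechanisms above a bit more concrete, here is a rough, hypothetical XML sketch. The section, element and attribute names are mine (not from the strawman), and all UUIDs, dates, hostnames and application names are dummy placeholders:

```xml
<ACESclip>
  <!-- assets area: components referenced by UUID (external file or embedded CLF) -->
  <ReferencedComponents>
    <Component uuid="urn:uuid:11111111-1111-1111-1111-111111111111"
               type="LMT" file="lookA.clf" hash="(digest of the file)"/>
    <Component uuid="urn:uuid:22222222-2222-2222-2222-222222222222"
               type="LMT" file="lookB.clf" hash="(digest of the file)"/>
  </ReferencedComponents>

  <!-- color-pedigree area: transforms recalled by UUID; updateID marks who touched what -->
  <ColorPedigree>
    <LookTransform ref="urn:uuid:11111111-1111-1111-1111-111111111111"
                   updateID="urn:uuid:aaaaaaaa-0000-0000-0000-000000000001"/>
    <!-- alternative look that a UI could offer instead of the one above -->
    <LookTransform ref="urn:uuid:22222222-2222-2222-2222-222222222222"
                   alternative="true"
                   updateID="urn:uuid:aaaaaaaa-0000-0000-0000-000000000002"/>
  </ColorPedigree>

  <!-- update log: one entry per updateID used above -->
  <UpdateLog>
    <Update uuid="urn:uuid:aaaaaaaa-0000-0000-0000-000000000001"
            dateTime="2019-01-15T10:32:00" host="grading01" user="colorist"
            application="(grading app + version)" note="initial look"/>
    <Update uuid="urn:uuid:aaaaaaaa-0000-0000-0000-000000000002"
            dateTime="2019-01-16T09:05:00" host="grading01" user="colorist"
            application="(grading app + version)" note="alternative look for review"/>
  </UpdateLog>
</ACESclip>
```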

I am reporting here an initial proposal for action to resolve the first point in the above bullet-list.

As regards the ACESclip ClipID bond, the Academy may form a Working Group comprising cinema/broadcast camera manufacturers (not just those that are Product Partners, but also those with an interest) and file-format maintainers.
By file-format maintainers I mean both organizations (e.g. ILM for OpenEXR, Adobe for DNG/TIFF/PSD, Apple for QuickTime, etc.) and standardization bodies like SMPTE and ISO/IEC for formats like MP4, MXF, JPEG, JPEG 2000, PNG, …
In case no one shows up for a given file format, it should be clear that the WG will decide, in absentia, for that file format as well.

The WG will deliver a table where, for every file format, a specific metadata field in its header is pinpointed to fill in ACESclip’s ClipID.
In case only “optional” metadata can be advised, the WG will add a recommended best practice to make sure the metadata is created and maintained throughout, which will then be handed off to ACES Product Partners.

Thank you for sharing your thoughts, Walter and Jim.

I’m making note of these for a wider discussion, and hope you can join our first VWG call on Jan 30th.

Any others who have thoughts, feel free to post here (or start a new thread) prior to then.

Thanks,
Chris

This is just a brief note on how color metadata should be shared across ACESclip and CLF.
First of all, CLF is a LUT format, therefore color metadata is welcome there as long as it relates to applying a series of color transformations to footage. Such color transformations should be applicable standalone, not related/linked to a particular clip; the ACESclip container exists for that.

The ACESclip container cross-references a specific footage asset (by means of ClipID) with several pieces of color metadata, among which one or more CLFs may be included.

Since ACESclip and CLF are both XML-based, their cross-referencing mechanism should be similar to how XML-DSig (i.e. the W3C standard for XML digital signatures) works.
That is, alternatively:

  • CLF detached from ACESclip, where each CLF has its own UUID, which is referenced from within the ACESclip. Ideally, the referenced .clf file(s) should stay as filesystem-close as possible to the .ACESclip.xml file; or
  • CLF enveloped by ACESclip (or ACESclip enveloping CLF, which is the same), where the whole XML namespace of one or more CLF(s) is wrapped by the ACESclip and referenced internally from the ACESclip via standard XPath.

For example, Digital Cinema uses the enveloped approach for signing KDMs, whereas it uses the detached approach for referencing assets from CPL to PKL and AssetMap.
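
As a rough illustration of the two options (the ACESclip element names and UUIDs below are invented for this sketch; only ProcessList is an actual CLF element, shown here without its required attributes for brevity):

```xml
<!-- detached: the ACESclip references an external .clf by UUID -->
<ACESclip>
  <LookTransform>
    <ExternalCLF uuid="urn:uuid:33333333-3333-3333-3333-333333333333" file="showLook.clf"/>
  </LookTransform>
</ACESclip>

<!-- enveloped: the whole CLF ProcessList is wrapped inside the ACESclip
     and addressed internally, e.g. via an XPath expression on its id -->
<ACESclip>
  <EmbeddedTransforms>
    <ProcessList id="showLook">
      <!-- CLF process nodes go here -->
    </ProcessList>
  </EmbeddedTransforms>
  <LookTransform ref="#showLook"/>
</ACESclip>
```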

Typical use case:
an ARRIRAW frame sequence with one ACESclip and one CLF. The ACESclip describes the color pedigree of that particular ARRIRAW clip; the CLF is just a “creative LUT”.

What color metadata is in the ACESclip sidecar file (sketched after the list):

  • Unique binding to that particular frame sequence (bond between ClipID and ARRIRAW header UIDs);
  • Version of ACES and version of the RRT
  • ACES Input Transform to apply to ARRIRAW LogC footage
  • UID reference to the CLF file
  • ACES Output Transform to use when output on a reference monitor (Rec.709)
  • ACES Output Transform to use when output on HDR Grading monitor
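
Putting those pieces together, a hypothetical sidecar for this use case might look roughly like this (element names and values are illustrative, not the strawman’s schema; the UID and UUID are dummy placeholders and the transform names are left generic):

```xml
<!-- illustrative ACESclip sidecar for the ARRIRAW use case -->
<ACESclip acesVersion="1.1" rrtVersion="1.1">
  <ClipID>
    <SourceMediaUID format="ARRIRAW">urn:uuid:00000000-0000-0000-0000-000000000000</SourceMediaUID>
  </ClipID>
  <InputTransform name="(ARRI LogC Input Transform per the ACES release)"/>
  <LookTransform>
    <ExternalCLF uuid="urn:uuid:44444444-4444-4444-4444-444444444444" file="creative.clf"/>
  </LookTransform>
  <OutputTransforms>
    <OutputTransform device="reference monitor" name="(Rec.709 Output Transform)"/>
    <OutputTransform device="HDR grading monitor" name="(HDR Output Transform)"/>
  </OutputTransforms>
</ACESclip>
```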

The CLF instead contains a concatenation of (sketched below):

  • Reference to standard ACES2065-1 to ACEScct conversion
  • CDL SOP/Sat values
  • A nice 3D LUT for further creative touches (from ACEScct to ACEScct)
  • Reference to standard ACEScct to ACES2065-1 conversion
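
And a rough sketch of such a CLF, loosely following the ProcessList structure (some required attributes may be missing, and the SOP/Sat and LUT values are identity dummies; how the standard ACES2065-1/ACEScct conversions would be referenced from a CLF is left as placeholder comments, since that mechanism is exactly one of the open questions):

```xml
<ProcessList id="creativeLook">
  <!-- placeholder: reference to the standard ACES2065-1 to ACEScct conversion -->
  <ASC_CDL inBitDepth="32f" outBitDepth="32f" style="Fwd">
    <SOPNode>
      <Slope>1.0 1.0 1.0</Slope>
      <Offset>0.0 0.0 0.0</Offset>
      <Power>1.0 1.0 1.0</Power>
    </SOPNode>
    <SatNode>
      <Saturation>1.0</Saturation>
    </SatNode>
  </ASC_CDL>
  <!-- creative 3D LUT, ACEScct in / ACEScct out (tiny identity cube as a dummy) -->
  <LUT3D inBitDepth="32f" outBitDepth="32f">
    <Array dim="2 2 2 3">
      0.0 0.0 0.0
      0.0 0.0 1.0
      0.0 1.0 0.0
      0.0 1.0 1.0
      1.0 0.0 0.0
      1.0 0.0 1.0
      1.0 1.0 0.0
      1.0 1.0 1.0
    </Array>
  </LUT3D>
  <!-- placeholder: reference to the standard ACEScct to ACES2065-1 conversion -->
</ProcessList>
```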