CLF is not ACES

I think it’s important for level-setting to start with a very basic discussion of what CLF is and is not. Similar to asking what is out of scope, but a bit more on the “what are we trying to accomplish?” side of things…

To me, CLF should be exactly what it says - a Common LUT Format. It is designed to be interoperable across all software and hardware (when fully adopted). From the introduction in the current spec:

LUTs are in common usage for device calibration, bit depth conversion, print film simulation, color space transformations, and on-set look modification. With a large number of product developers providing software and hardware solutions for LUTs, there is an explosion of unique vendor-specific LUT file formats, which are often only trivially different from each other.

So the goal here is to make a vendor-agnostic LUT format. Not an ACES LUT format. And certainly not a full-fledged ACES transform implementation. It is a format that should be “the future of LUTs,” so that once all vendors support it, there is no need to convert between different vendor formats or to worry about things like “oh, this LUT format does not support a shaper, so what do I do now?”

So although we are talking about finalizing CLF as part of the ACESnext efforts, CLF is not ACES. CLF is a mechanism that could be used to encapsulate one or more ACES transforms, but it is much more than that too. It is just a LUT format, but hopefully it can eventually become the LUT format.

In an ideal world, whenever someone passes me a LUT, I will know what to do with it and have the means to load it in whatever software/hardware I am using. Thinking even more grandiosely, it could be an “archival” LUT (when archiving the real transforms for some reason isn’t possible/practical).

Reactions and thoughts?


Important points.

I think that emphasises the importance of metadata in CLF LUTs. If you think a CLF is always an LMT, then of course it is ACES2065-1 in and out, so it’s not vital that this is specified in the metadata. If it’s just a universal LUT, then it’s very important that the intended input and output colour encodings are known.

It’s a shame there aren’t precisely standardised names for all the different colour encodings. SMPTE 2115 may help with this.
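To make the point concrete, here is a minimal sketch of carrying that metadata in a CLF file, built with Python’s standard-library XML tools. The element and attribute names (`ProcessList`, `InputDescriptor`, `OutputDescriptor`, `compCLFversion`) come from the CLF specification; the descriptor strings themselves are illustrative, precisely because no standardised naming scheme for colour encodings is assumed here.

```python
# Sketch: recording the intended input/output colour encodings of a CLF.
# Descriptor strings are free-form in CLF today, which is the problem the
# post above is pointing at.
import xml.etree.ElementTree as ET

pl = ET.Element("ProcessList", id="example-01", compCLFversion="3.0")
ET.SubElement(pl, "InputDescriptor").text = "ARRI LogC / ARRI Wide Gamut"  # illustrative
ET.SubElement(pl, "OutputDescriptor").text = "ACES2065-1"                  # illustrative

xml_text = ET.tostring(pl, encoding="unicode")
print(xml_text)
```

A standardised vocabulary for those descriptor strings is exactly what would let a receiving application act on them automatically instead of showing them to the user as opaque labels.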

I just wanted to pinpoint a few things that were discussed during the last call. I will use ACES-agnostic terminology, because this applies to a more generic context, and because that is what a Common LUT format was agreed to be.
However, this has been discussed elsewhere in other threads with specific terminology (as this place really is about ACES).

Generically speaking, a LUT is anatomically composed of one or more instances of the following categories:

  • an input module, which usually normalizes the input codevalues to a certain baseline. Examples are a shaper LUT, a scaler (with or without clipping), a tonemap, log2lin or lin2log, and camera-specific transfer maps (channel-dependent or not)
  • a core module, which is the color-science and/or the creative part of the whole map. Examples are SOP, LGG, an RGB matrix, CAT equations, etc.
  • an output module, which is functionally the reverse of the input module and may land the output codevalues of the LUT either in the same encoding/colorspace or in a different one (depending on whether the LUT embeds a colorspace conversion or not).

This is very abstract, yet it can easily be achieved in a CLF by concatenating three or more nodes.
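The three-part anatomy above maps directly onto a CLF `ProcessList` with three concatenated process nodes. The sketch below uses the stdlib to assemble one; the node names (`LUT1D`, `Matrix`) and the `Array` child element are from the CLF specification, but the array values are placeholders, not a real transform.

```python
# Sketch of input -> core -> output as three concatenated CLF process nodes.
import xml.etree.ElementTree as ET

pl = ET.Element("ProcessList", id="anatomy-demo", compCLFversion="3.0")

# Input module: a (placeholder) shaper 1D LUT.
shaper = ET.SubElement(pl, "LUT1D", inBitDepth="16f", outBitDepth="16f")
ET.SubElement(shaper, "Array", dim="2 1").text = "0.0 1.0"

# Core module: a (placeholder, identity) RGB matrix.
core = ET.SubElement(pl, "Matrix", inBitDepth="16f", outBitDepth="16f")
ET.SubElement(core, "Array", dim="3 3").text = "1 0 0  0 1 0  0 0 1"

# Output module: functionally the reverse of the input module.
out = ET.SubElement(pl, "LUT1D", inBitDepth="16f", outBitDepth="16f")
ET.SubElement(out, "Array", dim="2 1").text = "0.0 1.0"

print([child.tag for child in pl])
```

A processing engine evaluates the nodes in document order, so concatenation is simply sibling order inside the `ProcessList`.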

There was a discussion during the last call as to whether the input module (and, in my opinion, the same holds for the output module as well) can be represented by means of a coarse-grained 1D LUT of half-floats, to which Josh replied that it is technically possible but very hard to do, because it’s like reverse-engineering a color transform.

Since, however, those transforms are quite fixed (especially for camera-specific input modules), or depend on very few parameters (like E.I. does for ARRI), and since someone said that many camera manufacturers ask for their “camera profiles” to be included in CLF and similar “programs,” it makes perfect sense to ask them to provide such input modules.

Once they are pre-computed, they can easily be re-used as individual input/output nodes in CLFs.

Even better, if each CLF is provided with a UUID (in the form of a SMPTE Universal Label, or some hash digest of the transforms – I made a few proposals on this elsewhere here), CLF nodes may just reference the digest, rather than include the whole module.

Solutions to keep track of those modules:

  • distributing them as single-node CLFs;
  • the same as above, but with an online registrar that just records the association between the colorspace and/or camera names and their UUIDs (ULs, hashes, etc.);
  • a central repository that also holds the module LUTs themselves (where camera manufacturers are solely responsible for depositing theirs).
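The second option (a registrar that stores only the name-to-UUID association, not the modules) is the lightest-weight of the three, and can be sketched as a trivial lookup table. All names and digest values below are invented for illustration.

```python
# Toy sketch of the "online registrar" option: it records only the
# association between an encoding/camera name and a module's UUID/digest,
# never the module itself.
registry = {}

def register(name: str, uid: str) -> None:
    """Record that the named input/output module resolves to this UUID/digest."""
    registry[name] = uid

def resolve(name: str) -> str:
    """Look up the ID a CLF node should reference for this module."""
    return registry[name]

register("ExampleCam LogE to linear", "0123456789abcdef")  # invented values
print(resolve("ExampleCam LogE to linear"))
```

The third option differs only in that the registrar would also serve the module CLF bodies keyed by the same IDs.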