Float values in CLF

How are most people applying LMTs or other CTL transforms in production? Since some software and hardware devices don’t yet support CTL directly, I guess I would have to convert it to a 3D LUT, but how does this handle float values in ACES?
I’m reading through the CLF specification, which includes some range scaling options. So I could scale all the data to fit between 0 and 1, apply the LUT, then scale back to float, but this would lose precision. Is this a typical workflow?
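To make it concrete, I’m imagining something like this (just a numpy sketch; `apply_lut3d` is a placeholder for whatever actually applies the cube):

```python
import numpy as np

def apply_lut3d(rgb01):
    """Placeholder for whatever actually applies the 3D cube; identity here."""
    return rgb01

def naive_range_scale(aces_rgb, lo=0.0, hi=16.0):
    # Linearly squeeze scene-linear ACES values into [0, 1], apply the cube,
    # then scale back out. Anything outside [lo, hi] is clipped and lost.
    scaled = np.clip((np.asarray(aces_rgb) - lo) / (hi - lo), 0.0, 1.0)
    return apply_lut3d(scaled) * (hi - lo) + lo

# 18% grey ends up at 0.18 / 16 = 0.011, the bottom ~1% of the cube,
# so nearly all of the cube's resolution is spent on the highlights.
print(naive_range_scale([0.18, 0.18, 0.18]))
```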

CLF includes a number of operators (Process Nodes) which handle unclamped floating-point linear data directly. But if you use a 3D LUT you need to do more than just scale the wide range of the linear data into 0-1. You need to ‘shape’ it so that the data is spread more uniformly across that 0-1 range; linear data is completely unsuitable as direct input to a 3D cube.

There is no one ‘correct’ shaper. One option is to apply the cube to ACEScct data.
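As a rough numpy illustration (the standard ACEScct piecewise formula, not a reference implementation), a shaper like this spreads scene-linear values far more evenly across 0-1 before they hit the cube:

```python
import numpy as np

def lin_to_acescct(x):
    """ACES2065-1 / ACEScg linear -> ACEScct encoding (piecewise log2)."""
    x = np.asarray(x, dtype=np.float64)
    # Floor the log argument only to avoid warnings; values at or below
    # the 0.0078125 break point take the linear segment anyway.
    log_side = (np.log2(np.maximum(x, 2.0**-16)) + 9.72) / 17.52
    lin_side = 10.5402377416545 * x + 0.0729055341958355
    return np.where(x <= 0.0078125, lin_side, log_side)

# 18% grey lands at ~0.41, near the middle of the shaper range,
# instead of down near zero as with a naive linear scale.
print(lin_to_acescct(0.18))
```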

What kind of ProcessNode are you referring to? The only ones I see in the spec are LUT, Range, Matrix, and CDL. I think none of these are sophisticated enough to represent a CTL, but it does say other types of nodes could be added in the future.

When you say “represent a CTL”, bear in mind that a CTL can implement a wide range of transforms, some of which might be representable by a single Process Node (if the CTL implements only an ASC CDL transform, for example). More often, though, a CTL would need to be replicated using a series of process nodes, and there is no one ‘correct’ translation from a particular CTL to CLF. Some parts of a complex CTL will need to be baked into a 3D LUT, but careful selection and optimisation of the Process Nodes surrounding the 3D LUT will make the resulting CLF a better approximation of the original CTL transform.
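Very roughly, the structure looks something like this numpy sketch, with identity placeholders standing in for node data that in practice would be fitted against the original CTL:

```python
import numpy as np

def apply_matrix(rgb, m):
    """Matrix process node: per-pixel 3x3 multiply."""
    return np.asarray(rgb) @ np.asarray(m).T

def apply_lut3d(rgb01, cube):
    """LUT3D process node (nearest-neighbour for brevity; real
    implementations use trilinear or tetrahedral interpolation)."""
    n = cube.shape[0]
    idx = np.clip(np.round(np.asarray(rgb01) * (n - 1)).astype(int), 0, n - 1)
    return cube[idx[..., 0], idx[..., 1], idx[..., 2]]

# Identity placeholders; a real CLF would carry fitted data here.
N = 33
g = np.linspace(0.0, 1.0, N)
identity_cube = np.stack(np.meshgrid(g, g, g, indexing="ij"), axis=-1)
identity_mtx = np.eye(3)

def approximate_ctl(aces_rgb, shaper):
    shaped = shaper(aces_rgb)                    # LUT1D shaper node
    cubed = apply_lut3d(shaped, identity_cube)   # LUT3D node
    return apply_matrix(cubed, identity_mtx)     # Matrix node
```

In practice the shaper, the cube and the surrounding matrix/range nodes would each be chosen and fitted so that the whole chain matches the CTL as closely as possible over the range of interest.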

However, you need to be aware that currently CLF is not widely supported, and not all aspects are supported in every implementation.

For commercials and the LMT we tend to use a shaper in the log colorspace of the hero camera from the shoot. So, for example, if the hero camera is an ARRI we use an AlexaV3LogC shaper, and for RED cameras we use a Log3G10 / REDWideGamutRGB shaper. We actually used to use AlexaV3LogC for everything, but then we ran into a few shows where very saturated red LED lights (ironically shot on a RED) were out of gamut in ALEXA Wide Gamut (AWG) and clipped.
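For reference, the LogC encode we use as the shaper is roughly the following (numpy sketch using the EI 800 constants from ARRI’s published LogC v3 formula; double-check them against the ARRI white paper before relying on this):

```python
import numpy as np

def lin_to_logc_ei800(x):
    """ARRI ALEXA LogC v3 encode at EI 800 (published constants; verify)."""
    cut, a, b = 0.010591, 5.555556, 0.052272
    c, d = 0.247190, 0.385537
    e, f = 5.367655, 0.092809
    x = np.asarray(x, dtype=np.float64)
    log_side = c * np.log10(np.maximum(a * x + b, 1e-10)) + d
    lin_side = e * x + f
    return np.where(x > cut, log_side, lin_side)

print(lin_to_logc_ei800(0.18))  # mid grey lands at roughly 0.39
```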

Okay, the 1D shaper LUT makes sense. So overall I gather this is not a simple process, and it requires some thought on a case-by-case basis.
Nick, do you know if there are other types of ProcessNode besides the ones listed in the above link? I wonder what it means that “not all aspects are supported in every implementation”.

Currently LUT1D, LUT3D, Matrix, Range and ASC_CDL are the only types of ProcessNode available in CLF.

Within these there are various sub-types, such as halfDomain 1D LUTs, which, for example, were not implemented in Resolve last time I tested (I have not checked the v16 beta).
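For context, a halfDomain LUT1D is indexed by the 16-bit pattern of the half-float input rather than by a normalised 0-1 value, so a single 65536-entry table covers the full unclamped half-float range. A rough numpy illustration of the lookup (identity table, purely for illustration):

```python
import numpy as np

# 65536-entry table: one output value per possible 16-bit half-float input.
all_halfs = np.arange(65536, dtype=np.uint16).view(np.float16).astype(np.float32)
table = all_halfs.copy()  # a real LUT would store the transformed values here

def apply_half_domain_lut1d(x):
    """Look up by the half-float bit pattern of the input (no 0-1 scaling)."""
    idx = np.asarray(x, dtype=np.float16).view(np.uint16)
    return table[idx]

print(apply_half_domain_lut1d([0.18, 100.0, -2.5]))
```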

Some new types, such as log2lin and lin2log, have been proposed for CLF 2.0, but this is still in development (see the CLF Virtual Working Group threads here for more detail).
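Purely as an illustration of the kind of thing such an operator would compute, and not taken from the draft spec (the parameter names below are my own), a lin-to-log node is essentially an analytic log curve applied directly to the float data rather than baked into a LUT:

```python
import numpy as np

def lin2log(x, base=2.0, lin_slope=1.0, lin_offset=0.0,
            log_slope=1.0, log_offset=0.0):
    """Illustrative lin-to-log node:
    y = log_slope * log_base(lin_slope * x + lin_offset) + log_offset"""
    x = np.asarray(x, dtype=np.float64)
    lin_side = np.maximum(lin_slope * x + lin_offset, 1e-10)  # guard the log
    return log_slope * (np.log(lin_side) / np.log(base)) + log_offset
```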