Film Emulation LUT to LMT

Can someone guide me in using a commercial film emulation LUT as a Look Management Transform?

The DP I’m working with would like to use a commercial film emulation LUT on-set and for dailies, with a view to using it as the starting point for the final grade.

At various stages in the production, we’ll be shooting on the ARRI AMIRA, ARRI Alexa XT, RED Epic, GoPro, and an as-yet-unspecified drone camera. There are around 5 VFX shots as well.

I’ll be delivering to Rec.709 BT.1886 and DCI-P3, grading in DaVinci Resolve 12.5.x.

The colour pipeline at the moment stands at:

ACEScc v1.02

  1. IDT: Alexa LogC EI800 v3 (or the IDT for whichever camera is used)
  2. LMT [CDL: offset and saturation only]
  3. RRT + ODT: Rec.709 100nits BT.1886
  4. Emulation LUT [Rec. 709 Kodak Vis3 50D 5203 NEG_FC (input Rec.709 --> LUT --> Film Contrast)]

I imagine I want the LMT to be a combination of the CDL and the emulation LUT:

  1. IDT: Alexa LogC EI800 v3 (or the IDT for whichever camera is used)
  2. LMT [CDL: offset and saturation only] + Emulation LUT
  3. RRT + ODT: Rec.709 100nits BT.1886
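As a side note on step 2, the CDL operations in question (offset and saturation only) are simple enough to sketch. Below is a minimal illustration of the ASC CDL math restricted to those two operations; the saturation weights follow the ASC CDL convention (Rec.709 luma), and the sample values are arbitrary:

```python
# Minimal sketch of an ASC CDL "offset + saturation" grade on one RGB triplet.
# The full CDL is out = (in * slope + offset) ** power followed by saturation;
# slope and power are assumed to be 1.0 here, matching the offset-and-sat-only LMT.

def apply_cdl(rgb, offset=(0.0, 0.0, 0.0), saturation=1.0):
    # Offset: per-channel additive shift.
    r, g, b = (c + o for c, o in zip(rgb, offset))
    # Saturation: blend each channel toward Rec.709-weighted luma.
    luma = 0.2126 * r + 0.7152 * g + 0.0722 * b
    return tuple(luma + saturation * (c - luma) for c in (r, g, b))

# Example (arbitrary values): a slight warm offset and gentle desaturation.
print(apply_cdl((0.4, 0.5, 0.6), offset=(0.02, 0.0, -0.02), saturation=0.9))
```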

The LUT comes in a number of different transfer functions, e.g. Rec.709, generic log, Alexa LogC, REDLogFilm, etc.

I have access to Lattice, DaVinci Resolve, and Pomfort LiveGrade to transform and concatenate LUTs and LMTs.

Can someone point me in the right direction?


Do you already have an emulation LUT ready for ACEScc? I have a bunch of emulation LUTs but none of them work in ACEScc. I mean… They work but they don’t look correct. I would also be worried about color clipping… I’ve copied certain film emulation LUTs by creating custom grades. They’re not perfect but a good starting point that feels like a film LUT. I’d be curious to know what you’re planning to use as a LUT for ACEScc.

This info may help:
The approach uses the LightSpace CMS ACES tools, but obviously that’s how we work with ACES.


Lattice can concatenate LUTs and multiple CTL files to achieve what you are after. Whether you should is a different question…

I guess it’s basically like what we were doing before with LUTs… I’m just very aware of clipping colors now with ACES.

@nick… Please enlighten me. How would one go about taking a film LUT meant for LogC to ACEScc in Lattice? I’m trying a few things here but can’t wrap my head around the LogC to ACEScc part…

@steve… I’ve looked into doing this with our LS CMS. But same here… It’s the starting point that’s confusing me.


Basically you make a concatenation of transforms so that the image data is transformed to LogC before the LUT is applied, and after the LUT you add transforms that invert what is going to happen to it downstream, so that after RRT and ODT you are left with the unmodified LUT output.
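To make the “transform to LogC before the LUT” step concrete, here is a per-channel sketch of the pre-LUT shaper: decode ACEScc to linear ACES (per S-2014-003), then encode to Alexa LogC using the EI800 v3 parameters from ARRI’s LogC white paper. This deliberately omits the AP1 ↔ ALEXA Wide Gamut matrices, which a real concatenation would also need:

```python
import math

# ACEScc decode (S-2014-003): ACEScc code value -> linear ACES.
def acescc_to_linear(cc):
    if cc < (9.72 - 15.0) / 17.52:
        return (2.0 ** (cc * 17.52 - 9.72) - 2.0 ** -16) * 2.0
    if cc < (math.log2(65504.0) + 9.72) / 17.52:
        return 2.0 ** (cc * 17.52 - 9.72)
    return 65504.0

# Alexa LogC encode, EI800 parameters from ARRI's LogC white paper.
def linear_to_logc_ei800(x):
    cut, a, b = 0.010591, 5.555556, 0.052272
    c, d, e, f = 0.247190, 0.385537, 5.367655, 0.092809
    return c * math.log10(a * x + b) + d if x > cut else e * x + f

# Shaper for one channel: ACEScc in, LogC out (gamut matrices omitted).
def acescc_to_logc(cc):
    return linear_to_logc_ei800(acescc_to_linear(cc))
```

As a sanity check, 18% grey should land at roughly 0.391 in LogC, which is the documented mid-grey code value for EI800.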

But you have to consider whether, if you are going to all this trouble, ACES is the right approach. You end up cancelling out a big part of what ACES does. I know this is an ACES forum, but that doesn’t mean we can’t discuss whether ACES is always the right path to take.

I know that this would be counterproductive. It’s just my old habit of using film LUTs to wow clients. I like the freedom of ACES much better than a LUT!

Building an LMT by inverting the RRT is a dangerous and limiting process. It might be able to match the look, but the data after the LMT is highly unstable due to the inversion.

By “inverting the RRT” I believe you are talking about using an inverse ODT/RRT to construct “empirical” LMTs, which are “derived by sampling the results of some other color reproduction process” (TB-2014-010 Sec. 7.2). If that’s indeed what you are referring to, then yes, these can have limitations because they are basically 3D-LUTs derived from an output-referred look which by its very nature is “limited” to a particular gamut and dynamic range.

However even if limiting, empirical LMTs are also very functional and have their uses. Calling them “dangerous” may be a bit hyperbolic. So long as one doesn’t “bake in” an empirical LMT to their original ACES data, empirical LMTs can serve very well to define the creative intent of a filmmaker. And while they may not be extensible to “magically” re-render a specific look to HDR, for example, they can help a filmmaker very exactly achieve a particular look for a particular output.

“Danger” is only present if people are not aware of the limitations and potential pitfalls of the process they are using, and in this sense, I see them as no more “dangerous” than any other process used in making motion pictures today. Basically, you can’t save people from themselves - you can only try to educate them well enough so that they don’t inflict catastrophic and irreversible degradation to their images.
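A toy illustration of why an output-referred derivation is “limited” by its very nature: once a forward rendering clips to the display range, no inverse can distinguish the original highlight values. The `forward` function below is a stand-in for a real RRT+ODT, not an actual ACES transform; the point is only that the round trip is lossy above the clip:

```python
# Stand-in for an output transform which, like any display rendering,
# ultimately clamps to the display's range (here [0, 1]).
def forward(x):
    return min(max(x * 0.8, 0.0), 1.0)   # toy tone scale + display clamp

def inverse(y):
    return y / 0.8                        # exact inverse of the unclamped part

# In-range values round-trip fine...
print(inverse(forward(0.5)))   # 0.5

# ...but distinct scene highlights collapse to one value after clipping,
# so a LUT sampled through forward() cannot preserve them.
print(inverse(forward(2.0)), inverse(forward(10.0)))   # both 1.25
```

This is exactly the situation an empirical LMT sampled from an output-referred look is in: everything outside the sampled gamut and dynamic range is unrecoverable.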

If one wants to construct LMTs that are reusable across many Output Transforms and not limited to a particular output look, the solution is provided in “analytic” LMTs. These are LMTs that are expressed as a set of ordered mathematical operations on colors or color component values, not as LUTs. This allows them to be less limiting and more extensible across multiple outputs - and I believe more aligned with the overarching concepts of ACES.
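For contrast, here is a minimal sketch of what such an “analytic” LMT might look like: an ordered set of mathematical operations on linear ACES RGB, with no LUT and therefore no baked-in range limits. The specific operations and constants (exposure gain, pivoted contrast, saturation) are illustrative only, not from any published LMT; the luma weights used for saturation are the AP1 luminance coefficients, which is an assumption about the working space:

```python
# Illustrative "analytic" LMT: ordered math ops on linear ACES RGB.
# All look constants are arbitrary examples, not a published look.

def analytic_lmt(rgb, exposure=1.1, contrast=1.05, saturation=1.05,
                 pivot=0.18):
    # 1. Exposure: simple linear gain.
    rgb = [c * exposure for c in rgb]
    # 2. Contrast: power function pivoted around mid grey (positive values
    #    only; negatives pass through unchanged).
    rgb = [pivot * (c / pivot) ** contrast if c > 0.0 else c for c in rgb]
    # 3. Saturation around weighted luma (AP1 luminance weights assumed).
    luma = 0.2722287 * rgb[0] + 0.6740818 * rgb[1] + 0.0536895 * rgb[2]
    return [luma + saturation * (c - luma) for c in rgb]
```

Because each step is defined mathematically over the full domain rather than sampled over a bounded cube, the same look transfers to HDR or wide-gamut outputs without re-derivation.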

I’ll be illustrating some of the potential pitfalls with “empirical LMTs” and advocating for “analytic LMTs” in Part 2 of my LMT topic posts.

Thanks for this comprehensive answer; we are all on the same page. The wording I used was maybe a bit too strong, sorry.

I just wanted to raise awareness that an LMT produced by inverting the RRT+ODT has its limitations, as you have already clearly pointed out, and that users of such an LMT should be aware of that. Building truly unrestricted LMTs based on film emulation is possible, but it is quite a hard task.

Looking forward to Part 2 of the LMT article and your considerations on building LMTs.

This topic is maybe key to the success of a colour management system. I agree that pleasing colour reproduction should not be part of a display rendering transform and should instead be applied in the grading domain as a creative choice. The Display Rendering Transform should ideally only take care of image colour appearance, matching the look across different viewing conditions – in theory 🙂