Output Transform Tone Scale

Hey @Derek!
Thanks for getting your hands dirty. We need more people testing things out!

Modules

The stuff I’m posting in this thread is purely related to the “tonescale” component of the display transform. While important, it is only one component. A potentially incomplete list of different modules we might need:

  • Input Conversion - Convert input gamut into the rendering colorspace, in which we will “render” the image from scene-referred to display-referred. The rendering colorspace might be an RGB space or an LMS space, or something else entirely.
  • Gamut Mapping - We may want some handling of chromaticities which lie outside of our target output gamut.
  • Whitepoint - We may want to creatively change the whitepoint of our output in order to create warmer or cooler colors in our output image regardless of what display device whitepoint we are outputting to.
  • Rendering Transform - There are many valid approaches for this. ACES uses a per-channel approach where the tonescale curve is applied directly to the RGB input channels, as you were doing in your experiments above. Since I’m working on a chromaticity-preserving display rendering transform, I’ll outline what might go into that (there’s a small code sketch after this list).
    • Separate color and grey using some norm.
    • Apply some tonescale to grey to convert from scene-referred to display-referred.
    • Apply some model for surround compensation and flare compensation.
    • Apply some chroma compression to highlights so that very bright saturated colors are nudged towards the achromatic axis as their luminance increases towards the top of the display gamut volume.
  • Display Gamut Conversion - Convert from our rendering colorspace into our display gamut. If you’re looking for a tool to do arbitrary gamut and whitepoint conversions in Nuke you could check out GamutConvert.nk.
  • InverseEOTF - We need to apply the inverse of the Electro-Optical Transfer Function that the display will apply to the image when it converts code values into emitted light.
  • Clamp - And lastly, maybe we want to clamp the output into a 0-1 range, to simulate what is going to happen when we write the image out into an integer data format.
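
To make the chromaticity-preserving idea above a bit more concrete, here is a minimal numpy sketch of that rendering step. The norm choice (max of RGB), the simple Michaelis-Menten style tonescale, and the highlight desaturation weight are all placeholder assumptions for illustration, not the actual curve or compression model I’m developing.

```python
import numpy as np

def tonescale(x, peak=1.0, grey_in=0.18, grey_out=0.10):
    # Placeholder scene-to-display tonescale (a simple Michaelis-Menten form).
    # Not the curve under discussion; just something monotonic that maps
    # 0.18 scene grey to ~0.10 display-linear and rolls off toward `peak`.
    s = grey_in * (peak / grey_out - 1.0)  # solved so that grey_in maps to grey_out
    return peak * x / (x + s)

def render_chromaticity_preserving(rgb, peak=1.0):
    # Sketch of a norm-based, ratio-preserving rendering transform:
    #   1. separate color and "grey" using a norm (here: max of RGB),
    #   2. apply the tonescale to the grey/norm only,
    #   3. multiply the preserved RGB ratios by the tonescaled norm,
    #   4. nudge bright values toward achromatic (a very crude chroma compression).
    rgb = np.asarray(rgb, dtype=np.float64)
    norm = np.max(rgb, axis=-1, keepdims=True)                     # 1. the "grey" component
    ratios = np.divide(rgb, norm, out=np.zeros_like(rgb), where=norm > 0.0)
    mapped = tonescale(norm, peak=peak)                            # 2. scene -> display referred
    out = ratios * mapped                                          # 3. chromaticity preserved
    weight = (mapped / peak) ** 4                                  # 4. grows near the top of the range
    return out * (1.0 - weight) + mapped * weight

# A bright saturated red gets rolled off and nudged toward neutral:
print(render_chromaticity_preserving([8.0, 0.5, 0.25]))
```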

EOTF

The “DisplayEOTF” gizmo is pretty old technology. It is basically just the Inverse EOTF guts of the Nuke ACES Output Transform implementation I did, ripped out and put in its own node. I would recommend that you use the InverseEOTF node in the ElectroOpticalTransferFunction.nk file I posted above instead. It’s a lot simpler.
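
For what it’s worth, the inverse EOTF step itself is small. Here is a minimal sketch assuming a display whose EOTF is a pure 2.4 power function (roughly BT.1886 with zero black); the node in the .nk file is the practical tool, this just shows the underlying math.

```python
import numpy as np

def inverse_eotf_power(display_linear, gamma=2.4):
    # Encode display-linear light into code values for a display whose EOTF is
    # a pure power function (gamma 2.4 assumed here, roughly BT.1886 with zero black).
    # The display turns code values back into light via code_value ** gamma,
    # so we apply the inverse before the image goes to the display.
    v = np.clip(np.asarray(display_linear, dtype=np.float64), 0.0, None)
    return v ** (1.0 / gamma)

# 10% display-linear grey encodes to roughly 0.383 on a gamma 2.4 display:
print(inverse_eotf_power(0.10))
```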

Nuke Specifics

If you’re using a stock ACES config in Nuke, and your Read node’s input colorspace is set to ACES - ACES2065-1 as I see in your screenshot, this is doing a gamut conversion from AP0 to AP1. That makes your rendering colorspace ACEScg, the same one the ACES Output Transforms use. The ACEScg → Rec709 matrix is essentially your display gamut conversion, and your DisplayEOTF node is your Inverse EOTF curve.
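
Written out in code, that node chain is doing roughly the following. The matrix values below are rounded from the commonly published ACES matrices (verify against your own config if precision matters), and the placeholder tonescale just stands in for whatever curve you were experimenting with, so treat this as a sketch rather than a reference implementation.

```python
import numpy as np

# Approximate, rounded matrices.
AP0_TO_AP1 = np.array([           # Read node: "ACES - ACES2065-1" into the ACEScg working space
    [ 1.4514, -0.2365, -0.2149],
    [-0.0766,  1.1762, -0.0997],
    [ 0.0083, -0.0060,  0.9977],
])
ACESCG_TO_REC709 = np.array([     # your ACEScg -> Rec709 matrix node (display gamut conversion)
    [ 1.7051, -0.6218, -0.0833],
    [-0.1303,  1.1408, -0.0105],
    [-0.0240, -0.1290,  1.1530],
])

def per_channel_display_transform(rgb_ap0, tonescale):
    # Sketch of the chain in the screenshot:
    # AP0 -> AP1 (rendering space), per-channel tonescale, AP1 -> Rec.709
    # (display gamut), inverse EOTF (gamma 2.4 assumed here), clamp to 0-1.
    rgb = np.asarray(rgb_ap0, dtype=np.float64)
    acescg = rgb @ AP0_TO_AP1.T                                     # input conversion
    rendered = tonescale(acescg)                                    # per-channel rendering transform
    rec709_linear = rendered @ ACESCG_TO_REC709.T                   # display gamut conversion
    code_values = np.clip(rec709_linear, 0.0, None) ** (1.0 / 2.4)  # inverse EOTF (DisplayEOTF node)
    return np.clip(code_values, 0.0, 1.0)                           # clamp, as an integer format would

# Example with a trivial placeholder tonescale:
print(per_channel_display_transform([0.18, 0.18, 0.18], tonescale=lambda x: x / (x + 0.8)))
```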

Hope this big wall of text helps clarify things, or at least brings some new questions that will help in your journey of learning.
