Per-Channel Display Transform with Wider Rendering Gamut

As I mentioned briefly here, I wanted to do a quick experiment with a per-channel RGB display rendering approach, trying out different primaries for the “rendering gamut”.

The other day I was messing around with the excellent and well-loved Arri K1S1 display transform, reading through this whitepaper again. When I last read it in 2012 I barely understood what a colorspace was. How time flies.

Anyway, I decided to implement it in Nuke:
Arri_K1S1_simplified.nk (19.3 KB)

It is extremely simple: essentially a tonemap in Alexa Wide Gamut, followed by a display encoding, which includes a custom AWG → Rec.709 matrix that desaturates the red and green primaries a little:

So I decided to make my own, inspired by this simple but highly effective approach to a display transform. I built a little setup that lets you position the coordinates of the three primaries using a 2D position knob, which then automatically calculates a matrix to convert your incoming image into this “custom rendering gamut”.
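
For anyone curious about what “2D position knob → matrix” means mathematically, here is a minimal sketch (my own, in Python/NumPy, not the actual expressions inside the Nuke setup) of the standard construction I assume it uses: build a normalized primary matrix (RGB → XYZ) from the xy coordinates of the primaries and white point, then chain it with the inverse matrix of the target gamut. The “custom” primaries in the example are made up.

    # Sketch (not the actual Nuke expressions): RGB -> RGB matrix from xy
    # chromaticities of the primaries and white point, via the standard
    # normalized primary matrix (NPM) construction. No chromatic adaptation.
    import numpy as np

    def xy_to_XYZ(x, y):
        # xy chromaticity -> XYZ with Y = 1
        return np.array([x / y, 1.0, (1.0 - x - y) / y])

    def npm(rx, ry, gx, gy, bx, by, wx, wy):
        # RGB -> CIE XYZ for a gamut defined by xy primaries and white point
        P = np.stack([xy_to_XYZ(rx, ry),
                      xy_to_XYZ(gx, gy),
                      xy_to_XYZ(bx, by)], axis=-1)  # columns = primaries
        S = np.linalg.solve(P, xy_to_XYZ(wx, wy))   # scale so RGB(1,1,1) -> white
        return P * S

    def rgb_to_rgb_matrix(src, dst):
        # Matrix converting source-gamut RGB to destination-gamut RGB
        return np.linalg.inv(npm(*dst)) @ npm(*src)

    # Example: ACEScg (AP1 primaries, D60 white) into a made-up rendering gamut
    ACEScg = (0.713, 0.293, 0.165, 0.830, 0.128, 0.044, 0.32168, 0.33767)
    custom = (0.780, 0.304, 0.170, 0.797, 0.128, 0.044, 0.3127, 0.3290)
    print(np.round(rgb_to_rgb_matrix(ACEScg, custom), 4))

Dragging a primary around in the setup presumably just recomputes this kind of matrix on the fly.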

This little setup makes it really intuitive and easy to get a feel for how the positions of the primaries affect the rendering transform. I messed around and came up with a few presets to start off with. You can also set a reference gamut as an overlay if you want, to see where its primaries are positioned.

Those of you who are interested in a per-channel display rendering approach may want to play with this. I’ve included the two tonemap nodes with the presets I’ve been playing around with, but you could easily swap in whatever tonemap approach you want. Here’s the Nuke script:
DIY_PerChannel_DRT.nk (108.0 KB)

Edit
Thanks to @daniele I found a bug in my code for the matrix calculation when the source or target gamut is XYZ.

IMPORTANT
Please update to this new setup which fixes the issue.
DIY_PerChannel_DRT_v2.nk (109.9 KB)

Note
The BlinkScript node won’t work in Nuke Non-Commercial. However, it’s just used for drawing the spectral locus plot. Here’s a PNG image you can use in its place if you’re using Nuke NC.


Quite impressive, Jed! Another approach that also gives good results, I think! :scream:

I used these settings :

  • Source gamut is ACEScg.
  • Destination gamut is the one that comes by default in the Nuke script.
  • Output primaries are P3. White Point is D65.
  • EOTF is for sRGB display (so I suspect Gamma2.2 ?).

[Image comparisons for each test image: ACES vs. Jed’s per-channel]

A few observations on my super limited testing :

  • This looks like a super simple approach.
  • It kinda fixes all the issues I have been pointing at (blue sphere does not get magenta anymore and light sabers don’t clip).
  • Hues are better preserved I think as well (you can tell on the blue to magenta/purple sweep).
  • And it kinda respects the ACES look in a way.

Now I’m going to play with the little tool you made, Jed, to see how the change of primaries affects the rendering transform. I need to do way more testing!

Congrats !


Looks amazing to me! I really like how it handles blue and doesn’t let it go to magenta. It also seems to have a softer highlight roll-off, which is good, at least for me. Would love to see it on real shots with people.

And the artifacts in the transition from the magenta background to the green character (the still with the hands up) - are they in the source, or caused by the output transform mapping?

It looks so good that I’m even going to try installing Nuke Non-Commercial again and find a way to make it at least start (I’m a colorist, not a VFX artist at all, and don’t use Nuke). Does it work in Nuke Non-Commercial?
What I want to see for myself is whether this issue (or is it normal?) is somehow solved with this display transform.

It works in Nuke non-commercial and I would recommend you test it on your own set of images if possible. The more tests we have, the better.

It is true that I haven’t shared “real” live-action images. Sorry about that !

Chris

Just a quick note to say I’ve updated the setup to fix a bug in the matrix calculation. Please update to the new setup in my initial post!

Sorry about that! Happy experimenting…

Certainly acceptable indeed! It should not come as a surprise that similar things have been used by some studios for a very long time.

Sometimes in an ACES workflow, if you need to emulate or retain characteristics of the look of the rendering of a particular vendor, e.g. ARRI or RED, it is much easier to use their native gamuts as the basis to apply the transforms from.

When you dig a bit more, you might find hints as to why RRT usage is not widespread :wink:

@ChrisBrejon : If you have cycles, I would be keen to see your images with E-Gamut please! I have been totally swamped for the past months (which is incredibly frustrating).

Cool! The change of primaries and its consequences completely caught me off guard. I’d like to experiment with something today. Maybe it is stupid, but:

  • What if I added a Matrix Transform between the Shaper and the ODT in the OCIO Config ? Like right between the two lines ? Just as an experiment. Like this :

    to_reference: !<GroupTransform>
      children:
        - !<FileTransform> {src: InvRRT.Rec.709.Log2_48_nits_Shaper.spi3d, interpolation: tetrahedral}
        - !<MatrixTransform> BlaBlaBla
        - !<FileTransform> {src: Log2_48_nits_Shaper_to_linear.spi1d, interpolation: linear}
    from_reference: !<GroupTransform>
      children:
        - !<FileTransform> {src: Log2_48_nits_Shaper_to_linear.spi1d, interpolation: linear, direction: inverse}
        - !<MatrixTransform> InvertBlaBlaBla
        - !<FileTransform> {src: Log2_48_nits_Shaper.RRT.Rec.709.spi3d, interpolation: tetrahedral}

Hey Thomas, I am curious about this quote. What do you mean ?

I suspect that you’re referring to Cycles the render engine ? I don’t use it BUT I am more than happy to re-re-render any of these images with the TCAMv2 OCIO config (if that’s what you mean).

I guess the idea would be :

  • Use E-Gamut primaries instead of ACEScg primaries whenever possible ?
  • A stupid question : should I render in “Linear-E-Gamut” for these tests ? Because technically I could use E-Gamut primaries and still render in ACEScg, right ?

Anyway, super happy to re-render anything once I got these details. Cool !

It is really the same problem as that of CG rendering, with worse consequences because you do not act on secondary bounces here! A corollary is that if you need to white balance an image and want to do it right, it is only really possible in the camera RGB space. Of the Importance of the Basis!
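
A tiny numeric illustration of the point about the basis (my own example, with made-up matrices): per-channel white-balance gains are a diagonal matrix in camera RGB, but the very same operation expressed in another RGB space is generally no longer diagonal, so it cross-contaminates the channels.

    # Hypothetical numbers: WB gains that are diagonal in camera RGB are not
    # diagonal after a change of basis into a working RGB space.
    import numpy as np

    D = np.diag([1.8, 1.0, 1.4])            # per-channel WB gains, camera RGB

    M = np.array([[ 1.60, -0.40, -0.20],    # made-up camera RGB -> working RGB
                  [-0.10,  1.30, -0.20],
                  [ 0.05, -0.25,  1.20]])

    D_in_working_space = M @ D @ np.linalg.inv(M)
    print(np.round(D_in_working_space, 3))  # off-diagonal terms are non-zero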

It seems weird to apply a matrix here, because a) the space is purposely highly non-linear and b) you would run the risk of introducing values outside the [0, 1] domain, resulting in thermo-nuclear explosions in the cube right after :smiley:
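
Point b) is easy to demonstrate numerically (my own example, arbitrary matrix): a saturation-style matrix applied to shaper-encoded values that are already at the edge of the unit cube immediately leaves the [0, 1] domain that the following 3D LUT expects.

    # Applying a matrix to log/shaper-encoded values can leave [0, 1],
    # which the 3D LUT that follows cannot represent.
    import numpy as np

    M = np.array([[ 1.2, -0.1, -0.1],       # arbitrary saturation-style matrix
                  [-0.1,  1.2, -0.1],       # (rows sum to 1.0)
                  [-0.1, -0.1,  1.2]])

    shaper_encoded = np.array([1.0, 0.0, 0.0])   # pure red, already at the edge
    print(M @ shaper_encoded)                    # [ 1.2 -0.1 -0.1] -> out of range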

There are many reasons why movie makers pick one camera/lens kit over another; often it is for its “look”, which is directly related to the optical path and the hardware and software processing.

You might have noticed that the main camera vendors, e.g. ARRI, RED, Sony and BMD, provide not only the hardware but also the software to process the captured imagery. It is common to receive plates/client LUTs that are the product of (or highly related to) the camera vendor’s software, e.g. IPP2, K1S1 or ALF-2. By extension, it is also quite common for VFX vendors to do comp work in the camera vendor’s gamut.

Hope that clarifies what I was hinting at!

Nah, I meant spare cycles, i.e. time! :slight_smile:

Thanks for the answers ! This clarifies a lot !
I have time on my hands and I am more than happy to provide some examples with E-Gamut !
I’ll probably post something later in the day.

Best,
Chris

OMG. I just understood what you meant. Running the same set of images and using E-gamut as rendering primaries. I thought I had to re-render in CG with E-Gamut primaries in my lights. Might be the whole “cycles” term that confused me. Sorry about that !

I basically used the same settings :

  • I used V2 of the nuke script (minor differences here and there).
  • Source gamut is ACEScg.
  • Destination gamuts are the one that comes by default in the Nuke script, and E-Gamut.
  • Output primaries are P3. White Point is D65.
  • EOTF is for sRGB display (so I suspect Gamma2.2 ?).

[Image comparisons for each test image: ACES vs. E-Gamut vs. Jed’s default]

From the tests I did yesterday, another preset that worked well on my imagery was BlackMagic. I’ll re-run these images for comparison.

Chris

Here are the tests using BMDFilmv1 :

I’ll try to use the expressions “CG rendering” and “Display rendering” to avoid any confusion in my next posts. :wink:

Chris

What about using Jed’s default with a small hue rotation for the reds that brings them halfway between E-gamut and Jed’s default? 9 degrees should do the trick since I can align ACES 1.3 to E-gamut with a hue rotation of 18 degrees.

Yes sure, why not ? It looks like we are “deep” in the subjective realm. :wink:

What was fascinating to me with this “experiment”:

  • Interactive manipulation of the display rendering primaries.
  • Seeing how much the basis vectors can impact/improve the display rendering (in real time!).
  • Testing how a simple setup can produce potentially pleasing ACEScg renders.

I do agree that a “chromaticity-linear display transform” + “look” combo would also be very nice to achieve, such as this one:

E-Gamut primaries - TCAMv2 DRT

E-Gamut primaries - TCAMv2 DRT + Vision Look

Best,
Chris


Subjectively speaking, it seems to me like the blues and the magentas of the TCAM DRT all clump together in the Y = 1.0 middle region without the vision look. It is a bit of an extreme look though.

@ChrisBrejon could you post a link to the original fire image (08_p3d65_BMDFilmv1)? I’d like to do some tests with it. Thanks!

Hey @Derek

The EXR can be found here. It is called fx_fire_001_lin_ap0.exr and should be loaded as ACES - ACES2065-1 in Nuke.

Thanks !

Chris


The E-Gamut blue is a “very far out” virtual blue primary. Not sure if a sweep of fully saturated E-Gamut primaries makes much sense visually; it is definitely a stress test.

I pushed a display transform based on the above technique, called rgbDT: Nuke | Resolve DCTL

It’s quite simple. This is the rendering code for the DCTL:
[screenshot of the DCTL rendering code: screenshot_2021-09-27_22-18-53]

It uses the same tonescale model I posted about here.

The rendering primaries it uses are

red: (0.859, 0.264)
green: (0.137, 1.12)
blue: (0.085, -0.096)
white: (0.3127, 0.329)

Nothing scientific. Something I subjectively/aesthetically arrived at playing around with the above setup.
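
For readers who just want the overall shape of this kind of per-channel transform, here is a rough Python sketch of the structure as I understand it: a matrix into the rendering gamut, a per-channel tonescale, a matrix to the display gamut, then the display encoding. To be clear, this is my own illustration, not the DCTL: the tonescale below is a simple stand-in (x / (x + 1)) rather than the tonescale model linked above, and Rec.709 in/out is only for the example.

    # Structural sketch of a per-channel display transform (NOT the rgbDT DCTL).
    import numpy as np

    def xy_to_XYZ(x, y):
        return np.array([x / y, 1.0, (1.0 - x - y) / y])

    def npm(prims):
        # RGB -> XYZ from ((rx, ry), (gx, gy), (bx, by), (wx, wy))
        (rx, ry), (gx, gy), (bx, by), (wx, wy) = prims
        P = np.stack([xy_to_XYZ(rx, ry), xy_to_XYZ(gx, gy), xy_to_XYZ(bx, by)], axis=-1)
        return P * np.linalg.solve(P, xy_to_XYZ(wx, wy))

    RENDER = ((0.859, 0.264), (0.137, 1.12), (0.085, -0.096), (0.3127, 0.329))
    REC709 = ((0.64, 0.33), (0.30, 0.60), (0.15, 0.06), (0.3127, 0.329))

    IN_TO_RENDER  = np.linalg.inv(npm(RENDER)) @ npm(REC709)
    RENDER_TO_OUT = np.linalg.inv(npm(REC709)) @ npm(RENDER)

    def per_channel_drt(rgb_linear):
        rgb = IN_TO_RENDER @ rgb_linear      # into the rendering gamut
        rgb = rgb / (rgb + 1.0)              # per-channel tonescale (stand-in;
                                             # negatives not handled here)
        rgb = RENDER_TO_OUT @ rgb            # to the display gamut
        return np.clip(rgb, 0.0, 1.0) ** (1.0 / 2.4)   # rough display encoding

    print(np.round(per_channel_drt(np.array([0.18, 0.18, 0.18])), 4))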


For me, it looks the best of all three of your published DRTs. It doesn’t make dark skin tones look greenish. Not actually greenish, but they can look that way because of the desaturation in the shadows, and I really like that rgbDT doesn’t have this effect! But it does have the same issue that the current ACES DRT has: dark colorful objects in the scene have sharp transitions into clipped solid colors. I also found it’s the noisiest one in the shadows.

Here are images from an Alexa (the first image, I guess, is from two Alexa cams at the same time; it is an EXR with a huge dynamic range) with the gamut compressor applied. Then I applied the different DRTs, and after that I added a gamma up to see the shadows better.

[Image comparisons for both test frames, with gamma up: Rec709 OETF, ACES 1.2 DRT, OpenDRT, JzDT, rgbDT]

And OpenDRT still shows a different white point on my PC :slight_smile:

Doing tests of rgbDT and OpenDRT, I’m seeing that rgbDT appears to do all of the good things that OpenDRT does. It does not have the notorious six shifts of skewing blue to magenta or red to yellow, for example. I’d be curious to know whether there are things that OpenDRT does that rgbDT cannot.

At the same time, rgbDT has characteristics out of the box that match the current ACES transform. First, luminous hues of sunshine look yellow-orange instead of pink. Second, the “flattening” of color on faces does not happen with rgbDT. Put differently, these two things, which can be changed in OpenDRT with a Look Transform (via a hue shift and zone saturation), are already there in rgbDT out of the box. The only thing one needs to do to make it look like ACES is increase the contrast a bit with a linearGrade.

This leads me to conclude that rgbDT seems pretty perfect. Is there a “gotcha” I’m missing in here somewhere? What is the disadvantage or limitation of using this per-channel display approach instead of OpenDRT?

Let me add that with the Look tools Jed has provided, it’s certainly possible to get OpenDRT+LMT to look like rgbDT. So either way can get to the same desired look. I’d be interested in understanding the pros and cons of each path up the mountain.