Gamut Mapping Part 2: Getting to the Display

@jedsmith Is it intentional that the defaults in the params version are different from those in the regular OpenDRT version? Which one has “better” settings?
Thank you!

Hey @meleshkevich – The defaults in OpenDRT_params.dctl are approximately equivalent to the Rec.1886: 2.4 Power | Rec.709 preset. Unless you know what you are doing and want to experiment with the tonescale model, rgb weights, or other settings, I would strongly recommend that you do not use the _params version. OpenDRT is not a grading tool. The purpose is to provide a sensible and predictable set of defaults for rendering scene-referred images for different viewing conditions, which provides a good starting place for grading. Hope that helps. I am working on documentation that explains all this stuff… it just takes a lot of time :stuck_out_tongue:


@jedsmith
Hi!
Could you please tell me if it works as it should?
It looks like the OpenDRT DCDM Inverse ODT kills yellow colors.

First there is a Cineon to DCI XYZ 2383 LUT.

Then a creative white point adjustment, a simple gain applied over the tonemapped image.

Then there are the ACES 1.2 DCDM Inverse ODT and the ACES 1.2 Rec.709 ODT, which gives the familiar figure on the vectorscope for print film LUTs. So we get Cineon input and Rec.709 output.

But with the OpenDRT DCDM Inverse ODT the yellow colors are gone. Actually, they are gone right after the IDT stage with the DCDM Inverse ODT, so the OpenDRT Rec.709 ODT there is just for comparison.

I thought any inverse should give back an identical result when it is reversed.

Also, should I turn off the lenscap black checkbox when I use the DCTL as an inverse ODT?

I am not sure I fully understand what you are trying to do, so I will just respond with the things that I do understand to be true.

  • Forward display transform + inverse display transform should yield something approximating a null operation. The inverse will never be perfect because of the limitations of the display gamut volume.
  • There was a bug in a previous release with the inverse direction, so if you haven’t tried it, maybe use the latest commit and see if it helps.
  • I don’t know what the input is to the inverse display transform, but based on your node graph I would raise my eyebrows and proceed with extreme caution. Inverting some display-referred image into sensible scene-linear image data is non-trivial and dangerous in this human’s opinion.
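To illustrate the first point, here is a toy sketch in plain Python. The curve is a made-up compression function, not OpenDRT’s actual tonescale: once the forward transform clips to the display volume, the inverse cannot recover what was thrown away, so the round trip can only approximate a null operation.

```python
# Toy forward/inverse display transform round trip. The curve is a
# made-up illustration, NOT OpenDRT's actual tonescale.

def forward(x):
    # simple compressive tonescale, then a hard clip to the display range
    y = x / (x + 1.0)
    return min(max(y, 0.0), 1.0)

def inverse(y):
    # exact algebraic inverse of the unclipped curve
    return y / (1.0 - y) if y < 1.0 else float("inf")

# an in-range value survives the round trip...
assert abs(inverse(forward(0.5)) - 0.5) < 1e-9
# ...but a value outside the display volume is clipped and lost
assert inverse(forward(-0.1)) == 0.0
```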

The lenscap black checkbox (which I have since removed, deciding it was another bad idea) adds a small value to the input image in order to compensate for shadow grain below zero. Since OpenDRT maps input zero to output zero, this can be problematic for digital cinema camera footage, which subtracts the average lenscap black value from the frame on debayer. I removed it because my checkbox was not sufficient to solve this problem. It is something that a colorist should manage.
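Conceptually, that checkbox was just a pre-offset applied before the transform. A minimal sketch, with a purely hypothetical offset value:

```python
# Sketch of the removed "lenscap black" compensation: add a small offset
# so shadow grain below zero is not crushed by a transform that maps
# input 0.0 to output 0.0. The offset value here is hypothetical;
# real footage would need a measured value.

LENSCAP_OFFSET = 0.0025  # illustrative only

def compensate_lenscap_black(x):
    return x + LENSCAP_OFFSET

# a grain sample slightly below zero is lifted above it
assert compensate_lenscap_black(-0.002) > 0.0
```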

If you are designing some transform to go in the inverse direction, mapping some display-referred input imagery to some scene-referred output, I would only say that yes it is important to map zero values correctly.

Best of luck!


Very good point. It’s important to remember that a DRT does not have to do everything. There is always a person in the loop making adjustments.


100% behind that. The DRT’s main job is to give a reasonable (ideally low or very low contrast) starting point for grading, and to not break down on specific hues, specific luminance levels, or when switching output targets. The first part of that last requirement is to avoid the need for informally shared fix LMTs (I am looking at you, blue highlight fix*), and the second part is to avoid per-target display-referred trim passes for workflows where that isn’t an option at all, or only in part (like game engines).

*Other examples would be forced Hue vs Hue, Hue vs Lum and Hue vs Sat nodes before the DRT


That is your bottleneck.
I think you should try to avoid concatenating different DRTs via inverse transform.

This will produce a suboptimal ACES Master.

If you want a filmic look you need to build a proper LMT which does not bottleneck to SDR…

I hope this makes sense.

Daniele


Thanks! It looks better now, but I still get solid black colors here and there, even when I use it with an image that only contains the Rec.709 gamut (converted to DCI XYZ) and doesn’t even touch blacks or whites. So to me it still looks like something is probably wrong with this inverse ODT.

But I found another thing: it looks like the DCDM ODT has a 2.4 gamma, not 2.6. Is that on purpose?

And if you don’t mind, I’d also like to ask how OpenDRT is progressing. Is it more or less what ACES 2.0 will look like, or is there still a lot to be done and changed?
And what about Nayatani_HK? If it compensates for an effect that should always be compensated, why not put it in the ODT (or OpenDRT) as well? Or was it just an experiment?
I would really love to test something new, if there is anything new in DCTL format :slight_smile:

Hey @meleshkevich !

Hm, that’s weird. I double-checked the latest version of my DCTL and I’m pretty sure it’s using a pure 2.6 power function for all DCI output presets, including DCDM X’Y’Z’. Can you give any more details so I can reproduce it?
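For reference, a minimal sketch of the difference being discussed: a pure 2.6 power encode/decode on normalized values, as opposed to Rec.1886’s 2.4 power (the real DCDM encoding also involves an absolute-luminance normalization, omitted here):

```python
# Pure 2.6 power encode/decode for DCI X'Y'Z' output, on normalized
# 0-1 values. Negatives are clamped, since a fractional power of a
# negative number is undefined.

GAMMA = 2.6

def dcdm_encode(linear):
    return max(linear, 0.0) ** (1.0 / GAMMA)

def dcdm_decode(code_value):
    return max(code_value, 0.0) ** GAMMA

# code value 0.5 decodes darker under a 2.6 power than under 2.4
assert dcdm_decode(0.5) < 0.5 ** 2.4
# encode and decode are exact inverses for in-range values
assert abs(dcdm_encode(dcdm_decode(0.3)) - 0.3) < 1e-9
```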

It’s going well, thanks :slightly_smiling_face: open-display-transform is an experimental project that I work on in my decreasing quantities of spare time. I have lots of ideas to test and experiments to do. I have been working on quite a few things lately, and I’ll have a few posts coming up.

I recently pushed a few changes for an upcoming opendrt v0.0.90 release, simplifying tonescale and doing some experiments with different approaches for gamut compression.

I have no idea what ACES 2.0 will be. Based on the sentiment in the preceding VWG meetings, it sounds like it will be the same as ACES 1.2 but with slightly reduced contrast for SDR outputs.

As for HK compensation, it’s an interesting experiment, but I don’t think the model makes sense to include in a display transform. HK describes a perceptual phenomenon: certain hues appear brighter to our eyes than the stimulus would indicate. Since a display transform maps one stimulus to another stimulus under different viewing conditions, perhaps there should be compensation for the different behaviors of the human visual system between the two viewing conditions. But what role HK plays in those differences is unclear to me, and I’m not sure this model represents it well.


Another approach could be what you demoed in: Per-Channel Display Transform with Wider Rendering Gamut

Considering how simple that approach is, the results were impressive. The big question is how well it works in HDR.

I just used it on a linear gray gradient, with the input set to ACEScct. The waveform analyzer showed a change in white point with the DCI D65 output preset.
I judged the gamma difference the same way: the DCI preset makes the curve “brighter” than the 1886 preset, but DCDM (and probably even DCI D65/D60) showed the same curve as the 1886 preset (despite the different white point, of course).

Great! Will check it out soon!

Oh, I was sure (and still hope) OpenDRT will be the ACES 2.0 DRT, just with slightly tweaked parameters.
What I really don’t like about the current ACES DRT is how it clips saturated colors in the shadows. I still have no idea how to fix that in grading, and it’s noticeable even by eye on OLED screens, or with the Rec.709 ODT on displays with an inverse sRGB EOTF. (And just for the statistics: I’m from the power-law 2.2 gamma club; all sRGB EOTF screens are evil :smile:)
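For anyone outside both clubs, a minimal sketch of the disagreement: the piece-wise sRGB EOTF and a pure 2.2 power law agree roughly at the top end but diverge near black, where the piece-wise curve has a linear segment.

```python
# Piece-wise sRGB decode vs a pure 2.2 power law. They diverge most
# near black, where the sRGB curve is linear rather than a power.

def srgb_piecewise_eotf(v):
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def pure_gamma_eotf(v, gamma=2.2):
    return v ** gamma

# near black, the piece-wise curve yields noticeably more light
assert srgb_piecewise_eotf(0.02) > 5 * pure_gamma_eotf(0.02)
# both reach 1.0 at full code value
assert abs(srgb_piecewise_eotf(1.0) - 1.0) < 1e-9
```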

Gosh, it would be a shame not to have all of the good chroma stuff OpenDRT is doing :frowning: I wonder, if there were an LMT for OpenDRT that emulated ACES 1.2, would that shift the tides?

28 posts were split to a new topic: sRGB piece-wise EOTF vs pure gamma

True. Me neither. But I think your work is on the right track.

I don’t think that’s my takeaway. I do not think 2.0 will (nor should) be the same as ACES 1.2.

I’m happy to hear that sentiment. What is your takeaway?

I personally think OpenDRT is much closer to being a candidate than anything else. I have looked at your v0.0.90 and it’s pretty great. Overall, I am quite impressed with where it stands. It definitely produces better results for many of the known issues and seems to meet most of the requirements. I’ll summarize some thoughts and questions in a separate thread soon (since this thread is quite long).

I am taking a closer look at the modules it contains, 1) to better understand each step’s role, and 2) to see if/where we might be able to make it even simpler. I think the most important thing is how it adapts across different devices. I know you’ve given this a lot of thought, but I haven’t really tested it and seen what it can do yet. With some more exploration of its behavior across different devices, I expect we can use most of what your efforts have produced and push it toward being a candidate.


This thread had diverged off-topic into sRGB EOTF discussion. It is a really good conversation to keep in its own place, so I have moved everyone’s posts relating to gamma 2.2 and the sRGB piecewise EOTF to a separate thread. Please see: sRGB piece-wise EOTF vs pure gamma


Hello, may I ask where to find the test files you are using?

Here: Dropbox - Output Transform Image Submissions @MatthewJavelin

Thank you so much, very much appreciated!