Output Transforms Architecture VWG - September 6th, 2023

The recording and notes from meeting #117 are now available.

  • Alex Fry: …There are disagreements about the angle of the gamut compression though. I find the P3 / 709 mismatch more objectionable…
  • Pekka Riikonen: I agree but I’m not sure the horizontal compression is the solution. Maybe we need a better lightness mapping.

What I forgot to mention to @alexfry is that this might be as simple as having a different lightness mapping parameter value for different gamuts. I’m not sure we’ve tested this, but we should. The idea would simply be to use a different “cusp to mid blend” value for each gamut.

For example, if P3 used 0.8, then Rec.709 would be lower, say 0.7, and Rec.2020 would be higher. This parameter changes the effective projection angle, so it should be tested whether a small adjustment to this value would bring a better match. And if it helps, we could then come up with some way to automatically scale the value with the gamut. I don’t know if it works, but it’s worth testing…
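Purely to illustrate the idea, a minimal sketch of what a per-gamut blend could look like (the function, the lerp form and the Rec.2020 number are my assumptions for illustration, not the actual CAM DRT code):

```python
# Illustrative sketch only: a per-gamut "cusp to mid blend" value and a
# hypothetical focus-lightness function. In this lerp, a higher blend moves
# the focus (and hence the effective projection angle) towards mid grey,
# a lower one towards the cusp.

CUSP_MID_BLEND = {
    "Rec.709":  0.7,   # example value from the text above
    "P3-D65":   0.8,   # example value from the text above
    "Rec.2020": 0.9,   # "higher" for the wider gamut; exact value untested
}

def focus_J(cusp_J: float, mid_J: float, gamut: str) -> float:
    """Blend between the gamut cusp lightness and mid-grey lightness to get
    the J value the gamut compression projects towards."""
    t = CUSP_MID_BLEND[gamut]
    return (1.0 - t) * cusp_J + t * mid_J
```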

Do you have an idea of what attribute of a gamut this value could be derived from, so that a suitable value could be calculated for any arbitrary gamut? I can’t think of anything obvious.

Not sure, but I don’t want to worry about it until we know the idea works…

  • Christopher Jerome: What about using slope of J, rather than change?
  • Kevin Wheatley: We tried that a long time ago.

That version (ZCAM v12 with derivative path to white) also had the option to use an RGB norm (or an LMS norm, actually) instead of lightness with the tonescale. Just thought to mention that, since we’ve talked about applying the tonescale in some other way over the past few weeks. If memory serves, it worked fine without any weird behavior in the model space…
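For anyone who didn’t follow that version, the pattern is roughly the following (a rough sketch under my own assumptions; the max norm and the tonescale argument are placeholders, not the ZCAM v12 code):

```python
import numpy as np

def tonescale_via_norm(rgb: np.ndarray, tonescale) -> np.ndarray:
    """Apply a 1D tonescale through a norm of the RGB (or LMS) triplet
    instead of through the lightness correlate: tonemap the norm, then
    scale the triplet by the ratio of tonemapped norm to original norm."""
    norm = np.max(rgb, axis=-1, keepdims=True)   # placeholder norm choice
    safe = np.maximum(norm, 1e-10)               # avoid division by zero
    ratio = np.where(norm > 0.0, tonescale(safe) / safe, 0.0)
    return rgb * ratio

# e.g. tonescale_via_norm(rgb, lambda x: x / (x + 1.0)) with a toy tonescale
```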

Heya,

Could someone point me to the “desaturated version” shown in the meeting?

I would be curious to check it on our footage and also test the “acescct saturation” trick.

Thanks!

It’s in my proto DRT repo as CAM_DRT_v042_new_scaling.blink. There is also a .nk of the same name; however, I did not update that one, so the Blink script needs to be re-loaded in the .nk.

One thing we did ignore during the meeting, though, is that the scene JMh values we were comparing against have also been compressed by the LMS compress mode. So in reality, the positions of the spectral locus values are further out than the scene JMh would suggest. In other words, I would claim we should scale the ratios higher to get back closer to the (actual) positions of the spectral locus before the LMS compression (the input image represented the x,y positions around the spectral locus in 5 nm steps). We can perhaps get into this in the next meeting…

Here’s a video showing the effect of the LMS compress mode on the JMh positions of the spectral locus input image. The wider one is without compression; the narrower one is with the compression and is what we saw in the meeting:

A quick test to see what scaling factor would be needed to get back (close) to the same positions as without the LMS compress mode gives a value of around 1.23, which is identical to the one I’m already using in v042-pex2 for SDR. The image is no longer as desaturated as it would be without the scaling factor.
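A sketch of the kind of comparison I mean (not the exact script I used; the median ratio is just one way to aggregate the samples):

```python
import numpy as np

def estimate_m_scale(jmh_no_compress: np.ndarray, jmh_compressed: np.ndarray) -> float:
    """Estimate the M scale needed to bring the compressed spectral-locus
    samples back (close) to their uncompressed positions.

    Both inputs are (N, 3) arrays of JMh values of the same locus image,
    converted with and without the LMS compress mode; M is at index 1."""
    m_ratio = jmh_no_compress[:, 1] / np.maximum(jmh_compressed[:, 1], 1e-6)
    return float(np.median(m_ratio))
```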