Objective tests of the gamut mapper

Hi,

A few things: we started to port the model to Python, which uncovered some issues:

We should try to document some of the magic defaults, e.g. cyan=0.09, magenta=0.24, yellow=0.12: how did we come up with those numbers?
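For whoever ends up documenting them, here is my understanding of how those defaults are used, worth verifying against Jed's implementation: each value extends a per-channel compression limit just past the gamut boundary (cyan applies to the red channel's distance, magenta to green's, yellow to blue's). A rough sketch, with names of my own choosing:

```python
import numpy as np

# Sketch of how I understand the defaults act, based on my reading of the
# gamut-compress implementation -- worth verifying while documenting.
CYAN, MAGENTA, YELLOW = 0.09, 0.24, 0.12

def rgb_distance(rgb):
    # Inverse-RGB distance from the achromatic axis (max of R, G, B),
    # normalised so the gamut boundary sits at a distance of 1.0.
    rgb = np.asarray(rgb, dtype=float)
    ach = np.max(rgb, axis=-1, keepdims=True)
    return np.where(ach != 0, (ach - rgb) / np.abs(ach), 0.0)

# The defaults extend the per-channel compression limits just past the
# gamut boundary; distances up to these limits get compressed back inside:
limits = 1.0 + np.array([CYAN, MAGENTA, YELLOW])
```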

I will try to make another notebook with power fitting of the other compression curves and compute the error as discussed.
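As a preview of the idea, a minimal sketch: fit Jed's power curve to a tanh reference with `scipy.optimize.curve_fit` and measure the residual. The parameterisations below are placeholders of mine; the notebook's exact curves may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

def tanh_compress(d, thr, lim):
    # Identity below `thr`, then a tanh roll-off towards `lim`.
    s = lim - thr
    return np.where(d < thr, d, thr + s * np.tanh((d - thr) / s))

def power_compress(d, lim, thr, p):
    # Power-curve compression: identity below `thr`, scaled so that a
    # distance of `lim` maps exactly onto the gamut boundary (1.0).
    s = (lim - thr) / (((1 - thr) / (lim - thr)) ** -p - 1) ** (1 / p)
    dn = np.maximum(d - thr, 0) / s
    return np.where(d < thr, d, thr + s * dn / (1 + dn ** p) ** (1 / p))

# Fit the power curve to a tanh reference and measure the worst-case error:
d = np.linspace(0.0, 2.0, 512)
reference = tanh_compress(d, 0.7, 1.5)  # placeholder parameters
(lim, thr, p), _ = curve_fit(
    power_compress, d, reference,
    p0=[1.5, 0.7, 4.0],
    bounds=([1.01, 0.1, 1.0], [3.0, 0.99, 10.0]),
)
max_error = np.max(np.abs(power_compress(d, lim, thr, p) - reference))
```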

Cheers,

Thomas

The current defaults come from @jedsmith’s suggested values for an “average of popular digital cinema cameras”.

I wonder if when we send it out for broader testing we should just have equal defaults of e.g. [0.2, 0.2, 0.2] and let people form their own opinions on what good values are. Currently there is a tendency to just leave them at Jed’s defaults.

Agreed!

I made the notebook to compute the Delta E when fitting Tanh with Power: https://colab.research.google.com/drive/1EGALDvn_f4M2VT_MGt9YIgW8jnz9B2iH?usp=sharing.

TL;DR: with the semi-random parameterisation I used, the difference is invisible to anyone.

Here are our compression functions (minus Log, which blew up here for some unknown reason), solved for 1.5:

Power fitted to Tanh, with limit, threshold, power = [1.53566777, 0.68826751, 4.23153967]:

\Delta E_{00} statistics:

```text
dark skin             0.00000000
light skin            0.00000000
blue sky              0.00000000
foliage               0.00000000
blue flower           0.00000000
bluish green          0.00000000
orange                0.00279433
purplish blue         0.00000211
moderate red          0.00000000
purple                0.00000000
yellow green          0.00012693
orange yellow         0.00297997
blue                  0.00027476
green                 0.00000000
red                   0.00035977
yellow                0.00391853
magenta               0.00000000
cyan                  0.00006808
white 9.5 (.05 D)     0.00000000
neutral 8 (.23 D)     0.00000000
neutral 6.5 (.44 D)   0.00000000
neutral 5 (.70 D)     0.00000000
neutral 3.5 (1.05 D)  0.00000000
black 2 (1.5 D)       0.00000000

count                 24.00000000
mean                  0.00043852
std                   0.00109651
min                   0.00000000
25%                   0.00000000
50%                   0.00000000
75%                   0.00008279
max                   0.00391853
```
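For reference, a sketch of the metric behind those numbers: convert XYZ to CIELAB, then take a colour difference. The notebook uses \Delta E_{00}; plain \Delta E_{76} (Euclidean distance in Lab) stands in here to keep the sketch short.

```python
import numpy as np

# D65 white point, 2 degree observer, used as the Lab reference white.
D65 = np.array([0.95047, 1.00000, 1.08883])

def XYZ_to_Lab(XYZ, white=D65):
    # Standard CIE XYZ -> CIELAB conversion.
    t = np.asarray(XYZ, dtype=float) / white
    d = 6 / 29
    f = np.where(t > d ** 3, np.cbrt(t), t / (3 * d ** 2) + 4 / 29)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def delta_E_76(Lab_1, Lab_2):
    # Euclidean distance in Lab; the notebook uses the longer CIE 2000 formula.
    return np.linalg.norm(np.asarray(Lab_1) - np.asarray(Lab_2), axis=-1)
```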

We can fiddle with the parameters and re-run; I will probably make some sliders later, but I’m confident that we can fit a lot of scenarios while producing results that are orders of magnitude under the detection threshold.

Cheers,

Thomas

PS: @Alexander_Forsythe: I humbly request a badge for being the first non-Academy person to use the new LaTeX feature! :wink:

@daniele brought up during the last meeting that the gamut mapper compresses the area of the spectral locus such that there is a ‘slice’ by the P3 red primary which is empty. This is also the case, to a lesser extent, with the P3 blue primary.

But thinking further about it, I believe this is inevitable with any gamut compressor. Since the red to green edge of the AP1 gamut lies hard up against the edge of the spectral locus, if the compressor is to bring colours outside that edge to within the gamut, it is inevitable that colours on the edge need to be ‘pulled in to make space’.

So a value can only end up at the P3 red primary if it starts outside the spectral locus.

We should also remember that if the gamut compressor is applied just after the IDT, then subsequent grading is still able to push a colour back to the P3 red primary if that is creatively desirable.

Yes, this is also true for BT.2020, which makes it even more important to be able to remove/disable the operator when highly saturated colours are desired.

It seems likely to be a consequence of doing the compression in ACES RGB space. This is exactly why a lot of gamut mapping is applied in spaces that attempt to be perceptually uniform, like CIELAB or CIECAM-UCS. It would be interesting to think about ways to translate the ACES values to the destination space first and compress relative to those RGB primaries, since our goal is slightly different: as a scene-referred gamut mapping, we want to fill the destination space. Just musing … obviously the math gets tricky.
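To make the musing concrete, here is one way a perceptual-space version could look: compress chroma in CIELAB while preserving lightness and hue angle. This is a rough sketch, not a proposal; it assumes Lab values are already computed, and the threshold/limit values are arbitrary placeholders.

```python
import numpy as np

def compress_chroma(Lab, threshold=60.0, limit=100.0):
    # Compress CIELAB chroma above `threshold` towards `limit` with a tanh
    # curve, keeping L* and the hue angle untouched. Parameters are
    # placeholders, not tuned values.
    L, a, b = np.moveaxis(np.asarray(Lab, dtype=float), -1, 0)
    C = np.hypot(a, b)       # chroma
    h = np.arctan2(b, a)     # hue angle, preserved
    s = limit - threshold
    C_c = np.where(C < threshold, C, threshold + s * np.tanh((C - threshold) / s))
    return np.stack([L, C_c * np.cos(h), C_c * np.sin(h)], axis=-1)
```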

Maybe we could just be a bit less aggressive on the yellow side for the default values.
