RGB Saturation Gamut Mapping Approach and a Comp/VFX Perspective

Hey @daniele ,
Huge thanks for the very helpful feedback.

Apologies for the delay in my reply as well; I got distracted by some other projects over the last few weeks and only recently started diving into this again.

I did a lot more work and I think I have something that is headed more in the right direction: Here is a link to a blinkscript code snippet in C++, and a Nuke node. I find the C++ a bit easier to develop and understand what’s going on, but the approach works as an expression node in Nuke as well.

Your comment about Mach bands, and about thinking of luminance and saturation as an achromatic axis and a distance from it, got me thinking. I did a bunch of research (and learned a lot). I found an interesting paper suggesting a different way of calculating saturation than the typical HSV-style cylindrical projection: the IHLS colorspace. I created a Nuke node to convert to this colorspace, thinking this method of calculating saturation might be useful.

Then I realized maybe I was still over-thinking the problem. If we ignore all of the complexities of human vision, camera colorimetry, and the color science being discussed in the other threads of this working group, at the end of the day all we need to do is take the pixels that are outside of the gamut boundary and map them back in.

So I thought maybe the distance could be calculated with a very simple Euclidean distance function for each color component, specifying how far that component is from the achromatic axis: a value of 0 is achromatic, a value of 1 is at the gamut boundary, and values over 1 are outside of the gamut boundary.

// achromatic axis
float ach = max(r, max(g, b));

// euclidean distance from the achromatic axis for each color component
// (sqrt(pow(x, 2)) is just the absolute value, so fabs is clearer;
// ach should also be guarded against zero before dividing)
float d_r = fabs(r - ach) / ach;
float d_g = fabs(g - ach) / ach;
float d_b = fabs(b - ach) / ach;

With the distance we could now specify a threshold for where the colors would start to be affected.

// gamut compression factor for each color component:
// if the threshold is 0.8, the inner 80% of the gamut is unaffected
// and the gamut compression is limited to the outer 20%
float f_r = max(0.0f, d_r - thr);
float f_g = max(0.0f, d_g - thr);
float f_b = max(0.0f, d_b - thr);

However, if we use this directly as a factor to scale down the RGB vector, colors could be pushed further towards the achromatic axis than the threshold boundary, so we need to compensate. We can also add an adjustment for each color component so that we can tweak the "color bias" of the gamut compression.

// scale the compression factor by threshold 
// so that the rgb value is never "desaturated" more than the outer boundary
// limit compression factor by cyan magenta yellow controls
f_r = f_r * thr * cyan;
f_g = f_g * thr * magenta;
f_b = f_b * thr * yellow;

At this point the scale factor is linear, which may introduce Mach bands at the edges of the gamut compression threshold. Ideally we would bias the linear factor so that there is a nonlinear interpolation from the boundary threshold to the outer limit. However, this may be over my head mathematically without doing a lot more research! For now I've set this up to use a simple gamma power function, which doesn't look terrible … but is not ideal. Suggestions welcome here!

// apply a power function so that the gamut compression is nonlinear.
// not certain about this approach!
f_r = pow(f_r, 1/cyan);
f_g = pow(f_g, 1/magenta);
f_b = pow(f_b, 1/yellow);

Finally we scale down each color component by the factor and write this out to an image.

// scale down each color component 
// by gamut compression factor towards achromatic axis
float c_r = (r-ach)/(f_r+1)+ach;
float c_g = (g-ach)/(f_g+1)+ach;
float c_b = (b-ach)/(f_b+1)+ach;
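Assembling the snippets above into one function, a plain-C++ sketch looks like this. The small epsilon guard is my addition (the Blinkscript version would need something similar so black and achromatic pixels pass through without a divide by zero); everything else follows the steps as written:

```cpp
#include <algorithm>
#include <cmath>

// Sketch: compress one RGB pixel toward the achromatic axis.
// thr is the distance at which compression begins; cyan, magenta and
// yellow bias the compression per component. The 1e-6f guard is an
// addition so achromatic / zero pixels pass through unchanged.
void gamutCompress(float r, float g, float b,
                   float thr, float cyan, float magenta, float yellow,
                   float& cr, float& cg, float& cb) {
    float ach = std::max(r, std::max(g, b));
    if (ach < 1e-6f) { cr = r; cg = g; cb = b; return; }

    // distance of each component from the achromatic axis
    // (0 = achromatic, 1 = gamut boundary, >1 = outside the gamut)
    float d_r = std::fabs(r - ach) / ach;
    float d_g = std::fabs(g - ach) / ach;
    float d_b = std::fabs(b - ach) / ach;

    // compression factor: zero inside the threshold distance
    float f_r = std::max(0.0f, d_r - thr);
    float f_g = std::max(0.0f, d_g - thr);
    float f_b = std::max(0.0f, d_b - thr);

    // scale by threshold and per-component controls
    f_r *= thr * cyan;
    f_g *= thr * magenta;
    f_b *= thr * yellow;

    // nonlinear falloff via a simple power function
    f_r = std::pow(f_r, 1.0f / cyan);
    f_g = std::pow(f_g, 1.0f / magenta);
    f_b = std::pow(f_b, 1.0f / yellow);

    // pull each component toward the achromatic axis
    cr = (r - ach) / (f_r + 1.0f) + ach;
    cg = (g - ach) / (f_g + 1.0f) + ach;
    cb = (b - ach) / (f_b + 1.0f) + ach;
}
```

For example, with thr = 0.8 and the three controls at 1.0, a pixel at (1.0, −0.2, 0.5) has d_g = 1.2, so only the green component is compressed (to roughly 0.09), while components inside the threshold pass through untouched.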

I have not yet delved into how to reverse this process (if inverting is even possible with this approach).

This approach does seem to handle the different sample images better than the last one I posted, and appears to require much less per-image parameter tweaking.

Any suggestions, comments, or criticisms are welcome, in addition to any ideas about nonlinearly interpolating a linear gradient, and thoughts about inverting. Hope this is helpful!
