Now you’ve hit a fragile point here. Saturation, or do you mean Chroma? (Just kidding.)
For many image processing problems, we can decide what we take as the reference for our implementation.
For “Exposure” we can, for example, pick the radiometric domain and work in linear light.
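To make that concrete, here is a minimal sketch in Python/NumPy of what “exposure in linear light” means (the function name and the `stops` parameter are mine, just for illustration):

```python
import numpy as np

def adjust_exposure(rgb_linear: np.ndarray, stops: float) -> np.ndarray:
    """Exposure in the radiometric domain: a plain gain on linear light.

    One photographic stop doubles (or halves) the light, so the whole
    operation is a single multiplication by 2**stops.
    """
    return rgb_linear * (2.0 ** stops)

# Opening up middle grey (0.18) by one stop lands on 0.36, as expected.
print(adjust_exposure(np.array([0.18, 0.18, 0.18]), stops=1.0))
```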
For sharpening we can decide: do we want to compensate for sharpness loss in the optics, for example (thanks @Troy_James_Sobotka for pointing that out)? Then we pick linear light (errata for my post above). Or do we go for neural sharpness, which comes from lateral inhibition for instance? Then a “perceptual domain” would probably be the right starting point (as mentioned above).
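A rough sketch of those two starting points, using an unsharp mask as the sharpening operator. Everything here is an assumption for illustration, in particular the bare power law standing in for a real perceptual transform:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(img: np.ndarray, sigma: float = 2.0, amount: float = 0.5) -> np.ndarray:
    """Generic unsharp mask on an (H, W, 3) image: boost what the blur removes."""
    low = gaussian_filter(img, sigma=(sigma, sigma, 0))  # blur spatially, not across channels
    return img + amount * (img - low)

def sharpen_optical(rgb_linear: np.ndarray) -> np.ndarray:
    # Compensating sharpness loss in the optics: work directly on linear light.
    return unsharp_mask(rgb_linear)

def sharpen_perceptual(rgb_linear: np.ndarray, gamma: float = 2.2) -> np.ndarray:
    # Mimicking neural sharpness (lateral inhibition): work in a perceptual
    # encoding. A bare power law stands in for a proper perceptual transform,
    # an assumption for brevity, not a recommendation.
    perceptual = np.power(np.clip(rgb_linear, 0.0, None), 1.0 / gamma)
    return np.power(np.clip(unsharp_mask(perceptual), 0.0, None), gamma)
```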
Now the thing with saturation is that it is hard to make a physical scene more saturated (or desaturated). How would you do it? How can you make a real red hat and a blue t-shirt more spectrally selective at the same time, together with the green grass? You could wet the surfaces a bit so they scatter less, but there are limits, and then the t-shirt is wet. You could add particles to the air, but then you also change flare and contrast a lot. There is no simple way. You would need to buy a new red hat and a new blue t-shirt that are more spectrally selective.
So what can we do?
You could throw in some energy-preserving models:
I talk about exactly this from ~7:00 min to 30:00 min.
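I cannot squeeze the models from the video into a post, but as a crude illustration of the family, here is a luminance-preserving mix toward the achromatic axis. The Rec.709 weights and the whole construction are my assumptions, not the models from the talk:

```python
import numpy as np

# Rec.709 luminance weights, an assumption; match these to your primaries.
W = np.array([0.2126, 0.7152, 0.0722])

def saturation_lerp(rgb_linear: np.ndarray, sat: float) -> np.ndarray:
    """Mix each pixel toward (sat < 1) or away from (sat > 1) its luminance.

    Luminance is preserved by construction, but sat > 1 happily pushes
    channels negative, out of gamut, which is where the trouble starts.
    """
    y = (rgb_linear * W).sum(axis=-1, keepdims=True)
    return y + sat * (rgb_linear - y)
```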
You could try to go into a colour appearance space and take sensory adaptation and scaling of the cardinal directions as your starting point. (Good luck with scene-referred data.)
Musing about this from 15:00 min onwards (sorry for the voice, I had an exceptionally bad hangover that morning).
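To give a feel for the “scaling the cardinal directions” part, here is a sketch using Oklab as a stand-in for a proper appearance model. It does no sensory adaptation at all and assumes display-ish linear sRGB input, which is exactly the scene-referred caveat above:

```python
import numpy as np

# Oklab forward matrices (Björn Ottosson); the inverses are computed numerically.
M1 = np.array([[0.4122214708, 0.5363325363, 0.0514459929],
               [0.2119034982, 0.6806995451, 0.1073969566],
               [0.0883024619, 0.2817188376, 0.6299787005]])
M2 = np.array([[0.2104542553,  0.7936177850, -0.0040720468],
               [1.9779984951, -2.4285922050,  0.4505937099],
               [0.0259040371,  0.7827717662, -0.8086757660]])

def scale_chroma(rgb_linear: np.ndarray, factor: float) -> np.ndarray:
    """Scale the chromatic axes (a, b) in Oklab, leaving lightness L alone."""
    lab = np.cbrt(rgb_linear @ M1.T) @ M2.T   # linear sRGB -> Oklab
    lab[..., 1:] *= factor                    # scale a and b together
    lms = (lab @ np.linalg.inv(M2).T) ** 3    # Oklab -> LMS
    return lms @ np.linalg.inv(M1).T          # LMS -> linear sRGB
```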
You could look into modelling saturation in the spectral domain. Spoiler: it produces unexpected results.
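A toy version of what I mean by the spectral route: make a reflectance curve more spectrally selective by raising it to a power. The operator is my invention for illustration, and the darkening it causes is a first taste of those unexpected results:

```python
import numpy as np

def spectral_saturate(reflectance: np.ndarray, gamma: float = 2.0) -> np.ndarray:
    """Deepen the troughs of a [0, 1] reflectance curve, keeping 1.0 at 1.0.

    A more selective spectrum reads as more saturated once integrated
    against the illuminant and observer, but since most real peaks sit
    well below 1.0, the sample also gets darker. Surprise number one.
    """
    return np.clip(reflectance, 0.0, 1.0) ** gamma
```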
My point is:
there is no “correct” way of adding (or removing) saturation. This is the reason why we have at least five different ways to modify saturation in Baselight (and we are currently working on an additional three).
I also think that distance functions in both linear and log-encoded RGB are not the right tool for this task. But they are something we test against when we evaluate Display Rendering Transforms.
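For completeness, here is what that distance-function approach looks like in both encodings (the plain natural log standing in for a camera log curve is my simplification):

```python
import numpy as np

def distance_saturation(rgb: np.ndarray, sat: float) -> np.ndarray:
    """Scale each pixel's distance from the achromatic (R = G = B) axis."""
    achromatic = rgb.mean(axis=-1, keepdims=True)
    return achromatic + sat * (rgb - achromatic)

def distance_saturation_log(rgb_linear: np.ndarray, sat: float, eps: float = 1e-6) -> np.ndarray:
    # The same scaling performed on log-encoded values, which is equivalent
    # to adjusting the ratios between channels rather than their differences.
    log_rgb = np.log(np.maximum(rgb_linear, eps))
    return np.exp(distance_saturation(log_rgb, sat))
```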
I hope this helps.