This is based heavily on @Thomas_Mansencal’s excellent work, and is frankly just a minor tweak on his views of the data.
There has been quite a discussion about “data” with respect to cameras and what they capture. I thought it would be informative to plot the camera’s native response to the spectral locus against how the values end up after processing via a naive set of three virtual camera primaries.
As we can see, the idea that the camera captures “data” holds only relative to the constrained region roughly around the solved set of swatches in question, and even then it deviates. This seems like an unavoidable subject when determining a parametric or other approach to gamut mapping, as the procedure would inevitably be mapping non-data.
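To make the construction concrete, here is a minimal numpy sketch of how such a camera-side spectral locus can be traced: sweep monochromatic stimuli through a set of spectral sensitivities and project the resulting triplets to a 2D chromaticity-like plane. The Gaussian sensitivities below are purely illustrative stand-ins, since real camera curves are typically undisclosed.

```python
import numpy as np

wavelengths = np.arange(380, 781, 5)  # nm

def gaussian(mu, sigma):
    # Illustrative bell-curve channel sensitivity; not any real camera.
    return np.exp(-0.5 * ((wavelengths - mu) / sigma) ** 2)

# Three synthetic channel sensitivities (R, G, B), shape (N, 3).
sensitivities = np.stack([gaussian(600, 40),
                          gaussian(540, 40),
                          gaussian(450, 30)], axis=1)

# An identity matrix over wavelength acts as a sweep of laser-like
# monochromatic stimuli; each row yields one camera RGB triplet.
monochromatic = np.eye(len(wavelengths))
camera_locus = monochromatic @ sensitivities  # shape (N, 3)

# Project each triplet to a chromaticity-like plane r + g + b = 1.
sums = camera_locus.sum(axis=1, keepdims=True)
valid = sums[:, 0] > 1e-6
rg = camera_locus[valid, :2] / sums[valid]
```

Plotting `rg` against the CIE spectral locus (e.g. with the `colour` library) gives the kind of warped outline discussed above.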
Although the spectral responses are varied, the overall trends are perhaps striking.
Just as a follow-up, I found that the plot didn’t quite do justice to the warping in two dimensions, so I have re-rendered the series using a classical 2D top-down view, plus a 3D animation for each. For example:
Given this is spectral related, and that the VWG here has officially moved on from spectral work, this isn’t of much relevance any longer. I’ll update this reply with a few more animations as they complete.
The distortions in the projected camera spectral locus shape are caused by the forced fitting of the camera spectral response down to three virtual primaries, as opposed to the original and typically undisclosed native spectral response.
It is clear that variance between standard observers is a real thing; however, in this particular example, the overall distortions of the camera spectral response are simply a result of the bad math produced via the aforementioned forced fit. (This discounts the natural irregularities due to the specific camera spectral responses, which are not the source of the majority of the woes.)
Here are the updated plots with the virtual camera primaries plotted in blue. As we can see, the primary cause of gamut mapping grief starts with the fitting of the spectral sensitivity region to a brute-force 3x3. This produces nonsensical non-data for a good chunk of the camera’s triplet range when transformed by the aforementioned virtual camera primaries: many of the resultant values end up outside the spectral locus, yet within the projected footprint of the virtual primaries. In some cases this leads to erroneous “yellow” triplets transforming into “cyan”, or “green” into “yellow”, for the value range that maps to the nonsensical region beyond the spectral locus.
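For illustration, here is a hedged sketch of the brute-force 3x3 fit itself, on entirely synthetic camera RGB and target values (this is not any real IDT solve): the matrix is determined by least squares over a small swatch set, so it is only constrained near those swatches, and its extrapolation to saturated triplets far from the training set is unconstrained — which is where the non-data shows up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 24 "swatch" camera RGB triplets and their
# corresponding target values (think XYZ), related by an assumed
# cross-talk matrix plus a little measurement noise.
camera_rgb = rng.uniform(0.05, 0.95, size=(24, 3))
crosstalk = np.array([[1.0, 0.3, 0.0],
                     [0.1, 1.0, 0.2],
                     [0.0, 0.2, 1.0]])
target_xyz = camera_rgb @ crosstalk.T + rng.normal(0.0, 0.02, (24, 3))

# Brute-force 3x3: least squares solves camera_rgb @ M ≈ target_xyz.
M, *_ = np.linalg.lstsq(camera_rgb, target_xyz, rcond=None)

fitted = camera_rgb @ M
residual = np.abs(fitted - target_xyz).max()

# The fit is good over the swatches, but a saturated triplet outside
# the training distribution is mapped by pure extrapolation.
extreme = np.array([1.0, 0.0, 0.0])
print(extreme @ M)
```

The key point is that nothing in the solve constrains where `extreme @ M` lands; the matrix is only “data” in the neighbourhood of the swatches.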
Absolutely. Also, the space the virtual primaries span is not fully filled with achievable values (similar to XYZ being larger than the physically achievable colours). But I need to raise the “this is an IDT subject” buzzer here ;-). Out-of-gamut colours can have many causes, and camera profiling is just one of them.
But this all cements the claim that the spectral locus should not be seen as a sharp border. We need to gradually lower our trust in the data the further we move away from the centre.
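One naive way to express that falling trust, purely as a sketch (the white point, distance metric, and softness constant below are my own assumptions, not anything settled by the group):

```python
import numpy as np

def trust(rg, white=(1 / 3, 1 / 3), softness=0.25):
    """Smooth confidence in [0, 1]: 1 at the assumed white point,
    decaying with chromaticity distance rather than hard-clipping
    at the spectral locus. All parameters are illustrative."""
    d = np.linalg.norm(np.asarray(rg) - np.asarray(white))
    return float(np.exp(-0.5 * (d / softness) ** 2))

print(trust((1 / 3, 1 / 3)))   # full trust at the white point
print(trust((0.7, 0.25)))      # lower for a saturated chromaticity
```

Any monotonically decaying kernel would do; the point is only that confidence should fall off smoothly rather than flip from 1 to 0 at the locus boundary.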
Isn’t it a native part of a gamut map? The term itself implies going from somewhere to somewhere; that is the core essence of a map. I would echo @hbrendel’s concerns here; the lack of a defined input would be extremely problematic.
Well, we’ve had that experience, right? Stuff doesn’t necessarily look as the probe says it should… and on top of that, it would be interesting to compare against the influence of perceived contrast vs perceived saturation…