Virtual Camera Primaries Rendering of Spectral Locus

This is based heavily on @Thomas_Mansencal’s excellent work, and is frankly just a minor tweak on his views of the data.

There has been quite a discussion about “data” with respect to cameras and what they capture. I thought it would be informative to plot the camera’s native response to the spectral locus against how the values end up after processing via a naive set of three virtual camera primaries.

As we can see, the idea that the camera captures “data” holds only relative to the constrained selection roughly around the solved set of swatches in question, and even then, it deviates. This seems like an unavoidable subject when determining a parametric or other approach to gamut mapping, as the procedure would inevitably be mapping non-data.
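The basic mechanics can be sketched in a few lines of numpy. To be clear, everything here is a stand-in: the Gaussian sensitivities are not any real camera's native response, and the 3x3 matrix is an arbitrary example of a virtual-primaries transform, not a solved IDT.

```python
import numpy as np

# Hypothetical Gaussian camera sensitivities as a stand-in for a real
# (typically undisclosed) native spectral response.
wl = np.arange(380, 781, 5)

def gaussian(mu, sigma):
    return np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

# Camera RGB sensitivities (illustrative peaks only).
cam = np.stack([gaussian(600, 40), gaussian(540, 45), gaussian(450, 35)], axis=1)

# Each monochromatic stimulus (a row of the identity matrix) yields a
# camera triplet: the camera's "native" rendering of the spectral locus.
locus_native = cam  # shape (n_wavelengths, 3), all non-negative

# An arbitrary 3x3 "virtual primaries" matrix (negative off-diagonals are
# typical of such fits) then reshapes those triplets.
M = np.array([[ 1.6, -0.4, -0.2],
              [-0.3,  1.5, -0.2],
              [ 0.0, -0.3,  1.3]])
locus_virtual = locus_native @ M.T

# Negative components signal non-data: triplets with no physical stimulus.
n_outside = (locus_virtual < 0).any(axis=1).sum()
print(n_outside, "of", len(wl), "monochromatic stimuli leave the positive octant")
```

The point is only that a physically non-negative native response, once pushed through a matrix with negative entries, produces triplets that no stimulus could have generated.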

Although the spectral responses are varied, the overall trends are perhaps striking.

I hope someone finds this animation useful. The sources are located here via a Colab.


Thanks Troy! It’s really interesting :slight_smile:

Just as a follow-up, I found that the plot didn’t quite do justice to the warping in two dimensions, so I have re-rendered the series using a classical 2D top-down view, plus a 3D animation for each. For example:

Link to the Colab.

Given this is spectral-related, and that the VWG here has officially moved on from spectral, this isn’t of much relevance any longer. I’ll update this reply with a few more animations as they complete.


Once you see such an image, you understand why the spectral locus is a fuzzy area and not a sharp boundary :-). Clearly important for IDT creation. :+1:


It strikes me that it is more about the forced virtual camera primaries providing a poorly fitting solve that results in non-data, due to the dimensionality of the fit?

Metameric failure near the edges of the spectral locus between observers is an entirely different issue?


I believe Pridmore’s research in the spectral domain also leans toward providing some viable research into gamut mapping as well, however. I won’t belabour the point though, as that ship has sailed.

It’s more this one that gives me fuzziness :wink:




Can you elaborate on this? I’m not sure I can fully follow.

Have you ever plotted all the spectral loci into one xy diagram? Should look interesting too.



pure fuzziness :slight_smile:


Interesting to see where the HVS fuzziness appears vs. the camera variances.

The distortions in the projected camera spectral locus shape are caused by the forced fitting of the camera spectral response down to three virtual primaries, as opposed to the original and typically undisclosed native spectral response.

It is clear that the variance between standard observers is a real thing. In this particular example, however, the overall distortions of the camera spectral response are simply a result of the bad math produced via the aforementioned forced fit. (Discounting the natural irregularities due to the specific camera spectral responses, which aren’t the source of the majority of the woes.)

Here are the updated plots with the virtual camera primaries plotted in blue. As we can see, the primary cause of gamut mapping grief starts with the fitting of the spectral sensitivity region to a brute-force 3x3. This yields nonsensical non-data for a good chunk of the camera’s triplet range when transformed by the aforementioned set of virtual camera primaries: many of the resultant values end up outside the spectral locus, within the projected footprint of the virtual camera primaries. In some cases this leads to erroneous “yellow” triplets transforming into “cyan”, or “green” into “yellow”, for the value range that maps to the region beyond the spectral locus.
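The root of the forced-fit problem can be demonstrated with a toy least-squares solve. The sketch below uses Gaussian curves as stand-ins for both the observer CMFs and the camera sensitivities (none of these numbers come from real measurements). When the camera curves are an exact linear recombination of the CMFs (the Luther condition), a 3x3 fit is exact; when they are independently shaped, as real camera curves are, the best 3x3 still leaves an irreducible residual, and that residual is the non-data.

```python
import numpy as np

wl = np.arange(380, 781, 5)

def g(mu, sigma):
    return np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

# Stand-in observer CMFs (the X curve gets a small secondary blue lobe,
# loosely mimicking the real x-bar shape).
cmfs = np.stack([g(595, 35) + 0.3 * g(445, 20), g(555, 45), g(450, 25)], axis=1)

# A "Luther-compliant" camera: an exact linear recombination of the CMFs.
A = np.array([[0.8, 0.2, 0.0],
              [0.1, 0.9, 0.0],
              [0.0, 0.1, 0.9]])
cam_good = cmfs @ A.T

# A more realistic camera: independently shaped curves, NOT a recombination.
cam_real = np.stack([g(610, 40), g(540, 45), g(455, 35)], axis=1)

def max_fit_error(cam):
    # Brute-force 3x3: least-squares solve of CMFS ~= CAM @ M,
    # then measure the worst residual across all wavelengths.
    M, *_ = np.linalg.lstsq(cam, cmfs, rcond=None)
    return np.abs(cam @ M - cmfs).max()

print(max_fit_error(cam_good))  # effectively zero: the 3x3 is exact
print(max_fit_error(cam_real))  # clearly non-zero: irreducible mismatch
```

No choice of 3x3 removes the second residual; a real solve over CC24-like swatches merely chooses where that error is hidden, which is why it reappears violently near the spectral locus.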

This diagram makes monitor calibration waaay less relevant than before…

Absolutely. Also, the space the virtual primaries span is not fully filled with achievable values (similar to XYZ being larger than the physically achievable colours). But I need to press the “this is an IDT subject” buzzer here ;-). Out-of-gamut colours can have many causes, and camera profiling is just one of them. :grimacing:

But this all cements the claim that the spectral locus should not be seen as a sharp border. We need to gradually lower our trust in the data the further we move away from the centre.
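One toy way to express such a graded trust falloff: a confidence weight that decays with chromaticity distance from a centre point. Everything here is an assumption for illustration only, not anything proposed in the thread: the Gaussian shape, the sigma value, and the choice of the D65 white point as the centre are all arbitrary.

```python
import numpy as np

def trust(xy, centre=(0.3127, 0.3290), sigma=0.15):
    # Hypothetical confidence weight: 1.0 at the centre (D65 white point,
    # chosen arbitrarily), decaying with xy distance. Both the Gaussian
    # falloff and sigma=0.15 are placeholder choices.
    d = np.linalg.norm(np.asarray(xy) - np.asarray(centre))
    return float(np.exp(-0.5 * (d / sigma) ** 2))

print(trust((0.3127, 0.3290)))  # at the centre: full trust
print(trust((0.15, 0.06)))      # near the blue corner: heavily discounted
```

Any real formulation would presumably shape the falloff around the solved swatch region rather than a single point, but the principle of the fuzzy boundary is the same.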


Isn’t it a native part of a gamut map? The term itself implies going from somewhere to somewhere; that is the core essence of a map. I would echo @hbrendel’s concerns here; a lack of an input would be extremely problematic.

That is, not considering the virtual primaries means we are leaving the nonsensical “greens that should be yellow”, or “magentas that should be bluer”, as-is. Further, is it problematic to use the virtual camera primaries as the input? I missed the discussion as to why. It cannot solve the problems listed in this thread, in particular the red turning yellow due to the clip and rotate.

I realize the ship has sailed, and I’m entirely fine with that. Hopefully this discussion isn’t disruptive to the VWG.

The CC24 seems about the limit in good faith.

If the trust region were defined as such, a radiometric-like (aka RGB) or more radiometric (aka spectral) warp from the trust region to the tightest virtual primaries fit might have been worth looking at.

Well, we’ve had that experience, right? Stuff doesn’t necessarily look as the probe says it should… and on top of that, it would be interesting to compare with the influence of perceived contrast vs. perceived saturation…


I’d tack on the usual misunderstanding of colorimetry, which was always a description of the stimulus, not the sensation.

Human-inspired dimensional reduction (400 -> 3) is a better mental framework for it, in my humble opinion.