LMS vs Camera Native for white balance

I’ve just confirmed my assumption that the temp and tint sliders for Alexa Raw perform white balancing in Camera Native primaries. I always thought each camera used its own secret LMS space for white balance.

So now I’m curious:

  1. Is it common for cameras to perform white balance in whatever space is native to their sensor?

  2. Where can I find a conversion matrix from Alexa Camera Native to XYZ or ACES? And ideally for other common cameras too?

  3. Of course, performing white balance in Camera Native space seems best for LED and neon light sources, because there will never be any negative values (if I understand it right). But what about the best color space for skin tones? Is CAT02 the best option for skin tones, or are there better LMS spaces out there? Should I use CAT02 (or something better) even when Camera Raw settings are available? Or is camera native space also best for skin tones?


I apologize for the questions that are not directly related to ACES, but I don’t know a better place for these kinds of questions.


I’ve just double-checked it, and now I’m not completely sure this is true. I’ve never used Nuke, so it’s a bit complicated for me to use it for comparison. And in Resolve there is no option for selecting a Camera Native gamut.


That is technically the only correct way to white balance, because the gains are applied to raw sensor data. At that point there is no RGB channel cross-talk, so the gains will not skew the colours.
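To make the idea concrete, here is a minimal sketch of white balancing as pure per-channel gains on camera-native linear values. All the numbers (the pixel values and the grey-card response) are made up for illustration; the point is only that gains derived from a neutral reference make that reference read R == G == B, with no cross-channel mixing involved.

```python
import numpy as np

# Hypothetical camera-native linear RGB values, before any matrix is applied.
raw = np.array([[0.20, 0.45, 0.30],
                [0.05, 0.10, 0.08]])

# Hypothetical camera-native response to a grey card under the scene illuminant.
neutral = np.array([0.40, 0.50, 0.25])

# Per-channel gains, normalised so the green gain is 1.0 (a common convention).
gains = neutral[1] / neutral
balanced = raw * gains

# After the gains, the grey card itself reads neutral (R == G == B).
assert np.allclose(neutral * gains, neutral[1])
```

Because each channel is only scaled by its own gain, nothing from one channel leaks into another, which is exactly the cross-talk-free property described above.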

Ideally, white balance should also happen before demosaicing, for similar reasons: as the missing channel values from the CFA are reconstructed, you are effectively using other channels’ values to build them, thus introducing channel cross-talk.

Depends on the vendor, but this is usually secret sauce and can vary per camera for the same body model as a function of production variation. They can be derived with some work, or obtained under an NDA :slight_smile:

See my reply to 1: in Camera RGB space you don’t have channel cross-talk, so it is almost always better for that exercise if you want consistency and reproducibility.

It is actually extremely difficult to match the white balancing output of a camera in any other space than its Camera RGB space.




Thanks a lot for the detailed explanation!

Is this why I get more noise with the gain sliders in a colour-correct node for white balancing, compared to the Raw temp and tint? Another thing I should google now is that white balance in Raw happens before demosaicing. I thought that was impossible, because the image is greyscale at that point. But now I think I get it: the greyscale image isn’t a problem, because we know the position of each R, G and B photosite, so we can apply gains to those greyscale pixels according to their placement.
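The per-photosite idea can be sketched in a few lines: the mosaic is a single-channel image, but the CFA layout tells us which colour each site measured, so a gain map can be built and applied before any demosaicing. The RGGB layout and the gain values here are hypothetical, just to illustrate the mechanism.

```python
import numpy as np

# A tiny 4x4 single-channel Bayer mosaic (RGGB layout assumed for this sketch).
mosaic = np.arange(16, dtype=float).reshape(4, 4)

# Hypothetical white-balance gains (green normalised to 1.0).
r_gain, g_gain, b_gain = 2.0, 1.0, 1.5

# Build a gain map matching the CFA layout: each photosite gets the gain
# for the colour it actually measured.
gain_map = np.empty_like(mosaic)
gain_map[0::2, 0::2] = r_gain  # R sites
gain_map[0::2, 1::2] = g_gain  # G sites (even rows)
gain_map[1::2, 0::2] = g_gain  # G sites (odd rows)
gain_map[1::2, 1::2] = b_gain  # B sites

# White balance applied on the mosaic, before any demosaicing.
balanced = mosaic * gain_map
```

Demosaicing would then interpolate the already-balanced values, so the reconstructed channels don’t smear an unbalanced cast across each other.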

So I can derive the matrix relating the native primaries to some known primaries using a shot of a Macbeth colour chart. It won’t be perfect, but more or less close, I think. Sounds legit :slight_smile: But first I should somehow compare how much difference there is between different bodies of the same model. Probably this is all not worth it.
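For what it’s worth, the chart-based derivation is usually a simple least-squares fit of a 3x3 matrix from the camera’s patch readings to published reference values. The numbers below are entirely made up (a fake “true” matrix generates the targets) just to show the shape of the fit; real inputs would be patch averages from the chart shot and the chart’s reference XYZ data.

```python
import numpy as np

# Hypothetical camera-native RGB readings for six chart patches.
camera_rgb = np.array([[0.18, 0.12, 0.05],
                       [0.40, 0.35, 0.30],
                       [0.10, 0.20, 0.40],
                       [0.60, 0.55, 0.25],
                       [0.22, 0.30, 0.15],
                       [0.05, 0.08, 0.20]])

# Pretend ground truth: an arbitrary 3x3 matrix generates the reference XYZ.
M_true = np.array([[0.9, 0.2, -0.1],
                   [0.1, 1.0,  0.0],
                   [0.0, 0.1,  0.9]])
target_xyz = camera_rgb @ M_true.T

# Least-squares fit of a 3x3 matrix mapping camera RGB -> XYZ.
M_fit, *_ = np.linalg.lstsq(camera_rgb, target_xyz, rcond=None)
M_fit = M_fit.T
```

With real chart data the fit won’t be exact (metamerism, glare, noise), which matches the “more or less close” expectation above.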

And is that why different LMS spaces like CAT02 and Bradford exist? Or were they made for displays, not for cameras? And are they used in IDTs just because there is no better alternative, CAT02 for some cameras and Bradford for others?
If you don’t mind, I have one more question. Is it better to use one of these CAT02, Bradford or other LMS spaces, or are official, known wide-gamut (but not native) primaries from the vendor better, because they are still more or less close to native? For example, using Alexa Wide Gamut, S-Gamut3 (not .Cine, I guess?), REDWideGamutRGB and so on, when the camera native primaries are unknown.

What I want in the end: I’ve decided to convert various raw and ProRes shots from different cameras to ACES 2065-1 EXRs, so I’d have a big collection of test images, all in one color space. But I’d like to balance them first. I understand how to adjust the exposure (native ISO and gain in linear space), but white balance is a far more complicated thing. Even more complicated than I thought before your reply.


True camera WB can actually be quite complex.

Yes, most companies do RGB gains in sensor/scene linear space.

However, some of them do apply some type of matrix to try to optimize the rendering against theoretical spectral targets, using typical color patch sets like Macbeth, TM-30, etc.

In general, though, pure RGB gains do a good enough job in many cases, so that’s why it’s done that way.

CAT02 exists for the human visual system in general, not just for displays. It’s a more sophisticated LMS-style sensitivity model, and the CIECAM02 appearance model it comes from also takes nonlinear responses to light into account.

It works best when the original data is based on blackbody or CIE Daylight illuminants.
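As a concrete illustration of how CAT02 (or Bradford) is used, here is a minimal von Kries-style chromatic adaptation: transform XYZ into the sharpened LMS space, scale each channel by the ratio of destination to source white, and transform back. This sketch assumes full adaptation (it ignores CIECAM02’s degree-of-adaptation factor); the CAT02 matrix and the D65/D50 whites are standard published values.

```python
import numpy as np

# CAT02 forward matrix (XYZ -> sharpened LMS), as published in CIECAM02.
M_CAT02 = np.array([[ 0.7328, 0.4296, -0.1624],
                    [-0.7036, 1.6975,  0.0061],
                    [ 0.0030, 0.0136,  0.9834]])

def von_kries_adapt(xyz, xyz_src_white, xyz_dst_white, M=M_CAT02):
    """Linear von Kries adaptation in the chosen LMS-like space (full adaptation)."""
    lms_src = M @ xyz_src_white
    lms_dst = M @ xyz_dst_white
    D = np.diag(lms_dst / lms_src)          # per-channel adaptation gains
    return np.linalg.inv(M) @ D @ M @ xyz

# D65 and D50 whites (2-degree observer, Y normalised to 1).
D65 = np.array([0.95047, 1.00000, 1.08883])
D50 = np.array([0.96422, 1.00000, 0.82521])

# By construction, the source white maps exactly onto the destination white.
adapted = von_kries_adapt(D65, D65, D50)
```

Swapping `M_CAT02` for the Bradford matrix gives the Bradford CAT with the same mechanics, which is why the two are largely interchangeable plumbing in an IDT.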

Completely agree with @Thomas_Mansencal