Lens cap black and flare level

Hi! It’s me with some beginner questions again!
I do understand that the sensor creates negative values. And I more or less understand that they can create artifacts when somebody decides to uncompress the gamut after it was compressed and baked into a 16-bit EXR.
I’ve also read that it’s the colorist who should fix it before exporting EXRs to the VFX department. But this means the colorist should do it before the IDT (I guess in linear space with the source primaries), as there are negative values after the IDT.
So:

  1. Why isn’t it part of the IDT? It could probably be a simple slider, or a checkbox, or even an always-on option with a fixed value.
  2. If it should be done before the IDT, how can a colorist do this without making their own “ACES” with nodes and plugins/DCTLs just to be able to put anything before the IDT? It is of course possible to go from ACEScct back to scene linear and the native primaries, but I’m not sure this is a routine task for every colorist who uses ACES.
  3. What operation is the best choice for this? Is it an offset in scene linear? But that slightly pushes up the whole image, not the shadows only. Lift, I guess, is not an option: it will change contrast and do the opposite above 100 nits. Maybe some sort of soft clip that touches only the darkest of the dark shadows?

And I have one more question, not about setting the black level:
If the gamut compressor works best with more or less correct exposure and white balance set in the source (RAW settings, for example), why isn’t there an exposure or even a WB adjustment in the IDT? For ProRes source files, as an example.
Is it because it would make the whole system more complicated than usually needed? Or maybe it’s not there just because the gamut compressor is relatively new while the IDTs are old. But I guess there are no plans for implementing it. Would it bring more problems than it would solve?

Hi,

Sensors do not create negative values; the only thing a camera sensor generates when it collects photons is electrons! It converts light to voltage, and that process does not produce negative values.

They start to appear when the image is processed and, for example, the camera black level is set so that the mean of the camera noise is centred at zero. If you imagine the noise following a Poisson distribution (which is more or less a discretised Gaussian distribution), the principle is to move its peak to zero.

They can also occur as part of the mapping from camera RGB to ACES2065-1: because the sensor’s colour filter array (CFA) does not match the human cone responses, there will always be a set of camera RGB values that cannot be mapped to tristimulus values by a linear transformation.
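
To make this concrete, here is a minimal numpy sketch; the 3×3 matrix below is invented for the example (it is not a real vendor IDT), but its negative off-diagonal terms are typical:

```python
import numpy as np

# Invented camera-RGB -> ACES2065-1 matrix, for illustration only; real IDT
# matrices are published per camera. Rows sum to 1 so white is preserved.
CAMERA_TO_AP0 = np.array([
    [ 0.85,  0.20, -0.05],
    [-0.10,  1.15, -0.05],
    [ 0.05, -0.20,  1.15],
])

# A highly saturated camera value: almost pure blue.
camera_rgb = np.array([0.01, 0.02, 0.90])

print(CAMERA_TO_AP0 @ camera_rgb)
# -> [-0.0325 -0.023   1.0315]: red and green go negative after the matrix.
```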

Clipping negative values can actually make things worse: imagine that you lift the blacks on an image whose noise distribution is centred around zero. Doing so will raise the noise level which, while not ideal, might be more acceptable than the milkiness of a clipped image whose noise distribution is now entirely positive. It is best left to the user to figure out what is acceptable. With that in mind, negative values should be preserved as long as possible. Also, IDTs are fixed transforms, so you cannot change them.

Not many camera vendors will give you an opportunity to do that, and the likelihood that you produce a better IDT than they do is nil.

Keep them, protect them, cherish them, they are your friend! :slight_smile:

Are you referring to reference gamut compression here?

But then there are the LMTs. This was one big question I was asking myself when making those “other DRT” LMTs: to handle negatives or not to handle them. What is the best practice, if there is one? There is no guidance on this that I could find anywhere. Should LMTs try to cover a negative range that benefits the most common workflows (in color grading and VFX), and if yes, how much range should they handle? And even if AP0 linear doesn’t have negatives, as soon as the LMT transforms to AP1 linear there suddenly might be some. I chose not to handle them, and they clip to ACEScct 0.0.

You should just flare the image so that negative values become positive again, effectively undoing what the camera processing did.
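
In scene linear this can be as simple as a global offset. A minimal sketch, assuming a constant flare value per frame (how much to add, and where the number comes from, is the debatable part):

```python
import numpy as np

def preflare(linear_rgb, offset=None):
    """Add a small scene-linear offset ("flare") so the darkest values end up
    non-negative, roughly undoing the camera's black subtraction.

    If no offset is given, derive it from the image minimum; that is only one
    possible heuristic, and measuring a full-frame number to apply per pixel
    has its own problems (see later in the thread).
    """
    linear_rgb = np.asarray(linear_rgb, dtype=np.float64)
    if offset is None:
        offset = max(0.0, -float(linear_rgb.min()))
    return linear_rgb + offset
```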

I have had various discussions over the last few years with camera manufacturers, trying to explain that a black offset is maybe not the best approach to this problem.

Thanks for the explanation!

For example, if I touch the new pivot tool in the HDR palette in Resolve, it kills everything below 0. Sounds like one more reason not to use it. As does the useless LMS in the global wheel that makes the correction worse than if there were no LMS at all, and other artifacts and bugs.
And what about ACEScct LUTs? I guess it is acceptable to clip below 0 at the Show LUT stage of the image processing chain?

Sorry, I’m not sure I understand what you mean. I was talking there about the best place to deal with negative values, and I thought it’s best to do it before the gamut compressor and even before the IDT.

I was asking about reference gamut compression (RGC) because, as I have understood it, the purpose of RGC is to “heal” those negative pixel values that result from converting camera raw to AP0 (and also from AP0 to AP1) by mapping them back inside the bounds of the gamut.

I would take that to mean that RGC is the way to deal with negative values. I’m not an engineer or color scientist, so maybe I’m getting that wrong. I’m just an artist trying to connect the dots like you are :slight_smile: So I welcome any corrections if I’ve understood this incorrectly.
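
For reference, the RGC works as a per-channel, distance-based compression in linear AP1. A rough numpy sketch using the threshold, limit and power values from the published transform (the AP0 ↔ AP1 conversions that bracket the real RGC are omitted here):

```python
import numpy as np

# Reference Gamut Compression parameters (cyan/red, magenta/green,
# yellow/blue channels), as published for the ACES RGC.
THR = np.array([0.815, 0.803, 0.880])
LIM = np.array([1.147, 1.264, 1.312])
PWR = 1.2

def rgc(rgb_acescg):
    """Compress out-of-gamut linear AP1 values towards the achromatic axis."""
    rgb = np.asarray(rgb_acescg, dtype=np.float64)
    ach = rgb.max(axis=-1, keepdims=True)

    # Per-channel "distance" from the achromatic axis; > 1 means out of gamut.
    denom = np.where(ach == 0.0, 1.0, np.abs(ach))
    dist = (ach - rgb) / denom

    # Power(p) compression: identity below THR, asymptotic above, scaled so
    # that a distance of LIM lands exactly on the gamut boundary (1.0).
    scl = (LIM - THR) / (((1.0 - THR) / (LIM - THR)) ** -PWR - 1.0) ** (1.0 / PWR)
    nd = np.maximum((dist - THR) / scl, 0.0)
    cdist = np.where(dist < THR, dist, THR + scl * nd / (1.0 + nd ** PWR) ** (1.0 / PWR))

    return ach - cdist * np.abs(ach)

print(rgc([-0.1, 0.2, 1.0]))  # -> [~0.014, 0.2, 1.0]: the negative red is healed
```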

It brings back the negative values of pixels if they have high saturation, but the noise usually doesn’t have enough saturation to be fully affected by the gamut compressor.
Here is an ACEScct image from an Alexa with the gamut compressor applied. It still has negative values (below the lens cap zero level of the ACEScct encoding curve).
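
For anyone wondering where that “lens cap zero level” sits: it is the ACEScct code value that scene-linear 0.0 encodes to, per the constants in the ACEScct specification (S-2016-001). A quick sketch:

```python
import numpy as np

# ACEScct encoding: linear segment below 0.0078125, logarithmic above.
A, B = 10.5402377416545, 0.0729055341958355

def lin_to_acescct(x):
    x = np.asarray(x, dtype=np.float64)
    log_part = (np.log2(np.maximum(x, 2.0 ** -16)) + 9.72) / 17.52  # guard log2 of <= 0
    return np.where(x <= 0.0078125, A * x + B, log_part)

print(lin_to_acescct(0.0))     # ~0.0729: scene-linear zero ("lens cap black")
print(lin_to_acescct(-0.002))  # ~0.0518: negative linear values fall below that level
```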

@meleshkevich Thanks that makes sense.

@daniele you say elsewhere

I’d be interested in learning more about how to do this in Nuke or Resolve. Are there any articles you can point to that cover this workflow in detail?

Hi,
yes, this is what we currently do. If the camera manufacturer did not subtract the black in the first place, there would be no need to pre-flare.

There is so much to unpack here …

Let’s start with the simplest issue, which others have already touched on …

Sensors do not produce negative values … they are essentially integer devices and only produce positive integer values. One might expect that zero light (e.g. a lens-capped black picture) would produce a consistent code value of 0. However, due to sensor dark current, the resulting image will have a normally distributed range of code values around some mean above 0.

Logically this doesn’t make sense when converting camera code values to linear exposure values: a picture which received zero light should be an exposure value of zero at all pixels. In practice, though, if the non-zero distribution of camera code values were clipped to zero, that would cause artifacts in normal images. So, rather than clip the largest code value produced by a black-capped image to a linear exposure value of 0, most camera manufacturers’ linearization math places the mean of the dark noise at an exposure value of 0, which means that half the pixels in a black-capped image will result in exposure values above 0 and half will be below zero.

Note this has nothing to do with flare or gamut mapping at this point. It’s just an artifact associated with the conversion of camera code values to linear exposure values when the camera code values contain dark noise.
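
A quick numerical sketch of that behaviour (the black level of 64 and the scaling below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a lens-capped frame: dark noise distributed around a mean
# code value of 64 (a made-up black level).
codes = rng.poisson(lam=64, size=1_000_000)

# Linearize by placing the mean of the dark noise at exposure 0.
exposure = (codes - codes.mean()) / 4096.0

print((exposure < 0).mean())  # ~0.5: about half the pixels come out negative
```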

I would argue against that. It is relatively straightforward to deal with in VFX, and it is then under their control, rather than VFX being supplied with images that have some unknown process applied, which may have broken scene-linearity or clipped data.

That bit is actually what I think is the flaw in the current pipeline:
you take a measurement at an extremely low spatial frequency (full frame) and then apply an offset at the highest spatial frequency (per pixel).
I don’t think this is sensible.

Also, most universal DRTs clip values below 0.0 linear light. Only the DRTs from the camera manufacturers map a small negative linear-light value to 0.0 display linear, which brings whole new issues into the pipeline.

@daniele I agree there’s a discussion to be had here. I was just trying to share what’s typical, not necessarily what’s optimal.

Except there is an elephant in the room.

We’ve gone from camera stimulus, to observer stimulus.

The former is meaningful positive stimulus, and the latter is meaningless; there’s literally zero meaning to negative stimulus in the observer model.

So under that reality, it makes good sense to consider perhaps actually generating meaningful observer stimulus at the gateway?

If I’m following your point correctly, this is a problem with no clear right answer.

If a camera code value greater than 0 is mapped to a linear exposure value of 0, then some camera code values will be mapped to negative linear exposure values, and that’s nonsensical.

If a camera code value of 0 is mapped to a linear exposure value of 0 then linearity with respect to the scene is compromised as the dark noise contribution to the image is treated as signal.

The typical solution to map the mean of the dark noise to an exposure value of 0 is a compromise.

I don’t know that I agree with characterizing the problem as “camera stimulus” vs. “observer stimulus” rather than “camera digital code values” vs. “exposure values at the image plane”. Regardless, you’re right that negative exposure values make no physical sense and are an artifact of the dark noise.

To @nick’s point, a well understood process is probably the most important thing here. Don’t go throwing curveballs at your VFX team.

My view is that we can somewhat sanely assume the camera saw some visible light that created camera-sensed stimulus. The idea being that if we seek to represent some degree of “tonality”, we would expect that positive stimulus to translate to a positive observer stimulus.

The sensor dark current arises from thermal emission of electrons, so the sensor produces a signal even when there is no light. As mentioned previously, the dark current signals have a Poisson distribution. They are typically subtracted off, sometimes using a dark frame (better) and sometimes using mean values. In either case the subtraction will produce some slightly negative values. These values will be small but can be quite different for the different channels, so they may be very non-neutral. Since they are not the result of light falling on the sensor, there may be no combination of light wavelengths that could produce them.

To avoid clipping these signals, many cameras add to raw images a black offset that is often significantly larger than the maximum negative excursion of the dark current signals. The offset value is then encoded in the raw file, to be subtracted when the raw file is linearized (hopefully using software that does not clip the resulting negative values).
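
In generic terms, not any particular camera’s pipeline, that linearization looks like the following (the 12-bit levels are illustrative):

```python
import numpy as np

def linearize_raw(code_values, black_level, white_level):
    """Map integer raw code values to relative linear exposure.

    Code values below black_level come out negative; a raw developer that
    clips here throws away the lower half of the dark noise distribution.
    """
    code_values = np.asarray(code_values, dtype=np.float64)
    return (code_values - black_level) / (white_level - black_level)

# A hypothetical 12-bit raw file with a black offset of 256:
print(linearize_raw([240, 256, 300], black_level=256, white_level=4095))
# -> [-0.00417  0.       0.01146]
```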

But then the camera is used to capture scenes. In many cases there will not be any significant areas in the focal-plane image that are so dark that the incident light is insufficient to lift the negative values to positive. Camera flare helps with this. Usually negative values will only persist if the scene is quite dark and the signal gain (ISO setting) is set quite high, so the noise excursion is amplified.

Generally, IDTs are designed to handle slightly negative values. There is no issue with applying a matrix to linear values that go slightly negative. However, the matrix can further amplify the unrealistically chromatic signals that result from the random noise being different on the different camera channels. In some cases this can cause artifacts. Also, LUT-based IDTs may not handle negative values.

In these cases, one thing to do is to soft-clip the very slightly positive and negative values into a range from very slightly above zero down to zero, using a “toe” function. Another approach is to apply an offset to the camera signals just sufficient to make all values non-negative. Which of these is preferred is to some extent a creative choice. The first option keeps the signal linear to the captured light except at the very extreme low end of the signal range, compressing the extreme blacks (and noise). The second option is like adding a bit of synthetic flare to the whole image, reducing contrast slightly but maintaining the separation of the extreme blacks. In either case, this should be done to the camera signals either when creating the raw image or in the IDT after subtracting the black offset.
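
A minimal sketch of such a toe, one of many possible shapes, with the width w being the creative choice mentioned above; the offset alternative is essentially the pre-flare shown earlier in the thread:

```python
import numpy as np

def toe(x, w=0.004):
    """Soft-clip everything below w into (0, w], leaving x >= w untouched.

    For x < w the hyperbola w / (1 + (w - x) / w) tends to 0 as x -> -inf
    and joins x = w with matching value and slope, so the signal stays
    linear except at the extreme low end.
    """
    x = np.asarray(x, dtype=np.float64)
    return np.where(x >= w, x, w / (1.0 + (w - x) / w))

print(toe(np.array([-0.01, 0.0, 0.002, 0.1])))
# -> [~0.0009, 0.002, ~0.0027, 0.1]: negatives become small positive values
```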

Also, as mentioned previously, negative values can result from the conversion of the camera signals to colorimetric (ACES) values, because the camera spectral sensitivities are different from those of the eye. In this case the negative values can be relatively large – too large to address using a toe function or offset. Some sort of gamut mapping is needed, but it becomes a camera-specific and possibly image-specific problem. Ideally the gamut would not be compressed except when an ACES value outside the spectral locus is encountered. It is a useful fact that each color on the spectral locus can only be produced by one spectrum, so if one knows the camera signals that result from these spectra (i.e. the camera spectral sensitivities), one can map them exactly and not produce any colors outside the spectral locus. This requires a 2D LUT, though, and some method to transition smoothly to the interior gamut. For these reasons the exact spectral-locus mapping is not widely implemented.
