Quite often when I convert dark live-action footage from its native colorspace to ACES I end up with negative values. This has happened with both S-Log3 footage and RED raw converted directly to ACES, in the footage I have tried.
When compositing you really don’t want negative values in your comp, but you also don’t want to clip them, as you’ll lose that data in the shadows.
So my question for you all is: how would you deal with this? The two solutions I can think of are:
- Bump up the brightness until there are no negative values any longer.
Pros: You’re not changing the contrast, and you can easily revert the added brightness at the end.
Cons: Bright stuff gets even brighter.
- Bump up the shadows until there are no negative values any longer.
Pros: Bright things don’t get brighter.
Cons: You might change the contrast in the shadows somewhat, depending on how much you need to add to remove the negative values. It might also be hard to bring it back down at the end if needed, and it might change other composited objects in ways that are not wanted.
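For what it’s worth, the first option amounts to an additive offset that you store and subtract again at the end. A minimal NumPy sketch of that round trip (the plate values here are made up for illustration):

```python
import numpy as np

# Hypothetical linearised plate with some negative shadow values
plate = np.array([[-0.02, 0.18],
                  [ 1.50, -0.005]])

offset = max(0.0, -plate.min())  # smallest offset that makes everything >= 0
lifted = plate + offset          # comp with this; no negatives to worry about

# ... compositing operations on `lifted` ...

restored = lifted - offset       # revert the lift at the end
assert np.allclose(restored, plate)
```

This only round-trips cleanly through operations that preserve the offset; any non-linear grade or merge will bake it in, which is part of the downside described above.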
I’m attaching a test frame from some stock footage I have shot, if you want to try something out.
It’s shot using an FS7 with the color profile S-Log3 - S-Gamut3.Cine
While there certainly are negative values in that image, there are not as many as there might first appear. It would appear to be S-Log3 with a legal-to-full scale applied, as you would get if you, for example, recorded the S-Log3 SDI output of the FS7 on an external ProRes recorder, and then decoded the ProRes using default scaling.
You may well already be aware of this. But if not, you should be applying a full-to-legal scale to the image before applying the S-Log3 / S-Gamut3.Cine IDT. This will significantly reduce the offset needed to bring all values positive once linearised.
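As a sketch of what that scale does (assuming 10-bit conventions, where legal range is code values 64–940 out of 0–1023; this is not any particular tool’s implementation):

```python
# Full-to-legal scale for normalised 0-1 data, 10-bit conventions:
# full-range 0.0 maps to legal black (64/1023), 1.0 to legal white (940/1023).
def full_to_legal(v):
    return (v * (940 - 64) + 64) / 1023

# A decode done with default legal-to-full scaling stretches the S-Log3
# code values; applying this before the IDT undoes that stretch.
black = full_to_legal(0.0)   # ≈ 0.0626
white = full_to_legal(1.0)   # ≈ 0.9189
```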
How was that image recorded? And might it be possible to get a copy of the camera original footage? It would make a very useful test image for the gamut mapping Virtual Working Group. This group is investigating methods for removing negative values from images caused by out of gamut colours. The current proposals will unfortunately not help with the negatives in your image.
Thanks for the great information Nick!
The footage was recorded straight to the Sony FS7, so no external recorder was used here.
I just tried what you said and converted the footage from full to legal, and yeah, that really fixed most of the bigger problems I was getting before!
There are still some negative values left, though.
Here is the raw footage; you’re fully free to try whatever you wish with it!
I also included a frame from some RED footage we shot. Converting this one to ACES directly from the debayer still gives some negative values.
For the RED footage the lowest value I’m getting is -0.00941. Not much, but still negative.
For the fire footage (when set to legal range) the lowest value I’m getting is -0.113.
Thanks @nick for your help! Could you expand a bit on this?
I’d love to know/understand why.
It has to do with some central design decisions made in ACES. Many of the IDTs allow the log-to-linear curve to produce negative values.
This is related to the fact that there are many forms of ‘linear’. The linear used in most IDTs is based on the noise floor of the sensor: linear 0 is the result with a lens cap on, and negative values are created by overshoot and undershoot in the noise. This was good for systems that emulated analog capture, such as a TV video camera, where 0 represented the bias voltage on the camera corresponding to television black, or a film scan, where 0 would correspond to the film base.
This method creates significant issues when you try applying physics-based techniques such as 3x3 matrices, or when you are trying to match two cameras. In the camera-matching example, the 0 of every camera that uses this technique measures the same. So let’s say you have one camera with 7 stops of latitude below 18% grey and another with 5 stops. They both have the same 0 point: both cameras have their black level stretched to 0. One is stretching farther, so when you try to match values close to 0 the stretch is different, and the images don’t quite line up. It works absolutely fine if you only use one camera.
In a physics-based example each stop of latitude represents a doubling or halving of light. So if 18% grey = 0.18, the
5-stop camera will have its minimum value at 0.005625
7-stop camera will have its minimum value at 0.00140625
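The arithmetic above is just repeated halving from mid grey:

```python
grey = 0.18

# Each stop below mid grey halves the light, so n stops = divide by 2**n
min_5_stop = grey / 2**5   # 0.005625
min_7_stop = grey / 2**7   # 0.00140625
```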
Physics-based light models will not contain 0 or less. That becomes important when you apply a 3x3 matrix.
A 3x3 matrix is mathematical shorthand for the combination of linear light through a filter: the idea is that if you vary the intensity of light behind one set of 3 filters, you can reproduce any filter in another set. Negative values REALLY throw this off. When a 3x3 matrix is applied to a negative linear value it is in effect reversing the equation: a filter combination that should be ADDING light to balance is now SUBTRACTING light. Any color channels that cross that boundary will have their color differences increased instead of being brought closer together.
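To see the sign flip concretely, here is a toy 3x3 matrix applied to a pixel with one negative channel (the coefficients are made up for illustration, not any real camera matrix):

```python
import numpy as np

# Illustrative matrix: off-diagonal negative coefficients, as is typical
# of camera-to-working-space matrices
M = np.array([[ 1.6, -0.4, -0.2],
              [-0.1,  1.3, -0.2],
              [ 0.1, -0.4,  1.3]])

pixel = np.array([0.10, -0.02, 0.05])   # green has dipped below zero

out = M @ pixel
# The -0.4 green coefficient on the first row should subtract light from
# the red output, but multiplied by a negative value it adds light instead:
# the channels are pushed further apart rather than balanced.
```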
A SUPER simple example is a color patch in Arri LogC with the values (0.3333, 0, 0.6666). In linear it has the values (0.101, -0.017, 2.45). If in linear light you double the exposure you get (0.202, -0.034, 4.91)!
This changes not only the exposure but the hue and saturation as well. A 3x3 matrix will propagate this error on every channel, creating errors across hue, saturation and luminance. The errors are small, but their effects crop up in every pipeline and make every operation more complex.
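The LogC numbers above can be reproduced with the published Arri LogC (v3, EI 800) decode; a sketch using the parameters from Arri’s white paper:

```python
def logc_to_linear(t):
    # Arri LogC3 EI 800 parameters
    cut, a, b = 0.010591, 5.555556, 0.052272
    c, d      = 0.247190, 0.385537
    e, f      = 5.367655, 0.092809
    if t > e * cut + f:
        return (10 ** ((t - d) / c) - b) / a
    return (t - f) / e  # linear segment below the cut; LogC 0 lands here

lin = [logc_to_linear(v) for v in (0.3333, 0.0, 0.6666)]
# ≈ [0.101, -0.017, 2.46]
doubled = [2 * v for v in lin]
# doubling the exposure pushes the negative channel further from zero
```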
So this is a long way to say that this was done on purpose.
The best way to deal with this is to create your own log-to-lin conversion based on an optical-physics linear model rather than a broadcast-camera linear model. Because a matrix multiply is done in all IDTs, you need to correct this before going through an IDT.
Thank you for this great and complete answer @jslomka ! Much appreciated !