Film stock/ADX10 Gamut

As someone that's too young to ever have worked with film stock (I've done one feature, but it was delivered as ADX10 and they wanted ACES AP0 back, which was very simple), I have no good understanding of the relationship between scene light and the code values that come out of a film scanner.

In my mind it would go like this:

Film stock has a certain response to light: depending on the intensity and wavelength of the light, it causes a change in density on the negative per color layer (I have done a lot of analog photography, so this isn't foreign to me).

This response can be measured and profiled.

The scanner then scans the negative, again causing a change in current? (I have no idea how laser scanners work.) Depending on the density of the negative, this would go into an A/D converter that quantizes it into linear 16-bit (or whatever) code values, which then get processed into something like 12-bit log ADX10 and saved as DPX files, using the aforementioned film stock and laser profiles to create a scene-referred output.
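
To make my mental model concrete, here is a toy Python sketch of the lin-to-log quantisation step I'm imagining (the constants are Cineon-style guesses, 0.002 density per code value with a base offset of 95, so please treat this as a guess at the idea, not how scanners actually work):

import numpy as np

# Toy sketch of the lin -> log quantisation step: 16-bit linear counts
# from the A/D converter, converted to density and quantised to 10-bit
# Cineon-style log code values. Purely illustrative guesswork.
linear_cv = np.array([64, 1024, 8192, 32768, 65535], dtype=np.float64)

transmittance = linear_cv / 65535.0   # normalise the A/D counts
density = -np.log10(transmittance)    # density = -log10(transmittance)
log_cv = np.clip(np.round(density / 0.002) + 95, 0, 1023).astype(int)

print(log_cv)  # one 10-bit log code value per linear input sample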

If this is correct (I am sure I got something wrong; I am really just guessing, so please correct me! I am keen to understand):

What even is ADX10? I wasn't able to find much about it on Google. Is this Cineon with an AP1 gamut? When going ADX10 → ADX10 there is zero error in Nuke, so I assume it is not doing any gamut conversion under the hood.

Then also, what gamut is Cineon? How do you define the colorspace of a film stock… etc.

So many questions. :smiley:

Yes, correct.

Mostly correct. Except… there is no laser scanner, only laser recorders (to write to film from digital). The most recent ARRISCAN scanners are basically Alexas that photograph the film with a really good lens.

Not sure about the LIN to LOG conversion (16 to 12 bit). Maybe @pguerin would know more here (and correct me if I'm wrong above).

It’s basically the same, but in 10-bit DPX.

It’s always been implied that sRGB is basically the gamut of film. Of course that is not 100% accurate, as film is not simply a triangle colorspace. Since it’s subtractive, some channels tend to influence others, creating colors that might go outside of sRGB, for example.


I believe most scanners have an A/D converter to 14 bits. Then a lot of processing goes on (dead/weak pixels, ghosting, offset and gain uniformity, base offset and gain, LUT to ADX), and the output to ADX10 is… 10 bit! ADX is quite close to Cineon, and there is a formula in the ACES spec to convert to ACES. You can assume scene linearity once in ACES, but I'm not sure how accurate that is. If the scanner has been correctly calibrated for ADX with the respective film stock, it should be more accurate than the old Cineon.

With modern film stocks (Vision3), properly calibrated ADX10 tends to lack dynamic range, so shots with very bright highlights will clip unless the scaling is decreased (which would break linearity) or ADX16 is used. ADX16 stored in DPX is a strange animal; software should properly read the ADX metadata for correct interpretation.
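
As a rough sketch of the scaling involved (the offsets and scale factors below are my reading of the ADX IDT CTLs, so treat them as assumptions), this shows why ADX10 runs out of density headroom where ADX16 does not:

import numpy as np

# Sketch of the ADX code-value -> density relations as I read them in
# the ACES ADX IDT CTLs (treat these constants as assumptions):
#   ADX10: density = (cv - 95) / 500
#   ADX16: density = (cv - 1520) / 8000

def adx10_to_density(cv):
    return (np.asarray(cv, dtype=np.float64) - 95.0) / 500.0

def adx16_to_density(cv):
    return (np.asarray(cv, dtype=np.float64) - 1520.0) / 8000.0

print(adx10_to_density(1023))   # ~1.856: where 10-bit ADX clips
print(adx16_to_density(65535))  # ~8.0: far more headroom in 16 bits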

In the end, ADX represents film densities, so the gamut discussion is not simple… Think of it as a digital intermediate between film and ACES AP0. I'm not sure exactly what you mean by zero error in Nuke; most color conversions in Nuke can be transparent when inverted, even with a gamut conversion…

Cheers!

Pierre


Oh wow, great info. So ADX is yet another log curve with no specification of what gamut it is? And the values get interpreted as sRGB?

Regarding the error: there is a rounding error when doing gamut transforms in 32-bit Nuke, writing out 16-bit files, and then transforming them back to the source space on output. Let's say you write 16-bit ACEScg EXRs from a LogC ProRes source and then convert those back to LogC; there is a slight error. Inside the 32-bit Nuke processing it's transparent.
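
You can simulate that outside of Nuke with a quick sketch (np.float16 stands in for half-float EXRs here, and the 3x3 matrix is an arbitrary invertible stand-in for the gamut transform, not a real LogC-to-ACEScg matrix):

import numpy as np

# Stand-in for a gamut transform: an arbitrary invertible 3x3 matrix
# (NOT a real LogC -> ACEScg matrix), kept in float32 to mimic Nuke's
# 32-bit processing.
M = np.array([[0.70, 0.20, 0.10],
              [0.10, 0.80, 0.10],
              [0.05, 0.10, 0.85]], dtype=np.float32)
M_inv = np.linalg.inv(M).astype(np.float32)

rng = np.random.default_rng(0)
src = rng.random((100_000, 3)).astype(np.float32)

# Full 32-bit round trip: transparent to within float32 precision.
roundtrip_32 = (src @ M.T) @ M_inv.T
print(np.abs(roundtrip_32 - src).max())   # ~1e-7

# Round trip through a 16-bit half-float intermediate, like writing
# half-float EXRs between the two conversions: a small error appears.
half = (src @ M.T).astype(np.float16).astype(np.float32)
roundtrip_16 = half @ M_inv.T
print(np.abs(roundtrip_16 - src).max())   # ~1e-3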

Film, like most sensors, does not have a gamut in the sense of a range of colours it can reproduce.

If you look at the spectral sensitivities of a sensor (film or digital), they will typically span the full range of wavelengths we can see and have overlap between the different channels. You could call them R, G, B for a digital sensor, L, M, S if you talk about the eye, etc., but because they generally respond to a range of wavelengths, these are not exact names; they usually correspond to where the bulk of each channel's sensitivity falls.

The key point about gamut is that if the sensitivities of your capture device fully span the eye's sensitivities, then that sensor can 'see' all the colours. This is true of most camera sensors, and as such they all have the same capture 'gamut'. Or, put another way, there is no colour they do not respond to.

When people typically talk about a camera gamut, they probably mean how the camera encoding is supposed to be interpreted for display: a meaning is attached to the R, G, B channels in terms of their primary chromaticities and white point, which can be combined to give you a normalised primary matrix. That allows you to transform from device/encoding-dependent values to tristimulus/chromaticity values, which are considered device-independent, and lets you plot a gamut volume, etc. These encoding primaries are chosen based on a number of factors relating to technical performance and to making pleasing images, and are generally computed using less extreme object colours well inside the HVS limit.
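
For example, with Colour Science you can build such a normalised primary matrix directly from primaries and a white point (sRGB chromaticities used here purely as a familiar example):

import numpy as np
import colour

# Encoding primaries and white point -> normalised primary matrix
# (RGB -> XYZ); sRGB chromaticities used purely as a familiar example.
primaries = np.array([[0.64, 0.33],
                      [0.30, 0.60],
                      [0.15, 0.06]])
whitepoint = np.array([0.3127, 0.3290])   # D65

NPM = colour.normalised_primary_matrix(primaries, whitepoint)
print(NPM)   # close to the well-known sRGB RGB -> XYZ matrix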

In ADX you can look at the matrices converting to AP0 as encoding the intended display gamut. This chart shows what the effective gamut is assumed to be, assuming the same white chromaticity as ACES uses; for this discussion, ignore the '_mod' version.

As you should be able to see, the chosen mapping from the ADX 'gamut' is mostly inside ACEScg's AP1. This was not explicitly documented and is an interpretation of the EXP_TO_ACES matrix in the CTL files:

const float EXP_TO_ACES[3][3] = {
    {0.72286, 0.11923, 0.01427},
    {0.12630, 0.76418, 0.08213},
    {0.15084, 0.11659, 0.90359}
};

followed by factoring out the AP0-to-XYZ conversion and decomposing the result into RGB primaries and a white point.

Kevin


For those with a working install of Colour Science for Python…

import numpy as np
import colour

# This matrix is defined as outputting ACES 2065-1 (AP0); it is the
# transpose of the CTL EXP_TO_ACES above, for numpy's column-vector
# convention.
EXP_to_AP0 = np.array([
        [0.72286, 0.12630, 0.15084],
        [0.11923, 0.76418, 0.11659],
        [0.01427, 0.08213, 0.90359]
    ])

def RGB_to_RGB_matrix_to_normalised_primary_matrix(RGB_to_RGB_matrix,
                                                   XYZ_to_RGB_matrix):
    # Compose XYZ -> RGB with the RGB -> RGB transform, then invert to
    # obtain the normalised primary matrix (RGB -> XYZ).
    M = np.einsum('...ij,...jk->...ik', RGB_to_RGB_matrix, XYZ_to_RGB_matrix)

    M = np.linalg.inv(M)

    return M

NPM = RGB_to_RGB_matrix_to_normalised_primary_matrix(
            np.linalg.inv(EXP_to_AP0),
            colour.RGB_COLOURSPACES['ACES2065-1'].XYZ_to_RGB_matrix)
P, W = colour.primaries_whitepoint(NPM)

film_gamut = colour.RGB_Colourspace(
            "film_gamut",
            primaries=P,
            whitepoint=W,
            #illuminant='D60',
            RGB_to_XYZ_matrix=NPM,
            XYZ_to_RGB_matrix=np.linalg.inv(NPM))

colour.RGB_COLOURSPACES.update({"film_gamut": film_gamut})

print(W)
print(P)

Gives

[ 0.32168102  0.33767131]
[[ 0.66374667  0.32237624]
 [ 0.15093369  0.74512602]
 [ 0.12757409  0.06353744]]
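
If you want to plot the resulting gamut against AP1 yourself (assuming the snippet above has run, so "film_gamut" is registered; the exact plotting function name depends on your colour version):

from colour.plotting import plot_RGB_colourspaces_in_chromaticity_diagram_CIE1931

# Assumes the snippet above has registered "film_gamut" in
# colour.RGB_COLOURSPACES; overlays it on ACEScg (AP1) and sRGB.
plot_RGB_colourspaces_in_chromaticity_diagram_CIE1931(
    ['ACEScg', 'sRGB', 'film_gamut'])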

Also note that there is a fundamental mechanical difference between the two media that makes it impossible to discretize film into three emissions.

At the risk of oversimplifying even further: the three dye layers were mixed with the projector bulb. As those dye layers depleted during exposure, the result would be a light mixture that is both “brighter”, as a result of more bulb shining through, and less chromatic, as a result of the layer depletion.

Digital RGB could be considered fixed colour lights, with variable emission, while film was fixed emission with variable colour layers.

With film, the colour formation matrix would never be able to describe the colour volume from density measurements because the dye layers would exist in a continuum of flux through an exposure range, and therefore there are no fixed “primaries”.
