I wanted to share the results of some in-house experimentation I did a few months ago to assess the state of ACES Input Transforms. The full details are in the attached PDF. This is by no means a “formal” journal-type write-up, but it gives an idea of the procedure followed and the results.
The basic idea was to see how well cameras could be made to match if the procedures described in P-2013-001 were followed under optimum conditions (i.e. using reliable spectral sensitivity measurements, proper exposure, etc.).
Introduction:
In this experiment, a monochromator was used to expose a camera with narrow bands of light centered at regular wavelength increments across the visible spectrum. Data was collected and processed to derive spectral sensitivity data. The validity of the spectral sensitivity data was evaluated independently by testing linearity of sensor response as well as color accuracy between theoretical calculated data and empirical photography. Once confidence in the linearity and spectral sensitivity data was established, the procedure specified in “Academy P-2013-001: Recommended Procedures for the Creation and Use of Digital Camera System Input Device Transforms (IDTs)” was followed. Finally, the results of applying the IDTs for four different cameras were compared to the ACES Reference Input Capture Device (RICD) and to each other.
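To give a feel for the final step of that procedure, here is a minimal sketch of the 3x3 solve at the centre of an IDT derivation. Note that P-2013-001 actually minimizes error in a perceptual space via nonlinear optimization; plain linear least squares is shown here only to convey the shape of the problem, and all of the data below is synthetic.

```python
import numpy as np

# Synthetic training data: one row per training patch (P-2013-001 uses
# 190 training spectra; the values here are made up).
# camera_rgb: white-balanced, exposure-normalized camera responses.
# aces_rgb:   target ACES values computed from the scene colorimetry.
rng = np.random.default_rng(0)
camera_rgb = rng.random((190, 3))
true_M = np.array([[0.90, 0.10, 0.00],
                   [0.05, 0.90, 0.05],
                   [0.00, 0.10, 0.90]])
aces_rgb = camera_rgb @ true_M.T   # fabricated targets so the solve is exact

# Solve camera_rgb @ M.T ~= aces_rgb in the least-squares sense.
M_T, *_ = np.linalg.lstsq(camera_rgb, aces_rgb, rcond=None)
M = M_T.T                          # the 3x3 IDT matrix
```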
Conclusion:
There are many opportunities for other options and improvements, but if the procedure described in P-2013-001 is followed using quality spectral sensitivity data, good matches from camera to camera can be obtained. The most difficult part was having the default exposure of a properly exposed 18% gray consistently come out at ACES = [0.18 0.18 0.18], but a one-time correction grade to establish the adjustment needed to attain this was enough to make all cameras fall into a match with each other.
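To make that one-time correction concrete, here is the arithmetic as a tiny sketch (the numbers are hypothetical, not from the experiment):

```python
import numpy as np

measured_gray = 0.165                 # hypothetical mean ACES value of the gray patch
scale = 0.18 / measured_gray          # one-time gain, ~1.09x for this camera

aces_pixels = np.full((4, 4, 3), measured_gray)   # stand-in for real image data
aces_corrected = aces_pixels * scale              # 18% gray now sits at 0.18
```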
I am hoping somebody finds this useful and/or it prompts a discussion about Input Transforms. I am happy to answer any questions about the experiment.
I will, however, be uploading the raw photos and derived data once I have a chance to convert my saved variables out of that pesky proprietary .mat format into something more usable.
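(For anyone doing a similar conversion, a minimal sketch using scipy works well; the filename below is a placeholder, not one of my actual files.)

```python
import numpy as np
from scipy.io import loadmat

# Load the MATLAB file; squeeze_me collapses singleton dimensions.
data = loadmat("camera_data.mat", squeeze_me=True)

# Dump every numeric array to a plain tab-delimited text file.
for name, value in data.items():
    if name.startswith("__"):          # skip MATLAB header metadata
        continue
    if isinstance(value, np.ndarray) and value.dtype.kind in "fiu":
        np.savetxt(f"{name}.txt", np.atleast_2d(value), delimiter="\t")
```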
Even with their inherent limitations, 3x3 matrices seem to do a very respectable job.
Cameras tend to produce results that match each other more closely than they match scene colorimetry (i.e. the RICD), even when scene colorimetry is the target. This just shows that cameras aren’t great colorimeters.
Thank you very much for doing this! It’s always good to reaffirm our assumptions.
This post prompted me to post my slides from IBC 2019, where I looked at the other implications of using 3x3 matrices here:
I very much look forward to regenerating those diagrams with the data made available here, to validate my own assumptions and make sure I didn’t do any accidental black-box voodoo.
@SeanCooper: I think your last slides are correct; at least I would not expect anything massively different, except for the regions folding back to the illuminant above the line of purples. Anyway, this is not important compared to this question:
Is ACEScg sufficient?
My understanding is that the AMPAS intent is to have a dedicated Working Group specifically answering this question. However, the question should be more refined: sufficient for what?
For rendering? Probably: it tends to behave nicely, as has been shown a few times.
For compositing? Probably not, as cameras will likely always exceed its gamut.
Is the answer to the second question yet another gamut? My gut feeling, and I could be wrong, is no: there might always be a camera that crosses over its bounds, and the gamut would likely need to be quite large. It starts to remind me of scRGB, and that is not a good memory.
I will try to find some spare cycles to plot Jinwei’s Spectral Database and the Raw to ACES one similarly.
I didn’t want to take over this thread, hence my creating a separate post for the slides.
But, in general, I think the simple point pertaining to this IDT post is that the present best-practice 3x3 IDT matrices and an AP1 working space don’t play nicely together in all cases; the question is what should or could be done to help.
Once public, this dataset could provide a very useful testing ground to analyze the nature of the 3x3 IDT across a larger variety of spectral stimuli, and to explore other solutions that may yield a more favorable “spectral hull” response.
To your points specifically:
Seeing how there is potentially a movement to add more colorspaces to the ACES definition to allow for “camera-native” grading via CDLs in AMF… I would say yes, this is a very real possibility to explore, because there are people actively being “hurt” by AP1-based working spaces.
I know the reasons for the above are more complicated than just negatives in your image, but that is still a serious issue.
In my personal opinion, there are three inter-related items that could/should be explored:
1. Alternate IDT methods that would improve colorimetric response and provide a better “spectral-hull” response, keeping in mind other image-quality factors like noise.
2. Explore and propose best practices for handling non-colorimetric data (handling negatives gracefully, gamut mapping, etc.).
3. Accept the present reality of 3x3 IDTs that produce out-of-spectral-locus values and handle negative RGBs badly, and look to accommodating them in a new Wide-Gamut-esque working space.
The interesting bit is that it depends entirely on your base spectral sensitivity curves, your IDT regression method, and your training data. So it would be a great topic to explore, as it could end up being not that crazy, who knows?
Plenty there to debate over; I’m sure no one will take it the wrong way.
I guess that one of the first things to do would be to enumerate the practical problems caused by having colours outside the spectral locus/working space, e.g. it makes it hard to key anything, it potentially generates nuclear colours, etc.
There may not be one best solution to all of these problems but rather different solutions and recommended approaches for each, e.g. for the former, go back to native Camera Space, where nothing is negative by definition, do the key, then transform back; for the latter, gamut-map to bring the potential nuclear colour back into the spectral locus.
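As a quick sketch of that first approach, assuming a plain (and therefore invertible) 3x3 IDT, the round trip to camera-native space and back is lossless; the matrix and plate below are made-up stand-ins:

```python
import numpy as np

# Hypothetical 3x3 IDT (camera RGB -> ACES); any full-rank matrix is
# invertible, so the round trip is exact up to floating-point precision.
M = np.array([[0.75, 0.20, 0.05],
              [0.05, 0.90, 0.05],
              [0.02, 0.10, 0.88]])
M_inv = np.linalg.inv(M)

aces = np.random.rand(4, 4, 3)      # stand-in for a plate, shape (H, W, 3)
camera_native = aces @ M_inv.T      # for real camera-originated footage,
                                    # values here are non-negative by construction

# ...do the keying here, in camera-native space...

aces_again = camera_native @ M.T    # return to ACES for the comp
assert np.allclose(aces, aces_again)
```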
Agreed with all three points, with the caveat that point 3 is a race we will never win without making huge compromises, as there will eventually be a vendor that breaks our assumptions. The only guaranteed way is to be able to go back to the native Camera Space (I’m obviously disregarding camera noise here). That being said, maybe a new gamut that handles 95% of all the use cases is good enough; I’m just worried about what the remaining 5% will cause.
As promised, I have organized and stripped down the data to make it easier to share.
For those who just want the spectral sensitivity data, each directory has a file named “ss_CameraManufacturer_Model.txt”, which is tab-delimited and covers 380-780nm in 2nm increments (interpolated from the 5nm sampled monochromator measurements).
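(If it helps, loading one of those files in Python is a couple of lines; I am assuming a layout where the first column is wavelength and the remaining three are R, G, B, so check against the actual files.)

```python
import numpy as np

# Assumed columns: wavelength (nm), R, G, B sensitivities, tab-delimited.
ss = np.loadtxt("ss_CameraManufacturer_Model.txt", delimiter="\t")
wavelengths = ss[:, 0]        # 380-780nm in 2nm steps -> 201 samples
rgb_sensitivity = ss[:, 1:4]
```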
For those who want to duplicate the experiment, I have uploaded the camera raw files. You can process these yourself to linear TIFFs using dcraw and then extract the RGB values from each per-wavelength capture to derive the spectral sensitivities.
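A rough sketch of that pipeline, for reference (the filename and patch geometry are placeholders; `-4 -T` asks dcraw for linear 16-bit TIFF output):

```python
import subprocess
import numpy as np
import imageio.v2 as imageio

raw_file = "capture_550nm.CR2"   # hypothetical per-wavelength capture

# -4: linear 16-bit output, -T: write TIFF instead of PPM.
subprocess.run(["dcraw", "-4", "-T", raw_file], check=True)

img = imageio.imread(raw_file.rsplit(".", 1)[0] + ".tiff")

# Average a central patch to get one RGB triple for this wavelength;
# the patch size and position here are arbitrary.
h, w = img.shape[:2]
patch = img[h//2 - 50:h//2 + 50, w//2 - 50:w//2 + 50]
mean_rgb = patch.reshape(-1, 3).mean(axis=0)
```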
I also included the power measurement file from the monochromator rig, as well as the lens transmission data.
“sampled_rgb.txt” contains the average RGB values from each of the 91 monochromator captures, representing wavelengths from 350-800nm in 5nm increments.
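Putting those files together, the sensitivity derivation is essentially a per-wavelength division. A sketch under assumed file layouts (the power and lens transmission filenames are placeholders, and I am assuming everything is resampled to the same 5nm grid):

```python
import numpy as np

# Assumed layouts: first column wavelength, remaining column(s) data.
rgb = np.loadtxt("sampled_rgb.txt")              # (91, 4): wl, R, G, B
power = np.loadtxt("monochromator_power.txt")    # (91, 2): wl, power
lens = np.loadtxt("lens_transmission.txt")       # (91, 2): wl, transmission

# Spectral sensitivity = camera response per unit light reaching the sensor.
sensitivity = rgb[:, 1:4] / (power[:, 1] * lens[:, 1])[:, None]

# Normalize so the peak channel response is 1.0 (a common convention).
sensitivity /= sensitivity.max()
```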
The entire package can be browsed and selectively downloaded or downloaded as a whole from this Dropbox link
@nick @Thomas_Mansencal The Python will have to wait, but there is interest in redoing it that way for similar analysis work that will likely be done with motion picture cameras as part of the imminent IDT VWG (although that group will be focusing on the inconsistency in ISO ratings and on recommendations for getting nominally exposed images to come in closer to [0.18, 0.18, 0.18], i.e. less need for the one-time correction I mentioned in the conclusion).
@Thomas_Mansencal I agree it’s finicky, and it only takes the next camera model to throw the whole thing out the window…
I would just prefer to say to an average user of ACES that ACES actually does solve your issues, not create more, and it’s a bit difficult to say that straight-faced as things are right now. The RRT/ODT-side gamut mapping that has been mentioned would be a huge win, but it only solves one community’s issues (in a way).
The 3x3 needs to be improved. Nailing exposure may be important, but we should wean RAW and ACES away from the concept of exposure. All video should use RAW, or it will have burned-in color science. Maybe we need some sort of universal RAW color pipeline, like IPP2, for all cameras.
The hue response over exposure needs to be recalibrated in such a way that, when viewing a color chart and changing the exposure, the hue lines stay straight and do not warp in the YUV vectorscope, essentially being better than the camera’s factory-default transform LUT.
If we can combine RAW, decouple ACES from exposure, and then perfect the hue response over exposure, we can use any exposure we wish and compensate perfectly (with the desired ISO response) to get accurate hues across any choice of camera or exposure.
This will require a larger matrix and larger data sets, but it will be universal.
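One way to quantify that hue test (a sketch of the measurement only; `display_transform` is a stand-in for whatever view transform is under test, not a proposal):

```python
import numpy as np

def display_transform(rgb):
    # Stand-in tone scale (Reinhard-style shoulder) just so the sketch
    # runs; NOT the RRT/ODT. Its per-channel compression is what makes
    # hue drift with exposure below.
    rgb = np.clip(rgb, 0.0, None)
    return rgb / (rgb + 0.18)

def uv_hue_angle(rgb):
    # Rough vectorscope-style hue angle from BT.709 luma and simple
    # colour-difference axes.
    r, g, b = rgb
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    return np.degrees(np.arctan2(r - y, b - y))

patch = np.array([0.4, 0.2, 0.1])   # hypothetical chart patch, linear
for stops in (-2, -1, 0, 1, 2):
    out = display_transform(patch * 2.0 ** stops)
    print(stops, uv_hue_angle(out))  # a straight hue line -> constant angle
```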