I was listening to the last meeting and slowly becoming aware of the gamut problems with ACES, thanks also to "troy_s".
I wanted to give some feedback from a user perspective, as a Nuke compositor and an online artist on Flame.
I have been working with ACES in commercials for more than three years, and many of those jobs were car commercials.
The main problem areas are the car paint and the head- and tail lights, as these are mostly LED lamps nowadays
and produce the highest values. Add anamorphic lenses into the mix, and this often creates nasty values around the lamps.
Some years back, before we used ACES on regular jobs and were still testing out the possibilities, I experimented with a red tail-light shot from a car. See the PDF from 2017. I used the HueCorrect node, a soft clip in HLS (but only on saturation) and a color correction node.
ACES in Steps_v04_gamut_only.pdf (6.2 MB)
ACES didn't fix the problem of the high values back then, but at least I was able to "massage" the image toward a better look, and that was already a big advantage for me at the time. The result is far from perfect when I look at it today, but I was happy with it.
Later, on several projects, we regularly had to use the neon suppression fix in Nuke, which we reversed back out at the end of the comp — provided no one forgot to do it. Often the neon suppression fix was then applied again in Resolve for the grade.
This means the correction has to be applied at some point anyway, because no one wants to see these ugly artifacts.
So I can't see any reason why I would want to get them back at any point.
At the end of the discussion I was looking at the ARRI schematic, and I saw two places where these "extreme" values should be fixed: in my opinion, the IDT and the RRT/ODT.
As a user, I don't want to be presented with a bad-looking image by any camera in the first place, and I don't want to be forced to apply a fix somewhere in the pipeline. That is not user friendly. For me, if I run into it often, it is a bug in the whole system.
Here is an off-topic example: if I take my DSLR with its simple sRGB view transform, and my girlfriend next to me takes "better looking" pictures with an iPhone X — at first sight, on the display — I know which photo I would rather start working with in "post".
Sure, I grade my DSLR photos with Capture One later; the quality is better overall and I have more possibilities to tweak my images. But the temptation is there: why not start with an image that looks better? Is it professional that I have to know about a "neon highlight fix" matrix?
If only the IDT takes care of the problem, then no camera can create these extreme values anymore. But in comp I could easily create even higher values by adding or correcting elements, and in grading, the moment I use gamma operations I might also increase values dramatically. That is why I think the RRT/ODT must also take care of these extreme values and map them properly back, so that they cannot escape the spectral locus.
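The kind of "map extreme values back" step I mean can be sketched as a toy gamut compression: pull components that have run far from the achromatic axis (e.g. negative RGB values) back toward it with a smooth shoulder. This is only my own illustration in the spirit of such a compression — the function and parameter names are mine, the curve is a simple rational shoulder, and it is not the actual math of any ACES transform.

```python
def compress_toward_achromatic(rgb, threshold=0.8, limit=1.5):
    """Toy gamut compression sketch (my own illustration, not an ACES
    transform): for each component, measure its normalized distance from
    the achromatic anchor (the largest component). A distance > 1 means
    the component is negative, i.e. outside the working gamut. Distances
    beyond `threshold` are squeezed smoothly toward 1, so compressed
    components never end up negative."""
    ach = max(rgb)
    if ach <= 0.0:
        return list(rgb)  # no meaningful anchor; leave untouched
    out = []
    for c in rgb:
        d = (ach - c) / ach  # normalized distance; d > 1 means c < 0
        if d > threshold:
            x = (d - threshold) / (limit - threshold)
            # rational shoulder: asymptotes just below 1, so c stays >= 0
            d = threshold + (1.0 - threshold) * x / (1.0 + x)
        out.append(ach - d * ach)
    return out
```

For example, a pixel like `[1.0, -0.5, 0.2]` — the sort of value a saturated LED tail light can produce after an IDT — comes back with all components non-negative, while components already near the anchor pass through unchanged.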
Finally, I have a question:
If I understand it right, no color visible to humans can lie outside the horseshoe anyway?
And would I need 31 primaries — one per 10 nm step from 400 to 700 nm — to describe the spectral locus more precisely?
I think I understand why the AWG green and blue primaries lie outside the spectral locus: this way they cover
more "space" for those colors inside the spectral locus. But shouldn't all my possible colors be constrained by the spectral locus at some point?
I hope this is the right place to give this kind of feedback, and I am sorry if I misused a color-related term here or there.