I’m going to be a bit verbose here… bear with me.
Say, for example, we have an Arri AlexaWideGamut source image that has had the current 3x3 matrix IDT applied to bring it into the ACEScg working gamut, and that there are out-of-gamut values in this RGB space.
Say we use the currently proposed gamut compression algorithm with equal max distance parameter values on each channel (say 0.2). Each of the RGB components will then be compressed equally.
If we process this gamut-compressed image, view it through the ACES RRT+ODT, and compare it to the stock Arri display pipeline, there will absolutely be apparent hue shifts. This is because the IDT step of the pipeline has changed the RGB ratios: the chromaticity coordinates of the source gamut (AWG) are not equidistant from the target gamut (AP1), so compressing each channel by the same amount does not affect them equivalently.
Therefore, the max distance parameter needs to be biased according to the source gamut.
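For clarity, here is a minimal sketch (Python/NumPy) of the kind of distance-based compression I'm describing, assuming the inverse-RGB-ratio distance metric and a simple reinhard-style curve; the threshold of 0.8 and the exact curve shape are placeholders of mine, not necessarily what the current proposal uses:

```python
import numpy as np

def compress_distance(d, thr, lim):
    # Reinhard-style curve scaled so that a distance of `lim` maps exactly to
    # 1.0 (the gamut boundary) and anything at or below `thr` passes through
    # unchanged. Placeholder only; not necessarily the curve of the current proposal.
    scale = (lim - thr) * (1.0 - thr) / (lim - 1.0)
    return np.where(d <= thr, d, thr + (d - thr) / (1.0 + (d - thr) / scale))

def gamut_compress(rgb, thr=0.8, max_dist=(0.2, 0.2, 0.2)):
    # rgb:      one pixel in the working gamut (ACEScg here), possibly with
    #           negative components.
    # max_dist: per-channel distance *beyond* the gamut boundary that gets
    #           compressed onto the boundary -- the parameter I am proposing
    #           to bias per source gamut.
    rgb = np.asarray(rgb, dtype=float)
    ach = np.max(rgb)                      # achromatic axis = max(R, G, B)
    if ach <= 0.0:
        return rgb
    d = (ach - rgb) / ach                  # inverse RGB ratios: 0 = achromatic, 1 = gamut boundary
    cd = np.array([compress_distance(d[i], thr, 1.0 + max_dist[i]) for i in range(3)])
    return ach - cd * ach                  # rebuild RGB from the compressed distances
```

With equal `max_dist` values the same curve is applied to every channel, regardless of how asymmetric the source gamut is around AP1, which is exactly where the hue shifts described above come from.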
I think how much parameterization this tool should expose absolutely depends on the circumstance in which it is used.
For gamut compression applied before RRT+ODT, I absolutely agree. A good set of default parameter settings that work well for the most common source images, and are not exposed to the user, would be essential here.
For gamut compression applied in a VFX pipeline by a vendor, the opposite is true: the more parameters and customization the better. The gamut compress operator is likely to be customized for the specific needs of a show, which will likely have been shot on a specific set of digital cinema cameras.
For gamut compression in the DI, parameterization should be less technical and more artistically driven: tweak the apparent hue and saturation after the display rendering transform until it looks good. In this circumstance, parameters with a good set of defaults would be essential.
Moving forward, I think it might be important to consider which of these circumstances the work we are doing is targeting. I haven’t heard that discussed much so far in this working group.
With all that said, here is a proposal for a default set of max distance values. I computed the distances reached in ACEScg by a variety of digital cinema camera source gamuts (a sketch of how this could be done follows the table). Note that this is based on the assumption that there are no out-of-gamut values in the camera vendor’s source gamut itself, which may or may not be the case.
Gamut | Max Distance R | Max Distance G | Max Distance B |
---|---|---|---|
Arri AlexaWideGamut | 1.075553775 | 1.218766689 | 1.052656531 |
DJI D-Gamut | 1.07113266 | 1.1887573 | 1.065459132 |
BMD WideGamutGen4 | 1.049126506 | 1.201927185 | 1.067178249 |
Panasonic VGamut | 1.057701349 | 1.115383983 | 1.004894257 |
REDWideGamutRGB | 1.059028029 | 1.201209426 | 1.24509275 |
Canon CinemaGamut | 1.087849736 | 1.210064411 | 1.166528344 |
GoPro Protune Native | 1.038570166 | 1.138519049 | 1.227653146 |
Sony SGamut | 1.054785252 | 1.149565697 | 1.003163576 |
Sony SGamut3.Cine | 1.072079659 | 1.198700786 | 1.026392341 |
Max | 1.087849736 | 1.218766689 | 1.24509275 |
Average | 1.059741957 | 1.172203864 | 1.111424208 |
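For reference, here is roughly how values like those in the table could be computed, as a sketch using the colour-science Python library, under the assumption that the table's distances are measured from the achromatic axis with 1.0 sitting exactly on the AP1 gamut boundary. The exact colourspace names depend on the colour version installed, and results will shift slightly with the chromatic adaptation transform used:

```python
import numpy as np
import colour

# Corners of the source gamut cube: pure primaries and secondaries at unit intensity.
CORNERS = np.array([
    [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0],
    [1.0, 1.0, 0.0], [0.0, 1.0, 1.0], [1.0, 0.0, 1.0],
])

def max_distances(source_name, target_name="ACEScg"):
    # Convert the source gamut's extreme values into the working gamut, then
    # measure each channel's distance from the achromatic axis, where a
    # distance of 1.0 sits exactly on the gamut boundary.
    rgb = colour.RGB_to_RGB(
        CORNERS,
        colour.RGB_COLOURSPACES[source_name],
        colour.RGB_COLOURSPACES[target_name],
    )
    ach = np.max(rgb, axis=-1, keepdims=True)   # achromatic = max(R, G, B)
    dist = (ach - rgb) / ach                    # inverse RGB ratio distance
    return np.max(dist, axis=0)                 # worst case per channel

print(max_distances("ALEXA Wide Gamut"))        # name as used in older colour versions
```

This only samples the pure primaries and secondaries, which should be close to the worst case when the source image itself contains no out-of-gamut values, per the assumption above.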
With a few outliers (REDWideGamutRGB, for example), there are some common trends here. With some padding, a sane set of default max distance values might be something in the realm of 0.09, 0.24, 0.12 (expressed as distance beyond the gamut boundary, whereas the table values above are absolute distances from the achromatic axis). These numbers were arrived at both by looking at the averages and maximums of the above distances, and by evaluating different settings on the source imagery we have available to work with.
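As a quick usage example, plugging the proposed defaults into the sketch from earlier in this post (parameter names are mine, purely illustrative):

```python
# Hypothetical usage of the gamut_compress() sketch above with the proposed defaults.
compressed = gamut_compress([1.2, -0.15, 0.6], thr=0.8, max_dist=(0.09, 0.24, 0.12))
```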
All of the test images we have so far subjectively “look pretty good” with these settings.
I know what @Thomas_Mansencal is going to say - “we should not care about how it looks at this stage, the only thing we should do is compress all values into gamut” - but what if it looks bad after the view transform is applied? We can’t really go back and fix it. And again, I think this statement depends on the context in which this tool is applied.
Please correct me if any of my assumptions are wrong; I’m curious to hear thoughts on this!