A few Color Appearance experiments

Very interesting experiments. Lots of food for thought and discussion in tonight’s meeting.

Can I ask, when you say you “fixed the 0.18 grey to match ACES 1.2/ACES 035 values”, how did you do this? Are you just changing the reference white for the output to the value needed to hit the target? And are you doing the same to change the output luminance to target SDR or HDR?

@luke.hellwig has suggested that for his model to work as intended, the same reference white level should be used on the way in and the way out. This is what we are doing in the current DRT, modifying the values within the model space to target different outputs, rather than varying the model output parameters.

@luke.hellwig has also said that he does not have much confidence in the viewing condition dependent aspects of CAM16. So again, this is why the DRT uses the same input and output viewing conditions, and if a “dark-to-dim” adjustment is deemed necessary (or more likely a “dim-to-dark” for theatrical output, given we have based our work to date primarily on Rec.709), we will add a tweak to the rendering rather than leveraging the model to do that.
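
To illustrate the structure I mean, here is a toy sketch (using the colour-science library and a placeholder tonescale; this is not the actual CAM DRT code): identical viewing conditions on the way in and the way out, with any targeting done to the correlates inside the model space.

```python
import numpy as np
import colour
from colour.appearance import CAM_Specification_CAM16, VIEWING_CONDITIONS_CAM16

XYZ_w = np.array([95.05, 100.00, 108.88])   # D65 diffuse white, Y = 100
L_A, Y_b = 100.0, 20.0
surround = VIEWING_CONDITIONS_CAM16["Dim"]  # same surround in and out

def render(XYZ_scene, tonescale):
    # Forward model under the input viewing conditions.
    spec = colour.XYZ_to_CAM16(XYZ_scene, XYZ_w, L_A, Y_b, surround)
    # Modify the correlates *inside* the model space (placeholder tonescale on J).
    spec_out = CAM_Specification_CAM16(J=tonescale(spec.J), C=spec.C, h=spec.h)
    # Inverse model with the *same* reference white and viewing conditions.
    return colour.CAM16_to_XYZ(spec_out, XYZ_w, L_A, Y_b, surround)

# With an identity "tonescale" this simply round-trips the input:
print(render(0.18 * XYZ_w, lambda J: J))
```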

Anybody who feels I have misrepresented what we are doing or have decided, please jump in and correct me.

Hi Nick

Yes, that is exactly what I am doing.

I am able to make any input match any target white point, or target mid grey XYZ values, by adjusting the output diffuse white XYZ values.
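
Roughly, the mechanics look something like this (a simplified sketch using the colour-science CAM16 implementation, not my actual test setup; the target mid grey value below is just a placeholder, not an ACES 1.2 number):

```python
import numpy as np
import colour
from colour.appearance import CAM_Specification_CAM16, VIEWING_CONDITIONS_CAM16

XYZ_w_in = np.array([95.05, 100.00, 108.88])  # input diffuse white (D65), Y = 100
L_A, Y_b = 100.0, 20.0
surround = VIEWING_CONDITIONS_CAM16["Average"]

# Forward model on scene 0.18 grey under the input conditions.
XYZ_grey = 0.18 * XYZ_w_in
spec = colour.XYZ_to_CAM16(XYZ_grey, XYZ_w_in, L_A, Y_b, surround)
spec_out = CAM_Specification_CAM16(J=spec.J, C=spec.C, h=spec.h)

# Placeholder target for where mid grey should land on output.
Y_grey_target = 10.0

# The inverted XYZ scales (to a first approximation) with the output diffuse
# white, so scale XYZ_w until the inverted grey hits the target luminance.
scale = 1.0
for _ in range(10):
    XYZ_out = colour.CAM16_to_XYZ(spec_out, scale * XYZ_w_in, L_A, Y_b, surround)
    scale *= Y_grey_target / XYZ_out[1]

XYZ_out = colour.CAM16_to_XYZ(spec_out, scale * XYZ_w_in, L_A, Y_b, surround)
print(XYZ_out, scale * XYZ_w_in)  # grey at the target, and the white that got it there
```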

I did not think this would work, and I am still not 100% sure that this is the “correct” way to do it.

But it is extremely effective, which pushes my confidence into the 95-99% range, especially given the match between ACES 1.2 SDR and HDR.

I first tried adjusting only the luminance level (La / L_A), but it did very little to affect the XYZ output. I think it plays its strongest role in the full or partial adaptation to another white point, but has no effect on where the targeted white point lands in XYZ on output.
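
For comparison, the same round trip varying only the output adapting luminance and leaving the diffuse white alone (again just a colour-science sketch, with arbitrary values):

```python
import numpy as np
import colour
from colour.appearance import CAM_Specification_CAM16, VIEWING_CONDITIONS_CAM16

XYZ_w = np.array([95.05, 100.00, 108.88])
Y_b = 20.0
surround = VIEWING_CONDITIONS_CAM16["Average"]

XYZ_grey = 0.18 * XYZ_w
spec = colour.XYZ_to_CAM16(XYZ_grey, XYZ_w, L_A=100.0, Y_b=Y_b, surround=surround)
spec_out = CAM_Specification_CAM16(J=spec.J, C=spec.C, h=spec.h)

# Invert with different adapting luminances but the same diffuse white.
for L_A_out in (16.0, 100.0, 640.0):
    print(L_A_out, colour.CAM16_to_XYZ(spec_out, XYZ_w, L_A_out, Y_b, surround))
```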

I have done many more tests doing exactly this: SDR => HDR and HDR => SDR for ARRI Reveal, ACES 1.2, and of course the current ACES 2.0 candidate (035).

These tests are interesting because they can demonstrate different approaches to matching SDR/HDR output. They also become more interesting with the 10 stops over / 10 stops under ColorChecker image.
If there is one particular comparison someone is curious about, I likely already have an image, and maybe some data as well, and could provide a post.

What I wanted to understand first is what an appearance match between two different luminance levels should look like (according to current CAMs), and how the values change, both inside and outside the model, when that happens.

Regarding input/output parameters: if the input and output conditions match, there is no effect, so the current model, which does everything inside the model space, is not “wrong” in any way by using Dim in and Dim out, and works as intended.
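
A quick check of that point (colour-science sketch): a forward/inverse round trip with identical parameters just reproduces the input.

```python
import numpy as np
import colour
from colour.appearance import CAM_Specification_CAM16, VIEWING_CONDITIONS_CAM16

XYZ_w, L_A, Y_b = np.array([95.05, 100.00, 108.88]), 100.0, 20.0
dim = VIEWING_CONDITIONS_CAM16["Dim"]

XYZ_in = np.array([17.0, 18.0, 19.0])
spec = colour.XYZ_to_CAM16(XYZ_in, XYZ_w, L_A, Y_b, dim)
XYZ_out = colour.CAM16_to_XYZ(
    CAM_Specification_CAM16(J=spec.J, C=spec.C, h=spec.h), XYZ_w, L_A, Y_b, dim
)
print(np.allclose(XYZ_in, XYZ_out, atol=1e-4))  # expect True: Dim in / Dim out is a no-op
```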

I used Average in/Average out as a baseline/placebo, since I was transforming Scene Linear ACES values directly to Dim and Dark conditions to see what those scene linear values should look like under those conditions, so that some objective targets could possibly emerge, especially regarding saturation.
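
The shape of those tests, as a simplified sketch (colour-science; the white point and adapting luminance stay fixed, only the surround changes between the forward and inverse directions):

```python
import numpy as np
import colour
from colour.appearance import CAM_Specification_CAM16, VIEWING_CONDITIONS_CAM16

XYZ_w, L_A, Y_b = np.array([95.05, 100.00, 108.88]), 100.0, 20.0

def resurround(XYZ, surround_in, surround_out):
    # Forward under the input surround, inverse under the output surround.
    spec = colour.XYZ_to_CAM16(XYZ, XYZ_w, L_A, Y_b, surround_in)
    spec_out = CAM_Specification_CAM16(J=spec.J, C=spec.C, h=spec.h)
    return colour.CAM16_to_XYZ(spec_out, XYZ_w, L_A, Y_b, surround_out)

# e.g. Average in, Dark out for scene 0.18 grey:
print(resurround(0.18 * XYZ_w,
                 VIEWING_CONDITIONS_CAM16["Average"],
                 VIEWING_CONDITIONS_CAM16["Dark"]))
```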

The close correspondence of Average in / Dark out with ACES 1.2 (SDR and HDR) suggests that this is probably a reasonable approach.

I don’t want to get too far ahead of what these tests already show, but if you wanted to take full advantage of a CAM-based DRT, a single “ideal” render could be designed around “Dark” surround output; by changing the output to Dim, it would automatically transform the output and adjust the contrast and saturation for Dim conditions.
The same could be done with the target white point, but both of those features might only be practical for HDR, since SDR is less flexible about where max white lands, and the highlight rolloff still needs to be carefully crafted to maximize the limited dynamic range.
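
For reference, the only parameters that actually differ between those surround settings are the three induction factors (the standard CIECAM02/CAM16 values, as tabulated in colour-science). c is the exponent in J = 100 * (A / A_w)^(c * z), which is where the contrast change between Dark and Dim output comes from, and N_c scales the chromatic induction, which is where the saturation change comes from:

```python
from colour.appearance import VIEWING_CONDITIONS_CAM16

for name in ("Average", "Dim", "Dark"):
    print(name, VIEWING_CONDITIONS_CAM16[name])
# Average: F=1.0, c=0.69,  N_c=1.0
# Dim:     F=0.9, c=0.59,  N_c=0.9
# Dark:    F=0.8, c=0.525, N_c=0.8
```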

That is why I only consider these tests as more of a sanity check (which I think was overdue, at least for me) to (re)understand what should be targeted and how that can happen inside the model. I hope they can serve that function, at least a little.

This sentiment also very much applies to me, and to the approach I have taken with these tests.

It is all a bit hacky, especially trying to compare scene 1.0 limited HDR with SDR on an SDR display, but I think it is a pretty good hack to at least provide a few talking points or references to help limit the confusion when discussing these transforms.

Changing output luminance by altering the output reference white is different to what we are currently doing. But if it provides a good appearance match at different luminances, it would be interesting to see if it could be possible to tone-map by modulating the reference white on a per-pixel basis using the tone curve.
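
Purely as a thought experiment, one naive reading of that idea might look like this (a rough sketch using colour-science; the tone curve here is a made-up placeholder, not the Daniele curve or anything from the current DRT):

```python
import numpy as np
import colour
from colour.appearance import CAM_Specification_CAM16, VIEWING_CONDITIONS_CAM16

XYZ_w, L_A, Y_b = np.array([95.05, 100.00, 108.88]), 100.0, 20.0
surround = VIEWING_CONDITIONS_CAM16["Dim"]

def tonescale(Y):
    # Placeholder curve mapping scene-relative luminance (white = 100) to display
    # luminance, normalised so that 100 maps to 100.
    return 100.0 * (Y / (Y + 40.0)) / (100.0 / 140.0)

def render_pixel(XYZ_scene):
    spec = colour.XYZ_to_CAM16(XYZ_scene, XYZ_w, L_A, Y_b, surround)
    spec_out = CAM_Specification_CAM16(J=spec.J, C=spec.C, h=spec.h)
    # Instead of tone-mapping J, scale the output reference white for this pixel
    # so that its scene luminance lands (roughly) at tonescale(Y).
    Y = XYZ_scene[1]
    XYZ_w_out = XYZ_w * (tonescale(Y) / max(Y, 1e-6))
    return colour.CAM16_to_XYZ(spec_out, XYZ_w_out, L_A, Y_b, surround)

print(render_pixel(0.18 * XYZ_w))
```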

Of course, I don’t know how a path to white and output gamut compression could be applied when using such an approach.

And as for invertibility…

Yes, I am aware that this is how the current CAMDRT is working, and that is part of the reason I chose regular stock CAM16 instead of the Hellwig-modified CAM16, as I didn’t want to confuse anyone (more than I might already be) into thinking that these settings correspond to the current CAMDRT.

I think path to white has lots of options, but in the end it will be a visual preference/judgement that determines the best solution.

Gamut mapping will always be difficult and always require some compromises.
I suspect this is the part of the DRT that will continue to need the most work and attention.
Good LUT-based gamut compression is already full of compromises, so an invertible analytic approach is bound to be difficult, but definitely worth pursuing.

I don’t want to be too presumptuous just yet, but I do think it may be possible to create one fixed “ideal render transform” using both J and M, targeting SDR or maybe 1000/540 limited HDR, and use the output white point to do most of the transform for display luminance, with the J curve only modulated slightly in the highlight region to make use of additional luminance or to compress for lower dynamic range.

There is no reason the Daniele curve could not still be used for this function; it may even require fewer parameters, or maybe the way some parameters function could be repurposed to refine other regions of the curve more precisely if that is required.

As for any surround/white point related functions that are features of a standard CAM, my understanding is that those functions have always been designed with the intention of being perfectly invertible, but they have met with numerical limitations in practice when used for real imaging problems.

@luke.hellwig and others have done great work to refine some of the functions, but a CAMDRT from ACES AP0 scene values or wide gamut IDTs is still a massive stress test for the numerical limits of the CAMs, as everyone working on this project is well aware.
Novel solutions to these numeric limits have already been tried or are already in use, and probably even more could/should be tested.
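
One example of the kind of workaround already in use: the CAM16 post-adaptation response compression is only defined for non-negative values, but AP0 or wide gamut camera input can push the cone-like responses negative after the matrix, so implementations (colour-science among them, I believe) mirror the function about the origin rather than failing. A minimal sketch of that extension:

```python
import numpy as np

def post_adaptation_compression(RGB, F_L):
    """Sign-preserving CAM16 post-adaptation non-linear response compression."""
    x = (F_L * np.abs(RGB) / 100.0) ** 0.42
    # Mirror the compression about the origin so negative inputs do not produce NaNs.
    return np.sign(RGB) * 400.0 * x / (27.13 + x) + 0.1

print(post_adaptation_compression(np.array([20.0, -5.0, 0.0]), F_L=1.0))
```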