ACES and chromatic adaptation consistency

The V2 IDTs (covering pre-LogC3, i.e. maybe one year’s worth of content generated in 2010 or early 2011) should all go away.

The V3 IDTs for camera native should all go away. They were only valid for that first sensor generation. The only IDTs that should survive are for LogC3, of which there should be one for each of the 14 EIs that the pre-ALEXA 35 cameras can capture, no more.

I’m curious why more models aren’t using CAT16, which has some mathematical conveniences?

I also want to take a moment to point out some of the newer rather promising chromatic adaptation models:

Shen, C., & Fairchild, M. D. (2023). Weighted Geometric Mean (WGM) method: A new chromatic adaptation model. PLOS ONE, 18(8), e0290017. doi:10.1371/journal.pone.0290017

This revisits some of Fairchild’s work from the 80s (his graduate thesis, maybe…), and points out the clear trend that adaptation does not seem to follow a straight line in LMS space but rather tracks along the Planckian locus.

Or perhaps in another projection, like DKL, it is tracking along the (L+M)−S axis (just thinking out loud on the forum here; I haven’t actually dug into the DKL idea).


Completely agree. See my previous comment re: chromatic adaptation models.


One key issue with the new models from Mark (and it has a physiological basis) is that they are not invertible, e.g. $\mathrm{D65} \rightarrow \mathrm{D60} \neq (\mathrm{D60} \rightarrow \mathrm{D65})^{-1}$.
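
To make the practical consequence concrete, here is a quick numerical check. It is a minimal sketch assuming a simplified von Kries-style CAT with a CAT02/CAT16-style degree-of-adaptation blend, not the exact VK20 math, but it shows the same effect: as soon as D < 1, the forward and reverse matrices stop being inverses of each other.

```python
# Minimal numerical check, assuming a simplified von Kries-style CAT with an
# incomplete degree of adaptation D (a CAT02/CAT16-style linear blend, NOT
# the exact VK20 formulation). With D < 1, forward and reverse diverge.
import numpy as np

M16 = np.array([  # CAT16 XYZ -> LMS matrix
    [ 0.401288, 0.650173, -0.051461],
    [-0.250268, 1.204414,  0.045854],
    [-0.002079, 0.048952,  0.953127],
])

XYZ_D65 = np.array([0.95047, 1.0, 1.08883])
XYZ_D60 = np.array([0.95265, 1.0, 1.00883])  # ~ACES white point

def cat_matrix(XYZ_src, XYZ_dst, D=1.0):
    """3x3 CAT from source white to destination white, degree of adaptation D."""
    gains = D * (M16 @ XYZ_dst) / (M16 @ XYZ_src) + (1.0 - D)
    return np.linalg.inv(M16) @ np.diag(gains) @ M16

fwd = cat_matrix(XYZ_D65, XYZ_D60, D=0.9)
rev = cat_matrix(XYZ_D60, XYZ_D65, D=0.9)

print(np.allclose(fwd @ rev, np.eye(3)))  # False: not mutual inverses
```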


And we deal with camera data, so data outside the spectral locus must be handled with great care. Everything which is not a 3x3 linear transform (a matrix) is a no-go at that early stage…
So please keep it as simple as possible.

I think this new WGM model is invertible?

Is it expressible as a 3x3 matrix?

Any approach to chromatic adaptation will assume that the input consists of valid CIE XYZ values. The mapping from Camera RGB to CIE XYZ should happen first. Any values outside the spectral locus are an erroneous mapping from Camera RGB to CIE XYZ and are best addressed in that transform (easier said than done). There are certainly upsides (e.g. simplicity, no NaNs, etc.) to using a 3x3 for both the Camera RGB to CIE XYZ mapping and the CAT, but accuracy is the downside.
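
To illustrate the simplicity upside with a toy sketch (both matrices below are made-up placeholders, not real IDT or CAT data): two 3x3s compose into a single 3x3, so out-of-locus, even negative, camera values pass straight through without NaNs.

```python
# Toy illustration: a hypothetical Camera RGB -> CIE XYZ matrix and a
# hypothetical D65 -> D60 CAT matrix compose into one 3x3, so negative
# (out-of-locus) camera values survive the trip with no NaNs or clipping.
import numpy as np

M_camera_to_xyz = np.array([  # placeholder, not a real camera's matrix
    [0.65, 0.28, 0.07],
    [0.27, 0.69, 0.04],
    [0.00, 0.05, 0.95],
])
M_cat = np.array([            # placeholder D65 -> D60 CAT, also just a 3x3
    [ 1.0119, 0.0080, -0.0157],
    [ 0.0031, 1.0069, -0.0088],
    [-0.0002, 0.0009,  0.9249],
])

M_combined = M_cat @ M_camera_to_xyz             # one matrix for the whole step
xyz = M_combined @ np.array([1.2, -0.1, 0.3])    # negative channel passes through
```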

Choosing the best models given engineering considerations is a whole other decision … I think there may be a CIC keynote on the tradeoffs of “accurate color science” and engineering choices made in building practical imaging systems :wink:

Yes, there will be, and it is spot on for this subject.


Yes. The 3x3 is computed using a geometric mean instead of a linear interpolation between LMS points, but otherwise the mathematics is similar.
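
For illustration only (this mirrors the spirit of the geometric-mean idea, not the paper’s exact formulation): the diagonal von Kries gains can be built by linearly blending toward the adapting white, CAT02/CAT16 style, or as a weighted geometric mean of the white-point ratios. Either way, once D and the white points are fixed, the result is still a plain 3x3; and since white-point LMS values are positive, the exponent never sees a negative number, even if per-pixel camera data does go negative.

```python
# Two ways to build the diagonal von Kries gains from the source and
# destination white points in LMS, for a degree of adaptation D. This is a
# sketch of the idea, not the exact WGM model from Shen & Fairchild (2023).
import numpy as np

def gains_linear(lms_src_white, lms_dst_white, D):
    # CAT02/CAT16 style: linear interpolation toward full adaptation
    return D * (lms_dst_white / lms_src_white) + (1.0 - D)

def gains_wgm(lms_src_white, lms_dst_white, D):
    # weighted geometric mean: exponentiate the white-point ratio by D
    return (lms_dst_white / lms_src_white) ** D
```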

I’ve been debating this A LOT recently with my colleagues. It’s rare that I’m the one arguing that less accuracy is good enough.

Anyway, apologies for hijacking this thread @meleshkevich; should we move this technical / implementation talk somewhere else? I’m new to the structure of ACES Central. Hi all!


It’s not off topic at all! I’m far more interested in the unification of the adaptation methods for ACES (one transform for all cases, which is the best option in my opinion) than in just selecting the “right” method for my particular case.
I’ll just rename the thread.

But have we confirmed that inverting the WGM forward transform actually yields the reverse transform?

Because if that is not the case, as for VK20, the fact that the transforms can be expressed as 3x3 matrices is almost irrelevant: we would then have to communicate the adaptation direction that was used, and/or whether an inverse matrix was used for that direction.

Let me put that here so that we are all on the same page [1]:

  1. Fairchild, M. D. (2020). Von Kries 2020: Evolution of degree of chromatic adaptation. Color and Imaging Conference, 28(1), 252–257. doi:10.2352/issn.2169-2629.2020.28.40

I had a nice chat with Mark about this on Thursday. And I think there is a very important reference missing for your figure: it was used later, I think the next year at CIC, to show that while, yes, chromatic adaptation is not reversible, the effect / delta is smaller than intra-observer variation. As Mark’s poster put it: “Yeah, it’s not reversible in reality. But that doesn’t matter.”

And later… the benefits of a reversible model outweigh the cost and complexity.

WGM is a reversible model if the D value is documented. And according to the underlying physiological theory behind WGM, Mark reported to me a personal belief that, with the right reference point (near 15,000 K according to the data) and given enough time to adapt, the D value should be constant for all conditions, estimated at around 0.7.
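
If that holds, the reversibility is easy to demonstrate: with a fixed, documented D, a WGM-style gain is reciprocal when source and destination are swapped, so the two matrices invert exactly. A quick check, reusing M16, the white points, and gains_wgm from the sketches upthread:

```python
# With a documented, fixed D, swapping source and destination whites gives
# reciprocal WGM-style gains, so forward and reverse are exact matrix
# inverses. Reuses M16, XYZ_D65, XYZ_D60 and gains_wgm defined upthread.
import numpy as np

def wgm_matrix(XYZ_src, XYZ_dst, D):
    gains = gains_wgm(M16 @ XYZ_src, M16 @ XYZ_dst, D)
    return np.linalg.inv(M16) @ np.diag(gains) @ M16

fwd = wgm_matrix(XYZ_D65, XYZ_D60, D=0.7)
rev = wgm_matrix(XYZ_D60, XYZ_D65, D=0.7)

print(np.allclose(fwd @ rev, np.eye(3)))  # True: exact round trip
```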

This, and especially the latter paragraph, is more musing than verified results. But the paper is quite good, and it’s clear that WGM is superior to CAT16. Although the total error is only slightly lower, it works significantly better for adaptation conditions whose white points are shifted away from the Planckian locus.

The paper has links to 4 datasets; would you mind collecting these and putting them in colour-data?

doi:10.1371/journal.pone.0290017

edit… it seems I do not have permission to share links, so my apologies for requiring the copy and paste.

We are dealing with camera data from a non-colourimetric observer at this stage of the pipeline. Also, the data is not colour rendered.
I see no point in making anything more complex, nor in giving up on analytical invertibility, at this stage…


I still don’t understand why there should be any adaptation method at this early stage, going from camera (where there are a lot of non-real color values) to an intermediate space that is just a reasonable container for colors a bit wider than Rec.2020. Why not put the chromatic adaptation at the end, into the DRT?
And also, from what I’ve seen, even the colors from color checkers look far too different between cameras to worry about any faithful reproduction at this early stage.

Even if the only delivery in the world were D60 masters, I think it still makes more sense to do the chromatic adaptation with colors that are already inside the spectral locus, not with data from cameras that is just a compromise here and there rather than the real colors of the image.
The ACES 2 release is the best, and probably the only, time in the next maybe 10 years when this significant change would be easily accepted.

With every new gamut it’s always the same question: how (and why at all) to convert the white point to keep it neutral in ACES?

And again, D60 delivery is an incredibly small percentage of use cases, and it will surely decrease over time.

Or I would really like at least to hear why it’s a bad idea.
I would really like ACES to be developed for everyone, not just for the few big studios that mainly use D60 but have the resources to be the only ones who are heard.
Almost all colorists have no idea what chromatic adaptation is at all. Even Resolve tools still don’t have it in some places (the node’s color space) after all these years (no white point conversion at all, actually, so pulling saturation to zero results in a tinted image).
Users need a simpler pipeline so they make fewer mistakes using it, not more places where we can break something or use it wrong.


Sorry if this is a bit off topic, but I think the fact that Resolve/BMD has a tendency to (over)simplify things is actually the reason we easily break the image: when they get it wrong, we barely have any control over it. We don’t necessarily need simpler tools; we need the right tools, properly implemented. And with a healthy dose of curiosity, an artist will learn to use them correctly.


D60 may be a rare delivery format, but D65 and D50 aren’t. And in the user scenario that kicked off this thread, I am imagining that there is some content already ingested / transformed into DaVinci Wide Gamut. Transforming it to a new white point in a mostly appearance-preserving way seems desirable.

But actually I agree with these last two comments, and I think the above use case kind of sucks. Getting from footage directly into the intermediate format should only involve white balancing, or maybe a 2D LUT for companies that want to refine their imaginary colors.

Otherwise, a CAT is only particularly useful in a scenario where some look, or some perceptual quality of a data encoding, should be preserved while transforming the container to account for new viewing conditions, whether dynamically (“night shift”) or for a new delivery format.

But in the case where some footage has already been transformed to a particular color volume, like ARRI WG4, it might be desirable to use a perceptual transform to convert to a D50- (or other-) referred space. This use case assumes that the WG4 D65-referred footage already has some look or perceptual quality that the author(s) intend to preserve into the new intermediate format; they are thus using the WG4-referred data as an intermediate format, cooking in part of their look or part of their intended picture.

In the alternate scenario, where the footage is only being captured to technical parameters for later grading and the WG4-referred footage is used purely as a technical container, it makes sense not to use any CAT: just re-compute the data into the newly referred container, with no white point transformation or attempt at perceptual preservation.

The latter example is, I think, the more technically accurate one, and I believe it is the workflow used by larger-budget productions with more experienced DPs, DITs, and colorists; but it is not necessarily the most common experience at lower experience levels.

TL;DR: if the data file you are viewing has some color choices cooked into it that you want to preserve while transforming to a new container, that’s the point of a CAT. But if the data file is just colorimetric data and does not encode a particular color intent, then no CAT makes sense.


If you have data that encodes colorimetry (be it XYZ, RGB, or another), you need to understand the viewing environment and the state of adaptation of a presumed viewer in order for that colorimetry to have any meaning in terms of perception. A CAT is needed to create corresponding colorimetry for a viewer adapted to a different source than that of the original colorimetry.
