Gamut Mapping Progress Report - DRAFT For Comments

Hi All!

Here is the link to the Dropbox Paper page:

Progress Report Draft

Please check it out, leave any and all comments - we are really looking for consensus on these topics the group has discussed so far, so if you have reservations/disagreements, now is the time to voice them! Anyone with the link can comment, so please feel free to share with anyone you think might have opinions, even if they are outside the normal working group participants.

Please let us know if you have any issues or questions.

7 Likes

Great work! An initial skim read looked great - I’ll go through it carefully over the weekend!

2 Likes

I’ve been looking forward to this document - thanks for your hard work!

2 Likes

Thanks for summarizing the effort (and debate) so far, it’s good to see it all laid out. Sunlight is the best disinfectant, as they say :slight_smile:

What I’d personally really like to see at the end of this document are definitive questions that we will collectively work to resolve moving forward.

I’ll propose the following:

  • Along what dimension are we mapping values?
  • What is the boundary of the zone of trust/safety/maximum-viable-gamut/no-touchy?
  • What is our metric of success?

Embedded in my proposed questions are some assumptions, so I’ll spell out my thoughts on each one, and maybe we can settle on some good questions that we can rally behind.


Along what dimension are we mapping values?

This was the long and short of what my initial investigation was digging into. We seem to have settled on a consensus that we won’t use perceptual models (CAMs) to guide the mapping, which I would agree with, but I haven’t seen a discussion specifically around what we do instead. There seems to be unspoken agreement that we need to map along a dimension of saturation/purity (perceptual, physical, or otherwise) and preserve some concept of hue (perceptual, physical, or otherwise). So I think it would be good to pull this topic to the forefront and work towards an answer.
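To make the question concrete, here’s a toy sketch of the kind of operation I mean. Everything in it - the channel mean as the neutral axis, the Euclidean norm as “purity”, the roll-off curve, the threshold/limit numbers - is a placeholder for exactly the decisions this question is asking about, not a proposal:

```python
import numpy as np

def compress_purity(rgb, threshold=0.8, limit=1.2, eps=1e-10):
    """Toy hue-preserving purity compression (illustrative only).

    Splits each colour into an achromatic part and a 'purity' vector;
    only the vector's magnitude is remapped, so its direction (our
    stand-in for hue) is untouched.
    """
    rgb = np.asarray(rgb, dtype=float)
    achromatic = rgb.mean(axis=-1, keepdims=True)  # one candidate neutral axis
    vector = rgb - achromatic                      # direction = hue stand-in
    magnitude = np.linalg.norm(vector, axis=-1, keepdims=True)
    # Identity below 'threshold', smooth roll-off toward 'limit' above it.
    over = np.maximum(magnitude - threshold, 0.0)
    span = limit - threshold
    compressed = np.where(
        magnitude > threshold,
        threshold + span * over / (span + over),
        magnitude,
    )
    return achromatic + vector * (compressed / np.maximum(magnitude, eps))
```

The point of the sketch is only the structure: one dimension gets remapped, the other is held fixed. Which space, which norm, and which curve are the actual questions.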

What is the boundary of the zone of trust?

This question assumes that we all agree on the “zone of *” concept, which seems to be the case? If so, what defines the boundary? Is this boundary static or dynamic?
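To illustrate the static-versus-dynamic part of the question (my own framing, nothing decided in the report), on whatever purity measure we settle on:

```python
import numpy as np

# A static boundary: a fixed threshold, identical for every image.
# Predictable across shots, but may touch values we wanted trusted.
STATIC_THRESHOLD = 0.8  # hypothetical number

def boundary_static(purity):
    return np.full_like(purity, STATIC_THRESHOLD)

# A dynamic boundary: derived per image, e.g. from a high percentile
# of the purity actually present, so the bulk of the content is
# guaranteed to sit inside the untouched zone.
def boundary_dynamic(purity, percentile=95.0):
    return np.full_like(purity, np.percentile(purity, percentile))
```

A static boundary means two shots of the same scene always map identically; a dynamic one adapts to content but can make matched shots diverge. That trade-off seems worth settling explicitly.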

What is our metric of success?

I’d break this down further into two categories.

First, when we’re testing answers to the above questions, how are we judging good versus bad solutions? I don’t think there is any question that subjective judgements will play a role, but are there objective measurements we could think of?

Second, when do we know the VWG is successful? When we solve very specific real-world problems? If so, which ones? Is there a way to implement test-driven development? Are there specific tests we can perform to know we’re done?
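On the first category, one family of objective measurements we could at least put on the table is colour-difference metrics over a fixed test set. Here’s a minimal sketch using the simplest one, ΔE*ab (CIE 1976); fancier metrics like ΔE2000 exist (e.g. colour.delta_E in colour-science), though they inherit some of the perceptual-model caveats discussed above. The patch values are invented purely for illustration:

```python
import numpy as np

def delta_E_ab(lab_a, lab_b):
    """CIE 1976 colour difference: Euclidean distance in CIE L*a*b*."""
    return np.linalg.norm(np.asarray(lab_a, float) - np.asarray(lab_b, float), axis=-1)

# Toy usage: a reference rendering versus a gamut-mapped candidate,
# three Lab patches each.
reference = np.array([[50.0, 40.0, 20.0], [70.0, -30.0, 10.0], [30.0, 5.0, -45.0]])
candidate = np.array([[50.0, 36.0, 19.0], [69.0, -28.0, 11.0], [31.0, 4.0, -40.0]])

print(delta_E_ab(reference, candidate))         # per-patch differences
print(delta_E_ab(reference, candidate).mean())  # one candidate objective score
```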

3 Likes

Thanks for the reply @SeanCooper - makes a ton of sense. Feel free to add it as a comment on the doc so it’s logged there for posterity too, if you don’t mind! We’re actually thinking of pulling the ‘unknowns’, plus the good points you raise, into a new ‘next steps’ doc, which can be more of a working playground for us, while this progress report stays more of a ‘past decisions’ doc.

Speaking of comments - please, everyone, feel free to comment on the actual doc should you have opinions - we’d really like to get an idea of people’s headspaces before our next meeting in a week and a half, and I’d rather not take silence as agreement :slight_smile:

1 Like

Thanks for the report @carolalynn: it was exhaustive and concise at the same time.
It integrates pretty much everything said during the meetings (at least those I attended).

I also recall @LarsB proposing to base the gamut mapping on an AI-driven, “big-data” comparison using large data sets for which both an original and a gamut-mapped version of each image exist (excluding creative color correction between the two versions, of course).
Although I think the approach is a bit overkill, it makes empirical sense and is worth mentioning in the report, especially if a real scientific consensus is not reached in the end.

My last two cents on reverse-compatibility: as long as gamut mapping sits in the post-AP0 stage (e.g. in the AP0-to-AP1 conversion), it’s totally acceptable to agree on a color mapping that potentially changes every now and then (for example, at every major ACES release), because:

  • full versioning metadata are already specified in any AMF-driven operation;
  • as long as pictures are short- and long-term archived in ACES2065-1 only.
2 Likes

I would like to remark on the point about the boundary of the zone of trust in @SeanCooper’s post. In my opinion, defining that boundary should be one of the main goals once a method is found, since the parameterization of the algorithm depends purely on it. As a colorist, it is important to stick to the creative intent when starting a grade; I had issues with this, for example, when using the neon fix (I had to use other methods).

Looking at enough plots, it becomes quite evident that the camera virtual primaries only hold up somewhere generally around the Color Checker 24 range, and skew wildly toward non-data from that generalized region outward as the camera’s spectral locus trends toward the virtual primaries. Worse, that non-data has dramatic “hue” shifts, which would likely end up present in a gamut mapping approach that ignores it.

1 Like

@Troy_James_Sobotka:

[Plot] ColorChecker Classic

[Plot] AMPAS 190 Training Data

Cheers,

Thomas

2 Likes

Thanks for that!

Of note, having seen it now:

  1. There is very little room to conduct gamut annealing with the 190.
  2. Echoing @matthias.scharfenber’s thought that the 24 is about as close to a de facto reference standard as exists on set.
  3. The distance between data and non-data is questionable with the 190; the virtual camera primaries quickly cleave to non-data away from the fit target. Is weighting for them prudent at the expense of the more legitimately fit values?
1 Like

The transition could very well start somewhere between the CC24 and the 190; it has to be a transition anyway.
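To sketch what I mean by “a transition” mechanically (the radii and the curve are entirely hypothetical; where the inner and outer loci actually sit is the open question):

```python
import numpy as np

def smoothstep(x):
    """Cubic ease: 0 below 0, 1 above 1, smooth in between."""
    x = np.clip(x, 0.0, 1.0)
    return x * x * (3.0 - 2.0 * x)

def transition_weight(purity, inner=0.5, outer=1.0):
    """0 inside the trusted inner region (CC24-ish), 1 at the outer hull.

    'inner' and 'outer' are hypothetical radii on whatever purity
    measure is chosen; a result could then be blended as
    (1 - w) * original + w * mapped, so nothing switches on abruptly.
    """
    return smoothstep((purity - inner) / (outer - inner))
```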

I’m thinking it makes sense to perhaps have an ACES Monster Manual of the various effects that are being juggled here? Not all of the psychophysical nonlinearities are high-value targets, though some may be. I’m thinking of hue linearity for skies, but also the lesser-understood “blues turn purple” shift at increased luminance, for example?

1 Like

It would certainly make a lot of sense to categorise the various defects, it might indicate different remedies.

I get the sense that not everyone is entirely aware of the nuances here, so perhaps it would be a good asset to have around. Not sure how to break it down necessarily. I have countless examples of this one, for example, which is often confused with the gamut-hull clip of blue to purple, yet is entirely different.

2 Likes

@carolalynn, @matthias.scharfenber & @nick: it is probably worth restating the deliverables at the end of the report (and whether they still hold true):

3 Likes

Agreed, thanks @Thomas_Mansencal!

1 Like