Notice of Meeting - ACES Gamut Mapping VWG - Meeting #15 - 6/4/2020

ACES Gamut Mapping VWG Meeting #15

Thursday, June 4, 2020
5:00pm - 6:00pm Pacific Time (12:00am UTC, Fri)

Please join us for the next meeting of this virtual working group (VWG). Future meeting dates for this month include:

  • 6/11/20 9:30am
  • 6/18/20 5pm
  • 6/25/20 9:30am

Dropbox Paper link for this group:

We will be using the same GoToMeeting url and phone numbers as in previous groups.
You may join via computer/smartphone (preferred), which will allow you to see any presentations or documents that are shared, or you can join using a telephone, which will be an audio-only experience.

Please note that meetings are recorded, transcribed, and open to the public. By participating you are agreeing to the ACESCentral Virtual Working Group Participation Guidelines.

Audio + Video
Please join my meeting from your computer, tablet or smartphone.
[https://global.gotomeeting.com/join/241798885]

First GoToMeeting? Let’s do a quick system check: [https://link.gotomeeting.com/system-check]

Audio Only
You can also dial in using your phone.
Dial the closest number to your location and then follow the prompts to enter the access code.
United States: +1 (669) 224-3319
Access Code: 241-798-885

More phone numbers
Australia: +61 2 8355 1038
Austria: +43 7 2081 5337
Belgium: +32 28 93 7002
Canada: +1 (647) 497-9379
Denmark: +45 32 72 03 69
Finland: +358 923 17 0556
France: +33 170 950 590
Germany: +49 692 5736 7300
Ireland: +353 15 360 756
Italy: +39 0 230 57 81 80
Netherlands: +31 207 941 375
New Zealand: +64 9 913 2226
Norway: +47 21 93 37 37
Spain: +34 932 75 1230
Sweden: +46 853 527 818
Switzerland: +41 225 4599 60
United Kingdom: +44 330 221 0097

I uploaded to the group an ACES OpenEXR version of the ALEXA frame that introduced so many to the purple highlight fringe issue. Thomas has already updated the gamut mapping ramblings notebook with a cropped region of this (though, to be absolutely sure I used the same parameters in the raw conversion of this frame as I did for the rendering I sent him a crop of, it would be best to re-crop that region from the full-resolution image I just put up).

Initially I thought it wasn’t repaired by the current candidate. And in fact there’s not much in the way of improvement if the compression method is atan or tanh. Simple does help. I will see if I can put up a full-resolution display-referred ALEXA rendering for BT.1886-compliant viewing.

The image rendered through the ALEXA pipeline is now also uploaded. The two images are:

  • orig_arri_ludwigstraße_take0000_lens_wide_open_r709_d65_r1886.tif
  • arri_ludwigstraße_take0000_lens_wide_open.exr

Scott, that ‘orig_’ in what I just uploaded was a mistake; could I ask you to remove it? Thanks.

As discussed in the meeting today, some figures with the ColorChecker SG and the notebook to generate them (assuming you have the data):

ColorChecker SG under D60 and viewed with sRGB

ColorChecker SG Crop under D60 and viewed with sRGB

BabelColor Average (ColorChecker Classic) under D60 and viewed with sRGB

Note how the ColorChecker SG seems more saturated; I never thought to check, but it is interesting.

ColorChecker SG under D60

BabelColor Average (ColorChecker Classic)

I will double-check the computations with fresher eyes later.

Cheers,

Thomas

PS: My assumption that the excess saturation is due to the semi-gloss coating seems to be confirmed by this article: http://www.northlight-images.co.uk/x-rite-colorchecker-sg-review/

The difference in the embedded CC24 due to the chart being semi-gloss (SG) is also discussed by BabelColor.

Hi,

Sharing this as another follow-up to the meeting, and as a topic for discussion and clarification:

The context here is an implementation where the LMTs are immutable, i.e. an OCIO pipeline with no controls. Don’t ask why three LMTs like that; I just know that it is possible :wink:

The obvious problem with Pipeline 1 is that, with no opt-out possible from Gamut Mapping (i.e. the small green boxes), the region above the threshold is compressed three times.
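To make that concrete, here is a minimal Python sketch. It assumes a simplified Reinhard-style roll-off, not the actual candidate operator or its parameterisation, and shows that a threshold-based compression is not idempotent: running it at every AP0 to AP1 conversion squeezes the already-compressed region a little further each time.

# Minimal sketch: a threshold-based compression is not idempotent, so applying
# it at every AP0 -> AP1 conversion compresses the region above the threshold
# again and again. The curve is a simplified Reinhard-style roll-off chosen
# for illustration; it is not the candidate operator or its parameterisation.

def compress(d, threshold=0.8, limit=1.2):
    """Compress a normalised distance above `threshold` so that it
    asymptotically approaches `limit`."""
    if d <= threshold:
        return d
    s = limit - threshold
    return threshold + (d - threshold) / (1.0 + (d - threshold) / s)

d = 1.2  # an out-of-gamut distance value
print(compress(d))                      # ~1.0
print(compress(compress(compress(d))))  # ~0.9, noticeably more compressed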

Important to keep in mind is that the ACES transformations are immutable, i.e. without user parameters, for obvious implementation-simplicity reasons. Metadata tracking is hell; we all know that very well. We thus cannot really have a default that states that every AP0 to AP1 transformation is Gamut Mapped with an opt-out, unless we double the shipped LMT count. We could certainly do that, but it would be a rather inelegant architecture that the TAC would not be happy with! :wink:

Pipeline 2 is what I think we should be looking for, i.e. a dedicated, optional Gamut Mapping LMT that can be inserted at any point in the LMT stack. The user is then responsible for, and in control of, what is happening. The implementation would be super simple, and it would be relatively easy to change the operator without having to version up all the LMTs in a backward-incompatible way because we have tweaked it.

This Virtual Working Group is sandwiched in between the IDT and RRT/ODT portions of the ACES pipeline. It is dependent on the future findings of the Virtual Working Groups that will be covering them soon, and nothing says that there will not be revisions to the Gamut Mapping operator.

Cheers,

Thomas

PS: I had an unrelated feeling that we should check what is happening when we compress the values with respect to DCI-P3; it is entirely possible that the operator will prevent reaching the maximum P3 values.

Pipeline 1 as shown would certainly be undesirable. I would imagine that if the decision is that the gamut map should be a hard-coded part of every AP0 to AP1 conversion, it is implicit that its inverse should be a hard-coded part of every AP1 to AP0 conversion, since these may frequently happen multiple times in many pipelines. Then a (quite possibly different) gamut map would need to be applied as the first step of the RRT, or SSTS Output Transform.
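To illustrate why the inverse matters, here is a sketch of a forward/inverse pair, again assuming a simplified Reinhard-style roll-off rather than the group's actual candidate. Because the forward curve has a closed-form inverse, an AP0 to AP1 then AP1 to AP0 round trip can restore the original values instead of accumulating compression.

# Sketch of an invertible compression pair, assuming a simplified
# Reinhard-style roll-off (not the group's exact candidate). The forward pass
# would sit in the AP0 -> AP1 direction and the inverse in AP1 -> AP0, so
# repeated conversions round-trip instead of compressing further each time.

def compress(d, threshold=0.8, limit=1.2):
    if d <= threshold:
        return d
    s = limit - threshold
    return threshold + (d - threshold) / (1.0 + (d - threshold) / s)

def uncompress(c, threshold=0.8, limit=1.2):
    if c <= threshold:
        return c
    s = limit - threshold
    return threshold + (c - threshold) / (1.0 - (c - threshold) / s)

d = 1.1
assert abs(uncompress(compress(d)) - d) < 1e-12  # round-trips to the original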

Thanks for the additional test image @joseph! This one is an interesting example to look at, I think. It may be that preserving as much saturation as possible is not aesthetically desirable in all circumstances.

Here are a few example crops from the image @joseph uploaded.


The image as rendered by the Arri color pipeline.


The image as rendered by the ACES Rec.709 Output Transform.


The previous image with gamut compression applied: tanh thr 0.2 lim 0.2. My personal subjective aesthetic opinion of this image is that the purple highlights are so saturated that they still look like artifacts.


The previous image, but with threshold increased to 0.5. Effectively this reduces the saturation of the purple highlights and brings them more in line with the Arri picture rendering. The threshold has to be increased to this degree because the initial slope of the tanh curve is very steep.


And just for comparison and to play devil’s advocate, here is the same image but with Reinhard gamut compression, with a threshold of 0.3. It looks quite similar to the tanh with a higher threshold.

In fact you can get very similar results out of all of the different compression curves if you adjust the threshold and limits.
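For anyone who wants to compare the curve shapes numerically, here is a small numpy sketch. It assumes a parameterisation where each curve leaves the threshold with slope 1 and asymptotes to the limit, which is an illustration of the general behaviour rather than the exact maths of the script used for the crops above.

# Sketch comparing compression curve shapes, assuming a parameterisation in
# which each curve passes through the threshold with slope 1 and asymptotes to
# the limit. Illustration only; not necessarily the exact maths of the script
# used for the crops above.
import numpy as np

def compress(d, threshold, limit, method="tanh"):
    s = limit - threshold
    x = np.maximum(d - threshold, 0.0) / s
    if method == "simple":   # Reinhard-style
        c = x / (1.0 + x)
    elif method == "atan":
        c = (2.0 / np.pi) * np.arctan(np.pi / 2.0 * x)
    else:                    # tanh
        c = np.tanh(x)
    return np.where(d <= threshold, d, threshold + s * c)

d = np.linspace(0.0, 2.0, 9)
for method in ("simple", "atan", "tanh"):
    print(method, np.round(compress(d, 0.5, 1.0, method), 3))

# Near the threshold the tanh curve stays closest to the identity, while far
# beyond it tanh packs values up against the limit and the simple (Reinhard)
# curve pulls them further inside the gamut, which is consistent with the
# observations above that each curve needs its own threshold/limit settings
# to produce a similar look.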

I think this example is a good case for the importance of judging the aesthetic performance of the algorithm, at least in the context of a pre-output-transform gamut mapping.

Edit: Here is the nuke script I used to generate these images if anyone wants to play around with it.

Impossible to judge without the pre-ODT mapping. It’s gamut volume escape all over.

Here are the meeting notes, sorry for the delay. Don’t let me derail the convo - I’ll reply to some things separately with my pipeline thoughts.

  • BlinkScript and DCTL will be the versions we keep in the super-repo and get parity on for testing, and then create the CTL towards the end as a deliverable
  • @Thomas_Mansencal will connect with @Alexander_Forsythe to move the super-repo into the AMPAS git repository instead of Colour
  • @nick demoed a dark noisy image to show the algorithm’s effectiveness on negative numbers - both on the version that maps from infinity and @jedsmith’s latest that intersects one.
    • Group’s consensus is that dealing with all-negative pixel values resulting from the black level being set at the average noise floor value is out of scope for this group and belongs to the IDT realm.
  • Thomas has the spectral values (official) for the ColorChecker SG chart
  • Are there different values used in television?
    • SMPTE color bars?
    • DSC Chroma du Monde
  • @matthias.scharfenber : the ColorChecker 24 makes the most sense; it’s what gets put in front of cameras on set and is also used in IDT generation
  • Matthias is experimenting with per-channel thresholds. Thomas pointed out that in this case the per-channel limits would need to compensate for any differences between the thresholds.
  • Discussion around where this algorithm lies in the pipeline, and whether or not it should be required
    • Thomas proposed right before the RRT
    • @carolalynn proposes as a part of the AP0 to AP1 transform
    • Mapping should be opt out, but have old transforms available for possible edge cases and archival for posterity
  • How should we be evaluating our algorithm in a display referred context?
    • RRT/ODT, but also other (simpler?) display transforms.
    • Thomas pointed out limitations of 3D lut based transforms
    • @joseph pointed out Dolby’s 2D transform experiments - do we have more info on these?
    • Qualitative or quantitative evaluation, or a balance of both?
    • We should be testing on sequences too
      • Fabian and Martin’s submissions, 2-3 seconds’ worth each as AP0 EXRs

100% agree. This is why the invertibility was such a strong requirement in my book.

I don’t know that we’ll ever get the adoption/results we’re after if it’s an optional LMT. Also, as the algorithm should not adversely affect colors inside the ‘zone of trust’, there is no reason not to apply it wholesale, or to opt out.

@Thomas_Mansencal - your pipeline 1 is basically what I was envisioning, with the addition that the inverse mapping would also be happening - so, still, technically only applying it once overall.

All that said, I see no problem having a DCTL tool made available (not really needed in Baselight as they basically already have this tool), as well as detailed write-ups and tutorials for what this algorithm is doing and when, to be used when and if these parameters need tweaking - mostly as part of the grading process.

I’m not sure I have a complete grasp on your terminology, but I’ll take a stab at it to further the discussion.
I do think it’s important to discuss the output transform / display rendering transform if we are talking about gamut compression in the context of a pre-output-transform gamut mapping. (As opposed to other gamut mapping contexts, like VFX working gamut transformations or DI session colorist adjustments, where this might be less relevant.)

First question: What is impossible to judge? What are we judging?

By “pre-ODT mapping” I’m guessing you might be referring to some type of color transformation which currently does not exist in the ACES output transform just after

ACESlib.OutputTransforms.ctl : 144
// CIE XYZ to display encoding primaries

which would handle color values which are out of the display gamut volume? If you want to explain this a bit more I would welcome the extra information! :slight_smile:

If my interpretation is close to the mark, I might point out that the endeavor we are working on is this transformation, and what we are trying to judge is what the ACES Output Transform might look like with such a transform. Correct me if I’m wrong.

Anyway, I did a little exploring inside of the ACES output transform. If you disable all of the clamps in the output transform, the only version of the images I posted above that has out-of-gamut values after the CIE XYZ to display encoding primaries transformation is arri_ludwigstrabe_purple_highlight_fringe_aces_rec709.jpeg

All others have all values within the BT.709 gamut at that point in the Output Transform.
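For reference, here is a rough numpy sketch of that kind of check. It uses the standard D65 XYZ to linear BT.709 matrix rather than the actual CTL, and the sample values are made up for illustration.

# Rough sketch of the check described above: convert CIE XYZ to linear BT.709
# RGB with the standard (D65) matrix and flag pixels that land outside the
# display gamut. This approximates the idea; it is not the CTL from
# ACESlib.OutputTransforms.ctl.
import numpy as np

XYZ_TO_BT709 = np.array([
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])

def out_of_gamut_mask(xyz, tolerance=1e-4):
    """Boolean mask of samples whose linear BT.709 values fall outside [0, 1]."""
    rgb = xyz @ XYZ_TO_BT709.T
    return np.any((rgb < -tolerance) | (rgb > 1.0 + tolerance), axis=-1)

# Hypothetical samples; in practice the XYZ values would be taken just after
# the "CIE XYZ to display encoding primaries" step, with the clamps disabled.
xyz = np.array([
    [0.4124, 0.2126, 0.0193],  # linear sRGB red -> inside the gamut
    [0.2500, 0.1000, 0.4000],  # a saturated purple -> negative green, outside
])
print(out_of_gamut_mask(xyz))  # [False  True]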

For a point of comparison using another display transform, here is the first image with out of gamut values, rendered instead through Filmlight’s TCAM display rendering transform, with their Classic scenelook.

The display gamut volume, yes. The current transform is only focusing on the “area”, not volume.

You are on the money, with the exception that the current transform is not addressing this in any manner, and only covers the Cartesian XY plane, not the Cartesian Y dimension that is implicit in the display output.

Further, no aesthetic transfer function (e.g. a random “filmic” or “film-like” curve) fixes this. The transfer function, while being a component of a gamut map, is not itself a gamut map. That is, assuming we keep to the idea that we are capable of isolating intensity and chrominance intents.

As a result, the values we are looking at are skewed by default. You can see this in the results posted over in the other thread, or using Thomas’ Jupyter tool. Values are skewing to magenta because they are escaping, or being quantised to bad ratios, at the display. Adjusting the exposure up or down will reveal that the gamut volume issues shift when the values become representable within the display’s output volume. Noise rainbows reduce when the exposure is increased, and cyan, magenta, and yellow problems cease when the exposure is adjusted in the other direction.

The quantisation problem is slippery, but it can at least be visualized via something like simple Reinhard, where the curve asymptotes to 1.0 but never touches it. Those values are being quantised at the display, and the intended ratios skew based on the somewhat irregular surface of quantisation. This is evident to folks who have used Reinhard in a synthetic rendering system such as video games, where the sky, car paints, etc. all skew to cyan, yellow, or magenta as the ratios skew.
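As a toy illustration of that ratio skew (not a model of the ACES transforms, and with made-up numbers), a naive per-channel Reinhard curve applied to a bright, saturated value behaves like this:

# Toy illustration of per-channel ratio skew: a naive per-channel Reinhard
# curve applied to a bright, saturated value. As the larger channels approach
# the asymptote, the channel ratios (and so the apparent hue and saturation)
# drift away from the scene intent. Made-up numbers, not a model of ACES.
import numpy as np

def reinhard(x):
    return x / (1.0 + x)  # asymptotes to 1.0, never reaches it

rgb = np.array([20.0, 5.0, 1.0])           # a bright, saturated orange-ish value
mapped = reinhard(rgb)

print(np.round(rgb / rgb.max(), 3))        # original ratios: ~[1.0, 0.25, 0.05]
print(np.round(mapped / mapped.max(), 3))  # mapped ratios:   ~[1.0, 0.875, 0.525]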

Hence it is somewhat problematic to evaluate footage without a gamut-volume-to-display map in place.

This still has the problem of coupling all the AP0 <—> AP1 transformations with a second transform which, if we have to change it (and we probably will), will require versioning up everything that uses AP0 <—> AP1, i.e. bubble-up versioning.

From an architecture standpoint, atomicity is important, if not critical, as it is what allows replacing chunks of the pipeline without shattering the house. We would break that by weaving a web that binds together a lot of components that are not connected in the first place. The consequences would potentially be dramatic too: a defect in the gamut mapping operator would effectively affect all the transformations that internally use an AP0 <—> AP1 conversion!

Even if the gamut mapping operator were defect-free, which hopefully it will be, providing support for the gamut mapping toggle would require doubling the LMTs that use AP0 <—> AP1, doubling all the RRT/ODT combos, etc…

Granted, the IDT & ODT VWGs have not even started, but I would certainly favour something we can add/remove/tweak easily without profound side effects.

Hope that clarifies my thoughts! :slight_smile:

Thanks Thomas, yeah, it does clarify, and your points are super valid.

Honestly I guess at the end of the day though, I personally favor approaches that are easiest for the end users, even if that’s more work/a bit cumbersome on the developer side. The developers and implementors have the toolsets to deal with it (though we should do everything we can to make it as smooth as possible, obviously) whereas the end user just has what we give them. Finding a balance there is going to be tough for sure - your point about possibly breaking every AP0 <–> AP1 conversion is well heard! Yikes.

Question 1: Do we want to get rid of the sub-modules and pull the BlinkScript and DCTL code in directly?
Question 2: Not everyone has Nuke or Resolve, so while I don’t really care about removing the notebook, doing so would exclude the users relying on it, who may be part of the research community and have an interest in our work.

Proposal: We should gather as much as possible of the code produced in that repository, along with the final model, and stamp the whole with a Zenodo DOI that can be referenced in the future. We should rename the current repositories directory to research and include everything pertinent, alongside an implementation or model directory where the CTL, BlinkScript, DCTL and name_your_favorite_software implementations reside. @Justin_Johnson has, for example, updated his Nuke script yesterday, and I (strongly this time :slight_smile: ) think that we should be respectful of the time the members of this group have spent testing models, even if they are not the ones chosen. From a research standpoint, experiments are sometimes even more important than the final result. I will paste the second bullet point of the RAE paper which was a catalyst in the formation of these Virtual Working Groups:

Cheers,

Thomas

True, but the implementation works in free Resolve and Nuke non-commercial, so there’s nothing stopping people interested from downloading. I’ve truly got nothing but praise for your notebook solution, and can think of a million uses for it (in end documentation, for example). If it’s something you’d like to keep in parity, I’d love it - we were just thinking of the minimum requirements to keep the group moving, and reduce confusion around what version has what, etc.

Personally, I think we should pick a point where we’re happy and the two have parity, and merge them into the actual repository’s master. I like your idea of having research and implementation directories - we should do that. Which leads to the last quote…

I agree! I think moving them into a research directory and linking everything related as you propose is a great solution. It keeps it clean and separated from the implementation (which was our first goal) while indeed honoring in posterity the research done.

I don’t think I will have many free cycles to keep the notebook up to date all the time, although it will be easy to do at the end if required. As said, I don’t mind if it is removed, and at this stage it is more a fancy research toy than anything production grade :slight_smile:

Given we are thinking alike, I will proceed tonight and move everything into the aforementioned research directory! It will be easier to then add other stuff while avoiding confusion!
