Notice of Meeting - ACES Gamut Mapping VWG - Meeting #6 - 3/19/2020

ACES Gamut Mapping VWG Meeting #6

Thursday, March 19, 2020
9:30am - 10:30am PT (Los Angeles time / UTC-7)

Please join us for the next meeting of this virtual working group (VWG). Future meeting dates include:

  • 3/26/2020: 5pm

Dropbox Paper link for this group:

We will be using the same GoToMeeting url and phone numbers as in previous groups.
You may join via computer/smartphone (preferred), which will allow you to see any presentations or documents that are shared, or you can join using a telephone for an audio-only experience.

Please note that meetings are recorded, transcribed, and open to the public. By participating you are agreeing to the ACESCentral Virtual Working Group Participation Guidelines.

Audio + Video
Please join my meeting from your computer, tablet or smartphone.

First GoToMeeting? Let’s do a quick system check:

Audio Only
You can also dial in using your phone.
Dial the closest number to your location and then follow the prompts to enter the access code.
United States: +1 (669) 224-3319
Access Code: 241-798-885

More phone numbers
Australia: +61 2 8355 1038
Austria: +43 7 2081 5337
Belgium: +32 28 93 7002
Canada: +1 (647) 497-9379
Denmark: +45 32 72 03 69
Finland: +358 923 17 0556
France: +33 170 950 590
Germany: +49 692 5736 7300
Ireland: +353 15 360 756
Italy: +39 0 230 57 81 80
Netherlands: +31 207 941 375
New Zealand: +64 9 913 2226
Norway: +47 21 93 37 37
Spain: +34 932 75 1230
Sweden: +46 853 527 818
Switzerland: +41 225 4599 60
United Kingdom: +44 330 221 0097

Hope everyone is staying safe out there!



sorry, thought the seminar was later :-/, next time then


Some highlights from today’s call (we missed you @daniele! Daylight savings is the worst…)

  • Called out @Thomas_Mansencal’s work on generating spectrally rendered test imagery, more relevant now than ever: Spectral Images Generation and Processing
  • @matthias.scharfenber outlined his simple algorithm (with the caveat that it should be a conversation starter rather than an actual proposal, because it is overly simplistic and has many issues). He will share a description in a separate ACES Central post.
  • @joachim.zell pointed out that AP0 is a relevant and necessary archival container whose data should be preserved, and that we should use the best tools available to solve this problem right now, knowing that the solution may not be 100% perfect and may become obsolete once a better workflow comes along.
  • @hbrendel and @joseph pointed out the necessity of knowing your source gamut (beyond just the AP0 container) as well as your target gamut; otherwise you end up with a very general mapping that is good for some cases but much too extreme in others.
  • Put out a call for examples of gamut mapping ideas, failures, etc., to show the group in a visual way.
  • Noted that @SeanCooper has a Jupyter notebook with some work to share soon - be on the lookout for that.
  • If we were to start taking meeting ‘notes’ to distribute after each meeting, would that be more or less useful than the transcript/recording? (This accumulation of highlights is roughly what we have in mind, to keep conversations going.)
  • Acknowledged that times are weird right now - if there is anything we can do to make participation more productive or helpful, please let us know; the door for feedback is always open!
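For anyone catching up, here is a deliberately naive sketch of the general *kind* of soft gamut compression being discussed - purely illustrative, and not the algorithm Matthias presented. The threshold value and the asymptotic roll-off function are arbitrary choices made up for this example; a real proposal would need far more care about the working space, hue preservation, and invertibility.

```python
# Hypothetical "conversation starter" sketch of a soft gamut compression.
# NOT the algorithm discussed in the meeting; illustrative only.

def compress_towards_achromatic(rgb, threshold=0.8):
    """Soft-compress a linear RGB triplet towards its achromatic axis.

    Each component is expressed as a normalised 'distance' from the
    achromatic value (here, the max of the triplet). Distances within
    `threshold` are left untouched; larger distances (including those
    from negative, out-of-gamut components) are rolled off smoothly so
    they land back inside [0, achromatic].
    """
    ach = max(rgb)
    if ach <= 0.0:
        return list(rgb)  # nothing sensible to do for non-positive pixels
    out = []
    for c in rgb:
        d = (ach - c) / ach  # 0 on the axis, > 1 when c < 0
        if d <= threshold:
            out.append(c)  # inside the protected zone: untouched
        else:
            # asymptotic roll-off mapping [threshold, inf) -> [threshold, 1)
            x = d - threshold
            dc = threshold + (1.0 - threshold) * x / (x + (1.0 - threshold))
            out.append(ach * (1.0 - dc))
    return out

# An out-of-gamut pixel (negative green) is pulled back inside the gamut,
# while components within the protected distance are left alone.
pixel = compress_towards_achromatic([1.0, -0.2, 0.5])
```

Note how this toy version already exhibits the issue @hbrendel and @joseph raised: it knows nothing about the actual source or target gamut, so the same roll-off is applied regardless of how far out of gamut the source material can plausibly be.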

Sorry, I could not make it; with the time change it is 5:30 am, and we are starting to brace for COVID here in NZ.

I have started to generalize the CornellBox rendering so that it can be done easily. I will send more details when the notebook is in a good state, but this is the one:

It does everything from A to B: pulls down the sensitivities we have, compiles Mitsuba with them, renders, and outputs the images. It is quite a slow operation because the Google Colab VMs are underpowered, but a) it should work, b) it is reproducible, and c) it can run on beefier hardware on GCP.

Glad to hear :slight_smile: Agnosticism will cripple the capabilities of the algorithm or make some desiderata/requirements impossible to fulfill.

That would be very useful for people to catch up and get up to speed quickly. This very message, @carolalynn, is great in that respect!



I’ve posted said notebook here:

This was very helpful. Thank you!

This is looking really cool, Thomas! Thanks!

This was continuing to irk me a bit as well, though I’m sure for different reasons.

As a general note, I was getting a bit concerned that we were muddying the waters in our calls. Namely, we set the general guideline to focus on a generic, input/output-agnostic A-gamut to B-gamut conversion, which is a fair delineation to draw, but the conversation that immediately follows is about getting example media that exhibits “the problem”. This is probably just a failure to understand the mixed conversation of the phone calls, but to me it seemed like we were talking about two different things.

I just want to be clear that if the goal for now is a generic anything-to-anything gamut conversion, we really shouldn’t be looking at media to judge it. We can/should only be judging its performance on technical merits - that is, against whatever technical requirements we set for the algorithm, and not in any aesthetic or case-study-specific way.

What I’m worried about is saying that we’ve created a “generic anything-to-anything” gamut conversion when in reality all we’ve done is solve a very specific problem circa 2020. In some ways it reminds me of the RRT’s design and its “sweeteners”, where it was proposed as a generic rendering transform but was in reality designed to solve very specific issues of that time. Granted, I wasn’t involved when that happened, but that’s roughly how I understand the situation.

Again, I’m not saying that we’re doing that, but I do want to make sure we don’t do it by accident. If we do want to prioritise solving the very specific circa-2020 issues that make up “the problem”, that’s totally fine too; I just don’t want us to sell it as a “generic” solution.

Rant over. Thanks!