Notice of Meeting - Output Transforms Architecture - Meeting #1 - 12/2/2020

Hello everybody. Today we are excited to announce the official kick-off meeting for the Output Transforms VWG. (finally! :partying_face:)

This virtual working group will be co-chaired by Alex Fry @alexfry and Kevin Wheatley @KevinJW, with Nick Shaw @nick and myself, Scott Dyer @sdyer assisting.

We invite you to join our kick-off meeting, which will be held three weeks from today: Wednesday, December 2nd, 1 pm PST.

(Note: Here is a calendar item for this event only, or you can subscribe to the Output Transforms calendar. An Outlook calendar invite will be sent in the coming days to participants of other VWGs for which we already have emails.)

This group's goals are to:

  • Establish requirements for a new set of ACES Output Transforms
  • Design new Output Transforms that fulfill those requirements
  • Document all of the above

In the 3 weeks prior to this meeting, we are asking the community (you!) to please do the following:

  1. Review the Output Transform Working Group Proposal
  2. Reply to this thread with your wish-list items, gripes, or any points that you most want addressed by the new system. Here are some example prompts:
  • What are your top requirements for a V2 rendering transform?
  • What are your top issues or complaints about the current transforms?
  • What’s on your wishlist for the new transforms?

We have of course heard many of these points before, but it will be helpful to hear them again, so that they are fresh in everyone’s mind and easier to compile. Your input will be used to construct the agenda to focus and guide our kick-off discussion(s), so that we can make them as productive as possible.

We are not yet presupposing any solution or particular architecture. The first phase of meetings will be to establish firm requirements that fix current issues and ensure ACES 2.0 is robustly constructed to provide for new or previously unanticipated use cases. Once we’ve established the requirements, anything that best fulfills those requirements is on the table.

Everyone will, of course, have an opportunity to discuss the nuances of their points in the VWG meetings.

We are looking forward to hearing your responses here and then seeing many of you on December 2nd.

Note: If you cannot attend this meeting, worry not! We recognize that not everyone has the time or the desire to sit and watch an entire hour-long meeting. Therefore, in addition to our normal meeting recordings with transcriptions, we will also post detailed notes of the meeting discussion.

3 Likes

Hello everyone, it is quite intimidating to reply to this thread. :wink: I’ll probably be writing only obvious stuff. I am a lighting lead at Illumination and a teacher at ENSI (Avignon). We are using ACES at both sites on full-CG renders.

One thing that I would like to mention is that we do not have a color scientist at the studio or at the school. So we are pretty much satisfied with the way ACES works simply, out of the box, with a pleasing look. We do not modify or tweak anything within the ACES OCIO config; our images rely basically on a full ACES workflow from IDT to ODT.
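For anyone curious what that out-of-the-box pipeline looks like from code, here is a minimal sketch using the PyOpenColorIO v2 bindings. The config path and the colour space/display/view names are assumptions that depend on the exact ACES OCIO config in use:

```python
# Minimal sketch of the out-of-the-box ACES viewing pipeline via OCIO v2.
# The config path and the colour space/display/view names below are
# assumptions; check your own ACES OCIO config for the exact strings.
import PyOpenColorIO as ocio

config = ocio.Config.CreateFromFile("config.ocio")  # hypothetical path

# Scene-referred ACEScg pixels -> display-referred sRGB through the ACES
# Output Transform (RRT + ODT), i.e. what the image viewer applies.
transform = ocio.DisplayViewTransform(
    src="ACES - ACEScg",          # assumed input colour space name
    display="sRGB",               # assumed display name
    view="ACES 1.0 - SDR Video",  # assumed view name
)
cpu = config.getProcessor(transform).getDefaultCPUProcessor()

print(cpu.applyRGB([0.18, 0.18, 0.18]))  # mid-gray through the full OT
```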

My top requirements would be to keep it simple as stated in the Group Proposal. ACES has been great for us since it was pretty much plug and play and I think many non-technical image makers would say the same.

My top issue is that the current ODTs clip, which gives us artifacts when lighting with saturated colors such as ACEScg or Rec.2020 primaries, for example. We end up with a lot of posterization in our renders, and the only temporary fix we have found is the gamut mapping (GM) algorithm, which helps as a side effect. Hue skews are also an issue and are noticeable on the path to white (see examples below).

My wishlist would be to improve the hue paths. As you may know, I posted a few frames on acescentral this summer showing hue skews and posterization, and I would love to see these artifacts disappear. :wink: I’ll repost some of those renders in this thread for clarity. If I understood correctly, these issues are the result of the per-channel lookup in the current Output Transform.
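To make that per-channel mechanism concrete, here is a tiny Python sketch. It uses a simple Reinhard-style curve as a stand-in for the actual ACES tonescale (an assumption, purely for illustration): applied per channel, the channels of an overexposed saturated red compress at different rates, so the R:G:B ratios, and hence the hue, drift.

```python
# Illustration of per-channel tone mapping skewing hue. The curve below is
# a Reinhard-style stand-in, NOT the actual ACES tonescale; it only shows
# the mechanism: each channel compresses at a different rate, so the
# R:G:B ratios (the chromaticity) change as exposure increases.

def tonescale(x: float) -> float:
    """Simple compressive curve (illustrative stand-in)."""
    return x / (x + 1.0)

# An overexposed, saturated scene-referred red.
red = [8.0, 0.4, 0.2]

per_channel = [tonescale(c) for c in red]

print(red[1] / red[0])                  # scene G/R ratio: 0.05
print(per_channel[1] / per_channel[0])  # display G/R ratio: ~0.32
# Green has risen relative to red: the overexposed red drifts toward orange.
```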

I am available if you need more information, renders or plots to explore any solution (such as gamut mapping or max(RGB), for instance). This is a pretty naive post from a lighting artist, I reckon, since I have never checked the ACES ODTs on an HDR monitor, for example. :wink:

In this first render, when overexposed, a sRGB red primary goes orange and a sRGB blue primary goes purple/violet (hue skew/shift).

In this second render, when overexposed, a Rec.2020 red primary gets posterized and a Rec.2020 blue primary goes purple/violet (hue skew/shift).

In this third render, an ACEScg red primary produces some artifacts on the impact. This is a plane with a midgray shader at 0.18.

In this fourth render, a sRGB red primary goes dorito orange when overexposed. This is a volumetric light by the way.

In this fifth render, an ACEScg blue primary produces some artifacts on the impact.

In this sixth render, a sRGB blue primary gets a hue skew towards purple/violet when overexposed. This is probably the most obvious color/example.

Here is a plot of ACEScg values’ path to white using the P3D65 ODT. I believe the posterization/clipping can be observed at the edges of the gamut, and several hue skews are also noticeable.

Finally, here is a plot of sRGB values using the P3D65 ODT. We can clearly observe the blue going purple and the red going orange on the path to white, as in the first render.

These great plots were done by our amazing in-house developer Christophe Verspieren. Thanks to him for generating them and letting me share them.

Update: I have fixed our plots to use output display linear light, by rolling the values through the inverse display EOTF. Thanks everyone!
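For anyone wanting to reproduce this on their own plots, here is a minimal sketch, assuming the P3-D65 ODT, whose encoding is undone by a pure 2.6 power function (the display EOTF):

```python
# Decode P3-D65 ODT output code values to display linear light.
# Assumption: the P3-D65 ODT uses a pure gamma 2.6 display encoding, so
# decoding back to display light is a 2.6 power function.
encoded = [0.1, 0.5, 0.9]                     # example ODT output code values
display_linear = [v ** 2.6 for v in encoded]  # display linear light
print(display_linear)
```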

Thanks for your help,
Chris

6 Likes

Hi,

I’m the Founder & Director of an independent studio. I’m in charge of the color pipeline for the film and CGI projects, and it has become a high priority, especially nowadays. Like many, I got interested in ACES some years ago and adopted it… but with some consequences.

The way I see it, we’ve taken a step toward an advanced, better and “unified” color pipeline, but one element (among others) is still present, in some ways, like “before ACES”… which makes me think of the past, with the well-known “sRGB OETF look” that I still see today in some CGI productions, marketing images of all kinds, student projects and even movies (including online content such as Netflix).

I’ve spent a whole year diving into the confusing world of color science, deeper than ever before. I do not qualify myself as a color scientist at all (maybe in another life), but I’ve realized how broken literally everything is: TVs, phones, the Web, OSes…

Troy Sobotka has been raising the alarm about the issues Chris illustrated above (giving credit where credit is due), and the vast majority of “ACES users”, from what I’ve seen in the CGI industry, are simply not aware of them at all.
I do not need to quote what has already been discussed and reported many times, but I look forward to progress in the future.

Thank you Chris for your post, and thanks to everyone involved in ACES, the Working Groups and the other contributors.

Best.

5 Likes

This is exciting! Keen to follow on this one (and see agendas & notes) :smiley:

The time is not really NZ-friendly though; 10 AM is in the middle of meetings :frowning:

1 Like

I hear you. With our chairs in Sydney and London, it is challenging to find times that work for all. Meeting cadence, duration, and start time are already scheduled for discussion at the kick-off meeting (e.g. will we alternate times like the Gamut Mapping group has been doing?).

I understand the desire to attend in person, especially for a group as exciting as this. I promise the notes will be thorough for those who can’t attend, and I really hope that work and discussions for this group in particular will continue outside of the brief meeting times. And, if what we’re doing isn’t working, we’ll adjust as necessary to make this as community-friendly as possible.

1 Like

Hello again,
as discussed with @KevinJW on Slack, we would like to see this SMPTE paper incorporated into the OT VWG. Its title is Core Color Rendering Algorithms For High Dynamic Range Display (written by @doug_walker and @garydemos).

Update: I found a nice video presentation about this paper.

On a personal note, I have been collecting papers about gamut mapping for the past six months and was intrigued by this approach: AMD Variable Dynamic Range (GDC 2016). There is also a YouTube video.

Hope this helps,
Chris

3 Likes

I agree. But how do we deal with incorporating something that is published behind a paywall into an open-source project?

2 Likes

For clarity and openness: I was not endorsing Gary and Doug’s paper as a fait accompli, but rather saying that we would want to take all proposals, including that one, under consideration.

Full disclosure: My personal point of view aligns well with the goals of the changes as outlined in the video, such as separating out creative adjustments from technical, improving inversion, etc.

Kevin

1 Like

Hi,

I ported an implementation of Timothy’s TMO to Nuke a while ago; here is a link to the BlinkScript kernel: https://gist.github.com/KelSolaar/1213139203911a72fef531c32c3d4ec2

Cheers,

Thomas

1 Like

Hello again,
I don’t mean to hijack this thread but I have been offered some very interesting information by @llamafilm.

I have done a few CG images for this group, which have been uploaded to Dropbox. I have tried to come up with colored renders that illustrate some ODT limitations, and I have shared these images with @llamafilm, who was interested in having a look. :wink:

In this first render, I used ACEScg primaries for the light sabers. It is probably the most extreme case for an RGB render engine. We can see that the shirt of Mery (the left character) is posterized. I tried to do the most brute-force render I could (without any light-linking or funky stuff).

Here is the same render after Elliott’s work. I was quite pleased with the result, to be honest, and thought it was worth sharing. The only “issue” I have noticed is that the pixel values for the green light saber went from “170” to “7” in the green channel.

Here is a render of the first 18 patches of the CC24. Render is in ACEScg, displayed in sRGB (ACES).

It is interesting to notice that this particular render does not improve much with Elliott’s LMT, probably because the albedo values were not very saturated.

Here is another render where I have put Rec.2020 primaries and complementaries in emission.

I kind of like how the blues behave in this one, even if the energy issue is similar to the one described in the light sabers example.

Here is another render with sRGB primaries used as albedo. The hue skew on the blue sphere is quite strong.

We can see here that the paths to white of the red sphere and blue sphere look more coherent, as if the hue skews were gone.

Here is a blue ACEScg primary in a volumetric light.

Another nice improvement… :wink:

Here is a red ACEScg primary in a volumetric light.

Same behavior here.

Here is a blue sRGB primary in a volumetric light, with hue skews on the impact.

Hue skews are gone here.

And finally here is a red sRGB primary in a volumetric light, with the orange dorito.

Improved by Elliott’s LMT.

I asked Elliott about this process and here are his explanations:

I’ve been looking at these images on a Vizio OLED. This is great fun! It really showcases the benefits of HDR, more than what I usually see with live action content. Using the stock ACES HDR ODT (Rec2020 PQ 1000), the hue shift and posterization are significantly better than Rec709, but the problems are still there. I ran this through the “Colorfront Engine” tool in Transkoder, which acts as an LMT. This helps a lot with Rec709, and a little with HDR. […] The Colorfront Engine was developed over many years by Bill Feightner, and it can be inserted right in the middle of an ACES pipeline as an LMT. It provides a few simple parameters to help craft a unique look, but here I just used the default settings. Right now this tool is available in the Transkoder and On-Set Dailies software, and they are planning to release an OFX plugin for Resolve in the future.

I had a look at the website and I thought there was some interesting information that I’d copy-paste here:

Colorfront is an advanced color volume remapping tool using the Human Perceptual Model for multiple-display mastering, maintaining the original creative intent. […] The following questions should be carefully considered when choosing a color processing pipeline for a mastering/delivery workflow:

  • Does it support the common master workflow?
  • Does it handle both SDR to HDR, and HDR to SDR?
  • Does it support camera original and graded sources?
  • Is it based on LUTs created with creative grading tools?
  • Do they break with images pushing the color boundaries?
  • Does it support various input and output nit levels?
  • Does it support different output color spaces with gamut constraints?
  • Does it support various ambient surround conditions?
  • Will SDR look the same as HDR? Is the look of the image maintained?

You guys certainly already know about this stuff but from a naive perspective, I thought these were proper questions. :wink:

Chris

3 Likes

Pretty much yes to everything. Maybe we could get someone from them to participate? @brunomunger?

As for the questions:
What are your top requirements for a V2 rendering transform?
Simple, keep chromaticities in the right place, no artistic intent

What are your top issues or complaints about the current transforms?

A forced default look that is difficult to disable, and weird distortions on some colors.

What’s on your wishlist for the new transforms?

State-of-the-art perception models as a starting point, and clear labelling.

3 Likes

A post was split to a new topic: Where should mid-gray end up through an Output Transform?

Other points via @ChrisBrejon:
I have more questions than answers, for sure… I guess OTs are tricky because they are not only mathematical but also perceptual. More questions will come, about:

  • Surround: a pure power function pushes midgray, right? Is this questionable for imagery?
  • Refactor the existing transforms vs. an independent new beginning? I am curious to see if there is agreement on this one. What should we do about the per-channel lookup and the infamous sweeteners of the RRT (glow module, red modifier and global desaturation)?
  • max(RGB): I have often been told that it should not be used as a tone-mapper on its own, but as part of a two-stage solution which would include gamut mapping… We are looking at a norm that literally delivers the chromaticity directly at the same energy level, right? (See the sketch below.)
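To illustrate the first and last points with numbers, here is a small Python sketch. Assumptions: a Reinhard-style curve stands in for the actual tonescale, and 0.9811 is only an illustrative dim-surround gamma:

```python
# Sketch of two of the points above, under the stated assumptions.

def tonescale(x: float) -> float:
    """Placeholder compressive curve, NOT the actual ACES tonescale."""
    return x / (x + 1.0)

# 1) Surround: a pure power function moves midgray.
#    (0.9811 is an illustrative dim-surround gamma, an assumption here.)
print(0.18 ** 0.9811)  # ~0.186 -> midgray is pushed up slightly

# 2) max(RGB) as a norm: tone map the norm, then scale R, G and B by the
#    same gain. All channels share one scale factor, so the R:G:B ratios
#    (the chromaticity) are delivered unchanged; gamut mapping is still
#    needed afterwards as the second stage.
rgb = [8.0, 0.4, 0.2]
norm = max(rgb)
gain = tonescale(norm) / norm
print([c * gain for c in rgb])  # same chromaticity, compressed intensity
```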

Thanks to all who were able to join yesterday’s call.

Here is the Meeting Recording & Transcript and also, if you don’t want to watch the whole recording, a detailed meeting summary.

The recording for each meeting is also linked on the Output Transforms Dropbox Paper site.

Hello everyone.
My name is Liam Collod; I am a CG student specializing in the “Rendering” side. I have a strong interest in colour science, which I started to study thanks to ACES one year ago, and I currently cannot stop myself from diving deeper into it.

Aside from my personal interest in ACES, we are also using it on the three short films that I and other students are working on. ACES allowed us to have a unified color-management solution across the whole pipeline without too much effort.

Unfortunately I wasn’t allowed to share our renders, but I will when our short is officially online. I think they are a good example of the “ACES” look: a look that we, and other people, find too contrasty and carrying a strong artistic intention (i.e. it is not neutral).

Aside from that, I have followed, with a lot of interest, the issues that ACES and color management in general suffer from.
I don’t want to repeat what was already explained, but I agree with all the great examples mentioned above.
For me it is crucial that the display-referred result maintains the scene-referred color intentions, without having to deal with skews and posterization.

With the spread of ACES in the CG industry, and especially among individual artists, I would like the Academy not to forget about “simple users” and SDR content.
If I take our production as an example, we are all working on monitors of different brands; some of them are good quality, others might give you the “Mexico in American movies” look, but we are all working on SDR monitors. The monitors were “calibrated” (gamma 2.2), but I have learned that unfortunately there is no magic. Furthermore, our viewing conditions can be very different from what is specified by the ODT.

With all of that, I just wanted to highlight that if we manage to solve these issues, it would benefit absolutely everyone. With more and more people adopting ACES as a color management system, ACES 2.0 can become a unique opportunity to solve a lot of color issues that have been around for years and maybe, who knows, lead big companies (Microsoft, Adobe, …) to care more about color management?

So please, my only wish is: don’t focus only on the movie industry. This project is a great opportunity to make things move in every digital creation domain.

Thanks to everyone who has been involved in making ACES better, and to everyone who will be!

Cheers,
Liam.

1 Like

Thanks for your comments. You make some good points.

In the project proposal that chartered this group, Chris and I emphasized that ACES 2.0 should deliver a more “neutral look”, avoid or at least dramatically reduce skews and posterization, and also make the system work more simply for everyone. So your comments are “on the list”, but it’s great to hear them again in different words, and to re-emphasize the issues that are important to many.

2 Likes

Hi Liam,

It is worth mentioning that almost all the student films I see from 3D schools do not have a DI/grading “stage”. You should know that there is one in every budgeted production (from shorts and indie feature films up to higher-budgeted productions).

What you are “judging” is the out-of-the-box ACES image processing result.
Not to mention that what you are working with, I assume, is linear scene-referred HDR EXRs, while many productions work with log footage.
“In production”, the workflow would be log-based; it is not impossible to grade linear footage or renders, but it is “unfriendly”, and log is the “common workflow”.

I would also add that what is missing in several DCCs is “OCIO looks” as part of the OCIO implementation. It would be practical if more DCCs supported them and allowed you to “add your own look”. This post might interest you.

ACES (ACEScg in our case) is wide gamut, but it also produces out-of-gamut (OOG) values. You will still potentially have to deal with OOG images because of its “design”.
I tend to believe that we should not be working with a “dangerous” rendering/working space such as ACEScg; it is unfortunate that it goes OOG.

For instance: I do not think many are aware that their color picker becomes ACEScg color managed when they use ACES. Some may be pushing saturation in their color picker to high values, as they are used to doing with the native sRGB management of their DCC (generally speaking), potentially on improperly calibrated (or uncalibrated) display devices like you mentioned.
This increases the risk of students (and similar individuals) producing non-plausible, out-of-gamut and unrealistic values (shading, lighting…).
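To put numbers on the color picker point, here is a minimal sketch assuming the colour-science Python package: a fully saturated linear sRGB red lands on a fairly moderate ACEScg triplet, so typing (1, 0, 0) directly into an ACEScg-managed picker selects something far more saturated than the sRGB primary the artist thinks they are choosing.

```python
# Sketch of the colour picker point, assuming the `colour-science` package.
import colour

srgb = colour.RGB_COLOURSPACES["sRGB"]
acescg = colour.RGB_COLOURSPACES["ACEScg"]

# A fully saturated *linear* sRGB red...
srgb_red = [1.0, 0.0, 0.0]

# ...lands well inside ACEScg:
print(colour.RGB_to_RGB(srgb_red, srgb, acescg))  # roughly [0.61, 0.07, 0.02]

# So an artist typing (1, 0, 0) into an ACEScg-managed picker is selecting a
# far more saturated (and, as an albedo, implausible) colour than the sRGB
# primary they are used to.
```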

Regarding the multinationals mentioned, ACES should not be the only reason for them to care more about color management.

As Chris B. often says, “with great power comes great responsibility”. I would also add that ACES is not obligatory for producing beautiful images (and it is not for everyone either). But if it is being considered, the whole color and imaging process has to be taken care of meticulously. Plausible values, gamut mapping and compression, tone mapping, scene-referred color processing (tools and manipulation), etc., are important throughout the whole image creation process.