Gamut Mapping Part 2: Getting to the Display

Hey @TooDee - Sorry I didn’t specify in my post: it’s updated at the same link,


Totally understand … thank you for playing with this and sharing what you’ve learned.

I think that’s the part I’m reacting to really. I don’t know if we do or don’t. A chromaticity preserving tone scale sounds great, makes great plots, looks great on some images, and looks absolutely horrible on others. We spent a ton of time in the ACES 1.0 development process looking at a chromaticity preserving tone scale, and it was like an onion: we’d peel back layers and just run into more and more issues. The models kept getting more complex and eventually we pulled the plug. I think it’s worth looking at again, but I’m not sure we want to make it a hard requirement.
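For readers unfamiliar with the distinction being debated, here is a minimal, hypothetical sketch contrasting a per-channel tone scale with a chromaticity preserving one. The toy curve and the max(R, G, B) norm are assumptions for illustration only, not the ACES tone scale or any proposed algorithm:

```python
import numpy as np

def tonescale(x):
    # Hypothetical toy tone curve, for illustration only (not the ACES RRT).
    return x / (x + 0.6)

def per_channel(rgb):
    # Applying the curve to R, G and B independently compresses each
    # channel by a different amount, which skews hue and saturation.
    return tonescale(rgb)

def chromaticity_preserving(rgb):
    # Apply the curve to a norm (here max(R, G, B)) and scale the whole
    # triplet, so the RGB ratios -- and hence chromaticity -- are kept.
    norm = np.max(rgb, axis=-1, keepdims=True)
    scale = np.where(norm > 0.0, tonescale(norm) / norm, 0.0)
    return rgb * scale

flame = np.array([4.0, 1.0, 0.05])  # a bright orange, scene-linear

skewed = per_channel(flame)
kept = chromaticity_preserving(flame)

print(skewed / skewed.max())  # ratios changed: the flame drifts toward yellow/white
print(kept / kept.max())      # ratios identical to flame / flame.max()
```

The "looks horrible on others" complaint follows directly from the second function: because the ratios never change, a bright flame stays fully saturated all the way up the tone scale instead of desaturating toward white the way per-channel rendering (and film) does.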

This is just the way my brain splits up the problem, but going from scene-referred colorimetry to the colorimetry as we want the image to appear on the display is the rendering. For the sake of argument let’s call that output XYZ. There’s a separate, and relatively easy, task that needs to occur to take those output XYZ values and turn them into a set of code values. When those code values drive the display in question, measurements of the light coming from the display should match the rendering’s output XYZ values.

[Diagram: rendering (scene-referred colorimetry → output XYZ) followed by signal generation (output XYZ → display code values)]

Obviously, depending on the details of the rendering, the output XYZ values may exceed the capabilities of what the display can actually show. If that happens, choices need to be made in the signal generation stage. With this sort of conceptual split though, you can do interesting things. For instance, you can make the rendering create output XYZ values for a limited dynamic range and a small set of primaries (e.g. Rec.709) and then generate the corresponding code values to correctly display those output XYZ values on a display with much higher dynamic range and a larger set of primaries. This effectively lets you use an HDR monitor as an SDR monitor without changing the monitor calibration. @sdyer and I do this all the time with our BVM-X300.
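As a rough sketch of the signal generation step alone, under assumed simplifications (a Rec.709 display with a pure 2.4-gamma EOTF; the standard XYZ-to-linear-sRGB matrix), including the "SDR rendering on a brighter monitor" trick, which is just a rescale by the display's peak luminance:

```python
import numpy as np

# Standard XYZ (D65) to linear Rec.709/sRGB primaries matrix.
XYZ_TO_REC709 = np.array([
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])

def xyz_to_code_values(xyz_nits, display_peak_nits, eotf_gamma=2.4):
    """Signal generation: absolute output XYZ (in nits) -> code values."""
    rgb = xyz_nits @ XYZ_TO_REC709.T / display_peak_nits
    rgb = np.clip(rgb, 0.0, 1.0)       # naive handling of out-of-range values
    return rgb ** (1.0 / eotf_gamma)   # invert an assumed pure-gamma EOTF

# A 100-nit D65 white from an SDR rendering...
white = np.array([0.9505, 1.0, 1.0890]) * 100.0

sdr = xyz_to_code_values(white, display_peak_nits=100.0)   # ~[1, 1, 1]
hdr = xyz_to_code_values(white, display_peak_nits=1000.0)  # lower code values
# ...so light measured off the brighter display still peaks at 100 nits.
```

The point of the sketch is that the rendering's output XYZ never changes; only the (easy, display-specific) encoding does.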


The simple fact that it would be such a departure from the current rendering makes it a no-go as a default for me. That does not mean we should not provide a way to do it, though; the advantage is that it could be introduced alongside the current one, letting people test and migrate their workflows to a hue-preserving one.

100% agreed on the gamut-mapping point!

Cheers,

Thomas

Just to complement the hue-preserving discussion, I remembered the presentation Alex Fry (EA) gave at the GDC 2017: https://www.gdcvault.com/play/1024466/High-Dynamic-Range-Color-Grading


Sorry, I may not have responded to this directly. I wasn’t trying to say gamut mapping wasn’t part of the rendering, but rather was describing the conversion from output colorimetry to display code values. If the rendering produces values outside the display’s capabilities, gamut mapping is obviously needed. It’s probably better to make the rendering not produce out-of-gamut values to begin with, though.

Thanks for clarifying. As is so common in conversations about color, differently understood terminology seems to be the source of my misunderstanding here. As your diagram shows, I believe you are thinking about “Rendering” as a separate step from the transform to get to display colorimetry, where I was thinking of it as the whole process to get from scene-linear to display.

I do wonder though - is it really possible to separate rendering from transformation to display colorimetry?

Isn’t this impossible? - (This may be another stupid question - like I said, color pragmatist)

What if we need to create a rendering for Rec.2020 and for Rec.709? Are we going to use the same gamut mapping approach? Would we limit the colors to the smaller one? Or would there be two different approaches specialized for each display gamut?

Certainly we could conceptually move the gamut mapping problem into a box between “Output XYZ” and “Display Code Values” but it seems to me like this would still need to be a specialized process for each output display gamut?
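One way the "box" between Output XYZ and Display Code Values can be generic in form while specialised in effect: if the operator works on linear RGB expressed in the target display's primaries, the per-gamut specialisation lives entirely in the conversion matrix upstream. A minimal sketch of such an operator, a crude desaturation toward the achromatic axis, offered as an assumption-laden illustration rather than a proposed algorithm:

```python
import numpy as np

# Rec.709 luma weights, used here for simplicity; a real implementation
# would derive the weights from the target gamut's own primaries.
REC709_LUMA = np.array([0.2126, 0.7152, 0.0722])

def desaturate_into_gamut(rgb_linear):
    """Pull a colour toward its achromatic axis until no channel is negative.

    The target gamut enters only through the primaries rgb_linear is
    expressed in, so the same operator serves Rec.709, Rec.2020, etc.
    """
    rgb_linear = np.asarray(rgb_linear, dtype=float)
    luma = float(rgb_linear @ REC709_LUMA)
    lowest = float(rgb_linear.min())
    if lowest >= 0.0 or luma <= 0.0:
        return rgb_linear  # already inside, or unrecoverably dark
    # Find t in [0, 1] so that lerp(luma, rgb, t) has its lowest channel at 0.
    t = luma / (luma - lowest)
    return luma + t * (rgb_linear - luma)

inside = desaturate_into_gamut([-0.1, 0.5, 0.2])
print(inside)  # lowest channel is now exactly 0, luma is unchanged
```

Because the luma weights sum to 1, the lerp toward the achromatic axis preserves luma exactly; only saturation is sacrificed. Whether such a one-size operator looks acceptable on both Rec.709 and Rec.2020 targets is exactly the open question above.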

Please let me know if I’m missing something here, I am genuinely curious.

There has been a very interesting conversation on slack and I thought it would be worth doing a summary on acescentral.

@Thomas_Mansencal :

my only point is that removing hue shifts as a default is not desirable because of the arguments Alex Forsythe brought up and the simple fact that it would change the look massively. […] Reminded me of Alex Fry’s GDC presentation from 2017 where he explained that he was adamant about removing them but it did not work for all cases, quite the opposite!

Which @Troy_James_Sobotka replied :

Optionally, I think the point is being missed.

  1. The display can’t represent the colour of the flame, so it skews.
  2. It is possible, if one were so inclined, to add a saturation adjustment to return more of the “yellowish” flame.
  3. It is impossible to remove the saturation surface of the image once the skews are introduced.
    […] But again, the crux of this discussion is essentially about why bother with colour science at all if it never comes out of the display?

The conversation continued with @Thomas_Mansencal explaining:

Without entering into subjective discussions, changing something like the default tonescale contrast has much less impact than making the tonescale hue preserving. So making the latter change a default in a new ACES rendering transform has much deeper consequences, so I don’t think it is a good idea; it should be optional. If you have built a ton of assets under the current rendering transform and you end up in a situation where the new one forces you to rebalance all of them, it is not a great change. […] Not saying there is no problem, just saying that the medicine should not be the nuclear option.

@daniele then added :

If the skew is part of the Display Rendering it needs to do the same skew for all deliverables. In fact your cg assets will look completely different in HDR under current transforms. […] Well the starting point should be same appearance, I guess.

Which Thomas replied :

They will (look different) and it is acknowledged; it entirely depends on the deliverables. And the client might either be fine with a different look in HDR or even request it. […] Just look at the Haywire trailer by Steven Soderbergh… Not saying that I like it, but the client pays the bills here and he is certainly pushing for that look. I don’t know if the appearance should be the same across vastly different classes of displays: are clients expecting that the appearance of 100 nits, 10000 nits and the real world are the same? It is a fundamental question to which I have no answers. Say I’m paying extra bucks to go to HDR cinema, what is the point of getting there if things look the same as SDR exhibition? That is the type of question producers ask themselves.

Then Daniele explained :

I am not saying HDR and SDR should be identical. But a yellow candle in SDR should not be orange in HDR, I guess. At least I know many colourists who are confused when that happens.

To which Thomas replied :

I certainly could see that happening, but all your pyro effects turning red/white under a hue-preserving rendering transform is equally bad, and it does not affect a single person but hundreds of people. All of a sudden, all your Lookdev, FX, Lighting & Rendering teams are affected. Sometimes, broken by force of adoption is what is correct, but we are entering philosophical discussions here. And again, what feels broken for you might be a feature or the expected behaviour for others. […] I can certainly point at the dangers of changing the rendering transform in such a dramatic way that it preserves hues. […] Changes to a large system like ACES must be incremental; you just can’t modify something as critical as the RRT without triggering a storm. Instead you should give people the option to change their workflow, at their will and gradually. Happy to be corrected, but it seems like the only sane thing to do to avoid having an angry mob knocking at the Academy’s door!

Jed entered the conversation :

2.0 seems like a good version to make a change to the default display rendering transform. I’m not sure there would be an angry mob if it looked better than the current version. I think there are a lot of people, especially in VFX, who use ACES because it is easy and off the shelf, who think that it is a “neutral” view transform and are trusting it to represent their asset work faithfully, and who do not realise what is happening to the color they are so careful to create once it goes through the view transform to the display. Certainly a chromaticity preserving rendering method is not the easiest path forward, but I do not think an approach should be discarded because it is difficult or inconvenient. As professional image makers we should be able to make intentional decisions about the appearance of color, to have control over what is happening. I think there is benefit in the approach of trying to pull apart the reasons why certain things are happening, why we are seeing certain artifacts. It is very difficult to break down problems like this because there are so many factors at play. But I think if we try to isolate problem areas maybe there are solutions we haven’t thought of.

And Daniele suggested :

You can make an LMT to match ACES 1.0?

@nick also shared his thoughts :

If we offer a few options out of the box, then I think the default should feel close to what we have now, to keep the people who already use ACES comfortable. But we could have a hue preserving alternate, and maybe people would gradually move over. Then we can deprecate it in 3.0.

A big question brought up by Thomas and Troy was about objectivity:

The problem is that looking better is highly subjective and any large DRT change has potentially a large cost.

I can’t help but find this “highly subjective” line of discourse quite disturbing. What’s our ground truth? We have a working space. We have values. We are not displaying them. At all. Not even remotely.

Interestingly enough it looks like the famous K1S1 DRT is not hue-preserving.

Then @sdyer entered the conversation :

[…] all options are on the table - if they meet the requirements. Which we are trying to define. Everyone here seems to be acting like decisions have already been made. Can we focus on defining absolute requirements and then maybe even things that “would be nice to have” (if we can get them without breaking other stuff)?

We then agreed that we would need to see examples of how a chromaticity preserving tone scale “sounds great, makes great plots, looks great on some images and absolutely horrible on others” to better understand what happened during the creation of ACES 1.0.

I also tried to give my tuppence :

Keeping skews for a major release doesn’t sound right to me. For a major release, we should not be shy of changing stuff. This is the moment to do things “right” or “better”… Not 3.0. By definition, a major change means stuff will change and eventually break.

Which Scott replied to :

I can tell you I am 100% in favor of changing stuff, even drastically, IF it fixes the things we consider broken and still delivers on the other things we want it to keep doing. The requirements define the solution, so figure out what it must do and then we can focus our efforts on making something that does. We can’t sell a big change just for the sake of changing things. But if we can show the many ways in which it improves known issues with v1, we will have a lot easier time convincing the stalwarts to change and the holdouts to revisit it.

Finally Daniele shared some thoughts :

I still have difficulty accepting the “cg assets” argument. If the assets are so fragile with respect to the DRT in use, how do they survive a normal grading session? All the cg and VFX elements I got my hands on work very well with different DRTs and looks. They look different, but so does all the live action too. […] In a way a cleaner DRT should actually help to produce neutral assets.

Final words from Thomas about CG issues:

Not all assets are equally fragile, pyro elements certainly would be. You have a whole class of people using pre-rendered texture sheets and such in games. And they tweak the look through the DRT directly.

I have tried to reproduce this conversation in the most faithful way. If you think I haven’t done a good job, please let me know and I will modify/delete the post. But I thought it would be important to share with everyone on acescentral.

  • Should the new Output Transforms be hue-preserving?
  • Should there be a default behaviour and then an optional one?
  • Do we want a smooth transition or a fresh start?
  • Should we consult (again) with studios/people using ACES to see what they think?

Regards,
Chris


Thanks @ChrisBrejon,

To complement the CG assets point, I made an emphasis on pyro, but it can be extended to any emissive source generally, or to high-reflectance assets or ones we are super picky about, e.g. CG skin.

As support to what could be the widescale effect, here is an old public thread that highlights the type of challenges that a change of DRT can induce.

The worst that could happen would be people trying again to apply the inverse of the new RRT with some custom massaging to maintain the previous look. Notably, a significant portion of the time I spend on ACEScentral has been either explaining to people the ACES look compared to their “Old sRGB” one BUT also discouraging, sometimes with friction, their attempts at maintaining it by applying the inverse RRT to their textures.

I have been fortunate enough to work in a few studios and experienced DRT changes from the first or second row and, no, it is not as simple as flicking a switch.

Cheers,

Thomas


@Thomas_Mansencal I have seen the effects of DRT change on cg asset look development. I agree that it is not a simple or well-understood problem, and it can cause serious headaches if the workflow is not understood and planned properly from start to finish.

My perspective here will be from feature film VFX - and from my personal experiences.

A conflict often arises in VFX studios when there is a desire for a consistent internal DRT in cg asset look development. An internal DRT has benefits for workflow, re-usability, and makes things easier for lookdev and texture. It can help a lot of things.

However, on different shows, the DRT from the client DI house is very rarely consistent. Every show has a customized display rendering transform.

At some point the cg asset has to be composited into a shot and sent to the client. If significant dialing of lookdev has happened under an internal DRT, and that asset then gets rendered and comped, there is real danger that the appearance will change significantly - especially on more saturated colors like pyro, and more sensitive subjects like human skin, as you mention.

So my question is: if a studio is using an internal DRT for cg asset work, don’t they already face the problem you mention? And as @daniele pointed out, wouldn’t a more chromaticity-accurate display rendering transform actually help solve some of these issues?

In the past, I have suggested workflows where initial cg asset work happens with the internal DRT, but at a certain point, evaluation of that work should happen under the show DRT, to avoid a big surprise when the work goes into comp.

Maybe cg asset look development work should even be checked under multiple DRTs in order to verify that assumptions are correct: an internal DRT, the show DRT, and a simple linear-to-linear display as you were talking about in your earlier post.


This is a good point, here is my take on it: The lookdev cost to fine-tune any reused assets from a facility asset library is almost always factored into the bid for the show. If you are smart, you develop your reusable assets under the facility DRT and you delegate show-specific changes as late as possible, e.g. Lighting and Compositing tweaks.

When you get to the point where you have a consistent set of changes or improvements that look good under the show DRT, you can merge them into the show variant as a base, and you might also decide to merge them back into the asset library variant if they look good under the facility DRT.

The idea is to try branching from the asset library/templates and merge back in show improvements if deemed appropriate.

This works, and I have lookdev’ed a lot of assets without ever looking at the show DRT. Depending on your workflow, fine-tuning as mentioned above can be delegated to the Lighting artists. It is also extremely hard for an asset that worked well under the constrained setup of the lookdev lighting scenarios to systematically behave correctly under the thousands of shot lighting variants, and artists probably have hooks to tweak the shaders at this stage of the work.

On paper, it should, but see EA, which is a public example that is easy to cite :slight_smile: At any rate, I’m not saying that we should not have such a feature; I’m saying that we should not make it a default for the new rendering transforms: it has a dramatic impact not only on the appearance of everything but also on the behaviour of the tools themselves, i.e. people are used to working with hue skews.

Cheers,

Thomas


Hum… I don’t know to be honest. From what I understand:

  • Few studios actually use the ACES Output Transforms, so the impact would not be as dramatic as it may sound.
  • Assets are probably lookdev’ed already with different lighting/viewing conditions and should be robust enough to survive this stress test.
  • Lighting artists tweak assets on a per-shot basis all the time anyway.
  • CG assets in my experience are never re-used in their exact original state. Between an original movie and a sequel, a couple of years have generally passed, with the introduction of new technologies. So we always end up adapting and tweaking the assets for each show for this very reason.
  • I think this Working Group has some of the best people in the industry in it and I fear that these limitations will inhibit us. The Academy should show the way rather than trying to respect a legacy behavior.
  • It is also important to think of the scope of this group, and I do think that the people who actually use and care about ACES are on acescentral. Will they voice their concerns by reading this post? I’m not sure. What I am trying to say is: can we reach out to MPC (for example) and see if they’d be concerned?
  • People who like the skewing look could/should use an LMT in my opinion. Or even stick to ACES 1.2 if needed?

I just asked a VFX supervisor about this specific topic. Short answer is :

It does not change anything to me if the Output Transforms are modified. We will adapt the assets, especially if it is to reach a better look.

And I asked another CG supervisor about this :

In fact, I compare this issue to the way we manage projects. Each project has a version configuration of particular software and plugins. The project is supposed to work with those versions. And we know very well that intermediate releases should not break backward compatibility, while major releases potentially would. It seems a little illusory to me to imagine a version system where you never break something. And then hey … We’re not talking about making assets unusable … Just a little effort to adapt.

It is interesting to notice that they both used the word adapt. Because it is probably what they are used to doing already. :wink:

And I will check with our CTO tomorrow to see if I can get an official response from our studio. But so far my personal take is that any animation studio or school I have worked with would prefer a better Output Transform even if it means tweaking some assets rather than the contrary. We should aim at the highest quality possible in my opinion.

Happy to discuss,
Chris

PS: I agree that this is the limited point of view of a lighting artist working on animated features. Especially the argument that a couple of years have passed between movies.


Any statistics that we don’t have about this? Seems quite important! :slight_smile:

But it is certainly not mutually exclusive with having a main area, either facility or show centric, where you define 98% of the Look, right?

How many shows are you working on concurrently at Illumination, 2-5? Scale that to 15-20 and then you will have a very different perspective to the problem :slight_smile:

As I read it, you have already decided that the current per-channel processing is legacy, which is entirely fine (although the VWG and the TAC will need to be convinced), while I’m suggesting, again, that such drastic changes, if adopted, should be introduced incrementally with all the carefulness required.

I think you either misrepresented the topic here or your CG supervisor did not understand it properly. This group would simply not exist if the Academy was not willing to implement any changes or have ACES evolve. Also, we would not have written the ACES RAE paper 3 or 4 years ago if we had thought the Academy would not be responsive to our proposals. The crux of the problem is how you introduce those changes to your userbase. The Academy, and we ourselves, have a responsibility toward them and we simply cannot act like cowboys. If we want to introduce critical look changes, e.g. ACES 0.1.x to ACES 1.x.x, we have the responsibility to introduce them gracefully; a hot swap is anything but graceful IMHO.

It is a given :slight_smile: but you don’t necessarily need to resort to the BFG to get there.

Cheers,

Thomas

Thanks Thomas ! As always, your help and insight are much appreciated.

A few disclaimers :

  • I owe you guys everything ! I have been part of the ACES community for more than two years and it has been a great journey and learning experience.
  • Apologies if I ask too many questions or post too much but I like to get to the bottom of things as you guys have probably noticed.

My assumption that few studios actually use the ACES Output Transforms comes from personal conversations. I was able to discuss this topic with several people and this is the answer I have been given several times. I’d love to help run an official poll among VFX and Animation studios to come up with accurate data.

Totally! But may I ask which studio? Is there actually a studio with 15 shows going that uses the ACES Output Transforms for lookdev and rendering? You’re not talking about Weta, right? Since you guys do not use ACES.

Not trying to convince anyone here. :wink: If per-channel lookup is responsible for blue spheres going purple, I can only encourage looking at other options. About the drastic changes and the carefulness they require, I have been asking another CG sup who uses ACES, and our CTO. They both gave me the same answer: if there is a jump in quality, we’d rather patch the assets. I guess without seeing any images or examples, it is impossible to take a decision anyway.

I don’t necessarily disagree with you but I think it would only be fair to ask the userbase, in my opinion. I shared the link to meeting #1 on LinkedIn so people could express their needs. It got 3169 views but 0 answers. My take on this is that people/companies/studios will adapt to what ACES 2.0 offers, and that the userbase that will be affected by ACES 2.0 is already present on acescentral… Isn’t that the case?

More thoughts on the topic. I have been thinking about the audience after reading this very interesting post. From our conversation, I see 2 different audiences for ACES:

  • Expert users/companies whose main request for ACES 2.0 would be Daniele’s idea: modularity and flexibility (I hate to put words into other people’s mouths but that’s the way I currently see things, especially after the last meeting).
  • Non-expert users/studios (a group to which I belong), and I guess it would be fair to say that our request would be a more robust Output Transform, since we do not necessarily have access to color scientists. That’s at least the feedback I got from my VFX sup friends…

As far as I know, Illumination (with our current slate of movies) is currently the biggest studio using ACES from input to final output. I’d love to be proven wrong on this one though. :wink: But I never heard of a studio that uses ACES the way we do, except On Animation Montreal.

I was able to talk to our CTO today and he was absolutely positive that if we ever made a sequel to a movie done with ACES 1.2, he’d rather use ACES 2.0 to get a quality jump and patch thousands of assets if needed. So I think it would be fair to say that this is the official position of our studio. I’d love to hear from the userbase if anyone is in disagreement. And of course, you could still have an LMT emulating ACES 1.2 if needed. Or, worst case scenario, stick to ACES 1.2 until you can make the jump.

Furthermore, we also think that hue skews should be part of a look. If a client wants a hue skew, it should be done through an LMT as well, not through the default look. Don’t we agree that the ACES 2.0 Output Transform should be as neutral as possible? And that hue skews are not neutral by definition? Anyway, I just want to emphasize that all the examples I have provided come from real production situations. The saturated spheres, characters and volumetric lights come from our extensive testing with ACES.

Final word from another CG sup friend (I have asked seven supervisors so far):

As long as ACES 2.0 does not change the whole workflow, it will be easily accepted I think. Let’s not kid ourselves: when you do a sequel, you have to re-check every single asset, because a lot of project features change from one project to another. Look at Random Walk SSS or Ptex…

I think that doing the right thing (if there’s such a thing) is a great way to be innovative and that it will make more people wanting to use ACES. But I hear you : we cannot act like cowboys and should consult with the userbase if that’s an option.

Looking forward to your future answers,
Chris


I think I might modify the statement about ‘studios using the ODT’ slightly. First, what do we mean by studio? It could be many different groups of people, so I’ll interpret it as applying to a specific film/project.

I would agree with the statement that I’ve rarely (possibly never) seen a project use the ODT as-is with no LMT, but I do have 1-2 out of many current projects using an LMT combined with stock ODTs for making their editorial media - no idea what happens in DI. These projects tend to be from the same “film studio”. It is not uncommon for these projects to emulate another look via the inverse of the RRT+ODT.

However more of a problem for me are the “15-20” other projects which don’t deliver an ACES workflow, but instead hand me a single baked LUT, which makes adapting the look to any of my other output devices more work than it could be. I don’t know if these are baked down versions of an LMT+RRT+ODT, or some print stock emulation, a simple tonescale+gamut adjustment, or something totally creative (I’ve reverse engineered examples of at least all of these).

Supporting these and other creative choices that are in conflict with each other, e.g. desaturating highlights vs preserving pure high-luminance colours vs preserving hue, within the overall framework is a suitable goal to try to achieve.

This is separate from what the stock rendering should be. As Scott suggests, we can break things if it makes sense. As can be seen in previous releases, attempts have been made to preserve historical backwards compatibility by supplying LMTs and/or other emulations “under” the current rendering; we should of course follow suit.

Perhaps we need to categorise the current wish list by stating whether it is possible to agreeably solve each problem under the current framework or not. To me this means that modifications to the current rendering need to be made in the direction of facilitating more possible outputs and moving the “restrictions/constraints” to the LMT.

If we have enough of a case on the ‘not’ side, then it makes sense to consider what adjustments to the framework need to be made to allow for the desired flexibility, whilst minimising the other valid concerns content owners have, such as wanting to limit the scope of black box/secret sauce components like baked LUTs - I think it is OK to have such a component for creativity, but it shouldn’t prevent anybody from retargeting to an alternate output device. I.e. I do not think we can go as far as “providing a set of LUTs” that replace the whole Output Transform.

Kevin


Thanks for the clarification, Kevin! This reinforces my impression that we’re one of the few using ACES as a color management system without any modification/LMT.

Chris


Maybe, or they will move on :slight_smile:

I don’t know; I have not seen a lot of people from MPC or ILM roaming around here. I certainly see a lot of the same faces, but saying we are representative of the entire ACES user base would be wrong. When in doubt, think about all the studios in China, Vietnam or India that are using ACES but do not come here because of the language barrier; maybe we don’t care, I won’t be the judge here. With that said, we are certainly the people interested in steering the project direction, which gives us a ton of power BUT also a lot of responsibility.

Here is an experiment for you guys: take a good dozen shots with as much varied content as possible from your current shows, process them through ACES 0.1.0, and see if it addresses your wish list; if not, report why. You should also try with Jed’s nuggets above.

You probably wanted to say “NOT be part” :wink: You guys are totally entitled to wish for a hue-preserving rendering, but I would like to point out that non-hue-preserving rendering accounts for the majority of all the content produced by the entertainment industry. Given that, would I say that all the movies shot with ARRI and their rendering pipeline are broken, or that the rendering of Planet of the Apes is busted because of hue skew? I would not dare. Would those productions have benefited from hue-preserving rendering? I cannot say, but at any rate, they are out and made their producers and consumers happy.

Broken could be the normal, the neutral. Nobody would be foolish enough to type on a keyboard optimized for morse code, right?

To me, innovative would be to offer more choice to our user base, not impose it. Quite coincidentally, I will finish by quoting @KevinJW:

Thanks Thomas for the answer !

I understand that there might be studios using ACES which are not present on acescentral. But I also think we should not forget about the people who are active here and try to point out real issues from productions. Not saying it is the case though. :wink:

I’ll try the ACES 0.1.1 experiment out of curiosity. Probably this week when I find some time.

And just for the record, I meant in a clumsy way that hue skews should be part of a separated look, aka LMT. Sorry for the inaccuracy.

Take care,
Chris

From my standpoint, you are adamant that hue-preserving rendering is a requirement for your workflows, but at the same time I have the feeling, and I could be wrong, that you have not tested it thoroughly.

If this is the direction you would like to go, I would not try that as a curiosity experiment but as a truly objective process to come up with results that can inform the VWG direction.

Some of the issues with the ACES 0.1.x were pertaining to noise/camera black levels/negative values, with pure CG rendering you should not really suffer from that and should be in a position where, hopefully, the DRT does not exhibit too many artefacts.

My above suggestion is valid for any changes we do, they must be motivated and we ideally need to show that, objectively, they fix problems or make the work easier. I’m quite glad that @jedsmith is spearheading the effort with hue-preserving rendering and has been producing images, please keep them coming!

Cheers,

Thomas


At the risk of repeating myself, I am not adamant on anything. I work for people who are adamant on certain things as you have certainly understood by now. :wink: I hope that you realize that I am currently in a tricky situation and am a tiny bit desperate for solutions.

I personally feel like I have done nothing but test ACES upside-down for the past two years (which has been a great experience). Hence the questions, posts and images I provided to this group. Only today I shared three more images to compare RED IPP2 with the ACES Rec.709 Transform. You should also see the amount of documentation about ACES at work, it’s insane. :scream:

The main reason 0.1.1 was discarded was its lack of invertibility, which, if I have understood correctly, is a requirement for the ACES 2.0 Output Transform. Nonetheless I did some tests with 0.1.1 not more than ten days ago. I was just curious about why some people liked it and others did not.

I do understand that changes must be motivated but I think that my examples and explanations from my numerous posts speak for themselves. I have also tried to provide for each render an example of what I think it should look like, as a point of comparison.

I am more than happy to do more testing but I would ask you to point me in the right direction if possible. :slight_smile:

@sdyer I have a question for you: is a production using the 0.1.1 Output Transform currently possible? What would be the cons? No HDR Output Transform? No Resolve implementation for DI?

Thanks guys !
Chris

Far be it from me to shoot the messenger! Please invite them here, the more the merrier! I think we have distracted the thread enough, so I will stop and we can continue on Rocket/Slack anyway.