ACES 2.0: Seeing a few issues

I find the discussion about DRTs and picture formation a bit one-sided.
Sure, a DRT should produce plausible imagery given camera data, but what is
equally important (and not really discussed here) is the translation to other
viewing conditions and display capabilities.

I personally see the “predictability” over the range of possible display
capabilities as most important.

If a DRT is “predictable” and “simple”, you can move all the creative work
into an LMT.

For example, it does not help if the yellow fire looks nicer in ACES 1.0 SDR
if, as you go to another output, the yellow of the fire turns pink.
I’d rather have a DRT with pink fire, plus the ability via an LMT to tune the fire’s yellow to my taste, knowing that my fires are presented consistently across all the different deliverables.
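To make the ordering argument concrete, here is a minimal sketch (not actual ACES code; the `lmt_warm_fire` and `drt` functions and all values are made up for illustration) of a creative look applied in scene-referred space upstream of a fixed, simple DRT. Because the look lives before the DRT, every deliverable inherits the same creative intent:

```python
def lmt_warm_fire(rgb, warmth=1.15):
    """Hypothetical LMT: nudge scene-linear red/green toward yellow."""
    r, g, b = rgb
    return (r * warmth, g * warmth, b)

def drt(rgb, peak_nits):
    """Hypothetical, deliberately simple DRT: a per-output tonescale only.
    All creative decisions stay in the LMT upstream."""
    def tonescale(x):
        return peak_nits * x / (x + 1.0)  # toy Reinhard-style curve
    return tuple(tonescale(c) for c in rgb)

fire = (4.0, 2.0, 0.2)            # made-up scene-linear "fire" value
graded = lmt_warm_fire(fire)      # one creative decision...
sdr = drt(graded, peak_nits=100)  # ...carried into the SDR deliverable
hdr = drt(graded, peak_nits=1000) # ...and into the HDR deliverable

# The R:G relationship set by the LMT maps identically to both outputs:
# sdr[0]/sdr[1] == hdr[0]/hdr[1], so the fire is "equally yellow" everywhere.
```

With a skew-heavy DRT, by contrast, the same upstream grade would land differently in each deliverable, which is exactly the predictability problem described above.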

In my experience, if a DRT is too “skew-y”, it makes proper Look-Dev very hard.

I hope some of this makes sense.


Does this not make an a priori assumption that we know what we are doing?

We have displays all around us, for example, that we can jack the emission on. Does this change the pictorial depiction? In the case of the chrominance dimension, we have an equally tricky problem to solve here, in that we have to consider how the 2D light array is parsed by us as organisms.

That again is an a priori assumption that we know what we are doing, and that we are doing it “correctly”, no?

Wrapping the algorithm that forms the picture into the language of “Display Rendering Transform” is arguably foul here, as it is creating the illusory idea that we already have a formed pictorial depiction in the input data state, and that begs the very question we are seeking to answer.

What are we doing in forming a pictorial depiction? Under the framing of “Display Rendering Transform” we are suggesting that we are “merely” “rendering” to the display. Where’s the picture? I would hope that we can all agree that scalar magnitudes of energy in the open domain are not a picture. The values in an EXR are not a pictorial depiction; whether a picture emerges depends upon what we do to those discretized scalar quantities of open-domain energy.

If we couple the pictorial depiction to this idea of “Display Rendering Transform”, we are no better off than when we started. We haven’t segmented the algorithmic approach to forming the pictorial depiction from the presentation context.

This is a wholly different problem as best as I can tell. We are yet again conflating where the pictorial depiction is. To draw an analogy, if we have a giclée print of Starry Night, there’s no confusion as to where the pictorial depiction is. Most of this we can blame on Giorgianni of course, who erroneously placed the “picture” in the open domain scalar quantities of energy. But moving forward, we can reject that outright.

Again, burying the forming of the picture, which is arguably a very distinct stage in the algorithmic chain, under the idea of output is foundationally problematic. These are two different problems. One involves creating a 2D light array of the pictorial depiction, and one involves the complexities of presentation.

Except now we’ve recreated the exact same problem; we’ve pushed some incorrect idea of how to form a picture onto the authorship. Sure, it might be argued that we can gain some degree of predictability and “control”, but that doesn’t answer the far harder question as to how and why we’ve made a mistake in arriving at the salmon fire in the first place!

I can state with a degree of cautious confidence that it is incredibly challenging to correct the pink in pictures, because the actual mechanisms are wrong foundationally. For example, try to “correct” the picture forming algorithm’s output of “pink-ness” in the context of a tint. Now it’s not pink at all, but rather some other local gamut entirely.
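A toy numeric sketch of why a tint can’t undo this (the per-channel tonescale below is a generic stand-in for illustration, not the ACES algorithm, and the colour values are made up): a per-channel curve skews the R:G:B ratios of the same chromaticity differently at different exposures, so no single global tint or per-channel gain can restore the colour everywhere at once.

```python
def per_channel_tonescale(rgb):
    """Toy Reinhard-style curve applied independently per channel."""
    return tuple(c / (c + 1.0) for c in rgb)

def ratio_rg(rgb):
    return rgb[0] / rgb[1]

fire = (2.0, 1.0, 0.1)                    # made-up scene-linear colour
fire_bright = tuple(4 * c for c in fire)  # same chromaticity, +2 stops

low = per_channel_tonescale(fire)
high = per_channel_tonescale(fire_bright)

# Both inputs share R:G = 2.0, but the formed picture does not:
# the skew depends on exposure, so one corrective tint (a global
# per-channel gain) that fixes `low` necessarily breaks `high`.
print(ratio_rg(low), ratio_rg(high))
```

The exposure-dependence of the skew is the point: any “fix” applied downstream is only correct at one luminance level, which matches the observation that the correction lands in some other local gamut entirely.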

Funny how I started this thread and completely disappeared from it. Never thought I would launch such a discussion. :rofl:

:laughing: @chuckyboilo Indeed, there is a tendency here to turn a simple post into a super-thread, but it is great to see the continued discussion.

I think most people who have been around here understand that it’s not possible to design a single transform that is ideal for every use case or is universally preferred. Everyone has an opinion about what constitutes a “good picture”, and while these discussions get messy at times, they are also informative.

All of the examples shared here (and elsewhere) are helpful in identifying shortcomings of, or gaps in, the current system and in better understanding what behaviors different segments of the user base are looking for.

Early on in the ACES 2.0 Output Transforms work, the group spent a significant amount of time exploring different rendering approaches and design directions for a range of use cases. At that stage, the TAC directed the working group to focus its efforts on producing a single transform that addressed the “top 10 complaints about ACES 1”. While the result of that effort involved compromises, the output of the group did objectively improve on nearly all of the issues with v1 (acknowledging, of course, that this came at the cost of increased algorithmic complexity).

That said, “improve” is a loaded term, and it was always expected that not everyone would perceive the changes in 2.0 as “improvements”. Even with the ongoing conversations and theory around “picture formation”, the bottom line is that at some point we still need to actually produce images.

With ACES now operating under ASWF, the new TSC wants to establish clearer milestones and determine the future direction of the project in a way that makes it genuinely useful for those who want to continue using it, or even adopt it for the first time. That could take many forms, including a “default” LMT, a suite of “creative” LMTs, and potentially alternate rendering approaches.

Within the ACES TSC, we’ve been actively discussing the role of LMTs and alternate looks, particularly in response to needs raised by DCC users and others who may not routinely go through a traditional color grading step. The scope of this specific initiative is intentionally limited to one or two “technical” look transforms with clearly defined objectives. I’m hoping to share progress on that work very soon, and I will be adding a Look Transform category on the forum to provide a more focused space for ongoing discussions about Look Transforms and their capabilities and/or limitations.

This work will almost certainly evolve into an LMT working group, where the substantial exploration and work already contributed by @ChrisBrejon and @priikone will prove extremely valuable.
