ACES2065-1 to ACEScg negative value issue question?

Agree. My issue with this, though, is that we are simply deferring the problem further down the road, often placing greater labour on a colourist, who is left facing an unsolvable problem after an unfortunate series of scientism choices.

A key point with the “larger” ( :roll_eyes:) stimuli encoding is that we can at least draw a line to a correlated response; all wattage must be positive.

The deeper problem is that we can create problematic pictures with BT.709. Or more specifically, no one has solved the problem with forming pictorial depictions within BT.709 yet. Anyone who claims so, is speaking from a disingenuous vantage.

Hence why this cup-and-ball routine of non-solutions needs to be arrested; they are clearly non-solutions.

Your Additive Video is a huge step in the right direction, but we need to further flesh out that problem surface. We can:

  1. Create problematic gradient stimuli relationships that nonetheless remain “straight lines” when projected into the CIE xy colourimetric stimuli projection.
  2. Create cognitively acceptable gradient stimuli relationships that nonetheless are “curved lines” when projected into the CIE xy colourimetric stimuli projection (a numeric sketch of both cases follows this list).
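Both cases are easy to poke at numerically. A minimal sketch in Python/numpy, with two invented stimuli: an additive wattage blend stays on the straight chord between the two chromaticities in the CIE xy projection, while the same blend performed on gamma-encoded values traces a curved path:

```python
import numpy as np

# Linear BT.709/sRGB RGB -> XYZ (D65), per IEC 61966-2-1.
M = np.array([[0.4124, 0.3576, 0.1805],
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]])

def xy(rgb_linear):
    """Project a linear-light RGB stimulus to CIE xy chromaticity."""
    X, Y, Z = M @ rgb_linear
    return X / (X + Y + Z), Y / (X + Y + Z)

a = np.array([0.90, 0.10, 0.05])  # warm stimulus (made up), linear light
b = np.array([0.05, 0.10, 0.90])  # cool stimulus (made up), linear light

for t in np.linspace(0.0, 1.0, 5):
    additive = (1 - t) * a + t * b                               # wattage blend
    encoded = ((1 - t) * a**(1 / 2.2) + t * b**(1 / 2.2))**2.2   # blend of encoded values
    ax, ay = xy(additive)
    ex, ey = xy(encoded)
    print(f"t={t:.2f}  additive xy=({ax:.4f}, {ay:.4f})  encoded xy=({ex:.4f}, {ey:.4f})")
```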

Offsetting in uniform wattages does seem to resist breakage, hence why your suggestion makes a reasonable entry point.

If we can at least provide a conjecture around transparency cognition, even if it is not all-encompassing as a “final” solution, we can potentially supply a much firmer foundation than all of the prior number-fornicating dead ends.


I work in the camera vendor’s (well, not sensor-native, but at least something) gamut for as long as possible. The fact that this moves the problem to a LUT/DRT doesn’t help, because those have the same issues. They are not always visible in the default state, but I haven’t yet seen a clear filmic emulation LUT (or LMT or whatever) with rich, “dense” (we all know this is a bad term, but everybody knows what it means) colours that can take wide-gamut input and map it smoothly to display without clipping. I’m not saying this is impossible, but I guess it is hard enough if even the most expensive LUT/software packages fail at it. Not to mention per-project show LUTs, which are usually far worse, often just terrible (including on very successful shows).
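To make “map it smoothly to display without clipping” concrete, here is a toy sketch of the shape of solution I mean, loosely in the spirit of the ACES Reference Gamut Compression; the threshold and the tanh roll-off are illustrative choices of mine, not any shipping LUT’s maths:

```python
import numpy as np

def soft_compress(rgb, threshold=0.8):
    """Smoothly compress per-channel distance from the achromatic axis so that
    out-of-gamut (negative) components roll in without a hard clip."""
    ach = float(np.max(rgb))            # achromatic reference (max channel)
    if ach <= 0.0:
        return rgb
    d = (ach - rgb) / ach               # 0 on the axis, > 1 when a channel is negative
    over = np.maximum(d - threshold, 0.0)
    # tanh roll-off: distances in [threshold, inf) land smoothly in [threshold, 1)
    d_c = threshold + (1.0 - threshold) * np.tanh(over / (1.0 - threshold))
    d_c = np.where(d > threshold, d_c, d)
    return ach - d_c * ach

print(soft_compress(np.array([1.0, 0.2, -0.3])))  # negative blue pulled just inside zero
```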

This is an interesting take on the matter.

Could you precisely describe what should be observed in these two images?

Is this “transparency mechanism” you are describing related to the polarity topic?

Thanks!


I have been forwarded these examples that illustrate the “transparency cognition” mechanism. I found them interesting to look at, so I thought I would share them here.



@sdyer We are digressing from the original question from @vfxjimmy. How do you feel about splitting the thread into a new one about “transparency cognition” or something like that? Thanks!


EDIT: Since pictures are not replicated in replies, I should make clear that I’m replying to Troy’s post with two versions of the lightsaber picture and what looks like a comped-in transparent shield. I still have to process the other posts in this thread.

Wooo. I’m not sure what I should expect from this image, since the transparent shield looks like it was clearly comped in afterwards through simple alpha blending; however, I’m going to give it a shot. First things first: the second picture is clearly broken. It exhibits the weird saturation and brightness cut in the blues, the same cut that forced me to request more time from my current employer to refine the ACES 2 code I had initially proposed to integrate. The process of figuring out a solution was long and complicated. It even involved me carving “ideal” tone and gamut mapping curves in Resolve using nothing but Hue vs Hue, Hue vs Sat, and Hue vs Lum, in order to properly understand the final result I was looking for.

In the second picture, the aforementioned cut can be seen on the floor (notice that the gradient from front to back is not uniform at all) and on the edge of the background lights (notice the uncanny two- or three-pixel border of really dark blue). It also happens with red, to a lesser degree. Moreover, the contrast is so high that the lightsabers and the background lights look like they pierce the comped shield in a very weird way.

The first picture exhibits none of these artifacts, although the blue is a bit too purplish for my personal taste, and I like the image better when Mery comes out in a more saturated shade of red; but that’s a bias I have from almost never working in sRGB 100 nits mode anymore unless I really have to. I’ve been working almost exclusively in ST.2084 1000+ nits with Rec.2020 limiting for months and, oh God, I don’t recommend trying ACES 2 ST.2084 4000 nits Rec.2020 limiting unless you are prepared to put in the work to fix it (@ChrisBrejon’s sRGB balls image is enough to break it, especially if the raw PQ code points of the output are examined for sanity checking).
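For anyone who wants to repeat that sanity check: the ST.2084 inverse EOTF is compact enough to inline. A minimal sketch in Python/numpy; the constants come straight from the SMPTE spec, and the 10-bit mapping assumes standard narrow-range video levels:

```python
import numpy as np

# SMPTE ST 2084 (PQ) constants.
M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_encode(nits):
    """Inverse EOTF of ST 2084: absolute luminance (cd/m^2) -> normalized PQ."""
    y = np.clip(np.asarray(nits, dtype=float) / 10000.0, 0.0, 1.0)
    return ((C1 + C2 * y**M1) / (1.0 + C3 * y**M1)) ** M2

for nits in (0.005, 100.0, 1000.0, 4000.0, 10000.0):
    n = pq_encode(nits)
    print(f"{nits:9.3f} nits -> PQ {n:.4f} -> 10-bit narrow-range code {round(n * 876 + 64)}")
```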

Case in point, we should be examining how we cognitively arrive at probabilities of reading a pictorial depiction along transparency mechanisms. I believe this is the axis that also embodies pictorial exposure.

EDIT: Added the quote to make clear that I was moving on to another point.

However, is it just a case of transparency that causes the second image to look broken? I believe it is not. As mentioned earlier, the artifact is visible in Chris’ sRGB balls and Rec.2020 circles images, especially if you know what you’re looking for. The comped-in transparency just creates another case where the artifact can easily show.

I’m trying to look for transparent things in these three pictures, and I have a hard time finding what I would call a transparent surface. There are lots of emissive glowy bits and particles that would either be an extra channel in the deferred rendering, or forward rendered and then alpha-blended or additive-blended, but that is transparent in the rendering sense, not the cognition sense.

I believe it is fair to say yes; the cognitive result trends toward a probability of reading a decomposed boundary condition, instead of what we expect as “gloss”, “sheen”, and other air-material-like decomposition. That is, instead of cognitively decomposing the region into a continuous form, the local gradient is too steep, and we track toward a higher probability of decomposing the region into a separate form. This is the exact same error that the (misattributed) subject of this thread refers to by way of negative RGB tristimuli; a run along the tristimuli volume hull creates a maximal excitation purity in the discrete sample, and the resulting local spatial gradients stand a higher chance of trending toward a dissonant cognitive decomposition.

It is an incredibly easy one to miss, as it would seem our visual cognition has evolved to “see through” these forms and instead see the base form colour in the decomposition.

If we look at the totality of the selections, the cognitive computations track toward a decomposition of the more obvious forms, with an additional decomposed form of the “air-material”. That is, a “haze”, or “smoke”, or “glare”, etc. Typically what we’d associate with a strictly additive contribution of energy. I believe it is this additional decomposed form @ChrisBrejon was referring to? Perhaps he can confirm it?

This is plausibly because all “Colour Appearance Models”, and their siblings the “Uniform Colour Spaces”, ignore the baseline gradient-domain neurophysiological signals; they deform the stimuli energy relationships.
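To put a number on the thread’s (misattributed) subject: the negative tristimuli are a single matrix multiply away. A minimal sketch in Python/numpy; the matrix is the standard AP0_2_AP1_MAT from the ACES reference implementation, while the “camera-ish cyan” sample is invented for illustration:

```python
import numpy as np

# Standard AP0 (ACES2065-1) -> AP1 (ACEScg) matrix, from the ACES reference CTL.
AP0_TO_AP1 = np.array([
    [ 1.4514393161, -0.2365107469, -0.2149285693],
    [-0.0765537734,  1.1762296998, -0.0996759264],
    [ 0.0083161484, -0.0060324498,  0.9977163014],
])

for name, ap0 in [("pure AP0 blue  ", [0.0, 0.0, 1.0]),
                  ("pure AP0 red   ", [1.0, 0.0, 0.0]),
                  ("camera-ish cyan", [0.0, 0.4, 0.6])]:
    ap1 = AP0_TO_AP1 @ np.array(ap0)
    print(f"{name}: AP0 {ap0} -> ACEScg {np.round(ap1, 4)}")
```

High-purity samples near the AP0 hull land outside the smaller AP1 hull, so one or two channels go negative, and the steep local gradients described above follow.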

Apologies, I forgot to reply. I am not sure this is the best place to do it, but here is my tuppence.

The screenshots that I shared were examples of cool/achromatic values “punching through” the warm volumetric/glow/wash.

I have run many tests with haze/atmosphere on the current project I am working on, and it is a fascinating facet of image formation. I guess it is similar to the glass of milk test you were mentioning.

Even if I don’t understand all the details, I guess all of this is related to the gain/offset mechanics that you have been hammering on.

I think we could build a simple CG scene with some atmosphere of different colours and several lights and see how the “rate of change” affects our “sensation” of over/under.
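While waiting for a proper CG scene, here is a 1D toy of that test; the gradient shape, the 0.18 haze, and the 1.5 gain are all made-up numbers, purely to show that a uniform offset leaves the local rate of change untouched while a gain scales it:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 6)
base = x**2                    # some non-uniform linear-light gradient
hazed = base + 0.18            # additive "atmosphere": uniform wattage offset
gained = base * 1.5            # multiplicative exposure-style gain

for name, g in (("base", base), ("offset", hazed), ("gain", gained)):
    print(f"{name:6s} values {np.round(g, 3)}  rate of change {np.round(np.diff(g), 3)}")
```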

A few more examples:

And one shot on film:

I hope some of this mumbo-jumbo makes sense; otherwise, sorry about the noise! :wink:

Chris

I’m curious to hear more about your experiences… the ACES 2 blues are troubling. If I’m understanding correctly, you went through a rather painful process of carefully and manually divining what one might consider finely targeted, output-specific “sweeteners”… which, I guess, could technically be represented as per-output LMTs that invert and replace the vanilla fixed ACES 2 output transforms, and which have the effect of selectively “smoothing out” the saturation-vs-lightness rate for a slice of hues, to compensate for the not-totally-desirable “clumps” seen in the Blue Bar ceiling, etc.?

I went through something even more painful. I first built that pseudo-ideal DRT in pure Resolve nodes to get an idea of what I would like the images to look like, then I debugged the ACES 2 transform by going back through earlier prototypes to find the last known good version. Once I had that, I started adding back pieces from later versions to create a sweet-spot DRT for our company’s internal usage. Here are some details I can give without running afoul of Netease’s sharing policies:

  • The iterative gamut compressor in the v39 prototype doesn’t produce any artifacts, no matter whether the limiting primaries are Rec.709, P3-D65, or Rec.2020 (a generic sketch of the bisection idea follows this list).
  • It is the same iterative gamut compressor as was used for the ZCAM prototype (v13 and earlier).
  • All approximate gamut compressors from v35 onwards produce artifacts.
  • I haven’t tested versions v14 to v34 so I can’t speak about them.
  • The CAT data and the chroma compression space from the final version produce better results when combined with the iterative gamut compressor.
  • The chroma compression algorithm in v39 didn’t have the yellow desaturation issue that is noticeable in the final version. YMMV in whether that is desirable or not. Choose whichever chroma compression algorithm you like best.
  • There is a way to modify the implementation of AP1 compression mode in v39 such that it gives acceptable results no matter the choice of limiting and output primaries. You will have to figure that out for yourself though.
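To clarify what “iterative” means here, since I can’t share the actual code: the general shape is a bisection search along the line toward the achromatic axis. A minimal generic sketch, assuming a unit-cube RGB gamut test and an arbitrary 0.18 achromatic anchor (placeholders of mine, not the ACES 2 maths):

```python
import numpy as np

def in_gamut(rgb):
    """Placeholder limiting-gamut test: the unit RGB cube."""
    return bool(np.all(rgb >= 0.0) and np.all(rgb <= 1.0))

def project_to_boundary(rgb, achromatic, iters=32):
    """Bisection: find t in [0, 1] where lerp(achromatic, rgb, t) exits the gamut."""
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        candidate = achromatic + mid * (rgb - achromatic)
        if in_gamut(candidate):
            lo = mid
        else:
            hi = mid
    return achromatic + lo * (rgb - achromatic)

ach = np.array([0.18, 0.18, 0.18])
out_of_gamut = np.array([0.9, -0.2, 0.1])
print(project_to_boundary(out_of_gamut, ach))  # boundary point, G channel ~0
```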

Of course, we’re making extra additions to the DRT on top of what I’ve described, because we want to make the HDR 1000 nits version the reference image and have the SDR version visually match it, instead of limiting the saturation of the HDR version to the limitations of the SDR version, as we’ve seen the final version do to our images. That’s something I can’t share, though.

Hope those hints help,
Jean-Michel
