Rec.2100 HLG to ACES 2.0 Rec.709 rev060 ODT

The feel through ACES2 is a bit different compared to older transforms like ACES1 (or ARRI K1S1); it's more similar to ARRI Reveal. So there's a bit of a learning curve, I'm sure. Here's a crappy grade of mine. I didn't find it difficult to skew it towards any color I wanted.

Thanks @priikone

I managed to get a similar result to yours, but the warm golden sunset I was looking for is quite hard to reach without digging into secondaries. I struggle with the same thing under ARRI Reveal, to be honest.

Here is a different example from a RED RAW shot, not graded but with different DRTs.

ACES 1.3 Rec.709

ACES 2.0 rc1 Rec.709

RWG_Log3G10 to REC709_BT1886 with LOW_CONTRAST and R_1_Hard size_33 v1.13.cube

RED to DWG to DaVinci Rec.709

RED to DWG to JP 2499

This stock shot can be downloaded from Artgrid if anybody else wants to test it.


I think we are so accustomed to a display rendering skewing red towards yellow and blue towards cyan that we expect them to behave like that, and we usually don't take it into account in our creative decisions. From an ideological perspective, I think it's desirable for skews not to be part of a DRT but rather of the look development stage. The only tricky part is that maybe not all productions have the time, room, knowledge or tools to produce these.

I don't believe offering parameterization on the DRT itself, like JP2499 or AgX, is practical for ACES, but it could be quite cool if the group engineered a DCTL intended as an LMT to introduce such a skew. Resolve in particular just isn't the prettiest when it comes to scene-referred grading tools, so it does indeed feel awkward using tools like the HDR wheels or the Color Warper. And such adjustments would be really shot-specific and not really implementable as a global look.
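
Just to make the kind of skew I mean concrete, here's a rough numpy sketch of the math (not a DCTL, and not anything official from the group); the function name, the Gaussian weighting and the default angles are all made up for illustration:

```python
import numpy as np

def hue_skew_lmt(rgb, centre_deg=0.0, amount_deg=10.0, width_deg=45.0):
    """Rotate hues near centre_deg by amount_deg around the achromatic axis
    of a scene-linear working space. Names and defaults are illustrative."""
    rgb = np.asarray(rgb, dtype=np.float64)
    # Orthonormal basis for the plane perpendicular to the achromatic (1,1,1) axis.
    u = np.array([1.0, -0.5, -0.5]); u /= np.linalg.norm(u)
    v = np.array([0.0, 1.0, -1.0]); v /= np.linalg.norm(v)
    ach = rgb.mean(axis=-1, keepdims=True)          # crude achromatic component
    p, q = (rgb - ach) @ u, (rgb - ach) @ v         # chroma-plane coordinates
    hue = np.degrees(np.arctan2(q, p))              # red sits near 0 deg, yellow near +60
    # Gaussian weight so only hues near the chosen centre get rotated.
    d = (hue - centre_deg + 180.0) % 360.0 - 180.0
    theta = np.radians(amount_deg) * np.exp(-0.5 * (d / width_deg) ** 2)
    p2 = p * np.cos(theta) - q * np.sin(theta)
    q2 = p * np.sin(theta) + q * np.cos(theta)
    return ach + p2[..., None] * u + q2[..., None] * v

# Positive amount_deg nudges reds towards yellow, roughly the "expected" skew
# discussed above.
print(hue_skew_lmt([0.5, 0.1, 0.1]))
```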

I will add some new LMTs for ACES 2.0 to the priikone/aces-looks repo on GitHub (ACESLooks, a set of Look Modification Transforms (LMTs) for ACES1 and ACES2). One of them is going to be an ACES1 colorimetry and look emulation, if someone really wants to have that. Here, for example, is @thomasberglund's graded image through ACES2 using ACES1 colorimetry:

ACES 2.0 with C-ACES1 Look LMT:


ACES 2.0 clean:

And that will look pretty much exactly like that in Rec.2100 as well…


What is the spec of the system you are running Resolve on? I'm wondering if my DCTL is running incorrectly on your GPU. I'm on an M1 Ultra Mac Studio.

Nick, you seem to be outputting ACES1, not ACES2. I’ve checked this as well and Thomas’s output is correct.

Well spotted!

I was too quick to just reopen the project and screenshot. I forgot I had looked at ACES 1 for comparison.

It does have a more pinkish hue.

But @thomasberglund's screenshot looked much darker:

But I now realise that was before the Hellwig library was installed.

Apologies if I confused things by also posting a screenshot with the wrong OT earlier. It seems we are all seeing the same thing.

I am not a colourist, so can’t really comment on how easy it is to grade under. But I find the Color Warper an easy way to nudge the hue to where I want it.

I decided to play with matching the sun between the ACES 1.3 SDR grade and the ungraded still running through ACES 2 RC1.

Focusing purely on trying to match the area immediately surrounding the sun (not the water or the overall grade/contrast), I first used npeason's Tetra DCTL to nudge hues around, and was able to get the general sky fairly close with just a minute of tweaking, but not the area around the sun, which remained washed out. I believe that's because of the more aggressive path-to-white in ACES 2. The trick to get around this was pulling down "highlights" with the Log wheels, so the sun wasn't being pushed into the DRT as aggressively. This restored the orange hue and also allowed me to nudge it around. It might not give the results you want when switching to HDR, however, but I'm unable to test that at the moment.

I could have kept tweaking to get it closer, but gave up after a minute or two, heh. I also only did this with the ungraded sample; in retrospect it would perhaps have been more useful to tweak your graded version, but oh well. Hope it's helpful.


Without using Log wheels, the area around the sun washes out due to the path-to-white in ACES 2.

Performed in DWG/DI space
Highlights Wheel
Tetra

This is the crucial point of departure and why I use the term “Picture Formation”.

The open domain colourimetry is nothing more than an ingredient in the broader recipe. All of the prevailing evidence is that colour is not about discrete scalar magnitudes of stimuli, but rather the cognition of the articulation.

The use of a highly warped model of stimuli wattage deformations, as within ACES 2.0, is one pictorial formation approach, and it leads to a pictorial depiction that is encoded as an implicit distal stimulus. It's these relationships that lead to our cognition and inferential computation of the pictorial depiction's "meaning".

Specifically in this case, the purity dimension is well out of whack due to the warps within the pictorial formation recipe. Given that few folks, if any, have been paying attention to the distribution along the purity dimension, no test bed has been identified. I would encourage people to try "transparent" pictorial depictions; anything that depicts smoke, fog, milk glass, translucent objects, etc. will suffice. There are more than a few folks here who are able.

TL;DR: The idea that all creative intentions can be achieved “under” the pictorial formation is false, as the purity dimension is foundational and crucial in the cognition of form and proximal placement in pictorial depictions. This purity dimension does not exist in any form within the open domain data fed to a pictorial formation recipe, and is instead created and integrated at the stage of the pictorial formation.

Is the “pictorial formation” not the combination of the fixed “recipe” and the creative choices made by the colourist? Nobody is claiming that creative intent can be fully realised by simply feeding ACES scene data unmodified into an Output Transform.

If a colourist makes modifications to scene data while viewing through a display transform, then the output of that display transform is by definition the creative intent if it was created by the colourist and signed off by the director.

I'm speaking more specifically to the nuts and bolts of the stimuli that end up in the pictorial depiction.

If we think about the wattages of stimuli in the open domain working space cauldron, we adjust the exposure by gaining those signals. All well and good. But consider a well formed pictorial depiction exposure for a moment.

There is a somewhat beautiful irony that folks can’t quite even “see” pictorial depiction exposure as a completely different mathematical operation. It’s not a gain. It’s an offset.

The key takeaway is that this offset (or lack thereof, depending on the pictorial formation recipe) is created during the pictorial formation step. It literally does not exist in the open domain colourimetry. This is an incredibly overlooked and vastly salient point.
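
To put the gain-versus-offset distinction into numbers, here is a toy sketch of my own (not anything from the ACES code, and using a crude purity proxy): a gain on an open-domain triplet leaves the ratios between channels, and hence the purity, untouched, whereas an offset towards the formed picture's achromatic centroid attenuates it.

```python
import numpy as np

rgb = np.array([0.09, 0.18, 0.36])      # an open-domain (scene-linear) triplet
white = np.array([1.0, 1.0, 1.0])       # stand-in for the achromatic centroid

def purity(c):
    # Crude purity proxy: relative spread of the channels.
    return (c.max() - c.min()) / c.max()

# "Exposure" as a gain in the open domain: purity is unchanged.
print(purity(rgb), purity(rgb * 2.0 ** 3))      # identical values

# "Pictorial exposure" as an offset towards the achromatic centroid: purity falls.
print(purity(rgb), purity(rgb + 0.5 * white))   # second value is lower
```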

I realize it is common vernacular to say “scene” but we really need to stop.

We also really need to stop saying “display”. These are all terms that come from Giorgianni and Kodak, and I think we are at a point where we can conclusively say that the totality of this model is impoverished and problematic.

While a Maya Deren or Ansel Adams picture would quickly disprove the idea that the pictorial depiction is a simulacrum of the stimuli in front of the camera, we can focus in on the colourimetric purity dimension to disprove Giorgianni’s claim.

Giorgianni's entire model emerges as problematic in that his theory is predicated on a seductive, and absolutely false, assumption: that an idealized pictorial depiction is a conveyance of the stimuli in front of a camera. In the modern era, this equates with the idea that the "idealized" pictorial depiction is the post-quantal-catch transformed colourimetric values off the sensor. To wit, Giorgianni fails to identify the critical importance of the colourimetric purity dimension, and the realization that the creation thereof occurs during the pictorial formation stage.

Colour is cognitive inference and decomposition, not stimuli. That means that how we parse a pictorial depiction in terms of forms and proximity, and ascribe colour qualia to these decomposed forms, hinges on this pictorial formation recipe.

More specifically, the offset dimension outlined above, that again is not present in the open domain colourimetry, is foundationally critical to our decomposition of form.

Think about this pictorial depiction as it would exist in open domain colourimetry:

In terms of the open domain wattages, there could be literally no distinction between each of the seven pie wheels as a series. If we adjust the open domain "exposure" for increments 2 through 7, we can expect to arrive at something loosely akin to what is depicted here.

Now watch what happens when we perturb the spatiotemporal articulation of the colourimetric purity.

And if we perturb the colourimetric purity dimension further?

The salient point in these demonstrations is that "pictorial exposure" has nothing to do with "cone bleaching", which, as best I can tell, is a myth as a proxy description of pictorial exposure. We can see how the choice of pictorial formation can lead to perturbations in the colourimetric purity dimension, and as such, our cognitive decomposition of form will trend in a different direction depending on which pictorial formation is chosen. For example, in the third demonstration, our cognition under one decomposition may ascribe a colour qualia of "pinkish" to the perturbed pie slice, and decompose it to a state of "over" or "through" the form of the "haze" or the remainder of the "pie wheel". In the second perturbation example, the 7 slot may lead to a cognitive assignment of "under" a form of "haze", but shift the colour cognition ascribed to the remainder of the wheel.

The ACES 2.0 model is in fact a massaged luminance mapper, and as a result, as a pictorial formation recipe, it falls short on the cognitive decomposition front along pictorial exposure. Luminance mappers are broken at their core due to the disconnect between the chrominance signals and the luminance. This is verifiable and testable, for those keen, via the cases I've outlined.

Further, suggesting that we can apply all required creative manipulations to that open domain colourimetry is rather unfortunate. To demonstrate: if we emulate a filter "over" the open domain colourimetry, introducing a biased gain (e.g. a "blue filter"), and gradually increase the open domain exposure by way of uniform gaining after the biased gain is applied, we will find that the pictorial exposure must attenuate the purity to the global achromatic centroid in order for form decomposition to emerge according to the implicit authorial intention. For example, here are a few "exposure sweeps" of a colourimetrically pure "illumination" as formed through creative chemical film.

If we were to carry on upward in this offset / colourimetric purity dimension, we would eventually (and correctly) hit minimum density in the print, aka the global achromatic centroid value.
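
For those who want to poke at the filter-over-gain thought experiment numerically, here is a rough sketch (mine, not the ACES code): apply a biased "blue filter" gain, then sweep a uniform gain over several stops. The channel ratios never drift towards the achromatic centroid under gain alone, so any attenuation of purity, as seen in the film sweeps above, has to be created by the formation recipe.

```python
import numpy as np

rgb = np.array([0.18, 0.18, 0.18])          # a neutral open-domain patch
blue_filter = np.array([0.4, 0.6, 1.0])     # biased gain, a crude "blue filter"
filtered = rgb * blue_filter

for stops in range(0, 7):
    swept = filtered * 2.0 ** stops         # uniform gain, i.e. open-domain "exposure"
    print(stops, swept / swept.sum())       # normalised ratios: identical at every stop
```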

The Kodak-Giorgianni model privileges the open domain colourimetry as "the picture", as indicated through the language of "scene" versus "display". A "display" is nothing more than "a render" of this "ground truth". This is, again, an impoverished lens through which to understand pictorial depiction, and it is flatly wrong. We cognize and decompose from the distal stimuli of the pictorial depiction. There is no other text.

Judd, Plaza, and Balcom made this exact logical error back in 1950 [1], and none other than David MacAdam smashed it in 1951 [2].

The range of face colors in the portraits was entirely separate from the range of natural face colors, and the separation of the centers of those ranges is approximately the same as indicated in Fig. 11. Therefore, it seems to be not only quixotic but fallacious to assume exact reproduction to be the norm, or to measure degradations from that basis.

If I were being optimistic, I would suggest that MacAdam was standing on the cusp of discovering the relationship between colour and cognitive decomposition of form.

TL;DR:

  1. The purity dimension spans the entire stimuli volume of a pictorial depiction.
  2. The purity dimension is utterly critical in cognitive decomposition of form and proximal relationships.
  3. The purity dimension along pictorial exposure is not present in the open domain colourimetry.
  4. The purity dimension is created within the pictorial formation recipe.
  5. It is impossible to shift the “white balance” from under a pictorial formation recipe.

These are just a few of the many implications of demarcating “the formed picture” in pre and post states. Creative manipulations must occur on either side due to the cognitive implications and creative needs.

[1] Judd, Deane B., Lorenzo Plaza, and Margaret M. Balcom. "The Present Status of Color Television; a Report by the Senate Advisory Committee on Color Television." Edited by N. Smith, E. U. Condon, S. L. Bailey, W. L. Everitt, and D. G. Fink. Proceedings of the IRE 38, no. 9 (September 1950): 980–1002.

[2] MacAdam, D. L. "Quality of Color Reproduction." Proceedings of the IRE 39, no. 5 (May 1951): 468–85.


Here's a little look at the Planckian locus through ACES1, ACES2 and ARRI Reveal. The input image is the Planckian locus Nuke node aces-looks/util/LMT_Tools/Nuke/PlanckianLocus.nk at master · priikone/aces-looks · GitHub, or this EXR: PlanckianLocus_crop_ap0_linear.exr.txt (88.1 KB). The diagrams below show display-linear output. Images are sRGB or Rec.709 output.
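
The linked Nuke node / EXR is what I actually used, but for anyone who would rather generate their own test values, a rough sketch with the colour-science Python package (assuming it is installed) could look something like this; from the xy chromaticities one can then convert to AP0 linear however suits:

```python
import numpy as np
import colour

shape = colour.SpectralShape(360, 780, 5)
for cct in np.arange(1500, 10500, 500):
    sd = colour.sd_blackbody(cct, shape)      # Planck's law spectral distribution
    XYZ = colour.sd_to_XYZ(sd)                # integrate against the CIE 1931 observer
    x, y = colour.XYZ_to_xy(XYZ)
    print(f"{cct:5.0f} K  x={x:.4f}  y={y:.4f}")
```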

ACES1 (sRGB)

Not much to say, it just clips and skews. And just one stop over makes it entirely yellow:

ARRI Reveal (Rec.709)

ACES2 (sRGB)

With an LMT one can make it track the overall arc of the Planckian locus better. This small adjustment is enough to make fire, sunsets, etc. look more accurate across different exposures without introducing other undesired skews or changes in skin tones.

ACES2 with LMT:

ACES2:


ACES2 with LMT:
