ACES 2.0 CAM DRT Development

That was the idea with this rendering in the first place.
It should work with nearly every display rendering transform. No negative values; all set up with linear-sRGB primaries; the main image content stays in a range roughly between 0 and 2, and the only high values can be found in the specular reflection of the chrome ball (approx. 9000).

Just finished grading a real project using the ACES 2 v33 candidate. It was a commercial with lots of colorful clothing (it’s a brand that sells designer clothing). One thing I really liked is the presence of gamut compression in the darkest shadows.


Comparing SDR and HDR for v034:
Checked both SDR and HDR on both computers A & B described last week.

There looks to be pretty good consistency.
In the images with the blues of sky and frame 0015 there may be slightly reduced saturation, making them look a bit grayish, but nowhere near what I saw in v033. As stated at the last meeting, this may be from Resolve working with the LUT; in any case, v034 is much improved.
Overall v034 looks to have the best fit thus far comparing SDR and HDR.

As to using the inverses for v034:
Overall the inverses seemed to work best when the colors are not far from the Rec.709 space.
SDR inverses worked better than HDR.
HDR inverses failed in many frames, especially the CGI.

As a possible point of interest, I found this presentation by Art Adams discussing ARRI’s REVEAL Color Science:

Especially take note of when he describes the skin tones under various lights and the quality of colors.

I mentioned in previous meetings that I was looking into my alternate approach to finding the gamut boundary for gamut compression, to make the gamut compression invert more precisely, and why that seemed to be causing a brightness shift in near-achromatic pixels.

I believe I have found the reason. Even when the colour is well within the threshold, so no gamut compression should take place, the code is calculating a value v which is the ratio of the distance of the colour from the J-axis to the distance of the boundary from the J-axis. It then compresses v, which will be unchanged if it is below the threshold parameter for the gamut compression (which it obviously will be for near-achromatic values). The final JMcompressed value is found as the proportion v along the line from the J-axis intersection to the gamut boundary. In theory for colours well within gamut this should all cancel out, and return the original JM value. However, for near-achromatic colours, the value of v is very small, and it appears to be processed in DCTL with insufficient precision to get back to the original value when multiplied again.

A simple 1D analogy:
    x_{in} = 1.1
    d = 100
    v = \frac{x_{in}}{d} = 0.011

But let’s say our system precision only allows us to store v to two decimal places, so it becomes 0.01.
The compression function does nothing to v because it’s below the threshold, so it remains 0.01.

    x_{out} = v \times d = 0.01 \times 100 = 1.0

So because of precision, dividing by d and then multiplying by d has the net effect of changing 1.1 into 1.0 rather than leaving it unchanged. The reason for this is not immediately obvious when you say you are working to two decimal place precision.

I therefore added an extra line to my DCTL implementation so it bypasses the compression entirely if v is below the compression threshold. This seems to have solved the problem, and is implemented in DRT_v31_709_alt.dctl in the latest commit to my repo.

I realise of course that this is still based on v31, so is behind @priikone’s latest experiments. As I have said previously, I will bring the DCTL up to date when things stabilise a bit more.

That’s similar to the old faithful

if (r == g && g == b)
    return unchangedValue;

but with an epsilon

I made a pull request for CAM DRT v035 for @alexfry, also available in my fork.

This version is identical to the v035 prototype version posted in Alternative compress mode for ACES2 CAM DRT - #7 by priikone.


  • Changes the Hellwig2022 achromatic response formula and the post-adaptation cone response formula to not include the 0.305 and 0.1 offsets, respectively. This improves the inverse with the current compression algorithm. See Alternative compress mode for ACES2 CAM DRT - #6 by priikone for more information.
  • Adds an Achromatic response slider to the GUI.
  • Changes primaries to have a slightly better blue SDR/HDR match.
  • Includes @nick’s fix to the gamut mapper. Pixels that used to come out as NaNs now come out as uncompressed.

LUT repo is updated with @priikone’s v035

Been doing some tyre kicking on the sRGB vs P3 clip/clamp issue.

Both the images below are encapsulated within a P3 container.

Nothing conclusive. It feels like we need to pump up the compression when we’re targeting a wider gamut, but it’s not obvious to me why. It looks like the same thing has been happening at least as far back as v028. It also looks much worse when we look at individual channels vs actual imagery.

Something we can talk through in the meeting more tomorrow.

I just tried v035 and get similar results for the solarization-like issue mentioned in these posts:


I did check where the cyan-green bottle ended up in P3 and Rec.709. The scene values are highly saturated. The only difference seemed to be that Rec.709 was the more compressed value, and so also the less saturated and darker one; P3 was the more saturated and brighter version of the color. 2D diagrams are below, but I looked at this in 3D too, and I think it was clear that the Rec.709 version was darker because it’s compressed more. But I guess, in this case, it’s supposed to be compressed more.




Input EXR in dropbox

So, following up on the discussion about brightness differences in sRGB vs P3 for highly saturated images, I’ve made some example images to help discuss the trade-offs.

The image we were looking at was the backlit panels in the Arri 35 bar image:

The image below is a sort of side-view JMh scope.
The lines on the far right represent a JMh ramp from 0→100 in J, with an M value of 118 (the intense green from the image above) and an h of 169 (again from the image above).

The lines projecting in from that border show the angle and the distance the values move as they get yanked into the gamut hull.

Red = Rec709
Green = P3
Blue = Rec2020

And with a sweep of h:

The thing to note here is that whilst the angles aren’t radically different, the extra distance the values have to move to reach the Rec.709 boundary produces a larger drop in J.

This is not totally unexpected. The original intention with this was that we wanted to trade some level of brightness to maintain more colour information. But this is a slightly subjective matter.

If we straighten the projection lines (by using a very large focus_distance value), we can better maintain brightness, but at the expense of colourfulness.


The effect of this is nothing on images that sit entirely within the target gamut, negligible on highly saturated reflective colours, and very noticeable on high-brightness, high-saturation colours like neon signs.

I’ve put a selection of images below to compare. It’s highly debatable which one is “better”, depending on the context.

The upsides of the linear (or near-linear) projection are greater brightness consistency between gamuts, and greater transmission of brightness ‘information’.

The downside is that once you’re above the cusp, we’re always sacrificing colour in favour of brightness.

Particularly with the SDR/HDR match, it’s highly debatable which one matters more for any given image.


Thanks Alex! Great examples.

From everything that I have read on this forum, I would go for brightness. “Tone/Tonality” seems much more respected in the images you posted. Clear win for me.

Thanks for the hard work!

I suppose an important question is: if the DRT takes approach A, can you achieve the result of B through grading? And vice versa.

If one is possible and the other isn’t, that might influence the choice.

My understanding is that the downloads of the images above are all SDR Rec.709 with two focus distances (3 & 1000). Is this correct?

If so, could downloads be provided in HDR: Rec.2100 ST2084 (Rec.709 sim) and Rec.2100 ST2084 (Rec.2100 540 nit & 1000 nit limited)?

Also, could the frame of the blue bar with the pool ball shadows be included?

I can maybe see advantages to both focus distances. Maybe an option could even be provided if grading cannot accomplish the range of results. However, getting a good match between HDR and SDR will be beneficial. I am using the iPad Pro M2, so I can get some nice comparisons with that display. I understand that image viewing is limited here, but downloads should provide full viewing potential; these downloads seem to be SDR Rec.709.

Thanks, and by the way even at this point the results are outstanding. Indeed there has been much talented thinking going into this.

I believe those colors will become impossible to reach. So then you can try to get a similar color by darkening the colors.

But I don’t believe the choice has to be between 100% lightness-preserving gamut mapping and the current one. For the SDR/HDR match the current one provides an obviously better appearance match, IMO. For an SDR appearance match between different gamuts, perhaps not. But that doesn’t mean we couldn’t achieve both, i.e. by trying to use the same projection angle (other than horizontal) for each gamut. I think that should be tested first.

Edit: I believe this should be easy to test with the Blink script by, for example, replacing the following line:

    float focusJ = lerp(JMcusp.x, midJ, cuspMidBlend);

with:

    float focusJ = midJ;

The rendering will of course change from the current one, but the projection angle should be the same regardless of the gamut (assuming the gamut cusp distance doesn’t affect the angle, which I believe was the whole point of the quadratic, so that it wouldn’t. Right, @nick?)

Isn’t that just the same as setting cuspMidBlend to 1.0?

Yes, the point being that the value can’t change with the gamut, or has to come to the same value (like maybe always using the cusp lightness of the Rec.2020 gamut).

Thanks again Alex for all the visualizations, these are crucial to understanding what is happening inside the model.

I am going to post two images just as a point for continued discussion.

These were made by the simplest means to show a simple point, and could be easily done on any images if someone was curious.

These are simply a 50/50 blend of P3 and 709 limited P3.

One version is kept in P3 and the other is converted to Rec.709 (2.4 gamma).

The only point worth making with this example is that, by a certain logic, the Rec.709 image has to clip if it is to be as “colorful” as P3, while P3 should be able to be just as colorful while retaining more detail.

That said, it does not mean that this logic applies if the tools available do not agree!

But what this example could suggest is that maybe there is an abstract/virtual middle cusp that clips 709 and is well inside P3/2100.

Pictures are not really needed to make that point, but these images do match very well: the 709 is a little brighter/more colorful, and the P3 is only a little dimmer but with all the detail/tone retained.

None of this changes all the work Alex, Nic and Pekka have been doing regarding focus distance and angles, but it could demonstrate what a target could look like and whether it is a good target.

I don’t know if ACES Central processes images when you upload them, but when I download both images it’s not just that they look identical. They ARE identical to all intents and purposes, and both are tagged as sRGB.

If they look identical, that is good (they should).

If they are tagged sRGB, that is bad.

Maybe I should tag both sRGB, or leave them untagged so they will look wrong in the browser but correct if you download?