ACES 2.0 CAM DRT Development

Derp, yeah I was swizzling that whilst looking at the inverse. Will update before I bake anything.
(edit: repo updated)


Taking a look at DRT_v30_709.dctl.
Got it working after fixing two errors and one warning.

Warning fixed on line 854: change 0.1 to 0.1f.

Similar naming errors fixed by dropping the s from degrees and radians on lines:
548, 550, 555, 557, 744, 822, 940

Hope this saves a few moments for those running at this point.
Thanks and looking forward to more as it comes.


Regarding the white point discussion last week:

I always think of the white point journey like this:

  • Scene White Point →
  • Scene-Referred Encoding White Point / Mastering White Point →
  • Encoding White Point

You see, I conceptually treat the scene-referred encoding white point and mastering white point as the same. But let’s start from the beginning.

Scene White Point

The scene white point is the white point the observer in the scene is adapted to. In our case, this is the white point you balance your camera to.
We typically (but not always) leave the balancing algorithm to the camera manufacturer.
So we can assume that a spectrally unselective object in the scene will land on R=G=B thanks to the camera’s white balance. And we can also assume that colours fall roughly into place based on a given scene-referred encoding white point / mastering white point.

Scene-Referred Encoding White Point / Mastering White Point

I like to define this next white point working backwards. The mastering white point is the white point of your mastering colour space. Your mastering colour space describes the actual capabilities of your mastering displays. It encompasses all colours your mastering display can show. Typically the mastering colour space is something like

  • Rec.1886/709 up to 100 nits for Video
  • P3* up to 48 nits for theatrical (leaving HDR cinema out of this discussion)
  • P3* above 600 nits for HDR TV

The mastering colour space can easily accommodate different white points by scaling its RGB primaries differently. For theatrical, we can have P3D55, P3D60, P3D65 etc… So I can dial in the mastering white point on my mastering display to my liking; this is why sometimes it is called Creative White Point.
It makes little sense to me to have a scene-referred white point which is different to the mastering white point, for several reasons:

Suppose a motion graphics artist creates a grey constant in a motion-graphic package by typing (r=g=b=a number) or a colourist fully desaturates an image until every pixel is (r=g=b=a number). In that case, the white point she sees is intuitively the white point of the data she works with. She does not care if we label the encoding D60 and the DRT D65sim. What counts is the mastering white point.

You could argue that the IDT target defines the scene-referred encoding white point. But the IDT optimisation assumes that the observer also sees the scene-referred white point; hence it should be equal to the mastering white point. Otherwise, the IDTs are not ideal.

Encoding White Point

That is the white point of the encoding colour space.
Sometimes the encoding white point is equal to the mastering white point; sometimes it is not. You must NEVER put data in the encoding colour space that lies outside the mastering colour space. Otherwise, you would produce colours the creatives have never seen and approved.
Very often, the encoding colour space is much larger than the mastering colour space (think of XYZ), but this is just because we are lazy in defining encoding colour spaces, so we always define the biggest one we can think of and move the actual challenges to someone else (who is typically not present in those meetings).


So I think (from a practical point of view) the mastering white point backpropagates the white point into the scene encoding, and the DRT should keep r=g=b on r=g=b.
You could argue that if that is the case, you must redo all the IDT matrices when you change the mastering white point. Yes, that is correct. We tried it with a few cameras where we knew the spectral sensitivities, but it is not worth the hassle for the amount of white point shifts we usually do.

Here is a video explaining some of that:

I hope this helps


I should have been more clear here. I mean, you literally scale the primaries of your display.
So if you define a P3D55 mastering projector, you calibrate the projector’s primaries so that 1,1,1 lands at D55 48 nits. That is conceptually the definition of mastering colour space.
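As a concrete illustration of “scaling the primaries”, here is a minimal Python sketch (standard colorimetry, not code from the DRT repo) that solves for the per-primary scale factors so that RGB = (1,1,1) lands exactly on the chosen mastering white. The D55 chromaticity below is an assumed textbook value; check your own reference.

```python
def xy_to_XYZ(x, y, Y=1.0):
    """CIE xyY chromaticity to XYZ tristimulus."""
    return [x * Y / y, Y, (1.0 - x - y) * Y / y]

def solve3(A, b):
    """Solve the 3x3 linear system A @ s = b via Cramer's rule."""
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(A)
    s = []
    for i in range(3):
        Ai = [row[:] for row in A]
        for r in range(3):
            Ai[r][i] = b[r]
        s.append(det(Ai) / d)
    return s

def npm(primaries_xy, white_xy):
    """Normalized primary matrix: each primary column is scaled so that
    RGB = (1,1,1) maps exactly to the white point's XYZ."""
    P = [xy_to_XYZ(x, y) for x, y in primaries_xy]
    A = [[P[c][r] for c in range(3)] for r in range(3)]  # columns = primaries
    S = solve3(A, xy_to_XYZ(*white_xy))                  # per-primary scales
    return [[S[c] * A[r][c] for c in range(3)] for r in range(3)]

# P3 primaries with an assumed D55 white chromaticity (0.3324, 0.3474):
M = npm([(0.680, 0.320), (0.265, 0.690), (0.150, 0.060)], (0.3324, 0.3474))
white = [sum(row) for row in M]  # XYZ that RGB = (1,1,1) maps to
```

Re-deriving this matrix for D60 or D65 is the “scaling” in question: the primaries stay put, and only the per-channel gains change.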

Of course, you could be lazy and not bother to change your projector’s calibration from P3D65 to P3D55, but then the projector you are using is not the mastering projector but the encoding projector. And in fact, the white would need to be scaled down to fit inside the encoding gamut (so you had better scale up the lamp/laser power).
Another issue with the lazy way would be that your output RGB scopes would always have that offset built in. So if you desaturated an image, the RGB parade would still show an offset. And that is not so nice for the colourist, because you’d expect your mastering white point to produce equal r=g=b values on the scopes.
I always advise calibrating the mastering display to the intended mastering colour space (including white, of course); all professional monitors and projectors can store presets for different colour spaces with different white points.


While working on the DCTL implementation, it occurred to me that the conversion from mono J to linear for the tone curve must be a simple function, which we are applying via a complex method (understandably, for development purposes).

I wrote a Python implementation of the conversion, and then tried to fit a curve to it. I hoped it would be a simple power curve, but although close, it isn’t. I then tried an offset power curve of the form:

y = \frac{(x+o)^p - o^p}{(1+o)^p - o^p}

It’s not quite right, but I found a pretty close fit:


Is that (or something similar) close enough for our purposes? Perhaps @luke.hellwig can tell us what the actual curve should be.
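For reference, a minimal Python sketch of the candidate curve above; the o and p values are placeholders, not the fitted ones:

```python
def offset_power(x, o, p):
    """y = ((x + o)^p - o^p) / ((1 + o)^p - o^p).

    The offset power curve from the post, normalized so it is pinned to
    (0, 0) and (1, 1) for any offset o and exponent p."""
    return ((x + o) ** p - o ** p) / ((1.0 + o) ** p - o ** p)

o, p = 0.25, 2.4  # placeholder parameters, not the fitted values
assert abs(offset_power(0.0, o, p)) < 1e-12
assert abs(offset_power(1.0, o, p) - 1.0) < 1e-12
```

Fitting o and p against samples of the mono J to linear conversion is then a two-parameter least-squares problem (e.g. scipy.optimize.curve_fit).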

I managed to get a new simpler version of the chroma compression working. It’s available in my repo for testing as a prototype version CAM DRT v032-pex.

New in CAM DRT v032-pex prototype

  • New chroma compression algorithm, old one is removed
  • Cusp smoothing is now enabled by default
  • Small change to custom LMS primaries
  • Small changes to LMS compress mode to avoid NaNs/Infs
  • Gamut mapper parameters changed
  • ZCAM is removed
  • Linear extension is removed

The New Chroma Compression (in-gamut compression)

The new algorithm is simpler than the old one and has only one curve, as opposed to the three curves the previous algorithm had. The steps of the new algorithm are as follows:

  • Scale down M (colorfulness) by tonescaledJ / origJ
  • Normalize M to compression gamut cusp (becomes hue-dependent)
  • Compress M with a compression curve. Compression is increased as tonescaledJ increases to create the path-to-white.
  • Denormalize M with the gamut cusp
  • Apply global saturation to taste, boosting saturation in shadows more

The M is normalized to the cusp of a compression gamut. The compression algorithm then compresses the colorfulness in the 0-to-limit range. It does not compress anything beyond the limit. This makes the curve more controllable than the previous one, and it protects pure colors well. The forward direction has better yellows than previously, which helps with rendering of fire/pyro, etc. Path-to-white is IMO better than in the previous algorithm. Path-to-black is on par with the previous one, with slightly more colorful noise.
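The steps above can be sketched roughly like this. This is a toy illustration only: the real tonescale, hue-dependent cusp lookup, compression curve and parameter values live in the repo, and the reciprocal compressor and constants here are placeholders.

```python
def chroma_compress(M, J_orig, J_ts, cusp_M, strength=1.0, limit=1.2, sat=1.0):
    """Toy version of the five steps; not the actual v032 curve."""
    # 1. scale colorfulness by the tonescale ratio
    M = M * (J_ts / max(J_orig, 1e-6))
    # 2. normalize to the (hue-dependent) compression-gamut cusp
    u = M / cusp_M
    # 3. compress only within 0..limit; compression increases as the
    #    tonescaled J rises, which is what creates the path-to-white
    if u < limit:
        v = u / limit
        s = 1.0 + strength * (J_ts / 100.0)   # placeholder driver
        u = limit * v / (v + s * (1.0 - v))
    # 4. denormalize back with the same cusp
    M = u * cusp_M
    # 5. global saturation to taste
    return M * sat
```

Note that anything at or beyond the limit passes through untouched, which is the property the post credits for protecting pure colors.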

The Compression Gamut

The default chroma compression gamut is always Rec.709 but I think it could also be something else, including a custom gamut specifically chosen for the compression use case. This gamut doesn’t change even when limiting/encoding primaries are changed. This means it keeps the range that it compresses (0-to-limit) always the same regardless of the display gamut. This may or may not be a good idea…

The gamut cusp is always scaled with an eccentricity factor (ET). The current implementation has 3 ETs to choose from: CAM16, Hellwig2022 and Custom. The default is Custom and provides the best forward and inverse directions of the three. Without applying the eccentricity factor it seems the shape of the resulting compression fits the display gamut poorly.
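For reference, the CAM16 option corresponds to the standard CIECAM02/CAM16 eccentricity function of hue angle (the Custom ET is defined in the repo and not reproduced here):

```python
import math

def cam16_eccentricity(h_deg):
    """Standard CIECAM02/CAM16 eccentricity factor e_t for hue angle h
    in degrees; it varies between 0.7 and 1.2 over the hue circle."""
    return 0.25 * (math.cos(math.radians(h_deg) + 2.0) + 3.8)
```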


The 3 compression parameters are limit, compression strength, and compression expansion rate.

The Rec.709 Inverse

The following images show the inverse compared to CAM DRT v031 (second image):

The following images show the inverse with the CAM16 and Hellwig2022 eccentricity factors applied:

The following image shows what the inverse looks like when the eccentricity factor is not applied in compression (all else being equal):

SDR/HDR Rendering

The match between SDR and HDR is decent but not perfect. Shadows can be more colorful in HDR compared to SDR. Needs more testing and adjustment. Path-to-white for normal range of colors appears to be almost identical in SDR and HDR, which is an improvement over the previous chroma compression, IMO.

Purer colors can now appear even more pure in HDR than before because chroma compression no longer compresses as far out as it did before. Pure red in particular seems to come out really hot on HDR RGB screen (with either P3 or Rec.2020 limited primaries).

Example Images

First image is CAM DRT v032-pex, second image is CAM DRT v031 for comparison:


Oooh! That’s looking really nice!

I opened a pull request for CAM DRT v032 for @alexfry, also available in my repo.

This version is otherwise identical to the prototype version above but the SDR/HDR match has been improved a little bit, and colorfulness of noise has been reduced.

Here’s a recap of what has changed in v032:

  • New chroma compression algorithm, old one is removed
  • Cusp smoothing is now enabled by default
  • Small change to custom LMS primaries
  • Small changes to LMS compress mode to avoid NaNs/Infs
  • Gamut mapper parameters changed
  • ZCAM is removed
  • Linear extension is removed

The PR has been merged, and the LUT Candidates repo has been updated with bakes for v032.

So far the SDR to HDR match is looking pretty sane to me.


I am going to wait until people have tested the LUT bake versions for a bit before porting this new stuff to DCTL. If we are confident that this new direction is the one we are taking, I will work on it then.

I have the new MacBook now and am able to view SDR and HDR. A very crude run through the EXR set tells me that they match very well on most images, especially in color/hue. The contrast/dynamic range is a bit weird to experience viewing both at the same time in the same conditions on a 16" display. I have to get used to the presentation of HDR on a reference monitor in general, but there are definitely some things that stood out which I want to write about in more detail when I have more time.


Have given a quick look to all 6 variations, but have not yet looked at the inverses.

Not to contradict Shebbe, but I have found some differences between
ACES2 Candidate CAMDRT rev032 Rec709,
ACES2 Candidate CAMDRT rev032 Rec2100 (Rec709 sim), and
ACES2 Candidate CAMDRT rev032 Rec2100 (P3D65 1000nit Limited)

In particular, the CIE chromaticity scope shows some variations, the cause of which is unknown, but the differences are worth pointing out.
The vectorscope and color charts seem to show some differences in saturation for these three, which was not the case in comparisons of earlier versions.
Also, it is very difficult to evaluate 3-D behaviour with a 2-D scope.
Also, I am not sure if being in the 709 or P3 color space is behaving differently or as it should.

Otherwise the images (all sample frames) look good and appear consistent between SDR and HDR, but this has only been a very quick look. More time is needed to compare, and I would feel better without the issues mentioned above.

I understand Nick not wanting to rush ahead with DCTLs and would encourage closer inspection of what the compression and model is doing in HDR and if it is behaving as expected and not messing with saturation or hue.


Could somebody sanity check me here? I am trying to work out why my DCTL isn’t matching the LUT bake, and I’ve found a few things.

  1. A few of the parameters seem to have changed slightly between v30 and v31 (at least in the LUT bake project) so it does not seem to be the case that v31 with the 7-axis values set to detents is identical to v30. I have changed the parameters in my DCTL accordingly and get a better match. But I’m wondering now if the LUT bake is using out of date parameter values.
  2. My DCTL is using my quadratic solve based gamut compression. But looking at the Blink in the v31 LUT bake Nuke project, it does not appear to be using that. The blink in Alex’s repo does, so there seems to be a mismatch.
  3. The group nodes in the rev31 LUT bake Nuke Project do not seem to include the sixAxisCompression, which suggests they may not actually be rev31.
  4. All the Blink nodes still seem to be using [4200, -1050] for the white point in the LMS matrix calculation. Should those not have been changed to Equal Energy White? Or maybe that decision came after rev31.

I won’t push my updated DCTL yet, as I have a feeling I may be tracking the wrong version of the Blink.

EDIT: It appears I may have been working from an earlier commit of the rev31 LUT bake project. The one in the repo currently matches the parameters I originally used, and also includes sixAxisCompression.

I’m not sure how the v31 LUTs were baked, but CAM_DRT_v031.blink is the correct Blink code for v031. It is otherwise identical to CAM_DRT_v030.blink, but it has the hue-angle gamut mapping compression parameters. However, those parameters should not be in use in the v031 node, so in practice v030 and v031 are identical (the rendering should be identical).

v031 node still uses the wrong illuminant for the LMS matrix. This was fixed in v032.

I think the rev31 LUT bake Nuke project may not have been properly updated in the repo until the rev32 commit yesterday.

Yeah, that was a bit of a weird sequence of events thing.

I baked the v031 luts, but hadn’t hit save in Nuke before I hit commit on the repo. The .autosave was in the correct state, but not the main .nk.

I corrected that before yesterday’s commit, which is why the v031.nk updated yesterday. The relationship between the v031 .nk and the v031 LUTS should be correct.


I don’t know if it’s at all meaningful, but looking at the image set on my M1 Macbook Pro with the OCIO baked LUTs for rev032, I’m seeing a pretty decent match between SDR (P3D65 Limited) and HDR (P3D65 1000nit Limited). However when I instead compare P3D65 Limited to P3D65 540nit Limited, the 540nit HDR looks a lot more saturated. A good example to look at is ACES OT VWG Sample Frame - 0007.

FWIW, I don’t see this in rev031.


I just looked at HDR P3D65 540 nit and HDR P3D65 1000 nit and I experience the same thing. The reds are more saturated in 540 nit than in 1000 nit. Blues are closer; in greens I couldn’t see a noticeable difference. They match a lot more closely in rev031. Overall, rev031’s 540 nit more closely matches rev032’s 1000 nit, so I’m not sure the rev032 P3D65 540 nit is as it’s intended to look.

I confirm what both Derek and Shebbe are seeing, in that the 540 nit shows more saturation than the 1000 nit, 709 sim and SDR. Is this related to the max luminance somehow?

Also, red Christmas now looks to be orange Christmas in every case.
And I am seeing red neon in HDR look more pink in SDR and sim.
Also noted that the powder blue shirt in Isabella (good in SDR and sim) gets some magenta tint in HDR. Note that for the sim and HDR I am using split screen on the same monitor.
More to come as studied further.

Last night I compared Rec.709 sim and Rec.2100 again after a week or so, and I immediately saw the difference. HDR clearly has a higher overall saturation level. The match is worse than in v031, which I considered quite a good match.

This needs more investigation, testing and tweaking to learn more, but I am finding it much more difficult to create the match. This reminds me of how difficult it also was with version v026, which had the original curve, similar to the original highlight desat curve.

The following image shows the v026 chroma compression curve for 100 and 1000 nits (in all graphs the higher curve is the 1000 nit curve):

The match was improved in v028 with a new curve. The following image shows the v028 chroma compression curve for 100 and 1000 nits:

The curve in v028, and in subsequent versions until v032, was the key to creating a better SDR/HDR match. I found that it was important to keep the curves at and around middle gray (the red vertical line) at a similar rate of change. Darker colors wouldn’t match at all with the v026 curve but matched immediately with the v028 curve. The v026 curve, while overall more colorful, was less colorful in shadows in HDR (as can be seen in the v026 graph, the HDR curve has a faster rate of change in shadows). With the v028 curve they’re closer in shadows. So the only thing I had to adjust was global saturation to get the match to an overall similar level.

In v032 all this went away, and the algorithm now starts by dividing the tonescaled lightness by the original lightness. The following image shows how colorfulness will change with this, again for 100 and 1000 nits (the image shows tonescaled J divided by original J):

These curves tell us that the HDR image will be more colorful, including in the shadows. This happens because middle grey is lifted in the tonescale. Last night I also quickly tested a lower lift of middle grey; lowering the lift (going from 100 nits to 1000 nits) from ~15 nits to ~12-13 nits created a much better overall match.
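A toy numerical stand-in (NOT the actual ACES 2.0 tonescale) illustrates the effect: pin middle grey at ~10 nits for the 100 nit curve and ~15 nits for the 1000 nit curve, and the ratio ts(x)/x comes out higher for HDR throughout the shadows.

```python
def toy_tonescale(x, peak, grey_out, grey_in=0.18):
    """Michaelis-Menten-style stand-in, pinned so grey_in maps to grey_out."""
    k = peak * grey_in / grey_out - grey_in
    return peak * x / (x + k)

# Middle grey at ~10 nits for SDR, lifted to ~15 nits for HDR (as in the
# post); the ratio ts(x)/x is then higher for HDR in the shadows:
for x in (0.01, 0.05, 0.18):
    sdr_ratio = toy_tonescale(x, 100.0, 10.0) / x
    hdr_ratio = toy_tonescale(x, 1000.0, 15.0) / x
    assert hdr_ratio > sdr_ratio
```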

(I’m also wondering, without knowing, whether the HK effect plays a role here too: not only is the exposure lifted for HDR, it’s also more colorful, so it appears even brighter. My eyes seem to agree.)

Now, these curves are only one part of the final chroma compression, which includes another curve for the in-gamut compression, but the fact that they differ so much at and around middle grey is, I think, making it harder to create the match simply by adjusting saturation.

I don’t have a solution at this very moment to bring the match closer. I hope to continue testing and tweaking to see if I can get them closer without adding too much complexity.