Do we need to change ACEScct?

After recapping the last meetings, I recommend not changing ACEScct/ACEScc.

Here is my reasoning:

Trading off precision for a minimal number of occasions

The ARRI S35 camera exceeds ACEScct only for ISOs above 1600, and in those cases it is only a one-stop overshoot. We know that the first stop below clipping is uncharted territory, because the camera processing is doing all sorts of things up there to sweeten the saturation.
If we change the log encoding to accommodate that, we reduce precision for the other 99.999% of pixels that sit within the ACEScct range. Also, if we visualise what those values look like (high ISO / high exposure), we are talking about a signal that is well within the highlight compression range of the DRT, even for a 4000-nit target.
Do we have images that contain these values?
Do they look problematic if clipped to ACEScct?
Without looking at images, the discussion is not grounded in an actual use case.
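For reference, a small sketch of the ACEScct encoding (constants from the public S-2016-001 spec), showing the maximum linear value that fits below code value 1.0 and where a one-stop overshoot would land:

```python
import math

def lin_to_acescct(x):
    """ACEScct encoding (S-2016-001): linear AP1 -> ACEScct code value."""
    if x <= 0.0078125:
        return 10.5402377416545 * x + 0.0729055341958355
    return (math.log2(x) + 9.72) / 17.52

def acescct_to_lin(y):
    """Inverse of the log segment, valid above the linear cut."""
    return 2.0 ** (y * 17.52 - 9.72)

# Maximum linear value that still fits below code value 1.0:
max_lin = acescct_to_lin(1.0)   # 2^(17.52 - 9.72) = 2^7.8 ≈ 222.86
# A one-stop overshoot beyond that needs roughly twice as much range:
overshoot = 2.0 * max_lin
print(max_lin, lin_to_acescct(overshoot))  # the overshoot encodes above 1.0
```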

On Set

The potential clipping would only occur on set, because ACES-compliant software needs to deal with floating-point data anyway. Further, to my knowledge, no camera outputs ACEScct. The SDI signal would always be encoded in the camera manufacturer’s log encoding, so we do not have a problem in the first place. And if someone wants to encode the camera feed up to CV 400 in ACEScct, they can use the SDI headroom.

CDL Grading

ASC CDL works only ok-ish if the signal is encoded in a Cineon-like format with sufficient signal spread between 0 and 1. If the signal spread gets reduced more and more, offset, power, and slope provide no real differentiation anymore, and the tools become unusable.
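To illustrate why a compressed signal spread defeats the tools, here is a minimal sketch of the per-channel SOP transform (clamping behaviour is left out, since it is implementation-dependent). With a well-spread signal, power visibly bends the mid-tones; on a squeezed signal, it acts almost like a flat gain:

```python
def cdl(x, slope=1.0, offset=0.0, power=1.0):
    """ASC CDL per-channel transform: out = (x*slope + offset)^power.
    Negative intermediate values skip the power stage here (a common
    convention; the spec leaves clamping to the implementation)."""
    v = x * slope + offset
    return v ** power if v >= 0 else v

# A well-spread signal: power differentiates shadows from highlights.
well_spread = [0.1, 0.5, 0.9]
# The same scene squeezed into a narrow band near 0.5:
squeezed = [0.45, 0.50, 0.55]
print([round(cdl(v, power=1.5), 3) for v in well_spread])
print([round(cdl(v, power=1.5), 3) for v in squeezed])
```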

LUT Shaper

As CLF becomes the standard for LUT exchange, linear light and extended-range LUTs become widely accessible. How LUT-box manufacturers map those ranges internally to meet the limitations of the hardware is an implementation detail that should not crosstalk into the creative domain.

How much more dynamic range can we expect from future cameras? To accommodate more dynamic range on the camera side, we need to look into new ways of encoding the signal, like mini_float_12 or a positive-only mini_float_10, for example. Stretching the Cineon idea will not yield optimal results.
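To make the idea concrete, here is a sketch of what a positive-only 10-bit minifloat decode could look like. The 5/5 exponent/mantissa split and the bias are purely illustrative assumptions, not a proposed standard:

```python
def minifloat10_decode(code, exp_bits=5, man_bits=5, bias=15):
    """Decode a hypothetical positive-only 10-bit minifloat
    (5-bit exponent, 5-bit mantissa, no sign bit)."""
    e = (code >> man_bits) & ((1 << exp_bits) - 1)
    m = code & ((1 << man_bits) - 1)
    if e == 0:  # denormals anchor the encoding at zero
        return m / (1 << man_bits) * 2.0 ** (1 - bias)
    return (1.0 + m / (1 << man_bits)) * 2.0 ** (e - bias)

# Every stop (each exponent value) gets the same 32 mantissa codes,
# i.e. constant relative precision across some 30 stops of range:
print(minifloat10_decode(0b00001_00000))  # 2^-14
print(minifloat10_decode(0b11111_00000))  # 2^16 = 65536.0
```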


From a novice dailies colorist perspective, and not to open a different can of worms… but I feel as though adding a few range parameters, or perhaps enhancing the CDL spec into a more colorspace-aware / perceptually uniform tool that can go beyond 0–1, would go a lot further in the long run toward improving the on-set grading experience than ACESlog would, especially considering the ever-increasing amounts of dynamic range.

How that gets implemented would be above my pay grade (how many parameters? using CAM02? code to generate CLF LUTs on the fly?) but I imagine leaving hardware up to the hardware folks and letting ACES focus on solid standardized tooling makes sense.

CDL has been a brilliant and simple tool but as time goes on I worry it will become harder to use, and I think that may be a better area to focus on. It would also help all color-correctors use ACEScct more effectively, while continuing to communicate under the same terms.

One could argue this was the purpose of LMTs, though, if only due to my limited exposure, I don’t see those used too often in the wild, perhaps due to a lack of common tooling to make them.

@daniele, while I trust your statements on this issue since it has come up several times in discussion, can you link to the relevant documentation from ARRI?

I’m inclined to agree with you in that I don’t think a small expansion is worth it for the problems brought forth. Maybe just re-brand ACEScct. Alternatively / additionally, IF a new transfer curve is to be defined, I think it’s much more valuable to focus on 12-bit quantization. I’m a future-forward kind of thinker, and it’s not worth addressing what I see as small concerns for 10-bit on-set workflows.

Have a look here:


Well then, this looks excellent. Hopefully it can be considered again.

It’s important to consider the purpose of ACEScct. It was not designed to be the on-set grading space, but a more user-friendly version for DI coloring, specifically because of usability problems in ACEScc.

One important aspect of grading spaces is having some additional latitude, so that values can be timed up without clipping.

The specific use cases where values go beyond ACEScct are the exact use cases where the need to time shots brighter may exist. ISO 1600 and above will only be used where light is dim and most of the usable image sits in the lower code values. This makes it more likely that images shot at ISO 1600+ will need to be timed brighter to match surrounding shots. There should be a place in the tonal curve that allows S35 images to be brightened without clipping.

Beyond just the hardware and computation needs, artists need to be able to interact with this imagery in a way that facilitates working. Even if a LUT can be modified to work on linear imagery, artists need a space like ACEScc or ACEScct for predictable, non-destructive image manipulation controls.

As opposed to changing ACEScct, it would be best to make a new space that allows for timing images both darker and brighter. A new space every decade or so is not an imposition.

Could you elaborate on where clipping might occur when you “time up” images?

A few of us didn’t get the memo and still showed up to today’s call. Since we’re running out of time before 2.0, we still had a very brief conversation.

For the most part, I believe everyone is in agreement that changes might not be necessary, and we can clarify the potential cases and/or workarounds where one might experience clipping if they’re not careful.

One thing that was specifically suggested was to still take the 2.0 release as an opportunity to “rebrand” ACEScct. To summarize the suggestions:

  1. Define the log encoding as “ACESlog”.
  2. Therefore, ACEScct can be defined as “ACESlog with AP1 primaries”. The shorthand name could be recommended to change to simply ACESlog (implying AP1) or remain ACEScct (implying ACESlog/AP1), but the important thing is to assign a name to the log encoding itself, separate from the color primary encoding.
  3. Still “officially” deprecate ACEScc. The legacy spec can still exist for legacy users who are attached to it, as with any other choice of working space, but ACES documentation and example workflows will recommend ACEScct (aka ACESlog/AP1) as the working space.

Hopefully that mostly captures the few points that were made.


If an image maps to 1.0 or near it in ACEScct and is then timed brighter, it will have values above 1.0. Depending on what operations are performed on it after that, those values may be clipped.
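As a minimal sketch of that scenario, assuming a one-stop "time up" applied as an offset in the ACEScct log segment, followed by a hard clamp somewhere downstream:

```python
# One stop in the ACEScct log segment is 1/17.52 of the 0-1 range.
STOP_IN_CCT = 1.0 / 17.52

# Highlight code values near the top of the range:
highlights = [0.95, 0.99, 1.0]
# Timed up one stop, all of them now exceed 1.0:
timed_up = [v + STOP_IN_CCT for v in highlights]
# Any later operation that clamps to 0-1 flattens them:
clamped = [min(v, 1.0) for v in timed_up]
print(timed_up, clamped)
```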

In theory you are right; in practice, this is no longer an issue. Also, an ACEScct value of 1.0 will be mapped way into the shoulder of the DRT, even for a 10,000-nit viewing condition.
Further, giving ACESlog “more range” will result in worse gradability with legacy tools, because the useful range becomes more compressed.


First just a small technical correction, LogC4 exceeds ACEScct at EI 1600 and above (inclusive).

While the conversation has focused on the ALEXA 35, it need not be limited to it: this ACEScct update (or lack thereof) will last a long time, so it should be viewed on a ~10-year timescale, judging by the current lifespan of ACEScct. I just view it as an opportunity for change and to anticipate needs over the next decade or so (not that this is the last opportunity, but it is certainly a natural break).

I definitely agree the principal problem area is on set, with quantization, CDL behavior, exposure evaluation, and exposure latitude working against each other.

Enabling/promoting native camera working spaces in the ACES ecosystem is one approach, via AMF/CLF or otherwise.

I would personally be against this if nothing changes. It will end up being a lot of needless work and headache for those in the color trenches, just for typographic aesthetics. If you need an independent, unique name for the encoding curve, you could just introduce “ACEScct Curve” or “ACEScct Transfer Function”.


Well, in this case we are not changing ACES CCT so fear not. The exact purpose of the WG is to create a new space with a new name and push around a new transfer curve :smile:

Very sorry Scott. I was in China last week and communicated with Alex, but did not communicate with the larger group. I’ll be in Spain next week, it looks like I might be able to make the meeting timezone wise, but maybe not other obligations wise. I will advise on Tuesday morning central European time.

Yes, and my point earlier about quantization in on-set grading is that there has been some grumbling about 10-bit quantization fundamentally limiting how aggressive this update can be. If we desire a more aggressive curve with a much larger DR between 0 and 1 (or we define a separate quantization scheme for display-routed signals), then we should not focus on the limitations of 10-bit. They will be gone long before our idealized 10-year timeline.

However, (and I can’t remember who made this point in meetings but also @daniele here) there is some concern that a more aggressive, higher DR curve might start to lose functionality with CDL and legacy tools.

I think at this time we have a reasonable proposal from @jzp: ACESlog Strawman Proposal

We should evaluate a more aggressive proposal without considering the limitations of 10-bit signals and then see a few tests of CDL and other tools to see how well they behave.

Lastly… there is a future issue: camera DR will continue to improve and improve. So with that in mind, at some point limiting the log encoding to preserve legacy tool “feel” will become impossible. What will happen at that point?


At this point, grading tools should be adjusted to work with unbounded curves. I think that is Daniele’s point, and since he is a developer of one of the leading grading packages in the industry… it will be the future, I guess.

Or legacy tools will be changed. That will be a pretty big deal, since we acquire muscle memory grading with panels, and the three primary controls are Lift, Gamma, and Gain. New tools should feel familiar in their response, or there will be a steep learning curve.
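For reference, one common formulation of Lift/Gamma/Gain (implementations vary considerably between packages, so treat this as a sketch rather than any vendor's actual math):

```python
def lift_gamma_gain(x, lift=0.0, gamma=1.0, gain=1.0):
    """One common LGG formulation: lift raises blacks while leaving
    whites pinned, gain scales the whole signal, gamma bends mids."""
    v = gain * (x + lift * (1.0 - x))
    return v ** (1.0 / gamma) if v >= 0 else v

print(lift_gamma_gain(0.0, lift=0.1))  # blacks lifted to 0.1
print(lift_gamma_gain(1.0, lift=0.1))  # whites unchanged at 1.0
```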

What this all means is that the dynamic range problem will not be resolved by a new log encoding.

One thought:
Let’s swap colour resolution for spatial resolution for the sake of the argument.
One camera out there captures a resolution of 10kx5k.
Should every production use this raster size from now on?
What about productions that have an HD camera?
They will pay a huge penalty if the processing is only ever 10kx5k.

Another thought:
The dynamic range of natural scenes is more important than the camera’s dynamic range, and it will stay the same over time. By reducing the precision per stop (to fit more stops in), we mainly lessen the quality of our captured images and make the grading behaviour of legacy tools worse.
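A quick back-of-the-envelope look at that trade-off, using the fact that the ACEScct log segment spans 17.52 stops over the 0–1 range:

```python
def codes_per_stop(total_stops, bits=10):
    """Code values available per stop when a log curve spreads
    `total_stops` of dynamic range evenly across an integer signal."""
    return (2 ** bits - 1) / total_stops

# Today's budget in 10-bit ACEScct:
print(round(codes_per_stop(17.52), 1))       # ≈ 58.4 codes per stop
# Stretching the curve to cover four more stops shrinks the budget:
print(round(codes_per_stop(17.52 + 4), 1))   # ≈ 47.5 codes per stop
```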

Another thought:
A system needs to deal with floating point data to fully support ACES.
CDL does not clamp outside the 0…1 range. So the problem we are trying
to solve is only an on-set problem. Why do we not fix the problem
at its core? We could have an intLog (with a very different design) for transmitting data over the wire.
An on-set system should be able to convert from intLog to ACEScct, apply CDL + LMT + DRT + ODT, and bake all of this into a preview LUT.
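A hedged sketch of that chain baked into a 1D preview LUT. All the function names below are hypothetical stand-ins for the real transforms, not actual ACES or CLF API:

```python
def bake_preview_lut(intlog_to_cct, apply_cdl, display_transform, size=4096):
    """Evaluate the full on-set chain once per input code value and
    store the results, so the hardware only ever applies the LUT."""
    lut = []
    for i in range(size):
        code = i / (size - 1)                 # normalized intLog code value
        cct = intlog_to_cct(code)             # wire format -> working space
        graded = apply_cdl(cct)               # on-set creative decision
        lut.append(display_transform(graded)) # LMT + DRT + ODT stand-in
    return lut

# Toy stand-ins, purely for demonstration:
lut = bake_preview_lut(
    intlog_to_cct=lambda c: c * 1.1 - 0.05,
    apply_cdl=lambda v: v * 1.2 + 0.01,
    display_transform=lambda v: min(max(v, 0.0), 1.0),
)
print(len(lut), lut[0], lut[-1])
```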

Another thought:
Maybe digital cameras will improve significantly in low-light performance instead of full-well capacity. Then, we suddenly need more values on the bottom end.


I’m afraid this metaphor works against your argument. If I have 1080p, 720p, and 4K footage, I’ll set my timeline working resolution to 4K so all effects can be rendered in 4K and the 4K quality is preserved. If footage is shot in 8K, downsampling is used to preserve some of the original quality.
If I deliver in 1080p and do not care about an archival master, I can use a 1080p timeline.

Following that logic, the timeline color space should be flexible enough to account for “1080p” and “4K” dynamic ranges, and we should have the ability to “downsample” bigger linear values.

My point is that you do not use the resolution of the biggest camera on the market, but perhaps the biggest camera within your production.