Validity of ACES criticisms?

Howdy all, I’ve been using ACES 1.2 for all my color management needs for a while now. I made a simplified OCIO config for CG work (soon to be replaced by the official CG config that’s in the works) and I’ve been very happy with the workflow and results. Colors are consistent in every app and the “ACES look” really pops. In short, it’s a pretty perfect solution for my projects.

That being said, I’m curious about the criticism of ACES 1.2 coming from some quarters, pointing out hue shifts, clipping, and other issues. I’m not a color guy by any means, so it’s all over my head, but I realized that I’ve never heard a pro-ACES rebuttal. Would love to learn more about if these complaints have actual, practical validity and if they have been dismissed or taken under advisement by the ACES peeps. Thanks!


Hi Brian,

Appreciate your question, and I’m sure that others will chime in. Part of the reason you’re aware of people’s criticisms is that we are very transparent about the system and committed to hearing people’s comments and needs and addressing them. In fact, ACES 1.1, 1.2, 1.3 and the coming 2.0 all address user feedback, and represent improvements on the most requested issues/problems that users faced with 1.0.

We’ve documented many hundreds (and probably thousands) of productions that have successfully used ACES. We don’t always know when people use ACES because it just works for them (like in your case), so let’s see what others say.

Thanks for sharing your positive experiences, and welcome to the ACESCentral community!



Hi @BrianHanke !

Yes, the current ACES Output Transform indeed has a number of hue shifts and skews that the ACES Working Groups have been working to address. As I understand it, these are largely due to the per-channel approach of the RRT, and the current proposal for the Output Transform for ACES 2.0 aims to fix this using a chromaticity preserving approach.

Check out the ACES NEXT area of ACESCentral for all sorts of discussion of that. In particular, you can get a sneak peek at the proposed Output Transform candidates on this web page (needs to be viewed in a Chrome browser), as well as this GitHub repository with implementations of the ODT candidates for Nuke, Resolve, and Baselight.


Oh man, cool stuff. I’m glad I asked! Took the candidates for a spin and already very impressed by one or two. Looking forward to following along as development progresses.

Hey @BrianHanke

Keep in mind the development stuff isn’t final and is really exploring some very different approaches.

It’s probably also worth reviewing the group’s call recordings. One topic discussed at length is “hue skews”. It’s a complex topic, to say the least. On paper it sounds like “well, I don’t want those”; in practice it’s not as straightforward. That said, the group is certainly looking for something more well behaved and consistent, especially across dynamic ranges.


Appreciate the nice welcome everyone! Thanks Alex, will check out the calls. I’m always interested to learn more. My official position on hue skew right now is “don’t care.” :stuck_out_tongue: My renders look nice, skewed or not, so I’m happy.

The criticism comes from a couple of sources.

IMHO, mainly from ACES fanboy behavior and an overall lack of color science knowledge among CG artists.

While the design and DTP industries have had solid color management workflows and protocols for decades, the VFX industry had tons of unique and incompatible standards. But instead of adapting ICC or working together with the ICC to make one unified standard, it decided to make yet another one: ACES. ICC itself has had a lot of issues, from incompatibility with modern media to closed vendor “magic sauce” inside ICC profiles. ACES, at the same time, completely left out DTP, photographers, and 2D CG artists from the start. That created two camps: one with wide ICC support, another with slow but growing ACES adoption. And with some teething issues, like the lack of proper gamut mapping (because who needs it when ACES AP0 is so big and nobody uses sRGB anymore, joking).

Add to that developers who add ACES to their apps but in reality just add one of the “filmic” tone curves from ACES (Marmoset).
Or artists who “render in ACES” with random textures in sRGB or linear sRGB, or just use sRGB colors inside an ACES pipeline without a proper IDT.

Better to understand that ACES is not a silver bullet. It only has workarounds for the origin of all these problems: a tristimulus color model built around human vision, where colors and light are hard or almost impossible to manipulate using a model based on the sensation of a standard observer. CG artists must understand those limitations.

ICC has never found a use in high-end VFX because it simply did not support the dynamic range required for motion pictures. To put things back in context, Nuke was built in 1993 with 32-bit floating-point precision from the ground up. ICC came half a decade later and only started to support unbounded encoding circa 2006.

ACES was designed for motion pictures, and its architecture was defined by Giorgianni, E. J. (2005). Color Management for Digital Cinema - A Proposed Architecture and Methodology for Creating, Encoding, Storing and Displaying Color Images in Digital Cinema Systems. Note that it predates the ICC’s discovery that floating-point processing is actually important by a year.

I don’t think anybody thought that gamut mapping would not be needed. It is, however, not trivial to implement. The increased usage of narrowband solid-state lighting, and IDTs mapping their values outside the spectral locus, certainly highlighted problems with the current ODTs.

Hue skews and per-channel rendering were also not really a concern until a few years ago, when HDR made it much harder to maintain colour appearance between SDR and HDR. Thousands of movies have been done with per-channel rendering and look great; ultimately, the artists, colorists, and director make the difference, not the tools.


That’s true.

But in the same time frame, since 1998, graphic designers and DTP have had wide support for color management, in some systems even system-wide.

Regarding Nuke, I know it’s an industry standard for video processing, but VFX is not video only.

As part of a big user group that works between the two camps of ICC and ACES (3D scanning, photogrammetry), I see poor support for photo cameras in ACES. That forces me to find workarounds to bridge the enthusiasm of ACES evangelists and the reality of needing to process thousands of high-resolution raw images daily.

The situation is slowly changing, but until the big players in graphic design, photo, and web add ACES support to their apps, it will remain a mostly video/3D CG “standard”.

Regarding the significance of 32-bit: sometimes it is exaggerated, while 16-bit float is a pure disaster and gives worse precision than fixed 16-bit precision in closed domains.

Have you tried Fast Cinema DNG? Lee Perry-Smith uses it, and having talked to Fyodor a few years ago, I told him that it would be great to have native EXR support.


I’m in contact with Fyodor :slight_smile: but Fast Cinema DNG is still pretty limited compared to Camera Raw, Lightroom, Capture One, or DxO if you need strong preprocessing. And it has issues with high-resolution images. :frowning:

For the moment, the best candidate is ProPhoto RGB (RIMM/ROMM), as it is the internal color space in Adobe apps and has a gamut as wide as ACEScg.
The only small problem is that Adobe uses the sRGB tone curve with ProPhoto RGB primaries instead of the ProPhoto standard gamma, and this variant is sometimes named Melissa RGB. So working with it requires some attention.
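To show what that tone curve mismatch amounts to, here is a minimal sketch (my own illustration, not Adobe code) comparing the piecewise sRGB decode used in “Melissa RGB” with the standard ProPhoto/ROMM gamma-1.8 decode:

```python
def srgb_decode(v):
    """IEC 61966-2-1 sRGB decode: linear toe, then a 2.4 power."""
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def prophoto_decode(v):
    """ROMM RGB (ISO 22028-2) decode: linear toe, then gamma 1.8."""
    return v / 16.0 if v < 1.0 / 32.0 else v ** 1.8

# The same encoded value decodes to noticeably different linear light,
# so mixing up the two curves shifts all your midtones:
print(srgb_decode(0.5))      # ~0.214
print(prophoto_decode(0.5))  # ~0.287
```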

Btw, OpenEXR is bad for digital camera inputs.
14-bit sensors with a closed [0-1] range fit perfectly into 16-bit int formats and require much less disk space.
16-bit half float, at the same time, has too many rounding errors in the [0.5-1.0] range, with its ~11-bit precision.

Only derivatives like displacement, especially in photometric stereo capture systems, can require 32-bit floats. But those can easily work with “unknown” camera RGB spaces and embedded linearization curves, since only the intensities matter.
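As a concrete illustration of that precision argument (my own sketch, assuming NumPy), compare the step size of half float near the top of [0, 1] with a uniform 16-bit integer encoding:

```python
import numpy as np

# Half floats in [0.5, 1.0) share exponent -1 and have 10 mantissa
# bits, so the step (ULP) throughout that range is 2**-11:
half_step = float(np.spacing(np.float16(0.75)))   # ~0.000488

# A 16-bit unsigned integer encoding of [0, 1] steps uniformly:
int_step = 1.0 / 65535.0                          # ~0.0000153

# Near 1.0, half float is roughly 32x coarser than 16-bit integer:
print(half_step / int_step)
```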

Another major issue with ICC is that the reference medium and gamut are based on ink on paper.


The code-space of half float is certainly not optimised for data limited to the 0-1 range. And when that data represents the whole of the camera range from black to clipping, it is even less so.

When storing sensor data in EXR it is often beneficial to add gain first, so that mid grey is mapped to 0.18 and the clipping point is several stops above 1.0.
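A sketch of that idea, using made-up illustrative numbers for a hypothetical 14-bit sensor (assuming NumPy):

```python
import numpy as np

# Hypothetical 14-bit sensor: black at 0, clipping at 16383, and say
# mid grey was measured at raw code 800 (illustrative value only).
raw = np.array([0, 800, 16383], dtype=np.uint16)

# Apply gain so mid grey lands at 0.18 scene-linear before writing
# the EXR; the clipping point then sits well above 1.0:
gain = 0.18 / 800.0
linear = raw.astype(np.float32) * gain

clip = float(linear[-1])                        # ~3.69
stops_above_grey = float(np.log2(clip / 0.18))  # ~4.36 stops headroom
```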


While this helps with utilising more dynamic range (if you hit the subnormals), it does not increase the precision per stop.
I think that while someone could complain about the precision per stop in OpenEXR, one could also complain that an integer representation allocates its bits extremely non-ideally (half of the code values to the first stop down from maximum, etc…)
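To make that bit-allocation point concrete, here is a quick sketch (my own illustration) counting how many codes of a 16-bit integer encoding of [0, 1] fall into each stop down from maximum:

```python
# Each photographic stop halves the range; count the integer codes
# that a uniform 16-bit encoding of [0, 1] spends on each stop:
codes_per_stop = []
hi = 65536
for _ in range(5):
    lo = hi // 2
    codes_per_stop.append(hi - lo)  # codes in [lo/65536, hi/65536)
    hi = lo

print(codes_per_stop)  # [32768, 16384, 8192, 4096, 2048]

# Half float instead spends 2**10 = 1024 codes on every stop (binade),
# which is why its precision per stop is constant.
```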

OpenEXR is designed as an intermediate file format. Storing camera data is an important topic but not the only one.

Considering that almost all operations in computer graphics are done on the GPU it would be unwise to go back to an integer-based intermediate file format. Also, I think it would be unwise to mix integer and floating-point data in the same container.


I complex agree with @daniele. I think the larger point is that there are pros and cons to various data types and encodings for any particular use case. Floating point, signed integer, unsigned integer, fixed point, and encodings like two’s complement all have their own attributes that make them preferable in some applications and non-preferable in others. It’s all just a matter of trade-offs.

I think you just coined a new phrase! :slight_smile:



It’s called replying from your phone.


There are some misconceptions here. I worked extensively in DTP and ICC creation before my time in motion picture color science. I’ve spoken at conferences for both sets of technologies.

ICC and ACES are for different industries and solve different issues.
ICC is both a color management architecture and a color management system. It’s similar to ACES+OCIO.

As color management systems they have a lot in common. The new iccMAX format removes most of the print-centric limitations of the past.

While ICC was paper bound, color science for motion pictures was film bound. IIF, which was renamed ACES, was a film-centric workflow and design. So while ICC originally was based around the D50 white of print shops, ACES used a D60 white that was a compromise between film white and video white.

It’s hard to overstate the speed and capability difference that has happened over my time in the industry, especially for rendering. When I began working with ACES, most color transformations were exceptionally optimized. Most were 1-D LUTs optimized for use, so 8-, 10-, 12- and 16-bit 1-D tables were benchmarked against render time. The fastest systems were still significantly integer based. Results were written directly to and from the graphics cards. The ICC framework was impossibly processor intensive for that application.
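For readers less familiar with that optimization, here is a minimal sketch (my own illustration, assuming NumPy) of the integer 1-D LUT fast path described above, baking a gamma-2.2 decode into an 8-bit table:

```python
import numpy as np

# Bake an arbitrary per-channel transfer function (here gamma 2.2)
# into a 256-entry table once, ahead of time:
lut = ((np.arange(256) / 255.0) ** 2.2 * 255.0 + 0.5).astype(np.uint8)

# At render time, each pixel costs a single integer table lookup,
# regardless of how expensive the original function was:
pixels = np.array([0, 64, 128, 255], dtype=np.uint8)
out = lut[pixels]
```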

However, going the other way was a non-issue. Making copies of the motion picture pipeline that can be used in ICC is pretty trivial. This is why OCIO has ICC baking commands (ociobakelut). You can have matte painters and visualization artists work in native colorspaces via ICC.

Color management in motion pictures is still in its infancy. It’s catching up quickly, but ICC-style color management benefited from additional decades of implementation and development. As well, since it can be used in mass-market devices, there is a volume of support and money behind it that ACES will be challenged to match.

A quick gut check for this can be had even in products targeted at motion picture professionals: ProRes RAW launched without an ACES output function, but nearly every Apple product uses and supports ICC.


Thanks for the post Joseph …
I think you make some great points, but I want to add a few points for clarity.

While I understand your point, I just want to clarify for others who may be less familiar with ACES and ICC that there’s no “film color science” built into ACES. We often hear that ACES has a very filmic aesthetic. Some like it, some don’t, but the ACES 1.0 Output Transforms aren’t film models.

ACES, like film, does leverage the concept of image state architectures. This just means that ACES files are scene referred, retain as much dynamic range as possible from the original scene, and reproduce some subset of that dynamic range on a display. This is similar in concept to the way film negatives capture a wide dynamic range of which only a subset makes it onto the print stock.

ICC was originally designed for cross-media color management and is most comfortable taking one device-dependent, output-referred image and transforming it to another set of device-dependent, output-referred code values for reproducing the image on a different output device. ICC doesn’t really have the concept of image states built in. It has been adapted over the years to deal with scene-referred images, but that’s not its natural workflow.

I think it’s also important to clarify that ICC uses the concept of a Profile Connection Space (PCS) which has a reference medium, with a D50 based device independent color encoding, a reference viewing environment, and a reference gamut boundary.

While this may sound similar to ACES, it’s actually very different. ACES doesn’t use a PCS and has no reference medium. As part of the ACES 2065-1 color encoding specification, equal RGB values are defined as having approximately the chromaticity of D60, but this is for the purpose of having context when building transforms into and out of the ACES 2065-1 encoding. There’s nothing lossy about going into ACES 2065-1, and your neutrals don’t have to have the ACES white point.

The ICC PCS, however, is lossy. The dynamic range, in particular, is limited to the dynamic range of the PCS reference medium (a dynamic range of 288:1). There are ways to work around these limitations using the ICC system, but this is the default and by far the most common ICC workflow. This is a problem not only for motion picture production, but for HDR color management in general. The ICC recognizes this and has a working group investigating what modifications they would need to make to the architecture to address HDR workflows.

In summary, I think you’re 100% correct in saying “ICC and ACES are for different industries and solve different issues.” I just wanted to provide more specifics for those who aren’t as familiar with color management or ICC as those of us with backgrounds in printing technology from RIT :wink:
