I’ve tried ACES on a recent color correction project, but find myself frustrated.
The frustration has to do with the ACES transform, which seems always to be the first action taken on the camera-original footage (in my case, ARRI LogC). The difficulty is that I don't always like the contrast of the transform, and I feel that if I were able to change the contrast of the camera original before the transform, I could do what I need to do. But this seems impossible, and it has led me to the ACEScc vs. ACEScct question in an attempt to deal with it, at least for the shadow portion of the image. Neither of these solutions satisfies me.
So my question: Why is the characteristic curve built into the transform? Why not have only the color space transform in the ACES transform, and leave the characteristic contrast curve up to the colorist?
Photoshop works in this way, and quite well. Why not ACES?
It seems to me that by combining the two concepts into the transform, the options and control of the colorist are limited.
I have not seen this discussed, or perhaps I just haven't been able to think of a search that brings it up, but I would like to start a discussion about it. Maybe I'm mistaken; it's possible, but I don't think so. And I think it's a serious issue.
Very interesting feedback, as it is in line with what a significant number of people think. It is a very subjective topic: some people like the ACES tone rendering, some don't care, and some don't like it.
Now, to be specific, ACES has multiple components, and I think you are referring to the RRT, because technically, until you enter it, your ACES data represents scene-referred relative exposure values which are linear, i.e. without tone rendering.
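To illustrate what "scene-referred relative exposure values which are linear" means in practice, here is a tiny sketch (illustrative only, not from any ACES document):

```python
# Minimal sketch: scene-referred linear exposure values before the RRT.
# A one-stop exposure change is a simple multiplication by 2; no tone
# curve is involved until the RRT/ODT are applied for viewing.

mid_grey = 0.18                   # 18% reflectance sits at ~0.18 in ACES2065-1
plus_one_stop = mid_grey * 2.0    # 0.36: exactly double the light
minus_one_stop = mid_grey * 0.5   # 0.09: exactly half the light

print(plus_one_stop, minus_one_stop)
```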
One of the goals of ACES was to simplify interchange and archiving of scene-linear images. To do this, there was a need for a standard (or reference) rendering. Without that, experience has shown that the process is ambiguous (i.e. “what LUT was I supposed to look at this through?”).
If we left it up to the colorist as you suggest, then there would be no standard, and if you received an ACES image you would also need to receive a LUT (for example) to know what the creative intent was.
As to whether the contrast of the RRT (and ODT, because they both affect the contrast) is too high or too low, this was something that was validated by many very experienced industry professionals in a carefully set up projection environment. That said, I would imagine that the ACES Next effort will take a fresh look at those tests. It may be that there are differences between how the images were viewed during the engineering work and how people such as yourself are viewing them in practice.
I have to disagree with your statement about how Photoshop works. In Photoshop you are working with images that are either rendered by the digital camera vendor or rendered by Adobe in the Camera Raw module. Working with unrendered images in Photoshop would be difficult – the tools have other expectations.
Wouldn’t it be possible to just set a flag in the EXR, like rrt=true/false, with the default being true? Tools could then react to that, reflecting it in their interface, with options like an RRT toggle to see what looks best for the given shot. That way it looks the same for everybody, and yet the colourist has the freedom to go “nope, not for this shot”.
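As a purely hypothetical sketch of how a tool might honour such a flag (the "rrt" attribute, the header dictionary and the helper here are placeholders, not an existing convention in OpenEXR or ACES):

```python
# Hypothetical sketch of the proposed flag: an application reads a custom
# metadata attribute from the EXR header and decides whether to apply the
# RRT by default. Neither the "rrt" attribute nor this header layout is a
# real convention; both are placeholders for illustration.

def should_apply_rrt(header: dict) -> bool:
    # Default to True so existing files behave exactly as they do today.
    return header.get("rrt", "true").lower() != "false"

# Usage with a hypothetical header dictionary:
header = {"rrt": "false"}         # colourist opted this shot out
if should_apply_rrt(header):
    pass  # view through RRT + ODT
else:
    pass  # view through the ODT (or a neutral transform) only
```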
I still happen to like the RRT look, and I haven’t yet come across a shot where it completely doesn’t work. If that happens, though, the colourist has to work against a built-in look, which isn’t such a great experience.
What I don’t like is the inability to easily go to regular sRGB from ACES without getting ghoulish faces on humans. While I made a toolkit for converting frame sequences after the fact, that can’t be a long-term scenario either.
Thank you Thomas and Doug for your replies. I think we understand each other and I’m glad that this issue is being discussed.
I understand the desire for a “one size fits all” solution to the RRT, as it makes everything so much simpler. And, of course, I suppose I always have the option of not using an ACES workflow to avoid the assumptions in the RRT.
My concern, as a Director of Photography, is that I recently read a paper from Netflix describing the workflow requirements for Netflix original content, and it contractually specifies an ACES workflow. To me, as a cinematographer, this indicates that, going forward, I will be facing a lot of pressure to conform to the ACES workflow.
The problem, it seems to me, is that the marketing of ACES never discusses the limitations of the approach. Famous cinematographers and post houses tout how great the system worked for them on the latest tentpole film. I get it. But the content distributors don’t understand the issues, and I’ve found that many, many colorists don’t either. Colorists have accused me of being a Luddite for my concerns.
There are two major reasons for my concerns, as a DP.
Firstly, I’ve found with digital capture and color grading that I am often lighting to very high contrast ratios, much higher than I would when we captured on film with a photochemical output. I find that these images do not fit into the RRT, and there is no way for ACES to adapt to this. So, as it is, ACES is forcing my lighting style to match the RRT. This is a step backwards towards “the old days of film”, in a way.
Secondly, we are now faced with delivering for theater, TV and HDR. HDR will require a completely different RRT than one designed for standard cinema or TV. So I would guess that, going forward, the contrast curve will need to be a standalone transform apart from a single RRT. It seems there will be no way around this. I think ACES needs to be a little more complicated if it’s really going to work.
ACES is already the simplest way to start from the same basic grade and deliver SDR and HDR versions with only a trim pass needed for whichever is the secondary version. The tone-mapping (contrast curve) is already split between the RRT and the ODT, with much less shadow and highlight compression happening in the HDR ODTs, varying with the maximum white level of a particular ODT (there are 1000, 2000 and 4000 nit variants of the ST2084 ODT).
Whether the roll-off in the current SDR ODTs is too strong is the subject of debate. The “Retrospective and Enhancements” document @Thomas_Mansencal refers to above suggests that the default rendering transform be made more neutral, with the “look” aspect being moved to a default LMT, which is optional. I think this sounds like what you want too.
If a project is immediately applying the RRT to a file, it isn’t really doing the right thing. They should only be applying an IDT to get into linear light (and then possibly into ACEScc) for grading. The grade should be applied before the RRT, so any contrast change (slope in log, or power in linear) changes the scene light in a consistent way that is independent of the RRT. Applying an ACES Viewing Transform (RRT/ODT) should be able to be turned on or off without affecting the data.
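A small sketch of that point, assuming a generic log2 working space pivoted at mid grey (illustrative only, not any vendor's actual contrast tool): a slope applied to log-encoded scene data is the same operation as a power applied to the linear scene data, and neither touches the viewing transform.

```python
import math

# A "contrast" slope applied to log-encoded scene data is equivalent to a
# power function on the linear scene values, pivoted at whatever the log
# encoding maps to zero. Nothing here touches the RRT/ODT; the viewing
# transform can be toggled on top without changing this data.

def contrast_in_log(linear, slope, pivot_linear=0.18):
    # slope about a pivot in log2 space == power about the same pivot in linear
    log_val = math.log2(linear / pivot_linear)
    return pivot_linear * 2.0 ** (slope * log_val)

def contrast_in_linear(linear, power, pivot_linear=0.18):
    return pivot_linear * (linear / pivot_linear) ** power

x = 0.72  # two stops above mid grey
print(contrast_in_log(x, 1.2), contrast_in_linear(x, 1.2))  # identical results
```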
Of course, what you see on the monitor may be a different thing. The biggest apparent effect from the RRT is the roll-off in the highlights and shadows, which in the RRT is chosen for a 0.005 to 10,000 nit display (so it is HDR ready). If you looked through the RRT alone, you would sometimes get a better image, though with a lot of clipping.
The ODTs have a big effect on the look shown on SDR displays. I think it would be a nice feature if an HDR ODT could be used appropriately on an SDR display. This would let you see lighting ratios correctly, at the expense of not seeing exactly what the final image would look like.
There is a problem in applying the RRT in the red direction because of how intense the reds get. The feedback was to try to control that (as part of the process Doug mentioned). So some of the changes in the RRT are not necessarily an optional ‘Look’ as described in the paper, but rather attempts to patch objectionable parts of the rendering so that, at first glance, the image appears in the ballpark. We had DPs say that without some of the things that were done, “they might be fired” if they showed that image. That alarmed us, and so the correction may have gone too far. Sometimes one fix then led to another to compensate. If you have a good sense of the target needed for everyone, a cleaner implementation would really help. I predict this will get looked at for sure.
Early in the development of the RRT, the tone curve ACES used was very similar to the Vision3 print-through curves; changes were made to lower that contrast and clean up the roll-offs somewhat, but it still has an overall S-shape that resembles Vision3. The design choices were targeted at cinema, with adjustments for HDR/SDR.
The prevalence of different output targets (HLG/PQ/game engines/VR) is highlighting that the RRT/ODT viewing transforms have some limits that need to be thought about. Timing of contrast should still be completely under the control of the user, and where that doesn’t happen it should be fixed. Even in the current system you can make ODTs that don’t have the roll-offs, but then there would be obvious clipping in the image that would have to be timed out, and you are back in an arena where each grade has to be unique for each display device. The main justification for the ODT system was to allow changes in the reproduction for each device, so that any contrast changes due to viewing conditions could be dealt with and the same creative grade could still be used. The ACES Viewing Transforms are a do-your-best on each device with different viewing conditions. The dim vs. dark surround issue is really important in ACES to get the right results.
I see some crossed streams in the thread about the ‘Look’ of the RRT: sometimes the discussion is about the red ‘extras’ being the look that needs to be moved, and sometimes it is about removing the 1.5 gamma present in the RRT to make it neutral (i.e. if you take an ACEScc image with no viewing transform and apply a slope of about 1.5, middle luminances will be about the same).
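For anyone unfamiliar with the maths behind that "slope of about 1.5" remark, here is a small illustrative sketch. The encode/decode formulas are the published ACEScc mid-range segment (S-2014-003); the example shows only what a pivoted slope in ACEScc does to linear light, not that it reproduces the RRT exactly.

```python
import math

# ACEScc is a pure log2 encoding (for values above its toe), so scaling
# the distance from mid grey by 1.5 in ACEScc is the same as applying a
# 1.5 power around 0.18 in linear light.

def acescc_encode(lin):
    return (math.log2(lin) + 9.72) / 17.52

def acescc_decode(cc):
    return 2.0 ** (cc * 17.52 - 9.72)

mid = acescc_encode(0.18)                    # ~0.4135, ACEScc mid grey
x = 0.72                                     # +2 stops in linear

cc = mid + 1.5 * (acescc_encode(x) - mid)    # slope 1.5 pivoted at mid grey
print(acescc_decode(cc))                     # ~1.44
print(0.18 * (x / 0.18) ** 1.5)              # same value via a 1.5 power
```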
Putting in greater or lower contrast, and systematically enhancing certain colors, has always been part of the LMT approach. Regrettably, the tools for making these are not often in the hands of users. (There is a LoCon LMT in the ACES distribution, but it is often not apparent in user interfaces, or even included in a vendor distribution.) Yet another thing that needs more work.
Since ACES Next is in a feedback-gathering phase, this is a good discussion to have.
I really like the tone mapping in the RRT, however, I do agree with comments about the contrast.
My way of dealing with this is to always start with a contrast setting of 0.85.
This keeps what I like from the RRT curve but gives me more room to tweak.
It’s how I produced all the recent CML camera evaluations http://www.cinematography.net/UHD-Digital%20Cinema%20Camera%20Evaluations%202017.html
You can get EXRs of all of these in ACES space http://www.cinematography.net/EXR-EVALS.html
The contrast in the EXRs is obviously untouched, so you can easily compare my output in the UHD QTs with the originals.
I find that the really simple approach of ACES works well with my way of shooting as long as I make that contrast adjustment initially.
It’s also important to know what the contrast pivot is set to. Resolve in ACES mode defaults to a different pivot (approx ACEScc mid grey) than it does in YRGB mode (0.5).
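To make the pivot point concrete, here is a generic pivoted-contrast sketch. The exact formula Resolve uses internally is an assumption here; the point is only that the choice of pivot (ACEScc mid grey at roughly 0.4135 versus 0.5) changes which tones move.

```python
# A common pivoted contrast operation leaves the pivot value untouched and
# stretches everything around it.

def pivoted_contrast(value, contrast, pivot):
    return (value - pivot) * contrast + pivot

acescc_mid_grey = 0.4135
print(pivoted_contrast(acescc_mid_grey, 0.85, acescc_mid_grey))  # unchanged: 0.4135
print(pivoted_contrast(acescc_mid_grey, 0.85, 0.5))              # pulled toward 0.5
```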
I’m using both Resolve and Prelight/Daylight; the effect is the same.
There are untouched EXRs here http://www.cinematography.net/EXR-EVALS.html
These are for…
ARRI Alexa SXT, Blackmagic Ursa Mini-Pro, Canon C200, Canon C300-2 & Odyssey, Canon C700 4.5K, RED Dragon, RED Helium 7K, RED Scarlet, Sony F5, Sony F55, Sony F65, Varicam LT & Odyssey, Varicam Pure
All EXRs were produced by the manufacturers’ own software, with the exception of the Varicam and Odyssey material, which was handled in Resolve.
Thank you Geoff. There is something that I don’t understand though. Where, in Resolve software, do you set the contrast to 0.85? Doesn’t one need to do this before the ACES transform? It’s my impression that all node grades are after the transform, so I want to learn how to change the contrast before the transform.
Thanks!
What do you mean by “The ACES transform”? There is an input transform (IDT) and an output transform (concatenation of RRT and ODT). Grade nodes are applied between these two, so after the transform to your selected working space (ACEScc or ACEScct) but before the RRT, which is what applies the “ACES look”. So you are reducing contrast in a scene referred space, not applying highlight and shadow roll off and then reducing contrast, which would probably give an undesirable result.
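As a conceptual sketch of that ordering (every transform below is an identity stand-in, not real code from any grading application; only the position of the grade relative to the output transform is the point):

```python
# Conceptual sketch of the node order described above. Every transform is a
# stand-in (identity function); what matters is where the grade sits.

def idt(camera_value):        return camera_value   # stand-in for the real IDT
def to_working_space(aces):   return aces           # stand-in for ACES -> ACEScc/ACEScct
def from_working_space(cc):   return cc             # stand-in for the inverse
def rrt_odt(scene_linear):    return scene_linear   # stand-in for RRT + ODT

def view(camera_value, grade, apply_output_transform=True):
    graded = from_working_space(grade(to_working_space(idt(camera_value))))
    return rrt_odt(graded) if apply_output_transform else graded

# Toggling the output transform never changes the graded scene data:
print(view(0.18, lambda v: v * 1.1, apply_output_transform=False))
```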
My issue is that reducing contrast in a node works fine to pull the image out of the toe and shoulder. But I almost always find that I also need to extend the maximum black down and the maximum white up. I can’t do this in a node without re-entering the toe and shoulder of the RRT (or maybe the ODT; hey, I’m new to this terminology :) ). That’s the frustration.
I think in the post that you quoted, I wrote in reverse. Sorry. Or maybe there was something in Geoff’s post that confused me for a moment…
That is true. It is part of the way ACES is designed to work. You cannot operate directly on the display referred values after the RRT/ODT.
My understanding is that Geoff’s use of contrast reduction is doing exactly what he wants it to do, that is pull the shadows and highlights slightly out of the toe and shoulder curves, to make more of the shot range visible.
Using, for example, OpenColorIO or ctlrender to incorporate an inverse ODT and RRT, it would be possible to create a LUT which could be applied before the RRT and which would have the effect of shifting the black and white points after the ODT without being affected by the toe and shoulder curves.
However, this approach would not be interactive, and would be specific to the ODT it was designed for. It would therefore not fit the ACES goal of creating graded image data which would work as intended with a range of current and future ODTs. Effectively modifying an SDR ODT to map white to a different place would almost certainly have an undesirable result if the SDR ODT was switched out for an HDR ODT.
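For illustration only, here is the shape of that "baked" idea. The rrt_odt() function and its inverse below are simple stand-ins, not the actual ACES transforms; in practice you would build them with OpenColorIO or ctlrender as described above, and the result would carry exactly the ODT-specific limitation just mentioned.

```python
# Bake a display-referred black/white point adjustment into a scene-referred
# correction by sandwiching it between a forward and an inverse output
# transform. Everything here is a placeholder for the real transforms.

def rrt_odt(x):            # stand-in for RRT + a specific SDR ODT
    return x / (x + 0.2)

def inverse_rrt_odt(y):    # analytic inverse of the stand-in above
    return 0.2 * y / (1.0 - y)

def display_adjust(y):     # the display-referred tweak you actually want
    return min(0.999, y * 1.05)

def baked_lmt(scene_linear):
    # scene-referred in, scene-referred out, but tuned for ONE specific ODT
    return inverse_rrt_odt(display_adjust(rrt_odt(scene_linear)))

print(baked_lmt(0.18))
```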
On that topic, I wonder if it would be good to add a bright-surround option for some working environments. Not everybody involved in image creation and editing likes to work in windowless rooms.
Nick, I find that I much prefer the Photoshop approach, where the curves are not included in the transforms, and I use soft proofs to guess at what my image will look like on another device, making the necessary corrections manually, using my artistic judgement.
Sometimes “one size fits all” … only fits one size.
I think that the “ACES goal of creating graded image data which would work as intended with a range of current and future ODTs”, as currently configured, may be a wild goose chase and a creative dead end. In the absence of a standard display technology, ACES is creating a standard photographic concept to make the transforms consistent. It would be easier to just go back to analog film.
That said, the work that has been done on the ACES transforms has been quite good and extremely useful. I hope they will be made available for use in an alternative workflow.
That’s exactly what I’m trying to do. In my initial tests several years ago, I found that in really extreme cases I could still get extra shadow/highlight detail down to 0.666.
I loved getting that number! It was great for presentations, the number of the beast!
In reality 0.75 was always plenty, and as I used ACES more I ended up standardising on 0.85 for all my dailies, as it let me see what I had shot that was going to be totally gradable.
Yes, I think extending the surround compensation for other viewing environments has been brought up.
There is both an ‘average’ surround, typical of office environments, and a ‘bright’ surround, where everything in the environment is brighter than the image. There may even be subdivisions of these, so yes, it would be nice to have a model of how to handle all of them. (VR environments have also been mentioned, which is a bit harder, because in immersive environments the VR image may be its own dynamic surround.)
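As a rough illustration of what a surround compensation step typically does (the exponents below are placeholders, not the ACES constants, and real implementations apply the adjustment inside the output transform rather than as a free-standing operation):

```python
# Illustrative only: surround compensation is typically a gentle power
# applied to normalised luminance, so an image graded for one surround
# still reads correctly in another.

def surround_compensation(luminance, exponent):
    return luminance ** exponent

mid_tone = 0.18
print(surround_compensation(mid_tone, 1.02))  # exponent > 1: more displayed contrast
print(surround_compensation(mid_tone, 0.98))  # exponent < 1: less displayed contrast
```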
A nice topic to discuss.