Linear-sRGB not behaving as expected

No, the values are going over 1 upon inversion because the forward path is rolling values down. This is unrelated to inversion of the sweeteners.

But this never happens because you are never ever burning the View Transform in your textures right?

Please do not do that, the Digital Emily textures should not require that, they should have been processed as per the book.

No, they are not being burned in, but the blacks are getting crushed on the Digital Emily textures, as you can see here. I posted these pics from Nuke, but the exact same thing happens in Mari. The darks get crushed because of the toe on the RRT, and crushed blacks are not desirable.

What would you propose instead to get rid of the unwanted crushed blacks that frequently appear when bringing an sRGB image into ACES (including in this example)?

The issue is not really with Digital Emily, but on a broader level with the ability to translate any image made in the sRGB gamut into ACES in a way that preserves the intent of the artist as faithfully as possible in a WYSIWYG workflow. Given that this workflow does not break any principles of PBR, it seems to me like a win-win.


The _crushed blacks_ are a View Transform thing, nothing to do with the textures.

  • The first thing to check is whether your display is calibrated to the sRGB standard, and then whether your Viewing Conditions are appropriate and compatible with that standard. If yes, then you can slightly over-expose your viewer to taste. On my calibrated display chain your right image looks great.
  • Then, did you check whether the crushed black values are within an appropriate albedo range? Most of Emily’s hair values are under the commonly recommended 30–50 sRGB 8-bit floor, and thus are not really appropriate for albedo textures.
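A check like the second point is easy to automate. Below is a minimal sketch, assuming an 8-bit sRGB-encoded texture and using Rec. 709 luma on the raw code values as a rough screening heuristic (not a colorimetric measurement); the function name and the floor of 30 are illustrative:

```python
import numpy as np

def fraction_below_albedo_floor(texture_8bit, floor=30):
    """Fraction of pixels whose luma falls below a recommended albedo floor.

    `texture_8bit` is an (H, W, 3) uint8 array of sRGB code values; the
    Rec. 709 luma weights are applied to the code values directly, which
    is only a screening heuristic, not a colorimetric measurement.
    """
    luma = texture_8bit[..., :3] @ np.array([0.2126, 0.7152, 0.0722])
    return float(np.mean(luma < floor))

# A uniformly dark texture (code value 20) sits entirely below the floor:
dark = np.full((4, 4, 3), 20, dtype=np.uint8)
print(fraction_below_albedo_floor(dark))  # → 1.0
```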

The tonescale of the RRT, or of any View Transform for that matter, has very little to do with the sRGB gamut. If your imagery is processed linearly with a Scene-Referred workflow in mind, then you are good to go; you simply work under the View Transform. It requires a little bit of time to adapt, but this is what the VFX industry has been doing for years, if not decades.

I think perhaps you are assuming the context of VFX where one would be attempting to take a photo and use that to create the textures for a CG asset, as is the case with Digital Emily. In that case I agree with you that the best approach is to take raw photos and process them with ACR. It’s of course understandable that when you see the example of Digital Emily here that you assume that this is what I am trying to do.

So please allow me to clarify that my context is primarily animation rather than VFX. There we paint textures from scratch for cartoons. In that texture-creation workflow it would be common to begin with an sRGB image as a base layer and then add details on additional layers. The artist reading that image in of course wants the image to “look like it looks” and does not want any “automatic” shifts in either color or levels introduced against their will. Rather, they want a “faithful translation” of their image into ACES as a starting point they can build on.

It’s important to note that in this workflow all textures are painted and rendered in Utility-sRGB-texture, working in a color-managed linear workflow. Additionally, no files are created with illegal values, clipping, or clamping. So there is nothing problematic happening from the perspective of physically based rendering. We are simply adding a “preprocess” step to aid the translation of images made in sRGB space into linear space in a more faithful and predictable way, allowing artists a WYSIWYG, intuitive workflow that, at the same time, respects the principles of PBR. In Mari, when reading an image into the Image Manager, one simply chooses the DT32_sRGB color space (an inverse of the display transform with a matrix to keep the values below 1). With the channel set to Utility-sRGB-texture, when the image is projection painted into the channel, Mari will convert from DT32_sRGB to Utility-sRGB-texture.
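To make the round-trip idea concrete, here is a minimal numerical sketch. It is an illustration only: a simple Reinhard curve stands in for the real view transform tonescale (the actual RRT, and Mari’s DT32_sRGB with its matrix, are far more involved), and the function names are hypothetical:

```python
import numpy as np

def srgb_eotf(v):
    """Piece-wise sRGB decoding (IEC 61966-2-1) to display-linear."""
    v = np.asarray(v, dtype=np.float64)
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

def tonescale(x):
    """Stand-in forward view transform (Reinhard), not the actual RRT."""
    return x / (1.0 + x)

def inverse_tonescale(y):
    """Exact inverse of the stand-in curve, valid for y < 1."""
    return y / (1.0 - y)

# Preprocess an sRGB pixel: decode to display-linear, then apply the
# inverse tonescale to obtain scene-linear texture values.
srgb_pixel = np.array([0.5, 0.25, 0.1])
scene_linear = inverse_tonescale(srgb_eotf(srgb_pixel))

# Viewing the texture through the (stand-in) view transform recovers the
# original display-linear values, i.e. the image "looks like it looks".
roundtrip = tonescale(scene_linear)
print(np.allclose(roundtrip, srgb_eotf(srgb_pixel)))  # → True
```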

Well, hard to assume otherwise given the OP :wink:

The issue is that any faithful translation obtained by applying some sort of Inverse View Transform stops as soon as shading more complex than an Emissive or Lambertian BRDF and lighting more complex than a white skylight are involved.

Ultimately, what I would like to know is: what should your renders be faithful to? To an sRGB image rendered without an S-curve?

If that is the case, why are you using the RRT or even ACES? It is like trying to force a cube into a cylinder and you will always be fighting the system because you are using it in a way it is not designed for.

Let me quote TB-2014-004 for reference:

The Academy Color Encoding Specification (ACES) defines a digital color image encoding appropriate for both photographed and computer-generated images. […] In the flow of image data from scene capture to theatrical presentation, ACES data encode imagery in a form suitable for creative manipulation. […]
Based on the definition of the ACES virtual RGB primaries, and on the color matching functions of the CIE 1931 Standard Colorimetric Observer, ACES derives an ideal recording device against which actual recording devices’ behavior can be compared: the Reference Input Capture Device (RICD). As an ideal device, the RICD would be capable of distinguishing and recording all visible colors, and of capturing a luminance range exceeding that of any contemporary or anticipated physical camera. The RICD’s purpose is to provide a documented, unambiguous, fixed relationship between scene colors and encoded RGB values. When a real camera records a physical scene, or a virtual camera (i.e. a CGI rendering program) creates an image of a virtual scene, an Input Device Transform (IDT) converts the resulting image data into the ACES RGB relative exposure values the RICD would have recorded of that same subject matter.

From this introduction, we have gleaned that the system is designed to manipulate physical quantities, whether they are generated from the real world or via CG rendering. Quoting again:

ACES images are not directly viewable for final image evaluation, much as film negative or files containing images encoded as printing density are not directly viewable as final images. As an intermediate image representation, ACES images can be examined directly for identification of image orientation, cropping region or sequencing; or examination of the amount of shadow or highlight detail captured; or comparison with other directly viewed ACES images. Such direct viewing cannot be used for final color evaluation. Instead, a Reference Rendering Transform (RRT) and a selected Output Device Transform (ODT) are used to produce a viewable image when that image is presented on the selected output device.

Then we learn that a View Transform, i.e. the RRT is required to view ACES imagery. Quoting again:

Practical conversion of photographic or synthetic exposures to ACES RGB relative exposure values requires procedures for characterizing the color response of a real or virtual image capture system.

i.e. processing as-per-the-book! :slight_smile: Which the Emily dataset you linked should be close to. Quoting again:

Encoding in ACES does not obsolete creative judgment; rather, it facilitates it.

In your case, and from what you have been describing for the past weeks, I don’t think it really does; you are really twisting the arm of the system.

That being said, the various workflows you are talking about are contextually fine. My worry, since you mentioned in other threads that you are teaching students, is that they become standard practice. It would be counter-productive for your students.

The paragraphs quoted above are the most important for understanding what the system was designed to accomplish. This is what everybody should have in mind when using ACES; subsequently, if required to deviate for practical reasons, feel free to do so, but always keep the purpose of the system in mind.

To restate it, the question to ask yourself is whether ACES is the right tool for your cartoon renders.



I think we are still talking past each other here Thomas :slight_smile: When I speak of “faithful translation” I am not talking about rendering, but about texture painting.

It is exactly parallel to picking a color when painting textures. As an artist I want to pick the color I want to paint and have it paint that color. If it instead painted a color that was a different hue than the color I picked or darker than what I picked, that would be a frustrating tool to work with. I want the color picker to faithfully give me the colors I picked, and I likewise want the same when projection painting.

When working in a linear paint program I am picking colors and viewing them as they will appear through the view transform. That’s fine, and I can still perceptually pick the color I want to have for my texture. This works fine for color picking in Mari with ACES. I pick mustard yellow and I get mustard yellow. I just want something similar with projection painting using an sRGB image, i.e. I want the colors I project to be the colors I want. I want to pick an image with mustard yellow and get that when I projection paint.

I don’t see how that is in any way in conflict with rendering with a filmic view transform or a PBR workflow (which I have been happily doing for years). Again, I am not talking about rendering at all and very much agree with everything you are saying about rendering. I am talking about texture painting, and specifically about choosing the colors that I intend to have as they will appear when viewed through the display transform. I do think it should be standard practice that if I pick mustard yellow, I get exactly that. I don’t see how that conflicts with PBR. Indeed, if we are talking about the color picker, it does not. So why should it when we are talking about the same thing with projection painting? A pixel color is a pixel color.

Shaders and Textures should not be dissociated, because then this type of discussion ensues :slight_smile: They are two sides of the same coin.

I will try to put it another way: an albedo/diffuse texture is nothing but a coloured plane normal to the optical axis of the camera and illuminated with a light aligned to the optical axis, i.e. an albedo texture is merely a special case of shading/rendering. It might as well be an emissive plane rendered in the scene. You would surely not review a rendered image without a View Transform? No! I know that you would not, at least not today, after having roamed ACEScentral for weeks :wink: And remember:

ACES images are not directly viewable for final image evaluation, much as film negative or files containing images encoded as printing density are not directly viewable as final images.

If the Display-Referred Non-Look Look of your textures is the Look you want to see when authoring them, you are, again, absolutely free to do so. You will most likely not be able to do that at any major studio nowadays though. And, as I mentioned, you will not be able to get that Non-Look Look under the View Transform when producing renders anyway, never ever :slight_smile:

I will not venture into colour picker territory because it is too complex and contextual (it depends on what you pick, how the colour picker is colour managed, etc.) and would only serve to complicate an already complicated discussion.

I would be keen if you were able to share some images of what you are looking at, cartoon is a broad category and maybe it would help the discussion.

Your last post confuses me because it seems to assume I am attempting to view textures/colors not through the View Transform. That is absolutely not the case. The “look I want to see when authoring [textures]” is what I see through the view transform. My desire when painting color is to have the color I want be the color I get as seen through the view transform. I get precisely that with the color managed color picker in Mari working in ACES.

Really my primary issue with ACES are the “sweeteners” and how they shift the colors. It sounds like that may be eventually removed and that you and I are on the same page in suggesting that they should not be locked into the view transform. I look forward to “ACES with no artificial sweeteners added.”

When I say “cartoon” I’m thinking of something like Cloudy with a Chance of Meatballs or Secret Life of Pets.

Hi Derek,

One of your last writings on what your expectations are was:

Given this, plus your OP, plus the many many threads you started around that topic, I hope you will concede that I was not confident that your understanding of the workflow was solid. :slight_smile:

Cloudy and all the late Pixar, Disney & co movies should fall quite well under the prescribed workflow.

Yes, I still stand by that statement. To quote from the ACES homepage:

ACES ensures a consistent color experience that preserves the filmmaker’s creative vision.

The creative vision is to have the film look as desired on the screen. That’s what I want too.

The reason for the “many many threads I started around that topic” is that, while I am very familiar with working in scene-referred linear through a filmic view transform, there are certain aspects of ACES in particular that hinder the pursuit of that creative vision, and so I’ve been trying to figure out what those little gremlins are exactly. I think I’ve pretty much narrowed it down to the color shifts of the “sweeteners”.

Let me express my gratitude to you in helping me to identify that.

Which is absolutely fair, and the system should not prevent you from achieving it. When we authored the RAE, I was (and still am) in the camp of people thinking that the RRT should present images with less contrast and should be more “averagely average”; to be frank, I actually prefer ARRI’s K1S1, which is less aggressive. That being said, for a given number of people not liking the RRT, with some colorists having gone to the length of inverting the RRT completely with an LMT, you will find an equal number liking it. It becomes a subjective matter of taste, and if the system gives you a means to adjust to your taste, which it does, then it is fine.


Well said Thomas. Amen.

Is that something that can be done with OCIO? My understanding was that, since the RRT and ODT have been combined into a single LUT, it was not possible to remove the RRT.

Here’s a thought: Based on what you say above “the RRT should present images with less contrast and should be more “averagely average”, to be frank, I actually prefer ARRI’s K1S1 which is less aggressive”

and this here, which echoes what I was noticing with the green Emily in my OP:

“The ACES Reference Rendering Transform has a distinct look that I think isn’t very natural. The RRT has an extreme shoulder that extends down towards the midtones, reducing saturation and shifting skin tones towards green.”

and this here

“Ideally, the RRT should not exist, or should at least be neutral, imparting no predetermined ‘Look’ onto the image…Ideally, the ODT should take images directly from ACES space, with no RRT involvement.”

What if the RRT was removed and instead there was an LMT where one could put the RRT filmic tone curve or, if desired, something less aggressive like you mentioned? That way one would still be able to have an S-shaped curve (which I think we both agree is very important for rendering), but no particular “look” would be imposed on the filmmaker (it is just as important for artists to have tools that allow them to achieve their vision rather than imposing one on them).

The thing to understand is that as soon as you do something to the image, you are imposing a Look on it. Not doing anything is also akin to picking a Look, i.e. a Non-Look Look. Basically, there is no such thing as No Look.

It is easier to find negative commentary online about something rather than positive. People tend to grab the pen more often in case of problems rather than when everything is awesome.

I think that the ACES RRT is rather appreciated; if not, ACEScentral would have attracted negative comments about it via all the Unreal Engine developers, who number in the millions, yes, many millions :slight_smile: Brian Karis, who implemented ACES support in UE4, really likes it. I will not quote him, but he was super positive. Now, importantly, he introduced a tiny technical transformation: he over-exposed everything with a gain of 1.45, i.e. a gain LMT, and that was all that was required to satisfy the artists at Epic Games. This implementation has been used successfully by most UE4 developers for a few years now. My gut feeling is that this was needed because most artists work under viewing conditions that are simply too bright and are used to the sRGB No Look.
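In scene-linear ACES values, such a gain LMT is just a multiply applied before the RRT and ODT. A minimal sketch; the function name is illustrative, and the 1.45 figure is the one reported for the UE4 implementation:

```python
import numpy as np

def gain_lmt(aces_rgb, gain=1.45):
    """Exposure-gain Look Modification Transform: a scene-linear
    multiply applied before the RRT/ODT."""
    return np.asarray(aces_rgb, dtype=np.float64) * gain

# Mid grey (0.18) is lifted to 0.261 before rendering to the display:
print(gain_lmt(np.array([0.18, 0.18, 0.18])))  # → [0.261 0.261 0.261]
```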

When I say that I’m “in the camp of people thinking that the RRT should present images with less contrast and should be more ‘averagely average’”, I’m saying it while assessing ACES imagery on a display costing multiple thousands of dollars, calibrated with a colorimeter costing even more, and under relatively well-controlled Viewing Conditions that are metered and tuned accordingly. Only when I’m confident that my entire viewing environment is appropriate to review imagery do I allow myself to express subjective creative comments.

With that in mind, it is effectively critical that your viewing environment is appropriate, thus a few questions:

  • What is your display?
  • How is it calibrated?
  • What are your viewing conditions? Can you give me illumination information at your desk?

Another critical point, if you are using sRGB, is that most consumer displays adopt a naive and simple 2.2 gamma EOTF approximation instead of the piece-wise function defined in the IEC 61966-2-1:1999 Standard. As a result, ACES imagery encoded for sRGB is decoded with a mismatched EOTF, which effectively crushes blacks:
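The divergence between the two decodings is largest near black and is easy to quantify; here is a small sketch comparing the piece-wise sRGB EOTF with a plain 2.2 gamma on a dark code value:

```python
import numpy as np

def srgb_eotf(v):
    """Piece-wise sRGB EOTF from IEC 61966-2-1:1999."""
    v = np.asarray(v, dtype=np.float64)
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

def gamma_22_eotf(v):
    """Naive pure-power 2.2 gamma used by many consumer displays."""
    return np.asarray(v, dtype=np.float64) ** 2.2

# For a dark sRGB code value, the naive gamma renders roughly 8x darker,
# i.e. the shadows are crushed relative to the intended decoding:
code_value = 0.02
print(float(srgb_eotf(code_value)))      # → ~0.00155
print(float(gamma_22_eotf(code_value)))  # → ~0.00018
```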

If you remember, I actually commented that the right image of the OP Emily was looking fine to me; I was reviewing it in my calibrated environment, but are you? One recommendation I could make is to try to look at ACES imagery under Theatrical Exhibition Viewing Conditions to appreciate the original Look intent.

All that to say: I think you should first try to use the system as it is intended, under a proper viewing environment. See where it falls short in your context, and then proceed with incremental, small changes as required. However, I would never ever go as far as changing the whole RRT. It was designed under perfect viewing conditions, and if you don’t have access to a Theatre or DI suite, you will probably do more harm than good.

@jim_houston was involved significantly in the RRT design and might have extra recommendations to offer.


Let me clarify that I am not advocating for removing the RRT, nor am I advocating for changing it (well maybe I am a little bit there). But mostly I am advocating making it more modifiable by decoupling it from the ODT so one would have the option of leaving it as is or possibly tweaking it, if one wanted for example something less aggressive. In other words, I’m advocating for choice.

Ha! :slight_smile: Hard to think this is not the case when reading “What if the RRT was removed and instead there was an LMT where one could put the RRT filmic tone curve or if desired one could put something less aggressive like you mentioned.”

The RRT is already decoupled from the ODT by design, this is exactly how the system works and in that design, an LMT prior to the RRT is the prescribed way to modify the Look. Note that the new SSTS will effectively couple them but this is another discussion.

If you need something less aggressive, that is where you put a contrast reduction or a gain, like Epic did.
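To illustrate the prescribed ordering, here is a minimal sketch: the LMT operates on ACES scene-linear values before the RRT and ODT are applied. The contrast-reduction LMT shown is a simple power function pivoted around mid grey; everything here is a hypothetical stand-in, not an actual ACES transform:

```python
import numpy as np

def contrast_lmt(aces_rgb, contrast=0.9, pivot=0.18):
    """Reduce contrast in scene-linear, pivoting around mid grey (0.18)."""
    a = np.asarray(aces_rgb, dtype=np.float64)
    return pivot * (a / pivot) ** contrast

def view(aces_rgb, lmt, rrt, odt):
    """Prescribed ACES viewing pipeline ordering: LMT -> RRT -> ODT."""
    return odt(rrt(lmt(aces_rgb)))

# Mid grey is untouched by the pivoted LMT, while values above the pivot
# are pulled down and values below it are lifted:
print(contrast_lmt(np.array([0.18]))[0])  # → 0.18
```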

I notice that you haven’t answered my questions about your viewing environment, those are the most important things to address before doing anything else.

Yes that’s a fair point.

Okay, I’ll certainly look into that with the tone curve, but are you also suggesting that this would be a way to undo the color shifts of the sweeteners? I thought you had said earlier (post 7) that that wasn’t really possible.

As far as answering your questions:

  • What is your display? sRGB
  • How is it calibrated? sRGB
  • What are your viewing conditions? random/varied

So you might say I’m the consumer-grade scenario, while you have optimized ideal viewing conditions. But the stated goal of ACES is to have an image look good in ideal and also in non-ideal conditions, for it to look good in the theater, but also for it to look good on Netflix on a laptop, right? :slight_smile:

Gamma 2.2 or sRGB? :wink:

The mission is to deliver images that look great under the supported viewing environments, i.e. the ones covered by the ODTs. The assumption, though, is that viewers adhere to them; if not, well, one cannot expect the system to deliver good results.

Thanks for all your input Thomas, it gives me a lot of food for thought. Hope you have a great weekend!