Linear-sRGB not behaving as expected

In theory, one should be able to read in a linear EXR with Utility-Linear-sRGB and thus have a linear file that was made using sRGB primaries display the same way in ACES.

So I would expect that if I viewed an EXR in Nuke’s default configuration, and viewed the same EXR in ACES read in with Utility-Linear-sRGB, they would be identical. However, as you can see below, they look quite different: there is a notable shift in colors (the ACES version going towards green) and the blacks are crushed.
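For reference, my understanding is that Utility-Linear-sRGB is nothing but a 3×3 primaries conversion, with no tone curve involved. Here is a minimal sketch with the colour-science Python package, assuming Bradford adaptation (the exact matrix baked into a given OCIO config may differ slightly):

```python
import numpy as np
import colour  # colour-science package

# Utility-Linear-sRGB should amount to a primaries conversion only,
# here from linear sRGB (D65) to ACES2065-1 (AP0), assuming Bradford adaptation.
rgb = np.array([0.18, 0.18, 0.18])  # a neutral mid-grey in linear sRGB
aces = colour.RGB_to_RGB(
    rgb,
    colour.RGB_COLOURSPACES["sRGB"],
    colour.RGB_COLOURSPACES["ACES2065-1"],
    chromatic_adaptation_transform="Bradford",
)
print(aces)  # stays neutral: the matrix alone introduces no hue shift
```

So if the hue still shifts in the viewer, it must come from further down the chain, not from the input conversion.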

You can recreate the above by downloading the Digital Emily 2 texture files here:
http://gl.ict.usc.edu/Research/DigitalEmily2/

FWIW, if I read it in as Utility-Raw it appears bright red, but if I run that through the Utility-Look-Blue Light Artifact Fix then the colors match the image in Nuke (while still having the crushed blacks).

Here’s a side-by-side of the Utility-Linear-sRGB and the Utility-Raw with Blue Light Artifact Fix so you can more clearly see the difference in hue.

Hi @Derek,

No, no, no :slight_smile:

The problem is that your expectations are not correct. The image on the right, in your first post, is the image one should expect to see, i.e. a rendered image that went through an S-curve. If it looks too dark, crank up the exposure and make sure that your display and viewing conditions are appropriate.

Let me also paste this important quote:

Friends don’t let friends view scene-linear without an S-shaped viewing transform.

The one on the left should never be presented on a display the way you did, except for diagnostic purposes, e.g. anti-aliasing evaluation and such.


That would account for the dark appearance and the crushed blacks. However, what I am primarily concerned with is the shift in colors.

Update: I am getting pretty faithful colors with Utility-Linear-Adobe RGB. With Utility-Linear-sRGB, however, I get the green shift shown above.

The RRT is not really neutral and has some sweeteners that can be annoying depending on the context, i.e. the glow module, the red modifier and the global desaturation. They will effectively contribute to changing your colours, especially the red modifier and the global desaturation in this case. My guess is that this was arguably less visible when the RRT was designed because of the warmer D60 whitepoint.
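For illustration, the global desaturation is the only one of the three that is a plain matrix. Here is a sketch of it, with the values as I remember them from the ACES 1.x RRT CTL (treat them as assumptions and check the CTL if you need exact numbers):

```python
import numpy as np

# Sketch of the RRT global desaturation sweetener, after the ACES 1.x CTL:
# a saturation scale towards AP1 luma with factor 0.96.
AP1_RGB2Y = np.array([0.2722287168, 0.6740817658, 0.0536895174])
RRT_SAT_FACTOR = 0.96

def sat_adjust_matrix(sat, rgb2y):
    # Blend between a projection onto luma (full desaturation) and identity.
    return (1.0 - sat) * np.tile(rgb2y, (3, 1)) + sat * np.eye(3)

M = sat_adjust_matrix(RRT_SAT_FACTOR, AP1_RGB2Y)
print(M @ np.array([1.0, 0.0, 0.0]))  # a pure red bleeds slightly into G and B
```

The glow module and the red modifier, on the other hand, are value-dependent, so no single matrix can represent, or undo, them.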

Cheers,

Thomas


Hmm, is there a matrix that could undo the effect of those “sweeteners”? To some extent I guess the Adobe RGB and Blue Light Artifact Fix approaches seem to in this case, but that may be coincidental.

Not really, unfortunately. In the RAE paper we have suggested that they be removed and reintroduced in the form of LMTs.


I see you say here “The RRT will be made fully invertible, and that the various sweeteners will be moved to a default LMT.”

When the RRT is made invertible, would that mean that one could then use it as an input transform without getting illegal values over 1, as is now the case?

In this particular case (with the Digital Emily textures), where I am reading an albedo texture map made in linear space with sRGB primaries into ACES, in addition to not wanting shifts in colors, I would also like to be able to read it in without having the RRT tone curve crush the blacks. This seems to be a general issue when reading color textures made outside of ACES: the RRT’s filmic tone map changes them in undesired ways. A filmic tone curve is very desirable when viewing a render, but it is definitely not desirable to get crushed blacks on an albedo texture. We might say

“Friends don’t let friends view scene-linear without an S-shaped viewing transform, and friends don’t let friends crush the blacks on their textures.” :slight_smile:

For now the best workaround I have for that is to preprocess the image through an inverse of the display transform with an added matrix to keep the resulting pixels below 1, and then read that into the Utility-sRGB-Texture space for texture painting and rendering. That works pretty well, but the possibility of a fully invertible RRT might open up better options.
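In case it helps anyone, here is a rough sketch of that preprocess using the OCIO v2 Python bindings. The config path and the “Output - sRGB” / “ACES - ACEScg” colorspace names are assumptions based on the standard ACES configs; adjust for your setup:

```python
import numpy as np
import PyOpenColorIO as ocio  # OCIO v2 Python bindings

config = ocio.Config.CreateFromFile("aces_config.ocio")  # assumed path

# Inverse display transform: interpret display-referred sRGB pixels as the
# scene-linear values that would have produced them through the RRT+ODT.
inverse_view = config.getProcessor(
    "Output - sRGB", "ACES - ACEScg"
).getDefaultCPUProcessor()

# Display white inverts to scene-linear values well above 1.0; measure the peak.
peak = max(inverse_view.applyRGB([1.0, 1.0, 1.0]))

img = np.random.rand(512, 512, 3).astype(np.float32)  # stand-in for the texture
desc = ocio.PackedImageDesc(img, 512, 512, 3)
inverse_view.apply(desc)  # in-place inverse display transform
img /= peak               # the “added matrix”: a uniform scale to stay below 1.0
```

The result is then tagged as Utility-sRGB-Texture for painting.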

No, the values are going over 1 upon inversion because the forward path is rolling values down. This is unrelated to inversion of the sweeteners.

But this never happens because you are never, ever burning the View Transform into your textures, right?

Please do not do that; the Digital Emily textures should not require it, as they should have been processed as per the book.


No, they are not being burned in, but the blacks are getting crushed on the Digital Emily textures, as you can see here. I posted these pics from Nuke, but the exact same thing happens in Mari. The darks get crushed because of the toe of the RRT, and crushed blacks are not desirable.

What would you propose instead to get rid of unwanted crushed blacks on textures that are frequently present when bringing an sRGB image into ACES (including in this example)?

The issue is not really with Digital Emily, but on a broader level with the ability to bring any image made in the sRGB gamut into ACES in a way that translates the intent of the artist as faithfully as possible in a WYSIWYG workflow. Given that this workflow does not break any principles of PBR, it seems to me like a win-win.


Hi,

The crushed blacks are a View Transform thing, nothing to do with the textures.

  • The first thing to check is whether your display is calibrated to the sRGB standard, and then whether your Viewing Conditions are appropriate and compatible with the sRGB standard. If yes, then you can slightly over-expose your viewer to taste. On my calibrated display chain your right image looks great.
  • Then, did you check whether the crushed black values are within an appropriate albedo range? Most of Emily’s hair values are under the commonly recommended 30-50 sRGB 8-bit range, and thus are not really appropriate for albedo textures (see the sketch after this list).
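A quick way to check this, using the standard sRGB decoding:

```python
# Decode 8-bit sRGB code values to linear reflectance (standard sRGB EOTF decode).
def srgb_to_linear(cv_8bit):
    v = cv_8bit / 255.0
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

for cv in (30, 50):
    print(cv, round(srgb_to_linear(cv), 4))  # 30 -> ~0.013, 50 -> ~0.032
# Linear albedo below roughly this range is darker than most real-world
# diffuse surfaces (charcoal sits around 0.03-0.04), so a crushed look
# under a filmic tonescale is expected there.
```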

The tonescale of the RRT, or of any View Transform for that matter, has very little to do with the sRGB gamut. If your imagery is processed linearly with a Scene-Referred workflow in mind, then you are good to go; you simply work under the View Transform. It requires a little bit of time to adapt, but this is what the VFX industry has been doing for years, if not decades.


I think perhaps you are assuming the context of VFX, where one would be attempting to take a photo and use it to create the textures for a CG asset, as is the case with Digital Emily. In that case I agree with you that the best approach is to take raw photos and process them with ACR. It’s of course understandable that, seeing the example of Digital Emily here, you assume this is what I am trying to do.

So please allow me to clarify that my context is primarily animation rather than VFX. We are painting textures made from scratch for cartoons. In that texture-creation workflow it is common to begin with an sRGB image as a base layer and then add details on additional layers. The artist reading that image in of course wants the image to “look like it looks” and does not want any “automatic” shifts in either color or levels introduced against their will. Rather, they want a “faithful translation” of their image into ACES as a starting point they can build on.

It’s important to note that in this workflow all textures are painted and rendered in Utility-sRGB-Texture, working in color-managed linear space. Additionally, no files are created with illegal values, clipping or clamping, so nothing problematic is happening from the perspective of physically based rendering. We are simply adding a “preprocess” step to aid the translation of images made in sRGB space into linear space in a more faithful and predictable way, allowing artists a WYSIWYG, intuitive workflow that at the same time respects the principles of PBR. In Mari, when reading an image into the Image Manager, one simply chooses the DT32_sRGB color space (an inverse of the display transform with a matrix to keep the values below 1, sketched below). With the channel set to Utility-sRGB-Texture, when the image is projection painted into the channel Mari will convert from DT32_sRGB to Utility-sRGB-Texture.
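For anyone curious, such a colorspace can be sketched with the OCIO v2 Python API along the following lines. The config path, the 1/16.3 scale, and the colorspace names are assumptions; our actual DT32_sRGB definition differs in its details:

```python
import PyOpenColorIO as ocio

config = ocio.Config.CreateFromFile("aces_config.ocio")  # assumed path

# File-to-reference direction: inverse RRT+ODT, then a uniform scale
# (the "matrix") to keep the resulting scene-linear values below 1.0.
s = 1.0 / 16.3  # assumed: roughly the inverse-view peak at display white
group = ocio.GroupTransform()
group.appendTransform(
    ocio.ColorSpaceTransform(src="Output - sRGB", dst="ACES - ACES2065-1")
)
group.appendTransform(
    ocio.MatrixTransform(matrix=[s, 0, 0, 0,
                                 0, s, 0, 0,
                                 0, 0, s, 0,
                                 0, 0, 0, 1])
)

cs = ocio.ColorSpace(name="DT32_sRGB")
cs.setTransform(group, ocio.COLORSPACE_DIR_TO_REFERENCE)
config.addColorSpace(cs)
```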


Well, hard to assume otherwise given the OP :wink:

The issue is that any faithful translation obtained by applying some sort of Inverse View Transform stops as soon as shading more complex than an emissive or Lambertian BRDF and lighting more complex than a white skylight are involved.

Ultimately, what I would like to know is: what should your renders be faithful to? To an sRGB image rendered without an S-curve?

If that is the case, why are you using the RRT, or even ACES? It is like trying to force a cube into a cylinder: you will always be fighting the system because you are using it in a way it was not designed for.

Let me quote TB-2014-004 for reference:

The Academy Color Encoding Specification (ACES) defines a digital color image encoding appropriate for both photographed and computer-generated images. […] In the flow of image data from scene capture to theatrical presentation, ACES data encode imagery in a form suitable for creative manipulation. […]
Based on the definition of the ACES virtual RGB primaries, and on the color matching functions of the CIE 1931 Standard Colorimetric Observer, ACES derives an ideal recording device against which actual recording devices’ behavior can be compared: the Reference Input Capture Device (RICD). As an ideal device, the RICD would be capable of distinguishing and recording all visible colors, and of capturing a luminance range exceeding that of any contemporary or anticipated physical camera. The RICD’s purpose is to provide a documented, unambiguous, fixed relationship between scene colors and encoded RGB values. When a real camera records a physical scene, or a virtual camera (i.e. a CGI rendering program) creates an image of a virtual scene, an Input Device Transform (IDT) converts the resulting image data into the ACES RGB relative exposure values the RICD would have recorded of that same subject matter.

From this introduction, we have gleaned that the system is designed to manipulate physical quantities, whether they are generated from the real world or via CG rendering. Quoting again:

ACES images are not directly viewable for final image evaluation, much as film negative or files containing images encoded as printing density are not directly viewable as final images. As an intermediate image representation, ACES images can be examined directly for identification of image orientation, cropping region or sequencing; or examination of the amount of shadow or highlight detail captured; or comparison with other directly viewed ACES images. Such direct viewing cannot be used for final color evaluation. Instead, a Reference Rendering Transform (RRT) and a selected Output Device Transform (ODT) are used to produce a viewable image when that image is presented on the selected output device.

Then we learn that a View Transform, i.e. the RRT, is required to view ACES imagery. Quoting again:

Practical conversion of photographic or synthetic exposures to ACES RGB relative exposure values requires procedures for characterizing the color response of a real or virtual image capture system.

i.e. processing as per the book! :slight_smile: The Emily dataset you linked should be close to that. Quoting again:

Encoding in ACES does not obsolete creative judgment; rather, it facilitates it.

In your case, and from what you have been describing over the past weeks, I don’t think it really does; you are really twisting the arm of the system.

That being said, the various workflows you are talking about are contextually fine. My worry, since you mentioned in other threads that you are teaching students, is that they become standard practice. That would be counter-productive for your students.

The paragraphs quoted above are the most important for understanding what the system was designed to accomplish. This is what everybody should have in mind when using ACES; subsequently, if required to deviate for practical reasons, feel free to do so, but always keep the purpose of the system in mind.

To restate it, the question to ask yourself is whether ACES is the right tool for your cartoon renders.

Cheers,

Thomas


I think we are still talking past each other here, Thomas :slight_smile: When I speak of “faithful translation” I am not talking about rendering, but about texture painting.

It is exactly parallel to picking a color when painting textures. As an artist I want to pick the color I want to paint and have it paint that color. If it instead painted a color that was a different hue than the color I picked or darker than what I picked, that would be a frustrating tool to work with. I want the color picker to faithfully give me the colors I picked, and I likewise want the same when projection painting.

When working in a linear paint program I am picking colors and viewing them as they will appear through the view transform. That’s fine, and I can still perceptually pick the color I want for my texture. This works fine for color picking in Mari with ACES: I pick mustard yellow and I get mustard yellow. I just want something similar with projection painting using an sRGB image, i.e. I want the colors I project to be the colors I chose. I want to pick an image with mustard yellow and get that when I projection paint.

I don’t see how that in any way conflicts with rendering through a filmic view transform or with a PBR workflow (which I have been happily doing for years). Again, I am not talking about rendering at all and very much agree with everything you are saying about rendering. I am talking about texture painting, and specifically about choosing the colors that I intend to have as they will appear when viewed through the display transform. I do think it should be standard practice that if I pick mustard yellow, I get exactly that. I don’t see how that conflicts with PBR; indeed, if we are talking about the color picker, it does not. So why should it when we are talking about the same thing with projection painting? A pixel color is a pixel color.

Shaders and textures should not be dissociated, because then this type of discussion ensues :slight_smile: They are two sides of the same coin.

I will try to put it another way: an albedo/diffuse texture is nothing more than a coloured plane normal to the optical axis of the camera and illuminated with a light aligned to that axis, i.e. an albedo texture is merely a special case of shading/rendering. It might as well be an emissive plane rendered in the scene. You would surely not review a rendered image without a View Transform? No! I know that you would not, at least not today, after having roamed ACEScentral for weeks :wink: And remember:

ACES images are not directly viewable for final image evaluation, much as film negative or files containing images encoded as printing density are not directly viewable as final images.
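To put a number on the plane analogy, a minimal sketch assuming a Lambertian BRDF with the irradiance chosen as E = π:

```python
import math

# Lambertian plane facing the camera, lit head-on:
# outgoing radiance L = (albedo / pi) * E * cos(theta).
albedo = 0.42           # arbitrary diffuse colour value
E, theta = math.pi, 0.0
L = (albedo / math.pi) * E * math.cos(theta)
assert math.isclose(L, albedo)  # the render of the texture is the texture itself
```

So any rule that applies to reviewing renders, View Transform included, applies equally to reviewing textures.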

If the display-referred non-Look Look of your textures is the Look you want to see when authoring them, you are, again, absolutely free to do so. You will most likely not be able to do that in any major studio nowadays, though. And, as I mentioned, you will not be able to get that non-Look Look under the View Transform when producing renders anyway, never ever :slight_smile:

I will not venture into colour picker territory because it is too complex and contextual (it depends on what you pick, how the colour picker is colour managed, etc.) and would only complicate an already complex discussion.

I would be keen to see some images of what you are working on; cartoon is a broad category and it might help the discussion.

Your last post confuses me because it seems to assume I am attempting to view textures/colors without the View Transform. That is absolutely not the case. The “look I want to see when authoring [textures]” is what I see through the view transform. My desire when painting color is to have the color I want be the color I get, as seen through the view transform. I get precisely that with the color-managed color picker in Mari working in ACES.

Really, my primary issue with ACES is the “sweeteners” and how they shift the colors. It sounds like they may eventually be removed, and that you and I are on the same page in suggesting they should not be locked into the view transform. I look forward to “ACES with no artificial sweeteners added.”

When I say “cartoon” I’m thinking of something like Cloudy with a Chance of Meatballs or The Secret Life of Pets.

Hi Derek,

One of your recent posts on what your expectations are was:

Given this, plus your OP, plus the many, many threads you started around that topic, I hope you will concede that I was not confident that your understanding of the workflow was solid. :slight_smile:

Cloudy and all the recent Pixar, Disney & co. movies should fall quite well under the prescribed workflow.

Yes, I still stand by that statement. To quote from the ACES homepage:

ACES ensures a consistent color experience that preserves the filmmaker’s creative vision.

The creative vision is to have the film look as desired on the screen. That’s what I want too.

The reason for the “many many threads I started around that topic” is that, while I am very familiar with working in scene-referred linear through a filmic view transform, there are certain aspects of ACES in particular that hinder the pursuit of that creative vision, and so I’ve been trying to figure out what those little gremlins are exactly. I think I’ve pretty much narrowed it down to the color shifts of the “sweeteners”.

Let me express my gratitude to you for helping me identify that.

Which is absolutely fair, and the system should not prevent you from achieving it. When we authored the RAE, I was (and still am) in the camp of people thinking that the RRT should present images with less contrast and should be more “averagely average”; to be frank, I actually prefer ARRI’s K1S1, which is less aggressive. That being said, for a given number of people not liking the RRT, with some colorists having gone to the length of inverting the RRT completely with an LMT, you will find an equal number who like it. It becomes a subjective matter of taste, and if the system gives you a means to adjust it to your taste, which it does, then it is fine.
