Utility-rec.709-camera as diffuse texture

I am aware that the general wisdom is to use utility-sRGB-texture as the color space for albedo/diffuse texture maps, because using an inverse of the Output Transform instead can produce illegal values above 1, making the CG object light-emitting.

I’ve tested this and confirmed that a raw value of 1 on a texture read with an inverse Output Transform does indeed map to 16.3, while the same value of 1 on a texture read with utility-sRGB-texture returns 1.

However, I also note that a value of 1 read with the input color space utility-rec.709-camera (which more faithfully returns the color of the video sRGB image “of unknown origin”) does not map to 16 either, but rather maps to 1.

So it would appear that using utility-rec.709-camera does not have the problematic issue of illegal colors for the albedo shader.

Am I missing something?


You should really use Utility - Rec.709 - Camera for material that comes from a camera that you know encodes data using the BT.709 OETF; those are mostly HDTV cameras. If you are unsure about your texture’s origin, the lowest common denominator is really Utility - sRGB - Texture. You can use Utility - Rec.709 - Camera, but it will not massively change the outcome anyway, as the functions are mostly the same but with a different power.
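To make the “same functions, different power” point concrete, here is a minimal Python sketch of the two decoding curves as published in their respective standards (my own transcription of the IEC 61966-2-1 and ITU-R BT.709 formulas, not pulled from the OCIO config):

```python
# Sketch of the two decodings discussed above: "Utility - sRGB - Texture"
# applies the sRGB EOTF, "Utility - Rec.709 - Camera" the inverse BT.709 OETF.

def srgb_eotf(v):
    """sRGB EOTF: encoded value -> linear light (IEC 61966-2-1)."""
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def bt709_oetf_inverse(v):
    """Inverse BT.709 OETF: encoded value -> linear light (ITU-R BT.709)."""
    return v / 4.5 if v < 0.081 else ((v + 0.099) / 1.099) ** (1 / 0.45)

for code in (0.1, 0.5, 1.0):
    print(code, srgb_eotf(code), bt709_oetf_inverse(code))
```

Both are pure transfer functions, so both map an encoded 1.0 to a linear 1.0; in the shadows the camera curve produces somewhat higher linear values than the sRGB EOTF, which is consistent with the black-crushing difference noted below.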



Yes, I think the power (a 2.4 gamma for Rec.709 as opposed to 2.2 for sRGB) is part of what I was noticing. The util-sRGB-tex thus crushes the blacks, which is aesthetically unpleasing on a texture (as opposed to on a render, where it is pleasing), whereas this is less the case with util-rec709-cam. Rec.709 is also less saturated than sRGB.

I do see, however, that neither is an inverse of the display transform; the differences are just down to how sRGB vs Rec.709 encoding works.


It is obviously contextual, but I don’t think you should pick one transformation over another at this stage, especially for aesthetic reasons. I’m taking the standpoint that if you are bringing your textures from the old-school Display Referred world into the modern Scene Referred world, with the aim of building a reusable texture library, you should strive to produce textures that are as neutral as possible. Averagely average is the goal here, as it will help to branch them and tweak them accordingly, if required, to the current show style. If your base textures carry too much look to start with, it will be extremely hard to depart from it, which is counter-productive in the long run for artists.

You certainly don’t want them to be inverse of any display transform, they need to represent plausible values either in the scene, i.e. Scene Referred or for the shader parameters they are driving. Having them bound to a particular display means that as soon as you change your display, your textures will not work anymore.

Hope that makes sense, it is super important to understand :slight_smile:



Yes that makes a lot of sense and I had not thought of that. Thanks.

Let me explain my context a bit: I’m working in education and have some students who are working in paint programs that work in linear, and others who work in paint programs that are not. So I need to have a workflow that accommodates both.

It may be that I conclude it’s best for students not to use ACES if they are working in a non-linear paint program. I certainly don’t want to teach wrong workflows at an early level, as that’s bad pedagogy.

But I’m still in the process of weighing the options. Thus a question: is it possible to invert the tone curve but not the display encoding? If the tone curve is in the RRT then it should be, but I heard it was actually part of the ODT.


My take on it is that Display Referred applications such as Adobe Photoshop should not be used anymore for texturing. I have pretty much never used it during the last decade; Mari, Nuke and more recently Substance Painter are the way to go. The only place where Photoshop still has a place is Digital Matte Painting, but besides that, it is simply not equipped for modern productions.

I don’t really understand your last question :slight_smile: The RRT is what maps the ACES values to an ideal reference display, producing OCES values; those values are then mapped to an effective target display using an ODT. The ODT is mostly responsible for fitting the OCES dynamic range into the smaller target display dynamic range.

While very elegant on paper, the drawback of this approach is that it is extremely hard to control the look, because the two separate functions, i.e. the RRT and the ODTs, are somewhat antagonistic, or at least do not play well together. If you modify the RRT, you need to change all the ODTs accordingly, making it hard to define clear ownership of which function should do what. What has been done recently is combining them back together, via the so-called Single Stage Tone Scale (SSTS) transform; the resulting transformations are simply called Output Transforms: https://github.com/ampas/aces-dev/tree/master/transforms/ctl/outputTransforms



We would not be using Photoshop for textures. What we would do, however, is use sRGB 8-bit images as a starting point for a texture map made in a program like Mari. With utility-sRGB-tex, that comes in with crushed blacks because of the toe on the display transform. I understand that one cannot do something like the dt color space in spi-VFX, as this locks the render to a particular display transform. However, the crushed blacks of the 2.2 gamma on textures in ACES make for unhappy artists. I understand that Epic’s version of ACES in Unreal Engine had similar issues and uses a more washed-out/less contrasty gamma (1.45, I think). Is there an OCIO color space that would do this that I could try instead of utility-sRGB-tex?


Oh, but this is an entirely different problem and adopted resolution: the 1.45 value is a scene exposure change, i.e. a gain, on the rendered Scene Referred values prior to entering the RRT + ODT. It has nothing to do with a texture gamma, or textures at all for that matter.



I see. But is there anything problematic about the approach I suggested? For example, I added a CDL after the matrix transforms in the utility-sRGB-tex color space, where I set the power to 0.66 (thus raising the gamma on the image) and the saturation to 1.12 (to compensate), and it solves the issue of crushed blacks.
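For reference, the tweak described above amounts to the standard ASC CDL math. A minimal sketch (the per-channel slope/offset/power and the Rec.709 luma weights follow the ASC CDL specification; wiring this into an OCIO colorspace is not shown):

```python
# Sketch of the ASC CDL math behind the tweak above (power 0.66, sat 1.12).

def apply_cdl(rgb, slope=(1, 1, 1), offset=(0, 0, 0), power=(1, 1, 1), sat=1.0):
    # Slope/offset/power per channel, clamping negatives before the power.
    out = [max(c * s + o, 0.0) ** p for c, s, o, p in zip(rgb, slope, offset, power)]
    # Saturation around Rec.709 luma, as per the ASC CDL specification.
    luma = 0.2126 * out[0] + 0.7152 * out[1] + 0.0722 * out[2]
    return [luma + sat * (c - luma) for c in out]

# A power below 1 lifts values between 0 and 1, hence the raised blacks;
# the saturation boost compensates for the resulting wash-out.
print(apply_cdl([0.05, 0.05, 0.05], power=(0.66, 0.66, 0.66), sat=1.12))
```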

Yes, because you are trying to solve aesthetic issues at a place in the system where it is not really ideal to do so. What are you going to do once you mix your textures with properly acquired Scene Referred textures? In ACES, if you need to introduce a creative look modification, you should really do that with a Look Modification Transform (LMT): LMTs Part 1: What are they and what can they do for me?



“ You certainly don’t want them to be inverse of any display transform, they need to represent plausible values either in the scene, i.e. Scene Referred or for the shader parameters they are driving. Having them bound to a particular display means that as soon as you change your display, your textures will not work anymore.”

I’d like to return to this point to hopefully understand it better. My understanding is that the display transform is basically doing two things: converting from linear to the display encoding (say, sRGB) and applying a tone curve. If I have a texture map in sRGB space and I invert the sRGB encoding on it, as well as inverting the tone curve, then it has been made linear. The only issue is values above one, which can be solved by an additional transform like the dt in spi-VFX. But the texture has now been properly linearized and has no illegal values.

When that linearized texture is rendered and a different display transform is used to take it from linear to, say, P3, I would expect that to work just fine, because it’s going from linear. It does in fact work fine with spi-VFX.

Yes, to be exact, the RRT + Output Device Transforms, or Output Transforms, map the ACES Scene Referred values to the display of choice. During that process, though, the tonescale part is applied first, then the resulting values are encoded with the display’s inverse EOTF.

The first critical step here, thinking about your students, is to be extremely precise with the terminology: https://www.colour-science.org/posts/the-importance-of-terminology-and-srgb-uncertainty/

Your statement is unfortunately not correct. If you have an sRGB texture that is encoded with the inverse sRGB EOTF, you will need to decode it with the sRGB EOTF, i.e. Utility - sRGB - Texture, to make it linear. You don’t have to do anything else, except for those cases where you don’t really know where the texture is coming from, and where it might be useful to use the ACR curve, for example. I repeat, and this is super important: you decode with the sRGB EOTF and your texture is effectively linear, nothing else to do.

To push the thinking, imagine using a super nice texture shot with a 5D Mark III. It has been processed by the book and saved as a 16-bit Half EXR file, and is thus intrinsically linear.
Nothing forbids you from also saving a copy as an 8-bit JPG file. In that case, your software, e.g. Photoshop, would most likely use the inverse sRGB EOTF to encode the file, as you certainly do not want to put linear values in an 8-bit container. Should you want to bring that 8-bit JPG file into Nuke, you would use the Utility - sRGB - Texture colorspace. This is the only step required, and your 8-bit JPG file would match the 16-bit Half EXR file. No need to invert any tone curve/tonescale whatsoever.
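The JPG round trip above can be sketched in a few lines of Python (my own transcription of the standard IEC 61966-2-1 sRGB formulas): encoding linear values with the inverse sRGB EOTF and decoding them back with the sRGB EOTF recovers the original linear values exactly, with no tonescale inversion involved at any point.

```python
# sRGB encode/decode round trip: what happens to the 8-bit JPG copy above.

def srgb_eotf(v):
    """sRGB EOTF: encoded value -> linear light."""
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def srgb_inverse_eotf(v):
    """Inverse sRGB EOTF: linear light -> encoded value."""
    return v * 12.92 if v <= 0.0031308 else 1.055 * v ** (1 / 2.4) - 0.055

# Encoding then decoding is an identity on [0, 1] linear values.
for linear in (0.0, 0.18, 0.5, 1.0):
    assert abs(srgb_eotf(srgb_inverse_eotf(linear)) - linear) < 1e-9
```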



Thanks for your patience with me as I fumble through this Thomas.

Let me offer this quote from the Maya docs:

"Output-referred images, such as video and sRGB, should have the gamma removed and an inverse tone map applied. These two operations are often combined in one transform. Ideally, you should use the inverse of the tone map that will be used for output and display. For example, if you will be using the ACES tone map, then you can use sRGB_to_ACES from the RRT+ODT/ directory as an inverse tone map. Alternatively, inversePhotoMap_gamma_2.4 from the primaries/ directory is a generic transform that works well in many cases.

Note: Although it is common to simply remove the gamma from output-referred images, this is not enough to convert images to scene-linear — an inverse tone map is always required."

I’d like to invert the tone map as stated above, but not lock this to a particular display, as you mentioned earlier. I believe this can be done in spi-anim, because of the way the vd16 and p3dci8 color spaces are written.

- !&lt;ColorSpace&gt;
    name: vd16
    family: vd
    bitdepth: 16ui
    description: |
      vd16 : The simple video conversion from a gamma 2.2 srgb space
    isdata: false
    allocation: uniform
    to_reference: !&lt;GroupTransform&gt;
      children:
        - !&lt;FileTransform&gt; {src: vd16.spi1d, interpolation: nearest}

So vd16 (the output transform for an sRGB monitor) just uses the vd16.spi1d LUT, which does both the tone mapping and the conversion from linear to sRGB. p3dci8, used for P3-DCI, begins with the vd16.spi1d LUT and goes from there to P3.

- !&lt;ColorSpace&gt;
    name: p3dci8
    family: p3dci
    bitdepth: 8ui
    description: |
      p3dci8 : 8 Bit int rgb display space for gamma 2.6 P3 projection.
    isdata: false
    allocation: uniform
    from_reference: !&lt;GroupTransform&gt;
      children:
        - !&lt;ColorSpaceTransform&gt; {src: lnf, dst: vd16}
        - !&lt;ExponentTransform&gt; {value: [2.2, 2.2, 2.2, 1]}
        - !&lt;FileTransform&gt; {src: srgb_to_p3d65.spimtx, interpolation: linear}
        - !&lt;FileTransform&gt; {src: p3d65_to_pdci.spimtx, interpolation: linear}
        - !&lt;FileTransform&gt; {src: htr_dlp_tweak.spimtx, interpolation: linear}
        - !&lt;ExponentTransform&gt; {value: [2.6, 2.6, 2.6, 1], direction: inverse}
        - !&lt;FileTransform&gt; {src: correction.spi1d, cccid: forward, interpolation: linear}

Because of this, one can invert the tone map on a texture map with vd16 (FWIW, actually with dt16, which keeps the inverse vd16 in a 0-1 range), and still be able to view that render on an sRGB or P3 display.

So my question is: is the same possible with ACES?

Your statement above gives me hope that it is.

Or to state it differently:

If I do an inverse of the sRGB Output Transform, this would (1) decode with the sRGB EOTF, thus making my sRGB-encoded texture map linear, and (2) invert the tone map. It’s the same as utility-sRGB-texture plus the inverse tone map. I can then view the result with whatever Output Transform is appropriate. To make it a quasi-math equation:

INPUT: (sRGB encoded texture) - (sRGB) - (tone map) = (linear untoned)
OUTPUT A: (linear untoned) + (tone map) + (P3) = (projector)
OUTPUT B: (linear untoned) + (tone map) + (sRGB) = (monitor)
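The pipeline in those three lines can be sketched in Python. Note the Reinhard curve x / (1 + x) here is only a toy stand-in for the real, much more complex ACES tonescale; the sRGB formulas are the standard ones. The round trip reproduces the display pixel, but look at the “linear untoned” values it creates above 1:

```python
# Toy version of INPUT / OUTPUT B above, with Reinhard standing in
# for the actual tonescale (it is NOT the ACES tonescale).

def tonemap(x):            # toy tonescale
    return x / (1.0 + x)

def tonemap_inverse(y):    # its exact inverse
    return y / (1.0 - y)

def srgb_decode(v):
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def srgb_encode(v):
    return v * 12.92 if v <= 0.0031308 else 1.055 * v ** (1 / 2.4) - 0.055

code = 0.9                                            # sRGB-encoded texel
linear_untoned = tonemap_inverse(srgb_decode(code))   # the INPUT line
display = srgb_encode(tonemap(linear_untoned))        # the OUTPUT B line
assert abs(display - code) < 1e-9                     # pixel reproduced
print(linear_untoned)  # well above 1.0 for bright texels
```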

But most of the time you have no clue about what tonemapping function was applied; there is literally an infinite number of ways to tonemap an image.

The sRGB case cannot be generalised, as shown by my JPEG example just above. How many images have been produced without tonemapping in the past, and still continue to be? The same goes for videos: you have no guarantee that an HDTV camera encoding a video with the BT.709 OETF will tonemap its images, and if you were to apply a random inverse tonemapping function to them, you could break them.

Consequently, the following statement is not true and cannot be generalised:

In any case, one has no guarantee that the recipe they apply to a texture of unknown origin can be applied to the next one. This process needs to be done with discernment.

It surely is, but the ACES OCIO config does not provide any colorspace for doing that, and I could not recommend one over another anyway. If anything, the ACR curve is my go-to curve, because most online resources are processed with Adobe tools. With it, you are at least on a more plausible track than using colorspaces from the SPI VFX config, which I would bet nobody besides Sony has used to produce the textures that you find or buy online. I would simply never use it for that particular purpose.

What would be interesting, and a good avenue of research, is how to convert Output-Referred images to Scene-Referred ones; this seems like a Deep Learning 101 task, because you can generate the dataset very easily.

The intent is not to remove an unknown tonemap encoded into the image, but rather to invert the tone map that is applied in the output transform. That tone map, of course, is known.

A quote from Cinematic Color, section on texture painting (p. 39): “our goal of perfectly inverting the display transform.”

Under no typical circumstances on a modern VFX production would you need to do that.

Cinematic Color states that

scene-linear texture reference is only available in rare situations.

This was maybe true for Sony at the time the paper was authored, but it is certainly not the case anymore. Not only can you find resources online offering Scene Referred Linear (or simply sRGB-transfer-function-encoded) textures, e.g. Quixel, Texturing.xyz, Triple Gangers, etc., but the tools to process your own imagery with a Scene Referred workflow in mind have been available for well over a decade, e.g. DCRaw or Adobe Camera Raw.

It is critically important to understand when one might need to apply an inverse tonemapping function and when it is not warranted. Nowadays, most of the time, you don’t need to do it. It would be extremely useful for you and your students to try acquiring and processing your own data for textures. You mostly require a DSLR that shoots RAW images, a colour rendition chart, and then either Adobe Camera Raw (in Photoshop or Bridge), DCRaw, or RawToACES.

In bold for good measure:

Nowadays, in modern productions, you rarely need to apply an inverse tonemapping function anywhere in your workflow.

Given all the threads you have been spawning around that topic recently, I have the feeling that this is not clear at all :wink: which is absolutely fine because it is complicated. Feel free to ask questions until it is crystal clear and limpid :slight_smile:

I would say that inverse tone mapping is primarily useful when, rather than creating genuinely scene-referred linear values, your aim is to create hypothetical scene values for the sole purpose of passing them unmodified through the same forward transform, in order to create identical pixel values for display, for instance when you are supplied with something such as a display-referred logo image. As soon as you treat the resulting pixels as if they actually have any real meaning in terms of scene light, and start manipulating them in any way, you create potential problems for yourself, particularly for HDR. That even includes things as simple as scaling a logo image in scene-referred linear; 100% white pixels in the logo will map to very bright scene-linear values (6.5 stops above mid grey if you use the inverse sRGB Output Transform), and scaling can cause these pixels to overwhelm surrounding darker pixels.
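The 6.5-stops figure is easy to sanity-check with the value observed earlier in this thread, where the inverse sRGB Output Transform mapped display white (1.0) to roughly 16.3 in scene linear:

```python
import math

# Stops above an 0.18 mid grey for a scene-linear value of 16.3
# (the inverse sRGB Output Transform result for display white noted above).
stops = math.log2(16.3 / 0.18)
print(round(stops, 1))  # ~6.5
```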


Exactly! This is a use case that pretty much never occurs in the VFX industry. We do, however, need it in tablet and phone AR games, as you might want to avoid tonemapping the camera video feed.

That’s helpful for understanding your context, Thomas, thank you. To sum up: Cinematic Color was a big step forward at the time, adding a tone curve to a linear workflow. Now, for VFX, that is being taken another step forward, where you would not work with display-referred images at all, but instead shoot raw and only work in paint programs that can work in linear.

Well, it was a time when everything was more wild-wild-west, and texture artists were pulling down random textures from Mayang or 3d.sk without having a clue how they had been processed in the first place.

People knowledgeable in colour science knew that it was necessary to try to revert any kind of processing that would have made the values non-linear, but even then it was most often not very successful. How do you know what to revert if you have no way of knowing what the forward process was?

The VFX industry has adopted Scene Referred workflows for the last two decades, if not more. You can find papers such as Renderman on Film, dating from 2002, where it was already an important part of the rendering pipeline. I would say that Debevec and Malik (1997) got the ball rolling for most of the industry with Recovering High Dynamic Range Radiance Maps from Photographs.