ACESCG and SRGB Questions - Texture Asset Creation

Hey guys,

I’m trying to learn ACEScg and would like to incorporate it into my color-map processing in Nuke.
Just read this awesome thread:

I have few questions:

  1. Is it correct to say that the OCIO-ACEScg color space will make my color grading and highlight/shadow recovery easier, because I have a bigger gamut with ACEScg than with sRGB?
    Or should I just stay in Nuke's default linear sRGB (nuke-default) color space, because OCIO-ACEScg doesn't give any significant visual benefit for shadow and highlight recovery on color maps?

  2. The color maps that I’m processing in Nuke were exported straight from the RealityCapture scanning program, like this color-map scan of a muddy ground:

Now it makes me wonder: most people say that using the OCIO-ACEScg colorspace will make the color map darker and less saturated, BUT why does mine look exactly the same on a JPG Write node using the OCIO-ACEScg colorspace? Here is the JPG render of a Write node using Utility-sRGB:

Maybe what they meant was the viewer transform? My viewer is indeed much darker and less saturated when using the OCIO-ACEScg colorspace versus Nuke's normal sRGB.
Viewer in Nuke-Default colorspace:

Viewer using OCIO-ACEScg:

Here are my project settings using Nuke-default:


  1. For the conversions, do I need to use the OCIOColorSpace node to convert my input sRGB color maps (from RealityCapture) into ACEScg before I can start to process and color grade them? Because if I use the OCIOColorSpace node, my color map becomes super dark and unworkable in the Viewer, like the node-graph screenshot below:

I cannot find any thread that directly answers the question: do I need to use an OCIOColorSpace node to process (grade, exposure adjustments, etc.) my color map inside Nuke?

And by the way, I’m referring to the raw color map itself (JPG or camera raw formats) and not the actual 3D scene render outputs from Maya or 3ds Max that need post-processing later on.

Is that the proper workflow in ACEScg? At first, I thought that Nuke was doing the conversion automatically after changing to the OCIO-ACEScg colorspace and using Utility-sRGB on the Read node of the color map.
So would using a separate OCIOColorSpace node “double” the conversion, resulting in a very dark image? Or maybe my color map is just too dark and out of the proper albedo range for a PBR workflow (I also need to look up the correct albedo range values; I keep reading about them in other threads, but no one posts the actual range for people to see).


I went ahead and created comparisons of the Write node's renders of different maps WITH and WITHOUT the OCIOColorSpace node. Now I'm more confused than ever… I really don't know what the correct way to do this is… :frowning:

There are 4 different images… I can see that using the Output-sRGB colorspace on the Write node gives a much darker image, BUT based on the threads I have read, people say I should be using Output-sRGB to render my maps, as this is the part of the ACES workflow where the maps get converted back to the sRGB colorspace for display on the output device, like a monitor.

BUT I’m still in the process of creating the PBR texture maps sourced from my camera / RealityCapture, NOT doing the final post-processing of 3D renders from Maya.

Is it the same workflow, whether creating texture maps (JPG/EXR in an ACES colorspace) or post-processing 3D renders?

Hello jomallord,
I didn’t understand everything you explained, but I think there is some confusion on your side.

If I understand correctly, you have a texture coming from RealityCapture that you wish to conform to the ACES workflow?

As we can assume that RealityCapture generates an sRGB image, I don't think you will benefit much from converting your texture to the ACES workflow (anyone correct me on this one if I am wrong). It will just allow you to use it in an ACES workflow if you are working with ACEScg textures in your 3D software.

Then the how-to:
With OCIO set in Nuke:

  • From the screenshot you posted, it seems that you exported an 8/16-bit TIFF (or a 32-bit TIFF with the sRGB transfer function?), so it is correct to use Utility - sRGB - Texture on your Read node. This will convert the input to the working space, which is ACEScg.

  • You don’t need to do anything more to work with your texture, but if you want to export it:
    On the Write node, choose ACEScg to export for your DCC, with of course the EXR file format.
    You would choose Output-sRGB only if you wish to create a preview thumbnail (exported in an integer format like JPG, PNG, …).
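The read-side conversion above can be sketched numerically. This is a pure-Python illustration of what a Utility - sRGB - Texture style IDT does (sRGB decode, then an sRGB-to-AP1 gamut matrix). The matrix values below are the commonly published Bradford-adapted sRGB→ACEScg coefficients, not taken from any particular OCIO config, and the function names are mine:

```python
# Sketch of an sRGB-texture -> ACEScg (AP1) conversion, per pixel.
# Matrix: linear sRGB (D65) -> ACEScg (D60), Bradford-adapted.
SRGB_TO_ACESCG = [
    [0.6131, 0.3395, 0.0474],
    [0.0702, 0.9164, 0.0134],
    [0.0206, 0.1096, 0.8698],
]

def srgb_decode(v):
    """Invert the sRGB transfer function (display value -> linear)."""
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def srgb_texture_to_acescg(rgb):
    lin = [srgb_decode(c) for c in rgb]              # 1. linearise
    return [sum(m * c for m, c in zip(row, lin))     # 2. gamut matrix
            for row in SRGB_TO_ACESCG]

# A mid-grey sRGB pixel ends up around 0.214 in linear light:
print(srgb_texture_to_acescg([0.5, 0.5, 0.5]))
```

Since each matrix row sums to 1.0, neutral greys stay neutral; only the per-pixel decode changes their brightness.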

1 Like

Yes. The color map came straight from RealityCapture (8-bit JPG or TIFF format).

I see. I've always done my textures in sRGB inside Photoshop or Nuke, but then I heard about ACES, did some reading about it, and wanted to confirm whether this “bigger” colorspace would benefit me when processing the raw textures. Processing means mostly shadow and highlight recovery, and probably a little color grading for my look and preference.

My color map input is just an 8-bit TIFF (sometimes 8-bit JPG).
So you mean to say that I don't need a separate OCIOColorSpace node after the Read node to convert my color map to the ACEScg working space? Using Utility-sRGB on the Read node of the color map will do the conversion?

OK… Yes, I mostly just use JPG for my color map. I'll be using Output-sRGB on the Write node.

What do you think of my color map, though? Is my value range a little low, and do I need to increase my exposure?
Using Output-sRGB on the Write node gives me a really dark image (top right, since I won't be using an OCIOColorSpace node anymore).

Yes !

Hmm, if you want to be able to use your map, you have to export to EXR using ACEScg on the Write node.
As I specified, Output-sRGB is really only for previews, like if you would like to share what your texture looks like.
And if you want an ACEScg JPG, well, don't…

Sorry, I can't really know what this kind of map is supposed to look like under the ACES ODT. That's why you would use a color checker when scanning, to be sure that your final texture is balanced.

Also, your comparison is not really clear: are the previews screenshots from the Nuke viewer, or the images opened directly, …?

1 Like

Awesome. Thanks for the confirmation.

I’m only referring to the color map texture and not the final 3d scene renders in EXR from Maya / 3dsmax.

I wanted to use the JPG format for my color maps (because of the small file size). Looking at this awesome thread by @ChrisBrejon, ACEScg for Animation feature and further questions,
I presume he is referring to JPG color maps for textures that use Utility - sRGB inside Maya.

The 4-image comparison is from the Windows 10 photo viewer, not the Nuke viewer.
That's why I put a label with the Write node setting at the bottom of each image.

Sorry if I'm misunderstanding, but are you saying that I should not use the JPG format for my exported image if I'm going to use ACEScg as the working space in Nuke?

So far, here is my understanding now:

Then the exported image will be used as my color map in Maya using Utility-SRGB colorspace.


Your Write node should be set to sRGB-Texture; otherwise you will also bake in the view transform.

1 Like

Noted with thanks.

For the sake of completeness, in what instance would I use the Output-sRGB colorspace on the Write node, then?

You rendered your 3D scene and did your comp in Nuke:

  • If you want to hand off your comp for grading in ACES, or to archive it, write out EXR files in ACES 2065-1.
  • If you want to show it to someone, but not inside Nuke, you bake in the view transform RRT & ODT (sRGB) for a computer screen and write it as a JPG, TIFF, PNG, etc.
1 Like

Hello @jomallord,

here are a few tips :

It is not correct to say that. Working in ACEScg will give you access to wider primaries in your working space, allowing for better GI overall and more saturated colors.
The highlight and shadow behavior mostly comes from the Output Transform, not from ACEScg.
You have two possibilities when it comes to choosing your working space for your textures :

  1. Keep in Linear - sRGB if your DCC application / render engine can convert on the fly like Maya/Arnold for example.

  2. Convert your color maps to ACEScg prior to the rendering phase if your render engine has not implemented the IDT, like Redshift for example.

Here is a thread on this topic :

Converting an sRGB texture to ACEScg will make it darker and less saturated, since ACEScg has wide primaries and a linear transfer function (versus sRGB primaries with an sRGB transfer function). But you would also have to display it properly (using an ACES ODT). This is why we should be careful with the above statement and should not display ACEScg textures with a simple sRGB Nuke viewer. :wink:
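To put a number on "less saturated": expressing a pure sRGB primary in AP1 coordinates mixes in the other two channels, because the AP1 primaries sit further out. (The "darker" half is just the linearisation: decoding mid-grey 0.5 through the sRGB transfer function gives about 0.21.) A pure-Python sketch; the matrix values are the commonly published sRGB→ACEScg coefficients, not taken from any particular OCIO config:

```python
# Pure sRGB red, already linear (1, 0, 0), re-expressed in ACEScg (AP1)
# primaries with the Bradford-adapted sRGB -> ACEScg matrix:
SRGB_TO_ACESCG = [
    [0.6131, 0.3395, 0.0474],
    [0.0702, 0.9164, 0.0134],
    [0.0206, 0.1096, 0.8698],
]

def to_acescg(rgb):
    """Apply the gamut matrix to a linear RGB triplet."""
    return [sum(m * c for m, c in zip(row, rgb)) for row in SRGB_TO_ACESCG]

red_ap1 = to_acescg([1.0, 0.0, 0.0])
# Red picks up green and blue components in AP1, so the same color sits
# further from the (wider) gamut boundary -> its coordinates look
# "less saturated" even though the color itself has not changed:
print(red_ap1)
```

The color is unchanged; only its coordinates are, which is why display through the proper ODT matters.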

If I have understood correctly, you are comparing the sRGB (ACES) ODT versus the sRGB viewer of Nuke. It is completely normal that sRGB (ACES) looks darker, since it includes some tonemapping. If I may, I would avoid using the sRGB viewer of Nuke on CG renders, since it does not include any intensity mapping. I think this is well explained in the posts you sent, and on my website as well.

The settings you sent in Nuke are both correct. You have now to make a choice between which workflow you want to use. :wink:

This is a common mistake. You do not need to use the OCIOColorSpace node to convert to ACEScg. The IDT already does that, so you would have a double transform, hence a very dark texture. The IDT, if set correctly to Utility - sRGB - Texture, will convert to your working space, aka ACEScg.

No, you don't need it. We saw a YouTube video a year ago or so that claimed this. That is incorrect, as you would get a double transform.
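The double transform is easy to demonstrate with just the sRGB decode step: if the Read node's IDT has already linearised the texture and an OCIOColorSpace node linearises it again, mid-grey collapses. A hypothetical pure-Python sketch:

```python
def srgb_decode(v):
    """Invert the sRGB transfer function (display value -> linear)."""
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

once  = srgb_decode(0.5)    # ~0.214 -> the correct single IDT result
twice = srgb_decode(once)   # ~0.038 -> the accidental double transform
print(once, twice)
```

A display value of 0.5 dropping to roughly 0.04 is exactly the "super dark, unworkable" texture described above.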

I have posted a couple of charts for ACEScg albedos, if I remember correctly. And your intuition was correct: no need for an OCIOColorSpace node. If I may, I would avoid using the term "OCIO-ACEScg colorspace". They are two different things: OCIO is the "how", ACES is the "what". You can just name the colorspace ACEScg or AP1.

No no no! :wink: Never render your maps/textures using Output - sRGB. This is for final renders only. What you should do is display your textures with the sRGB (ACES) ODT to assess their range properly. But never, ever write your textures with the ODT baked in.

ACEScg renders do need to go through the sRGB (ACES) Output Transform to be displayed properly on an sRGB monitor. You can also display your textures with the sRGB (ACES) ODT to check them. But never write textures using Output - sRGB; you would burn the tonemapping into the texture.

Honestly it looks fine. :wink: I can see that @MrLixm already gave you some pretty good answers. I hope I am not confusing more with mine. :wink:

You are presuming right. :wink: But I am using flat primary colors here, for a simple Cornell box test. Many studios have switched to linear EXR textures for all their maps. From the tests we have been doing, it was actually more efficient in terms of memory than TIFF or PNG.

You can write a JPG texture file using Utility - sRGB - Texture and then load it in Maya as Utility - sRGB - Texture. Or write an ACEScg EXR file and load it in Maya as ACES - ACEScg. Never use Output - sRGB to write a texture.




I see… but please correct me if my assumption is wrong: now I'm starting to think that ACEScg is really intended for post-processing of 3D scene renders? Would it be correct to say that ACEScg doesn't give any substantial benefit for texture asset creation, even though it provides a bigger gamut than traditional linear sRGB?

It would give you the benefit of reaching more saturated colors than sRGB allows (see my previous answer as well for more info). The sRGB gamut only covers 69% of Pointer's Gamut. See this excellent article for more information: But in your example of desaturated ground, you would probably not see much difference indeed.

And not only the post-processing of 3D scene renders, but the rendering process itself! Aka the better global illumination. :wink:

Hope this helps,


Awesome, that's some solid, crystal-clear, direct-to-the-point info, @ChrisBrejon. Thank you!

I think I'll be using the 1st workflow, which is what I'm doing right now: just stick to the traditional linear sRGB working space of Nuke and assign each map the correct colorspace inside Maya/Arnold.
I would then rely on the albedo range charts and X-Rite color checkers to balance and fine-tune the exposure of my color maps.

Thanks again Chris. Really appreciate the help!

1 Like

My pleasure :wink: I am supposed to write some articles on these topics soon for the ACES Knowledge Base. I will certainly use our conversation to provide great examples. Have a nice day !




I would advise against doing that; you will trash any textures with subtle gradients, e.g. skin colours or dark colours.

Hmm, this doesn't really make much sense, because upon being decompressed an 8-bit JPEG will consume the same memory as any other 8-bit image container. A renderer might support direct access to the compressed data, but that is certainly not the norm AFAIK, and given the type of random access required, it seems far-fetched to even implement. Where you can get some net gain is on I/O, because the smaller file size means it is sometimes faster to read from network storage. A notable exception to what I just said is GPU texture compression; those algorithms are specifically optimised for random texel access, unlike JPEG.

It is a better rendering space than sRGB and, as Chris pointed out, a wider gamut means that you can represent many more natural reflectances than sRGB, which is important for a lot of man-made objects and natural entities such as flowers.



1 Like


A pre-processed JPG (which is most likely sRGB) from, let's say, a DSLR raw photo gains no real benefit when you color correct it in Nuke (in float precision) and write it out again as a JPG texture. Write it out as an ACEScg EXR and at least you keep your grading changes at the highest precision.

If you can develop your raw photo to ACEScg, then you can benefit from the bigger color gamut and higher dynamic range that the camera captured. I do this with Affinity Photo, but I am not sure if the gamut is still limited to sRGB in the raw development process.

I am not really sure what to do with values over 1.0 in this case, because a texture should not have a value higher than 1.0, if I am not mistaken. I was playing around with that here: Understanding the sRGB-Texture IDT - taking an image from the "screen" into the "scene"
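For what it's worth, the sRGB decode itself cannot create values above 1.0: it maps the display range [0, 1] onto [0, 1] exactly, so over-range values would have to come from grading afterwards, not from the texture IDT. A quick pure-Python sanity check (the function names are mine, for illustration only):

```python
def srgb_decode(v):
    """Invert the sRGB transfer function (display value -> linear)."""
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

# The decode preserves the endpoints of the display range:
assert srgb_decode(0.0) == 0.0
assert srgb_decode(1.0) == 1.0

def over_range(pixels, limit=1.0):
    """Flag linearised texture values above an albedo ceiling."""
    return [p for p in pixels if srgb_decode(p) > limit]

print(over_range([0.2, 0.5, 1.0]))  # [] -> nothing exceeds 1.0
```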

Best regards


1 Like

Hey Daniel,

Just to be clear, does your statement "you bake in the view transform RRT&ODT (sRGB)" mean to use Output-sRGB as the colorspace for the Write node?


Yes, you are exactly right.

1 Like

Thanks for your insight. I did not run these tests myself at our studio, but from what I have been told, the EXR texture workflow was lighter than our previous texture workflow based on PNG/TGA/TIFF. The EXR file sizes were smaller in our tests (which surprised me), hence my deduction about memory usage. Unfortunately, I do not have access to the test data, nor could I share it.