New filters for rendering ACES in Substance Painter

Actually, a filter to convert textures to ACEScg and a LUT to convert the ACEScg render view to ACES sRGB (included in my LUT release).
I also made another filter called PBR_SmartFit, which helps with the inverse RRT conversion so values above 0.81 are not lost.
The filters are pretty straightforward, with some settings exposed for linear or sRGB gamma input/output, since Substance Designer, unlike Painter, defaults to sRGB. Substance Designer also lacks LUT support, so there is no option to render in ACES there.
*The ACEScg filter is based on the work of Stephen Hill (@self_shadow)
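For reference, the core of the conversion is just a 3×3 matrix multiply on linear values. A minimal Python/NumPy sketch using the commonly published sRGB-to-ACEScg matrix (Bradford-adapted D65 to D60, values rounded here); this only illustrates the math, it is not the filter code itself:

```python
import numpy as np

# Approximate linear sRGB (D65) -> ACEScg (AP1, D60) matrix, Bradford CAT,
# as commonly published (e.g. referenced from Stephen Hill's work).
SRGB_TO_ACESCG = np.array([
    [0.6131, 0.3395, 0.0474],
    [0.0702, 0.9164, 0.0134],
    [0.0206, 0.1096, 0.8698],
])

def srgb_linear_to_acescg(rgb):
    """Convert already-linearized sRGB values to ACEScg primaries."""
    return np.asarray(rgb) @ SRGB_TO_ACESCG.T

# Pure sRGB red expressed in ACEScg primaries: ~[0.613, 0.070, 0.021]
print(srgb_linear_to_acescg([1.0, 0.0, 0.0]))
```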

There is one caveat when using “Output - sRGB” as the IDT, which I will now explain.
If you followed this thread, a common issue when using “Output - sRGB” as the IDT is HDR values. A source pure white of 1.0 remaps to a value of about 16.29, and target pure white now corresponds to a source value of 0.81, so roughly a fifth of the texture information is lost (textures should be in the 0-1 range, or less for PBR). A common recommendation, and a standard in other painting programs, is to use “Utility - sRGB - Texture” as the IDT. This won’t apply an inverse RRT, so after the forward RRT our images will look darker than normal. Depending on how you author your maps, working with “Utility - sRGB - Texture” is fine, for example when you create your albedo maps from scratch in ACES without relying on photographs or other media, so it is also available as an option in the filters. But nowadays in look development it’s not uncommon to mix textures from photographs, other non-ACES-managed painting applications, or library materials authored in sRGB.
This forces us to stick to “Output - sRGB” for faithful tonal and color reproduction. To deal with the HDR values I created a special setting in PBR_SmartFit to keep all information within the PBR legal range.

PBR_SmartFit is a filter to convert your source albedo maps to the PBR legal range (0.0134-0.871 linear float, as per the Substance guidelines) in the smartest way, retaining source colorimetry, perceptual middle gray and full-range detail, unlike the internal “PBR Safe Albedo Color” which simply clamps the out-of-PBR values. It tries to be faithful to the source while making it PBR compliant. As I explained above, there is a special option called “PBR for ACES”: when set to “Output - sRGB” it creates a special PBR range adapted to the inverse RRT, which in turn maps those values to the target PBR range while still retaining source colorimetry, perceptual middle gray and full-range detail.
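To make the range-fit idea concrete, here is a toy sketch of the concept: pin middle grey and linearly remap the values below and above it into the PBR band instead of clamping. This is only an illustration under assumptions (the 0.18 middle-grey anchor, a simple piecewise-linear remap), not the actual PBR_SmartFit code, which is original and more elaborate:

```python
import numpy as np

PBR_MIN, PBR_MAX = 0.0134, 0.871  # Substance albedo guidelines, linear float
MID_GREY = 0.18                   # assumed perceptual middle-grey anchor

def pbr_fit(albedo):
    """Toy range fit: remap out-of-range albedo into the PBR band without
    clamping, keeping middle grey fixed. Not the actual PBR_SmartFit filter."""
    a = np.asarray(albedo, dtype=float)
    lo, hi = min(a.min(), PBR_MIN), max(a.max(), PBR_MAX)
    return np.where(
        a < MID_GREY,
        PBR_MIN + (a - lo) * (MID_GREY - PBR_MIN) / (MID_GREY - lo),
        MID_GREY + (a - MID_GREY) * (PBR_MAX - MID_GREY) / (hi - MID_GREY),
    )

print(pbr_fit([0.0, 0.18, 1.0]))  # -> [0.0134, 0.18, 0.871]
```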
Following are some comparisons rendered with Arnold for Maya using syntheticChart.01_ODT.Academy.sRGB_100nits_dim.

Here I made a test asset to evaluate different materials under an ACES workflow. The default for Substance Painter is sRGB color space without any tonemapper. Next is the ACESFilm tonemapper which, like Brian Leleux’s ACES LUT, renders the image too dark due to the lack of an inverse RRT for the textures. Raising the gamma can patch this side effect, but the premise is wrong to start with: bad environment exposure, contrasty textures, wrong light interaction (reflections, color bleed)… The last one uses ACEScg textures with the “ACESFilm - ACEScg” LUT. Now the environment has proper exposure at the default gamma, color bleeding is more pronounced, and overall light interaction is more correct.

The image below is after converting the HDRI, assumed to be sRGB, into ACEScg space. We normally don’t know what primaries HDRIs were saved in because it is rarely specified, but visual analysis can give us a hint. Some use sRGB, others AdobeRGB, and the ideal ones RAW, for which you should use rawtoaces to convert the sensor spectral data.
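When the HDRI is an OpenEXR file it is worth checking whether the author tagged the primaries before assuming anything. A small sketch using the Python OpenEXR bindings; the 'chromaticities' header key and the file name are assumptions, and in practice most files simply won't carry the attribute:

```python
import OpenEXR  # classic Python bindings (pip install OpenEXR)

def report_chromaticities(path):
    """Print the EXR chromaticities attribute if the author tagged it."""
    header = OpenEXR.InputFile(path).header()
    chroma = header.get('chromaticities')
    if chroma is None:
        print(f"{path}: no chromaticities attribute, primaries unknown")
    else:
        print(f"{path}: tagged primaries -> {chroma}")

report_chromaticities("environment.exr")  # hypothetical file
```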

More examples with different materials and environments.


Free download with instructions and more images on my ArtStation post.


As far as I know, the HDR map needs to be converted to ACES
because the color primaries change; otherwise it will cause some saturation issues.
https://community.acescentral.com/t/hdr-texture-environment-light/1297/5?u=baiking


It depends on how you create the HDR image. I was researching that the other day, because of how unsaturated and dull environment HDRIs generally look, while preparing the presentation images.

The marble ball render on the left shows a dull EXR at magic hour, so I knew something was off. Later I found that many OpenEXR HDRIs, due to some color clipping, are authored in AdobeRGB.
Others are written directly in RAW format, which is the format recommended in the Arnold documentation and other programs. So I decided to stick to RAW; this matches Mari’s color space aliases, and the gamut is a better match for AdobeRGB (rather than sRGB). Since so few HDR authoring programs make use of the OpenEXR chromaticities attribute, we can only assume. I decided to err on the saturated side; after all, HDRIs are heavily color edited (saturation, temperature, sharpness) to look nice. Still, if you know your HDRI is sRGB you can convert it to ACEScg and load it in Substance Painter.
If the description is confusing I might change it based on suggestions.

Hmm… you are right. I thought all HDRI maps were in sRGB primaries, since most DCCs and renderers say to set the color space to Linear/Raw (no gamma correction) for the HDR map and sRGB for color textures, so I assumed they stayed in the same sRGB primaries.

Hello Jose L,

Sorry for the late reply. Things take a lot of time to implement here, and we are now starting to think about Look Development (and Surfacing) with an ACES workflow. :wink:

So far, here is the information I have gathered.
Four pieces of software are used to generate our textures:

  • Substance Painter: we can use your EXR as a LUT.

  • Substance Designer : no color management possible ?

  • Mari: we should use the ACES 1.0.3 OCIO config.

  • Photoshop : we can use ACES 1.0.3 ICC profiles. Otherwise there is a possible workflow described here.

Can you confirm this information?

Now my question is: our rendering space is ACEScg and our display space is P3-D60 (ACES). The EXR you provide displays as sRGB (ACES) in Substance, right? Could you make one for P3-D60 (ACES)?

Thanks a lot in advance for your answers !

Chris

For Substance Painter, using only a LUT (ACESFilm 2.0) is not enough. It is a hack, sort of. The proper way to do it is with the “ACESFilm - ACEScg” LUT AND the ACEScg filter. By doing so you are properly converting the albedo to ACEScg with the filter and previewing through the sRGB ODT with the LUT; think of them as the IDT (ACEScg filter) and the ODT (ACEScg LUT).

Let me explain (I should probably record a video).
Let’s take Mari as an example. If you import an sRGB texture (photo, scan, etc.) and use the default “Utility - sRGB - Texture”, it will paint/project darker than how you authored it. This is because “Utility - sRGB - Texture” doesn’t apply an inverse RRT, and since your viewing transform includes an RRT, everything will look darker.
The solution(?) is commonly said to be “Output - sRGB” used as the IDT. This actually applies an inverse RRT, so when you apply the viewing transform the IDT and ODT cancel out and you see what you authored. But applying an inverse RRT presents a problem: you get an ACEScg albedo texture that is HDR, with values going as high as 16.29 (an illegal albedo range that breaks the PBR workflow). This typically means that only values within 0-1 remain valid (0-0.81 in the source), losing about 20% of the texture information. We need ACEScg textures that represent the source material and at the same time stay within the 0-1 range; my solution is the PBR SmartFit filter.
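A quick numerical illustration of why this happens. The sketch below uses Narkowicz's single-curve ACES approximation and its analytic inverse as a stand-in for the view transform; it is NOT the real RRT + sRGB ODT, so the exact figures differ from the 16.29/0.81 quoted above, but the behaviour is the same:

```python
import numpy as np

def aces_fit(x):
    """Narkowicz's ACES approximation (not the real RRT+ODT), for illustration."""
    return (x * (2.51 * x + 0.03)) / (x * (2.43 * x + 0.59) + 0.14)

def aces_fit_inverse(y):
    """Analytic inverse of the curve above (valid below its asymptote)."""
    a, b, c = 2.51 - 2.43 * y, 0.03 - 0.59 * y, -0.14 * y
    return (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)

print(aces_fit(1.0))          # ~0.80: scene-linear 1.0 lands near 0.8 on the display
print(aces_fit_inverse(1.0))  # ~7.2: inverting display white gives an HDR scene value
```

With the full RRT + ODT inverse the expansion is even larger, and that out-of-PBR range is exactly what PBR SmartFit has to bring back under control.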

While 0-1 is a valid texture range, it is still an illegal PBR range, so with the PBR SmartFit filter (using a special mode) we can output a PBR range that is valid for the inverse RRT.

To sum it up, load both filters into Substance Painter and (recommended) place PBR SmartFit at the top of the stack of EACH material. Then place the ACEScg filter at the top of the stack of ALL your materials. Check my second image for settings.
Next, load the “ACESFilm - ACEScg” LUT, and in camera tonemapping use the “log” function transform. You are set.
To be on the safe side I also set Base Color, in the “TEXTURE SET SETTINGS” panel, to RGB32F.
Remember the albedo maps exported from Substance Painter are sRGB gamma encoded; linearize them to bring them back to ACEScg spec.
For a faster workflow, publish a “Convert to Linear” filter from Substance Designer and stack it on top of the ACEScg filter so you can export linear directly from Painter.
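For the linearization step outside of Painter, a plain sRGB decode is all that is needed; a minimal sketch of the standard piecewise function (which is also what the “Convert to Linear” step would do):

```python
import numpy as np

def srgb_decode(v):
    """Standard sRGB electro-optical decode: undo the gamma encoding applied
    to the exported Base Color so the values are linear ACEScg again."""
    v = np.asarray(v, dtype=float)
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

print(srgb_decode([0.0, 0.5, 1.0]))  # ~[0.0, 0.214, 1.0]
```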

The filters also work in Substance Designer, but you can’t get a preview of your material since Substance Designer lacks LUT support. Still, you can validate PBR ranges and export your materials in ACEScg.

*Keep in mind that every color component of the render should be converted to ACEScg; this includes emission colors and HDRI environment maps. My filters are strictly for albedo. In a proper pipeline the HDRIs should also be converted to ACEScg: identify the source color space and convert in a third application, so that reflections and light interaction happen in ACEScg space.

Give me a day for the P3-D60 lut. If you have any questions don’t hesitate.


That’s awesome. Very clear and detailed explanation! Looking forward to the P3-D60 EXR then. :wink: Thanks Jose, much appreciated!
Chris

Hello @ChrisBrejon
Sorry for the delay. I took the chance to revamp the LUTs and include four non-ACES tonemappers, so people have something more accurate than ACESFilm 2.0 in case they don’t want to go the PBR+ACES route.
One tonemapper matches Marmoset Toolbag’s Hejl. The other three are filmic tonemapping curves based on Hable’s “Uncharted 2” operator, two of them matching Blender’s “High” and “Medium High” looks; one of them is also converted to a Toolbag-compatible .frag file as an extra tonemapper.
The “ACESFilm - ACEScg (P3-D60 ODT)” LUT is included in the “LUTs for PBR+ACES” folder; tell me if it works. Download from my ArtStation.
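For reference, Hable’s “Uncharted 2” operator that these curves are built on is public; a small sketch with his commonly published constants (the LUTs themselves may use tweaked parameters, so treat this only as the general shape):

```python
def hable(x, A=0.15, B=0.50, C=0.10, D=0.20, E=0.02, F=0.30):
    """John Hable's 'Uncharted 2' filmic curve with his published defaults."""
    return ((x * (A * x + C * B) + D * E) / (x * (A * x + B) + D * F)) - E / F

def hable_tonemap(x, white=11.2):
    """Normalize so the linear white point maps to 1.0, as in Hable's shader."""
    return hable(x) / hable(white)

# Mid grey and white point in display-linear terms (before any gamma encode).
print(hable_tonemap(0.18), hable_tonemap(11.2))  # ~0.067, 1.0
```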

Thanks Jose. That sounds great ! I am currently on holidays but I’ll pass the link to my supervisor. Many many thanks ! Chris

Hello Jose,
hello Brian,

Sorry for the delay. It took us around two months to start our tests with Substance. Things are a bit slow here. I have a couple of questions if you don’t mind :slight_smile: I understand there are two methods to bring ACES into Substance:

  1. Just a display LUT from Brian Leleux. Let’s say it is a simple workaround to visualize in Substance, going from linear sRGB to sRGB (ACES). Is this correct? @bleleux Is there any way we could get a linear sRGB to P3-D65 (ACES 1.1) one?
  2. The method from Jose, which is a bit more complex but more accurate I guess, where a filter is used to simulate the conversion to ACEScg and a color profile to view in P3-D65 in our case. The SmartFit helps to keep a proper range for the albedo when “Output - sRGB” is used.

My question would be: it seems that the baseColor in the viewer is not affected by the color profile. Is that correct? A screenshot is attached.

Thanks for your help,

Chris

PS : I would love to watch a video from you Jose. :wink:

Correct, LUTs and tonemapping in Painter are a post-process and do not affect the source content.

If you want the source content modified, a Filter (or anything in the layer stack) would be needed, or it would have to be done outside of Painter.

I work in the game industry where we generally use some form of the ACES ODT as a tonemapping operator, but we don’t do any color space conversions with our content manually. A majority of the texturing tools don’t support ACES (yet), which causes some confusion for artists when they go from Painter to engine. That was pretty much the sole purpose of my LUT.

If I understand the VFX workflow correctly, you often create your content in the desired color space, which is where Jose’s Filter and LUT combination would come in.

Hi,

I will chime in because that statement is misleading, incorrect and dangerous. The main reason for using this workflow, i.e. the one that tries to nullify out the effect of the view transform, whether it is the ACES one or another, is to preserve the specific Output-Referred look of an existing asset.

Nowadays, most studios are using Scene-Referred workflows: they create their assets in a standard Working Space and review those assets under a chosen standard View Transform.

Onset references used to create assets textures or HDRIs are converted to the standard Working Space and reviewed under the standard View Transform.

Resources coming from outside, e.g. Google Images, are usually rendered, i.e. likely to have an S-Curve applied, and Output-Referred encoded, so if you want to integrate them in your workflow, you have to undo the encoding and rendering (which might prove impossible).

If you have “library materials authored in sRGB” then it is easy: convert them to the standard Working Space and make sure that you are always looking at them and presenting them under the standard View Transform. If somebody argues that they look different compared to before, just let that person know that the materials were previously seen without an appropriate View Transform and that this is a situation unsuited to modern Scene-Referred workflows where computation accuracy and correctness are required.

Here is a small interlude quote:

“Friends don’t let friends view scene-linear without an S-shaped viewing transform”.

Now let’s take a critical but essential step back and think about what is happening when you are applying the Reverse View Transform on textures, e.g. using Output - sRGB as an IDT to preserve their Output-Referred look. You are effectively applying a Non-Linear transformation to Scene-Referred linear light values and thus transforming values that were previously radiometrically linear into values that are now non-linear. I will repeat in bold and with emphasis because it is CRITICAL: You are transforming initially correct radiometrically linear light values into now incorrect non-linear light values!!!

The Ramp in the following screenshot speaks for itself:

Note that in the above example I was purposely disregarding the fact that the textures might be stored in an 8-bit container and thus require decoding; the problem would be the same nonetheless.
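The same point can be shown numerically. A minimal sketch, using the inverse of Narkowicz’s ACES approximation as a stand-in for the inverse view transform (not the actual inverse RRT+ODT): scene-linear light is additive, but once the inverse curve is applied, the sum of two converted values no longer matches the conversion of their sum:

```python
import numpy as np

def aces_fit_inverse(y):
    """Inverse of Narkowicz's ACES fit, standing in for 'Output - sRGB' as an IDT."""
    a, b, c = 2.51 - 2.43 * y, 0.03 - 0.59 * y, -0.14 * y
    return (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)

a, b = 0.5, 0.45
print(aces_fit_inverse(a) + aces_fit_inverse(b))  # ~0.67
print(aces_fit_inverse(a + b))                    # ~2.86 -> additivity is broken
```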

You should never-ever-ever-ever do that without an excellent reason, the only important one I see is to preserve logos and branding appearance for picky clients.

Cheers,

Thomas


Thanks Thomas and thanks Brian. I am starting to see a bit more clearly on this topic.

@bleleux Is there any way you could provide a color profile for a P3-D60 or D65 for Substance Painter ? That would be very helpful.

A BIG thank you as we are really starting to get closer to a proper color management…

Chris

I apologize for the delay; the notification mails were going into the spam folder :confused:

It’s not a matter of right or wrong but consistency in a non-physically accurate mixed sRGB-ACES pipeline.
At any rate the tools offered here allow for any workflow described.

Say a year ago you authored a great Smart Material in Substance; this is our ground truth. The issue is that it is in sRGB. You can’t just slap the “Utility - sRGB - Texture” IDT on it, make everything darker, and grow used to it. The reason is that if, at the time, we had instead created the material in an ACES workflow from the start, we would have reached the exact same result, the same tones and colors, viewed through the sRGB ODT viewing transform, because the real ground truth is our perception of things [1].
(…)
With the above method the first reaction would be to manually adjust the material’s albedo tones to resemble the original look; after all, as an art, what matters most is the final look of the shaders when rendered. Let’s call this the “correct but time consuming” method.
But why do that when we can reverse engineer the process? What we arrive at is an inverse RRT for the textures, while keeping the RRT for the render. The material looks the same (ground truth) but we get the filmic look of the Reference Rendering Transform. As a result our ACEScg diffuse albedo now matches one that was made in an ACES workflow from scratch (scene referred), or with the “correct” method, because we used the look of the material as the ground truth and simply reversed the process.

[1]
Let’s say that using our perception of things is not enough of a ground truth to be an accurate approach (it fails to recover a physically accurate albedo map), and I would agree, although at some level textures are always “tuned” in some way or form to conform to the look of the final render.

In a big studio all (?) textures and materials are made from scratch, so it’s a matter of converting RAW polarized photos to ACES, passing them through a Macbeth chart, and doing some texture retouching, or using them as reference for color sampling. No need for PBR fitting or reversing the RRT. Actually, physically accurately acquired sRGB albedo maps shouldn’t have anything reversed; the ground truth is known and they should look fine straight away (“Utility - sRGB - Texture” IDT). But how many of them look right (think textures.com, Substance Source, sampling Google images)? Mostly none, I guess, given how dark (crushed blacks) the image renders with physically accurate HDRIs. They were authored to look good in a non-filmic sRGB viewing transform. Heck, even Substance Painter’s bundled HDRIs are loaded as sRGB when in fact they are in AdobeRGB space.

With that said, my suggestion for sRGB conversion is not for the big studios that can afford an end-to-end ACES pipeline and in-house accurately acquired maps, but for smaller ones that reuse artistically made assets/resources.

You should never-ever-ever-ever do that without an excellent reason, the only important one I see is to preserve logos and branding appearance for picky clients.

The same principle applies to unknown materials or photographs…

transforming values that were previously radiometrically linear into values that are now non-linear

…with photographs you can still (non-trivially) remove the gamma and apply a generic inverse camera response curve, which is muddy water because we don’t know the sensor’s dynamic range, expect the photo to be evenly lit and polarized, and later pass it through a Macbeth chart.
Anything less than that and we will not be working in scene-referred linear light. Now we can gamma encode and safely use “Utility - sRGB - Texture”.
With physically accurately acquired photographs, however, the processed RAW source is our ground truth.
To summarize, I don’t think it’s correct to punish users by making them accept darker-than-intended materials because at creation time they were not authored through the look of sunglasses. It’s not something that can be argued, because it’s an opinion.

Closing

In closing, the “PBR SmartFit” and “ACEScg” filters and LUTs work as intended; they are no less correct than (the corresponding part of) Nuke’s OCIOColorSpace node, or at least the simplification of it in Stephen Hill’s code that my filters are based on. Which workflow to follow is the user’s choice. The PBR range fitting is original code.

For two weeks I have been wanting to make a workflow video showing all the tools and options. I want to refine an old asset to showcase a just-made matching LUT and .frag file of Jim Hejl’s “Ilford FP4 Push” function.


@Dogway:

Hi,

If you want or need to use the Reverse View Transform, please feel free to do so, but you cannot say this:

and not generate waves :slight_smile:

Nobody forces anyone, and this statement simply spreads misinformation. I do not really want to see texture artists start throwing the Reverse View Transform everywhere because they read on ACEScentral that there is no choice and this is the way to go. You might have reasons to do it and a good understanding of why you are doing it, but it is not an assumption that can be generalised, and I’m sure more than one person reading might have been confused. I’m trying to correct that.

The issue is that it is in sRGB, you can’t slap “Utility - sRGB - Texture” IDT, make everything darker and grow used to it.

In that case, you overexpose your render view to a level that suits you. Epic Games did it when they adopted ACES: their artists were finding the change from sRGB to ACES too dramatic, so they applied a 1.45 gain to their ACES fit to be closer to what the artists were used to. Unity Technologies did a similar thing, but when I updated the PostProcessing stack code, I removed the gain, deciding it would be best to be compliant with the ACES reference, without any subjective choice. People can manually adjust exposure to their liking with an LMT, a custom OCIO config with a CDL transform, a render setting, whatever does the job for them and their DCC applications.

But, why do that when we can; reverse engineer the process. What we get at is inverse RRT for the textures, but keeping the RRT for the render.

You do realise that applying the Reverse View Transform on your textures and re-rendering using an ACES workflow will never match your previous sRGB ground truth? You will have to tweak your textures anyway, and I would argue that systematically applying the Reverse View Transform will make your job harder in the long run.

Heck, even Substance Painter bundled HDRIs are loaded as sRGB when in fact they are in AdobeRGB space.

What makes you think it is the case, is it based on facts, an educated guess or a gut feeling?

To summarize I don’t think it’s correct to punish users to assume darker than dark materials because at creation time they were not authored through the look of sunglasses.

It is not about punishing users, it is about educating them to proper and correct workflows that, at the end of the day, will make them faster, more efficient, and give them solid foundational knowledge.

I certainly don’t want to sound harsh and all contributions are welcome :slight_smile:


@Thomas_Mansencal:

It’s only a matter of defining the ground truth. If you physically acquired your materials you are good to go. If you didn’t, you need to reverse engineer the process (not talking about the RRT here). Reverse engineering sounds complicated, but in the scope of look development it is what we have been doing throughout the history of computer imagery…

…you set your physically measured lighting environment and based on that you tune your maps and shaders until they look good (realistic). In reality what you are doing is reverse engineering the true physical linear-light albedo map under your viewing conditions, including display gamma, look (if you used one) and lighting setup. That’s why in lookdev it’s recommended to use flat lighting and later test with different lighting setups. We are reverse engineering the true albedo (and other) maps. And because of this the artistically crafted material is now considered our ground truth, a reverse engineered approximation of a physically acquired albedo map.

Let’s step back and check what happens when our viewing conditions include an ACES sRGB viewing transform.
Now you are compensating for the viewing transform’s RRT [1], effectively embedding the compensation into the albedo map. Is this a good or a bad thing? Well, this is the same as asking someone how they remember tones and colors. Reference material, as discussed before, has a camera response curve, an S-curve type function, and this has been so since we used film cameras, so it is safe to assume that we remember things in a filmic way. So for me this is a good thing.

And this takes us back to how we did look development all these years: if doing lookdev with the RRT is a good thing because it matches the filmic way we perceive the world, does that mean we did it wrong all this time? Sort of. Filmic LUTs or looks for CG rendering are a rather recent concept, and without them we were embedding the filmic look (the RRT, if you wish) into the albedo map directly, because a filmic look is our perception of things (and the default look of the reference material we use).

What this means is that when you apply the RRT to a render of an old authored sRGB material you are effectively applying a filmic look TWICE. The filmic ground truth embedded into the sRGB material, and the ACES RRT.

[1]
In broad strokes, what is the RRT really? Among other things, it is two things: a crafted tonemapper and a crafted (filmic) look function based on an array of print film stocks (the PFE).

By applying the reverse RRT to our old sRGB filmic ground truth Albedo we are removing* the embedded filmic part of the map. Now we have a reference ACES compliant material** that we can safely view under the sRGB display referred viewing transform (RRT+ODT).

*Yes, reversing the tonemapper also creates illegal HDR textures, and this is handled in the “PBR SmartFit” filter, and, to answer you, not by subjectively applying a 1.45 gain to the render (check the four-render comparison) but by mapping the albedo’s input PBR min, max and middle grey to the output, with a soft clip in the highlights. With physically acquired maps you will run into similar issues: a RAW-to-ACES conversion is in HDR range, an illegal PBR range for an albedo map. How do you convert that to an ACEScg PBR compliant albedo map? Do you cut off the HDR and out-of-PBR values?
**To answer your post: the material (passed through the sRGB viewing transform) matches our sRGB ground truth; the render doesn’t, because the RRT is also applied to the light, something we couldn’t embed when authoring the sRGB material.
To prove it, check the sRGB color charts in the OP. When the light component is removed with a surface shader you get a match of the ground truth (plus PBR and unclamped HDR values).
***To answer the AdobeRGB HDRI question: an educated guess. AdobeRGB is the de facto color space for HDRIs given its wider gamut; you are certainly safe honoring the assumption that an HDRI is AdobeRGB unless specified otherwise (it should be tagged in the metadata). Telltale signs are a lack of banding in skies, or very dull HDRIs when viewed with the sRGB IDT.

  • Closing

I can’t write a two-page essay when presenting my work, as this is going to confuse users; people expect to use something and not get frustrated, and here there are many posts from frustrated users trying to accept that things should look darker than their already-filmic ground truth.
You are a respected color science figure, and not without reason; everything you state has a great influence on readers. That’s why I’m writing these long explanatory posts. You might get my point, but I don’t expect the average user to. People want easy-to-understand concepts.
The RRT development roadmap includes a parametric RRT to ease the reversibility of the function; that says something important.
Now, people are free to use the tools however they choose. If I pushed too hard in the original post to follow a certain direction that I believe is correct, I apologize for that. All in all I was trying to help with the frustration by shedding some light.

  • Summary

There’s a place for each workflow and I tried to explain why and how to use each.
"Utility - sRGB - Texture" IDT : Using physically accurate acquired photographs/textures or crafting a texture/material from the very scratch (painting or/and adjusting non-ACES photographs).
"Output - sRGB" IDT : Everything else. Preserving an already authored material, collage a color map with unreversed non-ACES photographs.
The ACESFilm 2.0 LUT by its own reproduces (with some compromises) the first option (“Utility - sRGB - Texture” IDT) in Substance Painter.

I’m not allowed to edit the OP, so instead check the image posted on my ArtStation.
The image in the OP (4th render sample) is wrong because I mistakenly assumed Painter was doing the right thing with the HDRIs.

Absolutely not, studios across the world have been working with filmic View Transforms for many-many-many years. This presentation about games (not films but games) by @hpduiker is from 2006.

I disagree with that; who is to say that all the sRGB materials are authored with a filmic look in the first place? Making incorrect assumptions can be dangerous and might bite back.

How many times are surface shaders that are not light sources rendered and end up in final frames on shows? My guess is pretty much none.

The complex BxDF and light transport interactions are what make your suggested workflow something I would not consider to start with, because applying the Reverse View Transform to textures will not get back to the previously rendered sRGB View Transform look anyway. The more complex the shaders and lighting are, the more it will diverge. This really only works for logos and branding that sit on top of the rendered image…

If you are working with an annoying client that is so fond of the previous look developed with an sRGB View Transform, why shoot yourself in the foot and use a filmic View Transform? If you really want to use a filmic View Transform, then have your Comp artists or Colorist build a compensation for you.

This is an assumption I would certainly not make; if anything, the only commonly agreed-upon colour space is the lowest common denominator: sRGB. I shoot a lot of HDRIs myself, process them with my own code, and I use sRGB as the encoding space very often.

Who is to say that the HDRI author did not simply desaturate it? A ton of people online use Adobe tools to generate them and end up with the ACR tone curve embedded in their images without even knowing it. If one has no clue how the HDRI was processed and encoded, i.e. it does not ship with a Colour Rendition Chart, one cannot expect physically correct results. As a matter of fact, one does not even know if the HDRI represents radiometrically linear illuminance values. In that case, the adopted gamut is probably not the major concern, and the HDRI look will be subjectively tweaked until it produces the expected results.

Yes, I’m very much aware of that; for what it is worth, it is something the authors of the ACES RAE (I’m one of them) have been pushing for.

Again, the frustration point of things looking too dark should be addressed with the hooks the system was designed with, e.g. exposure compensation with an LMT, OCIO config tweaks, etc. In LookDev, we spend our time changing image exposure to assess whether the tones are correct everywhere anyway; one should not be too strongly attached to a particular exposure value.

@Thomas_Mansencal:

At this point the conversation is going nowhere. I do have a point (what you requested), already explained in the last two posts, and saying more than that simply adds more noise to the thread.

In the previous posts I made a big effort to explain myself, and now we are nitpicking quotes and arguing about small words, so this is my last elaborate post of the conversation.

Given how you still ask about things I have already explained, I don’t think you are making the slightest effort to understand, and even less to explain. But this is expected from an ACES team member whose duty is no more than to enforce the specification (which studios break time and again, within reason, to get the job done). Would you explain how you go from a RAW-acquired HDR ACEScg albedo map to an LDR ACEScg PBR albedo?

Absolutely not, studios across the world have been working with filmic View Transforms for many-many-many years. This presentation about games (not films but games) by @hpduiker is from 2006.

Yes, more or less around the time we realized that we had to linearize our gamma encoded textures. Feels like yesterday.
In the scope of look development (the topic under discussion), filmic looks got popular with the implementation of OCIO, such that we can now apply looks in the DCC’s framebuffer “on the fly”.

I disagree with that, who is to say that all the sRGB materials are authored with a filmic look in the first place? Making incorrect assumptions can be dangerous and might bite back.

I’m not saying all, I’m saying likely, but let me quote your earlier words:

Resources coming from outside, e.g. Google Images, are usually rendered, i.e. likely to have an S-Curve applied…

Referencing material with embedded looks leads to embedding looks into materials.

The clear evidence that Epic and other studios had to compensate for that shows that the authored materials had a film look embedded. It’s in your linked “ACES Retrospective and Enhancements” PDF, part III.A: Framestore, Unity, EA… Everybody is breaking the hard-coded specification because it makes little sense to throw an established ground truth out of the window.

"Often, when the ACES system is used, the client look transform is concatenated with the inverse RRT and inverse ODT into a LMT so as to completely cancel out the ACES look. Recent experience at Framestore and Eclair has seen projects where this has been the case. Even outside of the motion picture industry, discussions with Unity Technologies have shown that the RRT contrast was deemed too high"

Reading into it, it’s obvious they are reusing old assets (sRGB-authored ground truth materials) directly in ACES with your recommended IDT. I don’t think the RRT is contrasty or harsh; if you author your material from scratch under the ACES sRGB viewing transform you have total control over the look of your material and albedo map. Honoring the old sRGB ground truth material is my personal solution to a common problem.

To further expand on how ubiquitous embedded filmic looks are in authored materials, check this GIF I made. It’s the PBR SmartFit filter mathematically correcting (and being generous with the limits) Substance Painter’s bundled materials, proof enough that they were not even PBR, much less physically accurately acquired maps.


And this is a repeating pattern in other packages and shared materials. This is only for PBR compliance, which can be easily fixed even when the filmic look is embedded, so go figure how many PBR-valid but physically inaccurate materials are floating around.

The complex BxDF and light transport interactions are what makes your suggested workflow something I would not consider to start with because applying the Reverse View Transform on textures will not get back to the previous rendered sRGB View Transform look anyway

I already explained that.

the material (passed through the sRGB viewing transform) matches our sRGB ground truth, the render doesn’t because the RRT is applied also to the light, something we couldn’t embed when authoring the sRGB material.
To prove it check the sRGB color charts in OP. When light component is removed with a surface shader you get a match of the ground truth

To your question:

How many times are surface shaders that are not light sources rendered and end up in final frames on shows?

On the “complex BxDF and light transport” and surface shader note, following a diffuse reflection model the next quote should hold.

Diffuse Albedo: How bright a surface is when lit by a 100% bright white light (...) with 1 in brightness and point it directly on a quad mapped with a diffuse texture, you get the color as displayed in Photoshop. (Sebastian Lagarde)

So it’s pretty common; it’s the principle behind Macbeth chart based color correction.

I shoot myself a lot of HDRIs, process them with my own code, and I use sRGB as encoding space very often.

When half of the work is not done correctly (tagging the OpenEXR chromaticities attribute), you can’t do worse. Wide gamuts make sense with the higher bit depths common in high dynamic range images, to prevent quantization errors. A correctly authored HDRI is in a wide (wider than sRGB) gamut and tagged in the file’s metadata if not explicitly stated.
As to why AdobeRGB and not another space: I did some research, and AdobeRGB was a recurring color space in the creation of HDRIs; there were not many more choices than that and sRGB in the processing tools.
A 32-bit float HDR image in sRGB doesn’t make much sense, which is another reason why I am using ACES for renders.

Who is to say that the HDRI author did not simply desaturate it? A ton of people online… end up with ACR tone curve embedded

If you start tweaking curves you are doing it wrong, as it no longer represents the environment’s scene-referred linear light. Which takes us back to the beginning, when we built our materials from non-accurately acquired resources (photographs, HDRIs) and simply tried to match the look of the material (against the reference) by artistically reverse engineering it. The same is happening with HDRIs: people are fine-tuning them to represent what they consider a good photo, not an accurate representation of spectral data.

(if the HDRI) does not ship with a Colour Rendition Chart, he cannot expect to have physically correct results

A Color Checker or Colour Rendition Chart is not usually representative of physically correct colors and tones, and my next project goes in that direction. Check this video.

  • Closing

I’m certainly not enjoying defending against your persistence in dismissing my words and hence my work. I’m wasting my time and I guess you are too. When I see someone do or say something I don’t agree with, I don’t go to their presentation thread and insistently try to put the work down; I didn’t see you do this to Framestore, Epic Games and so on. I’m presenting a valid point, a solution to a recurring topic. You can express your well-founded opinion, ignore it, or download the tools and use “Utility - sRGB - Texture” in the ACEScg filter. I’m not shoving this down anyone’s throat; I shared the tools for free to allow any workflow.

An image speaks for itself. I’m posting the correct, self-explanatory comparison image that should replace the one in the original post.

  1. Old sRGB ground truth
  2. Similar to “Utility - sRGB - Texture” IDT
  3. Epic Games style global look compensation
  4. sRGB ground truth honored in an ACES environment (RRT still applies to light/shading)

Not all S-curves are filmic and they are certainly not created equal. Not all S-curves compress highlights the way an ARRI K1S1 or the ACES RRT does; the Adobe ACR tone curve, for example, does not.

But they corrected for that by adjusting the exposure of their imagery, not by applying the Reverse View Transform to all their textures.

Sorry, but you don’t know what Framestore, Eclair or Unity Technologies were doing, nor do you know the context of those discussions.

How often are you rendering uniformly lit, unit-brightness Lambertian quads? I would bet a beer that you have never produced final frames on any show with such specific quads :slight_smile:

Sorry, but this is effectively getting nonsensical: if the colours of the location you are capturing fit within the sRGB gamut, then it is a completely meaningful and valid option.

Please come up with your numbers and statistics, this could be useful.

It is a reference that will help assess it but then, of course, it needs to be captured correctly.

I’m not trying to put your work down, again I reacted to this statement of yours:

I will finish on that:

I’m not trying to enforce the specification; I’m trying to avoid people feeling forced to adopt a confusing workflow, simple as that. This is subjective, but your last updated image does not support the point you are making very well anyway.

1 Like