ACES 2.0 Output Rendering

I’m not sure how @LeeNiederkofler created that image but it does look a bit strange and does not appear to match any of the provided OpenDRT presets. For the record:



I’ll agree with Doug about one thing: it is quite an interesting test image, thanks for sharing @prolost. What I like about it is the red light on the yellow sphere. The intensity is so high that we don’t even cognize it as red light through picture-formation approaches with stronger red→yellow curvature in hue angle. By comparison, skipping picture formation (just converting to the Rec.709 gamut and gaining down until the lights on the balls don’t clip) paints a rather different picture:

Edit: Apologies, I made the same mistake as Doug and had the input gamut incorrectly set as AP0. I’ve updated the images above to correct my mistake.


Yes, I used the ACEScg EXR from @prolost although the filename tag suggests AP0.
In Nuke I can’t see any negative RGB values with the CurveTool when reading the file in as ACEScg.

Here are the commands for the plot that I used:

# plot C4D render
import colour

RGB = colour.read_image('acesDemo_05_aces_i4.0000.exr')
colour.plotting.plot_RGB_chromaticities_in_chromaticity_diagram_CIE1931(
    RGB[::5, ::5, ...],
    colourspace='ACEScg',
    colourspaces=['ITU-R BT.709'],
    scatter_kwargs={'c': 'k', 'marker': '+'},
    bounding_box=[-0.1, 0.9, -0.1, 0.93],
)

Here is a plot done in a Google Colab. That would seem to match the shape, but not the size of @doug_walker 's plot. I think Doug may be interpreting the ACEScg EXR as AP0.


Thanks Jed, that matches my results. Default and Colorful both look very good to me on this image.

Thank you, I’m delighted that it turned out to spur a discussion. I originally created it as an example of the benefits of rendering in a wider-than-the-destination color space. The same scene rendered in linear sRGB with a simple display transform has all kinds of issues, from posterized highlights to inaccurate light interactions.

But here it became a part of what I know is one of the biggest topics on this forum, how should bright, colorful things be rendered? If I personally have any takeaway in that regard it’s the rather unsurprising conclusion that there’s no one correct answer, and subjectivity plays a big role.

Specifically, the idea that hue skew in highlights can be pleasing and solve real imaging problems is one that will stick with me. The 2499 DRT seems to be literally built around this principle, with parameters for global and per-primary skew. But maybe there’s also a need for creative tools that can help with this under a more dispassionate ODT.


Yeah, sorry, the filename made me think it was an ACES2065-1 image rather than ACEScg (even though Stu clearly stated it was ACEScg above!). Apologies for adding confusion.


Is everyone around these parts convinced of this? The idea can be disproven with a small bit of theory, and validated with praxis.

If possible, it might be prudent to render out the additive air material pass, separate from the base form pass in the PBR colourimetric stimuli data.


The RGB render engine itself does not care what color space you are rendering in. RGB is RGB.
The rendered pixels in the EXR file are identical whether you render in the working colour space linear sRGB/Rec.709 or in ACEScg, as long as you don’t adapt the RGB shader (and texture) values and RGB light values from one working colourspace to the other.

A red sphere’s base color 0.5/0.0/0.0 has a different meaning in linear sRGB/Rec.709 and in ACEScg and will look different through a simple sRGB EOTF view transform or through an ACES ODT, but the RGB calculations in the renderer are in both cases identical.
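
To make the difference concrete, here is a minimal numpy sketch. The matrix coefficients are an assumed linear sRGB→ACEScg conversion (chromatically adapted); the exact last digits depend on which adaptation transform is used:

```python
import numpy as np

# Assumed linear sRGB/Rec.709 -> ACEScg conversion matrix
# (rows sum to 1, so white maps to white).
SRGB_TO_ACESCG = np.array([
    [0.61311781, 0.34109485, 0.04578734],
    [0.06993408, 0.91813314, 0.01193278],
    [0.02046299, 0.10682110, 0.87271591],
])

# The same 0.5/0.0/0.0 triplet...
half_red_srgb = np.array([0.5, 0.0, 0.0])

# ...encodes a noticeably different stimulus once expressed in ACEScg:
half_red_as_acescg = SRGB_TO_ACESCG @ half_red_srgb
print(half_red_as_acescg)  # ~[0.3066, 0.0350, 0.0102]
```

The triplet itself is untouched by the renderer's math; only its colourimetric meaning changes with the working-space label.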

Have a look at this article by @Thomas_Mansencal on the significance of the choice of rendering primaries.


Hi Nick,
I am aware of this article, but I struggle to connect the render example from @prolost with the article of @Thomas_Mansencal. Maybe you can explain it a bit more in detail?

Of course I did adapt the material and light colors between sRGB and ACES for the demo, otherwise, as you say, there’d be no difference between the renders. This was a while ago and my methodology would be more streamlined now thanks to years of shipping an OCIO color-managed 3D application and renderer, but here were the results from 2022:

This was inspired by a simpler demo I shared a year prior with a blue ball and red light.


I very often see shader base colours with far too high values, set by experienced 3D artists who are not so familiar with ACEScg.
What I would like to see in DCCs is a color picker that “tells” you (in a graphical representation):
“In this working space you are out of the sRGB/Rec.709 gamut, or P3, or even Rec.2020.
Be aware how far you drag the color sliders.”

I think this could help.
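
A minimal sketch of such a gamut warning, assuming an ACEScg working space and the inverse of a published sRGB→ACEScg matrix (the helper name is made up for illustration):

```python
import numpy as np

# Assumed linear sRGB/Rec.709 -> ACEScg matrix; its inverse takes us back.
SRGB_TO_ACESCG = np.array([
    [0.61311781, 0.34109485, 0.04578734],
    [0.06993408, 0.91813314, 0.01193278],
    [0.02046299, 0.10682110, 0.87271591],
])
ACESCG_TO_SRGB = np.linalg.inv(SRGB_TO_ACESCG)

def outside_srgb(acescg_rgb, tol=1e-6):
    """Hypothetical picker check: True if an ACEScg colour falls outside
    the linear sRGB/Rec.709 gamut (any channel below 0 or above 1)."""
    srgb = ACESCG_TO_SRGB @ np.asarray(acescg_rgb, dtype=float)
    return bool(np.any(srgb < -tol) or np.any(srgb > 1.0 + tol))

print(outside_srgb([0.2, 0.2, 0.2]))  # mid grey fits in sRGB -> False
print(outside_srgb([1.0, 0.0, 0.0]))  # pure ACEScg red does not -> True
```

A picker could run a check like this on every slider change and tint its gamut indicator accordingly.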

But another thing, now that I see your old sRGB rendering:
Where is this big shadow on the red sphere coming from?
It somehow has the shape of the window on the left? This looks odd.

Cinema 4D doesn’t do exactly this, but it allows you to choose the color space of the picker, and the default is sRGB to discourage folks from habitually using colors that would be implausible in a wider gamut.

When you switch from one color space to another, the values are converted in place.

For example, if you pick this “pure red” in sRGB:

…and then switch the picker to ACEScg, you can see how much “headroom” you have gained:

That’s one of the undesirable artifacts I was mentioning earlier. It’s just the very blue light failing to light up the very red ball. Converting the highly-saturated colors to ACEScg gives the renderer a more realistic chance to add blue light to a not-implausibly-pure red thing, showing some of the benefits that @nick was referring to in @Thomas_Mansencal’s article.


Using a simple example of an sRGB blue light illuminating an sRGB red surface, the reflected light is the result of multiplying the two values:

>>> sRGB_red = np.array([1, 0, 0])
>>> sRGB_blue = np.array([0, 0, 1])
>>> sRGB_red * sRGB_blue
array([0, 0, 0])

So “rendering” in sRGB results in no light reflected.

Converting the colours to ACEScg, multiplying them, and then converting the result back to sRGB does not give a zero result, illustrating the effect of changing the rendering primaries.

>>> sRGB_red_as_ACEScg = colour.RGB_to_RGB(sRGB_red, 'sRGB', 'ACEScg')
>>> sRGB_blue_as_ACEScg = colour.RGB_to_RGB(sRGB_blue, 'sRGB', 'ACEScg')
>>> sRGB_red_as_ACEScg
array([ 0.61311781,  0.06993408,  0.02046299])
>>> sRGB_blue_as_ACEScg
array([ 0.04578734,  0.01193278,  0.87271591])
>>> reflected = sRGB_red_as_ACEScg * sRGB_blue_as_ACEScg
>>> reflected
array([ 0.02807304,  0.00083451,  0.01785838])
>>> colour.RGB_to_RGB(reflected, 'ACEScg', 'sRGB')
array([ 0.04589599, -0.00284283,  0.01973478])

Display those values with an sRGB curve, and they look like the purple you might expect:
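
For anyone wanting to reproduce that last step, here is a small sketch of the standard piecewise sRGB encoding applied to those values, clipping the slightly negative green channel first (the curve is written out here rather than taken from a library):

```python
import numpy as np

def srgb_encode(x):
    """Piecewise sRGB encoding curve (linear toe, 2.4-power segment)."""
    x = np.asarray(x, dtype=float)
    return np.where(x <= 0.0031308, 12.92 * x, 1.055 * x ** (1 / 2.4) - 0.055)

# Reflected light from the ACEScg round trip, expressed in linear sRGB:
reflected_srgb = np.array([0.04589599, -0.00284283, 0.01973478])

# Clip the small negative green channel, then encode for display:
encoded = srgb_encode(np.clip(reflected_srgb, 0.0, 1.0))
print(np.round(encoded, 3))  # ~[0.237, 0.0, 0.151] - a dark purple
```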

Thanks for that Nick! My original example of this was hard for some folks to believe until I showed the math in a similar fashion.

In sRGB-land, this red-orange light actually makes the blue ball green!

In ACES the red light is allowed to make the ball purple:

Some folks assumed I’d made some kind of mistake to get the green result, so I had to show my math:

I even re-created the phenomenon “live” in After Effects using simple cards and OCIO for the color conversions:


This is because under a colourimetric PBR, the B part… the “Based”, is always dependent upon the working space.

In a colourimetric world view, the reflectance of the surface and the colourimetric stimuli response of the surface is enmeshed into one somewhat ridiculous bundle.

The takeaway should not be that using a larger stimuli volume is “better”, but rather that the whole thing is a house of cards predicated on some assumptions around colourimetric stimuli projections.

We could say the same issue re-emerges: we would end up with even more residual energy if we were using Sony’s S-Gamut, etc. The bottom line is that conflating a radiometric response of reflectance with colourimetric stimuli definitions is ripe for logical errors such as this. It is a logic and inference mistake, not a technical one.

“Thou shall hear our prayer Oh Grand Spectral Overlord!”

Relevant to the discussion here: The Apparent Simplicity of RGB Rendering


Thanks @Thomas_Mansencal, that’s good stuff.

Hot take (really just aimed at anyone lurking on this thread and wondering if they should, god forbid, trust their eyes): ACES is designed for the film biz, where we fake everything, and if it looks better, it actually is better.

We do spectral rendering where it matters — in our lens flare renderer, for example, but otherwise rendering and grading in ACES AP1 is quite often more than just good enough for show biz, it’s great.


Hi Stu,

thanks for sharing the renderings from your blog, there is always a lot to learn from them.
I was not aware that you have this blog and this article in it.

Image 1:
With a bit of time on my hands, I successfully re-created your experiment in Blender (Standard sRGB):
A ground plane, a blue ball, a white light and an orange light.


It does not look as fancy as yours, but I think it works well too :slight_smile:
The orange-lit side of the blue ball looks greenish.

Image 2:
And for the Blender ACES 1.3 version I converted the blue ball shader values from linear sRGB to ACEScg (via Nuke, as Blender does not offer an internal way to do so, as far as I know).
I converted the RGB values for the orange light “color” as well.

I see the same changes happen as in your images.

As @nick explained, and as I understood it, in the first example (Image 1) we operate close to a gamut boundary with our RGB values when they get displayed, which is why the orange-lit side of the blue sphere turns greenish. It’s maths, as you showed. Is that about right?

And by converting our shader and light values from linear sRGB to ACEScg we get the second result (Image 2), because we “operate” in a bigger working colour space. Is that right too?

The orange light reflecting side of the blue sphere turns more to the magenta side.

Blender not having a “color managed” colorpicker, something that Cinema 4D and other DCC apps have, is a disadvantage, but it actually makes the next experiments easier.
I still would like to have a colorpicker that “tells” me:
“With these values you are inside the sRGB gamut; now you are entering P3 or Rec.2020 and are therefore out of the sRGB gamut, because that is your chosen display output.”

Image 3:
Here are the first shader values again for the blue sphere and for the orange light.
Just this time I do not convert them and start Blender with the ACES 1.3 OCIO config.
As far as I understand, I am now again operating on the edge of my working colourspace, but this time I assign it the meaning “you are ACEScg” instead of linear sRGB as in Image 1.

I would say Images 1 and 3 look somewhat similar; they have a different look because of the different approaches to mapping the RGB values in the EXR for display on an sRGB monitor.
And comparing the RGB pixel values in the same Nuke script, with both colourspaces set to “RAW”, shows me that the rendered pixels are actually identical.

In Image 1 I assign the EXR file the meaning linear-sRGB and I view them through a simple inv.EOTF/EOTF on the monitor.

In Image 3 I assign the EXR file the meaning ACEScg and I view them through the ACES 1.3 sRGB ODT on the monitor.

Next is Image 4 (same OCIO config as Image 1, but with slight adjustments to the shader and light RGB values). In my view this resembles Image 2 a bit more again.

I end up with four images of a blue ball lit by two lights.

And the main things that changed the look of the images are the changed ratios of the RGB channels in the shader and light, and assigning the RGB image data a different meaning - a different working colourspace and a different viewing pipeline.

The PBR world has some strange rules:
The albedo of the “virtual physical” blue ball that “reflects” 1.3% red, 8.9% green and 48% blue of the RGB lights in the linear-sRGB working colourspace changes to 6.1% red, 8.9% green and 43% blue in ACEScg after an OCIO colourspace conversion. The blue ball changed its “virtual physical” properties when changing the working colourspace.
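
Those percentages can be sanity-checked with a quick numpy sketch, using an assumed published linear sRGB→ACEScg matrix (the small differences from the quoted figures come from rounding and the choice of chromatic adaptation transform):

```python
import numpy as np

# Assumed linear sRGB/Rec.709 -> ACEScg conversion matrix.
SRGB_TO_ACESCG = np.array([
    [0.61311781, 0.34109485, 0.04578734],
    [0.06993408, 0.91813314, 0.01193278],
    [0.02046299, 0.10682110, 0.87271591],
])

# The blue ball's albedo as authored in the linear sRGB working space:
albedo_srgb = np.array([0.013, 0.089, 0.48])

# The same triplet expressed in ACEScg - different numbers, hence a
# different "virtual physical" reflectance per channel:
albedo_acescg = SRGB_TO_ACESCG @ albedo_srgb
print(np.round(albedo_acescg, 3))  # ~[0.060, 0.088, 0.429]
```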

To bring the conversation back to ACES 2.0, I also rendered the same two images in Blender directly with the latest OCIOv2 config for ACES 2.0, as this is one of the few DCC apps that read this config at the moment. But right now you must use the Blender 4.5 alpha, because OCIOv2 config support is broken in the current Blender 4.4.x release and, as far as I understood, won’t be fixed until the release of version 4.5.

At last here are all six images together to make it easier to see them at once.

You showed on your blog a photo of a blue object lit by an orange light to prove your point that the blue box does not turn greenish.

I also did a similar test a while back, but I used a HomeKit LED bulb, changed the colours and took lots of photos. The resulting images were rendered out with the ACES 1.3 sRGB ODT. It’s amazing how strange the objects look under narrow-band lighting.



Very cool @TooDee, I’m glad you found it illuminating to recreate my purple/green test. For me I know this stuff doesn’t really click until I get my hands dirty with it. It’s also nice to see the green phenomenon corroborated, as it definitely surprised me and others when I first shared it.

In my own testing, I’ve also found that it’s possible to over-index on the advantages of wide-gamut rendering. I created this test animation to swing the Macbeth ColorChecker (which famously does not fit in sRGB) through a spectrum of color temperatures by lighting it with our sun/sky rig in C4D (which is color-managed).

ACES_AP0_GIF

I rendered it in sRGB primaries, AP1, and AP0. I used ACES 1.2 to convert to Rec. 709, same ODT for all three. There are differences, but they are so minor that you really only see them when they are placed right next to each other.

I remain a fan of rendering in ACES AP1 for 99.9% of my work, but this also makes me appreciate the advantages of stopping short of a massive rendering gamut that could include colors outside the visibility of the human eye.

Here are some key frames from the animation. The fourth quadrant has each patch split among the three rendering spaces:











@prolost
What is the albedo/diffuse of the yellow sphere from the first scene (three spheres, three lights)?
While using acescg_0.72_0.32_0.02 (edge of Pointer’s gamut), more “yellow” seems to be retained under ACES 2.0.