Hello guys, here is a very long post about some questions that were raised last week.
Following the OT VWG meeting from the 13th of January, it looks like we are going down the road of “fixing the OT issues”, which is great. As Carol put it nicely, let’s clean our own house first! As you know, this is the naive point of view of a lighting artist with a growing interest in ACES and Color Management in general.
It can be tricky sometimes to name these issues and track them down properly. One advantage of checking full CG renders is that we can remove half of the equation by being IDT-independent: simply render with constant colors and no textures.
On the other hand, these images may be a bit too abstract to judge an Output Transform properly: do we know what an sRGB sphere should look like? Or an ACEScg light saber?
So I have tried to provide these “pure” AP1 renders/images, with no negative values and no IDT involved, which may show perfectly the issues we’re trying to fix here. I generally use CC24 values on the assets and ACEScg primaries in the lights. Even without any geometry, a spotlight with a volumetric may be enough to highlight some issues, I think.
Of course, these images are “limited”, and I am not saying that they are enough on their own to improve the current Output Transforms. But they do have this one quality that makes them useful, I think.
Another important thing to take into account is that we are not really using the Gamut Compress as intended if we use it on full CG footage, for the following reasons:
- The algorithm’s distance limits are based on digital cinema cameras.
- Full CG renders are already in AP1, and we are compressing to AP1: the compression only touches them as a side effect.
- A “clipped” render like my light sabers is not sampled properly (by definition), and from the tests I have been doing, any attempt to fix it in post would reveal the noise.
Hence the conclusion I have reached: we need the Output Transform to do the job, not a scene-referred step after rendering (unfortunately). Bullet point 3 is debatable, as it depends on the render engine used and the way it samples. I am taking here the example of a render engine that samples using the ODT as a convergence criterion (or threshold).
So full CG artists may be in a position where positive values within the AP1 gamut, such as strong saturated lights, will generate all kinds of issues. I just want to emphasize that this does not only happen on live-action footage (from different cameras/IDT issues), and it cannot be properly solved by the gamut compress algorithm (as far as I know).
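To make the “side effect” concrete, here is a rough Python sketch of the distance-based idea behind a gamut compress. The threshold, limit and roll-off curve below are placeholders I picked purely for illustration, not the shipping RGC constants (which were derived from digital cinema camera gamuts, hence the first bullet point):

```python
import math

def compress_distance(d, threshold=0.8, limit=1.2):
    # Distances up to the threshold are left untouched; beyond it they
    # are smoothly rolled off toward the gamut boundary. Illustrative
    # curve and constants, NOT the shipping RGC parameters.
    if d <= threshold:
        return d
    x = (d - threshold) / (limit - threshold)
    return threshold + (1.0 - threshold) * math.tanh(x)

def gamut_compress(rgb):
    # Each channel's "distance" from the achromatic axis, measured
    # against the largest channel (one common formulation).
    ach = max(rgb)
    if ach <= 0.0:
        return rgb
    return tuple(ach - compress_distance((ach - c) / ach) * ach for c in rgb)

# A strongly saturated, strictly positive red: its zero channels sit at
# distance 1.0, past the threshold, so they get lifted even though the
# color never left AP1.
print(gamut_compress((4.0, 0.0, 0.0)))
```

With this kind of formulation, any sufficiently saturated value crosses the threshold, so strictly positive AP1 renders still get desaturated, which is why I call it a side effect rather than the intended use.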
So the purpose of this post is:
- To accurately define these issues.
- To make sure that these artifacts are real and not a brain construct: do we see the same thing?
- To try to identify where these issues may come from (with the little knowledge I have).
Hopefully this post will succeed in sharing images that showcase and demonstrate each offending issue.
About the renders:
- All the exrs will be uploaded to Dropbox next week as AP0 exr files.
- All the images in this post are 8-bit tiffs with Rec.709 (ACES) burnt in.
- All these images have been processed using ctlrender.
As @nick has stated, we do of course have the issue that many of us are viewing the images on computer monitors that are 8-bit or less. So there may be “posterization” visible in the displayed image, when it doesn’t exist in the image data.
Friendly warning: there are no negative values in AP1, but there are some in the shared AP0 exr files. I believe this is due to the 16-bit half-float limitation of the “aces compliant exr” checkbox on the Nuke Write node.
I was not able to find a definition of Hue Skews or Hue Shifts online, so I came up with this one: a Hue Skew is a shift of perceived color on the path to white. We can observe a shift of hue when increasing the exposure on different spheres here:
- On the blue sphere towards purple
- On the red sphere towards orange
- On the green sphere towards yellow
In this render, I used sRGB primaries on the spheres, rendered in ACEScg and displayed in Rec.709 (ACES) using ctlrender. Each row is a one-stop increase.
By hue here, I mean both a color and a shade of a color. Here is a close-up to avoid any ambiguity:
It seems that the per-channel lookup is the main culprit here, as these animated gifs from @nick show:
Please note that these two gifs are not exactly the same as the ones from the original post. The new ones are rendered in Nuke using Baselight 5.3’s shader-based implementation of ACES, rather than OCIO’s LUT-based one. The exact numbers are different, but the visual result is essentially the same as the old version.
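To illustrate the mechanism (not the actual RRT/ODT math), here is a tiny Python sketch where a simple Reinhard curve, x/(x+1), stands in for the real tonescale (an assumption for illustration only). Applied per channel, it compresses the bright blue channel much harder than the dim red channel, so the red/blue ratio, and therefore the perceived hue, drifts:

```python
# Minimal sketch of why a per-channel tonescale skews hue.
# The Reinhard curve x/(x+1) is a stand-in for the real ACES tonescale.

def tonescale(x):
    """Simple per-channel tone curve (Reinhard, as a stand-in)."""
    return x / (x + 1.0)

def apply_per_channel(rgb):
    return tuple(tonescale(c) for c in rgb)

# A bright, saturated blue: little red, lots of blue.
blue = (0.1, 0.1, 4.0)
mapped = apply_per_channel(blue)

# Channel ratios encode the chromaticity: if the ratios change,
# the perceived hue changes.
ratio_before = blue[0] / blue[2]      # 0.025
ratio_after = mapped[0] / mapped[2]   # ~0.114

print(ratio_before, ratio_after)
```

The red channel gains relative weight after the curve, which is exactly the blue-towards-purple drift we see on the blue sphere.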
So far the workaround has been to add a bit of green to compensate for the hue skew. This is far from ideal, as the chromaticities of the scene are modified.
I do believe that the creative intent of the scene should be displayed in the most faithful way.
Some plots have also been done using colour-science to study the path to white of sRGB primaries:
If we look at the blue primary’s path to white, we can see that it skews wildly towards purple. I believe these lines should not necessarily be straight, but they should be somewhat less curved. From my understanding, there is no consensus on what perceptual hue paths should be.
Hue Skews not only appear with sRGB primaries but also with ACEScg primaries. In the following render, I did a sweep from blue (0,0,1) to purple (1,0,1). We can observe a really significant Hue Skew when increasing the exposure: each row is a one-stop increase. Think of each sphere as an individual light source on a mid-gray plane (0.18).
I find the bottom-left corner to be very intriguing. Here is a close-up:
We go from blue directly to pink on the sphere itself, but the illumination from the spheres on the plane stays blue. I thought it was worth mentioning.
An alternative technique we could try is to apply the tonescale to max(RGB) rather than per channel (R, G, B).
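A quick sketch of that alternative, again with the Reinhard curve standing in for the real tonescale (an assumption): drive the curve with max(RGB) and scale all three channels by the same factor, so the channel ratios, and the hue, are preserved.

```python
# Per-channel vs max(RGB) tonescale. Reinhard x/(x+1) is a stand-in
# for the real tonescale, used here only to show the mechanism.

def tonescale(x):
    return x / (x + 1.0)

def per_channel(rgb):
    return tuple(tonescale(c) for c in rgb)

def max_rgb(rgb):
    peak = max(rgb)
    if peak <= 0.0:
        return rgb
    gain = tonescale(peak) / peak   # one gain shared by all channels
    return tuple(c * gain for c in rgb)

blue = (0.1, 0.1, 4.0)
pc = per_channel(blue)
mx = max_rgb(blue)

# Per-channel changes the R/B ratio (hue skew); max(RGB) keeps it.
print(pc[0] / pc[2], mx[0] / mx[2], blue[0] / blue[2])
```

The trade-off is that a pure max(RGB) tonescale preserves chromaticities exactly, so bright saturated values never desaturate: some separate path-to-white step would still be needed on top of it.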
Gamut clipping occurs when colors that are different in the input image appear the same when displayed. Clipping in some color channels may occur when an image is rendered to a different color space and contains colors that fall outside the target color space.
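Here is a small numeric sketch of that definition: two different saturated ACEScg reds both clip to the same Rec.709 value. The 3x3 matrix below is the commonly published ACEScg (AP1, D60) to linear Rec.709 (D65) conversion; treat the exact coefficients as illustrative.

```python
# Two distinct ACEScg reds collapse to one Rec.709 red after clipping.

AP1_TO_REC709 = (
    ( 1.705051, -0.621792, -0.083259),
    (-0.130256,  1.140805, -0.010548),
    (-0.024003, -0.128969,  1.152972),
)

def mat_mul(m, rgb):
    return tuple(sum(m[i][j] * rgb[j] for j in range(3)) for i in range(3))

def clip01(rgb):
    return tuple(min(max(c, 0.0), 1.0) for c in rgb)

red_a = (1.0, 0.00, 0.0)   # pure ACEScg red
red_b = (1.0, 0.05, 0.0)   # slightly less saturated red

out_a = clip01(mat_mul(AP1_TO_REC709, red_a))
out_b = clip01(mat_mul(AP1_TO_REC709, red_b))

# Both land outside Rec.709 (negative G and B, R above 1), and after
# the hard clip they display as the exact same pure red: detail is lost.
print(out_a, out_b)
```

This is the flat, detail-free look I describe on the light saber renders below: everything near the red corner of the gamut collapses to the same display value.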
I have been asked what the goal of the image was. The short answer is that I want to be able to light with a saturated red color, with no clipping nor skew.
- If I limit the gamut of color selection to sRGB, my light saber will skew to orange, and I want it to be a true saturated red.
- And if I use an ACEScg red primary, the render will be very clipped, which I think is an issue.
In real life, lasers are Rec.2020 primaries, and I personally like to have a real-world reference. I did have a look at a Star Wars (The Empire Strikes Back) reference for this scene.
I think this render using ACEScg primaries in the lights looks clipped and flat, especially the face of Mery (the character on screen left). Would you agree with this statement? It may come from the hard clip in all the display gamuts, and/or the clamp right at the first step of the RRT. There is also a hue skew on Mery’s shirt (magenta showing up in the red).
What should this render look like? Some tests have been done on this render using Colorfront (with a perceptual OT), and I was pretty pleased with the result. The only “issue” I have noticed is that the pixel values for the green light saber went from “170” to “7” in the green channel. I would have preferred to keep the same amount of energy here.
Since I did this render myself and know the values used in the scene, I believe the Display Rendering Transform from Colorfront to be more faithful to the scene I created than the ACES Output Transform.
I also did a test with the Gamut Mapping algorithm to compare with the Colorfront result. Here I am relying on the gamut compress algorithm’s side effect mentioned above.
Here is a close-up to remove any ambiguity about the issue we’re seeing:
In CG, we generally want saturated values and shaping on the faces at the same time, which makes the left render unacceptable for our movies. I personally consider the two examples (Colorfront and Gamut Compress) to be a reasonable target for the next Output Transforms. Obviously more testing should and will be done.
I also did these three renders to show different issues with only one light and one chromaticity, for more clarity:
- A red ACEScg primary in the light -> Gamut Clipping
- A red ACEScg primary in the light + Gamut Compress -> Clipping and Skews
- A red ACEScg primary in the light -> Hue Skews (red turns orange)
We also did a plot to study the path to white of ACEScg primaries. I think the clipping can be seen where the values are stuck along the border of the gamut.
I believe some sort of Gamut Mapping/Compress for the Output Transforms would fix the issues.
Mach bands are an optical illusion named after the physicist Ernst Mach: the contrast between edges of slightly differing shades of gray is exaggerated as soon as they touch one another, triggering edge detection in the human visual system.
This one is quite tricky, but I may have found an example in the following render:
Here is a close-up of the last row :
I do see some weird bands around the red and pink spheres which may be related to edge detection and Mach bands. Here is the same exr render displayed with Colorfront (you may notice the blue sphere’s hue skew is also gone with Colorfront) :
If we focus on the bottom row, I find it to be less disturbing visually :
I hope this summary (sorry for the long post) will remove any ambiguity about the issues I was trying to point out. Apologies for not having used the right terms in the first place. This post is just an attempt to clarify these concepts and start a conversation. I am more than happy to discuss all of this with you, and I am looking forward to meeting #4.
I will add three descriptions below (posterization, solarization and banding) just for the sake of it, even though I was not able to observe them in my CG renders using ctlrender.
Posterization implies a lack of precision in the signal via quantisation. Here is the Wikipedia definition :
Posterization or posterisation of an image entails the conversion of a continuous gradation of tone to several regions of fewer tones, with abrupt changes from one tone to another. This was originally done with photographic processes to create posters. It can now be done photographically or with digital image processing and may be deliberate or an unintended artifact of color quantization.
This phenomenon is sometimes referred to as “banding” because it creates bands of the same color in gradations. Posterization produces multiple flat areas due to quantisation, resulting in visual “bands”: it is a loss of smoothness in gradients, akin to quantisation of the signal, where you basically reduce the signal quality.
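A minimal sketch of that quantisation, in Python: a smooth gradient collapsed to a handful of code values, each covering a wide flat “band” of the ramp.

```python
# Posterization: quantising a smooth gradient to a few code values
# collapses it into flat bands.

def quantize(x, levels):
    """Map x in [0, 1] to one of `levels` evenly spaced values."""
    step = levels - 1
    return round(x * step) / step

# A smooth 256-step gradient...
gradient = [i / 255.0 for i in range(256)]

# ...quantised to 8 levels: only 8 distinct tones remain.
posterized = [quantize(x, 8) for x in gradient]

print(len(set(gradient)), len(set(posterized)))  # → 256 8
```

This is also why, as @nick noted above, an 8-bit (or worse) monitor can show posterization that does not exist in the image data itself.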
As @KevinJW explained :
Banding and Posterization can look the same, but the cause is what distinguishes them in my book. e.g. clipping can produce areas of flat colour due to a limit of range being ‘hit’, posterization can also have areas of flat colour, caused by a reduction in variation of adjacent pixels, typically due to discrete values clumping together for precision reasons, […] emulsion break down can result in the density curve bending the other way, which can I suppose look similar in some circumstances.
While solarization may, like posterization, manifest as areas of flat colour, it is not the same issue. Historically, solarization referred to an effect caused by extreme overexposure of photographic film. While that analog photographic effect is more complex, in digital images I would use the term to refer to an area of the image going “flat” due to clipping.
However, in some cases there is a risk of producing abrupt changes between shades of the same colour. For instance, displaying natural gradients (like sunsets, dawns or clear blue skies) can show minor banding.
This issue is called “color banding” and it happens when values within a gradient get pushed so much that there is no color/value in the file to actually represent the mathematical change you’ve applied with a tool. Banding is the visual result of posterization of a gradient.
As @KevinJW explained :
I think people use the term banding because they see bands (stripes) in graduated areas (like the sky) as the discontinuity in intensity becomes visible but that is a special arrangement of pixels, you can still posterise without seeing explicit bands.
I sometimes wonder if the complexity of the Output Transforms could be reduced here, and hopefully this post allows us to do some nice experiments.
I would also like to thank @nick for reviewing this post.