Proposal for resolving the conflict between 'swappable core rendering' vs 'doing everything in LMT'

So in yesterday’s VWG meeting I was attempting to reason between the two positions on where we should allow the variation. My argument was that I believe we require the ability to swap out the core part of the rendering, in order to satisfy my literal hand waving that said the desire for a desaturating path to white precludes the requirement of being able to reach highly saturated colours.

In order to progress this and not repeat ourselves, I’d like to resolve the conflict by proposing the following requirement. I will suggest that this be an item for ‘voting’ on at the next meeting, if people are agreeable to that approach.

Requirement
If the final output transform is capable of spanning the destination gamuts of at least the following displays/viewing conditions, then my requirement would be satisfied and I would not be opposed to it on this ground.

Gamuts/Conditions

  1. Rec 709/1886 Monitor
  2. sRGB Monitor
  3. DCI P3 Reference projector (dark environment)
  4. D65 P3 Display

I’ll define ‘span’ as being able to generate output encodings that completely fill an 8-bit RGB sampling grid of the encoded output.
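
As a rough sketch of how such a coverage check could be implemented (spans_8bit_grid, forward_render and inverse_render are hypothetical stand-ins for whatever output transform and inverse are under test, and the half-code-value tolerance is my own assumption):

import numpy as np

def spans_8bit_grid(forward_render, inverse_render, tol=0.5 / 255.0):
    # Build the full 8-bit RGB sampling grid of the encoded output.
    steps = np.arange(256, dtype=np.float32) / 255.0
    grid = np.stack(
        np.meshgrid(steps, steps, steps, indexing="ij"), axis=-1
    ).reshape(-1, 3)
    # Invert each code value to scene-linear, then render it back out.
    scene = inverse_render(grid)
    round_trip = forward_render(scene)
    # The grid is "spanned" if every code value survives the round trip.
    return bool(np.all(np.abs(round_trip - grid) <= tol))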

Notes

One could argue about whether an 8-bit grid is too coarse, but I have had cases with the current rendering which failed to generate the required output colours on a Rec 709/1886 ‘video’ output; in this case, being able to generate the values even with quantisation would be an improvement.

I’m sure something should be added to this requirement to include HDR outputs, but other than anecdotes about WRGB OLED displays being too limited, which would argue for a similar constraint, I can’t say I have a hard requirement/example.

If other people have similar requirements or arguments for their situations, please feel free to reply/create other topics in a similar manner so we can collect these for discussion in future meetings.

7 Likes

Here’s a suggestion that might not be well received but… Why shouldn’t the default ACES 2.0 output transform provide a solution for both? These two things (path-to-white and the ability to get super saturated colors) are indeed in conflict, but I don’t think that has to mean that one output transform couldn’t satisfy both, as long as the output transform has a parameter for it. Yes, this parameter would have to be tracked, but so would any other solution. With the parameter, the user could decide which one they want to use.

As far as OpenDRT goes, it’s an interesting test to see what kind of rendering we can get out of it if it has to retain full saturation for these gamuts (Rec.709 and P3).

Quickly testing with current ACES, inverting CMSTestPattern and outputting Rec.709 and P3, you can get the cube back out perfectly if you disable both the “global-desaturation” and “red-modifier” sweeteners, but not with them enabled. Normal images without super-saturated colors still look ok with both disabled.

1 Like

This may be a conflicting requirement because the protocol isn’t well defined and is based on erroneous assumptions.

Ask ourselves about some BT.709 red stimulus and what should happen. We could have 15 units of light stimulus of pure red, and then 10372722 units.

The whole discussion of why the dechroma must occur has been sadly avoided, and as such, it is possible that the reasoning of an “either/or” situation is erroneous.

Until someone can locate what to do in the above example, and then the “why to do”, it’s an ill defined problem with ill defined speculation.

4 Likes

Yes, this. I’d like to understand more about dechroma as well. In testing I have convinced myself, correctly or not, that dechroma is the most critical part of OpenDRT, but if you search for it in the forum there’s very little information… But I wasn’t suggesting that there wouldn’t be dechroma happening.

This is from the Dropbox and is in the third requirement:

  • highlights shall desaturate at a certain point for low-saturation things and less so for items that are bright and saturated (e.g. neons, car taillights, lightsabers, etc.) - (how do we determine the threshold? - is this purely subjective? can we make it objective?)

So bright saturated colors should desaturate less than colors that have lower saturation. So the latter should desaturate faster than the former? Does this happen with current dechroma in OpenDRT?

This assumes the claim is correct.

The claim does not state what the mechanic achieves (the most important “Why” portion), so it is dubious to begin with.

Without an underlying theory behind the dechroma, it’s all just rubbish flourish. I would hope there were an underlying theory / hypothesis to guide it.

2 Likes

Troy, what is your answer to your questions?

The requirement should still be explored, no? You can read it as being just a perceptual thing, and how it’s achieved (with a desaturation trick or something more fundamental) is for the implementation to figure out. And if the requirement is rubbish then it should be abandoned or revised.

Mainly I’m curious how the dechroma in OpenDRT works currently and why it works the way it does…

I don’t really mind about “requirements”, but we should be interrogating the theory as to what is behind the mechanic and how it contributes to imagery. Couldn’t agree more!

Folks cite this; however, as best as I can see, this leans on the theory of cone saturation. Cone saturation, according to everything I have read and learned, including basic tests folks can conduct daily, happens at rather “extreme” levels. That facet invalidates the idea that a dechroma is some biological convention we are emulating.

Instead, it seems far more fundamental to the nature of additive and subtractive media. The “Why it ‘works’” question remains outstanding. Given that, the mechanics of paint / creative film work resist the above hypothesis, and also throw shade at the seemingly goofy tendency to say “highlight” around here.

It would seem less rubbish and more absolutely fundamental. Until we interrogate it, however, it’s random rubbish with respect to whatever it is facilitating. Two candidates for what that might be:

  1. Aesthetic conventions of subtractive media and the nature of their rendering of tonality, with respect to the extremely limited dynamic ranges of representation in subtractive media.
  2. Expanding the representational range of “apparent brightness” in a manner that additive emissions cannot express.

1 Like

About Chroma Compression

Chroma compression seems to be a common source of confusion, so I will try to explain it better. I did attempt to explain why it is a necessary component of a display transform at the beginning of this thread quite a while ago, but I’ve learned a lot since then, so maybe it’s time for another stab at it.

First a definition of the problem. In a display transform we must remap scene-linear to display-linear. What does this mean? In scene-linear we are dealing with a large range of values without an upper bound. Here’s a plot of a hue sweep, animated from 0 chroma to 100% chroma, plotted in RGB. The input intensity ranges from 0 to 6 stops above 18% grey.
[image: dechroma01_scene-referred]

In a display transform we need to map this large range of input values down into a little cube: the bounded display-referred gamut volume.
[image: dechroma02_to_display-referred]

In the 3-dimensional RGB space, the plot of all saturated colors forms an inverted pyramid.
[image: dechroma03_to_display-referred2]
As you can see above, if we just compress saturated colors into the display gamut cube without changing the colors, we aren’t using the top half of the cube! That’s a lot of wasted volume we could use for making nice looking images.

[image: dechroma04_display-gamut-chroma-compression]
If we compress the chroma of saturated colors towards the achromatic axis with increasing brightness, this allows saturated colors to be rendered with more apparent brightness in the image. This is a cheat necessary because of the limited dynamic range of our display devices compared to the real world. In HDR, we have a higher available dynamic range in our display devices, therefore we need less chroma compression in the rendering for these devices.

Per-Channel Chroma Compression

First let’s take a look at what is happening in our familiar per-channel rendering approach.

Viewed from the side, with 80% chroma:
[image: dechroma06_per-channel-side-0.8]
Note how chroma is being compressed above middle grey, and expanded below middle grey.

And with 100% chroma:
[image: dechroma06_per-channel-side-1.0]
With more saturated input colors near the edge of the display gamut cube, we can see much more significant distortions in hue as chroma is compressed. It’s also interesting that there are no values outside of the cube after compression.

Here is a view from the top, this time animating input chroma from around 50 to 100%.
[image: dechroma05_per-channel_chroma_compression]
Note how the hue distortions converge towards the secondary colors: cyan, magenta and yellow. Red stays perfectly aligned.

But what if we do a very small 1 degree hue rotation on the input?
[image: dechroma06_per-channel_chroma_compression_rotate]
Now the hue that is very close to red is distorting significantly towards yellow at the top end.
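
As a tiny illustration of that skew, here is a sketch (not the actual ACES rendering: the hyperbolic tonescale below is a hypothetical stand-in for a per-channel compression curve):

import colorsys
import numpy as np

def tonescale(x, s=1.0):
    # hypothetical hyperbolic compression, applied to each channel independently
    return x / (x + s)

rgb_in = np.array([8.0, 0.3, 0.1])  # bright red, hue nudged slightly towards yellow
rgb_out = tonescale(rgb_in)

hue_in = colorsys.rgb_to_hsv(*rgb_in)[0] * 360.0   # ~1.5 degrees
hue_out = colorsys.rgb_to_hsv(*rgb_out)[0] * 360.0  # ~10.5 degrees
print(hue_in, hue_out)

The green channel sits much lower on the compression curve than the red channel, so it retains proportionally more of its value, which is exactly the skew towards yellow described above.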

OpenDRT Chroma Compression

Here is what OpenDRT’s chroma compression looks like:
[image: dechroma07_opendrt_dechroma]

The algorithm is very simple, but there are a few key things that make it work well.

First, the math of the dechroma is a lerp towards an achromatic axis defined by some vector norm, controlled by some factor.

The norm is important because it controls the shape that the dechroma takes when compressed. OpenDRT uses a euclidean distance norm, with a weighting applied to the 3 channels. This norm forms a shape that compresses secondary hues more than primary hues, and works in our favor for pleasing image appearance.
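
For example, using the weights from the pseudocode below (purely illustrative numbers): a pure red input (1, 0, 0) has a norm of 0.24, while a yellow input (1, 1, 0) has a norm of sqrt(0.24² + 0.1²) ≈ 0.26. The secondary’s larger norm yields a smaller compression factor, and therefore a stronger pull towards the achromatic axis.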

The factor is derived from the highlight compression, and uses a simple hyperbolic compression function.

In simple pseudocode:

float3 rgb = <input vec3>;

// weighted euclidean distance norm of the input
float norm = sqrt(pow(rgb.x * 0.24, 2.0) + pow(rgb.y * 0.1, 2.0) + pow(rgb.z * 0.09, 2.0));

float sx = 0.7; // input domain scale
float dch = 0.5; // dechroma strength (high end)
float sat = 1.2; // saturation strength (low end)

// chroma compression factor: hyperbolic compression of the norm
float ccf = pow(sx / (norm + sx), dch) * sat;

// apply dechroma with a lerp towards the achromatic axis
// (the scalar norm broadcasts to an achromatic float3)
rgb = norm * (1.0 - ccf) + rgb * ccf;
return rgb;

Another key aspect of the chroma compression is the domain that the compression is applied in. OpenDRT uses ~CIE 2006 LMS.
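
As a hedged sketch of that structure (the CAT02 matrix below is a well-known stand-in, not the actual ~CIE 2006 LMS matrix OpenDRT uses; the weights and parameters are carried over from the pseudocode above):

import numpy as np

# CAT02 XYZ-to-LMS matrix, used purely as a stand-in for the
# ~CIE 2006 LMS matrix that OpenDRT actually uses.
XYZ_TO_LMS = np.array([
    [ 0.7328, 0.4296, -0.1624],
    [-0.7036, 1.6975,  0.0061],
    [ 0.0030, 0.0136,  0.9834],
])
LMS_TO_XYZ = np.linalg.inv(XYZ_TO_LMS)

def dechroma(xyz, sx=0.7, dch=0.5, sat=1.2, w=(0.24, 0.1, 0.09)):
    lms = XYZ_TO_LMS @ xyz
    # weighted euclidean norm, as in the pseudocode above
    n = np.sqrt(np.sum((np.asarray(w) * lms) ** 2))
    ccf = (sx / (n + sx)) ** dch * sat
    # lerp towards the achromatic axis defined by the norm
    lms = n * (1.0 - ccf) + lms * ccf
    return LMS_TO_XYZ @ lms

The working space changes where the achromatic axis sits and how hues bend during the lerp, which is presumably why the domain matters so much here.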

If we adjust the dch parameter, it affects the strength of the chroma compression at the “top end”.
[image: dechroma08_opendrt_dechroma_dch]

And if we adjust the sat parameter, it affects the chroma more at the “bottom end”.
[image: dechroma08_opendrt_dechroma_sat]
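
Some rough numbers show why the two parameters separate like this (using the pseudocode’s sx = 0.7 and ignoring the sat multiply): at a large norm of 4.0, the base sx / (norm + sx) ≈ 0.149, and raising it to dch = 0.5 versus 0.8 gives ≈ 0.39 versus ≈ 0.22, a big difference. At a small norm of 0.05, the base is ≈ 0.933 and the same exponents give ≈ 0.97 versus ≈ 0.95, nearly identical; at the bottom end the factor is instead dominated by the sat multiplier.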

Represented above, we have two different approaches. Per-channel is a “water fills cube” approach.
[image: dechroma10_water]

OpenDRT uses a “cube inside a balloon” approach.
[image: dechroma09_balloon]

As usual, here is the Nuke script I used to generate the above images, if you want to play.
chroma-compression.nk (37.1 KB)

10 Likes

Given that it is used for aesthetic and technical reasons (or aesthetic reasons because of technical limitations), and people want more or less of it contextually, “cheat” might not be the right name.

It is worth pointing out that it is harder to reach the primaries and secondaries with the norm-based dechroma, but it is much easier to fill the volume homogeneously, i.e. without distortions.
It is visible in your images in that the “balloon” never reaches the corners nor has cusps.

Cheers,

Thomas

Thanks Jed. This deserves a thread of its own to discuss further.

1 Like

To my mind, if we end up with parameters that open up that level of control, you in effect have multiple different transforms, just hidden in a single blob with a lot of options.
To my mind, the parameterisation should be there to target different viewing conditions (monitor capabilities, room, etc.) but not to deal with fundamentally different rendering pathways.

It’s worth thinking through though.
I do also worry that in that scenario, not only would you end up with an LMT being dependent on a specific rendering transform, but also on the specific settings of that rendering transform.

When you say subtractive media here, are you talking about painting? Or negative film? Or both?
Do you feel like dragging some of these aesthetic conventions along is a positive or a negative?

I also tend to agree that de-chroma “to the eye” does seem to only happen at pretty extreme levels (for me, looking into taillights and other direct light sources), but it does seem to be real. Going back to everyone’s favourite “paintings of flames” example, it does seem to be something that people “feel looks right” even when nothing about the medium inherently pushes things in that direction. Can we separate out which parts are tied to the medium and which are perceptual effects we’re trying to emulate?

By introducing de-chroma effects (which happen at high levels in our eyes) lower down in the SDR domain, are we trying to elicit that feeling of much higher brightness levels?

I think you’re right pointing to “expanding the representational range of apparent brightness” as the main issue.

(Are we sure this needs to stay in the per-pixel domain? Can’t we just fall back on the oldest trick in the Comper’s toolbox for conveying additional luminance information beyond the display range? Stick a big ass glow on it!!! :thinking:)

3 Likes

I agree. It’s a pickle. I wonder though if it forces the decision to retain the saturation by default, because it’s easier to deal with those colors in the scene data than it is to come up with a new DRT if the default one can’t reach those colors. Also there’s the inversion workflow…

Both.

It’s a convention inasmuch as it is impossible to replicate imagery without varying filter level in subtractive media; paint / film isn’t just the same chromaticity at higher emission, assuming a diffuse source of, say, 100 nits. It varies along the entire range.

I believe saying “highlight” and “high levels” is part of the problem when tackling this; it happens across the entire range.

If one tests this hypothesis by limiting a chromaticity at a given brightness, one will find that while exposing the entire display gamut volume can work fine for CGI content, on “naturalistic” imagery from a camera, the pull of that aesthetic convention feels pronounced enough to warrant application across the totality of the range, and not simply for values beyond the output medium.

…and we’re back to the ACES Retrospective and Enhancements document, which asked for a more parameterizable transform. This is not inherently a bad thing, and I don’t think it conflicts with the goal of having no look in the DRT and proposing look development tools, so I’m really all for it. Actually, it would be hypocritical of me to say otherwise, since I just implemented a modified version of OpenDRT 0.0.82b1 (newer versions are probably going to fix the problems I sought to fix with my modifications, though). As for look development tools and doing more in LMTs, I don’t think that is very controversial, although doing everything in LMTs might be hard if something needs to be done after compression, like (taking a modified ACES as an example) applying an inverse blue highlight fix matrix right after the SSTS.