Gamut Mapping Part 2: Getting to the Display

That’s just an accidental digital RGB look. All digital RGB skews towards the complements of the working space primaries: cyan, yellow, and magenta. Skins skew to a nasty yellow, skies to a nasty cyan, etc.

It’s a good tell-tale sign of broken colour handling.
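
To make the mechanism concrete, here is a quick Python sketch (my own toy example with a made-up skin-tone value, not anything from the original post): clip each channel independently at the display maximum and the RGB ratios of a bright orange drift towards yellow.

import numpy as np

skin = np.array([2.0, 1.0, 0.6])    # hypothetical bright, orange-ish scene value
clipped = np.clip(skin, 0.0, 1.0)   # naive per-channel clip at the display maximum

print(skin / skin.max())            # [1.0, 0.5, 0.3]  -> orange ratios
print(clipped / clipped.max())      # [1.0, 1.0, 0.6]  -> skewed towards yellow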

Awesome replies! Thanks guys!

What? :wink: We need more people like you in the VWG! Every opinion counts, and I feel like there are not enough image makers currently in this group.

That’s a completely valid opinion. What if clients want/like the skew? Personally I would rather have it engineered into an LMT than rely on a happy accident. This would also allow for a diversity of hue skews, rather than everybody relying on the same skew all the time. Does this sound reasonable to you?

Totally! They are lit red, but what should happen when you hit the display limit of 100% red emission? Here are two sweeps to show you the differences between ACES and the DRT.

Values collapsing at the display and skewing towards yellow (one of the notorious six):

Values elegantly going to white thanks to gamut compression/mapping:

Finally, here is another sweep with different bias values to show the difference:
0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6. There is not only one path to white, right?
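
For anyone who wants to poke at the idea, here is a toy Python sketch (my own, not the actual naive DRT code) showing how a bias parameter can change the trajectory a bright, saturated value takes towards white:

import numpy as np

def toy_path_to_white(rgb, bias=0.0):
    # Crude chromaticity-preserving compression of the peak channel, followed by a
    # bias-controlled blend towards the achromatic axis. bias = 0 keeps the colour
    # saturated; larger bias desaturates earlier, tracing a different path to white.
    rgb = np.asarray(rgb, dtype=float)
    peak = rgb.max()
    if peak <= 0.0:
        return rgb
    compressed = peak / (1.0 + peak)     # simple Reinhard-style tonescale, < 1.0
    out = rgb * (compressed / peak)      # preserve the RGB ratios
    white_mix = bias * compressed        # more bias -> stronger walk towards white
    return out * (1.0 - white_mix) + compressed * white_mix

for bias in (0.0, 0.3, 0.6):
    print(bias, toy_path_to_white([8.0, 0.4, 0.1], bias).round(3))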

Not bad for a DRT that is “only” 1/3 complete. :wink:

Chris

Thank you!

Wouldn’t this mean we need an individual LMT for each ODT?
I think it’s OK to have a pack of different LMTs for each ODT, but who will create them? And how? Is there any simple way to emulate this without artifacts? I mean, if this were part of the ODT, a lot of people and time would be involved. And if it were about 25 LMTs per ODT made as unofficial LMTs, it probably wouldn’t get the same amount of human resources to make it artifact-free.

I think, to make everything clearer for people like me, who are just users of colour modification software, a good way would be to make some stills of skin, neon lights, nature and fire through different display transforms, including the K1S1 LUT, the new ALEXA LUT, IPP2, the Resolve DaVinci mapping algorithm, the current ACES Rec.709 ODT and a hue-preserving mapping.
I think this would let more people understand what is going on and feel they can say something about it, if the community needs more opinions from end users, not just developers. Right now I’m just scared to say what I think, because you all speak at such a high skill level (and I like it) that anything I could say would definitely be too obvious for anybody here. I feel like I’m telling spaceship developers that their ship should be airtight. Of course they know that without my ever so ‘important’ opinion.

Not necessarily. I liked Daniele’s idea of having some LMTs designed by the VWG and based on different needs. For example, an animated feature film does not necessarily need the same look as a live-action feature film. So, off the top of my head, we could have basic LMTs like: film emulsion, saturation boost, contrast boost, previous ACES look, etc. So not 25 LMTs for each ODT, but a set of LMTs for the whole system.

We could update this page with the naive DRT. That’s a good idea.

Don’t be! Seriously, this community is great at embracing people and making them feel welcome! Nobody will be annoyed because a freelance colourist gave his opinion. On the contrary! As Thomas said last time: the more, the merrier. And what is obvious to you may not be obvious to others. Sometimes (and I am not saying this is the case here) developers can lose track of the goal. This is why it is important to also have end users as part of this group. In the end, we are the ones who will be using the system!

Chris


Many developers are dogfooding here so we are all good! :slight_smile:

There seems to be an inherent view that we are reproducing a “film type” image; that we are more or less trying to mimic the chemical response of the photographic film process (albeit selectively). I am challenging that assumption as a matter of discussion (philosophically as well as considering the variety of mediums/use cases that are embracing ACES outside motion picture production). I hope others will join me in that discussion.

My question of how to deal with colors outside a display’s gamut relates more to this question of intention. I know we need to remap them into a smaller gamut (coming from AP0 or AP1 to, say, Rec.709/sRGB), which necessitates information loss in some form or other. Moving a high-luminance, saturated color towards white is certainly one approach (and is maybe the best one), but it is not the only option.


Maybe this is an interesting observation (or maybe not): look at the yellow overexposed region in the first image of both sweeps. The first sweep (ACES), with increasing exposure, spreads that color skew across the other “overexposed” parts of the image. So, from this point of view, the ACES transform is more consistent with the initial image :slight_smile:

What is “overexposed”? Accident.

Yellow skin is more consistent with a random and arbitrary point where colours skew based on some idea of “overexposed”.

It’s all rubbish. Complete. Utter. Rubbish.

Sorry, I was not being too serious there :). I only noticed that visual detail: the visual artefacts of clipped color data are the same in the initial image and in the transformed one.
(And sorry for calling out-of-gamut colors “overexposed”.)

Do not be “sorry”! It’s an absolutely critical definition that is ill-defined currently.

What is “overexposure” relative to a radiometrically open domain “scene”?

What is the sound of one hand clapping?
:wave:

Apologies for the 1 month delay in any update. I have started a new job so my infinite free time has been significantly reduced which means progress is slower.

I have continued to work on my chromaticity preserving display rendering transform project as “color pragmatist by night”, and have made some progress which I will share.

I am still continuing under my initial assumption that this would be a neutral, simple, robust, invertible display rendering transform which would not look great out of the box, but would do a good job at being information preserving and unbiased, and that it would be accompanied by an upstream LMT which would do the heavy lifting of crafting a pleasing image according to subjective aesthetic considerations.

I’m going to put this disclaimer in bold.

This project is highly experimental, under active development, subject to change without notice, and should not be used for production image creation. USE AT YOUR OWN RISK.

I have started a github repository here and written up some initial rough documentation here.

A few words about where I’m at. I am pretty sure that there is a much simpler version of this model hiding somewhere in the long labored hours of this monkey at a typewriter here, but I have yet to get there. So for the moment, this is not as simple and elegant as I would like. However I do think the image is getting to a place that is at least in the ballpark of looking marginally acceptable.

More to come…


There’s one thing that is currently not implemented correctly in the OpenDisplayTransform and I was wondering if any of you smart folks might be able to help me out. Maybe @sdyer if you have time?

It is regarding the HDR outputs. I admit I have limited knowledge in this domain, and could probably use a little more reading and learning on the topic.

For the HDR outputs I have implemented the PQ and HLG inverse EOTFs (basically copying what was done for the ACES Output Transform). However, I’m not entirely clear on what needs to happen in the tonescale when changing the output to a higher nit-range display device. I know the mid-grey point might need to be shifted, I know the luminance range mapped to the log domain might need to change, and there might be an overall exposure adjustment needed. But my knowledge here is a bit fuzzy.

Can any expert in this domain explain it like I’m five?

Thanks in advance!

As you have probably seen from the CTL, the HLG Output Transform is just the 1000 nit PQ Output Transform, followed by the PQ EOTF then the HLG inverse EOTF (with L_W = 1000) thus creating an identical image on a 1000 nit PQ and 1000 nit HLG display.

While obviously everything is up for debate, the current HDR ODTs all map mid grey to the same level – 15 nits. You can see clearly what changes if you run a linear ramp in ACEScct through the various HDR Output transforms, and look at the results on a waveform. The mid grey point stays at 15 nits (10-bit PQ CV 340) and everything below that barely shifts between the 1000 nit and 4000 nit PQ OTs. It is really only the roll-off above mid grey that changes.
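
A quick sanity check of that 15 nit / code value 340 figure, using the SMPTE ST 2084 (PQ) inverse EOTF and assuming full-range 10-bit code values (my own sketch, not from the post above):

# SMPTE ST 2084 (PQ) inverse EOTF constants
m1, m2 = 2610 / 16384, 2523 / 4096 * 128
c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_inverse_eotf(nits):
    y = (nits / 10000.0) ** m1
    return ((c1 + c2 * y) / (1.0 + c3 * y)) ** m2

signal = pq_inverse_eotf(15.0)    # ~0.332 non-linear signal for 15 nits
print(round(signal * 1023))       # ~340 as a full-range 10-bit code value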


Last month I stumbled across something interesting which I will share here.

I was re-reading the paper A 3D-Polar Coordinate Colour Representation Suitable for Image Analysis (Hanbury / Serra, 2002) for the 4th or 5th time, trying to wrap my head around the various forms of HSV, HSI and HSL and how they are constructed. I was trying to see if it were possible to construct a cylindrical HSV space defined with V in terms of max(r,g,b) instead of (r+g+b)/3.

As most of you probably already know, Chroma can be constructed as max(r,g,b) - min(r,g,b), and S in HSV is defined as (max(r,g,b) - min(r,g,b))/max(r,g,b). At the time I was also experimenting and testing as many norms as possible to better understand the pros and cons of each. In that paper there is a proof that Chroma is a semi-norm. So I wondered if I could use something like that as a norm for constructing RGB Ratios.
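
For concreteness, here are those two definitions in a trivial Python sketch (with an arbitrary sample value of my own choosing):

r, g, b = 0.9, 0.5, 0.1             # arbitrary saturated orange
mx, mn = max(r, g, b), min(r, g, b)

chroma = mx - mn                    # ~0.8
hsv_saturation = chroma / mx        # ~0.889
print(chroma, hsv_saturation)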

Max(r,g,b) works well as a norm. It’s arguably the most robust. Using Min(r,g,b) as a norm is just weird. It results in RGB Ratios that are entirely negative and … giant. Not useful. So then I thought, “Hey what if we go somewhere between the two?” So I chucked down a dissolve node and lerped between the two. The results were super interesting.

vec3 rgb = INPUT_RGB;
float mx = max(rgb.x, max(rgb.y, rgb.z));
float mn = min(rgb.x, min(rgb.y, rgb.z));
float norm = 0.5 * (mx + mn);               // midpoint of max(r,g,b) and min(r,g,b)
vec3 inv_rgb_ratios = (norm - rgb) / norm;  // inverse RGB ratios

RGB Ratios generated with the above method look pretty similar to those generated using the max(r,g,b) norm, except they have fewer secondaries mixed in. With “traditional” Inverse RGB Ratios, the red, green and blue channels represent the secondaries: cyan, magenta and yellow. But the secondaries are “wide” in that they incorporate all colors, just centered on the secondaries. In my many experiments it has been useful to have access to individual hue directions: make reds darker, reduce the luminance of blue, bias the hue direction of red towards yellow, etc. Ideally you would want a model that did the right stuff without resorting to per-hue adjustments, but per-hue adjustments are still necessary at a certain point.

So to show what this looks like here are a couple screenshots of a standard colorwheel.

“Traditional” inverse RGB Ratios look like this.


Notice how the “cyan” (red channel) incorporates all colors from input yellow around to input magenta, centered on Cyan. This is what I mean by “wide”.

Using the “halfway between min and max rgb” norm, the inverse rgb ratios look like this instead:

Complements are positive, primaries are negative. It’s like a 3-component opponent colorspace which represents all corners of the RGB cube hexagon.
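
You can verify the sign behaviour numerically with a quick Python sketch (mine, using the midpoint norm from the snippet above):

import numpy as np

def inverse_rgb_ratios(rgb):
    rgb = np.asarray(rgb, dtype=float)
    norm = 0.5 * (rgb.max() + rgb.min())    # halfway between max(r,g,b) and min(r,g,b)
    return (norm - rgb) / norm

print(inverse_rgb_ratios([1.0, 0.0, 0.0]))  # pure red  -> [-1.  1.  1.]
print(inverse_rgb_ratios([0.0, 1.0, 1.0]))  # pure cyan -> [ 1. -1. -1.]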

The red channel, representing cyan, looks like this:

And if you multiply the ratios by -1 you get the primaries:

Anyway, just something interesting I stumbled upon in my experimentation, which I thought might be useful.


It is a good model I came across last year during the gamut mapping ramblings (and poked at a bit). I made an implementation in Colab back then, and there is a PR pending to get it into Colour here: PR: Implement support for "IHLS" colourspace. by amulyagupta1278 · Pull Request #614 · colour-science/colour · GitHub

Hey folks,
Appreciate the conversation here. I wanted to share something I observed regarding color/hue shifts when sRGB primaries are brought into the ACEScg color space. In this image, the color swatches on top are from a linear EXR in sRGB primaries; below are the same swatches in ACES (read in with the scene-linear Rec.709-sRGB to ACEScg color space).

[Image: sRGBPrimariesHueShift]

Most notable to my eyes are swatches 4, 6, 10, 11, 18, 21.

de/Saturation
#4 is desaturation of yellow. This was listed in the dropbox paper Output Transforms Architecture.
#6 desaturation of cyan
#10 desaturation of teal
#22 saturation of blue.

Darkening
#18 (also 14 & 23) darkening (luminance). Under the tone scale design requirements, the paper says “slightly lower default contrast”; I think that would likely solve this.

Hue shifts
#2 hue shift green to yellow
#10 hue shift cyan to green (plus desat)
#11 hue shift violet to blue (plus desat)
#21 hue shift green to blue
The paper mentioned hue shifts of primaries with increased exposure, but I did not see mention of hue shifts going from sRGB primaries to ACES. Is this something on the radar for the new Output Transform?

Let me say that I am coming at this from the perspective of an artist, and that I’m offering these naive observations not to draw conclusions, but rather in the hope that bigger brains than mine might have some insights into what’s going on. It’s been pretty awe-inspiring lurking here and listening. They say if you are the brightest guy in the room, you’re in the wrong room. I think I’m in the right room :slight_smile:

Yes, it is part of the discussions pertaining to a chromaticity-preserving rendering transform. Keep in mind, though, that as desaturation occurs, and because the sRGB and ACEScg bases are different, it is expected that you will get different hues.
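
A minimal way to see why (my own illustration, assuming the colour-science Python package is available): a pure Rec.709/sRGB red does not land on a pure ACEScg red, so desaturation or per-channel operations performed in ACEScg cannot be expected to preserve the original sRGB hue.

import numpy as np
import colour

srgb_red = np.array([1.0, 0.0, 0.0])    # linear Rec.709/sRGB primary
acescg_red = colour.RGB_to_RGB(
    srgb_red,
    colour.RGB_COLOURSPACES["sRGB"],
    colour.RGB_COLOURSPACES["ACEScg"],
)
print(acescg_red)                       # roughly [0.61, 0.07, 0.02]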

Cheers,

Thomas

@Derek #18 (and #1 too if you look carefully) is the red modifier of the RRT. If I’m not mistaken, it’s in there because Rec.2020 red was deemed too red. Moving this modifier to a standard LMT is one of the big requests of the RAE paper.

Jean-Michel


Hi @ChrisBrejon and everyone who has not yet seen enough of the RED XMAS clip :nerd_face:.

Here is my take on the “Red XMas” footage. I ran the R3D file through Resolve 17 (ACES) and FCPX (HDR Wide Gamut project). As the bulbs are clipped in the sensor, I am never able to “see” them as a red light source; I can only see the objects around them that receive the red light from the bulbs. I had a simple 3D scene from an HDR test last year and turned the “lights” red. This way I can play with the intensity of the red bulbs without any clipping (except in the LogC encoding used for the later tests).

If you view this page on an iPhone (11 or higher) in Safari you can see the following links in HDR (EDR).

Without a proper HDR monitor setup I cannot judge an HDR project on my iMac (EDR) screen in Resolve. FCPX works far better here, which is why I do the majority of my HDR tests with FCPX and review the results on my iPhone screen (and soon on the new iPad Pro :slight_smile:).

First I took the R3D file and passed it through Resolve 17 (ACES)
https://vimeo.com/544454696 (check the work steps in the subtitles)

Next I passed it through FCPX, which allows, via the RED SDK, reading the file as RedWideGamut or Rec.2020.
I also converted the R3D to Alexa LogC in Nuke, because I found FCPX works best with Alexa-encoded footage when working in an HDR Wide Gamut project, especially if material is rendered in ACEScg.
ACES is not supported in FCPX.
https://vimeo.com/544466973 (check the work steps in the subtitles)

The next clip is in SDR. I rendered some red bulbs in Blender (linear sRGB) and compared them with the standard sRGB (left) and FILMIC (right) view transforms. The red emission is the same in the first and last images; only the strength of the HDRI is different. The “cyan” patch on the colorchecker sticks out a lot in this strong monochromatic light setup.
https://vimeo.com/547176470 (check the work steps in the subtitles)

Blender FILMIC has no HDR view transform to my knowledge, so I exported the linear sRGB renderings as LogC ProRes files and imported them into FCPX. This time you see the ungraded result on the left and a more pleasing grading attempt, using the color correction tools in FCPX, on the right. The next clip is again in HDR.
https://vimeo.com/547185384 (check the work steps in the subtitles)

The next HDR clip shows the R3D footage and my rendering side by side.
I had to grade down the R3D bulbs to around 400 nits to get a pleasing result, but the 3D render also looks fine when I push the bulbs up to the 1,000 nit limit (FCPX has a nit limiter that can be set to different maximum levels; I always use a maximum of 1,000 nits).
https://vimeo.com/547202998 (check the work steps in the subtitles)

And the last clip is again from Resolve 17 in ACES, with the renderings full screen.
https://vimeo.com/547213466 (check the work steps in the subtitles)

It’s a pity I cannot judge any HDR output directly in the Resolve UI. I filed this as a bug, and I hope it will get fixed one day. FCPX works very well with HDR material without an external monitor setup. FCPX uses a linear Rec.2020 working colorspace, as far as I could find out. I don’t think it has any gamut mapping function, but the Color Wheels are very intuitive to control and I find it a very good tool to learn about HDR grading.
I also check my results from time to time on an LG C8 in an office, and I must say the general impression of the HDR clips feels quite similar between the TV screen and the iPhone display.

Best

Daniel
