Yes, I’ve been having some conversations about this topic lately.
The IEC 61966-2-1 specification clearly specifies a pure 2.2 power function as the EOTF of the display, but it specifies a different encoding function.
The encoding function is an offset 2.4 power function with a linear segment near black. This mismatch produces crunched shadows. The intent of this encoding was flare compensation and reducing quantization in the 8-bit code-value world of 22 years ago.
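To make the difference concrete, here is a minimal sketch in Python/NumPy of the two curves in question, the piecewise encoding and a pure 2.2 power function (the function names are mine; the constants are from the standard). The divergence is confined to the shadows:

```python
import numpy as np

def srgb_piecewise_encode(x):
    # IEC 61966-2-1 encoding: linear segment below 0.0031308, offset 2.4 power above
    x = np.asarray(x, dtype=float)
    return np.where(x <= 0.0031308, 12.92 * x, 1.055 * x ** (1.0 / 2.4) - 0.055)

def pure_gamma_encode(x, gamma=2.2):
    # inverse EOTF for a pure power-function display
    return np.asarray(x, dtype=float) ** (1.0 / gamma)

# the two encodings agree closely in the mids and highlights but diverge near black,
# which is where the "crunched" vs "lifted" shadows come from
shadows = np.array([0.0005, 0.001, 0.002, 0.005, 0.01, 0.18])
print(srgb_piecewise_encode(shadows).round(4))
print(pure_gamma_encode(shadows).round(4))
```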
In today’s world things are a bit more complex. “sRGB” monitors could be using a pure 2.2 power function as their EOTF, or could be using the piecewise function. It is impossible to know which monitor a given user will have. It is also pretty common for a normal user not to know which EOTF their monitor is using, or even what an EOTF is. This is why I have chosen to include only a pure 2.2 power function inverse EOTF in OpenDRT.
In this world, maybe a good question to ask would be: “Which option looks better?” or “Which option is safer?”
#1: Image with piecewise inverse EOTF encoding
a. displayed on a pure power sRGB EOTF monitor (shadows crunched)
b. displayed on a piecewise sRGB EOTF monitor (as intended: linear light to display light)
or
#2: Image with pure power sRGB Inverse EOTF
a. displayed on a pure power sRGB EOTF monitor (as intended: linear light to display light)
b. displayed on a piecewise sRGB EOTF monitor (shadows lifted)
My strong opinion is that option 2 is the safer and better-looking choice in most common scenarios.
Well, it is unfortunate, but the standard is not clear at all! People read it the way they want, and they also change their opinion every 6 months on this topic. We wrote about it here: sRGB EOTF: Pure Gamma 2.2 Function or Piece-Wise Function? | Colour Science, and here is the relevant image that people need to consider before any debate starts:
Isn’t how content has been created for years more important than the standard itself? I mean, the instructions exist for the people, not the people for the instructions (I have surely translated this badly from my native language, but I hope it is still understandable). So if all sRGB content is created on 2.2 power function displays and all consumers also view it on 2.2 displays, perhaps we can consider that correct and make it the new standard?
For example, hopefully nobody follows the Rec.709 encoding curve exactly, but at a minimum compresses the highlights instead. Yet we still call these images Rec.709. This is also far from a perfect situation, of course. But it shows that the standards do not reflect the real-world situation, so they probably should be changed.
I apologize for turning this thread into another sRGB vs 2.2 thread.
You are right, I should probably not have used the term “clear”, given the amount of confusion and misinformation being spread around. I have read the spec, and through my reading of it, it seemed pretty clear to me, but maybe I’m an outlier.
Anyway regardless of the spec, the reality is that there are displays with both EOTFs in circulation. And handling of low code values in display hardware is probably another complicating factor.
This is why I think the question I asked above is relevant: Given that we can’t control or even predict which EOTF a consumer’s display is using, which approach is safer? As I stated above I believe that the opposite approach than that used by the current ACES sRGB output transform is the better option.
I’m not aware of any particular issue in modern displays, i.e. not CRT. The EOTF is typically implemented as a hardware LUT in the display circuitry and it can be set to any values by the manufacturer really.
Absolutely! Some of the complaints about contrast are likely to come from that. Now, if we swap the mismatch around, all the other people that were fine will start to complain! I don’t think there is a safer option here though, any option that results in a mismatch should be discarded in modern colour management.
The way I see it: people who truly care about their colorimetry will religiously calibrate their displays to a specification or standard. Let’s now imagine that they work concurrently on two projects, one with ACES 1 and another with ACES 2, where the sRGB EOTF has been replaced with a pure power law. Are we expecting the artists to swap their calibration every single time they jump from one project to another? They also might not have a pure power law calibration target, which means recalibration. In a large facility with dozens or hundreds of displays, it will make some people really sad and angry!
It is certainly a huge can of worms.
This is unfortunately not the case, and a new standard is not created with a finger snap!
@daniele ran a poll not so long ago, and the situation is certainly not black and white.
From my own measurements around town, most factory calibrations are closer to a pure power law in the shadows. There are all sorts of funny other things happening in factory-calibrated displays, but here we are.
I would be very curious to hear more about what you have seen and discovered about this.
@alexfry shared with me a couple of images to roughly check what EOTF the display is using. I did some limited subjective testing among a few compositor friends, and the results are … perplexingly diverse.
I’ll share the test images here, with the disclaimer that they are difficult to read and not very scientific:
The idea is that you load the image with 1:1 pixel scale on your screen, without any color transformation, and determine which image has a better match between top and bottom. The match will be the EOTF your display hardware is using.
Wow! It’s actually working! On my 2.4-gamma-calibrated GUI display I got a perfect match with the gamma 2.4 images. And when I turned on CMS in FastStone viewer, I again got a perfect match, this time with the sRGB images, as expected with an ICC-calibrated display.
The patterns will tell you what the effective EOTF is for the entire display chain (software/OS/hardware), not necessarily the hardware EOTF (although it can be depending on the scenario).
As Anton noted, they also reveal the effects of the ICC chain (or similar), when those are in the mix. You can experiment with assigning Gamma 2.2 or 2.4 ICC profile to the image to see how that reacts for instance. Or the different behaviours apps like Nuke and Resolve exhibit when they’re in their respective “OS colour managed” modes vs their unmanaged modes.
The principle behind them is pretty simple. The solid patches have a particular value in display linear, while the adjacent checker areas have alternating values that average out to the same value.
i.e.:
solid patch 0.25 = checker 0.0 and 0.5
solid patch 0.75 = checker 1.0 and 0.5
These are then run through the inverse EOTF.
Once they pass through the entire display chain, you have display-linear light being emitted from the display, and the eye will average out the checkered areas to be the same as the corresponding grey patch, assuming the EOTF is what we expected it to be; otherwise we see a mismatch.
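For anyone who wants to reproduce the idea, here is a rough sketch of how such a pattern could be generated (not the actual script behind the shared images; the helper name is mine):

```python
import numpy as np

def make_patch(solid_linear, check_a, check_b, inverse_eotf, size=256):
    # solid patch next to a 1-pixel checkerboard whose display-linear average
    # equals the solid value; both halves are encoded with the chosen inverse EOTF
    solid = np.full((size, size), solid_linear)
    checker = np.indices((size, size)).sum(axis=0) % 2
    checker = np.where(checker == 0, check_a, check_b)
    strip = np.hstack([solid, checker])      # display-linear values
    return inverse_eotf(strip)               # code values to write to the file

# the 0.25 / (0.0, 0.5) and 0.75 / (1.0, 0.5) pairings described above,
# encoded for a pure 2.2 display; view at 1:1 pixel scale, unmanaged
pattern_22 = np.vstack([
    make_patch(0.25, 0.0, 0.5, lambda x: x ** (1.0 / 2.2)),
    make_patch(0.75, 1.0, 0.5, lambda x: x ** (1.0 / 2.2)),
])
```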
My feeling is the same as it’s been for years. Reasonable people can disagree about which is “correct”, but the reality is both exist in the wild, and we need standard ODTs that address both.
The reality on the ground is frustrating and complex. I’m particularly annoyed by situations like Apple simultaneously shipping 16 inch MacBook Pros with a stock display/ICC combination that reacts like a Gamma 2.2 display, and the Pro Display XDR that acts like a piecewise sRGB display.
That unknown word “piecewise” can force people to at least think about what they should choose, gamma 2.2 or sRGB, instead of just clicking sRGB because that is what’s printed on their consumer monitor box.
Yes, I believe the Pro Display XDR is the correct implementation here.
In the case of the MBP, it ships with a “Color LCD” ICC profile that describes a screen with P3 primaries, D65 white, and a piecewise sRGB EOTF, but in reality the display behaves like it has a Gamma 2.2 EOTF.
In fact, understanding that the ICC system is the “new school” or “managed” component referenced in the standard makes it pretty clear that the intention is direct code value to display, with a discrepancy between the encoding and the display.
In a modern system where the “overhead” is removed, the management would happen in the middle layer, with direct stimulus in and out on either side. Historically, however, it makes perfect sense, and in fact only makes sense, that the encoding differs from the EOTF in order to achieve the noise suppression.
To be fair, the Color LCD ICC profile has two tone curves attached: an inward-facing one, which is the compound (piecewise) function, and a Display Tone Curve, which is a pure 2.2 power law.
And I always make my test patterns in linear light with 0.0 in the checkerboard:
0.25 solid with checkerboard 0.5 and 0.0
0.5 solid with checkerboard 1.0 and 0.0
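Just to illustrate why that 0.0-based pairing pins down the EOTF, here is a quick numeric check with pure power laws (my own sketch, not part of the actual patterns):

```python
# encode with inverse EOTF 1/a, display with a pure power law of exponent b;
# the solid patch and the checker average only land on the same display-linear
# value when a == b
def emitted(x, a, b):
    return (x ** (1.0 / a)) ** b

for a, b in [(2.2, 2.2), (2.2, 2.4), (2.4, 2.2)]:
    solid = emitted(0.25, a, b)
    checker = 0.5 * (emitted(0.0, a, b) + emitted(0.5, a, b))
    print(f"encode 1/{a}, display {b}: solid={solid:.4f}  checker avg={checker:.4f}")
```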
From a purely pragmatic VFX perspective, in a world where artists are working remotely and the display chain can’t realistically be managed end-to-end (without shipping out hardware), I find it safer to use pure power inverse EOTFs, due in no small part to the fact that we’re invariably delivering content intended for theatrical environments / broadcast. It’s far from ideal, but in a worst-case scenario, I’d rather have artists complain that their blacks are lifted while they’re working, than complain that their blacks are crushed after they deliver – it’s sort of like Type I vs Type II error (false positives vs false negatives).
(And if I’m being totally honest, when in doubt, I tend to mimic whatever FilmLight is doing, absent a reason not to…!)
A little bummed to discover Display P3 uses sRGB (~2.2). Out of curiosity, what do you guys tend to calibrate your wide-gamut desktop displays to?
@jack.holm: Hey Jack, since I see that you are here, would you mind chiming in on the sRGB 2.2 vs piecewise thing, if you think there is anything else to add besides your quoted message above?
It might take me a while to get caught up on 178 posts!
I stand by what I said in my email that was posted, although I am no longer Technical Secretary of IEC/TC 100/TA2. That is one of the positions I gave up as I moved into semi-retirement.
I can add a few comments before reading the posts:
sRGB is for computer displays that are calibrated using a LUT in the video card, or color managed using ICC color management, so there is no need for or benefit of a pure gamma EOTF.
The sRGB ICC profiles have always used the two-part EOTF.
During the time CRTs were used as computer displays, accuracy and uniformity were limited, but by current standards the difference between the sRGB EOTF and gamma 2.2 is large, reaching up to 9 counts in 8-bit (a quick numeric check of that magnitude is sketched after these comments).
A lot of color encodings are called sRGB that are not really sRGB. My suggestion would be: if it is not exactly sRGB, don’t call it sRGB, just say what it is, e.g. 200 nit peak white, gamma 2.2 EOTF, 709 primaries (or whatever).
In most cases SDR video displays should be calibrated to the ITU-R BT.1886 EOTF.
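Here is the quick numeric check of that up-to-9-counts figure referenced above (a rough sketch; the piecewise constants are from the standard):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 100001)
srgb = np.where(x <= 0.0031308, 12.92 * x, 1.055 * x ** (1.0 / 2.4) - 0.055)
g22 = x ** (1.0 / 2.2)

# largest 8-bit code value difference between the two encodings;
# it peaks around linear 0.002, i.e. entirely in the shadows
diff_counts = np.abs(np.round(255 * srgb) - np.round(255 * g22))
print(int(diff_counts.max()))
```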
Thanks @jack.holm, great to hear it from you, and I sincerely appreciate that you took the time!
Do you think the standard should be amended to provide explicit and definitive information about the expected EOTF of the display? As you saw, it is a topic that is all the rage, and people almost read it as a function of ambient temperature. If you think it should be amended, what is the process to get that rolling?