I think it should read:
The display is 2.2 Gamma,
but for efficiency reasons relating to 8-bit processing, we encode and decode sRGB images with the compound (piecewise) function.
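For reference, here is a minimal sketch of the two decodings under discussion, with the piecewise constants taken from IEC 61966-2-1 and a pure 2.2 power function for comparison:

```python
def srgb_piecewise_eotf(v):
    """Decode a non-linear sRGB value to linear light (IEC 61966-2-1 piecewise)."""
    if v <= 0.04045:
        return v / 12.92
    return ((v + 0.055) / 1.055) ** 2.4

def pure_gamma_22_eotf(v):
    """Decode with a pure 2.2 power function instead."""
    return v ** 2.2

# The two curves agree at the endpoints but diverge in the shadows:
print(srgb_piecewise_eotf(0.1), pure_gamma_22_eotf(0.1))
```

The piecewise curve returns noticeably more linear light than the power function for dark inputs, which is the crux of the whole thread.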
If your interpretation were true, compatibility with ITU-R BT.709 would not hold (Annex B).
And it was never stated that the Display Characteristics given in 4.1 should be overridden.
Interestingly, you went from “there is no room for misinterpretation” to “I think it should read” rather than “it reads”; that does not sound unambiguous to me.
The standard might seem unambiguous on the surface, but it has some cracks, and those cracks have been infiltrated by the authors and contributors themselves. We could ignore them as you suggest, and I would be fine with that if the “contextual” information were coming from outsiders, but unfortunately that is not the case here. I would be keen to get an official stance on it, once and for all.
Apple uses a pure 2.2 Gamma EOTF.
If someone could find out what Windows does by default, then we could come to a conclusion from a pragmatic standpoint.
It was simply a hypothesis on why input was defined. A display does not care about the incoming 3-channel pixel data; it will decode it using the settings you chose. If input is specified here, I think it is to highlight that there is no viewing-conditions compensation, as there is with BT.709 for example.
Maybe, but there are a ton of APIs, applications and libraries that use the piece-wise function. It is hard to quantify, but we are talking about many, many software sources versus a handful of panel vendors.
I cannot disagree here, but I suspect that most people would not even notice, and they have far bigger issues than the discussed mismatch:
Poor display factory calibration
Display aging
Display settings that have been tweaked to hell, “let me raise brightness to 100% and lower contrast to 25%…”
Terrible viewing conditions
Etc…, the list could go on.
Not that it is an excuse to make things arguably worse, but it would ultimately fix itself over time as people replace their displays, whereas software sometimes cannot be fixed.
I wanted to run an experiment and randomly change the display calibration between 2.2 and piece-wise at work every lunch to see if artists would notice.
Shadows will be raised to a very noticeable level. I’m sure even gamers would notice much more visible detail in the shadows with the piecewise decoding, especially in darker locations like dungeons.
To be honest, I sometimes see this point of view that a piecewise sRGB display has very little visual difference from a gamma 2.2 display, and I’ve never understood it. There is a huge visual difference in the shadows, one that certainly affects the creative decisions of an artist.
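To put rough numbers on that shadow difference, here is a quick comparison of the linear light produced by the two decodings for dark 8-bit codes (piecewise constants from IEC 61966-2-1):

```python
def piecewise(v):
    # IEC 61966-2-1 piecewise sRGB EOTF
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def power22(v):
    # Pure 2.2 power function EOTF
    return v ** 2.2

for code in (16, 32, 64, 128):
    v = code / 255.0
    ratio = piecewise(v) / power22(v)
    print(f"code {code:3d}: piecewise={piecewise(v):.5f} "
          f"gamma2.2={power22(v):.5f} ratio={ratio:.2f}")
```

At code 16 the piecewise decoding emits roughly twice the linear light of the 2.2 power function, which is exactly the shadow lift (or crush, in the opposite direction) being described; by mid-grey the two curves have nearly converged.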
If a Windows application supports ICC color management, then by default it does nothing to an sRGB image and sends it as-is to the monitor.
That works fine as long as it is a synthetic profile, not one based on measured data from the display. If the ICC profile was created using a colorimeter, then the image is altered so that it displays as it would on an sRGB piecewise display, no matter what the display’s original EOTF was.
Even if this profile has VCGT curves that give us a gamma 2.2 EOTF (these work at the video-card level and apply to everything), any app I know of that supports ICC color management will still alter the image to look as if it were being decoded to linear light by the display’s sRGB piecewise EOTF.
So:
Overall, Windows isn’t color-managed at all. You can rarely find an app that supports it, so sRGB images are sent as-is to a (most likely) gamma 2.2 display.
If an app supports ICC color management and the profile selected in the Windows settings is the default sRGB one, then again the sRGB image is sent to the display as-is.
If an app supports ICC color management and the ICC profile selected in the Windows settings was created using a colorimeter, then it always lifts the shadows of the image to make it look as it would on a piecewise sRGB display.
I would resist the temptation to scrutinize minutiae here; a display is engineered to possess an input / output relation.
The reason I would resist the temptation to firewall this compulsion is that historically, there was a Wild West of competing ideas happening, spanning many different realms, as we can glean from the PDF proposal that predates the document at w3.org.
Lensed under this, it is reasonable that those forces were likely deeply squabbling over many of the points.
I would caution against hearsay here.
There is no “said”, only the specification. Even if Michael Stokes himself were to arrive and declare that the specification held some miscommunication or error, it would be moot; the standard is the standard.
Note that a good body of historical evidence I have accumulated locates Michael Stokes as a primary author, hence why I used his name here.
At best, written feedback from a primary author can at least help to historicize what is present, but it cannot change what is written. Nor can it change the surface of what is already established in terms of vendors.
You have included a citation as “authoritative”, despite said citation not being a primary author, nor one of the three acknowledgements from the original proposal. Further, there is a consistent returning to hearsay as “evidence”, without an acknowledgement that all parties are not neutral here.
Whether intentional or not, that is an agenda, either in the ulterior motive sense, or in the Useful Idiot sense.
I would issue caution here given that again, there is every indication that the history of the specification points toward a distinct lack of consensus. Revisionist history included.
I would draw attention to “infiltrated”. Prudent terminology.
I think this point is misstated in that it is not isolating encoding from decoding. There are many cases where the encoding is indeed, and in terms of specification, correctly the two part. For example, thousands of lines of video game code. What is also known is that the output is explicitly 2.2 in the vast majority of cases. And this of course aligns with the totality of the specification’s outline!
In fact, if one actually follows through the chain, one will find more often than not that the implementation frequently matches the proposed discrepancy between what amounts to an OETF and an EOTF. I concur 100% with Daniele that in 2023, there are many reasons to deviate from the OETF portion of this discussion, but that is a much wider discourse.
I think we are in complete agreement here.
I would like to point out however, that the reason I am incredibly sensitive to the totality of this discussion has nothing to do with what is presented to a casual audience member; literally not a soul will notice because of the incredible plasticity of the differential system of our visual cognition system.
What I am keen on pointing out is that there is an underlying logic present in the Implicit Discrepancy approach that wildly shifts our vantage about the importance of presentation. That is, beneath this seemingly goofy and ridiculous discourse is a pivotal ideological difference. I’d say that it is a foundational one; the importance of Appearance over the scourge of the minutiae of colourimetric measurement.
I’d go further and suggest that it is this discrepancy that continues to plague all efforts around “colour management”, all the way to hijacking terminology and presupposing constructs around visual cognition systems that are flatly false.
If you pay close attention to your lower diagram, you will see that it is impossible under the constraints of the ICC protocols. This is plausibly why Apple has deviated from the more rigid ICC systems in many ways, adding their own bits to the framework.
Specifically, under a strict ICC system, the protocol will broadly:
Undo any encoding transfer function.
Encode to the output medium’s transfer function.
This prohibits the ability to shim in what is effectively the potential for Appearance adjustments.
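A minimal sketch of that strict two-step chain; the function names are illustrative, not an actual ICC API:

```python
def strict_icc_chain(v, source_decode, display_encode):
    # Note there is no slot between the two steps for an appearance
    # (picture-rendering) adjustment; the chain is purely colourimetric.
    linear = source_decode(v)      # 1. undo any encoding transfer function
    return display_encode(linear)  # 2. encode to the output medium's TRC

# IEC 61966-2-1 two-part sRGB curves for demonstration:
def srgb_dec(v):
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def srgb_enc(l):
    return l * 12.92 if l <= 0.0031308 else 1.055 * l ** (1 / 2.4) - 0.055

# When source and display share the same transfer characteristic, the
# chain collapses to a near-perfect no-op:
print(strict_icc_chain(0.5, srgb_dec, srgb_enc))  # ~0.5
```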
In some spatiotemporal articulations this is 100% correct. But the picture should be considered as continuous with the surround from the phenomenological vantage. That is, the surrounding of the picture plays a role based on viewing frustum, but simultaneously the fields within the picture play a role. Remember, all evidence points toward a visual cognition system that has no inferences of the discrete time magnitudes, but is rather continuous time differentials. It is entirely plausible that the EOTF itself is at fault, depending on the spatiotemporal articulation of the fields present in the picture, not just the erroneous construct of “surround”.
Which happens to align with the IEC specification.
I believe @nick is tremendously capable here. I stand by my analysis, but it has been a while since I tested Windows! Broadly:
Windows used to be more or less “unmanaged” in the sense that desktop items etc. were never actively routed through an ICC chain. This would yield elements encoded with the two part broadly aligning with the IEC specification.
Windows, in the cases of “managed” software, would use an ICC based chain, and undo (or skip as a no operation where a 1:1 match was deemed heuristically) the encoding function. This, by default, would be the IEC specification two part. We are now in a closed domain, colourimetrically uniform state.
Windows would look up the display encoding. By default, this was always the HP / Microsoft IEC specification sRGB definition, including the two part transfer function. When identified as the display encoding, the ICC protocols would re-encode the closed domain, colourimetrically uniform tristimulus according to this transfer characteristic. That is, the encoding would again match the two part sRGB definition.
The display hardware would decode. In the vast majority of cases, following the Berns and Motta work, this was broadly a fixed power function. Currently, we can probably all agree it is approximately a pure 2.2.
The above outline used to match the IEC outline present in the standard; a two part encoding routed through a single exponent function decoding.
I wanted to bring back some earlier posts in this infinity long thread which express the point I am arguing for.
Jed writes:
Zach echoes this:
Relevant to the current debate on “what the spec says” Jed adds:
So it is admittedly not perfect or “correct,” but it is the safer option, the “do less harm” option.
You then reply to Jed and say
Yes!! From my perspective, 100% of my complaints of “ACES is too contrasty” came from that. I’m sure you recall all of those long threads when I first came to these forums where I complained that textures had crushed shadows, and that “ACES is too dark” and such? All of that came from me viewing through the ACES sRGB output transform which was crushing the shadows, and all of it was solved by me switching to using the 2.2 pure gamma output transform.
You suggest an amusingly sinister experiment:
In a certain sense, having the default ACES output for sRGB has done this experiment. This artist definitely did notice and I struggled to figure out what was wrong, going down all sorts of wrong avenues to try to fix it (like changing the texture input transform), while you vacillated between patiently educating me, and pulling your hair out saying “please never do that!” and such!
So yes, artists definitely do notice the difference! In my case it is a lookdev and lighting artist being very bothered by it; in @meleshkevich's it is a colorist being very bothered by it. It should not be surprising that artists notice. That’s kind of the definition of what being an artist means. Art is all about learning to observe and see the world around you, and transfer that onto your “canvas.” This “learning to see” is what artists develop. So yes, we notice! The corollary to that for artists is controlling their medium, whether that’s getting the water colors on paper or the pixels on the screen to do what you want, so you can get your artistic vision out there as you intend. An artist will fight with the medium to do that. That’s again, kind of the definition of what being an artist means: we struggle to get what’s in our head out there visually for the world to see, and push and pull our medium of choice to do that. We will color outside of the lines and break the rules to do that.
To quote Zach again,
As a university professor training animation and VFX artists, I can definitely echo Zach here! Based on both my own experience and my experience supervising art students at my university, that fight is much less painful with a pure gamma 2.2 output transform. I have switched to it in our school’s OCIO config as the default for artists’ monitors and never regretted it.
I strongly believe that it should be the default for artist monitors in the ACES OCIO config. Call it a “harm reduction” approach.
The upper one follows the IEC specification, and the vast majority of installations, as best as I understand things.
The real issue is that seemingly innocuous “Color Management” box.
I would suggest that the ICC protocol is an attempt at a “stimulus” management (if we can even subscribe to such a nonsense ontology) system. There’s exactly zero protocol within ICC based systems, as of version 4.X, to account for “picture quality constancy” of the sort being discussed here.
Example:
A default installation across Apple and Microsoft (needs confirmation here in 2023) will follow the IEC standards, routing a two part piecewise encoding to a direct exponent.
A characterized display would report the display’s actual transfer function, and would lead to a two part encoding being dissolved and re-encoded as the (common) pure 2.2 exponent of the medium.
There is no mechanism, to the best of my knowledge, present within the ICC protocol to supply picture quality constancy here. The four prescribed rendering intents are nothing more than colourimetric mumbo jumbo number fornicating, with only two of them, absolute and relative colourimetric, being even broadly and loosely defined.
None of this addresses the ultimate goal of the “older” systems, being an implicit management chain that seeks a degree of picture constancy.
The “factory profile” Apple created from the EDID info from my NEC display doesn’t match the display’s native primaries or current state of the display. It matches Adobe RGB though. Might be that Adobe RGB is the default factory state of the NEC monitor, it’s been a while since I’ve done a factory reset of this display.
To make things a bit more confusing in terms of how Apple handles Gamma 2.2, take a look at their “Generic Gray Gamma 2.2 Profile.icc”. It uses an sRGB TRC.
Daniele is one of the few folks who have a more-than-intimate understanding and testing of the Windows systems. The bulk of those links are little more than nonsense mumbo jumbo.
The ndin (aka NativeDisplayInfo) tag is an internal tag, if memory serves. It is unclear how this sort of vendor tag is used.
It will be derived from the EDID information, and dynamically formed within Apple’s subsystem.
The “TRC” is indeed the reference encoding assumed. That is, this is the specific lookup that is used as the canonized characterization of the display medium in this case. That is, the system will identify the display transfer function as that lookup, which means that the ICC protocol will encode to that TRC. Again:
Determine encoding of the origin colourimetric encoding.
Decode the transfer function according to the specified encoding, or guess based on internal library heuristics.
Apply colourimetric transformations according to rendering intent.
Reencode to the transfer characteristic enumerated by the rTRC, gTRC, bTRC status, or use the parametric tags.
In most cases, assuming a closed domain sRGB encoding, tagged or untagged:
Decode the sRGB two part transfer as issued in the tagged ICC profile, or assume two part in the case of untagged.
Reencode for the TRC of the specified display medium. In this case, it would amount to a no-operation as the incoming encoded and outgoing transfer characteristics are identical. The state of the colourimetric encoding is therefore the two part transfer characteristic.
Feed the encoding out to the display medium, which internally holds a pure 2.2 exponent.
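To make the no-op explicit, here is a small numerical sketch of that path under the assumptions above, with the display hardware modelled as a pure 2.2 exponent:

```python
def srgb_decode(v):
    # IEC 61966-2-1 two-part decode
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def srgb_encode(l):
    # IEC 61966-2-1 two-part encode
    return l * 12.92 if l <= 0.0031308 else 1.055 * l ** (1 / 2.4) - 0.055

def managed_path(v):
    linear = srgb_decode(v)       # 1. decode the tagged / assumed sRGB source
    signal = srgb_encode(linear)  # 2. re-encode to the display's *TRC (a no-op here)
    return signal ** 2.2          # 3. the panel applies its pure 2.2 exponent

# Net result: identical to feeding the two-part encoding straight into a
# 2.2 display, i.e. the IEC outline of two-part encode -> 2.2 decode.
```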
I believe this broadly will follow the outlined pattern present in the IEC document.
Sorry, but it seems to me that Windows 98 and Mac OS 9 did not follow the IEC standards. As I recall, in Windows 98 all images, with and without embedded color profiles, looked the same in the built-in viewer, because the operating system output the RGB values from the file to the monitor as-is, without recalculation.
Yes, Windows 98 had its own color management system, ICM (Image Color Management). But ICM didn’t actually work in that operating system! You could assign all kinds of color profiles to the monitor and images, but nothing changed.
Mac OS 9 had its own color management system, ColorSync 3, but it behaved the same way as ICM in Windows.
Only programs that supported color management or had their own color management system (e.g. programs from Adobe or Corel) displayed images “correctly” in those operating systems, that is, performed conversions between, for example, sRGB or 2.2 image profiles and the monitor profile.
Only in Windows 7 and Mac OS X Panther did files with embedded sRGB or 2.2 profiles look different: one lighter in the shadows, the other darker, depending on the monitor profile and the image profile.
Let me clarify that this is my personal opinion, based on my experience calibrating monitors and adjusting color management systems in graphics programs. I may have mixed something up or forgotten details, my apologies. But I can try to install Windows 98 in a virtual machine and check.
@Troy_James_Sobotka : The major axis of the discussion is whether the standard is ambiguous or not. Some people, e.g. yourself, @daniele, find it unambiguous which is fantastic, removes a lot of headaches! Unfortunately, other people do not read it the same way and find multiple interpretations, so they queried people involved in the standard authoring and got the answers quoted/reported above. This itself warrants further official clarification and pointing to a “distinct lack of consensus. Revisionist history included” reinforces the need to me.
@Derek : At this point it is pretty much a given that there will be a pure gamma 2.2 output transform for ACES, if not for aces-dev, it will be done for the OCIO configs.
As for artists, there is an expectation that anyone doing colour-critical work is on a calibrated display, so there should not be any issue matching a particular standard. We use Display P3 with the piece-wise sRGB transfer function in my team, for example.
Can we cease this unattributed citation? There are people on this forum who are privy to information that has not been posted publicly, yet have resisted the temptation to post any sort of citation.
Just because some person or group is willing to post, does not mean that the veracity of the claims should be necessarily considered as evidence. In order of precedent, I’d suggest the four original authors with a public citation, or one of the three original acknowledgements with a public citation, would qualify as insightful. This is a bare minimum, but even then, we ought to be reading the scripture itself.
Curious… given that Apple designed the specification, and included the two part sRGB transfer function as part of the specification of the RGB encoding, has anyone checked the rTRC, gTRC, or bTRC of the EDID generated display for such monitors?
I did using the iccinspector tool and the handy LUT dumping functionality that @remia wrote in.
I’ve provided the sampling of the *TRC values in this Google Spreadsheet. We can see that the display is being characterized via the *TRC tags as a two part curve. It is, however, broadly a pure 2.2 EOTF in terms of hardware. As per the above outlines, when the display is (mis)characterized as the two part, the *TRC lookups are what the uniform, closed-domain colourimetry is encoded to, prior to routing to the display medium.
Under this chain of events and characterizations, we end up adhering to the IEC standard, but within the constraints of the ICC protocol.
I don’t think you are understanding what I’ve written. The IEC is the standard, and files encoded with the two part, of which many are, will be routed directly to the display in this case, as per the IEC standard. That is, in both cases, an unmanaged case, and in the case of (clever) workarounds through substitution of the *TRC tables, the IEC Standard outline is met.
Happens! In this case, if you work through the broad ICC (not IEC!) protocol, or generic “unmanaged” case, you should find that both situations will yield the protocol outlined in the IEC Standard. But please, check my claims, and validate!
Just to visualize how much of a difference we are talking about.
(ACES forum can be switched to a dark theme in the settings)
Below are 3 images. The 2nd and 3rd shouldn’t be compared to each other; both should only be compared to the 1st.
Here is the normal image. It doesn’t matter what DRT it is or what display EOTF it is encoded for; let’s imagine it’s a photo or a painting. That way it’s easier to use it as a reference no matter what EOTF your display has.
Here is the sRGB/Gamma 2.2 mismatch that results in a brighter image in the shadows, for example when the Gamma 2.2 ODT is used but the actual display has the sRGB piecewise EOTF.
Here is the sRGB/Gamma 2.2 mismatch that results in a darker image in the shadows, for example when the sRGB piecewise ODT is used but the actual display has a pure gamma 2.2 EOTF.
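For those without a calibrated pair of displays at hand, the two mismatches can also be reproduced numerically: encode with one curve, decode with the other, and compare the resulting linear light to the intended value (a sketch; constants from IEC 61966-2-1):

```python
def srgb_decode(v):
    # IEC 61966-2-1 two-part decode
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def srgb_encode(l):
    # IEC 61966-2-1 two-part encode
    return l * 12.92 if l <= 0.0031308 else 1.055 * l ** (1 / 2.4) - 0.055

intended = 0.01  # a deep-shadow linear light value

# Brighter case: Gamma 2.2 ODT viewed on a piecewise sRGB display.
brighter = srgb_decode(intended ** (1 / 2.2))

# Darker case: piecewise sRGB ODT viewed on a pure gamma 2.2 display.
darker = srgb_encode(intended) ** 2.2

print(brighter, intended, darker)  # brighter > intended > darker
```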
@meleshkevich In Photoshop, the last two images will look the other way around.
An image with a 2.2 gamma color profile on an sRGB monitor will be darker than an image with an sRGB profile on a monitor with a 2.2 gamma.