sRGB piece-wise EOTF vs pure gamma

According to this Apple support page, the standard modes are “for general home and office environments”. The “special modes” you refer to say they are for “controlled viewing environments”.

It seems reasonable to me that ACES should support users who are using their Apple displays in the manner recommended by Apple.

The ACES sRGB ODT has always specified that it is intended for a dim surround, not an office environment. As such, it is designed so that a 100 nit sRGB monitor with the sRGB ODT and a 100 nit BT.1886 monitor with the Rec.709 ODT will produce the same display light, and either will look correct in that viewing environment.

And you want to ask every producer, director and DoP to fiddle with system prefs whenever they want to watch something that comes out of an ACES pipeline?


No. I’m assuming a decent proportion of them will already have their MacBook Pro / iPad set up as Apple recommend.

Such an exhibition of bad faith is disappointing. We are talking about encoding and decoding for and at the display. This happens once, at the very end of the processing chain.

The inverse encoding function is given should one need to invert the encoding when manipulating files; you know it, and you know that I know it!

What counts for us now is the display sRGB EOTF. Everything else is legacy.

This is your opinion; please show us the standard addendum or new sRGB standard that specifies it.

You are clamped knee-deep in a bear trap and willing to lose your legs to stay upright. Two paths out:

  • You use a pure 2.2 power function for encoding and you admit that you are not following the standard, which is mildly comical after having been lectured for years about standard compliance.
  • You use the piece-wise function in the 8-bit world, e.g., when saving images, and the pure 2.2 power function for display… No one sane would ever do that, for obvious reasons.
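To put numbers on the gap between those two options, here is a quick Python sketch (my own illustration, not from any standard) comparing the IEC 61966-2-1 piece-wise encoding with a pure 1/2.2 power encoding, expressed in 8-bit code values:

```python
# Compare the IEC 61966-2-1 piece-wise sRGB encoding against a pure
# 1/2.2 power-law encoding, expressed as 8-bit code values.

def srgb_piecewise_encode(L):
    """Linear light (0..1) -> non-linear sRGB value (0..1), per IEC 61966-2-1."""
    if L <= 0.0031308:
        return 12.92 * L
    return 1.055 * L ** (1 / 2.4) - 0.055

def gamma22_encode(L):
    """Linear light (0..1) -> non-linear value via a pure 2.2 power function."""
    return L ** (1 / 2.2)

# Scan linear-light values and report the largest disagreement in 8-bit codes.
worst_L, worst_diff = 0.0, 0.0
for i in range(1, 100001):
    L = i / 100000
    diff = abs(srgb_piecewise_encode(L) - gamma22_encode(L)) * 255
    if diff > worst_diff:
        worst_L, worst_diff = L, diff

print(f"largest mismatch: {worst_diff:.2f} 8-bit codes at linear {worst_L:.5f}")
print(f"mismatch at mid grey (L=0.18): "
      f"{abs(srgb_piecewise_encode(0.18) - gamma22_encode(0.18)) * 255:.2f} codes")
```

Above roughly 10% linear the two agree to within a code value or so; the disagreement is concentrated in the near-blacks, which is exactly where this whole debate lives.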

Nothing is truly black and white; is there prior art? Let’s see what a standardisation body does when it needs to support different bit-depths, e.g., ITU-R BT.2020:

The constants are slightly tweaked so the piece-wise function offers the best join possible at 10 and 12-bit. Nowhere does the ITU suggest that a pure power function should be used at bit depths higher than 10-bit!

The only place where IEC 61966-2-1:1999 recommends doing something different as a function of bit-depth is in AMENDMENT 1 2003-01 where it suggests using a higher precision matrix, not a different encoding function:


I don’t see anything in the sRGB standard that refers to this use of the piecewise curve for decoding to (approximate) linear light for processing and then re-encoding. Indeed, the standard says:

These non-linear sR’G’B’ values represent the appearance of the image as displayed on the reference display in the reference viewing condition. The sRGB tristimulus values are linear combinations of the 1931 CIE XYZ values as measured on the faceplate of the display, which assumes the absence of any significant veiling glare. A linear portion of the transfer function of the dark end signal is integrated into the encoding specification to optimise encoding implementations.

So that says that a curve with a linear portion is used to encode the display light. That’s the only thing I can see sRGB code values being described as an encoding of.

One reading of this (and I’m not suggesting it is correct) could be that a hypothetical reference display with a 2.2 gamma exists, and would do the job of replicating in an office environment the appearance of a BT.1886 display in a dim environment. But end users have an 8-bit pipeline to their display, so in order to replicate the appearance of the reference display on an end user’s display, the light that would have been created on the reference display should be re-encoded with the piecewise curve, to produce sRGB, and the end user’s display should also use that curve as the EOTF.
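That reading can be sketched in a few lines of Python (purely illustrative, mine): take the light a hypothetical 2.2-gamma reference display would produce, re-encode it with the piece-wise curve for the 8-bit pipe, and decode with the same curve at the end user’s display; the reference light comes back exactly.

```python
def srgb_piecewise_encode(L):
    """Linear display light (0..1) -> non-linear sRGB value, per IEC 61966-2-1."""
    if L <= 0.0031308:
        return 12.92 * L
    return 1.055 * L ** (1 / 2.4) - 0.055

def srgb_piecewise_eotf(V):
    """Non-linear sRGB value -> linear display light; inverse of the above."""
    if V <= 0.04045:
        return V / 12.92
    return ((V + 0.055) / 1.055) ** 2.4

worst = 0.0
for i in range(1001):
    v = i / 1000                         # video code value driving the reference display
    reference_light = v ** 2.2           # light from the hypothetical 2.2-gamma reference display
    transport = srgb_piecewise_encode(reference_light)  # re-encoded for the 8-bit pipe
    end_user_light = srgb_piecewise_eotf(transport)     # end user display applies the same curve
    worst = max(worst, abs(end_user_light - reference_light))

print(f"worst round-trip error: {worst:.2e}")  # floating-point noise only
```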


Isn’t this exactly what has been recommended by Danielle and Jed earlier in this thread as best practice to avoid crushed shadows?

My vote would be for users to have both options provided so they can choose.

The role of ACES would be to educate them about the relative merits of these options, as discussed in this thread.

I’m going to go out on a limb here and suggest that everyone is actually… agreeing?

Thomas’s suggestion is that the utterance of “sRGB” implies the entire chain, aka the whole three-stage sandwich. This is, as best I can see, correct:

  1. An OETF encoding transfer characteristic.
  2. An EOTF decoding transfer characteristic.
  3. Some component of “context” for the cognitive appearance stuffs “waves hands”.

Daniele’s point is one I also agree with. From my reading and interpretation of Daniele’s words, in terms of the utterance of sRGB in this context, it is the reference EOTF we care about.

Now the only issue in this, which seems to be a bit of a “semantic” argument, although I’d make a strong case it is an essential discourse, is that we have to ask ourselves what we are “encoding”.

That is, if we say “Oh, it’s colourimetric nonsense off a sensor!” and use the two-part encoding as the pictorial formation stage, we end up with one specific closed-domain relative-wattage representation of a pictorial depiction.

That is a very different pictorial depiction from an alternative approach that also ends up, as an endpoint, with a closed-domain relative-wattage colourimetric representation, and that includes some components of the three-step sRGB specification totality.

Specifically, long before the cognitive Presentation Transformations (i.e. the discrepancy between the OETF and EOTF here), we can discuss the state of the closed-domain colourimetry wattage (relative or absolute, depending on target medium) before it is encoded for down-the-wire transmission.

As such, I’m not sure how to reconcile a reading of the combinatorial “sRGB” three stage series of assumptions, with a fully controlled pictorial formation chain that discounts blunt trauma pictorial formations in exchange for more bespoke patterns. It screams to me that dumping the term “sRGB” is wise, or at the very least, specifying a very clear “sRGB Decoding EOTF of a 2.2 Pure Power Function”.

Am I doing justice to the discussion here, or way off?


Sadly the sRGB standard does not use the term “reference EOTF” anywhere, or there would be no debate.

and

If we agree that the video EOTF is 2.4, then the sRGB EOTF can only be 2.2 gamma; Annex B describes this in greater detail:

So ideally, we could just use the 2.4 Gamma Rec.709 ODT and stream it to a 2.2 Gamma sRGB Monitor and call it a day.


Sorry, I’m jumping in in the middle. I think this hits on an important point. It’s my assertion, and others’, that the intention of sRGB is to encode display light, independent of how the image was created on the display. As such, it serves a very different purpose than the camera-to-display processing chain.

I’m sure others will disagree, but that’s been going on for 30-ish years.

That is exactly NOT what is happening; the CRT does not follow the piecewise function.
Also, sRGB’s primary design goal is to be compatible with Rec.709. The piecewise function is not compatible with anything.


Agree. But can we agree that the reference input/output characteristic is indeed the EOTF? I mean, I’m happy to use the lingo of the specification, but it seems a greater common continuity could be had if we achieve some degree of agreement on a more centralised term?

I’ve done my homework here, including a four-year quest to speak at length with the primary author. I completely agree with Daniele that all we have, and should adhere to, is the specification, which is, as Daniele has recently pointed out again, rather clear, including the term “mismatch”, etc.

But I can’t help but stress that these points again avoid the important part:

What are we encoding?

If we can broadly accept that BT.709 to BT.1886, and the sRGB two-part OETF to the sRGB power-function EOTF, will, whether we like it or not, include some visual cognition implications, the question remains…

What is it that we are encoding?

It is not enough to describe this mysterious data state as “Golly it’s colourimetry”, because colourimetry exists in an open domain: “I dun used my spectro to measure this colourimetric stimuli wattage” and “I am standing in front of the Mona Lisa and measuring the relative colourimetric wattage under this 1000 watt lamp”. (And again, I stress, early pre-numpty-party Kodak research knew that a pictorial depiction is not a “conveyance” of the stimuli in front of the camera.)

We need to discuss domains (doubly so given gain regulation mechanisms), and strip out what Dr. Poynton has labelled presentation facets, which I believe we can compartmentalize into Presentation Transforms. After all, we don’t have any visual cognition models that are even remotely tenable out of the colourimetric world view, so I’d ease off the gas on that and leave it as an aside for now.

And I do know that at least one major streamer sends out iPads pre-configured in Reference Mode for viewing of dailies.


Exactly; this still happens with the latest ACES 1.3 release. There is no ODT for the most common display in the world, which has a pure gamma EOTF (somewhere around 2.2).

Funny thing: on a 1000:1 contrast-ratio display (which is typical for most IPS panels) calibrated to BT.1886, the EOTF becomes almost exactly the piecewise sRGB curve. What a beautiful world of standards.
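That observation is easy to check numerically. A short Python sketch (mine; assuming Lw = 100 nits and Lb = 0.1 nits for the 1000:1 panel) evaluates the BT.1886 EOTF against the piece-wise sRGB EOTF:

```python
# BT.1886 EOTF for a 1000:1 display (Lw = 100 nits, Lb = 0.1 nits)
Lw, Lb = 100.0, 0.1
g = 2.4
a = (Lw ** (1 / g) - Lb ** (1 / g)) ** g
b = Lb ** (1 / g) / (Lw ** (1 / g) - Lb ** (1 / g))

def bt1886(V):
    """Code value (0..1) -> absolute luminance in nits, per ITU-R BT.1886."""
    return a * max(V + b, 0.0) ** g

def srgb_piecewise_eotf(V):
    """Code value (0..1) -> relative linear light, per IEC 61966-2-1."""
    if V <= 0.04045:
        return V / 12.92
    return ((V + 0.055) / 1.055) ** 2.4

for v in [0.1, 0.25, 0.5, 0.75, 1.0]:
    print(f"V={v:4.2f}  BT.1886: {bt1886(v) / Lw:.4f}  "
          f"piece-wise sRGB: {srgb_piecewise_eotf(v):.4f}")
```

The black lift b that BT.1886 derives from a 1000:1 contrast plays much the same role as the linear toe of the piece-wise curve, which is presumably why the match is so often quoted for 1000:1 panels specifically; at other contrast ratios it drifts.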

Absolutely. It will make people start asking questions; YouTube bloggers will review ACES 2.0 and have to explain, somehow, why there is Gamma 2.2 now. I don’t see a better opportunity than the ACES 2.0 release.


I just understood how this works out in practice.

There is a series of games (Metro) with a lot of dark corridors where you could barely see anything without a source of light. But with a monitor tuned to the piece-wise curve you suddenly see almost everything in almost any dark room… which may not be intended…
Windows emulates piece-wise sRGB on HDR displays in a similar way. (At least PQ is well-defined.)
And this may be the source of many complaints about “washed-out colors” or even “poor HDR color management in Windows”.

Ok. I’ll repeat it one last time. The reason Windows does piecewise in HDR is because piecewise is the standard. GPUs have actual hardware acceleration for piecewise sRGB, so there is a 99% chance that your image is encoded for piecewise sRGB. Some GPUs actually understand HDR10 (PQ + Rec.2020), but others decode it to linear FP16 in the extended sRGB color space, a.k.a. scRGB, with negative values to represent wide gamut for internal processing, and only rebuild the HDR10 signal at scan-out. No GPU has hardware acceleration for gamma 2.2, Rec.709 or BT.1886. Those are video production formats only, not things in daily use by most computer applications.

Games exclusively use the sRGB transform in SDR, either through hardware-accelerated encoding to an sRGB-tagged framebuffer or manual encoding. Also, almost no game developers use calibrated monitors. I do, but most don’t, and when they do, the monitors are calibrated to match the sRGB gamut but not the 80 nit brightness of the sRGB standard; that’s unworkable in bright office spaces. I haven’t seen anybody try to calibrate the gamma of their monitor yet, because it’s assumed to be sRGB and nothing else.

TL;DR Gamma 2.2 ain’t sRGB.
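On the scRGB aside above: because scRGB keeps the sRGB primaries but allows components outside [0, 1], wide-gamut colours show up as negative values. A self-contained Python sketch (the chromaticities are from the published specifications; the helper functions are my own) derives the matrices and shows where a pure BT.2020 red lands in linear scRGB:

```python
def xy_to_XYZ(x, y):
    """Chromaticity (x, y) -> XYZ with Y normalised to 1."""
    return [x / y, 1.0, (1 - x - y) / y]

def inverse3(m):
    """Inverse of a 3x3 matrix via cofactors."""
    (a, b, c), (d, e, f), (g, h, i) = m
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    return [[(e * i - f * h) / det, (c * h - b * i) / det, (b * f - c * e) / det],
            [(f * g - d * i) / det, (a * i - c * g) / det, (c * d - a * f) / det],
            [(d * h - e * g) / det, (b * g - a * h) / det, (a * e - b * d) / det]]

def mat_vec(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def rgb_to_xyz(primaries, white):
    """RGB -> XYZ matrix from primary and white-point chromaticities."""
    cols = [xy_to_XYZ(x, y) for x, y in primaries]
    M = [[cols[j][i] for j in range(3)] for i in range(3)]
    S = mat_vec(inverse3(M), xy_to_XYZ(*white))  # scale primaries to hit the white point
    return [[M[i][j] * S[j] for j in range(3)] for i in range(3)]

D65 = (0.3127, 0.3290)
M_709 = rgb_to_xyz([(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)], D65)         # sRGB / Rec.709
M_2020 = rgb_to_xyz([(0.708, 0.292), (0.170, 0.797), (0.131, 0.046)], D65)  # BT.2020

# A pure BT.2020 red expressed in linear scRGB (sRGB primaries, unclamped):
scrgb_red = mat_vec(inverse3(M_709), mat_vec(M_2020, [1.0, 0.0, 0.0]))
print([round(c, 4) for c in scrgb_red])  # red pushed past 1.0, green and blue negative
```

A pure BT.2020 red comes out at roughly (1.66, −0.12, −0.02) in scRGB, i.e. red past 1.0 with small negative green and blue, which is exactly the representation the FP16 scan-out path carries around.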


I’ll repeat it one last time:

Conflating encoding with decoding is the source of the errors here, and all this does is perpetuate the same erroneous logic.

Look at the vast evidence, including implementations in the systems.

This is right back at square one, where a small group of people are not drawing a crucial distinction between OETFs and EOTFs, and erroneously assuming the two are the same thing. Let’s just restart the thread at the very first post all over again.

I would advise reading the entire standard.
The primary goal of sRGB is to be compatible with video.
And a piecewise EOTF is not compatible with video.

You can do all sorts of conversions, hardware accelerated or not.
You just need to be aware of how your signal will be displayed in the end.


I just realized:
The standard describes different OETF and EOTF to make images intentionally darker at near-blacks.
The OETF is piece-wise to hide the camera’s noise and produce a smooth fall-off instead.
The EOTF is pure gamma because real CRTs are pure gamma (closer to 2.5, to be precise). Therefore they make images darker still.

There is no reason to apply the different OETF and EOTF multiple times (e.g. in photo-editing software) if you don’t want to make your image darker and darker.

Modern LCDs can be tuned and calibrated to follow any curve. But they have much more brightness and contrast, and, calibrated to sRGB, they can show visible banding at these near-blacks. That may be the reason why many monitors are calibrated to pure gamma. When you go to a shop to buy a new monitor, which one will you choose: the one that shows visual artifacts, or the one with a smooth roll-off?

So when an artist works on a professional monitor calibrated to sRGB, the resulting image will appear correct on monitors calibrated to sRGB, and too dim in the blacks on monitors with pure gamma. But other creators just use their pure-gamma monitors and produce tons of content containing visual artifacts at the low end of the tone curve. These artifacts are simply invisible on pure gamma monitors, but clearly visible on sRGB monitors.
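The banding argument is easy to put numbers on. A small Python sketch (my own) compares the linear-light jump between 8-bit codes 0 and 1 under each decode; the piece-wise curve’s first step is roughly sixty times larger, which is what shows up as near-black banding on a bright, high-contrast panel:

```python
def srgb_piecewise_eotf(V):
    """sRGB code value (0..1) -> linear light, per IEC 61966-2-1."""
    if V <= 0.04045:
        return V / 12.92
    return ((V + 0.055) / 1.055) ** 2.4

def gamma22_eotf(V):
    """Pure 2.2 power-law decode."""
    return V ** 2.2

# Linear-light step between 8-bit codes 0 and 1 under each decode.
step_piecewise = srgb_piecewise_eotf(1 / 255) - srgb_piecewise_eotf(0.0)
step_gamma22 = gamma22_eotf(1 / 255) - gamma22_eotf(0.0)

print(f"first step, piece-wise: {step_piecewise:.2e}")  # ~3.0e-04
print(f"first step, gamma 2.2:  {step_gamma22:.2e}")    # ~5.1e-06
print(f"ratio: {step_piecewise / step_gamma22:.0f}x")
```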

The standard is simply obsolete, and that old dirty hack has been a source of issues and debates for many years.


Following the logic of using a pure 2.2 gamma for the display transform, as described here in this gloriously long thread… am I correct that for CG renders one would still want to use the piecewise sRGB function to decode 8-bit input albedo textures?
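For reference, the decode being asked about, i.e. the piece-wise inverse applied to 8-bit texel values before they enter the render, looks like this (an illustrative Python sketch; real pipelines do this in the texture sampler or at import):

```python
def srgb_piecewise_eotf(V):
    """Non-linear sRGB value (0..1) -> linear value, per IEC 61966-2-1."""
    if V <= 0.04045:
        return V / 12.92
    return ((V + 0.055) / 1.055) ** 2.4

def decode_albedo_8bit(texel):
    """8-bit sRGB-encoded albedo texel -> linear reflectance for rendering."""
    return srgb_piecewise_eotf(texel / 255)

# Example: a mid-grey-ish albedo texel
print(round(decode_albedo_8bit(128), 4))
```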