sRGB piece-wise EOTF vs pure gamma

I am not so sure about that.
I think ACES 1 is so contrasty because most of the early ACES test footage was film scans, and ADX does not unbuild fully to scene exposure.

I know you do not want to hear that, but Apple’s Display P3 implementation does exactly the same ‘sRGB mismatch’. So Display P3 is, in fact, a 2.2 gamma (if you want a plain display encoding pipeline).
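
To be concrete about the mismatch in question, here is a minimal Python sketch (function names are my own, not from any spec or library): encoding with the piecewise sRGB curve and decoding with a pure 2.2 power law is not an identity, and the largest relative error lands in the deep shadows.

```python
# Minimal sketch of the 'sRGB mismatch': encode with the piecewise sRGB
# curve, decode with a pure 2.2 power law, and compare to the original.
def srgb_piecewise_encode(L):
    """IEC 61966-2-1 two-part encoding; L is display-linear in [0, 1]."""
    return 12.92 * L if L <= 0.0031308 else 1.055 * L ** (1 / 2.4) - 0.055

def gamma_2_2_decode(V):
    """Pure 2.2 power-law EOTF; V is a normalised code value in [0, 1]."""
    return V ** 2.2

for L in (0.001, 0.01, 0.05, 0.18, 0.5, 1.0):
    V = srgb_piecewise_encode(L)
    print(f"linear {L:.3f} -> code value {V:.4f} -> 2.2 display light {gamma_2_2_decode(V):.4f}")
```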

Yeah I know legacy tends to stick around (like sticky glue)

I banned the compound function from my life, I live a happy life now :-).

2 Likes

What is this statement based on?

It is based on observation :-).
We have discussed this a few times in the output transform meetings.
ACES 1 does not look too contrasty with the original ACES test material.
It only looks too contrasty for footage shot on modern CMOS cameras.
Hence we suggested using new footage for the output transform work.

1 Like

My comment on ADX is not a criticism of ADX; I think it is wise not to try to unbuild film scans fully.
There is so much variation in the developing and scanning process that an attempt to fully unbuild the original camera negative would lead to an IDT that is not general and is potentially harmful.

I’m repeating what @Alexander_Forsythe said publicly above and told me during discussions with him. I consider that to be a trustworthy source of information, consistent with what I have heard from other sources; if you don’t think it is, it would be desirable to know why.

I certainly can hear that, as it does not affect us! :-) We don’t use Apple hardware for anything creative at work. We do, however, use ACES and want no-op encoding/decoding at the display. The ACES Output Transforms encode with the piece-wise transfer function, and so do Display P3 and Unreal Engine, our workhorse.

1 Like

Makes sense… as long as you are in control of both the displays and the encoding, it does not really matter.

1 Like

In order to try not to be partisan about any of this, given my situation with regard to the Output Transforms group, I have not posted anything; however, I’ll break this trend and try to summarise things.

Is there confusion? Clearly there must be; this thread alone is over 100 posts long. The solution is either to try to do something about it, or move along. To me, something should be done at the level of an amendment or clarification to the standard, based on more than anecdotal evidence. (I’m not suggesting what, who, when, etc.)

There is a big difference between the EOTF of the display and whatever the processing system does prior. In the ACES Output Transforms, our goal would be to produce a specific light output from the display in some viewing condition, whatever the encoding required to produce that output; as such, we do have a mechanism to use whatever transfer function is needed to encode. At this point, from an ACES point of view, we can say what our intended output should be and consider it closed from a pure ACES perspective.

I have created test patterns designed to rely on the optical effects of checkerboard patterns, similar to those others have made. I find these are really simple but effective ways of evaluating the results of whatever goes on in a display chain; they work great if you do not have a formal measurement device. It even works if you take a picture of the screen…


[Picture taken halfway around the world from me, so I have no idea what the overall display system is in detail]

Each row of the pattern is encoded assuming the display chain has an effective EOTF of simple 2.4 gamma, 2.2 gamma, piecewise sRGB or the Rec.709 camera function. If the image is decoded and displayed matching one of these, then the small square boxes will visually blend into the surrounding rectangle. This shows me that, whatever the claim made on the box, the monitor in question was certainly using something closer to the piecewise function.
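
For anyone who wants to roll their own, here is a rough sketch of how such a pattern can be built (this is not the actual file above, whose construction details I don’t know; numpy and Pillow are assumed, and the layout is simplified). Each row pairs a pixel-level black/white checkerboard, which optically averages to 50% display light, with a solid surround encoded to emit 50% light only if the display chain really applies the assumed decode.

```python
import numpy as np
from PIL import Image  # assumed available; any image writer would do

def gamma_inv(g):
    return lambda L: L ** (1.0 / g)

def srgb_encode(L):
    return 12.92 * L if L <= 0.0031308 else 1.055 * L ** (1 / 2.4) - 0.055

def bt709_oetf(L):
    return 4.5 * L if L < 0.018 else 1.099 * L ** 0.45 - 0.099

# Assumed effective decode per row -> matching encode of 50% linear light.
rows = {
    "gamma 2.4": gamma_inv(2.4),
    "gamma 2.2": gamma_inv(2.2),
    "sRGB piecewise": srgb_encode,
    "BT.709 camera curve (inverse used as the EOTF)": bt709_oetf,
}

row_h, width, box = 64, 512, 2      # box = checker cell size in pixels
img = np.zeros((row_h * len(rows), width), dtype=np.uint8)

for i, (name, encode) in enumerate(rows.items()):
    solid = round(encode(0.5) * 255)
    y = i * row_h
    img[y:y + row_h, :] = solid                      # surrounding rectangle
    yy, xx = np.mgrid[0:row_h, 0:width // 2]
    checker = (((yy // box + xx // box) % 2) * 255).astype(np.uint8)
    img[y:y + row_h, width // 4:width // 4 + width // 2] = checker
    print(f"{name}: solid code value {solid}")

# View pixel for pixel (no scaling or filtering) for the blend to hold.
Image.fromarray(img, mode="L").save("eotf_checker.png")
```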

I know this because changing the monitor to 2.4, as an example, would “fix” the top line, making the boxes disappear, and “break” the third row of boxes; thus, at least we know any colour management was zeroed out somehow (in this case by using an unmanaged buffer).

This separates the intent from the mechanism; once you know what the chain is doing, you can select the appropriate Transform.

2 Likes

You could build the pattern in linear light after the DRT, so it would work out of the box for any ODT.

While this helps for viewing ACES in a controlled environment, it does not help for delivery.

Oh, that file has nothing to do with ACES; no Output Transform was harmed in the making of it. It is set up such that, if nothing is applied to it during the display process other than one of the EOTFs, the effect will occur.

The assumption is that if you have a raw display buffer and you send it to the monitor directly, you are only analysing the monitor. Then, as you add more complexity such as application viewing transforms, ICC, or whatever, you can see if they are also doing anything.

Optically, it only works if displayed pixel for pixel, or if you sample-and-hold scale the image; any other rescaling or filtering usually breaks the illusion.

I am very familiar with such a pattern. I just wanted to hint that if you produce a single pattern in display linear light, it becomes a generic EOTF checker.

A few points:

  1. The specific discrepancy broadly impacts the lowest 10 code values, at 8 bits per channel (see the sketch after this list). An overall difference in power function can create similar deviations across the full range.
  2. Some displays do implement the two-part sRGB function as an EOTF.
  3. Checkerboard fields are dubious, due to the influence of field size within the visual frustum on the cognitive manifestations. Our visual cognition will apply gain to adjacent patterns based on field dimensions.
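
To illustrate the first point, a quick sketch (mine, not from any shared sheet) that decodes the lowest 8-bit code values with both candidate EOTFs and re-expresses the piecewise result as the code value a pure-2.2 display would need to emit the same light:

```python
# Decode low 8-bit code values with the two candidate EOTFs and show the
# equivalent code value a pure-2.2 display would need for the same light.
def srgb_piecewise_eotf(V):
    """IEC 61966-2-1 two-part decode; V is a normalised code value."""
    return V / 12.92 if V <= 0.04045 else ((V + 0.055) / 1.055) ** 2.4

for cv in range(16):  # the linear segment covers roughly code values 0..10
    V = cv / 255.0
    piecewise_light = srgb_piecewise_eotf(V)
    pure22_light = V ** 2.2
    equivalent_cv = round(piecewise_light ** (1 / 2.2) * 255)
    print(f"cv {cv:2d}: piecewise {piecewise_light:.6f}  pure 2.2 {pure22_light:.6f}"
          f"  -> cv {equivalent_cv} on a 2.2 display")
```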

EDIT ADDENDUM:

Thought I’d try to elucidate the code value deviations via a simple plot. Note that, as a proponent of avoiding discussions about discrete tristimulus magnitudes, I am extremely cautious about treating discrete code values as carrying any form of meaning. This is purely a numerical comparison of total code value differences.

The following plot is a representation of code value deviations between the sRGB two-part OETF and a pure inverse 2.2 EOTF. The ground truth starts with L* as a “uniform” sweep along 256 code values. From this “uniform” sweep assumption, the values are taken to luminance. From the “linear” luminance, the values are simultaneously encoded to both the sRGB two-part transfer characteristic and the pure inverse-2.2 exponent.

As we can see, deviations are broadly one to two code values maximum along the upper scale of the range. The checkerboard demonstrations appear to manifest larger deviations.


Google Sheet with the calculations, for folks to verify.
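
For those who would rather verify in code than in the sheet, a rough Python equivalent of the calculation described above (my own sketch; the constants and rounding may not match the sheet exactly):

```python
# L* "uniform" sweep over 256 steps -> luminance -> encode with both the
# sRGB two-part curve and a pure inverse-2.2 exponent, then compare.
KAPPA = 24389 / 27  # CIE constant, ~903.3

def lstar_to_luminance(lstar):
    return ((lstar + 16) / 116) ** 3 if lstar > 8 else lstar / KAPPA

def srgb_two_part_encode(Y):
    return 12.92 * Y if Y <= 0.0031308 else 1.055 * Y ** (1 / 2.4) - 0.055

for i in (1, 2, 5, 10, 25, 50, 100, 150, 200, 255):
    lstar = 100.0 * i / 255.0
    Y = lstar_to_luminance(lstar)
    cv_piecewise = srgb_two_part_encode(Y) * 255
    cv_pure22 = (Y ** (1 / 2.2)) * 255
    print(f"step {i:3d}: piecewise {cv_piecewise:6.2f}  pure 2.2 {cv_pure22:6.2f}"
          f"  delta {cv_piecewise - cv_pure22:+6.2f}")
```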

1 Like

Just my attempt to bring more attention to this very important point.

  • Pure gamma
  • sRGB
  • Both
0 voters

Including the sRGB encoding function (the OETF) as an EOTF seems like poor judgement from a pedagogical and historical vantage.

The current penetration of EOTFs that reflect a two-part encoding is of a quantity that could be considered a statistical error, as well as being utterly ahistorical.

2 Likes

Whilst I may agree with you on the reading of the specifications etc., practically speaking the Output Transforms group is interested in producing the same emission out of a display no matter what its decoding is set to.

So, given monitors do decode it both ways (including the same monitor with different settings), we need to support those users; this is the same requirement as displaying SDR in HDR display modes. It is the same rendering, encoded differently.

I also agree it would be nicer to eliminate the confusion, but I can cite examples within my own sphere of influence where similar monitors have an ‘sRGB mode’ with both functions used as EOTFs; for historical reasons, inertia and compatibility requirements mean I have to allow for both - obviously with a gradual migration to ‘the one true way’.

Applying industry pressure to reduce this confusion may be something the ACES TAC might consider, but I see that as outside the scope for this working group.

Kevin

1 Like

The number of monitors that permit the non-Reference sRGB EOTF amounts to a statistical error in the broad scheme of things?

Folks are already confused enough, no? I mean, we have a false Kodak-originated mindset being rammed forward in 2024, along both transfer functions and colour, which is so clearly ahistorical, biologically untenable, and flatly wrong, and an army of authors caught in this nightmare of options-of-options-of-options-for-an-option.

As folks have mentioned before, would it not make more sense to cleave to the normative system, and let the statistical anomalies create / uncomment their own edge cases?

1 Like

No, they aren’t a small number of statistical outliers. A large number of displays, including every Apple Pro Display XDR, MacBook Pro and iPad Pro, implement the piecewise EOTF* when in “Reference” mode. Should we just tell everybody with one of these devices, “Sorry. Your display is wrong. You are not going to see pictures correctly, because we don’t support your display”?

* To be precise, whether or not the actual display uses a piecewise EOTF, the Nuke raw-buffer-to-display-light transform is piecewise, which is what matters to a great many people who will use the Output Transforms we provide. Whether that is the actual EOTF of the display, or due to the way Nuke handles its image buffers, is academic. This is how it presents to the user.

If said people will likely also use the same ODT to encode, rather than only to view, in order to match their display, then the result is technically wrong for everyone?

Of course they are.

I’m pretty sure that, of that lot, only the Pro XDR has the wrong piecewise EOTF. On my MBP it was certainly 2.2, last I checked.

And again, normative. Even if the statement were true, the normative case is that most folks aren’t running in Reference mode.

This is controlled by OpenColorIO in OCIO instances, no? Otherwise, I suspect there would be an OCIO double-up on the OETF.

The transform in the “default” is according to the originating sRGB specification, which means there is a discrepancy between OETF and EOTF, which is in accordance with the specification.

At any rate, it’s time to end this ahistorical and politically motivated madness.

Let us not forget that this flawed logic would also suggest including the BT.709 OETF as an EOTF, given the vast number of televisions and displays that have shipped with the incorrect EOTF baked into the hardware under some mode.

Normative usage is not only a responsible approach; it can also aid the pedagogical vantage. Uncomment a stanza if you are in the edge case where your vendor put the wrong EOTF into the hardware, or fix it on the hardware side.

1 Like

You make a valid point, but it has always been the case that people need to consider the view transform they render with separately from the one they look at while working.

The intent of the sRGB Output Transform is that somebody will see the same on their sRGB display as would be seen on a BT.1886 display with the Rec.709 ODT. So a VFX artist with an sRGB display is getting a preview of what will be seen on the reference BT.1886 display in the grade.

But working only on an sRGB display, and delivering from that for direct viewing on the web, is (and I’m afraid always will be) the Wild West!
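
To make that intended equivalence concrete, a rough sketch (mine; it assumes matched peak luminance and viewing conditions, approximates BT.1886 as a pure 2.4 power with zero black level, and ignores any other differences between the two Output Transforms):

```python
# Re-encode a BT.1886 code value so a piecewise-sRGB display would emit
# the same light: the "same picture, different encoding" intent above.
def bt1886_eotf(V, gamma=2.4):
    """Approximate BT.1886 as a pure power law with zero black level."""
    return V ** gamma

def srgb_piecewise_inverse_eotf(L):
    """IEC 61966-2-1 two-part encoding of display-linear light."""
    return 12.92 * L if L <= 0.0031308 else 1.055 * L ** (1 / 2.4) - 0.055

def bt1886_cv_to_srgb_cv(V):
    return srgb_piecewise_inverse_eotf(bt1886_eotf(V))

for V in (0.05, 0.25, 0.5, 0.75, 1.0):
    print(f"BT.1886 code value {V:.2f} -> sRGB piecewise code value {bt1886_cv_to_srgb_cv(V):.4f}")
```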

No. I am talking about what happens downstream of the OCIO Display Transform (if used), between the final buffer and the display light. It may be that Nuke is internally using a linear float buffer (and tagging it as such for the OS) that is not exposed to the user, and using the piecewise function to linearise to that. But, as I say, it does not really matter. What matters is that the values the user has to encode into the Nuke buffer (with OCIO or something else) need to be encoded from the desired linear light with the piecewise function in order to achieve a one-to-one mapping.