sRGB piece-wise EOTF vs pure gamma

In Nuke, pixel data is processed in 32-bit float. The Viewer node allows you to display that pixel data in the viewer window, and it has functionality to transform the pixel data using something it calls a viewer process. Which transform is used in that viewer process depends on how the Nuke script is configured.

If Nuke’s root color management is set to Nuke, the Viewer node has four “legacy” settings:

  1. None: No transformation is applied between float pixel data processed from the node graph and the pixel data shown in the viewer window.

  2. sRGB: The sRGB Encoding function is applied to the pixel data, clipping pixel values above 1. This is a “legacy image formation” that, when combined with the sRGB Display EOTF of the monitor, results in some basic flare/glare compensation.

  3. rec709: The Rec.709 Camera OETF is applied to the pixel data, clipping pixel values above 1. This is a “legacy image formation” that, when combined with the 2.4 power function EOTF of a CRT monitor, results in some basic flare/glare compensation and image appearance modification.

  4. rec1886: The inverse of the Recommendation ITU-R BT.1886 EOTF is applied to the pixel data.
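For reference, the legacy transforms above can be sketched as follows. This is a minimal Python sketch under simplifying assumptions: function names are mine, not Nuke’s, and BT.1886 is taken with black at 0 and white at 1, which reduces its inverse EOTF to a pure 1/2.4 power.

```python
def srgb_encode(v):
    """Piece-wise sRGB encoding (IEC 61966-2-1), input clipped to [0, 1]."""
    v = min(max(v, 0.0), 1.0)
    if v <= 0.0031308:
        return 12.92 * v
    return 1.055 * v ** (1 / 2.4) - 0.055

def rec709_oetf(v):
    """Rec.709 camera OETF, input clipped to [0, 1]."""
    v = min(max(v, 0.0), 1.0)
    if v < 0.018:
        return 4.5 * v
    return 1.099 * v ** 0.45 - 0.099

def bt1886_inverse_eotf(v):
    """Inverse BT.1886 EOTF, assuming black = 0 and white = 1 (a = 1, b = 0)."""
    return max(v, 0.0) ** (1 / 2.4)
```

Note that none of these are what the display itself does; they are the encodings applied before the display’s own EOTF, which is the crux of the whole thread below.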

If Nuke’s root color management is set to OCIO, then the viewer process uses an OCIODisplay node to apply an “image formation” transform to the float pixel data before sending it to the viewer window, as specified by the OCIO config.

The claim that there is somehow an extra sRGB Encoding transfer function between the viewer process and the pixel data being sent to the viewer window is misleading nonsense.

You misunderstand me. I am well aware how Nuke’s viewer processes work. I am talking about what happens to the final result of the viewer process (or “None”) as it is handed off to the OS for display. None of us outside Foundry know exactly how the application works internally, and making EDR / XDR work under MacOS is not trivial. There are a number of ways it could be done.

My point is that if I measure the relationship between the values in Nuke’s buffer (with the VLUT set to none, to take all of that out of the equation) and the light produced by the display of my MacBook Pro XDR display in a Reference HDR mode, that relationship is the piecewise sRGB curve.

2 Likes

So I guess this pattern of gesticulations to identify the wrong EOTF on this edge of an edge of niche situations is the reason to include the wrong EOTF. Which by the way sounds awfully like “Route the reference sRGB OETF through the pure power 2.2” standard…

It seems this all went back to the original point; the claim that this proliferation of the wrong sRGB EOTF is broadly a nothing burger.

Setting up the deliverable space independently of the viewing space isn’t always trivial in some applications (not ACES’ problem, I know… but it exists).

My conception of the ACES ODTs has always been that they should fulfill the target deliverable. A mismatch in display characteristics of the user vs target deliverable intent would be ‘the user’s problem’. If a user views a Rec.709 only deliverable on P3 there would still be a mismatch because they see more colors than the final deliverable? It feels wrong to me that a mechanism like ACES provides ‘corrections’ for this. Aren’t display profiles managed by the OS responsible for the rest? Maybe I’m wrong.

I feel like ACES has the opportunity to at least mitigate one part of the several potential issues.

Maxon/Redshift’s current OCIO config, for example, provides only one ODT in total: sRGB. For ACES 2.0 I can imagine them implementing only sRGB again, which doesn’t help, or adding both, which will lead to tons of new confusion. Or will the group provide clear information about this to implementors if the decision is to include both variants for Rec.709 and P3?

On the OCIO side everything is highly customizable, but I think most ‘simple’ artists will use whatever implementation comes shipped with the software they use.

1 Like

Hey there. Having worked at a few studios myself on three different continents, the most common scenarios I saw were either P3 or Rec.1886.

Have you frequently been in touch with VFX artists who were actually using an sRGB display?

I know this topic has been going on, either on Slack or here, for years, but don’t you think this is an opportunity to set things straight and actually stick to the standard? I mean, there must be some sense of responsibility in the ACES leadership team to see that, right?

Regards,
Chris

1 Like

Can’t believe that we are still arguing over that. Such passion! :slight_smile:

The reality is that both variants of the EOTF are seen and used. How is picking one over the other helpful? Are we naively thinking that the Academy and ACES will magically force all the hardware and software vendors on the planet to adopt one? It is not going to happen anytime soon.

The pragmatic approach is to offer both options so that people outside this forum (and not just the colour nerds) are aware of the issue. Users will have two transfer functions, and they will be forced to pick one: they will have to figure out which Output Transform is the correct one for their display device.

We can then create a bit of documentation and some patterns to help them determine their display EOTF.

1 Like

We are using Display P3 calibrated displays: P3 primaries, D65 whitepoint, sRGB piece-wise EOTF. We don’t want any sort of flare or surround compensation, what gets in is what gets out.

Because normative decisions reduce the cognitive overhead of edge case and incorrect EOTFs.

If this logic holds true, then the following would also hold true:

  1. Support all power functions in 0.2 increments as per some television presets such as 2.0, 2.2, 2.4, 2.6, 2.8
  2. Support BT.709’s OETF as an EOTF, as some TVs and displays have erroneously included.
  3. Support AdobeRGB, as per many Dell displays, etc.

Etc.

The point is, the normative EOTFs are rather well defined. If the rule is “The reality is that both variants…” then follow the logic and support every EOTF that some vendor has implemented for whatever reason.

Otherwise, accept a reduced set of normative definitions and comment out the edge cases / incorrect inverse OETFs misinterpreted as EOTFs.

The latter approach supports the normative definition of terms, the pedagogical vantage, and reduces the cognitive overload that will lead to errors plaguing option-option-option-option-obfuscating-option.

I believe if one sifts through the Display P3 implementations, via Apple’s own ICCs and their ndin tag, etc., they will find that they are a perfect mirror of the sRGB reference: the two-part encoding OETF and a pure 2.2 power function decoding EOTF.

We are on Windows, so this is irrelevant. What is relevant, though, are the Apple specification: displayP3 | Apple Developer Documentation and the ICC one: https://www.color.org/chardata/rgb/DisplayP3.xalter

As I have patiently tried to illustrate countless times, ICC “colour management” provides exactly zero mechanism for standardized OETF encoding to EOTF decoding mechanisms.

This is part of a larger problem.

As for the EOTF of Display P3 displays, I believe the pattern is strongly following the sRGB display standard, with a disproportionate number of displays following the pure power function EOTF.

The ICC protocol strictly defines the encoding characteristics in ICCs, and again, there is not a single ICC implementation outside of Apple’s, via the ndin tag, capable of accounting for OETF-to-EOTF discrepancies.

All of that said… I will stand by the idea to promote:

  1. Display Standards.
  2. Normative definitions.
  3. A strictly reduced subset of options, with commented out nuances and edge cases for those capable.

These sorts of decisions might seem trivial, but the “appeal to authority” of implementation will have reverberating repercussions.

We have already seen this happen with an impetuous promotion of the sRGB OETF as an EOTF on some websites, using questionable authority and misreading of specifications.

I need to produce colorimetrically accurate signals, which is the primary intent of a display standard. This is only possible if the decoding is exactly the inverse of the encoding. There is no Display P3 standard, but its transfer function is well defined. As for IEC 61966-2-1, which is a display standard, we have talked about it at length, but I will point out again that it specifically defines the encoding characteristics as a piece-wise function; thus, if we use the 2.2 power function to decode, colorimetry is broken. If the Encoding Characteristics section did not exist, we would not be having this discussion, but it is there.

1 Like

And the standard says a pure 2.2 Gamma EOTF, so I am not sure what your argument is about.
If a display manufacturer implements the compound function as the EOTF of their display, they are not conforming to the IEC 61966-2-1 standard.
If we provide an ODT with the compound function and even call it sRGB, we continue to ship a decades-old flaw.
I think this is the only message most of the people in the survey above are trying to convey.

Nick’s argument about Apple is misleading, because from the factory Apple’s EOTF follows a 2.2 Gamma.
Yes, they ship some special modes which alter this, but that is not the norm and is labeled wrongly in my opinion.

4 Likes

Are you being sarcastic or did you actually not read what I wrote?

Let me put it in another form, I need f(x) = Decoding(Encoding(x)) = x.

People encoding their imagery using a Gamma 2.2 power function are not following IEC 61966-2-1 either; the standard is explicit about what needs to be done. Is this what you are doing in Baselight? You cannot have it both ways.
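The identity requirement above can be checked numerically. A quick sketch (pure Python; function names are mine) of what happens when a piece-wise sRGB encode is decoded with a pure 2.2 power function, i.e. the mismatched pairing under discussion:

```python
def srgb_piecewise_encode(x):
    """IEC 61966-2-1 piece-wise encoding."""
    if x <= 0.0031308:
        return 12.92 * x
    return 1.055 * x ** (1 / 2.4) - 0.055

def pure_gamma_decode(v, gamma=2.2):
    """Pure power-function display EOTF."""
    return v ** gamma

# Mismatched round trip: Decoding(Encoding(x)) != x, worst in the shadows.
for x in (0.001, 0.01, 0.1, 0.5):
    f_x = pure_gamma_decode(srgb_piecewise_encode(x))
    print(f"x = {x:<6} f(x) = {f_x:.5f}  relative error = {(f_x - x) / x:+.1%}")
```

The relative error is on the order of a few percent at mid-grey but grows very large in the deep shadows, where the linear segment of the piece-wise curve and the pure power curve diverge most.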

1 Like

This is false.

You are conflating the down the wire signal encoding with the picture formation. If the pictorial formation is as the author sees fit, feel free to encode with the inverse of the EOTF.

I do not believe this is a disagreement with whatever or however the signal is encoded; the author asserts the control over the presentation closed domain wattage signal. That closed domain wattage signal is then encoded with the inverse of the EOTF if the author’s intention is in that closed domain relative wattage signal.

Only if one insists that there is only an OETF or an EOTF, and that the two cannot be different. This would be a completely ahistorical suggestion.

However, this is a discussion about the appropriate EOTF, and in this case, the discussion should be tempered against normative distributions of encodings if it is about the appropriate inverse EOTF for a closed domain relative wattage to be encoded to.

To further Daniele’s point, here is the listing from the company that created the Display P3 encoding function, when choosing to roll a custom encoding. Note the default transfer characteristics in each, as they are presented, without any adjustment:



I’m not confusing anything; picture formation has nothing to do with it here. It is display colorimetry 101: the standard specifies the encoding, and you must abide by it if you claim you are supporting sRGB. You cannot have it both ways.

unambiguous methods to represent optimum image colorimetry when viewed on the reference display

It seems rather unambiguous.

Only if one insists that there is only an OETF or an EOTF, and that the two cannot be different. This would be a completely ahistorical suggestion.

Who said they cannot be different? The OETF is on the camera side and the EOTF is on the display side. When encoding for display, one uses the inverse EOTF, i.e., EOTF^{-1}, not an OETF.

1 Like

Who knew!

I guess the ACES folks could have just looked in the sRGB specification to form the picture then, instead of frittering away two years.

It seems to me that the need here is for education, rather than dictating or restricting. ACES central and all of you do a great job of that! There is such a wealth of wisdom and knowledge here.

That’s why I keep coming back. Even when… ahem… conversations sometimes devolve into bickering. No hate.

1 Like

The encoding function is a trick to make 8-bit processing work:

You take a video signal (meant to be viewed with a pure 2.2 Gamma), decode it and re-encode it with the compound function. You introduce a small error linearising to display light, but when converting back you undo the error. This allows you to work somewhat with linear data in 8 bits, and the process is “invisible”.
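That “invisibility” is easy to verify: because the compound function is used for both the decode and the re-encode, every 8-bit code value survives the round trip exactly, so any error from linearising with the “wrong” curve cancels on the way back. A sketch (function names are mine):

```python
def srgb_decode(v):
    """Piece-wise sRGB decoding (exact inverse of the encoding)."""
    if v <= 0.04045:
        return v / 12.92
    return ((v + 0.055) / 1.055) ** 2.4

def srgb_encode(x):
    """Piece-wise sRGB encoding (IEC 61966-2-1)."""
    if x <= 0.0031308:
        return 12.92 * x
    return 1.055 * x ** (1 / 2.4) - 0.055

# Decode to linear, then re-encode: every 8-bit code comes back unchanged,
# so the intermediate "linear" working space is lossless at 8 bits.
assert all(
    round(255 * srgb_encode(srgb_decode(c / 255))) == c
    for c in range(256)
)
```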

But this workflow has nothing to do with what we do. And certainly the compound function has nothing to do in the ODT.

You are interpreting IEC 61966-2-1:1999 as you see fit. At the time the standard was written, 8-bit processing was prevalent. It is still around, by the way; we are saving 8-bit PNG or TIFF files and displaying them all the time… There is no provision or alternative encoding given for 10-bit, 12-bit, 16-bit, or floating-point representation. It is also nowhere written that a pure 2.2 power function should be used with other bit-depth representations instead of the piece-wise function. As a matter of fact, the word “float” is not used once in the standard! However, plenty of software using floating-point representation adopts the piece-wise function, e.g., The Foundry’s Nuke and its default viewer, Unreal Engine, etc…

Anyone who is strict about IEC 61966-2-1:1999 compliance must encode using the piece-wise function and have the display decode the signal with the pure 2.2 power function.

  • If one encodes with a pure 2.2 power function, one is not compliant with IEC 61966-2-1:1999.
  • If one decodes with a piece-wise function, one is not compliant with IEC 61966-2-1:1999.

It cannot be simpler than this. ACES 1.x, in its current form, actually does implement IEC 61966-2-1:1999 strictly. An alternative where the encoding is replaced with a pure \cfrac{1}{2.2} power function is simply not a strict implementation; it is an adaptation.

Of course, people who are interested in maintaining accurate and predictable colorimetry have a conundrum, because F(x)\ =\ Decoding(Encoding(x))\ \neq \ x when following IEC 61966-2-1:1999. By introducing the pure \cfrac{1}{2.2} power function for Encoding, the TAC and VWG are enabling a whole new class of “not respecting the standard”, which is fine, because what most people want is, again, F(x)\ =\ Decoding(Encoding(x))\ = \ x, and they will now get the possibility to do that.

Summary Table

| IEC 61966-2-1:1999 Compliance | Decoding: Piece-Wise | Decoding: Pure 2.2 Power Function |
|---|---|---|
| Encoding: Piece-Wise | :x: | :white_check_mark: |
| Encoding: Pure 2.2 Power Function | :x: | :x: |

1 Like

Your interpretation of the standard is wrong, I am afraid. It would mean we crush the shadows every time we decode and encode the image; that makes no sense.

At the time, the idea was that in image processing we use the compound function in and out, but finally show the image on a 2.2 monitor; this made sense because most image processing was done in 8-bit. I don’t think there is any image processing left which does linear-light editing in 8-bit.

What counts for us now is the display sRGB EOTF. Everything else is legacy.
If we want to encode an image ready for display, we need to use 2.2 Gamma because we do not want to introduce a mismatch.

Don’t get me wrong: mathematically, I find the compound function more beneficial for many reasons (limited slope at the origin, encoding of negative values, etc.). But the compound function as an EOTF is incompatible with Video (2.4 Gamma), DCinema (2.6 Gamma) and HLG. It also causes a lot of headaches when repurposing TV deliverables for web applications.
Misinterpreting the sRGB standard caused decades of suffering around the world.

4 Likes