Where did the exact numbers in the Rec709 formula come from? And a question about P3

I have a few questions that are not directly related to ACES, but I don’t know a better place than this amazing community to get answers. I hope this is not completely off-topic here, and I really appreciate any answers. Also, please feel free to correct my statements; my questions probably only exist because I have misunderstood something (or everything).

I would like to know more about the exact numbers in the Rec709 encoding curve formula and about display gamma. The Rec709 formula has a linear segment below 0.018 and a 0.45 power above 0.018.
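For reference, here is the full piecewise curve (a small Python sketch of the BT.709 OETF):

```python
# The piecewise Rec.709 OETF, for reference: linear with slope 4.5 below 0.018,
# and a 0.45 power (scaled and offset) above it.
def rec709_oetf(L: float) -> float:
    return 4.5 * L if L < 0.018 else 1.099 * L ** 0.45 - 0.099
```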
0.45 is ~1/2.22. But Rec709 encoding was made for CRT displays with gamma 2.4. Is it correct that this was done to compensate for the difference in subjective contrast between real life and a display surrounded by a dim environment? Something surrounded by darker colors looks less contrasty than the same thing with a bright surround (or than our 360-degree reality). If this is true, then:

a) Why is this compensation needed if there is a linear segment near black that also makes the image darker? This 1/2.22 encoding gamma for a 2.4 gamma display seems to make no sense, because the image is already darker after the linear segment near black. Is there just a historical reason? For example, did they first decide to encode with 1/2.2 and then add a linear segment near black to avoid multiplying noise to infinity (is that the reason for the linear segment)?

b) Did CRT displays actually have a gamma of 2.4? I’ve read that their “gamma was a power law gamma, and at that time thought to be 2.2 (but later found to be closer to 2.4)”. If this is true, then what I wrote in a) above makes no sense.

Rec709-encoded video should be displayed on a gamma 2.4 display in a dim environment. Consumer displays have gamma 2.2, which is also fine, as they are usually used in bright environments. And projector gamma is 2.6, because its environment is dark. But all Rec709-to-P3 transforms compensate for this by decoding with 2.4 gamma and re-encoding with 1/2.6 gamma, so the 2.6 gamma of the projector is cancelled out and we get a final image that was meant for a dim environment but is shown in a dark one. So what is the point of compensating for the different decoding gamma if it is different precisely because of the different environment?
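To be concrete, here is just the transfer-function part of the conversion I am describing (a small sketch, leaving out the gamut/primaries conversion):

```python
# Only the transfer-function part of the Rec709-to-P3 (DCI) conversion I mean:
# decode with the 2.4 display gamma, re-encode with 1/2.6 for the projector.
# The gamut/primaries conversion is left out of this sketch.
def bt1886_code_to_dci_code(v: float) -> float:
    display_linear = v ** 2.4           # decode, assuming a gamma 2.4 display
    return display_linear ** (1 / 2.6)  # re-encode for a gamma 2.6 projector
```

The net effect is v ** (2.4 / 2.6): code values are raised slightly so the image looks the same after the steeper 2.6 decode, and that is exactly what I don’t understand, because the 2.6 was chosen for the dark environment in the first place.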

Hi Anton,

To start with I am not into the technical specifications of the math and science.

Having said that, I work in print and digital, as well as being an amateur photographer and filmmaker, and I use numerous software packages to deliver brand assets from all of the above in ACES, linear workflows, Rec709, sRGB, etc.

It seems you are looking for a correct answer where one does not exist. These are called ‘standards’, where the science tries to create a point of reference. There are lots of display, TV, camera, projector and printer manufacturers, and they all do their own thing. Every TV, if measured, has tiny differences, and time erodes the components. In a real working environment you can only try to replicate the conditions referred to in the standard (2.2 or 2.4), knowing that someone will view the work in a different environment.

It may sound like a bullshit answer, but I found things become a lot easier when you try for the best but accept imperfect results.

1 Like

Thank you @chrisconway333 !
Let me explain why I want to know the reason for those exact numbers in the Rec709 formula. Five other post-production guys and I are the authors of the largest free educational Russian-speaking colorist community. I’m planning to make a YouTube stream about ACES and to show people that they shouldn’t be afraid of using it. And I can’t talk about any color management before I explain the basics of “gamma” and “OETF”, so first I decided to make a video explaining linear light encoding. Now I’m trying to understand where that 0.45 value came from and why. The whole image is already made darker by the linear segment near black that deals with the noise. And then, for some reason, it is made darker again for surround compensation by the 0.45 in the Rec709 formula. Is there just a historical reason for that? For example, first they added 0.45 for surround compensation, then decided to deal with noise by adding the linear segment, which as a side effect made the image darker one more time, and they just decided that was OK, not a big deal. Or was this 0.45 value in the formula added for a different reason, not for surround compensation?
So my question is not about how it should be now, but about why it was done the way it was back then.

1 Like

EBU Tech 3320 explains that specific question pretty well!

Basically, CRTs had an EOTF gamma of approximately 2.35, with the intended result being an end-to-end/system gamma of 1.2, in order to compensate for the dim-surround effect. This means that the OETF exponent times the 2.35 EOTF exponent should equal 1.2:

OETF exponent * 2.35 = 1.2

So, therefore:
OETF exponent = 1.2 / 2.35 ≈ 0.51

As you referenced, however, a pure 0.51 gamma curve would amplify noise, so instead the Rec709 function was defined piecewise, with a linear segment near black and a 0.45 gamma curve the rest of the way.

Despite this, the “best fit” single power-law curve for this modified OETF is still about 0.51, i.e. the overall Rec709 curve is still similar to a pure 0.51 gamma curve. So using 0.45 with a linear segment near black, the overall system gamma is still ≈ 1.2, compensating for the dim-surround effect the same as CRTs did, but without the noise a pure power curve would have.
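A quick numeric check of that claim (a minimal Python sketch of my own, not the EBU derivation):

```python
# Although the exponent in the Rec.709 formula is 0.45, the piecewise curve as
# a whole behaves like a pure power of roughly 0.5, and that times a 2.35 CRT
# gives an end-to-end gamma near the intended 1.2.
import math

def rec709_oetf(L: float) -> float:
    return 4.5 * L if L < 0.018 else 1.099 * L ** 0.45 - 0.099

for L in (0.18, 0.3, 0.5, 0.7):
    # exponent of the pure power curve that passes through the same point
    g = math.log(rec709_oetf(L)) / math.log(L)
    print(f"L={L:4}  equivalent exponent={g:.3f}  system gamma={g * 2.35:.2f}")
```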

Regarding your second question: I have wondered the exact same thing. It seems to me that we should always let the display compensate for the viewing environment rather than modify the image for it, but I don’t understand why this is not common practice.

2 Likes

As someone who does concern themselves with the math in the standards I will try to provide some additional information from that perspective.

It’s all about what results in a pleasing image on the display!

When producing a pleasing image on a display, it is necessary to consider the expected or reference viewing environment of the display, but that is not the only reason for differences between the display colorimetry and the scene colorimetry. Generally viewers prefer slightly more midtone contrast and saturation than was in the scene, so long as the results look natural and the display can handle the resulting dynamic range.

The historical presence of a “toe” in photographic systems has indicated that a soft rolloff of the blacks can allow for more midtone contrast and saturation, and is preferable to a black clip. The straight-line portion of the Rec709 OOTF looks like a photographic “toe” in a log-log plot, which is more representative of perception than a linear plot. However, you are absolutely correct that the straight-line portion also handles noise in the dark areas better. There can be more than one reason for something.

The Rec 1886 EOTF, with its 2.4 gamma exponent, is the result of measurements of many mastering displays. Note that in this standard the display black and white points are also considered. CRT displays are almost extinct, but the Rec 1886 EOTF is still used.
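For concreteness, here is a small sketch of the Rec 1886 EOTF from Annex 1 of the standard, showing how the display black and white levels enter the formula alongside the 2.4 exponent (the 100 and 0.1 cd/m² values below are only example numbers, not part of the standard):

```python
# BT.1886 Annex 1 reference EOTF. Lw and Lb are the measured display white and
# black luminances; the example values here are illustrative only.
def bt1886_eotf(V: float, Lw: float = 100.0, Lb: float = 0.1) -> float:
    gamma = 2.4
    a = (Lw ** (1 / gamma) - Lb ** (1 / gamma)) ** gamma
    b = Lb ** (1 / gamma) / (Lw ** (1 / gamma) - Lb ** (1 / gamma))
    return a * max(V + b, 0.0) ** gamma  # luminance in cd/m^2

print(bt1886_eotf(0.0))  # ~0.1, the black level
print(bt1886_eotf(1.0))  # ~100, the white level
```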

The Rec 709 OETF is a “reference” OETF – a reference for camera makers and for conversions of scene-referred colorimetry to the Rec 709 encoding. In practice, camera makers may use slightly different OETFs to optimize results for specific camera characteristics and to produce their desired default looks for their cameras. Camera controls can also alter the actual OETF the camera is applying, which can then be altered further in grading in post. These alterations are often done viewing the image on a specific display, and if a very different display or viewing environment were used the adjustments could be quite different. It is not reliable to assume that the Rec 709 OETF has actually been used, unless this is known to be the case. A far better reference for the desired colorimetry is the Rec 1886 EOTF, and even in this case one is assuming the content was mastered for the Rec 1886 reference display and viewing conditions.

The breakdown is basically that the Rec 709 OETF is a starting point, where actual content is then adjusted through camera choice, camera controls and grading to produce the desired “look” on a Rec 1886 reference display in the reference viewing environment. Then, it is up to the display to make additional adjustments to account for differences between the reference display and the actual display and viewing environment.

If you are interested in an even deeper dive than the above on this topic, you may want to review the ITU-R HDR documents, BT.2100, BT.2390 and BT.2408. With HDR there are even more variables so these documents collectively provide much more information on these topics.

1 Like

There’s a lot of good questions here, but I’ll only address one tiny issue. You said, “So this 1/2.22 encoding gamma for 2.4 gamma display makes no sense.” It’s worse than that.

The Rec.709 two-part scene-referred transfer function indeed specifies an exponent of 1/2.2, but because of the linear section near black, the overall curve is closer to a simple gamma curve of about 2.0, not 2.2. Actually a little less than 2.0.
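A quick way to see the “about 2.0” figure is to compute the equivalent single-exponent decode at a few signal levels (a little sketch of my own, not anything from the standard):

```python
# Equivalent single-gamma decode exponent of the two-part Rec.709 curve at a
# few code values: it comes out around 2.0 rather than 2.2.
import math

def rec709_inverse_oetf(V: float) -> float:
    return V / 4.5 if V < 0.081 else ((V + 0.099) / 1.099) ** (1 / 0.45)

for V in (0.4, 0.6, 0.8):
    g = math.log(rec709_inverse_oetf(V)) / math.log(V)
    print(f"V={V}  equivalent decode exponent={g:.2f}")
```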

Philosophically, I’m not sure what the Rec.709 (scene-referred) gamma curve is good for. When I render my test patterns I dutifully use the curve for any “709 scene-referred” sections of the pattern. But is it implemented in cameras? Cameras that have knobs for scene contrast compression and other artistic controls that affect the curve? This is outside of my expertise.

Norm

2 Likes

Lots of good info here.
The Rec.709 curve is a reference curve based on what was great for 30-year-old cameras.
No, no camera uses that curve in production today.
Exposure adjustments in camera change the curve. The adjustments aim to create a good-looking image on a gamma 2.4 (BT.1886) display. Finally, BT.1886 is mentioned in Rec.709.
The Rec.709 curve clips! Real production adds a knee.
The linear segment was maybe useful 30 years ago, when cameras and transmissions were noisier.
Linear is good for inverting a noisy signal. Gamma is not.
If the gamma extends to zero, then the inverse curve has infinite gain at zero.
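A small sketch of that gain problem (my own illustration):

```python
# The slope (gain) of a pure 0.45 power curve grows without bound near black,
# so any noise there gets amplified without bound too; the linear segment of
# the Rec.709 curve caps the gain at 4.5.
def gain_pure_power(L: float, exponent: float = 0.45) -> float:
    return exponent * L ** (exponent - 1.0)       # d/dL of L**exponent

def gain_rec709(L: float) -> float:
    return 4.5 if L < 0.018 else 1.099 * 0.45 * L ** (0.45 - 1.0)

for L in (0.1, 0.01, 0.001, 0.0001):
    print(f"L={L:<7} pure-power gain={gain_pure_power(L):7.1f}  Rec.709 gain={gain_rec709(L):4.1f}")
```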

When 709 was written there was no standard for displays. Maybe that’s why 709 is a camera standard.

A tie-in to sRGB: in the mid-90s, Microsoft and HP developed sRGB as a way to reproduce 709 video in an office environment, with a lower (system) gamma for a brighter surround. I think this was done by Charles Poynton and Michael Stokes.

Now, you want to get back to scene colorimetry?
That info is not available, as you don’t have access to the camera’s actual OETF.
The best compromise is to use gamma 2.0.
Gamma 2 is what you find in several specs related to scene-referred color conversions. BT.2087…
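A tiny sketch of what that gamma 2 compromise looks like (my own illustration, not taken from BT.2087):

```python
# Approximate scene-linear from Rec.709 code values with a pure gamma 2.0
# decode, next to a formal inversion of the reference OETF that the camera
# most likely never applied exactly anyway.
def rec709_inverse_oetf(V: float) -> float:
    return V / 4.5 if V < 0.081 else ((V + 0.099) / 1.099) ** (1 / 0.45)

for V in (0.1, 0.25, 0.5, 0.75, 0.9):
    print(f"V={V:<5} gamma-2.0={V ** 2.0:.4f}  inverse-OETF={rec709_inverse_oetf(V):.4f}")
```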

2 Likes

Thank you! Looks like this is the answer I was looking for! So the combined result gives us an approximate system gamma of ~1.2.

I thought that we take 0.45, which is 1/2.22. Then we subtract 1/2.22 from 1/2.4, and somehow I thought that we get gamma ~1/1.2 as a result. Don’t ask. I’m really bad at maths :slight_smile:

The second part is something I am having a hard time with personally…

There seem to be two approaches to displaying content (for SDR):

  1. Don’t touch the values and just let the display handle it.

  2. Touch the values to match the image across screens.

If you have a 2.2 and a 2.4 gamma screen in the same environment there will be a difference unless you convert one of them to match the other so they are absolute-light preserving. Same idea with ACES as I understand it: put an sRGB screen (note ACES uses the sRGB piecewise curve instead of pure 2.2) next to a BT.1886 screen and they will match if the luminance is the same and you use the appropriate ODTs.
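For example (a sketch with my own numbers, assuming simple pure-power displays at the same peak luminance), the absolute-light preserving conversion is just a decode with one exponent and a re-encode with the other:

```python
# Re-encode code values mastered for a gamma 2.4 display so that a gamma 2.2
# display in the same environment emits the same light.
def match_gamma24_on_gamma22(v: float) -> float:
    display_light = v ** 2.4           # relative light the 2.4 display would emit
    return display_light ** (1 / 2.2)  # code value giving the same light at 2.2

print(match_gamma24_on_gamma22(0.5))  # ~0.47: slightly lower code values needed
```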

Apple does this stuff for every pixel in macOS because they have to in order to deal with their wide-gamut displays… Resolve on a Mac does this as well when selecting “Use Mac profile blah blah”, and Flame does this even on Linux with ACES.
ColorSync, as an example, just goes input -> linear -> display profile (ICC something something).

The problematic part is that it relies on knowing what the source actually is and then treating it right. Apple assumes Rec709 means gamma 1.96, as an example (why is mysterious).

But how does it compensate for different environments? Let’s say we view a “correctly” tagged graded movie that was graded on a BT.1886 screen in a dim/dark environment. macOS would display it to 100% match the luminances of the grading monitor, but in a bright surround.

And here comes what I think is the trick: you just let people mess with the display luminance to compensate, rather than compensating with the gamma. Newer displays are all 350+ nits. Is that a good idea? I don’t know, there are about 100 perception tests I want to do, but what is it we do when we change environments with mobile devices? We change the luminance.

This issue didn’t exist back when the whole “system gamma” stuff was invented: there was a dark room and a “normal” lit room, and in bright rooms there was no chance of even seeing a CRT image, let alone taking one outside. My phone can now do something like 1000 nits… it’s insane. We need better methods for this stuff, and this is where HDR is trying to step in, but there we have a million standards again, with HLG having a system gamma like 709 and PQ not having one, and then there is Dolby Vision IQ, which measures the surround luminance and adjusts the image in some way, and Apple’s EDR just being like “I do whatever I feel like”… sigh, we can’t win.

Funny enough, iOS does not do this and just displays stuff as-is because… reasons…

Regarding 2.6… I run my home projector at D65, gamma 2.6, at around 50 nits because I like it, just displaying Blu-ray masters mastered to 1886, I assume… Am I mad? Such a brain-twister.

4 Likes

We can’t win indeed, because different monitors and TVs also interpret the PQ curve in whatever way they like in order to system-tonemap it for their displayable range. They might also have different system tone maps for different modes, e.g. Standard mode, Movie mode, Game mode, and those might even have different white points. A good workflow to avoid going mad is to define a few reference displays and reference viewing environments for HDR and SDR, stick to them, and screw everything else.

2 Likes

Mike Stokes, Ricardo Motta, lots of people from Kodak including Ed Giorgianni. I may be wrong but I’m pretty sure @doug_walker did the math for the linear segment. There were obviously a lot more people involved but I know the people mentioned above were key.