Still images vs video (SDR): sRGB/Rec.709

So in SDR we still generally live with a difference between sRGB and Rec.709, especially on macOS, where Apple decided to remove the 1.22 system gamma boost for Rec.709 viewing. I understand this comes from print vs video.

Most other players and operating systems (even iOS) do not make a distinction anymore and treat all pixels the same.
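To put rough numbers on how different the “same pixels” can look, here is a small sketch (my own illustration, using the published sRGB piecewise EOTF and the plain power-law display assumptions discussed above) comparing mid-grey through each curve:

```python
# Compare the sRGB piecewise EOTF against pure power-law displays.
# The sRGB curve is from IEC 61966-2-1; the 2.2 and 2.4 exponents are
# the common "video" display assumptions discussed in the thread.

def srgb_eotf(v):
    """IEC 61966-2-1 sRGB electro-optical transfer function."""
    if v <= 0.04045:
        return v / 12.92
    return ((v + 0.055) / 1.055) ** 2.4

v = 0.5  # mid-grey code value
print(f"sRGB EOTF:      {srgb_eotf(v):.4f}")  # ~0.2140
print(f"pure gamma 2.2: {v ** 2.2:.4f}")      # ~0.2176
print(f"pure gamma 2.4: {v ** 2.4:.4f}")      # ~0.1895
```

The sRGB curve and a pure 2.2 power track each other closely at mid-grey, while a 2.4 display renders the same code value visibly darker, which is exactly the gap the system gamma boost is meant to paper over.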

To me, in a world where still images and video are viewed on the same device, side by side in the same environment, the same source pixel values should be rendered the same way, should they not?

Which brings me to the mastering/colour-management issue with that.

If we have to master Rec.709 in a dark room on a gamma 2.4 monitor, how does one master still images? I would assume in the same way they are viewed: probably on a gamma 2.2 monitor in a normally lit room.
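For reference, the “gamma 2.4 monitor” in a grading suite is usually characterised by BT.1886, which only collapses to a pure 2.4 power when the display black is exactly zero. A minimal sketch of the BT.1886 EOTF (parameter names are mine):

```python
# BT.1886 reference EOTF (ITU-R BT.1886). With a true-zero black level
# this reduces to a pure 2.4 power; with a real-world black level the
# curve lifts the shadows slightly.

def bt1886_eotf(v, lw=100.0, lb=0.0, gamma=2.4):
    """Map a [0, 1] signal value to luminance in cd/m^2.

    lw: white luminance, lb: black luminance (both cd/m^2)."""
    a = (lw ** (1 / gamma) - lb ** (1 / gamma)) ** gamma
    b = lb ** (1 / gamma) / (lw ** (1 / gamma) - lb ** (1 / gamma))
    return a * max(v + b, 0.0) ** gamma

print(bt1886_eotf(1.0))          # 100.0 (white)
print(bt1886_eotf(0.5))          # ~18.95 with a zero black level
print(bt1886_eotf(0.5, lb=0.1))  # shadows lifted with a 0.1 nit black
```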

Or on a phone… or, you know.

So how does one deal with this “mixed media” mastering?

Also consider that still images can generally look more contrasty than moving images without looking bad (I don’t know which psychovisual effect this is).


Ignoring the veracity of an infinite Grassmann additivity system, I suspect your question is impossible to answer without first defining what your vantage on “colour management” is.

Given the dearth of theory, the term “colour management” with respect to pictures remains an ill-defined surface. As such, only ill-defined non-solutions result, with the general author left scratching their head at the disconnect between their expectation and the reality of the current crop of nonsense.

For me it would be any system that takes a set of input values and transforms them to different values based on a set of rules, in the scope of discussing images.

Those rulesets are what bother me a bit, practically, because of a disconnect with reality and missing pieces.

ACES, for example, is missing surround compensation in its ODTs for SDR, or rather it’s always burned in: in the Rec.709 ODT a 1.22 gamma boost is burned into the image, which implies dim-surround viewing. That’s fine, but then the sRGB ODT also has that…

Which is fine if the sRGB monitor is in the same environment as the 2.4 monitor, but according to the sRGB spec that’s incorrect, as sRGB is specified for a pretty bright office surround.

But my main question is: why do we differentiate between sRGB and Rec.709 if they are mastered in the same environment on the same display?

We are so far off those specs in reality; why does any colour-management system still follow them?

It seems like it could easily be simplified.

Nobody is going to turn on the lights and change the display gamma to sRGB when working on a graphic (sRGB OOTF = 1), then turn the lights off, change the display gamma to 2.4, and view it in context with the video.

I guess I am just lost as to how one can even approach mixed-media productions in a sensible way using colour management.

If I just ignore everything, set my monitor to 2.2, sit in some random environment, use no colour management, and make pretty pictures, it’s all easy, and that’s what I would argue 99% of people do in SDR.

Does it work? Maybe. Has anyone ever complained about something like unmanaged Resolve? Not really.

Has everyone and their grandma complained about macOS ColorSync? Absolutely.

This doesn’t seem to have anything to do with managing “colour” though? As an image author, one would hope that the image holds some importance?

There are zero visual cognition models that emulate the impact of “surround”. But this also implies some sort of “picture constancy”? That is, it seems you are asking for what is being presented to be re-presented in a varying set of contexts?

The best we have has been a power function applied to the values, and even then, the veracity of this technique can be heavily questioned.
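As a concrete sketch of that power-function approach (the exponent here is purely illustrative, not a value from any particular system), applied to luminance so that RGB ratios are preserved:

```python
# The classic "surround compensation" is just a power function applied
# to luminance. Exponent values are illustrative only: > 1 adds
# contrast (e.g. compensating for a dimmer surround), < 1 reverses it.

REC709_LUMA = (0.2126, 0.7152, 0.0722)  # BT.709 luminance weights

def surround_compensate(rgb, exponent=1.2):
    """Apply a power function to luminance, preserving RGB ratios."""
    y = sum(w * c for w, c in zip(REC709_LUMA, rgb))
    if y <= 0.0:
        return (0.0, 0.0, 0.0)
    scale = y ** exponent / y
    return tuple(c * scale for c in rgb)

grey = (0.5, 0.5, 0.5)
print(surround_compensate(grey))  # ~(0.4353, 0.4353, 0.4353)
```

Note how blunt the instrument is: a single global exponent on luminance, with no model of the field of view, the adaptation state, or anything resembling cognition.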

I believe you will find that the Sony guidance from ages ago was a surround of 1 nit approximately. This places the BT.709 “surround” in a dim-dark range, while with sRGB it’s at a different level. This would at least hint that the “same environment” assumption is not quite correct.

You are quite right. It’s all a crock of nonsense.

The salient “errors” in formed pictures that are most noticeable are the byproduct of the “absolute colourimetry” approach to visual cognition. That is, a current “clip” in a picture will yield a frequency domain transition in the gradient differentials that creates a cognitively dissonant region of “other”. On the other hand, a well formed picture, completely devoid of any and all colour management will look less broken on any colourimetric medium. “Colour management” here is breaking pictures.

Arguably the entire colourimetric vantage has been a bit of a bend in the wrong direction, largely peddled on the back of Kodak’s swing to dominance during the tail end of their empire in the 1980s and sadly into the electronic era. The absolute colourimetric vantage has led to all sorts of errors like these, complete with a dearth of default mechanics to “fix” them, e.g. the ICC protocol’s gargantuan gap in outlining a reference rendering algorithm for “Perceptual” or “Saturation” intents, or even clearly defining the “chromatic adaptation” default mechanic.

Historically it is interesting because while Kodak was in their too-big-to-fail stage, and pushing colourimetric theory well beyond the limits of its utility, Edwin Land and John McCann were laying the foundations of what might one day be considered the proper path to colour cognition solutions. Land of course left a tremendous crater of an impact on the landscape with instant photography that carries forward to today’s common cellular cameras, via his founding of Polaroid. In a funny way, Polaroid picked up the visual cognition research where Kodak was pushing rope.

TL;DR: We still lack models that work. More importantly, we lack theory on what pictures are and how they work.

The only folks I’ve seen complaining about the macOS processing is effectively folks who are obsessing over bitwise exact tristimulus with BT.709 content, as opposed to what seems to be the higher order goal of Apple’s attempts to create a presentation constancy.

It seems like a challenging surface to discuss “colour management” when literally no one can clearly define the surface of the problem.


That would pretty much be the default workflow on Windows, since it has really bad support for color management and most users don’t load ICC profiles for their monitors in any case. However, when they do, it’s up to the applications to apply them correctly, and that makes dealing with colors on Windows super, super fun as an application developer. I had started to write an explanation of why, but it was getting too technical so I cut it :slight_smile: