Thanks Alex. That's much appreciated!
Yes, I have read this in the document you shared (about reproduction). Very interesting part indeed:
Because these differences will produce substantial changes in the physical and perceived color of the displayed image, the colorimetry of that image must be altered such that its appearance will be correct in the intended viewing environment. In the jargon of the industry, this alteration is one aspect of what is called rendering. […] But as with all color reproductions, for correct color appearance, the colorimetry of those images must be entirely different from that of an original live scene.
Yes, we do manipulate the scene as artists to get the reproduction we want. 100% agreed. But would it be fair to say that the more faithful this reproduction process is, the better? Another way to put it: the less we need to compensate/manipulate, the better? Sure, people can compensate and have done so for many years. But isn't this group a great opportunity to improve this process and aim for the most faithful reproduction possible (hue-preserving, for instance)?
If I have to put something like 7 or 8% of green on a blue sphere so it doesn't go purple under sunlight, wouldn't you say that this compensation/manipulation of the scene should be improved? Again, part of the issue for me is the scale: possibly hundreds of CG artists compensating on a single show. How do you control that? Or, better formulated: how do we make their lives easier?
I don't think you can make the light sabers look good with the current Output Transform. I agree it is an extreme example, but it is not uncommon for an animated feature to reach this level of saturation. I tried on our very saturated show at the studio and did not succeed: the values either clipped or the hues skewed.
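To make the "clipped or skewed" trade-off concrete, here is a minimal NumPy sketch (the RGB values are hypothetical, chosen only to illustrate the two failure modes): clipping each channel independently changes the ratios between channels, which is exactly the hue skew, while scaling all channels together preserves the ratios at the cost of exposure.

```python
import numpy as np

# A highly saturated scene-linear value, well above display range
# (hypothetical numbers, just to show the two failure modes).
rgb = np.array([4.0, 0.2, 0.1])

# Failure mode 1: per-channel clip. Red saturates while green and blue
# pass through, so the channel ratios change and the hue skews.
clipped = np.clip(rgb, 0.0, 1.0)

# Alternative: scale every channel by the same factor. The ratios
# (and therefore the hue) survive, but the pixel loses exposure.
preserved = rgb / rgb.max()

print(clipped)    # [1.   0.2  0.1 ]  -> G/R ratio jumped from 0.05 to 0.2
print(preserved)  # [1.   0.05 0.025] -> G/R ratio intact
```

Neither option alone is satisfying, which is why a proper gamut mapping step (rather than a hard clip) keeps coming up in this discussion.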
In most animation studios I have worked at, all of the CG work is done under one Display Transform. If the Display Transform is broken, what are the consequences? At a place I cannot name for legal reasons, we used to work with a LUT whose highlight rolloff was broken. As a consequence, we could never set the sun's exposure high enough (on a city, for example). The renders were therefore lacking energy: not enough global illumination, not enough subsurface scattering (SSS) in the leaves of the trees… So we had to compensate by tweaking the values, sometimes outside the PBR (physically-based rendering) range. Not ideal.
I am the king of stupid questions. And as Thomas once put it: there is no such thing as a stupid question, only poor explanations! So let's see if I can come up with an explanation. With these examples I am trying to show that lighting with ACEScg primaries (displayed through the ACES Rec.709 Output Transform) kind of puts us back in the same place as lighting with BT.709 primaries (displayed with a simple BT.1886 EOTF). Sure, the s-curve does part of the job, but I believe that without gamut mapping we are kind of stuck. I think Jed's videos are a great way of showing that. In the document, Mr. Giorgianni wrote about the encoding method:
It allows information to be encoded in a way that places no limits on luminance dynamic range or color gamut. Any color that can be seen by a human observer can be represented.
So the encoding places no limit on the values, in both luminance range and color gamut. But what about the display and its limitations? How do we alter the values so they don't clip, for instance?
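This is where some form of gamut compression could come in. Below is a toy, hue-preserving sketch in NumPy; it is NOT the ACES Reference Gamut Compression, and the threshold value and tanh curve are arbitrary choices for illustration. The idea: measure each channel's distance from the achromatic axis, and smoothly roll distances beyond a threshold back toward the gamut boundary instead of hard-clipping (or letting channels go negative).

```python
import numpy as np

def gamut_compress(rgb, threshold=0.8):
    """Toy hue-preserving gamut compression (a sketch, not the ACES RGC).

    Per-channel distance from the achromatic axis is smoothly compressed
    above `threshold`, so out-of-gamut values are pulled just inside the
    gamut instead of being chopped channel by channel.
    """
    rgb = np.asarray(rgb, dtype=float)
    ach = rgb.max()
    if ach <= 0.0:
        return rgb
    # 0 on the achromatic axis, 1 at the gamut boundary, >1 outside it.
    d = (ach - rgb) / ach
    # tanh rolls distances beyond the threshold asymptotically toward 1,
    # i.e. back to the gamut boundary, instead of hard-clipping them.
    d = np.where(
        d > threshold,
        threshold + (1.0 - threshold) * np.tanh((d - threshold) / (1.0 - threshold)),
        d,
    )
    return ach * (1.0 - d)

# A value with a negative component, i.e. outside the display gamut:
print(gamut_compress([1.0, -0.2, 0.1]))  # all channels now >= 0
```

Because every channel is reconstructed from the same achromatic value and its compressed distance, the direction away from the achromatic axis is preserved, which is the hue-preserving behaviour I was referring to above.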
Funny example! I love this kind of anecdote. Thanks for sharing!
Disclaimer: I do not mean to be annoying. I am certainly passionate about this stuff and will never thank you enough for welcoming me here and making me feel part of the family. I hope this discussion is not bothering anyone and that some of what I have written makes sense.
Regards,
Chris