Agree. My issue with this, though, is that we are simply deferring the problem further down the road, often placing greater labour on a colourist, who is left facing an unsolvable problem by the time the unfortunate series of scientism-driven choices reaches them.
A key point with the “larger” stimuli encoding is that we can at least draw a line to a correlated response: all wattage must be positive.
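To make the positivity constraint concrete, here is a minimal sketch (assuming the published BT.2020 and BT.709 RGB/XYZ matrices; the specific choice of BT.2020 green is just an illustrative example): a stimulus that sits outside the BT.709 gamut, when re-expressed in BT.709, demands negative channel values, i.e. negative wattage that no emissive display can produce.

```python
# Standard RGB -> XYZ matrix for BT.2020 primaries (D65 white).
BT2020_TO_XYZ = [
    [0.6370, 0.1446, 0.1689],
    [0.2627, 0.6780, 0.0593],
    [0.0000, 0.0281, 1.0610],
]

# Standard XYZ -> RGB matrix for BT.709 primaries (D65 white).
XYZ_TO_BT709 = [
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
]

def mat_vec(m, v):
    """3x3 matrix times 3-vector."""
    return [sum(m[r][c] * v[c] for c in range(3)) for r in range(3)]

# A pure BT.2020 green stimulus, re-expressed against BT.709 primaries.
xyz = mat_vec(BT2020_TO_XYZ, [0.0, 1.0, 0.0])
rgb709 = mat_vec(XYZ_TO_BT709, xyz)

print(rgb709)
# The red and blue components come out negative: a "negative wattage"
# that is physically unrealisable on an additive display.
assert rgb709[0] < 0.0 and rgb709[2] < 0.0
```

The positivity constraint is what gives the larger encoding at least one correlated, physically grounded response to check against.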
The deeper problem is that we can create problematic pictures with BT.709. Or more specifically, no one has solved the problem of forming pictorial depictions within BT.709 yet. Anyone who claims otherwise is speaking from a disingenuous vantage.
Hence this cup-and-ball routine of non-solutions needs to be arrested; they are clearly not solutions.
Your Additive Video is a huge step in the right direction, but we need to further flesh out that problem surface. We can:
Create problematic gradient stimuli relationships that nonetheless remain “straight lines” when projected into the CIE xy colourimetric stimuli projection.
Create cognitively acceptable gradient stimuli relationships that are nonetheless “curved lines” when projected into the CIE xy colourimetric stimuli projection.
Offsetting in uniform wattages does seem to resist breakage, which is why your suggestion makes a reasonable entry point.
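The first point above can be sketched in a few lines (a minimal illustration, assuming the standard BT.709 RGB-to-XYZ matrix; the red-to-blue sweep is just one hypothetical example of a gradient that passes through a murky purple trough many viewers would flag as broken): every wattage-linear mixture of two stimuli lands exactly on the straight chromaticity line between their endpoints, so collinearity in xy is silent about pictorial acceptability.

```python
# BT.709 / sRGB linear RGB -> CIE XYZ (D65), standard matrix.
M = [
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
]

def xy(rgb):
    """Project a linear BT.709 stimulus to CIE xy chromaticity."""
    X, Y, Z = (sum(row[i] * rgb[i] for i in range(3)) for row in M)
    s = X + Y + Z
    return (X / s, Y / s)

a, b = (1.0, 0.0, 0.0), (0.0, 0.0, 1.0)  # red and blue stimuli
steps = [i / 10 for i in range(11)]
pts = [xy([(1 - t) * a[i] + t * b[i] for i in range(3)]) for t in steps]

# Grassmann additivity: every wattage-linear mixture projects onto the
# straight xy line between the endpoint chromaticities.
p0, p1 = pts[0], pts[-1]
for p in pts:
    cross = (p1[0] - p0[0]) * (p[1] - p0[1]) - (p1[1] - p0[1]) * (p[0] - p0[0])
    assert abs(cross) < 1e-9  # collinear to numerical precision
```

The converse (cognitively acceptable gradients tracing curved xy paths) follows the same logic in reverse: the projection simply carries no information about gradient acceptability in either direction.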
If we can at least provide a conjecture around transparency cognition, even if it is not all-encompassing as a “final” solution, we can potentially supply a much firmer foundation than all of the prior number fornicating dead ends.
I work in camera vendor (well, not sensor native, but at least something) gamut as long as possible. The fact that it moves the problem to a LUT/DRT doesn’t help, because they then have the same issues. It is not always visible in the default state, but I haven’t yet seen a clean filmic emulation LUT (or LMT, or whatever) with rich “dense” (we all know this is a bad term, but everybody knows what it means) colours that can take wide-gamut input and map it smoothly to display without clipping. I’m not saying this is impossible, but I guess it is hard enough if even the most expensive LUT/software packages fail at it. Not to mention per-project show LUTs, which are usually far worse, often just terrible (including on very successful shows).
I have been forwarded these examples that illustrate the “transparency cognition” mechanism. I found them interesting to look at, so I thought it would be worth sharing them here.
@sdyer We are digressing from the original question from @vfxjimmy. How would you feel about splitting this off into a new thread about “transparency cognition” or something like that? Thanks!