Curious what this means?
This is not correct. The above “do nothing” will never shred the way colourimetric scalars do. It’s a different line of thinking to abandon Kodak’s scalar colourimetry altogether.
The sole point, though, is that doing nothing and arriving at a vastly superior pictorial depiction should be cause for pause. The problem surface:
- May be poorly defined.
- May have a poor conceptual framework.
Does this make reasonable sense?
Imagine taking the pictures from Star Wars: A New Hope and suggesting that folks go back to the stimuli in front of the camera to remake the pictures. Imagine suggesting for a moment that Darth’s lightsaber should be the familiar attenuated purity one that is part of the “canon” in one picture, and somehow be less attenuated in another?
If that is the premise, doesn’t it make sense to outline a protocol such that authorial intention can control that prior to engaging in forming the pictorial depictions?
If the goal is to outline a protocol whereby authors create their pictures, that protocol should have been fleshed out to facilitate the authorship. I suspect the vast majority of authors author a singular picture. If a cinematographer puts a white ultra bounce outside of a window, they are purposefully making the picture such that the ultra bounce is not seen.
If there are going to be decisions made in forming the picture, doesn’t it make sense to empower the authors by providing them with a mechanism to choose whether they want their picture to be re-authored? I am trying to imagine Deakins with a one-click “SDR” picture, and then a fundamentally different “HDR” picture? Surely he should have an authorial choice in this matter? What is the parameter space that can exert control over the “type” of HDR facets?
Given that the visual system appears to perform a dynamic normalization of sorts, which leads to the aforementioned Gamut Expansion and Gamut Contraction, what is the meaning of this? What’s the goal? We don’t cognize the stimuli, but the relational field, and the computed colour that emerges from that field is very different from what we think we are looking at.
For example…
Most reasonable folks think that the colour of the lightsaber is in the picture. What folks discover, however, is that no such singular stimulus leads to a satisfactory match between the cognized and computed colour.
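To make that concrete, here is a toy sketch, entirely my own and not a model of the visual system or of any formation chain: a crude ratio normalization against the surrounding field, in the spirit of von Kries or Retinex style scaling. The numbers and function names are illustrative assumptions; the only point is that an identical stimulus embedded in two different fields yields two different relational results.

```python
import numpy as np

def ratio_normalize(field_rgb, patch_slice):
    """Toy ratio normalization: report a patch relative to the mean of the
    whole field it sits in, von Kries / Retinex style. Not a model of
    cognition; it only shows that the result depends on the field."""
    field_mean = field_rgb.reshape(-1, 3).mean(axis=0)
    patch_mean = field_rgb[patch_slice].reshape(-1, 3).mean(axis=0)
    return patch_mean / field_mean

# One identical "lightsaber" stimulus value...
saber = np.array([0.2, 0.2, 0.9])

# ...embedded in two very different surrounding fields.
for surround in (0.05, 0.6):
    field = np.full((64, 64, 3), surround)
    field[28:36, 28:36] = saber
    print(ratio_normalize(field, np.s_[28:36, 28:36]))

# Same stimulus, different relational result: the "colour" is not a
# property of the sampled pixel value alone.
```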
Given that cognitive computation of colour is happening, what do display medium gamuts mean in this?
The photographic Apex Predator - chemical film - is not invertible from the pictorial depiction. Nor is the entire genre of black and white photographic films.
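A minimal sketch of the black and white case, under my own toy assumptions (a generic luminance weighting, not any specific film stock): the projection from three scene channels down to one grey value is many-to-one, so no inverse from the depiction back to the stimuli can exist.

```python
import numpy as np

# Toy projection to a single "density" value using a generic luminance
# weighting (an assumption for illustration, not a film model).
weights = np.array([0.2126, 0.7152, 0.0722])

scene_a = np.array([0.80, 0.10, 0.10])    # a reddish stimulus
scene_b = np.array([0.00, 0.30, 0.4745])  # a very different stimulus

print(scene_a @ weights)  # ~0.2488
print(scene_b @ weights)  # ~0.2488 as well

# Two very different stimuli collapse onto the same grey; the projection
# is many-to-one and therefore cannot be inverted from the picture.
```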
For the case of specific workarounds requiring fast hacks to energy, would it not have made more sense to engineer a specific approach to solve the energy back-projection instead of placing the constraint on the All-In-One Kitchen Sink? Given that the energy fields will be very different based on creative choices made in any given picture, no back-projection “inverse” will be correct to begin with, absent a selection of energy gradient options.
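A sketch of that ill-posedness, again with a stand-in curve of my own rather than any actual formation chain: once a compressive shoulder has folded wide ranges of scene energy into a sliver of picture values, an “inverse” has to pick one candidate energy among many, and that pick is precisely a selection among energy gradient options.

```python
import numpy as np

def shoulder(x, limit=1.0, k=4.0):
    """Stand-in compressive shoulder (a toy curve, not a specific film or
    formation chain): wide ranges of scene energy land in a vanishingly
    small range of picture values near the limit."""
    return limit * (1.0 - np.exp(-k * x))

scene_energies = np.array([1.5, 3.0, 6.0, 12.0])  # widely different stimuli
print(shoulder(scene_energies))
# -> roughly [0.9975, 1.0000, 1.0000, 1.0000]; all four are practically
# indistinguishable in the picture. Any back-projection of a value in this
# region must choose one energy among many candidates, and that choice
# depends on the creative decisions baked into the picture.
```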
This looks like the “Blue Light Fix” and is cognitively beginning to shred in the upper region, no? The purity is creating a fission mechanism that is shifting the depiction of “illumination” to being “in front of” or “ripped through” the offset haze. It’s a mess, no?
Maybe it’s just my myopic view, but the picture formation mechanism seems problematic?