I probably wasn’t terribly clear. There’s no “trade off”. It’s an impossible nut to crack as not a single person on the planet has even the slightest idea as to how visual cognition works.
Following the tremendous amount of research from the likes of Adelson, Grossberg, Shapiro, Kingdom, Gilchrist, and a couple of dozen other incredibly important names, no model that uses a discrete sampling approach can even remotely behave as a “colour appearance model”. It’s pure nonsense. (The demonstrations above will range from compelling to meh depending on field-of-view dimensions, adjacent field contribution, etc.)
These aren’t “edge cases”, nor “illusions”, nor anything more than isolations of how visual fields appear to play critical roles in the psychological formulation of colour cognition.
Try any “CAM” on even the most basic “achromatic” values example, as per the Adelson Snake. They will all fail spectacularly because, lo and behold, not a person on the planet understands visual cognition. What are they affording this project? Because, again, a discrete sampling of tristimulus approach never will be, and never can be, a colour appearance model. Where did this orthodoxy manifest?
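To make the structural point concrete, here is a toy sketch (names and the transform are mine, purely illustrative): any per-sample mapping of tristimulus values is, by construction, blind to the surrounding field, so it must assign identical outputs to the identical samples that the Adelson Snake shows us cognizing differently.

```python
# Sketch: a "discrete sampling" model is any pointwise function of the
# tristimulus sample. It cannot see the surround, so identical samples in
# wildly different fields yield identical outputs -- which is exactly what
# the Adelson Snake class of demonstrations shows is not how cognition works.

def pointwise_model(tristimulus):
    """Stand-in for any per-sample CAM-like transform (hypothetical math)."""
    return tuple(v ** 0.5 for v in tristimulus)  # arbitrary pointwise curve

# Two identical "achromatic" samples, imagined in two different surrounds.
sample_in_dark_surround = (0.25, 0.25, 0.25)
sample_in_light_surround = (0.25, 0.25, 0.25)

# Blind to the surround by construction.
assert pointwise_model(sample_in_dark_surround) == pointwise_model(sample_in_light_surround)
```

No amount of cleverness inside `pointwise_model` changes this; the blindness is in the function signature, not the math.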
And I wish I didn’t have to spend a breath of time on the subject of colour appearance models, because it has become the subject, when the subject should be forming pictures from abstract, colourimetrically fit data…
Rewinding from this subject to creative film for a moment…
Creative colour and black and white film was, and remains, the apex predator of photographic picture formation.
Creative film wasn’t a colour appearance model.
Where did this whole idea to use a “colour appearance model” even come up in the first place? Surely someone can trace the wisdom of starting with the overly complex numerical fornicating of ZCAM back to a patient zero? Who was this person? I swear I’ve been around these parts, and I have no idea where this started.
There seems to be some goofy idea that the notion of “What is a picture?” has long since been “resolved” as a matter of stimulus conveyance. This is also pure nonsense, and the heavens know that countless tomes have been written on attempts to reconcile how pictures work in terms of mechanics. From Gombrich, to Peacocke, to Millikan, to Gibson, to Caron-Pargue, to name but less than a sliver of hundreds of minds.
To this end, we could enumerate the importance of picture formation and picture authorship in two broad-strokes categories:
1. Make pictures that don’t look like complete ass.
2. A protocol that affords picture authors full authorship control.
On the subject of 1., again, creative film has been the apex predator here, and it was not a “colour appearance model”.
Specifically, the methods of chromatic attenuation and amplification afforded by dye-layer channel mechanics, or even the adjacent per-channel mechanics, have not been enumerated in the totality of this effort, as best as I can tell. Those rates of change along the amplification / attenuation of purities are absolutely essential to bridge between “looks hideous” and “looks acceptable”.
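A crude toy of the idea (this is not film chemistry, and every name and curve here is my own invention): push each channel independently through a compressive curve and the ratios between channels collapse toward one another as exposure climbs, attenuating purity along the way.

```python
# Minimal sketch of per-channel purity attenuation, assuming a hypothetical
# compressive curve per channel. The point: the *rate* at which channel
# ratios collapse is where "looks hideous" versus "looks acceptable" lives.
import math

def per_channel_curve(value, exposure):
    # Hypothetical compressive shoulder, applied to each channel in isolation.
    return 1.0 - math.exp(-value * exposure)

def form(rgb, exposure):
    return tuple(per_channel_curve(v, exposure) for v in rgb)

def purity(rgb):
    # Crude purity proxy: spread of the channels relative to the maximum.
    return (max(rgb) - min(rgb)) / max(rgb)

saturated_red = (1.0, 0.1, 0.05)
low = form(saturated_red, 1.0)    # modest exposure
high = form(saturated_red, 8.0)   # heavy exposure

print(purity(low), purity(high))  # purity collapses as exposure climbs
```

The shape and rate of that collapse, per channel, is exactly the mechanical territory the sketch gestures at.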
On note 2., does anyone actually believe that creative control is being afforded? When folks open up their ACES project and see five different pictures depending on output medium, I’m 95% confident that there isn’t a single author on the face of the planet with cognitive faculty who says “Yep… this is working!”
I want to restate, loud and clear, that this whole “Colour Appearance Model” business has failed to address a critical point of protocol, present in the earliest historical manifestations of “the film colourist”. As many people familiar with the history of cinema know, the earliest film colourists, often women, would use dyes or paints to tint the formed pictures present within creative film, oftentimes with stark depths of purity. It should be viewed as completely ironic that the current protocol outlined in ACES, and other protocols, prohibits the application of the very first appearance of the colourist’s profession as we know it. This is mechanics. No one tried this, I guess?
If there is a joke in here, that has to be a glaring punch line. Even the protocol is fundamentally… curious. There’s no attempt to localize “Where is the picture I am looking at?” so that further visual manipulations can be applied, but I guess that’s another great “That’s a different VWG” problem like we saw with the “Gamut” nonsense…
So even if everyone agrees that, because visual cognition of pictures is complex, authorship affordances are of utmost importance, the insistence on this “Colour Appearance Model” protocol, or the reductio ad absurdum of “Take derp values to derp display”, has hampered even the most basic investigations into what the apex predator did (or even the earliest per-channel additive models) in terms of “effective” versus “ineffective” picture formation.
Honestly, I would bet that most folks who have been involved in this process in some capacity can probably realize that these nonsense “CAMs” (hello, fault-line patient zero ZCAM nonsense) are not the fundamental mechanics delivering anything remotely “acceptable” in the formed pictures.
The clip clamps off the working data pre-picture-formation, and the per-channel mechanic does the rest.
That is it. Nothing more. The per-channel mechanics are creating wholly new measurable values out of the “input tristimulus”. That’s forming the picture, and that picture, the thing we are all looking at, is formed out of the most basic of mechanics that has nothing to do with putting on a black robe and worshipping at the foot of what amounts to a per-channel model, for no good reason.
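It takes three lines to demonstrate the claim (a toy, with my own helper names): the instant any channel clips, the ratios between channels change, and the “input tristimulus” has been replaced by a wholly new measurable colour.

```python
# Sketch: a bare per-channel clip already manufactures new measurable values
# out of the input tristimulus. The channel ratios -- hence the chromaticity --
# change the moment any channel hits the ceiling.

def clip(rgb, ceiling=1.0):
    return tuple(min(v, ceiling) for v in rgb)

def ratios(rgb):
    total = sum(rgb)
    return tuple(v / total for v in rgb)

stimulus = (2.0, 0.5, 0.1)   # "input tristimulus", red channel well over range
formed = clip(stimulus)      # (1.0, 0.5, 0.1)

print(ratios(stimulus))  # roughly (0.769, 0.192, 0.038)
print(ratios(formed))    # (0.625, 0.3125, 0.0625) -- a wholly new colour
```

New measurable values, formed by the crudest mechanic imaginable; no black robe required.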
Could a basic clip clamp be “improved”? No, because the surface of these problems has been so obfuscated with absolute nonsense that no one can even formulate in words what the precise “problem” is.
Ill-defined problems beget ill-defined non-solutions.
(In fairness, I still don’t have a shred of a clue what “tone” is, so I’m a little behind. Apologies.)