For historical purposes, as I was mid-reply when you deleted your post (which is quite a habit, I must admit):
The proposition is not boolean; I was describing the opposite: spatially induced effects have a continuum of magnitudes. Those magnitudes form a distribution, and it turns out that you picked the most extreme outliers as your examples.
I then took one of those and showed that perceptual uniformity still holds, even under the strongest spatial induction, yet you still dismiss it, which is quite baffling. No one with normal vision would say that the Oklab and IPT gradients look less perceptually uniform than the CIELab or HSV ones. Do their overall hues change because of the purple induction? Yes, they certainly do.
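For readers wondering what "the interpolation space matters" means concretely, here is a minimal sketch (my own illustration, not anyone's production code) that interpolates the same two sRGB endpoints in Oklab and, naively, in HSV. The Oklab matrices are the ones published in Björn Ottosson's reference implementation; the `gradient` helper and its naive HSV path are assumptions for the sake of the example.

```python
import colorsys

def srgb_to_linear(c):
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

def srgb_to_oklab(rgb):
    # Matrices from Ottosson's published Oklab reference implementation.
    r, g, b = (srgb_to_linear(c) for c in rgb)
    l = 0.4122214708 * r + 0.5363325363 * g + 0.0514459929 * b
    m = 0.2119034982 * r + 0.6806995451 * g + 0.1073969566 * b
    s = 0.0883024619 * r + 0.2817188376 * g + 0.6299787005 * b
    l_, m_, s_ = l ** (1 / 3), m ** (1 / 3), s ** (1 / 3)
    return (0.2104542553 * l_ + 0.7936177850 * m_ - 0.0040720468 * s_,
            1.9779984951 * l_ - 2.4285922050 * m_ + 0.4505937099 * s_,
            0.0259040371 * l_ + 0.7827717662 * m_ - 0.8086757660 * s_)

def oklab_to_srgb(lab):
    L, a, b = lab
    l_ = L + 0.3963377774 * a + 0.2158037573 * b
    m_ = L - 0.1055613458 * a - 0.0638541728 * b
    s_ = L - 0.0894841775 * a - 1.2914855480 * b
    l, m, s = l_ ** 3, m_ ** 3, s_ ** 3
    return tuple(linear_to_srgb(c) for c in (
        4.0767416621 * l - 3.3077115913 * m + 0.2309699292 * s,
        -1.2684380046 * l + 2.6097574011 * m - 0.3413193965 * s,
        -0.0041960863 * l - 0.7034186147 * m + 1.7076147010 * s))

def gradient(c0, c1, steps, space="oklab"):
    """Interpolate between two sRGB colours in the given space (hypothetical helper)."""
    if space == "oklab":
        a, b = srgb_to_oklab(c0), srgb_to_oklab(c1)
        mix = lambda t: oklab_to_srgb(tuple(x + t * (y - x) for x, y in zip(a, b)))
    else:  # naive component-wise HSV interpolation via the stdlib
        a, b = colorsys.rgb_to_hsv(*c0), colorsys.rgb_to_hsv(*c1)
        mix = lambda t: colorsys.hsv_to_rgb(*(x + t * (y - x) for x, y in zip(a, b)))
    return [mix(i / (steps - 1)) for i in range(steps)]
```

The Oklab path keeps lightness changing evenly along the ramp, while the HSV path sweeps hue and value independently, which is exactly the non-uniformity visible in the gradient comparisons.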
I asked you to highlight areas in the Blue Bar image where spatial induction has magnitudes similar to your examples. I’m genuinely curious whether they can be identified with precision, and what should be done with them.
Again, no one denies that spatio-temporally induced effects are important, but I (and plenty of others) gave up on modelling them years ago because it is the hardest problem in vision. The current models (and their extensions), e.g. iCAM06 and Retinex, are not exactly successful either and introduce objectionable artefacts such as haloing. I tend to leave this stuff to researchers while following their work very closely.
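To make the haloing mechanism concrete, here is a toy 1D sketch (my own illustration, not iCAM06 or any published Retinex code) of single-scale Retinex, `log(I) - log(Gaussian surround)`, applied to a step edge. Any centre/surround operator of this shape overshoots on the bright side of an edge and undershoots on the dark side: that is the halo.

```python
import math

def gaussian_kernel(sigma, radius):
    k = [math.exp(-(x * x) / (2 * sigma * sigma)) for x in range(-radius, radius + 1)]
    total = sum(k)
    return [v / total for v in k]

def blur(signal, kernel):
    # 1D convolution with edge clamping.
    radius = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - radius, 0), len(signal) - 1)
            acc += w * signal[idx]
        out.append(acc)
    return out

def single_scale_retinex(signal, sigma=8.0):
    # log(centre) - log(surround): the basic single-scale Retinex form.
    surround = blur(signal, gaussian_kernel(sigma, int(3 * sigma)))
    return [math.log(c) - math.log(s) for c, s in zip(signal, surround)]

# A step edge: dark plateau, then bright plateau.
step = [0.1] * 50 + [1.0] * 50
out = single_scale_retinex(step)
# Far from the edge the response is ~0; right at the edge it overshoots
# on the bright side and undershoots on the dark side -- the halo.
```

Real multi-scale variants blend several surround sizes to soften this, but the overshoot never fully disappears, which is why these halos show up in tonemapped images.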
From a pure complexity standpoint, we are talking about easily an order of magnitude more code, possibly several, so if the 50–60 lines of Hellwig et al. (2022) are “one of the most complex piece of software engineered by man. Ever”, well… hold my beer.
Ultimately, photographers, artists and colourists have always done better work than any spatio-temporal model or algorithm.
This brings back fond memories of when halos from local tonemapping operators were all the rage: