Luminance-Chrominance Polarity Based Display Rendering Transform

The spatiotemporal aspect is connected in that film, for example, doesn’t have a mechanic that can flip polarity. It is, after all, a balanced system just like any other RGB, with the added complexity of density.

It’s easy to test using a simple inverse-EOTF lighting setup where the peak diffuse is 1.0. This also avoids the issue of purity attenuation along “whiteness”, etc. Clips, hull runs, and the like will contribute to polarity problems.

A super simple test bed that has the same per-channel mechanic across all channels. Compare against other results.

The combined “force”, if we consider luminance as a single vector and chrominance as two, would be luminance + chrominance. Luminance alone will trip polarity, because it’s missing the other “force” we ultimately will cognize.
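To make that concrete, here’s a rough polarity probe, a sketch only: it assumes BT.709 luminance weights and the max(RGB) - Y chrominance shortcut that comes up later in this thread. Amusingly, under that definition luminance plus chrominance collapses to max(RGB).

```python
import numpy as np

# A rough sketch only: "total force" as luminance plus chrominance, using
# the max(RGB) - Y shortcut defined later in this thread, with BT.709
# luminance weights assumed.
W = np.array([0.2126, 0.7152, 0.0722])

def total_force(rgb):
    rgb = np.asarray(rgb, dtype=float)
    Y = float(W @ rgb)
    chrominance = float(rgb.max()) - Y
    return Y + chrominance  # == max(RGB) under this definition

def trips_polarity(transform, a, b):
    # True if an increment between two stimuli becomes a decrement (or
    # vice versa) after the transform: the polarity "flip" discussed above.
    before = total_force(b) - total_force(a)
    after = total_force(transform(b)) - total_force(transform(a))
    return np.sign(before) != np.sign(after)

# Example: a pure per-channel scale never trips it.
print(trips_polarity(lambda v: 0.5 * np.asarray(v),
                     [0.1, 0.2, 0.3], [0.2, 0.4, 0.6]))  # False
```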


Agreed, let’s branch out and start to formulate some concepts.

I still don’t get why film has this built-in polarity preservation?
Seeing what happens in the various processes, it is quite funky. While the RGB sensing side is rather simple, the density side of things is quite complex, which also includes 3x3 matrix type operations.


If we leave out the DIRs and DIARs, which I would speculate could pooch polarity, there are two things that appear to hold:

  1. Unique spectral shape “input” would resist generating “more neurophysiological energy” if the underlying system is engineered toward an appearance of a given “without colour” centroid. Conjecture #1: If the photosensitive granules’ spectral sensitivities are “balanced” to achromatic, then it follows that any dye replacement coupling, of any spectral characterization, if also balanced to achromatic, will hold the chrominance-luminance stasis. Basically, any system where we have “oriented” our differentials to achromatic should hold? From a few napkin Colab tests, this seems to be invariant in a spectral energy model as well?
  2. Per layer / channel picture formation mechanics. For a system engineered, via varying the ground-up photosensitive material, as a monotonic densitometric H&D curve, the achromatic A=B=C case will always be “up” relative to an imbalanced version at equivalent neurophysiological energy. Conjecture #2: If the resultant output, regardless of channel swizzling “down”, doesn’t push either luminance plus chrominance “above” the total combined threshold, the polarity is maintained.

I don’t think so? A 3x3 gains the basis vectors arbitrarily. I have always viewed this, rightly or wrongly, as a two dimensional transform applied to a three dimensional Cartesian model. Perhaps a 4x4 is required to maintain polarity? No clue.

The notion of a “whiteness decrement HKE” remains here, however. It seems at least viable that there’s a peculiar “intercept” threshold along the whiteness dimension downward. E.g.: if we look off into the distance and see depth cueing in successive “layers” of mountains, a sudden and abrupt differential threshold toward a “deep dark blue” would pop out as forcefully as the standard HKE polarity dimension does in the increment direction.

Something something something cognitive decomposition here?


If we want to go down this path we had better get some neuroscientists into the loop.

I’m learning a lot in this thread. 🙂

Maybe Bevil Conway would be good to talk to? He’s a senior investigator at NIH.
I found his work/research while trying to learn more about the topics/concepts y’all are discussing.

So that we speak all the same language, can we

Depends on the 3x3 matrix, but basically once the matrix rotates the basis you start introducing channel crosstalk, which in turn produces the polarity change. A 3x3 that only scales should be fine.
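A minimal numpy illustration of that distinction, with made-up coefficients: a diagonal (scale-only) matrix can never change the sign of a per-channel difference, while off-diagonal terms leak an increment in one channel into decrements in the others.

```python
import numpy as np

# Illustrative values only: a scale-only 3x3 versus one with negative
# off-diagonal terms (crosstalk), applied to a small green increment.
scale_only = np.diag([0.9, 1.1, 1.0])
crosstalk = np.array([
    [ 1.6, -0.4, -0.2],
    [-0.3,  1.5, -0.2],
    [-0.1, -0.3,  1.4],
])

a = np.array([0.20, 0.50, 0.30])
b = np.array([0.20, 0.60, 0.30])  # same stimulus, green incremented

for M in (scale_only, crosstalk):
    print(np.sign(b - a), np.sign(M @ b - M @ a))
# The scaled case keeps signs [0, +, 0]; the crosstalk case turns the
# green increment into decrements in R and B: [-, +, -].
```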

Can we agree on the decomposition from RGB to Luminance / Chrominance so that we are all talking about the same thing?

Matrices

Unsure.

I’ve tried with sum-to-unity matrices and the results still shift; Illuminant E, for example. The coefficients aren’t constrained enough, it seems, and I’m unsure how to constrain them. I think it would require a constraint between Y and X to Z? Unsure, but I would be interested to hear ideas. If I were to speculate as to the “why”, my best guess is that the neurophysiological differential signal, in the “fixed” case of the Standard Observer model, is entwined into the three XYZ Cartesian coordinates. More specifically, given an attempt was made to isolate the P+D via the Y component of XYZ, the remaining vector is buried in X and Z. This is pure speculation of course, with a degree of evidence to support the claim. TL;DR: I am unsure a 3x3 matrix, with only two degrees of freedom in a three dimensional Cartesian projection, can supply enough control?
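One napkin reading of why sum-to-unity alone is too loose, with arbitrary made-up coefficients: rows summing to one pin the achromatic axis, yet the effective luminance weights still drift.

```python
import numpy as np

# Arbitrary matrix whose rows each sum to 1.0 (sum-to-unity).
M = np.array([
    [ 0.8, 0.3, -0.1],
    [ 0.1, 0.7,  0.2],
    [-0.2, 0.4,  0.8],
])
print(M @ np.ones(3))  # [1. 1. 1.]: the achromatic axis is preserved

# But the effective luminance weights of the transformed values shift,
# since Y of (M @ rgb) is (W @ M) @ rgb, not W @ rgb.
W = np.array([0.2126, 0.7152, 0.0722])
print(W @ M)  # no longer [0.2126, 0.7152, 0.0722]
```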

Chrominance

Chrominance ConspiracyTheory.GIF from the Arkham Sanitarium

Chrominance follows the MacAdam work. I’ve recreated the original MacAdam diagram using a “uniform projection” space, and sampling concentric circles, then reprojected back to CIE xy:

Here is the original diagram1 I was seeking to replicate:

We can calculate the ratios for any collinear CIE xy chromaticity through any arbitrary coordinate using a projection into a uniform vector space. It’s easiest to think of chrominance as the magnitude vector that forms from relative X to Z. Y remains isolated.

Oddly, I was unable to find any papers that cited the reprojection to a “uniform chrominance” projection. That projection looks like this when using the 1931 Observer system:

It can be useful to think of chrominance as the X to Z plane, with Y divided out to form an equi-energy projection. Here are the MacAdam “Moment” elliptical shapes, when projected into this equi-chrominance projection, cropped for clarity:

And here’s BT.709 projected into this chrominance plane:

The final “ratios” of the underlying current stimulus are derived from simple Euclidean distance ratios, which, given they are ratios, plausibly form a direct line to the underlying differentials:

If we wander down this path of nonsense, total “neurophysiological” influence is the combined force of the resultant neurophysiological differentials, where “differentials” are not only the differences of the absorptive profiles, but the On-Off and Off-On assemblies:

  • The (P+D) signal can be broadly considered {Y}.
  • The combined vector magnitude of the (P-D) and (P+D)-T signal path is entwined in X to Z.

This means we should be able to calculate the ratios of gains of energy required to hit the target global frame achromatic using the ratios of either Y or the X to Z magnitudes. Using the projection above, the total Euclidean distance of the chrominance vectors is 5.239601 + 0.407705, for a total of 5.647306. If we divide each vector magnitude by the sum, we end up with the two cases for BT.709 “blue” and “yellow”:

  1. 0.407705 / 5.647306 which yields 0.07219460039.
  2. 5.239601 / 5.647306 which yields 0.9278053996.

If those resulting ratio values look awfully familiar to folks, they should. Those happen to be the luminance weights of BT.709 “blue” and “yellow” respectively.
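For the skeptical, here’s a napkin verification of that claim, assuming BT.709 primaries, the D65 white point, and an (X/Y, Z/Y) projection. The exact magnitudes depend on the projection scaling, so they won’t match the 5.24 / 0.41 figures above, but the ratios are the robust part:

```python
import numpy as np

# sRGB / BT.709 RGB-to-XYZ matrix (D65 white), rounded published values.
M_709_TO_XYZ = np.array([
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
])

def project(xyz):
    """Equi-luminance chrominance plane: divide X and Z through by Y."""
    X, Y, Z = xyz
    return np.array([X / Y, Z / Y])

blue   = M_709_TO_XYZ @ [0.0, 0.0, 1.0]
yellow = M_709_TO_XYZ @ [1.0, 1.0, 0.0]  # complement of blue: R + G
white  = M_709_TO_XYZ @ [1.0, 1.0, 1.0]

d_blue   = np.linalg.norm(project(blue)   - project(white))
d_yellow = np.linalg.norm(project(yellow) - project(white))

# In this projection a mixture lands at the luminance-weighted barycenter,
# so each luminance weight is the *opposite* segment over the total.
print(d_yellow / (d_blue + d_yellow))  # ~0.0722, BT.709 "blue" weight
print(d_blue / (d_blue + d_yellow))    # ~0.9278, "yellow" (R+G) weight
```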

Incriminating Evidence against CAMs and UCSs

Moving along, if we consider literally all of the Colour Appearance Models and Uniform Colour Space attempts that have been made over the past half century, a peculiar trend line falls out of them. Let’s take a look by way of a post here on the forum, cropped, flipped, and collated against a Uniform Colour Model and a Totally Not CAM / UCS Model from Yours Truly. The “model” I generated was simply a pure luminance mapping, with attenuation of purity to hit the target luminance, such as when a given luminance was unachievable at the mapping peg position. Note how all of the “clefts” in the CAMs match the exact clefts in the UCS, which in turn match the exact clefts in the luminance mapper:

If we were to plot the chrominance ratios for a similar sweep of tristimulus using BT.709, it looks like this:

Here is the exact same measurement, inverted, as the complementary values that accumulate to the achromatic Grassmann middle:
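A guess at how these sweeps could be reproduced, assuming maximally pure BT.709 hues and the max(RGB) - Y chrominance shortcut from later in this thread; for a pure hue sweep max(RGB) is 1, so the curve and its complement accumulate to unity:

```python
import colorsys
import numpy as np
import matplotlib.pyplot as plt

# Sweep maximally pure hues (HSV with S = V = 1) and compute the
# chrominance ratio as max(RGB) - Y, with BT.709 weights assumed.
W = np.array([0.2126, 0.7152, 0.0722])
hues = np.linspace(0.0, 1.0, 361)
rgb = np.array([colorsys.hsv_to_rgb(h, 1.0, 1.0) for h in hues])
Y = rgb @ W
chrominance = rgb.max(axis=1) - Y  # == 1 - Y here, since max(RGB) == 1

plt.plot(hues * 360, chrominance, label="chrominance ratio")
plt.plot(hues * 360, 1.0 - chrominance, label="complement (sums to 1)")
plt.xlabel("hue angle (degrees)")
plt.legend()
plt.show()
```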

I believe it was Luke who, in the most recent meeting, stated:

The relationship between linear light and J is different than the relationship between linear light and M.

If we correct for that pattern, by way of gaining the underlying magnitudes of the tristimulus, I’ll leave it up to the reader to speculate with a wild guess as to what model we end up with if we were to indeed yield constant relationships between the chrominance ratio and the luminance ratio. Here is a passage exemplifying the stated problem, from MacAdam’s original work:

Figure 1 can be used to advantage in any problem in which a neutral additive mixture is to be established. If, for instance, it is necessary to secure a white image on the screen in connection with an additive method of projection of colored photographs, suitable relative brightnesses of the three projection primaries can be readily determined.

Chrominance: The Shortcut

Given we are already often in balanced systems, working through the values from the Standard Observer has a direct shortcut. Chrominance relates to luminance as:

  1. Luminance = (R_w * R) + (G_w * G) + (B_w * B)
  2. Chrominance = max(RGB) - Luminance(RGB)

If that happens to look all too convenient, it is because RGB is already engineered to be a balanced system, and the underlying “force” gains are baked into the model. To recover the magnitude we just have to “extract” it.
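A minimal sketch of the shortcut in code, assuming BT.709 weights and an RGB triplet already in a balanced, white-point-normalised, display-linear space:

```python
import numpy as np

# A minimal sketch, assuming BT.709 luminance weights and display-linear
# RGB in a balanced (white-point-normalised) space.
W = np.array([0.2126, 0.7152, 0.0722])

def luminance(rgb):
    return float(W @ np.asarray(rgb, dtype=float))

def chrominance(rgb):
    # Boynton-style chrominance: the residual above the luminance weight.
    return float(np.max(rgb)) - luminance(rgb)

print(luminance([0.0, 0.0, 1.0]), chrominance([0.0, 0.0, 1.0]))
# Pure BT.709 blue -> 0.0722 and 0.9278: the same pair of ratios the
# MacAdam-style projection recovered above.
```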

Note that the definition of chrominance here follows the one outlined by Boynton2, and is congruent with other definitions, such as Chromatic Strength and Complementation Valence, from folks such as Evans and Swenholt, Hurvich and Jameson, and Sinden:

We are now in a position to define a new term: chrominance. Whereas luminance refers to a weighted measure of stimulus energy, which takes into account the spectral sensitivity of the eye to brightness, chrominance refers to a weighted measure of stimulus energy, which takes into account the spectral sensitivity of the eye to color.

The simple description is the vector magnitude between X and Z.

1 MacAdam, David L. “Photometric Relationships Between Complementary Colors.” Journal of the Optical Society of America 28, no. 4 (April 1, 1938): 103.

2 Boynton, Robert M. “Theory of Color Vision.” Journal of the Optical Society of America 50, no. 10 (October 1, 1960): 929.


At some low value of luminance relative to the surround, of the order of 1% or less, the color is seen as black, the exact value depending on dominant wavelength and purity. For all colors, as this ratio is increased the appearance of the color passes through a predictable series. At first the blackness decreases, then hue appears at low saturation mixed with black or dark gray. As the ratio continues to increase, gray decreases and saturation increases, until a point is reached at which the gray has disappeared. This point is usually considerably below and in some cases very much below a luminance match with the surround. As luminance is still further raised, the color becomes fluorent1 and saturation continues to increase. The saturation and this fluorence continue to increase up to and slightly beyond a luminance match with the surround. Above this, brightness continues to increase but the fluorence disappears and saturation decreases. The color takes on the appearance of a light source.1

The following is a blunt force trauma scise of the upper row swatches into three distinct neurophysiological clusters, with a rough code sketch after the list:

  1. Both luminance and chrominance of the swatch “figure” are in a null or decrement orientation relative to the receptive “ground” field. This covers all upper row swatches to the right of the dashed 191/255 “ground”.
  2. Luminance of the “figure” in a decrement orientation to the “ground”, but Chrominance in an increment orientation.
  3. Both Luminance and Chrominance of the “figure” swatch are in an increment orientation to the “ground” strip. A peculiar modulating “lustre” might be cognized somewhere along this range. Perhaps there’s another null between the twin neurophysiological signals of P-D and (P+D)-T?
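A blunt sketch of that three-way classification, assuming BT.709 weights and the max(RGB) - Y chrominance shortcut from earlier in the thread:

```python
import numpy as np

# Compare luminance and chrominance of "figure" and "ground", and bucket
# by increment/decrement orientation per the three clusters above.
W = np.array([0.2126, 0.7152, 0.0722])

def lum_chrom(rgb):
    rgb = np.asarray(rgb, dtype=float)
    Y = float(W @ rgb)
    return Y, float(rgb.max()) - Y

def classify(figure, ground):
    dY, dC = np.subtract(lum_chrom(figure), lum_chrom(ground))
    if dY <= 0 and dC <= 0:
        return 1  # both null or decrement vs. the ground
    if dY <= 0 and dC > 0:
        return 2  # luminance decrement, chrominance increment
    if dY > 0 and dC > 0:
        return 3  # both increment: the "lustre" candidate range
    return None   # luminance increment, chrominance decrement: not named above
```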

1 Evans, Ralph M., and Bonnie K. Swenholt. “Chromatic Strength of Colors: Dominant Wavelength and Purity.” Journal of the Optical Society of America 57, no. 11 (November 1, 1967): 1319.


Luke’s dissertation has done a pretty great job of fixing this… introducing a relationship between achromatic and chromatic pathways that produces brightness. He’s done great research in understanding the relationship between magnocellular and parvocellular pathways, and addressed some failures of the transformation from opponency to hue and chroma, mainly that the magnitude of chroma and its relationship to physical purity is not the same for all hues. After addressing this significant issue, it became very clear that the relationship between perceptual brightness and the physical stimuli was a simple combination of achromatic magnitude taking place in the parvocellular path and chromatic magnitude.

Although I don’t have his full model yet available, the preliminary stuff I’ve seen looks quite good at addressing the mountains in the color maps you’ve shown. It will be well worth looking at once his dissertation is published (coming soon).

This was fixed in 1907 approximately? Munsell?

I’m pretty sure this is an already solved problem?

We can fix the mountains. The answer, I suspect, is to look at what already “works” in every single technique that balances to an achromatic equivalent illuminance level. Heck, it’s in every single display!

But as per @daniele, I suspect the “problem” begins to unfold as an ill defined one, to which only ill defined non-solutions will arise.

Ok I’ll bite.

Any such model that attempts to map stimulus to cognition is going to be one to one, goofy noodleberry Hunt-like parameterization notwithstanding, and, as Hunt’s model does, it breaks down the moment one tries to put it to work.

Most beautifully experienced via the Riddle of the Twin Discs.

This dilemma goes all the way back to at least Katz, and lingers as the most problematic component of “the eye as photometer / measurement device”, which all neurophysiological research points against.

For those unfamiliar, take twin discs with annuli, and set the outer annuli to minimum luminance in a display. Set the other to maximum. Set the discs to equivalent luminance. Of course, everyone on the planet knows that the cognition of the interior disc will be “incompatible”. Now freely adjust the interior disc to any achromatic value.

What will become clear is that all values will be unsatisfactory. To really appreciate this, one would be advised to try it. The setup is dead easy to try at home on any DCC.
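For anyone who wants to try it without a DCC, here’s a quick matplotlib sketch of the setup; values are display-referred, and the 0.5 disc value is just a starting point to adjust from.

```python
import numpy as np
import matplotlib.pyplot as plt

# Twin discs on opposing annuli: two equal-luminance discs, one in a
# minimum-luminance surround, one in a maximum-luminance surround.
def disc_on_annulus(disc, annulus, size=256):
    y, x = np.mgrid[-1:1:size * 1j, -1:1:size * 1j]
    r = np.hypot(x, y)
    img = np.full((size, size), annulus)
    img[r < 0.5] = disc
    return img

left = disc_on_annulus(disc=0.5, annulus=0.0)   # disc on black surround
right = disc_on_annulus(disc=0.5, annulus=1.0)  # same disc on white surround

fig, axes = plt.subplots(1, 2)
for ax, img in zip(axes, (left, right)):
    ax.imshow(img, cmap="gray", vmin=0.0, vmax=1.0)
    ax.axis("off")
plt.show()
```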

What unfolds from that, as anyone paying attention to the Hubel & Wiesel1 work will see very quickly, is why such a “match” is impossible; the “optic signal” ain’t a measurement. Full stop. As best as I can tell from the research I’ve looked at, the “optic signal” is strictly differential, along twin dimensions.

If someone wants to read a wonderful paper on this subject, one that really tackles the centuries old “Eye as Measurement” concept, I would strongly recommend Mausfeld’s The Dual Coding of Color2. I have heard that Mausfeld ran the lab where the deities of visual cognition, Ekroll and Faul, spent their time. It’s an absolutely brilliant paper that cuts to this twin differential signal mechanic, and along the way effectively dismantles the “Measurement Conjecture”.

TL;DR As anyone who honestly tries the twin disc situation will realize, any attempt to work on a rather archaic notion of stimulus to sensation is doomed to failure; no one to one “measurement” correspondence will ever possibly work, given that the signals are likely differential in nature, and of a duality that is never expressed in such models.

1 https://youtu.be/OGxVfKJqX5E?feature=shared

2 Hatfield, Gary, and Sarah Allred, eds. Visual Experience: Sensation, Cognition, and Constancy. Oxford University Press, 2012. https://aardvark.ucsd.edu/color/mausfeld.pdf


Munsell is not an appearance model… it’s not really mathematically defined. It’s just a color order system. So… sure, I guess it doesn’t have the same problem (except it does… because it’s based on flicker photometry).

It’s not an already solved problem as far as I or several dozen other color scientists that I speak to very frequently are aware. There are no mathematical appearance models that can predict HK effect in simple stimuli. If we are wrong… I look forward to finding the survey paper explaining what we are all missing.

While some color order systems may not have the same issues, we’d still like a useful mathematical model to do the same. The thing that seems the closest to working, by wide peer reviewed consensus, is CIECAM. And yes, there are several issues that originate from Hunt, which have been addressed in multiple papers by Hellwig and Fairchild.

Lastly, your disks demonstration of simultaneous contrast is interesting… but it seems to me like just changing the topic instead of a real response. Totally dismissing CAM as useless because it doesn’t predict simultaneous contrast (which it was never designed to do) is really defeatist, and ignores other extensive modern research on extending CAM to do so (iCAM, for example).

I’m reasonably sure the papers are already out there.

But first I guess I have to ask: what is the Helmholtz-Kohlrausch effect to you? I am well versed on the linkable Wikipedia entries, the endless research papers from Nayatani, Ware-Cowan, etc., but I’m curious as to what you think the “problem” is?

Ok. And what does CIECAM (or literally any of these models, your pick) reveal of the following pictures? Can any model bring even a sense of understanding?

I’ve seen plenty of models based on colourimetry, and I would say they are all abject failures. Doubly so perhaps given how basic this articulation is? We could even extend any of them to the simple twin disc demonstration, and I believe they are failing?


I’ll address this when we can nail down what you would outline as the problem surface of HKE. Because honestly, I don’t really know what some folks think is HKE and what isn’t, and it would do my pea sized parrot brain well to get a handle on what this specific facet is about.

(And yes, I’m totally dismissing all CAMs and “Uniform Colour Spaces” as utter nonsense, but that’s just me. So I guess that’s either me being defeatist, or totally unsatisfied with following along with the orthodoxy from the colourimetric approach.)

Wait… isn’t it? Doesn’t it have a notion of “without colour” baked into it? I’m confused as hell at this point.

There is an orientation, local CSF, and luminance based model of lightness that would predict the interesting parts of the images you’ve shared here. But I have to ask a friend for the reference, so I will need to wait until I have more information to share.

But I also want to point out that there is yet another topic / illusion switch in your first image, to Metelli’s transparency illusion. There is a paper you would love in the book “Illusory Contours”, which is sadly at my desk in NY. It will have to wait a few weeks until I can scan it in.

In this thread you’ve pointed out several different visual phenomena that at least at some point are all related, but it’s too difficult to discuss and track them all at the same time. The HK effect (your rainbow mountains), failures of lightness models accounting for local contrast, the transparency illusion, and the watercolor illusion. If you took some time to really write an extensive explanation of what you think the relationship between all of these is, and did some mathematical modeling on them, it would be well worth the time.

I think that myself and some others here would happily engage with that, but it’s just too much to try to comprehend in a few posts consisting of only a few paragraphs each and mixed with other commentary.

I am excited to read it. I have yet to see any except for one generalized approach that is satisfying. The Kanizsa / Minguzzi demonstration is a shot across the bow of inhibition theory.

When a white homogeneous surface surrounded by a black area is divided by a thin black line, the two resulting regions can appear slightly different in brightness (see figure 1). Since the conditions of brightness contrast are identical for the two regions, this unexpected effect is not easily explained. In fact, it cannot be accounted for by any simple physiological mechanism such as lateral inhibition or frequency filtering.

Metelli holds a tremendous sway over me, so I’d be very keen to have even a clue as to the title. I am reasonably decent at scrubbing up the works.

Oddly, this is exactly the reason I’ve rejected the CAMs outright. I did the manual calculations to “correct” the mountain range, as the tristimulus balancing is remarkably basic. Thank god because I’m the idiot who’s currently arguing with someone with an actual degree in colour science from RIT, which is a sure sign that they should not be listened to.

If you scroll up, you can actually see the result. It’s surprisingly underwhelming as much as I was very excited to see if I could correct them. The model we end up with has been sitting here all along.

Which is exactly why I asked for you to define the HKE effect in your terms, on your grounds.

I am very confused however.

I cannot see Munsell, and all of CIE Colourimetry, as anything except an appearance model. Specifically, the appearance of a global frame of achromatic. If it were not, the notion of “without colour” would not exist, and it seems to be central (no pun intended) to the notion of “colourimetry” itself.

So, to recap:

  1. What’s HKE to you? If it is the mountains, I’d like to explore what you see as “wrong”.
  2. What is an “appearance” model? I consider “without colour” or “whiteness” as an appearance dimension, but perhaps we need to have a clear definition here from your vantage.

And please don’t quote CIE term lists or Wikipedia, as it doesn’t do me any good in parsing what your vantage is, and I’m very keen on the demarcation line between what is an “appearance” and what is not.

1 Kanizsa, Gaetano, and Gian Franco Minguzzi. “An Anomalous Brightness Differentiation.” Perception 15, no. 2 (April 1986): 223–26. https://doi.org/10.1068/p150223.

Well, to answer at least your last question, which is about all I have the energy for at this moment: a color appearance model uses some inputs about a stimulus to predict what it looks like. Munsell is not a color appearance model because it’s just a list of XYZ values under a particular illuminant (single state of adaptation) and a list of corresponding value, chroma, and hue numbers. Maybe you can interpolate within it. But it doesn’t have any adaptation model attached, or describe any mathematical functions relating the physical parameters of the stimuli to the appearance values.

And in your own words, and in your own examples within this thread, “CIE Colorimetry” isn’t an appearance model because the same colorimetric values can look different depending on context. Like state of adaptation, or relationship to other elements in the scene.

I’m not going to suggest that CIECAM is a “good” appearance model, because you rightfully point out it is missing things like simultaneous contrast and “filling in”, but at least it includes luminance adaptation and chromatic adaptation. And it is actually a “model” in that it provides some mathematical procedures for calculating appearance correlates.

Reading through this thread again, I just want to highlight this comment here, as maybe this creates a more actionable conversation. If there is some egregious failure of some regular ACES operation, like a polarity change… maybe there are tests or models we can formalize to check these. Other “polarity” changes that I’ve seen in non-ACES color algorithms cause some colors to cross Kay’s naming boundaries, i.e. turning green into cyan / turquoise.
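As a strawman for formalizing that naming-boundary test, here’s a sketch using the colour-science package; the boundary hue angle is an illustrative placeholder, not published category data, and a real test would use measured naming boundaries.

```python
import colour
import numpy as np

# Illustrative placeholder boundary only; a real test would use measured
# colour naming category data rather than this made-up hue angle.
GREEN_CYAN_BOUNDARY_DEG = 165.0

def hue_deg(rgb):
    # sRGB -> XYZ -> Lab -> LCHab hue angle, via the colour-science package.
    XYZ = colour.sRGB_to_XYZ(np.asarray(rgb, dtype=float))
    L, C, h = colour.Lab_to_LCHab(colour.XYZ_to_Lab(XYZ))
    return h

def crosses_boundary(rgb_in, rgb_out, boundary=GREEN_CYAN_BOUNDARY_DEG):
    # Flags a swatch whose hue lands on the other side of the named
    # boundary after the algorithm under test has been applied.
    return (hue_deg(rgb_in) < boundary) != (hue_deg(rgb_out) < boundary)
```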

I would claim that any idea of a null of achromatic does indeed have an appearance facet, as does any appearance of a “match”, which implies a cognitive evaluation. But ignoring that…

But let’s stretch the limits of good taste and pretend that all of experience is isolated stimulus against a dark surround. That too seems to be a challenging suggestion, if the escape hatch is “Well, it doesn’t predict appearance”. That is, either the totality of the receptive fields plays a role in varying magnitudes of influence, or it does not. This is a binary proposition.

OK! Great. “Context”. So to be an “appearance” model we need some parametric description of “context”? To what granularity? When we have a global frame achromatic tristimulus and it is cognized as “yellow”, what sort of “context” do we need to arrive at an appearance model that predicts such?

At the risk of sounding like pedantry, I worry that “context” ends up being an infinite regress over an infinite ontology. Who decides what is “context” and what is not? And if we add one more sprinkle of “context”, is that within the model, or do we need another model with such additional “context”?

Yet not a single CAM or UCS can “calculate” a tristimulus correlation of the appearance of the twin discs or Adelson, or literally any of the field dependent articulations provided in this post. Not a single CAM nor UCS.

What I am getting at is that if we place any veracity in the inadequacy of a match of the twin discs and annulus, it at least points in the direction that the basic model of measurement is fraught with failure from the onset. Chasing the dragon of such a model is forever going to fail us miserably due to the wild disconnect between the elementary neurophysiological signals and the erroneous projection of “eye as photometric device”, to quote Gilchrist1, and later expanded upon by Mausfeld2.

Which is why it would seem the “force of a given chromatic signal” problem is already solved via Munsell (1907)3; Munsell used Maxwellian discs to estimate the relative “force” of purity of opponent colours, which ends up confirmed in the CIE 1931 system. Later, folks like Evans and Swenholt4 and Jameson and Hurvich5 further confirmed the pattern.

Hence the need for your specific description of what HKE “is”?

Each of the demonstrations loops back to a core construct, which is echoed in primate neurophysiology6. We can distill the demonstrations down to two foundational concepts:

  1. That the “optic signal” is a two dimensional construct for each of the assemblies. One construct leads to an increment signal, the other leads to a decrement signal, which are strictly differential in nature.
  2. That the increment and decrement differential signals are propagated and inhibited by way of those twin paths. This propagation of the differential varying magnitude “boundaries” can be considered the “fill” mechanic described by Grossberg and Todorović7.

Drawing this all together, into how pictures are formed and why we ought to be paying attention to them: the increment and decrement mechanics are plausibly tied to how we scise pictures apart into cognitive assemblies. When we form pictures from measurement devices, we should be paying close attention to how we create those differentials, to avoid unintended scission.

Note that I’ve loosely tried to put the terms “Brilliance” and “Luminous”, to borrow terms from Evans8, in the examples to point to an approximation of the “lustrous” effect that seems to be a biased Gaussian-like range about the fulcrum of increment to decrement at equiluminance. In the “yellow” and “cyan” cases, the labels should be considered to flag a region that isn’t expanded in the swatch demonstrations; the “Brilliance” lustre seems to be related to the global frame equivalent achromatic step point in the diagram, 191/255. The lustre also seems correlated to the chrominance of a given chromaticity angle. YMMV.


I find it interesting that the lustre is incredibly similar to the rivalrous increment versus decrement mismatch in the middle disc of Kingdom’s example9.

1 Gilchrist, Alan, Stanley Delman, and Alan Jacobsen. “The Classification and Integration of Edges as Critical to the Perception of Reflectance and Illumination.” Perception & Psychophysics 33, no. 5 (September 1983): 425–36. https://doi.org/10.3758/BF03202893.

2 Mausfeld, Rainer. “Color Perception: From Grassmann Codes to a Dual Code for Object and Illumination Colors.” In Color Vision, edited by Werner G. K. Backhaus, Reinhold Kliegl, and John S. Werner, 219–50. De Gruyter, 1998.

3 Munsell, A. H. A Color Notation. Boston: G. H. Ellis Co., 1907.

4 Evans, Ralph M., and Bonnie K. Swenholt. “Chromatic Strength of Colors: Dominant Wavelength and Purity.” Journal of the Optical Society of America 57, no. 11 (November 1, 1967): 1319.

5 Jameson, Dorothea, and Leo M. Hurvich. “Some Quantitative Aspects of an Opponent-Colors Theory. I. Chromatic Responses and Spectral Saturation.” Journal of the Optical Society of America 45, no. 7 (July 1, 1955): 546.

6 Dacey, D M. “Circuitry for Color Coding in the Primate Retina.” Proceedings of the National Academy of Sciences 93, no. 2 (January 23, 1996): 582–88. https://doi.org/10.1073/pnas.93.2.582.

7 Grossberg, Stephen, and Dejan Todorović. “Neural Dynamics of 1-D and 2-D Brightness Perception: A Unified Model of Classical and Recent Phenomena.” Perception & Psychophysics 43, no. 3 (May 1988): 241–77.

8 Evans, Ralph M. The Perception of Color. New York: Wiley, 1974.

9 Kingdom, Frederick A. A. “Levels of Brightness Perception.” In Levels of Perception, edited by Laurence Harris and Michael Jenkin, 23–46. New York: Springer-Verlag, 2003.

I’m only going to respond to one small point, because I can’t waste all my day talking in circles about this… but yeah… that would count as having an appearance facet. But XYZ values don’t incorporate that; XYZ is just the result of a simple weighted integral. You’ve taken my reply, ignored what I was actually replying to, and moved the goal posts on why I would say colorimetry is not an appearance model. So once again: colorimetry does not incorporate an appearance model by itself. Some additional instructions / calculations are needed.

And as I said in my last reply, you correctly point out that CAM does not predict the appearance of a lot of the stimuli you’ve shared in this thread. And you raise another good point about how much of the context needs to be parameterized and with how much granularity.

Lastly, actually I think that CIECAM can predict the change in appearance in the twin disks example. That’s what the Y_b parameter is intended for. I’ve tested it to model some other examples of simultaneous contrast and it does that for achromatic contrast quite well.

How about you write your own appearance model and compare it to iCAM, which does include spatial factors? Then we’d have something to compare to, and could see whether it is indeed better.

Go for it.

I’ll wait.

I won’t make you wait long. Y_b does indeed model simultaneous contrast. It’s error prone in its formulation, and should be improved. Or perhaps an entirely new CAM model derived from your methods would be better. But my point is just to show that there is actually a parameter for the disks example.

Here I’ve used the value 10 in place of 0, because the model is unstable and badly formulated. And I will once again use the disclaimer that I’m not suggesting it’s a good model. But the idea that there is nothing, or no effort toward understanding these phenomena, in CIECAM or in other maybe more “traditional” color science models is plainly untrue.
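For reference, the experiment described above might look something like this, assuming the colour-science package’s CIECAM02 implementation; only the relative background luminance Y_b changes between the two calls, standing in for the two annuli.

```python
import numpy as np
import colour

# Sketch only: the shared achromatic "disc" stimulus is held fixed, and
# only Y_b (relative background luminance) changes between evaluations.
XYZ = np.array([20.0, 20.0, 20.0])        # the shared interior disc
XYZ_w = np.array([95.05, 100.0, 108.88])  # D65 reference white
L_A = 60.0                                # adapting luminance (cd/m^2)

dark = colour.XYZ_to_CIECAM02(XYZ, XYZ_w, L_A, Y_b=10.0)   # 10 in place of 0
light = colour.XYZ_to_CIECAM02(XYZ, XYZ_w, L_A, Y_b=90.0)  # bright annulus

# One stimulus, two lightness correlates once the background differs.
print(dark.J, light.J)
```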

If you have a better model derived from a fully spatial image and want to analyze it against iCAM, which would actually do a better job of modeling your image, I think there are many people who would be interested.

It’s well known that CAM is not perfect, and some people like myself would even call it bad. But it’s not as devoid of merit as you suggest.

The point I was making is that even if we remove the diffusion-like processing out of the equation, these “measurement” approaches are all going to deliver similar nonsense without accounting for the increment vs decrement direction.

That was my point, and I stand by it. I do not see anyone even sniffing around these sorts of mechanics out in CAM / UCS land, and I would hope that they would become more well understood. We could even make a case that the “unsatisfactory” result of the twin discs is due to the increment and decrement signals being pushed along upstream as cognitive metadata, as a flag for a different mechanic. There’s a massive amount of research here hinting that way, not the least of which is likely Anderson’s entire body of work1.

Given that it is well documented that the “dipper” function sometimes manifests in increment vs decrement magnitudes of sensitivity2, and that there are well documented asymmetries between the increment and decrement dimensions3, it seems odd that few folks in colourimetry are exploring this?

1 Barton L. Anderson, Google Scholar profile.

2 Whittle, Paul. “Increments and Decrements: Luminance Discrimination.” Vision Research 26, no. 10 (January 1986): 1677–91.

3 Lu, Z.-L., and G. Sperling. “Black-White Asymmetry in Visual Perception.” Journal of Vision 12, no. 10 (September 14, 2012): 8.