Luminance-Chrominance Polarity Based Display Rendering Transform

Evidence. Try it.

Read the post. Read the first bullet point. Don’t forget that there was someone talking about basis vectors and the fact that only IE has a conservation of total scaling energy a long time ago.

Again, don’t get distracted from the point I’m trying to make:

CAMs and UCSs are nothing more than luminance-based mappings, devoid of chrominance. As such, they will introduce these polarity errors.

I think what @Troy_James_Sobotka is trying to say is that, while it appears that Opponent Spaces (and all their derived scales) give you more degrees of freedom (modifying A vs. not touching B), in reality those edits can produce unwanted folds and other unpleasant effects in the corresponding RGB data.
He tries to formulate a sensible constraint on what you can do with those individual scales.

From my experience working with different colour models on real images, he is absolutely right.
The degrees of freedom in those spaces are actually not as great as you would wish. They are seductive, pretending to give you easier control while hiding potential drawbacks.

In the end all those models only predict the data they were fit against. None of those data sets actually resemble what we are doing here.

I disagree with his categorical rejection of the utility of appropriated models, while admitting that none of the “fit against data” models seem to work out of the box for this use case.

I admire though that he tries to construct a route from first principles. I hope he succeeds. (And I hope we have a framework to plug his work in - once it is working).

I certainly don’t disagree, my point was that rendering in different spaces (even RGB) produces different results and that it is enough to create what @TooDee was showing.

We were talking about that stuff with Steve Agland (who pointed out the issue), Zap Anderson, Rick Sayre, Anders Langlands, HPD and a bunch of other people on 3D-Pro back in 2014. That was in the context of CG rendering but it applies here and everywhere else.

With that in mind, where I was going with this is that a common rendering space (irrespective of what that space is) is highly desirable, because it is the only way to ensure that an image appears the same on different displays.

Cheers,

Thomas

ok,

I understand now that creating a 3D render in one working colourspace (e.g. linear Rec.709) will change some of its original meaning (e.g. base “color” shader values/settings) when doing the comp in ACEScg, as an example.
The ratios between the three channels, red, green and blue, change with a colourspace transform.
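As a quick illustration of that last point, a basis change alters the per-channel ratios. A minimal sketch, assuming an approximate linear Rec.709 → ACEScg (AP1) matrix with values rounded to four decimals:

```python
# Approximate linear Rec.709 -> ACEScg (AP1) matrix (values rounded).
M = [
    [0.6131, 0.3395, 0.0474],
    [0.0702, 0.9163, 0.0135],
    [0.0206, 0.1096, 0.8698],
]

def apply_matrix(m, rgb):
    """Apply a 3x3 matrix to an RGB triplet."""
    return tuple(sum(m[i][j] * rgb[j] for j in range(3)) for i in range(3))

rgb_709 = (0.8, 0.2, 0.1)
rgb_ap1 = apply_matrix(M, rgb_709)

# The R/G ratio is 4.0 before the transform and roughly 2.3 after:
print("Rec.709 R/G ratio:", rgb_709[0] / rgb_709[1])
print("ACEScg  R/G ratio:", rgb_ap1[0] / rgb_ap1[1])
```

The cross terms of the matrix mix the channels, so any ratio between them is generally not preserved.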

So I could repeat the tests that I did and assign ACEScg primaries for the ACES tests, E-Gamut primaries for the T-Cam test, etc.
Would the tests then show more useful results than they do now?

Thanks,

Daniel

Stop with this revisionist history nonsense.

Not a single soul was discussing polarity. Why are you making this stuff up?

It has nothing to do with the complementary channels and other rendering facets. Nothing at all to do with the polarity of On-Off / Off-On.

Mythical fictions.

More mythical fiction and quite a claim of “appear the same”.

Try parsing what is being said before mashing the keyboard.

You are impossible and I’m reading you well don’t worry.

Let me summarise for you: if you read the OP, images were presented showing a behaviour, seeking an explanation as to why it happens or what it means:

What I showed is that a basis change, a simple 3x3 matrix, causes that. Again, the ACES 1.x DRT does the same and it is RGB rendering. No need to go down the rabbit hole or play a 4-dimensional chess game to find an explanation.

Fiction? How so? We do exhibit images that have been rendered in a common working space on different displays with different technologies, e.g. monitors, TV, LED wall. This is how we produce all our movies. We do that on a daily basis and I’m pretty sure that Framestore, ILM, DNeg and hundreds of vendors do the same.

[Temporarily hidden due to the community flagging the post as hateful conduct.]

[quote=“Thomas Mansencal, post:19, topic:5161, username:Thomas_Mansencal”]
What I showed is that a basis change, a simple 3x3 matrix, causes that. Again, the ACES 1.x DRT does the same and it is RGB rendering. No need to go down the rabbit hole or play a 4-dimensional chess game to find an explanation.
[/quote]

1. ACES is a terrific demonstration if one seeks to show that poor design doesn’t work. Nothing more.
2. Did one actually test the claim about per channel?

Here is the basic idea, again. Apologies, perhaps it’s a language barrier?

  1. Because of On/Off and Off/On neurophysiological signals, polarity matters, specifically at the “null”.
  2. The combined force of chrominance-luminance is a threshold, where luminance can be broadly correlated to the Protan+Deutan absorptive combination, and the remaining two signals are effectively (Protan+Deutan)-Tritan, and Protan-Deutan, in both On/Off and Off/On variations. (At risk of grotesque oversimplifications.)
  3. Per channel, devoid of colourimetric transforms, does not induce a polarity flip.

It is perfectly fine to reject the first point, which dismisses all subsequent points.

However, if anyone detects the same cognitive “oddness”, then the premise might hold veracity. Note this is the same general premise that Briggs covers in a video if one wants to see a live demonstration.

Note I am not 100% confident the max(RGB) is “the” threshold, but I’m leaning toward it being the combined neurophysiological force that correlates with the combined force of chrominance and luminance. I believe this can be shown mathematically as well.

Now, again, if the premise of polarity playing a foundational role in visual cognition holds, then it can be shown that:

  1. Per channel mechanics devoid of colourimetric transforms will not exhibit this. This can be tested.
  2. Colourimetric transforms, by way of a 3x3 matrix, can and will exhibit polarity flips.
  3. All of these glorified luminance-devoid-of-chrominance-contribution mappers will induce a polarity exchange.
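Points 1 and 2 above can be sanity-checked numerically. A toy sketch, using the dominant channel of max(RGB) as a crude stand-in for the polarity, and a made-up mixing matrix that is not any real colourspace conversion:

```python
def argmax3(rgb):
    """Index of the dominant channel."""
    return max(range(3), key=lambda i: rgb[i])

def per_channel(rgb, gamma=2.2):
    """A monotonic per-channel curve."""
    return tuple(c ** gamma for c in rgb)

def apply_matrix(m, rgb):
    """Apply a 3x3 matrix to an RGB triplet."""
    return tuple(sum(m[i][j] * rgb[j] for j in range(3)) for i in range(3))

# Made-up mixing matrix with heavy cross terms, for illustration only.
M = [[0.2, 0.7, 0.1],
     [0.7, 0.2, 0.1],
     [0.0, 0.1, 0.9]]

pixel = (0.9, 0.1, 0.1)  # red-dominant

# A monotonic per-channel curve keeps the channel ordering intact:
assert argmax3(per_channel(pixel)) == argmax3(pixel)

# The 3x3 mix hands dominance from R (index 0) to G (index 1):
print(argmax3(pixel), argmax3(apply_matrix(M, pixel)))  # prints "0 1"
```

The per-channel curve, being applied independently and monotonically, cannot reorder the channels; the matrix, by mixing them, can.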

Again, folks are free to reject the polarity issue outright. That’s fine. However, if there is a visual result that is indeed cognitively disruptive, then points 2 and 3 are valid concerns.

This isn’t “4 dimensional chess”. It’s actually very basic deductive reasoning by way of removing complexities.

A case in point, I would encourage anyone to distill and reduce the complexities by way of:

  1. Generate a PBR case where R=G=B at 100% albedo for an ideally reflective surface emulation.
  2. Set any chromatic textures to discs that are biased in balance.
  3. Set the “diffuse source” close to the top of the surface, and render such that the uppermost point is 1.0 units, to showcase a “gradation” of the model's “light fall-off”.
  4. Render using BT.709 or any working space such as BT.2020, without any colourimetric transforms.
  5. Apply a simple inverse EOTF to the values for display, or analyse the original RGB tristimulus colourimetry.

It should reveal that no such polarity “flip” will result where the chromatic discs exceed the “illumination”. The same applies for any monotonic per channel curve.
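The five steps above can be approximated numerically without a renderer. A toy sketch, where the disc albedos and the 2.2 power-law inverse EOTF are assumptions for illustration, and the dominant channel is used as a crude polarity proxy:

```python
# Chromatic "disc" albedos, biased in balance (assumed values).
discs = [(0.9, 0.3, 0.2), (0.2, 0.8, 0.3), (0.25, 0.3, 0.95)]
# Achromatic "light fall-off" gradation from 0.05 up to 1.0 units.
falloff = [i / 20.0 for i in range(1, 21)]

def argmax3(rgb):
    """Index of the dominant channel."""
    return max(range(3), key=lambda i: rgb[i])

def inverse_eotf(v, gamma=2.2):
    """Simple power-law display encoding."""
    return v ** (1.0 / gamma)

for albedo in discs:
    dominant = argmax3(albedo)
    for light in falloff:
        shaded = tuple(light * c for c in albedo)
        encoded = tuple(inverse_eotf(c) for c in shaded)
        # Neither the scaling nor the monotonic per-channel
        # encoding changes which channel dominates:
        assert argmax3(shaded) == dominant
        assert argmax3(encoded) == dominant

print("no polarity proxy flips across", len(discs) * len(falloff), "samples")
```

Every sample passes, consistent with the claim that a monotonic per-channel curve, devoid of colourimetric transforms, induces no flip.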

In terms of visual cognition, zero of the swatch samples will “pop out”.

Again, and I stress, if folks want to reject the literature about polarity, they are free to do so. Move along. Carry on. Nothing to see here! No problem!

If folks have concerns about the polarity, it is worth understanding the various discrete signal processing positions that such polarity errors will manifest, and understand their causation.

The choice is up to the individual.

I will disagree with that. There are issues, otherwise we would not be having this discussion, but we use ACES like many others, and we are successful with it. If it did not work, we would not be using it. There is no point debating this with you anyway, as you will not change your mind!

Yes, I did.

Did anyone reject it? If you think that I am then you are wrong. I actually agree with most of what you wrote just above. Especially when it is so trivially verifiable.

What I’m more interested in is having multiple rendering spaces and the consequences on reproduction.

If we look at one dimension (achromatic, for example):
Is it true to say that if f(X) is monotonic, then we do not introduce a polarity flip?

If so, does it generalise to more dimensions?
If f(X) is monotonic in all three dimensions, then it does not flip polarity?

I believe so? In this case, max(RGB) is total luminance.

I believe this holds true if and only if we are in a balanced global frame adapted system? (EG: “CAMs” / “UCSs” are not this due to the luminance-devoid-chrominance relationship.)

Ok, let me rephrase:
If a transform is monotonic in all dimensions in the final display RGB, can we assume we have no polarity flip?

Interesting question that I reckon would be dependent on the domain of the underlying model, no? EG: If the model itself is flakey / out of whack, then all bets would be off, no?

I want to arrive at a metric to determine if polarity is “preserved” or not. A monotonic function does not change the order of a set. So that could be a starting point.

Even if you are doing calculations in a non-RGB space, there must be a subset of functions in that space which do not cause “polarity flips”.

If we can test for that, it would be a great help, don’t you think?
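In one dimension, the order-preservation idea above is straightforward to test. A minimal sketch of such a metric, with the sample grid and the test functions chosen for illustration:

```python
import math

def preserves_order(f, samples):
    """True if f never changes the ordering of any pair of samples,
    i.e. f is monotonically non-decreasing over the sample set."""
    values = sorted(samples)
    mapped = [f(v) for v in values]
    return all(a <= b for a, b in zip(mapped, mapped[1:]))

samples = [i / 100.0 for i in range(101)]  # 0.0 .. 1.0

# A gamma-style power curve is monotonic on [0, 1]:
print(preserves_order(lambda x: x ** 0.4545, samples))       # True

# A non-monotonic function reorders the set, i.e. "flips polarity":
print(preserves_order(lambda x: math.sin(6.0 * x), samples))  # False
```

A full metric would still have to apply this per dimension, and around the NULL specifically, but a pairwise ordering check is the natural starting point.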

I only agree 30000%.

If they hold to the lower level neurophysiological signals, I can’t see why not? I don’t know of any models that remotely come closer than the Standard Observer though?

My concern here is the map territory relation. The only model that seems to provide a remote hint of the null is the one that we use every day, and the one that provides a very good baseline of the differentials across the signals.

For example, if we use a CAM, we can see broad correspondence to the Standard Observer model, but the CAMs and the UCSs miss the boat and drown by attempting to account for field dynamics using a direct mapping 1:1 Cartesian model. This is of course a dead end, and in fact breaks the model that actually works. EG: CAMs and UCSs break chrominance.

I don’t see much wrong with the Standard Observer models, beyond the glaring botch job of 1924, which leads to the more broken 1931 along Tritan response for purer radiation in the shorter range. A picture emerges of what seems to be a glaring oversight of the neurophysiology, leading to some attempts that break the few parts that “work”; a fictional hyper extension of the Standard Observer utility into something it can never be harnessed for in its basic form.

I can’t imagine a system that works better than the one that we use every single day, and validate accordingly?

But the Standard Observer model also holds polarity only for a subset of functions. I can easily break polarity in RGB by applying a non-monotonic function. Actually, the subset of functions which are monotonic is quite small, even in RGB.

I guess your hope is that, if we find the right representation, the set of functions we can use increases again, right?

Thinking about this more,

Probably the set of monotonic functions is a subset of all functions which maintain polarity, because (I guess) on one side of the NULL we are allowed to be non-monotonic, as long as we do not flip? So really we need to formulate the NULL.

And as it is a spatial process, I think we cannot find the NULL point for any given complex image without a spatial process. (Maybe I am totally wrong here.)

Exactly this I believe. We can swizzle to our heart’s content as long as we don’t cross that neurophysiological null energy point.

I believe creative film (outside of the complexities of DIRs and DIARs etc.) followed a similar pattern; swizzling “down”. I suppose there was a double guard with the spectral sensitivity of the photosensitive layers as well?

An interesting side note might be to think along the “whiteness” dimension, where the picture is afforded a decomposition by way of the magnitude of the differentials perhaps? Note how peculiar a maximal blackness region would manifest in the furthest right strip? It is almost as though there is a cognitive “layer” that if we cross too far down, we end up with an inverse HK effect; we cross some blackness threshold, perhaps.

No clue how to calculate that differential “intercept” though, or what the hell it even is. Interestingly, we can increment as far as we wish in the leftmost strip and it would integrate fine. It seems blackness has some sort of a “floor”.

At any rate, it is interesting to think of a 21 step chart sweep as “cognitive layers”, perhaps involved in the decomposition in picture reading.

From what I have seen, the transducer mechanic is further along the chain, applied to the differentials. This shouldn’t impact the null, given the signals are discretized and decomposed in the increment and decrement directions. There is literally no neurophysiological signal in non-differential regions, and the On-Off / Off-On are different paths.

Along a constant null “no change” region, the firing is literally null, and the area mechanic performs the cognitive fill, based on the boundary condition. The watercolor effect is an elegant demonstration of the null “area” mechanic.

Interestingly, the boundary condition wholly drives the cognitive area mechanic, which becomes evident in the after image tests. Fixating on the chromatic star will induce different cognitive area “fills” based on the differential boundary mechanic:

I was thinking about film in my previous post, because film is not monotonic in R3.

As the photoreceptors are spatially distributed, any combination of their signals can only happen spatially.

(Maybe having R,G,B available at every virtual position drove us all into a dead end. Just thinking :slight_smile: )

If we want to go down this path, we had better get some neuroscientists into the loop.

:slight_smile:

P.S

I think your examples show very well that there are other “polarities” in our HVS. Clearly the two coloured edges form another polarity of “surface belonging”. Maybe we place a special surface NULL on edges to find surfaces… A simple Laplacian pyramid would not predict that.

Can we (@Troy_James_Sobotka?) write down a definition of what the point is as this should help?

A tentative definition seems to rely on its (spatial) relationship with the (surrounding) field. I don’t think it is possible to define it without the field thus it would require spatial processing to be found.

The “null” is the “no signal” point between the differential signals.

More specifically, the bulk of the evidence supports the idea, as wonderfully laid out by Gilchrist, that the sensory apparatus is not a photometer1, or “measurement of quantity” device; it is entirely differentials based. At the lowest level of the apparatus, beginning with Hartline and Kuffler, and later culminating in the famous Hubel and Wiesel work, our signals are only spatiotemporal differences across the On-Off and Off-On cells2, 3, 4, even though we think we can evaluate “quantities”.

The “null” is the “no signal” point. Given the underlying system is purely differential, without motion or (field based) difference or change, the signal becomes a “null”.

In turn, the idea of a given cognition of “colour” seeming like it is “not part of” the “region”, would be in the increment direction from the null / no signal. “Part of the region” would be a decrement, from the null / no signal. “Polarity” could be used here, with the increment being considered “positive” and the decrement being considered “negative”, but that might be a bit of a bridge too far to try. They are different signals, and could indeed be subject to an orthogonal ontology.

If one is able to withstand cat torture, the videos are online showcasing the neuronal firing with audio from Kuffler, and Hubel and Wiesel. I won’t link them here.


1 Gilchrist, Alan, Stanley Delman, and Alan Jacobsen. “The Classification and Integration of Edges as Critical to the Perception of Reflectance and Illumination.” Perception & Psychophysics 33, no. 5 (September 1983): 425–36.

2 Hartline, H. K. “The Response of Single Optic Nerve Fibers of the Vertebrate Eye to Illumination of the Retina.” American Journal of Physiology-Legacy Content 121, no. 2 (January 31, 1938): 400–415. https://doi.org/10.1152/ajplegacy.1938.121.2.400.

3 Kuffler, Stephen W. “Discharge Patterns and Functional Organization of Mammalian Retina.” Journal of Neurophysiology 16, no. 1 (January 1, 1953): 37–68. https://doi.org/10.1152/jn.1953.16.1.37.

4 Hubel, D. H., and T. N. Wiesel. “Receptive Fields of Single Neurones in the Cat’s Striate Cortex.” The Journal of Physiology 148, no. 3 (October 1, 1959): 574–91. https://doi.org/10.1113/jphysiol.1959.sp006308.