Display Transform Based on JzAzBz LMS

wow, this escalated quickly :slight_smile:

This should really be its own discussion. (Maybe someone can split off the last comments and create a new discussion.)

Simplicity

I can see the argument that during brainstorming you should not constrain yourself too much and can optimise later. But some approaches rule themselves out from the start if you know that the delivery needs to be computed on the GPU with a minimal footprint.

Experiments and models

I was not directly referring to flicker photometry, but to a more general “issue” in vision science.
(About flicker photometry I have more questions than opinions.)

Experiments:

In general, we put too much “weight” on certain models and use them for applications that were never targeted by the initial experiments. Let’s take an example: CIE XYZ.
CIE XYZ’s only purpose is to predict metameric pairs, nothing more; and even that only in a very narrow setup: two stimuli in a 2-degree (or later 10-degree) field of view, without the presence of any other stimulus. CIE XYZ does not tell us anything about equidistance, hue, saturation or any other perceptual scale.
Nor does it say that if you lower/raise one dimension (for example luminance) the other attributes stay constant.
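
To make that narrow scope concrete: all XYZ does is integrate a stimulus against the colour-matching functions and compare the three resulting numbers. A minimal numpy sketch (the function name and data layout are mine, not from any library):

```python
import numpy as np

def is_metameric_pair(spd_a, spd_b, cmfs, atol=1e-6):
    """Two spectra are metamers (for the standard observer baked into
    `cmfs`) iff their tristimulus integrals match. That equality is the
    *only* prediction CIE XYZ makes.

    spd_a, spd_b : (N,) spectral power distributions on a common wavelength grid
    cmfs         : (N, 3) colour-matching functions x_bar, y_bar, z_bar on the same grid
    """
    xyz_a = cmfs.T @ spd_a  # tristimulus integration, up to normalisation
    xyz_b = cmfs.T @ spd_b
    return np.allclose(xyz_a, xyz_b, atol=atol)
```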

Another example is PQ:
It is designed to predict JNDs, which makes sense for encoding: if you always encode below the JND threshold, you never get banding (and even then, this is only true for b/w images, I believe).
But sometimes we expect too much from those models, or we use them in different contexts and assume some magic will happen.
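
For reference, this is the curve the JND claim is about: the SMPTE ST 2084 (PQ) encoding, sketched here with its published constants. The 10-bit quantisation at the end is just my illustration of the “encode below the JND threshold” point:

```python
import numpy as np

# SMPTE ST 2084 (PQ) inverse EOTF: absolute luminance in cd/m² -> signal.
# The curve is derived from Barten's CSF so that one code-value step stays
# below a JND, which is exactly (and only) its design target.
M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_encode(luminance_cd_m2):
    y = np.clip(np.asarray(luminance_cd_m2, dtype=float) / 10000.0, 0.0, 1.0)
    return ((C1 + C2 * y**M1) / (1.0 + C3 * y**M1)) ** M2

# Banding is avoided as long as adjacent 10-bit code values map to
# luminance steps below the JND threshold.
codes = np.round(pq_encode([0.1, 100.0, 1000.0]) * 1023.0)
```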

Models:

Sometimes the choice of model is also questionable.

For example, most colour-difference models take the form Matrix - 1D LUT - Matrix.
You can find some great explanations of why this form is grounded in physiological reasoning.
But I wonder whether the choice of form is not also just due to hardware constraints. Most YCbCr hardware out there runs as a Matrix - 1D LUT - Matrix implementation, and if you stick to the same general model you have a chance of updating existing hardware with your “new” model.
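
To spell out the shape being discussed, here is the generic three-stage pipeline. Everything below is a placeholder (identity matrices, a cube-root stand-in for the 1D curve), not the coefficients of any particular model:

```python
import numpy as np

# Generic Matrix -> 1D LUT -> Matrix model: the shared shape of IPT,
# ICtCp, JzAzBz, YCbCr and most colour-difference pipelines.
M_IN = np.eye(3)    # e.g. RGB/XYZ -> LMS-like cone space (placeholder)
M_OUT = np.eye(3)   # e.g. nonlinear LMS -> opponent axes (placeholder)

def per_channel_nonlinearity(x):
    # Stand-in for the model's 1D transfer function (PQ, gamma, ...),
    # which hardware typically evaluates as a 1D LUT.
    return np.cbrt(x)

def forward(rgb):
    lms = M_IN @ rgb                        # first 3x3 matrix
    lms_nl = per_channel_nonlinearity(lms)  # per-channel 1D stage
    return M_OUT @ lms_nl                   # second 3x3 matrix
```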

Complex stimuli

And then we have the issue that you cannot build a coherent vision-science model for complex stimuli.
So we are fundamentally doomed here for some time.
One thing is clear though: with “per pixel” operations there is only so much we can do.
