Debating CAMs

For historical purposes, as I was replying when you deleted your post (which is quite a habit, I must admit):

The proposition is not boolean; I was describing the opposite: spatially induced effects have an infinite range of magnitudes. Those magnitudes form a normal distribution, and it turns out that you picked the most extreme outliers as your examples.

I then proceeded to take one of those and showed that perceptual uniformity is still a thing, even under the strongest spatial induction, but you still dismiss it, which is quite baffling. No one with normal vision would say that the Oklab and IPT gradients look less perceptually uniform than the CIELab or HSV ones. Do their overall hues change because of the purple induction? Yes, they certainly do.
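
For anyone who wants to reproduce the comparison, here is a minimal sketch of how such gradients can be generated (assuming Björn Ottosson’s published sRGB ↔ Oklab matrices; the endpoint colours are arbitrary picks, not the ones from the original images):

```python
import numpy as np

# Björn Ottosson's published matrices for linear sRGB <-> Oklab.
M1 = np.array([[0.4122214708, 0.5363325363, 0.0514459929],
               [0.2119034982, 0.6806995451, 0.1073969566],
               [0.0883024619, 0.2817188376, 0.6299787005]])
M2 = np.array([[0.2104542553, 0.7936177850, -0.0040720468],
               [1.9779984951, -2.4285922050, 0.4505937099],
               [0.0259040371, 0.7827717662, -0.8086757660]])

def linear_srgb_to_oklab(rgb):
    return np.cbrt(rgb @ M1.T) @ M2.T

def oklab_to_linear_srgb(lab):
    return ((lab @ np.linalg.inv(M2).T) ** 3) @ np.linalg.inv(M1).T

# Arbitrary endpoint stimuli in linear sRGB.
a = np.array([0.05, 0.05, 0.9])
b = np.array([0.9, 0.9, 0.05])
t = np.linspace(0.0, 1.0, 32)[:, None]

ramp_rgb = (1 - t) * a + t * b  # naive per-channel linear ramp
ramp_oklab = oklab_to_linear_srgb(
    (1 - t) * linear_srgb_to_oklab(a) + t * linear_srgb_to_oklab(b))
```

Displaying the two ramps side by side (after the usual sRGB encoding) is enough to see which one distributes lightness more evenly; the purple-induction point is orthogonal to that.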

I asked you to highlight the areas in the Blue Bar image where spatial induction has magnitudes similar to your examples. I’m genuinely curious whether they can be identified with precision, and what should be done with them.

Again, no one denies that spatio-temporally induced effects are important, but I (and plenty of others) gave up on modelling them years ago because it is the hardest problem in vision. The current models (and their extensions), e.g. iCAM06 and Retinex, are not exactly successful either and introduce objectionable artefacts, e.g. haloing. I tend to leave this stuff to researchers while following their work very closely.


From a pure complexity standpoint, we are talking about easily an order (or orders) of magnitude more code, so if the 50-60 lines of Hellwig et al. (2022) are “one of the most complex piece of software engineered by man. Ever”, well… hold my beer :slight_smile:.

Ultimately, photographers, artists and colorists have always done a better job than any spatio-temporal model or algorithm.

This brings back those fond memories of when halos from local tonemapping operators were all the rage:


The colour-science Slack is private; ACESCentral is public. I started to write my answer, then you deleted yours. It is a familiar pattern of yours that has wasted my time on numerous occasions, so this time I decided to finish writing.

Hello, let’s see if we can keep this conversation going in a respectful manner. Thanks!

Hello again, please do not distort or misuse my statements. In my original answer, I was talking about the whole Output Transform, not just the CAM model. And I even mentioned that it was a joke. I don’t understand why you keep coming at me about this.

I have watched every single meeting of the OT VWG, and my overall sense is that the complexity involved is getting in the way. I agree that complexity is not an issue per se; it becomes an issue when we cannot handle it.

As it happens, I worked on Avatar (at Framestore) and I also worked at Weta Digital (War for the Planet of the Apes). I would argue that 1000 artists working 60-80+ hour weeks are what allowed those movies to be delivered. I would never say, for instance, that Glimpse “saved” The Lego Movie; Max Liani did.

But again, it does not really matter whether things are complex or simple. In the end, those are “bait” words and just a matter of perspective. So you’re right: what matters is whether the output transform is working or not… But it strikes me that we have gone from “ACES is science” to layers of “tweaking”, and no one stops and asks “hey, are we going in the right direction?”

Surely I must not be the only one thinking this…

Maybe you should not have generated and shared the archive then? I also agree with Troy that, even on a public forum, an author’s deleted post should be respected. Don’t you think?

Regards,
Chris


We are all tremendously thankful you did.

Manuka was not used on the first movie; it did not exist yet. People worked the hours they wanted, and every minute of work was always paid, which is not the case in London, for example. This is not the right place to debate this anyway.

It is getting personal, as always, and out of hand, as it so often does. Let’s then get a bit more personal and give some context to people here.

When I started to reply to Troy on ACEScentral and he deleted his post, again, as he has done numerous times, what do you think I should do? I could have let it go, as I always have; the thing is that this time I did not. I wanted to make my point heard, and it needed the context to make sense.

For what it’s worth, I read almost all the posts on ACEScentral, and Troy is the only person here I have ever seen deleting his replies.

You are bringing up Slack again, so let’s dive in there. Do you know why I blocked post editing on our Slack? Maybe not, so read this: we had a disagreement in the past that led him to insult me on one of our channels, writing all sorts of colourful words, editing them, changing them for some more, and ultimately deleting his posts. It was very frustrating for me, because I unfortunately don’t have access to those messages, as we do not pay for the instance. This could be solved easily, albeit in a costly way.

Suffice to say that because of his behaviour we talked about kicking him out. We did not have to, as he excluded himself for a while. I think I can find an email he sent saying he would take some time off and that he loved us. He then came back one day as if nothing had happened.

Have I been more cautious and more reactive with him since then? Of course! Does it show? For sure: we have had a lot of disagreements over Slack, Twitter and here. I enjoy them most of the time, except when they start leaning toward personal attacks and insults, and we are wandering into that territory now.

Let’s finish on the archive: it is not public and never has been; it is password protected purposely. That password I give to members when they request it. If it were public, it would be on the colour-science website. It is unbelievable that I have to explain that, doubly so when you are also a recipient of this email:


No worries. I still think that these messages are useful and that interesting info is shared.

In the end, debates are what make us progress and learn.

A shame it got personal and “colorful” (pun intended).

Have a nice weekend, everybody!
Chris

Nothing to disagree about here, which is the reason we have made the channel public from now on.

To get back to the CAMs: I could be wrong, but isn’t Pomfort’s LiveGrade offering a CAM-based DRT? @Alexander_Forsythe: I think we discussed that last year, no?


To get back to the CAMs: I could be wrong, but isn’t Pomfort’s LiveGrade offering a CAM-based DRT? @Alexander_Forsythe: I think we discussed that last year, no?

I think Pomfort has integration with Colorfront.

Colorfront has something about “Using the Human Perceptual Model for Multiple Display Mastering”


Thanks @cameronrad, I will read it, but I think that is what I was looking for/alluding to!

Is it correct that the ACES 2 DRT will most likely be able to smoothly handle colors within AP1, but not AP0?

Hi,

This is lacking a lot of implementation details, but it seems like they use it for adapting the image to various display targets, not so much for rendering, which is more like the intended usage for a CAM.

Cheers,

Thomas


No

One of the goals we have been pursuing (and arguably a rod we have made for our own backs) is to gracefully handle values outside of AP1 and AP0, as many real-world production camera IDTs will land values outside of those domains.

Both the infamous Blue Bar and Red-Xmas images position meaningful picture information outside the spectral locus and, in many cases, outside AP0.

Many things would be a lot easier if we simply said “sod it, if your data lands outside the locus, then that’s an IDT problem, not a DRT problem”.

Many of the “hacks” we’ve had to implement, like the modified CAM primaries, are specifically about handling these values when they sit in places that are not physically plausible (at least by the normal definition of what data in an ACES AP0 frame is meant to mean).
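
To make “outside of AP1 and AP0” concrete, here is a minimal sketch of how one can flag those values (the AP0 → AP1 matrix is the published ACES one, but worth double-checking against the reference CTL before relying on it):

```python
import numpy as np

# Published ACES AP0 -> AP1 matrix (from the ACES reference implementation).
AP0_TO_AP1 = np.array([[ 1.4514393161, -0.2365107469, -0.2149285693],
                       [-0.0765537734,  1.1762296998, -0.0996759264],
                       [ 0.0083161484, -0.0060324498,  0.9977163014]])

def out_of_gamut_masks(aces_ap0):
    """Flag pixels with components outside AP0 and AP1.

    `aces_ap0` is an (..., 3) array of ACES2065-1 (AP0) values.
    A negative component means the stimulus sits outside that set
    of primaries, which is exactly the data a DRT must still
    handle gracefully.
    """
    outside_ap0 = np.any(aces_ap0 < 0.0, axis=-1)
    ap1 = aces_ap0 @ AP0_TO_AP1.T
    outside_ap1 = np.any(ap1 < 0.0, axis=-1)
    return outside_ap0, outside_ap1
```

Running something like this over the Blue Bar or Red-Xmas frames shows the populations described above: plenty of pixels land outside AP1, and a meaningful number outside AP0 itself.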


Thank you for the detailed explanation!

Wouldn’t it actually be better to focus on the AP0 gamut at most, or even just AP1?
Whatever out-of-working-gamut values a camera produces are always put back into the working gamut, by the RGC now and hopefully by better IDTs in the future. I’m for sure infinitely far from being the best colorist as an artist, but on the technical side of color I’m relatively good for a colorist (not compared to everyone here, of course, but more educated about the technical side than a lot of otherwise good, and in some cases famous, colorists). Still, I can’t tell you for sure, without checking first, which grading operations would or wouldn’t break out-of-gamut colors. So it’s a must to deal with them as the first step; at least that’s what I teach my occasional students in private color grading lessons.

It’s incredibly rare for them to do anything but a straight conversion by a 3x3 matrix, let alone be aware of out-of-gamut colors as something that isn’t specific to Alexa-to-ACES police lights. Using offset over LogC3 and believing it’s identical to Exposure in RAW is another popular misunderstanding. While I’ve almost eradicated the latter among colorists from post-USSR countries by strongly promoting “gain, offset, gamma wheels in linear” for the last couple of years, the former (out-of-gamut colors as a thing for almost ANY 3x3 conversion) is still a mystery to them. And these are usually really good artists, working on big movies and earning 2-10 times my salary. Another example: I once tried to explain to a relatively famous Nuke instructor(!) that, by default, the Color Space in a Read node is just a transfer curve and does nothing to the primaries. He strongly believed that it also converts the primaries to some special internal Nuke color space.
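
To put numbers on the offset-over-LogC3 point, a small sketch (using what I believe are ARRI’s published LogC3 EI800 constants; verify them against the ARRI white paper before use). An offset in log only equals an exposure gain for a pure logarithm, and LogC3 is not one, because of its b term and its linear toe:

```python
import numpy as np

# ARRI LogC3 (EI800) constants from the ARRI white paper (verify before use).
cut, a, b = 0.010591, 5.555556, 0.052272
c, d = 0.247190, 0.385537
e, f = 5.367655, 0.092809

def logc3_encode(x):
    return np.where(x > cut, c * np.log10(a * x + b) + d, e * x + f)

def logc3_decode(y):
    return np.where(y > e * cut + f, (10 ** ((y - d) / c) - b) / a, (y - f) / e)

x = np.array([0.005, 0.05, 0.18, 1.0])  # linear scene values
one_stop_offset = c * np.log10(2.0)     # log offset "equivalent" to +1 stop

via_offset = logc3_decode(logc3_encode(x) + one_stop_offset)
via_gain = 2.0 * x                      # true +1 stop exposure

print(via_offset)  # shadows drift: offset rescales (a*x + b), not x
print(via_gain)
```

Above the toe, the offset acts as a gain on (a·x + b) rather than on x, i.e. a gain plus a black offset, so the shadows drift visibly; only a pure log curve would make the two operations identical.
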
Sorry for the long text, but I often see here on the forum how people expect users to have way more technical knowledge than they really have.
So I can’t expect anybody to be careful with those fragile negative out-of-gamut colors during a grading session (especially one paid by the hour). It’s faster to deal with them as the first step and then stick to the working gamut. If someone uses offset over log and creates tons of negative values, that’s on them. The negative values created by the offset wheel contain neutral colors as well, but you don’t add soft-clipping to the DRT to protect zero-saturation neutral shadows from that. Also, show looks are often baked into LUTs that usually clamp at 0 and 1 anyway.
So my opinion is to stick to the working gamut for the DRT. Maybe 10% of projects will have even one pixel outside the working gamut, because the rest use a show LUT that clamps everything to AP1 ACEScct anyway.
I see a lot more benefit for nice-looking images and for the pipeline in developing better IDTs instead of a DRT that has to handle out-of-gamut colors.

By the way, what’s the point of AP0 now? AP0 is not used anywhere except for storing source or graded images and exchanging them between departments. But AP1 EXRs can contain negative values as well.

I have tried several methods for handling out-of-gamut values.

The first one was a Normalized Moment space, which is something Troy mentioned in another forum. It worked pretty well, because any chroma greater than 1 in this space meant the value was outside the gamut; but it had some issues with values too far from the xy triangle, and because it kept photometric luminance constant, it wasn’t the best looking in some scenarios, like negative luminance, which occurs in all the camera spaces.

The second method is a normalized Spherical Model. It is basically a smooth version of HSV: the S channel is 0 to 1 for in-gamut values, and the V channel is the same as max(r, g, b), but calculated analytically, based on the first method, instead of by using max(r, g, b). I have seen similar attempts explored before, but I’m not sure why it wasn’t pushed further, as it’s the best option I have seen so far: it’s chromaticity-linear, it doesn’t have issues with negative luminance, and it’s gamut independent.
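
I may be misreading the construction from the description alone, so here is only a toy sketch of the behaviour being described (using log-sum-exp as the smooth max/min, which is my assumption; the actual Nuke script derives V analytically from the moment-space method and is chromaticity-linear, which this toy is not):

```python
import numpy as np

def smooth_max(x, k=32.0):
    # Numerically stabilised log-sum-exp: a differentiable stand-in for max().
    m = np.max(x, axis=-1)
    return m + np.log(np.sum(np.exp(k * (x - m[..., None])), axis=-1)) / k

def smooth_min(x, k=32.0):
    return -smooth_max(-x, k)

def spherical_like(rgb, k=32.0):
    """Smooth HSV-style S and V.

    V approximates max(r, g, b); S = (V - min) / V stays in [0, 1]
    while every channel is non-negative (in gamut), and exceeds 1
    exactly when a channel goes negative (out of gamut).
    """
    v = smooth_max(rgb, k)
    s = (v - smooth_min(rgb, k)) / np.maximum(v, 1e-10)
    return s, v

print(spherical_like(np.array([0.8, 0.4, 0.1])))   # S < 1: in gamut
print(spherical_like(np.array([0.8, 0.4, -0.1])))  # S > 1: out of gamut
```

The interesting property either way is the same one described above: S > 1 becomes an unambiguous, smooth out-of-gamut indicator, independent of which RGB gamut the values are expressed in.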

I use this model pre-image-formation, and for mapping the formed image to smaller spaces.

I have even used it to make a chromaticity-linear image formation, like the current attempts for CAMDRT 2.0.

Lowguardrail.nk (44.1 KB)


If you don’t mind, may I ask you to post some example images for those who don’t have Nuke but are really curious, like me? :slight_smile:

That is really interesting. It is similar to the “hexagonal” variant of the gamut compressor that we discussed but abandoned during the development of the RGC.

Because it compresses along straight lines towards the white point in CIE xy, it does create the same kind of cusps near the primaries as the hexagonal gamut compressor did.

It is also non-invertible, because it compresses an infinite range of distances onto the gamut boundary. But this could be changed by using a different compression curve. And if it were only used in an LMT to “pre-condition” problem images, rather than being part of the DRT, inversion might not be necessary anyway.
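
For illustration, the kind of curve change I mean is sketched below: a power compression of the sort used by the RGC, which maps a chosen finite distance lim exactly onto the gamut boundary (1.0) and only approaches an asymptote at infinity, so it has an exact inverse over the values it produces (the parameter values here are placeholders, not the per-channel RGC defaults; check the reference CTL):

```python
import numpy as np

def _scale(lim, thr, pwr):
    # Chosen so that dist == lim maps exactly onto the boundary (1.0).
    return (lim - thr) / (((1.0 - thr) / (lim - thr)) ** -pwr - 1.0) ** (1.0 / pwr)

def power_compress(dist, lim=1.2, thr=0.8, pwr=1.2):
    # Distances below thr pass through; above thr they roll off smoothly.
    scl = _scale(lim, thr, pwr)
    nd = np.maximum(dist - thr, 0.0) / scl
    c = thr + scl * nd / (1.0 + nd ** pwr) ** (1.0 / pwr)
    return np.where(dist < thr, dist, c)

def power_uncompress(cdist, lim=1.2, thr=0.8, pwr=1.2):
    # Exact inverse; valid for compressed values below the thr + scl asymptote.
    scl = _scale(lim, thr, pwr)
    nd = np.maximum(cdist - thr, 0.0) / scl
    d = thr + scl * (nd ** pwr / (1.0 - nd ** pwr)) ** (1.0 / pwr)
    return np.where(cdist < thr, cdist, d)

print(power_compress(np.array([0.5, 1.0, 1.2, 4.0])))  # 1.2 -> exactly 1.0
```

In an LMT/pre-conditioning context the inverse matters less, as noted, but a curve like this at least keeps the option open.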

Given that CIE xy does not represent anything close to visual cognition, the notion of “cusps” is ill defined, let alone even well explored. Map to Territory logical error.

I believe this must be coupled to the picture formation, otherwise it becomes possible to break the picture formation chain; all values must hold meaning with respect to the domain the picture formation of the colourimetry is being manifested. Negative values literally represent non-domain coordinates.

I would be strongly skeptical of the suggestion there is a way to manipulate nonsense values that carry no meaning whatsoever. It’s another Map to Territory error.

Without the Grassmann straight lines in terms of additive mechanics, there’s a doubled up deformation which will lead to amplified distortions of the picture formation.

CIExy may not be representative of human perception, but a smooth curve turning into a sharp point in any encoding needs to be investigated in case it produces visible artefacts, does it not?

I would argue that while they may not carry meaningful colorimetric information for a human observer, those values still represent picture information which was meaningful to the camera as an observer. Is it not preferable to move that information into a range where it is easier for a human colourist to manipulate it and form the picture they want, rather than simply throwing it away because it is “nonsense”? With the RGC, we always said the intent was to turn out-of-gamut values into “better behaved pixels” for the VFX artist and colourist. We didn’t claim to make them represent truly meaningful colours.

100% worthy to investigate both the “cause” and, more importantly, the potential adverse impact on picture forming. Given that some have done a bit of this, there are quite a few other ill-posed inferences drawn from cursory glances at CIE xy.

I am merely suggesting that a beautifully “curved” map of Mount Everest isn’t providing the inference one might suggest about the topography. Hence the Map to Territory conundrum.

100% agreement. We should frame the “information” here, relative to the signal.

There’s a lot of heavy lifting here. The “what they want” is meaningful information, so in complete agreement. How that occurs is another matter.

A double-up of axial rotations is a genuinely woeful idea. The ridiculous CAM approach does this, and the slewing of relative positions is ultimately problematic when the rubber hits the road during the actual stage of picture formation, which amounts to a per-channel model. The upside of the per-channel mechanic is that it operates in ways that there is little to no analysis of. Having the “make this information salient to the working domain” step additionally slew the information is problematic; we have an infinite Grassmann projection where we end up with a double slew. One is good, one is bad.

If we want to stir our coffee drink in a specific way to make the swirl swirl in accordance with some design, having the coffee drink pre-swirled is a problem.

Except back then, no one was listening about the double-up being a bad idea. It is a shame to see some folks suggest it is a good idea, especially in the case where matrices are involved.

The ill defined problem begot an unsurprisingly ill defined non-solution in that specific position.

Hopefully a little insight from these past two years can reveal as much.

That whole discourse was misplaced. Not only are they not colours (no stimulus is a colour, as colour is a purely cognitive manifestation), the entire discourse got sidetracked from the implications for the things discussed here. The impact on picture forming is genuinely woeful, given that it spreads the incoming meaningful-with-respect-to-the-working-space Grassmann additivity across an axial range.

The singular car paint virtual “stimulus”, for example, is now an axial spread of “stimulus”; present in a carefully balanced Grassmann projection (e.g. a camera or rendering colourimetric tristimulus model), it will get further axially twisted as per picture-forming needs. Plenty of folds and loop-backs will happen, making it incredibly challenging for any picture author to undistort.

Think about that in the context of a picture formation delivering a black and white picture; axial slewing, in terms of a double-up, will have a direct impact on the relations of the formed colourimetric greyscale gradients.

“By this logic, the ‘quirk of nomenclature’ is not innocuous: the use of color names for descriptions of cones, cone-opponent mechanisms, and related color spaces steers us into a dead end about the mechanisms of color appearance. The terminology begs the question of the mechanisms by specifying the output as input.”

From “Color appearance and the end of Hering’s Opponent-Colors Theory”.
