Some initial thoughts

I put together some of my thoughts about Gamut Mapping:

Looking forward to your feedback.

Daniele

Hi Daniele,

Quickly reading through it (late and tired): a good read overall. This sentence made me raise an eyebrow, though:

We also cannot use spectral based methods because we cannot assume to know the spectral response of the camera. So we cannot even back-calculate a possible spectral distribution.

Am I missing something? Nothing says that they could not a) be measured for existing cameras, or b) be required as part of future IDTs.

Invertibility might sound like an admirable goal. But if we read and agree on the above invertibility is extremely dangerous. It means we are reducing our usable area again.

I would caveat that with the fact that every time non-invertible functions have been proposed and used, it has made users sad somewhere. As an example, one of the requests of the RAE was full invertibility of the RRT. Not having it causes issues, for example, in AR applications.

Cheers,

Thomas

Hi Thomas,

Thanks for the feedback.

Some comments:
If we required spectral data from cameras in IDTs, we would merely postpone the difficulty of finding a proper mapping from one observer to another. We would also need unaltered camera RGB, so every camera module from a single manufacturer would need a different IDT. I think this would make IDTs impractical.

Most of the time, the need for invertibility comes down to someone trying to escape a workflow. We need to understand why the user might want to break out of the workflow and address this issue.
Maybe you could provide some use-cases.

Our experience shows that even if you have an analytically invertible display rendering or gamut compression algorithm, you end up not shipping it, because it is impractical to use.
Or you ship an inverse which is not the exact inverse of the forward model.

One example:
Assume you have a tone mapping stage from scene-referred to display-referred which maps:
200 -> 1.0
10 -> 0.95
1 -> 0.8

Now, if you use the mathematically exact inverse tone mapping, you can easily create nonsensical scene-referred data.
It will look just fine if you view it again through the forward transform.
But surely this is not healthy, because you might not be able to work with the scene-referred data.
Imagine adding a blur or something to it.
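
To make this concrete, here is a minimal NumPy sketch of the above (the piecewise-linear curve is my own stand-in for a real, smooth tone curve):

```python
import numpy as np

# Hypothetical piecewise-linear tone curve matching the values quoted above
# (scene-referred 1 -> 0.8, 10 -> 0.95, 200 -> 1.0).
SCENE = np.array([0.0, 1.0, 10.0, 200.0])
DISPLAY = np.array([0.0, 0.8, 0.95, 1.0])

def forward_tonemap(x):
    """Scene-referred -> display-referred."""
    return np.interp(x, SCENE, DISPLAY)

def exact_inverse(y):
    """Mathematically exact inverse of the forward curve."""
    return np.interp(y, DISPLAY, SCENE)

# The round trip is perfect:
print(forward_tonemap(exact_inverse(1.0)))  # 1.0
# ...but the top 0.05 of display range unpacks to scene values from 10 to 200,
# so any display-referred pixel near 1.0 becomes extreme scene-referred data:
print(exact_inverse(0.95), exact_inverse(1.0))  # 10.0 200.0
```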

You could instead ship a slightly different inverse tone mapping which, when viewed through the forward DRT, still looks reasonably close (you introduce small round-trip errors to favour robustness) but produces more sensible scene-referred data:

1.0 -> 10.0 -> 0.95
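
Continuing the sketch above, such a softened inverse could look like this (clamping at 0.95 is an arbitrary illustrative choice):

```python
def robust_inverse(y):
    """Approximate inverse: display values in the compressed shoulder are
    treated as if they were at 0.95, trading round-trip accuracy for
    more sensible scene-referred data."""
    return np.interp(np.minimum(y, 0.95), DISPLAY, SCENE)

print(robust_inverse(1.0))                   # 10.0
print(forward_tonemap(robust_inverse(1.0)))  # 0.95, a small round-trip error
```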

Of course we can define invertibility as one design goal for gamut mapping; no problem with that. I am just raising my concerns about practicality early on.

I hope some of this makes sense.
Daniele

Could you explain why you would need an analytical inverse RRT/ODT in an AR application?

We already talked about it together at some length here, and nothing has changed on my end; if anything, we have more use cases where it is required :-)

The TL;DR: we always need to map camera data onto geometry, e.g. backplates, shadow receivers, destruction effects, world distortion, and those elements have to go through the engine's tone mapping function. Those are just a few examples, but the reality is that we are doing this on a daily basis. We have similar requirements with Virtual Production, of course.

My point is that just because something seems impractical in one domain does not mean it is impractical in another, where it might very well be required.

Likewise with the spectral data for the camera: I don't think it would be wise to say right off the bat that we cannot use spectral data. It might prove extremely useful, and we should not impose any limits on ourselves, especially at such an early stage. We might actually find a slow but effective solution with spectral data; the problem could then turn into fitting a simpler model. Everything is on the table at this early stage, and limits should not be imposed until the very last moment.

Cheers,

Thomas

Thanks Thomas,

These are all very valid points, and this is exactly the discussion we should have at the beginning of this journey.
I forgot that we have had a similar one already :-).

I think we should keep invertibility as a design goal; later we can decide, per use case, whether we want the full inverse transform or a slightly less accurate one, to gain robustness. This is also a useful constraint to strive for simpler models :-).

About spectral rendering: I agree that if you have the spectral response of the camera and use the same for the CG renderer, things fall into place better.

About the AR use case: would it not be wise to unbuild the video data with the model that was used to create it in the first place? If you use forward transform A and inverse transform B, strange things most often happen at the extremes.
Imagine video camera vendor A maps 10.0 linear light to video 1.0, while an analytical inverse ACES tone mapping maps video 1.0 back to 200.0; this will cause problems. But I guess we can assume neither that we get vendor A's forward tone mapping nor that it would be invertible.
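
To illustrate that mismatch with made-up curves (neither is a real vendor or ACES transform), reusing the piecewise-linear approach from earlier in the thread:

```python
import numpy as np

# Vendor A's hypothetical camera curve clips scene 10.0 at video 1.0...
def camera_forward_a(x):
    return np.interp(x, [0.0, 1.0, 10.0], [0.0, 0.8, 1.0])

# ...while the unrelated exact inverse transform B maps video 1.0 to 200.0.
def exact_inverse_b(y):
    return np.interp(y, [0.0, 0.8, 0.95, 1.0], [0.0, 1.0, 10.0, 200.0])

video = camera_forward_a(10.0)  # 1.0
scene = exact_inverse_b(video)  # 200.0: a 20x error in the highlights
```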

Again, having the ability to fine-tune the inverse (to adapt to use cases) is a good goal.

About using inverses to skip a process (like using custom Display Rendering Transforms):
I think this is still very dangerous, because it will produce a heavily quantised result. We should not encourage people to do this; our kids who take the digital film archives out in 50 years will thank us.
We should rather allow swapping building blocks if we want to customise things.

Thanks
Daniele

Absolutely, and I wish I did not have to do it, for the exact reasons you describe. The reality is that more often than not you have to deal with what you are given by the manufacturer, e.g. Apple :-) It took years to get them to expose camera intrinsics, one baby step at a time!

Cheers,

Thomas

Very interesting, @daniele! I hadn't thought about the additional challenges that scene-referred data brings with it (including the inability to determine perceptual attributes), and I like the approach of thinking about trust in this context (it also makes me think of metamer set volumes in other contexts, which could likewise be framed in "trust" terms). I also have a question: on page 3, where you talk about gamut mapping for scene-referred data, and given all of the constraints you set out, in what space would you be looking to preserve energy, detail and monotonicity?

Hi,
I think it strongly depends on the approach of the algorithm.
I have the feeling that a linear LMS-ish space could work, but ideally the gamut mapping algorithm could work in a number of spaces.
At this point in the pipeline we are only bounded in one direction: positive numbers greater than 1.0 are fine, so we really only need to deal with negatives.
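
As a toy illustration of what "only dealing with negatives" could mean (this is my own desaturation-style sketch, not a concrete proposal, and it assumes the achromatic value of each pixel is positive):

```python
import numpy as np

def compress_negatives(rgb, weights=(1 / 3, 1 / 3, 1 / 3)):
    """Blend each pixel toward its achromatic value just enough to bring
    negative components up to zero; in-gamut pixels are left untouched."""
    rgb = np.asarray(rgb, dtype=float)
    neutral = rgb @ np.asarray(weights)  # value on the achromatic axis
    lo = rgb.min(axis=-1)
    denom = np.where(neutral - lo > 0, neutral - lo, 1.0)
    # Smallest t in [0, 1] such that (1 - t) * lo + t * neutral == 0:
    t = np.where(lo < 0, -lo / denom, 0.0)
    return rgb * (1.0 - t[..., None]) + neutral[..., None] * t[..., None]

print(compress_negatives([-0.1, 0.5, 0.6]))  # all components now >= 0
```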

Daniele

I think it is a plus if a scene-referred space aligns with perceptual attributes, but this might become less useful as we approach areas further away from the spectral locus.

If ACES were based on the 2006 CMFs, we could construct elegant spaces, as my colleague Richard Kirk proposed at CIC27. Here is a link to his paper:
http://private.filmlight.ltd.uk/9401280911288/Kirk19.pdf

But with the 1931 CMFs this does not work as nicely, because of the (probably wrong) curvature in the blues/cyans.
