Gamut Mapping in Cylindrical and Conic Spaces

But can you have a physically plausible solution when no model describes paths or space distortion outside the spectral locus, i.e. in the non-physical realm?

For example, what paths should the values in that blue lump be taking?

Cheers,

Thomas

That’s a very exciting proposition, I’d say? At least worth spending a sliver of time on? At the cursory evidence level, I’d say there is enough compelling evidence to suggest that it is at least plausible!

I think that’s pretty clear, no? More than enough research on that front with a very direct line to the proper path for the S cone response, and a good enough entry point to seek a solution for the L cone.

Sorry, no, it is not clear at all! It is the realm of extrapolation/prediction, outside the known human dataset: we don’t have any data for what is outside the spectral locus, and for good reason, we cannot see it! If you are asking me what the values should do there, well, I don’t know :slight_smile:

Except we are already carrying forward via extrapolation as it is, using what we believe is reasonable, and it doesn’t seem to be stopping anyone.

With that said, I’d suggest that there is at least equally compelling evidence about what should happen within the spectral locus, which gives us a much better indication of a decent path forward. (I.e. not the Abney approach.)

There’s no difference between extrapolating that theory and using it to push values into the spectral locus, versus the current body of equally reasonable assumptions. It would be an equivalent construct, broken at the very worst.

But without any assumptions on the underlying space. Starting to make assumptions is what is dangerous here, especially if we stray away from any linear transformation.

Again, too late. There’s an implicit assumption of fitness via Abney desaturation.

I don’t believe I have advocated for anything nonlinear here, but rather the opposite.

  1. All XYZ manipulations are implicitly nonlinear.
  2. The pretext that Abney-based desaturation is any more acceptable than any other is already an assumption.

With respect to perceptual attributes, yes; with respect to XYZ itself, the transformation dictates whether it is linear or not, and likewise for any other space. One point raised very early in the first Working Group meetings is that we cannot really use perceptual attributes for scene-referred transformations, because we don’t have a model for that.

Full agreement. They also appear to be rather brittle.

The question is ultimately whether or not it is plausible to work closer to energy conservation and use an alternative means of desaturation other than an Abney-based one.

Hi,

I added the hue clumping controls to the notebook, BUT, because nothing is simple, the above equation only really works at full saturation, which makes things a bit trickier. Will continue looking some more.

Cheers,

Thomas

Hi

For the record, and for easier access, @nick asked on Slack:

nick_shaw 10:20 PM

Can I check. Is this experiment in order to prove that the two methods are equivalent if you add in the offset from your function / LUT?

And is your conclusion from it that we need to continue evaluating both methods, or that because they may be equivalent that we can concentrate future development on only one of them?

It would certainly simplify the task if we could say that “method X is what we are pursuing, and we just need to find the optimal compression curve and parameters.”

To which I answered (reformatted):

thomas 3:37 AM

Yeah, it is an experiment to see if the two methods can be roughly equivalent in terms of their output. The only advantage of the HSV path is that it gives control over hue, but besides that, nothing.

The RGB approach is better, more elegant, no contest. I would move forward as if it were the only model retained.

I will spend some more time on the HSV approach though, because:

  • It allowed me to fix some issues I could not with the RGB model
  • I’m curious
  • I’m personally settled on the curve; I explained a few times why I prefer tanh, mostly because of better preservation of saturation, better continuity, and fewer artefacts, which we should avoid introducing. We need to be as gentle as possible with the imagery, and none of the other curves are.

So basically, the TLDR is that I think the group should focus on the RGB model and I will do some more exploration on the HSV one.

Cheers,

Thomas


Hi,

I have been poking some more at the HSV Clumping thing, which is turning into a fascinating and great exercise, so here is a graph showing the required offsets at different levels of saturation:

The underlying curve varies with the compression threshold: it goes from an almost pure parametric sine wave when the threshold is at zero to a quadratic form as the threshold moves away from zero.

I will continue zipping down the rabbit hole :slight_smile:

Cheers,

Thomas


Hi,

On the way down, I realised I needed a smarter logistic function for fitting purposes and remembered John Hable’s Piece-Wise Power Curves.

Here is a Desmos (+ Python) implementation: https://www.desmos.com/calculator/12vlon6rpu
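For the curious, here is a simplified sketch in the spirit of those piecewise power curves. It is not Hable’s full toe/linear/shoulder formulation, and the pivot and exponent parameters are illustrative: two power segments are joined with matching value and slope at a pivot point.

```python
def piecewise_power_scurve(x, x0=0.4, y0=0.3, p=2.0):
    """Simplified piecewise power s-curve through the pivot (x0, y0) on
    [0, 1].

    NOTE: a hedged, simplified sketch inspired by Hable's Piecewise
    Power Curves, not his full formulation. The toe is
    y = y0 * (x / x0)^p, and the shoulder exponent q is solved so both
    power segments meet with matching slope (C1) at the pivot.
    """
    # Slope at the pivot from the toe segment:
    m = p * y0 / x0
    # Shoulder: y = 1 - (1 - y0) * ((1 - x) / (1 - x0))^q, whose slope
    # at the pivot is q * (1 - y0) / (1 - x0); match it to m:
    q = m * (1.0 - x0) / (1.0 - y0)
    if x <= x0:
        return y0 * (x / x0) ** p
    return 1.0 - (1.0 - y0) * ((1.0 - x) / (1.0 - x0)) ** q
```

The curve maps 0 to 0 and 1 to 1 and is monotonic for sensible parameters, which makes it convenient as a fitting target.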

Cheers,

Thomas


Nice. I hadn’t actually seen that before. I know I’m going to end up writing a DCTL version of that as an elegant parameterised s-curve.

One comment. The Desmos version and the Colab don’t seem to match. The Desmos version passes through the control points, whereas the Colab does not unless you add a power function with an exponent of 0.5 to the curve.

Edit: I just realised that the g parameter in the curve function is gamma, so changing that to 1 instead of 2 matches the initial state of the Desmos version.

Here’s the interactive version of Thomas’ Colab version for those interested, with sliders. I’m sure the refresh can be improved, but I took the shortest path between two points.


The Desmos version is interactive, with the control points draggable interactively in two dimensions.

I’ve been meaning to implement it for almost 3 years: Implement support for "Piece-wise Power Curves" tonemapping operator. · Issue #2 · colour-science/colour-hdri · GitHub, but never had a chance or a true need until now.

I forgot to reset the parameters in the Colab notebook, should be fixed!

I have continued the fitting work, slowly, because I’m slammed and get distracted easily. I was poking at some plots of scaled AP1/ACEScg to get a better grasp of their associated RGB values.

The RGB values of ACEScg x32 primaries are as follows:

[[-21.95870677   9.58788939   9.58788939]
 [ -0.65048598   1.79800924  -0.65048598]
 [  1.15541233   1.15541233  -1.73923715]]

A mere -22 for the largest negative value. Even ignoring the various possible maximum representable values, I would be keen to find the largest meaningful values in all the imagery we have at our disposal. I have some HDRIs reaching -10 in areas with chromatic aberration; accounting for dead pixels, below -200, but those are cases we should probably ignore.
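As a trivial helper for that kind of survey, something like the following (a hypothetical sketch, not from any of the thread’s notebooks) will locate the most negative component in a float image:

```python
import numpy as np

def most_negative_pixel(image):
    """Return ((x, y), rgb) for the pixel holding the most negative
    component in an (H, W, 3) float image.

    NOTE: illustrative helper; `image` is assumed to be a NumPy array
    of linear scene-referred RGB values.
    """
    # argmin over the flattened array, unravelled back to (row, col, channel):
    y, x, _ = np.unravel_index(np.argmin(image), image.shape)
    return (x, y), image[y, x]
```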

Cheers,

Thomas


Hi,

I went through a few HDRIs and found nothing incredibly high. I shot this one a bit earlier with my Star Analyser, about -20 in the purple area:

It varies with exposure obviously :slight_smile:

The image is here, but the link is subject to change because of a duplicate name.

Cheers,

Thomas

Cool image! You just got me interested in spectral analysis of stars :smiley:
Interestingly, the pixel with the largest negative value (rgb=8.76902, -19.4676, 106.88 at x=2554, y=2051) is not the pixel with the largest distance. The distance for that pixel is rgb=0.9719, 1.1821, 0.0000, and the maximum RGB distances for this image are rgb=1.313, 1.234, 1.183 with no shadow rolloff, and rgb=1.032, 1.18, 1.067 with a shadow rolloff of 0.1.

I think this image might be somewhat in the realm of “synthetic” since it is an HDRI stitch as well. I would be curious to see how the source camera raw files compare.

I spent a bit of time gathering some distance data for a representative selection of the source imagery we have so far. These distance values are calculated with a shadow rolloff value of 0.1 to reduce meaningless distance values, and if a max distance is less than 1, the value is set to 1.
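For readers wanting to reproduce this kind of survey, here is a hedged sketch of such a distance metric (not the exact implementation used for the data above): distance is (achromatic - component) / achromatic with achromatic = max(R, G, B), and the denominator is clamped below by the shadow rolloff so near-black pixels do not produce huge, meaningless distances.

```python
import numpy as np

def rgb_distance(rgb, shadow_rolloff=0.1):
    """Per-channel distance from the achromatic axis for (..., 3) RGB.

    NOTE: illustrative sketch. The simple clamp used here for the
    shadow rolloff is an assumption; the actual implementation may use
    a smoother rolloff.
    """
    rgb = np.asarray(rgb, dtype=float)
    ach = np.max(rgb, axis=-1, keepdims=True)
    # Clamp the achromatic value so ach -> 0 cannot blow up the ratio:
    ach_safe = np.maximum(ach, shadow_rolloff)
    return (ach - rgb) / ach_safe

def max_distances(image, shadow_rolloff=0.1):
    """Per-image maximum RGB distances, floored at 1 as described above."""
    d = rgb_distance(image.reshape(-1, 3), shadow_rolloff)
    return np.maximum(d.max(axis=0), 1.0)
```

With this formulation, a pixel like rgb=8.76902, -19.4676, 106.88 yields a green-channel distance of (106.88 + 19.4676) / 106.88 ≈ 1.1821, consistent with the value quoted above.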

I was going to try to upload this as a QuickTime, but the format is not allowed, so I will dump this as a big vomit of images. Hope that’s okay.



Thanks @jedsmith this is super useful! Everything (as suspected) is in the tiny realm so far which is great news.

Taking a tangent for 5 minutes! If you are really into that, Christian Buil has a fantastic website here: Home. This will make you wish you had a lot of free time, or 3 or 4 lives (and money!)

If you have a free afternoon and some time to kill for a cheap DIY project, here is something cool: VFX Ramblings: A Homemade Spectroscope!

Cheers,

Thomas
