That is great! I would be keen to participate in this one (especially having finally started to look at that in Colour); I hope the chosen time will be compatible.
Hi Thomas! We’re working on meeting time proposals now - would love to have you participate, so will reach out soon! I’m considering rotating meeting times to accommodate time zones. Open to suggestions to make this work, as well.
Definitely interested in this. When I’ve experimented with ADX, I’ve had gamut issues going to and from ACEScg, so I’ve found it useful to nudge the primaries around so that one gamut becomes a strict subset of the other, avoiding issues in the round trip.
Looks like fun! I’ll look forward to the discussions around this.
Looking forward to participating also.
Very interested in this one! We are currently experimenting with different custom gamut mapping and compression methods, and it would be great to find good solutions. Too much out-of-gamut footage is generated in commercial VFX for my taste.
I think gamut mapping and compression are an essential part of a successful image processing pipeline.
It is needed in different contexts at different points in the processing stack:
Cameras can produce out-of-gamut colours because they do not meet the Luther condition and therefore cannot be brought into a human-centric pipeline without errors and compromises.
But even if a camera produced only valid signals, creative processes like VFX and grading might reintroduce invalid values. So further gamut compression might be needed to prepare scene-referred data for display rendering, very close to the end of the scene-referred stage (perhaps within an LMT).
Here we are still in the scene-referred image state, and gamut compression in the scene-referred domain might require a different design from display-referred gamut compression.
It needs to be:
- simple (non-iterative)
- quick to compute
- slightly parameterised
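Purely to make those requirements concrete, here is a minimal sketch of what a simple, non-iterative, slightly parameterised compressor could look like. The distance-from-achromatic formulation, the Reinhard-style curve, and the `threshold` parameter are my own assumptions for illustration, not a design from the group:

```python
import numpy as np

def reinhard_compress(d, threshold=0.8):
    """Compress distances in [threshold, inf) smoothly into [threshold, 1).
    Distances below the threshold pass through unchanged."""
    d = np.asarray(d, dtype=float)
    t = (d - threshold) / (1.0 - threshold)
    compressed = threshold + (1.0 - threshold) * t / (1.0 + t)
    return np.where(d < threshold, d, compressed)

def compress_rgb(rgb, threshold=0.8):
    """Per-pixel, closed-form compression: measure each channel's distance
    from the achromatic axis (ach = max of the channels), compress that
    distance, and reconstruct the channel."""
    rgb = np.asarray(rgb, dtype=float)
    ach = np.max(rgb, axis=-1, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        d = np.where(ach == 0.0, 0.0, (ach - rgb) / np.abs(ach))
    return ach - reinhard_compress(d, threshold) * ach
```

Nothing here iterates or searches; each pixel is a handful of arithmetic operations, so it is cheap to compute, and `threshold` is the single creative parameter.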
After display rendering, we might need further gamut compression to fit the final image into a given display gamut. However, if the scene-referred data is already sensible, gamut mapping on the display side is not that big a problem in our experience.
I am happy to join the group if this might help.
I’m interested. I can see two areas that overlap with this, both of which had been mentioned in the past as candidates for VWGs: more colorimetrically accurate IDTs, and a new rendering transform that didn’t start by hard-clipping to AP1.
If we don’t change the latter, for example, we both set a harder goal for ourselves and have to decide what to do with colours that could be captured (yes, I know: highly unlikely, Pointer’s gamut, etc.) or that come out of a CG renderer or a colour correction system: colours inside the spectral locus but outside AP1.
I presume it’s up to Carol to set the scope of her investigation, but you would be the one who knew what parallel investigations were standing in the wings.
I would also love to be involved. Gamut mapping solutions currently deployed outside of ACES have their own individual issues, and I would love to voice my concerns from a colourist’s perspective. The mapping of OOG colours can be very subjective, but I think we can find consensus on an approach that is collective and subjective, rather than one based on scientific comparisons such as difference deltas to the OOG values. I believe there are simple solutions available that take the OOG values into account to display a pleasing representation in a smaller gamut/dynamic range. Proprietary solutions have existed for some time, but I think a standardised, open approach could yield even better and more flexible adaptations we could all use…
While I can only provide the point of view of a colorist, I would also like to help.
Interested. Gamut mapping can be frustrating moving to an ACES pipeline.
Great! I’d like to help any way I can. I’m a plugin/tools developer working with vision science researchers on tone mapping and gamut mapping; lately I’ve come up against clipped colours using ACES, and I would love to be part of this investigation, both to learn more and hopefully to contribute back with ideas or any testing or development that’s needed.
Not sure if this is the right place to start this discussion… I have a suggestion for scoring the results of the gamut mapping from the encoding space to AP0 or AP1. What I took from the meeting is that there is currently no metric for doing so, and that we cannot rely on the colour appearance models that display-to-display gamut mapping relies on. So what if we manually graded the shots, but only from the encoding space to AP1, just to bring the clipped colours to where we agree they look good before going through the RRT/ODT, and used that as our ground truth? We could then apply a display-referred metric as an automatic comparison against the gamut mapping + RRT/ODT. Maybe the shots could be balanced a little as well… I’m suggesting this also because it’s hard to know what the correct colour should be, based on what was said about the fuzziness/inaccuracy of those out-of-gamut LED colours, so it seems we might have to optimise from people’s preference anyway. I might be off the mark here, but I wanted to start this discussion.
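A minimal sketch of what such a display-referred comparison could look like, using plain CIE76 ΔE in Lab as a placeholder metric and assuming the display-referred output is linear sRGB/Rec.709 under D65. All of this is illustrative; the actual metric and viewing conditions would be the group’s choice:

```python
import numpy as np

# Linear sRGB (D65) to XYZ matrix and the D65 reference white.
M_SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                          [0.2126, 0.7152, 0.0722],
                          [0.0193, 0.1192, 0.9505]])
WHITE_D65 = np.array([0.95047, 1.0, 1.08883])

def srgb_to_lab(rgb):
    """Linear display-referred sRGB -> CIE Lab (D65)."""
    xyz = np.asarray(rgb, dtype=float) @ M_SRGB_TO_XYZ.T / WHITE_D65
    # Standard CIE Lab forward function with its linear toe.
    f = np.where(xyz > (6 / 29) ** 3,
                 np.cbrt(xyz),
                 xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def mean_delta_e76(reference, candidate):
    """Mean CIE76 colour difference between two linear sRGB images,
    e.g. the graded ground truth vs. gamut mapping + RRT/ODT output."""
    diff = srgb_to_lab(reference) - srgb_to_lab(candidate)
    return float(np.mean(np.linalg.norm(diff, axis=-1)))
```

A lower mean ΔE against the manually graded ground truth would then score one candidate mapping above another automatically.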
I would be happy to contribute. There are some GMAs I have implemented in the past ( https://www.researchgate.net/publication/281039764_Gamut_Mapping_for_Digital_Cinema ). In retrospect, however, the key finding of this paper for me was that the space in which gamut mapping is performed matters more than the individual gamut mapping algorithm.
Hi Jan and Welcome!
I haven’t read your papers yet (but will), but quite coincidentally I was suggesting yesterday during the Working Group meeting that we might want to look at 3D LUTs (especially with CLF around) to represent complex models that are impractical to run in realtime. Another advantage of that approach is that, in the case of camera “gamut” mapping, it could be done by the vendor themselves without disclosing too much IP in the process. The Working Group’s mission is to solve that particular problem, but I reckon nothing should prevent a vendor from proposing their own mapping, in which case 3D LUTs would be a good candidate to represent the model.
I think 3D LUTs can only be used if source and destination are known (like in Jan’s paper).
With scene-referred data coming from all sorts of cameras and modified by all sorts of image processes, I would prefer an unbounded gamut mapping algorithm.
Not to mention invertibility…
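To illustrate why an analytic curve is attractive on both counts: unlike a sampled 3D LUT, a simple rational compression accepts arbitrarily large input distances and has an exact closed-form inverse. A minimal sketch (the threshold value is an arbitrary assumption for the example):

```python
import numpy as np

THRESHOLD = 0.8  # distances below this pass through unchanged

def compress(d, threshold=THRESHOLD):
    """Map [threshold, inf) smoothly into [threshold, 1); identity below."""
    t = (np.asarray(d, dtype=float) - threshold) / (1.0 - threshold)
    return np.where(t <= 0.0, d,
                    threshold + (1.0 - threshold) * t / (1.0 + t))

def uncompress(cd, threshold=THRESHOLD):
    """Exact closed-form inverse of `compress` for cd in [threshold, 1)."""
    u = (np.asarray(cd, dtype=float) - threshold) / (1.0 - threshold)
    return np.where(u <= 0.0, cd,
                    threshold + (1.0 - threshold) * u / (1.0 - u))
```

Any input distance, however extreme, round-trips exactly through `uncompress(compress(d))`; a LUT would clamp outside its sampled domain and its inverse would need a search or a second fitted LUT.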
I guess it depends where you are in the chain; in the context of cameras/IDTs, the source and destination are known, i.e. the current camera sensitivities and the Standard Observer.
I’d love to participate in the group as well. This VWG just slipped off my radar, but I’m happy to contribute.
It would be cool to have you in the group; lots of smart talk. As a humble colorist, I mostly just listen. If you want to catch up, you can view the recordings of the meetings and check the papers on the Dropbox.
Thanks Fabián, I appreciate your warm welcome.