Pipeline Architecture

I would like to start a new thread where we discuss the envisioned pipeline. The VWG proposal says

  • Proposing a suitable color encoding space for digital motion-picture cameras.
  • Proposing a suitable working color space.
  • Proposing a suitable gamut mapping/compression algorithm that performs well with wide gamut, high dynamic range, scene-referred content and that is robust and invertible.

This describes a two-step process, right? First, the IDT transforms the camera signal into the encoding color space (ECS). Then, the algorithm(s) developed by this group gracefully transform the data from the ECS into the WCS (working color space).
This would make it a requirement that the IDT does not produce signal values outside of the ECS. Hence the manufacturer may already apply some gamut mapping/compression to fit the non-colorimetric response from the sensor into the ECS.
Different camera systems (= sensor + IDT) may fill different parts of the ECS. It may therefore be helpful to come up with metadata that describes the actual gamut, so that a gamut-mapping algorithm knows it does not need to compress the whole ECS into the WCS. Note that the actual gamut may not be a convex shape in the chromaticity plane.
Or we give up the idea of a bounded ECS (i.e. we do not demand that the IDT produces positive numbers only, as it is today) and rely on the metadata to set the parameters for an algorithm that can map any gamut into the WCS (which may be wishful thinking, though).
The second approach is more compatible with existing ACES. The camera manufacturer would need to add the metadata that describes the actual gamut (I think that all existing camera systems produce signals outside of AP0).
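To make the metadata idea a bit more concrete, here is a rough sketch (the descriptor format is purely hypothetical, not a proposal for an actual schema). It scans ACES2065-1 / AP0 pixel data for components that fall outside AP0 (these show up as negative values) and derives a crude per-channel description of the actual gamut that an IDT or analysis tool could attach for the downstream gamut-mapping algorithm:

    import numpy as np

    def gamut_descriptor(aces_rgb):
        """aces_rgb: (N, 3) array of scene-referred ACES2065-1 (AP0) values."""
        out_of_ap0 = np.any(aces_rgb < 0.0, axis=-1)          # negative component => outside AP0
        return {
            "fraction_outside_AP0": float(np.mean(out_of_ap0)),
            "per_channel_min": aces_rgb.min(axis=0).tolist(),  # how far below zero the data goes
            "per_channel_max": aces_rgb.max(axis=0).tolist(),
        }

    # Example: the second pixel has a negative blue component, i.e. it sits outside AP0.
    print(gamut_descriptor(np.array([[0.18, 0.18, 0.18], [0.90, 0.05, -0.02]])))

Per-channel bounds are of course a very coarse description; since the actual gamut may not be convex, a real descriptor would probably need something richer, such as a hull of the chromaticities.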

Summary

A) Camera systems are required to produce signals within a bounded ECS and the gamut-mapping algorithm maps the ECS into the WCS. Additional metadata may indicate that not the whole ECS is used.

Pros:

  • Ensures that only positive numbers are used in the encoding.
  • Gamut-mapping algorithm can be designed for a maximum source gamut.

Cons:

  • Difficult to combine with existing ACES files.

B) Camera systems encode in a common space, but actual color values may fall outside of it. Additional metadata is required to describe the actual gamut. The gamut-mapping algorithm is flexible enough to handle a wide range of source gamuts.

Pros:

  • Can handle existing ACES files when the gamut descriptor metadata is generated.

Cons:

  • The universal fit-everything-into-the-WCS algorithm may be hard to design. :frowning_face:

Hi Harald,

If a manufacturer goes through the pain of ensuring that the camera values fit into the ECS, why not take the next step and ensure that they fit gracefully within the given Observer?

Cheers,

Thomas

Hi Thomas,

I’m not sure what you mean by fitting within the given Observer. Do you mean having all signals within the spectral locus, i.e. the horseshoe shape instead of the enclosing triangle (i.e. ACES AP0)?

Harald

Hi,

As the minimum requirement, but ultimately into the WCS, e.g. ACEScg/BT.2020, given that no display is going to be wider than that for the foreseeable future and, likewise, R, G, B colour reproduction is here to stay.

And, if electing to use the spectral locus, I would think that bringing in outliers sitting between the spectral locus and the BT.2020 boundary is a much simpler job than bringing them all the way in from the camera sensitivity boundaries, especially if we do not have to abide by energy-conservation concerns and only need the results to appear plausible. I would also think that metamerism-induced differences among human observers soften our requirements, and that if we end up with “plausible” and “looking good” we will be fine.
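To illustrate what “plausible rather than energy conserving” could look like, here is a minimal sketch (not an existing ACES transform): pixels already expressed in the working space, e.g. ACEScg, that have one or more negative channels are pulled toward a per-pixel achromatic value just far enough to reach the gamut boundary. The mean is used as the achromatic anchor purely for brevity; a real GMA would want a proper luminance weighting and a smoother compression curve.

    import numpy as np

    def compress_into_working_gamut(rgb):
        """rgb: (N, 3) working-space values; returns values with no negative channels."""
        rgb = np.asarray(rgb, dtype=float)
        achromatic = rgb.mean(axis=-1, keepdims=True)                  # crude achromatic anchor
        # Smallest t in [0, 1] such that achromatic + t * (rgb - achromatic) >= 0.
        with np.errstate(divide="ignore", invalid="ignore"):
            t = np.where(rgb < 0.0, achromatic / (achromatic - rgb), 1.0)
        t = np.clip(np.nanmin(t, axis=-1, keepdims=True), 0.0, 1.0)    # worst channel wins
        return achromatic + t * (rgb - achromatic)

    # Example: an out-of-gamut cyan is desaturated just onto the gamut boundary.
    print(compress_into_working_gamut([[-0.05, 0.60, 0.70]]))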

Cheers,

Thomas

Then it would be a one-step process: the camera system maps everything into the WCS. This would combine the IDT (Input Device Transform) with the gamut-mapping algorithm (GMA).

Let me compare the options again, adding this one as @) before A) and B) from the initial post; the sketch after the list makes the structural difference explicit.

@) Camera systems map everything into the WCS. IDT and GMA are combined.
A) Camera systems are required to produce signals within a bounded ECS and the GMA maps the ECS into the WCS.
B) Camera systems encode in an ECS but some color values may be outside. The actual gamut is described by additional metadata, which is used to control the GMA.
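A purely illustrative sketch with identity placeholders (none of these functions correspond to existing ACES transforms):

    def idt_bounded(raw):       return raw                   # hypothetical IDT guaranteeing values inside the ECS
    def idt_unbounded(raw):     return raw, {"gamut": None}  # hypothetical IDT that also emits an actual-gamut descriptor
    def idt_plus_gma(raw):      return raw                   # hypothetical combined IDT/GMA
    def gma_fixed(ecs):         return ecs                   # GMA designed for one known maximum source gamut
    def gma_flexible(ecs, md):  return ecs                   # GMA parameterised by the gamut metadata

    def pipeline_at(raw):                                    # @) one step: the camera system lands directly in the WCS
        return idt_plus_gma(raw)

    def pipeline_a(raw):                                     # A) bounded ECS, then a GMA for that fixed source gamut
        return gma_fixed(idt_bounded(raw))

    def pipeline_b(raw):                                     # B) unbounded encoding plus a metadata-driven GMA
        ecs, gamut_metadata = idt_unbounded(raw)
        return gma_flexible(ecs, gamut_metadata)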
