I’ve long wondered if this is possible/feasible/practical, but I’ve not read about any studios doing it, so - I hope it’s not too absurd.
Typically, an ODT like Display P3 produces output that we send to a notionally perfect ‘Display P3’ device… which is actually a physical device with its own physical characteristics and, in the best case, a built-in 3D LUT that allows it to closely emulate Display P3.
That’s great - but displays with 3D LUT capability are expensive, 3D LUT boxes are expensive, and LUT management in either case is a bit unwieldy. The color data is also being double-handled: the image is transformed into Display P3 before being sent to the device, and then another LUT is applied on the device itself to conform Display P3 to the device’s physical properties.
So, with that all in mind…
I can disable all the color management on my physical display and use a colorimeter to measure its primaries and white point, and perhaps characterise its gamma.
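The measurement step above reduces to a small amount of linear algebra: given the measured xy chromaticities of the primaries and the white point, you can build the display’s RGB→XYZ “normalised primary matrix”. A minimal sketch (the primaries and white point below are the standard sRGB/Rec.709 values and D65, standing in for real colorimeter readings):

```python
import numpy as np

def normalised_primary_matrix(primaries_xy, white_xy):
    """primaries_xy: list of (x, y) for R, G, B; white_xy: (x, y).
    Returns the 3x3 RGB -> CIE XYZ matrix for those measurements."""
    # Promote each xy chromaticity to XYZ with Y = 1.
    def xy_to_xyz(x, y):
        return np.array([x / y, 1.0, (1.0 - x - y) / y])

    P = np.column_stack([xy_to_xyz(x, y) for x, y in primaries_xy])
    W = xy_to_xyz(*white_xy)

    # Scale each primary so that R = G = B = 1 reproduces the white point.
    S = np.linalg.solve(P, W)
    return P * S  # broadcasting scales each column by its factor in S

# Stand-in "measurements": sRGB primaries and a D65 white point.
npm = normalised_primary_matrix(
    [(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)], (0.3127, 0.3290)
)
```

With these stand-in values the result matches the well-known sRGB RGB→XYZ matrix, which is a useful sanity check before trusting the same code on real measurements.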
Armed with that information - should it not be possible to create an output transform that’s specific to that individual display?
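At its core, that device-specific transform is only a few lines: invert the measured primary matrix to go from display-referred XYZ to linear device RGB, then invert the panel’s tone response. A hedged sketch, assuming a simple power-law gamma and hard clipping for out-of-gamut values (the matrix and gamma below are the standard sRGB values and a nominal 2.2, standing in for real measurements):

```python
import numpy as np

# Stand-ins for measured data - real values would come from the colorimeter.
NPM = np.array([[0.4124, 0.3576, 0.1805],
                [0.2126, 0.7152, 0.0722],
                [0.0193, 0.1192, 0.9505]])  # device RGB -> CIE XYZ
GAMMA = 2.2                                  # fitted panel response

def xyz_to_device_code_values(xyz):
    """Display-referred CIE XYZ -> code values for this specific panel."""
    rgb_linear = np.linalg.solve(NPM, np.asarray(xyz, dtype=float))
    rgb_linear = np.clip(rgb_linear, 0.0, 1.0)  # naive out-of-gamut handling
    return rgb_linear ** (1.0 / GAMMA)          # invert the panel's power law

# Sanity check: the panel's own white point maps to full-scale code values.
white = NPM @ np.ones(3)
code = xyz_to_device_code_values(white)
```

A real ODT would do something smarter than a hard clip at the gamut boundary, and a single scalar gamma is the crudest possible response model - but this is the skeleton.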
Rather than constraining the output gamut to P3, it would be constrained to the physical limits of the device. The displayed image would then be the device’s most faithful possible rendering of the scene-referred color information - but that might not be very useful in practice, because it’s not an experience that anyone looking at the same image on a different device could confidently expect to share.
(You could make a case for using this kind of system for theatrical projection though)
So - perhaps my objective should be to create a P3 emulation ODT targeted at a device with the precise physical characteristics I was able to measure from my display. If the display’s gamut is a superset of P3, this could in theory produce a great result. The output could be measured and compared to a notionally perfect P3 display, and to my studio’s reference displays - say, an Eizo CG319X.
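The “superset of P3” condition is easy to check from the measurements: each P3 primary must fall inside the device’s chromaticity triangle. A small sketch (using the sRGB-inside-P3 relationship as a known-good test case, since sRGB shares P3’s blue primary and sits entirely within it):

```python
import numpy as np

def in_triangle(p, tri, eps=1e-9):
    """True if xy point p lies inside (or on) the triangle tri (3 points)."""
    p, tri = np.asarray(p), np.asarray(tri)
    crosses = []
    for i in range(3):
        a, b = tri[i], tri[(i + 1) % 3]
        # Sign of the 2D cross product tells us which side of edge a->b we're on.
        crosses.append((b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]))
    crosses = np.array(crosses)
    # Inside if all edge tests agree in sign (works for either winding order).
    return bool(np.all(crosses >= -eps) or np.all(crosses <= eps))

def gamut_contains(outer, inner):
    """True if every primary of `inner` lies within the `outer` triangle."""
    return all(in_triangle(p, outer) for p in inner)

P3   = [(0.680, 0.320), (0.265, 0.690), (0.150, 0.060)]
SRGB = [(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)]
```

Note that containment isn’t guaranteed even for nominally wide-gamut panels - some “wide gamut” displays miss the P3 red primary - so this check is worth running on the actual measured values rather than the panel’s spec sheet.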
Some drawbacks:
- everything displayed on the monitor without the ODT applied will be very incorrect
- no ODT is going to turn an 8-bit + FRC budget display into a reference display - let’s not get carried away
- studio OCIO management would get a bit interesting
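On the OCIO point: in OCIO v2 terms, the per-display transform could plausibly live as a dedicated display colorspace under its own display entry. The fragment below is an illustrative, unvalidated sketch - every name is a placeholder, and the matrix and gamma shown are the standard XYZ→sRGB values and a nominal 2.2 standing in for real measurements:

```yaml
# Hypothetical OCIO v2 fragment - names and values are placeholders
displays:
  MyMeasuredPanel:
    - !<View> {name: P3 Emulation, view_transform: SDR Video, display_colorspace: my_panel_native}

display_colorspaces:
  - !<ColorSpace>
    name: my_panel_native
    from_display_reference: !<GroupTransform>
      children:
        # CIE XYZ -> the panel's measured primaries (placeholder values)
        - !<MatrixTransform> {matrix: [3.2410, -1.5374, -0.4986, 0, -0.9692, 1.8760, 0.0416, 0, 0.0556, -0.2040, 1.0570, 0, 0, 0, 0, 1]}
        # Invert the panel's measured power-law response (placeholder gamma)
        - !<ExponentTransform> {value: [2.2, 2.2, 2.2, 1], direction: inverse}
```

The “interesting” part is that this colorspace is only valid for the one physical panel it was measured from, so the config stops being shareable across workstations.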
Have I misunderstood something crucial?
Has anyone already tried this?
Is there an existing toolchain I should use to try this?
Thanks!