Hello all,
I’m getting more and more comfortable with color management but a topic I have not strayed into at all is HDR. However, I’m currently working on a project as a VFX artist where the DI is delivering P3-D65 ST2084 HDR 1000 nits with 15 nits as the mid-gray level. They are working in Resolve with the aforementioned settings as the Output Transform. I am doing CG and compositing work and delivering to them in ACES AP0 with reference gamut compression applied. However, I am working only with a standard SDR sRGB monitor.
I was formerly under the impression that it is best to have a unified output transform across a project, but I am unable to match their 1000 nit HDR P3-D65 ST2084 view while working on my display.
I’m wondering a few things, then:
How big of a deal is this? Is it fine if I use the ACES output transform matching my monitor despite it not being the same way that they are viewing the images while finishing?
This is somewhat a continuation of the previous question, but in general, is the idea of a "unified ODT across a project" correct, or should you instead opt for the ODT matching your monitor spec? For example, if three people on a project had different monitors (a standard sRGB monitor, a BT.1886 Rec.709 monitor, and a Display P3 monitor), is it best for them all to use a matching ODT (which in this case I would imagine would be the sRGB or Rec.709 variant), or should they each use the ODT matching their own monitor?
You should always choose the ODT that matches your particular display.
If you work on an sRGB display, you choose the sRGB ODT for viewing, then render the result without any ODT, as Linear AP0 EXRs.
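A minimal sketch of that split (view transform applied to the monitor preview only, deliverable left untouched). A bare sRGB encode stands in here for the real ACES sRGB Output Transform, which also tone-maps and handles gamut; the point is only that the encode reaches the viewer, never the delivered EXR:

```python
def srgb_encode(v):
    """Piecewise sRGB display encoding (stand-in for a full ODT)."""
    v = min(max(v, 0.0), 1.0)  # clamp to the display's range
    if v <= 0.0031308:
        return 12.92 * v
    return 1.055 * v ** (1 / 2.4) - 0.055

scene_linear = [0.0, 0.18, 0.5, 1.0]   # working pixels, scene-linear

preview = [srgb_encode(v) for v in scene_linear]  # what you see on the monitor
deliverable = scene_linear                         # what you hand to DI: untouched linear

print(round(preview[1], 3))      # 18% grey previews at roughly 0.461 on the display
print(deliverable[1])            # but the delivered value stays 0.18
```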
If a colorist works on a 1000 nit HDR P3-D65 ST2084 display, they choose the 1000 nit HDR P3-D65 ST2084 ODT. They then render according to the target audience's viewing conditions and display device, usually with the Rec.709 ODT for SDR TV (let's set aside for a moment that most consumer TVs default to 2.2 gamma instead of 2.4).
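For intuition about what that ST2084 target means, here is the published SMPTE ST 2084 (PQ) encode, built from the constants in the spec. It shows roughly where the project's 1000-nit peak and 15-nit mid-grey land as PQ signal; the values are illustrative, not a delivery recipe:

```python
# SMPTE ST 2084 (PQ) constants, as published in the standard.
m1 = 2610 / 16384
m2 = 2523 / 4096 * 128
c1 = 3424 / 4096
c2 = 2413 / 4096 * 32
c3 = 2392 / 4096 * 32

def pq_encode(nits):
    """Absolute luminance in cd/m^2 -> non-linear PQ signal in [0, 1]."""
    y = (nits / 10000.0) ** m1
    return ((c1 + c2 * y) / (1 + c3 * y)) ** m2

print(round(pq_encode(1000), 3))  # ~0.752: the 1000-nit peak
print(round(pq_encode(15), 2))    # ~0.33: mid-grey at 15 nits
```

Note how a 15-nit mid-grey already sits around a third of the signal range; PQ spends most of its code values on the low end, which is why it behaves so differently from an SDR gamma.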
If it is, for example, a Graded Archive Master (GAM) for Netflix, then they also render it without an ODT, in Linear AP0.
If the target is theatrical projection, a colorist chooses a DCI-P3 ODT (there are different white points, but let's not focus on that now) and renders. The DCP is then created from that rendered file or sequence, with the DCP-creation software applying the formulas to convert to XYZ space. There is a DCDM ODT for this, but it gives less controlled results, so this is one case where you don't pick the ODT whose name (DCDM) would seem the most suitable.
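The XYZ conversion step the DCP software performs ends in a 12-bit encode. As a sketch of just that last step, here is the code-value formula from the DCI specification (applied per channel, with peak white at 48 cd/m^2):

```python
def dci_code_value(nits):
    """Absolute luminance (cd/m^2) -> 12-bit X'Y'Z' code value, per DCI:
    CV = INT(4095 * (L / 52.37) ** (1 / 2.6))."""
    return int(4095 * (nits / 52.37) ** (1 / 2.6))

print(dci_code_value(48))   # 3960: the familiar DCI peak-white code value
print(dci_code_value(0))    # 0
```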
Regarding the use of an SDR display for VFX work: I'm not a CG or VFX artist (3D CG is just a hobby of mine), but I'd guess it's totally fine as long as you periodically check your work by lowering the exposure slider, placed before the ODT but after all your VFX operations (for example, the built-in exposure slider in the Nuke viewport). That way you are able to see the extra highlights on your SDR display. Incidentally, sweeping the exposure up and down from time to time is something you should do anyway, even for SDR delivery, because even for SDR a colorist should have the option to lower the exposure on their end. So it's your job to make sure your work holds up across the full dynamic range of the source material.
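The "exposure slider before the ODT" check can be sketched like this, assuming the usual convention of exposure in stops (multiply scene-linear values by 2**stops) and again using a bare sRGB encode as a stand-in for the real Output Transform:

```python
def srgb_encode(v):
    """Simplified sRGB display encoding (stand-in for a full ODT)."""
    v = min(max(v, 0.0), 1.0)  # the display clips everything above 1.0
    return 12.92 * v if v <= 0.0031308 else 1.055 * v ** (1 / 2.4) - 0.055

def view(pixel, stops=0.0):
    """Scene-linear value -> what the SDR monitor shows at a given exposure."""
    return srgb_encode(pixel * 2.0 ** stops)

highlight = 4.0                        # a value well above SDR white

print(view(highlight))                 # 1.0: clipped, highlight detail invisible
print(round(view(highlight, -3.0), 3)) # 4 * 2**-3 = 0.5 linear, now visible on screen
```

Any detail you have painted or rendered into that highlight region only becomes inspectable once the exposure is pulled down, which is exactly why the slider sits before the display transform.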
The same is possible in Blender, and I’m sure in Maya too.
If you really want to do everything the right way, you may also want to use wide-gamut textures and lights, but that's quite complicated, so it may not be worth it for the project you have.
Still, this approach produces less of a mismatch between pictures than using the same ODT for both SDR and HDR displays, which is the alternative mentioned in the first post. It is also how the current system is intended to be used. I know that you know this better than me; I just want to clarify for the original poster that, despite the drawbacks of ACES 1.x (and maybe 2.x, judging by the majority of your posts, all of which I really appreciate), the use of the ACES framework I described is correct.