ACES and the ColorChecker - Part One

I use the X-Rite ColorChecker Classic, Photo and Video to color balance and match the exposures of plate photography (from digital cinema cameras or DSLRs) to 360° HDRIs in ACES. A 3D scene is then lit by the HDRI and, if everything went well, the 3D render should match the background plate very closely.

I have finished an introduction to a topic that has puzzled me a lot over the last years: the color patches on the ColorChecker in plate photography and HDRIs. This is Part One. Comments and feedback are welcome. Thanks.

https://www.toodee.de/?page_id=1763


Two things are likely showing up for you.

Some cameras do not produce a purely linear response, which shows when the gray scale does not match even though one patch does. The iPhone is a good example: it pre-corrects the image to make it look good on the iPhone display. The other issue is color: a 3x3 matrix cannot bring in all colors unless you are rendering RGB with the exact same spectral sensitivities the camera had. There will always be some colors that are not quite right. In fact, the neutrals can match while all the colors are ‘off’. The average error can be pretty low, which is more or less the goal with ACES, but perfection would require full 1 nm (or better) spectral rendering, which isn’t practical.
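
To make that concrete, here is a rough numpy sketch (the patch values are made up, not real camera measurements): fit a 3x3 matrix from camera RGB to target RGB by least squares and look at the per-patch residuals - the mean error can be small while individual patches stay off.

```python
import numpy as np

# Made-up values standing in for the 24 ColorChecker patches:
# `target` is the reference linear RGB, `camera` a mildly non-linear
# "camera" response that no single 3x3 matrix can invert exactly.
rng = np.random.default_rng(0)
target = rng.random((24, 3))
A = np.eye(3) + rng.normal(0.0, 0.1, (3, 3))
camera = (target + 0.1 * target**2) @ A

# Least-squares fit of a 3x3 matrix mapping camera RGB -> target RGB.
M, *_ = np.linalg.lstsq(camera, target, rcond=None)

residual = np.abs(camera @ M - target)
print("mean patch error: ", residual.mean())
print("worst patch error:", residual.max())
```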


This is a great topic to see coming up here. I have been working on an automated HDRI sphere-capture robot that uses a 5D Mk II (the hardware and software are all open source if anyone is interested: https://www.thingiverse.com/thing:3841535), and the ACES link with this is something I have been looking forward to getting more clarity about.

A question:
Is it appropriate to try to match a 32 bpc HDRI image to the ACES gamut?
My understanding (admittedly rudimentary) is that an HDRI image is more akin to radiometric data than what we typically think of as an RGB image.

Wouldn’t the goal be to create as close to a linear 32 bpc HDRI file as possible from the bracketed images, use that to light the 3D scene in 32 bpc RGB, and then target the render output to a linear or perhaps an ACES space?
I’m sure I’m missing some important details here, so I am looking forward to learning how wrong I am about this.


ACEScg, BT.2020 or P3 are good RGB colourspaces to encode HDRI data with. 32-bit float is recommended, as you will likely clip your data with 16-bit half, whose maximum value is 65504.
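
The half-float ceiling is easy to verify with numpy, for example:

```python
import numpy as np

print(np.finfo(np.float16).max)  # 65504.0, the largest 16-bit half value
print(np.float16(70000.0))       # inf -- e.g. a bright sun would clip
print(np.float32(70000.0))       # 70000.0 -- fine as 32-bit float
```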


I am certain my ignorance of the details here is holding me back, but I struggle to find places to learn, so I appreciate your help.

Isn’t the HDRI sphere that is used as the emitter in a 3D scene relying on the 32 bpc of data to get not only the color correct, but also the intensity of the emitter?
My (probably incorrect) understanding is that the renderer sees each pixel as a light source in the ray tracer, and for that it needs both the color and the energy. If you go down to 16 bpc, is there enough data for that?
I feel like my understanding is missing a fundamental piece of this puzzle, so hopefully my question isn’t so far off base that it doesn’t make sense. :grin:

It entirely depends on the HDRI: if the maximum value is under 65504, then you are fine with a 16-bit half encoded HDRI. 32-bit float lets you go above that value and brings more precision, at the expense of larger storage space. That being said, I don’t think one could see the difference between the two files.
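
A simple way to make that decision, assuming the HDRI is already loaded as a float32 numpy array (a sketch, not tied to any particular EXR reader):

```python
import numpy as np

def fits_in_half_float(hdri: np.ndarray) -> bool:
    """True if the image can be stored as 16-bit half without clipping."""
    return float(np.abs(hdri).max()) <= np.finfo(np.float16).max  # 65504
```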


Hi Jim,
thanks for your explanation.
I find it very useful and easy to understand.

Hopefully I can finish up the next part of the webpage soon and learn more about this topic.

Best regards

Daniel

Hi @TooDee,

I went through the blog post quickly; something I would suggest is to lay the ColorChecker flat where you are shooting the HDRI. A few reasons for that:

  • The illumination captured by your HDRI is only correct at the single point where the lens entrance pupil was; anywhere else you start to introduce error, thus the ColorChecker needs to be very close.
  • To be useful in this context, the ColorChecker should sample as much of the same illumination as the HDRI. Given that we are mostly interested in the upper hemisphere, it makes sense to lay it flat pointing at the sky, so that there is no preferred angle in the upper hemisphere that would bias the measured values.

Cheers,

Thomas

Hi Thomas,

I’ve been lurking on this thread and it’s been helpful for me establishing my own personal setup.

Your suggestion to lay the color checker flat makes a lot of sense for capturing the dominant diffuse reflection.
Often the habit (at my current VFX company) is to repeat the first or last photo of an HDRI pano for the color checker, but generally it’s just presented to the camera straight on (with grey and steel balls attached).

Do you have any practical suggestions on how to capture the horizontally flat color checker in a timely fashion?

In our current workflow, I would worry that presenting it at an oblique angle to the camera would introduce more Fresnel reflection on the card and skew the results.

The alternative is another camera setup (adding a time-consuming step) with the camera pointing as far down as possible to get the color checker as flat-on as possible. But sometimes this is hard to sell on set when people are waiting.

Any advice welcome.

Cheers
Dan


Dan,

Welcome to ACESCentral…thanks for your first post!

Steve T and the ACES Team

Hi,

I guess it depends on the camera/lens combination you are using. I usually put it on the floor or on a box a few steps away from the tripod. You can also shoot a top-down bracket series specifically for the chart.

Cheers,

Thomas

@stobenkin Thanks!

@Thomas_Mansencal
We’re usually using an 8mm fisheye tilted up slightly, with 4 sets of brackets at 90-degree intervals.

So this would put the color checker at quite an off-angle from the camera.

Again, I’d worry that the measurements from the color checker would be quite skewed by the more exaggerated Fresnel reflection on the card - but perhaps in practice it does not matter as much as I think?

I guess I’ll try a few experiments when I get a chance, with a fifth camera angle pointing down and the color checker under the tripod.

Cheers
Dan

Yes, in this case I do shoot a specific top-down bracket series to minimise reflections on the ColorChecker.

When shooting with the 16mm I always have one top-down series, so it is part of the process anyway.

Cheers,

Thomas

Hi everyone,

I’m new to ACES and this is a great topic indeed! @Daniel, thank you for the blog series; I went through all the posts multiple times, but I’m still confused about the proper workflow for some scenarios. It would be great if anyone could help me figure it out.

Assume I have video samples coming from these sources:

  1. Canon EOS 5D Mark IV.
    Output: .mov file, sRGB colorspace.
  2. iPhone / Android phone shooting with the FiLMiC Pro app using the LogV2 profile.
    Output: .mov file in an unknown colorspace (as far as I understand, FiLMiC LogV2 isn’t a true log space).
    Nuance: FiLMiC provides a deLog LUT that converts this pseudo-log LogV2 footage to Rec.709, so I can pre-convert the footage before switching the project to the ACES color management config in Nuke.
  3. HDRI taken with a Ricoh Theta V, saved as a 32-bit EXR.

All samples and the HDRI have an X-Rite ColorChecker Classic in them.

The main question is: how should I match these plates to each other and to the HDRI in an ACES workflow using Nuke?

What IDTs should I use?

I’m confused because only Output - sRGB and Output - Rec.709 used as IDTs look like the source, but they mess up the color values, so I’m not sure I can rely on them when matching.

Hi,

thanks for reading all my stuff :slight_smile:

Please share some photos of your footage if you can. It is always interesting to see.

I will try to answer your questions:

  1. A plate which is shot with a highly compressed codec like MP4, with an sRGB tone-mapping curve and color gamut, won’t give you a good base for your compositing. You can use the Camera Rec.709 IDT, but every white or clipped pixel will result in a scene-linear light value of around 16.
    We went down this road some years back and had to introduce compression artifacts into the 3D renders to match them to the plates. If you can, use a camera model for which an IDT exists.

  2. Same as point 1.

  3. For sure you will need to replace the sun in the HDRI from the Theta, as the captured values will be too small for outdoor direct-sun HDRIs.

Yes, once you match the plate and the HDRI to each other, you will get predictable results from your 3D renderings - but only if all the sources are scene-linear and in the same colorspace.
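
For the colorspace part, a minimal sketch of that conversion with the colour-science Python package (the frame array here is a placeholder; a real plate would come from your footage):

```python
import numpy as np
import colour

# Placeholder for an 8-bit sRGB-encoded frame, normalised to [0, 1].
rgb_srgb = np.random.random((1080, 1920, 3))

# Undo the sRGB transfer function to get linear values (note: still
# limited to 0-1, so no highlight information survives).
rgb_linear = colour.cctf_decoding(rgb_srgb, function="sRGB")

# Convert the primaries from sRGB/Rec.709 to ACEScg (AP1).
rgb_acescg = colour.RGB_to_RGB(
    rgb_linear,
    colour.RGB_COLOURSPACES["sRGB"],
    colour.RGB_COLOURSPACES["ACEScg"],
)
```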

Best regards

Daniel

Hi,

Thanks for the reply! I believe I got your point; unfortunately, I don’t have the possibility to use a camera for which an IDT exists at the moment.

Here’s a link containing a plate (a single frame from the test footage shot on a Canon EOS 5D Mark IV) and an indoor HDRI taken with a Ricoh Theta V.

It would be very helpful if you could share any insights on how I should approach matching the plate to the HDRI. I still think that I’m doing it wrong.

P.S.: Is there any benefit in using an ACES workflow at all if the output is mostly intended for the web?

Hey,

Please upload the Canon RAW file of the color chart - with the JPG I cannot do a lot. And how did you create the Theta-V EXR file? Can you also upload all the brackets?

Daniel,

The issue is exactly that I don’t have the possibility to shoot RAW. The sample I shared is just a single frame from a .MOV video file, saved as a JPEG.

The Theta-V EXR file was acquired with the AUTHYDRA plugin (there’s a new version of it that I haven’t checked yet, though). I found it produces better results than manual bracketing.

It does some denoising and other OpenCV tricks under the hood, so I’m not sure I could replicate the same result by just merging the brackets in Photoshop, for instance.
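
For comparison, a basic bracket merge is possible with OpenCV alone - this is only a generic Debevec-style sketch (with hypothetical filenames and exposure times), not what AUTHYDRA does internally:

```python
import cv2
import numpy as np

# Hypothetical bracket filenames with their exposure times in seconds.
files = ["bracket_0.jpg", "bracket_1.jpg", "bracket_2.jpg"]
times = np.array([1 / 500, 1 / 60, 1 / 8], dtype=np.float32)
images = [cv2.imread(f) for f in files]

# Recover the camera response curve, then merge to a linear radiance map.
response = cv2.createCalibrateDebevec().process(images, times)
hdr = cv2.createMergeDebevec().process(images, times, response)

# Write the merged scene-linear radiance map as Radiance HDR.
cv2.imwrite("merged.hdr", hdr)
```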

Anyway, I uploaded the brackets captured by the plugin to the same folder

Regarding the matching itself: I was hoping it would be possible to match the plate to the HDRI via the mmColorTarget plugin in Nuke, but I don’t get the workflow.

Should I ‘neutralize’/‘balance’/‘tech grade’ both the plate and the HDRI before I use mmColorTarget? For example, by doing a ColorLookup on the RGB values of a grey patch, picking a desaturated grey as the target?

What IDT should I use for the JPG plate I have? (I assume Utility-sRGB-Texture) - but it mixes up the colors.
What IDT should I use for the HDRI? (I assume Utility-sRGB-Linear)

I’m really confused :slight_smile:

Hi,

the brackets folder has a lot of images in it - like this I am not able to help you, because I can’t search through your images to find the right ones.

As for the IDT question:

  • A JPG plate will look a bit dull and desaturated (Utility-sRGB-Texture) - you only get values between 0 and 1 and therefore no specular highlight information. An ARRI Alexa, in comparison, clips at around 55.
  • With the Theta-S, indoors on a studio shoot, I managed to create an HDRI (Utility-sRGB-Linear) with light values up to 340, after the 18% patch in the HDRI was gained up or down to 0.18 in linear light values.

If you overlay the HDRI with an ACEScg reference chart and compare the 18% grey patch, you can also check whether the value of the “whitest” patch matches a value of around 0.9. At least then you can be reasonably sure that your HDRI is actually scene-linear.

So when both plates are matched on the 18% grey patch, both show you the same exposure.
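
That exposure match boils down to a simple gain; a minimal sketch (how you sample the patch is up to you):

```python
import numpy as np

def normalise_exposure(image: np.ndarray, gray_patch_mean: float,
                       target: float = 0.18) -> np.ndarray:
    """Gain a scene-linear image so its 18% grey patch lands on `target`."""
    return image * (target / gray_patch_mean)

# Hypothetical usage: `hdri` is a scene-linear float array and
# `patch_mean` is the mean value of the 18% grey patch sampled from it.
# hdri_matched = normalise_exposure(hdri, patch_mean)
```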

I hope this helps clear up the confusion.

Best Daniel

Thank you Daniel! That was helpful. I’ll share the results if I succeed.