Trying to clean up a pipeline and bumped into this one: to use the Macbeth chart in scenes for asset creation etc., would we need to use a Lambert shader or something else?
It seems that when using just a Standard Surface, there is too much added variation from specular and other contributions.
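For context, here is roughly the setup I am experimenting with. This is only a minimal sketch, assuming Maya with Arnold (MtoA node and attribute names, placeholder texture path), where every non-diffuse lobe on the Standard Surface is zeroed out so the chart behaves as a purely Lambertian card:

```python
# Minimal sketch, assuming Maya with Arnold (MtoA): zero out every
# non-diffuse lobe on an aiStandardSurface so the chart is effectively
# Lambertian. The texture path below is just a placeholder.
from maya import cmds

shader = cmds.shadingNode("aiStandardSurface", asShader=True, name="macbeth_SHD")
cmds.setAttr(shader + ".base", 1.0)       # full diffuse weight
cmds.setAttr(shader + ".specular", 0.0)   # no specular lobe
cmds.setAttr(shader + ".coat", 0.0)       # no coat lobe
cmds.setAttr(shader + ".sheen", 0.0)      # no sheen lobe
cmds.setAttr(shader + ".emission", 0.0)   # no emission

# Hypothetical chart texture; connect your own Macbeth file node here.
tex = cmds.shadingNode("file", asTexture=True, name="macbeth_TEX")
cmds.setAttr(tex + ".fileTextureName", "/path/to/macbeth_chart.exr", type="string")
cmds.connectAttr(tex + ".outColor", shader + ".baseColor", force=True)
```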
Yes, but keep in mind that the values of the ColorChecker Classic are always given for a particular measurement geometry, typically 0:45 or 45:0: Color Measurement Instrument Geometries
Should we really remove all specular contribution?
I learned at some point that real-world Macbeth charts are not perfectly rough objects and still have a tiny specular contribution.
This assumes the use case is to match a real-world Macbeth chart, not just to ensure consistency between full-CG shots.
I cannot argue that the specular values I pick are accurate, as they are eye-balled. But I work on full-CG shots, so I only ensure all renders share the same Macbeth configuration for consistency.
If you do have some measurements, I’d be very curious!
What are folks’ thoughts on going in the opposite direction and, rather than having either a specular or Lambertian diffuse contribution, putting the Macbeth texture into the emission input of a Standard Surface?
The intent of the chart in such renders is to provide a cursory understanding of what is going on with the rendering, typically that the exposure is in the right ballpark and the white balance is under control or in a known state. With that in mind, it needs to be within the scene, next to the objects. It also serves as proof that no cheating occurred, especially when you add gray and chrome balls and they start reflecting the scene content.
I would certainly not use it as a precise calibration tool because of the aforementioned lack of public measurements that can be used in the customary commercial renderers. It is certainly possible to shoot some reference pictures in a controlled environment and start dialling in a specular response (we do that at work), but it is a science project and the benefits are limited. Is the extra work worth it? It depends on what you are doing; for most people, probably not, hence “Lambertian/diffuse is good enough for most use cases”. On the other hand, if you are doing high-end Digi-Doubles with ICT-like data, it is totally worth it.
You will also find charts in the wild that are old, covered in fingerprints and skin oil, and be left wondering why you cannot match them with your perfectly calibrated CG chart. It is an endless quest.
Personally, I would not map the emission, because then the chart won’t react to the light.
Two years ago, a colleague who was doing photogrammetry shared a Macbeth setup with us, and to our eyes it was “good enough”. Probably not scientific, but it served our purpose.
Yes, agreed! On the other hand, having two charts side by side, one mapped to base color and another mapped to emission for comparison, could give a pretty good idea of whether the light exposure is good.
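A rough sketch of that side-by-side idea, again only assuming Maya with Arnold (MtoA names, placeholder texture path): two shaders share the same Macbeth texture, one lit via base color and one self-lit via emission, to be assigned to two cards placed next to each other.

```python
# Rough sketch, assuming Maya with Arnold (MtoA): one Macbeth texture
# feeding a lit chart (base color) and a self-lit chart (emission).
from maya import cmds

tex = cmds.shadingNode("file", asTexture=True, name="macbeth_TEX")
cmds.setAttr(tex + ".fileTextureName", "/path/to/macbeth_chart.exr", type="string")

lit = cmds.shadingNode("aiStandardSurface", asShader=True, name="macbethLit_SHD")
cmds.setAttr(lit + ".specular", 0.0)
cmds.connectAttr(tex + ".outColor", lit + ".baseColor", force=True)

selfLit = cmds.shadingNode("aiStandardSurface", asShader=True, name="macbethEmissive_SHD")
cmds.setAttr(selfLit + ".base", 0.0)       # no diffuse response to lights
cmds.setAttr(selfLit + ".specular", 0.0)
cmds.setAttr(selfLit + ".emission", 1.0)   # unit emission weight
cmds.connectAttr(tex + ".outColor", selfLit + ".emissionColor", force=True)
```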
That would certainly depend on how the illuminance is integrated by the camera. As an example, if you were using a Physical Camera with a correct exposure model, what peak luminance value would you set the emissive chart to?
I guess I was naively assuming that if the exposure is set to 0 in the camera and the emission value is 1, the chart would appear as it does in Nuke. This would be for a purely CG animation environment rather than for a VFX show. The idea would be to get the exposure to a reasonable level. But perhaps I’m missing something?
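To put numbers on my assumption: I was picturing the camera as a plain 2^exposure multiplier, in which case an emission of 1 at exposure 0 passes through untouched. A toy model, not any renderer’s actual exposure model:

```python
# Toy model of my assumption, not any renderer's actual exposure model:
# the camera simply scales radiance by 2**exposure.
def rendered_value(emission, exposure=0.0):
    return emission * (2.0 ** exposure)

print(rendered_value(1.0, 0.0))   # 1.0 -> reads as 1.0 in Nuke
print(rendered_value(1.0, -2.0))  # 0.25 -> two stops down
```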
I’m not sure Arnold has what you are referring to as a physical camera. I do know that Arnold lights do not have real-world physical units like they do in Unreal or V-Ray. I’d love to learn more about how one would set the peak luminance as you mentioned. Is that possible in Arnold?
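For what it’s worth, my understanding (and it is only that) is that Arnold’s lights are unitless, with the effective output being intensity scaled by 2^exposure rather than lumens or candela:

```python
# My understanding of Arnold's unitless light model (no photometric units,
# unlike Unreal or V-Ray): effective output = intensity * 2**exposure.
def arnold_light_output(intensity, exposure):
    return intensity * (2.0 ** exposure)

print(arnold_light_output(1.0, 8.0))  # 256.0
```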