HDRI Bracket distance

I know this is not directly ACES related, but I hope that’s all right. I have a question regarding photographing HDRIs with a DSLR using exposure bracketing.

The general wisdom seems to be to use 2-stop increments when bracketing, ideally covering everything from a fully white to a fully black image.

Now, to me this would only make sense if the camera used could only capture 2 stops of dynamic range. If my camera can capture 10 stops of DR, would it not be perfectly sufficient to bracket in 10-stop increments?

As I understand it, modern cameras are pretty linear in their response curve, certainly over more than 2 stops?

I want to do this test tomorrow with my Nikon Z6 and try some scenarios. Maybe it’s just a leftover from very non-linear sensors that made it hard to calculate the scene radiance with too few exposure steps, or maybe it helps the solver to find corresponding features across shots?

I’m not a photography expert, but I think merging software creates simple weighted blends between the taken exposures. Per bracket you want as much captured data as possible in the mid-range to avoid noise or clipping, so using only 1-stop increments would be even cleaner. Otherwise you can only combine super noisy shadows with near-clipped highlights, because the clean mid-range data for that stop is absent. Merge software probably also uses all images to reduce noise, so the more the better. There is a rough sketch of that weighting idea below.
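
To illustrate what I mean by favouring the mid-range, here is a rough NumPy sketch of a hat-shaped weighting function (the shape and thresholds are made up for illustration, not taken from any particular merge software):

```python
import numpy as np


def hat_weight(value, low=0.05, high=0.95):
    # Hypothetical hat-shaped weighting: favours mid-range values and
    # zeroes out noisy shadows and near-clipped highlights.
    weight = 1.0 - (2.0 * value - 1.0) ** 12
    weight[(value < low) | (value > high)] = 0.0
    return weight


# Same scene point seen in two overlapping exposures (normalised 0..1):
# a noisy shadow value in the short exposure, a clean mid-range value
# in the long one.
samples = np.array([0.02, 0.45])
print(hat_weight(samples))  # the mid-range sample dominates the blend
```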

About the linearity: I’m not even sure whether merge software actually takes the linear data or only gamma-corrected data when shooting raw. Maybe you can test this by converting your images to linear EXRs first and then merging, and see if it makes a difference when using only 3 images total.

Curious to see your test results! :slight_smile:


What you want is a good overlap between the brackets, and an odd number of images if possible, e.g. 3, 5, 7, 9. I tend to shoot from 1/8000s to 1s+ depending on the scenario; indoors I might go to 30s. The good overlap will ensure that noise is reduced significantly.
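
For example, a 7-frame schedule with 2-stop spacing starting at 1/8000s could be laid out like this (just a sketch; the count and range are assumptions to adjust per scene):

```python
# 7 brackets, 2 EV (a factor of 4 in time) apart, starting at 1/8000s.
shutter_times = [1 / 8000 * 4 ** i for i in range(7)]
print([round(t, 5) for t in shutter_times])
# ~1/8000, 1/2000, 1/500, 1/125, 1/30, 1/8 and 1/2 of a second.
```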

Another approach is to take many photos at the fastest speed, e.g. 1/8000s, and stack them together; however, this is highly dependent on your camera and you will need dark frames.
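
A rough NumPy sketch of that stacking with dark-frame subtraction, using random arrays as stand-ins for real frames (all shapes and values here are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for real data: 32 frames shot at 1/8000s and 16 dark frames
# captured with the lens cap on at the same settings.
light_frames = rng.normal(0.1, 0.01, (32, 4, 6))
dark_frames = rng.normal(0.002, 0.001, (16, 4, 6))

# Build a master dark, subtract it from every light frame, then average.
master_dark = dark_frames.mean(axis=0)
stacked = (light_frames - master_dark).mean(axis=0)

# Averaging N frames reduces the noise standard deviation by roughly sqrt(N).
print(light_frames[0].std(), stacked.std())
```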

Here is a classic that everyone should read: https://blog.selfshadow.com/publications/s2016-shading-course/unity/s2016_pbs_unity_hdri.pdf


That’s a great resource, thanks!

The papers I read on HDR merging talk about calculating the camera response curve from the bracketed exposures, but yeah, I am also not sure whether something like PTGui does some magic with the raw data or instead passes it through LibRaw first and outputs some gamma-corrected thing? Hmm. The HDR stuff I tested seems to make sense though, so yeah, magic.

In most cases, you don’t need to compute the camera response functions (CRFs) because it is possible to access the raw data, which is intrinsically linear and sometimes encoded with a known function, e.g. Nikon cameras. While sensors tend to behave non-linearly when they reach saturation, it does not matter because that information is culled during the merge to HDR by the weighting function.
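
The basic idea of that merge looks roughly like this (a simplified sketch with made-up names and thresholds, not the actual colour-hdri code):

```python
import numpy as np


def merge_to_radiance(images, exposure_times, saturation=0.98):
    # Simplified weighted merge of linear raw exposures into relative radiance.
    images = np.asarray(images, dtype=np.float64)
    times = np.asarray(exposure_times, dtype=np.float64)

    # Hat-shaped weights; saturated samples get zero weight, so the sensor's
    # non-linear behaviour near clipping never contributes to the result.
    weights = 1.0 - (2.0 * images - 1.0) ** 12
    weights[images >= saturation] = 0.0

    # Each exposure estimates radiance as pixel_value / exposure_time;
    # the estimates are then combined using the weights.
    radiance = np.sum(weights * images / times[:, None, None], axis=0)
    return radiance / np.maximum(np.sum(weights, axis=0), 1e-6)


# Toy usage: three 2x2 "raw" frames, 2 stops apart in exposure time.
frames = np.array([[[0.05, 0.20], [0.50, 0.90]],
                   [[0.20, 0.80], [0.99, 0.99]],
                   [[0.80, 0.99], [0.99, 0.99]]])
print(merge_to_radiance(frames, [1 / 500, 1 / 125, 1 / 30]))
```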

If you are not afraid of reading some Python, we have an implementation here: colour-hdri/radiance.py at develop · colour-science/colour-hdri · GitHub


That’s my jam! Thanks as always, Thomas.
