After playing around a bit with all of these different variations, I decided that I did not like the behavior of the toe when it is constrained by the grey intersection point: it doesn’t make sense for a toe adjustment to also change contrast.
I settled on using the piecewise hyperbolic with “decoupled toe and surround”. I played around a bit and roughed in a few presets for:
- SDR dark surround
- SDR dim surround
- SDR average surround
- HDR PQ 600 nits
- HDR PQ 1000 nits
- HDR PQ 2000 nits
- HDR PQ 4000 nits
Tonemap_PiecewiseHyperbolic_Presets.nk (4.0 KB)
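In case it helps to see the shape of these at a glance, here’s a hypothetical sketch of what one of these presets boils down to: a small set of tonescale parameters keyed to a display target and surround. The parameter names and numbers below are placeholders for illustration only, not the values baked into the .nk file above.

```python
# Hypothetical preset structure: each target is just a peak luminance,
# a surround/contrast exponent, and a toe amount. The specific numbers
# are placeholders, not the values in the .nk script above.
PRESETS = {
    "SDR dark surround":    {"peak_nits": 100.0,  "surround": 1.20, "toe": 0.01},
    "SDR dim surround":     {"peak_nits": 100.0,  "surround": 1.15, "toe": 0.01},
    "SDR average surround": {"peak_nits": 100.0,  "surround": 1.10, "toe": 0.01},
    "HDR PQ 600 nits":      {"peak_nits": 600.0,  "surround": 1.15, "toe": 0.01},
    "HDR PQ 1000 nits":     {"peak_nits": 1000.0, "surround": 1.15, "toe": 0.01},
    "HDR PQ 2000 nits":     {"peak_nits": 2000.0, "surround": 1.15, "toe": 0.01},
    "HDR PQ 4000 nits":     {"peak_nits": 4000.0, "surround": 1.15, "toe": 0.01},
}
```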
The behavior and appearance are nice, but I don’t like how complex it still is.
I keep going back to @daniele’s original formula, so I thought I would try the same exercise with his “minimally constrained” formulation and see how it went. It was actually surprisingly easy to dial in the same behavior, even without being able to specify the precise intersection points. (I did make one modification to scale the output domain to the right level for PQ output.)
Tonemap_PiecewiseHyperbolic_Presets.nk (4.0 KB)
Edit: Whoops! Looking back at this, I realized I uploaded the wrong Nuke script here. Here’s the correct one, which contains Daniele’s compression function as described above.
Tonemap_Siragusano_Presets.nk (2.5 KB)
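For anyone who wants to poke at the general shape outside of Nuke, here’s a rough Python sketch of the kind of curve I mean: a hyperbolic shoulder raised to a surround power, a simple quadratic toe, and a final output scale so the display-linear result lands at a sensible level for a PQ inverse EOTF. The parameter names and arrangement are only an illustration of the idea, not a copy of the exact formula in the script above.

```python
def tonescale(x, m=1.0, s=0.18, g=1.15, t=0.01, n=100.0, n_r=100.0):
    """Scene-linear x -> display-linear output, normalized so 1.0 = n_r nits.

    Illustrative sketch only: parameter names and defaults are placeholders,
    not the values dialed in for the presets above.
    """
    # hyperbolic shoulder with a surround / contrast exponent g
    f = m * (x / (x + s)) ** g
    # simple toe: pulls down shadows, leaves the shoulder mostly alone
    f = (f * f) / (f + t)
    # scale so a peak of n nits lands correctly on an n_r-normalized axis
    # (e.g. n = 1000, n_r = 100 gives a maximum display-linear value of ~10)
    return f * (n / n_r)
```

The point of this structure is that the toe and the surround exponent are independent knobs, which is the “decoupled” behavior I was after, and the final scale stands in for the kind of modification mentioned above to get the output domain to the right level for PQ output.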
This leads me to an observation I probably should have had before spending a few weeks on the math exercise above: if we are building something with preset parameters for different displays and viewing conditions, is a high level of complexity in the model justified in order to achieve precise intersection points? What level of precision is actually necessary here, and why?
Edit:
I realized I may not have shared the super simple EOTF / InverseEOTF setup I’ve been using, which may be a necessary counterpart for testing the above tonescale setups, especially with HDR:
ElectroOpticalTransferFunction.nk (5.2 KB)
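For completeness, the HDR side of that setup is just ST 2084 (PQ). Here’s a quick Python sketch of the PQ EOTF and its inverse using the standard ST 2084 constants (the function names and the numpy usage are my own), in case anyone wants to sanity-check the Nuke nodes against it:

```python
import numpy as np

# SMPTE ST 2084 (PQ) constants
M1 = 2610.0 / 16384.0
M2 = 2523.0 / 4096.0 * 128.0
C1 = 3424.0 / 4096.0
C2 = 2413.0 / 4096.0 * 32.0
C3 = 2392.0 / 4096.0 * 32.0

def pq_inverse_eotf(nits):
    """Display-linear luminance in nits -> PQ code value in [0, 1]."""
    y = np.clip(np.asarray(nits, dtype=np.float64) / 10000.0, 0.0, 1.0)
    return ((C1 + C2 * y**M1) / (1.0 + C3 * y**M1)) ** M2

def pq_eotf(code):
    """PQ code value in [0, 1] -> display-linear luminance in nits."""
    e = np.clip(np.asarray(code, dtype=np.float64), 0.0, 1.0) ** (1.0 / M2)
    return 10000.0 * (np.maximum(e - C1, 0.0) / (C2 - C3 * e)) ** (1.0 / M1)
```

A quick round-trip sanity check is `pq_eotf(pq_inverse_eotf(100.0))`, which should come back to roughly 100 nits (a code value of about 0.508).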