Looking back at my tonescale models colab, I realize I did not clearly describe my thinking behind which scene-linear value we decide to map to peak display value. I’ve added a bit more description on this topic there, but in short:
For all 3 of the tonescale models I shared in that colab, the scene-linear value to peak display value mapping roughly follows this table:
L_p | scene-linear value mapped to peak |
---|---|
100 | 35 |
600 | 65 |
1000 | 75 |
4000 | 100 |
The regression fits you see linked in the colab code are there to find a function of L_p that, when run through the tonescale function, roughly hits these values, so that we can smoothly vary L_p and get an “interpolated” result that makes sense. FYI, the tonescale functions colab I posted earlier also includes a few variations of the tonescale functions which allow you to specify this mapping explicitly (at the expense of mathematical complexity).
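To make the idea concrete, here is a minimal sketch (not the actual colab code) of fitting a simple function of L_p to the target values in the table above, so that intermediate L_p values get a sensible peak mapping. The functional form `a + b*log2(L_p)` is only an assumption for illustration; the actual fits in the colab may use a different form.

```python
import numpy as np
from scipy.optimize import curve_fit

# target data from the table above
L_p = np.array([100.0, 600.0, 1000.0, 4000.0])    # peak display value
scene_peak = np.array([35.0, 65.0, 75.0, 100.0])  # scene-linear value mapped to peak

def model(Lp, a, b):
    # assumed functional form: scene_peak ~ a + b * log2(L_p)
    return a + b * np.log2(Lp)

# least-squares fit of (a, b) to the table values
params, _ = curve_fit(model, L_p, scene_peak)
a, b = params

# smoothly varying L_p now gives an "interpolated" peak mapping
for Lp in (100, 300, 1000, 2000, 4000):
    print(Lp, round(float(model(Lp, a, b)), 2))
```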
I don’t believe this is the only valid approach, though. I think @daniele suggested in one of the meetings last year using a constant peak white mapping, perhaps based on the peak value of the log space used for grading (ACEScct in the ACES system, I guess). There are pros and cons to each approach that should probably be considered.
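For reference, if that constant were taken as the top of the ACEScct encoding (one possible reading of the suggestion above, not necessarily what was meant), it would be the scene-linear value that decodes from ACEScct code value 1.0. A small sketch using the log-segment decode from the ACEScct spec (S-2016-001):

```python
def acescct_to_linear(cct: float) -> float:
    # log-segment decode only; code value 1.0 is well above the linear-segment breakpoint
    return 2.0 ** (cct * 17.52 - 9.72)

# constant scene-linear value that would map to peak display, independent of L_p
peak_scene_linear = acescct_to_linear(1.0)
print(peak_scene_linear)  # ~222.86, roughly 10.3 stops above 0.18 mid grey
```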
Edit: removing the term “luminance”, because this discussion really has no consideration of color information.