Still, as @Thomas_Mansencal says, to properly use the inverse ODTs you have to do Legal to Full conversion. However, if your video is tagged Limited but has data outside the Limited range then I’m thinking that you should probably ignore the metadata and ingest the clip as Full range.
No. That really is a bad idea!
But equally, trying to apply an inverse of the ACES tone mapping (which rolls off to 100%) to something from a video camera which rolls off to 109% is probably also a bad idea!
My current workaround with soft clipping before the Inverse IDT is far from good. But if the Inverse ODT didn’t clip below 0 and above 1, those super-white and sub-black values would be available in ACES, still mapped above 1 and clipped by the SDR ODT. So they could be brought back if needed, but until then the image would look identical to the source.
If you are always working with the same type of camera, it is worth trying to find out the CRFs. It is a bit of work, but you will get closer to proper linearisation than using a random transform on random footage while hoping for the best.
In my continuing mission to document all sigmoid functions known to exist, here is another post on the subject.
The Hill-Langmuir equation f\left(x\right)=\frac{x^{n}}{x^{n}+k} (often referred to as the “Naka-Rushton equation” in the literature) has been shown to describe the response of eye cells to light stimulus.
Update: I got lost in my trash pile of papers about color science and linked the wrong paper before… I’ve updated the links below.
Michaelis-Menten Equation

The Luminance-Response Function of the Dark-Adapted Rabbit Electroretinogram - Kee-Ha Chung, M.D., Sang-Ha Kim, M.D., Jin-Ho Cho, 1994
Korean J Ophthalmol. 1994;8(1):1-5. Published online June 30, 1994
DOI: 10.3341/kjo.1994.8.1.1
S-Potentials from Luminosity Units in the Retina of Fish (Cyprinidae) - K. I. Naka and W. A. H. Rushton, 1966
J Physiol. 1966 Aug;185(3):587-99. doi: 10.1113/jphysiol.1966.sp008003. PMID: 5918060; PMCID: PMC1395832.
The Luminance-response Function of the Human Photopic Electroretinogram: A Mathematical Model - Hamilton R, Bees MA, Chaplin CA, McCulloch DL.
Vision Res. 2007 Oct;47(23):2968-72. doi: 10.1016/j.visres.2007.04.020. Epub 2007 Sep 24. PMID: 17889925.
Light Adaptation and Photopigment Bleaching in Cone Photoreceptors in Situ in the Retina of the Turtle - D. A. Burkhardt, 1994
Burkhardt DA. J Neurosci. 1994 Mar;14(3 Pt 1):1091-105. doi: 10.1523/JNEUROSCI.14-03-01091.1994. PMID: 8120614; PMCID: PMC6577543.
Visual Adaptation in Monkey Cones: Recordings of Late Receptor Potentials - R. M. Boynton, D. N. Whitten, 1970
Science. 1970 Dec 25;170(3965):1423-6. doi: 10.1126/science.170.3965.1423. PMID: 4991522.
The Michaelis-Menten equation is a simpler form without the power:
f\left(x\right)=\frac{x}{x+k}
I recently rediscovered something which is probably obvious to most of you already: the function I’m using in OpenDRT is based on Michaelis-Menten, but adds a power function after the tonemap is applied:
f\left(x\right)=\left(\frac{x}{x+k}\right)^{n}
This is not quite the same thing as the Hill-Langmuir equation, where the power is applied directly to the input data, and the behavior (and curve) is slightly different.
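To make that difference concrete, here is a quick Python sketch (the k and n values are arbitrary illustrations, not constants from OpenDRT) of the two placements of the exponent:

```python
def hill_langmuir(x, k, n):
    # exponent applied to the input, then hyperbolic compression
    return x ** n / (x ** n + k)

def power_after_tonemap(x, k, n):
    # hyperbolic compression first, exponent applied to the output
    return (x / (x + k)) ** n

# both forms map 0 -> 0 and approach 1 as x grows,
# but the toe and shoulder shapes differ whenever n != 1
for x in (0.18, 1.0, 4.0):
    print(x, hill_langmuir(x, 0.18, 1.5), power_after_tonemap(x, 0.18, 1.5))
```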
So I thought it might be a fun exercise to solve this slight variation for some intersections, and see if it could also be useful for a tonescale in a display rendering transform.
It turns out that solving for middle grey and 1.0 intersection constraints, as well as the inverse, is quite easy to do (or maybe I’m just smarter than I was a year ago when I started this thread originally).
Here is a Google Colab using SymPy, with some excessively descriptive explanation for the math-cautious among us.
And here is a desmos with the final solve.
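The colab does the solve with SymPy; as a rough plain-Python equivalent (my grey and white intersection values here are assumptions, not necessarily the ones in the colab), note that the two constraints are linear in s and k and can be solved directly:

```python
def solve_constraints(g, w, n):
    # solve s, k so that f(g) = g and f(w) = 1 for f(x) = s*x^n / (x^n + k)
    # rearranged, the constraints are linear in (s, k):
    #   s*g^n - g*k = g*g^n
    #   s*w^n -   k = w^n
    gn, wn = g ** n, w ** n
    s = (g * gn - g * wn) / (gn - g * wn)
    k = s * wn - wn
    return s, k

s, k = solve_constraints(0.18, 16.0, 1.5)   # assumed grey and white points
f = lambda x: s * x ** 1.5 / (x ** 1.5 + k)
print(f(0.18), f(16.0))   # 0.18 and 1.0 by construction
```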
If it were desired to apply an “s-curve” on log-domain data, this form of the equation might work better than the one used in OpenDRT.
Just as a quick proof of concept, I spent a few minutes manually tuning the intersections and powers on a log2 input encoding to match the SSTS using the 0.001, 10, 100 settings, as shown in the last meeting.
HLTonemap_match_ssts.nk (13.6 KB)
I bet it would be pretty easy to fit @KevinJW’s average SDR tonescale data with this function too, if that data were available. Might be a fun experiment.
Edit
Made a couple of little tweaks to the post above for readability, and added a variable m_1 for the upper intersection constraint, in case it’s useful for fitting.
Nice work,
How does it transfer to HDR?
I think your approach is better for HDR. I also think that it simplifies a lot to apply the tonescale to linear domain input data instead of log domain input data.
However there seems to be strong resistance to new ways of doing things here, so I thought it might be useful to post this alternate formulation which might work better in the log domain.
If matching the behavior of the SSTS is desired, it seems like this function works pretty well.
HLTonemap_match_ssts_hdr.nk (24.9 KB)
Personally I don’t really like the look of the SSTS in HDR though. It compresses highlights too much. But there doesn’t seem to be much interest in evaluating or even thinking about HDR in this working group either so maybe it’s a moot point.
Hope it helps clarify…
I think there is plenty of interest.
But my sense of it is that HDR is in some ways a simpler issue, as all of the boundaries we’re crashing into in the SDR domain will be relaxed in the HDR world.
Although it’s much harder in some other ways, as there are very few displays in the wild we can work with that aren’t complicating the issue by imposing their own secondary rendering/tonemapping step on top.
Isn’t the SSTS curve applied to linear domain data?
At the end of the day, it doesn’t matter if the curve is applied to linear or logarithmic data. If the maths is adapted accordingly, the same result can be achieved with either approach. I think it is simply that for visualisation of the scurve, plotting on log/log axes can be clearer.
No. The Single Stage Tonescale is a Bézier spline applied to the pure log10 of the input linear pixel value.
I would say it does matter quite a bit from a system design perspective. There are pros and cons to each approach.
Glad to hear this!
Naka-Rushton Tonescale Function
I spent a bit of time today experimenting with this alternate idea for a tonescale function (let’s call this thing the Naka-Rushton Tonescale Function).
“But what is tone exactly?”)
– The voice of @troy_s in my head
Long story short I came up with a model for how this function might be used across display devices with varying peak luminance. The function might actually work okay for HDR, though I will need to do more testing, and the usual caveat applies that I don’t have a professional quality HDR display device.
I made a colab to plot and compare different tonescale functions. The plots that follow have the following attributes:
 X-Axis: 0.18*2^ev, stops from Middle Grey, −10 to +10 stops
 Y-Axis: display light output in nits, displayed with a PQ distribution (comments welcome on whether this is appropriate; I thought it did a good job of showing the behavior of the curves in HDR).
One advantage of the Naka-Rushton tonescale function seems to be controllability. With this function it is pretty easy to get the same shape out of the curve with varying middle grey output y position.
With the model I’m using in OpenDRT, I have to adjust the exponent on the curve as I transition between SDR and HDR. The power is set to ~1.4 in SDR and ~1.2 in HDR. This results in a bit of a change in the shape of the curve between SDR and HDR, especially in shadows. This might be good or bad depending on what you want.
OpenDRT Tonescale Model
The exponent in the Naka-Rushton function just sets the slope through the middle-grey intersection constraint. So if we keep the exponent constant, the slope of the curve at grey stays constant. (I’ve added a flare compensation as well, so this is not 100% true, but more true than with OpenDRT.)
Naka-Rushton Tonescale Model
And as a desmos graph
And quickly hacked into OpenDRT (sat 1.1, no surround compensation model)
OpenDRT_NR.nk (26.8 KB)
The model was created by me, by eye, looking at images and comparing with other tonescales like the ARRI transforms. I’ve also included a bit of an exposure boost on middle grey as peak luminance increases, as Daniele suggested in one of the meetings last year. One thing I’m a little worried about is that it is difficult to reduce slope at the top end as peak luminance increases, without changing the slope at grey by adjusting the exponent. So this curve might have a stronger highlight appearance. Though I cannot really confirm or deny this with the display devices I have access to. Any testing from people who have access to a professional quality display would be appreciated.
This weekend I learned me some matplotlib, so I thought it would be fun to use pandas and plot some other tonescales as well.
ARRI Tonescale Model
ACES Tonescale Model
Hope it helps and doesn’t distract too much from the important investigations into dish soaps.
I went back to @daniele’s original post and took another pass at understanding it.
One thing I noticed making the Naka-Rushton model above (and many times in the many garbage dead-end posts I did before) is that with an intersection constraint at grey, it’s very easy to get undesirable behavior in the low end below the constraint: slope changes or contrast changes, “wobbling” as you tweak the curves for different peak luminance outputs.
The really cool thing about Daniele’s original formulation is that it is very stable down there. (Why am I only fully appreciating this now, nearly a year later?)
The reason it is stable is that, in simple terms, if you have
f\left(x\right)=s_{1}\frac{x}{x+s_{1}}
The bottom end is very stable as you adjust s_1
If you add a power function
f\left(x\right)=s_{1}\left(\frac{x}{x+s_{0}}\right)^{p}
where s_{0}=e_{0}s_{1}^{\frac{1}{p}}
The bottom end is still very stable, and we have control over scene-referred exposure e_0 and “contrast” p.
This can give very controllable and predictable results.
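A quick numerical sketch of that stability (parameter values here are just illustrative): for small x, f(x) ≈ s_1(x/s_0)^p = (x/e_0)^p, which doesn’t involve s_1 at all, so the shadows barely move as the peak scale changes:

```python
def daniele_spring(x, s1, p=1.2, e0=0.18):
    # Daniele's formulation: power applied after the tonemap,
    # with s0 tied to s1 so the toe stays put as s1 scales
    s0 = e0 * s1 ** (1.0 / p)
    return s1 * (x / (x + s0)) ** p

# shadows are nearly invariant to s1; highlights scale with it
for s1 in (1.0, 2.0, 4.0):
    print(s1, daniele_spring(0.01, s1), daniele_spring(10.0, s1))
```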
Here’s another desmos comparing this function with the Naka-Rushton one, with s_0 normalized in the same way. The different behavior of the exponent between the two curves is pretty clear.
More Tonescale Sigmoid Ramblings
The last couple of weeks I’ve been doing some more explorations on this topic. I’ll summarize some of the more interesting points and thought processes here, for those rare persons who might still be following this thread.
A Tale of Two Contrasts
At the end of this meeting, I did a quick demo of the different behavior of contrast / exponent adjustments between these two functions
Daniele’s “Michaelis-Menten Spring Function”
f\left(x\right)=s_{1}\left(\frac{x}{s_{0}+x}\right)^{p}
where s_{0}=e_{0}s_{1}^{\frac{1}{p}} and e_{0} is the scene-linear exposure control.
the “Naka-Rushton Function” I posted before
f\left(x\right)=\frac{s_{1}x^{p}}{s_{0}+x^{p}}
The difference between the two functions is essentially where the exponent is applied.
 In Daniele’s function, contrast is adjusted as a power function in display-linear.
 In the Naka-Rushton function, contrast is adjusted as a power function in scene-linear.
Based on all the dumb experiments I’ve done with the above two tonescale functions, it seems necessary to have more contrast in SDR than HDR. This implies the slope through middle grey changes subtly between an SDR rendering and an HDR rendering. Logically this makes sense: Since we have more dynamic range available in HDR, we would want to have less highlight compression and less stretching of midrange values through boosted contrast. The question I’ve been exploring is how do you create a tonescale that continuously changes between SDR at 100 nits peak luminance and HDR at > 1000 nits peak luminance?
What the heck is a spring function?!
I think Daniele used this term in one of the previous meetings (or maybe I imagined it, just like I think I imagined @SeanCooper using the term “water vs balloon” to describe display gamut rendering methods). Or maybe I just have a psychological vulnerability for inventing stupid names for things.
Anyway, “spring” just refers to a sigmoid function which can be scaled in Y without the slope through the origin changing. A simple example being f(x)=s_{1}\frac{x}{s_{1}+x}. With this function you can scale up s_1 and the slope through the origin stays constant, while the rest of the sigmoid is scaled up vertically. This way of thinking about HDR display rendering tonescales is much more elegant and simple than the messy way I was thinking about it before.
The basic approach is to
 Set contrast with the power function
 Set “exposure” or middle grey point using the scene-linear exposure control
 Set peak white luminance using the y-scale s_1.
Naka-Rushton Spring?
It’s easy to set up a “Naka-Rushton” equation in “spring” mode: f\left(x\right)=\frac{s_{1}x^{p}}{s_{0}+x^{p}}
where s_{0}=\frac{s_{1}}{e_{0}} and e_0 is our scene-linear scale.
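Here is that spring behavior sketched in Python (parameter values are illustrative): near the origin f(x) ≈ e_0·x^p regardless of s_1, while the shoulder scales with s_1:

```python
def nr_spring(x, s1, p=1.2, e0=0.18):
    # "spring" mode: s0 tied to s1, so the toe (~ e0 * x^p) ignores s1
    s0 = s1 / e0
    return s1 * x ** p / (s0 + x ** p)

# toe stays put as s1 changes; the curve saturates toward s1
for s1 in (1.0, 2.0, 4.0):
    print(s1, nr_spring(0.05, s1), nr_spring(100.0, s1))
```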
As a simple example, here is a variation on the tonescale model based on the “Naka-Rushton Tonescale Function” I posted previously. It has a constant contrast of p=1.2, constant flare compensation of f_l=0.02, and maps middle grey to 10 nits at 100 nits peak luminance.
In this model, the output y-scale is normalized so that at 100 nits peak luminance, output display-linear = 1.0; then as peak luminance increases, the output peak y value increases up to 40 at 4000 nits. To normalize into a PQ range where 1.0 = 10,000 nits and 0.01 = 100 nits, you would divide by 100. This makes it simple to turn on PQ normalization for HDR or turn it off for SDR.
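For reference, here is that normalization step in Python, using the published SMPTE ST 2084 (PQ) constants; the peak values are the ones from the model above:

```python
def pq_oetf(y):
    # SMPTE ST 2084 inverse EOTF; y is display-linear with 1.0 == 10,000 nits
    m1, m2 = 2610 / 16384, 2523 / 4096 * 128
    c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32
    yp = y ** m1
    return ((c1 + c2 * yp) / (1 + c3 * yp)) ** m2

# model output is normalized so 1.0 == 100 nits; divide by 100
# to reach the PQ convention where 1.0 == 10,000 nits
print(pq_oetf(1.0 / 100))    # 100 nits  -> ~0.508
print(pq_oetf(40.0 / 100))   # 4000 nits -> ~0.90
```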
As I hinted at before, I think we would want to reduce the contrast with increasing peak luminance. With a contrast of 1.2 at 4000 nits I think the highlights are pushed too bright. Or maybe this is a problem with the tonescale function, and the reason Daniele was asking “how does it work in HDR?”
Pivoted Contrast?
After the above description of the “Naka-Rushton” function, you might be thinking:
Gee, if that function is just applying a power function to scene-linear input data, why not turn it into a pivoted contrast function instead, so that middle grey isn’t shifted around when adjusting contrast?!
It actually seems like a valid approach, using something like a 3-stage tonescale rendering:
 Scene-referred pivoted contrast adjustment (possibly with linear extension above the pivot)
 Scene-linear to display-linear rendering using a pure Michaelis-Menten function
 Flare compensation
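A minimal sketch of that 3-stage idea (the parameter values and the parabolic-toe flare model are my own assumptions, not any candidate’s actual implementation):

```python
def render(x, p=1.2, pivot=0.18, s0=0.5, s1=1.2, flare=0.005):
    # stage 1: scene-referred pivoted contrast (the pivot maps to itself)
    xc = pivot * (x / pivot) ** p
    # stage 2: pure Michaelis-Menten, scene-linear -> display-linear
    y = s1 * xc / (s0 + xc)
    # stage 3: a simple parabolic-toe flare compensation
    return y * y / (y + flare)

print(render(0.0), render(0.18), render(1.0))
```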
Many Valid Approaches
Given the large quantity of garbage in my previous posts in this thread, I thought it might be useful to assemble a list of tonescale functions in a single place.
In this notebook there are 3 categories of tonescale functions:

Michaelis-Menten: \frac{s_{1}x}{s_{0}+x}
Just a pure Michaelis-Menten function, no exponent, no contrast.
Michaelis-Menten
Display Post-Tonemap Contrast: s_{1}\left(\frac{x}{s_{0}+x}\right)^{p}
The variation Daniele posted, with the exponent applied in the display-referred domain.
Michaelis-Menten
Scene Pre-Tonemap Contrast: \frac{s_{1}x^{p}}{s_{0}+x^{p}}
The variation I posted above, with the exponent applied in the scene-referred domain.
I have included “spring function” variations, and variations with intersection constraints where possible.
A Note on Names
Just a brief interlude to justify my decisions against @Troy_James_Sobotka’s pedantic trolling in the previous meeting.
In the original Naka-Rushton 1966 paper, the function they use is a classic Michaelis-Menten function y=s_{1}\frac{x}{x+s_{0}}. I agree that, strictly speaking, using this name to refer to my above function is disingenuous.
I used this name because in this other paper the “Naka-Rushton equation” is referenced as f\left(x\right)=s_{1}\frac{x^{p}}{s_{0}^{p}+x^{p}}.
Also, technically speaking, the function Daniele posted is a Michaelis-Menten function with added contrast. Michaelis-Menten refers strictly to the hyperbolic function \frac{s_{1}x}{s_{0}+x}.
So yeah, maybe moving forward we call these functions what they are: the Michaelis-Menten function with contrast added in the display-referred domain or the scene-referred domain.
Michaelis-Menten Scene-Contrast
Here is another idea: if we apply contrast in the scene-linear domain using a pivoted power function with linear extension, and use a “pure” Michaelis-Menten equation with no exponent, you can almost get reasonably consistent results from 108-4000 nits peak brightness with a constant contrast setting.
m01 above ignores SDR and treats DCI 108 nit / 7.5 nit HDR as a valid datapoint in the spectrum of peak brightnesses.
m02 is a continuous range in middle grey, from 10 nits at 100 nit peak through 16 nits at 4000 nit peak.
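For what it’s worth, the pivoted power with linear extension can be written so the join at the pivot is continuous in both value and first derivative; a sketch, with assumed pivot and contrast values:

```python
def pivoted_contrast_linext(x, p=1.2, pivot=0.18):
    # pivoted power below the pivot; tangent line above it, so the
    # join is C1-continuous (the power's slope at the pivot is p)
    if x < pivot:
        return pivot * (x / pivot) ** p
    return pivot + p * (x - pivot)

# feeding this into a pure Michaelis-Menten keeps highlights on a
# constant scene-linear slope before the hyperbolic roll-off
mm = lambda x, s0=0.5, s1=1.1: s1 * x / (s0 + x)
print(mm(pivoted_contrast_linext(0.18)))
```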
Please keep those coming. I’m certainly lacking time, but this is great stuff that I have been enjoying between boring builds.
Is there a function you prefer in your latest batches?
Cheers,
Thomas
I will second Thomas’ comments, please do keep them coming.
Something I have been wondering about is whether only using peak output is appropriate for setting the parameters. I think we need to include the surround/adapting field luminance as one of the key parameters, to allow us to understand how to bridge between dim and dark surrounds and to try to rationalise the mid-grey luminance between the different outputs.
In a dark surround we assume the average picture luminance is scaled based upon the image luminance, so we could make an assumption of ‘10%’ of peak for SDR. When switching to a Rec 1886 output we switch our anchoring to be based on the reference surround value, which by magic is also 10% of peak luminance. But when migrating to HDR this might break, so obviously we need to be a little more involved in choosing the mapping function. This is just an off-the-cuff thought for consideration.
Kevin
Yes please do keep those points coming. It is an interesting discussion to be had.
I’ll add my own grain of salt with regard to mapping SDR to HDR: I found that getting the black levels in the same place is important for a consistent experience, given of course that both are viewed in the same viewing conditions.
I put together a Tonescale Model Selects colab, with the most successful models in my trash pile of experiments.
I would say they all have different pros and cons.
I like the simplicity and look of the pre-tonemap contrast with linear extension + Michaelis-Menten function. The Michaelis-Menten Spring DualContrast model in the above colab is the one I will use moving forward, I believe.
The post-tonemap contrast Michaelis-Menten Spring function is very neutral and performs very nicely in HDR, but the shadow contrast is too low in SDR. This is what I was previously fighting by modeling an exponent that started higher and decreased as peak luminance increased. I never liked this. Included in the Michaelis-Menten Spring model is an idea for a “default tonecurve LMT” which adds a bit of contrast to compensate, and seems to work okay through the transition to HDR.
And I figured I would throw in a refinement of an earlier experiment with the Piecewise Hyperbolic Tonescale Model. In this one I do like that values below middle grey can be kept strictly linear if desired. It is more controllable. I also like the stronger highlight appearance, and the ease with which you can transition from SDR to HDR with a consistent contrast.
All models include a parameter for surround compensation, using an unconstrained post-tonemap power function.
Tonescale_Selects_v01.nk (33.8 KB)
Here’s a nuke script with all the models as well.
I’ve also pushed OpenDRT v0.1.2 that uses the “dual contrast” tonescale model above.
While doing the OkishDRT I wanted to use the same tonescale as the ACES2 candidates use, the Michaelis-Menten Spring DualContrast, and needed its derivative to drive the path-to-white. It turns out there is a sudden transition at 0.18, where the tonescale changes from the linear extension to the Michaelis-Menten function, as seen in the following desmos plot: ACES2 MMSDC tonecurve. Not sure if this is a problem for the tonescale itself, but for using the derivative to do things it would probably be better if it were smooth.
I made a modified version of your Desmos plot, where C_1 is automatically calculated to make the curve continuous as you vary C_0, instead of being a slider.
You need to drop C_0 to 1.0 instead of 1.2 to make the first derivative smooth. That is probably not enough contrast for our purposes. But it is interesting to see where the kink comes from.
My original suggestion was smooth in both derivatives.