ODT B-spline in python and Interactive Plot

I have taken a first stab at translating the ssts (single-stage tone scale) function that I have been using in Matlab and CTL into python. There is also a very basic interactive plot, built using matplotlib, demonstrating the different parameters.

Python branch [scottdyer/aces-dev]

The goal here is to help others to grasp the behavior that this curve function allows control over and to understand how the different values affect the curve. One can change the x and y coordinates as well as the slope through each of the min, mid, and max points. The % sliders allow control over the “sharpness” of the bend in the shadows and highlights.
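To make the parameter set concrete, here is a minimal sketch of the controls described above: x, y, and slope at each of the min, mid, and max points, plus the two % "sharpness" values. The names and example values below are illustrative only, not the actual identifiers in the branch.

```python
from dataclasses import dataclass

@dataclass
class TsPoint:
    x: float      # input (scene) value at this control point
    y: float      # output (display) luminance, in nits
    slope: float  # slope of the curve through this point

@dataclass
class TsParams:
    minimum: TsPoint  # black point
    middle: TsPoint   # mid-gray
    maximum: TsPoint  # peak white
    pct_low: float    # "sharpness" of the bend in the shadows (0-100%)
    pct_high: float   # "sharpness" of the bend in the highlights (0-100%)

# Example: a 48-nit cinema-like setup (values are made up for illustration)
params = TsParams(
    minimum=TsPoint(x=0.0001, y=0.02, slope=0.0),
    middle=TsPoint(x=0.18, y=4.8, slope=1.55),
    maximum=TsPoint(x=16384.0, y=48.0, slope=0.0),
    pct_low=25.0,
    pct_high=75.0,
)
```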

Side note: I personally believe an ideal implementation of this (after we’ve determined the right ranges for the parameters) would not give user access to many of these parameters. Instead, the user would only input their display’s black luminance, peak white luminance, and desired luminance for mid-gray and the other values would be automatically determined. To that effect, I will be adding a second plot that should help illustrate exactly what I mean by this.

P.S. Please excuse the crudeness of my python code. I am far from an expert but I’m learning through these efforts. I am hoping that @Thomas_Mansencal or others more savvy in python than me can take a look and maybe even suggest some modules to make the interactive plots a little more “slick”. Also, the hope is that the ssts will eventually be easily added into Nuke for visual testing since it’ll already be coded in python.

Hey @sdyer,

Gave it a quick stab. How are you planning to derive the % parameters automatically? One can quickly kink the curve with them:

It would be great to be able to model a straight line too (and have one as a reference); right now it is impossible because of the range of the % parameters:

By the way, a Nuke implementation unfortunately could not really leverage the Python code directly. I think we would have to fully implement it with nodes, like @alexfry did for his RRT, or in C/C++.

Cool to see that taking shape btw!

Cheers,

Thomas

Yes, this allows full flexibility (well, mostly full flexibility - as you point out, I did set reasonable ranges for the values…) over all parameters. In the CTL code, I established the percentage for 48-nit and 10000-nit on the upper half and 0.02-nit and 0.0001-nit on the lower half of the tone scale, with simple linear interpolation for any values in between. This should become clearer in the “smarter” version that I plan to make tomorrow.
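A hypothetical sketch of the interpolation scheme described above: fix the highlight % at 48 nits and 10000 nits and linearly interpolate for any peak in between. The anchor percentages and the log10 interpolation domain below are assumptions for illustration, not the values in the CTL code.

```python
import numpy as np

# Assumed anchor values, for illustration only.
PCT_AT_48NITS = 0.75
PCT_AT_10000NITS = 0.55

def highlight_pct(peak_nits):
    """Interpolate the highlight 'sharpness' % from the display peak.

    Linear interpolation in log10(nits) between the two anchors;
    values outside the range clamp to the nearest anchor.
    """
    return float(np.interp(np.log10(peak_nits),
                           [np.log10(48.0), np.log10(10000.0)],
                           [PCT_AT_48NITS, PCT_AT_10000NITS]))
```

The same idea would apply on the lower half of the tone scale, with anchors at 0.02 nits and 0.0001 nits.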

Ok, I can easily change the allowable range of parameters which should allow for a straight line to be created.

This is beyond my experience, so I hope if and when it comes to that, you and/or @alexfry would be able to assist with how one might do this. @alexfry’s Nuke implementation of v1.0 was pretty impressive…

I hadn’t wanted to say anything, cuz I need to get permission to share, but on and off over the past several weeks, I’ve been picking at pilfering heavily from @alexfry’s implementation, and extending it with the goodness from @sdyer’s outputTransformExperiments aces-dev branch… I’ll have a look at how far I am today or tomorrow and see if I can get a Toolset or gizmo up on GitHub, with Scott’s and Alex’s (and Deluxe Entertainment’s) blessings. At the very least, I can send screen shots.

  • Update – 11/15 – sorry this is taking longer than I had anticipated. I had previously been working to make something akin to the “wireframe” parametric ODT example, using the hard-coded values for the C5 splines; I’ve since implemented the SSTS parametrization and coefficient derivation, and I’m hoping to find time to wrap things up with that later today or tomorrow. Just keeping you posted.

One thing: there’s not a really wonderful way to actually see the curve or its points/tangents directly. I’m having a look at driving a color lookup node with expressions if possible, but that may not be the most reliable representation. Worst case scenario, it’s a matter of plotting a scanline through an appropriately broad-ranged ramp. More soon…


I’ve pushed a few updates to my python branch.

I added a “constrained” plot that creates a similar interactive plot but where the user only inputs the min and max luminance of their display device. There is also a control for the luminance of mid-gray. If you play with this slider, you will see that it just shifts the curve left/right, i.e. an overall exposure adjustment. I also provided a few “presets” as examples of ways this might be used for different display device dynamic ranges.
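To see why the mid-gray control behaves as a pure exposure adjustment, here is a toy demonstration (the curve below is a placeholder, not the SSTS): shifting the curve along the log input axis is identical to applying a gain to the scene values before the curve.

```python
import numpy as np

def toy_tonescale(x):
    # Placeholder compressive curve, standing in for the SSTS.
    return x / (x + 0.18)

def shifted_curve(x, stops):
    # Evaluate the curve shifted by `stops` along the log2 input axis.
    return toy_tonescale(2.0 ** (np.log2(x) + stops))

x = np.array([0.01, 0.18, 1.0, 4.0])
# Shifting by one stop gives the same result as a 2x pre-gain:
same = np.allclose(shifted_curve(x, 1.0), toy_tonescale(2.0 * x))
```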

Note this is just an example, attempting to communicate my intent. The slopes at the end points are still set to 0, although we may ultimately decide to change this. Also, the decision about how many stops of scene exposure get mapped to display luminance still needs to be determined, as does the sharpness of the toe/shoulder. So I’m not sure my full wide open (“OCES”) and cinema curves are “correct”. I just modeled them roughly after where they are in v1.0.

The behavior of this interactive plot is more representative of the extent to which I think a system tone scale should be “controllable” by the user. I’m not in favor of full parameterization exposed to the user, but rather an intelligently designed tone curve algorithm that adjusts to the dynamic range of the display and consistently produces the same results given the same display parameters.


@sdyer: Just gave the new version a quick try. Exciting and promising!

A slope equal to zero at the end points will create inversion issues. Given how often I have to invert the tonemapper, I would be super keen to have it fully invertible, i.e. adopting a non-zero slope.
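A toy numeric illustration of the inversion issue (placeholder curves, not the SSTS): with a zero end-point slope, distinct scene values collapse to the same display value, so no inverse can recover them; even a tiny non-zero slope keeps the mapping one-to-one.

```python
import numpy as np

PEAK = 1.0

def flat_shoulder(x):
    # Slope 0 past the peak: everything above PEAK maps to PEAK.
    return np.minimum(x, PEAK)

def sloped_shoulder(x, slope=1e-4):
    # Small linear extension past the peak keeps the curve strictly monotonic.
    return np.where(x <= PEAK, x, PEAK + slope * (x - PEAK))

a, b = 2.0, 8.0
collapsed = flat_shoulder(a) == flat_shoulder(b)     # True: not invertible
distinct = sloped_shoulder(a) != sloped_shoulder(b)  # True: invertible
```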

I guess from here the next step is probably to apply the curve on images?

Cheers,

Thomas

Understood. I really wanted to illustrate the toolset that I think gives us the best control from a system standpoint. The exact parameters for end-point linear extension slope(s), sharpness, contrast, etc. all still need to be determined. But once locked down I hope we can use a formula similar to this one to allow for any dynamic range rather than just providing a small subset.

I have personally been using this formulation in order to support productions that have needed ACES Output Transforms for other dynamic ranges/settings than the few that ship in v1.0. Not “official ACES”, but I would have been making up a curve regardless, so it made sense to me to try to put some logic and reproducibility behind it. In those tests, the tone curve settings I chose based on v1.0 seem to have worked well, but I’d definitely like some experimental data to defend the settings that ultimately get picked. Applying to images for visual testing will definitely be a part of that…


Hey guys – just wanted to post my WIP of the SSTS nuke thing:

*Update – 11.20

  • Fixed two bugs – thanks @nick! Actually did some testing, but nothing too comprehensive yet.
  • Implemented the inverse SSTS algorithm… incorrectly. Nothing to see here. Don’t press the toggle that says “inverse”. Don’t do it.
  • First stab at the exposure shift jams

*Update – 12.01

  • Fixed the inverse SSTS implementation.
  • Added some bells and whistles to help communicate what the node is doing in the DAG
  • Got rid of the debug tab.
  • I’d hazard to say this is finished, pending any suggestions or requests…? Otherwise, next stop will be the parametric ODT gizmo…

Enjoy


Hi,
Sorry for the newbie question, but I’m having trouble getting this to run with my MSYS2 Mingw64 installation. I keep getting errors trying to load the numpy multiarray extension (some DLLs could not be found; there’s a long thread on the web indicating some versions are incompatible with others). Should I be using python 2.7 or 3.6? Also, installing matplotlib pulled in 44 dependencies and over 2 GB of stuff. Could you give me known installation steps?

Thanks,
Jim DiNunzio

I am far from a python expert, especially on other platforms. Perhaps someone else more savvy with python can assist you.

I wrote and tested this code on macOS 10.12.6 in python 2.7. I use Anaconda as my package and environment manager; matplotlib and numpy were installed with conda.


Hey @jimdinunzio,

Please do yourself a favour and install Anaconda; it is a Python distribution that ships the whole scientific ecosystem and it just works. The Python version should not matter for @sdyer’s code.

Cheers,

Thomas


Hi,
Ok, I got it working on CentOS after a bit of install effort and Google searches. I didn’t know installing python modules was such a pain. I am still interested in a good Windows installation.
Thanks,
Jim

Glad to hear you got it working.

Yes, python can be a bit frustrating, but I used it because it’s just a bit more open and accessible than Matlab and more usable than CTL, which is where the previous implementations that I was working with were developed.

Thanks. I saw mention of Anaconda and numpy in my searches, but didn’t know what it was or look it up. Sounds like a very useful platform for data analysis.
Jim

@Thomas_Mansencal Definitely off topic, but this assumes conda is already installed. What’s the best recommended way to install anaconda on a mac?

brew cask install anaconda ?
brew cask install miniconda ?
wget http://repo.continuum.io/miniconda/Miniconda3-4.3.0-MacOSX-x86_64.sh -O ~/miniconda.sh ?

Extra points for a nice CLI based method :wink:

I usually do brew cask install anaconda, which will install Anaconda to /usr/local/anaconda3/

I then create an environment so that I don’t pollute the main installation:
conda create --yes -n python-3.6 python=3.6

You can activate it as follows: source activate python-3.6 and then conda install your_package_name.

Cheers,

Thomas


Having a play with Zach’s Nuke implementation.

I seem to get slightly held down highlights vs the standard 48-nit style ODTs, but the result is pretty similar.

At 250 nits it’s very similar to my own hand-rolled 250-nit ODT. Very promising.

It certainly seems to be worth the near-term confusion for the long-term gains, vs the current RRT/ODT shared-responsibility tone curve.


Interesting @alexfry … can you elaborate? I think I know what you’re saying (and agree if I do) but I want to make sure I understand your perspective fully.

I feel like it’s harder to get a proper handle on what everything is doing when it’s split between two separate curves across two different files (three when you have to dig into ACESlib.Tonescales to get the actual inputs for the curves).

[edited to remove weirdness]

The two are conceptually sequential, but separated by two matrices, which are the inverse of each other. I understand the legacy of AP0 OCES behind it, but it seems unnecessary at this point.
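Numerically, the point is that a matrix immediately followed by its inverse is a no-op. Any invertible 3x3 stand-in shows it; the values below are made up for illustration, not the actual ACES matrices.

```python
import numpy as np

# Stand-in RGB-to-RGB matrix (made-up values, not an actual ACES matrix).
M = np.array([[1.45, -0.24, -0.21],
              [-0.08, 1.18, -0.10],
              [0.01, -0.01, 1.00]])
rgb = np.array([0.18, 0.18, 0.18])

# One stage ends by multiplying by M; the next starts with inv(M):
round_trip = np.linalg.inv(M) @ (M @ rgb)
identity = np.allclose(round_trip, rgb)  # the pair contributes nothing
```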

I’m not even 100% clear on what the “tonescale” portion of the RRT is actually doing.
Line 61 // --- Apply the tonescale
It seems to be pulling the bottom end down a bit, but I’m not entirely clear why.
Is the RRT section the toe and the ODT section the shoulder? Is this explained anywhere?

I can understand why people are leery of messing with the RRT, which is meant to be a rock, when we’re only meant to be talking about re-thinking the ODTs. But it feels like it’s worth it in this case to simplify the overall Display Transform going forward.

Making a quick Nuke prototype where I snip the RRT at line 60, and hand off to a modified ODT that uses the SSTS function just feels much cleaner.


It’s actually more significant than a matter of convenience. @sdyer pointed out in the working group meeting some of the real limitations associated with the tone scale being split into two parts.

I came to exactly the same conclusion when we were doing the forward work for the ODT Working Group. Here’s a few slides that I shared with the small group that was looking into all this over the summer.

ToneScaleSlides.pdf (308.9 KB)

This requires illustration. I’ll see if @sdyer can work something up for us to discuss.

:+1:

I think it’s important to recognize that the unified tone scale model that we presented at the last ODT meeting doesn’t eliminate the RRT. It simply restructures the code so there’s a single, more flexible, code block that represents the entire output transform rather than having the code split. The parametric model we described during the meeting would allow users to use that unified output transform to get any output they need, including OCES.