Continued from today’s call…
I now see Jim’s concern and think I’ve traced the origin of the “noClamp” “style” attribute. I believe it was a compromise introduced back in v2.0 of the spec.
The historical threads jibe with the same differences we highlighted on the call, namely, understanding <Range>’s intended behavior in two different ways:
- <Range> is a scale and always clamps.
- <Range> is just a scale (but not necessarily a range-limiting scale, i.e. no clamping). In this context, you could use the node to ‘range’ (or ‘scale’) floating-point values from [-0.5, 222] into [0, 1], and values such as -1 or 230 would pass through as -0.00224719101 or 1.03595506, respectively.
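As a sketch of that second, non-clamping interpretation (the function name is mine, not from the spec):

```python
def range_no_clamp(value, min_in=-0.5, max_in=222.0, min_out=0.0, max_out=1.0):
    """Scale value from [min_in, max_in] toward [min_out, max_out] with no clamping."""
    scale = (max_out - min_out) / (max_in - min_in)
    return (value - min_in) * scale + min_out

# Out-of-range inputs pass straight through, merely scaled:
print(range_no_clamp(-1.0))   # ≈ -0.00224719101
print(range_no_clamp(230.0))  # ≈ 1.03595506
```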
I propose two (maybe three) solutions, on which I’d like to hear opinions…
If the goal with this spec revision is a more understandable, user-friendly approach, then my suggestion is:
- Split <Range> into two separate nodes, <Clamp> and <Scale>, since that is how each would behave. Yes, this would add yet another node for implementers, but for a LUT creator or user it would serve as a “convenience” function. Behind the scenes, the implementation would most likely still create a scaling matrix with offsets to apply to the RGB channels, populating a 3x4 or 4x4 matrix along the diagonal and in the 4th column. A bit more work for implementers, but perhaps slightly more direct for the user.
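For instance, a minimal sketch of the 3x4 matrix such an implementation might build behind the scenes (function names are illustrative, not from the spec):

```python
def scale_to_matrix(min_in, max_in, min_out, max_out):
    """3x4 matrix: per-channel scale on the diagonal, offset in the 4th column."""
    scale = (max_out - min_out) / (max_in - min_in)
    offset = min_out - min_in * scale
    return [[scale if col == row else 0.0 for col in range(3)] + [offset]
            for row in range(3)]

def apply_matrix(m, rgb):
    """Multiply the 3x4 matrix by (R, G, B, 1)."""
    return [sum(row[c] * v for c, v in enumerate(rgb)) + row[3] for row in m]

m = scale_to_matrix(-0.5, 222.0, 0.0, 1.0)
# Endpoints map to 0 and 1; 230 passes through unclamped (≈ 1.036):
print(apply_matrix(m, [-0.5, 222.0, 230.0]))
```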
If the goal is to simplify the spec for implementers, then my suggestion switches to:
- Remove the "noClamp" option. Those who want no clamping use a matrix, and those who just want to clamp use the range, with either all four sub-elements or just a min or max pair of sub-elements.
- Clearly state for LUT creators/users that <Range> is intended for uses such as limiting range, or scaling to a limited range (e.g. SMPTE legal range).
- Instruct LUT creators/users who do not want clamping to occur to use a <Matrix> to achieve non-clamping ‘range scaling’.
For example, we’d show how to populate the diagonal of a matrix with the scale value defined as

    scale = (maxOutValue - minOutValue) / (maxInValue - minInValue)

and then the offset column would be equal to

    offset = minOutValue - minInValue * scale
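Plugging the earlier [-0.5, 222] → [0, 1] example into those two formulas confirms that they map the input endpoints onto the output endpoints:

```python
min_in, max_in = -0.5, 222.0
min_out, max_out = 0.0, 1.0

scale = (max_out - min_out) / (max_in - min_in)
offset = min_out - min_in * scale

print(min_in * scale + offset)  # 0.0  (= minOutValue)
print(max_in * scale + offset)  # ≈ 1.0 (= maxOutValue)
```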
A third option would be to leave things “as-is” but supplement the spec with better guidance on the effect of the present MIN( maxOutValue, MAX( minOutValue, value)) construct…
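For reference, that construct in executable form (a sketch; the spec’s actual pseudo-code may differ in naming):

```python
def clamp(value, min_out_value, max_out_value):
    # MIN( maxOutValue, MAX( minOutValue, value ) )
    return min(max_out_value, max(min_out_value, value))

print(clamp(-1.0, 0.0, 1.0))   # 0.0 -- pinned to the minimum
print(clamp(230.0, 0.0, 1.0))  # 1.0 -- pinned to the maximum
print(clamp(0.5, 0.0, 1.0))    # 0.5 -- in-range values pass through
```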