Exclusion of Integers

A few reasons I have for wanting to remove integers from the spec:

  • Simplicity - one format is easier to maintain than six.
  • Few implementers are going to have a full integer workflow - meaning it is rare that you would read the LUT in as integer, perform all ops in integer space, and save out as integer without any floating-point intermediate steps.
  • Managing state with in/outs in between nodes is a pain.

I feel that any LUT implementation will need some degree of processing at parse time, so multiplying floats by 1023 or 4095 is not a great burden. Parsing XML is never likely to need to be a real-time per frame process.
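
As a very rough sketch of what that parse-time work looks like (bit depths are keyed by plain integers here, independent of whatever attribute encoding the spec uses), the scaling is a one-liner in each direction:

SCALE = {8: 255.0, 10: 1023.0, 12: 4095.0, 16: 65535.0}

def to_normalised(code_value, bits):
    # integer table entry -> 0-1 float used internally
    return code_value / SCALE[bits]

def to_code_value(value, bits):
    # 0-1 float -> nearest integer code value for the target device
    return int(round(value * SCALE[bits]))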

Likewise for writing back to CLF from an integer implementation.

My testing indicates that with sufficient decimal places (5 for 16-bit, if I remember correctly) an integer can be round-tripped to a text float representation without loss.

But as was mentioned in the VWG, LUTs are always approximations of a transform in any case, so is absolute precision really a big issue?

I’m repeating this from another post, just because it’s an opinion of mine that symmetrically addresses both questions:
I think the questions on adding metadata like colorspace conversions, versus keeping integers/halfs/floats, pave the way to an integrated philosophy, whose main driver, in my opinion, is which applications CLF should be designed to work natively with:

  • If low-computational-power devices (LUT boxes, monitors, etc.) are in scope as well, so that they can read and use a CLF directly, then integers and half-floats should be kept, whereas references to colorspace transforms and higher-level color operators (possibly with the exception of CDLs) should be removed.
  • If, conversely, CLF should be the format of choice only for “devices” with some computational power, then higher-level operators should be factored in, while integers/halfs may be let go without a tear, as those higher-powered applications can be expected to do the back-end conversions (in a controlled and, as far as possible, pre-determined way).

That said, I think that keeping rounding errors away from LUTs (because those who use them in production may not deeply understand this) is a benefit to all. Therefore sticking with floats only is a good opportunity to decrease rounding errors, particularly when several nodes are stacked in a CLF, or several LUTs are stacked on one another.

A hardware LUT box is always only going to be able to support a subset of what is possible in a format like CLF, which can chain an arbitrary number of operations. So from an unconstrained CLF, some degree of conversion is always likely to be necessary to prepare it for use in a hardware device.

Conversely, taking a LUT box such as a BoxIO, which can contain a 12-bit input 1D LUT, a 12-bit 33^3 cube, and a 12-bit output 1D LUT, my calculations suggest that that sequence of LUTs could be written to a CLF file with only five decimal places, and could then be reloaded into the BoxIO with 12-bit table values identical to the originals.

My calculations also suggest that eight decimal places is sufficient to unambiguously represent any half-float value. So I think rawHalfs just provides yet another way to represent the same numbers in the file. I think keeping it simple and having only float values as decimal text in the file reduces the number of permutations necessary, without decreasing functionality.
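
For what it’s worth, here is a quick brute-force sketch of that eight-decimal-place claim (my own check, assuming NumPy’s float16 behaves as a standard IEEE-754 half):

import numpy as np

# All 65536 half-float bit patterns, reinterpreted as float16 values.
halves = np.arange(0x10000, dtype=np.uint32).astype(np.uint16).view(np.float16)

failures = 0
for h in halves:
    if not np.isfinite(h):
        continue  # skip NaN and +/-Inf
    text = '{0:.8f}'.format(float(h))  # 8 decimal places as text
    if np.float16(float(text)) != h:   # re-quantise to half and compare
        failures += 1

print(failures)  # expected: 0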

Having written a basic CLF implementation myself (it’s very much a work in progress in Colour Science for Python) I feel that the fewer permutations there are, the easier it will be to persuade developers to implement it fully in their apps.

Hi all,

Please take a look at this gist containing a Python script I wrote that illustrates 32-bit floating-point accuracy.

It performs a test to verify that 8, 10, 12, and 16-bit code values can be converted to normalized 32-bit floats, rounded to only 5 decimal places (!!!), and reconverted losslessly back to the original integer code value.
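
For anyone who can’t open the gist, a minimal sketch of that kind of test (my own reconstruction, not the gist itself) looks something like this:

# For each bit depth, normalise every code value, round to 5 decimal
# places as text, rescale, and check the nearest integer matches.
for bits in (8, 10, 12, 16):
    max_cv = 2 ** bits - 1
    lossless = all(
        int(round(float('{0:.5f}'.format(cv / float(max_cv))) * max_cv)) == cv
        for cv in range(max_cv + 1)
    )
    print('{0:2d}-bit round trip lossless: {1}'.format(bits, lossless))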


I wrote a similar Python script, and got the same result.

And taking it a step further, is there a good reason to differentiate between float16 and float32? The current spec says that processing should be done to at least 32-bit precision. The sample implementation interpolates even for halfDomain LUTs and returns higher than half-float precision. Clamping to the half range can be performed with a Range node if required.
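
(For completeness, the clamp such a Range node would apply is trivial; a sketch, assuming the usual half-float maximum of 65504:)

HALF_MAX = 65504.0  # largest finite half-float value

def clamp_to_half_range(x):
    # limit a higher-precision result to the representable half-float range
    return max(-HALF_MAX, min(HALF_MAX, x))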


Thank you for starting this thread. I still have to find a convincing argument to keep integer support in the specs. The argument of ‘higher precision’ or ‘exactly the intended values’ doesn’t hold, thanks to Greg and Nick’s tests. Also, every device I know of has the computing power to parse a floating-point-based LUT and convert it to integer before sending it to the device’s memory. Furthermore, some CLF nodes can’t be written as integer (e.g. matrices), and most of the CLF-defined LUTs will have to be transformed by the manufacturer’s software to match a hardware implementation (a manufacturer will have to support CLF in all of its configurations or not at all).
Now, let’s try to rope in more manufacturers (please invite anyone you know who could contribute to this discussion, I will) and try to find a sensible argument to keep integer in CLF. Otherwise, let’s remove it.

I’m going to rephrase this discussion as a question: “If CLF is specified to contain only float data, how, if at all, would that restrict its usage?” Or maybe even more simply, “Do we lose anything if we eliminate integers from the specification?”

I am also in favor of dropping integer support from CLF.

Yet, if integers are out, then, for the very same reasons, distinctions between half / float16 / float32 / etc. data types must also be expunged. More compactly:

A number in CLF is a string of digits and it doesn’t lose any meaning whether it has 4, 5, or 12 significant digits.
Mantissa/mnemonic/word-length/IEEE-754 issues are lower-level parameters that don’t belong in the human-readable, CPU-agnostic representation of the same numbers as used in CLF.

Agreed. That said (and it might warrant its own discussion thread), the CLF specification should also probably warmly recommend (dictate, cough cough) the number of significant digits written to the file, e.g. 15 for double on my platform. We have all been bitten hard by those 4-digit-precision matrices that don’t round trip properly, e.g. the sRGB ones. There is certainly a space-consumption issue with that, but it could be a good trade-off in exchange for increased precision.
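
To make the round-trip point concrete, here is a small sketch of mine (17 significant digits is the count guaranteed to round-trip an IEEE-754 double exactly through text, whereas 4 generally is not):

import random

random.seed(0)
values = [random.random() for _ in range(10000)]

# Round-trip each value through text with a given number of significant
# digits and check for exact recovery.
exact_17 = all(float('{0:.17g}'.format(v)) == v for v in values)
exact_4 = all(float('{0:.4g}'.format(v)) == v for v in values)

print('17 digits exact: {0}, 4 digits exact: {1}'.format(exact_17, exact_4))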

Hi Thomas.
Arithmetic precision is a much-discussed topic in engineering classes.
I understand your point and the practical reasoning behind it (we don’t want applications “ruining” a CLF because they simply strip 3 decimal digits from the look-up values whenever they process the LUT).
Despite that, on the engineering side, it’s safer to say that the requirement should be something like:

An application will always produce numbers with the same decimal precision within the same CLF element (e.g. all values of an IndexMap, or of a 1D LUT). For example, there will be no fields mixing numbers with, say, 3 decimal digits and numbers with 6 decimal digits. If this happens, zeroes should be added to compensate for the missing digits.
and
An application processing an existing, previously-read CLF should not decrease the number of significant digits present in the numeric representations from the source CLF (or portions of it), unless such behavior is explicitly forced.

I guess I’m talking about adopting or recommending a fixed number of significant digits/a fixed decimal precision, while you are discussing maintaining existing decimal precision. They are related topics, but not quite the same to me.

All,

This is a really interesting discussion and I’ve written, and deleted, about 5 posts because I find myself going back and forth on the issue. On one hand, my background working for a hardware company (Intel) makes me really uncomfortable when talking about eliminating support for integer values. That said, I can’t seem to poke holes in the points that others are bringing up either.

I’ve looked at @Greg_Cotten’s code and initially thought to myself, “How could a float with fewer significant digits than a float16 hold an int16 accurately?” The answer here seems to be rounding. This seems to suggest that at the very least we’d need to clearly specify what the rounding rules are when going from float to int.
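
For reference, here is a tiny illustration (mine, not from the thread) of two candidate rules a spec could mandate for float-to-int conversion; they only disagree on exact .5 cases:

from decimal import Decimal, ROUND_HALF_EVEN, ROUND_HALF_UP

v = Decimal('2.5')
print(v.quantize(Decimal('1'), rounding=ROUND_HALF_UP))    # 3 (round half up)
print(v.quantize(Decimal('1'), rounding=ROUND_HALF_EVEN))  # 2 (round half to even)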

I think in the end this comes down to use case and requirements. @walter.arrighetti’s post resonates with me. Why keep complexity to support hardware directly when other aspects of the format (not least that the whole file needs to be parsed) require software to interpret it?

I’m still not sure I’m completely on board with the idea of eliminating ints but I’m starting to see the arguments for it.

Probably not a big concern, but it is worth noting that the files likely get bigger by about 33% when using a 5-significant-digit float vs a uint16.

Specifying rounding rules certainly seems like a good idea for robustness.

However, I don’t think it actually ever comes into play here. I wrote some quick Python which iterates over all uint16 values, divides by 65535, rounds to 5 decimal places and then multiplies by 65535. The maximum difference from the original integer is 0.32765. So it never gets to the delta = 0.5 situation where rounding rules make a difference.

n = 65535.0
max_error = 0.0

for i in range(int(n) + 1):
    s = '{0:0.5f}'.format(i/n) # 5 decimal place float string
    f = float(s)
    delta = abs((f*n) - i) # difference after round trip to string
    if delta > max_error:
        max_error = delta

print(max_error)

Hi Nick, I haven’t yet read (or run) your code because I’m in a rush to get out right now, but thinking about it roughly, it’s actually a two-fold scenario:

  • If you convert float to uint (as in your code’s case), you never have problems as long as the starting floating-point number is an order of magnitude less than the maximum uint value (so, for 16 bits, < 6500). Otherwise the rounding algorithm makes sense.
  • If you convert uint to float, you will have troubles for numbers less than 1, because the conversion might not be reversible.

So, if you stick with the above two cases you’re safe saying rounding is not relevant. (The former case is relevant, as code values in the linear-light world of VFX only reach 6000 on highlights.)

In all other cases you should be mandating/requiring rounding rules, but then it will, again, become a restriction on both hardware and software manufacturers, because this might be completely at odds with their products’ internal logic or constraints.

Hi @walter.arrighetti, I’m afraid I don’t follow. I think we may be talking about different things.

I am talking about a situation where you have specific n-bit integer values that you have calculated for a LUT in an integer-based hardware device. You want to be sure that writing those to the table in a LUT file as decimal floats ranged 0-1, calculated by dividing the integer by 2^n - 1 (e.g. 1023 for 10-bit), will allow you to read those back and convert to integers again, and result in identical integers to those you started with. My code shows that this works perfectly, with no concerns about rounding method, even for 16-bit integers when the floats are written to the file as text with five decimal places.


Reigniting this discussion based on the conversation in the last VWG meeting…

Much was discussed on this in the first half of the last meeting. And while we started by agreeing that float encoding could encompass everything, we eventually talked ourselves back to asking, “Why is such a change necessary?” A large part of the discussion covered how, if we used float, we would indicate scaling. For example, we could just make an existing 10-bit LUT into float by adding a decimal point to everything (e.g. 1023.0). But the same LUT could also be made into float and have it mean something else if everything was divided by 1023, such that 1023 -> 1.0. Would we then need a reference value to indicate scaling? There was much more to it than these small points, but essentially we arrived at:

“If supporting multiple encodings is not the major impediment to CLF being implemented, then we should move on.”

Some points that were made:

  • It doesn’t add that much complexity to the implementation. It’s very easy to scale based on the inBitDepth and outBitDepth attributes.
  • There is some convenience in being able to copy/paste LUT entries from other integer LUT formats, change the header/footer in a text editor, and use as CLF.
  • Integers can be a bit more “human-readable” than always seeing lots of decimal places even for simple LUTs. Not a show-stopper, but sometimes reading 10- or 12-bit values in a text editor when examining a LUT is a bit more intuitive.

So for now, we’re definitely intending to leave support for multiple encodings. For those who weren’t able to join the meeting or review it yet, are there any thoughts?