The bit-depth attributes in CLF have caused a lot of confusion over the years. Following up on my action item from the other thread, I’m proposing the following to help clarify.
A string that is used by some ProcessNodes to indicate how array or parameter values have been scaled. The supported values … (same as before).
5.1.2 Input and Output to a ProcessList
Applications often support multiple pixel formats (e.g. 8i, 10i, 16f, 32f, etc.). Often the actual pixel format to be processed may not agree with the inBitDepth of the first ProcessNode or the outBitDepth of the last ProcessNode. (Note that the ProcessList element itself does not contain global inBitDepth or outBitDepth attributes.) In some cases an application may therefore need to rescale a given ProcessNode to be appropriate for the actual image data being processed.
For example, if the last ProcessNode in the ProcessList is a LUT1D with an outBitDepth of 12i, it indicates that the LUT Array values are scaled relative to 4095. If the application wants to produce floating-point pixel values, it should therefore divide the LUT Array values by 4095 before processing the pixels (according to 5.1.4). Likewise, if the outBitDepth was 32f and the application wants to produce 12i pixel values, it would multiply the LUT Array values by 4095. Note that in this case, since the result of the computations may exceed 4095, the application would need to clamp, round, and quantize the value for integer output.
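To make the arithmetic concrete, here is a minimal sketch (not normative) of the rescaling described above. The variable names are illustrative, not from the spec, and the LUT entries are made-up example data:

```python
# Rescale a LUT1D's Array values from an integer outBitDepth to 32f,
# per the scale factors in 5.1.4.
OLD_SCALE = 4095.0  # outBitDepth of 12i -> 2**12 - 1

lut_values = [0.0, 1024.0, 2048.0, 4095.0]  # example 12i-scaled entries

# For floating-point output, divide by the old scale (the new scale is 1.0):
lut_32f = [v / OLD_SCALE for v in lut_values]
# The array values themselves are not clamped or quantized by this step.
```

Going the other direction (32f to 12i) would multiply by 4095 instead; only the final pixel values, not the array values, would then need clamping and rounding.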
5.1.3 Input and Output to a ProcessNode
In order to ensure that the scaling of parameter values of all ProcessNodes in a ProcessList is consistent, the inBitDepth of each ProcessNode must match the outBitDepth of the previous ProcessNode (if any).
Please note that an integer inBitDepth or outBitDepth of a ProcessNode does not indicate that any clamping or quantization should be done. These attributes are strictly used to indicate the scaling of parameter and array values. As discussed in 5.1.1, processing precision is intended to be floating-point.
Furthermore, since the processing precision is intended to be floating-point, the inBitDepth and outBitDepth only control the scaling of parameter and array values and do not impose range limits. For example, even if the outBitDepth of a LUT Array is 12i, it does not mean that the array values must be limited to [0,4095] or that they must be integer values. It simply means that a scale factor of 4095 is to be used when rescaling to 32f (as per 5.1.4).
Because processing within a ProcessList should be done at floating-point precision, applications may optionally want to rescale the interfaces of all ProcessNodes “interior” to a ProcessList to be 32f according to 5.1.4. As discussed in 5.1.2, applications may want to rescale the “exterior” interfaces of the ProcessList based on the type of pixel data being processed.
For some applications, it may be easiest to simply rescale all ProcessNodes to 32f input and output bit-depth when parsing the file. That way, the ProcessList may be considered a purely 32f set of operations and the implementation therefore does not need to track or deal with bit-depth differences at the ProcessNode level.
5.1.4 Conversions between Integer and Normalized Float Scaling
As discussed above, the inBitDepth or outBitDepth of a ProcessNode may need to be rescaled in order to accommodate the pixel data type being processed by the application. This means that the array or parameter values of the ProcessNode may need to be rescaled.
The scale factor associated with the bit-depths “8i”, “10i”, “12i”, and “16i” is 2^n - 1. The scale factor associated with the bit-depths “16f” and “32f” is 1.0.
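The mapping from bit-depth string to scale factor can be expressed in a few lines. This is a sketch under the assumption that only the six bit-depth strings listed in the spec are passed in; the function name is my own:

```python
def scale_factor(bit_depth: str) -> float:
    """Return the scale associated with a CLF bitDepth string (per 5.1.4)."""
    if bit_depth.endswith("i"):       # "8i", "10i", "12i", "16i"
        n = int(bit_depth[:-1])
        return float(2 ** n - 1)      # 2^n - 1
    return 1.0                        # "16f" and "32f"
```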
To rescale Matrix, LUT1D, or LUT3D Array values when the outBitDepth changes, the scale factor is newScale / oldScale. For example, to convert from 12i to 10i, multiply array values by 1023/4095.
To rescale Matrix Array values when the inBitDepth changes, the scale factor is oldScale / newScale. For example, to convert from 32f to 10i, multiply array values by 1/1023.
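The two Array rescaling rules above can be sketched as follows. The helper names are hypothetical; the scales would come from the bit-depth-to-scale mapping in 5.1.4:

```python
def rescale_for_out_change(values, old_scale, new_scale):
    # Matrix, LUT1D, or LUT3D Array values scale by newScale / oldScale
    # when the outBitDepth changes.
    return [v * (new_scale / old_scale) for v in values]

def rescale_matrix_for_in_change(values, old_scale, new_scale):
    # Matrix Array values scale by oldScale / newScale
    # when the inBitDepth changes.
    return [v * (old_scale / new_scale) for v in values]

# E.g. converting an outBitDepth from 12i to 10i multiplies by 1023/4095,
# and converting a Matrix inBitDepth from 32f to 10i multiplies by 1/1023.
```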
To rescale Range parameters when the inBitDepth changes, the scale factor for minInValue and maxInValue is newScale / oldScale. To rescale Range parameters when the outBitDepth changes, the scale factor for minOutValue and maxOutValue is newScale / oldScale.
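A sketch of the Range parameter rescaling, with hypothetical helper names; note that both directions use the same newScale / oldScale factor, applied to the input-side or output-side parameters respectively:

```python
def rescale_range_in(min_in, max_in, old_scale, new_scale):
    # minInValue / maxInValue scale by newScale / oldScale
    # when the inBitDepth changes.
    f = new_scale / old_scale
    return min_in * f, max_in * f

def rescale_range_out(min_out, max_out, old_scale, new_scale):
    # minOutValue / maxOutValue scale by newScale / oldScale
    # when the outBitDepth changes.
    f = new_scale / old_scale
    return min_out * f, max_out * f
```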
Please note that this is simply a scale factor; in none of the above cases should clamping or quantization be applied.
Aside from the specific cases listed above, changes to inBitDepth and outBitDepth do not affect the parameter or array values of a given ProcessNode.
If an application needs to convert between different integer pixel formats or between integer and float (or vice versa) on the way into or out of a ProcessList, the same scale factors should be used. Note that when converting from floating-point to integer, the values also need to be clamped, rounded, and quantized.
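For the pixel-format conversion at the ProcessList boundary, a minimal sketch (function names are my own) showing that the integer direction needs clamp-and-round while the float direction is a pure scale:

```python
def float_to_int_pixel(v: float, scale: float) -> int:
    # Scale up, then clamp to the integer code range and round,
    # as required for integer output.
    scaled = v * scale
    clamped = min(max(scaled, 0.0), scale)
    return int(round(clamped))

def int_to_float_pixel(code: int, scale: float) -> float:
    # Integer-to-float needs only the scale factor; no clamping is required.
    return code / scale
```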