I’ve seen this issue when using ACES with OCIO in Nuke. In short, if we have pure black (0, 0, 0) pixels around CG renders and convert them from ACEScg to ARRI LogC Wide Gamut and back to ACEScg, the black value shifts slightly to (5.6e-11, 5.6e-11, 5.6e-11).
I understand we could say this is practically the same and shouldn’t be an issue, but I wanted to know what is going on under the hood and whether this is expected behaviour.
Yes, this is expected and caused by floating-point precision (and, depending on the context, the precision of the LUTs). Whether it matters depends on how many times you perform that conversion, but for practical purposes it should not.
Keep in mind that the ACES container for interchange is 16-bit half float, and that the smallest representable values at that precision are as follows:
Minimum Sub-Normal Representable Value: 5.96E-08
Minimum Normal Representable Value: 6.10E-05
5.6E-11 cannot even be represented at that precision!
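A quick way to check this is to quantize the residual value to half precision with NumPy, whose `float16` type follows IEEE 754 half precision (a minimal sketch; `smallest_subnormal` assumes NumPy 1.22 or later):

```python
import numpy as np

# The residual black value from the ACEScg -> LogC -> ACEScg round trip.
residual = 5.6e-11

# Quantizing to 16-bit half: the value underflows straight to zero.
half = np.float16(residual)
print(half)  # 0.0

# Smallest positive half-precision values, for reference.
print(np.finfo(np.float16).smallest_subnormal)  # smallest sub-normal, ~5.96e-08
print(np.finfo(np.float16).tiny)                # smallest normal, ~6.10e-05
```

So anything smaller than the sub-normal minimum simply flushes to zero once the image is written to a half-float container.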
That makes sense. I overlooked the fact that ACES operates on 16-bit half floats.
Thank you for the explanation.
Well, computations do not have to be performed in 16-bit half, but if you store or exchange the imagery, it technically has to be converted to 16-bit half if you go by the book!
Sounds good. Thank you again!