Nuke Multi-Layer EXR in ACES: Issues on older hardware

Hello!

So we have a pretty niche issue we’ve encountered…

I was hoping that one of the devs might be able to let us know if a specific CPU instruction set is required to use ACES with multi-layer EXR files.

We are trying to render EXRs from Nuke with multiple layers:
RGBA, Layer1, Layer2, Layer3, etc.
All layers have 4 channels (R, G, B and A). When rendering, most of our machines render with no issues; however, there is one batch of machines running Intel Xeon E5620 CPUs.

https://ark.intel.com/content/www/us/en/ark/products/47925/intel-xeon-processor-e5620-12m-cache-2-40-ghz-5-86-gt-s-intel-qpi.html

I appreciate it's quite dated now and runs older instruction sets; however, it has been absolutely fine with everything we've thrown at it, apart from anything in ACES space.

I set up a test script: with colour management set to OCIO and the OCIO config set to ‘nuke-default’, it rendered a multi-layer EXR from Nuke as expected, with all layers intact and accessible when read back in.
When I change the OCIO config to either ACES 1.2 or ACES 1.3 Studio, the main RGBA layer gets corrupted: the image fills with either NaN pixels or pixels with RGB values of 65504.
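
For reference, my test script boils down to something like the sketch below. This is a rough reconstruction rather than the exact script; the knob names and the OCIO config identifiers are my assumptions based on recent Nuke versions and may need adjusting.

```python
# Sketch of the multi-layer EXR test (knob names / config identifiers
# are assumptions and may vary between Nuke versions).
import nuke

# Project colour management: OCIO with the config under test.
nuke.root()['colorManagement'].setValue('OCIO')
nuke.root()['OCIO_config'].setValue('aces_1.2')  # compare against 'nuke-default'

# Register an extra four-channel layer alongside the default rgba.
nuke.Layer('Layer1', ['Layer1.red', 'Layer1.green', 'Layer1.blue', 'Layer1.alpha'])

# Simple source copied into the extra layer, then written as a multi-layer EXR.
src = nuke.nodes.CheckerBoard2()
copy = nuke.nodes.Copy(inputs=[src, src],
                       from0='rgba.red',   to0='Layer1.red',
                       from1='rgba.green', to1='Layer1.green',
                       from2='rgba.blue',  to2='Layer1.blue',
                       from3='rgba.alpha', to3='Layer1.alpha')
write = nuke.nodes.Write(inputs=[copy],
                         file='/tmp/multilayer_test.####.exr',
                         channels='all')
nuke.execute(write, 1, 1)
```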

Any suggestions would be greatly appreciated as this has essentially cut our farm in half.

I’ve even tried disabling the Meltdown and Spectre vulnerability fixes as a stab in the dark with no joy…

Thanks in advance!

So a development on this issue!

It seems to only happen when rendering to .exr through a Write node if an ADX10 (or ADX16) transform has been done at some point in the script.

I thought this might have been a negative-value thing, but in testing that is not the case.
It only happens if an OCIOColorSpace node is used with either the in or out colourspace set to ADX10 / ADX16 (the other colourspace can be any other value) and the script is rendered on a machine with an Intel Xeon E5620 CPU.
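
To make the repro concrete, it's roughly the node setup below. Treat it as a sketch: the colourspace names are my guesses and, depending on the config, may carry a family prefix (e.g. ‘Input - ADX - ADX10’).

```python
# Hypothetical minimal repro: any source through an OCIOColorSpace with
# ADX10 (or ADX16) on either side, then written out as EXR.
import nuke

src = nuke.nodes.CheckerBoard2()
to_adx = nuke.nodes.OCIOColorSpace(inputs=[src])
to_adx['in_colorspace'].setValue('ACES2065-1')   # any source space
to_adx['out_colorspace'].setValue('ADX10')       # name as listed in the loaded ACES config

write = nuke.nodes.Write(inputs=[to_adx],
                         file='/tmp/adx_repro.####.exr',
                         channels='all')
nuke.execute(write, 1, 1)
```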

If anyone has any suggestions / solutions they would be greatly appreciated!

Cheers!

Another update in case people have encountered the same issue.
The Foundry have replied and currently think it is related to a known bug:
BUG 533781
https://support.foundry.com/hc/en-us/articles/9914005541650-ID-533781-OCIOColorSpace-produces-incorrect-transforms-for-some-negative-colour-values-on-certain-CPUs-creating-artifacts

According to that article, there is a small subset of machines that still fail regardless of the suggested OCIO_OPTIMIZATION_FLAGS settings.
Unfortunately the machines we have fall into this group :frowning:
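
For anyone else going down the OCIO_OPTIMIZATION_FLAGS route, this is roughly how it can be exported for a farm launch. The variable has to be in the environment before Nuke starts so OCIO picks it up; the value ‘0’ below is my assumption for "disable all optimisations" as a blunt test, and the Foundry article lists the values they actually suggest trying. Paths and script names are placeholders.

```python
# Sketch: export OCIO_OPTIMIZATION_FLAGS before launching Nuke on a farm node.
import os
import subprocess

env = os.environ.copy()
env['OCIO_OPTIMIZATION_FLAGS'] = '0'  # assumption: 0 = no optimisations

# Placeholder command line for whatever your farm wrapper actually runs.
subprocess.run(['/usr/local/Nuke15.0v4/Nuke15.0', '-t', 'render_job.py'],
               env=env, check=True)
```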

If anyone here has some suggestions to work around this, that would be awesome.
I'll try to update with any fixes The Foundry may or may not come back with.

The Foundry have done some digging and traced the issue to a bug within the Imath version used in the OCIO version included with Nuke 14.1v1 (as well as 14.1v4 and 15.0v4).

It’s looking like this was a bug fixed in Imath version 3.1.10:

Bug fixes:

  • Fix half to float giving wrong results on older x86_64 CPUs on Windows
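
My assumption (not confirmed by the Foundry) is that the "older x86_64 CPUs" in that fix are ones without the F16C hardware half/float conversion instructions, which the E5620 predates. If that's right, a quick Linux-only way to flag at-risk render nodes might be something like this; on Windows you'd need a tool such as coreinfo or the third-party py-cpuinfo module instead.

```python
# Flag machines whose CPU lacks the F16C instruction set (Linux /proc/cpuinfo).
def has_f16c(cpuinfo_path='/proc/cpuinfo'):
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith('flags'):
                return 'f16c' in line.split()
    return False

if __name__ == '__main__':
    print('F16C supported:', has_f16c())
```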

Not sure how I could use this to test whether it has indeed fixed the problem; I guess it'll be a case of wait and see when the Foundry roll out their next release.
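
In the meantime, a crude sanity check on any new build might be to render the ADX repro, read it back in and sample for NaN or 65504 pixels, something along these lines (untested sketch; the file path and sample points are placeholders):

```python
# Crude check: read the rendered EXR back and sample for NaN or 65504.0
# (the half-float max we see in the corrupted frames).
import math
import nuke

read = nuke.nodes.Read(file='/tmp/adx_repro.0001.exr')

suspect = False
for x, y in [(10, 10), (100, 100), (500, 500)]:   # arbitrary sample points
    for chan in ('rgba.red', 'rgba.green', 'rgba.blue'):
        v = nuke.sample(read, chan, x, y)
        if math.isnan(v) or v >= 65504.0:
            suspect = True
            print('Suspect value %r in %s at (%d, %d)' % (v, chan, x, y))

print('Looks corrupted' if suspect else 'Looks clean')
```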

As with the other posts, I’ll do my best to update with findings.
