The recording and notes from meeting #164 are now available.
From meeting notes:
We need some test images and CTL renders of those through all the rendering transforms, and a way of comparing another implementation against those references.
There’s also the ASC StEM2 image material that could be used: https://dpel.aswf.io/asc-stem2/. Unfortunately the EXRs are in a 1.2 TB file…
I’ve been experimenting with a simple Python script which uses Colour to read images and NumPy to compare them.
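The core of such a comparison is straightforward; a minimal sketch is below. The actual image loading via Colour is assumed (shown only as a comment), and the convention that a pixel counts as "different" if any channel exceeds the threshold is my assumption, not necessarily what the script does:

```python
import numpy as np

def compare_images(a: np.ndarray, b: np.ndarray, threshold: float = 0.001):
    """Return the maximum absolute difference between two float images,
    and the number of pixels whose difference exceeds the threshold."""
    diff = np.abs(a.astype(np.float64) - b.astype(np.float64))
    max_diff = float(diff.max())
    # Count a pixel as different if any of its channels exceeds the threshold.
    n_pixels = int(np.any(diff > threshold, axis=-1).sum())
    return max_diff, n_pixels

# With Colour installed, the images would be loaded with something like:
#   a = colour.read_image("render_dctl.tif")
#   b = colour.read_image("render_ctl.tif")
```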
Comparing renders of the four targets in my rc1 DCTL with the reference CTL I get:
still_life_709.tif:
    Max difference: 0.0015564202331
    7 pixels with difference > 0.001
still_life_P3D65.tif:
    Max difference: 0.00161745632067
    177 pixels with difference > 0.001
still_life_PQ2020_1000.tif:
    Max difference: 0.000442512333393
    0 pixels with difference > 0.001
still_life_PQ2020_500.tif:
    Max difference: 0.000442512333393
    0 pixels with difference > 0.001
synth_chart_709.tif:
    Max difference: 0.00154116121121
    1048 pixels with difference > 0.001
synth_chart_P3D65.tif:
    Max difference: 0.00309758144431
    5628 pixels with difference > 0.001
synth_chart_PQ2020_1000.tif:
    Max difference: 0.0025329887867
    1568 pixels with difference > 0.001
synth_chart_PQ2020_500.tif:
    Max difference: 0.00170901417732
    210 pixels with difference > 0.001
So most of the maximum differences are between one and two 10-bit code values.
The extreme values contained in the synthetic chart push the maximum difference to about three 10-bit code values. I need to investigate further to see which inputs create these largest differences, so we can consider a less extreme test chart, which would let us set a tighter threshold.
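For reference, one full-range 10-bit code value step is 1/1023 ≈ 0.00098 in normalized [0, 1] units, which is how the figures above translate into code values (a simple arithmetic check, assuming full-range coding, not part of the scripts themselves):

```python
# One 10-bit code value step, assuming full-range coding (0..1023).
STEP_10BIT = 1.0 / 1023.0

def to_code_values(diff: float) -> float:
    """Express a normalized [0, 1] difference in 10-bit code value steps."""
    return diff * 1023.0

print(to_code_values(0.0015564202331))   # still_life_709: ~1.59 steps
print(to_code_values(0.00309758144431))  # synth_chart_P3D65: ~3.17 steps
```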
I also made a modified version of the CLF oiiotool-based comparison script, which people who don’t have Colour installed but do have OpenImageIO can try.
Both these scripts only compare TIFF files for the forward transforms, using an absolute difference. I haven’t yet done versions for the inverse transforms; those probably need relative comparisons of EXR images, like the original CLF script, which could perhaps be run unmodified. Alternatively, we could look at round trips of synthetic images such as those described by @KevinJW during the meeting, which include dense coverage of the display cube surfaces and sparser coverage of the interior.
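For scene-linear EXR data, a relative comparison might look like the following sketch. The epsilon guard against near-zero denominators, and the choice of the reference image as the denominator, are my assumptions rather than anything settled in the meeting:

```python
import numpy as np

def relative_difference(test: np.ndarray, ref: np.ndarray,
                        epsilon: float = 1e-6) -> np.ndarray:
    """Per-channel relative difference, guarded against division by
    reference values near zero."""
    return np.abs(test - ref) / np.maximum(np.abs(ref), epsilon)

# A 1% error on a large scene-linear value and on a tiny one both
# report as ~0.01, which an absolute difference would not show.
```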