Hi Joseph, thanks for your input.
Personally, I would have picked fewer camera alternatives than the ones you listed above, but you really nailed the “chicken and egg” problem about camera-native colorspaces.
Most of all, I agree with you that some cases need an AMF containing only the Input Transform (IDT) information, with no Output Transform (RRT+ODT) in it at all.
Please refer to slides #15 and #16 from my March 2019 deck on ACESclip, which show how an AMF generated during on-set camera offload should contain only metadata on how to decode those camera-native files into ACES2065-1 (plus a wonderful opportunity to carry any production/set metadata).
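Just to make that concrete, here is a rough sketch of what such an “IDT-only” AMF could look like. The element names below are purely illustrative placeholders of mine (not the actual schema), and the transform ID is just an example:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Sketch of an AMF written at on-set camera offload: decode-to-ACES2065-1 info only -->
<acesMetadataFile version="1.0">
  <clipId>
    <clipName>A001_C002</clipName>          <!-- example clip identifier -->
  </clipId>
  <pipeline>
    <!-- Input Transform (IDT): tells any downstream application how to
         decode the camera-native files into ACES2065-1 -->
    <inputTransform>
      <transformId>IDT.ARRI.Alexa-v3-logC-EI800.a1.v2</transformId>
    </inputTransform>
    <!-- No Output Transform (RRT+ODT) on purpose: viewing is decided later -->
  </pipeline>
  <!-- Plenty of room here for production/set metadata travelling with the clip -->
</acesMetadataFile>
```

The point is simply that nothing forces an Output Transform to be present at this stage.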
For other use cases, however, I believe it still makes sense to store one or more viewing pipelines in an AMF. Even if the Output Transform is not a definitive one, AMF is a good way to interoperably record and transport how specific parts of a workflow “do work”.
This is a good reason to “name” (or otherwise comment) a color pipeline in AMF via additional metadata, such as a pipeline’s name or id; cf. slides #11 and #12 from the same deck on ACESclip, so that applications and humans make responsible use of the color pipeline(s) contained therein.
For example, a viewing pipeline used for VFX previz may use AMF to record how the previz was viewed, whereas an AMF used in compositing (with OoG LUTs and such) should have a way to clearly declare itself as such.
Regarding this last aspect, please see slides #21 and #23 from the same deck on ACESclip.
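To illustrate the naming/self-declaration point, with the same caveat as above (placeholder element names of mine, not the real schema, and example transform IDs), a viewing pipeline recorded for VFX previz could describe itself along these lines:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Sketch of an AMF carrying a named, self-describing viewing pipeline -->
<acesMetadataFile version="1.0">
  <pipeline>
    <pipelineInfo>
      <name>VFX previz viewing pipeline</name>              <!-- human-readable name -->
      <uuid>00000000-0000-0000-0000-000000000000</uuid>     <!-- machine-readable id (placeholder) -->
      <description>How the previz was viewed on set; not a definitive grading pipeline</description>
    </pipelineInfo>
    <inputTransform>
      <transformId>IDT.ARRI.Alexa-v3-logC-EI800.a1.v2</transformId>     <!-- example IDT -->
    </inputTransform>
    <outputTransform>
      <transformId>ODT.Academy.Rec709_100nits_dim.a1.0.3</transformId>  <!-- example RRT+ODT -->
    </outputTransform>
  </pipeline>
</acesMetadataFile>
```

An AMF used in compositing would simply carry a different name and description, so both applications and humans can immediately tell what the recorded pipeline is for.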