First of all, I have to say how glad I am that the Academy decided to create an official discussion board, where engineers from every software/hardware house can meet technicians from every stage of production and openly discuss working methods, issues and solutions.
As a strong supporter of ACES and its aims, I constantly keep an eye on the software and hardware currently available that support the ACES framework. And while it is definitely true that more and more systems on the market are embracing ACES from the color-management point of view, I feel there is still some resistance to adopting the Common LUT Format (CLF) and the ACESclip metadata sidecar.
I wonder if we should push a bit harder on these essential pillars of the ACES workflow. The reasons I believe this is so important are the same ones that brought me to support ACES in the first place: finding a way to avoid mistakes in a production pipeline by offering end users a solid, vendor-free way to manage their content; going beyond current working methods and technical limitations, to make the system future-proof; and standardizing the way software and hardware talk to each other, changing how they work in the background to ensure smooth interoperability across all production stages.
In more detail, I believe that CLF and ACESclip need strong implementation support because:
Finding a common, standard way to express, store and track color transformations, along with other essential metadata, across an entire pipeline is as important as standardizing the color transformations themselves. How can we expect average end users, with perhaps limited experience in color science and workflow management, to implement ACES without trouble while juggling all the color transformations that make it work?

For instance, creating CDLs on set and expecting those color decisions to be consistently represented in a DI room is already possible through the ACES framework, BUT how many people in the world know how to convert data from ACESproxy to ACEScc and then to ACES2065-1? One could answer that ACES-compatible software should do that, but how will that ever happen if people don't know those transformations are needed in the first place? There are many tools out there that will do the math easily and perfectly, but someone still has to tell them how (or when) and what to do. I believe it is part of the Academy's aim to make ACES work seamlessly from the user's perspective, avoiding the mistakes that come from inaccurate or poor use of the ACES color science.

Ultimately, most productions and post-production houses, most on-set technicians, DPs, colorists and post supervisors who want to use ACES because of its color science don't want to know what ACEScc or ACESproxy are; they are artists and technicians who aim to get the best from their footage, and that's it. Saving them the effort of studying the ACES documentation and asking software vendors for support will greatly expand the number of medium, small and independent productions that use ACES in the future.
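To make the "who does the math" point concrete: ACEScc is just a piecewise log encoding of linear ACES values, published in the Academy's S-2014-003 document. A minimal Python sketch of the encode/decode pair (the ACESproxy path and the AP1/AP0 matrices are deliberately omitted; this is exactly the kind of math users should never have to touch by hand):

```python
import math

def lin_to_acescc(lin: float) -> float:
    """Encode a linear ACES (AP1) value to ACEScc (piecewise log, per S-2014-003)."""
    if lin <= 0.0:
        return (math.log2(2.0 ** -16) + 9.72) / 17.52
    if lin < 2.0 ** -15:
        return (math.log2(2.0 ** -16 + lin * 0.5) + 9.72) / 17.52
    return (math.log2(lin) + 9.72) / 17.52

def acescc_to_lin(cc: float) -> float:
    """Decode an ACEScc value back to linear ACES (AP1)."""
    if cc < (9.72 - 15.0) / 17.52:
        return (2.0 ** (cc * 17.52 - 9.72) - 2.0 ** -16) * 2.0
    if cc < (math.log2(65504.0) + 9.72) / 17.52:
        return 2.0 ** (cc * 17.52 - 9.72)
    return 65504.0  # half-float max: ACEScc clips here

# 18% grey round-trips through the log segment
mid = lin_to_acescc(0.18)
assert abs(acescc_to_lin(mid) - 0.18) < 1e-9
```

Two short functions, yet they are exactly the step that breaks a pipeline when a tool silently assumes the wrong encoding.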
On the other hand, talking more specifically about ACESclip, there is a technical issue we face every day when trying to standardize workflows (not only between different productions, but mostly between parts of the same production): the different terminology and approaches that every camera and software vendor uses when defining the same metadata values. I understand that it is not the Academy's aim to standardize how software and cameras manage their metadata, but I believe it could be VERY important for the Academy to suggest a single approach for the essential metadata currently needed to develop an image pipeline properly:
camera vendors like ARRI, SONY and RED use different fields to express the same metadata values, such as white balance, clip name, ISO, shutter, fps, clip UUID, lens data and so on;
most of the time, those data are recognized only by the camera vendor's own post-production software (or SDK), and very few fields get read properly; on top of that, most post-production software introduces yet more new fields to express the same data, converting the few fields it does read from the camera into its own scheme and making the data impossible for the camera side to recognize again;
for this very reason, in this Tower of Babel scenario, actually making use of those data within a production workflow is really hard;
even a single-camera-model workflow becomes hard when you try to use a simple piece of data across your pipeline, for instance using the WB metadata to automatically switch your software to the right IDT, or linking some essential piece of your (I wish) automated pipeline to those data. The whole thing becomes hopeless on multi-camera shows, where the possible combinations of metadata field names and actual values become limitless.
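The field-name mismatch described above is, at its core, a mapping problem, and the fix being asked for can be sketched in a few lines. Here is a hypothetical normalizer; to be clear, the vendor field names below are illustrative stand-ins of my own, NOT the actual keys written by the ARRI, RED or Sony SDKs:

```python
# Hypothetical per-vendor field maps: vendor key -> common schema key.
# These names are illustrative only, not the real SDK metadata keys.
VENDOR_FIELD_MAPS = {
    "arri": {"WhiteBalance": "white_balance_kelvin", "ExposureIndex": "iso",
             "SensorFps": "fps", "ClipUuid": "clip_uuid"},
    "red":  {"color_temperature": "white_balance_kelvin", "iso": "iso",
             "record_fps": "fps", "clip_guid": "clip_uuid"},
    "sony": {"CaptureWhiteBalance": "white_balance_kelvin", "EI": "iso",
             "CaptureFps": "fps", "UMID": "clip_uuid"},
}

def normalize_metadata(vendor: str, raw: dict) -> dict:
    """Translate a vendor-specific metadata dict into one common schema,
    keeping unmapped fields under 'vendor_extra' so nothing is lost."""
    field_map = VENDOR_FIELD_MAPS[vendor.lower()]
    common, extra = {}, {}
    for key, value in raw.items():
        if key in field_map:
            common[field_map[key]] = value
        else:
            extra[key] = value  # preserve, don't drop, vendor-only data
    common["vendor_extra"] = extra
    return common

# The same white balance now reaches the pipeline under one name:
a = normalize_metadata("arri", {"WhiteBalance": 5600, "ExposureIndex": 800})
r = normalize_metadata("red", {"color_temperature": 5600, "iso": 800})
assert a["white_balance_kelvin"] == r["white_balance_kelvin"] == 5600
```

The point is that each vendor only has to publish (or agree on) one such map; the per-clip translation itself is trivial once the naming convention exists.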
I would love to see ACESclip standardize a set of essential data (again: WB, lens data, clip-related data) and have the ACESclip .xml sidecar implemented directly by camera vendors and post-production workflows, so one can start generating and tracking those data from set to post easily. Don't get me wrong: cameras and software may keep their own way of doing things, but if there were a way to merge metadata together by using the same fields and keep them linked to the content, it would be simply amazing.
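To illustrate what such a sidecar could look like, here is a hypothetical fragment. The element names are mine, purely for illustration; the actual ACESclip schema is defined in the Academy's TB-2014-009 document and differs from this:

```xml
<!-- Hypothetical sidecar sketch: illustrative element names only,
     not the real ACESclip (TB-2014-009) schema. -->
<aces:Clip xmlns:aces="urn:example:acesclip">
  <aces:ClipID>
    <aces:ClipName>A003C012</aces:ClipName>
    <aces:UUID>urn:uuid:...</aces:UUID>
  </aces:ClipID>
  <aces:CameraMetadata>
    <aces:WhiteBalance unit="kelvin">5600</aces:WhiteBalance>
    <aces:ISO>800</aces:ISO>
    <aces:Lens focalLength="35mm" serial="..."/>
  </aces:CameraMetadata>
  <aces:InputTransform>IDT.ARRI.Alexa-v3-logC-EI800</aces:InputTransform>
</aces:Clip>
```

If the camera wrote this file next to the clip and every post tool read it, the white balance/IDT automation described above would need no vendor-specific code at all.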
So, along with LUTs and color transformations, we would be able to use vendor metadata to automate certain parts of the workflow, or to make some data easier to use in later steps (e.g. lens data in VFX!).
I know that OpenEXR headers are large enough to carry all these data, and I'm currently pushing some vendors to add (or avoid losing) certain data in their .exr encoding, but the problem is more about knowing how to define those data than actually carrying them.
I really believe we should work together to standardize this aspect and ask the camera and software vendors to use those naming conventions when defining their metadata.
Looking forward to hearing back from you,