Thanks - I hadn’t come across a coherent explanation from Apple re: the ‘nclc’ tag. It’s funny: I’m normally trying to do whatever I can to prevent QuickTime from doing anything too clever on playback, precisely because we have no knowledge of how a client intends to consume a deliverable beyond whatever specs they may or may not give us. It’s never fun being the one who has to explain why the same deliverable looks different when played back in QuickTime X vs. 7, on a Mac vs. a PC, or in Premiere vs. FCP vs. VLC vs. whatever. Likewise, in Nuke, the only way to “safely” view a display-referred QuickTime is to bypass the display transform and set the Read node’s transform to “Raw”.
In any case, it’s very difficult to control for incorrectly, inappropriately, or unintentionally set metadata when writing QuickTimes, just as it’s difficult to control for how that metadata is or isn’t handled by the playback application. It’s kind of like handling DPX metadata or trafficking CDL values – you need a plan for these things – ideally, something describable via CLF or ACESClip.
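To make that concrete, here’s roughly what I mean by pinning the metadata down at write time. This is just a sketch, assuming an ffmpeg-based ProRes encode of a Rec.709 deliverable; the filenames and profile are placeholders, not a vetted recommendation:

```
# Hypothetical Rec.709 ProRes 422 HQ encode: tag primaries, transfer and
# matrix explicitly, and force the 'colr' atom so players don't have to guess.
ffmpeg -i graded_709.mov \
  -c:v prores_ks -profile:v 3 \
  -color_primaries bt709 -color_trc bt709 -colorspace bt709 \
  -movflags write_colr \
  -c:a copy tagged_709.mov
```

Even with the tags written, QuickTime X, QuickTime 7, VLC and the NLEs won’t necessarily honour them the same way, which is rather the point.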
Using iPads to view QuickTimes makes perfect sense, and I often advocate the same thing; I’m not really against the idea of describing a viable workflow for using iPads – there’s lots of real-world value in that. But that would require a conscious decision either to delve into the nuances of QuickTime encoding for iPads or to leave that can of worms unopened. Either way, it’s a slippery slope to a conversation, possibly even a project, well outside the scope of ODT applications.
It’s really a much broader existential question: if ACES provides ODTs for idealized physical devices, should ACES provide something analogous for video encoding – reference ffmpeg flags for HEVC, codec settings for ProRes, etc. – such that LMTs could be derived to accurately simulate one device on another? Would vendors be willing or incentivized to contribute whitepapers, code, ffmpeg flags, device or application characteristics, advice? Whose responsibility would it be to maintain a best-practices document like this? Where does ACES end and consumption begin?
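As a sketch of what such a reference might contain, here’s an HDR10-style HEVC encode via libx265. Every value is illustrative, a placeholder rather than a vetted recommendation for any particular device:

```
# Hypothetical PQ / Rec.2020 HEVC encode: color tags at container and codec
# level, plus example mastering-display / MaxCLL metadata (values illustrative).
ffmpeg -i graded_pq_2020.mov \
  -c:v libx265 -preset slow -crf 16 -pix_fmt yuv420p10le -tag:v hvc1 \
  -color_primaries bt2020 -color_trc smpte2084 -colorspace bt2020nc \
  -x265-params "colorprim=bt2020:transfer=smpte2084:colormatrix=bt2020nc:master-display=G(13250,34500)B(7500,3000)R(34000,16000)WP(15635,16450)L(10000000,1):max-cll=1000,400" \
  -an deliverable_hdr10.mp4
```

Whether those exact flags are “correct” for a given iPad or TV is precisely the sort of thing a vendor-maintained reference would have to answer.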
I do think there’s an opportunity to describe iPad-based workflows here, but I’d be sensitive to the qualitative difference between signal encoding and file encoding – for a document like this, maybe acknowledging such a difference is enough, and the rest can be left as an exercise for the reader. (Although I’d again argue that this would be a missed opportunity to solicit vendor involvement, seeing as ACES positions itself as the vendor-agnostic, supremely capable, all-singing, all-dancing modernist color encoding of tomorrow.) If you guys need another block for the diagram, I’d be happy to provide acronym options.
I’d be curious to know what you guys are doing in terms of calibration – and if we’re doing something different, I’d be equally curious about what that would imply. I’ll grill one of the engineers at work for specifics in terms of device settings and the exact calibration white point (in CIE 1931 values).
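So that we’re comparing the same numbers, I’d express that white point as CIE 1931 xy chromaticity derived from the measured XYZ tristimulus values:

$$x = \frac{X}{X + Y + Z}, \qquad y = \frac{Y}{X + Y + Z}$$

For example, standard D65 sits at roughly (0.3127, 0.3290) under the 1931 2° observer, whereas a target defined against the Judd-Vos-modified observer will measure at slightly different 1931 coordinates – exactly the kind of facility-specific detail worth declaring somewhere.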
But the fact that we’re even having this conversation is why I wanted to bring up the Judd-Vos thing in the first place. How, and to what end, should ACES facilitate communication or declaration of device-specific settings or facility-specific practices? CLF seems like the appropriate place to characterize and describe device and facility specifics, I think. It could be a good way to share just enough information between vendors without crossing into IP territory, creative white points notwithstanding.