AMF Implementation for Cameras - When to Generate an AMF

Hello metadata enthusiasts,

I wanted to continue a topic from the call yesterday, as I think it’s worth beginning to dissect it for the benefit of implementers in the camera market. This is also the stage where we expect many AMFs to be born. Ultimately, we want to genuinely understand which features are most critical for cameras to support in order to make AMF successful.

If we want AMFs to come out of cameras at all, this is the first question I have to ask: when should a camera generate AMFs?

I can see the following scenarios at the moment:
(“output” can include SDI, EVF, HDMI, etc.)

  1. The camera’s only active output is a scene-referred image signal (most likely quasi-log… LogC, Log3G10, S-Log3, etc.).
  2. The camera’s only active output is a display-referred image signal (typically matching the calibration of the reference display it travels to).
  3. The camera has multiple active outputs. At least one of them is outputting a scene-referred image signal, and at least one other is outputting a display-referred signal.
  4. The camera has no active outputs, and is just recording images somewhere.
  5. The camera is recording in-camera proxy files, simultaneously with a less-compressed RGB master recording, or RAW recording.
  6. The camera manufacturer’s RAW decoding application (e.g. ARC, RCX) is producing ST 2065-4 files from RAW files.

To provoke some thinking, these would be my current suggestions for each scenario:

  1. If your camera’s only active output is scene-referred, and not an ACES color space, do not record AMF files. The reason is simple: the camera is wholly ignorant of whatever is being done to the image it’s sending out. Maybe one could argue we want AMFs recorded anyway so we have a history from the camera cards that they did indeed output scene-referred. But what would these be? Some odd, naked AMF that basically communicates NULL to other software? If the camera is outputting a scene-referred ACES color space, then the camera must record AMFs with only IDT information. It would need to be confirmed that the spec allows for this, or if it mandates that some amount of viewing information be included.
  2. If your camera’s only active output is display-referred, then the camera must record AMFs. There is the question of whether that can be done with individual transforms to-be-denoted in the AMF, or if it’s a big fat LUT in the camera, as we currently see most of the time. I think that is another can of worms to discuss, especially since CLF should help significantly.
  3. This is the trickiest, and most important, scenario to work out. Utilizing different outputs for different uses is a common thing to do. If anyone doubts this, I can give examples. Just don’t doubt the pragmatic approaches taken on-set, and don’t doubt the influence of someone in a chair asking camera crew for an image to look at. In the last decade or so, this has involved a quasi-log output for a LUT box + a Rec.709 output for some people to look at for whatever reason. Some techs just want to have a log output on the side as a “confidence check” that they are acquiring the information they intend to acquire. Currently we are seeing more and more productions utilizing multiple camera outputs for HDR and SDR. Sometimes that is display-referred right out of the camera. Sometimes one log output is distributed through different chains to be tone mapped and color gamut mapped later. The question is… what does the AMF record? How does the camera know what we want recorded? I don’t have a confident suggestion for this yet. I can make an idealistic suggestion to never record an AMF in-camera if we are recording RAW or log, and are outputting log anywhere off the camera, but this requires assuming that the log signal is being handled and manipulated to serve as the golden reference for cinematography. Are we forced to make an assumption here - unless we really want to get into the business of storing multiple viewing recipes within AMFs? I don’t like the complexity of that venture. Imagine getting two chains of CDLs for the same footage. Any colorist would huck these in the bin.
  4. Assuming the camera is not recording ACES2065-1 images (I’m not aware of any that do currently), then it should not record AMFs.
  5. I believe it is overzealous for ACES/AMF to endeavor to consider proxy files, therefore I recommend not recording AMFs that track any color pipeline associated with producing them.
  6. At the minimum, this application must generate AMFs in order to record IDT information. Theoretically this would match the IDT used on-set. But we shouldn’t restrict to this, because it may sometimes be reasonable to change the IDT used. Maybe the manufacturer releases an improved version mid-production. Maybe they didn’t actually like the IDT choice on-set, but didn’t have the time to try others. And so on. If the camera did not record AMFs for whatever reason, it may be valuable for the AMF to record that the IDT was selected in the decoder software, and not on-set. This is another stage where it should be acceptable to only record IDT information, because there may very well not be someone looking at these images through the RRT+ODT. It is just a pure processing stage to get files to some facility.
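To make the “IDT-only AMF” idea from scenarios 1 and 6 concrete, here is a minimal sketch in Python. The XML element names and the transformId string below are illustrative placeholders of my own, not taken verbatim from the AMF schema, so treat this as a shape sketch rather than a valid AMF:

```python
# Sketch of an "IDT-only" AMF: a minimal XML document carrying only
# input-transform metadata and no viewing (RRT+ODT) information.
# Element names and IDs are illustrative, not the real AMF schema.
import xml.etree.ElementTree as ET

def build_idt_only_amf(clip_id: str, idt_transform_id: str) -> str:
    root = ET.Element("acesMetadataFile", version="1.0")
    ET.SubElement(root, "clipId").text = clip_id
    pipeline = ET.SubElement(root, "pipeline")
    idt = ET.SubElement(pipeline, "inputTransform")
    ET.SubElement(idt, "transformId").text = idt_transform_id
    # Deliberately no outputTransform element: this AMF only asserts
    # how to decode the camera-native media into ACES2065-1.
    return ET.tostring(root, encoding="unicode")

xml_doc = build_idt_only_amf(
    "A001C002_210101_R1AB",                      # hypothetical clip name
    "IDT.CameraVendor.ExampleLog-EI800.a1.v1",   # hypothetical transformId
)
print(xml_doc)
```

The point of the sketch is the omission: an application receiving this file learns how to reach ACES2065-1, and nothing else, which is exactly the “naked” decode-only record discussed above.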

That’s all I’ve got for now. Happy new year ACES community!



Hi Joseph, thanks for your input.
Personally, I would have picked fewer camera scenarios than you listed above, but you really nailed the “chicken and egg” problem of camera-native colorspaces.
Most of all, I agree with you about some cases needing an AMF containing the Input Transform (IDT) info only, and no Output Transform (RRT+ODT) at all in it.
Please refer to slides #15 and #16 from my March 2019 deck on ACESclip, showing how an AMF generated during on-set camera offload should just contain metadata on how to decode those camera-native files into ACES2065-1 (plus a wonderful opportunity to carry any production/set metadata).

For other use cases, however, I believe it still makes sense to store one or more viewing pipeline(s) in an AMF. Even if the Output Transform is not a definitive one, AMF is a good way to interoperably record and transport how specific parts of a workflow “do work”.
This is a good reason to “name” -- or otherwise comment -- a color pipeline in AMF via additional metadata (like a pipeline’s name or id); cf. slides #11 and #12 from the same deck on ACESclip, so that applications and humans make responsible use of the color pipeline(s) contained therein.
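As a sketch of that “named pipeline” idea: the snippet below tags each pipeline with an id and a human-readable description, so applications and humans can tell an on-set viewing pipeline apart from a temporary previz one. Again, the element and attribute names (and the ODT id strings) are my own illustrative placeholders, not the actual AMF schema:

```python
# Sketch of annotating multiple viewing pipelines in one AMF with a
# name/id so their intended use is self-declared. Illustrative schema.
import xml.etree.ElementTree as ET

def add_named_pipeline(root, pipeline_id, purpose, output_transform_id):
    # Each pipeline carries an id plus a human-readable description.
    pipeline = ET.SubElement(root, "pipeline", id=pipeline_id)
    ET.SubElement(pipeline, "description").text = purpose
    ET.SubElement(pipeline, "outputTransform").text = output_transform_id
    return pipeline

root = ET.Element("acesMetadataFile", version="1.0")
add_named_pipeline(root, "onset-sdr", "On-set SDR viewing (LUT box chain)",
                   "ODT.Example.Rec709")          # hypothetical ODT id
add_named_pipeline(root, "vfx-previz", "Temporary previz look, not final",
                   "ODT.Example.Rec709")          # hypothetical ODT id

# Downstream tools can pick a pipeline by name instead of guessing.
ids = [p.get("id") for p in root.findall("pipeline")]
print(ids)
```

With something like this, a compositing AMF could carry a pipeline whose description openly declares it as a working look, addressing the self-declaration point below.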

For example, a viewing pipeline used for VFX previz may use AMF to record how the previz was viewed, whereas there should be a way for an AMF used in compositing (with OoG LUTs and such) to clearly self-declare as such.
Regarding this last aspect, please see slides #21 and #23 from the same deck on ACESclip.