ACES 2.0 Meta Framework

Hi,

as promised, here is a document gathering thoughts about a possible future for the ACES project in terms of a modular meta-framework.
It is short, but I hope still valuable.

Daniele
Proposal_ACES_Next_Metaframe.pdf (908.9 KB)


Über-CMS <3

No surprise, but excellent to see that written down, @daniele! I only wish it had come last week, as some of those points were discussed during TAC this morning, but not to the extent they should have been. It is certainly worth re-raising, to Arch. TAC specifically. There are good points made, and it would be a shame not to pursue them.

Cheers,

Thomas

Very interesting. Thanks, Daniele!

This idea of a “meta framework”, or modularity, was discussed during the last TAC meeting (here is a link to the recording), and it would have been great to have these slides for better illustration.

The terminology explanation about Color Management System versus Color Management Workflow is quite enlightening, to say the least.

Hopefully it can be discussed in a more precise way next time,

Chris


Some Pros and Cons of the proposal (not a complete list, of course):

Starting with the Cons:

  • it is a lot, a lot of work!
  • it needs to revisit some of the most fundamental definitions, for example:
    • reference scene
    • reference display(s)
    • reference observer
    • specifying display linear
      • relative vs. absolute black
  • it needs some new definitions
    • defining viewing conditions
    • defining the concept of display rendering transforms

Pros:

  • useful to modernise the assumptions and definitions, which are already a few years old (things have changed)
  • makes ACES adoption much easier
  • the default implementation can be tailored more towards the novice and intermediate user, because expert users can customise
  • improves the current archival situation
  • more creative freedom
  • a tailored pipeline can maximise the value of each show
  • encourages innovation
  • ACES becomes agile and can be adapted quickly to changing requirements
  • the vanilla ACES life cycle is decoupled from the vendors’ implementations
  • no additional complexity for vanilla ACES shows (metadata-driven only if you want to depart from vanilla ACES; see the sketch after this list)
  • refactoring the pipeline helps to design a better vanilla output transform.
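
To make the metadata-driven point a bit more tangible, here is a rough sketch of what per-show metadata could look like. The field names and values are purely illustrative, not anything from an ACES specification: a vanilla show carries nothing extra, and only a show that departs from vanilla ACES references its own DRT package.

```python
# Hypothetical sketch only: field names and values are illustrative.
VANILLA_SHOW = {
    "aces_version": "2.0",
    # no output_transform entry: the vanilla Output Transform is implied
}

CUSTOM_SHOW = {
    "aces_version": "2.0",
    "output_transform": "VendorX Show DRT v3",   # departs from vanilla
    "drt_package": "show_drt_v3.clf",            # e.g. a Common LUT Format file
    "viewing_conditions": "dim surround, 100 nit peak",
}

def resolve_output_transform(show):
    """Vanilla shows need no extra metadata; only departures carry it."""
    return show.get("output_transform", "vanilla ACES Output Transform")
```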

Hello @FedorDoc!

Far be it from me to “steal” this thread, but I would be happy to share a few thoughts about your question:

I think one of the specific problems this proposal is trying to “fix/improve” (for lack of a better word) is the archival process for ACES-compliant projects that use a non-ACES Output Transform/DRT.

If I remember correctly (unfortunately the meeting link I shared above is now unavailable), one of the issues faced currently is that these projects have the “Output Transform/DRT” baked into their archives, which kind of defeats the whole purpose of a “scene-referred” archive. I wrote a bit about it in this article.
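
To make that concrete, here is a tiny numeric sketch using a simple stand-in tone curve (Reinhard), not the actual ACES or any vendor DRT: once the rendered image has been quantised for an archive master, inverting the curve no longer recovers the original scene-referred highlights, and a real DRT typically also clips to the display gamut and peak, which is harsher still.

```python
# Stand-in tone curve only (Reinhard), NOT any actual DRT. It shows why a
# baked-in display rendering cannot simply be inverted out of an archive.

def forward(x):           # scene-linear -> display-relative [0, 1)
    return x / (1.0 + x)

def inverse(y):           # exact analytic inverse of the curve above
    return y / (1.0 - y) if y < 1.0 else float("inf")

def quantize_10bit(y):    # archive master stored as 10-bit code values
    return round(y * 1023) / 1023

for scene in [0.18, 1.0, 10.0, 100.0, 1000.0, 5000.0]:
    recovered = inverse(quantize_10bit(forward(scene)))
    print(f"scene {scene:8.2f}  ->  recovered {recovered:12.2f}")

# Mid-tones come back almost exactly; highlights come back with growing error
# and eventually as infinity, because the curve is nearly flat up there and
# quantisation erases the detail.
```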

I will share here my favorite quote about Daniele's proposal:

I cannot understand why anybody would like to limit all of their productions to use the same output transform. It would be the same as limiting the production to use a single camera. Documentaries, features, animation, hand-drawn: all of them have their unique challenges. Do you think film would have flourished in the last 100 years if the Academy had standardized the chemical recipe? Instead, the Academy standardized the transport mechanism, the 35mm perf. And this was exactly the right thing to do. People could innovate and interchange.

I think the past three years and the “recent” comments (including Colorfront's feedback) have proven this commentary to be quite on point. Hopefully for ACES 3.0. :wink:

Regards,
Chris


And by the way, if you want a bit more context, @FedorDoc, I think this is the original thread that “started” this proposal.

It was right after meeting #2 of the OT VWG, in December 2020. There are lots of good comments from Troy and Daniele in there.

Have a nice day,
Chris


Thanks a ton, @ChrisBrejon, that was a great read! I learned a lot from the ACES forums and your amazing site :slightly_smiling_face:

I've definitely overlooked the archival problem. My own archives are mostly DNxHR HQX 10-bit files with the final grade, and that's it.

Why can't we just save good-quality 3D LUTs in the same folder to deal with the archival issue? Preferably two of them: the creative show LUT and the DRT. We won't be able to archive in linear, but it is not the worst thing ever. Please correct me if I'm wrong.

LUTs may be imprecise, but they do not force manufacturers to disclose their code, and requiring that would greatly hinder “meta framework” adoption. I'm not sure ARRI will reveal their REVEAL code anytime soon :wink:
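
Something like this rough sketch is what I have in mind; the drt() function here is just a placeholder, and the real one could stay a vendor black box. In practice the LUT input would need a documented shaper or log encoding in front of it rather than raw scene-linear, so that its input meaning stays well defined.

```python
# Sketch of "bake the show's DRT into a 3D LUT and archive it next to the
# master". drt() is a placeholder, not any real vendor transform.

def drt(r, g, b):
    return tuple(c / (1.0 + c) for c in (r, g, b))  # stand-in tone curve

def write_cube(path, func, size=33):
    # .cube layout: a LUT_3D_SIZE header, then output triplets with the
    # red index varying fastest (the common convention for this format).
    with open(path, "w") as f:
        f.write(f"LUT_3D_SIZE {size}\n")
        for b in range(size):
            for g in range(size):
                for r in range(size):
                    out = func(r / (size - 1), g / (size - 1), b / (size - 1))
                    f.write("{:.6f} {:.6f} {:.6f}\n".format(*out))

write_cube("show_drt.cube", drt)  # the creative show LUT would be a second file
```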

A set of LUTs as a vehicle to transport per-pixel colour transformations would work, as long as the input and output meaning of those LUTs is well defined.
And in fact, not all DRTs are actually based on formulas, and I don't want to exclude those.

But I would not restrict a meta framework to the application of 3D LUTs only, nor, conversely, restrict it to formula-based per-pixel operations.

In more general terms, we want to communicate a “computation” with well-defined inputs and outputs.
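
As a purely hypothetical sketch of such a descriptor (none of the field names below come from ACES or from the proposal): the inputs, outputs and viewing conditions are stated explicitly, while the realisation of the computation, whether a formula, a 3D LUT plus shaper, or a CLF file, stays an interchangeable detail.

```python
# Illustrative only: hypothetical field names, not an ACES schema.
from dataclasses import dataclass

@dataclass
class Colorimetry:
    primaries: str          # e.g. "AP0", "Rec.2020"
    transfer: str           # e.g. "linear", "PQ"
    state: str              # "scene-referred" or "display-referred"

@dataclass
class DisplayRenderingTransform:
    name: str
    version: str
    input: Colorimetry      # what the computation expects
    output: Colorimetry     # what it produces
    viewing_conditions: str # intended surround and peak luminance
    realisation: str        # "formula", "3D LUT + shaper", "CLF file", ...
    payload: str            # path/URI to the LUT, CLF, or reference code

show_drt = DisplayRenderingTransform(
    name="VendorX Film DRT", version="3.1",
    input=Colorimetry("AP0", "linear", "scene-referred"),
    output=Colorimetry("Rec.2020", "PQ", "display-referred"),
    viewing_conditions="dim surround, 1000 nit peak",
    realisation="3D LUT + shaper",
    payload="show_drt.cube",
)
```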
