ACES Implementation Language

It’s an unfortunate fact that the Academy’s Color Transformation Language (CTL) hasn’t garnered the support required for it to remain the singular official implementation of the ACES transformations. Rather than gathering the community around a single point, it has instead forced most adopters to re-implement the system in their own domains.

We as a community need to explore the best way to define the ACES transforms in one or many universal programming language(s), and provide means for a test framework to ensure equivalence if the official implementation language becomes fragmented.
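As a sketch of what such an equivalence test framework could look like (assuming a hypothetical `reference` / `port` pair of implementations; the transform shown is the sRGB OETF, used purely as a stand-in, not an ACES transform):

```python
import math
import random

def reference(x: float) -> float:
    """Hypothetical 'master' implementation of an encoding curve
    (the sRGB OETF, standing in for an ACES transform)."""
    if x <= 0.0031308:
        return 12.92 * x
    return 1.055 * x ** (1 / 2.4) - 0.055

def port(x: float) -> float:
    """Hypothetical re-implementation, e.g. machine-generated for another target."""
    return 12.92 * x if x <= 0.0031308 else 1.055 * x ** (1 / 2.4) - 0.055

# The essence of an equivalence suite: drive every implementation with the
# same inputs and demand agreement to a stated tolerance.
random.seed(42)
for _ in range(100_000):
    x = random.uniform(0.0, 1.0)
    assert math.isclose(reference(x), port(x), rel_tol=1e-12)
```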

I’ll also add my two cents that OpenColorIO’s problem set fits perfectly in this domain. A more holistic solution, in my opinion, would be to push development support towards a pure implementation of ACES in OpenColorIO (for which there is already a desire and a plan), so any adopter need only link against the OCIO libraries to fully support ACES: OCIO’s task is to directly process the pixels on the CPU and provide the correct shader representation for the GPU. If the ACES transforms are “officially” defined as OCIO Ops, then both projects benefit from the combined resources and attention… (stepping off the soap-box now)

If this isn’t feasible or wanted, then ideally there would be a solution akin to using the LLVM/SPIR-V intermediate representation where one “master” implementation (perhaps remaining as CTL if an LLVM front-end is written) would be maintained and many child representations could be generated as needed.


As discussed privately, here are my thoughts on a hypothetical CTL replacement:

CTL is not slow per se; it is just not a practical language to use with other applications.

Given ACES is about image processing and we want fast image processing these days, a language that runs on the GPU is probably a good choice, e.g. CUDA / OpenCL kernels or GLSL / HLSL.

I was looking at natively porting the whole codebase to GLSL. The advantage of having a native GLSL / HLSL implementation is that it can almost be used straight away by software vendors that have adopted GPU rendering, e.g. Resolve, Unity, Unreal Engine.

It would also make contribution from external people much easier.

Unit testing is one problem that would not be solved, though.

OCIO 2 is pursuing similar objectives, so it is worth seeing how all of that could be reconciled gracefully, especially given that we have started conversations to bridge all the worlds together.

A few related things to mention in that regard:


GLSL can also be run as a Matchbox shader in Baselight or Autodesk apps.


I’ve been exploring SymPy as a potential avenue for a similar kind of thing, more as a meta-language for representing / communicating the mathematical essence of all this… stuff, but also as a means to translate said stuff to other languages (ostensibly CTL). I found it really easy to proof-of-concept for transfer functions, generating Nuke expressions. That is not a particularly good use of SymPy’s potential, but the code-generation possibilities are pretty endless (I think GLSL might be available out of the box?)
Other benefits:

  • the math is arbitrarily precise;
  • you can print everything as LaTeX and look really smart.

It also comes “for free” if you’re using colab.research.google.com

All this said, I think GLSL makes a lot of sense.
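For the curious, a minimal sketch of the code-generation idea (the gamma curve is just an illustrative stand-in, not an ACES transform; `glsl_code` is SymPy’s GLSL printer):

```python
import sympy as sp
from sympy.printing import ccode
from sympy.printing.glsl import glsl_code

# A symbolic 'transfer function': a pure-gamma 1/2.2 encoding,
# kept exact as a rational exponent (5/11 == 1/2.2).
x = sp.Symbol('x', positive=True)
encode = x ** sp.Rational(5, 11)

print(ccode(encode))       # the expression as C code, using pow()
print(glsl_code(encode))   # the same expression in GLSL syntax
print(sp.latex(encode))    # and as LaTeX, to look really smart
# Arbitrary-precision evaluation at 18% grey, to 30 digits:
print(sp.N(encode.subs(x, sp.Rational(18, 100)), 30))
```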


OCIO v2 will have much more generic GPU support, enough to achieve the same quality between the CPU and GPU paths, and also to allow community development for other GPU languages.

In the OCIO v2 proposal there is also a plan to have built-in color transformations (covering some or most of the ‘common’ color transformations) to ease the creation of new configuration files. This could be extended by the community into a default, ‘complete’ in-memory configuration if needed (i.e. to get rid of any external files).

With the community support, the OCIO v2 library could become the reference implementation of ACES.

NOTE: OCIO now has some GPU unit tests, and more will come with the new GPU API. That should greatly increase the quality control of color transformations.

@PatrickHodoul: Is there a public place with how those transforms would be represented?

Is Maya synColor a good example for that?

Which leads me to the question: how are the built-in color transformations written?

Cheers,

Thomas

@Thomas_Mansencal We did not have time to think about the implementation of that idea. If OCIO were to become the ‘ACES reference implementation’, the public OCIO repo could be a good choice.

The idea of built-in transformations emerged when integrating synColor in tools 1) with basic color management needs and/or 2) used in render farms (i.e. reducing the need to have access to the full catalog). Today, OCIO can start without a config file, but then only a ‘raw’ color space is available and no built-in transforms exist.

Note: synColor heavily relies on a list of ctf files. But since all the ctf files carry tags, the catalog itself can be rebuilt at any time.

For OCIO v2, our thinking was about C++/SSE and GPU languages (which is the current OCIO & synColor design) to ease the short-term development and maintenance.

My gut feeling is that ACES should be defined in a more abstract way than OCIO, CLF, CTL, etc.: something close to the mathematics but testable (with unit as well as functional tests) as the primary implementation.

OCIO and CTL would then make great secondary implementations based on real-world limitations/requirements such as performance, or implementation needs such as using limited range/domain textures for GPUs.
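As a sketch of what “close to the mathematics but testable” could mean in practice, here is the ACEScct encoding curve (constants per S-2016-001) as a plain function with a unit-style assertion; this is illustrative, not a proposed reference:

```python
import math

def acescct_from_linear(x: float) -> float:
    """ACEScct encoding of a linear AP1 value (constants per S-2016-001)."""
    if x <= 0.0078125:
        return 10.5402377416545 * x + 0.0729055341958355
    return (math.log2(x) + 9.72) / 17.52

# Unit-style check against a known value: 18% grey encodes to ~0.4136.
assert abs(acescct_from_linear(0.18) - 0.413588) < 1e-6
```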

As an example, I’ve recently been using ADX-based transforms for a current project and was trying to debug some artefacts, trying to determine if they stemmed from the implementation or from the native formulation (in this case, the answer is both). Having a purer representation would have helped with this, as would more complete documentation.

To be clear, I think CTL is much closer to what I would prefer, though something embedded in / implemented in a language like Python would be where I’d start if there were no existing implementation.

Kevin

Hi,

Overall, I share the same sentiment as Kevin.

What I would like to add, and I write it with due respect for Patrick and Doug: there is something about the reference implementation of ACES being a subset of OCIO that makes me very uncomfortable. It could create ambiguity over the ownership of ACES, and coupling the two projects together might make steering both of them very difficult. It could also be the undesired ground for a lot of political issues and bitterness down the road.

Cheers,

Thomas

I also completely agree with Kevin’s thoughts about a Python implementation.

Nothing against CUDA/OpenGL/GLSL, but ACES really needs to be technically neutral, so a reference implementation that “lives” on a macro-family of GPU implementations is far from ideal.
Just think of how ACES is implemented in LUT boxes, monitors, camera feeds, etc…

Using a real-world language (mostly OS-agnostic), with hard references to math primitives, is to me THE solution.
One should be very wary of Python’s own implementation though (e.g. CPython?), in order to reproduce the finest details of, say, bitwise floating-point arithmetic.
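To illustrate the kind of bit-exactness concern being raised, the raw bit patterns of floats can be compared with nothing but the standard library (a small illustrative sketch, not ACES code):

```python
import struct

def bits64(x: float) -> str:
    """Hex bit pattern of a 64-bit IEEE 754 float, for bit-exact comparison."""
    return struct.pack('>d', x).hex()

a = 0.1 * 3   # accumulates a rounding step: 0.30000000000000004
b = 0.3
print(bits64(a), bits64(b))  # the two patterns differ in the last bit
assert a != b and bits64(a) != bits64(b)
```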

One should be very wary of Python’s own implementation though

If you are going down the Python road, and because you might want to use NumPy / SciPy, CPython would be the only real option; as a matter of fact, it is the de facto canonical Python implementation. For Python, I would be more concerned about package/dependency versions, e.g. which version of MKL/LAPACK/BLAS, and about the platform, e.g. Windows vs macOS vs Linux.

Yes Thomas, that’s what I’m saying.
I’d be tempted to add, at least for the ACES Reference Implementation only, straight Python with no third-party modules. This means writing a few modules referring to primitives in the standard math module and a handful of other standard modules only.

The inconvenience of doing so is that it would render your reference implementation pretty much useless as far as image processing goes. CTL is able to process images; a naive native Python implementation would not be, at least not in a way that is practical.
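To make the point concrete, with NumPy the same per-pixel math runs vectorized over a whole frame, where a pure-Python loop over the roughly 27 million values of a 4K frame would take orders of magnitude longer (a sketch only; the curve shown is the sRGB OETF, standing in for an actual ACES transform):

```python
import numpy as np

# A hypothetical 4K RGB frame of linear values in [0, 1).
frame = np.random.default_rng(0).random((2160, 4096, 3), dtype=np.float32)

# Vectorized piecewise encoding: both branches are evaluated over the
# whole array, and np.where selects per pixel.
encoded = np.where(frame <= 0.0031308,
                   12.92 * frame,
                   1.055 * np.power(frame, 1 / 2.4) - 0.055)

assert encoded.shape == frame.shape
```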

I kind of like the idea of using a meta-language, or just plain math, for a spec. A meta-language might be better because you could write a cross-compiler for various languages; when the spec changes, all you need to do is recompile. It would be great to compile for Metal, GLSL/HLSL, OpenCL, and CUDA without a lot of heavy lifting.

Now that I’ve thought about it, maybe keeping CTL is fine. There’s already a great interpreter. Perhaps we could leverage the interpreter’s AST to compile to SPIR-V (or LLVM!), as Sean mentioned in the original post.

Thomas,
the reference implementation is just that: a reference. It can process real-world image files with the same non-production UX as ctlrender does.
There is some Python code publicly available (not including anything but default modules) for handling EXR, TIFF, even ARRIRAW files (I wrote some myself over the last decade).

Of course you don’t want a reference implementation to read/write Apple ProRes or MXF files, but only the read/write capabilities for basic still-image formats; one only needs it for color science and benchmark image processing.
You may change the read/write modules to add file formats, if/when needed, without recompiling.

Python was just a suggested language. Anything which is OS-agnostic and does not rely exclusively on GPU technologies/code to run may fit.

A pseudo-language is almost good, yet not quite: one still needs an actual binary to generate the reference results for the pseudo-code.

Hi @walter.arrighetti,

It is not a reference, it is THE reference.

Anyway, what I was underlining is that it needs to have good processing speed, otherwise it will be very painful to implement or test anything, for that matter. As the creator of the reference, you obviously want to test what you are writing, which involves small unit test cases but also real-world data processing, e.g. frames with multiple millions of pixels.

That is where I don’t think a pseudo-language is suitable. You need a language that allows you to generate actual data in a timely fashion.

Cheers,

Thomas

All,

What’s wrong with CTL? It has a compiler, and the interpreter uses SIMD, so it’s not super slow by any means… good enough for the reference implementation. My thought was that we could leverage the CTL interpreter to create a modern cross-compiler that could emit GPU- or CPU-compatible code.

@gregcotten: To me the two main issues with CTL are that:

  • It is an unmaintained language.
  • It is a bit convoluted to test or make a test suite for it.

There is a cross-compiler:

Would it not be more practical to keep CTL as the main language and use it as the basis for conversion to other media that are more expedient for testing and realtime playback? Having done this myself with separate translations to DCTL (DaVinci Resolve specific) and CUDA (for OFX plugins), I can attest to the validity of CTL as a viable and practical source for translation to other languages.

This thread appears to have sustained its line of enquiry seemingly oblivious to some recent and arguably quite relevant developments, though of course I may well have missed or misinterpreted some elements that would explain this.