I think we just need some brave soul to extend the capabilities of ctlcc. If we can keep CTL as the reference language and have a tool to automatically cross-compile to various other popular shader/interpreted/compiled languages (à la ctlcc), I think that should be fine. Whatever it generates would need to be MIT or 3-Clause BSD licensed.
Not really interested in having any manual translation take place, as any additions or corrections would introduce further possibilities of translation error.
I think the issue at the moment is we have a reference implementation that is hard to implement unless you just want to use ctlrender. Yes, you could manually translate (as I assume you did?) to various shader languages, but that’s prohibitively difficult for some (there are 930 CTL files in ACES 1.0.3) and leaves a lot of room for implementation error.
@Paul_Dore, there have been many “manual” translations of ACES CTL to various other languages (I have had my fair share of porting chunks of it to Python for personal research, or to HLSL for Unity). The issue is long-term maintenance: any update to upstream is painful to back-port without major trauma. Not only that, but most of the implementations are not unit- or regression-tested; as a matter of fact, the CTL reference implementation itself is not!
Absolutely what I would be keen to avoid!
The point @SeanCooper raised (which we have discussed many times) is that everybody reimplements the CTL reference in one way or another, which is wasted energy. Kimball’s CTLCC would probably have been the way to go, but unfortunately at this stage it is also unmaintained (not that Kimball would not be able to jump back in). Florian Kainz’s last commit to CTL was almost a decade ago, and the last commit to the repo was 4 years ago. This should make people nervous about using CTL; it makes me nervous.
Doesn’t scare me the least bit. Why should it? C99 was around for over a decade before C11 came in to replace it. Not to say CTL is as stable as C, but there is little evidence to say it is unstable. Unless there are some changes you’d like to make to the language, I don’t see a reason to replace it.
No matter what, we’re going to need an interpreter that can spit out an AST that a cross-compiler can use to write to various other languages. We already have a stable CTL interpreter that can give us an AST - why not try to reignite CTL cross-compilation support instead of starting over completely?
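To make the AST-to-cross-compiler idea concrete, here is a toy sketch in Python. The node types and the `emit_glsl` function are purely illustrative, not the actual ctlcc AST or its API; the point is just that once an interpreter hands you a tree, emitting equivalent source in another language is a straightforward recursive walk.

```python
# Toy sketch of AST-driven cross-compilation (hypothetical node types,
# NOT the real ctlcc AST): walk a tiny expression tree and emit the
# same computation as a GLSL expression string.
from dataclasses import dataclass

@dataclass
class Num:
    value: float

@dataclass
class Var:
    name: str

@dataclass
class BinOp:
    op: str       # '+', '-', '*', '/'
    left: object
    right: object

def emit_glsl(node):
    """Recursively turn an AST node into a GLSL expression string."""
    if isinstance(node, Num):
        return f"{node.value}"
    if isinstance(node, Var):
        return node.name
    if isinstance(node, BinOp):
        return f"({emit_glsl(node.left)} {node.op} {emit_glsl(node.right)})"
    raise TypeError(f"unknown node: {node!r}")

# e.g. gain * rIn + 0.05
ast = BinOp('+', BinOp('*', Var('gain'), Var('rIn')), Num(0.05))
print(emit_glsl(ast))  # -> ((gain * rIn) + 0.05)
```

A second backend (HLSL, Metal, plain C) would just be another `emit_*` walker over the same tree, which is exactly why regenerating from one reference beats maintaining parallel manual ports.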
The major difference is that ISO/IEC etc. are steering C, preventing it from rotting and ensuring it will work for the decades to come. The user base and scope are entirely different, almost orthogonal; C will never ever go away, so the comparison is a bit moot to me.
Afaik, there is nobody steering CTL and vouching that it will keep working in the future. If that were to change, it would make a lot of people happy; I for sure would be.
That’s a good point. Perhaps then I would propose C or C++ as a long-term base implementation language. Obviously you could create a cross-compiler from that language, though finding the code entry point (a main function with in/out) would work a bit differently. And arbitrary execution would rely on the end user having a C or C++ compiler on the system.
The one thing that is really nice about CTL is that it is interpreted. Combine this with the fact that it has primitives for color operations, and it is well suited to the tasks we use it for. Is it slower than we might like? Sure. To me the biggest flaw is the lack of an xUnit-style framework for unit testing. This just further complicates the job of anyone trying to reimplement it in other languages.
It’s worth noting that CTL is not ctlrender. ctlrender was built out of necessity because we needed a more generalized tool to apply CTL modules to arbitrary images. It works well enough but has bugs. A while back I approached Larry Gritz about integrating CTL support into openimageio and retiring ctlrender. He agreed it would be a good idea, but for a variety of reasons that never happened.
I am 100% in favor of a CTLcc approach but it still needs a unit testing framework.
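As a sketch of what the missing testing framework could look like: below is a minimal tolerance-based regression check in Python. The transform and the reference values are hypothetical stand-ins (this is not an ACES formula, and no such harness ships with CTL today); a real harness would run each CTL module against stored reference outputs, with tolerances defined per transform.

```python
# Sketch of a tolerance-based regression check (hypothetical transform
# and reference values, purely illustrative). A real harness would run
# each CTL module over test inputs and compare against reference output.
import math

def tonescale_under_test(x):
    # Stand-in for a transform being verified -- NOT an actual ACES formula.
    return x / (x + 0.18)

# (input, expected) pairs that a reference run would have produced.
REFERENCE = [
    (0.0, 0.0),
    (0.18, 0.5),
    (1.0, 1.0 / 1.18),
]

TOLERANCE = 1e-6  # a spec would need to define this per transform

def test_tonescale_matches_reference():
    for x, expected in REFERENCE:
        got = tonescale_under_test(x)
        assert math.isclose(got, expected, abs_tol=TOLERANCE), (x, got, expected)

test_tonescale_matches_reference()
print("all reference values within tolerance")
```

This also speaks to the tolerance point raised below: without an agreed `TOLERANCE` per transform, there is no objective way to say whether any reimplementation is “close enough.”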
The discussion going on here is the result of the system’s design expectations not being borne out in reality.
I was fully on board with CTL being the default language of ACES. The idea, in my mind, was that it represents the platform-independent, invariant target that all other implementations strive to emulate.
The reality is that most manufacturers lack the resources, in time and talent, to devote developers to what is essentially multi-path development. Well-meaning manufacturers and developers have done their best to take CTL and make it real-time, as opposed to making machine-specific implementations.
Looking back, it seems ultimately naive to expect that platform developers would take the time to develop their own versions. Additionally, the lack of any kind of tolerance specification makes it impossible to know whether anything except the exact CTL code from the Academy is acceptable.
With a decade of hindsight, it appears to me that all of the transforms should have their reference specified in a well-established and well-supported GPU language. Any language that is well supported would enable our community to solicit help from a number of non-film-industry resources. It would provide immediately usable code that runs at production speed, as well as debugging tools that can be applied to final implementations.
I understand the desire for a pure math implementation, as I share it; I very much like stopping down and being able to atomically focus on just one detail of a transformation with no ‘stuff’ in the way. But I see how the project has been implemented and adopted in the real world. CTL should be retired unless a GPU manufacturer wants to implement it in hardware and provide a full set of debugging tools.
To some degree, what tolerance specs exist are for the Logo Program and are around making
A “well established and supported GPU” language would be nice, but there is serious vendor lock-in happening with those (DirectX 12, Metal, Vulkan). If we had picked something a decade ago, it might have become a backwater. We did consider CUDA at the time, but it wasn’t open enough.
In retrospect, we would have been better off building a “Pixel API” library and a “Color Processing API” and writing the functions in C. That is a fair amount of work, though, and for a volunteer effort it was a bridge too far. Even today, no company has stepped up to take over CTL after ILM developed it.
The question at the moment, though, is: for ACESNext, what development work is needed that would facilitate growth and the future of ACES in a hardware environment that is changing each decade?
There are technology trends that are changing as well: dynamic-metadata rendering instead of static LUTs, and the transition to machine-learning-centric GPUs has already started. What effect would AI techniques have on colorist work and color appearance modeling? At the TV-manufacturer level, some transforms cannot be done because of complexity and time when processing 4K/8K images. Could more efficient processing allow better color rendition in a TV?
These directions somewhat overlap the use of ACES in a production context, and unfortunately the motion picture industry is not big enough to drive the technology very far (compared to gaming, for example). However, in the matter of influencing useful imaging for other areas, it can – look at the usage of ACES concepts at NVIDIA, for example.
With the start of the motion picture open-source foundation, is there a project that could get support for making a more usable production tool than CTL? After all, its biggest defect is just performance, which is really a solvable problem (and yes, the second biggest is debugging).
Just to play the devil’s advocate, HLSL and GLSL have both existed for about 15 years (DirectX 9 and OpenGL 2.0); they are not going to disappear any time soon, and ACES already runs (or can run) on both.