Maybe it’s a good time to revisit the whole approach to adding graphic elements? Something like an alpha channel for bypassing the DRT for some parts of the screen, but only for frames where it’s needed? Well, not completely bypassing the DRT, but a separate special conversion for graphics between display devices, applied using that alpha channel.
Or even optional additional RGB channels, for better transparency, which would only be added (and take disk space) if there are graphics in those frames.
Not exactly this, but something that could be accepted as a standard way of adding graphic elements in the ACES framework.
And to have an approximate inverse, just for using old show LUTs or some other unusual cases, which would not force compromises in the forward DRT.
A lot of unnecessary complications. All this can be achieved by using the DRT only on specific clips, with post-groups / adjustment layers in Resolve, or layers in Mistika / Baselight. I do not think it is an ACES problem.
I assume that the main goal for the DRT development team is to ship it and test it in real production workflows. That would help identify the most important visual issues and clarify whether they are fixable in the DRT or better addressed with an LMT instead. Any DRT fix that resolves one problem can create another.
The next step might be to document all remaining bugs and create LMTs to be used in specific cases. Explainer videos and clear documentation should make adoption easier.
That said, I believe in the ACES team and trust their judgement on what to do next and in what order.
No DRT that pursues such wildly different goals will be devoid of issues; that should be accepted at this point.
Also glad you compared rev056 with Reveal, because one of the reasons I like this ACES2 version is that its blues lean more toward blue, whereas Reveal’s lean more toward cyan. So the better, smoother blues are appreciated.
I do not wish to emulate Reveal or Resolve, but rather to find an alternative that works well and may be better for some. While I feel that “smoothness” is a high priority, I also give high priority to SDR-HDR consistency. It was stated that most use SDR, but I will say that my intent as an artist is to utilize HDR. Displays are now being produced that can really harness HDR. My appreciation goes to the ACES team for keeping HDR among their priorities. I am not sure any other DRT builders have been quite as concerned.
Literally no one here has yet identified what the purity dimension does in pictorial depictions. That means it’s not fixable, because no one knows what is broken.
Try the milk glass test.
Try to tint “under” the picture formation.
Same old same old.
Visual cognition is computation from the visual fields, and as such, this is literally an impossibility.
But the protocol did not outline what the creative parameter space is at the onset.
PS: Visual cognition doesn’t have an HDR mode, despite what subscription services and TV salespeople would have us believe.
If I understand correctly, this would basically be a set of rules on how to build your own workflow and still be compatible with ACES via some metadata mechanism?
What follows isn’t exactly a reply to your proposal, but it reminded me of a message of mine from a Discord discussion.
And regarding the DRT, I’d prefer something like this: a standard S-curve with an option to adjust contrast, but an overall design that allows the path-to-white / “hue preserving” / per-channel modules to be swapped easily.
So no waiting for major updates. Just different modules, loaded into the DRT, that provide different qualities. If someone needs “hue preserving”, they just download that module or make it themselves.
Sort of a simple-to-use constructor. No choosing between different DRTs, but a standard system that allows its qualities to be changed or modified via files containing some code or a 3D table responsible for a particular part of the rendering.
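To make the idea concrete, here is a minimal sketch of what such a “constructor” could look like, assuming the rendering is a fixed chain of slots and each slot is a plain swappable callable. All names and the toy tonescale are illustrative only, not any actual ACES API:

```python
# Hypothetical "constructor" style DRT: a fixed pipeline of slots,
# where each slot is a swappable module (a plain callable).
from typing import Callable, List

Module = Callable[[float], float]  # operates per channel on linear values

def s_curve(contrast: float = 1.2) -> Module:
    """Toy Michaelis-Menten style tonescale with a contrast exponent."""
    def apply(x: float) -> float:
        x = max(x, 0.0) ** contrast
        return x / (x + 1.0)
    return apply

def clamp_01() -> Module:
    """Final display clamp to [0, 1]."""
    return lambda x: min(max(x, 0.0), 1.0)

def build_drt(modules: List[Module]) -> Module:
    """Compose the slot modules into a single rendering transform."""
    def apply(x: float) -> float:
        for m in modules:
            x = m(x)
        return x
    return apply

# Changing behaviour means swapping one module, not choosing a whole new DRT.
drt_default = build_drt([s_curve(contrast=1.2), clamp_01()])
drt_flat = build_drt([s_curve(contrast=1.0), clamp_01()])
```

A “hue preserving” or path-to-white module would simply be another entry in the list, distributed as its own file or 3D table.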
I wouldn’t focus too much on my specific LMT. I am not proposing it as the ultimate solution. It was just a quick experiment into the concept of a JMh based LMT.
My real point is that a range of LMTs are possible, for various technical or creative purposes, and a DRT which does not limit the scope of what those LMTs can do allows the widest range of possibilities.
And ARRI Reveal doesn’t always reach the corners, whereas CAM DRT has that as a requirement we’ve chosen to have. Here’s an example sweep for v056 and ARRI Reveal:
Here is an updated version of my LMT, written as a single Blink kernel, rather than built from multiple DRT nodes in diagnostic modes. It also now includes parametric control of the hue to be compressed, rather than using a curve lookup, so it is easier to try the effect on different hues.
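For anyone who wants to experiment with the idea outside of Nuke, here is a rough Python sketch of what parametric hue compression of this kind can look like. The parameter names, the cosine falloff, and the defaults are my own guesses for illustration, not the actual kernel’s:

```python
import math

# Illustrative parametric hue compression: hues within +/- `width` degrees
# of `hue_centre` are pulled toward the centre by `amount`, with a smooth
# falloff instead of a curve lookup. Hues are in degrees.
def compress_hue(h: float, hue_centre: float = 250.0,
                 width: float = 60.0, amount: float = 0.5) -> float:
    # signed hue distance, wrapped to [-180, 180)
    d = (h - hue_centre + 180.0) % 360.0 - 180.0
    if abs(d) >= width:
        return h % 360.0
    # cosine falloff: full strength at the centre, zero at the window edges
    w = 0.5 * (1.0 + math.cos(math.pi * d / width))
    return (h - amount * w * d) % 360.0
```

Exposing `hue_centre` as a parameter is what makes it easy to try the effect on different hues, which was the point of dropping the curve lookup.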
Well, if you want to create an archive master without a DRT, this does not work. Also, how do you treat graphics for different versions (SDR vs. HDR)? If it were simple, it would have been done already.
OCIO is a colour management system; it is not the same as defining a meta-framework. For example, IDTs are only expressed as a 1D LUT and a matrix in OCIO, which is unacceptable for a meta-framework. Creating a framework would actually be quite an undertaking in practice. This working group has not even started defining the rest of the pipeline (Viewing Conditions, White Point Handling, EOTFs etc…). It is still a lot of work…
It is never too late.
I think this would only move the issue one step further up the abstraction ladder. People would want a different algorithm at some point, and the system would become so complex that it would be hard to control.
The only way I found satisfying all use cases is to make the DRT a swappable building block.
I was not thinking about archival purposes, point taken. You and Anton are right, this is a whole different can of worms.
Out of curiosity, why are a 1D LUT and a 3x3 matrix not enough for IDTs? That approach satisfies the “linearity” condition.
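For reference, here is a toy sketch of what that IDT shape expresses: a per-channel piecewise-linear lookup for linearization, followed by a 3x3 matrix to the working primaries. The LUT values and the identity matrix below are made up for illustration, not real camera data:

```python
import bisect

def apply_lut_1d(value, lut_in, lut_out):
    """Piecewise-linear interpolation through a 1D LUT (per channel)."""
    if value <= lut_in[0]:
        return lut_out[0]
    if value >= lut_in[-1]:
        return lut_out[-1]
    i = bisect.bisect_right(lut_in, value) - 1
    t = (value - lut_in[i]) / (lut_in[i + 1] - lut_in[i])
    return lut_out[i] + t * (lut_out[i + 1] - lut_out[i])

def apply_matrix(rgb, m):
    """3x3 matrix applied to an RGB triplet."""
    return [sum(m[r][c] * rgb[c] for c in range(3)) for r in range(3)]

# Toy IDT: a log-ish shaper LUT (code value -> linear light, made up),
# then an identity stand-in for the camera-to-AP0 matrix.
lut_in = [0.0, 0.25, 0.5, 0.75, 1.0]
lut_out = [0.0, 0.02, 0.18, 1.0, 8.0]
m_identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

def idt(rgb):
    linear = [apply_lut_1d(v, lut_in, lut_out) for v in rgb]
    return apply_matrix(linear, m_identity)
```

Anything an IDT needs beyond “linearize per channel, then matrix” (e.g. channel cross-talk before linearization) cannot be expressed in this form, which I take to be the objection.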
In my mind (and in the minds of many of my colleagues) ACES is an established colour-management system with its own set of pros and cons. There is some mental inertia we would need to overcome to pivot it into a meta-framework. It is a very interesting challenge, especially because some of the concepts would have to be defined from the ground up, but I wonder…
What specific problem(s) does a meta-framework solve? I don’t see a lack of innovation in the industry, but I may be missing something.
P.S. I think that the latter part of the discussion can be continued in the framework thread.
Thanks for taking the star image through another test.
I am not really sure this kind of image is a good way to test a DRT in the first place. But it is a simple test image at least. And I am pretty sure that if someone in AE with the new ACES configs were looking for an intense blue, why wouldn’t they use 0/0/1?
I noticed that not only has the blue changed (though there is still a kink in the plot), but the yellow has also changed slightly. It goes more directly to the gamut boundary?
The gamut mapper is what creates the curvature in chromaticity space, as it compresses along the perceptual hue lines. It’s not just the blue; the green and red stars show curving as well. There is also clipping (which skews things further), since the gamut mapping result does not land exactly on the boundary.
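A toy way to see the mechanism, assuming nothing about the actual CAM DRT internals: compress chroma radially in a polar (hue, chroma) space while holding the hue angle fixed. Because lines of constant perceptual hue are themselves curved in CIE xy, straight radial compression in the perceptual space traces curved paths in the chromaticity plot. The compression curve and thresholds below are purely illustrative:

```python
import math

def compress_chroma(c: float, threshold: float = 0.8,
                    limit: float = 1.5) -> float:
    """Soft-compress chroma above `threshold` so it never exceeds `limit`."""
    if c <= threshold:
        return c
    span = limit - threshold
    return threshold + span * (1.0 - math.exp(-(c - threshold) / span))

def gamut_map(a: float, b: float):
    """Apply the chroma compression at a constant hue angle."""
    c = math.hypot(a, b)
    h = math.atan2(b, a)
    c2 = compress_chroma(c)
    return (c2 * math.cos(h), c2 * math.sin(h))
```

Values inside the threshold pass through untouched; out-of-range values are pulled in along their hue line, approaching but never exactly reaching the limit, which is why a final clip is still needed.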
We should be careful about inferring meaningfulness from normalized signal projections such as the Standard Observer CIE xy projection. A “hey, that looks pooched” judgement cannot be made from an examination of the colourimetric CIE xy projection alone.
Surely we have a vast enough amount of evidence in this thread alone to disprove the idea that there is a 1:1 stimulus to cognition Cartesian-like model that can even remotely generate “perceptual hue lines”? No? Not yet? Folks are still clinging to this absolute rubbish?
Incidentally, the parameters in that LMT default to a hue centre of 288, a value I found worked well visually for me. But I was only looking at the Rec.709 output. The hue value that lines up with the AP1 blue primary is 250.
UPDATE: I have just pushed a commit changing the default hue centre to 250 and adding a DCTL version.