If it’s working (which it sounds like it is), then it seems like a great approach. It keeps us from having to generate a bunch of discrete transforms, and it also makes the system more flexible (and robust), since you can simply update the endpoints when required instead of updating every transform. Quite interesting!
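To make the endpoint idea concrete, here’s a minimal sketch (Python, with hypothetical names like `Endpoint`, `REGISTRY`, and `convert`; this isn’t anyone’s actual API): each space registers only its transform to and from a shared reference, and any pairwise transform is composed on demand, so fixing one endpoint fixes every path through it.

```python
# Minimal sketch of the endpoint idea (hypothetical names, not any
# project's actual API): each color space registers only its transform
# to/from a shared reference; pairwise conversions are composed on
# demand, so updating one endpoint updates every path through it.

from typing import Callable, Dict, Tuple

RGB = Tuple[float, float, float]
Transform = Callable[[RGB], RGB]

class Endpoint:
    def __init__(self, to_ref: Transform, from_ref: Transform):
        self.to_ref = to_ref      # space -> reference
        self.from_ref = from_ref  # reference -> space

REGISTRY: Dict[str, Endpoint] = {}

def convert(rgb: RGB, src: str, dst: str) -> RGB:
    """src -> reference -> dst, built at call time from two endpoints."""
    return REGISTRY[dst].from_ref(REGISTRY[src].to_ref(rgb))

# Toy endpoints: simple exposure scales relative to the reference.
REGISTRY["half"] = Endpoint(lambda c: tuple(v * 2 for v in c),
                            lambda c: tuple(v / 2 for v in c))
REGISTRY["quarter"] = Endpoint(lambda c: tuple(v * 4 for v in c),
                               lambda c: tuple(v / 4 for v in c))

print(convert((0.1, 0.2, 0.3), "half", "quarter"))  # (0.05, 0.1, 0.15)
```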
As for the topic of multiple renderers, one thing it accomplishes is formalizing a way to allow vendor (or, I suppose, even facility) renderers. Currently that’s being accomplished by applying an inverse RRT and the desired renderer inside the LMT space, which is less than desirable. Regardless of how good our vanilla render looks, there will be people who, for one reason or another, want or need to use another renderer (TCAM, IPP2, K1S1). The idea of keeping the system flexible has merit (with certain caveats, of course). It’s a bit of a Pandora’s box, but, for instance, there could be a “path to white” renderer and a “gamut clip” renderer. There was a brief discussion about having a “federated” group of renderers that is managed, as opposed to letting it be wide open, but that discussion seems to have been tabled for now.
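For reference, the workaround mentioned above amounts to baking the vendor renderer plus an inverse RRT into the LMT, so that the standard forward RRT applied downstream cancels out and the vendor look survives. A toy sketch of that composition (`rrt`, `inverse_rrt`, and `vendor_render` are trivial stand-ins here, not real transforms):

```python
# Toy sketch of the current workaround: the LMT bakes in the vendor
# renderer followed by an inverse RRT, so the standard forward RRT
# applied downstream cancels and the vendor look comes through.
# These functions are trivial stand-ins, not real implementations.

def rrt(x: float) -> float:
    return x ** (1 / 2.2)                    # stand-in reference render

def inverse_rrt(x: float) -> float:
    return x ** 2.2                          # exact inverse of rrt above

def vendor_render(x: float) -> float:
    return min(1.0, 1.1 * x) ** (1 / 2.4)    # stand-in vendor look

def vendor_look_lmt(x: float) -> float:
    # LMT output stays in scene space: vendor render, then "un-render"
    return inverse_rrt(vendor_render(x))

# Downstream, the standard pipeline applies the forward RRT, which
# cancels the baked-in inverse and leaves the vendor rendering:
assert abs(rrt(vendor_look_lmt(0.18)) - vendor_render(0.18)) < 1e-9
```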
To clarify what I wrote earlier: what I was proposing (based somewhat on Ed’s paper) was an RRT that renders out to a defined (albeit theoretical) display. I need to revisit the details, but I’m pretty sure this is a departure from how the current OCES system is designed.