I still think the title of this thread is important and that particular step of the current prototypes is a critical step, and I hope to have something short and useful to share on that.
Unfortunately there may be no turning back from the direction this thread has taken (apologies if I have contributed to that), but maybe it is still useful, and the occasional tendency on this forum to divert into deeper conceptual matters is probably overdue. Skip to the last few lines of this post to avoid the diversion.
It can often be helpful to (re)define useful terminology (yes, even “tone”) and clarify expectations to make sure there is not too much talking past each other.
Keep in mind that my original post referred to the function of a particular line of code that might or might not be useful. In general, the “trade-offs” I refer to are purely practical: when there is no perfect solution due to the limits of technology or understanding, as is definitely the case with the human visual system, yet something needs to be delivered!
These are the kind of choices/trade-offs that members of this forum make every day. Lighter/darker, warmer/cooler, pinker/greener, etc… Often no extra technology or information makes these choices/trade-offs easier.
Most CAMs are not expected to model all the spatiotemporal phenomena and complexity of the human visual system.
CAMs often do a decent job at the task they were designed for: making statistically accurate predictions of observer-reported color appearance/description from specific data set(s) under very specific conditions.
This is not the goal of the ACES DRT, nor should it be.
The task that most on this forum have is to make pleasing pictures, and the ACES DRT should help them achieve that goal.
These tasks obviously have a great deal in common, but the end goals are not the same and I don’t really think anyone reading this needs reminding of that.
The hope is that tools/ideas from one (CAMs) can be applied to the other (DRT), which is very likely/somewhat already proven.
It is useful that Troy references the development of film and what could now be referred to as “Device Color”.
Probably most of the critical development in the history of color imaging came from innovation in dyes, colorants, phosphors, substrates, sensors, and the chemical or electrical means to control their application.
One could argue that color models (like CIE XYZ) have mostly been useful to evaluate and compare the output from “Device Color” rather than as a direct means of producing/controlling that output.
This is of course no longer the case, so we should probably pay a great deal of attention to details/limits of these models if they are directly affecting the look of our pictures, and be ready to hack them or scrap them and build new ones if that is what is needed.
It is probably good to keep in mind that CIE XYZ and other tristimulus colorimetric measures are still very useful and accurate measures of “stimulus”.
If the light source and colorant/filters are known, then XYZ coordinates are sufficient to exactly reproduce that same stimulus with the appropriate colorant or filters, even if we do not have a model that describes the appearance of that stimulus under all conditions.
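To make the “stimulus” point above concrete, here is a minimal sketch of how tristimulus values are computed: the spectrum of light reaching the eye is integrated against the color-matching functions. All the numbers below (the coarse 5-sample wavelength grid, the source power, the reflectance, and the CMF stand-ins) are illustrative placeholders I made up, not real CIE data, which is tabulated at 1–5 nm steps over roughly 380–780 nm.

```python
import numpy as np

# Toy 5-sample spectra (hypothetical values for illustration only)
wavelengths = np.array([450.0, 500.0, 550.0, 600.0, 650.0])  # nm
spd = np.array([0.8, 0.6, 1.0, 0.9, 0.4])          # light source power
reflectance = np.array([0.2, 0.5, 0.7, 0.6, 0.3])  # colorant reflectance

# Hypothetical stand-ins for the CIE x-bar, y-bar, z-bar functions
cmf = np.array([
    [0.34, 0.00, 1.77],   # 450 nm
    [0.00, 0.32, 0.27],   # 500 nm
    [0.43, 0.99, 0.01],   # 550 nm
    [1.06, 0.63, 0.00],   # 600 nm
    [0.28, 0.11, 0.00],   # 650 nm
])

stimulus = spd * reflectance   # light reaching the eye, per wavelength
xyz = stimulus @ cmf           # integrate against the CMFs -> (X, Y, Z)
print(xyz)
```

The key consequence, as noted above: two physically different spectra that integrate to the same XYZ are metamers, so matching XYZ is enough to reproduce the stimulus, even without a model of how that stimulus will *appear* in every viewing condition.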
I will reply in further detail to some of the points/examples Troy has demonstrated, and some thoughts on CAMs and DRTs that have been raised by others (maybe in a new thread?).
A quick (and intentionally provocative) summary of this post, as a reply to/expansion of some of Troy’s comments in the thread (with which I mostly, but not totally, agree):
We don’t absolutely need to know how the human visual system works.
We only really need to know how to make pictures work!
(and maybe a little about the devices that we use to produce them)
What tools do we have to achieve this, and what tools still need to be built?
A more productive summary that fits with the original intent of this thread would be:
WTF is going on with the shadows and the blue light on the pool table in blue bar!
How did it look? How should it look? How could it look? And what tools are available in the ACES DRT to control/modify the look?
Christopher