Right now it's more about learning and trying to understand the underlying science behind it.
If I go back to my “understanding photography RAW” thread, that all came down to not being able to get anything matching the scene-linear data from an Alexa out of those cameras; practically all tools like rawtoaces etc. just gave me linear values that seemed to be normalized or “compressed”, and as @Troy_James_Sobotka said:
> The device transfer function a vendor chooses is there for good reason; it’s a valuable compression function. The time of linear raw encodings is likely soon to be relegated to the discard bin of history for this very reason; they are horrifically inefficient, but because digital sensors are so f#%king horrible, we haven’t quite noticed.
I mean, I guess it's possible to store something like 17 stops of scene-linear in an integer format, but why would you do that? It seems way more efficient to just fit the sensor data into 0–1 and deal with the “desqueezing” later in processing. I made a little visualisation to show how linear scaling affects the code values, i.e. moving middle grey around, which gives me -8 and +8 stops around middle grey. I guess that would be ok-ish (sketched in code below).
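To make that concrete, here is a minimal sketch of what that visualisation computes. The 0.18 middle grey, the 16-bit unsigned integer container, and the normalization (peak = middle grey + 8 stops) are all my assumptions for illustration, not anything vendor-documented:

```python
# Minimal sketch: where each stop around middle grey lands in a linear
# 16-bit integer encoding, normalized so +8 stops above grey hits 1.0.
MIDDLE_GREY = 0.18          # assumed middle grey in scene-linear
BIT_DEPTH = 16
MAX_CODE = 2**BIT_DEPTH - 1

peak = MIDDLE_GREY * 2**8   # linear value of the +8 stop (the clip point)

for stop in range(-8, 9):
    linear = MIDDLE_GREY * 2**stop / peak   # normalized 0-1 linear value
    code = round(linear * MAX_CODE)         # quantized integer code value
    print(f"stop {stop:+d}: linear {linear:.8f} -> code {code}")
```

With that scaling, middle grey sits around code 256 out of 65535, and -8 stops lands at code 1, so the entire bottom stop gets essentially a single code value.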
But if you look at the value distribution, it seems pretty nonsensical to allocate bits like that: half of all the available code values are “wasted” between stop +7 and +8 (counted out below), while float seems far better suited for this kind of data, since its absolute precision decreases as the values grow, giving roughly the same number of code values per stop. Even setting sensor technology aside, which might or might not be scene-referred, storing data like this doesn't seem efficient. Maybe they have found a smart way of compressing this stuff; I just don't know how, or why they would claim it's “better than 12-bit” when their direct competitor stores data as 12-bit log (ARRIRAW). And I never saw ARRI refer to their 16-bit linear sensor data as scene-linear, just linear.
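That “half the bits in the top stop” claim is easy to check. Here is a rough sketch counting how many of the ~2^16 code values fall inside each stop of the linear container above, versus a hypothetical pure-log curve that spends them evenly; the 16-stop range and the even-split log comparison are my assumptions, not ARRI's actual encoding:

```python
# Rough sketch: code values spent per stop in a linear 16-bit container
# versus a hypothetical pure-log encoding over the same 16 stops.
TOTAL_CODES = 2**16   # ~65536 code values in the container
STOPS = 16            # -8 .. +8 around middle grey, +8 normalized to 1.0

for i in range(STOPS):
    hi = 8 - i                 # stop at the top of this one-stop interval
    lo = hi - 1                # stop at the bottom of the interval
    top = 2.0 ** (hi - 8)      # normalized linear value at the top
    bottom = top / 2           # one stop down is exactly half the value
    linear_codes = round((top - bottom) * TOTAL_CODES)
    log_codes = TOTAL_CODES // STOPS   # a pure log curve spends bits evenly
    print(f"stops {lo:+d}..{hi:+d}: linear ~{linear_codes:>5} codes "
          f"vs log ~{log_codes} codes")
```

The linear container spends 32768 of its 65536 codes on the single stop between +7 and +8 and leaves exactly one code for the stop between -8 and -7, which is exactly the inefficiency a log or float encoding avoids.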
It could very well be that there are two different meanings of “scene-linear”, though; maybe what I am looking for is “radiometrically linear”?