Hey everyone, I'm trying to understand the difference between the scene-linear luminance data I get from "cinema" RAW files (ARRIRAW, R3D, or various log formats) and what I get from photography stills. I think I've now gone through all the usual programs (rawtoaces, Resolve, Affinity, LibRaw, dcraw, etc.) with test files from different cameras and manufacturers, and I still don't really understand how the latitude gets distributed; it's as if the stills come out as a normalized "linear"?
So basically what I'm testing is the highest values in my colour channels. Take ARRIRAW from the Alexa 65 debayered to ACES with a clipped sun in frame: the highest it goes is around 65. That makes sense to me, since middle grey sits at about 0.18, so from that I can derive that 65 is approximately 8.5 stops above middle grey. I see similar behaviour across the board with RED, Blackmagic, etc.
(maths)
65 = 0.18 * 2^X
X = log2(65 / 0.18) ≈ 8.5
(/maths)
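For anyone who wants to check the arithmetic themselves, here is a tiny Python sketch of that same calculation (the 65 and 1.0 inputs are just the values from my own tests, nothing authoritative):

(code)
import math

MIDDLE_GREY = 0.18  # scene-linear middle grey

def stops_above_middle_grey(linear_value):
    # how many stops a scene-linear value sits above 0.18
    return math.log2(linear_value / MIDDLE_GREY)

print(stops_above_middle_grey(65.0))  # ~8.5  -> Alexa 65 ACES clip with sun in frame
print(stops_above_middle_grey(1.0))   # ~2.47 -> where most photo-raw tools seem to clip
(/code)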
With photo raw there is a big difference depending on how the RAW was developed. Using the purpose-built tool rawtoaces, which has native support for the Sony A7R II, I was able to debayer a file from it and get values up to approximately 6, but most other tools clip at just above 1. There doesn't seem to be less usable dynamic range; it just seems to be normalized, I guess to fit into integer-based formats like TIFF (which, for example, CR2 is based on, IIRC).
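To illustrate what I mean by "normalized": a constant gain doesn't change the captured dynamic range, it only moves the clip point around. Here is a sketch of what I imagine would be needed to push a normalized development back up; the 5-stop headroom number is purely my assumption for the example, not something I've read anywhere, and the array name is made up:

(code)
import numpy as np

# Assumption: 'still' is a float32 array of a photo raw developed to linear,
# but normalized so the sensor clip point lands at about 1.0.
HEADROOM_STOPS = 5.0                      # assumed highlight headroom above middle grey (a guess)
CLIP_TARGET = 0.18 * 2.0**HEADROOM_STOPS  # scene-linear value the clip point would map to

def rescale_normalized_still(still, clip_level=1.0):
    # simple exposure gain: moves the clip point without changing dynamic range
    return still.astype(np.float32) * (CLIP_TARGET / clip_level)
(/code)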
When I have to patch something in comp with a photograph from a stills camera (like a clean plate), the values from the still and the cinema camera end up in totally different ranges. Obviously one would then adjust the photograph to match the plate in comp, but is that really the point?
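In practice that adjustment boils down to a single exposure gain for me, matched off a grey card or some neutral patch visible in both the still and the plate. A rough sketch of that (the patch coordinates and array names are made up for the example, both images assumed to be scene-linear float arrays):

(code)
import numpy as np

def match_exposure(still, plate, still_patch, plate_patch):
    # Scale 'still' so a neutral reference patch has the same mean
    # linear value as the corresponding patch in 'plate'.
    # *_patch = (y0, y1, x0, x1) crop of the same neutral object in each image.
    sy0, sy1, sx0, sx1 = still_patch
    py0, py1, px0, px1 = plate_patch
    s = still[sy0:sy1, sx0:sx1].mean()
    p = plate[py0:py1, px0:px1].mean()
    return still * (p / s)
(/code)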
I really hope someone can enlighten me as to why this is and whether it is on purpose (are raw stills meant to be used as diffuse textures rather than scene-linear data?), and even better, how I can get them to behave like cinema footage.
Please do let me know if I have this backwards!!