What's the best way to bring photographic elements and HDRIs into ACES from DNG files? As we get raw files in all shapes and sizes, we are looking to adopt DNG as an intermediate format and convert footage to it using Adobe DNG Converter, since DNG is more widely supported than many of the camera raw formats out there.
Since we have a choice of colourspace to debayer into when bringing material into Nuke, which method is best? Having read up on this, demosaicing to RGB and then using Utility - sRGB - Camera would seem appropriate, but I am seeing some workflows going to CIE XYZ and using Utility - XYZ - D60 to get to ACES. We are about to embark on creating a new pipeline for ingesting all our stills photography material into ACES and want to adopt the best methodology.
I take it this does not apply any chromatic adaptation? It doesn't match what I get in Nuke using the Bradford matrix via a standard Nuke Colorspace node from CIE XYZ to ACES. How would I get this with the correct adaptation, or am I off the mark here?
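For reference, the Bradford-adapted path can be sketched in a few lines of numpy. This is a sketch of a CIE XYZ (D65) to ACES2065-1 conversion with adaptation, not Nuke's exact internals; the Bradford matrix and white points are standard published values, the XYZ-to-AP0 matrix is from the ACES specification, and the D65 source white is an assumption about the input.

```python
import numpy as np

# Bradford cone response matrix (standard published values).
BRADFORD = np.array([
    [ 0.8951,  0.2664, -0.1614],
    [-0.7502,  1.7135,  0.0367],
    [ 0.0389, -0.0685,  1.0296],
])

def xy_to_XYZ(x, y):
    """Chromaticity (x, y) to XYZ with Y = 1."""
    return np.array([x / y, 1.0, (1.0 - x - y) / y])

def bradford_cat(src_white, dst_white):
    """3x3 chromatic adaptation matrix taking src_white to dst_white."""
    src_cone = BRADFORD @ src_white
    dst_cone = BRADFORD @ dst_white
    return np.linalg.inv(BRADFORD) @ np.diag(dst_cone / src_cone) @ BRADFORD

D65 = xy_to_XYZ(0.3127, 0.3290)            # assumed source white
ACES_WHITE = xy_to_XYZ(0.32168, 0.33767)   # ACES white point (~D60)
cat = bradford_cat(D65, ACES_WHITE)

# XYZ (ACES white) -> ACES2065-1 (AP0), from the ACES specification.
XYZ_TO_AP0 = np.array([
    [ 1.0498110175, 0.0000000000, -0.0000974845],
    [-0.4959030231, 1.3733130458,  0.0982400361],
    [ 0.0000000000, 0.0000000000,  0.9912520182],
])

# Full conversion, chromatic adaptation included.
XYZ_D65_TO_ACES = XYZ_TO_AP0 @ cat
```

By construction the adaptation maps the D65 white exactly onto the ACES white, so a D65-referred white lands on equal ACES RGB values.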
LibRaw, as well as OpenImageIO's oiiotool with the LibRaw plugin, can open camera raw images in the ACES colour space. But that's not what you are actually asking. For what you probably want, there is some chance that darktable or RawTherapee can do it, but only if you are OK with dealing with highly unoptimised and slow open-source code.
No, I don’t want other software solutions. We have those already, but they are out of date for our build of CentOS and we don’t have a software team to compile newer versions. I want to know the best transforms to use in Nuke to convert a DNG to ACES, that is all.
Sorry, I just want to say: if you already know your camera's sensor response characteristics, then transforming sensor RGB to XYZ, and on to any desired colour space, is not really the question.
But if you don’t have that data, and most photo cameras don’t have vendor camera profiles compatible with the ACES protocol and libraries, then this is more a question of how to capture and compute colour profiles for your camera or cameras.
It seems like there is some confusion in this thread around what is necessary to convert camera raw images into scene-linear ACES EXR images.
First and most importantly, you must debayer your raw image to a linear encoding, so that you preserve a proportional relationship between pixel data intensity and scene light intensity. Otherwise all bets are off.
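That proportionality requirement is easy to demonstrate in a short sketch, with the sRGB curve standing in for any display encoding:

```python
def srgb_encode(v):
    """Linear value -> sRGB display encoding (IEC 61966-2-1)."""
    return 12.92 * v if v <= 0.0031308 else 1.055 * v ** (1 / 2.4) - 0.055

# In a linear encoding, doubling the scene light doubles the pixel value.
# After a display encoding it does not, so HDRI merging and CG lighting
# math no longer line up with the scene.
grey = 0.18
linear_ratio = (2 * grey) / grey                            # exactly 2.0
display_ratio = srgb_encode(2 * grey) / srgb_encode(grey)   # well below 2.0
```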
As far as I am aware, it is 2022 and Adobe products still do not support debayering camera raw to a scene-linear output image. DNG or CR2 or NEF, the result will be the same black-box display-referred result.
Secondly, you must decide what gamut you wish to debayer your image into. @simon.arnold, you mentioned ACES, but I am not sure if you are referring to ACES the “color encoding system” or ACES the gamut (AP0). Regardless of your desired target gamut, this is pretty straightforward. You might get scared by all this talk of IDTs. You might look at rawtoaces and get scared that you need spectral sensitivity data for your camera sensor in order to convert your image into AP0. (I was, at one time in the past.)
Don’t be scared.
An IDT is simply a 3x3 matrix which converts raw colors to some known gamut. From that known gamut you can get to any target gamut you want.
An “IDT matrix” is included automatically in your raw file’s metadata.
If you have the above two things, you have the answer to your question.
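As a sketch of that idea: the camera matrix below is a made-up placeholder (in a DNG the real one comes from the ColorMatrix/ForwardMatrix tags), while the XYZ-to-AP0 matrix is the published ACES value. Concatenating them gives one 3x3 from camera RGB to AP0.

```python
import numpy as np

# Placeholder camera -> XYZ matrix; NOT a real camera's values.
CAM_TO_XYZ = np.array([
    [0.70, 0.20, 0.10],
    [0.25, 0.65, 0.10],
    [0.05, 0.10, 0.85],
])

# XYZ -> ACES2065-1 (AP0), from the ACES specification.
XYZ_TO_AP0 = np.array([
    [ 1.0498110175, 0.0000000000, -0.0000974845],
    [-0.4959030231, 1.3733130458,  0.0982400361],
    [ 0.0000000000, 0.0000000000,  0.9912520182],
])

# The "IDT" in the simple 3x3 sense: one matrix, camera RGB -> AP0.
CAM_TO_AP0 = XYZ_TO_AP0 @ CAM_TO_XYZ

def apply_matrix(pixels, m):
    """Apply a 3x3 matrix to an (..., 3) array of linear RGB pixels."""
    return pixels @ m.T

raw = np.array([[0.18, 0.18, 0.18]])
aces = apply_matrix(raw, CAM_TO_AP0)
```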
The best solutions to this problem I am aware of are in the open-source domain. RawTherapee and LibRaw / dcraw_emu are both very good and easy to use. My debayer project is just a Python wrapper around these tools with a sane set of defaults for the specific goal of converting to scene-linear images.
I’m not sure I agree with that but I would be curious to see comparative benchmarks. Even if these tools are slower it’s not a big deal because you will (hopefully) be batch converting folders of files at once.
The only solution I’m aware of in Nuke for converting camera raw images into scene-linear image data is to override the format reader to use the older crw reader system, which basically just dumps the raw data to the dcraw command line and then loads the result back into Nuke. It is very slow, and I would strongly recommend that you batch process the raw images before working on them in Nuke.
We are a photogrammetry scanning studio, and I need to process thousands of raw images from 41 MP sensor cameras almost every day.
Let’s take RawTherapee as an example. I had presets that reproduce about 95% of the preset I have in Lightroom (the quality from RawTherapee is worse, and I am fairly sure darktable gave me the same quality). The PC has 128 GB of RAM and 18 cores / 36 threads, and one image eats around 5 GB of RAM on export.
So I managed to run up to 24 parallel threads via Python.
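For anyone wanting to reproduce that kind of setup, here is a minimal sketch of the parallel-batch idea using only the standard library; the no-op `true` command stands in for whatever converter invocation (rawtherapee-cli or similar) you would actually run per file:

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

def run_batch(commands, max_workers=24):
    """Run external conversion commands in parallel; return exit codes.

    Threads are fine here because the heavy lifting happens in the
    child processes, not in the Python interpreter.
    """
    def run(cmd):
        return subprocess.run(cmd, capture_output=True).returncode

    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(run, commands))

# Placeholder commands; in practice each would be one converter call.
exit_codes = run_batch([["true"]] * 8, max_workers=4)
```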
And that was still at least 50% slower than exporting from Lightroom, which runs five exports at a time.
So for the moment I am back to LR, but I am coding my own GPU-based image processing using LibRaw as the backend.
Using the Adobe DNG Converter CLI allows you to produce linear files; you need to use the -l argument for that. For what it’s worth, I have been working with it for almost a decade now, and this option arrived in version 3.2, which is from 2006 or thereabouts.
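As a small illustration, a hypothetical helper that assembles that CLI call: the -l flag is the linear option described above, while the -d output-directory flag and the converter binary name are my assumptions, so check them against your converter's own help text.

```python
def dng_convert_cmd(raw_path, out_dir, converter="Adobe DNG Converter"):
    """Build an Adobe DNG Converter command line producing a linear
    (demosaiced) DNG via -l. The -d output-directory flag is assumed."""
    return [converter, "-l", "-d", out_dir, raw_path]

cmd = dng_convert_cmd("IMG_0001.CR2", "/tmp/dng")
```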
For me, what makes DNG a really good format to store data is:
- Making sure that the camera metadata is consistently generated, which is awesome from an automation standpoint.
- Having a well-documented file format that you can read without trauma.
In some cases yes, but an IDT is not limited to that: it can have a 1D linearisation LUT (e.g. the aces-dev IDTs), a 3xn matrix (e.g. some Canon cameras), white-balancing factors, or even 3D LUTs in some cases.
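To illustrate the non-3x3 parts, here is a toy IDT that applies a 1D linearisation LUT per channel before a matrix; the LUT values and the identity matrix are placeholders, not taken from any real IDT:

```python
import numpy as np

# Placeholder 1D linearisation LUT: code value -> linear light.
LUT_IN  = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
LUT_OUT = np.array([0.0, 0.05, 0.18, 0.45, 1.0])

M = np.eye(3)  # placeholder 3x3 (camera -> target gamut)

def apply_idt(pixels):
    """Linearise each channel through the 1D LUT, then apply the 3x3."""
    linear = np.interp(pixels, LUT_IN, LUT_OUT)
    return linear @ M.T

out = apply_idt(np.array([[0.5, 0.5, 0.5]]))
```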
To @jedsmith’s point, the main advantage of OSS is that you can put it on the farm easily and build full automation around it rather quickly, so if you have a farm with tens of thousands of cores, it is a no-brainer.
If you are a freelancer or a small or indie studio, I would certainly look at Fast Cinema DNG: https://www.fastcinemadng.com. The developer, Fyodor Serzhenko, built it with direct feedback from Lee Perry Smith, who does 4D capture with really massive datasets. I had a few conversations with him and Lee back then, and the tool should support direct ACES2065-1 output.
Thanks for the input, but a lot of what you are on about with spectral sensitivities is over my head. I do understand that going display-referred is not best, but having tried dcraw to linear, it didn’t look great, and this way got us better overall results for our texture use. I can see that for HDRIs it won’t cut it.
How do I find this 3x3 IDT you all mention, and how and where should I apply it?
As I already explained, rawtoaces is out, as it needs building and we don’t have the resources; my team tried but is having issues with dependencies on our distro.
dcraw is also pretty old on our distro, so we can’t go straight to ACES. That said, having done more research, going to 16-bit linear TIFF seems to have disadvantages and is not ideal for HDRI either, so my options are very limited.
I could use dcraw to get to a linear TIFF, but then what? We primarily use stills for HDRIs, on-set reference and textures; internally I have us using OCIO and the ACES 1.2 config for all projects, using camera IDTs to go back and forth.
I’m of the opinion that dcraw to ACES can work well if you shoot a grey card and set the flag to output the image with ACES primaries. Then adjust the exposure in Nuke so the grey card sits at 0.18, and apply that same adjustment to all the images saved out of dcraw. Having said that:
I would like to start using rawtoaces again, but the way it generated an IDT from DNG metadata (before the bug that makes them all really blue) gives the same result as I get from OIIOTool and Resolve. That doesn’t surprise me, since they are all using LibRaw on the back end.
When using OIIOTool for raw development, there is a flag that automatically adjusts to scene linear, though it is based on an assumption, so check that it’s working okay for you. Here are the flags.
The output matches the Resolve DNG ACES IDT, which does the same conversion from integer to linear floating point. I think as long as the metadata tags are as written when the image was saved to disk, the result is consistent across Resolve, OIIOTool and rawToACES in DNG mode. It’s annoying that Nuke still doesn’t include the same DNG integer-to-float transform.
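The grey-card normalisation described above reduces to a single gain applied uniformly; a trivial sketch:

```python
def grey_card_gain(measured_grey, target=0.18):
    """Gain that moves the measured grey-card value to the target.

    Apply the same gain to every image from the same camera setup so
    relative exposure between frames is preserved.
    """
    return target / measured_grey

gain = grey_card_gain(0.09)  # card read back at 0.09 -> gain of 2.0
```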