What’s the best way to bring photographic elements and HDRIs into ACES from DNG files? As we get raw files in all shapes and sizes, we are looking to adopt DNG as an intermediate format and convert footage to it using the Adobe DNG Converter, since DNG is more widely supported than a lot of the camera raw formats out there.
As we have a choice of colourspace to debayer into when bringing material into Nuke, what is the best method to choose? Having read up on this, demosaicing to RGB and then using Utility sRGB Camera would seem appropriate, but I am seeing some workflows going to CIE XYZ and using Utility XYZ D60 to ACES. We are about to embark on creating a new pipeline for ingesting all our stills photography material into ACES and want to adopt the best methodology.
I take it this does not apply any chromatic adaptation? It doesn’t match what I get in Nuke using the Bradford matrix with a standard Nuke Colorspace node from CIE XYZ to ACES. How would I get this with the correct adaptation, or am I off the mark here?
LibRaw, as well as OpenImageIO’s oiiotool with the LibRaw plugin, can open camera raw images in the ACES colour space. But that’s not what you are actually asking. What you probably want is something darktable or RawTherapee can likely do, but only if you are OK dealing with highly unoptimised and slow open-source code.
No, I don’t want other software solutions. We have those already, but they are out of date for our build of CentOS and we don’t have the software team to compile newer versions. I want to know what the best transforms are to use in Nuke to convert a DNG to ACES, that is all.
Sorry, I just want to say: if you already know your camera’s sensor response characteristics, then transforming sensor RGB to XYZ, and on to any desired colour space, is not the hard question.
But if you don’t have that data, as most photo cameras don’t ship vendor profiles compatible with the ACES protocol and libraries, this becomes more a question of how to capture and compute colour profiles for your camera or cameras.
It seems like there is some confusion in this thread around what is necessary to convert camera raw images into scene-linear ACES EXR images.
Linear
First and most importantly, you must debayer your raw image to a linear encoding, so that you preserve a proportional relationship between pixel data intensity and scene light intensity. Otherwise all bets are off.
As far as I am aware, it is 2022 and Adobe products still do not support debayering camera raw to a scene-linear output image. DNG or CR2 or NEF, you get the same black-box, display-referred result.
Gamut
Secondly you must decide what gamut you wish to debayer your image into. @simon.arnold you mentioned ACES but I am not sure if you are referring to ACES the “color encoding system” or ACES the gamut (AP0). Regardless of what your desired target gamut is, this is pretty straightforward. You might get scared of all this talk of IDTs. You might look at rawtoaces and get scared you need to have spectral sensitivity data for your camera sensor in order to convert your image into AP0. (I was at one time.)
Don’t be scared.
An IDT is simply a 3x3 matrix which converts raw colors to some known target gamut. From that known gamut you can get to any target gamut you want.
An “IDT Matrix” is included automatically with your raw file.
If you have the above two things, you have the answer to your question.
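To make the matrix step concrete, here is a minimal numpy sketch of applying a 3x3 IDT to a debayered, scene-linear image. The matrix values below are placeholders for illustration, not any real camera’s IDT:

```python
import numpy as np

# Placeholder 3x3 IDT: camera-native RGB -> some known target gamut (e.g. AP0).
# These values are made up for illustration; substitute the matrix
# decoded from your raw file's metadata.
IDT = np.array([
    [0.75, 0.15, 0.10],
    [0.05, 0.90, 0.05],
    [0.02, 0.08, 0.90],
])

def apply_idt(img, matrix):
    """Apply a 3x3 matrix to an (H, W, 3) scene-linear float image."""
    return np.einsum('ij,hwj->hwi', matrix, img)
```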
The best solutions to this problem I am aware of are in the open source domain. RawTherapee and LibRaw / dcraw_emu are both very good and easy to use. My debayer project is just a Python wrapper around these tools with a sane set of defaults for the specific goal of converting to scene-linear images.
I’m not sure I agree with that but I would be curious to see comparative benchmarks. Even if these tools are slower it’s not a big deal because you will (hopefully) be batch converting folders of files at once.
The only solution I’m aware of in Nuke to convert camera raw images into scene-linear image data is to override the format reader to use the older crw reader system, which basically just passes the raw file to the dcraw command line and loads the result back into Nuke. It is very slow and I would strongly recommend that you batch process the raw images before working on them in Nuke.
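As a rough sketch of that batch step (assuming dcraw is on your PATH; note that -o 6 selects ACES primaries only in recent dcraw builds, so verify against your version):

```python
import subprocess
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

def convert(dng_path):
    # -4 = 16-bit linear output (no gamma or brightness curve),
    # -T = write TIFF, -w = use the camera's white balance,
    # -o 6 = ACES primaries (recent dcraw builds; older ones stop at -o 5, XYZ).
    subprocess.run(["dcraw", "-4", "-T", "-w", "-o", "6", str(dng_path)], check=True)

if __name__ == "__main__":
    files = sorted(Path("raw_folder").glob("*.dng"))
    with ProcessPoolExecutor() as pool:
        list(pool.map(convert, files))
```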
We are a photogrammetry scanning studio, and I need to process thousands of raw images from 41 MP sensor cameras almost every day.
Take RawTherapee, for example. I had presets that reproduced about 95% of the preset I have in Lightroom (the quality from RawTherapee is worse, and I’m fairly sure darktable would give me the same quality). The PC has 128 GB of RAM and 18 cores / 36 threads, and one image eats around 5 GB of RAM on export.
So I managed to run up to 24 parallel processes via Python.
And that was still at least 50% slower than exporting from Lightroom, which runs five exports at once.
So for the moment I’m back to LR, but I’m coding my own GPU-based image processing using LibRaw as the backend.
Using the Adobe DNG Converter from the CLI allows you to produce linear files; you need to use the -l argument for that. For what it’s worth, I have been working with it for almost a decade now, and this arrived in version 3.2, which is from around 2006.
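A sketch of driving it from Python (the executable path below is an assumption, so point it at your install; the converter only ships for Windows and macOS):

```python
import subprocess

# Path is an assumption -- adjust for your install, e.g. on Windows it is
# typically "C:\\Program Files\\Adobe\\Adobe DNG Converter\\Adobe DNG Converter.exe".
DNG_CONVERTER = "/Applications/Adobe DNG Converter.app/Contents/MacOS/Adobe DNG Converter"

def to_linear_dng(raw_path, out_dir):
    # -l = write a linear (demosaiced) DNG, -c = compressed,
    # -d = output directory for the converted file.
    subprocess.run([DNG_CONVERTER, "-l", "-c", "-d", str(out_dir), str(raw_path)],
                   check=True)
```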
For me, what makes DNG a really good format to store data is:
Making sure that the camera metadata is consistently generated which is awesome from an automation standpoint
Having a well documented file format that you can read without trauma.
In some cases yes, but an IDT is not limited to that: it can have a 1D linearisation LUT (e.g. the aces-dev IDTs), a 3xN matrix (e.g. some Canon cameras), white balancing factors, or even 3D LUTs in some cases.
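In sketch form, a fuller IDT of that shape just chains those pieces together (all the components here are hypothetical placeholders):

```python
import numpy as np

def apply_idt(raw_codes, lin_lut, wb_gains, matrix):
    """IDT as linearisation LUT + white balance + 3x3 matrix.
    raw_codes: (H, W, 3) integer code values; lin_lut: 1D LUT indexed
    by code value; wb_gains: per-channel factors; matrix: 3x3."""
    linear = lin_lut[raw_codes]            # 1D linearisation LUT
    balanced = linear * wb_gains           # white balancing factors
    return np.einsum('ij,hwj->hwi', matrix, balanced)
```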
To @jedsmith’s point, the main advantage of OSS is that you can put it on the farm easily and build full automation around it rather quickly, so if you have a farm with tens of thousands of cores, it is a no-brainer.
If you are a freelancer, small or indie studio, I would certainly look at Fast CinemaDNG: https://www.fastcinemadng.com. The developer, Fyodor Serzhenko, built it with direct feedback from Lee Perry Smith, who is doing 4D capture with really massive datasets. I had a few conversations with him and Lee back then, and the tool should support direct ACES2065-1 output.
Thanks for the input, but a lot of what you’re on about with spectral sensitivities is over my head. I do understand that going display-referred is not best, but having tried dcraw to linear, it didn’t look great, and this way got us overall better results for our texture use. I can see that for HDRIs it won’t cut it.
How do I find this 3x3 IDT you all mention, and how and where should I apply it?
As I already explained, rawtoaces is out, as it needs building and we don’t have the resources; my team tried but ran into issues with dependencies on our distro.
dcraw is also pretty old on our distro, so it can’t go straight to ACES. That said, having done more research, going to 16-bit linear TIFF seems to have disadvantages and is not ideal for HDRIs either, so my options are very limited.
I could use dcraw to get to a linear TIFF, but then what? We primarily use stills for HDRIs, on-set reference, and textures; internally I have us using OCIO and the ACES 1.2 config for all projects, using camera IDTs to go back and forth.
I’m of the opinion that dcraw to ACES can work well if you shoot a grey card and set the flag to output the image with ACES primaries. Then adjust the exposure in Nuke so the grey card sits at 0.18, and apply that same adjustment to all the images saved out of dcraw (see the sketch at the end of this post). Having said that:
I would like to start using rawToACES again, but the way it generated an IDT from DNG metadata (before the bug that makes them all really blue) gave the same result as I get from OIIOTool and Resolve. Which doesn’t surprise me, since they’re all using LibRaw on the back end.
When using OIIOTool for raw development, it has a flag that automatically adjusts to scene-linear, though it’s based on an assumption, so check that it’s working okay for you. Here are the flags.
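The flags from the original post are not reproduced above, so as an illustration only, an invocation using LibRaw reader hints that do exist in OpenImageIO might look like the following; verify the output against a known reference:

```python
import subprocess

subprocess.run([
    "oiiotool",
    "--iconfig", "raw:ColorSpace", "ACES",  # debayer straight to ACES primaries
    "--iconfig", "raw:auto_bright", "0",    # disable LibRaw's auto exposure
    "input.dng",
    "-o", "output.exr",
], check=True)
```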
The output matches the Resolve DNG ACES IDT, which does the same conversion from integer to linear floating point. I think as long as the metadata tags are as written when the image was saved to disk, then the result is consistent with Resolve, OIIOTool or rawToACES in DNG mode. It’s annoying that Nuke still doesn’t include the same DNG integer to float transform.
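Circling back to the grey-card step at the top of this post, a minimal sketch of the normalisation (the names here are illustrative):

```python
import numpy as np

def grey_card_gain(card_pixels, target=0.18):
    """Gain that puts the sampled grey card at 0.18 scene-linear.
    card_pixels: (N, 3) float samples from the card in the ACES image."""
    return target / float(np.mean(card_pixels))

# gain = grey_card_gain(samples)   # measured once, from the frame with the card
# normalised = image * gain        # then applied to every frame in the set
```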
Hello, I’m currently scratching my head about the best approach to bring camera raw files into an ACEScg workflow without losing too much quality or doing too many conversions from one piece of software to another. I think I’ve seen somewhere someone importing Canon CR2 files in Nuke, with Nuke using dcraw in the background to interpret the files being read in. At the moment I am using DNG as an intermediate “media/information” container to read into Nuke.
From there I can see two ways of converting the information into an ACEScg workflow, but I don’t know which one is best, or if there is an accuracy difference.
In the more traditional workflow most people are used to, we usually convert texture images to ACEScg with the input transform set to Utility - sRGB - Texture for the images we read in.
Now I’ve been looking for a more “direct” way to convert raw image files to ACEScg, and it seems that utility_xyz would be a more “pure” way to convert the original information.
Here’s a screenshot comparing both results: sRGB input on the top half and XYZ on the bottom half.
Is it correct to do what I did above?
I am not sure if it’s OK to take the information from this DNG file and convert it to ACES using utility_xyz as the input transform.
Luminance-wise the results are pretty much the same; the only difference is a tone/color balance shift between the two images. So I wonder if anyone with more understanding of colorspaces could clarify whether what I am doing is correct, and whether there is a best approach to convert raw into ACES without losing too much, or without jumping through multiple colorspace conversions that could reduce the image’s color range, like sRGB does.
In Nuke the Read node will convert from whatever you put in the Input Transform into ACEScg. So you would need to know what color space your DNG is in and then set the Input Transform to that.
Do you know the color space the DNG is in?
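For what it’s worth, setting that up from Python looks something like this; the colourspace string is only an example and must match a name in your OCIO config:

```python
import nuke

read = nuke.createNode("Read")
read["file"].setValue("/path/to/image.dng")
# The Read node's Input Transform is the "colorspace" knob; the value
# below is an example name from the ACES 1.2 OCIO config.
read["colorspace"].setValue("Utility - sRGB - Texture")
```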
In general, I’d say that a preferable workflow would be to read the file into DaVinci Resolve (which can read pretty much all camera RAW files) and then convert that to EXR in the ACES2065-1 color space as the interchange format.
FWIW, when I open up DNG footage in Resolve, it recognizes its color space and displays it correctly. I can see in the Camera Raw section what it is.
When I read the same file in Nuke I can’t get it to display correctly for the life of me! I suspect you may find the same! Nuke is just not the ideal tool to use for debayering raw files!
I just used the Adobe DNG Converter. Indeed, it doesn’t tell me which colorspace this DNG was saved in. I thought it wouldn’t change the original info of the Canon CR2 file.
To be honest the log approach in Nuke makes sense; I just don’t know which input Canon Log profile is the correct one.
Your DaVinci Resolve workflow sounds easier than what I’ve been trying to do, guessing on my own.
The DNG spec usually provides two matrices for two illuminants, typically D65 and A (or a Tungsten-like one), converting from CIE XYZ D50 to camera RGB. The SDK interpolates between those as a function of the processing white balance. Often there is only one of those matrices. The conversion then becomes applying the inverse matrix to your camera RGB values and then converting from CIE XYZ D50 to ACES2065-1.
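A hedged numpy/colour-science sketch of that chain (the ColorMatrix below is an identity placeholder, so substitute the one decoded from your DNG; attribute names are per colour-science 0.4):

```python
import numpy as np
import colour  # pip install colour-science

# Placeholder: substitute the ColorMatrix decoded from your DNG's metadata.
# It maps CIE XYZ (D50) -> camera RGB, so invert it to go the other way.
xyz_d50_to_cam = np.eye(3)
cam_to_xyz_d50 = np.linalg.inv(xyz_d50_to_cam)

# Bradford-adapt D50 -> the ACES white point (~D60), then XYZ -> AP0 primaries.
XYZ_d50 = colour.xy_to_XYZ([0.34567, 0.35850])
XYZ_aces = colour.xy_to_XYZ([0.32168, 0.33767])
cat = colour.adaptation.matrix_chromatic_adaptation_VonKries(
    XYZ_d50, XYZ_aces, transform="Bradford")
xyz_to_ap0 = colour.RGB_COLOURSPACES["ACES2065-1"].matrix_XYZ_to_RGB

cam_to_aces2065_1 = xyz_to_ap0 @ cat @ cam_to_xyz_d50
```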
We have been using the same workflow: raw to DNG, then the DNG in Nuke using utility_xyz.
I have just been playing with RawTherapee, applying Elle Stone’s ACES ICC profiles to output linear to a 32-bit floating point TIFF, then converting to EXR. But I am not sure what base settings I should be using in RawTherapee. I read somewhere to use the Neutral processing profile, which does give us a wide dynamic range, but the colours are way off the original reference JPEGs. Using the base profile it chooses, it’s close, but as it’s tone mapping it does seem to lose some range. Is Neutral the best way to go?
It looks to remove any tone-curve processing the software normally does to match the JPEG thumbnails. So it’s pretty much as raw as you can get, I think. It maintains a good range and is overall much better than the other methods I have tried.