I’m really new to the ACES workflow and can’t get the hang of the color controls to achieve the look I want; I’m used to the regular YRGB workflow in Resolve.
Today I saw a YouTube tutorial where the guy used the ACES Transform OFX inside an ACEScct workflow to convert from ACES to Rec.709 and then back to ACES with another ACES Transform node.
The controls between these two nodes behave like regular YRGB, and I can create my desired look easily. But since I’m quite new to ACES, I don’t know if this workflow will hold up, or whether it’s something that can break my image and I’m not supposed to do.
In the screenshot below I’m using node 2 for ACEScct -> Rec.709 and node 3 for Rec.709 -> ACEScct, and between these two nodes the controls in nodes 4 and 5 feel exactly like the YRGB workflow.
I think I saw the initial presentation of that video too.
ACES works differently than YRGB in Resolve, but when you use a 3D LUT in YRGB, the work steps are not so different.
You are used to working in DaVinci YRGB, and when you work with an ALEXA LogC clip, I guess you would not apply the LogC2Rec709 3D LUT on the Media page and then move over to the Color page.
I assume you prefer to add the 3D LUT in a serial node on the Color page.
Your screenshot above takes your scene-referred camera data (ACEScct) and applies a Rec.709 (display-referred) view transform to it, which is a lossy conversion. You then grade this result, promote it back to ACEScct, and finally apply whatever ODT you need (in your case, also Rec.709).
To me this sounds similar to applying a 3D LUT already on the Media page: it is the wrong place if you want to preserve all the high-dynamic-range data and wide gamut that the camera can record.
A lossy conversion should always happen at the end of the processing chain, not in the middle.
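To see why the order matters, here is a minimal Python sketch. The real Rec.709 ODT uses a much more sophisticated tone curve, so the plain clip-plus-gamma curve below is only a hypothetical stand-in, but it shares the key property: scene values above the display’s peak are clipped, and the inverse transform cannot bring them back.

```python
# Hypothetical stand-in for a display (view) transform. Real ACES ODTs use a
# more complex tone curve, but they share the property shown here: scene
# values above the display peak are clipped and cannot be recovered.
DISPLAY_PEAK = 1.0  # display-referred maximum after the view transform

def forward_odt(lin: float) -> float:
    """Scene-linear -> display-referred (lossy: clips above the peak)."""
    clipped = min(max(lin, 0.0), DISPLAY_PEAK)
    return clipped ** (1.0 / 2.4)      # simple gamma encode as a stand-in

def inverse_odt(code: float) -> float:
    """Display-referred -> scene-linear (cannot undo the clip)."""
    return code ** 2.4

# Diffuse mid-grey survives the round trip unchanged ...
print(inverse_odt(forward_odt(0.18)))  # ~0.18

# ... but a bright highlight (scene-linear 8.0) comes back as flat white.
print(inverse_odt(forward_odt(8.0)))   # 1.0 -- the highlight detail is gone
```

Anything you grade between the two transforms only ever sees the clipped data, which is exactly the “3D LUT on the Media page” situation described above.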
If we actually saw the same presentation, then yes, it showed that this workflow “works”, but the examples had a limited dynamic range: a 0–1 gradient, and a four-color gradient whose colors were limited to the Rec.709 gamut.
The moment you try this workflow with a camera file from an ALEXA or any other high-dynamic-range footage, it breaks.
I also wrote about this topic before; you can find it here: Understanding Resolve Color Management with Nuke.
I hope this helps.
Personally, I think this is a valid approach. It’s floating-point math (it doesn’t use LUTs but the ACES Transform OFX), so you won’t be clipping any data. I’ve heard of this technique before; I was introduced to it by a good colourist.
But scene-referred grading is a different beast to display-referred. I would try out contrast/pivot, printer lights, and the log controls; I much prefer those, and I think the results are better anyway. I think you would quickly pick up ‘film-style’ grading if it were the only option, and trying to grade with lift/gamma/gain is only going to hold you back.
But, still, what you mentioned is a valid approach.
Just because the maths is floating point doesn’t mean data won’t be lost. The Rec.709 ODT clips to linear 16.3 and to the Rec.709 gamut, so you won’t be able to make a proper HDR or wide-gamut deliverable downstream of the inverse ODT.
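The gamut half of that loss is easy to demonstrate with the primary matrices alone. The sketch below uses the commonly published ACEScg (AP1) to linear Rec.709 matrix (treat the exact coefficients as approximate/illustrative): a saturated AP1 color produces negative Rec.709 components, those negatives get clamped, and the inverse matrix then returns a different, less saturated color.

```python
import numpy as np

# Commonly published ACEScg (AP1) -> linear Rec.709 matrix (approximate,
# includes the D60 -> D65 adaptation); the exact values are illustrative.
AP1_TO_REC709 = np.array([
    [ 1.70505, -0.62179, -0.08326],
    [-0.13026,  1.14080, -0.01055],
    [-0.02400, -0.12897,  1.15297],
])
REC709_TO_AP1 = np.linalg.inv(AP1_TO_REC709)

def roundtrip(rgb_ap1: np.ndarray) -> np.ndarray:
    """AP1 -> Rec.709 with a gamut clamp -> back to AP1."""
    rec709 = AP1_TO_REC709 @ rgb_ap1
    rec709 = np.clip(rec709, 0.0, None)  # out-of-gamut negatives are clamped
    return REC709_TO_AP1 @ rec709

# A neutral grey is inside both gamuts and survives the round trip,
# but a saturated AP1 green (outside Rec.709) does not come back unchanged.
print(roundtrip(np.array([0.18, 0.18, 0.18])))
print(roundtrip(np.array([0.0, 0.5, 0.0])))   # no longer (0.0, 0.5, 0.0)
```

In other words: everything inside the Rec.709 gamut passes through cleanly, but the wide-gamut colors your camera can record are permanently desaturated by the middle of the node tree.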
This approach may be valid for you; you only need to ask yourself why you want to limit your data to Rec.709 in the middle of the process. What if you need to deliver a Rec.2020 or HDR project, as Nick mentioned?
Just try it out with source images that are high dynamic range and wide gamut.
I tried it with some color bars that I created myself for HDR testing, and with an ALEXA test clip.
The image data loss might not be huge, but I think it’s good to be aware of what is happening under the hood if you want to work this way.
To see the “difference” in both of the “second images”, the layer mode needs to be Difference, not Subtract.
I am not a colorist; I usually do color correction operations on scene-linear data in Nuke rather than on log-encoded ACEScct data in Resolve. When I play around in Resolve, I must say I am not happy with the controls either, but I think Blackmagic needs to find an answer to that.
So the answer to your question is still the same: yes, you might break your image, and in an ACES pipeline this operation doesn’t make sense if you want to keep the image’s dynamic range and wide gamut intact for as long as possible in the pipeline.
I should probably only comment on techniques I use, but as I had heard and spoken to colourists who use this technique, I thought I would pitch in…
Like I said in my post, I use film-style grading tools and actually prefer scene-referred grading (contrast and printer lights are perfect for ACES), and I don’t use lift/gamma/gain, so I have no need for this technique (and I actually like the Resolve tools). That said, when I heard of this technique I ran a very quick test, and in Rec.709 it seemed to hold up OK to me.
However, there are colourists out there using this method, so if they are making it work, I think it’s fair to say it’s a viable option. It’s maybe not a good approach for HDR; I don’t know, I haven’t tested it in HDR myself. I’m also not sure how much is being lost in real terms when analysing an image (although I can see there is a difference in your tests). I would definitely suggest testing this before using it.
I totally agree with this for my projects.
Just to clarify, you don’t actually limit it to Rec.709. You transform from ACES to Rec.709, grade, and then transform back out to ACES. We know a little information is lost, but the question is how much, and is it visible?
Thanks, guys, for such in-depth knowledge.
Well, this project is going to theatrical projection, not HDR, but even so, any data loss may be an issue, so I think I’ll have to learn the controls better in the ACES workflow to create my desired look.
Thanks a lot .