Inconsistent ACES results


I took a video-levels HLG (Rec.2020) test clip, converted it to V-Log/V-Gamut, and saved the converted video using data levels.

Comparing the original clip and the converted clip using YRGB Color Managed colour science with the proper conversions gave a satisfactory result. Under ACES, however, there was a noticeable shift, predominantly in the shadows. As an effect of these changed levels, some noise also shows up as saturated pixels.

Unless I made a mistake, something does not add up; it could be Resolve or some of the ACES definitions.

For a shift in the levels see here:

I do not know what the Resolve implementation does, but there is no official ACES Input Transform for HLG as far as I am aware. Are you sure that what you are using is not an inverse HLG Output Transform? That would not give the same result as an RCM transform.

My understanding is that ACES 1.1 supports an HLG IDT.

That is incorrect. ACES 1.1 does not have an Input Transform for HLG.

ACES 1.1 simply added some additional ODTs, including an HLG Output Transform (which really just re-encodes 1000-nit ST.2084 (PQ) output to HLG using the method specified in Section 7 of ITU-R BT.2390-0).



So then what is Resolve using as IDT?

Is it the inverse of the RTT+ODT?

Would that cause scene-referred vs. display-referred issues, perhaps explaining the gamma shifts and out-of-gamut issues in the shadows?

Quite possibly. I don’t have a Resolve 15 system here to test. But if you choose the HLG Input and Output Transforms, does the whole Resolve ACES pipeline become “transparent”? That is the test for whether it is an inverse Output Transform.

I had a quick look and the only HLG Input Transform I see in Resolve is the Rec.2020 HLG 1000 nits one, which is indeed the inverse of the HLG Output Transform. It is therefore not intended for scene-referred HLG recordings from a camera.

Bear in mind that even without the RRT, the HLG OETF and EOTF are not the inverse of one another. That’s why the Resolve Colour Space Transform includes HLG and HLG (Scene). I suspect, but haven’t tested, that if you set the Resolve Input Transform to ACEScct (i.e. do nothing) and then use a Colour Space Transform as the first node, going from Rec.2020/HLG (Scene) to AP1/ACEScct, you would get a match.


Hi Nick,

I was wondering if you or anyone else has any updates on HLG implementation in Resolve? At the moment I use HLG on my GH5 and use Colour Space Transforms to get in and out of ACES on a regular timeline (one node at the beginning, one at the end, and I work in between). I have been using a CST for the reasons you give above - I can use HLG (Scene) in the CST. Is there any mathematical difference here vs. being in an ACES colour managed timeline?

Thanks in advance for any help!

In a standard colour managed timeline you do not have an ACES Output Transform (RRT + ODT) applied after your grade, unless you add it yourself, for example with the DCTL by @Paul_Dore. Or I believe that ACES transforms are now also available as an OFX operator, so knowledgeable users can build their own ACES pipeline.

There is more to ACES than just the working space, and if you are not including all the transforms, you will not match the intended result. You can use the OFX Colour Space Transform in an ACES timeline to apply an Input Transform. You just need to set the global Input Transform to match the working space (e.g. ACEScct) and then add e.g. an HLG (Scene) to ACEScct transform at the start of your node tree. But be aware that this is non-standard, and may not be simple to replicate in other applications. You may also find you need to add an exposure offset to get a “normal” looking image as a start point.

Thanks for the reply! This is tricky stuff. So would it make more sense for me to use a CST in a standard colour managed timeline, converting from HLG (Scene) to something like LogC, and then use an ACES Transform OFX to go from Alexa to ACEScct? I’m not sure if I am making sense; I am just starting to dabble in ACES. I am pretty familiar with CSTs and with using them as Juan Melara shows in some of his videos. But ACES is a whole new beast.

Thanks for your help!

I don’t think things are tricky at all; it should simply work!

OK, so there is no IDT… HLG, like some other formats, is implemented as an inverse Output Transform. How does that make the problem not a problem?

Negative values (when the input is normalized to 0…1) should not be treated as being in log space, because they fall outside the 0…0.5 range of the square-root segment.



If there is no HLG Input Transform for cameras, what then is the workflow to get HLG footage from a camera into an ACES 1.1 context?
I tried to use Output - Rec.2020 HLG (1000 nits) as input for footage from a Sony A7m3 shot in the HLG picture profile, and it looks exactly how I expect. I shot the same scene in the regular Rec.709 picture profile and as an sRGB still image, and it matches quite well. Is it wrong to use it that way?

I don’t think there is anything wrong with using the HLG inverse Output Transform, but Resolve’s ACES implementation of this inverse Output Transform does not clamp negative code values… If your camera does not produce negative code values, there should not be any problems.

But, for example, the Panasonic GH5 HLG implementation can produce negative code values. It seems to calibrate the noise floor to zero. In order to use HLG properly with this camera you therefore need to clamp the negative values before entering ACES. (You could use a DCTL that clamps the negative code values and then calls the InvRRTODT_Rec2020_1000nits_15nits_HLG() function. This function converts the HLG curve to PQ first and then converts to ACES.)
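As a minimal sketch of that clamping step (in Python rather than DCTL, and with the actual InvRRTODT call left out, since that function lives in the ACES/Resolve DCTL library):

```python
def clamp_negative(rgb):
    """Clamp negative code values to zero, per channel, before the
    pixel is handed to the inverse HLG Output Transform (e.g. the
    InvRRTODT_Rec2020_1000nits_15nits_HLG() DCTL function)."""
    return tuple(max(0.0, c) for c in rgb)

# GH5-style noise dipping below zero gets pinned back to zero
print(clamp_negative((-0.004, 0.021, 0.5)))  # -> (0.0, 0.021, 0.5)
```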

Other cameras may also produce negative code values for HLG.

I would avoid using an inverse Output Transform if possible. They should normally only be used, with no grading, to pass a display-referred image transparently through ACES.

Camera HLG is, notionally at least, scene-referred. I would suggest writing a DCTL which implements the inverse of the BT.2100 HLG OETF (which is not the same as the EOTF). You may also want to include an exposure multiplier (in linear).
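A sketch of what such a DCTL would compute, written here in Python for clarity (the constants are the a, b, c values from BT.2100; the `exposure` parameter name is my own, for the optional linear multiplier mentioned above):

```python
import math

# HLG constants from ITU-R BT.2100
A = 0.17883277
B = 0.28466892          # = 1 - 4*A
C = 0.55991073          # = 0.5 - A*ln(4*A)

def hlg_inverse_oetf(e_prime, exposure=1.0):
    """Map a non-linear HLG signal E' in [0, 1] back to scene-linear
    light E in [0, 1] (BT.2100 convention), with an optional linear
    exposure multiplier applied afterwards."""
    if e_prime <= 0.5:
        e = (e_prime * e_prime) / 3.0                   # square-root segment
    else:
        e = (math.exp((e_prime - C) / A) + B) / 12.0    # log segment
    return e * exposure

# By construction, E' = 0.5 maps to 1/12 and E' = 1.0 maps to 1.0
```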

I didn’t notice any negative values with this camera.

I don’t understand why using the inverse Output Transform as input is not intended for grading purposes. When I use it that way and render with the Output Transform and no grade, the result is identical. So using another Input Transform and rendering to Output - Rec.2020 HLG will not give the initial look, unless the grade tries to match how it looked when using the Output Transform as input, which seems quite absurd to me, since starting a grade from the “correct” look seems the way to go. Am I totally wrong? :slight_smile: How do people do it?

Reversing the gamma is what the S-Log Input Transform does, for instance, no? Use it as input to get the look it should have, then grade, and finally render in the desired colour space. What is different about HLG, or even Rec.709 footage and sRGB stills, which can also be inverted using their inverse Output Transforms?

I’m not saying you can never do it, but you should be aware of what you are doing, and take care.

A true Input Transform, such as the S-Log one, transforms from the camera’s scene-referred encoding to scene-referred ACES, and so includes no tone-mapping. Using an inverse Output Transform as an Input Transform means applying inverse tone-mapping, which can result in a curve which is quite steep in places. If you subsequently pass the image through a different Output Transform, or even the same Output Transform with grading applied first, the resulting overall curve may contain some undesired distortions.

If you apply an inverse HLG Output Transform to camera HLG, you are “undoing the tone-mapping” on something that was in fact not tone-mapped in the first place.

Like I said, do it if it gives you the result you want. But take care.

Thank you for your answer, but I’m a bit more confused now :slight_smile:
What’s the meaning of scene-referred? I tried using Rec.2100 HLG and Rec.2100 HLG (Scene) as input in DaVinci Resolve, and Rec.2100 HLG gives me the same result I had with Output - Rec.2020 HLG in an ACES 1.1 workflow. This is logical, but Rec.2100 HLG (Scene) gives a very bright image even with good exposure, which is visually absolutely not correct.

And it is even more confusing when you say that S-Log is correct, because I shot the same scene with the same settings except the picture profile: HLG/S-Log2/S-Log3. Both S-Logs in ACES, using the corresponding Input Transforms, give me roughly the result that Rec.2100 HLG in DaVinci or Output - Rec.2020 HLG produces with the HLG footage. The only one which looks a lot different is Rec.2100 HLG (Scene) as the Input Transform on the HLG footage from the camera. Is that really how it should look? And if so, why? No matter what timeline working colour space I use in DaVinci, Rec.2100 HLG (Scene) is always too bright, so I don’t believe it is correct, and what I get initially with the only HLG option available in ACES 1.1 looks the most accurate.

Honestly, HLG can get quite complex if you drill down into the details. I personally would not recommend it as a camera encoding if using an ACES pipeline.

As @sdyer said in his earlier post, there is no support for HLG input in ACES. There is a convenience Output Transform, which is really an ST.2084 (PQ) Output Transform followed by an HLG conversion, such that on the same 1000-nit monitor PQ and HLG will give identical results. HLG is designed so that picture rendering is applied in the monitor, whereas ACES applies picture rendering, in the form of the RRT, during conversion between scene- and display-referred encoding. They are two very different approaches, and a full explanation is too much for a forum post.

I believe that in RCM, “Rec.2100 HLG (Scene)” uses the formula from ARIB STD-B67 (flip between “ARIB STD-B67” and “Rec.2100 HLG (Scene)” and you will see no change). With ARIB STD-B67 an HLG value of 1.0 encodes a scene-linear value of 12.0, whereas according to ITU-R BT.2100 an HLG value of 1.0 encodes a scene-linear value of 1.0. They are the same curve, but with a scale factor of 12 applied in linear.
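The scale factor can be checked numerically. This sketch uses the standard BT.2100 constants and expresses the ARIB STD-B67 decode as 12 times the BT.2100 decode, which follows algebraically from the two specifications:

```python
import math

# HLG constants from ITU-R BT.2100 (same a, b, c as ARIB STD-B67)
A, B, C = 0.17883277, 0.28466892, 0.55991073

def bt2100_hlg_to_linear(e_prime):
    """BT.2100 convention: E' = 1.0 encodes scene-linear 1.0."""
    if e_prime <= 0.5:
        return (e_prime ** 2) / 3.0
    return (math.exp((e_prime - C) / A) + B) / 12.0

def arib_b67_to_linear(e_prime):
    """ARIB STD-B67 convention: the same curve, scaled so that
    E' = 1.0 encodes scene-linear 12.0."""
    return 12.0 * bt2100_hlg_to_linear(e_prime)

print(bt2100_hlg_to_linear(1.0))  # ~1.0
print(arib_b67_to_linear(1.0))    # ~12.0
```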

According to ITU-R BT.2408, the recommended exposure for HLG puts 18% grey at 38%, which fulfils the HLG aim of “looking reasonable” if viewed unmodified on a Rec.709 monitor, but in doing so leaves only about 4.4 stops of highlight latitude between 38% and 100%. S-Log2 and S-Log3 encode significantly more highlight latitude than that, so I would recommend one of those if using ACES. Even S-Log1 has more highlight latitude than HLG exposed like this.

The inverse HLG 1000-nit Output Transform maps 38% to an ACES value of about 0.276, which, while higher than the nominal 0.18 for ACES mid grey, is only 0.6 of a stop over, so not unreasonable. ARIB STD-B67, on the other hand, maps 38% HLG to 0.5776 linear, which is about 1.7 stops above 0.18. Hence the far brighter result you see.
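The OETF-side arithmetic in the last two paragraphs can be reproduced directly (the 0.276 ACES figure needs the full inverse Output Transform, so only the square-root-segment and ARIB numbers are checked here):

```python
import math

def bt2100_hlg_to_linear(e_prime):
    """BT.2100 HLG inverse OETF; the square-root segment alone is
    enough here, since all inputs of interest are <= 0.5."""
    return (e_prime ** 2) / 3.0

grey_signal = 0.38                                # BT.2408's 18%-grey level
grey_linear = bt2100_hlg_to_linear(grey_signal)   # ~0.0481

# Headroom between 38% and 100% signal (100% encodes linear 1.0
# in the BT.2100 convention)
headroom_stops = math.log2(1.0 / grey_linear)     # ~4.4 stops

# ARIB STD-B67 scales linear by 12, so 38% maps to ~0.5776 linear
arib_grey = 12.0 * grey_linear
stops_above_018 = math.log2(arib_grey / 0.18)     # ~1.7 stops
```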

I’ll stop now, as you are probably far more confused than you were originally!

My main recommendation is don’t record HLG if you are using ACES.


Is it something similar to photography with JPEG vs. RAW files, where the JPEG has a gamma curve (the camera’s own choice, not the sRGB one) baked into the colour that the RAW does not have?

Maybe then Utility - Rec2020 - Camera is better suited for grading use?
S-Log is not really an option, since the bitrate is quite low and leads to strong artefacts with the Sony A7m3. HLG produces much better quality files, which balance dynamic range and good-looking colours after the grade.

Not similar to JPEG versus RAW by any means.

I really do not see the big fuss.

Certainly HLG is not the best format to start with if you want to use ACES, but it is not an insurmountable problem either.

Take the HLG, convert it to PQ assuming a 1000-nit monitor, and bring it in as PQ. And that is exactly what InvRRTODT_Rec2020_1000nits_15nits_HLG() does.