ACESlog Strawman Proposal

As i was mainly to blame for the original underlying parameters used in AcesCC (and AcesCCT), i thought i'd try to come up with an initial strawman candidate for this newly proposed AcesLog, which hopefully will address the requirement to extend the range in the highlights, IN ADDITION to ensuring that current production/post-production grading operations will still work effectively.

my two main goals were:

  1. answer the requirement to extend the linear dynamic range which is represented by 0.0 - 1.0 in the new log encoding space

  2. ensure that the log value for midgrey and the "steps_per_stop" will allow acceptable results with current grading operations (e.g. asc-cdl/etc). (the designers of some of the newer extended camera log encodings did not realize that this is an issue affecting onset cdl grading/dailies/etc.)

as a starting point, let’s examine the current acesCCT/CC encodings:
they have:

  • 17.52 stops total range (in pure acesCC log without the acesCCT linear toe)
  • 58.4 steps per stop (in 10 bit code values) above lin/log crossover point
  • “midgrey” 2^{-2.5} at ~422 (10bit code value)
  • linear max @ 222.860944 (~10.3 stops above “midgrey”)

btw - just for comparison - the new alexa35 logcv4 has a max linear value of 460.8 - which is ~1.08 stops above current AcesCCT max. this was probably one of several reasons behind the desire for a new aces log encoding.

one side note - i'm using a value of 2^{-2.5} as "midgrey" - this value is ~0.176777 and has been a good linear target for midgrey - it's really close to "18%", but more importantly, it's a nice "round" power of 2. (you can think of it as "…we'll use a linear value 2.5 stops below 1.0 as a good target for 'middle grey'…")

my first proposal is that we set the steps_per_stop to 50.0 10bit_cvs/stop - which will allow for extra highlight range (it was 58.4 for acesCCT/CC) - and that we adjust the log value for midgrey slightly to compensate for shifts in where "linear 0.0" falls in the new log space.

i) 50.0 steps per stop (in 10bit code values) above lin/log crossover point

ii) set ~midgrey (2^{-2.5}) @ a 400 10bit code value (it was at 422 for AcesCCT)

iii) linear max corresponding to a normalized log value of 1.0 would now be at:
2^{(-2.5 + (1023-400)/50)} =~ 995.998666
(12.46 stops above midgrey)

iv) integer linear powers of 2 would be at convenient exact 10bit cv’s -
for example:

	10bitcv	375 == linear 0.125
	10bitcv	425 == linear 0.250
	10bitcv	475 == linear 0.500
	10bitcv	525 == linear 1.000
	10bitcv	575 == linear 2.000
	10bitcv	625 == linear 4.000

this results in the lin_to_log formula (pure log - without any linear toe):

`normalized_log_val = (log2(linear_val) + 10.5) / 20.46;`

(the full maths derivation is at the end of this diatribe…)

with that in place, now we have to add the “linear toe” strategy
that was done for AcesCCT -

for AcesCCT the log/lin crossover point was at a linear value of 0.0078125 (which corresponded to a log 10bit code value of exactly 175). but with the new slope, keeping the crossover at 175 would result in linear 0.0 mapping to a corresponding log 10bit code value of approximately 102 - which is a tad high. normally, linear 0.0 maps to something in the low 90's in log 10bit code values. so, instead of placing this log/lin crossover point at a 10bit log cv of 175, i've lowered it to a 10bit log cv of 165. this corresponds to a linear crossover value of 0.006801176276, and now maps linear 0.0 to a 10bit log code value of approximately 93.

here are the parameters for this proposed lin_to_log and log_to_lin transforms:

lin_xover = 0.006801176276 ;
log_xover = 165/1023 = .16129032258064516129 ;

the slope of the lin-to-log curve at this crossover point is:
lin_to_log_slope = 10.36773919972907075549 ;

and the y-intercept of this linear toe portion of the lin-to-log curve is:
y_intercept = .09077750069969257965 ;

this results in 0.0 - 1.0 in the new log encoding corresponding to a linear range of (approx) -0.008756 to 995.998666. for comparison, the corresponding linear range for 0.0 - 1.0 in AcesCCT was -0.006917 to 222.860944. so we have extended the highlight range by over two stops, and also extended the negative range slightly.

comparison between AcesCCT and this “AcesLog_strawman1” (dubbed “AcesLogSM1”)

linear value corresponding to log 0.0

	AcesCCT		-0.006917
	AcesLogSM1	-0.008756

linear value corresponding to log 1.0

	AcesCCT		222.860944
	AcesLogSM1	995.998666

log 10bit code value corresponding to linear 0.0

	AcesCCT		74.582
	AcesLogSM1	92.865

midgrey (2 ^{-2.5}) placement

	AcesCCT		422 (10 bit code value)
	AcesLogSM1	400 (10 bit code value)

steps per stop (above lin/log crossover point)

	AcesCCT		58.4 (10 bit code value steps_per_stop)
	AcesLogSM1	50.0 (10 bit code value steps_per_stop)

and here's the maths in plain vanilla C code:

#include <math.h>

double ALOGSM1_LIN_BRKPNT = 0.006801176276;
double ALOGSM1_LOG_BRKPNT = 0.16129032258064516129;	/* 10bit cv = 165 */
double ALOGSM1_LINTOLOG_SLOPE = 10.36773919972907075549;
double ALOGSM1_LINTOLOG_YINT = 0.09077750069969257965;

double lin_to_AcesLogSM1(double in)
{
    if (in <= ALOGSM1_LIN_BRKPNT)
        return in * ALOGSM1_LINTOLOG_SLOPE + ALOGSM1_LINTOLOG_YINT;
    else /* (in > ALOGSM1_LIN_BRKPNT) */
        return ( log(in)/log(2.0) + 10.5 ) / 20.46;
}

double AcesLogSM1_to_lin(double in)
{
    if (in <= ALOGSM1_LOG_BRKPNT)
        return (in - ALOGSM1_LINTOLOG_YINT) / ALOGSM1_LINTOLOG_SLOPE;
    else /* (in > ALOGSM1_LOG_BRKPNT) */
        return pow( 2.0 , in * 20.46 - 10.5 );
}

and, as promised, here’s the algebra for the underlying lin-to-log parameters:

the basic equation for a pure lin-to-log is:

logval = (log2(linval) + offset) / total_range_in_stops

the first decision was to have 50 10bit log code values per stop. this corresponds to a total_range_in_stops of:

1023/50 = 20.460

the second decision was to have a linear “midgrey” value of 2^{-2.5} produce a 10bit log code value of 400. this 10bit log code value of 400 corresponds to a “normalized” (0.0-1.0) log value of

400/1023 = 0.39100684261974584555...

substituting this into the above equation, we have

logval = (log2(linval) + offset) / total_range_in_stops
0.39100684261974584555... = (log2(2^-2.5) + offset) / 20.460

and through the magic of algebra -

offset = 0.39100684261974584555... * 20.460 + 2.5
offset = 10.50

et voila - our underlying linear-to-log formula (without any linear toe) is:

normalized_log_val = (log2(linear_val) + 10.5) / 20.46;

sorry for all the boring math…

AcesLog_SM1-AcesCCT_vs_AcesCC_vs_AcesLogSM1notoe_vs_AcesLogSM1_logtolin.xlsx (132.1 KB)

i’ve run the analysis on “how many unique 16bit half float code values can be represented corresponding to the range 0.0 - 1.0 in the proposed aceslog_SM1 encoding.”

here are some results:

for the proposed AcesLog_SM1 encoding -

  1. counting all the possible LINEAR 16bit half floats that are representable in the AcesLog_SM1 0.0 - 1.0 range (linear: -0.008756 to 995.998666) results in 33862 half float values.

  2. counting all UNIQUE 16bit half floats that remain AFTER those values are encoded in AcesLog_SM1 and saved as 16bit half floats results in 5645 unique half float values.

so, yes, there is quite a loss in the number of unique values, but i believe there are enough left to deal with even the most extreme creative grading operations we're likely to run into. (but we should run a bunch of tests.)

i ran the same calculations on AcesCCT, and, interestingly -

  1. counting all the possible linear 16bit half floats that are representable in the AcesCCT 0.0 - 1.0 range (linear: -0.006917 to 222.860944) results in 31247 half float values.

  2. counting all UNIQUE 16bit half floats that remain AFTER those values are encoded in AcesCCT and truncated to 16bit representation results in 5070 unique half float values.

so, interestingly enough, the new AcesLog_SM1 proposal actually ends up with MORE representable 16bit half float values than AcesCCT.

i made the attached excel sheet because i was interested in seeing how the "duplicate" 16bit representations were distributed. it shows all the possible 16bit half float values that correspond to the AcesLog_SM1 0.0 - 1.0 range (both as full linear and after being converted to AcesLog_SM1) and the "repetition count" for each of those 16bit half float representations. i'm including it here just in case you're also interested in seeing this.

aceslogSM1_0-1_linfloat_ushort_logfloat_log_0-1_ushort_totcount.list.xlsx (2.6 MB)

Thanks for the analysis.

What we need to look at are the distribution of those values:

From 0.5 to 1.0 you get 1023 codevalues in half float.
From 0.25 to 0.5 you get another 1023 codevalues etc…

So for the majority of image content we only have 2048 CV.

In other words:
Above 0.5 we get two half float code words for each 10bit integer code word.

Another way of seeing this is that we get 11 bit of “integer precision” between 0.5 and 1.0.
And 12 bit between 0.25 and 1.0.

Relying on your value of 50 10bit CV per stop for aceslog_sm1, this would give us 100 CV per stop for everything above 0.5 when stored in 16bit half float. That brings us back to the days of 10bit Cineon in terms of precision, while using 16bit half float.

I am not suggesting that this could not work for OnSet 10bit SDI work, but I would be worried about caching aceslog_sm1 in half float in any process.

Another thought:

if we compare PQ and aceslog_sm1, aceslog_sm1 puts another three-ish stops on top of PQ into the 0…1 range, setting aside the display rendering and different image states here for a second.

PQ is surfing the Barten Threshold Curve, so I would expect to see visible banding in aceslog_sm1 in 10 bit, as we can see visible banding in some cases for noise-free 10bit ramps in PQ.

That is all theory for now, of course we need to test all of this.

I hope I get some time in December to implement your aceslog_sm1 tone curve so I can do some real analysis.

i’ve double checked the maths, and yes, you are correct -
100 16bit cv’s per stop in log corresponding to linear values above 1.0
(i calculated 175 cv’s between 0.5 and 1.0)

i do understand your concern - BUT -
isn’t this basically what’s happening NOW when projects are being graded in AcesCCT?

in practice, once any actual knob turning grading operations are done in one of those log spaces,
those other “missing” 16bit cv’s will probably get filled in! but yes, i agree that there is always
the noble goal of “if we do nothing we should not degrade the original image data” -

which raises the question - strictly from an implementation point of view -
would it be an almost reasonable "under-the-hood best practice" to always make sure
to convert to linear before caching 16bit half floats, and then convert back to the log
grading/working space when we're back to doing grading operations? just a thought.

just a quick followup -

although you are correct that we would go down to 100 code values
for linear values above 1.0 if we stored the AcesLog_SM1 values as 16 half floats,
the really "useful" parts of the dynamic range of real-world images aren't up there.

here’s the breakdown on how many 16bit half floats you would have in AcesLogSM1
at various stop ranges:

    midgrey-7.5stops to midgrey-6.5stops  165
    midgrey-6.5stops to midgrey-5.5stops  280
    midgrey-5.5stops to midgrey-4.5stops  325
    midgrey-4.5stops to midgrey-3.5stops  400
    midgrey-3.5stops to midgrey-2.5stops  324
    midgrey-2.5stops to midgrey-1.5stops  200
    midgrey-1.5stops to midgrey-0.5stops  200
    midgrey-0.5stops to midgrey+0.5stops  200
    midgrey+0.5stops to midgrey+1.5stops  200
    midgrey+1.5stops to midgrey+2.5stops  174
    midgrey+2.5stops to midgrey+3.5stops  100
    midgrey+3.5stops to midgrey+4.5stops  100
    midgrey+4.5stops to midgrey+5.5stops  100
    and above that, 100 grey levels per stop holds

so we’re really more like “twice as good as 10bit cineon” until we get up to 2.5 stops
above midgrey. i’m not saying this isn’t a problem - in fact, i think this is something
that should definitely be discussed and folks should be made aware that we’re
probably gonna have some new “best practices” when it comes to storing/caching
log encoded images. (i’m going to run the same analysis on various camera log encodings
and see if they do any better or worse.)

Yes I agree it is worth exploring further.

There are so many use cases we need to double check.

How is the situation in real time environments like game engines?

How does the encoding hold up to LUT application with 16^3 or 33^3 LUTs, which are often used OnSet?

I would also like to discuss different solutions.

For example:

An encoding log and a CDL Log?


An encoding log and a CDL 2.0 (in linear light)?


In case this is needed, here is the original CDL 2.0 post: Cdl 2.0 - #15 by daniele

There were a couple of good comments back in the day.



And to show that it is precisely 10.5, avoiding the use of long decimals…

\begin{equation} \begin{aligned} offset&=2\tfrac{1}{2}+\frac{400}{1023}\times\frac{1023}{50}\\\\ &=2\tfrac{1}{2}+\frac{400}{50}\\\\ &=2\tfrac{1}{2}+8\\\\ &=10\tfrac{1}{2} \end{aligned} \end{equation}




So my main problem with both ACEScc and ACEScct is that they go below 0.0 and above 1.0. That makes them annoying to use when you are beholden to a 0.0-1.0 range for your LUTs - say, because you use 3d volume textures to store them, with UVW coordinates in the range of 0-1, which allows normal texture sampling in shaders and gives linear interpolation for free. My current workaround is to apply a scale/bias to ACEScc(t) values, making -0.03 = 0.0 and mapping some value higher than 1.0 to 1.0. I don't remember what exact value I use, but I don't use the full range, since it makes no sense to try to display a linear value of 65504.0 even after strong tone mapping.

If the new ACESlog could fix that then I would adopt it in less than two seconds.

Has it already been decided that ACESlog will have a linear portion like ACEScct? As a colorist I would argue there is a need for a pure-log tone curve in color grading that currently only ACEScc fills.

I feel like it would be a shame if there wasn’t at least an option for pure-log ACESlog.

I also have a question regarding the references in this proposal to 10-bit integer CVs and how neatly the curve fits into them. Is it really that relevant, considering AP1/log ACES data is supposedly only for internal calculations (in floating point) and not meant to be written to storage?
Surely a 10bit integer representation at 50 CV per stop will invite banding artifacts.

Hi Matthias,

let me defend the linear toe for ACEScct by referring to this post:

I think one primary use-case is an encoding for integer representation, like on a SDI signal on set.
If we would drop this requirement for integer encoding, the proposal would look very different, I guess.

I agree we need to collect more use cases. Do people really send ACEScct over the wire?
Or are cameras sending their own encoding over the wire, and the onset grading software converts the signal to ACEScct, then applies CDL grading and converts back to the camera encoding (or even goes straight to display referred)?

If the latter would be the use case, then one could argue that we have no issue with ACEScct.

Or are we looking for a LUT shaper space?
Or a new grading space?

We should also prioritise the different use cases, because their solutions ask for contradicting behaviours.


Personally, I use scaled/biased ACEScc as an intermediate LUT shaper space between the color grading step and the tonemapping step. I also use ACEScc as a LUT space when exporting LUTs from DaVinci Resolve to our game engine (if that option is used at all, since we also support mathematical grading operators). What I do with the LUTs I pass from Resolve to our game engine is outside this discussion.