We often get questions regarding strange calculation results when using decimal numbers in Web Intelligence.

Typically:

- Why don’t I get 0 when I do A + B – C, although C is equal to A + B?
- Why is the result of this running sum different, when I change the sorting order of my table?
- Etc.

These strange calculation results are due to the fact that, internally, Web Intelligence represents decimal numbers using the 64-bit precision version of the Floating Point data type defined by the IEEE 754 standard.

A detailed explanation of that standard can be found in the following Wikipedia page: http://en.wikipedia.org/wiki/IEEE_floating_point.

With the IEEE 754 standard on floating-point arithmetic, most decimal numbers cannot be perfectly represented in binary form, so not all of their digits can be trusted. The number of digits which can be trusted depends on the size of the representation.

With the 64-bit representation (also called “double precision”) used in Web Intelligence, the number of bits used for storing the significand is 53 and, as explained in the Wikipedia page above, the number of decimal digits which can be trusted is log10(2^53) ≈ 15.95, i.e. about 15.

**IMPORTANT TO REMEMBER**: In Web Intelligence, the maximum number of digits which can be trusted in a decimal number is **15** and that takes into account digits both before and after the decimal mark.

Example:

- 100,000,000,000 + 0.001 will correctly result in 100,000,000,000.0010 because this decimal number only requires 15 trusted digits to be represented (12 digits before the decimal mark and 3 after).
- 100,000,000,000 + 0.0001 will wrongly result in 100,000,000,000.0000 because the correct result would require 16 trusted digits to be represented (12 before the decimal mark and 4 after).
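The same 15-digit limit can be reproduced outside Web Intelligence; here is a small Python illustration (Python floats use the same IEEE 754 double-precision format), formatting each result to 15 significant digits:

```python
a = 100_000_000_000.0          # 12 digits before the decimal mark

# 12 + 3 = 15 trusted digits: the 0.001 survives
print(f"{a + 0.001:.15g}")     # 100000000000.001

# 12 + 4 = 16 digits would be needed: the 0.0001 is lost
print(f"{a + 0.0001:.15g}")    # 100000000000
```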

With this limitation in mind, summing decimal numbers might not always give the expected result, especially if the expected result is 0…!

For example, when summing 19000.2, 0.123, -9100.3, -0.000000000002543, etc., only 10 digits after the decimal mark can be trusted, since 5 digits before the decimal mark are already used by the largest number **19000**.2. Consequently, if the result of that sum is something like -0.**0000000000**13327, it is indistinguishable from 0 under the IEEE 754 representation.

Therefore, if a 0 result from a calculation on decimal numbers is used as a condition for further processing in a Web Intelligence document, it is highly recommended to round the decimal numbers to integers before that condition is evaluated. This rounding operation can be done with the *Round(number; round_level)* formula, using 0 for the *round_level* parameter.
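Here is a quick Python sketch of both the problem and the suggested fix, where Python's built-in round() plays the role of the *Round* formula with a *round_level* of 0 (Python is used only as a stand-in for illustration):

```python
a, b, c = 0.1, 0.2, 0.3        # decimally, a + b - c is exactly 0

total = a + b - c
print(total)                   # 5.551115123125783e-17, not 0
print(total == 0)              # False

# Round to 0 decimal places before testing the condition,
# as with Round(number; 0) in Web Intelligence
print(round(total, 0) == 0)    # True
```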

Now, why might changing the sorting of a table also change the result of a decimal calculation?

Well… Another major limitation of the IEEE 754 floating-point format is that the fractional part of a decimal number is represented by a binary fraction and is therefore approximated. As a consequence, a rounding error is introduced and propagated along the calculation, leading to different results depending on the order in which the values are summed.

Example:

Say, we have A = 1,000,000, B = 1.2 and C = -1,000,000

- A + B + C will result in 1.19999999995343E0
- A + C + B will result in 1.2E0

This is because the rounding error does not propagate the same way through A + B + C as it does through A + C + B.
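This order dependence can be reproduced in any environment that uses IEEE 754 doubles; here is a Python transcription of the example above (shown only as an illustration):

```python
a, b, c = 1_000_000.0, 1.2, -1_000_000.0

print((a + b) + c)   # 1.1999999999534339  -- A + B + C
print((a + c) + b)   # 1.2                 -- A + C + B

# In the first ordering, a + b rounds 1000001.2 to the nearest
# representable double before c cancels the large part, so the
# rounding error survives in the final result.
```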

I hope these “strange calculation results” will be clearer now! 🙂

If not, please do not hesitate to ask questions below.

Best regards,

Pascal.

Wow. I am very glad to know this. Thanks for sharing, Pascal. Good post.

Thanks.

I've slightly modified the paragraph on error propagation, which explains why changing the order of the data will sometimes change the calculation result too!

Pascal.

Thanks for the blog post!

An important consideration is that Web Intelligence isn't unique in adhering to the IEEE 754 double-precision standard; almost all computational tools adhere to the same standard.

In fact, one can do similar calculations in Excel:

=(1000000 + 1.2) - 1000000

and you'll get the same result: 1.1999999999534300000E+00

Here's the reference: http://support.microsoft.com/kb/78113

The important consideration in the above calculation is that there's no way, in IEEE 754, to represent the value 1000001.2 with perfect accuracy.

Now, that might sound strange - that there is no number such as 1000001.2. In fact, in IEEE 754, the most accurate representation of 1000001.2 is 1000001.19999999995343387126923.

In fact, IEEE 754 can't accurately represent 1.2 either; the "closest" representable value is 1.19999999999999995559107901499.
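Both of those "closest representable" values can be checked from Python, where the decimal module prints the exact value stored in an IEEE 754 double (again just an illustration; any language that exposes the raw double would do):

```python
from decimal import Decimal

# Exact value of the double closest to 1.2
print(Decimal(1.2))
# 1.1999999999999999555910790149937383830547332763671875

# Exact value of the double closest to 1000001.2
print(Decimal(1000000 + 1.2))
# 1000001.19999999995343387126922607421875
```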

Wicked, isn't it!

The reason for that is that computers don't have fingers like humans do. All they have is a huge number of switches that can be either "0" or "1", i.e., binary notation.

So if you take a number like 1.2, then the decimal representation is 1 + 2/10 = 1 + 1/5. So you need to get the binary representation of 1/5.

In binary, that's 1/5 = 1/8 + 1/16 + 1/128 + 1/256 + etc etc, an infinite series. But there are only so many bits one can play with, so you truncate the series to get:

1 + 1/8 + 1/16 + 1/128 + 1/256 + 1/2048 + 1/4096 + 1/16384 + ... + 1/(2^52) = 1.19999999999999995559107901499E0
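That truncated binary series can be inspected directly from Python (purely as an illustration): the fractions module exposes the exact dyadic fraction a double stores, and float.hex shows the repeating bit pattern cut off at 52 bits.

```python
from fractions import Fraction

# The double closest to 1.2 is exactly this ratio of integers:
print(Fraction(1.2))           # Fraction(5404319552844595, 4503599627370496)
print(Fraction(1.2).denominator == 2**52)   # True -- a 52-bit binary fraction

# The same value in hexadecimal floating point: the digit 3 (binary 0011)
# would repeat forever, but is cut off at the 13 hex digits (52 bits)
# that fit in a double
print((1.2).hex())             # 0x1.3333333333333p+0
```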

But IEEE 754 can represent 1000000 accurately.

So if you compute 1000000 + (-1000000) + 1.2, then the partial computation is:

(1000000 + (-1000000)) + 1.2 = (0) + 1.2 = 1.2

whereas 1000000 + 1.2 + (-1000000):

(1000000 + 1.2) + (-1000000) = 1000001.19999999995343387126923 + (-1000000) = 1.1999999999534300000E+00

where the end digits 387126923 are also lost to the 15-digit 'round-off error'.

Now, this might seem like small potatoes - the error is quite small. But imagine if you had to add up millions of similar numbers - the 'cumulative error' can add up to be quite significant.
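That cumulative effect is easy to demonstrate with a small Python sketch (math.fsum, which compensates for the lost low-order bits, is used here only as a reference for the exact sum):

```python
import math

values = [0.1] * 10            # ten copies of a value that is not
                               # exactly representable in binary

print(sum(values))             # 0.9999999999999999 -- errors accumulate
print(math.fsum(values))       # 1.0 -- error-compensated summation

# Over millions of terms, the naive running sum can drift far
# enough to become visible at 15 significant digits.
```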