Calculations with floating point numbers often produce small hidden bugs, for example when rounding up or down to decimal digits. I want to show the main problem with some small examples in Java, JavaScript, Python and C.
My example is the floating point number 309.34 and an implementation of a function that rounds the number down to two decimal digits. The expected result is the floating point number 309.34 again.
The same problem also occurs with the ceil function and when rounding to more or fewer than two decimal digits. Here are some naive implementations of a function that rounds a floating point number down to two decimal digits.
The naive implementation in Java.
public static double naiveRoundDown2Digits(double number) {
    return Math.floor(number * 100) / 100.0;
}
Unfortunately, the result of the following call is 309.33 instead of 309.34.
System.out.println(naiveRoundDown2Digits(309.34));
The naive implementation in JavaScript.
function naiveRoundDown2Digits( number ) {
    return Math.floor( number * 100 ) / 100;
}
The same problem as in Java: the result of the following call is 309.33 instead of 309.34.
console.log( naiveRoundDown2Digits( 309.34 ) );
The naive implementation in Python.
import math

def naiveRoundDown2Digits( number ):
    return math.floor( number * 100 ) / 100
No surprise, the result in Python is also 309.33 instead of 309.34.
print( naiveRoundDown2Digits( 309.34 ) )
And the naive implementation in C, just to make sure that the problem is not specific to one programming language.
#include <math.h>

double naiveRoundDown2Digits(double number) {
    return floor(number * 100) / 100;
}
The same result in C: the output is 309.330000 instead of 309.340000.
printf("%f\n", naiveRoundDown2Digits(309.34));
All naive implementations perform the same three steps:
1. Multiply the number by 100.
2. Apply the floor function to the result.
3. Divide the result by 100.
What happens to our number 309.34 in these three steps?
309.34 * 100 = 30933.999999999996
floor(30933.999999999996) = 30933.0
30933.0 / 100 = 309.33
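The trace above can be reproduced directly; a minimal sketch in Python:

```python
import math

number = 309.34

step1 = number * 100        # 30933.999999999996, not the expected 30934.0
step2 = math.floor(step1)   # 30933 -- floor cuts off the fractional part
step3 = step2 / 100         # 309.33

print(step1, step2, step3)
```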
The main problem occurs in step 1 and step 2. We would expect that 309.34 multiplied by 100 gives the result 30934.0. The error of step 1 is only 0.000000000004, but the floor function in step 2 increases the error dramatically by cutting off the digits 0.999999999996.
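The root cause is that 309.34 has no exact binary representation. Python's decimal module can reveal the value that is actually stored; a minimal sketch:

```python
from decimal import Decimal

# Decimal(float) converts the binary double exactly, without any rounding,
# so it exposes the value the literal 309.34 is really stored as.
print(Decimal(309.34))  # slightly below 309.34
```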
As said above, this is not a problem of the programming language; it is the normal behavior of floating point numbers, and a developer has to deal with it. Let us take a look at some better implementations of a function that rounds a floating point number down to two decimal digits.
Here is the implementation in Java.
public static double roundDown2Digits(double number) {
    return Math.floor(Math.round(number * 1000) / 10.0) / 100.0;
}
As expected, the result of the following call is 309.34.
System.out.println(roundDown2Digits(309.34));
The implementation in JavaScript.
function roundDown2Digits( number ) {
    return Math.floor( Math.round( number * 1000 ) / 10 ) / 100;
}
Again, the expected result is 309.34.
console.log( roundDown2Digits( 309.34 ) );
The implementation in Python.
import math

def roundDown2Digits( number ):
    return math.floor( round( number * 1000 ) / 10 ) / 100
The result is again 309.34.
print( roundDown2Digits( 309.34 ) )
Here is the implementation in C, to complete the list of languages mentioned above.
#include <math.h>

double roundDown2Digits(double number) {
    return floor(round(number * 1000) / 10) / 100;
}
The same expected result, 309.340000.
printf("%f\n", roundDown2Digits(309.34));
All the better implementations from above perform the same five steps:
1. Multiply the number by 1000.
2. Round the result.
3. Divide the result by 10.
4. Apply the floor function to the result.
5. Divide the result by 100.
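For completeness, decimal arithmetic sidesteps the binary representation problem entirely. A sketch in Python using the standard decimal module (the function name round_down_2_digits is my own, not from the examples above):

```python
from decimal import Decimal, ROUND_FLOOR

def round_down_2_digits(number):
    # str(number) yields the shortest decimal text that round-trips to the
    # same double, so Decimal sees "309.34" instead of the binary
    # approximation -- no correction factor like * 1000 / 10 is needed.
    d = Decimal(str(number)).quantize(Decimal("0.01"), rounding=ROUND_FLOOR)
    return float(d)

print(round_down_2_digits(309.34))   # 309.34
print(round_down_2_digits(309.345))  # 309.34
```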