I know that Java has double-precision pitfalls, but why is the approximation sometimes fine and sometimes not?
Code like this:
for (float value = 0.0f; value < 1.0f; value += 0.1f)
    System.out.println(value);
gives results like this:
0.0
0.1
0.2
0.3
...
0.70000005
0.8000001
0.9000001
As you state, not all numbers can be represented exactly in IEEE754. In conjunction with the rules that Java uses for printing those numbers, that affects what you'll see.
For background, I'll briefly cover the IEEE754 inaccuracies. In this particular case, 0.1 cannot be represented exactly so you'll often find that the actual number used is something like 0.100000001490116119384765625.
See here for the analysis of why this is so. The reason you're getting the "inaccurate" values is because that error (0.000000001490116119384765625) gradually adds up.
The reason why 0.1 or 0.2 (or similar numbers) don't always show that error has to do with the printing code in Java, rather than the actual value itself.
Even though 0.1 is actually a little higher than what you expect, the code that prints it out doesn't give you all the digits. You'd find, if you set the format string to deliver 50 digits after the decimal, that you'd then see the true value.
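As a quick illustration of that point (a sketch of my own, not part of the original answer), widening the format string exposes the value actually stored:

```java
import java.util.Locale;

public class ShowTrueValue {
    public static void main(String[] args) {
        float f = 0.1f;
        // Default printing stops as soon as the value is unambiguous:
        System.out.println(f);  // 0.1
        // Forcing 50 fractional digits reveals the stored value
        // (Locale.ROOT guarantees a '.' decimal separator):
        System.out.println(String.format(Locale.ROOT, "%.50f", f));
        // 0.100000001490116119384765625 followed by zeros
    }
}
```

Note that promoting the float to double for formatting is exact, so the digits shown really are the bits of the float.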
The rules for how Java decides to print out a float (without explicit formatting) are detailed here. The relevant bit for the digit count is:
There must be at least one digit to represent the fractional part, and beyond that as many, but only as many, more digits as are needed to uniquely distinguish the argument value from adjacent values of type float.
By way of example, here's some code showing you how this works:
public class testprog {
    public static void main(String s[]) {
        float n;
        int i;
        for (i = 0, n = 0.0f; i < 10; i++, n += 0.1f) {
            System.out.print(String.format("%30.29f %08x ",
                n, Float.floatToRawIntBits(n)));
            System.out.println(n);
        }
    }
}
The output of this is:
0.00000000000000000000000000000 00000000 0.0
0.10000000149011611938476562500 3dcccccd 0.1
0.20000000298023223876953125000 3e4ccccd 0.2
0.30000001192092895507812500000 3e99999a 0.3
0.40000000596046447753906250000 3ecccccd 0.4
0.50000000000000000000000000000 3f000000 0.5
0.60000002384185791015625000000 3f19999a 0.6
0.70000004768371582031250000000 3f333334 0.70000005
0.80000007152557373046875000000 3f4cccce 0.8000001
0.90000009536743164062500000000 3f666668 0.9000001
The first column is the real value of the float, including inaccuracies from IEEE754 limitations.
The second column is the 32-bit integer representation of the floating point value (how it looks in memory rather than its actual integer value), useful for checking the values at the low level bit representation.
The final column is what you see when you just print out the number with no formatting.
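If you want to pick those bit patterns apart yourself, here is a small sketch (my own addition, not from the original answer) that splits a float into its sign, exponent, and significand fields:

```java
public class FloatBits {
    public static void main(String[] args) {
        int bits = Float.floatToRawIntBits(0.1f);   // 0x3dcccccd
        int sign     = bits >>> 31;                 // 1 bit
        int exponent = (bits >>> 23) & 0xFF;        // 8 bits, biased by 127
        int mantissa = bits & 0x7FFFFF;             // 23 bits
        System.out.printf("sign=%d exponent=%d (unbiased %d) mantissa=0x%06x%n",
                sign, exponent, exponent - 127, mantissa);
        // 0.1f is stored as 1.60000002384185791015625 * 2^-4
        System.out.println(Float.intBitsToFloat(bits));  // 0.1
    }
}
```

Running this confirms the second column of the table above: 0.1f has biased exponent 123 (i.e. 2^-4) and mantissa 0x4ccccd.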
Now looking at some more code, which will show you both how the inaccuracies of continuously adding an inexact value will give you the wrong number, and how the differences with surrounding values controls what is printed:
public class testprog {
    public static void outLines(float n) {
        int val = Float.floatToRawIntBits(n);
        for (int i = -1; i < 2; i++) {
            n = Float.intBitsToFloat(val + i);
            System.out.print(String.format("%30.29f %.08f %08x ",
                n, n, Float.floatToRawIntBits(n)));
            System.out.println(n);
        }
        System.out.println();
    }

    public static void main(String s[]) {
        float n = 0.0f;
        for (int i = 0; i < 6; i++) n += 0.1f;
        outLines(n); n += 0.1f;
        outLines(n); n += 0.1f;
        outLines(n); n += 0.1f;
        outLines(0.7f);
    }
}
This code uses the continued addition of 0.1 to get up to 0.6 then prints out the values for that and adjacent floats. The output of that is:
0.59999996423721310000000000000 0.59999996 3f199999 0.59999996
0.60000002384185790000000000000 0.60000002 3f19999a 0.6
0.60000008344650270000000000000 0.60000008 3f19999b 0.6000001
0.69999998807907100000000000000 0.69999999 3f333333 0.7
0.70000004768371580000000000000 0.70000005 3f333334 0.70000005
0.70000010728836060000000000000 0.70000011 3f333335 0.7000001
0.80000001192092900000000000000 0.80000001 3f4ccccd 0.8
0.80000007152557370000000000000 0.80000007 3f4cccce 0.8000001
0.80000013113021850000000000000 0.80000013 3f4ccccf 0.80000013
0.69999992847442630000000000000 0.69999993 3f333332 0.6999999
0.69999998807907100000000000000 0.69999999 3f333333 0.7
0.70000004768371580000000000000 0.70000005 3f333334 0.70000005
The first thing to look at is that the final column has enough fractional digits in the middle lines of each block to distinguish it from the surrounding lines (as per the Java printing specifications mentioned previously).
For example, if you only had three places after the decimal, you would not be able to distinguish between 0.6 and 0.6000001 (the adjacent bit patterns 0x3f19999a and 0x3f19999b). So, it prints as much as it needs.
The second thing you'll notice is that our 0.7 value in the second block is not 0.7. Rather, it's 0.70000005 despite the fact that there's an even closer bit pattern to that number (on the previous line).
This has been caused by the gradual accumulation of errors caused by adding 0.1. You can see from the final block that, if you just used 0.7 directly rather than continuously adding 0.1, you'd get the right value.
So, in your particular case, it's the latter issue causing you the problems. The fact that you're getting 0.70000005 printed out is not because Java hasn't got a close enough approximation (it has), it's because of the way you got to 0.7 in the first place.
If you modify that code above to contain:
outLines (0.1f);
outLines (0.2f);
outLines (0.3f);
outLines (0.4f);
outLines (0.5f);
outLines (0.6f);
outLines (0.7f);
outLines (0.8f);
outLines (0.9f);
you'll find it can print out all the numbers in that group correctly.
Related
For example, the decimal number 0.1 is represented in binary as 0.00011001100110011..., an infinitely repeating fraction.
when I write code like this:
float f = 0.1f;
the program rounds it to the binary value 0 01111011 1001 1001 1001 1001 1001 101, which is not the original number 0.1.
But when I print this variable like this:
System.out.print(f);
I get the original number 0.1 rather than 0.100000001 or some other number. The program can't represent 0.1 exactly, yet it can display "0.1" exactly. How does it do that?
I also recovered the decimal number by adding up each bit of the binary representation, and the result looks weird:
float f = (float) (Math.pow(2, -4) + Math.pow(2, -5)
        + Math.pow(2, -8) + Math.pow(2, -9)
        + Math.pow(2, -12) + Math.pow(2, -13)
        + Math.pow(2, -16) + Math.pow(2, -17)
        + Math.pow(2, -20) + Math.pow(2, -21)
        + Math.pow(2, -24) + Math.pow(2, -25));
float f2 = (float) Math.pow(2, -27);
System.out.println(f);
System.out.println(f2);
System.out.println(f + f2);
Output:
0.099999994
7.4505806E-9
0.1
In exact math, f + f2 = 0.100000001145..., which is not equal to 0.1. Why doesn't the program produce a result like 0.100000001? I think that would be more accurate.
Java's System.out.print prints just enough decimals that the resulting representation, if parsed as a double or float, converts to the original double or float value.
This is a good idea because it means that in a sense, no information is lost in this kind of conversion to decimal. On the other hand, it can give an impression of exactness which, as you make clear in your question, is wrong.
In other languages, you can print the exact decimal representation of the float or double being considered:
#include <stdio.h>

int main() {
    printf("%.60f", 0.1);
}
result: 0.100000000000000005551115123125782702118158340454101562500000
In Java, in order to emulate the above behavior, you need to convert the float or double to BigDecimal (this conversion is exact) and then print the BigDecimal with enough digits. Java's attitude to floating-point-to-string-representing-a-decimal conversion is pervasive, so that even System.out.format is affected. The linked Java program, the important line of which is System.out.format("%.60f\n", 0.1);, shows 0.100000000000000000000000000000000000000000000000000000000000, although the value of 0.1d is not 0.10000000000000000000…, and a Java programmer could have been excused for expecting the same output as the C program.
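A minimal sketch of the BigDecimal route just described (my own example): the BigDecimal(double) constructor preserves the binary value exactly, so its toString() shows every stored digit.

```java
import java.math.BigDecimal;

public class ExactDecimal {
    public static void main(String[] args) {
        // new BigDecimal(double) converts the binary value exactly:
        System.out.println(new BigDecimal(0.1));
        // 0.1000000000000000055511151231257827021181583404541015625

        // Contrast with the String constructor, which really is one tenth:
        System.out.println(new BigDecimal("0.1"));  // 0.1
    }
}
```

This is the same digit string the C program above prints, minus the trailing zeros.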
To convert a double to a string that represents the exact value of the double, consider the hexadecimal format, which Java supports both for literals and for printing.
I believe this is covered by Double.toString(double) (and similarly in Float#toString(float)):
How many digits must be printed for the fractional part of m or a? There must be at least one digit to represent the fractional part, and beyond that as many, but only as many, more digits as are needed to uniquely distinguish the argument value from adjacent values of type double. That is, suppose that x is the exact mathematical value represented by the decimal representation produced by this method for a finite nonzero argument d. Then d must be the double value nearest to x; or if two double values are equally close to x, then d must be one of them and the least significant bit of the significand of d must be 0.
(my emphasis)
I'm trying to learn a bit about robotics and Java. I have written a function that (should) take a uniform array q = (0.2, 0.2, 0.2, 0.2, 0.2), multiply each value in the array by 0.6 or 0.2 depending on whether the corresponding entry in the array world = (0,1,1,0,0) matches the value Z (a hit or a miss), and finally return a new normalized array.
So:
1.) Loop through (0,1,1,0,0)
2.) If the element != Z then multiply it by 0.2; if the element == Z then multiply it by 0.6
3.) sum all new values in q to a double called normalize
4.) normalize each value in q (value / normalize)
Below is the function:
public static final double pHit = 0.6;
public static final double pMiss = 0.2;
public static int[] world = {0, 1, 1, 0, 0};
public static List<Double> q = new ArrayList<Double>();

public static List<Double> sense(int Z) {
    double normalize = 0;
    for (int i = 0; i < q.size(); i++) {
        if (Z == world[i]) {
            q.set(i, q.get(i) * pHit);
        } else {
            q.set(i, q.get(i) * pMiss);
        }
    }
    // Normalize
    for (int i = 0; i < q.size(); i++) {
        normalize += q.get(i);
    }
    for (int i = 0; i < q.size(); i++) {
        q.set(i, q.get(i) / normalize);
    }
    return q;
}
If I set world to (0,1,0,0,0) and Z to 1 I get following results:
0.14285714285714288
0.4285714285714285
0.14285714285714288
0.14285714285714288
0.14285714285714288
This normalizes nicely (sum = 1).
But if I set world to (0,1,1,0,0) and Z to 1 I get a strange result:
0.1111111111111111
0.3333333333333332
0.3333333333333332
0.1111111111111111
0.1111111111111111
This "normalizes" to 0.9999999999999998 ?
Many thanks for any input!
Doubles are not perfect representations of the real number line. They have only about 16 digits of precision. In successive computations, errors can build, sometimes catastrophically. In your case, this has not happened, so be happy.
The value of 0.1 is a nice example. In IEEE floating point, it has only an approximate representation. As a binary fraction, it is 0.0[0011] where the part in square brackets repeats forever. This is why floating point numbers (including doubles) may not be the best choice for representing prices.
I highly suggest reading this classic summary:
http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
Floating point numbers are not exactly represented on computers. For that reason you'll get very small fractions off from exact values when multiplying floating values together.
See this answer as it goes into more depth about floating representations and references this Floating-Point Arithmetic article.
Welcome to the world of floating point numbers!
I did not try to understand your code in detail, but the result 0.9999999999999998 is perfectly normal. I recommend reading a bit about floating point numbers and their precision.
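In practice, the usual fix (my suggestion, not from the answers above) is to compare the sum against 1.0 with a small tolerance rather than expecting exact equality:

```java
public class NormalizeCheck {
    public static void main(String[] args) {
        // The printed values from the question; their sum is slightly below 1.0.
        double[] q = {0.1111111111111111, 0.3333333333333332, 0.3333333333333332,
                      0.1111111111111111, 0.1111111111111111};
        double sum = 0.0;
        for (double v : q) sum += v;
        System.out.println(sum);  // slightly less than 1.0

        // Treat the distribution as normalized if the sum is within epsilon of 1:
        final double EPSILON = 1e-9;
        System.out.println(Math.abs(sum - 1.0) < EPSILON);  // true
    }
}
```

The choice of epsilon depends on how many operations feed into the sum; 1e-9 is generous for a five-element distribution.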
Why does changing the sum order returns a different result?
23.53 + 5.88 + 17.64 = 47.05
23.53 + 17.64 + 5.88 = 47.050000000000004
Both Java and JavaScript return the same results.
I understand that, due to the way floating point numbers are represented in binary, some rational numbers (like 1/3 - 0.333333...) cannot be represented precisely.
Why does simply changing the order of the elements affect the result?
It will change the points at which the values are rounded, based on their magnitude. As an example of the kind of thing that we're seeing, let's pretend that instead of binary floating point, we were using a decimal floating point type with 4 significant digits, where each addition is performed at "infinite" precision and then rounded to the nearest representable number. Here are two sums:
1/3 + 2/3 + 2/3 = (0.3333 + 0.6667) + 0.6667
= 1.000 + 0.6667 (no rounding needed!)
= 1.667 (where 1.6667 is rounded to 1.667)
2/3 + 2/3 + 1/3 = (0.6667 + 0.6667) + 0.3333
= 1.333 + 0.3333 (where 1.3334 is rounded to 1.333)
= 1.666 (where 1.6663 is rounded to 1.666)
We don't even need non-integers for this to be a problem:
10000 + 1 - 10000 = (10000 + 1) - 10000
= 10000 - 10000 (where 10001 is rounded to 10000)
= 0
10000 - 10000 + 1 = (10000 - 10000) + 1
= 0 + 1
= 1
This demonstrates possibly more clearly that the important part is that we have a limited number of significant digits - not a limited number of decimal places. If we could always keep the same number of decimal places, then with addition and subtraction at least, we'd be fine (so long as the values didn't overflow). The problem is that when you get to bigger numbers, smaller information is lost - the 10001 being rounded to 10000 in this case. (This is an example of the problem that Eric Lippert noted in his answer.)
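The integer example above can be reproduced directly in Java with float, whose 24-bit significand holds about 7 decimal digits (a sketch of my own):

```java
public class OrderMatters {
    public static void main(String[] args) {
        // 1e8 needs more than float's 24 significand bits, so adding 1 is lost:
        float big = 1e8f;
        System.out.println((big + 1f) - big);  // 0.0  (the 1 was rounded away)
        System.out.println((big - big) + 1f);  // 1.0  (the 1 survives)
    }
}
```

Exactly as in the decimal sketch, the small value is only preserved when it is never combined with the large one.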
It's important to note that the values on the first line of the right hand side are the same in all cases - so although it's important to understand that your decimal numbers (23.53, 5.88, 17.64) won't be represented exactly as double values, that's only a problem because of the problems shown above.
Here's what's going on in binary. As we know, some floating-point values cannot be represented exactly in binary, even if they can be represented exactly in decimal. These 3 numbers are just examples of that fact.
With this program I output the hexadecimal representations of each number and the results of each addition.
public class Main {
    public static void main(String args[]) {
        double x = 23.53; // Inexact representation
        double y = 5.88;  // Inexact representation
        double z = 17.64; // Inexact representation
        double s = 47.05; // What math tells us the sum should be; still inexact

        printValueAndInHex(x);
        printValueAndInHex(y);
        printValueAndInHex(z);
        printValueAndInHex(s);

        System.out.println("--------");

        double t1 = x + y;
        printValueAndInHex(t1);
        t1 = t1 + z;
        printValueAndInHex(t1);

        System.out.println("--------");

        double t2 = x + z;
        printValueAndInHex(t2);
        t2 = t2 + y;
        printValueAndInHex(t2);
    }

    private static void printValueAndInHex(double d) {
        System.out.println(Long.toHexString(Double.doubleToLongBits(d)) + ": " + d);
    }
}
The printValueAndInHex method is just a hex-printer helper.
The output is as follows:
403787ae147ae148: 23.53
4017851eb851eb85: 5.88
4031a3d70a3d70a4: 17.64
4047866666666666: 47.05
--------
403d68f5c28f5c29: 29.41
4047866666666666: 47.05
--------
404495c28f5c28f6: 41.17
4047866666666667: 47.050000000000004
The first 4 numbers are x, y, z, and s's hexadecimal representations. In IEEE floating point representation, bits 2-12 represent the binary exponent, that is, the scale of the number. (The first bit is the sign bit, and the remaining 52 bits are the mantissa.) The exponent represented is actually the binary number minus 1023.
The exponents for the first 4 numbers are extracted:
sign|exponent
403 => 0|100 0000 0011| => 1027 - 1023 = 4
401 => 0|100 0000 0001| => 1025 - 1023 = 2
403 => 0|100 0000 0011| => 1027 - 1023 = 4
404 => 0|100 0000 0100| => 1028 - 1023 = 5
First set of additions
The second number (y) is of smaller magnitude. When adding these two numbers to get x + y, the last 2 bits of the second number (01) are shifted out of range and do not figure into the calculation.
The second addition adds x + y and z and adds two numbers of the same scale.
Second set of additions
Here, x + z occurs first. They are of the same scale, but they yield a number that is higher up in scale:
404 => 0|100 0000 0100| => 1028 - 1023 = 5
The second addition adds x + z and y, and now 3 bits are dropped from y to add the numbers (101). Here, there must be a round upwards, because the result is the next floating point number up: 4047866666666666 for the first set of additions vs. 4047866666666667 for the second set of additions. That error is significant enough to show in the printout of the total.
In conclusion, be careful when performing mathematical operations on IEEE numbers. Some representations are inexact, and they become even more inexact when the scales are different. Add and subtract numbers of similar scale if you can.
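When you cannot control the magnitudes of the inputs, compensated (Kahan) summation is a standard technique for keeping the accumulated error down. A minimal sketch of the idea (my own addition, not from the answer above):

```java
import java.util.Arrays;

public class KahanSum {
    // Kahan summation: carry the low-order bits lost at each addition
    // in a separate compensation variable.
    static double kahanSum(double[] values) {
        double sum = 0.0;
        double c = 0.0;              // accumulates the lost low-order part
        for (double v : values) {
            double y = v - c;        // re-inject the previously lost bits
            double t = sum + y;      // big + small: low bits of y may be lost
            c = (t - sum) - y;       // algebraically zero; captures what was lost
            sum = t;
        }
        return sum;
    }

    public static void main(String[] args) {
        double[] values = new double[10];
        Arrays.fill(values, 0.1);

        double naive = 0.0;
        for (double v : values) naive += v;

        System.out.println(naive);            // 0.9999999999999999
        System.out.println(kahanSum(values)); // 1.0
    }
}
```

The compensation term only works because each subtraction in the loop is exact at double precision; a compiler must not "optimize" `(t - sum) - y` to zero.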
Jon's answer is of course correct. In your case the error is no larger than the error you would accumulate doing any simple floating point operation. You've got a scenario where in one case you get zero error and in another you get a tiny error; that's not actually that interesting a scenario. A good question is: are there scenarios where changing the order of calculations goes from a tiny error to a (relatively) enormous error? The answer is unambiguously yes.
Consider for example:
x1 = (a - b) + (c - d) + (e - f) + (g - h);
vs
x2 = (a + c + e + g) - (b + d + f + h);
vs
x3 = a - b + c - d + e - f + g - h;
Obviously in exact arithmetic they would be the same. It is entertaining to try to find values for a, b, c, d, e, f, g, h such that the values of x1 and x2 and x3 differ by a large quantity. See if you can do so!
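One concrete solution to that puzzle (the values are my own, not Eric's): make one pair of operands huge, so that the small terms vanish whenever they are grouped with the large ones.

```java
public class GroupingPuzzle {
    public static void main(String[] args) {
        double a = 1e20, b = 1e20;   // huge, and cancel each other exactly
        double c = 1.0,  d = 0.0;
        double e = 0.0,  f = 0.0, g = 0.0, h = 0.0;

        double x1 = (a - b) + (c - d) + (e - f) + (g - h);
        double x2 = (a + c + e + g) - (b + d + f + h);
        double x3 = a - b + c - d + e - f + g - h;

        System.out.println(x1);  // 1.0  (a - b cancels first, so c survives)
        System.out.println(x2);  // 0.0  (1e20 + 1.0 rounds back to 1e20)
        System.out.println(x3);  // 1.0  (left-to-right: cancellation happens early)
    }
}
```

Here x1 and x3 are exactly right while x2 is off by 1.0, an enormous relative error for the true answer.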
This actually covers much more than just Java and Javascript, and would likely affect any programming language using floats or doubles.
In memory, floating points use a special format along the lines of IEEE 754 (the converter provides much better explanation than I can).
Anyways, here's the float converter.
http://www.h-schmidt.net/FloatConverter/
The thing about the order of operations is the "fineness" of the operation.
Your first line yields 29.41 from the first two values, which gives us 2^4 as the exponent.
Your second line yields 41.17 which gives us 2^5 as the exponent.
We're losing a significant figure by increasing the exponent, which is likely to change the outcome.
Try ticking the last bit on the far right on and off for 41.17 and you can see that something as "insignificant" as 1/2^23 of the exponent would be enough to cause this floating point difference.
Edit: For those of you who remember significant figures, this would fall under that category. 10^4 + 4999 with a significant figure of 1 is going to be 10^4. In this case, the significant figure is much smaller, but we can see the results with the .00000000004 attached to it.
Floating point numbers are represented using the IEEE 754 format, which provides a specific size of bits for the mantissa (significand). Unfortunately this gives you a specific number of 'fractional building blocks' to play with, and certain fractional values cannot be represented precisely.
What is happening in your case is that in the second case, the addition is probably running into some precision issue because of the order the additions are evaluated. I haven't calculated the values, but it could be for example that 23.53 + 17.64 cannot be precisely represented, while 23.53 + 5.88 can.
Unfortunately it is a known problem that you just have to deal with.
I believe it has to do with the order of evaluation. While the sum is naturally the same in the world of exact math, in the binary world, instead of A + B + C = D, it's
A + B = E
E + C = D(1)
So there's that secondary step where floating point numbers can get off.
When you change the order,
A + C = F
F + B = D(2)
To add a different angle to the other answers here, this SO answer shows that there are ways of doing floating-point math where all summation orders return exactly the same value at the bit level.
I have a very annoying problem with long sums of floats or doubles in Java. Basically the idea is that if I execute:
for (float value = 0.0f; value < 1.0f; value += 0.1f)
    System.out.println(value);
What I get is:
0.0
0.1
0.2
0.3
0.4
0.5
0.6
0.70000005
0.8000001
0.9000001
I understand that there is an accumulation of floating-point precision error; however, how do I get rid of it? I tried using doubles to halve the error, but the result is still the same.
Any ideas?
There is no exact representation of 0.1 as a float or double. Because of this representation error the results are slightly different from what you expected.
A couple of approaches you can use:
When using the double type, only display as many digits as you need. When checking for equality allow for a small tolerance either way.
Alternatively use a type that allows you to store the numbers you are trying to represent exactly, for example BigDecimal can represent 0.1 exactly.
Example code for BigDecimal:
BigDecimal step = new BigDecimal("0.1");
for (BigDecimal value = BigDecimal.ZERO;
     value.compareTo(BigDecimal.ONE) < 0;
     value = value.add(step)) {
    System.out.println(value);
}
See it online: ideone
You can avoid this specific problem using classes like BigDecimal. float and double, being IEEE 754 floating-point, are not designed to be perfectly accurate, they're designed to be fast. But note Jon's point below: BigDecimal can't represent "one third" accurately, any more than double can represent "one tenth" accurately. But for (say) financial calculations, BigDecimal and classes like it tend to be the way to go, because they can represent numbers in the way that we humans tend to think about them.
Don't use float/double in an iterator as this maximises your rounding error. If you just use the following
for (int i = 0; i < 10; i++)
    System.out.println(i / 10.0);
it prints
0.0
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
I know BigDecimal is a popular choice, but I prefer double, not because it's much faster but because it's usually much shorter/cleaner to understand.
If you count the number of symbols as a measure of code complexity
using double => 11 symbols
use BigDecimal (from #Mark Byers example) => 21 symbols
BTW: don't use float unless there is a really good reason to not use double.
It's not just an accumulated error (and it has absolutely nothing to do with Java). 0.1f, once translated to actual code, does not have the value 0.1; you already get a rounding error.
From The Floating-Point Guide:
What can I do to avoid this problem?
That depends on what kind of calculations you're doing.

If you really need your results to add up exactly, especially when you work with money: use a special decimal datatype.

If you just don't want to see all those extra decimal places: simply format your result rounded to a fixed number of decimal places when displaying it.

If you have no decimal datatype available, an alternative is to work with integers, e.g. do money calculations entirely in cents. But this is more work and has some drawbacks.
Read the linked-to site for detailed information.
Another solution is to forgo == and check if the two values are close enough. (I know this is not what you asked in the body but I'm answering the question title.)
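A minimal sketch of that idea (the helper name and tolerance are my own):

```java
public class ApproxEquals {
    // Compare two doubles within an absolute tolerance. The right epsilon
    // (and whether a relative tolerance is needed instead) depends on the
    // magnitudes involved in your calculation.
    static boolean nearlyEqual(double x, double y, double epsilon) {
        return Math.abs(x - y) < epsilon;
    }

    public static void main(String[] args) {
        double sum = 0.1 + 0.2;
        System.out.println(sum == 0.3);                   // false
        System.out.println(nearlyEqual(sum, 0.3, 1e-9));  // true
    }
}
```

For values far from 1.0, a relative comparison such as Math.abs(x - y) < epsilon * Math.max(Math.abs(x), Math.abs(y)) is usually more appropriate.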
For the sake of completeness I recommend this one:
Shewchuck, "Robust Adaptive Floating-Point Geometric Predicates", if you want more examples of how to perform exact arithmetic with floating point - or at least controlled accuracy which is the original intention of author, http://www.cs.berkeley.edu/~jrs/papers/robustr.pdf
I faced the same issue and resolved it using BigDecimal. Below is the snippet that helped me.
double[] array = {45.34d, 45000.24d, 15000.12d, 4534.89d, 3444.12d, 12000.00d, 4900.00d, 1800.01d};
double total = 0.00d;
BigDecimal bTotal = new BigDecimal(0.0 + "");
for (int i = 0; i < array.length; i++) {
    total += array[i];
    bTotal = bTotal.add(new BigDecimal(array[i] + ""));
}
System.out.println(total);
System.out.println(bTotal);
Hope it will help you.
You should use a decimal datatype, not floats:
https://docs.oracle.com/javase/7/docs/api/java/math/BigDecimal.html
package loopinamdar;

import java.text.DecimalFormat;

public class loopinam {
    static DecimalFormat valueFormat = new DecimalFormat("0.0");

    public static void main(String[] args) {
        for (float value = 0.0f; value < 1.0f; value += 0.1f)
            System.out.println("" + valueFormat.format(value));
    }
}
First make it a double. Don't ever use float or you will have trouble using the java.lang.Math utilities.
Now if you happen to know in advance the precision you want and it is equal or less than 15, then it becomes easy to tell your doubles to behave. Check below:
// the magic method:
public final static double makePrecise(double value, int precision) {
    double pow = Math.pow(10, precision);
    long powValue = Math.round(pow * value);
    return powValue / pow;
}
Now whenever you make an operation, you must tell your double result to behave:
for (double value = 0.0d; value < 1.0d; value += 0.1d)
    System.out.println(makePrecise(value, 1) + " => " + value);
Output:
0.0 => 0.0
0.1 => 0.1
0.2 => 0.2
0.3 => 0.30000000000000004
0.4 => 0.4
0.5 => 0.5
0.6 => 0.6
0.7 => 0.7
0.8 => 0.7999999999999999
0.9 => 0.8999999999999999
1.0 => 0.9999999999999999
If you need more than 15 precision then you are out of luck:
for (double value = 0.0d; value < 1.0d; value += 0.1d)
    System.out.println(makePrecise(value, 16) + " => " + value);
Output:
0.0 => 0.0
0.1 => 0.1
0.2 => 0.2
0.3000000000000001 => 0.30000000000000004
0.4 => 0.4
0.5 => 0.5
0.6 => 0.6
0.7 => 0.7
0.8 => 0.7999999999999999
0.9 => 0.8999999999999999
0.9999999999999998 => 0.9999999999999999
NOTE1: For performance you should cache the Math.pow operation in an array. Not done here for clarity.
NOTE2: That's why we never use doubles for prices, but longs where the last N (i.e. where N <= 15, usually 8) digits are the decimal digits. Then you can forget about what I wrote above :)
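That long-based approach from NOTE2 might look like this (a sketch under the stated assumption of 2 decimal digits; the names are my own):

```java
public class CentsMoney {
    public static void main(String[] args) {
        // Keep money as an integer number of cents; arithmetic is then exact.
        long priceCents = 1999;  // $19.99
        long taxCents   = 160;   // $1.60
        long totalCents = priceCents + taxCents;

        // Only convert to a decimal string at the edge, for display:
        System.out.printf("$%d.%02d%n", totalCents / 100, totalCents % 100);  // $21.59
    }
}
```

Because every quantity is an integer, there is no representation error at all, only a fixed smallest unit (one cent).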
If you want to keep on using float and avoid accumulating errors by repeatedly adding 0.1f, try something like this:
for (int count = 0; count < 10; count++) {
float value = 0.1f * count;
System.out.println(value);
}
Note however, as others have already explained, that float is not an infinitely precise data type.
You just need to be aware of the precision required in your calculation and the precision your chosen data type is capable of and present your answers accordingly.
For example, if you are dealing with numbers with 3 significant figures, use of float (which provides a precision of 7 significant figures) is appropriate. However, you can't quote your final answer to a precision of 7 significant figures if your starting values only have a precision of 2 significant figures.
5.01 + 4.02 = 9.03 (to 3 significant figures)
In your example you are performing multiple additions, and with each addition there is a consequent impact on the final precision.
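To see how that per-addition error grows, compare accumulating 0.1f a million times with computing the value in a single step from the count (a sketch of my own):

```java
public class DriftDemo {
    public static void main(String[] args) {
        final int N = 1_000_000;

        float accumulated = 0.0f;
        for (int i = 0; i < N; i++) accumulated += 0.1f;  // error compounds each step

        float multiplied = 0.1f * N;                      // a single rounding step

        System.out.println(accumulated);  // drifts far from 100000
        System.out.println(multiplied);   // 100000.0
    }
}
```

One million additions accumulate an error of hundreds, while the single multiplication stays within one rounding step of the true answer.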