In my course, I am told:
Continuous values are represented approximately in memory, and therefore computing with floats involves rounding errors. These are tiny discrepancies in bit patterns; thus the test e==f is unsafe if e and f are floats.
Referring to Java.
Is this true? I've used comparison statements with doubles and floats and have never had rounding issues. I've never read anything similar in a textbook. Surely the virtual machine accounts for this?
It is true.
It is an inherent limitation of how floating point values are represented in memory in a finite number of bits.
This program, for instance, prints "false":
public class Main {
    public static void main(String[] args) {
        double a = 0.7;
        double b = 0.9;
        double x = a + 0.1;
        double y = b - 0.1;
        System.out.println(x == y);
    }
}
Instead of exact comparison with '==' you usually decide on some level of precision and ask if the numbers are "close enough":
System.out.println(Math.abs(x - y) < 0.0001);
This applies to Java just as much as to any other language using floating point. It's inherent in the design of the representation of floating point values in hardware.
More info on floating point values:
What Every Computer Scientist Should Know About Floating-Point Arithmetic
Yes, representing 0.1 exactly in base-2 is the same as trying to represent 1/3 exactly in base 10.
This is always true. There are some numbers which cannot be represented accurately using floating point representation. Consider, for example, pi. How would you represent a number which has infinite digits within finite storage? Therefore, when comparing numbers you should check whether the difference between them is smaller than some epsilon. Also, several classes exist that can help you achieve greater accuracy, such as BigDecimal and BigInteger.
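For instance, here is a small sketch contrasting double arithmetic with BigDecimal. Note the string constructor, which avoids inheriting the double's representation error (the class name is this example's own):

```java
import java.math.BigDecimal;

public class BigDecimalDemo {
    public static void main(String[] args) {
        // With doubles, 0.1 + 0.2 is not exactly 0.3
        System.out.println(0.1 + 0.2 == 0.3);  // false

        // BigDecimal built from strings keeps exact decimal values
        BigDecimal a = new BigDecimal("0.1");
        BigDecimal b = new BigDecimal("0.2");
        System.out.println(a.add(b).equals(new BigDecimal("0.3")));  // true
    }
}
```

Beware that `new BigDecimal(0.1)` (the double constructor) would carry the double's error along; the string constructor is what gives you the exact decimal value.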
It is right. Note that Java has nothing to do with it, the problem is inherent in floating point math in ANY language.
You can often get away with it with classroom-level problems but it's not going to work in the real world. Sometimes it won't work in the classroom.
An incident from long ago back in school. The teacher of an intro class assigned a final exam problem that was proving a real doozy for many of the better students--it wasn't working and they didn't know why. (I saw this as a lab assistant, I wasn't in the class.) Finally some started asking me for help and some probing revealed the problem: They had never been taught about the inherent inaccuracy of floating point math.
Now, there were two basic approaches to this problem: a brute force one (which by chance worked in this case, as it made the same errors every time) and a more elegant one (which would make different errors and not work). Anyone who tried the elegant approach would hit a brick wall without having any idea why. I helped a bunch of them and stuck in a comment explaining why, telling them to have the teacher contact me if he had questions.
Of course next semester I hear from him about this and I basically floored the entire department with a simple little program:
10 X = 3000000
20 X = X + 1
30 If X < X + 1 goto 20
40 Print "X = X + 1"
Despite what every teacher in the department thought, this WILL terminate. The 3 million seed is simply to make it terminate faster. (If you don't know BASIC: there are no gimmicks here, just exhausting the precision of floating point numbers.)
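The same effect can be sketched in Java using float (the analogue of classic BASIC single precision); the class name is this example's own:

```java
public class FloatExhaustion {
    public static void main(String[] args) {
        float x = 3_000_000f;  // the seed just makes it terminate faster
        while (x < x + 1f) {
            x = x + 1f;
        }
        // At 2^24 = 16777216, float can no longer tell x and x + 1 apart
        System.out.println("X = X + 1 at " + x);  // prints 1.6777216E7
    }
}
```

The loop exits once x reaches 2^24, because the next float above 16777216 is 16777218, so 16777216 + 1 rounds back to 16777216.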
Yes, as other answers have said. I want to add that I recommend this article about floating point accuracy: Visualizing floats
Of course it is true. Think about it. Any number must be represented in binary.
Picture "1000" as 0.5 or 1/2, that is, 2 ** -1. Then "0100" is 0.25 or 1/4. You can see where I'm going.
How many numbers can you represent in this manner? 2**4. Adding more bits doubles the available space, but it is never infinite. 1/3 or 1/10, or for that matter any 1/n where n is not a power of 2, cannot really be represented.
1/3 could be "0101" (0.3125) or "0110" (0.375). Either value, if you multiply it by 3, will not be 1. Of course you could add special rules. Say "when you add '0101' three times, make it 1"... but this approach won't work in the long run. You can catch some cases, but then what about 1/6 times 2?
It's not just a problem of binary representation: any finite representation has numbers that it cannot represent, since the reals are infinite after all.
Most CPUs (and computer languages) use IEEE 754 floating point arithmetic. In this notation, some decimal numbers have no exact representation, e.g. 0.1. So if you divide 1 by 10, you won't get an exact result. When performing several calculations in a row, the errors add up. Try the following example in Python:
>>> 0.1
0.10000000000000001
>>> 0.1 / 7 * 10 * 7 == 1
False
That's not really what you'd expect mathematically. (Newer Python versions display the first value as simply 0.1, but the stored double is the same.)
By the way:
A common misunderstanding concerning floating point numbers is that the results are not precise and cannot be compared safely. This is only true if you really use fractional numbers. If all your math is in the integer domain, doubles and floats behave exactly like ints and can also be compared safely. They can safely be used as loop counters, for example.
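For example, this loop uses a double counter over whole numbers; since whole numbers up to 2^53 are represented exactly in a double, the exact comparison at the end is safe (class name is illustrative):

```java
public class WholeDoubles {
    public static void main(String[] args) {
        // Whole-number arithmetic in a double is exact (up to 2^53),
        // so == is safe here
        double sum = 0.0;
        for (double i = 1.0; i <= 100.0; i = i + 1.0) {
            sum += i;
        }
        System.out.println(sum == 5050.0);  // true
    }
}
```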
Yes, Java also uses floating point arithmetic.
Related
I know that for primitive floating point types (floats and doubles), you're not supposed to compare them directly via ==. But what if you're using the wrapper class for double? Will something like
Double a = 5.05;
Double b = 5.05;
boolean test = a.equals(b);
compare the two values properly?
You need to fully understand the reason for why == comparison is a bad idea. Without understanding, you're just fumbling in the dark.
Let's talk about how computers (and doubles) work.
Imagine you enter a room; it has 3 lightswitches, otherwise it is bare. You will enter the room, you can fiddle with the switches, but then you have to leave. I enter the room later and can look at the switches.
How much information can you convey?
The answer is: You can convey 8 different states: DDD, DDU, DUD, DUU, UDD, UDU, UUD, and UUU. That's it.
Computers work exactly like this when they store a double. Except instead of 3 switches, you get 64 switches. That means 2^64 different states you can convey with a single double, and that's a ton of states: it's a 20-digit number.
But it's still a finite amount of states, and that's problematic: there are an infinite amount of numbers between 0 and 1. Let alone between -infinity and +infinity, which double dares to cover. How do you store one of an infinite infinity of choices when you only get to represent 2^64 states? That 20-digit number starts to look pretty small when it's tasked to differentiate from an infinite infinity of possibilities, doesn't it?
The answer, of course, is that this is completely impossible.
So doubles don't actually work like that. Instead, someone took some effort and hung up a gigantic number line, from minus infinity to plus infinity, in a big room, and threw 2^64 darts at this line. The numbers they landed on are the 'blessed numbers': these are representable by a double value. That does mean there are an infinite amount of numbers between any 2 darts that are therefore not representable. The darts aren't quite random: the closer you are to 0, the denser the darts. Once you get beyond about 2^53 or so, the distance between 2 darts even exceeds 1.0.
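You can see the spacing of the "darts" directly with Math.ulp, which returns the distance from a double to the next representable one:

```java
public class DartGaps {
    public static void main(String[] args) {
        // The gap (ulp) between adjacent representable doubles grows with magnitude
        System.out.println(Math.ulp(1.0));    // 2.220446049250313E-16
        System.out.println(Math.ulp(1e9));    // about 1.19E-7

        double big = (double) (1L << 53);     // 2^53
        System.out.println(Math.ulp(big));    // 2.0: whole numbers start being skipped
        System.out.println(big + 1 == big);   // true
    }
}
```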
Here's a trivial example of a non-representable number: 0.3. Amazing, isn't it? Something that simple. It means a computer literally cannot calculate 0.1 + 0.2 using double. So what happens when you try? The rules state that the result of any calculation is always silently rounded to the nearest blessed number.
And therein lies the rub: You can run the math:
double x = 0.1 + 0.2;
and later do:
double y = 0.9 - 0.8 + 0.15 + 0.05;
and we humans would immediately notice that x and y are naturally identical. But not so for computers: because of that silent rounding to the nearest blessed number, it's possible that x is 0.29999999999999999785 and y is 0.300000000000000000012.
Thus we get to four crucial aspects when using double (or float, which is worse in just about every fashion; don't ever use it):
If you need absolute precision, don't use them at all.
When printing them, always round them to a limited number of digits. System.out.println does some of this out of the box, but you should really use .printf("%.5f") or similar: pick the number of digits you need.
Be aware that the errors will compound, and it gets worse as you are further away from 1.0.
Do not ever compare with ==; instead always use a delta-compare: the notion of "let's consider the 2 numbers equal if they are within 0.0000000001 of each other".
There is no universal magic delta value; it depends on your precision needs, how far away you are from 1.0, etc. Therefore, just asking the computer "hey, figure this out, I just want to know whether these 2 doubles are equal" is impossible. The only definition available that doesn't require your input as to how close they may be is sheer perfection: they are equal only if they are precisely identical. That definition would fail you in the trivial example above. It makes not one iota of difference whether you use Double.equals instead of double == double, or any other utility class for that matter.
So, no, Double.equals is not suitable. You will have to compare Math.abs(d1 - d2) < epsilon, where epsilon is your choice. Mostly, if equality matters at all you're already doing it wrong and shouldn't be using double in the first place.
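A minimal sketch of such a delta comparison; the helper name and the 1e-9 epsilon are this example's own choices, not a standard API:

```java
public class DeltaCompare {
    // Hypothetical helper: "equal" means within a caller-chosen tolerance
    static boolean nearlyEqual(double a, double b, double epsilon) {
        return Math.abs(a - b) < epsilon;
    }

    public static void main(String[] args) {
        double x = 0.1 + 0.2;
        System.out.println(x == 0.3);                   // false
        System.out.println(nearlyEqual(x, 0.3, 1e-9));  // true
    }
}
```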
NB: When representing money, you don't want unpredictable rounding, so never use doubles for these. Instead figure out what the atomic banking unit is (dollarcents, eurocents, yen, satoshis for bitcoin, etc), and store that as a long. You store $4.52 and long x = 452;, not as double x = 4.52;.
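A sketch of that long-cents approach (names are illustrative):

```java
public class Money {
    public static void main(String[] args) {
        // Store $4.52 as 452 cents in a long: exact integer arithmetic
        long priceInCents = 452;
        long taxInCents = 48;
        long totalInCents = priceInCents + taxInCents;  // exactly 500

        // Only convert to dollars-and-cents form when displaying
        System.out.printf("$%d.%02d%n",
                totalInCents / 100, totalInCents % 100);  // $5.00
    }
}
```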
I'm using Math.sin to calculate trigonometry in Java with 3 decimal precision. However when I calculate values that should result in an Integer I get 1.0000000002 instead of 1.
I have tried using
System.out.printf(Locale.ROOT, "%.3f ", v);
which does solve the problem of 1.0000000002 turning into 1.000.
However, when I calculate values that should result in 0, I instead get -1.8369701987210297E-16, and
System.out.printf(Locale.ROOT, "%.3f ", v);
prints out -0.000 when I need it to be 0.000.
Any ideas on how to get rid of that negative sign?
Let's start with this:
How do I avoid rounding errors with doubles?
Basically, you can't. They are inherent to numerical calculations using floating point types. Trust me ... or take the time to read this article:
What every Computer Scientist should know about floating-point arithmetic by David Goldberg.
In this case, the other thing that comes into play is that trigonometric functions are implemented by computing a finite number of steps of an infinite series with finite precision (i.e. floating point) arithmetic. The javadoc for the Math class leaves some "wiggle room" on the accuracy of the math functions. It is worth reading the javadocs to understand the expected error bounds.
Finally, if you are computing (for example) sin π/2 you need to consider how accurate your representation of π/2 is.
So what you should really be asking is how to deal with the rounding error that unavoidably happens.
In this case, you are asking how to make it look to the user of your program as if there isn't any rounding error. There are two approaches to this:
Leave it alone! The rounding errors occur, so we should not lie to the users about it. It is better to educate them. (Honestly, this is high school maths, and even "the pointy haired boss" should understand that arithmetic is inexact.)
Routines like printf do a pretty good job. And the -0.000 displayed in this case is actually a truthful answer. It means that the computed answer rounds to zero to 3 decimal places but is actually negative. This is not actually hard for someone with high school maths to understand. If you explain it.
Lie. Fake it. Put in some special case code to explicitly convert numbers between -0.0005 and zero to exactly zero. The code suggested in a comment
System.out.printf(Locale.ROOT, "%.3f ", Math.round(v * 1000d) / 1000d);
is another way to do the job. But the risk is that the lie could be dangerous in some circumstances. On the other hand, you could say that the real mistake is displaying the numbers to 3 decimal places in the first place.
Depending on the accuracy you need, you can multiply by X and then divide by X, where X = 10^y and y is the required floating point precision.
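A sketch of that multiply-round-divide trick (the helper name is this example's own). A side effect useful here: rounding the tiny negative value to a whole number first yields a positive zero, so the minus sign disappears:

```java
public class RoundToPrecision {
    // Hypothetical helper: round v to 'places' decimal places
    static double round(double v, int places) {
        double scale = Math.pow(10, places);  // X = 10^y
        return Math.round(v * scale) / scale;
    }

    public static void main(String[] args) {
        System.out.println(round(1.0000000002, 3));             // 1.0
        System.out.println(round(-1.8369701987210297E-16, 3));  // 0.0, sign gone
    }
}
```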
double r = 11.631;
double theta = 21.4;
In the debugger, these are shown as 11.631000000000000 and 21.399999618530273.
How can I avoid this?
These accuracy problems are due to the internal representation of floating point numbers and there's not much you can do to avoid it.
By the way, printing these values at run-time often still leads to the correct results, at least using modern C++ compilers. For most operations, this isn't much of an issue.
I liked Joel's explanation, which deals with a similar binary floating point precision issue in Excel 2007:
See how there's a lot of 0110 0110 0110 there at the end? That's because 0.1 has no exact representation in binary... it's a repeating binary number. It's sort of like how 1/3 has no representation in decimal. 1/3 is 0.33333333 and you have to keep writing 3's forever. If you lose patience, you get something inexact.
So you can imagine how, in decimal, if you tried to do 3*1/3, and you didn't have time to write 3's forever, the result you would get would be 0.99999999, not 1, and people would get angry with you for being wrong.
If you have a value like:
double theta = 21.4;
And you want to do:
if (theta == 21.4)
{
}
You have to be a bit clever: check whether the value of theta is really close to 21.4, rather than exactly equal to it.
if (fabs(theta - 21.4) <= 1e-6)
{
}
This is partly platform-specific - and we don't know what platform you're using.
It's also partly a case of knowing what you actually want to see. The debugger is showing you - to some extent, anyway - the precise value stored in your variable. In my article on binary floating point numbers in .NET, there's a C# class which lets you see the absolutely exact number stored in a double. The online version isn't working at the moment - I'll try to put one up on another site.
Given that the debugger sees the "actual" value, it's got to make a judgement call about what to display - it could show you the value rounded to a few decimal places, or a more precise value. Some debuggers do a better job than others at reading developers' minds, but it's a fundamental problem with binary floating point numbers.
Use the fixed-point decimal type if you want stability at the limits of precision. There are overheads, and you must explicitly cast if you wish to convert to floating point. If you do convert to floating point you will reintroduce the instabilities that seem to bother you.
Alternately you can get over it and learn to work with the limited precision of floating point arithmetic. For example you can use rounding to make values converge, or you can use epsilon comparisons to describe a tolerance. "Epsilon" is a constant you set up that defines a tolerance. For example, you may choose to regard two values as being equal if they are within 0.0001 of each other.
It occurs to me that you could use operator overloading to make epsilon comparisons transparent. That would be very cool.
For mantissa-exponent representations EPSILON must be computed to remain within the representable precision. For a number N, Epsilon = N / 10E+14
System.Double.Epsilon is the smallest representable positive value for the Double type. It is too small for our purpose. Read Microsoft's advice on equality testing
I've come across this before (on my blog) - I think the surprise tends to be that the 'irrational' numbers are different.
By 'irrational' here I'm just referring to the fact that they can't be accurately represented in this format. Real irrational numbers (like π) can't be accurately represented at all.
Most people are familiar with 1/3 not working in decimal: 0.3333333333333...
The odd thing is that 1.1 doesn't work in floats. People expect decimal values to work in floating point numbers because of how they think of them:
1.1 is 11 x 10^-1
When actually they're in base-2
1.1 is 154811237190861 x 2^-47
You can't avoid it, you just have to get used to the fact that some floats are 'irrational', in the same way that 1/3 is.
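You can see the exact "irrational" value a double actually holds by feeding it to the BigDecimal(double) constructor, which preserves the stored value digit for digit:

```java
import java.math.BigDecimal;

public class ExactValue {
    public static void main(String[] args) {
        // 0.5 is a power of two, so it is stored exactly
        System.out.println(new BigDecimal(0.5));  // 0.5

        // 1.1 is not: the nearest double is printed in full
        System.out.println(new BigDecimal(1.1));
        // 1.100000000000000088817841970012523233890533447265625
    }
}
```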
One way you can avoid this is to use a library that uses an alternative method of representing decimal numbers, such as BCD
If you are using Java and you need accuracy, use the BigDecimal class for floating point calculations. It is slower but safer.
Seems to me that 21.399999618530273 is the single precision (float) representation of 21.4. Looks like the debugger is casting down from double to float somewhere.
You can't avoid this, as you're using floating point numbers with a fixed number of bytes. There's simply no bijection possible between the real numbers and such a limited notation.
But most of the time you can simply ignore it. 21.4 == 21.4 would still be true, because it is still the same number with the same error. But 21.4f == 21.4 may not be true, because the error for float and for double is different.
If you need fixed precision, perhaps you should try fixed point numbers. Or even integers. For example, I often use int(1000*x) for debug output.
Dangers of computer arithmetic
If it bothers you, you can customize the way some values are displayed during debug. Use it with care :-)
Enhancing Debugging with the Debugger Display Attributes
Refer to General Decimal Arithmetic
Also take note when comparing floats, see this answer for more information.
According to the Java Language Specification:
"If at least one of the operands to a numerical operator is of type double, then the operation is carried out using 64-bit floating-point arithmetic, and the result of the numerical operator is a value of type double. If the other operand is not a double, it is first widened (§5.1.5) to type double by numeric promotion (§5.6)."
Here is the Source
I'm looking through old exam questions (currently first year of uni.) and I'm wondering if someone could explain a bit more thoroughly why the following for loop does not end when it is supposed to. Why does this happen? I understand that it skips 100.0 because of a rounding-error or something, but why?
for (double i = 0.0; i != 100; i = i + 0.1) {
    System.out.println(i);
}
The number 0.1 cannot be exactly represented in binary, much like 1/3 cannot be exactly represented in decimal; as such, you cannot guarantee that:
0.1+0.1+0.1+0.1+0.1+0.1+0.1+0.1+0.1+0.1==1
This is because in binary:
0.1 = (binary) 0.00011001100110011001100110011001... forever
However a double cannot contain an infinite precision and so, just as we approximate 1/3 to 0.3333333 so must the binary representation approximate 0.1.
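A quick Java demonstration of that approximation accumulating (class name is illustrative):

```java
public class TenthSum {
    public static void main(String[] args) {
        double sum = 0.0;
        for (int i = 0; i < 10; i++) {
            sum += 0.1;  // each 0.1 is already slightly off
        }
        System.out.println(sum);         // 0.9999999999999999
        System.out.println(sum == 1.0);  // false
    }
}
```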
Expanded decimal analogy
In decimal you may find that
1/3+1/3+1/3
=0.333+0.333+0.333
=0.999
This is exactly the same problem. It should not be seen as a weakness of floating point numbers as our own decimal system has the same difficulties (but for different numbers, someone with a base-3 system would find it strange that we struggled to represent 1/3). It is however an issue to be aware of.
Demo
A live demo provided by Andrea Ligios shows these errors building up.
Computers (at least current ones) work with binary data. Moreover, there is a length limit on what their arithmetic logic units can process (e.g. 32 bits, 64 bits, etc.).
Representing integers in binary form is simple; we can't say the same for floating point numbers.
There is a special way of representing floating point numbers, defined by IEEE 754, which is accepted as the de facto standard by processor manufacturers and software developers alike; that's why it is important for everyone to know about it.
If we look at the maximum value of a double in Java, Double.MAX_VALUE is 1.7976931348623157E308 (> 10^307). Huge numbers can be represented with only 64 bits; the problem, however, is precision.
Since the '==' and '!=' operators compare the exact represented values, in your case 0.1+0.1+0.1 is not equal to 0.3 in terms of the bits by which they are represented.
In conclusion, to fit huge floating point numbers into a few bits, clever engineers decided to sacrifice precision. If you are working with floating point, you shouldn't use '==' or '!=' unless you are sure of what you are doing.
As a general rule, never use double to iterate with due to rounding errors (0.1 may look nice when written in base 10, but try writing it in base 2—which is what double uses). What you should do is use a plain int variable to iterate and calculate the double from it.
for (int i = 0; i < 1000; i++) {
    System.out.println(i / 10.0);
}
First of all, I'm going to explain some things about doubles. This will actually take place in base ten for ease of understanding.
Take the value one-third and try to express it in base ten. You get 0.3333333333333.... Let's say we need to round it to 4 places. We get 0.3333. Now, let's add another 1/3. We get 0.6666333333333.... which rounds to 0.6666. Let's add another 1/3. We get 0.9999, not 1.
The same thing happens with base two and one-tenth. Since you're counting by 0.1 (base ten), and 0.1 (base ten) is a repeating binary value (like 0.1666666... in base ten), you'll have just enough error to miss one hundred when you do get there.
1/2 can be represented in base ten just fine, and 1/5 can as well. This is because the prime factors of the denominator are a subset of the factors of the base. This is not the case for one third in base ten or one tenth in base two.
It should be:
for (double a = 0.0; a < 100.0; a = a + 0.1)
Try and see if this works instead.
Possible Duplicates:
Is JavaScript's Math broken?
Java floating point arithmetic
I have the current code
for (double j = .01; j <= .17; j += .01) {
    System.out.println(j);
}
the output is:
0.01
0.02
0.03
0.04
0.05
0.060000000000000005
0.07
0.08
0.09
0.09999999999999999
0.10999999999999999
0.11999999999999998
0.12999999999999998
0.13999999999999999
0.15
0.16
0.17
Can someone explain why this is happening? How do you fix this? Besides writing a rounding function?
Floats are an approximation of the actual number in Java, due to the way they're stored. If you need exact values, use a BigDecimal instead.
They are working correctly. Some decimal values are not representable exactly in binary floating point and get rounded to the closest value. See my answer to this question for more detail. The question was asked about Perl, but the answer applies equally to Java since it's a limitation of ALL floating point representations that do not have infinite precision (i.e. all of them).
As suggested by @Kaleb Brasee, use BigDecimals when accuracy is a must. Here is a link to a nice explanation of the tiny details of floating point operations in Java: http://firstclassthoughts.co.uk/java/traps/java_double_traps.html
It also links to a discussion of the issues involved in using BigDecimal. I highly recommend reading both; they really helped me.
We humans are used to thinking in 'base 10' when we deal with floating point numbers 'by hand' (that is, literally when writing them on paper or when entering them into a computer). Because of this, it is possible for us to write down an exact representation of, say, 17%. We just write 0.17 (or 1.7E-1, etc). Trying to represent something as trivial as a third cannot be done exactly in that system, because we would have to write 0.3333333... with an infinite number of 3s, which is impossible.
Computers dealing with floating point not only have a limited number of bits to represent the mantissa (or significand) of the number, they are also restricted to expressing the mantissa in base two. That means that most percentages (which we humans, with our base 10 convention, can always write exactly, for example '0.17') are impossible for the computer to store exactly. Fractions like 0%, 25%, 50%, 75% and 100% can be expressed exactly as floating point numbers, because they consist of halves (2^-1) or quarters (2^-2), which fit nicely into a binary representation of a number. Percentage values like 17%, or even trivial ones (for us humans!) like 10% or 1%, are impossible for computers to store exactly, simply because those numbers are to the binary floating point system what 'one third' is to the human (base 10) system.
But if you carefully pick your floating point values so that they are always a whole multiple of 1/2^n, where n might be 10 (meaning an integer number of 1024ths), then they can always be stored exactly, without errors, as floating point numbers. So if you try to store 17/1024 in a computer, it will go smoothly. You can actually store it without error even using the 'human base 10' decimal system (though you would go nuts from the number of digits you'd have to deal with).
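A small Java sketch of that: sums built from 1024ths stay exact, because every partial sum is itself a multiple of 1/2^10 and thus representable:

```java
public class ExactFractions {
    public static void main(String[] args) {
        // 17/1024 has a power-of-two denominator: stored exactly
        double step = 17.0 / 1024.0;
        double sum = 0.0;
        for (int i = 0; i < 1024; i++) {
            sum += step;  // every partial sum is exactly representable
        }
        System.out.println(sum == 17.0);  // true: no error accumulated
    }
}
```

Contrast this with the earlier loops stepping by 0.1, where the error accumulates on every addition.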
This, I believe, is one reason why some games express angles in a unit where a whole 360-degree turn is 256 angle units. That can be expressed without loss as a floating point number between 0 and 1 (where 1 means a full revolution).
This is normal for the computer's double representation: you lose some bits, and then you get results like these. A better solution is to do this:
for (int j = 1; j <= 17; j++) {
    System.out.println(j / 100.0);
}
This is because floating point values are inherently not the same as reals in the mathematical sense.
In a computer, there is only a fixed number of bits that can be used to represent a value. This means there is a finite number of values it can hold. But there are infinitely many real numbers, so not all of them can be represented exactly. Usually, though, the stored value is something close. You can find a more detailed explanation here.
That is because of the limitations of IEEE 754, the binary format designed to get the most out of 32 (or 64) bits.
As others have pointed out, only numbers that are sums of powers of two are exactly representable in (binary) floating point format.
If you need to store arbitrary numbers with arbitrary precision, then use BigDecimal.
If the problem is just a display issue, then you can get round this in how you display the number. For example:
String.format("%.2f", n)
will format the number to 2 decimal places.