Why does a for loop using a double fail to terminate - java

I'm looking through old exam questions (currently first year of uni.) and I'm wondering if someone could explain a bit more thoroughly why the following for loop does not end when it is supposed to. Why does this happen? I understand that it skips 100.0 because of a rounding-error or something, but why?
for (double i = 0.0; i != 100; i = i + 0.1) {
    System.out.println(i);
}

The number 0.1 cannot be exactly represented in binary, much like 1/3 cannot be exactly represented in decimal. As such, you cannot guarantee that:
0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 == 1
This is because in binary:
0.1 = (binary) 0.000110011001100110011001100110011... (the 0011 repeats forever)
However, a double cannot hold infinite precision, so just as we approximate 1/3 as 0.3333333, the binary representation must approximate 0.1.
Expanded decimal analogy
In decimal you may find that
1/3 + 1/3 + 1/3
= 0.333 + 0.333 + 0.333
= 0.999
This is exactly the same problem. It should not be seen as a weakness of floating point numbers as our own decimal system has the same difficulties (but for different numbers, someone with a base-3 system would find it strange that we struggled to represent 1/3). It is however an issue to be aware of.
Demo
A live demo provided by Andrea Ligios shows these errors building up.
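In the same spirit, here is a minimal Java sketch (not the linked demo itself) showing the drift after just ten additions:
double sum = 0.0;
for (int i = 0; i < 10; i++) {
    sum += 0.1; // each addition is silently rounded to the nearest double
}
System.out.println(sum);        // 0.9999999999999999
System.out.println(sum == 1.0); // false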

Computers (at least current ones) work with binary data. Moreover, there is a length limitation on what their arithmetic logic units can process (e.g. 32 bits, 64 bits, etc.).
Representing integers in binary form is simple; we can't say the same for floating-point numbers.
Floating-point numbers are represented in a special way according to IEEE 754, which is accepted as the de facto standard by processor manufacturers and software developers alike; that is why it is important for everyone to know about it.
The maximum value of a double in Java (Double.MAX_VALUE) is 1.7976931348623157E308 (> 10^307). Huge numbers can be represented with only 64 bits; the problem, however, is precision.
Since the == and != operators compare numbers bit by bit, in your case 0.1 + 0.1 + 0.1 is not equal to 0.3 in terms of the bits that represent them.
In conclusion: to fit huge floating-point numbers into a few bits, clever engineers decided to sacrifice precision. If you are working with floating-point values, you shouldn't use == or != unless you are sure of what you are doing.
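To make that concrete, a small sketch; the two printed bit patterns differ, which is exactly what == compares:
double sum = 0.1 + 0.1 + 0.1;
System.out.println(sum);                          // 0.30000000000000004
System.out.println(sum == 0.3);                   // false
System.out.println(Double.doubleToLongBits(sum)); // raw bits of the sum...
System.out.println(Double.doubleToLongBits(0.3)); // ...differ from those of 0.3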

As a general rule, never iterate with a double, due to rounding errors (0.1 may look nice when written in base 10, but try writing it in base 2, which is what double uses). What you should do is use a plain int variable to iterate and calculate the double from it.
for (int i = 0; i < 1000; i++)
    System.out.println(i / 10.0);

First of all, I'm going to explain some things about doubles. This will actually take place in base ten for ease of understanding.
Take the value one-third and try to express it in base ten. You get 0.3333333333333.... Let's say we need to round it to 4 places. We get 0.3333. Now, let's add another 1/3. We get 0.6666333333333.... which rounds to 0.6666. Let's add another 1/3. We get 0.9999, not 1.
The same thing happens with base two and one-tenth. Since you're counting by 0.1 (base ten), and 0.1 (base ten) is a repeating value in binary (like 0.1666666... in base ten), you'll have just enough error to miss one hundred when you do get there.
1/2 can be represented in base ten just fine, and 1/5 can as well. This is because the prime factors of the denominator are a subset of the prime factors of the base. This is not the case for one third in base ten or one tenth in base two.
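As an illustration, new BigDecimal(double) exposes the exact value a double stores, so you can see both cases directly (a sketch; requires import java.math.BigDecimal):
System.out.println(new BigDecimal(0.5)); // 0.5 (exact: 1/2 is a power of two)
System.out.println(new BigDecimal(0.1));
// 0.1000000000000000055511151231257827021181583404541015625 (the nearest double to 1/10)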

It should be for (double a = 0.0; a < 100.0; a = a + 0.1), using < rather than !=.
Try that and see if it works instead.

Related

Does the .equals() method of the Double wrapper class work for finding equality of floating point numbers?

I know that for primitive floating point types (floats and doubles), you're not supposed to compare them directly via ==. But what if you're using the wrapper class for double? Will something like
Double a = 5.05;
Double b = 5.05;
boolean test = a.equals(b);
compare the two values properly?
You need to fully understand the reason why == comparison is a bad idea. Without understanding, you're just fumbling in the dark.
Let's talk about how computers (and doubles) work.
Imagine a room; it has 3 light switches and is otherwise bare. You enter the room, you can fiddle with the switches, but then you have to leave. I enter the room later and can look at the switches.
How much information can you convey?
The answer is: You can convey 8 different states: DDD, DDU, DUD, DUU, UDD, UDU, UUD, and UUU. That's it.
Computers work exactly like this when they store a double. Except instead of 3 switches, you get 64 switches. That means 2^64 different states you can convey with a single double, and that's a ton of states: That's a 19 digit number.
But it's still a finite amount of states, and that's problematic: There are an infinite amount of numbers between 0 and 1. Let alone between -infinity and +infinity, which double dares to cover. How do you store one of an infinite infinity of choices when you only get to represent 2^64 states? That 19-digit number starts to look pretty small when it's tasked to differentiate from an infinite infinity of possibilities, doesn't it?
The answer, of course, is that this is completely impossible.
So doubles don't actually work like that. Instead, someone took some effort and hung up a gigantic number line, from minus infinity to plus infinity, in a big room, and threw 2^64 darts at this line. The numbers they landed on are the 'blessed numbers'; these are representable by a double value. That does mean there are an infinite amount of numbers between any 2 darts that are therefore not representable. The darts aren't quite random: the closer you are to 0, the denser the darts. Once you get beyond about 2^52 or so, the distance between 2 darts even exceeds 1.0.
Here's a trivial example of a non-representable number: 0.3. Amazing, isn't it? Something that simple. It means a computer literally cannot calculate 0.1 + 0.2 using double. So what happens when you try? The rules state that the result of any calculation is always silently rounded to the nearest blessed number.
And therein lies the rub: You can run the math:
double x = 0.1 + 0.2;
and later do:
double y = 0.9 - 0.8 + 0.15 + 0.05;
and we humans would immediately notice that x and y are naturally identical. But not so for computers: because of that silent rounding to the nearest blessed number, it's possible that x is 0.29999999999999999785, and y is 0.300000000000000000012.
Thus we get to four crucial aspects when using double (or float, which is worse in just about every fashion; don't ever use float):
If you need absolute precision, don't use them at all.
When printing them, always round them down. System.out.println does this out of the box, but you should really use .printf("%.5f") or similar: Pick the # of digits you need.
Be aware that the errors will compound, and it gets worse as you are further away from 1.0.
Do not ever compare with ==; instead always use a delta-compare: the notion of "let's consider the 2 numbers equal if they are within 0.0000000001 of each other".
There is no universal magic delta value; it depends on your precision needs, how far away you are from 1.0, etc. Therefore, just asking the computer "hey, figure this stuff out, I just wanna know if these 2 doubles are equal" is impossible. The only definition available that doesn't require your input as to 'how close' they can be is the notion of sheer perfection: they are equal only if they are precisely identical. This definition would fail you in that trivial example above. It makes not one iota of difference if you use Double.equals instead of double == double, or any other utility class for that matter.
So, no, Double.equals is not suitable. You will have to compare Math.abs(d1 - d2) < epsilon, where epsilon is your choice. Mostly, if equality matters at all you're already doing it wrong and shouldn't be using double in the first place.
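A minimal sketch of such a delta compare (EPSILON here is a hypothetical tolerance; you must pick one suited to your own precision needs):
private static final double EPSILON = 1e-9; // hypothetical tolerance, not universal

static boolean nearlyEqual(double d1, double d2) {
    return Math.abs(d1 - d2) < EPSILON;
}
Used on the earlier example, nearlyEqual(0.1 + 0.2, 0.3) returns true, while 0.1 + 0.2 == 0.3 is false.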
NB: When representing money, you don't want unpredictable rounding, so never use doubles for it. Instead, figure out what the atomic banking unit is (dollarcents, eurocents, yen, satoshis for bitcoin, etc.), and store that as a long. You store $4.52 as long x = 452;, not as double x = 4.52;.
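For instance, a sketch of exact money arithmetic in cents:
long priceInCents = 452;              // $4.52 stored exactly as a long
long totalInCents = priceInCents * 3; // plain integer arithmetic: exactly 1356
System.out.printf("$%d.%02d%n", totalInCents / 100, totalInCents % 100); // $13.56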

How do I avoid rounding errors with doubles?

I'm using Math.sin to calculate trigonometry in Java with 3-decimal precision. However, when I calculate values that should result in an integer, I get 1.0000000002 instead of 1.
I have tried using
System.out.printf(Locale.ROOT, "%.3f ", v);
which does solve the problem of 1.000000002 turning into 1.000.
However, when I calculate values that should result in 0, I instead get -1.8369701987210297E-16, and
System.out.printf(Locale.ROOT, "%.3f ", v);
prints out -0.000 when I need it to be 0.000.
Any ideas on how to get rid of that negative sign?
Let's start with this:
How do I avoid rounding errors with doubles?
Basically, you can't. They are inherent to numerical calculations using floating point types. Trust me ... or take the time to read this article:
What Every Computer Scientist Should Know About Floating-Point Arithmetic by David Goldberg.
In this case, the other thing that comes into play is that trigonometric functions are implemented by computing a finite number of steps of an infinite series with finite precision (i.e. floating point) arithmetic. The javadoc for the Math class leaves some "wiggle room" on the accuracy of the math functions. It is worth reading the javadocs to understand the expected error bounds.
Finally, if you are computing (for example) sin(π/2), you need to consider how accurate your representation of π/2 is.
So what you should really be asking is how to deal with the rounding error that unavoidably happens.
In this case, you are asking how to make it look to the user of your program as if there isn't any rounding error. There are two approaches to this:
Leave it alone! The rounding errors occur, so we should not lie to the users about it. It is better to educate them. (Honestly, this is high school maths, and even "the pointy haired boss" should understand that arithmetic is inexact.)
Routines like printf do a pretty good job. And the -0.000 displayed in this case is actually a truthful answer. It means that the computed answer rounds to zero to 3 decimal places but is actually negative. This is not actually hard for someone with high school maths to understand. If you explain it.
Lie. Fake it. Put in some special case code to explicitly convert numbers between -0.0005 and zero to exactly zero. The code suggested in a comment
System.out.printf(Locale.ROOT, "%.3f ", Math.round(v * 1000d) / 1000d);
is another way to do the job. But the risk is that the lie could be dangerous in some circumstances. On the other hand, you could say that the real mistake is displaying the numbers to 3 decimal places.
Depending on the accuracy you need, you can multiply by X and then divide by X, where X = 10^y and y is the required floating-point precision.
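A sketch of that scale-round-unscale trick for y = 3 (it assumes v * 10^y stays within the range where doubles hold integers exactly; Locale is java.util.Locale, as in the snippets above):
double v = -1.8369701987210297E-16;
double snapped = Math.round(v * 1000d) / 1000d;    // exactly 0.0
System.out.printf(Locale.ROOT, "%.3f%n", snapped); // 0.000, no minus sign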

Why does Java calculate dividing a decimal by an int like this? [duplicate]

How do you explain floating point inaccuracy to fresh programmers and laymen who still think computers are infinitely wise and accurate?
Do you have a favourite example or anecdote which seems to get the idea across much better than a precise, but dry, explanation?
How is this taught in Computer Science classes?
There are basically two major pitfalls people stumble into with floating-point numbers.
The problem of scale. Each FP number has an exponent which determines the overall “scale” of the number, so you can represent either really small values or really large ones, though the number of digits you can devote to it is limited. Adding two numbers of different scale will sometimes result in the smaller one being “eaten”, since there is no way to fit it into the larger scale.
PS> $a = 1; $b = 0.0000000000000000000000001
PS> Write-Host a=$a b=$b
a=1 b=1E-25
PS> $a + $b
1
As an analogy for this case you could picture a large swimming pool and a teaspoon of water. Both are of very different sizes, but individually you can easily grasp how much they roughly are. Pouring the teaspoon into the swimming pool, however, will leave you still with roughly a swimming pool full of water.
(If the people learning this have trouble with exponential notation, one can also use the values 1 and 100000000000000000000 or so.)
Then there is the problem of binary vs. decimal representation. A number like 0.1 can't be represented exactly with a limited amount of binary digits. Some languages mask this, though:
PS> "{0:N50}" -f 0.1
0.10000000000000000000000000000000000000000000000000
But you can “amplify” the representation error by repeatedly adding the numbers together:
PS> $sum = 0; for ($i = 0; $i -lt 100; $i++) { $sum += 0.1 }; $sum
9,99999999999998
I can't think of a nice analogy to properly explain this, though. It's basically the same problem as why you can only represent 1/3 approximately in decimal: to get the exact value, you would need to repeat the 3 indefinitely at the end of the decimal fraction.
Similarly, binary fractions are good for representing halves, quarters, eighths, etc. but things like a tenth will yield an infinitely repeating stream of binary digits.
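For Java readers, the same amplification in a short sketch (matching the PowerShell output above):
double sum = 0.0;
for (int i = 0; i < 100; i++) {
    sum += 0.1; // the representation error compounds with each addition
}
System.out.println(sum); // 9.99999999999998, not 10.0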
Then there is another problem, though most people don't stumble into it unless they're doing huge amounts of numerical stuff. But then, those people already know about it. Since many floating-point numbers are merely approximations of the exact value, this means that for a given approximation f of a real number r there can be infinitely many more real numbers r_1, r_2, ... which map to exactly the same approximation. Those numbers lie in a certain interval. Let's say r_min is the minimum possible value of r that results in f and r_max the maximum possible value of r for which this holds; then you get an interval [r_min, r_max] where any number in that interval can be your actual number r.
Now, if you perform calculations on that number—adding, subtracting, multiplying, etc.—you lose precision. Every number is just an approximation, therefore you're actually performing calculations with intervals. The result is an interval too and the approximation error only ever gets larger, thereby widening the interval. You may get back a single number from that calculation. But that's merely one number from the interval of possible results, taking into account precision of your original operands and the precision loss due to the calculation.
That sort of thing is called Interval arithmetic and at least for me it was part of our math course at the university.
Show them that the base-10 system suffers from exactly the same problem.
Try to represent 1/3 as a decimal representation in base 10. You won't be able to do it exactly.
So if you write "0.3333", you will have a reasonably exact representation for many use cases.
But if you move that back to a fraction, you will get "3333/10000", which is not the same as "1/3".
Other fractions, such as 1/2 can easily be represented by a finite decimal representation in base-10: "0.5"
Now base-2 and base-10 suffer from essentially the same problem: both have some numbers that they can't represent exactly.
While base-10 has no problem representing 1/10 as "0.1", in base-2 you'd need an infinite representation starting with "0.000110011...".
How's this for an explanation for the layman? One way computers represent numbers is by counting discrete units. These are digital computers. For whole numbers, those without a fractional part, modern digital computers count powers of two: 1, 2, 4, 8, ... (place value, binary digits, and so on). For fractions, digital computers count inverse powers of two: 1/2, 1/4, 1/8, ... The problem is that many numbers can't be represented by a sum of a finite number of those inverse powers. Using more place values (more bits) increases the precision of the representation of those 'problem' numbers, but never gets them exact, because there are only a limited number of bits; some numbers would need an infinite number of bits.
Snooze...
OK, you want to measure the volume of water in a container, and you only have 3 measuring cups: full cup, half cup, and quarter cup. After counting the last full cup, let's say there is one third of a cup remaining. Yet you can't measure that because it doesn't exactly fill any combination of available cups. It doesn't fill the half cup, and the overflow from the quarter cup is too small to fill anything. So you have an error - the difference between 1/3 and 1/4. This error is compounded when you combine it with errors from other measurements.
In python:
>>> 1.0 / 10
0.10000000000000001
Explain how some fractions cannot be represented precisely in binary. Just like some fractions (like 1/3) cannot be represented precisely in base 10.
Another example, in C
printf (" %.20f \n", 3.6);
incredibly gives
3.60000000000000008882
Here is my simple understanding.
Problem:
The value 0.45 cannot be accurately represented by a float and is rounded up to 0.450000018. Why is that?
Answer:
An int value of 45 is represented by the binary value 101101.
In order to make the value 0.45, it would be exact if you could take 45 x 10^-2 (= 45 / 10^2).
But that's impossible, because you must use base 2 instead of 10.
So the closest to 10^2 = 100 would be 128 = 2^7. The total number of bits you need is 9: 6 for the value 45 (101101) plus 3 bits for the value 7 (111).
Then the value is 45 x 2^-7 = 0.3515625. Now you have a serious inaccuracy problem: 0.3515625 is nowhere near 0.45.
How do we improve this inaccuracy? Well, we could change the values 45 and 7 to something else.
How about 460 x 2^-10 = 0.44921875? You are now using 9 bits for 460 and 4 bits for 10. It's a bit closer, but still not that close. However, if your initial desired value had been 0.44921875, then you would get an exact match with no approximation.
So the formula for your value would be X = A x 2^B, where A and B are integer values, positive or negative.
Obviously, the larger A and B can be, the higher your accuracy becomes; however, as you know, the number of bits available to represent A and B is limited. For float you have a total of 32 bits. Double has 64 and Decimal has 128.
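As a sketch (assuming IEEE 754 single precision, normal numbers only, sign ignored), you can pull A and B out of an actual float yourself:
float f = 0.45f;
int bits = Float.floatToIntBits(f);
int A = (bits & 0x7FFFFF) | 0x800000;      // 23 stored bits plus the implicit leading 1
int B = ((bits >>> 23) & 0xFF) - 127 - 23; // unbias the exponent, then shift so A is an integer
System.out.println(A + " * 2^" + B);       // prints 15099494 * 2^-25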
A cute piece of numerical weirdness may be observed if one converts 9999999.4999999999 to a float and back to a double. The result is reported as 10000000, even though that value is obviously closer to 9999999, and even though 9999999.499999999 correctly rounds to 9999999.

Why do floats seem to add incorrectly in Java? [duplicate]

Possible Duplicates:
Is JavaScript's Math broken?
Java floating point arithmetic
I have the following code:
for (double j = .01; j <= .17; j += .01) {
    System.out.println(j);
}
the output is:
0.01
0.02
0.03
0.04
0.05
0.060000000000000005
0.07
0.08
0.09
0.09999999999999999
0.10999999999999999
0.11999999999999998
0.12999999999999998
0.13999999999999999
0.15
0.16
0.17
Can someone explain why this is happening? How do you fix this? Besides writing a rounding function?
Floats are an approximation of the actual number in Java, due to the way they're stored. If you need exact values, use a BigDecimal instead.
They are working correctly. Some decimal values are not representable exactly in binary floating point and get rounded to the closest value. See my answer to this question for more detail. The question was asked about Perl, but the answer applies equally to Java since it's a limitation of ALL floating point representations that do not have infinite precision (i.e. all of them).
As suggested by Kaleb Brasee, go and use BigDecimal when accuracy is a must. Here is a link to a nice explanation of the tiny details related to using floating point operations in Java: http://firstclassthoughts.co.uk/java/traps/java_double_traps.html
There is also a link there to the issues involved with using BigDecimal. Highly recommended to read them both; they really helped me.
Enjoy, Boro.
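For illustration, a minimal sketch of the same loop using BigDecimal (note the String constructor: new BigDecimal(0.01) would inherit the double's error; requires import java.math.BigDecimal):
BigDecimal step = new BigDecimal("0.01");
BigDecimal end  = new BigDecimal("0.17");
for (BigDecimal j = step; j.compareTo(end) <= 0; j = j.add(step)) {
    System.out.println(j); // 0.01, 0.02, ..., 0.17 with no drift
}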
We humans are used to thinking in 'base 10' when we deal with floating point numbers 'by hand' (that is, literally when writing them on paper or when entering them into a computer). Because of this, it is possible for us to write down an exact representation of, say, 17%. We just write 0.17 (or 1.7E-1, etc.). Trying to represent something as trivial as a third cannot be done exactly with that system, because we would have to write 0.3333333... with an infinite number of 3s, which is impossible.
Computers dealing with floating point not only have a limited number of bits to represent the mantissa (or significand) of the number; they are also restricted to expressing the mantissa in base two. That means that most percentages (which we humans, with our base 10 floating point convention, can always write exactly, like for example '0.17') are impossible for the computer to store exactly. Fractions like 0%, 25%, 50%, 75% and 100% can be expressed exactly as a floating point number, because they consist of halves (2^-1) or quarters (2^-2), which fit nicely into a binary representation of a number. Percentage values like 17%, or even trivial ones (for us humans!) like 10% or 1%, are impossible for computers to store exactly, simply because those numbers are to the binary floating point system what 'one third' is to the human (base 10) floating point system.
But if you carefully pick your floating point values so that they are always a whole multiple of 1/2^n, where n might be 10 (meaning an integer number of 1/1024ths), then they can always be stored exactly, without error, as a floating point number. So if you try to store 17/1024 in a computer, it will go smoothly. (You can actually store it without error even using the 'human base 10' decimal system, but you would go nuts from the number of digits you'd have to deal with.)
This, I believe, is one reason why some games express angles in a unit where a whole 360-degree turn is 256 angle units: any such angle can be expressed without loss as a floating point number between 0 and 1 (where 1 means a full revolution).
This is normal for the double representation on a computer: some bits of precision are lost, so you get results like these. A better solution is to do this:
for (int j = 1; j <= 17; j++) {
    System.out.println(j / 100.0);
}
This is because floating point values are inherently not the same as reals in the mathematical sense.
In a computer, there is only a fixed number of bits that can be used to represent a value. This means there is a finite number of values it can hold, but there are infinitely many real numbers, so not all of them can be represented exactly. Usually, though, the stored value is something close. You can find a more detailed explanation here.
That is because of the limitations of IEEE 754, the binary format designed to get the most out of 32 bits.
As others have pointed out, only numbers that are sums of powers of two are exactly representable in (binary) floating point format.
If you need to store arbitrary numbers with arbitrary precision, then use BigDecimal.
If the problem is just a display issue, then you can get round this in how you display the number. For example:
String.format("%.2f", n)
will format the number to 2 decimal places.

Rounding Errors?

In my course, I am told:
Continuous values are represented approximately in memory, and therefore computing with floats involves rounding errors. These are tiny discrepancies in bit patterns; thus the test e==f is unsafe if e and f are floats.
Referring to Java.
Is this true? I've used comparison statements with doubles and floats and have never had rounding issues. Never have I read in a textbook something similar. Surely the virtual machine accounts for this?
It is true.
It is an inherent limitation of how floating point values are represented in memory in a finite number of bits.
This program, for instance, prints "false":
public class Main {
    public static void main(String[] args) {
        double a = 0.7;
        double b = 0.9;
        double x = a + 0.1;
        double y = b - 0.1;
        System.out.println(x == y);
    }
}
Instead of exact comparison with '==' you usually decide on some level of precision and ask if the numbers are "close enough":
System.out.println(Math.abs(x - y) < 0.0001);
This applies to Java just as much as to any other language using floating point. It's inherent in the design of the representation of floating point values in hardware.
More info on floating point values:
What Every Computer Scientist Should Know About Floating-Point Arithmetic
Yes, representing 0.1 exactly in base-2 is the same as trying to represent 1/3 exactly in base 10.
This is always true. There are some numbers which cannot be represented accurately using floating point representation. Consider, for example, pi. How would you represent a number which has infinitely many digits within finite storage? Therefore, when comparing numbers you should check whether the difference between them is smaller than some epsilon. Also, there are several classes which can help you achieve greater accuracy, such as BigDecimal and BigInteger.
It is right. Note that Java has nothing to do with it; the problem is inherent in floating point math in ANY language.
You can often get away with it with classroom-level problems but it's not going to work in the real world. Sometimes it won't work in the classroom.
An incident from long ago back in school. The teacher of an intro class assigned a final exam problem that was proving a real doozy for many of the better students--it wasn't working and they didn't know why. (I saw this as a lab assistant, I wasn't in the class.) Finally some started asking me for help and some probing revealed the problem: They had never been taught about the inherent inaccuracy of floating point math.
Now, there were two basic approaches to this problem: a brute force one (which by chance worked in this case, as it made the same errors every time) and a more elegant one (which would make different errors and not work). Anyone who tried the elegant approach would hit a brick wall without having any idea why. I helped a bunch of them, and stuck in a comment explaining why, saying to contact me with any questions.
Of course next semester I hear from him about this and I basically floored the entire department with a simple little program:
10 X = 3000000
20 X = X + 1
30 If X < X + 1 goto 20
40 Print "X = X + 1"
Despite what every teacher in the department thought, this WILL terminate. The 3 million seed is simply to make it terminate faster. (If you don't know basic: There are no gimmicks here, just exhausting the precision of floating point numbers.)
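Here is a Java sketch of the same effect: float runs out of whole-number precision at 2^24 = 16777216, so the loop really does terminate:
float x = 3_000_000f;
while (x < x + 1f) {
    x = x + 1f; // once x reaches 2^24, x + 1f rounds back to x
}
System.out.println("x == x + 1 at x = " + x); // 1.6777216E7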
Yes, as other answers have said. I want to add that I recommend you this article about floating point accuracy: Visualizing floats
Of course it is true. Think about it. Any number must be represented in binary.
Picture: "1000" as 0.5or 1/2, that is, 2 ** -1. Then "0100" is 0.25 or 1/4. You can see where I'm going.
How many numbers can you represent in this manner? 2**4. Adding more bits duplicates the available space, but it is never infinite. 1/3 or 1/10, for the matter 1/n, any number not multiple of 2 cannot be really represented.
1/3 could be "0101" (0.3125) or "0110" (0.375). Either value if you multiply it by 3, will not be 1. Of course you could add special rules. Say you "when you add 3 times '0101', make it 1"... this approach won't work in the long run. You can catch some but then how about 1/6 times 2?
It's not a problem of binary representation; any finite representation has numbers that it cannot represent, since the real numbers are infinite after all.
Most CPUs (and computer languages) use IEEE 754 floating point arithmetic. In this notation, there are decimal numbers that have no exact representation, e.g. 0.1. So if you divide 1 by 10 you won't get an exact result. When performing several calculations in a row, the errors sum up. Try the following example in python:
>>> 0.1
0.10000000000000001
>>> 0.1 / 7 * 10 * 7 == 1
False
That's not really what you'd expect mathematically.
By the way:
A common misunderstanding concerning floating point numbers is that the results are not precise and cannot be compared safely. This is only true if you really use fractions of numbers. If all your math is in the integer domain, doubles and floats behave exactly like ints and can be compared safely. They can safely be used as loop counters, for example.
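A small sketch of that integer-domain safety (whole numbers are exact in a double up to 2^53):
double counter = 0.0;
while (counter != 5.0) {
    counter += 1.0; // small whole numbers and this addition are exact
}
System.out.println(counter); // 5.0; the != comparison was safe here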
Yes, Java also uses floating point arithmetic.
