I am running into a weird situation. I am summing numbers parsed with Double.parseDouble, and in some cases I get extra digits: for instance, instead of 109.1 I get something like 109.10000001.
The values I am adding are strings taken from JSON.
Here is the line of code I am executing.
points = points + Double.parseDouble(x.points);
Java uses IEEE floating point numbers for its double (and float) datatypes. A double has a lot of precision, but not infinite precision. Some numbers are only approximated.
In addition (no pun intended), adding numbers like this together compounds the floating point errors. Eventually the error is large enough to notice, such as with your issue here.
If you can accept a performance hit, then use BigDecimal, which is arbitrary precision.
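A minimal sketch of the BigDecimal approach, assuming x.points is the numeric string from the JSON in the question:

import java.math.BigDecimal;

BigDecimal points = BigDecimal.ZERO;
// The String constructor stores the decimal digits exactly as written
points = points.add(new BigDecimal(x.points)); // inside your loop, as before
System.out.println(points); // e.g. 109.1, never 109.10000001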
It is probably unrelated to the Double.parseDouble, but rather a floating-point precision issue. Floats and doubles are encoded in binary as a mantissa and an exponent. The part after the decimal point is a particular problem because what can be represented easily in base 10 cannot necessarily be represented well in base 2. For example, the decimal number 0.2 becomes infinitely repeating in binary, rather like 1/3 in decimal (0.333...). I suggest rounding to a fixed number of digits for display, or perhaps using java.math.BigDecimal instead.
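For example, if one decimal place is all you ever display, a format string is enough (a sketch; the precision is up to you):

double points = 109.10000001; // the drifted sum from the question
// Round only for display; the stored double is unchanged
System.out.println(String.format("%.1f", points)); // 109.1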
This actually has a very complicated answer, but essentially floating-point numbers are often only approximations of the number you give them.
A good example from Wikipedia is 0.01, which as a single-precision float is actually stored as 0.00999999977648258209228515625.
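You can inspect the exact stored value yourself: the BigDecimal(double) constructor converts the binary value digit for digit (a quick sketch):

import java.math.BigDecimal;

// 0.01f is widened to double without changing its value, so this prints
// the exact number the float actually stores
System.out.println(new BigDecimal(0.01f)); // 0.00999999977648258209228515625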
A related question:
double r = 11.631;
double theta = 21.4;
In the debugger, these are shown as 11.631000000000000 and 21.399999618530273.
How can I avoid this?
These accuracy problems are due to the internal representation of floating-point numbers, and there's not much you can do to avoid them.
By the way, printing these values at run-time often still leads to the correct results, at least using modern C++ compilers. For most operations, this isn't much of an issue.
I liked Joel's explanation, which deals with a similar binary floating point precision issue in Excel 2007:
See how there's a lot of 0110 0110 0110 there at the end? That's because 0.1 has no exact representation in binary... it's a repeating binary number. It's sort of like how 1/3 has no representation in decimal. 1/3 is 0.33333333 and you have to keep writing 3's forever. If you lose patience, you get something inexact.
So you can imagine how, in decimal, if you tried to do 3*1/3, and you didn't have time to write 3's forever, the result you would get would be 0.99999999, not 1, and people would get angry with you for being wrong.
If you have a value like:
double theta = 21.4;
And you want to do:
if (theta == 21.4)
{
}
You have to be a bit clever: check whether the value of theta is really close to 21.4, rather than exactly equal to it.
if (fabs(theta - 21.4) <= 1e-6)
{
}
This is partly platform-specific - and we don't know what platform you're using.
It's also partly a case of knowing what you actually want to see. The debugger is showing you - to some extent, anyway - the precise value stored in your variable. In my article on binary floating point numbers in .NET, there's a C# class which lets you see the absolutely exact number stored in a double. The online version isn't working at the moment - I'll try to put one up on another site.
Given that the debugger sees the "actual" value, it's got to make a judgement call about what to display - it could show you the value rounded to a few decimal places, or a more precise value. Some debuggers do a better job than others at reading developers' minds, but it's a fundamental problem with binary floating point numbers.
Use the fixed-point decimal type if you want stability at the limits of precision. There are overheads, and you must explicitly cast if you wish to convert to floating point. If you do convert to floating point you will reintroduce the instabilities that seem to bother you.
Alternately you can get over it and learn to work with the limited precision of floating point arithmetic. For example you can use rounding to make values converge, or you can use epsilon comparisons to describe a tolerance. "Epsilon" is a constant you set up that defines a tolerance. For example, you may choose to regard two values as being equal if they are within 0.0001 of each other.
It occurs to me that you could use operator overloading to make epsilon comparisons transparent. That would be very cool.
For mantissa-exponent representations, EPSILON must be computed to stay within the representable precision: for a number N, epsilon = N / 10E+14.
System.Double.Epsilon is the smallest representable positive value for the Double type; it is far too small for this purpose. Read Microsoft's advice on equality testing.
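A sketch of that heuristic in Java (nearlyEqual is an illustrative name, and the 10E+14 divisor is the rule of thumb from above, not a universal constant):

public class NearlyEqual {
    // Scale the tolerance to the magnitude of the operands
    static boolean nearlyEqual(double a, double b) {
        double epsilon = Math.max(Math.abs(a), Math.abs(b)) / 10E+14;
        return Math.abs(a - b) <= epsilon;
    }

    public static void main(String[] args) {
        System.out.println(0.1 + 0.2 == 0.3);            // false
        System.out.println(nearlyEqual(0.1 + 0.2, 0.3)); // true
    }
}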
I've come across this before (on my blog) - I think the surprise tends to be that the 'irrational' numbers are different.
By 'irrational' here I'm just referring to the fact that they can't be accurately represented in this format. Real irrational numbers (like π - pi) can't be accurately represented at all.
Most people are familiar with 1/3 not working in decimal: 0.3333333333333...
The odd thing is that 1.1 doesn't work in floats. People expect decimal values to work in floating point numbers because of how they think of them:
1.1 is 11 x 10^-1
when in fact they are stored in base 2:
1.1 is 154811237190861 x 2^-47
You can't avoid it, you just have to get used to the fact that some floats are 'irrational', in the same way that 1/3 is.
One way you can avoid this is to use a library that uses an alternative method of representing decimal numbers, such as binary-coded decimal (BCD).
If you are using Java and you need accuracy, use the BigDecimal class for floating point calculations. It is slower but safer.
Seems to me that 21.399999618530273 is the single precision (float) representation of 21.4. Looks like the debugger is casting down from double to float somewhere.
You can't avoid this while you're using floating-point numbers with a fixed number of bytes: there is simply no possible isomorphism between the real numbers and such a finite notation.
But most of the time you can simply ignore it. 21.4 == 21.4 would still be true, because it is the same number with the same error. But 21.4f == 21.4 may not be true, because the errors for float and double are different.
If you need fixed precision, perhaps you should try fixed-point numbers, or even integers. For example, I often use int(1000*x) when passing values to debug output.
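A quick Java illustration of the float/double mismatch mentioned above:

float f = 21.4f;
double d = 21.4;
// The float is widened for the comparison, but it widens to the float's
// own value (21.399999618530273), not to the double 21.4
System.out.println(f == d);         // false
System.out.println(21.4f == 21.4f); // true: same type, same error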
Dangers of computer arithmetic
If it bothers you, you can customize the way some values are displayed during debugging. Use it with care :-)
Enhancing Debugging with the Debugger Display Attributes
Refer to General Decimal Arithmetic
Also take note when comparing floats, see this answer for more information.
According to the Java Language Specification:
"If at least one of the operands to a numerical operator is of type double, then the operation is carried out using 64-bit floating-point arithmetic, and the result of the numerical operator is a value of type double. If the other operand is not a double, it is first widened (§5.1.5) to type double by numeric promotion (§5.6)."
Here is the source.
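A small example of that promotion rule in action:

int count = 7;     // not a double
double rate = 0.1;
// count is widened to 7.0, so the multiply runs in 64-bit floating point,
// and the binary error in 0.1 shows up in the result
System.out.println(count * rate); // 0.7000000000000001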
alert(Math.cos(Math.PI/2));
Why is the result not exactly zero? Is this an inaccuracy, or some implementation error?
Math.PI/2 is an approximation of the real value of pi/2. Taking the exact cosine of this approximated value won't yield zero. The value you get is an approximation of this exact value up to the precision of the underlying floating point datatype.
Using some arbitrary precision library, you can evaluate the difference between pi/2 in double precision and the exact value to
0.0000000000000000612323399573676588613032966137500529104874722961...
Since the slope of the cosine close to its zeros is 1, you would expect the cosine of the approximation of pi/2 to be approximately equal to this difference, and indeed it is.
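You can check this in Java, which uses the same IEEE 754 double type:

// Math.PI is only the double closest to pi, so Math.PI / 2 misses the true
// pi/2 by about 6.12e-17, and the cosine reports roughly that difference
System.out.println(Math.cos(Math.PI / 2)); // 6.123233995736766E-17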
Floating-point numbers are normally approximations. Since they are represented in memory as a binary significand scaled by a power-of-two exponent, only numbers that are finite sums of powers of 2 can be represented exactly.
Fractions such as 1/3 can't be written as a finite binary fraction and so have no exact floating-point representation. Even some numbers that can be written exactly in decimal, such as 0.1, can't be represented exactly in binary, and so will not be represented correctly in floating point.
PI is an irrational number and can't be represented as floating-point, so there will be rounding errors. Do not compare floating-point numbers for equality without including a tolerance parameter. This link has a good explanation of the basics.
Comparing calculated floating point numbers for equality is almost always a bad idea, since (as others have stated) they are approximations, and errors appear.
Instead of checking for a==b, check for equality to within a threshold that makes sense for your application, as with Math.abs(a-b) < .00001. This is good practice in any programming language that represents numbers as floating point values.
If you're storing integers in floating point variables and just adding, subtracting, and multiplying, they'll stay integers (at least until they go out of bounds). But dividing, using trig functions, etc., will introduce errors that must be allowed for.
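For instance (Java, but any IEEE 754 implementation behaves the same way):

double five = 2.0 + 3.0;
System.out.println(five == 5.0); // true: integer-valued doubles stay exact
double two = Math.sqrt(2) * Math.sqrt(2);
System.out.println(two);         // 2.0000000000000004: the sqrt introduced error
System.out.println(two == 2.0);  // false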
The following value gives me the wrong precision, and only with specific numbers. It might be a floating-point representation problem, but I wanted to know the specific reason.
String m = "154572.49"; //"154,572.49";
Float f = Float.parseFloat(m);
System.out.println(f);
The output it is printing is 154572.48 instead of 154572.49.
If you want decimal numbers to come out as exactly as you entered them in Java, use BigDecimal instead of float.
Floating-point numbers are inherently inaccurate for decimal values: many numbers that terminate in decimal (e.g. 0.1) recur in binary, and floating-point numbers are stored as a binary representation.
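A sketch using the value from the question:

import java.math.BigDecimal;

// The String constructor keeps the decimal digits exactly as typed
System.out.println(new BigDecimal("154572.49"));   // 154572.49
System.out.println(Float.parseFloat("154572.49")); // 154572.48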
You must read What Every Computer Scientist Should Know About Floating-Point Arithmetic:
Squeezing infinitely many real numbers into a finite number of bits requires an approximate representation. Although there are infinitely many integers, in most programs the result of integer computations can be stored in 32 bits. In contrast, given any fixed number of bits, most calculations with real numbers will produce quantities that cannot be exactly represented using that many bits. Therefore the result of a floating-point calculation must often be rounded in order to fit back into its finite representation. This rounding error is the characteristic feature of floating-point computation.
Float offers a base-2 representation of a decimal number. When you parse, you get the binary value closest to the decimal number, which is almost never exact. You may get something like .4856 back from the binary representation (well, I didn't do the calculation; it's just a guess to give you the idea).
System.out.println((26.55f/3f));
or
System.out.println((float)( (float)26.55 / (float)3.0 ));
etc.
return the result 8.849999, not 8.85 as it should be.
Can anyone explain this or should we all avoid using floats?
What Every Programmer Should Know About Floating-Point Arithmetic:
Q: Why don't my numbers, like 0.1 + 0.2, add up to a nice round 0.3, and instead I get a weird result like 0.30000000000000004?

A: Because internally, computers use a format (binary floating-point) that cannot accurately represent a number like 0.1, 0.2 or 0.3 at all.
There are in-depth explanations at the linked site.
Take a look at Wikipedia's article on Floating Point, specifically the Accuracy Problems section.
The fact that floating-point numbers cannot precisely represent all real numbers, and that floating-point operations cannot precisely represent true arithmetic operations, leads to many surprising situations. This is related to the finite precision with which computers generally represent numbers.
The article features a couple of examples that should provide more clarity.
The explanation is easy: floating point is a binary format, and so it can represent exactly only those values that are an integer multiple of 1/2^N for some natural number N. 26.55 does not have this property, therefore it cannot be represented exactly.
If you need exact representation (e.g. your code is about accounting and money, where every fraction of a cent matters), then you must indeed avoid floats in favor of other types that do guarantee exact representation of the values you need (depending on your application, for example, just doing all accounting in terms of integer numbers of cents might suffice). Floats (when used appropriately and advisedly!-) are perfectly fine for engineering and scientific computations, where the input values are never "infinitely precise" in any case and therefore the computationally cumbersome burden of exact representation is absolutely not worth carrying.
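A minimal sketch of the integer-cents idea (the names are illustrative):

long priceCents = 2655; // $26.55 held as an exact integer count of cents
long shareCents = Math.round(priceCents / 3.0); // round once, explicitly
System.out.printf("%d.%02d%n", shareCents / 100, shareCents % 100); // 8.85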
Well, we should all avoid using floats wherever realistic, but that's a story for another day.
The issue is that floating-point numbers cannot exactly represent most numbers we think of as trivial to write down. 8.85 probably cannot be represented exactly by a float, and possibly not by a double either. This is because they aren't actually decimal numbers, but a binary representation.
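With BigDecimal, the division from the question comes out exact (a sketch):

import java.math.BigDecimal;

// 26.55 / 3 terminates in decimal, so divide() needs no rounding mode here
System.out.println(new BigDecimal("26.55").divide(new BigDecimal("3"))); // 8.85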
I happened upon these values in my ColdFusion code but the Google calculator seems to have the same "bug" where the difference is non-zero.
416582.2850 - 411476.8100 - 5105.475 = -2.36468622461E-011
http://www.google.com/search?hl=en&rlz=1C1GGLS_enUS340US340&q=416582.2850+-+411476.8100+-+5105.475&aq=f&oq=&aqi=
JavaCast'ing these to long/float/double doesn't help; it just produces other non-zero differences.
This is because decimal numbers that "look" round in base 10, are not exactly representable in base 2 (which is what computers use to represent floating point numbers). Please see the article What Every Computer Scientist Should Know About Floating-Point Arithmetic for a detailed explanation of this problem and workarounds.
Floating-point inaccuracies (there are an infinite number of real numbers and only a finite number of 32- or 64-bit numbers to represent them with).
If you can't handle tiny errors, you should use BigDecimal instead.
Use PrecisionEvaluate() in ColdFusion (it'll use BigDecimal in Java)
zero = PrecisionEvaluate(416582.2850 - 411476.8100 - 5105.475);
Unlike Evaluate(), no quotation marks are needed.
Since computers store numbers in binary, floating-point numbers are imprecise. The 1E-11 difference is just the result of rounding these decimal numbers to the nearest representable binary values.
This "bug" is not a bug. It's how floating point arithmetic works. See: http://docs.sun.com/source/806-3568/ncg_goldberg.html
If you want arbitrary precision in Java, use BigDecimal:
BigDecimal a = new BigDecimal("416582.2850");
BigDecimal b = new BigDecimal("411476.8100");
BigDecimal c = new BigDecimal("5105.475");
System.out.println(a.subtract(b).subtract(c)); // 0.0000
The problem is the inexact representation of floating-point types. Because these decimal values can't be represented exactly as floats, you get some precision loss, which results in operations having small errors. Typically with floats you want to compare whether the result is equal to another value to within some small epsilon (error tolerance).
These are floating point issues and using BigDecimal will fix it.
Changing the order of subtraction also yields zero in Google.
416582.2850 - 5105.475 - 411476.8100 = 0
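The same order dependence is reproducible with plain doubles in Java (assuming, as the results above suggest, that Google's calculator also computes in IEEE 754 doubles):

double a = 416582.2850, b = 411476.8100, c = 5105.475;
System.out.println(a - b - c); // tiny nonzero residue, about -2.36E-11
System.out.println(a - c - b); // 0.0: this ordering happens to cancel exactly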