I happened upon these values in my ColdFusion code but the Google calculator seems to have the same "bug" where the difference is non-zero.
416582.2850 - 411476.8100 - 5105.475 = -2.36468622461E-011
http://www.google.com/search?hl=en&rlz=1C1GGLS_enUS340US340&q=416582.2850+-+411476.8100+-+5105.475&aq=f&oq=&aqi=
JavaCast'ing these to long/float/double doesn't help; it results in other non-zero differences.
This is because decimal numbers that "look" round in base 10 are not exactly representable in base 2 (which is what computers use to represent floating-point numbers). Please see the article What Every Computer Scientist Should Know About Floating-Point Arithmetic for a detailed explanation of this problem and workarounds.
Floating-point inaccuracies (there are an infinite number of real numbers and only a finite number of 32- or 64-bit numbers to represent them with).
If you can't handle tiny errors, you should use BigDecimal instead.
Use PrecisionEvaluate() in ColdFusion (it'll use BigDecimal in Java)
zero = PrecisionEvaluate(416582.2850 - 411476.8100 - 5105.475);
Unlike Evaluate(), no quotes ("") are needed.
Since computers store numbers in binary, floating-point numbers are imprecise. 1E-11 is a tiny difference due to rounding these decimal numbers to the nearest representable binary number.
This "bug" is not a bug. It's how floating point arithmetic works. See: http://docs.sun.com/source/806-3568/ncg_goldberg.html
If you want arbitrary precision in Java, use BigDecimal:
BigDecimal a = new BigDecimal("416582.2850");
BigDecimal b = new BigDecimal("411476.8100");
BigDecimal c = new BigDecimal("5105.475");
System.out.println(a.subtract(b).subtract(c)); // prints 0.0000
The problem is the inexact representation of floating-point types. Because these values can't be exactly represented as floats, you get some precision loss that results in operations having small errors. Typically with floats you want to compare whether the result is equal to another value within some small epsilon (error factor).
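For instance, a minimal Java sketch of such an epsilon comparison (the tolerance value here is an assumption; choose it for the scale of your data):

public class EpsilonCompare {
    static final double EPSILON = 1e-9; // assumed tolerance; tune for your data

    static boolean nearlyEqual(double a, double b) {
        return Math.abs(a - b) <= EPSILON;
    }

    public static void main(String[] args) {
        double sum = 0.1 + 0.2;
        System.out.println(sum == 0.3);            // false: sum is actually 0.30000000000000004
        System.out.println(nearlyEqual(sum, 0.3)); // true
    }
}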
These are floating point issues and using BigDecimal will fix it.
Changing the order of subtraction also yields zero in Google.
416582.2850 - 5105.475 - 411476.8100 = 0
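A quick way to see the same order-dependence from Java (a sketch; the values noted in the comments are the ones reported in this thread, and your output may differ slightly):

public class OrderMatters {
    public static void main(String[] args) {
        // Same three values, two different orders of subtraction.
        System.out.println(416582.2850 - 411476.8100 - 5105.475); // a tiny nonzero value (Google reports -2.36468622461E-11)
        System.out.println(416582.2850 - 5105.475 - 411476.8100); // may come out as exactly 0.0, as reported above
    }
}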
Related
double r = 11.631;
double theta = 21.4;
In the debugger, these are shown as 11.631000000000000 and 21.399999618530273.
How can I avoid this?
These accuracy problems are due to the internal representation of floating point numbers and there's not much you can do to avoid it.
By the way, printing these values at run-time often still leads to the correct results, at least using modern C++ compilers. For most operations, this isn't much of an issue.
I liked Joel's explanation, which deals with a similar binary floating point precision issue in Excel 2007:
See how there's a lot of 0110 0110 0110 there at the end? That's because 0.1 has no exact representation in binary... it's a repeating binary number. It's sort of like how 1/3 has no representation in decimal. 1/3 is 0.33333333 and you have to keep writing 3's forever. If you lose patience, you get something inexact.
So you can imagine how, in decimal, if you tried to do 3*1/3, and you didn't have time to write 3's forever, the result you would get would be 0.99999999, not 1, and people would get angry with you for being wrong.
If you have a value like:
double theta = 21.4;
And you want to do:
if (theta == 21.4)
{
}
You have to be a bit clever: you need to check whether the value of theta is really close to 21.4, rather than exactly that value.
if (fabs(theta - 21.4) <= 1e-6)
{
}
This is partly platform-specific - and we don't know what platform you're using.
It's also partly a case of knowing what you actually want to see. The debugger is showing you - to some extent, anyway - the precise value stored in your variable. In my article on binary floating point numbers in .NET, there's a C# class which lets you see the absolutely exact number stored in a double. The online version isn't working at the moment - I'll try to put one up on another site.
Given that the debugger sees the "actual" value, it's got to make a judgement call about what to display - it could show you the value rounded to a few decimal places, or a more precise value. Some debuggers do a better job than others at reading developers' minds, but it's a fundamental problem with binary floating point numbers.
Use the fixed-point decimal type if you want stability at the limits of precision. There are overheads, and you must explicitly cast if you wish to convert to floating point. If you do convert to floating point you will reintroduce the instabilities that seem to bother you.
Alternately you can get over it and learn to work with the limited precision of floating point arithmetic. For example you can use rounding to make values converge, or you can use epsilon comparisons to describe a tolerance. "Epsilon" is a constant you set up that defines a tolerance. For example, you may choose to regard two values as being equal if they are within 0.0001 of each other.
It occurs to me that you could use operator overloading to make epsilon comparisons transparent. That would be very cool.
For mantissa-exponent representations, EPSILON must be computed to remain within the representable precision. For a number N, Epsilon = N / 10E+14.
System.Double.Epsilon is the smallest representable positive value for the Double type. It is too small for our purpose. Read Microsoft's advice on equality testing
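A hedged Java sketch of a relative (magnitude-scaled) tolerance in the spirit of the "Epsilon = N / 10E+14" suggestion above; the divisor 1e14 (roughly 14 matching significant digits) is an assumption you must tune for your own data:

public class RelativeTolerance {
    static boolean nearlyEqual(double a, double b) {
        // The allowed slack grows and shrinks with the magnitude of the operands.
        double tolerance = Math.max(Math.abs(a), Math.abs(b)) / 1e14;
        return Math.abs(a - b) <= tolerance;
    }

    public static void main(String[] args) {
        // The tolerance scales with magnitude:
        System.out.println(Math.abs(0.3) / 1e14);        // ~3e-15
        System.out.println(Math.abs(416582.285) / 1e14); // ~4e-9

        // The subtraction from the question above is off by roughly 1e-11,
        // which falls inside the ~5e-11 tolerance at that scale:
        System.out.println(nearlyEqual(416582.2850 - 411476.8100, 5105.475)); // true
    }
}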
I've come across this before (on my blog) - I think the surprise tends to be that the 'irrational' numbers are different.
By 'irrational' here I'm just referring to the fact that they can't be accurately represented in this format. Real irrational numbers (like π) can't be accurately represented at all.
Most people are familiar with 1/3 not working in decimal: 0.3333333333333...
The odd thing is that 1.1 doesn't work in floats. People expect decimal values to work in floating point numbers because of how they think of them:
1.1 is 11 x 10^-1
When actually they're in base-2
1.1 is 154811237190861 x 2^-47
You can't avoid it, you just have to get used to the fact that some floats are 'irrational', in the same way that 1/3 is.
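One way to see the exact binary value a Java double actually stores for 1.1 is a small sketch using BigDecimal's double constructor, which preserves the stored value without rounding:

import java.math.BigDecimal;

public class ExactlyStored {
    public static void main(String[] args) {
        // The double constructor does not round: it shows the exact value
        // of the nearest double to 1.1.
        System.out.println(new BigDecimal(1.1));
        // prints something close to 1.1000000000000000888... rather than 1.1
    }
}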
One way you can avoid this is to use a library that uses an alternative method of representing decimal numbers, such as BCD.
If you are using Java and you need accuracy, use the BigDecimal class for floating point calculations. It is slower but safer.
Seems to me that 21.399999618530273 is the single precision (float) representation of 21.4. Looks like the debugger is casting down from double to float somewhere.
You can't avoid this, as you're using floating-point numbers with a fixed number of bytes. There's simply no isomorphism possible between real numbers and their limited notation.
But most of the time you can simply ignore it. 21.4 == 21.4 would still be true because it is the same number with the same error. But 21.4f == 21.4 may not be true because the errors for float and double are different.
If you need fixed precision, perhaps you should try fixed-point numbers. Or even integers. I, for example, often use int(1000*x) for passing values to a debug pager.
Dangers of computer arithmetic
If it bothers you, you can customize the way some values are displayed during debugging. Use it with care :-)
Enhancing Debugging with the Debugger Display Attributes
Refer to General Decimal Arithmetic
Also take note when comparing floats, see this answer for more information.
According to the Java Language Specification:
"If at least one of the operands to a numerical operator is of type double, then the operation is carried out using 64-bit floating-point arithmetic, and the result of the numerical operator is a value of type double. If the other operand is not a double, it is first widened (§5.1.5) to type double by numeric promotion (§5.6)."
Here is the Source
What's wrong with this simple 'double' calculation?
While I was having fun with code from Java Puzzlers (I don't have the book), I came across this piece of code:
public static void main(String args[]) {
System.out.println(2.00 - 1.10);
}
Output is
0.8999999999999999
When I tried changing the code to 2.00d - 1.10d, I still got the same output: 0.8999999999999999.
For 2.00d - 1.10f, the output is 0.8999999761581421.
For 2.00f - 1.10d, the output is 0.8999999999999999.
For 2.00f - 1.10f, the output is 0.9.
Why didn't I get the output 0.9 in the first place? I could not make heads or tails of this. Can somebody explain it?
Because in Java double values are IEEE floating point numbers.
The workaround could be to use the BigDecimal class:
Immutable, arbitrary-precision signed decimal numbers. A BigDecimal consists of an arbitrary precision integer unscaled value and a 32-bit integer scale. If zero or positive, the scale is the number of digits to the right of the decimal point. If negative, the unscaled value of the number is multiplied by ten to the power of the negation of the scale. The value of the number represented by the BigDecimal is therefore (unscaledValue × 10^-scale).
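Applied to this puzzle, a minimal sketch using the String constructor (so the decimal values are captured exactly):

import java.math.BigDecimal;

public class ExactSubtraction {
    public static void main(String[] args) {
        // Built from Strings, the operands are exact decimal values.
        BigDecimal result = new BigDecimal("2.00").subtract(new BigDecimal("1.10"));
        System.out.println(result); // prints 0.90
    }
}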
On a side note, you may also want to check the Wikipedia article on IEEE 754 to see how floating-point numbers are stored on most systems.
The more operations you do on a floating point number, the more significant rounding errors can become.
In binary, 0.1 is 0.00011001100110011001100110011001...
As such, it cannot be represented exactly in binary. Depending on where you round it off (float or double), you get different answers.
So 0.1f = 0.000110011001100110011001100
And 0.1d = 0.0001100110011001100110011001100110011001100110011001
Note that the number repeats on a 1100 cycle. However, float and double precision cut it off at different points in the cycle. As a result, on one the rounding goes up and on the other it goes down, leading to the difference.
But most importantly:
Never assume floating point numbers are exact
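If you want to see where float and double cut the repeating pattern off, one option (a small sketch) is to print the exact stored values via BigDecimal's double constructor:

import java.math.BigDecimal;

public class CutOffPoints {
    public static void main(String[] args) {
        System.out.println(new BigDecimal(1.10));           // exact value of the double literal 1.10
        System.out.println(new BigDecimal((double) 1.10f)); // exact value of the float literal 1.10f, widened to double
        // The two long decimal expansions differ, which is why mixing
        // float and double operands above produced different results.
    }
}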
Other answers are correct; just to point to a valid reference, I quote the Oracle docs:
double: The double data type is a double-precision 64-bit IEEE 754 floating point. Its range of values is beyond the scope of this discussion, but is specified in the Floating-Point Types, Formats, and Values section of the Java Language Specification. For decimal values, this data type is generally the default choice. As mentioned above, this data type should never be used for precise values, such as currency.
I am running into a weird situation. I am adding up numbers using Double.parseDouble. In some situations I am getting extra trailing digits; for instance, instead of 109.1 I am getting ... 109.10000001.
I am trying to add string values parsed from JSON.
Here is the line of code I am executing.
points = points + Double.parseDouble(x.points);
Java uses IEEE floating point numbers for its double (and float) datatypes. A double has a lot of precision, but not infinite precision. Some numbers are only approximated.
In addition (no pun intended), adding numbers like this together compounds the floating point errors. Eventually the error is large enough to notice, such as with your issue here.
If you can accept a performance hit, then use BigDecimal, which is arbitrary precision.
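A minimal sketch of that approach, assuming the values arrive as strings (the sample values here are made up for illustration):

import java.math.BigDecimal;

public class PointTotal {
    public static void main(String[] args) {
        // Stand-ins for the x.points strings pulled out of the JSON.
        String[] pointStrings = { "109.1", "0.1", "0.1" };

        BigDecimal points = BigDecimal.ZERO;
        for (String s : pointStrings) {
            // Build the BigDecimal straight from the String; never go through double.
            points = points.add(new BigDecimal(s));
        }
        System.out.println(points); // prints 109.3, with no stray trailing digits
    }
}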
It is probably unrelated to the Double.parseDouble, but rather a floating-point precision issue. Floats and doubles are encoded in binary as a mantissa and an exponent. The part after the decimal point is a particular problem because what can be represented easily in base 10 cannot necessarily be represented well in base 2. For example, the decimal number 0.2 becomes infinitely repeating in binary, rather like 1/3 in decimal (0.333...). I suggest rounding to a fixed number of digits for display, or perhaps using java.math.BigDecimal instead.
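And a tiny sketch of the display-rounding alternative (two decimal places is just an assumed choice):

public class DisplayRounding {
    public static void main(String[] args) {
        double total = 109.10000000000001; // an accumulated value with floating-point noise
        System.out.println(String.format("%.2f", total)); // prints 109.10
    }
}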
This actually has a very complicated answer, but essentially floating-point numbers are often only approximations of the number you give them.
A good example from Wikipedia is 0.01, which in single-precision floating point is stored as 0.009999999776482582092285156250.
In my Java program there is code like this:
int f_part = (int) ((f_num - num) * 100);
f_num is a double and num is a long. I just want to take the fractional part out and assign it to f_part. But sometimes the f_part value is one less than it should be. That means if f_num = 123.55 and num = 123, f_part comes out as 54. And it happens only when f_num and num are greater than 100. I don't know why this is happening. Can someone please explain why this happens and how to correct it?
This is due to the limited precision in doubles.
The root of your problem is that the literal 123.55 actually represents the value 123.54999....
It may seem like it holds the value 123.55 if you print it:
System.out.println(123.55); // prints 123.55
but in fact, the printed value is an approximation. This can be revealed by creating a BigDecimal out of it, (which provides arbitrary precision) and print the BigDecimal:
System.out.println(new BigDecimal(123.55)); // prints 123.54999999999999715...
You can solve it by going via Math.round, but then you would have to know how many decimals the source double actually entails; or you could go through the string representation of the double, which in fact is produced by a fairly intricate algorithm.
If you're working with currencies, I strongly suggest you either
Let prices etc. be represented by BigDecimal, which allows you to store numbers such as 0.1 accurately, or
Let an int store the number of cents (as opposed to having a double store the number of dollars).
Both ways are perfectly acceptable and used in practice.
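A small sketch of both options (the values are illustrative only):

import java.math.BigDecimal;

public class MoneyExamples {
    public static void main(String[] args) {
        // Option 1: BigDecimal built from a String stores the decimal exactly.
        BigDecimal price = new BigDecimal("123.55");
        System.out.println(price.add(new BigDecimal("0.10"))); // 123.65, exactly

        // Option 2: store whole cents in an integer type and format only for display.
        long cents = 12355 + 10;
        System.out.println(cents / 100 + "." + String.format("%02d", cents % 100)); // 123.65
    }
}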
From The Floating-Point Guide:
internally, computers use a format (binary floating-point) that cannot accurately represent a number like 0.1, 0.2 or 0.3 at all.
When the code is compiled or interpreted, your “0.1” is already rounded to the nearest number in that format, which results in a small rounding error even before the calculation happens.
It looks like you're calculating money values. double is a completely inappropriate format for this. Use BigDecimal instead.
int f_part = (int) Math.round(((f_num - num) * 100));
This is one of the most often asked (and answered) questions. Floating-point arithmetic cannot always produce exact results, because it's impossible to fit the infinity of real numbers into 64 bits. Use BigDecimal if you need arbitrary precision.
Floating point arithmetic is not as simple as it may seem and there can be precision issues.
See Why can't decimal numbers be represented exactly in binary?, What Every Computer Scientist Should Know About Floating-Point Arithmetic for details.
If you need absolutely sure precision, you might want to use BigDecimal.
I have the following code; I want to assign a decimal value to a float without losing precision.
String s1= "525.880005";
Float f = new Float(s1);
System.out.println(f);
Output:
525.88
Expected Output:
525.880005
Float only has 7-8 significant digits of precision. The "5" in your example is the 9th digit.
Even if it had enough precision, I don't know whether 525.880005 is exactly representable as a binary floating point number. Most decimal values aren't :)
You should use BigDecimal if the exact decimal representation is important to you.
There's a real contradiction implied in the question:
assign to float <--> with precision
The 525.88 you get back is just the value in the float domain that is closest to 525.880005.
The reason that floating-point numbers cannot represent all numbers is the mismatch between the decimal and the binary system for fractions.
Other types, such as decimal and money, use other, much more memory-consuming techniques to store the number (for example, in a string you can store any number, but of course this is not the most performant way to do math).
A simple example: 0.3 in my own binary system:
0.1 b(inary) would be 0.5 d(ecimal), so too much...
0.01b --> 0.25d (1/4, too little)
0.011b --> 0.375d (1/4 + 1/8, too much)
0.0101b --> 0.3125d (1/4 + 1/16, still too much)
...
0.010011b --> 1/4 + 1/32 + 1/64 = 0.296875d
Suppose my system has 6 bits to represent the fraction; 0.296875 would then be the closest value in this domain. The right number cannot be reached due to the decimal/binary mismatch.
For examples, see also:
Floating point inaccuracy examples
And an excellent, elaborate explanation of your problem can be found here:
http://download.oracle.com/docs/cd/E19957-01/806-3568/ncg_goldberg.html
Another note: it is really about the mismatch, not about the 'quality' of the systems: in decimal notation, for example, you cannot represent 1/3 with 100% accuracy, while this would be perfectly possible in other systems.
The float type cannot hold every possible value (nor can double). If you're assigning from a string, you may prefer BigDecimal as BigDecimal can precisely hold anything that you can reasonably represent with a string.
Note that BigDecimal also cannot hold every possible value (it can't precisely represent 1/3rd, for instance, for the same reason we can't write 1/3rd precisely in our decimal notation system — it would never end). But again, if your source is a string value, BigDecimal will more closely align with your possible values than will float or double. Of course, there's a cost. float and double are designed to be very fast in computation; BigDecimal is designed to be very precise with decimal values, at the expense of speed.
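For instance, a hedged sketch of what BigDecimal does with 1/3: with an explicit precision it rounds, while asking for the exact quotient throws, because the expansion never terminates:

import java.math.BigDecimal;
import java.math.MathContext;

public class OneThird {
    public static void main(String[] args) {
        BigDecimal one = BigDecimal.ONE;
        BigDecimal three = new BigDecimal(3);

        // Rounded to 16 significant digits:
        System.out.println(one.divide(three, MathContext.DECIMAL64)); // 0.3333333333333333

        // Exact division is impossible here, so this throws ArithmeticException:
        try {
            one.divide(three);
        } catch (ArithmeticException e) {
            System.out.println("No exact result: " + e.getMessage());
        }
    }
}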
Float doesn't have enough significant digits to represent your number. Try Double.