I have the following code; I want to assign a decimal value to a float without losing precision.
String s1 = "525.880005";
Float f = new Float(s1);
System.out.println(f);
Output:
525.88
Expected Output:
525.880005
Float only has 7-8 significant digits of precision. The "5" in your example is the 9th digit.
Even if it had enough precision, I don't know whether 525.880005 is exactly representable as a binary floating point number. Most decimal values aren't :)
You should use BigDecimal if the exact decimal representation is important to you.
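For example, parsing the string straight into a BigDecimal keeps every digit (a minimal sketch):

import java.math.BigDecimal;

public class ExactDecimal {
    public static void main(String[] args) {
        String s1 = "525.880005";
        BigDecimal d = new BigDecimal(s1); // the String constructor is exact
        System.out.println(d);             // prints 525.880005
    }
}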
There's a real contradiction implied in the question:
Assign to float <--> without losing precision
525.88 is just the number in the float domain that is closest to 525.880005.
The reason that floating point numbers cannot represent all numbers is the mismatch between the decimal and the binary system for fractions.
Other types, such as decimal and money types, use other, much more memory-consuming techniques to store the number (for example, a string can store any number, but of course this is not the most performant way to do math).
A simple example: 0.3 in my own binary system:
0.1b (binary) would be 0.5d (decimal), so too much...
0.01b --> 0.25d (1/4, too little)
0.011b --> 0.375d (1/4 + 1/8, too much)
0.0101b --> 0.3125d (1/4 + 1/16, still too much)
...
0.010011b --> 1/4 + 1/32 + 1/64 = 0.296875d
Suppose my system has 6 bits to represent the fraction; 0.296875 would then be the closest value in this domain. The exact number cannot be reached because of the decimal/binary mismatch.
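You can watch Java do the same thing by printing the exact binary value that the float closest to 0.3 actually stores (a minimal sketch; the BigDecimal(double) constructor shows the stored value without rounding):

import java.math.BigDecimal;

public class ClosestFloat {
    public static void main(String[] args) {
        // Widening 0.3f to double preserves the float's exact binary value.
        System.out.println(new BigDecimal((double) 0.3f));
        // prints 0.300000011920928955078125 -- the nearest float to 0.3
    }
}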
For examples see also :
Floating point inaccuracy examples
And an excellent, elaborate explanation of your problem can be found here:
http://download.oracle.com/docs/cd/E19957-01/806-3568/ncg_goldberg.html
Another note: it is really about mismatch, not about the 'quality' of the systems: in decimal notation, for example, you cannot represent 1/3 with 100% accuracy, while this is perfectly possible in other number systems.
The float type cannot hold every possible value (nor can double). If you're assigning from a string, you may prefer BigDecimal as BigDecimal can precisely hold anything that you can reasonably represent with a string.
Note that BigDecimal also cannot hold every possible value (it can't precisely represent 1/3rd, for instance, for the same reason we can't write 1/3rd precisely in our decimal notation system — it would never end). But again, if your source is a string value, BigDecimal will more closely align with your possible values than will float or double. Of course, there's a cost. float and double are designed to be very fast in computation; BigDecimal is designed to be very precise with decimal values, at the expense of speed.
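To see BigDecimal's own limitation, try dividing 1 by 3 (a minimal sketch):

import java.math.BigDecimal;
import java.math.RoundingMode;

public class OneThird {
    public static void main(String[] args) {
        BigDecimal one = BigDecimal.ONE;
        BigDecimal three = new BigDecimal(3);
        // one.divide(three) would throw ArithmeticException, because the
        // decimal expansion of 1/3 never terminates.
        // You must choose a scale and a rounding mode instead:
        System.out.println(one.divide(three, 10, RoundingMode.HALF_UP)); // 0.3333333333
    }
}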
Float doesn't have enough significant digits to represent your number. Try Double.
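A quick check (a minimal sketch): a double's 15-17 significant decimal digits easily cover the 9 digits of this value, and Double.toString prints just enough digits to round-trip.

double d = Double.parseDouble("525.880005");
System.out.println(d); // prints 525.880005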
Related
I know that the range of integers that can be correctly compared is -(2^53 - 1) to 2^53 - 1.
But what is the range of floating point numbers?
// Shouldn't the double type conform to this expectation? I would have guessed so:
double c = 9007199254740990.9;
double d = 9007199254740990.8;
System.out.println(c > d); // prints false
There's no particular range in which floating point numbers are guaranteed to be compared correctly - that would require the existence of infinitely many double representations.
All double values are accurate to 53 significant binary digits. If two numbers differ only in the 54th significant binary digit, they'll generally have the same double representation, unless rounding makes them different.
This question seems to be based on a fundamental misunderstanding of (mathematical) Real numbers, and their relationship to IEEE-754.
"Floating point" refers to a representation scheme for Real numbers. Historically that comprises decimal digits and a "decimal point". There are an infinite number of Real numbers, and (correspondingly) an infinite number of floating point numbers. Provided that you have enough paper to write them down.
One property of Real numbers is that there is an infinite number of Real numbers between any pair of distinct Real numbers. That includes any pair that are really close to one another.
IEEE-754 floating point numbers are different. Each one represents a single precise Real number. There is a set of (less than) 2^64 of these Real numbers for IEEE-754 double precision. Any other Real number that is not in that set cannot be precisely represented as a double.
In Java, when we write this:
double c = 9007199254740990.9;
double d = 9007199254740990.8;
System.out.println(c > d); // prints false
the value of c will be the IEEE-754 representation that is closest to the real number that the floating point notation denotes. Likewise d. It is not going to be exactly equal to the Real number. (It can only be exact for a finite subset of the infinite set of Real numbers ...)
The print statement illustrates this. The closest IEEE-754 representations for those 2 Real numbers are the same.
What actually happens is that the Java compiler works out what 9007199254740990.9 and 9007199254740990.8 mean as Real numbers. Then it silently rounds those numbers to the nearest IEEE-754 double.
So to your question:
What is the range in which IEEE-754 can correctly compare the size of floating point numbers
A: There is no such range.
If you give me any Real number "x" (expressed in floating point form) that maps to a double value d, I can write you a different Real number "y" (in floating point form) that also maps to the same value d. All I need to do is to add some additional digits to the least significant end of "x".
Note: the reasoning is independent of the actual representation of IEEE-754. It only depends on the fact that IEEE-754 representations have a fixed number of bits. That means that the set of possible values is bounded ... and the rest is a logical consequence of that.
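A one-line demonstration of that argument (a minimal sketch):

double x = 0.1;
double y = 0.10000000000000000001; // extra digits beyond what a double can hold
System.out.println(x == y);        // prints true: both literals round to the same double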
Let's think about the actual bit representation of these double numbers:
We can use System.out.println(Long.toBinaryString(Double.doubleToRawLongBits(c))); to print the bit representation.
Both 9007199254740990.9 and 9007199254740990.8 have the same bit representation under IEEE 754:
0 10000110011 1111111111111111111111111111111111111111111111111111
s(sign) = 0
exp = 10000110011 (binary value, which equals 1075 in decimal)
frac = 111...111 (52 bits)
The precise value of this representation is (-1)^0 × 1.111...1 (binary, 52 fraction bits) × 2^(1075-1023) = 2^53 - 1 = 9007199254740991, so both literals are silently rounded to that integer.
Starting from 2^53, the double type is no longer able to represent every integer exactly, let alone every number with a fractional part.
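For instance, just above 2^53 adjacent doubles are 2 apart, so odd integers collapse onto their neighbours (a minimal sketch):

double a = 9007199254740993d;               // 2^53 + 1, not representable
System.out.println(a == 9007199254740992d); // prints true: rounded to 2^53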
Regarding the OP's question: as mentioned in other answers, there is no precise range. For example, the following statement evaluates to true, yet we wouldn't conclude that the range of floating point numbers double can compare correctly is below 1.1. Generally speaking, the larger the number, the less precisely the double type can represent it.
double x1 = 1.1;
double x2 = 1.1000000000000001;
System.out.println(x1 == x2); // true
We are solving a numeric-precision-related bug. Our system collects some numbers and spits out their sum.
The issue is that the system does not retain the numeric precision, e.g. 300.7 + 400.9 = 701.599..., while the expected result would be 701.6. The precision is supposed to adapt to the input values, so we cannot just round results to a fixed precision.
The problem is obvious, we use double for the values and addition accumulates the error from the binary representation of the decimal value.
The path of the data is following:
XML file, type xsd:decimal
Parse into a Java primitive double. Its ~15 significant decimal digits should be enough; we expect values of no more than 10 digits total, 5 of them fraction digits.
Store into a MySQL 5.5 DB, column type double
Load via Hibernate into a JPA entity, i.e. still primitive double
Sum bunch of these values
Print the sum into another XML file
Now, I assume the optimal solution would be converting everything to a decimal format. Unsurprisingly, there is pressure to go with the cheapest solution. It turns out that converting the doubles to BigDecimal just before adding a couple of numbers works, as case B in the following example shows:
import java.math.BigDecimal;

public class Arithmetic {
    public static void main(String[] args) {
        double a = 0.3;
        double b = -0.2;

        // A
        System.out.println(a + b); // 0.09999999999999998

        // B
        System.out.println(BigDecimal.valueOf(a).add(BigDecimal.valueOf(b))); // 0.1

        // C
        System.out.println(new BigDecimal(a).add(new BigDecimal(b))); // 0.099999999999999977795539507496869191527366638183593750
    }
}
More about this:
Why do we need to convert the double into a string, before we can convert it into a BigDecimal?
Unpredictability of the BigDecimal(double) constructor
I am worried that such a workaround would be a ticking bomb.
First, I am not so sure that this arithmetic is bulletproof for all cases.
Second, there is still some risk that someone in the future might implement some changes and change B to C, because this pitfall is far from obvious and even a unit test may fail to reveal the bug.
I would be willing to live with the second point but the question is: Would this workaround provide correct results? Could there be a case where somehow
Double.valueOf("12345.12345").toString().equals("12345.12345")
is false? According to the Javadoc, Double.toString prints just enough digits to uniquely represent the underlying double value, so that when parsed again it yields the same double. Isn't that sufficient for this use case, where I only need to add the numbers and print the sum via this magical Double.toString(double d) method? To be clear, I do prefer what I consider the clean solution, using BigDecimal everywhere, but I am somewhat short of arguments to sell it - ideally, an example where the conversion to BigDecimal before addition fails to do the job described above.
If you can't avoid parsing into a primitive double or storing as double, you should convert to BigDecimal as early as possible.
double can't exactly represent most decimal fractions. The value in double x = 7.3; will never be exactly 7.3, but something very close to it, with a difference visible from about the 16th significant digit onwards (the exact stored value runs to 50 decimal places or so). Don't be misled by the fact that printing might give exactly "7.3": printing already does some rounding and doesn't show the stored number exactly.
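You can make the stored value visible (a minimal sketch; the BigDecimal(double) constructor preserves the binary value exactly):

import java.math.BigDecimal;

public class ExactValue {
    public static void main(String[] args) {
        double x = 7.3;
        System.out.println(x);                 // prints 7.3 (display rounding)
        System.out.println(new BigDecimal(x)); // prints the exact stored value, slightly off 7.3
    }
}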
If you do lots of computations with double values, the tiny differences eventually add up until they exceed your tolerance. So using doubles in computations where exact decimal fractions are needed is indeed a ticking bomb.
[...] we expect values no longer than 10 digits total, 5 fraction digits.
I read that assertion to mean that all numbers you deal with, are to be exact multiples of 0.00001, without any further digits. You can convert doubles to such BigDecimals with
BigDecimal.valueOf(Math.round(doubleVal * 100000), 5)
This will give you an exact representation of a number with 5 decimal fraction digits, the 5-fraction-digits one that's closest to the input doubleVal. This way you correct for the tiny differences between the doubleVal and the decimal number that you originally meant.
If you'd simply use BigDecimal.valueOf(double val), you'd go through the string representation of the double you're using, which can't guarantee that it's what you want. It depends on a rounding process inside the Double class which tries to represent the double-approximation of 7.3 (being maybe 7.30000000000000123456789123456789125) with the most plausible number of decimal digits. It happens to result in "7.3" (and, kudos to the developers, quite often matches the "expected" string) and not "7.300000000000001" or "7.3000000000000012" which both seem equally plausible to me.
That's why I recommend not relying on that rounding, but doing the rounding yourself: decimal-shift by 5 places, round to the nearest long, and construct a BigDecimal scaled back by 5 decimal places. This guarantees that you get an exact value with (at most) 5 fractional decimal digits.
Then do your computations with the BigDecimals (using the appropriate MathContext for rounding, if necessary).
When you finally have to store the number as a double, use BigDecimal.doubleValue(). The resulting double will be close enough to the decimal that the above-mentioned conversion will surely give you the same BigDecimal that you had before (unless you have really huge numbers, like 10 digits before the decimal point - then you're lost with double anyway).
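A minimal sketch of the whole round trip (the helper name toDecimal is mine):

import java.math.BigDecimal;

public class FiveFractionDigits {
    // Snap a double to the nearest value with exactly 5 decimal fraction digits.
    static BigDecimal toDecimal(double doubleVal) {
        return BigDecimal.valueOf(Math.round(doubleVal * 100000), 5);
    }

    public static void main(String[] args) {
        BigDecimal sum = toDecimal(300.7).add(toDecimal(400.9));
        System.out.println(sum);               // 701.60000, not 701.599...
        double stored = sum.doubleValue();     // e.g. what goes into the DB
        System.out.println(toDecimal(stored)); // snapping again recovers 701.60000
    }
}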
P.S. Be sure to use BigDecimal only if decimal fractions are relevant to you - there were times when a British Shilling consisted of twelve Pence. Representing such fractional Pounds as BigDecimal would give a disaster much worse than using doubles.
It depends on the database you are using. If you are using SQL Server you can use the data type numeric(12, 8), where 12 is the total number of digits and 8 is the number of digits after the decimal point. Similarly, in MySQL you can use DECIMAL(5,2).
You won't lose any precision if you use the above-mentioned data types.
In the Java Hibernate entity class you can still define the field as
private double latitude;
and map it to the decimal column type in the database.
Edit: I know floating point arithmetic is not exact, and the arithmetic isn't even my problem. The addition gives the result I expected; 8099.99975f doesn't.
So I have this little program:
public class Test {
    public static void main(String[] args) {
        System.out.println(8099.99975f);           // 8099.9995
        System.out.println(8099.9995f + 0.00025f); // 8100.0
        System.out.println(8100f == 8099.99975f);  // false
        System.out.println(8099.9995f + 0.00025f == 8099.99975f); // false
        // I know comparing floats with == can be troublesome,
        // but here they really should be equal in every bit.
    }
}
I wrote it to check whether 8099.99975 is rounded to 8100 when written as an IEEE 754 single-precision float. To my surprise, Java converts it to 8099.9995 when written as a float literal (8099.99975f). I checked my calculations and the IEEE standard again but couldn't find any mistakes. 8100 is just as far away from 8099.99975 as 8099.9995 is, but the last bit of 8100 is 0, which should make it the chosen representation.
So I checked the Java language spec to see if I missed something. After a quick search I found two things:
The Java programming language requires that floating-point arithmetic behave as if every floating-point operator rounded its floating-point result to the result precision. Inexact results must be rounded to the representable value nearest to the infinitely precise result; if the two nearest representable values are equally near, the one with its least significant bit zero is chosen.
The Java programming language uses round toward zero when converting a floating value to an integer [...].
I noticed here that nothing was said about float literals. So I thought that float literals might just be doubles which, when cast to float, are rounded toward zero, similarly to float-to-int casting. That would explain why 8099.99975f was rounded downward.
I wrote the little program you can see above to check my theory and indeed found that when adding two float literals that should result in 8100 the correct float is computed. (Note here that 8099.9995 and 0.00025 can be represented exactly as single floats so there's no rounding that could lead to a different result) This confused me since it didn't make much sense to me that float literals and computed floats behaved differently so I dug around in the language spec some more and found this:
A floating-point literal is of type float if it is suffixed with an ASCII letter F or f [...]. The elements of the types float [...] are those values that can be represented using the IEEE 754 32-bit single-precision [...] binary floating-point formats.
This ultimately states that the literal should be rounded according to the IEEE standard which in this case is to 8100. So why is it 8099.9995?
The key point to realise is that the value of a floating point number can be worked out in two different ways, that aren't in general equal.
There's the value that the bits in the floating point number give the exact binary representation of.
There's the "decimal display value" of a floating point number, which is the number with the least decimal places that is closer to that floating point number than any other number.
To understand the difference, consider the number whose exponent is 10001011 and whose significand is 1.11111010001111111111111. This is the exact binary representation of 8099.99951171875. But the decimal value 8099.9995 has fewer decimal places, and is closer to this floating point number than to any other floating point number. Therefore, 8099.9995 is the value that will be displayed when you print out that number.
Note that this particular floating point number is the next float below 8100.
Now consider 8099.99975. It's slightly closer to 8099.99951171875 than it is to 8100. Therefore, to represent it in single precision floating point, Java will pick the floating point number which is the exact binary representation of 8099.99951171875. If you try to print it, you'll see 8099.9995.
Lastly, when you do 8099.9995 + 0.00025 in single-precision floating point, the numbers involved are the exact binary representations of 8099.99951171875 and 0.0002499999827705323696136474609375. But because the latter is slightly more than 2^-12 - half the gap between adjacent floats at this magnitude - the result of the addition is closer to 8100 than to 8099.99951171875, and so it is rounded up, not down, at the end, making it 8100.
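You can confirm all of this from Java itself (a minimal sketch using Math.nextDown):

public class CheckRounding {
    public static void main(String[] args) {
        System.out.println(Math.nextDown(8100f));                // 8099.9995: the float just below 8100
        System.out.println(8099.99975f == Math.nextDown(8100f)); // true: the literal rounded down
        // Widening to double preserves the float's exact binary value:
        System.out.println(new java.math.BigDecimal((double) 8099.99975f)); // 8099.99951171875
    }
}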
The decimal value 8099.99975 has nine significant digits. This is more than can be represented exactly in a float. If you use the floating point analysis tool at CUNY you'll see that the binary representation closest to 8099.9995 is 45FD1FFF. When you attempt to add 0.00025 you suffer a "loss of significance": in order not to lose significant (left-hand) digits of the larger number, the significand of the smaller one has to be shifted right to match the scale (exponent) of the larger. When this happens, almost all of its bits are shifted off the right end of the register.
Decimal Exponent Significand
--------- -------------- -------------------------
8099.9995 10001011 (+12) 1.11111010001111111111111
0.00025 01110011 (-12) 1.00000110001001001101111
To line these up for addition, the second significand has to shift right by 24 bits, but a single-precision significand holds only 24 bits (23 stored plus the implicit leading 1). Nearly the whole addend is shifted out; only the guard and sticky bits retain a trace of it, which is just enough for the round-to-nearest step to nudge the sum up to 8100 (as the output above shows) instead of leaving it at 8099.9995.
If you want this to work, switch to double-precision arithmetic.
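Indeed, a double has more than enough precision for this value (a one-line check):

System.out.println(8099.99975); // as a double literal this round-trips and prints 8099.99975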
I am running into a weird situation. I am summing numbers using Double.parseDouble. In some situations I am getting extra digits; for instance, instead of 109.1 I am getting 109.10000001.
I am adding up string values taken from JSON.
Here is the line of code I am executing.
points = points + Double.parseDouble(x.points);
Java uses IEEE 754 floating point numbers for its double (and float) data types. A double has a lot of precision, but not infinite precision; some numbers can only be approximated.
In addition (no pun intended), adding such numbers together compounds the floating point errors. Eventually the error becomes large enough to notice, as with your issue here.
If you can accept a performance hit, then use BigDecimal, which is arbitrary precision.
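A minimal sketch of the BigDecimal version (the sample strings are hypothetical stand-ins for your JSON values):

import java.math.BigDecimal;

public class PointsSum {
    public static void main(String[] args) {
        String[] parsed = { "36.4", "36.3", "36.4" }; // hypothetical x.points values
        BigDecimal points = BigDecimal.ZERO;
        for (String p : parsed) {
            points = points.add(new BigDecimal(p));   // exact decimal addition
        }
        System.out.println(points);                   // 109.1, with no trailing noise
    }
}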
It is probably unrelated to Double.parseDouble itself; it is rather a floating-point precision issue. Floats and doubles are encoded in binary as a mantissa and an exponent. The part after the decimal point is a particular problem because what can be represented easily in base 10 cannot necessarily be represented well in base 2. For example, the decimal number 0.2 becomes infinitely repeating in binary, rather like 1/3 in decimal (0.333...). I suggest rounding to a fixed number of digits for display, or perhaps using java.math.BigDecimal instead.
This actually has a very complicated answer, but essentially floating point numbers are often only approximations of the number you give it.
A good example from Wikipedia is 0.01, which in single-precision floating point is actually 0.009999999776482582092285156250.
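You can reproduce this in Java (a minimal sketch):

System.out.println(new java.math.BigDecimal(0.01f));
// the float is widened to double, exposing its exact stored value:
// 0.00999999977648258209228515625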
I happened upon these values in my ColdFusion code but the Google calculator seems to have the same "bug" where the difference is non-zero.
416582.2850 - 411476.8100 - 5105.475 = -2.36468622461E-011
http://www.google.com/search?hl=en&rlz=1C1GGLS_enUS340US340&q=416582.2850+-+411476.8100+-+5105.475&aq=f&oq=&aqi=
JavaCast'ing these to long/float/double doesn't help- it results in other non-zero differences.
This is because decimal numbers that "look" round in base 10, are not exactly representable in base 2 (which is what computers use to represent floating point numbers). Please see the article What Every Computer Scientist Should Know About Floating-Point Arithmetic for a detailed explanation of this problem and workarounds.
Floating-point inaccuracies (there are an infinite number of real numbers and only a finite number of 32- or 64-bit numbers to represent them with).
If you can't handle tiny errors, you should use BigDecimal instead.
Use PrecisionEvaluate() in ColdFusion (it uses BigDecimal in Java):
zero = PrecisionEvaluate(416582.2850 - 411476.8100 - 5105.475);
Unlike Evaluate(), no quotes ("") are needed.
Since computers store numbers in binary, floating point numbers are imprecise: the 1E-11 difference is just the rounding of these decimal numbers to the nearest representable binary numbers.
This "bug" is not a bug. It's how floating point arithmetic works. See: http://docs.sun.com/source/806-3568/ncg_goldberg.html
If you want arbitrary precision in Java, use BigDecimal:
BigDecimal a = new BigDecimal("416582.2850");
BigDecimal b = new BigDecimal("411476.8100");
BigDecimal c = new BigDecimal("5105.475");
System.out.println(a.subtract(b).subtract(c)); // prints 0.0000
The problem is the inexact representation of floating point types. Because these values can't be exactly represented in binary, arithmetic on them picks up small errors. Typically with floats you want to compare whether the result is equal to another value within some small epsilon (error tolerance).
These are floating point issues and using BigDecimal will fix it.
Changing the order of subtraction also yields zero in Google.
416582.2850 - 5105.475 - 411476.8100 = 0