Double Loss of Precision - Java

I'm a little bit confused because I lose precision all the time.
Maybe I'm operating with the wrong type. I have a String like "100.00". But when I do
Double.parseDouble("100.00") it is being cut off to 100. Need help. Thanks in advance.

You probably printed your number with System.out.println(d). That internally calls Double.toString(double), whose specification states:
How many digits must be printed for the fractional part of m or a? There must be at least one digit to represent the fractional part, and beyond that as many, but only as many, more digits as are needed to uniquely distinguish the argument value from adjacent values of type double.
This is because the double number has no notion of "decimal precision". It is a binary number with fixed binary precision (number of binary digits after the binary point).
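A minimal sketch of what is happening (the %.2f format string is just one way to get the two decimals back when printing):
double d = Double.parseDouble("100.00");
System.out.println(d);                        // 100.0 -- the shortest string that uniquely identifies this double
System.out.println(String.format("%.2f", d)); // 100.00 -- two decimals are a formatting concern, not a storage one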

Just use BigDecimal instead of Double like this:
BigDecimal hundred = new BigDecimal("100.00").setScale(2, RoundingMode.HALF_UP);
BigDecimal does not lose precision at all. Additionally, you can set the scale, precision, and rounding mode as you prefer.
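For example, a minimal sketch of how the scale survives printing and arithmetic (assuming import java.math.BigDecimal):
BigDecimal hundred = new BigDecimal("100.00");
System.out.println(hundred);                             // 100.00 -- the scale of 2 is kept
System.out.println(hundred.add(new BigDecimal("0.05"))); // 100.05 -- exact decimal arithmetic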

Related

Why is BigDecimal returning an approximation of my large double in Java?

I'd like to round my large double so the first thing I decided to do, was to convert it into a BigDecimal in the following way.
BigDecimal amount = BigDecimal
.valueOf(getAmount())
.setScale(2, RoundingMode.HALF_UP);
System.out.println(amount);
In my example, getAmount() returns 123456789123123424113.31.
Therefore, I expect the exact same value to be printed out by my snippet.
Instead, I get the following value:
123456789123123430000.00
Can someone explain why BigDecimal is returning an approximation of my double?
In my example, getAmount() returns 123456789123123424113.31.
No, it does not. That is not a value that a double can represent exactly.
You can easily verify that with this code:
double d = 123456789123123424113.31d;
System.out.println(d);
Which outputs
1.2345678912312343E20
This value has the minimum amount of digits to uniquely distinguish it from any other double value. Meaning that there aren't any more relevant digits in that double. You've already lost the precision before converting the value to BigDecimal.
While an integer data type such as long and int can exactly represent every (integer) value within its range, the same can't be said about floating point numbers: they have an immense range of values that they can represent, but at the cost of not being able to represent every possible value within the range. Effectively there's a limited number of digits that a floating point number can represent (about 16 decimal digits for double and about 7 decimal digits for float). Everything else will be cut off.
If you need arbitrary precision then something like BigDecimal can help: it will allocate as much memory as necessary to hold all digits (or round according to your specification, if required), making it much more complex but also more powerful.
BigDecimal bd = new BigDecimal("123456789123123424113.31");
System.out.println(bd);
will print
123456789123123424113.31
Make sure not to initialize the BigDecimal from a double value; then you'd only get the already cut-off value again.
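A minimal sketch contrasting the two initializations, using the value from the question (assuming import java.math.BigDecimal):
BigDecimal fromString = new BigDecimal("123456789123123424113.31");   // exact
BigDecimal fromDouble = BigDecimal.valueOf(123456789123123424113.31); // precision already lost in the double literal
System.out.println(fromString); // 123456789123123424113.31
System.out.println(fromDouble); // 1.2345678912312343E+20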

Is it sufficient to convert a double to a BigDecimal just before addition to retain original precision?

We are solving a numeric precision related bug. Our system collects some numbers and spits their sum.
The issue is that the system does not retain the numeric precision, e.g. 300.7 + 400.9 = 701.599..., while the expected result would be 701.6. The precision is supposed to adapt to the input values, so we cannot just round results to a fixed precision.
The problem is obvious, we use double for the values and addition accumulates the error from the binary representation of the decimal value.
The path of the data is following:
XML file, type xsd:decimal
Parse into a Java primitive double. Its ~15 significant decimal digits should be enough; we expect values no longer than 10 digits total, 5 fraction digits.
Store into a MySQL 5.5 DB, type double
Load via Hibernate into a JPA entity, i.e. still primitive double
Sum bunch of these values
Print the sum into another XML file
Now, I assume the optimal solution would be converting everything to a decimal format. Unsurprisingly, there is pressure to go with the cheapest solution. It turns out that converting doubles to BigDecimal just before adding a couple of numbers works, as case B in the following example shows:
import java.math.BigDecimal;

public class Arithmetic {
    public static void main(String[] args) {
        double a = 0.3;
        double b = -0.2;
        // A
        System.out.println(a + b); // 0.09999999999999998
        // B
        System.out.println(BigDecimal.valueOf(a).add(BigDecimal.valueOf(b))); // 0.1
        // C
        System.out.println(new BigDecimal(a).add(new BigDecimal(b))); // 0.099999999999999977795539507496869191527366638183593750
    }
}
More about this:
Why do we need to convert the double into a string, before we can convert it into a BigDecimal?
Unpredictability of the BigDecimal(double) constructor
I am worried that such a workaround would be a ticking bomb.
First, I am not so sure that this arithmetic is bulletproof for all cases.
Second, there is still some risk that someone in the future might implement some changes and change B to C, because this pitfall is far from obvious and even a unit test may fail to reveal the bug.
I would be willing to live with the second point but the question is: Would this workaround provide correct results? Could there be a case where somehow
Double.valueOf("12345.12345").toString().equals("12345.12345")
is false? Double.toString, according to the javadoc, prints just the digits needed to uniquely represent the underlying double value, so when parsed again it gives back the same double value. Isn't that sufficient for this use case, where I only need to add the numbers and print the sum with this magical Double.toString(double d) method? To be clear, I do prefer what I consider the clean solution, using BigDecimal everywhere, but I am short of arguments to sell it; ideally I'd like an example where converting to BigDecimal before addition fails to do the job described above.
If you can't avoid parsing into primitive double or store as double, you should convert to BigDecimal as early as possible.
double can't exactly represent most decimal fractions. The value in double x = 7.3; will never be exactly 7.3, but something very, very close to it, with a difference that becomes visible around the 16th significant digit (the exact binary value continues for 50 decimal places or so). Don't be misled by the fact that printing might give exactly "7.3": printing already does some kind of rounding and doesn't show the number exactly.
If you do lots of computations with double numbers, the tiny differences will eventually sum up until they exceed your tolerance. So using doubles in computations where decimal fractions are needed, is indeed a ticking bomb.
[...] we expect values no longer than 10 digits total, 5 fraction digits.
I read that assertion to mean that all numbers you deal with, are to be exact multiples of 0.00001, without any further digits. You can convert doubles to such BigDecimals with
BigDecimal.valueOf(Math.round(doubleVal * 100000), 5)
This will give you an exact representation of a number with 5 decimal fraction digits, the 5-fraction-digits one that's closest to the input doubleVal. This way you correct for the tiny differences between the doubleVal and the decimal number that you originally meant.
If you'd simply use BigDecimal.valueOf(double val), you'd go through the string representation of the double you're using, which can't guarantee that it's what you want. It depends on a rounding process inside the Double class which tries to represent the double-approximation of 7.3 (being maybe 7.30000000000000123456789123456789125) with the most plausible number of decimal digits. It happens to result in "7.3" (and, kudos to the developers, quite often matches the "expected" string) and not "7.300000000000001" or "7.3000000000000012" which both seem equally plausible to me.
That's why I recommend not to rely on that rounding, but to do the rounding yourself by decimal shifting 5 places, then rounding to the nearest long, and constructing a BigDecimal scaled back by 5 decimal places. This guarantees that you get an exact value with (at most) 5 fractional decimal places.
Then do your computations with the BigDecimals (using the appropriate MathContext for rounding, if necessary).
When you finally have to store the number as a double, use BigDecimal.doubleValue(). The resulting double will be close enough to the decimal that the above-mentioned conversion will surely give you the same BigDecimal that you had before (unless you have really huge numbers, like 10 digits before the decimal point; then you're lost with double anyway).
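A minimal sketch of this round trip, assuming the 5-fraction-digit convention from the question (names are illustrative):
import java.math.BigDecimal;

public class RoundTrip {
    public static void main(String[] args) {
        double doubleVal = 7.3; // stored as the nearest double, not exactly 7.3
        // shift by 5 decimal places, round to the nearest long, scale back by 5
        BigDecimal exact = BigDecimal.valueOf(Math.round(doubleVal * 100000), 5);
        System.out.println(exact); // 7.30000 -- exactly 5 fraction digits

        // storing as double and converting back yields the very same BigDecimal
        double stored = exact.doubleValue();
        BigDecimal again = BigDecimal.valueOf(Math.round(stored * 100000), 5);
        System.out.println(exact.equals(again)); // true
    }
}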
P.S. Be sure to use BigDecimal only if decimal fractions are relevant to you - there were times when the British shilling consisted of twelve pence. Representing fractional pounds as BigDecimal would give a disaster much worse than using doubles.
It depends on the database you are using. If you are using SQL Server you can use the type numeric(12, 8), where 12 is the precision (total number of digits) and 8 is the scale (digits after the decimal point). Similarly, for MySQL you can use DECIMAL(5, 2).
You won't lose precision if you use such a datatype.
Java Hibernate class:
You can define
private double latitude;

Precision loss with java.lang.Double

Say I have 2 double values. One of them is very large and one of them is very small.
double x = 99....9; // I don't know the possible max and min values,
double y = 0.00..1; // so just assume these values are near max and min.
If I add those values together, do I lose precision?
In other words, does the max possible double value increase if I assign an int value to it? And does the min possible double value decrease if I choose a small integer part?
double z = x + y; // Real result is something like 999999999999999.00000000000001
double values are not evenly distributed over all numbers. double uses a floating-point representation, which means there is a fixed number of bits for the exponent and a fixed number of bits for the significand (mantissa).
So in your example, adding a large and a small value would result in the smaller value being dropped, since it cannot be expressed using the larger exponent.
The solution to not dropping precision is using a number format that has a potentially growing precision like BigDecimal - which is not limited to a fixed number of bits.
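A minimal sketch of a small addend being absorbed by a large one (values chosen for illustration):
double big = 1.0e16;
double tiny = 0.001;
System.out.println(big + tiny == big); // true -- tiny is below the spacing (ulp) of doubles near 1e16
System.out.println(new java.math.BigDecimal("1.0e16").add(new java.math.BigDecimal("0.001")));
// 10000000000000000.001 -- BigDecimal keeps the addend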
Consider a decimal floating-point arithmetic with a precision of three decimal digits and (roughly) the same features as typical binary floating-point arithmetic. Say you have 123.0 and 4.56. These numbers are represented by a mantissa (0 <= m < 1) and an exponent: 0.123*10^3 and 0.456*10^1, which I'll write as <.123e3> and <.456e1>. Adding two such numbers isn't immediately possible unless the exponents are equal, and that's why the addition proceeds according to:
 <.123e3>     <.123e3>
+<.456e1>    +<.004e3>
             ---------
              <.127e3>
You see that the necessary alignment of the decimal digits according to a common exponent produces a loss of precision. In the extreme case, the entire addend could be shifted into nothingness. (Think of summing an infinite series where the terms get smaller and smaller but would still contribute considerably to the sum being computed.)
Other sources of imprecision result from differences between binary and decimal fractions, where an exact fraction in one base cannot be represented without error using the other one.
So, in short, addition and subtraction between numbers from rather different orders of magnitude are bound to cause a loss of precision.
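The accumulation effect is easy to demonstrate; a minimal sketch:
double sum = 0.0;
for (int i = 0; i < 10; i++) {
    sum += 0.1; // each 0.1 carries a tiny binary representation error
}
System.out.println(sum);        // 0.9999999999999999
System.out.println(sum == 1.0); // false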
If you try to assign a value that is too large or too small for a double, the compiler will give an error. Try this:
double d1 = 1e-1000; // compile-time error: the literal is too small for double
double d2 = 1e+1000; // compile-time error: the literal is too large for double

Subtraction of double and long numbers

In my Java program there is code like this:
int f_part = (int) ((f_num - num) * 100);
f_num is a double and num is a long. I just want to take the fractional part out and assign it to f_part. But sometimes the f_part value is one less than it should be: if f_num = 123.55 and num = 123, f_part equals 54. And it happens only when f_num and num are greater than 100. I don't know why this is happening. Can someone please explain why this happens and how to correct it?
This is due to the limited precision in doubles.
The root of your problem is that the literal 123.55 actually represents the value 123.54999....
It may seem like it holds the value 123.55 if you print it:
System.out.println(123.55); // prints 123.55
but in fact, the printed value is an approximation. This can be revealed by creating a BigDecimal out of it (which provides arbitrary precision) and printing the BigDecimal:
System.out.println(new BigDecimal(123.55)); // prints 123.54999999999999715...
You can solve it by going via Math.round, but then you would have to know how many decimals the source double actually entails; or you could go via the string representation of the double (Double.toString), which in fact goes through a fairly intricate algorithm.
If you're working with currencies, I strongly suggest you either
Let prices etc. be represented by BigDecimal, which allows you to store numbers such as 0.1 accurately, or
Let an int store the number of cents (as opposed to having a double store the number of dollars).
Both ways are perfectly acceptable and used in practice.
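A minimal sketch of the second option, storing cents in an integer type (names are illustrative):
long cents = 12355;           // $123.55 stored exactly as 12355 cents
long dollars = cents / 100;   // 123
long remainder = cents % 100; // 55 -- no rounding surprises
System.out.println(dollars + "." + String.format("%02d", remainder)); // 123.55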
From The Floating-Point Guide:
internally, computers use a format (binary floating-point) that cannot
accurately represent a number like 0.1, 0.2 or 0.3 at all.
When the code is compiled or interpreted, your “0.1” is already
rounded to the nearest number in that format, which results in a small
rounding error even before the calculation happens.
It looks like you're calculating money values. double is a completely inappropriate format for this. Use BigDecimal instead.
int f_part = (int) Math.round((f_num - num) * 100);
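In context, a minimal sketch using the question's values:
double f_num = 123.55; // actually the nearest double, about 123.54999999999999715
long num = 123;
System.out.println((int) ((f_num - num) * 100));           // 54 -- the cast truncates 54.999...
System.out.println((int) Math.round((f_num - num) * 100)); // 55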
This is one of the most frequently asked (and answered) questions. Floating-point arithmetic cannot produce exact results, because it's impossible to fit the infinity of real numbers into 64 bits. Use BigDecimal if you need arbitrary precision.
Floating point arithmetic is not as simple as it may seem and there can be precision issues.
See Why can't decimal numbers be represented exactly in binary?, What Every Computer Scientist Should Know About Floating-Point Arithmetic for details.
If you need absolutely sure precision, you might want to use BigDecimal.

Assign to Float from string with precision

I have the following code, I want to assign a decimal value to float without losing precision.
String s1= "525.880005";
Float f = new Float(s1);
System.out.println(f);
Output:
525.88
Expected Output:
525.880005
Float only has 7-8 significant digits of precision. The "5" in your example is the 9th digit.
Even if it had enough precision, I don't know whether 525.880005 is exactly representable as a binary floating point number. Most decimal values aren't :)
You should use BigDecimal if the exact decimal representation is important to you.
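A minimal sketch comparing the three options:
String s1 = "525.880005";
System.out.println(Float.parseFloat(s1));         // 525.88 -- only ~7 significant digits survive
System.out.println(Double.parseDouble(s1));       // 525.880005 -- double has enough digits here
System.out.println(new java.math.BigDecimal(s1)); // 525.880005 -- exact decimal representation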
There's a real contradiction implied in the question:
Assign to float <--> with precision
525.88 is just the number in the float domain that is closest to 525.880005.
The reason that floating-point numbers cannot represent all numbers is the mismatch between the decimal and the binary system for fractions.
Other types, such as decimal and money, use other, much more memory-consuming techniques to store the number (for example, in a string you can store any number, but of course this is not the most performant way to do math).
A simple example: 0.3 in my own binary system:
0.1b (binary) would be 0.5d (decimal), so too much...
0.01b     --> 0.25d    (1/4, too little)
0.011b    --> 0.375d   (1/4 + 1/8, too much)
0.0101b   --> 0.3125d  (1/4 + 1/16, still too much)
...
0.010011b --> 1/4 + 1/32 + 1/64 = 0.296875d
Suppose my system has 6 bits to represent the fraction; then 0.296875 would be the closest representable value in this domain. The exact number cannot be reached due to the decimal/binary mismatch.
For examples, see also:
Floating point inaccuracy examples
And an excellent, elaborate explanation of your problem can be found here:
http://download.oracle.com/docs/cd/E19957-01/806-3568/ncg_goldberg.html
Another note: it is really about a mismatch, not about the 'quality' of the systems: in decimal notation, for example, you cannot represent 1/3 with 100% accuracy, while this would be perfectly possible in other number systems.
The float type cannot hold every possible value (nor can double). If you're assigning from a string, you may prefer BigDecimal as BigDecimal can precisely hold anything that you can reasonably represent with a string.
Note that BigDecimal also cannot hold every possible value (it can't precisely represent 1/3rd, for instance, for the same reason we can't write 1/3rd precisely in our decimal notation system — it would never end). But again, if your source is a string value, BigDecimal will more closely align with your possible values than will float or double. Of course, there's a cost. float and double are designed to be very fast in computation; BigDecimal is designed to be very precise with decimal values, at the expense of speed.
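For instance, a minimal sketch of the 1/3 limitation and how BigDecimal makes you handle it explicitly (assuming import java.math.*):
BigDecimal one = BigDecimal.ONE;
BigDecimal three = new BigDecimal("3");
// one.divide(three) would throw ArithmeticException: non-terminating decimal expansion
System.out.println(one.divide(three, 10, RoundingMode.HALF_UP)); // 0.3333333333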
Float doesn't have enough significant digits to represent your number. Try Double.
