Related
We are solving a numeric-precision-related bug. Our system collects some numbers and outputs their sum.
The issue is that the system does not retain the numeric precision, e.g. 300.7 + 400.9 = 701.599..., while the expected result would be 701.6. The precision is supposed to adapt to the input values, so we cannot simply round results to a fixed precision.
The problem is obvious: we use double for the values, and addition accumulates the error from the binary representation of the decimal values.
The path of the data is the following:
XML file, type xsd:decimal
Parse into a Java primitive double. Its roughly 15 significant decimal digits should be enough; we expect values no longer than 10 digits in total, 5 of them fraction digits.
Store into a MySQL 5.5 database, column type double
Load via Hibernate into a JPA entity, i.e. still a primitive double
Sum a bunch of these values
Print the sum into another XML file
Now, I assume the optimal solution would be to convert everything to a decimal format. Unsurprisingly, there is pressure to go with the cheapest solution. It turns out that converting the doubles to BigDecimal just before adding a couple of numbers works, as case B in the following example shows:
import java.math.BigDecimal;

public class Arithmetic {
    public static void main(String[] args) {
        double a = 0.3;
        double b = -0.2;

        // A
        System.out.println(a + b);//0.09999999999999998
        // B
        System.out.println(BigDecimal.valueOf(a).add(BigDecimal.valueOf(b)));//0.1
        // C
        System.out.println(new BigDecimal(a).add(new BigDecimal(b)));//0.099999999999999977795539507496869191527366638183593750
    }
}
More about this:
Why do we need to convert the double into a string, before we can convert it into a BigDecimal?
Unpredictability of the BigDecimal(double) constructor
I am worried that such a workaround would be a ticking bomb.
First, I am not so sure that this arithmetic is bulletproof for all cases.
Second, there is still some risk that someone in the future might implement some changes and change B to C, because this pitfall is far from obvious and even a unit test may fail to reveal the bug.
I would be willing to live with the second point, but the question is: would this workaround provide correct results? Could there be a case where somehow
Double.valueOf("12345.12345").toString().equals("12345.12345")
is false? Double.toString, according to the Javadoc, prints just the digits needed to uniquely represent the underlying double value, so when parsed again it gives back the same double value. Isn't that sufficient for this use case, where I only need to add the numbers and print the sum with this magical Double.toString(double) method? To be clear, I do prefer what I consider the clean solution, using BigDecimal everywhere, but I am short of arguments to sell it; ideally I would like an example where converting to BigDecimal before addition fails to do the job described above.
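For reference, here is what I observe when a value has already been through double arithmetic before the conversion, which is part of what worries me (a small sketch using the numbers from above):

import java.math.BigDecimal;

public class ValueOfPitfall {
    public static void main(String[] args) {
        // Values parsed directly from short decimal strings round-trip fine:
        double a = Double.parseDouble("300.7");
        double b = Double.parseDouble("400.9");
        System.out.println(BigDecimal.valueOf(a).add(BigDecimal.valueOf(b))); // 701.6

        // But a double that already carries accumulated error (summed as a double
        // before the conversion) keeps that error, because valueOf faithfully
        // reproduces the shortest string of the already-wrong double:
        double summedTooEarly = 300.7 + 400.9;
        System.out.println(BigDecimal.valueOf(summedTooEarly)); // prints 701.599..., not 701.6
    }
}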
If you can't avoid parsing into a primitive double or storing as a double, you should convert to BigDecimal as early as possible.
double can't exactly represent most decimal fractions. The value in double x = 7.3; will never be exactly 7.3, but something very, very close to it, with a difference visible from about the 16th significant digit onwards (the exact value runs to 50 decimal places or so). Don't be misled by the fact that printing might give exactly "7.3"; printing already does some kind of rounding and doesn't show the number exactly.
If you do lots of computations with double numbers, the tiny differences will eventually add up until they exceed your tolerance. So using doubles in computations where decimal fractions are needed is indeed a ticking bomb.
[...] we expect values no longer than 10 digits total, 5 fraction digits.
I read that assertion to mean that all numbers you deal with are exact multiples of 0.00001, without any further digits. You can convert doubles to such BigDecimals with
BigDecimal.valueOf(Math.round(doubleVal * 100000), 5)
This will give you an exact representation of a number with 5 decimal fraction digits, the 5-fraction-digits one that's closest to the input doubleVal. This way you correct for the tiny differences between the doubleVal and the decimal number that you originally meant.
If you'd simply use BigDecimal.valueOf(double val), you'd go through the string representation of the double you're using, which can't guarantee that it's what you want. It depends on a rounding process inside the Double class which tries to represent the double-approximation of 7.3 (being maybe 7.30000000000000123456789123456789125) with the most plausible number of decimal digits. It happens to result in "7.3" (and, kudos to the developers, quite often matches the "expected" string) and not "7.300000000000001" or "7.3000000000000012" which both seem equally plausible to me.
That's why I recommend not to rely on that rounding, but to do the rounding yourself by decimal shifting 5 places, then rounding to the nearest long, and constructing a BigDecimal scaled back by 5 decimal places. This guarantees that you get an exact value with (at most) 5 fractional decimal places.
Then do your computations with the BigDecimals (using the appropriate MathContext for rounding, if necessary).
When you finally have to store the number as a double, use BigDecimal.doubleValue(). The resulting double will be close enough to the decimal that the above-mentioned conversion will surely give you the same BigDecimal that you had before (unless you have really huge numbers, like 10 digits before the decimal point - then you're lost with double anyway).
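A minimal sketch of that conversion (the helper name is mine; it assumes, as stated above, that the scaled value fits comfortably in a long):

import java.math.BigDecimal;

public class FiveDecimalConversion {

    // Rounds the double to the nearest multiple of 0.00001 and returns that value exactly.
    static BigDecimal toFiveDecimals(double val) {
        return BigDecimal.valueOf(Math.round(val * 100000), 5);
    }

    public static void main(String[] args) {
        BigDecimal sum = toFiveDecimals(300.7).add(toFiveDecimals(400.9));
        System.out.println(sum); // 701.60000
    }
}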
P.S. Be sure to use BigDecimal only if decimal fractions are relevant to you - there was a time when the British shilling consisted of twelve pence. Representing fractional pounds as BigDecimal would be a disaster much worse than using doubles.
It depends on the database you are using. If you are using SQL Server you can use the numeric(12, 8) data type, where 12 is the total number of digits (precision) and 8 is the number of digits after the decimal point (scale). Similarly, in MySQL you can use DECIMAL(5,2).
You won't lose any precision if you use such a datatype.
Java Hibernate class: you can define
private double latitude;
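For completeness, a hedged sketch of what the entity could look like if you map the column to BigDecimal instead of double (entity name, field name and precision/scale are made up for illustration; javax.persistence annotations with Hibernate as the JPA provider are assumed):

import java.math.BigDecimal;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;

@Entity
public class Measurement {

    @Id
    private Long id;

    // Maps to a fixed-point column such as DECIMAL(10,5); the column keeps exact
    // decimals and Hibernate hands you a BigDecimal, so no binary rounding sneaks in.
    @Column(precision = 10, scale = 5)
    private BigDecimal value;

    public BigDecimal getValue() { return value; }

    public void setValue(BigDecimal value) { this.value = value; }
}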
I'm wondering what the best way to fix precision errors is in Java. As you can see in the following example, there are precision errors:
class FloatTest
{
    public static void main(String[] args)
    {
        Float number1 = 1.89f;

        for(int i = 11; i < 800; i*=2)
        {
            System.out.println("loop value: " + i);
            System.out.println(i*number1);
            System.out.println("");
        }
    }
}
The result displayed is:
loop value: 11
20.789999
loop value: 22
41.579998
loop value: 44
83.159996
loop value: 88
166.31999
loop value: 176
332.63998
loop value: 352
665.27997
loop value: 704
1330.5599
Also, can someone explain why it only does it starting at 11 and doubling the value every time? I think all other values (or many of them, at least) displayed the correct result.
Problems like this have caused me headaches in the past, and I usually use number formatters or put the values into a String.
Edit: As people have mentioned, I could use a double, but after trying it, it seems that 1.89 as a double times 792 still outputs an error (the output is 1496.8799999999999).
I guess I'll try the other solutions, such as BigDecimal.
If you really care about precision, you should use BigDecimal:
https://docs.oracle.com/javase/8/docs/api/java/math/BigDecimal.html
https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/math/BigDecimal.html
The problem is not with Java but with the standard floating-point format itself (http://en.wikipedia.org/wiki/IEEE_floating-point_standard).
You can either:
use double and get a bit more precision (but not perfect, of course; it also has limited precision),
use an arbitrary-precision library such as BigDecimal (see the sketch after this list), or
use numerically stable algorithms and truncate/round digits you are not sure are correct (you can calculate the numeric precision of operations).
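For example, the question's loop reworked with BigDecimal (a sketch; the literal is given as a String so it starts out exact):

import java.math.BigDecimal;

public class ExactLoop {
    public static void main(String[] args) {
        // Same loop as in the question, but 1.89 is held exactly as a decimal
        BigDecimal number1 = new BigDecimal("1.89");

        for (int i = 11; i < 800; i *= 2) {
            System.out.println("loop value: " + i);
            System.out.println(number1.multiply(BigDecimal.valueOf(i))); // 20.79, 41.58, ...
            System.out.println();
        }
    }
}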
When you print the result of a double operation you need to use appropriate rounding.
System.out.printf("%.2f%n", 1.89 * 792);
prints
1496.88
If you want the result itself rounded to a given precision, you can round it explicitly.
double d = 1.89 * 792;
d = Math.round(d * 100) / 100.0;
System.out.println(d);
prints
1496.88
However, as you can see below, this prints as expected only because printing a double already involves a small amount of implied rounding.
It is worth noting that (double) 1.89 is not exactly 1.89; it is a close approximation.
new BigDecimal(double) converts the exact value of double without any implied rounding. It can be useful in finding the exact value of a double.
System.out.println(new BigDecimal(1.89));
System.out.println(new BigDecimal(1496.88));
prints
1.8899999999999999023003738329862244427204132080078125
1496.8800000000001091393642127513885498046875
Most of your question has been pretty well covered, though you might still benefit from reading the [floating-point] tag wiki to understand why the other answers work.
However, nobody has addressed "why it only does it starting at 11 and doubling the value every time," so here's the answer to that:
for(int i = 11; i < 800; i*=2)
    ╚═══╤════╝           ╚╤═╝
        │                 └───── "double the value every time"
        │
        └───── "start at 11"
You could use doubles instead of floats. If you really need arbitrary precision, use BigDecimal.
First of all, Float is the wrapper class for the primitive float,
and double has more precision than float.
But if you only want to calculate down to the second digit (for monetary purposes, for example), use an integer type (as if you were counting cents) and add some scaling logic when you multiply or divide (a brief sketch follows below),
or, if you need arbitrary precision, use BigDecimal.
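A minimal sketch of the integer-cents idea (names are illustrative, and the formatting only handles non-negative amounts):

public class Cents {
    public static void main(String[] args) {
        long priceInCents = 189;   // 1.89 represented as 189 cents
        int quantity = 792;

        // Exact integer arithmetic, no binary fractions involved
        long totalInCents = priceInCents * quantity; // 149688

        // Scale back to currency units only when printing
        System.out.printf("%d.%02d%n", totalInCents / 100, totalInCents % 100); // 1496.88
    }
}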
If precision is vital, you should use BigDecimal to make sure that the required precision is kept. When you instantiate the values for a calculation, remember to use strings rather than doubles.
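For example (a small sketch; the values are arbitrary):

import java.math.BigDecimal;

public class StringVsDouble {
    public static void main(String[] args) {
        // Instantiated from Strings: exactly the decimals that were typed
        System.out.println(new BigDecimal("0.1").add(new BigDecimal("0.2"))); // 0.3

        // Instantiated from a double: the binary approximation is carried over exactly
        System.out.println(new BigDecimal(0.1)); // 0.1000000000000000055511151231257827021181583404541015625
    }
}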
I never had a problem with simple arithmetic precision in Basic, Visual Basic, FORTRAN, ALGOL or other "primitive" languages. It is beyond comprehension that Java can't do simple arithmetic without introducing errors. I need just two digits to the right of the decimal point for doing some accounting. Using Float, subtracting 1000 from 1355.65 I get 355.650002! To get around this ridiculous error I have implemented a simple workaround: I split my input at the decimal point into two character strings, convert each side to an integer, multiply each by 1000 and add the two back together as integers. Ridiculous, but there are no errors introduced by the poor Java algorithms.
I was just messing around with this method (Math.nextUp) to see what it does. I created a variable with the value 3.14 just because it came to mind at that instant.
double n = 3.14;
System.out.println(Math.nextUp(n));
The preceding displayed 3.1400000000000006.
Tried with 3.1400000000000001, displayed the same.
Tried with 333.33, displayed 333.33000000000004.
With many other values it displays the appropriate value; for example, 73.6 results in 73.60000000000001.
What happens to the values in between 3.1400000000000000 and 3.1400000000000006? Why does it skip some values? I know about the hardware-related problems, but sometimes it works right. Also, even though it is known that precise operations cannot be done, why is such a method included in the library? It looks pretty useless, given that it doesn't always work right.
One useful trick in Java is to use the exactness of new BigDecimal(double) and of BigDecimal's toString to show the exact value of a double:
import java.math.BigDecimal;

public class Test {
    public static void main(String[] args) {
        System.out.println(new BigDecimal(3.14));
        System.out.println(new BigDecimal(3.1400000000000001));
        System.out.println(new BigDecimal(3.1400000000000006));
    }
}
Output:
3.140000000000000124344978758017532527446746826171875
3.140000000000000124344978758017532527446746826171875
3.1400000000000005684341886080801486968994140625
There are a finite number of doubles, so only a specific subset of the real numbers are the exact value of a double. When you create a double literal, the decimal number you type is represented by the nearest of those values. When you output a double, by default, it is shown as the shortest decimal fraction that would round to it on input. You need to do something like the BigDecimal technique I used in the program to see the exact value.
In this case, both 3.14 and 3.1400000000000001 are closer to 3.140000000000000124344978758017532527446746826171875 than to any other double. The next exactly representable number above that is 3.1400000000000005684341886080801486968994140625.
Floating point numbers are stored in binary: the decimal representation is just for human consumption.
Using Rick Regan's decimal to floating-point converter, 3.14 converts to:
11.001000111101011100001010001111010111000010100011111
and 3.1400000000000006 converts to
11.0010001111010111000010100011110101110000101001
which is indeed the next binary number to 53 significant bits.
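If you want to inspect that binary layout from Java itself, Double.doubleToLongBits exposes the raw bits; a small sketch using the question's values (Long.toBinaryString drops the leading zero sign bit):

public class Bits {
    public static void main(String[] args) {
        long pi   = Double.doubleToLongBits(3.14);
        long next = Double.doubleToLongBits(3.1400000000000006);

        // 11 exponent bits followed by 52 significand bits
        System.out.println(Long.toBinaryString(pi));
        System.out.println(Long.toBinaryString(next));

        // For positive doubles the bit patterns order like integers,
        // so a difference of 1 means the two values are adjacent doubles.
        System.out.println("doubles apart: " + (next - pi)); // 1
    }
}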
Like @jgreve mentions, this is due to the use of the float and double primitive types in Java, which leads to so-called rounding error. The primitive type int, on the other hand, is a fixed-point type, meaning that it fits exactly within 32 bits. Doubles are not fixed-point, meaning that the result of double calculations must often be rounded in order to fit back into the finite representation, which sometimes (as in your case) leads to inconsistent values.
See the following two links for more info.
https://stackoverflow.com/a/322875/6012392
https://en.wikipedia.org/wiki/Double-precision_floating-point_format
A workaround could be either of the following two, which give a "direction" to the first double.
double n = 1.4;
double x = 1.5;
System.out.println(Math.nextAfter(n, x));
or
double n = 1.4;
double next = n + Math.ulp(n);
System.out.println(next);
But to handle floating-point values it is recommended to use the BigDecimal class.
In my Java program there is code like this:
int f_part = (int) ((f_num - num) * 100);
f_num is a double and num is a long. I just want to take the fractional part out and assign it to f_part. But sometimes the f_part value is one less than it should be: if f_num = 123.55 and num = 123, then f_part equals 54. And it happens only when f_num and num are greater than 100. I don't know why this is happening. Can someone please explain why this happens and how to correct it?
This is due to the limited precision in doubles.
The root of your problem is that the literal 123.55 actually represents the value 123.54999....
It may seem like it holds the value 123.55 if you print it:
System.out.println(123.55); // prints 123.55
but in fact the printed value is an approximation. This can be revealed by creating a BigDecimal out of it (which provides arbitrary precision) and printing the BigDecimal:
System.out.println(new BigDecimal(123.55)); // prints 123.54999999999999715...
You can solve it by going via Math.round, but you would have to know how many decimals the source double actually entails, or you could choose to go through the string representation of the double, which in fact goes through a fairly intricate algorithm.
If you're working with currencies, I strongly suggest you either
Let prices etc. be represented by BigDecimal, which allows you to store numbers such as 0.1 accurately, or
Let an int store the number of cents (as opposed to having a double store the number of dollars).
Both ways are perfectly acceptable and used in practice.
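A brief sketch of both options applied to the 123.55 example (the particular calls are illustrative, not the only way to do it):

import java.math.BigDecimal;

public class FractionalPart {
    public static void main(String[] args) {
        // Option 1: keep the value as a BigDecimal from the start
        BigDecimal price = new BigDecimal("123.55");
        int f_part = price.remainder(BigDecimal.ONE).movePointRight(2).intValue();
        System.out.println(f_part); // 55

        // Option 2: keep the number of cents in an integer type
        long cents = 12355; // 123.55 stored as cents
        System.out.println(cents % 100); // 55
    }
}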
From The Floating-Point Guide:
internally, computers use a format (binary floating-point) that cannot accurately represent a number like 0.1, 0.2 or 0.3 at all.
When the code is compiled or interpreted, your “0.1” is already rounded to the nearest number in that format, which results in a small rounding error even before the calculation happens.
It looks like you're calculating money values. double is a completely inappropriate format for this. Use BigDecimal instead.
int f_part = (int) Math.round(((f_num - num) * 100));
This is one of the most often asked (and answered) questions. Floating-point arithmetic cannot produce exact results, because it's impossible to fit the infinity of real numbers into 64 bits. Use BigDecimal if you need arbitrary precision.
Floating point arithmetic is not as simple as it may seem and there can be precision issues.
See Why can't decimal numbers be represented exactly in binary?, What Every Computer Scientist Should Know About Floating-Point Arithmetic for details.
If you need absolutely reliable precision, you might want to use BigDecimal.
I've been struggling with a precision nightmare in Java and SQL Server, up to the point where I don't know what to do anymore. Personally, I understand the issue and the underlying reason for it, but explaining that to the client halfway across the globe is unfeasible (at least for me).
The situation is this. I have two columns in SQL Server: Qty INT and Price FLOAT. The values are 1250 and 10.8601, so to get the total value it's Qty * Price, and the result is 13575.124999999998 (in both Java and SQL Server). That's correct. The issue is this: the client doesn't want to see that; they see that number only as 13575.125 and that's it. In one place they want to see it with 2-decimal precision and in another with 4 decimals. When displaying 4 decimals the number is correct, 13575.125, but when displaying 2 decimals they believe it is wrong: 13575.12 should instead be 13575.13!
Help.
Your problem is that you are using floats. On the java side, you need to use BigDecimal, not float or double, and on the SQL side you need to use Decimal(19,4) (or Decimal(19,3) if it helps jump to your precision level). Do not use the Money type because math on the Money type in SQL causes truncation, not rounding. The fact that the data is stored as a float type (which you say is unchangeable) doesn't affect this, you just have to convert it at first opportunity before doing math on it.
In the specific example you give, you need to first get the 4 decimal precision number and put it in a BigDecimal or Decimal(19,4) as the case may be, and then further round it to 2 decimal precision. Then (if you are rounding up) you will get the result you want.
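A hedged sketch of that two-step rounding on the Java side (the 4-decimal intermediate scale and the HALF_UP mode are assumptions based on the question):

import java.math.BigDecimal;
import java.math.RoundingMode;

public class TwoStepRounding {
    public static void main(String[] args) {
        double raw = 1250 * 10.8601; // about 13575.124999999998 as a double

        // First recover the intended 4-decimal value, then round that to 2 decimals
        BigDecimal fourDp = BigDecimal.valueOf(raw).setScale(4, RoundingMode.HALF_UP); // 13575.1250
        BigDecimal twoDp  = fourDp.setScale(2, RoundingMode.HALF_UP);                  // 13575.13

        System.out.println(fourDp);
        System.out.println(twoDp);
    }
}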
Use BigDecimal. Float is not an appropriate type to represent money; BigDecimal will handle the rounding properly, while float will always produce rounding errors.
For storing monetary amounts, floating-point values are not the way to go. From your description I would probably handle amounts as long integers whose value is the monetary amount multiplied by 10^5, as the database storage format.
You need to be able to handle calculations with amounts that do not lose precision, so here again floating point is not the way to go. If the total sums between debit and credit are off by 1 cent in a ledger, the ledger fails in the eyes of financial people, so make sure your software operates in their problem domain, not yours. If you cannot use existing classes for monetary amounts, you need to build your own class that works with amount * 10^5 and formats according to the wanted precision only for input and output purposes.
Don't use the float datatype for price. You should use "Money" or "SmallMoney".
Here's a reference for [MS SQL DataTypes][1].
[1]: http://webcoder.info/reference/MSSQLDataTypes.html
Correction: Use Decimal(19,4).
Thanks Yishai.
I think I see the problem.
10.8601 cannot be represented perfectly, and so while the rounding to 13575.125 works OK it's difficult to get it to round to .13 because adding 0.005 just doesn't quite get there. And to make matters worse, 0.005 doesn't have an exact representation either, so you end up just slightly short of 0.13.
Your choices are then either to round twice, once to three digits and then once to two, or to do a better calculation to start with: using long or a high-precision format, scale by 1000 so that *.125 becomes *125, and do the rounding using precise integers.
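A rough sketch of that integer-scaled rounding (it assumes non-negative values; the +5 / divide-by-10 step does the half-up rounding on the extra digit):

public class ScaledRounding {
    public static void main(String[] args) {
        double raw = 13575.124999999998; // the double value from the question

        // Scale so the digits of interest become an integer, then round with integers
        long thousandths = Math.round(raw * 1000); // 13575125
        long hundredths  = (thousandths + 5) / 10; // half-up on the last digit: 1357513

        System.out.printf("%d.%02d%n", hundredths / 100, hundredths % 100); // 13575.13
    }
}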
By the way, it's not entirely correct to repeat one of the endless variations on "floating point is inaccurate" or to say that it always produces errors. The problem is that the format can only represent fractions that you can build by summing negative powers of two. So, of the sequence 0.01 to 0.99, only .25, .50 and .75 have exact representations. Consequently, FP is best used, ironically, by scaling it so that only integer values are used; then it is as accurate as integer arithmetic. Of course, then you might as well have just used fixed-point integers to start with.
Be careful: scaling, say, 0.37 to 37 still isn't exact unless you round it. Floating point can be used for monetary values, but it's more work than it is worth and typically the necessary expertise isn't available.
The FLOAT data type can't represent most decimal fractions exactly because it is base 2 instead of base 10. (See the convenient link :) http://gregs-blog.com/2007/12/10/dot-net-decimal-type-vs-float-type/).
For financial computations or anything that requires fractions to be represented accurately, the DECIMAL data type must be used.
If you can't fix the underlying database you can fix the java like this:
import java.text.DecimalFormat;

public class Temp {
    public static void main(String[] args) {
        double d = 13575.124999999;

        DecimalFormat df2 = new DecimalFormat("#.##");
        System.out.println(" 2dp: " + Double.valueOf(df2.format(d)));

        DecimalFormat df4 = new DecimalFormat("#.####");
        System.out.println(" 4dp: " + Double.valueOf(df4.format(d)));
    }
}
Although you shouldn't be storing the price as a float in the first place, you can consider converting it to decimal(38, 4), say, or money (note that money has some issues since results of expressions involving it do not have their scale adjusted dynamically), and exposing that in a view on the way out of SQL Server:
SELECT Qty * CONVERT(decimal(38, 4), Price)
So, given that you can't change the database structure (which would probably be the best option, given that you are using a non-fixed-precision type to represent something that should be fixed/precise, as many others have already discussed), hopefully you can change the code somewhere. On the Java side, I think something like what @andy_boot answered with would work. On the SQL side, you would basically need to cast the non-precise value to the highest precision you need and continue to cast down from there, something like this in the SQL code:
declare @f float,
        @n numeric(20,4),
        @m money;

select @f = 13575.124999999998,
       @n = 13575.124999999998,
       @m = 13575.124999999998

select @f, @n, @m

select cast(@f as numeric(20,4)), cast(cast(@f as numeric(20,4)) as numeric(20,2))
select cast(@f as money), cast(cast(@f as money) as numeric(20,2))
You can also do a DecimalFormat and then round using it.
DecimalFormat df = new DecimalFormat("0.00"); //or "0.0000" for 4 digits.
df.setRoundingMode(RoundingMode.HALF_UP);
String displayAmt = df.format((new Float(<your value here>)).doubleValue());
And I agree with others that you should not be using Float as a DB field type to store currency.
If you can't change the database to a fixed decimal datatype, something you might try is rounding by taking truncate((x+.0055)*10000)/10000. Then 1.124999 would "round" to 1.13 and give consistent results. Mathematically this is unreliable, but I think it would work in your case.