Regarding Big Decimal - java

I have a CSV file where amount and quantity fields are present in each detail record (but not in the header and trailer records). The trailer record carries a total charge value, which is the sum of quantity multiplied by amount over all detail records. I need to check whether the trailer's total charge value equals the value I calculate from the amount and quantity fields. I am using the double data type for all of these calculations.
In the CSV file the amount field appears as "10.12", "10", "10.0", "10.456", "10.4555" or "-10.12"; the amount can be positive or negative.
In csv file
H,ABC.....
"D",....,"1","12.23"
"D",.....,"3","-13.334"
"D",......,"2","12"
T,csd,123,12.345
------------------------------ While validating I have the below code --------------------
double detChargeCount = 0;
// From the CSV file I am reading the trailer record's charge value
String totChargeValue = items[3].replaceAll("\"", "").trim();
if (null != totChargeValue && !totChargeValue.equals("")) {
    detChargeCount = new Double(totChargeValue).doubleValue();
    if (detChargeCount == calChargeCount) {
        validflag = true;
    }
}
----------------------- While reading the CSV file I have the below code
if (null != chargeQuan && !chargeQuan.equals("")) {
    tmpChargeQuan = new Long(chargeQuan).longValue();
}
if (null != chargeAmount && !chargeAmount.equals("")) {
    tmpChargeAmt = new Double(chargeAmount).doubleValue();
    calChargeCount = calChargeCount + (tmpChargeQuan * tmpChargeAmt);
}
I have declared the variables tmpChargeQuan, tmpChargeAmt and calChargeCount as double.
When I searched the web I learned that double can cause problems in financial calculations and that BigDecimal should be used instead, but I am wondering whether that applies to my calculation. In my case the amount value can have up to 5 or 6 digits after the decimal point. Can I use the double data type for this calculation? I am only using it for validation. Will the multiplication in the code above cause a problem if I use double?

I'll expand on what Adeel has already succinctly answered. You can fit those numbers into a double. The problem is: when those numbers are used in calculations, will the results be correct? The answer is no, they will not. Generally that is not much of a problem if you account for it with a delta, i.e. the leeway within which you consider one double value equivalent to another. But for calculations involving exact quantities, such as monetary calculations, you must use a type such as BigDecimal to hold the values.
When you have this number:
1.23445
as a double, it may look like 1.23445
but it may actually be something like
1.234450000003400345543034
When you perform multiple calculations on numbers such as that, generally those extra places don't matter - however, over time, they will yield inaccurate results. With BigDecimal, when a number is specified as its String representation, it is that number - it does not suffer the "almost as good" problem doubles do.
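As a concrete illustration for the CSV validation in the question, here is a minimal sketch of the same check done with BigDecimal. The class name, the hard-coded records and the trailer value are made up for the example; in the real code the strings would come from the parsed CSV fields.
import java.math.BigDecimal;

public class ChargeValidation {
    public static void main(String[] args) {
        // Hypothetical detail records: {record type, quantity, amount}
        String[][] details = {
                {"D", "1", "12.23"},
                {"D", "3", "-13.334"},
                {"D", "2", "12"}
        };
        String trailerTotal = "-3.772"; // hypothetical trailer charge matching the rows above

        BigDecimal calChargeCount = BigDecimal.ZERO;
        for (String[] rec : details) {
            String quantity = rec[1].trim();
            String amount = rec[2].trim();
            if (!quantity.isEmpty() && !amount.isEmpty()) {
                // quantity * amount, computed exactly in decimal
                calChargeCount = calChargeCount.add(
                        new BigDecimal(quantity).multiply(new BigDecimal(amount)));
            }
        }
        // compareTo ignores trailing zeros, so "12.0" and "12.00" still compare as equal
        boolean validflag = new BigDecimal(trailerTotal).compareTo(calChargeCount) == 0;
        System.out.println(validflag); // true
    }
}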
I am updating this answer to include some notes from the Javadoc of BigDecimal's double constructor.
The results of this constructor can be somewhat unpredictable. One might assume that writing new BigDecimal(0.1) in Java creates a BigDecimal which is exactly equal to 0.1 (an unscaled value of 1, with a scale of 1), but it is actually equal to 0.1000000000000000055511151231257827021181583404541015625. This is because 0.1 cannot be represented exactly as a double (or, for that matter, as a binary fraction of any finite length). Thus, the value that is being passed in to the constructor is not exactly equal to 0.1, appearances notwithstanding.
The String constructor, on the other hand, is perfectly predictable: writing new BigDecimal("0.1") creates a BigDecimal which is exactly equal to 0.1, as one would expect. Therefore, it is generally recommended that the String constructor be used in preference to this one.
When a double must be used as a source for a BigDecimal, note that this constructor provides an exact conversion; it does not give the same result as converting the double to a String using the Double.toString(double) method and then using the BigDecimal(String) constructor. To get that result, use the static valueOf(double) method.
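A short, self-contained demonstration of the difference the Javadoc describes (the printed values are shown in the comments):
import java.math.BigDecimal;

public class ConstructorDemo {
    public static void main(String[] args) {
        // Exact value of the double nearest to 0.1
        System.out.println(new BigDecimal(0.1));
        // 0.1000000000000000055511151231257827021181583404541015625
        System.out.println(new BigDecimal("0.1"));   // 0.1
        System.out.println(BigDecimal.valueOf(0.1)); // 0.1 (goes through Double.toString)
    }
}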

It's not a matter of size; it's a matter of expressing floating point numbers exactly. See "How the BigDecimal Class Helps Java Get its Arithmetic Right".

Related

Java convert/cast object to Double but prevent round?

Object num = 12334555578912349.13;
System.out.println(BigDecimal.valueOf(((Number) num).doubleValue()).setScale(2, BigDecimal.ROUND_HALF_EVEN));
I expect the value to be 12334555578912349.13
but the output is 12334555578912350.00
How can I prevent it from rounding?
Object num = 12334555578912349.13;
You already lost here. As per the Java spec, the literal text '12334555578912349.13' in your source file is interpreted as a double. A double value, given that computers aren't magic, is a 64-bit value, and thus at most 2^64 numbers are even representable by it. Let's call these the 'blessed numbers'. 12334555578912349.13 is not one of the blessed numbers; the nearest blessed number is 12334555578912350.0, which is what that literal 'compiles' to. Once you're at 12334555578912350.0, there's no way back.
The solution, then, is to never have '12334555578912349.13' as a literal in your source file, as that immediately loses you the game.
Here's how we avoid ever having a double anywhere:
var bd = new BigDecimal("12334555578912349.13");
System.out.println(bd);
In general, if you want to provide meaningful guarantees about precision and your code contains the word double anywhere in it, it is broken. Do a quick search through the code for the word 'double' and keep eliminating its uses until that search returns 0 hits.
Alternatives that can provide guarantees about precision:
BigDecimal, of course. Note that BigDecimal can't divide without being told how to round the result, for the same reason that 1/3 becomes 0.333333... and never ends (see the sketch after this list).
Eliminate the fraction. For example, if that represents the GDP of a nation, store it as cents and not as euros, in a long and not a double.
If the thing doesn't represent a number that you ever intend to do any math on (for example, it's a social security number or an ISBN code or some such), store it as a String.
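A small sketch of the first two alternatives; the rounding mode, scale and the cents example are illustrative choices, not requirements:
import java.math.BigDecimal;
import java.math.RoundingMode;

public class Alternatives {
    public static void main(String[] args) {
        // BigDecimal division must be told how to round, otherwise a
        // non-terminating result such as 1/3 throws ArithmeticException.
        BigDecimal third = BigDecimal.ONE.divide(new BigDecimal("3"), 10, RoundingMode.HALF_UP);
        System.out.println(third); // 0.3333333333

        // Eliminating the fraction: keep money as whole cents in a long.
        long priceInCents = 1999;             // 19.99 represented exactly
        long totalInCents = priceInCents * 3; // exact integer arithmetic
        System.out.printf("%d.%02d%n", totalInCents / 100, totalInCents % 100); // 59.97
    }
}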

Is it sufficient to convert a double to a BigDecimal just before addition to retain original precision?

We are solving a numeric-precision-related bug. Our system collects some numbers and outputs their sum.
The issue is that the system does not retain the numeric precision, e.g. 300.7 + 400.9 = 701.599..., while the expected result would be 701.6. The precision is supposed to adapt to the input values, so we cannot just round results to a fixed precision.
The problem is obvious, we use double for the values and addition accumulates the error from the binary representation of the decimal value.
The path of the data is following:
XML file, type xsd:decimal
Parse into a Java primitive double. Its roughly 15 significant decimal digits should be enough; we expect values no longer than 10 digits total, 5 of them fraction digits.
Store into a MySQL 5.5 database, column type double
Load via Hibernate into a JPA entity, i.e. still a primitive double
Sum a bunch of these values
Print the sum into another XML file
Now, I assume the optimal solution would be converting everything to a decimal format. Unsurprisingly, there is pressure to go with the cheapest solution. It turns out that converting the doubles to BigDecimal just before adding a couple of numbers works, as case B in the following example shows:
import java.math.BigDecimal;

public class Arithmetic {
    public static void main(String[] args) {
        double a = 0.3;
        double b = -0.2;
        // A
        System.out.println(a + b); // 0.09999999999999998
        // B
        System.out.println(BigDecimal.valueOf(a).add(BigDecimal.valueOf(b))); // 0.1
        // C
        System.out.println(new BigDecimal(a).add(new BigDecimal(b))); // 0.099999999999999977795539507496869191527366638183593750
    }
}
More about this:
Why do we need to convert the double into a string, before we can convert it into a BigDecimal?
Unpredictability of the BigDecimal(double) constructor
I am worried that such a workaround would be a ticking bomb.
First, I am not so sure that this arithmetic is bulletproof for all cases.
Second, there is still some risk that someone in the future might implement some changes and change B to C, because this pitfall is far from obvious and even a unit test may fail to reveal the bug.
I would be willing to live with the second point but the question is: Would this workaround provide correct results? Could there be a case where somehow
Double.valueOf("12345.12345").toString().equals("12345.12345")
is false? Double.toString, according to its Javadoc, prints just the digits needed to uniquely represent the underlying double value, so parsing the result again yields the same double value. Isn't that sufficient for this use case, where I only need to add the numbers and print the sum using this magical Double.toString(double) method? To be clear, I do prefer what I consider the clean solution, using BigDecimal everywhere, but I am short of arguments to sell it; ideally I would like an example where converting to BigDecimal before addition fails to do the job described above.
If you can't avoid parsing into a primitive double or storing as a double, you should convert to BigDecimal as early as possible.
double can't exactly represent most decimal fractions. The value in double x = 7.3; will never be exactly 7.3, but something very close to it, with a difference visible from roughly the 16th significant digit onwards (the exact value has 50 decimal places or so). Don't be misled by the fact that printing might give exactly "7.3": printing already does some rounding and doesn't show the number exactly.
If you do lots of computations with double numbers, the tiny differences will eventually sum up until they exceed your tolerance. So using doubles in computations where decimal fractions are needed, is indeed a ticking bomb.
[...] we expect values no longer than 10 digits total, 5 fraction digits.
I read that assertion to mean that all numbers you deal with are to be exact multiples of 0.00001, without any further digits. You can convert doubles to such BigDecimals with
BigDecimal.valueOf(Math.round(doubleVal * 100000), 5)
This will give you an exact representation of a number with 5 decimal fraction digits, the 5-fraction-digits one that's closest to the input doubleVal. This way you correct for the tiny differences between the doubleVal and the decimal number that you originally meant.
If you'd simply use BigDecimal.valueOf(double val), you'd go through the string representation of the double you're using, which can't guarantee that it's what you want. It depends on a rounding process inside the Double class which tries to represent the double-approximation of 7.3 (being maybe 7.30000000000000123456789123456789125) with the most plausible number of decimal digits. It happens to result in "7.3" (and, kudos to the developers, quite often matches the "expected" string) and not "7.300000000000001" or "7.3000000000000012" which both seem equally plausible to me.
That's why I recommend not to rely on that rounding, but to do the rounding yourself by decimal shifting 5 places, then rounding to the nearest long, and constructing a BigDecimal scaled back by 5 decimal places. This guarantees that you get an exact value with (at most) 5 fractional decimal places.
Then do your computations with the BigDecimals (using the appropriate MathContext for rounding, if necessary).
When you finally have to store the number as a double, use BigDecimal.doubleValue(). The resulting double will be close enough to the decimal value that the above-mentioned conversion will surely give you the same BigDecimal that you had before (unless you have really huge numbers, like 10 digits before the decimal point - then you're lost with double anyway).
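A compact sketch of the round-trip described above; the method name and the sample values are illustrative:
import java.math.BigDecimal;

public class FiveDigitConversion {
    // Snap a double to the nearest value with at most 5 decimal fraction digits.
    static BigDecimal toFiveDigits(double val) {
        return BigDecimal.valueOf(Math.round(val * 100000), 5);
    }

    public static void main(String[] args) {
        double a = 300.7;
        double b = 400.9;
        BigDecimal sum = toFiveDigits(a).add(toFiveDigits(b));
        System.out.println(sum);               // 701.60000
        System.out.println(sum.doubleValue()); // 701.6 when going back to a double
    }
}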
P.S. Be sure to use BigDecimal only if decimal fractions are relevant to you - there were times when the British Shilling currency consisted of twelve Pence. Representing fractional Pounds as BigDecimal would give a disaster much worse than using doubles.
It depends on the database you are using. If you are using SQL Server you can use the data type numeric(12, 8), where 12 is the total number of digits (precision) and 8 is the number of digits after the decimal point (scale). Similarly, for MySQL you can use DECIMAL(5,2).
You won't lose any precision if you use the above-mentioned data types.
Java Hibernate Class :
You can define
private double latitude;
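For a column declared with one of the DECIMAL types above, the usual Java-side counterpart is a BigDecimal field. A hedged sketch of such a JPA/Hibernate mapping (the entity and column names are made up):
import java.math.BigDecimal;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;

@Entity
public class Charge {
    @Id
    private Long id;

    // Maps to a DECIMAL(12,8) column: 12 digits of precision, 8 of scale.
    @Column(precision = 12, scale = 8)
    private BigDecimal amount;
}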

Is double the correct datatype to calculate decimal percentage?

I want to calculate a percentage in my project and I am using double for that. Suggest the correct data type to calculate a percentage as a decimal; for example, 18% of 5.368 should give exactly 0.966.
I want the result truncated to 3 decimal places.
I am using this:
EditText kundan = (EditText) findViewById(R.id.kundan);
double kundangiven = Double.parseDouble(kundan.getText().toString());
EditText loss = (EditText) findViewById(R.id.losspercentage);
double lossinkundan = Double.parseDouble(loss.getText().toString());
losspercent = (lossinkundan * kundangiven) / 100;
losspercent = losspercent % 10;
displayTotalloss(losspercent);
Your original code works fine as far as storing percentage calculations in doubles is concerned. To get the result to 3 digits of precision for display, use String.format(), e.g.
String.format("%.3f", losspercent);
This returns a String, so it can be returned from a function, passed directly to your display function, or stored in a variable of type String.
It sounds like you are asking whether you are using the correct data type to store your computed percentage, or whether there is a better option.
Given your present code structure, I would say that yes, double is the right choice. Any time you are dividing (or multiplying for that matter) non-integer numbers, there is a good chance that an exact result requires a higher level of precision for the output than for the inputs. The memory cost of using double instead of float here is probably negligible, so double seems like the obvious choice.
If you are really, really concerned about accuracy, then you could use BigDecimal. BigDecimal is not a primitive though, so it would cost more in memory and processing (although still probably not noticeable in this example).
Of course, if you really want to use the "right" data type, and you have control over the code base, you could create your own data type. I think it is unnecessary here though. double is perfectly suitable. If you are concerned about readability, you may consider creating a function that takes a double x and an int p, and returns a double representing p percent of x.
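A sketch of the helper function suggested above, combined with the truncation to 3 decimal places the asker wants; the name and the truncation approach are illustrative and assume non-negative inputs:
public class PercentUtil {
    // Returns p percent of x, truncated (not rounded) to 3 decimal places.
    static double percentOf(double x, int p) {
        double raw = (x * p) / 100.0;
        return Math.floor(raw * 1000) / 1000.0;
    }

    public static void main(String[] args) {
        System.out.println(percentOf(5.368, 18)); // 0.966
    }
}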

Why does the nextUp method in the Math class skip some values?

I was just messing around with this method to see what it does. I created a variable with the value 3.14 just because it came to mind at that moment.
double n = 3.14;
System.out.println(Math.nextUp(n));
The preceding displayed 3.1400000000000006.
Tried with 3.1400000000000001, displayed the same.
Tried with 333.33, displayed 333.33000000000004.
With many other values it displays the expected next value; for example, 73.6 results in 73.60000000000001.
What happens to the values between 3.1400000000000000 and 3.1400000000000006? Why does it skip some values? I know about the hardware-related limitations, but sometimes it seems to work correctly. Also, given that it is known that precise decimal operations cannot be done, why is such a method included in the library? It looks pretty useless, since it doesn't always work as expected.
One useful trick in Java is to use the exactness of new BigDecimal(double) and of BigDecimal's toString to show the exact value of a double:
import java.math.BigDecimal;

public class Test {
    public static void main(String[] args) {
        System.out.println(new BigDecimal(3.14));
        System.out.println(new BigDecimal(3.1400000000000001));
        System.out.println(new BigDecimal(3.1400000000000006));
    }
}
Output:
3.140000000000000124344978758017532527446746826171875
3.140000000000000124344978758017532527446746826171875
3.1400000000000005684341886080801486968994140625
There are a finite number of doubles, so only a specific subset of the real numbers are the exact value of a double. When you create a double literal, the decimal number you type is represented by the nearest of those values. When you output a double, by default, it is shown as the shortest decimal fraction that would round to it on input. You need to do something like the BigDecimal technique I used in the program to see the exact value.
In this case, both 3.14 and 3.1400000000000001 are closer to 3.140000000000000124344978758017532527446746826171875 than to any other double. The next exactly representable number above that is 3.1400000000000005684341886080801486968994140625
Floating point numbers are stored in binary: the decimal representation is just for human consumption.
Using Rick Regan's decimal to floating point converter 3.14 converts to:
11.001000111101011100001010001111010111000010100011111
and 3.1400000000000006 converts to
11.0010001111010111000010100011110101110000101001
which is indeed the next binary number to 53 significant bits.
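If you want to inspect those bits from Java itself rather than an online converter, a small sketch using the standard Double.doubleToLongBits:
public class BitsDemo {
    public static void main(String[] args) {
        // Raw IEEE 754 bit patterns of the two doubles from the question.
        long a = Double.doubleToLongBits(3.14);
        long b = Double.doubleToLongBits(3.1400000000000006);
        System.out.println(Long.toBinaryString(a));
        System.out.println(Long.toBinaryString(b));
        System.out.println(b - a); // 1: the bit patterns are numerically adjacent
    }
}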
As #jgreve mentions, this is due to the use of the float and double primitive types in Java, which leads to so-called rounding error. The primitive type int, on the other hand, is a fixed-point type that fits exactly within 32 bits. Doubles are not fixed-point, meaning that the result of double calculations must often be rounded in order to fit back into their finite representation, which sometimes (as in your case) leads to surprising values.
See the following two links for more info.
https://stackoverflow.com/a/322875/6012392
https://en.wikipedia.org/wiki/Double-precision_floating-point_format
A workaround could be one of the following two, which give a "direction" to the first double:
double n = 1.4;
double x = 1.5;
System.out.println(Math.nextAfter(n, x));
or
double n = 1.4;
double next = n + Math.ulp(n);
System.out.println(next);
But to handle floating point values exactly, it is recommended to use the BigDecimal class.

Why does new BigDecimal("0.015").compareTo(new BigDecimal(0.015)) return -1? [duplicate]

Why does new BigDecimal("0.015").compareTo(new BigDecimal(0.015)) return -1?
If I expect those two to be equal, is there an alternative way to compare them?
Due to the imprecise nature of floating point arithmetic, they're not exactly equal
System.out.println(new BigDecimal(0.015));
displays
0.01499999999999999944488848768742172978818416595458984375
To expand on the answer from #Reimeus: the various constructors of BigDecimal accept different types of input. The floating point constructor takes a double as input, and because of the way floats and doubles are stored, it can only hold exactly those values that are sums of powers of 2.
So, for example, 2⁻², or 0.25, can be represented exactly. 0.875 is 2⁻¹ + 2⁻² + 2⁻³, so it can also be represented exactly. As long as the number can be written as a sum of powers of two whose exponents span no more than 53, the number can be represented exactly. The vast majority of numbers don't fit this pattern!
In particular, 0.015 is not a power of two, nor is it such a sum of powers of two, so its double representation is not exact.
The String constructor, on the other hand, does store it exactly, by using a different internal format to hold the number. Hence, when you compare the two, they compare as different.
A double cannot exactly represent the value 0.015. The closest value it can represent in its 64 binary bits is 0.01499999999999999944488848768742172978818416595458984375. The constructor new BigDecimal(double) is designed to preserve the precise value of the double argument, which can never be exactly 0.015. Hence the result of your comparison.
However, if you display that double value, for example by:
System.out.println(0.01499999999999999944488848768742172978818416595458984375);
it outputs 0.015 – which hints at a workaround. Converting a double to a String chooses the shortest decimal representation needed to distinguish it from other possible double values.
Thus, if you create a BigDecimal from the double's String representation, it will have a value more as you expect. This comparison is true:
new BigDecimal(Double.toString(0.015)).equals(new BigDecimal("0.015"))
In fact, the method BigDecimal.valueOf(double) exists for exactly this purpose, so you can shorten the above to:
BigDecimal.valueOf(0.015).equals(new BigDecimal("0.015"))
You should use the new BigDecimal(double) constructor only if your purpose is to preserve the precise binary value of the argument. Otherwise, call BigDecimal.valueOf(double), whose documentation says:
This is generally the preferred way to convert a double (or float) into a BigDecimal.
Or, use a String if you can and avoid the subtleties of double entirely.
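Putting the constructions side by side (results shown in the comments):
import java.math.BigDecimal;

public class CompareDemo {
    public static void main(String[] args) {
        BigDecimal fromString = new BigDecimal("0.015");
        // The double constructor preserves the exact binary value, so the two differ:
        System.out.println(fromString.compareTo(new BigDecimal(0.015)) == 0);     // false
        // Going through the double's String representation recovers 0.015:
        System.out.println(fromString.compareTo(BigDecimal.valueOf(0.015)) == 0); // true
        System.out.println(fromString.equals(new BigDecimal(Double.toString(0.015)))); // true
    }
}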
What actually happens here is this:
0.015 is a primitive double literal, which means that as soon as you write it, it is already no longer 0.015 but rather 0.0149.... The compiler stores its binary representation in the bytecode.
BigDecimal is constructed to store exactly whatever is given to it. In this case, 0.0149...
BigDecimal is also able to parse Strings into exact representations. In this case "0.015" is parsed into exactly 0.015. Even though a double cannot represent that number, a BigDecimal can.
Finally, when you compare them, you can see that they are not equal. Which makes sense.
Whenever using BigDecimal, be cautious of the previously used type. String, int, long will remain exact. float and double have the usual precision caveat.
