Java convert/cast object to Double but prevent round?

Object num = 12334555578912349.13;
System.out.println(BigDecimal.valueOf(((Number) num).doubleValue()).setScale(2, BigDecimal.ROUND_HALF_EVEN));
I expect the value to be 12334555578912349.13
but the output is 12334555578912350.00
How can I prevent it from rounding?

Object num = 12334555578912349.13;
You already lost here. As per the Java spec, the literal text '12334555578912349.13' in your source file is interpreted as a double. A double value, given that computers aren't magic, is a 64-bit value, and thus only at most 2^64 numbers are even representable by it. Let's call these the 'blessed numbers'. 12334555578912349.13 is not one of the blessed numbers. The nearest blessed number to it is 12334555578912350.0, which is what that value 'compiles' to. Once you're at 12334555578912350.0, there's no way back.
The solution then is to never have '12334555578912349.13' as literal in your source file, as that immediately loses you the game if you do that.
Here's how we avoid ever having a double anywhere:
var bd = new BigDecimal("12334555578912349.13");
System.out.println(bd);
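If you still need the two-decimal scale from the original snippet, the same idea carries over; a small sketch (RoundingMode.HALF_EVEN being the non-deprecated form of ROUND_HALF_EVEN):
import java.math.BigDecimal;
import java.math.RoundingMode;

public class ExactScale {
    public static void main(String[] args) {
        // The value stays a String end to end; no double is ever created.
        BigDecimal bd = new BigDecimal("12334555578912349.13");
        System.out.println(bd.setScale(2, RoundingMode.HALF_EVEN)); // 12334555578912349.13
    }
}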
In general, if you want to provide meaningful guarantees about precision and your code contains the word double anywhere in it, you broke it. Do a quick search through the code for the word 'double' and keep eliminating its use until that search returns 0 hits.
Alternatives that can provide guarantees about precision:
BigDecimal, of course. Note that BD can't divide without specifying rules about how to round it (for the same reason 1/3 becomes 0.333333... never ends).
Eliminate the fraction. For example, if that represents the GDP of a nation, store it as cents and not as euros, in a long and not a double (see the sketch after this list).
If the thing doesn't represent a number that you ever intend to do any math on (for example, it's a social security number or an ISBN code or some such), store it as a String.
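As a minimal sketch of the 'eliminate the fraction' option mentioned above (class and method names here are just illustrative):
import java.math.BigDecimal;

// Sketch: store the amount as whole cents in a long, never as euros in a double.
public class Money {
    private final long cents;                      // 123455 represents 1234.55 EUR

    public Money(long cents) { this.cents = cents; }

    public Money plus(Money other) {               // exact integer arithmetic, no drift
        return new Money(Math.addExact(this.cents, other.cents));
    }

    @Override
    public String toString() {                     // format only at the output edge
        return BigDecimal.valueOf(cents, 2).toPlainString() + " EUR";
    }

    public static void main(String[] args) {
        System.out.println(new Money(123455).plus(new Money(45))); // 1235.00 EUR
    }
}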

Related

Is it sufficient to convert a double to a BigDecimal just before addition to retain original precision?

We are solving a numeric-precision-related bug. Our system collects some numbers and spits out their sum.
The issue is that the system does not retain the numeric precision, e.g. 300.7 + 400.9 = 701.599..., while the expected result would be 701.6. The precision is supposed to adapt to the input values, so we cannot just round results to a fixed precision.
The problem is obvious, we use double for the values and addition accumulates the error from the binary representation of the decimal value.
The path of the data is following:
XML file, type xsd:decimal
Parse into a Java primitive double. Its 15 significant decimal digits should be enough; we expect values no longer than 10 digits total, with 5 fraction digits.
Store into a MySQL 5.5 DB, column type double
Load via Hibernate into a JPA entity, i.e. still a primitive double
Sum a bunch of these values
Print the sum into another XML file
Now, I assume the optimal solution would be converting everything to a decimal format. Unsurprisingly, there is pressure to go with the cheapest solution. It turns out that converting doubles to BigDecimal just before adding a couple of numbers works, as case B in the following example shows:
import java.math.BigDecimal;

public class Arithmetic {
    public static void main(String[] args) {
        double a = 0.3;
        double b = -0.2;
        // A
        System.out.println(a + b); // 0.09999999999999998
        // B
        System.out.println(BigDecimal.valueOf(a).add(BigDecimal.valueOf(b))); // 0.1
        // C
        System.out.println(new BigDecimal(a).add(new BigDecimal(b))); // 0.099999999999999977795539507496869191527366638183593750
    }
}
More about this:
Why do we need to convert the double into a string, before we can convert it into a BigDecimal?
Unpredictability of the BigDecimal(double) constructor
I am worried that such a workaround would be a ticking bomb.
First, I am not so sure that this arithmetic is bullet proof for all cases.
Second, there is still some risk that someone in the future might implement some changes and change B to C, because this pitfall is far from obvious and even a unit test may fail to reveal the bug.
I would be willing to live with the second point but the question is: Would this workaround provide correct results? Could there be a case where somehow
Double.valueOf("12345.12345").toString().equals("12345.12345")
is false? Double.toString, according to its javadoc, prints just the digits needed to uniquely represent the underlying double value, so when parsed again it gives the same double value. Isn't that sufficient for this use case, where I only need to add the numbers and print the sum, with this magical Double.toString(double d) method? To be clear, I do prefer what I consider the clean solution, using BigDecimal everywhere, but I am kind of short of arguments to sell it, by which I mean ideally an example where conversion to BigDecimal before addition fails to do the job described above.
If you can't avoid parsing into primitive double or store as double, you should convert to BigDecimal as early as possible.
double can't exactly represent decimal fractions. The value in double x = 7.3; will never be exactly 7.3, but something very very close to it, with a difference visible from about the 16th significant digit onwards (the exact binary value has around 50 decimal places). Don't be misled by the fact that printing might give exactly "7.3", as printing already does some kind of rounding and doesn't show the number exactly.
If you do lots of computations with double numbers, the tiny differences will eventually sum up until they exceed your tolerance. So using doubles in computations where decimal fractions are needed, is indeed a ticking bomb.
[...] we expect values no longer than 10 digits total, 5 fraction digits.
I read that assertion to mean that all numbers you deal with are meant to be exact multiples of 0.00001, without any further digits. You can convert doubles to such BigDecimals with
BigDecimal.valueOf(Math.round(doubleVal * 100000), 5)
This will give you an exact representation of a number with 5 decimal fraction digits, the 5-fraction-digits one that's closest to the input doubleVal. This way you correct for the tiny differences between the doubleVal and the decimal number that you originally meant.
If you'd simply use BigDecimal.valueOf(double val), you'd go through the string representation of the double you're using, which can't guarantee that it's what you want. It depends on a rounding process inside the Double class which tries to represent the double-approximation of 7.3 (being maybe 7.30000000000000123456789123456789125) with the most plausible number of decimal digits. It happens to result in "7.3" (and, kudos to the developers, quite often matches the "expected" string) and not "7.300000000000001" or "7.3000000000000012" which both seem equally plausible to me.
That's why I recommend not to rely on that rounding, but to do the rounding yourself by decimal shifting 5 places, then rounding to the nearest long, and constructing a BigDecimal scaled back by 5 decimal places. This guarantees that you get an exact value with (at most) 5 fractional decimal places.
Then do your computations with the BigDecimals (using the appropriate MathContext for rounding, if necessary).
When you finally have to store the number as a double, use BigDecimal.doubleValue(). The resulting double will be close enough to the decimal that the above-mentioned conversion will surely give you the same BigDecimal that you had before (unless you have really huge numbers, like 10 digits before the decimal point - then you're lost with double anyway).
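A short sketch of that shift-round-scale conversion and the round trip back to double (variable names are illustrative):
import java.math.BigDecimal;

public class FiveDigitConversion {
    public static void main(String[] args) {
        double doubleVal = 300.7;   // e.g. parsed from the XML or loaded from the DB

        // Shift by 5 decimal places, round to the nearest long, scale back:
        // this yields the exact 5-fraction-digit decimal closest to doubleVal.
        BigDecimal exact = BigDecimal.valueOf(Math.round(doubleVal * 100000), 5);
        System.out.println(exact); // 300.70000

        // Do the computations on BigDecimal ...
        BigDecimal sum = exact.add(BigDecimal.valueOf(Math.round(400.9 * 100000), 5));
        System.out.println(sum);   // 701.60000

        // ... and if the result must go back into a double column, doubleValue()
        // stays close enough to survive the same conversion again.
        double stored = sum.doubleValue();
        System.out.println(BigDecimal.valueOf(Math.round(stored * 100000), 5)); // 701.60000
    }
}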
P.S. Be sure to use BigDecimal only if decimal fractions are relevant to you - there were times when the British Shilling currency consisted of twelve Pence. Representing fractional Pounds as BigDecimal would give a disaster much worse than using doubles.
It depends on the database you are using. If you are using SQL Server you can use the data type numeric(12, 8), where 12 is the total number of digits (the precision) and 8 is the number of digits after the decimal point (the scale). Similarly, for MySQL you can use DECIMAL(5,2).
You won't lose any precision if you use the above-mentioned data types.
In your Java Hibernate entity class you can define:
private double latitude;

Number too big for BigInteger

I'm developing a chemistry app, and I need to include the Avogadro's number:
(602200000000000000000000)
I don't really know if I can use scientific notation to represent it as 6.022 x 10^23 (I can't type the exponent as a superscript).
I first used double, then long and now, I used java.math.BigInteger.
But it still says it's too big. What can I do, or is this just too much for the system?
Pass it to the BigInteger constructor as a String, and it works just fine.
BigInteger a = new BigInteger("602200000000000000000000");
a = a.multiply(new BigInteger("2"));
System.out.println(a);
Output: 1204400000000000000000000
First of all, you need to check your physics / chemistry text book.
Avogadro's number is not 602,200,000,000,000,000,000,000. It is approximately 6.022 x 10^23. The key word is "approximately". As of 2019, the precise value is 6.02214076 x 10^23 mol^-1.
(In 2015, when I originally wrote this reply, the current best approximation for Avogadro's number was 6.022140857(74) x 10^23 mol^-1, and the relative error was +/- 1.2 x 10^-8. In 2019, the SI redefined the mole / Avogadro's number to be the precise value above. Source: Wikipedia)
My original (2015) answer was that since the number only needed 8 decimal digits precision, the Java double type was an appropriate type to represent it. Hence, I recommended:
final double AVOGADROS_CONSTANT = 6.02214076E23;
Clearly, neither int nor long can represent this number. A float could, but not with enough precision (assuming we use the best available measured value).
Now (post 2019) a BigInteger is the simplest correct representation.
Now to your apparent problems with declaring the constant as (variously) a double, a long and a BigInteger.
I expect you did something like this:
double a = 602200000000000000000000;
and so on. That isn't going to work, but the reason it won't work needs to be explained. The problem is that the number is being supplied as an int literal. An int cannot be that big. The largest possible int value is 2^31 - 1 ... which is a little bit bigger than 2 x 10^9.
That is what the Java compiler was complaining about. The literal is too big to be an int.
It is too big for a long literal as well. (Do the math: the largest long is 2^63 - 1, which is roughly 9.2 x 10^18.)
But it is not too big for a double literal ... provided that you write it correctly.
The solution using BigInteger(String) works because it side-steps the problem of representing the number as a numeric literal by using a string instead, and parsing it at runtime. That's OK from the perspective of the language, but (IMO) wrong because the extra precision is an illusion.
You can use E notation to write the scientific notation:
double a = 6.022e23;
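If you do want the exact post-2019 value as an integer, one minimal BigInteger sketch could look like this:
import java.math.BigInteger;

public class Avogadro {
    public static void main(String[] args) {
        // Since the 2019 SI redefinition the constant is exactly 6.02214076 x 10^23.
        BigInteger avogadro = new BigInteger("602214076").multiply(BigInteger.TEN.pow(15));
        System.out.println(avogadro); // 602214076000000000000000
    }
}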
The problem is with how you're trying to create it (most likely), not because it can't fit.
If you have just a number literal in your code (even if you try to assign it to a double or long), this is first treated as an integer (before being converted to the type it needs to be), and the number you have can't fit into an integer.
// Even though this number can fit into a long, it won't compile, because it's first treated
// as an integer.
long l = 123456788901234;
To create a long, you can add L to your number, so 602200000000000000000000L, although it won't fit into a long either - the max value is 2^63 - 1.
To create a double, you can add .0 to your number, so 602200000000000000000000.0 (or 6.022e23 as Guffa suggested), although you should not use this if you want precise values, as you may lose some accuracy because of the way it stores the value.
To create a BigInteger, you can use the constructor taking a string parameter:
new BigInteger("602200000000000000000000");
Most probably you are using a long to initialize the BigInteger. Since long can only represent 64-bit numbers, your number would be too big to fit into a long. Using a String would help.

Subtraction of numbers double and long

In my Java program there is code like this:
int f_part = (int) ((f_num - num) * 100);
f_num is a double and num is a long. I just want to take the fractional part out and assign it to f_part. But sometimes the f_part value is one less than it should be; for example, if f_num = 123.55 and num = 123, then f_part equals 54. And it happens only when f_num and num are greater than 100. I don't know why this is happening. Can someone please explain why this happens and how to correct it?
This is due to the limited precision in doubles.
The root of your problem is that the literal 123.55 actually represents the value 123.54999....
It may seem like it holds the value 123.55 if you print it:
System.out.println(123.55); // prints 123.55
but in fact, the printed value is an approximation. This can be revealed by creating a BigDecimal out of it, (which provides arbitrary precision) and print the BigDecimal:
System.out.println(new BigDecimal(123.55)); // prints 123.54999999999999715...
You can solve it by going via Math.round, but then you would have to know how many decimals the source double actually entails, or you could choose to go through the string representation of the double, which in fact goes through a fairly intricate algorithm.
If you're working with currencies, I strongly suggest you either
Let prices etc. be represented by BigDecimal, which allows you to store numbers such as 0.1 accurately, or
Let an int store the number of cents (as opposed to having a double store the number of dollars).
Both ways are perfectly acceptable and used in practice.
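A minimal sketch of the BigDecimal route for this particular example (assuming the source value really is meant to be the decimal 123.55):
import java.math.BigDecimal;

public class FractionalPart {
    public static void main(String[] args) {
        double f_num = 123.55;
        long num = 123;
        // BigDecimal.valueOf goes through Double.toString, so 123.55 is recovered
        // as the decimal 123.55 rather than the underlying 123.549999...
        int f_part = BigDecimal.valueOf(f_num)
                .subtract(BigDecimal.valueOf(num))
                .movePointRight(2)
                .intValue();
        System.out.println(f_part); // 55
    }
}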
From The Floating-Point Guide:
Internally, computers use a format (binary floating-point) that cannot accurately represent a number like 0.1, 0.2 or 0.3 at all. When the code is compiled or interpreted, your "0.1" is already rounded to the nearest number in that format, which results in a small rounding error even before the calculation happens.
It looks like you're calculating money values. double is a completely inappropriate format for this. Use BigDecimal instead.
int f_part = (int) Math.round(((f_num - num) * 100));
This is one of the most often asked (and answered) questions. Floating point arithmetic cannot produce exact results, because it's impossible to fit an infinity of real numbers inside 64 bits. Use BigDecimal if you need arbitrary precision.
Floating point arithmetic is not as simple as it may seem and there can be precision issues.
See Why can't decimal numbers be represented exactly in binary?, What Every Computer Scientist Should Know About Floating-Point Arithmetic for details.
If you need absolutely sure precision, you might want to use BigDecimal.

Regarding Big Decimal

I have a CSV file where amount and quantity fields are present in each detail record, except for the header and trailer records. The trailer record has a total charge value, which is the sum of quantity multiplied by amount over the detail records. I need to check whether the trailer's total charge value is equal to the value I calculate from the amount and quantity fields. I am using the double data type for all these calculations.
In the CSV file the amount field appears as "10.12" or "10" or "10.0" or "10.456" or "10.4555" or "-10.12". The amount can be positive or negative.
In csv file
H,ABC.....
"D",....,"1","12.23"
"D",.....,"3","-13.334"
"D",......,"2","12"
T,csd,123,12.345
While validating, I have the code below:
double detChargeCount = 0;
// From the CSV file I read the trailer record's charge value
String totChargeValue = items[3].replaceAll("\"", "").trim();
if (null != totChargeValue && !totChargeValue.equals("")) {
    detChargeCount = new Double(totChargeValue).doubleValue();
    if (detChargeCount == calChargeCount)
        validflag = true;
}
While reading the CSV file, I have the code below:
if (null != chargeQuan && !chargeQuan.equals("")) {
    tmpChargeQuan = new Long(chargeQuan).longValue();
}
if (null != chargeAmount && !chargeAmount.equals("")) {
    tmpChargeAmt = new Double(chargeAmount).doubleValue();
    calChargeCount = calChargeCount + (tmpChargeQuan * tmpChargeAmt);
}
I have declared the variables tmpChargeQuan, tmpChargeAmt and calChargeCount as double.
When I searched the web I came to know that double can give issues in financial calculations, so BigDecimal should be used. But I am wondering whether that applies to my calculation. In my case the amount value can have up to 5 or 6 digits after the decimal point. Can I use the double data type for this calculation? I am only using it for validation. Will it create a problem if I use the above code, with the multiplication done in double?
I'll expand on what Adeel has already succinctly answered. You can fit those numbers into a double data type. The problem is: when those numbers get calculated, will they be calculated correctly? The answer is no - they will not. Generally that's not much of a problem if you account for it with a delta, that is, your leeway in deciding whether or not one double value is equivalent to another. But for calculations involving exact numbers, such as monetary calculations, you must use a type such as BigDecimal to hold the values.
When you have this number:
1.23445
as a double, it may look like 1.23445
but it may actually be something like
1.234450000003400345543034
When you perform multiple calculations on numbers such as that, generally those extra places don't matter - however, over time, they will yield inaccurate results. With BigDecimal, when a number is specified as its String representation, it is that number - it does not suffer the "almost as good" problem doubles do.
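For the CSV scenario above, a rough sketch of the running total done with BigDecimal instead of double could look like this (the parsing details are simplified):
import java.math.BigDecimal;

public class ChargeValidation {
    public static void main(String[] args) {
        // Detail records: quantity and amount, already stripped of quotes.
        String[][] details = { { "1", "12.23" }, { "3", "-13.334" }, { "2", "12" } };

        BigDecimal calChargeCount = BigDecimal.ZERO;
        for (String[] detail : details) {
            // The String constructor keeps the decimal values exact.
            calChargeCount = calChargeCount.add(
                    new BigDecimal(detail[0]).multiply(new BigDecimal(detail[1])));
        }

        // Trailer charge value from the file; compareTo ignores scale differences.
        // (With these sample rows the sum is -3.772, so validflag is false here.)
        String totChargeValue = "12.345";
        boolean validflag = calChargeCount.compareTo(new BigDecimal(totChargeValue)) == 0;
        System.out.println(calChargeCount + " valid=" + validflag);
    }
}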
I am updating this answer to include some notes from the double constructor of BigDecimal, found at this address.
The results of this constructor can be somewhat unpredictable. One might assume that writing new BigDecimal(0.1) in Java creates a BigDecimal which is exactly equal to 0.1 (an unscaled value of 1, with a scale of 1), but it is actually equal to 0.1000000000000000055511151231257827021181583404541015625. This is because 0.1 cannot be represented exactly as a double (or, for that matter, as a binary fraction of any finite length). Thus, the value that is being passed in to the constructor is not exactly equal to 0.1, appearances notwithstanding.
The String constructor, on the other hand, is perfectly predictable: writing new BigDecimal("0.1") creates a BigDecimal which is exactly equal to 0.1, as one would expect. Therefore, it is generally recommended that the String constructor be used in preference to this one.
When a double must be used as a source for a BigDecimal, note that this constructor provides an exact conversion; it does not give the same result as converting the double to a String using the Double.toString(double) method and then using the BigDecimal(String) constructor. To get that result, use the static valueOf(double) method.
It's not a matter of size, it's a matter of expressing floating point numbers exactly. See How the BigDecimal Class Helps Java Get Its Arithmetic Right.

data type to represent a big decimal in java

Which data type is apt to represent a decimal number like "10364055.81".
I tried using double:
double d = 10364055.81;
But when I try to print the number, it is displayed as "1.036405581E7", which I don't want.
Should I use BigDecimal? But then it is displayed as 10364055.81000000052154064178466796875.
Is there any data type that displays the value as it is? Also, the number may be bigger than the one taken as an example.
BTW, will using BigDecimal affect the performance of the application? I might use this in almost all my DTOs.
You should use BigDecimal - but use the String constructor, e.g.:
new BigDecimal("10364055.81");
If you pass a double to BigDecimal, Java must create that double first - and since doubles cannot represent most decimal fractions accurately, it does create the value as 10364055.81000000052154064178466796875 and then passes it to the BigDecimal constructor. In this case BigDecimal has no way of knowing that you actually meant the rounder version.
Generally speaking, using non-String constructors of BigDecimal should be considered a warning that you're not getting the full benefit of the class.
Edit - based on rereading exactly what you wanted to do, my initial claim is probably too strong. BigDecimal is a good choice when you need to represent decimal values exactly (money handling being the obvious case; you don't want 5.99 * one million to come out as 5990016.45, for example).
But if you're not worried about the number being stored internally as a very slightly different value to the decimal literal you entered, and just want to print it out again in the same format, then as others have said, an instance of NumberFormat (in this case, new DecimalFormat("########.##")) will do the trick to output the double nicely, or String.format can do much the same thing.
As for performance - BigDecimals will naturally be slower than using primitives. Typically, though, unless the vast majority of your program involves mathematical manipulations, you're unlikely to actually notice any speed difference. That's not to say you should use BigDecimals all over; but rather, that if you can get a real benefit from their features that would be difficult or impossible to realise with plain doubles, then don't sweat the miniscule performance difference they theoretically introduce.
How a number is displayed is distinct from how the number is stored.
Take a look at DecimalFormat for controlling how you can display your numbers when a double (or float etc.).
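For instance, a quick sketch of formatting the question's value with DecimalFormat (the pattern is just one reasonable choice):
import java.text.DecimalFormat;

public class FormatDemo {
    public static void main(String[] args) {
        double d = 10364055.81;
        // This only changes the display, not the binary value stored in the double.
        DecimalFormat df = new DecimalFormat("0.##");
        System.out.println(df.format(d)); // 10364055.81
    }
}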
Note that choosing BigDecimal over double (or vice versa) has pros/cons, and will depend on your requirements. See here for more info. From the summary:
In summary, if raw performance and space are the most important factors, primitive floating-point types are appropriate. If decimal values need to be represented exactly, high-precision computation is needed, or fine control of rounding is desired, only BigDecimal has the needed capabilities.
A double would be enough to hold this number. If your problem is that you don't like the format when printing it or putting it into a String, you might use NumberFormat: http://java.sun.com/javase/6/docs/api/java/text/NumberFormat.html
You can use double and display it with System.out.printf().
double d = 100003.81;
System.out.printf("%.10f", d);
%.10f means: format the double with 10 digits after the decimal point.
