Can we replace using Double with BigDecimal completely, any performance overhead? - java

I have a Java project that deals with a lot of money values, and the project mainly involves:
reading the data from database,
calculations (process data)
showing to users (no inserts or updates in database are required).
I need precision for only some of the money values, not all of them. So I could either:
use doubles where precision is not required, or
use BigDecimal for ALL values.
I want to know whether there will be any performance issues if I use BigDecimal for all the variables. Can I save execution time if I opt for the first choice?
Which way is best? (I am using Java 6.)

Don't use double for money. See: Why not use Double or Float to represent currency?
Using BigDecimal is roughly 100x slower than the built-in primitives, and you can't use +, -, / and * with BigDecimal; you must use the equivalent BigDecimal method calls instead.
An alternative is to use int instead of double, counting cents (or whatever the fractional currency unit is), and then, when formatting output for the user, do the appropriate conversion back to show the values the way the user expects.
If you have really large values, you can use long instead of int.
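A minimal sketch of the long-cents approach described above. The class name, the 8.25% tax rate, and the half-up integer rounding trick are my own illustrative choices, not from the answer; the sketch assumes non-negative amounts.

```java
// Sketch: store money as cents in a long; convert to dollars only for display.
public class CentsDemo {
    public static void main(String[] args) {
        long priceCents = 1999; // $19.99 stored as cents

        // 8.25% tax computed entirely in integer math, rounded half-up:
        // multiply by 825 (the rate scaled by 10000), add half the divisor, divide.
        long taxCents = (priceCents * 825 + 5000) / 10000;

        long totalCents = priceCents + taxCents;

        // Convert back to dollars only when formatting for the user (non-negative only).
        System.out.printf("Total: $%d.%02d%n", totalCents / 100, totalCents % 100);
        // prints "Total: $21.64"
    }
}
```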

It's a trade-off.
With BigDecimal you are working with immutable objects. This means each operation creates new objects, which will certainly have some impact on memory. How much depends on many things: the execution environment, the number and complexity of the calculations, and so on. But you get precision, which is the most important thing when working with money.
With double you can use primitive values, but the precision is poor and doubles are not suitable for money calculations at all.
If I had to suggest one way, I would say: definitely use BigDecimal when dealing with money.
Have you considered moving some of the calculation logic to the DB layer? This can save you a lot in terms of memory and performance, and you will still keep the precision requirement intact.
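The trade-off above can be sketched in a few lines. The values and class name are illustrative; note how every operation returns a new immutable object rather than mutating the receiver.

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

// Sketch: basic BigDecimal money arithmetic with immutable intermediate objects.
public class BigDecimalMoney {
    public static void main(String[] args) {
        BigDecimal price = new BigDecimal("19.99"); // String constructor: exact
        BigDecimal rate  = new BigDecimal("0.0825");

        // Each call allocates a NEW BigDecimal; price and rate are unchanged.
        BigDecimal tax   = price.multiply(rate).setScale(2, RoundingMode.HALF_UP);
        BigDecimal total = price.add(tax);

        System.out.println(tax);   // prints 1.65
        System.out.println(total); // prints 21.64
    }
}
```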

BigDecimal and double are very different types, with very different purposes. Java benefits from having both, and Java programmers should be using both of them appropriately.
The floating point primitives are based on binary to be both space and time efficient. They can be, and typically are, implemented in very fast hardware. double should be used in contexts in which there is nothing special about terminating decimal fractions, and all that is needed is an extremely close approximation to a value that may be fractional, irrational, very big, or very small. There are good implementations of many trig and similar functions for double. See java.lang.Math for some examples.
BigDecimal can represent any terminating decimal fraction exactly, given enough memory. That is very, very good in situations in which terminating decimal fractions have special status, such as many money calculations.
In addition to exact representation of terminating decimal fractions, it could also be used to get a closer approximation to e.g. one third than is possible with double. However, situations in which you need an approximation that is closer than double supplies are very rare. The closest double to one third is 0.333333333333333314829616256247390992939472198486328125, which is close enough for most practical purposes. Try measuring the difference between one third of an inch and 0.3333333333333333 inches.
BigDecimal is supported only in software, does not have the support for mathematical functions that double has, and is much less space- and time-efficient.
If you have a job for which either would work functionally, use double. If you need exact representation of terminating decimal fractions, use BigDecimal.
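The distinction drawn above can be seen with the classic "ten dimes" example. This is my own illustrative sketch: adding 0.1 ten times drifts in binary floating point but stays exact in BigDecimal.

```java
import java.math.BigDecimal;

// Sketch: repeated addition of 0.1 in double vs BigDecimal.
public class TenDimes {
    public static void main(String[] args) {
        double d = 0.0;
        BigDecimal b = BigDecimal.ZERO;
        for (int i = 0; i < 10; i++) {
            d += 0.1;                           // accumulates binary rounding error
            b = b.add(new BigDecimal("0.1"));   // exact decimal arithmetic
        }
        System.out.println(d == 1.0);                         // prints false
        System.out.println(b.compareTo(BigDecimal.ONE) == 0); // prints true
    }
}
```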

Related

When should I use the float type in Java?

I know the float type is an IEEE floating-point type and is not accurate in calculations. For example, if I sum the two floats 8.4 and 2.4, I get 10.7999999 rather than 10.8. I also know BigDecimal can solve this problem, but BigDecimal is much slower than float.
In most real production code we want an accurate value like 10.8, not 10.7999... So my question is: should I avoid using float as much as I can in programming? If not, what are the use cases? I mean in real production code.
If you're handling monetary amounts, then numbers like 8.4 and 2.4 are exact values, and you'll want to use BigDecimal for those. However, if you're doing a physics calculation where you're dealing with measurements, the values 8.4 and 2.4 aren't going to be exact anyway, since measurements aren't exact. That's a use case where using double is better. Also, a scientific calculation could involve things like square roots, trigonometric functions, logarithms, etc., and those can be done only using IEEE floats. Calculations involving money don't normally involve those kinds of functions.
By the way, there's very little reason to ever use the float type; stick with double.
You use float when the precision is sufficient. It is generally faster to do calculations with float, and it requires less memory. Sometimes you just need the performance.
What you describe is caused by the fact that binary floating point numbers cannot exactly represent many numbers that can be exactly represented by decimal floating point numbers, like 8.4 or 2.4.
This affects not only the float type in Java but also double.
In many cases you can do calculations with integers and then rescale to get the decimals right. But if you need numbers with equal relative accuracy no matter how large they are, floating point is far superior.
So yes, if you can, you should prefer integers over floats, but there are many applications where floating point is required. This includes many scientific and mathematical algorithms.
You should also consider that 10.7999999 instead of 10.8 looks weird when displayed, but the difference is actually tiny. So it is not so much an accuracy issue as a number-formatting one. In most cases this problem is resolved by rounding the number appropriately when converting it to a string for output, for example:
String price = String.format("%.2f", floatPrice);
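A runnable sketch of that fix, with the 8.4 + 2.4 example from the question. The class name is mine, and I pin Locale.US so the decimal separator in the output is stable (the original one-liner uses the default locale).

```java
import java.util.Locale;

// Sketch: float arithmetic drifts slightly; rounding at format time hides it.
public class FloatFormat {
    public static void main(String[] args) {
        float sum = 8.4f + 2.4f;
        System.out.println(sum); // slightly off 10.8 (e.g. 10.799999...)
        System.out.println(String.format(Locale.US, "%.2f", sum)); // prints 10.80
    }
}
```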
BigDecimals are very precise (you can determine their precision; it is mainly limited by memory) but pretty slow and memory-intensive. You use them when you need exact results, e.g. in financial applications, or when you otherwise need very precise results and speed is not too critical.
Floating point types (double and float) are not nearly as precise, but much faster and they only take up limited memory. Typically, a float takes up 4 bytes and a double takes up 8 bytes. You use them with measurements that can't be very exact anyway, but also if you need the speed or the memory. I use them for (real time) graphics and real time music. Or when otherwise precision of the result is not so important, e.g. when measuring time or percentages when downloading or some such.

Floating point types returned in ORM / DSL

The Java™ Tutorials state that "this data type [double] should never be used for precise values, such as currency." Is the fact that an ORM / DSL is returning floating point numbers for database columns storing values to be used to calculate monetary amounts a problem? I'm using QueryDSL and I'm dealing with money. QueryDSL is returning a Double for any number with a precision up to 16 and a BigDecimal thereafter. This concerns me as I'm aware that floating point arithmetic isn't suitable for currency calculations.
From this QueryDSL issue I'm led to believe that Hibernate does the same thing; see OracleDialect. Why does it use a Double rather than a BigDecimal? Is it safe to retrieve the Double and construct a BigDecimal, or is there a chance that a number with a precision of less than 16 could be incorrectly represented? Is it only when performing arithmetic operations that a Double can have floating-point issues, or are there values to which it cannot be accurately initialised?
Using floating point numbers for storing money is a bad idea indeed. Floating points can approximate an operation's result, but that's not what you want when dealing with money.
The easiest way to fix it, in a database-portable way, is to simply store cents. This is the preferred way of dealing with currency in financial applications. Note that most databases use the half-away-from-zero rounding algorithm, so make sure that is appropriate in your context.
When it comes to money you should always ask a local accountant, especially about the rounding part. Better safe than sorry.
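The half-away-from-zero rounding mentioned above corresponds to Java's RoundingMode.HALF_UP. A small sketch contrasting it with HALF_EVEN ("banker's rounding"), which is what an accountant might instead require:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

// Sketch: half-away-from-zero (HALF_UP) vs banker's rounding (HALF_EVEN).
public class HalfUpDemo {
    public static void main(String[] args) {
        BigDecimal tie = new BigDecimal("2.345"); // exact, ties at the 2nd decimal
        System.out.println(tie.setScale(2, RoundingMode.HALF_UP));   // prints 2.35
        System.out.println(tie.setScale(2, RoundingMode.HALF_EVEN)); // prints 2.34
        System.out.println(new BigDecimal("-2.345")
                .setScale(2, RoundingMode.HALF_UP));                 // prints -2.35
    }
}
```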
Now back to your questions:
Is it safe to retrieve the Double and construct a BigDecimal, or is
there a chance that a number with a precision of less than 16 could be
incorrectly represented?
This is a safe operation as long as your database uses at most 16 digits of precision. If it uses a higher precision, you'd need to override the OracleDialect.
Is it only when performing arithmetic operations that a Double can
have floating-point issues, or are there values to which it cannot be
accurately initialised?
When performing arithmetic operations you must always take monetary rounding into consideration anyway, and that applies to BigDecimal as well. So if you can guarantee that the database value doesn't lose any decimals when cast to a Java Double, you are fine to create a BigDecimal from it. Using BigDecimal pays off when applying arithmetic operations to the database-loaded value.
As for the threshold of 16, according to Wiki:
The 11-bit width of the exponent allows the representation of numbers
with a decimal exponent between 10^−308 and 10^308, with full 15–17
decimal digits of precision. By compromising precision, the subnormal
representation allows values smaller than 10^−323.
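The 15-digit round-trip property quoted above can be checked directly. This sketch (class name mine) uses the 1000.1 example discussed later in this thread:

```java
import java.math.BigDecimal;
import java.math.MathContext;

// Sketch: a decimal with <= 15 significant digits survives a double round-trip.
public class RoundTrip {
    public static void main(String[] args) {
        double d = Double.parseDouble("1000.1");
        // Double.toString prints the shortest decimal that round-trips:
        System.out.println(Double.toString(d)); // prints 1000.1
        // The full binary expansion only appears if you ask for it:
        System.out.println(new BigDecimal(d));  // 1000.10000000000002273...
        // Rounding back to 15 significant digits recovers the original:
        System.out.println(new BigDecimal(d, new MathContext(15))
                .stripTrailingZeros().toPlainString()); // prints 1000.1
    }
}
```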
There seems to be several concerns mentioned in the question, comments, and answers by Robert Bain. I've collected and paraphrased some of these.
Is it safe to use a double to store a precise value?
Yes, provided the number of significant-digits (precision) is small enough.
From wikipedia
If a decimal string with at most 15 significant digits is converted to IEEE 754 double precision representation and then converted back to a string with the same number of significant digits, then the final string should match the original.
But new BigDecimal(1000.1d) has the value 1000.1000000000000227373675443232059478759765625, why not 1000.1?
In the quote above I added emphasis: when converting from a double, the number of significant digits must be specified, e.g.
new BigDecimal(1000.1d, new MathContext(15))
Is it safe to use a double for arbitrary arithmetic on precise values?
No, each intermediate value used in the calculation could introduce additional error.
Using a double to store exact values should be seen as an optimization. It introduces risk that if care is not taken, precision could be lost. Using a BigDecimal is much less likely to have unexpected consequences and should be your default choice.
Is it correct that QueryDSL returns a double for precise value?
It is not necessarily incorrect, but is probably not desirable. I would suggest you engage with the QueryDSL developers... but I see you have already raised an issue and they intend to change this behavior.
After much deliberation, I must conclude that the answer to my own question:
Is the fact that an ORM / DSL is returning floating point numbers for database columns storing values to be used to calculate monetary amounts a problem?
put simply, is yes. Please read on.
Is it safe to retrieve the Double and construct a BigDecimal, or is there a chance that a number with a precision of less than 16 could be incorrectly represented?
A number with a precision of less than 16 decimal digits is incorrectly represented in the following example.
BigDecimal foo = new BigDecimal(1000.1d);
The BigDecimal value of foo is 1000.1000000000000227373675443232059478759765625. The intended value, 1000.1, is misrepresented starting at the 14th decimal place of the BigDecimal value.
Is it only when performing arithmetic operations that a Double can have floating-point issues, or are there values to which it cannot be accurately initialised?
As per the example above, there are values to which it cannot be accurately initialised. As The Java™ Tutorials clearly states, "This data type [float / double] should never be used for precise values, such as currency. For that, you will need to use the java.math.BigDecimal class instead."
Interestingly, calling BigDecimal.valueOf(someDouble) appeared at first to magically resolve things, but after realising that it calls Double.toString() and then reading Double's documentation, it became apparent that this is not appropriate for exact values either.
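The three construction routes discussed above behave quite differently for the same literal. A sketch (class name mine):

```java
import java.math.BigDecimal;

// Sketch: new BigDecimal(double) vs BigDecimal.valueOf(double) vs the String ctor.
public class Constructors {
    public static void main(String[] args) {
        // Exposes the full binary expansion of the double 0.1:
        System.out.println(new BigDecimal(0.1));
        // Goes via Double.toString(0.1), i.e. the shortest round-trip string "0.1":
        System.out.println(BigDecimal.valueOf(0.1)); // prints 0.1
        // Exact from the start; no double is ever involved:
        System.out.println(new BigDecimal("0.1"));   // prints 0.1
    }
}
```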
In conclusion, when dealing with exact values, floating point numbers are never appropriate. As such, in my mind, ORMs / DSLs should be mapping to BigDecimal unless otherwise specified, given that most database use will involve the calculation of exact values.
Update:
Based on this conclusion, I've raised this issue with QueryDSL.
It is not only about arithmetic operations, but also about pure reads and writes.
Oracle's NUMBER and BigDecimal both use a decimal base. So when you read a number from the database and then store it back, you can be sure the same number is written (unless it exceeds Oracle's limit of 38 digits).
If you convert a NUMBER into a binary base (Double) and then convert it back to decimal, you can expect problems. And the operation must also be much slower.

Java precise calculations - options to use

I am trying to establish a concise overview of the options for precise calculations in Java + SQL. So far I have found the following options:
use doubles, accepting their drawbacks: no go.
use BigDecimals:
using them in complicated formulas is problematic for me.
use String.format / DecimalFormat to round doubles:
do I need to round each variable in a formula, or just the result, to get BigDecimal precision? How can this be tweaked?
use computed fields in SQL:
the drawback is that I'd need dynamic SQL to pull data from different tables and calculate fields on other calculated fields, and that would get messy.
any other options?
Problem statement:
I need precise financial calculations that involve both very big numbers (billions) and very small ones (0.0000004321), and also dividing values that are very close to each other, so I definitely need BigDecimal's precision.
On the other hand, I want to retain the ease of use that doubles have in formulas (I work on arrays built from decimal SQL data), for calculations like (a[i] - b[i])/b[i] etc. that are then used in further calculations. I'd also like users to be able to design their own formulas as they need them (using common math notation).
I am keen on the String.format "formatting" solution, but it makes the code hard to read (wrapping each variable in String.format()...).
Many thanks for suggestions on how to deal with this.
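For reference, the (a[i] - b[i])/b[i] style formula from the question looks like this in BigDecimal. The helper name is mine; MathContext.DECIMAL64 is one reasonable choice that bounds the quotient, so non-terminating divisions (like 1/3) don't throw ArithmeticException:

```java
import java.math.BigDecimal;
import java.math.MathContext;

// Sketch: relative change (a - b) / b written with BigDecimal method calls.
public class RelativeChange {
    static BigDecimal relChange(BigDecimal a, BigDecimal b) {
        // Without a MathContext, divide() throws on non-terminating quotients.
        return a.subtract(b).divide(b, MathContext.DECIMAL64);
    }

    public static void main(String[] args) {
        System.out.println(relChange(new BigDecimal("110"),
                                     new BigDecimal("100"))); // prints 0.1
        System.out.println(relChange(new BigDecimal("1"),
                                     new BigDecimal("3")));   // non-terminating, still works
    }
}
```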
There is nothing you can do to avoid floating-point errors with float and double.
No free cheese here: use BigDecimal.
From Effective Java (2nd ED):
Item 48: Avoid float and double if exact answers are required
float and double do not provide exact results and should not be used where exact results are required.
The float and double types are particularly ill-suited for monetary calculations because it is impossible to represent 0.1 (or any other negative power of ten) as a float or double exactly.
The right way to solve this problem is to use BigDecimal, int, or long for monetary calculations.
...
An alternative is to use int or long and keep track of the decimal point yourself.
There is no way to get BigDecimal precision on a double; doubles have double precision.
If you want guaranteed precise results, use BigDecimal.
You could create your own variant using a long to store the integer part and an int to store the fractional part, but why reinvent the wheel?
Any time you use doubles you stand to suffer from double-precision issues. If you use them in a single place, you might as well use them everywhere.
Even if you only use them to represent data from the database, they will round the data to double precision and you will lose information.
If I understand your question, you want to use data types with more precision than the native Java ones without losing the simple mathematical syntax (e.g. / + * - and so on). As you cannot overload operators in Java, I think this is not possible.

Is BigDecimal an overkill in some cases?

I'm working with money, so I need my results to be accurate, but I only need a precision of 2 decimal places (cents). Is BigDecimal needed to guarantee that the results of multiplication/division are accurate?
BigDecimal is a very appropriate type for decimal fraction arithmetic with a known number of digits after the decimal point. You can use an integer type and keep track of the multiplier yourself, but that involves doing in your code work that could be automated.
As well as managing the digits after the decimal point, BigDecimal will also expand the number of stored digits as needed - many business and government financial calculations involve sums too large to store in cents in an int.
I would consider avoiding it only if you need to store a very large array of amounts of money, and are short of memory.
One common option is to do all your calculations with int or long (the cents value) and then simply insert the two decimal places when you need to display it.
Similarly, there is the Joda-Money library, which gives you a more full-featured API for money calculations.
It depends on your application. One reason to use that level of accuracy is to prevent errors accumulated over many operations from percolating up and causing loss of valuable information. If you're creating a casual application and/or are only using it for, say, data entry, BigDecimal is very likely overkill.
+1 for Patricia's answer, but I very strongly discourage anyone from implementing their own class on top of a fixed-bit-length integer type unless they really know what they are doing. BigDecimal handles all the rounding and precision issues, while a long/int has severe problems:
Unknown number of fraction digits: trade exchanges, law, and commerce vary in their number of fractional digits, so you do not know whether your chosen number of digits will have to be changed and adjusted in the future. Worse: some things, like stock evaluation, need a ridiculous number of fractional digits. A ship with 1000 metric tons of coal incurs, e.g., €4.12 in ice costs, leading to €0.000412/ton.
Unimplemented operations: people are likely to fall back on floating point for rounding, division, or other arithmetic operations, hiding the inexactness and bringing in all the known problems of floating-point arithmetic.
Overflow/underflow: after reaching the maximum value, adding an amount flips the sign: Long.MAX_VALUE wraps to Long.MIN_VALUE. This can easily happen with fractions like (a*b*c*d)/(e*f), where the final result fits perfectly well in a long but an intermediate numerator or denominator does not.
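The overflow point above is easy to demonstrate. The values below are mine: the final quotient fits comfortably in a long, but the intermediate product a*b overflows silently.

```java
import java.math.BigInteger;

// Sketch: silent long overflow in an intermediate product vs arbitrary precision.
public class OverflowDemo {
    public static void main(String[] args) {
        System.out.println(Long.MAX_VALUE + 1 == Long.MIN_VALUE); // prints true: wraps

        // (a * b) / c: the result (2_000_000_000) fits in a long, but a * b does not.
        long a = 4_000_000_000L, b = 4_000_000_000L, c = 8_000_000_000L;
        System.out.println(a * b / c); // garbage from the overflowed intermediate
        System.out.println(BigInteger.valueOf(a).multiply(BigInteger.valueOf(b))
                .divide(BigInteger.valueOf(c))); // prints 2000000000
        // Since Java 8, Math.multiplyExact(a, b) would throw instead of wrapping.
    }
}
```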
You could write your own Currency class, using a long to hold the amount; the class methods would set and get the amount using a String.
Division will be a concern whether you use a long or a BigDecimal. You have to decide, case by case, what to do with fractional cents: discard them, round them, or save them (somewhere besides your own account).

data type to represent a big decimal in java

Which data type is apt for representing a decimal number like "10364055.81"?
If I try using double:
double d = 10364055.81;
when I print the number, it displays as "1.036405581E7", which I don't want.
Should I use BigDecimal? But then it displays as 10364055.81000000052154064178466796875.
Is there any data type that displays the value as it is? Also, the number may be bigger than the one taken as an example.
BTW, will using BigDecimal affect the performance of the application? I might use it in almost all my DTOs.
You should use BigDecimal - but use the String constructor, e.g.:
new BigDecimal("10364055.81");
If you pass a double to BigDecimal, Java must create that double value first, and since doubles cannot represent most decimal fractions accurately, the value really is created as 10364055.81000000052154064178466796875 and then passed to the BigDecimal constructor. At that point BigDecimal has no way of knowing that you actually meant the rounder version.
Generally speaking, using non-String constructors of BigDecimal should be considered a warning that you're not getting the full benefit of the class.
Edit - based on rereading exactly what you wanted to do, my initial claim is probably too strong. BigDecimal is a good choice when you need to represent decimal values exactly (money handling being the obvious case; you don't want 5.99 * one million to come out as 5990016.45, for example).
But if you're not worried about the number being stored internally as a very slightly different value to the decimal literal you entered, and just want to print it out again in the same format, then as others have said, an instance of NumberFormat (in this case, new DecimalFormat("########.##")) will do the trick to output the double nicely, or String.format can do much the same thing.
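A runnable sketch of the two formatting routes just mentioned. The class name is mine, and I pin US symbols so the grouping and decimal separators in the output are stable (DecimalFormat otherwise uses the default locale's symbols):

```java
import java.text.DecimalFormat;
import java.text.DecimalFormatSymbols;
import java.util.Locale;

// Sketch: printing the question's value without scientific notation.
public class FormatDemo {
    public static void main(String[] args) {
        double d = 10364055.81;
        DecimalFormat df = new DecimalFormat("#,##0.00",
                DecimalFormatSymbols.getInstance(Locale.US));
        System.out.println(df.format(d));                          // prints 10,364,055.81
        System.out.println(String.format(Locale.US, "%,.2f", d));  // same result
    }
}
```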
As for performance: BigDecimals will naturally be slower than primitives. Typically, though, unless the vast majority of your program involves mathematical manipulation, you're unlikely to notice any speed difference. That's not to say you should use BigDecimal everywhere; but if you can get real benefit from its features that would be difficult or impossible to achieve with plain doubles, then don't sweat the minuscule performance difference it theoretically introduces.
How a number is displayed is distinct from how the number is stored.
Take a look at DecimalFormat for controlling how you can display your numbers when a double (or float etc.).
Note that choosing BigDecimal over double (or vice versa) has pros/cons, and will depend on your requirements. See here for more info. From the summary:
In summary, if raw performance and
space are the most important factors,
primitive floating-point types are
appropriate. If decimal values need to
be represented exactly, high-precision
computation is needed, or fine control
of rounding is desired, only
BigDecimal has the needed
capabilities.
A double would be enough to store this number. If your problem is that you don't like the format when printing it or putting it into a String, you can use NumberFormat: http://java.sun.com/javase/6/docs/api/java/text/NumberFormat.html
You can use double and display it with System.out.printf():
double d = 100003.81;
System.out.printf("%.10f", d);
"%.10f" formats the double with 10 digits after the decimal point.
