Some years ago, I helped write an application that dealt with money and insurance. Initially, we represented money with floating point numbers (a big no-no, I know). Most of the application was just adding and subtracting values, so there weren't any issues. However, specific portions dealt with percentages of money values, hence multiplication and division.
We immediately began suffering from floating point errors and had to do a major refactor. We used an arbitrary precision library which solved that issue. However, it didn't change the fact that you can end up with fractions of a cent. How are you supposed to round that? The short answer is "it's complicated."
Now I'm getting ready to begin work on a similar application to supplant the old one. I've been mulling this over for years. I always thought it would be easiest to create a money datatype that wraps an integer (or BigInteger) to represent the number of pennies with a function to print it to a traditional, human-friendly $0.00 format.
However, researching this, I found JSR 354, the Java Money API that was recently implemented. I was surprised to discover that it backs its representation of money with BigDecimal. Because of that, it includes specific logic for rounding.
What's the advantage to carrying fractions of a cent around in your calculations? Why would I want to do that instead of saying one cent is the "atomic" form of money?
This is a broad question, because the answer depends on the implementation.
If I were to purchase 1000 items in bulk for $5, then each item would individually cost $0.005, which is less than what you claim to be the "atomic form" of money, $0.01.
If we considered $0.01 to be the lowest possible amount, then we wouldn't be able to handle calculations in specific situations, like the one in my example.
For that reason, the JavaMoney API handles many fractional digits, ensuring that no precision is lost in cases like these.
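As a small illustration of the bulk-price example above, here is a sketch using plain BigDecimal. The half-cent survives the division exactly, where an integer-cents representation would have to round it away immediately:
import java.math.BigDecimal;

BigDecimal total = new BigDecimal("5.00");
BigDecimal perItem = total.divide(new BigDecimal("1000")); // exactly 0.005
System.out.println(perItem); // prints 0.005 - no precision lost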
Related
I'm trying to round the cents of a value.
The rounding seems to work, but there's an exception:
double amount = 289.42;
String f = String.format("%.1f", amount);
System.out.println(new DecimalFormat("##0.00").format(Double.valueOf(f)));
This is the error: java.lang.NumberFormatException: For input string: "289,4"
Your question posits an impossibility.
Here's the thing: You cannot represent currency amounts with double. At all. Thus, 'how do I render these cents-in-a-double appropriately' isn't a sensible concept in the first place. Because cents cannot be stored in a double.
The problem lies in what doubles are, fundamentally. double is a numeric storage system defined to consist of exactly 64 bits. That's a problem right there: it's a mathematical fact that 64 bits can store at most 2^64 unique things, because, well, math. Think about it.
The problem is, there are in fact infinitely many numbers between 0 and 1, let alone between -infinity and +infinity, which is what double would appear to represent. So, how do you square that circle? How does one represent one specific value chosen from an infinity of infinities, with only 64 bits?
The answer is simple. You don't.
doubles do not in fact store arbitrary values at all, and that is why you cannot use them to store currencies. Instead, take the number line and mark off slightly less than 2^64 specific values on it. We'll call these 'the blessed numbers'. A double can only store blessed numbers; it can't store anything else. In addition, any math you do on doubles is silently rounded to the nearest blessed value, because doubles can't store anything else. So, 0.1 + 0.2? Not actually 0.3. 0.1 and 0.2 aren't even blessed, so that's really round-to-blessed(0.1) + round-to-blessed(0.2), which works out to 0.30000000000000004 or thereabouts.

The blessed numbers are distributed unequally: there are a ton of blessed numbers in the 0-1 range, and as you move away from 0, the distance between two adjacent blessed numbers grows larger and larger. Their distribution makes sense, but computers count in binary, so the blessed numbers fall on neat boundaries in binary, not in decimal. 0.1 looks neat in decimal, but in binary it is like 1 divided by 3: endlessly repeating, and therefore not precisely representable no matter how many bits you care to involve.
That rounding is precisely what you absolutely don't want to happen when representing currency. You don't want a cent to randomly appear or disappear and yet that is exactly what will happen if you use double to store finance info.
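You can watch that rounding happen in two lines; this is the classic demonstration:
double sum = 0.1 + 0.2;
System.out.println(sum);        // prints 0.30000000000000004
System.out.println(sum == 0.3); // false - both operands were rounded to blessed values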
Hence, you're asking how to render 'cents in a double' appropriately, but that question cannot possibly be answered: you can't store cents in a double in the first place, so there is no way to render them properly.
Instead:
Use cents-in-int
The easiest way to do currency correctly is to first determine the accepted atomic unit for your currency, and then store amounts of that unit in a long or int as seems appropriate. For euros, that's eurocents. For bitcoin, that's satoshis. For yen, it's just yen. For dollars, it's dollar-cents. And so on.
$5.45 is best represented as the int value 545. Not as the double value 5.45, because that's not actually a value a double can represent.
Why do doubles show up as 0.1?
Because Double.toString (which System.out.println uses) knows that doubles are wonky: it prints the shortest decimal string that uniquely identifies the underlying double value, so you see 0.1 and not something like 0.1000000000000000055511151231257827. That doesn't magically make it possible to use double to represent finance amounts.
I need to divide, or multiply by complex factors
Division for currency is always nasty. Imagine a cost of 100 dollars that needs to be paid by each 'partner' in a coop. The coop has 120 shares, and each partner has 40 shares, so each partner must pay precisely 1/3 of the cost.
Now what? 100 dollars does not neatly divide into threes. You can't very well charge everybody 33 dollars, 33 cents, and a third of a cent. You could charge everybody 33.33, but now the bank needs to eat 1 cent. You could also charge everybody 33.34, and the bank gets to keep the 2 cents. Or, you could get a little creative, and roll some dice to determine 'the loser'. The loser pays 33.34, the other 2 pay 33.33.
The point is: there is no inherently correct answer; each situation has its own. Hence, division in general is impossible without first answering that question. There is no solving this problem unless you have code that knows how to apply the chosen 'division' algorithm. A plain a / b can never be used, because the operation really has at least three parameters: the dividend, the divisor, and the algorithm to apply.
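As a sketch of what such an algorithm can look like, here is one possible policy (assumed for illustration, not the only valid one, and split is a hypothetical helper name): hand everyone the base share and give the leftover cents to the first few parties.
// one possible division policy: earlier parties absorb the leftover cents
static long[] split(long totalCents, int parties) {
    long base = totalCents / parties;      // integer division: the base share
    long leftover = totalCents % parties;  // 0 .. parties-1 cents remain
    long[] shares = new long[parties];
    for (int i = 0; i < parties; i++) {
        shares[i] = base + (i < leftover ? 1 : 0);
    }
    return shares;
}
// split(10000, 3) -> [3334, 3333, 3333]: no cent appears or disappears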
For foreign exchange, 'multiply by this large decimal value' comes up a lot. You can store arbitrary-precision values exactly using the java.math.BigDecimal class. However, it is not particularly suitable for storing currencies: every multiplication by a factor makes the BigDecimals grow ever larger, they still can't represent e.g. 1 divided by 3 (anything with repeating digits), and they don't solve the more fundamental issue that any talk with other systems, such as a bank, can't deal with fractions of atomic units. Do the math in BigDecimal-space (which is exact, though division can throw, and the values grow ever larger and slower over time) until the system you are programming for enforces a rounding to atomic units; at that point, round, resetting the growth. If you never need to multiply by fractions, this doesn't come up, and there's no need to use BigDecimal for anything currency related.
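A sketch of that flow (the exchange rate here is made up for illustration): do the multiplication in BigDecimal-space, then round back to atomic units the moment the outside world demands it.
import java.math.BigDecimal;
import java.math.RoundingMode;

long usdCents = 123456; // $1,234.56 stored as cents
BigDecimal usd = BigDecimal.valueOf(usdCents, 2);         // 1234.56, exact
BigDecimal eur = usd.multiply(new BigDecimal("0.91437")); // exact in BigDecimal-space
long eurCents = eur.setScale(2, RoundingMode.HALF_EVEN)   // round to atomic units...
                   .movePointRight(2).longValueExact();   // ...and back to a cents long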
How do I format cents-in-a-long?
String.format("€%d.%02d", cents / 100, cents % 100);
It gets very slightly more complicated for negative numbers (% returns negative values, so you need to do something about this. Math.abs can help), but not very.
cents / 100 gives you the "whole" part when you integer-divide by 100, and % 100 gives you the remainder, which precisely boils down to 'euros' and 'eurocents'.
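Putting the negative-number caveat together with the format string, a small helper might look like this (a sketch; note that Math.abs(Long.MIN_VALUE) overflows, which is irrelevant for realistic amounts):
static String formatCents(long cents) {
    String sign = cents < 0 ? "-" : "";
    long abs = Math.abs(cents); // avoid negative results from / and %
    return String.format("%s€%d.%02d", sign, abs / 100, abs % 100);
}
// formatCents(-545) -> "-€5.45"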
I'm working on a real time application that deals with money in different currencies and exchange rates using BigDecimal, however I'm facing some serious performance issues and I want to change the underlying representation.
I've read again and again that a good and fast way of representing money in Java is by storing cents (or whatever the required precision is) using a long. As one of the comments pointed out, there are some libs with wrappers that do just that, such as FastMoney from JavaMoney.
Two questions here.
Is it always safe to store money as a long (or inside a wrapper) and keep everything else (like exchange rates) as doubles? In other words, won't I run into basically the same issues as having everything in doubles if I do Math.round(money * rate) (money being cents and rate being a double)?
FastMoney and many other libs only support operations between themselves and primitive types. How am I supposed to get an accurate representation of, say, the return on an investment if I can't do profit.divide(investment) (both being FastMoney)? I guess the idea is that I convert both to doubles and then divide them, but wouldn't that be inaccurate?
The functionality you are looking for is already implemented in the JavaMoney Library.
It has a FastMoney class that does long arithmetic, which is exactly what you have asked for.
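A minimal usage sketch, assuming the Moneta reference implementation (org.javamoney.moneta) is on the classpath:
import org.javamoney.moneta.FastMoney;

FastMoney price = FastMoney.of(5.45, "USD");
FastMoney total = price.multiply(3); // USD 16.35, computed on a scaled long internally
System.out.println(total);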
For New Java Developers - Why long and not double?
Floating point arithmetic in Java leads to some unexpected errors in precision due to their implementation. Hence it is not recommended in financial calculations.
Also note that this is different from the precision loss in long arithmetic calculations which is due to the fractional portion not being stored in the long. This can be prevented during implementation by moving the fractional portion to another long (e.g. 1.05 -> 1 dollar and 5 cents).
I have a Java project that deals with a lot of money values. The project mainly involves:
reading the data from database,
calculations (process data)
showing to users (no inserts or updates in database are required).
I need precision for only some of the money values and not for all. So here I can do:
using doubles when precision not required or
using BigDecimals for ALL.
I want to know if there will be any performance issues if I use BigDecimal for all the variables? Can I save execution time if I opt for choice 1?
Which way is best? (I am using java 6)
Don't use double for money; see "Why not use Double or Float to represent currency?"
Using BigDecimal is 100x slower than the built-in primitives, and you can't use +, -, * and / with BigDecimal; you must use the equivalent BigDecimal method calls.
An alternative is to use int instead of double where you are counting cents or whatever fractional currency equivalent and then when formatting the output to the user, do the appropriate conversions back to show the values the way the user expects.
If you have really large values, you can use long instead of int.
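For the display step, one sketch is to convert the cents to a BigDecimal only at the very end, which keeps all the arithmetic in primitives:
import java.math.BigDecimal;

long cents = 1999 + 500;                           // $19.99 + $5.00, all in long math
BigDecimal display = BigDecimal.valueOf(cents, 2); // 24.99, exact: no double involved
System.out.println("$" + display);                 // $24.99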
It's a trade-off.
With BigDecimal you are working with immutable objects. This means that each operation creates new objects, which will certainly have some impact on memory. How much depends on a lot of things: execution environment, number and complexity of the calculations, etc. But you are getting precision, which is the most important thing when working with money.
With double you can use primitive values, but the precision is poor and they are not suitable for money calculation at all.
If I had to suggest a way - I would say for sure use BigDecimal when dealing with money.
Have you considered moving some of the calculation logic to the DB layer? This can save you a lot in terms of memory and performance, and you will still keep the precision requirement intact.
BigDecimal and double are very different types, with very different purposes. Java benefits from having both, and Java programmers should be using both of them appropriately.
The floating point primitives are based on binary to be both space and time efficient. They can be, and typically are, implemented in very fast hardware. double should be used in contexts in which there is nothing special about terminating decimal fractions, and all that is needed is an extremely close approximation to a value that may be fractional, irrational, very big, or very small. There are good implementations of many trig and similar functions for double. See java.lang.Math for some examples.
BigDecimal can represent any terminating decimal fraction exactly, given enough memory. That is very, very good in situations in which terminating decimal fractions have special status, such as many money calculations.
In addition to exact representation of terminating decimal fractions, it could also be used to get a closer approximation to e.g. one third than is possible with double. However, situations in which you need an approximation that is closer than double supplies are very rare. The closest double to one third is 0.333333333333333314829616256247390992939472198486328125, which is close enough for most practical purposes. Try measuring the difference between one third of an inch and 0.3333333333333333 inches.
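You can verify that value yourself: the BigDecimal(double) constructor shows the exact binary value a double actually stores.
System.out.println(new java.math.BigDecimal(1.0 / 3));
// 0.333333333333333314829616256247390992939472198486328125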
BigDecimal is only supported in software, does not have the support for mathematical functions that double has, and is much less space- and time-efficient.
If you have a job for which either would work functionally, use double. If you need exact representation of terminating decimal fractions, use BigDecimal.
I'm working with money, so I need my results to be accurate, but I only need a precision of 2 decimal places (cents). Is BigDecimal needed to guarantee that results of multiplication/division are accurate?
BigDecimal is a very appropriate type for decimal fraction arithmetic with a known number of digits after the decimal point. You can use an integer type and keep track of the multiplier yourself, but that involves doing in your code work that could be automated.
As well as managing the digits after the decimal point, BigDecimal will also expand the number of stored digits as needed - many business and government financial calculations involve sums too large to store in cents in an int.
I would consider avoiding it only if you need to store a very large array of amounts of money, and are short of memory.
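As a short sketch of the automated version (the tax rate is assumed for illustration): BigDecimal tracks the decimal places through the multiplication, and one setScale call pins the result back to cents.
import java.math.BigDecimal;
import java.math.RoundingMode;

BigDecimal price = new BigDecimal("19.99");
BigDecimal rate = new BigDecimal("0.0825");                 // assumed tax rate
BigDecimal tax = price.multiply(rate)                       // 1.649175, scale grows to 6
                      .setScale(2, RoundingMode.HALF_EVEN); // pinned back to 1.65
System.out.println(tax);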
One common option is to do all your calculations with int or long (the cents value) and then simply add two decimal places when you need to display it.
Similarly, there is the Joda-Money library, which will give you a more full-featured API for money calculations.
It depends on your application. One reason to use that level of accuracy is to prevent errors accumulated over many operations from percolating up and causing loss of valuable information. If you're creating a casual application and/or are only using it for, say, data entry, BigDecimal is very likely overkill.
+1 for Patricia's answer, but I very strongly discourage anyone from implementing their own classes on an integer datatype of fixed bit length unless they really know what they are doing. BigDecimal handles all the rounding and precision issues, while a long/int has severe problems:
Unknown number of fraction digits: trade exchanges, law, and commerce vary in their number of fractional digits, so you do not know whether your chosen number of digits will have to be changed and adjusted in the future. Worse: there are some things, like stock evaluation, which need a ridiculous number of fractional digits. A ship with 1000 metric tons of coal causes e.g. €4.12 of ice costs, leading to €0.000412/ton.
Unimplemented operations: people are likely to fall back to floating point for rounding, division, or other arithmetic operations, hiding the inexactness and leading to all the known problems of floating-point arithmetic.
Overflow/underflow: after reaching the maximum amount, adding to it flips the sign; Long.MAX_VALUE wraps to Long.MIN_VALUE. This can easily happen with fractions like (a*b*c*d)/(e*f), where the final result may be perfectly valid and in range of a long, but an intermediate numerator or denominator is not.
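If you do go the long route anyway, the exact-arithmetic helpers in java.lang.Math at least turn the silent wraparound into an exception:
long a = Long.MAX_VALUE;
System.out.println(a + 1);         // silently wraps to Long.MIN_VALUE
long b = Math.multiplyExact(a, 2); // throws ArithmeticException instead of wrapping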
You could write your own Currency class, using a long to hold the amount. The class methods would set and get the amount using a String.
Division will be a concern no matter whether you use a long or a BigDecimal. You have to determine on a case by case basis what you do with fractional cents. Discard them, round them, or save them (somewhere besides your own account).
I have read on the Java site that one should use BigDecimal for currencies.
http://docs.oracle.com/javase/tutorial/java/nutsandbolts/datatypes.html
But what rounding mode should we use with it? Which one is the most appropriate and most widely used?
There is no "correct" mode, it depends on the business case. Examples:
When calculating annual taxes, the fractions are often cut off (RoundingMode.FLOOR).
When calculating a bonus, you might want to always round in favor of the customer (RoundingMode.CEILING).
For taxes on a bill, you usually round HALF_UP.
When doing complex financial simulations, you don't want to round at all.
The documentation of RoundingMode contains a lot of examples of how the different modes work.
To get a better answer, you must tell us what you want to achieve.
That said, BigDecimal is the correct type to use in Java because it can preserve any amount of precision, plus it lets you choose the rounding mode most suitable for your case.
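A short illustration of how the modes listed above diverge on the same value:
import java.math.BigDecimal;
import java.math.RoundingMode;

BigDecimal v = new BigDecimal("2.555");
System.out.println(v.setScale(2, RoundingMode.FLOOR));   // 2.55 - fraction cut off
System.out.println(v.setScale(2, RoundingMode.CEILING)); // 2.56 - in the customer's favor
System.out.println(v.setScale(2, RoundingMode.HALF_UP)); // 2.56 - the tie rounds up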
Most of the time BigDecimal is the only valid choice for currencies. But the choice of rounding strategy is not that obvious.
The default is HALF_EVEN, which happens to be a good choice. This algorithm is known as bankers' rounding.
Another common strategy is HALF_UP which is more intuitive but has slightly worse statistical characteristics.
Also note that in many times (especially in banking and insurances) the rounding strategy will be dictated by business requirements, often different for various use-cases.
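The difference between the two only shows on exact ties, where HALF_EVEN alternates the direction to cancel out the statistical bias:
System.out.println(new BigDecimal("0.125").setScale(2, RoundingMode.HALF_EVEN)); // 0.12 - toward the even neighbor
System.out.println(new BigDecimal("0.135").setScale(2, RoundingMode.HALF_EVEN)); // 0.14
System.out.println(new BigDecimal("0.125").setScale(2, RoundingMode.HALF_UP));   // 0.13 - always up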
Typically you'd use "half up" rounding, like so:
myBigDecimal = myBigDecimal.setScale(2, RoundingMode.HALF_UP); // BigDecimal is immutable: setScale returns a new value
This way, you'll round to two decimal places (which most currencies use, e.g. dollars and cents, obviously there are exceptions), and you'll round values such that half a cent or more will round up while less than half a cent will round down. See the javadoc for more details.
For financial applications, ROUND_HALF_EVEN is the most common rounding mode, as it avoids bias.
But for display you should use the NumberFormat class, which takes care of localization issues for amounts in different currencies. Its format methods are overloaded for double and long, but DecimalFormat (the concrete implementation you usually get) can also format a BigDecimal directly through format(Object), so you don't have to accept the small accuracy loss of converting to a double first.
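A sketch of the localized-display step (the US locale is assumed for the example):
import java.math.BigDecimal;
import java.text.NumberFormat;
import java.util.Locale;

NumberFormat fmt = NumberFormat.getCurrencyInstance(Locale.US);
System.out.println(fmt.format(new BigDecimal("1234.5"))); // $1,234.50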
You should never use a floating-point type for currencies; use an integer type instead. This maintains accuracy by avoiding the rounding errors associated with floats. When displaying an amount, divide by the appropriate factor to get the non-integral portion.