How can I format an integer to have two decimals? [duplicate] - java

I am struggling with this piece of code here.
DecimalFormat df = new DecimalFormat("00.##");
df.setMinimumFractionDigits(2);
return df.format(Integer.valueOf(amount));
Here's what I need:
Input: 2, Output: 2.00
Input: 20, Output: 20,00
Input: 1003, Output: 100,30
Input: 120323, Output: 1.203,23 (one thousand two hundred and three, and twenty-three cents)
I can't use DecimalFormat because I don't have a pattern "example ##.##".
I should always have two decimals. If the amount is less than 100, then I only have to take the amount itself and add ".00". If it is bigger than 100, it means that the two last digits are the two decimals I need. They are the cents.
Can anyone help me?

If you want to add two decimal places (which for amounts under 100 will always be .00, which is a bit redundant):
return String.format("%.2f", amount > 100 ? amount/100.0 : (double) amount);
I suggest you keep to one set of units. You should be able to always use cents. If you don't do this you are likely to cause confusion. For example, 100 (taken as whole units) and 10000 (taken as cents) both come out as 100.00.
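For example, here is a small sketch of that advice; formatCents is a hypothetical helper, and Locale.GERMANY is only assumed in order to get the 1.203,23 style from your examples. The amount stays in cents everywhere and is only formatted for display (positive amounts only, for brevity):
static String formatCents(long cents) {
    long whole = cents / 100;
    long fraction = cents % 100;
    // %,d applies the locale's grouping separator to the integer part
    return String.format(java.util.Locale.GERMANY, "%,d,%02d", whole, fraction);
}

System.out.println(formatCents(200));    // 2,00
System.out.println(formatCents(2000));   // 20,00
System.out.println(formatCents(120323)); // 1.203,23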

If the amount is less that 100, then I only have to take the amount
itself and add ".00" If it is bigger than 100, means that the two last
digits are the two decimals I need.
First you need to fix this problem. A class, or a native type, should only hold one kind of data, and should not burden other classes or code with detailed knowledge of how that data is to be handled depending on circumstances.
Get that thing to either return the number of pennies in all cases, or to return a number type that supports decimal points. The better solution is the number of pennies.
That way you don't have to put logic in your formatting, which certainly won't be there the second time you need to format the same thing.

Related

NumberFormatException for input String - Java

I'm trying to round the cents of a value.
The rounding seems to work, but there's an exception:
double amount = 289.42;
String f = String.format("%.1f", amount);
System.out.println(new DecimalFormat("##0.00").format(Double.valueOf(f)));
This is the error: java.lang.NumberFormatException: For input string: "289,4"
Your question posits an impossibility.
Here's the thing: You cannot represent currency amounts with double. At all. Thus, 'how do I render these cents-in-a-double appropriately' isn't a sensible concept in the first place. Because cents cannot be stored in a double.
The problem lies in what double is, fundamentally. double is a numeric storage system that is defined to consist of exactly 64 bits. That's a problem right there: it's a mathematical fact that 64 bits can store at most 2^64 unique things, because, well, math. Think about it.
The problem is, there are in fact infinitely many numbers between 0 and 1, let alone between -infinity and +infinity, which double would appear to represent. So, how do you square that circle? How does one represent one specific value chosen from an infinity of infinities, with only 64 bits?
The answer is simple. You don't.
doubles do not in fact store arbitrary values at all, and that is why you cannot use them to store currencies. Instead, take the number line and mark off slightly less than 2^64 specific values on it. We'll call these 'the blessed numbers'. A double can only store blessed numbers; it can't store anything else. In addition, any math you do on doubles is silently rounded to the nearest blessed value, because a double can't hold anything else. So, 0.1 + 0.1? Not actually 0.2. 0.1 isn't even blessed, so that's really round-to-blessed(0.1) + round-to-blessed(0.1), so actually that's 0.0999999999975 + 0.0999999999975 = 0.2000000000018 or whatever.
The blessed numbers are distributed unequally: there are a ton of them in the 0-1 range, and as you move away from 0 the distance between two adjacent blessed numbers grows larger and larger. Their distribution makes sense, but computers count in binary, so the blessed numbers fall on neat boundaries in binary, not in decimal. 0.1 looks neat in decimal, but in binary it is like 1 divided by 3: endlessly repeating, and therefore not precisely representable no matter how many bits you care to involve.
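You can see this silent rounding for yourself with nothing but plain Java:
System.out.println(0.1 + 0.2);        // 0.30000000000000004, not 0.3
System.out.println(0.1 + 0.2 == 0.3); // false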
That rounding is precisely what you absolutely don't want to happen when representing currency. You don't want a cent to randomly appear or disappear and yet that is exactly what will happen if you use double to store finance info.
Hence, you're asking about how to render 'cents in a double' appropriately but in fact that question cannot possibly be answered - you can't store cents in a double, hence, it is not possible to render it properly.
Instead..
Use cents-in-int
The easiest way to do currency correctly is to first determine the accepted atomary unit for your currency, and then store those, in long or int as seems appropriate. For euros, that's eurocents. For bitcoin, that's satoshis. For yen, it's just yen. For dollars, it's dollarcents. And so on.
$5.45 is best represented as the int value 545. Not as the double value 5.45, because that's not actually a value a double can represent.
Why do doubles show up as 0.1?
Because System.out.println knows that doubles are wonky and that you're highly likely to want to see 0.1 and not 0.09999999999991238 and thus it rounds inherently. That doesn't magically make it possible to use double to represent finance amounts.
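If you want to see the value that is actually stored, new BigDecimal(double) shows it without any of that friendly rounding:
System.out.println(0.1);                       // 0.1 (println rounds for you)
System.out.println(new java.math.BigDecimal(0.1));
// 0.1000000000000000055511151231257827021181583404541015625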
I need to divide, or multiply by complex factors
Division for currency is always nasty. Imagine a cost of 100 dollars that needs to be paid by each 'partner' in a coop. The coop has 120 shares, and each partner has 40 shares, so each partner must pay precisely 1/3 of the cost.
Now what? 100 dollars does not neatly divide into threes. You can't very well charge everybody 33 dollars, 33 cents, and a third of a cent. You could charge everybody 33.33, but now the bank needs to eat 1 cent. You could also charge everybody 33.34, and the bank gets to keep the 2 cents. Or, you could get a little creative, and roll some dice to determine 'the loser'. The loser pays 33.34, the other 2 pay 33.33.
The point is: There is no inherently correct answer. Each situation has its own answer. Hence, division in general is impossible without first answering that question. There is no solving this problem unless you have code that knows how to apply the chosen 'division' algorithm. a / b cannot be used in any case (as the operation has at least 3 params: The dividend, the divisor, and the algorithm to apply to it).
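As an illustration only (this is one possible policy, not the answer): a sketch that splits an amount of cents and hands the leftover cents one by one to the first shares, so the total always adds up:
static long[] split(long totalCents, int parts) {
    long[] shares = new long[parts];
    long base = totalCents / parts;
    long leftover = totalCents % parts;
    for (int i = 0; i < parts; i++) {
        shares[i] = base + (i < leftover ? 1 : 0); // first 'leftover' shares pay one cent more
    }
    return shares;
}

// 100.00 dollars split three ways: 33.34, 33.33, 33.33
System.out.println(java.util.Arrays.toString(split(10_000, 3))); // [3334, 3333, 3333]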
For foreign exchange, 'multiply by this large decimal value' comes up a lot. You can store arbitrary-precision values exactly using the java.math.BigDecimal class. However, it is not particularly suitable for storing currencies: every multiplication by a factor makes the BigDecimals grow ever larger, they still can't divide e.g. 1 by 3 (anything that has repeating digits), and they don't solve the more fundamental issue that any talk with other systems, such as a bank, can't deal with fractions of atomary units. Stick with BigDecimal-space math (which is exact, though it can throw exceptions if you divide, and grows ever slower and more complicated over time) until the system you are programming for enforces a rounding to atomary units, at which point you round, resetting the growth. If you never need to multiply by fractions, this doesn't come up, and there's no need to use BigDecimal for anything currency related.
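A small sketch of that flow, with a made-up exchange rate, and HALF_EVEN picked arbitrarily where your actual requirements would dictate the rounding mode:
import java.math.BigDecimal;
import java.math.RoundingMode;

long cents = 1999;                          // 19.99 EUR held as eurocents
BigDecimal rate = new BigDecimal("1.0834"); // hypothetical EUR->USD rate

// do the math in BigDecimal space: 19.99 * 1.0834 = 21.657166, exactly
BigDecimal usd = BigDecimal.valueOf(cents, 2).multiply(rate);

// only when the outside world demands it, round back to atomary units
long usdCents = usd.setScale(2, RoundingMode.HALF_EVEN).unscaledValue().longValueExact();
System.out.println(usdCents);               // 2166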
How do I format cents-in-a-long?
String.format("€%d.%02d", cents / 100, cents % 100);
It gets very slightly more complicated for negative numbers (% returns negative values, so you need to do something about this. Math.abs can help), but not very.
cents / 100 gives you the "whole" part when you integer-divide by 100, and % 100 gives you the remainder, which precisely boils down to 'euros' and 'eurocents'.
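For completeness, a sketch of the same formatter made safe for negative amounts (the € sign and the helper name are just placeholders; this is not locale-aware):
static String formatCents(long cents) {
    String sign = cents < 0 ? "-" : "";
    long abs = Math.abs(cents);
    return String.format("%s€%d.%02d", sign, abs / 100, abs % 100);
}

System.out.println(formatCents(545));  // €5.45
System.out.println(formatCents(-545)); // -€5.45
System.out.println(formatCents(-5));   // -€0.05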

Java BigDecimal Rounding while having pointless 0's in the number

Problem
I used BigDecimal.setScale(7, RoundingMode.HALF_UP) to round the number to 7 decimal places, however now if I get a number without any decimal places, or with fewer than 7 of them, I get useless 0's in the number; for example, after rounding 40 that way I'll get 40.0000000.
Question
Is it possible to round numbers to a certain number of decimal places using setScale and get rid of pointless 0's at the same time?
Providing you have already performed your rounding, you can simply use DecimalFormat("0.#") (or "0.#######", with one # per optional decimal place, if you want to keep up to 7 of them).
final double value = 5.1000;
DecimalFormat format = new DecimalFormat("0.#");
System.out.println(format.format(value));
The result here will be 5.1 without the trailing zeroes.
The other way to work this problem out is to use BigDecimal's own methods, such as .stripTrailingZeros().toPlainString().
It is less code to write than the other solution, but as said, it could be less convenient if you need to change the level of precision in the future.
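For example (assuming the rounding from the question has already been applied):
import java.math.BigDecimal;
import java.math.RoundingMode;

BigDecimal rounded = new BigDecimal("40").setScale(7, RoundingMode.HALF_UP);
System.out.println(rounded);                                      // 40.0000000
// stripTrailingZeros() alone would print 4E+1, hence the toPlainString() call
System.out.println(rounded.stripTrailingZeros().toPlainString()); // 40

BigDecimal other = new BigDecimal("5.1234567890").setScale(7, RoundingMode.HALF_UP);
System.out.println(other.stripTrailingZeros().toPlainString());   // 5.1234568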

Finding Repeating part of fraction (a/b) and print only repeating integers as String in java? [closed]

I'm solving a fraction problem:
if the fraction part is repeating, print only the repeating digits as a String.
if the fraction part is not repeating, print up to 4 digits after the decimal point.
if the fraction output is simple, print it directly as a String.
E.g.
a) 1/3 = 0.3333... , here 3 is recurring, so need to print only 0.3.
b) 5/2 = 2.5 -- Simple Math.
c) 22/7 = 3.142857142857... , here 142857 is repeating, so need to print 3.142857
Can you please help, the solution should have O(k) time complexity and space complexity
public String fractionToDecimal(int numerator, int denominator) {
    StringBuilder tem = new StringBuilder();
    long q = numerator / denominator;
    return tem.append(q).toString(); // append() returns the StringBuilder, so convert it to a String
}
This question is quite complicated. You need to know quite a few advanced concepts to fully understand why it is so.
Binary vs. Decimal
The notion '1 divided by 3 repeats the 3 endlessly' only works if you presuppose a decimal rendering system. Think about it: Why does '10' come after '9'? As in, why did 'we' decide not to have a specific symbol for the notion 'ten', and instead this is the exact point on the number line 'we' decided to go for two digits, the first digit and the zero digit written next to each other? That's an arbitrary choice, and if you delve into history, humans made this choice because we have 10 fingers. Not all humans made this choice, to be clear: Sumerians had unique digits all the way up to 60, and this explains why there are 60 seconds in a minute, for example. There are remote tribes using 6, 3, and even weirder number systems.
If you want to spend some fun math time, go down the rabbit hole on Wikipedia reading about exotic number systems. It's mesmerizing stuff, a fine way to spend an hour (or two, or ten!)
Imagine a number system for aliens that had only 3 fingers total. They'd count:
Human Alien
0 0
1 1
2 2
3 10
4 11
5 12
6 20
7 21
8 22
9 100
10 101
Their number system isn't "weird" or "bad". Just different.
However, that number concept goes both ways: When you write, say, "1 divided by 4 = 0.25", that 25 is also in 'decimal' (the name for a number system that has 10 digits, like what most humans on the planet earth like to use).
In human, 1 divided by 10 is 0.1. Easy, right?
Well, in '3-finger alien', one divided by three is... 0.1.
Not 0.1 repeating. Nono. Just 0.1. It fits perfectly. In their number system, one divided by ten is actually quite complicated, whereas it is ridiculously simple in ours.
Computers are aliens too. They have 2 fingers. Just 0 and 1, that's all. A computer counts: 0 1 10 11 100 101 110 111 and so on.
An a / b operation that is simple in decimal may repeat endlessly in binary, and anything that repeats in decimal repeats in binary as well (1/5 repeats endlessly in binary; in decimal, it's just 0.2, easy).
Given that computers don't like counting in decimal, any basic operation is an instant 'loser' - you can no longer answer the question if you even write double or float anywhere in your code here.
But it requires knowledge of binary and some fairly fundamental math to even know that.
Solution strategy 1: BigDecimal
NOTE: I don't think this is the best way to go about it, I'd go with strategy 2, but for completeness...
Java has a class baked into the core library called java.math.BigDecimal that is intended to be used when you don't want any losses. double and float are [A] binary based, so trying to use them to figure out repeating strides is completely impossible, and [B] silently round numbers all the time to the nearest representable number. You get rounding errors. Even 0.1 + 0.2 isn't quite 0.3 in double math.
For these reasons, BigDecimal exists, which is decimal, and 'perfect'.
The problem is, well, it's perfect. In basis, dividing 1 by 3 in BigDecimal math is impossible - an exception occurs. You need to understand the quite complicated API of BigDecimal to know about how to navigate this issue. You can tell BigDecimal about exactly how much precision you're okay with.
So, what you can do:
Convert your divisor and dividend into BigDecimal numbers.
Configure the division for 100 digits of precision after the decimal point.
Divide one by the other.
Convert the result to a string.
Analyse the string to find the repeating stride.
That algorithm is technically still incorrect - you can have inputs that have a repetition stride that is longer than 100 characters, or a division operation that appears to have a repeating stride when it actually doesn't.
Still, for probably every combination of numbers up to 100 or so you care to throw at it, the above would work. You can also choose to go further (more than 100 digits), or to write an algorithm that tries to find a repeat stride with 100 digits, and if it fails, that it just uses a while loop to start over, ever incrementing the # of digits used until you do find that repeating stride in the input.
You'll be using many methods of BigDecimal and doing some fairly tricky operation on the resulting string to attempt to find the repetition stride properly.
It's one way to solve this problem. If you would like to try, then read this and have fun!
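A minimal sketch of those steps, assuming 100 digits of precision is acceptable and leaving the string analysis to you:
import java.math.BigDecimal;
import java.math.MathContext;

BigDecimal numerator = BigDecimal.valueOf(1);
BigDecimal denominator = BigDecimal.valueOf(3);

// divide with 100 significant digits of precision instead of demanding exactness,
// so 1/3 no longer throws an ArithmeticException
BigDecimal quotient = numerator.divide(denominator, new MathContext(100));

String digits = quotient.toPlainString();
System.out.println(digits); // 0.333...3 (100 threes)
// now scan 'digits' after the decimal point for a repeating stride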
Solution Strategy 2: Use math
You can use a mathematical algorithm to derive the next decimal digit given a divisor and dividend. This isn't so much computer science, it's purely a mathematical exercise, and hence, don't search for 'java' or 'programming' or whatnot online when looking for this.
The basic gist is this:
1/4 becomes the number 0.25, how would one derive that? Well, had you multiplied the input by 10 first, i.e. calculate 10/4, then all you really need to do is calculate in integral math. 4 fits into 10 twice, with some left over. That's where the 2 comes from.
Then to derive the 5: Take the left-over (4 fits into 10 twice, and that leaves 2), and multiply it again by 10. Now calculate 20/4. That is 5, with nothing left over. Great, that's where the 5 comes from, and we get to conclude that there is no need to continue. It's zeroes from here on out.
You can write this algorithm in java code. Never should it ever mention double or float (you immediately fail the exercise if you do this). a / b, if both a and b are integral, does exactly what you want: Calculates how often b fits into a, tossing any remainder. You can then obtain the remainder with some more simple math:
import java.util.ArrayList;
import java.util.List;

int dividend = 1;
int divisor = 4;
List<Integer> digits = new ArrayList<>();

// you're going to have to think about when to end this loop
System.out.print("The digits are: 0.");
while (dividend != 0 && digits.size() < 100) {
    dividend *= 10;                  // shift to the next decimal position
    int digit = dividend / divisor;  // how often the divisor fits
    dividend -= digit * divisor;     // keep only the remainder
    digits.add(digit);
    System.out.print(digit);
}
I'll leave the code you'd need to write to find repetitions to you. You can be certain it repeats 'cleanly' when your dividend (the remainder you carry into the next round) ends up being a value you've seen before. For example, when doing 1/3, going through this algorithm:
First loop through:
dividend (1) is multiplied by 10, becomes 10.
dividend is now integer-divided by the divisor (3), producing the digit 3.
We determine what's left: the digit times the divisor is 9, so 9 of those 10 have been used up, leaving the 1. We set dividend to 1.
As you can see, nothing actually changed: The dividend is still 1 just like it was at the start, therefore all loops through go like this, producing an endless stream of 3 values, which is indeed the correct answer.
You can maintain a list of dividends you've already seen. e.g. by storing them in a Set<Integer>. Once you hit a number you've already seen, you can just stop printing: You've started the repetition.
This algorithm has the considerable benefit of always being correct.
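To make that concrete, here is a sketch of the remainder-tracking idea; repeatingPart is a made-up helper, it assumes positive inputs and ignores the integer part:
import java.util.HashMap;
import java.util.Map;

// map each remainder to the index of the digit it produced; when a remainder
// comes back, the digits from that index onward are the repeating block
static String repeatingPart(int numerator, int divisor) {
    StringBuilder digits = new StringBuilder();
    Map<Integer, Integer> seenAt = new HashMap<>();
    int dividend = numerator % divisor;
    while (dividend != 0) {
        if (seenAt.containsKey(dividend)) {
            return digits.substring(seenAt.get(dividend)); // the repeating block
        }
        seenAt.put(dividend, digits.length());
        dividend *= 10;
        digits.append(dividend / divisor);
        dividend %= divisor;
    }
    return ""; // the fraction terminates, nothing repeats
}

System.out.println(repeatingPart(1, 3));  // 3
System.out.println(repeatingPart(22, 7)); // 142857
System.out.println(repeatingPart(1, 4));  // (empty: 0.25 terminates)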
So what do I do?
I think your teacher wants you to figure out the second one, and not to delve into the BigDecimal API.
This is an awesome exercise, but more about math than programming.

Converting doubles to integers

I would like to convert a double (for example a price with a value of 7.90) to an integer without losses! I am making a program that processes money, and I have to input the amounts as doubles and then convert them to integers so that I can process them. For example, when I have 7.90 euros and I want to convert it to cents, it comes out as 789 cents instead of 790! Please help me :) thanks in advance
More specifically, the problem is that as a double it would originally save 59.999999 instead of 60.
Data type double is not capable of storing some values exactly; 7.90 (and 0.6, and most other decimal fractions) is one of those values, so the arithmetic drifts slightly away from the result you expect. You should either a) use a different data type (float suffers from the same problem), or b) account for that using a tolerance, and rounding, in your checks.
In your case your variable shows 59.9999999 simply because the inexactly stored inputs, once multiplied out, land just below 60 rather than exactly on it.
One possible solution:
Do not use floating point numbers at all. Instead of saying 1 Euro = 100 * 0.01 Euro, calculate with Cents: 1 Euro = 100 Cents. Calculate with Cents as integers and divide by 100 if you need Euros.
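If the values really do arrive as doubles, as in your case, a sketch of the conversion: round once at the boundary instead of truncating (this assumes the amounts are small enough that the accumulated error stays well under half a cent):
double price = 7.90;                    // unfortunately arrives as a double
long truncated = (long) (price * 100);  // 789, because 7.90 * 100 is 789.999...
long cents = Math.round(price * 100);   // 790
System.out.println(truncated + " vs " + cents);
// from here on, do all arithmetic in 'cents' and only format as euros for display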
Another possibility:
When calculating big amounts of money, where it won't fit into an integer (maybe not even into a long), you use the BigInteger or BigDecimal classes. This is the usual and probably better approach when working with money. Precision and number of digits are arbitrary, but processing time and memory usage might be higher.

Remove the last grouping separator when using DecimalFormat Grouping java

I want to format lengthy double numbers in my android calculator app
2 problems ...
First is that I want to enable grouping by 3, so 1200 would be shown as 1,200.
But for numbers like 1000, what I get is 1,
How can I get 1,000 as the result?
Also, another problem is that I want results with up to 25 integer digits and 4 decimal fraction digits ... but even though I set the DecimalFormat maximum to 25, once the number of integer digits surpasses 18 the digits turn into zeros. For example, what I want is
1,111,111,111,111,111,111,111,111
but I get
1,111,111,111,111,111,110,000,000,
Here's the code
DecimalFormat df = new DecimalFormat();
df.setMaximumFractionDigits(4);
df.setMaximumIntegerDigits(25);
df.setGroupingUsed(true);
df.setGroupingSize(3);
String formatted = df.format(mResultBeforeFormatting);
What pattern should I use?
OK, so for others who face this problem like me:
For the first part, the best pattern was
DecimalFormat df = new DecimalFormat("###,##0.####");
For the second problem, as "x-code" noted, it appears there's no hope using double. You should try using BigDecimal.
I found answers to both parts of the question right here on StackOverflow.
First is that I want to enable Grouping by 3
Here are several solutions to the grouping problem.
another problem is that I want Results with up to 25 integer digits
In this case what's really happening is that Java doubles can't hold 25 significant decimal digits (they only carry about 15 to 17).
For this problem you might want to look at the BigDecimal class, which can represent the numbers in your example.
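For example, combining the two: DecimalFormat knows how to format a BigDecimal directly (it does not go through double), so the full 25 digits survive. A small sketch:
import java.math.BigDecimal;
import java.text.DecimalFormat;

// keep the value as a BigDecimal; a double would already have lost digits
BigDecimal value = new BigDecimal("1111111111111111111111111.1234");

DecimalFormat df = new DecimalFormat("###,##0.####");
System.out.println(df.format(value));
// 1,111,111,111,111,111,111,111,111.1234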
