Multiplying a number by a fraction - Java

I was writing some unit converters using BigDecimal and I ran across a situation where I had to multiply a number by a fraction that is a repeating decimal.
For most cases the precision is good enough, but let's say we have an expression like:
BigDecimal.valueOf(90)
.multiply(BigDecimal.valueOf(10)
.divide(BigDecimal.valueOf(90), 6, RoundingMode.HALF_UP))
Mathematically this equals 10, but because the division is rounded first, we get 9.999990 instead.
Is there an elegant way of implementing this without an if condition detecting when the fraction can be cut?

The following will work:
BigDecimal.valueOf(90)
.multiply(BigDecimal.valueOf(10))
.divide(BigDecimal.valueOf(90), 6, RoundingMode.HALF_UP)
The difference is that here the operations are chained: the multiplication runs first, so the subsequent division (900 / 90) is exact. In your version the division has to be evaluated first (which is where the rounding error occurs), because its result is passed as the argument to multiply.
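To make the difference concrete, here is a minimal sketch comparing the two orderings (the variable names are mine):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

// Division first: 10/90 is rounded to 0.111111, and that error is
// then multiplied by 90, giving 9.999990 instead of 10.
BigDecimal divideFirst = BigDecimal.valueOf(90)
        .multiply(BigDecimal.valueOf(10)
                .divide(BigDecimal.valueOf(90), 6, RoundingMode.HALF_UP));

// Multiplication first: 90 * 10 = 900 is exact, and 900/90 divides
// evenly, so no precision is lost.
BigDecimal multiplyFirst = BigDecimal.valueOf(90)
        .multiply(BigDecimal.valueOf(10))
        .divide(BigDecimal.valueOf(90), 6, RoundingMode.HALF_UP);

System.out.println(divideFirst);   // 9.999990
System.out.println(multiplyFirst); // 10.000000
```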

I don't know if this will be a general-case answer for you, but it works for the example given:
bd = BigDecimal.valueOf(90)
.multiply(BigDecimal.valueOf(10))
.divide(BigDecimal.valueOf(90));
Multiply by 10 first, then divide by 90:
    a         a * x
    - * x  =  -----
    z           z
You will need to include some rounding logic for rational numbers:
bd = BigDecimal.valueOf(1)
.multiply(BigDecimal.valueOf(1))
.divide(BigDecimal.valueOf(3));
This will fail without rounding: divide throws an ArithmeticException, because 1/3 has a non-terminating decimal expansion.
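For illustration, here is a sketch of that failure and the fix; the scale of 6 and HALF_UP mode below are my arbitrary choices:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

BigDecimal one = BigDecimal.valueOf(1);
BigDecimal three = BigDecimal.valueOf(3);

try {
    one.divide(three); // 1/3 = 0.333... never terminates
} catch (ArithmeticException e) {
    System.out.println("Non-terminating decimal expansion");
}

// With an explicit scale and rounding mode the division succeeds.
BigDecimal third = one.divide(three, 6, RoundingMode.HALF_UP);
System.out.println(third); // 0.333333
```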


Floats not adding up [duplicate]

I'm wondering what the best way to fix precision errors is in Java. As you can see in the following example, there are precision errors:
class FloatTest
{
    public static void main(String[] args)
    {
        Float number1 = 1.89f;
        for (int i = 11; i < 800; i *= 2)
        {
            System.out.println("loop value: " + i);
            System.out.println(i * number1);
            System.out.println("");
        }
    }
}
The result displayed is:
loop value: 11
20.789999
loop value: 22
41.579998
loop value: 44
83.159996
loop value: 88
166.31999
loop value: 176
332.63998
loop value: 352
665.27997
loop value: 704
1330.5599
Also, can someone explain why it only does this starting at 11 and doubling the value every time? I think all the other values (or many of them, at least) displayed the correct result.
Problems like this have caused me headaches in the past, and I usually use number formatters or put the values into a String.
Edit: As people have mentioned, I could use a double, but after trying it, it seems that 1.89 as a double times 792 still produces an error (the output is 1496.8799999999999).
I guess I'll try the other solutions such as BigDecimal
If you really care about precision, you should use BigDecimal
https://docs.oracle.com/javase/8/docs/api/java/math/BigDecimal.html
https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/math/BigDecimal.html
The problem is not with Java but with the venerable IEEE floating-point standard (http://en.wikipedia.org/wiki/IEEE_floating-point_standard).
You can either:
use Double and get a bit more precision (not perfect of course; it also has limited precision)
use an arbitrary-precision library
use numerically stable algorithms and truncate/round the digits you are not sure are correct (you can calculate the numeric precision of the operations)
When you print the result of a double operation you need to use appropriate rounding.
System.out.printf("%.2f%n", 1.89 * 792);
prints
1496.88
If you want to round the result to a precision, you can use rounding.
double d = 1.89 * 792;
d = Math.round(d * 100) / 100.0;
System.out.println(d);
prints
1496.88
However, as you can see below, this only prints as expected because Double.toString applies a small amount of implied rounding.
It is worth noting that (double) 1.89 is not exactly 1.89; it is a close approximation.
new BigDecimal(double) converts the exact value of double without any implied rounding. It can be useful in finding the exact value of a double.
System.out.println(new BigDecimal(1.89));
System.out.println(new BigDecimal(1496.88));
prints
1.8899999999999999023003738329862244427204132080078125
1496.8800000000001091393642127513885498046875
Most of your question has been pretty well covered, though you might still benefit from reading the [floating-point] tag wiki to understand why the other answers work.
However, nobody has addressed "why it only does it starting at 11 and doubling the value every time," so here's the answer to that:
for(int i = 11; i < 800; i*=2)
    ╚═══╤════╝           ╚╤═╝
        │                 └───── "double the value every time"
        │
        └───── "start at 11"
You could use doubles instead of floats.
If you really need arbitrary precision, use BigDecimal.
First of all, Float is the wrapper class for the primitive float,
and doubles have more precision.
But if you only want to calculate down to the second digit (for monetary purposes, for example), use an integer (as if you were using cents as the unit) and add some scaling logic when you multiply/divide,
or, if you need arbitrary precision, use BigDecimal.
If precision is vital, you should use BigDecimal to make sure that the required precision is maintained. When you instantiate the values, remember to use strings instead of doubles.
I never had a problem with simple arithmetic precision in Basic, Visual Basic, FORTRAN, ALGOL or other "primitive" languages. It is beyond comprehension that Java can't do simple arithmetic without introducing errors. I need just two digits to the right of the decimal point for doing some accounting. Using Float, subtracting 1000 from 1355.65 I get 355.650002! To get around this ridiculous error I have implemented a simple solution: I process my input by separating the values on each side of the decimal point as characters, convert each to an integer, multiply each by 1000 and add the two back together as integers. Ridiculous, but no errors are introduced by the poor Java algorithms.

A realistic example where using BigDecimal for currency is strictly better than using double

We know that using double for currency is error-prone and not recommended. However, I have yet to see a realistic example where BigDecimal works while double fails and can't simply be fixed by some rounding.
Note that trivial problems
double total = 0.0;
for (int i = 0; i < 10; i++) total += 0.1;
for (int i = 0; i < 10; i++) total -= 0.1;
assertTrue(total == 0.0);
don't count, as they're trivially solved by rounding (in this example anything from zero to sixteen decimal places would do).
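A sketch of that trivial case and its rounding fix; the exact final value of total may vary by platform, but it is within a few ulps of zero, so rounding to cents recovers exact zero:

```java
double total = 0.0;
for (int i = 0; i < 10; i++) total += 0.1; // accumulates ~1e-16 of error
for (int i = 0; i < 10; i++) total -= 0.1;

// total may not compare equal to 0.0, but rounding to two
// decimal places (cents) makes the residual error disappear.
double rounded = Math.round(total * 100) / 100.0;
System.out.println(rounded); // 0.0
```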
Computations involving summing big values may need some intermediate rounding, but given that the total currency in circulation is about USD 1e12, a Java double (i.e., standard IEEE double precision) with its 15 decimal digits is still sufficient even for cents.
Computations involving division are in general imprecise even with BigDecimal. I can construct a computation which can't be performed with doubles, but can be performed with BigDecimal using a scale of 100, but it's not something you can encounter in reality.
I don't claim that such a realistic example does not exist, it's just that I haven't seen it yet.
I also surely agree, that using double is more error-prone.
Example
What I'm looking for is a method like the following (based on the answer by Roland Illig)
/**
 * Given an input which has three decimal places,
 * round it to two decimal places using HALF_EVEN.
 */
BigDecimal roundToTwoPlaces(BigDecimal n) {
    // Make sure that the input has three decimal places.
    checkArgument(n.scale() == 3);
    return n.round(new MathContext(2, RoundingMode.HALF_EVEN));
}
together with a test like
public void testRoundToTwoPlaces() {
    final BigDecimal n = new BigDecimal("0.615");
    final BigDecimal expected = new BigDecimal("0.62");
    final BigDecimal actual = roundToTwoPlaces(n);
    Assert.assertEquals(expected, actual);
}
When this gets naively rewritten using double, then the test could fail (it doesn't for the given input, but it does for others). However, it can be done correctly:
static double roundToTwoPlaces(double n) {
    final long m = Math.round(1000.0 * n);
    final double x = 0.1 * m;
    final long r = (long) Math.rint(x);
    return r / 100.0;
}
It's ugly and error-prone (and can probably be simplified), but it can be easily encapsulated somewhere. That's why I'm looking for more answers.
I can see four basic ways that double can screw you when dealing with currency calculations.
Mantissa Too Small
With ~15 decimal digits of precision in the mantissa, you are going to get the wrong result any time you deal with amounts larger than that. If you are tracking cents, problems would start to occur before 10^13 (ten trillion) dollars.
While that's a big number, it's not that big. The US GDP of ~18 trillion exceeds it, so anything dealing with country or even corporation sized amounts could easily get the wrong answer.
Furthermore, there are plenty of ways that much smaller amounts could exceed this threshold during calculation. You might be doing a growth projection over a number of years, which results in a large final value. You might be doing a "what if" scenario analysis where various possible parameters are examined, and some combination of parameters might result in very large values. You might be working under financial rules which allow fractions of a cent, which could chop another two orders of magnitude or more off of your range, putting you roughly in line with the wealth of mere individuals in USD.
Finally, let's not take a US-centric view of things. What about other currencies? One USD is worth roughly 13,000 Indonesian Rupiah, so that's another 2 orders of magnitude you need to track amounts in that currency (assuming there are no "cents"!). You're almost getting down to amounts that are of interest to mere mortals.
Here is an example where a growth projection calculation starting from 1e9 at 5% goes wrong:
method   year   amount                     delta
double      0   $      1,000,000,000.00
Decimal     0   $      1,000,000,000.00   (0.0000000000)
double     10   $      1,628,894,626.78
Decimal    10   $      1,628,894,626.78   (0.0000004768)
double     20   $      2,653,297,705.14
Decimal    20   $      2,653,297,705.14   (0.0000023842)
double     30   $      4,321,942,375.15
Decimal    30   $      4,321,942,375.15   (0.0000057220)
double     40   $      7,039,988,712.12
Decimal    40   $      7,039,988,712.12   (0.0000123978)
double     50   $     11,467,399,785.75
Decimal    50   $     11,467,399,785.75   (0.0000247955)
double     60   $     18,679,185,894.12
Decimal    60   $     18,679,185,894.12   (0.0000534058)
double     70   $     30,426,425,535.51
Decimal    70   $     30,426,425,535.51   (0.0000915527)
double     80   $     49,561,441,066.84
Decimal    80   $     49,561,441,066.84   (0.0001678467)
double     90   $     80,730,365,049.13
Decimal    90   $     80,730,365,049.13   (0.0003051758)
double    100   $    131,501,257,846.30
Decimal   100   $    131,501,257,846.30   (0.0005645752)
double    110   $    214,201,692,320.32
Decimal   110   $    214,201,692,320.32   (0.0010375977)
double    120   $    348,911,985,667.20
Decimal   120   $    348,911,985,667.20   (0.0017700195)
double    130   $    568,340,858,671.56
Decimal   130   $    568,340,858,671.55   (0.0030517578)
double    140   $    925,767,370,868.17
Decimal   140   $    925,767,370,868.17   (0.0053710938)
double    150   $  1,507,977,496,053.05
Decimal   150   $  1,507,977,496,053.04   (0.0097656250)
double    160   $  2,456,336,440,622.11
Decimal   160   $  2,456,336,440,622.10   (0.0166015625)
double    170   $  4,001,113,229,686.99
Decimal   170   $  4,001,113,229,686.96   (0.0288085938)
double    180   $  6,517,391,840,965.27
Decimal   180   $  6,517,391,840,965.22   (0.0498046875)
double    190   $ 10,616,144,550,351.47
Decimal   190   $ 10,616,144,550,351.38   (0.0859375000)
The delta (the difference between double and BigDecimal) first exceeds 1 cent at year 160, around 2 trillion (which might not be all that much 160 years from now), and of course it just keeps getting worse.
Of course, the 53 bits of mantissa mean that the relative error for this kind of calculation is likely to be very small (hopefully you don't lose your job over 1 cent out of 2 trillion). Indeed, the relative error holds fairly steady through most of the example. You could certainly arrange things, though, so that you (for example) subtract two values with loss of precision in the mantissa, resulting in an arbitrarily large error (exercise left to the reader).
Changing Semantics
So you think you are pretty clever, and managed to come up with a rounding scheme that lets you use double and have exhaustively tested your methods on your local JVM. Go ahead and deploy it. Tomorrow or next week or whenever is worst for you, the results change and your tricks break.
Unlike almost every other basic language expression, and certainly unlike integer or BigDecimal arithmetic, by default the results of many floating point expressions don't have a single standards-defined value, because strictfp is not enabled by default. Platforms are free to use, at their discretion, higher-precision intermediates, which may result in different results on different hardware, JVM versions, etc. The result, for the same inputs, may even vary at runtime when the method switches from interpreted to JIT-compiled!
If you had written your code in the pre-Java 1.2 days, you'd have been pretty pissed when Java 1.2 suddenly introduced the now-default variable FP behavior. You might be tempted to just use strictfp everywhere and hope you don't run into any of the multitude of related bugs, but on some platforms you'd be throwing away much of the performance that double bought you in the first place.
There's nothing to say that the JVM spec won't again change in the future to accommodate further changes in FP hardware, or that the JVM implementors won't use the rope that the default non-strictfp behavior gives them to do something tricky.
Inexact Representations
As Roland pointed out in his answer, a key problem with double is that it doesn't have exact representations for most non-integer values. Although a single non-exact value like 0.1 will often "roundtrip" OK in some scenarios (e.g., Double.toString(0.1).equals("0.1")), as soon as you do math on these imprecise values the error can compound, and this can be irrecoverable.
In particular, if you are "close" to a rounding point, e.g., ~1.005, you might get a value of 1.00499999... when the true value is 1.0050000001..., or vice-versa. Because the errors go in both directions, there is no rounding magic that can fix this. There is no way to tell if a value of 1.004999999... should be bumped up or not. Your roundToTwoPlaces() method (a type of double rounding) only works because it handled a case where 1.0049999 should be bumped up, but it will never be able to cross the boundary, e.g., if cumulative errors cause 1.0050000000001 to be turned into 1.00499999999999 it can't fix it.
You don't need big or small numbers to hit this. You only need some math and for the result to fall close to the boundary. The more math you do, the larger the possible deviations from the true result, and the more chance of straddling a boundary.
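A small sketch of the boundary problem: the literal 1.005 is already stored below the true 1.005, so no post-hoc rounding rule can know it "should" round up:

```java
import java.math.BigDecimal;

// The closest double to 1.005 is slightly BELOW it, as the exact
// expansion shows (a string of 0.00499999...):
System.out.println(new BigDecimal(1.005));

// So rounding to two places goes down, not up:
double rounded = Math.round(1.005 * 100) / 100.0;
System.out.println(rounded); // 1.0, not the 1.01 a decimal view expects
```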
As requested, here is a searching test that does a simple calculation, amount * tax, and rounds it to 2 decimal places (i.e., dollars and cents). There are a few rounding methods in there; the one currently used, roundToTwoPlacesB, is a souped-up version of yours: increasing the multiplier for n in the first rounding makes it a lot more sensitive (the original version fails right away on trivial inputs).
The test spits out the failures it finds, and they come in bunches. For example, the first few failures:
Failed for 1234.57 * 0.5000 = 617.28 vs 617.29
Raw result : 617.2850000000000000000000, Double.toString(): 617.29
Failed for 1234.61 * 0.5000 = 617.30 vs 617.31
Raw result : 617.3050000000000000000000, Double.toString(): 617.31
Failed for 1234.65 * 0.5000 = 617.32 vs 617.33
Raw result : 617.3250000000000000000000, Double.toString(): 617.33
Failed for 1234.69 * 0.5000 = 617.34 vs 617.35
Raw result : 617.3450000000000000000000, Double.toString(): 617.35
Note that the "raw result" (i.e., the exact unrounded result) is always close to a x.xx5000 boundary. Your rounding method errs both on the high and low sides. You can't fix it generically.
Imprecise Calculations
Several of the java.lang.Math methods don't require correctly rounded results, but rather allow errors of up to 2.5 ulp. Granted, you probably aren't going to be using the hyperbolic functions much with currency, but functions such as exp() and pow() often find their way into currency calculations and these only have an accuracy of 1 ulp. So the number is already "wrong" when it is returned.
This interacts with the "Inexact Representations" issue, since this type of error is much more serious than that from the normal mathematical operations, which at least choose the best possible value from within the representable domain of double. It means that you can have many more round-boundary crossing events when you use these methods.
When you round double price = 0.615 to two decimal places, you get 0.61 (rounded down) but probably expected 0.62 (rounded up, because of the 5).
This is because double 0.615 is actually 0.6149999999999999911182158029987476766109466552734375.
The main problems you are facing in practice are related to the fact that round(a) + round(b) is not necessarily equal to round(a+b). By using BigDecimal you have fine control over the rounding process and can therefore make your sums come out correctly.
When you calculate taxes, say 18 % VAT, it is easy to get values that have more than two decimal places when represented exactly. So rounding becomes an issue.
Let's assume you buy two articles at $1.30 each:
Article   Price   Price+VAT (exact)   Price+VAT (rounded)
A         1.30    1.534               1.53
B         1.30    1.534               1.53
sum       2.60    3.068               3.06
exact sum, rounded:                   3.07
So if you do the calculations with double and only round to print the result, you would get a total of 3.07 while the amount on the bill should actually be 3.06.
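A sketch of the bill above, assuming the 18 % VAT rate from the text: rounding each line with BigDecimal gives the correct 3.06, while rounding only the exact total gives 3.07:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

BigDecimal price = new BigDecimal("1.30");
BigDecimal vat   = new BigDecimal("1.18"); // price * 1.18 = price plus 18 % VAT

// Round each article to cents, then sum: this is what belongs on the bill.
BigDecimal line = price.multiply(vat).setScale(2, RoundingMode.HALF_UP); // 1.53
BigDecimal bill = line.add(line);                                        // 3.06

// Compute the exact total first and round only once at the end: one cent more.
BigDecimal total = price.add(price).multiply(vat)
        .setScale(2, RoundingMode.HALF_UP);                              // 3.07

System.out.println(bill);  // 3.06
System.out.println(total); // 3.07
```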
Let's give a "less technical, more philosophical" answer here: why do you think that "Cobol" isn't using floating point arithmetic when dealing with currency?!
("Cobol" in quotes, as in: existing legacy approaches to solve real world business problems).
Meaning: almost 50 years ago, when people started using computers for business, aka financial work, they quickly realized that "floating point" representation isn't going to work for the financial industry (except maybe some rare niche corners, as pointed out in the question).
And keep in mind: back then, abstractions were truly expensive! It was expensive enough to have a bit here and a register there; and still it quickly became obvious to the giants on whose shoulders we stand that using "floating point" would not solve their problems, and that they had to rely on something else, more abstract, more expensive!
Our industry had 50+ years to come up with "floating point that works for currency" - and the common answer is still: don't do it. Instead, you turn to solutions such as BigDecimal.
You don't need an example. You just need fourth-form mathematics. Fractions in floating-point are represented in binary radix, and binary radix is incommensurable with decimal radix. Tenth grade stuff.
Therefore there will always be rounding and approximation, and neither is acceptable in accounting in any way, shape, or form. The books have to balance to the last cent, and so FYI does a bank branch at the end of each day, and the entire bank at regular intervals.
"an expression suffering from round-off errors doesn't count"
Ridiculous. That is the problem. Excluding rounding errors excludes the entire problem.
Suppose that you have 1000000000001.5 money (it is in the 1e12 range), and you have to calculate 117% of it.
In double, it becomes 1170000000001.7549 (this number is already imprecise). Then apply your round algorithm, and it becomes 1170000000001.75.
In precise arithmetic, it becomes 1170000000001.7550, which is rounded to 1170000000001.76. Ouch, you lost 1 cent.
I think that it is a realistic example, where double is inferior to precise arithmetic.
Sure, you can fix this somehow (you can even implement BigDecimal on top of double arithmetic, so in a way double can be used for everything and it will be precise), but what's the point?
You can use double for integer arithmetic, if numbers are less than 2^53. If you can express your math within these constraints, then calculation will be precise (division needs special care, of course). As soon as you leave this territory, your calculations can be imprecise.
As you can see, 53 bits is not enough, double is not enough. But, if you store money in decimal-fixed point number (I mean, store the number money*100, if you need cents precision), then 64 bits might be enough, so a 64-bit integer can be used instead of BigDecimal.
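A sketch of that fixed-point approach, storing cents in a long and running the 117 % example from above (note the intermediate product must stay below Long.MAX_VALUE):

```java
// $1,000,000,000,001.50 stored as cents in a long.
long cents = 100_000_000_000_150L;

// 117 % with half-up rounding, done entirely in integer arithmetic.
// cents * 117 is about 1.17e16, far below Long.MAX_VALUE (~9.2e18).
long result = (cents * 117 + 50) / 100;

System.out.println(result); // 117000000000176 cents = $1,170,000,000,001.76
```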
Using BigDecimal would be most necessary when dealing with high-value digital forms of currency such as cryptocurrency (BTC, LTC, etc.), stocks, etc. In situations like these you will often be dealing with very specific values at 7 or 8 significant figures. If your code accidentally causes a rounding error at 3 or 4 sig figs, then the losses could be extremely significant. Losing money because of a rounding error is not going to be fun, especially if it's your clients' money.
Sure, you could probably get away with using a Double for everything if you made sure to do everything right, but it would probably be better to not take the risk, especially if you're starting from scratch.
The following would appear to be a decent implementation of a method that needed to "round down to the nearest penny".
private static double roundDowntoPenny(double d) {
    double e = d * 100;
    return ((int) e) / 100.0;
}
However, the output of the following shows that the behavior isn't quite what we expect.
public static void main(String[] args) {
    System.out.println(roundDowntoPenny(10.30001));
    System.out.println(roundDowntoPenny(10.3000));
    System.out.println(roundDowntoPenny(10.20001));
    System.out.println(roundDowntoPenny(10.2000));
}
Output:
10.3
10.3
10.2
10.19 // Not expected!
Of course, a method can be written which produces the output that we want. The problem is that it is actually very difficult to do so (and to do so in every place where you need to manipulate prices).
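One way to get the expected output, sketched here, is to let BigDecimal.valueOf (which goes through Double.toString, and hence the shortest decimal representation of the double) interpret the value before truncating:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

double roundDownToPenny(double d) {
    // BigDecimal.valueOf uses Double.toString, so 10.2 becomes the
    // decimal "10.2" rather than the binary 10.19999...; FLOOR then
    // truncates down to the nearest cent.
    return BigDecimal.valueOf(d)
            .setScale(2, RoundingMode.FLOOR)
            .doubleValue();
}

System.out.println(roundDownToPenny(10.30001)); // 10.3
System.out.println(roundDownToPenny(10.2000));  // 10.2 (not 10.19)
```

This works because Double.toString produces the shortest decimal string that round-trips, but it still inherits double's limits for values that were already corrupted upstream.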
For every numeral-system (base-10, base-2, base-16, etc.) with a finite number of digits, there are rationals that cannot be stored exactly. For example, 1/3 cannot be stored (with finite digits) in base-10. Similarly, 3/10 cannot be stored (with finite digits) in base-2.
If we needed to chose a numeral-system to store arbitrary rationals, it wouldn't matter what system we chose - any system chosen would have some rationals that couldn't be stored exactly.
However, humans began assigning prices to things way before the development of computer systems. Therefore, we see prices like 5.30 rather than 5 + 1/3. For example, our stock exchanges use decimal prices, which means that they accept orders, and issue quotes, only in prices that can be represented in base-10. Likewise, it means that they can issue quotes and accept orders in prices that cannot be accurately represented in base-2.
By storing (transmitting, manipulating) those prices in base-2, we are essentially relying on rounding logic to always correctly round our (inexact) base-2 representations back to their (exact) base-10 representation.

BigDecimal in scala

I've stumbled across an interesting thing (maybe only interesting to me) in Scala. In short, if we have a BigDecimal (say val a = BigDecimal(someValue), where someValue is a decimal string), the result of the operation
N * a / N == a
will not always be true. I suppose this applies to any operations on BigDecimals. I know that in Scala BigDecimals are created with the default MathContext set to DECIMAL128 (HALF_EVEN rounding and a precision of 34). I've observed this behavior on decimals with more than 30 digits after the point.
My question is why I get such results, and whether I can somehow control them.
example
-0.007633587786259541984732824427480916
As previous comments already point out, this is not avoidable with irrational numbers. This is because there's no way to represent an irrational number using the standard numeric types (if at all). Since I have no examples with irrational numbers (even PI is limited to a fixed number of digits, and therefore can be expressed as a quotient of 2 whole numbers, making it rational), I will use repeating decimals to illustrate the problem. I changed N*a/N to a/N*N because it demonstrates the problem better with whole numbers, but they're equivalent:
a = BigDecimal(1)
N = BigDecimal(3)
a/N = 0.333...
a/N*N = 0.999...
As you can see in the example above, you can use as many decimal places and any rounding mode, but the result is never going to be equal to 1. (Though it IS possible to get 1 using a different rounding mode per operation, i.e. BigDecimal(3, roundHalfEven) * (BigDecimal(1, roundUp) / 3))
One thing you can do to control the number comparison is to use a higher precision when performing your arithmetic operations and round to the desired (lower) precision when comparing:
val HighPrecision = new java.math.MathContext(36, java.math.RoundingMode.HALF_EVEN);
val TargetPrecision = java.math.MathContext.DECIMAL128;
val a = BigDecimal(1, HighPrecision)
val N = BigDecimal(3, HighPrecision)
(a/N*N).round(TargetPrecision) == a.round(TargetPrecision)
In the example above, the last expression evaluates to true.
UPDATE
To answer your comment, although BigDecimal is arbitrary precision, it is still limited by a precision. It can be 34 or it can be 1000000 (if you have enough memory). BigDecimal does NOT know that 1 / 3 is 0.33<repeating>. If you think about how division works, there's no way for BigDecimal to conclusively know that it's repeating without performing the division to infinite decimal places. But since a precision of 2 indicates it can stop dividing after 2 decimal places, it only knows that 1 / 3 is 0.33.

Weird decimal calculation in Java, potentially a bug?

I have a weird decimal calculation that really surprised me. I have two BigDecimal numbers: one is a normal price and the other is an offer price. I want to calculate the discount percentage and ceil it to the nearest integer.
Here is the code:
BigDecimal listPrice = new BigDecimal(510000);
BigDecimal offerPrice = new BigDecimal(433500);
int itemDiscount = (int)Math.ceil(((listPrice.longValue() - offerPrice.longValue()) / (float) listPrice.longValue()) * 100);
I expected itemDiscount to be 15, but surprisingly it is 16. To find out which statement the problem is in, I put a System.out.println for each step, as below:
System.out.println(listPrice.longValue() - offerPrice.longValue()); //==> show 76500
System.out.println((listPrice.longValue() - offerPrice.longValue()) / (float) listPrice.longValue()); // ==> 0.15
System.out.println((listPrice.longValue() - offerPrice.longValue()) * 100 / (float) listPrice.longValue()); // ==> 15.000001
The problem is in the last statement: instead of returning 15.0, it returns 15.000001. And when I ceil it, it of course returns 16 instead of 15.
What is the explanation if this case? is this the way it is or it is a bug?
What is the explanation if this case? is this the way it is or it is a bug?
It is the way it is. It is not a bug.
You are doing the calculation using floating point types (float) and floating point arithmetic is imprecise.
I'm not sure what the best fix is here. Maybe doing the computation using BigDecimal arithmetic methods would give a better result, but it is by no means guaranteed that you won't get similar problems in this calculation with different inputs ...
However, I suspect that the real problem is that you should not be using ceil in that calculation. Even BigDecimal will give you rounding errors; e.g. if your computation involves dividing by 3, the intermediate result cannot be precisely represented using a base-10 representation.
The correct way to do calculations with real numbers is to properly account for the error bars in the calculation.
Try using the divide method directly from the BigDecimal class. If you cast to a float, you lose the benefit of BigDecimal.
http://www.roseindia.net/java/java-bigdecimal/bigDecimal-divide-int.shtml
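Keeping the whole computation in BigDecimal avoids the float artifact; here is a sketch that multiplies by 100 before dividing, so the division comes out exact for these inputs:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

BigDecimal listPrice  = new BigDecimal(510000);
BigDecimal offerPrice = new BigDecimal(433500);

// (510000 - 433500) * 100 / 510000 = 15 exactly; no float involved.
BigDecimal discount = listPrice.subtract(offerPrice)
        .multiply(new BigDecimal(100))
        .divide(listPrice, 2, RoundingMode.HALF_UP); // 15.00

// Ceiling to a whole percent, as the original code intended.
int itemDiscount = discount.setScale(0, RoundingMode.CEILING).intValueExact();
System.out.println(itemDiscount); // 15
```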

Java Glitch? Subtracting numbers?

Is this a glitch in Java?
I go to solve this expression: 3.1 - 7.1
I get the answer: -3.9999999999999996
What is going on here?
A great explanation can be found here. http://www.ibm.com/developerworks/java/library/j-jtp0114/
Floating point arithmetic is rarely exact. While some numbers, such as 0.5, can be exactly represented as a binary (base 2) decimal (since 0.5 equals 2^-1), other numbers, such as 0.1, cannot be. As a result, floating point operations may result in rounding errors, yielding a result that is close to -- but not equal to -- the result you might expect. For example, the simple calculation below results in 2.600000000000001, rather than 2.6:
double s = 0;
for (int i = 0; i < 26; i++)
    s += 0.1;
System.out.println(s);
Similarly, multiplying .1*26 yields a result different from that of adding .1 to itself 26 times. Rounding errors become even more serious when casting from floating point to integer, because casting to an integral type discards the non-integral portion, even for calculations that "look like" they should have integral values. For example, the following statements:
double d = 29.0 * 0.01;
System.out.println(d);
System.out.println((int) (d * 100));
will produce as output:
0.29
28
which is probably not what you might expect at first.
See the provided reference for more information.
As mentioned by several others you cannot count on double if you would like to get an exact decimal value, e.g. when implementing monetary applications. What you should do instead is to take a closer look at BigDecimal:
BigDecimal a = new BigDecimal("3.1");
BigDecimal b = new BigDecimal("7.1");
BigDecimal result = a.subtract(b);
System.out.println(result); // Prints -4.0
Computers are 100% exact, so in the math world that result is correct; to the average person it is not. Java can't have an error on a specific number, as it is just code that runs the same way but with a different input!
P.S. Google how to round a number.
Rounding errors in floating point.
It's the same way that 3 * 0.1 != 0.3 (when it's not folded by the compiler, at least).
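A one-line check (in Java, constant folding uses the same IEEE arithmetic, so the result is identical either way):

```java
System.out.println(3 * 0.1 == 0.3); // false
System.out.println(3 * 0.1);        // 0.30000000000000004
```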
Automatic type promotion is happening, and that is the result.
Here is a resource to learn from:
http://docs.oracle.com/javase/specs/jls/se5.0/html/conversions.html
The next step would be to learn to use formatters to format the value to the given precision / requirements.