I have data that I'm trying to accurately and precisely manipulate with an ever-increasing denominator.
Please assume that the numerator will always have a decimal.
I see in the docs that divide(BigDecimal divisor) will actually reduce the scale, which seems strange, since as I understand the usage of "scale" (the number of digits past the decimal point), it should increase upon division.
I also see in the docs that multiply(BigDecimal multiplicand) increases the scale. This also doesn't make sense, according to my understanding of scale, since the likelihood of two multiplied numbers needing digits beyond the decimal point goes down.
Are these typos in the docs?
If not, is my understanding of scale incorrect?
If not, how can precision be maintained with an ever-increasing denominator that increases the number of digits past the decimal point?
This is effectively just scientific notation. As it says in the docs, the value of a BigDecimal is:
unscaledValue × 10^(-scale)
Thus multiplying two BigDecimals is equivalent to multiplying their unscaledValues and adding their scales:
a * b
== (uA * 10^-sA) * (uB * 10^-sB)
== (uA * uB) * 10^-(sA + sB)
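Here is a minimal sketch showing both rules in action (the class name ScaleDemo is just for illustration):

import java.math.BigDecimal;
import java.math.RoundingMode;

public class ScaleDemo {
    public static void main(String[] args) {
        BigDecimal a = new BigDecimal("1.25"); // unscaledValue 125, scale 2
        BigDecimal b = new BigDecimal("0.5");  // unscaledValue 5,   scale 1

        // multiply: scales add (2 + 1 = 3)
        System.out.println(a.multiply(b));         // 0.625
        System.out.println(a.multiply(b).scale()); // 3

        // divide: the preferred scale is dividend.scale() - divisor.scale() (2 - 1 = 1)
        System.out.println(a.divide(b));           // 2.5
        System.out.println(a.divide(b).scale());   // 1

        // to keep digits past the decimal point, request a scale explicitly
        System.out.println(BigDecimal.ONE.divide(new BigDecimal("3"), 20, RoundingMode.HALF_UP));
        // 0.33333333333333333333
    }
}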
I'm writing a bank program with a long variable balance to store the cents in an account. When a user inputs an amount I have a method to do the conversion from USD to cents:
public static long convertFromUsd(double amountUsd) {
    if (amountUsd <= maxValue && amountUsd >= minValue) {
        return (long) (amountUsd * 100.0);
    } else {
        // no conversion; the original code throws an exception here
        throw new IllegalArgumentException("amount out of range");
    }
}
In my actual code I also check that amountUsd does not have more than 2 decimals, to avoid inputs that cannot be accurately converted (e.g. 20.001 dollars is not exactly 2000 cents). For this example code, assume that all inputs have 0, 1 or 2 decimals.
At first I looked at Long.MAX_VALUE (9223372036854775807 cents) and assumed that double maxValue = 92233720368547758.07 would be correct, but it gave me rounding errors for big amounts:
convertFromUsd(92233720368547758.07) gives output 9223372036854775807
convertFromUsd(92233720368547758.00) gives the same output 9223372036854775807
What should I set double maxValue and double minValue to always get accurate return values?
You could use BigDecimal as a temp holder
If you have a very large double (something between Double.MAX_VALUE / 100.0 and Double.MAX_VALUE), the calculation usd * 100.0 would overflow the range of your double.
But since you know that every dollar amount you actually accept will, multiplied by 100, still fit in a long, you could use a BigDecimal as a temporary holder for your calculation.
Also, the BigDecimal class defines two methods which come in handy for this purpose:
BigDecimal#movePointRight
BigDecimal#longValueExact
By using a BigDecimal you don't have to bother specifying a max value at all: any given double representing USD can be converted to a long value representing cents (assuming you don't have to handle cent fractions).
double usd = 123.45;
long cents = BigDecimal.valueOf(usd).movePointRight(2).setScale(0).longValueExact();
Attention: Keep in mind that a double is not able to store the exact USD information in the first place. It is not possible to restore the information that has been lost by converting the double to a BigDecimal.
The only advantage a temporary BigDecimal gives you is that the calculation of usd * 100 won't overflow.
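Putting that together with the method signature from the question, a minimal sketch might look like this (the exception message is an assumption; setScale(0) without a rounding mode deliberately fails on cent fractions):

import java.math.BigDecimal;

public static long convertFromUsd(double amountUsd) {
    return BigDecimal.valueOf(amountUsd) // valueOf goes through Double.toString, the shortest decimal form
            .movePointRight(2)           // dollars -> cents: only the scale changes, nothing is rounded
            .setScale(0)                 // no rounding mode on purpose: throws if cent fractions remain
            .longValueExact();           // throws ArithmeticException if the result doesn't fit a long
}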
First of all, using double for monetary amounts is risky.
TL;DR
I'd recommend staying below $17,592,186,044,416.
The floating-point representation of numbers (double type) doesn't use decimal fractions (1/10, 1/100, 1/1000, ...), but binary ones (e.g. 1/128, 1/256). So, the double number will never exactly hit something like $1.99. It will be off by some fraction most of the time.
Hopefully, the conversion from decimal digit input ("1.99") to a double number will end up with the closest binary approximation, being a tiny fraction higher or lower than the exact decimal value.
To be able to correctly represent the 100 different cent values from $xxx.00 to $xxx.99, you need a binary resolution where you can at least represent 128 different values for the fractional part, meaning that the least significant bit corresponds to 1/128 (or better), meaning that at least 7 trailing bits have to be dedicated to the fractional dollars.
The double format effectively has 53 bits for the mantissa. If you need 7 bits for the fraction, you can devote at most 46 bits to the integral part, meaning that you have to stay below 2^46 dollars ($70,368,744,177,664.00, 70 trillions) as the absolute limit.
As a precaution, I wouldn't trust the best-rounding property of converting from decimal digits to double too much, so I'd spend two more bits for the fractional part, resulting in a limit of 2^44 dollars, $17,592,186,044,416.
Code Warning
There's a flaw in your code:
return (long) (amountUsd * 100.0);
This will truncate down to the next-lower cent if the double value lies between two exact cents, meaning that e.g. "123456789.23" might become 123456789.229... as a double and get truncated down to 12345678922 cents as a long.
You should instead use
return Math.round(amountUsd * 100.0);
This will end up with the nearest cent value, most probably being the "correct" one.
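To see the difference for yourself, here's a small sketch (4.35 is a well-known value whose nearest double lies just below it):

public class CentsDemo {
    public static void main(String[] args) {
        double amountUsd = 4.35;                           // stored as roughly 4.3499999999999996
        System.out.println(amountUsd * 100.0);             // 434.99999999999994
        System.out.println((long) (amountUsd * 100.0));    // 434 -- the cast truncates, one cent lost
        System.out.println(Math.round(amountUsd * 100.0)); // 435 -- rounds to the nearest cent
    }
}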
EDIT:
Remarks on "Precision"
You often read statements that floating-point numbers aren't precise, and then in the next sentence the authors advocate BigDecimal or similar representations as being precise.
The validity of such a statement depends on the type of number you want to represent.
All the number representation systems in use in today's computing are precise for some types of numbers and imprecise for others. Let's take a few example numbers from mathematics and see how well they fit into some typical data types:
42: A small integer can be represented exactly in virtually all types.
1/3: All the typical data types (including double and BigDecimal) fail to represent 1/3 exactly. They can only do a (more or less close) approximation. The result is that multiplication with 3 does not exactly give the integer 1. Few languages offer a "ratio" type, capable of representing numbers as a numerator and denominator, thus giving exact results.
1/1024: Because of the power-of-two denominator, float and double can easily do an exact representation. BigDecimal can as well, but needs 10 fractional digits.
14.99: Because of the decimal fraction (can be rewritten as 1499/100), BigDecimal does it easily (that's what it's made for), float and double can only give an approximation.
PI: I don't know of any language with support for irrational numbers - I even have no idea how this could be possible (aside from treating popular irrationals like PI and E symbolically).
123456789123456789123456789: BigInteger and BigDecimal can do it exactly, double can do an approximation (with the last 13 digits or so being garbage), int and long fail completely.
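Here is a minimal Java sketch of the examples above (the class name ExactnessDemo is just for illustration):

import java.math.BigDecimal;
import java.math.MathContext;

public class ExactnessDemo {
    public static void main(String[] args) {
        // 1/1024: a power-of-two denominator, exact in double and in BigDecimal
        System.out.println(1.0 / 1024.0);                                // 9.765625E-4
        System.out.println(BigDecimal.ONE.divide(new BigDecimal(1024))); // 0.0009765625 (10 fractional digits)

        // 14.99: exact in BigDecimal, only approximated by double
        System.out.println(new BigDecimal("14.99")); // 14.99
        System.out.println(new BigDecimal(14.99));   // 14.9900000000000002... (the double's exact value)

        // 1/3: neither type can represent it; BigDecimal must be told where to stop
        System.out.println(BigDecimal.ONE.divide(new BigDecimal(3), MathContext.DECIMAL64)); // 0.3333333333333333
    }
}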
Let's face it: Each data type has a class of numbers that it can represent exactly, where computations deliver precise results, and other classes where it can at best deliver approximations.
So the questions should be:
What's the type and range of numbers to be represented here?
Is an approximation okay, and if yes, how close should it be?
What's the data type that matches my requirements?
Using a double, the biggest amount that works in Java would be 70368744177663.99.
What you have in a double is 64 bits (8 bytes) to represent:
the integer and fractional digits
the sign (+/-)
The problem is to keep it from rounding off the .99, so you get 46 bits for the integer part and the rest has to be used for the decimals.
You can test with the following code:
double biggestPositiveNumberInDouble = 70368744177663.99;
for (int i = 0; i < 130; i++) {
    System.out.printf("%.2f\n", biggestPositiveNumberInDouble);
    biggestPositiveNumberInDouble = biggestPositiveNumberInDouble - 0.01;
}
If you add 1 to biggestPositiveNumberInDouble you will see it start to round off and lose precision.
Also note the round off error when subtracting 0.01.
First iterations
70368744177663.99
70368744177663.98
70368744177663.98
70368744177663.97
70368744177663.96
...
The best way in this case would be not to parse to double at all:
System.out.println("Enter amount:");
String input = new Scanner(System.in).nextLine();
int indexOfDot = input.indexOf('.');
if (indexOfDot == -1) indexOfDot = input.length();
int validInputLength = indexOfDot + 3;
if (validInputLength > input.length()) validInputLength = input.length();
String validInput = input.substring(0,validInputLength);
long amout = Integer.parseInt(validInput.replace(".", ""));
System.out.println("Converted: " + amout);
This way you don't run into the limits of double and just have the limits of long.
But the best option ultimately would be to go with a datatype made for currency.
You looked at the largest possible long, but the largest integer that a double can represent exactly is smaller. Calculating (amountUsd * 100.0) produces a double (which afterwards gets cast to a long).
You should ensure that (amountUsd * 100.0) can never be bigger than the largest double that still represents every whole number exactly, which is 2^53 = 9007199254740992.
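A minimal sketch to verify that 2^53 is the edge:

public class IntegerLimit {
    public static void main(String[] args) {
        double limit = 9007199254740992.0;                // 2^53
        System.out.println(limit + 1.0 == limit);         // true  -- 2^53 + 1 is not representable
        System.out.println((limit - 1.0) + 1.0 == limit); // true  -- below 2^53, integers are exact
        System.out.println(limit + 2.0 == limit);         // false -- the next representable double is 2^53 + 2
    }
}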
Floating values (float, double) are stored differently than integer values (int, long), and while double can store very large values, it is not good for storing money amounts, as it gets less accurate the bigger the number is or the more decimal places it has.
Check out "How many significant digits do floats and doubles have in Java?" for more information about floating-point significant digits.
A double has about 15 significant digits; the significant digit count is the total number of digits from the first non-zero digit. (For a better explanation, see the rules at https://en.wikipedia.org/wiki/Significant_figures)
Therefore, in your equation, to include cents and stay accurate you would want the maximum number to have no more than 13 whole-number places and 2 decimal places.
As you are dealing with money, it would be better not to use floating-point values. Check out this article on using BigDecimal for storing currency: https://medium.com/@cancerian0684/which-data-type-would-you-choose-for-storing-currency-values-like-trading-price-dd7489e7a439
As you mentioned users are inputting an amount, you could read it in as a String rather than a floating point value and pass that into a BigDecimal.
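A minimal sketch of that approach (reading with Scanner is an assumption about how the input arrives):

import java.math.BigDecimal;
import java.util.Scanner;

public class ReadAmount {
    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        BigDecimal dollars = new BigDecimal(in.nextLine().trim()); // exact decimal value of what was typed
        long cents = dollars.movePointRight(2).longValueExact();   // throws if there are fractions of a cent
        System.out.println(cents);
    }
}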
Say I have 2 double values. One of them is very large and one of them is very small.
double x = 99....9; // I don't know the possible max and min values,
double y = 0.00..1; // so just assume these values are near max and min.
If I add those values together, do I lose precision?
In other words, does the max possible double value increase if I assign an int value to it? And does the min possible double value decrease if I choose a small integer part?
double z = x + y; // Real result is something like 999999999999999.00000000000001
double values are not evenly distributed over all numbers. double uses the floating point representation of the number which means you have a fixed amount of bits used for the exponent and a fixed amount of bits used to represent the actual "numbers"/mantissa.
So in your example, using a large and a small value would result in dropping the smaller value, since it cannot be expressed using the larger exponent.
The solution to not dropping precision is using a number format that has a potentially growing precision like BigDecimal - which is not limited to a fixed number of bits.
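A two-line sketch of the effect:

double x = 1.0e16;  // the ulp (gap to the next representable double) here is 2.0
double y = 0.001;   // far below half an ulp of x
System.out.println(x + y == x); // true -- y is rounded away completely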
Let's use a decimal floating-point arithmetic with a precision of three decimal digits and (roughly) the same features as the typical binary floating-point arithmetic. Say you have 123.0 and 4.56. These numbers are represented by a mantissa (0 <= m < 1) and an exponent: 0.123*10^3 and 0.456*10^1, which I'll write as <.123e3> and <.456e1>. Adding two such numbers isn't immediately possible unless the exponents are equal, and that's why the addition proceeds according to:
  <.123e3>         <.123e3>
+ <.456e1>  ==>  + <.004e3>
                 ----------
                   <.127e3>
You see that the necessary alignment of the decimal digits according to a common exponent produces a loss of precision. In the extreme case, the entire addend could be shifted into nothingness. (Think of summing an infinite series where the terms get smaller and smaller but would still contribute considerably to the sum being computed.)
Other sources of imprecision result from differences between binary and decimal fractions, where an exact fraction in one base cannot be represented without error using the other one.
So, in short, addition and subtraction between numbers from rather different orders of magnitude are bound to cause a loss of precision.
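This is easy to reproduce in Java: summing the same terms of a series in different orders gives different float results, because once the running sum is large, small terms get partially or entirely rounded away. A sketch, using the harmonic series purely as a convenient example:

float largeFirst = 0f, smallFirst = 0f;
for (int i = 1; i <= 10_000_000; i++) largeFirst += 1.0f / i; // stalls once terms fall below the sum's ulp
for (int i = 10_000_000; i >= 1; i--) smallFirst += 1.0f / i; // small terms accumulate before the sum grows
System.out.println(largeFirst); // noticeably smaller
System.out.println(smallFirst); // closer to the true value (~16.7)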
If you try to assign a value that is too big or too small to a double, the compiler will give an error. Try this:
double d1 = 1e-1000; // won't compile: the literal is too small to be a nonzero double
double d2 = 1e+1000; // won't compile: the literal is too large for a double
I have float-based storage of numbers that are decimal by nature. The precision of float is fine for my needs. Now I want to perform some more precise calculations with these numbers using double.
An example:
float f = 0.1f;
double d = f; //d = 0.10000000149011612d
// but I want some code that will convert 0.1f to 0.1d;
Update 1:
I know very well that 0.1f != 0.1d. This question is not about precise decimal calculations. Sadly, the question was downvoted. I will try to explain it again...
Let's say I work with an API that returns float numbers for decimal MSFT stock prices. Believe it or not, this API exists:
interface Stock {
    float[] getDayPrices();
    int[] getDayVolumesInHundreds();
}
It is known that the price of an MSFT share is a decimal number with no more than 5 digits, e.g. 31.455, 50.12, 45.888. Obviously the API does not work with BigDecimal because that would be a big overhead for the purpose of just passing a price.
Let's also say I want to calculate a weighted average of these prices with double precision:
float[] prices = msft.getDayPrices();
int[] volumes = msft.getDayVolumesInHundreds();
double priceVolumeSum = 0.0;
long volumeSum = 0;
for (int i = 0; i < prices.length; i++) {
    double doublePrice = decimalFloatToDouble(prices[i]);
    priceVolumeSum += doublePrice * volumes[i];
    volumeSum += volumes[i];
}
System.out.println(priceVolumeSum / volumeSum);
I need a performant implementation of decimalFloatToDouble.
Now I use the following code, but I need a something more clever:
double decimalFloatToDouble(float f) {
    return Double.parseDouble(Float.toString(f));
}
EDIT: this answer corresponds to the question as initially phrased.
When you convert 0.1f to double, you obtain the same number, the imprecise representation of the rational 1/10 (which cannot be represented in binary at any precision) in single-precision. The only thing that changes is the behavior of the printing function. The digits that you see, 0.10000000149011612, were already there in the float variable f. They simply were not printed because these digits aren't printed when printing a float.
Ignore these digits and compute with double as you wish. The problem is not in the conversion, it is in the printing function.
As I understand you, you know that the float is within one float-ulp of an integer number of hundredths, and you know that you're well inside the range where no two integer numbers of hundredths map to the same float. So the information isn't gone at all; you just need to figure out which integer you had.
To get two decimal places, you can multiply by 100, rint/Math.round the result, and multiply by 0.01 to get a close-by double as you wanted. (To get the closest, divide by 100.0 instead.) But I suspect you knew this already and are looking for something that goes a little faster. Try ((9007199254740992 + 100.0 * x) - 9007199254740992) * 0.01 and don't mess with the parentheses. Maybe strictfp that hack for good measure.
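For the common two-decimal case, the multiply/round/divide idea from the previous paragraph might look like this (the method name decimalFloatToDouble2dp is hypothetical; it assumes the value times 100 stays well inside double's exact-integer range):

// rounds the float's value to the nearest hundredth, then returns the closest double
static double decimalFloatToDouble2dp(float f) {
    return Math.round(f * 100.0) / 100.0; // f widens to double before the multiply
}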
You said five significant figures, and apparently your question isn't limited to MSFT share prices. Up to the point where double can no longer represent powers of 10 exactly, this isn't too bad. (And maybe this works beyond that threshold too.) The exponent field of a float narrows the needed power of ten down to two candidates, and there are 256 possibilities. (Except in the case of subnormals.) Getting the right power of ten just needs a conditional, and the rounding trick is straightforward enough.
All of this is going to be a mess, and I'd recommend you stick with the toString approach for all the weird cases.
If your goal is to have a double whose canonical representation will match the canonical representation of the float, then converting the float to a string and converting the result back to double would probably be the most accurate way of achieving that, at least when it's possible. (I don't know for certain whether Java's double-to-string logic guarantees that there won't be a pair of consecutive double values which report themselves as just above and just below a number with five significant figures.)
If your goal is to round to five significant figures a value which is known to have been rounded to five significant figures while in float form, the simplest approach is probably to simply round to five significant figures again. If the magnitude of your numbers will be roughly within the range 1E+/-12, start by finding the smallest power of ten which is smaller than your number, multiply that by 100,000, multiply your number by that, round to the nearest unit, and divide by that power of ten.

Because division is often much slower than multiplication, if performance is critical you might keep a table of powers of ten and their reciprocals. To avoid the possibility of rounding errors, the table should store, for each power of ten, the closest power-of-two double to its reciprocal, and then the closest double to the difference between that first double and the actual reciprocal. Thus, the reciprocal of 100 would be stored as 0.0078125 + 0.0021875; the value n/100 would be computed as n*0.0078125 + n*0.0021875. The first term never has any round-off error (it is a multiplication by a power of two), and the second has precision beyond that needed for the final result, so the final result should be rounded accurately.
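The split-reciprocal idea works because 1/100 decomposes exactly as 0.0078125 + 0.0021875, where 0.0078125 is the power of two 2^-7. A tiny sketch of the n/100 computation described above:

double n = 12345.0;
double highPart = n * 0.0078125;  // multiply by 2^-7: exact, no rounding error
double lowPart  = n * 0.0021875;  // carries the remaining precision of 1/100
System.out.println(highPart + lowPart); // ~123.45
System.out.println(n / 100.0);          // for comparison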
I've stumbled across an interesting thing (maybe interesting only to me) in Scala. In short, if we have a BigDecimal (let's say val a = BigDecimal(someValue), where someValue is a decimal string), the result of the operation
N * a / N == a
will not always produce true. I suppose this relates to any operations on BigDecimals. I know that in Scala, BigDecimals are created with the default MathContext set to DECIMAL128 (HALF_EVEN rounding and a precision of 34). I've observed such behavior on decimals with more than 30 digits after the point.
My question is why I get such results, and whether I can somehow control them.
Example: -0.007633587786259541984732824427480916
As previous comments already point out, this is not avoidable with irrational numbers. This is because there's no way to represent an irrational number using the standard numeric types (if at all). Since I have no examples with irrational numbers (even PI is limited to a fixed number of digits, and therefore can be expressed as a quotient of 2 whole numbers, making it rational), I will use repeating decimals to illustrate the problem. I changed N*a/N to a/N*N because it demonstrates the problem better with whole numbers, but they're equivalent:
a = BigDecimal(1)
N = BigDecimal(3)
a/N = 0.333...
a/N*N = 0.999...
As you can see in the example above, you can use as many decimal places and any rounding mode, but the result is never going to be equal to 1. (Though it IS possible to get 1 using a different rounding mode per operation, i.e. BigDecimal(3, roundHalfEven) * (BigDecimal(1, roundUp) / 3))
One thing you can do to control the number comparison is to use a higher precision when performing your arithmetic operations and round to the desired (lower) precision when comparing:
val HighPrecision = new java.math.MathContext(36, java.math.RoundingMode.HALF_EVEN);
val TargetPrecision = java.math.MathContext.DECIMAL128;
val a = BigDecimal(1, HighPrecision)
val N = BigDecimal(3, HighPrecision)
(a/N*N).round(TargetPrecision) == a.round(TargetPrecision)
In the example above, the last expression evaluates to true.
UPDATE
To answer your comment, although BigDecimal is arbitrary precision, it is still limited by a precision. It can be 34 or it can be 1000000 (if you have enough memory). BigDecimal does NOT know that 1 / 3 is 0.33<repeating>. If you think about how division works, there's no way for BigDecimal to conclusively know that it's repeating without performing the division to infinite decimal places. But since a precision of 2 indicates it can stop dividing after 2 decimal places, it only knows that 1 / 3 is 0.33.
This method returns 'true'. Why ?
public static boolean f() {
    double val = Double.MAX_VALUE / 10;
    double save = val;
    for (int i = 1; i < 1000; i++) {
        val -= i;
    }
    return (val == save);
}
You're subtracting quite a small value (less than 1000) from a huge value. The small value is so much smaller than the large value that the closest representable value to the theoretical result is still the original value.
Basically it's a result of the way floating point numbers work.
Imagine we had some decimal floating point type (just for simplicity) which only stored 5 significant digits in the mantissa, and an exponent in the range 0 to 1000.
Your example is like writing 10^999 - 1000... think about what the result of that would be, when rounded to 5 significant digits. Yes, the exact result is 99999...9000 (with 999 digits), but if you can only represent values with 5 significant digits, the closest result is 10^999 again.
When you set val to Double.MAX_VALUE / 10, it is set to a value approximately equal to 1.7976931348623158 * 10^307. Subtracting values like 1000 from that would require a precision that the double representation does not have, so it basically leaves val unchanged.
Depending on your needs, you may use BigDecimal instead of double.
Double.MAX_VALUE is so big that the JVM cannot tell the difference between it and Double.MAX_VALUE - 1000.
If you subtract a number smaller than roughly half of 1.9958403095347198E292 (the gap between Double.MAX_VALUE and the next smaller double) from Double.MAX_VALUE, the result is still Double.MAX_VALUE.
System.out.println(
        new BigDecimal(Double.MAX_VALUE).equals(
                new BigDecimal(Double.MAX_VALUE - 2.0E291)));
System.out.println(
        new BigDecimal(Double.MAX_VALUE).equals(
                new BigDecimal(Double.MAX_VALUE - 2.0E292)));
Output:
true
false
A double does not have enough precision to perform the calculation you are attempting. So the result is the same as the initial value.
It has nothing to do with the == operator.
val is a big number and when subtracting 1 (or even 1000) from it, the result cannot be expressed properly as a double value. The representation of this number x and x-1 is the same, because double only has a limited number of bits to represent an unlimited number of numbers.
Double.MAX_VALUE is a huge number compared to 1 or 1000. Double.MAX_VALUE - 1 evaluates to Double.MAX_VALUE. So your code effectively does nothing when subtracting 1 (or even 1000) from Double.MAX_VALUE / 10.
Always remember that:
doubles and floats are just approximations of real numbers; they are rationals, and they are not evenly distributed among the reals
be very careful with arithmetic operators between doubles or floats whose magnitudes are far apart (there are many other rules like this...)
in general, never use doubles or floats if you need arbitrary precision
Because double is a floating point numeric type, which is a way of approximating numeric values. Floating point representations encode numbers so that we can store numbers much larger or smaller than we normally could. However, not all numbers can be represented in the given space, so multiple numbers get rounded to the same floating point value.
As a simplified example, we might want to be able to store values ranging from -1000 to 1000 in some small amount of space where we would normally only be able to store -10 to 10. So we could round all values to the nearest hundred and store the number of hundreds in the small space: -1000 gets encoded as -10, -900 gets encoded as -9, 1000 gets encoded as 10. But what if we want to store -999? The closest value we can encode is -1000, so we have to encode -999 as the same value as -1000: -10.
In reality, floating point schemes are much more complicated than the example above, but the concept is similar. Floating point representations of numbers can only represent some of all the possible numbers, so when we have a number that can't be represented as part of the scheme, we have to round it to the closest representable value.
In your code, all values within 1000 of Double.MAX_VALUE / 10 automatically get rounded to Double.MAX_VALUE / 10, which is why the computer thinks (Double.MAX_VALUE / 10) - 1000 == Double.MAX_VALUE / 10.
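You can ask Java how coarse the representable grid is at that magnitude; Math.ulp returns the gap between a double and the next representable one:

System.out.println(Math.ulp(Double.MAX_VALUE / 10)); // ~2.5E291, vastly larger than 1000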
The result of a floating point calculation is the closest representable value to the exact answer. This program:
public class Test {
    public static void main(String[] args) throws Exception {
        double val = Double.MAX_VALUE / 10;
        System.out.println(val);
        System.out.println(Math.nextAfter(val, 0));
    }
}
prints:
1.7976931348623158E307
1.7976931348623155E307
The first of these numbers is your original val. The second is the largest double that is less than it.
When you subtract 1000 from 1.7976931348623158E307, the exact answer is between those two numbers, but very, very much closer to 1.7976931348623158E307 than to 1.7976931348623155E307, so the result will be rounded to 1.7976931348623155E307, leaving val unchanged.