Precision loss with java.lang.Double - java

Say I have 2 double values. One of them is very large and one of them is very small.
double x = 99....9; // I don't know the possible max and min values,
double y = 0.00..1; // so just assume these values are near max and min.
If I add those values together, do I lose precision?
In other words, how many digits can a double hold on both sides of the decimal point at once? Does a large integer part reduce how small a fractional part I can keep?
double z = x + y; // Real result is something like 999999999999999.00000000000001

double values are not evenly distributed over all numbers. double uses a floating point representation, which means you have a fixed number of bits for the exponent and a fixed number of bits for the actual digits (the mantissa).
So in your example, using a large and a small value results in the smaller value being (partially or completely) dropped, since it cannot be expressed using the larger value's exponent.
The solution to not dropping precision is using a number format with a potentially growing precision like BigDecimal, which is not limited to a fixed number of bits.
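As a quick illustration (a minimal sketch, with values chosen only to make the effect visible), the small addend vanishes in double arithmetic but survives in BigDecimal (java.math.BigDecimal):
double big = 1e17;                       // well above 2^53, so adjacent doubles are more than 1 apart
double small = 0.0001;
System.out.println(big + small == big);  // true: the small value is dropped entirely
BigDecimal exact = new BigDecimal("1e17").add(new BigDecimal("0.0001"));
System.out.println(exact);               // 100000000000000000.0001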

I'll use a decimal floating point arithmetic with a precision of three decimal digits and (roughly) the same features as typical binary floating point arithmetic. Say you have 123.0 and 4.56. These numbers are represented by a mantissa (0<=m<1) and an exponent: 0.123*10^3 and 0.456*10^1, which I'll write as <.123e3> and <.456e1>. Adding two such numbers isn't immediately possible unless the exponents are equal, so the addition proceeds as follows:
  <.123e3>      <.123e3>
+ <.456e1>  ->  <.004e3>
                --------
                <.127e3>
You see that the necessary alignment of the decimal digits according to a common exponent produces a loss of precision. In the extreme case, the entire addend could be shifted into nothingness. (Think of summing an infinite series where the terms get smaller and smaller but would still contribute considerably to the sum being computed.)
Other sources of imprecision result from differences between binary and decimal fractions, where an exact fraction in one base cannot be represented without error using the other one.
So, in short, addition and subtraction between numbers from rather different orders of magnitude are bound to cause a loss of precision.
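A small Java illustration of the extreme case (a hedged sketch): at 1e16 the gap between adjacent doubles is already 2.0, so an addend below half that gap disappears completely.
double big = 1e16;
System.out.println(Math.ulp(big));     // 2.0 -- the gap between adjacent doubles at this magnitude
System.out.println(big + 0.5 == big);  // true: 0.5 is less than half the gap, so it is lost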

If you write a literal that is too large or too small for a double, the compiler will give an error.
Try this:
double d1 = 1e-1000; // compile-time error: the constant is too small for a double
double d2 = 1e+1000; // compile-time error: the constant is too large for a double

Related

What is the range in which IEEE754 can correctly compare the size of floating point numbers

I know that the range of integers that can be correctly compared is -(2^53 - 1) to 2^53 - 1.
But what is the range of floating point numbers?
// Should the double type conform to this specification? I guess
double c = 9007199254740990.9;
double d = 9007199254740990.8;
System.out.println(c > d); // return false
There's no particular range in which floating point numbers are guaranteed to be compared correctly - that would require the existence of infinitely many double representations.
All double numbers are accurate to 53 significant figures of their representations in binary. If two numbers differ only in the 54th significant binary figure, they'll generally have the same double representation, unless rounding makes them different.
This question seems to be based on a fundamental misunderstanding of (mathematical) Real numbers, and their relationship to IEEE-754.
"Floating point" refers to a representation scheme for Real numbers. Historically that comprises decimal digits and a "decimal point". There are an infinite number of Real numbers, and (correspondingly) an infinite number of floating point numbers. Provided that you have enough paper to write them down.
One property of Real numbers is that there is an infinite number of Real numbers between any pair of distinct Real numbers. That includes any pair that are really close to one another.
IEEE-754 floating point numbers are different. Each one represents a single precise Real number. There is a set of (less than) 2^64 of these Real numbers for IEEE-754 double precision. Any other Real number that is not in that set cannot be precisely represented as a double.
In Java, when we write this:
double c = 9007199254740990.9;
double d = 9007199254740990.8;
System.out.println(c > d); // return false
the value of c will be the IEEE-754 representation that is closest to the real number that the floating point notation denotes. Likewise d. It is not going to be exactly equal to the Real number. (It can only be exact for a finite subset of the infinite set of Real numbers ...)
The print statement illustrates this. The closest IEEE-754 representations for those 2 Real numbers are the same.
What actually happens is that the Java compiler works out what 9007199254740990.9 and 9007199254740990.8 mean as Real numbers. Then it silently rounds those numbers to the nearest IEEE-754 double.
So to your question:
What is the range in which IEEE-754 can correctly compare the size of floating point numbers
A: There is no such range.
If you give me any Real number "x" (expressed in floating point form) that maps to a double value d, I can write you a different Real number "y" (in floating point form) that also maps to the same value d. All I need to do is to add some additional digits to the least significant end of "x".
Note: the reasoning is independent of the actual representation of IEEE-754. It only depends on the fact that IEEE-754 representations have a fixed number of bits. That means that the set of possible values is bounded ... and the rest is a logical consequence of that.
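A small sketch of that pigeonhole argument (the extra digits below are only illustrative; the point is that both literals round to the same double):
double x = 0.1;
double y = 0.1000000000000000055511151231257827;  // "0.1" with extra digits appended
System.out.println(x == y);  // true: both literals map to the same nearest double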
Let's think about the actual bit representation of these double numbers:
We can use System.out.println(Long.toBinaryString(Double.doubleToRawLongBits(c))); to print the bit representation.
Both 9007199254740990.9 and 9007199254740990.8 have the same bit representation following IEEE 754:
0 10000110011 1111111111111111111111111111111111111111111111111111
s(sign) = 0
exp = 10000110011 (binary value, which equals 1075 in decimal)
frac = 111...111 (52 bits)
The precise value of this representation is 9007199254740991 (that is, 2^53 - 1), so both literals are silently rounded to that same double.
With IEEE 754, starting from 2^53 the double type is no longer able to represent every integer exactly, let alone arbitrary decimal fractions.
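For instance (a minimal sketch of the 2^53 boundary):
double a = 9007199254740992.0;    // 2^53
double b = 9007199254740993.0;    // 2^53 + 1 has no exact double representation
System.out.println(a == b);       // true: both literals round to 2^53
System.out.println(Math.ulp(a));  // 2.0: doubles step by 2 in this range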
Regarding the OP's question: as mentioned in other answers, there is no such exact range. For example, the following statement evaluates to true, yet we wouldn't conclude that the range of floating point numbers double can compare correctly ends below 1.1. Generally speaking, the larger the magnitude of a number, the less precisely double can represent it.
double x1 = 1.1;
double x2 = 1.1000000000000001;
System.out.println(x1 == x2); // true

Biggest amount in USD (double) that can accurately be converted to cents (long)

I'm writing a bank program with a variable long balance to store cents in an account. When users inputs an amount I have a method to do the conversion from USD to cents:
public static long convertFromUsd(double amountUsd) {
    if (amountUsd <= maxValue && amountUsd >= minValue) {
        return (long) (amountUsd * 100.0);
    } else {
        // no conversion (throws an exception, but I'm not including that part of the code)
    }
}
In my actual code I also check that amountUsd does not have more than 2 decimals, to avoid inputs that cannot be accurately converted (e.g. 20.001 dollars is not a whole number of cents). For this example code, assume that all inputs have 0, 1 or 2 decimals.
At first I looked at Long.MAX_VALUE (9223372036854775807 cents) and assumed that double maxValue = 92233720368547758.07 would be correct, but it gave me rounding errors for big amounts:
convertFromUsd(92233720368547758.07) gives output 9223372036854775807
convertFromUsd(92233720368547758.00) gives the same output 9223372036854775807
What should I set double maxValue and double minValue to always get accurate return values?
You could use BigDecimal as a temp holder
If you have a very large double (anything above Double.MAX_VALUE / 100.0), the calculation usd * 100.0 would overflow the double (the result becomes infinity).
But since BigDecimal is not limited to 64 bits, you can use it as a temporary holder for the calculation: the multiplication cannot overflow there, and longValueExact() will throw if the final cent amount does not fit into a long.
Also, the BigDecimal class defines two methods which come in handy for this purpose:
BigDecimal#movePointRight
BigDecimal#longValueExact
By using a BigDecimal you don't have to bother about specifying a max value at all: any double representing USD whose cent amount fits into a long is converted exactly, and longValueExact() throws rather than silently truncating when it does not fit (assuming you don't have to handle cent fractions).
double usd = 123.45;
// movePointRight(2) turns dollars into cents; setScale(0) fails fast if fractional cents remain
long cents = BigDecimal.valueOf(usd).movePointRight(2).setScale(0).longValueExact();
Attention: Keep in mind that a double is not able to store the exact USD information in the first place. It is not possible to restore the information that has been lost by converting the double to a BigDecimal.
The only advantage a temporary BigDecimal gives you is that the calculation of usd * 100 won't overflow.
First of all, using double for monetary amounts is risky.
TL;DR
I'd recommend staying below $17,592,186,044,416.
The floating-point representation of numbers (double type) doesn't use decimal fractions (1/10, 1/100, 1/1000, ...), but binary ones (e.g. 1/128, 1/256). So, the double number will never exactly hit something like $1.99. It will be off by some fraction most of the time.
Hopefully, the conversion from decimal digit input ("1.99") to a double number will end up with the closest binary approximation, being a tiny fraction higher or lower than the exact decimal value.
To be able to correctly represent the 100 different cent values from $xxx.00 to $xxx.99, you need a binary resolution where you can at least represent 128 different values for the fractional part, meaning that the least significant bit corresponds to 1/128 (or better), meaning that at least 7 trailing bits have to be dedicated to the fractional dollars.
The double format effectively has 53 bits for the mantissa. If you need 7 bits for the fraction, you can devote at most 46 bits to the integral part, meaning that you have to stay below 2^46 dollars ($70,368,744,177,664.00, 70 trillions) as the absolute limit.
As a precaution, I wouldn't trust the best-rounding property of converting from decimal digits to double too much, so I'd spend two more bits for the fractional part, resulting in a limit of 2^44 dollars, $17,592,186,044,416.
Code Warning
There's a flaw in your code:
return (long) (amountUsd * 100.0);
This will truncate down to the next-lower cent if the double value lies between two exact cents, meaning that e.g. "123456789.23" might become 123456789.229... as a double and get truncated down to 12345678922 cents as a long.
It is better to use
return Math.round(amountUsd * 100.0);
This will end up with the nearest cent value, most probably being the "correct" one.
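For example (a hedged sketch; 4.35 is one of the decimal values whose nearest double lies just below the exact value, so the plain cast lands on the wrong cent):
double usd = 4.35;                            // stored as a double slightly below 4.35
System.out.println((long) (usd * 100.0));     // 434 -- the cast truncates toward zero
System.out.println(Math.round(usd * 100.0));  // 435 -- rounds to the nearest cent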
EDIT:
Remarks on "Precision"
You often read statements that floating-point numbers aren't precise, and then in the next sentence the authors advocate BigDecimal or similar representations as being precise.
The validity of such a statement depends on the type of number you want to represent.
All the number representation systems in use in today's computing are precise for some types of numbers and imprecise for others. Let's take a few example numbers from mathematics and see how well they fit into some typical data types:
42: A small integer can be represented exactly in virtually all types.
1/3: All the typical data types (including double and BigDecimal) fail to represent 1/3 exactly. They can only do a (more or less close) approximation. The result is that multiplying by 3 does not exactly give the integer 1. Few languages offer a "ratio" type, capable of representing numbers by numerator and denominator, thus giving exact results.
1/1024: Because of the power-of-two denominator, float and double can easily do an exact representation. BigDecimal can do as well, but needs 10 fractional digits.
14.99: Because of the decimal fraction (can be rewritten as 1499/100), BigDecimal does it easily (that's what it's made for), float and double can only give an approximation.
PI: I don't know of any language with support for irrational numbers - I even have no idea how this could be possible (aside from treating popular irrationals like PI and E symbolically).
123456789123456789123456789: BigInteger and BigDecimal can do it exactly, double can do an approximation (with the last 13 digits or so being garbage), int and long fail completely.
Let's face it: Each data type has a class of numbers that it can represent exactly, where computations deliver precise results, and other classes where it can at best deliver approximations.
So the questions should be:
What's the type and range of numbers to be represented here?
Is an approximation okay, and if yes, how close should it be?
What's the data type that matches my requirements?
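A few of those classes in code (a minimal sketch; the last line prints whatever the nearest double to 14.99 really is):
System.out.println(1.0 / 1024 == 0.0009765625);  // true: a power-of-two denominator is exact in double
System.out.println(new BigDecimal("14.99"));     // 14.99 -- BigDecimal stores the decimal fraction exactly
System.out.println(new BigDecimal(14.99));       // the exact value of the nearest double, which is not 14.99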
Using a double, the biggest amount you can keep accurate to the cent in Java would be 70368744177663.99.
What you have in a double is 64 bits (8 bytes) to represent:
the integer and fractional digits
the sign (+/-)
The problem is keeping the .99 from being rounded off: that leaves 46 bits for the integer part, and the rest has to be used for the decimals.
You can test with the following code:
double biggestPositiveNumberInDouble = 70368744177663.99;
for (int i = 0; i < 130; i++) {
    System.out.printf("%.2f\n", biggestPositiveNumberInDouble);
    biggestPositiveNumberInDouble = biggestPositiveNumberInDouble - 0.01;
}
If you add 1 to biggestPositiveNumberInDouble you will see it start to round off and lose precision.
Also note the round-off error when subtracting 0.01.
First iterations
70368744177663.99
70368744177663.98
70368744177663.98
70368744177663.97
70368744177663.96
...
The best way in this case would be not to parse to double at all:
System.out.println("Enter amount:");
String input = new Scanner(System.in).nextLine();
int indexOfDot = input.indexOf('.');
if (indexOfDot == -1) {
    input += ".";                                        // treat "5" like "5."
    indexOfDot = input.length() - 1;
}
while (input.length() < indexOfDot + 3) {
    input += "0";                                        // pad "5.5" to "5.50"
}
String validInput = input.substring(0, indexOfDot + 3);  // keep exactly two decimals
long amount = Long.parseLong(validInput.replace(".", ""));
System.out.println("Converted: " + amount);
This way you don't run into the limits of double and just have the limits of long.
But ultimately the best approach would be to go with a datatype made for currency.
You looked at the largest possible long, but the real limit here is the largest integer a double can represent exactly, which is much smaller. Calculating (amountUsd * 100.0) happens in double arithmetic (and only afterwards is the result cast to a long).
You should ensure that (amountUsd * 100.0) never exceeds the largest integer that a double still represents exactly, which is 2^53 = 9007199254740992.
Floating point values (float, double) are stored differently from integer values (int, long), and while a double can store very large values, it is not good for storing money amounts: the values become less accurate the larger the number gets or the more decimal places it has.
Check out How many significant digits do floats and doubles have in java? for more information about floating point significant digits
A double has roughly 15 significant decimal digits; the significant digit count is the total number of digits starting from the first non-zero digit (for a better explanation, see https://en.wikipedia.org/wiki/Significant_figures).
Therefore, to include cents and stay accurate, you would want the maximum number to have no more than 13 whole-number places plus 2 decimal places.
As you are dealing with money it would be better not to use floating point values. Check out this article on using BigDecimal for storing currency: https://medium.com/#cancerian0684/which-data-type-would-you-choose-for-storing-currency-values-like-trading-price-dd7489e7a439
As you mentioned users are inputting an amount, you could read it in as a String rather than a floating point value and pass that into a BigDecimal.
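A hedged sketch of that idea (the HALF_UP rounding mode and the variable names are my additions; it uses java.math.BigDecimal, java.math.RoundingMode and java.util.Scanner):
Scanner in = new Scanner(System.in);
BigDecimal usd = new BigDecimal(in.nextLine().trim());  // e.g. "12345.67", parsed exactly
long cents = usd.movePointRight(2)                      // dollars -> cents
                .setScale(0, RoundingMode.HALF_UP)      // tolerate stray extra decimals
                .longValueExact();                      // throws if the result doesn't fit in a long
System.out.println(cents);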

Java: convert float to double preserving decimal point precision

I have float-based storage of numbers that are decimal by nature. The precision of float is fine for my needs. Now I want to perform some more precise calculations with these numbers using double.
An example:
float f = 0.1f;
double d = f; //d = 0.10000000149011612d
// but I want some code that will convert 0.1f to 0.1d;
Update 1:
I know very well that 0.1f != 0.1d. This question is not about precise decimal calculations. Sadly, the question was downvoted. I will try to explain it again...
Let's say I work with an API that returns float numbers for decimal MSFT stock prices. Believe it or not, this API exists:
interface Stock {
    float[] getDayPrices();
    int[] getDayVolumesInHundreds();
}
It is known that the price of an MSFT share is a decimal number with no more than 5 digits, e.g. 31.455, 50.12, 45.888. Obviously the API does not work with BigDecimal because that would be a big overhead just to pass a price.
Let's also say I want to calculate a weighted average of these prices with double precision:
float[] prices = msft.getDayPrices();
int[] volumes = msft.getDayVolumesInHundreds();
double priceVolumeSum = 0.0;
long volumeSum = 0;
for (int i = 0; i < prices.length; i++) {
    double doublePrice = decimalFloatToDouble(prices[i]);
    priceVolumeSum += doublePrice * volumes[i];
    volumeSum += volumes[i];
}
System.out.println(priceVolumeSum / volumeSum);
I need a performant implementation of decimalFloatToDouble.
Now I use the following code, but I need something more clever:
double decimalFloatToDouble(float f) {
    return Double.parseDouble(Float.toString(f));
}
EDIT: this answer corresponds to the question as initially phrased.
When you convert 0.1f to double, you obtain the same number, the imprecise representation of the rational 1/10 (which cannot be represented in binary at any precision) in single-precision. The only thing that changes is the behavior of the printing function. The digits that you see, 0.10000000149011612, were already there in the float variable f. They simply were not printed because these digits aren't printed when printing a float.
Ignore these digits and compute with double as you wish. The problem is not in the conversion, it is in the printing function.
As I understand you, you know that the float is within one float-ulp of an integer number of hundredths, and you know that you're well inside the range where no two integer numbers of hundredths map to the same float. So the information isn't gone at all; you just need to figure out which integer you had.
To get two decimal places, you can multiply by 100, rint/Math.round the result, and multiply by 0.01 to get a close-by double as you wanted. (To get the closest, divide by 100.0 instead.) But I suspect you knew this already and are looking for something that goes a little faster. Try ((9007199254740992 + 100.0 * x) - 9007199254740992) * 0.01 and don't mess with the parentheses. Maybe strictfp that hack for good measure.
You said five significant figures, and apparently your question isn't limited to MSFT share prices. Up to the point where doubles can no longer represent powers of 10 exactly, this isn't too bad. (And maybe it works beyond that threshold too.) The exponent field of a float narrows the needed power of ten down to two candidates, and there are 256 possibilities. (Except in the case of subnormals.) Getting the right power of ten just needs a conditional, and the rounding trick is straightforward enough.
All of this is all going to be a mess, and I'd recommend you stick with the toString approach for all the weird cases.
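A minimal sketch of the two-decimal-places variant described above (it assumes the float really holds a price with at most two decimals and a modest magnitude; it is not the general five-significant-figure version):
static double decimalFloatToDouble(float f) {
    // widen to double, snap to the nearest hundredth, then divide to get the closest double
    return Math.round(f * 100.0) / 100.0;
}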
If your goal is to have a double whose canonical representation matches the canonical representation of the float, converting the float to a string and converting the result back to double is probably the most accurate way of achieving that, at least when it's possible (I don't know for certain whether Java's double-to-string logic guarantees that there won't be a pair of consecutive double values which report themselves as just above and just below a number with five significant figures).
If your goal is to round to five significant figures a value which is known to have been rounded to five significant figures while in float form, the simplest approach is probably to simply round to five significant figures. If the magnitude of your numbers is roughly within the range 1E+/-12, find the power of ten just below your number, scale the number so that five digits sit before the decimal point, round to the nearest unit, and divide by the same scale factor. Because division is often much slower than multiplication, if performance is critical you might keep a table of powers of ten and their reciprocals. To avoid rounding errors, the table should store, for each power of ten, the closest power-of-two double to its reciprocal, and then the closest double to the difference between that first double and the actual reciprocal. Thus the reciprocal of 100 would be stored as 0.0078125 + 0.0021875; the value n/100 would be computed as n*0.0078125 + n*0.0021875. The first term never has any round-off error (it is a multiplication by a power of two), and the second term has precision beyond what the final result needs, so the final result is rounded accurately.
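A hedged, simpler-but-slower sketch of the "round to five significant figures" idea (it uses a plain division instead of the reciprocal table described above, and the method name is mine):
static double roundToFiveSignificantFigures(double x) {
    if (x == 0.0) return 0.0;
    int exp = (int) Math.floor(Math.log10(Math.abs(x)));  // e.g. 31.455 -> 1
    double scale = Math.pow(10, 4 - exp);                 // shift so five digits sit before the point
    return Math.rint(x * scale) / scale;                  // round to the nearest unit, then scale back
}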

Why when I assign an int to a float is it not the same value?

When I assign an int to a float, I thought float allowed more precision and so would not lose or change the assigned value, but what I am seeing is something quite different. What is going on here?
for (int i = 63000000; i < 63005515; i++) {
    int a = i;
    float f = 0;
    f = a;
    System.out.print(java.text.NumberFormat.getInstance().format(a) + " : ");
    System.out.println(java.text.NumberFormat.getInstance().format(f));
}
some of the output :
...
63,005,504 : 63,005,504
63,005,505 : 63,005,504
63,005,506 : 63,005,504
63,005,507 : 63,005,508
63,005,508 : 63,005,508
Thanks!
A float has the same number of bits as an int -- 32 bits. But it allows for a far greater range of values than int. However, the precision is fixed at 24 bits (23 "mantissa" bits, plus 1 implied bit). At values around 63,000,000, the gap between consecutive float values is greater than 1. You can verify this with the Math.ulp method, which gives the difference between two consecutive values.
The code
System.out.println(Math.ulp(63000000.0f));
prints
4.0
You can use double values for a far greater (yet still limited) precision:
System.out.println(Math.ulp(63000000.0));
prints
7.450580596923828E-9
However, you can just use ints here, because your values, at about 63 million, are still well below the maximum possible int value, which is about 2 billion.
A float in Java is an IEEE 754 floating point number; even though it can represent values from ±1.40129846432481707e-45 to ±3.40282346638528860e+38, it has only 6 or 7 significant decimal digits.
A simple solution would be to use a double, which has roughly 15 significant digits and can represent every int value without any issue.
However, if accuracy is what you're looking for, stay away from native floating point representations and go for classes like BigInteger and BigDecimal.
No, they are not necessarily the same value. An int and a float are each 32 bits, but in a float some of those bits are used for the exponent, so fewer distinct whole numbers can be represented exactly in a float than in an int. Depending on what your application is doing with these numbers, you may not care about these differences, or maybe you want to look at using something like BigDecimal.
Floats don't allow more precision; floats allow a wider range of numbers.
We've got 2^32 possible values for integers in the range (approximately) -2 * 10^9 to 2 * 10^9. Floats are also 32-bit, so the number of possible values is at most the same as for integers.
Out of these 32 bits, some are reserved for the mantissa and the rest for the exponent. The resulting number represented by the float is then calculated (for simplicity I'll use base 10) as mantissa * 10^exponent.
Obviously, the maximum precision is limited by the number of bits assigned to the mantissa. So you can represent some integers exactly, but once they no longer fit in the mantissa, the least significant bits are thrown away, as in your case.
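A small sketch of that effect using the numbers from the question:
int i = 63005505;
float f = i;                      // the nearest float is 63005504
System.out.println((int) f);      // 63005504: the lowest bits did not fit in the mantissa
System.out.println(Math.ulp(f));  // 4.0: floats step by 4 at this magnitude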
Floats have a greater range of values but lower precision.
Ints have a smaller range of values but full precision.
At this magnitude, an int is exact to the unit, while a float is only exact to the nearest 4.
So if you are dealing with numbers this large but don't care about being off by +/- 4, use float; but if you need the last digit to be exact, use int.

Java's '==' operator on doubles

This method returns 'true'. Why ?
public static boolean f() {
    double val = Double.MAX_VALUE / 10;
    double save = val;
    for (int i = 1; i < 1000; i++) {
        val -= i;
    }
    return (val == save);
}
You're subtracting quite a small value (less than 1000) from a huge value. The small value is so much smaller than the large value that the closest representable value to the theoretical result is still the original value.
Basically it's a result of the way floating point numbers work.
Imagine we had some decimal floating point type (just for simplicity) which only stored 5 significant digits in the mantissa, and an exponent in the range 0 to 1000.
Your example is like writing 10^999 - 1000... think about what the result of that would be, when rounded to 5 significant digits. Yes, the exact result is 99999.....9000 (with 999 digits), but if you can only represent values with 5 significant digits, the closest result is 10^999 again.
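You can see how wide the gap is at that magnitude with Math.ulp (a small hedged sketch; the printed gap is roughly 2.5e291, vastly larger than 1000):
double val = Double.MAX_VALUE / 10;
System.out.println(Math.ulp(val));      // about 2.5e291: the gap between adjacent doubles here
System.out.println(val - 1000 == val);  // true: 1000 is far below half that gap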
When you set val to Double.MAX_VALUE/10, it is set to a value approximately equal to 1.7976931348623158 * 10^307. Subtracting values like 1000 from that would require a precision the double representation does not have, so it basically leaves val unchanged.
Depending on your needs, you may use BigDecimal instead of double.
Double.MAX_VALUE is so big that the JVM cannot tell the difference between it and Double.MAX_VALUE - 1000.
Roughly speaking, if you subtract a number much smaller than 1.9958403095347198E292 (the gap to the next double below Double.MAX_VALUE) from Double.MAX_VALUE, the result is still Double.MAX_VALUE.
System.out.println(new BigDecimal(Double.MAX_VALUE)
        .equals(new BigDecimal(Double.MAX_VALUE - 2E291)));
System.out.println(new BigDecimal(Double.MAX_VALUE)
        .equals(new BigDecimal(Double.MAX_VALUE - 2E292)));
Output:
true
false
A double does not have enough precision to perform the calculation you are attempting. So the result is the same as the initial value.
It is nothing to do with the == operator.
val is a big number and when subtracting 1 (or even 1000) from it, the result cannot be expressed properly as a double value. The representation of this number x and x-1 is the same, because double only has a limited number of bits to represent an unlimited number of numbers.
Double.MAX_VALUE is a huge number compared to 1 or 1000. Double.MAX_VALUE - 1 is equal to Double.MAX_VALUE. So your code does roughly nothing when subtracting 1 or 1000 from Double.MAX_VALUE/10.
Always remember that:
doubles and floats are just approximations of real numbers; they are rationals, and they are not equally distributed among the reals
you should be very careful when applying arithmetic operators to doubles or floats of very different magnitudes (there are many other rules like this...)
in general, never use doubles or float if you need arbitrary precision
Because double is a floating point numeric type, which is a way of approximating numeric values. Floating point representations encode numbers so that we can store numbers much larger or smaller than we normally could. However, not all numbers can be represented in the given space, so multiple numbers get rounded to the same floating point value.
As a simplified example, we might want to be able to store values ranging from -1000 to 1000 in some small amount of space where we would normally only be able to store -10 to 10. So we could round all values to the nearest hundred and store them in the small space: -1000 gets encoded as -10, -900 gets encoded as -9, 1000 gets encoded as 10. But what if we want to store -999? The closest value we can encode is -1000, so we have to encode -999 as the same value as -1000: -10.
In reality, floating point schemes are much more complicated than the example above, but the concept is similar. Floating point representations of numbers can only represent some of all the possible numbers, so when we have a number that can't be represented as part of the scheme, we have to round it to the closest representable value.
In your code, all values within 1000 of Double.MAX_VALUE / 10 automatically get rounded to Double.MAX_VALUE / 10, which is why the computer thinks (Double.MAX_VALUE / 10) - 1000 == Double.MAX_VALUE / 10.
The result of a floating point calculation is the closest representable value to the exact answer. This program:
public class Test {
    public static void main(String[] args) throws Exception {
        double val = Double.MAX_VALUE / 10;
        System.out.println(val);
        System.out.println(Math.nextAfter(val, 0));
    }
}
prints:
1.7976931348623158E307
1.7976931348623155E307
The first of these numbers is your original val. The second is the largest double that is less than it.
When you subtract 1000 from 1.7976931348623158E307, the exact answer is between those two numbers, but very, very much closer to 1.7976931348623158E307 than to 1.7976931348623155E307, so the result will be rounded to 1.7976931348623155E307, leaving val unchanged.
