This method returns 'true'. Why?
public static boolean f() {
    double val = Double.MAX_VALUE / 10;
    double save = val;
    for (int i = 1; i < 1000; i++) {
        val -= i;
    }
    return (val == save);
}
You're subtracting quite a small value (less than 1000) from a huge value. The small value is so much smaller than the large value that the closest representable value to the theoretical result is still the original value.
Basically it's a result of the way floating point numbers work.
Imagine we had some decimal floating point type (just for simplicity) which only stored 5 significant digits in the mantissa, and an exponent in the range 0 to 1000.
Your example is like writing 10^999 - 1000... think about what the result of that would be, when rounded to 5 significant digits. Yes, the exact result is 99999.....9000 (999 digits in total), but if you can only represent values with 5 significant digits, the closest representable result is 10^999 again.
When you set val to Double.MAX_VALUE/10, it is set to a value approximately equal to 1.7976931348623158 * 10^307. Subtracting values like 1000 from that would require more precision than the double representation can provide, so it basically leaves val unchanged.
Depending on your needs, you may use BigDecimal instead of double.
Double.MAX_VALUE is so big that the JVM cannot tell the difference between it and Double.MAX_VALUE - 1000.
If you subtract a number smaller than 1.9958403095347198E292 from Double.MAX_VALUE, the result is still Double.MAX_VALUE.
System.out.println(
    new BigDecimal(Double.MAX_VALUE).equals(
        new BigDecimal(Double.MAX_VALUE - 2.E291))
);
System.out.println(
    new BigDecimal(Double.MAX_VALUE).equals(
        new BigDecimal(Double.MAX_VALUE - 2.E292))
);
Output:
true
false
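That threshold is simply the gap between Double.MAX_VALUE and its neighbouring doubles; Math.ulp reports it directly (a quick check, assuming standard IEEE-754 doubles):
System.out.println(Math.ulp(Double.MAX_VALUE)); // prints 1.9958403095347198E292
Any subtraction much smaller than that gap just rounds back to Double.MAX_VALUE.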
A double does not have enough precision to perform the calculation you are attempting. So the result is the same as the initial value.
It has nothing to do with the == operator.
val is a big number, and when subtracting 1 (or even 1000) from it, the result cannot be expressed properly as a double value. The representations of the number x and of x - 1 are the same, because a double only has a limited number of bits to represent an unlimited quantity of numbers.
Double.MAX_VALUE is a huge number compared to 1 or 1000. Double.MAX_VALUE - 1 is generally equal to Double.MAX_VALUE. So your code does roughly nothing when subtracting 1 or 1000 from Double.MAX_VALUE/10.
Always remember that:
doubles and floats are only approximations of real numbers: they are rationals, and they are not evenly distributed among the reals
use arithmetic operators very carefully between doubles or floats whose magnitudes are far apart (there are many other rules like this...)
in general, never use double or float if you need arbitrary precision (a short sketch follows this list)
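A minimal sketch of those points (the printed values assume standard IEEE-754 doubles; BigDecimal is in java.math):
// 0.1 and 0.2 have no exact binary representation, so the sum is slightly off
System.out.println(0.1 + 0.2);        // 0.30000000000000004
System.out.println(0.1 + 0.2 == 0.3); // false
// BigDecimal built from strings keeps the decimal values exact
System.out.println(new BigDecimal("0.1").add(new BigDecimal("0.2"))); // 0.3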
Because double is a floating point numeric type, which is a way of approximating numeric values. Floating point representations encode numbers so that we can store numbers much larger or smaller than we normally could. However, not all numbers can be represented in the given space, so multiple numbers get rounded to the same floating point value.
As a simplified example, we might want to be able to store values ranging from -1000 to 1000 in some small amount of space where we would normally only be able to store -10 to 10. So we could round all values to the nearest thousand and store them in the small space: -1000 gets encoded as -10, -900 gets encoded as -9, 1000 gets encoded as 10. But what if we want to store -999? The closest value we can encoded is -1000, so we have to encode -999 as the same value as -1000: -10.
In reality, floating point schemes are much more complicated than the example above, but the concept is similar. Floating point representations of numbers can only represent some of all the possible numbers, so when we have a number that can't be represented as part of the scheme, we have to round it to the closest representable value.
In your code, all values within 1000 of Double.MAX_VALUE / 10 automatically get rounded to Double.MAX_VALUE / 10, which is why the computer thinks (Double.MAX_VALUE / 10) - 1000 == Double.MAX_VALUE / 10.
The result of a floating point calculation is the closest representable value to the exact answer. This program:
public class Test {
    public static void main(String[] args) throws Exception {
        double val = Double.MAX_VALUE / 10;
        System.out.println(val);
        System.out.println(Math.nextAfter(val, 0));
    }
}
prints:
1.7976931348623158E307
1.7976931348623155E307
The first of these numbers is your original val. The second is the largest double that is less than it.
When you subtract 1000 from 1.7976931348623158E307, the exact answer is between those two numbers, but very, very much closer to 1.7976931348623158E307 than to 1.7976931348623155E307, so the result will be rounded to 1.7976931348623155E307, leaving val unchanged.
I'm writing a bank program with a variable long balance to store cents in an account. When users inputs an amount I have a method to do the conversion from USD to cents:
public static long convertFromUsd(double amountUsd) {
    if (amountUsd <= maxValue || amountUsd >= minValue) {
        return (long) (amountUsd * 100.0);
    } else {
        // no conversion (throws an exception, but I'm not including that part of the code)
    }
}
In my actual code I also check that amountUsd does not have more than 2 decimals, to avoid inputs that cannot be accurately converted (e.g. 20.001 dollars is not a whole number of cents). For this example code, assume that all inputs have 0, 1 or 2 decimals.
At first I looked at Long.MAX_VALUE (9223372036854775807 cents) and assumed that double maxValue = 92233720368547758.07 would be correct, but it gave me rounding errors for big amounts:
convertFromUsd(92233720368547758.07) gives output 9223372036854775807
convertFromUsd(92233720368547758.00) gives the same output 9223372036854775807
What should I set double maxValue and double minValue to always get accurate return values?
You could use BigDecimal as a temp holder
If you have a very large double (something between Double.MAX_VALUE / 100.0 and Double.MAX_VALUE), the calculation usd * 100.0 would overflow your double to Infinity.
But since you know that every possible result of <any double> * 100 will fit in a long you could use a BigDecimal as a temporary holder for your calculation.
Also, the BigDecimal class defines two methods which come in handy for this purpose:
BigDecimal#movePointRight
BigDecimal#longValueExact
By using a BigDecimal you don't have to bother about specifying a max-value at all -> any given double representing USD can be converted to a long value representing cents (assuming you don't have to handle cent-fractions).
double usd = 123.45;
long cents = BigDecimal.valueOf(usd).movePointRight(2).setScale(0).longValueExact();
Attention: Keep in mind that a double is not able to store the exact USD information in the first place. It is not possible to restore the information that has been lost by converting the double to a BigDecimal.
The only advantage a temporary BigDecimal gives you is that the calculation of usd * 100 won't overflow.
First of all, using double for monetary amounts is risky.
TL;DR
I'd recommend staying below $17,592,186,044,416.
The floating-point representation of numbers (double type) doesn't use decimal fractions (1/10, 1/100, 1/1000, ...), but binary ones (e.g. 1/128, 1/256). So, the double number will never exactly hit something like $1.99. It will be off by some fraction most of the time.
Hopefully, the conversion from decimal digit input ("1.99") to a double number will end up with the closest binary approximation, being a tiny fraction higher or lower than the exact decimal value.
To be able to correctly represent the 100 different cent values from $xxx.00 to $xxx.99, you need a binary resolution where you can at least represent 128 different values for the fractional part, meaning that the least significant bit corresponds to 1/128 (or better), meaning that at least 7 trailing bits have to be dedicated to the fractional dollars.
The double format effectively has 53 bits for the mantissa. If you need 7 bits for the fraction, you can devote at most 46 bits to the integral part, meaning that you have to stay below 2^46 dollars ($70,368,744,177,664.00, 70 trillion) as the absolute limit.
As a precaution, I wouldn't trust the best-rounding property of converting from decimal digits to double too much, so I'd spend two more bits for the fractional part, resulting in a limit of 2^44 dollars, $17,592,186,044,416.
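You can see the spacing of doubles near these limits directly with Math.ulp (values assume standard IEEE-754 doubles):
System.out.println(Math.ulp(17592186044416.0)); // at 2^44 dollars: 0.00390625, i.e. 1/256 of a dollar
System.out.println(Math.ulp(70368744177663.0)); // just below 2^46: 0.0078125, i.e. 1/128, the bare minimum
At 2^44 the resolution is still a comfortable factor of two finer than the 1/128 derived above.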
Code Warning
There's a flaw in your code:
return (long) (amountUsd * 100.0);
This will truncate down to the next-lower cent if the double value lies between two exact cents, meaning that e.g. "123456789.23" might become 123456789.229... as a double and get truncated down to 12345678922 cents as a long.
You should better use
return Math.round(amountUsd * 100.0);
This will end up with the nearest cent value, most probably being the "correct" one.
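A classic illustration of the difference (4.35 happens to be stored as the nearest double, which lies slightly below 4.35):
double amountUsd = 4.35;                           // actually stored as 4.3499999999999996...
System.out.println((long) (amountUsd * 100.0));    // 434: the cast truncates and loses a cent
System.out.println(Math.round(amountUsd * 100.0)); // 435: rounding gives the expected cent value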
EDIT:
Remarks on "Precision"
You often read statements that floating-point numbers aren't precise, and then in the next sentence the authors advocate BigDecimal or similar representations as being precise.
The validity of such a statement depends on the type of number you want to represent.
All the number representation systems in use in today's computing are precise for some types of numbers and imprecise for others. Let's take a few example numbers from mathematics and see how well they fit into some typical data types:
42: A small integer can be represented exactly in virtually all types.
1/3: All the typical data types (including double and BigDecimal) fail to represent 1/3 exactly. They can only do a (more or less close) approximation. The result is that multiplication with 3 does not exactly give the integer 1. Few languages offer a "ratio" type, capable of representing numbers by numerator and denominator, thus giving exact results (see the sketch after this list).
1/1024: Because of the power-of-two denominator, float and double can easily do an exact representation. BigDecimal can do as well, but needs 10 fractional digits.
14.99: Because of the decimal fraction (can be rewritten as 1499/100), BigDecimal does it easily (that's what it's made for), float and double can only give an approximation.
PI: I don't know of any language with support for irrational numbers - I even have no idea how this could be possible (aside from treating popular irrationals like PI and E symbolically).
123456789123456789123456789: BigInteger and BigDecimal can do it exactly, double can do an approximation (with the last 13 digits or so being garbage), int and long fail completely.
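A short sketch of a few of these cases (BigDecimal and RoundingMode are in java.math; note that BigDecimal division of 1/3 must be told how to round, or it throws):
System.out.println(1.0 / 3.0); // 0.3333333333333333 - only an approximation
System.out.println(new BigDecimal(1).divide(new BigDecimal(3), 20, RoundingMode.HALF_UP)); // 0.33333333333333333333 - also an approximation
System.out.println(1.0 / 1024.0 == 0.0009765625); // true - power-of-two fractions are exact in binary
System.out.println(new BigDecimal("14.99")); // 14.99 - exact
System.out.println(new BigDecimal(14.99));   // prints the exact binary value of the double, which is not exactly 14.99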
Let's face it: Each data type has a class of numbers that it can represent exactly, where computations deliver precise results, and other classes where it can at best deliver approximations.
So the questions should be:
What's the type and range of numbers to be represented here?
Is an approximation okay, and if yes, how close should it be?
What's the data type that matches my requirements?
Using a double, the biggest safe value, in Java, would be 70368744177663.99.
What you have in a double is 64 bits (8 bytes) to represent:
the digits before and after the decimal point
the sign (+/-)
The problem is getting it not to round off the .99, so you get 46 bits for the integer part and the rest has to be used for the decimals.
You can test with the following code:
double biggestPositiveNumberInDouble = 70368744177663.99;
for (int i = 0; i < 130; i++) {
    System.out.printf("%.2f\n", biggestPositiveNumberInDouble);
    biggestPositiveNumberInDouble = biggestPositiveNumberInDouble - 0.01;
}
If you add 1 to biggestPositiveNumberInDouble you will see it starting to round off and lose precision.
Also note the round off error when subtracting 0.01.
First iterations
70368744177663.99
70368744177663.98
70368744177663.98
70368744177663.97
70368744177663.96
...
The best way in this case would be not to parse to double at all:
System.out.println("Enter amount:");
String input = new Scanner(System.in).nextLine();
int indexOfDot = input.indexOf('.');
if (indexOfDot == -1) indexOfDot = input.length();
int validInputLength = indexOfDot + 3;
if (validInputLength > input.length()) validInputLength = input.length();
String validInput = input.substring(0,validInputLength);
long amout = Integer.parseInt(validInput.replace(".", ""));
System.out.println("Converted: " + amout);
This way you don't run into the limits of double and just have the limits of long.
But the best option ultimately would be to go with a datatype made for currency.
You looked at the largest possible long number, while the largest integer that a double can represent exactly is smaller. Calculating (amountUsd * 100.0) results in a double (and afterwards gets cast into a long).
You should ensure that (amountUsd * 100.0) can never be bigger than the largest integer a double can represent exactly, which is 2^53 = 9007199254740992.
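You can see that boundary directly (a quick check):
long limit = 1L << 53; // 9007199254740992
System.out.println((double) limit == (double) (limit + 1)); // true: both land on the same double
System.out.println((double) (limit - 1) == (double) limit);  // false: below 2^53 they are still distinct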
Floating values (float, double) are stored differently than integer values (int, long) and while double can store very large values, it is not good for storing money amounts as they get less accurate the bigger or more decimal places the number has.
Check out How many significant digits do floats and doubles have in java? for more information about floating point significant digits
A double has 15 significant digits; the significant digit count is the total number of digits starting from the first non-zero digit. (For a better explanation of the rules, see https://en.wikipedia.org/wiki/Significant_figures)
Therefore, in your equation, to include cents and make sure you stay accurate, you would want the maximum number to have no more than 13 whole-number places plus 2 decimal places.
As you are dealing with money it would be better not to use floating point values. Check out this article on using BigDecimal for storing currency: https://medium.com/@cancerian0684/which-data-type-would-you-choose-for-storing-currency-values-like-trading-price-dd7489e7a439
As you mentioned users are inputting an amount, you could read it in as a String rather than a floating point value and pass that into a BigDecimal.
Situation
I am in a situation where I will have a lot of numbers around about 0 - 15. The vast majority are whole numbers, but very few will have decimal values. All of the ones with decimal value will be "#.5", so 1.5, 2.5, 3.5, etc. but never 1.1, 3.67, etc.
I'm torn between using float and int (with the value multiplied by 2 so the decimal is gone) to store these numbers.
Question
Because every value will be .5, can I safely use float without worrying about the weirdness that comes along with floating point numbers? Or do I need to use int? If I do use int, can every smallish number be divided by 2 to safely give the exactly correct float?
Is there a better way I am missing?
Other info
I'm not considering double because I don't need that kind of precision or range.
I'm storing these in a wrapper class, if I go with int whenever I need to get the value I am going to be returning the int cast as a float divided by 2.
What I went with in the end
float seems to be the way to go.
This is not a theoretical proof but you can test it empirically:
public static void main(String[] args) {
    BigDecimal half = new BigDecimal("0.5");
    for (int i = 0; i < Integer.MAX_VALUE; i++) {
        float f = i + 0.5f;
        if (new BigDecimal(f).compareTo(new BigDecimal(i).add(half)) != 0) {
            System.out.println(new BigDecimal(i).add(half) + " => " + new BigDecimal(f));
            break;
        }
    }
}
prints:
8388608.5 => 8388608
Meaning that every xxx.5 value between 0.5 and 8388607.5 can be exactly represented as a float.
For larger numbers float's precision is not enough to represent the number and it is rounded to something else.
Let's refer to the subset of floating point numbers which have a decimal portion of .0 or .5 as point-five floats, or PFFs.
The following properties are guaranteed:
Any number up to 8 million or so (2^23, to be exact) which ends in .0 or .5 is representable as a PFF.
Adding/subtracting two PFFs results in a PFF, unless there's overflow.
Multiplying a PFF by an integer results in a PFF, unless there's overflow.
These properties are guaranteed by the IEEE-754 rules, which give a 24-bit mantissa and guarantee exact rounding of exact results.
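A quick check of these properties with a couple of arbitrary point-five values (all well below 2^23):
float a = 12345.5f;
float b = 67.5f;
System.out.println(a + b == 12413.0f); // true: the sum of two PFFs is exact
System.out.println(a - b == 12278.0f); // true: so is the difference
System.out.println(2.5f * 7 == 17.5f); // true: a PFF times an integer stays a PFF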
Using ints will give you a somewhat larger range.
There will be no accuracy issues with .5's with float for that range, so both approaches will work.
If these represent actual number values, I would chose the float simply because it consumes the same amount of memory and I don't need to write code to convert between some internal int representation and the exposed float value.
If these numbers represent something other than a value, e.g. a grade from a very limited set, I would consider modelling them as an enum, depending on how these are ultimately used.
Say I have 2 double values. One of them is very large and one of them is very small.
double x = 99....9; // I don't know the possible max and min values,
double y = 0.00..1; // so just assume these values are near max and min.
If I add those values together, do I lose precision?
In other words, does the max possible double value increase if I assign an int value to it? And does the min possible double value decrease if I choose a small integer part?
double z = x + y; // Real result is something like 999999999999999.00000000000001
double values are not evenly distributed over all numbers. double uses the floating point representation of the number, which means you have a fixed number of bits for the exponent and a fixed number of bits for the actual digits (the mantissa).
So in your example, using a large and a small value together would result in dropping the smaller value, since it cannot be expressed using the larger exponent.
The solution to not dropping precision is using a number format that has a potentially growing precision like BigDecimal - which is not limited to a fixed number of bits.
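A small sketch of both effects (BigDecimal is in java.math; the concrete values are just illustrative):
double big = 1.0e17;
double small = 0.001;
System.out.println(big + small == big); // true: the spacing of doubles near 1e17 is 16, so 0.001 is simply dropped
System.out.println(new BigDecimal("100000000000000000").add(new BigDecimal("0.001"))); // 100000000000000000.001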
I'm using decimal floating point arithmetic with a precision of three decimal digits and (roughly) the same features as typical binary floating point arithmetic. Say you have 123.0 and 4.56. These numbers are represented by a mantissa (0 <= m < 1) and an exponent: 0.123*10^3 and 0.456*10^1, which I'll write as <.123e3> and <.456e1>. Adding two such numbers isn't immediately possible unless the exponents are equal, and that's why the addition proceeds according to:
  <.123e3>          <.123e3>
+ <.456e1>   =>   + <.004e3>
                    --------
                    <.127e3>
You see that the necessary alignment of the decimal digits according to a common exponent produces a loss of precision. In the extreme case, the entire addend could be shifted into nothingness. (Think of summing an infinite series where the terms get smaller and smaller but would still contribute considerably to the sum being computed.)
Other sources of imprecision result from differences between binary and decimal fractions, where an exact fraction in one base cannot be represented without error using the other one.
So, in short, addition and subtraction between numbers from rather different orders of magnitude are bound to cause a loss of precision.
If you try to assign a value that is too big or too small to a double, the compiler will give an error.
Try this:
double d1 = 1e-1000;
double d2 = 1e+1000;
When I assign an int to a float, I thought float allows more precision and so would not lose or change the value assigned, but what I am seeing is something quite different. What is going on here?
for (int i = 63000000; i < 63005515; i++) {
    int a = i;
    float f = 0;
    f = a;
    System.out.print(java.text.NumberFormat.getInstance().format(a) + " : ");
    System.out.println(java.text.NumberFormat.getInstance().format(f));
}
Some of the output:
...
63,005,504 : 63,005,504
63,005,505 : 63,005,504
63,005,506 : 63,005,504
63,005,507 : 63,005,508
63,005,508 : 63,005,508
Thanks!
A float has the same number of bits as an int -- 32 bits. But it allows for a greater range of values, far greater than the range of int values. However, the precision is fixed at 24 bits (23 "mantissa" bits, plus an implied leading 1 bit). At values around 63,000,000, the gap between consecutive float values is greater than 1. You can verify this with the Math.ulp method, which gives the difference between a value and the next representable value.
The code
System.out.println(Math.ulp(63000000.0f));
prints
4.0
You can use double values for a far greater (yet still limited) precision:
System.out.println(Math.ulp(63000000.0));
prints
7.450580596923828E-9
However, you can just use ints here, because your values, at about 63 million, are still well below the maximum possible int value, which is about 2 billion.
A float in Java is a number in IEEE 754 floating point representation; even though it can represent values from ±1.40129846432481707e-45 to ±3.40282346638528860e+38, it has only 6 or 7 significant decimal digits.
A simple solution would be to use a double, which has at least 14 significant digits and can cover all the values of an int without any issue.
However, if it is accuracy you're looking for, stay away from native floating point representations and go for classes like BigInteger and BigDecimal.
No, they are not necessarily the same value. An int and a float are each 32 bits but in a float some of those bits are used for the floating point part of the number so there are fewer whole numbers which can be represented in a float than in an int. Depending on what your application is doing with these numbers you may not care about these differences or maybe you want to look at using something like BigDecimal.
Floats don't allow more precision; floats allow a wider range of numbers.
We've got 2^32 possible values for integers in range (approximately) -2 * 10^9 to 2 * 10^9. Floats are also 32bit, so the number of possible values is at most the same as for integers.
Out of these 32 bits, some are reserved for the mantissa and the rest for the exponent. The number represented by the float is then calculated (for simplicity I'll use base 10) as mantissa * 10^exponent.
Obviously, the maximum precision is limited by the number of bits assigned to the mantissa. So some integers can be represented exactly, but others won't fit in the mantissa, and their least significant bits are thrown away, as in your case.
Float has a greater range of values but lower precision.
Int has a lower range of values but full precision.
Around your values, int is exact to within 1, while float is only exact to within 4.
So if you are dealing with large numbers but don't care about +/- 4, use float. But if you need the last digit to be precise, you need to use int.
Given below is the test code and its output. When I get the float value from a Number value, precision is lost. Can anyone tell me why this happens and how to handle it?
public static void main(String[] args)
{
    try
    {
        java.lang.Number numberVal = 676543.21;
        float floatVal = numberVal.floatValue();
        System.out.println("Number value : " + numberVal);
        System.out.println("float value : " + floatVal);
        System.out.println("Float.MAX_VALUE : " + Float.MAX_VALUE);
        System.out.println("Is floatVal > Float.MAX_VALUE ? " + (floatVal > Float.MAX_VALUE));
    }
    catch (Exception e)
    {
        e.printStackTrace();
    }
}
output:
Number value : 676543.21
float value : 676543.2
Float.MAX_VALUE : 3.4028235E38
Is floatVal > Float.MAX_VALUE ? false
Also, why is the float value less than Float.MAX_VALUE?
floats are usually good up to 6 significant digits. Your number is 676543.21 which has 8 significant digits. You will see errors past 6 digits, and those errors will easily propagate to more significant digits the more calculations you perform. If you value your sanity (or precision), use doubles. Floats can't even count past 10 million accurately.
Now, your value has 2 decimal places, which suggests to me that there is a chance you want to represent currency - DO NOT. Use your own class that internally represents values using fixed-point arithmetic.
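A minimal sketch of such a fixed-point holder (a hypothetical class, just to show the idea of keeping whole cents in a long; BigDecimal is only used to parse the decimal string exactly):
// Hypothetical minimal fixed-point money holder
final class Money {
    private final long cents;
    private Money(long cents) { this.cents = cents; }
    // parses "123.45" exactly and converts to whole cents;
    // throws ArithmeticException if the input has fractions of a cent
    static Money parse(String usd) {
        return new Money(new java.math.BigDecimal(usd).movePointRight(2).longValueExact());
    }
    Money plus(Money other) { return new Money(this.cents + other.cents); }
    long inCents() { return cents; }
}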
Now, as to Float.MAX_VALUE, which is 3.4028235E38, meaning 3.4028235*10^38: that is about 10^32 times larger than your value. Notice the 'E' in there? That's the exponent.
I love these. But I'll make it quick and painless.
Floats and doubles (i.e. floating point numbers) aren't kept in memory precisely either. But I don't want to get into float accuracy vs precision issues here, I'll just point you to a link - http://www.cygnus-software.com/papers/comparingfloats/comparingfloats.htm
As far as your other question, lemme put it this way, since floats are stored in scientific notation.
floatVal = 6.765432E5 = 6.765432 * 10^5
MAX_VALUE = 3.4E38 = 3.4 * 10^38
MAX_VALUE is about 32 orders of magnitude bigger than your float number. Feel free to take a look here as well: http://steve.hollasch.net/cgindex/coding/ieeefloat.html
I spent a great deal of time comparing and fixing some FP issues a few months ago...
Try to use a small delta when comparing floats.
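For instance, a common (if simplistic) pattern; the tolerance below is just an arbitrary example and should be chosen for your own value range:
static boolean nearlyEqual(float a, float b) {
    final float EPSILON = 1e-5f; // example tolerance, not a universal constant
    return Math.abs(a - b) < EPSILON;
}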
Maybe this link will help you http://introcs.cs.princeton.edu/java/91float/
All data types have representation limits so the fact there is a limit shouldn't be surprising.
float uses 24 bits for its "mantissa", which holds all the significant digits. This means it has about 7 digits of precision (as 2^24 is about 16 million).
double uses 53 bits for its "mantissa", so it can hold about 16 digits accurately.
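You can see the float limit directly: 2^24 and 2^24 + 1 collapse to the same float (a quick check):
System.out.println(16777216f == 16777217f); // true: 16777217 (2^24 + 1) is not representable as a float
System.out.println(Math.ulp(16777216f));    // 2.0: above 2^24 the spacing between floats is already 2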
3.4028235E38 is greater than 676543.2. Float.MAX_VALUE is the largest float that is possible to represent.
Decimal literals like the one you have typed default to type double, not float, so the java.lang.Number variable holds it as a Double. If you were to replace that with java.lang.Number numberVal = 676543.21f; I would expect the same level of precision loss from both. Alternatively, replace float floatVal = numberVal.floatValue(); with double doubleVal = numberVal.doubleValue(); in order not to lose precision.
EDIT, as an example, try running:
Number num = 676543.21;
System.out.println(num); //676543.21
System.out.println(num.doubleValue()); //676543.21
System.out.println(num.floatValue()); //676543.2 <= loses the precision
to see the difference in precision of the types
Float.MAX_VALUE returns the largest value that a float can ever hold; any higher will overflow. If you look carefully, you'll see an 'E' in its textual representation. This is standard index form: the 'E' means "multiply by 10 to the power of whatever number follows the E" (or in Java-speak, * Math.pow(10, numberAfterE)).