Why are the values of String.valueOf(5.6d + 5.8d) and String.format("%f", 5.6d + 5.8d) different?
String.valueOf(5.6d + 5.8d) will print "11.399999999999999".
String.format("%f", 5.6d + 5.8d) will print "11.400000".
Why is it so?
Edit: This question differs from Is floating point math broken?, because String.format() rounds half up (see the answers).
From the documentation of Format Strings for String#format:
If the precision is less than the number of digits which would appear after the decimal point in the string returned by Float.toString(float) or Double.toString(double) respectively, then the value will be rounded using the round half up algorithm.
By default, String.format("%f", ...) uses a precision of 6 decimal digits. Because this is fewer digits than Double.toString(double) (which is what String.valueOf(double) delegates to) would produce, the value is rounded as specified above.
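For instance, a minimal sketch of the two calls side by side (the %.17f line is just for illustration; outputs are from a typical JVM):

double sum = 5.6d + 5.8d;
System.out.println(String.valueOf(sum));         // 11.399999999999999
System.out.println(String.format("%f", sum));    // 11.400000 (default precision of 6, rounded half up)
System.out.println(String.format("%.17f", sum)); // 11.39999999999999900 (zeros appended past the toString digits)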
There are two parts to the explanation.
The result of 5.6d + 5.8d is not exactly 11.4 due to binary floating point representation, precision and rounding issues; see Is floating point math broken? for explanations of why that is so. (And no, it isn't broken!)
The reason that String.valueOf and String.format are outputting different answers for the same double value is down to the respective specifications:
For valueOf:
"How many digits must be printed for the fractional part of m or a? There must be at least one digit to represent the fractional part, and beyond that as many, but only as many, more digits as are needed to uniquely distinguish the argument value from adjacent values of type double."
The double value is closer to 11.399999999999999 than to 11.4, so the output is the former.
For format:
"If the conversion is 'e', 'E' or 'f', then the precision is the number of digits after the decimal separator. If the precision is not specified, then it is assumed to be 6."
When you round 11.399999999999999 to 6 digits of precision after the decimal point, you get 11.400000.
(The above quotes are from the Java 9 javadocs.)
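To see the exact double that both methods are formatting, one can use the exact BigDecimal(double) constructor, which performs no decimal rounding; a small sketch:

System.out.println(new java.math.BigDecimal(5.6d + 5.8d));
// prints the full decimal expansion of the double, which is slightly below 11.4
// (and closer to 11.399999999999999 than to 11.4), not exactly 11.4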
Related
I understand the theory of binary numbers and that operations on double values are not precise. However, in Java, I have no idea why (double) 65 / 100 comes out as 0.65, which is exactly correct in decimal, rather than something like 0.6500000000000004.
double a = 5;
double b = 4.35;
int c = 65;
int d = 100;
System.out.println(a - b); // 0.6500000000000004
System.out.println((double) c / d); // 0.65
Java has its own way of handling floating-point binary-to-decimal conversions.
A simple program in C (compiled with gcc) gives the result:
printf("1: %.20f\n", 5.0 - 4.35); // 0.65000000000000035527
printf("2: %.20f\n", 65./100); // 0.65000000000000002220
while Java gives this result (note that 17 digits would already show the difference; I am using 20 to make it clearer):
System.out.printf("%.20f\n", 5.0 - 4.35); // 0.65000000000000040000
System.out.printf("%.20f\n", 65./100); // 0.65000000000000000000
But when using the %a format specifier, both languages print the underlying (correct) hexadecimal value: 0x1.4ccccccccccd00000000p-1.
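For reference, Java's Formatter supports the same conversion; a hedged sketch (by default Java trims trailing zeros where C's %.20a pads them, so the text differs even though the bits agree):

System.out.printf("%a%n", 5.0 - 4.35); // 0x1.4ccccccccccdp-1 on a typical JVM: same bits, trailing zero trimmed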
So, Java performs some extra rounding at some point in the code. The apparent issue here is that Java has a different set of rules for converting binary to decimal, from the Java specification:
The number of digits in the result for the fractional part of m or a is equal to the precision. If the precision is not specified then the default value is 6. If the precision is less than the number of digits which would appear after the decimal point in the string returned by Float.toString(float) or Double.toString(double) respectively, then the value will be rounded using the round half up algorithm. Otherwise, zeros may be appended to reach the precision. For a canonical representation of the value, use Float.toString(float) or Double.toString(double) as appropriate. (emphasis mine)
And in the toString specification:
How many digits must be printed for the fractional part of m or a? There must be at least one digit to represent the fractional part, and beyond that as many, but only as many, more digits as are needed to uniquely distinguish the argument value from adjacent values of type double. That is, suppose that x is the exact mathematical value represented by the decimal representation produced by this method for a finite nonzero argument d. Then d must be the double value nearest to x; or if two double values are equally close to x, then d must be one of them and the least significant bit of the significand of d must be 0. (emphasis mine)
So, Java does perform a different binary-to-decimal conversion than C, but the result remains closer to the true binary value than to any other double, so the spec guarantees that the binary value can be recovered by a decimal-to-binary conversion.
Professor William Kahan warned about some Java floating-point issues in this article:
How Java’s Floating-Point Hurts Everyone Everywhere
But this conversion behaviour seems to be IEEE-compliant.
EDIT: I have included information provided by @MarkDickinson in the comments, to report that this Java behaviour, albeit different from C, is documented and IEEE-compliant. This has already been explained here, here, and here.
Double.toString(0.1) produces "0.1", but 0.1 is a floating-point number.
Floating-point numbers can't be represented exactly in a programming language, yet Double.toString produces the exact result "0.1". How does it do that? Does it always produce a result that is mathematically equal to the double literal?
Assume that the literal is in double precision.
Here is the problem I have seen:
When using Apache POI to read an Excel file, XSSFCell.getNumericCellValue can only return a double. If I use BigDecimal.valueOf to convert it to a BigDecimal, is that always safe, and why?
Double.toString produces the exact result "0.1"; how does it do that? Does it always produce a result that is mathematically equal to the double literal?
Double.toString(XXX) will always produce a numeral equal to XXX if XXX is a decimal numeral with 15 or fewer significant digits and it is within the range of the Double format.
There are two reasons for this:
The Double format (IEEE-754 binary64) has enough precision so that 15-digit decimal numerals can always be distinguished.
Double.toString does not display the exact Double value but instead produces the fewest significant digits needed to distinguish the number from nearby Double values.
For example, the literal 0.1 in source text is converted to the Double value 0.1000000000000000055511151231257827021181583404541015625. But Double.toString will not produce all those digits by default. The algorithm it uses produces “0.1” because that is enough to uniquely distinguish 0.1000000000000000055511151231257827021181583404541015625 from its two neighbors, which are 0.09999999999999999167332731531132594682276248931884765625 and 0.10000000000000001942890293094023945741355419158935546875. Both of those are farther from 0.1.
Thus, Double.toString(1.234), Double.toString(123.4e-2), and Double.toString(.0001234e4) will all produce “1.234”—a numeral whose value equals all of the original decimal numerals (before they are converted to Double), although it differs in form from some of them.
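A quick sketch of that behaviour (all three literals denote the same decimal value, hence the same double):

System.out.println(Double.toString(1.234));      // 1.234
System.out.println(Double.toString(123.4e-2));   // 1.234
System.out.println(Double.toString(.0001234e4)); // 1.234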
When using Apache POI to read an Excel file, XSSFCell.getNumericCellValue can only return a double. If I use BigDecimal.valueOf to convert it to a BigDecimal, is that always safe, and why?
If the cell value being retrieved is not representable as a Double, then XSSFCell.getNumericCellValue must change it. From memory, I think BigDecimal.valueOf will produce the exact value of the Double returned, but I cannot speak authoritatively to this. That is a separate question from how Double and Double.toString behave, so you might ask it as a separate question.
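As a hedged illustration of the two BigDecimal conversions involved here (BigDecimal.valueOf(double) goes through Double.toString, while the new BigDecimal(double) constructor takes the exact binary value):

double d = 0.1;
System.out.println(java.math.BigDecimal.valueOf(d)); // 0.1
System.out.println(new java.math.BigDecimal(d));     // 0.1000000000000000055511151231257827021181583404541015625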
10e-5d is a double literal equal to 10 × 10^-5, i.e. 10^-4.
Double.toString(10e-5d) therefore returns "1.0E-4".
Well, the double type has limited precision, so if you write enough digits after the decimal point, some of them will be dropped/rounded.
For example:
System.out.println(Double.toString(0.123456789123456789));
prints
0.12345678912345678
I agree with Eric Postpischil's answer, but another explanation may help.
For each double number there is a range of real numbers that round to it under round-half-even rules. For 0.1000000000000000055511151231257827021181583404541015625, the result of rounding 0.1 to a double, the range is [0.099999999999999998612221219218554324470460414886474609375,0.100000000000000012490009027033011079765856266021728515625].
Any double literal whose real number arithmetic value is in that range has the same double value as 0.1.
Double.toString(x) returns the String representation of the real number in the range that converts to x and has the fewest decimal places. Picking any real number in that range ensures that the round trip converting a double to a String using Double.toString and then converting the String back to a double using round-half-even rules recovers the original value.
System.out.println(0.100000000000000005); prints "0.1" because 0.100000000000000005 is in the range that rounds to the same double as 0.1, and 0.1 is the real number in that range with the fewest decimal places.
This effect is rarely visible because literals other than "0.1" with real number value in the range are rare. It is more noticeable for float because of the lesser precision. System.out.println(0.100000001f); prints "0.1".
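A small check of the round-trip guarantee and the float example (assuming a standard JDK):

double x = 0.1;
System.out.println(Double.parseDouble(Double.toString(x)) == x); // true, for any finite double
System.out.println(0.100000001f); // 0.1, because float needs fewer digits to be distinguished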
Today I found an interesting fact: the way a formula is written influences the precision of the result. Please see the code below.
double x = 7d
double y = 10d
println(1-x/y)
println((y-x)/y)
I wrote this code in Groovy; you can just treat it as Java.
The result is
1-x/y: 0.30000000000000004
(y-x)/y: 0.3
It's interesting that two formulas which should be equal give different results.
Can anyone explain this for me?
And can I apply the second formula anywhere applicable, as a valid workaround for double-precision issues?
To control the precision of floating point arithmetic, you should use java.math.BigDecimal.
You can do something like this.
BigDecimal xBigdecimal = BigDecimal.valueOf(7d);
BigDecimal yBigdecimal = BigDecimal.valueOf(10d);
System.out.println(BigDecimal.valueOf(1).subtract(xBigdecimal.divide(yBigdecimal)));
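One caveat worth noting about the example above: BigDecimal.divide with no scale or MathContext throws ArithmeticException when the quotient has a non-terminating decimal expansion (it works here only because 7/10 terminates). In general you would call something like xBigdecimal.divide(yBigdecimal, MathContext.DECIMAL64).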
Can anyone explain this for me?
The float and double primitive types in Java are floating-point numbers, where the number is stored as a binary representation of a fraction and an exponent.
More specifically, a double-precision floating point value such as the double type is a 64-bit value, where:
1 bit denotes the sign (positive or negative).
11 bits for the exponent.
52 bits for the significant digits (the fractional part as a binary fraction).
These parts are combined to produce a double representation of a value.
For a detailed description of how floating-point values are handled in Java, see the Floating-Point Types, Formats, and Values section of the Java Language Specification.
The byte, char, int, and long types are fixed-point numbers, which are exact representations of numbers. Unlike fixed-point numbers, floating-point numbers will sometimes (it is safe to assume "most of the time") not be able to represent a number exactly. This is the reason why you end up with 0.30000000000000004 as the result of 1 - (x / y).
When requiring a value that is exact, such as 1.5 or 150.1005, you'll want to use one of the fixed-point types, which will be able to represent the number exactly.
As I've already shown in the example above, Java has a BigDecimal class which can handle very large and very small numbers.
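A brief sketch of the contrast (the string-based BigDecimal constructor keeps the decimal value exact):

System.out.println(1 - 7d / 10d); // 0.30000000000000004
System.out.println(java.math.BigDecimal.ONE.subtract(new java.math.BigDecimal("0.7"))); // 0.3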
I'm working on a method that translates a string into an appropriate Number type, depending upon the format of the number. If the number appears to be a floating point value, then I need to return the smallest type I can use without sacrificing precision (Float, Double or BigDecimal).
Based on How many significant digits have floats and doubles in java? (and other resources), I've learned that Float values have 23 bits for the mantissa. Based on this, I used the following method to return the bit length for a given value:
private static int getBitLengthOfSignificand(String integerPart,
String fractionalPart) {
return new BigInteger(integerPart + fractionalPart).bitLength();
}
If the result of this test is below 24, I return a Float. If below 53 I return a Double, otherwise a BigDecimal.
However, I'm confused by the result when I consider Float.MAX_VALUE, which is 3.4028235E38. The bit length of the significand is 26 according to my method (where integerPart = 3 and fractionalPart = 4028235). This causes my method to return a Double when clearly a Float would suffice.
Can someone highlight the flaw in my thinking or implementation? Another idea I had was to convert the string to a BigDecimal and scale down using floatValue() and doubleValue(), testing for overflow (which is represented by infinite values). But that loses precision, so isn't appropriate for me.
The significand is stored in binary, and you can think of it as a number in its decimal representation only if you don't let it confuse you.
The exponent is a binary exponent that does not represent a multiplication by a power of ten but by a power of two. For this reason, the E38 in the number you used as example is only a convenience: the real significand is in binary and should be multiplied by a power of two to obtain the actual number. Powers of two and powers of ten aren't the same, so “3.4028235” is not the real significand.
The real significand of Float.MAX_VALUE is, in hexadecimal notation, 0x1.fffffe, and its associated exponent is 127, meaning that Float.MAX_VALUE is actually 0x1.fffffe × 2^127.
Looking at the decimal representation to choose a binary floating-point type for the value, as you are trying to do, doesn't work. For one thing, the number of decimal digits that one is sure to recover from a float differs from the number of decimal digits one may need to write to distinguish a float from its neighbors (6 and 9 respectively). You chose to write "3.4028235E38", but you could have written 3.40282E38, which, for your algorithm, looks easier to represent, when it isn't, really. When people write that "3.4028235E38" is the largest finite value of the float type, they mean that if you round this decimal number to float, you arrive at the largest float. If you parse "3.4028235E38" as a double-precision number, it won't even be equal to Float.MAX_VALUE.
To put it differently: another way to write Float.MAX_VALUE is 3.4028234663852885981170418348451692544E38. It is still representable as a float (it represents the exact same value as 3.4028235E38). It looks like it has many digits because these are decimal digits that appear for a decimal exponent, when in fact the number is represented internally with a binary exponent.
(By the way, your approach does not check that the exponent is in range to represent a number in the chosen type, which is another condition for a type to be able to represent the number from a string.)
I would work in terms of the difference between the actual value and the nearest float. BigDecimal can store any finite length decimal fraction exactly and do arithmetic on it:
Convert the String to the nearest float x. If x is infinite, but the value has a finite double representation use that.
Convert the String exactly to BigDecimal y.
If y is zero, use float, which can represent zero exactly.
If not, convert the float x to BigDecimal, z.
Calculate, in BigDecimal to a reasonable number of decimal places, the absolute value of (y-z)/z. That is the relative rounding error due to using float. If it is small enough for your purposes, less than some value you pick, use float. If not, use double.
If you literally want no sacrifice in precision, it is much simpler. Convert to both float and double. Compare them for equality. The comparison will be done in double. If they compare equal, go with the float. If not, go with the double.
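A minimal sketch of that last check (the method name and structure are mine, not from the question; it also falls back to BigDecimal when even double loses digits):

private static Number narrowestType(String s) {
    java.math.BigDecimal exact = new java.math.BigDecimal(s); // exact decimal value of the string
    double d = Double.parseDouble(s);                         // nearest double
    // If double cannot hold the value exactly, keep the BigDecimal.
    if (Double.isInfinite(d) || new java.math.BigDecimal(d).compareTo(exact) != 0) {
        return exact;
    }
    float f = (float) d;
    // The comparison is done in double: if the value survives the float
    // round trip unchanged, float sacrifices nothing for this input.
    return ((double) f == d) ? Float.valueOf(f) : Double.valueOf(d);
}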
While I was having fun with code from Java Puzzlers (I don't have the book), I came across this piece of code:
public static void main(String args[]) {
System.out.println(2.00 - 1.10);
}
Output is
0.8999999999999999
When I tried changing the code to
2.00d - 1.10d still I get the same output as 0.8999999999999999
For,2.00d - 1.10f Output is 0.8999999761581421
For,2.00f - 1.10d Output is 0.8999999999999999
For,2.00f - 1.10f Output is 0.9
Why didn't I get 0.9 in the first place? I could not make heads or tails of this. Can somebody explain it?
Because in Java, double values are IEEE 754 floating-point numbers.
The workaround could be to use the BigDecimal class:
Immutable, arbitrary-precision signed decimal numbers. A BigDecimal
consists of an arbitrary precision integer unscaled value and a 32-bit
integer scale. If zero or positive, the scale is the number of digits
to the right of the decimal point. If negative, the unscaled value of
the number is multiplied by ten to the power of the negation of the
scale. The value of the number represented by the BigDecimal is
therefore (unscaledValue × 10^-scale).
On a side note, you may also want to check the Wikipedia article on IEEE 754 to see how floating-point numbers are stored on most systems.
The more operations you do on a floating point number, the more significant rounding errors can become.
In binary, 0.1 is 0.000110011001100110011001100110011....,
so it cannot be represented exactly. Depending on where you round it off (float or double), you get different answers.
So 0.1f = 0.000110011001100110011001101 (24 significant bits, with the last bit rounded up),
and 0.1d = 0.00011001100110011001100110011001100110011001100110011010 (53 significant bits, likewise rounded in the last place).
Note that the expansion repeats in a 1100 cycle. Float and double precision cut it off at different points in the cycle, so the two rounding errors have different sizes, which leads to the difference.
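A sketch that makes the two cut-off points visible, using the exact BigDecimal(double) constructor to reveal every stored digit:

System.out.println(new java.math.BigDecimal((double) 0.1f)); // 0.100000001490116119384765625
System.out.println(new java.math.BigDecimal(0.1d));          // 0.1000000000000000055511151231257827021181583404541015625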
But most importantly:
Never assume floating point numbers are exact
The other answers are correct; just to point to an authoritative reference, I quote the Oracle documentation:
double: The double data type is a double-precision 64-bit IEEE 754
floating point. Its range of values is beyond the scope of this
discussion, but is specified in the Floating-Point Types, Formats, and
Values section of the Java Language Specification. For decimal values,
this data type is generally the default choice. As mentioned above,
this data type should never be used for precise values, such as
currency.
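For example, a minimal sketch of why this matters for money (BigDecimal built from strings keeps cents exact):

System.out.println(0.10 + 0.20); // 0.30000000000000004, not a valid amount of dollars
System.out.println(new java.math.BigDecimal("0.10").add(new java.math.BigDecimal("0.20"))); // 0.30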