Is division by infinity a NaN in Java?

I have been having a lot of problems with NaN values propagating through a very long program I have to look after. After much single-stepping I found that, at some point, a variable's value is shown by the debugger as Infinity, and another variable is then divided by this Infinity value, which results in NaN. Is this behaviour correct, or should it have resulted in 0? All the variables involved are doubles.

Is division by infinity a NaN in Java?
The short answer is No.
The Java Language Specification (JLS 15.17.2) says:
"Division of a finite value by an infinity results in a signed zero."
It also mentions that this is "determined by the rules of IEEE 754 arithmetic".
The only case where division by infinity gives a NaN is when you divide an infinity by an infinity. (Same reference as above.)
If you (really) see differently, then there is a bug in your Java "platform" [1]. But that would be an extraordinary thing, so you need to check your evidence and methodology really thoroughly before calling "bug".
[1] ... most likely in the floating point hardware!
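For example, here is a minimal sketch showing the rule directly (the class name is just for illustration):

public class InfinityDivision {
    public static void main(String[] args) {
        double inf = Double.POSITIVE_INFINITY;

        System.out.println(4.0 / inf);               // 0.0  (finite / infinity is a signed zero)
        System.out.println(-4.0 / inf);              // -0.0
        System.out.println(inf / inf);               // NaN  (the only division-by-infinity case giving NaN)
        System.out.println(Double.isNaN(4.0 / inf)); // false
    }
}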

Think about the nature of division.
4/5
Means you've got four of something, and you're splitting it into 5 pieces. The outcome is the size of one of those pieces.
4/infinity
Means you've got four of something, and you're splitting it into infinitely many, infinitely small pieces. The thinking here is that the result can never be 0 because the numbers are continuous, hence NaN: you never stop handing out those pieces of 4, so you never get to measure one of them. (Note, though, that the JLS and IEEE 754 define a finite value divided by infinity to be exactly a signed zero, as the answer above explains.)

Related

java.lang.Math—is “within 1 ULP” exclusive or inclusive?

java.lang.Math docs say for many functions, such as Math.pow:
The computed result must be within 1 ulp of the exact result.
But I haven't been able to find out what this means precisely. Is it exclusive or inclusive? In other words, if the exact result can be represented as a double, will the returned value be that exact result, or may it still be off by 1 ULP?
For example, can we rely on Math.pow(3.0, 2.0) == 9.0? I know using equality comparison is almost always a bad idea for doubles, so I am mostly asking out of curiosity and to be able to point people to their mistakes (or reassure them) when they do something like that.
FYI, from the java.lang.Math class documentation:
The quality of implementation specifications concern two properties,
accuracy of the returned result and monotonicity of the method.
Accuracy of the floating-point Math methods is measured in terms of
ulps, units in the last place. For a given floating-point format, an
ulp of a specific real number value is the distance between the two
floating-point values bracketing that numerical value. When discussing
the accuracy of a method as a whole rather than at a specific
argument, the number of ulps cited is for the worst-case error at any
argument. If a method always has an error less than 0.5 ulps, the
method always returns the floating-point number nearest the exact
result; such a method is correctly rounded. A correctly rounded method
is generally the best a floating-point approximation can be; however,
it is impractical for many floating-point methods to be correctly
rounded.
Instead, for the Math class, a larger error bound of 1 or 2 ulps is
allowed for certain methods. Informally, with a 1 ulp error bound,
when the exact result is a representable number, the exact result
should be returned as the computed result; otherwise, either of the
two floating-point values which bracket the exact result may be
returned.
For exact results large in magnitude, one of the endpoints of the
bracket may be infinite. Besides accuracy at individual arguments,
maintaining proper relations between the method at different arguments
is also important. Therefore, most methods with more than 0.5 ulp
errors are required to be semi-monotonic: whenever the mathematical
function is non-decreasing, so is the floating-point approximation,
likewise, whenever the mathematical function is non-increasing, so is
the floating-point approximation. Not all approximations that have 1
ulp accuracy will automatically meet the monotonicity requirements.
Source
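One way to probe this yourself is to measure how far a computed result is from the value you expect, in units of Math.ulp. This is only an empirical check on your own JDK, not a guarantee from the spec (class name is illustrative):

public class UlpCheck {
    public static void main(String[] args) {
        double computed = Math.pow(3.0, 2.0);

        // Math.ulp(9.0) is the spacing between 9.0 and the next larger double (about 1.78e-15)
        System.out.println("ulp(9.0)        = " + Math.ulp(9.0));
        System.out.println("pow(3,2) == 9.0 : " + (computed == 9.0));
        System.out.println("error in ulps   : " + Math.abs(computed - 9.0) / Math.ulp(9.0));
    }
}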

Wrong Output Dollar Amount To Coins [duplicate]

double r = 11.631;
double theta = 21.4;
In the debugger, these are shown as 11.631000000000000 and 21.399999618530273.
How can I avoid this?
These accuracy problems are due to the internal representation of floating point numbers, and there's not much you can do to avoid them.
By the way, printing these values at run-time often still leads to the correct results, at least using modern C++ compilers. For most operations, this isn't much of an issue.
I liked Joel's explanation, which deals with a similar binary floating point precision issue in Excel 2007:
See how there's a lot of 0110 0110 0110 there at the end? That's because 0.1 has no exact representation in binary... it's a repeating binary number. It's sort of like how 1/3 has no representation in decimal. 1/3 is 0.33333333 and you have to keep writing 3's forever. If you lose patience, you get something inexact.
So you can imagine how, in decimal, if you tried to do 3*1/3, and you didn't have time to write 3's forever, the result you would get would be 0.99999999, not 1, and people would get angry with you for being wrong.
If you have a value like:
double theta = 21.4;
And you want to do:
if (theta == 21.4)
{
}
You have to be a bit clever: you need to check whether the value of theta is really close to 21.4, rather than exactly that value.
if (fabs(theta - 21.4) <= 1e-6)
{
}
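The same idea in Java, for reference (a minimal sketch; the 1e-6 tolerance is an arbitrary choice and should match the scale of your values):

public class EpsilonCompare {
    public static void main(String[] args) {
        double theta = 21.4;

        // Compare against a tolerance instead of using ==, since 21.4 has no exact binary form
        if (Math.abs(theta - 21.4) <= 1e-6) {
            System.out.println("theta is close enough to 21.4");
        }
    }
}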
This is partly platform-specific - and we don't know what platform you're using.
It's also partly a case of knowing what you actually want to see. The debugger is showing you - to some extent, anyway - the precise value stored in your variable. In my article on binary floating point numbers in .NET, there's a C# class which lets you see the absolutely exact number stored in a double. The online version isn't working at the moment - I'll try to put one up on another site.
Given that the debugger sees the "actual" value, it's got to make a judgement call about what to display - it could show you the value rounded to a few decimal places, or a more precise value. Some debuggers do a better job than others at reading developers' minds, but it's a fundamental problem with binary floating point numbers.
Use the fixed-point decimal type if you want stability at the limits of precision. There are overheads, and you must explicitly cast if you wish to convert to floating point. If you do convert to floating point you will reintroduce the instabilities that seem to bother you.
Alternately you can get over it and learn to work with the limited precision of floating point arithmetic. For example you can use rounding to make values converge, or you can use epsilon comparisons to describe a tolerance. "Epsilon" is a constant you set up that defines a tolerance. For example, you may choose to regard two values as being equal if they are within 0.0001 of each other.
It occurs to me that you could use operator overloading to make epsilon comparisons transparent. That would be very cool.
For mantissa-exponent representations, EPSILON must be scaled to remain within the representable precision: for a number N, Epsilon = N / 10E+14.
System.Double.Epsilon is the smallest representable positive value for the Double type. It is too small for our purpose. Read Microsoft's advice on equality testing.
I've come across this before (on my blog) - I think the surprise tends to be that the 'irrational' numbers are different.
By 'irrational' here I'm just referring to the fact that they can't be accurately represented in this format. Real irrational numbers (like π) can't be accurately represented at all.
Most people are familiar with 1/3 not working in decimal: 0.3333333333333...
The odd thing is that 1.1 doesn't work in floats. People expect decimal values to work in floating point numbers because of how they think of them:
1.1 is 11 x 10^-1
When actually they're stored in base 2, where (as a double)
1.1 is approximately 4953959590107546 x 2^-52
You can't avoid it, you just have to get used to the fact that some floats are 'irrational', in the same way that 1/3 is.
One way you can avoid this is to use a library that uses an alternative method of representing decimal numbers, such as BCD
If you are using Java and you need accuracy, use the BigDecimal class for floating point calculations. It is slower but safer.
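For example, a small sketch of the difference (a BigDecimal built from the string "0.1" is exact decimal arithmetic, while one built from the double 0.1 inherits the binary rounding error):

import java.math.BigDecimal;

public class DecimalDemo {
    public static void main(String[] args) {
        // double arithmetic: adding 0.1 ten times does not give exactly 1.0
        double d = 0.0;
        for (int i = 0; i < 10; i++) {
            d += 0.1;
        }
        System.out.println(d);                    // 0.9999999999999999

        // BigDecimal built from strings performs exact decimal arithmetic
        BigDecimal b = BigDecimal.ZERO;
        for (int i = 0; i < 10; i++) {
            b = b.add(new BigDecimal("0.1"));
        }
        System.out.println(b);                    // 1.0

        // Constructing BigDecimal from a double keeps the binary representation error
        System.out.println(new BigDecimal(0.1));  // 0.1000000000000000055511151231257827...
    }
}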
Seems to me that 21.399999618530273 is the single precision (float) representation of 21.4. Looks like the debugger is casting down from double to float somewhere.
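You can check that guess by casting the double down to float and widening it back (a sketch; the printed digits should match what the debugger showed if it really was displaying a float):

public class FloatVsDouble {
    public static void main(String[] args) {
        double d = 21.4;
        float f = (float) d;

        System.out.println(d);              // 21.4
        System.out.println((double) f);     // 21.399999618530273
        System.out.println(21.4f == 21.4);  // false: float and double round 21.4 differently
    }
}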
You can't avoid this, because you're using floating point numbers with a fixed number of bytes. There's simply no one-to-one mapping between the real numbers and such a finite notation.
But most of the time you can simply ignore it. 21.4 == 21.4 would still be true, because it is the same number with the same error. But 21.4f == 21.4 may not be true, because the errors for float and for double are different.
If you need fixed precision, perhaps you should try fixed-point numbers, or even integers. For example, I often use int(1000*x) when passing values to a debug pager.
Dangers of computer arithmetic
If it bothers you, you can customize the way some values are displayed during debug. Use it with care :-)
Enhancing Debugging with the Debugger Display Attributes
Refer to General Decimal Arithmetic
Also take care when comparing floats; see this answer for more information.
According to the Java Language Specification:
"If at least one of the operands to a numerical operator is of type double, then the
operation is carried out using 64-bit floating-point arithmetic, and the result of the
numerical operator is a value of type double. If the other operand is not a double, it is
first widened (§5.1.5) to type double by numeric promotion (§5.6)."
Here is the Source
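A tiny sketch of that promotion rule in action:

public class PromotionDemo {
    public static void main(String[] args) {
        int i = 1;

        System.out.println(i / 3);    // 0                  : both operands are int, so integer division
        System.out.println(i / 3.0);  // 0.3333333333333333 : i is widened to double first
    }
}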

Which is more accurate? java.lang.Math.E or Math.exp(1.0)

Reading the Javadocs, I see that Math.E is "The double value that is closer than any other to e, the base of the natural logarithms." The printed value for Math.E is 2.718281828459045, while the value of Math.exp(1.0), which should be the same, is 2.7182818284590455 (one extra 5 at the end).
From the docs, it sounds like the bits in Math.E have been "manually adjusted" to get closer to the actual value of e than the calculation produced by Math.exp(1.0). Is this correct, or am I reading the docs incorrectly?
If that is correct, then is using Math.pow(Math.E, n) more accurate than Math.exp(n), or less? I've Googled and searched SO, but can't find anything on this particular issue.
The actual value to 16 decimal places is 2.7182818284590452; that trailing 2 is closer to the 0 at the end of Math.E's printed value than to the 5 at the end of Math.exp(1.0)'s, so the constant is closer.
Note that when doing floating point calculations with either number it's quite likely the error in the floating point representation of your answer will make which one you use largely irrelevant.
Math.E        = 2.718281828459045
Math.exp(1.0) = 2.7182818284590455
For comparison, this is the value from Wikipedia: 2.7182818284590452. The only difference I can see is a rounding error in the last digit of Math.exp(1.0), where the value is 5 instead of 2. So strictly speaking Math.E is more accurate, but unless you're doing some really crazy stuff it won't matter for precision.
There may be speed considerations for using the precalculated Math.E instead of Math.exp(1.0). You might want to check that out too.
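If you want to see the difference for yourself, new BigDecimal(double) exposes the exact binary value each double stores. A minimal sketch (the class name and reference digits are chosen for illustration; Math.exp(1.0) may differ by up to 1 ulp across JVMs):

import java.math.BigDecimal;

public class CompareE {
    public static void main(String[] args) {
        // e truncated to 25 decimal places, used purely as a reference string
        BigDecimal reference = new BigDecimal("2.7182818284590452353602874");

        // new BigDecimal(double) shows the exact binary value stored in each double
        System.out.println("Math.E      = " + new BigDecimal(Math.E));
        System.out.println("Math.exp(1) = " + new BigDecimal(Math.exp(1.0))); // may vary by 1 ulp per JVM
        System.out.println("reference e = " + reference);
    }
}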

when to use expm1 instead of exp in java

I am confused about using the expm1 function in Java.
The Oracle java doc for Math.expm1 says:
Returns e^x - 1. Note that for values of x near 0, the exact sum of
expm1(x) + 1 is much closer to the true result of e^x than exp(x).
but this page says:
However, for negative values of x, roughly -4 and lower, the algorithm
used to calculate Math.exp() is relatively ill-behaved and subject to
round-off error. It's more accurate to calculate e^x - 1 with a
different algorithm and then add 1 to the final result.
Should we use expm1(x) for negative x values, or for values near 0?
The implementation of double at the bit level means that you can store doubles near 0 with much more precision than doubles near 1. That's why expm1 can give you much more accuracy for near-zero powers than exp can, because double doesn't have enough precision to store very accurate numbers very close to 1.
I don't believe the article you're citing is correct, as far as the accuracy of Math.exp goes (modulo the limitations of double). The Math.exp specification guarantees that the result is within 1 ulp of the exact value, which means -- to oversimplify a bit -- a relative error of at most 2^-52, ish.
You use expm1(x) for anything close to 0. Positive or negative.
The reason is because exp(x) of anything close to 0 will be very close to 1. Therefore exp(x) - 1 will suffer from destructive cancellation when x is close to 0.
expm1(x) is properly optimized to avoid this destructive cancellation.
From the mathematical side: If exp is implemented using its Taylor Series, then expm1(x) can be done by simply omitting the first +1.
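A quick way to see the cancellation (a minimal sketch; the printed digits are approximate and the exp branch may vary by an ulp or so across JVMs):

public class Expm1Demo {
    public static void main(String[] args) {
        double x = 1e-10;
        // The exact value of e^x - 1 is about 1.00000000005e-10.

        // exp(x) is a double very close to 1.0, so subtracting 1 cancels most
        // of the significant digits and leaves rounding noise.
        System.out.println(Math.exp(x) - 1);   // roughly 1.000000082740371E-10
        System.out.println(Math.expm1(x));     // roughly 1.00000000005E-10
    }
}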

Rounding Errors?

In my course, I am told:
Continuous values are represented approximately in memory, and therefore computing with floats involves rounding errors. These are tiny discrepancies in bit patterns; thus the test e==f is unsafe if e and f are floats.
Referring to Java.
Is this true? I've used comparison statements with doubles and floats and have never had rounding issues. I have never read anything similar in a textbook. Surely the virtual machine accounts for this?
It is true.
It is an inherent limitation of how floating point values are represented in memory in a finite number of bits.
This program, for instance, prints "false":
public class Main {
    public static void main(String[] args) {
        double a = 0.7;
        double b = 0.9;
        double x = a + 0.1;   // x becomes 0.7999999999999999
        double y = b - 0.1;   // y becomes 0.8
        System.out.println(x == y);
    }
}
Instead of exact comparison with '==' you usually decide on some level of precision and ask if the numbers are "close enough":
System.out.println(Math.abs(x - y) < 0.0001);
This applies to Java just as much as to any other language using floating point. It's inherent in the design of the representation of floating point values in hardware.
More info on floating point values:
What Every Computer Scientist Should Know About Floating-Point Arithmetic
Yes; representing 0.1 exactly in base 2 is like trying to represent 1/3 exactly in base 10.
This is always true. There are some numbers which cannot be represented accurately using floating point representation. Consider, for example, pi. How would you represent a number which has infinitely many digits within finite storage? Therefore, when comparing numbers you should check whether the difference between them is smaller than some epsilon. Also, there are several classes that can help you achieve greater accuracy, such as BigDecimal and BigInteger.
It is right. Note that Java has nothing to do with it, the problem is inherent in floating point math in ANY language.
You can often get away with it on classroom-level problems, but it's not going to work in the real world. Sometimes it won't even work in the classroom.
An incident from long ago, back in school: the teacher of an intro class assigned a final exam problem that was proving a real doozy for many of the better students. Their solutions weren't working and they didn't know why. (I saw this as a lab assistant; I wasn't in the class.) Finally some of them started asking me for help, and some probing revealed the problem: they had never been taught about the inherent inaccuracy of floating point math.
Now, there were two basic approaches to the problem: a brute-force one (which happened to work in this case because it made the same errors every time) and a more elegant one (which would make different errors and not work). Anyone who tried the elegant approach hit a brick wall without having any idea why. I helped a bunch of them and left a comment explaining why, telling the teacher to contact me if he had questions.
Of course, the next semester I heard from him about this, and I basically floored the entire department with a simple little program:
10 X = 3000000
20 X = X + 1
30 If X < X + 1 goto 20
40 Print "X = X + 1"
Despite what every teacher in the department thought, this WILL terminate. The 3 million seed is simply to make it terminate faster. (If you don't know BASIC: there are no gimmicks here, just exhausting the precision of floating point numbers.)
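A rough Java translation of the same idea, using float (a sketch; the loop runs a few million iterations before x + 1f rounds back to x, around 2^24):

public class FloatLimit {
    public static void main(String[] args) {
        float x = 3_000_000f;       // same seed as the BASIC program, just to finish faster
        while (x < x + 1f) {
            x = x + 1f;
        }
        // The loop ends once adding 1 no longer changes x, around 2^24 = 16777216.
        System.out.println("x == x + 1 at x = " + x);
    }
}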
Yes, as other answers have said. I'd also recommend this article about floating point accuracy: Visualizing floats.
Of course it is true. Think about it: any number must be represented in binary.
Picture "1000" as 0.5 or 1/2, that is, 2 ** -1. Then "0100" is 0.25 or 1/4. You can see where I'm going.
How many numbers can you represent in this manner? 2**4. Adding more bits doubles the available space, but it is never infinite. 1/3 or 1/10, or for that matter 1/n for any n that is not a power of 2, cannot be represented exactly.
1/3 could be "0101" (0.3125) or "0110" (0.375). Either value, if you multiply it by 3, will not be 1. Of course you could add special rules, say "when you add 3 times '0101', make it 1"... but this approach won't work in the long run. You can catch some cases, but then what about 1/6 times 2?
It's not a problem of binary representation; any finite representation has numbers that it cannot represent, since the reals are infinite after all.
Most CPUs (and computer languages) use IEEE 754 floating point arithmetic. In this representation, some decimal numbers have no exact binary equivalent, e.g. 0.1. So if you divide 1 by 10 you won't get an exact result. When performing several calculations in a row, the errors add up. Try the following example in Python:
>>> 0.1
0.10000000000000001
>>> 0.1 / 7 * 10 * 7 == 1
False
That's not really what you'd expect mathematically.
By the way:
A common misunderstanding concerning floating point numbers is that the results are not precise and cannot be compared safely. This is only true if you actually work with fractional values. If all your math is in the integer domain, doubles and floats behave exactly like ints and can also be compared safely. They can be safely used as loop counters, for example.
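A small Java sketch of both points (assuming intermediate values stay well below 2^53, above which doubles can no longer represent every integer):

public class WholeNumberDoubles {
    public static void main(String[] args) {
        // Fractional values: the Java equivalent of the Python example above
        System.out.println(0.1 / 7 * 10 * 7 == 1.0);      // false

        // Whole-number values: every integer up to 2^53 is exact in a double,
        // so this comparison is safe
        double sum = 0.0;
        for (double k = 0.0; k < 1000.0; k += 1.0) {
            sum += k;
        }
        System.out.println(sum == 999.0 * 1000.0 / 2.0);  // true
    }
}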
Yes, Java also uses floating point arithmetic.
