Comparing double values [duplicate] - java

I have two double values and I want to check whether they are the same or not. There are two ways I can compare them.
First way:
double a = 0.1;
double b = 0.2;
if (a == b) {
    System.out.println("Double values are same");
}
Another way to compare:
if (Double.compare(a, b) == 0) {
    System.out.println("Double values are same");
}
Which one is the best and most accurate way? Are both ways equivalent for comparing double values?

Double.compare handles NaN and negative zero differently than ==. For Double.compare, -0.0d is not equal to 0.0d.
For example the following code:
public static void main(String... args) {
    double a = -0.0d;
    double b = 0.0d;
    if (a == b) {
        System.out.println("a == b");
    }
    if (Double.compare(a, b) == 0) {
        System.out.println("Double.compare(a, b) == 0");
    }
}
will only print a == b.
The two operations are thus not equivalent. Which one to use depends on your requirements.
The exact rule concerning == (and !=) can be found in the JLS, section 15.21.1:
Floating-point equality testing is performed in accordance with the rules of the IEEE 754 standard:
If either operand is NaN, then the result of == is false but the result of != is true. Indeed, the test x!=x is true if and only if the value of x is NaN.
Positive zero and negative zero are considered equal.
Otherwise, two distinct floating-point values are considered unequal by the equality operators. In particular, there is one value representing positive infinity and one value representing negative infinity; each compares equal only to itself, and each compares unequal to all other values.
Subject to these considerations for floating-point numbers, the following rules then hold for integer operands or for floating-point operands other than NaN:
The value produced by the == operator is true if the value of the left-hand operand is equal to the value of the right-hand operand; otherwise, the result is false.
The value produced by the != operator is true if the value of the left-hand operand is not equal to the value of the right-hand operand; otherwise, the result is false.
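A small snippet (not from the JLS, just an illustration of those rules) showing how NaN and signed zero behave with == and Double.compare:
double nan = Double.NaN;
System.out.println(nan == nan);                 // false: NaN is never == to anything, including itself
System.out.println(nan != nan);                 // true: x != x holds exactly when x is NaN
System.out.println(0.0 == -0.0);                // true: positive and negative zero compare equal under ==
System.out.println(Double.compare(nan, nan));   // 0: Double.compare treats NaN as equal to itself
System.out.println(Double.compare(-0.0, 0.0));  // -1: Double.compare orders -0.0 before 0.0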

Let's analyze this part:
if(Double.compare(a, b) == 0)
Looking at the documentation of Double.compare, a possible implementation could be the following (obviously the real one is more elaborate, but this will do for the sake of the discussion):
return a == b ? 0 : (a < b ? -1 : +1);
Then, you have another comparison, so it becomes:
if((a == b ? 0 : (a < b ? -1 : +1)) == 0)
In the other case, you rely on a simple comparison using ==, that is:
if(a == b)
That said, in terms of accuracy I would guess the result is the same, since the underlying representation of a double does not change and the extra comparison with 0 does not affect the accuracy.
Which is the best?
Well, from the example above, I'd say the simpler one, since you directly compare the values and you are interested only in equality. It's unlikely that you are facing a problem which will benefit from choosing one way over the other for such a comparison.
Anyway, the approach using Double.compare is more suitable for cases where you are interested not only in equality, but also in greater-than and/or less-than relationships.

Both of those options are perfectly valid for checking the equality of two numbers. I personally would say == is nicer, but that's just me.
Double.compare() is better when you want to know more than whether your variables are equal. It returns a little more information than true or false: a negative value if the left side is smaller, and a positive value if the right side is smaller.
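For example (a minimal sketch), the sign of the result can drive ordering logic directly:
double a = 1.5, b = 2.5;
int cmp = Double.compare(a, b);
if (cmp < 0) {
    System.out.println("a is smaller");   // printed here, since 1.5 < 2.5
} else if (cmp > 0) {
    System.out.println("a is larger");
} else {
    System.out.println("a and b are equal");
}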

Related

Comparing doubles for equality is not safe. So what should I do if I want to compare one to 0?

I know that, because of the binary representation of doubles, comparing two doubles for equality is not quite safe. But I need to perform some computation like this:
double a;
//initializing
if (a != 0) { // <----------- HERE
    double b = 2 / a;
    //do other computation
}
throw new RuntimeException();
So comparison of doubles is not safe, but I definitely do not want to divide by 0. What should I do in this case?
I'd use BigDecimal but its performance is not quite acceptable.
Well, if your issue is dividing by zero, the good news is that the check you already have works: if the value isn't actually 0, you can divide by it, even if it's really, really small.
You can use a range comparison, comparing it to the lowest value you want to allow, e.g. a >= 0.0000000000001 or similar (if it's always going to be positive).
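A minimal sketch of that range check (the method name safeDivide and the threshold 1e-13 are only illustrative choices, not canonical values; Math.abs handles the case where the value may be negative):
static final double THRESHOLD = 1e-13; // assumption: smallest magnitude you still consider "non-zero"

static double safeDivide(double a) {
    if (Math.abs(a) >= THRESHOLD) {
        return 2 / a; // safe: a is far enough from zero
    }
    throw new RuntimeException("value too close to zero to divide by");
}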
What about using the static method compare of the Double wrapper class?
double a = 0.0;
// initializing
if (Double.compare(a, 0.0) != 0) {
}

Complex number equals method

I'm making a complex number class in Java like this:
public class Complex {
    public final double real, imag;
    public Complex(double real, double imag) {
        this.real = real;
        this.imag = imag;
    }
    ... methods for arithmetic follow ...
}
I implemented the equals method like this:
@Override
public boolean equals(Object obj) {
    if (obj instanceof Complex) {
        Complex other = (Complex) obj;
        return (
            this.real == other.real &&
            this.imag == other.imag
        );
    }
    return false;
}
But if you override equals, you're supposed to override hashCode too. One of the rules is:
If two objects are equal according to the equals(Object) method, then calling the hashCode method on each of the two objects must produce the same integer result.
Comparing floats and doubles with == does a numeric comparison, so +0.0 == -0.0 and NaN values are inequal to everything including themselves. So I tried implementing the hashCode method to match the equals method like this:
@Override
public int hashCode() {
    long real = Double.doubleToLongBits(this.real); // harmonize NaN bit patterns
    long imag = Double.doubleToLongBits(this.imag);
    if (real == 1L << 63) real = 0; // convert -0.0 to +0.0
    if (imag == 1L << 63) imag = 0;
    long h = real ^ imag;
    return (int) h ^ (int) (h >>> 32);
}
But then I realized that this would work strangely in a hash map if either field is NaN, because this.equals(this) will always be false, but maybe that's not incorrect. On the other hand, I could do what Double and Float do, where the equals methods compare +0.0 != -0.0, but still harmonize the different NaN bit patterns, and let NaN == NaN, so then I get:
@Override
public boolean equals(Object obj) {
    if (obj instanceof Complex) {
        Complex other = (Complex) obj;
        return (
            Double.doubleToLongBits(this.real) ==
            Double.doubleToLongBits(other.real) &&
            Double.doubleToLongBits(this.imag) ==
            Double.doubleToLongBits(other.imag)
        );
    }
    return false;
}
@Override
public int hashCode() {
    long h = (
        Double.doubleToLongBits(real) +
        Double.doubleToLongBits(imag)
    );
    return (int) h ^ (int) (h >>> 32);
}
But if I do that then my complex numbers don't behave like real numbers, where +0.0 == -0.0. But I don't really need to put my Complex numbers in hash maps anyway -- I just want to do the right thing, follow best practices, etc. And now I'm just confused. Can anyone advise me on the best way to proceed?
I've thought about this some more. The problem stems from trying to balance two uses of equals: IEEE 754 arithmetic comparison and Object/hashtable comparison. For floating-point types, the two needs can never be satisfied at once due to NaN. The arithmetic comparison wants NaN != NaN, but the Object/hashtable comparison (equals method) requires this.equals(this).
Double implements the methods correctly according to the contract of Object: its equals treats NaN as equal to NaN, and +0.0 as not equal to -0.0. Both behaviors are the opposite of == comparisons on the primitive float/double types.
java.util.Arrays.equals(double[], double[]) compares elements the same way as Double (NaN == NaN, +0.0 != -0.0).
java.awt.geom.Point2D does it technically wrong. Its equals method compares the coordinates with just ==, so this.equals(this) can be false. Meanwhile, its hashCode method uses doubleToLongBits, so its hashCode can be different for two objects even when equals returns true. The doc makes no mention of the subtleties, which implies the issue is not important: people don't put these types of tuples in hash tables! (And it wouldn't be very effective if they did, because you'd have to get exactly the same numbers to get an equal key.)
In a tuple of floating-points like a complex number class, the simplest correct implementation of equals and hashCode is to not override them at all.
If you want the methods to take the value in account, then the correct thing to do is what Double does: use doubleToLongBits (or floatToLongBits) in both methods. If that's not suitable for arithmetic, a separate method is needed; perhaps equals(Complex other, double epsilon) to compare numbers for equality within a tolerance.
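A sketch of such a tolerance-based method (the name approxEquals and the epsilon handling are only illustrative, not a canonical implementation; it deliberately says nothing about NaN or signed zero):
public boolean approxEquals(Complex other, double epsilon) {
    // true if both components are within epsilon of each other
    return Math.abs(this.real - other.real) <= epsilon
        && Math.abs(this.imag - other.imag) <= epsilon;
}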
Note that you can override equals(Complex other) without interfering with equals(Object other), but that seems too confusing.
The pathological case seems to be 0.0 != -0.0, so I'd make sure that never happens and do the rest of it exactly the way Joshua Bloch tells you in "Effective Java".
Alternatively, the hashCode contract guarantees that hash codes are equal if the objects are equal, but not that hash codes are different if the objects are different. So you could just derive the hash code from this.real alone and accept the collisions. Unless there is a priori knowledge about the distribution of the numbers your library will actually encounter, it may not be possible to do better: you have 128 bits of value and 32 bits of hash, so collisions are inevitable (and harmless, unless you can show that they pessimize your lookups for expected data sets).
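A minimal sketch of that idea (assuming Java 8+ for Double.hashCode, and normalizing -0.0 as suggested above so values that are equal under == hash the same):
@Override
public int hashCode() {
    // normalize -0.0 to 0.0 so it hashes the same as 0.0 (the pathological case above)
    double r = (this.real == 0.0) ? 0.0 : this.real;
    return Double.hashCode(r); // only the real part contributes; collisions on imag are allowed
}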

Java signed zero and boxing

Lately I've been writing a project in Java and noticed a very strange feature of the double/Double implementation. The double type in Java has two zeros, i.e. 0.0 and -0.0 (signed zeros). The strange thing is that:
0.0 == -0.0
evaluates to true, but:
new Double(0.0).equals(new Double(-0.0))
evaluates to false. Does anyone know the reason behind this?
It is all explained in the javadoc:
Note that in most cases, for two instances of class Double, d1 and d2, the value of d1.equals(d2) is true if and only if
d1.doubleValue() == d2.doubleValue()
also has the value true. However, there are two exceptions:
If d1 and d2 both represent Double.NaN, then the equals method returns true, even though Double.NaN==Double.NaN has the value false.
If d1 represents +0.0 while d2 represents -0.0, or vice versa, the equal test has the value false, even though +0.0==-0.0 has the value true.
This definition allows hash tables to operate properly.
Now you might ask why 0.0 == -0.0 is true. In fact they are not strictly identical. For example:
Double.doubleToRawLongBits(0.0) == Double.doubleToRawLongBits(-0.0); //false
is false. However, the JLS requires ("in accordance with the rules of the IEEE 754 standard") that:
Positive zero and negative zero are considered equal.
hence 0.0 == -0.0 is true.
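To see why the javadoc says this definition "allows hash tables to operate properly", here is a small illustration (not from the javadoc; it uses java.util.HashMap):
Map<Double, String> m = new HashMap<>();
m.put(Double.NaN, "nan");
m.put(0.0, "positive zero");
m.put(-0.0, "negative zero");
System.out.println(m.get(Double.NaN)); // "nan": the NaN key is found because Double.equals treats NaN as equal to itself
System.out.println(m.size());          // 3: 0.0 and -0.0 are distinct keys because Double.equals treats them as unequal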
It is important to understand the use of signed zero in the Double class. (Loads of experienced Java programmers don't.)
The short answer is that (by definition) "-0.0 is less than 0.0" in all the methods provided by the Double class (that is, equals(), compare(), compareTo(), etc.).
Double allows all floating point numbers to be "totally ordered on a number line".
Primitives behave the way a user would think of things (a real-world definition): 0d == -0d is true.
The following snippets illustrate the behaviour ...
final double d1 = 0d, d2 = -0d;
System.out.println(d1 == d2); //prints ... true
System.out.println(d1 < d2); //prints ... false
System.out.println(d2 < d1); //prints ... false
System.out.println(Double.compare(d1, d2)); //prints ... 1
System.out.println(Double.compare(d2, d1)); //prints ... -1
There are other posts that are relevant and nicely explain the background ...
1: Why do floating-point numbers have signed zeros?
2: Why is Java's Double.compare(double, double) implemented the way it is?
And a word of caution ...
If you don't know that, in the Double class, "-0.0 is less than 0.0", you may get caught out when using methods like equals() and compare() and compareTo() from Double in logic tests. For example, look at ...
final double d3 = -0d; // try this code with d3 = 0d; for comparison
if (d3 < 0d) {
    System.out.println("Pay 1 million pounds penalty");
} else {
    System.out.println("Good things happen"); // this line prints
}
if (Double.compare(d3, 0d) < 0) { // use Double.compare(d3, -0d) to match the above behaviour
    System.out.println("Pay 1 million pounds penalty"); // this line prints
} else {
    System.out.println("Good things happen");
}
and for equals you might try ... new Double(d3).equals(0d) || new Double(d3).equals(-0d)
By using the == operator you are comparing values. With equals you are comparing objects.

Equals operator for zeros (BigDecimal / Double) in Java

A few interesting observations w.r.t. equals on 0 and 0.0:
new Double(0.0).equals(0) returns false, while new Double(0.0).equals(0.0) returns true.
BigDecimal.ZERO.equals(BigDecimal.valueOf(0.0)) returns false, while BigDecimal.ZERO.equals(BigDecimal.valueOf(0)) returns true.
It looks as though a string comparison is being done in both cases. Could anyone throw some light on this?
Thanks.
BigDecimal 'equals' compares the value and the scale. If you only want to compare values (0 == 0.0) you should use compareTo:
BigDecimal.ZERO.compareTo(BigDecimal.valueOf(0.0)) == 0 //true
BigDecimal.ZERO.compareTo(BigDecimal.valueOf(0)) == 0 //true
See the javadoc.
As for the Double comparison, as explained by other answers, you are comparing a Double with an Integer in new Double(0.0).equals(0), which returns false because the objects have different types. For reference, the code for the equals method in JDK 7 is:
public boolean equals(Object obj) {
    return (obj instanceof Double)
           && (doubleToLongBits(((Double) obj).value) ==
               doubleToLongBits(value));
}
In your case, (obj instanceof Double) is false.
The 0 in your first expression is interpreted as an int, which may be autoboxed into an Integer, but not into a Double. So the types of the two differ, hence they are not equal. OTOH 0.0 is a double, which is autoboxed into a Double, so the two operands are deemed equal.
BigDecimals also contain a scale (i.e. number of digits to the right of the decimal separator dot). BigDecimal.ZERO has the value of "0", so its scale is 0. Hence it is not equal to "0.0", whose scale is 1.
If you want to compare values, use BigDecimal.compareTo:
BigDecimal.ZERO.compareTo(BigDecimal.valueOf(0.0)) == 0
BigDecimal.ZERO.compareTo(BigDecimal.valueOf(0)) == 0
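For illustration of the scale difference described above (a minimal snippet):
System.out.println(BigDecimal.ZERO.scale());                             // 0
System.out.println(BigDecimal.valueOf(0.0).scale());                     // 1
System.out.println(BigDecimal.ZERO.equals(BigDecimal.valueOf(0.0)));     // false: same value, different scale
System.out.println(BigDecimal.ZERO.compareTo(BigDecimal.valueOf(0.0)));  // 0: compareTo ignores the scale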
new Double(0.0).equals(0) is actually boxed as something like this:
new Double(0.0).equals(Integer.valueOf(0))
Double.equals(...) will never return true unless given another Double instance.
new Double(0.0).equals(0); //false
as the argument you passed is an Integer, and the equals() method in the Double class checks whether the argument is an instance of Double using the instanceof operator.
The Double class's equals() method contains:
if (!(argument instanceof Double))
return false;
The argument you passed is an Integer, which is not an instance of Double, so it returns false.
new Double(0.0).equals(0)
This line compares a Double with the value 0.0 against an Integer with the value 0.
BigDecimal.ZERO.equals(BigDecimal.valueOf(0.0))
BigDecimal also compares the scale in its equals operation.
For performance reasons, BigDecimal and BigInteger cache small values:
0 to 15 in the case of BigDecimal (without fractions).
BigDecimal.ZERO will be new BigDecimal(BigInteger.ZERO, 0, 0, 1),
and the valueOf method typically picks up from the cache for 0 to 15.
Try doubleValue() instead of compareTo() if you feel compareTo() is not as beautiful and readable, or if you simply need an alternative, like below:
BigDecimal a = new BigDecimal("0.00");
BigDecimal b = new BigDecimal("0.0");
BigDecimal c = new BigDecimal("0");
if (a.doubleValue() == BigDecimal.ZERO.doubleValue()) {
    System.out.println("a equals");
}
if (b.doubleValue() == BigDecimal.ZERO.doubleValue()) {
    System.out.println("b equals");
}
if (c.doubleValue() == BigDecimal.ZERO.doubleValue()) {
    System.out.println("c equals");
}

Why is Java's Double.compare(double, double) implemented the way it is?

I was looking at the implementation of compare(double, double) in the Java standard library (6). It reads:
public static int compare(double d1, double d2) {
    if (d1 < d2)
        return -1;           // Neither val is NaN, thisVal is smaller
    if (d1 > d2)
        return 1;            // Neither val is NaN, thisVal is larger

    long thisBits = Double.doubleToLongBits(d1);
    long anotherBits = Double.doubleToLongBits(d2);

    return (thisBits == anotherBits ?  0 : // Values are equal
            (thisBits < anotherBits ? -1 : // (-0.0, 0.0) or (!NaN, NaN)
             1));                          // (0.0, -0.0) or (NaN, !NaN)
}
What are the merits of this implementation?
edit: "Merits" was a (very) bad choice of words. I wanted to know how this works.
The explanation is in the comments in the code. Java has double values for both 0.0 and -0.0, as well as "not a number" (NaN). You can't use the simple == operator for these values. Take a peek at the doubleToLongBits() source and at the Javadoc for the Double.equals() method:
Note that in most cases, for two instances of class Double, d1 and d2, the value of d1.equals(d2) is true if and only if
d1.doubleValue() == d2.doubleValue()
also has the value true. However, there are two exceptions:
If d1 and d2 both represent Double.NaN, then the equals method returns true, even though Double.NaN == Double.NaN has the value false.
If d1 represents +0.0 while d2 represents -0.0, or vice versa, the equal test has the value false, even though +0.0 == -0.0 has the value true.
This definition allows hash tables to operate properly.
@Shoover's answer is correct (read it!), but there is a bit more to it than this.
As the javadoc for Double::equals states:
"This definition allows hash tables to operate properly."
Suppose that the Java designers had decided to implement equals(...) and compare(...) with the same semantics as == on the wrapped double instances. This would mean that equals() would always return false for a wrapped NaN. Now consider what would happen if you tried to use a wrapped NaN in a Map or Collection.
List<Double> l = new ArrayList<Double>();
l.add(Double.NaN);
if (l.contains(Double.NaN)) {
    // this won't be executed.
}
Map<Object, String> m = new HashMap<Object, String>();
m.put(Double.NaN, "Hi mum");
if (m.get(Double.NaN) != null) {
    // this won't be executed.
}
Doesn't make a lot of sense, does it?
Other anomalies would exist because -0.0 and +0.0 have different bit patterns but are equal according to ==.
So the Java designers decided (rightly IMO) on the more complicated (but more intuitive) definition for these Double methods that we have today.
The merit is that it's the simplest code that fulfills the specification.
One common characteristic of rookie programmers is to overvalue reading source code and undervalue reading specifications. In this case, the spec:
http://java.sun.com/javase/6/docs/api/java/lang/Double.html#compareTo%28java.lang.Double%29
... makes the behavior and the reason for the behavior (consistency with equals()) perfectly clear.
That implementation makes every non-NaN value compare less than NaN, and -0.0 compare less than 0.0.
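One place this total ordering shows up is sorting: java.util.Arrays.sort(double[]) documents the same ordering, so -0.0 sorts before 0.0 and NaN sorts last. A small illustrative snippet:
double[] values = { 1.0, Double.NaN, -0.0, 0.0, -1.0 };
Arrays.sort(values);
System.out.println(Arrays.toString(values)); // [-1.0, -0.0, 0.0, 1.0, NaN]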
