This question already has answers here:
How to compare two double values in Java?
(7 answers)
Closed 3 years ago.
I have double fields in my class and have to override equals()/hashCode(), so I need to compare double values.
Which is the correct way?
Version 1:
boolean isEqual(double a, double b) {
    return Double.doubleToLongBits(a) == Double.doubleToLongBits(b);
}
Version 2:
boolean isEqual(double a, double b) {
    final double THRESHOLD = .0001;
    return Math.abs(a - b) < THRESHOLD;
}
Or should I avoid the primitive double altogether and use its wrapper type Double? Then I could use Objects.equals(a, b) if a and b are Double.
The recommended way for use in equals()/hashCode() methods[citation needed] is to use Double.doubleToLongBits() and Double.hashCode(), respectively.
This is because the equals/hashCode contract requires two objects to be unequal whenever their hash codes differ; the other direction carries no such restriction.
(Note: It turns out that Double.compare() internally uses doubleToLongBits(), but that is not specified by its API, so I won't recommend relying on it. On the other hand, Double.hashCode() does specify that it uses doubleToLongBits().)
Practical example:
@Override
public boolean equals(Object obj) {
    if (obj == null || getClass() != obj.getClass())
        return false;
    Vector2d other = (Vector2d) obj;
    return Double.doubleToLongBits(x) == Double.doubleToLongBits(other.x) &&
           Double.doubleToLongBits(y) == Double.doubleToLongBits(other.y);
}
@Override
public int hashCode() {
    int hash = 0x811C9DC5;   // FNV-1a offset basis
    hash ^= Double.hashCode(x);
    hash *= 0x01000193;      // FNV-1a prime
    hash ^= Double.hashCode(y);
    hash *= 0x01000193;
    return hash;
}
double values should not be used as components of object equality, and therefore of its hash code.
This is because of the inherent imprecision of floating-point arithmetic, and because double saturates artificially at +/-Infinity.
To illustrate this problem:
System.out.println(Double.compare(0.1d + 0.2d, 0.3d));
System.out.println(Double.compare(Math.pow(3e27d, 127d), 17e256d / 7e-128d));
prints:
1
0
... which translates to the following 2 false statements:
0.1 + 0.2 > 0.3
(3 * 10^27)^127 == 17 * 10^256 / (7 * 10^-128)
So your software will make you act on two equal numbers as if they were unequal, or on two very large or very small unequal numbers as if they were equal.
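To make both effects concrete, here is a minimal sketch of the rounding error and the saturation at Infinity; the class name is illustrative only.
public class DoubleQuirks {
    public static void main(String[] args) {
        // Rounding error: the mathematically equal values compare as unequal.
        System.out.println(0.1d + 0.2d);            // 0.30000000000000004
        System.out.println(0.1d + 0.2d == 0.3d);    // false

        // Saturation: two mathematically different huge values both overflow
        // to Infinity, so Double.compare reports them as equal.
        System.out.println(Math.pow(3e27d, 127d));  // Infinity
        System.out.println(17e256d / 7e-128d);      // Infinity
    }
}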
This question already has answers here:
How to compare two double values in Java?
(7 answers)
Closed 7 years ago.
I have two double values and want to check whether they are the same. I know two ways to compare them.
First way :
double a = 0.1;
double b = 0.2;
if (a == b) {
    System.out.println("Double values are same");
}
Another way to compare :
if (Double.compare(a, b) == 0) {
    System.out.println("Double values are same");
}
Which one is the better and more accurate way? Do both compare double values in the same way?
Double.compare handles NaN and negative zero differently than ==. For Double.compare, -0.0d is not equal to 0.0d, and NaN is considered equal to NaN.
For example the following code:
public static void main(String... args) {
    double a = -0.0d;
    double b = 0.0d;
    if (a == b) {
        System.out.println("a == b");
    }
    if (Double.compare(a, b) == 0) {
        System.out.println("Double.compare(a, b) == 0");
    }
}
will only print a == b.
The two operations are thus not equivalent. Using one over the other depends on your requirements.
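To make the NaN side of the difference concrete as well, here is a small sketch; the class name is illustrative only.
public class NaNCompare {
    public static void main(String[] args) {
        double nan = Double.NaN;
        System.out.println(nan == nan);                    // false: NaN is never == anything
        System.out.println(Double.compare(nan, nan) == 0); // true: compare treats NaN as equal to itself
    }
}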
The exact rule concerning == (and !=) can be found in the JLS, section 15.21.1:
Floating-point equality testing is performed in accordance with the rules of the IEEE 754 standard:
If either operand is NaN, then the result of == is false but the result of != is true. Indeed, the test x!=x is true if and only if the value of x is NaN.
Positive zero and negative zero are considered equal.
Otherwise, two distinct floating-point values are considered unequal by the equality operators. In particular, there is one value representing positive infinity and one value representing negative infinity; each compares equal only to itself, and each compares unequal to all other values.
Subject to these considerations for floating-point numbers, the following rules then hold for integer operands or for floating-point operands other than NaN:
The value produced by the == operator is true if the value of the left-hand operand is equal to the value of the right-hand operand; otherwise, the result is false.
The value produced by the != operator is true if the value of the left-hand operand is not equal to the value of the right-hand operand; otherwise, the result is false.
Let's analyze this part:
if(Double.compare(a, b) == 0)
By looking at the documentation of Double.compare, a possible implementation could be the following (the real one is obviously more optimized, but that's fine for the sake of the discussion):
return a == b ? 0 : (a < b ? -1 : +1);
Then, you have another comparison, so it becomes:
if((a == b ? 0 : (a < b ? -1 : +1)) == 0)
In the other case, you rely on a simple comparison using ==, that is:
if(a == b)
That said, in terms of accuracy I guess the result is the same, for the underlying representation of a double does not change and the comparison with 0 does not seem to affect the accuracy.
Which is the best?
Well, from the example above, I'd say the simpler one: compare the values directly if you are only interested in equality, even though it's unlikely you are facing a problem that will noticeably benefit from choosing one over the other.
Anyway, the approach using Double.compare is more suitable when you are not only interested in equality, but also in the concepts of greater than and/or less than, for example when sorting (see the sketch below).
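For instance, Double.compare imposes a total order you can sort by, something == cannot give you. A minimal sketch, with an illustrative class name:
import java.util.Arrays;

public class SortDoubles {
    public static void main(String[] args) {
        Double[] values = { 3.5, Double.NaN, -0.0, 0.0, 1.0 };
        // Double.compare imposes a total order: -0.0 before 0.0, NaN last.
        Arrays.sort(values, Double::compare);
        System.out.println(Arrays.toString(values)); // [-0.0, 0.0, 1.0, 3.5, NaN]
    }
}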
Both of those options are perfectly valid for checking the equality of two numbers. I personally would say the == is nicer, but that's just me.
Double.compare() is better when you want to know how your variables differ. It returns a little more information than true or false: a negative value if the left side is smaller, and a positive one if the right side is smaller.
I have defined hashCode() for my class, with a lengthy list of class attributes.
Per the contract, I also need to implement equals(), but is it possible to implement it simply by comparing hashCode() inside, to avoid all the extra code? Are there any dangers of doing so?
e.g.
@Override
public int hashCode()
{
    return new HashCodeBuilder(17, 37)
            .append(field1)
            .append(field2)
            // etc.
            // ...
            .toHashCode();
}
@Override
public boolean equals(Object that) {
    // Quick special cases
    if (that == null) {
        return false;
    }
    if (this == that) {
        return true;
    }
    // Now consider all main cases via hashCode()
    return (this.hashCode() == that.hashCode());
}
Don't do that.
The contract for hashCode() says that two objects that are equal must have the same hashcode. It doesn't guarantee anything for objects that are not equal. What this means is that you could have two objects that are completely different but, by chance, happen to have the same hashcode, thus breaking your equals().
It is not hard to get hashcode collisions between strings. Consider the core loop from the JDK 8 String.hashCode() implementation:
for (int i = 0; i < value.length; i++) {
    h = 31 * h + val[i];
}
Where the initial value for h is 0 and val[i] is the numerical value for the character in the ith position in the given string. If we take, for example, a string of length 3, this loop can be written as:
h = 31 * (31 * val[0] + val[1]) + val[2];
If we choose an arbitrary string, such as "abZ", we have:
h("abZ") = 31 * (31 * 'a' + 'b') + 'Z'
h("abZ") = 31 * (31 * 97 + 98) + 90
h("abZ") = 96345
Then we can subtract 1 from val[1] while adding 31 to val[2], which gives us the string "aay":
h("aay") = 31 * (31 * 'a' + 'a') + 'y'
h("aay") = 31 * (31 * 97 + 97) + 121
h("aay") = 96345
Resulting in a collision: h("abZ") == h("aay") == 96345.
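You can verify the collision directly; a tiny sketch (class name illustrative only):
public class HashCollision {
    public static void main(String[] args) {
        System.out.println("abZ".hashCode());    // 96345
        System.out.println("aay".hashCode());    // 96345
        System.out.println("abZ".equals("aay")); // false, despite the identical hash codes
    }
}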
Also, note that your equals() implementation does not check if you are comparing objects of the same type. So, supposing you had this.hashCode() == 96345, the following statement would return true:
yourObject.equals(Integer.valueOf(96345))
Which is probably not what you want.
It is definitely not safe to just compare the hashCode() of your objects.
Your objects can have more distinct states than there are hash codes: a hash code is an int, so it is limited to 2^32 = 4,294,967,296 possible values, but your object will probably have more state than a single int field can hold.
So, by the pigeonhole principle, there must be two different objects (according to equals) that share the same hash code.
But of course, you can first compare the hash codes for performance reasons (if hash code computation is faster than field comparison): if the hash codes are not equal, the objects are unequal too, so you can safely return false immediately, as sketched below.
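A minimal sketch of that pattern, assuming a class with two fields; the class and field names are placeholders, and the hash is cached so the pre-check is actually cheap.
import java.util.Objects;

final class MyClass {
    private final String field1;
    private final String field2;
    private final int hash; // cached so the equals() pre-check costs nothing extra

    MyClass(String field1, String field2) {
        this.field1 = field1;
        this.field2 = field2;
        this.hash = Objects.hash(field1, field2);
    }

    @Override
    public int hashCode() {
        return hash;
    }

    @Override
    public boolean equals(Object that) {
        if (this == that) {
            return true;
        }
        if (!(that instanceof MyClass)) {
            return false;
        }
        MyClass other = (MyClass) that;
        if (this.hash != other.hash) {
            return false; // different hash codes guarantee the objects are unequal
        }
        // Equal hash codes prove nothing; the fields must still be compared.
        return Objects.equals(field1, other.field1)
                && Objects.equals(field2, other.field2);
    }
}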
I'm making a complex number class in Java like this:
public class Complex {
    public final double real, imag;

    public Complex(double real, double imag) {
        this.real = real;
        this.imag = imag;
    }

    // ... methods for arithmetic follow ...
}
I implemented the equals method like this:
@Override
public boolean equals(Object obj) {
    if (obj instanceof Complex) {
        Complex other = (Complex) obj;
        return (
            this.real == other.real &&
            this.imag == other.imag
        );
    }
    return false;
}
But if you override equals, you're supposed to override hashCode too. One of the rules is:
If two objects are equal according to the equals(Object) method, then calling the hashCode method on each of the two objects must produce the same integer result.
Comparing floats and doubles with == does a numeric comparison, so +0.0 == -0.0 and NaN values are unequal to everything, including themselves. So I tried implementing the hashCode method to match the equals method like this:
@Override
public int hashCode() {
    long real = Double.doubleToLongBits(this.real); // harmonize NaN bit patterns
    long imag = Double.doubleToLongBits(this.imag);
    if (real == 1L << 63) real = 0; // convert -0.0 to +0.0
    if (imag == 1L << 63) imag = 0;
    long h = real ^ imag;
    return (int) h ^ (int) (h >>> 32);
}
But then I realized that this would work strangely in a hash map if either field is NaN, because this.equals(this) will always be false, but maybe that's not incorrect. On the other hand, I could do what Double and Float do, where the equals methods compare +0.0 != -0.0, but still harmonize the different NaN bit patterns, and let NaN == NaN, so then I get:
@Override
public boolean equals(Object obj) {
    if (obj instanceof Complex) {
        Complex other = (Complex) obj;
        return (
            Double.doubleToLongBits(this.real) ==
            Double.doubleToLongBits(other.real) &&
            Double.doubleToLongBits(this.imag) ==
            Double.doubleToLongBits(other.imag)
        );
    }
    return false;
}

@Override
public int hashCode() {
    long h = (
        Double.doubleToLongBits(real) +
        Double.doubleToLongBits(imag)
    );
    return (int) h ^ (int) (h >>> 32);
}
But if I do that then my complex numbers don't behave like real numbers, where +0.0 == -0.0. But I don't really need to put my Complex numbers in hash maps anyway -- I just want to do the right thing, follow best practices, etc. And now I'm just confused. Can anyone advise me on the best way to proceed?
I've thought about this some more. The problem stems from trying to balance two uses of equals: IEEE 754 arithmetic comparison and Object/hashtable comparison. For floating-point types, the two needs can never be satisfied at once due to NaN. The arithmetic comparison wants NaN != NaN, but the Object/hashtable comparison (equals method) requires this.equals(this).
Double implements the methods correctly according to the contract of Object, so for it NaN equals NaN. It also treats +0.0 and -0.0 as unequal. Both behaviors are the opposite of comparisons on the primitive float/double types.
java.util.Arrays.equals(double[], double[]) compares elements the same way as Double (NaN == NaN, +0.0 != -0.0).
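A small sketch demonstrating both behaviors (class name illustrative only):
import java.util.Arrays;

public class WrapperSemantics {
    public static void main(String[] args) {
        System.out.println(Double.valueOf(Double.NaN).equals(Double.NaN)); // true
        System.out.println(Double.NaN == Double.NaN);                      // false

        System.out.println(Double.valueOf(0.0).equals(-0.0));              // false
        System.out.println(0.0 == -0.0);                                   // true

        // Arrays.equals(double[], double[]) follows the Double semantics:
        System.out.println(Arrays.equals(new double[] { Double.NaN },
                                         new double[] { Double.NaN }));    // true
        System.out.println(Arrays.equals(new double[] { 0.0 },
                                         new double[] { -0.0 }));          // false
    }
}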
java.awt.geom.Point2D does it technically wrong. Its equals method compares the coordinates with just ==, so this.equals(this) can be false. Meanwhile, its hashCode method uses doubleToLongBits, so its hashCode can be different for two objects even when equals returns true. The doc makes no mention of the subtleties, which implies the issue is not important: people don't put these types of tuples in hash tables! (And it wouldn't be very effective if they did, because you'd have to get exactly the same numbers to get an equal key.)
In a tuple of floating-points like a complex number class, the simplest correct implementation of equals and hashCode is to not override them at all.
If you want the methods to take the value into account, then the correct thing to do is what Double does: use doubleToLongBits (or floatToLongBits) in both methods. If that's not suitable for arithmetic, a separate method is needed, perhaps equals(Complex other, double epsilon) to compare numbers for equality within a tolerance (see the sketch below).
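A sketch of such a tolerance-based method, to be added to the Complex class from the question; an absolute tolerance is assumed here, and a relative one may suit your arithmetic better.
// Not an override of Object.equals; a separate comparison within a tolerance.
public boolean equals(Complex other, double epsilon) {
    return Math.abs(this.real - other.real) <= epsilon
        && Math.abs(this.imag - other.imag) <= epsilon;
}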
Note that you can override equals(Complex other) without interfering with equals(Object other), but that seems too confusing.
The pathological case seems to be 0.0 != -0.0, so I'd make sure that never happens and do the rest of it exactly the way Joshua Bloch tells you in "Effective Java".
Alternatively, the hashCode contract guarantees that hash codes are equal if the objects are equal, but not that hash codes are different if the objects are different. So you could just use a hash of this.real alone as the hash code and accept the collisions. Unless there is a priori knowledge about the distribution of the numbers your library will actually encounter, it may not be possible to do better: you have 128 bits of values and 32 bits of hash, so collisions are inevitable (and harmless, unless you can show that they pessimize your lookups for expected data sets).
This question already has answers here:
How to test if a double is an integer
(18 answers)
Closed 9 years ago.
Specifically in Java, how can I determine if a double is an integer? To clarify, I want to know how I can determine that the double does not in fact contain any fractions or decimals.
I am concerned essentially with the nature of floating-point numbers. The methods I thought of (and the ones I found via Google) follow basically this format:
double d = 1.0;
if ((int) d == d) {
    // do stuff
} else {
    // ...
}
I'm certainly no expert on floating-point numbers and how they behave, but I am under the impression that because the double stores only an approximation of the number, the if() conditional will only enter some of the time (perhaps even a majority of the time). But I am looking for a method which is guaranteed to work 100% of the time, regardless of how the double value is stored in the system.
Is this possible? If so, how and why?
double can store an exact representation of certain values, such as small integers and (negative or positive) powers of two.
If it does indeed store an exact integer, then ((int)d == d) works fine. And indeed, for any 32-bit integer i, (int)((double)i) == i since a double can exactly represent it.
Note that for very large numbers (greater than about 2^52 in magnitude), a double will always appear to be an integer, as it will no longer be able to store any fractional part. This has implications if you are trying to cast to a Java long, for instance.
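A short sketch of that effect (class name illustrative only):
public class BigDoubles {
    public static void main(String[] args) {
        double big = 1e16; // above 2^52, so adjacent doubles are more than 1 apart
        System.out.println(big + 0.5 == big); // true: the fractional part cannot be stored
        System.out.println(big % 1 == 0);     // true: every double this large looks like an integer
        System.out.println((long) big);       // 10000000000000000, still fits in a long...
        System.out.println((long) 1e20);      // ...but this saturates to Long.MAX_VALUE
    }
}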
How about
if(d % 1 == 0)
This works because all integers are 0 modulo 1.
Edit: To all those who object to this on the grounds of it being slow, I profiled it and found it to be about 3.5 times slower than casting. Unless this is in a tight loop, I'd say this is a preferable way of working it out, because it's extremely clear what you're testing and doesn't require any thought about the semantics of integer casting.
I profiled it by compiling the following two programs with javac and running each under time:
class modulo {
    public static void main(String[] args) {
        long successes = 0;
        for (double i = 0.0; i < Integer.MAX_VALUE; i += 0.125) {
            if (i % 1 == 0)
                successes++;
        }
        System.out.println(successes);
    }
}
VS
class cast {
    public static void main(String[] args) {
        long successes = 0;
        for (double i = 0.0; i < Integer.MAX_VALUE; i += 0.125) {
            if ((int) i == i)
                successes++;
        }
        System.out.println(successes);
    }
}
Both printed 2147483647 at the end.
Modulo took 189.99s on my machine - Cast took 54.75s.
if (new BigDecimal(d).scale() <= 0) {
    // do stuff
}
Your method of using if((int)d == d) should always work for any 32-bit integer. To make it work up to 64 bits, you can use if((long)d == d), which is effectively the same except that it accounts for larger magnitude numbers. If d is greater than the maximum long value (or less than the minimum), then it is guaranteed to be an exact integer. A function that tests whether d is an integer can then be constructed as follows:
boolean isInteger(double d) {
    if (d > Long.MAX_VALUE || d < Long.MIN_VALUE) {
        return true;
    } else if ((long) d == d) {
        return true;
    } else {
        return false;
    }
}
If a floating point number is an integer, then it is an exact representation of that integer.
Doubles are a binary fraction with a binary exponent. You cannot be certain that an integer can be exactly represented as a double, especially not if it has been calculated from other values.
Hence the normal way to approach this is to say that it needs to be "sufficiently close" to an integer value, where sufficiently close typically means "within X%" (where X is rather small).
I.e. if X is 1 then 1.98 and 2.02 would both be considered to be close enough to be 2. If X is 0.01 then it needs to be between 1.9998 and 2.0002 to be close enough.
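A minimal sketch of that idea using a relative tolerance; the method name and the treatment of values near zero are choices of this sketch, not a standard API.
public class NearlyInteger {
    // Returns true if d is within the given relative tolerance of its nearest integer.
    static boolean isNearlyInteger(double d, double relativeTolerance) {
        double nearest = Math.rint(d); // nearest integer-valued double
        if (nearest == 0.0) {
            return Math.abs(d) <= relativeTolerance; // avoid dividing by zero
        }
        return Math.abs(d - nearest) / Math.abs(nearest) <= relativeTolerance;
    }

    public static void main(String[] args) {
        System.out.println(isNearlyInteger(1.99, 0.01)); // true  (0.5% away from 2)
        System.out.println(isNearlyInteger(2.05, 0.01)); // false (2.5% away from 2)
        System.out.println(isNearlyInteger(3.0, 0.01));  // true  (exactly an integer)
    }
}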
This question already has answers here:
Why is 128==128 false but 127==127 is true when comparing Integer wrappers in Java?
(8 answers)
Closed 6 years ago.
The following code seemed really confusing to me since it produced two different outputs. The code was tested on JDK 1.7.
public class NotEq {

    public static void main(String[] args) {
        ver1();
        System.out.println();
        ver2();
    }

    public static void ver1() {
        Integer a = 128;
        Integer b = 128;
        if (a == b) {
            System.out.println("Equal Object");
        }
        if (a != b) {
            System.out.println("Different objects");
        }
        if (a.equals(b)) {
            System.out.println("Meaningfully equal.");
        }
    }

    public static void ver2() {
        Integer i1 = 127;
        Integer i2 = 127;
        if (i1 == i2) {
            System.out.println("Equal Object");
        }
        if (i1 != i2) {
            System.out.println("Different objects");
        }
        if (i1.equals(i2)) {
            System.out.println("Meaningfully equal");
        }
    }
}
Output:
[ver1 output]
Different objects
Meaningfully equal.
[ver2 output]
Equal Object
Meaningfully equal
Why does the == and != testing produce different results for ver1() and ver2(), given that both numbers are far less than Integer.MAX_VALUE? Can it be concluded that == checking for numbers greater than 127 (for wrapper classes like Integer, as shown in the code) is a total waste of time?
Integers are cached for values between -128 and 127, so Integer i = 127 will always return the same reference. Integer j = 128 will not necessarily do so. You will then need to use equals() to test for equality of the underlying int values.
This is part of the Java Language Specification:
If the value p being boxed is true, false, a byte, or a char in the range \u0000 to \u007f, or an int or short number between -128 and 127 (inclusive), then let r1 and r2 be the results of any two boxing conversions of p. It is always the case that r1 == r2.
But 2 calls to Integer j = 128 might return the same reference (not guaranteed):
Less memory-limited implementations might, for example, cache all char and short values, as well as int and long values in the range of -32K to +32K.
Because small integers are interned in Java, and you tried the numbers on different sides of the "smallness" limit.
There is an Integer object cache from -128 up to 127 by default. The upper bound of the cache can be controlled with the VM option -XX:AutoBoxCacheMax=<size>.
You are using this cache when you use the form:
Integer i1 = 127;
or
Integer i1 = Integer.valueOf(127);
But when you use
Integer i1 = new Integer(127);
then you're guaranteed to get a new uncached object. In the latter case both versions print out the same results. Using the cached versions they may differ.
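A small sketch of the three forms; the second result assumes the default cache size, and the class name is illustrative only.
public class BoxingCache {
    public static void main(String[] args) {
        Integer a = Integer.valueOf(127);
        Integer b = Integer.valueOf(127);
        System.out.println(a == b);   // true: both references come from the cache

        Integer c = Integer.valueOf(128);
        Integer d = Integer.valueOf(128);
        System.out.println(c == d);   // false with the default cache size (not guaranteed either way)

        Integer e = new Integer(127); // deprecated since Java 9; shown only for illustration
        Integer f = new Integer(127);
        System.out.println(e == f);   // false: 'new' always creates a fresh, uncached object
    }
}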
Java caches integers from -128 to 127. That is why the objects ARE the same.
I think the == and != operators, when dealing with primitives, will work the way you're currently using them, but with objects (Integer vs. int) you'll want to test with the .equals() method.
I'm not certain about this, but with objects, == tests whether the two references point to the same object, while .equals() tests whether the two objects are equivalent in value (for custom objects, that method needs to be created/overridden).