Effective Java Item 9: overriding hashCode example - Java

I was reading Effective Java Item 9 and decided to run the example code myself. But it behaves slightly differently depending on how I insert a new object, and I don't understand what exactly is going on inside. The PhoneNumber class looks like this:
public class PhoneNumber {
    private final short areaCode;
    private final short prefix;
    private final short lineNumber;

    public PhoneNumber(int areaCode, int prefix, int lineNumber) {
        this.areaCode = (short) areaCode;
        this.prefix = (short) prefix;
        this.lineNumber = (short) lineNumber;
    }

    @Override public boolean equals(Object o) {
        if (o == this) return true;
        if (!(o instanceof PhoneNumber)) return false;
        PhoneNumber pn = (PhoneNumber) o;
        return pn.lineNumber == lineNumber && pn.prefix == prefix && pn.areaCode == areaCode;
    }
}
Then, as in the book, I tried:
public static void main(String[] args) {
    HashMap<PhoneNumber, String> phoneBook = new HashMap<PhoneNumber, String>();
    phoneBook.put(new PhoneNumber(707, 867, 5309), "Jenny");
    System.out.println(phoneBook.get(new PhoneNumber(707, 867, 5309)));
}
This prints "null", and the book explains why: HashMap has an optimization that caches the hash code associated with each entry and doesn't check for object equality if the hash codes don't match. That makes sense to me. But when I do this:
public static void main(String[] args) {
    PhoneNumber p1 = new PhoneNumber(707, 867, 5309);
    phoneBook.put(p1, "Jenny");
    System.out.println(phoneBook.get(new PhoneNumber(707, 867, 5309)));
}
Now it returns "Jenny". Can you explain why it didn't fail in the second case?

The observed behaviour might depend on the Java version and vendor used to run the application: since the general contract of Object.hashCode() is violated, the result is implementation-dependent.
A possible explanation (taking one possible implementation of HashMap):
The HashMap class, in its internal implementation, puts objects (keys) into different buckets based on their hash code. When you query an element or check whether a key is contained in the map, the proper bucket is first looked up based on the hash code of the queried key. Inside a bucket, objects are checked sequentially, and inside a bucket only the equals() method is used to compare elements.
So if you do not override Object.hashCode(), it is non-deterministic whether 2 different objects produce default hash codes that map to the same bucket. If by chance they "point" to the same bucket, you will still be able to find the key, because the equals() method says they are equal. If by chance they "point" to 2 different buckets, you will not find the key even though the equals() method says they are equal.
hashCode() must be overridden to be consistent with your overridden equals() method. Only then is the proper, expected and consistent behaviour of HashMap guaranteed.
Read the javadoc of Object.hashCode() for the contract that you must not violate. The main point is that if equals() returns true for another object, then hashCode() must return the same value for both of these objects.
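In this case that means adding a hashCode() override to PhoneNumber that uses the same fields as equals(). A minimal sketch, essentially the recipe the book itself uses:
    @Override public int hashCode() {
        int result = 17;
        result = 31 * result + areaCode;
        result = 31 * result + prefix;
        result = 31 * result + lineNumber;
        return result;
    }
With this in place, both versions of main print "Jenny" regardless of which JVM you run them on.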

Can you explain why it didn't fail in the second case?
In a nutshell, it is not guaranteed to fail. The two objects in the second example could end up having the same hash code (purely by coincidence or, more likely, due to compiler optimizations or due to how the default hashCode() works in your JVM). This would lead to the behaviour you describe.
For what it's worth, I cannot reproduce this behaviour with my compiler/JVM.

In your case, by coincidence, the JVM produced the same hashCode for both objects. When I ran your code on my JVM, it printed null in both cases. So your problem comes from the JVM, not the code.
It is better to override hashCode() every time you override the equals() method.
I haven't read Effective Java; I read SCJP by Kathy Sierra. If you need more details, you can read that book. It's nice.

Your last code snippet does not compile because you haven't declared phoneBook.
Both main methods should work exactly the same. There is a 1 in 16 chance that it will print Jenny, because a newly created HashMap has a default size of 16. In detail, that means that only the lower 4 bits of the hash code are checked; if they are equal, the equals method is used.
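For illustration, the bucket index computation in a typical HashMap implementation looks roughly like the sketch below (the exact bit-spreading step varies by JDK version; this is a sketch, not the literal JDK source):
    // Rough sketch of how a hash code is reduced to a bucket index.
    // With the default capacity of 16, only the low 4 bits of the
    // (spread) hash code decide the bucket.
    static int bucketIndex(int hashCode, int capacity) {
        int spread = hashCode ^ (hashCode >>> 16); // mix high bits into low bits
        return spread & (capacity - 1);            // capacity is a power of two
    }
Two identity hash codes that happen to agree in those low bits land in the same bucket, and then equals() decides the lookup, which is why the second example can appear to work.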

Related

How do I prove that Object.hashCode() can produce same hash code for two different objects in Java?

I had a discussion with an interviewer regarding the internal implementation of Java HashMaps and how it would behave if we override equals() but not the hashCode() method for an Employee<Emp_ID, Emp_Name> object.
I was told that hashCode for two different objects would never be the same for the default Object.hashCode() implementation, unless we overrode hashCode() ourselves.
From what I remembered, I told him that the Java hashCode contract says that two different objects "may" have the same hashCode(), not that they "must".
According to my interviewer, the default Object.hashCode() never returns the same hash code for two different objects. Is this true?
Is it even remotely possible to write code that demonstrates this? From what I understand, Object.hashCode() can produce 2^30 unique values; how does one produce a collision, with such a low probability of collision, to demonstrate that two different objects can get the same hashCode() from the Object class's method?
Or is he right: with the default Object.hashCode() implementation, will we never have a collision, i.e. can two different objects never have the same hash code? If so, why don't Java manuals say so explicitly?
How can I write some code to demonstrate this? By demonstrating this, I can also prove that a bucket in a HashMap can contain different hash codes (I tried to show him the debugger where the HashMap was expanded, but he told me that this was just the logical implementation and not the internal algorithm).
2^30 unique values sounds like a lot but the birthday problem means we don't need many objects to get a collision.
The following program works for me in about a second and gives a collision between objects 196 and 121949. I suspect it will heavily depend on your system configuration, compiler version etc.
As you can see from the implementation of the Hashable class, every instance is guaranteed to have a unique id, and yet there are still collisions.
class HashCollider
{
    static class Hashable
    {
        private static int curr_id = 0;
        public final int id;

        Hashable()
        {
            id = curr_id++;
        }
    }

    public static void main(String[] args)
    {
        final int NUM_OBJS = 200000; // birthday problem suggests this will be plenty
        Hashable[] objs = new Hashable[NUM_OBJS];

        for (int i = 0; i < NUM_OBJS; ++i) objs[i] = new Hashable();

        for (int i = 0; i < NUM_OBJS; ++i)
        {
            for (int j = i + 1; j < NUM_OBJS; ++j)
            {
                if (objs[i].hashCode() == objs[j].hashCode())
                {
                    System.out.println("Objects with IDs " + objs[i].id
                            + " and " + objs[j].id + " collided.");
                    System.exit(0);
                }
            }
        }
        System.out.println("No collision");
    }
}
If you have a large enough heap (assuming 64 bit address space) and objects are small enough (the smallest object size on a 64 bit JVM is 8 bytes), then you will be able to represent more than 2^32 objects that are reachable at the same time. At that point, the objects' identity hashcodes cannot be unique.
However, you don't need a monstrous heap. If you create a large enough pool of objects (e.g. in a large array) and randomly delete and recreate them, it is (I think) guaranteed that you will get a hashcode collision ... if you continue doing this long enough.
The default algorithm for hashcode in older versions of Java is based on the address of the object when hashcode is first called. If the garbage collector moves an object, and another one is created at the original address of the first one, and identityHashCode is called, then the two objects will have the same identity hashcode.
The current (Java 8) default algorithm uses a PRNG. The "birthday paradox" formula will tell you the probability that one object's identity hashcode is the same as at least one of the others'.
The -XX:hashCode=n option that @BastianJ mentioned has the following behavior:
hashCode == 0: Returns a freshly generated pseudo-random number
hashCode == 1: XORs the object address with a pseudo-random number that changes occasionally.
hashCode == 2: The hashCode is 1! (Hence @BastianJ's "cheat" answer.)
hashCode == 3: The hashcode is an ascending sequence number.
hashCode == 4: the bottom 32 bits of the object address
hashCode >= 5: This is the default algorithm for Java 8. It uses Marsaglia's xor-shift PRNG with a thread specific seed.
If you have downloaded the OpenJDK Java 8 source code, you will find the implementation in hotspot/src/share/vm/runtime/synchronizer.cpp. Look for the get_next_hash() function.
So that is another way to prove it. Show him the source code!
Use the Oracle JVM and set -XX:hashCode=2. If I remember correctly, this chooses the default implementation to be "constant 1". Just for the purpose of proving you're right.
I have little to add to Michael's answer (+1) except a bit of code golfing and statistics.
The Wikipedia article on the Birthday problem that Michael linked to has a nice table of the number of events necessary to get a collision, with a desired probability, given a value space of a particular size. For example, Java's hashCode has 32 bits, giving a value space of 4 billion. To get a collision with a probability of 50%, about 77,000 events are necessary.
Here's a simple way to find two instances of Object that have the same hashCode:
static int findCollision() {
    Map<Integer, Object> map = new HashMap<>();
    Object n, o;
    do {
        n = new Object();
        o = map.put(n.hashCode(), n);
    } while (o == null);
    assert n != o && n.hashCode() == o.hashCode();
    return map.size() + 1;
}
This returns the number of attempts it took to get a collision. I ran this a bunch of times and generated some statistics:
System.out.println(
    IntStream.generate(HashCollisions::findCollision)
             .limit(1000)
             .summaryStatistics());
IntSummaryStatistics{count=1000, sum=59023718, min=635, average=59023.718000, max=167347}
This seems quite in line with the numbers from the Wikipedia table. Incidentally, this took only about 10 seconds to run on my laptop, so this is far from a pathological case.
You were right in the first place, but it bears repeating: hash codes are not unique!
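For reference, the ~77,000 figure from the birthday approximation can be checked in a couple of lines (this assumes identity hash codes are uniformly distributed over the full 32-bit space, which is only approximately true):
    // Number of draws needed for a ~50% collision probability over a
    // space of 2^32 values: sqrt(2 * ln(2) * 2^32) ~= 77163.
    long space = 1L << 32;
    System.out.println(Math.sqrt(2 * Math.log(2) * space)); // prints roughly 77163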

Implementing a comparator of Java object identities [duplicate]

I have a token class that uses object identity (as in, equals just returns tokenA == tokenB). I'd like to use it in a TreeSet. This means that I need to implement a comparison between two tokens that is compatible with reference equality. I don't care about the specific implementation, so long as it is consistent with equals and fulfills the contract (as per TreeSet: "Note that the ordering maintained by a set (whether or not an explicit comparator is provided) must be consistent with equals if it is to correctly implement the Set interface.")
Note: these tokens are created on multiple threads, and may be compared on different threads than they were created on.
What would be the best method to go about doing so?
Ideas I've tried:
Using System.identityHashCode - the problem with this is that it is not guaranteed that two different objects will always have a different hashcode. And due to the birthday paradox you only need about 77k tokens before two will collide (assuming that System.identityHashCode is uniformly distributed over the entire 32-bit range, which may not be true...)
Using a comparator over the default Object.toString method for each token. Unfortunately, under the hood this just uses the hash code (same thing as above).
Using an int or long unique value (read: static counter -> instance variable). This bloats the size, and makes multithreading a pain (not to mention making object creation effectively single-threaded) (AtomicInteger / AtomicLong for the static counter helps somewhat, but it's the size bloat that's more annoying here).
Using System.identityHashCode and a static disambiguation map for any collisions. This works, but is rather complex. Also, Java by default doesn't have a ConcurrentWeakValueHashMultiMap (isn't that a mouthful), which means that I've got to pull in an external dependency (or write my own - probably using something similar to this) to do so, or suffer a (slow) memory leak, or use finalizers (ugh). (And I don't know if anyone implements such a thing...)
By the way, I can't simply punt the problem and assume unique objects have unique hash codes. That's what I was doing, but the assertion fired in the comparator, and so I dug into it, and, lo and behold, on my machine the following:
import java.util.*;

public class size {
    public static void main(String[] args) {
        Map<Integer, Integer> soFar = new HashMap<>();
        for (int i = 1; i <= 1_000_000; i++) {
            TokenA t = new TokenA();
            int ihc = System.identityHashCode(t);
            if (soFar.containsKey(ihc)) {
                System.out.println("Collision: " + ihc + " # object #" + soFar.get(ihc) + " & " + i);
                break;
            }
            soFar.put(ihc, i);
        }
    }
}

class TokenA {
}
prints
Collision: 2134400190 # object #62355 & 105842
So collisions definitely do exist.
So, any suggestions?
There is no magic:
Here is the problem: tokenA == tokenB compares identity; tokenA.equals(tokenB) compares whatever is defined in .equals() for that class, regardless of identity.
So two objects can have .equals() return true and not be the same object instance; they don't even have to be the same type or share a super type.
There are no shortcuts:
Implementing compareTo() means comparing whatever attributes of the objects you want to compare. You just have to write the code and make it do what you want, but compareTo() is probably not what you want. compareTo() is for ordering: if your two things are not < or > each other in some meaningful way, then Comparable and Comparator<T> are not what you want.
Equals that is identity is simple:
public boolean equals(Object o)
{
    return this == o;
}
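If an ordering consistent with identity is still needed (for the TreeSet in the question), idea (3) from the question - a per-instance sequence number - is the simplest thing that actually satisfies the Comparator contract. A minimal sketch, with hypothetical names, using AtomicLong to keep creation thread-safe:
    import java.util.Comparator;
    import java.util.concurrent.atomic.AtomicLong;

    final class Token {
        private static final AtomicLong NEXT_ID = new AtomicLong();

        // unique, immutable per-instance id; costs one long per token
        final long id = NEXT_ID.getAndIncrement();

        // identity equals/hashCode are inherited from Object unchanged
        static final Comparator<Token> IDENTITY_ORDER =
                Comparator.comparingLong(t -> t.id);
    }
Two distinct tokens never compare as 0, so the ordering is consistent with reference equality; the trade-off is exactly the size bloat the question already mentions.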

subtleties of dealing with equals and hashCode in a Java interface

I'm implementing a value object for these interfaces:
interface FooConsumer
{
    public void setFoo(FooKey key, Foo foo);
    public Foo getFoo(FooKey key);
}

// intent is for this to be a value object with equivalence based on
// name and serial number
interface FooKey
{
    public String getName();
    public int getSerialNumber();
}
and from what I've read (e.g. in Enforce "equals" in an interface and toString(), equals(), and hashCode() in an interface) it looks like the recommendation is to provide an abstract base class, e.g.
abstract class AbstractFooKey implements FooKey
{
    final private String name;
    final private int serialNumber;

    public AbstractFooKey(String name, int serialNumber)
    {
        if (name == null)
            throw new NullPointerException("name must not be null");
        this.name = name;
        this.serialNumber = serialNumber;
    }

    @Override public boolean equals(Object other)
    {
        if (other == this)
            return true;
        if (!(other instanceof FooKey))
            return false;
        FooKey otherKey = (FooKey) other;
        return getName().equals(otherKey.getName())
            && getSerialNumber() == otherKey.getSerialNumber()
            && hashCode() == other.hashCode(); // ***
    }

    @Override public int hashCode()
    {
        return getName().hashCode() + getSerialNumber() * 37;
    }
}
My question is about the last bit I added here, and how to deal with the situation where AbstractFooKey.equals(x) is called with a value for x that is an instance of a class that implements FooKey but does not subclass AbstractFooKey. I'm not sure how to handle this; on the one hand I feel like the semantics of equality should just depend on the name and serialNumber being equal, but it appears like the hashCodes have to be equal as well in order to satisfy the contract for Object.equals().
Should I be:
really lax and just forget about the line marked ***
lax and keep what I have
return false from equals() if the other object is not an AbstractFooKey
be really strict and get rid of the interface FooKey and replace it with a class that is final?
something else?
Document the required semantics as part of the contract.
Ideally you'd actually have a single implementation which is final, which kind of negates the need of an interface for this particular purpose. You may have other reasons for wanting an interface for the type.
The relevant contract requirement actually comes from Object.hashCode: If two objects are equal according to the equals(Object) method, then calling the hashCode method on each of the two objects must produce the same integer result.
You don't need to include hashCode in the equals computation, rather you need to include all properties involved in equals in the hashCode calculation. In this case I'd simply compare serialNumber and name in both equals and hashCode.
Keep it simple unless you have a real reason to complicate it.
Start with a final, immutable class.
If you need an interface, create one to match, and document the semantics and default implementation.
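Concretely, the advice above boils down to dropping the hashCode() comparison from equals() and basing both methods on the same two properties. A sketch of how the relevant part of AbstractFooKey from the question could look:
    @Override public boolean equals(Object other)
    {
        if (other == this)
            return true;
        if (!(other instanceof FooKey))
            return false;
        FooKey otherKey = (FooKey) other;
        return getName().equals(otherKey.getName())
            && getSerialNumber() == otherKey.getSerialNumber();
    }

    @Override public int hashCode()
    {
        return getName().hashCode() + getSerialNumber() * 37;
    }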
For equals and hashCode, there are strict contracts:
Reflexive - It simply means that the object must be equal to itself, which it would be at any given instance; unless you intentionally override the equals method to behave otherwise.
Symmetric - It means that if an object of one class is equal to an object of another class, that other object must also be equal to this one. In other words, one object can not unilaterally decide whether it is equal to another object; two objects, and consequently the classes to which they belong, must bilaterally decide if they are equal or not. They BOTH must agree.
Transitive - It means that if the first object is equal to the second object and the second object is equal to the third object, then the first object is equal to the third object. In other words, if two objects agree that they are equal, and follow the symmetry principle, one of them can not decide to have a similar contract with another object of a different class. All three must agree and follow the symmetry principle for the various permutations of these three classes.
Consistent - It means that if two objects are equal, they must remain equal as long as they are not modified. Likewise, if they are not equal, they must remain non-equal as long as they are not modified. The modification may take place in any one of them or in both of them.
null comparison - It means that any instantiable class object is not equal to null, hence the equals method must return false if a null is passed to it as an argument. You have to ensure that your implementation of the equals method returns false if a null is passed to it as an argument.
Contract for hashCode():
Consistency during same execution - Firstly, it states that the hash code returned by the hashCode method must be consistently the same for multiple invocations during the same execution of the application as long as the object is not modified to affect the equals method.
Hash Code & Equals relationship - The second requirement of the contract is the hashCode counterpart of the requirement specified by the equals method. It simply emphasizes the same relationship - equal objects must produce the same hash code. However, the third point elaborates that unequal objects need not produce distinct hash codes.
(From: Technofundo: Equals and Hash Code)
However, using instanceof in equals is not the right thing to do. Joshua Bloch detailed this in Effective Java, and your concerns regarding the validity of your equals implementation are valid. Most likely, problems arising from using instanceof will violate the transitivity part of the contract when used in connection with descendants of the base class - unless the equals method is made final.
(Detailed a bit better than I could ever do here: Stackoverflow: Any reason to prefer getClass() over instanceof when generating .equals()?)
Also read:
Java API equals contract
Java API hashCode contract
If the equality of a FooKey is such that two FooKeys with the same name and serial number are considered to be equal, then you can remove the line in the equals() method that compares the hash codes.
Or you could leave it in; it does not really matter, assuming that all implementors of the FooKey interface have correct implementations of equals and hashCode. But I would recommend removing it, since otherwise a reader of the code could get the impression that it is there because it makes a difference, when in reality it does not.
You can also get rid of the '*37' in the hashCode method; it is unlikely to contribute to a better hash code distribution.
In terms of your question 3, I would say no, don't do that, unless the equality contract for FooKey is not controlled by you (in which case trying to enforce an equality contract for the interface is questionable anyway).

Is the hashCode function generated by Eclipse any good?

Eclipse source menu has a "generate hashCode / equals method" which generates functions like the one below.
String name;

@Override
public int hashCode()
{
    final int prime = 31;
    int result = 1;
    result = prime * result + ((name == null) ? 0 : name.hashCode());
    return result;
}

@Override
public boolean equals(Object obj)
{
    if (this == obj)
        return true;
    if (obj == null)
        return false;
    if (getClass() != obj.getClass())
        return false;
    CompanyRole other = (CompanyRole) obj;
    if (name == null)
    {
        if (other.name != null)
            return false;
    } else if (!name.equals(other.name))
        return false;
    return true;
}
If I select multiple fields when generating hashCode() and equals() Eclipse uses the same pattern shown above.
I am not an expert on hash functions and I would like to know how "good" the generated hash function is? What are situations where it will break down and cause too many collisions?
You can see the implementation of hashCode function in java.util.ArrayList as
public int hashCode() {
int hashCode = 1;
Iterator<E> i = iterator();
while (i.hasNext()) {
E obj = i.next();
hashCode = 31*hashCode + (obj==null ? 0 : obj.hashCode());
}
return hashCode;
}
That is one such example, and your Eclipse-generated code follows a similar way of implementing it. But if you feel that you have to implement your hashCode on your own, there are some good guidelines given by Joshua Bloch in his famous book Effective Java. I will post the important points from Item 9 of that book:
Store some constant nonzero value, say, 17, in an int variable called result.
For each significant field f in your object (each field taken into account by the equals method, that is), do the following:
a. Compute an int hash code c for the field:
i. If the field is a boolean, compute (f ? 1 : 0).
ii. If the field is a byte, char, short, or int, compute (int) f.
iii. If the field is a long, compute (int) (f ^ (f >>> 32)).
iv. If the field is a float, compute Float.floatToIntBits(f).
v. If the field is a double, compute Double.doubleToLongBits(f), and then hash the resulting long as in step 2.a.iii.
vi. If the field is an object reference and this class’s equals method compares the field by recursively invoking equals, recursively invoke hashCode on the field. If a more complex comparison is required, compute a “canonical representation” for this field and invoke hashCode on the canonical representation. If the value of the field is null, return 0 (or some other constant, but 0 is traditional)
vii. If the field is an array, treat it as if each element were a separate field. That is, compute a hash code for each significant element by applying these rules recursively, and combine these values per step 2.b. If every element in an array field is significant, you can use one of the Arrays.hashCode methods added in release 1.5.
b. Combine the hash code c computed in step 2.a into result as follows:
result = 31 * result + c;
Return result.
When you are finished writing the hashCode method, ask yourself whether equal instances have equal hash codes. Write unit tests to verify your intuition! If equal instances have unequal hash codes, figure out why and fix the problem.
Java language designers and Eclipse seem to follow similar guidelines I suppose. Happy coding. Cheers.
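As an illustration of that recipe, here is a sketch for a hypothetical class with a few different field types (the class and its fields are made up for the example; the matching equals over the same fields is omitted for brevity):
    final class Sample {
        private final boolean active;   // rule 2.a.i
        private final long timestamp;   // rule 2.a.iii
        private final String label;     // rule 2.a.vi (may be null)
        private final int[] codes;      // rule 2.a.vii

        Sample(boolean active, long timestamp, String label, int[] codes) {
            this.active = active;
            this.timestamp = timestamp;
            this.label = label;
            this.codes = codes;
        }

        @Override public int hashCode() {
            int result = 17;
            result = 31 * result + (active ? 1 : 0);
            result = 31 * result + (int) (timestamp ^ (timestamp >>> 32));
            result = 31 * result + (label == null ? 0 : label.hashCode());
            result = 31 * result + java.util.Arrays.hashCode(codes);
            return result;
        }
    }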
Since Java 7 you can use java.util.Objects to write short and elegant methods:
class Foo {
    private String name;
    private String id;

    @Override
    public int hashCode() {
        return Objects.hash(name, id);
    }

    @Override
    public boolean equals(Object obj) {
        if (obj instanceof Foo) {
            Foo right = (Foo) obj;
            return Objects.equals(name, right.name) && Objects.equals(id, right.id);
        }
        return false;
    }
}
Generally it is good, but:
Guava does it somewhat better; I prefer it. [EDIT: It seems that as of JDK 7 Java provides a similar hash function.]
Some frameworks can cause problems when fields are accessed directly instead of through getters/setters, Hibernate for example. For fields that Hibernate loads lazily, it creates a proxy rather than the real object; only calling the getter will make Hibernate fetch the real value from the database.
Yes, it is perfect :) You will see this approach almost everywhere in the Java source code.
It's a standard way of writing hash functions. However, you can improve/simplify it if you have some knowledge about the fields. E.g. you can omit the null check if your class guarantees that the field is never null (this applies to equals() as well). Or you can simply delegate to the field's hash code if only one field is used.
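For example, if name is the only significant field and is guaranteed to be non-null, the generated method can shrink to a sketch like this:
    @Override
    public int hashCode()
    {
        // name is never null, so delegate directly to its hash code
        return name.hashCode();
    }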
I would also like to add a reference to Item 9, in Effective Java 2nd Edition by Joshua Bloch.
Here is the recipe from Item 9: ALWAYS OVERRIDE HASHCODE WHEN YOU OVERRIDE EQUALS
Store some constant nonzero value, say, 17, in an int variable called result.
For each significant field f in your object (each field taken into account by the equals method, that is), do the following:
a. Compute an int hash code c for the field:
i. If the field is a boolean, compute (f ? 1 : 0).
ii. If the field is a byte, char, short, or int, compute (int) f.
iii. If the field is a long, compute (int) (f ^ (f >>> 32)).
iv. If the field is a float, compute Float.floatToIntBits(f).
v. If the field is a double, compute Double.doubleToLongBits(f), and then hash the resulting long as in step 2.a.iii.
vi. If the field is an object reference and this class’s equals method compares the field by recursively invoking equals, recursively invoke hashCode on the field. If a more complex comparison is required, compute a “canonical representation” for this field and invoke hashCode on the canonical representation. If the value of the field is null, return 0 (or some other constant, but 0 is traditional).
vii. If the field is an array, treat it as if each element were a separate field. That is, compute a hash code for each significant element by applying these rules recursively, and combine these values per step 2.b. If every element in an array field is significant, you can use one of the Arrays.hashCode methods added in release 1.5.
b. Combine the hash code c computed in step 2.a into result as follows: result = 31 * result + c;
3. Return result.
4. When you are finished writing the hashCode method, ask yourself whether equal instances have equal hash codes. Write unit tests to verify your intuition! If equal instances have unequal hash codes, figure out why and fix the problem.
If you are using the Apache commons-lang library, then the classes below will help you generate hashCode/equals/toString methods using reflection.
You don't need to worry about regenerating hashCode/equals/toString methods when you add/remove instance variables.
EqualsBuilder - This class provides methods to build a good equals method for any class. It follows rules laid out in Effective Java , by Joshua Bloch. In particular the rule for comparing doubles, floats, and arrays can be tricky. Also, making sure that equals() and hashCode() are consistent can be difficult.
HashCodeBuilder - This class enables a good hashCode method to be built for any class. It follows the rules laid out in the book Effective Java by Joshua Bloch. Writing a good hashCode method is actually quite difficult. This class aims to simplify the process.
ReflectionToStringBuilder - This class uses reflection to determine the fields to append. Because these fields are usually private, the class uses AccessibleObject.setAccessible(java.lang.reflect.AccessibleObject[], boolean) to change the visibility of the fields. This will fail under a security manager, unless the appropriate permissions are set up correctly.
Maven Dependency:
<dependency>
<groupId>commons-lang</groupId>
<artifactId>commons-lang</artifactId>
<version>${commons.lang.version}</version>
</dependency>
Sample Code:
import org.apache.commons.lang.builder.EqualsBuilder;
import org.apache.commons.lang.builder.HashCodeBuilder;
import org.apache.commons.lang.builder.ReflectionToStringBuilder;
public class Test{
instance variables...
....
getter/setter methods...
....
#Override
public String toString() {
return ReflectionToStringBuilder.toString(this);
}
#Override
public int hashCode() {
return HashCodeBuilder.reflectionHashCode(this);
}
#Override
public boolean equals(Object obj) {
return EqualsBuilder.reflectionEquals(this, obj);
}
}
One potential drawback is that, with this pattern, all objects whose fields are all null end up with the same hash code (31 for a single-field class), so there could be many collisions between objects that only contain null fields. This would make for slower lookups in Maps.
This can occur when you have a Map whose key type has multiple subclasses. For example, if you had a HashMap<Object, Object>, you could have many key values whose hash code was 31. Admittedly, this won't occur that often. If you like, you could change the value of the prime to something besides 31 and lessen the probability of collisions.
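To make that concrete, here is a small sketch (the two classes are hypothetical) showing that two unrelated single-field classes generated with this pattern collide when their fields are null:
    class RoleA {
        String name; // null by default

        @Override public int hashCode() {
            final int prime = 31;
            int result = 1;
            result = prime * result + ((name == null) ? 0 : name.hashCode());
            return result;
        }
    }

    class RoleB {
        String id; // null by default

        @Override public int hashCode() {
            final int prime = 31;
            int result = 1;
            result = prime * result + ((id == null) ? 0 : id.hashCode());
            return result;
        }
    }

    // new RoleA().hashCode() == new RoleB().hashCode() == 31,
    // so both land in the same bucket of a HashMap<Object, Object>.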

Java Arrays.hashCode() weird behavior

I have an object which has one field: double[] _myField.
Its hashCode is:
public int hashCode() {
    final int prime = 31;
    int result = 1;
    result = prime * result + Arrays.hashCode(_myField);
    return result;
}
However, if I use the object as a key in a Map, I get the following weird behavior:
for (Map.Entry<MyObject, String> entry : _myMap.entrySet())
{
    if (entry.getValue() != _myMap.get(entry.getKey()))
    {
        System.out.println("found the problem the value is null");
    }
}
The only reason I can think of for the above if statement to be true is that I get a different hash code for the key.
In fact, I have changed the hashCode function to return 1 in all cases. Not efficient, but good for debugging, and indeed the if statement is then always false.
What's wrong with Arrays.hashCode()?
Please note (after reading some comments):
1) As for the usage of != in the if statement: indeed it compares references, but in the above case they should have been the same. Anyhow, the weird thing is that the right-hand side returns null.
2) As for posting the equals function: of course I've implemented it, but it is irrelevant. Tracing the code in the debugger reveals that only hashCode is called. The reason is presumably the weird thing itself: the returned hash code is different from the original one, so the Map doesn't find a matching bucket and therefore never needs to call equals.
Is the array being changed while it's in the map? Because that will change the result of hashCode().
Implementing hashCode is not enough. You also need to implement equals. In fact, whenever you implement hashCode for an object, you MUST implement equals as well. The two work together.
You need to implement equals for your object and ensure that whenever equals is true for 2 objects, their hashCodes also match.
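For a class whose only field is the double[] and whose hashCode uses Arrays.hashCode, a matching equals would compare the array contents as well. A sketch (assuming the class is called MyObject, as in the loop above):
    @Override
    public boolean equals(Object obj) {
        if (this == obj) return true;
        if (!(obj instanceof MyObject)) return false;
        MyObject other = (MyObject) obj;
        // element-wise comparison, consistent with Arrays.hashCode(_myField)
        return Arrays.equals(_myField, other._myField);
    }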
Trying to reproduce the problem on a clean slate revealed it is not reproducible.
Further investigation revealed that the problem was that the _myField array used for the hash was changed while the object was stored in the map.
As expected, the map got corrupted.
Sorry for the time wasted by those who tried to answer the wrong question.
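That failure mode is easy to demonstrate. A minimal sketch (the key class below is hypothetical, modelled on the one in the question):
    import java.util.Arrays;
    import java.util.HashMap;
    import java.util.Map;

    class ArrayKey {
        final double[] _myField;

        ArrayKey(double[] values) { _myField = values; }

        @Override public int hashCode() { return 31 + Arrays.hashCode(_myField); }

        @Override public boolean equals(Object obj) {
            return obj instanceof ArrayKey
                    && Arrays.equals(_myField, ((ArrayKey) obj)._myField);
        }

        public static void main(String[] args) {
            Map<ArrayKey, String> map = new HashMap<>();
            ArrayKey key = new ArrayKey(new double[] { 1.0 });
            map.put(key, "value");

            key._myField[0] = 2.0;            // mutate the key while it is in the map

            System.out.println(map.get(key)); // prints null: the key now hashes
                                              // differently from the cached entry hash
        }
    }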
