According to this question, how-does-a-hashmap-work-in-java, and this one:
Many key-value pairs can be stored in the same bucket (after the bucket index is calculated from the hash), and when we call get(key) it walks the linked list and tests each entry with the equals method.
That doesn't sound very optimized to me. Doesn't it compare the hashCodes of the entries in the linked list before using equals?
If the answer is NO:
then it means that most of the time a bucket contains only one node. Could you explain why? By this logic, many different keys could share the same bucket index.
How does the implementation ensure a good distribution of keys? This probably means the bucket table size is relative to the number of keys.
And even if the bucket table size equals the number of keys, how does the HashMap's hash function ensure a good distribution of keys? Isn't it a random distribution?
Could we have more details?
The implementation is open source, so I would encourage you to just read the code for any specific questions. But here's the general idea:
The primary responsibility for good hashCode distribution lies with the keys' class, not with the HashMap. If the keys' class has a hashCode() method with bad distribution (for instance, return 0;), then the HashMap will perform badly.
HashMap does do a bit of "re-hashing" to ensure slightly better distribution, but not much (see HashMap::hash)
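To make the "re-hashing" concrete, here is a sketch modeled on the JDK 8 HashMap.hash source; spread is an illustrative copy, not a public API, and the 16-bucket mask mirrors how a small table picks indexes.

```java
public class SpreadDemo {
    // Illustrative copy modeled on JDK 8's HashMap.hash (the real one is package-private):
    // XOR the high 16 bits of the hashCode into the low 16 bits, so keys whose
    // hashCodes differ only in high bits still land in different buckets.
    static int spread(Object key) {
        int h;
        return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
    }

    public static void main(String[] args) {
        int mask = 16 - 1;                        // 16 buckets: index = hash & mask
        Integer a = 0, b = 1 << 16;               // hashCodes 0 and 65536
        System.out.println(a.hashCode() & mask);  // 0
        System.out.println(b.hashCode() & mask);  // 0 -> same bucket without spreading
        System.out.println(spread(a) & mask);     // 0
        System.out.println(spread(b) & mask);     // 1 -> spreading separates them
    }
}
```

Without the mixing step, only the low bits of the hashCode would ever pick a bucket in a small table.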
On the get side of things, a couple of checks are made on each element in the bucket (which, yes, is implemented as a linked list)
First, the HashMap checks the element's hashCode with the incoming key's hashCode. This is because this operation is quick, and the element's hashCode was cached at put time. This guards against elements that have different hashCodes (and are thus unequal, by the contract of hashCode and equals established by Object) but happen to fall into the same bucket (remember, bucket indexes are basically hashCode % buckets.length)
If that succeeds, then the HashMap checks equals explicitly to ensure they're really equal. Remember that equality implies the same hashCode, but the same hashCode does not require equality (and can't, since some classes have a potentially infinite number of different values -- like String -- but there are only a finite number of possible hashCode values)
The reason for the double-checking of both hashCode and equals is to be both fast and correct. Consider two keys that have a different hashCode, but end up in the same HashMap bucket. For instance, if key A has hashCode=7 and B has hashCode=14, and there are 7 buckets, then they'll both end up in bucket 0 (7 % 7 == 0, and 14 % 7 == 0). Checking the hashCodes there is a quick way of seeing that A and B are unequal. If you find that the hashCodes are equal, then you make sure it's not just a hashCode collision by calling equals. This is just an optimization, really; it's not required by the general hash map algorithm.
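The two-step check described above can be sketched like this; Node and find are illustrative names, not the JDK's private internals (though JDK 8's HashMap uses a similar node type with a cached hash field).

```java
import java.util.Objects;

// A bucket entry with its hash cached at put time.
class Node<K, V> {
    final int hash;        // cached so lookups never recompute hashCode
    final K key;
    V value;
    Node<K, V> next;       // next entry in the same bucket's linked list
    Node(int hash, K key, V value, Node<K, V> next) {
        this.hash = hash; this.key = key; this.value = value; this.next = next;
    }
}

public class BucketLookup {
    // Walk one bucket: cheap int comparison first, equals() only on a hash match.
    static <K, V> V find(Node<K, V> bucketHead, int hash, K key) {
        for (Node<K, V> n = bucketHead; n != null; n = n.next) {
            if (n.hash == hash && Objects.equals(n.key, key)) {
                return n.value;
            }
        }
        return null; // not in this bucket
    }
}
```

The hash comparison skips the (potentially expensive) equals() call for every entry that merely shares the bucket, not the hash.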
To avoid having to make multiple comparisons in linked lists, the number of buckets in a HashMap is generally kept large enough that most buckets contain only one item. By default the java.util.HashMap tries to maintain enough buckets that the number of items is only 75% of the number of buckets.
Some of the buckets may still contain more than one item - what's called a "hash collision" - and other buckets will be empty. But on average, most buckets with items in them will contain only one item.
The equals() method will always be used at least once to determine if the key is an exact match. Note that the equals() method is usually at least as fast as the hashCode() method.
A good distribution of keys is maintained by a good hashCode() implementation; the HashMap can do little to affect this. A good hashCode() method is one where the returned hash has as random a relationship to the value of the object as possible.
For an example of a bad hashing function, once upon a time, the String.hashCode() method only depended on the start of the string. The problem was that sometimes you want to store a bunch of strings in a HashMap that all start the same - for example, the URLs to all the pages on a single web site - resulting in an inordinately high proportion of hash collisions. I believe String.hashCode() was later modified to fix this.
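For reference, the polynomial hash that String.hashCode uses today mixes every character, so strings sharing a prefix still get different hashes. It can be reproduced like this (stringHash is an illustrative re-implementation):

```java
public class StringHashDemo {
    // Reproduces String.hashCode: s[0]*31^(n-1) + s[1]*31^(n-2) + ... + s[n-1]
    static int stringHash(String s) {
        int h = 0;
        for (int i = 0; i < s.length(); i++) {
            h = 31 * h + s.charAt(i); // every character contributes
        }
        return h;
    }

    public static void main(String[] args) {
        System.out.println(stringHash("abc") == "abc".hashCode()); // true
        // Same prefix, different hashes:
        System.out.println(stringHash("http://example.com/a"));
        System.out.println(stringHash("http://example.com/b"));
    }
}
```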
doesn't it compare hashCodes of the linked list instead of using
equals
It's not required. The hashCode is used to determine the bucket number, for both put and get operations. Once you know the bucket number from the hashCode and find a linked list there, you know you need to iterate over it and check for equality to find the exact key, so there is no strict need for a hashCode comparison at that point.
That's why the hashCode should be as unique as it can be, so that lookups stay fast.
it means most of the time the bucket contains only 1 node
No. It depends on the uniqueness of the hashCode. If two key objects have the same hashCode but are not equal, then the bucket will contain two nodes.
When we pass a key and value to the put() method of a Java HashMap, the implementation calls hashCode() on the key object and applies the returned hashCode to its own hashing function to find a bucket location for storing the entry. An important point to mention is that HashMap stores both the key and value objects as a Map.Entry in the bucket, which is essential for understanding the retrieval logic.
While retrieving the value for a key, if its hashCode is the same as some other key's, the bucket location will be the same and a collision occurs in the HashMap. Since HashMap uses a linked list to store entries in a bucket, this entry (a Map.Entry object comprising key and value) will be stored in the linked list.
A good distribution of the keys depends on the implementation of the hashCode method. That implementation must obey the general contract for hashCode:
If two objects are equal according to the equals() method, then the hashCode() method must return the same value for both.
Whenever hashCode() is invoked on the same object more than once within a single execution of an application, it must return the same integer, provided no information or field used in equals() and hashCode() has been modified. This integer is not required to be the same across multiple executions of the application, though.
If two objects are not equal according to the equals() method, their hashCodes are not required to be different. It is still good practice to return different hashCodes for unequal objects, since distinct hashCodes for distinct objects improve the performance of a HashMap or Hashtable by reducing collisions.
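A class that honors this contract might look like the following sketch (Point is an illustrative example): the fields compared in equals() are exactly the fields hashed in hashCode().

```java
import java.util.Objects;

// Illustrative value class obeying the equals/hashCode contract.
public final class Point {
    private final int x, y;
    public Point(int x, int y) { this.x = x; this.y = y; }

    @Override public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Point)) return false;
        Point p = (Point) o;
        return x == p.x && y == p.y; // compares exactly the fields hashed below
    }

    @Override public int hashCode() {
        return Objects.hash(x, y); // equal Points always share a hash code
    }
}
```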
You can visit this GitHub repository: https://github.com/devashish234073/alternate-hash-map-implementation-Java/blob/master/README.md
You can understand the working of HashMap with a basic implementation and examples. The README.md explains it all.
Including some portion of the example here:
Suppose I have to store the following key-value pairs.
(key1,val1)
(key2,val2)
(key3,val3)
(....,....)
(key99999,val99999)
Let's say our hash algorithm produces values only between 0 and 5.
So first we create a rack with 6 buckets numbered 0 to 5.
Storing:
To store (keyN,valN):
1. Get the hash of keyN
2. Suppose we get 2
3. Store (keyN,valN) in rack 2
Searching:
To search for keyN:
1. Get the hash of keyN
2. Let's say we get 2
3. Traverse rack 2, find the key, and return its value
Thus for N keys, if we stored them linearly it would take N comparisons to find the last element, but with a hash map whose hash algorithm generates 6 distinct values, we have to do only about N/6 comparisons (assuming the hash values are equally dispersed).
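The rack scheme above can be sketched as a toy class (Rack is illustrative, and floorMod over 6 stands in for a real hash function):

```java
import java.util.ArrayList;
import java.util.List;

// Toy version of the "rack with 6 buckets" example above.
public class Rack {
    static final int BUCKETS = 6;
    private final List<List<String[]>> racks = new ArrayList<>();

    public Rack() {
        for (int i = 0; i < BUCKETS; i++) racks.add(new ArrayList<>());
    }

    // Always yields 0..5; floorMod avoids negative results for negative hashCodes.
    static int hash(String key) {
        return Math.floorMod(key.hashCode(), BUCKETS);
    }

    public void put(String key, String val) {
        racks.get(hash(key)).add(new String[] { key, val });
    }

    public String get(String key) {
        for (String[] pair : racks.get(hash(key))) { // scan one rack only
            if (pair[0].equals(key)) return pair[1];
        }
        return null;
    }
}
```

With keys spread evenly, each get() scans roughly one sixth of the entries instead of all of them.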
What is the reason that unique hashCodes make hash-based collections work faster? And why should a hashCode not be mutable?
I read about it here but didn't understand it, so I read some other resources and ended up with this question.
Thanks.
Hashcodes don't have to be unique, but they work better if distinct objects have distinct hashcodes.
A common use for hashcodes is for storing and looking up objects in data structures like HashMap. These collections store objects in "buckets" and the hashcode of the object being stored is used to determine which bucket it's stored in. This speeds up retrieval. When looking for an object, instead of having to look through all of the objects, the HashMap uses the hashcode to determine which bucket to look in, and it looks only in that bucket.
You asked about mutability. I think what you're asking about is the requirement that an object stored in a HashMap not be mutated while it's in the map, or preferably that the object be immutable. The reason is that, in general, mutating an object will change its hashcode. If an object were stored in a HashMap, its hashcode would be used to determine which bucket it gets stored in. If that object is mutated, its hashcode would change. If the object were looked up at this point, a different hashcode would result. This might point HashMap to the wrong bucket, and as a result the object might not be found even though it was previously stored in that HashMap.
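This failure mode is easy to reproduce; MutableKey below is an illustrative class whose hashCode depends on a mutable field.

```java
import java.util.HashMap;
import java.util.Map;

// Demonstrates why keys must not be mutated while inside a HashMap.
public class MutationDemo {
    static final class MutableKey {
        int id;
        MutableKey(int id) { this.id = id; }
        @Override public boolean equals(Object o) {
            return o instanceof MutableKey && ((MutableKey) o).id == id;
        }
        @Override public int hashCode() { return Integer.hashCode(id); }
    }

    public static void main(String[] args) {
        Map<MutableKey, String> map = new HashMap<>();
        MutableKey k = new MutableKey(1);
        map.put(k, "value");                    // stored in the bucket for hash 1
        System.out.println(map.containsKey(k)); // true

        k.id = 2;                               // hashCode changes...
        System.out.println(map.containsKey(k)); // false: the wrong bucket is probed
    }
}
```

The entry is still in the map (it shows up when iterating), but it can no longer be found by key.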
Hash codes are not required to be unique; they just need a very low likelihood of collisions.
As to hash codes being immutable, that is required only if an object is going to be used as a key in a HashMap. The hash code tells the HashMap where to do its initial probe into the bucket array. If a key's hash code were to change, then the map would no longer look in the correct bucket and would be unable to find the entry.
hashcode() is basically a function that converts an object into a number. In the case of hash based collections, this number is used to help lookup the object. If this number changes, it means the hash based collection may be storing the object incorrectly, and can no longer retrieve it.
Uniqueness of hash values allows a more even distribution of objects within the internals of the collection, which improves the performance. If everything hashed to the same value (worst case), performance may degrade.
The wikipedia article on hash tables provides a good read that may help explain some of this as well.
It has to do with the way items are stored in a hash table. A hash table will use the element's hash code to store and retrieve it. It's somewhat complicated to fully explain here but you can learn about it by reading this section: http://www.brpreiss.com/books/opus5/html/page206.html#SECTION009100000000000000000
Why searching by hashing is faster?
Let's say you have some unique objects as values and Strings as their keys. Each key should be unique, so that when a key is searched for, you find the relevant object it holds as its value.
Now let's say you have 1000 such key-value pairs and you want to search for a particular key and retrieve its value. Without hashing, you would need to compare your key against all the entries in your table.
But with hashing, you hash your key and put the corresponding object in a certain bucket on insertion. When you later search for a particular key, its hash value is computed, you go straight to that bucket, and you pick out your object without having to search through all the key entries.
hashCode is a tricky method. It is supposed to provide a shorthand to equality (which is what maps and sets care about). If many objects in your map share the same hashcode, the map will have to check equals frequently - which is generally much more expensive.
Check the javadoc for equals - that method is very tricky to get right even for immutable objects, and using a mutable object as a map key is just asking for trouble (since the object is stored for its "old" hashcode)
As long as you are working with collections whose elements you retrieve by index (0, 1, 2, ... collection.size()-1), you don't need a hashCode. However, if we are talking about associative collections like maps, or about asking a collection whether it contains some element, then we are talking about potentially expensive operations.
A hashCode is like a digest of the given object. It is used for cheap comparisons: comparing the hashCode of each collection member is a single int comparison, far cheaper than comparing every object property by property (more than one operation for sure). A hashCode should be like a fingerprint: one entity, one immutable hashCode.
The basic idea of hashing is that if one is looking in a collection for an object whose hash code differs from that of 99% of the objects in that collection, one only need examine the 1% of objects whose hash code matches. If the hashcode differs from that of 99.9% of the objects in the collection, one only need examine 0.1% of the objects. In many cases, even if a collection has a million objects, a typical object's hash code will only match a very tiny fraction of them (in many cases, less than a dozen). Thus, a single hash computation may eliminate the need for nearly a million comparisons.
Note that it's not necessary for hash values to be absolutely unique, but performance may be very bad if too many instances share the same hash code. Note that what's important for performance is not the total number of distinct hash values, but rather the extent to which they're "clumped". Searching for an object in a collection of a million items in which half the items share one hash value (and each remaining item has a different value) will require examining, on average, about 250,000 items. By contrast, if there were 100,000 different hash values, each shared by ten items, searching for an object would require examining only about five.
You can define a custom class extending HashMap and override the methods (get, put, remove, containsKey, containsValue) so that keys and values are compared only with the equals method, then add some constructors. Correctly overriding the hashCode method is very hard.
I hope this helps everybody who wants an easy way to use a hash map.
There is a point in the general contract of the equals method which says that if you have defined an equals() method, you should also define a hashCode() method, and if o1.equals(o2), then it must be that o1.hashCode() == o2.hashCode().
So my question is: what if I break this contract? Where does it fail if o1.equals(o2) but o1.hashCode() != o2.hashCode()?
It will lead to unexpected behavior in hash-based data structures such as HashMap. Read up on how a hash table works.
HashMap/HashTable/HashSet/etc will put your object into one of several buckets based on its hashCode, and then check to see if any other objects already in that bucket are equal.
Because these classes assume the equals/hashCode contract, they won't check for equality against objects in other buckets. After all, any object in another bucket must have a different hashCode, and thus (by the contract) cannot be equal to the object in question. If two objects are equal but have different hash codes, they could end up in different buckets, in which case the HashMap/Table/Set/etc won't have a chance to compare them.
So, you could end up with a Set that contains two objects which are equal -- which it's not supposed to do; or a Map that contains two values for the same one key (since the buckets are by key); or a Map where you can't look up a key (since the lookup checks both the hash code and equality); or any number of similar bugs.
If you break the contract, your objects won't work with hash-based containers (and anything else that uses hashCode() and relies on its contract).
The basic intuition is as follows: to see whether two objects are the same, the container could call hashCode() on both, and compare the results. If the hash codes are different, the container is allowed to short-circuit by assuming that the two objects are not equal.
To give a specific example, if o1.equals(o2) but o1.hashCode() != o2.hashCode(), you'll likely be able to insert both objects into a HashMap (which is meant to store unique objects).
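The Set variant of that bug is easy to demonstrate; BrokenKey below is an illustrative class where every instance is equal to every other, yet each returns a different hashCode.

```java
import java.util.HashSet;
import java.util.Set;

// Shows the cost of breaking the equals/hashCode contract.
public class BrokenContractDemo {
    static final class BrokenKey {
        private static int nextHash = 0;
        private final int hash = nextHash++;     // each instance: a different hashCode

        @Override public boolean equals(Object o) {
            return o instanceof BrokenKey;       // every BrokenKey equals every other
        }
        @Override public int hashCode() { return hash; } // contract broken
    }

    public static void main(String[] args) {
        Set<BrokenKey> set = new HashSet<>();
        set.add(new BrokenKey());
        set.add(new BrokenKey());
        // A Set must never hold two equal elements, yet:
        System.out.println(set.size()); // prints 2
    }
}
```

The two keys land in different buckets, so the HashSet never calls equals on them and happily stores both "equal" objects.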
When we put a key-value pair in a HashMap, it can happen that the hashCodes of two keys are the same. How are storage and retrieval of the key-value pairs handled in this case?
Update
What I understand so far is that if two key objects have the same hash code, both will be stored in the same bucket, and when I call get(key), which of the two objects with the matching hashCode to fetch is decided by equals().
When you want to retrieve some object from a hashmap and there exist several objects with the same hashcode, java will call equals() to determine the right object.
That is why it is so important to override equals() when overriding hashCode().
Each bucket is represented by a linked list, so there is no limit, other than heap space, on the number of entries in a bucket. There are fewer buckets than possible hashCode results, so multiple hash codes are mapped to the same bucket, not just keys with the same hashCode.
A hashCode() with many collisions, or a HashMap with too few buckets, can make some linked lists long. Good performance depends on short linked lists, because the final stage in a look up is a linear scan of one of the linked lists.
I agree with the previous answer that a match depends on both equals() and hashCode().
I understand that returning the same value for each object is inefficient, but is it the most efficient approach to return distinct values for distinct instances?
If each object gets a different hashCode value then isn't this just like storing them in an ArrayList?
hashCode must be consistent with equals; that's the number one priority. Beyond that, distinct values for distinct instances are desirable. Bear in mind that if your object has more than 32 bits of state, it is theoretically impossible to provide a perfectly spread hashCode.
No, it's not actually.
Assuming your objects are going to be stored into a HashMap (or Set... doesn't matter, we'll use HashMap here for simplicity), you want your hashCode method to return a result in a way that distributes the objects as evenly as possible.
Hashcode should be unique for Objects that are not equal, although you can't guarantee this will always be true.
On the other hand, if a.equals(b) is true, then a.hashCode() == b.hashCode(). This is known as the Object Contract.
Besides this, there are performance issues also. Each time two different objects have the same hashCode, they're mapped to the same position in the HashMap (aka, they collide). This means that the HashMap implementation has to handle this collision, which is much more complex than simply storing and retrieving an entry.
There are also plenty of algorithms that rely on the fact that values are distributed evenly across a Map, and the performance deteriorates rapidly when the number of collisions increase (some algorithms assume a perfect hash function, meaning that no collisions ever occur, no two different values get mapped to the same position on the Map).
Good examples of this are probabilistic algorithms and data-structures such as Bloom Filters (to use an example that appears to be in fashion these days).
You want hashCode() as varied as possible to avoid collisions. If there are no collisions, each key or element will be stored in the underlying array on its own. (A bit like an ArrayList)
The problem is that even if the hashCodes are different you can still get collisions. This happens because you don't have a bucket for every possible hashCode, so the value has to be reduced to a smaller range; e.g. with 16 buckets, the range is 0 to 15. How it does this is complicated, but I am sure you can see that even if all the hashCodes are different, they can still result in a collision (though it's unlikely).
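The reduction step can be shown in a few lines; the (length - 1) mask assumes a power-of-two table size, as HashMap uses (Hashtable uses a modulo instead).

```java
// Distinct hashCodes can still share a bucket once reduced to an index.
public class IndexDemo {
    public static void main(String[] args) {
        int buckets = 16;                  // HashMap table sizes are powers of two
        int h1 = 7, h2 = 7 + 16;           // two different hash codes...
        int i1 = h1 & (buckets - 1);       // index = hash & (length - 1)
        int i2 = h2 & (buckets - 1);
        System.out.println(i1 + " " + i2); // prints "7 7" -> same bucket: a collision
    }
}
```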
It is a concern for denial of service attacks. Normally strings have a low collision rate, however you can deliberately construct strings which have the same hashcode. This question gives a list of Strings with a hashCode of 0 Why doesn't String's hashCode() cache 0?
The hashCode() method isn't suited for placing objects in an ArrayList.
Although it does return the same value for the same object every time, two objects could quite possibly have the same hashcode.
Therefore the hashCode method is used on the key Object when storing items in for example a HashMap.
The HashMap class's major data structure is this:
Entry[] table;
It's important to note that the Entry class (which is a static package protected class that implements Map.Entry) is actually a linked list style structure.
When you try to put an element, first the key's hashcode is computed and then transformed into a bucket number. The "bucket" is the index into the above array.
Once you find the bucket, a linear search is done inside of that bucket for the exact key (if you don't believe me, look at the HashMap code). If it is found, the value is replaced. If not, the key/value pair is appended to the end of that bucket.
For this reason, hashCode() values need not be unique; however, the more unique and evenly distributed they are, the better your odds of having the values evenly distributed among the buckets. If your hashCode() method returned the same value for all instances, they'd all end up in the same bucket, rendering your get() method one long linear search, yielding O(N).
The more distributed the values are, the smaller the buckets, and thus the smaller the linear search component would be. Unique values would yield constant time lookup O(1).
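The put-then-scan behavior described here can be sketched in miniature; MiniMap is an illustrative toy with a fixed 16-slot table and no resizing, not the JDK's implementation.

```java
import java.util.Objects;

// Minimal sketch of the put path: hash -> bucket index -> scan -> replace or prepend.
public class MiniMap<K, V> {
    private static final int CAPACITY = 16;

    private static final class Entry<K, V> {
        final int hash; final K key; V value; Entry<K, V> next;
        Entry(int hash, K key, V value, Entry<K, V> next) {
            this.hash = hash; this.key = key; this.value = value; this.next = next;
        }
    }

    @SuppressWarnings("unchecked")
    private final Entry<K, V>[] table = new Entry[CAPACITY];

    public void put(K key, V value) {
        int hash = Objects.hashCode(key);
        int index = hash & (CAPACITY - 1);        // bucket = hash reduced to 0..15
        for (Entry<K, V> e = table[index]; e != null; e = e.next) {
            if (e.hash == hash && Objects.equals(e.key, key)) {
                e.value = value;                  // key already present: replace value
                return;
            }
        }
        table[index] = new Entry<>(hash, key, value, table[index]); // new entry
    }

    public V get(K key) {
        int hash = Objects.hashCode(key);
        for (Entry<K, V> e = table[hash & (CAPACITY - 1)]; e != null; e = e.next) {
            if (e.hash == hash && Objects.equals(e.key, key)) return e.value;
        }
        return null;
    }
}
```

The real HashMap additionally resizes the table and (since Java 8) converts long buckets to trees, but the put/scan/replace skeleton is the same idea.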
I have a class called Node that I've written. I overrode its hashCode() function to take into account two fields of the Node (there is also a third field which doesn't affect my hashCode() function). I also wrote an equals() function which takes all three fields into account.
I'm trying to use the Hashtable class to store Nodes so that I can easily check later on when I make new nodes whether the new nodes are duplicates of ones in the hashtable or not. So far I have this
Hashtable<Node,Node> hashTbl = new Hashtable<Node,Node>();
...
Node node1 = // some new node
hashTbl.put(node1,node1);
...
So now, say I make a new node called node2 which has the exact same hash value as node1, but is not equal to node1 as defined by the equals() method. I want to check whether node2 is a duplicate of anything in the hash table (it's not), but if I use containsKey(), won't that give me a false positive? It seems like using containsValue() wouldn't be utilizing the efficiency of the hash table. So how can I do this efficiently?
Also, my assumption is that when I call hashTbl.put(arg1,arg2), it calls the hashCode() function of arg1 and uses that value to find an index in an "array" to place arg2 in. Is this right?
Sorry for being kind of confusing. Thanks anyone.
First, you probably want a HashSet (or something similar), not a Hashtable - all you are trying to do is check for membership, and a HashSet will allow you to do that without needing to provide a value for every key.
To answer your question, the hash code determines which slot in the array the value gets put in, but every slot is actually a linked list. If the new key is not .equals to any other key in the linked list, the new key and value are put in their own node in the list. Simply returning 1 for all objects is a perfectly legal and correct hashCode implementation. The only issue with that implementation is that it turns Hashtables and similar data structures into linked lists (which obviously loses all the performance benefits of the Hashtable).
In short, your .hashcode method will work fine. If you put a large number of objects that aren't .equal but have the same hashcode, performance will decrease, but the code will still function correctly.
You're essentially right: the hashtable (btw, HashMap is the newer, more recommended class) uses hashCode() to find a bucket to put your object in. If there's a collision (another object in the same bucket), it uses a list within each bucket, using equals(Object) to find out if this new object is already equal to one of the objects in the hash (or, in a lookup, to see whether the lookup key matches one of the key-value pairs). So in the worst case of all collisions, your hash turns into a list with O(N) operations. As you point out, this is inefficient.
As long as your equals(Object) is correct, there won't be a functional problem -- just an efficiency issue if your hashCode produces too many conflicts. Basically, if two objects are equal, they must have the same hashCode for correctness; if two objects are not equal, they should have different hashCodes for hashing efficiency.
A HashTable (or HashMap) contains N bins, where a bin can hold more than one Object. (Each bin is effectively a linked list of Map.Entry). The hashCode() of the key is used to determine the bin. However, after determining the bin, equals() is then used on the key to see if the key is already there. So, if you put node1 and node2 into the HashTable, and both have the same hashCode() but are not equal, they will go into the same bin, but that bin will be a linked list of length two, with two keys, node1 and node2, and corresponding values.
containsKey() will NOT give you a false positive, as it will use the hashCode() to find the bin, but then will do an equals on all the keys for that bin. Having the same hashCode for a bunch of keys makes the HashTable slow and inefficient (if all the values have the same hashCode, in effect, you are storing in a linked list) but will not break the contract.
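This can be checked directly; Node below is an illustrative stand-in for the asker's class, with a deliberately constant hashCode so that both keys collide.

```java
import java.util.Hashtable;

// Two keys with the same hashCode but unequal by equals():
// containsKey still tells them apart.
public class ContainsDemo {
    static final class Node {
        final String label;
        Node(String label) { this.label = label; }
        @Override public int hashCode() { return 42; }   // deliberate collision
        @Override public boolean equals(Object o) {
            return o instanceof Node && ((Node) o).label.equals(label);
        }
    }

    public static void main(String[] args) {
        Hashtable<Node, Node> table = new Hashtable<>();
        Node node1 = new Node("a");
        table.put(node1, node1);

        Node node2 = new Node("b");                   // same hash, not equal
        System.out.println(table.containsKey(node1)); // true
        System.out.println(table.containsKey(node2)); // false: no false positive
    }
}
```

Both keys land in the same bin, but the equals() scan inside that bin keeps containsKey correct.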