Mechanism of Java HashMap

I'm reading an Algorithms book and need to grasp the concept of a hash table. The authors describe hashing with separate chaining and hashing with linear probing. I assume Java's HashMap is a hash table, so I'm wondering which mechanism HashMap uses (chaining or probing)?
I also need to implement the simplest possible HashMap with get, put and remove. Could you point me to good material on that?
When the keys used for the Map are custom objects, we need to implement the hashCode() method in the corresponding type. Did I get that right, and when exactly is hashCode() needed?
Unfortunately the book does not answer all of these questions, even though I understand that for many of you they are low-level.

1: Before Java 8, HashMap used separate chaining with linked lists to resolve collisions: there is a linked list for every bucket. (Since Java 8, a bucket whose chain grows too long is converted to a balanced tree.)
2: hmmmmmm maybe this one?
3: Yes, you are right: hashCode() is used to compute the hash of the key. The hash code is then mapped to a number between 0 and the number of buckets minus 1.
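For the "simplest possible HashMap" part of the question, here is a minimal sketch of separate chaining with put, get and remove. All names (MyHashMap, Entry, indexFor) are made up for illustration; this is the textbook chaining layout, not the JDK's actual code, and it skips resizing and null keys.

```java
import java.util.LinkedList;

// Minimal separate-chaining hash map: an array of buckets, each bucket a
// linked list of entries. Null keys are not supported in this sketch.
public class MyHashMap<K, V> {
    private static class Entry<K, V> {
        final K key;
        V value;
        Entry(K key, V value) { this.key = key; this.value = value; }
    }

    private final LinkedList<Entry<K, V>>[] buckets;

    @SuppressWarnings("unchecked")
    public MyHashMap(int capacity) {
        buckets = new LinkedList[capacity];
        for (int i = 0; i < capacity; i++) buckets[i] = new LinkedList<>();
    }

    // Map the key's hashCode() to a bucket index in [0, buckets.length).
    private int indexFor(K key) {
        return (key.hashCode() & 0x7fffffff) % buckets.length;
    }

    public V put(K key, V value) {
        for (Entry<K, V> e : buckets[indexFor(key)]) {
            if (e.key.equals(key)) {          // equal key already present:
                V old = e.value;              // overwrite and return old value
                e.value = value;
                return old;
            }
        }
        buckets[indexFor(key)].add(new Entry<>(key, value));
        return null;
    }

    public V get(K key) {
        for (Entry<K, V> e : buckets[indexFor(key)]) {
            if (e.key.equals(key)) return e.value;
        }
        return null;
    }

    public V remove(K key) {
        LinkedList<Entry<K, V>> bucket = buckets[indexFor(key)];
        for (Entry<K, V> e : bucket) {
            if (e.key.equals(key)) {
                bucket.remove(e);             // safe: we return immediately
                return e.value;
            }
        }
        return null;
    }
}
```

For reading material, the JDK's own java.util.HashMap source is surprisingly readable once you know the chaining idea above.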

This is one of the most confusing interview questions for many of us, but it's not that complex.
We know that:
HashMap stores key-value pairs as Map.Entry objects (we all know this).
HashMap works on a hashing algorithm and uses the hashCode() and equals() methods in its put() and get() methods (we know this too).
When we call put() with a key-value pair, HashMap uses the key's **hashCode()** with hashing to **find the index** at which to store the pair. (this is important)
The entry is **stored in a linked list** at that index, so if entries already exist there, it uses the **equals() method to check whether the passed key already exists**. (this is important too)
If yes, it overwrites the value; otherwise it creates a new entry and stores the key-value pair.
When we call get() with a key, it again uses hashCode() to find the index in the array and then uses the equals() method to find the correct entry and return its value. (now this is obvious)
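A small, self-contained illustration of the put()/get() flow described above, using a hypothetical Point key class: hashCode() picks the bucket, and equals() decides whether an existing key gets its value overwritten.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

// Two distinct Point instances with equal fields must land in the same
// bucket (hashCode) and be recognized as the same key (equals), so the
// second put() overwrites the first. Point is an illustrative class.
public class PutGetDemo {
    static final class Point {
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
        @Override public boolean equals(Object o) {
            return o instanceof Point && ((Point) o).x == x && ((Point) o).y == y;
        }
        @Override public int hashCode() { return Objects.hash(x, y); }
    }

    public static void main(String[] args) {
        Map<Point, String> map = new HashMap<>();
        map.put(new Point(1, 2), "first");
        map.put(new Point(1, 2), "second");        // equal key: overwritten
        System.out.println(map.size());            // 1
        System.out.println(map.get(new Point(1, 2))); // second
    }
}
```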

HashMap works on the principle of hashing. Its working is two-fold.
First, it maintains a linked list per bucket to store entries whose keys hash to the same bucket index (colliding keys, which are not necessarily equal; a key equal to an existing one replaces that entry's value instead).
Second, it has a collection of these linked lists, whose heads are stored in an array.
For more information, refer to the blog post Java Collection Internal Working.


Hashtable implementation in C and Java

In Java, HashMap and Hashtable both implement the Map interface and store key/value pairs using a hash function and an array/linked-list structure. In C, a hash table can also be implemented using arrays and linked lists, but there is no built-in concept of a key/value pair like Map.
So my question is: is a hash table implementation in C similar to Hashtable in Java, or is it closer to HashSet in Java (apart from the unique-elements-only condition)?
Both semantics (Hashtable and HashSet) can be implemented in C, but neither comes in the standard C library. You can find many different hash table implementations on the Internet, each with its own advantages and drawbacks. Implementing this yourself may prove difficult, as there are many traps and pitfalls.
I previously used BSD's red-black tree implementation. It's relatively easy to use once you start to understand how it works.
The really great thing about it is that you only have to copy one header file and then include it where it's needed; there is no need to link against libraries.
It has functionality similar to HashSet's: you can find elements by key with the RB_FIND() macro, enumerate elements with RB_FOREACH(), insert new ones with RB_INSERT(), and so on.
You can find more info in its man page or in the source code itself.
The difference (in Java) between a Hashtable and a HashSet is in how the key used to calculate the hash value is selected. In a HashSet the key is the stored instance itself, and hashCode() is applied to the complete instance (Object provides both the hashCode() and equals(Object) methods). With an external key, equals(Object) and hashCode() are taken from the separate key instance instead of from the stored data value. Note that in the JDK the layering is actually the other way around from what one might expect: HashSet is implemented as a thin wrapper around a HashMap, and the map exposes its internal entries through the Map.Entry<K,V> interface.
Implementing a hash table in C is not too difficult, but you first need to understand what the key is (if external) and the differences between the key and the value, the difference between calculating a hash code and comparing for equality, how you are going to distinguish the key from the value, and how you will manage keys and hashes internally in order to handle collisions.
I recently started an implementation of a hash table in C (not yet finished). My hash table constructor needs to store, in the instance record, a pointer to an equality-comparison routine (the same way Java requires an object's equals() method); this lets you detect collisions, i.e. two entries with the same hash that compare as different. It also stores the hash function used on keys. In my implementation I will probably store the full hash value returned by the hash function (before reducing it to the table's size), so that I can grow the table and place the elements in the new table without having to recalculate all the hashes.
I don't know if these hints will be of help to you, but that's my two cents. :)
My implementation uses a table of prime numbers to select the capacity (approximately doubling the size at each step) and resizes the table when the number of collisions becomes unacceptable (whatever that means to you; I have no clear idea yet). Resizing is a time-consuming operation, but it happens rarely, so it must be carefully specified; Java's Hashtable does this too. If you don't plan to grow your hash table, or plan to do it manually, the implementation is easier: just add the number of entries to the constructor's parameter list, and perhaps provide a grow_hash_table(new_cap) function.
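The "store the full hash" idea above can be sketched in Java as well. All names here (CachedHashDemo, Entry, resize) are illustrative; the JDK's HashMap uses the same trick, caching the hash in its Node.hash field so that resizing never re-runs hashCode().

```java
// Each entry caches the value returned by hashCode() once, at insertion
// time. Growing the table only re-indexes entries against the new
// capacity; hashCode() is never called again.
public class CachedHashDemo {
    static final class Entry {
        final String key;
        final int hash;        // full hash, computed once
        final String value;
        Entry next;            // separate chaining
        Entry(String key, String value) {
            this.key = key;
            this.value = value;
            this.hash = key.hashCode();
        }
    }

    static int indexFor(int hash, int capacity) {
        return (hash & 0x7fffffff) % capacity;
    }

    static void insert(Entry[] table, Entry e) {
        int i = indexFor(e.hash, table.length);
        e.next = table[i];     // push onto the bucket's chain
        table[i] = e;
    }

    // Rehash into a larger table using only the cached hashes.
    static Entry[] resize(Entry[] old, int newCapacity) {
        Entry[] grown = new Entry[newCapacity];
        for (Entry head : old) {
            while (head != null) {
                Entry next = head.next;
                insert(grown, head);
                head = next;
            }
        }
        return grown;
    }

    static String get(Entry[] table, String key) {
        int hash = key.hashCode();
        for (Entry e = table[indexFor(hash, table.length)]; e != null; e = e.next) {
            if (e.hash == hash && e.key.equals(key)) return e.value;
        }
        return null;
    }
}
```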

Why do we need hashcode and bucket in LinkedHashMap

Lately, I've been going through implementations of the Map interface in Java. I understood HashMap, and everything made sense.
But when it comes to LinkedHashMap, as far as I know each Entry has a key, a value, and before and after references, where before and after keep track of insertion order.
However, the hashcode-and-bucket concept doesn't make sense to me in LinkedHashMap.
I went through this article to understand the implementation of LinkedHashMap.
Could someone please explain it? I mean, why does it matter which bucket we put the entry node in? In fact, why the bucket concept in the first place? Why not a plain doubly linked list?
LinkedHashMap is still a type of HashMap. It uses the same logic as HashMap to find the bucket where a key belongs (used in methods such as get(), put(), containsKey(), etc.). The hashCode() is used to locate that bucket. This logic is essential for the expected O(1) performance of these operations.
The added functionality of LinkedHashMap (which uses the before and after references) is only used to iterate over the entries in insertion order, so it affects the iterators of the collections returned by the keySet(), entrySet(), and values() methods. It doesn't affect where the entries are stored.
Without hash codes and buckets, LinkedHashMap wouldn't be able to look up keys in O(1) expected time.
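A quick sketch of that split in behavior: lookups are still ordinary hash-bucket lookups, while iteration follows insertion order.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// LinkedHashMap keeps O(1) hash-based lookups but its iterators walk the
// doubly linked list of entries in insertion order (unlike HashMap, whose
// iteration order is unspecified).
public class OrderDemo {
    public static void main(String[] args) {
        Map<String, Integer> linked = new LinkedHashMap<>();
        linked.put("banana", 2);
        linked.put("apple", 1);
        linked.put("cherry", 3);
        System.out.println(linked.keySet());     // [banana, apple, cherry]
        System.out.println(linked.get("apple")); // 1 (a normal bucket lookup)
    }
}
```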

Is it possible to implement a MyHashMap backed by a given HashSet implementation?

As is well known, in the Sun (Oracle) JDK, HashSet is implemented on top of a HashMap, to reuse that complicated algorithm and data structure.
But is it possible to implement a MyHashMap using java.util.HashSet as its backing store?
If possible, how? If not, why not?
Please note that this question is only a discussion of coding skill, not applicable to production scenarios.
Trove bases its Map on its Set implementation. However, Trove's set has one critical method which is missing from java.util.Set: a get() method.
Without a get(element) method, you cannot perform a lookup on a HashSet, which is a key function of a Map (pardon the pun). The only option Set offers is contains(), which could be hacked to perform a get(), but it would not be ideal.
You can have:
a Set where each entry holds both a key and a value;
entries defined as equal when their keys are the same;
a hacked equals() method so that when there is a match, on a "put" the value portion of the entry is updated, and on a "get" the value portion is copied out.
Set could have been designed to be extended into Map, but it wasn't, and it wouldn't be a good idea to use HashSet or the existing Set implementations to create a Map.
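The scheme described in the bullet points above can be sketched as follows (all names are illustrative). Note how get() is forced into an O(n) scan, which is exactly the missing-get(element) problem the answer points out.

```java
import java.util.HashSet;
import java.util.Objects;
import java.util.Set;

// Entries compare by key only, so the HashSet holds at most one entry per
// key. put() works via remove-then-add; get() must iterate, because Set
// exposes contains() but no way to retrieve the matching element.
public class SetBackedMap<K, V> {
    static final class Entry<K, V> {
        final K key;
        final V value;
        Entry(K key, V value) { this.key = key; this.value = value; }
        @Override public boolean equals(Object o) {      // key-only equality
            return o instanceof Entry && Objects.equals(((Entry<?, ?>) o).key, key);
        }
        @Override public int hashCode() { return Objects.hashCode(key); }
    }

    private final Set<Entry<K, V>> entries = new HashSet<>();

    public void put(K key, V value) {
        Entry<K, V> e = new Entry<>(key, value);
        entries.remove(e);     // drop any entry with an equal key
        entries.add(e);
    }

    public V get(K key) {
        for (Entry<K, V> e : entries)   // O(n): Set has no get(element)
            if (Objects.equals(e.key, key)) return e.value;
        return null;
    }
}
```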

Map and HashCode

What is the reason for making hashCode as unique as possible so that hash-based collections work faster? And what is the point of not making hashCode mutable?
I read about it here but didn't understand, so I read some other resources and ended up with this question.
Thanks.
Hashcodes don't have to be unique, but they work better if distinct objects have distinct hashcodes.
A common use for hash codes is storing and looking up objects in data structures like HashMap. These collections store objects in "buckets", and the hash code of the object being stored is used to determine which bucket it goes into. This speeds up retrieval: when looking for an object, instead of having to look through all of the objects, the HashMap uses the hash code to determine which bucket to look in, and it looks only in that bucket.
You asked about mutability. I think what you're asking about is the requirement that an object stored in a HashMap not be mutated while it's in the map, or preferably that the object be immutable. The reason is that, in general, mutating an object will change its hashcode. If an object were stored in a HashMap, its hashcode would be used to determine which bucket it gets stored in. If that object is mutated, its hashcode would change. If the object were looked up at this point, a different hashcode would result. This might point HashMap to the wrong bucket, and as a result the object might not be found even though it was previously stored in that HashMap.
Hash codes are not required to be unique; a good hash function just makes collisions very unlikely.
As to hash codes being immutable, that is required only if an object is going to be used as a key in a HashMap. The hash code tells the HashMap where to do its initial probe into the bucket array. If a key's hash code were to change, then the map would no longer look in the correct bucket and would be unable to find the entry.
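A minimal demonstration of that failure mode, using a mutable List as a key (ArrayList.hashCode() depends on its contents):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Mutating a key after insertion changes its hashCode(), so the map probes
// the wrong bucket and can no longer find the entry.
public class MutableKeyDemo {
    public static void main(String[] args) {
        Map<List<String>, String> map = new HashMap<>();
        List<String> key = new ArrayList<>();
        key.add("a");
        map.put(key, "value");
        System.out.println(map.get(key));         // value

        key.add("b");                              // hashCode changes here
        System.out.println(map.get(key));          // null: wrong bucket probed
        System.out.println(map.containsKey(key));  // false
    }
}
```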
hashCode() is basically a function that converts an object into a number. In hash-based collections, this number is used to help look up the object. If this number changes, the hash-based collection may be storing the object in the wrong place and can no longer retrieve it.
An even spread of hash values allows a more even distribution of objects within the internals of the collection, which improves performance. If everything hashed to the same value (the worst case), performance degrades badly.
The wikipedia article on hash tables provides a good read that may help explain some of this as well.
It has to do with the way items are stored in a hash table. A hash table will use the element's hash code to store and retrieve it. It's somewhat complicated to fully explain here but you can learn about it by reading this section: http://www.brpreiss.com/books/opus5/html/page206.html#SECTION009100000000000000000
Why is searching by hashing faster?
Let's say you have some unique objects as values and Strings as their keys. Each key should be unique, so that when a key is searched for, you find the relevant object it holds as its value.
Now let's say you have 1000 such key-value pairs and you want to search for a particular key and retrieve its value. Without hashing, you would need to compare your key against all the entries in the table to find it.
With hashing, you hash your key and put the corresponding object into a certain bucket on insertion. When you later search for a particular key, that key is hashed to determine its hash value, and you can go straight to that bucket and pick out your object without having to search through all the key entries.
hashCode() is a tricky method. It is supposed to provide a cheap shorthand for equality (which is what maps and sets care about). If many objects in your map share the same hash code, the map will have to call equals() frequently, which is generally much more expensive.
Check the Javadoc for equals(): that method is very tricky to get right even for immutable objects, and using a mutable object as a map key is just asking for trouble (since the object is stored under its "old" hash code).
As long as you are working with collections whose elements you retrieve by index (0, 1, 2, ..., collection.size() - 1), you don't need hash codes. However, if we are talking about associative collections like maps, or about asking a collection whether it contains some element, then we are talking about expensive operations.
A hash code is like a digest of the provided object. Comparing the hash codes of a collection's members is cheap (a single integer comparison), whereas comparing every object by its properties takes more than one operation for sure. A hash code needs to be like a fingerprint: one entity, one immutable hash code.
The basic idea of hashing is that if one is looking in a collection for an object whose hash code differs from that of 99% of the objects in that collection, one only need examine the 1% of objects whose hash code matches. If the hashcode differs from that of 99.9% of the objects in the collection, one only need examine 0.1% of the objects. In many cases, even if a collection has a million objects, a typical object's hash code will only match a very tiny fraction of them (in many cases, less than a dozen). Thus, a single hash computation may eliminate the need for nearly a million comparisons.
Note that it's not necessary for hash values to be absolutely unique, but performance may be very bad if too many instances share the same hash code. What's important for performance is not the total number of distinct hash values, but the extent to which they're "clumped". Searching a collection of a million things in which half the items share one hash value and each remaining item has a distinct value will require examining about 250,000 items on average. By contrast, if there were 100,000 different hash values, each shared by ten items, searching for an object would require examining only about five items.
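A sketch of that worst case, using a hypothetical BadKey class whose hashCode() is constant: the map stays correct, because equals() still disambiguates, but every lookup degrades to a scan of one long chain (or a tree walk in Java 8+, which treeifies long chains).

```java
import java.util.HashMap;
import java.util.Map;

// Every BadKey collides: all entries pile into one bucket, so lookups are
// no longer expected O(1). Results remain correct thanks to equals().
public class ClumpingDemo {
    static final class BadKey {
        final int id;
        BadKey(int id) { this.id = id; }
        @Override public boolean equals(Object o) {
            return o instanceof BadKey && ((BadKey) o).id == id;
        }
        @Override public int hashCode() { return 42; }  // worst-case clumping
    }

    public static void main(String[] args) {
        Map<BadKey, Integer> map = new HashMap<>();
        for (int i = 0; i < 1000; i++) map.put(new BadKey(i), i);
        System.out.println(map.size());               // 1000
        System.out.println(map.get(new BadKey(500))); // 500, but via a slow scan
    }
}
```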
You can define a custom class extending HashMap and override its methods (get, put, remove, containsKey, containsValue) to compare keys and values only with the equals() method, then add some constructors. Overriding the hashCode() method correctly is very hard.
I hope I have helped everybody who wants an easy way to use a hash map.

Most efficient key object type in a hash map?

When using a HashMap, how much does the object type matter for element retrieval when it comes to speed? Say I use a loop to iterate through possible keys of a large hash map. What would be the most efficient key type I could use?
As of now I am using String as the key type, for simplicity's sake. While coding, this question popped into my head and piqued my curiosity. I tried searching for an answer online, but couldn't quite find what I was looking for. Thanks!
The key's hashCode() and equals() should be fast.
hashCode() should be well distributed to minimize hash collisions.
The hash map will ask your key for its hashCode(). If the time taken to generate a hash code is unreasonable, then insertion and retrieval times for such objects will be high. Take java.net.URL, for example: its hashCode() method performs a DNS lookup. Such objects do not make good keys for a hash map.
There is no universal answer as to which is the best key type, since there is no best key. The best key for your hash map is the one you need for retrieval. Just make sure the key's hashCode() is quick and uses the int space appropriately.
What matters is the implementation of the equals and hashCode methods. See the following: What issues should be considered when overriding equals and hashCode in Java?
As these functions are utilized in hash operations, their efficiency comes into play when you operate on your collection.
As a sidenote, keep in mind the point made in the reference link:
Make sure that the hashCode() of the key objects that you put into the
collection never changes while the object is in the collection. The
bulletproof way to ensure this is to make your keys immutable, which
has also other benefits.
What matters in your case is the speed of the keys' hashCode() and equals() methods. Using Integer keys is fine, as they don't require any special computation for the hash value. Strings are also OK, because the hash value is cached internally, although they are slower in equals().
Are you trying to retrieve values from the HashMap via the get() method, or by iterating over it? As for the get() method, the answers above cover this.
If you are iterating over a HashMap via the entrySet() method, the type of the keys doesn't matter. Moreover, with an entry of the entrySet in hand at each iteration, looking up values separately becomes unnecessary. Also note that entrySet() is generally preferable to the values() and keySet() methods, as those both internally use the entrySet iterator and return either the key or the value of each entry.
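A sketch of that iteration advice; LinkedHashMap is used here only to make the output order deterministic, the pattern applies to any Map.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// entrySet() hands you key and value together on each step, so no per-key
// get() call is needed during iteration.
public class EntrySetDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new LinkedHashMap<>();
        map.put("a", 1);
        map.put("b", 2);
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, Integer> e : map.entrySet()) {
            sb.append(e.getKey()).append('=').append(e.getValue()).append(';');
        }
        System.out.println(sb); // a=1;b=2;
    }
}
```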
