Map and HashCode - java

Why does a unique hashCode make a hash-based collection work faster? And what is the problem with a mutable hashCode?
I read about it here but didn't understand, so I read some other resources and ended up with this question.
Thanks.

Hashcodes don't have to be unique, but they work better if distinct objects have distinct hashcodes.
A common use for hashcodes is storing and looking up objects in data structures like HashMap. These collections store objects in "buckets", and the hashcode of the object being stored is used to determine which bucket it's stored in. This speeds up retrieval: when looking for an object, instead of having to look through all of the objects, the HashMap uses the hashcode to determine which bucket to look in, and it looks only in that bucket.
You asked about mutability. I think what you're asking about is the requirement that an object stored in a HashMap not be mutated while it's in the map, or preferably that the object be immutable. The reason is that, in general, mutating an object will change its hashcode. If an object were stored in a HashMap, its hashcode would be used to determine which bucket it gets stored in. If that object is mutated, its hashcode would change. If the object were looked up at this point, a different hashcode would result. This might point HashMap to the wrong bucket, and as a result the object might not be found even though it was previously stored in that HashMap.
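Here is a minimal sketch of that failure mode (the MutableKey class is hypothetical, purely for illustration):

    import java.util.HashMap;
    import java.util.Map;

    // A hypothetical key whose hashCode depends on mutable state.
    class MutableKey {
        int id;
        MutableKey(int id) { this.id = id; }
        @Override public boolean equals(Object o) {
            return o instanceof MutableKey && ((MutableKey) o).id == id;
        }
        @Override public int hashCode() { return id; }
    }

    public class MutationDemo {
        public static void main(String[] args) {
            Map<MutableKey, String> map = new HashMap<>();
            MutableKey key = new MutableKey(1);
            map.put(key, "value");   // stored in the bucket chosen by hashCode 1

            key.id = 2;              // mutate the key while it's in the map

            System.out.println(map.get(key));               // null: probes the wrong bucket
            System.out.println(map.get(new MutableKey(1))); // null: right bucket, but equals fails
            System.out.println(map.size());                 // 1: the entry is still there, unreachable
        }
    }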

Hash codes are not required to be unique; they just need a very low likelihood of collisions.
As to hash codes being immutable, that is required only if an object is going to be used as a key in a HashMap. The hash code tells the HashMap where to do its initial probe into the bucket array. If a key's hash code were to change, then the map would no longer look in the correct bucket and would be unable to find the entry.

hashCode() is basically a function that converts an object into a number. In the case of hash-based collections, this number is used to help look up the object. If this number changes, it means the hash-based collection may be storing the object in the wrong place, and can no longer retrieve it.
Uniqueness of hash values allows a more even distribution of objects within the internals of the collection, which improves performance. If everything hashed to the same value (the worst case), lookups degrade into a linear search.
The wikipedia article on hash tables provides a good read that may help explain some of this as well.

It has to do with the way items are stored in a hash table. A hash table will use the element's hash code to store and retrieve it. It's somewhat complicated to fully explain here but you can learn about it by reading this section: http://www.brpreiss.com/books/opus5/html/page206.html#SECTION009100000000000000000

Why is searching by hashing faster?
Let's say you have some unique objects as values, with a String as each one's key. Each key should be unique, so that when a key is searched for, you find the relevant object it holds as its value.
Now let's say you have 1000 such key-value pairs and you want to search for a particular key and retrieve its value. Without hashing, you would need to compare your key against every entry in the table until you find it.
But with hashing, you hash your key on insertion and put the corresponding object in a certain bucket. When you later want to search for a particular key, you hash that key to determine its bucket, go straight to that bucket, and pick out your object without having to search through all the key entries.
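A rough sketch of that idea in Java (the bucket count and names are made up for illustration):

    import java.util.ArrayList;
    import java.util.List;

    public class BucketLookupSketch {
        static final int BUCKET_COUNT = 16;

        // Map a key to a bucket index; Math.floorMod avoids negative indexes.
        static int bucketIndex(String key) {
            return Math.floorMod(key.hashCode(), BUCKET_COUNT);
        }

        public static void main(String[] args) {
            // 16 buckets, each holding the keys that hash into it.
            List<List<String>> buckets = new ArrayList<>();
            for (int i = 0; i < BUCKET_COUNT; i++) buckets.add(new ArrayList<>());

            // Insertion: each key goes into the bucket chosen by its hash.
            for (String key : new String[] {"alpha", "beta", "gamma", "delta"}) {
                buckets.get(bucketIndex(key)).add(key);
            }

            // Search: hash the key, go straight to one bucket, scan only that bucket.
            String wanted = "gamma";
            List<String> bucket = buckets.get(bucketIndex(wanted));
            System.out.println("Scanned " + bucket.size() + " entries instead of 4");
            System.out.println("Found: " + bucket.contains(wanted));
        }
    }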

hashCode is a tricky method. It is supposed to provide a shorthand to equality (which is what maps and sets care about). If many objects in your map share the same hashcode, the map will have to check equals frequently - which is generally much more expensive.
Check the javadoc for equals - that method is very tricky to get right even for immutable objects, and using a mutable object as a map key is just asking for trouble (since the object remains stored under its "old" hashcode).

As long as you are working with collections whose elements you retrieve by index (0, 1, 2, ..., collection.size()-1), you don't need a hashcode. However, when we are talking about associative collections like maps, or simply asking a collection whether it contains some element, we are talking about potentially expensive operations.
A hashcode is like a digest of the object: compact and cheap to compare. Comparing the cached int hashcode of every collection member costs far less than comparing each object property by property (more than one operation for sure). A hashcode needs to be like a fingerprint: one entity, one stable, immutable hashcode.

The basic idea of hashing is that if one is looking in a collection for an object whose hash code differs from that of 99% of the objects in that collection, one only need examine the 1% of objects whose hash code matches. If the hashcode differs from that of 99.9% of the objects in the collection, one only need examine 0.1% of the objects. In many cases, even if a collection has a million objects, a typical object's hash code will only match a very tiny fraction of them (in many cases, less than a dozen). Thus, a single hash computation may eliminate the need for nearly a million comparisons.
Note that it's not necessary for hash values to be absolutely unique, but performance may be very bad if too many instances share the same hash code. Note that what's important for performance is not the total number of distinct hash values, but rather the extent to which they're "clumped". Searching for an object in a collection of a million things in which half the items share one hash value and each remaining item has a different value will require examining about 250,000 items on average. By contrast, if there were 100,000 different hash values, each shared by ten items, searching for an object would require examining only about five items.

You can define a customized class extending HashMap and override its methods (get, put, remove, containsKey, containsValue) so they compare keys and values only with the equals method, then add some constructors. Overriding the hashCode method correctly is very hard.
I hope this helps anybody who wants an easy-to-use hashmap.

Related

Hashtable implementation in C and Java

In Java, HashMap and Hashtable both implement the Map interface and store key/value pairs using a hash function and an array/linked-list implementation. In C, a hash table can also be implemented using array/linked-list functionality, but there is no built-in concept of a key/value pair like a map.
So my question is: is a hash table implementation in C similar to Hashtable in Java, or is it closer to HashSet in Java (apart from the unique-elements-only condition)?
Both semantics (Hashtable and HashSet) can be implemented in C, but neither comes in the standard C library. You can find many different hash table implementations on the Internet, each with its own advantages and drawbacks. Implementing this yourself may prove difficult, as there are many traps and pitfalls.
I previously used BSD's red-black tree implementation. It's relatively easy to use once you start to understand how it works.
The really great thing about it is that you only have to copy one header file and then include it where it's needed; there is no need to link against libraries.
It has functionality similar to HashSet's: you can find elements by key with the RB_FIND() macro, enumerate elements with RB_FOREACH(), insert new ones with RB_INSERT(), and so on.
You can find more info in its man page or the source code itself.
The difference (in Java) between a Hashtable and a HashSet is in how the key used to calculate the hash value is selected. In a HashSet, the key is the stored instance itself, and the hashCode() method is applied to the complete instance (Object provides both the hashCode() and equals(Object) methods). With an external key, equals(Object) and hashCode() come from the separate key instance instead of from the stored data value. The two structures are closely related internally; in the JDK it is actually the set that is built on the map (HashSet is implemented on top of a HashMap), with the map publishing its internal entries through the Map.Entry<K,V> interface.
Implementing a hash table in C is not too difficult, but you first need to understand what the key is (if external), the differences between the key and the value, the differences between calculating a hashCode() and comparing for equality, how you are going to distinguish the key from the value, and how you manage keys and hashes internally in order to handle collisions.
I recently started an implementation of a hash table in C (not yet finished). My hash_table constructor needs to store in the instance record a pointer to an equals comparison routine (to check for equality, much as Java requires an object's equals() method); this lets you detect collisions (when two entries have the same hash but compare as different). It also stores the hash function used on keys to get the hash. In my implementation I will probably store the full hash value returned by the hash function (before fitting it to the table's size), so I can grow the table and simplify the placement of elements in the new hash table once grown, without having to recalculate all the hashes again.
I don't know if these hints are of help to you, but that's my two cents. :)
My implementation uses a table of prime numbers to select the capacity (approximately doubling the size at each step) and redimensions the table when the number of collisions begins to be unacceptable (whatever that means to you; I have no clear idea yet. It is a time-consuming operation, but it happens rarely, so it must be carefully specified. It is also something Java's Hashtable does.) But if you don't plan to grow your hash table, or plan to do it manually, the implementation is easier: just add the number of entries to the constructor parameter list, and create some grow_hash_table(new_cap) method.

How does HashMap really work?

According to this question, how-does-a-hashmap-work-in-java, and this one:
Many key-value pairs can be stored in the same bucket (after calculating the index of the bucket using the hash), and when we call get(key) it walks over the linked list and tests each key using the equals method.
That doesn't sound really optimized to me; doesn't it compare the hashCodes in the linked list before using equals?
If the answer is NO:
It means most of the time a bucket contains only one node. Could you explain why? Because, by this logic, many different keys could share the same bucket index.
How does the implementation ensure a good distribution of keys? Does this mean the bucket table size is relative to the number of keys?
And even if the bucket table size equals the number of keys, how does the hashCode function ensure a good distribution of keys? Isn't it a random distribution?
Could we have more details?
The implementation is open source, so I would encourage you to just read the code for any specific questions. But here's the general idea:
The primary responsibility for good hashCode distribution lies with the keys' class, not with the HashMap. If the keys have a hashCode() method with bad distribution (for instance, return 0;), then the HashMap will perform badly.
HashMap does do a bit of "re-hashing" to ensure slightly better distribution, but not much (see HashMap::hash)
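For reference, the spreading step in OpenJDK 8's HashMap looks roughly like the snippet below (worth checking against the source of your own JDK version): it XORs the high 16 bits of the hashCode into the low 16 bits, because bucket selection only uses the low bits.

    // Paraphrase of OpenJDK 8's HashMap.hash: fold the high bits into
    // the low bits so keys differing only in their upper bits don't
    // all land in the same bucket.
    static int hash(Object key) {
        int h;
        return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
    }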
On the get side of things, a couple of checks are made on each element in the bucket (which, yes, is implemented as a linked list).
First, the HashMap compares the element's hashCode with the incoming key's hashCode. This operation is quick, because the element's hashCode was cached at put time. This guards against elements that have different hashCodes (and are thus unequal, by the contract of hashCode and equals established by Object) but happen to fall into the same bucket (remember, bucket indexes are basically hashCode % buckets.length).
If that succeeds, then the HashMap checks equals explicitly to ensure they're really equal. Remember that equality implies the same hashCode, but the same hashCode does not require equality (and can't, since some classes have a potentially infinite number of different values - like String - but there are only a finite number of possible hashCode values).
The reason for the double-checking of both hashCode and equals is to be both fast and correct. Consider two keys that have a different hashCode, but end up in the same HashMap bucket. For instance, if key A has hashCode=7 and B has hashCode=14, and there are 7 buckets, then they'll both end up in bucket 0 (7 % 7 == 0, and 14 % 7 == 0). Checking the hashCodes there is a quick way of seeing that A and B are unequal. If you find that the hashCodes are equal, then you make sure it's not just a hashCode collision by calling equals. This is just an optimization, really; it's not required by the general hash map algorithm.
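Spelled out in code (the bucket count of 7 is just for illustration; the real HashMap uses power-of-two table sizes):

    public class SameBucketDemo {
        public static void main(String[] args) {
            int buckets = 7;
            int hashA = 7, hashB = 14;            // different hashCodes...
            System.out.println(hashA % buckets);  // 0
            System.out.println(hashB % buckets);  // 0: same bucket, so comparing the
                                                  // cached hashCodes quickly proves A != B
        }
    }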
To avoid having to make multiple comparisons in linked lists, the number of buckets in a HashMap is generally kept large enough that most buckets contain only one item. By default the java.util.HashMap tries to maintain enough buckets that the number of items is only 75% of the number of buckets.
Some of the buckets may still contain more than one item - what's called a "hash collision" - and other buckets will be empty. But on average, most buckets with items in them will contain only one item.
The equals() method will always be used at least once to determine if the key is an exact match. Note that the equals() method is usually at least as fast as the hashCode() method.
A good distribution of keys is maintained by a good hashCode() implementation; the HashMap can do little to affect this. A good hashCode() method is one where the returned hash has as random a relationship to the value of the object as possible.
For an example of a bad hashing function: once upon a time, the String.hashCode() method depended on only part of the string. The problem was that sometimes you want to store a bunch of strings in a HashMap that all start the same - for example, the URLs of all the pages on a single web site - resulting in an inordinately high proportion of hash collisions. I believe String.hashCode() was later modified to fix this.
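To see why that is bad, here is a toy prefix-only hash (hypothetical, not the actual old String.hashCode) that reproduces the same failure mode:

    public class PrefixHashDemo {
        // Toy hash that samples only the first 4 characters.
        static int prefixHash(String s) {
            int h = 0;
            for (int i = 0; i < Math.min(4, s.length()); i++) {
                h = 31 * h + s.charAt(i);
            }
            return h;
        }

        public static void main(String[] args) {
            String a = "http://example.com/page1";
            String b = "http://example.com/page2";
            // Same prefix, same toy hash: a guaranteed collision in any hash table.
            System.out.println(prefixHash(a) == prefixHash(b)); // true
            // The current String.hashCode uses every character:
            System.out.println(a.hashCode() == b.hashCode());   // false
        }
    }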
doesn't it compare hashCodes of the linked list instead of using equals?
It's not required. The hashcode is used to determine the bucket number, be it a put or a get operation. Once you know the bucket number from the hashcode and find a linked list there, you know you need to iterate over it and check keys for equality to find the exact key, so there is no strict need for a hashcode comparison at that point.
That's why a hashcode should be as unique as it can be, so that lookups are fast.
it means most of the time the bucket contains only 1 node
No. It depends on the uniqueness of the hashcode. If two key objects have the same hashcode but are not equal, the bucket will contain two nodes.
When we pass a key and value object to the put() method of a Java HashMap, the implementation calls the hashCode method on the key object and applies the returned hashcode to its own hashing function to find a bucket location for storing the entry object. An important point to mention is that HashMap stores both the key and value object as a Map.Entry in the bucket, which is essential to understanding the retrieval logic.
While retrieving the value for a key, if the hashcode is the same as some other key's, the bucket location will be the same and a collision occurs in the HashMap. Since HashMap uses a linked list to store objects, this entry (an object of Map.Entry comprising a key and a value) is stored in the linked list.
The good distribution of the keys depends on the implementation of the hashcode method. This implementation must obey the general contract for hashCode:
If two objects are equal by the equals() method, then the hashcodes returned by their hashCode() methods must be the same.
Whenever hashCode() is invoked on the same object more than once within a single execution of an application, it must return the same integer, provided no information or fields used in equals and hashCode are modified. This integer is not required to be the same across multiple executions of the application, though.
If two objects are not equal by the equals() method, it is not required that their hashcodes be different. Though it's always good practice to return different hashcodes for unequal objects: different hashcodes for distinct objects can improve the performance of a hashmap or hashtable by reducing collisions. A small example class illustrating all three rules follows.
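A typical key class that satisfies the contract might look like this (Point is just an example class):

    import java.util.Objects;

    // An immutable example key whose equals and hashCode use the same
    // fields, satisfying the contract described above.
    public final class Point {
        private final int x;
        private final int y;

        public Point(int x, int y) { this.x = x; this.y = y; }

        @Override public boolean equals(Object o) {
            if (this == o) return true;
            if (!(o instanceof Point)) return false;
            Point p = (Point) o;
            return x == p.x && y == p.y;
        }

        // Equal points always hash the same; unequal points usually differ.
        @Override public int hashCode() { return Objects.hash(x, y); }
    }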
You can visit this GitHub repository: https://github.com/devashish234073/alternate-hash-map-implementation-Java/blob/master/README.md
You can understand the working of HashMap through a basic implementation and examples. The README.md explains it all.
Including some portion of the example here:
Suppose I have to store the following key-value pairs.
(key1,val1)
(key2,val2)
(key3,val3)
(....,....)
(key99999,val99999)
Let's say our hash algorithm produces values only between 0 and 5.
So first we create a rack with 6 buckets numbered 0 to 5.
Storing:
To store (keyN,valN):
1. Get the hash of 'keyN'.
2. Suppose we got 2.
3. Store (keyN,valN) in rack 2.
Searching:
To search for keyN:
1. Get the hash of keyN.
2. Let's say we get 2.
3. Traverse rack 2, find the key, and return its value.
Thus for N keys, if we were to store them linearly, it would take up to N comparisons to find the last element; but with a hashmap whose hash algorithm generates 6 values, we have to do only about N/6 comparisons (with hash values equally dispersed).

Most efficient key object type in a hash map?

When using a HashMap, how much does the object type matter for element retrieval when it comes to speed? Say I use a loop to iterate through possible keys of a large hash map. What would be the most efficient key type I could use?
As of now I am using a String as the key object type for simplicity's sake. While coding, this question popped into my head and struck my curiosity. I attempted searching for this question online, but couldn't quite find the answer I was looking for. Thanks!
Key hashCode() and equals() should be fast
hashCode() should be well-distributed to minimize hash collisions
The hash map would ask your key for a hashCode(). If the time taken to generate a hash code is unreasonable, then insertion and retrieval times for such objects will be high. Take java.net.URL, for example: its hashCode method performs a DNS lookup. Such objects would not make good keys for a hash map.
There is no universal answer to which is the best key, since there is no best key. The best key for use in your hash map is one that you need for retrieval. Just make sure the key's hashCode() is quick and uses the int space appropriately.
What matters is the implementation of the equals and hashCode methods. See the following: What issues should be considered when overriding equals and hashCode in Java?
As these functions are utilized in hash operations, their efficiency comes into play when you operate on your collection.
As a sidenote, keep in mind the point made in the reference link:
Make sure that the hashCode() of the key objects that you put into the collection never changes while the object is in the collection. The bulletproof way to ensure this is to make your keys immutable, which has also other benefits.
What matters in your case is the speed of the hashCode method of the elements and of the equals method. Using Integers is fine, as they don't require any special computation for the hash value. Strings are also ok, because the hash value is cached internally, although they perform slower with equals.
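Both points are easy to check: Integer's hash is simply its own value, and String computes its hash once from every character and then serves it from a private cache field (a small sketch):

    public class KeyHashDemo {
        public static void main(String[] args) {
            // Integer.hashCode is the int value itself: no computation at all.
            System.out.println(Integer.valueOf(42).hashCode()); // 42

            // String computes its hash from every character on first use,
            // then caches it, so repeated lookups with the same key are cheap.
            String key = "some map key";
            int first = key.hashCode(); // computed and cached
            int again = key.hashCode(); // served from the cache
            System.out.println(first == again); // true
        }
    }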
Are you trying to retrieve values from the HashMap via the get method or by iterating over it? As for the get method, all the people above have answered this.
If you are iterating over a HashMap via the entrySet method, the type of the keys in your HashMap doesn't matter. Besides, since each iteration hands you an entry of the entrySet, separately looking up values becomes unnecessary. Also note that entrySet is generally preferable to the values and keySet methods, as those both internally use the entrySet iterator and return either the key or the value of each entry.
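For example (a small sketch; both loops print the same pairs, but the second pays for an extra hash lookup per key):

    import java.util.HashMap;
    import java.util.Map;

    public class IterationDemo {
        public static void main(String[] args) {
            Map<String, String> map = new HashMap<>();
            map.put("a", "1");
            map.put("b", "2");

            // Preferred: each entry already carries its key and value.
            for (Map.Entry<String, String> e : map.entrySet()) {
                System.out.println(e.getKey() + " -> " + e.getValue());
            }

            // Works, but performs a full get() lookup on every iteration.
            for (String key : map.keySet()) {
                System.out.println(key + " -> " + map.get(key));
            }
        }
    }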

Is a hashCode() method that returns different values for every distinct object the most efficient approach?

I understand that returning the same value for each object is inefficient, but is it the most efficient approach to return distinct values for distinct instances?
If each object gets a different hashCode value then isn't this just like storing them in an ArrayList?
hashCode must be consistent with equals; that's the number one priority. Beyond that, if no two objects are equal, distinct hashcodes would be desirable. Bear in mind that if your object has more than 32 bits of state, it is theoretically impossible to give every distinct object a distinct hashcode.
No, it's not actually.
Assuming your objects are going to be stored into a HashMap (or Set... doesn't matter, we'll use HashMap here for simplicity), you want your hashCode method to return a result in a way that distributes the objects as evenly as possible.
Hashcodes should ideally be unique for objects that are not equal, although you can't guarantee this will always be true.
On the other hand, if a.equals(b) is true, then a.hashCode() == b.hashCode(). This is known as the Object Contract.
Besides this, there are performance issues also. Each time two different objects have the same hashCode, they're mapped to the same position in the HashMap (aka, they collide). This means that the HashMap implementation has to handle this collision, which is much more complex than simply storing and retrieving an entry.
There are also plenty of algorithms that rely on the fact that values are distributed evenly across a Map, and the performance deteriorates rapidly when the number of collisions increase (some algorithms assume a perfect hash function, meaning that no collisions ever occur, no two different values get mapped to the same position on the Map).
Good examples of this are probabilistic algorithms and data-structures such as Bloom Filters (to use an example that appears to be in fashion these days).
You want hashCode() as varied as possible to avoid collisions. If there are no collisions, each key or element will be stored in the underlying array on its own. (A bit like an ArrayList)
The problem is that even if the hashCodes are different you can still get collisions. This happens because you don't have a bucket for every possible hashCode, so the value has to be reduced to a smaller range: e.g. if you have 16 buckets, the range is 0 to 15. How it does this is complicated, but I am sure you can see that even if all the hashCodes are different, they can still result in a collision (though it's unlikely).
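With a power-of-two table size the reduction is a simple bit mask, which makes it easy to see how two very different hashCodes can still land in the same bucket (a sketch of the index selection only; the real HashMap also spreads the hash bits first):

    public class BucketMaskDemo {
        public static void main(String[] args) {
            int buckets = 16;        // table length; a power of two in HashMap
            int mask = buckets - 1;  // 0b1111: keeps only the low 4 bits

            int hashA = 0x12340005;  // two very different hashCodes...
            int hashB = 0x9ABC0005;
            System.out.println(hashA & mask); // 5
            System.out.println(hashB & mask); // 5: same bucket despite distinct hashes
        }
    }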
It is a concern for denial of service attacks. Normally strings have a low collision rate; however, you can deliberately construct strings that have the same hashcode. This question gives a list of Strings with a hashCode of 0: Why doesn't String's hashCode() cache 0?
The hashCode() method isn't suited for placing objects in an ArrayList.
Although it does return the same value for the same object every time, two objects could quite possibly have the same hashcode.
Therefore the hashCode method is used on the key object when storing items in, for example, a HashMap.
The HashMap class's major data structure is this:
Entry[] table;
It's important to note that the Entry class (a static package-private class that implements Map.Entry) is actually a linked-list-style structure.
When you try to put an element, first the key's hashcode is computed and then transformed into a bucket number. The "bucket" is the index into the above array.
Once you find the bucket, a linear search is done inside of that bucket for the exact key (if you don't believe me, look at the HashMap code). If it is found, the value is replaced. If not, the key/value pair is appended to the end of that bucket.
For this reason, hashcode() values need not be unique; however, the more unique and evenly distributed they are, the better your odds of having the values evenly distributed among the buckets. If your hashcode() method returned the same value for all instances, they'd all end up in the same bucket, rendering your get() method one long linear search, yielding O(N).
The more distributed the values are, the smaller the buckets, and thus the smaller the linear search component would be. Unique values would yield constant time lookup O(1).
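A stripped-down sketch of that structure (no resizing, no null keys, no cached hashes; the real HashMap is considerably more involved):

    // An array of buckets, each a singly linked list of entries.
    public class TinyHashMap<K, V> {
        private static class Entry<K, V> {
            final K key;
            V value;
            Entry<K, V> next;
            Entry(K key, V value, Entry<K, V> next) {
                this.key = key; this.value = value; this.next = next;
            }
        }

        @SuppressWarnings("unchecked")
        private final Entry<K, V>[] table = new Entry[16];

        private int indexFor(K key) {
            return Math.floorMod(key.hashCode(), table.length);
        }

        public void put(K key, V value) {
            int i = indexFor(key);
            // Linear search within the bucket: replace the value if the key exists.
            for (Entry<K, V> e = table[i]; e != null; e = e.next) {
                if (e.key.equals(key)) { e.value = value; return; }
            }
            // Not found: prepend a new entry to the bucket's list.
            table[i] = new Entry<>(key, value, table[i]);
        }

        public V get(K key) {
            for (Entry<K, V> e = table[indexFor(key)]; e != null; e = e.next) {
                if (e.key.equals(key)) return e.value;
            }
            return null;
        }
    }

The fewer keys share a bucket, the shorter these linear scans are, which is exactly the point made above.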

100% Accurate key,value HashMap

According to the webpage http://www.javamex.com/tutorials/collections/hash_codes_advanced.shtml
"hash codes do not uniquely identify an object. They simply narrow down the choice of matching items, but it is expected that in normal use, there is a good chance that several objects will share the same hash code. When looking for a key in a map or set, the fields of the actual key object must therefore be compared to confirm a match."
First, does this mean that a key used in a hash map may point to more than one value? I assume that it does.
If this is the case, how can I create an "always accurate" hashmap or a similar key-value object?
My keys need to be Strings and my values need to be Strings as well. I need around 4,000 to 10,000 key-value pairs.
A standard hashmap will guarantee unique keys. A hashcode is not equivalent to a key; it is just a means of quickly reducing the set of possible matches down to the objects (strings in your case) that have a specific hashcode.
First, let it be noted: Java's HashMaps work. Assuming the hash function is implemented correctly, you'll always get the same value for the same key.
Now, in a hash map, the key's hash code determines the bucket in which the value will be placed (read about hash tables if you're not familiar with the term). The performance of the map depends on how well the hash codes are distributed and how balanced the number of values in every bucket is. Since you're using String, rest assured: HashMap will be "always accurate".
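You can even verify this with a deliberate collision: "Aa" and "BB" are a well-known pair of strings sharing the same hashCode, yet a HashMap keeps them perfectly separate:

    import java.util.HashMap;
    import java.util.Map;

    public class AccuracyDemo {
        public static void main(String[] args) {
            System.out.println("Aa".hashCode() == "BB".hashCode()); // true (both 2112)

            Map<String, String> map = new HashMap<>();
            map.put("Aa", "first");
            map.put("BB", "second");

            // Same bucket, but equals() tells the keys apart.
            System.out.println(map.get("Aa")); // first
            System.out.println(map.get("BB")); // second
        }
    }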
