How do I search a certain bucket in a hashing solution to find a key? I am having trouble figuring out how to see if my key is already in a given bucket number. I don't understand how to read buckets in an array.
I am writing my own hash data structure that uses buckets, not Java's.
Once you have found the bucket where the item should be, based on its hash code, you then have to look for the item in question among all the objects in that bucket. These objects all map to the same bucket index (their hash codes need not be identical, only the bucket index derived from them). So you have to actually compare these objects with the .equals method to see whether the item you are looking for is there.
How you manage this group of items that all share the same bucket is up to you. You might have a list, or a sub array, or any data structure that holds a collection of objects.
In fact, you don't necessarily need to hold them all in the same bucket at all. There are schemes called open addressing (sometimes, confusingly, "closed hashing") where items that collide in the target bucket 'spill' out of it and occupy successive buckets in the top-level array.
Without knowing your exact data structure I can't be more specific. But basically you use hashCode to get you to the top bucket, then you use equals to find the object within the group of objects that have the same hashcode.
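To make the two-step lookup concrete, here is a minimal sketch assuming each bucket is a List of key-value entries; the names `Entry` and `findInBucket` are illustrative, not from any real library:

```java
import java.util.ArrayList;
import java.util.List;

public class BucketLookup {
    static class Entry {
        final Object key, value;
        Entry(Object key, Object value) { this.key = key; this.value = value; }
    }

    // hashCode narrows the search to one bucket; equals finds the exact key.
    static Object findInBucket(List<Entry>[] buckets, Object key) {
        int index = Math.floorMod(key.hashCode(), buckets.length); // pick the bucket
        for (Entry e : buckets[index]) {
            if (e.key.equals(key)) return e.value; // compare candidates with equals
        }
        return null; // not in its bucket, hence not in the map
    }

    static List<Entry>[] sampleBuckets() {
        @SuppressWarnings("unchecked")
        List<Entry>[] buckets = new List[4];
        for (int i = 0; i < buckets.length; i++) buckets[i] = new ArrayList<>();
        int index = Math.floorMod("apple".hashCode(), buckets.length);
        buckets[index].add(new Entry("apple", "fruit"));
        return buckets;
    }

    public static void main(String[] args) {
        List<Entry>[] buckets = sampleBuckets();
        System.out.println(findInBucket(buckets, "apple")); // fruit
        System.out.println(findInBucket(buckets, "pear"));  // null
    }
}
```

Note the use of `Math.floorMod` rather than `%`: hash codes can be negative, and `floorMod` keeps the index in range.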
Related
I read that HashMap has a backing array where entries are stored (marked with a bucket number, initial size 16). Arrays are ordered, and I can call get(n) to get the element at the nth position. So why is HashMap unordered, and why does it have no get(n) method?
It depends on your view of what ordered means.
Indeed, HashMaps internally use an array, or another collection with a fixed ordering. However, that order has nothing to do with insertion order or anything similar. The elements are ordered, for example, by their hash values, which have nothing to do with any natural ordering of the elements themselves.
So HashMaps do have something like a get(n) method, if you think of n as the hash value of the key. The method is called get(*key*): it first computes the hash value of the given key and then looks the value up in the internal structure using that hash value.
Note that HashSets work much the same way as HashMaps: they use the same bucket technique. But instead of inserting a bare element, a map inserts an entry that is identified by the key and additionally holds a value.
Just as a small overview: a hash function is a function that, given an object, computes a small value (the hash value) from the object's properties. The computation is usually fast, and a lookup in the internal array at the position given by the hash value is therefore also fast.
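A tiny sketch of that idea, with an arbitrary bucket count (the `slotFor` helper is hypothetical, just for illustration):

```java
public class HashIdea {
    // Fold a key's (possibly large, possibly negative) hash value
    // into a slot index in 0..buckets-1.
    static int slotFor(String key, int buckets) {
        int h = key.hashCode();            // fast to compute from the key's contents
        return Math.floorMod(h, buckets);  // maps the hash into the array's range
    }

    public static void main(String[] args) {
        System.out.println(slotFor("banana", 16)); // some slot in 0..15
    }
}
```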
To your specific question: as a user of a HashMap you are generally not interested in which elements hide behind hash value 1 or 2 and so on, which is why no such method was included. If you truly need it for some special application, you can try to use reflection to access the internals of your HashMap, or write a small wrapper around the class that provides such a method.
A HashMap is divided into individual buckets, held in an array. Each bucket is initially backed by a linked list, but if a bucket gets too large it is converted to a tree structure sorted by hash codes. That fact alone destroys any guarantee the map could make about preserving insertion order.
If you'd like to know more about how it's implemented, you can look at my answer to this question: HashMap Java 8 implementation
According to this question, how-does-a-hashmap-work-in-java, many key-value pairs can be stored in the same bucket (after calculating the bucket index from the hash), and when we call get(key) the map walks that bucket's linked list and tests each entry with the equals method.
That doesn't sound very optimized to me; doesn't it compare the hashCodes of the entries in the linked list before using equals?
If the answer is no:
It means most of the time the bucket contains only one node; could you explain why? According to this logic, many different keys could end up with the same bucket index.
How does the implementation ensure a good distribution of keys? This probably means the bucket table size is relative to the number of keys.
And even if the bucket table size equals the number of keys, how does the hashCode function ensure a good distribution of keys? Isn't it a random distribution?
Could we have more details?
The implementation is open source, so I would encourage you to just read the code for any specific questions. But here's the general idea:
The primary responsibility for good hashCode distribution lies with the keys' class, not with the HashMap. If the keys have a hashCode() method with bad distribution (for instance, return 0;), then the HashMap will perform badly.
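Here is a sketch of exactly that pathological case: a key class whose hashCode() always returns 0 (the `BadKey` class is invented for this example). The map still behaves correctly, because equals() keeps the keys apart, but every entry lands in the same bucket, so lookups degrade from near O(1) toward O(n):

```java
import java.util.HashMap;
import java.util.Map;

public class BadHash {
    static class BadKey {
        final String name;
        BadKey(String name) { this.name = name; }
        @Override public int hashCode() { return 0; } // terrible distribution on purpose
        @Override public boolean equals(Object o) {
            return o instanceof BadKey && ((BadKey) o).name.equals(name);
        }
    }

    static Map<BadKey, Integer> fill(int n) {
        Map<BadKey, Integer> map = new HashMap<>();
        for (int i = 0; i < n; i++) map.put(new BadKey("k" + i), i);
        return map;
    }

    public static void main(String[] args) {
        Map<BadKey, Integer> map = fill(1000);
        System.out.println(map.get(new BadKey("k500"))); // 500: still correct
        System.out.println(map.size());                  // 1000: only speed suffers
    }
}
```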
HashMap does do a bit of "re-hashing" to ensure slightly better distribution, but not much (see HashMap::hash)
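That re-hashing step is small: in OpenJDK 8, HashMap::hash just XORs the high 16 bits of the key's hashCode into the low 16 bits (paraphrased below), so that table indexing, which uses only the low bits, still sees some influence from the high bits:

```java
public class SpreadHash {
    // Paraphrase of OpenJDK 8's HashMap.hash(Object).
    static int hash(Object key) {
        int h;
        return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
    }

    public static void main(String[] args) {
        // Two hash codes that differ only above bit 15 would collide
        // in a 16-bucket table; spreading separates them.
        int a = 0x10000, b = 0x00000;
        int tableMask = 15; // a 16-bucket table indexes with (hash & 15)
        System.out.println((a & tableMask) == (b & tableMask));             // true: raw collision
        System.out.println((hash(a) & tableMask) == (hash(b) & tableMask)); // false after spreading
    }
}
```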
On the get side of things, a couple of checks are made on each element in the bucket (which, yes, is implemented as a linked list)
First, the HashMap checks the element's hashCode with the incoming key's hashCode. This is because this operation is quick, and the element's hashCode was cached at put time. This guards against elements that have different hashCodes (and are thus unequal, by the contract of hashCode and equals established by Object) but happen to fall into the same bucket (remember, bucket indexes are basically hashCode % buckets.length)
If that succeeds, then the HashMap checks equals explicitly to ensure the keys really are equal. Remember that equality implies the same hashCode, but the same hashCode does not require equality (and can't, since some classes have a potentially infinite number of different values -- like String -- while there are only a finite number of possible hashCode values)
The reason for the double-checking of both hashCode and equals is to be both fast and correct. Consider two keys that have a different hashCode, but end up in the same HashMap bucket. For instance, if key A has hashCode=7 and B has hashCode=14, and there are 7 buckets, then they'll both end up in bucket 0 (7 % 7 == 0, and 14 % 7 == 0). Checking the hashCodes there is a quick way of seeing that A and B are unequal. If you find that the hashCodes are equal, then you make sure it's not just a hashCode collision by calling equals. This is just an optimization, really; it's not required by the general hash map algorithm.
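The arithmetic from that example can be checked directly; this trivial sketch shows the raw collision and why the cached-hashCode comparison still distinguishes the keys cheaply:

```java
public class DoubleCheck {
    public static void main(String[] args) {
        int buckets = 7;
        int hashA = 7, hashB = 14;
        System.out.println(hashA % buckets); // 0
        System.out.println(hashB % buckets); // 0 -> same bucket as hashA
        System.out.println(hashA == hashB);  // false -> unequal keys, no equals() call needed
    }
}
```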
To avoid having to make multiple comparisons in linked lists, the number of buckets in a HashMap is generally kept large enough that most buckets contain only one item. By default the java.util.HashMap tries to maintain enough buckets that the number of items is only 75% of the number of buckets.
Some of the buckets may still contain more than one item - what's called a "hash collision" - and other buckets will be empty. But on average, most buckets with items in them will contain only one item.
The equals() method will be used at least once on a successful lookup, to confirm the key is an exact match. Note that the equals() method is usually at least as fast as the hashCode() method.
A good distribution of keys is maintained by a good hashCode() implementation; the HashMap can do little to affect this. A good hashCode() method is one where the returned hash has as random a relationship to the value of the object as possible.
For an example of a bad hashing function: once upon a time, the String.hashCode() method sampled only some of the string's characters rather than all of them. The problem was that sometimes you want to store a bunch of strings in a HashMap that share long common prefixes - for example, the URLs of all the pages on a single web site - resulting in an inordinately high proportion of hash collisions. String.hashCode() was later changed to use every character of the string.
doesn't it compare hashCodes of the linked list instead of using equals
It's not required. The hash code is used to determine the bucket number, for both put and get operations. Once you have used the hash code to find the bucket and see that there is a linked list there, you know you need to iterate over it and check for equality to find the exact key, so no hashCode comparison is needed at that point.
That's why the hash code should be as unique as it can be, so that lookups are fast.
it means most of the time the bucket contains only 1 node
No. It depends on the uniqueness of the hash codes. If two key objects have the same hash code but are not equal, the bucket will contain two nodes.
When we pass a key and value object to the put() method on a Java HashMap, the implementation calls hashCode() on the key object and applies the returned hash code to its own hashing function to find a bucket location for storing the entry. An important point is that HashMap stores both the key and the value object as a Map.Entry in the bucket, which is essential to understanding the retrieval logic.
While retrieving the value for a key, if its hash code is the same as some other key's, the bucket location will be the same and a collision occurs. Since HashMap uses a linked list to store entries within a bucket, this entry (a Map.Entry object comprising the key and value) is stored in the linked list.
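A quick way to see this in action is the well-known pair "Aa" and "BB": two distinct Strings with the same hashCode. They land in the same bucket, yet equals() keeps their entries apart:

```java
import java.util.HashMap;
import java.util.Map;

public class CollisionDemo {
    public static void main(String[] args) {
        // "Aa" and "BB" genuinely collide under String.hashCode().
        System.out.println("Aa".hashCode() == "BB".hashCode()); // true

        Map<String, Integer> map = new HashMap<>();
        map.put("Aa", 1);
        map.put("BB", 2);
        System.out.println(map.get("Aa")); // 1
        System.out.println(map.get("BB")); // 2 -> both retrievable despite the collision
    }
}
```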
The good distribution of the keys depends on the implementation of the hashCode method. That implementation must obey the general contract for hashCode:
If two objects are equal according to the equals() method, then the hash codes returned by their hashCode() methods must be the same.
Whenever hashCode() is invoked on the same object more than once within a single execution of an application, it must return the same integer, provided no information or fields used in equals and hashCode are modified. This integer is not required to be the same across multiple executions of the application, though.
If two objects are not equal according to the equals() method, it is not required that their hash codes differ. It is still good practice to return different hash codes for unequal objects: distinct hash codes for distinct objects can improve the performance of a HashMap or Hashtable by reducing collisions.
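The contract above can be sketched with a small key class (the `Point` class here is invented for illustration): hashCode is derived from exactly the fields that equals compares, so equal objects are guaranteed equal hash codes:

```java
import java.util.Objects;

public class Point {
    final int x, y;
    Point(int x, int y) { this.x = x; this.y = y; }

    @Override public boolean equals(Object o) {
        if (!(o instanceof Point)) return false;
        Point p = (Point) o;
        return p.x == x && p.y == y;
    }

    @Override public int hashCode() {
        return Objects.hash(x, y); // uses the same fields equals compares
    }

    public static void main(String[] args) {
        Point a = new Point(1, 2), b = new Point(1, 2);
        System.out.println(a.equals(b));                  // true
        System.out.println(a.hashCode() == b.hashCode()); // true, as the contract requires
    }
}
```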
You can visit this git-hub repository "https://github.com/devashish234073/alternate-hash-map-implementation-Java/blob/master/README.md".
You can understand the working of HashMap from a basic implementation and examples there; the README.md explains it all.
Including some portion of the example here:
Suppose I have to store the following key-value pairs.
(key1,val1)
(key2,val2)
(key3,val3)
(....,....)
(key99999,val99999)
Let's say our hash algorithm produces values only between 0 and 5.
So first we create a rack with 6 buckets numbered 0 to 5.
Storing:
To store (keyN, valN):
1. Get the hash of 'keyN'.
2. Suppose we got 2.
3. Store (keyN, valN) in rack 2.
Searching:
For searching keyN:
1. Get the hash of keyN.
2. Let's say we get 2.
3. Traverse rack 2, find the key, and return its value.
Thus for N keys, if we stored them linearly it would take up to N comparisons to find the last element, but with a hash map whose hash algorithm generates 6 values we only have to do about N/6 comparisons (with hash values equally dispersed).
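The rack-of-buckets scheme above can be sketched as follows; the class and method names are illustrative, and the "hash algorithm" is simply the key's hashCode folded into 0..5:

```java
import java.util.ArrayList;
import java.util.List;

public class Rack {
    static final int BUCKETS = 6; // rack numbered 0 to 5, as in the example

    @SuppressWarnings("unchecked")
    static final List<String[]>[] rack = new List[BUCKETS];
    static { for (int i = 0; i < BUCKETS; i++) rack[i] = new ArrayList<>(); }

    static int toyHash(String key) {
        return Math.floorMod(key.hashCode(), BUCKETS); // always in 0..5
    }

    static void store(String key, String val) {
        rack[toyHash(key)].add(new String[] { key, val }); // drop into its bucket
    }

    static String search(String key) {
        for (String[] pair : rack[toyHash(key)]) { // traverse only one bucket
            if (pair[0].equals(key)) return pair[1];
        }
        return null;
    }

    public static void main(String[] args) {
        store("key1", "val1");
        store("key2", "val2");
        System.out.println(search("key2")); // val2
        System.out.println(search("key3")); // null
    }
}
```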
When we create a collection (ArrayList, HashMap) in Java, does Java internally create some kind of index for faster retrieval of data? In Oracle we have to manually create indexes, but what technique (if any) is used in Java?
For ArrayList, each Object has a unique index (even duplicate objects).
An object can easily be accessed by its index using ArrayList.get(). The index is based on the order objects are added (assuming you haven't sorted the ArrayList or otherwise changed the order). When an object is removed from an ArrayList, all elements after it (with a larger index) are shifted to the left, so that each of their indices is decremented by one.
A HashMap uses a slightly more complex indexing scheme...
For a HashMap, all indexing information is hidden from you first of all, so you don't really need to know this unless you want to understand its internal workings (which is a good thing!) It does use indexing however... HashMaps use an array of Entrys (its own implementation of Map.Entry) to store information. Entry represents a node in a linked list (not to be confused with the object java.util.LinkedList) and it stores a key, a value, and the next node in the linked list.
The index of an entry in a HashMap is simply h & (length - 1) where h is the hashCode of the key, passed through a custom hashing method internal to the java.util package (you won't be able to access it), and length is a power-of-two integer representing the size of the array of Entrys (this will automatically grow if need be).
Of course there may be some collisions if two keys end up computing the same hash. This is why HashMap uses an array of linked lists. In case of a collision, where two Entrys have the same hash, one can be tagged to the end of the other.
To get an object in HashMap, the index is calculated from the key you provide through get(key) and the relevant Entry is retrieved from the array of Entrys. Now that the map has the first node in a linked list, it will iterate over all elements of this linked list until it finds the key equal to the key you provided.
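The index formula mentioned above is easy to verify: with a power-of-two table length, `h & (length - 1)` gives the same slot as `h % length` for non-negative h, but as a single cheap bit operation, which is one reason HashMap keeps its table size a power of two:

```java
public class IndexCalc {
    public static void main(String[] args) {
        int length = 16; // power-of-two table size
        int h = 12345;   // an arbitrary (spread) hash value
        System.out.println(h & (length - 1)); // 9
        System.out.println(h % length);       // 9 -> same slot, but & is cheaper
    }
}
```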
Yes, Java does hide these implementation details, but to deliver the performance it provides, there is indexing machinery used internally. When you refer to "Oracle", I believe you mean the database software, not something analogous in the Java language.
Why does a unique hashCode make a hash-based collection work faster? And why should hashCode not be mutable?
I read it here but didn't understand, so I read on some other resources and ended up with this question.
Thanks.
Hashcodes don't have to be unique, but they work better if distinct objects have distinct hashcodes.
A common use for hashcodes is storing and looking up objects in data structures like HashMap. These collections store objects in "buckets", and the hashcode of the object being stored is used to determine which bucket it's stored in. This speeds up retrieval: when looking for an object, instead of having to look through all of the objects, the HashMap uses the hashcode to determine which bucket to look in, and it looks only in that bucket.
You asked about mutability. I think what you're asking about is the requirement that an object stored in a HashMap not be mutated while it's in the map, or preferably that the object be immutable. The reason is that, in general, mutating an object will change its hashcode. If an object were stored in a HashMap, its hashcode would be used to determine which bucket it gets stored in. If that object is mutated, its hashcode would change. If the object were looked up at this point, a different hashcode would result. This might point HashMap to the wrong bucket, and as a result the object might not be found even though it was previously stored in that HashMap.
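That failure mode is easy to demonstrate with a mutable key such as an ArrayList (whose hashCode depends on its contents): after mutating the key, the map probes the wrong bucket and can no longer find the entry, even though it is still stored:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class MutableKey {
    public static void main(String[] args) {
        Map<List<String>, String> map = new HashMap<>();
        List<String> key = new ArrayList<>();
        key.add("a");
        map.put(key, "value");
        System.out.println(map.get(key)); // value

        key.add("b");                     // mutation changes key.hashCode()
        System.out.println(map.get(key)); // null -> the entry is effectively lost
        System.out.println(map.size());   // 1    -> yet it is still in the map
    }
}
```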
Hash codes are not required to be unique, they just have a very low likelihood of collisions.
As to hash codes being immutable, that is required only if an object is going to be used as a key in a HashMap. The hash code tells the HashMap where to do its initial probe into the bucket array. If a key's hash code were to change, then the map would no longer look in the correct bucket and would be unable to find the entry.
hashcode() is basically a function that converts an object into a number. In the case of hash based collections, this number is used to help lookup the object. If this number changes, it means the hash based collection may be storing the object incorrectly, and can no longer retrieve it.
Uniqueness of hash values allows a more even distribution of objects within the internals of the collection, which improves the performance. If everything hashed to the same value (worst case), performance may degrade.
The wikipedia article on hash tables provides a good read that may help explain some of this as well.
It has to do with the way items are stored in a hash table. A hash table will use the element's hash code to store and retrieve it. It's somewhat complicated to fully explain here but you can learn about it by reading this section: http://www.brpreiss.com/books/opus5/html/page206.html#SECTION009100000000000000000
Why is searching by hashing faster?
Let's say you have some unique objects as values and a String key for each. Each key should be unique, so that when the key is searched, you find the relevant object it holds as its value.
Now let's say you have 1000 such key-value pairs and you want to search for a particular key and retrieve its value. Without hashing, you would need to compare your key against all the entries in your table to find it.
But with hashing, you hash your key on insertion and put the corresponding object in a certain bucket. When you later want to search for a particular key, the key is hashed, its hash value determined, and you can go straight to that bucket and pick out your object without searching through all the key entries.
hashCode is a tricky method. It is supposed to provide a shorthand to equality (which is what maps and sets care about). If many objects in your map share the same hashcode, the map will have to check equals frequently - which is generally much more expensive.
Check the javadoc for equals; that method is very tricky to get right even for immutable objects, and using a mutable object as a map key is just asking for trouble (since the object is stored under its "old" hashcode)
As long as you are working with collections that you retrieve elements from by index (0, 1, 2, ... collection.size() - 1), you don't need hashcodes. However, if we are talking about associative collections like maps, or simply asking a collection whether it contains some element, then we are talking about expensive operations.
A hashcode is like a digest of the provided object. Comparing the hashcodes of a collection's members is cheap, whereas comparing two objects property by property takes more than one operation for sure. A hashcode should be like a fingerprint: one per entity, and unchanging.
The basic idea of hashing is that if one is looking in a collection for an object whose hash code differs from that of 99% of the objects in that collection, one only need examine the 1% of objects whose hash code matches. If the hashcode differs from that of 99.9% of the objects in the collection, one only need examine 0.1% of the objects. In many cases, even if a collection has a million objects, a typical object's hash code will only match a very tiny fraction of them (in many cases, less than a dozen). Thus, a single hash computation may eliminate the need for nearly a million comparisons.
Note that it's not necessary for hash values to be absolutely unique, but performance may be very bad if too many instances share the same hash code. What matters for performance is not the total number of distinct hash values, but rather the extent to which they're "clumped". Searching for an object in a collection of a million things in which half the items share one hash value and each of the remaining items has a distinct value will require examining, on average, about 250,000 items. By contrast, if there were 100,000 different hash values, each shared by ten items, searching for an object would require examining only about five.
You can define a customized class extending HashMap, then override the methods (get, put, remove, containsKey, containsValue) to compare keys and values only with the equals method, and add some constructors. Correctly overriding the hashCode method is very hard.
I hope I have helped everybody who wants to use easily a hashmap.
When we put a key-value pair in a HashMap, it can happen that the hashcodes of two keys are the same. How are storage and retrieval of the key-value pairs handled in that situation?
Update
What I understand so far is that if two key objects have the same hash code, then both will be stored in the same bucket, and when I call get(key), which of the two objects with matching hash codes to fetch is decided by equals().
When you want to retrieve some object from a HashMap and there exist several objects with the same hashcode, Java will call equals() to determine the right object.
That is why it is so important to override equals() when overriding hashCode().
Each bucket is represented by a linked list, so there is no limit, other than heap space, on the number of entries in a bucket. There are fewer buckets than possible hashCode results, so multiple hash codes are mapped to the same bucket, not just keys with the same hashCode.
A hashCode() with many collisions, or a HashMap with too few buckets, can make some linked lists long. Good performance depends on short linked lists, because the final stage in a look up is a linear scan of one of the linked lists.
I agree with the previous answer that a match depends on both equals() and hashCode().