If Hashtables use separate chaining, why are duplicate keys not possible? - java

So I'm a bit confused about this one.
If Hashtables use separate chaining (or linear probing), why won't the following print out both values?
Hashtable<Character, Integer> map = new Hashtable<>();
map.put('h', 0);
map.put('h', 1);
System.out.println(map.remove('h')); // outputs 1
System.out.println(map.get('h')); // outputs null
I'm trying to understand why, given 2 identical keys, the hashtable won't use separate chaining in order to store both values. Did I understand this somewhat incorrectly or has Java just not implemented collision handling in their hashtable class?
Another question that might tie together would be, how does a hashtable using linear probing, given a key, know which value is the one we are looking for?
Thanks in advance!

I'm trying to understand why, given 2 identical keys, the hashtable won't use separate chaining in order to store both values.
The specification for Map (i.e. the javadoc) says that only one value is stored for each key. So that's what the Hashtable and HashMap implementations do.
Certainly the separate chaining doesn't stop someone from implementing a hash table with that property. The pseudo-code for put(key, value) on a hash table with separate chaining is broadly as follows:
Compute the hash value for the key.
Compute an index in the array of hash chains from the hash value. (The computation is index = hash % array.length or something similar.)
Search the hash chain at the computed index for an entry that matches the key.
If you found the entry for the key on the chain, update the value in the entry.
If you didn't find the entry, create an entry and add it to the chain.
If you repeat that for the same key, you will compute the same hash value, search the same chain, and find the same entry. You then update it, and there is still only one entry for that key ... as required by the specification.
In short, the above algorithm has no problem meeting the Map.put API requirements.
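For illustration, here is a minimal Java sketch of that pseudo-code. This is not the real java.util.Hashtable source; the class, field and method names are invented for the example, the table has a fixed number of chains, and there is no resizing.

import java.util.Objects;

// A tiny separate-chaining table, only to illustrate the put() algorithm above.
class ChainedTable<K, V> {
    private static class Entry<K, V> {
        final K key;
        V value;
        Entry<K, V> next;          // next entry in the same chain
        Entry(K key, V value, Entry<K, V> next) {
            this.key = key;
            this.value = value;
            this.next = next;
        }
    }

    @SuppressWarnings("unchecked")
    private final Entry<K, V>[] chains = new Entry[16];   // fixed size, no resizing here

    public V put(K key, V value) {
        // Compute the hash and turn it into a chain index.
        int index = Math.floorMod(Objects.hashCode(key), chains.length);
        // Search the chain for an entry with an equal key.
        for (Entry<K, V> e = chains[index]; e != null; e = e.next) {
            if (Objects.equals(e.key, key)) {
                V old = e.value;
                e.value = value;   // found: update in place, still one entry per key
                return old;
            }
        }
        // Not found: add a new entry at the head of the chain.
        chains[index] = new Entry<>(key, value, chains[index]);
        return null;
    }
}

Calling put twice with the same key takes the "found: update in place" branch the second time, which is exactly why the chain never ends up holding two entries for one key.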

I think you are misunderstanding how hash tables work. Imagine I am looking for someone with an id of 227828. Say I have 1000 such people. I can search all 1000 and eventually find that ID and the person to whom it belongs.
But if their ids are used as keys in a hash table it is easier. Using the id as the key, say the hash function returns 0 for an even id and 1 for an odd id. Then all I have to do is find the box that contains even ids. Ideally I would then only have to search through 500 entries to find the key - i.e. the id - and return the value associated with it.
But real hash functions are more sophisticated and there are many such boxes, or buckets. The appropriate bucket can be identified, then searched for the proper key, and the associated value returned.
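As a toy version of that even/odd idea (purely an illustration; real hash functions and bucket counts are more sophisticated, and the class name here is made up):

import java.util.ArrayList;
import java.util.List;

public class EvenOddBuckets {
    public static void main(String[] args) {
        // Two "boxes": bucket 0 for even ids, bucket 1 for odd ids.
        List<List<Integer>> buckets = List.of(new ArrayList<>(), new ArrayList<>());
        int[] ids = {227828, 113, 42, 999};
        for (int id : ids) {
            buckets.get(id % 2).add(id);          // hash(id) = id % 2
        }
        int wanted = 227828;
        // Only the matching bucket has to be searched, roughly half the entries.
        boolean found = buckets.get(wanted % 2).contains(wanted);
        System.out.println(found);                // true
    }
}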

Java Hashmap - Please explain how hash maps work

I am currently preparing for interviews, Java in particular.
A common question is to explain hash maps.
Every explanation says that in case there is more than a single value per key, the values are being linked listed to the bucket.
Now, in the HashMap class, when we use put() and the key is already in the map, the value is not linked to the existing one (at least as I understand it) but replaces it:
Map<String, Integer> map = new HashMap<>();
map.put("a", 1);
//map now have the pair ["a", 1]
map.put("a", 2);
//map now have the pair ["a", 2]
//And according to all hash maps tutorials, it should have been like: ["a", 1->2]
From the docs:
If the map previously contained a mapping for the key, the old value is replaced.
What am I missing here? I am a little confused...
Thanks
You're confusing the behaviour of a Map with the implementation of a HashMap.
In a Map, there is only one value for a key -- if you put a new value for the same key, the old value will be replaced.
HashMaps are implemented using 'buckets' -- an array of cells of finite size, indexed by the hashCode of the key.
It's possible for two different keys to hash to the same bucket, a 'hash collision'. In case of a collision, one solution is to put the (key, value) pairs into a list, which is searched when getting a value from that bucket. This list is part of the internal implementation of the HashMap and is not visible to the user of the HashMap.
This is probably what you are thinking of.
Your basic understanding is correct: maps in general and hashmaps in particular only support one value per key. That value could be a list but that's something different. put("a", 2) will replace any value for key "a" that's already in the map.
So what are you missing?
Every explanation says that in case there is more than a single value per key, the values are being linked listed to the bucket.
Yes, that's basically the case (unless the list is replaced by a tree for efficiency reasons but let's ignore that here).
This is not about putting values for the same key into a list, however, but for the same bucket. Since keys could be mapped to the same bucket due to their hash code you'd need to handle that "collision".
Example:
Key "A" has a hash code of 65, key "P" has a hash code of 81 (assuming hashCode() just returns the ascii codes).
Let's assume our hashmap currently has 16 buckets. So when I put "A" into the map, which bucket does it go to? We calculate bucketIndex = hashCode % numBuckets (so 65 % 16) and we get the index 1 for the 2nd bucket.
Now we want to put "P" into the map. bucketIndex = hashCode % numBuckets also yields 1 (81 % 16) so the value for a different key goes to the same bucket at index 1.
To solve that a simple implementation is to use a linked list, i.e. the entry for "A" points to the next entry for "P" in the same bucket.
Any get("P") will then look for the bucket index first using the same calculation, so it gets 1. Then it iterates the list and calls equals() on each entry's key until it hits the one that matches (or none if none match).
in case there is more than a single value per key, the values are being linked listed to the bucket.
Maybe you are confusing that with the fact that multiple keys can have the same hashCode value (a collision).
For example, let's consider 2 keys (key1, key2). key1 references value1 and key2 references value2.
If
hashcode(key1) = 1
hashcode(key2) = 1
The objects might have the same hashCode but at the same time not be equal (a collision). In that situation both entries will be stored in a list for that hashCode's bucket. On retrieval, the bucket is found by hashCode and then your value is picked out from among those entries using equals().
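A small sketch of that situation, using a hypothetical key class whose hashCode() is always 1, so every key collides, yet both entries are stored and found via equals():

import java.util.HashMap;
import java.util.Map;

public class CollisionDemo {
    // Hypothetical key type: all instances share hashCode 1, as in the example above.
    static final class BadHashKey {
        final String name;
        BadHashKey(String name) { this.name = name; }
        @Override public int hashCode() { return 1; }
        @Override public boolean equals(Object o) {
            return o instanceof BadHashKey && ((BadHashKey) o).name.equals(name);
        }
    }

    public static void main(String[] args) {
        Map<BadHashKey, String> map = new HashMap<>();
        map.put(new BadHashKey("key1"), "value1");
        map.put(new BadHashKey("key2"), "value2");   // same hashCode, different key
        // Both entries live in the same bucket; equals() tells them apart on lookup.
        System.out.println(map.get(new BadHashKey("key1"))); // value1
        System.out.println(map.get(new BadHashKey("key2"))); // value2
        System.out.println(map.size());                      // 2
    }
}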

Problems using HashMap

I'm trying to solve an analytical problem just to study data structures. My question is about the HashMap in Java.
I have a HashMap
HashMap<Character, Integer> map = new HashMap<>();
And this HashMap has a few keys, but some of these keys are duplicated.
map.put('k', 1);
map.put('k', 2);
And so on...
My question is about what happens when I remove a key from the HashMap, given that some of these keys are duplicated.
Let's see.
map.remove('k');
I suppose that in this case it will either remove all the values with the key 'k', or just remove the first one it finds.
What's going to happen in this case? I'm a little confused.
Thanks for your help!!
In a HashMap (or Hashtable) you can only have UNIQUE KEYS; you cannot have multiple entries for the same key. In your code you attempt to put 2 different values with the same key:
map.put('k', 1);
map.put('k', 2);
Guess what: there will not be 2 entries but only 1, the last one, which REPLACES the previous one, since they have the same key, 'k'. Hence, map.remove('k'); will remove everything, which is just one single entry, not two.
There are multiple things you are asking. Let's answer all of them.
Hashtable is not the same as HashMap. However, Hashtable is very similar to HashMap. The biggest difference between them is that in Hashtable every method is synchronized, which adds overhead to every read/write. HashMap's methods are not synchronized. Hashtable is more or less obsolete and people writing new code should avoid using it.
In a HashMap, keys are always unique. i.e., there cannot be 2 entries with the same key. In your example,
map.put('k', 1);
This will create an entry in the map whose key is 'k' and value is 1.
Then, you do
map.put('k', 2);
This will not create another entry with key 'k' and value 2. This will overwrite the value for the first entry itself. So, you will only have a single entry for key 'k' whose value is now 2 (and not 1).
Now, I guess understanding remove() would be easy. When you do remove(key), it tries removing the only entry for that key.
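You can watch the overwrite happen, because put() returns the previous value for the key (or null if there was none). A quick demo (the class name is just for the example):

import java.util.HashMap;

public class OverwriteDemo {
    public static void main(String[] args) {
        HashMap<Character, Integer> map = new HashMap<>();
        System.out.println(map.put('k', 1));      // null  (no previous value)
        System.out.println(map.put('k', 2));      // 1     (old value was replaced)
        System.out.println(map.size());           // 1     (still a single entry)
        System.out.println(map.remove('k'));      // 2     (removes the one entry)
        System.out.println(map.containsKey('k')); // false
    }
}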
In a HashMap, keys are unique, so it won't add the key 'k' multiple times. When you remove key 'k', it will remove the single entry for 'k' from the map.
By definition a HashMap in Java stores all keys uniquely.
Simply put, when we try to put more entries with the same key, the new value overrides the previously defined value for that key. Hence, deleting a key after putting multiple values for that key means the key will no longer exist in the HashMap.
For more details you can read the documentation here.
use
map.putIfAbsent('k', 2);
instead of
map.put('k', 2);
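For what it's worth, here is a short demo of the difference: putIfAbsent() keeps the first value instead of replacing it. And if the real goal is to keep several values per key, a map of lists is a common workaround (a sketch; the class name is made up for the example):

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class PutIfAbsentDemo {
    public static void main(String[] args) {
        Map<Character, Integer> map = new HashMap<>();
        map.put('k', 1);
        map.putIfAbsent('k', 2);                 // key already present: old value is kept
        System.out.println(map.get('k'));        // 1

        // If you really want several values per key, store a list as the value.
        Map<Character, List<Integer>> multi = new HashMap<>();
        multi.computeIfAbsent('k', c -> new ArrayList<>()).add(1);
        multi.computeIfAbsent('k', c -> new ArrayList<>()).add(2);
        System.out.println(multi.get('k'));      // [1, 2]
    }
}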

How is a key mapped to a value in a HashMap in Java with an O(1) get() method?

Consider an int array variable x[]. The variable x holds a reference to the starting address. When the array is accessed with index 2, that is x[2], its memory location is calculated as:
address of x[2] = starting address + index * size of int
i.e. address of x[2] = x + 2*4.
But in the case of a HashMap, how is the memory address mapped internally?
By reading many previous posts I observed that HashMap uses a linked list to store the key-value list. But if that is the case, then to find a key it generates a hash code, then checks for an equal hash code in the list and retrieves the value.
This takes O(n) complexity.
If I am wrong in the above observation, please correct me... I am a beginner.
Thank you
The traditional implementation of a HashMap is to use a function to generate a hash from the key, then use that hash to access a value directly. Think of it as generating something that will translate into an array index. It does not look through the hashmap comparing element hashes to the generated hash; it generates the hash, and uses the hash to access an element directly.
What I think you're talking about is the case where two keys in the HashMap generate the SAME hash. THEN it uses a list of those, and has to look through them to determine which one it wants. But that's not O(n) where n is the number of elements in the HashMap, just O(m) where m is the number of elements with the same hash. Clearly the name of the game is to find a hash function where the generated hash is unique for all the elements, as much as is feasible.
--- edit to expand on the explanation ---
In your post, you state:
By reading many previous posts I observed that HashMap uses a linked list to store the key-value list.
This is wrong for the overall HashMap. For a HashMap to work reasonably, there must be a way to use the key to calculate a way to access the corresponding element directly, not by searching through all the values in the HashMap.
A "perfect" hash calculation would translate each possible key into hash value that was not calculated for any other key. This is usually not feasible, and so it is usually possible that two different keys will result in the same result from the hash calculation. In this case, the HashMap implementation could use a linked list of values, and would need to look through all such values to find the one that it was looking for. But this number is FAR less than the number of values in the overall HashMap.
You can make a hash table where strings are the keys, and in which the first character of the string is converted to a number which is then used as an array index. As long as your strings all have different first characters, then accessing the value is a simple calculation plus an array access -- O(1). Or you could add all the character values of the string together and take the last two (or three) digits, and THAT would be your hash calculation. As long as the addition produces unique values for each key string, you don't ever have to look through a list; again, O(1).
And, in fact, as long as the hash calculation is approximately perfect, the lookup is still O(1) overall, because the limited number of times you have to look through a short list does not alter the overall efficiency.
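A rough Java version of those two toy hash calculations (hypothetical helper names, only to show the idea; this is not what String.hashCode() actually does):

public class ToyStringHashes {
    // Hash on the first character only: O(1), but collides for strings
    // that start with the same letter.
    static int firstCharHash(String key, int numBuckets) {
        return key.charAt(0) % numBuckets;
    }

    // Sum all character values and keep the result in range with a modulus.
    static int charSumHash(String key, int numBuckets) {
        int sum = 0;
        for (int i = 0; i < key.length(); i++) {
            sum += key.charAt(i);
        }
        return sum % numBuckets;
    }

    public static void main(String[] args) {
        System.out.println(firstCharHash("Alice", 256));  // 65
        System.out.println(charSumHash("Alice", 256));    // 222
    }
}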

How does HashMap.get method work [duplicate]

This question already has answers here:
How does a Java HashMap handle different objects with the same hash code?
(15 answers)
Closed 9 years ago.
I am just trying to understand the algorithm behind the HashMap.get method in Java.
How is the search for a specific object carried out? How is the HashMap implemented in Java and what type of searching algorithm does it use?
Extract from How does a Java HashMap handle different objects with the same hash code?
A hashmap works like this (this is a little bit simplified, but it illustrates the basic mechanism):
It has a number of "buckets" which it uses to store key-value pairs in. Each bucket has a unique number - that's what identifies the bucket. When you put a key-value pair into the map, the hashmap will look at the hash code of the key, and store the pair in the bucket whose identifier is the hash code of the key. For example: the hash code of the key is 235 -> the pair is stored in bucket number 235. (Note that one bucket can store more than one key-value pair.)
When you lookup a value in the hashmap, by giving it a key, it will first look at the hash code of the key that you gave. The hashmap will then look into the corresponding bucket, and then it will compare the key that you gave with the keys of all pairs in the bucket, by comparing them with equals().
Now you can see how this is very efficient for looking up key-value pairs in a map: by the hash code of the key the hashmap immediately knows in which bucket to look, so that it only has to test against what's in that bucket.
Looking at the above mechanism, you can also see what requirements are necessary on the hashCode() and equals() methods of keys:
If two keys are the same (equals() returns true when you compare them), their hashCode() method must return the same number. If keys violate this, then keys that are equal might be stored in different buckets, and the hashmap would not be able to find key-value pairs (because lookup only checks the bucket that corresponds to the hash code of the key you pass in).
If two keys are different, then it doesn't matter if their hash codes are the same or not. They will be stored in the same bucket if their hash codes are the same, and in this case, the hashmap will use equals() to tell them apart.
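To see why the first rule matters, here is a sketch with a hypothetical key class that violates it: equals() is overridden but hashCode() is left as the default identity hash, so two equal keys usually land in different buckets and get() misses.

import java.util.HashMap;
import java.util.Map;

public class BrokenContractDemo {
    // Hypothetical key: equals() compares the id, but hashCode() is the default
    // identity hash, so two equal keys almost always have different hash codes.
    static final class BrokenKey {
        final int id;
        BrokenKey(int id) { this.id = id; }
        @Override public boolean equals(Object o) {
            return o instanceof BrokenKey && ((BrokenKey) o).id == id;
        }
        // hashCode() deliberately NOT overridden.
    }

    public static void main(String[] args) {
        Map<BrokenKey, String> map = new HashMap<>();
        map.put(new BrokenKey(42), "hello");
        // Equal key, but (very likely) a different hash code -> wrong bucket -> null.
        System.out.println(map.get(new BrokenKey(42)));   // almost certainly null
    }
}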

How to implement a Simple Hash Table (eg. using just arrays) [duplicate]

This question already has answers here:
Closed 11 years ago.
Possible Duplicate:
The fundamentals of Hash tables?
I am trying to implement a Simple Hash Table, probably with simple Java arrays. But first I will need to somehow have an associative array of sorts? What might a simple Hash Table implementation look like? It should still be able to do add/delete/get in O(1).
A hash table basically takes an input key, hashes it with a function to find a bucket ID, and then uses that bucket ID to either store or retrieve the data associated with that key.
In other words, for your case, you just have to provide a hashing function on your data that will give you a bucket ID of your array index.
Perhaps the simplest (and most naive) would be exclusive ORing together all the characters of your key then doing a modulus operation to bring it to the desired range. For example, say you have a structure containing:
Name
Address
Phone
All sorts of other details
You could generate a hash as follows:
set hashval to zero
for each character in Name:
    hashval = hashval xor character
hashval = hashval mod 256
This would give you a bucket ID of between 0 and 255 inclusive.
Just keep in mind that the bucket may contain more than one item so you can't just use the bucket ID as an array index. Each bucket will need to be a structure containing possibly multiple items (such as a linked list or even another hashtable).
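Putting those pieces together, a minimal array-backed hash table with linked-list buckets might look something like this in Java. It is a study sketch, not production code: fixed capacity, no resizing, and Objects.hashCode() in place of the XOR example above.

import java.util.Objects;

// A tiny hash table backed by a plain array of singly linked buckets.
public class SimpleHashTable<K, V> {
    private static class Node<K, V> {
        final K key;
        V value;
        Node<K, V> next;
        Node(K key, V value, Node<K, V> next) {
            this.key = key;
            this.value = value;
            this.next = next;
        }
    }

    @SuppressWarnings("unchecked")
    private final Node<K, V>[] buckets = new Node[256];   // fixed number of buckets

    private int bucketIndex(K key) {
        return Math.floorMod(Objects.hashCode(key), buckets.length);
    }

    public void put(K key, V value) {
        int i = bucketIndex(key);
        for (Node<K, V> n = buckets[i]; n != null; n = n.next) {
            if (Objects.equals(n.key, key)) { n.value = value; return; } // replace
        }
        buckets[i] = new Node<>(key, value, buckets[i]);                 // insert at head
    }

    public V get(K key) {
        int i = bucketIndex(key);
        for (Node<K, V> n = buckets[i]; n != null; n = n.next) {
            if (Objects.equals(n.key, key)) return n.value;
        }
        return null;
    }

    public void remove(K key) {
        int i = bucketIndex(key);
        Node<K, V> prev = null;
        for (Node<K, V> n = buckets[i]; n != null; prev = n, n = n.next) {
            if (Objects.equals(n.key, key)) {
                if (prev == null) buckets[i] = n.next; else prev.next = n.next;
                return;
            }
        }
    }

    public static void main(String[] args) {
        SimpleHashTable<String, String> table = new SimpleHashTable<>();
        table.put("name", "Alice");
        table.put("phone", "555-0100");
        System.out.println(table.get("name"));   // Alice
        table.remove("name");
        System.out.println(table.get("name"));   // null
    }
}

With a good hash function the buckets stay short, so add/get/remove are O(1) on average, matching the requirement in the question.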
Read any textbook on data structures and algorithms, or just the Wikipedia "Hash table" entry.
The implementation which comes packaged with your JDK is pretty good for self study (though I admit in no way minimalistic). Have a look at it here.
