Resizing of array table in HashMap implementation - java

This one is a simple question, for those who know the internal implementation of HashMap :)
The initial size is 16 buckets and the load factor is 0.75, meaning that when it gets (watch that word) 12, it resizes to 32 buckets.
My question is: does it resize from 16 to 32 buckets when it gets 12 key-value pairs, or when it gets 12 'filled' buckets? I am asking because it could happen that all 12 key-value pairs get inserted into the same one of those 16 buckets. In that case it would be weird to resize, as the other 15 are totally empty.
Thanks, any opinion on this would be appreciated :)

As mentioned in this link.
With the 12th key-value pair the HashMap still keeps its size at 16. As soon as the 13th element (key-value pair) comes into the HashMap, it increases its size from the default 2^4 = 16 buckets to 2^5 = 32 buckets.
Independent of where each key was inserted, the table is resized when the number of entries exceeds the product of the load factor and the current capacity.
The HashMap doesn't care how many buckets are actually in use. Once the threshold has been reached, it knows that the probability of collisions is becoming too high and that the map should be resized, even if many collisions have already happened.

From JavaDoc
https://docs.oracle.com/javase/8/docs/api/java/util/HashMap.html#put-K-V-
When the number of entries in the hash table exceeds the product of the load factor and the current capacity, the hash table is rehashed (that is, internal data structures are rebuilt) so that the hash table has approximately twice the number of buckets.
So, the HashMap resizes based on the number of key-value pairs it holds, not on the number of occupied buckets. It isn't weird, because after resizing the entries are redistributed and many of them change their bucket.
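To make the arithmetic concrete, here is a minimal sketch of that rule (resize once the size exceeds capacity * loadFactor). It only mimics the threshold check described in the JavaDoc quoted above; it is not the JDK's actual code, and the class name is made up for the example.

public class ResizeThresholdDemo {
    public static void main(String[] args) {
        int capacity = 16;                              // default initial capacity
        float loadFactor = 0.75f;                       // default load factor
        int threshold = (int) (capacity * loadFactor);  // 16 * 0.75 = 12

        for (int size = 1; size <= 14; size++) {        // size after each put
            if (size > threshold) {                     // the check made after a put
                capacity *= 2;                          // double the bucket array
                threshold = (int) (capacity * loadFactor);
                System.out.println("entry #" + size + " triggers a resize to " + capacity + " buckets");
            } else {
                System.out.println("entry #" + size + " -> still " + capacity + " buckets");
            }
        }
    }
}

Running it shows entries 1 through 12 leaving the table at 16 buckets and the 13th entry doubling it to 32.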

Related

How to choose loadfactor of HashMap in Java? [duplicate]

HashMap has two important properties: size and load factor. I went through the Java documentation and it says 0.75f is the initial load factor. But I can't find the actual use of it.
Can someone describe the different scenarios where we need to set the load factor, and what some sample ideal values would be for different cases?
The documentation explains it pretty well:
An instance of HashMap has two parameters that affect its performance: initial capacity and load factor. The capacity is the number of buckets in the hash table, and the initial capacity is simply the capacity at the time the hash table is created. The load factor is a measure of how full the hash table is allowed to get before its capacity is automatically increased. When the number of entries in the hash table exceeds the product of the load factor and the current capacity, the hash table is rehashed (that is, internal data structures are rebuilt) so that the hash table has approximately twice the number of buckets.
As a general rule, the default load factor (.75) offers a good tradeoff between time and space costs. Higher values decrease the space overhead but increase the lookup cost (reflected in most of the operations of the HashMap class, including get and put). The expected number of entries in the map and its load factor should be taken into account when setting its initial capacity, so as to minimize the number of rehash operations. If the initial capacity is greater than the maximum number of entries divided by the load factor, no rehash operations will ever occur.
As with all performance optimizations, it is a good idea to avoid optimizing things prematurely (i.e. without hard data on where the bottlenecks are).
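As an illustration of the documentation's note that an initial capacity greater than (expected entries / load factor) avoids rehashing, here is a small sketch; the class name and numbers are invented for the example.

import java.util.HashMap;
import java.util.Map;

public class PreSizedMap {
    public static void main(String[] args) {
        int expectedEntries = 1000;
        float loadFactor = 0.75f;
        // a capacity of at least expectedEntries / loadFactor means no rehashing ever occurs
        int initialCapacity = (int) Math.ceil(expectedEntries / loadFactor);   // 1334
        Map<String, Integer> map = new HashMap<>(initialCapacity, loadFactor);
        for (int i = 0; i < expectedEntries; i++) {
            map.put("key" + i, i);   // never crosses the threshold, so never resizes
        }
        System.out.println(map.size());
    }
}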
The default initial capacity of a HashMap is 16 and its load factor is 0.75f (i.e. 75% of the current capacity). The load factor represents the level at which the HashMap's capacity should be doubled.
For example, the product of capacity and load factor is 16 * 0.75 = 12, which means that after storing the 12th key-value pair into the HashMap, its capacity becomes 32.
Actually, from my calculations, the "perfect" load factor is closer to log 2 (~ 0.7). Although any load factor less than this will yield better performance. I think that .75 was probably pulled out of a hat.
Proof:
Chaining can be avoided and branch prediction exploited by predicting if a bucket is empty or not. A bucket is probably empty if the probability of it being empty exceeds .5.
Let s represent the size and n the number of keys added. Using the binomial theorem, the probability of a bucket being empty is:
P(0) = C(n, 0) * (1/s)^0 * (1 - 1/s)^(n - 0)
Thus, a bucket is probably empty if there are less than
log(2)/log(s/(s - 1))
keys. As s reaches infinity, and if the number of keys added is such that P(0) = .5, then n/s approaches log(2) rapidly:
lim (log(2)/log(s/(s - 1)))/s as s -> infinity = log(2) ~ 0.693...
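A rough way to sanity-check this argument is to simulate ideal uniform hashing and measure the fraction of empty buckets at a few load factors. The simulation below is only a sketch (the bucket count and seed are arbitrary), but at a load of about 0.693 the empty fraction does hover around 0.5, as the argument predicts.

import java.util.Random;

public class EmptyBucketEstimate {
    public static void main(String[] args) {
        int s = 1 << 16;                     // number of buckets
        Random rnd = new Random(42);
        for (double load : new double[] {0.5, 0.693, 0.75, 1.0}) {
            int n = (int) (load * s);        // number of keys for this load factor
            boolean[] used = new boolean[s];
            for (int i = 0; i < n; i++) {
                used[rnd.nextInt(s)] = true; // assume perfectly uniform hashing
            }
            int empty = 0;
            for (boolean b : used) {
                if (!b) empty++;
            }
            // theory: P(empty) = (1 - 1/s)^n, which is roughly e^(-load) for large s
            System.out.printf("load %.3f -> empty fraction %.3f (theory %.3f)%n",
                    load, empty / (double) s, Math.exp(-load));
        }
    }
}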
What is load factor ?
The fraction of the capacity that may be filled before the HashMap increases its capacity.
Why load factor ?
The load factor is by default 0.75 of the capacity (initially 16), so the capacity is increased once 75% of it has been used up (12 entries). This keeps free buckets around, and right after the number of buckets is increased there are many fresh buckets for hash codes to map to.
Why should you keep many free buckets, and what is the impact of free buckets on performance?
If you set the load factor to, say, 1.0, then something very interesting might happen.
Say you are adding an object x to your HashMap whose hashCode is 888, and the bucket for that hash code is free, so object x gets added to that bucket. Now say you add another object y whose hashCode is also 888. Your object y will still get added, but at the end of the bucket (because buckets are essentially a linked-list implementation storing key, value & next). This has a performance impact: since object y is no longer at the head of the bucket, a lookup is no longer O(1); it now depends on how many items are in the same bucket. This is called a hash collision, by the way, and it happens even when your load factor is less than 1.
Correlation between performance, hash collision & loading factor
Lower load factor = more free buckets = lower chance of collision = higher performance = higher space requirement.
Higher load factor = fewer free buckets = higher chance of collision = lower performance = lower space requirement.
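To see the 888 example above in running code, you can force collisions with a key type whose hashCode is deliberately constant. The class below is purely illustrative, not something you would do in real code.

import java.util.HashMap;
import java.util.Map;

public class CollidingKeyDemo {
    static final class BadKey {
        private final String name;
        BadKey(String name) { this.name = name; }
        @Override public int hashCode() { return 888; }   // every key collides
        @Override public boolean equals(Object o) {
            return o instanceof BadKey && ((BadKey) o).name.equals(name);
        }
    }

    public static void main(String[] args) {
        Map<BadKey, Integer> map = new HashMap<>();
        for (int i = 0; i < 1000; i++) {
            map.put(new BadKey("key" + i), i);   // all 1000 entries land in one bucket
        }
        // get() still works, but it has to search the whole chain
        // (or the red-black tree that HashMap builds for large bins since Java 8)
        System.out.println(map.get(new BadKey("key500")));   // prints 500
    }
}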
From the documentation:
The load factor is a measure of how full the hash table is allowed to get before its capacity is automatically increased
It really depends on your particular requirements; there's no "rule of thumb" for specifying an initial load factor.
For HashMap, DEFAULT_INITIAL_CAPACITY = 16 and DEFAULT_LOAD_FACTOR = 0.75f.
This means that the maximum number of entries the HashMap holds before resizing is 16 * 0.75 = 12. When the thirteenth element is added, the capacity (array size) of the HashMap is doubled!
A good illustration answering this question can be found here:
https://javabypatel.blogspot.com/2015/10/what-is-load-factor-and-rehashing-in-hashmap.html
If the buckets get too full, then we have to look through a very long linked list, and that's kind of defeating the point. So here's an example where I have four buckets. I have elephant and badger in my HashSet so far. This is a pretty good situation, right? Each bucket has zero or one elements. Now we put two more elements into our HashSet.
buckets elements
------- -------
0 elephant
1 otter
2 badger
3 cat
This isn't too bad either. Every bucket only has one element. So if I want to know, does this contain panda? I can very quickly look at bucket number 1, it's not there, and I know it's not in our collection. If I want to know if it contains cat, I look at bucket number 3, I find cat, and I very quickly know it's in our collection. What if I add koala? Well, that's not so bad.
buckets elements
------- -------
0 elephant
1 otter -> koala
2 badger
3 cat
Maybe now, instead of looking at only one element in bucket number 1, I need to look at two. But at least I don't have to look at elephant, badger and cat. If I'm again looking for panda, it can only be in bucket number 1, and I don't have to look at anything other than otter and koala. But now I put alligator in bucket number 1, and you can maybe see where this is going: if bucket number 1 keeps getting bigger and bigger and bigger, then I'm basically having to look through all of those elements to find something that should be in bucket number 1.
buckets elements
------- -------
0 elephant
1 otter -> koala -> alligator
2 badger
3 cat
If I start adding strings to other buckets, right, the problem just gets bigger and bigger in every single bucket. How do we stop our buckets from getting too full? The solution here is that "the HashSet can automatically resize the number of buckets." The HashSet realizes that the buckets are getting too full, and it's losing the advantage of the O(1) lookup for elements. So it'll just create more buckets (generally twice as many as before) and then place the elements into the correct buckets.
So here's our basic HashSet implementation with separate chaining. Now I'm going to create a "self-resizing HashSet". This HashSet is going to realize that the buckets are getting too full and it needs more buckets. loadFactor is another field in our HashSet class. loadFactor represents the average number of elements per bucket, above which we want to resize. loadFactor is a balance between space and time. If the buckets get too full then we'll resize. That takes time, of course, but it may save us time down the road if the buckets are a little more empty.
Let's see an example. Here's a HashSet; we've added four elements so far: elephant, dog, cat and fish.
buckets elements
------- -------
0
1
2       elephant
3       cat -> dog
4       fish
5
At this point, I've decided that the loadFactor, the threshold, the average number of elements per bucket that I'm okay with, is 0.75. The number of buckets is buckets.length, which is 6, and at this point our HashSet has four elements, so the current size is 4. We'll resize our HashSet, that is, we'll add more buckets, when the average number of elements per bucket exceeds the loadFactor; that is, when currentSize divided by buckets.length is greater than loadFactor. At this point, the average number of elements per bucket is 4 divided by 6: 4 elements, 6 buckets, that's 0.67. That's less than the threshold I set of 0.75, so we're okay. We don't need to resize. But now let's say we add woodchuck.
buckets elements
------- -------
0
1
2       elephant
3       woodchuck -> cat -> dog
4       fish
5
Woodchuck would end up in bucket number 3. At this point, the currentSize is 5. And now the average number of elements per bucket is the currentSize divided by buckets.length: that's 5 elements divided by 6 buckets, which is 0.83. And this exceeds the loadFactor, which was 0.75.
In order to address this problem, in order to make the buckets perhaps a little more empty so that operations like determining whether a bucket contains an element will be a little less complex, I want to resize my HashSet. Resizing the HashSet takes two steps. First I'll double the number of buckets: I had 6 buckets, now I'm going to have 12 buckets. Note here that the loadFactor, which I set to 0.75, stays the same, but the number of buckets is now 12, and the number of elements stays the same, 5. 5 divided by 12 is around 0.42, which is well under our loadFactor, so we're okay now.
But we're not done, because some of these elements are in the wrong bucket now. For instance, elephant. Elephant was in bucket number 2 because the number of characters in elephant was 8: we have 6 buckets, and 8 mod 6 is 2. That's why it ended up in bucket number 2. But now that we have 12 buckets, 8 mod 12 is 8, so elephant does not belong in bucket number 2 anymore; elephant belongs in bucket number 8. What about woodchuck? Woodchuck was the one that started this whole problem. Woodchuck ended up in bucket number 3, because 9 mod 6 is 3. But now we do 9 mod 12: 9 mod 12 is 9, so woodchuck goes to bucket number 9. And you see the advantage of all this: now bucket number 3 only has two elements, whereas before it had 3.
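The mod arithmetic above can be spelled out in a few lines; this snippet just restates the transcript's numbers (hash = number of characters, index = hash mod bucket count).

public class RehashArithmetic {
    public static void main(String[] args) {
        String[] animals = {"elephant", "woodchuck", "cat", "dog", "fish"};
        for (String a : animals) {
            int hash = a.length();   // the transcript's toy hash function
            System.out.println(a + ": bucket " + (hash % 6) + " -> bucket " + (hash % 12));
        }
        // elephant: 2 -> 8, woodchuck: 3 -> 9, cat: 3 -> 3, dog: 3 -> 3, fish: 4 -> 4
    }
}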
So here's our code, where we had our HashSet with separate chaining that didn't do any resizing. Now, here's a new implementation where we use resizing. Most of this code is the same: we're still going to determine whether it contains the value already. If it doesn't, then we'll figure out which bucket it should go into and then add it to that bucket, add it to that LinkedList. But now we increment the currentSize field. currentSize was the field that kept track of the number of elements in our HashSet. We're going to increment it and then we're going to look at the average load, the average number of elements per bucket. We'll do that division down here; we have to do a little bit of casting to make sure that we get a double. And then we'll compare that average load to the field that I've set to 0.75 when I created this HashSet, for instance, which was the loadFactor. If the average load is greater than the loadFactor, that means there are too many elements per bucket on average, and I need to reinsert.
So here's our implementation of the method to reinsert all the elements. First, I'll create a local variable called oldBuckets, which refers to the buckets as they currently stand before I start resizing everything. Note I'm not creating a new array of linked lists just yet; I'm just renaming buckets as oldBuckets. Now remember buckets was a field in our class; I'm going to now create a new array of linked lists, but this will have twice as many elements as it did the first time.
Now I need to actually do the reinserting: I'm going to iterate through all of the old buckets. Each element in oldBuckets is a LinkedList of strings, that is, a bucket. I'll go through that bucket and get each element in that bucket, and now I'm going to reinsert it into the newBuckets. I will get its hashCode, I will figure out which index it is, and now I get the new bucket, the new LinkedList of strings, and I'll add it to that new bucket.
So to recap, HashSets as we've seen are arrays of linked lists, or buckets. A self-resizing HashSet can realize, using some ratio like the loadFactor, that its buckets are getting too full, and then grow itself by doubling the number of buckets and reinserting the elements.
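The code the transcript refers to isn't reproduced in the answer, so here is a minimal sketch of what such a self-resizing, separately chained HashSet could look like. It uses the transcript's toy hash (string length) and field names such as currentSize and loadFactor for readability; it is an illustration, not the original course code.

import java.util.LinkedList;

public class ResizingStringHashSet {
    private LinkedList<String>[] buckets;
    private int currentSize = 0;
    private final double loadFactor = 0.75;   // average elements per bucket we tolerate

    @SuppressWarnings("unchecked")
    public ResizingStringHashSet(int initialBuckets) {
        buckets = new LinkedList[initialBuckets];
        for (int i = 0; i < buckets.length; i++) {
            buckets[i] = new LinkedList<>();
        }
    }

    // The transcript hashes a string by its length; a real set would use hashCode().
    private int indexFor(String value, int bucketCount) {
        return value.length() % bucketCount;
    }

    public boolean add(String value) {
        LinkedList<String> bucket = buckets[indexFor(value, buckets.length)];
        if (bucket.contains(value)) {
            return false;                      // already present, nothing to do
        }
        bucket.add(value);
        currentSize++;
        // average load = elements per bucket; resize once it exceeds the load factor
        double averageLoad = (double) currentSize / buckets.length;
        if (averageLoad > loadFactor) {
            reinsertAll();
        }
        return true;
    }

    public boolean contains(String value) {
        return buckets[indexFor(value, buckets.length)].contains(value);
    }

    @SuppressWarnings("unchecked")
    private void reinsertAll() {
        LinkedList<String>[] oldBuckets = buckets;        // keep the old table around
        buckets = new LinkedList[oldBuckets.length * 2];  // twice as many buckets
        for (int i = 0; i < buckets.length; i++) {
            buckets[i] = new LinkedList<>();
        }
        for (LinkedList<String> oldBucket : oldBuckets) {
            for (String value : oldBucket) {
                // rehash against the new bucket count and add to the new bucket
                buckets[indexFor(value, buckets.length)].add(value);
            }
        }
    }
}

Creating it with 6 buckets and adding elephant, dog, cat, fish and then woodchuck reproduces the example above: the fifth add pushes the average load to 5/6 ≈ 0.83, which exceeds 0.75 and triggers the doubling to 12 buckets.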
I would pick a table size of n * 1.5, or n + (n >> 1); this would give a load factor of .66666~ without division, which is slow on most systems, especially on portable systems where there is no division in the hardware.
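A hypothetical helper expressing that sizing rule (the class and method names are made up for the example):

public class SizeForEntries {
    // n + (n >> 1) is roughly 1.5 * n, i.e. a load factor of about 0.67, with no division
    static int sizeForEntries(int expectedEntries) {
        return expectedEntries + (expectedEntries >> 1);
    }

    public static void main(String[] args) {
        System.out.println(sizeForEntries(1000));   // 1500 buckets for ~1000 entries
    }
}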

Resize hashtable when max chain length is reached

I am implementing a hashtable for educational purposes. The hashtable is implemented with an array, and collisions are dealt with using linked lists. The instructions say that I can insert the same items without checking, to increase the speed of insertion. But when the chain length reaches the maximum allowed, the hashtable needs to be resized. But I found resizing is not going to help at all, because the same items still go to the same bucket even when the array length is increased. Did I miss something here? Thank you very much.
Let's take an example: three objects with hashcodes 7, 23 and 47.
If the hashtable is of size 8, then by modular arithmetic, all of those objects would go into hash bucket 7.
On the other hand, if the hashtable is of size 16, then the first two would go into hash bucket 7 while the other would go into bucket 15.
The instructions say that I can insert the same items without checking, to increase the speed of insertion.
You can't skip checking completely, because you would end up with duplicates on the same chain.
But I found resizing is not going to help at all, because the same items still go to the same bucket even when the array length is increased.
This would happen only for hash values below the table size. For values above the table size the % operator will often place the item in a different bucket, assuming that you avoid the aliasing problem.
In order to avoid aliasing, use table sizes corresponding to prime numbers. See this Q&A for additional information on this.
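Those examples can be reproduced directly; the snippet below (names arbitrary) prints the bucket index for hash codes 7, 23 and 47 at table size 8, at 16, and at the prime size 17.

public class BucketSpreadDemo {
    public static void main(String[] args) {
        int[] hashes = {7, 23, 47};
        for (int size : new int[] {8, 16, 17}) {   // 17 is prime
            System.out.print("table size " + size + ": ");
            for (int h : hashes) {
                System.out.print(h + " -> bucket " + (h % size) + "   ");
            }
            System.out.println();
        }
        // size 8: all three land in bucket 7; size 16: 7, 7, 15; size 17: 7, 6, 13
    }
}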
I can tell you how the JDK handles that. Your entries (keys) override hashCode, which is an int (made of 32 bits). When you have 16 buckets (the internal array has a length of 16), the operation that is performed internally to find out where the entry will go is:
hash_code & (array.length - 1) // this is the same as a modulo operation
                               // if array.length is a power of two
That means that when you put an entry into the map, only the last 4 bits of the hash code of your entries are taken into account.
Now when you fill those 16 entries (or, if you implement a load factor, reach its threshold): the internal array is made bigger (let's double it), so now it has 32 slots (buckets).
This means that deciding where the entry will go is computed as:
hash_code & (32 - 1) // now there are 5 bits taken into consideration
All your entries are now re-hashed (because there is one more bit now), and your entries might end up in different buckets this time.
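A tiny illustration of that masking, assuming power-of-two table sizes as above (the hash value 22 is arbitrary):

public class MaskDemo {
    public static void main(String[] args) {
        int hash = 0b10110;                     // 22: low 4 bits are 0110, low 5 bits are 10110
        System.out.println(hash & (16 - 1));    // 6  -> bucket 6 in a 16-bucket table
        System.out.println(hash & (32 - 1));    // 22 -> bucket 22 after doubling to 32 buckets
    }
}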

A HashMap with default capacity 16 can contain more than 11/16 objects without rehashing - Is this right?

This is a followup question to What is the initial size of Array in HashMap Architecture?.
From that question I understand the initial capacity of HashMap is 16 by default which allows up to 11 entries before resizing since the default load factor is 0.75.
When does the rehashing take place? After 11 entries in the array or 11 entries in one of the linked lists? I think after 11 entries in the array.
Since the array size is 16, a HashMap could contain many objects (maybe more than 16) in the linked lists, as long as the number of occupied array slots is less than or equal to 11. Hence, a HashMap with default capacity 16 can contain more than 11/16 objects without rehashing - is this right?
Hence, a HashMap with default capacity 16 can contain more than 11/16 objects(K,V) without rehashing
This is an obvious flaw in using the number of buckets occupied as a measure. Another problem is that you would need to maintain both a size and a number of buckets used.
Instead it's the size() which is used, so size() is the only thing that determines when rehashing occurs, no matter how the entries are arranged across the buckets.
From the source for Java 8
final void putMapEntries(Map<? extends K, ? extends V> m, boolean evict) {
    int s = m.size();
    if (s > 0) {
        if (table == null) { // pre-size
            float ft = ((float)s / loadFactor) + 1.0F;
            int t = ((ft < (float)MAXIMUM_CAPACITY) ?
                     (int)ft : MAXIMUM_CAPACITY);
            if (t > threshold)
                threshold = tableSizeFor(t);
        }
        else if (s > threshold)
            resize();
I think you're fixating a bit too much on the implementation of HashMap, which can and does change over time. Think in terms of the map itself, rather than the internal data structures.
When does the rehashing take place? After 11 entries in the array or 11 entries in one of the linked lists? I think after 11 entries in the array.
Neither; the map is resized once the map contains 11 entries. Those entries could all be in their own buckets or all chained 11-deep in a single bucket.
Since the array size is 16, a HashMap could contain many objects (maybe more than 16) in the linked lists, as long as the number of occupied array slots is less than or equal to 11. Hence, a HashMap with default capacity 16 can contain more than 11/16 objects without rehashing - is this right?
No. While you could create your own hash table implementation that stores more elements than you have buckets, you'd do so at the cost of efficiency. The JDK's HashMap implementation will resize the backing array as soon as the number of elements in the map exceeds the product of the load factor and the current capacity. It again doesn't matter whether the elements are all in the same bucket or distributed among them. From the docs:
When the number of entries in the hash table exceeds the product of the load factor and the current capacity, the hash table is rehashed (that is, internal data structures are rebuilt) so that the hash table has approximately twice the number of buckets.
For example, if you have a HashMap (with the default load factor and capacity) that currently contains 11 entries and you call .put() to insert a 12th entry, the map will be resized.

How is a HashMap in Java populated when the load factor is more than 1?

I tried to create a HashMap with the following details:-
HashMap<Integer,String> test = new HashMap<Integer,String>();
test.put(1, "Value1");
test.put(2, "Value2");
test.put(3, "Value3");
test.put(4, "Value4");
test.put(5, "Value5");
test.put(6, "Value6");
test.put(7, "Value7");
test.put(8, "Value8");
test.put(9, "Value9");
test.put(10, "Value10");
test.put(11, "Value11");
test.put(12, "Value12");
test.put(13, "Value13");
test.put(14, "Value14");
test.put(15, "Value15");
test.put(16, "Value16");
test.put(17, "Value17");
test.put(18, "Value18");
test.put(19, "Value19");
test.put(20, "Value20");
and I saw that every entry was put in a different bucket, which means a different hash code was calculated for each key.
Now, if I modify my code as follows:
HashMap<Integer,String> test = new HashMap<Integer,String>(16,2.0f);
test.put(1, "Value1");
test.put(2, "Value2");
test.put(3, "Value3");
test.put(4, "Value4");
test.put(5, "Value5");
test.put(6, "Value6");
test.put(7, "Value7");
test.put(8, "Value8");
test.put(9, "Value9");
test.put(10, "Value10");
test.put(11, "Value11");
test.put(12, "Value12");
test.put(13, "Value13");
test.put(14, "Value14");
test.put(15, "Value15");
test.put(16, "Value16");
test.put(17, "Value17");
test.put(18, "Value18");
test.put(19, "Value19");
test.put(20, "Value20");
I find that some of the values which were put in different buckets before are now put in a bucket that already contains some values, even though their hash values are different. Can anyone please help me understand why?
Thanks
So, if you initialize a HashMap without specifying an initial size and a load factor, it gets initialized with a size of 16 and a load factor of 0.75. This means that once the HashMap holds at least (initial size * load factor) elements, so 12 elements, it will be rehashed, which means it grows to about twice the size and all elements are added anew.
You now set the load factor to 2, which means the map only gets rehashed when it is filled with at least 32 elements.
What happens now is that elements with the same hash mod bucket-count are put into the same bucket. Each bucket with more than one element contains a list holding all of those elements. Now when you try to look up one of the elements, the map first finds the bucket using the hash, and then it has to iterate over the whole list in that bucket to find the entry that matches exactly. This is quite costly.
So in the end there is a trade-off: rehashing is pretty expensive, so you should try to avoid it. On the other hand, if you have multiple elements in a bucket, lookups get pretty expensive, so you should really try to avoid that as well. You need a balance between the two. Another way to go is to set the initial size quite high, but that takes up more memory that is never used.
In your second test, the initial capacity is 16 and the load factor is 2. This means the HashMap will use an array of 16 elements to store the entries (i.e. there are 16 buckets), and this array will be resized only when the number of entries in the Map reaches 32 (16 * 2).
This means that some keys having different hashCodes must be stored in the same bucket, since the number of buckets (16) is smaller than the total number of entries (20 in your case).
The assignment of a key to a bucket is calculated in 3 steps :
First the hashCode method is called.
Then an additional function is applied on the hashCode to reduce the damage that may be caused by bad hashCode implementations.
Finally a modulus operation is applied on the result of the previous step to get a number between 0 and capacity - 1.
The 3rd step is where keys having different hashCodes may end up in the same bucket.
Let's check it with examples:
i) In the first case, the load factor is 0.75f and the initial capacity is 16, which means an array resize will occur when the number of entries in the HashMap reaches 16 * 0.75 = 12.
Now, every key has a different hashCode, and hashCode modulo 16 is unique for them, so the first 12 entries all go to different buckets; after that a resize occurs, and when new entries are put they also end up in different buckets (hashCode modulo 32 being unique as well).
ii) In the second case, the load factor is 2.0f, which means a resize will happen only when the number of entries reaches 16 * 2 = 32.
You keep putting entries in the map and it never resizes (for the 20 entries), making multiple entries collide.
So, in a nutshell: in the first example, hashCode modulo 16 for the first 12 entries and hashCode modulo 32 for all entries are unique, while in the second case it is always hashCode modulo 16 for all entries, which is not unique (and cannot be, as all 20 entries have to be accommodated in 16 buckets).
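To see this collision pattern concretely, assuming Integer keys (whose hashCode() is the key value itself) and a 16-bucket table that never grows, as in the second test, something like this prints the bucket each key lands in:

public class SixteenBucketDemo {
    public static void main(String[] args) {
        int buckets = 16;
        for (int key = 1; key <= 20; key++) {
            int h = Integer.hashCode(key);
            h = h ^ (h >>> 16);                 // HashMap's bit-spreading step (a no-op for small values)
            System.out.println("key " + key + " -> bucket " + (h & (buckets - 1)));
        }
        // keys 17..20 map to buckets 1..4, colliding with keys 1..4
    }
}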
The javadoc explanation:
An instance of HashMap has two parameters that affect its performance: initial capacity and load factor. The capacity is the number of buckets in the hash table, and the initial capacity is simply the capacity at the time the hash table is created. The load factor is a measure of how full the hash table is allowed to get before its capacity is automatically increased. When the number of entries in the hash table exceeds the product of the load factor and the current capacity, the hash table is rehashed (that is, internal data structures are rebuilt) so that the hash table has approximately twice the number of buckets.
As a general rule, the default load factor (.75) offers a good tradeoff between time and space costs. Higher values decrease the space overhead but increase the lookup cost (reflected in most of the operations of the HashMap class, including get and put). The expected number of entries in the map and its load factor should be taken into account when setting its initial capacity, so as to minimize the number of rehash operations. If the initial capacity is greater than the maximum number of entries divided by the load factor, no rehash operations will ever occur.
By default, the initial capacity is 16 and the load factor is 0.75.
So when the number of entries goes beyond 12 (16 * 0.75), the capacity is increased to 32 and the hash table is rehashed. That is why, in your first case, every element ends up in its own bucket.
In your second case, the hash table is resized only when the number of entries crosses 32 (16 * 2). Even if the elements have different hash code values, when hashcode % bucketsize is calculated they may collide. That is the reason you are seeing more than one element in the same bucket.

Resizing bucket of hashmap

The performance of a HashMap depends on the load factor (l) and the capacity (c). If the number of entries in a map is greater than or equal to (l * c), it changes the internal data structures, i.e. it increases the capacity (the size of the bucket array). My question is: how does it calculate the number of entries in a HashMap to check the mentioned condition? Is it the total number of (key, value) pairs in the map, or the number of occupied locations in the bucket array? If it's the number of occupied locations, how do you keep track of those? I'm assuming chaining is used to resolve collisions.
The load factor is the ratio of the number of elements the HashMap holds to its capacity (i.e. how many buckets it has).
So, using a simple array of 10 slots with a load factor of .75 means that the moment the number of elements divided by the size is greater than or equal to 75% (that is, when there are 8 elements in the array), the data structure must grow in order to lower the ratio.
The HashMap keeps track of the number of elements it holds on every add/remove operation and re-checks that ratio against the load factor.
