Will a node removed by Hashtable.remove not be reclaimed by GC? - java

I didn't know how to search for this, so I'm asking a question here.
Java version: 1.7.0_80 (x86).
In the remove method of java.util.Hashtable, I see that the value field of node e is set to null, but e.next is not set to null.
So if e.next is not null, will node e never be reclaimed by the GC?
Method source code:
/**
 * Removes the key (and its corresponding value) from this
 * hashtable. This method does nothing if the key is not in the hashtable.
 *
 * @param   key   the key that needs to be removed
 * @return  the value to which the key had been mapped in this hashtable,
 *          or <code>null</code> if the key did not have a mapping
 * @throws  NullPointerException  if the key is <code>null</code>
 */
public synchronized V remove(Object key) {
    Entry tab[] = table;
    int hash = hash(key);
    int index = (hash & 0x7FFFFFFF) % tab.length;
    for (Entry<K,V> e = tab[index], prev = null; e != null; prev = e, e = e.next) {
        if ((e.hash == hash) && e.key.equals(key)) {
            modCount++;
            if (prev != null) {
                prev.next = e.next;
            } else {
                tab[index] = e.next;
            }
            count--;
            V oldValue = e.value;
            e.value = null;
            return oldValue;
        }
    }
    return null;
}

Every object tree must have one or more root objects. As long as the application can reach an object from these roots it is considered live; when it can no longer be reached, it is collected by the garbage collector. So the answer to your question is yes, "e" will be garbage collected. But the node "e.next" points to may or may not be collected: if something other than "e" still references it (for example the remaining chain in the table), it stays alive; otherwise it is garbage collected too.
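The reachability argument can be made concrete with a minimal sketch. Node here is a hypothetical stand-in for Hashtable's Entry, with the same unlink logic as the remove method above: the removed node still points at its successor, but nothing points at the removed node, so it is unreachable and collectible.

```java
public class ChainUnlinkDemo {
    // Hypothetical stand-in for Hashtable's chained Entry.
    static final class Node {
        final String key;
        Node next;
        Node(String key, Node next) { this.key = key; this.next = next; }
    }

    // Unlinks the node with the given key, mirroring Hashtable.remove:
    // the removed node's next field is deliberately NOT nulled out.
    static Node remove(Node head, String key) {
        Node prev = null;
        for (Node e = head; e != null; prev = e, e = e.next) {
            if (e.key.equals(key)) {
                if (prev != null) prev.next = e.next; else head = e.next;
                return head; // e still references e.next, but nothing references e
            }
        }
        return head;
    }

    public static void main(String[] args) {
        Node head = new Node("a", new Node("b", new Node("c", null)));
        head = remove(head, "b");
        // the live chain is now a -> c; the "b" node is unreachable
        System.out.println(head.key + " -> " + head.next.key);
    }
}
```

What matters to the GC is reachability from the roots, not whether fields were nulled; clearing e.value presumably just lets the value object be collected even if something (such as an iterator) still held the entry.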

Related

First I insert null into a TreeSet; after that I insert another value. Why does it throw an NPE?

public static void main(String[] args) {
    TreeSet tree = new TreeSet();
    String obj = "Ranga";
    tree.add(null);
    tree.add(obj);
}
As far as I know, TreeSet depends on the default natural sort order, so the JVM internally calls the compareTo() method.
In the example above, the call is:
obj.compareTo(null);
So why is the result a NullPointerException?
From Java 7 onwards, null is not accepted by TreeSet at all; if you force an add, you get a NullPointerException. Up to Java 6, null was accepted, but only as the first element.
Before Java 7:
For a non-empty TreeSet, trying to insert a null value at run time throws a NullPointerException. When elements already exist in the tree, each new object is compared to the existing ones via compareTo() to decide where it goes, so inserting null makes compareTo() throw a NullPointerException internally.
TreeMap Add method documentation
When you add null to an empty TreeSet, it initially has no element to compare against, so the add succeeds without an NPE. When you add a second element, TreeSet uses the Comparable compareTo() method to sort and place the element, so it effectively calls null.compareTo(), which inevitably leads to an NPE.
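This is easy to verify with a minimal sketch (behavior as of Java 7 and later, where even the first add(null) on an empty natural-ordering TreeSet throws):

```java
import java.util.TreeSet;

public class TreeSetNullDemo {
    static boolean addNullThrows() {
        TreeSet<String> tree = new TreeSet<>();  // natural ordering, no comparator
        try {
            tree.add(null);   // Java 7+: compare(key, key) null-checks immediately
            return false;
        } catch (NullPointerException expected) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println("NPE on add(null): " + addNullThrows());
    }
}
```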
TreeSet is backed by a TreeMap internally. Before Java 7, TreeMap's put(K, V) had no null check for K (the key); from Java 7 onward, a null check was added to the TreeMap put(K, V) method.
Before Java 7, the TreeMap put method code had no null check:
public V put(K key, V value) {
    Entry<K,V> t = root;
    if (t == null) {
        incrementSize();
        root = new Entry<K,V>(key, value, null);
        return null;
    }
    while (true) {
        int cmp = compare(key, t.key);
        if (cmp == 0) {
            return t.setValue(value);
        } else if (cmp < 0) {
            if (t.left != null) {
                t = t.left;
            } else {
                incrementSize();
                t.left = new Entry<K,V>(key, value, t);
                fixAfterInsertion(t.left);
                return null;
            }
        } else { // cmp > 0
            if (t.right != null) {
                t = t.right;
            } else {
                incrementSize();
                t.right = new Entry<K,V>(key, value, t);
                fixAfterInsertion(t.right);
                return null;
            }
        }
    }
}
From Java 7 you can see a null check for the key; if it is null, put throws a NullPointerException.
public V put(K key, V value) {
    Entry<K,V> t = root;
    if (t == null) {
        compare(key, key); // type (and possibly null) check
        root = new Entry<>(key, value, null);
        size = 1;
        modCount++;
        return null;
    }
    int cmp;
    Entry<K,V> parent;
    // split comparator and comparable paths
    Comparator<? super K> cpr = comparator;
    if (cpr != null) {
        do {
            parent = t;
            cmp = cpr.compare(key, t.key);
            if (cmp < 0)
                t = t.left;
            else if (cmp > 0)
                t = t.right;
            else
                return t.setValue(value);
        } while (t != null);
    }
    else {
        if (key == null)
            throw new NullPointerException();
        Comparable<? super K> k = (Comparable<? super K>) key;
        do {
            parent = t;
            cmp = k.compareTo(t.key);
            if (cmp < 0)
                t = t.left;
            else if (cmp > 0)
                t = t.right;
            else
                return t.setValue(value);
        } while (t != null);
    }
    Entry<K,V> e = new Entry<>(key, value, parent);
    if (cmp < 0)
        parent.left = e;
    else
        parent.right = e;
    fixAfterInsertion(e);
    size++;
    modCount++;
    return null;
}
I hope this leads you to the proper conclusion.
This is just to maintain the contract; the behavior is enforced by Comparable in the case of natural ordering.
The natural ordering for a class C is said to be consistent with equals if and only if e1.compareTo(e2) == 0 has the same boolean value as e1.equals(e2) for every e1 and e2 of class C. Note that null is not an instance of any class, and e.compareTo(null) should throw a NullPointerException even though e.equals(null) returns false.
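The asymmetry stated in that contract can be checked directly (a minimal sketch):

```java
public class NullCompareDemo {
    static boolean compareToNullThrows() {
        try {
            "Ranga".compareTo(null);   // Comparable contract: must throw NPE
            return false;
        } catch (NullPointerException expected) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println("Ranga".equals(null));   // false: equals must not throw
        System.out.println(compareToNullThrows());  // true
    }
}
```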
In "relatively recent" Java versions (since Java 6), the NullPointerException is expected to be thrown by the first add() invocation:
tree.add(null);
as TreeSet.add() javadoc states that :
throw NullPointerException -
if the specified element is null and this set uses natural ordering,
or its comparator does not permit null elements
Note that it has been specified this way since JDK 6; the JDK 5 javadoc, for example, does not make this point explicit.
If you use an old Java version (such as Java 5), please say so.
Firstly, don't use raw types; instead, utilise the power of generics:
TreeSet<String> tree = new TreeSet<>();
As for your issue:
TreeSet no longer supports the addition of null.
From the doc:
public boolean add(E e)
Throws NullPointerException if the specified element is null and this set
uses natural ordering, or its comparator does not permit null
elements.
Two solutions to overcome this issue:
Provide a null-safe comparator where the null elements will come first:
TreeSet<String> tree = new TreeSet<>(Comparator.nullsFirst(Comparator.naturalOrder()));
or provide a null-safe comparator where the null elements will come last:
TreeSet<String> tree = new TreeSet<>(Comparator.nullsLast(Comparator.naturalOrder()));
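A quick check that the null-safe comparator works as described (a minimal sketch, reusing the asker's "Ranga" value):

```java
import java.util.Comparator;
import java.util.TreeSet;

public class NullSafeTreeSetDemo {
    static TreeSet<String> buildSet() {
        // nullsFirst wraps natural ordering, so the comparator path in
        // TreeMap.put handles null instead of calling null.compareTo(...)
        TreeSet<String> tree =
            new TreeSet<>(Comparator.nullsFirst(Comparator.naturalOrder()));
        tree.add(null);    // accepted
        tree.add("Ranga");
        return tree;
    }

    public static void main(String[] args) {
        System.out.println(buildSet().first() == null); // true: null sorts first
    }
}
```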

Why during HashMap iteration, changing value of a key/value pair doesn't throw ConcurrentModificationException? [duplicate]

This question already has answers here:
What basic operations on a Map are permitted while iterating over it?
(4 answers)
Closed 6 years ago.
For example see the following code snippet:
Map<String, String> unsafemap = new HashMap<>();
unsafemap.put("hello", null);
unsafemap.put(null, null);
unsafemap.put("world", "hello");
unsafemap.put("foo", "hello");
unsafemap.put("bar", "hello");
unsafemap.put("john", "hello");
unsafemap.put("doe", "hello");
System.out.println("changing null values");
for (Iterator<Map.Entry<String, String>> i = unsafemap.entrySet().iterator(); i.hasNext();) {
    Map.Entry<String, String> e = i.next();
    System.out.println("key : " + e.getKey() + " value :" + e.getValue());
    if (e.getValue() == null) {
        // why is the below line not throwing ConcurrentModificationException
        unsafemap.put(e.getKey(), "no data");
        // same result, no ConcurrentModificationException thrown
        e.setValue("no data");
    }
    // throws ConcurrentModificationException
    unsafemap.put("testKey", "testData");
}
System.out.println("---------------------------------");
for (Map.Entry<String, String> e : unsafemap.entrySet()) {
    System.out.println(e);
}
Modifying the map during iteration always results in an exception if it is not done through the iterator itself (e.g. iterator.remove()). So, as expected, adding a new entry during iteration throws the exception, but why is it not thrown when only the value of an existing key/value pair is modified?
In your first case the Entry object already exists, so the value is simply overwritten via e.value = value; and the method returns; no new Entry is created and modCount is untouched, so there is no exception.
In the second case, changes made through the value object do not structurally affect the map, so again there is no exception.
From HashMap source code:
public V put(K key, V value) {
    if (key == null)
        return putForNullKey(value);
    int hash = hash(key.hashCode());
    int i = indexFor(hash, table.length);
    for (Entry<K,V> e = table[i]; e != null; e = e.next) {
        Object k;
        if (e.hash == hash && ((k = e.key) == key || key.equals(k))) {
            V oldValue = e.value;
            e.value = value;
            e.recordAccess(this);
            return oldValue;
        }
    }
    modCount++;
    addEntry(hash, key, value, i);
    return null;
}
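The difference is observable directly. In this sketch (map contents are made up), e.setValue and a value-overwriting put are harmless, while a put that inserts a new key bumps modCount and makes the next call to next() throw:

```java
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

public class CmeDemo {
    static boolean structuralPutThrows() {
        Map<String, String> map = new HashMap<>();
        map.put("hello", null);
        map.put("world", "hello");
        Iterator<Map.Entry<String, String>> it = map.entrySet().iterator();
        try {
            while (it.hasNext()) {
                Map.Entry<String, String> e = it.next();
                e.setValue("no data");            // non-structural: always safe
                map.put(e.getKey(), "no data");   // overwrite: no modCount change
                map.put("testKey", "testData");   // structural: modCount++
            }
            return false;
        } catch (ConcurrentModificationException expected) {
            return true;                          // thrown by the next it.next()
        }
    }

    public static void main(String[] args) {
        System.out.println("CME thrown: " + structuralPutThrows());
    }
}
```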

How can HashMap guarantee the same index when a duplicate key is added with a different `tab.length`?

The following piece of code adds an element to a HashMap (from the Android 5.1.1 source tree). I'm very confused by this statement: int index = hash & (tab.length - 1);. How can the map guarantee the same index when a duplicate key is added with a different tab.length?
For example, assume we have a new empty HashMap hMap. First we add the pair ("1","1"), and assume tab.length equals 1 at this point. Then we add many more pairs, so that tab.length becomes "x". Now we add the duplicate pair ("1","1"); notice that tab.length has changed, so the index computed by int index = hash & (tab.length - 1); may also have changed.
/**
 * Maps the specified key to the specified value.
 *
 * @param key
 *            the key.
 * @param value
 *            the value.
 * @return the value of any previous mapping with the specified key or
 *         {@code null} if there was no such mapping.
 */
@Override public V put(K key, V value) {
    if (key == null) {
        return putValueForNullKey(value);
    }
    int hash = Collections.secondaryHash(key);
    HashMapEntry<K, V>[] tab = table;
    int index = hash & (tab.length - 1);
    for (HashMapEntry<K, V> e = tab[index]; e != null; e = e.next) {
        if (e.hash == hash && key.equals(e.key)) {
            preModify(e);
            V oldValue = e.value;
            e.value = value;
            return oldValue;
        }
    }
    // No entry for (non-null) key is present; create one
    modCount++;
    if (size++ > threshold) {
        tab = doubleCapacity();
        index = hash & (tab.length - 1);
    }
    addNewEntry(key, value, hash, index);
    return null;
}
When the table needs to be rebuilt (resized), it first recomputes the index of every older element, so the index follows the change in the table's length. Lookups use the same formula, so a key always maps to the same index as the entry stored for it.
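A sketch of that recomputation (indexFor is a hypothetical helper mirroring the hash & (tab.length - 1) expression; the real code first mixes the hashCode with Collections.secondaryHash, omitted here; table lengths are powers of two):

```java
public class IndexDemo {
    static int indexFor(int hash, int tableLength) {
        return hash & (tableLength - 1);   // low bits of the hash select the bucket
    }

    public static void main(String[] args) {
        int hash = "1".hashCode();               // 49
        System.out.println(indexFor(hash, 16));  // 1  (49 & 15)
        System.out.println(indexFor(hash, 32));  // 17 (49 & 31): same key, new index,
        // which is fine: after the resize, lookups recompute the index with the
        // same formula and the new length, so they agree with the stored entries
    }
}
```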

Please share some insights on rehash method in java?

I am looking for some better insight into the hashtable/hash-map data structure.
By going through the API I could make out that the inner Entry class is referred to as a bucket. Please correct me if I am wrong.
Please find the following method:
public synchronized V put(K key, V value) {
    // Make sure the value is not null
    if (value == null) {
        throw new NullPointerException();
    }
    // Makes sure the key is not already in the hashtable.
    Entry tab[] = table;
    int hash = hash(key);
    int index = (hash & 0x7FFFFFFF) % tab.length;
    for (Entry<K,V> e = tab[index]; e != null; e = e.next) {
        if ((e.hash == hash) && e.key.equals(key)) {
            V old = e.value;
            e.value = value;
            return old;
        }
    }
    modCount++;
    if (count >= threshold) {
        // Rehash the table if the threshold is exceeded
        rehash();
        tab = table;
        hash = hash(key);
        index = (hash & 0x7FFFFFFF) % tab.length;
    }
    // Creates the new entry.
    Entry<K,V> e = tab[index]; // <-- are we assigning null to this entry?
    tab[index] = new Entry<>(hash, key, value, e);
    count++;
    return null;
}
From the following line of code
Entry<K,V> e = tab[index];
I assume that we are assigning null to this new entry object; please correct me here as well.
So my other question is:
why are we not doing this directly
Entry<K,V> e = null;
instead of
Entry<K,V> e = tab[index];
Please find below the screenshot of the debug session as well:
Please share your valuable insights on this.
Entry<K,V> represents a link (a node) in a linked list; note that its next member refers to the next Entry in the list.
A bucket contains a linked list of the entries that were mapped to the same index.
Entry<K,V> e = tab[index] returns null only if no Entry is stored at that index yet. Otherwise it returns the first Entry in that bucket's linked list.
tab[index] = new Entry<>(hash, key, value, e); creates a new Entry and stores it as the first Entry in the bucket. The previous first Entry is passed to the Entry constructor, in order to become the next (second) Entry in the list.
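Those two lines can be sketched with a hypothetical, stripped-down Entry class (key plus next link only) to show the head insertion:

```java
public class HeadInsertDemo {
    // Hypothetical stand-in for Hashtable's Entry.
    static final class Entry {
        final String key;
        final Entry next;
        Entry(String key, Entry next) { this.key = key; this.next = next; }
    }

    static Entry buildBucket() {
        Entry[] tab = new Entry[4];
        int index = 2;
        Entry e = tab[index];                 // null: the bucket is still empty
        tab[index] = new Entry("first", e);   // first entry, next == null
        e = tab[index];                       // now the previous head
        tab[index] = new Entry("second", e);  // collision: old head becomes next
        return tab[index];
    }

    public static void main(String[] args) {
        Entry head = buildBucket();
        System.out.println(head.key + " -> " + head.next.key); // second -> first
    }
}
```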

How does Java implement hash map chain collision resolution?

I know that we can use a linked list to handle chain collisions for a hash map. However, in Java the hash map implementation uses an array, and I am curious how Java implements chain collision resolution. I did find this post: Collision resolution in Java HashMap
. However, it is not the answer I am looking for.
Thanks a lot.
HashMap contains an array of Entry objects. Each bucket holds a linked-list chain of the entries whose hashes map to that bucket's index. That being said, if there is a collision, the new entry is added to the list in the same bucket (at the head of the list, in this pre-Java-8 implementation).
Look at this code :
public V put(K key, V value) {
    if (key == null)
        return putForNullKey(value);
    int hash = hash(key.hashCode());
    int i = indexFor(hash, table.length); // get table/bucket index
    for (Entry<K,V> e = table[i]; e != null; e = e.next) { // walk through the list of nodes
        Object k;
        if (e.hash == hash && ((k = e.key) == key || key.equals(k))) {
            V oldValue = e.value;
            e.value = value;
            e.recordAccess(this);
            return oldValue; // return old value if found
        }
    }
    modCount++;
    addEntry(hash, key, value, i); // add new value if not found
    return null;
}
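The chaining is observable from the outside with a key class whose hashCode is deliberately constant (BadKey is made up for illustration): every entry lands in the same bucket, yet both keys stay retrievable because equals() is consulted while walking the chain.

```java
import java.util.HashMap;
import java.util.Map;

public class CollisionDemo {
    static final class BadKey {
        final String name;
        BadKey(String name) { this.name = name; }
        @Override public int hashCode() { return 42; }   // force one bucket
        @Override public boolean equals(Object o) {
            return o instanceof BadKey && ((BadKey) o).name.equals(name);
        }
    }

    static Map<BadKey, String> build() {
        Map<BadKey, String> map = new HashMap<>();
        map.put(new BadKey("a"), "A");
        map.put(new BadKey("b"), "B");   // same bucket, chained via next
        return map;
    }

    public static void main(String[] args) {
        Map<BadKey, String> map = build();
        System.out.println(map.get(new BadKey("a"))); // A
        System.out.println(map.get(new BadKey("b"))); // B
    }
}
```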
