I'm working on an application that uses a HashMap to share state. I need to prove via unit tests that it will have problems in a multi-threaded environment.
I tried to compare the state of the application in a single-threaded environment and in a multi-threaded environment by checking the size and elements of the HashMap in both. But this doesn't seem to help: the state is always the same.
Are there any other ways to prove it, or to prove that an application performing operations on the map works well with concurrent requests?
This is quite easy to prove.
In short
A hash map is based on an array in which each item represents a bucket. As more keys are added, a certain threshold is eventually exceeded and the array is recreated with a bigger size so that its entries are spread more evenly (a performance consideration). During the recreation the new array is populated from the old one; until that population completes, the array does not yet contain all of the old items, so a caller can get an empty result for a key that is actually present.
Details and Proof
It means that sometimes HashMap#put() will internally call HashMap#resize() to make the underlying array bigger.
HashMap#resize() assigns the table field a new empty array with a bigger capacity and populates it with the old items. While this population happens, the underlying array doesn't contain all of the old items and calling HashMap#get() with an existing key may return null.
The following code demonstrates this. You are very likely to get the exception, which means the HashMap is not thread safe. I chose 65535 as the target key: this way it will be the last element in the array, and thus the last element during re-population, which increases the chance of getting null from HashMap#get() (to see why, look at the HashMap#put() implementation).
final Map<Integer, String> map = new HashMap<>();
final Integer targetKey = 0b1111_1111_1111_1111; // 65535
final String targetValue = "v";
map.put(targetKey, targetValue);

new Thread(() -> {
    IntStream.range(0, targetKey).forEach(key -> map.put(key, "someValue"));
}).start();

while (true) {
    if (!targetValue.equals(map.get(targetKey))) {
        throw new RuntimeException("HashMap is not thread safe.");
    }
}
One thread adds new keys to the map; the other thread constantly checks that the targetKey is present.
If I count those failures instead of throwing, I get around 200,000.
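For reference, here is a sketch of that counting variant, reusing map, targetKey and targetValue from the snippet above (the loop bound is arbitrary, and the count will vary by machine and run):
// Count the null reads instead of throwing on the first one.
long failures = 0;
for (int i = 0; i < 100_000_000; i++) {
    if (!targetValue.equals(map.get(targetKey))) {
        failures++; // the key was temporarily invisible during a resize
    }
}
System.out.println("Failed reads: " + failures);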
It is hard to simulate a race condition, but we can look at the OpenJDK source for the put() method of HashMap (this is the older, pre-Java 8 implementation):
public V put(K key, V value) {
    if (key == null)
        return putForNullKey(value);
    // Operation 1
    int hash = hash(key.hashCode());
    int i = indexFor(hash, table.length);
    for (Entry<K,V> e = table[i]; e != null; e = e.next) {
        Object k;
        if (e.hash == hash && ((k = e.key) == key || key.equals(k))) {
            V oldValue = e.value;
            e.value = value;
            e.recordAccess(this);
            return oldValue;
        }
    }
    // Operation 2
    modCount++;
    // Operation 3
    addEntry(hash, key, value, i);
    return null;
}
As you can see, put() involves three operations that are not synchronized as a unit, and compound operations like this are not thread safe. So it is, in theory, proven that HashMap is not thread safe.
It's an old thread, but here is my sample code, which demonstrates the problems with HashMap.
Take a look at the code below: we try to insert 30000 items into the HashMap using 10 threads (3000 items per thread).
After all the threads have completed, you should ideally see the size of the HashMap as 30000. But the actual output is either an exception while rebuilding the tree bins, or a final count less than 30000.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class TempValue {
    int value = 3;

    @Override
    public int hashCode() {
        return 1; // All objects of this class will have the same hashcode.
    }
}

public class TestClass {
    public static void main(String[] args) throws InterruptedException {
        Map<TempValue, TempValue> myMap = new HashMap<>();
        List<Thread> listOfThreads = new ArrayList<>();
        // Create 10 threads
        for (int i = 0; i < 10; i++) {
            Thread thread = new Thread(() -> {
                // Let each thread insert 3000 items
                for (int j = 0; j < 3000; j++) {
                    TempValue key = new TempValue();
                    myMap.put(key, key);
                }
            });
            thread.start();
            listOfThreads.add(thread);
        }
        for (Thread thread : listOfThreads) {
            thread.join();
        }
        System.out.println("Count should be 30000, actual is : " + myMap.size());
    }
}
Output 1:
Count should be 30000, actual is : 29486
Output 2: (Exception)
java.lang.ClassCastException: java.util.HashMap$Node cannot be cast to java.util.HashMap$TreeNode
at java.util.HashMap$TreeNode.moveRootToFront(HashMap.java:1819)
at java.util.HashMap$TreeNode.treeify(HashMap.java:1936)
at java.util.HashMap.treeifyBin(HashMap.java:771)
at java.util.HashMap.putVal(HashMap.java:643)
at java.util.HashMap.put(HashMap.java:611)
at TestClass.lambda$0(TestClass.java:340)
at java.lang.Thread.run(Thread.java:745)
However, if you change the line Map<TempValue, TempValue> myMap = new HashMap<>(); to use a ConcurrentHashMap instead, the output is always 30000.
Another observation:
In the above example, the hashcode for all objects of the TempValue class was the same (i.e., 1). So you might be wondering whether this issue with HashMap occurs only when there is a collision (due to the hashcode).
I tried another example.
Modify the TempValue class to
class TempValue {
    int value = 3;
}
Now re-execute the same code. Out of every 5 runs, I see that 2-3 runs still give an output different from 30000.
So even if you usually don't have many collisions, you may still end up with an issue (probably due to the resizing/rebuilding of the HashMap, etc.).
Overall these examples show the issue with HashMap which ConcurrentHashMap handles.
I need to prove via unit tests that it will have problems in a multi-threaded environment.
This is going to be tremendously hard to do. Race conditions are very hard to demonstrate. You could certainly write a program which does puts and gets into a HashMap in a large number of threads but logging, volatile fields, other locks, and other timing details of your application may make it extremely hard to force your particular code to fail.
Here's a stupid little HashMap failure test case. It fails because it times out when the threads go into an infinite loop caused by internal corruption of the HashMap. However, it may not fail for you, depending on the number of cores and other architecture details.
@Test(timeout = 10000)
public void runTest() throws Exception {
    final Map<Integer, String> map = new HashMap<Integer, String>();
    ExecutorService pool = Executors.newFixedThreadPool(10);
    for (int i = 0; i < 10; i++) {
        pool.submit(new Runnable() {
            @Override
            public void run() {
                for (int i = 0; i < 10000; i++) {
                    map.put(i, "wow");
                }
            }
        });
    }
    pool.shutdown();
    pool.awaitTermination(Long.MAX_VALUE, TimeUnit.MILLISECONDS);
}
Is reading the API docs enough? There is a statement in there:
Note that this implementation is not synchronized. If multiple threads access a hash map concurrently, and at least one of the threads modifies the map structurally, it must be synchronized externally. (A structural modification is any operation that adds or deletes one or more mappings; merely changing the value associated with a key that an instance already contains is not a structural modification.) This is typically accomplished by synchronizing on some object that naturally encapsulates the map. If no such object exists, the map should be "wrapped" using the Collections.synchronizedMap method. This is best done at creation time, to prevent accidental unsynchronized access to the map:
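In the javadoc, that sentence is completed by its one-liner:
Map m = Collections.synchronizedMap(new HashMap(...));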
The problem with thread safety is that it's hard to prove through a test; it could be fine most of the time. Your best bet would be to just run a bunch of threads that are getting/putting, and you'll probably get some concurrency errors.
I suggest using a ConcurrentHashMap, and trusting the Java team when they say that HashMap is not synchronized.
Are there any other ways to prove it?
How about reading the documentation (and paying attention to the emphasized "must"):
If multiple threads access a hash map concurrently, and at least one of the threads modifies the map structurally, it must be synchronized externally
If you are going to attempt to write a unit test that demonstrates incorrect behavior, I recommend the following:
Create a bunch of keys that all have the same hashcode (say 30 or 40)
Add values to the map for each key
Spawn a separate thread for each key, which has an infinite loop that (1) asserts that the key is present in the map, (2) removes the mapping for that key, and (3) adds the mapping back.
If you're lucky, the assertion will fail at some point, because the linked list behind the hash bucket will be corrupted. If you're unlucky, it will appear that HashMap is indeed threadsafe despite the documentation.
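Here is a minimal sketch of that recipe (the class and names are illustrative, and, as the answer says, it may or may not fail on a given run):
import java.util.HashMap;
import java.util.Map;

public class SameBucketTest {
    // All keys collide into one bucket, so its linked list gets heavy traffic.
    static final class Key {
        final int id;
        Key(int id) { this.id = id; }
        @Override public int hashCode() { return 42; }
        @Override public boolean equals(Object o) {
            return o instanceof Key && ((Key) o).id == id;
        }
    }

    public static void main(String[] args) {
        final Map<Key, Integer> map = new HashMap<>();
        for (int i = 0; i < 40; i++) {
            map.put(new Key(i), i);
        }
        for (int i = 0; i < 40; i++) {
            final Key key = new Key(i);
            new Thread(() -> {
                while (true) {
                    if (!map.containsKey(key)) {   // (1) assert presence
                        throw new AssertionError("Lost mapping for key " + key.id);
                    }
                    map.remove(key);               // (2) remove the mapping
                    map.put(key, key.id);          // (3) add it back
                }
            }).start();
        }
    }
}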
It may be possible, but will never be a perfect test. Race conditions are just too unpredictable. That being said, I wrote a similar type of test to help fix a threading issue with a proprietary data structure, and in my case, it was much easier to prove that something was wrong (before the fix) than to prove that nothing would go wrong (after the fix). You could probably construct a multi-threaded test that will eventually fail with sufficient time and the right parameters.
This post may be helpful in identifying areas to focus on in your test and has some other suggestions for optional replacements.
You can create multiple threads, each adding an element to a HashMap and iterating over it.
That is, in the run method we have to use put and then iterate using an iterator.
For a HashMap we get a ConcurrentModificationException, while for a ConcurrentHashMap we don't.
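A hedged sketch of that idea (the class name is invented; HashMap's fail-fast check is best-effort, so exactly when the exception appears varies from run to run):
import java.util.HashMap;
import java.util.Map;

public class CmeDemo {
    public static void main(String[] args) {
        final Map<Integer, Integer> map = new HashMap<>();
        // Writer thread keeps adding entries.
        new Thread(() -> {
            for (int i = 0; ; i++) {
                map.put(i, i);
            }
        }).start();
        // Reader thread iterates; HashMap's fail-fast iterator will sooner or
        // later throw ConcurrentModificationException. A ConcurrentHashMap's
        // weakly consistent iterator would not.
        while (true) {
            long sum = 0;
            for (Integer key : map.keySet()) {
                sum += key;
            }
        }
    }
}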
The most probable race condition in the java.util.HashMap implementation
Most HashMap failures happen when values are read while a resize or rehash step is executing. Resize and rehash operations run under certain conditions, most commonly when the bucket-count threshold is exceeded. This code proves that if I call resize externally, or put more elements than the threshold so that the internal resize operation is triggered, some reads return null, which shows that HashMap is not thread safe. There are probably more race conditions, but this one is enough to prove it is not thread safe.
Practical proof of the race condition
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.Map;
import java.util.stream.IntStream;

public class HashMapThreadSafetyTest {
    public static void main(String[] args) {
        try {
            (new HashMapThreadSafetyTest()).testIt();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    private void threadOperation(int number, Map<Integer, String> map) {
        map.put(number, "hashMapTest");
        while (map.get(number) != null);
        // If execution reaches this line, we did a null read that should never happen.
        System.out.println("Null Value Number: " + number);
    }

    private void callHashMapResizeExternally(Map<Integer, String> map)
            throws NoSuchMethodException, InvocationTargetException, IllegalAccessException {
        Method method = map.getClass().getDeclaredMethod("resize");
        method.setAccessible(true);
        System.out.println("calling resize");
        method.invoke(map);
    }

    private void testIt()
            throws InterruptedException, NoSuchMethodException, IllegalAccessException, InvocationTargetException {
        final Map<Integer, String> map = new HashMap<>();
        IntStream.range(0, 12).forEach(i -> new Thread(() -> threadOperation(i, map)).start());
        Thread.sleep(60000);
        // The first loop should not show any null value number until the resize method is called externally.
        callHashMapResizeExternally(map);
        // The first loop should fail from now on and print some null value numbers.
        System.out.println("Loop count is 12 since the hashmap is initially created with 2^4 buckets and a resize "
                + "threshold of 0.75*2^4 = 12. The first loop should not fail since we do not resize the hashmap."
                + "\n\nAfter 60 seconds: calling the resize operation externally via reflection should forcefully "
                + "break thread safety");
        Thread.sleep(2000);

        final Map<Integer, String> map2 = new HashMap<>();
        IntStream.range(100, 113).forEach(i -> new Thread(() -> threadOperation(i, map2)).start());
        // The second loop should fail from now on and print some null value numbers, because it inserts
        // more than 12 entries, which causes the hash map to resize and rehash.
        System.out.println("It should fail directly since it exceeds the hashmap's initial threshold and will "
                + "resize when the loop iterates for the 13th time");
    }
}
Example output
No null value number is printed until the thread sleep line is passed
calling resize
Loop count is 12 since the hashmap is initially created with 2^4 buckets and a resize threshold of 0.75*2^4 = 12. The first loop should not fail since we do not resize the hashmap.
After 60 seconds: calling the resize operation externally via reflection should forcefully break thread safety
Null Value Number: 11
Null Value Number: 5
Null Value Number: 6
Null Value Number: 8
Null Value Number: 0
Null Value Number: 7
Null Value Number: 2
It should fail directly since it exceeds the hashmap's initial threshold and will resize when the loop iterates for the 13th time
Null Value Number: 111
Null Value Number: 100
Null Value Number: 107
Null Value Number: 110
Null Value Number: 104
Null Value Number: 106
Null Value Number: 109
Null Value Number: 105
Very simple solution to prove this
Here is code which proves that the HashMap implementation is not thread safe.
In this example, we only ever add elements to the map; we never remove them from any method.
We can see that it prints keys which are supposedly not in the map, even though we put the same key into the map just before the get operation.
package threads;

import java.util.HashMap;
import java.util.Map;
import java.util.Random;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class HashMapWorkingDemoInConcurrentEnvironment {

    private Map<Long, String> cache = new HashMap<>();

    public String put(Long key, String value) {
        return cache.put(key, value);
    }

    public String get(Long key) {
        return cache.get(key);
    }

    public static void main(String[] args) {
        HashMapWorkingDemoInConcurrentEnvironment cache = new HashMapWorkingDemoInConcurrentEnvironment();

        class Producer implements Callable<String> {
            private Random rand = new Random();

            public String call() throws Exception {
                while (true) {
                    long key = rand.nextInt(1000);
                    cache.put(key, Long.toString(key));
                    if (cache.get(key) == null) {
                        System.out.println("Key " + key + " has not been put in the map");
                    }
                }
            }
        }

        ExecutorService executorService = Executors.newFixedThreadPool(4);
        System.out.println("Adding value...");
        try {
            for (int i = 0; i < 4; i++) {
                executorService.submit(new Producer());
            }
        } finally {
            executorService.shutdown();
        }
    }
}
Sample output for an execution run
Adding value...
Key 611 has not been put in the map
Key 978 has not been put in the map
Key 35 has not been put in the map
Key 202 has not been put in the map
Key 714 has not been put in the map
Key 328 has not been put in the map
Key 606 has not been put in the map
Key 149 has not been put in the map
Key 763 has not been put in the map
It is strange to see those values printed at all; that is why HashMap is not a thread-safe implementation to use in a concurrent environment.
There is a great tool open-sourced by the OpenJDK team called JCStress, which is used in the JDK for concurrency testing.
https://github.com/openjdk/jcstress
Here is one of its samples: https://github.com/openjdk/jcstress/blob/master/tests-custom/src/main/java/org/openjdk/jcstress/tests/collections/HashMapFailureTest.java
@JCStressTest
@Outcome(id = "0, 0, 1, 2", expect = Expect.ACCEPTABLE, desc = "No exceptions, entire map is okay.")
@Outcome(expect = Expect.ACCEPTABLE_INTERESTING, desc = "Something went wrong")
@State
public class HashMapFailureTest {
    private final Map<Integer, Integer> map = new HashMap<>();

    @Actor
    public void actor1(IIII_Result r) {
        try {
            map.put(1, 1);
            r.r1 = 0;
        } catch (Exception e) {
            r.r1 = 1;
        }
    }

    @Actor
    public void actor2(IIII_Result r) {
        try {
            map.put(2, 2);
            r.r2 = 0;
        } catch (Exception e) {
            r.r2 = 1;
        }
    }

    @Arbiter
    public void arbiter(IIII_Result r) {
        Integer v1 = map.get(1);
        Integer v2 = map.get(2);
        r.r3 = (v1 != null) ? v1 : -1;
        r.r4 = (v2 != null) ? v2 : -1;
    }
}
The methods marked with @Actor are run concurrently on different threads.
The result for this on my machine is:
Results across all configurations:
RESULT SAMPLES FREQ EXPECT DESCRIPTION
0, 0, -1, 2 3,854,896 5.25% Interesting Something went wrong
0, 0, 1, -1 4,251,564 5.79% Interesting Something went wrong
0, 0, 1, 2 65,363,492 88.97% Acceptable No exceptions, entire map is okay.
This shows that the expected values were observed about 89% of the time, but in around 11% of runs incorrect results were seen.
You can try out this tool and the samples and write your own tests to verify that concurrency of some code is broken.
As yet another reply to this topic, I would recommend the example from https://www.baeldung.com/java-concurrent-map, which looks as below. The theory is very straightforward: N times in a row, we run 10 threads, each of which increments the value in a shared map 10 times. If the map were thread safe, the value would be 100 every time. The example proves it is not.
@Test
public void givenHashMap_whenSumParallel_thenError() throws Exception {
    Map<String, Integer> map = new HashMap<>();
    List<Integer> sumList = parallelSum100(map, 100);

    assertNotEquals(1, sumList
        .stream()
        .distinct()
        .count());
    long wrongResultCount = sumList
        .stream()
        .filter(num -> num != 100)
        .count();

    assertTrue(wrongResultCount > 0);
}
private List<Integer> parallelSum100(Map<String, Integer> map,
                                     int executionTimes) throws InterruptedException {
    List<Integer> sumList = new ArrayList<>(1000);
    for (int i = 0; i < executionTimes; i++) {
        map.put("test", 0);
        ExecutorService executorService =
            Executors.newFixedThreadPool(4);
        for (int j = 0; j < 10; j++) {
            executorService.execute(() -> {
                for (int k = 0; k < 10; k++)
                    map.computeIfPresent(
                        "test",
                        (key, value) -> value + 1
                    );
            });
        }
        executorService.shutdown();
        executorService.awaitTermination(5, TimeUnit.SECONDS);
        sumList.add(map.get("test"));
    }
    return sumList;
}
Related
I query the database many times, and even though I cache some results, it still takes a long time.
List<Map<Long, Node>> aNodeMapList = new ArrayList<>();
Map<String, List<Map<String, Object>>> cacheRingMap = new ConcurrentHashMap<>();
for (Ring startRing : startRings) {
    for (Ring endRing : endRings) {
        Map<String, Object> nodeMapResult = getNodeMapResult(startRing, endRing, cacheRingMap);
        Map<Long, Node> nodeMap = (Map<Long, Node>) nodeMapResult.get("nodeMap");
        if (nodeMap.size() > 0) {
            aNodeMapList.add(nodeMap);
        }
    }
}
getNodeMapResult is a function that queries the database according to startRing and endRing, and caches the result in cacheRingMap; the next time around it may not need to query the database, if the result already exists in cacheRingMap.
My team leader told me that multi-threading could be used, so I changed the code to use an ExecutorCompletionService. But now I have a question: is it thread safe when I use a ConcurrentHashMap to cache results inside the ExecutorCompletionService?
Will it run faster after this change?
int totalThreadCount = startRings.size() * endRings.size();
ExecutorService threadPool2 = Executors.newFixedThreadPool(totalThreadCount > 4 ? 4 : 2);
CompletionService<Map<String, Object>> completionService = new ExecutorCompletionService<Map<String, Object>>(threadPool2);
for (Ring startRing : startRings) {
    for (Ring endRing : endRings) {
        completionService.submit(new Callable<Map<String, Object>>() {
            @Override
            public Map<String, Object> call() throws Exception {
                return getNodeMapResult(startRing, endRing, cacheRingMap);
            }
        });
    }
}
for (int i = 0; i < totalThreadCount; i++) {
    Map<String, Object> nodeMapResult = completionService.take().get();
    Map<Long, Node> nodeMap = (Map<Long, Node>) nodeMapResult.get("nodeMap");
    if (nodeMap.size() > 0) {
        aNodeMapList.add(nodeMap);
    }
}
Is it thread safe when I use a ConcurrentHashMap to cache results inside the ExecutorCompletionService?
The ConcurrentHashMap itself is thread safe, as its name suggests ("Concurrent"). However, that doesn't mean that the code that uses it is thread safe.
For instance, if your code does the following:
SomeObject object = cacheRingMap.get(someKey); // get from cache
if (object == null) {                          // oh-oh, cache miss
    object = getObjectFromDb(someKey);         // get from the db
    cacheRingMap.put(someKey, object);         // put in cache for next time
}
Since the get and put aren't performed atomically in this example, two threads executing this code could end up both looking for the same key first in the cache, and then in the db. It's still thread safe, but we performed two db lookups instead of just one. And this is just a simple example; more complex caching logic (say, one that includes cache invalidation and removals from the cache map) can end up being not just wasteful but actually incorrect. It all depends on how the map is used and what guarantees you need from it. I suggest you read the ConcurrentHashMap javadoc. See what it can guarantee, and what it cannot.
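Worth noting: ConcurrentHashMap can make this particular check-then-act atomic with computeIfAbsent. A small sketch, reusing the hypothetical getObjectFromDb from the snippet above:
// computeIfAbsent runs the mapping function at most once per missing key,
// atomically, so two threads racing on the same key cause only one db lookup.
SomeObject object = cacheRingMap.computeIfAbsent(someKey, key -> getObjectFromDb(key));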
Will it run fast after I change?
That depends on too many parameters to know in advance. How would the database handle the concurrent queries? How many queries are there? How fast is a single query? Etc. The best way of knowing is to actually try it out.
As a side note, if you're looking for ways to improve performance, you might want to try using a batch query. The flow would then be to search the cache for all the keys you need, gather the keys you still have to look up, and then send them all together in a single query to the database. In many cases, a single large query will run faster than a bunch of smaller ones.
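To sketch that batch idea against the code above (RingPair, cacheKey and findNodesByRingPairs are hypothetical stand-ins, not existing API):
// Hypothetical sketch: collect the cache misses, then resolve them all at once.
List<RingPair> missing = new ArrayList<>();
for (Ring startRing : startRings) {
    for (Ring endRing : endRings) {
        if (!cacheRingMap.containsKey(cacheKey(startRing, endRing))) {
            missing.add(new RingPair(startRing, endRing));
        }
    }
}
// One database round-trip for everything that was not cached (hypothetical DAO call).
Map<RingPair, Map<String, Object>> fetched = findNodesByRingPairs(missing);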
Also, you should check whether concurrent lookups in the map are faster than single-threaded ones in your case. Perhaps parallelizing only the query itself, and not the cache lookup, could yield better results.
When using a hash map, it's important to evenly distribute the keys over the buckets.
If all keys end up in the same bucket, you essentially end up with a list.
Is there a way to "audit" a HashMap in Java in order to see how well the keys are distributed?
I tried subtyping it and iterating Entry<K,V>[] table, but it's not visible.
I tried subtyping it and iterating Entry[] table, but it's not visible
Use the Reflection API!
import java.lang.reflect.Field;
import java.util.HashMap;
import java.util.Map;

public class Main {

    // This is to simulate instances which are not equal but go to the same bucket.
    static class A {
        @Override
        public boolean equals(Object obj) { return false; }
        @Override
        public int hashCode() { return 42; }
    }

    public static void main(String[] args) throws Exception {
        // Test data
        HashMap<A, String> map = new HashMap<A, String>(4);
        map.put(new A(), "abc");
        map.put(new A(), "def");

        // Access to the internal table
        Class clazz = map.getClass();
        Field table = clazz.getDeclaredField("table");
        table.setAccessible(true);
        Map.Entry<Integer, String>[] realTable = (Map.Entry<Integer, String>[]) table.get(map);

        // Iterate and do pretty printing
        for (int i = 0; i < realTable.length; i++) {
            System.out.println(String.format("Bucket : %d, Entry: %s", i, bucketToString(realTable[i])));
        }
    }

    private static String bucketToString(Map.Entry<Integer, String> entry) throws Exception {
        if (entry == null) return null;
        StringBuilder sb = new StringBuilder();

        // Access to the "next" field of HashMap$Node
        Class clazz = entry.getClass();
        Field next = clazz.getDeclaredField("next");
        next.setAccessible(true);

        // Going through the bucket
        while (entry != null) {
            sb.append(entry);
            entry = (Map.Entry<Integer, String>) next.get(entry);
            if (null != entry) sb.append(" -> ");
        }
        return sb.toString();
    }
}
In the end you'll see something like this in STDOUT:
Bucket : 0, Entry: null
Bucket : 1, Entry: null
Bucket : 2, Entry: Main$A#2a=abc -> Main$A#2a=def
Bucket : 3, Entry: null
HashMap uses the hash values produced by the hashCode() method of your key objects, so I guess you are really asking how evenly distributed those hash code values are. You can get hold of the key objects using Map.keySet().
Now, the OpenJDK and Oracle implementations of HashMap do not use the keys' hash codes directly, but apply another spreading function to the provided hashes before distributing them over the buckets. You should not rely on or use this implementation detail, so you ought to ignore it and simply ensure that the hashCode() methods of your key values are well distributed.
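For reference, the spreading function in question looks like this in Java 8 (again, an implementation detail, not something to code against):
// java.util.HashMap.hash(Object) in Java 8: folds the high 16 bits of the
// hash code into the low 16 bits, since small tables only use the low bits.
static final int hash(Object key) {
    int h;
    return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
}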
Examining the actual hash codes of some sample key objects is unlikely to tell you anything useful unless your hash code method is very poor. You would do better with a basic theoretical analysis of your hash code method. This is not as scary as it might sound. You may (indeed, you have no choice but to) assume that the hash code methods of the supplied Java classes are well distributed. Then you just need to check that the means you use for combining the hash codes of your data members behaves well for the expected values of those members. Only if your data members have values that are highly correlated in a peculiar way is this likely to be a problem.
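As an illustration of such a combination (a sketch; the class is invented for the example):
import java.util.Objects;

final class Point {
    final int x, y;

    Point(int x, int y) { this.x = x; this.y = y; }

    @Override
    public int hashCode() {
        // The usual 31-based composition: as long as x and y are themselves
        // well distributed, the combined code is too. Trouble only arises if
        // x and y are correlated in a way that makes 31 * x + y cluster.
        return Objects.hash(x, y);
    }

    @Override
    public boolean equals(Object o) {
        return o instanceof Point && ((Point) o).x == x && ((Point) o).y == y;
    }
}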
You can use reflection to access the hidden fields:
HashMap map = ...;

// get the HashMap#table field
Field tableField = HashMap.class.getDeclaredField("table");
tableField.setAccessible(true);
Object[] table = (Object[]) tableField.get(map);
int[] counts = new int[table.length];

// get the HashMap.Node#next field
Class<?> entryClass = table.getClass().getComponentType();
Field nextField = entryClass.getDeclaredField("next");
nextField.setAccessible(true);

for (int i = 0; i < table.length; i++) {
    Object e = table[i];
    int count = 0;
    if (e != null) {
        do {
            count++;
        } while ((e = nextField.get(e)) != null);
    }
    counts[i] = count;
}
Now you have an array of the entry counts for each bucket.
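A possible follow-up, assuming the counts array from above, to turn the raw counts into a quick summary:
// Summarize the distribution: how many buckets are in use and how long the
// longest chain is. A healthy map has short chains spread over many buckets.
int used = 0, max = 0;
for (int c : counts) {
    if (c > 0) used++;
    max = Math.max(max, c);
}
System.out.printf("buckets=%d nonEmpty=%d longestChain=%d%n",
        counts.length, used, max);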
Client.java
public class Client {
    public static void main(String[] args) {
        Map<Example, Number> m = new HashMap<>();
        Example e1 = new Example(100); // point 1
        Example e2 = new Example(200); // point 2
        Example e3 = new Example(300); // point 3
        m.put(e1, 10);
        m.put(e2, 20);
        m.put(e3, 30);
        System.out.println(m); // point 4
    }
}
Example.java
public class Example {
    int s;

    Example(int s) {
        this.s = s;
    }

    @Override
    public int hashCode() {
        return 5; // every instance hashes to the same value
    }
}
Now at point 1, point 2 and point 3 in Client.java, we insert 3 keys of type Example into the HashMap m. Since hashCode() is overridden in Example.java, all three keys e1, e2, e3 return the same hashcode, and hence land in the same bucket of the HashMap.
Now the problem is how to see the distribution of keys.
Approach:
Insert a debug point at point 4 in Client.java.
Debug the Java application.
Inspect m.
Inside m, you will find a table array of type HashMap$Node with size 16.
This is literally the hash table. Each index holds a linked list of Entry objects that were inserted into the HashMap. Each non-null index has a hash variable that corresponds to the hash value returned by the hash() method of HashMap. This hash value is then passed to the indexFor() method of HashMap to find the index of the table array where the Entry object will be inserted. (Refer to #Rahul's link in the comments to the question to understand the concept of hash and indexFor.)
For the case above, if we inspect table, we find all indices but one are null.
We inserted three keys but can see only one entry; that is, all three keys were inserted into the same bucket, the same index of table.
Inspect that table array element (in this case it will be at index 5): key corresponds to e1, while value corresponds to 10 (point 1).
The next variable here points to the next node of the linked list, i.e. the next Entry object, which is (e2, 20) in our case.
So in this way you can inspect the HashMap.
Also, I would recommend you go through the internal implementation of HashMap to understand it by heart.
Hope it helped.
The problem I have is an example of something I've seen often. I have a series of strings (one string per line, let's say) as input, and all I need to do is return how many times each string has appeared. What is the most elegant way to solve this, without using a trie or other string-specific structure? The solution I've used in the past has been a hashtable-esque collection of custom-made (String, integer) objects that implement Comparable to keep track of how many times each string has appeared, but this method seems clunky for several reasons:
1) This method requires the creation of a compare function which is identical to String's compareTo().
2) The impression I get is that I'm misusing TreeSet, which has been my collection of choice. Updating the counter for a given string requires checking whether the object is in the set, removing the object, updating the object, and then reinserting it. This seems wrong.
Is there a more clever way to solve this problem? Perhaps there is a better Collections interface I could use to solve this problem?
Thanks.
One possibility:
public class Counter {
    public int count = 1;
}

public void count(String[] values) {
    Map<String, Counter> stringMap = new HashMap<String, Counter>();
    for (String value : values) {
        Counter count = stringMap.get(value);
        if (count != null) {
            count.count++;
        } else {
            stringMap.put(value, new Counter());
        }
    }
}
In this way you still keep a map, but at least you don't need to regenerate the entry every time you match a string that is already present: you can access the Counter object, which is a wrapper around an int, and increase its value by one, optimizing access to the map.
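As an aside (not part of the original answer): on Java 8+ the same counting pattern collapses into Map.merge, reusing the values array from the method above:
// merge() inserts 1 for a new key, or applies Integer::sum to the old value.
Map<String, Integer> counts = new HashMap<>();
for (String value : values) {
    counts.merge(value, 1, Integer::sum);
}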
TreeMap is much better for this problem, or better yet, Guava's Multiset.
To use a TreeMap, you'd use something like
Map<String, Integer> map = new TreeMap<>();
for (String word : words) {
    Integer count = map.get(word);
    if (count == null) {
        map.put(word, 1);
    } else {
        map.put(word, count + 1);
    }
}
// print out each word and each count:
for (Map.Entry<String, Integer> entry : map.entrySet()) {
    System.out.printf("Word: %s Count: %d%n", entry.getKey(), entry.getValue());
}
Integer theCount = map.get("the");
if (theCount == null) {
    theCount = 0;
}
System.out.println(theCount); // number of times "the" appeared, or 0 if absent
Multiset would be much simpler than that; you'd just write
Multiset<String> multiset = TreeMultiset.create();
for (String word : words) {
    multiset.add(word);
}
for (Multiset.Entry<String> entry : multiset.entrySet()) {
    System.out.printf("Word: %s Count: %d%n", entry.getElement(), entry.getCount());
}
System.out.println(multiset.count("the")); // number of times "the" appeared
You can use a hash-map (no need to "create a comparable function"):
Map<String,Integer> count(String[] strings)
{
    Map<String,Integer> map = new HashMap<String,Integer>();
    for (String key : strings)
    {
        Integer value = map.get(key);
        if (value == null)
            map.put(key, 1);
        else
            map.put(key, value + 1);
    }
    return map;
}
Here is how you can use this method in order to print (for example) the string-count of your input:
Map<String,Integer> map = count(input);
for (String key : map.keySet())
    System.out.println(key + " " + map.get(key));
You can use a Bag data structure from Apache Commons Collections, like the HashBag.
A Bag does exactly what you need: It keeps track of how often an element got added to the collections.
HashBag<String> bag = new HashBag<>();
bag.add("foo");
bag.add("foo");
bag.getCount("foo"); // 2
I'm trying to use a Guava Cache as a replacement for the ConcurrentLinkedHashMap. However I found that while the ConcurrentLinkedHashMap allowed me to iterate over the map in order of Insertion, Guava's asMap() method doesn't return elements in any particular order. Am I missing something, or is this functionality simply not available?
Example (trying to print the keys, the values, and the entries):
Cache<Integer, Integer> cache = CacheBuilder.newBuilder().maximumSize(10).initialCapacity(10)
        .expireAfterAccess(10000, TimeUnit.SECONDS).build();
cache.put(1, 1);
cache.put(2, 2);
cache.put(3, 3);
cache.put(4, 4);
cache.put(5, 5);
cache.put(6, 6);

Iterator<Integer> iter1 = cache.asMap().keySet().iterator();
System.out.println("Keys");
while (iter1.hasNext())
    System.out.println(iter1.next());

System.out.println("Values");
Iterator<Integer> iter2 = cache.asMap().values().iterator();
while (iter2.hasNext())
    System.out.println(iter2.next());

System.out.println("Entries");
Iterator<Entry<Integer, Integer>> iter3 = cache.asMap().entrySet().iterator();
while (iter3.hasNext()) {
    Entry<Integer, Integer> entry = iter3.next();
    System.out.println(entry.getKey() + " " + entry.getValue());
}
Prints:
Keys
2
6
1
4
3
5
Values
2
6
1
4
3
5
Entries
2 2
6 6
1 1
4 4
3 3
5 5
A CacheWriter will allow your code to be invoked during an explicit write or on removal. For a loading cache, you would have to perform the same work within the loader. That too is performed under the entry's lock so you can assume atomicity. This should let you maintain the ordering without relying on the cache's internal data structures. Note that if the work when performing the ordered iteration is expensive, you might want to copy it inside the lock and then do the work outside so as not to block cache writes.
LinkedHashMap<K, V> orderedMap = new LinkedHashMap<>();
LoadingCache<K, V> cache = Caffeine.newBuilder()
    .writer(new CacheWriter<K, V>() {
        public void write(K key, V value) {
            synchronized (orderedMap) {
                orderedMap.put(key, value);
            }
        }
        public void delete(K key, V value, RemovalCause cause) {
            if (cause == RemovalCause.REPLACED) {
                return;
            }
            synchronized (orderedMap) {
                orderedMap.remove(key);
            }
        }
    })
    .maximumSize(1_000)
    .build(key -> {
        V value = ...
        synchronized (orderedMap) {
            orderedMap.put(key, value);
        }
        return value;
    });

cache.put(key1, value);   // calls writer under lock
cache.get(key2);          // calls loader under lock; not writer
cache.invalidate(key1);   // calls writer under lock
cache.policy().eviction().get().setMaximum(0); // calls writer under lock

synchronized (orderedMap) {
    for (K key : orderedMap.keySet()) {
        // do work, but blocks writes!
    }
}
(Answering my own question)
It seems fge's answer is correct, and the Guava Cache cannot be iterated according to the order of insertion. As a workaround, I used the previously noted ConcurrentLinkedHashMap, which is less feature rich, but allows for ordered iteration.
I'd still appreciate an official answer from someone on the Guava team, since this seems to indicate that ConcurrentLinkedHashMap is not fully integrated into Guava (contrary to what the ConcurrentLinkedHashMap documentation suggests).
I have a program where I am trying to understand thread parallelism. This program deals with coin flips and counts the number of heads and tails (and the total number of coin flips).
Please see the following code:
import java.util.Random;
import java.util.concurrent.ConcurrentHashMap;
public class CoinFlip {
    // main
    public static void main(String[] args) {
        if (args.length != 2) {
            System.out.println("CoinFlip #threads #iterations");
            return;
        }
        // check if arguments are integers
        int numberOfThreads = 0;
        long iterations = 0;
        try {
            numberOfThreads = Integer.parseInt(args[0]);
            iterations = Long.parseLong(args[1]);
        } catch (NumberFormatException e) {
            System.out.println("error: I asked for numbers mate.");
            System.out.println("error: " + e);
            System.exit(1);
        }

        // ------------------------------
        // set time field
        // ------------------------------
        // create a hashmap
        ConcurrentHashMap<String, Long> universalMap = new ConcurrentHashMap<String, Long>();
        // store count for heads, tails and iterations
        universalMap.put("HEADS", new Long(0));
        universalMap.put("TAILS", new Long(0));
        universalMap.put("ITERATIONS", new Long(0));

        long startTime = System.currentTimeMillis();
        Thread[] doFlip = new Thread[numberOfThreads];
        for (int i = 0; i < numberOfThreads; i++) {
            doFlip[i] = new Thread(new DoFlip(iterations / numberOfThreads, universalMap));
            doFlip[i].start();
        }
        for (int i = 0; i < numberOfThreads; i++) {
            try {
                doFlip[i].join();
            } catch (InterruptedException e) {
                System.out.println(e);
            }
        }
        // log time taken to accomplish task
        long elapsedTime = System.currentTimeMillis() - startTime;
        System.out.println("Runtime:" + elapsedTime);

        // print the output to check if the values are legal
        // iterations = heads + tails = args[1]
        System.out.println(
            universalMap.get("HEADS") + " " +
            universalMap.get("TAILS") + " " +
            universalMap.get("ITERATIONS") + "."
        );
        return;
    }

    private static class DoFlip implements Runnable {
        // local counters for heads/tails/count
        long heads = 0, tails = 0, iterations = 0;
        Random randomHT = new Random();

        // constructor values -----------------------
        long times = 0; // number of iterations
        ConcurrentHashMap<String, Long> map; // pointer to hash map

        DoFlip(long times, ConcurrentHashMap<String, Long> map) {
            this.times = times;
            this.map = map;
        }

        public void run() {
            while (this.times > 0) {
                int r = randomHT.nextInt(2); // 0 and 1
                if (r == 1) {
                    this.heads++;
                } else {
                    this.tails++;
                }
                // System.out.println("Happening...");
                this.iterations++;
                this.times--;
            }
            updateStats();
        }

        public void updateStats() {
            // read from hashmap and get the existing values
            Long nHeads = (Long) this.map.get("HEADS");
            Long nTails = (Long) this.map.get("TAILS");
            Long nIterations = (Long) this.map.get("ITERATIONS");
            // update values
            nHeads = nHeads + this.heads;
            nTails = nTails + this.tails;
            nIterations = nIterations + this.iterations;
            // push updated values to hashmap
            this.map.put("HEADS", nHeads);
            this.map.put("TAILS", nTails);
            this.map.put("ITERATIONS", nIterations);
        }
    }
}
I am using a ConcurrentHashMap to store the different counts. Apparently, the program sometimes returns wrong values.
I wrote a Perl script to check the sums of the heads and tails values (individually for each thread), and those seem correct. I cannot understand why I get different values from the hashmap.
A concurrent hash map gives you guarantees about the visibility of changes to the map itself, not to its values. In this case you retrieve some values from the map, hold them for some arbitrary amount of time, then store them into the map again. Between the read and the consequent write, though, any number of operations might have happened on the map.
The concurrent in ConcurrentHashMap just guarantees, for example, that if I put a value into the map, I will actually be able to read that value in another thread (i.e., it will be visible).
What you need to do is ensure that all threads accessing the map wait their turn, so to speak, when updating the shared counters. To do this, you can either use an atomic operation like addAndGet on an atomic value type such as AtomicLong (which means storing AtomicLong values in the map instead of Long):
this.map.get("HEADS").addAndGet(this.heads);
or you need to synchronize both the read and the write manually (most easily accomplished by synchronizing on the map itself):
synchronized (this.map) {
    Long currentHeads = this.map.get("HEADS");
    this.map.put("HEADS", Long.valueOf(currentHeads.longValue() + this.heads));
}
Personally, I prefer to leverage the SDK whenever I can, so I would go with the use of an Atomic data type.
You should use AtomicLong values, create them only once, and then increment them in place instead of get/put:
ConcurrentHashMap<String, AtomicLong> universalMap = new ConcurrentHashMap<String, AtomicLong>();
...
universalMap.put("HEADS", new AtomicLong(0));
universalMap.put("TAILS", new AtomicLong(0));
universalMap.put("ITERATIONS", new AtomicLong(0));
...
public void updateStats() {
    // add this thread's local tallies to the shared atomic counters
    this.map.get("HEADS").getAndAdd(heads);
    this.map.get("TAILS").getAndAdd(tails);
    this.map.get("ITERATIONS").getAndAdd(iterations);
}
Long is immutable.
An example:
Thread 1: get 0
Thread 2: get 0
Thread 2: put 10
Thread 3: get 10
Thread 3: put 15
Thread 1: put 5
Now your map contains 5 instead of 20
Basically your problem is not the map itself. With AtomicLong values in place you no longer modify the map's structure, so you can even use a regular HashMap; of course you then have to make the map field final.
A couple of things. One, you really don't need to use a ConcurrentHashMap. A ConcurrentHashMap is only useful when you are dealing with concurrent puts/removes. In this case the map is fairly static as far as the keys go, so you can simply use an unmodifiable map to prove this.
Finally, if you are dealing with concurrent adds, you should really consider using a LongAdder. It scales far better when many parallel adds occur, in situations where you don't need the total until the end.
public class HeadsTails {
    private final Map<String, LongAdder> map;

    public HeadsTails() {
        Map<String, LongAdder> local = new HashMap<String, LongAdder>();
        local.put("HEADS", new LongAdder());
        local.put("TAILS", new LongAdder());
        local.put("ITERATIONS", new LongAdder());
        map = Collections.unmodifiableMap(local);
    }

    public void count() {
        map.get("HEADS").increment();
        map.get("TAILS").increment();
    }

    public void print() {
        System.out.println(map.get("HEADS").sum());
        // etc...
    }
}
I mean, in reality I wouldn't even use a map...
public class HeadsTails {
    private final LongAdder heads = new LongAdder();
    private final LongAdder tails = new LongAdder();
    private final LongAdder iterations = new LongAdder();

    public void count() {
        heads.increment();
        tails.increment();
    }

    public void print() {
        System.out.println(iterations.sum());
    }
}
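To round this off, here is a hedged sketch (class and names invented for the example) of how the flip loop could feed such adders directly, with no map at all:
import java.util.Random;
import java.util.concurrent.atomic.LongAdder;

public class FlipWithAdders {
    public static void main(String[] args) throws InterruptedException {
        final LongAdder heads = new LongAdder();
        final LongAdder tails = new LongAdder();
        Thread[] threads = new Thread[4];
        for (int t = 0; t < threads.length; t++) {
            threads[t] = new Thread(() -> {
                Random random = new Random();
                for (int i = 0; i < 1_000_000; i++) {
                    if (random.nextInt(2) == 1) {
                        heads.increment(); // LongAdder keeps per-thread cells, so contention stays low
                    } else {
                        tails.increment();
                    }
                }
            });
            threads[t].start();
        }
        for (Thread thread : threads) {
            thread.join();
        }
        // sum() is exact here because all updating threads have finished
        System.out.println(heads.sum() + " + " + tails.sum() + " = "
                + (heads.sum() + tails.sum())); // always 4,000,000
    }
}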