Java map concurrent update

I'm trying to create a Map with int values and increment them from multiple threads. Two or more threads might increment the same key.
The ConcurrentHashMap documentation was very unclear to me, since it says:
Retrieval operations (including get) generally do not block, so may overlap with update operations (including put and remove)
I wonder whether the following code using ConcurrentHashMap will work correctly:
myMap.put(X, myMap.get(X) + 1);
If not, how can I manage such a thing?

A concurrent map will not make this code thread-safe by itself. You can still get a race condition:
Thread-1: x = 1, get(x)
Thread-2: x = 1, get(x)
Thread-1: put(x + 1) => 2
Thread-2: put(x + 1) => 2
Two increments happened, but the value only went up by 1. You need a concurrent map only if you are modifying the map itself (adding or removing keys), not its content. Even a plain HashMap is thread-safe for concurrent reads, as long as the map is no longer mutated.
So instead of a thread-safe map of a primitive type, you need a thread-safe wrapper for the value: either something from java.util.concurrent.atomic, or your own locked container if you need an arbitrary type (a minimal sketch follows below).
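For illustration, a minimal sketch of such a hand-rolled locked container might look like the class below (the name LockedCounter is made up for this example; in practice AtomicInteger already does the same job more efficiently):
class LockedCounter {
    private int value;
    // Every access goes through the same intrinsic lock, so no increment is lost.
    public synchronized int incrementAndGet() {
        return ++value;
    }
    public synchronized int get() {
        return value;
    }
}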

One idea would be to combine ConcurrentMap with AtomicInteger, which has an atomic increment method (incrementAndGet).
AtomicInteger current = map.putIfAbsent(key, new AtomicInteger(1));
int newValue = current == null ? 1 : current.incrementAndGet();
or (more efficiently, thanks @Keppil) with an extra guard to avoid unnecessary object creation:
AtomicInteger current = map.get(key);
if (current == null){
current = map.putIfAbsent(key, new AtomicInteger(1));
}
int newValue = current == null ? 1 : current.incrementAndGet();
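On Java 8 and later (an assumption; the original answer predates it), the same guard collapses into a single call, which is atomic when the map is a ConcurrentHashMap:
// computeIfAbsent creates the counter atomically on first use, so no extra guard is needed
int newValue = map.computeIfAbsent(key, k -> new AtomicInteger(0)).incrementAndGet();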

Best practice: you can use HashMap with AtomicInteger values, provided all keys are put into the map before the threads start, so the HashMap itself is never structurally modified concurrently and only the AtomicInteger values are updated.
Test code:
public class HashMapAtomicIntegerTest {
public static final int KEY = 10;
public static void main(String[] args) {
HashMap<Integer, AtomicInteger> concurrentHashMap = new HashMap<Integer, AtomicInteger>();
concurrentHashMap.put(HashMapAtomicIntegerTest.KEY, new AtomicInteger());
List<HashMapAtomicCountThread> threadList = new ArrayList<HashMapAtomicCountThread>();
for (int i = 0; i < 500; i++) {
HashMapAtomicCountThread testThread = new HashMapAtomicCountThread(
concurrentHashMap);
testThread.start();
threadList.add(testThread);
}
int index = 0;
while (true) {
for (int i = index; i < 500; i++) {
HashMapAtomicCountThread testThread = threadList.get(i);
if (testThread.isAlive()) {
break;
} else {
index++;
}
}
if (index == 500) {
break;
}
}
System.out.println("The result value should be " + 5000000
+ ",actually is"
+ concurrentHashMap.get(HashMapAtomicIntegerTest.KEY));
}
}
class HashMapAtomicCountThread extends Thread {
HashMap<Integer, AtomicInteger> concurrentHashMap = null;
public HashMapAtomicCountThread(
HashMap<Integer, AtomicInteger> concurrentHashMap) {
this.concurrentHashMap = concurrentHashMap;
}
@Override
public void run() {
for (int i = 0; i < 10000; i++) {
concurrentHashMap.get(HashMapAtomicIntegerTest.KEY)
.getAndIncrement();
}
}
}
Results:
The result value should be 5000000,actually is5000000
Or use HashMap with a synchronized block; this also works, but is much slower than the former:
public class HashMapSynchronizeTest {
public static final int KEY = 10;
public static void main(String[] args) {
HashMap<Integer, Integer> hashMap = new HashMap<Integer, Integer>();
hashMap.put(KEY, 0);
List<HashMapSynchronizeThread> threadList = new ArrayList<HashMapSynchronizeThread>();
for (int i = 0; i < 500; i++) {
HashMapSynchronizeThread testThread = new HashMapSynchronizeThread(
hashMap);
testThread.start();
threadList.add(testThread);
}
int index = 0;
while (true) {
for (int i = index; i < 500; i++) {
HashMapSynchronizeThread testThread = threadList.get(i);
if (testThread.isAlive()) {
break;
} else {
index++;
}
}
if (index == 500) {
break;
}
}
System.out.println("The result value should be " + 5000000
+ ",actually is" + hashMap.get(KEY));
}
}
class HashMapSynchronizeThread extends Thread {
HashMap<Integer, Integer> hashMap = null;
public HashMapSynchronizeThread(
HashMap<Integer, Integer> hashMap) {
this.hashMap = hashMap;
}
@Override
public void run() {
for (int i = 0; i < 10000; i++) {
synchronized (hashMap) {
hashMap.put(HashMapSynchronizeTest.KEY,
hashMap
.get(HashMapSynchronizeTest.KEY) + 1);
}
}
}
}
Results:
The result value should be 5000000,actually is5000000
Using ConcurrentHashMap with a plain get-then-put gives wrong results:
public class ConcurrentHashMapTest {
public static final int KEY = 10;
public static void main(String[] args) {
ConcurrentHashMap<Integer, Integer> concurrentHashMap = new ConcurrentHashMap<Integer, Integer>();
concurrentHashMap.put(KEY, 0);
List<CountThread> threadList = new ArrayList<CountThread>();
for (int i = 0; i < 500; i++) {
CountThread testThread = new CountThread(concurrentHashMap);
testThread.start();
threadList.add(testThread);
}
int index = 0;
while (true) {
for (int i = index; i < 500; i++) {
CountThread testThread = threadList.get(i);
if (testThread.isAlive()) {
break;
} else {
index++;
}
}
if (index == 500) {
break;
}
}
System.out.println("The result value should be " + 5000000
+ ",actually is" + concurrentHashMap.get(KEY));
}
}
class CountThread extends Thread {
ConcurrentHashMap<Integer, Integer> concurrentHashMap = null;
public CountThread(ConcurrentHashMap<Integer, Integer> concurrentHashMap) {
this.concurrentHashMap = concurrentHashMap;
}
@Override
public void run() {
for (int i = 0; i < 10000; i++) {
concurrentHashMap.put(ConcurrentHashMapTest.KEY,
concurrentHashMap.get(ConcurrentHashMapTest.KEY) + 1);
}
}
}
Results:
The result value should be 5000000,actually is11759
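Not part of the original answer, but worth noting: the lost updates come from the non-atomic get-then-put in CountThread.run(). On Java 8+ the same ConcurrentHashMap test passes if the read-modify-write is done atomically, for example:
// compute runs the remapping function atomically for the given key
concurrentHashMap.compute(ConcurrentHashMapTest.KEY, (k, v) -> v == null ? 1 : v + 1);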

You could just put the operation in a synchronized (myMap) {...} block.
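A minimal sketch of that approach (assuming every thread that touches myMap, readers included, synchronizes on the same object):
synchronized (myMap) {
    Integer old = myMap.get(X);
    myMap.put(X, (old == null ? 0 : old) + 1);
}
This is simple and correct, but it serializes all access to the map, so it scales worse than the per-key atomic counters shown above.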

Your current code changes the values of your map concurrently, so this will not work.
If multiple threads can add keys to your map, you have to use a concurrent map such as ConcurrentHashMap, even with non-thread-safe values like Integer. ConcurrentMap.replace will then do what you want (or use AtomicInteger to simplify your code); a sketch of the replace-based retry loop is shown below.
If your threads only change the values (and never add or change keys) of your map, then you can use a standard map storing thread-safe values like AtomicInteger. Each thread then calls map.get(key).incrementAndGet(), for instance.
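A sketch of the ConcurrentMap.replace approach mentioned above (the String key type is just for illustration; the key is assumed to be already present, and the loop retries because replace is a compare-and-swap that can fail under contention):
ConcurrentMap<String, Integer> map = new ConcurrentHashMap<>();
String key = "X";          // the key being incremented (illustrative)
map.putIfAbsent(key, 0);
int oldValue;
do {
    oldValue = map.get(key);
    // replace only succeeds if the value is still oldValue; otherwise another thread won, so retry
} while (!map.replace(key, oldValue, oldValue + 1));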

Related

Incrementing and removing elements of ConcurrentHashMap

There is a class Counter, which holds a set of keys and allows incrementing the value of each key and reading all values. The task I'm trying to solve is the same as in Atomically incrementing counters stored in ConcurrentHashMap. The difference is that the set of keys is unbounded, so new keys are added frequently.
In order to reduce memory consumption, I clear values after they are read; this happens in Counter.getAndClear(). Keys are also removed, and this seems to break things.
One thread increments random keys and another thread takes snapshots of all values and clears them.
The code is below:
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ThreadLocalRandom;
import java.util.Map;
import java.util.HashMap;
import java.lang.Thread;
class HashMapTest {
private final static int hashMapInitSize = 170;
private final static int maxKeys = 100;
private final static int nIterations = 10_000_000;
private final static int sleepMs = 100;
private static class Counter {
private ConcurrentMap<String, Long> map;
public Counter() {
map = new ConcurrentHashMap<String, Long>(hashMapInitSize);
}
public void increment(String key) {
Long value;
do {
value = map.computeIfAbsent(key, k -> 0L);
} while (!map.replace(key, value, value + 1L));
}
public Map<String, Long> getAndClear() {
Map<String, Long> mapCopy = new HashMap<String, Long>();
for (String key : map.keySet()) {
Long removedValue = map.remove(key);
if (removedValue != null)
mapCopy.put(key, removedValue);
}
return mapCopy;
}
}
// The code below is used for testing
public static void main(String[] args) throws InterruptedException {
Counter counter = new Counter();
Thread thread = new Thread(new Runnable() {
public void run() {
for (int j = 0; j < nIterations; j++) {
int index = ThreadLocalRandom.current().nextInt(maxKeys);
counter.increment(Integer.toString(index));
}
}
}, "incrementThread");
Thread readerThread = new Thread(new Runnable() {
public void run() {
long sum = 0;
boolean isDone = false;
while (!isDone) {
try {
Thread.sleep(sleepMs);
}
catch (InterruptedException e) {
isDone = true;
}
Map<String, Long> map = counter.getAndClear();
for (Map.Entry<String, Long> entry : map.entrySet()) {
Long value = entry.getValue();
sum += value;
}
System.out.println("mapSize: " + map.size());
}
System.out.println("sum: " + sum);
System.out.println("expected: " + nIterations);
}
}, "readerThread");
thread.start();
readerThread.start();
thread.join();
readerThread.interrupt();
readerThread.join();
// Ensure that counter is empty
System.out.println("elements left in map: " + counter.getAndClear().size());
}
}
While testing I have noticed that some increments are lost. I get the following results:
sum: 9993354
expected: 10000000
elements left in map: 0
If you can't reproduce this error (the sum being less than expected), you can try increasing maxKeys by a few orders of magnitude, decreasing hashMapInitSize, or increasing nIterations (the latter also increases the run time). I have also included the testing code (the main method) in case it has any errors.
I suspect that the error happens when the capacity of the ConcurrentHashMap is increased at runtime. On my computer the code appears to work correctly when hashMapInitSize is 170, but fails when it is 171. I believe that a size of 171 triggers a capacity increase (128 / 0.75 == 170.66, where 0.75 is the default load factor of the hash map).
So, the question is: am I using the remove, replace and computeIfAbsent operations correctly? I assume they are atomic operations on ConcurrentHashMap, based on the answers to Use of ConcurrentHashMap eliminates data-visibility troubles?. If so, why are some increments lost?
EDIT:
I missed an important detail: increment() is supposed to be called much more frequently than getAndClear(), which is why I try to avoid any explicit locking in increment(). However, I'm going to test the performance of different versions later to see whether this is really an issue.
I guess the problem is the use of remove while iterating over the keySet. This is what the JavaDoc says for Map#keySet() (my emphasis):
Returns a Set view of the keys contained in this map. The set is backed by the map, so changes to the map are reflected in the set, and vice-versa. If the map is modified while an iteration over the set is in progress (except through the iterator's own remove operation), the results of the iteration are undefined.
The JavaDoc for ConcurrentHashMap gives further clues:
Similarly, Iterators, Spliterators and Enumerations return elements reflecting the state of the hash table at some point at or since the creation of the iterator/enumeration.
The conclusion is that mutating the map while iterating over the keys is not predictable.
One solution is to create a new map for the getAndClear() operation and just return the old map. The switch has to be protected, and in the example below I used a ReentrantReadWriteLock:
class HashMapTest {
private final static int hashMapInitSize = 170;
private final static int maxKeys = 100;
private final static int nIterations = 10_000_000;
private final static int sleepMs = 100;
private static class Counter {
private ConcurrentMap<String, Long> map;
ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
ReadLock readLock = lock.readLock();
WriteLock writeLock = lock.writeLock();
public Counter() {
map = new ConcurrentHashMap<>(hashMapInitSize);
}
public void increment(String key) {
readLock.lock();
try {
map.merge(key, 1L, Long::sum);
} finally {
readLock.unlock();
}
}
public Map<String, Long> getAndClear() {
ConcurrentMap<String, Long> oldMap;
writeLock.lock();
try {
oldMap = map;
map = new ConcurrentHashMap<>(hashMapInitSize);
} finally {
writeLock.unlock();
}
return oldMap;
}
}
// The code below is used for testing
public static void main(String[] args) throws InterruptedException {
final AtomicBoolean ready = new AtomicBoolean(false);
Counter counter = new Counter();
Thread thread = new Thread(new Runnable() {
public void run() {
for (int j = 0; j < nIterations; j++) {
int index = ThreadLocalRandom.current().nextInt(maxKeys);
counter.increment(Integer.toString(index));
}
}
}, "incrementThread");
Thread readerThread = new Thread(new Runnable() {
public void run() {
long sum = 0;
while (!ready.get()) {
try {
Thread.sleep(sleepMs);
} catch (InterruptedException e) {
//
}
Map<String, Long> map = counter.getAndClear();
for (Map.Entry<String, Long> entry : map.entrySet()) {
Long value = entry.getValue();
sum += value;
}
System.out.println("mapSize: " + map.size());
}
System.out.println("sum: " + sum);
System.out.println("expected: " + nIterations);
}
}, "readerThread");
thread.start();
readerThread.start();
thread.join();
ready.set(true);
readerThread.join();
// Ensure that counter is empty
System.out.println("elements left in map: " + counter.getAndClear().size());
}
}

Division of a task to threads - multi threading

I want to generate pairs from a given large pool of numbers. I am using two for loops and threads. My function getAllPairs() in the code generates pairs from a given array of numbers.
I have an array of length 1000. With one thread, the run time is nearly 15 seconds. Now I want to use 5-6 threads and reduce this time. I am stuck at dividing this task equally among five threads. If not threads, how else can I decrease the run time?
A solution with threads is appreciated, since I have put a lot of time into learning multithreading and would like to apply it.
import java.util.*;
class Pair {
public int x, y;
public Pair(int x, int y) {
this.x = x;
this.y = y;
}
@Override
public String toString(){
return " ( " + x + " ," + y + " ) " ;
}
}
class selectPairs{
private int[] array;
private List<Pair> totalPairs ;
public selectPairs(int[] arr){
array = arr;
}
//set Method
public void settotalPairs(List<Pair> pieces){
totalPairs = pieces;
}
//get Method
public List<Pair> gettotalPairs(){
return totalPairs;
}
// Method to generate pairs
public List<Pair> getAllPairs() {
List<Pair> pairs = new ArrayList<Pair>();
int total = array.length;
for(int i=0; i < total; i++) {
int num1 = array[i];
for(int j=i+1; j < total; j++) {
int num2 = array[j];
pairs.add(new Pair(num1,num2));
}
}
return pairs;
}
}
// Thread class
class ThreadPairs extends Thread {
private Thread t;
selectPairs SP;
ThreadPairs(selectPairs sp){
SP = sp;
}
public void run() {
synchronized(SP) {
List<Pair> PAIRS = SP.getAllPairs();
SP.settotalPairs(PAIRS);
}
}
}
public class TestThread {
public static void main(String args[]) {
int[] a = new int[1000];
for (int i = 0; i < a.length; i++) {
a[i] = i ;
}
selectPairs ob = new selectPairs(a);
ThreadPairs T = new ThreadPairs( ob );
T.start();
while (true) {
try {
T.join();
break;
}
catch(Exception e){
}
}
List<Pair> Total = new ArrayList<Pair>() ;
List<Pair> Temp1 = ob.gettotalPairs();
Total.addAll(Temp1);
System.out.println(Total);
}
}
A solution with a thread pool and a task-splitting strategy that collects all the results:
public class SelectPairs {
private static final int NUM_THREADS = 8;
private int[] array;
public SelectPairs(int[] arr) {
array = arr;
}
// A task-splitting strategy: thread i handles outer indices i, i + numThreads, i + 2 * numThreads, ...
public List<Pair> getPartialPairs(int threadIndex, int numThreads) {
List<Pair> pairs = new ArrayList<Pair>();
int total = array.length;
for (int i = threadIndex; i < total; i += numThreads) {
int num1 = array[i];
for (int j = i + 1; j < total; j++) {
int num2 = array[j];
pairs.add(new Pair(num1, num2));
}
}
return pairs;
}
// Using Callables or Runnables is better than extending Thread.
public static class PartialPairsCall implements Callable<List<Pair>> {
private int thread;
private int totalThreads;
private SelectPairs selectPairs;
public PartialPairsCall(int thread, int totalThreads, SelectPairs selectPairs) {
this.thread = thread;
this.totalThreads = totalThreads;
this.selectPairs = selectPairs;
}
@Override
public List<Pair> call() throws Exception {
return selectPairs.getPartialPairs(thread, totalThreads);
}
}
public static void main(String[] args) throws Exception {
int[] a = new int[1000];
for (int i = 0; i < a.length; i++) {
a[i] = i;
}
SelectPairs sp = new SelectPairs(a);
// Create a thread pool
ExecutorService es = Executors.newFixedThreadPool(NUM_THREADS);
List<Future<List<Pair>>> futures = new ArrayList<>(NUM_THREADS);
// Submit task to every thread:
for (int i = 0; i < NUM_THREADS; i++) {
futures.add(es.submit(new PartialPairsCall(i, NUM_THREADS, sp)));
}
// Collect the results:
List<Pair> result = new ArrayList<>(a.length * (a.length - 1));
for (Future<List<Pair>> future : futures) {
result.addAll(future.get());
}
// Shutdown thread pool
es.shutdown();
System.out.println("result: " + result.size());
}
}
Regarding the multithreading framework, you can use a ThreadPoolExecutor, as was suggested in a comment.
Regarding splitting the workload, the key is splitting the iteration over the array, which is achievable if you give each Runnable task a start and end index to iterate over; a sketch follows below.
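A minimal sketch of that idea (the class name RangePairsTask and the shared synchronized result list are assumptions for this example, not part of the answer above; it reuses the Pair class and java.util imports from the question):
class RangePairsTask implements Runnable {
    private final int[] array;
    private final int start, end;     // this task handles outer indices [start, end)
    private final List<Pair> results; // shared list, e.g. Collections.synchronizedList(new ArrayList<>())
    RangePairsTask(int[] array, int start, int end, List<Pair> results) {
        this.array = array;
        this.start = start;
        this.end = end;
        this.results = results;
    }
    public void run() {
        List<Pair> local = new ArrayList<>();
        for (int i = start; i < end; i++) {
            for (int j = i + 1; j < array.length; j++) {
                local.add(new Pair(array[i], array[j]));
            }
        }
        results.addAll(local); // one bulk add per task keeps lock contention low
    }
}
Note that contiguous ranges give uneven work for this triangular loop (early ranges produce far more pairs), which is why the answer above strides over the array with i += numThreads instead.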

Strange behaviour of synchronized

class TestSync {
public static void main(String[] args) throws InterruptedException {
Counter counter1 = new Counter();
Counter counter2 = new Counter();
Counter counter3 = new Counter();
Counter counter4 = new Counter();
counter1.start();
counter2.start();
counter3.start();
counter4.start();
counter1.join();
counter2.join();
counter3.join();
counter4.join();
for (int i = 1; i <= 100; i++) {
if (values[i] > 1) {
System.out.println(String.format("%d was visited %d times", i, values[i]));
} else if (values[i] == 0) {
System.out.println(String.format("%d wasn't visited", i));
}
}
}
public static Integer count = 0;
public static int[] values = new int[105];
static {
for (int i = 0; i < 105; i++) {
values[i] = 0;
}
}
public static void incrementCount() {
count++;
}
public static int getCount() {
return count;
}
public static class Counter extends Thread {
@Override
public void run() {
do {
synchronized (count) {
incrementCount();
values[getCount()]++;
}
} while (getCount() < 100);
}
}
}
That is code from an online course. My task is to make this code visit each element of the array only once (only elements from 1 to 100). So I added a simple synchronized block to the run method. When I synchronize on values, everything works, but when I synchronize on count, it doesn't.
What is the difference? Both of these objects are static fields of the same class. I have also tried making count volatile, but it hasn't helped.
PS: a lot of elements are visited 2 times and some of them even 3 times. When synchronizing on values, all elements are visited exactly once.
Integer is immutable. The moment you call the increment method, you get a new Integer object and the count reference changes, so the threads end up synchronizing on different objects, which leads to the issue.
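A minimal sketch of one possible fix (an assumption, not spelled out in the answer above): synchronize on something that never changes, for example a dedicated final lock object, instead of on the mutable count reference:
private static final Object lock = new Object();
// inside Counter.run():
do {
    synchronized (lock) {
        if (getCount() >= 100) {
            break;           // re-check under the lock so no thread overshoots 100
        }
        incrementCount();
        values[getCount()]++;
    }
} while (getCount() < 100);
Synchronizing on values, as the question already observed, works for the same reason: that reference never changes.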

Synchronized hashmap read-only access in Java

In Java, there are 3 threads that want read-only access to an immutable hashmap in order to do something. Is the SynchronizedMap class below the fastest solution for that purpose? If not, what would be faster to use?
import com.carrotsearch.hppc.IntObjectMap;
import com.carrotsearch.hppc.IntObjectOpenHashMap;
public class abc {
public static void main(String[] args) {
final IntObjectMap<int[]> map = new IntObjectOpenHashMap<int[]>();
for (int i = 0; i < 4; i++) {
map.put(i, new int[] {1, 2, 3, 4, 5});
}
Thread[] threads = new Thread[3];
class SynchronizedMap {
private final Object syncObject = new Object();
public final int[] read(int i) {
final int[] value;
synchronized (syncObject) {
// code that reads-only immutable map object
value = map.get(i);
}
return value;
}
}
final SynchronizedMap syncMap = new SynchronizedMap();
class AccessMap implements Runnable {
private int id;
AccessMap(int index) { id = index; }
public void run() {
// code that reads-only immutable map object like this:
for (int i = 0; i < 4; i++) {
final int[] array = syncMap.read(i);
for (int j = 0; j < array.length; j++)
System.out.println(id + ": " + array[j] + " ");
}
}
}
for (int i = 0; i < threads.length; i++) {
threads[i] = new Thread(new AccessMap(i) {});
threads[i].start();
}
for (int i = 0; i < threads.length; i++) {
try {
threads[i].join();
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
}
Is SynchronizedMap class below the fastest solution for that purpose?
No. If the HashMap is truly immutable/read-only then a volatile Map<...> is the way to go.
volatile IntObjectMap<int[]> readOnlyMap = new IntObjectOpenHashMap<int[]>();
If you are starting your threads after your map is built then you don't even need the volatile. The only time you would need the volatile is if you are swapping in a new map that is being accessed by currently running threads.
final IntObjectMap<int[]> readOnlyMap = new IntObjectOpenHashMap<int[]>();
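For completeness, a sketch of the swap scenario where volatile does matter (the class name MapHolder is hypothetical; it reuses the hppc IntObjectMap type from the question):
class MapHolder {
    // volatile guarantees that readers see the fully built replacement map
    private volatile IntObjectMap<int[]> current = new IntObjectOpenHashMap<int[]>();
    int[] read(int key) {
        return current.get(key); // read-only access needs no lock
    }
    void swap(IntObjectMap<int[]> freshlyBuiltMap) {
        current = freshlyBuiltMap; // publish the new map to already running threads
    }
}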

Is it a thread-safe mechanism?

Is this class thread-safe?
class Counter {
private ConcurrentMap<String, AtomicLong> map =
new ConcurrentHashMap<String, AtomicLong>();
public long add(String name) {
if (this.map.get(name) == null) {
this.map.putIfAbsent(name, new AtomicLong());
}
return this.map.get(name).incrementAndGet();
}
}
What do you think?
Yes, provided you make the map final. The if is not necessary but you can keep it for performance reasons if you want, although it will most likely not make a noticeable difference:
public long add(String name) {
this.map.putIfAbsent(name, new AtomicLong());
return this.map.get(name).incrementAndGet();
}
EDIT
For the sake of it, I have quickly tested both implementations (with and without the check). 10 million calls on the same string take:
250 ms with the check
480 ms without the check
which confirms what I said: unless you call this method millions of times, or it is in a performance-critical part of your code, it does not make a difference.
EDIT 2
Full test results below - see BetterCounter, which yields even better results. Note that the test is very specific (no contention, and the get always succeeds) and does not necessarily correspond to your usage.
Counter: 482 ms
LazyCounter: 207 ms
MPCounter: 303 ms
BetterCounter: 135 ms
public class Test {
public static void main(String args[]) throws IOException {
Counter count = new Counter();
LazyCounter lazyCount = new LazyCounter();
MPCounter mpCount = new MPCounter();
BetterCounter betterCount = new BetterCounter();
//WARM UP
for (int i = 0; i < 10_000_000; i++) {
count.add("abc");
lazyCount.add("abc");
mpCount.add("abc");
betterCount.add("abc");
}
//TEST
long start = System.nanoTime();
for (int i = 0; i < 10_000_000; i++) {
count.add("abc");
}
long end = System.nanoTime();
System.out.println((end - start) / 1000000);
start = System.nanoTime();
for (int i = 0; i < 10_000_000; i++) {
lazyCount.add("abc");
}
end = System.nanoTime();
System.out.println((end - start) / 1000000);
start = System.nanoTime();
for (int i = 0; i < 10_000_000; i++) {
mpCount.add("abc");
}
end = System.nanoTime();
System.out.println((end - start) / 1000000);
start = System.nanoTime();
for (int i = 0; i < 10_000_000; i++) {
betterCount.add("abc");
}
end = System.nanoTime();
System.out.println((end - start) / 1000000);
}
static class Counter {
private final ConcurrentMap<String, AtomicLong> map =
new ConcurrentHashMap<String, AtomicLong>();
public long add(String name) {
this.map.putIfAbsent(name, new AtomicLong());
return this.map.get(name).incrementAndGet();
}
}
static class LazyCounter {
private final ConcurrentMap<String, AtomicLong> map =
new ConcurrentHashMap<String, AtomicLong>();
public long add(String name) {
if (this.map.get(name) == null) {
this.map.putIfAbsent(name, new AtomicLong());
}
return this.map.get(name).incrementAndGet();
}
}
static class BetterCounter {
private final ConcurrentMap<String, AtomicLong> map =
new ConcurrentHashMap<String, AtomicLong>();
public long add(String name) {
AtomicLong counter = this.map.get(name);
if (counter != null)
return counter.incrementAndGet();
AtomicLong newCounter = new AtomicLong();
counter = this.map.putIfAbsent(name, newCounter);
return (counter == null ? newCounter.incrementAndGet() : counter.incrementAndGet());
}
}
static class MPCounter {
private final ConcurrentMap<String, AtomicLong> map =
new ConcurrentHashMap<String, AtomicLong>();
public long add(String name) {
final AtomicLong newVal = new AtomicLong(),
prevVal = map.putIfAbsent(name, newVal);
return (prevVal != null ? prevVal : newVal).incrementAndGet();
}
}
}
EDIT
Yes, if you make the map final. Otherwise, it is not guaranteed that all threads see the most recent version of the map data structure when they call add() for the first time.
Several threads can reach the body of the if(). The putIfAbsent() will make sure that only a single AtomicLong is put into the map.
There should be no way that putIfAbsent() can return without the new value being in the map.
So when the second get() is executed, it will never get a null value and since only a single AtomicLong can have been added to the map, all threads will get the same instance.
[EDIT2] The next question: How efficient is this?
This code is faster since it avoids unnecessary searches:
public long add(String name) {
AtomicLong counter = map.get( name );
if( null == counter ) {
map.putIfAbsent( name, new AtomicLong() );
counter = map.get( name ); // Have to get again!!!
}
return counter.incrementAndGet();
}
This is why I prefer Google's CacheBuilder which has a method that is called when a key can't be found. That way, the map is searched only once and I don't have to create extra instances.
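For reference, a sketch of that CacheBuilder idea (assuming Guava's LoadingCache API; not part of the original answer):
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import java.util.concurrent.atomic.AtomicLong;
class GuavaCounter {
    private final LoadingCache<String, AtomicLong> cache = CacheBuilder.newBuilder()
            .build(new CacheLoader<String, AtomicLong>() {
                @Override
                public AtomicLong load(String key) {
                    return new AtomicLong(); // created once per missing key
                }
            });
    public long add(String name) {
        return cache.getUnchecked(name).incrementAndGet();
    }
}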
No one seems to have the complete solution, which is:
public long add(String name) {
AtomicLong counter = this.map.get(name);
if (counter == null) {
AtomicLong newCounter = new AtomicLong();
counter = this.map.putIfAbsent(name, newCounter);
if(counter == null) {
counter = newCounter;
}
}
return counter.incrementAndGet();
}
What about this:
class Counter {
private final ConcurrentMap<String, AtomicLong> map =
new ConcurrentHashMap<String, AtomicLong>();
public long add(String name) {
this.map.putIfAbsent(name, new AtomicLong());
return this.map.get(name).incrementAndGet();
}
}
The map should be final to guarantee that it is fully visible to all threads before the first method is invoked (see 17.5, final Field Semantics, in the Java Language Specification for details).
I think the if is redundant, but I hope I'm not overlooking anything.
Edit: added the reference to the Java Language Specification above.
This solution (note that I am showing only the body of the add method -- the rest stays the same!) spares you any calls to get:
final AtomicLong newVal = new AtomicLong(),
prevVal = map.putIfAbsent(name, newVal);
return (prevVal != null? prevVal : newVal).incrementAndGet();
In all probability an extra get is much costlier than an extra new AtomicLong().
I think you would be better off with something like this:
class Counter {
private ConcurrentMap<String, AtomicLong> map = new ConcurrentHashMap<String, AtomicLong>();
public long add(String name) {
AtomicLong counter = this.map.get(name);
if (counter == null) {
AtomicLong newCounter = new AtomicLong();
counter = this.map.putIfAbsent(name, newCounter);
if (counter == null) {
// The new counter was added - use it
counter = newCounter;
}
}
return counter.incrementAndGet();
}
}
Otherwise multiple threads may add simultaneously and you wouldn't notice (since you ignore the value returned by putIfAbsent).
I assume that you never recreate the map.
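On Java 8 and later the whole counter can also be written without AtomicLong, using ConcurrentMap.merge (a sketch under that assumption; not one of the original answers):
class MergeCounter {
    private final ConcurrentMap<String, Long> map = new ConcurrentHashMap<String, Long>();
    public long add(String name) {
        // merge atomically inserts 1 for a missing key, or adds 1 to the existing value
        return this.map.merge(name, 1L, Long::sum);
    }
}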
