Atomic copy-and-clear on Java collection

I know similar questions are often asked, but I could not find anything that would help me.
The situation is like this:
One worker is adding elements to a collection.
A second worker waits for some time (the maturity of the elements) or for the collection to reach a certain size, and then starts its job.
The thing is: how do I copy the collection for the second worker (I think it's best to work on a copy) and then clear the original so we won't lose anything (the first worker is writing all the time), while holding the lock on the original collection for as short a time as possible?
Thanks

This kind of thing will be far easier if you use the purpose-built concurrency tools like LinkedBlockingQueue rather than a plain HashSet. Have the producer add elements to the queue, and the consumer can use drainTo to extract elements from the queue in batches as it requires them. There's no need for any synchronization, as BlockingQueue implementations are designed to be threadsafe.
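A minimal sketch of that approach (the class name and batch handling here are illustrative, not from the original answer):

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class BatchConsumerExample {

    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();

    // Producer: keeps adding; no external locking needed.
    public void produce(String element) throws InterruptedException {
        queue.put(element);
    }

    // Consumer: blocks until at least one element exists, then drains
    // whatever has accumulated into a private batch it exclusively owns.
    public void consumeBatch() throws InterruptedException {
        List<String> batch = new ArrayList<>();
        batch.add(queue.take());
        queue.drainTo(batch);
        System.out.println("Processing " + batch.size() + " elements");
    }
}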

Ian's LinkedBlockingQueue solution is the simplest.
For higher throughput (potentially traded off against latency) in a single-producer, single-consumer scenario, you may want to consider the example in java.util.concurrent.Exchanger.
After swapping, you now have the whole collection yourself.
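A rough sketch of that fill-and-swap pattern, adapted from the example in the Exchanger javadoc (the buffer type and batch size are illustrative): each side always owns the buffer it currently holds, so the shared collection never needs a lock.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Exchanger;

public class FillAndDrain {

    private final Exchanger<List<String>> exchanger = new Exchanger<>();

    // Producer thread: fill a buffer, then swap it for the consumer's empty one.
    void produceLoop() throws InterruptedException {
        List<String> buffer = new ArrayList<>();
        while (true) {
            buffer.add("element");
            if (buffer.size() >= 1000) {
                buffer = exchanger.exchange(buffer); // hand over full, get back empty
            }
        }
    }

    // Consumer thread: swap an empty buffer for the producer's full one.
    void consumeLoop() throws InterruptedException {
        List<String> buffer = new ArrayList<>();
        while (true) {
            buffer = exchanger.exchange(buffer); // blocks until the producer swaps
            for (String element : buffer) {
                // process element
            }
            buffer.clear(); // return it empty on the next swap
        }
    }
}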

works for me

import java.util.ArrayList;
import java.util.Collection;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class MyClass {

    private final Map<String, Integer> cachedData = new ConcurrentHashMap<>();

    private final ReadWriteLock lock = new ReentrantReadWriteLock();
    // Writers share the read lock, so they can run concurrently with each other;
    // copyAndFlush takes the write lock, so it runs exclusively.
    private final Lock sharedLock = lock.readLock();
    private final Lock copyAndFlushLock = lock.writeLock();

    public void putData(String key, Integer value) {
        sharedLock.lock();
        try {
            cachedData.put(key, value);
        } finally {
            sharedLock.unlock();
        }
    }

    public Collection<Integer> copyAndFlush() {
        copyAndFlushLock.lock();
        try {
            // Copy before clearing: values() is a live view backed by the map,
            // so returning it directly would leave the caller with an empty
            // collection after clear().
            Collection<Integer> values = new ArrayList<>(cachedData.values());
            cachedData.clear();
            return values;
        } finally {
            copyAndFlushLock.unlock();
        }
    }
}

Related

Processing changing source data in Java Akka streams

2 threads are started. dataListUpdateThread adds the number 2 to a List. processFlowThread sums the values in the same List and prints the summed list to the console. Here is the code:
import akka.NotUsed;
import akka.actor.ActorSystem;
import akka.stream.javadsl.Sink;
import akka.stream.javadsl.Source;

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.ExecutionException;

import static java.lang.Thread.sleep;

public class SourceExample {

    private final static ActorSystem system = ActorSystem.create("SourceExample");

    private static void delayOneSecond() {
        try {
            sleep(1000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }

    private static void printValue(CompletableFuture<Integer> integerCompletableFuture) {
        try {
            System.out.println("Sum is " + integerCompletableFuture.get().intValue());
        } catch (ExecutionException | InterruptedException e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] args) {
        final List<Integer> dataList = new ArrayList<>();

        final Thread dataListUpdateThread = new Thread(() -> {
            while (true) {
                dataList.add(2);
                System.out.println(dataList);
                delayOneSecond();
            }
        });
        dataListUpdateThread.start();

        final Thread processFlowThread = new Thread(() -> {
            while (true) {
                final Source<Integer, NotUsed> source = Source.from(dataList);
                final Sink<Integer, CompletionStage<Integer>> sink =
                        Sink.fold(0, (agg, next) -> agg + next);
                final CompletionStage<Integer> sum = source.runWith(sink, system);
                printValue(sum.toCompletableFuture());
                delayOneSecond();
            }
        });
        processFlowThread.start();
    }
}
I've tried to create the simplest example to frame the question. dataListUpdateThread could be populating the List from a REST service or a Kafka topic instead of just adding the value 2. Instead of using raw Java threads, how should this scenario be implemented? In other words, how do I share dataList with the Akka Stream for processing?
Mutating the collection passed to Source.from will only ever accomplish this by coincidence: if the collection is ever exhausted, Source.from completes the stream. It's intended for finite, strictly evaluated data; the use cases are basically (a) simple examples for the docs and (b) bounding resource consumption when performing an operation for each element of a collection in the background (think a list of URLs that you want to send HTTP requests to).
NB: I haven't written Java to any great extent since the Java 7 days, so I'm not providing Java code, just an outline of approaches.
As mentioned in a prior answer, Source.queue is probably the best option (besides using something like Akka HTTP or an Alpakka connector). In a case such as this, where the stream's materialized value is a future that won't be completed until the stream completes, and where Source.queue will never complete the stream on its own (because there's no way for it to know that its reference is the only one), introducing a KillSwitch and propagating it through viaMat and toMat would give you the ability to decide, from outside the stream, to complete it.
An alternative to Source.queue is Source.actorRef, which lets you complete the stream by sending a distinguished message (akka.Done.done() in the Java API is pretty common for this). That source materializes as an ActorRef to which you can tell messages, and those messages (at least those matching the stream's type) become available for the stream to consume.
With both Source.queue and Source.actorRef, it's often useful to prematerialize them. The alternative, in a situation like your example where you also want the sink's materialized value, is to make heavy use of the Mat operators to combine materialized values: in Scala you can at least use tuples to simplify combining several of them, but in Java, once you go beyond a pair (as you would here with the queue, the kill switch, and the future for the completed value), I'm pretty sure you'd have to define a class just to hold the three.
It's also worth noting that, since Akka Streams run on actors in the background (and thus get scheduled as needed onto the ActorSystem's threads), there's almost never a reason to create a thread on which to run a stream.
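A rough, untested Java sketch of the Source.queue approach (API names as of Akka Streams 2.6; the buffer size and overflow strategy are arbitrary choices). In this minimal version, completing the queue is what ends the stream and therefore completes the fold, so the KillSwitch is omitted:

import akka.actor.ActorSystem;
import akka.japi.Pair;
import akka.stream.OverflowStrategy;
import akka.stream.javadsl.Keep;
import akka.stream.javadsl.Sink;
import akka.stream.javadsl.Source;
import akka.stream.javadsl.SourceQueueWithComplete;

import java.util.concurrent.CompletionStage;

public class QueueExample {
    public static void main(String[] args) {
        final ActorSystem system = ActorSystem.create("QueueExample");

        final Sink<Integer, CompletionStage<Integer>> sink =
                Sink.fold(0, (agg, next) -> agg + next);

        // Materialize both the queue (to feed the stream) and the fold's future.
        final Pair<SourceQueueWithComplete<Integer>, CompletionStage<Integer>> pair =
                Source.<Integer>queue(256, OverflowStrategy.backpressure())
                        .toMat(sink, Keep.both())
                        .run(system);

        final SourceQueueWithComplete<Integer> queue = pair.first();
        final CompletionStage<Integer> sum = pair.second();

        // The producer (REST handler, Kafka consumer, ...) offers elements:
        queue.offer(2);
        queue.offer(2);

        // Completing the queue completes the stream, which completes the fold:
        queue.complete();
        sum.thenAccept(total -> {
            System.out.println("Sum is " + total);
            system.terminate();
        });
    }
}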

Implementing a cache within a Repository using HashMap question

I got this question in an interview and I'm trying to learn from it.
Assuming that this repository is used in a concurrent context with billions of messages in the database.
public class MessageRepository {

    public static final Map<String, Message> cache = new HashMap<>();

    public Message findMessageById(String id) {
        if (cache.containsKey(id)) {
            return cache.get(id);
        }
        Message p = loadMessageFromDb(id);
        cache.put(id, p);
        return p;
    }

    Message loadMessageFromDb(String id) {
        /* query DB and map row to a Message object */
    }
}
What are possible problems with this approach?
One I can think of is HashMap not being a thread-safe implementation of Map. Perhaps ConcurrentHashMap would be better for that.
I wasn't sure about any other of the possible answers which were:
1) Class MessageRepository is final meaning it's immutable, so it can't have a modifiable cache.
(AFAIK HashMap is mutable and it's composed into MessageRepository so this wouldn't be an issue).
2) Field cache is final meaning that it's immutable, so it can't be modified by put.
(final doesn't mean immutable so this wouldn't be an issue either)
3) Field cache is static meaning that it will be reset every time an instance of MessageRepository will be built.
(cache will be shared by all instances of MessageRepository so it shouldn't be a problem)
4) HashMap is synchronized, performances may be better without synchronization.
(I think SynchronizedHashMap is the synced implementation)
5) HashMap does not implement evict mechanism out of the box, it may cause memory problems.
(What kind of problems?)
I see two problems with this cache. If loadMessageFromDb() is an expensive operation, then two threads can initiate duplicate calculations. This isn't alleviated even by using ConcurrentHashMap. A proper implementation of a cache that avoids this would be:
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import java.util.concurrent.FutureTask;

public class MessageRepository {

    private static final Map<String, Future<Message>> CACHE = new ConcurrentHashMap<>();

    public Message findMessageById(String id) throws ExecutionException, InterruptedException {
        Future<Message> messageFuture = CACHE.get(id);
        if (null == messageFuture) {
            FutureTask<Message> ft = new FutureTask<>(() -> loadMessageFromDb(id));
            messageFuture = CACHE.putIfAbsent(id, ft);
            if (null == messageFuture) {
                // We won the race: our task went into the cache, so we run it.
                messageFuture = ft;
                ft.run();
            }
        }
        return messageFuture.get();
    }
}
(Taken directly from Java Concurrency in Practice by Brian Goetz et al.)
In the cache above, when a thread starts a computation, it puts the Future into the cache and then waits for the computation to finish. Any thread that comes in with the same id sees that a computation is already ongoing and waits on the same future. If two threads call at exactly the same time, putIfAbsent ensures that only one of them initiates the computation.
Java does not have any SynchronizedHashMap class. You should use ConcurrentHashMap. You can do Collections.synchronizedMap(new HashMap<>()), but it performs poorly under contention.
A problem with the above cache is that it does not evict entries. Java provides LinkedHashMap, which can help you create an LRU cache, but it is not synchronized. If you want both functionalities, you should try Guava's Cache.
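For illustration, a minimal (non-concurrent) LRU cache built on LinkedHashMap's access order and its removeEldestEntry hook; the capacity is arbitrary, and concurrent use would still require Collections.synchronizedMap or external locking:

import java.util.LinkedHashMap;
import java.util.Map;

public class LruCache<K, V> extends LinkedHashMap<K, V> {

    private final int maxEntries;

    public LruCache(int maxEntries) {
        super(16, 0.75f, true); // accessOrder = true: iteration order becomes LRU order
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries; // evict the least recently used entry
    }
}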

AtomicInteger or LongAccumulator

Can someone tell me whether LongAccumulator would be a better alternative to AtomicInteger in the example below?
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class IncrementThread implements Runnable {

    private final AtomicInteger atomicint;

    public IncrementThread(AtomicInteger atomicint) {
        this.atomicint = atomicint;
    }

    @Override
    public void run() {
        while (true) {
            if (atomicint.incrementAndGet() == 4) {
                doSomething();
                atomicint.set(0);
            }
        }
    }

    private void doSomething() {
        System.out.println(Thread.currentThread().getName() + " : counter reached 4");
    }

    public static void main(String[] args) {
        AtomicInteger atomicint = new AtomicInteger();
        IncrementThread incThread1 = new IncrementThread(atomicint);
        IncrementThread incThread2 = new IncrementThread(atomicint);
        IncrementThread incThread3 = new IncrementThread(atomicint);

        ExecutorService threadPool = Executors.newCachedThreadPool();
        threadPool.execute(incThread1);
        threadPool.execute(incThread2);
        threadPool.execute(incThread3);
    }
}
In this exact example (which really does nothing useful) both classes seem equivalent.
With LongAccumulator you could pass the whole if statement as a lambda expression. But: the javadoc of LongAccumulator states that the supplied function should be side-effect-free, which is not fulfilled here (doSomething writes to System.out).
I suppose that you would probably use LongAccumulator in the same manner as AtomicInteger in this example. In that case the answer is no.
LongAccumulator accumulates values and only computes the result when you call methods like get(). You could likewise clear it with LongAccumulator.reset() once it reaches 4. But get() and reset() are not atomic with respect to concurrent updates, so with multiple threads both reading and updating you could get unpredictable results.
LongAccumulator is a good fit when you know that many different threads will update the value but reads are rare, and when you can ensure that reads or resets happen in only one thread, if exact synchronization matters to you.
If it doesn't, LongAccumulator could be better, for example when gathering statistics: when you ask for statistics you usually don't mean "the value exactly at the time of the call" but rather something like the "current" result.
This reasoning is applicable to LongAdder too.
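To make the statistics use case concrete, a small sketch tracking a running maximum (the accumulator function and names are illustrative):

import java.util.concurrent.atomic.LongAccumulator;

public class StatsExample {

    // Many writers, occasional reader: the scenario LongAccumulator is built for.
    private final LongAccumulator max = new LongAccumulator(Long::max, Long.MIN_VALUE);

    public void record(long sample) {
        max.accumulate(sample); // cheap and contention-friendly
    }

    public long currentMax() {
        return max.get(); // a "current" snapshot, not an atomic point-in-time read
    }
}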
In your example there are two possible improvements.
First, you can use:
while (true) {
    if (atomicint.incrementAndGet() == 4) {
        doSomething();
        atomicint.compareAndSet(4, 0);
    }
}
Now you only reset atomicint if it is still in the state you just observed.
Another possible improvement: if you don't actually need atomic read-modify-write operations (like the compareAndSet above), don't use AtomicLong or LongAccumulator at all; a simple long field with the volatile keyword is simpler. Note, though, that an increment of a volatile variable is not atomic, so that only works when a single thread performs the updates.
You can learn more in the documentation and the classes' sources:
LongAdder
LongAccumulator
About volatile
Most efficient of all - the sources :)

Thread safe Map of Queues

I want to implement a thread-safe Map of Queues.
I intend to start with an empty Map. If the key does not exist, I want to create a new Map entry with a new Queue. If the key does exist, I want to add to the Queue. My proposed implementation is as follows:
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

public class StackOverFlowExample {

    private final Map<String, ConcurrentLinkedQueue<String>> map = new ConcurrentHashMap<>();

    public void addElementToQueue(String key, String value) {
        if (map.containsKey(key)) {
            map.get(key).add(value);
        } else {
            ConcurrentLinkedQueue<String> queue = new ConcurrentLinkedQueue<>();
            queue.add(value);
            map.put(key, queue);
        }
    }
}
My concern is that when multiple threads attempt to add a new value to the Map, the first will put a new Map entry with a new Queue, and the second will wait and then put a new Queue for the key rather than adding to the existing Queue. My concurrency API knowledge is slim at best. Perhaps protections are already in place to avoid this? Advice would be much appreciated.
This pattern has probably been posted many times on SO (efficiently adding to a concurrent map):
Queue<String> q = map.get(key);
if (q == null) {
    q = new ConcurrentLinkedQueue<String>();
    Queue<String> curQ = map.putIfAbsent(key, q);
    if (curQ != null) {
        q = curQ; // another thread won the race; use its queue
    }
}
q.add(value);
Note that since Java 8, this can be replaced with computeIfAbsent().
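With computeIfAbsent the whole pattern collapses to a single line; on a ConcurrentHashMap the mapping function is invoked at most once per key:

map.computeIfAbsent(key, k -> new ConcurrentLinkedQueue<String>()).add(value);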
So your fear is that thread A and thread B will do the following:

Thread A:
  lock ConcurrentHashMap
  look for Queue "x" (not found)
  unlock ConcurrentHashMap
  create Queue "x"
  lock ConcurrentHashMap
  insert Queue "x"
  unlock ConcurrentHashMap

Thread B:
  lock ConcurrentHashMap (while thread A is in 'create Queue "x"')
  look for Queue "x" (not found)
  unlock ConcurrentHashMap (thread A then acquires the lock)
  create Queue "x" v2
  lock ConcurrentHashMap
  insert Queue "x" v2 (overwriting the old entry)
  unlock ConcurrentHashMap
That is in fact a real issue, but one that is easily resolved by making addElementToQueue a synchronized method. Then only one thread can be inside addElementToQueue at any given time, which closes the synchronization hole between the first 'unlock' and the second 'lock'.
Thus
public synchronized void addElementToQueue(String key, String value){
should resolve your lost-queue problem.
If Java 8 is an option:

public void addElementToQueue(String key, String value) {
    map.merge(key, new ConcurrentLinkedQueue<>(Arrays.asList(value)), (oldValue, coming) -> {
        oldValue.addAll(coming);
        return oldValue;
    });
}

Java synchronizing based on a parameter (named mutex/lock)

I'm looking for a way to synchronize a method based on the parameter it receives, something like this:
public synchronized void doSomething(String name) {
    // some code
}
I want the method doSomething to be synchronized based on the name parameter like this:
Thread 1: doSomething("a");
Thread 2: doSomething("b");
Thread 3: doSomething("c");
Thread 4: doSomething("a");
Thread 1, Thread 2 and Thread 3 will execute the code without synchronizing with each other, but Thread 4 will wait until Thread 1 has finished the code because both use the same "a" value.
Thanks
UPDATE
Based on Tudor explanation I think I'm facing another problem:
here is a sample of the new code:
private final Map<String, Object> locks = new HashMap<>();

public void doSomething(String name) {
    locks.put(name, new Object());
    synchronized (locks.get(name)) {
        // ...
    }
    locks.remove(name);
}
The reason I don't pre-populate the locks map is that name can have any value.
Based on the sample above, the problem can appear when multiple threads add or delete values from the HashMap at the same time, since HashMap is not thread-safe.
So my question is: if I make the HashMap a ConcurrentHashMap, which is thread-safe, will the synchronized block stop other threads from accessing locks.get(name)?
TL;DR:
I use ConcurrentReferenceHashMap from the Spring Framework. Please check the code below.
Although this thread is old, it is still interesting. Therefore, I would like to share my approach using the Spring Framework.
What we are trying to implement is called a named mutex/lock. As suggested by Tudor's answer, the idea is to have a Map that stores the lock name and the lock object. The code looks like this (copied from his answer):
Map<String, Object> locks = new HashMap<String, Object>();
locks.put("a", new Object());
locks.put("b", new Object());
However, this approach has 2 drawbacks:
The OP already pointed out the first one: how to synchronize the access to the locks hash map?
How to remove some locks which are not necessary anymore? Otherwise, the locks hash map will keep growing.
The first problem can be solved by using ConcurrentHashMap. For the second problem, we have 2 options: manually check and remove locks from the map, or somehow let the garbage collector know which locks are no longer used so it can remove them. I will go with the second way.
When we use HashMap or ConcurrentHashMap, the map holds strong references. To implement the solution discussed above, weak references should be used instead (to understand what strong/weak references are, please refer to this article or this post).
So, I use ConcurrentReferenceHashMap from the Spring Framework. As described in the documentation:
A ConcurrentHashMap that uses soft or weak references for both keys and values.

This class can be used as an alternative to Collections.synchronizedMap(new WeakHashMap<K, Reference<V>>()) in order to support better performance when accessed concurrently. This implementation follows the same design constraints as ConcurrentHashMap with the exception that null values and null keys are supported.
Here is my code. The MutexFactory manages all the locks, where <K> is the type of the key.
@Component
public class MutexFactory<K> {

    private ConcurrentReferenceHashMap<K, Object> map;

    public MutexFactory() {
        this.map = new ConcurrentReferenceHashMap<>();
    }

    public Object getMutex(K key) {
        return this.map.compute(key, (k, v) -> v == null ? new Object() : v);
    }
}
Usage:
@Autowired
private MutexFactory<String> mutexFactory;

public void doSomething(String name) {
    synchronized (mutexFactory.getMutex(name)) {
        // ...
    }
}
Unit test (this test uses the awaitility library for some methods, e.g. await(), atMost(), until()):
public class MutexFactoryTests {
    private final int THREAD_COUNT = 16;

    @Test
    public void singleKeyTest() {
        MutexFactory<String> mutexFactory = new MutexFactory<>();
        String id = UUID.randomUUID().toString();
        final int[] count = {0};

        IntStream.range(0, THREAD_COUNT)
                .parallel()
                .forEach(i -> {
                    synchronized (mutexFactory.getMutex(id)) {
                        count[0]++;
                    }
                });

        await().atMost(5, TimeUnit.SECONDS)
                .until(() -> count[0] == THREAD_COUNT);
        Assert.assertEquals(THREAD_COUNT, count[0]);
    }
}
Use a map to associate strings with lock objects:
Map<String, Object> locks = new HashMap<String, Object>();
locks.put("a", new Object());
locks.put("b", new Object());
// etc.
then:
public void doSomething(String name) {
    synchronized (locks.get(name)) {
        // ...
    }
}
Tudor's answer is fine, but it's static and not scalable. My solution is dynamic and scalable, but that comes with increased implementation complexity. The outside world can use this class just like a Lock, as it implements the interface. You get an instance of a parameterized lock from the factory method getCanonicalParameterLock.
package lock;

import java.lang.ref.Reference;
import java.lang.ref.WeakReference;
import java.util.Map;
import java.util.WeakHashMap;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public final class ParameterLock implements Lock {

    /** Holds a WeakKeyLockPair for each parameter. The mapping may be deleted upon garbage
     * collection if the canonical key is not strongly referenced anymore (by the threads
     * using the Lock). */
    private static final Map<Object, WeakKeyLockPair> locks = new WeakHashMap<>();

    private final Object key;
    private final Lock lock;

    private ParameterLock(Object key, Lock lock) {
        this.key = key;
        this.lock = lock;
    }

    private static final class WeakKeyLockPair {
        /** The weakly-referenced parameter. If it were strongly referenced, the entries of
         * the lock Map would never be garbage collected, causing a memory leak. */
        private final Reference<Object> param;
        /** The actual lock object on which threads will synchronize. */
        private final Lock lock;

        private WeakKeyLockPair(Object param, Lock lock) {
            this.param = new WeakReference<>(param);
            this.lock = lock;
        }
    }

    public static Lock getCanonicalParameterLock(Object param) {
        Object canonical = null;
        Lock lock;

        synchronized (locks) {
            WeakKeyLockPair pair = locks.get(param);
            if (pair != null) {
                canonical = pair.param.get(); // could return null!
            }
            if (canonical == null) { // no such entry or the reference was cleared in the meantime
                canonical = param; // the first thread (the current thread) delivers the new canonical key
                pair = new WeakKeyLockPair(canonical, new ReentrantLock());
                locks.put(canonical, pair);
            }
            // read the lock while still holding the monitor that guards the map
            lock = pair.lock;
        }

        // The canonical key must be kept strongly referenced after this method returns,
        // so wrap it in the Lock implementation, which a thread of course needs to be
        // able to synchronize. This forces a thread to hold a strong reference to the
        // key without being aware of it (as this method declares to return a Lock
        // rather than a ParameterLock).
        return new ParameterLock(canonical, lock);
    }

    @Override
    public void lock() {
        lock.lock();
    }

    @Override
    public void lockInterruptibly() throws InterruptedException {
        lock.lockInterruptibly();
    }

    @Override
    public boolean tryLock() {
        return lock.tryLock();
    }

    @Override
    public boolean tryLock(long time, TimeUnit unit) throws InterruptedException {
        return lock.tryLock(time, unit);
    }

    @Override
    public void unlock() {
        lock.unlock();
    }

    @Override
    public Condition newCondition() {
        return lock.newCondition();
    }
}
Of course you'd need a canonical key for a given parameter, otherwise threads would not be synchronized, as they would be using different Locks. Canonicalization is the equivalent of the interning of Strings in Tudor's solution. Where String.intern() is itself thread-safe, my 'canonical pool' is not, so I need the extra synchronization on the WeakHashMap.
This solution works for any type of Object. However, make sure to implement equals and hashCode correctly in custom classes, because if not, threading issues will arise as multiple threads could be using different Lock objects to synchronize on!
The choice for a WeakHashMap is explained by the ease of memory management it brings. How else could one know that no thread is using a particular Lock anymore? And if this could be known, how could you safely delete the entry out of the Map? You would need to synchronize upon deletion, because you have a race condition between an arriving thread wanting to use the Lock, and the action of deleting the Lock from the Map. All these things are just solved by using weak references, so the VM does the work for you, and this simplifies the implementation a lot. If you inspected the API of WeakReference, you would find that relying on weak references is thread-safe.
Now inspect this test program (you need to run it from inside the ParameterLock class, due to private visibility of some fields):
public static void main(String[] args) {
    Runnable run1 = new Runnable() {
        @Override
        public void run() {
            sync(new Integer(5));
            System.gc();
        }
    };
    Runnable run2 = new Runnable() {
        @Override
        public void run() {
            sync(new Integer(5));
            System.gc();
        }
    };

    Thread t1 = new Thread(run1);
    Thread t2 = new Thread(run2);
    t1.start();
    t2.start();

    try {
        t1.join();
        t2.join();

        while (locks.size() != 0) {
            System.gc();
            System.out.println(locks);
        }
        System.out.println("FINISHED!");
    } catch (InterruptedException ex) {
        // those threads won't be interrupted
    }
}

private static void sync(Object param) {
    Lock lock = ParameterLock.getCanonicalParameterLock(param);
    lock.lock();
    try {
        System.out.println("Thread=" + Thread.currentThread().getName()
                + ", lock=" + ((ParameterLock) lock).lock);
        // do some work while holding the lock
    } finally {
        lock.unlock();
    }
}
Chances are very high that you would see that both threads are using the same lock object, and so they are synchronized. Example output:
Thread=Thread-0, lock=java.util.concurrent.locks.ReentrantLock#8965fb[Locked by thread Thread-0]
Thread=Thread-1, lock=java.util.concurrent.locks.ReentrantLock#8965fb[Locked by thread Thread-1]
FINISHED!
However, with some chance it might be that the 2 threads do not overlap in execution, and therefore it is not required that they use the same lock. You could easily enforce this behavior in debugging mode by setting breakpoints at the right locations, forcing the first or second thread to stop wherever necessary. You will also notice that after the Garbage Collection on the main thread, the WeakHashMap will be cleared, which is of course correct, as the main thread waited for both worker threads to finish their job by calling Thread.join() before calling the garbage collector. This indeed means that no strong reference to the (Parameter)Lock can exist anymore inside a worker thread, so the reference can be cleared from the weak hashmap. If another thread now wants to synchronize on the same parameter, a new Lock will be created in the synchronized part in getCanonicalParameterLock.
Now repeat the test with any pair that has the same canonical representation (= they are equal, so a.equals(b)), and see that it still works:
sync("a");
sync(new String("a"));
sync(new Boolean(true));
sync(new Boolean(true));
etc.
Basically, this class offers you the following functionality:
Parameterized synchronization
Encapsulated memory management
The ability to work with any type of object (provided that equals and hashCode are implemented properly)
Implements the Lock interface
This Lock implementation has been tested by modifying an ArrayList concurrently with 10 threads iterating 1000 times, doing this: adding 2 items, then deleting the last found list entry by iterating the full list. A lock is requested per iteration, so in total 10*1000 locks will be requested. No ConcurrentModificationException was thrown, and after all worker threads finished, the total number of items was 10*1000. On every single modification, a lock was requested by calling ParameterLock.getCanonicalParameterLock(new String("a")), so a new parameter object is used each time to test the correctness of the canonicalization.
Please note that you shouldn't use String literals or primitive types as parameters. As String literals are automatically interned, they always have a strong reference, so if the first thread arrives with a String literal as its parameter, the lock pool will never be freed of the entry, which is a memory leak. The same story goes for autoboxed primitives: e.g. Integer has a caching mechanism that reuses existing Integer objects during autoboxing, also causing a strong reference to exist. Addressing this, however, is a different story.
Check out this framework. Seems you're looking for something like this.
public class WeatherServiceProxy {
    ...
    private final KeyLockManager lockManager = KeyLockManagers.newManager();

    public void updateWeatherData(String cityName, Date samplingTime, float temperature) {
        lockManager.executeLocked(cityName, new LockCallback() {
            public void doInLock() {
                delegate.updateWeatherData(cityName, samplingTime, temperature);
            }
        });
    }
}
https://code.google.com/p/jkeylockmanager/
I've created a tokenProvider based on the IdMutexProvider of McDowell.
The manager uses a WeakHashMap which takes care of cleaning up unused locks.
You can find my implementation here.
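Since the link above is bare, here is a rough sketch of the idea, modeled on McDowell's IdMutexProvider (names are illustrative, not the actual linked code): the mutex object is its own key in a WeakHashMap, held only through a WeakReference value, so an entry disappears once no thread still references its mutex.

import java.lang.ref.WeakReference;
import java.util.Map;
import java.util.WeakHashMap;

public class IdMutexProvider {

    private final Map<Mutex, WeakReference<Mutex>> mutexes = new WeakHashMap<>();

    // Returns the canonical mutex for the given id; callers synchronize on it.
    public synchronized Mutex getMutex(String id) {
        Mutex key = new Mutex(id);
        WeakReference<Mutex> ref = mutexes.get(key);
        Mutex mutex = (ref == null) ? null : ref.get();
        if (mutex == null) {
            mutex = key;
            mutexes.put(mutex, new WeakReference<>(mutex));
        }
        return mutex;
    }

    // The mutex doubles as the map key; equality is by id.
    public static final class Mutex {
        private final String id;

        Mutex(String id) {
            this.id = id;
        }

        @Override
        public boolean equals(Object o) {
            return o instanceof Mutex && id.equals(((Mutex) o).id);
        }

        @Override
        public int hashCode() {
            return id.hashCode();
        }
    }
}

Callers then do synchronized (provider.getMutex(name)) { ... }.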
I've found a proper answer through another stackoverflow question: How to acquire a lock by a key
I copied the answer here:
Guava has something like this being released in 13.0; you can get it out of HEAD if you like.
Striped more or less allocates a specific number of locks, and then assigns strings to locks based on their hash code. The API looks more or less like
Striped<Lock> locks = Striped.lock(stripes);
Lock l = locks.get(string);
l.lock();
try {
    // do stuff
} finally {
    l.unlock();
}
More or less, the controllable number of stripes lets you trade concurrency against memory usage, because allocating a full lock for each string key can get expensive; essentially, you only get lock contention when you get hash collisions, which are (predictably) rare.
Just extending Triet Doan's answer: we also need to take care when the MutexFactory is used in multiple places, because with the currently suggested code we will end up with the same MutexFactory everywhere it is used.
For example:

@Autowired
MutexFactory<CustomObject1> mutexFactory1;

@Autowired
MutexFactory<CustomObject2> mutexFactory2;

Both mutexFactory1 and mutexFactory2 will refer to the same factory instance even though their type parameters differ, because Spring creates a single instance of MutexFactory during application startup and injects it in both places.
So here is the extra Scope annotation that needs to be added to avoid the case above:
@Component
@Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
public class MutexFactory<K> {

    private ConcurrentReferenceHashMap<K, Object> map;

    public MutexFactory() {
        this.map = new ConcurrentReferenceHashMap<>();
    }

    public Object getMutex(K key) {
        return this.map.compute(key, (k, v) -> v == null ? new Object() : v);
    }
}
I've used a cache to store lock objects. My cache expires objects after a period, which only needs to be longer than the time the synchronized process takes to run.

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
...

private final Cache<String, Object> mediapackageLockCache =
        CacheBuilder.newBuilder().expireAfterWrite(DEFAULT_CACHE_EXPIRE, TimeUnit.SECONDS).build();
...

public void doSomething(Object foo) {
    // Note: getIfPresent/put is a check-then-act race, so two threads can
    // briefly obtain different locks; Guava's cache.get(key, Object::new)
    // would fetch-or-create atomically.
    Object lock = mediapackageLockCache.getIfPresent(foo.toString());
    if (lock == null) {
        lock = new Object();
        mediapackageLockCache.put(foo.toString(), lock);
    }
    synchronized (lock) {
        // execute code on foo
        ...
    }
}
I have a much simpler, scalable implementation akin to @timmons' post above, taking advantage of Guava's LoadingCache with weakValues. You will want to read the help files on "equality" to understand the suggestion I have made.
Define the following weakValues cache.

private final LoadingCache<String, String> syncStrings =
        CacheBuilder.newBuilder().weakValues().build(new CacheLoader<String, String>() {
            public String load(String x) {
                // 'new String' is deliberate: it avoids handing out the interned
                // literal, so the canonical instance can be garbage collected
                // once no thread synchronizes on it anymore.
                return new String(x);
            }
        });

public void doSomething(String x) {
    x = syncStrings.getUnchecked(x); // canonical instance for this key
    synchronized (x) {
        ..... // whatever it is you want to do
    }
}
Now, thanks to the weak values, we do not have to worry about the cache growing too large: it holds each cached string only as long as it is in use, and the garbage collector/Guava does the heavy lifting.
