public class MapDem {
final HashMap<Integer,Integer> map = new HashMap<Integer,Integer>();
public HashMap<Integer,Integer> getMap(){
return map;
}
public void putValue(int key,int value){
map.put(key,value);
}
public static void main(String args[]){
MapDem demo = new MapDem();
new Thread(new Runnable(){
@Override
public void run() {
demo.putValue(1, 10);
}
}).start();
new Thread(new Runnable(){
@Override
public void run() {
demo.putValue(1, 10);
}
}).start();
System.out.println(demo.getMap().size());
}
}
Are final fields inherently thread-safe? In the above code the map variable is marked as final; does that mean that it is thread-safe?
If the variable is not thread-safe, I would expect the output of the main method to be "2", but I am getting either "1" or "0".
EDIT
If I declare the variable using the volatile keyword, i.e.
volatile HashMap<Integer,Integer> map = new HashMap<Integer,Integer>();
the map variable is still seemingly not thread-safe. Why is this? However, the method below does seem to work; why is that?
public synchronized void putValue(int key,int value){
if(map.isEmpty()){
System.out.println("hello");
map.put(key,value);
}
}
Will using Collections.unmodifiableMap(map) work?
Your test is faulty. If two values are stored with the same key, HashMap.put(K key, V value) will overwrite the former value with the latter. Thus, even without concurrency, your "test" will return a size of 1.
Code:
import java.util.HashMap;
public class MapDem {
final HashMap<Integer, Integer> map = new HashMap<Integer, Integer>();
public HashMap<Integer, Integer> getMap() {
return map;
}
public void putValue(int key, int value) {
map.put(key, value);
}
public static void main(String args[]) {
MapDem demo = new MapDem();
demo.putValue(1, 10);
demo.putValue(1, 10);
System.out.println(demo.getMap().size());
}
}
Output (Ideone demo):
1
The fact that sometimes one can see a size of 0 is due to the lack of blocking constructs. You should wait for the completion of both threads before querying the size of your map by calling join() on your Thread objects.
Thread t1 = new Thread(new Runnable() {
@Override
public void run() {
demo.putValue(1, 10);
}
});
t1.start();
Thread t2 = new Thread(new Runnable() {
@Override
public void run() {
demo.putValue(1, 10);
}
});
t2.start();
try {
t1.join();
t2.join();
} catch (InterruptedException e) {
e.printStackTrace();
}
System.out.println(demo.getMap().size());
As mentioned by @SachinSarawgi, final does not make your code thread-safe and, as further explained by @assylias, volatile does not cut it in this case.
If you need a thread-safe HashMap, use ConcurrentHashMap.
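For the MapDem class from the question, that switch is a small, mechanical change; here is a minimal sketch (assuming the rest of the class stays as posted):

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class MapDem {
    // ConcurrentHashMap makes individual put/get calls thread-safe;
    // compound actions (check-then-act) still need putIfAbsent/compute or a lock.
    final ConcurrentMap<Integer, Integer> map = new ConcurrentHashMap<Integer, Integer>();

    public ConcurrentMap<Integer, Integer> getMap() {
        return map;
    }

    public void putValue(int key, int value) {
        map.put(key, value);
    }
}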
If you are determined to write your own thread-safe implementation of the Map interface, I recommend Oracle's Lesson on Concurrency to start with, followed by Brian Goetz's "Java Concurrency in Practice" and maybe a little bit of Javier Fernández González' "Mastering Concurrency Programming with Java 8".
The direct answer to your question is: no, the final keyword does not make fields thread-safe.
That keyword only tells the compiler that it has to ensure that there is exactly one assignment to that field (not zero, and not more than one).
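A minimal standalone illustration of that distinction (this snippet is not from the question):

import java.util.HashMap;
import java.util.Map;

public class FinalReferenceDemo {
    public static void main(String[] args) {
        // final fixes the reference, not the object it points to
        final Map<Integer, Integer> map = new HashMap<Integer, Integer>();
        map.put(1, 10);            // allowed: the HashMap itself is still mutable
        // map = new HashMap<>();  // would not compile: the reference cannot be reassigned
        System.out.println(map);
    }
}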
You know, there are reasons why getting multi-threaded code correct is considered hard.
The essence of correct multi-threading is this: when some state can be updated by one thread but is used (or updated) by other threads, you must make sure that you only get the state changes that you want to see.
Long story short: you have a lot of learning to do; a good starting point would be here.
What is thread safe is the access to the map variable: all threads reading that variable will see the same object reference.
However, the operations on the HashMap (get/put) are not thread-safe, and this has nothing to do with whether map is final or not.
So your code is not thread safe unless you add some concurrency control around the putValue method - for example by making it synchronized.
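A minimal sketch of that suggestion, applied to the question's class (getSize() is a hypothetical helper, not from the question; note that handing out the raw map via getMap() still lets callers bypass the lock):

// Replaces the corresponding method in MapDem; both methods below lock the instance's monitor.
public synchronized void putValue(int key, int value) {
    map.put(key, value);
}

// Reads must synchronize on the same monitor to see a consistent, up-to-date view.
public synchronized int getSize() {
    return map.size();
}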
Making a reference variable final ensures that the reference cannot be pointed at a different object once it has been assigned.
But the state of the object itself can still change, and that is exactly what happens here: the map's contents can change. The code that changes that state needs to be synchronized.
You can get rid of these synchronization problems by using ConcurrentHashMap. You can read about it here.
ConcurrentHashMap ensures that each write operation on a key is handled by one thread at a time, and it also optimizes reads: the map is divided into segments, so different threads can read from different segments concurrently.
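As a small, hypothetical illustration of that per-key write atomicity (the class and key names below are made up), two threads bumping the same counter with merge() never lose an update:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class CounterDemo {
    public static void main(String[] args) throws InterruptedException {
        final ConcurrentMap<String, Integer> counts = new ConcurrentHashMap<String, Integer>();

        Runnable task = new Runnable() {
            @Override
            public void run() {
                for (int i = 0; i < 1000; i++) {
                    // merge() updates a single key atomically, no external lock needed
                    counts.merge("hits", 1, Integer::sum);
                }
            }
        };

        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();

        System.out.println(counts.get("hits")); // always prints 2000
    }
}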
I have this scenario:
private static final Map<String, Boolean> availables = new HashMap<String, Boolean>();
1 single Thread keep getting an available task and distribute it among worker (let's call it distributor)
public static String getAnAvailable() {
synchronized (availables){
for(Map.Entry<String, Boolean> entry : availables.entrySet()){
if(entry.getValue() == true){
entry.setValue(false);
return entry.getKey();
}
}
}
return null;
}
1 single Thread run this code (let's call it Updater):
while(true){
...
synchronized (availables) {
for(String deleteIt : delete){
if(availables.containsKey(deleteIt)){
availables.remove(deleteIt);
}
}
for(String name : names){
if(!availables.containsKey(name)){
availables.put(name, true);
}
}
}
...
}
I tried to use ConcurrentHashMap instead of manual synchronization, but it can throw an exception in the getAnAvailable method. The reason is obvious: if the updater thread changes the size of the map, then within getAnAvailable we get an iteration exception, so there is no point in using it.
Now my problem is that, for some reason, the Updater starves indefinitely on exactly the third run...
Basically it gets stuck when it tries to acquire the lock.
For the life of me I have no idea why that happens. Can anyone see something that I don't see here?
I want to reproduce a scenario in which two threads access a shared HashMap. While one thread is copying the contents of the shared map into localMap using the putAll() operation, a second thread changes the shared map and a ConcurrentModificationException should be thrown.
I have tried but have not been able to reproduce the exception while the putAll operation is running. Each time, either putAll completes before the other thread makes its modification, or putAll is called after the other thread's modification.
Can anyone please suggest how I can generate this scenario in Java?
Thanks.
Spin up both threads, have them running continuously.
Have one thread constantly doing putAll, the other constantly doing the modification.
import java.util.HashMap;
import java.util.Map;
public class Example {
private final HashMap<String, String> map = new HashMap<String, String>();
public static void main(String[] args) {
new Example();
}
public Example() {
Thread thread1 = new Thread(new Runnable() {
@Override
public void run() {
int counter = 0;
while (true) {
Map<String, String> tmp = new HashMap<String, String>();
tmp.put("example" + counter, "example");
counter++;
map.putAll(tmp);
}
}
});
Thread thread2 = new Thread(new Runnable() {
@Override
public void run() {
while (true) {
map.values().remove("example");
}
}
});
thread1.start();
thread2.start();
}
}
Unfortunately I cannot copy/paste the running code directly from my current workstation, so I retyped it here; there might be a typo.
As you can see, the first thread is continuously adding values while the second thread iterates over the values in the Map. When it starts iterating over the values it expects a certain number of values (this value is initialized at the construction of the iterator). However, because thread1 is continuously adding items, this expectation no longer holds when the Iterator checks the actual amount of values in the map at the moment it actually executes the remove code. This causes the ConcurrentModificationException.
If you just need a ConcurrentModificationException to be thrown, you could implement your own Map implementation (HackedMap) that removes items from the HashMap when the HashMap tries to copy values from your HackedMap.
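A rough sketch of that idea is shown below. The HackedMap name and the keys are made up, and the trick relies on HashMap's fail-fast iterator, so treat it as a demonstration rather than production code: its entry-set iterator removes the map's entries right after being created, so HashMap.putAll(hacked) fails on the first next() call.

import java.util.AbstractSet;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;
import java.util.Set;

public class HackedMap extends HashMap<String, String> {

    @Override
    public Set<Map.Entry<String, String>> entrySet() {
        final Set<Map.Entry<String, String>> real = super.entrySet();
        return new AbstractSet<Map.Entry<String, String>>() {
            @Override
            public Iterator<Map.Entry<String, String>> iterator() {
                Iterator<Map.Entry<String, String>> it = real.iterator();
                // Structural modification after the iterator has captured its expected modCount:
                // the very next it.next() call then throws ConcurrentModificationException.
                HackedMap.this.clear();
                return it;
            }

            @Override
            public int size() {
                return real.size();
            }
        };
    }

    public static void main(String[] args) {
        Map<String, String> source = new HackedMap();
        source.put("a", "1");

        Map<String, String> target = new HashMap<String, String>();
        target.putAll(source); // throws ConcurrentModificationException
    }
}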
I'm having a bit of trouble concerning concurrency and maps in Java.
Basically I have multiple threads using (reading and modifying) their own maps, however each of these maps is a part of a larger map which is being read and modified by a further thread:
My main method creates all threads, the threads create their respective maps which are then put into the "main" map:
Map<String, MyObject> mainMap = new HashMap<String, Integer>();
FirstThread t1 = new FirstThread();
mainMap.putAll(t1.getMap());
t1.start();
SecondThread t2 = new SecondThread();
mainMap.putAll(t2.getMap());
t2.start();
ThirdThread t3 = new ThirdThread(mainMap);
t3.start();
The problem I'm facing now is that the third (main) thread sees arbitrary values in the map, depending on when one or both of the other threads update "their" items.
I must however guarantee that the third thread can iterate over - and use the values of - the map without having to fear that a part of what is being read is "old":
FirstThread (analogue to SecondThread):
for (MyObject o : map.values()) {
o.setNewValue(getNewValue());
}
ThirdThread:
for (MyObject o : map.values()) {
doSomethingWith(o.getNewValue());
}
Any ideas? I've considered using a globally accessible lock (a static final Object in a static class) which each thread would synchronize on when the map must be modified.
Or are there specific Map implementations that address this particular problem which I could use?
Thanks in advance!
Edit:
As suggested by @Pyranja, it would be possible to synchronize the getNewValue() method. However I forgot to mention that I am in fact trying to do something along the lines of transactions, where t1 and t2 modify multiple values before/after t3 works with said values. t3 is implemented in such a way that doSomethingWith() will not actually do anything with the value if it hasn't changed.
To synchronize at a higher level than the individual value objects, you need locks to handle the synchronization between the various threads. One way to do this, without changing your code too much, is a ReadWriteLock. Thread 1 and Thread 2 are writers, Thread 3 is a reader.
You can either do this with two locks, or one. I've sketched out below doing it with one lock, two writer threads, and one reader thread, without worrying about what happens with an exception during data update (ie, transaction rollback...).
All that said, this sounds like a classic producer-consumer scenario. You should consider using something like a BlockingQueue for communication between threads, as is outlined in this question; a minimal sketch of that follows the lock example below.
There's other things you may want to consider changing as well, like using Runnable instead of extending Thread.
private static final class Value {
public void update() {
}
}
private static final class Key {
}
private final class MyReaderThread extends Thread {
private final Map<Key, Value> allValues;
public MyReaderThread(Map<Key, Value> allValues) {
this.allValues = allValues;
}
@Override
public void run() {
while (!isInterrupted()) {
readData();
}
}
private void readData() {
readLock.lock();
try {
for (Value value : allValues.values()) {
// Do something
}
}
finally {
readLock.unlock();
}
}
}
private final class WriterThread extends Thread {
private final Map<Key, Value> data = new HashMap<Key, Value>();
@Override
public void run() {
while (!isInterrupted()) {
writeData();
}
}
private void writeData() {
writeLock.lock();
try {
for (Value value : data.values()) {
value.update();
}
}
finally {
writeLock.unlock();
}
}
}
private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
private final ReadLock readLock;
private final WriteLock writeLock;
public Thing() {
readLock = lock.readLock();
writeLock = lock.writeLock();
}
public void doStuff() {
WriterThread thread1 = new WriterThread();
WriterThread thread2 = new WriterThread();
Map<Key, Value> allValues = new HashMap<Key, Value>();
allValues.putAll(thread1.data);
allValues.putAll(thread2.data);
MyReaderThread thread3 = new MyReaderThread(allValues);
thread1.start();
thread2.start();
thread3.start();
}
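And as for the BlockingQueue suggestion above, a minimal producer-consumer sketch (all names here are made up) could look like this: the writers hand finished values to the reader instead of sharing a map.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ProducerConsumerSketch {

    private static final class Value {
        final int id;
        Value(int id) { this.id = id; }
    }

    public static void main(String[] args) {
        final BlockingQueue<Value> queue = new ArrayBlockingQueue<Value>(100);

        Runnable writer = new Runnable() {
            @Override
            public void run() {
                for (int i = 0; i < 10; i++) {
                    try {
                        queue.put(new Value(i)); // blocks if the queue is full
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
            }
        };

        Runnable reader = new Runnable() {
            @Override
            public void run() {
                try {
                    for (int i = 0; i < 20; i++) {  // 2 writers x 10 values each
                        Value v = queue.take();     // blocks until a value is available
                        System.out.println("consumed " + v.id);
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        };

        new Thread(writer).start();
        new Thread(writer).start();
        new Thread(reader).start();
    }
}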
ConcurrentHashMap, from java.util.concurrent, is a thread-safe implementation of Map which provides a much higher degree of concurrency than synchronizedMap. Many reads can almost always be performed in parallel, simultaneous reads and writes can usually be done in parallel, and multiple simultaneous writes can often be done in parallel. (The class ConcurrentReaderHashMap offers similar parallelism for multiple read operations, but allows only one active write operation.) ConcurrentHashMap is designed to optimize retrieval operations.
Your example code may be misleading. In your first example you create a HashMap<String,Integer> but the second part iterates the map values which in this case are MyObject. The key to synchronization is to understand where and which mutable state is shared.
An Integer is immutable. It can be shared freely (but the reference to an Integer is mutable - it must be safely published and/or synchronized). But your code example suggests that the maps are populated with mutable MyObject instances.
Given that the map entries (key -> MyObject references) are not changed by any thread and all maps are created and safely published before any thread starts, it would in my opinion be sufficient to synchronize the modification of MyObject. E.g.:
public class MyObject {
private Object value;
synchronized Object getNewValue() {
return value;
}
synchronized void setNewValue(final Object newValue) {
this.value = newValue;
}
}
If my assumptions are not correct, clarify your question / code example and also consider @jacobm's comment and @Alex's answer.
I'm looking for a way to synchronize a method based on the parameter it receives, something like this:
public synchronized void doSomething(String name){
//some code
}
I want the method doSomething to be synchronized based on the name parameter like this:
Thread 1: doSomething("a");
Thread 2: doSomething("b");
Thread 3: doSomething("c");
Thread 4: doSomething("a");
Thread 1, Thread 2 and Thread 3 will execute the code without being synchronized, but Thread 4 will wait until Thread 1 has finished the code because it has the same "a" value.
Thanks
UPDATE
Based on Tudor explanation I think I'm facing another problem:
here is a sample of the new code:
private HashMap<String, Object> locks = new HashMap<String, Object>();
public void doSomething(String name){
locks.put(name,new Object());
synchronized(locks.get(name)) {
// ...
}
locks.remove(name);
}
The reason why I don't pre-populate the locks map is that name can have any value.
Based on the sample above, problems can appear when multiple threads add/remove values from the HashMap at the same time, since HashMap is not thread-safe.
So my question is: if I make the HashMap a ConcurrentHashMap, which is thread-safe, will the synchronized block stop other threads from accessing locks.get(name)?
TL;DR:
I use ConcurrentReferenceHashMap from the Spring Framework. Please check the code below.
Although this thread is old, it is still interesting. Therefore, I would like to share my approach using the Spring Framework.
What we are trying to implement is called named mutex/lock. As suggested by Tudor's answer, the idea is to have a Map to store the lock name and the lock object. The code will look like below (I copy it from his answer):
Map<String, Object> locks = new HashMap<String, Object>();
locks.put("a", new Object());
locks.put("b", new Object());
However, this approach has 2 drawbacks:
The OP already pointed out the first one: how to synchronize the access to the locks hash map?
How to remove some locks which are not necessary anymore? Otherwise, the locks hash map will keep growing.
The first problem can be solved by using ConcurrentHashMap. For the second problem, we have 2 options: manually check and remove locks from the map, or somehow let the garbage collector know which locks are no longer used so the GC can remove them. I will go with the second option.
When we use HashMap or ConcurrentHashMap, it creates strong references. To implement the solution discussed above, weak references should be used instead (to understand what strong and weak references are, please refer to this article or this post).
So, I use ConcurrentReferenceHashMap from the Spring Framework. As described in the documentation:
A ConcurrentHashMap that uses soft or weak references for both keys
and values.
This class can be used as an alternative to
Collections.synchronizedMap(new WeakHashMap<K, Reference<V>>()) in
order to support better performance when accessed concurrently. This
implementation follows the same design constraints as
ConcurrentHashMap with the exception that null values and null keys
are supported.
Here is my code. The MutexFactory manages all the locks, with <K> being the type of the key.
@Component
public class MutexFactory<K> {
private ConcurrentReferenceHashMap<K, Object> map;
public MutexFactory() {
this.map = new ConcurrentReferenceHashMap<>();
}
public Object getMutex(K key) {
return this.map.compute(key, (k, v) -> v == null ? new Object() : v);
}
}
Usage:
@Autowired
private MutexFactory<String> mutexFactory;
public void doSomething(String name){
synchronized(mutexFactory.getMutex(name)) {
// ...
}
}
Unit test (this test uses the awaitility library for some methods, e.g. await(), atMost(), until()):
public class MutexFactoryTests {
private final int THREAD_COUNT = 16;
@Test
public void singleKeyTest() {
MutexFactory<String> mutexFactory = new MutexFactory<>();
String id = UUID.randomUUID().toString();
final int[] count = {0};
IntStream.range(0, THREAD_COUNT)
.parallel()
.forEach(i -> {
synchronized (mutexFactory.getMutex(id)) {
count[0]++;
}
});
await().atMost(5, TimeUnit.SECONDS)
.until(() -> count[0] == THREAD_COUNT);
Assert.assertEquals(count[0], THREAD_COUNT);
}
}
Use a map to associate strings with lock objects:
Map<String, Object> locks = new HashMap<String, Object>();
locks.put("a", new Object());
locks.put("b", new Object());
// etc.
then:
public void doSomething(String name){
synchronized(locks.get(name)) {
// ...
}
}
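If the set of names is not known up front (as in the question's update), one common variation is to create each lock lazily and atomically with ConcurrentHashMap.computeIfAbsent. This sketch (the class name is made up) never evicts old locks, so the map grows with the number of distinct names:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class NamedLocks {

    // One lock object per distinct name, created on first use.
    private final ConcurrentMap<String, Object> locks = new ConcurrentHashMap<String, Object>();

    public void doSomething(String name) {
        // computeIfAbsent is atomic: two threads asking for the same name always get the same Object.
        Object lock = locks.computeIfAbsent(name, k -> new Object());
        synchronized (lock) {
            // ... critical section for this particular name ...
        }
    }
}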
Tudor's answer is fine, but it's static and not scalable. My solution is dynamic and scalable, but it comes with increased implementation complexity. The outside world can use this class just like a Lock, as this class implements the interface. You get an instance of a parameterized lock by the factory method getCanonicalParameterLock.
package lock;
import java.lang.ref.Reference;
import java.lang.ref.WeakReference;
import java.util.Map;
import java.util.WeakHashMap;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;
public final class ParameterLock implements Lock {
/** Holds a WeakKeyLockPair for each parameter. The mapping may be deleted upon garbage collection
* if the canonical key is not strongly referenced anymore (by the threads using the Lock). */
private static final Map<Object, WeakKeyLockPair> locks = new WeakHashMap<>();
private final Object key;
private final Lock lock;
private ParameterLock (Object key, Lock lock) {
this.key = key;
this.lock = lock;
}
private static final class WeakKeyLockPair {
/** The weakly-referenced parameter. If it were strongly referenced, the entries of
* the lock Map would never be garbage collected, causing a memory leak. */
private final Reference<Object> param;
/** The actual lock object on which threads will synchronize. */
private final Lock lock;
private WeakKeyLockPair (Object param, Lock lock) {
this.param = new WeakReference<>(param);
this.lock = lock;
}
}
public static Lock getCanonicalParameterLock (Object param) {
Object canonical = null;
Lock lock = null;
synchronized (locks) {
WeakKeyLockPair pair = locks.get(param);
if (pair != null) {
canonical = pair.param.get(); // could return null!
}
if (canonical == null) { // no such entry or the reference was cleared in the meantime
canonical = param; // the first thread (the current thread) delivers the new canonical key
pair = new WeakKeyLockPair(canonical, new ReentrantLock());
locks.put(canonical, pair);
}
}
// the canonical key is strongly referenced now...
lock = locks.get(canonical).lock; // ...so this is guaranteed not to return null
// ... but the key must be kept strongly referenced after this method returns,
// so wrap it in the Lock implementation, which a thread of course needs
// to be able to synchronize. This enforces a thread to have a strong reference
// to the key, while it isn't aware of it (as this method declares to return a
// Lock rather than a ParameterLock).
return new ParameterLock(canonical, lock);
}
@Override
public void lock() {
lock.lock();
}
@Override
public void lockInterruptibly() throws InterruptedException {
lock.lockInterruptibly();
}
@Override
public boolean tryLock() {
return lock.tryLock();
}
@Override
public boolean tryLock(long time, TimeUnit unit) throws InterruptedException {
return lock.tryLock(time, unit);
}
@Override
public void unlock() {
lock.unlock();
}
@Override
public Condition newCondition() {
return lock.newCondition();
}
}
Of course you'd need a canonical key for a given parameter, otherwise threads would not be synchronized as they would be using a different Lock. Canonicalization is the equivalent of the interning of Strings in Tudor's solution. Where String.intern() is itself thread-safe, my 'canonical pool' is not, so I need extra synchronization on the WeakHashMap.
This solution works for any type of Object. However, make sure to implement equals and hashCode correctly in custom classes, because if not, threading issues will arise as multiple threads could be using different Lock objects to synchronize on!
The choice for a WeakHashMap is explained by the ease of memory management it brings. How else could one know that no thread is using a particular Lock anymore? And if this could be known, how could you safely delete the entry out of the Map? You would need to synchronize upon deletion, because you have a race condition between an arriving thread wanting to use the Lock, and the action of deleting the Lock from the Map. All these things are just solved by using weak references, so the VM does the work for you, and this simplifies the implementation a lot. If you inspected the API of WeakReference, you would find that relying on weak references is thread-safe.
Now inspect this test program (you need to run it from inside the ParameterLock class, due to private visibility of some fields):
public static void main(String[] args) {
Runnable run1 = new Runnable() {
@Override
public void run() {
sync(new Integer(5));
System.gc();
}
};
Runnable run2 = new Runnable() {
@Override
public void run() {
sync(new Integer(5));
System.gc();
}
};
Thread t1 = new Thread(run1);
Thread t2 = new Thread(run2);
t1.start();
t2.start();
try {
t1.join();
t2.join();
while (locks.size() != 0) {
System.gc();
System.out.println(locks);
}
System.out.println("FINISHED!");
} catch (InterruptedException ex) {
// those threads won't be interrupted
}
}
private static void sync (Object param) {
Lock lock = ParameterLock.getCanonicalParameterLock(param);
lock.lock();
try {
System.out.println("Thread="+Thread.currentThread().getName()+", lock=" + ((ParameterLock) lock).lock);
// do some work while having the lock
} finally {
lock.unlock();
}
}
Chances are very high that you would see that both threads are using the same lock object, and so they are synchronized. Example output:
Thread=Thread-0, lock=java.util.concurrent.locks.ReentrantLock@8965fb[Locked by thread Thread-0]
Thread=Thread-1, lock=java.util.concurrent.locks.ReentrantLock@8965fb[Locked by thread Thread-1]
FINISHED!
However, with some chance it might be that the 2 threads do not overlap in execution, and therefore it is not required that they use the same lock. You could easily enforce this behavior in debugging mode by setting breakpoints at the right locations, forcing the first or second thread to stop wherever necessary. You will also notice that after the Garbage Collection on the main thread, the WeakHashMap will be cleared, which is of course correct, as the main thread waited for both worker threads to finish their job by calling Thread.join() before calling the garbage collector. This indeed means that no strong reference to the (Parameter)Lock can exist anymore inside a worker thread, so the reference can be cleared from the weak hashmap. If another thread now wants to synchronize on the same parameter, a new Lock will be created in the synchronized part in getCanonicalParameterLock.
Now repeat the test with any pair that has the same canonical representation (= they are equal, so a.equals(b)), and see that it still works:
sync("a");
sync(new String("a"));
sync(new Boolean(true));
sync(new Boolean(true));
etc.
Basically, this class offers you the following functionality:
Parameterized synchronization
Encapsulated memory management
The ability to work with any type of object (under the condition that equals and hashCode is implemented properly)
Implements the Lock interface
This Lock implementation has been tested by modifying an ArrayList concurrently with 10 threads iterating 1000 times, doing this: adding 2 items, then deleting the last found list entry by iterating the full list. A lock is requested per iteration, so in total 10*1000 locks will be requested. No ConcurrentModificationException was thrown, and after all worker threads have finished the total amount of items was 10*1000. On every single modification, a lock was requested by calling ParameterLock.getCanonicalParameterLock(new String("a")), so a new parameter object is used to test the correctness of the canonicalization.
Please note that you shouldn't be using String literals and primitive types for parameters. As String literals are automatically interned, they always have a strong reference, and so if the first thread arrives with a String literal for its parameter then the lock pool will never be freed from the entry, which is a memory leak. The same story goes for autoboxed primitives: e.g. Integer has a caching mechanism that will reuse existing Integer objects during the process of autoboxing, also causing a strong reference to exist. Addressing this, however, is a different story.
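A quick standalone illustration (not part of the lock code) of why literals and small boxed values behave like permanently referenced keys:

public class InterningDemo {
    public static void main(String[] args) {
        String a = "key";
        String b = "key";
        System.out.println(a == b); // true: string literals are interned, one shared instance

        System.out.println(Integer.valueOf(127) == Integer.valueOf(127));   // true: small values are cached
        System.out.println(Integer.valueOf(1000) == Integer.valueOf(1000)); // false with the default cache size
    }
}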
Check out this framework. Seems you're looking for something like this.
public class WeatherServiceProxy {
...
private final KeyLockManager lockManager = KeyLockManagers.newManager();
public void updateWeatherData(String cityName, Date samplingTime, float temperature) {
lockManager.executeLocked(cityName, new LockCallback() {
public void doInLock() {
delegate.updateWeatherData(cityName, samplingTime, temperature);
}
});
}
https://code.google.com/p/jkeylockmanager/
I've created a tokenProvider based on the IdMutexProvider of McDowell.
The manager uses a WeakHashMap which takes care of cleaning up unused locks.
You could find my implementation here.
I've found a proper answer through another stackoverflow question: How to acquire a lock by a key
I copied the answer here:
Guava has something like this being released in 13.0; you can get it out of HEAD if you like.
Striped more or less allocates a specific number of locks, and then assigns strings to locks based on their hash code. The API looks more or less like
Striped<Lock> locks = Striped.lock(stripes);
Lock l = locks.get(string);
l.lock();
try {
// do stuff
} finally {
l.unlock();
}
More or less, the controllable number of stripes lets you trade concurrency against memory usage, because allocating a full lock for each string key can get expensive; essentially, you only get lock contention when you get hash collisions, which are (predictably) rare.
Just extending Triet Doan's answer: we also need to take care when the MutexFactory is used in multiple places, because with the currently suggested code we will end up with the same MutexFactory everywhere it is used.
For example:-
@Autowired
MutexFactory<CustomObject1> mutexFactory1;
@Autowired
MutexFactory<CustomObject2> mutexFactory2;
Both mutexFactory1 and mutexFactory2 will refer to the same factory instance even though their type parameters differ. This is because a single instance of MutexFactory is created by Spring during application startup and that same instance is used for both mutexFactory1 and mutexFactory2.
So here is the extra @Scope annotation that needs to be added to avoid the above case:
@Component
@Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
public class MutexFactory<K> {
private ConcurrentReferenceHashMap<K, Object> map;
public MutexFactory() {
this.map = new ConcurrentReferenceHashMap<>();
}
public Object getMutex(K key) {
return this.map.compute(key, (k, v) -> v == null ? new Object() : v);
}
}
I've used a cache to store lock objects. My cache will expire objects after a period, which really only needs to be longer than the time it takes the synchronized process to run.
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
...
private final Cache<String, Object> mediapackageLockCache = CacheBuilder.newBuilder().expireAfterWrite(DEFAULT_CACHE_EXPIRE, TimeUnit.SECONDS).build();
...
public void doSomething(Object foo) {
Object lock = mediapackageLockCache.getIfPresent(foo.toString());
if (lock == null) {
lock = new Object();
mediapackageLockCache.put(foo.toString(), lock);
}
synchronized(lock) {
// execute code on foo
...
}
}
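One caveat with the snippet above: getIfPresent() followed by put() is not atomic, so two threads can briefly install different lock objects for the same key. A hedged variant of the same method uses Cache.get(key, loader), which loads at most once per key (foo is typed as Object here, as in the snippet above):

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;

public void doSomething(Object foo) throws ExecutionException {
    // get(key, loader) creates the entry atomically, so all threads share one lock object per key
    Object lock = mediapackageLockCache.get(foo.toString(), new Callable<Object>() {
        @Override
        public Object call() {
            return new Object();
        }
    });

    synchronized (lock) {
        // execute code on foo
    }
}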
I have a much simpler, scalable implementation akin to @timmons' post, taking advantage of Guava's LoadingCache with weakValues. You will want to read the help files on "equality" to understand the suggestion I have made.
Define the following weakValued cache.
private final LoadingCache<String,String> syncStrings = CacheBuilder.newBuilder().weakValues().build(new CacheLoader<String, String>() {
public String load(String x) throws ExecutionException {
return new String(x);
}
});
public void doSomething(String x) throws ExecutionException {
x = syncStrings.get(x);
synchronized(x) {
..... // whatever it is you want to do
}
}
Now, thanks to the JVM, we do not have to worry about the cache growing too large; it only holds the cached strings as long as necessary, and the garbage collector/Guava does the heavy lifting.