How to optimize concurrent operations in Java?

I'm still quite shaky on multi-threading in Java. What I describe here is at the very heart of my application and I need to get this right. The solution needs to work fast and it needs to be practically safe. Will this work? Any suggestions/criticism/alternative solutions welcome.
Objects used within my application are somewhat expensive to generate but change rarely, so I am caching them in *.temp files. It is possible for one thread to try and retrieve a given object from cache, while another is trying to update it there. Cache operations of retrieve and store are encapsulated within a CacheService implementation.
Consider this scenario:
Thread 1: retrieve cache for objectId "page_1".
Thread 2: update cache for objectId "page_1".
Thread 3: retrieve cache for objectId "page_2".
Thread 4: retrieve cache for objectId "page_3".
Thread 5: retrieve cache for objectId "page_4".
Note: thread 1 appears to retrieve an obsolete object, because thread 2 has a newer copy of it. This is perfectly OK so I do not need any logic that will give thread 2 priority.
If I synchronize retrieve/store methods on my service, then I'm unnecessarily slowing things down for threads 3, 4 and 5. Multiple retrieve operations will be effective at any given time but the update operation will be called rarely. This is why I want to avoid method synchronization.
I gather I need to synchronize on an object that is exclusively common to threads 1 and 2, which implies a lock object registry. Here, an obvious choice would be a Hashtable, but again, operations on Hashtable are synchronized, so I'm trying a HashMap. The map stores a String to be used as the lock object for synchronization, keyed by the id of the object being cached. So for object "page_1" the key would be "page_1" and the lock object would be a String with the value "page_1".
If I've got the registry right, then additionally I want to protect it from being flooded with too many entries. Let's not get into details why. Let's just assume that if the registry has grown past a defined limit, it needs to be reinitialized with 0 elements. This is a bit of a risk with an unsynchronized HashMap, but this flooding would be something outside of normal application operation. It should be a very rare occurrence and hopefully never takes place. But since it is possible, I want to protect myself from it.
@Service
public class CacheServiceImpl implements CacheService {

    private static ConcurrentHashMap<String, String> objectLockRegistry = new ConcurrentHashMap<>();

    public Object getObject(String objectId) {
        String objectLock = getObjectLock(objectId);
        if (objectLock != null) {
            synchronized (objectLock) {
                // read object from objectInputStream
            }
        }
        return null; // placeholder: return the deserialized object
    }

    public boolean storeObject(String objectId, Object object) {
        String objectLock = getObjectLock(objectId);
        synchronized (objectLock) {
            // write object to objectOutputStream
        }
        return true; // placeholder: report whether the write succeeded
    }

    private String getObjectLock(String objectId) {
        int objectLockRegistryMaxSize = 100_000;
        // reinitialize registry if necessary
        if (objectLockRegistry.size() > objectLockRegistryMaxSize) {
            // hoping to never reach this point, but it is not impossible to get here
            synchronized (objectLockRegistry) {
                if (objectLockRegistry.size() > objectLockRegistryMaxSize) {
                    objectLockRegistry.clear();
                }
            }
        }
        // add lock to registry if necessary
        objectLockRegistry.putIfAbsent(objectId, new String(objectId));
        return objectLockRegistry.get(objectId);
    }
}

If you are reading from disk, lock contention is not going to be your performance issue.
You can have both threads grab the lock for the entire cache and do a read; if the value is missing, release the lock, read from disk, re-acquire the lock, and then write the value if it is still missing, otherwise return the value that is now there.
The only issue you will have with that is concurrent reads thrashing the disk... but the OS caches will be hot, so the disk shouldn't be thrashed too badly.
If that is an issue, then switch your cache to holding a Future<V> in place of a V.
The get method will become something like:
public V get(K key) throws InterruptedException, ExecutionException {
    Future<V> future;
    synchronized (this) {
        future = backingCache.get(key);
        if (future == null) {
            // nobody has started loading this value yet; schedule the load
            future = executorService.submit(new LoadFromDisk(key));
            backingCache.put(key, future);
        }
    }
    // blocks only until the load for this particular key completes
    return future.get();
}
Yes, that is a global lock... but you're reading from disk, and don't optimize until you have a proven performance bottleneck...
Oh, and a first optimization: replace the map with a ConcurrentHashMap and use putIfAbsent and you'll have no lock at all! (BUT only do that when you know this is an issue.)
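For reference, a minimal sketch of that putIfAbsent variant, along the lines of the "Memoizer" pattern from Java Concurrency in Practice (the LoadFromDisk callable and the K/V types are assumed from the example above):

private final ConcurrentHashMap<K, Future<V>> backingCache = new ConcurrentHashMap<>();

public V get(K key) throws InterruptedException, ExecutionException {
    Future<V> future = backingCache.get(key);
    if (future == null) {
        FutureTask<V> task = new FutureTask<>(new LoadFromDisk(key));
        // only one thread wins the race to install the task; the losers reuse the winner's
        future = backingCache.putIfAbsent(key, task);
        if (future == null) {
            future = task;
            task.run(); // load on the calling thread (or hand off to an executor instead)
        }
    }
    return future.get();
}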

The complexity of your scheme has already been discussed. It leads to hard-to-find bugs. For example, not only do you lock on non-final variables, but you even change them in the middle of the synchronized blocks that use them as a lock. Multi-threading is very hard to reason about, and this kind of code makes it almost impossible:
synchronized (objectLockRegistry) {
    if (objectLockRegistry.size() > objectLockRegistryMaxSize) {
        objectLockRegistry = new HashMap<>(); // brrrrrr...
    }
}
In particular, two simultaneous calls to get a lock for a specific key might actually return two different String instances with the same value, each stored in a different instance of your map (unless they are interned), and you won't be locking on the same monitor.
You should either use an existing library or keep it a lot simpler.
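To illustrate the "keep it a lot simpler" option, here is a minimal sketch (assuming Java 8+; the names are illustrative, not from the question) that hands out one canonical lock object per key, with no string interning and no registry reset:

private static final ConcurrentHashMap<String, Object> locks = new ConcurrentHashMap<>();

private static Object lockFor(String objectId) {
    // computeIfAbsent atomically creates at most one lock object per key
    return locks.computeIfAbsent(objectId, id -> new Object());
}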

If your question includes the keywords "optimize" and "concurrent", and your solution includes a complicated locking scheme... you're doing it wrong. It is possible to succeed at this sort of venture, but the odds are stacked against you. Prepare to diagnose bizarre concurrency bugs, including, but not limited to, deadlock, livelock, and cache incoherency... I can spot multiple unsafe practices in your example code.
Pretty much the only way to create a safe and effective concurrent algorithm without being a concurrency god is to take one of the pre-baked concurrent classes and adapt them to your need. It's just too hard to do unless you have an exceptionally convincing reason.
You might take a look at ConcurrentMap. You might also like CacheBuilder.

Using Threads and synchronized directly is covered by the beginning of most tutorials about multithreading and concurrency. However, many real-world examples require more sophisticated locking and concurrency schemes, which are cumbersome and error-prone if you implement them yourself. To prevent reinventing the wheel over and over again, the Java concurrency library was created. There, you can find many classes that will be of great help to you. Try googling for tutorials about Java concurrency and locks.
As an example for a lock which might help you, see http://docs.oracle.com/javase/7/docs/api/java/util/concurrent/locks/ReadWriteLock.html .
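As a hedged sketch of how a ReadWriteLock could be applied to the cache service from the question (field names are illustrative): this is one lock for the whole cache, but retrieves no longer block each other; only updates are exclusive.

private final ReadWriteLock rwLock = new ReentrantReadWriteLock();

public Object getObject(String objectId) {
    rwLock.readLock().lock();   // many concurrent retrieves may hold the read lock
    try {
        // read object from the cache file
        return null; // placeholder
    } finally {
        rwLock.readLock().unlock();
    }
}

public boolean storeObject(String objectId, Object object) {
    rwLock.writeLock().lock();  // updates get exclusive access
    try {
        // write object to the cache file
        return true; // placeholder
    } finally {
        rwLock.writeLock().unlock();
    }
}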

Rather than roll your own cache I would take a look at Google's MapMaker. Something like this will give you a lock cache that automatically expires unused entries as they are garbage collected:
ConcurrentMap<String, String> objectLockRegistry = new MapMaker()
    .softValues()
    .makeComputingMap(new Function<String, String>() {
        public String apply(String s) {
            return new String(s);
        }
    });
With this, the whole getObjectLock implementation is simply return objectLockRegistry.get(objectId) - the map takes care of all the "create if not already present" stuff for you in a safe way.
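Under that scheme, the lock-lookup method from the question reduces to the following sketch:

private String getObjectLock(String objectId) {
    // the computing map creates, caches and returns the lock string on first access
    return objectLockRegistry.get(objectId);
}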

I would do it similarly to you: just create a map of lock objects (new Object()).
But in contrast to you, I would use a TreeMap<String, Object> or a HashMap.
Call that the lockMap, with one entry per file to lock. The lockMap is available to all participating threads.
Each read and write to a specific file gets the lock from the map and uses synchronized(lock) on that lock object.
If the lockMap is not fixed and its content can change, then reading and writing the map must be synchronized, too (synchronized(this.lockMap) {...}).
But your getObjectLock() is not safe; synchronize all of it with your lock. (Double-checked locking is not thread safe in Java!) A recommended book: Doug Lea, Concurrent Programming in Java.
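A minimal sketch of the lockMap idea described above, with the registry access itself synchronized as recommended (the names are illustrative):

private final Map<String, Object> lockMap = new HashMap<>();

private Object lockFor(String fileId) {
    synchronized (lockMap) {            // guard the registry itself
        Object lock = lockMap.get(fileId);
        if (lock == null) {
            lock = new Object();
            lockMap.put(fileId, lock);
        }
        return lock;
    }
}

public void readFile(String fileId) {
    synchronized (lockFor(fileId)) {
        // read the cached file under its per-file lock
    }
}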

Related

Synchronised VS Striped Lock

I have a class that is accessed by multiple threads, and I want to make sure it's thread safe. Plus it needs to be as fast as possible. This is just an example:
public class SharedClass {

    private final Map<String, String> data = new HashMap<>();
    private final Striped<ReadWriteLock> rwLockStripes = Striped.readWriteLock(100);

    public void setSomethingFastVersion(String key, String value) {
        ReadWriteLock rwLock = rwLockStripes.get(key);
        rwLock.writeLock().lock();
        try {
            data.put(key, value);
        } finally {
            rwLock.writeLock().unlock();
        }
    }

    public synchronized void setSomethingSlowVersion(String key, String value) {
        data.put(key, value);
    }
}
I'm using StripedLock from Google Guava in one version, and a normal synchronized on the other one.
Am I right saying that the Guava version should be faster?
If so, what would be a good use case for synchronized, where the StripedLocks would not fit?
BTW, I know I could use a simple ConcurrentHashMap here, but I'm adding the example code to make sure you understand my question.
Synchronized has been around for ages. It's not really surprising that we nowadays have more advanced mechanisms for concurrent programming.
However striped locks are advantageous only in cases where something can be partitioned or striped, such as locking parts of a map allowing different parts to be manipulated at the same time, but blocking simultaneous manipulations to the same stripe. In many cases you don't have that kind of partitioning, you're just looking for a mutex. In those cases synchronized is still a viable option, although a ReadWriteLock might be a better choice depending on the situation.
A ConcurrentHashMap has internal partitioning similar to stripes, but it applies only to the map operations such as put(). With an explicit StripedLock you can make longer operations atomic, while still allowing concurrency when operations don't touch the same stripe.
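For example, a hedged sketch of a compound check-then-act made atomic per key with an explicit striped lock (the backing map is a ConcurrentHashMap so the map itself stays structurally safe; method and field names are illustrative):

private final Striped<Lock> stripes = Striped.lock(100);
private final ConcurrentMap<String, String> data = new ConcurrentHashMap<>();

public String putIfLonger(String key, String candidate) {
    Lock lock = stripes.get(key);
    lock.lock();
    try {
        String current = data.get(key);
        if (current == null || candidate.length() > current.length()) {
            data.put(key, candidate);   // check and update happen under the same stripe
            return candidate;
        }
        return current;
    } finally {
        lock.unlock();
    }
}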
Let me put it this way. Say you have 1000 instances of a class, and 1000 threads trying to access those instances. If every instance acquires its own lock for every thread, that is a huge number of locks, which leads to huge memory consumption. In that case striped locks can come in handy.
But in the normal case, where you have a singleton class, you may not need striped locks and can go ahead and use the synchronized keyword.
So, I hope I answered when to use what.
Use a ConcurrentHashMap so you won't have to do any of your own synchronizing.
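A minimal sketch of that suggestion applied to the example class, with no external locking at all:

public class SharedClass {

    private final ConcurrentMap<String, String> data = new ConcurrentHashMap<>();

    public void setSomething(String key, String value) {
        data.put(key, value);   // ConcurrentHashMap handles the synchronization internally
    }
}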

Is it safe in Java to read (not modify) objects which are not thread safe (like linked list) from multiple threads?

There was already a question asking whether threads can safely read/iterate a LinkedList simultaneously. It seems the answer is yes, as long as no one structurally changes it (adds to or deletes from the linked list).
Although one answer warned about "unflushed caches" and advised learning the "Java memory model", so I'm asking for elaboration on those "evil" caches. I'm a newbie, and so far I still naively believe that the following code is OK (at least from my tests):
public static class workerThread implements Runnable {

    LinkedList<Integer> ll_only_for_read;
    PrintWriter writer;

    public workerThread(LinkedList<Integer> ll, int id2) throws Exception {
        ll_only_for_read = ll;
        writer = new PrintWriter("file." + id2, "UTF-8");
    }

    @Override
    public void run() {
        for (Integer i : ll_only_for_read) writer.println(" ll:" + i);
        writer.close();
    }
}
public static void main(String args[]) throws Exception {
    LinkedList<Integer> ll = new LinkedList<Integer>();
    for (int i = 0; i < 1e3; i++) ll.add(i);
    // do I need to call something special here? (in order to say:
    // "hey LinkedList, flush all your data from local cache,
    // you will now be a good boy and share those data among a
    // whole lot of interesting threads. Don't worry though, they will only read
    // you, no thread would dare to change you")
    new Thread(new workerThread(ll, 1)).start();
    new Thread(new workerThread(ll, 2)).start();
}
Yes, in your specific example code it's okay, since the act of creating the new thread defines a happens-before relationship between populating the list and reading it from another thread. There are plenty of ways that a seemingly-similar setup could be unsafe, however.
I highly recommend reading "Java Concurrency in Practice" by Brian Goetz et al for more details.
If your code creates and populates the list with a single thread, and only afterwards creates other threads that concurrently access the list, there is no problem.
Problems can happen only when one thread modifies a value while other threads try to read the same value.
It can still be a problem if you change an object you retrieve from the list (even if you don't change the list itself).
Although one answer warned about "unflushed caches" and advised learning the "Java memory model".
I think you are referring to my Answer to this Question: Can Java LinkedList be read in multiple-threads safely?.
So I'm asking to elaborate those "evil" caches.
They are not evil. They are just a fact of life ... and they affect the correctness (thread-safety) reasoning for multi-threaded applications.
The Java Memory Model is Java's answer to this fact of life. The memory model specifies with mathematical precision a bunch of rules that need to be obeyed to ensure that all possible executions of your application are "well-formed". (In simple terms: that your application is thread-safe.)
The Java Memory Model is ... difficult.
Someone recommended "Java Concurrency in Practice" by Brian Goetz et al. I concur. It is the best textbook on the topic of writing "classic" Java multi-threaded applications, and it has a good explanation of the Java Memory Model.
More importantly, Goetz et al give you a simpler set of rules that are sufficient to give you thread-safety. These rules are still too detailed to condense into a StackOverflow answer... but:
one of the concepts is "safe publication", and
one of the principles is to use / re-use existing concurrency constructs rather than to roll your own concurrency mechanisms based on the Memory Model.
I'm a newbie and so far I still naively believe that the following code is OK.
It >>is<< correct. However ...
(at least from my tests)
... testing is NOT a guarantee of anything. The problem with non-thread-safe programs is that the faults are frequently not revealed by testing because they manifest randomly, with low probability, and often differently on different platforms.
You cannot rely on testing to tell you that your code is thread-safe. You need to reason1 about the behaviour ... or follow a set of well-founded rules.
1 - And I mean real, well-founded reasoning ... not seat-of-the-pants intuitive stuff.
The way you're using it is fine, but only by coincidence.
Programs are rarely that trivial:
If the List contains references to other (mutable) data, then you'll get race conditions.
If someone modifies your 'reader' threads later in the code's lifecycle, then you'll get races.
Immutable data (and data structures) are by definition thread-safe. However, this is a mutable List, even though you're making the agreement with yourself that you won't modify it.
I'd recommend wrapping the List<> instance like this so the code fails immediately if someone tries to use any mutators on the List:
List<Integer> immutableList = Collections.unmodifiableList(ll);
//...pass 'immutableList' to threads.
Link to unmodifiableList
You need to guarantee a happens-before relationship between the writes and reads on your LinkedList because they are done in separate threads.
The result of ll.add(i) will be visible to the new workerThread because Thread.start() forms a happens-before relationship. So your example is thread safe. See more about happens-before conditions.
However, be aware of the more complex situation in which the LinkedList is read during iteration in the worker threads and at the same time modified by the main thread, like this:

for (int i = 0; i < 1e3; i++) {
    ll.add(i);
    new Thread(new workerThread(ll, 1)).start();
    new Thread(new workerThread(ll, 2)).start();
}
In this case, a ConcurrentModificationException is possible.
There are several options:
Clone your LinkedList inside of workerThread and iterate the copy instead.
Use synchronization both for list modification and for list iteration (but it will lead to poor concurrency).
Instead of LinkedList, use a CopyOnWriteArrayList (see the sketch below).
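A hedged sketch of the third option (assuming workerThread is changed to accept a List<Integer>): iterators of a CopyOnWriteArrayList operate on a snapshot, so the concurrent adds cannot throw a ConcurrentModificationException.

List<Integer> ll = new CopyOnWriteArrayList<Integer>();
for (int i = 0; i < 1e3; i++) {
    ll.add(i);
    // each worker iterates a consistent snapshot of the list
    new Thread(new workerThread(ll, 1)).start();
    new Thread(new workerThread(ll, 2)).start();
}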
Sorry for answering my own question, but while thinking about your reassuring answers I found that it may not be as safe as it seems. I found and tested a case where it does not work: if the object used a class variable for storing any data (which I wouldn't know about), it would fail (and then the only question is whether LinkedList, or other java.* classes, do this in some implementation). See the failing example:
public class DummyLinkedList {

    public LinkedList<Integer> ll;

    public DummyLinkedList() {
        ll = new LinkedList<Integer>();
    }

    int lastGetIndex;

    int myDummyGet(int idx) {
        lastGetIndex = idx;
        // return ll.get(idx); // this would work fine, as the parameter lives on the stack
        //                     // and so is unique for each call
        return ll.get(lastGetIndex); // this is a problem even when only reading the object -
                                     // the question is how many such issues java.* contains
    }
}
It depends on how the object was created and made available to your thread. In general, no, it's not safe, even if the object isn't modified.
Following are some ways to make it safe.
First, create the object and perform any modification that is necessary; you can consider the object to be effectively immutable if no more modifications occur. Then, share the effectively immutable object with other threads by one of the following means:
Have other threads read the object from a field that is volatile.
Write a reference to the object inside a synchronized block, then have other threads read that reference while synchronized on the same lock.
Start the reading threads after the object is initialized, passing the object as a parameter. (This is what you are doing in your example, so you are safe.)
Pass the object between threads using a concurrent mechanism like a BlockingQueue implementation, or publish it in a concurrent collection, like a ConcurrentMap implementation.
There might be others. Alternatively, you can make all of the fields of the shared object final (including all the fields of its Object members, and so on). Then it will be safe to share this object by any means across threads. That's one of the under-appreciated virtues of immutable types.
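For instance, a minimal sketch of the volatile option from the list above (names are illustrative): the volatile write publishes the fully populated list, and the volatile read gives the readers the required happens-before edge.

class ListHolder {

    private volatile List<Integer> published; // volatile write/read pair establishes happens-before

    void init() {
        List<Integer> list = new LinkedList<Integer>();
        for (int i = 0; i < 1000; i++) {
            list.add(i);
        }
        published = list;   // safe publication: everything written above is now visible to readers
    }

    List<Integer> read() {
        return published;   // readers see the fully built list (but must not modify it)
    }
}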
If your only access to the list is through 'read' methods (including iteration), then you are fine, as in your code.

least blocking java cache

Suppose we want to implement a cache for a particular entity.
class Cache {

    private static Map<String, Object> cache = new HashMap<>();

    public static Object get(String id) {
        assert notNullOrEmpty(id);
        return cache.get(id);
    }

    public static Object add(String id, Object element) {
        assert notNullOrEmpty(id) && notNull(element);
        if (cache.containsKey(id)) return cache.get(id);
        cache.put(id, element);
        return element;
    }
}
Now we want to ensure this is thread-safe and, most importantly, optimal when it comes to data access and performance (we don't want to block when it's not necessary). For example, if we mark both methods as synchronized, we will uselessly block two concurrent get() calls which could perfectly well run without blocking.
So we want to block get() only if add() is in progress, and block add() only if at least one get() or another add() is in progress. Multiple concurrent get() executions should not block each other...
How do we do this?
UPDATE
In fact this is not a cache but just a use case I've come up with to describe the problem; the actual purpose is to create a singleton instance store...
For example, there is a Currency type which is only instantiated through its builder and is immutable; the builder itself, after verifying that the parameters passed in are valid, checks this so-called global cache in a static context to see if there is an instance already created... well, you got me...
This is not an enum use case, because the system will dynamically add new Currency, Market or even Exchange instances, which should all be loosely coupled and instantiated only once... (also to prevent heavy GC)
So to clarify the question... think of the general problem of concurrency, not the particular example.
I've found this link quite helpful: http://tutorials.jenkov.com/java-concurrency/read-write-locks.html
I guess there are already some lock types in the JDK for this purpose, but I'm not sure yet.
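There are indeed: a hedged sketch of the Cache from the question guarded by the JDK's ReentrantReadWriteLock, so that concurrent get() calls proceed in parallel while add() takes the lock exclusively:

class Cache {

    private static final Map<String, Object> cache = new HashMap<>();
    private static final ReadWriteLock lock = new ReentrantReadWriteLock();

    public static Object get(String id) {
        lock.readLock().lock();     // many readers at once
        try {
            return cache.get(id);
        } finally {
            lock.readLock().unlock();
        }
    }

    public static Object add(String id, Object element) {
        lock.writeLock().lock();    // exclusive: blocks readers and other writers
        try {
            Object existing = cache.get(id);
            if (existing != null) {
                return existing;
            }
            cache.put(id, element);
            return element;
        } finally {
            lock.writeLock().unlock();
        }
    }
}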
Actually I gave a talk on this just today at the FOSDEM conference in Brussels. See the slides here: http://www.slideshare.net/cruftex/cache2k-java-caching-turbo-charged-fosdem-2015
Basically you can use Google Guava; however, since Guava is a cache which uses LRU, a synchronized block is still needed. Something I am exploring in cache2k is an advanced eviction algorithm that needs no list manipulation for cache access, and hence no locks at all.
cache2k is on Maven Central; add cache2k-api and cache2k-core as dependencies and initialize the cache with:

cache = CacheBuilder.newCache(String.class, Object.class)
    .implementation(ClockProPlusCache.class)
    .build();
If you have only cache hits, cache2k is about 5x faster than Guava and 10x faster than EHCache. For your usage pattern, e.g. with the Currency type, you can run the cache in read-through configuration and add a cache source which is responsible for constructing the Currency instances.
So you don't necessarily need to look for a cache at all. For the currency example you don't need one, since there is a limited space of currency instances. If you want to do the same with a possibly unlimited space, the cache is the more universal solution, since you have to limit the resource consumption. One example I explored is using this for formatted dates. See: https://github.com/headissue/cache2k-benchmark/blob/master/zoo/src/test/java/org/cache2k/benchmark/DateFormattingBenchmark.java
For general questions on cache2k, feel free to post them on stack overflow.

Thread safe Java pool, with INSTANT READ

Writing a fully functional pool of Java objects using read/write locks is not a big problem.
The problem I see is that the READ operation will have to wait until the storage monitor (or something similar, depending on the model) is released, which really slows it down.
So, the following requirements should be met:
READ (or GET) operation should be INSTANT - using some key, the latest version of the object should be returned immediately, without waiting for any lock.
WRITE (CREATE/UPDATE) - may be queued, reasonably delayed in time, probably waiting for some storage lock.
Any code sample?
I didn't find a question that directly targets the issue.
It popped up in some discussions, but I couldn't find a question that was fully devoted to the problems of creating such a pool in Java.
When the modification of the data structure takes too long (for whatever reason), simply waiting and write-locking the structure will not be successful. You just cannot foresee when you will have enough time to perform the modification without blocking any reads.
The only thing you can do (or try to do) is reduce the time spent inside the write operation to a minimum. As @assylias stated, a CopyOnWrite* collection does this by cloning the data structure upon write operations and atomically activating the modified structure when the operation is complete.
With this, reads are held up at most for the duration of the clone operation plus the time for switching the reference. You can narrow that down to small parts of the data structure: if only the state of one object changes, you can modify a copy of that object and afterwards switch the reference in your larger data structure to that copy.
The other way around is to make that copy on or before read operations. Often you return a copy of an object via the API of your data structure anyway, so just "cache" that copy and, during modifications, let the readers access the cached copy. This is what database caches also do.
It depends on your model which is best for you. If you have few writes on data that can be copied easily, CopyOnWrite will probably perform best. If you have lots of writes, you are probably better off providing a single "read"/cached state of your structure and switching it from time to time.
AtomicReference<Some> datastructure = ...;

// copy on write
synchronized /* one writer */ void change(Object modification)
        throws CloneNotSupportedException {
    Some copy = datastructure.get().clone();
    apply(copy, modification);
    datastructure.set(copy); // readers atomically switch to the new version
}

Object search(Object select) {
    return datastructure.get().search(select);
}

// copy for read
AtomicReference<Some> cached = new AtomicReference<Some>(datastructure.get().clone());

synchronized void change(Object modification) throws CloneNotSupportedException {
    apply(datastructure.get(), modification);
    cached.set(datastructure.get().clone()); // publish a fresh snapshot for the readers
}

Object search(Object select) {
    return cached.get().search(select);
}
For both variants there is no wait when reading, apart from the time it takes to switch the reference.
In this case you can simply use a volatile variable to avoid locking on the reader side and keep the writes exclusive with a synchronized method. volatile will add little to no overhead to reads but writes will be a little slow. This might be a good solution depending on expected throughput and read/write ratio.
class Cache<K, V> {

    private volatile Map<K, V> cache = new HashMap<>(); // assuming a map is the right data structure

    public V get(K key) {
        return cache.get(key);
    }

    // synchronized writes for exclusive access
    public synchronized V put(K key, V value) {
        Map<K, V> copy = new HashMap<>(cache);
        V previous = copy.put(key, value);
        // the volatile write guarantees that this will be visible from the getter
        cache = copy;
        return previous;
    }
}
Here is a totally-lock-free Java object pool solution. FYI
http://daviddengcn.blogspot.com/2015/02/a-lock-free-java-object-pool.html

What is the name of this locking technique?

I've got a gigantic Trove map and a method that I need to call very often from multiple threads. Most of the time this method shall return true. The threads are doing heavy number crunching and I noticed that there was some contention due to the following method (it's just an example, my actual code is a bit different):
synchronized boolean containsSpecial() {
    return troveMap.contains(key);
}
Note that it's an "append only" map: once a key is added, it stays in there forever (which is important for what comes next, I think).
I noticed that by changing the above to:
boolean containsSpecial() {
    if (troveMap.contains(key)) {
        // most of the time (>90%) we shall pass here, dodging lock acquisition
        return true;
    }
    synchronized (this) {
        return troveMap.contains(key);
    }
}
I get a 20% speedup on my number crunching (verified on lots of runs, running during long times etc.).
Does this optimization look correct (knowing that once a key is there it shall stay there forever)?
What is the name for this technique?
EDIT
The code that updates the map is called way less often than the containsSpecial() method and looks like this (I've synchronized the entire method):
synchronized void addSpecialKeyValue( key, value ) {
....
}
This code is not correct.
Trove doesn't handle concurrent use itself; it's like java.util.HashMap in that regard. So, like HashMap, even seemingly innocent, read-only methods like containsKey() could throw a runtime exception or, worse, enter an infinite loop if another thread modifies the map concurrently. I don't know the internals of Trove, but with HashMap, rehashing when the load factor is exceeded, or removing entries can cause failures in other threads that are only reading.
If the operation takes a significant amount of time compared to lock management, using a read-write lock to eliminate the serialization bottleneck will improve performance greatly. In the class documentation for ReentrantReadWriteLock, there are "Sample usages"; you can use the second example, for RWDictionary, as a guide.
In this case, the map operations may be so fast that the locking overhead dominates. If that's the case, you'll need to profile on the target system to see whether a synchronized block or a read-write lock is faster.
Either way, the important point is that you can't safely remove all synchronization, or you'll have consistency and visibility problems.
It's called wrong locking ;-) Actually, it is some variant of the double-checked locking approach. And the original version of that approach is just plain wrong in Java.
Java threads are allowed to keep private copies of variables in their local memory (think: core-local cache of a multi-core machine). Any Java implementation is allowed to never write changes back into the global memory unless some synchronization happens.
So, it is very well possible that one of your threads has a local memory in which troveMap.contains(key) evaluates to true. Therefore, it never synchronizes and it never gets the updated memory.
Additionally, what happens when contains() sees an inconsistent memory state of the troveMap data structure?
Lookup the Java memory model for the details. Or have a look at this book: Java Concurrency in Practice.
This looks unsafe to me. Specifically, the unsynchronized calls will be able to see partial updates, either due to memory visibility (a previous put not getting fully published, since you haven't told the JMM it needs to be) or due to a plain old race. Imagine if TroveMap.contains has some internal variable that it assumes won't change during the course of contains. This code lets that invariant break.
Regarding the memory visibility, the problem with that isn't false negatives (you use the synchronized double-check for that), but that trove's invariants may be violated. For instance, if they have a counter, and they require that counter == someInternalArray.length at all times, the lack of synchronization may be violating that.
My first thought was to make troveMap's reference volatile, and to re-write the reference every time you add to the map:

synchronized (this) {
    troveMap.put(key, value);
    troveMap = troveMap;
}
That way, you're setting up a memory barrier such that anyone who reads the troveMap will be guaranteed to see everything that had happened to it before its most recent assignment -- that is, its latest state. This solves the memory issues, but it doesn't solve the race conditions.
Depending on how quickly your data changes, maybe a Bloom filter could help? Or some other structure that's more optimized for certain fast paths?
Under the conditions you describe, it's easy to imagine a map implementation for which you can get false negatives by failing to synchronize. The only way I can imagine obtaining false positives is an implementation in which key insertions are non-atomic and a partial key insertion happens to look like another key you are testing for.
You don't say what kind of map you have implemented, but the stock map implementations store keys by assigning references. According to the Java Language Specification:
Writes to and reads of references are always atomic, regardless of whether they are implemented as 32 or 64 bit values.
If your map implementation uses object references as keys, then I don't see how you can get in trouble.
EDIT
The above was written in ignorance of Trove itself. After a little research, I found the following post by Rob Eden (one of the developers of Trove) on whether Trove maps are concurrent:
Trove does not modify the internal structure on retrievals. However, this is an implementation detail not a guarantee so I can't say that it won't change in future versions.
So it seems like this approach will work for now but may not be safe at all in a future version. It may be best to use one of Trove's synchronized map classes, despite the penalty.
I think you would be better off with a ConcurrentHashMap which doesn't need explicit locking and allows concurrent reads
boolean containsSpecial() {
    return troveMap.contains(key);
}

void addSpecialKeyValue(key, value) {
    troveMap.putIfAbsent(key, value);
}
another option is using a ReadWriteLock which allows concurrent reads but no concurrent writes
ReadWriteLock rwlock = new ReentrantReadWriteLock();

boolean containsSpecial() {
    rwlock.readLock().lock();
    try {
        return troveMap.contains(key);
    } finally {
        rwlock.readLock().unlock();
    }
}

void addSpecialKeyValue(key, value) {
    rwlock.writeLock().lock();
    try {
        //...
        troveMap.put(key, value);
    } finally {
        rwlock.writeLock().unlock();
    }
}
Why reinvent the wheel?
Simply use ConcurrentHashMap.putIfAbsent.

Categories

Resources