How to "'validate and lazy swap' atomically" like CAS? - java

Here is the prototype of the function I want:
atomicReference.validateAndSwap(value -> isInvalid(value), () -> createValid());
It is assumed to be called from multiple threads.
The second lambda is called only when the first returns true.
The first lambda call (plus the second, if the first returns true) should be a single atomic operation.
Is it even possible to implement this without synchronized?
Are there ready-made solutions for similar functionality?
Is my way of thinking wrong? Am I missing something?

I’m not sure you mean the right thing when saying “First (plus second if first returns true) lambda calls should be a single atomic operation.” The point of atomic references is that evaluations of the update function may overlap and therefore must not interfere with each other, but the update will act as if it were atomic: when evaluations overlap, only one can succeed with CAS, and the other has to be repeated based on the new value.
If you want truly atomic evaluations, using a Lock or synchronized is unavoidable. If you have appropriate non-interfering functions and want to implement updates as-if atomic, it can be implemented like this:
Value old;
do old = atomicReference.get();
while(isInvalid(old) && !atomicReference.compareAndSet(old, createValid()));
Since in this specific case, the createValid() function does not depend on the old value, we could avoid repeated evaluation in the contended case:
Value old = atomicReference.get();
if(isInvalid(old)) {
    Value newValue = createValid();
    while(!atomicReference.compareAndSet(old, newValue)) {
        old = atomicReference.get();
        if(!isInvalid(old)) break;
    }
}
All of that assumes that the validity of an object cannot change in between; otherwise, locking or synchronizing is unavoidable.
Note that Java 8’s update methods follow the same principle, so you can write
atomicReference.updateAndGet(old -> isInvalid(old) ? createValid() : old);
to achieve the same. It also isn’t truly atomic, but it behaves as-if atomic as long as concurrent evaluations of the update function do not interfere with each other.
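For completeness, here is a minimal sketch (not from the original answer) of how the requested validateAndSwap could be packaged as a reusable helper on top of updateAndGet; the same as-if-atomic caveat applies, so both lambdas must be side-effect free:

import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Predicate;
import java.util.function.Supplier;

final class AtomicReferences {
    private AtomicReferences() {}

    // Replaces the current value with a freshly created one if it is invalid.
    // Not truly atomic: the predicate/supplier may run more than once under
    // contention, which is why they must be free of side effects.
    static <V> V validateAndSwap(AtomicReference<V> ref,
                                 Predicate<V> isInvalid,
                                 Supplier<V> createValid) {
        return ref.updateAndGet(old -> isInvalid.test(old) ? createValid.get() : old);
    }
}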

Related

Why is this code thread safe?

I am preparing for the OCP exam and I found this question in a mock exam:
Given:
class Calculator {
    private AtomicInteger i = new AtomicInteger();

    public void add(int value) {
        int oldValue = i.get();
        int newValue = oldValue + value;
        System.out.print(i.compareAndSet(oldValue, newValue));
    }

    public int getValue() {
        return i.get();
    }
}
What could you do to make this class thread safe?
And surprisingly, to me, the answer is:
"Class Calculator is thread-safe"
It must be that I have not understood the concept correctly. To my understanding, a class is thread-safe when all its methods work as expected under thread concurrency. Now, if two threads call getValue() at the same time, then each calls add() with a different value, and then both call getValue() again, the 'second' thread may not see the value it passed reflected in the result.
I understand that oldValue and newValue, as local variables, are stored on each thread's method stack, but that doesn't prevent the second call to compareAndSet from finding that oldValue is no longer the current value, in which case it won't add newValue.
What am I missing here?
According to JCIP
A class is thread-safe if it behaves correctly when accessed from multiple threads, regardless of the scheduling or interleaving of the execution of those threads by the runtime environment, and with no additional synchronization or other coordination on the part of the calling code.
Although no definition of thread-safety and no specification of the class are given, in my opinion, by any sane definition of an add method in a Calculator class, it is "correct" only if the value of the AtomicInteger i is increased in every case, "regardless of the scheduling or interleaving of the execution".
Therefore, in my opinion, the class is not thread-safe by this definition.
There is clearly a problem with the term "thread-safe" here, in that it isn't absolute. What is considered thread-safe depends on what you expect the program to do. And in most real-world applications you wouldn't consider this code thread-safe.
However the JLS formally specifies a different concept:
A program is correctly synchronized if and only if all sequentially consistent executions are free of data races.
If a program is correctly synchronized, then all executions of the program will appear to be sequentially consistent.
Correctly synchronized is a precisely defined, objective condition and according to that definition the code is correctly synchronized because every access to i is in a happens-before relationship with every other access, and that's enough to satisfy the criteria.
The fact that the exact order of reads/writes depends on unpredictable timing is a different problem, outside the realms of correct synchronization (but well within what most people would call thread safety).
The add method is going to do one of two things:
Add value to the value of i, and print true.
Do nothing and print false.
The theoretically sound1 definitions of thread-safe that I have seen say something like this:
Given a set of requirements, a program is thread-safe with respect to those requirements if it is correct with respect to those requirements for all possible executions in a multi-threaded environment.
In this case, we don't have a clear statement of requirements, but if we infer that the intended behavior is as above, then that class is thread-safe with respect to the inferred requirements.
Now if the requirements were that add always added value to i, then that implementation does not meet the requirement. In that case, you could argue that the method is not thread-safe. (In a single-threaded use-case add would always work, but in a multi-threaded use-case the add method could occasionally fail to meet the requirement ... and hence would be non-thread-safe.)
1 - By contrast, the Wikipedia description (seen on 2016-01-17) is this: "A piece of code is thread-safe if it only manipulates shared data structures in a manner that guarantees safe execution by multiple threads at the same time." The problem is that it doesn't say what "safe execution" means. Eric Lippert's 2009 blog post "What is this thing you call thread-safe" is really pertinent.
It's thread-safe because compareAndSet is thread-safe and that's the only part of the method that modifies shared state in the object.
It doesn't make a difference how many threads enter that method body at the same time. The first one to reach the end and call compareAndSet "wins" and gets to change the value, while the others find the value has changed on them and compareAndSet returns false. It never leaves the system in an indeterminate state, though callers would probably have to handle the false outcome in any realistic scenario.
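As an aside, going beyond the mock exam's code: if callers do need every addition to take effect, a minimal retrying variant of add would loop until its CAS succeeds, which is equivalent to calling i.addAndGet(value):

public void add(int value) {
    int oldValue;
    do {
        oldValue = i.get();
    } while (!i.compareAndSet(oldValue, oldValue + value)); // retry until no other thread interferes
}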

How is atomicity achieved in the classes of the java.util.concurrent.atomic package?

I was going through the source code of java.util.concurrent.atomic.AtomicInteger to find out how atomicity is achieved by the atomic operations the class provides. For instance, the source of the AtomicInteger.getAndIncrement() method is as follows:
public final int getAndIncrement() {
    for (;;) {
        int current = get();
        int next = current + 1;
        if (compareAndSet(current, next))
            return current;
    }
}
I am not able to understand the purpose of writing the sequence of operations inside an infinite for loop. Does it serve any special purpose in the Java Memory Model (JMM)? Please help me find a descriptive understanding. Thanks in advance.
I am not able to understand the purpose of writing the sequence of operations inside an infinite for loop.
The purpose of this code is to ensure that the volatile field gets updated appropriately without the overhead of a synchronized lock. Unless there are a large number of threads all competing to update this same field, the loop will most likely spin only a few times to accomplish this.
The volatile keyword provides visibility and memory-synchronization guarantees, but it does not by itself make a compound operation (test-then-set) atomic. If you are testing and then setting a volatile field, there are race conditions when multiple threads try to perform the same operation at the same time. In this case, if multiple threads were trying to increment the AtomicInteger at the same time, you might miss one of the increments. The concurrent code here uses the spin loop and the underlying compareAndSet method to make sure that the volatile int is only updated to 4 (for example) if it is still equal to 3. For example:
t1 gets the atomic-int and it is 0.
t2 gets the atomic-int and it is 0.
t1 adds 1 to it
t1 atomically tests to make sure it is 0, it is, and stores 1.
t2 adds 1 to it
t2 atomically tests to make sure it is 0, it is not, so it has to spin and try again.
t2 gets the atomic-int and it is 1.
t2 adds 1 to it
t2 atomically tests to make sure it is 1, it is, and stores 2.
Does it serve any special purpose in the Java Memory Model (JMM)?
No, it serves the purpose of the class and method definitions, and it uses the JMM and the language guarantees around volatile to achieve its purpose. The JMM defines what the language does with the synchronized, volatile, and other keywords and how multiple threads interact with cached and central memory. This is mostly about native-code interactions with the operating system and hardware and is rarely, if ever, about Java code.
It is the compareAndSet(...) method which gets closer to the JMM by calling into the Unsafe class, which consists mostly of native methods with some wrappers:
public final boolean compareAndSet(int expect, int update) {
    return unsafe.compareAndSwapInt(this, valueOffset, expect, update);
}
I am not able to understand the purpose of writing the sequence of operations inside an infinite for loop.
To understand why it is in an infinite loop, I find it helpful to understand what compareAndSet does and how it may return false. From its documentation:
Atomically sets the value to the given updated value if the current value == the expected value.
Parameters:
expect - the expected value
update - the new value
Returns:
true if successful. False return indicates that the actual value was not equal to the expected value.
So you read the Returns note and ask: how is that possible?
Suppose two threads invoke incrementAndGet at close to the same time, and both enter and see the value current == 1. Both threads will create a thread-local next == 2 and try to set it via compareAndSet. Only one thread will win, as documented, and the thread that loses must try again.
This is how CAS works: you attempt to change the value; if you fail, you try again; if you succeed, you continue on.
Now, simply declaring the field as volatile will not work, because incrementing is not atomic. So something like this is not safe from the scenario I explained:
volatile int count = 0;

public int incrementAndGet(){
    return ++count; // may return the same number more than once.
}
Java's compareAndSet is based on CPU compare-and-swap (CAS) instructions; see http://en.wikipedia.org/wiki/Compare-and-swap. It compares the contents of a memory location to a given value and, only if they are the same, modifies the contents of that memory location to a given new value.
In the case of incrementAndGet, we read the current value and call compareAndSet(current, current + 1). If it returns false, it means that another thread interfered and changed the current value, so our attempt failed and we need to repeat the whole cycle until it succeeds.
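To see the retry loop doing its job, here is a minimal sketch (class and thread counts are mine) that hammers one AtomicInteger from several threads; the CAS loop guarantees that no increment is lost:

import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    public static void main(String[] args) throws InterruptedException {
        AtomicInteger counter = new AtomicInteger();
        Thread[] threads = new Thread[4];
        for (int t = 0; t < threads.length; t++) {
            threads[t] = new Thread(() -> {
                for (int j = 0; j < 100_000; j++) {
                    counter.getAndIncrement(); // spins internally on contention
                }
            });
            threads[t].start();
        }
        for (Thread thread : threads) {
            thread.join();
        }
        System.out.println(counter.get()); // always prints 400000
    }
}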

Is this incrementAndGet thread-safe? It seems to pull the object from Ehcache

This servlet seems to fetch an object from Ehcache, from an Element which holds this object: http://code.google.com/p/adwhirl/source/browse/src/obj/HitObject.java?repo=servers-mobile
It then goes on to increment the counter, which is an AtomicLong:
http://code.google.com/p/adwhirl/source/browse/src/servlet/MetricsServlet.java?repo=servers-mobile#174
// Atomically record the hit
if (i_hitType == AdWhirlUtil.HITTYPE.IMPRESSION.ordinal()) {
    ho.impressions.incrementAndGet();
}
else {
    ho.clicks.incrementAndGet();
}
This doesn't seem thread-safe to me, as multiple threads could be fetching from the cache, and if two increment at the same time you might lose a click/impression count.
Do you agree that this is not thread-safe?
AtomicLong and AtomicInteger use a CAS internally -- compare and set (or compare-and-swap). The idea is that you tell the CAS two things: the value you expect the long/int to have, and the value you want to update it to. If the long/int has the value you say it should have, the CAS will atomically make the update and return true; otherwise, it won't make the update, and it'll return false. Many modern chips support CAS very efficiently at the machine-code level; if the JVM is running in an environment that doesn't have a CAS, it can use mutexes (what Java calls synchronization) to implement the CAS. Regardless, once you have a CAS, you can safely implement an atomic increment via this logic (in pseudocode):
long incrementAndGet(atomicLong, byIncrement)
    do
        oldValue = atomicLong.get()             // 1
        newValue = oldValue + byIncrement
    while ! atomicLong.cas(oldValue, newValue)  // 2
    return newValue
If another thread comes in and does its own increment between lines // 1 and // 2, the CAS will fail and the loop will try again. Otherwise, the CAS will succeed.
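For reference, a direct Java rendering of that pseudocode might look as follows (AtomicLong already provides addAndGet, so this hand-rolled version exists only to make the loop concrete):

import java.util.concurrent.atomic.AtomicLong;

static long incrementAndGet(AtomicLong atomicLong, long byIncrement) {
    long oldValue, newValue;
    do {
        oldValue = atomicLong.get();                           // 1
        newValue = oldValue + byIncrement;
    } while (!atomicLong.compareAndSet(oldValue, newValue));   // 2
    return newValue;
}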
There's a gamble in this kind of approach: if there's low contention, a CAS is faster than a synchronized block and isn't as likely to cause a thread context switch. But if there's a lot of contention, some threads are going to have to go through multiple loop iterations per increment, which obviously amounts to wasted work. Generally speaking, though, the CAS-based incrementAndGet is going to be faster under most common loads.
The increment is thread-safe, since AtomicInteger and family guarantee that. But there is a problem with the insertion into and fetching from the cache, where two (or more) HitObjects could be created and inserted. That could cause some hits to be lost the first time a given HitObject is accessed. As @denis.solonenko has pointed out, there is already a TODO in the code to fix this.
However, I'd like to point out that this code only suffers from a concurrency problem on first access to a given HitObject. Once you have the HitObject in the cache (and no more threads are creating or inserting it), the code is perfectly thread-safe. So this is only a very limited concurrency problem, and that's probably the reason they have not yet fixed it.
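A minimal sketch of a fix for that first-access race (cache type, key type, and the HitObject constructor are assumed for illustration): if two threads miss the cache at the same time, putIfAbsent makes them converge on a single HitObject instead of counting hits on two different instances:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

ConcurrentMap<String, HitObject> cache = new ConcurrentHashMap<>();

HitObject getOrCreate(String key) {
    HitObject ho = cache.get(key);
    if (ho == null) {
        HitObject fresh = new HitObject();                 // hypothetical constructor
        HitObject existing = cache.putIfAbsent(key, fresh);
        ho = (existing != null) ? existing : fresh;        // lose the race gracefully
    }
    return ho;
}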

What is the name of this locking technique?

I've got a gigantic Trove map and a method that I need to call very often from multiple threads. Most of the time this method shall return true. The threads are doing heavy number crunching, and I noticed that there was some contention due to the following method (it's just an example; my actual code is a bit different):
synchronized boolean containsSpecial() {
    return troveMap.contains(key);
}
Note that it's an "append only" map: once a key is added, it stays in there forever (which is important for what comes next, I think).
I noticed that by changing the above to:
boolean containsSpecial() {
    if ( troveMap.contains(key) ) {
        // most of the time (>90%) we shall pass here, dodging lock-acquisition
        return true;
    }
    synchronized (this) {
        return troveMap.contains(key);
    }
}
I get a 20% speedup in my number crunching (verified over lots of runs of long duration, etc.).
Does this optimization look correct (knowing that once a key is there it shall stay there forever)?
What is the name for this technique?
EDIT
The code that updates the map is called way less often than the containsSpecial() method and looks like this (I've synchronized the entire method):
synchronized void addSpecialKeyValue( key, value ) {
    ....
}
This code is not correct.
Trove doesn't handle concurrent use itself; it's like java.util.HashMap in that regard. So, like HashMap, even seemingly innocent, read-only methods like containsKey() could throw a runtime exception or, worse, enter an infinite loop if another thread modifies the map concurrently. I don't know the internals of Trove, but with HashMap, rehashing when the load factor is exceeded, or removing entries can cause failures in other threads that are only reading.
If the operation takes a significant amount of time compared to lock management, using a read-write lock to eliminate the serialization bottleneck will improve performance greatly. In the class documentation for ReentrantReadWriteLock, there are "Sample usages"; you can use the second example, for RWDictionary, as a guide.
In this case, the map operations may be so fast that the locking overhead dominates. If that's the case, you'll need to profile on the target system to see whether a synchronized block or a read-write lock is faster.
Either way, the important point is that you can't safely remove all synchronization, or you'll have consistency and visibility problems.
It's called wrong locking ;-) Actually, it is some variant of the double-checked locking approach. And the original version of that approach is just plain wrong in Java.
Java threads are allowed to keep private copies of variables in their local memory (think: core-local cache of a multi-core machine). Any Java implementation is allowed to never write changes back into the global memory unless some synchronization happens.
So, it is very well possible that one of your threads has a local memory in which troveMap.contains(key) evaluates to true. Therefore, it never synchronizes and it never gets the updated memory.
Additionally, what happens when contains() sees an inconsistent memory state of the troveMap data structure?
Look up the Java Memory Model for the details, or have a look at this book: Java Concurrency in Practice.
This looks unsafe to me. Specifically, the unsynchronized calls will be able to see partial updates, either due to memory visibility (a previous put not getting fully published, since you haven't told the JMM it needs to be) or due to a plain old race. Imagine if TroveMap.contains has some internal variable that it assumes won't change during the course of contains. This code lets that invariant break.
Regarding the memory visibility, the problem with that isn't false negatives (you use the synchronized double-check for that), but that trove's invariants may be violated. For instance, if they have a counter, and they require that counter == someInternalArray.length at all times, the lack of synchronization may be violating that.
My first thought was to make troveMap's reference volatile, and to re-write the reference every time you add to the map:
synchronized (this) {
    troveMap.put(key, value);
    troveMap = troveMap; // self-assignment: a volatile write that re-publishes the reference
}
That way, you're setting up a memory barrier such that anyone who reads the troveMap will be guaranteed to see everything that had happened to it before its most recent assignment -- that is, its latest state. This solves the memory issues, but it doesn't solve the race conditions.
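For this trick to work, the field itself must be declared volatile; a minimal sketch, with the Trove map type assumed for illustration:

// The reference must be volatile so that the self-assignment above is a
// publishing write that subsequent readers synchronize with.
private volatile THashMap<String, Object> troveMap = new THashMap<>();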
Depending on how quickly your data changes, maybe a Bloom filter could help? Or some other structure that's more optimized for certain fast paths?
Under the conditions you describe, it's easy to imagine a map implementation for which you can get false negatives by failing to synchronize. The only way I can imagine obtaining false positives is an implementation in which key insertions are non-atomic and a partial key insertion happens to look like another key you are testing for.
You don't say what kind of map you have implemented, but the stock map implementations store keys by assigning references. According to the Java Language Specification:
Writes to and reads of references are always atomic, regardless of whether they are implemented as 32 or 64 bit values.
If your map implementation uses object references as keys, then I don't see how you can get in trouble.
EDIT
The above was written in ignorance of Trove itself. After a little research, I found the following post by Rob Eden (one of the developers of Trove) on whether Trove maps are concurrent:
Trove does not modify the internal structure on retrievals. However, this is an implementation detail not a guarantee so I can't say that it won't change in future versions.
So it seems like this approach will work for now but may not be safe at all in a future version. It may be best to use one of Trove's synchronized map classes, despite the penalty.
I think you would be better off with a ConcurrentHashMap, which doesn't need explicit locking and allows concurrent reads:
ConcurrentMap<Key, Value> map = new ConcurrentHashMap<>();

boolean containsSpecial() {
    return map.containsKey(key);   // note: on ConcurrentHashMap, contains(Object) tests values, so use containsKey
}

void addSpecialKeyValue( key, value ) {
    map.putIfAbsent(key, value);
}
Another option is using a ReadWriteLock, which allows concurrent reads but no concurrent writes:
ReadWriteLock rwlock = new ReentrantReadWriteLock();

boolean containsSpecial() {
    rwlock.readLock().lock();
    try {
        return troveMap.contains(key);
    } finally {
        rwlock.readLock().unlock();
    }
}

void addSpecialKeyValue( key, value ) {
    rwlock.writeLock().lock();
    try {
        //...
        troveMap.put(key, value);
    } finally {
        rwlock.writeLock().unlock();
    }
}
Why reinvent the wheel? Simply use ConcurrentHashMap.putIfAbsent.

Using putIfAbsent like a short circuit operator

Is it possible to use putIfAbsent or any of its equivalents like a short-circuit operator?
myConcurrentMap.putIfAbsent(key,calculatedValue)
I want that, if there is already a calculatedValue for the key, it shouldn't be calculated again.
By default, putIfAbsent would still do the calculation every time, even though it will not actually store the value again.
Java doesn't allow any form of short-circuiting save the built-in cases, sadly - all method calls result in the arguments being fully evaluated before control passes to the method. Thus you couldn't do this with "normal" syntax; you'd need to manually wrap up the calculation inside a Callable or similar, and then explicitly invoke it.
In this case I find it difficult to see how it could work anyway, though. putIfAbsent works on the basis of being an atomic, non-blocking operation. If it were to do what you want, the sequence of events would roughly be:
Check if key exists in the map (this example assumes it doesn't)
Evaluate calculatedValue (probably expensive, given the context of the question)
Put result in map
It would be impossible for this to be non-blocking if the value didn't already exist at step two - two different threads calling this method at the same time could only perform correctly if blocking happened. At this point you may as well just use synchronized blocks with the flexibility of implementation that that entails; you can definitely implement what you're after with some simple locking, something like the following:
private final Map<K, V> map = ...;

public void myAdd(K key, Callable<V> valueComputation) throws Exception {
    synchronized (map) {
        if (!map.containsKey(key)) {
            map.put(key, valueComputation.call()); // call() may throw, hence the throws clause
        }
    }
}
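As an aside that post-dates this answer: since Java 8, ConcurrentHashMap.computeIfAbsent provides exactly the requested short-circuit, and for ConcurrentHashMap the computation is performed atomically. A minimal sketch (Key, Value, and calculateValue are placeholders):

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

ConcurrentMap<Key, Value> map = new ConcurrentHashMap<>();

Value getOrCompute(Key key) {
    // The mapping function runs only when the key is absent.
    return map.computeIfAbsent(key, k -> calculateValue(k));
}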
You can put Future<V> objects into the map. Using putIfAbsent, only one Future will be there per key, and the computation of the final value is performed by calling Future.get() (e.g. via the FutureTask + Callable classes). Check out Java Concurrency in Practice for a discussion of this technique. (Example code is also in this question here on SO.)
This way, your value is computed only once, and all threads get the same value. Access to the map isn't blocked, although access to the value (through Future.get()) will block until that value has been computed by one of the threads.
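A minimal sketch of that JCIP-style memoizer (simplified; the book's version also handles cancellation and exceptions more carefully): putIfAbsent guarantees a single FutureTask per key, so the Callable runs at most once:

import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.Future;
import java.util.concurrent.FutureTask;

class Memoizer<K, V> {
    private final ConcurrentMap<K, Future<V>> cache = new ConcurrentHashMap<>();

    V compute(K key, Callable<V> computation) throws Exception {
        Future<V> f = cache.get(key);
        if (f == null) {
            FutureTask<V> task = new FutureTask<>(computation);
            f = cache.putIfAbsent(key, task);
            if (f == null) {
                f = task;
                task.run(); // we won the race; compute on this thread
            }
        }
        return f.get(); // blocks until the winning thread has computed the value
    }
}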
You could consider using a Guava ComputingMap:
ConcurrentMap<Key, Value> myConcurrentMap = new MapMaker()
    .makeComputingMap(
        new Function<Key, Value>() {
            public Value apply(Key key) {
                Value calculatedValue = calculateValue(key);
                return calculatedValue;
            }
        });
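Note that makeComputingMap was removed from later Guava releases; the modern equivalent is a LoadingCache. A minimal sketch under the same assumed Key/Value/calculateValue names:

import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;

LoadingCache<Key, Value> cache = CacheBuilder.newBuilder()
        .build(new CacheLoader<Key, Value>() {
            @Override
            public Value load(Key key) {
                return calculateValue(key);
            }
        });
// cache.getUnchecked(key) computes the value at most once per key.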
