Can somebody explain .... This is the official Java AtomicBoolean getAndSet method's definition:
public final boolean getAndSet(boolean newValue) {
    for (;;) {
        boolean current = get();
        if (compareAndSet(current, newValue))
            return current;
    }
}
In Java 8, the source code was slightly restructured, making it easier to understand:
public final boolean getAndSet(boolean newValue) {
    boolean prev;
    do {
        prev = get();
    } while (!compareAndSet(prev, newValue));
    return prev;
}
As you can see, compareAndSet, whose boolean return value comes from the native function Unsafe.compareAndSwapInt, might fail. In that case, the operation is simply repeated.
According to the documentation of Unsafe.compareAndSwapInt,
Atomically update Java variable to x if it is currently holding expected.
Returns:
true if successful
the call will fail if the value of the AtomicBoolean has changed between the call to get() and the comparison inside Unsafe.compareAndSwapInt. This is usually not the case, but when it happens, the loop polls the current value once more and tries again.
Clearly, it is not an infinite loop. The loop just has its exit condition inside the body:
return current;
In general, this is the typical idiom used in optimistic, lock-free atomic operations. A compare-and-swap (CAS) operation is retried until it succeeds, and it succeeds as soon as it is not contended by another thread. More precisely, the exit condition is met whenever the return value of get() matches the current value as observed by compareAndSet(). Failing to meet this condition is rare and happens only under contention.
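As a rough illustration of the idiom, here is a minimal, self-contained sketch (the class and counter names are mine, not from the JDK) that re-implements the same read/compute/CAS loop on an AtomicInteger and counts how often the CAS actually has to retry under contention:
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicLong;

public class CasRetryDemo {
    static final AtomicInteger value = new AtomicInteger();
    static final AtomicLong casRetries = new AtomicLong(); // counts failed compareAndSet attempts

    // Same idiom as getAndSet/incrementAndGet: read, compute, CAS, retry on failure.
    static int incrementWithRetryCount() {
        for (;;) {
            int current = value.get();
            if (value.compareAndSet(current, current + 1))
                return current + 1;
            casRetries.incrementAndGet(); // another thread won the race; poll again
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 100_000; j++) incrementWithRetryCount();
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        System.out.println("final value = " + value.get());      // always 400000
        System.out.println("CAS retries = " + casRetries.get()); // typically a small fraction of that
    }
}
Even with four threads hammering the same variable, the retry count is usually far smaller than the number of successful updates, which is why the loop is not a practical liveness concern.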
I have an infinite queue of promises (CompletableFuture) as input.
The goal is to run the promises one by one until a condition on the result is fulfilled, then stop processing and return the result from the current promise.
My iterative solution looks like this:
volatile boolean shouldKeepReading = true;
....
CompletableFuture<Integer> result = promisesQueue.poll().get();
while (shouldKeepReading) {
    result = result.thenCompose(res -> {
        if (conditionPass(res)) {
            shouldKeepReading = false;
            return CompletableFuture.completedFuture(0);
        } else {
            if (shouldKeepReading) {
                return promisesQueue.poll().get();
            } else {
                return CompletableFuture.completedFuture(0);
            }
        }
    });
}
I used an infinite loop with a volatile flag to control processing. volatile guarantees memory visibility to all readers. Once the condition is met, the control flag is set to false to stop processing.
I used a double check before reading the next item:
if (shouldKeepReading) {
    return promisesQueue.poll().get();
The code seems to work correctly, but I noticed that the volatile keyword is not needed here; it doesn't change the processing. Why? Have I missed something?
Do you see any problems with this code?
The HotSpot JVM is rather conservative: it is all too easy to reproducibly see writes made by other threads as a side effect of other, unrelated reads and writes that carry stronger memory guarantees.
For example, in your case thenCompose checks the completion status of the future, whereas the implementation-specific caller of your function changes that status. This may appear to have the desired effect even when the status is “not completed”, in which case there is no formal happens-before relationship, or when thenApply is actually called on the next chained future, which also doesn’t establish a happens-before relationship, since it is a different variable.
In other words, it may appear to work with this JVM implementation without volatile but is not guaranteed, so you should never rely on such behavior.
Even worse, your code is not guaranteed to work even with volatile.
The basic shape of your code is
CompletableFuture<Integer> result = …
while (shouldKeepReading) {
    result = result.thenCompose(…);
}
which implies that as long as the initial future is not already completed, this loop may chain an arbitrary number of dependent actions until the completion of the dependency chain manages to catch up. The system load caused by this loop may even prevent the chain from catching up, until encountering an OutOfMemoryError.
As long as the completion chain manages to catch up, you don’t notice a difference, as all chained actions evaluate to the same result, zero, as soon as shouldKeepReading became false.
Since the original future originates from promisesQueue.poll().get() outside the scope, we may simulate a higher workload by inserting a small delay. Then, add a counter to see what the end result doesn’t tell, e.g.
AtomicInteger chainedOps = new AtomicInteger();
CompletableFuture<Integer> result = promisesQueue.poll().get();
result = result.whenCompleteAsync(
    (x, y) -> LockSupport.parkNanos(TimeUnit.SECONDS.toNanos(2)));
while (shouldKeepReading) {
    result = result.thenCompose(res -> {
        chainedOps.incrementAndGet();
        if (conditionPass(res)) {
            shouldKeepReading = false;
            return CompletableFuture.completedFuture(0);
        } else {
            if (shouldKeepReading) {
                return promisesQueue.poll().get();
            } else {
                return CompletableFuture.completedFuture(0);
            }
        }
    });
}
result.join();
System.out.println(chainedOps.get() + " chained ops");
On my machine, the loop easily chains more than five million actions, even when conditionPass returns true for the very first result.
The solution is quite simple: use neither a flag variable nor a loop:
result = result.thenCompose(new Function<Integer, CompletionStage<Integer>>() {
    @Override
    public CompletionStage<Integer> apply(Integer res) {
        // for testing, do chainedOps.incrementAndGet();
        return conditionPass(res)? CompletableFuture.completedFuture(0):
            promisesQueue.poll().get().thenCompose(this);
    }
});
This calls thenCompose only when the condition is not fulfilled, hence, never chains more actions than necessary. Since it requires the function itself to be chained via thenCompose(this), the lambda has to be replaced by an anonymous inner class. If you don’t like this, you may resort to a recursive solution:
CompletableFuture<Integer> retryPoll() {
    CompletableFuture<Integer> result = promisesQueue.poll().get();
    return result.thenComposeAsync(res ->
        conditionPass(res)? CompletableFuture.completedFuture(0): retryPoll());
}
It’s remarkably simple here, as the retry doesn’t depend on the result of the previous evaluation (you’d need to introduce parameters otherwise), but on the changes promisesQueue.poll().get() makes to the program’s state.
This method uses thenComposeAsync to avoid deep recursions if there is a large number of already completed futures whose result is rejected by conditionPass. If you know for sure that conditionPass will succeed after a rather small amount of retries, you can change thenComposeAsync to thenCompose.
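For completeness, here is a hedged sketch of how retryPoll might be driven. The question doesn't show the element type of promisesQueue, so a Queue<Supplier<CompletableFuture<Integer>>> and a trivial conditionPass are assumed purely for illustration:
import java.util.Queue;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.function.Supplier;

public class RetryPollSketch {
    // Assumption: each queue element supplies the next promise; the original element type is not shown.
    final Queue<Supplier<CompletableFuture<Integer>>> promisesQueue = new ConcurrentLinkedQueue<>();

    boolean conditionPass(Integer res) {
        return res != null && res >= 42; // placeholder condition
    }

    CompletableFuture<Integer> retryPoll() {
        CompletableFuture<Integer> result = promisesQueue.poll().get();
        return result.thenComposeAsync(res ->
            conditionPass(res) ? CompletableFuture.completedFuture(0) : retryPoll());
    }

    public static void main(String[] args) {
        RetryPollSketch sketch = new RetryPollSketch();
        // Enqueue a few already-completed promises; only the last one passes the condition.
        sketch.promisesQueue.add(() -> CompletableFuture.completedFuture(1));
        sketch.promisesQueue.add(() -> CompletableFuture.completedFuture(7));
        sketch.promisesQueue.add(() -> CompletableFuture.completedFuture(42));
        System.out.println(sketch.retryPoll().join()); // prints 0 once the condition is met
    }
}
Exactly one new stage is chained per rejected result, so the number of chained actions never exceeds the number of polled promises.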
I pasted some code about Java concurrency:
public class ValueLatch<T> {
    @GuardedBy("this") private T value = null;
    private final CountDownLatch done = new CountDownLatch(1);

    public boolean isSet() {
        return (done.getCount() == 0);
    }

    public synchronized void setValue(T newValue) {
        if (!isSet()) {
            value = newValue;
            done.countDown();
        }
    }

    public T getValue() throws InterruptedException {
        done.await();
        synchronized (this) {
            return value;
        }
    }
}
Why does return value; need to be synchronized???
Is the return statement not atomic??
The return does not need to be synchronized. Since CountDownLatch.countDown() is not called until after the value is set for the last time, CountDownLatch.await() ensures that value is stable before it is read and returned.
The developer who wrote this was probably not quite sure of what he was doing (concurrency is difficult and dangerous) or, more likely, his use of the GuardedBy annotation on value caused his build system to emit a warning on the return, and some other developer synchronized it unnecessarily just to make the warning go away.
I say 'some other developer', because this class otherwise seems to be specifically designed to allow getValue() to proceed without locking once the value has been set.
The return statement needs to perform a read operation over value.
The read operation is atomic for most primitives, but you're dealing with a generic, meaning you won't know value's type.
For that reason, the return should be synchronized.
return value does not need to be synchronized:
reads of references are atomic according to the JLS: "Writes to and reads of references are always atomic, ..."
each thread reading value is guaranteed to see its latest value, because according to the Java Memory Model value = newValue happens-before done.countDown(), which happens-before done.await(), which happens-before return value. By transitivity, value = newValue thus happens-before return value, as the sketch below demonstrates.
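To make the happens-before chain concrete, here is a minimal demonstration using the ValueLatch class from the question (the thread setup and the sleep are mine, purely illustrative):
public class ValueLatchDemo {
    public static void main(String[] args) throws InterruptedException {
        ValueLatch<String> latch = new ValueLatch<>();

        Thread reader = new Thread(() -> {
            try {
                // getValue() blocks in done.await() until setValue() has counted down,
                // so the read of value happens-after the write in setValue().
                System.out.println("saw: " + latch.getValue());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        reader.start();

        Thread.sleep(100);       // let the reader block first (for illustration only)
        latch.setValue("hello"); // value = newValue happens-before done.countDown()
        reader.join();           // prints "saw: hello"
    }
}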
I am looking for something like AtomicInteger or LongAdder that will:
Increment if value is less than MAX where MAX is some user-defined value.
Return a value indicating whether the atomic was incremented.
Use-case:
I have a queue of tasks.
Only MAX tasks should run concurrently.
When a new task is added to the queue, I want to run it if the number of ongoing tasks is less than MAX.
The reason I can't use AtomicInteger or LongAdder is that they only allow you to compare against a specific value instead of a range of values.
Clarification: I don't want the solution to actually execute the task. My use-case involves passing network requests to Jetty. It uses a single thread to drive multiple network requests. Any solution that fires up an Executor defeats this purpose because then I end up with one thread per network request.
Andy Turner provided an excellent answer, but I find this solution more readable. Essentially, all we need is new Semaphore(MAX) and Semaphore.tryAcquire().
If you dig into the source code of Semaphore, you will find that the implementation is similar to Andy's answer.
Here is some sample code:
Semaphore semaphore = new Semaphore(MAX);

// ... much later ...

public void addTask(Runnable task)
{
    if (semaphore.tryAcquire())
        task.run();
    else
        queue.add(task);
}

public void afterTaskComplete(Runnable task)
{
    semaphore.release();
}
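The snippet leaves open who runs the parked tasks. A possible completion of the cycle, sketched under the assumption that afterTaskComplete is invoked whenever a task finishes (the queue type and the MAX constant are mine; in the single-threaded Jetty use-case from the question the check-then-act windows here are not a concern):
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.Semaphore;

public class BoundedTaskGate {
    private static final int MAX = 4; // assumed limit for this sketch
    private final Semaphore semaphore = new Semaphore(MAX);
    private final Queue<Runnable> queue = new ConcurrentLinkedQueue<>();

    public void addTask(Runnable task) {
        if (semaphore.tryAcquire())
            task.run();        // a permit was free: start the task immediately
        else
            queue.add(task);   // at the limit: park the task for later
    }

    public void afterTaskComplete() {
        semaphore.release();   // give back the permit held by the finished task
        Runnable next = queue.poll();
        if (next != null)
            addTask(next);     // try to start a parked task, if any
    }
}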
Use compareAndSet():
boolean incrementToTheMax(AtomicInteger atomicInt, int max) {
    while (true) {
        int value = atomicInt.get();
        if (value >= max) {
            // The counter has already reached max, so don't increment it.
            return false;
        }
        if (atomicInt.compareAndSet(value, value + 1)) {
            // If we reach here, the atomic integer still had the value "value",
            // and so we incremented it.
            return true;
        }
        // If we reach here, some other thread atomically updated the value.
        // Rats! Loop, and try to increment again.
    }
}
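A quick way to convince yourself that the bound holds under contention is to let several threads race for a small number of slots; no matter how they interleave, at most max increments succeed (the class and variable names here are mine):
import java.util.concurrent.atomic.AtomicInteger;

public class IncrementToMaxDemo {
    public static void main(String[] args) throws InterruptedException {
        AtomicInteger counter = new AtomicInteger();
        AtomicInteger accepted = new AtomicInteger(); // how many increments succeeded
        int max = 5;

        Thread[] threads = new Thread[8];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                if (incrementToTheMax(counter, max))
                    accepted.incrementAndGet();
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();

        // With 8 competing threads and max = 5, this always prints "accepted = 5, counter = 5".
        System.out.println("accepted = " + accepted.get() + ", counter = " + counter.get());
    }

    static boolean incrementToTheMax(AtomicInteger atomicInt, int max) {
        while (true) {
            int value = atomicInt.get();
            if (value >= max) return false;
            if (atomicInt.compareAndSet(value, value + 1)) return true;
        }
    }
}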
The following code works without a race condition:
AtomicInteger atomicInt = new AtomicInteger(0);
ExecutorService executor = Executors.newFixedThreadPool(20);
IntStream.range(0, 1000)
    .forEach(i -> executor.submit(atomicInt::incrementAndGet));
Here is the implementation of incrementAndGet:
public final int incrementAndGet() {
    for (;;) {
        int current = get();
        int next = current + 1;
        if (compareAndSet(current, next))
            return next;
    }
}
We can see that current is not synchronized or locked; after one thread gets current, another thread might already have updated it.
But it seems like the atomic class somehow avoids the race condition.
Can someone point out my mistake?
compareAndSet sets the value (and returns true) if and only if the first parameter is equal to the AtomicInteger's current value.
That is, if another thread had already changed the value, then current would not be equal to the current value, and the loop would run once more.
From the documentation of compareAndSet(int expect, int update):
Atomically sets the value to the given updated value if the current
value == the expected value.
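A tiny sketch of that contract: the update is applied only while the expected value still matches the current value (the values here are arbitrary):
import java.util.concurrent.atomic.AtomicInteger;

public class CompareAndSetDemo {
    public static void main(String[] args) {
        AtomicInteger value = new AtomicInteger(5);

        // The expected value matches the current value: the update is applied.
        System.out.println(value.compareAndSet(5, 6)); // true, value is now 6

        // The expected value is stale (someone else "got there first"): nothing changes.
        System.out.println(value.compareAndSet(5, 7)); // false, value stays 6

        System.out.println(value.get()); // 6
    }
}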
The getAndIncrement implementation of AtomicInteger does the following:
public final int getAndIncrement() {
    for (;;) {
        int current = get(); // Step 1: get() returns the volatile variable
        int next = current + 1;
        if (compareAndSet(current, next))
            return current;
    }
}
Isn't this equivalent to aVolatileVariable++ (which we know is not correct usage)? Without synchronization, how do we ensure that the complete operation is atomic? What if the value of the volatile variable changes after 'current' is read in Step 1?
The "secret sauce" is in this call:
compareAndSet(current, next)
The compareAndSet operation is going to fail (and return false) if the original volatile value has been changed concurrently after the read, forcing the code to continue with the loop.
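To see the contrast with a plain volatile increment, here is a small, deliberately racy sketch (purely illustrative) where aVolatileVariable++ loses updates while AtomicInteger does not:
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.IntStream;

public class VolatileVsAtomic {
    static volatile int volatileCounter = 0;                      // ++ on this is a read-modify-write, not atomic
    static final AtomicInteger atomicCounter = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        ExecutorService executor = Executors.newFixedThreadPool(8);
        IntStream.range(0, 100_000).forEach(i -> executor.submit(() -> {
            volatileCounter++;                // two threads may read the same value and both write value + 1
            atomicCounter.incrementAndGet();  // the CAS loop retries until the increment is applied exactly once
        }));
        executor.shutdown();
        executor.awaitTermination(1, TimeUnit.MINUTES);

        System.out.println("volatile: " + volatileCounter);     // usually less than 100000
        System.out.println("atomic:   " + atomicCounter.get()); // always 100000
    }
}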