I have an object backed by an array, which implements the following interface:
public interface PairSupplier<Q, E> {
public int size();
public Pair<Q, E> get(int index);
}
I would like to create a specific iterator over it:
public boolean hasNext(){
return true;
}
public Pair<Q, E> next(){
//some magic
}
In the next method I would like to return some element from the PairSupplier.
This element should be unique to the calling thread; other threads should not receive the same element.
Since PairSupplier has a fixed size, this is not always possible, but I would like to approximate it.
The order of elements doesn't matter; a thread can take the same element again at a different time.
Example: 2 Threads, 5 elements - {1,2,3,4,5}
Thread 1 | Thread 2
1 2
3 4
5 1
3 2
4 5
My solution:
I create an AtomicInteger index, which I increment on every next call.
PairSupplier pairs;
AtomicInteger index;
public boolean hasNext(){
return true;
}
public Pair<Q, E> next(){
int position = index.incrementAndGet() % pairs.size();
if (position < 0) { // the counter wrapped past Integer.MAX_VALUE, so the remainder is negative
    position += pairs.size();
}
return pairs.get(position);
}
pairs and index are shared among all threads.
I find this solution not scalable (all threads contend on a single increment); maybe someone has better ideas?
This iterator will be used by 50-1000 threads.
Your question details are ambiguous: your example suggests that two threads can be handed the same Pair, but you say otherwise in the description.
Since that is the more difficult behaviour to achieve, I will offer an Iterable<Pair<Q,E>> that will deliver Pairs one per thread until the supplier cycles - then it will repeat.
public interface Supplier<T> {
public int size();
public T get(int index);
}
public interface PairSupplier<Q, E> extends Supplier<Pair<Q, E>> {
}
public class IterableSupplier<T> implements Iterable<T> {
// The common supplier to use across all threads.
final Supplier<T> supplier;
// The atomic counter.
final AtomicInteger i = new AtomicInteger();
public IterableSupplier(Supplier<T> supplier) {
this.supplier = supplier;
}
@Override
public Iterator<T> iterator() {
/**
* You may create a NEW iterator for each thread while they all share the supplier,
* and will therefore distribute the Pairs between the different threads.
*
* You may also share the same iterator across multiple threads.
*
* No two threads will get the same pair twice unless the sequence cycles.
*/
return new ThreadSafeIterator();
}
private class ThreadSafeIterator implements Iterator<T> {
@Override
public boolean hasNext() {
/**
* Always true.
*/
return true;
}
private int pickNext() {
// Just grab one atomically.
int pick = i.incrementAndGet();
// Bring the counter back into range once it has exceeded the size - no spin; whichever thread's CAS succeeds handles it.
int actual = pick % supplier.size();
if (pick != actual) {
// So long as someone has a success before we overflow int we're good.
i.compareAndSet(pick, actual);
}
return actual;
}
@Override
public T next() {
return supplier.get(pickNext());
}
@Override
public void remove() {
throw new UnsupportedOperationException("Remove not supported.");
}
}
}
NB: I have adjusted the code a little to accommodate both scenarios. You can take an Iterator per thread or share a single Iterator across threads.
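For illustration, here is a minimal usage sketch. It assumes the Supplier and IterableSupplier types above are on the classpath (note this Supplier is the interface defined above, not java.util.function.Supplier); the data and the worker logic are placeholders of mine.
import java.util.Iterator;
public class IterableSupplierDemo {
    public static void main(String[] args) {
        // A tiny array-backed supplier (a hypothetical stand-in for PairSupplier).
        Supplier<String> supplier = new Supplier<String>() {
            private final String[] data = {"a", "b", "c", "d", "e"};
            @Override public int size() { return data.length; }
            @Override public String get(int index) { return data[index]; }
        };
        final IterableSupplier<String> iterable = new IterableSupplier<>(supplier);
        // Two workers, each with its own iterator; the shared counter inside
        // IterableSupplier distributes elements between them until it cycles.
        Runnable worker = new Runnable() {
            @Override public void run() {
                Iterator<String> it = iterable.iterator();
                for (int n = 0; n < 5; n++) {
                    System.out.println(Thread.currentThread().getName() + " -> " + it.next());
                }
            }
        };
        new Thread(worker).start();
        new Thread(worker).start();
    }
}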
You have a piece of information ("has anyone taken this Pair already?") that must be shared between all threads. So for the general case, you're stuck. However, if you have an idea about the size of your array and the number of threads, you could use buckets to make it less painful.
Let's suppose we know that there will be 1,000,000 array elements and 1,000 threads. Assign each thread a range (thread #1 gets elements 0-999, etc). Now instead of 1,000 threads contending for one AtomicInteger, you can have no contention at all!
That works if you can be sure that all your threads will run at about the same pace. If you need to handle the case where sometimes thread #1 is busy doing other things while thread #2 is idle, you can modify your bucket pattern slightly: each bucket has an AtomicInteger. Now threads will generally only contend with themselves, but if their bucket is empty, they can move on to the next bucket.
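A rough sketch of that per-bucket variant (the BucketedIndexer name and structure are mine, not from the question; resetting drained buckets is left out):
import java.util.concurrent.atomic.AtomicInteger;
// Each bucket owns a contiguous range of indices and its own counter, so a
// thread normally contends only on its "home" bucket's AtomicInteger.
class BucketedIndexer {
    private final AtomicInteger[] counters;
    private final int bucketSize;
    BucketedIndexer(int totalSize, int buckets) {
        this.bucketSize = totalSize / buckets;
        this.counters = new AtomicInteger[buckets];
        for (int i = 0; i < buckets; i++) {
            counters[i] = new AtomicInteger(0);
        }
    }
    /** Next index for a thread whose home bucket is homeBucket. */
    int next(int homeBucket) {
        for (int b = 0; b < counters.length; b++) {
            int bucket = (homeBucket + b) % counters.length;
            int offset = counters[bucket].getAndIncrement();
            if (offset < bucketSize) {
                return bucket * bucketSize + offset;
            }
            // Bucket drained: fall through and try the next one.
        }
        throw new IllegalStateException("all buckets exhausted");
    }
}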
I'm having some trouble understanding what problem you are trying to solve.
Does each thread process the whole collection?
Is the concern that no two threads can work on the same Pair at the same time? But each thread needs to process each Pair in the collection?
Or do you want the collection processed once by using all of the threads?
There is one key thing which is obscure in your example - what exactly is the meaning of this?
The order of elements doesn't matter; a thread can take the same element again at a different time.
"different time" means what? Within N milliseconds of each other? Does it mean that absolutely two threads will never be touching the same Pair at the same time? I will assume that.
If you want to decrease the probability that threads will block on each other contending for the same Pair, and there is a backing array of Pairs, try this:
Partition your array into numPairs / threadCount sub-arrays (you don't have to actually create sub-arrays, just start at different offsets - but it's easier to think about as sub-array)
Assign each thread to a different sub-array; when a thread exhausts its sub-array, it moves on to the next one (sketched below)
Say we have 6 Pairs and 2 threads - your assignments look like Thread-1:[0,1,2] Thread-2:[3,4,5]. When Thread-1 starts, it is looking at a different set of Pairs than Thread-2, so it is unlikely that they will contend for the same Pair.
If it is important that two threads really never touch a Pair at the same time, then wrap all of the code which touches a Pair object in synchronized(pair) (synchronize on the instance, not the type!). There may occasionally be blocking, but you are never blocking all threads on a single thing, as with the AtomicInteger; threads can only block each other when they are really trying to touch the same object.
Note this is not guaranteed never to block - for that, all threads would have to run at exactly the same speed, and processing every Pair object would have to take exactly the same amount of time, and the OS's thread scheduler would have to never steal time from one thread but not another. You cannot assume any of those things. What this gives you is a higher probability that you will get better concurrency, by dividing the areas to work in and making the smallest unit of state that is shared be the lock.
But this is the usual pattern for getting more concurrency on a data structure - partition the data between threads so that they rarely are touching the same lock at the same time.
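A sketch of the offset-based partitioning described above (the PartitionedCursor name and structure are my own; each thread owns one cursor):
// Thread t starts at its own sub-array's offset and wraps around the whole
// array, so threads begin on disjoint regions and rarely meet.
class PartitionedCursor {
    private final int size;   // total number of pairs
    private final int start;  // first index of this thread's partition
    private int consumed = 0; // how many elements this thread has taken
    PartitionedCursor(int size, int threadIndex, int threadCount) {
        this.size = size;
        this.start = (size / threadCount) * threadIndex;
    }
    /** Next index for this thread; cycles over the whole array. */
    int next() {
        int index = (start + consumed) % size;
        consumed++;
        return index;
    }
}
Combine this with the synchronized(pair) guard above if two threads must never touch the same Pair simultaneously.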
The easiest approach I can see is to create a HashSet or Map and give every thread a unique hash. After that, just do a simple get by this hash code.
This is a standard Java semaphore usage problem. The following Javadoc gives an example very similar to your problem: http://docs.oracle.com/javase/7/docs/api/java/util/concurrent/Semaphore.html
If you need more help, let me know.
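For illustration, here is a condensed variant of the resource-pool pattern from that Javadoc page, adapted to hand out elements one thread at a time (all class and field names here are mine):
import java.util.concurrent.Semaphore;
// An item is "checked out" while a thread holds it; the fair semaphore
// bounds how many items can be out at once.
class PairPool<T> {
    private final T[] items;
    private final boolean[] taken;
    private final Semaphore available;
    PairPool(T[] items) {
        this.items = items;
        this.taken = new boolean[items.length];
        this.available = new Semaphore(items.length, true);
    }
    T take() throws InterruptedException {
        available.acquire(); // blocks until some item is free
        synchronized (this) {
            for (int i = 0; i < items.length; i++) {
                if (!taken[i]) {
                    taken[i] = true;
                    return items[i];
                }
            }
        }
        throw new AssertionError("the semaphore guaranteed a free slot");
    }
    void putBack(T item) {
        synchronized (this) {
            for (int i = 0; i < items.length; i++) {
                if (items[i] == item && taken[i]) {
                    taken[i] = false;
                    break;
                }
            }
        }
        available.release();
    }
}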
I prefer a lock and release process.
When a thread asks for a Pair object, that Pair is removed from the supplier. Before the thread asks for a new Pair, the 'old' Pair is added to the supplier again.
You can take from the front and put back at the end.
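That check-out/check-in idea can be sketched with a LinkedBlockingDeque (my construction, not the answerer's code): an element is simply absent from the deque while a thread works on it.
import java.util.concurrent.LinkedBlockingDeque;
class CheckoutSupplier<T> {
    private final LinkedBlockingDeque<T> deque = new LinkedBlockingDeque<>();
    CheckoutSupplier(Iterable<T> elements) {
        for (T e : elements) {
            deque.addLast(e);
        }
    }
    /** Blocks until an element is free, then removes it from circulation. */
    T checkOut() throws InterruptedException {
        return deque.takeFirst();
    }
    /** Returns the element to the back of the rotation. */
    void checkIn(T element) {
        deque.addLast(element);
    }
}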
Related
There have been many discussions on this topic, e.g. here:
What's the difference between ConcurrentHashMap and Collections.synchronizedMap(Map)?
But I haven't found an answer to my specific use-case.
In general, you cannot assume that a HashMap is thread-safe. If you write to the same key from different threads at the same time, all hell could break loose. But what if I know that all my threads will have unique keys?
Is this code thread-safe or do I need to add blocking mechanism (or use concurrent map)?
Map<Integer, String> myMap = new HashMap<>();
for (int i = 1; i < 6; i++) {
    final int key = i; // the lambda needs an (effectively) final variable
    new Thread(() -> {
        myMap.put(key, Integer.toString(key));
    }).start();
}
The answer is simple: HashMap makes absolutely no thread-safety guarantees at all.
In fact it's explicitly documented that it's not thread-safe:
If multiple threads access a hash map concurrently, and at least one of the threads modifies the map structurally, it must be synchronized externally.
So accessing one from multiple threads without any kind of synchronization is a recipe for disaster.
I have seen cases where each thread using a different key caused issues (like concurrent iterations resulting in infinite loops).
Just think of re-hashing: when the threshold is reached, the internal bucket-array needs to be resized. That's a somewhat lengthy operation (compared to a single put). During that time all manner of weird things can happen if another thread tries to put as well (and maybe even triggers a second re-hashing!).
Additionally, there's no reliable way for you to prove that your specific use case is safe, since any test you run could just "accidentally" work. In other words: you can never depend on this working, even if you think you covered it with unit tests.
And since not everyone is convinced, you can easily test it yourself with this code:
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
class HashMapDemonstration {
public static void main(String[] args) throws InterruptedException {
int threadCount = 10;
int valuesPerThread = 1000;
Map<Integer, Integer> map = new HashMap<>();
List<Thread> threads = new ArrayList<>(threadCount);
for (int i = 0; i < threadCount; i++) {
Thread thread = new Thread(new MyUpdater(map, i*valuesPerThread, (i+1)*valuesPerThread - 1));
thread.start();
threads.add(thread);
}
for (Thread thread : threads) {
thread.join();
}
System.out.printf("%d threads with %d values per thread with a %s produced %d entries, should be %d%n",
threadCount, valuesPerThread, map.getClass().getName(), map.size(), threadCount * valuesPerThread);
}
}
class MyUpdater implements Runnable {
private final Map<Integer, Integer> map;
private final int startValue;
private final int endValue;
MyUpdater(Map<Integer, Integer> map, int startValue, int endValue) {
this.map = map;
this.startValue = startValue;
this.endValue = endValue;
System.out.printf("Creating updater for values %d to %d%n", startValue, endValue);
}
@Override
public void run() {
for (int i = startValue; i<= endValue; i++) {
map.put(i, i);
}
}
}
This is exactly the type of program the OP mentioned: each thread only ever writes to keys that no other thread touches. And still, the resulting Map will not contain all entries:
Creating updater for values 0 to 999
Creating updater for values 1000 to 1999
Creating updater for values 2000 to 2999
Creating updater for values 3000 to 3999
Creating updater for values 4000 to 4999
Creating updater for values 5000 to 5999
Creating updater for values 6000 to 6999
Creating updater for values 7000 to 7999
Creating updater for values 8000 to 8999
Creating updater for values 9000 to 9999
10 threads with 1000 values per thread with a java.util.HashMap produced 9968 entries, should be 10000
Note that the actual number of entries in the final Map will vary for each run. It even sometimes prints 10000 - which proves nothing, precisely because the code is not thread-safe!
Note that this failure mode (losing entries) is definitely not the only possible one: basically anything could happen.
I would like to respond specifically to this phrase:
But what if I know that all my threads will have unique keys?
You are making an assumption about the implementation of the map. The implementation is subject to change. If the implementation is documented not to be thread-safe, you must take into account the Java Memory Model (JMM) that guarantees almost nothing about visibility of memory between threads.
That is making a lot of assumptions with few guarantees. You should not rely on those assumptions, even if it happens to work on your machine, in a specific use-case, at a specific time.
In short: if an implementation that is not thread-safe is used in multiple threads, you MUST surround it with constructs that ensure thread-safety. Always.
However, just for the fun of it, let's describe what can go wrong in your particular case, where each thread only uses a unique key.
When adding or removing a key, even a unique one, there are cases when a hash map needs to reorganise internally. The first is a hash collision¹, in which a linked list of key-value entries must be updated. The second is where the map decides to resize its internal entry table. That overhauls the internal structure, including the mentioned linked lists.
Because of the JMM it is largely not guaranteed what another thread sees of the reorganisation. That means that behaviour is undefined if another thread happens to be in the middle of a get(key) when the reorganisation happens. If another thread is concurrently doing a put(key, value), you could end up with two threads trying to resize the map at the same time. Frankly, I do not even want to think about what mayhem that can cause!
¹ Multiple keys can have the same hash code. Because the map does not have limitless storage, the hash code is also wrapped around to the size of the internal table of entries, like (hashCode % sizeOfTable), which can lead to situations where different hash codes use the same "entry".
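To make the footnote concrete, here is a toy illustration (the real HashMap computes the bucket as (n - 1) & hash with extra bit-spreading, which for its power-of-two table sizes is equivalent to a modulo):
public class BucketCollisionDemo {
    public static void main(String[] args) {
        int tableSize = 16;
        int h1 = Integer.valueOf(1).hashCode();  // 1
        int h2 = Integer.valueOf(17).hashCode(); // 17
        // Distinct hash codes, but the same bucket once wrapped to the table size:
        System.out.println(h1 % tableSize); // 1
        System.out.println(h2 % tableSize); // 1 -> collision, shared "entry"
    }
}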
Are there any concurrency problems with one thread reading from one index of an array, while another thread writes to another index of the array, as long as the indices are different?
e.g. (this example not necessarily recommended for real use, only to illustrate my point)
class Test1
{
static final private int N = 4096;
final private int[] x = new int[N];
final private AtomicInteger nwritten = new AtomicInteger(0);
// invariant:
// all values x[i] where 0 <= i < nwritten.get() are immutable
// read() is not synchronized since we want it to be fast
int read(int index) {
if (index >= nwritten.get())
throw new IllegalArgumentException();
return x[index];
}
// write() is synchronized to handle multiple writers
// (using compare-and-set techniques to avoid blocking algorithms
// is nontrivial)
synchronized void write(int x_i) {
int index = nwritten.get();
if (index >= N)
    throw new IllegalStateException("array is full"); // placeholder for the original pseudo-exception
x[index] = x_i;
// from this point forward, x[index] is fixed in stone
nwritten.set(index + 1);
}
}
Edit: critiquing this example is not my question. I literally just want to know whether access to one array index, concurrent with access to another index, poses concurrency problems; I couldn't think of a simpler example.
While you will not get an invalid state by changing arrays as you mention, you will have the same problem that happens when two threads view a non-volatile integer without synchronization (see the section in the Java Tutorial on Memory Consistency Errors). Basically, the problem is that Thread 1 may write a value in slot i, but there is no guarantee when (or if) Thread 2 will see the change.
The class java.util.concurrent.atomic.AtomicIntegerArray does what you want to do.
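For example (a brief sketch; each element of an AtomicIntegerArray has volatile read/write semantics, so a value written by one thread is visible to any thread that reads that element afterwards):
import java.util.concurrent.atomic.AtomicIntegerArray;
public class AtomicArrayDemo {
    private static final AtomicIntegerArray x = new AtomicIntegerArray(4096);
    public static void main(String[] args) throws InterruptedException {
        Thread writer = new Thread(() -> x.set(42, 123)); // volatile-style write
        writer.start();
        writer.join();
        Thread reader = new Thread(() -> System.out.println(x.get(42))); // sees 123
        reader.start();
        reader.join();
    }
}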
The example has a lot of stuff that differs from the prose question.
The answer to that question is that distinct elements of an array are accessed independently, so you don't need synchronization if two threads change different elements.
However, the Java memory model makes no guarantees (that I'm aware of) that a value written by one thread will be visible to another thread, unless you synchronize access.
Depending on what you're really trying to accomplish, it's likely that java.util.concurrent already has a class that will do it for you. And if it doesn't, I still recommend taking a look at the source code for ConcurrentHashMap, since your code appears to be doing the same thing that it does to manage the hash table.
I am not really sure whether synchronizing only the write method while leaving the read method unsynchronized would work. I am not sure of all the consequences, but at the least it might lead to the read method returning values that have just been overwritten by write.
Yes, bad cache interleaving can still happen in a multi-CPU/core environment. There are several options to avoid it:
Use the sun.misc.Unsafe private library to atomically set an element in an array (or the corresponding feature added to jsr166y for Java 7)
Use an AtomicXYZ[] array
Use a custom object with one volatile field and have an array of that object
Use the ParallelArray from the jsr166y addendum in your algorithm instead
Since read() is not synchronized you could have the following scenario:
Thread A enters the write() method
Thread A stores the value while nwritten is still 0
Thread B calls read() and sees nwritten == 0
Thread A increments nwritten to 1
Thread A exits write()
Since you want to guarantee that your variable addresses never conflict, what about something like (discounting array index issues):
int i;
synchronized int curr(){ return i; }
synchronized int next(){ return ++i;}
int read( ) {
return values[curr()];
}
void write(int x){
values[next()]=x;
}
I have a simple multithreading problem (in Java). I have 2 sets of 4 very large arrays, and 4 threads - 1 for each array in a set. I want the threads, in parallel, to check whether the two sets' corresponding arrays have identical values. If a value in one array does not match the value at the corresponding index in the other array, then the two sets are not identical, and all threads should stop what they are doing and move on to the next 2 sets of 4 very large arrays. This process continues until all the pairs of array sets have been compared and deemed equal or not equal. I want all the threads to stop when one of the threads finds a mismatch. What is the correct way to implement this?
Here's one simple solution, but I don't know if it's the most efficient: Simply declare an object with a public boolean field.
public class TerminationEvent {
public boolean terminated = false;
}
Before starting the threads, create a new TerminationEvent object. Use this object as a parameter when you construct the thread objects, e.g.
public class MyThread implements Runnable {
private TerminationEvent terminationEvent;
public MyThread(TerminationEvent event) {
terminationEvent = event;
}
}
The same object will be passed to every MyThread, so they will all see the same boolean.
Now, the run() method in each MyThread will have something like
if (terminationEvent.terminated) {
break;
}
in the loop, and will set terminationEvent.terminated = true; when the other threads need to stop.
(Normally I wouldn't use public fields like terminated, but you said you wanted efficiency. I think this is a bit more efficient than a getter method, but I haven't tried benchmarking anything. Strictly speaking, the terminated field should be declared volatile: without it, the Java Memory Model does not guarantee that a write by one thread ever becomes visible to the others, so a worker might keep reading a stale false.)
Stopping other threads is usually done through the use of interrupts. Java threads no longer use Thread.stop(), because it was seen as unsafe: it unlocks all monitors held by the thread, possibly letting other threads view objects in an inconsistent state (Ref: http://docs.oracle.com/javase/1.5.0/docs/guide/misc/threadPrimitiveDeprecation.html). Threads are not "stopped" as such; instead, an interrupt is commonly used as a flag that the thread should stop:
The thread should check the interrupted flag (infrequently) before performing computations:
if (Thread.interrupted()) {
throw new InterruptedException();
}
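Putting that together for the array-comparison question (a hedged sketch of mine; the slicing and the array fields are placeholders):
// The first worker to find a mismatch interrupts the others, which notice
// the flag at their next check and return early.
class CompareWorker implements Runnable {
    private final int[] a, b;
    private final int from, to;
    private final Thread[] allWorkers; // populated after all threads are created
    CompareWorker(int[] a, int[] b, int from, int to, Thread[] allWorkers) {
        this.a = a; this.b = b; this.from = from; this.to = to;
        this.allWorkers = allWorkers;
    }
    @Override
    public void run() {
        for (int i = from; i < to; i++) {
            if (Thread.currentThread().isInterrupted()) {
                return; // another worker already found a mismatch
            }
            if (a[i] != b[i]) {
                for (Thread t : allWorkers) {
                    if (t != Thread.currentThread()) {
                        t.interrupt(); // ask the others to stop
                    }
                }
                return;
            }
        }
    }
}
In practice you would check the interrupted flag only every N iterations, as the next answer suggests.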
Use a volatile variable to set the abort condition. In your check loop that is run by all threads, let those threads check a number N of values uninterrupted so they don't have to fetch the volatile too often, which may be costly compared to the value match test. Benchmark your solution to find the optimum for N on your target hardware.
Another way would be to use a ForkJoin approach where your result is true if a mismatch was found. Divide your array slices down to a minimum size similar to N.
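A minimal sketch of the batched volatile check (N is the tuning constant mentioned above; the fields are placeholders of mine):
// Check the shared volatile flag only once every N comparisons.
class VolatileAbortComparer implements Runnable {
    static volatile boolean mismatch = false; // shared abort condition
    static final int N = 1024;                // tune via benchmarking
    private final int[] a, b;
    private final int from, to;
    VolatileAbortComparer(int[] a, int[] b, int from, int to) {
        this.a = a; this.b = b; this.from = from; this.to = to;
    }
    @Override
    public void run() {
        for (int i = from; i < to; i++) {
            if (a[i] != b[i]) {
                mismatch = true; // volatile write: becomes visible to all threads
                return;
            }
            if ((i - from) % N == N - 1 && mismatch) {
                return; // another thread found a mismatch
            }
        }
    }
}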
I am trying to find a good way to achieve the following API:
void add(Object o);
void processAndClear();
The class would store the objects and upon calling processAndClear would iterate through the currently stored ones, process them somehow, and then clear the store. This class should be thread safe.
The obvious approach is to use locking, but I wanted to be more "concurrent". This is the approach I would use:
class Store{
private AtomicReference<CopyOnWriteArrayList<Object>> store = new AtomicReference<>(new CopyOnWriteArrayList <>());
void add(Object o){
store.get().add(o);
}
void processAndClear(){
CopyOnWriteArrayList<Object> objects = store.get();
store.compareAndSet(objects, new CopyOnWriteArrayList<>());
for (Object object : objects) {
//do sth
}
}
}
This would allow threads that try to add objects to proceed almost immediately, without any locking/waiting for the clearing to complete. Is this more or less the correct approach?
Your above code is not thread-safe. Imagine the following:
Thread A is put on hold at add() right after store.get()
Thread B is in processAndClear(), replaces the list, processes all elements of the old one, then returns.
Thread A resumes and adds a new item to the now obsolete list that will never be processed.
Probably the easiest solution here would be to use a LinkedBlockingQueue, which also simplifies the task a lot:
class Store{
final LinkedBlockingQueue<Object> queue = new LinkedBlockingQueue<>();
void add(final Object o) throws InterruptedException { // put() throws the checked InterruptedException
queue.put(o); // blocks until there is free space in the optionally bounded queue
}
void processAndClear(){
Object element;
while ((element = queue.poll()) != null) { // does not block on empty list but returns null instead
doSomething(element);
}
}
}
Edit: How to do this with synchronized:
class Store{
final LinkedList<Object> queue = new LinkedList<>(); // has to be final for synchronized to work
void add(final Object o){
synchronized(queue) { // on the queue as this is the shared object in question
queue.add(o);
}
}
void processAndClear() {
final LinkedList<Object> elements = new LinkedList<>(); // temporary local list
synchronized(queue) { // here as well, as every access needs to be properly synchronized
elements.addAll(queue);
queue.clear();
}
for (Object e : elements) {
doSomething(e); // this is thread-safe as only this thread can access these now local elements
}
}
}
Why this is not a good idea
Although this is thread-safe, it is much slower compared to the concurrent version. Assume that you have a system with 100 threads that frequently call add, while one thread calls processAndClear. Then the following performance bottlenecks will occur:
If one thread calls add the other 99 are put on hold in the meantime.
During the first part of processAndClear all 100 threads are put on hold.
If you assume that those 100 adding threads have nothing else to do, you can easily show that the application runs at the same speed as a single-threaded application, minus the cost of synchronization. That means: adding will effectively be slower with 100 threads than with 1. This is not the case if you use a concurrent list as in the first example.
There will however be a minor performance gain in the processing thread, as doSomething can run on the old elements while new ones are added. But again, the concurrent example could be faster, as you could have multiple threads do the processing simultaneously.
Effectively, synchronized can be used as well, but you automatically introduce performance bottlenecks, potentially causing the application to run slower than a single-threaded one, forcing you to do complicated performance tests. In addition, extending the functionality always carries a risk of introducing threading issues, as locking needs to be done manually. A concurrent list, in contrast, solves all these problems without additional code, and the code can easily be changed or extended later on.
The class would store the objects and upon calling processAndClear would iterate through the currently stored ones, process them somehow, and then clear the store.
This seems like you should use a BlockingQueue for this task. Your add(...) method would add to the queue and your consumer would call take() which blocks waiting for the next item. The BlockingQueue (ArrayBlockingQueue is a typical implementation) takes care of all of the synchronization and signaling for you.
This means that you don't need the CopyOnWriteArrayList or the AtomicReference. What you lose is a collection that you could iterate through for reasons other than the ones your post currently articulates.
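If processAndClear should grab everything currently stored in one go, BlockingQueue#drainTo does exactly that. A small sketch (my construction, not the answerer's code):
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
class BatchStore {
    private final BlockingQueue<Object> queue = new ArrayBlockingQueue<>(10_000);
    void add(Object o) throws InterruptedException {
        queue.put(o); // blocks if the bounded queue is full
    }
    void processAndClear() {
        List<Object> batch = new ArrayList<>();
        queue.drainTo(batch); // removes all currently available elements in one call
        for (Object o : batch) {
            doSomething(o);
        }
    }
    private void doSomething(Object o) { /* processing placeholder */ }
}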
I am planning to use this schema in my application, but I was not sure whether this is safe.
To give a little background: a bunch of servers will compute results of sub-tasks that belong to a single task and report them back to a central server. This piece of code is used to register the results, and also to check whether all the subtasks for the task have completed and, if so, to report that fact only once.
The important point is that each task must be reported once and only once, as soon as it is completed (i.e., all subTaskResults are set).
Can anybody help? Thank you! (Also, if you have a better idea to solve this problem, please let me know!)
*Note that I simplified the code for brevity.
Solution I
class Task {
//Populate with bunch of (Long, new AtomicReference()) pairs
//Actual app uses read only HashMap
Map<Id, AtomicReference<SubTaskResult>> subtasks = populatedMap();
Semaphore permission = new Semaphore(1);
public Task set(Long id, SubTaskResult result){
    //null check omitted
    subtasks.get(id).set(result);
    return check() ? this : null;
}
private boolean check(){
    for(AtomicReference<SubTaskResult> ref : subtasks.values()){
        if(ref.get()==null){
            return false;
        }
    }//for
    return permission.tryAcquire();
}
}//class
Stephen C kindly suggested using a counter. Actually, I have considered that once, but I reasoned that the JVM could reorder the operations, and thus a thread could observe a decremented counter (by another thread) before the result is set in the AtomicReference (by that other thread).
*EDIT: I now see this is thread safe. I'll go with this solution. Thanks, Stephen!
Solution II
class Task {
//Populate with bunch of (Long, new AtomicReference()) pairs
//Actual app uses read only HashMap
Map<Id, AtomicReference<SubTaskResult>> subtasks = populatedMap();
AtomicInteger counter = new AtomicInteger(subtasks.size());
public Task set(Long id, SubTaskResult result){
    //null check omitted
    subtasks.get(id).set(result);
    //In the actual app, if !compareAndSet(null, result) return null;
    return check() ? this : null;
}
private boolean check(){
return counter.decrementAndGet() == 0;
}
}//class
I assume that your use-case is that there are multiple threads calling set, but for any given value of id, the set method will be called only once. I'm also assuming that populatedMap creates the entries for all used id values, and that subtasks and permission are really private.
If so, I think that the code is thread-safe.
Each thread should see the initialized state of the subtasks Map, complete with all keys and all AtomicReference references. This state never changes, so subtasks.get(id) will always give the right reference. The set(result) call operates on an AtomicReference, so the subsequent get() method calls in check() will give the most up-to-date values ... in all threads. Any potential races with multiple threads calling check seem to sort themselves out.
However, this is a rather complicated solution. A simpler solution would be to use a concurrent counter; e.g. replace the Semaphore with an AtomicInteger and use decrementAndGet instead of repeatedly scanning the subtasks map in check.
In response to this comment in the updated solution:
Actually, I have considered that once, but I reasoned that the JVM could reorder the operations and thus, a thread can observe a decremented counter (by another thread) before the result is set in AtomicReference (by that other thread).
The AtomicInteger and AtomicReference by definition are atomic. Any thread that tries to access one is guaranteed to see the "current" value at the time of the access.
In this particular case, each thread calls set on the relevant AtomicReference before it calls decrementAndGet on the AtomicInteger. This cannot be reordered. Actions performed by a thread are performed in order. And since these are atomic actions, the effects will be visible to other threads in order as well.
In other words, it should be thread-safe ... AFAIK.
The atomicity guaranteed (per class documentation) explicitly for AtomicReference.compareAndSet extends to set and get methods (per package documentation), so in that regard your code appears to be thread-safe.
I am not sure, however, why you have the Semaphore.tryAcquire as a side effect there; without complementary code to release the semaphore, that part of your code looks wrong.
The second solution does provide a thread-safe latch, but it's vulnerable to calls to set() that provide an ID that's not in the map -- which would trigger a NullPointerException -- or more than one call to set() with the same ID. The latter would mistakenly decrement the counter too many times and falsely report completion when there are presumably other subtasks IDs for which no result has been submitted. My criticism isn't with regard to the thread safety, but rather to the invariant maintenance; the same flaw would be present even without the thread-related concern.
Another way to solve this problem is with AbstractQueuedSynchronizer, but it's somewhat gratuitous: you can implement a stripped-down counting semaphore, where each call set() would call releaseShared(), decrementing the counter via a spin on compareAndSetState(), and tryAcquireShared() would only succeed when the count is zero. That's more or less what you implemented above with the AtomicInteger, but you'd be reusing a facility that offers more capabilities you can use for other portions of your design.
To flesh out the AbstractQueuedSynchronizer-based solution requires adding one more operation to justify the complexity: being able to wait on the results from all the subtasks to come back, such that the entire task is complete. That's Task#awaitCompletion() and Task#awaitCompletion(long, TimeUnit) in the code below.
Again, it's possibly overkill, but I'll share it for the purpose of discussion.
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.AbstractQueuedSynchronizer;
final class Task
{
private static final class Sync extends AbstractQueuedSynchronizer
{
public Sync(int count)
{
setState(count);
}
@Override
protected int tryAcquireShared(int ignored)
{
return 0 == getState() ? 1 : -1;
}
@Override
protected boolean tryReleaseShared(int ignored)
{
int current;
do
{
current = getState();
if (0 == current)
return true;
}
while (!compareAndSetState(current, current - 1));
return 1 == current;
}
}
public Task(int count)
{
if (count < 0)
throw new IllegalArgumentException();
sync_ = new Sync(count);
}
public boolean set(int id, Object result)
{
// Ensure that "id" refers to an incomplete task. Doing so requires
// additional synchronization over the structure mapping subtask
// identifiers to results.
// Store result somehow.
return sync_.releaseShared(1);
}
public void awaitCompletion()
throws InterruptedException
{
sync_.acquireSharedInterruptibly(0);
}
public void awaitCompletion(long time, TimeUnit unit)
throws InterruptedException
{
sync_.tryAcquireSharedNanos(0, unit.toNanos(time));
}
private final Sync sync_;
}
I have a weird feeling reading your example program, but what to do about that depends on the larger structure of your program. A set function that also checks for completion is almost a code smell. :-) Just a few ideas:
If you have synchronous communication with your servers, you might use an ExecutorService with the same number of threads as there are servers doing the communication. From this you get a bunch of Futures, and you can naturally proceed with your calculation - the get calls will block the moment a result is needed but not yet there.
If you have asynchronous communication with the servers, you might also use a CountDownLatch after submitting the tasks to the servers. The await call blocks the main thread until all subtasks complete, and other threads can receive the results and call countDown() for each received result.
With all these methods, you don't need special thread-safety measures other than making the concurrent storing of the results in your structure thread-safe. And I bet there are even better patterns for this.
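For reference, the CountDownLatch variant mentioned above could look roughly like this (names are illustrative; result storage is left out):
import java.util.concurrent.CountDownLatch;
class LatchedTask {
    private final CountDownLatch remaining;
    LatchedTask(int subtaskCount) {
        this.remaining = new CountDownLatch(subtaskCount);
    }
    /** Called by whichever thread receives a subtask result. */
    void onResult(Object result) {
        // store the result somewhere thread-safe, then:
        remaining.countDown();
    }
    /** Blocks the caller until every subtask has reported. */
    void awaitCompletion() throws InterruptedException {
        remaining.await();
    }
}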