I have a situation where I'd like to use an extension of Java's fixed thread pools. I have N groups of runnable objects that I'd like to compete for resources. However, I'd like the total number of threads used to remain constant. The way that I would like this to work is outlined here:
1. Allocate an object with N threads and M queues.
2. Schedule job n on queue m.
3. Have a pointer to the first queue.
4. Repeat:
   a. If the maximum number of threads is currently in use, wait.
   b. Pop a job off the current queue.
   c. Move the pointer one queue over (or from the last queue back to the first).
First, does something like this already exist? Second, if not, I'm nervous about writing my own because I know writing thread pools can be dangerous. Can anyone point me to some good examples for writing my own?
Your best bet is probably creating your own implementation of a Queue that cycles through other queues. For example (in pseudo-code):
import java.util.LinkedList;
import java.util.Queue;

class CyclicQueue<T> {
    Queue<T>[] queues;
    int current = 0;

    @SuppressWarnings("unchecked")
    CyclicQueue(int size) {
        queues = new Queue[size];
        for (int i = 0; i < size; i++)
            queues[i] = new LinkedList<T>();
    }

    // Poll the sub-queues round-robin, starting at 'current';
    // returns null if every sub-queue is empty.
    T get() {
        int i = current;
        do {
            T value = queues[i].poll();
            if (value != null) {
                current = (i + 1) % queues.length;  // next get() starts at the following queue
                return value;
            }
            i = (i + 1) % queues.length;
        } while (i != current);
        return null;
    }
}
Of course, with this, if you want blocking you'll need to add that in yourself.
In which case, you'll probably want a custom Queue for each queue which can notify the parent queue that a value has been added.
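For example, here is a minimal sketch of a blocking variant that takes a shortcut: instead of per-queue notification, all sub-queues share a single monitor, so put() can wake a blocked take(). The class and method names are illustrative, not from any library.

import java.util.ArrayDeque;
import java.util.Queue;

// Round-robin "queue of queues" with a blocking take().
// All sub-queues share one monitor so a put() can wake a waiting take().
class BlockingCyclicQueue<T> {
    private final Queue<T>[] queues;
    private int current = 0;

    @SuppressWarnings("unchecked")
    BlockingCyclicQueue(int size) {
        queues = new Queue[size];
        for (int i = 0; i < size; i++) {
            queues[i] = new ArrayDeque<T>();
        }
    }

    // Add a job to sub-queue m and wake any waiting consumer.
    synchronized void put(int m, T value) {
        queues[m].add(value);
        notifyAll();
    }

    // Take the next job, cycling over the sub-queues; block while all are empty.
    synchronized T take() throws InterruptedException {
        while (true) {
            for (int n = 0; n < queues.length; n++) {
                int i = (current + n) % queues.length;
                T value = queues[i].poll();
                if (value != null) {
                    current = (i + 1) % queues.length;  // next take() starts at the following queue
                    return value;
                }
            }
            wait();  // all sub-queues empty
        }
    }
}

Worker threads in a fixed-size pool would then simply loop on take() and run whatever job comes back.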
I saw this self-implemented bounded blocking queue.
A change was made to it, aiming to eliminate needless competition among waiting threads by replacing notifyAll with notify.
But I don't quite get the point of the two extra variables that were added: waitOfferCount and waitPollCount.
Their initial values are both 0.
The diff before and after they were added is below:
Offer:
Poll:
My understanding is that the purpose of the two variables is to avoid useless notify calls when nothing is waiting on the object. But what harm would it do if it weren't done this way?
Another thought is that they may have something to do with the switch from notifyAll to notify, but again I think we can safely use notify even without them?
Full code below:
class FairnessBoundedBlockingQueue implements Queue {
    // Queue here is presumably the author's own interface, not java.util.Queue.

    // Minimal Node definition (not shown in the original snippet).
    protected static class Node {
        Object value;
        Node next;
        Node(Object value) { this.value = value; }
    }

    protected final int capacity;
    protected Node head;
    protected Node tail;

    // guard: canPollCount, head
    protected final Object pollLock = new Object();
    protected int canPollCount;
    protected int waitPollCount;

    // guard: canOfferCount, tail
    protected final Object offerLock = new Object();
    protected int canOfferCount;
    protected int waitOfferCount;

    public FairnessBoundedBlockingQueue(int capacity) {
        this.capacity = capacity;
        this.canPollCount = 0;
        this.canOfferCount = capacity;
        this.waitPollCount = 0;
        this.waitOfferCount = 0;
        this.head = new Node(null);  // dummy head node
        this.tail = head;
    }

    public boolean offer(Object obj) throws InterruptedException {
        synchronized (offerLock) {
            while (canOfferCount <= 0) {
                waitOfferCount++;
                offerLock.wait();
                waitOfferCount--;
            }
            Node node = new Node(obj);
            tail.next = node;
            tail = node;
            canOfferCount--;
        }
        synchronized (pollLock) {
            ++canPollCount;
            if (waitPollCount > 0) {
                pollLock.notify();
            }
        }
        return true;
    }

    public Object poll() throws InterruptedException {
        Object result;
        synchronized (pollLock) {
            while (canPollCount <= 0) {
                waitPollCount++;
                pollLock.wait();
                waitPollCount--;
            }
            result = head.next.value;
            head.next.value = null;
            head = head.next;
            canPollCount--;
        }
        synchronized (offerLock) {
            canOfferCount++;
            if (waitOfferCount > 0) {
                offerLock.notify();
            }
        }
        return result;
    }
}
You would need to ask the authors of that change what they thought they were achieving with that change.
My take is as follows:
Changing from notifyAll() to notify() is a good thing. If there are N threads waiting on a queue's offerLock or pollLock, then this avoids N - 1 unnecessary wakeups.
It seems that the counters are being used to avoid calling notify() when there isn't a thread waiting. This looks to me like a doubtful optimization. AFAIK a notify on a monitor that nothing is waiting on is very cheap. So this may make a small difference ... but it is unlikely to be significant.
If you really want to know, write some benchmarks. Write 4 versions of this class with no optimization, the notify optimization, the counter optimization and both of them. Then compare the results ... for different levels of queue contention.
I'm not sure what "fairness" is supposed to mean here, but I can't see anything in this class to guarantee that threads that are waiting in offer or poll get treated fairly.
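To illustrate the benchmarking suggestion above, a very rough throughput harness could look something like the sketch below. It assumes the queue class compiles as posted (the Queue interface it implements is presumably the author's own); the thread and item counts are arbitrary, and a serious comparison would use a proper benchmarking harness instead.

import java.util.concurrent.CountDownLatch;

// Rough throughput sketch: N producers offer and N consumers poll a fixed
// number of items, and we time how long the whole run takes.
public class QueueBenchmark {
    static final int THREADS = 4;           // producers and consumers each
    static final int ITEMS_PER_THREAD = 100_000;

    public static void main(String[] args) throws Exception {
        final FairnessBoundedBlockingQueue queue = new FairnessBoundedBlockingQueue(1024);
        final CountDownLatch done = new CountDownLatch(THREADS * 2);
        long start = System.nanoTime();

        for (int t = 0; t < THREADS; t++) {
            new Thread(() -> {
                try {
                    for (int i = 0; i < ITEMS_PER_THREAD; i++) queue.offer(i);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                } finally {
                    done.countDown();
                }
            }).start();
            new Thread(() -> {
                try {
                    for (int i = 0; i < ITEMS_PER_THREAD; i++) queue.poll();
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                } finally {
                    done.countDown();
                }
            }).start();
        }

        done.await();
        System.out.printf("%d items in %.1f ms%n",
                THREADS * ITEMS_PER_THREAD, (System.nanoTime() - start) / 1e6);
    }
}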
Another thought is that they may have something to do with the switch from notifyAll to notify, but again I think we can safely use notify even without them?
Yes, since two locks (pollLock and offerLock) are used, it is no problem to change notifyAll to notify even without these two variables. But if you were using a single lock for both offer and poll, you would have to use notifyAll.
My understanding is that the purpose of the two variables is to avoid useless notify calls when nothing is waiting on the object. But what harm would it do if it weren't done this way?
Yes, these two variables are there to avoid useless notify calls, but they also add bookkeeping operations of their own. I think benchmarking may be needed to determine which performs better in different scenarios.
Besides:
1. As a blocking queue, it should implement the interface BlockingQueue, where poll and offer are the non-blocking methods; the blocking operations should be called take and put.
2. This is not a fair queue.
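If blocking semantics and fairness are the goal, one option is to lean on the JDK's own bounded queue instead of hand-rolling one: ArrayBlockingQueue has a constructor flag that grants waiting producers and consumers access in FIFO order. A minimal sketch (not the original class):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch: the JDK's bounded blocking queue with fairness enabled.
// The second constructor argument makes blocked producers/consumers
// acquire access in FIFO order.
public class FairQueueExample {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(16, true);

        queue.put("job-1");          // blocks while the queue is full
        String job = queue.take();   // blocks while the queue is empty
        System.out.println(job);
    }
}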
I've implemented a pipelined approach. I'm traversing a tree and I need certain values which aren't available beforehand... so I have to traverse the tree in parallel (or beforehand) and once more for every node whose values I want to save (descendantCount, for example).
As such I'm iterating through the tree, and from the constructor I'm calling a method which starts a new thread through an ExecutorService. The Callable which is submitted is:
@Override
public Void call() throws Exception {
    // Get descendants for every node and save it to a list.
    final ExecutorService executor =
        Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
    int index = 0;
    final Map<Integer, Diff> diffs = mDiffDatabase.getMap();
    final int depth = diffs.get(0).getDepth().getNewDepth();
    try {
        boolean first = true;
        for (final AbsAxis axis = new DescendantAxis(mNewRtx, true); index < diffs.size()
            && ((diffs.get(index).getDiff() == EDiff.DELETED && depth < diffs.get(index).getDepth()
                .getOldDepth()) || axis.hasNext());) {
            if (axis.getTransaction().getNode().getKind() == ENodes.ROOT_KIND) {
                axis.next();
            } else {
                if (index < diffs.size() && diffs.get(index).getDiff() != EDiff.DELETED) {
                    axis.next();
                }
                final Future<Integer> submittedDescendants =
                    executor.submit(new Descendants(mNewRtx.getRevisionNumber(), mOldRtx
                        .getRevisionNumber(), axis.getTransaction().getNode().getNodeKey(), mDb
                        .getSession(), index, diffs));
                final Future<Modification> submittedModifications =
                    executor.submit(new Modifications(mNewRtx.getRevisionNumber(), mOldRtx
                        .getRevisionNumber(), axis.getTransaction().getNode().getNodeKey(), mDb
                        .getSession(), index, diffs));
                if (first) {
                    first = false;
                    mMaxDescendantCount = submittedDescendants.get();
                    // submittedModifications.get();
                }
                mDescendantsQueue.put(submittedDescendants);
                mModificationQueue.put(submittedModifications);
                index++;
            }
        }
        mNewRtx.close();
    } catch (final AbsTTException e) {
        LOGWRAPPER.error(e.getMessage(), e);
    }
    executor.shutdown();
    return null;
}
So for every node it creates a new Callable which traverses the tree and counts descendants and modifications (I'm actually fusing two tree revisions together). mDescendantsQueue and mModificationQueue are BlockingQueues. At first I only had the descendants queue and traversed the tree once more to get the modifications of every node (counting modifications made in the subtree of the current node). Then I thought, why not do both in parallel and implement a pipelined approach? Sadly the performance seems to have decreased every time I've implemented another multithreaded "step".
Maybe it's because an XML tree usually isn't that deep and the concurrency overhead is too heavy :-/
At first I did everything sequential, which was the fastest:
- traversing the tree
- for every node traverse the descendants and compute descendantCount and modificationCount
After using a pipelined approach with BlockingQueues it seems the performance has decreased, but I haven't actually taken any time measurements, and I would have to revert many changes to go back :( Maybe the performance increases with more CPUs, because I only have a Core2Duo for testing right now.
best regards,
Johannes
Probably this should help: Amdahl's law. What it basically says is that the achievable speedup depends (inversely) on the fraction of the code which has to be processed with synchronization. Hence, adding more computing resources won't lead to a better result if that fraction is large. Ideally, if the ratio of the synchronized part to the total is low, then a pool of (number of processors + 1) threads should give the best output (unless you are doing network or other I/O, in which case you can increase the size of the pool).
So just follow it up from the above link and see if it helps.
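As a rough illustration of the formula behind it (the 20% serial fraction below is a made-up number, not a measurement of the question's code):

// Amdahl's law: speedup(N) = 1 / (s + (1 - s) / N),
// where s is the serial (synchronized) fraction and N the number of processors.
public class AmdahlDemo {
    static double speedup(double serialFraction, int processors) {
        return 1.0 / (serialFraction + (1.0 - serialFraction) / processors);
    }

    public static void main(String[] args) {
        // With 20% of the work serialized, even many cores cap out below 5x.
        for (int n : new int[] {1, 2, 4, 8, 16}) {
            System.out.printf("N=%2d  speedup=%.2f%n", n, speedup(0.2, n));
        }
    }
}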
From your description it sounds like you're recursively creating threads, each of which processes one node and then spawns a new thread? Is this correct? If so, I'm not surprised that you're suffering from performance degradation.
A simple recursive descent method might actually be the best way to do this. I can't see how multithreading will gain you any advantages here.
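For what it's worth, a single-threaded recursive descent that fills in descendant counts might look like the sketch below; TreeNode and its fields are hypothetical stand-ins, not the question's actual classes (AbsAxis, etc.).

import java.util.ArrayList;
import java.util.List;

// Hypothetical tree node; the question's real API is not shown here.
class TreeNode {
    final List<TreeNode> children = new ArrayList<TreeNode>();
    int descendantCount;
}

class DescendantCounter {
    // Simple recursive descent: fills in descendantCount for every node
    // in a single pass, no threads involved.
    static int count(TreeNode node) {
        int total = 0;
        for (TreeNode child : node.children) {
            total += 1 + count(child);
        }
        node.descendantCount = total;
        return total;
    }
}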
This piece of code:
synchronized (mList) {
    if (mList.size() != 0) {
        int s = mList.size() - 1;
        for (int i = s; i > 0; i -= OFFSET) {
            mList.get(i).doDraw(canv);
        }
        getHead().drawHead(canv);
    }
}
Randomly throws AIOOBEs. From what I've read, the synchronized should prevent that, so what am I doing wrong?
Edits:
AIOOBE = ArrayIndexOutOfBoundsException
The code's incomplete, cut down to what is needed. But to make you happy, OFFSET is 4, and just imagine that there is a for-loop adding a bit of data at the beginning. And a second thread reading and / or modifying the list.
Edit 2:
I've noticed it happens when the list is being drawn and the current game ends. The draw-thread hasn't drawn all elements when the list is emptied. Is there a way of telling the game to wait with emptying the list until it's empty?
Edit 3:
I've just noticed that I'm not sure if this is a multi-threading problem. It seems I only have 2 threads, one for calculating and drawing and one for user input. Gonna have to look into this a bit more than I thought.
What you're doing looks right... but that's all:
It doesn't matter on what object you synchronize, it needn't be the list itself.
What does matter is if all threads always synchronize on the same object, when accessing a shared resource.
Any access to Swing (or another GUI library) must happen on the AWT event dispatch thread.
To your edit:
I've noticed it happens when the list is being drawn and the current game ends. The draw-thread hasn't drawn all elements when the list is emptied. Is there a way of telling the game to wait with emptying the list until it's empty?
I think you mean "...wait with emptying the list until the drawing has completed." Just synchronize the code doing it on the same lock (i.e., the list itself in your case).
Again: Any access to a shared resource must be protected somehow. It seems like you're using synchronized just here and not where you're emptying the list.
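A minimal sketch of what that could look like, assuming the game-end code is what clears mList (the method name is made up for illustration):

// Emptying the list must hold the same lock the draw loop uses,
// so a draw in progress finishes before the list is cleared.
void clearOnGameEnd() {
    synchronized (mList) {
        mList.clear();
    }
}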
The safe solution is to only allow one thread to create objects and add and remove them from a List after the game has started.
I had problems myself with random AIOOBE errors, and no synchronization could solve it properly; plus it was slowing down the response to the user.
My solution, which is now stable and fast (I've never had an AIOOBE since), is to make the UI thread inform the game thread to create or manipulate an object by setting a flag and the coordinates of the touch in persistent variables.
Since the game thread loops about 60 times per second, this proved to be sufficient to pick up the message from the UI thread and do something.
This is a very simple solution and it works great!
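A minimal sketch of that hand-off; the names are made up, and an AtomicReference is just one safe way to publish the flag and coordinates together:

import java.util.concurrent.atomic.AtomicReference;

// UI thread publishes a pending touch; the game loop picks it up once per frame.
class TouchHandoff {
    static final class Touch {
        final float x, y;
        Touch(float x, float y) { this.x = x; this.y = y; }
    }

    private final AtomicReference<Touch> pending = new AtomicReference<Touch>();

    // called on the UI thread
    void onTouch(float x, float y) {
        pending.set(new Touch(x, y));
    }

    // called ~60 times per second on the game thread
    void update() {
        Touch t = pending.getAndSet(null);   // consume at most one touch per frame
        if (t != null) {
            // create or manipulate the game object here
        }
    }
}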
My suggestion is to use a BlockingQueue, and I think that is the kind of solution you are looking for. How can you do it? It is already shown with an example in the javadoc :)
class Producer implements Runnable {
    private final BlockingQueue queue;
    Producer(BlockingQueue q) { queue = q; }
    public void run() {
        try {
            while (true) { queue.put(produce()); }
        } catch (InterruptedException ex) { ... handle ... }
    }
    Object produce() { ... }
}

class Consumer implements Runnable {
    private final BlockingQueue queue;
    Consumer(BlockingQueue q) { queue = q; }
    public void run() {
        try {
            while (true) { consume(queue.take()); }
        } catch (InterruptedException ex) { ... handle ... }
    }
    void consume(Object x) { ... }
}

class Setup {
    void main() {
        BlockingQueue q = new SomeQueueImplementation();
        Producer p = new Producer(q);
        Consumer c1 = new Consumer(q);
        Consumer c2 = new Consumer(q);
        new Thread(p).start();
        new Thread(c1).start();
        new Thread(c2).start();
    }
}
The beneficial thing for you is that you need not worry about synchronizing your mList. BlockingQueue offers a number of special methods for this. You can check them in the doc. From the javadoc:
BlockingQueue methods come in four forms, with different ways of handling operations that cannot be satisfied immediately, but may be satisfied at some point in the future: one throws an exception, the second returns a special value (either null or false, depending on the operation), the third blocks the current thread indefinitely until the operation can succeed, and the fourth blocks for only a given maximum time limit before giving up.
To be on the safe side: I am not experienced with Android, so I'm not certain whether all Java packages are available on Android, but this one at least should be, I hope.
You are getting the IndexOutOfBoundsException because there are 2 threads that operate on the list and are doing it wrongly.
You should synchronize at a higher level, in such a way that no other thread can iterate through the list while one thread is modifying it! Only one thread at a time should 'work on' the list.
I guess you have the following situation:
// piece of code that adds some item to the list
synchronized (mList) {
    mList.add(1, drawableElem);
    ...
}
and
// code that iterates your list (your code simplified)
synchronized (mList) {
    if (mList.size() != 0) {
        int s = mList.size() - 1;
        for (int i = s; i > 0; i -= OFFSET) {
            mList.get(i).doDraw(canv);
        }
        getHead().drawHead(canv);
    }
}
Individually the pieces of code look fine. They seem thread-safe. But 2 individually thread-safe pieces of code might not be thread-safe at a higher level!
It's just as if you had done the following:
Vector v = new Vector();
if (v.size() == 0) {    // size() itself is thread-safe
    v.add("elem");      // add() itself is also thread-safe individually
}
BUT the compound operation is NOT!
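The fix is to hold the lock across the whole check-then-act sequence so it becomes one atomic compound operation, e.g.:

// the whole compound operation is now atomic
synchronized (v) {
    if (v.size() == 0) {
        v.add("elem");
    }
}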
Regards,
Tiberiu
OK, so I'm trying to find the maximum element of a 2D array. I will have a method that accepts the 2D array as a parameter and finds the maximum. It needs to find the maximum element of each row in a separate thread so that the threads run in parallel, then join each thread and find the max of those row maxima to get the maximum of the entire 2D array. Now the problem I'm having is that run() does not return any value... How then am I supposed to access the value that has been modified? For example:
public static int maxof2darray(long[][] input){
    ArrayList<Thread> threads = new ArrayList<Thread>();
    long[] rowArray;
    for (int i = 0; i < input.length; i++) {
        rowArray = input[i];
        teste r1 = new teste(rowArray, max);
        threads.add(new Thread(r1));
    }

    for (Thread x : threads) {
        x.start();
    }

    try {
        for (Thread x : threads) {
            x.join();
        }
    } catch (InterruptedException e) {
        // join() can throw InterruptedException; catch omitted in the original snippet
        e.printStackTrace();
    }
    // ... how do I get the per-row maxima back here?
}
As you can see, it creates an ArrayList of Thread objects, then takes each row and runs the code that finds the maximum of that row... the problem is that run() does not return any value. How then can I possibly access the maximum of that row?
The Future API should do what you need.
A Future represents the result of an asynchronous computation. Methods are provided to check if the computation is complete, to wait for its completion, and to retrieve the result of the computation. The result can only be retrieved using method get when the computation has completed, blocking if necessary until it is ready. Cancellation is performed by the cancel method. Additional methods are provided to determine if the task completed normally or was cancelled. Once a computation has completed, the computation cannot be cancelled. If you would like to use a Future for the sake of cancellability but not provide a usable result, you can declare types of the form Future<?> and return null as a result of the underlying task.
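A sketch of how that could look for the row-maximum problem; the class and method names are illustrative, not the asker's teste class:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class MaxOf2dArray {
    // Each row's maximum is computed by a Callable; Future.get() returns it.
    public static long maxOf2dArray(final long[][] input)
            throws InterruptedException, ExecutionException {
        ExecutorService executor =
            Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
        try {
            List<Future<Long>> futures = new ArrayList<>();
            for (final long[] row : input) {
                futures.add(executor.submit(new Callable<Long>() {
                    public Long call() {
                        long max = Long.MIN_VALUE;
                        for (long v : row) {
                            max = Math.max(max, v);
                        }
                        return max;
                    }
                }));
            }
            long max = Long.MIN_VALUE;
            for (Future<Long> f : futures) {
                max = Math.max(max, f.get());   // blocks until that row's result is ready
            }
            return max;
        } finally {
            executor.shutdown();
        }
    }
}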
I think this is not the proper way to start and join the threads. You should use a thread pool instead.
The following sample code demonstrates a thread pool:
ExecutorService workers = Executors.newFixedThreadPool(10);
for (int i = 0; i < input.length; i++) {
    Teste task = new Teste(input[i], max);
    workers.execute(task);
}
workers.shutdown();
while (!workers.isTerminated()) {
    try {
        Thread.sleep(10000);
    } catch (InterruptedException exception) {
    }
    System.out.println("waiting for submitted task to finish operation");
}
Hope this helps.
Unless the array is fairly large it will be faster to do the search in one thread. However, if the size is in the thousands or more, I suggest you use an ExecutorService, which is a simple way to manage tasks.
However, the simplest change is to store the result in an AtomicLong; that way your Runnables don't need to return a result.
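For instance, a sketch of the AtomicLong idea; the Runnable here is illustrative, not the asker's teste class:

import java.util.concurrent.atomic.AtomicLong;

public class AtomicMaxExample {
    // All row workers fold their result into one shared maximum without locks.
    static final AtomicLong globalMax = new AtomicLong(Long.MIN_VALUE);

    static Runnable rowTask(final long[] row) {
        return new Runnable() {
            public void run() {
                long max = Long.MIN_VALUE;
                for (long v : row) {
                    max = Math.max(max, v);
                }
                // CAS loop: only replace the global value if ours is larger.
                long current;
                while (max > (current = globalMax.get())) {
                    if (globalMax.compareAndSet(current, max)) {
                        break;
                    }
                }
            }
        };
    }
}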
You can add a new field to your teste class that holds that row's maximum. The main thread stops at x.join(), so after the joins you can read that field from each runnable and take the largest value. Keep the teste objects in their own list (called runnables below, purely for illustration) so you can still reach them:
// ...
long max = Long.MIN_VALUE;
for (Thread x : threads) {
    x.join();
}
for (teste r : runnables) {   // the same teste objects you wrapped in Threads
    max = Math.max(max, r.getMax());
}
// ...
In a multi-threaded environment, in order to have thread-safe array element swapping, we perform synchronized locking:
// a is a char array.
synchronized (a) {
    char tmp = a[1];
    a[1] = a[0];
    a[0] = tmp;
}
Is it possible to make use of the following API in the above situation, so that we can have lock-free array element swapping? If yes, how?
http://java.sun.com/j2se/1.5.0/docs/api/java/util/concurrent/atomic/AtomicReferenceFieldUpdater.html#compareAndSet%28T,%20V,%20V%29
Regardless of the API used, you won't be able to achieve array element swapping that is both thread-safe and lock-free in Java.
The element swapping requires multiple read and update operations that need to be performed atomically. To simulate the atomicity you need a lock.
EDIT:
An alternative to lock-free algorithm might be micro-locking: instead of locking the entire array it’s possible to lock only elements that are being swapped.
The value of this approach is questionable, though. That is to say, if the algorithm that requires swapping elements can guarantee that different threads are going to work on different parts of the array, then no synchronisation is required.
In the opposite case, when different threads can actually attempt to swap overlapping elements, thread execution order will matter. For example, if one thread tries to swap elements 0 and 1 of the array and another simultaneously attempts to swap 1 and 2, then the result will depend entirely on the order of execution: for an initial {'a','b','c'} you can end up with either {'b','c','a'} or {'c','a','b'}. Hence you'd require a more sophisticated synchronisation.
Here is a quick and dirty class for character arrays that implements micro locking:
import java.util.concurrent.atomic.AtomicIntegerArray;

class SyncCharArray {
    final private char array[];
    final private AtomicIntegerArray locktable;

    SyncCharArray(char array[]) {
        this.array = array;
        // create a lock table the size of the array
        // to track currently locked elements
        this.locktable = new AtomicIntegerArray(array.length);
        // elements start unlocked (AtomicIntegerArray is zero-initialised anyway)
        for (int i = 0; i < array.length; i++) unlock(i);
    }

    void swap(int idx1, int idx2) {
        // return if the same element
        if (idx1 == idx2) return;
        // lock the element with the smaller index first to avoid possible deadlock
        lock(Math.min(idx1, idx2));
        lock(Math.max(idx1, idx2));
        char tmp = array[idx1];
        array[idx1] = array[idx2];
        unlock(idx1);
        array[idx2] = tmp;
        unlock(idx2);
    }

    private void lock(int idx) {
        // if the required element is locked, then wait ...
        while (!locktable.compareAndSet(idx, 0, 1)) Thread.yield();
    }

    private void unlock(int idx) {
        locktable.set(idx, 0);
    }
}
You’d need to create the SyncCharArray and then pass it to all threads that require swapping:
char array[] = {'a','b','c','d','e','f'};
SyncCharArray sca = new SyncCharArray(array);
// then pass sca to any threads that require swapping
// then within a thread (indices must be within the array bounds)
sca.swap(5, 3);
Hope that makes some sense.
UPDATE:
Some testing demonstrated that unless you have a great number of threads accessing the array simultaneously (100+ on run-of-the-mill hardware), a simple synchronized (array) { ... } block works much faster than the elaborate synchronisation.
// lock-free swap of array[i] and array[j] (assumes the array contains only non-null elements)
static <T> void swap(AtomicReferenceArray<T> array, int i, int j) {
    while (true) {
        T ai = array.getAndSet(i, null);
        if (ai == null) continue;          // someone else holds slot i; retry
        T aj = array.getAndSet(j, null);
        if (aj == null) {
            array.set(i, ai);              // put slot i back and retry
            continue;
        }
        array.set(i, aj);
        array.set(j, ai);
        break;
    }
}
The closest you're going to get is java.util.concurrent.atomic.AtomicReferenceArray, which offers CAS-based operations such as boolean compareAndSet(int i, E expect, E update). It does not have a swap(int pos1, int pos2) operation though so you're going to have to emulate it with two compareAndSet calls.
"The principal threat to scalability in concurrent applications is the exclusive resource lock." - Java Concurrency in Practice.
I think you need a lock, but as others have mentioned, that lock can be more granular than it is at present.
You can use lock striping like java.util.concurrent.ConcurrentHashMap.
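A minimal lock-striping sketch for the swap case; the class name and stripe count are illustrative assumptions, and ConcurrentHashMap's internal scheme is more elaborate than this:

// Sketch: guard the array with a small, fixed set of stripe locks instead of one lock.
class StripedCharArray {
    private final char[] array;
    private final Object[] stripes;

    StripedCharArray(char[] array, int stripeCount) {
        this.array = array;
        this.stripes = new Object[stripeCount];
        for (int i = 0; i < stripeCount; i++) stripes[i] = new Object();
    }

    void swap(int i, int j) {
        // Acquire stripe locks in increasing stripe order to avoid deadlock
        // (synchronized is reentrant, so locking the same stripe twice is fine).
        int s1 = Math.min(i % stripes.length, j % stripes.length);
        int s2 = Math.max(i % stripes.length, j % stripes.length);
        synchronized (stripes[s1]) {
            synchronized (stripes[s2]) {
                char tmp = array[i];
                array[i] = array[j];
                array[j] = tmp;
            }
        }
    }
}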
The API you mentioned, as already stated by others, may only be used to set the value of a single object, not an array element. Nor even of two objects simultaneously, so you wouldn't have a safe swap anyway.
The solution depends on your specific situation. Can the array be replaced by another data structure? Is it also changing in size concurrently?
If you must use an array, it could be changed to hold updatable objects (not primitive types nor a Character), and you could synchronize over both objects being swapped. A data structure like this would work:
public class CharValue {
    public char c;
}
CharValue[] a = new CharValue[N];
Remember to use a deterministic synchronization order so that you don't get deadlocks (http://en.wikipedia.org/wiki/Deadlock#Circular_wait_prevention)! You could simply follow index ordering to avoid them.
If items should also be added or removed concurrently from the collection, you could use a Map instead, synchronize swaps on the Map.Entry objects, and use a synchronized Map implementation. A plain List wouldn't do it because there are no isolated structures for retaining the values (or you don't have access to them).
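A sketch of that index-ordered locking for a CharValue array (a minimal illustration of the suggestion above; it assumes the array slots themselves are not reassigned concurrently):

// Swap a[i].c and a[j].c, locking the two holder objects in index order
// so that concurrent swaps cannot deadlock.
static void swap(CharValue[] a, int i, int j) {
    if (i == j) return;
    CharValue first = a[Math.min(i, j)];
    CharValue second = a[Math.max(i, j)];
    synchronized (first) {
        synchronized (second) {
            char tmp = a[i].c;
            a[i].c = a[j].c;
            a[j].c = tmp;
        }
    }
}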
I don't think the AtomicReferenceFieldUpdater is meant for array access, and even if it were, it only provides atomic guarantees on one reference at a time. AFAIK, all the classes in java.util.concurrent.atomic only provide atomic access to one reference at a time. In order to change two or more references as one atomic operation, you must use some kind of locking.