I am exploring a problem which is likely a special case of a broader problem class, but I don't know the problem class nor the appropriate terminology, so I have to resort to describing the problem using ad-hoc vocabulary. I'll rephrase once I know the right terminology.
I have a bunch of singletons A, B, C. The singletons are:
Unrelated. There are no constraints like "you must access B before you can do X with C" or similar.
Not thread-safe.
The system accepts tasks to be processed in parallel as far as possible.
Each task consists of a sequence of actions, each action to be executed using one of the singletons. Different tasks may access different singletons in different orders, and tasks may contain loops of actions.
Pseudocode:
void myTask(in1, in2, ...) {
doWithA(() -> {
// use in1, in2, ...
// inspect and/or update A
// set up outputs to be used as inputs for the next action:
outA1 = ...
outA2 = ...
...
});
doWithB(() -> {
// use outA1, outA2, ...
// inspect and/or update B
// set up outputs to be used as inputs for the next action:
outB1 = ...
outB2 = ...
...
});
// Tasks may touch singletons repeatedly, in any order
doWithA(() -> {
// outB1, outB2, ..., inspect/modify A, set up outputs
outAx1 = ...
outAx2 = ...
...
});
// Tasks may have loops:
while (conditionInC(() -> ...)) {
doWithC(() -> ...);
doWithD(() -> ...);
}
// I am aware that a loop like this can cause a livelock.
// That's an aspect for another question, on another day.
}
There are multiple tasks like myTask above.
Tasks to be executed are wrapped in a closure and scheduled to a ThreadPoolExecutor (or something similar).
Approaches I considered:
Have singletons LockA, LockB, ...
Each doWithX is merely a synchronized(X) block.
OutXn are local variables of myTask.
Problem: One of the singletons is Swing, and I can't move the EDT into a thread that I manage.
As above. Solve the Swing problem from approach (1) by coding doWithSwing(){...} as SwingUtilities.invokeAndWait(() -> {...}).
Problem: invokeAndWait is generally considered prone to deadlock. How do I find out whether the pattern above gets me into this kind of trouble?
Have threads threadA, threadB, ..., each of them "owning" one of the singletons (Swing already has this, it is the EDT).
doWithX schedules the block as a Runnable on threadX.
outXn are set up as Future<...> outXn = new SettableFuture<>(), the assignments become outXn.set(...).
Problem: I couldn't find anything like SettableFuture in the JDK; all ways to create a Future that I could find were somehow tied to a thread pool. Maybe I am looking at the wrong top-level interface and Future is a red herring?
Which of these approaches would be best?
Is there a superior approach that I didn't consider?
I don't know the problem class nor the appropriate terminology
I'd probably just refer to the problem class as concurrent task orchestration.
There's a lot of things to consider when identifying the right approach. If you provide some more details, I'll try to update my answer with more color.
There are no constraints like "you must access B before you can do X with C" or similar.
This is generally a good thing. A very common cause of deadlocks is different threads acquiring the same locks in differing orders: e.g., thread 1 holds lock A and waits to acquire B, while thread 2 holds lock B and waits to acquire A. Designing the solution so that this situation cannot occur is very important.
I couldn't find anything like SettableFuture in the JDK
Take a look at java.util.concurrent.CompletableFuture<T> - this is probably what you want here. It exposes a blocking get() as well as a number of asynchronous completion callbacks such as thenAccept(Consumer<? super T>).
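A minimal sketch of using it as a "settable future" (the name outA1 is taken from your pseudocode; the rest is illustrative):
import java.util.concurrent.CompletableFuture;

class SettableFutureSketch {
    public static void main(String[] args) {
        // A CompletableFuture can be completed "manually", like the SettableFuture you were looking for.
        CompletableFuture<String> outA1 = new CompletableFuture<>();

        // Consumer side: react asynchronously once the value arrives
        // (or call outA1.get() / outA1.join() to block instead).
        outA1.thenAccept(value -> System.out.println("got " + value));

        // Producer side (e.g. inside doWithA): publish the result.
        outA1.complete("result of step A");
    }
}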
invokeAndWait is generally considered prone to deadlock
It depends. If your calling thread isn't holding any locks that are going to be necessary for the execution of the Runnable you're submitting, you're probably okay. That said, if you can base your orchestration on asynchronous callbacks, you can instead use SwingUtilities.invokeLater(Runnable) - this will submit the execution of your Runnable on the Swing event loop without blocking the calling thread.
I would probably avoid creating a thread per singleton. Each running thread contributes some overhead and it's better to decouple the number of threads from your business logic. This will allow you to tune the software to different physical machines based on the number of cores, for example.
It sounds like you need each runWithX(...) method to be atomic. In other words, once one thread has begun accessing X, another thread cannot do so until the first thread is finished with its task step. If this is the case, then creating a lock object per singleton and ensuring serial (rather than parallel) access is the right way to go. You can achieve this by wrapping the execution of the closures that get submitted to your runWithX(...) methods in a synchronized Java code block. The code within the block is also referred to as the critical section or monitor region.
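A minimal sketch of such a wrapper, assuming a dedicated lock object per singleton (the names are illustrative; the TaskHelper further below builds on the same idea):
import java.util.function.Supplier;

class SingletonGuards {
    private static final Object LOCK_A = new Object();

    // All access to singleton A funnels through this method,
    // so at most one task step touches A at a time.
    static <T> T runWithA(Supplier<T> step) {
        synchronized (LOCK_A) { // critical section / monitor region
            return step.get();
        }
    }
}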
Another thing to consider is thread contention and order of execution. If two tasks both require access to X and task 1 gets submitted before task 2, is it a requirement that task 1's access to X occurs before task 2's? A requirement like that can complicate the design quite a bit and I would probably recommend a different approach than outlined above.
Is there a superior approach that I didn't consider?
These days there are frameworks out there for solving these types of problems. I'm specifically thinking of reactive streams and RxJava. While it is a very powerful framework, it also comes with a very steep learning curve. A lot of analysis and consideration should be done before adopting such a technology within an organization.
Update:
Based on your feedback, I think a CompletableFuture-based approach probably makes the most sense.
I'd create a helper class to orchestrate task step execution:
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executor;
import java.util.concurrent.Executors;
import java.util.function.Supplier;
import javax.swing.SwingUtilities;

class TaskHelper
{
private final Object lockA;
private final Object lockB;
private final Object lockC;
private final Executor poolExecutor;
private final Executor swingExecutor;
public TaskHelper()
{
poolExecutor = Executors.newFixedThreadPool( 2 );
swingExecutor = SwingUtilities::invokeLater;
lockA = new Object();
lockB = new Object();
lockC = new Object();
}
public <T> CompletableFuture<T> doWithA( Supplier<T> taskStep )
{
return doWith( lockA, poolExecutor, taskStep );
}
public <T> CompletableFuture<T> doWithB( Supplier<T> taskStep )
{
return doWith( lockB, poolExecutor, taskStep );
}
public <T> CompletableFuture<T> doWithC( Supplier<T> taskStep )
{
return doWith( lockC, swingExecutor, taskStep );
}
private <T> CompletableFuture<T> doWith( Object lock, Executor executor, Supplier<T> taskStep )
{
CompletableFuture<T> future = new CompletableFuture<>();
// Hold the singleton's lock while the step runs, then complete the future once the lock is released.
Runnable serialTaskStep = () -> {
T result;
synchronized ( lock ) {
result = taskStep.get();
}
future.complete( result );
};
executor.execute( serialTaskStep );
return future;
}
}
In my example above, doWithA and doWithB get scheduled on a shared thread pool while doWithC is always executed on the Swing thread. The Swing Executor is already serial in nature, so the lock is really optional there.
For creating actual tasks, I'd recommend creating an object for each task. This allows you to supply callbacks as method references, resulting in cleaner code and avoiding callback hell:
This example computes the square of a provided number on a background thread pool and then displays the results on the Swing thread:
class SampleTask
{
private final TaskHelper helper;
private final String id;
private final int startingValue;
public SampleTask( TaskHelper helper, String id, int startingValue )
{
this.helper = helper;
this.id = id;
this.startingValue = startingValue;
}
private void start()
{
helper.doWithB( () -> {
int square = startingValue * startingValue;
return String.format( "computed-thread: %s computed-square: %d",
Thread.currentThread().getName(), square );
} )
.thenAccept( this::step2 );
}
private void step2( String result )
{
helper.doWithC( () -> {
String message = String.format( "current-thread: %s task: %s result: %s",
Thread.currentThread().getName(), id, result );
JOptionPane.showConfirmDialog( null, message );
return null;
} );
}
}
@Test
public void testConcurrent() throws InterruptedException, ExecutionException
{
TaskHelper helper = new TaskHelper();
new SampleTask( helper, "task1", 5 ).start();
new SampleTask( helper, "task2", 7 ).start();
Thread.sleep( 60000 );
}
Update 2:
If you want to avoid callback hell while also avoiding the need to create an object per task, perhaps you should take a serious look at reactive streams after all.
Take a look at the "getting started" page for RxJava:
https://github.com/ReactiveX/RxJava/wiki/How-To-Use-RxJava
For reference here's how the same example above would look in Rx (I'm removing the concept of task ID for simplicity):
@Test
public void testConcurrentRx() throws InterruptedException
{
Scheduler swingScheduler = Schedulers.from( SwingUtilities::invokeLater );
Subject<Integer> inputSubject = PublishSubject.create();
inputSubject
.flatMap( input -> Observable.just( input )
.subscribeOn( Schedulers.computation() )
.map( this::computeSquare ))
.observeOn( swingScheduler )
.subscribe( this::displayResult );
inputSubject.onNext( 5 );
inputSubject.onNext( 7 );
Thread.sleep( 60000 );
}
private String computeSquare( int input )
{
int square = input * input;
return String.format( "computed-thread: %s computed-square: %d",
Thread.currentThread().getName(), square );
}
private void displayResult( String result )
{
String message = String.format( "current-thread: %s result: %s",
Thread.currentThread().getName(), result );
JOptionPane.showConfirmDialog( null, message );
}
I am not sure about the safety of reading/writing instance variables from an RxJava chain with different schedulers. Here is a small example:
public class RxJavaThreadSafety {
private int variable = 0;
// First call
public void doWriting() {
Single.just(255)
.doOnSuccess(
newValue -> variable = newValue
)
.subscribeOn(Schedulers.io())
.subscribe();
}
// Second call
public void doReadingRxChain() {
Single.fromCallable((Callable<Integer>) () -> variable)
.subscribeOn(Schedulers.computation())
.subscribe(
result -> System.out.println(result)
);
}
// Third call
public void doReading() {
System.out.println(variable);
}
}
For simplicity, let's assume that these three methods are called one after another.
My question: is it thread-safe to set the variable "in" the io scheduler, and later read this variable "from" the computation scheduler or the main thread?
I think it is not thread-safe, but I want some RxJava and concurrency experts to confirm it.
No, this is not thread safe.
When you use subscribeOn it means that calling subscribe() adds the task for producing the item to the work queue of a scheduler.
The doWriting() and doReadingRxChain() methods add tasks to different schedulers. There is no guarantee that the chain in doWriting() will even start to run before doReadingRxChain(). This can happen for example if all IO threads are busy.
There is a more fundamental problem: you are writing the value of variable in one thread and reading it in another. Without any concurrency controls, nothing guarantees that the new value of variable is seen by the thread reading it. One way to fix that is declaring the variable as volatile:
private volatile int variable = 0;
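If you also need atomic read-modify-write operations rather than just visibility, an AtomicInteger is a common alternative; a minimal sketch (the class name is illustrative):
import java.util.concurrent.atomic.AtomicInteger;

class RxJavaThreadSafetyAtomic {
    // Gives the same visibility guarantee as volatile, plus atomic updates such as incrementAndGet().
    private final AtomicInteger variable = new AtomicInteger(0);

    void write(int newValue) {
        variable.set(newValue);
    }

    int read() {
        return variable.get();
    }
}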
I want to create two threads in my application that'll run two methods. I'm using the builder design pattern, and inside the build method I have something like this (request is the object that is passed):
Rules rule;
Request build() {
Request request = new Request(this);
//I want one thread to call this method
Boolean isExceeding = this.rule.volumeExceeding(request);
//Another thread to call this method
Boolean isRepeating = this.rule.volumeRepeating(request);
//Some sort of timer that will wait until both values are received,
//If one value takes too long to be received kill the thread and continue with
//whatever value was received.
..Logic based on 2 booleans..
return request;
}
Here's what this class looks like:
public class Rules {
public Boolean volumeExceeding(Request request) {
...some...logic...
return true/false;
}
public Boolean volumeRepeating(Request request) {
...some...logic...
return true/false;
}
}
I have commented in the code what I'd like to happen. Basically, I'd like to create two threads that'll each run their respective method. The build method should wait until both are finished; however, if one takes too long (for example, more than 10 ms), it should continue with whatever value was completed. How do I create this? I'm trying to understand the multithreading tutorials, but the examples are so generic that it's hard to take what they did and apply it to something more complicated.
One way to do that is to use CompletableFutures:
import java.util.concurrent.CompletableFuture;
class Main {
private static final long timeout = 1_000; // 1 second
static Boolean volumeExceeding(Object request) {
System.out.println(Thread.currentThread().getName());
final long startpoint = System.currentTimeMillis();
// do stuff with request but we do dummy stuff
for (int i = 0; i < 1_000_000; i++) {
if (System.currentTimeMillis() - startpoint > timeout) {
return false;
}
Math.log(Math.sqrt(i));
}
return true;
}
static Boolean volumeRepeating(Object request) {
System.out.println(Thread.currentThread().getName());
final long startpoint = System.currentTimeMillis();
// do stuff with request but we do dummy stuff
for (int i = 0; i < 1_000_000_000; i++) {
if (System.currentTimeMillis() - startpoint > timeout) {
return false;
}
Math.log(Math.sqrt(i));
}
return true;
}
public static void main(String[] args) {
final Object request = new Object();
CompletableFuture<Boolean> isExceedingFuture = CompletableFuture.supplyAsync(
() -> Main.volumeExceeding(request));
CompletableFuture<Boolean> isRepeatingFuture = CompletableFuture.supplyAsync(
() -> Main.volumeRepeating(request));
Boolean isExceeding = isExceedingFuture.join();
Boolean isRepeating = isRepeatingFuture.join();
System.out.println(isExceeding);
System.out.println(isRepeating);
}
}
Notice that one task takes significantly longer than the other.
What's happening? You supply those tasks to the common pool by using CompletableFuture for execution. Both tasks are executed by two different threads. What you've asked for is that a task is stopped when it takes too long. Therefore you can simply remember the time when a task started and periodically check it against a timeout. Important: do this check at points where the task can return while leaving the data in a consistent state. Also note that you can of course place multiple checks.
Here's a nice guide about CompletableFuture: Guide To CompletableFuture
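If you would rather enforce the timeout from the caller's side than inside the tasks, here is a hedged sketch (assuming it is acceptable to fall back to a default value when a result arrives late):
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

class TimeoutFallback {
    static Boolean getOrDefault(CompletableFuture<Boolean> future, long millis, Boolean fallback) {
        try {
            return future.get(millis, TimeUnit.MILLISECONDS); // wait at most `millis`
        } catch (TimeoutException e) {
            // Note: cancelling a CompletableFuture does not interrupt the running task;
            // the task itself must still check its own timeout, as in the example above.
            future.cancel(true);
            return fallback;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}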
If I understand your question correctly, then you should do this with a ticketing system (also known as the provider-consumer or producer-consumer pattern), so your threads are reused (which is a significant performance boost if those operations are time critical).
The general idea should be:
application initialization
Initialize 2 or more "consumer" threads, which can work tickets (also called jobs).
runtime
Feed the consumer threads tickets (or jobs) that will be waited on for (about) as long as you like. However, depending on the JVM, the waiting period will most likely not be exactly n milliseconds, as schedulers are usually 'lax' about waiting periods for timeouts. E.g., Thread.sleep() will almost always be off by a bunch of milliseconds (always late, never early, to my knowledge).
If the thread does not return within the given waiting period, then that result must be disregarded (according to your logic), and the ticket (and thus the thread) must be informed to abort that ticket. It is important that you do not interrupt the thread, since that can lead to exceptions or prevent locks from being unlocked.
Remember that halting or stopping threads from the outside is almost always problematic with locks, so I would suggest that your jobs visit a possible exit point periodically; that way, if you stop caring about a result, they can be safely terminated (see the sketch below).
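A minimal sketch of this ticket/consumer idea, assuming a BlockingQueue of tickets and a cooperative "abandoned" flag that the job checks at its safe exit points (all names are illustrative):
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.atomic.AtomicBoolean;

class Ticket implements Runnable {
    final AtomicBoolean abandoned = new AtomicBoolean(false); // set by the submitter when it stops caring
    private final Runnable work;

    Ticket(Runnable work) {
        this.work = work;
    }

    @Override
    public void run() {
        if (!abandoned.get()) { // safe exit point; real jobs would also check this periodically inside work
            work.run();
        }
    }
}

class Consumer implements Runnable {
    private final BlockingQueue<Ticket> tickets;

    Consumer(BlockingQueue<Ticket> tickets) {
        this.tickets = tickets;
    }

    @Override
    public void run() {
        try {
            while (true) {
                tickets.take().run(); // blocks until the next ticket arrives; the thread is reused
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // shutdown signal for the consumer itself
        }
    }
}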
I think I'm having race conditions when running my multithreaded Java program.
It's a permutation algorithm that I want to speed up by running multiple instances with different values. So I start the threads in the Main class with:
Runnable[] mcl = new MCL[n1];
for (int thread_id = 0; thread_id < n1; thread_id ++)
{
mcl[thread_id] = new MCL(thread_id);
new Thread(mcl[thread_id]).start();
Thread.sleep(100);
}
And it runs those MCL class instances.
Again, I think the threads are accessing the same memory space of the MCL class variables, am I right? If so, how can I solve this?
I'm trying to make all variables arrays, where one of the dimensions is related to the id of the thread, so that each thread writes to a different index. Is this a good solution?:
int[] foo = new int[n1]; // each thread writes only to foo[thread_id]
You can't just bolt on thread safety as an afterthought, it needs to be an integral part of your data flow design.
To start, research and learn the following topics:
1) Synchronized blocks, mutexes, and final variables. A good place to start: Tutorial. I also love Josh Bloch's Effective Java, which although a few years old has golden nuggets for writing correct Java programs.
2) Oracle's Concurrency Tutorial
3) Learn about Executors. You shouldn't have to manage threads directly except in the most extreme cases. See this tutorial
If you pass non thread safe objects between threads you're going to see unpredictable results. Unpredictable means assignments may never show up between different threads, or objects may be left in invalid states (especially if you've got multiple member fields that have data dependent on each other).
Without seeing the MCL class we can't give you any specific details on what's dangerous, but given the code sample you've posted I think you should take a step back and do some research. In the long run it will save you time to learn it the right way rather than troubleshoot an incorrect concurrency scheme.
If you want to keep the thread data separate, store it as instance variables in the Runnables (initializing each Runnable before starting its thread). Don't keep a reference to it in a shared array; that's just inviting trouble.
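A minimal sketch of that idea, assuming MCL's working data can live in each instance (the field names are illustrative):
class MCL implements Runnable {
    private final int threadId;
    private final int[] foo = new int[500]; // per-instance working data, never shared between threads

    MCL(int threadId) {
        this.threadId = threadId;
    }

    @Override
    public void run() {
        // Each instance mutates only its own foo, so no synchronization is needed here.
        for (int i = 0; i < foo.length; i++) {
            foo[i] = i * threadId;
        }
    }
}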
You can use a CompletionService to get a computed value back for each task, wrapped in a Future, so you don't wait for it to be calculated until you actually need the value. The difference between a CompletionService and an Executor, which the commenters are recommending, is that the CompletionService uses an Executor for executing tasks but makes it easier to get your data back out; see this answer.
Here's an example of using a CompletionService. I'm using Callable instead of Runnable because I want to get a result back:
import java.math.BigInteger;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CompletionServiceExample {
public static void main(String[] args) throws Exception {
ExecutorService executorService = Executors.newCachedThreadPool();
ExecutorCompletionService<BigInteger> service =
new ExecutorCompletionService<BigInteger>(executorService);
MyCallable task1 = new MyCallable(new BigInteger("3"));
MyCallable task2 = new MyCallable(new BigInteger("5"));
Future<BigInteger> future1 = service.submit(task1);
Future<BigInteger> future2 = service.submit(task2);
System.out.println("submitted tasks");
System.out.println("result1=" + future1.get() );
System.out.println("result2=" + future2.get());
executorService.shutdown();
}
}
class MyCallable implements Callable<BigInteger> {
private BigInteger b;
public MyCallable(BigInteger b) {
this.b = b;
}
public BigInteger call() throws Exception {
// do some number-crunching thing
Thread.sleep(b.multiply(new BigInteger("100")).longValue());
return b;
}
}
Alternatively you can use the take method to retrieve results as they get completed:
public class TakeExample {
public static void main(String[] args) throws Exception {
ExecutorService executorService = Executors.newCachedThreadPool();
ExecutorCompletionService<BigInteger> service = new
ExecutorCompletionService<BigInteger>(executorService);
MyCallable task1 = new MyCallable(new BigInteger("10"));
MyCallable task2 = new MyCallable(new BigInteger("5"));
MyCallable task3 = new MyCallable(new BigInteger("8"));
service.submit(task1);
service.submit(task2);
service.submit(task3);
Future<BigInteger> futureFirst = service.take();
System.out.println(futureFirst.get());
Future<BigInteger> futureSecond = service.take();
System.out.println(futureSecond.get());
Future<BigInteger> futureThird = service.take();
System.out.println(futureThird.get());
executorService.shutdown();
}
}
My main class generates multiple threads based on some rules (20-40 threads that live for a long time).
Each of those threads creates several short-lived threads --> I am using an executor for this.
I need to work on multi-dimensional arrays in the short-lived threads --> I wrote it as in the code below, but I think it is not efficient, since I pass the array so many times to so many threads / tasks. I tried to access it directly from the threads (by declaring it as public --> no success) --> I will be happy to get comments / advice on how to improve this.
As a next step I also want to return a one-dimensional array as a result (it might be better to just update it in the AssetFactory class) --> and I am not sure how to.
please see the code below.
thanks
Paz
import java.util.concurrent.*;
import java.util.logging.Level;
public class AssetFactory implements Runnable{
private volatile boolean stop = false;
private volatile String feed ;
private double[][][] PeriodRates= new double[10][500][4];
private String TimeStr,Bid,periodicalRateIndicator;
private final BlockingQueue<String> workQueue;
ExecutorService IndicatorPool = Executors.newCachedThreadPool();
public AssetFactory(BlockingQueue<String> workQueue) {
this.workQueue = workQueue;
}
@Override
public void run(){
while (!stop) {
try{
feed = workQueue.take();
periodicalRateIndicator = CheckPeriod(TimeStr, Bid) ;
if (periodicalRateIndicator.length() >0) {
IndicatorPool.submit(new CalcMvg(periodicalRateIndicator,PeriodRates));
}
if ("Stop".equals(feed)) {
stop = true;
}
} // try
catch (InterruptedException ex) {
logger.log(Level.SEVERE, null, ex);
stop = true;
}
} // while
} // run
} // class
Here is the CalcMVG class
public class CalcMvg implements Runnable {
private double [][][] PeriodRates = new double[10][500][4];
public CalcMvg(String Periods, double[][][] PeriodRates) {
System.out.println(Periods);
this.PeriodRates = PeriodRates ;
}
@Override
public void run(){
try{
// do some work with the data of the PeriodRates array, e.g. print it (no changes to the array)
System.out.println(PeriodRates[1][1][1]);
}
catch (Exception ex){
System.out.println(Thread.currentThread().getName() + ex.getMessage());
logger.log(Level.SEVERE, null, ex);
}
}//run
} // mvg class
There are several things going on here which seem to be wrong, but it is hard to give a good answer with the limited amount of code presented.
First the actual coding issues:
There is no need to define a variable as volatile if only one thread ever accesses it (stop, feed)
You should declare variables that are only used in a local context (run method) locally in that function and not globally for the whole instance (almost all variables). This allows the JIT to do various optimizations.
The InterruptedException should terminate the thread, because it is thrown as a request to terminate the thread's work.
In your code example the workQueue doesn't seem to do anything but to put the threads to sleep or stop them. Why doesn't it just immediately feed the actual worker-threads with the required workload?
And then the code structure issues:
You use threads to feed threads with work. This is inefficient, as you only have a limited number of cores that can actually do the work. As the execution order of threads is undefined, it is likely that the IndicatorPool is either mostly idle or overflowing with tasks that have not yet been done.
If you have a finite set of work to be done, the ExecutorCompletionService might be helpful for your task.
I think you will gain the best speed increase by redesigning the code structure. Imagine the following (assuming that I understood your question correctly):
There is a blocking queue of tasks that is fed by some data source (e.g. file-stream, network).
A set of worker threads equal to the number of cores is waiting on that data source for input, which is then processed and put into a completion queue.
A specific data set is the "terminator" for your work (e.g. "null"). If a thread encounters this terminator, it finishes its loop and shuts down.
Now the following holds true for this construct:
Case 1: The data source is the bottleneck. It cannot be sped up by using multiple threads, as your hard disk/network won't work faster if you ask more often.
Case 2: The processing power on your machine is the bottleneck, as you cannot process more data than the worker threads/cores on your machine can handle.
In both cases the conclusion is that the worker threads need to be the ones that ask for new data as soon as they are ready to process it: either they are put on hold (waiting for data) or they throttle the incoming data. This will ensure maximum throughput.
If all worker threads have terminated, the work is done. This can be tracked, e.g., through the use of a CyclicBarrier or Phaser.
Pseudo-code for the worker threads:
public void run() {
DataType e;
try {
while ((e = dataSource.next()) != null) {
process(e);
}
barrier.await();
} catch (InterruptedException | BrokenBarrierException ex) {
// treat either exception as a request to shut this worker down
}
}
I hope this is helpful on your case.
Passing the array as an argument to the constructor is a reasonable approach, although unless you intend to copy the array it isn't necessary to initialize PeriodRates with a large array. It seems wasteful to allocate a large block of memory and then reassign its only reference straight away in the constructor. I would initialize it like this:
private final double [][][] PeriodRates;
public CalcMvg(String Periods, double[][][] PeriodRates) {
System.out.println(Periods);
this.PeriodRates = PeriodRates;
}
The other option is to define CalcMvg as an inner class of AssetFactory and declare PeriodRates as final. This would allow instances of CalcMvg to access PeriodRates in the outer instance of AssetFactory.
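A minimal sketch of that alternative (only the relevant parts of AssetFactory are shown; the rest stays as in your code):
public class AssetFactory {
    private final double[][][] PeriodRates = new double[10][500][4];

    // Inner (non-static) class: each instance can read the enclosing
    // AssetFactory's PeriodRates directly, so the array need not be passed in.
    class CalcMvg implements Runnable {
        @Override
        public void run() {
            System.out.println(PeriodRates[1][1][1]);
        }
    }

    Runnable newCalcMvg() {
        return new CalcMvg();
    }
}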
Returning the result is more difficult since it involves publishing the result across threads. One way to do this is to use synchronized methods:
private double[] result = null;
private synchronized void setResult(double[] result) {
this.result = result;
}
public synchronized double[] getResult() {
if (result == null) {
throw new RuntimeException("Result has not been initialized for this instance: " + this);
}
return result;
}
There are more advanced multi-threading concepts available in the Java libraries, e.g. Future, that might be appropriate in this case.
Regarding your concerns about the number of threads, letting a library class manage the allocation of work to a thread pool might address this. Something like an Executor might help.
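For example, here is a sketch of the Future route, assuming CalcMvg is reworked into a Callable that returns the one-dimensional result (the Callable variant shown here is hypothetical):
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical Callable variant of CalcMvg that returns a one-dimensional result.
class CalcMvgCallable implements Callable<double[]> {
    private final double[][][] periodRates;

    CalcMvgCallable(double[][][] periodRates) {
        this.periodRates = periodRates;
    }

    @Override
    public double[] call() {
        // compute and return the result instead of publishing it through a field
        return new double[] { periodRates[1][1][1] };
    }
}

class FutureUsage {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newCachedThreadPool();
        Future<double[]> result = pool.submit(new CalcMvgCallable(new double[10][500][4]));
        double[] values = result.get(); // blocks until the computation is done
        System.out.println(values[0]);
        pool.shutdown();
    }
}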
There is a huge number of tasks.
Each task belongs to a single group. The requirement is that each group of tasks should be executed serially, just as if they were executed in a single thread, and the throughput should be maximized in a multi-core (or multi-CPU) environment. Note: there is also a huge number of groups, proportional to the number of tasks.
The naive solution is to use a ThreadPoolExecutor and synchronized (or a lock). However, threads would block each other and the throughput would not be maximized.
Any better idea? Or does a third-party library exist that satisfies the requirement?
A simple approach would be to "concatenate" all of a group's tasks into one super task, thus making the sub-tasks run serially. But this will probably cause delays in other groups, which will not start unless some other group completely finishes and makes some space in the thread pool.
As an alternative, consider chaining a group's tasks. The following code illustrates it:
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class MultiSerialExecutor {
private final ExecutorService executor;
public MultiSerialExecutor(int maxNumThreads) {
executor = Executors.newFixedThreadPool(maxNumThreads);
}
public void addTaskSequence(List<Runnable> tasks) {
executor.execute(new TaskChain(tasks));
}
private void shutdown() {
executor.shutdown();
}
private class TaskChain implements Runnable {
private List<Runnable> seq;
private int ind;
public TaskChain(List<Runnable> seq) {
this.seq = seq;
}
@Override
public void run() {
seq.get(ind++).run(); //NOTE: No special error handling
if (ind < seq.size())
executor.execute(this);
}
}
}
The advantage is that no extra resource (thread/queue) is being used, and that the granularity of tasks is better than in the naive approach. The disadvantage is that all of a group's tasks should be known in advance.
--edit--
To make this solution generic and complete, you may want to decide on error handling (i.e. whether a chain continues even if an error occurs), and it would also be a good idea to implement ExecutorService and delegate all calls to the underlying executor.
I would suggest using task queues:
For every group of tasks you have, create a queue and insert all tasks from that group into it.
Now all your queues can be executed in parallel, while the tasks inside one queue are executed serially.
A quick Google search suggests that the Java API has no task/thread queues by itself, but there are many tutorials available on coding one. Feel free to list good tutorials / implementations if you know some (see the sketch below for one way to do it):
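A minimal sketch of building this yourself, assuming one small wrapper per group that drains its own queue on a shared backing pool (this mirrors the SerialExecutor example in the Executor javadoc; the names are illustrative):
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.Executor;

// One of these per group: its tasks run one after another, while different
// groups still execute in parallel on the shared backing executor.
class SerialGroupExecutor implements Executor {
    private final Queue<Runnable> tasks = new ArrayDeque<>();
    private final Executor backend;
    private Runnable active;

    SerialGroupExecutor(Executor backend) {
        this.backend = backend;
    }

    @Override
    public synchronized void execute(Runnable r) {
        tasks.offer(() -> {
            try {
                r.run();
            } finally {
                scheduleNext(); // hand the next task of this group to the pool
            }
        });
        if (active == null) {
            scheduleNext();
        }
    }

    private synchronized void scheduleNext() {
        active = tasks.poll();
        if (active != null) {
            backend.execute(active);
        }
    }
}
Keep one SerialGroupExecutor per group (e.g. in a ConcurrentHashMap keyed by group id) and submit each of a group's tasks to its executor; the shared backend pool then spreads the groups across the available cores.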
I mostly agree with Dave's answer, but if you need to slice CPU time across all "groups", i.e. all task groups should progress in parallel, you might find this kind of construct useful (using removal as a "lock"; this worked fine in my case, although I imagine it tends to use more memory):
class TaskAllocator {
private final ConcurrentLinkedQueue<Queue<Runnable>> entireWork
= childQueuePerTaskGroup();
public Queue<Runnable> lockTaskGroup(){
return entireWork.poll();
}
public void release(Queue<Runnable> taskGroup){
entireWork.offer(taskGroup);
}
}
and
class DoWork implements Runnable {
private final TaskAllocator allocator;
public DoWork(TaskAllocator allocator){
this.allocator = allocator;
}
public void run(){
for(;;){
Queue<Runnable> taskGroup = allocator.lockTaskGroup();
if(taskGroup == null){
//No more work
return;
}
Runnable work = taskGroup.poll();
if(work == null){
//This group is done
continue;
}
//Do work, but never forget to release the group to
// the allocator.
try {
work.run();
} finally {
allocator.release(taskGroup);
}
}//for
}
}
You can then use the optimum number of threads to run the DoWork task. It's a kind of round-robin load balancing.
You can even do something more sophisticated by using this instead of a simple queue in TaskAllocator (task groups with more tasks remaining tend to get executed):
ConcurrentSkipListSet<MyQueue<Runnable>> sophisticatedQueue =
new ConcurrentSkipListSet<>(new SophisticatedComparator());
where SophisticatedComparator is
class SophisticatedComparator implements Comparator<MyQueue<Runnable>> {
public int compare(MyQueue<Runnable> o1, MyQueue<Runnable> o2){
int diff = o2.size() - o1.size();
if(diff==0){
//This is crucial. You must assign unique ids to your
//Subqueue and break the equality if they happen to have same size.
//Otherwise your queues will disappear...
return o1.id - o2.id;
}
return diff;
}
}
Actors are another solution for this type of problem.
Scala has actors, and so does Java, provided by Akka.
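A minimal sketch with Akka's classic (untyped) Java API, assuming one actor per group; an actor processes its mailbox one message at a time, which gives serial execution within a group while different groups run in parallel on the dispatcher:
import akka.actor.AbstractActor;
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;

public class ActorGroupsExample {

    // One actor per task group: messages (tasks) sent to it are processed strictly one after another.
    public static class GroupActor extends AbstractActor {
        @Override
        public Receive createReceive() {
            return receiveBuilder()
                    .match(Runnable.class, Runnable::run)
                    .build();
        }
    }

    public static void main(String[] args) {
        ActorSystem system = ActorSystem.create("groups");
        ActorRef group1 = system.actorOf(Props.create(GroupActor.class), "group-1");

        // Tasks sent to the same actor never overlap; tasks sent to different actors may run in parallel.
        group1.tell((Runnable) () -> System.out.println("task 1 of group 1"), ActorRef.noSender());
        group1.tell((Runnable) () -> System.out.println("task 2 of group 1"), ActorRef.noSender());
    }
}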
I had a problem similar to yours, and I used an ExecutorCompletionService that works with an Executor to complete collections of tasks.
Here is an extract from java.util.concurrent API, since Java7:
Suppose you have a set of solvers for a certain problem, each returning a value of some type Result, and would like to run them concurrently, processing the results of each of them that return a non-null value, in some method use(Result r). You could write this as:
void solve(Executor e, Collection<Callable<Result>> solvers)
throws InterruptedException, ExecutionException {
CompletionService<Result> ecs = new ExecutorCompletionService<Result>(e);
for (Callable<Result> s : solvers)
ecs.submit(s);
int n = solvers.size();
for (int i = 0; i < n; ++i) {
Result r = ecs.take().get();
if (r != null)
use(r);
}
}
So, in your scenario, every task will be a single Callable<Result>, and tasks will be grouped in a Collection<Callable<Result>>.
Reference:
http://docs.oracle.com/javase/7/docs/api/java/util/concurrent/ExecutorCompletionService.html