Understanding Parallel Execution of chained CompletableFutures - java

I have a question about how Java Streams and chained CompletableFutures perform.
My question is this: if I run the following code, calling execute() with 10 items in the list takes ~11 seconds to complete (number of items in the list plus 1). This is because I have two threads working in parallel: the first executes the digItUp operation, and once that's complete, the second executes the fillItBackIn operation, and the first starts processing digItUp on the next item in the list.
If I comment out line 36 (.collect(Collectors.toList())), the execute() method takes ~20 seconds to complete. The threads do not operate in parallel; for each item in the list, the digItUp operation completes, and then the fillItBackIn operation completes in sequence before the next item in the list is processed.
It's unclear to me why the exclusion of (.collect(Collectors.toList())) should change this behavior. Can someone explain?
The complete class:
package com.test;

import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.stream.Collectors;

public class SimpleExample {

    private final ExecutorService diggingThreadPool = Executors.newFixedThreadPool(1);
    private final ExecutorService fillingThreadPool = Executors.newFixedThreadPool(1);

    public SimpleExample() {
    }

    public static void main(String[] args) {
        List<Double> holesToDig = new ArrayList<>();
        Random random = new Random();
        for (int c = 0; c < 10; c++) {
            holesToDig.add(random.nextDouble(1000));
        }
        new SimpleExample().execute(holesToDig);
    }

    public void execute(List<Double> holeVolumes) {
        long start = System.currentTimeMillis();
        holeVolumes.stream()
            .map(volume -> {
                CompletableFuture<Double> digItUpCF = CompletableFuture.supplyAsync(() -> digItUp(volume), diggingThreadPool);
                return digItUpCF.thenApplyAsync(volumeDugUp -> fillItBackIn(volumeDugUp), fillingThreadPool);
            })
            .collect(Collectors.toList())
            .forEach(cf -> {
                Double volume = cf.join();
                System.out.println("Dug a hole and filled it back in. Net volume: " + volume);
            });
        System.out.println("Dug up and filled back in " + holeVolumes.size() + " holes in " + (System.currentTimeMillis() - start) + " ms");
    }

    public Double digItUp(Double volume) {
        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) {
        }
        System.out.println("Dug hole with volume " + volume);
        return volume;
    }

    public Double fillItBackIn(Double volumeDugUp) {
        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) {
        }
        System.out.println("Filled back in hole of volume " + volumeDugUp);
        return 0.0;
    }
}

The reason is that collect(Collectors.toList()) is a terminal operation, hence it triggers the stream pipeline (remember that streams are evaluated lazily). So when you call collect, all of the CompletableFuture instances are constructed and placed in the list. This means you end up with a list of CompletableFutures, where each one is in turn a chain composed of two stages; let's call them X and Y.
Every time the first thread executor finishes an X stage, it is free to process the X stage of the next composed CompletableFuture, while the other thread executor is processing stage Y of the previous CompletableFuture. This is the result that we intuitively expect.
On the other hand, when you don't call collect, forEach is the terminal operation. In that case every element in the stream is processed sequentially (to confirm, try switching to parallelStream()), so stages X and Y get executed for the first CompletableFuture. Only when stage Y for the previous stream element is finished will forEach move to the second element in the stream pipeline, and only then will a new CompletableFuture be mapped from the original Double value.
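To make the difference concrete, here is a rough imperative equivalent of the two variants (my own sketch, not part of the answer above; it reuses the question's digItUp, fillItBackIn and the two executors):

Without collect, map and forEach are fused per element:

for (Double volume : holeVolumes) {
    CompletableFuture<Double> cf = CompletableFuture
            .supplyAsync(() -> digItUp(volume), diggingThreadPool)
            .thenApplyAsync(v -> fillItBackIn(v), fillingThreadPool);
    cf.join(); // blocks here, so the next dig cannot even be submitted yet
}

With collect, all futures are created first and only then joined:

List<CompletableFuture<Double>> cfs = new ArrayList<>();
for (Double volume : holeVolumes) {
    cfs.add(CompletableFuture
            .supplyAsync(() -> digItUp(volume), diggingThreadPool)
            .thenApplyAsync(v -> fillItBackIn(v), fillingThreadPool)); // all work submitted up front
}
for (CompletableFuture<Double> cf : cfs) {
    cf.join(); // waiting only starts after everything is already in flight
}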

Love this question and M A's answer is awesome! I had a similar use case, and I was using RxJava there. It worked very well, but my colleagues challenged me to implement it without that. T.T
I tested your example and found a workaround that gives the same performance without collect. The trick is to let the cf.join() be executed in another thread.
.forEach(cf -> CompletableFuture.supplyAsync(cf::join, anotherThreadpool)
        // another thread pool for the join; or you can omit it and use the default ForkJoinPool.commonPool()
        .thenAccept(v -> System.out.println("Dug a hole and filled it back in. Net volume: " + v))
);
But I have to say, this might lead to potential issues as it lacks support for backpressure: if the upstream is infinite and fast but the consumer is too slow, all the quickly created CompletableFutures in the map operator accumulate and are submitted to the first pool, diggingThreadPool, eventually causing RejectedExecutionException, OOM, etc.
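If backpressure is a concern, a minimal sketch of one way to bound the work (my own illustration, not part of the answer above; the permit count of 4 and the use of java.util.concurrent.Semaphore are arbitrary choices) is to make the producer block once too many pipelines are in flight:

Semaphore inFlight = new Semaphore(4); // at most 4 holes in the pipeline at once (arbitrary limit)
holeVolumes.forEach(volume -> {
    try {
        inFlight.acquire(); // blocks the producer when too much work is already queued
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        return;
    }
    CompletableFuture.supplyAsync(() -> digItUp(volume), diggingThreadPool)
        .thenApplyAsync(v -> fillItBackIn(v), fillingThreadPool)
        .whenComplete((v, ex) -> {
            inFlight.release(); // free a slot as soon as this hole is finished
            if (ex == null) {
                System.out.println("Dug a hole and filled it back in. Net volume: " + v);
            }
        });
});

Blocking the producing thread in acquire() is a crude stand-in for real backpressure, but it keeps the queues of both executors bounded.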

Related

CompletableFuture: Await percentage complete

I am writing identical data in parallel to n nodes of a distributed system.
When n% of these nodes have been written to successfully, the remaining writes to the other nodes are unimportant as n% guarantees replication between the other nodes.
Java's CompletableFuture seems to have very close to what I want eg:
CompletableFuture.anyOf()
(Returns when the first future is complete) - avoids waiting unnecessarily, but returns too soon as I require n% completions
CompletableFuture.allOf()
(Returns when all futures complete) - avoids returning too soon but waits unnecessarily for 100% completion
I am looking for a way to return when a specific percentage of futures have completed.
For example if I supply 10 futures, return when 6 or 60% of these complete successfully.
For example, Bluebird JS has this feature with
Promise.some(promises, countThatNeedToComplete)
I was wondering if I could do something similar with a ThreadPoolExecutor or vanilla CompletableFuture in Java.
I believe you can achieve what you want using only what's already provided by CompletableFuture, but you'll have to implement additional control to know how many future tasks were already completed and, when you reach the number/percentage that you need, cancel the remaining tasks.
Below is a class to illustrate the idea:
public class CompletableSome<T>
{
    // AtomicInteger instead of a plain int: the thenAccept callbacks can run on different threads,
    // so the completion count must be incremented atomically (requires java.util.concurrent.atomic.AtomicInteger).
    private final AtomicInteger tasksCompleted = new AtomicInteger();
    private List<CompletableFuture<Void>> tasks;

    public CompletableSome(List<CompletableFuture<T>> tasks, int percentOfTasksThatMustComplete)
    {
        int minTasksThatMustComplete = tasks.size() * percentOfTasksThatMustComplete / 100;
        System.out.println(
            String.format("Need to complete at least %s%% of the %s tasks provided, which means %s tasks.",
                percentOfTasksThatMustComplete, tasks.size(), minTasksThatMustComplete));

        this.tasks = new ArrayList<>(tasks.size());
        for (CompletableFuture<?> task : tasks)
        {
            this.tasks.add(task.thenAccept(a -> {
                // thenAccept will be called right after the future task is completed. At this point we'll
                // check if we reached the minimum number of nodes needed. If we did, then complete the
                // remaining tasks since they are no longer needed.
                if (tasksCompleted.incrementAndGet() >= minTasksThatMustComplete)
                {
                    tasks.forEach(t -> t.complete(null));
                }
            }));
        }
    }

    public void execute()
    {
        CompletableFuture.allOf(tasks.toArray(new CompletableFuture<?>[0])).join();
    }
}
You would use this class as in the example below:
public static void main(String[] args)
{
    int numberOfNodes = 4;
    // Create one future task for each node.
    List<CompletableFuture<String>> nodes = new ArrayList<>();
    for (int i = 1; i <= numberOfNodes; i++)
    {
        String nodeId = "result" + i;
        nodes.add(CompletableFuture.supplyAsync(() -> {
            try
            {
                // Sleep for some time to avoid all tasks completing before the count is checked.
                Thread.sleep(100 + new Random().nextInt(500));
            }
            catch (InterruptedException e)
            {
                e.printStackTrace();
            }
            // The action here is just to print the nodeId; you would make the actual call here.
            System.out.println(nodeId + " completed.");
            return nodeId;
        }));
    }
    // Here we're saying that just 75% of the nodes must be called successfully.
    CompletableSome<String> tasks = new CompletableSome<>(nodes, 75);
    tasks.execute();
}
Please note that with this solution you could end up executing more tasks than the minimum required -- for instance, when two or more nodes respond almost simultaneously, you may reach the minimum required count when the first node responds, but there will be no time to cancel the other tasks. If that's an issue, then you'd have to implement even more controls.
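A slightly different sketch (my own, not from the answer above; whenPercentComplete is a made-up helper name) avoids completing the original futures and instead completes a separate "threshold" future once enough writes have succeeded:

static <T> CompletableFuture<Void> whenPercentComplete(List<CompletableFuture<T>> futures, int percent) {
    int needed = Math.max(1, futures.size() * percent / 100);
    AtomicInteger done = new AtomicInteger();
    CompletableFuture<Void> threshold = new CompletableFuture<>();
    for (CompletableFuture<T> f : futures) {
        f.whenComplete((value, error) -> {
            // count only successful completions; the threshold future completes exactly once
            if (error == null && done.incrementAndGet() == needed) {
                threshold.complete(null);
            }
        });
    }
    return threshold;
}

Calling whenPercentComplete(nodes, 60).join() would then return as soon as 60% of the writes succeed. Note that this sketch never completes if too many futures fail, so a real implementation would also track failures.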

java: maximum execution time within for-loop

I've got the following code:
List<String> instances2 = Arrays.asList("instances/umps20.txt", "instances/umps22.txt", "instances/umps24.txt",
        "instances/umps26.txt", "instances/umps28.txt", "instances/umps30.txt", "instances/umps32.txt");
List<Integer> qq1 = Arrays.asList(9, 10, 11, 12, 13, 14, 14);
List<Integer> qq2 = Arrays.asList(4, 4, 5, 5, 5, 5, 6);
for (int i = 0; i < 7; i++) {
    Tournament t = p.process(instances2.get(i));
    int nTeams = t.getNTeams();
    int q1 = qq1.get(i);
    int q2 = qq2.get(i);
    UndirectedGraph graph = g.create(t, q1, q2);
    new Choco(graph, nTeams);
}
}
Now I want to put a limit on each iteration. So after, let's say, 3 h = 10,800,000 ms, I would like everything in the for-loop to stop and the next iteration of the loop to start. Any ideas?
Thanks in advance!
Nicholas
You can get the system time before you start the loop and compare it after each iteration to check whether the elapsed time is over the specified limit, like this:
On for loop starting:
long start = System.currentTimeMillis();
in each iteration:
if (start + 10_800_000 >= System.currentTimeMillis()) {
    start = System.currentTimeMillis();
    i++;
}
and you have to remove the i++ in the for loop
for (int i = 0; i<7;) {
You will have to create a new thread to run your loop; the ExecutorService will run this loop (or whatever code you put into the call() method) for the specified amount of time.
Here is a demo of a task which takes 5 seconds to run, it will be interrupted after 3 seconds:
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class QuickTest {

    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        Future<String> future = executor.submit(new Task());
        try {
            System.out.println("Started.."); // your task is running
            System.out.println(future.get(3, TimeUnit.SECONDS)); // enter the amount of time you want to allow your code to run
            System.out.println("Finished!"); // the task finished within the given time
        } catch (TimeoutException e) {
            future.cancel(true);
            System.out.println("Terminated!"); // the task took too long and was interrupted
        }
        executor.shutdownNow();
    }
}

class Task implements Callable<String> {

    @Override
    public String call() throws Exception { // enter the code you want to run for x time in here
        Thread.sleep(5000); // Just to demo some code which takes 5 seconds to finish.
        return "Ready!"; // code finished and was not interrupted (you gave it enough time).
    }
}
There are many ways of implementing the requested functionality.
One approach could be converting the code in the for loop into a FutureTask object and submitting it to an ExecutorService, even one with just 1 thread if the loop has to be executed in sequence, e.g.
ExecutorService executor = Executors.newFixedThreadPool(1);
The benefit of having a FutureTask (or any other object implementing the Future interface), is that the cancel() method can be used to make sure that the interrupted iteration will not create any side effects.
For the interrupts, there are numerous alternatives. For example, the javax.swing.Timer class can be used, which fires ActionEvent notifications after the expiry of the timer.
In the above approach, the task (for loop code) will be executed until completion, or until an ActionEvent is received from the timer. In the latter case, a call to cancel() can be used to stop the running task and the next task will start. The counter of the total number of iterations can be maintained at the same place.
For more sophisticated solutions, one can play with the various implementations of ExecutorService and timeout specification options, as in another StackOverflow question.
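Applied to the loop from the question, a minimal sketch of the per-iteration timeout could look like the following (my own illustration; it assumes the asker's p, g, instances2, qq1 and qq2 are effectively final and that the enclosing method declares the remaining checked exceptions):

ExecutorService executor = Executors.newSingleThreadExecutor();
for (int i = 0; i < 7; i++) {
    final int idx = i;
    Future<?> iteration = executor.submit(() -> {
        Tournament t = p.process(instances2.get(idx));
        UndirectedGraph graph = g.create(t, qq1.get(idx), qq2.get(idx));
        new Choco(graph, t.getNTeams());
    });
    try {
        iteration.get(3, TimeUnit.HOURS); // wait at most 3 hours for this instance
    } catch (TimeoutException e) {
        iteration.cancel(true); // interrupt the slow iteration and move on to the next one
    }
}
executor.shutdownNow();

As with the demo above, cancel(true) only helps if the code inside the loop actually reacts to interruption; a solver that never checks the interrupt flag will keep running in the background.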

Java concurrency counter not properly clean up

This is a Java concurrency question. 10 jobs need to be done, and each of them will have 32 worker threads. Each worker thread increments a counter. Once the counter reaches 32, the job is done and the counter map is cleaned up. From the console output, I expect 10 "done" lines to be printed, the pool size to be 0 and the countThreadMap size to be 0.
The issues are:
1. Most of the time, "pool size: 0 and countThreadMap size:3" is printed out; even though all the threads are gone, 3 jobs are not finished yet.
2. Sometimes I see a NullPointerException in line 27. I have used ConcurrentHashMap and AtomicLong, so why do I still get a concurrency exception?
Thanks
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.atomic.AtomicLong;

public class Test {

    final ConcurrentHashMap<Long, AtomicLong[]> countThreadMap = new ConcurrentHashMap<Long, AtomicLong[]>();
    final ExecutorService cachedThreadPool = Executors.newCachedThreadPool();
    final ThreadPoolExecutor tPoolExecutor = ((ThreadPoolExecutor) cachedThreadPool);

    public void doJob(final Long batchIterationTime) {
        for (int i = 0; i < 32; i++) {
            Thread workerThread = new Thread(new Runnable() {
                @Override
                public void run() {
                    if (countThreadMap.get(batchIterationTime) == null) {
                        AtomicLong[] atomicThreadCountArr = new AtomicLong[2];
                        atomicThreadCountArr[0] = new AtomicLong(1);
                        atomicThreadCountArr[1] = new AtomicLong(System.currentTimeMillis()); // start up time
                        countThreadMap.put(batchIterationTime, atomicThreadCountArr);
                    } else {
                        AtomicLong[] atomicThreadCountArr = countThreadMap.get(batchIterationTime);
                        atomicThreadCountArr[0].getAndAdd(1);
                        countThreadMap.put(batchIterationTime, atomicThreadCountArr);
                    }
                    if (countThreadMap.get(batchIterationTime)[0].get() == 32) {
                        System.out.println("done");
                        countThreadMap.remove(batchIterationTime);
                    }
                }
            });
            tPoolExecutor.execute(workerThread);
        }
    }

    public void report() {
        while (tPoolExecutor.getActiveCount() != 0) {
            //
        }
        System.out.println("pool size: " + tPoolExecutor.getActiveCount() + " and countThreadMap size:" + countThreadMap.size());
    }

    public static void main(String[] args) throws Exception {
        Test test = new Test();
        for (int i = 0; i < 10; i++) {
            Long batchIterationTime = System.currentTimeMillis();
            test.doJob(batchIterationTime);
        }
        test.report();
        System.out.println("All Jobs are done");
    }
}
Let’s dig through all the mistakes of thread-related programming one can make:
Thread workerThread = new Thread(new Runnable() {
…
tPoolExecutor.execute(workerThread);
You create a Thread but don’t start it but submit it to an executor. It’s a historical mistake of the Java API to let Thread implement Runnable for no good reason. Now, every developer should be aware, that there is no reason to treat a Thread as a Runnable. If you don’t want to start a thread manually, don’t create a Thread. Just create the Runnable and pass it to execute or submit.
I want to emphasize the latter as it returns a Future which gives you for free what you are attempting to implement: the information when a task has been finished. It’s even easier when using invokeAll which will submit a bunch of Callables and return when all are done. Since you didn’t tell us anything about your actual task, it’s not clear whether you can let your tasks simply implement Callable (may return null) instead of Runnable.
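As a minimal sketch of the invokeAll route (my own illustration; the task body is a placeholder and the checked InterruptedException is assumed to be handled by the caller):

List<Callable<Void>> jobs = new ArrayList<>();
for (int i = 0; i < 32; i++) {
    jobs.add(() -> {
        // the actual per-worker job goes here
        return null;
    });
}
cachedThreadPool.invokeAll(jobs); // blocks until all 32 tasks have completed
System.out.println("done");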
If you can’t use Callables or don’t want to wait immediately on submission, you have to remember the returned Futures and query them at a later time:
static final ExecutorService cachedThreadPool = Executors.newCachedThreadPool();

public static List<Future<?>> doJob(final Long batchIterationTime) {
    final Random r = new Random();
    List<Future<?>> list = new ArrayList<>(32);
    for (int i = 0; i < 32; i++) {
        Runnable job = new Runnable() {
            public void run() {
                // pretend to do something
                LockSupport.parkNanos(TimeUnit.SECONDS.toNanos(r.nextInt(10)));
            }
        };
        list.add(cachedThreadPool.submit(job));
    }
    return list;
}

public static void main(String[] args) throws Exception {
    Test test = new Test();
    Map<Long, List<Future<?>>> map = new HashMap<>();
    for (int i = 0; i < 10; i++) {
        Long batchIterationTime = System.currentTimeMillis();
        while (map.containsKey(batchIterationTime))
            batchIterationTime++;
        map.put(batchIterationTime, doJob(batchIterationTime));
    }
    // print some statistics, if you really need
    int overAllDone = 0, overallPending = 0;
    for (Map.Entry<Long, List<Future<?>>> e : map.entrySet()) {
        int done = 0, pending = 0;
        for (Future<?> f : e.getValue()) {
            if (f.isDone()) done++;
            else pending++;
        }
        System.out.println(e.getKey() + "\t" + done + " done, " + pending + " pending");
        overAllDone += done;
        overallPending += pending;
    }
    System.out.println("Total\t" + overAllDone + " done, " + overallPending + " pending");
    // wait for the completion of all jobs
    for (List<Future<?>> l : map.values())
        for (Future<?> f : l)
            f.get();
    System.out.println("All Jobs are done");
}
But note that if you don’t need the ExecutorService for subsequent tasks, it’s much easier to wait for all jobs to complete:
cachedThreadPool.shutdown();
cachedThreadPool.awaitTermination(Long.MAX_VALUE, TimeUnit.DAYS);
System.out.println("All Jobs are done");
But regardless of how unnecessary the manual tracking of the job status is, let’s delve into your attempt, so you may avoid the mistakes in the future:
if (countThreadMap.get(batchIterationTime) == null) {
The ConcurrentMap is thread safe, but this does not turn your concurrent code into sequential code (that would render multi-threading useless). The above line might be processed by up to all 32 threads at the same time, all finding that the key does not exist yet, so possibly more than one thread will then put the initial value into the map.
AtomicLong[] atomicThreadCountArr = new AtomicLong[2];
atomicThreadCountArr[0] = new AtomicLong(1);
atomicThreadCountArr[1] = new AtomicLong(System.currentTimeMillis());
countThreadMap.put(batchIterationTime, atomicThreadCountArr);
That’s why this is called the “check-then-act” anti-pattern. If more than one thread processes that code, they will all put their new value, confident that this was the right thing because they checked the initial condition before acting; but for all but one thread the condition has changed by the time they act, and they overwrite the value of a previous put operation.
} else {
AtomicLong[] atomicThreadCountArr = countThreadMap.get(batchIterationTime);
atomicThreadCountArr[0].getAndAdd(1);
countThreadMap.put(batchIterationTime, atomicThreadCountArr);
Since you are modifying the AtomicLong which is already stored in the map, the put operation is useless; it will put the very array that it retrieved before. If it weren’t for the mistake that there can be multiple initial values, as described above, the put operation would have no effect.
}
if (countThreadMap.get(batchIterationTime)[0].get() == 32) {
Again, the use of a ConcurrentMap doesn’t turn the multi-threaded code into sequential code. While it is clear that only the last thread will update the atomic counter to 32 (when the initial race condition doesn’t materialize), it is not guaranteed that all other threads have already passed this if statement. Therefore more than one, up to all, threads can still be at this point of execution and see the value of 32. Or…
System.out.println("done");
countThreadMap.remove(batchIterationTime);
One of the threads which has seen the value 32 might execute this remove operation. At this point, there might still be threads that have not executed the above if statement, now not seeing the value 32 but producing a NullPointerException, because the array supposed to contain the counter is not in the map anymore. This is what happens, occasionally…
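For reference, a hedged sketch of how the per-batch counting could be done without the check-then-act race (my own illustration, not part of this answer), using computeIfAbsent so that only one counter can ever be created per batch:

final ConcurrentHashMap<Long, AtomicLong> counters = new ConcurrentHashMap<>();

void workerFinished(Long batchIterationTime) {
    AtomicLong counter = counters.computeIfAbsent(batchIterationTime, key -> new AtomicLong());
    // incrementAndGet returns 32 for exactly one thread, so "done" is printed once per batch
    if (counter.incrementAndGet() == 32) {
        System.out.println("done");
        counters.remove(batchIterationTime);
    }
}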
After creating your 10 jobs, your main thread is still running; it doesn't wait for your jobs to complete before it calls report on the test. You try to overcome this with the while loop, but tPoolExecutor.getActiveCount() can come out as 0 before the worker tasks have even started executing, so countThreadMap.size() is then read while entries are still being added to (or not yet removed from) your map.
There are a number of ways to fix this, but I will let another answerer do that because I have to leave at the moment.

Generate infinite sequence of Natural numbers using RxJava

I am trying to write a simple program using RxJava to generate an infinite sequence of natural numbers. So far I have found two ways to generate a sequence of numbers, using Observable.timer() and Observable.interval(). I am not sure if these functions are the right way to approach this problem. I was expecting a simple function like the one we have in Java 8 to generate infinite natural numbers.
IntStream.iterate(1, value -> value +1).forEach(System.out::println);
I tried using IntStream with Observable but that does not work correctly. It sends an infinite stream of numbers only to the first subscriber. How can I correctly generate an infinite natural number sequence?
import rx.Observable;
import rx.functions.Action1;

import java.util.stream.IntStream;

public class NaturalNumbers {

    public static void main(String[] args) {
        Observable<Integer> naturalNumbers = Observable.<Integer>create(subscriber -> {
            IntStream stream = IntStream.iterate(1, val -> val + 1);
            stream.forEach(naturalNumber -> subscriber.onNext(naturalNumber));
        });

        Action1<Integer> first = naturalNumber -> System.out.println("First got " + naturalNumber);
        Action1<Integer> second = naturalNumber -> System.out.println("Second got " + naturalNumber);
        Action1<Integer> third = naturalNumber -> System.out.println("Third got " + naturalNumber);

        naturalNumbers.subscribe(first);
        naturalNumbers.subscribe(second);
        naturalNumbers.subscribe(third);
    }
}
The problem is that on naturalNumbers.subscribe(first);, the OnSubscribe you implemented is called and you are doing a forEach over an infinite stream, which is why your program never terminates.
One way you could deal with it is to asynchronously subscribe them on a different thread. To easily see the results I had to introduce a sleep into the Stream processing:
Observable<Integer> naturalNumbers = Observable.<Integer>create(subscriber -> {
    IntStream stream = IntStream.iterate(1, i -> i + 1);
    stream.peek(i -> {
        try {
            // Added to visibly see printing
            Thread.sleep(50);
        } catch (InterruptedException e) {
        }
    }).forEach(subscriber::onNext);
});

final Subscription subscribe1 = naturalNumbers
        .subscribeOn(Schedulers.newThread())
        .subscribe(first);
final Subscription subscribe2 = naturalNumbers
        .subscribeOn(Schedulers.newThread())
        .subscribe(second);
final Subscription subscribe3 = naturalNumbers
        .subscribeOn(Schedulers.newThread())
        .subscribe(third);

Thread.sleep(1000);
System.out.println("Unsubscribing");
subscribe1.unsubscribe();
subscribe2.unsubscribe();
subscribe3.unsubscribe();
Thread.sleep(1000);
System.out.println("Stopping");
Observable.Generate is exactly the operator to solve this class of problem reactively. I also assume this is a pedagogical example, since using an iterable for this is probably better anyway.
Your code produces the whole stream on the subscriber's thread. Since it is an infinite stream the subscribe call will never complete. Aside from that obvious problem, unsubscribing is also going to be problematic since you aren't checking for it in your loop.
You want to use a scheduler to solve this problem - certainly do not use subscribeOn since that would burden all observers. Schedule the delivery of each number to onNext - and as a last step in each scheduled action, schedule the next one.
Essentially this is what Observable.generate gives you - each iteration is scheduled on the provided scheduler (which defaults to one that introduces concurrency if you don't specify it). Scheduler operations can be cancelled and avoid thread starvation.
Rx.NET solves it like this (actually there is an async/await model that's better, but not available in Java afaik):
static IObservable<int> Range(int start, int count, IScheduler scheduler)
{
    return Observable.Create<int>(observer =>
    {
        return scheduler.Schedule(0, (i, self) =>
        {
            if (i < count)
            {
                Console.WriteLine("Iteration {0}", i);
                observer.OnNext(start + i);
                self(i + 1);
            }
            else
            {
                observer.OnCompleted();
            }
        });
    });
}
Two things to note here:
The call to Schedule returns a subscription handle that is passed back to the observer
The Schedule is recursive - the self parameter is a reference to the scheduler used to call the next iteration. This allows for unsubscription to cancel the operation.
Not sure how this looks in RxJava, but the idea should be the same. Again, Observable.generate will probably be simpler for you as it was designed to take care of this scenario.
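For illustration, a rough RxJava 1.x sketch of the same idea (my own, not from this answer; it simply loops on a worker instead of rescheduling each step, but it emits off the caller's thread and stops when the subscriber unsubscribes):

Observable<Integer> naturalNumbers = Observable.<Integer>create(subscriber -> {
    Scheduler.Worker worker = Schedulers.newThread().createWorker();
    subscriber.add(worker); // unsubscribing the subscriber also shuts the worker down
    worker.schedule(() -> {
        int i = 1;
        while (!subscriber.isUnsubscribed()) {
            subscriber.onNext(i++); // emission happens on the worker's thread
        }
    });
});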
When creating infinite sequences, care should be taken to:
subscribe and observe on different threads; otherwise you will only serve a single subscriber
stop generating values as soon as subscription terminates; otherwise runaway loops will eat your CPU
The first issue is solved by using subscribeOn(), observeOn() and various schedulers.
The second issue is best solved by using library provided methods Observable.generate() or Observable.fromIterable(). They do proper checking.
Check this:
Observable<Integer> naturalNumbers =
        Observable.<Integer, Integer>generate(() -> 1, (s, g) -> {
            logger.info("generating {}", s);
            g.onNext(s);
            return s + 1;
        }).subscribeOn(Schedulers.newThread());

Disposable sub1 = naturalNumbers
        .subscribe(v -> logger.info("1 got {}", v));
Disposable sub2 = naturalNumbers
        .subscribe(v -> logger.info("2 got {}", v));
Disposable sub3 = naturalNumbers
        .subscribe(v -> logger.info("3 got {}", v));

Thread.sleep(100);
logger.info("unsubscribing...");
sub1.dispose();
sub2.dispose();
sub3.dispose();
Thread.sleep(1000);
logger.info("done");

what is the most simple and efficient way to return value from Runnables in Thread Pool Executor?

I have created a DataBaseManager class in my Android app that manages all database operations for my app.
I have different methods to create, update, and retrieve values from the database.
I do it in a Runnable and submit it to the thread pool executor.
In case I have to return some value from this Runnable, how can I achieve it? I know about callbacks, but that will be a little cumbersome for me as the number of methods is large.
Any help will be appreciated!
You need to use Callable: Interface Callable<V>.
Like Runnable, its instances are potentially executed by another thread.
But it is smarter than Runnable: it is capable of returning a result and throwing a checked Exception.
Using it is as simple as using Runnable:
private final class MyTask implements Callable<T> {
    public T call() {
        T t = null;
        // your code computes t
        return t;
    }
}
I am using T to represent a reference type e.g. String.
Getting the result upon completion:
Using Future<V>: A Future represents the result of an asynchronous computation. Methods are provided to check if the computation is complete and to wait for its completion. The result is retrieved using the method get() when the computation has completed, blocking if necessary until it is ready.
List<Future<T>> futures = new ArrayList<>(10);
for (int i = 0; i < 10; i++) {
    futures.add(pool.submit(new MyTask()));
}
T result;
for (Future<T> f : futures)
    result = f.get(); // get the result
The disadvantage of the above approach is that if the first task takes a long time to compute and all the other tasks finish before it, the current thread cannot collect any result before the first task ends. Hence another solution would be to use a CompletionService.
Using CompletionService<V>: A service that decouples the production of new asynchronous tasks from the consumption of the results of completed tasks. Producers submit tasks for execution. Consumers take completed tasks and process their results in the order they complete. Using it is as simple as follows:
CompletionService<T> pool = new ExecutorCompletionService<T>(threadPool);
And then use pool.take().get() to read the returned results from the Callable instances:
for (int i = 0; i < 10; i++) {
    pool.submit(new MyTask());
}
for (int i = 0; i < 10; i++) {
    T result = pool.take().get();
    // your other code
}
Below is sample code for using Callable:
import java.util.concurrent.Callable;
import java.util.concurrent.Executors;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;

public class Test {

    public static void main(String[] args) throws Exception {
        ExecutorService executorService1 = Executors.newFixedThreadPool(4);
        Future<String> f1 = executorService1.submit(new callable());
        Future<String> f2 = executorService1.submit(new callable());
        System.out.println("f1 " + f1.get());
        System.out.println("f2 " + f2.get());
        executorService1.shutdown();
    }
}

class callable implements Callable<String> {

    public String call() {
        System.out.println(" Starting callable Asynchronous task " + Thread.currentThread().getName());
        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println(" Ending callable Asynchronous task " + Thread.currentThread().getName());
        return Thread.currentThread().getName();
    }
}
