How to send multiple asynchronous requests to different web services? - java

I need to send requests to many different web services and receive their results. The problem is that if I send the requests one by one, it takes a long time, because each request has to be sent and processed individually.
I am wondering how I can send all the requests at once and receive the results.
As the following code shows, I have three major methods and each has its own sub methods.
Each sub-method sends a request to its associated web service and receives the results; therefore, to receive the results of web service 9, for example, I have to wait until web services 1 through 8 have completed. Sending all the requests one by one and waiting for their results takes a long time.
As shown below, none of the methods or sub-methods depend on each other, so I can call them and receive their results in any order. The only thing that matters is that each sub-method's results end up in its associated list.
private List<StudentsResults> studentsResults = new ArrayList<>();
private List<DoctorsResults> doctorsResults = new ArrayList<>();
private List<PatientsResults> patientsResults = new ArrayList<>();
main (){
retrieveAllLists();
}
retrieveAllLists(){
retrieveStudents();
retrieveDoctors();
retrievePatients();
}
retrieveStudents(){
this.studentsResults = retrieveStdWS1(); //send request to Web Service 1 to receive its list of students
this.studentsResults = retrieveStdWS2(); //send request to Web Service 2 to receive its list of students
this.studentsResults = retrieveStdWS3(); //send request to Web Service 3 to receive its list of students
}
retrieveDoctors(){
this.doctorsResults = retrieveDocWS4(); //send request to Web Service 4 to receive its list of doctors
this.doctorsResults = retrieveDocWS5(); //send request to Web Service 5 to receive its list of doctors
this.doctorsResults = retrieveDocWS6(); //send request to Web Service 6 to receive its list of doctors
}
retrievePatients(){
this.patientsResults = retrievePtWS7(); //send request to Web Service 7 to receive its list of patients
this.patientsResults = retrievePtWS8(); //send request to Web Service 8 to receive its list of patients
this.patientsResults = retrievePtWS9(); //send request to Web Service 9 to receive its list of patients
}

That is a simple fork-join style problem. For clarity: you can start any number of tasks and retrieve the results as they become available, for example like this.
ExecutorService pool = Executors.newFixedThreadPool(10);
List<Callable<String>> tasks = new ArrayList<>();
tasks.add(new Callable<String>() {
public String call() throws Exception {
Thread.sleep((new Random().nextInt(5000)) + 500);
return "Hello world";
}
});
// invokeAll() blocks until all tasks have completed; Future.get() rethrows any task exception as ExecutionException
List<Future<String>> results = pool.invokeAll(tasks);
for (Future<String> future : results) {
System.out.println(future.get());
}
pool.shutdown();
UPDATE, COMPLETE:
Here's a verbose but workable solution. I wrote it ad hoc and have not compiled it.
Given that the three lists have different types and the WS methods are all individual, it is not
really modular, but use your best programming skills and see if you can modularize it a bit better.
ExecutorService pool = Executors.newFixedThreadPool(10);
List<Callable<List<StudentsResults>>> stasks = new ArrayList<>();
List<Callable<List<DoctorsResults>>> dtasks = new ArrayList<>();
List<Callable<List<PatientsResults>>> ptasks = new ArrayList<>();
stasks.add(new Callable<List<StudentsResults>>() {
public List<StudentsResults> call() throws Exception {
return retrieveStdWS1();
}
});
stasks.add(new Callable<List<StudentsResults>>() {
public List<StudentsResults> call() throws Exception {
return retrieveStdWS2();
}
});
stasks.add(new Callable<List<StudentsResults>>() {
public List<StudentsResults> call() throws Exception {
return retrieveStdWS3();
}
});
dtasks.add(new Callable<List<DoctorsResults>>() {
public List<DoctorsResults> call() throws Exception {
return retrieveDocWS4();
}
});
dtasks.add(new Callable<List<DoctorsResults>>() {
public List<DoctorsResults> call() throws Exception {
return retrieveDocWS5();
}
});
dtasks.add(new Callable<List<DoctorsResults>>() {
public List<DoctorsResults> call() throws Exception {
return retrieveDocWS6();
}
});
ptasks.add(new Callable<List<PatientsResults>>() {
public List<PatientsResults> call() throws Exception {
return retrievePtWS7();
}
});
ptasks.add(new Callable<List<PatientsResults>>() {
public List<PatientsResults> call() throws Exception {
return retrievePtWS8();
}
});
ptasks.add(new Callable<List<PatientsResults>>() {
public List<PatientsResults> call() throws Exception {
return retrievePtWS9();
}
});
// note: invokeAll() blocks until the given group of tasks has completed,
// so the three groups below run one group after the other
List<Future<List<StudentsResults>>> sresults = pool.invokeAll(stasks);
List<Future<List<DoctorsResults>>> dresults = pool.invokeAll(dtasks);
List<Future<List<PatientsResults>>> presults = pool.invokeAll(ptasks);
for (Future<List<StudentsResults>> future : sresults) {
this.studentsResults.addAll(future.get());
}
for (Future<List<DoctorsResults>> future : dresults) {
this.doctorsResults.addAll(future.get());
}
for (Future<List<PatientsResults>> future : presults) {
this.patientsResults.addAll(future.get());
}
pool.shutdown();
Each Callable returns a list of results and is executed in its own separate thread.
When you invoke the Future.get() method you get the result back onto the main thread.
The result is NOT available until the Callable has finished, hence there are no concurrency issues.

So just for fun I am providing two working examples. The first one shows the old-school way of doing this before Java 1.5. The second shows a much cleaner way using the tools available since Java 1.5:
import java.util.ArrayList;
public class ThreadingExample
{
private ArrayList <MyThread> myThreads;
public static class MyRunnable implements Runnable
{
private String data;
public String getData()
{
return data;
}
public void setData(String data)
{
this.data = data;
}
@Override
public void run()
{
}
}
public static class MyThread extends Thread
{
private MyRunnable myRunnable;
MyThread(MyRunnable runnable)
{
super(runnable);
setMyRunnable(runnable);
}
/**
* @return the myRunnable
*/
public MyRunnable getMyRunnable()
{
return myRunnable;
}
/**
* @param myRunnable the myRunnable to set
*/
public void setMyRunnable(MyRunnable myRunnable)
{
this.myRunnable = myRunnable;
}
}
public ThreadingExample()
{
myThreads = new ArrayList <MyThread> ();
}
public ArrayList <String> retrieveMyData ()
{
ArrayList <String> allmyData = new ArrayList <String> ();
if (isComplete() == false)
{
// Sadly we aren't done
return (null);
}
for (MyThread myThread : myThreads)
{
allmyData.add(myThread.getMyRunnable().getData());
}
return (allmyData);
}
private boolean isComplete()
{
boolean complete = true;
// wait for all of them to finish
for (MyThread x : myThreads)
{
if (x.isAlive())
{
complete = false;
break;
}
}
return (complete);
}
public void kickOffQueries()
{
myThreads.clear();
MyThread a = new MyThread(new MyRunnable()
{
@Override
public void run()
{
// This is where you make the call to external services
// giving the results to setData("");
setData("Data from list A");
}
});
myThreads.add(a);
MyThread b = new MyThread (new MyRunnable()
{
@Override
public void run()
{
// This is where you make the call to external services
// giving the results to setData("");
setData("Data from list B");
}
});
myThreads.add(b);
for (MyThread x : myThreads)
{
x.start();
}
boolean done = false;
while (done == false)
{
if (isComplete())
{
done = true;
}
else
{
// Sleep for 10 milliseconds
try
{
Thread.sleep(10);
}
catch (InterruptedException e)
{
e.printStackTrace();
}
}
}
}
public static void main(String [] args)
{
ThreadingExample example = new ThreadingExample();
example.kickOffQueries();
ArrayList <String> data = example.retrieveMyData();
if (data != null)
{
for (String s : data)
{
System.out.println (s);
}
}
}
}
This is the much simpler working version:
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
public class ThreadingExample
{
public static void main(String [] args)
{
ExecutorService service = Executors.newCachedThreadPool();
Set <Callable<String>> callables = new HashSet <Callable<String>> ();
callables.add(new Callable<String>()
{
@Override
public String call() throws Exception
{
return "This is where I make the call to web service A, and put its results here";
}
});
callables.add(new Callable<String>()
{
@Override
public String call() throws Exception
{
return "This is where I make the call to web service B, and put its results here";
}
});
callables.add(new Callable<String>()
{
@Override
public String call() throws Exception
{
return "This is where I make the call to web service C, and put its results here";
}
});
try
{
List<Future<String>> futures = service.invokeAll(callables);
for (Future<String> future : futures)
{
System.out.println (future.get());
}
}
catch (InterruptedException e)
{
e.printStackTrace();
}
catch (ExecutionException e)
{
e.printStackTrace();
}
}
}

You can ask your jax-ws implementation to generate asynchronous bindings for the web service.
This has two advantages that I can see:
As discussed in "Asynchronous web services calls with JAX-WS: Use wsimport support for asynchrony or roll my own?", JAX-WS will generate well-tested (and possibly fancier) code for you, and you need not instantiate the ExecutorService yourself. So less work for you! (but also less control over the threading implementation details)
The generated bindings include a method where you specify a callback handler, which may suit your needs better than synchronously get()ting all the response lists on the thread calling retrieveAllLists(). It allows for per-service-call error handling and will process the results in parallel, which is nice if the processing is non-trivial.
An example for Metro can be found on the Metro site. Note the contents of the custom bindings file custom-client.xml:
<bindings ...>
<bindings node="wsdl:definitions">
<enableAsyncMapping>true</enableAsyncMapping>
</bindings>
</bindings>
When you specify this bindings file to wsimport, it'll generate a client which returns an object that implements javax.xml.ws.Response<T>. Response extends the Future interface that others also suggest you use when rolling your own implementation.
So, unsurprisingly, if you go without the callbacks, the code will look similar to the other answers:
public void retrieveAllLists() throws InterruptedException, ExecutionException {
// first fire all requests
Response<List<StudentsResults>> students1 = ws1.getStudents();
Response<List<StudentsResults>> students2 = ws2.getStudents();
Response<List<StudentsResults>> students3 = ws3.getStudents();
Response<List<DoctorsResults>> doctors1 = ws4.getDoctors();
Response<List<DoctorsResults>> doctors2 = ws5.getDoctors();
Response<List<DoctorsResults>> doctors3 = ws6.getDoctors();
Response<List<PatientsResults>> patients1 = ws7.getPatients();
Response<List<PatientsResults>> patients2 = ws8.getPatients();
Response<List<PatientsResults>> patients3 = ws9.getPatients();
// then await and collect all the responses
studentsResults.addAll(students1.get());
studentsResults.addAll(students2.get());
studentsResults.addAll(students3.get());
doctorsResults.addAll(doctors1.get());
doctorsResults.addAll(doctors2.get());
doctorsResults.addAll(doctors3.get());
patientsResults.addAll(patients1.get());
patientsResults.addAll(patients2.get());
patientsResults.addAll(patients3.get());
}
If you create callback handers such as
private class StudentsCallbackHandler
implements AsyncHandler<List<StudentsResults>> {
public void handleResponse(Response<List<StudentsResults>> response) {
try {
studentsResults.addAll(response.get());
} catch (ExecutionException e) {
errors.add(new CustomError("Failed to retrieve Students.", e.getCause()));
} catch (InterruptedException e) {
log.error("Interrupted", e);
}
}
}
you can use them like this:
public void retrieveAllLists() {
List<Future<?>> responses = new ArrayList<Future<?>>();
// fire all requests, specifying callback handlers
responses.add(ws1.getStudents(new StudentsCallbackHandler()));
responses.add(ws2.getStudents(new StudentsCallbackHandler()));
responses.add(ws3.getStudents(new StudentsCallbackHandler()));
...
// await completion
for( Future<?> response: responses ) {
response.get();
}
// or do some other work, and poll response.isDone()
}
Note that the studentsResults collection needs to be thread safe now, since results will get added concurrently!
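For example (a minimal sketch, not part of the original answer), a synchronized wrapper is one simple way to make the list safe for concurrent additions; CopyOnWriteArrayList would also work:
// wrap the list so that concurrent addAll() calls from the callback handlers are safe
private final List<StudentsResults> studentsResults =
    Collections.synchronizedList(new ArrayList<StudentsResults>());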

Looking at the problem, you need to integrate your application with 10+ different web services while making all the calls asynchronously. This can be done easily with Apache Camel. It is a prominent framework for enterprise integration and also supports async processing. You can use its CXF component for calling web services and its routing engine for invocation and processing of results. Look at the following page regarding Camel's async routing capability. They have also provided a complete example invoking web services asynchronously using CXF; it is available in its Maven repo. Also see the following page for more details.

You might consider the following paradigm, in which you create work serially but the actual work is done in parallel. One way to do this is to: 1) have your "main" create a queue of work items; 2) create a "doWork" object that queries the queue for work to do; 3) have "main" start some number of "doWork" threads (this can be the same number as the number of different services, or a smaller number); have the "doWork" objects add their results to a shared collection (whatever construct works: Vector, List, ...). A minimal sketch of this idea is shown after this answer.
Each "doWork" object would mark its queue item complete, put all of its results in the shared container and check for new work (if there is nothing more on the queue, it would sleep and try again).
Of course you will want to see how well you can construct your class model. If each of the web services is quite different to parse, then you may want to create an Interface that each of your "retrieveinfo" classes promises to implement.
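A minimal sketch of that work-queue idea, with String standing in for the real work-item and result types (every name below is made up for illustration, not taken from the question's code):
import java.util.List;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.CopyOnWriteArrayList;

public class WorkQueueSketch {
    public static void main(String[] args) throws InterruptedException {
        Queue<String> workQueue = new ConcurrentLinkedQueue<>();  // 1) "main" fills the queue of work items
        List<String> results = new CopyOnWriteArrayList<>();      // shared, thread-safe result container
        for (int i = 1; i <= 9; i++) workQueue.add("web service " + i);

        Thread[] workers = new Thread[3];                         // 3) start some number of "doWork" threads
        for (int w = 0; w < workers.length; w++) {
            workers[w] = new Thread(() -> {
                String item;
                while ((item = workQueue.poll()) != null) {       // 2) each worker pulls items until the queue is empty
                    results.add("result of " + item);             // the real web service call would go here
                }
            });
            workers[w].start();
        }
        for (Thread t : workers) t.join();                        // wait for all workers to finish
        results.forEach(System.out::println);
    }
}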

There are various options for developing this.
JMS: quality of service and management, e.g. redelivery attempts, dead message queues, load management, scalability, clustering, monitoring, etc.
Or simply use the Observer pattern for this (a rough sketch follows below). For more details on the pattern and on the producer/consumer problem, see OODesign and Kodelog.
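As a rough sketch of the Observer idea (every name below is invented for the example), each retrieval thread notifies a listener as soon as its list is ready:
import java.util.Arrays;
import java.util.List;

// hypothetical observer interface: called back when one service has finished
interface ResultListener<T> {
    void onResult(List<T> result);
}

class StudentService {
    void retrieveAsync(ResultListener<String> listener) {
        new Thread(() -> {
            List<String> students = Arrays.asList("s1", "s2"); // stand-in for the real WS call
            listener.onResult(students);                       // notify the observer when done
        }).start();
    }
}
The caller would then do something like new StudentService().retrieveAsync(list -> studentsResults.addAll(list)), keeping in mind that the target collection has to be thread safe.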

Related

How do I place asynchronous Retrofit calls using RxJava? I have to place over 100 calls asynchronously

Here's a sample of the code I've been working on.
items contains 100 elements, so obtaining the data with synchronous calls takes a lot of time. Can someone suggest a way to speed this operation up?
Currently this takes 15-20 seconds to execute. I'm new to RxJava, so please provide a detailed solution to this problem if possible. dataResponses contains a RouteDist object for each of the 100 items.
for(int i = 0 ; i<items.size();i++){
Map<String, String> map2 = new HashMap<>();
map2.put("units", "metric");
map2.put("origin", currentLocation.getLatitude()+","+currentLocation.getLongitude());
map2.put("destination", items.get(i).getPosition().get(0)+","+items.get(i).getPosition().get(1));
map2.put("transportMode", "car");
requests.add(RetrofitClient4_RouteDist.getClient().getRouteDist(map2));
}
Observable.zip(requests, new Function<Object[], List<RouteDist>>() {
@Override
public List<RouteDist> apply(Object[] objects) throws Exception {
Log.i("onSubscribe", "apply: " + objects.length);
List<RouteDist> dataaResponses = new ArrayList<>();
for (Object o : objects) {
dataaResponses.add((RouteDist) o);
}
return dataaResponses;
}
})
.observeOn(AndroidSchedulers.mainThread())
.subscribeOn(Schedulers.io())
.subscribe(
new Consumer<List<RouteDist>>() {
@Override
public void accept(List<RouteDist> dataaResponses) throws Exception {
Log.i("onSubscribe", "YOUR DATA IS HERE: "+dataaResponses.toString());
recyclerViewAdapter_profile = new RecyclerViewAdapter_Profile(items,dataaResponses);
recyclerView.setAdapter(recyclerViewAdapter_profile);
}
},
new Consumer<Throwable>() {
@Override
public void accept(Throwable e) throws Exception {
Log.e("onSubscribe", "Throwable: " + e);
}
});
API
interface Client {
Observable<RouteDist> routeDist();
}
final class RouteDist {
}
final class ClientImpl implements Client {
@Override
public Observable<RouteDist> routeDist() {
return Observable.fromCallable(() -> {
// with this log you can see that each subscription to the Observable is executed on the ThreadPool
// Log.e("---------------------", Thread.currentThread().getName());
return new RouteDist();
});
}
}
Apply threading via subscribeOn
final class ClientProxy implements Client {
private final Client api;
private final Scheduler scheduler;
ClientProxy(Client api, Scheduler scheduler) {
this.api = api;
this.scheduler = scheduler;
}
@Override
public Observable<RouteDist> routeDist() {
// apply subscribeOn in order to move the subscribeActual call onto the given Scheduler
return api.routeDist().subscribeOn(scheduler);
}
}
AndroidTest
@Test
public void name() {
// a bounded cached thread pool, in order to avoid creating 100 threads or more. It is always a good idea to use your own Schedulers (e.g. for testing)
ThreadPoolExecutor threadPool = new ThreadPoolExecutor(0, 10,
60L, TimeUnit.SECONDS,
new SynchronousQueue<>());
// wrap real client with Proxy, in order to move the subscribeActual call to the ThreadPool
Client client = new ClientProxy(new ClientImpl(), Schedulers.from(threadPool));
List<Observable<RouteDist>> observables = Arrays.asList(client.routeDist(), client.routeDist(), client.routeDist());
TestObserver<List<RouteDist>> test = Observable.zip(observables, objects -> {
return Arrays.stream(objects).map(t -> (RouteDist) t).collect(Collectors.toList());
})
.observeOn(AndroidSchedulers.mainThread())
.test();
test.awaitCount(1);
// verify that onNext in subscribe is called in Android-EventLoop
assertThat(test.lastThread()).isEqualTo(Looper.getMainLooper().getThread());
// verify that 3 calls were made and merged into one List
test.assertValueAt(0, routeDists -> {
assertThat(routeDists).hasSize(3);
return true;
});
}
Further reading:
http://tomstechnicalblog.blogspot.de/2016/02/rxjava-understanding-observeon-and.html
Note:
It is not recommended to call an API 100 times concurrently at once, but when using zip that is what will actually happen if the thread pool is big enough. When one API call times out, an onError will probably be emitted for that call. The onError is propagated further to the subscriber, and you will not get any result, even if only one API call fails. It is recommended to use onErrorResumeNext or some other error-handling operator, to ensure that one failing API call does not cancel the overall result.
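For example (a sketch only, assuming requests is the poster's List<Observable<RouteDist>>; the empty RouteDist fallback is just a placeholder), a per-request fallback can be attached before zipping:
List<Observable<RouteDist>> safeRequests = new ArrayList<>();
for (Observable<RouteDist> request : requests) {
    // if one call fails, emit a default value instead of propagating onError into the zip
    safeRequests.add(request.onErrorResumeNext(Observable.just(new RouteDist())));
}
// then zip over safeRequests exactly as before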

Make multiple asynchronous calls (fire-and-forget calls) at once using an RxJava Observable

I have a list of downstream API calls (about 10) that I need to make at once, asynchronously. Until now I was using callables:
List<RequestContextPreservingCallable<FutureResponse>> callables
I would add the API calls to this list and submit it at the end using executeAsyncNoReturnRequestContextPreservingCallables.
How do I do this using RxJava Observables?
List<RequestContextPreservingCallable<FutureResponse>> callables = new
ArrayList<RequestContextPreservingCallable<FutureResponse>>();
callables.add(apiOneConnector.CallToApiOne(name));
callables.add(apiTwoConnector.CallToApiTWO(sessionId));
....
//execute all the calls
executeAsyncNoReturnRequestContextPreservingCallables(callables);
You could make use of the zip operator. The zip operator can take multiple observables and execute them simultaneously, and it will only proceed after all the results have arrived.
You could then transform these results into whatever form you need and pass them on to the next level.
As per your example, say you have multiple API calls for getting the name, the session id, etc., as shown below:
Observable.zip(getNameRequest(), getSessionIdRequest(), new BiFunction<String, String, Object>() {
@Override
public Object apply(String name, String sessionId) throws Exception {
// here you will get all the results once everything is completed. You can then take these
// results and transform them into another object and return it from here. I decided to transform the results into an Object[].
// the return type of this apply function is generic, so you can choose what to return
return new Object[]{name, sessionId};
}
})
.subscribeOn(Schedulers.io()) // will start this entire chain in an IO thread
.observeOn(AndroidSchedulers.mainThread()) // observeOn will flip the thread to the given one, so that the downstream will be executed on the specified thread. Here I'm switching to the main thread from this point onwards
.subscribeWith(new DisposableObserver<Object>() {
@Override
public void onNext(Object finalResult) {
// here you will get the final result with all the api results
}
@Override
public void onError(Throwable e) {
// any error during the entire process will be triggered here
}
@Override
public void onComplete() {
//will be called once the whole chain is completed and terminated
}
});
You could even pass a list of observables to the zip as follows
List<Observable<String>> requests = new ArrayList<>();
requests.add(getNameRequest());
requests.add(getSessionIdRequest());
Observable.zip(requests, new Function<Object[], Object[]>() {
@Override
public Object[] apply(Object[] objects) throws Exception {
return new Object[]{objects[0], objects[1]};
}
}).subscribeWith(new DisposableObserver<Object[]>() {
@Override
public void onNext(Object[] objects) {
}
@Override
public void onError(Throwable e) {
}
@Override
public void onComplete() {
}
});

How to process a list of objects in parallel in Java

I have a list of objects in Java, say a thousand objects in a List, and I am iterating over the List and doing further processing on every object. The same processing happens for every object. This sequential approach is taking a lot of time, so I want to achieve the same with parallel processing in Java. I checked the executor framework in Java but got stuck with it.
I thought of one approach to implement my requirement:
I want some fixed minimum number of objects to be processed by each thread, so that each thread does its work and processes its objects quickly. How can I achieve this? Or if there is any other approach for implementing my requirement, please share.
Eg:
List<Object> objects = new ArrayList<>();
for (Object object : objects) {
// doing some common operation for all objects
}
You can use a ThreadPoolExecutor; it will take care of load balancing. Tasks will be distributed over the different threads.
Here is an example:
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
public class Test {
public static void main(String[] args) {
// Fixed thread number
ExecutorService service = Executors.newFixedThreadPool(10);
// Or an unbounded thread pool:
// the number of threads will increase with the number of tasks
// ExecutorService service = Executors.newCachedThreadPool();
List<Object> objects = new ArrayList<>();
for (Object o : objects) {
service.execute(new MyTask(o));
}
// shutdown: stop accepting new tasks
// awaitTermination below blocks until all submitted tasks finish
service.shutdown();
try {
service.awaitTermination(Long.MAX_VALUE, TimeUnit.MILLISECONDS);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
public static class MyTask implements Runnable {
Object target;
public MyTask(Object target) {
this.target = target;
}
@Override
public void run() {
// business logic at here
}
}
}
There are many options for processing a list in parallel:
Use a parallel stream:
objects.stream().parallel().forEach(object -> {
//Your work on each object goes here, using object
});
Use an executor service to submit tasks if you want to use a pool with more threads than the fork-join pool:
ExecutorService es = Executors.newFixedThreadPool(10);
for(Object o: objects) {
es.submit(() -> {
//code here using Object o...
});
}
This preceding example is essentially the same as the traditional executor service, running tasks on separate threads.
As an alternative to these, you can also submit the work using CompletableFuture:
//You can also just run a for-each and manually add each
//future to a list
List<CompletableFuture<Void>> futures =
objects.stream().map(object -> CompletableFuture.runAsync(() -> {
//Your work on each object goes here, using object
})).collect(Collectors.toList());
You can then use the futures object to check the status of each execution if that's required.
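For instance (a small sketch continuing the snippet above), you can block until every task has finished with allOf, or poll the individual futures:
// wait for all of them to complete
CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).join();
// or check them one by one: futures.get(0).isDone(), isCompletedExceptionally(), ...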
Split the list into multiple sub-lists and use multithreading to process
each sub-list in parallel.
public class ParallelProcessListElements {
public void processList (int numberofthreads,List<Object>tempList,
Object obj, Method method){
final int sizeofList=tempList.size();
final int sizeofsublist = sizeofList/numberofthreads;
List<Thread> threadlist = new ArrayList<Thread>();
for(int i=0;i<numberofthreads;i++) {
int firstindex = i*sizeofsublist;
int lastindex = i*sizeofsublist+sizeofsublist;
if(i==numberofthreads-1)
lastindex=sizeofList;
List<Object> subList=tempList.subList(firstindex,lastindex );
Thread th = new Thread(()->{
try{method.invoke(obj, subList);}catch(Exception e) {e.printStackTrace();}
});
threadlist.add(th);
}
threadlist.forEach(th->{th.start();try{Thread.sleep(10);}catch(Exception e) {}});
}
}
public class Demo {
public static void main(String[] args) {
List<Object> tempList= new ArrayList<Object>();
/**
* Adding values to list... For Demo purpose..
*/
for(int i=0;i<500;i++)
tempList.add(i);
ParallelProcessListElements process = new ParallelProcessListElements();
final int numberofthreads = 5;
Object obj = new Demo();
Method method=null;
try{ method=Demo.class.getMethod("printList", List.class);}catch(Exception e) {}
/**
* Method Call...
*/
process.processList(numberofthreads,tempList,obj,method);
}
public void printList(List<Integer>list) {
/**
* Business logic to process the list...
*/
list.forEach(item->{
try{Thread.sleep(1000);}catch(Exception e) {}
System.out.println(item);
});
}
}

Multithreaded execution where order of finished Work Items is preserved

I have a flow of units of work, let's call them "Work Items", that are processed sequentially (for now). I'd like to speed up processing by doing the work multithreaded.
Constraint: those work items come in a specific order; during processing the order is not relevant, but once processing is finished the order must be restored.
Something like this:
|.|
|.|
|4|
|3|
|2| <- incoming queue
|1|
/ | \
2 1 3 <- worker threads
\ | /
|3|
|2| <- outgoing queue
|1|
I would like to solve this problem in Java, preferably without Executor Services, Futures, etc., but with basic concurrency methods like wait(), notify(), etc.
Reason: my work items are very small and fine grained, they finish processing in about 0.2 milliseconds each. So I fear that using stuff from java.util.concurrent.* might introduce way too much overhead and slow my code down.
The examples I found so far all preserve the order during processing (which is irrelevant in my case) and don't care about the order after processing (which is crucial in my case).
This is how I solved your problem in a previous project (but with java.util.concurrent):
(1) WorkItem class does the actual work/processing:
public class WorkItem implements Callable<WorkItem> {
Object content;
public WorkItem(Object content) {
super();
this.content = content;
}
public WorkItem call() throws Exception {
// getContent() + do your processing
return this;
}
}
(2) This class puts Work Items in a queue and initiates processing:
public class Producer {
...
public Producer() {
super();
workerQueue = new ArrayBlockingQueue<Future<WorkItem>>(THREADS_TO_USE);
completionService = new ExecutorCompletionService<WorkItem>(Executors.newFixedThreadPool(THREADS_TO_USE));
workerThread = new Thread(new Worker(workerQueue));
workerThread.start();
}
public void send(Object o) throws Exception {
WorkItem workItem = new WorkItem(o);
Future<WorkItem> future = completionService.submit(workItem);
workerQueue.put(future);
}
}
(3) Once processing is finished the Work Items are dequeued here:
public class Worker implements Runnable {
private ArrayBlockingQueue<Future<WorkItem>> workerQueue = null;
public Worker(ArrayBlockingQueue<Future<WorkItem>> workerQueue) {
super();
this.workerQueue = workerQueue;
}
public void run() {
try {
while (true) {
Future<WorkItem> fwi = workerQueue.take(); // dequeue it
fwi.get(); // wait until it has finished processing
}
} catch (InterruptedException | ExecutionException e) {
e.printStackTrace();
}
}
}
(4) This is how you would use the stuff in your code and submit new work:
public class MainApp {
public static void main(String[] args) throws Exception {
Producer p = new Producer();
for (int i = 0; i < 10000; i++)
p.send(i);
}
}
If you allow BlockingQueue, why would you ignore the rest of the concurrency utils in Java?
You could use e.g. a Stream (if you have Java 1.8) for the above:
List<Type> data = ...;
List<Other> out = data.parallelStream()
.map(t -> doSomeWork(t))
.collect(Collectors.toList());
Because you started from an ordered Collection (List), and collect also to a List, you will have results in the same order as the input.
Just give each of the objects an ID for processing and create a proxy which accepts finished work and returns it only when the IDs pushed so far are sequential. Sample code below. Note how simple it is, utilizing an unsynchronized auto-sorting collection (a PriorityQueue) and just two simple methods as the API.
public class SequentialPushingProxy {
static class OrderedJob implements Comparable<OrderedJob>{
static AtomicInteger idSource = new AtomicInteger();
int id;
public OrderedJob() {
id = idSource.incrementAndGet();
}
public int getId() {
return id;
}
@Override
public int compareTo(OrderedJob o) {
return Integer.compare(id, o.getId());
}
}
int lastId = OrderedJob.idSource.get();
public Queue<OrderedJob> queue;
public SequentialPushingProxy() {
queue = new PriorityQueue<OrderedJob>();
}
public synchronized void pushResult(OrderedJob job) {
queue.add(job);
}
List<OrderedJob> jobsToReturn = new ArrayList<OrderedJob>();
public synchronized List<OrderedJob> getFinishedJobs() {
while (queue.peek() != null) {
// only one consumer at a time, will be safe
if (queue.peek().getId() == lastId+1) {
jobsToReturn.add(queue.poll());
lastId++;
} else {
break;
}
}
if (jobsToReturn.size() != 0) {
List<OrderedJob> toRet = jobsToReturn;
jobsToReturn = new ArrayList<OrderedJob>();
return toRet;
}
return Collections.emptyList();
}
public static void main(String[] args) {
final SequentialPushingProxy proxy = new SequentialPushingProxy();
int numProducerThreads = 5;
for (int i=0; i<numProducerThreads; i++) {
new Thread(new Runnable() {
@Override
public void run() {
while(true) {
proxy.pushResult(new OrderedJob());
}
}
}).start();
}
int numConsumerThreads = 1;
for (int i=0; i<numConsumerThreads; i++) {
new Thread(new Runnable() {
@Override
public void run() {
while(true) {
List<OrderedJob> ret = proxy.getFinishedJobs();
System.out.println("got "+ret.size()+" finished jobs");
try {
Thread.sleep(200);
} catch (InterruptedException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
}
}).start();
}
try {
Thread.sleep(5000);
} catch (InterruptedException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
System.exit(0);
}
}
This code could be easily improved to
allow pushing more than one job result at once, to reduce the synchronization costs
introduce a limit to returned collection to get done jobs in smaller chunks
extract an interface for those 2 public methods and switch implementations to perform tests
You could have 3 input and 3 output queues - one of each type for each worker thread.
Now when you want to insert something into the input queue you put it into only one of the 3 input queues, changing the input queues in a round-robin fashion. The same applies to the output: when you want to take something from the output you choose the first of the output queues, and once you get your element you switch to the next queue.
All the queues need to be blocking.
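A compact sketch of that scheme, assuming ArrayBlockingQueue for the blocking queues and placeholder item/result types; with a single producer and a single consumer, the round-robin order of results matches the round-robin order of inserts:
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class RoundRobinPipeline {
    static final int WORKERS = 3;
    final BlockingQueue<Integer>[] in = new BlockingQueue[WORKERS];   // unchecked warning, fine for a sketch
    final BlockingQueue<String>[] out = new BlockingQueue[WORKERS];
    int putIndex = 0, takeIndex = 0;

    RoundRobinPipeline() {
        for (int i = 0; i < WORKERS; i++) {
            in[i] = new ArrayBlockingQueue<>(100);
            out[i] = new ArrayBlockingQueue<>(100);
            final int w = i;
            new Thread(() -> {                                        // worker i reads only its own pair of queues
                try {
                    while (true) {
                        Integer item = in[w].take();
                        out[w].put("processed " + item);              // the actual work goes here
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }
    }

    void submit(Integer item) throws InterruptedException {
        in[putIndex].put(item);                                       // producer distributes round robin
        putIndex = (putIndex + 1) % WORKERS;
    }

    String nextResult() throws InterruptedException {
        String result = out[takeIndex].take();                        // consumer collects in the same round-robin order
        takeIndex = (takeIndex + 1) % WORKERS;
        return result;
    }
}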
Pump all your Futures through a BlockingQueue. Here's all the code you need:
public class SequentialProcessor implements Consumer<Task> {
private final ExecutorService executor = Executors.newCachedThreadPool();
private final BlockingDeque<Future<Result>> queue = new LinkedBlockingDeque<>();
public SequentialProcessor(Consumer<Result> listener) {
new Thread(() -> {
while (true) {
try {
listener.accept(queue.take().get());
} catch (InterruptedException | ExecutionException e) {
// handle the exception however you want, perhaps just logging it
}
}
}).start();
}
public void accept(Task task) {
queue.add(executor.submit(callableFromTask(task)));
}
private Callable<Result> callableFromTask(Task task) {
return <how to create a Result from a Task>; // implement this however
}
}
Then to use, create a SequentialProcessor (once):
SequentialProcessor processor = new SequentialProcessor(whatToDoWithResults);
and pump tasks to it:
Stream<Task> tasks; // given this
tasks.forEach(processor); // simply this
I created the callableFromTask() method for illustration, but you can dispense with it if getting a Result from a Task is simple, by using a lambda or method reference instead.
For example, if Task had a getResult() method, do this:
queue.add(executor.submit(task::getResult));
or if you need an expression (lambda):
queue.add(executor.submit(() -> task.getValue() + "foo")); // or whatever
Reactive programming could help. During my brief experience with RxJava I found it to be intuitive and easier to work with than core language features like Future etc. Your mileage may vary. Here are some helpful starting points: https://www.youtube.com/watch?v=_t06LRX0DV0
The attached example also shows how this could be done. In the example below we have Packets which need to be processed. They are taken through a simple transformation and finally merged into one list. The output appended to this message shows that the Packets are received and transformed at different points in time, but in the end they are output in the order they were received.
import static java.time.Instant.now;
import static rx.schedulers.Schedulers.io;
import java.time.Instant;
import java.util.List;
import java.util.Random;
import rx.Observable;
import rx.Subscriber;
public class RxApp {
public static void main(String... args) throws InterruptedException {
List<ProcessedPacket> processedPackets = Observable.range(0, 10) //
.flatMap(i -> {
return getPacket(i).subscribeOn(io());
}) //
.map(Packet::transform) //
.toSortedList() //
.toBlocking() //
.single();
System.out.println("===== RESULTS =====");
processedPackets.stream().forEach(System.out::println);
}
static Observable<Packet> getPacket(Integer i) {
return Observable.create((Subscriber<? super Packet> s) -> {
// simulate latency
try {
Thread.sleep(new Random().nextInt(5000));
} catch (Exception e) {
e.printStackTrace();
}
System.out.println("packet requested for " + i);
s.onNext(new Packet(i.toString(), now()));
s.onCompleted();
});
}
}
class Packet {
String aString;
Instant createdOn;
public Packet(String aString, Instant time) {
this.aString = aString;
this.createdOn = time;
}
public ProcessedPacket transform() {
System.out.println(" Packet being transformed " + aString);
try {
Thread.sleep(new Random().nextInt(5000));
} catch (Exception e) {
e.printStackTrace();
}
ProcessedPacket newPacket = new ProcessedPacket(this, now());
return newPacket;
}
@Override
public String toString() {
return "Packet [aString=" + aString + ", createdOn=" + createdOn + "]";
}
}
class ProcessedPacket implements Comparable<ProcessedPacket> {
Packet p;
Instant processedOn;
public ProcessedPacket(Packet p, Instant now) {
this.p = p;
this.processedOn = now;
}
@Override
public int compareTo(ProcessedPacket o) {
return p.createdOn.compareTo(o.p.createdOn);
}
@Override
public String toString() {
return "ProcessedPacket [p=" + p + ", processedOn=" + processedOn + "]";
}
}
Deconstruction
Observable.range(0, 10) //
.flatMap(i -> {
return getPacket(i).subscribeOn(io());
}) // source the input as observables on multiple threads
.map(Packet::transform) // processing the input data
.toSortedList() // sorting to sequence the processed inputs;
.toBlocking() //
.single();
On one particular run Packets were received in the order 2,6,0,1,8,7,5,9,4,3 and processed in order 2,6,0,1,3,4,5,7,8,9 on different threads
packet requested for 2
Packet being transformed 2
packet requested for 6
Packet being transformed 6
packet requested for 0
packet requested for 1
Packet being transformed 0
packet requested for 8
packet requested for 7
packet requested for 5
packet requested for 9
Packet being transformed 1
packet requested for 4
packet requested for 3
Packet being transformed 3
Packet being transformed 4
Packet being transformed 5
Packet being transformed 7
Packet being transformed 8
Packet being transformed 9
===== RESULTS =====
ProcessedPacket [p=Packet [aString=2, createdOn=2016-04-14T13:48:52.060Z], processedOn=2016-04-14T13:48:53.247Z]
ProcessedPacket [p=Packet [aString=6, createdOn=2016-04-14T13:48:52.130Z], processedOn=2016-04-14T13:48:54.208Z]
ProcessedPacket [p=Packet [aString=0, createdOn=2016-04-14T13:48:53.989Z], processedOn=2016-04-14T13:48:55.786Z]
ProcessedPacket [p=Packet [aString=1, createdOn=2016-04-14T13:48:54.109Z], processedOn=2016-04-14T13:48:57.877Z]
ProcessedPacket [p=Packet [aString=8, createdOn=2016-04-14T13:48:54.418Z], processedOn=2016-04-14T13:49:14.108Z]
ProcessedPacket [p=Packet [aString=7, createdOn=2016-04-14T13:48:54.600Z], processedOn=2016-04-14T13:49:11.338Z]
ProcessedPacket [p=Packet [aString=5, createdOn=2016-04-14T13:48:54.705Z], processedOn=2016-04-14T13:49:06.711Z]
ProcessedPacket [p=Packet [aString=9, createdOn=2016-04-14T13:48:55.227Z], processedOn=2016-04-14T13:49:16.927Z]
ProcessedPacket [p=Packet [aString=4, createdOn=2016-04-14T13:48:56.381Z], processedOn=2016-04-14T13:49:02.161Z]
ProcessedPacket [p=Packet [aString=3, createdOn=2016-04-14T13:48:56.566Z], processedOn=2016-04-14T13:49:00.557Z]
You could launch a DoTask thread for every WorkItem. This thread processes the work.
When the work is done, you try to post the item, synchronized on the controlling object, in which you check if it's the right ID and wait if not.
The post implementation can be something like:
synchronized(controllingObject) {
try {
while(workItem.id != nextId) controllingObject.wait();
} catch (Exception e) {}
//Post the workItem
nextId++;
controllingObject.notifyAll();
}
I think that you need an extra queue to hold the incoming order, an
IncomingOrderQueue.
When you consume the objects you put them into some storage, for example a Map, and then from another thread which consumes from the IncomingOrderQueue you pick the ids (hashes) of the objects and collect them from this Map.
This solution can easily be implemented without an executor service.
Preprocess: add an order value to each item; pre-allocate an array if one is not allocated yet.
Input: a queue (concurrent sampling with order values 1,2,3,4, but it doesn't matter which thread gets which sample).
Output: an array (threads write to indexed elements, with a sync point at the end to wait for all threads; no collision checks are needed since every thread writes to different positions - see the sketch after this list).
Postprocess: convert the array back to a queue.
This needs an n-element array for n threads, or some multiple of n so that the postprocessing only runs once.
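A minimal sketch of that array-based approach, using plain threads and join() as the sync point (the String element type and the toUpperCase() work are placeholders):
static String[] processInOrder(String[] input) throws InterruptedException {
    String[] output = new String[input.length];      // pre-allocated, one slot per work item
    int threads = 2;
    Thread[] workers = new Thread[threads];
    for (int t = 0; t < threads; t++) {
        final int offset = t;
        workers[t] = new Thread(() -> {
            for (int i = offset; i < input.length; i += threads) {
                output[i] = input[i].toUpperCase();  // each index is written by exactly one thread
            }
        });
        workers[t].start();
    }
    for (Thread w : workers) w.join();               // sync point: wait for every worker
    return output;                                   // already in the original order
}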

Executing Dependent tasks in parallel in Java

I need to find a way to execute tasks (dependent and independent) in parallel in java.
Task A and Task C can run independently.
Task B is dependent on the output of Task A.
I checked java.util.concurrent Future and Fork/Join, but it looks like we cannot add a dependency to a Task.
Can anyone point me to the correct Java API?
In Scala this is very easy to do, and I think you are better off using Scala. Here is an example I pulled from http://danielwestheide.com/ (The Neophyte’s Guide to Scala Part 16: Where to Go From Here); this guy has a great blog (I am not that guy).
Let's take a barista making coffee. The tasks to do are:
Grind the required coffee beans (no preceding tasks)
Heat some water (no preceding tasks)
Brew an espresso using the ground coffee and the heated water (depends on 1 & 2)
Froth some milk (no preceding tasks)
Combine the froth milk and the espresso (depends on 3,4)
or as a tree:
Grind _
Coffe \
\
Heat ___\_Brew____
Water \_____Combine
/
Foam ____________/
Milk
In Java, using the concurrency API, this would be:
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.FutureTask;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
public class Barrista {
static class HeatWater implements Callable<String> {
@Override
public String call() throws Exception {
System.out.println("Heating Water");
Thread.sleep(1000);
return "hot water";
}
}
static class GrindBeans implements Callable<String> {
@Override
public String call() throws Exception {
System.out.println("Grinding Beans");
Thread.sleep(2000);
return "grinded beans";
}
}
static class Brew implements Callable<String> {
final Future<String> grindedBeans;
final Future<String> hotWater;
public Brew(Future<String> grindedBeans, Future<String> hotWater) {
this.grindedBeans = grindedBeans;
this.hotWater = hotWater;
}
@Override
public String call() throws Exception
{
System.out.println("brewing coffee with " + grindedBeans.get()
+ " and " + hotWater.get());
Thread.sleep(1000);
return "brewed coffee";
}
}
static class FrothMilk implements Callable<String> {
@Override
public String call() throws Exception {
Thread.sleep(1000);
return "some milk";
}
}
static class Combine implements Callable<String> {
public Combine(Future<String> frothedMilk, Future<String> brewedCoffee) {
super();
this.frothedMilk = frothedMilk;
this.brewedCoffee = brewedCoffee;
}
final Future<String> frothedMilk;
final Future<String> brewedCoffee;
@Override
public String call() throws Exception {
Thread.sleep(1000);
System.out.println("Combining " + frothedMilk.get() + " "
+ brewedCoffee.get());
return "Final Coffee";
}
}
public static void main(String[] args) {
ExecutorService executor = Executors.newFixedThreadPool(2);
FutureTask<String> heatWaterFuture = new FutureTask<String>(new HeatWater());
FutureTask<String> grindBeans = new FutureTask<String>(new GrindBeans());
FutureTask<String> brewCoffee = new FutureTask<String>(new Brew(grindBeans, heatWaterFuture));
FutureTask<String> frothMilk = new FutureTask<String>(new FrothMilk());
FutureTask<String> combineCoffee = new FutureTask<String>(new Combine(frothMilk, brewCoffee));
executor.execute(heatWaterFuture);
executor.execute(grindBeans);
executor.execute(brewCoffee);
executor.execute(frothMilk);
executor.execute(combineCoffee);
try {
/**
* Warning this code is blocking !!!!!!!
*/
System.out.println(combineCoffee.get(20, TimeUnit.SECONDS));
} catch (InterruptedException e) {
e.printStackTrace();
} catch (ExecutionException e) {
e.printStackTrace();
} catch (TimeoutException e) {
System.out.println("20 SECONDS FOR A COFFEE !!!! I am !##! leaving!!");
e.printStackTrace();
} finally{
executor.shutdown();
}
}
}
Make sure that you add timeouts, though, to ensure that your code will not wait forever for something to complete; that is done by using Future.get(long, TimeUnit) and then handling failure accordingly.
It is much nicer in Scala, however; here it is as it appears on the blog.
The code to prepare some coffee would look something like this:
def prepareCappuccino(): Try[Cappuccino] = for {
ground <- Try(grind("arabica beans"))
water <- Try(heatWater(Water(25)))
espresso <- Try(brew(ground, water))
foam <- Try(frothMilk("milk"))
} yield combine(espresso, foam)
where all the methods return a future (typed future), for instance grind would be something like this:
def grind(beans: CoffeeBeans): Future[GroundCoffee] = Future {
// grinding function contents
}
For all the implementations check out the blog, but that's all there is to it. You can integrate Scala and Java easily as well. I really recommend doing this sort of thing in Scala instead of Java: Scala requires much less code and is much cleaner and more event driven.
The general programming model for tasks with dependencies is Dataflow. A simplified model where each task has only one (though repeating) dependency is the Actor model. There are many actor libraries for Java, but very few for dataflow.
See also: which-actor-model-library-framework-for-java, java-pattern-for-nested-callbacks
Use a BlockingQueue. Put the output of task A into the queue, and task B blocks until something is available in the queue.
The docs contain example code to achieve this: http://docs.oracle.com/javase/6/docs/api/java/util/concurrent/BlockingQueue.html
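A small sketch of that approach (the task names follow the question; the String payload is just for illustration):
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class DependentTasks {
    public static void main(String[] args) {
        BlockingQueue<String> aToB = new ArrayBlockingQueue<>(10);

        Thread taskA = new Thread(() -> {
            try {
                aToB.put("output of A");              // Task A publishes its result
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        Thread taskB = new Thread(() -> {
            try {
                String fromA = aToB.take();           // Task B blocks until A's output is available
                System.out.println("B consumed: " + fromA);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        Thread taskC = new Thread(() -> System.out.println("C runs independently"));

        taskA.start();
        taskB.start();
        taskC.start();
    }
}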
Java defines a class CompletableFuture.
https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html
This is what you are looking for.
It helps to build execution flows.
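For the Task A/B/C example from the question, a sketch with CompletableFuture could look like this (the task bodies are placeholders):
CompletableFuture<String> a = CompletableFuture.supplyAsync(() -> "output of A");
CompletableFuture<String> b = a.thenApply(resultOfA -> "B used " + resultOfA);      // B runs only after A has completed
CompletableFuture<String> c = CompletableFuture.supplyAsync(() -> "C is independent");

CompletableFuture.allOf(b, c).join();   // wait for the whole flow if required
System.out.println(b.join() + " | " + c.join());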
What you need is a CountDownLatch.
final CountDownLatch gate = new CountDownLatch(2);
// thread a
new Thread() {
public void run() {
// process
gate.countDown();
}
}.start();
// thread c
new Thread() {
public void run() {
// process
gate.countDown();
}
}.start();
new Thread() {
public void run() {
try {
gate.await();
// both thread a and thread c have completed
// process thread b
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}.start();
As an alternative, depending on your scenario, you might also be able to use a BlockingQueue to implement the Producer-Consumer pattern. See the example on the documentation page.
If task B is dependent on task A's output, I would first question whether or not task B really is a separate task. Separating the tasks would make sense if there is:
Some non-trivial amount of work that task B can do before needing task A's results
Task B is a long ongoing process that handles output from many different instances of task A
There are some other tasks (say D) that also use task A's results
Assuming it is a separate task, then you can allow task A & B to share a BlockingQueue such that task A can pass task B data.
Use this library https://github.com/familysyan/TaskOrchestration. It manages the task dependency for you.
There is a Java library specifically for this purpose (disclaimer: I am the owner of this library) called Dexecutor.
Here is how you can achieve the desired result; you can read more about it here.
@Test
public void testDependentTaskExecution() {
DefaultDependentTasksExecutor<String, String> executor = newTaskExecutor();
executor.addDependency("A", "B");
executor.addIndependent("C");
executor.execute(ExecutionBehavior.RETRY_ONCE_TERMINATING);
}
private DefaultDependentTasksExecutor<String, String> newTaskExecutor() {
return new DefaultDependentTasksExecutor<String, String>(newExecutor(), new SleepyTaskProvider());
}
private ExecutorService newExecutor() {
return Executors.newFixedThreadPool(ThreadPoolUtil.ioIntesivePoolSize());
}
private static class SleepyTaskProvider implements TaskProvider<String, String> {
public Task<String, String> provid(final String id) {
return new Task<String, String>() {
@Override
public String execute() {
try {
//Perform some task
Thread.sleep(500);
} catch (InterruptedException e) {
e.printStackTrace();
}
String result = id + "processed";
return result;
}
@Override
public boolean shouldExecute(ExecutionResults<String, String> parentResults) {
ExecutionResult<String, String> firstParentResult = parentResults.getFirst();
//Do some logic with parent result
if ("B".equals(id) && firstParentResult.isSkipped()) {
return false;
}
return true;
}
};
}
}
