How to send multiple asynchronous requests in parallel using Unirest - java

While using Unirest, the program doesn't exit until we manually shut down every thread by invoking Unirest.shutdown(). If I had to make just one request, it would be easy:
private static void asyncRequest(String link) {
    try {
        Future<HttpResponse<JsonNode>> request = Unirest.head(link).asJsonAsync(
                new Callback<JsonNode>() {
                    @Override
                    public void completed(HttpResponse<JsonNode> httpResponse) {
                        print(httpResponse.getHeaders());
                        try {
                            Unirest.shutdown();
                        } catch (IOException e) {
                            e.printStackTrace();
                        }
                    }

                    @Override
                    public void failed(UnirestException e) {
                        print(e.getMessage());
                    }

                    @Override
                    public void cancelled() {
                        print("Request cancelled");
                    }
                }
        );
    } catch (Exception e) {
        e.printStackTrace();
    }
}

public static void main(String[] args) throws Exception {
    asyncRequest("https://entrepreneur.com");
}
But I have to make multiple HTTP requests in parallel (subsequent requests are meant not to wait for previous requests to complete). In the code above, I have to execute the code inside asyncRequest more than once, with different links. The problem is that I can't decide when to invoke Unirest.shutdown() so that the program exits as soon as the last request receives its response. If I call Unirest.shutdown() after all the calls to asyncRequest in main, some or all of the requests might get interrupted. If I call it inside completed (and the other overridden methods), only the first request is made and the others are interrupted. How can I solve this?

In theory, you could make the current thread wait for the execution of the requests and, after they are all done, call shutdown. But that would make the whole process synchronous, which is not what we want. So what I would do is run a different thread (other than the main one) which waits for the execution of all your HTTP requests. To do so you can use the class CountDownLatch, initialized with the number of requests that must complete before it releases control to the waiting thread. You pass the CountDownLatch instance to the async method and count it down each time an HTTP request completes. When it reaches 0, control returns to the waiting thread, and at that point you know you can call the shutdown method, since all your requests are done.
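A minimal sketch of that approach, assuming the com.mashape.unirest 1.x API used in the question (here the main thread itself awaits the latch before shutting down; the URLs and the println statements are illustrative):

import com.mashape.unirest.http.HttpResponse;
import com.mashape.unirest.http.JsonNode;
import com.mashape.unirest.http.Unirest;
import com.mashape.unirest.http.async.Callback;
import com.mashape.unirest.http.exceptions.UnirestException;

import java.util.concurrent.CountDownLatch;

public class ParallelRequests {

    // Every callback path counts the latch down exactly once, whatever the outcome.
    private static void asyncRequest(String link, CountDownLatch latch) {
        Unirest.head(link).asJsonAsync(new Callback<JsonNode>() {
            @Override
            public void completed(HttpResponse<JsonNode> response) {
                System.out.println(response.getHeaders());
                latch.countDown();
            }

            @Override
            public void failed(UnirestException e) {
                System.out.println(e.getMessage());
                latch.countDown();
            }

            @Override
            public void cancelled() {
                System.out.println("Request cancelled");
                latch.countDown();
            }
        });
    }

    public static void main(String[] args) throws Exception {
        String[] links = { "https://entrepreneur.com", "https://example.com" };
        CountDownLatch latch = new CountDownLatch(links.length);

        for (String link : links) {
            asyncRequest(link, latch);   // fire all requests without waiting
        }

        latch.await();       // block until every callback has counted down
        Unirest.shutdown();  // now it is safe to release Unirest's threads
    }
}

The requests are all issued before any await happens, so they run in parallel, and shutdown only runs once the last callback has fired.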

Related

What's the best way to release resources after java.util.Timer?

I have an AutoCloseable whose close() method is being called prematurely. The AutoCloseable is ProcessQueues below. I don't want close() to be called at the point where it is currently being called. I'm considering removing "implements AutoCloseable" to accomplish that. But then how do I know when to call ProcessQueues.close()?
public class ProcessQueues implements AutoCloseable {

    private ArrayList<MessageQueue> queueObjects = new ArrayList<MessageQueue>();

    public ProcessQueues() {
        queueObjects.add(new FFE_DPVALID_TO_SSP_EXCEPTION());
        queueObjects.add(new FFE_DPVALID_TO_SSP_ESBEXCEPTION());
        ...
    }

    private void scheduleProcessRuns() {
        try {
            for (MessageQueue obj : queueObjects) {
                monitorTimer.schedule(obj, new Date(), 1); // NOT THE ACTUAL ARGUMENTS
            }
        } catch (Exception ex) {
            // NOT THE ACTUAL EXCEPTION HANDLER
        }
    }

    public static void main(String[] args) {
        try (ProcessQueues pq = new ProcessQueues()) {
            pq.scheduleProcessRuns();
        } catch (Exception e) {
            // NOT THE ACTUAL EXCEPTION HANDLER
        }
    }

    @Override
    public void close() throws Exception {
        for (MessageQueue queue : queueObjects) {
            queue.close();
        }
    }
}
I want ProcessQueues.close() to be called, but not until the task execution threads of all Timer objects terminate. As written, ProcessQueues.close() will be called as soon as the tasks are scheduled. I can easily solve that by removing "implements AutoCloseable" from the ProcessQueues class (and removing the @Override annotation). But then I have to call ProcessQueues.close() myself. How do I know when the task execution threads of all Timer objects have terminated? That's when I want to call ProcessQueues.close().
Note that MessageQueue isn't instantiated in the resource specification header of a try-with-resources block, so although MessageQueue also implements AutoCloseable, the feature isn't utilized here. I'm explicitly calling MessageQueue.close(). It is in MessageQueue.close() that I release resources. Releasing those resources prematurely causes the task execution threads to fail to complete their tasks.
I'm considering an explicit call to ProcessQueues.close() after rewriting the code to prevent automatic resource deallocation, but again I don't know how to discover the right time for that explicit call.
I considered overriding ProcessQueues.finalize(), but "Java: How to Program", Eleventh Edition advises against that. "You should never use method finalize, because it can cause many problems and there's uncertainty as to whether it will ever get called before a program terminates... Now it's considered better practice for any class that uses system resources... to provide a method that programmers can call to release resources when they're no longer needed in a program." I have such a method. It's ProcessQueues.close(). But when should I call it?
You have conflicting lifecycle issues here.
You have Timer whose lifecycle is 100% in your control. You start it, you stop it, and that's it. But you have no direct introspection in to the status of the threads being managed by the Timer. So, you can't ask it if it has anything currently running, for example.
Then you have your MessageQueue, which is invoked by the Timer. This is the lifecycle you're interested in. You want to wait for all of the MessageQueues to be "done", for assorted values of done. But since the queues are constantly being rescheduled (given the Timer.schedule method that you're using), they're NEVER "done". They process their contents and go off and run again.
So, how is anyone to know when "done" means "done"?
Is it up to the MessageQueue? Or is it up to the ProcessQueues? Who's in command here?
Notice, nothing ever cancels the Timer. It just runs on and on and on.
So, how can one know when MessageQueue can be closed?
If MessageQueue is the real driver here, then you should add lifecycle methods to the MessageQueue that ProcessQueues can monitor to know when to shut things down. For example, you could create a CountDownLatch set for however many MessageQueues are in your list, and then subscribe to a new lifecycle method on the MessageQueue that it calls when it's finished. The callback method can then decrement the CountDownLatch, and the ProcessQueues.close method simply waits on the latch to countdown before closing everything.
public class ProcessQueues implements AutoCloseable, MessageQueueListener {

    private ArrayList<MessageQueue> queueObjects = new ArrayList<MessageQueue>();
    CountDownLatch latch;

    public ProcessQueues() {
        queueObjects.add(new FFE_DPVALID_TO_SSP_EXCEPTION());
        queueObjects.add(new FFE_DPVALID_TO_SSP_ESBEXCEPTION());
        ...
        queueObjects.forEach((mq) -> {
            mq.setListener(this);
        });
        latch = new CountDownLatch(queueObjects.size());
    }

    private void scheduleProcessRuns() {
        try {
            for (MessageQueue obj : queueObjects) {
                monitorTimer.schedule(obj, new Date(), 1); // NOT THE ACTUAL ARGUMENTS
            }
        } catch (Exception ex) {
            // NOT THE ACTUAL EXCEPTION HANDLER
        }
    }

    public static void main(String[] args) {
        try (ProcessQueues pq = new ProcessQueues()) {
            pq.scheduleProcessRuns();
        } catch (Exception e) {
            // NOT THE ACTUAL EXCEPTION HANDLER
        }
    }

    @Override
    public void close() throws Exception {
        latch.await();
        for (MessageQueue queue : queueObjects) {
            queue.close();
        }
        monitorTimer.cancel();
    }

    @Override
    public void messageQueueDone() {
        latch.countDown();
    }
}

public interface MessageQueueListener {
    public void messageQueueDone();
}

public class MessageQueue extends TimerTask {

    MessageQueueListener listener;

    public void setListener(MessageQueueListener listener) {
        this.listener = listener;
    }

    private boolean isMessageQueueReallyDone() {
        ...
    }

    @Override
    public void run() {
        ...
        if (isMessageQueueReallyDone() && listener != null) {
            listener.messageQueueDone();
        }
    }
}
Mind, this means that your try-with-resources block will block, waiting on all of the MessageQueues. If that's what you want, then you're good to go.
It also crassly assumes that your MessageQueue.run() knows when to shut down, which goes back to that "who's in control here" thing.
I could terminate the Timer, but having it run perpetually is intentional. The question is in consideration of what happens when something else terminates the Timer and the MessageQueue objects are no longer needed. It is at that point that I would like to call ProcessQueues.close().
If I were to use the Executor framework, rather than Timer, then I could use ExecutorService.awaitTermination(long timeout, TimeUnit unit)
TimerTask is a Runnable, and MessageQueue is already a TimerTask, so MessageQueue need not change.
'ExecutorService.awaitTermination(Long.MAX_VALUE, TimeUnit.DAYS)' would effectively wait forever for termination.
public static void main(String[] args) {
    try (ProcessQueues pq = new ProcessQueues()) {
        pq.scheduleProcessRuns();
        // Don't take this literally.
        ExecutorService.awaitTermination(Long.MAX_VALUE, TimeUnit.DAYS);
    } catch (Exception e) {
        // NOT THE ACTUAL EXCEPTION HANDLER
    }
}
Of course, awaitTermination isn't a static method, so I'll have to have an ExecutorService, but you get the idea.
After termination, the AutoCloseable feature is leveraged and ProcessQueues.close() is implicitly called.
All that remains is to start the threads for perpetually repeated calls to each TimerTask, using the Executor framework. The answer to that question is ScheduledExecutorService.
I think this will work.
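A rough sketch of that direction, assuming MessageQueue keeps extending TimerTask (and is therefore a Runnable); the pool size, the one-millisecond period, and the scheduler field name are illustrative:

import java.util.ArrayList;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ProcessQueues implements AutoCloseable {

    private final ArrayList<MessageQueue> queueObjects = new ArrayList<>();
    private final ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(4);

    public ProcessQueues() {
        // queueObjects populated as in the question
    }

    private void scheduleProcessRuns() {
        for (MessageQueue obj : queueObjects) {
            // MessageQueue is a TimerTask, hence a Runnable, so it can be scheduled as-is
            scheduler.scheduleAtFixedRate(obj, 0, 1, TimeUnit.MILLISECONDS); // NOT THE ACTUAL PERIOD
        }
    }

    public static void main(String[] args) {
        try (ProcessQueues pq = new ProcessQueues()) {
            pq.scheduleProcessRuns();
            // Blocks until something else calls pq.scheduler.shutdown()
            pq.scheduler.awaitTermination(Long.MAX_VALUE, TimeUnit.DAYS);
        } catch (Exception e) {
            // NOT THE ACTUAL EXCEPTION HANDLER
        }
    }

    @Override
    public void close() throws Exception {
        for (MessageQueue queue : queueObjects) {
            queue.close();
        }
    }
}

When awaitTermination returns (after a shutdown elsewhere), the try-with-resources block exits and close() runs, which is exactly the ordering being asked for.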

Event driven to continue request thread execution in Spring MVC

There is a method foo() in a controller that has to wait until another method, bar(), is triggered before it continues execution.
#GetMapping("/foo")
public void foo(){
doSomething();
// wait until method bar() triggered
doAnotherSomething();
}
#GetMapping("/bar")
public void bar(){
// make foo() continue execute after being called
}
My solution so far: save a status flag in a database/cache; while foo() is waiting, its thread loops, checking whether the status has changed.
However, this blocks the request thread for seconds.
Is there any way to make foo() run asynchronously, so that it doesn't block the thread?
This question is too broad. Yes you can use DeferredResult to finish a web request later. But doAnotherSomething() should actually do stuff asynchronously, otherwise you still end up using a thread, just not the one from the app server's pool. Which would be a waste since you can simply increase the app server's pool size and be done with it. "Offloading" work from it to another pool is a wild goose chase.
You achieve truly asynchronous execution when you wait on more than one action in a single thread. For example by using asynchronous file or socket channels you can read from multiple files/sockets at once. If you're using a database, the database driver must support asynchronous execution.
Here's an example of how to use the mongodb async driver:
#GetMapping("/foo")
public DeferredResult<ResponseEntity<?>> foo() {
DeferredResult<ResponseEntity<?>> res = new DeferredResult<>();
doSomething();
doAnotherSomething(res);
return res;
}
void doAnotherSomething(DeferredResult<ResponseEntity<?>> res) {
collection.find().first(new SingleResultCallback<Document>() {
public void onResult(final Document document, final Throwable t) {
// process (document)
res.setResult(ResponseEntity.ok("OK")); // finish the request
}
});
}
You can use a CountDownLatch to wait until the dependent method has executed. For the sake of simplicity, I have used a static field. Make sure both methods have access to the same CountDownLatch object. A ThreadLocal<CountDownLatch> could also be considered for this use case.
private static CountDownLatch latch = new CountDownLatch(1);

@GetMapping("/foo")
public void foo() throws InterruptedException {
    doSomething();
    // wait until method bar() is triggered
    latch.await();
    doAnotherSomething();
}

@GetMapping("/bar")
public void bar() {
    // make foo() continue executing after this is called
    latch.countDown();
}

How to execute business logic handler in a separate thread pool using netty

I have a handler that needs to execute some business logic, and I want it to run in a separate thread pool so that it doesn't block the I/O event loop. I have added a DefaultEventExecutorGroup to the pipeline as described in the http://netty.io/4.0/api/io/netty/channel/ChannelPipeline.html javadoc and the http://netty.io/wiki/new-and-noteworthy-in-4.0.html#no-more-executionhandler---its-in-the-core wiki:
ch.pipeline().addLast(new DefaultEventExecutorGroup(10), new ServerHandler());
Just for testing purposes my ServerHandler just puts the current thread to sleep for 5 seconds:
@Override
protected void channelRead0(ChannelHandlerContext ctx, Command cmd) throws Exception {
    System.out.println("Starting.");
    try {
        Thread.sleep(5000);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
    System.out.println("Finished.");
}
But apparently the business logic is still executed synchronously:
Starting.
Finished.
Starting.
Finished.
Starting.
Finished.
What am I missing?
If your goal is simply not to block the I/O event loop, you did it right. But due to Netty specifics, your handler will always be attached to the same thread of the EventExecutorGroup, so the behavior you described above is expected.
If you want blocking operations to run in parallel as soon as they arrive, you need a different approach: a separate ThreadPoolExecutor. Like this:
ch.pipeline().addLast(new ServerHandler(blockingThreadPool));
where blockingThreadPool is regular ThreadPoolExecutor.
For example:
ExecutorService blockingThreadPool = Executors.newFixedThreadPool(10);
Now, within your logic handler you can submit blocking tasks to this executor like this:
protected void channelRead0(ChannelHandlerContext ctx, Command cmd) throws Exception {
    blockingThreadPool.execute(new Runnable() {
        @Override
        public void run() {
            System.out.println("Starting.");
            try {
                Thread.sleep(5000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            System.out.println("Finished.");
        }
    });
}
You can also pass context to this runnable in order to return the response back when processing is finished if needed.
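For instance, a sketch of that idea (process(cmd) is a hypothetical business-logic method; ctx.writeAndFlush can be called from the worker thread because Netty hands the write back to the channel's event loop):

protected void channelRead0(ChannelHandlerContext ctx, Command cmd) throws Exception {
    blockingThreadPool.execute(new Runnable() {
        @Override
        public void run() {
            Object response = process(cmd);  // hypothetical long-running business logic
            ctx.writeAndFlush(response);     // the write is scheduled on the channel's event loop
        }
    });
}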
Because Netty handles requests sent from the same socket on the same EventExecutor, you can start more than one client and observe the result.

How to make a thread wait for a server response

I am currently writing a small Java program where I have a client sending commands to a server. A separate Thread is dealing with replies from that server (the reply is usually pretty fast). Ideally I pause the Thread that made the server request until such time as the reply is received or until some time limit is exceeded.
My current solution looks like this:
public void waitForResponse() {
    thisThread = Thread.currentThread();
    try {
        Thread.sleep(10000);
        // This should not happen.
        System.exit(1);
    } catch (InterruptedException e) {
        // continue with the main program
    }
}

public void notifyOKCommandReceived() {
    if (thisThread != null) {
        thisThread.interrupt();
    }
}
The main problem is that this code relies on an exception being thrown when everything goes as it should, and terminates when something bad happens. What is a good way to fix this?
There are multiple concurrency primitives that allow you to implement this kind of thread communication. You can use a CountDownLatch to accomplish a similar result:
public void waitForResponse() throws InterruptedException {
    boolean result = latch.await(10, TimeUnit.SECONDS);
    // check result and react accordingly (false means the wait timed out)
}

public void notifyOKCommandReceived() {
    latch.countDown();
}
Initialize the latch before sending the request, as follows:
latch = new CountDownLatch(1);
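Putting it together, a minimal self-contained sketch (the class and field names are illustrative; a fresh latch is created per request because a CountDownLatch cannot be reset):

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class ResponseWaiter {

    private volatile CountDownLatch latch;

    // Requesting thread: call just before sending the command.
    public void prepareForResponse() {
        latch = new CountDownLatch(1);
    }

    // Requesting thread: call after sending; returns true if the reply arrived in time.
    public boolean waitForResponse() throws InterruptedException {
        return latch.await(10, TimeUnit.SECONDS);
    }

    // Reply-handling thread: call when the server's OK response is received.
    public void notifyOKCommandReceived() {
        latch.countDown();
    }
}

The requesting thread calls prepareForResponse(), sends the command, then calls waitForResponse() and treats a false return value as a timeout.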

How to call asynchronous method inside java thread?

In Java, is there any way to call and handle an asynchronous method from inside a thread?
Consider a scenario in which one of the methods inside the thread body takes a long time to execute, so the thread takes longer to complete.
I have tried some examples using concurrency-package classes like FutureTask and Executors.
Is it possible to implement and handle all exceptions inside the asynchronous method? And is it possible to get success and error responses, like AJAX success and error handlers in JavaScript?
How can we tell whether the asynchronous method executed successfully or not (with or without the parent thread's context)?
The most natural way to communicate between an async method and the parent thread is the standard class CompletableFuture:
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;

public class AsyncExample {

    String input; // common data

    // async method
    public String toLower() {
        return input.toLowerCase();
    }

    // method on main thread
    public void run() {
        input = "INPUT"; // set common data
        try {
            // start async method
            CompletableFuture<String> future = CompletableFuture.supplyAsync(this::toLower);
            // here we can work in parallel
            String result = future.get(); // get the async result
            System.out.println("input=" + input + "; result=" + result);
        } catch (InterruptedException | ExecutionException e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] args) {
        new AsyncExample().run();
    }
}
Note that creation and warming of an Executor, including the default executor used in the example, requires some time (50 ms on my computer), so you may want to create and warm one beforehand, e.g. by supplying an empty method:
CompletableFuture.supplyAsync(()->null).get();
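To get AJAX-style success and error handlers, as the question asks, you can attach callbacks to the CompletableFuture instead of blocking on get(); a sketch (the handler bodies are illustrative, and the final join() is only there so the demo JVM doesn't exit before the callbacks run):

import java.util.concurrent.CompletableFuture;

public class AsyncCallbacks {
    public static void main(String[] args) {
        CompletableFuture<String> future = CompletableFuture.supplyAsync(() -> {
            // long-running work; any thrown exception completes the future exceptionally
            return "INPUT".toLowerCase();
        });

        future
            .thenAccept(result -> System.out.println("success: " + result)) // like an AJAX success handler
            .exceptionally(t -> {                                           // like an AJAX error handler
                System.err.println("error: " + t);
                return null;
            });

        future.join(); // keep the JVM alive until the async work finishes
    }
}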
