Using Java ExecutorService in an Online Application - java

I have a piece of functionality in an online application: I need to mail a receipt to the customer after the receipt is generated. My problem is that the mail function takes a long time, nearly 20 to 30 seconds, and the customer cannot wait that long during an online transaction.
So I have used a Java ExecutorService to run the mail service [sendMail] independently and return the response page to the customer whether the mail was sent or not.
Is it right to use an ExecutorService in an online application [HTTP request & response]? Below is my code. Kindly advise.
@RequestMapping(value = "/generateReceipt", method = RequestMethod.GET)
public @ResponseBody ReceiptBean generateReceipt(HttpServletRequest httpRequest, HttpServletResponse httpResponse) {
    // Other code here
    ...
    // I need to run the line below independently, since it takes a long time,
    // so I commented it out and wrote the executor service instead:
    // mailService.sendMail(httpRequest, httpResponse, receiptBean);
    java.util.concurrent.ExecutorService executorService = java.util.concurrent.Executors.newFixedThreadPool(10);
    executorService.execute(new Runnable() {
        ReceiptBean receiptBean1;
        public void run() {
            mailService.sendMail(httpRequest, httpResponse, receiptBean1);
        }
        public Runnable init(ReceiptBean receiptBean) {
            this.receiptBean1 = receiptBean;
            return this;
        }
    }.init(receiptBean));
    executorService.shutdown();
    return receiptBean;
}

You can do that, although I wouldn't expect this code in a controller class but in a separate one (Separation of Concerns and all).
However, since you seem to be using Spring, you might as well use its own task execution and scheduling support.
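For example, the mail call could live in a dedicated service annotated with @Async. This is only a sketch, assuming @EnableAsync is configured and assuming sendMail can be reworked to take only the ReceiptBean (the request and response objects should not be touched from another thread):

@Service
public class AsyncMailService {

    private final MailService mailService;

    public AsyncMailService(MailService mailService) {
        this.mailService = mailService;
    }

    // runs on Spring's task executor; the controller returns immediately
    @Async
    public void sendReceiptMail(ReceiptBean receiptBean) {
        // assumes sendMail was reworked to take only the bean
        mailService.sendMail(receiptBean);
    }
}

The controller would then simply call asyncMailService.sendReceiptMail(receiptBean) and return the receipt right away.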

It is fine to use an ExecutorService to make the mail sending asynchronous, but you should try to follow the SOLID principles in your design. Let the service layer take care of running the executor task.
https://en.wikipedia.org/wiki/SOLID

I agree with both @daniu and @Ankur regarding the separation of concerns you should follow. So just create a dedicated service like "EmailService" and inject it where needed.
Moreover, you are already using the Spring framework, so you can take advantage of its Async feature.
If you prefer to write your own async code, then I'd suggest using a CompletableFuture instead of the raw ExecutorService for better failure handling (maybe you want to store messages that were not sent in a queue to implement a retry feature or some other behaviour).
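A small sketch of that idea; the failedMailQueue used for retries is hypothetical and not part of the original code:

// fire the mail asynchronously and capture failures for a later retry
CompletableFuture
        .runAsync(() -> mailService.sendMail(httpRequest, httpResponse, receiptBean))
        .exceptionally(ex -> {
            // hypothetical retry queue: store the failed receipt for another attempt
            failedMailQueue.offer(receiptBean);
            return null;
        });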

Related

How to create a non-blocking @RestController webservice in Spring?

I have a @RestController webservice method that might block the response thread with a long-running service call, as follows:
@RestController
public class MyRestController {

    // could be another webservice API call, a long-running database query, whatever
    @Autowired
    private SomeSlowService service;

    @GetMapping()
    public Response get() {
        return service.slow();
    }

    @PostMapping()
    public Response post() {
        return service.slow();
    }
}
Problem: what if X users are calling my service here? The executing threads will all block until the response is returned, thus eating up "max-connections", max threads etc.
I remember that some time ago I read an article on how to solve this issue by parking threads somehow until the slow service response is received, so that those threads won't block e.g. the Tomcat max connection/pool.
But I cannot find it anymore. Maybe somebody knows how to solve this?
There are a few solutions, such as working with asynchronous requests. In those cases, a thread becomes free again as soon as the CompletableFuture, DeferredResult, Callable, ... is returned (and not necessarily completed).
For example, let's say we configure Tomcat like this:
server.tomcat.max-threads=5 # Default = 200
And we have the following controller:
#GetMapping("/bar")
public CompletableFuture<String> getSlowBar() {
return CompletableFuture.supplyAsync(() -> {
silentSleep(10000L);
return "Bar";
});
}
#GetMapping("/baz")
public String getSlowBaz() {
logger.info("Baz");
silentSleep(10000L);
return "Baz";
}
If we fired 100 requests at once, you would have to wait at least 200 seconds before all the getSlowBaz() calls are handled, since only 5 can be handled at a given time. With the asynchronous getSlowBar() requests on the other hand, you would have to wait at least 10 seconds, because all requests will likely be handled at once, and the thread then becomes available for others to use.
Is there a difference between CompletableFuture, Callable and DeferredResult? There isn't any difference result-wise; they all behave similarly.
The way you have to handle threading is a bit different though:
With Callable, you rely on Spring executing the Callable using a TaskExecutor.
With DeferredResult you have to do the thread-handling yourself, for example by executing the logic within the ForkJoinPool.commonPool().
With CompletableFuture, you can either rely on the default thread pool (ForkJoinPool.commonPool()) or you can specify your own thread pool.
Other than that, CompletableFuture and Callable are part of the Java specification, while DeferredResult is part of the Spring framework.
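As a rough illustration of the last two options, here is a sketch; the endpoint paths, myExecutor and the reuse of silentSleep from the example above are assumptions:

@GetMapping("/deferred")
public DeferredResult<String> getSlowDeferred() {
    DeferredResult<String> result = new DeferredResult<>();
    // with DeferredResult you do the thread handling yourself, e.g. on the common pool
    ForkJoinPool.commonPool().submit(() -> {
        silentSleep(10000L);
        result.setResult("Deferred");
    });
    return result;
}

@GetMapping("/custom")
public CompletableFuture<String> getSlowCustom() {
    // hypothetical dedicated pool passed explicitly instead of the common pool
    return CompletableFuture.supplyAsync(() -> {
        silentSleep(10000L);
        return "Custom";
    }, myExecutor);
}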
Be aware though that even though threads are released, connections are still kept open to the client. This means that with both approaches, the maximum number of requests that can be handled at once is limited by the connection limit (10000 by default), which can be configured with:
server.tomcat.max-connections=100 # Default = 10000
In my opinion, async may be better for the server, but for this particular API async does not work well: the clients still hold their connections open, so eventually it will still eat up "max-connections". You can send the request to a message queue (Kafka) and return success to the clients, then consume the request from the queue and pass it to the slow service.
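A minimal sketch of that message queue idea, assuming a local Kafka broker, a topic named slow-service-requests and a requestId/payload taken from the incoming call (all of these names are made up here; in practice the producer would also be created once and reused):

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
    // fire-and-forget: enqueue the work and answer the client immediately
    producer.send(new ProducerRecord<>("slow-service-requests", requestId, payload));
}
// return an "accepted" response to the client here;
// a separate consumer reads the topic and calls the slow service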

Long-running job in Spring

I want to create a web page which takes a string as input and starts a process. The process will run for a long time, and I need to email the results after processing.
Which Spring API should I use?
The user will close the browser once he makes the request. I am a newbie to Java EE and Spring.
Can anyone describe the architecture that is needed to accomplish this?
You could use some executor service:
@Bean
public ExecutorService executorService() {
    return Executors.newCachedThreadPool();
}
Then
executorService.execute(yourLongRunningTaskRunnable);
in your controller, where yourLongRunningTaskRunnable is, of course, a Runnable. This schedules your task for execution; the task itself can send an email at the end if needed.
This is not a Spring API, but it seems suitable for such a task.
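A rough sketch of how the pieces could fit together; the controller and the runLongProcess and sendResultByMail methods are invented for illustration:

@RestController
public class JobController {

    @Autowired
    private ExecutorService executorService;

    @PostMapping("/jobs")
    public String startJob(@RequestParam String input) {
        executorService.execute(() -> {
            String result = runLongProcess(input);  // the long-running work
            sendResultByMail(result);               // email the outcome at the end
        });
        return "Job accepted";                      // respond immediately, the browser can be closed
    }

    // placeholders for your own logic
    private String runLongProcess(String input) { return "..."; }
    private void sendResultByMail(String result) { /* send the email */ }
}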

Java: How to handle an API call that can take around 10 seconds

I have a requirement and I am a bit confused about its design.
Requirement: iOS makes a call to the backend (Java), and the backend makes a call to the cloud API, which returns a token for future calls. The cloud API might take approximately 6 to 10 seconds to return the actual result, so instead of making the caller wait 6 to 10 seconds it gives a token back and lets the caller (in my case the backend Java server) poll for the results.
Current approach: iOS calls the backend (Java server), the backend calls the cloud API and gets the token, then it sleeps the thread for 1 second; once the thread wakes up it hits the cloud API to get the status, and if the status is not completed Thread.sleep is invoked again, and this continues until the cloud API call gives the complete result. Once the cloud API returns the result, the backend returns the result to iOS.
This approach is not scalable and was done only to test the cloud API, but now we need a more scalable approach.
This is what I am thinking about: iOS calls the backend, the backend calls the API and sends the result back to iOS (it displays some static screen just to keep users engaged), and in the meantime it puts the object in a Spring thread pool executor. The executor hits the API every second and updates iOS through push notifications, and this continues until we get the final result from the cloud API.
This is better than the existing approach, but even this doesn't look scalable: the thread pool executor will get exhausted after some time (making it slow), and Thread.sleep is also not a good option.
I thought about using AWS SQS, but it doesn't provide real-time processing, and running background jobs every second doesn't seem to be a good option either.
I am also exploring Apache Kafka and trying to understand whether it fits my use case.
Let me know if someone has tackled a similar use case.
@EventListener in tandem with @Scheduled can be used here, if Spring 4.2 (or newer) is being used.
First, create an event object, say APIResult, which will hold the API result:
public class APIResult extends ApplicationEvent {
    public APIResult(Object result) {
        super(result); // the result doubles as the event source here
    }
    public Object getResult() {
        return getSource();
    }
}
Next, register a listener for the published APIResult event:
@Component
public class MyListener {

    @EventListener
    public void handleResult(APIResult result) {
        // do something ...
    }
}
Next, create a scheduled process which will hold the token(s) for which the result has not yet been retrieved:
@Component
public class MyScheduled {

    private final ApplicationEventPublisher publisher;
    private List<String> tokens = new ArrayList<>();

    @Autowired
    public MyScheduled(ApplicationEventPublisher publisher) {
        this.publisher = publisher;
    }

    @Scheduled(initialDelay = 1000, fixedRate = 5000) // modify it as per requirement
    public void callAPIForResult() {
        // call the API and get the result for each token ...
        this.publisher.publishEvent(new APIResult(result));
    }

    // methods to add & remove tokens
}
The overall process flow should be like this:
1) The application submits a request to the API and collects the respective token.
2) The token is passed to the scheduled service to fetch the result.
3) On its next run the scheduled service iterates over the available token(s) and calls the API to fetch the results (if a result is available it publishes the event, else it continues).
4) The published event is intercepted by the registered listener, which itself processes the result or delegates it as applicable.
This approach will transparently fetch results without messing with the business logic, while at the same time leveraging standard framework features, viz. scheduling and asynchronous event publishing & processing.
Although I have not tested this exact code, it should work, or at least give an idea of how to implement it. The general setup was tried with Spring Boot 1.5.1.RELEASE, which is backed by Spring 4.3.6.RELEASE.
Do let me know in the comments if any further information is required.
Reference - Application Events in Spring
I am thinking about using Spring's ConcurrentTaskExecutor (let's call it cloudApiCall): as soon as I receive the token from the cloud API, I will submit a future job to the executor and return the token to the mobile client. The thread associated with ConcurrentTaskExecutor (cloudApiCall) will pick up the job, call the cloud API and submit the response to another ConcurrentTaskExecutor (let's call it pushNotification), which will be responsible for pushing a silent notification to the mobile client. The thread associated with ConcurrentTaskExecutor (cloudApiCall) will also check the status of the call, and if a further call is required, it will submit the job back to ConcurrentTaskExecutor (cloudApiCall). This will continue until we get the complete response.
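A rough sketch of that flow; the cloud API client, result type and notification call are all invented for illustration:

// two pools: one for polling the cloud API, one for pushing notifications
private final TaskExecutor cloudApiCall = new ConcurrentTaskExecutor(Executors.newFixedThreadPool(4));
private final TaskExecutor pushNotification = new ConcurrentTaskExecutor(Executors.newFixedThreadPool(2));

public void pollForResult(String token) {
    cloudApiCall.execute(() -> {
        CloudResult result = cloudApi.checkStatus(token);   // hypothetical client call
        if (result.isComplete()) {
            // done: hand the response to the notification pool
            pushNotification.execute(() -> notifyClient(token, result));
        } else {
            // not done yet: submit the job back to the same pool (ideally with a small delay)
            pollForResult(token);
        }
    });
}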

Vert.x Unit Test a Verticle that does not implement the start method with future

I'm new to Vert.x and have just stumbled upon a problem.
I have the following Verticle:
public class HelloVerticle extends AbstractVerticle {

    @Override
    public void start() throws Exception {
        String greetingName = config().getString("greetingName", "Welt");
        String greetingNameEnv = System.getenv("GREETING_NAME");
        String greetingNameProp = System.getProperty("greetingName");

        Router router = Router.router(vertx);
        router.get("/hska").handler(routingContext -> {
            routingContext.response().end(String.format("Hallo %s!", greetingName));
        });
        router.get().handler(routingContext -> {
            routingContext.response().end("Hallo Welt");
        });

        vertx
            .createHttpServer()
            .requestHandler(router::accept)
            .listen(8080);
    }
}
I want to unit test this Verticle, but I don't know how to wait for the Verticle to be deployed.
@Before
public void setup(TestContext context) throws InterruptedException {
    vertx = Vertx.vertx();
    JsonObject config = new JsonObject().put("greetingName", "Unit Test");
    vertx.deployVerticle(HelloVerticle.class.getName(), new DeploymentOptions().setConfig(config));
}
When I set up my test like this, I have to add a Thread.sleep after the deploy call so that the tests only execute after some time of waiting for the Verticle.
I heard about Awaitility and that it should be possible to wait for the Verticle to be deployed with this library, but I didn't find any examples of how to use Awaitility with vertx-unit and the deployVerticle method.
Could anyone bring some light into this?
Or do I really have to hardcode a sleep timer after calling the deployVerticle method in my tests?
Have a look at the comments on the accepted answer.
First of all, you need to implement start(Future future) instead of just start(). Then you need to add a callback handler (Handler<AsyncResult<HttpServer>> listenHandler) to the listen(...) call, which then resolves the Future you got via start(Future future).
Vert.x is highly asynchronous, and so is the start of a Vert.x HTTP server. In your case, the Verticle is only fully functional once the HTTP server has started successfully. Therefore, you need to implement the approach mentioned above.
Second, you need to tell the TestContext that the asynchronous deployment of your Verticle is done. This can be done via another callback handler (Handler<AsyncResult<String>> completionHandler); there are blog posts showing how to do that.
The deployment of a Verticle is always asynchronous, even if you implemented the plain start() method. So you should always use a completionHandler if you want to be sure that your Verticle was successfully deployed before the test runs.
So no, you don't need to, and you definitely shouldn't, hardcode a sleep timer in any of your Vert.x applications. Mind The Golden Rule - Don't Block the Event Loop.
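For the test side, a minimal sketch using vertx-unit's asyncAssertSuccess() as the completion handler could look like this (a sketch only, assuming the JUnit 4 VertxUnitRunner):

@RunWith(VertxUnitRunner.class)
public class HelloVerticleTest {

    private Vertx vertx;

    @Before
    public void setup(TestContext context) {
        vertx = Vertx.vertx();
        JsonObject config = new JsonObject().put("greetingName", "Unit Test");
        // asyncAssertSuccess() makes the test wait until deployment has completed
        vertx.deployVerticle(HelloVerticle.class.getName(),
                new DeploymentOptions().setConfig(config),
                context.asyncAssertSuccess());
    }

    @After
    public void tearDown(TestContext context) {
        vertx.close(context.asyncAssertSuccess());
    }
}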
Edit:
If the initialisation of your Verticle is synchronous, you should override the plain start() method, as mentioned in the docs:
If your verticle does a simple, synchronous start-up then override this method and put your start-up code in there.
If the initialisation of your Verticle is asynchronous (e.g. starting a Vert.x HTTP server), you should override start(Future future) and complete the Future when your asynchronous initialisation is finished.
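A sketch of what the asynchronous variant could look like for the Verticle above, using the Vert.x 3 style where listen() accepts a result handler:

@Override
public void start(Future<Void> startFuture) throws Exception {
    Router router = Router.router(vertx);
    router.get().handler(ctx -> ctx.response().end("Hallo Welt"));

    vertx.createHttpServer()
            .requestHandler(router::accept)
            .listen(8080, result -> {
                if (result.succeeded()) {
                    startFuture.complete();          // the Verticle is now fully functional
                } else {
                    startFuture.fail(result.cause());
                }
            });
}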

When to use a Thread pool instead of calling new Thread

I have a JAX-RS/Jersey REST API which gets a request and needs to do an additional job in a separate thread, but I am not sure whether it would be advisable to use a thread pool or not. I expect a lot of requests to this API (a few thousand a day), but I only have a single additional job to run in the background.
Would it be bad to just create a new Thread each time like this? Any advice would be appreciated. I have not used a ThreadPool before.
@GET
@Path("/myAPI")
public Response myCall() {
    // call load in the background
    load();
    ...
    // do main job here
    mainJob();
    ...
}

private void load() {
    new Thread(new Runnable() {
        @Override
        public void run() {
            doSomethingInTheBackground();
        }
    }).start();
}
Edit:
Just to clarify. I only need a single additional job to run in the background. This job will call another API to log some info and that's it. But it has to do this for every request and I do not need to wait for a response. That's why I thought of just doing this in a new background thread.
Edit2:
So this is what I came up with now. Could anyone please tell me if this seems OK (it works locally), and whether I need to shut down the executor (see my comment in the code)?
// Configuration class
@Bean(name = "executorService")
public ExecutorService executorService() {
    return Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors() + 1);
}

// Some other class
@Qualifier("executorService")
@Autowired
private ExecutorService executorService;
....
private void load() {
    executorService.submit(new Runnable() {
        @Override
        public void run() {
            doSomethingInTheBackground();
        }
    });
    // If I enable this I will get a RejectedExecutionException
    // for a next request.
    // executorService.shutdown();
}
A thread pool is a good way of dealing with this, for two reasons:
1) you will reuse existing threads in the pool, which means less overhead;
2) more importantly, your system will not get bogged down if it comes under attack and some party tries to start zillions of sessions at once, because the size of the pool is preset.
Using thread pools is not complicated at all. See here for more about thread pools, and also take a look at the Oracle documentation.
It sounds to me like you don't need to create multiple threads at all
(although I might be wrong, I don't know the specifics of your task).
Could you perhaps create exactly one thread that does the background work, and give that thread a LinkedBlockingQueue to store the parameters of the doSomethingInTheBackground calls?
This solution wouldn't work if it is of the utmost importance that the background task starts right away, even when the server is under heavy load. But for example for my most recent task (retrieve text externally, return it to the API caller, then add the text to the SOLR layer with a delay) this was a perfectly fine solution.
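A minimal sketch of that single-worker-plus-queue idea; the String parameter type and the start-up call are assumptions for illustration:

// queue holding the parameters for doSomethingInTheBackground
private final BlockingQueue<String> jobs = new LinkedBlockingQueue<>();

// exactly one background thread draining the queue (call worker.start() once at startup)
private final Thread worker = new Thread(() -> {
    try {
        while (!Thread.currentThread().isInterrupted()) {
            String param = jobs.take();         // blocks until a job is available
            doSomethingInTheBackground(param);
        }
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();     // allow a clean shutdown
    }
});

// in the request handler, enqueueing is cheap and never blocks the caller
private void load(String param) {
    jobs.offer(param);
}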
I suggest using neither of the approaches you mention, but a JMS queue instead. You can easily embed an ActiveMQ instance in your application. First create one or more separate consumer threads in the background to pick up jobs from the queue.
Then, when a request is received, just push a message with the job details onto the JMS queue. This is a much better architecture, and more scalable, than fiddling with low-level threads or thread pools.
See also: this answer and the ActiveMQ site.
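A rough sketch of that setup with an embedded ActiveMQ broker; the queue name and message contents are made up, and in a real application the connection and sessions would be created once and managed properly:

// the vm:// transport starts an embedded ActiveMQ broker inside the application
ConnectionFactory factory = new ActiveMQConnectionFactory("vm://localhost?broker.persistent=false");
Connection connection = factory.createConnection();
connection.start();

// consumer side: one or more background listeners picking up jobs
Session consumerSession = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
Queue queue = consumerSession.createQueue("background.jobs");
MessageConsumer consumer = consumerSession.createConsumer(queue);
consumer.setMessageListener(message -> {
    // doSomethingInTheBackground(((TextMessage) message).getText());
});

// producer side: in the request handler, just push the job details and return
Session producerSession = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
MessageProducer producer = producerSession.createProducer(queue);
producer.send(producerSession.createTextMessage("job details"));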
