Invoke a method call asynchronously without blocking the main thread - Java

I have a scenario where a Spring Boot application has to download a file from a downstream application and pass it to the client. The API also needs to update a read flag in the database without blocking the response (main thread).
A basic async use case is what I thought of and implemented in the respective API. But I am running into a behavioral issue with @Async. The annotation does spawn a new thread, but it is blocking the main thread and holding the response. The expectation was to return without holding the main thread.
The async update is the last operation of the main thread, and I suspect that is why @Async appears to block it.
Can anyone please suggest a better solution for this scenario?
Calling class
ResponseEntity<byte[]> parsedResponse = retrieverService.retrieve(id,"html");
retrieverService.update(id);
return parsedResponse;
Async method
@Override
@Async("updateTaskExecutor")
public void update(String id) {
    LOG.info("Updating data for metaTagId: {}", id);
    db.updateReadFlag(id);
}
Async Config
@Configuration
@EnableAsync
public class AsyncConfiguration {

    @Bean(name = "updateTaskExecutor")
    public Executor updateTaskExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(100);
        executor.setMaxPoolSize(100);
        executor.setQueueCapacity(100);
        executor.setThreadNamePrefix("UpdateTaskClient-");
        executor.initialize();
        return executor;
    }
}

The configuration was correct. I was using the debugger to check the parallelism; as @M. Deinum suggested, that is not the right way to verify it. After adding a Thread.sleep() to the async method, I could see that the asynchronous calls work as expected: I am able to send the response back while the update query runs asynchronously.
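A sketch of how such a check might look, reusing the update method from the question; the 5-second delay and the extra thread-name log are illustrative additions, not production code:
@Override
@Async("updateTaskExecutor")
public void update(String id) {
    LOG.info("Updating data for metaTagId: {} on thread {}", id, Thread.currentThread().getName());
    try {
        Thread.sleep(5000); // simulated slow update; the HTTP response should come back before this finishes
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
    db.updateReadFlag(id);
}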

Related

Spring - add a low priority multi threaded service (no impact to production performance)

We have a Spring application. I want to add a service that will handle tens of thousands of IDs with multiple threads, but run as a background process without impacting real-time production traffic.
The service will update the database and send requests to external providers.
I don't want the service to affect production performance or timing; I want to execute the operation on each ID at a low priority.
I read a previous post about setting the priority on an Executor, but I want the priority to be lower than that of all other threads, including threads outside this specific Executor's scope.
Is the answer using ThreadPoolExecutor more relevant to my case?
ThreadPoolExecutor threadPool = new ThreadPoolExecutor(1, numOfWorkerThreads, 0L,
        TimeUnit.MILLISECONDS, new LinkedBlockingQueue<Runnable>());
threadPool.setThreadFactory(new OpJobThreadFactory(Thread.NORM_PRIORITY - 2));

public final static class OpJobThreadFactory implements ThreadFactory {
    private final String namePrefix = "op-job-";
    private final AtomicInteger threadNumber = new AtomicInteger(1);
    private final boolean daemon;
    private final int priority;

    public OpJobThreadFactory(int priority) {
        this(priority, true);
    }

    public OpJobThreadFactory(int priority, boolean daemon) {
        this.priority = priority;
        this.daemon = daemon;
    }

    @Override
    public Thread newThread(Runnable r) {
        Thread t = new Thread(r, namePrefix + threadNumber.getAndIncrement());
        t.setDaemon(daemon);
        t.setPriority(priority);
        return t; // was missing in the original snippet
    }
}
Maybe even use Thread.MIN_PRIORITY?
Or am I fine with using Executors.newCachedThreadPool()?
Creates a thread pool that creates new threads as needed, but will reuse previously constructed threads when they are available. These pools will typically improve the performance of programs that execute many short-lived asynchronous tasks.
Also, should I use a Spring bean? Because I need to create the pool on demand/per request, it seems unneeded/wrong.
EDIT
Should I use Spring Actuator to monitor this task, or another monitoring tool?
Spring Boot Actuator module helps you monitor and manage your Spring Boot application by providing production-ready features like health check-up, auditing, metrics gathering, HTTP tracing etc. All of these features can be accessed over JMX or HTTP endpoints.
I would like to throw some light on the question.
What is a thread priority? According to the Java SE docs:
Every thread has a priority. Threads with higher priority are executed in preference to threads with lower priority.
Even though you create threads with a given priority, that does not fully guarantee the execution order; you may have to explicitly block the lower-priority thread until the other threads have finished.
For small Java programs you can manage thread execution yourself, but for larger programs it is recommended to use either the Executor framework from vanilla Java (in the java.util.concurrent package) or Spring's TaskExecutor. Both let you execute tasks asynchronously, i.e. in the background alongside the main task.
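For example, the vanilla java.util.concurrent version boils down to a pool and a submit call; a minimal sketch (class name and pool size are illustrative):
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class BackgroundSubmitExample {
    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        // hand the work off; the submitting (main) thread continues immediately
        pool.submit(() -> System.out.println("running in " + Thread.currentThread().getName()));
        System.out.println("submitted from " + Thread.currentThread().getName());
        pool.shutdown(); // finish queued work, then stop the pool
    }
}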
Impact on production:
The main task, for example, is a call to your REST endpoint, say /account. On calling the account endpoint you want to send welcome emails to customers via a third-party API; that call can be executed asynchronously using either the Executor framework or Spring's TaskExecutor. Executed asynchronously, i.e. as a background process, it will not hold up the current API call, but it will still have an impact on the production server, because the threads run inside the same JVM and share its memory. If threads keep being created and never destroyed, your server will eventually go down.
So using the Executor framework or Spring's TaskExecutor does not guarantee zero impact on production; it will, however, improve the response time of the REST API that is called, since the extra work runs asynchronously. On your other questions:
Can I use Executors.newCachedThreadPool()?
Yes, if you have a number of short-lived tasks, such as updating a single column in a database or calling a REST endpoint once. It is not suitable for bulk loading or for a backend job that updates 10,000 records, because it will keep creating new threads for the queued work and you will likely run into memory problems.
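For bulk work, a bounded pool with a bounded queue is the usual alternative; a minimal sketch (sizes and the back-pressure policy are illustrative choices, not taken from the answer above):
import java.util.concurrent.ExecutorService;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BulkPoolSketch {
    // Fixed worker count and a bounded queue cap memory use for large batches.
    static final ExecutorService BULK_POOL = new ThreadPoolExecutor(
            4, 4,                                       // fixed number of workers
            0L, TimeUnit.MILLISECONDS,
            new LinkedBlockingQueue<>(1_000),           // bounded queue instead of unbounded growth
            new ThreadPoolExecutor.CallerRunsPolicy()); // run overflow on the caller (back-pressure)
}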
Also, should I use a Spring bean? Because I need to create a pool on demand/per request, it seems unneeded/wrong.
Yes, you can use ThreadPoolTaskExecutor: https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/scheduling/concurrent/ThreadPoolTaskExecutor.html
As per the docs, setting "queueCapacity" to 0 mimics Executors.newCachedThreadPool(), so you can use either Spring's TaskExecutor or the Executor framework from vanilla Java. I personally recommend Spring's TaskExecutor, which has more features. For a good start on using Spring async, see this tutorial: https://egkatzioura.com/2017/10/25/spring-and-async/
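If you go the Spring route, a cached-pool-like ThreadPoolTaskExecutor bean could look roughly like this (bean name and thread prefix are my own placeholders):
@Bean("onDemandTaskExecutor")
public ThreadPoolTaskExecutor onDemandTaskExecutor() {
    ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
    executor.setCorePoolSize(0);
    executor.setMaxPoolSize(Integer.MAX_VALUE);
    executor.setKeepAliveSeconds(60);
    executor.setQueueCapacity(0); // per the javadoc, 0 mimics Executors.newCachedThreadPool()
    executor.setThreadNamePrefix("bg-task-");
    return executor;
}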
Final verdict: if you are only looking to execute a task asynchronously or as a background process, you can use either the Executor framework from Java or Spring's TaskExecutor, but both will have some impact on production since they run in the same JVM. If you want no impact on production at all, I recommend creating a separate Spring Boot app on a different server, making the database or service calls from that new app, exposing it as a REST endpoint, and invoking those endpoints asynchronously from your main app using Spring's TaskExecutor.
Spring TaskExecutor: https://egkatzioura.com/2017/10/25/spring-and-async/
Java Executor framework: https://stackabuse.com/concurrency-in-java-the-executor-framework/
For using threads with low priority: https://medium.com/@daniyaryeralin/priority-based-thread-pooling-in-spring-framework-d74b91b51dcb
Sorry if the answer is too long :)
Here is a nice tutorial about priority-based task execution in Spring; it may help you.
The idea is to create a configurable "heap" of tasks and always keep your most important task at the top of the heap.
To sum up the process, you create a custom task executor. First you define a ThreadPoolTaskExecutor bean with one method overridden. The properties to tune are corePoolSize (the initial number of threads), queueCapacity (the number of tasks waiting in the queue), and maxPoolSize (the maximum number of threads). With these parameters you can bound the service so that it does not impact production performance.
@Bean("CustomTaskExecutor")
public TaskExecutor threadPoolTaskExecutor(
        @Value("${spring.async.core-pool-size}") int corePoolSize,
        @Value("${spring.async.max-pool-size}") int maxPoolSize,
        @Value("${spring.async.queue-capacity}") int queueCapacity) {
    ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor() {
        @Override
        protected BlockingQueue<Runnable> createQueue(int queueCapacity) {
            return new PriorityBlockingQueue<Runnable>(queueCapacity);
        }
    };
    executor.setCorePoolSize(corePoolSize);
    executor.setMaxPoolSize(maxPoolSize);
    executor.setQueueCapacity(queueCapacity);
    return executor;
}
After that, you need to create tasks with priorities that the task executor can understand. For that we need two classes:
1) A custom class that implements the Runnable interface and runs the task.
2) A wrapper class that extends FutureTask and implements Comparable, so that the task executor understands the priority-based picking logic.
public class Task implements Runnable {

    private Consumer<Job> jobConsumer;
    private Job job;

    public Job getJob() {
        return this.job;
    }

    public Task(Consumer<Job> jobConsumer, Job job) {
        this.jobConsumer = jobConsumer;
        this.job = job;
    }

    @Override
    public void run() {
        this.jobConsumer.accept(job);
    }
}
Then you have the FutureCustomTask class:
public class FutureCustomTask extends FutureTask<FutureCustomTask> implements Comparable<FutureCustomTask> {

    private Task task;

    public FutureCustomTask(Task task) {
        super(task, null);
        this.task = task;
    }

    @Override
    public int compareTo(FutureCustomTask o) {
        return task.getJob().getPriority().compareTo(o.task.getJob().getPriority());
    }
}
For execution, the TaskExecutor needs to be autowired.
Then you can create your Task object, wrap it in a FutureCustomTask, and pass it to the TaskExecutor. The code should look like this:
@Autowired
private TaskExecutor taskExecutor;

@Autowired
private JobBusiness jobBusiness;
...
Task task = new Task(jobBusiness::performSomethingOn, job);
taskExecutor.execute(new FutureCustomTask(task));

How to create a non-blocking @RestController webservice in Spring?

I have a @RestController webservice method that might block the response thread with a long-running service call, as follows:
@RestController
public class MyRestController {

    // could be another webservice API call, a long-running database query, whatever
    @Autowired
    private SomeSlowService service;

    @GetMapping()
    public Response get() {
        return service.slow();
    }

    @PostMapping()
    public Response post() {
        return service.slow();
    }
}
Problem: what if X users are calling my service here? The executing threads will all block until the response is returned, eating up "max-connections", max threads, etc.
I remember reading an article some time ago on how to solve this by somehow parking threads until the slow service response is received, so that those threads don't block e.g. the Tomcat connection/thread pool.
But I cannot find it anymore. Maybe somebody knows how to solve this?
There are a few solutions, such as working with asynchronous requests. In those cases, the thread becomes free again as soon as the CompletableFuture, DeferredResult, Callable, ... is returned (not necessarily completed).
For example, let's say we configure Tomcat like this:
server.tomcat.max-threads=5 # Default = 200
And we have the following controller:
@GetMapping("/bar")
public CompletableFuture<String> getSlowBar() {
    return CompletableFuture.supplyAsync(() -> {
        silentSleep(10000L);
        return "Bar";
    });
}

@GetMapping("/baz")
public String getSlowBaz() {
    logger.info("Baz");
    silentSleep(10000L);
    return "Baz";
}
If we fired 100 requests at once, you would have to wait at least 200 seconds before all the getSlowBaz() calls are handled, since only 5 can be handled at a time. With the asynchronous getSlowBar() on the other hand, you would have to wait at least 10 seconds, because all requests will likely be handled at once and the threads then become available for others to use.
Is there a difference between CompletableFuture, Callable and DeferredResult? Result-wise there isn't any; they all behave similarly.
The way you have to handle threading is a bit different though:
With Callable, you rely on Spring executing the Callable using a TaskExecutor.
With DeferredResult you have to do the thread handling yourself, for example by executing the logic on ForkJoinPool.commonPool() (sketched below).
With CompletableFuture, you can either rely on the default thread pool (ForkJoinPool.commonPool()) or specify your own.
Other than that, CompletableFuture and Callable are part of the Java specification, while DeferredResult is part of the Spring framework.
Be aware though: even though threads are released, connections are still kept open to the client. This means that with both approaches the maximum number of requests that can be handled at once is limited by the connection limit, 10000 by default, which can be configured with:
server.tomcat.max-connections=100 # Default = 10000
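As a concrete illustration of the DeferredResult variant mentioned in the list above (the endpoint path, payload and delay are illustrative; silentSleep is the same helper as in the earlier snippets):
@GetMapping("/qux")
public DeferredResult<String> getSlowQux() {
    DeferredResult<String> result = new DeferredResult<>();
    // thread handling is up to us here; commonPool() is just one possible choice
    ForkJoinPool.commonPool().submit(() -> {
        silentSleep(10000L);
        result.setResult("Qux");
    });
    return result; // the servlet thread is released immediately
}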
In my opinion, async may be better for the server, but for this particular API it does not work well: the clients still hold their connections, so you will eventually eat up "max-connections" anyway. You can instead send the request to a message queue (Kafka) and return success to the client immediately, then consume the request from the queue and pass it to the slow service.
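A rough sketch of that hand-off, assuming spring-kafka on the classpath; the topic name, endpoint and payload type are illustrative:
import org.springframework.http.ResponseEntity;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class HandoffController {

    private final KafkaTemplate<String, String> kafkaTemplate;

    public HandoffController(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    @PostMapping("/work")
    public ResponseEntity<String> submit(@RequestBody String payload) {
        // enqueue and return right away; a separate consumer calls the slow service
        kafkaTemplate.send("slow-requests", payload);
        return ResponseEntity.accepted().body("queued");
    }
}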

Can API Request (GET Call) return a response to client and start a background task to finish request

I am using Spring Boot 1.4 and Java 8. I want to know whether it is possible, when I receive a GET request for an API in the controller, to immediately return a response to the client and then run a background task for the request (handling success and exception scenarios). I understand we can use CompletableFuture for async processing, but from the controller method we generally send the response only after using thenApply, exceptionally or get; so even though we have spawned a new thread, the main thread is still not free. I am looking for a fire-and-forget kind of use case. Please suggest how this may be feasible.
As stated in the comments, you can use the async functionality from Spring. For that you'll need a configuration like:
@EnableAsync
@Configuration
public class AsyncConfig {

    @Bean
    public Executor threadPoolTaskExecutor() {
        return new ConcurrentTaskExecutor(Executors.newCachedThreadPool());
    }
}
Then put the annotation on the method that runs the background task:
@Async
public void runBgTask() { /* ... */ } // should be public and live in a Spring bean other than the caller, or the async proxy won't apply
and call it in your controller method
@GetMapping("/foo")
public Foo hello() {
    runBgTask();
    return new Foo();
}

Synchronous behavior when using methods of Java CompletableFuture

I am using Java's CompletableFuture like this inside a Spring Boot @Service:
@Service
public class ProcessService {

    private static final ExecutorService EXECUTOR = Executors.newFixedThreadPool(3);

    @Autowired
    ChangeHistoryService changeHistoryService;

    public Attribute process(Attribute attribute) {
        // some code
        CompletableFuture.runAsync(() -> changeHistoryService.logChanges(attribute), EXECUTOR);
        return attribute;
    }
}
The process method is called from a method inside a @RestController:
@RestController
public class ProcessController {

    @Autowired
    ProcessService processService;

    @RequestMapping(value = "/processAttribute",
            method = {RequestMethod.POST},
            produces = {MediaType.APPLICATION_JSON_VALUE},
            consumes = {MediaType.APPLICATION_JSON_VALUE})
    public Attribute applyRules(@RequestBody Attribute attribute) {
        Attribute resultValue = processService.process(attribute);
        return resultValue;
    }
}
ChangeHistoryService::logChanges only saves some data to the database based on its parameter.
I have a microservice that makes a number of requests to this /processAttribute endpoint and prints all responses.
When I put a breakpoint in the logChanges method, the microservice waits on some requests but not all, which makes me think that ChangeHistoryService::logChanges does not always run asynchronously. If I don't supply runAsync with an ExecutorService, the microservice blocks on more requests, but still not all.
From what I understood, this is because the method that processes the request and the logChanges method share the same thread pool (ForkJoinPool?).
Anyway, since I use a separate ExecutorService, shouldn't logChanges run independently? Or is it something about how the IDE treats breakpoints on async tasks? I am using IntelliJ IDEA.
The problem was that the breakpoint suspends all threads, not only the thread that runs the logChanges method. I fixed this in IntelliJ IDEA by right-clicking the breakpoint and selecting the "Thread" suspend policy instead of "All".
You have a rather small thread pool, so it's no wonder that you can saturate it. The threads that process requests are not the same as the ones processing your CompletableFutures: one set is an internal component of the server, the other is the one you explicitly created, EXECUTOR.
If you want to increase the asynchronousness, try giving EXECUTOR some more threads and see how the behaviour changes. Currently EXECUTOR is the bottleneck, since there are far more threads available for handling requests than for the futures.
Note that by putting a breakpoint inside logChanges() you block one thread of the pool, making it even more saturated.
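One way to see which pool actually runs logChanges is to name the executor's threads and log the current thread; a sketch along those lines (the pool size, thread names and the LOG field are assumptions added for illustration):
private static final AtomicInteger COUNTER = new AtomicInteger();
private static final ExecutorService EXECUTOR = Executors.newFixedThreadPool(10,
        r -> new Thread(r, "change-history-" + COUNTER.incrementAndGet()));

public Attribute process(Attribute attribute) {
    CompletableFuture.runAsync(() -> {
        LOG.info("logChanges running on {}", Thread.currentThread().getName());
        changeHistoryService.logChanges(attribute);
    }, EXECUTOR);
    return attribute; // returns without waiting for the future to complete
}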

Frequent send to spring-websocket session: lost in transit

I have a load-test setup of a Spring WebSocket server (based on Jetty and Spring 4.3.2.RELEASE) and a client that generates many connections (based on Spring's sample Java WebSocket client). The code below sends data to a given WebSocket session; the snippet exploits the fact that the sessionId can be used instead of the user ID (Spring WebSocket @SendToSession: send message to specific session). I may execute this code very often, every 2-3 milliseconds. I use the SimpleMessageBroker.
public void publishToSessionUsingTopic(String sessionId, String subscriptionTopic, Map<String, CacheRowModel> payload) {
String subscriptionTopicWithoutUser = subscriptionTopic.replace(USER_ENDPOINT, "");
// necessary message headers for per-session send
SimpMessageHeaderAccessor headerAccessor = SimpMessageHeaderAccessor.create(SimpMessageType.MESSAGE);
headerAccessor.setSessionId(sessionId);
headerAccessor.setLeaveMutable(true);
simpMessagingTemplate.convertAndSendToUser(sessionId, subscriptionTopicWithoutUser, Collections.singletonList(payload), headerAccessor.getMessageHeaders());
}
When this code is executed very frequently (every 2-3 milliseconds) for ~100 sessions, I can see in my logs that it runs and calls convertAndSendToUser, yet some of the sessions don't receive the message. I'd appreciate any suggestions on how this could be resolved.
Well, I think your problem is with this:
@Bean
public ThreadPoolTaskExecutor clientOutboundChannelExecutor() {
    TaskExecutorRegistration reg = getClientOutboundChannelRegistration().getOrCreateTaskExecRegistration();
    ThreadPoolTaskExecutor executor = reg.getTaskExecutor();
    executor.setThreadNamePrefix("clientOutboundChannel-");
    return executor;
}
where it uses this config for the Executor:
protected ThreadPoolTaskExecutor getTaskExecutor() {
ThreadPoolTaskExecutor executor = (this.taskExecutor != null ? this.taskExecutor : new ThreadPoolTaskExecutor());
executor.setCorePoolSize(this.corePoolSize);
executor.setMaxPoolSize(this.maxPoolSize);
executor.setKeepAliveSeconds(this.keepAliveSeconds);
executor.setQueueCapacity(this.queueCapacity);
executor.setAllowCoreThreadTimeOut(true);
return executor;
}
See, there is no RejectedExecutionHandler configured. And by default it is like:
private RejectedExecutionHandler rejectedExecutionHandler = new ThreadPoolExecutor.AbortPolicy();
So, when you have enough messages that their tasks exceed the capacity of the ThreadPool, any extras are simply aborted.
To fix the issue you should implement WebSocketMessageBrokerConfigurer and override its configureClientOutboundChannel() to provide a custom taskExecutor(ThreadPoolTaskExecutor taskExecutor), for example one configured with a new ThreadPoolExecutor.CallerRunsPolicy().
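A minimal sketch of that fix, assuming Spring 5's default-method interface (on 4.3 you would extend AbstractWebSocketMessageBrokerConfigurer instead); note that the broker may still apply its own pool sizing on top of the executor you pass in:
import java.util.concurrent.ThreadPoolExecutor;

import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.simp.config.ChannelRegistration;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;
import org.springframework.web.socket.config.annotation.EnableWebSocketMessageBroker;
import org.springframework.web.socket.config.annotation.WebSocketMessageBrokerConfigurer;

@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig implements WebSocketMessageBrokerConfigurer {

    @Override
    public void configureClientOutboundChannel(ChannelRegistration registration) {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        // instead of aborting overflow sends, run them on the calling thread
        executor.setRejectedExecutionHandler(new ThreadPoolExecutor.CallerRunsPolicy());
        registration.taskExecutor(executor);
    }
}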
