As I found, controllers in Spring are singletons. Are Spring MVC controllers really singletons?
The question is: how does Spring handle multiple long-running requests to the same mapping? For example, when we want to return a model that requires long calculations or a connection to another server, and there are a lot of users sending requests to the same URL?
Async threads, I assume, are not a solution, because the method needs to finish before the next request can be served? Or not..?
Requests are handled using a container-managed thread pool, so each request has an independent context; it does not matter whether the controller is a singleton or not.
One important thing is that singleton instances MUST NOT share mutable state between requests, to avoid unexpected behaviour or race conditions.
The thread pool's capacity defines the number of requests the server can handle in a synchronous model.
If you want an async approach, you could use one of several options:
(1) Have an independent thread pool that processes tasks handed off from container threads, or
(2) Use a queue to push tasks onto, with a scheduler to process them, or
(3) Use WebSockets to make requests, use (1) or (2) for processing, and then receive a notification when done.
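A minimal sketch of the first option (a dedicated worker pool, separate from the container's request threads); the class and method names here are hypothetical, not a real API:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// The controller thread submits the slow task to a dedicated pool and
// returns immediately; the result is retrieved later (e.g. via polling).
public class WorkerPoolSketch {
    private static final ExecutorService workers = Executors.newFixedThreadPool(4);

    // Hypothetical slow calculation standing in for the real work.
    static String slowCalculation(String input) {
        return "result-for-" + input;
    }

    public static void main(String[] args) throws Exception {
        // What a controller method would do: hand off and keep a handle.
        Future<String> handle = workers.submit(() -> slowCalculation("42"));

        // Later (another request, a poll endpoint, etc.) the result is read.
        System.out.println(handle.get());
        workers.shutdown();
    }
}
```

The point is that the container thread is never tied up for the duration of the calculation.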
Related
I faced a requirement to process a large amount of data for every request, and I'm looking for some clues. I came up with three ideas:
1) start a new thread from a service
2) use request scope for the service
3) use @Async
I realized I don't fully understand the basics:
If all Spring beans are singletons, what happens if there is a time-consuming operation in a @Service? Will other users have to wait for the @Service to complete?
If the expected service behavior is to call the DB, fetch and process results, and do other non-trivial operations, shouldn't it be request-scoped by default?
What does @Async have to do with all this? Is it equivalent to an AJAX call?
I would really appreciate some explanation on how to perform heavy calculations for every request in Spring Boot.
No, a singleton bean doesn't mean other threads have to wait for one thread to finish executing some service offered by that bean.
I see that you mentioned up to 1000 users requesting concurrent calculation. In that case, @Async may not be a good choice.
You can start with a simple ThreadPoolExecutor with a reasonable maximum number of concurrent threads (e.g. 16) and work-queue size (e.g. 10000). Since you describe this as a 'start and forget' action, I assume it's OK to have calculation requests waiting in the work queue until an idle thread becomes available.
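A sketch of that starting configuration; both numbers are starting points to tune under load, not recommendations:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// A bounded "start and forget" pool: at most 16 calculations run at once,
// and up to 10_000 more wait in the queue.
public class CalculationPool {
    private final ThreadPoolExecutor executor = new ThreadPoolExecutor(
            16, 16,                      // core and max pool size
            0L, TimeUnit.MILLISECONDS,   // no extra keep-alive needed
            new ArrayBlockingQueue<>(10_000),
            // If even the queue fills up, reject instead of blocking the caller.
            new ThreadPoolExecutor.AbortPolicy());

    public void submitCalculation(Runnable calculation) {
        executor.execute(calculation);
    }

    public static void main(String[] args) throws Exception {
        CalculationPool pool = new CalculationPool();
        CountDownLatch done = new CountDownLatch(100);
        for (int i = 0; i < 100; i++) {
            pool.submitCalculation(done::countDown);   // stand-in for a calculation
        }
        done.await();
        System.out.println("completed=100");
        pool.executor.shutdown();
    }
}
```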
Next, do a load test with that starting solution to estimate the capacity of your service. If a single service instance doesn't have enough capacity to handle such a large number of heavy calculation requests in time, you will need to think about dedicated worker service instances where the actual calculations are done, while your server only acts as a "request dispatcher".
This is not a complete answer, but since I don't have enough points to comment, consider it a starting point for your problem.
I'm creating a Spring MVC RESTful app, and I'm wondering about performance once it goes to production.
I found this link about async controllers, but I still have a few questions. In general, what happens when 3 clients try to access a page?
Is this asynchronous or synchronous? That is to say: I) will client A be processed, then B, and then C, like a waiting queue, or II) do they each get their own thread?
If I), would I have to make all my controllers async?
Requests are processed concurrently using a worker pool of threads.
No, you don't need to make anything async. You can get an advantage from them if you have long blocking requests, but unless you understand how and why, don't bother worrying about it.
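To illustrate, outside of any framework, what an async controller buys you for long blocking requests: the request thread registers the work and is freed immediately, and a worker thread completes the response later. This is a plain-Java sketch of the mechanics, not Spring's actual API:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AsyncHandlerSketch {
    public static void main(String[] args) throws Exception {
        ExecutorService worker = Executors.newSingleThreadExecutor();

        // "Request thread": registers the slow work and is free again.
        CompletableFuture<String> response = CompletableFuture.supplyAsync(() -> {
            try {
                Thread.sleep(200);        // stand-in for a slow handler body
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            return "long result";
        }, worker);

        // The container would write this to the client once it completes.
        System.out.println(response.get());
        worker.shutdown();
    }
}
```

In Spring MVC the same effect is achieved by returning a `Callable` or `DeferredResult` from the controller method.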
I've built a REST api using Spring Boot that basically accepts two images via POST and performs image comparison on them. The api is invoked synchronously. I'm not using an external application server to host the service, rather I package it as a jar and run it.
@RequestMapping(method = RequestMethod.POST, value = "/arraytest")
public String compareTest(@RequestParam("query") MultipartFile queryFile,
                          @RequestParam("test") MultipartFile testFile,
                          RedirectAttributes redirectAttributes,
                          Model model) throws IOException {
    CoreDriver driver = new CoreDriver();
    boolean imageResult = driver.initProcess(queryFile, testFile);
    model.addAttribute("result", imageResult);
    return "resultpage";
}
The service could be invoked in parallel from multiple machines, and I need it to perform efficiently. I'm trying to understand how parallel calls to a REST service are handled.
When a request is sent to the service, does a single object of the service get created, with that same object used across multiple threads to handle multiple requests?
A follow-up question is whether it is possible to improve the performance of a service from the perspective of request handling, rather than by improving the performance of the service's functionality.
Spring controllers (and most spring beans) are Singletons, i.e. there is a single instance in your application and it handles all requests.
Assuming this is not web sockets (and if you don't know what that means, it's probably not), servlet containers typically maintain a thread pool, and will take a currently unused thread from the pool and use it to handle the request.
You can tune this by, for example, changing some aspects of the thread pool (initial threads, max threads, etc...). This is the servlet container stuff (i.e. configuring tomcat/jetty/whatever you're using) not spring per se.
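As an example of that container-level tuning, assuming Spring Boot 2.3+ with embedded Tomcat, the relevant `application.properties` keys look like this (values shown are Tomcat's defaults, not recommendations):

```properties
# Maximum number of worker threads Tomcat uses to serve requests
server.tomcat.threads.max=200
# Minimum number of idle threads kept alive
server.tomcat.threads.min-spare=10
# Connections queued by the OS when all threads are busy
server.tomcat.accept-count=100
```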
You can also tune other http aspects such as compression. This can usually be done via the container, but if I recall correctly spring offers a servlet filter that will do this.
The image library and image operations you perform will also matter. Many libraries convert the image into raw form in memory in order to perform operations, which means a 3 MB JPEG can take upwards of 100 MB of heap space. The implication is that you may need some kind of semaphore to limit concurrent image processing.
The best approach here is to experiment with different libraries and see what works best for your use case. Hope this helps.
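A minimal sketch of the semaphore idea mentioned above; the limit of 4 and the comparison body are placeholders:

```java
import java.util.concurrent.Semaphore;

// Caps how many images are decoded at once, so the heap isn't flooded
// with raw pixel buffers when many requests arrive together.
public class ImageProcessingGate {
    private static final Semaphore permits = new Semaphore(4);

    static boolean compareImages(byte[] query, byte[] test) throws InterruptedException {
        permits.acquire();   // blocks when 4 comparisons are already in flight
        try {
            // ... decode and compare; decoding to raw is the memory-heavy part.
            return query.length == test.length;   // placeholder comparison
        } finally {
            permits.release();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(compareImages(new byte[3], new byte[3]));
    }
}
```

Requests beyond the limit simply wait at `acquire()` on their own container threads, trading some latency for a bounded memory footprint.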
The controller will be a singleton, but there are ways to make the processing async, like a thread pool or JMS. You can also have multiple nodes. As long as you return a key and provide a service the client can poll to get the result later, you can scale out back-end processing.
Besides, you can cluster your app so there are more nodes available to process requests. Also, cache results if possible; this pays off if the same input produces the same output for 30% or more of the requests.
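A sketch of the "return a key, poll for the result" pattern; the method names mirror what hypothetical submit and poll endpoints would do, and are not a real API:

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PollableJobs {
    private final ExecutorService pool = Executors.newFixedThreadPool(4);
    private final Map<String, Future<String>> jobs = new ConcurrentHashMap<>();

    // What a POST handler would do: enqueue the work, hand back an id.
    public String submit(Callable<String> work) {
        String key = UUID.randomUUID().toString();
        jobs.put(key, pool.submit(work));
        return key;
    }

    // What a GET /result/{key} handler would do: look the id up.
    public String poll(String key) throws Exception {
        Future<String> f = jobs.get(key);
        if (f == null) return "unknown key";
        return f.isDone() ? f.get() : "still processing";
    }

    public static void main(String[] args) throws Exception {
        PollableJobs jobs = new PollableJobs();
        String key = jobs.submit(() -> "done");
        Thread.sleep(100);                 // give the worker time to finish
        System.out.println(jobs.poll(key));
        jobs.pool.shutdown();
    }
}
```

In a clustered deployment the in-memory map would be replaced by a shared store (a database or cache) so any node can answer the poll.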
Here are two links which seem to be contradicting each other. I'd sooner trust the docs:
Link 1
Request processing on the server works by default in a synchronous processing mode
Link 2
It already is multithreaded.
My question:
Which is correct? Can it be both synchronous and multithreaded?
Why do the docs say the following?:
in cases where a resource method execution is known to take a long time to compute the result, server-side asynchronous processing model should be used
If the docs are correct, why is the default synchronous? All requests are asynchronous in client-side JavaScript by default for the sake of user experience; it would make sense that the default on the server side should be asynchronous too.
If the client does not need its requests served in a specific order, then who cares how "expensive" the operation is? Shouldn't all operations simply be asynchronous?
Request processing on the server works by default in a synchronous processing mode
Each request is processed on a separate thread. The request is considered synchronous because that request holds up the thread until the request is finished processing.
It already is multithreaded.
Yes, the server (container) is multi-threaded. For each request that comes in, a thread is taken from the thread pool, and the request is tied to that particular thread.
in cases where a resource method execution is known to take a long time to compute the result, server-side asynchronous processing model should be used
Yes, so that we don't hold up the container thread. There are only so many threads in the container's thread pool to handle requests. If we hold them all up with long-running requests, the container may run out of threads, blocking other requests from coming in. In asynchronous processing, Jersey hands the thread back to the container and handles the request processing itself in its own thread pool until the processing is complete, then sends the response up to the container, which sends it back to the client.
If the client does not need to serve requests in a specific order, then who cares how "EXPENSIVE" the operation is.
Not really sure what the client has to do with anything here. Or at least in the context of how you're asking the question. Sorry.
Shouldn't all operations simply be asynchronous?
Not necessarily, if all the requests are quick. Though you could make an argument for it, but that would require performance testing, and numbers you can put up against each other and make a decision from there. Every system is different.
I need to wait for a condition in a Spring MVC request handler while I call a third party service to update some entities for a user.
The wait averages about 2 seconds.
I'm calling Thread.sleep to allow the remote call to complete and for the entities to be updated in the database:
Thread.sleep(2000); // sleep() is static; calling it via currentThread() is misleading
After this, I retrieve the updated models from the database and display the view.
However, what will be the effect on parallel requests that arrive for processing at this controller/request handler?
Will parallel requests also experience a wait?
Or will they be spawned off into separate threads and so not be affected by the delay experienced by the current request?
What you are doing may work sometimes, but it is not a reliable solution.
The Java Future interface, along with a configured ExecutorService allows you to begin some operation and have one or more threads wait until the result is ready (or optionally until a certain amount of time has passed).
You can find documentation for it here:
http://download.oracle.com/javase/6/docs/api/java/util/concurrent/Future.html
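A sketch of replacing the fixed sleep with a Future; the handler resumes as soon as the work finishes, and the wait is capped instead of hard-coded at 2 seconds. The `updateEntities` method stands in for the third-party call and database update:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class BoundedWait {
    private static final ExecutorService executor = Executors.newFixedThreadPool(4);

    // Hypothetical stand-in for the remote call + entity update.
    static String updateEntities() {
        return "entities updated";
    }

    public static void main(String[] args) throws Exception {
        Future<String> update = executor.submit(BoundedWait::updateEntities);
        // Waits only as long as the work actually takes, up to 5 seconds,
        // then throws TimeoutException instead of hanging the request.
        System.out.println(update.get(5, TimeUnit.SECONDS));
        executor.shutdown();
    }
}
```

Other requests are unaffected either way, since each runs on its own container thread; the Future only fixes the unreliable fixed-length sleep within one request.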