Spring-mvc using async controllers - java

I'm creating a Spring MVC RESTful app, and I'm wondering about performance once it goes to production.
I found this link about async controllers, but I still have a few questions. In general, what happens when 3 clients try to access a page?
Is this asynchronous or synchronous? That is, I) will client A be processed, then B, and then C, like a waiting queue, or II) do they each get their own separate thread?
If I), would I have to make all my controllers async?

Requests are processed concurrently using a pool of worker threads.
No, you don't need to make anything async. Async controllers can give you an advantage if you have long blocking requests, but unless you understand how and why, don't bother worrying about it.
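For illustration only, here is a minimal sketch of what an async controller looks like in Spring MVC: returning a Callable lets the container thread go back to the pool while a separate worker thread computes the response. The class, endpoints, and slow call below are made up:

```java
import java.util.concurrent.Callable;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class ReportController {

    // Plain synchronous handler: the container thread is held until the body returns.
    @GetMapping("/report/sync")
    public String syncReport() {
        return slowCall();
    }

    // Async handler: Spring MVC dispatches the Callable to a separate executor,
    // freeing the container thread while slowCall() runs.
    @GetMapping("/report/async")
    public Callable<String> asyncReport() {
        return this::slowCall;
    }

    private String slowCall() {
        try {
            Thread.sleep(3000); // stand-in for a slow downstream call
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "done";
    }
}
```

Either way, three clients hitting the same endpoint are handled concurrently by the container's thread pool; the async variant only matters when requests block for a long time and you want to keep container threads free.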

Related

Microservices async communication circuit breaker

I have 2 services running with spring boot and I have the following situation:
Service A (N instances) -> Queue (multiple instances) -> Service B (N instances)
Service A enqueues events asynchronously
Service B dequeues events
Traffic is increasing and we've noticed that events spend a lot of time in the queue; we need to process them faster. The payload for each event is small. This solution has been working for some years now: a couple of years ago the team thought that having a queue was a good idea, but now I'm having this performance issue.
I'm thinking about creating an endpoint in service B and hitting it directly from service A.
This call should be async and should also implement a circuit breaker, to avoid losing messages if B goes down. If B goes down I could use the queue to keep messages and, once B is up and running again, pull the messages from the queue.
I have 2 questions:
Is it possible to implement a circuit breaker or a failover mechanism for an async call?
Do you think there is some other approach which could be better?
Thanks
For question 1:
Yes, it is possible. I am assuming you are using Spring Boot since you mentioned Java in the tags. There is a mechanism called RetryTemplate that you can use for async calls as well; you can find more details here -> https://www.openprogrammer.info/2019/09/03/using-spring-boot-async-with-retry/
There are better approaches, I would say. What did you implement when you say "queue" here? Is it a LIFO, or something like AWS SQS? If that is the case, you could look at topic-based messaging systems like Apache Kafka.
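A hedged sketch of the kind of thing the linked post describes: wrapping the remote call in Spring Retry's RetryTemplate and running it asynchronously via CompletableFuture, with a fallback that keeps the event in the existing queue if B stays down. The Event, ServiceBClient, and EventQueue types are stand-ins, not from the question:

```java
import java.util.concurrent.CompletableFuture;
import org.springframework.retry.support.RetryTemplate;

public class EventForwarder {

    // Hypothetical collaborator types, stand-ins for whatever the real project uses.
    public interface Event { }
    public interface ServiceBClient { void send(Event event); }
    public interface EventQueue { void enqueue(Event event); }

    private final RetryTemplate retryTemplate = RetryTemplate.builder()
            .maxAttempts(3)
            .fixedBackoff(500)   // milliseconds between attempts
            .build();

    private final ServiceBClient serviceB;
    private final EventQueue fallbackQueue;

    public EventForwarder(ServiceBClient serviceB, EventQueue fallbackQueue) {
        this.serviceB = serviceB;
        this.fallbackQueue = fallbackQueue;
    }

    public CompletableFuture<Void> forward(Event event) {
        return CompletableFuture
                // try the direct call to B up to 3 times on a worker thread
                .runAsync(() -> retryTemplate.execute(ctx -> {
                    serviceB.send(event);
                    return null;
                }))
                // if B is still failing after the retries, keep the event in the queue
                .exceptionally(ex -> {
                    fallbackQueue.enqueue(event);
                    return null;
                });
    }
}
```

This is a sketch of the retry/failover idea only; a full circuit breaker (e.g. Resilience4j) would additionally stop calling B for a while once failures pass a threshold.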
If the bottleneck is in the queue, I don't think you should just remove it. You could try SQS or another cloud-based queue, which doesn't have that problem.
What happens if service A calls service B directly and service B is down? The request will be lost. With a queue, the message will remain there until service B recovers.
Circuit breakers are used to avoid overwhelming a failed service with requests (in order to allow it to recover). So when one kicks in, there is clear data loss.

How Spring MVC controller handle multiple long http requests?

As I found, controllers in Spring are singletons: Are Spring MVC Controllers Singletons?
The question is, how does Spring handle multiple long, time-consuming requests to the same mapping? For example, when we want to return a model which requires long calculations or a connection to another server, and there are a lot of users sending requests to the same URL?
Async threads, I assume, are not a solution, because the method needs to finish before the next request is handled? Or not?
Requests are handled using a container-managed thread pool, so each request has an independent context; it does not matter whether the Controller is a singleton or not.
One important thing is that singleton instances MUST NOT share state between requests, to avoid unexpected behaviour or race conditions.
The thread-pool capacity defines the number of requests the server can handle in a synchronous model.
If you want an async approach you could use many options, such as:
Having an independent thread pool that processes tasks handed off from container threads (sketched below), or
Using a queue to push tasks onto and a scheduler to process them, or
Using WebSockets to make requests, using (1) or (2) for processing, and receiving a notification when done.
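A minimal sketch of option (1) above, an independent thread pool processing tasks off the container threads, using Spring MVC's DeferredResult. The pool size, timeout, and endpoint are illustrative:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.context.request.async.DeferredResult;

@RestController
public class LongTaskController {

    // Independent pool for long-running work, separate from the container's request threads.
    private final ExecutorService workers = Executors.newFixedThreadPool(10);

    @GetMapping("/long-task")
    public DeferredResult<String> longTask() {
        DeferredResult<String> result = new DeferredResult<>(30_000L); // 30s timeout
        workers.submit(() -> {
            try {
                result.setResult(expensiveCalculation());
            } catch (Exception e) {
                result.setErrorResult(e);
            }
        });
        // The container thread returns immediately; the response is written
        // when setResult() is called from the worker thread.
        return result;
    }

    private String expensiveCalculation() throws InterruptedException {
        Thread.sleep(5000); // stand-in for slow computation or a remote call
        return "computed";
    }
}
```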

Asynchronous Pages equivalent in Java for REST Services

We are writing some REST services using Jersey. Our service makes some underlying service calls which happen to be dead slow, which results in holding a thread per request for 3-4 seconds. While investigating, I came across Asynchronous Pages in .NET, which assign a thread to each request from the thread pool, return the thread to the pool once the I/O operation starts, and get a new thread when the I/O operation finishes to do the rest of the processing.
Does anything similar exist in Jersey, where we can serve more concurrent connections instead of holding one thread for each connection until it completes? I don't want to POST a request, return a GUID, and then keep polling for the status of the request from the client, since I don't control the client code.
Thanks,
GG
Take a look at the Atmosphere Framework, specifically the atmosphere-jersey module, which brings asynchronous annotations to Jersey.
Take a look at one of the samples, or read this quick tutorial. Atmosphere's Jersey support does exactly what you are looking for, without requiring you to manipulate threads or anything like that. Come to our mailing list in case you need more help.
(I am the Atmosphere Creator and Lead)
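For comparison (not part of the answer above), the same thread-releasing pattern is available in later Jersey 2.x versions through the standard JAX-RS 2.0 @Suspended/AsyncResponse API. A minimal sketch, with an illustrative executor and slow call:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.container.AsyncResponse;
import javax.ws.rs.container.Suspended;

@Path("/slow")
public class SlowResource {

    private static final ExecutorService WORKERS = Executors.newFixedThreadPool(20);

    @GET
    public void get(@Suspended final AsyncResponse asyncResponse) {
        // The container thread returns here immediately; the response is
        // resumed from the worker thread once the slow call completes.
        WORKERS.submit(() -> {
            try {
                asyncResponse.resume(slowUnderlyingCall());
            } catch (Exception e) {
                asyncResponse.resume(e);
            }
        });
    }

    private String slowUnderlyingCall() throws InterruptedException {
        Thread.sleep(3000); // stand-in for the slow downstream service
        return "result";
    }
}
```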

Would a JMS Topic suffice in this situation? Or should I look elsewhere?

There is one controlling entity and several 'worker' entities. The controlling entity requests certain data from the worker entities, which they will fetch and return in their own manner.
Since the controlling entity can be agnostic about the worker entities (and worker entities can be added or removed at any point), putting a JMS provider between them sounds like a good idea. That's the assumption, at least.
Since it is a one-to-many relation (controller -> workers), a JMS topic would be the right solution. But since the controlling entity depends on the return values of the workers, request/reply functionality would be nice as well (somewhere I read about the TopicRequestor, but I cannot seem to find a working example). Request/reply is typically queue functionality.
As an attempt to use topics in a request/reply sort of way, I created two JMS topics: request and response. The controller publishes to the request topic and is subscribed to the response topic. Every worker is subscribed to the request topic and publishes to the response topic. To match requests and responses, the controller subscribes to the response topic with a filter for each request (using a session id as the value). The messages workers publish to the response topic have the session id associated with them.
Now this does not feel like a solution (rather, it uses JMS as a hammer and treats the problem (and some more) as a nail). Is JMS a solution in this situation at all? Or are there other solutions I'm overlooking?
Your approach sort of makes sense to me. I think a messaging system could work, but I think using topics is wrong. Take a look at the wiki page for Enterprise Service Bus. It's a little more complicated than you need, but the basic idea for your use case is that you have a worker that is capable of reading from one queue, doing some processing, and adding the processed data back to another queue.
The problem with a topic is that all workers will get the message at the same time and they will all work on it independently. It sounds like you only want one worker at a time working on each request. I think you have it as a topic so different types of workers can also listen to the same destination and only respond to certain requests. For that, you are better off just creating a new queue for each type of work. You could potentially have them in pairs, so you have a work_a_request queue and a work_a_response queue. Or, if your controller is capable of figuring out the type of response from the data, the workers can all write to a single response queue.
If you haven't chosen a message queue vendor yet, I would recommend RabbitMQ, as it's easy to set up, easy to add new queues to (especially dynamically), and has really good Spring support (although most major messaging systems have Spring support, and you may not even be using Spring).
I'm also not sure what you are accomplishing with the filters. If you ensure that the messages to the workers contain all the information needed to do the work, and the response messages contain all the information your controller needs to finish the processing, I don't think you need them.
I would simply use two JMS queues.
The first one is the one that all of the requests go on. The workers will listen to the queue and process requests in their own time, in their own way.
Once complete, they will bundle the request with the response and put that on another queue for a final process to handle. This way there's no need for the submitting process to retain the requests; they just follow along with the entire procedure. A final process will listen to the second queue and handle the request/response pairs appropriately.
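A minimal sketch of such a worker using plain JMS (javax.jms), assuming hypothetical queue names "work.request" and "work.response" and an already-configured ConnectionFactory:

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

public class Worker {

    public void run(ConnectionFactory factory) throws Exception {
        Connection connection = factory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        Queue requests = session.createQueue("work.request");    // hypothetical queue names
        Queue responses = session.createQueue("work.response");
        MessageProducer producer = session.createProducer(responses);

        // Each delivered request is processed and the result is published to the
        // response queue; the broker load-balances requests across competing workers.
        session.createConsumer(requests).setMessageListener(message -> {
            try {
                String request = ((TextMessage) message).getText();
                String result = process(request);                 // worker-specific processing
                producer.send(session.createTextMessage(result));
            } catch (Exception e) {
                e.printStackTrace();
            }
        });

        connection.start();
    }

    private String process(String request) {
        return "processed: " + request;
    }
}
```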
If there's no need for the message to be reliable, or if there's no need for the actual processes to span JVMs or machines, then this can all be done with a single process and standard java threading (such as BlockingQueues and ExecutorServices).
If there's a need to accumulate related responses, then you'll need to capture whatever grouping data is necessary and have the Queue 2 listening process accumulate results. Or you can persist the results in a database.
For example, if you know your working set has five elements, you can queue up the requests with that information (1 of 5, 2 of 5, etc.). As each one finishes, the final process can update the database, counting elements. When it sees all of the pieces have been completed (in any order), it marks the result as complete. Later you would have some audit process scan for incomplete jobs that have not finished within some time (perhaps one of the messages erred out), so you can handle them better. Or the original processors can write the request to a separate "this one went bad" queue for mitigation and resubmission.
If you use JMS with transactions and one of the processors fails, the transaction will roll back and the message will be retained on the queue for processing by one of the surviving processors; that's another advantage of JMS.
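For illustration, a sketch of what a transacted consumer looks like in plain JMS; if processing throws, rollback() puts the message back on the queue for redelivery (the queue name and processing are illustrative):

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;

public class TransactedWorker {

    public void consume(ConnectionFactory factory) throws Exception {
        Connection connection = factory.createConnection();
        // Transacted session: messages are only removed from the queue on commit().
        Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
        Queue requests = session.createQueue("work.request");   // illustrative queue name
        MessageConsumer consumer = session.createConsumer(requests);
        connection.start();

        while (true) {
            Message message = consumer.receive();
            try {
                process(message);      // worker-specific processing
                session.commit();      // message is now removed from the queue
            } catch (Exception e) {
                session.rollback();    // message goes back for another worker to retry
            }
        }
    }

    private void process(Message message) {
        // ...
    }
}
```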
The trick with this kind of processing is to try to push the state along with the message, or externalize it and send references to the state, thus making each component effectively stateless. This aids scaling and reliability, since any component can fail (besides catastrophic JMS failure, naturally) and just pick up where it left off once you get the problem resolved and the components restarted.
If you're in a request/response mode (such as a servlet needing to respond), you can use Servlet 3.0 async servlets to easily put things on hold, or you can put a local object in an internal map, keyed with something such as the session ID, and then Object.wait() on it. Then your Queue 2 listener will get the response, finalize the processing, use the session ID (sent with the message and retained throughout the pipeline) to look up the object you're waiting on, and simply Object.notify() it to tell the servlet to continue.
Yes, this sticks a thread in the servlet container while waiting; that's why the new async stuff is better, but you work with the hand you're dealt. You can also add a timeout to the Object.wait(); if it times out, the processing took too long, so you can gracefully alert the client.
This basically frees you from filters and such, and reply queues, etc. It's pretty simple to set it all up.
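A rough sketch of that wait/notify correlation, with shared maps keyed by a session or correlation ID; the map layout, result type, and timeout handling are all illustrative:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class PendingResponses {

    private final Map<String, Object> locks = new ConcurrentHashMap<>();
    private final Map<String, String> results = new ConcurrentHashMap<>();

    // Called from the servlet after it has queued the request.
    public String awaitResult(String sessionId, long timeoutMillis) throws InterruptedException {
        Object lock = locks.computeIfAbsent(sessionId, id -> new Object());
        long deadline = System.currentTimeMillis() + timeoutMillis;
        synchronized (lock) {
            while (!results.containsKey(sessionId)) {
                long remaining = deadline - System.currentTimeMillis();
                if (remaining <= 0) {
                    break;                  // timed out; caller can alert the client
                }
                lock.wait(remaining);       // released when the queue listener notifies
            }
        }
        locks.remove(sessionId);
        return results.remove(sessionId);   // null means the wait timed out
    }

    // Called from the Queue 2 listener when the response arrives.
    public void complete(String sessionId, String result) {
        results.put(sessionId, result);
        Object lock = locks.get(sessionId);
        if (lock != null) {
            synchronized (lock) {
                lock.notify();
            }
        }
    }
}
```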
Well, the actual answer should depend on whether your worker entities are external parties physically located outside your network, the time expected for a worker entity to finish its work, etc. But the problem you are trying to solve is one-to-many communication. You added JMS to your system either because you want all entities to be able to talk the JMS protocol, or because you want asynchrony; the former reason does not make sense, and if it is the latter, you could choose another communication mechanism such as a one-way web service call.
You can use the latest Java concurrency APIs to make multi-threaded, asynchronous one-way web service calls to the different worker entities.

Threading in Servlets

I am working on a servlet that can take a few hours to complete the request. However, the client calling the servlet is only interested in knowing whether the request has been received by the servlet or not. The client doesn't want to wait hours before it gets any kind of response from the servlet. Also since calling the servlet is a blocking call, the client cannot proceed until it receives the response from the servlet.
To avoid this, I am thinking of launching a new thread in the servlet code. The thread launched by the servlet will do the time-consuming processing, allowing the servlet to return a response to the client very quickly. But I am not sure if this is an acceptable way of working around the blocking nature of servlet calls. I have looked into NIO, but it seems like it is not guaranteed to work in every servlet container, as the servlet container has to be NIO-based as well.
What you need is a job scheduler, because job schedulers give some assurance that a job will be finished, even if the server is restarted.
Take a look at Java OSS job schedulers, most notably Quartz.
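A minimal Quartz sketch for illustration; the job class and identity names are made up:

```java
import org.quartz.Job;
import org.quartz.JobBuilder;
import org.quartz.JobDetail;
import org.quartz.JobExecutionContext;
import org.quartz.Scheduler;
import org.quartz.Trigger;
import org.quartz.TriggerBuilder;
import org.quartz.impl.StdSchedulerFactory;

public class LongRunningJob implements Job {

    @Override
    public void execute(JobExecutionContext context) {
        // the hours-long processing goes here
    }

    public static void schedule() throws Exception {
        Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
        scheduler.start();

        JobDetail job = JobBuilder.newJob(LongRunningJob.class)
                .withIdentity("longRunningJob")        // illustrative name
                .build();
        Trigger trigger = TriggerBuilder.newTrigger()
                .startNow()                            // fire as soon as it is scheduled
                .build();

        scheduler.scheduleJob(job, trigger);           // the servlet can return immediately
    }
}
```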
Your solution is correct, but creating threads by hand in enterprise applications is considered bad practice. Better to use a thread pool or a JMS queue.
You have to take into account what should happen if the server goes down during processing, how to react when multiple requests (think: hundreds or even thousands) occur at the same time, etc. So you have chosen the right direction, but it is a bit more complicated.
A thread isn't bad, but I recommend handing this off to an executor pool as a task, or better yet a long-running work manager. It's not bad practice to return quickly like you plan. I would recommend providing some sort of user feedback indicating where the user can find information about the long-running job. So:
Create a job representing the work task, with a unique ID
Send the job to your background handler object (which contains an executor)
Build a URL for the unique job ID
Return a page describing where they can get the result
The page with the result will have to coordinate with this background job manager. While the job is computing, the page can describe the progress. When it's done, the page can display the results of the long-running job.
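A rough sketch of those four steps using a plain servlet, an ExecutorService, and an in-memory job map; the servlet path, job store, and response format are all illustrative:

```java
import java.io.IOException;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet("/jobs")
public class JobSubmitServlet extends HttpServlet {

    private final ExecutorService executor = Executors.newFixedThreadPool(4);
    private final Map<String, String> results = new ConcurrentHashMap<>();

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        // 1. Create a job with a unique ID.
        String jobId = UUID.randomUUID().toString();

        // 2. Hand the long-running work to the background executor.
        executor.submit(() -> {
            String result = doHoursOfWork();        // the time-consuming processing
            results.put(jobId, result);
        });

        // 3 + 4. Return immediately with a URL where the result can be fetched.
        resp.setStatus(HttpServletResponse.SC_ACCEPTED);
        resp.getWriter().write("Job accepted. Check status at /jobs?id=" + jobId);
    }

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        String result = results.get(req.getParameter("id"));
        resp.getWriter().write(result != null ? result : "Still processing");
    }

    private String doHoursOfWork() {
        return "done";
    }
}
```

In a real deployment the in-memory map would be replaced by a database or other persistent store, so results survive a restart.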
