How to handle an asynchronous response in a Java library? - java

I am developing a library which sends an asynchronous request to a server (in another thread).
Our clients will use this library, and I am thinking about the best way for them to handle the response, which might arrive anywhere from a few seconds to a minute later. I am using jersey-client, and there is already a listener that is called on the other "async" thread.
So I have a shared resource that the listener "fills" with the response, but the main thread has to poll it to retrieve the response.
Is there a better way for the client to access the response, or be "notified" that it has arrived? Something like an EventNotifier? (If I implement something like this, wouldn't the notifier itself need to be polled, which means yet another thread would have to do that?)
I have no influence on the clients who use my library; I can only make their lives easier with an appropriate design of the library functions they call.
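One option that keeps both usage styles open is for the library to hand back a CompletableFuture: blocking callers use get()/join(), and callers who prefer to be notified register a callback, with no extra polling thread on either side. A minimal sketch (class and method names are illustrative, and the jersey-client listener is simulated here by a single worker thread that completes the future):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical library facade: sendRequest() returns a CompletableFuture
// that the response listener completes from its own "async" thread.
public class AsyncLibrary {
    private final ExecutorService worker = Executors.newSingleThreadExecutor();

    public CompletableFuture<String> sendRequest(String payload) {
        CompletableFuture<String> result = new CompletableFuture<>();
        // Simulates the jersey-client listener delivering the response
        // on another thread; in the real library, the listener callback
        // would call result.complete(response) when it fires.
        worker.submit(() -> result.complete("response-to-" + payload));
        return result;
    }

    public void shutdown() {
        worker.shutdown();
    }
}
```

A client can then either block with sendRequest("x").join(), or register a callback with sendRequest("x").thenAccept(resp -> ...) and be notified without polling.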

What servlet container version are you trying to use? Servlet 3.0 has built-in support for this.

Related

JAX-WS with async servlets

So, I have this old legacy JAX-WS service that does a lot of IO per request, so each request takes quite a bit of time but doesn't consume much CPU/RAM.
With the number of clients increasing lately, there's a huge problem with thread starvation.
At first, I thought about implementing JAX-WS's built-in async support, but it requires clients to be aware of it, and that's a no in my situation. I cannot possibly force them all to update their code.
My current idea is to create a special async servlet and manually parse the SOAP request and encode the response, but I can't seem to find a good way to do that.
Is there any way to utilize Servlet 3.1 async support when handling JAX-WS requests?
I should clarify that I cannot use some kind of queue or otherwise "just return early", because request processing can fail, and the client must see that failure.
I've found a solution that works perfectly for me: the CXF Continuations API.
http://cxf.apache.org/docs/continuations.html
https://github.com/search?l=java&q=ContinuationProvider&type=Code&utf8=%E2%9C%93
I had to enable async for the CXF servlet and add a JBoss module dependency on the bundled CXF.
While the whole thing feels somewhat like a hack, it allowed me to do proper asynchronous processing without changing the service's external API at all.
As a bonus, I can even decide whether to suspend a request or process it normally, which helps a lot in certain cases.
You could make use of a workflow here where the web server's workers accept the request and place it in a queue shared with a thread that processes the event asynchronously.
This keeps wait time low on the client side while the request is processed asynchronously. The system also stays scalable: you can increase the number of worker threads on either side, so the web server can accept multiple client requests concurrently while multiple threads concurrently process the queued events.
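The workflow above can be sketched with a BlockingQueue drained by a single consumer thread (names are illustrative; a real system would likely use a bounded queue and a pool of consumers):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Minimal sketch of the accept-then-process-later workflow: the "web
// worker" enqueues the request and returns immediately, and a separate
// consumer thread drains the queue asynchronously.
public class RequestQueue {
    private final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();
    private final Thread consumer;

    public RequestQueue() {
        consumer = new Thread(() -> {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    queue.take().run(); // process queued events one by one
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // exit cleanly on stop()
            }
        });
        consumer.start();
    }

    // Called by the web server's worker: enqueue and return immediately,
    // keeping client-side wait time low.
    public void accept(Runnable event) {
        queue.offer(event);
    }

    public void stop() {
        consumer.interrupt();
    }
}
```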

How to run something in a servlet that will continue to run after the servlet sends the response?

I am sending a jQuery ajax request to a servlet.
I want the servlet to handle the request and response.
But just before the servlet returns the response, I want it to start to run something that will continue to run after the response is returned.
Is using a new thread the best way to do it? Or something else?
You may use plain threads, which is not the best idea. You can also use one of the thread pools provided by the standard API (see the java.util.concurrent.Executors class).
If you are in a Java EE environment, then it's better to use @Asynchronous EJBs or javax.enterprise.concurrent.ManagedExecutorService.
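A sketch of the Executors-based approach, with the pool held in one place so it can be shut down when the application stops (names are illustrative; in a real web app the pool would live in the ServletContext, or be a container-managed ManagedExecutorService as suggested above):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class BackgroundWork {
    // Shared pool; sized arbitrarily here. In a Java EE app, prefer a
    // ManagedExecutorService so the container manages these threads.
    static final ExecutorService POOL = Executors.newFixedThreadPool(4);

    // Called from the servlet just before the response is written:
    // submit() returns immediately, and the work keeps running after
    // the response has been sent.
    static void startAfterResponse(Runnable work) {
        POOL.submit(work);
    }

    // Called on application shutdown (e.g. from a ServletContextListener)
    // so the pool's threads don't outlive the web app.
    static void shutdown() throws InterruptedException {
        POOL.shutdown();
        POOL.awaitTermination(10, TimeUnit.SECONDS);
    }
}
```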

How to set timeout for Jersey REST web services?

I would like to know how to set a timeout for a REST web service with Jersey and be able to catch it within the service. I've read about some approaches to achieving this, such as calling another service after the timeout to check whether the current service is active, verifying application credentials, etc.
I'd rather not follow those approaches. What I would like to know is whether it is possible to attach a listener to the HTTP request, or to the service itself, that would execute some procedure if the timeout is reached.
I suppose that creating a thread within the service body to act as a listener could be a solution, but I'd like to know if there is a solution closer to Jersey.
I'm pretty sure that Jersey, like many other APIs, has client timeout functionality built in.
I don't want to dump loads of code on you, so check out the link below.
If I remember correctly, you can set it on the client API through setReadTimeout and setConnectTimeout:
https://jersey.java.net/apidocs/1.1.1-ea/jersey/com/sun/jersey/api/client/Client.html
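For reference, in the Jersey 1.x client API those two settings are applied directly to the Client instance, with values in milliseconds (a configuration sketch; note these are client-side timeouts, not a server-side service timeout, so they only partially address the question):

```java
import com.sun.jersey.api.client.Client;

Client client = Client.create();
client.setConnectTimeout(5000);  // max time to establish the connection, in ms
client.setReadTimeout(60000);    // max time to wait for response data, in ms
```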

Design pattern for single request webservice

I plan to create a webservice (in Tomcat) that works with a certain file on the system and makes some system calls. The challenge I am having is making sure that only one request can be processed at a given time and that the previous request is pre-empted. I currently have a singleton class that works on the request, and requesting threads somehow wait until the running thread is done. But what is the general design pattern for such problems?
Any ideas how this can be achieved?
Thanks,
V
Since too many requests may be calling this service, a synchronous approach may not work; there is also a chance that some waiting clients will time out. So I prefer an asynchronous approach:
Service-A receives the request and puts it in queue-A or DB table-A, generating a ref-id at the same time.
Service-A returns the ref-id to the client for further monitoring.
A back-end process monitors queue-A or DB table-A and performs the requests one by one.
After finishing, it puts the result in another queue-B or DB table-B.
The client periodically checks via another Service-B, using the ref-id, whether the request is done.
I hope this helps you achieve your requirement.
Regards,
Charlee Ch.
I would place the queues mentioned by Charlee in the ServletContext. It is initialized when your web application is started.
You can initialize and destroy these queues and the back-end process in an implementation of ServletContextListener. Store them with setAttribute on the ServletContext.
Now you're able to access these queues via the ServletContext in your servlet.
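Putting the two answers together, here is a minimal in-memory sketch of the ref-id pattern, using java.util.concurrent in place of the queues/tables (all names are illustrative; a servlet would keep one instance of this in the ServletContext and expose submit() from Service-A and poll() from Service-B):

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class RefIdProcessor {
    // Single-threaded backend: requests are processed one by one,
    // standing in for the back-end process that drains queue-A.
    private final ExecutorService backend = Executors.newSingleThreadExecutor();
    // Stands in for queue-B / DB table-B: ref-id -> pending result.
    private final Map<String, Future<String>> results = new ConcurrentHashMap<>();

    // Service-A: enqueue the work and hand back a reference id.
    public String submit(Callable<String> work) {
        String refId = UUID.randomUUID().toString();
        results.put(refId, backend.submit(work));
        return refId;
    }

    // Service-B: the client polls periodically with its ref-id;
    // null means "not finished yet".
    public String poll(String refId) throws Exception {
        Future<String> f = results.get(refId);
        if (f == null || !f.isDone()) {
            return null;
        }
        return f.get(); // rethrows the failure if processing failed
    }

    public void shutdown() {
        backend.shutdown();
    }
}
```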

Server-side performing multiple requests against cloud services

I am in the process of writing a web-app that uses multiple web APIs.
For a single request of a single user, my app may need to perform up to 30 HTTP requests to other sites. The site housing the web-app can have hundreds of concurrent users.
I've been looking around trying to figure out which library I should use. I'm looking for a mature project that has detailed documentation and tested code, one that will still be around in years to come. Not sure if something like that exists (!)
Couple of questions :
In a case such as the one described above, should I be using an asynchronous HTTP client (without threading) or a regular (possibly pooled) HTTP client (with threading)? Asynchronicity relieves my app from using threads but makes the code more scattered; will the number of requests mentioned above burden my server too much? (It says here that asynchronous is more scalable.)
Which library is the common one to use? Is it Apache HttpComponents HttpClient or its async counterpart HttpAsyncClient (which is in alpha)? How about jfarcand's AsyncHttpClient?
Okay, let's say I will use threads.
After digging around, I realize that spawning threads from within a servlet (in my case, a Struts action) may be a big no-no:
related questions:
What is recommended way for spawning threads from a servlet in Tomcat
Need help with java web app design to perform background tasks
Can I spawn a thread from a servlet?
The way I see it, these are my options:
use my own thread pool (container doesn't manage my threads)
use a WorkManager such as CommonJ (seems to be an inactive product)
use a 3rd-party scheduler such as Quartz (may be overkill...?)
I would appreciate any recommendations for this specific use case: aggregating lots of data from different web services (the aggregation is invoked by a single user's single request).
Good question. I would try an asynchronous solution first to see how everything works. The asynchronous solution would be the simplest to implement.
If that doesn't work, try a more threaded model.
I would use HttpClient for making your requests. I've worked with it a lot and use it for any http work that I have to do.
A single thread for each remote http connection, and using a synchronous http client will probably be easier. I would try this approach first, and see if it is fast/scalable enough. For the synchronous approach, apache http client is a good choice.
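A sketch of that thread-per-request approach, with the per-site fetches abstracted as Callables so the fan-out logic is visible (an Apache HttpClient call would go inside each Callable; the pool size and names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class FanOut {
    // One shared pool for all users' outbound requests; sizing it is a
    // tuning decision, since 30 fetches x hundreds of users can add up.
    private final ExecutorService pool = Executors.newFixedThreadPool(30);

    // Runs all fetches concurrently and blocks until every one is done,
    // returning the bodies in the same order as the input list.
    public List<String> fetchAll(List<Callable<String>> fetches)
            throws InterruptedException, ExecutionException {
        List<String> bodies = new ArrayList<>();
        for (Future<String> f : pool.invokeAll(fetches)) {
            bodies.add(f.get()); // rethrows any per-request failure
        }
        return bodies;
    }

    public void shutdown() {
        pool.shutdown();
    }
}
```

Each Callable would wrap one synchronous HttpClient call, so a single user's 30 fetches complete in roughly the time of the slowest one rather than the sum of all of them.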
If a synchronous solution is not good enough, something like netty may be a good fit. It uses NIO so you won't get thousands of threads.
I do not know of any existing software that will do this for you without being overkill. But you might try splitting things up: that is, separate the fetching of the data from the showing of the result. Since you provide no further details on the problem at hand, I cannot say whether that would be feasible for you or not.
Basically the idea is to create a service that performs those 30 subsequent requests for you and, if possible, processes the results into a single response. The client of this service is the service running on the web: it receives a request from a user and subsequently passes its own request through to your data service. When the data service is ready, it returns its response, either synchronously or asynchronously.
You could program your data service in any language you want, even Java, without being bound to servlets, ensuring that fetching the 30 subsequent requests and combining them into a response isn't done by the web server. This could also improve the responsiveness of your web server itself.
In a nutshell: branch off the "difficult" tasks to a specialized service where you can transparently handle parallelism.
I would use Jetty, and in the servlets I'd use the continuation mechanism to free up the thread while waiting for the third-party web requests to complete. This allows maximum concurrency on your server, as you can have many more suspended requests than threads.
You can use either continuations or the servlet 3.0 asynchronous API, the end result is the same.
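A minimal sketch of the Servlet 3.0 asynchronous API variant (the URL pattern, timeout value, and response body are illustrative; Jetty continuations follow the same suspend-then-complete shape):

```java
import java.io.IOException;
import javax.servlet.AsyncContext;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// asyncSupported = true is required for startAsync() to work.
@WebServlet(urlPatterns = "/aggregate", asyncSupported = true)
public class AggregateServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        // Suspend the request: the container thread is released while
        // the outbound web requests run, so suspended requests can far
        // outnumber threads.
        AsyncContext ctx = req.startAsync();
        ctx.setTimeout(60000); // fail the request if not done in a minute
        ctx.start(() -> {      // runs on a container-managed worker
            try {
                // ... perform the outbound HTTP calls and aggregation here ...
                ctx.getResponse().getWriter().write("aggregated result");
            } catch (IOException e) {
                // log the failure; complete() below still releases the request
            } finally {
                ctx.complete(); // resume and finish the suspended response
            }
        });
    }
}
```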
