Design pattern for single request webservice - java

I plan to create a web service (in Tomcat) that works with a certain file on the system and makes some system calls. The challenge I am having is making sure that only one request can be processed at a given time, and that the previous request gets pre-empted. I currently have a singleton class that works on the request, and requesting threads somehow wait until the running thread is done. But what is the general design pattern for such problems?
Any ideas how this can be achieved?
Thanks,
V
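
One minimal sketch of the "newest request wins" behaviour asked about here, assuming an in-JVM single-thread executor (not something prescribed by the question): submitting a new job cancels whatever was running or queued before, and the work itself must respond to interruption for the pre-emption to take effect.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;
    import java.util.concurrent.atomic.AtomicReference;

    public class SingleRequestWorker {
        // One worker thread => at most one job executing at any moment.
        private final ExecutorService executor = Executors.newSingleThreadExecutor();
        private final AtomicReference<Future<?>> current = new AtomicReference<>();

        public void submit(Runnable work) {
            Future<?> next = executor.submit(work);
            Future<?> previous = current.getAndSet(next);
            if (previous != null) {
                // Interrupt the running job (or drop it if still queued);
                // the job must check Thread.interrupted() to stop early.
                previous.cancel(true);
            }
        }
    }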

Since there may be too many requests calling this service, a synchronous approach may not be feasible. There is also a chance that some clients would wait so long that they time out. So I would prefer an asynchronous design:
1. Service-A receives the request, puts it into queue-A or DB table-A, and generates a ref-id.
2. Service-A returns the ref-id to the client for further monitoring.
3. A back-end process monitors queue-A or DB table-A and performs the requests one by one.
4. When a request is finished, its result is put into another queue-B or DB table-B.
5. The client periodically polls another Service-B with the ref-id to check whether the request is done.
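
Here is a minimal in-memory sketch of that flow, assuming plain java.util.concurrent collections stand in for queue-A/table-A and table-B (a real deployment might use a message broker or database instead):

    import java.util.Map;
    import java.util.UUID;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.LinkedBlockingQueue;

    public class JobService {
        // queue-A: pending requests; table-B: finished results by ref-id.
        private final BlockingQueue<Job> queueA = new LinkedBlockingQueue<>();
        private final Map<String, String> tableB = new ConcurrentHashMap<>();

        private static class Job {
            final String refId;
            final String payload;
            Job(String refId, String payload) {
                this.refId = refId;
                this.payload = payload;
            }
        }

        public JobService() {
            // Back-end process: drains queue-A one request at a time.
            Thread worker = new Thread(() -> {
                try {
                    while (true) {
                        Job job = queueA.take();  // blocks until work arrives
                        tableB.put(job.refId, process(job));
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            worker.setDaemon(true);
            worker.start();
        }

        // Service-A: accept the request and return the ref-id immediately.
        public String submit(String payload) {
            String refId = UUID.randomUUID().toString();
            queueA.add(new Job(refId, payload));
            return refId;
        }

        // Service-B: the client polls with the ref-id; null means "not done yet".
        public String poll(String refId) {
            return tableB.get(refId);
        }

        private String process(Job job) {
            return "processed: " + job.payload; // placeholder for the real work
        }
    }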
I hope this helps to achieve your requirement.
Regards,
Charlee Ch.

I would place the queues mentioned by Charlee in the ServletContext, which is initialized when your web application is started.
You can initialize and destroy these queues and the back-end process in an implementation of ServletContextListener. Store them with setAttribute on the ServletContext.
Now you're able to access these queues via the ServletContext in your servlet.
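
A minimal sketch of that listener, assuming a BlockingQueue of Runnable jobs and an arbitrary attribute name "jobQueue":

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;
    import javax.servlet.ServletContextEvent;
    import javax.servlet.ServletContextListener;

    public class QueueInitializer implements ServletContextListener {

        private Thread worker;

        @Override
        public void contextInitialized(ServletContextEvent sce) {
            BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();
            sce.getServletContext().setAttribute("jobQueue", queue);

            // The back-end process that drains the queue.
            worker = new Thread(() -> {
                try {
                    while (!Thread.currentThread().isInterrupted()) {
                        queue.take().run();
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            worker.start();
        }

        @Override
        public void contextDestroyed(ServletContextEvent sce) {
            if (worker != null) {
                worker.interrupt(); // stop the back-end process on shutdown
            }
        }
    }

A servlet can then call getServletContext().getAttribute("jobQueue"), cast the result back to BlockingQueue<Runnable>, and add jobs to it.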

Related

How to handle thousands of requests in a REST-based web service at a time?

Is making a REST-based web service (POST) asynchronous the best way to handle thousands of requests at one time (keeping in mind that I have only a single instance of the server serving the requests)?
For example: I have a REST-based web service which is supposed to be consumed by 100 thousand clients within a very short span of time (~60 seconds). I understand that if I were allowed to deploy multiple instances of the server, I could use a load balancer to handle all my incoming requests and delegate them accordingly. But I am restricted to a single instance. What design could I opt for within this restriction?
I could think of making the requests asynchronous (not responding to the client immediately) in order to free the server from this load and let it handle the requests at its own pace.
For now we can ignore memory limitations.
Please let me know if this clarifies the question.
The term asynchronous can have different meanings in different places. For web application code, it could refer to a non-blocking I/O server such as Node or Netty/Akka, which is a way for HTTP requests to time-multiplex on the same worker threads. If you're writing callbacks or using async or future constructs, it is probably non-blocking I/O, which people sometimes refer to as asynchronous.
However, I could have a REST API running on Node which implements non-blocking I/O while the API, or the overall architecture, is still fully synchronous. For example, let's say I have an API endpoint POST /photos which takes in a photo, creates image thumbnails, stores the URLs of the photo in a SQL DB, and then stores the images in S3. The REST API could still block from the initial POST until after the image is processed and stored.
A second way is for the server to accept the photo-processing job and return immediately. The server could then store the photo in an in-memory or network-based queue to be processed later by some other worker thread. In fact, I could implement this async architecture even with a blocking server, like good old Java 7 and Jetty.
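
A minimal sketch of that second way in a plain blocking servlet; the class name, pool size, and the idea of returning 202 Accepted plus a job id are illustrative choices, not something the answer prescribes:

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.util.UUID;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class PhotoServlet extends HttpServlet {
        // Workers that do the slow part (thumbnails, SQL insert, S3 upload).
        private final ExecutorService workers = Executors.newFixedThreadPool(4);

        @Override
        protected void doPost(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            final byte[] photo = readBody(req);
            final String jobId = UUID.randomUUID().toString();

            // Queue the processing and return without waiting for it.
            workers.submit(new Runnable() {
                public void run() {
                    processPhoto(jobId, photo); // thumbnails, DB, S3 ...
                }
            });

            resp.setStatus(HttpServletResponse.SC_ACCEPTED); // 202
            resp.getWriter().write(jobId); // client can poll with this id
        }

        private void processPhoto(String jobId, byte[] photo) {
            // the actual image work goes here
        }

        private static byte[] readBody(HttpServletRequest req) throws IOException {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            byte[] chunk = new byte[8192];
            InputStream in = req.getInputStream();
            int n;
            while ((n = in.read(chunk)) != -1) {
                buf.write(chunk, 0, n);
            }
            return buf.toByteArray();
        }
    }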

How to throttle max threads per route in the Tomcat/Java web server world

Let's say I have implemented a few REST endpoints.
Now I want to make sure that a particular route, say POST /log, only gets a maximum of 2 threads, and the other, more important routes get the rest of the threads.
During peak load we don't want to have many threads out of the thread pool invested in the POST /log route; we don't care much about the POST /log route.
[1] How can this be achieved in Tomcat?
[2] Is there some other way to achieve this instead of relying on the web server?
PS: I did find the SingleThreadModel interface in the Servlet API, but it has been deprecated.
EDIT:
I don't want to add a filter that counts requests and drops them once the limit is exceeded, because in that case the JVM still takes the heat of a thread context switch just to count the request and drop it.
Ideally something like Node.js's event loop is preferred, where there is only one thread processing requests and the others are queued up.
To my knowledge, one crude way to achieve this is to have a different connector for each route and assign a thread pool to each, but I am looking for something more development-friendly.
Set up the QoSFilter and mark your priorities appropriately for the URL patterns that you want to limit.
What you need is basic quality of service, where you allocate resources to certain URIs.
This can be achieved with Servlet filters, and yes, HTTP requests can be queued since Servlet 3.0.
Jetty, for example, provides a QoSFilter exactly for this purpose; see its documentation.
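
Programmatic registration of Jetty's QoSFilter might look like the sketch below. It assumes the jetty-servlets jar is on the classpath and that this listener is declared in web.xml (the Servlet 3.0 spec only allows programmatic registration from listeners declared there or added by a ServletContainerInitializer); the maxRequests value of 2 mirrors the POST /log limit from the question:

    import javax.servlet.FilterRegistration;
    import javax.servlet.ServletContextEvent;
    import javax.servlet.ServletContextListener;

    public class QoSInitializer implements ServletContextListener {

        @Override
        public void contextInitialized(ServletContextEvent sce) {
            FilterRegistration.Dynamic qos = sce.getServletContext()
                    .addFilter("logQoS", "org.eclipse.jetty.servlets.QoSFilter");
            // At most 2 requests inside /log at once; excess requests are
            // suspended via the Servlet 3.0 async API rather than holding
            // a worker thread, which is the behaviour asked for above.
            qos.setInitParameter("maxRequests", "2");
            qos.addMappingForUrlPatterns(null, false, "/log/*");
        }

        @Override
        public void contextDestroyed(ServletContextEvent sce) {
        }
    }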

Using global Service and *PortType class in a web application?

I generated a web service client using JAX-WS. I notice that when using a JAX-WS client, instantiating the Service and *PortType classes takes a while. As such, instantiating the Service and *PortType classes each time a request needs to be made is not a good idea.
Is it safe to make the Service and *PortType classes global to the whole web application? What are the pros and cons?
Won't there be a possibility for a request/response to get switched with a different request/response?
When you call a method on a Service, does it create a new connection? Or does it simply reuse an old connection?
If it's just reusing an old connection, then there could be some threading issue, right?
Also, given the code port.calculate(requestParam), where port is a global variable, what will happen if many threads simultaneously call the calculate() method? Will a new thread be created for each calculate() call, or will each call wait for the previous one to finish before proceeding? How will the calls be handled? I'm just afraid that I might mix up some of the requests and responses.
Thanks in advance!
You are right to worry. The ports are not thread-safe; however, the service, which takes the longest to create, is thread-safe. There is no official documentation of this, but it is stated here, and in this forum post an experiment shows that multiple requests on the same port cause garbled requests. The recommended approach is to create a single service and a pool of port objects which you check out from to make requests. Another alternative is to use CXF, which does make its client objects thread-safe and is more explicit about how to share them across threads in its documentation.
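
A minimal sketch of that check-out/check-in pool; it is generic so it compiles standalone, and the CalculatorService/CalculatorPortType names in the usage comment are hypothetical stand-ins for your generated JAX-WS classes:

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.function.Function;
    import java.util.function.Supplier;

    public class PortPool<P> {
        // Ports are not thread-safe, so each one is used by only one
        // thread at a time, enforced by checking it out of this queue.
        private final BlockingQueue<P> ports = new LinkedBlockingQueue<>();

        public PortPool(int size, Supplier<P> portFactory) {
            for (int i = 0; i < size; i++) {
                ports.add(portFactory.get());
            }
        }

        public <R> R withPort(Function<P, R> call) throws InterruptedException {
            P port = ports.take(); // blocks until a port is free
            try {
                return call.apply(port);
            } finally {
                ports.put(port); // always return the port to the pool
            }
        }
    }

    // Hypothetical usage with generated classes:
    //   CalculatorService service = new CalculatorService(); // thread-safe, create once
    //   PortPool<CalculatorPortType> pool =
    //       new PortPool<>(10, service::getCalculatorPort);
    //   Result r = pool.withPort(port -> port.calculate(requestParam));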

Server-side performing multiple requests against cloud services

I am in the process of writing a web-app that uses multiple web APIs.
For a single request of a single user, my app may need to perform up to 30 HTTP requests to other sites. The site housing the web-app can have hundreds of concurrent users.
I've been looking around trying to figure out which library I should use. I'm looking for a mature project with detailed documentation and tested code, one that will still be around in years to come. Not sure if something like that exists (!)
A couple of questions:
In a case such as the one described above, should I be using an asynchronous HTTP client (without threading), or a regular (possibly pooled) HTTP client (with threading)? Asynchronicity relieves my app from using threads, but makes the code more scattered - will the above-mentioned number of requests burden my server too much? (It says here that asynchronous is more scalable.)
Which library is the common one to use? Is it Apache HttpComponents HttpClient or its async counterpart HttpAsyncClient (which is in alpha)? How about jfarcand's AsyncHttpClient?
Okay, let's say I will use threads.
After digging around I realize that spawning threads from within a servlet (in my case, a Struts action) may be a big no-no:
related questions:
What is recommended way for spawning threads from a servlet in Tomcat
Need help with java web app design to perform background tasks
Can i spawn a thread from a servlet ?
The way I see it, these are my options:
use my own thread pool (container doesn't manage my threads)
use a WorkManager such as CommonJ (seems like an inactive product)
use a 3rd-party scheduler such as Quartz (may be overkill?)
I would appreciate any recommendations for this specific use case: aggregating lots of data from different web services, where the aggregation is invoked by a single user's single request.
Good question. I would try an asynchronous solution first to see how everything works. The asynchronous solution would be the simplest to implement.
If that doesn't work, try a more threaded model.
I would use HttpClient for making your requests. I've worked with it a lot and use it for any http work that I have to do.
A single thread for each remote HTTP connection, together with a synchronous HTTP client, will probably be easier. I would try this approach first and see if it is fast and scalable enough. For the synchronous approach, Apache HttpClient is a good choice.
If a synchronous solution is not good enough, something like Netty may be a good fit. It uses NIO, so you won't end up with thousands of threads.
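
A minimal sketch of the synchronous, pooled variant: one task per remote call on a bounded thread pool. Plain HttpURLConnection is used here only to keep the sketch self-contained (Apache HttpClient would slot in the same way), and the pool size of 50 is an arbitrary illustration:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;
    import java.util.stream.Collectors;

    public class Aggregator {
        // Bounded pool shared by all user requests, so 30 calls times
        // hundreds of users cannot create an unbounded number of threads.
        private final ExecutorService pool = Executors.newFixedThreadPool(50);

        public List<String> fetchAll(List<String> urls) throws Exception {
            List<Callable<String>> tasks = new ArrayList<>();
            for (final String url : urls) {
                tasks.add(() -> fetch(url));
            }
            // invokeAll blocks until every sub-request has completed.
            List<String> results = new ArrayList<>();
            for (Future<String> f : pool.invokeAll(tasks)) {
                results.add(f.get());
            }
            return results;
        }

        private String fetch(String url) throws Exception {
            HttpURLConnection conn =
                    (HttpURLConnection) new URL(url).openConnection();
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()))) {
                return in.lines().collect(Collectors.joining("\n"));
            } finally {
                conn.disconnect();
            }
        }
    }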
I do not know of any existing software that will do this for you without being overkill. But you might try splitting things up: that is, separate the fetching of the data from the showing of the result. Since you provide no further details on the problem at hand, I cannot say whether that would be feasible or not.
Basically the idea is to create a service that performs those 30 subsequent requests for you and, if possible, processes them into one response. The client of this service is the service that is running on the web. It receives a request from a user and subsequently puts its own request through to your data service. When the data service is ready, it returns its response, either synchronously or asynchronously.
You could program your data service in any language you want, even Java, without being bound to servlets, ensuring that fetching the 30 subsequent requests and combining them into one response isn't done by the web server. This could also improve the responsiveness of the web server itself.
In a nutshell: branch off the "difficult" tasks to a specialised service where you can transparently handle parallelism.
I would use Jetty, and in the servlets I'd use the continuation mechanism to free up the thread while waiting for the third-party web request to complete. This will allow maximum concurrency on your server, as you can have a lot more suspended requests than threads.
You can use either continuations or the servlet 3.0 asynchronous API, the end result is the same.
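
For illustration, a minimal sketch using the portable Servlet 3.0 asynchronous API (the servlet name, URL pattern, and executor size are arbitrary): the container thread is released as soon as startAsync() returns, and the response is completed later from a worker thread.

    import java.io.IOException;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import javax.servlet.AsyncContext;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    @WebServlet(urlPatterns = "/aggregate", asyncSupported = true)
    public class AggregateServlet extends HttpServlet {
        private final ExecutorService executor = Executors.newFixedThreadPool(20);

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
            AsyncContext ctx = req.startAsync(); // releases the container thread
            executor.submit(() -> {
                try {
                    String result = callThirdPartyServices(); // the slow part
                    ctx.getResponse().getWriter().write(result);
                } catch (IOException e) {
                    // log, then fall through to complete()
                } finally {
                    ctx.complete(); // finish the suspended request
                }
            });
        }

        private String callThirdPartyServices() {
            return "aggregated result"; // placeholder for the 30 outgoing calls
        }
    }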

Best practice: servlet that uses httpclient to post content

I want to write a Java servlet that will be called by different users to POST content to another site via HttpClient. I wanted to hear opinions from the gurus in this case: does my servlet need to use a thread pool or something, since I want to serve different users at the same time and each user is executing a different HttpClient POST?
You should read the HttpClient threading guide because you are in a multi-threaded environment within a servlet container.
Are your outgoing POST requests going to be synchronous or asynchronous? That is: will the user request for which the POST is being performed wait for the POST to complete?
Servlet engines already use separate threads for each request being concurrently handled, so if your outgoing POSTs are meant to be synchronous, then you don't need to make your own thread pool. If they're asynchronous, however, you may want to have a producer-consumer queue where requests "produce" a command to perform a POST, and a set of worker threads consumes (and then performs) these commands.
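
For the synchronous case, a minimal sketch with a single shared, pooled Apache HttpClient 4.x instance (thread-safe, per the threading guide); the target URL, the pool limits, and the "content" request parameter are illustrative assumptions:

    import java.io.IOException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import org.apache.http.client.methods.CloseableHttpResponse;
    import org.apache.http.client.methods.HttpPost;
    import org.apache.http.entity.StringEntity;
    import org.apache.http.impl.client.CloseableHttpClient;
    import org.apache.http.impl.client.HttpClients;
    import org.apache.http.util.EntityUtils;

    public class ForwardingServlet extends HttpServlet {
        // One pooled, thread-safe client shared by all request threads.
        private final CloseableHttpClient client = HttpClients.custom()
                .setMaxConnTotal(100)    // across all routes
                .setMaxConnPerRoute(20)  // per target host
                .build();

        @Override
        protected void doPost(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            HttpPost post = new HttpPost("https://example.org/target"); // illustrative
            post.setEntity(new StringEntity(req.getParameter("content")));
            try (CloseableHttpResponse remote = client.execute(post)) {
                resp.getWriter().write(EntityUtils.toString(remote.getEntity()));
            }
        }
    }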
