I want to make an AJAX call to my Java webapp. The Java webapp will in turn make an asynchronous call elsewhere. The result of that call will then be returned as the result of the AJAX request.
The crux of my question is what would I do with the HttpRequest whilst I'm waiting for the second call to return?
Do I just block and wait for the call within the AJAX handler method or do I store the request somewhere and wait for a callback? How would I handle errors / timeouts?
For those who care, further information on how I arrived at this situation follows:
This is part of an XMPP-based instant messaging system. There is one global support user which is displayed as an icon on every page in our webapp. I also want to display the presence of this user, so I could just use the IM system to request this user's presence on every single page load for every user and eventually DDoS myself. Instead, I want to have a single user query the presence from the webapp periodically and cache the result.
The AJAX call is therefore to the server which will then either return the cached presence or query the XMPP server asynchronously.
You shouldn't have to block and wait on the AJAX side; that is, don't make the AJAX call synchronously. What you should do on the Java side is figure out a way to block while you wait for the response to come back from your asynchronous call (i.e., make that request synchronously). The performance hit will only be on the first call for any new data; subsequent calls will hit the cache, so you should be good. Maintain a cache for this data: check the cache first to see if the data exists. If it doesn't, make the call and store the result in the cache; otherwise, grab the data from the cache and send it back to the view. Since AJAX is asynchronous, your callback will be called as soon as the data comes back from the server.
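A minimal sketch of that cache-then-synchronous-fetch idea on the server side (the PresenceCache class, TTL value and fetchPresenceFromXmpp method are assumptions, not from the original answer):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class PresenceCache {
    private static final long TTL_MS = 30_000; // assumed cache lifetime

    private static class Entry {
        final String presence;
        final long fetchedAt;
        Entry(String presence, long fetchedAt) { this.presence = presence; this.fetchedAt = fetchedAt; }
    }

    private final ConcurrentMap<String, Entry> cache = new ConcurrentHashMap<>();

    /** Returns the cached presence, or fetches it synchronously if missing or stale. */
    public String getPresence(String userJid) {
        Entry e = cache.get(userJid);
        if (e != null && System.currentTimeMillis() - e.fetchedAt < TTL_MS) {
            return e.presence;                                  // fresh cache hit
        }
        String presence = fetchPresenceFromXmpp(userJid);       // blocking call; only the first request pays this cost
        cache.put(userJid, new Entry(presence, System.currentTimeMillis()));
        return presence;
    }

    private String fetchPresenceFromXmpp(String userJid) {
        // Placeholder for the real XMPP query (e.g. via an XMPP client library), assumed to block until a result arrives.
        return "available";
    }
}
```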
Here is what I would do (a rough sketch follows the list):
When the page starts up, initialize a job to retrieve the data array you need for that specific page; you need to identify the job and the job result for later use.
Use AJAX from the page to poll for the job result; once the job is done, the poll finishes and returns with the data.
Cache the entries you have requested, as Vivin indicated.
Cache the job result on your server and give it a time-out option.
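A rough sketch of that job-and-poll pattern, assuming an in-memory job store; the JobStore, startPresenceJob and queryXmppPresence names are made up for illustration:

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class JobStore {
    private final ExecutorService pool = Executors.newCachedThreadPool();
    private final Map<String, Future<String>> jobs = new ConcurrentHashMap<>();

    /** Kick off the work when the page loads and hand back a job id for later polling. */
    public String startPresenceJob(String userJid) {
        String jobId = UUID.randomUUID().toString();
        jobs.put(jobId, pool.submit(() -> queryXmppPresence(userJid))); // hypothetical blocking query
        return jobId;
    }

    /** Called by the AJAX poll: returns the result once ready, or null to mean "poll again". */
    public String pollResult(String jobId) throws Exception {
        Future<String> f = jobs.get(jobId);
        if (f == null || !f.isDone()) {
            return null;                // not finished yet; the client should poll again
        }
        jobs.remove(jobId);             // result handed out; drop the entry
        return f.get();
    }

    private String queryXmppPresence(String userJid) {
        // Placeholder for the real XMPP call.
        return "available";
    }
}
```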
HTTP requests, i.e. HttpServletRequest objects, are not serializable. Therefore you cannot store them in a persistent store of any sort for the duration of the call. It doesn't make sense anyway to store the request, since its life is limited to the duration of the HTTP request itself, given the stateless nature of the HTTP protocol.
This effectively means that you have to hold on to the HttpServletResponse object for the duration of the call. The HttpServletRequest object is no longer needed, once the parsing of the HTTP request is performed, and once all the data is available to your application; it is the response object that is of importance in your context.
The response could be populated with the cached copy of the user status. If the copy in the cache is stale, you might want to refresh it synchronously from the XMPP server (after all, it affects the performance of just one page load). You could query asynchronously from within the application server, but some result must be returned to the browser (so there might be a few edge cases that need to be taken care of).
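A minimal servlet sketch of populating the response with the cached status; the PresenceCache comes from the earlier sketch, and the JID and JSON shape are assumptions:

```java
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class PresenceServlet extends HttpServlet {
    private final PresenceCache cache = new PresenceCache(); // hypothetical cache from the earlier sketch

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        // The request has already been parsed by this point; only the response matters now.
        String presence = cache.getPresence("support@example.com"); // may refresh synchronously if stale
        resp.setContentType("application/json");
        resp.getWriter().write("{\"presence\":\"" + presence + "\"}");
    }
}
```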
Related
In a Java web service, I have a requirement to return a response to the user once a configured threshold time is reached, and to continue processing after that.
Let's say I have a service that does step 1 and step 2, and the configured threshold is 1 second. If step 1 completes at the 1-second mark, I want to return an acknowledgment response to the user, continue processing with step 2, and store the result in a DB or something like that.
Please let me know if anyone has any solutions or thoughts on this problem.
There are multiple ways to achieve this
HTTP Layer
On the HTTP layer, if the response comes back before the threshold, then I'd be tempted to send back a 200 Success.
However, if it takes more time than the threshold, you could use 202 Accepted.
Looking at the RFC, its use case looks like this
6.3.3. 202 Accepted
The 202 (Accepted) status code indicates that the request has been
accepted for processing, but the processing has not been completed.
The request might or might not eventually be acted upon, as it might
be disallowed when processing actually takes place. There is no
facility in HTTP for re-sending a status code from an asynchronous
operation.
The 202 response is intentionally noncommittal. Its purpose is to
allow a server to accept a request for some other process (perhaps a
batch-oriented process that is only run once per day) without
requiring that the user agent's connection to the server persist
until the process is completed. The representation sent with this
response ought to describe the request's current status and point to
(or embed) a status monitor that can provide the user with an
estimate of when the request will be fulfilled.
Now, of course, instead of having a mix of 200 and 202, you could just return 202 every time.
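A hedged sketch of the 200-vs-202 decision in a Spring MVC controller; the endpoint path, thresholdMillis value and process method are illustrative assumptions:

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class OrderController {

    private final ExecutorService pool = Executors.newCachedThreadPool();
    private final long thresholdMillis = 1000; // assumed to come from configuration

    @PostMapping("/orders")
    public ResponseEntity<String> submit(@RequestBody String payload) {
        Future<String> result = pool.submit(() -> process(payload)); // step 1 + step 2 run off-thread
        try {
            // Wait at most the configured threshold for the work to finish.
            String body = result.get(thresholdMillis, TimeUnit.MILLISECONDS);
            return ResponseEntity.ok(body);                              // finished in time: 200
        } catch (TimeoutException e) {
            // Processing continues in the background; acknowledge with 202.
            return ResponseEntity.accepted().body("{\"status\":\"processing\"}");
        } catch (InterruptedException | ExecutionException e) {
            return ResponseEntity.status(500).build();
        }
    }

    private String process(String payload) {
        // Placeholder for step 1 and step 2; the result could be persisted to the DB here.
        return "{\"status\":\"done\"}";
    }
}
```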
Application Layer
In your application layer, you'll typically want to make use of asynchronous processing for this purpose.
There are multiple ways to leverage this way of working, you can:
Post a message on a queue/topic and let a message broker take care of dispatching it to another part of the app, or another app and let this part do the processing
Save the request inside of a database, and have another service poll the database for new requests, similar to queueing explained above, without JMS
If you're using Java EE, your EJB container allows you to use @Asynchronous, which will call a method asynchronously and return (so you'll be able to return 202)
If you're using Spring, it has an @Async annotation for the same purpose as above
There are definitely other methods you could use to achieve this use case, but I think the ones I presented are the most common ones
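For example, a minimal sketch of the Spring @Async approach; the NotificationService class and sendEmail method are made-up names:

```java
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.Async;
import org.springframework.scheduling.annotation.EnableAsync;
import org.springframework.stereotype.Service;

@Configuration
@EnableAsync
class AsyncConfig { }   // enables @Async processing in the Spring context

@Service
public class NotificationService {

    /** Runs on a separate thread; the calling controller returns immediately (e.g. with a 202). */
    @Async
    public void sendEmail(String recipient, String body) {
        // Placeholder for the slow work (SMTP call, writing the result to the DB, etc.).
    }
}
```

Note that @Async only takes effect when the method is called through the Spring proxy, i.e. from another bean, not from within the same class.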
How do I make sure that a particular DB transaction happens only once? I am making a payment request from my mobile (sometimes more than once), but the backend should execute only one. Once the request is executed, its status is marked as COMPLETED. But with multiple requests, before one request is completed another starts executing, so the payment is made twice before the status is marked COMPLETED. How do I solve this problem? I am using Java on the backend. How can synchronized help to solve this problem?
You can try to add a lock around the code. That way only one thread can enter at any given time.
If you make one request, the other request has to wait until the first one finishes.
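A bare-bones sketch of that locking idea; the class and method names are made up, and note that this only serializes requests within a single JVM (see the caveats in the next answer):

```java
public class PaymentService {

    private final Object paymentLock = new Object();

    public void processPayment(String orderId) {
        synchronized (paymentLock) {          // only one thread at a time, on this JVM only
            if (isCompleted(orderId)) {
                return;                       // a previous request already finished this payment
            }
            chargeCustomer(orderId);          // hypothetical payment gateway call
            markCompleted(orderId);           // mark the status COMPLETED before releasing the lock
        }
    }

    private boolean isCompleted(String orderId) { /* DB status lookup */ return false; }
    private void chargeCustomer(String orderId) { /* payment gateway call */ }
    private void markCompleted(String orderId)  { /* DB status update */ }
}
```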
This is a known issue, called Double Post.
Preventing parallel access to the method with synchronized and locks will not help you, as the requests will just be processed in series.
Using client-side methods may help, but is not enough, as many things may happen on the client side.
If you want to prevent it on the server side (which is the correct way to do it), you can add a hidden field to the client form (some unique hash string) and send it to the server with every request. In the server-side component, you can check whether a request with that hash has already been received, and if so, return an error code to the client.
You can also persist the hash with your data and make it a unique field, so the first request that reaches your database will be persisted, and the others will see unique-field errors.
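A hedged sketch of that unique-hash approach with JPA; the entity, table and column names are illustrative assumptions:

```java
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.Table;
import javax.persistence.UniqueConstraint;

@Entity
@Table(name = "payments",
       uniqueConstraints = @UniqueConstraint(columnNames = "request_hash"))
public class Payment {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    /** The hidden hash sent by the form; the unique constraint rejects duplicates. */
    @Column(name = "request_hash", nullable = false, unique = true)
    private String requestHash;

    private String status; // e.g. PENDING, COMPLETED

    // getters/setters omitted for brevity
}
```

A second insert with the same hash then fails with a constraint violation, which the server can catch and map to the error code returned to the client.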
I have a Java web application, sitting in a Jetty container. I would like to know what happens if I submit 2 requests to the same URL one immediately after the other. Assume that the requests are simple GET requests and have no side effects.
I imagine what happens for each request is that a HTTP request is made to the URL, Jetty receives it and starts up a new thread to handle the request, then generates the response and sends it back over HTTP.
In the context of a browser - if I have sent off a second request before the first one returns, does the first response simply get discarded and not used? Is it effectively a wasted transaction?
In general, you have no way of knowing that the server isn't providing a different response to each GET (perhaps incrementing a hit counter, as one trivial example), so unless the server, or your client, is explicitly set up to cache results, each request gets processed independently.
I have a Spring Web Flow application running on WebLogic 10. In the current application, on load of page A, we make an AJAX call, which in the back end makes a web service call, WEBSVCA. On submission of the same page, another web service call is made, WEBSVCB. The application requires that the WEBSVCA call always be made and completed before the WEBSVCB call starts. But sometimes, when the user submits the page very quickly, the WEBSVCA response has not come back yet and the call to WEBSVCB fails because of the concurrent call.
In order to resolve the above problem, I was planning to implement a BlockingQueue for the web service call status. In this case, the response from WEBSVCA can act as the producer, and before the call to WEBSVCB is made we can check the queue as the consumer.
Is this the best approach, or could there be a simpler approach than this?
Please let me know if you need any other details.
If the user can't progress to the next page before WEBSVCA has finished, then it shouldn't be an ajax call - so you could just do it as part of the page load.
Or, disable the submit button by default, then only enable it when the ajax callback succeeds.
I'm building a web service with a RESTful interface (let's call it MY_API). This service relies on another RESTful web service to handle certain aspects (call it OTHER_API). I'd like to determine what best practices I should consider for handling failures of OTHER_API.
Scenario
My UI is a single page javascript application. There are some fairly complex actions a user can take, which can easily take the user a minute or two to complete. When they are done, they click the SAVE button and MY_API is called to save the data.
MY_API has everything it needs to persist the information submitted by the user. However, there is an action that must take place that is handled by OTHER_API. For instance, OTHER_API might handle sending out emails. Or perhaps it handles adding line items to my user's billing statement. In both cases, these are critical things that must be completed, but they don't have to happen right now; they just need to happen eventually.
If OTHER_API fails, I don't want to simply tell the user their action has failed, as they spent a lot of time doing it and this will make the experience less than optimal.
Questions
So should I create some sort of Message or Event Queue that can save these failed REST requests to OTHER_API and process them later?
Any advice or suggestions on techniques to go about saving REST requests for delayed processing?
Is there a recommended open source message queue solution that would work for this type of scenario with JSON-based REST web services? Java is preferred as my backend is written in it.
Are there other techniques I should consider?
Rather than approach this by focusing on the failure state, it'd be faster and more robust to recognize that these actions should be performed asynchronously and out-of-band from the request by the UI. You should indeed use a message/event/job queue, and just pop those jobs right onto that queue as quickly as possible, and respond to the original request as quickly as possible. Once you've done that, the asynchronous job can be performed independently of the original request, and at its own pace — including with retries as needed.
If you want your API to indicate that there are aspects of the request which have not completed, you can use the HTTP response Status Code 202 (Accepted).
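A minimal sketch of that queue-and-retry idea using an in-memory queue and a background worker; the DeferredJobQueue and callOtherApi names, the retry limit, and the queue choice are assumptions (a real deployment would more likely use a durable broker such as ActiveMQ or RabbitMQ):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class DeferredJobQueue {

    public record EmailJob(String payload, int attempts) { }

    private final BlockingQueue<EmailJob> queue = new LinkedBlockingQueue<>();

    /** Called from the save endpoint: enqueue and return immediately (the endpoint can answer 202). */
    public void enqueue(String payload) {
        queue.add(new EmailJob(payload, 0));
    }

    /** Starts a background worker that processes jobs independently of the original request. */
    public void startWorker() {
        Thread worker = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    EmailJob job = queue.take();
                    if (!callOtherApi(job.payload()) && job.attempts() < 5) {
                        queue.add(new EmailJob(job.payload(), job.attempts() + 1)); // re-enqueue for retry
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        worker.setDaemon(true);
        worker.start();
    }

    private boolean callOtherApi(String payload) {
        // Placeholder for the HTTP call to OTHER_API; returns false on failure so the job is retried.
        return true;
    }
}
```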