Hi there, I am trying to solve a challenge with Flux.
My API calls another third-party API that is very slow. I want to make sure that I call that API as few times as possible. For that, I want to queue the query parameters. When that queue is full or a certain expiration time is reached, I make a request to the slow API.
Example:
Request 1: GET localhost:8080/shipping?q=BR,CN,NL
Request 2: GET localhost:8080/shipping?q=LU,CA
Suppose my queue has size 5 and a timeout of 5 seconds. When the second request arrives, my queue is full and I call the 3rd party API. With the results, I want to respond to each request with the right response.
Response 1:
{
"BR" : 21,
"CN" : 33,
"NL" : 5
}
Response 2:
{
"LU" : 1,
"CA" : 2
}
How would I keep track of the requests in an asynchronous way here? How do I make sure that each request only gets what it asked for, essentially?
I would have a WebClient doing a request like:
GET slowapi.com/shipping?q=BR,CN,NL,LU,CA
and would then split the results into two different responses.
P.S.
I probably need to implement this using buffer or window - any experts in this area?
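Roughly what I have in mind - a rough, untested sketch (ShippingBatcher, BatchEntry and callSlowApi are just placeholder names, and real code would need to serialize concurrent emissions):

import java.time.Duration;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import reactor.core.publisher.Mono;
import reactor.core.publisher.MonoSink;
import reactor.core.publisher.Sinks;

class ShippingBatcher {

    // One entry per incoming request: the codes it asked for plus the sink used to answer it.
    record BatchEntry(List<String> codes, MonoSink<Map<String, Integer>> reply) {}

    private final Sinks.Many<BatchEntry> sink = Sinks.many().unicast().onBackpressureBuffer();

    ShippingBatcher() {
        sink.asFlux()
            // Flush when 5 entries are queued or 5 seconds have passed, whichever comes first.
            // (This buffers per request rather than per country code - a simplification.)
            .bufferTimeout(5, Duration.ofSeconds(5))
            .flatMap(batch -> {
                List<String> allCodes = new ArrayList<>();
                batch.forEach(e -> allCodes.addAll(e.codes()));
                return callSlowApi(allCodes)                 // one call to the slow API per batch
                    .doOnNext(rates -> batch.forEach(e -> {
                        Map<String, Integer> own = new HashMap<>();
                        e.codes().forEach(c -> own.put(c, rates.get(c)));
                        e.reply().success(own);              // each request only gets what it asked for
                    }));
            })
            .subscribe();
    }

    Mono<Map<String, Integer>> shipping(List<String> codes) {
        // tryEmitNext is not safe for concurrent callers; real code would need to serialize this.
        return Mono.create(reply -> sink.tryEmitNext(new BatchEntry(codes, reply)));
    }

    private Mono<Map<String, Integer>> callSlowApi(List<String> codes) {
        return Mono.just(Map.of()); // placeholder for the WebClient call to slowapi.com/shipping
    }
}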
You want to do what people do with long polling.
Long polling is a method that server applications use to hold a client connection until information becomes available.
See long polling using deferred result for more details.
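A hedged Spring MVC sketch of that DeferredResult idea - the controller returns immediately, and whatever component performs the batched slow-API call completes the result later (ShippingBatchService and its enqueue callback are hypothetical):

import java.util.List;
import java.util.Map;
import java.util.function.Consumer;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.context.request.async.DeferredResult;

// Hypothetical batching component: collects codes and calls back once the batch is resolved.
interface ShippingBatchService {
    void enqueue(List<String> codes,
                 Consumer<Map<String, Integer>> onResult,
                 Consumer<Throwable> onError);
}

@RestController
class ShippingController {

    private final ShippingBatchService batchService;

    ShippingController(ShippingBatchService batchService) {
        this.batchService = batchService;
    }

    @GetMapping("/shipping")
    public DeferredResult<Map<String, Integer>> shipping(@RequestParam("q") List<String> codes) {
        // Hold the connection open for up to 10 seconds while the batch fills up and is flushed.
        DeferredResult<Map<String, Integer>> result = new DeferredResult<>(10_000L);
        batchService.enqueue(codes, result::setResult, result::setErrorResult);
        return result;
    }
}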
Having said that, this doesn't seem to be a very good (or user-friendly) design for a synchronous endpoint. If the API is slow, you should consider making this an asynchronous request (with polling/websockets, OR with a callback/status endpoint in the response).
In a Java web service, I have a requirement: I want to return a response to the user once a configured threshold time is reached, and continue processing after that.
Let's say my service does step 1 and step 2, and the configured threshold is 1 second. If step 1 completes at the 1-second mark, I want to return an acknowledgment response to the user, continue processing with step 2, and store the result in a DB or something like that.
Please let me know if anyone has any solutions or thoughts on this problem.
There are multiple ways to achieve this.
HTTP Layer
On the HTTP layer, if the response comes back before the threshold, then I'd be tempted to send back a 200 Success.
However, if it takes more time than the threshold, you could use 202 Accepted.
Looking at the RFC, its use case looks like this:
6.3.3. 202 Accepted
The 202 (Accepted) status code indicates that the request has been
accepted for processing, but the processing has not been completed.
The request might or might not eventually be acted upon, as it might
be disallowed when processing actually takes place. There is no
facility in HTTP for re-sending a status code from an asynchronous
operation.
The 202 response is intentionally noncommittal. Its purpose is to
allow a server to accept a request for some other process (perhaps a
batch-oriented process that is only run once per day) without
requiring that the user agent's connection to the server persist
until the process is completed. The representation sent with this
response ought to describe the request's current status and point to
(or embed) a status monitor that can provide the user with an
estimate of when the request will be fulfilled.
Now, of course, instead of having a mix of 200 and 202, you could just return 202 every time.
Application Layer
In your application layer, you'll typically want to make use of asynchronous processing for this purpose.
There are multiple ways to leverage this way of working; you can:
Post a message on a queue/topic and let a message broker dispatch it to another part of the app (or to another app) and let that part do the processing
Save the request in a database, and have another service poll the database for new requests - similar to the queueing explained above, but without JMS
If you're using Java EE, your EJB container allows you to work with @Asynchronous, which will call a method asynchronously and return (so you'll be able to return 202)
If you're using Spring, it has an @Async annotation for the same purpose as above
There are definitely other methods you could use to achieve this use case, but I think the ones I presented are the most common ones.
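To make the Spring option concrete, here's a hedged sketch that always answers 202 and lets the work continue on another thread (the endpoint, class and method names are made up):

import org.springframework.http.ResponseEntity;
import org.springframework.scheduling.annotation.Async;
import org.springframework.stereotype.Service;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
class OrderController {

    private final OrderProcessor processor;

    OrderController(OrderProcessor processor) {
        this.processor = processor;
    }

    @PostMapping("/orders")
    public ResponseEntity<Void> submit(@RequestBody String request) {
        processor.process(request);               // returns immediately; step 2 continues elsewhere
        return ResponseEntity.accepted().build(); // 202 Accepted
    }
}

@Service
class OrderProcessor {

    @Async // requires @EnableAsync on a configuration class
    public void process(String request) {
        // step 2: the long-running work; store the result in the database when done
    }
}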
I am working on a project where I have to call a third-party REST service. The problem with the current setup is that the service takes at least 16 seconds to respond, and the response time can exceed even that.
To avoid threads waiting on the server, my service has a timeout value of 16 seconds, but that value is not helping. I searched on this and found that the circuit breaker pattern could be useful (reference: spring-boot-rest-api-request-timeout). I believe this pattern is useful when the service responds slowly only some of the time; in my case, it is always a slow service.
How can I tackle this scenario?
If you want the response from the third-party REST service, you have no choice but to wait. But if your request method has other things to do, you can use a Callable on a separate thread to send the request to the REST service, let the main thread complete the other work first, and then wait for the Callable to come back.
Maybe you can try to use some cache, like @Cacheable or Redis, for this scenario. It may speed up some of the similar requests.
Or just let your request method send the response back to the client first. After that, use AJAX to access the third-party REST service from the client side.
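A minimal sketch of the Callable suggestion above (callSlowService and doOtherWork are hypothetical placeholders for your own code):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

class SlowServiceCaller {

    private final ExecutorService pool = Executors.newSingleThreadExecutor();

    String handleRequest() throws Exception {
        // Fire the slow third-party call on a worker thread.
        Future<String> slowResponse = pool.submit(this::callSlowService);

        doOtherWork(); // the main thread finishes its other work first

        // Then wait for the slow service, capping how long we're willing to block.
        return slowResponse.get(20, TimeUnit.SECONDS);
    }

    private String callSlowService() { return "..."; } // hypothetical REST call
    private void doOtherWork() { }                     // hypothetical other work
}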
I am working on a Java application which takes SOAP requests on one end with 1 to 50 unique IDs. I use the unique IDs from the request to make a REST call, process the response, and send back the processed data as a SOAP response. Performance will take a hit if I get all 50 unique IDs, since I am calling the REST service 50 times sequentially.
My questions are:
Will I get performance benefits if I make my application multi-threaded and spawn new threads to make the REST calls when I get a higher number of unique IDs?
If so, how should I design the multi-threading: use multiple threads only to make the REST calls, or also process the REST response data in multiple threads and merge the data after it is processed?
I searched for a multi-threaded implementation of an Apache REST client but could not find one. Can anyone point me in the right direction?
I'm using the Apache HTTP client.
Thanks in advance.
It's most likely worth doing. Assuming you're getting multiple concurrent SOAP requests, your throughput won't improve, but your latency will.
You probably want to have a threadpool, so you have control over how many threads/REST calls you're doing at the same time. Create a ThreadPoolExecutor (you can use Executors.newFixedThreadPool or Executors.newCachedThreadPool); create a Callable task for constructing/processing each REST call, and then call ThreadPoolExecutor.invokeAll() with the list of the tasks. Then, iterate over the returned list and construct the SOAP response out of it.
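A hedged sketch of that shape (fetchViaRest stands in for the actual Apache HttpClient call, and the pool size is an arbitrary choice):

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

class ParallelRestFetcher {

    private final ExecutorService pool = Executors.newFixedThreadPool(10);

    List<String> fetchAll(List<String> uniqueIds) throws InterruptedException, ExecutionException {
        // One Callable per REST call.
        List<Callable<String>> tasks = new ArrayList<>();
        for (String id : uniqueIds) {
            tasks.add(() -> fetchViaRest(id));
        }

        // invokeAll blocks until every task has completed (or failed).
        List<Future<String>> futures = pool.invokeAll(tasks);

        List<String> results = new ArrayList<>();
        for (Future<String> f : futures) {
            results.add(f.get()); // get() rethrows any task exception
        }
        return results;           // ready to be merged into the SOAP response
    }

    private String fetchViaRest(String id) { return "..."; } // hypothetical REST call per ID
}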
See prior discussions on using Apache HTTP Client with multiple threads.
In designing my GWT/GAE app, it has become evident to me that my client-side (GWT) will be generating three types of requests:
Synchronous - "answer me right now! I'm important and require a real-time response!!!"
Asynchronous - "answer me when you can; I need to know the answer at some point but it's really not all that urgent."
Command - "I don't need an answer. This isn't really a request, it's just a command to do something or process something on the server-side."
My game plan is to implement my GWT code so that I can specify, for each specific server-side request (note: I've decided to go with RequestFactory over traditional GWT-RPC for reasons outside the scope of this question), which type of request it is:
SynchronousRequest - Synchronous (from above); sends a command and eagerly awaits a response that it then uses to update the client's state somehow
AsynchronousRequest - Asynchronous (from above); makes an initial request and somehow - either through polling or the GAE Channel API - is notified when the response is finally received
CommandRequest - Command (from above); makes a server-side request and does not wait for a response (even if the server fails to, or refuses to, oblige the command)
I guess my intention with SynchronousRequest is not to produce a totally blocking request; however, it may block the user's ability to interact with a specific Widget or portion of the screen.
The added kicker here is this: GAE strongly enforces a timeout on all of its frontend instances (60 seconds). Backend instances have much more relaxed constraints for timeouts, threading, etc. So it is obvious to me that AsynchronousRequests and CommandRequests should be routed to backend instances so that GAE timeouts do not become an issue with them.
However, if GAE is behaving badly, or if we're hitting peak traffic, or if my code just plain sucks, I have to account for the scenario where a SynchronousRequest is made (which would have to go through a timeout-regulated frontend instance) and will time out unless my GAE server code does something fancy. I know there is a method in the GAE API that I can call to see how many milliseconds a request has before it's about to time out; although the name of it escapes me right now, it's what this "fancy" code would be based on. Let's call it public static long GAE.timeLeftOnRequestInMillis() for the sake of this question.
In this scenario, I'd like to detect that a SynchronousRequest is about to timeout, and somehow dynamically convert it into an AsynchronousRequest so that it doesn't time out. Perhaps this means sending an AboutToTimeoutResponse back to the client, and force the client to decide about whether to resend as an AsynchronousRequest or just fail. Or perhaps we can just transform the SynchronousRequest into an AsynchronousRequest and push it to a queue where a backend instance will consume it, process it and return a response. I don't have any preferences when it comes to implementation, so long as the request doesn't fail or timeout because the server couldn't handle it fast enough (because of GAE-imposed regulations).
So then, here is what I'm actually asking here:
How can I wrap a RequestFactory call inside SynchronousRequest, AsynchronousRequest and CommandRequest in such a way that the RequestFactory call behaves the way each of them is intended? In other words, so that the call either partially-blocks (synchronous), can be notified/updated at some point down the road (asynchronous), or can just fire-and-forget (command)?
How can I implement my requirement to let a SynchronousRequest bypass GAE's 60-second timeout and still get processed without failing?
Please note: timeout issues are easily circumvented by re-routing things to backend instances, but backends don't/can't scale. I need scalability here as well (that's primarily why I'm on GAE in the first place!) - so I need a solution that deals with scalable frontend instances and their timeouts. Thanks in advance!
If the computation that you want GAE to do is going to take longer than 60 seconds, then don't wait for the results to be computed before sending a response. According to your problem definition, there is no way to get around this. Instead, clients should submit work orders, and wait for a notification from the server when the results are ready. Requests would consist of work orders, which might look something like this:
class ComputeDigitsOfPiWorkOrder {
// parameters for the computation
int numberOfDigitsToCompute;
// Used by the GAE app to contact the requester when results are ready.
ClientId clientId;
}
This way, your GAE app can respond as soon as the work order is saved (e.g. in Task Queue), and doesn't have to wait until it actually finishes calculating a billion digits of pi before responding. Your GWT client then waits for the result using the Channel API.
In order to give some work orders higher priority, you can use multiple task queues. If you want Task Queue work to scale automatically, you'll want to use push queues. Implementing priority using push queues is a little tricky, but you can configure high-priority queues to have a faster feed rate.
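A hedged sketch of putting a work order onto one of several push queues with the Task Queue API (the queue names and worker URL are assumptions):

import com.google.appengine.api.taskqueue.Queue;
import com.google.appengine.api.taskqueue.QueueFactory;
import com.google.appengine.api.taskqueue.TaskOptions;

class WorkOrderSubmitter {

    void enqueue(ComputeDigitsOfPiWorkOrder order, boolean highPriority) {
        // Queues are configured in queue.xml; the high-priority queue gets a faster rate.
        Queue queue = QueueFactory.getQueue(highPriority ? "work-high" : "work-default");
        queue.add(TaskOptions.Builder
                .withUrl("/worker/compute-pi")
                .param("digits", Integer.toString(order.numberOfDigitsToCompute))
                .param("clientId", order.clientId.toString()));
    }
}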
You could replace the Channel API with some other notification solution, but it is probably the most straightforward option.
I want to push data to the JSP every 2 seconds, without the client requesting it.
I am using Spring with Hibernate here.
I am displaying a Google Maps marker, and I want to update the marker location every 2 seconds by getting the data from the database. I have managed to fetch the data from the database every 2 seconds, but I am unable to push that data to the JSP.
@Scheduled(fixedRate = 2000)
public void getData() {
    DeviceDetails deviceDetails = realTimeDataDAO.getDeviceDetails(deviceId);
    System.out.println(deviceDetails);
}
I have to display some data every 2 seconds. Can anyone tell me how to do that?
Does anyone know about the Comet AJAX push technique, and will it work in this scenario?
You have a number of choices.
Polling - as mentioned in other answers, you could simply have JavaScript in the client poll the server every 2 seconds. This is a very common approach, is simple, and will work in the large majority of browsers. While not as scalable as some other approaches, set up correctly it should still easily scale to moderate volumes (probably more users than you'll have!).
Long polling - also known as Comet, this is essentially a long-lived request. The implementation will vary depending on your app server; see here for Tomcat: http://wiki.apache.org/tomcat/WhatIsComet, and Jetty bundles some examples.
HTML5 solutions - while the web is traditionally request/response based, event-based processing is part of the HTML5 spec. As your events seem to flow only one way (server -> client), consider using EventSource (server-sent events). See: http://www.html5rocks.com/en/tutorials/eventsource/basics/ or again the Jetty examples. The caveats here are that only modern browsers and some app servers support these methods - e.g. Apache doesn't natively support WebSockets.
So to sum up - my gut feeling is that for your needs, and for simplicity, a polling approach is fine; don't worry too much initially about performance issues.
If you want to be on the cutting edge, learn new things, and you have control over your app server and frameworks, then I'd go for the HTML5 approach.
Comet is kind of a halfway house between these two.
Your best bet with Spring is to store the results of the scheduled query in a bean in memory, then have another, request-scoped bean read that stored result in a web-accessible method and return it as text (or JSON). Alternatively, you could query the DB every time an update is requested.
Then you can make a timed async request from your page (you may want to use the YUI Connection Manager for that), read the response, and use the panTo method of google.maps.Map to update your map location.
As you can see, the solution is split in a Java and a JavaScript portion.
For the Java side, you must create a controller that performs the query to the database (or better yet, delegates that task to another layer) and returns the results as JSON; you can use http://spring-json.sourceforge.net/ for that. It's a bit complex in Spring, so you might instead want to create a simple servlet that returns the data.
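On a reasonably recent Spring version, a hedged sketch of such a controller could look like this (@ResponseBody via @RestController plus Jackson handles the JSON; the DeviceDetails getters and the fixed deviceId are assumptions):

import java.util.Map;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class MarkerController {

    private final RealTimeDataDAO realTimeDataDAO; // same DAO as in the question
    private final long deviceId = 1L;              // hypothetical fixed device

    public MarkerController(RealTimeDataDAO realTimeDataDAO) {
        this.realTimeDataDAO = realTimeDataDAO;
    }

    @GetMapping("/getData.htm")
    public Map<String, Double> getData() {
        DeviceDetails details = realTimeDataDAO.getDeviceDetails(deviceId);
        // Keys match what the JavaScript below expects (response.lat / response.longi).
        return Map.of("lat", details.getLatitude(), "longi", details.getLongitude());
    }
}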
For the JavaScript side, once you have a working endpoint that returns the JSON data, using the YUI Connection Manager and the Google Maps API:
function update() {
    var callback = {
        success: function (o) {
            var response = YAHOO.lang.JSON.parse(o.responseText);
            map.panTo({lat: response.lat, lng: response.longi}); // map is the google.maps.Map representing your map
        },
        failure: function (o) {
        }
    };
    var sUrl = '/getData.htm'; // This is the request mapping for your bean
    YAHOO.util.Connect.asyncRequest('GET', sUrl, callback);
}

function init() {
    setInterval(update, 2000); // poll every 2 seconds; a single setTimeout would only fire once
}
The best way to do it is to have the client send a new request every 2 seconds and then display the new data.
Since you use HTTP, I assume you use JavaScript on the client side, so you need a timer in your JavaScript which fires every 2 seconds and then lets the JavaScript perform an AJAX call to the server to get the data, which it can then display.
Try a TimerTask or ThreadExecutor (look at the scheduled implementation).
Well, if you want to implement the above solution in a web application, I am not sure, but I think you cannot do it this way. HTTP is a request/response protocol, and when the server finishes sending one response it cannot initiate sending a new response on its own. In short: one request from the client - one response from the server.
I think you should use AJAX (asynchronous JavaScript requests) to ask the server for new data every 2 seconds and, if necessary, update the DOM (the website's HTML tag structure).
I have had good experience with WebSockets: a very fast, low-overhead, bi-directional protocol between server and browser. Not sure what your backend is, but Jetty supports it very well. Just have a timer process on the backend which iterates over all active WebSocket sessions and pushes updates. There are plenty of examples on the net of how to use WebSockets.
Things to keep in mind:
WebSockets are not supported by all browsers (Chrome and Safari seem to be the best supported)
WebSockets traffic doesn't traverse all proxies
Depending on your requirements it might or might not be acceptable.
There are some projects, like Atmosphere, which try to abstract the browser/server differences in WebSocket support with graceful fallback to Comet. It might be worth a look.
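A hedged sketch of the "iterate over active sessions and push" idea with the standard javax.websocket API (the endpoint path and payload are assumptions; broadcast would be called from your 2-second timer):

import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import javax.websocket.OnClose;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

@ServerEndpoint("/marker-updates")
public class MarkerEndpoint {

    private static final Set<Session> SESSIONS = ConcurrentHashMap.newKeySet();

    @OnOpen
    public void onOpen(Session session) { SESSIONS.add(session); }

    @OnClose
    public void onClose(Session session) { SESSIONS.remove(session); }

    // Called from a scheduled task every 2 seconds with the latest position as JSON.
    public static void broadcast(String json) {
        for (Session s : SESSIONS) {
            if (s.isOpen()) {
                s.getAsyncRemote().sendText(json);
            }
        }
    }
}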
//Initialize this somewhere
ScheduledExecutorService exe = Executors.newScheduledThreadPool(1);

exe.scheduleWithFixedDelay(new Runnable() {
    @Override
    public void run() {
        //The executor service tries to run 2 seconds after it last finished.
        //If your code takes 1 second to run, this will effectively run every 3 seconds.
    }
}, 0, //this is the initial delay
   2, //this is the consecutive delay
   TimeUnit.SECONDS);

exe.scheduleAtFixedRate(new Runnable() {
    @Override
    public void run() {
        //The executor service tries to run this every 2 seconds.
        //If your code takes 1 second to run, this will still run every 2 seconds.
    }
}, 0, //this is the initial delay
   2, //this is the period it tries to run in
   TimeUnit.SECONDS);
You need to send the data from the server to the client every 2 seconds, and you already know how to gather the data every 2 seconds on the server side.
If that is all you need, "AJAX streaming" will help you. This is on the client side; from the server side, every 2 seconds you need to write the data and flush it.
Searching for this term will give you a lot of examples. But remember that modern browsers use one approach, while IE browsers use an iframe approach to implement streaming.
In the first case, you need to make an XHR request, peek at the partial response, and process it.
Here are a few examples (I didn't have time to go through them completely):
http://ajaxpatterns.org/HTTP_Streaming
http://developers.cogentrts.com:8080/DH_ajax_1.asp
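For the server side of such streaming, a hedged sketch as a plain servlet that writes and flushes every 2 seconds (the URL mapping, loop duration and latestMarkerJson are assumptions; note that this ties up a container thread per connected client):

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class StreamingServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        resp.setContentType("application/json");
        PrintWriter out = resp.getWriter();
        try {
            for (int i = 0; i < 30; i++) {       // stream for about a minute, then let the client reconnect
                out.println(latestMarkerJson()); // one JSON line per update
                out.flush();                     // flush so the client can peek at the partial response
                Thread.sleep(2000);
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    private String latestMarkerJson() {          // hypothetical lookup of the latest position
        return "{\"lat\": 0.0, \"lng\": 0.0}";
    }
}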
You can use an AJAX call.
You can write JavaScript code that sends the request every 2 seconds, but for this your server should respond quickly to this type of request.
Well, I guess this will help you.
If your server gets more than 1000 users, then your application server will fail. I recommend you use non-blocking I/O, supported by the Jetty server, to host only the requests made for this purpose, and use your normal EE server for other applications.