I am trying to perform some computations on a server. For this, the client initially inputs some data, which I am capturing through JavaScript. I would then make an XMLHttpRequest to the server to send this data. Let's say the computation takes an hour and the client leaves or switches off their machine.
In practice, I would probably poll from the client side to determine whether the result is available. But is there some way I could implement this as a callback? For instance, the next time the client logs in, I would just contact the client-side JavaScript to pass the result... Any suggestions? I suspect this would require some kind of web server sitting on the client side, but I was wondering if there's a better approach.
Your best bet is to just poll when the user gets to the web page.
What I did in something similar was to gradually change my polling interval: I would start with several seconds, then gradually increase it. In your case, just poll after 15 minutes, then increase the interval by 5 minutes each time the result isn't ready yet; if the user closes the browser, you can simply restart the polling when they return.
If you want some callback, you could just send an email when it is finished, to let the user know.
Also, while you are doing the processing, try to give some feedback as to how far you have gone, how much longer it may be, anything to show that progress is being made, that the browser isn't locked up. If nothing else, show a time with how long the processing has been going on, to give the user some sense of progress.
My application uses Struts 1.x and runs on WAS.
All action classes are working fine except one: I click a button, an action (which is expected to take about an hour to complete) is called, and it starts executing. The issue is that the same action is then invoked again after a few minutes, without any button click or any change to the code. This happens every few minutes, n number of times...
If anyone has any idea about this please let me know.
A request that takes 1 hour to complete is not normal: you should redesign this functionality.
Briefly, you have this problem because the request takes too much time to complete. For a technical explanation of the cause of your problem see Why does the user agent resubmit a request after server does a TCP reset?
Solution: create a separate thread (or a pool of parallel threads, if possible) to handle the long-running computation and immediately send a response page saying "Request accepted". This page could also use JavaScript to periodically ask the server "is it completed?". You should also provide a mechanism to inquire about pending requests, so users who close the browser without waiting for the final "Yes, completed!" response can get the result whenever they want.
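To make that concrete, here is a minimal sketch in plain Java of the "accept now, compute later" idea; the ComputationService class and its method names are invented for illustration, and runLongComputation stands in for your hour-long work:

import java.util.UUID;
import java.util.concurrent.*;

// Sketch: hand the long computation to a worker pool, return a job id right away,
// and let a second action answer "is it done yet?" when the page polls.
public class ComputationService {
    private static final ExecutorService POOL = Executors.newFixedThreadPool(4);
    private static final ConcurrentMap<String, Future<String>> JOBS =
            new ConcurrentHashMap<String, Future<String>>();

    // Called from the action: returns immediately with a job id
    // that the "Request accepted" page can poll with.
    public static String submit(final String input) {
        String jobId = UUID.randomUUID().toString();
        JOBS.put(jobId, POOL.submit(new Callable<String>() {
            public String call() throws Exception {
                return runLongComputation(input);   // the ~1 hour job runs here
            }
        }));
        return jobId;
    }

    // Called from the polling action / the "pending requests" page.
    public static boolean isDone(String jobId) {
        Future<String> f = JOBS.get(jobId);
        return f != null && f.isDone();
    }

    public static String result(String jobId) throws Exception {
        return JOBS.get(jobId).get();   // only call once isDone() is true
    }

    private static String runLongComputation(String input) {
        // placeholder for the real hour-long computation
        return "result for " + input;
    }
}

The key point is that the HTTP request that triggers submit() returns in milliseconds, so the container never sees an hour-long request.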
In designing my GWT/GAE app, it has become evident to me that my client-side (GWT) will be generating three types of requests:
Synchronous - "answer me right now! I'm important and require a real-time response!!!"
Asynchronous - "answer me when you can; I need to know the answer at some point but it's really not all that ugent."
Command - "I don't need an answer. This isn't really a request, it's just a command to do something or process something on the server-side."
My game plan is to implement my GWT code so that I can specify, for each specific server-side request (note: I've decided to go with RequestFactory over traditional GWT-RPC for reasons outside the scope of this question), which type of request it is:
SynchronousRequest - Synchronous (from above); sends a command and eagerly awaits a response that it then uses to update the client's state somehow
AsynchronousRequest - Asynchronous (from above); makes an initial request and, either through polling or the GAE Channel API, is somehow notified when the response is finally received
CommandRequest - Command (from above); makes a server-side request and does not wait for a response (even if the server fails to, or refuses to, oblige the command)
I guess my intention with SynchronousRequest is not to produce a totally blocking request, however it may block the user's ability to interact with a specific Widget or portion of the screen.
The added kicker here is this: GAE strongly enforces a timeout on all of its frontend instances (60 seconds). Backend instances have much more relaxed constraints for timeouts, threading, etc. So it is obvious to me that AsynchronousRequests and CommandRequests should be routed to backend instances so that GAE timeouts do not become an issue with them.
However, if GAE is behaving badly, or if we're hitting peak traffic, or if my code just plain sucks, I have to account for the scenario where a SynchronousRequest is made (which would have to go through a timeout-regulated frontend instance) and will time out unless my GAE server code does something fancy. I know there is a method in the GAE API that I can call to see how many milliseconds a request has left before it times out; although its name escapes me right now, it's what this "fancy" code would be based on. Let's call it public static long GAE.timeLeftOnRequestInMillis() for the sake of this question.
In this scenario, I'd like to detect that a SynchronousRequest is about to timeout, and somehow dynamically convert it into an AsynchronousRequest so that it doesn't time out. Perhaps this means sending an AboutToTimeoutResponse back to the client, and force the client to decide about whether to resend as an AsynchronousRequest or just fail. Or perhaps we can just transform the SynchronousRequest into an AsynchronousRequest and push it to a queue where a backend instance will consume it, process it and return a response. I don't have any preferences when it comes to implementation, so long as the request doesn't fail or timeout because the server couldn't handle it fast enough (because of GAE-imposed regulations).
So then, here is what I'm actually asking here:
How can I wrap a RequestFactory call inside SynchronousRequest, AsynchronousRequest and CommandRequest in such a way that the RequestFactory call behaves the way each of them is intended? In other words, so that the call either partially-blocks (synchronous), can be notified/updated at some point down the road (asynchronous), or can just fire-and-forget (command)?
How can I implement my requirement to let a SynchronousRequest bypass GAE's 60-second timeout and still get processed without failing?
Please note: timeout issues are easily circumvented by re-routing things to backend instances, but backends don't/can't scale. I need scalability here as well (that's primarily why I'm on GAE in the first place!) - so I need a solution that deals with scalable frontend instances and their timeouts. Thanks in advance!
If the computation that you want GAE to do is going to take longer than 60 seconds, then don't wait for the results to be computed before sending a response. According to your problem definition, there is no way to get around this. Instead, clients should submit work orders, and wait for a notification from the server when the results are ready. Requests would consist of work orders, which might look something like this:
class ComputeDigitsOfPiWorkOrder {
    // parameters for the computation
    int numberOfDigitsToCompute;

    // Used by the GAE app to contact the requester when results are ready.
    ClientId clientId;
}
This way, your GAE app can respond as soon as the work order is saved (e.g. in Task Queue), and doesn't have to wait until it actually finishes calculating a billion digits of pi before responding. Your GWT client then waits for the result using the Channel API.
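Roughly, on the Java runtime, accepting a work order and setting up the notification channel might look like the sketch below; the SubmitWorkOrder class and the /worker/computePi URL are assumptions for illustration, not part of your app:

import com.google.appengine.api.channel.ChannelService;
import com.google.appengine.api.channel.ChannelServiceFactory;
import com.google.appengine.api.taskqueue.Queue;
import com.google.appengine.api.taskqueue.QueueFactory;
import com.google.appengine.api.taskqueue.TaskOptions;

// Frontend handler: accept the work order and return immediately.
public class SubmitWorkOrder {
    public String submit(String clientId, int numberOfDigitsToCompute) {
        // Save the work order as a task; this returns as soon as it is enqueued.
        Queue queue = QueueFactory.getDefaultQueue();
        queue.add(TaskOptions.Builder
                .withUrl("/worker/computePi")                  // assumed worker URL
                .param("clientId", clientId)
                .param("digits", Integer.toString(numberOfDigitsToCompute)));

        // Create a channel so the worker can push the result back later.
        ChannelService channelService = ChannelServiceFactory.getChannelService();
        return channelService.createChannel(clientId);         // token for the GWT client
    }
}

When the computation finishes, the worker task would call channelService.sendMessage(new ChannelMessage(clientId, resultAsJson)), and the GWT client's channel listener picks the result up.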
In order to give some work orders higher priority, you can use multiple task queues. If you want Task Queue work to scale automatically, you'll want to use push queues. Implementing priority with push queues is a little tricky, but you can configure high-priority queues to have a faster feed rate.
You could replace the Channel API with some other notification solution, but the Channel API is probably the most straightforward.
I want to understand what event-driven IO is. I hear it is different from the traditional blocking request/response model. Do we have any example to explain this? And how does it contribute to an increase in performance?
Examples will be highly appreciated.
I'm guessing since it's been 4 months you've got your answers. Regardless here goes...
Netty
http://www.jboss.org/netty
Mina
http://mina.apache.org/
C10K
http://www.kegel.com/c10k.html
To understand part of the problem that evented IO is trying to solve, take a look at the C10K link above. Scalability is one of the main benefits of evented IO.
A traditional web server will handle a request and then return a response (synchronous/blocking). Each request typically requires its own thread.
An event-driven web server will handle a request, create an event (asynchronous/non-blocking IO), and then return the response. Multiple requests share a single thread/process.
Evented IO should be able to handle more requests per thread than a typical web server. You might not speed up your web application with evented IO, but it should handle large numbers of connections a lot more easily than a traditional web server. This means requiring fewer machines for scaling.
Though I would argue that an evented IO architecture will force you to develop your web application to handle smaller chunks of data, much like a Google Mail-style application that uses a lot of Ajax calls to poll for data on the server and then makes small updates in the browser. This itself has many benefits that will help speed up AND improve scaling on your server.
Netty and Mina provide plenty of example code.
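If you want to see the one-thread-many-connections idea without a framework, here is a bare-bones echo server sketch using plain java.nio (no error handling, no real protocol; it is only meant to show the event loop):

import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.*;
import java.util.Iterator;

public class TinyEventLoop {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(8080));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer buf = ByteBuffer.allocate(4096);
        while (true) {                       // one thread services every connection
            selector.select();               // block until some channel is ready
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {    // new connection: register it, don't spawn a thread
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    buf.clear();
                    if (client.read(buf) == -1) {
                        client.close();      // peer went away
                    } else {
                        buf.flip();
                        client.write(buf);   // echo back; a real server would parse and respond
                    }
                }
            }
        }
    }
}

Every accepted connection is just another key registered with the selector, so the thread count stays constant no matter how many clients connect.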
This is a very old question, but I assume this might help somebody else understand event-driven programming:
The following analogy might help you understand event-driven I/O programming by drawing a parallel to the waiting line at a doctor's reception desk.
Blocking I/O is like standing in that queue: the receptionist asks the guy in front of you to fill in a form and waits until he finishes. You have to wait for your turn until he is done with his form; this is blocking.
If a single guy takes 3 minutes to fill in the form, the 10th guy has to wait about 30 minutes. To reduce that 10th guy's wait time, the solution would be to add more receptionists, which is costly. This is what happens in traditional web servers: if you request some user info, subsequent requests by other users have to wait until the current operation (fetching from the database) is completed. This increases the "time to response" for the 10th request, and it keeps growing for the nth user. To avoid this, traditional web servers create a thread (equivalent to adding another receptionist) for every single request, i.e., basically a copy of the server for each request, which is costly in terms of CPU consumption since every request needs an operating system thread. To scale up the app, you have to throw lots of computational power at it.
Event driven: The other approach to improving the queue's response time is the event-driven approach, where the people in the queue are handed the form, asked to fill it in, and told to come back on completion. Hence the receptionist can always take new requests. This is exactly what JavaScript has been doing since its inception. In the browser, JavaScript responds to user click events, scrolls, swipes, database fetches and so on. This is possible in JavaScript inherently, because JavaScript treats functions as first-class objects that can be passed as parameters to other functions (called callbacks) and called on completion of a particular task. This is exactly what node.js does on the server. You can find more info about event-driven programming and blocking I/O, in the context of node, here
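The analogy above is told in JavaScript terms, but the same "hand over a callback and move on" idea can be sketched in Java with CompletableFuture (the method names below are invented for the analogy):

import java.util.concurrent.CompletableFuture;

public class ReceptionDesk {
    // Hand the "form" out and register a callback for when it's done,
    // instead of blocking the receptionist (the current thread).
    static CompletableFuture<String> fetchUserInfo(String userId) {
        return CompletableFuture.supplyAsync(() -> {
            // stands in for the slow database fetch
            return "user record for " + userId;
        });
    }

    public static void main(String[] args) throws Exception {
        fetchUserInfo("42").thenAccept(record ->
                System.out.println("callback fired: " + record));   // runs on completion
        System.out.println("receptionist is already serving the next request");
        Thread.sleep(500);   // keep the demo JVM alive long enough for the callback
    }
}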
I am making a client-server MMO-style game. So far I have the framework set up so that the server and clients interact with each other to provide state updates. The server maintains the game state, periodically calculates the next state, and every once in a while (every n milliseconds) sends out the new state to all the clients. This new state can be viewed and reacted to on the client side by the user. These actions are then sent back to the server to be processed and sent out with the next update.
The obvious problem is that it takes time for these updates to travel between server and clients. If a client acts to attack an enemy, by the time that update has gotten back to the server, it's very possible the server has progressed the game state far enough that the enemy is no longer in the same spot, and out of range.
In order to combat this problem, I have been trying to come up with a good solution. I have looked at the following, and it has helped some, but not completely: Multi Player Game synchronization. I have already come to the conclusion that instead of just transmitting the current state of the game, I can transmit other information such as direction (or target position for AI movement) and speed. From this, I have part of what is needed to 'guess', on the client side, what the actual state is (as the server sees it) by progressing the game state n milliseconds into the future.
The problem is determining the amount of time to progress the state, because it will depend on the lag time between server and client, which could vary considerably. Also, should I progress the game state to what it would currently be when the client views it (i.e. only account for the time it took the update to get to the client) or should I progress it far enough so that when its response is sent back to the server, it will be the correct state by then (account for both to and from journey).
Any suggestions?
To reiterate:
1) What is the best way to calculate the amount of time between send and receive?
2) Should I progress the client side state far enough to count for the entire round trip, or just the time it takes to get the data from the server to the client?
EDIT: What I have come up with so far
Since I already have many packets going back and forth between the clients and the server, I do not want to add to that traffic if I don't have to. Currently, the clients send status update packets (UDP) to the server about every 150 milliseconds (only if something has changed), and these are received and processed by the server. Currently, the server sends no response to these packets.
To start off, I will have the clients attempt to estimate their lag time, defaulting to something like 50 to 100 milliseconds. I am proposing that about every 2 seconds (per client) the server will immediately respond to one of these packets, sending back the packet index in a special timing update packet. If the client receives the timing packet, it will use the index to look up how long ago that packet was sent, and use that elapsed time as the new lag estimate.
This should keep the clients reasonably up to date on their lag, without too much excess network traffic.
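A sketch of that client-side bookkeeping (the class and field names are made up):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class LagEstimator {
    private final Map<Integer, Long> sendTimes = new ConcurrentHashMap<Integer, Long>();
    private volatile long lagMillis = 75;   // default guess until the first timing packet

    // Call when a status-update packet goes out (~every 150 ms).
    public void recordSend(int packetIndex) {
        sendTimes.put(packetIndex, System.currentTimeMillis());
    }

    // Call when the server echoes an index back in a timing packet (~every 2 s).
    public void onTimingPacket(int echoedIndex) {
        Long sentAt = sendTimes.remove(echoedIndex);
        if (sentAt != null) {
            lagMillis = System.currentTimeMillis() - sentAt;   // full round trip
        }
        // a real client would also prune entries that never got echoed back
    }

    public long getLagMillis() {
        return lagMillis;
    }
}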
Sound acceptable, or is there a better way? This still doesn't answer question two.
First off, just as an FYI, if you are worrying about delays of less than 1 second you are starting to get out of the realm of realistic lag for an MMO. The way all of the big MMOs handle this is by basically having two different "games" going at the same time - there is an underlying game engine which is handling all of the math, character states, applying the numerical changes, and then there is the graphical client.
The first "game," the math and calculations, are a lot closer conceptually to a traditional console game (like the old MUDs). Think in terms of messages passing back and forth, with a very high degree of ACID isolation. These messages worry a lot more about accuracy, but you should assume that these may take 1-2 seconds (or more) to be processed and updated. This is the "rules lawyer" that is ensuring that hit points are being calculated correctly, etc.
The second "game" is the graphical client. This client is really focused on maintaining the illusion that things are happening much more quickly than the first game, but also synchronizing the events that are coming in with the graphical appearance. This graphical client often just flat makes things up that aren't critical. This client is responsible for the 30 fps+ graphics. That's why a lot of these graphical clients use tricks like starting the attack animation when the user presses the button, but not actually resolving the animation until the first game gets around to resolving the attack.
I know this is a little off from the literal interpretation of your question, but once you get outside two machines sitting next to each other on a network 100ms is really optimistic...
2) Should I progress the client side state far enough to count for the entire round trip, or just the time it takes to get the data from the server to the client?
Let's assume that the server sends the state at time T0, the client sees it in time T1, the player reacts in time T2, and the server obtains their answer in time T3, and processes it instantly.
Here, the round trip delay is T1-T0 + T3-T2. In an ideal world, T0=T1 and T2=T3,
and the only delay between the observing time and the processing of the player's action is the player's reaction time, i.e., T2-T1.
In the real world it's T3-T0.
So in order to simulate the ideal world you need to subtract the whole round trip delay:
T2-T1 = T3-T0 - (T1-T0 + T3-T2)
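For example, if the server sends at T0 = 0 ms, the client sees it at T1 = 80 ms, the player reacts at T2 = 380 ms and the server receives the answer at T3 = 460 ms, then the round trip delay is (80-0) + (460-380) = 160 ms, the real-world delay is 460 ms, and subtracting the round trip leaves 460 - 160 = 300 ms, which is exactly the reaction time T2-T1 = 380 - 80 ms.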
This means that a player on a slower network sees a more advanced (further extrapolated) state than a player on a fast network.
However, this is no advantage for them, since it takes longer until their reaction gets processed.
Of course, it could get funny in case of two players sitting next to each other and using different speed networks.
But this is quite an improbable scenario, isn't it?
There's a problem with the whole procedure:
You're extrapolating into the future, and this may lead to nonsensical situations.
Some of them, like diving into walls, can easily be prevented, but those depending on player interaction cannot.1
Maybe you could turn your idea upside down:
Instead of forecasting, try to evaluate the player's action at the time T3 - (T1-T0 + T3-T2). If you determine that a character would have been hit this way, reduce its hit points accordingly.
This may be easier and more realistic than the original idea, or it may be worse, or not applicable at all. Just an idea.
1 Imagine two players running towards each other.
According to the extrapolation, they pass each other on the right side.
In fact, one of them changes direction, and in the end they pass each other on the left side.
One way to solve this kind of problem is running the game simulation on the client and the server.
So instead of simulating the world just on the server, do it on the client as well. Just send what the client did (for example "player hit monster") to the server. The server runs the same simulation and checks the events.
If they don't match (player cheating, lags), it sends a veto to the client and the action isn't recorded as successful on the server. This means all the other clients don't notice it (the server doesn't forward the action to the other clients).
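A crude sketch of such a server-side check (all classes and constants here are invented placeholders for your own game objects):

// Accept a client-reported hit only if the server's own simulation agrees
// that the target was in range at the claimed time; otherwise veto it.
public class HitValidator {

    static class Entity {                      // stand-in for a player or monster
        double x, y;
        double distanceTo(Entity other) {
            return Math.hypot(x - other.x, y - other.y);
        }
    }

    static final double ATTACK_RANGE = 2.0;    // invented game constant

    // attacker/target are the server's authoritative entities for the claimed tick.
    boolean validateHit(Entity attacker, Entity target) {
        boolean plausible = target != null
                && attacker.distanceTo(target) <= ATTACK_RANGE;
        if (!plausible) {
            sendVeto();                        // tell the client its action didn't count
        }
        return plausible;                      // only forward to other clients when true
    }

    void sendVeto() {
        // hook into your networking layer here
    }
}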
That should be a pretty efficient way to handle the lag, especially if you have a lot of PvM battles (instead of PvP): Since the monster is a simulation, it doesn't matter if there is a long lag between the client and the server.
That said: most networks are so fast that the lag should be in the range of a few milliseconds. That means you "just" have to make the server fast enough so it can respond within, say, <100ms, and the players won't notice.
How do I measure how long a client has to wait for a request?
On the server side it is easy, through a filter for example.
But if we want to take into account the total time, including latency and data transfer, it gets difficult.
Is it possible to access the underlying socket to see when the request is finished?
Or is it necessary to do some JavaScript tricks? Maybe through clock synchronisation between browser and server? Are there any premade scripts for this task?
You could wrap the HttpServletResponse object and the OutputStream returned by the HttpServletResponse. When output starts writing you could set a startDate, and when it stops (or when it's flushed etc) you can set a stopDate.
This can be used to calculate the length of time it took to stream all the data back to the client.
We're using it in our application and the numbers look reasonable.
edit: you can set the start date in a ServletFilter to get the length of time the client waited. I gave you the length of time it took to write output to the client.
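A rough sketch of that wrapper, assuming the pre-3.1 javax.servlet API (later versions also require isReady and setWriteListener overrides); the class and method names are my own:

import java.io.IOException;
import javax.servlet.ServletOutputStream;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpServletResponseWrapper;

// Records when the first byte is written and when writing stops,
// which approximates the time spent streaming the response to the client.
public class TimingResponseWrapper extends HttpServletResponseWrapper {
    private long firstWrite = -1;
    private long lastWrite = -1;

    public TimingResponseWrapper(HttpServletResponse response) {
        super(response);
    }

    @Override
    public ServletOutputStream getOutputStream() throws IOException {
        final ServletOutputStream out = super.getOutputStream();
        return new ServletOutputStream() {
            @Override
            public void write(int b) throws IOException {
                if (firstWrite < 0) firstWrite = System.currentTimeMillis();
                out.write(b);
                lastWrite = System.currentTimeMillis();
            }
            @Override
            public void close() throws IOException {
                out.close();
                lastWrite = System.currentTimeMillis();
            }
        };
    }

    public long streamingMillis() {
        return (firstWrite < 0) ? 0 : lastWrite - firstWrite;
    }
}

You would wrap the response in a javax.servlet.Filter, call chain.doFilter(request, wrapper), and log wrapper.streamingMillis() afterwards; setting the start date in the filter instead gives you the full server-side wait, as the edit above notes.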
There's no way you can know how long the client had to wait purely from the server side. You'll need some JavaScript.
You don't want to synchronize the client and server clocks, that's overkill. Just measure the time between when the client makes the request, and when it finishes displaying its response.
If the client is AJAX, this can be pretty easy: call new Date().getTime() to get the time in milliseconds when the request is made, and compare it to the time after the result is parsed. Then send this timing info to the server in the background.
For a non-AJAX application, when the user clicks on a request, use JavaScript to send the current timestamp (from the client's point of view) to the server along with the query, and pass that same timestamp back through to the client when the resulting page is reloaded. In that page's onLoad handler, measure the total elapsed time, and then send it back to the server - either using an XmlHttpRequest or tacking on an extra argument to the next request made to the server.
If you want to measure it from your browser to simulate any client request you can watch the net tab in firebug to see how long it takes each piece of the page to download and the download order.
Check out Jiffy-web, developed by Netflix to give them a more accurate view of the total page-to-page rendering time.
I had the same problem, but this JavaOne paper really helped me to solve it. I would suggest you go through it; it basically uses JavaScript to calculate the time.
You could set a 0 byte socket send buffer (and I don't exactly recommend this) so that when your blocking call to HttpResponse.send() returns, you have a closer idea as to when the last byte left, though travel time is not included. Ekk--I feel queasy for even mentioning it. You can do this in Tomcat with connector-specific settings. (Tomcat 6 Connector documentation)
Or you could come up with some sort of javascript time stamp approach, but I would not expect to set the client clock. Multiple calls to the web server would have to be made.
timestamp query
the real request
reporting the data
And this approach would cover latency, although you still have some jitter variance.
Hmmm...interesting problem you have there. :)