I am making a client-server MMO-style game. So far I have the framework set up so that the server and clients interact with each other to provide state updates. The server maintains the game state, periodically calculates the next state, and every n milliseconds sends the new state out to all the clients. On the client side, the user can view and react to this new state; those actions are then sent back to the server to be processed and included in the next update.
The obvious problem is that it takes time for these updates to travel between server and clients. If a client acts to attack an enemy, by the time that action reaches the server, it's very possible the server has progressed the game state far enough that the enemy is no longer in the same spot and is out of range.
In order to combat this problem, I have been trying to come up with a good solution. I have looked at the following, and it has helped some, but not completely: Multi Player Game synchronization. I already came to the conclusion that instead of just transmitting the current state of the game, I can transmit other information such as direction (or a target position for AI movement) and speed. From this, I have part of what is needed to 'guess', on the client side, what the actual state is (as the server sees it) by progressing the game state n milliseconds into the future.
The problem is determining how far to progress the state, because that depends on the lag between server and client, which can vary considerably. Also, should I progress the game state to what it would be at the moment the client views it (i.e., account only for the time it took the update to reach the client), or should I progress it far enough that by the time the client's response arrives back at the server, it matches the server's state by then (account for both legs of the journey)?
Any suggestions?
To reiterate:
1) What is the best way to calculate the amount of time between send and receive?
2) Should I progress the client-side state far enough to account for the entire round trip, or just for the time it takes to get the data from the server to the client?
EDIT: What I have come up with so far
Since I already have many packets going back and forth between the clients and the server, I do not want to add to that traffic if I can avoid it. Currently, the clients send status update packets (UDP) to the server about every 150 milliseconds (and only if something has changed); these are received and processed by the server. At the moment, the server sends no response to these packets.
To start off, I will have the clients attempt to estimate their lag time, defaulting to something like 50 to 100 milliseconds. I am proposing that about every 2 seconds (per client), the server will immediately respond to one of these packets, echoing the packet index back in a special timing update packet. When the client receives the timing packet, it uses the index to determine how long ago that packet was sent, and takes that elapsed time as the new lag estimate.
This should keep the clients reasonably up to date on their lag without too much excess network traffic.
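For concreteness, here is a minimal Java sketch of that client-side bookkeeping (all the names here are made up). It measures the round trip from echoed packet indices and smooths it; whether to use the full round trip or half of it as the "lag time" is exactly question two above.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class LagEstimator {
    private final Map<Integer, Long> sendTimes = new ConcurrentHashMap<>();
    private volatile double rttMillis = 100.0; // default guess until measured

    void onStatusPacketSent(int packetIndex) {
        sendTimes.put(packetIndex, System.nanoTime());
    }

    void onTimingPacketReceived(int echoedIndex) {
        Long sentAt = sendTimes.remove(echoedIndex);
        if (sentAt == null) return; // unknown or stale index: ignore
        double measured = (System.nanoTime() - sentAt) / 1_000_000.0;
        // exponential moving average so a single slow packet doesn't swing the estimate
        rttMillis = 0.8 * rttMillis + 0.2 * measured;
    }

    double roundTripMillis() { return rttMillis; }
    double oneWayMillis()   { return rttMillis / 2.0; }
}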
Sound acceptable, or is there a better way? This still doesn't answer question two.
First off, just as an FYI: if you are worrying about delays of less than one second, you are getting outside the realm of realistic lag for an MMO. The way all of the big MMOs handle this is by essentially running two different "games" at the same time: there is an underlying game engine that handles all of the math, character states, and the application of numerical changes, and then there is the graphical client.
The first "game," the math and calculations, are a lot closer conceptually to a traditional console game (like the old MUDs). Think in terms of messages passing back and forth, with a very high degree of ACID isolation. These messages worry a lot more about accuracy, but you should assume that these may take 1-2 seconds (or more) to be processed and updated. This is the "rules lawyer" that is ensuring that hit points are being calculated correctly, etc.
The second "game" is the graphical client. This client is really focused on maintaining the illusion that things are happening much more quickly than the first game, but also synchronizing the events that are coming in with the graphical appearance. This graphical client often just flat makes things up that aren't critical. This client is responsible for the 30 fps+ graphics. That's why a lot of these graphical clients use tricks like starting the attack animation when the user presses the button, but not actually resolving the animation until the first game gets around to resolving the attack.
I know this is a little off from the literal interpretation of your question, but once you get outside two machines sitting next to each other on a network 100ms is really optimistic...
2) Should I progress the client-side state far enough to account for the entire round trip, or just for the time it takes to get the data from the server to the client?
Let's assume that the server sends the state at time T0, the client sees it at time T1, the player reacts at time T2, and the server receives their answer at time T3 and processes it instantly.
Here, the round-trip delay is (T1-T0) + (T3-T2). In an ideal world, T0=T1 and T2=T3,
and the only delay between observing the state and the processing of the player's action is the player's reaction time, i.e., T2-T1.
In the real world it's T3-T0.
So in order to simulate the ideal world you need to subtract the whole round-trip delay:
T2-T1 = (T3-T0) - ((T1-T0) + (T3-T2))
(Expanding the right-hand side gives T3-T0-T1+T0-T3+T2 = T2-T1, as required.)
This means that a player on a slower network sees a more advanced state than a player on a fast network.
However, this is no advantage for them, since it takes longer till their reaction gets processed.
Of course, it could get funny in case of two players sitting next to each other and using different speed networks.
But this is quite an improbable scenario, isn't it?
There's a problem with the whole procedure:
You're extrapolating into the future, and this may lead to nonsensical situations.
Some of them, like diving into walls, can easily be prevented, but those depending on the players' interaction cannot.1
Maybe you could turn your idea upside down:
Instead of forecasting, try evaluating the player's action at the time T3 - ((T1-T0) + (T3-T2)). If you determine that a character would have been hit this way, reduce its hit points accordingly.
This may be easier and more realistic than the original idea, or it may be worse, or not applicable at all. Just an idea.
1 Imagine two players running towards each other.
According to the extrapolation, they pass each other on the right side.
In fact, one of them changes direction, and in the end they pass each other on the left side.
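A minimal Java sketch of that rewind idea (all names here are invented, and the history buffer is an assumption layered on top of the answer): the server keeps a short position history per character and resolves the action against the snapshot nearest the rewound time.

import java.util.ArrayDeque;
import java.util.Deque;

class PositionHistory {
    record Snapshot(long timeMillis, double x, double y) {}

    private final Deque<Snapshot> history = new ArrayDeque<>();
    private static final long KEEP_MILLIS = 1_000; // retain about one second of history

    void record(long nowMillis, double x, double y) {
        history.addLast(new Snapshot(nowMillis, x, y));
        // drop snapshots too old to ever be rewound to
        while (!history.isEmpty() && nowMillis - history.peekFirst().timeMillis() > KEEP_MILLIS) {
            history.removeFirst();
        }
    }

    // Where was the character at the rewound time T3 - ((T1-T0) + (T3-T2))?
    Snapshot at(long rewoundTimeMillis) {
        Snapshot best = null;
        for (Snapshot s : history) {
            if (best == null
                    || Math.abs(s.timeMillis() - rewoundTimeMillis)
                       < Math.abs(best.timeMillis() - rewoundTimeMillis)) {
                best = s;
            }
        }
        return best; // null if no history has been recorded yet
    }
}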
One way to solve this kind of problem is to run the game simulation on both the client and the server.
So instead of simulating the world just on the server, do it on the client as well. Just send what the client did (for example "player hit monster") to the server. The server runs the same simulation and checks the events.
If they don't match (player cheating, lag), the server sends a veto to the client and the action isn't recorded as successful on the server. The other clients then never notice it (the server doesn't forward the action to them).
That should be a pretty efficient way to handle the lag, especially if you have a lot of PvM battles (instead of PvP): Since the monster is a simulation, it doesn't matter if there is a long lag between the client and the server.
That said: most networks are fast enough that the lag should be in the range of a few milliseconds. That means you "just" have to make the server fast enough to respond within, say, 100 ms, and the players won't notice.
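A minimal Java sketch of this veto scheme (the types and method names are invented for illustration; the assumption is that client and server run the same deterministic simulation):

interface Simulation {
    boolean agreesWith(GameEvent reported); // does the server's own simulation produce the same outcome?
    void apply(GameEvent event);
}

record GameEvent(long id, String description) {}

class EventValidator {
    private final Simulation serverSim;

    EventValidator(Simulation serverSim) { this.serverSim = serverSim; }

    // Returns true if the event is accepted; the caller then forwards it to the
    // other clients. On false, the caller sends a veto back to the reporting
    // client and forwards nothing.
    boolean onClientEvent(GameEvent reported) {
        if (serverSim.agreesWith(reported)) {
            serverSim.apply(reported);
            return true;
        }
        return false; // lag or cheating: as far as the server is concerned, it never happened
    }
}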
Related
I have a game in which users must answer questions within a 10-second interval. When the 10 seconds are up, the user is no longer allowed to answer the question if he/she has not already done so. At that point the server should also refuse the answer, because the server, too, knows the 10 seconds have elapsed.
For validity, and to make sure the client is not manipulated in any way, I have decided to put the timer on the server and simply display the timer to the client (displaying the timer is important).
The question is: how do I display the timer to the user and also make sure that the timer is valid on the client?
Socket.IO handles the interaction between the client and server while the user is actually playing; HTTP handles everything else the user does in the game (login, signup, account, payment, etc.).
My stack is Android with Java, a Node.js server, and Socket.IO on both. I have everything set up; the only open issue is the approach to take for synchronizing the timer between the client and the server.
I was thinking one approach might be to serve a question and set a variable on the socket like this (on the server):
var t = new Date();
t.setSeconds(t.getSeconds() + 12); // 2 seconds extra for network lag
socket.allowed_time = t;
Then send the question to the client and show a 10-second countdown to the user. Whenever the client responds, check whether the current time is greater than socket.allowed_time. This method seems unreliable given the network lag.
A game like 8 Ball Pool, which I play a lot, has real-time gameplay between 2 users with the clock synchronized between both, and I noticed that network lag affects that game a lot as well.
I don't think there is a way that will both 1) cope with network lag1, and 2) cope with clients pretending there is network lag.
If you don't care that a lagged client / user sometimes gets penalized, then you simply run a 10 second timer on the server side. If the client's reply is not received before the timer goes off, then it is rejected.
You can just start the user's on-screen timer when the question is received by the client. If there is substantial lag, the timer may not reach zero until it is (noticeably) too late to reply. You could compensate for this by estimating the average lag in both directions and adjusting the number of seconds on the clock.
1 - ... without "unfairly" penalizing the lagged client.
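The asker's stack is Node/Socket.IO, but the server-side logic is stack-agnostic; here is a minimal Java sketch of it (names invented), including the lag allowance from the snippet in the question:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class QuestionTimer {
    private static final long LIMIT_MS = 10_000; // the 10-second answer window
    private static final long GRACE_MS = 2_000;  // extra allowance for network lag
    private final Map<String, Long> deadlines = new ConcurrentHashMap<>();

    // Call when the question is sent to the client.
    void onQuestionSent(String sessionId) {
        deadlines.put(sessionId, System.currentTimeMillis() + LIMIT_MS + GRACE_MS);
    }

    // Call when an answer arrives; late or duplicate answers are rejected.
    boolean acceptAnswer(String sessionId) {
        Long deadline = deadlines.remove(sessionId);
        return deadline != null && System.currentTimeMillis() <= deadline;
    }
}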
I am developing an Android application communicating with a TCP Java server over a WLAN connection. The Android application is a game with sprites being moved around the screen. Whenever a sprite moves, the Android client sends its coordinates to the Java server, which then sends the data to the other clients (a maximum of 4 clients). The server handles each client on a separate thread, data updates are sent about every 20 ms, and each packet consists of about 1-10 bytes. I am on a 70 Mbit network (with about 15 Mbit effective on my wireless).
I am having problems with an unstable connection, experiencing latency of about 50-500 ms on every 10th-30th packet. I have set tcpNoDelay to true, which stopped the consistent 200 ms latency, although it still lags a lot. As I am quite new to both Android and networking, I don't know whether this is to be expected or not. I am also wondering whether UDP could be suitable for my program, as I am more interested in sending updates fast than in every packet arriving correctly.
I would appreciate any guidance as to how to avoid/work around this latency problem. General tips on how to implement such a client-server architecture would also be applauded.
On a wireless LAN you'll occasionally see dropped packets, which results in a packet retransmission after a delay. If you want to control the delay before retransmission you're almost certainly going to have to use UDP.
You definitely want to use UDP. For a game you don't care if the position of a sprite is incorrect for a short time. So UDP is ideal in this case.
Also, if you have any control over the server code, I would not use separate threads for clients. Threads are useful if you need to make calls to libraries that you don't have control over and that can block (such as because they touch a file or try to perform additional network communication). But they are expensive. They consume a lot of resources and as such they actually make things slower than they could be.
So for a network game server where latency and performance are absolutely critical, I would use just one thread to process a queue of stateful commands, and make sure you never perform an operation that blocks. Each command is processed in order, and its state is evaluated and updated (e.g., a laser blast intersected with another object). If a command requires something that blocks (like reading from a file), perform a non-blocking read and set that command's state accordingly, so that your command processor never blocks. The key is that the command processor can never, ever block. It runs in a loop, but you should call Thread.sleep(x) appropriately so as not to waste CPU.
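A minimal Java sketch of such a command processor (the Command interface is invented for illustration): one loop drains a queue, advances each command's state machine, and re-queues commands still waiting on asynchronous results, so the loop itself never blocks.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

class CommandProcessor implements Runnable {
    interface Command {
        void step();       // advance the command's state; must never block
        boolean isDone();  // finished, or still waiting on an async result?
    }

    private final BlockingQueue<Command> incoming = new LinkedBlockingQueue<>();

    void submit(Command c) { incoming.add(c); } // called from the network layer

    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            Command c = incoming.poll(); // non-blocking
            if (c == null) {
                try { Thread.sleep(1); } catch (InterruptedException e) { return; } // don't spin the CPU
                continue;
            }
            c.step();
            if (!c.isDone()) incoming.add(c); // re-queue until its async work completes
        }
    }
}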
As for the client side: when a client submits a command (like firing a laser), it generates a response object and inserts it into a Map with a sequence id as the key. It then sends the request with the sequence id, and when the server responds with that id, you just look up the response object in the Map and decode the response into it. This allows you to perform concurrent operations.
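A minimal Java sketch of that sequence-id bookkeeping (names invented): the response object is parked in a map under a fresh id, and the server's reply with the same id fills it in.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

class RequestTracker {
    static class PendingResponse {
        volatile byte[] payload; // filled in when the server's reply arrives
    }

    private final AtomicLong nextId = new AtomicLong();
    private final Map<Long, PendingResponse> pending = new ConcurrentHashMap<>();

    // Register a response object; send the returned id along with the request.
    long register(PendingResponse response) {
        long id = nextId.incrementAndGet();
        pending.put(id, response);
        return id;
    }

    // Called when the server replies with a sequence id.
    void onServerReply(long id, byte[] payload) {
        PendingResponse r = pending.remove(id);
        if (r != null) r.payload = payload; // decode into the parked response object
    }
}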
Hi, I've written a multi-player game in Java, and I was wondering what I need to learn and/or what I should use to make the game playable over a network or on the internet by multiple computers. I'm really kind of clueless as to where to start, so any advice would be helpful. Thanks.
Those other answers are both fairly high-level, which is fine, but you don't want high-level, you want low-level, as in "how do I make it actually send data, what does that mean, what do I send, etc." Here's what you do:
First, TCP or UDP? If you don't know what either of those things are, read up on them as I don't have space to give a good rundown on both here, but for your choice know the following:
TCP is good for turn based games or games where high latency is generally ok, since TCP guarantees packet delivery so it's possible that it could take some time for a dropped packet to be re-delivered. This is good for things like Chess, or anything else that takes turns.
UDP is good for games where you don't necessarily care about reliability in messages and would prefer that data just keeps sending and if you miss something, oh well. This is good for games that are real-time action based games, such as HALO:Reach or Call of Duty. In those, if you send an object's position, and the object never gets there, it's better to send a new position than to re-send an old position (which is now even older) so it's not important to guarantee reliability all the time. That said, you MUST have certain things be 100% reliable, so you'll still need certain things to guarantee delivery, such as object creation and object destruction. This means you need to implement your own semi-reliable, priority based protocol on top of UDP. This is difficult.
So, ask yourself what is important, learn how TCP and UDP work, and then make an intelligent choice.
That said, now you have to synchronize object state across the network. This means your objects need to serialize to something that can be represented in a byte stream and written to a socket. Writing to a socket is easy: if you can write to a file, you can write to a socket; it's really not hard. What's important is to make sure you can represent an object as a buffer. If your object has references/pointers to other objects, you can't just send those pointers, since they are different on the other clients, so you have to convert them to something common to all hosts. This means IDs. An object's ID must be unique across all hosts, so you need a way to coordinate between hosts such that no two hosts create different objects with the same ID. There are ways to handle hosts doing this, but we won't worry about that here (hint: use some sort of mapping between the host's ID and the network ID; bigger hint: don't do this if you don't need to).
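As a sketch of what "represent an object as a buffer" can look like in Java (the MissileState type and its fields are invented; a real game would also version and tag its messages): references are replaced by network IDs, and the primitive fields are written in a fixed order.

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

class MissileState {
    long networkId; // unique across all hosts (assumed to be assigned by the server)
    long targetId;  // the target's network ID, not a Java reference
    float x, y;

    byte[] toBytes() throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        out.writeLong(networkId);
        out.writeLong(targetId);
        out.writeFloat(x);
        out.writeFloat(y);
        return bos.toByteArray();
    }

    static MissileState fromBytes(byte[] data) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
        MissileState m = new MissileState();
        m.networkId = in.readLong();
        m.targetId = in.readLong();
        m.x = in.readFloat();
        m.y = in.readFloat();
        return m;
    }
}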
So now you can send data. Great, now what? Every time the game state changes, you must somehow send an update to the other machines. This is where the client-server architecture comes in, or peer-to-peer if you want. Client-server is easier to implement. Also, one host "acting" as the server is still client-server, and anyone who says differently is wrong.
So, the server's responsibility is to "own" all game state. Only the server can definitively say what state an object is in. If you want to move an object, you tell the server that you would like to move; the server then tells you that you should move the object, you don't just do it yourself (although some sort of client-side prediction is often useful). Then the server sends the updated object state out to all the other hosts.
So, you mentioned a turn-based game, right? Very simple:
You're going to resolve a full turn on the client whose turn it currently is. Once that client has done what they want to do, send the results of that turn to the server. The server then validates the client's moves (don't just trust the client; that's how cheating happens) and applies them to its object state.
Once the server is up to date, it sends messages to every other client with the new state of the world, and those clients apply those updates. This includes the client that just took their turn; that client should only update its world state when the server tells it to, since you want to ensure consistency with the rest of the hosts AND you want to prevent a host from cheating.
The server then sends out a message indicating whose turn it is. You could send this at the same time as the world state update in the previous step, that would be fine. Just be aware of clients attempting to take their turn out of order. That's why the server has authority over the world; if a client tries to cheat, the server can smack them down.
That's all you'll need to do for a turn-based game. Hint: Use TCP
Bigger hint: TCP implements something called Nagle's algorithm, which can combine your messages into a single packet. This means that if you send two separate messages with two separate calls to "send", it's possible that the other host will receive only one packet on a single call to "receive", but that packet will contain the contents of BOTH messages. Thus if you send two 100-byte packets with two calls to send, you may get one 200-byte packet on a single call to receive. This is normal, so you need to be able to deal with it somehow. One trick is to make every single packet the same size, and then just read that many bytes from the socket every time you check for input. Keep in mind that you can also get partial messages. For example, if two 100-byte messages are combined into a single 200-byte message and you read on the other end with a buffer size of 150 bytes, you'll have 150 bytes, containing the first message and part of the second. You'll have to make a second call to receive to get the rest of the second message, so KEEP TRACK OF HOW MUCH DATA YOU'VE RECEIVED so that you don't miss part of a message somewhere. This is why keeping your packets the same size is useful.
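Besides fixed-size packets, the other common fix is a length prefix: write the message size first, then the payload, so the reader knows exactly where each message ends no matter how the bytes were coalesced or split. A minimal Java sketch:

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

final class Framing {
    static void writeMessage(DataOutputStream out, byte[] payload) throws IOException {
        out.writeInt(payload.length); // 4-byte length prefix
        out.write(payload);
        out.flush();
    }

    static byte[] readMessage(DataInputStream in) throws IOException {
        int length = in.readInt();     // read the prefix first
        byte[] payload = new byte[length];
        in.readFully(payload);         // keeps reading until the whole message is in
        return payload;
    }
}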
There are a number of other useful tricks for reducing the size and frequency of your messages and for keeping track of games that are not turn-based and act in real time, but if you have a turn-based game, then the correct thing to do is probably use TCP and not worry about any of that other stuff. Here are some links to useful websites and articles that will give you more information on how game network programming is done:
Glenn Fiedler's site, some great info here.
1500 Archers, a great paper on how to implement a technique called deterministic lockstep, which is useful for many types of games.
Let me know if you want more details on any of this stuff or if you have more specific questions.
One possible architectural approach would be to have one instance of the game act as the host (e.g., the first one to start). It would coordinate the game and send information of the game state and turns to each of the other players.
When a player makes a move, it would send the move information to the host, which would update the state of the game (and check for validity of the move, etc.). It would then send the new game state to each of the players and also send (likely as a separate communication) notification of the next turn to the appropriate client and await its response.
In some sense, the host acts as the game server in this scenario, but may be simpler to use/play in that there is not a separate process that must be run in order to play the game.
If you want to add the multiplayer feature over the network, it may be useful to you to take a look at the Netty project to build the communication infrastructure.
But before you can do that, you need to make sure that your game has the right "architecture". You need two big modules: the Client and the Server.
The Server is responsible for all the game logic; in a sense, it is the game engine. The Client is responsible for querying the Server for the game state, displaying it to the player, getting the player's input, and sending commands to the Server.
One way to decouple your Client and Server code without the hassle involved in learning network programming is to have two different programs that run from the CLI. The sequence of execution can look like this:
The server is run to initialize the game state, the game state is saved in a file.
The client is run; it reads the game state, displays it somehow, and gets some input from the player. It saves that input to a file.
The server is run, it reads the commands from the client, changes the state of the game and updates the state file.
Rinse and repeat. This is basically what PBEM servers do.
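A minimal Java sketch of this file-based loop (the file names and the "state" format are made up; real PBEM servers use richer formats):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

class FileBasedTurn {
    static final Path STATE = Path.of("gamestate.txt");
    static final Path COMMANDS = Path.of("commands.txt");

    // Server side: read pending commands, update the state file.
    static void serverTurn() throws IOException {
        List<String> commands = Files.exists(COMMANDS) ? Files.readAllLines(COMMANDS) : List.of();
        String state = Files.exists(STATE) ? Files.readString(STATE) : "initial state";
        for (String cmd : commands) {
            state = state + "\napplied: " + cmd; // stand-in for the real game logic
        }
        Files.writeString(STATE, state);
        Files.deleteIfExists(COMMANDS);
    }

    // Client side: show the state, record the player's command.
    static void clientTurn(String playerInput) throws IOException {
        System.out.println(Files.readString(STATE));
        Files.writeString(COMMANDS, playerInput);
    }
}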
To help decouple the Client and the Server, it makes sense to define a "language" to represent changes in the state of the game and the commands to be executed by the server. It helps if the client caches the state and applies the changes as the server sends them.
Once your Client and Server code is completely decoupled, and the "language" is completely defined, you're ready to change the communication mechanism from being file-based to being socket-based.
I want to understand what event-driven IO is. I hear it is different from the traditional blocking request/response model. Do we have an example to explain this? And how does it contribute to increased performance?
Examples will be highly appreciated.
I'm guessing that since it's been 4 months, you've got your answers. Regardless, here goes...
Netty
http://www.jboss.org/netty
Mina
http://mina.apache.org/
C10K
http://www.kegel.com/c10k.html
To understand part of the problem that evented IO is trying to solve, take a look at the C10K link above. Scalability is one of the main benefits of evented IO.
A traditional web server handles a request and then returns a response (synchronous/blocking). Each request typically requires its own thread.
An event-driven web server handles a request, creates an event (asynchronous/non-blocking IO), and then returns the response. Multiple requests share a single thread/process.
Evented IO should be able to handle more requests per thread than a typical web server. You might not speed up your web application with evented IO, but it should handle large numbers of connections far more easily than a traditional web server, which means you need fewer machines to scale.
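To make the one-thread-many-connections idea concrete, here is a toy echo server using Java NIO (an illustration of evented IO, not a web server): a single thread multiplexes accepts and reads over a Selector instead of dedicating a thread per connection.

import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class EventLoopServer {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(9000));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer buf = ByteBuffer.allocate(1024);
        while (true) {
            selector.select(); // wait until at least one channel is ready
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) { // new connection: register it, no new thread
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) { // data ready: echo it back
                    SocketChannel client = (SocketChannel) key.channel();
                    buf.clear();
                    int n = client.read(buf);
                    if (n == -1) { client.close(); continue; }
                    buf.flip();
                    client.write(buf); // may write partially; fine for a toy example
                }
            }
        }
    }
}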
Though I would argue that an evented IO architecture will force you to develop your web application to handle smaller chunks of data, much like a Google Mail-type application that uses a lot of Ajax calls to poll the server for data and then makes small updates in the browser. This itself has many benefits that will help speed up AND improve scaling on your server.
Netty and Mina provide plenty of example code.
This is a very old question, but I assume this might help somebody else to understand event-driven programming:
The following analogy might help you understand event-driven I/O programming by drawing a parallel to the waiting line at a doctor's reception desk.
Blocking I/O works like this: you are standing in the queue, the receptionist asks the guy in front of you to fill in a form, and she waits until he finishes. You have to wait for your turn until he is done; this is blocking.
If a single guy takes 3 minutes to fill in the form, the 10th guy has to wait up to 30 minutes. To reduce that 10th guy's wait time, the solution would be to add more receptionists, which is costly. This is what happens in traditional web servers: if you request some user's info, subsequent requests by other users must wait until the current operation, fetching from the database, is completed. This increases the "time to response" for the 10th request, and it keeps growing for the nth user. To avoid this, traditional web servers create a thread (the equivalent of adding a receptionist) for every single request, i.e., they basically create a copy of the server for each request, which is costly in terms of CPU consumption, since every request needs an operating-system thread. To scale such an app, you have to throw a lot of computing power at it.
Event-driven: the other approach to improving the queue's response time is the event-driven one, where the guys in the queue are handed the form, asked to fill it in, and told to come back when done. Hence the receptionist can always take new requests. This is exactly what JavaScript has been doing since its inception. In the browser, JavaScript responds to user click events, scrolls, swipes, database fetches, and so on. This is possible in JavaScript because it treats functions as first-class objects: they can be passed as parameters to other functions (called callbacks) and can be called on completion of a particular task. This is exactly what node.js does on the server. You can find more info about event-driven programming and blocking I/O, in the context of Node, here.
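The same register-a-callback shape can be written in Java with CompletableFuture (a rough stand-in for JavaScript callbacks; the "database fetch" below is simulated):

import java.util.concurrent.CompletableFuture;

public class EventDrivenFetch {
    public static void main(String[] args) throws Exception {
        CompletableFuture
            .supplyAsync(EventDrivenFetch::fetchUserFromDatabase)    // runs on a pool thread
            .thenAccept(user -> System.out.println("got: " + user)); // callback on completion

        System.out.println("request submitted; this thread is free to do other work");
        Thread.sleep(500); // keep the demo process alive long enough to see the callback
    }

    static String fetchUserFromDatabase() {
        try { Thread.sleep(200); } catch (InterruptedException ignored) {}
        return "user#42"; // stand-in for a slow database fetch
    }
}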
I am trying to perform some computations on a server. For this, the client initially inputs some data, which I capture through JavaScript. I would then perhaps make an XMLHttpRequest to send this data to a server. Let's say the computation takes an hour, and the client leaves or switches off the machine.
In practice, I would probably use polling from the client side to determine whether the result is available. But is there some way I could implement this in the form of a callback? For instance, the next time the client logs in, I would just contact the client-side JavaScript to pass it the result... Any suggestions? I am thinking all this requires some kind of web server sitting on the client side, but I was wondering if there's a better approach.
Your best bet is to just poll when the user gets to the web page.
What I did in something similar was to gradually change my polling interval: I would start with several seconds between attempts, then gradually increase the gap. In your case, poll after 15 minutes, then add 5 minutes after each failed attempt; if the user closes the browser, you can just start the polling again.
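A minimal Java sketch of that growing-interval polling (resultIsReady() is a stand-in for the actual status check):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

class BackoffPoller {
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    private long delayMinutes = 15; // first check after 15 minutes

    void start() { schedule(); }

    private void schedule() {
        scheduler.schedule(this::poll, delayMinutes, TimeUnit.MINUTES);
    }

    private void poll() {
        if (resultIsReady()) {
            System.out.println("result is available");
            scheduler.shutdown();
        } else {
            delayMinutes += 5; // back off by 5 minutes after each failed check
            schedule();
        }
    }

    private boolean resultIsReady() {
        return false; // stand-in: replace with the real HTTP status check
    }
}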
If you want some callback, you could just send an email when it is finished, to let the user know.
Also, while you are doing the processing, try to give some feedback about how far along it is, how much longer it may take, anything to show that progress is being made and that the browser isn't locked up. If nothing else, show how long the processing has been running, to give the user some sense of progress.