MulticastSocket for multiplayer game - Java

I am studying the structure for my server-client communications in my multiplayer game.
I came to the conclusion that UDP is the best choice because of its "fire and forget" nature: a lost packet will not block the application.
I will also use TCP to send packets that need reliability, for example during login procedures and exchanges of information such as server changes, map changes, updates, etc. It will also carry an IRC-based chat (all the commands are actually IRC-style custom messages).
I was wondering what the best way is to send the interaction messages (moves, spells, objects, actions, etc.) between server and clients.
Reading some documentation, I came across MulticastSocket.
My question is:
Is it better to send a continuous flow of information to all the clients, starting a thread for each player (as I do for TCP communications) where each DatagramSocket listens to a queue and sends each new message to its client? This would mean that all the maps and all the movements (supposing there can be 50 players spread across the maps) are sent to all the players, and each packet has to be larger to contain all that information.
Or is it better to use a thread for each map, active only while players are inside that specific map, using multicast to send a message only to the players inside that map, each of whom listens with a MulticastSocket?
I have read about problems with firewalls and routers when using multicast, but I can't figure out what those problems might be (beyond the ones that affect normal UDP).
The application should be usable by anyone with minimal configuration problems.
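For reference, the per-map multicast idea would look roughly like this on the client side. This is a minimal sketch; the group address and port are placeholders, and each map would get its own group:

```java
import java.net.DatagramPacket;
import java.net.InetAddress;
import java.net.MulticastSocket;

public class MapChannel {
    public static void main(String[] args) throws Exception {
        // Placeholder group address/port; each map would use its own group.
        InetAddress group = InetAddress.getByName("230.0.0.1");
        MulticastSocket socket = new MulticastSocket(4446);
        socket.joinGroup(group); // join on entering the map

        byte[] buf = new byte[512];
        DatagramPacket packet = new DatagramPacket(buf, buf.length);
        socket.receive(packet);  // blocks until a message for this map arrives
        System.out.println(new String(packet.getData(), 0, packet.getLength()));

        socket.leaveGroup(group); // leave on changing map
        socket.close();
    }
}
```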

Looking at your scenario above, you first need to decide whether your application absolutely needs TCP, because TCP requires one thread per connection, no exceptions (unless you use NIO).
Now to target the UDP section of the program, you have two basic choices:
a) You spawn one thread for receiving datagram packets for all players.
In this case, all players send their datagram packets to a single receiver, which then decides what to do with the data. This data may be sent to various queues for other threads to process. Data can be sent back to all players using a single thread or multiple threads (one per player). A minimal sketch of this design follows the cons list below.
PROS:
Low resource usage
Low program (synchronization) overhead.
CONS:
Possible network slowness (due to masses of packets going towards the same socket)
Higher chance of packet drop (again due to masses of packets going to the same socket)
Serial processing
Disconnect events are messy and hard to deal with
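A minimal sketch of option (a); the port is made up, and the hand-off queue stands in for whatever processing pipeline the server uses:

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class SingleReceiver {
    public static void main(String[] args) throws Exception {
        BlockingQueue<DatagramPacket> inbound = new LinkedBlockingQueue<>();
        DatagramSocket socket = new DatagramSocket(9999); // one port for all players

        Thread receiver = new Thread(() -> {
            try {
                while (true) {
                    byte[] buf = new byte[512];
                    DatagramPacket packet = new DatagramPacket(buf, buf.length);
                    socket.receive(packet); // every player sends here
                    inbound.put(packet);    // hand off for processing
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        });
        receiver.start();

        // Serial processing: drain the queue and decide what to do with each packet.
        while (true) {
            DatagramPacket packet = inbound.take();
            // dispatch based on packet.getSocketAddress() and the payload ...
        }
    }
}
```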
b) You spawn one thread per player and listen on a different port per player.
In this case, every player gets their own handler thread, which may process the data directly or send it to a central processing queue. By doing this, data can be processed in parallel, allowing for faster processing at the price of higher resource usage. Synchronization will also require special attention: atomics and re-entrant read/write locks may be needed. Writing back out to the socket should generally occur on another per-player thread. A sketch of this variant follows the cons list below.
PROS:
Parallel Processing
Modular (have all the handling code per player in one thread, start thread on player join)
Disconnects are easier to handle and don't cause problems with other players.
Fast network response, concurrent packet receiving.
CONS:
High resource usage (a lot more objects)
High synchronization overhead
High thread count (the thread-to-player ratio may be as high as 2-4x)
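And a minimal sketch of option (b); the class name and one-port-per-player scheme are illustrative only:

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.SocketException;

// Hypothetical per-player handler: each player gets a dedicated port and thread.
public class PlayerHandler extends Thread {
    private final DatagramSocket socket;

    public PlayerHandler(int port) throws SocketException {
        this.socket = new DatagramSocket(port); // one port per player
    }

    @Override
    public void run() {
        byte[] buf = new byte[512];
        try {
            while (!isInterrupted()) {
                DatagramPacket packet = new DatagramPacket(buf, buf.length);
                socket.receive(packet);
                // process directly or hand off to a central queue; shared game
                // state needs the locks/atomics mentioned above
            }
        } catch (Exception e) {
            // an error here affects only this player's handler
        } finally {
            socket.close();
        }
    }
}
```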
In either case, if you use TCP you will need at least one thread per player. The question is whether you are willing to spend a lot more resources for a smoother, swifter response from the server.

Related

Handling multiple constant socket connections simultaneously

I want to write a chat application in Java which can handle many users simultaneously. I have read about sockets and thread pools to limit the thread count, but I can't imagine how to handle e.g. 100 socket connections at the same time without creating 100 new threads. The idea is that a client connects at the beginning and its connection stays open until it leaves the chat. A client can send data to the server as well as receive other users' messages.
Reading from a socket is a blocking operation, so would I need to check all users' sockets in a loop, with some timeout, to see whether new data is available on a particular connection? My first idea was to create e.g. 3 threads for handling input from all connected users and 3 threads for outgoing communication from server to clients, but how can I achieve that? Is there any async API for sockets in Java where I can define thread pools for in/out communication?
Make a Client class that extends Thread. Write all the methods, and in the void run() method write the code you want executed once the client connection is made.
On the server side, listen for new connections. Accept a new connection, get the information about it, pass that to the constructor to create a new Client object, add the object to an ArrayList to keep track of all ongoing connections, and call its start() method. All the Client objects sit in the ArrayList, and they keep running at the same time.
I made such a chat application about a year ago. And do not forget to close the connection once the client disengages, or else all the objects pile up and slow down the application. I learnt that the hard way.
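A minimal sketch of that pattern; the class names and port are made up, and a synchronized list stands in for the plain ArrayList since several threads may touch it:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

class Client extends Thread {
    private final Socket socket;
    Client(Socket socket) { this.socket = socket; }

    @Override
    public void run() {
        try (BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                // broadcast the line to the other clients ...
            }
        } catch (IOException ignored) {
        } finally {
            try { socket.close(); } catch (IOException ignored) {} // close on disconnect!
        }
    }
}

class Server {
    public static void main(String[] args) throws IOException {
        List<Client> clients = Collections.synchronizedList(new ArrayList<>());
        try (ServerSocket server = new ServerSocket(5222)) {
            while (true) {
                Client client = new Client(server.accept()); // one accepted connection
                clients.add(client); // keep track of all ongoing connections
                client.start();
            }
        }
    }
}
```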
Use Netty, as it provides an NIO (non-blocking I/O) framework, so you do not need one thread per connection. It is a little bit (or a lot...) more complicated to write a server using non-blocking I/O, but there are performance gains in not requiring one thread per connection.
However, 100 threads is not that many, so you could still build your server using standard I/O and one thread per connection; it just depends on how far you need to scale.
For a server setup using Netty, you create a channel to which new connections are assigned. The channel has an ordered series of handlers which process incoming (and outgoing) messages from a connection/client. The handlers themselves all need to be asynchronous, such that when a handler needs to return a message to the client, it writes it asynchronously (non-blockingly) to the channel and receives a future back, to which it can attach actions for when the message is actually written.
There is a bit of a learning curve, but it is not that steep, and the overall design of your application will be much better if built the Netty way rather than with standard blocking I/O.
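Roughly, a minimal Netty 4 server along those lines; the port is arbitrary, and the echo handler is a placeholder for your real codec and chat handlers:

```java
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;

public class NettyChatServer {
    public static void main(String[] args) throws InterruptedException {
        EventLoopGroup boss = new NioEventLoopGroup(1);   // accepts connections
        EventLoopGroup workers = new NioEventLoopGroup(); // handles I/O for all channels
        try {
            ServerBootstrap b = new ServerBootstrap();
            b.group(boss, workers)
             .channel(NioServerSocketChannel.class)
             .childHandler(new ChannelInitializer<SocketChannel>() {
                 @Override
                 protected void initChannel(SocketChannel ch) {
                     ch.pipeline().addLast(new ChannelInboundHandlerAdapter() {
                         @Override
                         public void channelRead(ChannelHandlerContext ctx, Object msg) {
                             // Writes are asynchronous and return a future.
                             ctx.writeAndFlush(msg)
                                .addListener(ChannelFutureListener.CLOSE_ON_FAILURE);
                         }
                     });
                 }
             });
            b.bind(8080).sync().channel().closeFuture().sync();
        } finally {
            boss.shutdownGracefully();
            workers.shutdownGracefully();
        }
    }
}
```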

Keeping Java sockets open - how to check if new data is available?

I have a simple client-server application using sockets for communication. One possibility is to close the socket every time the client has sent something to the server.
But my idea is to keep the connection always open: if a client contacts the server, the connection is put into a queue (e.g. a LinkedBlockingQueue) and kept open, which would increase performance.
How can I check on the server whether there is new data available on a socket in the queue? The only thing I can imagine is to constantly iterate over the whole queue and check every socket for new data. But this would be very inefficient, because when several threads work on the queue, the queue gets blocked while one thread is scanning it.
Or is there a way to register a callback function on the socket, so that the socket informs the threads when data is ready?
But my idea is to keep the connection always open: if a client contacts the server, the connection is put into a queue (e.g. a LinkedBlockingQueue) and kept open, which would increase performance.
Keeping connections open will improve performance, though there are scaling issues: an open socket uses kernel resources. (I wouldn't use a queue though ...)
How can I check on the server whether there is new data available on a socket in the queue?
If you have a number of sockets to different clients, and you want to process data in (roughly) the order that it arrives, there are two common techniques:
Create a thread per socket, and have each thread simply do a read. This will (naturally) block the thread until data becomes available.
Use the NIO channel selector mechanism (see Selector) which allows you to find out which of a group of I/O channels is ready for a read or write.
Thread-per-socket tends to be resource hungry (thread stacks), and it does not scale well at all if you have many threads active simultaneously (too many context switches, too much load on the thread scheduler).
By contrast, selectors map onto native syscalls provided by the host operating system, and thus they are efficient and responsive ... if used intelligently.
(You could also obtain non-blocking channels for the sockets, and poll them round-robin fashion. But that isn't going to be either efficient or responsive.)
As you can see, neither of these ideas works with a queue. Either you have a number of threads each dealing with one socket, or you have one thread dealing with an array or (array) list of sockets. The queue abstraction is not designed for indexing or iterating.
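A minimal sketch of the selector approach; the port is made up, and a real server would also deal with partial reads and pending writes:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class SelectorServer {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(5000));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer buf = ByteBuffer.allocate(1024);
        while (true) {
            selector.select(); // blocks until at least one channel is ready
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    buf.clear();
                    if (client.read(buf) == -1) { // client closed the connection
                        key.cancel();
                        client.close();
                    } else {
                        buf.flip();
                        // process the bytes for this client ...
                    }
                }
            }
        }
    }
}
```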
Or is there a possibility to register a callback function on the socket, so that the socket informs the threads that data is ready?
See @Lolo's answer.
A practical solution would be to use NIO2 AsynchronousSocketChannels to perform asynchronous read operations with a callback that you can specify as a CompletionHandler.
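A rough sketch of that approach; the port and buffer size are arbitrary:

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousServerSocketChannel;
import java.nio.channels.AsynchronousSocketChannel;
import java.nio.channels.CompletionHandler;

public class AsyncServer {
    public static void main(String[] args) throws Exception {
        AsynchronousServerSocketChannel server =
                AsynchronousServerSocketChannel.open().bind(new InetSocketAddress(5000));

        server.accept(null, new CompletionHandler<AsynchronousSocketChannel, Void>() {
            @Override
            public void completed(AsynchronousSocketChannel client, Void att) {
                server.accept(null, this); // keep accepting further connections
                ByteBuffer buf = ByteBuffer.allocate(1024);
                client.read(buf, buf, new CompletionHandler<Integer, ByteBuffer>() {
                    @Override
                    public void completed(Integer bytesRead, ByteBuffer b) {
                        if (bytesRead == -1) return; // client closed
                        b.flip();
                        // process the data, then issue the next read
                        b.clear();
                        client.read(b, b, this);
                    }
                    @Override
                    public void failed(Throwable exc, ByteBuffer b) { /* drop client */ }
                });
            }
            @Override
            public void failed(Throwable exc, Void att) { /* accept failed */ }
        });

        Thread.currentThread().join(); // keep the demo process alive
    }
}
```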

Is this a more effective way to optimize my networking model?

To begin I will explain my networking model:
Networking in my game consists of pairing objects on the remote server and the client. To give a short description, say there are multiple characters in the server world that need to be synchronized with a client (I'll consider just one to simplify things).
Each time a character is created on the server side, the server instantiates a ServerRpgCharacter - this class wraps the RpgCharacter and registers observers etc. to monitor the character and broadcast relevant mutations to it. The server then requests a pair object for the ServerRpgCharacter (that is, it asks the client to instantiate a pair for this object that will communicate with it). The pair can be any class, but any messages dispatched by ServerRpgCharacter on the remote end are received by its respective pair on the client end.
It gets a little more involved with multiple clients but it ends up working out nicely.
Anyway, I have been thinking of ways to optimize this model. The way it works now is that when an object dispatches a message to its pair, the message is queued up into a 'snapshot.' Whenever any paired entity dispatches a message, it goes into the same snapshot. The snapshot is then compressed and dispatched at 200 ms intervals.
The problem is that I am using TCP/IP to transmit these snapshots. I'm not sure exactly how TCP works, but I assume that if a snapshot's packet is dropped, the entire snapshot has to be re-sent.
So I am wondering whether it would be better to discard TCP and instead implement a custom layer on top of UDP where, instead of dispatching one whole snapshot with the messages sent by all pairs, each pair maintains its own packet ordering and buffering. That way, if a packet for pair A is dropped, pair B can ignore the fact that pair A had one of its packets dropped.
I then need to consider that compressing the data is less efficient, since less is being transmitted at a time.
Compression with TCP is more efficient because you can compress using the context of the entire stream. You can't do that with UDP, as you have to compress each packet individually.
The benefit of UDP is you can drop packets and not resend them because you assume a later packet will update the information quickly enough.
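To illustrate the stream-context point, here is a small sketch using java.util.zip.Deflater. The snapshot payloads are invented; the point is that one reused Deflater (TCP-style) compresses later snapshots against the history of earlier ones, while a fresh Deflater per packet (UDP-style) cannot:

```java
import java.util.zip.Deflater;

public class CompressionContextDemo {
    public static void main(String[] args) {
        byte[][] snapshots = {
            "player:1 x:100 y:200".getBytes(), // invented payloads with
            "player:1 x:101 y:200".getBytes(), // lots of shared structure
            "player:1 x:102 y:201".getBytes(),
        };
        byte[] out = new byte[256];

        // UDP-style: a fresh Deflater per packet, no shared history.
        int perPacket = 0;
        for (byte[] snap : snapshots) {
            Deflater d = new Deflater();
            d.setInput(snap);
            d.finish();
            perPacket += d.deflate(out);
            d.end();
        }

        // TCP-style: one Deflater whose history window spans the stream;
        // SYNC_FLUSH emits each snapshot while keeping the shared context.
        Deflater stream = new Deflater();
        int streamed = 0;
        for (byte[] snap : snapshots) {
            stream.setInput(snap);
            streamed += stream.deflate(out, 0, out.length, Deflater.SYNC_FLUSH);
        }
        stream.end();

        System.out.println("per-packet: " + perPacket + " bytes, streamed: " + streamed + " bytes");
    }
}
```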
The interval needs to be shorter as 200 ms will be noticeable to users. 50 ms might be a better option.
Instead of the normal listener pattern, you can use a spatial lookup. When an event occurs, it is noticeable to things on the same level and within a distance of X squares. This saves you having to maintain lots of lists as players and monsters move around.
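A minimal sketch of such a spatial lookup, assuming a made-up grid cell size; entities that move would have to be removed and re-added to the grid:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SpatialGrid<T> {
    private static final int CELL = 16; // squares per cell; tune to the game's "distance X"
    private final Map<Long, List<T>> cells = new HashMap<>();

    private static long key(int cx, int cy) {
        return ((long) cx << 32) | (cy & 0xffffffffL);
    }

    public void add(T entity, int x, int y) {
        cells.computeIfAbsent(key(Math.floorDiv(x, CELL), Math.floorDiv(y, CELL)),
                              k -> new ArrayList<>()).add(entity);
    }

    // Everything within rangeCells cells of (x, y): the candidates for an event there.
    public List<T> near(int x, int y, int rangeCells) {
        List<T> result = new ArrayList<>();
        int cx = Math.floorDiv(x, CELL), cy = Math.floorDiv(y, CELL);
        for (int dx = -rangeCells; dx <= rangeCells; dx++) {
            for (int dy = -rangeCells; dy <= rangeCells; dy++) {
                List<T> cell = cells.get(key(cx + dx, cy + dy));
                if (cell != null) result.addAll(cell);
            }
        }
        return result;
    }
}
```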

Java TCP latency

I am developing an Android application communicating with a TCP Java server over a WLAN connection. The Android application is a game with sprites being moved around the screen. Whenever a sprite moves, the AndroidClient sends its coordinates to the Java server, which then sends the data to the other clients (maximum 4 clients). The server handles each client on a separate thread; data updates are sent about every 20 ms, and each packet consists of about 1-10 bytes. I am on a 70 Mbit network (with about 15 Mbit effective on my wireless).
I am having problems with an unstable connection, experiencing latency of about 50-500 ms every 10th-30th packet. I have set tcpNoDelay to true, which stopped the consistent 200 ms latency, although it still lags a lot. As I am quite new to both Android and networking, I don't know whether this is to be expected or not. I am also wondering whether UDP could be suitable for my program, as I am more interested in sending updates fast than in every packet arriving correctly.
I would appreciate any guidance as to how to avoid/work around this latency problem. General tips on how to implement such a client-server architecture would also be applauded.
On a wireless LAN you'll occasionally see dropped packets, which results in a packet retransmission after a delay. If you want to control the delay before retransmission you're almost certainly going to have to use UDP.
You definitely want to use UDP. For a game you don't care if the position of a sprite is incorrect for a short time. So UDP is ideal in this case.
Also, if you have any control over the server code, I would not use separate threads for clients. Threads are useful when you need to call libraries that you don't control and that can block (for example because they touch a file or perform additional network communication). But they are expensive: they consume a lot of resources, and as such they can actually make things slower than they need to be.
So for a network game server where latency and performance are absolutely critical, I would use just one thread to process a queue of commands that each carry their own state, and make sure you never perform an operation that blocks. Each command is processed in order; its state is evaluated and updated (like a laser blast intersecting with another object). If a command requires blocking (like reading from a file), perform a non-blocking read and set the state of that command accordingly, so that the command processor itself never blocks. The key is that the command processor can never, ever block. It runs in a loop, but you would have to call Thread.sleep(x) in an appropriate way so as not to waste CPU.
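A minimal sketch of such a command processor; the Command interface and the re-queueing scheme are just one possible way to keep blocking work out of the loop:

```java
import java.util.concurrent.ConcurrentLinkedQueue;

public class CommandProcessor implements Runnable {
    // Hypothetical command contract: step() does a bounded, non-blocking
    // piece of work and returns true once the command is finished.
    interface Command { boolean step(); }

    private final ConcurrentLinkedQueue<Command> queue = new ConcurrentLinkedQueue<>();

    public void submit(Command c) { queue.add(c); }

    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            Command c = queue.poll();
            if (c == null) {
                // Nothing to do: sleep briefly instead of spinning the CPU.
                try { Thread.sleep(1); } catch (InterruptedException e) { return; }
            } else if (!c.step()) {
                queue.add(c); // not done yet (e.g. waiting on a non-blocking read)
            }
        }
    }
}
```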
As for the client side, when a client submits a command (say it fired a laser), the client generates a response object and inserts it into a Map with a sequence id as the key. It then sends the request with the sequence id, and when the server responds with that id, you simply look up the response object in the Map and decode the response into it. This allows you to perform concurrent operations.
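A rough sketch of that client-side bookkeeping; the RequestTracker name is invented, and a CompletableFuture stands in for the response object:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicLong;

public class RequestTracker {
    private final ConcurrentMap<Long, CompletableFuture<byte[]>> pending = new ConcurrentHashMap<>();
    private final AtomicLong nextId = new AtomicLong();

    // Called when sending: allocate a sequence id for the response object.
    public long newRequest(CompletableFuture<byte[]> response) {
        long id = nextId.incrementAndGet();
        pending.put(id, response);
        return id; // send this id along with the request
    }

    // Called by the network reader when a response with this id arrives.
    public void complete(long id, byte[] payload) {
        CompletableFuture<byte[]> f = pending.remove(id);
        if (f != null) f.complete(payload); // decode into the waiting object
    }
}
```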

Can Java ServerSocket and Sockets using ObjectIOStreams lose packets?

I'm using a ServerSocket on my server, and Sockets that use ObjectIOStreams to send serializable objects over the network connection. I'm developing an essentially more financial version of Monopoly, so packets being sent and confirmed as sent/received is a requirement. Do I need to implement my own packet-loss watcher, or is that already taken care of by the (Server)Sockets?
I'm primarily asking about losing packets during network blips and the like, not a full connection failure (e.g. siblings moving a lead plate between my router and my computer's Wi-Fi adapter).
http://code.google.com/p/inequity/source/browse/#svn/trunk/src/network
Code can be found under network->ClientController and network->Server
Theoretically, yes. There is no way to give a 100% theoretical guarantee that what is sent on the hardware layer is received the same way on the receiving end.
Practically, however, if you use TCP (Transmission Control Protocol), this has already been taken care of; you won't lose any packets. (If you're using UDP (User Datagram Protocol), on the other hand, it's another story, and it may very well be the case that you're losing packets or receiving them out of order.)
Having looked briefly at your code, it seems you're using multiple threads. If so, you must be extremely careful with synchronization. It could very well be the case that it looks as if a packet has been dropped when it is simply not handled, due to a race condition in the program. (Keep in mind that the GUI, for instance, runs in its own thread.)
The best way to solve the synchronization, I think, is to reduce the network loop to a very small read/put-on-synchronized-queue loop, and pick up the received packets from the queue whenever you're sure no other thread will intervene.
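As a sketch of that loop (the class name is invented; the queue is the only thing shared between the network thread and the rest of the program):

```java
import java.io.ObjectInputStream;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class NetworkLoop implements Runnable {
    private final ObjectInputStream in;
    private final BlockingQueue<Object> received = new LinkedBlockingQueue<>();

    public NetworkLoop(ObjectInputStream in) { this.in = in; }

    public BlockingQueue<Object> receivedQueue() { return received; }

    @Override
    public void run() {
        try {
            while (true) {
                received.put(in.readObject()); // read, enqueue, repeat - nothing else
            }
        } catch (Exception e) {
            // connection closed or thread interrupted; consumers drain what arrived
        }
    }
}
```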
