I am currently writing a server for a fast-paced multiplayer game, which should run over UDP (I read that TCP is inappropriate for applications requiring timely delivery because of the way it handles packet drops; please correct me if TCP is more useful). The game is not massively multiplayer, and should be hosted primarily by the players themselves, probably on local dedicated servers. While I am pretty sure that the client needs to use NIO to avoid game lag from network problems, I do not know how to write the server yet. Here is a list of possibilities I have considered:
Using separate thread for each player with a socket bound to each
Using one single socket to loop through every player
Doing the same, but with NIO
Using one socket to broadcast messages to all clients simultaneously
Using one socket for receiving, and one for sending on a separate thread
Which of these is the most appropriate approach to handling time-critical data with a relatively low number of clients (no more than 16)?
For 16 clients, unless your hardware sucks REALLY badly or you do some outrageous processing, you'll be fine performance-wise whichever way you go. That means you can write it whichever way makes it simplest for you to write and maintain.
A few words about your options though:
Using separate thread for each player with a socket bound to each
This approach makes more sense for a low number of TCP sockets. If you're using UDP, you only have one socket, so there's no need for that. It is, however, useful to have a thread per client if your game requires some per-client parallel processing.
Using one single socket to loop through every player
This is probably the easiest option, unless your game requires something really special. As I've said earlier, you should be fine performance-wise. So if it makes writing the code easier, do it. Avoid premature optimizations.
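For illustration, here is a minimal sketch of that single-socket loop, using a plain blocking DatagramSocket. The port number and the handleMessage hook are placeholders, not part of any real API:

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class GameServer {
    public static void main(String[] args) throws Exception {
        // One socket receives datagrams from every client; 9999 is a placeholder port.
        DatagramSocket socket = new DatagramSocket(9999);
        byte[] buffer = new byte[1024];
        while (true) {
            DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
            socket.receive(packet); // blocks until any client sends something
            // The sender's address/port pair identifies the player.
            handleMessage(packet.getAddress(), packet.getPort(),
                          packet.getData(), packet.getLength());
        }
    }

    private static void handleMessage(InetAddress addr, int port,
                                      byte[] data, int length) {
        // Game-specific processing would go here.
    }
}
```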
Using one socket for receiving, and one for sending on a separate thread
Unless you have a good reason for doing this, I'd avoid this idea. It might (needlessly) complicate things network-wise since the clients will be sending packets to port X, and getting replies from port Y. NAT will be a serious issue.
Using one socket to broadcast messages to all clients simultaneously
First of all, that would only work in one direction - from the server to the clients. Second, that would only work inside a LAN, since routers don't forward broadcast packets. If you're fine with both limitations and find yourself designing a system in which the server sends some packets to all players (at least to those on the same network), it might not be a bad idea. As far as performance goes, unless you're sending a large number of packets, you probably won't notice a difference between sending each player a separate copy and sending a broadcast.
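If you do go the broadcast route, the sending side is short. A minimal sketch, where the port is a placeholder; 255.255.255.255 is the limited broadcast address (a subnet-directed address such as 192.168.1.255 also works):

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class Broadcaster {
    public static void main(String[] args) throws Exception {
        DatagramSocket socket = new DatagramSocket();
        socket.setBroadcast(true); // must be enabled before sending to a broadcast address
        byte[] state = "game-state".getBytes(); // placeholder payload
        DatagramPacket packet = new DatagramPacket(
                state, state.length,
                InetAddress.getByName("255.255.255.255"), 9999); // placeholder port
        socket.send(packet); // delivered to every host on the local segment
        socket.close();
    }
}
```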
TCP has more overhead, so it needs more bandwidth to transmit the same information. TCP does, however, have the advantage that lost data is retransmitted. If you use UDP, make sure the messages you send over the network are losable. For example, if you send movements as additions and alterations, the clients will quickly lose sync with the server. You'll need to transmit absolute values that overwrite the clients' state.
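To illustrate the difference, here is a hypothetical sketch; the payload strings, client address, and port are made up:

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class StateUpdate {
    public static void main(String[] args) throws Exception {
        DatagramSocket socket = new DatagramSocket();
        InetAddress client = InetAddress.getByName("192.168.1.50"); // placeholder

        // Fragile with UDP: if this delta is lost, the client desyncs for good.
        // byte[] payload = "MOVE dx=3 dy=0".getBytes();

        // Robust with UDP: each update carries the absolute state, so the next
        // packet overwrites whatever was lost.
        byte[] payload = "POS x=103 y=40".getBytes();

        socket.send(new DatagramPacket(payload, payload.length, client, 9999));
        socket.close();
    }
}
```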
With blocking sockets and one socket per player (as with TCP), you'll need a thread per player for listening to their output, such as keystrokes or actions: a single thread would block on player 1 until they made an action, and player 2 would not be able to make an action until player 1 had.
Broadcasting game state might make sense - but I have no experience with that, or with anything as time-critical as fast-paced multiplayer games.
At the moment I have a project where we are developing a Java Texas Holdem application. Of course, this application is based on a client-server socket system. I am saving all joined clients (which I get with the socketServer.accept() method) in an ArrayList. At the moment I create one thread for each joined client, which permanently checks whether the client has sent any data to the server. My classmate told me it would be way better if I created one big thread that iterates through the whole client ArrayList and checks every client's InputStreamReader. Should I trust him?
Creating a thread per Socket isn't a good idea if your application will have a lot of clients.
I'd recommend looking into external libraries and how they handle their connections. Examples: http://netty.io/, https://mina.apache.org/
Neither approach is feasible. Having a thread per connection will quickly exhaust resources in any loaded system. Having one thread polling all connections in a loop will produce terrible performance.
The proper way is to multiplex on the sockets - have a sane number of threads (16, why not), distribute all sockets between those threads, and multiplex on them using a select() variant; in Java, that means a Selector from java.nio.
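A minimal single-threaded sketch of that selector loop (the port is a placeholder; a real deployment would run one such loop per worker thread, with the sockets distributed among them):

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class MultiplexedServer {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(8080)); // placeholder port
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer buffer = ByteBuffer.allocate(1024);
        while (true) {
            selector.select(); // blocks until at least one channel is ready
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    buffer.clear();
                    if (client.read(buffer) == -1) {
                        client.close(); // client disconnected; key is cancelled automatically
                    } else {
                        buffer.flip();
                        // process the bytes in buffer here
                    }
                }
            }
        }
    }
}
```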
I am building an android app that communicates with a server on a regular basis as long as the app is running.
I do this by initiating a connection to the server when the app starts, then I have a separate thread for receiving messages called ReceiverThread, this thread reads the message from the socket, analyzes it, and forwards it to the appropriate part of the application.
This thread runs in a loop, reading whatever it has to read and then blocking on the read() call until new data arrives, so it spends most of its time blocked.
I handle sending messages through a different thread, called SenderThread. What I am wondering is: should I structure the SenderThread in a similar fashion? That is, should I maintain some form of queue for this thread, let it send all the messages in the queue, and then block until new messages enter the queue? Or should I just start a new instance of the thread every time a message needs to be sent, let it send the message, and then "die"? I am leaning towards the first approach, but I do not know which is actually better, both in terms of performance (keeping a blocked thread in memory versus initializing new threads) and in terms of code correctness.
Also since all of my activities need to be able to send and receive messages I am holding a reference to both threads in my Application class, is that an acceptable approach or should I implement it differently?
One problem I have encountered with this is that sometimes if I close my application and run it again I actually have two instances of ReceiverThread, so I get some messages twice.
I am guessing that this is because my application did not actually close and the previous thread was still active (blocked on the read() operation), and when I opened the application again a new thread was initialized, but both were connected to the server so the server sent the message to both. Any tips on how to get around this problem, or on how to completely re-organize it so it will be correct?
I tried looking these questions up, but found conflicting examples for my first question, and nothing useful enough that applies to my second question...
1. Your approach is OK if you really need to keep an open connection between the server and client at all times, at all cost. However, I would use an asynchronous connection, like sending an HTTP request to the server and then getting a reply whenever the server feels like it.
If you need the server to reply to the client at some later time, but you don't know when, you could also look into the Google Cloud Messaging framework, which gives you a transparent and consistent way of sending small messages to your clients from your server.
You need to consider some things when you're developing a mobile application.
A smartphone doesn't have an endless amount of battery.
A smartphone's Internet connection is somewhat volatile and you will lose Internet connection at different times.
When you keep a direct connection to the server all the time, your app keeps sending keep-alive packets, which means you'll suck the phone dry pretty fast.
When the Internet connection is as unstable as it gets on mobile broadband, you will sometimes lose the connection and need to recover from it. So if you use TCP because you want to make sure your packets are received, you'll end up resending the same packets many times, and therefore incur a lot of overhead.
Also, you might run into threading problems on the server side if you open threads on the server on your own, which it sounds like you do. Let's say you have 200 clients connecting to the server at the same time, each with one thread open on the server. Serving 200 different threads at once can end up being quite a performance-consuming task for the server, and you will need to do a lot of work on your own as well.
2. When you exit your application, you'll need to clean up after yourself. This should be done in the onPause method of the Activity that is active.
This means killing off all active threads (or at least interrupting them), saving the state of your UI (if you need it), and flushing and closing whatever open connections to the server you have.
As far as using threads goes, I would recommend using some of the built-in threading tools like Handlers or implementing an AsyncTask.
If you really think Thread is the way to go, I would definitely recommend using a Singleton pattern as a "manager" for your threading.
This manager would control your threads so you don't end up with more than one thread talking to the server at any given time, even when you're in another part of the application.
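A minimal sketch of what such a manager could look like; ConnectionManager and its queue-draining sender thread are hypothetical names, and reconnect/error handling is elided:

```java
import java.io.OutputStream;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical singleton: one long-lived sender thread drains a queue,
// blocking cheaply on take() whenever there is nothing to send.
public final class ConnectionManager {
    private static final ConnectionManager INSTANCE = new ConnectionManager();
    private final BlockingQueue<byte[]> outbox = new LinkedBlockingQueue<>();
    private Thread senderThread;

    private ConnectionManager() {}

    public static ConnectionManager getInstance() { return INSTANCE; }

    public synchronized void start(final OutputStream socketOut) {
        if (senderThread != null) return; // guards against a second sender thread
        senderThread = new Thread(() -> {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    byte[] message = outbox.take(); // blocks until a message is queued
                    socketOut.write(message);
                    socketOut.flush();
                }
            } catch (Exception e) {
                // connection lost or thread interrupted; recovery logic would go here
            }
        }, "SenderThread");
        senderThread.start();
    }

    public void send(byte[] message) { outbox.add(message); }

    public synchronized void shutdown() {
        if (senderThread != null) {
            senderThread.interrupt();
            senderThread = null;
        }
    }
}
```

This also answers the queue-versus-new-thread question above: one blocked thread is cheaper than spawning a thread per message, and it keeps the messages ordered.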
As far as the Application class implementation goes, take a look at the Application class documentation:
Base class for those who need to maintain global application state. You can provide your own implementation by specifying its name in your AndroidManifest.xml's <application> tag, which will cause that class to be instantiated for you when the process for your application/package is created.
There is normally no need to subclass Application. In most situations, static singletons can provide the same functionality in a more modular way.
So keeping away from implementing your own Application class is generally recommended. However, if you let one of your Activities initialize your own singleton class for managing the threads and connections, you might (just might) run into trouble, because the initialization of the singleton might be "bound" to that specific Activity. If that Activity is removed from the screen and paused, it might be killed, and the singleton might be killed with it. So initializing the singleton inside your Application implementation might prove useful.
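Under that assumption, a minimal sketch; MyApplication is a hypothetical name that would have to be registered via android:name in the manifest's <application> tag, and ConnectionManager is the hypothetical singleton sketched above:

```java
import android.app.Application;

public class MyApplication extends Application {
    @Override
    public void onCreate() {
        super.onCreate();
        // Initializing here ties the singleton's lifetime to the process,
        // not to any single Activity.
        ConnectionManager.getInstance();
    }
}
```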
Sorry for the wall of text, but your question is quite "open-ended", so I've tried to give you a somewhat open-ended answer - hope it helps ;-)
NIO and TCP make a great pair for many connections. Since a new connection needs to be opened for each new client, each of these clients would typically need their own thread for blocking I/O operations. NIO addresses that problem by allowing data to be read when it can, rather than blocking until it is available. But what about UDP?
I mean, connectionless UDP does not have the blocking nature of TCP associated with it, because of how the protocol is designed (send it and forget it, basically). If I decide to send some data to some address, it will be sent without delays (on the server end). Likewise, if I want to read data, I can just receive individual packets from different sources. I don't need many connections to many places, with many threads to deal with each of them.
So how do NIO and selectors enhance UDP? More specifically, when would one prefer to use UDP with NIO rather than the ol' java.net package?
Well, the DatagramSocket.receive(...) method is documented as a blocking operation. So, for instance, if you had one thread trying to handle packets from N different sockets, you would need to use NIO and selectors. Similarly, if the thread had to multiplex checking for new packets with other activities, you might do this.
If you don't have these or similar requirements, then selectors won't help. But that's no different to the TCP case. You shouldn't use selectors with TCP if you don't need them, because it potentially adds an extra system call.
(On Linux, in the datagram case, you'd do a select syscall followed by a recv ... instead of just a recv.)
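For the UDP case, a minimal sketch of that multiplexing with java.nio, assuming two placeholder ports; one thread services both sockets through a single Selector:

```java
import java.net.InetSocketAddress;
import java.net.SocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.DatagramChannel;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.util.Iterator;

public class UdpMultiplexer {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        for (int port : new int[] {9001, 9002}) { // placeholder ports
            DatagramChannel channel = DatagramChannel.open();
            channel.bind(new InetSocketAddress(port));
            channel.configureBlocking(false);
            channel.register(selector, SelectionKey.OP_READ);
        }

        ByteBuffer buffer = ByteBuffer.allocate(1024);
        while (true) {
            selector.select(); // one call covering all registered sockets
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                DatagramChannel channel = (DatagramChannel) key.channel();
                buffer.clear();
                SocketAddress sender = channel.receive(buffer); // non-blocking
                if (sender != null) {
                    buffer.flip();
                    // process the datagram from sender here
                }
            }
        }
    }
}
```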
But if you're only dealing with one DatagramSocket, wouldn't the receive method read packets immediately as they arrive, regardless of the fact that they're from different computers?
If you are listening on one socket for datagrams from "everyone" then yes. If you have different sockets for different computers then no.
And for the TCP comment, sometimes the use of a selector is justified simply by the fact that it is very resource demanding to have thousands of threads, as it would be required by a blocking TCP server.
We weren't discussing that case. But yes, that is true. And the same is true if you have thousands of threads blocking on UDP receives.
My point was that if you don't have lots of threads, or if it doesn't matter whether a thread blocks, then NIO doesn't help. In fact, it may reduce performance.
NIO removes the necessity for extra threads altogether. It lets you handle all your clients in one thread, including both TCP and UDP clients.
connectionless UDP does not have the blocking nature of TCP associated with it
That's not true. Receives still block, and so can sends, at least in theory.
I need to implement a Peer To Peer File Transfer.
What protocol should I use? TCP or UDP? And why?
TCP is generally the best way to go when you want to ensure that your data gets to its intended destination with appropriate integrity.
In your case I would personally choose TCP, because you will probably end up reimplementing TCP in some form inside your UDP packets otherwise (ask for block (SYN); answer: I have block (SYN ACK); OK, send it to me (ACK)... data (PSH ACK)... OK, done (RST)).
Also, generally speaking, UDP is the way to go when you want to broadcast, or when you don't really care whether the data gets there or not, meaning there is either high redundancy or low importance/integrity. Since files require high integrity, it doesn't make a lot of sense to go with UDP - again, unless you want to go through the extra work.
The only upside to UDP would be the fact that it is stateless, which could have some good implications for a file sharing program.
Bottom line... go with your heart...
I'd recommend using TCP.
If you use UDP, then you end up having to design and implement flow control and detection/retransmission of lost packets in your application-level protocol. Doing this in a way that gives you decent performance under both good and bad network conditions is hard work. For simple peer-to-peer transfers, the payoff is generally not worth the effort.
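For contrast, here is a minimal sketch of how little application-level code the TCP side needs; the peer address, port, and file name are placeholders:

```java
import java.io.FileInputStream;
import java.io.OutputStream;
import java.net.Socket;

public class FileSender {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("192.168.1.50", 5000); // placeholder peer
             FileInputStream in = new FileInputStream("document.pdf"); // placeholder file
             OutputStream out = socket.getOutputStream()) {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = in.read(buffer)) != -1) {
                // TCP handles ordering, retransmission, and flow control for us.
                out.write(buffer, 0, read);
            }
        }
    }
}
```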
FOLLOWUP
You ask:
and I plan to implement inter-LAN calling over WiFi; for that I would have to use UDP, right?
Assuming that IP is implemented and the routing is set up correctly over your WiFi network(s), both UDP and TCP should work just fine.
UDP does not guarantee that the packets will be delivered, which is something that TCP does. Please take a look at this previous SO post which highlights the difference between these two protocols.
TCP helps ensure that your packets are received by the client and should be your choice for a file transfer because you want the file to be reproducible on the other end exactly as it is sent.
You can implement the file transfer using UDP too, but you would have to write your own logic for ensuring that the contents of the file are assembled correctly.
Since most users really care that all of their data makes it to the remote target with consistency, TCP is your best bet since packet error handling is managed. UDP is typically better for applications where loss is acceptable (e.g. player position in games) or where retransmission is not an option (e.g. streaming audio/video).
More on streaming
In the streaming A/V case, the data is always shipped with some error correction bits to fix some large percentage of errors. The endpoint manages the extra time required to detect (and potentially correct) the errors by buffering the stream. Nevertheless, it's obviously a ton of work (on both sides) to make it all happen and probably isn't worth it for P2P file transfer.
Update 1: audio streaming comment
The constraints are really based on required throughput, latency, and bit error rate (BER). Since these are likely both mobile devices, possibly operating across two carriers' cellular networks, I'd opt for UDP with very strong error-correction capability for audio. Users will likely be more displeased by no audio than by slightly corrupted audio and greater latency. Nevertheless, I would still use TCP for file transfer.
My Java application receives data through UDP. It uses the data for an online data mining task. This means that it is not critical to receive each and every packet, which is what makes the choice of UDP reasonable on the first place. Also, the data is transferred over LAN, so the physical network should be reasonably reliable. Anyway, I have no control over the choice of protocol or the data included.
Still, I am concerned about packet loss that may arise from overload and long processing time of the application itself. I would like to know how often these things happen and how much data is lost.
Ideally I am looking for a way to monitor packet loss continuously in the production system. But a partial solution would also be welcome.
I realize it is impossible to always know about UDP packet losses (without control on the packet contents). I was thinking of something along the lines of packets received by the OS but never arriving to the application; or maybe some clever solution inside the application, like a fast reading thread that drops data when its client is busy.
We are deploying under Windows Server 2003 or 2008.
The problem is that there is no way to tell that you have lost any packets if you are relying on the UDP format.
If you need to know this type of information, you need to build it into the format that you layer on top of UDP (like the TCP sequence number). If you do that and the format is simple, then you can easily create filters in Microsoft's NetMon or Wireshark to log and track that information.
Also note that a TCP-style sequence number helps detect the out-of-order packets that may occur when using UDP.
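A minimal sketch of the receiving side of such a scheme, assuming the sender prepends a 4-byte sequence number to each datagram; the class and method names are made up for illustration:

```java
import java.nio.ByteBuffer;

public class LossCounter {
    private int expected = -1; // next sequence number we expect; -1 until first packet
    private long lost = 0;

    // Call with each received datagram's payload.
    public void onPacket(byte[] data, int length) {
        int seq = ByteBuffer.wrap(data, 0, 4).getInt();
        if (expected >= 0 && seq > expected) {
            // Gap detected; this counts reordered packets as lost, a known simplification.
            lost += seq - expected;
        }
        if (seq >= expected) {
            expected = seq + 1;
        }
        // bytes 4..length-1 are the real payload
    }

    public long lostSoFar() { return lost; }
}
```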
If you are concerned about packet loss, use TCP.
That's one reason why it was invented.