Can Java ServerSocket and Sockets using ObjectIOStreams lose packets?

I'm using a ServerSocket on my server, and Sockets that use ObjectIOStreams to send serializable objects over the network connection. I'm developing what is essentially a more financial version of Monopoly, so packets need to be sent and confirmed as sent/received. Do I need to implement my own packet-loss detection, or is that already taken care of by (Server)Sockets?
I'm primarily asking about losing packets during network blips or whatnot, not a full connection failure. E.g. siblings move a lead plate between my router and my computer's Wi-Fi adapter.
http://code.google.com/p/inequity/source/browse/#svn/trunk/src/network
Code can be found under network->ClientController and network->Server

Theoretically, yes. There is no way to give a 100% guarantee that what is sent at the hardware layer is received the same way at the receiving end.
Practically, however, if you use TCP (Transmission Control Protocol) this has already been taken care of; you won't lose any packets. (If you're using UDP (User Datagram Protocol), on the other hand, it's another story, and it may very well be the case that you're losing packets or receiving them out of order.)
I just looked briefly at your code, and it seems you're using multiple threads. If so, you must be extremely careful with synchronization. It could very well be that it looks as if a packet has been dropped when in fact it simply hasn't been handled yet, because of a race condition in the program. (Keep in mind that the GUI, for instance, runs in its own thread.)
The best way to handle the synchronization, I think, is to keep the network loop very small: just read each incoming object and put it on a synchronized (thread-safe) queue, and pick up the received messages from the queue whenever you're sure no other thread will intervene.
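A minimal sketch of that read-and-enqueue loop, assuming the connection is an ordinary TCP Socket carrying serialized objects (the class and method names here are illustrative, not taken from the linked project):

    import java.io.IOException;
    import java.io.ObjectInputStream;
    import java.io.Serializable;
    import java.net.Socket;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    public class NetworkReader implements Runnable {
        // Thread-safe queue; the game/GUI thread drains it whenever it is ready.
        private final BlockingQueue<Serializable> inbox = new LinkedBlockingQueue<>();
        private final Socket socket;

        public NetworkReader(Socket socket) {
            this.socket = socket;
        }

        @Override
        public void run() {
            try (ObjectInputStream in = new ObjectInputStream(socket.getInputStream())) {
                while (!Thread.currentThread().isInterrupted()) {
                    // Blocks until a whole object has arrived; TCP delivers it intact and in order.
                    Serializable msg = (Serializable) in.readObject();
                    inbox.put(msg); // hand off; no game state is touched on this thread
                }
            } catch (IOException | ClassNotFoundException | InterruptedException e) {
                Thread.currentThread().interrupt(); // connection closed or interrupted; stop
            }
        }

        // Called from the game/GUI thread; returns null when nothing has arrived yet.
        public Serializable poll() {
            return inbox.poll();
        }
    }

The GUI or game-logic thread then drains the queue on its own schedule, so network state and game state are never touched from two threads at once.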


Use a Socket permanently to handle a client session, or create a new one for every request?

So I am quite new to sockets and I have to create a server-client app for school.
I expect a client to request something from the server multiple times during its runtime.
I am uncertain whether I should make a new java.net.Socket for every request I get (the client opens a new socket every time and the Java ServerSocket accepts it),
or use a single socket and keep it open during the client's runtime.
That strongly depends on how frequently the socket is used.
E.g. if you know that the client will send a request to the server every 50 milliseconds, it would be easier to just keep the socket open.
But if you know the client will only request information every 5 minutes, it's probably better to close the connection and create a new one when needed. The same applies if you don't know when the next request will come.
Creating a new socket on the server side is not very expensive, so it's probably better to just close the connection if it's not used very frequently.
An exception could be a special socket that needs authentication or other expensive work on creation, but that's probably not the case in a school project.
So in general: it depends on how the socket is used, but if you're unsure whether it will be used very frequently, it's better to close it and open it again when needed.
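As a rough illustration of the close-and-reopen approach for infrequent requests, assuming a simple line-based protocol (the host, port and the protocol itself are placeholders for this sketch):

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.Socket;

    public class PerRequestClient {
        // One short-lived connection per request; the socket is closed automatically.
        public static String request(String host, int port, String command) throws IOException {
            try (Socket socket = new Socket(host, port);
                 PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(socket.getInputStream()))) {
                out.println(command); // send the request
                return in.readLine(); // read a single-line reply
            }
        }
    }

Each call pays the cost of a TCP handshake, which is negligible when requests arrive minutes apart.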
Related to this question:
Maximum number of socket in java
If you're worried about exceeding the maximum number of sockets that can be opened (I imagine this is highly unlikely in your case), you can create a work-around: use TCP to initially establish the connection, send the client a UID (unique identifier; a 64-bit value such as a Java long should be sufficient) and then close the TCP connection. Fill and maintain a structure (or a class in your case) detailing the connection details (IP address, unique identifier), and then wait for packets arriving via UDP (User Datagram Protocol, an alternative to TCP). If you decide to use UDP, be aware that you'll need to implement a means of reordering packets in order to reconstruct a byte stream (serialisation) and a mechanism for resending packets if they do not arrive (packet-loss recovery).
Sounds worse than it is. I'll repeat, though: don't bother yourself with these intricacies if you're not worried about exceeding any limits.
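A rough sketch of the bookkeeping that answer describes, assuming the UID has already been issued over TCP; the ClientInfo record, the port and the "first 8 bytes are the UID" convention are all inventions of this example:

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;
    import java.nio.ByteBuffer;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;

    public class UdpRegistry {
        // Connection details kept per client, keyed by the UID issued over TCP.
        record ClientInfo(long uid, InetAddress address) { }

        private final ConcurrentMap<Long, ClientInfo> clients = new ConcurrentHashMap<>();

        public void register(long uid, InetAddress address) {
            clients.put(uid, new ClientInfo(uid, address));
        }

        public void listen(int port) throws Exception {
            try (DatagramSocket socket = new DatagramSocket(port)) {
                byte[] buf = new byte[1024];
                while (true) {
                    DatagramPacket packet = new DatagramPacket(buf, buf.length);
                    socket.receive(packet);
                    if (packet.getLength() < 8) {
                        continue; // too short to carry a UID
                    }
                    // Convention for this sketch: the first 8 bytes of the payload are the UID.
                    long uid = ByteBuffer.wrap(packet.getData(), 0, 8).getLong();
                    ClientInfo info = clients.get(uid);
                    if (info == null || !info.address().equals(packet.getAddress())) {
                        continue; // unknown UID or unexpected sender, drop it
                    }
                    // ... hand the remaining bytes to the application ...
                }
            }
        }
    }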

Multicast data overhead?

My application uses the multicast capability of UDP.
In short, I am using Java and wish to transmit all data using a single multicast address and port. However, the multicast listeners will be logically divided into subgroups, which can change at runtime, and they may not wish to process data that comes from outside their group.
To make this happen I have written the code so that all running instances of the application join the same multicast group and port, but carefully check each packet's sender to determine whether it belongs to their sub-group.
Warning: the minimum packet size for my application is 30000-60000 bytes!
Will reading every packet using MulticastSocket.receive(DatagramPacket) and determining whether it is the required packet cause too much overhead (even buffer overflow)?
Would it generate massive traffic leading to congestion in the network, because every packet is sent to everyone?
Every packet is not sent to everyone, since multicast routing (e.g. PIM) builds a multicast tree that places receivers and senders optimally, so the network copies the packet only where needed. Multicast packets are broadcast (more accurately, flooded at Layer 2) on the final hop. IGMP assists multicast on that last hop and makes sure that if no receiver has joined there, no such flooding is done.
"and may not wish to process data that comes from outside of their group." The receive call returns the next received datagram, so there is little one can do to avoid processing packets that are not meant for the subgroup classification. Can't your application use multiple different groups?
Every packet may be sent to everyone, but each one will only appear on the network once.
However, unless this application is running entirely in a LAN that is entirely under your control, including all routers, it is already wildly infeasible. The generally accepted maximum UDP datagram size is 534 bytes once you go through a router you don't control.

MulticastSocket for multiplayer game

I am studying the structure for my server-client communications in my multiplayer game.
I came to the conclusion that UDP is the best choice due to the "fire and forget" way of using it, which will not block the application if a packet is lost.
I will also use TCP to send packets that need reliability, such as during login procedures and exchanges of information like change of server, change of map, updates, etc. It will also run an IRC-based chat. (All the commands are actually IRC-style custom messages.)
I was wondering what the best way is to send the interaction messages (moves, spells, objects, actions, etc.) between server and clients.
Reading some documentation I came across MulticastSocket.
My question is:
Is it better to send a continuous flow of information to all the clients, starting a thread for each player (as I do in TCP communications) where each DatagramSocket listens to a queue and sends each new message to its client? This would mean that all the maps and all the movements (supposing there can be 50 players spread over the maps) are sent to all the players, and each packet has to be larger to contain all that information.
Or is it better to use a thread for each map, active only if some players are inside that specific map, using multicast communication to send a message only to the players inside that map, who listen with a MulticastSocket?
I read about problems with firewalls and routers when using multicast, but I can't figure out what those problems might be (compared with normal UDP).
The application should be usable by anyone with few configuration problems.
Looking at your scenario above, you need to decide whether your application absolutely needs a TCP connection, because TCP requires one thread per connection, no exceptions (unless you use NIO).
Now, for the UDP part of the program, you have two basic choices:
a) You spawn one thread for receiving datagram packets for all players.
In this case, all players send their datagram packets to a single receiver, which then decides what to do with the data. The data may be put on various queues for other threads to process. Data can be sent back to all players using a single thread or multiple threads (one per player); a sketch of this option appears after this answer.
PROS:
Low resource usage
Low program (synchronization) overhead.
CONS:
Possible network slowness (due to masses of packets going towards the same socket)
Higher chance of packet drop (again due to masses of packets going to the same socket)
Serial processing
Disconnect events are messy and hard to deal with
b) You spawn one thread per player and listen on a different port per player.
In this case, each player gets its own handler thread, which may process the data directly or send it to a central processing queue. This way, data can be processed in parallel, allowing for faster processing at the cost of higher resource usage. Synchronization will also require special attention; atomics and re-entrant read/write locks may be needed. Writing back out to the socket should generally occur on another "per player" thread.
PROS:
Parallel Processing
Modular (have all the handling code per player in one thread, start thread on player join)
Disconnects are easier to handle and don't cause problems with other players.
Fast network response, concurrent packet receiving.
CONS:
High resource usage (a lot more objects)
High synchronization overhead
High thread count (the thread-to-player ratio may be as high as 2-4x)
In either case, by using TCP you will need at least one thread per player. The question is: are you willing to use a lot more resources for a smoother, swifter response from the server?
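A bare-bones sketch of option (a), assuming a single DatagramSocket drained by one receiver thread and one processing thread; the port, buffer size and packet handling are placeholders:

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    public class SingleReceiverServer {
        private final BlockingQueue<DatagramPacket> queue = new LinkedBlockingQueue<>();

        public void start(int port) throws Exception {
            DatagramSocket socket = new DatagramSocket(port); // one socket for every player

            // Receiver thread: does nothing but pull datagrams off the socket.
            Thread receiver = new Thread(() -> {
                try {
                    while (!Thread.currentThread().isInterrupted()) {
                        byte[] buf = new byte[512]; // fresh buffer per packet, since the packet is kept
                        DatagramPacket packet = new DatagramPacket(buf, buf.length);
                        socket.receive(packet);
                        queue.put(packet); // hand off immediately to keep the socket drained
                    }
                } catch (Exception e) {
                    Thread.currentThread().interrupt();
                }
            });

            // Processing thread: decode packets, update game state, queue replies, etc.
            Thread processor = new Thread(() -> {
                try {
                    while (!Thread.currentThread().isInterrupted()) {
                        DatagramPacket packet = queue.take();
                        // ... look up the sender, decode the payload, act on it ...
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            receiver.start();
            processor.start();
        }
    }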

UDP packets waiting and then arriving together

I have a simple Java program which acts as a server, listening for UDP packets. I then have a client which sends UDP packets over 3g.
Something I've noticed is occasionally the following appears to occur: I send one packet and seconds later it is still not received. I then send another packet and suddenly they both arrive.
I was wondering whether some sort of system might be in place that waits for a certain amount of data instead of sending an undersized packet. In my application I only send around 2-3 bytes of data per packet, although the UDP header and so on will bulk the message up a bit.
The aim of my application is to get these few bytes of data from A to B as fast as possible. Huge emphasis on speed. Is it all just coincidence? I suppose I could increase the packet size, but it just seems like the transfer time would increase, and 3G isn't exactly perfect.
Since the comments are getting rather lengthy, it might be better to turn them into an answer altogether.
If your app is not receiving data until a certain quantity has arrived, then chances are there is some sort of buffering going on behind the scenes. A good example (not saying this applies to you directly) is that if you or the underlying libraries use BufferedReader.readLine() or DataInputStream.readFully(bytes), the call will block until it receives a newline, or fills the byte array, before returning. Judging by the fact that your program seems to retrieve all of the data once a certain threshold is reached, it sounds like this is the case.
A good way to debug this is to use Wireshark. Wireshark doesn't care about your program; it analyzes the raw packets that are sent to and from your computer, and can tell you whether the issue is on the sender or the receiver.
If you use Wireshark and see that the data from the first send arrives at your physical machine well before the second, then the issue lies with your receiving end. If you see that the first packet arrives at the same time as the second packet, then the issue lies with the sender. Without seeing the code, it's hard to say what you're doing and what, specifically, is causing the data to show up only after more than 2-3 bytes have been received, but until then, this behavior describes exactly what you're seeing.
There are several probable causes of this:
Cellular data networks are not "always-on". Depending on the underlying technology, there can be a substantial delay between when a first packet is sent and when IP connectivity is actually established. This will be most noticeable after IP networking has been idle for some time.
Your receiver may not be correctly checking the socket for readability. Regardless of what high-level APIs you may be using, underneath there needs to be a call to select() to check whether the socket is readable. When a datagram arrives, select() should unblock and signal that the socket descriptor is readable. Alternatively, but less efficiently, you could set the socket to non-blocking and poll it with a read. Polling wastes CPU time when there is no data and delays detection of arrival for up to the polling interval, but can be useful if for some reason you can't spare a thread to wait on select().
I said above that select() should signal readability on a watched socket when data arrives, but this behavior can be modified by the socket's "receive low-water mark". The default value is usually 1, meaning any data will signal readability. But if SO_RCVLOWAT is set higher (via setsockopt() or a higher-level equivalent), then readability will not be signaled until at least the specified amount of data has arrived. You can check the value with getsockopt() or whatever API is equivalent in your environment.
Item 1 would cause the first datagram to actually be delayed, but only when the IP network has been idle for a while and not once it comes up active. Items 2 and 3 would only make it appear to your program that the first datagram was delayed: a packet sniffer at the receiver would show the first datagram arriving on time.
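In Java, the select() mechanism from item 2 corresponds to java.nio's Selector; here is a minimal sketch with a non-blocking DatagramChannel (the port is a placeholder). Note that the standard Java socket options do not expose SO_RCVLOWAT, so item 3 mostly applies to native code or other environments:

    import java.net.InetSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.DatagramChannel;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;

    public class SelectReceiver {
        public static void main(String[] args) throws Exception {
            Selector selector = Selector.open();
            DatagramChannel channel = DatagramChannel.open();
            channel.bind(new InetSocketAddress(9876)); // example port
            channel.configureBlocking(false);
            channel.register(selector, SelectionKey.OP_READ);

            ByteBuffer buf = ByteBuffer.allocate(1500);
            while (true) {
                selector.select(); // blocks until at least one datagram is readable
                for (SelectionKey key : selector.selectedKeys()) {
                    if (key.isReadable()) {
                        buf.clear();
                        channel.receive(buf); // returns as soon as a single datagram has arrived
                        buf.flip();
                        System.out.println("got " + buf.remaining() + " bytes");
                    }
                }
                selector.selectedKeys().clear(); // must be cleared manually after handling
            }
        }
    }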

Java TCP latency

I am developing an Android application communicating with a TCP Java server over a WLAN connection. The Android application is a game with sprites being moved around the screen. Whenever a sprite moves, the AndroidClient sends its coordinates to the Java server, which then sends the data to the other clients (maximum 4 clients). The server handles each client on a separate thread; data updates are sent about every 20 ms, and each packet consists of about 1-10 bytes. I am on a 70 Mbit network (with about 15 Mbit effective on my wireless).
I am having problems with an unstable connection, experiencing latency of about 50-500 ms every 10th-30th packet. I have set tcpNoDelay to true, which stopped the consistent 200 ms latency, although it still lags a lot. As I am quite new to both Android and networking I don't know whether this is to be expected or not. I am also wondering whether UDP could be suitable for my program, as I am interested in sending updates fast rather than in every packet arriving correctly.
I would appreciate any guidance as to how to avoid/work around this latency problem. General tips on how to implement such a client-server architecture would also be applauded.
On a wireless LAN you'll occasionally see dropped packets, which results in a packet retransmission after a delay. If you want to control the delay before retransmission you're almost certainly going to have to use UDP.
You definitely want to use UDP. For a game you don't care if the position of a sprite is incorrect for a short time. So UDP is ideal in this case.
Also, if you have any control over the server code, I would not use separate threads for clients. Threads are useful if you need to call libraries that you don't have control over and that can block (for example because they touch a file or perform additional network communication). But they are expensive: they consume a lot of resources, and as such they can actually make things slower than they could be.
So for a network game server where latency and performance are absolutely critical, I would just use one thread to process a queue of commands, each of which carries its own state, and make sure that you never perform an operation that blocks. Each command is processed in order, and its state is evaluated and updated (for example, a laser blast intersected with another object). If a command requires something blocking (like reading from a file), then you need to perform a non-blocking read and set the state of that command accordingly, so that your command processor never blocks. The key is that the command processor can never, ever block. It would just run in a loop, but you would have to call Thread.sleep(x) in an appropriate way so as not to waste CPU.
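A skeletal version of that single-threaded command processor, assuming commands are queued by the network code as Runnables; a timed poll stands in for the Thread.sleep(x) mentioned above:

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.TimeUnit;

    public class CommandProcessor implements Runnable {
        // Network threads enqueue commands; only this thread ever touches game state.
        private final BlockingQueue<Runnable> commands = new LinkedBlockingQueue<>();

        public void submit(Runnable command) {
            commands.add(command);
        }

        @Override
        public void run() {
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    // Timed poll: waits briefly instead of spinning, and never blocks on I/O.
                    Runnable command = commands.poll(10, TimeUnit.MILLISECONDS);
                    if (command != null) {
                        command.run(); // evaluate and update this command's state
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }
    }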
As for the client side, when a client submits a command (say it fired a laser), the client would create a response object and insert it into a Map with a sequence id as the key. Then it would send the request with that sequence id, and when the server responds with the same id, you just look up the response object in the Map and decode the response into that object. This allows you to perform concurrent operations.
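On the client side, the sequence-id bookkeeping might look like this rough sketch, where Response stands in for whatever reply object your protocol uses:

    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;
    import java.util.concurrent.atomic.AtomicLong;

    public class PendingRequests<Response> {
        private final AtomicLong nextId = new AtomicLong();
        private final ConcurrentMap<Long, CompletableFuture<Response>> pending =
                new ConcurrentHashMap<>();

        // Called when sending a request; the returned sequence id goes out with the packet.
        public long register(CompletableFuture<Response> future) {
            long id = nextId.incrementAndGet();
            pending.put(id, future);
            return id;
        }

        // Called by the network reader when a reply carrying this sequence id arrives.
        public void complete(long id, Response response) {
            CompletableFuture<Response> future = pending.remove(id);
            if (future != null) {
                future.complete(response); // wakes up whoever is waiting on this request
            }
        }
    }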
