Java Server - Sending packets out incorrectly?

I currently have a TCP server built in Java, and I'm sending messages/packets out to clients using their sockets' OutputStreams:
// Send all player's information to everyone else
outerPlayerIter = players.iterator();
while (outerPlayerIter.hasNext()) {
    Player outerPlayer = outerPlayerIter.next();
    Iterator<Player> innerPlayerIter = players.iterator();
    while (innerPlayerIter.hasNext()) {
        Player innerPlayer = innerPlayerIter.next();
        boolean isYou = false;
        if (innerPlayer.equals(outerPlayer)) isYou = true;

        // Send innerPlayer's info to outerPlayer
        Thread.sleep(100);
        dataBuffer.clearBuffer();
        dataBuffer.writeByte(Msgs.mm_toclient.MES_SENDPLAYERINFO);
        dataBuffer.writeBool(isYou);
        dataBuffer.writeBool(innerPlayer.getIsHost());
        dataBuffer.writeString(innerPlayer.getName());
        dataBuffer.writeString(innerPlayer.getPublicIP().getHostAddress());
        dataBuffer.writeShort((short) innerPlayer.getUdpPort());

        outerPlayer.getSocket().getOutputStream().write(dataBuffer.getByteArray());
        outerPlayer.getSocket().getOutputStream().flush();
    }
}
However, sometimes the clients don't appear to receive all the messages; it seems I can't send several messages back-to-back over one socket.
One way to temporarily work around this was to sleep before sending the next packet out, but I'm not sure why that should be necessary.
Am I doing something wrong in how I'm writing the packets out? What can be changed so that multiple packets sent in quick succession are all received correctly, without sleeping?
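For reference, one way to make message boundaries explicit is to length-prefix each buffer before writing it, so the receiving side can split the TCP byte stream back into messages. A minimal sketch (the helper class is hypothetical; it reuses dataBuffer.getByteArray() from the loop above):
import java.io.DataOutputStream;
import java.io.IOException;
import java.net.Socket;

final class Framing {
    // Hypothetical helper: prefix every message with its length so the receiver
    // can split the TCP byte stream back into individual messages.
    static void sendFramed(Socket socket, byte[] message) throws IOException {
        DataOutputStream out = new DataOutputStream(socket.getOutputStream());
        out.writeInt(message.length); // 4-byte length prefix
        out.write(message);           // e.g. dataBuffer.getByteArray()
        out.flush();
    }
}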

It might be that the client closes the socket too quickly, before the communication has actually finished. Could you try bumping up the Thread.sleep value, and, if you use any kind of timing on the client side, try bumping that up as well?

Related

Trouble with UDP ports and DatagramSockets

I'm working on a project that is supposed to send a file from one machine to another using DatagramPackets and DatagramSockets. The implementation is supposed to mimic the TCP protocol: once the receiver gets a packet, it sends back an ACK to the sender, confirming the packet was delivered. My program works so far without making any checks for ACKs; I'm having trouble implementing the ACK messages. My receiver program shows that the ACKs are being sent, but the sender application is not getting them.
I keep getting an error when creating the socket: "java.net.BindException: Address already in use: Cannot bind". I'm confused, because nowhere else in the sender application have I specified the port. I simply use DatagramSocket socket = new DatagramSocket();
but I do use
DatagramPacket packet = new DatagramPacket(packetData, packetData.length, internetAddress, 49000);
socket.send(packet); when sending packets.
I have tried removing the DatagramSocket declaration in my waitForACK() method and using the same DatagramSocket I used to send packets, but then socket.receive(packet); hangs and never receives anything, because it hasn't been assigned a port to listen on.
This is my method to listen for ACKs:
public void waitForACK() {
    // listen for ACK for a period of time
    // if ACK received, then break and send next packet
    // if ACK not received or timed out, resend last packet
    // TODO: implement a timeout
    System.out.println("### Sender waiting for ACK");
    try {
        DatagramSocket receivingSocket = new DatagramSocket(49000);
        while (!ACKreceived) {
            byte[] buf = new byte[1500]; // actual Ethernet packet size is 1500 bytes
            // receive request
            DatagramPacket packet = new DatagramPacket(buf, buf.length);
            receivingSocket.receive(packet); // socket.receive(packet); <--
            byte[] packetData = Arrays.copyOf(packet.getData(), packet.getLength());
            ACKreceived = checkACK(packetData); // check the received packet contains an ACK message
        }
        System.out.println("### Sender received ACK");
    } catch (Exception e) {
        System.out.println("### never got ACK");
        System.out.println(e);
    }
}
I've also tried this, but the socket will hang and never actually receive anything, even though the application that receives the file reports successfully sending an ACK. I'm guessing it's because it does not know to receive the ACK on port 49000.
public void waitForACK() {
    // listen for ACK for a period of time
    // if ACK received, then break and send next packet
    // if ACK not received or timed out, resend last packet
    // TODO: implement a timeout
    System.out.println("### Sender waiting for ACK");
    try {
        while (!ACKreceived) {
            byte[] buf = new byte[1500]; // actual Ethernet packet size is 1500 bytes
            // receive request
            DatagramPacket packet = new DatagramPacket(buf, buf.length);
            socket.receive(packet); // <--- HANGS RIGHT HERE
            byte[] packetData = Arrays.copyOf(packet.getData(), packet.getLength());
            ACKreceived = checkACK(packetData); // check the received packet contains an ACK message
        }
        System.out.println("### Sender received ACK");
    } catch (Exception e) {
        System.out.println("### never got ACK");
        System.out.println(e);
    }
}
You're leaking sockets.
Don't create a new socket just to wait for an ACK. You should have exactly one DatagramSocket open for the life of the application.
Try using the netstat command to check whether another program (or even your own program) is active on the port. On Unix, netstat -lp run as root will show you; netstat exists on Windows too, with different command-line options.
Before we get into the problems with your code: Why is the client trying to listen on port 49000?
If you don't already realize this: the local port and peer port do not have to be the same, and generally are not. When you call DatagramSocket(), you get an arbitrary local port assigned by the OS. The fact that you sent to 49000 doesn't change your local port. And if the other side of the connection just sends back to the tuple it received a packet from, that won't arrive at 49000, it will arrive at your local port.
If that's your problem, the fix is to use the second version (just use your existing socket to listen as well as sending), and then fix the other side (that you haven't shown us the code for) to send the ACK to the complete address tuple of the packet's sender, not port 49000 on the sender's host.
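For example, the receiver side could build its ACK straight from the incoming packet's source address. A sketch (your receiver code isn't shown, so the names here are mine):
import java.io.IOException;
import java.net.DatagramPacket;
import java.net.DatagramSocket;

// Receiver-side sketch: reply to whatever address/port the datagram came from,
// instead of assuming port 49000 on the sender's host.
static void receiveAndAck(DatagramSocket receiverSocket) throws IOException {
    DatagramPacket request = new DatagramPacket(new byte[1500], 1500);
    receiverSocket.receive(request);

    byte[] ack = "ACK".getBytes();                // whatever checkACK() expects
    DatagramPacket reply = new DatagramPacket(ack, ack.length,
            request.getAddress(),                 // sender's IP, taken from the packet
            request.getPort());                   // sender's actual (ephemeral) port
    receiverSocket.send(reply);
}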
If you realize that, but think that both sides need to have local port 49000 for some reason… well, they probably don't. Generally, a protocol needs one side (the "server") to have a well-known port to connect, but the other side (the "client") doesn't need that. That's why you can use DatagramSocket() instead of DatagramSocket(49000) on the client and things work.
Again, same fix.
In the rare cases where both sides really do need to have a well-known port (e.g., so you can explicitly open it in your company's internal firewalls), you almost certainly want the sending to also happen on that port.
So, instead of creating a DatagramSocket() to send from, and a DatagramSocket(49000) to listen on, just create a DatagramSocket(49000) in the first place and use it for both.
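A sketch of what that looks like on the sending side (the method shape is mine; packetData and internetAddress are the variables from your question):
import java.io.IOException;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.util.Arrays;

// Sketch: one socket, bound once, used both to send data and to wait for the ACK.
static byte[] sendAndWaitForAck(DatagramSocket socket,   // e.g. new DatagramSocket(49000), created once
                                byte[] packetData,
                                InetAddress internetAddress) throws IOException {
    DatagramPacket packet = new DatagramPacket(packetData, packetData.length,
            internetAddress, 49000);
    socket.send(packet);

    DatagramPacket ack = new DatagramPacket(new byte[1500], 1500);
    socket.receive(ack);                                  // same socket, no second bind
    return Arrays.copyOf(ack.getData(), ack.getLength());
}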
However, note that this solution, like any solution that uses a fixed port, has two additional problems:
First, if client and server both want to bind port 49000, they can't both run on the same machine. You can renumber one of them to 49001, or just accept that.
Second, if you expect to start and stop the client frequently, it's often going to try to bind port 49000 while the OS still has a socket for that port in TIME_WAIT state, so you're going to get a bind error. This is what SO_REUSEADDR is for; use it.
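In Java that means creating the socket unbound first, because the constructor that takes a port binds immediately. A sketch:
import java.net.DatagramSocket;
import java.net.InetSocketAddress;
import java.net.SocketException;

// Sketch: create the socket unbound, enable SO_REUSEADDR, then bind the fixed port.
static DatagramSocket boundWithReuse(int port) throws SocketException {
    DatagramSocket socket = new DatagramSocket(null);   // null => create it unbound
    socket.setReuseAddress(true);                        // must be set before bind()
    socket.bind(new InetSocketAddress(port));
    return socket;
}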
What if you really do want to use an arbitrary-port sender on the client, but a fixed-port listener? There are some cases where that makes sense, but unless you can explain why you really need this, you don't have one.
If you do, then, and only then, could you use something like your first version. But you still probably want to create the listener socket once, not each time you listen for ACKs; it just should be a different attribute from the sending socket. (And of course you still need to deal with the same things as in the last section.)
And if you really do want to create a new listener socket for each ACK, then you have to make sure you close it immediately, rather than waiting for the Java GC and the OS to collectively get around to closing it for you; otherwise, the next time you wait for an ACK you're likely to get a bind error, because the old listener socket is still bound to the port.

How do I communicate with all threads on a Multithreaded server?

Ok. I'm trying to grasp some multithreading Java concepts. I know how to set up a multiclient/server solution. The server will start a new thread for every connected client.
Conceptually like this...
The loop in Server.java:
while (true) {
    Socket socket = serverSocket.accept();
    System.out.println(socket.getInetAddress().getHostAddress() + " connected");
    new ClientHandler(socket).start();
}
The ClientHandler.java loop is:
while (true) {
    try {
        myString = (String) objectInputStream.readObject();
    } catch (ClassNotFoundException | IOException e) {
        break;
    }
    System.out.println(myClientAddress + " sent " + myString);
    try {
        objectOutputStream.writeObject(someValueFromTheServer);
        objectOutputStream.flush();
    } catch (IOException e) {
        return;
    }
}
This is just a concept to grasp the idea. Now I want the server to be able to send the same object or data, at the same time, to all clients.
So somehow I must get the Server to speak to every single thread. Let's say I want the server to generate random numbers with a certain time interval and send them to the clients.
Should I use properties in the Server that the threads can access? Is there a way to just call a method in the running threads from the main thread? I have no clue where to go from here.
Bonus question:
I have another problem too, which might be hard to see in this code: I want every client to be able to receive messages from the server AND send messages to the server independently. Right now I can get the Client to stand and wait for my GUI to give it something to send. After sending, the Client waits for the server to send something back, which it then gives to the GUI. You can see that my ClientHandler has that problem too.
This means that while the Client is waiting for the server to send something, it cannot send anything new to the server. Also, while the Client is waiting for the GUI to give it something to send, it cannot receive from the server.
I have only ever made a server/client app that uses the server to process data it receives from the Client, and then sends the processed data back.
Could anyone point me in a direction with this? I think I need help with how to think about it conceptually. Should I have two different ClientHandlers, one for the input stream and one for the output stream? I'm fumbling in the dark here.
"Is there a way to just call a method in the running threads from the main thread?"
No.
One simple way to solve your problem would be to have the "server" thread send the broadcast to every client. Instead of simply creating new Client objects and letting them go (as in your example), it could keep all of the active Client objects in a collection. When it's time to send a broadcast message, it could iterate over all of the Client objects, and call a sendBroadcast() method on each one.
Of course, you would have to synchronize each client thread's use of a Client object outputStream with the server thread's use of the same stream. You also might have to deal with client connections that don't last forever (their Client objects must somehow be removed from the collection.)
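A minimal sketch of that idea (class shapes and names here are illustrative, not from your code):
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.net.Socket;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

class ClientHandler extends Thread {
    private final ObjectOutputStream out;

    ClientHandler(Socket socket) throws IOException {
        this.out = new ObjectOutputStream(socket.getOutputStream());
    }

    // Called from the server thread. It synchronizes on the handler so broadcasts
    // cannot interleave with the replies written by this handler's own run() loop
    // (the run() loop should write through a synchronized method on 'this' as well).
    synchronized void sendBroadcast(Object message) throws IOException {
        out.writeObject(message);
        out.flush();
    }

    // run() would read requests and write replies, as in the question.
}

class Server {
    // CopyOnWriteArrayList tolerates handlers being added/removed mid-iteration.
    private final List<ClientHandler> handlers = new CopyOnWriteArrayList<ClientHandler>();

    void onAccept(Socket socket) throws IOException {
        ClientHandler handler = new ClientHandler(socket);
        handlers.add(handler);
        handler.start();
    }

    void broadcast(Object message) {        // e.g. the periodically generated random number
        for (ClientHandler handler : handlers) {
            try {
                handler.sendBroadcast(message);
            } catch (IOException e) {
                handlers.remove(handler);   // drop clients whose connection has died
            }
        }
    }
}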

Receiving multiple UDP packets while updating UI

I have an application where I am receiving information from a server and then showing that information on the screen for the user. Since there is a lot of information, I would like to update the UI as I receive the information.
Sending/Receiving is done on a separate thread.
Two questions:
How can I best receive multiple UDP packets?
My current code for receiving one packet
try {
    Log.i(TAG, "Listening...");
    _dcOut.setSoTimeout(20000);
    _dcOut.receive(packet); /* Wait to receive a datagram */
    haveDatagram = true;
    Log.d(TAG, "dc_out, received...");
} catch (Exception e) { // can be just a time out
    haveDatagram = false;
    Log.d(TAG, "dc_out, failed to receive...");
}
Is it possible to update UI while receiving multiple UDP packets?
Edit:
I am using a bound service to get the information from the server (AIDL, to be specific). Here is the setup:
Either I:
1. get an individual object and send it back and that's that for that particular instance of the service or
2. I can send back a List of them for that service
My idea is that I should send back a list of say, 5-10 objects, and repeat that for a while?
--I feel like there isn't a way for me to be updating the UI while receiving the packets with this service setup--
If the receiving of UDP packets is done on a separate thread, there should be no problem showing the data on your GUI!
Your code only shows receiving UDP data; I need more info to be specific :)
Only one UDP socket handles incoming data on a specific port; the packets will all be stored sequentially in a buffer dedicated to that specific process.
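As a sketch (not tied to your AIDL setup; showOnScreen() and the port number are placeholders), the receive loop can stay on its own thread and hand each payload to the UI thread as it arrives:
import android.os.Handler;
import android.os.Looper;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.util.Arrays;

// Sketch: the receive loop blocks a background thread only; every payload is
// posted to the main (UI) thread via a Handler as soon as it arrives.
void startReceiving(final Handler uiHandler) {         // e.g. new Handler(Looper.getMainLooper())
    new Thread(new Runnable() {
        @Override public void run() {
            DatagramSocket socket = null;
            try {
                socket = new DatagramSocket(49000);    // placeholder port
                socket.setSoTimeout(20000);
                byte[] buf = new byte[1500];
                while (!Thread.currentThread().isInterrupted()) {
                    DatagramPacket packet = new DatagramPacket(buf, buf.length);
                    socket.receive(packet);            // blocks this thread, not the UI
                    final byte[] payload = Arrays.copyOf(packet.getData(), packet.getLength());
                    uiHandler.post(new Runnable() {
                        @Override public void run() { showOnScreen(payload); }  // hypothetical UI update
                    });
                }
            } catch (Exception e) {
                // timeout or closed socket: stop receiving
            } finally {
                if (socket != null) socket.close();
            }
        }
    }).start();
}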

How to ignore messages from disconnected channel

I'm implementing a simple Netty server for a multiplayer game; I'm just trying to figure out Netty.
I test the server via telnet. What I've done is broadcast the messages to all channels, which works smoothly. I also remove channels from the map on the close event, which is fine.
But the problem is that if one of the clients disconnects unexpectedly, the messageReceived callback can be called before the closed callback, with the disconnected channel as the sender.
How can I properly ignore a message that comes from a disconnected client?
I use a StringBuffer in messageReceived, but in this case StringBuffer.toString() is not a proper string either. In the end the disconnected channel broadcasts a pointless message to the other channels and to itself, and when the receiver channel is itself, it throws a "Connection reset by peer" exception, which is normal, because the channel itself is not available at the moment.
Here is the code:
@Override
public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
    System.out.println();
    System.out.println("------------------");
    Channel current = e.getChannel();
    System.out.println("SenderChannel:" + current.getId());
    if (!current.isOpen())
        System.out.println("Not Open");
    ChannelBuffer buf = (ChannelBuffer) e.getMessage();
    StringBuffer sbs = new StringBuffer();
    while (buf.readable()) {
        sbs.append((char) buf.readByte());
    }
    String s = sbs.toString();
    System.out.println(s);
    String you = "You:" + s;
    String other = "Other:" + s;
    byte[] uResponse = you.getBytes();
    byte[] otherResponse = other.getBytes();
    Iterator iterator = channelList.entrySet().iterator();
    while (iterator.hasNext()) {
        Map.Entry pairs = (Map.Entry) iterator.next();
        Integer key = (Integer) pairs.getKey();
        Channel c = (Channel) pairs.getValue();
        System.out.println("ReceiverChannel:" + c.getId());
        if (!key.equals(current.getId()))   // compare Integer values, not references
            c.write(ChannelBuffers.wrappedBuffer(otherResponse));
        else
            c.write(ChannelBuffers.wrappedBuffer(uResponse));
    }
}

@Override
public void channelDisconnected(ChannelHandlerContext ctx, ChannelStateEvent e) {
    Channel ch = e.getChannel();
    channelList.remove(ch.getId());
    System.out.println();
    System.out.println("*****************");
    System.out.println("DisconnectEvent:" + ch.getId());
    System.out.println("*****************");
    System.out.println();
    ch.close();
}
You can't solve the problem in the manner that you would like. If there's a network problem then technically the sender could disconnect at any time, for example
as soon as the thread enters messageReceived
while you're iterating through channelList
while you're iterating through channelList but after you've echoed back to the sender
after you've broadcast the message
Netty can't raise the disconnected event while messageReceived is processing because you're running in the thread that will raise the event (unless you have a non-ordered execution handler in your pipeline). The correct solution really depends on your application. If the broadcast results in all the other receivers responding it's probably better / easier to have the server suppress any messages destined for a client that's no longer connected.
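For instance, inside the broadcast loop from your messageReceived, a minimal guard using the Netty 3.x Channel methods might look like this (a sketch, reusing the c and otherResponse variables from the question):
// Sketch: skip receivers whose channel is already gone or temporarily not writable.
if (c.isConnected() && c.isWritable()) {
    c.write(ChannelBuffers.wrappedBuffer(otherResponse));
}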
Also, if you're really going to use strings then take a look at StringEncoder / StringDecoder. There's no guarantee in your code that the message event buffer contains a complete string.
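A pipeline along these lines takes care of both framing and string conversion before your handler runs. This is a sketch that assumes a ServerBootstrap named bootstrap and your handler wrapped up as MyGameHandler, and uses the Netty 3.x line-delimited frame decoder (frame size and delimiter are assumptions):
import org.jboss.netty.channel.ChannelPipeline;
import org.jboss.netty.channel.ChannelPipelineFactory;
import org.jboss.netty.channel.Channels;
import org.jboss.netty.handler.codec.frame.DelimiterBasedFrameDecoder;
import org.jboss.netty.handler.codec.frame.Delimiters;
import org.jboss.netty.handler.codec.string.StringDecoder;
import org.jboss.netty.handler.codec.string.StringEncoder;

// Sketch: with these handlers in front, messageReceived() gets complete Strings,
// so the manual byte-by-byte StringBuffer loop goes away.
bootstrap.setPipelineFactory(new ChannelPipelineFactory() {
    public ChannelPipeline getPipeline() {
        return Channels.pipeline(
                new DelimiterBasedFrameDecoder(8192, Delimiters.lineDelimiter()),
                new StringDecoder(),    // inbound bytes   -> String
                new StringEncoder(),    // outbound String -> bytes
                new MyGameHandler());   // the handler from the question
    }
});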
Just put a try/catch around each send. If one of them fails, close the corresponding channel.
If this is for a multiplayer game server, it might be better to use an existing Netty game server solution like java game server. Disconnects become events which get sent to the session, and since it is event driven, you could write your own handler to decide whether or not to receive any more events on the same session. Since events are queued in FIFO order, if a disconnect happens you need not go ahead with subsequent broadcasts.
I am not a Java developer, but from a socket point of view this data is either in the buffer or was sent before the user disconnected. So when you start receiving, the user is still connected, and right as the receive completes, the user has already disconnected. I think the best way to prevent this is to check whether the user is still connected after each receive.
In C# I personally use this code to check whether the client is still connected:
if (client.Poll(0, SelectMode.SelectRead))
{
    byte[] checkConn = new byte[1];
    if (client.Receive(checkConn, SocketFlags.Peek) == 0)
        return false;
}
return true;
I am not sure about Java and Netty (or whether your connection is TCP), but this is what I use, and it should be easy to convert to Java.

Java Sockets and Dropped Connections

What's the most appropriate way to detect if a socket has been dropped or not? Or whether a packet did actually get sent?
I have a library for sending Apple Push Notifications to iPhones through the Apple gateways (available on GitHub). Clients need to open a socket and send a binary representation of each message; but unfortunately Apple doesn't return any acknowledgement whatsoever. The connection can be reused to send multiple messages as well. I'm using simple Java Socket connections. The relevant code is:
Socket socket = socket(); // returns a reused open socket, or a new one
socket.getOutputStream().write(m.marshall());
socket.getOutputStream().flush();
logger.debug("Message \"{}\" sent", m);
In some cases, if a connection is dropped while a message is being sent, or right before, Socket.getOutputStream().write() still finishes successfully. I expect that's because the TCP window isn't exhausted yet.
Is there a way that I can tell for sure whether a packet actually got in the network or not? I experimented with the following two solutions:
Insert an additional socket.getInputStream().read() operation with a 250ms timeout. This forces a read operation that fails when the connection was dropped, but otherwise blocks for 250ms.
Set the TCP send buffer size (e.g. Socket.setSendBufferSize()) to the message's binary size.
Both of the methods work, but they significantly degrade the quality of the service; throughput goes from a 100 messages/second to about 10 messages/second at most.
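Roughly, the first probe looks like this (a sketch of the idea, not the library's exact code):
import java.io.IOException;
import java.net.Socket;
import java.net.SocketTimeoutException;

// Sketch of the read-probe workaround: a dropped connection shows up as -1 or an
// IOException; otherwise we just burn up to 250 ms waiting for data that never comes.
static boolean seemsConnected(Socket socket) throws IOException {
    socket.setSoTimeout(250);
    try {
        return socket.getInputStream().read() != -1;
    } catch (SocketTimeoutException nothingToRead) {
        return true;   // no data within 250 ms, but the connection looks healthy
    }
}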
Any suggestions?
UPDATE:
Since multiple answers questioned whether the described behavior is possible, I constructed "unit" tests of the behavior I'm describing. Check out the unit cases at Gist 273786.
Both unit tests have two threads, a server and a client. The server closes while the client is sending data, without an IOException being thrown at all. Here is the main method:
public static void main(String[] args) throws Throwable {
    final int PORT = 8005;
    final int FIRST_BUF_SIZE = 5;
    final Throwable[] errors = new Throwable[1];
    final Semaphore serverClosing = new Semaphore(0);
    final Semaphore messageFlushed = new Semaphore(0);

    class ServerThread extends Thread {
        public void run() {
            try {
                ServerSocket ssocket = new ServerSocket(PORT);
                Socket socket = ssocket.accept();
                InputStream s = socket.getInputStream();
                s.read(new byte[FIRST_BUF_SIZE]);
                messageFlushed.acquire();
                socket.close();
                ssocket.close();
                System.out.println("Closed socket");
                serverClosing.release();
            } catch (Throwable e) {
                errors[0] = e;
            }
        }
    }

    class ClientThread extends Thread {
        public void run() {
            try {
                Socket socket = new Socket("localhost", PORT);
                OutputStream st = socket.getOutputStream();
                st.write(new byte[FIRST_BUF_SIZE]);
                st.flush();
                messageFlushed.release();
                serverClosing.acquire(1);
                System.out.println("writing new packets");
                // sending more packets while server already
                // closed connection
                st.write(32);
                st.flush();
                st.close();
                System.out.println("Sent");
            } catch (Throwable e) {
                errors[0] = e;
            }
        }
    }

    Thread thread1 = new ServerThread();
    Thread thread2 = new ClientThread();
    thread1.start();
    thread2.start();
    thread1.join();
    thread2.join();
    if (errors[0] != null)
        throw errors[0];
    System.out.println("Run without any errors");
}
[Incidentally, I also have a concurrency testing library that makes the setup a bit better and clearer. Check out the sample at the gist as well.]
When run I get the following output:
Closed socket
writing new packets
Finished writing
Run without any errors
This may not be of much help to you, but technically both of your proposed solutions are incorrect. OutputStream.flush() and whatever other API calls you can think of are not going to do what you need.
The only portable and reliable way to determine if a packet has been received by the peer is to wait for a confirmation from the peer. This confirmation can either be an actual response, or a graceful socket shutdown. End of story - there really is no other way, and this is not Java specific - it is fundamental network programming.
If this is not a persistent connection - that is, if you just send something and then close the connection - the way you do it is you catch all IOExceptions (any of them indicate an error) and you perform a graceful socket shutdown:
1. socket.shutdownOutput();
2. wait for inputStream.read() to return -1, indicating the peer has also shut down its socket (see the sketch below)
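In code, that boils down to something like this sketch:
import java.io.IOException;
import java.io.InputStream;
import java.net.Socket;

// Sketch of the graceful shutdown described above.
static void finishAndConfirm(Socket socket) throws IOException {
    socket.shutdownOutput();                 // we're done sending; the peer sees EOF
    InputStream in = socket.getInputStream();
    while (in.read() != -1) {
        // drain anything the peer still sends; -1 means it has shut down too
    }
    socket.close();
}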
After much trouble with dropped connections, I moved my code to use the enhanced notification format, which pretty much means changing the packet you send.
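Roughly, from memory of the legacy binary interface (double-check the exact field sizes against Apple's documentation), the enhanced command-1 frame is built like this:
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Rough sketch of the enhanced (command 1) notification frame; verify the layout
// against Apple's documentation before relying on it.
static byte[] enhancedFrame(int identifier, int expiryEpochSeconds,
                            byte[] deviceToken, byte[] payload) throws IOException {
    ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    DataOutputStream out = new DataOutputStream(bytes);
    out.writeByte(1);                    // command: 1 = enhanced format
    out.writeInt(identifier);            // echoed back in the error response
    out.writeInt(expiryEpochSeconds);    // expiry timestamp
    out.writeShort(deviceToken.length);  // token length (32 bytes)
    out.write(deviceToken);
    out.writeShort(payload.length);      // payload length
    out.write(payload);                  // the JSON payload bytes
    return bytes.toByteArray();
}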
This way Apple will not drop a connection if an error happens, but will write a feedback code to the socket.
If you're sending information to Apple using the TCP/IP protocol, you have to be receiving acknowledgements. However, you stated:
Apple doesn't return any acknowledgement whatsoever
What do you mean by this? TCP/IP guarantees delivery, therefore the receiver MUST acknowledge receipt. It does not guarantee when the delivery will take place, however.
If you send a notification to Apple and you break your connection before receiving the ACK, there is no way to tell whether you were successful or not, so you simply must send it again. If pushing the same information twice is a problem, or is not handled properly by the device, then there is a problem; the solution is to fix the device's handling of duplicate push notifications, since there's nothing you can do on the pushing side.
#Comment Clarification/Question
OK. The first part of what you understand is your answer to the second part: only the packets that have received ACKs have been sent and received properly. I'm sure we could think of some very complicated scheme for keeping track of each individual packet ourselves, but TCP is supposed to abstract this layer away and handle it for you. On your end you simply have to deal with the multitude of failures that could occur (in Java, if any of these occur, an exception is raised). If there is no exception, the data you just tried to send is sent, as guaranteed by the TCP/IP protocol.
Is there a situation where data is seemingly "sent" but not guaranteed to be received where no exception is raised? The answer should be no.
#Examples
Nice examples, this clarifies things quite a bit. I would have thought an error would be thrown. In the example posted, an error is thrown on the second write, but not the first. This is interesting behavior... and I wasn't able to find much information explaining why it behaves like this. It does, however, explain why we must develop our own application-level protocols to verify delivery.
Looks like you are correct that without a protocol for confirmation there is no guarantee the Apple device will receive the notification. Apple also only queues the last message. Looking a little at the service, I was able to determine that it is more of a convenience for the customer; it cannot be used to guarantee delivery and must be combined with other methods. I read this from the following source:
http://blog.boxedice.com/2009/07/10/how-to-build-an-apple-push-notification-provider-server-tutorial/
So it seems the answer is no, you cannot tell for sure. You may be able to use a packet sniffer like Wireshark to tell whether a message was sent, but that still won't guarantee it was received and forwarded to the device, due to the nature of the service.
