Java UDP receive()

I am trying to implement Go-Back-N over UDP sockets in Java. At the client end I have a sender thread and a receiver thread. The sender thread has its own UDP socket to send the data packets, and the receiver thread has its own port to receive acknowledgements. If the receiver doesn't get an ACK before a timeout expires, the packet has to be retransmitted. Both threads run continuously in a while(true) loop. When there are no packet losses (i.e. no timeouts), sending and receiving work fine; but when an ACK isn't received and a timeout fires, the switch to the sender thread (for retransmission) never happens. Execution stays stuck in the receiver's loop, reporting timeouts over and over again.
I even tried calling Thread.sleep() so that the other thread gets a chance to run, but that didn't help either. Any help would be appreciated.
(The slide flag, when set to 0, should ideally trigger retransmission in the other thread.)
while (true) {
    socket.setSoTimeout(10000);
    byte[] buf = new byte[1024];
    DatagramPacket packet = new DatagramPacket(buf, buf.length);
    try {
        socket.receive(packet);
    } catch (Exception e) {
        System.out.println("Socket timeout!");
        ClientMain.setslideFlag(0);
        Thread.sleep(10000);
        continue;
    }
}
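For what it's worth, a common cause of this symptom is that the flag written by one thread is never seen by the other. A minimal sketch, assuming ClientMain holds the shared flag (setslideFlag is the question's name; the volatile field and getslideFlag are my assumptions):

public class ClientMain {
    // Without volatile (or synchronization), the sender thread may never
    // observe the receiver's write and will appear to never "switch".
    private static volatile int slideFlag = 1;

    public static void setslideFlag(int value) { slideFlag = value; }
    public static int getslideFlag() { return slideFlag; }
}

// Sender thread: poll the flag each iteration and retransmit on 0.
while (true) {
    if (ClientMain.getslideFlag() == 0) {
        // ... retransmit the outstanding window here ...
        ClientMain.setslideFlag(1); // re-arm after retransmitting
    }
    // ... otherwise send new packets as the window allows ...
}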


How to interrupt a blocking call to UDP socket's receive() [duplicate]

I have a UDP server listening for packets from a client.
socket = new DatagramSocket(port);
while (isListen) {
    byte[] data = new byte[1024];
    DatagramPacket packet = new DatagramPacket(data, 0, data.length);
    socket.receive(packet);
}
The receive() method waits forever until a packet is received. Is it possible to stop waiting? I can set a boolean isListen to stop the loop, but if the socket is already blocked in receive(), it will wait forever when no packet is sent from the client.
You need to set a socket timeout with the setSoTimeout() method and catch the SocketTimeoutException thrown by the socket's receive() method when the timeout has been exceeded. After catching the exception you can keep using the socket to receive packets, so using this approach in a loop lets you periodically "interrupt" the receive() call, at the interval given by the timeout.
Note that the timeout must be enabled prior to entering the blocking operation.
An example (w.r.t. your code):
socket = new DatagramSocket(port);
socket.setSoTimeout(TIMEOUT_IN_MILLIS);
while (isListen) {
    byte[] data = new byte[1024];
    DatagramPacket packet = new DatagramPacket(data, 0, data.length);
    while (true) {
        try {
            socket.receive(packet);
            break;
        } catch (SocketTimeoutException e) {
            if (!isListen) {
                // implement your shutdown logic here, e.g. return
            }
        }
    }
    // handle the packet received
}
You can close the socket from another thread; the receive() call in the blocked thread will then throw an IOException.
while (isListen) {
    byte[] data = new byte[1024];
    DatagramPacket packet = new DatagramPacket(data, 0, data.length);
    try {
        socket.receive(packet);
    } catch (IOException e) {
        continue;
    }
}

void stopListening() { // call me from some other thread
    isListen = false;
    socket.close();
}

How to set a timeout in a UDP client for when a packet is not sent from server?

I am writing a simple program about UDP socket programming, using datagram sockets. I have to send a packet from the client to the server; the server then decides randomly whether to send a packet back. The client has to accept the packet if it is sent, or wait 2 seconds and assume the packet is lost. I cannot handle the lost-packet case.
System.out.println("Receiving message...");
dsock.receive(dpack); // receive the packet
System.out.println("Message received");
It all works fine if the packet is sent, but how can I handle the situation where no packet is sent while this receive() call is still in the code?
You can set a timeout on the socket and receive messages until the timeout is reached, as seen here:
try {
    dsock = new DatagramSocket();
    byte[] buf = new byte[1000];
    DatagramPacket dpack = new DatagramPacket(buf, buf.length);
    //...
    dsock.setSoTimeout(1000); // set the timeout in milliseconds
    while (true) { // receive data until timeout
        try {
            System.out.println("Receiving message...");
            dsock.receive(dpack); // receive the packet
            System.out.println("Message received");
        } catch (SocketTimeoutException e) {
            // timeout exception
            System.out.println("Timeout reached!!! " + e);
            dsock.close();
            break; // stop receiving once the socket is closed
        }
    }
} catch (SocketException e) {
    System.out.println("Socket closed " + e);
}
You are looking for dsock.setSoTimeout(2 * 1000) (2 * 1000 = 2000 ms = 2 s). Here is the doc:
Enable/disable SO_TIMEOUT with the specified timeout, in milliseconds. With this option set to a non-zero timeout, a call to receive() for this DatagramSocket will block for only this amount of time. If the timeout expires, a java.net.SocketTimeoutException is raised, though the DatagramSocket is still valid. The option must be enabled prior to entering the blocking operation to have effect. The timeout must be > 0. A timeout of zero is interpreted as an infinite timeout.
This will raise a SocketTimeoutException after two seconds, so you have to catch it.
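Putting that together, a minimal sketch (dsock and dpack as in the question; the exact handling of the timeout is up to you):

// imports assumed: java.net.DatagramSocket, java.net.DatagramPacket,
// java.net.SocketTimeoutException
dsock.setSoTimeout(2 * 1000); // 2 seconds
try {
    dsock.receive(dpack); // blocks for at most 2 s
    System.out.println("Message received");
} catch (SocketTimeoutException e) {
    // no reply within 2 s: assume the packet was lost
    System.out.println("Packet assumed lost");
}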

Why does this thread only run once?

I have a server running on a separate thread, and for some reason, it only runs when it receives packets! Why is it doing this? Shouldn't it be running continuously?
public void run() {
    while (running) {
        System.out.println(true);
        byte[] data = new byte[1024];
        DatagramPacket packet = new DatagramPacket(data, data.length);
        try {
            this.socket.receive(packet);
        } catch (IOException e) {
            e.printStackTrace();
        }
        parsePacket(packet.getData(), packet.getAddress(), packet.getPort());
    }
}
And I start it like this:
public static void main(String[] args) {
    GameServer server = new GameServer();
    server.start();
}
The class extends Thread.
socket.receive() is a blocking method, so the code waits in receive() until some data is received.
From the documentation:
public void receive(DatagramPacket p) throws IOException
Receives a datagram packet from this socket. When this method returns, the DatagramPacket's buffer is filled with the data received. The datagram packet also contains the sender's IP address, and the port number on the sender's machine.
This method blocks until a datagram is received. The length field of the datagram packet object contains the length of the received message. If the message is longer than the packet's length, the message is truncated.
It clearly says that the method blocks and waits for a datagram.
Your thread is running correctly. The method DatagramSocket.receive(DatagramPacket) blocks until a packet is received.
The default behaviour is to block indefinitely until a packet arrives. You can specify a timeout using DatagramSocket.setSoTimeout(int) if you want to periodically log whether a packet was received, or to check whether your thread is still running.
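For instance, a sketch of the receive loop with a one-second timeout (the timeout value and the empty timeout branch are illustrative, not from the question):

socket.setSoTimeout(1000); // wake up at least once per second
while (running) {
    byte[] data = new byte[1024];
    DatagramPacket packet = new DatagramPacket(data, data.length);
    try {
        socket.receive(packet);
        parsePacket(packet.getData(), packet.getAddress(), packet.getPort());
    } catch (SocketTimeoutException e) {
        // no packet this second; loop around and re-check 'running'
    } catch (IOException e) {
        e.printStackTrace();
    }
}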

Broadcast server discovery

I'm creating a game in Java that simulates classic 5-card poker with 2 to 4 players.
Most of the data will be processed by a server, but since I can't use an online server, my idea is to let a user host a game by creating a local one.
Now, I don't want to force users to type in IPs to connect to a game, so I created a "discovery" interface in which the user can see all available games. This is done using UDP and a broadcast query on a common multicast group:
(the code is simplified to show only the actions that are executed; it may not work exactly as shown here)
Client
MulticastSocket socket = new MulticastSocket(6020);
InetAddress group = InetAddress.getByName("226.0.0.1");
socket.joinGroup(group);

DatagramPacket packet = new DatagramPacket(new byte[] {(byte) 0xF0}, 1, group, 6020);
socket.send(packet);

byte[] buf;
while (true) {
    buf = new byte[1];
    packet = new DatagramPacket(buf, buf.length);
    socket.receive(packet);
    if (packet.getData()[0] == 15) {
        Socket client = new Socket(packet.getAddress(), 6020);
    }
}
Server
MulticastSocket socket = new MulticastSocket(6020);
InetAddress group = InetAddress.getByName("226.0.0.1");
socket.joinGroup(group);

// new thread listening on port 6020 TCP
ServerSocket server = new ServerSocket(6020);
new Thread(new Runnable() {
    public void run() {
        while (true) {
            // new thread communicating with the client, then back to listening on port 6020
            new ServerThread(server.accept());
        }
    }
}).start();

// listening on port 6020 UDP
byte[] buf;
DatagramPacket packet;
while (true) {
    buf = new byte[1];
    packet = new DatagramPacket(buf, buf.length);
    socket.receive(packet);
    if (packet.getData()[0] == -16) {
        DatagramPacket reply = new DatagramPacket(new byte[] {(byte) 0x0F}, 1, packet.getSocketAddress());
        socket.send(reply);
    }
}
The client sends a UDP broadcast packet on port 6020. When a server receives this packet, if it consists of the byte 0xF0, it sends the byte 0x0F back to the client. Every client also listens on port 6020, and when it receives a packet consisting of the byte 0x0F it opens a new TCP connection to the server on port 6020.
My question: is there a better way to achieve this "discovery" system?
I know this will only work on local networks; is it possible to extend the discovery "outside" using a local server?
Unless you want to set up some sort of known broker that can connect players with servers (or give them a listing of servers), you may be out of luck. As you discovered, multicast and broadcast traffic is generally not forwarded to the WAN by routers (and definitely cannot traverse the Internet).
If your issue with setting up a known server/broker is that you have a home connection and therefore a dynamic IP, I would recommend looking into dynamic DNS. There are a number of providers that will let you set up a sub-domain on their system that is automatically updated to point to your IP as it changes.

Java Datagram Sockets not receiving packets

I'm attempting to use Java datagram sockets to create a packet stream between a server and a client. The problem is that, although I get confirmation that packets are being sent, they are all lost before they reach the client listener I set up. Right now there is a 5-second timeout on the receive, and it is hit every time I run the program.
class DGServer extends Thread
{
    private DatagramSocket server;

    public DGServer() throws IOException
    {
        server = new DatagramSocket();
    }

    public void run()
    {
        try
        {
            server.connect(App.Local, 4200);
            System.out.println("Server starting...");
            int i = 0;
            while (server.isConnected() && (i < 256))
            {
                byte[] buffer = new byte[1];
                buffer[0] = (byte) ++i;
                DatagramPacket packet = new DatagramPacket(buffer, buffer.length, App.Local, 4200);
                System.out.println("Sending " + i + " to client...");
                server.send(packet);
                Thread.sleep(500);
            }
        }
        catch (Exception e)
        {
            e.printStackTrace();
        }
        System.out.println("Server Finished!");
        if (!server.isClosed())
            server.close();
    }
}
class DGClient extends Thread
{
    private DatagramSocket client;

    public DGClient() throws SocketException
    {
        client = new DatagramSocket();
    }

    public void run()
    {
        try
        {
            client.connect(App.Local, 4200);
            client.setSoTimeout(5000);
            System.out.println("Client starting...");
            int i = 0;
            while (client.isConnected() && (i < 256))
            {
                byte[] buffer = new byte[1];
                DatagramPacket packet;
                packet = new DatagramPacket(buffer, 1, App.Local, 4200);
                //System.out.println("Sending " + i + " to server...");
                client.receive(packet);
                buffer = packet.getData();
                System.out.println("Client Received:\t" + packet.getData()[0]);
                Thread.sleep(500);
            }
        }
        catch (Exception e)
        {
            e.printStackTrace();
        }
        System.out.println("Client Finished!");
        if (!client.isClosed())
            client.close();
    }
}
You may choose to skim over the second class; the two are largely the same, it just replaces server.send with client.receive. Also, this class was not designed to do anything important, so a lot of the code (like exception handling) is written very simplistically.
Is there anything I can do to prevent the loss of packets? I have the port forwarded on my computer (not that it should matter; I'm using localhost, which is App.Local in case you wondered).
Also, a side question: I originally had it set up as a single class, coded to send a packet and then turn around and receive one, but it threw an exception because the "ICMP Port is unreachable". Does anyone know why this happens?
OK, first off: I think you are testing both the server and the client at the same time, so you have no idea which one fails.
You should use either netcat (nc) or Wireshark to test the client.
With netcat, you can run the following command:
nc -l -u -p 4200 -vv
This tells netcat to listen (-l) on UDP (-u) on port 4200 (-p 4200) and be very verbose (-vv).
This way you'll be able to check if your client can connect to anything.
You can use the same program to check if your server can receive connections from a known working program with:
nc -u [target ip] 4200
There is a netcat cheatsheet here.
You can also run netcat against netcat to check whether it is purely a network issue; maybe the firewalls/NAT aren't configured correctly.
Why are both the server and the client doing a connect?
Shouldn't one side just be sending the data?
Something like:
DatagramSocket socket = new DatagramSocket();
DatagramPacket packet = new DatagramPacket(buf, buf.length, address, 4200);
socket.send(packet);
It sounds to me like there is a packet filter / firewall interfering with UDP traffic between the client and server on the port you are using. It could be simple packet filtering, it could be NAT (which interferes with UDP traffic unless you take special steps), or it could be some accidental network misconfiguration.
But it threw an exception because the 'ICMP Port is unreachable'. Does anyone know why this happens?
IMO, this is more evidence of packet filtering.
(However, it's also a bit unexpected that you would receive this in response to trying to send a datagram. I'd simply expect there to be no response at all, and any ICMP responses to a UDP request to be dropped on the floor by the OS. But I may be wrong about this ...
Now, if you were using a regular stream socket, e.g. TCP/IP, this behaviour would be understandable.)
You're not binding the sending socket to a specific port number, so the client won't be able to send to it if it is connected. I suspect you have the same problem in reverse as well, i.e. the client socket isn't bound to port 4200. That would explain everything.
I would get rid of the connects, use explicit port numbers when sending, and use the incoming port number when replying.
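A sketch of that approach, with a fixed port on the server side and no connect() (the port numbers and loopback address are illustrative):

// Server side: bound to 4200, replies to whatever address/port the request came from.
DatagramSocket server = new DatagramSocket(4200);
byte[] buf = new byte[1];
DatagramPacket request = new DatagramPacket(buf, buf.length);
server.receive(request);
DatagramPacket reply = new DatagramPacket(buf, buf.length,
        request.getAddress(), request.getPort());
server.send(reply);

// Client side: its own socket (any free port), sending explicitly to the server's port.
DatagramSocket client = new DatagramSocket();
DatagramPacket out = new DatagramPacket(new byte[] { 1 }, 1,
        InetAddress.getLoopbackAddress(), 4200);
client.send(out);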
I am sure you are aware that UDP is a lossy protocol and that you have allowed for this; still, you should expect to get some packets.
I suggest you test whether your program works with the client and server on the same host, and then on different hosts, avoiding any firewalls. If it works on the same host but not across hosts, you have a network configuration issue.
If you are running both of them on the same machine, this is never going to work, because you are connecting both the server and the client to the same port.
