I have a server running on a separate thread, and for some reason, it only runs when it receives packets! Why is it doing this? Shouldn't it be running continuously?
public void run() {
    while (running) {
        System.out.println(true);
        byte[] data = new byte[1024];
        DatagramPacket packet = new DatagramPacket(data, data.length);
        try {
            this.socket.receive(packet);
        } catch (IOException e) {
            e.printStackTrace();
        }
        parsePacket(packet.getData(), packet.getAddress(), packet.getPort());
    }
}
And I start it like this:
public static void main(String[] args) {
    GameServer server = new GameServer();
    server.start();
}
The class extends Thread.
socket.receive is a blocking method, so the code is waiting in receive until some data arrives.
From the Javadoc for DatagramSocket.receive:
public void receive(DatagramPacket p)
throws IOException
Receives a datagram packet from this socket. When this method returns, the DatagramPacket's buffer is filled with the data received. The datagram packet also contains the sender's IP address, and the port number on the sender's machine.

This method blocks until a datagram is received. The length field of the datagram packet object contains the length of the received message. If the message is longer than the packet's length, the message is truncated.
It clearly says that the method blocks and waits for a datagram.
Your Thread is running correctly. The method DatagramSocket.receive(DatagramPacket) blocks until a packet is received.
The default behaviour is to block indefinitely until a packet is received. You can specify a timeout using DatagramSocket.setSoTimeout(int) if you want to periodically log whether a packet was received or not, or to check whether your Thread is still running.
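As a rough sketch (reusing the socket, running and parsePacket members from the question, assuming a 1-second timeout is acceptable, and with java.net.SocketException and java.net.SocketTimeoutException imported), the run() method might look like this:
public void run() {
    try {
        // With SO_TIMEOUT set, receive() gives up after 1000 ms instead of
        // blocking forever, so the loop keeps spinning even with no traffic.
        this.socket.setSoTimeout(1000);
    } catch (SocketException e) {
        e.printStackTrace();
    }
    while (running) {
        byte[] data = new byte[1024];
        DatagramPacket packet = new DatagramPacket(data, data.length);
        try {
            this.socket.receive(packet);
        } catch (SocketTimeoutException e) {
            // No packet within the last second; loop again so the thread
            // can notice running == false and shut down cleanly.
            continue;
        } catch (IOException e) {
            e.printStackTrace();
            continue;
        }
        parsePacket(packet.getData(), packet.getAddress(), packet.getPort());
    }
}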
So I am experimenting with socket buffer sizes and have created 2 test cases.
Case 1 [server sends data to client]
First we have a server which sends 100 bytes of data to a client:
public static void main(String[] args)throws Exception
{
    try(ServerSocket server=new ServerSocket())
    {
        server.bind(new InetSocketAddress(InetAddress.getLocalHost(),2500));
        byte[] data=new byte[100];
        for(int i=0;i<data.length;i++){data[i]=(byte)i;}
        while(true)
        {
            try(Socket client=server.accept())
            {
                try(OutputStream output=client.getOutputStream())
                {
                    output.write(data);
                }
            }
        }
    }
}
And next we have a client which simply reads the 100 bytes received and prints them, but I have set the receive buffer size to 25 bytes. Therefore I expect the server to send 4 packets, each of size 25 bytes.
public static void main(String[] args)throws Exception
{
    try(Socket client=new Socket())
    {
        //receive window as 25 bytes
        client.setReceiveBufferSize(25);
        client.setSoLinger(true,0);
        client.bind(new InetSocketAddress("192.168.1.2",5000));
        client.connect(new InetSocketAddress(InetAddress.getLocalHost(),2500),2000);
        try(InputStream input=client.getInputStream())
        {
            int length;
            byte[] data=new byte[100];
            while((length=input.read(data))>0)
            {
                System.out.println(Arrays.toString(Arrays.copyOfRange(data,0,length)));
                System.out.println("============");
            }
        }
    }
}
And sure enough, upon examining the packets sent from the server end using Wireshark, I see 4 packets [ignore the malformed packets, as I get no errors when receiving data on the client side; sometimes Wireshark shows them as malformed and sometimes not], each with a payload of size 25 bytes.
Further confirmation that each packet is 25 bytes comes from examining the ACK window packets sent from the client side.
So Case 1 is complete: upon setting the receive buffer size on the client side, the server sends packets of that size.
Case 2 [client sends data to server]
The roles are reversed, and this time we set the ServerSocket receive buffer size before binding, as follows:
public static void main(String[] args)throws Exception
{
    try(ServerSocket server=new ServerSocket())
    {
        //server receive window
        server.setReceiveBufferSize(10);
        server.bind(new InetSocketAddress(InetAddress.getLocalHost(),2500));
        int length;
        byte[] data=new byte[100];
        while(true)
        {
            try(Socket client=server.accept())
            {
                try(InputStream input=client.getInputStream())
                {
                    while((length=input.read(data))>0)
                    {
                        System.out.println(Arrays.toString(Arrays.copyOfRange(data,0,length)));
                        System.out.println("============");
                    }
                }
            }
        }
    }
}
And the client sends data with no additional settings as follows
public static void main(String[] args)throws Exception
{
    try(Socket client=new Socket())
    {
        client.setReuseAddress(true);
        client.connect(new InetSocketAddress(InetAddress.getLocalHost(),2500),2000);
        byte[] data=new byte[100];
        for(int i=0;i<data.length;i++){data[i]=(byte)i;}
        try(OutputStream output=client.getOutputStream()){output.write(data);}
    }
}
Since the server has said it is willing to receive packets of only 10 bytes, I expect the client to send 10 packets, each of size 10 bytes. And sure enough, upon examining the packets received on the server end using Wireshark, I get the expected output.
So this completes my understanding of the receive buffer on both the client and the server side. If one side advertises its receive buffer size, the other side sends only that much data with each packet transmission, which makes total sense.
Now comes the harder to understand part: the send buffers. My understanding of a send buffer is that it
holds the bytes sent by the socket and gets emptied out only after receiving an ACK from the recipient, and if it gets full it blocks the socket from sending any more data until an ACK is received.
Thus, if the sender has send_buffer=10 bytes and the receiver has receive_buffer=30 bytes, the sender should still send only 10 bytes, as it is capable of holding on to only that much data, which must then be acknowledged by the receiver before it can send the next 10 bytes, and so on. But despite all my combinations of setting the send buffer on the server and client side as follows:
1)Client sends data to server
server side=ServerSocket.setReceiveBufferSize(30);
client side=client.setSendBufferSize(10);
2)Server sends data to client
server side=serverSocket.accept().setSendBufferSize(10);
client side=client.setReceiveBufferSize(30);
The packets received on the recipient side, examined using Wireshark, are always the same: the sender always sends packets of size 30 bytes, i.e. the sender is always dominated by the receiver's settings.
a) Where have I gone wrong in my understanding?
b) A very simple test case, like the ones presented above, where the send buffer actually makes a difference on both the client and server side would be appreciated.
I filed an incident report a few days ago and it has been confirmed as a bug.
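One detail worth checking while reproducing this: setSendBufferSize and setReceiveBufferSize are only hints to the underlying platform, which may round the requested size up or down. A minimal check of what was actually granted (a sketch only, not part of the original test cases) could look like:
import java.net.Socket;

public class BufferSizeCheck
{
    public static void main(String[] args) throws Exception
    {
        try(Socket client=new Socket())
        {
            // Request the same sizes used in the experiments above;
            // the OS may silently adjust them.
            client.setSendBufferSize(10);
            client.setReceiveBufferSize(30);
            System.out.println("Requested send buffer 10, got "+client.getSendBufferSize());
            System.out.println("Requested receive buffer 30, got "+client.getReceiveBufferSize());
        }
    }
}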
I am writing a simple program about UDP socket programming. I am using datagram sockets. I have to send a packet from the client to the server. Then the server randomly decides whether to send a packet back. The client has to accept the packet if it is sent, or wait 2 seconds and assume the packet is lost. I cannot handle the case of a lost packet.
System.out.println("Receiving message...");
dsock.receive(dpack); // receive the packet
System.out.println("Message received");
It all works fine if the packet is sent, but how can I handle the situation where a packet is not sent while this line of code is still there?
You can change the timeout of the socket and receive messages until the timeout is reached, as shown here:
try {
    dsock = new DatagramSocket();
    byte[] buf = new byte[1000];
    DatagramPacket dpack = new DatagramPacket(buf, buf.length);
    //...
    dsock.setSoTimeout(1000); // set the timeout in milliseconds
    while(true) { // receive data until timeout
        try {
            System.out.println("Receiving message...");
            dsock.receive(dpack); // receive the packet
            System.out.println("Message received");
        }
        catch (SocketTimeoutException e) {
            // timeout exception
            System.out.println("Timeout reached!!! " + e);
            dsock.close();
            break; // stop receiving once the timeout has been hit
        }
    }
}
catch (SocketException e) {
    System.out.println("Socket closed " + e);
}
You are looking for dsock.setSoTimeout(2 * 1000) (2 * 1000 = 2000 ms = 2 s). Here is the doc:
Enable/disable SO_TIMEOUT with the specified timeout, in milliseconds. With this option set to a non-zero timeout, a call to receive() for this DatagramSocket will block for only this amount of time. If the timeout expires, a java.net.SocketTimeoutException is raised, though the DatagramSocket is still valid. The option must be enabled prior to entering the blocking operation to have effect. The timeout must be > 0. A timeout of zero is interpreted as an infinite timeout.
This will raise a SocketTimeoutException after two seconds, so you have to catch it.
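As a minimal sketch of that pattern (reusing the dsock and dpack names from the question, assuming the request has already been sent, and with java.net.SocketTimeoutException and java.io.IOException imported), the receive step could look like this:
try {
    dsock.setSoTimeout(2 * 1000); // wait at most 2 seconds for a reply
    System.out.println("Receiving message...");
    dsock.receive(dpack);         // blocks for up to 2 s
    System.out.println("Message received");
} catch (SocketTimeoutException e) {
    // no reply within 2 seconds: assume the packet was lost
    System.out.println("No response after 2 seconds, assuming packet lost");
} catch (IOException e) {
    e.printStackTrace();
}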
How can I detect UDP packet corruption in Java?
public class PacketReceiver implements Runnable{
    byte[] dataReceive = new byte[udpConnectionManager.MAX_PACKET_SIZE];
    private ArrayList<Thread> workerList = new ArrayList<Thread>();

    @Override
    public void run() {
        while(true){
            DatagramPacket receivePacket = new DatagramPacket(dataReceive, dataReceive.length);
            try {
                udpConnectionManager.socket.receive(receivePacket);
            } catch (IOException e) {
                e.printStackTrace();
            }
            byte[] receivedData = receivePacket.getData();
            //[0] stores basic command
            //[1~4] int stores protocol id
            //[5~9] int data increase counter for detect packet loss
            //[10~14]
            switch(receivedData[0]){
            //initial packet
            case 0x01:
                if(!udpConnectionManager.instance.isInitialized(receivePacket)){
                    Thread t = new Thread(new AcceptThread(receivePacket));
                    t.start();
                    workerList.add(t);
                }else{
                    System.out.println("initialized packet attempt to initialize.");
                }
                break;
            //heartbeat signal
            case 0x02:
                if(udpConnectionManager.instance.isInitialized(receivePacket)){
                    udpConnectionManager.instance.getConnection(receivePacket).onHeartBeat();
                }else{
                    System.out.println("Received HeartBeat signal from non-initialized connection");
                }
                break;
            //
            case 0x03:
            }
        }
    }
}
Packet corruption might happen. How do I handle the packet corruption problem when using UDP?
I know how to detect packet loss, but I don't know how to detect packet corruption.
If you absolutely need to use only DatagramPacket: it doesn't expose any API to query the transmitted checksum. What you can implement as a solution is to have some logic (SHA256, MD..) to calculate a checksum, transmit that checksum as a payload in alternating UDP packets, and compare the checksum calculated over the data payload against the checksum received in the next UDP segment. Of course, you need to handle a lot more error conditions in the suggested solution.
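Note that UDP itself already carries a 16-bit checksum, so datagrams corrupted in transit are normally dropped by the OS before they reach your code; an application-level checksum mainly catches whatever slips past that. A simpler variant of the idea above is to append a CRC32 to each datagram's payload and verify it on receipt. This is only a sketch (the class and method names are made up for illustration):
import java.nio.ByteBuffer;
import java.util.zip.CRC32;

public class ChecksumUtil {
    // Append a CRC32 of the payload to the end of the datagram body.
    public static byte[] withChecksum(byte[] payload) {
        CRC32 crc = new CRC32();
        crc.update(payload, 0, payload.length);
        ByteBuffer out = ByteBuffer.allocate(payload.length + 8);
        out.put(payload);
        out.putLong(crc.getValue()); // CRC32 fits in the low 32 bits of a long
        return out.array();
    }

    // Returns true if the trailing CRC32 matches the rest of the datagram.
    public static boolean verify(byte[] datagram, int length) {
        if (length < 8) {
            return false;
        }
        CRC32 crc = new CRC32();
        crc.update(datagram, 0, length - 8);
        long received = ByteBuffer.wrap(datagram, length - 8, 8).getLong();
        return crc.getValue() == received;
    }
}
On the receive side you would call ChecksumUtil.verify(receivePacket.getData(), receivePacket.getLength()) before the switch on receivedData[0] and drop the datagram if it returns false.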
Good evening everyone.
I am trying to create an application using Eclipse (Kepler) where I send an array of DatagramPackets. My server-side application is receiving all the packets sent by the client, but when the server tries to respond by sending the response packets back to the client, my client is unable to receive all of them.
I would be really obliged if someone helped me out.
Here is my client-side application code:
public static void main(String [] args) throws IOException
{
    timing t=new timing(); // timing is the name of my class
    InetAddress inet;
    DatagramSocket ds;
    int k=1024*60;
    DatagramPacket[] p=new DatagramPacket[64];
    DatagramPacket []recd=new DatagramPacket[64];
    byte[] buf=new byte[1024*60];
    if(args[0]==null)
    {
        args[0]="localhost";
    }
    inet=InetAddress.getByName(args[0]);
    ds=new DatagramSocket(6443);
    for(int i=0;i>64;i++)
    {
        p[i]=new DatagramPacket(sent.getBytes(),sent.length(),inet,7443);
        recd[i]=new DatagramPacket(buf,buf.length);
    }
    ds.setSoTimeout(120000);
    int buffer=ds.getReceiveBufferSize();
    int j=ds.getSendBufferSize();
    while(h<64)
    {
        p[h]=new DatagramPacket(sent.getBytes(),sent.length(),inet,7443);
        ds.send(p[h]);
        System.out.println("Client has sent packet:"+h);
        h++;
    }
    System.out.println("Receiving.");
    h=0;
    while(h<64) // UNABLE TO RECEIVE ALL SERVER SIDE PACKETS . PROBLEM CODE
    {
        recd[h]=new DatagramPacket(buf,buf.length);
        ds.receive(recd[h]);
        System.out.println("Client has recd packet:"+h);
        h++;
    }
SERVER SIDE APPLICATION:
try{
    byte[] buf=new byte[60*1024];
    InetAddress add=InetAddress.getByName("localhost");
    for(int i=0;i<64;i++)
        dp[i]=new DatagramPacket(buf,buf.length,add,6443);
    ds.setSoTimeout(120000);
    System.out.println("SERVER READY AND LISTENING PORT 6443");
    int h=0;
    while(h<64)
    {
        dp[h]=new DatagramPacket(buf,buf.length,add,6443);
        ds.receive(dp[h]);
        System.out.println("Packet "+h+"recd.");
        h++;
    }
    String x1=new String(dp[63].getData());
    System.out.println("Server recd:"+x1);// correct. no problem here
    InetAddress add2=dp[63].getAddress();
    int port=dp[63].getPort();// this is fine. same data sent in all packets
    h=0;
    while(h<64)
    {
        dp[h]=new DatagramPacket(buf,buf.length,add2,port);
        ds.send(dp[h]);
        System.out.println("Server has sent packet:"+h);
        h++;
    }
Kindly help me: when I send a single DatagramPacket it is received, but this array of packets isn't.
I do not know much about your code, but if I really wanted this 'for' loop:
for (int i=0;i>64;i++)
to be executed at least once, I would change the condition to i<64.
You don't show what sent is in the client, or therefore how long it is, but I suspect it is some reasonable size that can actually be sent.
By contrast, in the server you are attempting to send 60 KB datagrams, which won't work unless the sender's socket send buffer and the receiver's socket receive buffer are at least that size.
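As a sketch of that point (reusing the ds, dp, add2 and port variables from the server excerpt in the question, as a replacement for its reply loop, with java.util.Arrays imported), the server could reply with only the bytes it actually received in each request rather than the full 60 KB buffer, which keeps each datagram well within the default socket buffer sizes:
h=0;
while(h<64)
{
    // dp[h] still holds the h-th request here, so getLength() is the
    // number of bytes actually received rather than the 60 KB buffer size.
    byte[] payload=Arrays.copyOf(dp[h].getData(),dp[h].getLength());
    DatagramPacket reply=new DatagramPacket(payload,payload.length,add2,port);
    ds.send(reply);
    System.out.println("Server has sent packet:"+h);
    h++;
}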
I am trying to implement Go-Back-N using UDP sockets in Java. I have a sender and a receiver thread at the client end. The sender thread has its own UDP socket to send the data packets, and the receiver thread has its own port to receive acknowledgements. If the receiver doesn't receive an ACK before a timeout period, the packet has to be retransmitted. Both threads are always running in a while(true) loop. I observe that when there are no packet losses (i.e. no timeouts), this functionality of sending and receiving works fine, but when an ACK isn't received, that is, when there is a timeout, the switching to the sender thread (for retransmission) isn't happening. The execution is stuck in the receiver's loop, showing timeouts over and over again.
I even tried using Thread.sleep() so that it lets the other thread work, but it isn't happening. Any help would be appreciated.
(The slideFlag that is set to 0 will ideally initiate retransmission in the other thread.)
while(true){
    socket.setSoTimeout(10000);
    byte[] buf = new byte[1024];
    DatagramPacket packet = new DatagramPacket(buf, buf.length);
    try{
        socket.receive(packet);
    } catch(Exception e) {
        System.out.println("Socket timeout!");
        ClientMain.setslideFlag(0);
        Thread.sleep(10000);
        continue;
    }
}