I am implementing socket programming using Java. I get this error.
My code is:
public class UDPServer {
    public static void main(String[] args) throws Exception {
        byte[] data = new byte[1024];
        byte[] sendData = new byte[1024];
        byte[] num1b = new byte[1024];
        String num1String;
        DatagramPacket recievePacket;
        String sndmsg;
        int port;
        DatagramSocket serverSocket = new DatagramSocket(9676);
        System.out.println("UDP Server running");
        byte[] buffer = new byte[65536];
        while (true) {
            recievePacket = new DatagramPacket(num1b, num1b.length);
            serverSocket.receive(recievePacket);
            num1String = new String(recievePacket.getData());
            System.out.println(num1String);
            System.out.println(num1String.length());
            int numbers2 = Integer.parseInt(num1String);
I run my UDP client:
Enter number 1 :2
Enter number 2 :5
Enter number 3 :4
Enter number 4 :3
Enter number 5 :1
Select Protocol:
1.UDP
2.TCP
1
Data sent to server
My Server Shows this:
$ java UDPServer
UDP Server running
waiting for data from client
2
1024
Exception in thread "main" java.lang.NumberFormatException: For input string: "2"
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Integer.parseInt(Integer.java:492)
at java.lang.Integer.parseInt(Integer.java:527)
at UDPServer.main(UDPServer.java:49)
$
What is causing this error? Why is my string "2" not getting converted?
You probably have an issue with your client code. However, a simple workaround would be to take just the first character of num1String (note this only works while the number is a single digit):
int numbers2 = Integer.parseInt(num1String.substring(0, 1));
This should work:
num1String = new String(recievePacket.getData(), 0, recievePacket.getLength());
When you receive a packet, recievePacket.getLength() gives you the number of bytes actually received. The remaining bytes in the array are unchanged, since they aren't part of the last received packet, and most of them will most likely be 0. If you include those in your String, it will contain a lot of irrelevant characters at the end, mostly null characters (depending on the remaining bytes and the default charset). And some IDEs and/or platforms may not print the whole String if it contains null characters.
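To see the difference, here is a minimal self-contained sketch (the class name and the loopback round-trip are mine for illustration, not part of the original code) that sends a one-character datagram to itself and builds the string both ways:

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class ReceiveLengthDemo {
    public static void main(String[] args) throws Exception {
        try (DatagramSocket socket = new DatagramSocket(0)) {
            // send "2" to ourselves over loopback
            byte[] payload = "2".getBytes("US-ASCII");
            socket.send(new DatagramPacket(payload, payload.length,
                    InetAddress.getLoopbackAddress(), socket.getLocalPort()));

            byte[] buffer = new byte[1024];
            DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
            socket.receive(packet);

            // wrong: uses the whole 1024-byte buffer, including trailing zero bytes
            String wrong = new String(packet.getData());
            // right: uses only the bytes that actually arrived
            String right = new String(packet.getData(), 0, packet.getLength());

            System.out.println(wrong.length());          // 1024
            System.out.println(right.length());          // 1
            System.out.println(Integer.parseInt(right)); // 2, no NumberFormatException
        }
    }
}
```

Calling Integer.parseInt on the "wrong" string is exactly what throws NumberFormatException: the input looks like "2" but carries 1023 invisible null characters.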
I'm getting an error in a snippet of my Java UDP server program when receiving an integer (the transaction ID) that a separate client program converted into a string:
CLIENT:
// Sending transaction ID to server
sendData = String.valueOf(transID).getBytes();
sendPacket = new DatagramPacket(sendData, sendData.length, IPAddress, sportno);
clientSocket.send(sendPacket);
SERVER:
// Receive client's transaction ID
receiveData = new byte[1024]; // 1024 byte limit when receiving
receivePacket = new DatagramPacket(receiveData, receiveData.length);
serverSocket.receive(receivePacket); // Server receives packet "100" from client
String clientMsg = new String(receivePacket.getData()); // String to hold received packet
int transID = Integer.valueOf(clientMsg); // Convert clientMsg to integer
System.out.println("Client has sent: " + transID); // Printing integer before iteration
While the client sends the packet just fine, during the string-to-integer conversion I get this error:
Exception in thread "main" java.lang.NumberFormatException: For input string: "100"
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Integer.parseInt(Integer.java:580)
at java.lang.Integer.valueOf(Integer.java:766)
at server.main(server.java:46)
I've also used Integer.parseInt(clientMsg) with similar results. However, the program was able to print the packet as a string value just fine so long as I didn't try converting it into an integer.
Additionally, while it showed this in the terminal, when I copy+pasted the error message into this post the "100" was followed by a large number of empty characters, which I'm assuming are the rest of the unused bytes from the receiveData variable.
Would these empty values be the reason why my program failed? If not, what other reason would the program fail? Additionally, is there a way to send the transaction ID integer to the server without converting it to an string first to prevent this error?
String clientMsg = new String(receivePacket.getData()); // String to hold received packet
The problem is here. You are appending junk by ignoring the actual length of the incoming datagram. It should be:
String clientMsg = new String(receivePacket.getData(), receivePacket.getOffset(), receivePacket.getLength()); // String to hold received packet
Try this; hope it works:
String receivedMsg = new String(receivePacket.getData(), receivePacket.getOffset(), receivePacket.getLength());
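As for the question's last part: yes, you can skip the string conversion entirely and send the transaction ID as four raw bytes. A minimal sketch of the encode/decode step using ByteBuffer (the packet and socket plumbing is omitted; the class name is mine):

```java
import java.nio.ByteBuffer;

public class IntBytesDemo {
    public static void main(String[] args) {
        int transID = 100;
        // client side: encode the int as 4 big-endian bytes
        byte[] sendData = ByteBuffer.allocate(4).putInt(transID).array();
        // server side: decode the 4 received bytes back into an int
        int decoded = ByteBuffer.wrap(sendData).getInt();
        System.out.println(decoded);
    }
}
```

The fixed 4-byte size also sidesteps the trailing-junk problem, since you always know exactly how many bytes to read.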
I have a Java TCP Server Socket program that is expecting about 64 bytes of data from a piece of remote hardware. The Server code is:
public void run() throws Exception {
    // Open a socket on localhost at port 11111
    ServerSocket welcomeSocket = new ServerSocket(11111);
    while (true) {
        // Open and Accept on Socket
        Socket connectionSocket = welcomeSocket.accept();
        DataInputStream dIn = new DataInputStream(connectionSocket.getInputStream());
        int msgLen = dIn.readInt();
        System.out.println("RX Reported Length: " + msgLen);
        byte[] msg = new byte[msgLen];
        if (msgLen > 0) {
            dIn.readFully(msg);
            System.out.println("Message Length: " + msg.length);
            System.out.println("Recv[HEX]: " + StringTools.toHexString(msg));
        }
    }
}
This works correctly as I am able to test locally with a simple ACK program:
public class ACK_TEST {
    public static void main(String[] args) {
        System.out.println("Byte Sender Running");
        try {
            ACK_TEST obj = new ACK_TEST();
            obj.run();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public void run() throws Exception {
        Socket clientSocket = new Socket("localhost", 11111);
        DataOutputStream dOut = new DataOutputStream(clientSocket.getOutputStream());
        byte[] rtn = new byte[1];
        rtn[0] = 0x06;              // ACK
        dOut.writeInt(rtn.length);  // write length of the message
        dOut.write(rtn);            // write the message
        System.out.println("Byte Sent");
        clientSocket.close();
    }
}
And this correctly produces the expected output on the Server side (a reported length of 1 and the single 0x06 byte).
However, when I deploy the same Server code on the Raspberry Pi and the hardware sends data to it, the reported data length is far greater and causes a heap memory issue (even with the heap pre-set at 512 MB, which is definitely incorrect and unnecessary).
My presumption is that I am reading the data from the TCP socket wrong: judging from the hardware's debug output, it's certainly not sending packets of this size.
Update: I have no access to the Client source code. I do however need to take the input TCP data stream, place it into a byte array, and then another function (Not shown) parses out some known HEX codes. That function expects a byte array input.
Update: I reviewed the packet documentation. It is a 10 byte header. The first Byte is a protocol identifier. The next 2 bytes is the Packet Length (Total number of bytes in the packet, including all the header bytes and checksum) and the last 7 are a Unique ID. Therefore, I need to read those 2 bytes and create a byte array that size.
Apparently the length read from the header is about 1 GB, so the problem looks like it's on the other end. Are you mixing little- and big-endian encodings?
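Given the header layout described in the update (1-byte protocol ID, 2-byte total length, 7-byte unique ID), a safer read might look like the sketch below. The big-endian byte order of the length field and the MAX_PACKET cap are my assumptions; check both against the hardware's documentation before relying on them.

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;

public class HeaderReadDemo {
    static final int MAX_PACKET = 64 * 1024; // sanity cap, adjust to the protocol spec

    static byte[] readPacket(DataInputStream in) throws IOException {
        int protocolId = in.readUnsignedByte();  // byte 0: protocol identifier
        int packetLen  = in.readUnsignedShort(); // bytes 1-2: total packet length (big-endian assumed)
        byte[] uniqueId = new byte[7];
        in.readFully(uniqueId);                  // bytes 3-9: unique ID
        // reject implausible lengths instead of allocating a huge buffer
        if (packetLen < 10 || packetLen > MAX_PACKET) {
            throw new IOException("Implausible packet length: " + packetLen);
        }
        byte[] rest = new byte[packetLen - 10];  // remainder: body + checksum
        in.readFully(rest);
        return rest;
    }

    public static void main(String[] args) throws Exception {
        // fabricated 12-byte packet: id=0x01, total length=12, 7-byte uid, 2-byte body
        byte[] wire = {0x01, 0x00, 0x0C, 1, 2, 3, 4, 5, 6, 7, 0x41, 0x42};
        byte[] body = readPacket(new DataInputStream(new ByteArrayInputStream(wire)));
        System.out.println(body.length);
    }
}
```

The sanity check is what prevents the heap blow-up: a misread (or byte-swapped) length fails fast instead of triggering a gigabyte allocation.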
Okay, so I have a project where I'm working with a Client (written in Lua) and a Server (written in Java). I'm using LuaSocket for the client and DatagramSocket for the server. The problem is that when I send a string from the Lua client and receive it on the server (converting the bytes to a string), the server doesn't recognize the string as equal to what it should be (I'm comparing with .equals()). I've printed the result and compared it to the expected string, and everything checked out; I've even compared the bytes (using .getBytes()), and they checked out too. The most annoying part is that .startsWith() evaluates to true, but nothing else works. I've looked into the string encoding of both languages, but I'm relatively new to sockets and this is beyond me.
Edit:
Upon writing some example code to demonstrate the problem, I solved it. Here is the code:
Client:
local socket = require "socket"
local udp = socket.udp()
udp:settimeout(0)
udp:setpeername("localhost", 1234)
udp:send("foo")
Server:
public class Main {
    public static void main(String[] args) throws Exception {
        DatagramSocket server = new DatagramSocket(1234);
        byte[] incomingBytes = new byte[512];
        DatagramPacket incomingPacket = new DatagramPacket(incomingBytes, incomingBytes.length);
        server.receive(incomingPacket);
        String received = new String(incomingBytes);
        System.out.println(received);
        System.out.println(received.equals("foo"));
        for (byte b : received.getBytes()) {
            System.out.print(b + " ");
        }
        System.out.print("\n");
        for (byte b : "foo".getBytes()) {
            System.out.print(b + " ");
        }
        System.out.print("\n");
    }
}
The result:
foo
false
102 111 111 0 0 0 *I'm not going to include all but there are 506 more*
102 111 111
The string I had been examining the bytes from previously was split at several points, and that would explain why I didn't notice this.
Indeed, as Etan pointed out, you're creating a string from the entire 512-byte buffer instead of a string of the correct length, so the created string has lots of zero bytes at the end.
A simple fix is to use the String constructor that takes an offset and length into the buffer, passing the number of bytes actually received as reported by DatagramPacket.getLength.
Change the line assigning received to
String received = new String(incomingBytes, 0, incomingPacket.getLength());
I'm starting to write my first Java networking program, and long story short I'm having difficulty making sure that I'm taking the right approach. Our professor has given us a server program to test against this UDP client, but I'm getting some errors I can't seem to squash. Specifically, I get IO exceptions, either "Connection Refused" or "No route to host" exceptions.
public class Lab2Client {
    /**
     * @param args[1] == server name, args[2] == server port, args[3] == myport
     */
    public static void main(String[] args) {
        // Serverport is set to 10085, our client is 10086
        try {
            Socket echoSocket = new Socket(args[0], Integer.parseInt(args[2]));
            System.out.println("Server connection Completed\n");
            DataOutputStream output = new DataOutputStream(echoSocket.getOutputStream());
            byte[] toSend = new byte[5];
            toSend[0] = 12; toSend[1] = 34;  // Code Number
            toSend[2] = 15;                  // GroupId
            toSend[3] = 86; toSend[4] = 100; // Port number in Little Endian Order
            output.write(toSend);
            System.out.println("Sent Request. Waiting for reply...\n");
            DataInputStream input = new DataInputStream(echoSocket.getInputStream());
            byte[] toRecieve = new byte[]{0, 0, 0, 0, 0, 0, 0, 0};
            input.read(toRecieve);
            checkMessage(toRecieve);
        } catch (UnknownHostException e) {
            System.err.println("Servername Incorrect!");
            System.exit(1);
        } catch (IOException e) {
            System.err.println("IO Exception. Exiting...");
            System.err.println(e);
            System.exit(1);
        }
    }
}
I also have some questions about my implementation regarding receiving messages in Java. I'll be getting a datagram that contains either:
a) 3 formatting bytes (unimportant to the question) along with an IP and port number
or
b) 3 formatting bytes and a port.
Is using a DataInputStream the correct way to do this? I know using an array with 9 elements is lazy instead of dynamically allocating one that's either 5 or 9, but right now I'm just trying to get this working. That being said, is there a different approach anyone would suggest for this?
You don't need to wrap the stream returned by Socket.getOutputStream() in a DataOutputStream just to call write(byte[]); the raw OutputStream supports that already. DataOutputStream only adds methods such as writeInt().
In this line:
Socket echoSocket = new Socket(args[0],Integer.parseInt(args[2]));
I suppose it should be args[1], not args[0].
Here you have to convert the integer value to its byte representation:
toSend[3] = (byte) (10086 & 0xFF); toSend[4] = (byte) (10086 >> 8); // Port number in Little Endian Order
Answer to your question: case (b), as you are not sending the IP.
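For reference, here is a tiny round-trip sketch of that little-endian port encoding (values hard-coded for illustration, class name mine):

```java
public class PortBytesDemo {
    public static void main(String[] args) {
        int port = 10086;
        byte low  = (byte) (port & 0xFF); // low byte first = little-endian
        byte high = (byte) (port >> 8);
        System.out.println(low + " " + high);

        // reassemble to check the round trip; & 0xFF undoes Java's sign extension
        int back = (low & 0xFF) | ((high & 0xFF) << 8);
        System.out.println(back);
    }
}
```

The explicit (byte) casts are required because Java treats the results of & and >> as int, and narrowing to byte is not implicit in an assignment of a non-constant expression.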
Thought I'd leave this up for posterity. The problem is simple, and I'm a fool for not noticing it sooner.
The server programs I was testing against used the UDP protocol, while this program was written for TCP. The corrected code is:
public class Lab2Client {
    /**
     * @param args[0] == server name, args[1] == server port, args[2] == myport
     */
    public static void main(String[] args) {
        // Serverport is 10085, our client is 10086
        try {
            DatagramSocket clientSocket = new DatagramSocket();
            InetAddress IPAddress = InetAddress.getByName(args[0]);
            int portToSend = Integer.parseInt(args[2]);
            System.out.println("Client Socket Created");
            byte[] toSend = new byte[5];
            toSend[0] = 0x12; toSend[1] = 0x34; // Code Number
            toSend[2] = 15;                     // GroupId, f in hex
            toSend[3] = 0x27; toSend[4] = 0x66;
            System.out.println("Byte Array Constructed");
            DatagramPacket sendPacket = new DatagramPacket(toSend, toSend.length, IPAddress, Integer.parseInt(args[1]));
            clientSocket.send(sendPacket);
            System.out.println("Sent Request. Waiting for reply...\n");
            /* toRecieve can either be an error message, a return of what we sent,
               or a byte stream full of IP info and port numbers.
               The "heavy" byte stream is either 4 bytes for IPv4 or 16 for IPv6, 2 bytes for port,
               and the magic number (2 bytes) for a total of 9-20 bytes. */
            byte[] toRecieve = new byte[9];
            DatagramPacket receivePacket = new DatagramPacket(toRecieve, toRecieve.length);
            clientSocket.receive(receivePacket);
            checkMessage(toRecieve);
        } // and so on and so forth...
Thanks to @Serge for the help, though nobody could have answered my question correctly given how I asked it. The byte shifting you suggested was important too.
This came up while answering "BufferedWriter only works the first time".
As far as I understand the Java Doc (and this is confirmed by many posts on the net), a DatagramPacket should not accept more data than its current size. The documentation for DatagramSocket.receive says:
This method blocks until a datagram is received. The length field of the datagram packet object contains the length of the received message. If the message is longer than the packet's length, the message is truncated.
So, I made a program which reuses the receiving packet and send it longer and longer messages.
public class ReusePacket {
    private static class Sender implements Runnable {
        public void run() {
            try {
                DatagramSocket clientSocket = new DatagramSocket();
                byte[] buffer = "1234567890abcdefghijklmnopqrstuvwxyz".getBytes("US-ASCII");
                InetAddress address = InetAddress.getByName("127.0.0.1");
                for (int i = 1; i < buffer.length; i++) {
                    DatagramPacket mypacket = new DatagramPacket(buffer, i, address, 40000);
                    clientSocket.send(mypacket);
                    Thread.sleep(200);
                }
                System.exit(0);
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }

    public static void main(String[] args) throws Exception {
        DatagramSocket serverSock = new DatagramSocket(40000);
        byte[] buffer = new byte[100];
        DatagramPacket recievedPacket = new DatagramPacket(buffer, buffer.length);
        new Thread(new Sender()).start();
        while (true) {
            serverSock.receive(recievedPacket);
            String byteToString = new String(recievedPacket.getData(), 0, recievedPacket.getLength(), "US-ASCII");
            System.err.println("Length " + recievedPacket.getLength() + " data " + byteToString);
        }
    }
}
The output is
Length 1 data 1
Length 2 data 12
Length 3 data 123
Length 4 data 1234
Length 5 data 12345
Length 6 data 123456
...
So even though the length is 1 after the first receive, the next receive gets a message of length 2 and does not truncate it. However, if I manually set the length of the packet, the message is truncated to that length.
I have tested this on OSX 10.7.2 (Java 1.6.0_29) and Solaris 10 (Java 1.6.0_21). So to my questions.
Why does my code work and can expect it to work on other systems also?
To clarify, the behavior seems to have changed at some point in the past (at least for some JVMs), but I don't know whether the old behavior was a bug. Am I just lucky it works this way, and should I expect it to work the same on the Oracle JVM, IBM JVM, JRockit, Android, AIX, etc.?
After further investigation and checking the source for 1.3.0, 1.3.1 and 1.4.0, the change was introduced in Sun's implementation in 1.4.0; however, there is no mention of it in either the release notes or the network-specific release notes of JDK 1.4.0.
There are two different lengths here. The length of the packet is set to 100 in the constructor:
DatagramPacket recievedPacket = new DatagramPacket(buffer, buffer.length);
According to the docs, the getLength() method tells you the length of the message currently stored in the packet, which it does. Changing
byte[] buffer = new byte[100];
to
byte[] buffer = new byte[10];
yields the following output:
Length 1 data 1
Length 2 data 12
...
Length 9 data 123456789
Length 10 data 1234567890
Length 10 data 1234567890
Length 10 data 1234567890
...
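Whatever a particular JVM implementation does, the portable way to reuse a DatagramPacket is to reset its length before every receive with DatagramPacket.setLength, so an earlier short datagram can never truncate a later long one. A self-contained loopback sketch (class name and messages are mine for illustration):

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class ResetLengthDemo {
    public static void main(String[] args) throws Exception {
        try (DatagramSocket socket = new DatagramSocket(0)) {
            byte[] buffer = new byte[100];
            DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
            // a longer message first, then a shorter one, reusing the same packet
            for (String msg : new String[] {"12345", "12"}) {
                byte[] data = msg.getBytes("US-ASCII");
                socket.send(new DatagramPacket(data, data.length,
                        InetAddress.getLoopbackAddress(), socket.getLocalPort()));
                packet.setLength(buffer.length); // undo truncation from the previous receive
                socket.receive(packet);
                System.out.println(new String(packet.getData(), 0, packet.getLength(), "US-ASCII"));
            }
        }
    }
}
```

With the setLength call in place, the code does not depend on whether the JVM quietly restores the packet's length between receives, which is exactly the implementation detail the question shows varying across JDK versions.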