Okay, so I have a project with a client (written in Lua) and a server (written in Java). I'm using LuaSocket on the client and DatagramSocket on the server. The problem is that when I send a string from the Lua client and receive it on the server (converting the bytes to a string), the server doesn't recognize the value as equal to what it should be (I'm using .equals() for the comparison). I've printed the result and compared it to the expected string (everything checked out); I've even compared the bytes (using .getBytes()), and they checked out too. The most annoying part is that .startsWith() evaluates to true, but nothing else works. I've looked into the string encodings of both languages, but I'm relatively new to sockets and this is beyond me.
Edit:
Upon writing some example code to demonstrate the problem, I solved it. Here is the code:
Client:
local socket = require "socket"
local udp = socket.udp()
udp:settimeout(0)
udp:setpeername("localhost", 1234)
udp:send("foo")
Server:
import java.net.DatagramPacket;
import java.net.DatagramSocket;

public class Main
{
    public static void main(String args[]) throws Exception
    {
        DatagramSocket server = new DatagramSocket(1234);
        byte[] incomingBytes = new byte[512];
        DatagramPacket incomingPacket = new DatagramPacket(incomingBytes, incomingBytes.length);
        server.receive(incomingPacket);
        String received = new String(incomingBytes);
        System.out.println(received);
        System.out.println(received.equals("foo"));
        for (byte b : received.getBytes())
        {
            System.out.print(b + " ");
        }
        System.out.print("\n");
        for (byte b : "foo".getBytes())
        {
            System.out.print(b + " ");
        }
        System.out.print("\n");
    }
}
The result:
foo
false
102 111 111 0 0 0 *I'm not going to include them all, but there are 506 more zeros*
102 111 111
The string whose bytes I had been examining previously was split across several reads, which would explain why I didn't notice this.
Indeed, as Etan pointed out, you're creating a string from the entire buffer (all 512 bytes) instead of a string of the correct length, so the resulting string has lots of zero bytes at the end.
A simple fix is to use the String constructor that takes an offset and a length, together with the number of bytes actually received, from DatagramPacket.getLength:
Change the line assigning received to
String received = new String(incomingBytes, 0, incomingPacket.getLength());
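The effect is easy to reproduce without any sockets. This sketch (illustrative only) fills a simulated 512-byte receive buffer with a 3-byte payload and compares the two String constructors:

```java
public class BufferPadding {
    public static void main(String[] args) {
        byte[] incomingBytes = new byte[512];   // simulated receive buffer
        byte[] msg = "foo".getBytes();          // 3-byte payload
        System.arraycopy(msg, 0, incomingBytes, 0, msg.length);

        String padded  = new String(incomingBytes);                // "foo" + 509 NUL characters
        String trimmed = new String(incomingBytes, 0, msg.length); // just "foo"

        System.out.println(padded.equals("foo"));      // false
        System.out.println(padded.startsWith("foo"));  // true
        System.out.println(trimmed.equals("foo"));     // true
    }
}
```

This also explains the original symptom: startsWith("foo") is true for the padded string, but equals("foo") is false because of the trailing zero bytes.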
Related
In my client-server application I've found a strange error. I have the following methods:
sendLoginResponse();
sendPlayerList();
sendCurrentLevelState();
Each method sends a byte array to the client side.
If I only call two of them, everything works fine and the client receives all the sent byte arrays. But if I call all three, only the first and second arrive at the client (the order of the calls doesn't matter), yet the server says all of them were sent. To write to the client I am using the
write(byte[] b, int off, int len); method.
All the lengths within the packets make sense, too.
Here comes the strange part:
If I add a Thread.sleep(1000); after the second call, the third one does now arrive at the client after the sleep. I have also tried flushing the DataOutputStream after every write call, but this didn't help.
EDIT:
So let's say I send three login responses.
The method that builds the byte[]:
public byte[] getLoginResponse(int playerID) {
    byte[] msg = new byte[4];
    short shortMsgLength = 4;
    byte[] msgLength = shortToBytes(shortMsgLength);
    msg[0] = 2;
    msg[1] = msgLength[0];
    msg[2] = msgLength[1];
    msg[3] = (byte) playerID;
    return msg;
}

private byte[] shortToBytes(short value) {
    byte[] returnByteArray = new byte[2];
    returnByteArray[0] = (byte) (value & 0xff);          // low byte first (little-endian)
    returnByteArray[1] = (byte) ((value >>> 8) & 0xff);  // high byte second
    return returnByteArray;
}
And the send method:
private void sendLoginResponse() {
    try {
        byte[] msg = rfcObject.getLoginResponse(playerID);
        out.write(msg, 0, msg.length);
    } catch (Exception e) {
        System.err.println(e.getMessage());
        System.exit(0);
    }
}
So if I call sendLoginResponse(); three times in a row, the client only receives two byte arrays, but the server says it has sent three. If I add a Thread.sleep(1000); after the second call, everything works fine.
The client that reads the messages runs in a thread:
public void run() {
    while (true) {
        try {
            byte[] data = new byte[MAX_DATA_SIZE]; // MAX_DATA_SIZE = 255
            byteCount = in.read(data);
        } catch (IOException ex) {
            handleExceptionError(ex);
        }
    }
}
Thank you!
If I call sendLoginResponse(); three times in a row, the client only receives two byte arrays, but the server says it has been sent three times.
This is because TCP is a stream-oriented protocol, meaning it doesn't know or care how your messages are delimited. There's no concept of individual messages in TCP, just a stream of bytes, with the guarantee that the order of the bytes is preserved.
So when the sender calls write three times, the three byte arrays are simply concatenated over the connection and arrive at the receiver in the same order. But the receiver doesn't necessarily need three reads to get all the bytes, and even if it does take three reads, each read doesn't necessarily give you the same byte array that was passed to the corresponding write.
Your messages already carry the information needed to recover the individual messages from the byte stream:
// Client code for reading individual messages from a TCP connection
byte type = din.readByte();
// Read the message length, little-endian.
// We cannot use din.readShort because it's big-endian
int lenLo = din.read();
int lenHi = din.read();
short len = (short)(lenLo | (lenHi << 8));
byte[] body = new byte[len - 3]; // len is the total message length, including the 3 header bytes already read
din.readFully(body);
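To see this framing recover individual messages from a concatenated stream without opening a socket, here is a self-contained sketch (class and helper names are made up for illustration) that glues three of the 4-byte login responses together, the way TCP delivers them, and reads them back one at a time:

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;

public class FramingDemo {
    // Build one 4-byte login response: type, total length (little-endian), playerID
    static byte[] loginResponse(int playerID) {
        return new byte[] { 2, 4, 0, (byte) playerID };
    }

    public static void main(String[] args) throws IOException {
        // TCP concatenates the three writes into one undifferentiated byte stream
        byte[] stream = new byte[12];
        for (int i = 0; i < 3; i++) {
            System.arraycopy(loginResponse(i + 1), 0, stream, i * 4, 4);
        }

        DataInputStream din = new DataInputStream(new ByteArrayInputStream(stream));
        while (din.available() > 0) {
            byte type = din.readByte();
            int lenLo = din.read();           // length is little-endian on the wire,
            int lenHi = din.read();           // so we can't use the big-endian readShort()
            short len = (short) (lenLo | (lenHi << 8));
            byte[] body = new byte[len - 3];  // len includes the 3 header bytes already read
            din.readFully(body);
            System.out.println("type=" + type + " len=" + len + " playerID=" + body[0]);
        }
    }
}
```

Each iteration prints one type/length/playerID triple; the same loop works unchanged against a DataInputStream wrapped around a socket's InputStream.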
DataOutputStream and TCP don't lose data.
As is almost invariably the case in questions of this nature, the problem is at the receiving end. You are probably assuming that read() fills the buffer, and ignoring the count that it returns.
Based on your protocol description in the comments, you should be using DataInputStream.readFully() in this circumstance:
byte type = din.readByte();
// the length is sent low byte first, so it can't be read with the big-endian readShort()
int length = (din.read() & 0xff) | ((din.read() & 0xff) << 8);
byte[] data = new byte[length - 3]; // length includes the 3 header bytes already read
din.readFully(data);
I am implementing socket programming in Java and I get the error below.
My code is:
public class UDPServer {
    public static void main(String[] args) throws Exception {
        byte[] data = new byte[1024];
        byte[] sendData = new byte[1024];
        byte[] num1b = new byte[1024];
        String num1String;
        DatagramPacket recievePacket;
        String sndmsg;
        int port;
        DatagramSocket serverSocket = new DatagramSocket(9676);
        System.out.println("UDP Server running");
        byte[] buffer = new byte[65536];
        while (true) {
            recievePacket = new DatagramPacket(num1b, num1b.length);
            serverSocket.receive(recievePacket);
            num1String = new String(recievePacket.getData());
            System.out.println(num1String);
            System.out.println(num1String.length());
            int numbers2 = Integer.parseInt(num1String);
I run my UDP client:
Enter number 1 :2
Enter number 2 :5
Enter number 3 :4
Enter number 4 :3
Enter number 5 :1
Select Protocol:
1.UDP
2.TCP
1
Data sent to server
My Server Shows this:
$ java UDPServer
UDP Server running
waiting for data from client
2
1024
Exception in thread "main" java.lang.NumberFormatException: For input string: "2"
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Integer.parseInt(Integer.java:492)
at java.lang.Integer.parseInt(Integer.java:527)
at UDPServer.main(UDPServer.java:49)
$
What is causing this error? Why is my string 2 not getting converted?
You probably have an issue with your client code. However, a simple workaround would be to take only the first character of num1String:
int numbers2 = Integer.parseInt(num1String.substring(0, 1));
This should work:
num1String = new String(recievePacket.getData(), 0, recievePacket.getLength());
When you receive a packet, getLength() gives you the number of bytes actually received. The remaining bytes in the array are left unchanged, since they aren't part of the received packet, and most of them will likely be 0. If you include them in your String, it will contain a lot of irrelevant characters at the end, mostly null characters (depending on the leftover bytes and the default charset). And some IDEs and/or platforms may not print the whole String if it contains null characters.
I'm writing a server program in C, and the client is on the Android platform, which uses Java. I'm having trouble sending a char array from the server to the client: the client receives the data but cannot decode it. I think it may be due to data-type or encoding differences. Can anyone give me some ideas? Thanks a lot!
Here is my code of server side:
char buf[MAXSIZE];
memset(buf, 0, MAXSIZE);
int n_write;
strcpy(buf, "0008200050005001");
n_write = Write(confd, buf, strlen(buf));
if (n_write <= 0)
{
    fputs("Write error in send_matrix_and_position\n", stderr);
    close(confd);
    return -1;
}
And here is Java code:
mSocket = new Socket(HOST, PORT);
mIn = mSocket.getInputStream();
mOut = mSocket.getOutputStream();
byte[] lengthByte = new byte[4];
mIn.read(lengthByte);
for (byte b : lengthByte)
{
    System.out.println(b + "");
}
You are sending 16 characters, but you read only four. Aren't you getting the data
48
48
48
56
? Those are the ASCII codes of the first four characters sent ('0', '0', '0', '8').
Try checking what values you get at the client when you read the char array. You might be doing br.readLine(); see if that prints the characters.
You need to debug and check, then you can work out a fix.
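A sketch of the fix on the Java side, assuming the server always sends exactly 16 ASCII characters: a single InputStream.read may return fewer bytes than requested, so DataInputStream.readFully is used to block until all 16 have arrived. The ByteArrayInputStream here just stands in for the socket's input stream, and the class name is made up:

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

public class ReadAllBytes {
    public static void main(String[] args) throws IOException {
        // stand-in for mSocket.getInputStream()
        InputStream mIn = new ByteArrayInputStream("0008200050005001".getBytes("US-ASCII"));

        byte[] buf = new byte[16];
        new DataInputStream(mIn).readFully(buf);  // blocks until all 16 bytes are read

        String msg = new String(buf, "US-ASCII");
        System.out.println(msg);                  // 0008200050005001
    }
}
```

Decoding the bytes as US-ASCII also sidesteps any charset mismatch between the C server and the Java client for this digits-only message.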
I am implementing a really basic server-client model in Java, by using UDP sockets and I have come across a really strange issue.
All I want to do is let the user (client) send a message to the server and then the server will print it.
I have an example but I am missing something since I have the following issue:
If the client sends the message "a" to the server it gets received correctly.
If the client sends the message "bbb" to the server it gets received correctly.
If the client sends the message "c" to the server, then the server will print "cbb" as the received message.
It seems as if the server doesn't clear some kind of buffer when it gets a new message.
This is the code I am using:
Server
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
public class UDPServer {
    public static void main(String args[]) throws Exception {
        byte[] receive_data = new byte[256];
        int recv_port;
        DatagramSocket server_socket = new DatagramSocket(5000);
        System.out.println("Server - Initialized server. Waiting for client on port 5000");
        while (true) {
            // System.out.println("Server - Listening for connections...");
            DatagramPacket receive_packet = new DatagramPacket(receive_data, receive_data.length);
            server_socket.receive(receive_packet);
            String data = new String(receive_packet.getData());
            InetAddress IPAddress = receive_packet.getAddress();
            recv_port = receive_packet.getPort();
            if (data.equals("q") || data.equals("Q")) {
                System.out.println("Server - Exiting !");
                break;
            } else {
                System.out.println("Server - Client from IP " + IPAddress + " # port " + recv_port + " said : " + data + " (length: " + receive_packet.getLength() + ")");
            }
        }
    }
}
Client
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class UDPClient {
    public static void main(String args[]) throws Exception {
        byte[] send_data = new byte[256];
        BufferedReader infromuser = new BufferedReader(new InputStreamReader(System.in));
        DatagramSocket client_socket = new DatagramSocket();
        InetAddress IPAddress = InetAddress.getByName("localhost");
        System.out.println("Client - Initialized the client...");
        while (true) {
            System.out.print("Client - Type Something (q or Q to quit): ");
            String data = infromuser.readLine();
            if (data.equals("q") || data.equals("Q")) {
                System.out.println("Client - Exited !");
                DatagramPacket send_packet = new DatagramPacket(send_data, send_data.length, IPAddress, 5000);
                System.out.println("Client - Sending data : <" + data + ">");
                client_socket.send(send_packet);
                break;
            } else {
                send_data = data.getBytes();
                DatagramPacket send_packet = new DatagramPacket(send_data, send_data.length, IPAddress, 5000);
                System.out.println("Client - Sending data : <" + data + ">");
                client_socket.send(send_packet);
            }
        }
        client_socket.close();
    }
}
I suppose the mistake is something trivial, but my skills in network programming are limited, so I don't know what exactly it is.
Just to be clear, I am running both the server and the client on the same machine (a Mac) in different terminals, in case that affects the situation in any way.
Any help would be greatly appreciated.
EDIT
...And I come back to answer my own question.
The problem was that I was not defining the amount of data that the server socket should expect to read.
Therefore when I changed
String data = new String(receive_packet.getData());
with
String data = new String(receive_packet.getData(), 0, receive_packet.getLength());
everything worked smoothly.
Just for future reference and for people who might come across the same problem :)
When you construct the String from the result, you're currently ignoring the length of the received packet.
After DatagramSocket.receive(DatagramPacket) returns, the length of the DatagramPacket is set to the length that was actually received:
The length field of the datagram packet object contains the length of
the received message. If the message is longer than the packet's
length, the message is truncated.
This should fix the problem on the receiving side:
String data = new String(receive_packet.getData(), 0, receive_packet.getLength());
For this to work you also need to make sure the data sent is of the right size. In particular, don't blindly use send_data.length to construct the outgoing DatagramPacket; that always sends the full length of the buffer. The length parameter isn't meant to always be send_data.length (otherwise the constructor could get it from the array itself); it's meant to be the actual useful length of the message within that array.
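For instance, a minimal sketch (the address and port are placeholders): a 256-byte buffer holding a 3-byte message should be wrapped with the message's length, not the buffer's:

```java
import java.net.DatagramPacket;
import java.net.InetAddress;

public class PacketLength {
    public static void main(String[] args) throws Exception {
        byte[] send_data = new byte[256];
        byte[] msg = "abc".getBytes();
        System.arraycopy(msg, 0, send_data, 0, msg.length);

        InetAddress addr = InetAddress.getByName("localhost");
        // pass the useful length of the message, not the buffer length
        DatagramPacket packet = new DatagramPacket(send_data, msg.length, addr, 5000);
        System.out.println(packet.getLength());   // 3, not 256
    }
}
```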
On your first call this is what receive_data looks like:
--------------
|"a"| | |
--------------
On your second call:
--------------
|"b"|"b"| "b" | notice that the "a" in receive_data was overwritten
--------------
On your third call, you only send a single letter,
so the only part of the array that gets overwritten is the first element:
--------------
|"c"|"b"| "b" |
--------------
This is happening because data is left over in the receive_data array between messages to the server. A simple way around this would be to just initialize a new array inside your receive loop; that way, every time you receive a message you will have a fresh array waiting for you.
while (true)
{
byte[] receive_data = new byte[256];
......
}
To solve the problem you should use the length of receive_packet when creating the String or the array.
For better performance on the server side, it's better to initialize receive_packet once before the while loop and reset its length at the end of each iteration so it can be reused: receive_packet.setLength(buffer.length);
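A minimal sketch of that reuse pattern (no actual receive here; setLength simulates what receiving a short datagram does to the packet):

```java
import java.net.DatagramPacket;

public class ReusePattern {
    public static void main(String[] args) {
        byte[] buffer = new byte[256];
        DatagramPacket packet = new DatagramPacket(buffer, buffer.length);

        // receive() would shrink the packet's length to the datagram's size
        packet.setLength(3);
        System.out.println(packet.getLength());   // 3

        // restore the full length before the next receive(); otherwise later,
        // larger datagrams could be truncated to 3 bytes
        packet.setLength(buffer.length);
        System.out.println(packet.getLength());   // 256
    }
}
```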
This came up while answering BufferedWriter only works the first time
As far as I understand the Javadoc (and this is confirmed by many posts on the net), a DatagramPacket should not accept more data than its current length. The documentation for DatagramSocket.receive says
This method blocks until a datagram is received. The length field of the datagram packet object contains the length of the received message. If the message is longer than the packet's length, the message is truncated.
So I wrote a program that reuses the receiving packet and sends it longer and longer messages.
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class ReusePacket {
    private static class Sender implements Runnable {
        public void run() {
            try {
                DatagramSocket clientSocket = new DatagramSocket();
                byte[] buffer = "1234567890abcdefghijklmnopqrstuvwxyz".getBytes("US-ASCII");
                InetAddress address = InetAddress.getByName("127.0.0.1");
                for (int i = 1; i < buffer.length; i++) {
                    DatagramPacket mypacket = new DatagramPacket(buffer, i, address, 40000);
                    clientSocket.send(mypacket);
                    Thread.sleep(200);
                }
                System.exit(0);
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }

    public static void main(String args[]) throws Exception {
        DatagramSocket serverSock = new DatagramSocket(40000);
        byte[] buffer = new byte[100];
        DatagramPacket recievedPacket = new DatagramPacket(buffer, buffer.length);
        new Thread(new Sender()).start();
        while (true) {
            serverSock.receive(recievedPacket);
            String byteToString = new String(recievedPacket.getData(), 0, recievedPacket.getLength(), "US-ASCII");
            System.err.println("Length " + recievedPacket.getLength() + " data " + byteToString);
        }
    }
}
The output is
Length 1 data 1
Length 2 data 12
Length 3 data 123
Length 4 data 1234
Length 5 data 12345
Length 6 data 123456
...
So, even if the length is 1, the next receive gets a message with length 2 and does not truncate it. However, if I manually set the length of the packet, the message is truncated to that length.
I have tested this on OSX 10.7.2 (Java 1.6.0_29) and Solaris 10 (Java 1.6.0_21). So, to my questions.
Why does my code work, and can I expect it to work on other systems as well?
To clarify, the behavior seems to have changed sometime in the past (at least for some JVMs), but I don't know if the old behavior was a bug. Am I lucky it works this way, and should I expect it to work the same way on the Oracle JVM, IBM JVM, JRockit, Android, AIX, etc.?
After further investigation and checking the source for 1.3.0, 1.3.1, and 1.4.0, the change was introduced in Sun's implementation in 1.4.0; however, there is no mention of it in either the release notes or the network-specific release notes of JDK 1.4.0.
There are two different lengths here. The length of the packet is set to 100 in the constructor:
DatagramPacket recievedPacket = new DatagramPacket(buffer, buffer.length);
According to the docs, getLength() tells you the length of the message currently stored in the packet, which it does. Changing
byte[] buffer = new byte[100];
to
byte[] buffer = new byte[10];
yields the following output:
Length 1 data 1
Length 2 data 12
...
Length 9 data 123456789
Length 10 data 1234567890
Length 10 data 1234567890
Length 10 data 1234567890
...