DataOutputStream not flushing - Java

I have a Java client which sends UTF-8 strings to a C# TCP server. I'm using a DataOutputStream to send the strings. The code looks like this:
public void sendUTF8String(String ar) {
    if (socket.isConnected()) {
        try {
            dataOutputStream.write(ar.getBytes(Charset.forName("UTF-8")));
            dataOutputStream.flush();
        } catch (IOException e) {
            handleException(e);
        }
    }
}
The problem is that flush() doesn't seem to work correctly. If I send two strings close together, the server receives a single message containing both strings. Everything works if I do a Thread.sleep(1000) between the calls, but that is obviously not a solution.
What am I missing?

flush() doesn't guarantee that a data packet gets shipped off. Your TCP/IP stack is free to bundle your data for maximum efficiency. Worse, there are probably a bunch of other TCP/IP stacks between you and your destination, and they are free to do the same.
You shouldn't rely on how your writes are split into packets. Insert a logical terminator/divider into your data and you will be on the safe side.
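For example, a minimal sketch of the divider idea applied to the method from the question (assuming a newline can never appear inside a message; any byte sequence that cannot occur in the payload works as the divider):
public void sendUTF8String(String ar) {
    if (socket.isConnected()) {
        try {
            // append the divider so the receiver can split the byte stream back into messages
            dataOutputStream.write((ar + "\n").getBytes(Charset.forName("UTF-8")));
            dataOutputStream.flush();
        } catch (IOException e) {
            handleException(e);
        }
    }
}
The C# server would then accumulate received bytes and split on '\n' instead of treating each Read() call as one message.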

You shouldn't worry about how the data is broken up into packets.
You should include the length of the string in your messages, and then on the receiving end you would read the length first. So for example to send you would do
byte[] arbytes = ar.getBytes(Charset.forName("UTF-8"));
output.writeInt(arbytes.length);
output.write(arbytes);
and then in your reader you do
int len = input.readInt();
byte[] arbytes = new byte[len];
for (int i = 0; i < len; i++) {
    arbytes[i] = (byte) input.read();
}
//convert bytes back to string.
You can't just call input.read(arbytes) because the read function doesn't necessarily read the entire length of the array. You can do a loop where you read a chunk at a time but the code for that is a bit more complex.
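For what it's worth, DataInputStream already has a helper that does that looping for you; a sketch, assuming input is a DataInputStream:
int len = input.readInt();
byte[] arbytes = new byte[len];
input.readFully(arbytes); // loops internally; throws EOFException if the stream ends early
String ar = new String(arbytes, Charset.forName("UTF-8"));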
Anyway, you get the idea.
Also, if you really want to control what goes in what packets, you can use Datagram Sockets, but if you do that then delivery of the packet is not guaranteed.
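For illustration only, a minimal DatagramSocket sketch (the host name and port below are made up; each datagram carries exactly one message, but datagrams may be lost, duplicated or reordered):
DatagramSocket udp = new DatagramSocket();
byte[] data = ar.getBytes(Charset.forName("UTF-8"));
DatagramPacket packet = new DatagramPacket(data, data.length, InetAddress.getByName("server.example.com"), 9999); // hypothetical address/port
udp.send(packet);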

Sockets send a stream of data, not messages.
You shouldn't rely on the packets you receive being the same size as they are sent.
Packets can be grouped together as you have seen but they can also be broken up.
Use @Chad Okere's suggestion on how to ensure you get blocks the same way they are sent.
However, in your case you can just use
dataOutputStream.writeUTF(ar); // writes a two-byte length prefix followed by the string in (modified) UTF-8
and
String text = dataInputStream.readUTF(); // reads a string written by writeUTF()
Note that this framing (length prefix plus modified UTF-8) is a Java convention, so a C# server has to parse it explicitly rather than treating the bytes as plain UTF-8.

Related

Odd behavior reading SSL socket Java

I am trying to write a simple echo server using SSL. The first line that goes to the server is echoed exactly. When I send a second line, only the first character is echoed. The client works off of a buffered reader's read line from stdin. If I hit CR again the rest of the message comes through. The server seems to be sending all of the data. Here are output from client and server:
CLIENT:
Sending to server at 192.168.0.161
on port 9999
4 seasoNS
echo:4 seasoNS
are really good
echo:a
echo:re really good
SERVER:
server listening on 9999
has cr/lf
4 seasoNS
size to send: 10
has cr/lf
are really good
size to send: 16
exiting...
Here is the client loop:
try {
    BufferedReader consoleBufferedReader = getConsoleReader();
    sslsocket = getSecSocket(strAddress, port);
    BufferedWriter sslBufferedWriter = getSslBufferedWriter(sslsocket);
    InputStream srvrStream = sslsocket.getInputStream();
    String outMsg;
    while ((outMsg = consoleBufferedReader.readLine()) != null) {
        byte[] srvrData = new byte[1024];
        sslBufferedWriter.write(outMsg);
        sslBufferedWriter.newLine();
        sslBufferedWriter.flush();
        int sz = srvrStream.read(srvrData);
        String echoStr = new String(srvrData, 0, sz);
        System.out.println("echo:" + echoStr);
    }
} catch (Exception exception) {
    exception.printStackTrace();
}
This problem seemed so odd that I was hoping there was something obvious that I was missing.
What you're seeing is perfectly normal.
The assumption you're making that you're going to read the whole buffer in one go is wrong:
int sz = srvrStream.read(srvrData);
Instead, you need to keep looping until you get the delimiter of your choice (possibly a new line in your case).
This applies to plain TCP connections as well as SSL/TLS connections in general. This is why application protocols must have delimiters or content length (for example, HTTP has a double new line to end its headers and uses Content-Length or chunked transfer encoding to tell the other party when the entity ends).
In practice, with such a small example, you might rarely see that assumption fail.
However, the JSSE splits the records it sends into 1/n-1 on purpose to mitigate the BEAST attack. (OpenSSL would send 0/n.)
Hence, the problem is more immediately noticeable in this case.
Again, this is not an SSL/TLS or Java problem; the way to fix it is to treat the input you read as a stream and not to assume that the size of the buffers you read on one end will match the size of the buffers used to send the data from the other end.
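As a sketch of that approach for this client (reusing the helper methods from the question): since each message is already terminated with newLine(), the read side can be line-oriented as well.
BufferedReader consoleBufferedReader = getConsoleReader();
sslsocket = getSecSocket(strAddress, port);
BufferedWriter sslBufferedWriter = getSslBufferedWriter(sslsocket);
BufferedReader srvrReader = new BufferedReader(new InputStreamReader(sslsocket.getInputStream()));
String outMsg;
while ((outMsg = consoleBufferedReader.readLine()) != null) {
    sslBufferedWriter.write(outMsg);
    sslBufferedWriter.newLine();
    sslBufferedWriter.flush();
    String echoStr = srvrReader.readLine(); // blocks until a complete echoed line; null on EOF
    if (echoStr == null) break;             // server closed the connection
    System.out.println("echo:" + echoStr);
}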

Java TCP Server send more messages in one flush

using this code:
Java Server side:
...
out = new PrintWriter(this.client.getOutputStream(), true);
...
public void sendMsg(String msg) {
    out.println(msg);
    //out.flush(); // we don't flush manually because there is auto flush true
}
C# Client side:
while (connected) {
    int lData = myStream.Read(myBuffer, 0, client.ReceiveBufferSize);
    String myString = Encoding.UTF8.GetString(myBuffer);
    myString = myString.Substring(0, lData);
    myString = myString.Substring(0, myString.Length - 2);
    addToQueue(myString);
}
The variable myString contains many messages that the server should have sent one by one, like
hello \r\t hello \r\t ...
They should arrive separately, like
hello \r\t
hello \r\t ...
In other words, even though I expect them one at a time, they all arrive at once in a row. How can I make them arrive one by one, each in its own flush?
Note: I send about 30 messages in a row within one second (1s); I want them separate.
TCP provides a stream of bytes, not messages. This means you have no control over how the data arrives, regardless of how you send it (other than that it will arrive as bytes). You should rethink your protocol if you depend on it arriving in any particular manner.
You can reduce the amount of bunching of data, but all this does is reduce latency at the cost of throughput and should never be relied upon. Bunching can be reduced (but not eliminated) by turning off Nagle's algorithm and reducing the coalescing settings in your TCP driver, if you can change them.
i want them separate.
You can want it, but TCP does not support messages the way you want them.
The solution in your case is for your reader to match your writer's protocol. You send lines, so you should read a line at a time, e.g. with BufferedReader.readLine(), not blocks of whatever data happens to be in the buffer.
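For the Nagle part, a minimal Java-side sketch (this only reduces latency; it does not restore message boundaries, so the C# client still needs to read line by line, e.g. with StreamReader.ReadLine()):
this.client.setTcpNoDelay(true); // disable Nagle's algorithm on the accepted Socket
out = new PrintWriter(this.client.getOutputStream(), true);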

Transmitting/receiving compressed data with sockets: how to properly receive the data sent from the client

I have developed a client-server chat using sockets and it works great, but when I try to transmit data with Deflate compression it doesn't work: the output is "empty" (actually it's not empty, but I'll explain below).
The compression/decompression part is 100% working (I have already tested it), so the problem must be elsewhere in the transmission/receiving part.
I send the message from the client to the server using these methods:
// streamOut is an instance of DataOutputStream
// message is a String
if (zip) { // zip is a boolean variable: true means that compression is active
    streamOut.write(Zip.compress(message)); // Zip.compress(String) returns a byte[] array of the compressed "message"
} else {
    // if compression isn't active, the client sends the not compressed message to the server (and this works great)
    streamOut.writeUTF(message);
}
streamOut.flush();
And I receive the message from the client on the server side using this code:
// streamIn is an instance of DataInputStream
if (server.zip) { // same as before: true = compression is active
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    byte[] buf = new byte[512];
    int n;
    while ((n = streamIn.read(buf)) > 0) {
        bos.write(buf, 0, n);
    }
    byte[] output = bos.toByteArray();
    System.out.println("output: " + Zip.decompress(output)); // Zip.decompress(byte[]) returns a String of the decompressed byte[] array received
} else {
    System.out.println("output: " + streamIn.readUTF()); // this works great
}
Debugging my program a little, I've discovered that the while loop never ends, so:
byte[] output = bos.toByteArray();
System.out.println("output: " + Zip.decompress(output));
is never called.
If I put those 2 lines of code in the while loop (after bos.write()), then all works fine (it prints the message sent from the client)! But I don't think that's the solution, because the byte[] array received may vary in size. Because of this I assumed that the problem is in the receiving part (the client is actually able to send data).
So my problem became the while loop in the receiving part. I tried with:
while ((n = streamIn.read(buf)) != -1) {
and even with the condition != 0, but it's the same as before: the loop never ends, so the output part is never called.
-1 is only returned when the socket is closed or broken. You could close the socket after sending your zipped content, and your code would start working. But I suspect you want to keep the socket open for more (future) chat messages. So you need some other way of letting the client know when a discrete message has been fully transmitted. Like Patrick suggested, you could transmit the message length before each zipped payload.
You might be able to leverage something in the deflate format itself, though. I think it has a last-block-in-stream marker. If you're using java.util.zip.Inflater have a look at Inflater.finished().
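For illustration, a sketch of that idea (assumptions: Zip.compress uses a java.util.zip.Deflater with its default zlib wrapper and calls finish(), and streamIn is the DataInputStream from the question); finished() becomes true at the end-of-stream marker, so one message can be decompressed without closing the socket:
Inflater inflater = new Inflater();
ByteArrayOutputStream plain = new ByteArrayOutputStream();
byte[] in = new byte[512];
byte[] out = new byte[512];
try {
    while (!inflater.finished()) {
        if (inflater.needsInput()) {
            int n = streamIn.read(in);
            if (n == -1) break;           // connection closed mid-message
            inflater.setInput(in, 0, n);
        }
        int m = inflater.inflate(out);
        plain.write(out, 0, m);
    }
} catch (DataFormatException e) {
    // not valid deflate data
}
// Caveat: any bytes read past the end of this message (inflater.getRemaining())
// belong to the next message and would have to be carried over.
System.out.println("output: " + new String(plain.toByteArray(), Charset.forName("UTF-8")));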
The read function will not return a -1 until the stream is closed. What you can do is calculate the number of bytes that should be sent from the server to the client, and then read that number of bytes on the client side.
Calculating the number of bytes is as easy as sending the length of the byte array returned from the Zip.compress function before the actual message, and then using the readInt function to get that number.
Using this algorithm makes sure that you read the correct number of bytes before decompressing, so even if the client actually reads 0 bytes it will continue to read until it receives all the bytes it wants. You can do a streamIn.read(buf, 0, Math.min(bytesLeft, buf.length)) to read only as many bytes as you want.
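A sketch of that approach, reusing the Zip helper and the streams from the question (writeInt/readInt/readFully are the only additions):
// client: prefix the compressed payload with its length
byte[] packed = Zip.compress(message);
streamOut.writeInt(packed.length);
streamOut.write(packed);
streamOut.flush();
and on the receiving side:
// server: read exactly that many bytes before decompressing
int len = streamIn.readInt();
byte[] packed = new byte[len];
streamIn.readFully(packed); // loops internally until all len bytes have arrived
System.out.println("output: " + Zip.decompress(packed));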
Your problem is the way you are working with the stream. You must send some metadata so your client knows what data to expect. Ideally you would create a protocol/state machine to read the stream. For your example, as a quick and dirty solution, send something like the data size or a termination sequence.
Example of solution:
Server: send the "data size" before the compressed data
Client: wait for the "data size" bytes. Then loop until the number of bytes read is equal to or greater than the "data size" value. Something like:
int dataRead = 0;
while (dataRead < dataExpected) {
    // buf must be at least dataExpected bytes long
    int n = streamIn.read(buf, dataRead, dataExpected - dataRead);
    if (n == -1) break; // connection closed before all the data arrived
    dataRead += n;
}
Of course you need to read dataExpected first, with similar code (e.g. readInt()).
Tip: you could also use UDP if you don't mind the possibility of losing data. It's easier to program with datagrams...

Java Socket Issue: Packets Are Merged At The Receiver Side

I'm having a socket problem. It occurs when I run the server and client on the same PC, i.e. using the "localhost" parameter, but the problem is not seen when different PCs are used.
The client sends a file with this code:
output_local.write(buffer, 0, bytesRead);
output_local.flush();
And after that, in another method, I send a command with this:
outputStream.write(string);
outputStream.flush();
The server appends the command to the end of the file, so it thinks it hasn't received the command from the client yet. Do you have an idea what might be causing this problem? How can I fix the defect? Below is the file-receive method on the server:
while (true) {
    try {
        bytesReceived = input.read(buffer);
    } catch (IOException ex) {
        Logger.getLogger(Server.class.getName()).log(Level.SEVERE, null, ex);
        System.out.println("exception occured");
        break;
    }
    System.out.println("received:" + bytesReceived);
    try {
        /* Write to the file */
        wr.write(buffer, 0, bytesReceived);
    } catch (IOException ex) {
        Logger.getLogger(Server.class.getName()).log(Level.SEVERE, null, ex);
    }
    total_byte = total_byte + bytesReceived;
    if (total_byte >= filesizeInt) {
        break;
    }
}
If you want message-like support, you need to create a protocol that makes clear what you're going to send and receive.
In TCP, you can't rely on separate "packets" being received separately (e.g., sending 4 chunks of 10 bytes may be received as 1 chunk of 40, or as 2 chunks of 20, or as one chunk of 39 and one chunk of 1). TCP guarantees in-order delivery, but not any particular 'packetization' of your data.
So, for example, if you're sending a string you need to first send the string length then its bytes. The logic in pseudocode would be something like:
Client:
Send the command indicator
Send the payload length
Send the payload
Server:
Read the command indicator
Read the payload length
Loop reading payload until the complete length has been read
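A minimal Java sketch of that pseudocode (dataOut, dataIn, CMD_TEXT and payload are hypothetical names; both sides are assumed to wrap the socket streams in DataOutputStream/DataInputStream):
// client side: frame each message explicitly
dataOut.writeByte(CMD_TEXT);        // command indicator
dataOut.writeInt(payload.length);   // payload length
dataOut.write(payload);             // payload bytes
dataOut.flush();
// server side: read exactly one frame, however TCP chose to packetize it
int command = dataIn.readByte();
int length = dataIn.readInt();
byte[] payload = new byte[length];
dataIn.readFully(payload);          // loops until all `length` bytes have arrived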
The defect is that you're treating a stream-based protocol (TCP) as if it were a message-oriented protocol. It's not. You should assume that this can happen.
If you need to break your stream into individual messages, you should use either delimiters or (preferably IMO) a length prefix for each message. You should also then anticipate that any read you issue may not receive as much data as you've asked for - in other words, not only can messages be combined if you're not careful, but they can easily be split.
I mentioned that I prefer length-prefixing to delimiters. Pros and cons:
The benefit of using a message delimiter is that you don't need to know the message size before you start sending.
The benefits of using a length prefix are:
The code for reading the message doesn't need to care about the data within the message at all - it only needs to know how long it is. You read the message length, you read the message data (looping round until you've read it all) and then you pass the message on for processing. Simple.
You don't need to worry about "escaping" the delimiter if you want it to appear within a normal message.
As TCP is a stream-oriented connection, this behaviour is normal if the writer writes faster than the reader reads, or faster than the TCP stack sends packets.
You should add a separator to delimit the parts of the stream, e.g. by using a length field for sub-packets, or by using a separator such as a newline (\n, char code 10).
Another option could be to use UDP (or even SCTP), but that depends on the task to be fulfilled.

ReadableByteChannel hangs on read(bytebuffer)

I'm working on an instant messenger using Java 1.6. The IM uses multithreading: a main thread, a receiving thread, and a ping thread. For TCP/IP communication I use a SocketChannel, and there seems to be a problem with receiving bigger packages from the server: instead of one package, the server sends a couple, and that's where the problem begins. The first 8 bytes of every package say what type the package is and how big it is. This is how I handle reading:
public void run() {
    while (true) {
        try {
            Headbuffer.clear();
            bytes = readChannel.read(Headbuffer); // ReadableByteChannel
            Headbuffer.flip();
            if (bytes != -1) {
                int head = Headbuffer.getInt();
                int size = Headbuffer.getInt();
                System.out.println("received pkg: 0x" + Integer.toHexString(head) + " with size " + size + " bytes");
                switch (head) {
                    case incoming.Pkg1: ReadWelcome(); break;
                    case incoming.Pkg2: ReadLoginFail(); break;
                    case incoming.Pkg3: ReadLoginOk(); break;
                    case incoming.Pkg4: ReadUserList(); break;
                    case incoming.Pkg5: ReadUserData(); break;
                    case incoming.Pkg6: ReadMessage(); break;
                    case incoming.Pkg7: ReadTypingNotify(); break;
                    case incoming.Pkg8: ReadListStatus(); break;
                    case incoming.Pkg9: ChangeStatus(); break;
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
During the tests everything was fine until I logged into my account and imported my buddy list. I sent a request to the server for statuses and it sent me back only about 10 out of 80 contacts. So I came up with something like this:
public synchronized void readInStatus(ByteBuffer headBuffer) {
    byteArray.add(headBuffer); // store every buffer in an ArrayList
    int buddies = MainController.controler.getContacts().getSize();
    while (buddies > 0) {
        readStuff();
        readDescription();
        --buddies;
    }
}
and readStuff() and readDescription() each check the size of the field they need against the remaining bytes in the buffer:
if (byteArray.get(current).remaining() >= 4) {
    uin = byteArray.get(current).getInt();
} else {
    byteArray.add(Receiver.receiver.read());
    current = current + 1;
    uin = byteArray.get(current).getInt();
}
and Receiver.receiver.read() is:
public ByteBuffer read() {
    try {
        ByteBuffer bb = ByteBuffer.allocate(40000);
        bb.order(ByteOrder.LITTLE_ENDIAN);
        bytes = readChannel.read(bb);
        bb.flip();
        return bb;
    } catch (Exception e) {
        e.printStackTrace();
    }
    return null;
}
So the application is launched, logs in, and then sends the contacts. The server sends back just a piece of my list, and in the method readInStatus(ByteBuffer headBuffer) I try to force reading the rest of it. And now the fun part: after some time it gets to Receiver.receiver.read(), and on bytes = readChannel.read(bb) it just stops, and I don't know why: no errors, nothing, even after some time, and I'm out of ideas. I've been fighting with this for a whole week and I'm getting nowhere near a solution. I will appreciate any suggestions. Thanks.
Thanks for the response. Yes, I'm using a blocking SocketChannel. I tried non-blocking, but it went wild and out of control, so I dropped the idea. About the bytes I expect - this is kind of weird, because the size is given only once, in the head, and it is the size of the first part, not of the whole package; the other parts contain no header bytes at all. I can't predict how many bytes there will be; the reason is the descriptions, which can be up to 255 bytes. This is exactly why I created the variable buddies in public synchronized void readInStatus(ByteBuffer headBuffer),
which is basically the length of my buddy list, and before reading each field I check whether enough bytes are left; if not, I do a read(). But the last field before the description is an integer with the length of the incoming description, so it is impossible to determine how long the package is until some processing is done. @robert, do you think I should try switching to a non-blocking SocketChannel again in that situation?
The problem is most likely that you are sending fewer bytes than you are trying to read. You might have missed writing something, written things in the wrong order, misread a size field or something like that.
I think I'd attack this problem by adding tracing code to count and log the number of bytes read and written, notional packet sizes and so on. Then run, and compare the traces to see where things start to get out of sync.
If you are using a blocking SocketChannel, read will block until at least some data is available (it will not necessarily fill the buffer) or until the server delivers end of stream. For a server with connection keep-alive, the server does not send end of stream - it will simply stop sending data, and the read will hang indefinitely or until timeout.
You could either:
(i) try using a non-blocking SocketChannel, repeatedly reading until the read delivers 0 bytes (but beware 0 bytes does not necessarily mean end of stream - it could mean an interruption) or
(ii) if you have to use the blocking version, and you know how many bytes you were expecting from the server e.g. from a header, when the number of bytes left to read is less than buffer.capacity(), move position and/or limit on the buffer so as to leave only the required space in the buffer before the read. I am working on this solution now. If it works for you, please let me know!
So far as I can work out, if you have to use a blocking SocketChannel and you do not know how many bytes you are expecting, and the server does not send end of stream, there is no solution.
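For case (ii), a common pattern with a blocking channel is to size a buffer to the expected byte count and keep reading until it is full; a sketch, assuming size was parsed from the 8-byte header:
ByteBuffer body = ByteBuffer.allocate(size);      // size taken from the package header
body.order(ByteOrder.LITTLE_ENDIAN);
while (body.hasRemaining()) {
    if (readChannel.read(body) == -1) {
        throw new EOFException("connection closed before the full package arrived");
    }
}
body.flip(); // ready to parse the package fields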
