I am trying to write a simple echo server using SSL. The first line that goes to the server is echoed exactly. When I send a second line, only the first character is echoed. The client reads lines from stdin with a BufferedReader's readLine(). If I hit CR again, the rest of the message comes through. The server seems to be sending all of the data. Here is the output from the client and server:
CLIENT:
Sending to server at 192.168.0.161
on port 9999
4 seasoNS
echo:4 seasoNS
are really good
echo:a
echo:re really good
SERVER:
server listening on 9999
has cr/lf
4 seasoNS
size to send: 10
has cr/lf
are really good
size to send: 16
exiting...
Here is the client loop:
try {
    BufferedReader consoleBufferedReader = getConsoleReader();
    sslsocket = getSecSocket(strAddress, port);
    BufferedWriter sslBufferedWriter = getSslBufferedWriter(sslsocket);
    InputStream srvrStream = sslsocket.getInputStream();
    String outMsg;
    while ((outMsg = consoleBufferedReader.readLine()) != null) {
        byte[] srvrData = new byte[1024];
        sslBufferedWriter.write(outMsg);
        sslBufferedWriter.newLine();
        sslBufferedWriter.flush();
        int sz = srvrStream.read(srvrData);
        String echoStr = new String(srvrData, 0, sz);
        System.out.println("echo:" + echoStr);
    }
} catch (Exception exception) {
    exception.printStackTrace();
}
This problem seemed so odd that I was hoping there was something obvious that I was missing.
What you're seeing is perfectly normal.
The assumption you're making, that a single read will return the whole message, is wrong:
int sz = srvrStream.read(srvrData);
Instead, you need to keep looping until you get the delimiter of your choice (possibly a newline in your case).
This applies to plain TCP connections as well as SSL/TLS connections in general. This is why application protocols must use delimiters or a content length (for example, HTTP uses a double newline to end its headers and uses Content-Length or chunked transfer encoding to tell the other party when the entity ends).
In practice, with such a small example you might never notice that the assumption is wrong.
However, JSSE deliberately splits the records it sends 1/n-1 to mitigate the BEAST attack (OpenSSL would send 0/n).
Hence, the problem is more immediately noticeable in this case.
Again, this is not an SSL/TLS or Java problem. The way to fix it is to treat the input you read as a stream and not to assume that the size of the buffers you read on one end will match the size of the buffers used to send that data from the other end.
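For example, here is a minimal sketch of a fix for the client loop above, assuming the server echoes each message with a trailing newline (the "size to send" values in the server output suggest it does). The charset choice is an assumption:
    // Wrap the socket's InputStream in a BufferedReader: readLine() keeps
    // reading internally until it sees the newline delimiter (or EOF).
    BufferedReader srvrReader = new BufferedReader(
            new InputStreamReader(sslsocket.getInputStream(), StandardCharsets.UTF_8));
    String outMsg;
    while ((outMsg = consoleBufferedReader.readLine()) != null) {
        sslBufferedWriter.write(outMsg);
        sslBufferedWriter.newLine();
        sslBufferedWriter.flush();
        String echoStr = srvrReader.readLine();
        if (echoStr == null) {
            break; // server closed the connection
        }
        System.out.println("echo:" + echoStr);
    }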
I'm writing a little server that uses Protocol Buffers to encode some data.
A TCP socket is opened between the Android client and the Python server.
The Android client sends a string for processing as normal newline-delimited UTF-8.
The Python server does some processing to generate a response, which gives an array of int arrays: [[int]]. This is encoded in the protocol buffer file:
syntax = "proto2";
package tts;
message SentenceContainer {
repeated Sentence sentence = 1;
}
message Sentence {
repeated uint32 phonemeSymbol = 1;
}
It gets loaded into this structure and sent as follows...
container = ttsSentences_pb2.SentenceContainer()
for sentence in input_sentences:
    phonemes = container.sentence.add()
    # Add all the phonemes to the phoneme list
    phonemes.phonemeSymbol.extend(processor.text_to_sequence(sentence))

payload = container.SerializeToString()
client.send(payload)
The Android client receives the Protocol Buffer encoded message and tries to decode it.
This is where I'm stuck...
// I get the InputStream when the TCP connection is first opened
bufferIn = socket.getInputStream();
TtsSentences.SentenceContainer sentences = TtsSentences.SentenceContainer.parseDelimitedFrom(bufferIn);
When receiving the message the client gets this exception:
E/TCP: Server Error
com.google.protobuf.InvalidProtocolBufferException: Protocol message end-group tag did not match expected tag.
at com.google.protobuf.CodedInputStream.checkLastTagWas(CodedInputStream.java:164)
at com.google.protobuf.GeneratedMessageLite.parsePartialDelimitedFrom(GeneratedMessageLite.java:1527)
at com.google.protobuf.GeneratedMessageLite.parseDelimitedFrom(GeneratedMessageLite.java:1496)
at com.tensorspeech.tensorflowtts.TtsSentences$SentenceContainer.parseDelimitedFrom(TtsSentences.java:221)
at com.tensorspeech.tensorflowtts.network.PersistentTcpClient.run(PersistentTcpClient.java:100)
at com.tensorspeech.tensorflowtts.MainActivity.lambda$onCreate$0$MainActivity(MainActivity.java:71)
at com.tensorspeech.tensorflowtts.-$$Lambda$MainActivity$NTUE8bAusaoF3UGkWb7-Jt806BY.run(Unknown Source:2)
at java.lang.Thread.run(Thread.java:919)
I already know this problem is caused by Protocol Buffers not being self-delimiting, but I'm not sure how I'm supposed to properly delimit the messages. I've tried adding a newline with client.send(payload + b'\n'), and prepending the message size in bytes with client.send(container.ByteSize().to_bytes(2, 'little') + payload), but I am not sure how to proceed.
It's a shame there's no documentation on how to use Protocol Buffer over TCP Sockets in Java...
OK, I worked this out...
In the case where you have a short-lived connection, the socket closing would signify the end of the payload, so no extra logic is required.
In my case, I have a long-lived connection, so closing the socket to signify the end of the payload wouldn't work.
With a Java Client & Server, you could get around this by using:
MessageLite.writeDelimitedTo(OutputStream)
then on the recipient side:
MessageLite.parseDelimitedFrom(InputStream).
Easy enough...
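For example, a sketch using the SentenceContainer type from the question (the library writes and consumes the varint length prefix for you):
    // Sender: prefixes the serialized message with its size as a varint.
    container.writeDelimitedTo(socket.getOutputStream());

    // Recipient: reads the varint prefix, then exactly that many bytes.
    TtsSentences.SentenceContainer msg =
            TtsSentences.SentenceContainer.parseDelimitedFrom(socket.getInputStream());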
But in the Python API, there is no writeDelimitedTo() function, so instead we must recreate what writeDelimitedTo() does. Fortunately, it's simple: it just prefixes the message with the payload size encoded as a varint (_VarintBytes)!
from google.protobuf.internal.encoder import _VarintBytes

client, _ = socket.accept()
payload = your_PB_item.SerializeToString()
size = len(payload)  # size of the serialized payload in bytes
client.send(_VarintBytes(size) + payload)
Then on the Java recipient side...
bufferIn = socket.getInputStream();
yourPbItem message;
if ((message = yourPbItem.parseDelimitedFrom(bufferIn)) != null) {
    // Do stuff :)
}
This way, your Protocol Buffers library knows exactly how many bytes to read, and then stops reading from the InputStream rather than sitting there listening indefinitely.
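On a long-lived connection, you can also keep calling parseDelimitedFrom() in a loop; it returns null once the stream ends. A sketch, assuming the accessors generated from the .proto above:
    // Read a stream of varint-delimited messages over one connection.
    InputStream bufferIn = socket.getInputStream();
    TtsSentences.SentenceContainer sentences;
    while ((sentences = TtsSentences.SentenceContainer.parseDelimitedFrom(bufferIn)) != null) {
        for (TtsSentences.Sentence sentence : sentences.getSentenceList()) {
            // Do stuff with sentence.getPhonemeSymbolList() :)
        }
    }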
I'm having a socket problem. This problem occurs when I run the server and client on the same PC, i.e., using the "localhost" parameter, but the problem is not seen when different PCs are used.
The client sends a file with this code:
output_local.write(buffer, 0, bytesRead);
output_local.flush();
And after that, in another method, I send a command with this:
outputStream.write(string);
outputStream.flush();
The server appends the command to the end of the file, so it thinks it hasn't received the command from the client yet. Do you have an idea what might be causing this problem? How can I fix the defect? Below is the file receive method on the server:
while (true) {
    try {
        bytesReceived = input.read(buffer);
    } catch (IOException ex) {
        Logger.getLogger(Server.class.getName()).log(Level.SEVERE, null, ex);
        System.out.println("exception occured");
        break;
    }
    System.out.println("received:" + bytesReceived);
    try {
        /* Write to the file */
        wr.write(buffer, 0, bytesReceived);
    } catch (IOException ex) {
        Logger.getLogger(Server.class.getName()).log(Level.SEVERE, null, ex);
    }
    total_byte = total_byte + bytesReceived;
    if (total_byte >= filesizeInt) {
        break;
    }
}
If you want message-like support, you need to create a protocol that defines what you're going to send and receive.
In TCP, you can't rely on separate "packets" being received separately (e.g., sending 4 chunks of 10 bytes may be received as 1 chunk of 40, or 2 chunks of 20, or one chunk of 39 and one chunk of 1). TCP guarantees in-order delivery, but not any particular 'packetization' of your data.
So, for example, if you're sending a string you need to first send the string length then its bytes. The logic in pseudocode would be something like:
Client:
Send the command indicator
Send the payload length
Send the payload
Server:
Read the command indicator
Read the payload length
Loop reading payload until the complete length has been read
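A minimal Java sketch of both sides of that pseudocode, assuming a one-byte command indicator and a four-byte big-endian length (the framing details and names like command/payload are illustrative, not fixed):
    // Client: command indicator, payload length, then the payload itself.
    DataOutputStream out = new DataOutputStream(socket.getOutputStream());
    out.writeByte(command);
    out.writeInt(payload.length);
    out.write(payload);
    out.flush();

    // Server: read the fixed-size header, then loop until the payload is complete.
    DataInputStream in = new DataInputStream(socket.getInputStream());
    int cmd = in.readUnsignedByte();
    int length = in.readInt();
    byte[] data = new byte[length];
    int read = 0;
    while (read < length) {
        int n = in.read(data, read, length - read);
        if (n == -1) {
            throw new EOFException("connection closed mid-message");
        }
        read += n;
    }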
The defect is that you're treating a stream-based protocol (TCP) as if it were a message-oriented protocol. It's not. You should assume that this can happen.
If you need to break your stream into individual messages, you should use either delimiters or (preferably IMO) a length prefix for each message. You should also then anticipate that any read you issue may not receive as much data as you've asked for - in other words, not only can messages be combined if you're not careful, but they can easily be split.
I mentioned that I prefer length-prefixing to delimiters. Pros and cons:
The benefit of using a message delimiter is that you don't need to know the message size before you start sending.
The benefits of using a length prefix are:
The code for reading the message doesn't need to care about the data within the message at all - it only needs to know how long it is. You read the message length, you read the message data (looping round until you've read it all) and then you pass the message on for processing. Simple.
You don't need to worry about "escaping" the delimiter if you want it to appear within a normal message.
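In Java, the "looping round until you've read it all" part of a length-prefixed read can be delegated to DataInputStream.readFully(), which performs that loop internally:
    // readFully() blocks until the buffer is filled, throwing EOFException
    // if the stream ends first, so no manual read loop is needed.
    DataInputStream in = new DataInputStream(socket.getInputStream());
    int length = in.readInt();          // the length prefix
    byte[] message = new byte[length];
    in.readFully(message);              // the whole message, however it was packetized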
As TCP is a stream-oriented connection, this behaviour is normal if the writer writes faster than the reader reads, or faster than the TCP stack sends packets.
You should add a separator between the parts of the stream, e.g. by using a length field for sub-packets, or by using a separator such as a newline (\n, character code 10).
Another option could be to use UDP (or even SCTP), but that depends on the task to be fulfilled.
I want to recognize the end of a data stream in Java sockets. When I run the code below, it just gets stuck and keeps running (it hangs at value 10).
I also want the program to download binary files, but the last byte is always different, so I don't know how to stop the while loop (programmatically).
String host = "example.com";
String path = "/";
Socket connection = new Socket(host, 80);
PrintWriter out = new PrintWriter(connection.getOutputStream());

out.write("GET " + path + " HTTP/1.1\r\nHost: " + host + "\r\n\r\n");
out.flush();

int dataBuffer;
while ((dataBuffer = connection.getInputStream().read()) != -1)
    System.out.println(dataBuffer);
out.close();
Thanks for any hints.
Actually, your code is not correct.
In HTTP 1.0, each connection is closed, and as a result the client can detect when the input has ended.
In HTTP 1.1 with persistent connections, the underlying TCP connection remains open, so a client can detect when the input ends in one of the following two ways:
1) The HTTP server puts a Content-Length header indicating the size of the response. This can be used by the client to understand when the response has been fully read.
2) The response is sent in chunked encoding, meaning that it comes in chunks prefixed with the size of each chunk. Using this information, the client can reconstruct the response from the chunks received from the server.
You should be using an HTTP client library, since implementing a generic HTTP client is not trivial (at all, I may say).
To be specific, in the code you posted, you should have followed one of the above approaches.
Additionally, you should read the headers line by line, since HTTP headers are line-terminated. I.e. something like:
BufferedReader in = new BufferedReader(new InputStreamReader(connection.getInputStream()));
String s = null;
while ((s = in.readLine()) != null) {
    // Read an HTTP header line
    if (s.isEmpty()) break; // No more headers
}
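To make approach 1) concrete, here is a sketch. It consumes the headers byte by byte rather than through a BufferedReader, because a BufferedReader may buffer past the blank line and swallow the start of the body; it also assumes the response carries a Content-Length header and is not chunked:
    InputStream in = connection.getInputStream();

    // Read the raw headers up to the blank line (CRLF CRLF).
    StringBuilder headers = new StringBuilder();
    int c;
    while (!headers.toString().endsWith("\r\n\r\n") && (c = in.read()) != -1) {
        headers.append((char) c);
    }

    // Find the Content-Length header.
    int contentLength = 0;
    for (String line : headers.toString().split("\r\n")) {
        if (line.toLowerCase().startsWith("content-length:")) {
            contentLength = Integer.parseInt(line.substring("content-length:".length()).trim());
        }
    }

    // Read exactly contentLength body bytes, looping over partial reads.
    byte[] body = new byte[contentLength];
    int read = 0;
    while (read < contentLength) {
        int n = in.read(body, read, contentLength - read);
        if (n == -1) break; // server closed early
        read += n;
    }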
Sending a Connection: close header, as suggested by khachik, gets the job done (since the closing of the connection signals the end of the input), but performance gets worse because you start a new connection for each request.
Whether that matters depends, of course, on what you are trying to do.
You should use existing libraries for HTTP. See here.
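For instance, with the HTTP client built into Java 11+ (a self-contained sketch; example.com stands in for the real host):
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class HttpGetExample {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://example.com/"))
                    .build(); // GET is the default method
            // The library deals with Content-Length, chunked encoding and
            // persistent connections, so the body simply ends when it ends.
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.body());
        }
    }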
Your code works as expected. The server doesn't close the connection, and dataBuffer never becomes -1. This happens because connections are kept alive in HTTP 1.1 by default. Use HTTP 1.0, or put a Connection: close header in your request.
For example:
out.write("GET "+ path +" HTTP/1.1\r\nHost: "+ host +"\r\nConnection: close\r\n\r\n");
out.flush();
int dataBuffer;
while ((dataBuffer = connection.getInputStream().read()) != -1)
System.out.print((char)dataBuffer);
out.close();
I have a Java client which sends UTF-8 strings to a C# TCP server; I'm using a DataOutputStream to send the strings. The code looks like this:
public void sendUTF8String(String ar) {
    if (socket.isConnected()) {
        try {
            dataOutputStream.write(ar.getBytes(Charset.forName("UTF-8")));
            dataOutputStream.flush();
        } catch (IOException e) {
            handleException(e);
        }
    }
}
The problem is that flush() doesn't seem to work right. If I send two strings close to each other, the server receives only one message containing both strings. The whole thing works if I do a Thread.sleep(1000) between calls, but that is obviously not a solution.
What am I missing?
flush() doesn't guarantee that a data packet gets shipped off. Your TCP/IP stack is free to bundle your data for maximum efficiency. Worse, there are probably a bunch of other TCP/IP stacks between you and your destination, and they are free to do the same.
I think you shouldn't rely on how your data is bundled into packets. Insert a logical terminator/divider into your data and you will be on the safe side.
You shouldn't worry about how the data is broken up into packets.
You should include the length of the string in your messages, and then on the receiving end you would read the length first. So, for example, to send you would do
byte[] arbytes = ar.getBytes(Charset.forName("UTF-8"));
output.writeInt(arbytes.length);
output.write(arbytes);
and then in your reader you do
int len = input.readInt();
byte[] arbytes = new byte[len];
for (int i = 0; i < len; i++) {
    arbytes[i] = (byte) input.read();
}
// convert bytes back to string
You can't just call input.read(arbytes) because the read function doesn't necessarily read the entire length of the array. You can do a loop where you read a chunk at a time but the code for that is a bit more complex.
Anyway, you get the idea.
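For reference, the chunk-at-a-time loop mentioned above could look like this (a sketch, assuming input is the same DataInputStream as before):
    // Read into the buffer at increasing offsets until it is full; each read()
    // may return fewer bytes than requested.
    int len = input.readInt();
    byte[] arbytes = new byte[len];
    int off = 0;
    while (off < len) {
        int n = input.read(arbytes, off, len - off);
        if (n == -1) {
            throw new EOFException("stream ended before the full message arrived");
        }
        off += n;
    }
    String received = new String(arbytes, StandardCharsets.UTF_8);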
Also, if you really want to control what goes in what packets, you can use Datagram Sockets, but if you do that then delivery of the packet is not guaranteed.
Sockets send a stream of data, not messages.
You shouldn't rely on the packets you receive being the same size as the ones that were sent.
Packets can be grouped together, as you have seen, but they can also be broken up.
Use @Chad Okere's suggestion on how to ensure you get blocks the same way they were sent.
However, in your case, you can just use
dataOutputStream.writeUTF(ar); // writes the string as a length-prefixed (modified) UTF-8 block
and
String text = dataInputStream.readUTF(); // reads a string written by writeUTF()
I have the following Java socket client app that sends the same string to a socket server:
import java.net.*;
import java.io.*;

public class ServerClient {
    public static void main(String[] args) throws IOException {
        System.out.println("Starting a socket server client...");
        Socket client = new Socket("XXX.X.XXX.XX", 12001);
        BufferedOutputStream stream = new BufferedOutputStream(client.getOutputStream());
        String message = "ABC";
        BufferedReader inputReader = new BufferedReader(new InputStreamReader(System.in));
        String input = null;
        while (true) {
            System.out.print("Would you like to send a message to Server? ");
            input = inputReader.readLine();
            if (!input.equals("Y")) break;
            System.out.println("Message to send: " + message);
            System.out.println("Message length is: " + message.length());
            byte[] messageBytes = message.getBytes("US-ASCII");
            stream.write(messageBytes, 0, messageBytes.length);
            stream.flush();
        }
        System.out.println("Shutting down socket server client...");
        stream.close();
        client.close();
        inputReader.close();
    }
}
The first time the message is sent, the server receives it; however, every subsequent time I try to send the message, the server does not receive anything. The message simply disappears. I am writing to the socket successfully (no exceptions), but nothing comes out on the other side of the pipe (or so I'm told).
I do not have access to the server app, logs, or code, so I'm wondering if there is any approach you can recommend to figure out why the server is not receiving the subsequent messages. Any ideas would be greatly appreciated!
Clarification:
Newlines are not expected by the server; otherwise, how would it even receive the message the first time? By trial and error, I did try sending '\n', "\r\n", and 0x00 characters at the end of the string, all without any luck.
I thought flushing was an issue, so I tried various OutputStream classes (PrintStream, PrintWriter, FilterOutputStream), but I kept running into exactly the same issue. And if "flushing" were the issue, how would it work the first time?
Other tests:
1 - Use a network sniffer to see what is really happening on the network.
2 - Use a program like TCP Test Tool to send data to the server and simulate your program. (netcat can also be used, but it sends a newline after each line.)
Remember:
TCP is stream oriented. not message oriented.
One write on the client could take several reads on the server to receive.
Multiple writes on the client could get read by the server in one read
You'll hardly ever see the above scenarios in a test application on a local network, but you will see them very quickly in a production environment, or when you really start to speed up the sending/receiving.
Following this, if you are sending messages you need a delimiter, or some other way of indicating 'here's one message', e.g. defining the protocol to be 'the first byte is the length of the following message'.
And you'd need to check on the receiving end whether you read a partial message, a whole message, or any combination thereof (e.g. one read might have received three and a half messages...).
A quick solution for your test app: write lines, that is, a string followed by a newline character. A BufferedReader's readLine() could then take care of the reassembly for you on the receiving end, as in the sketch below.
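A sketch of that line-based approach, assuming client and socket are the connected sockets on each end (US-ASCII matches the test app above):
    // Sender: each message is one line; the newline is the delimiter.
    BufferedWriter out = new BufferedWriter(
            new OutputStreamWriter(client.getOutputStream(), StandardCharsets.US_ASCII));
    out.write(message);
    out.newLine();
    out.flush();

    // Receiver: readLine() reassembles the stream into messages, however
    // the bytes were split or merged in transit.
    BufferedReader in = new BufferedReader(
            new InputStreamReader(socket.getInputStream(), StandardCharsets.US_ASCII));
    String msg;
    while ((msg = in.readLine()) != null) {
        System.out.println("received: " + msg);
    }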
It works correctly here... but a carriage return or some other end-of-message marker seems to be missing after the message.
Hard to write more without knowing what the server expects (protocol)...
Maybe you should try something like
String message = "ABC\n";