Implementing a client for custom TCP protocol - java

I have to implement a client that communicates with a server over a custom protocol based on XML. It is an application-layer protocol on top of TCP: my client sends an XML request and receives an XML response. Now I am wondering how to make sure I have received the whole message before I start parsing it.
I see two approaches:
Receiving bytes until some magic number that marks the end of the message.
That is the best approach (to my eye), yes?
But it is possible that there is no magic number and the size of the message is not known. What about that case? I have seen clients for other protocols doing something like this:
while (true) {
    r = socket.read(buffer, offset, 1024);
    if (r < 1024) break;
    offset += r;
}
// parse buffer
and I am not sure whether it is correct. It assumes that if we read fewer than 1024 bytes, the message is complete. Is that right?
What is the recommended way to solve this?

In your custom protocol you need to include the following steps:
Client
Calculate the number of 1024-byte chunks of the XML content, i.e. ceiling(content bytes / 1024)
Send the number from step 1 to the server through the socket
Transmit the content in chunks of a defined buffer size, e.g. 1024 bytes
Server
Read the number of chunks to receive from the client
Inside a for loop running for the number read at step 1, receive the content into the predefined buffer
This way the server knows how much content to expect before it starts receiving the XML.
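The scheme above can be sketched with plain stream I/O. This is a minimal, hypothetical example (class and method names are my own, and I send the exact byte count rather than the chunk count, a close variant that avoids ambiguity in the last, partial chunk). The streams are simulated in memory so the sketch is self-contained:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class ChunkProtocol {
    static final int CHUNK = 1024;

    // Client side: write the total length first, then the content in CHUNK-sized pieces.
    static void send(DataOutputStream out, byte[] xml) throws IOException {
        out.writeInt(xml.length);                       // lets the server size its reads
        for (int off = 0; off < xml.length; off += CHUNK) {
            out.write(xml, off, Math.min(CHUNK, xml.length - off));
        }
        out.flush();
    }

    // Server side: read the length, then keep reading until that many bytes arrived.
    static byte[] receive(DataInputStream in) throws IOException {
        int length = in.readInt();
        byte[] content = new byte[length];
        in.readFully(content);                          // loops internally over short reads
        return content;
    }

    public static void main(String[] args) throws IOException {
        byte[] xml = "<msg>hello</msg>".getBytes(StandardCharsets.UTF_8);
        ByteArrayOutputStream wire = new ByteArrayOutputStream();
        send(new DataOutputStream(wire), xml);
        byte[] back = receive(new DataInputStream(new ByteArrayInputStream(wire.toByteArray())));
        System.out.println(new String(back, StandardCharsets.UTF_8));
    }
}
```

With real sockets you would wrap `socket.getOutputStream()` and `socket.getInputStream()` in the same `DataOutputStream`/`DataInputStream`; `readFully` is what saves you from the "read returned fewer than 1024 bytes" guesswork in the question.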

Related

Many I/O streams from one socket

Can I connect many independent I/O streams of a server's socket and a client's socket in parallel, such that each pair of streams could send different data at the same time?
How could I achieve such a connection in Java without increasing the number of sockets between server and client?
There's always one stream for input and one stream for output, so you can't add more streams.
However, as sje397 commented, you can use the same stream to send "different" data; you just need to come up with a way to distinguish the channels on the receiving side, so it can reconstruct the data properly. This is a protocol design issue.
Edit:
In your example you could have a packet structure with a header that tells the type (or channel) of the packet, length of the data and for the file packets some additional information if needed. Let's assume that the length field is a single byte, so your maximum packet size (for String packets) would be 1 + 1 + 255 = 257 bytes.
When the server reads bytes, it will check the first byte for the type of the packet. After determining that it's a String packet, it will read the length, then read the payload. Then the process repeats itself.
For file data additional header information is most likely needed, otherwise the non-String packets will just be a bunch of bytes.
This means that your protocol will become packet based, so you must write the data one packet at a time. Assuming that a data packet has a max size of 64K, you would then be able to send data in the following way (imagine that it's a network pipe):
Client -> 257(S) -> 64K(D) -> 257(S) -> 64K(D) -> 257(S) -> Server
allowing you to interleave the two different kinds of data in a single network connection.
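The packet structure above can be sketched as a tiny framing layer. This is a hypothetical sketch (the type constants and class names are mine, and I use a 4-byte length instead of the single byte discussed above, which lifts the 255-byte cap): each frame carries a one-byte type, a length, and the payload, so the receiver can demultiplex the channels:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class FrameMux {
    static final byte TYPE_STRING = 1;   // hypothetical channel ids
    static final byte TYPE_FILE = 2;

    // One frame on the wire: [type:1][length:4][payload]
    static void writeFrame(DataOutputStream out, byte type, byte[] payload) throws IOException {
        out.writeByte(type);
        out.writeInt(payload.length);
        out.write(payload);
    }

    // Read one frame; the caller dispatches on the type placed in typeOut[0].
    static byte[] readFrame(DataInputStream in, byte[] typeOut) throws IOException {
        typeOut[0] = in.readByte();
        byte[] payload = new byte[in.readInt()];
        in.readFully(payload);
        return payload;
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream wire = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(wire);
        writeFrame(out, TYPE_STRING, "hello".getBytes()); // S
        writeFrame(out, TYPE_FILE, new byte[]{10, 20, 30}); // D
        writeFrame(out, TYPE_STRING, "world".getBytes()); // S

        DataInputStream in = new DataInputStream(new ByteArrayInputStream(wire.toByteArray()));
        byte[] type = new byte[1];
        for (int i = 0; i < 3; i++) {
            byte[] p = readFrame(in, type);
            System.out.println("channel " + type[0] + ": " + p.length + " bytes");
        }
    }
}
```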
Assuming that you want fast replies for console input, my suggestion is to use two socket streams: one for file data and another for user input. You can use ObjectInputStream and ObjectOutputStream to simplify your protocol. Just create a class for your protocol, make it Serializable, and use it with the socket streams.

Java AsynchronousSocketChannel read operation

I'm using the Java AsynchronousSocketChannel from NIO.2 in my project, with Oracle JDK 1.7.0_80 on Ubuntu 14.04.
My project is a server that processes binary data.
The code calls the read operation recursively in the completed method of the CompletionHandler anonymous class, like this:
private final CompletionHandler<Integer, AsynchronousSocketChannel> readHandler =
        new CompletionHandler<Integer, AsynchronousSocketChannel>() {
    @Override
    public void completed(Integer result, AsynchronousSocketChannel attachment) {
        if (result < 0) {
            try {
                attachment.close();
            } catch (IOException e) {
                // ignore
            }
            return;
        }
        attachment.read(swap, attachment, this);
    }

    @Override
    public void failed(Throwable exc, AsynchronousSocketChannel attachment) {
        // handle the error, e.g. close the channel
    }
};
Where the variable swap is a ByteBuffer instance.
Apparently, everything works well. But there is a packet whose total size is 3832 bytes. When the server receives this whole packet unsegmented, there is no problem. However, sometimes the packet is divided into two or more parts (TCP segments), e.g. a first segment of 2896 bytes and a second of 936 bytes.
The last segment doesn't have a header, this is breaking my algorithm.
I would like to know: is there a way to make the API call the "completed" method only after the whole packet has been read?
I have increased the SO_RCVBUF to 64K, but it doesn't work.
I would like to know: is there a way to make the API call the "completed" method only after the whole packet has been read?
No, there is no way to do this.
The TCP protocol can break up your stream of bytes in packets of arbitrary size. The application-level protocol that you use on top of TCP must not rely on messages always being sent completely in one TCP packet.
You must design your application-level protocol in such a way that it can deal with messages arriving broken up in packets of arbitrary size.
One common way to do this is to prefix application-level messages by a length field. For example, an application-level message consists of a field of 4 bytes that contain the length of the rest of the message. When you receive a message, you first receive the length, and then you should keep on receiving until you have received that many bytes, which you can then assemble into an application-level message.
The AsynchronousSocketChannel API cannot re-assemble application-level messages automatically for you, because it does not know anything about your application-level protocol.
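The length-prefix scheme described above can be implemented as a small reassembly buffer that you feed from each completed read; it surfaces a message only once all of its bytes have arrived. A minimal sketch (the class name and the 4-byte big-endian prefix are my own choices):

```java
import java.io.ByteArrayOutputStream;
import java.util.ArrayList;
import java.util.List;

public class MessageAssembler {
    private final ByteArrayOutputStream pending = new ByteArrayOutputStream();

    // Feed whatever a read() delivered; returns every message completed so far.
    public List<byte[]> feed(byte[] chunk, int len) {
        pending.write(chunk, 0, len);
        List<byte[]> messages = new ArrayList<>();
        byte[] buf = pending.toByteArray();
        int pos = 0;
        while (buf.length - pos >= 4) {
            int msgLen = ((buf[pos] & 0xFF) << 24) | ((buf[pos + 1] & 0xFF) << 16)
                       | ((buf[pos + 2] & 0xFF) << 8) | (buf[pos + 3] & 0xFF);
            if (buf.length - pos - 4 < msgLen) break;   // message not complete yet
            byte[] msg = new byte[msgLen];
            System.arraycopy(buf, pos + 4, msg, 0, msgLen);
            messages.add(msg);
            pos += 4 + msgLen;
        }
        pending.reset();                                 // keep only the unconsumed tail
        pending.write(buf, pos, buf.length - pos);
        return messages;
    }
}
```

In the `completed` handler you would call `feed(...)` with the bytes just read, then issue the next `read`; the 2896 + 936 segmentation no longer matters, because a message is handed to your algorithm only once it is whole.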

Client & Server java programs. How to detect the end of a transmission

I have a client program and a server program. I send 512-byte packets, and I want to send some files. How can the server know when the transmission is finished?
I want the server program to receive several files before it stops running, and I want to distinguish between them. What I want to do is print the number of each packet received:
Packet number 1 has been received
Packet number 2 has been received
.
.
Packet number N has been received
And once I receive a file, I want to reset this counter variable to 0, which means that I have to detect when a file has been completely sent (i.e. when I receive fewer than 512 bytes). Is there any way to count the number of bytes received from the client program? What is the best way to do it? I am using byte arrays.
The best way to do it is to introduce a delimiter. This will become part of your protocol.
When you're sending the files, you send the data. When there are no more bytes to send, the client sends an END_OF_MESSAGE value to the server, indicating that there is no more data. Then in your code you have something like:
String END_OF_MESSAGE = "ENDOFMESSAGE";
if (input.equals(END_OF_MESSAGE)) {
    // You know the client has finished their transmission.
}
NOTE: I would make the value of END_OF_MESSAGE far more complex than that. This is simply for demonstration purposes.
EDIT
As JB Nizet suggested, another option is to send the length of the file before anything else. That way the server knows the transmission is finished once the number of transferred bytes reaches the announced size.
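That length-first variant can be sketched as follows. This is a hypothetical example (names are mine, streams simulated in memory): the sender writes the file length first, and the receiver counts 512-byte packets until it has consumed exactly that many bytes, then is naturally ready for the next file:

```java
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class FileTransfer {
    static final int PACKET = 512;

    static void sendFile(DataOutputStream out, byte[] file) throws IOException {
        out.writeLong(file.length);          // length prefix: receiver knows when to stop
        out.write(file);
        out.flush();
    }

    // Returns the number of packets received; the counter resets per file.
    static int receiveFile(DataInputStream in, ByteArrayOutputStream dest) throws IOException {
        long remaining = in.readLong();
        byte[] packet = new byte[PACKET];
        int count = 0;
        while (remaining > 0) {
            int n = in.read(packet, 0, (int) Math.min(PACKET, remaining));
            if (n < 0) throw new IOException("connection closed mid-file");
            dest.write(packet, 0, n);
            remaining -= n;
            count++;
            System.out.println("Packet number " + count + " has been received");
        }
        return count;
    }
}
```

Note that this avoids the "fewer than 512 bytes means end of file" heuristic from the question, which fails whenever TCP happens to deliver a short read in the middle of a file.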

SerialPort Reading in java

Here is the problem in my Java serial communication: my Jennic hardware device is connected over UART, and I want to retrieve values from the device.
I am receiving a byte array of string data in SerialPortEvent.DATA_AVAILABLE:
case SerialPortEvent.DATA_AVAILABLE:
    try {
        int size;
        while (inputStream.available() != 0) {
            byte[] buff = new byte[100];
            size = inputStream.read(buff);
            inputStream.close();
            String result = new String(buff, 0, size);
            ZPS_tsAplZdpIeeeAddrRsp ieeRsp = new ZPS_tsAplZdpIeeeAddrRsp(result);
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
    break;
First I read the bytes and store them in buff[], then convert them into a String (and later into a String array). My problem is that the output is usually fine, but a few times it breaks.
Sample output:
80011634002078445541560000341201004189
80011635002078445541560000341201004189
80011636002078445541560000341201004189
/* here my sequence breaks: */
800116370020784455
41560000341201004189 /* these two broken pieces arrive as two separate arrays, and that is the problem */
80011638002078445541560000341201004189
Is there a problem with flushing the input buffer? I have tried inputStream.reset(), but it doesn't work. Can anyone give me a suitable suggestion to overcome the problem?
Thanks...
The 'problem' is in your expectations. Nowhere does it say that read() will fill the buffer, or that serial data transfer will preserve your message boundaries. That's up to you. All you get is a byte stream.
You need to read from the port into a buffer, and when that buffer has a whole message, flush that portion of the buffer into your message handling routines. This means you need to define your messages in a manner where each message can independently be identified and isolated.
Reading a stream will return data when it is available, or block when it is not; however, reading from a stream won't guarantee that you get your data in message-sized pieces. You only get notice that there is data to be read. You hit a common issue: data became available in the serial port buffer, and you started reading it before the whole message had arrived. Remember that the opposite can also occur: during another run, two or more messages might be buffered in the serial port before your program is ready to read the "next" message.
Rework your communication protocol to read bytes into a buffer (a class) which holds bytes until complete messages are available. Then put an interface on that buffer, readMessage(), which acts like read() except at the message level (buffering until it has a full message).
In general you cannot expect that a "message" sent from one end of a serial connection is going to be received all as one group. You may get it all at once or in several chunks of varying lengths. It is up to your receiving program to use what it knows about the incoming data to read bytes from the serial port and put them together and realize when a complete message has been received.
Normally devices handle this in one of three ways:
Fixed-length packets - you read until you get X bytes and then process those X bytes.
Packet length as part of the packet header, indicating how many additional bytes to read before treating the data received so far as a complete packet.
Packet start/end indicators (STX or SOH to start and ETX to end usually). You treat all data received between a start and end indicator as one message packet.
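The third option (start/end indicators) can be sketched as a small accumulator that you feed from DATA_AVAILABLE; it surfaces only complete messages, no matter how the serial driver splits the bytes. This is a hypothetical class of my own, using STX (0x02) and ETX (0x03) as markers:

```java
import java.util.ArrayList;
import java.util.List;

public class SerialFramer {
    static final byte STX = 0x02;
    static final byte ETX = 0x03;

    private final List<Byte> current = new ArrayList<>();
    private boolean inMessage = false;

    // Feed raw bytes as they arrive; returns any messages completed by this chunk.
    public List<String> feed(byte[] chunk, int len) {
        List<String> complete = new ArrayList<>();
        for (int i = 0; i < len; i++) {
            byte b = chunk[i];
            if (b == STX) {               // start marker: begin a fresh message
                current.clear();
                inMessage = true;
            } else if (b == ETX && inMessage) {
                byte[] msg = new byte[current.size()];
                for (int j = 0; j < msg.length; j++) msg[j] = current.get(j);
                complete.add(new String(msg));
                current.clear();
                inMessage = false;
            } else if (inMessage) {
                current.add(b);
            }                             // bytes outside STX..ETX are discarded
        }
        return complete;
    }
}
```

The event handler then simply feeds whatever read() returned and processes only the returned complete messages, which makes the mid-message breaks in the question harmless.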

Java NIO: Sending large messages quickly leads to truncated packets and data loss

I've got this nasty problem where sending multiple, large messages in quick succession from a Java (NIO) server (running Linux) to a client will lead to truncated packets. The messages have to be large and sent very rapidly for the problem to occur. Here's basically what my code is doing (not actual code, but more-or-less what's happening):
//-- setup stuff: --
Charset charset = Charset.forName("UTF-8");
CharsetEncoder encoder = charset.newEncoder();
String msg = "A very long message (let's say 20KB)...";

//-- inside loop to handle incoming connections: --
ServerSocketChannel ssc = (ServerSocketChannel) key.channel();
SocketChannel sc = ssc.accept();
sc.configureBlocking(false);
sc.socket().setTcpNoDelay(true);
sc.socket().setSendBufferSize(1024 * 1024);

//-- later, actual sending of messages: --
for (int n = 0; n < 20; n++) {
    ByteBuffer bb = encoder.encode(CharBuffer.wrap(msg + '\0'));
    sc.write(bb);
    bb.rewind();
}
So, if the packets are long enough and sent as quickly as possible (i.e. in a loop like this with no delay), then on the other end it often comes out something like this:
[COMPLETE PACKET 1]
[COMPLETE PACKET 2]
[COMPLETE PACKET 3]
[START OF PACKET 4][SOME OR ALL OF PACKET 5]
There is data loss, and the packets start to run together, such that the start of packet 5 (in this example) arrives in the same message as the start of packet 4. It's not just truncating; it's running the messages together.
I imagine that this is related to the TCP buffer or "window size", or that the server here is simply providing data faster than the OS or network adapter can handle. But how do I check for this and prevent it from happening? If I reduce the length of each sc.write() but increase the number of repetitions, I still run into the same problem. It seems to simply be an issue with the amount of data in a short amount of time. I don't see sc.write() throwing any exceptions either (I know that in my example above I'm not checking, but I have in my tests).
I'd be happy if I could programmatically check whether it is not ready for more data, put in a delay, and wait until it is ready. I'm also not sure whether sc.socket().setSendBufferSize(1024*1024) has any effect, or whether I'd need to adjust this on the Linux side of things. Is there a way to really "flush" a SocketChannel? As a lame workaround, I could try to explicitly force a complete send of anything buffered whenever I send a message of over 10KB, for example (which is not that often in my application). But I don't know of any way to force a send of the buffer (or wait until it has been sent). Thanks for any help!
There are many reasons why sc.write() would not send some or all of the data. You have to check the return value and/or the number of bytes remaining in the buffer.
for (int n = 0; n < 20; n++) {
    ByteBuffer bb = encoder.encode(CharBuffer.wrap(msg + '\0'));
    sc.write(bb);
    if (!bb.hasRemaining()) {
        // all data sent
    } else {
        // could not send all data: retry later, e.g. register for OP_WRITE
    }
}
You are not checking the return value of:
sc.write(bb);
This returns the number of bytes written, which might be less than the data available in your buffer. Because of how NIO works, you can probably just call remaining() on your ByteBuffer to see whether any bytes are left.
I haven't done any NIO programming, but according to the Javadocs, sc.write() will not write the entire ByteBuffer if the SocketChannel is in non-blocking mode (as yours is) and the socket's output buffer is full.
Because you are writing so quickly, it is very likely that you are flooding your connection and your network or receiver cannot keep up.
I'd be happy if I could programmatically check if it is not ready for more data yet
You need to check the return value of sc.write() to find out whether your output buffer is full.
Don't assume you have any control over what data ends up in which packet.
More here: What's the best way to monitor a socket for new data and then process that data?
You are using non-blocking mode: sc.configureBlocking(false);
Set blocking to true and your code should work as it is. The suggestions made by others here, to check the send count and loop, will also work.
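The "check the send count and loop" advice can be demonstrated off the network with a java.nio.channels.Pipe, whose sink behaves like a non-blocking SocketChannel with a full send buffer. This is a sketch under that assumption (the helper and the draining read, which stands in for a receiving client, are my own):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;

public class FullWriteDemo {
    // Keep calling write() until the buffer drains; when write() returns 0
    // (kernel buffer full), make room by reading from the other end.
    static int writeFully(Pipe.SinkChannel sink, Pipe.SourceChannel source,
                          ByteBuffer out, ByteBuffer scratch) throws IOException {
        int total = 0;
        while (out.hasRemaining()) {
            int n = sink.write(out);      // may write only part of the buffer, or nothing
            total += n;
            if (n == 0) {                 // no room: drain the receiving side
                scratch.clear();
                source.read(scratch);
            }
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        Pipe pipe = Pipe.open();
        pipe.sink().configureBlocking(false);
        pipe.source().configureBlocking(false);
        ByteBuffer big = ByteBuffer.allocate(1 << 20);  // 1 MB, larger than the pipe buffer
        int written = writeFully(pipe.sink(), pipe.source(), big, ByteBuffer.allocate(1 << 16));
        System.out.println("wrote " + written + " bytes");
    }
}
```

In a real selector-based server you would not spin like this: on a partial write you register the channel for OP_WRITE and resume writing the leftover buffer when the selector says the socket is writable again.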
