SerialPort reading in Java

Here is the problem in my Java serial communication: my Jennic hardware device is connected over UART and I want to retrieve values from it.
I receive a byte array (which I turn into a string) in the SerialPortEvent.DATA_AVAILABLE case:
case SerialPortEvent.DATA_AVAILABLE:
    try {
        int size;
        while (inputStream.available() != 0) {
            byte[] buff = new byte[100];
            size = inputStream.read(buff);
            inputStream.close();
            String result = new String(buff, 0, size);
            ZPS_tsAplZdpIeeeAddrRsp IeeRsp = new ZPS_tsAplZdpIeeeAddrRsp(result);
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
    break;
First I read the bytes and store them in buff[], then convert them to a String and afterwards to a string array. My problem is that most of the time the output looks right, but every now and then a message breaks apart.
Sample output:
80011634002078445541560000341201004189
80011635002078445541560000341201004189
80011636002078445541560000341201004189
/* here my sequence breaks */
800116370020784455
41560000341201004189 /* these two fragments arrive as two separate arrays, and that is the problem */
80011638002078445541560000341201004189
Is the problem that the input buffer is not being flushed? I have tried inputStream.reset() but it doesn't work. Can anyone give me a suitable suggestion to overcome the problem?
Thanks...

The 'problem' is in your expectations. Nowhere does it say that read() will fill the buffer, or that serial data transfer will preserve your message boundaries. That's up to you. All you get is a byte stream.

You need to read from the port into a buffer, and when that buffer has a whole message, flush that portion of the buffer into your message handling routines. This means you need to define your messages in a manner where each message can independently be identified and isolated.
Reading from a stream will return data when it is available and block when it is not, but it won't guarantee that the data arrives in message-sized pieces; you only get notice that some data is available. You hit a common issue: data became available in the serial port buffer and you started reading before the whole message had arrived. The opposite can also happen: on another run, two or more messages may already be sitting in the serial port buffer before your program reads the "next" message.
Rework your communication protocol to read bytes into a buffer (a class) which holds them until a complete message is available. Then put a readMessage() interface on that buffer which acts like read() except at the message level, buffering until it has a full message.
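A minimal sketch of such a buffer, assuming purely for illustration that each message ends with a newline (substitute whatever terminator or length rule your protocol actually uses):

// Accumulates raw bytes from the serial port and hands out only complete messages.
class MessageBuffer {
    private final StringBuilder pending = new StringBuilder();

    // Append whatever chunk the DATA_AVAILABLE handler just read.
    void append(byte[] data, int length) {
        pending.append(new String(data, 0, length)); // data is plain ASCII here
    }

    // Returns the next complete message, or null if one has not fully arrived yet.
    String readMessage() {
        int end = pending.indexOf("\n"); // assumed delimiter
        if (end < 0) {
            return null;                 // still incomplete, keep buffering
        }
        String message = pending.substring(0, end);
        pending.delete(0, end + 1);
        return message;
    }
}

In the DATA_AVAILABLE handler you would call append(buff, size) after each read and then loop on readMessage() until it returns null, passing each complete message to ZPS_tsAplZdpIeeeAddrRsp.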

In general you cannot expect that a "message" sent from one end of a serial connection is going to be received all as one group. You may get it all at once or in several chunks of varying lengths. It is up to your receiving program to use what it knows about the incoming data to read bytes from the serial port and put them together and realize when a complete message has been received.
Normally devices handle this in one of three ways:
Fixed-length packets - you read until you have X bytes and then process those X bytes.
Packet length is part of the packet header, indicating how many additional bytes to read before the data received so far counts as a complete packet (a reader for this style is sketched after this list).
Packet start/end indicators (STX or SOH to start and ETX to end usually). You treat all data received between a start and end indicator as one message packet.
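As a rough illustration of the second style, here is a sketch that assumes a hypothetical framing where each packet starts with a two-byte length header followed by that many payload bytes:

import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

// Reads exactly one length-prefixed packet from the stream.
static byte[] readPacket(InputStream in) throws IOException {
    DataInputStream din = new DataInputStream(in);
    int length = din.readUnsignedShort(); // blocks until both header bytes have arrived
    byte[] payload = new byte[length];
    din.readFully(payload);               // blocks until the whole payload has arrived
    return payload;
}

readFully() loops internally, so it does not matter in how many chunks the port or socket happens to deliver the bytes.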

Related

Implementing a client for custom TCP protocol

I have to implement a client that communicates with a server using a custom protocol based on an XML format. It is an application-layer protocol carried over TCP. My client sends an XML request message and receives an XML response. Now I am wondering how to make sure that I have received the whole message before I start parsing it.
I see two approaches:
Reading bytes until some magic number that marks the end of the message.
That is the best approach (to my eye), yes?
But it is possible that there is no magic number and the size of the message is not known. What about that case? I have seen clients for other protocols do something like this:
InputStream in = socket.getInputStream();
int offset = 0;
int r;
while (true) {
    r = in.read(buffer, offset, 1024);
    if (r < 1024) break;
    offset += r;
}
// parse buffer
and I am not sure whether that is OK. It assumes that if we read fewer than 1024 bytes then the message is complete. Is that correct?
What is the recommended way to solve this?
In your custom protocol you need to include the steps below:
Client
Calculate the number of 1024-byte chunks of the XML content, i.e. ceiling(content bytes / 1024).
Send the number from step 1 to the server through the socket.
Transmit the content in chunks of a defined buffer size, e.g. 1024 bytes.
Server
Read the number of chunks to receive from the client.
Inside a for loop running as many times as the number read in step 1, receive the content into the predefined buffer size.
This way the server knows how many bytes of content to expect before it starts receiving the XML.
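A rough sketch of both sides using DataOutputStream/DataInputStream; the 1024-byte chunk size is the one from the steps above, and the extra total-length field is an addition made here so the final, possibly partial, chunk can be sized:

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.nio.charset.StandardCharsets;

// Client side (sketch)
byte[] xml = xmlString.getBytes(StandardCharsets.UTF_8);
int chunkSize = 1024;
int chunks = (xml.length + chunkSize - 1) / chunkSize;      // ceiling(content bytes / 1024)
DataOutputStream out = new DataOutputStream(socket.getOutputStream());
out.writeInt(chunks);                                       // step 2: announce the chunk count
out.writeInt(xml.length);                                   // extra field: exact total size
for (int i = 0; i < chunks; i++) {
    int off = i * chunkSize;
    out.write(xml, off, Math.min(chunkSize, xml.length - off));
}
out.flush();

// Server side (sketch)
DataInputStream in = new DataInputStream(socket.getInputStream());
int expectedChunks = in.readInt();
int totalBytes = in.readInt();
byte[] content = new byte[totalBytes];
int read = 0;
for (int i = 0; i < expectedChunks; i++) {
    int want = Math.min(chunkSize, totalBytes - read);
    in.readFully(content, read, want);                      // blocks until this whole chunk arrives
    read += want;
}
String receivedXml = new String(content, StandardCharsets.UTF_8);

readFully() is what spares the server from the "read less than 1024 means done" assumption in the question.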

Many I/O streams from one socket

Can I connect many independent I/O streams between the server's socket and the client's socket in parallel, so that each pair of streams can send different data at the same time?
How could I achieve such a connection in Java without increasing the number of sockets between server and client?
There's always one stream for input and one stream for output, so you can't add more streams.
However, as sje397 commented, you can use the same stream to send "different" data; you just need to come up with a way to distinguish the channels on the receiving side, so it can reconstruct the data properly. This is a protocol design issue.
Edit:
In your example you could have a packet structure with a header that tells the type (or channel) of the packet, length of the data and for the file packets some additional information if needed. Let's assume that the length field is a single byte, so your maximum packet size (for String packets) would be 1 + 1 + 255 = 257 bytes.
When the server reads bytes, it will check the first byte for the type of the packet. After determining that it's a String packet, it will read the length, then read the payload. Then the process repeats itself.
For file data additional header information is most likely needed, otherwise the non-String packets will just be a bunch of bytes.
This means that your protocol will become packet based, so you must write the data one packet at a time. Assuming that a data packet has a max size of 64K, you would then be able to send data in the following way (imagine that it's a network pipe):
Client -> 257(S) -> 64K(D) -> 257(S) -> 64K(D) -> 257(S) -> Server
allowing you to interleave the two different kinds of data in a single network connection.
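A rough sketch of what writing and reading such packets could look like; the type constants and the int-sized length field are illustrative choices, not a fixed API:

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

static final byte TYPE_STRING = 1;  // text/console channel
static final byte TYPE_FILE   = 2;  // file-data channel

// Sender: one type byte, a length field, then the payload.
static void writePacket(DataOutputStream out, byte type, byte[] payload) throws IOException {
    out.writeByte(type);
    out.writeInt(payload.length);   // an int length; the single-byte length above caps packets at 255 bytes
    out.write(payload);
    out.flush();
}

// Receiver: read the header first, then exactly that many payload bytes.
static void readPacket(DataInputStream in) throws IOException {
    byte type = in.readByte();
    int length = in.readInt();
    byte[] payload = new byte[length];
    in.readFully(payload);
    if (type == TYPE_STRING) {
        System.out.println(new String(payload, StandardCharsets.UTF_8));
    } else {
        // hand the bytes to whatever is writing the file to disk
    }
}

Because every packet declares its own type and length, string packets and file packets can be freely interleaved on the single connection, exactly as in the pipe diagram above.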
Assuming that you want fast replies for console input, my suggestion is to use two socket streams - one for file data and another for user input. You can use ObjectInputStream and ObjectOutputStream to simplify your protocol. Just create a class for your protocol, make it serializable, and use it with the socket streams.

Java data transfer through sockets: how to keep sender and receiver in "sync"

I want to implement a simple way to transfer data from one client to another.
The implementation itself is not the question - it already works. My problem is with the measured transfer rate on the sender side.
You all know the progress bars shown while sending files to another client in [your favourite chat/file-transfer program]. You see the bytes transferred, the bytes left to transfer, and maybe an estimated time until the transfer is complete. I am trying to implement the same thing, but it seems I have problems with the buffers in between.
While the send buffer is a nice feature, it distorts the measured transfer rate to the other client enormously. At the start of the transfer I get nearly infinite Bps, and during the transfer the rate drops slowly but never reaches the real transfer rate.
The effect is that the sender visually finishes sending a file while the receiver is still receiving bytes. This completely desyncs sender and receiver, which I need to avoid (for other reasons).
My first attempt at sending a file was just like this (pseudocode):
while (still bytes left to read) {
    sender reads a byte array from the InputStream (e.g. a FileInputStream)
    sender writes and flushes this byte array to the SocketOutputStream
}
This ends in the described situation where sender and receiver are totally desynced.
My next attempt was this:
while (still bytes left to read) {
    sender reads a byte array from the InputStream (e.g. a FileInputStream)
    sender writes and flushes this byte array to the SocketOutputStream
    sender waits for an ACK packet from the receiver
}
So the sender writes the byte array to the wire and waits for a small ACK packet from the receiver. After receiving the ACK, the sender sends the next byte array.
While this works as desired on slow connections (i.e. WAN connections to the internet), it is horribly slow on LAN connections.
I came to the conclusion that I don't like the ACK idea, but I also don't like the desync situation.
How do other clients work around such situations? Is there a way to disable the buffer so that outputStream.write(byte[]) just takes as long as the wire needs to transmit the data, or is there any other mechanism I can use to "see" how many bytes have really been transferred?
Thanks in advance
Martin
Instead of displaying what you have sent, display what the other end has told you it has received. i.e. the other end can send back the number of bytes it has received. This way your progress bar will be slightly pessimistic, rather than really optimistic.
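A small sketch of the receiver's side of that idea; in, fileOut and ackOut (a DataOutputStream pointing back at the sender) are illustrative names:

long received = 0;
int n;
byte[] buf = new byte[8192];
while ((n = in.read(buf)) != -1) {
    fileOut.write(buf, 0, n);
    received += n;
    ackOut.writeLong(received);   // running total; the sender reads this and updates its progress bar
    ackOut.flush();
}

Unlike the ACK-per-chunk scheme from the question, the sender never waits for these reports; it only uses them for display, so throughput is not affected.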
Your problem is that you are trying to send the entire array you managed to read as a single object. On the sending side, you get a big array, call write and then flush, which sends the entire array and waits for it to be flushed (with no control for you in between, so your sending app has no way to display progress). On the receiving end you get the same problem: you probably just call read, which reads the entire array in a single operation.
Chances are, you are able to read a lot of data before you try to send it as a big array.
You can get some control if you break the array into reasonable "chunks" (maybe a simple max(1024, arraysize/100)). Then send the chunks one by one. The pseudocode would look like this:
chunkSize = max(1024, arraysize / 100)
while (still bytes left to read) {
    sender reads chunkSize bytes into a byte array from the InputStream
    sender writes and flushes this byte array to the SocketOutputStream
    reportProgress()
}
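In Java this could look roughly like the following; the chunk size rule is the one suggested above and reportProgress() is a hypothetical hook:

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

static void sendWithProgress(InputStream file, OutputStream socketOut, long totalSize) throws IOException {
    byte[] chunk = new byte[(int) Math.max(1024, totalSize / 100)]; // roughly 1% of the file per write
    long sent = 0;
    int n;
    while ((n = file.read(chunk)) != -1) {
        socketOut.write(chunk, 0, n);
        socketOut.flush();
        sent += n;
        reportProgress(sent, totalSize); // hypothetical callback that updates the progress bar
    }
}

Note that this still counts bytes handed to the socket's send buffer, not bytes the receiver has consumed, so for an honest progress bar it is best combined with the receiver-reported count described above.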

Java NIO SocketChannel writing problem

I am using Java NIO's SocketChannel to write: int n = socketChannel.write(byteBuffer); Most of the time the data is sent in one or two parts, i.e. if the data could not be sent in one attempt, the remaining data is retried.
The issue is that sometimes the data is not sent completely in one attempt, and when the rest is retried, it happens that even after several attempts not a single byte is written to the channel; only after some time is the remaining data finally sent. The data is not large, approximately 2000 characters.
What could be the cause of such behaviour? Could external factors such as RAM, the OS, etc. cause the hindrance?
Please help me solve this issue. If any other information is required please let me know.
Thanks
EDIT:
Is there a way in NIO's SocketChannel to check whether the channel can accept data before actually writing? The intention is: after attempting to write the complete data, if some of it has not been written to the channel, can we check whether the SocketChannel can take any more data before writing the remainder? That way, instead of attempting multiple times fruitlessly, the thread responsible for writing could wait or do something else.
TCP/IP is a streaming protocol. There is no guarantee anywhere at any level that the data you send won't be broken up into single-byte segments, or anything in between that and a single segment as you wrote it.
Your expectations are misplaced.
Re your EDIT, write() will return zero when the socket send buffer fills. When you get that, register the channel for OP_WRITE and stop the write loop. When you get OP_WRITE, deregister it (very important) and continue writing. If write() returns zero again, repeat.
With TCP, we can write to the sender-side socket channel only until the socket buffers fill up, not after that. So if the receiver is slow in consuming the data, the sender-side socket buffers fill up and, as you mentioned, write() might return zero.
In any case, when there is data to be sent on the sender side, register the SocketChannel with the selector with OP_WRITE as the interest operation, and when the selector returns the SelectionKey, check key.isWritable() and try writing on that channel. As mentioned by Nilesh above, don't forget to unregister the OP_WRITE bit with the selector after writing the complete data.
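A condensed sketch of that pattern; selector bookkeeping is simplified and pendingBuffer is an illustrative name for whatever data is still waiting to go out:

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.SocketChannel;

// Try to write; if the send buffer backs up, ask the selector to wake us when it drains.
static void writeOrRegister(SelectionKey key, SocketChannel sc, ByteBuffer bb) throws IOException {
    while (bb.hasRemaining()) {
        if (sc.write(bb) == 0) {
            // socket send buffer is full: express interest in OP_WRITE and stop for now
            key.interestOps(key.interestOps() | SelectionKey.OP_WRITE);
            return;
        }
    }
    // everything written: make sure OP_WRITE is cleared so the selector does not spin
    key.interestOps(key.interestOps() & ~SelectionKey.OP_WRITE);
}

// In the selector loop, when the key becomes writable:
if (key.isWritable()) {
    key.interestOps(key.interestOps() & ~SelectionKey.OP_WRITE); // deregister first
    writeOrRegister(key, (SocketChannel) key.channel(), pendingBuffer);
}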

Java NIO: Sending large messages quickly leads to truncated packets and data loss

I've got this nasty problem where sending multiple, large messages in quick succession from a Java (NIO) server (running Linux) to a client will lead to truncated packets. The messages have to be large and sent very rapidly for the problem to occur. Here's basically what my code is doing (not actual code, but more-or-less what's happening):
//-- setup stuff: --
Charset charset = Charset.forName("UTF-8");
CharsetEncoder encoder = charset.newEncoder();
String msg = "A very long message (let's say 20KB)...";
//-- inside loop to handle incoming connections: --
ServerSocketChannel ssc = (ServerSocketChannel)key.channel();
SocketChannel sc = ssc.accept();
sc.configureBlocking(false);
sc.socket().setTcpNoDelay(true);
sc.socket().setSendBufferSize(1024*1024);
//-- later, actual sending of messages: --
for (int n = 0; n < 20; n++) {
    ByteBuffer bb = encoder.encode(CharBuffer.wrap(msg + '\0'));
    sc.write(bb);
    bb.rewind();
}
So, if the packets are long enough and sent as quickly as possible (i.e. in a loop like this with no delay), then on the other end it often comes out something like this:
[COMPLETE PACKET 1]
[COMPLETE PACKET 2]
[COMPLETE PACKET 3]
[START OF PACKET 4][SOME OR ALL OF PACKET 5]
There is data loss, and the packets start to run together, such that the start of packet 5 (in this example) arrives in the same message as the start of packet 4. It's not just truncating, it's running the messages together.
I imagine that this is related to the TCP buffer or "window size", or that the server here is just providing data faster than the OS, or network adapter, or something, can handle it. But how do I check for, and prevent it from happening? If I reduce the length of message per use of sc.write(), but then increase the repetitions, I'll still run into the same problem. It seems to simply be an issue with the amount of data in a short amount of time. I don't see that sc.write() is throwing any exceptions either (I know that in my example above I'm not checking, but have in my tests).
I'd be happy if I could programmatically check if it is not ready for more data yet, and put in a delay, and wait until it is ready. I'm also not sure if "sc.socket().setSendBufferSize(1024*1024);" has any effect, or if I'd need to adjust this on the Linux side of things. Is there a way to really "flush" out a SocketChannel? As a lame workaround, I could try to explicitly force a complete send of anything that is buffered any time I'm trying to send a message of over 10KB, for example (which is not that often in my application). But I don't know of any way to force a send of the buffer (or wait until it has sent). Thanks for any help!
There are many reasons why sc.write() would not send some or all of the data. You have to check the return value and/or the number of bytes remaining in the buffer.
for (int n = 0; n < 20; n++) {
    ByteBuffer bb = encoder.encode(CharBuffer.wrap(msg + '\0'));
    if (sc.write(bb) > 0 && bb.remaining() == 0) {
        // all data sent
    } else {
        // could not send all data
    }
    bb.rewind();
}
You are not checking the return value of:
sc.write(bb);
This returns the number of bytes written, which might be less than the data available in your buffer. Because of how NIO works, you can just call remaining() on your ByteBuffer to see whether there are any bytes left.
I haven't done any NIO programming, but according to the Javadocs, sc.write() will not write the entire ByteBuffer if the SocketChannel is in non-blocking mode (as yours is) and the socket's output buffer is full.
Because you are writing so quickly, it is very likely that you are flooding your connection and your network or receiver cannot keep up.
I'd be happy if I could programmatically check if it is not ready for more data yet
You need to check the return value of sc.write() to find out whether your output buffer is full.
Don't assume you have any control over what data ends up in which packet.
More here: What's the best way to monitor a socket for new data and then process that data?
You are using the non-blocking mode: sc.configureBlocking(false);
Set blocking to true and your code should work as it is. Suggestions made by others here to check send count and loop will also work.
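For completeness, a defensive loop that guarantees the whole buffer goes out; in blocking mode write() normally sends everything in one call anyway, so the loop rarely iterates more than once:

ByteBuffer bb = encoder.encode(CharBuffer.wrap(msg + '\0'));
while (bb.hasRemaining()) {
    sc.write(bb); // in blocking mode this blocks until it makes progress
}

In non-blocking mode this loop would spin while the send buffer is full, so there you would fall back to the OP_WRITE/selector approach described in the previous question.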
