Can I connect multiple independent I/O stream pairs between a server's socket and a client's socket in parallel, so that each pair of streams can send different data at the same time?
How could I achieve such a connection in Java without increasing the number of sockets between server and client?
There's always one stream for input and one stream for output per socket, so you can't add more streams.
However, as sje397 commented, you can use the same stream to send "different" data; you just need a way to distinguish the channels on the receiving side so it can reconstruct the data properly. This is a protocol design issue.
Edit:
In your example you could have a packet structure with a header that tells the type (or channel) of the packet, length of the data and for the file packets some additional information if needed. Let's assume that the length field is a single byte, so your maximum packet size (for String packets) would be 1 + 1 + 255 = 257 bytes.
When the server reads bytes, it will check the first byte for the type of the packet. After determining that it's a String packet, it will read the length, then read the payload. Then the process repeats itself.
For file data, additional header information is most likely needed; otherwise the non-String packets will just be a bunch of bytes.
This means that your protocol will become packet based, so you must write the data one packet at a time. Assuming that a data packet has a max size of 64K, you would then be able to send data in the following way (imagine that it's a network pipe):
Client -> 257(S) -> 64K(D) -> 257(S) -> 64K(D) -> 257(S) -> Server
allowing you to interleave the two different kinds of data in a single network connection.
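As a minimal sketch of this framing (the type tags, class name, and stream setup here are assumptions for illustration, not part of any existing API), the two packet kinds could be written and read like this:

import java.io.*;

class PacketIO {
    // Assumed type tags for the two channels
    static final byte TYPE_STRING = 1;
    static final byte TYPE_DATA = 2;

    // String packet: type (1 byte) + length (1 byte) + payload (max 255 bytes)
    static void writeString(DataOutputStream out, String s) throws IOException {
        byte[] payload = s.getBytes("UTF-8");
        if (payload.length > 255) throw new IllegalArgumentException("payload too long");
        out.writeByte(TYPE_STRING);
        out.writeByte(payload.length);
        out.write(payload);
    }

    // Data packet: type (1 byte) + length (2 bytes, so up to 65535 here) + payload
    static void writeData(DataOutputStream out, byte[] chunk) throws IOException {
        out.writeByte(TYPE_DATA);
        out.writeShort(chunk.length);
        out.write(chunk);
    }

    // Read one packet and dispatch on the type byte, as described above
    static void readPacket(DataInputStream in) throws IOException {
        int type = in.readByte();
        if (type == TYPE_STRING) {
            byte[] payload = new byte[in.readUnsignedByte()];
            in.readFully(payload);              // blocks until the whole payload arrives
            System.out.println(new String(payload, "UTF-8"));
        } else if (type == TYPE_DATA) {
            byte[] payload = new byte[in.readUnsignedShort()];
            in.readFully(payload);              // hand this to the file-handling code
        }
    }
}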
Assuming that you want fast replies for console input, my suggestion is to use two socket connections: one for file data and another for user input. You can use ObjectInputStream and ObjectOutputStream to simplify your protocol. Just create a class for your protocol, make it Serializable, and use it with the socket streams.
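A rough sketch of that idea, assuming a hypothetical Message class of your own design:

import java.io.*;

class Message implements Serializable {
    private static final long serialVersionUID = 1L;
    final String channel;     // e.g. "console" or "file": distinguishes the data kinds
    final byte[] payload;
    Message(String channel, byte[] payload) {
        this.channel = channel;
        this.payload = payload;
    }
}

// Sender:   ObjectOutputStream out = new ObjectOutputStream(socket.getOutputStream());
//           out.writeObject(new Message("console", line.getBytes()));
//           out.flush();
// Receiver: ObjectInputStream in = new ObjectInputStream(socket.getInputStream());
//           Message m = (Message) in.readObject();   // then dispatch on m.channel

Serialization handles the framing for you; the cost is a Java-only wire format and some per-object overhead.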
Related
I have to implement a client that communicates with a server using a custom protocol based on XML, an application-layer protocol on top of TCP. My client sends a request XML message and receives a response, also an XML message. Now I am considering how to ensure that I have received the whole message before I start parsing it.
I see two approaches:
Receiving bytes until some magic number that marks the end of the message.
Is that the best approach (to my eye it is)?
But it is possible that there is no magic number and the size of the message is not known. What about that case? I have seen clients for other protocols do something like this:
InputStream in = socket.getInputStream();
byte[] buffer = new byte[64 * 1024];
int offset = 0;
while (true) {
    int r = in.read(buffer, offset, 1024);
    if (r < 1024) break;    // assumes a short read means the message is complete
    offset += r;
}
// parse buffer
I am not sure whether this is OK. It assumes that if we read fewer than 1024 bytes, the message is complete. Is that correct?
What is the recommended way to solve this?
In your custom protocol you need to include the steps below :
Client
Calculate the number of 1024-byte chunks of the XML content, i.e. ceiling(XML content bytes / 1024)
Send the number from step 1 to the server through the socket
Transmit the contents in chunks of a defined buffer size, e.g. 1024 bytes
Server
Read the number of chunks to receive from the client
Inside a for loop of size equal to the number read at step 1, receive the contents into the predefined buffer size.
This way the server knows how many bytes the actual content is before it starts receiving the XML contents.
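Here is one way the exchange could look. This sketch sends the total byte count (from which the chunk count follows) rather than the chunk count itself, since that makes the size of the final partial chunk unambiguous; the class and method names are illustrative:

import java.io.*;

class XmlTransfer {
    static final int CHUNK = 1024;

    static void send(DataOutputStream out, byte[] xml) throws IOException {
        int chunks = (xml.length + CHUNK - 1) / CHUNK;   // ceiling(bytes / 1024), step 1
        out.writeInt(xml.length);                        // step 2: tell the server what to expect
        for (int i = 0; i < chunks; i++) {               // step 3: transmit in CHUNK-sized pieces
            int off = i * CHUNK;
            out.write(xml, off, Math.min(CHUNK, xml.length - off));
        }
        out.flush();
    }

    static byte[] receive(DataInputStream in) throws IOException {
        byte[] xml = new byte[in.readInt()];             // server step 1
        in.readFully(xml);                               // server step 2: loops until all bytes arrive
        return xml;
    }
}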
I'm using the Java AsynchronousSocketChannel from NIO.2 in my project, running Oracle JDK 1.7.0_80 on Ubuntu 14.04.
My project is a server that processes binary data.
The code calls the read operation recursively in the completed method of the CompletionHandler anonymous class, like this:
private final CompletionHandler<Integer, AsynchronousSocketChannel> readHandler =
        new CompletionHandler<Integer, AsynchronousSocketChannel>() {
    @Override
    public void completed(Integer result, AsynchronousSocketChannel attachment) {
        if (result < 0) {            // -1 means the peer closed the connection
            try {
                attachment.close();
            } catch (IOException e) {
                // nothing useful to do here
            }
            return;
        }
        attachment.read(swap, attachment, this);   // issue the next read
    }

    @Override
    public void failed(Throwable exc, AsynchronousSocketChannel attachment) {
        // error handling omitted in the question
    }
};
Where the variable swap is a ByteBuffer instance.
Apparently everything works well. But there is a packet whose total size is 3832 bytes; when the server receives this packet whole, without segmentation, there is no problem. However, sometimes the packet is divided into two or more parts (TCP segments), e.g. a first segment of 2896 bytes and a second of 936 bytes.
The later segments don't have a header, and this breaks my algorithm.
I would like to know: is there a way to make the API call the completed method only after the whole packet has been read?
I have increased SO_RCVBUF to 64K, but it doesn't help.
I would like to know: is there a way to make the API call the completed method only after the whole packet has been read?
No, there is no way to do this.
The TCP protocol can break up your stream of bytes in packets of arbitrary size. The application-level protocol that you use on top of TCP must not rely on messages always being sent completely in one TCP packet.
You must design your application-level protocol in such a way that it can deal with messages arriving broken up in packets of arbitrary size.
One common way to do this is to prefix application-level messages by a length field. For example, an application-level message consists of a field of 4 bytes that contain the length of the rest of the message. When you receive a message, you first receive the length, and then you should keep on receiving until you have received that many bytes, which you can then assemble into an application-level message.
The AsynchronousSocketChannel API cannot re-assemble application-level messages automatically for you, because it does not know anything about your application-level protocol.
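A sketch of that reassembly on top of the handler from the question; handleMessage is a placeholder for your processing code, and swap is the accumulating ByteBuffer. Call this from completed() before issuing the next read:

private void drainMessages(ByteBuffer swap) {
    swap.flip();                              // switch to reading what has arrived so far
    while (swap.remaining() >= 4) {
        swap.mark();
        int length = swap.getInt();           // 4-byte length prefix
        if (swap.remaining() < length) {
            swap.reset();                     // body incomplete: put the prefix back and wait
            break;
        }
        byte[] message = new byte[length];
        swap.get(message);
        handleMessage(message);               // one complete application-level message
    }
    swap.compact();                           // keep any leftover bytes for the next read
}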
I've read some conflicting things about how UDP/Java datagram channels operate. I need to know a few things:
Does UDP have an inherent way to tell whether a received packet is whole and in order before .read(ByteBuffer b) is called? I've read at least one article saying that UDP inherently discards incomplete or out-of-order data.
Does DatagramChannel treat one send(buffer...) as one datagram packet? What if it's a partial send?
Can a .read(...) read more than one packet of data, resulting in data being discarded if the buffer given as the argument was only sized to hold one packet?
Does UDP have an [inherent] way to tell whether a received packet is whole and in order before .read(ByteBuffer b) is called? I've read at least one article saying that UDP inherently discards incomplete or out-of-order data.
Neither statement is correct. It would be more accurate to say that IP has a way to tell if a datagram's fragments have all arrived, and then and only then does it even present it to UDP. Reassembly is the responsibility of the IP layer, not UDP. If the fragments don't arrive, UDP never even sees it. If they expire before reassembly is complete, IP throws them away.
Before/after read() is called is irrelevant.
Does DatagramChannel treat one send(buffer...) as one datagram packet?
Yes.
what if it's a partial send?
There is no such thing in UDP.
Can a read(.. ) read more than one packet of data
A UDP read will return exactly and only one datagram, or fail.
resulting in data being discarded if the buffer given as the argument was only sized to hold one packet?
Can't happen.
Re your comment below, which is about a completely different question: the usual technique for detecting truncation is to use a buffer one byte larger than the largest expected datagram. Then, if you ever receive a datagram of that size, (i) it's an application protocol error and (ii) it may have been truncated as well.
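A sketch of that check with DatagramChannel; MAX_DATAGRAM and the class name are assumptions for illustration:

import java.net.SocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.DatagramChannel;

class UdpReceiver {
    static final int MAX_DATAGRAM = 512;      // assumed largest expected datagram

    static void receiveOne(DatagramChannel channel) throws java.io.IOException {
        ByteBuffer buf = ByteBuffer.allocate(MAX_DATAGRAM + 1);   // one byte larger
        SocketAddress sender = channel.receive(buf);   // one receive() == one datagram
        if (sender == null) return;                    // nothing pending (non-blocking mode)
        buf.flip();
        if (buf.remaining() > MAX_DATAGRAM) {
            // application protocol error, and the datagram may have been truncated too
            return;
        }
        // process buf.remaining() bytes of exactly one datagram
    }
}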
Here is the problem in my Java serial communication: my Jennic hardware device is connected via UART, and I want to retrieve values from the device.
I am receiving a byte array in SerialPortEvent.DATA_AVAILABLE:
case SerialPortEvent.DATA_AVAILABLE:
    try {
        int size;
        while (inputStream.available() != 0) {
            byte[] buff = new byte[100];
            size = inputStream.read(buff);
            inputStream.close();
            String result = new String(buff, 0, size);
            ZPS_tsAplZdpIeeeAddrRsp IeeRsp = new ZPS_tsAplZdpIeeeAddrRsp(result);
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
    break;
First I read the bytes and store them in buff[], then convert them into a String (and later into a string array). My problem is that the output is usually correct, but a few times the sequence breaks.
Sample output:
80011634002078445541560000341201004189
80011635002078445541560000341201004189
80011636002078445541560000341201004189
/* here my sequence breaks */
800116370020784455
41560000341201004189 /* these two fragments produced two separate arrays, and that is the problem */
80011638002078445541560000341201004189
Is the problem with flushing the input buffer? I have tried inputStream.reset() but it doesn't work. Can anyone give me a suitable suggestion to overcome the problem?
thanks...
The 'problem' is in your expectations. Nowhere does it say that read() will fill the buffer, or that serial data transfer will preserve your message boundaries. That's up to you. All you get is a byte stream.
You need to read from the port into a buffer, and when that buffer has a whole message, flush that portion of the buffer into your message handling routines. This means you need to define your messages in a manner where each message can independently be identified and isolated.
Reading a stream will return data when it is available and block when it isn't; however, reading from a stream won't guarantee that you get your data in message-sized pieces. You only get notice that there is data to be read. You have noticed the common case where data becomes available in the serial port buffer and you start reading it before the whole message has arrived. Remember that the opposite can also occur: on another run, two or more messages might be buffered in the serial port before your program is ready to read the "next" message.
Rework your communication protocol to read bytes into a buffer (a class) which holds bytes until complete messages are available. Then put an interface on that buffer, readMessage(), which acts like read() except at the message level (buffering until it has a full message).
In general you cannot expect that a "message" sent from one end of a serial connection is going to be received all as one group. You may get it all at once or in several chunks of varying lengths. It is up to your receiving program to use what it knows about the incoming data to read bytes from the serial port and put them together and realize when a complete message has been received.
Normally devices handle this in one of three ways:
Fixed-length packets: you read until you have X bytes and then process those X bytes.
Packet length is part of the packet header indicating how many additional bytes to read before considering the data received so far as a complete packet.
Packet start/end indicators (STX or SOH to start and ETX to end usually). You treat all data received between a start and end indicator as one message packet.
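To make the buffering idea concrete, here is a minimal sketch using the fixed-length option; the 38-character message length is only an assumption read off the sample output above, and the class is illustrative:

import java.io.ByteArrayOutputStream;

class MessageBuffer {
    static final int MESSAGE_LENGTH = 38;     // assumed from the sample output
    private final ByteArrayOutputStream pending = new ByteArrayOutputStream();

    // Feed in whatever read() returned, however the serial port chunked it
    void append(byte[] data, int offset, int length) {
        pending.write(data, offset, length);
    }

    // Returns one complete message, or null until enough bytes have arrived
    String readMessage() {
        byte[] all = pending.toByteArray();
        if (all.length < MESSAGE_LENGTH) return null;
        String message = new String(all, 0, MESSAGE_LENGTH);
        pending.reset();
        pending.write(all, MESSAGE_LENGTH, all.length - MESSAGE_LENGTH);   // keep the leftover
        return message;
    }
}

In the DATA_AVAILABLE handler you would call append(buff, 0, size) after each read and then loop on readMessage() until it returns null.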
I am using Java NIO's SocketChannel to write: int n = socketChannel.write(byteBuffer); Most of the time the data is sent in one or two parts, i.e. if the data could not be sent in one attempt, the remainder is retried.
The issue is that sometimes the data is not sent completely in one attempt, and when the rest is retried it can happen that, even after several attempts, not a single byte is written to the channel; only after some time does the remaining data get sent. The data is not large, approximately 2000 characters.
What could be the cause of such behaviour? Could external factors such as RAM, the OS, etc. cause the hindrance?
Please help me solve this issue. If any other information is required please let me know.
Thanks
EDIT:
Is there a way in NIO SocketChannel to check whether the channel can accept data before actually writing? The intention is: after attempting to write the complete data, if some of it hasn't been written to the channel, we could check whether the SocketChannel can take more before writing the remainder; instead of attempting to write fruitlessly multiple times, the thread responsible for writing could wait or do something else.
TCP/IP is a streaming protocol. There is no guarantee anywhere at any level that the data you send won't be broken up into single-byte segments, or anything in between that and a single segment as you wrote it.
Your expectations are misplaced.
Re your EDIT, write() will return zero when the socket send buffer fills. When you get that, register the channel for OP_WRITE and stop the write loop. When you get OP_WRITE, deregister it (very important) and continue writing. If write() returns zero again, repeat.
When using TCP, we can write to the sender-side socket channel only until the socket buffers fill up, not after that. So if the receiver is slow in consuming the data, the sender-side socket buffers fill up and, as you mentioned, write() may return zero.
In any case, when there is data to be sent on the sender side, register the SocketChannel with the selector with OP_WRITE as the interested operation, and when the selector returns the SelectionKey, check key.isWritable() and try writing on that channel. As mentioned by Nilesh above, don't forget to deregister the OP_WRITE bit with the selector after writing the complete data.
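A sketch of that OP_WRITE dance; key is the channel's SelectionKey, pending holds the unwritten bytes, and the class name is illustrative:

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.SocketChannel;

class Writer {
    static void writeOrRegister(SelectionKey key, ByteBuffer pending) throws IOException {
        SocketChannel channel = (SocketChannel) key.channel();
        channel.write(pending);               // may write zero bytes if the send buffer is full
        if (pending.hasRemaining()) {
            // not all written: ask the selector to tell us when the channel is writable
            key.interestOps(key.interestOps() | SelectionKey.OP_WRITE);
        } else {
            // everything written: deregister OP_WRITE so the selector doesn't spin
            key.interestOps(key.interestOps() & ~SelectionKey.OP_WRITE);
        }
    }
}

// In the selector loop: if (key.isWritable()) Writer.writeOrRegister(key, pending);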