I am building a non-blocking server using Java's NIO package, and I have a couple of questions about validating received data.
I have noticed that when I call read() on a socket channel, it will attempt to fill the byte buffer it is reading into (as per the documentation). When I send 10 bytes to the server from the client, the server reads those ten bytes into the byte buffer, the rest of the bytes in the byte buffer stay at zero, and the number returned from the read operation is the size of my byte buffer, even though the client only wrote 10 bytes.
What I am trying to figure out is whether there is a way to get just the number of bytes the client sent to the server when the server reads from a socket channel (in the above case, 10 instead of 1024).
If that doesn't work, I know I can separate the actual received data from this 'excess' data stored in the byte buffer by using delimiters in conjunction with my 'instruction set headers' and whatnot, but it seems like this should exist, so I have to wonder if I am just missing something obvious, or if there is some low-level reason why this can't be done.
Thanks :)
You probably forgot to call the notorious flip() on your buffer.
buffer.clear();                       // prepare the buffer for a fresh read
int bytesRead = channel.read(buffer); // number of bytes actually read, or -1 at end of stream
buffer.flip();                        // limit = bytes read, position = 0
// now you can read exactly bytesRead bytes from the buffer
while (buffer.hasRemaining()) {
    byte b = buffer.get();
    // ... process b
}
I need to change my signature to nio.sucks
I have recently tried my hand at socket programming by creating an SSL socket used to stream data live from a server, with no success of course. When I analyze the data packets in Wireshark, I see that the size of the request data has been magnified n times in the packet, and hence the request reaches the server in fragments, whereas the actual JSON request is a handful of bytes and should reach the server in a single shot.
Any help would be appreciated.
Wrap a BufferedOutputStream around the SSLSocket's output stream, and don't flush it until you really have to, which is usually not until you're about to read the reply. Otherwise you can be sending one byte at a time to the SSLSocket, which becomes one SSL message per byte and can expand the data by more than 40x.
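Roughly like this, as a sketch (sslSocket and jsonBytes are placeholder names for your own connection and request payload):

import java.io.BufferedOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import javax.net.ssl.SSLSocket;

class BufferedSslWrite {
    // Sends one request over an already-connected SSLSocket, flushing only once
    // so the request leaves as a few SSL records instead of one record per write.
    static void sendRequest(SSLSocket sslSocket, byte[] jsonBytes) throws IOException {
        OutputStream out = new BufferedOutputStream(sslSocket.getOutputStream(), 8192);
        out.write(jsonBytes); // buffered in memory, nothing on the wire yet
        out.flush();          // flush once, just before you read the reply
    }
}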
However:
the request reaches the server in fragments
That can happen any time. The server has to be able to cope with receiving data as badly fragmented as one byte at a time.
whereas the actual JSON request is a handful of bytes and should reach the server in a single shot.
There is no such guarantee in TCP.
I wrote a Java client which takes the buffer_size for the byte array as a command-line argument and declares a byte array into which a file will be read and then sent to the server in chunks. The client sends the buffer_size to the Java server before starting to read the file, so that the server can also define a byte array to receive the file chunks. So this mechanism looks something like....
Client Side:
while ((count = fileReader.read(bytes)) > 0) {
    toServer.write(bytes, 0, count);
}
Server Side:
while ((count = fromClient.read(bytes)) > 0) {
    // process the received file content
}
This works for me, but the way the server reads the chunks varies in a seemingly random manner. That is, if the file to be read by the client is 3000 bytes and the buffer_size is 8192 bytes (the server will also have a buffer_size of 8192), the server sometimes reads the whole chunk from the client with a single read() operation, and sometimes the chunk is divided into two parts and read as, for example, two pieces of 1500 bytes, taking up two read() operations. I don't understand what exactly happens here. Can we implement this in such a way that the server doesn't divide the chunk being sent by the client?
When I test by running both client and server on the same machine, the server reads the whole content sent by the client in a single read() operation per write(). The behaviour only changes when the client and server are on different machines.
1500 is probably the MTU of the link. Data is split into packets of that size as it is transferred. When implementing a server, you must simply be prepared for the data to be diced into arbitrarily sized packets.
The abstraction implemented by TCP is a stream of bytes, not a series of write() calls. To force a specific packet size, you would have to use UDP (and deal with packet loss, as UDP does not guarantee delivery).
To fix this, simply read in a loop until the whole buffer has been read, or the length of the file has been reached. The client can simply send the length of the file at the beginning of the protocol. There is no need for the client and server to use the same buffer size.
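A rough sketch of that approach on the server side, assuming the client writes the file length with writeLong() before sending the chunks (fromClient is a placeholder for the socket's input stream):

import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

class ChunkReceiver {
    // Reads exactly the advertised number of bytes, however TCP fragments them.
    static byte[] receiveFile(InputStream fromClient) throws IOException {
        DataInputStream in = new DataInputStream(fromClient);
        long fileLength = in.readLong();          // length sent up front by the client
        byte[] data = new byte[(int) fileLength]; // sketch only: assumes the file fits in memory
        int total = 0;
        while (total < data.length) {
            int read = in.read(data, total, data.length - total);
            if (read == -1) {
                throw new IOException("Connection closed before the whole file arrived");
            }
            total += read;                        // each read() may return only part of the file
        }
        return data;
    }
}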
I have a question regarding Java's SocketChannel.
Say I have a socket channel opened in blocking mode; after calling the write(ByteBuffer) method, I get an integer describing how many bytes were written. The javadoc says:
"Returns: The number of bytes written, possibly zero"
But what exactly does this mean? Does it mean that the bytes have really been delivered to the client (so that the sender received a TCP ACK making it evident how many bytes were received by the server), or does it mean that the bytes have been handed to the TCP stack (so some bytes might still be waiting, e.g. in the network card's buffer)?
does this mean that the number of bytes really has been delivered to the client
No. It simply means the number of bytes delivered to the local network stack.
The only way to be sure that data has been delivered to the remote application is if you receive an application level acknowledgment for the data.
The paragraph that confuses you is for non-blocking I/O.
For non-blocking operation, your initial call may indeed not write anything at the time of the call.
"Unless otherwise specified, a write operation will return only after writing all of the r requested bytes. Some types of channels, depending upon their state, may write only some of the bytes or possibly none at all. A socket channel in non-blocking mode, for example, cannot write any more bytes than are free in the socket's output buffer."
As you can see, for blocking I/O all bytes will be written (or an exception thrown in the middle of the send).
Note that there is no guarantee of when the bytes will appear on the receiving side; that is entirely up to the low-level socket protocol.
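For the non-blocking case, handling a partial write might look roughly like this sketch (a busy loop for brevity; a real server would register OP_WRITE with a Selector instead of spinning):

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

class NonBlockingWrite {
    // Keeps calling write() until the buffer is drained; in non-blocking mode
    // each call may write anything from zero bytes up to buffer.remaining().
    static void writeFully(SocketChannel channel, ByteBuffer buffer) throws IOException {
        while (buffer.hasRemaining()) {
            channel.write(buffer);
            // In production, wait for OP_WRITE readiness here instead of spinning.
        }
    }
}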
I'm using this kind of code for my TCP/IP connection:
sock = new Socket(host, port);
sock.setKeepAlive(true);
din = new DataInputStream(sock.getInputStream());
dout = new DataOutputStream(sock.getOutputStream());
Then, in a separate thread, I check din.available() to see if there are incoming packets to read.
The problem is that if a packet bigger than 2048 bytes arrives, din.available() returns 2048 anyway, just as if there were a 2048-byte internal buffer. I can't read those 2048 bytes when I know it's not the full packet my application is waiting for. If I don't read it, however, everything gets stuck at 2048 bytes and I never receive more.
Can I enlarge the buffer size of DataInputStream somehow? The socket receive buffer is 16384, as returned by sock.getReceiveBufferSize(), so it's not the socket limiting me to 2048 bytes.
If there is no way to increase the DataInputStream buffer size, I guess the only way is to declare my own buffer and read everything from the DataInputStream into that buffer?
Regards
I'm going to make an assumption about what you're calling a "packet": that your "packet" is some unit of work being passed to your server. TCP data on an Ethernet link is carried in packets of roughly 1500 bytes (the MTU), no matter what size writes the peer is performing.
So, you cannot expect to atomically read a full unit of work every time. It just won't happen. What you are writing will need to identify how large it is. This can be done by passing a value up front that tells the server how much data it should expect.
Given that, an approach would be to have a thread do a blocking read on din. Just process data as it becomes available until you have a complete packet. Then pass the packet to another thread to process the packet itself. (See ArrayBlockingQueue.)
Your socket reader thread will process data at whatever rate and granularity it arrives. The packet processor thread always works in terms of complete packets.
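A sketch of that reader thread, assuming a hypothetical framing where each unit of work is prefixed by its length as an int (adapt this to whatever your protocol actually sends):

import java.io.DataInputStream;
import java.io.IOException;
import java.net.Socket;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

class SocketReader implements Runnable {
    private final DataInputStream in;
    private final BlockingQueue<byte[]> packets = new ArrayBlockingQueue<>(64);

    SocketReader(Socket socket) throws IOException {
        this.in = new DataInputStream(socket.getInputStream());
    }

    public void run() {
        try {
            while (true) {
                int length = in.readInt();  // blocks until the 4-byte length arrives
                byte[] packet = new byte[length];
                in.readFully(packet);       // blocks until the complete packet arrives
                packets.put(packet);        // hand the complete packet to the processor thread
            }
        } catch (IOException | InterruptedException e) {
            Thread.currentThread().interrupt(); // connection closed or shutdown requested
        }
    }

    BlockingQueue<byte[]> packetQueue() {
        return packets;
    }
}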
Wrap your data input stream around a larger buffered input stream:
DataInputStream din = new DataInputStream(
        new BufferedInputStream(sock.getInputStream(), 4096));
But I don't think it's going to help you. You have to consume the input from the socket, otherwise the sender will get stuck.
You should probably invest more time in working out a better communication protocol.
If you dig into the source of DataInputStream, available() simply delegates to the stream it is reading from, so the 2048 is coming from the socket input stream (which is implemented in native code), not from the DataInputStream itself.
Also be aware of what the API documentation states for an InputStream:
"Note that while some implementations of InputStream will return the total number of bytes in the stream, many will not. It is never correct to use the return value of this method to allocate a buffer intended to hold all data in this stream."
So just because you are receiving a value of 2048 does not mean there is no more data available; it is simply the amount that can be read without blocking. Also note that while BufferedInputStream, as suggested by Alexander, is an option, it makes no guarantee that the buffer will always be filled (in fact, if you look at the source, it only attempts to fill the buffer when one of the read calls is made).
So if you want to make sure you always receive "full packets", you are likely better off creating your own input stream wrapper with a specialty method, byte[] readPacket(), that blocks until it can fill its own buffer with data read from the underlying socket stream.
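Such a wrapper could look something like this sketch, again assuming a hypothetical length-prefixed framing for the packets:

import java.io.DataInputStream;
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

class PacketInputStream extends FilterInputStream {
    private final DataInputStream data;

    PacketInputStream(InputStream in) {
        super(in);
        this.data = new DataInputStream(in);
    }

    // Blocks until one complete packet has been read from the underlying socket stream.
    byte[] readPacket() throws IOException {
        int length = data.readInt();   // assumed 4-byte length prefix
        byte[] packet = new byte[length];
        data.readFully(packet);        // loops internally until all bytes have arrived
        return packet;
    }
}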
This is not how you use InputStreams. You never want to use the available() method; it's pretty much useless. If you need to read "packets" of data, then you need to design that into your protocol. One easy way to do that is to send the length of the packet first, then send the packet. The receiver reads the length of the packet, then reads exactly that many bytes from the stream.
You can't. DataInputStream doesn't have an internal buffer. You just need to block in a read loop.
I have created a socket program for server-client communication.
I am reading data using read(byte[]) of DataInputStream, and writing data using write(byte[]) of DataOutputStream.
Whenever I send a small amount of data, my program works fine.
But if I send a piece of data of 20,000 characters and send it 10 times, then I receive the data perfectly 8 of those times, but not the other 2.
So can I reliably send and receive data using read and write in socket programming?
My guess is that you're issuing a single call to read() and assuming it will return all the data you asked for. Streams don't generally work that way. It will block until some data is available, but it won't wait until it's got enough data to fill the array.
Generally this means looping round. For instance:
byte[] data = new byte[expectedSize];
int totalRead = 0;
while (totalRead < expectedSize)
{
    int read = stream.read(data, totalRead, expectedSize - totalRead);
    if (read == -1)
    {
        throw new IOException("Not enough data in stream");
    }
    totalRead += read;
}
If you don't know how many bytes you're expecting in the first place, you may well want to still loop round, but this time until read() returns -1. Use a buffer (e.g. 8K) to read into, and write into a ByteArrayOutputStream. When you've finished reading, you can then get the data out of the ByteArrayOutputStream as a byte array.
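For the unknown-length case, a sketch along those lines:

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

class ReadAll {
    // Reads until the peer closes the stream, collecting everything that was sent.
    static byte[] readUntilEof(InputStream stream) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buffer = new byte[8192];          // 8K scratch buffer
        int read;
        while ((read = stream.read(buffer)) != -1) {
            out.write(buffer, 0, read);          // copy only the bytes actually read
        }
        return out.toByteArray();
    }
}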
Absolutely -- TCP sockets are a reliable network protocol, provided the API is used properly.
You really need to check the number of bytes you receive on each read() call.
Sockets will arbitrarily decide that you have enough data and pass it back on the read() call -- the amount can depend on many factors (buffer size, memory availability, network response time, etc.), most of which are unpredictable. For smaller buffers you normally get as many bytes as you asked for, but for larger buffer sizes read() will often return less data than you asked for -- you need to check the number of bytes read and repeat the read call for the remaining bytes.
It is also possible that something in your network infrastructure (a router, firewall, etc.) is misconfigured and truncating large packets.
Your problem is that in the server thread you must call outputstream.flush() to ensure that any buffered data is actually sent to the other end of the connection.
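For instance, a small sketch of the write side (toClient standing in for the server's DataOutputStream):

import java.io.DataOutputStream;
import java.io.IOException;

class FlushedWrite {
    // Writes one chunk and flushes, so any buffering between the stream
    // and the socket does not hold the bytes back.
    static void send(DataOutputStream toClient, byte[] bytes, int count) throws IOException {
        toClient.write(bytes, 0, count);
        toClient.flush();
    }
}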