SocketChannel read() behaviour - short reads - java

The ServerSocketChannel is used this way:
ServerSocketChannel srv = ServerSocketChannel.open();
srv.socket().bind(new java.net.InetSocketAddress(8112));
SocketChannel client = srv.accept();
When a connection is received, data is read this way:
ByteBuffer data = ByteBuffer.allocate(2000);
data.order(ByteOrder.LITTLE_ENDIAN);
client.read(data);
logger.debug("Position: {} bytes read!", data.position());
It prints:
Position: 16 bytes read!
Why isn't the SocketChannel blocking until the buffer is filled?
From the ServerSocketChannel.accept() API (Java 7):
The socket channel returned by this method, if any, will be in
blocking mode regardless of the blocking mode of this channel.
Does the write(ByteBuffer buffer) of the SocketChannel block? How do I test that anyway?
Thank you for your time!

Blocking mode means that it blocks until at least some data is received. It doesn't have to fill the entire buffer.
If you want to make sure you've received an entire buffer's worth of data, you should call read() in a loop until the buffer is full.
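For example, a small helper along these lines (a sketch, not from the original post) keeps calling read() until the buffer has no space left:
import java.io.EOFException;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

final class ChannelUtil {
    // Blocks until 'buffer' is completely filled, or throws if the peer closes first.
    static void readFully(SocketChannel channel, ByteBuffer buffer) throws IOException {
        while (buffer.hasRemaining()) {
            if (channel.read(buffer) == -1) {
                throw new EOFException("connection closed before the buffer was filled");
            }
        }
    }
}
Note that this only makes sense when you know exactly how many bytes the peer will send (e.g. a fixed-length header); otherwise you need a length prefix or delimiter in your protocol.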

Related

Sending ACK greatly slows down data transfer

Awareness of the fact that the TCP checksum is actually a very weak checksum prompted me to include an additional checksum (SHA-256) in each data block, so that the server can verify the data's integrity and, if it is corrupted, request the block again. But the addition of the ACK greatly reduces the data transfer rate. In my case (the data is transmitted over Wi-Fi) the speed dropped from ~90 Mbps to ~12 Mbps.
Client:
SocketChannel socketChannel = SocketChannel.open(new InetSocketAddress("192.168.31.30", 3333));
ByteBuffer byteBufferData = ByteBuffer.allocateDirect(1024 * 8);
ByteBuffer byteBufferACK = ByteBuffer.allocateDirect(1);
for (int i = 0; i < 1024; i++) {
    // write data (payload + checksum (SHA-256))
    socketChannel.write(byteBufferData);
    byteBufferData.clear();
    // read ACK
    socketChannel.read(byteBufferACK);
    byteBufferACK.clear();
    // if (byteBufferACK.get() == XXX)
    //     ... retransmit byteBufferData
}
Server:
ServerSocketChannel serverSocketChannel = ServerSocketChannel.open();
serverSocketChannel.socket().bind(new InetSocketAddress(3333));
SocketChannel socketChannel = serverSocketChannel.accept();
ByteBuffer byteBufferData = ByteBuffer.allocateDirect(1024 * 8);
ByteBuffer byteBufferACK = ByteBuffer.allocateDirect(1);
long startTime = System.currentTimeMillis();
while ((socketChannel.read(byteBufferData)) != -1) {
    // when 8192 bytes of data were read
    if (!byteBufferData.hasRemaining()) {
        byteBufferData.clear();
        // write ACK
        socketChannel.write(byteBufferACK);
        byteBufferACK.clear();
    }
}
System.out.println(System.currentTimeMillis() - startTime);
Please note that the code is a test code and is not intended to convey any useful data. It is intended only for testing the data transfer rate.
I have 2 questions:
Maybe I do not understand something or am doing it incorrectly, but why does sending one byte of data as a confirmation of data acceptance (ACK) affect the overall data transfer rate so much? How can I avoid this?
Is SHA-256 sufficient as a checksum for 8 KB blocks of data (on top of the existing TCP checksum)?
Because you're waiting for it. Let's say there's 200 ms of latency between you and the server. Without the ack, you'd write packets as quickly as possible, saturate the bandwidth, and stop. With the ack, it looks like this:
t=0 send 1st 8k
t=200 server receives
t=205ish server sends ack
t=405 client receives ack.
t=410ish client sends 2nd 8k
You waste at least 50% of your sending time. I'm actually surprised it wasn't worse.
TCP has a LOT of features in it that prevent these kinds of issues, including sliding windows of data (you don't send one packet and ack it, you send N packets and the server acks the ones it receives, allowing missing packets to be resent out of order). You're reimplementing TCP badly and almost certainly shouldn't be.
If you are going to do this, don't use TCP. Use UDP or raw sockets and write your new protocol on top of that. You're still using TCP's acks and checksums underneath, so yours are redundant.
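For example, one way to reduce the stall within the same test harness is to acknowledge a batch of blocks rather than every single 8 KB block. This is only a sketch: the batch size is arbitrary, and the matching server would have to send one ACK per batch instead of per block.
import java.io.EOFException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

public class BatchedAckClient {
    public static void main(String[] args) throws Exception {
        // Same throughput-test setup as above; the address and BATCH size are arbitrary.
        SocketChannel socketChannel = SocketChannel.open(new InetSocketAddress("192.168.31.30", 3333));
        ByteBuffer data = ByteBuffer.allocateDirect(1024 * 8);
        ByteBuffer ack = ByteBuffer.allocateDirect(1);
        final int BATCH = 64; // wait for one ACK per 64 blocks instead of per block

        for (int i = 0; i < 1024; i++) {
            data.clear();
            while (data.hasRemaining()) {
                socketChannel.write(data); // push one 8 KB block (contents don't matter here)
            }
            if ((i + 1) % BATCH == 0) {
                ack.clear();
                while (ack.hasRemaining()) {
                    if (socketChannel.read(ack) == -1) {
                        throw new EOFException("server closed the connection");
                    }
                }
                // inspect the ACK here and retransmit the whole batch on a checksum mismatch
            }
        }
        socketChannel.close();
    }
}
The principle is the same as TCP's own sliding window: keep the pipe full and amortise the round-trip cost over many blocks instead of paying it for every one.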

Avoiding high CPU usage with NIO

I wrote a multithreaded gameserver application which handles multiple simultaneous connections using NIO. Unfortunately this server generates full CPU load on one core as soon as the first user connects, even when that user is not actually sending or receiving any data.
Below is the code of my network handling thread (abbreviated to the essential parts for readability). The class ClientHandler is my own class which does the network abstraction for the game mechanics. All other classes in the example below are from java.nio.
As you can see it uses a while(true) loop. My theory about it is that when a key is writable, selector.select() will return immediately and clientHandler.writeToChannel() is called. But when the handler returns without writing anything, the key will stay writable. Then select is called again immediately and returns immediately. So I got a busy spin.
Is there a way to design the network handling loop in a way that it sleeps as long as there is no data to send by the clientHandlers? Note that low latency is critical for my use-case, so I can not just let it sleep an arbitrary number of ms when no handlers have data.
ServerSocketChannel server = ServerSocketChannel.open();
server.configureBlocking(false);
server.socket().bind(new InetSocketAddress(port));
Selector selector = Selector.open();
server.register(selector, SelectionKey.OP_ACCEPT);
// wait for connections
while (true) {
    // Wait for next set of client connections
    selector.select();
    Set<SelectionKey> keys = selector.selectedKeys();
    Iterator<SelectionKey> i = keys.iterator();
    while (i.hasNext()) {
        SelectionKey key = i.next();
        i.remove();
        if (key.isAcceptable()) {
            SocketChannel clientChannel = server.accept();
            clientChannel.configureBlocking(false);
            clientChannel.socket().setTcpNoDelay(true);
            clientChannel.socket().setTrafficClass(IPTOS_LOWDELAY);
            SelectionKey clientKey = clientChannel.register(selector, SelectionKey.OP_READ | SelectionKey.OP_WRITE);
            ClientHandler clientHandler = new ClientHandler(clientChannel);
            clientKey.attach(clientHandler);
        }
        if (key.isReadable()) {
            // get connection handler for this key and tell it to process data
            ClientHandler clientHandler = (ClientHandler) key.attachment();
            clientHandler.readFromChannel();
        }
        if (key.isWritable()) {
            // get connection handler and tell it to send any data it has cached
            ClientHandler clientHandler = (ClientHandler) key.attachment();
            clientHandler.writeToChannel();
        }
        if (!key.isValid()) {
            ClientHandler clientHandler = (ClientHandler) key.attachment();
            clientHandler.disconnect();
        }
    }
}
SelectionKey clientKey = clientChannel.register(selector, SelectionKey.OP_READ | SelectionKey.OP_WRITE);
The problem is here. SocketChannels are almost always writable, unless the socket send buffer is full. Ergo they should normally not be registered for OP_WRITE: otherwise your selector loop will spin. They should only be so registered if:
there is something to write, and
a prior write() has returned zero (see the sketch below).
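A minimal sketch of that rule (not from the original post; ClientHandlerSketch and its fields are invented for illustration):
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.SocketChannel;

// Only asks for OP_WRITE while it actually has unsent data.
final class ClientHandlerSketch {
    private final SocketChannel channel;
    private final SelectionKey key;
    // Cached outgoing bytes would be placed here by the game logic.
    private final ByteBuffer pendingData = ByteBuffer.allocate(8192);

    ClientHandlerSketch(SocketChannel channel, SelectionKey key) {
        this.channel = channel;
        this.key = key;
    }

    void writeToChannel() throws IOException {
        channel.write(pendingData);
        if (pendingData.hasRemaining()) {
            // The kernel send buffer is full (the write stopped short): now it is
            // worth selecting on OP_WRITE until the channel drains.
            key.interestOps(key.interestOps() | SelectionKey.OP_WRITE);
        } else {
            // Nothing left to send: drop OP_WRITE so the selector loop stops spinning.
            key.interestOps(key.interestOps() & ~SelectionKey.OP_WRITE);
        }
    }
}
With this pattern the client channel is registered with OP_READ only, and OP_WRITE is added and removed on demand, which stops the select() loop from returning immediately on permanently-writable channels.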
I don't see any reason why the reading and writing must happen on the same selector. I would use one selector in a thread for read/accept operations; it will block until new data arrives.
Then use a separate thread and selector for writing. You mention you are using a cache to store messages before they are sent on the writable channels. In practice, the only time a channel is not writable is when the kernel's send buffer is full, so it will rarely be unwritable. A good way to implement this is a dedicated writer thread that is handed messages and sleeps in between; it can either be interrupt()ed when new messages should be sent, or block on take() from a blocking queue. Whenever a new message arrives it unblocks, does a select() on the writable keys, and sends any pending messages; only in rare cases will a message have to remain in the cache because a channel is not writable.
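A rough sketch of the blocking-queue variant (the OutboundMessage shape and the single shared queue are illustrative, not from the original design):
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

final class WriterThread extends Thread {
    // Hypothetical message shape: target channel plus the bytes to send.
    static final class OutboundMessage {
        final SocketChannel channel;
        final ByteBuffer payload;
        OutboundMessage(SocketChannel channel, ByteBuffer payload) {
            this.channel = channel;
            this.payload = payload;
        }
    }

    private final BlockingQueue<OutboundMessage> queue = new LinkedBlockingQueue<>();

    // Called from the read/game threads whenever there is something to send.
    void enqueue(OutboundMessage message) {
        queue.add(message);
    }

    @Override
    public void run() {
        try {
            while (!isInterrupted()) {
                OutboundMessage msg = queue.take();  // sleeps here, no busy spin
                msg.channel.write(msg.payload);      // usually completes in one call
                if (msg.payload.hasRemaining()) {
                    // Rare case: kernel send buffer full. Park the message and select()
                    // on OP_WRITE with this thread's own selector until the channel
                    // drains (omitted in this sketch).
                }
            }
        } catch (InterruptedException | java.io.IOException e) {
            // shutting down or channel error; real code would log and clean up
        }
    }
}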

Simulate back pressure in TCP send

I am writing some Java TCP/IP networking code (client/server) in which I have to deal with scenarios where the sends are much faster than the receives, so that the send operations eventually block at one end (because the send and receive buffers fill up). In order to design my code, I first wanted to play around with these kinds of situations and see how the client and server behave under varying load. But I am not able to set the parameters appropriately to achieve this back pressure. I tried setting Socket.setSendBufferSize(int size) and Socket.setReceiveBufferSize(int size) to small values, hoping they would fill up quickly, but I can see that the send operation completes without waiting for the client to consume the data already written (which suggests the small send and receive buffer sizes have no effect).
Another approach I took was to use Netty and set ServerBootstrap.setOption("child.sendBufferSize", 256);, but even this is of not much use. Can anyone help me understand what I am doing wrong?
The buffers have an OS-dependent minimum size; this is often around 8 KB.
public static void main(String... args) throws IOException, InterruptedException {
    ServerSocketChannel ssc = ServerSocketChannel.open();
    ssc.bind(new InetSocketAddress(0)); // open on a random port
    InetSocketAddress remote = new InetSocketAddress("localhost", ssc.socket().getLocalPort());
    SocketChannel sc = SocketChannel.open(remote);
    configure(sc);
    SocketChannel accept = ssc.accept();
    configure(accept);
    ByteBuffer bb = ByteBuffer.allocateDirect(16 * 1024 * 1024);
    // write as much as you can
    while (sc.write(bb) > 0)
        Thread.sleep(1);
    System.out.println("The socket write wrote " + bb.position() + " bytes.");
}

private static void configure(SocketChannel socketChannel) throws IOException {
    socketChannel.configureBlocking(false);
    socketChannel.socket().setSendBufferSize(8);
    socketChannel.socket().setReceiveBufferSize(8);
}
on my machine prints
The socket write wrote 32768 bytes.
This is the sum of the send and receive buffers, but I suspect they are each 16 KB.
I think Channel.setReadable is what you need. setReadable tells Netty to temporarily pause reading data from the system socket buffer; once that buffer is full, the other end will have to wait.
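A rough sketch of how that could be used (assuming Netty 3.x, which the ServerBootstrap.setOption call above suggests; the slow-consumer part is made up):
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.MessageEvent;
import org.jboss.netty.channel.SimpleChannelUpstreamHandler;

// While reads are paused, the kernel receive buffer fills up and TCP flow control
// pushes back on the sender, which is the back pressure being asked about.
public class ThrottlingHandler extends SimpleChannelUpstreamHandler {
    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) throws Exception {
        ctx.getChannel().setReadable(false); // pause reading from the socket
        processSlowly(e.getMessage());       // stand-in for a slow consumer
        ctx.getChannel().setReadable(true);  // resume once we can take more
    }

    private void processSlowly(Object message) throws InterruptedException {
        Thread.sleep(100);
    }
}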

Use the underlying Socket/ServerSocket in a SocketChannel/ServerSocketChannel?

I'm trying the Java.nio-package for non-blocking communication. So I got my ServerSocketChannel and all my connected clients (SocketChannel) in a Selector and wait for data (OP_ACCEPT/OP_READ) using Selector.select().
My question is: Can I - instead of using a ByteBuffer and reading directly with SocketChannel.read() - use the underlying Socket, get an InputStream and read from that stream? Or will that mess up the selector machinery?
You can't.
http://download.oracle.com/javase/1.4.2/docs/api/java/net/Socket.html#getInputStream%28%29
If this socket has an associated channel then the resulting input stream delegates all of its operations to the channel. If the channel is in non-blocking mode then the input stream's read operations will throw an IllegalBlockingModeException.
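To illustrate the quoted behaviour, here is a small sketch (the endpoint is just a placeholder): reading from the adaptor's stream while the channel is non-blocking fails immediately.
import java.io.InputStream;
import java.net.InetSocketAddress;
import java.nio.channels.SocketChannel;

public class StreamOnNonBlockingChannel {
    public static void main(String[] args) throws Exception {
        // example.com:80 is only a placeholder endpoint for the demonstration
        SocketChannel channel = SocketChannel.open(new InetSocketAddress("example.com", 80));
        channel.configureBlocking(false); // same mode the selector-based code uses
        InputStream in = channel.socket().getInputStream();
        in.read(); // throws java.nio.channels.IllegalBlockingModeException
    }
}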

Why can the TCP receiver receive data after the socket server has shut down?

I am using Java to implement a multithreaded TCP server-client application. But now I have encountered a strange problem: when I shut down the server socket, the receiver still receives the last sent packet continuously. Since the details of the socket read are handled by the kernel, I can't figure out the reason. Can anybody give some guidance?
Thanks in advance!
Edit:
The code involved is simple:
public void run() {
    while (runFlag) {
        // in = socket.getInputStream();
        // byte[] buffer = new byte[bufferSize];
        try {
            in.read(buffer);
            // process the buffer
        } catch (IOException e) {
            //
        }
    }
}
When the server socket is shut down, this read operation keeps receiving the packet continuously (on every pass through the while loop).
The TCP/IP stack inside the OS is buffering the data on both sides of the connection. Sender fills its socket send buffer, which is drained by the device driver pushing packets onto the wire. Receiver accumulates packets off the wire in the socket receive buffer, which is drained by the application reads.
If the data is already in the client socket's buffer (kernel-level, waiting for your application to read it into userspace memory), there is no way for the server to prevent it from being read. It's like with snail mail: once you've sent it away you cannot undo it.
That's how TCP works. It's a reliable byte-stream. Undelivered data continues to be delivered after a normal close. Isn't that what you want? Why is this a 'problem'?
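For completeness, here is a sketch of a receive loop that drains the buffered data and then observes the close via the end-of-stream return value (not the original code; the socket setup is assumed):
import java.io.IOException;
import java.io.InputStream;
import java.net.Socket;

final class Drain {
    // read() keeps returning data that was already delivered and buffered by the kernel,
    // and only returns -1 once that data is consumed and the peer's close is seen.
    static void drain(Socket socket) throws IOException {
        InputStream in = socket.getInputStream();
        byte[] buffer = new byte[8192];
        int n;
        while ((n = in.read(buffer)) != -1) {
            // process exactly n bytes of 'buffer' here
        }
        // -1: the server closed the connection and everything buffered has been read
    }
}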
