Passing on a netty ByteBuf (Asynchronous) - java

I have set up a TCP server using Netty. Now I need to transfer the input of the TCP server (a ByteBuf stream produced by the channel thread) to a separate application thread, and I want to do this asynchronously: the channel thread writes into a buffer stream and the application thread reads from it.
I don't want to pass on the ByteBuf itself, as I need to call msg.release() on it, and I don't want the channel thread to wait for the msg to be consumed by the application thread. I also can't give the application its own channel handler.
What data structure in Java should I use for this purpose? Is my approach correct, or could I use functionality that Netty already provides? InputStreams and OutputStreams also seem like a way to do it, but wouldn't the channel thread block if I used them?
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) {
    // Copy the msg's bytes into a buffer stream and make the application read from that buffer stream
    ((ByteBuf) msg).release();
}
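For illustration, this is roughly the shape of the handoff I have in mind; the BlockingQueue<byte[]> below is just an assumption, not something I have settled on:
// Rough sketch only: copy the readable bytes out of the ByteBuf on the channel
// thread, release the ByteBuf immediately, and hand the copy to the application
// thread via a queue (an unbounded LinkedBlockingQueue chosen arbitrarily here).
private final BlockingQueue<byte[]> inbound = new LinkedBlockingQueue<>();

@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) {
    ByteBuf buf = (ByteBuf) msg;
    try {
        byte[] copy = new byte[buf.readableBytes()];
        buf.readBytes(copy);
        inbound.offer(copy);        // does not block the channel thread
    } finally {
        buf.release();
    }
}

// Application thread:
// byte[] data = inbound.take();   // blocks only the application thread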

Related

How to write MappedByteBuffer to socket outputstream (server to client) with no copy

In Java code, I have a region of a file mapped using MappedByteBuffer and I need to send it to the client (write it to an OutputStream). Because of memory constraints, I need to make sure that sending/writing to the socket does not create any copy of the data. How can I achieve this? Will ByteBuffer.array() serve this purpose?
Sharing the code. Note: the FileChannel is read-only and I need to send the ByteBuffer data as it is.
private void writeData(Socket clientSocket, MappedByteBuffer byteBuffer) {
    Path path = Paths.get(myfile);
    MappedByteBuffer memoryMappedBuffer = null;
    try (FileChannel fChannel = FileChannel.open(path, StandardOpenOption.READ)) {
        memoryMappedBuffer = fChannel.map(FileChannel.MapMode.READ_ONLY, location, size);
        // How can I write memoryMappedBuffer to the socket OutputStream without copying data...? Something like:
        clientSocket.getOutputStream().write(memoryMappedBuffer.array());
    } catch (IOException e) {
        // ...
    }
}
If you are creating the socket yourself, you can use SocketChannel.open() instead, and use write(ByteBuffer). It manages the socket internally.
InetSocketAddress address = ...
SocketChannel channel = SocketChannel.open(address);
channel.write(memoryMappedBuffer);
// ...
channel.close(); // Closes the connection
If you have a pre-existing socket, you can create a Channel from the socket's output stream. However, this allocates a (reused) temporary buffer.
WritableByteChannel channel = Channels.newChannel(clientSocket.getOutputStream());
channel.write(memoryMappedBuffer);
// ...
channel.close();
Based on your description, you want to write to a socket the content of a file, or a slice of a file, that is mapped into the virtual address space of your Java process.
Memory-mapped files are generally used to share memory between multiple processes, or to minimize disk I/O when a process writes and reads data to/from the same files (a concrete example is Kafka, which uses this technique).
In your case, when you write the data to a socket, that socket has a buffer (regardless of whether it is blocking or non-blocking). When the buffer is full you will not be able to write until the receiver acknowledges the data and the buffer is drained. Now, if the receiver is slow, that portion of the file will stay loaded in your main memory for a long time, which can affect the performance of your server (I suppose you will not have a single consumer/client for your server).
One good and efficient solution is to use pipe streaming, which sends data from your server to a consumer (in this case a socket) in a producer/consumer fashion. By default a PipedInputStream uses a buffer of 1024 bytes (you can increase it), meaning that only 1024 bytes are kept in memory at any moment for a specific execution thread. If your process has 500 clients, you will consume only 500 × 1024 bytes = 500 KiB. If one reader is slow, its producer will also be slow, without putting pressure on your process's memory.
If all you have to do is write the content of various files to sockets, I don't see how memory-mapped files help you.
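A rough sketch of the pipe-based handoff described above (the pipe size, file name, offsets, and chunk size are placeholders, not recommendations):
// Producer thread: map a slice of the file and push it into the pipe.
PipedInputStream pipeIn = new PipedInputStream(8 * 1024);    // enlarged pipe buffer
PipedOutputStream pipeOut = new PipedOutputStream(pipeIn);

new Thread(() -> {
    try (FileChannel fc = FileChannel.open(Paths.get("myfile"), StandardOpenOption.READ);
         WritableByteChannel out = Channels.newChannel(pipeOut)) {
        MappedByteBuffer mapped = fc.map(FileChannel.MapMode.READ_ONLY, 0, fc.size());
        while (mapped.hasRemaining()) {
            out.write(mapped);                               // blocks while the pipe is full
        }
    } catch (IOException e) {
        // log and give up on this client
    }
}).start();

// Consumer side: drain the pipe into the client socket in small chunks,
// so only a pipe-buffer's worth of file data is resident per client.
try (InputStream in = pipeIn; OutputStream socketOut = clientSocket.getOutputStream()) {
    byte[] chunk = new byte[1024];
    int n;
    while ((n = in.read(chunk)) != -1) {
        socketOut.write(chunk, 0, n);
    }
}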

How can I interrupt reading from inputstream

I am currently working on a server-client communication. On server-side, I have a Thread listening continuously for any incoming data from the client.
while ((foo = (Foo) objectInputSteam.readObject()) != null) {...}
Sometimes I also need to send data from the server to the client (not as a direct response). After sending some data to the client over the same socket, an exception is thrown once new data reaches the server (0xAC).
As far as I know, this happens because the input reader is reading while data is being sent over the same socket.
Is there any way to interrupt the listening thread while sending data to the client, or do I need to create a second socket on a different port for outgoing traffic?

How to handle tcp inputstream correctly - Java/Android

I'm creating an Android application that needs a permanent TCP connection to a server.
I've created a Service that establishes the connection and listens for incoming bytes on the InputStream (the service runs in the background).
public class TCPServiceConnection extends Service {
    // variables......
    // ...............

    @Override
    public void onCreate() {
        establishTCPConnection();
    }
}
The first 4 incoming bytes give the length of a complete message.
After reading a complete message from the InputStream into a separate buffer, I want to call another Service/AsyncTask in a separate thread that analyses the message (the service should continue listening for further incoming messages).
public void handleTCPInput() {
    while (tcp_socket.isConnected()) {
        byte[] buffer = readCompleteMessageFromTCPInputStream();
        callAnotherThreadToAnalyzeReceivedMessage(buffer);
    }
    // handle exceptions.......
}
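For reference, the length-prefixed read I have in mind could look roughly like this (DataInputStream is just one possible way to do it):
// Rough sketch: read the 4-byte length prefix, then exactly that many payload bytes.
private byte[] readCompleteMessageFromTCPInputStream() throws IOException {
    DataInputStream in = new DataInputStream(tcp_socket.getInputStream());
    int length = in.readInt();     // first 4 bytes = message length
    byte[] message = new byte[length];
    in.readFully(message);         // blocks until the whole message has arrived
    return message;
}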
Is there an existing message-queue system in Android/Java that already handles this multi-threaded access to my separate byte[] buffer?
To implement this I suggest you start a handler thread which continuously reads from the input stream.
As soon as it has read an incoming message, it passes it to the main thread using a Handler, e.g. handler.sendMessage().
Since this processing is not a heavy operation, you can let the main/UI thread process the information, or you can start an AsyncTask to do it.
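A minimal sketch of that handoff, assuming the reading happens on a plain background thread and that readCompleteMessage() is a hypothetical helper that reads the 4-byte length prefix and then the payload:
// Handler bound to the main thread: receives complete messages from the reader thread.
final Handler mainHandler = new Handler(Looper.getMainLooper()) {
    @Override
    public void handleMessage(Message msg) {
        byte[] payload = (byte[]) msg.obj;
        // analyze the message here, or kick off an AsyncTask if it gets expensive
    }
};

// Reader thread: blocks on the stream and forwards each complete message.
new Thread(() -> {
    try {
        while (tcpSocket.isConnected()) {
            byte[] payload = readCompleteMessage(inputStream);          // hypothetical helper
            mainHandler.sendMessage(mainHandler.obtainMessage(0, payload));
        }
    } catch (IOException e) {
        // connection lost; report back or try to reconnect
    }
}).start();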

Use the underlying Socket/ServerSocket in a SocketChannel/ServerSocketChannel?

I'm trying the java.nio package for non-blocking communication. So I have my ServerSocketChannel and all my connected clients (SocketChannel) registered with a Selector, and I wait for data (OP_ACCEPT/OP_READ) using Selector.select().
My question is: can I, instead of using a ByteBuffer and reading directly with SocketChannel.read(), use the underlying Socket, get an InputStream, and read from that stream? Or will that mess up the selector handling?
You can't.
http://download.oracle.com/javase/1.4.2/docs/api/java/net/Socket.html#getInputStream%28%29
If this socket has an associated channel then the resulting input stream delegates all of its operations to the channel. If the channel is in non-blocking mode then the input stream's read operations will throw an IllegalBlockingModeException.
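A small sketch of what the javadoc describes, assuming a channel that has already been switched to non-blocking mode for use with a Selector (host and port are placeholders):
SocketChannel channel = SocketChannel.open(new InetSocketAddress("example.com", 80));
channel.configureBlocking(false);   // required before registering with a Selector

// The stream can be obtained, but it delegates to the non-blocking channel,
// so any read on it throws java.nio.channels.IllegalBlockingModeException.
InputStream in = channel.socket().getInputStream();
int b = in.read();                  // -> IllegalBlockingModeException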

Why can the TCP receiver still receive data after the socket server has shut down?

I am using Java to implement a multithreaded TCP server-client application. But now I have encountered a strange problem: when I shut down the server socket, the receiver still receives the last sent packet continuously. Since the details of the socket read are a kernel concern, I can't figure out the reason. Can anybody give some guidance?
Thanks in advance!
Edit:
The code involved is simple:
public void run() {
    while (runFlag) {
        // in = socket.getInputStream();
        // byte[] buffer = new byte[bufferSize];
        try {
            in.read(buffer);   // note: the count returned by read() (or -1 at end of stream) is ignored
            // process the buffer;
        } catch (IOException e) {
            //
        }
    }
}
When I shut down the server socket, this read operation keeps receiving the packet continuously (on each pass through the while loop).
The TCP/IP stack inside the OS is buffering the data on both sides of the connection. The sender fills its socket send buffer, which is drained by the device driver pushing packets onto the wire. The receiver accumulates packets off the wire in its socket receive buffer, which is drained by the application's reads.
If the data is already in the client socket's buffer (kernel-level, waiting for your application to read it into userspace memory), there is no way for the server to prevent it from being read. It's like with snail mail: once you've sent it away you cannot undo it.
That's how TCP works. It's a reliable byte-stream. Undelivered data continues to be delivered after a normal close. Isn't that what you want? Why is this a 'problem'?
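For completeness, a small sketch of the receive loop with the end-of-stream check made explicit (variable names are assumed). After the server closes the connection, read() first drains whatever is still buffered and then returns -1:
InputStream in = socket.getInputStream();
byte[] buffer = new byte[bufferSize];
int n;
while ((n = in.read(buffer)) != -1) {
    // process exactly n bytes of buffer
}
// -1 reached: the peer has closed and all buffered data has been delivered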
