I have seen lots of questions about chunked streams in Netty, but most of them were about outbound streams, not inbound streams.
I would like to understand how I can get the data from the channel and send it as an InputStream to my business logic without loading all the data in memory first.
Here's what I was trying to do:
public class ServerRequestHandler extends MessageToMessageDecoder<HttpObject> {
private HttpServletRequest request;
private PipedOutputStream os;
private PipedInputStream is;
@Override
public void handlerAdded(ChannelHandlerContext ctx) throws Exception {
super.handlerAdded(ctx);
this.os = new PipedOutputStream();
this.is = new PipedInputStream(os);
}
@Override
public void handlerRemoved(ChannelHandlerContext ctx) throws Exception {
super.handlerRemoved(ctx);
this.os.close();
this.is.close();
}
@Override
protected void decode(ChannelHandlerContext ctx, HttpObject msg, List<Object> out)
throws Exception {
if (msg instanceof HttpRequest) {
this.request = new CustomHttpRequest((HttpRequest) msg, this.is);
out.add(this.request);
}
if (msg instanceof HttpContent) {
ByteBuf body = ((HttpContent) msg).content();
if (body.readableBytes() > 0)
body.readBytes(os, body.readableBytes());
if (msg instanceof LastHttpContent) {
os.close();
}
}
}
}
And then I have another handler that will get my CustomHttpRequest and send it to what I call a ServiceHandler, where my business logic will read from the InputStream.
public class ServiceRouterHandler extends SimpleChannelInboundHandler<CustomHttpRequest> {
...
@Override
public void channelRead0(ChannelHandlerContext ctx, CustomHttpRequest request) throws IOException {
...
future = serviceHandler.handle(request, response);
...
This does not work: when my handler forwards the CustomHttpRequest to the ServiceHandler and it tries to read from the InputStream, the thread blocks, and the HttpContent is never handled in my decoder.
I know I can try to create a separate thread for my Business Logic, but I have the impression I am overcomplicating things here.
I looked at ByteBufInputStream, but it says that
Please note that it only reads up to the number of readable bytes
determined at the moment of construction.
So I don't think it will work for chunked HTTP requests. Also, I saw ChunkedWriteHandler, which seems fine for outbound chunks, but I couldn't find anything like a ChunkedReadHandler...
So my question is: what's the best way to do this? My requirements are:
- Do not keep data in memory before handing it to the ServiceHandlers;
- The ServiceHandlers API should be Netty-agnostic (that's why I use my CustomHttpRequest instead of Netty's HttpRequest);
UPDATE
I have got this to work using a more reactive approach on the CustomHttpRequest. Now the request does not provide an InputStream for the ServiceHandlers to read from (which was blocking); instead, the CustomHttpRequest has a readInto(OutputStream) method that returns a Future, and the service handler's work runs only once this OutputStream has been filled. Here is how it looks:
public class CustomHttpRequest {
...constructors and other methods hidden...
private final SettableFuture<Void> writeCompleteFuture = SettableFuture.create();
private final SettableFuture<OutputStream> outputStreamFuture = SettableFuture.create();
private ListenableFuture<Void> lastWriteFuture = Futures.transform(outputStreamFuture, x -> null);
public ListenableFuture<Void> readInto(OutputStream os) throws IOException {
outputStreamFuture.set(os);
return this.writeCompleteFuture;
}
ListenableFuture<Void> writeChunk(byte[] buf) {
this.lastWriteFuture = Futures.transform(lastWriteFuture, (AsyncFunction<Void, Void>) (os) -> {
outputStreamFuture.get().write(buf);
return Futures.immediateFuture(null);
});
return lastWriteFuture;
}
void complete() {
ListenableFuture<Void> future =
Futures.transform(lastWriteFuture, (AsyncFunction<Void, Void>) x -> {
outputStreamFuture.get().close();
return Futures.immediateFuture(null);
});
addFinallyCallback(future, () -> {
this.writeCompleteFuture.set(null);
});
}
}
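For illustration, a service handler could consume the body through this API roughly as follows. This is only a sketch: the file destination and the callback wiring are my assumptions, not part of the original code (exception handling elided):
import com.google.common.util.concurrent.FutureCallback;
import com.google.common.util.concurrent.Futures;
import com.google.common.util.concurrent.ListenableFuture;
import com.google.common.util.concurrent.MoreExecutors;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Paths;

// Inside the (Netty-agnostic) service handler:
OutputStream fileOut = Files.newOutputStream(Paths.get("upload.bin"));
ListenableFuture<Void> done = request.readInto(fileOut); // registers the sink and returns immediately
Futures.addCallback(done, new FutureCallback<Void>() {
    @Override public void onSuccess(Void v) {
        // the whole body has been streamed into fileOut; continue the business logic here
    }
    @Override public void onFailure(Throwable t) {
        // surface the error to the caller
    }
}, MoreExecutors.directExecutor());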
And my updated ServerRequestHandler looks like this:
public class ServerRequestHandler extends MessageToMessageDecoder<HttpObject> {
private CustomHttpRequest request;
@Override
public void handlerAdded(ChannelHandlerContext ctx) throws Exception {
super.handlerAdded(ctx);
}
@Override
public void handlerRemoved(ChannelHandlerContext ctx) throws Exception {
super.handlerRemoved(ctx);
}
@Override
protected void decode(ChannelHandlerContext ctx, HttpObject msg, List<Object> out)
throws Exception {
if (msg instanceof HttpRequest) {
HttpRequest request = (HttpRequest) msg;
this.request = new CustomHttpRequest(request, ctx.channel());
out.add(this.request);
}
if (msg instanceof HttpContent) {
ByteBuf buf = ((HttpContent) msg).content();
byte[] bytes = new byte[buf.readableBytes()];
buf.readBytes(bytes);
this.request.writeChunk(bytes);
if (msg instanceof LastHttpContent) {
this.request.complete();
}
}
}
}
This works pretty well, but still, note that everything here is done in a single thread, and maybe for large data I might want to spawn a new thread to release that thread for other channels.
You're on the right track - if your serviceHandler.handle(request, response) call is doing a blocking read, you need to create a new thread for it. Remember, there are supposed to be only a small number of Netty worker threads, so you shouldn't make any blocking calls in worker threads.
The other question to ask is, does your service handler need to be blocking? What does it do? If it's shoveling the data over the network anyway, can you incorporate it into the Netty pipeline in a non-blocking way? That way, everything is async all the way, no blocking calls and extra threads required.
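If it does have to block, here is a minimal sketch of the offloading route, where serviceHandler and response are stand-ins for your business objects (assumptions, not your actual code):
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ServiceRouterHandler extends SimpleChannelInboundHandler<CustomHttpRequest> {
    // Dedicated pool for blocking business logic, so Netty's worker threads stay free.
    private static final ExecutorService BLOCKING_POOL = Executors.newFixedThreadPool(16);

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, CustomHttpRequest request) {
        BLOCKING_POOL.submit(() -> {
            try {
                serviceHandler.handle(request, response); // the blocking InputStream read happens here
            } catch (Exception e) {
                ctx.fireExceptionCaught(e); // ChannelHandlerContext is safe to use from any thread
            }
        });
    }
}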
How can I change this code to get rid of thread blocking? Here, .get() blocks the thread until the result of the future is available. But can I avoid blocking entirely? Something like: one thread sends the requests, and another one receives the responses and runs some code on them, making it fully asynchronous.
I tried to use CompletableFuture, but couldn't really understand it.
I also tried to make a callback method, but wasn't successful either.
byte[] sendRequest(JSONObject jsonObject, String username, String password) throws IOException, ExecutionException, InterruptedException {
try (AsyncHttpClient client = new AsyncHttpClient()) {
String userPassword;
if (username != null && password != null) {
userPassword = username + ":" + password;
} else {
throw new NullPointerException("No login and/or password.");
}
Future<String> future = client.preparePost(apiUrl)
.addHeader("Content-Type", "application/json")
.addHeader("Authorization", "Basic " + DatatypeConverter.printBase64Binary(userPassword.getBytes()))
.setBody(jsonObject.toString().getBytes())
.execute(getHandler());
String response = future.get();
return response.getBytes();
}
}
private AsyncCompletionHandler<String> getHandler() throws IOException {
return new AsyncCompletionHandler<String>() {
@Override
public String onCompleted(Response response) throws IOException {
return response.getResponseBody();
}
@Override
public void onThrowable(Throwable t) {
}
};
}
What I expect:
The program sends a request in the main thread.
Then a callback waits for the response in a separate thread.
Meanwhile, the program keeps working in the main thread and goes on sending more requests.
When the response from the server arrives, the callback in the separate thread catches it and processes it in some way, without involving the main thread.
You should run your async task in a new thread (preferably using an ExecutorService or CompletableFuture). Pass a CallbackHandler to the Runnable/Callable tasks and invoke the handler's methods once the invocation is complete.
Alternatively, if all you're worried about is handling async HTTP requests, I'd suggest not reinventing the wheel and instead using an existing solution, such as an async HTTP client library.
For other use cases, you can follow the example below.
import java.util.*;
import java.lang.*;
import java.io.*;
class Ideone {
public static void main (String[] args) throws java.lang.Exception {
for (int i=0; i<10; i++) {
new Thread(new MyRunnable(new CallbackHandler())).start();
}
}
static class MyRunnable implements Runnable {
CallbackHandler handler;
public MyRunnable(CallbackHandler handler) {
this.handler = handler;
}
public void run() {
try {
Thread.sleep(100);
} catch(Exception e) {
} finally {
Random r = new Random();
if (r.nextBoolean()) {
handler.onSuccess();
} else {
handler.onError();
}
}
}
}
static class CallbackHandler {
public void onSuccess() {
System.out.println("Success");
}
public void onError() {
System.out.println("Error");
}
}
}
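Since you mentioned CompletableFuture, here is a minimal sketch of the same pattern with it; blockingSendRequest is a hypothetical stand-in for your existing sendRequest call:
import java.util.concurrent.CompletableFuture;

class AsyncDemo {
    public static void main(String[] args) throws Exception {
        // supplyAsync runs the blocking call on the common ForkJoinPool;
        // thenAccept registers a callback that fires on a pool thread when the result is ready.
        CompletableFuture
                .supplyAsync(AsyncDemo::blockingSendRequest)
                .thenAccept(body -> System.out.println("got: " + body))
                .exceptionally(t -> { t.printStackTrace(); return null; });

        // The main thread is not blocked and can go on sending more requests.
        System.out.println("main thread continues");
        Thread.sleep(500); // keep the JVM alive long enough for the demo callback
    }

    static String blockingSendRequest() {
        return "response"; // stand-in for the real HTTP call
    }
}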
I am trying to play around with the Netty API, using a Netty telnet server, to check whether truly asynchronous behaviour can be observed.
Below are the three classes being used
TelnetServer.java
public class TelnetServer {
public static void main(String[] args) throws InterruptedException {
// TODO Auto-generated method stub
EventLoopGroup bossGroup = new NioEventLoopGroup(1);
EventLoopGroup workerGroup = new NioEventLoopGroup();
try {
ServerBootstrap b = new ServerBootstrap();
b.group(bossGroup, workerGroup)
.channel(NioServerSocketChannel.class)
.handler(new LoggingHandler(LogLevel.INFO))
.childHandler(new TelnetServerInitializer());
b.bind(8989).sync().channel().closeFuture().sync();
} finally {
bossGroup.shutdownGracefully();
workerGroup.shutdownGracefully();
}
}
}
TelnetServerInitializer.java
public class TelnetServerInitializer extends ChannelInitializer<SocketChannel> {
private static final StringDecoder DECODER = new StringDecoder();
private static final StringEncoder ENCODER = new StringEncoder();
private static final TelnetServerHandler SERVER_HANDLER = new TelnetServerHandler();
final EventExecutorGroup executorGroup = new DefaultEventExecutorGroup(2);
public TelnetServerInitializer() {
}
@Override
public void initChannel(SocketChannel ch) throws Exception {
ChannelPipeline pipeline = ch.pipeline();
// Add the text line codec combination first,
pipeline.addLast(new DelimiterBasedFrameDecoder(8192, Delimiters.lineDelimiter()));
// the encoder and decoder are static as these are sharable
pipeline.addLast(DECODER);
pipeline.addLast(ENCODER);
// and then business logic.
pipeline.addLast(executorGroup,"handler",SERVER_HANDLER);
}
}
TelnetServerHandler.java
/**
* Handles a server-side channel.
*/
@Sharable
public class TelnetServerHandler extends SimpleChannelInboundHandler<String> {
@Override
public void channelActive(ChannelHandlerContext ctx) throws Exception {
// Send greeting for a new connection.
ctx.write("Welcome to " + InetAddress.getLocalHost().getHostName() + "!\r\n");
ctx.write("It is " + new Date() + " now.\r\n");
ctx.flush();
ctx.channel().config().setAutoRead(true);
}
@Override
public void channelRead0(ChannelHandlerContext ctx, String request) throws Exception {
// Generate and write a response.
System.out.println("request = "+ request);
String response;
boolean close = false;
if (request.isEmpty()) {
response = "Please type something.\r\n";
} else if ("bye".equals(request.toLowerCase())) {
response = "Have a good day!\r\n";
close = true;
} else {
response = "Did you say '" + request + "'?\r\n";
}
// We do not need to write a ChannelBuffer here.
// We know the encoder inserted at TelnetPipelineFactory will do the conversion.
ChannelFuture future = ctx.write(response);
Thread.sleep(10000);
// Close the connection after sending 'Have a good day!'
// if the client has sent 'bye'.
if (close) {
future.addListener(ChannelFutureListener.CLOSE);
}
}
@Override
public void channelReadComplete(ChannelHandlerContext ctx) {
ctx.flush();
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
cause.printStackTrace();
ctx.close();
}
}
Now when I connect through a telnet client and send the command hello three times,
the next request doesn't reach channelRead0 until the first response has been written. Is there any way I can make this completely asynchronous, so that each hello is received as soon as it is available on the socket?
Netty uses at most one thread per handler for incoming reads, meaning that the next call to channelRead will only be dispatched after the previous call has completed. This is required for the correct working of most handlers, including sending back messages in the proper order. If the amount of computation is really complex, another solution is using a custom thread pool for the messages.
If the blocking operation is instead another kind of connection, you should make that connection asynchronous too. You only get asynchronous behaviour if every part does this correctly.
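If out-of-order responses are acceptable for your protocol, a sketch of that custom thread pool idea could look like this (the pool size and the simulated delay are arbitrary assumptions):
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

@Sharable
public class TelnetServerHandler extends SimpleChannelInboundHandler<String> {
    // Custom pool: messages are processed concurrently, so the 10-second sleep
    // no longer delays the dispatch of the next channelRead0 call.
    private static final ExecutorService POOL = Executors.newFixedThreadPool(4);

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, String request) {
        POOL.submit(() -> {
            try {
                Thread.sleep(10000); // simulated slow work, now off the event loop
            } catch (InterruptedException ignored) {
                Thread.currentThread().interrupt();
            }
            ctx.writeAndFlush("Did you say '" + request + "'?\r\n"); // safe from any thread
        });
    }
}
Note that the 'bye' close logic would need to move into the task as well, and responses on a single connection may now interleave.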
The code below is a Servlet 3.1 non-blocking IO demo:
UploadServlet:
@WebServlet(name = "UploadServlet", urlPatterns = {"/UploadServlet"}, asyncSupported = true)
public class UploadServlet extends HttpServlet {
protected void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
AsyncContext context = request.startAsync();
// set up async listener
context.addListener(new AsyncListener() {
public void onComplete(AsyncEvent event) throws IOException {
event.getSuppliedResponse().getOutputStream().print("Complete");
}
public void onError(AsyncEvent event) {
System.out.println(event.getThrowable());
}
public void onStartAsync(AsyncEvent event) {
}
public void onTimeout(AsyncEvent event) {
System.out.println("my asyncListener.onTimeout");
}
});
ServletInputStream input = request.getInputStream();
ReadListener readListener = new ReadListenerImpl(input, response, context);
input.setReadListener(readListener);
}
protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
}
}
ReadListenerImpl:
public class ReadListenerImpl implements ReadListener{
private ServletInputStream input = null;
private HttpServletResponse res = null;
private AsyncContext ac = null;
private Queue<String> queue = new LinkedBlockingQueue<>();
ReadListenerImpl(ServletInputStream in, HttpServletResponse r, AsyncContext c) {
input = in;
res = r;
ac = c;
}
public void onDataAvailable() throws IOException {
System.out.println("Data is available");
StringBuilder sb = new StringBuilder();
int len = -1;
byte b[] = new byte[1024];
while (input.isReady() && (len = input.read(b)) != -1) {
String data = new String(b, 0, len);
sb.append(data);
}
queue.add(sb.toString());
}
public void onAllDataRead() throws IOException {
System.out.println("Data is all read");
// now all data are read, set up a WriteListener to write
ServletOutputStream output = res.getOutputStream();
WriteListener writeListener = new WriteListenerImpl(output, queue, ac);
output.setWriteListener(writeListener);
}
public void onError(final Throwable t) {
ac.complete();
t.printStackTrace();
}
}
WriteListenerImpl:
public class WriteListenerImpl implements WriteListener{
private ServletOutputStream output = null;
private Queue<String> queue = null;
private AsyncContext context = null;
WriteListenerImpl(ServletOutputStream sos, Queue<String> q, AsyncContext c) {
output = sos;
queue = q;
context = c;
}
public void onWritePossible() throws IOException {
while (queue.peek() != null && output.isReady()) {
String data = queue.poll();
output.print(data);
}
if (queue.peek() == null) {
context.complete();
}
}
public void onError(final Throwable t) {
context.complete();
t.printStackTrace();
}
}
The above code works fine. I want to know how it differs from a blocking IO servlet, and how the above code works.
Reading input data:
In the blocking scenario, when you read data from the input stream, each read blocks until data is available. This could be a long time for a remote client sending large data, which means the thread is held for a long time.
For example, consider inbound data being received over 2 minutes at regular intervals in 13 chunks. With blocking reads you read the first chunk, hold the thread for ~10 seconds until the next chunk, read it, hold the thread for another ~10 seconds, and so on. In this case the thread might spend less than a second actually processing data and almost 120 seconds blocked waiting for data. So if you have a server with 10 threads, your throughput is 10 clients every 2 minutes.
In the non-blocking scenario, the readListener reads data while isReady() returns true (it must check isReady() before each call to read data), but when isReady() returns false the readListener returns and the thread is relinquished. Then, when more data arrives, onDataAvailable() is called and the readListener reads data again until isReady() returns false.
In the same example, this time the thread reads the data and returns, is woken up 10 seconds later, reads the next data and returns, is woken up 10 seconds later, reads data and returns, and so on. While it has still taken 2 minutes to read the data, the thread(s) needed to do this were active for less than a second in total and were available for other work. So while this specific request still takes 2 minutes, the server with 10 threads can now process many more requests every 2 minutes.
Sending response data:
The scenario is similar for sending data and is useful when sending large responses. For example, sending a large response in 13 chunks may take 2 minutes in the blocking scenario, because the client takes 10 seconds to acknowledge receipt of each chunk and the thread is held while waiting. However, in the non-blocking scenario the thread is only held while actually sending the data, not while waiting to be able to send again. So, again, the response is not sent to the particular client any more quickly, but the thread is held for a fraction of the time, and the throughput of the server can increase significantly.
So the examples here are contrived, but they illustrate the point: non-blocking I/O does not make a single request faster than blocking I/O, but it increases server throughput when the application can read input data faster than the client can send it and/or send response data faster than the client can receive it.
I'm currently working on a POC using Netty, and so far it is going very nicely; I have managed to get quite some functionality up and running.
I have a question, however, about reusing the byte buffer for writing. In the following example you can see a manually created ByteBuffer response, but it is created for every request, and that isn't needed. I would like to make use of 'buf'. I'm currently running a bit in trial-and-error mode and I have checked out the examples. Although my case looks very standard, I have not been able to figure out the correct way of making use of a pooled buffer.
public class OperationHandler extends ChannelInboundHandlerAdapter {
private ByteBuf buf;
@Override
public void handlerAdded(ChannelHandlerContext ctx) {
buf = ctx.alloc().buffer(1024);
// System.out.println("Channel handler added");
}
@Override
public void handlerRemoved(ChannelHandlerContext ctx) {
buf.release();
buf = null;
}
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) {
try {
ByteBuffer response = ByteBuffer.allocate(1024);
byte[] operation = (byte[]) msg;
invoker.invoke(operation, response);
response.flip();
ctx.write(Unpooled.wrappedBuffer(response));
} finally {
ReferenceCountUtil.release(msg);
}
}
@Override
public void channelReadComplete(ChannelHandlerContext ctx) {
ctx.flush();
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
// Close the connection when an exception is raised.
cause.printStackTrace();
ctx.close();
}
}
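One detail that matters here: a ByteBuf passed to ctx.write() is released by Netty once it has been flushed, so a single buffer allocated in handlerAdded cannot simply be rewritten for every response. The usual pattern is instead to allocate a fresh buffer from ctx.alloc() per response; with Netty's pooled allocator (the default in Netty 4.1 on most platforms) this draws from a buffer pool rather than allocating new memory each time. A minimal sketch, with the invoker call replaced by a stand-in write:
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) {
    try {
        // ctx.alloc() is the channel's allocator (pooled by default in Netty 4.1),
        // so this draws from a buffer pool instead of allocating fresh memory each time.
        ByteBuf response = ctx.alloc().buffer(1024);
        response.writeBytes((byte[]) msg); // stand-in for invoker.invoke(operation, response)
        ctx.write(response); // Netty takes ownership and releases the buffer after flushing
    } finally {
        ReferenceCountUtil.release(msg);
    }
}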
I am trying to implement a single request/response on an AsynchronousSocketChannel in a vert.x worker verticle using a CompletionHandler, not Futures. From the vert.x documentation:
"Worker verticles are never executed concurrently by more than one thread."
So here is my code (not sure I got the socket handling 100% right - please comment):
// omitted: asynchronousSocketChannel.open, connect ...
eventBus.registerHandler(address, new Handler<Message<JsonObject>>() {
@Override
public void handle(final Message<JsonObject> event) {
final ByteBuffer receivingBuffer = ByteBuffer.allocateDirect(2048);
final ByteBuffer sendingBuffer = ByteBuffer.wrap("Foo".getBytes());
asynchronousSocketChannel.write(sendingBuffer, 0L, new CompletionHandler<Integer, Long>() {
public void completed(final Integer result, final Long attachment) {
if (sendingBuffer.hasRemaining()) {
long newFilePosition = attachment + result;
asynchronousSocketChannel.write(sendingBuffer, newFilePosition, this);
}
asynchronousSocketChannel.read(receivingBuffer, 0L, new CompletionHandler<Integer, Long>() {
CharBuffer charBuffer = null;
final Charset charset = Charset.defaultCharset();
final CharsetDecoder decoder = charset.newDecoder();
public void completed(final Integer result, final Long attachment) {
if (result > 0) {
long p = attachment + result;
asynchronousSocketChannel.read(receivingBuffer, p, this);
}
receivingBuffer.flip();
try {
charBuffer = decoder.decode(receivingBuffer);
event.reply(charBuffer.toString()); // pseudo code
} catch (CharacterCodingException e) { }
}
public void failed(final Throwable exc, final Long attachment) { }
});
}
public void failed(final Throwable exc, final Long attachment) { }
});
}
});
I am hitting a lot of ReadPendingExceptions and WritePendingExceptions during load testing, which seems a bit strange if there is really only one thread at a time in the handle method. How can it be that a read or a write has not fully completed if only one thread works with the AsynchronousSocketChannel at a time?
Handlers from an AsynchronousSocketChannel are executed on their own AsynchronousChannelGroup, which is a derivative of ExecutorService. Unless you make special efforts, those handlers are executed in parallel with the code that started the I/O operation.
To execute the I/O completion logic within the verticle, you have to make and register a handler from that verticle which does what the AsynchronousSocketChannel's handler does now.
The AsynchronousSocketChannel's handler should only pack its arguments (result and attachment) into a message and send that message to the event bus.
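A sketch of that idea, in vert.x 2 style to match the registerHandler code above; the event bus addresses, the message shape, and the availability of the verticle's vertx reference are my assumptions:
asynchronousSocketChannel.read(receivingBuffer, null, new CompletionHandler<Integer, Void>() {
    public void completed(final Integer bytesRead, final Void attachment) {
        receivingBuffer.flip();
        byte[] data = new byte[receivingBuffer.remaining()];
        receivingBuffer.get(data);
        // Do no real work here: just pack the result and hop back onto the verticle's context.
        vertx.eventBus().send("socket.read.completed",
                new JsonObject().putString("payload", new String(data)));
    }
    public void failed(final Throwable exc, final Void attachment) {
        vertx.eventBus().send("socket.read.failed",
                new JsonObject().putString("error", String.valueOf(exc)));
    }
});
The verticle then registers a handler on those addresses; since event bus handlers run on the verticle's own thread, the single-threaded guarantee holds again.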