Questions About Servlet 3.1 Non-Blocking IO Sample - java

The code below is a Servlet 3.1 non-blocking IO demo:
UploadServlet:
@WebServlet(name = "UploadServlet", urlPatterns = {"/UploadServlet"}, asyncSupported = true)
public class UploadServlet extends HttpServlet {
    protected void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
        AsyncContext context = request.startAsync();
        // set up async listener
        context.addListener(new AsyncListener() {
            public void onComplete(AsyncEvent event) throws IOException {
                event.getSuppliedResponse().getOutputStream().print("Complete");
            }

            public void onError(AsyncEvent event) {
                System.out.println(event.getThrowable());
            }

            public void onStartAsync(AsyncEvent event) {
            }

            public void onTimeout(AsyncEvent event) {
                System.out.println("my asyncListener.onTimeout");
            }
        });
        ServletInputStream input = request.getInputStream();
        ReadListener readListener = new ReadListenerImpl(input, response, context);
        input.setReadListener(readListener);
    }

    protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
    }
}
ReadListenerImpl:
public class ReadListenerImpl implements ReadListener {
    private ServletInputStream input = null;
    private HttpServletResponse res = null;
    private AsyncContext ac = null;
    private Queue<String> queue = new LinkedBlockingQueue<>();

    ReadListenerImpl(ServletInputStream in, HttpServletResponse r, AsyncContext c) {
        input = in;
        res = r;
        ac = c;
    }

    public void onDataAvailable() throws IOException {
        System.out.println("Data is available");
        StringBuilder sb = new StringBuilder();
        int len = -1;
        byte[] b = new byte[1024];
        while (input.isReady() && (len = input.read(b)) != -1) {
            String data = new String(b, 0, len);
            sb.append(data);
        }
        queue.add(sb.toString());
    }

    public void onAllDataRead() throws IOException {
        System.out.println("Data is all read");
        // now all data is read, set up a WriteListener to write
        ServletOutputStream output = res.getOutputStream();
        WriteListener writeListener = new WriteListenerImpl(output, queue, ac);
        output.setWriteListener(writeListener);
    }

    public void onError(final Throwable t) {
        ac.complete();
        t.printStackTrace();
    }
}
WriteListenerImpl:
public class WriteListenerImpl implements WriteListener {
    private ServletOutputStream output = null;
    private Queue<String> queue = null;
    private AsyncContext context = null;

    WriteListenerImpl(ServletOutputStream sos, Queue<String> q, AsyncContext c) {
        output = sos;
        queue = q;
        context = c;
    }

    public void onWritePossible() throws IOException {
        while (queue.peek() != null && output.isReady()) {
            String data = queue.poll();
            output.print(data);
        }
        if (queue.peek() == null) {
            context.complete();
        }
    }

    public void onError(final Throwable t) {
        context.complete();
        t.printStackTrace();
    }
}
The code above works fine. I want to know how it differs from a blocking IO servlet, and how the code above actually works.

Reading input data:
In the blocking scenario, when you read data from the input stream, each read blocks until data is available. This can be a long time for a remote client sending large data, which means the thread is held for a long time.
For example, consider inbound data being received over 2 minutes at regular intervals in 13 chunks. With a blocking read you read the first chunk, hold the thread for ~10 seconds, read the next chunk, hold the thread for ~10 seconds, and so on. In this case the thread might spend less than a second actually processing data and almost 120 seconds blocked waiting for data. With a server that has 10 threads, the throughput would be limited to 10 clients every 2 minutes.
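For illustration, here is a minimal sketch of the blocking counterpart (an ordinary synchronous doPost, not from the question); each read() parks the container thread until the client's next chunk arrives:
// Blocking counterpart (sketch, hypothetical servlet): the thread is parked
// inside read() whenever the client has not yet sent the next chunk.
protected void doPost(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    ServletInputStream input = request.getInputStream();
    StringBuilder sb = new StringBuilder();
    byte[] b = new byte[1024];
    int len;
    while ((len = input.read(b)) != -1) { // blocks here between chunks
        sb.append(new String(b, 0, len));
    }
    response.getOutputStream().print("Read " + sb.length() + " characters");
}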
In the non-blocking scenario, the ReadListener reads data while isReady() returns true (it must check isReady() before each call to read data), but when isReady() returns false the ReadListener returns and the thread is relinquished. Then, when more data arrives, onDataAvailable() is called and the ReadListener reads data again, until isReady() returns false.
In the same example, this time the thread reads the data and returns, is woken up 10 seconds later, reads the next data and returns, is woken up 10 seconds later, reads data and returns, and so on. While it has still taken 2 minutes to read the data, the thread(s) needed to do this were only active for less than a second and were available for other work. So while the specific request still takes 2 minutes, the server with 10 threads can now process many more requests every 2 minutes.
Sending response data:
The scenario is similar for sending data, and is useful when sending large responses. For example, sending a large response in 13 chunks may take 2 minutes in the blocking scenario, because the client takes 10 seconds to acknowledge receipt of each chunk and the thread is held while waiting. In the non-blocking scenario, however, the thread is only held while actually sending the data, not while waiting to be able to send again. So, again, the response is not sent to the particular client any more quickly, but the thread is held for a fraction of the time, and the throughput of the server can increase significantly.
The examples here are contrived, but they illustrate the point: non-blocking I/O does not make a single request any faster than blocking I/O, but it increases server throughput when the application can read input data faster than the client can send it and/or send response data faster than the client can receive it.

Related

Netty chunked input stream

I have seen lots of questions about chunked streams in Netty, but most of them were solutions about outbound streams, not inbound streams.
I would like to understand how I can get the data from the channel and send it as an InputStream to my business logic without loading all the data into memory first.
Here's what I was trying to do:
public class ServerRequestHandler extends MessageToMessageDecoder<HttpObject> {
    private HttpServletRequest request;
    private PipedOutputStream os;
    private PipedInputStream is;

    @Override
    public void handlerAdded(ChannelHandlerContext ctx) throws Exception {
        super.handlerAdded(ctx);
        this.os = new PipedOutputStream();
        this.is = new PipedInputStream(os);
    }

    @Override
    public void handlerRemoved(ChannelHandlerContext ctx) throws Exception {
        super.handlerRemoved(ctx);
        this.os.close();
        this.is.close();
    }

    @Override
    protected void decode(ChannelHandlerContext ctx, HttpObject msg, List<Object> out)
            throws Exception {
        if (msg instanceof HttpRequest) {
            this.request = new CustomHttpRequest((HttpRequest) msg, this.is);
            out.add(this.request);
        }
        if (msg instanceof HttpContent) {
            ByteBuf body = ((HttpContent) msg).content();
            if (body.readableBytes() > 0)
                body.readBytes(os, body.readableBytes());
            if (msg instanceof LastHttpContent) {
                os.close();
            }
        }
    }
}
And then I have another Handler that will get my CustomHttpRequest and send to what I call a ServiceHandler, where my business logic will read from the InputStream.
public class ServiceRouterHandler extends SimpleChannelInboundHandler<CustomHttpRequest> {
    ...
    @Override
    public void channelRead0(ChannelHandlerContext ctx, CustomHttpRequest request) throws IOException {
        ...
        future = serviceHandler.handle(request, response);
        ...
This does not work, because when my handler forwards the CustomHttpRequest to the ServiceHandler and it tries to read from the InputStream, the thread blocks, and the HttpContent is never handled in my decoder.
I know I can create a separate thread for my business logic, but I have the impression I am overcomplicating things here.
I looked at ByteBufInputStream, but it says that
Please note that it only reads up to the number of readable bytes
determined at the moment of construction.
So I don't think it will work for chunked HTTP requests. Also, I saw ChunkedWriteHandler, which seems fine for outbound chunks, but I couldn't find anything like a ChunkedReadHandler...
So my question is: what's the best way to do this? My requirements are:
- Do not keep the data in memory before sending it to the ServiceHandlers;
- The ServiceHandlers API should be netty agnostic (that's why I use my CustomHttpRequest, instead of Netty's HttpRequest);
UPDATE
I have got this to work using a more reactive approach on the CustomHttpRequest. Now the request does not provide an InputStream for the ServiceHandlers to read from (which was blocking); instead, the CustomHttpRequest has a readInto(OutputStream) method that returns a Future, and the service handler is only executed when this OutputStream is fulfilled. Here is what it looks like:
public class CustomHttpRequest {
    ...constructors and other methods hidden...
    private final SettableFuture<Void> writeCompleteFuture = SettableFuture.create();
    private final SettableFuture<OutputStream> outputStreamFuture = SettableFuture.create();
    private ListenableFuture<Void> lastWriteFuture = Futures.transform(outputStreamFuture, x -> null);

    public ListenableFuture<Void> readInto(OutputStream os) throws IOException {
        outputStreamFuture.set(os);
        return this.writeCompleteFuture;
    }

    ListenableFuture<Void> writeChunk(byte[] buf) {
        this.lastWriteFuture = Futures.transform(lastWriteFuture, (AsyncFunction<Void, Void>) (os) -> {
            outputStreamFuture.get().write(buf);
            return Futures.immediateFuture(null);
        });
        return lastWriteFuture;
    }

    void complete() {
        ListenableFuture<Void> future =
            Futures.transform(lastWriteFuture, (AsyncFunction<Void, Void>) x -> {
                outputStreamFuture.get().close();
                return Futures.immediateFuture(null);
            });
        addFinallyCallback(future, () -> {
            this.writeCompleteFuture.set(null);
        });
    }
}
And my updated ServerRequestHandler looks like this:
public class ServerRequestHandler extends MessageToMessageDecoder<HttpObject> {
    private NettyHttpServletRequestAdaptor request;

    @Override
    public void handlerAdded(ChannelHandlerContext ctx) throws Exception {
        super.handlerAdded(ctx);
    }

    @Override
    public void handlerRemoved(ChannelHandlerContext ctx) throws Exception {
        super.handlerRemoved(ctx);
    }

    @Override
    protected void decode(ChannelHandlerContext ctx, HttpObject msg, List<Object> out)
            throws Exception {
        if (msg instanceof HttpRequest) {
            HttpRequest request = (HttpRequest) msg;
            this.request = new CustomHttpRequest(request, ctx.channel());
            out.add(this.request);
        }
        if (msg instanceof HttpContent) {
            ByteBuf buf = ((HttpContent) msg).content();
            byte[] bytes = new byte[buf.readableBytes()];
            buf.readBytes(bytes);
            this.request.writeChunk(bytes);
            if (msg instanceof LastHttpContent) {
                this.request.complete();
            }
        }
    }
}
This works pretty well, but note that everything here is still done in a single thread; for large data I might want to spawn a new thread to free that thread up for other channels.
You're on the right track - if your serviceHandler.handle(request, response); call is doing a blocking read, you need to create a new thread for it. Remember, there are supposed to be only a small number of Netty worker threads, so you shouldn't do any blocking calls in worker threads.
The other question to ask is, does your service handler need to be blocking? What does it do? If it's shoveling the data over the network anyway, can you incorporate it into the Netty pipeline in a non-blocking way? That way, everything is async all the way, no blocking calls and extra threads required.
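If the handler must stay blocking, one common pattern is to give it its own EventExecutorGroup when adding it to the pipeline, so the blocking call runs off the IO threads. A minimal sketch (the initializer class, codec choice, and group size are assumptions, reusing the handler names from the question):
import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.http.HttpServerCodec;
import io.netty.util.concurrent.DefaultEventExecutorGroup;
import io.netty.util.concurrent.EventExecutorGroup;

public class ServerInitializer extends ChannelInitializer<SocketChannel> {
    // Separate pool: blocking calls in ServiceRouterHandler no longer stall the event loop.
    private static final EventExecutorGroup BLOCKING_GROUP = new DefaultEventExecutorGroup(16);

    @Override
    protected void initChannel(SocketChannel ch) {
        ch.pipeline()
          .addLast(new HttpServerCodec())                       // decode/encode HTTP messages
          .addLast(new ServerRequestHandler())                  // runs on the IO thread
          .addLast(BLOCKING_GROUP, new ServiceRouterHandler()); // runs on the blocking pool
    }
}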

ClientCallStreamObserver isReady never returns true

I'm making an input-stream rate meter. It is basically a service that exposes a request-streaming call and counts how many messages per second it can handle.
As the client is totally async when it comes to sending messages, I use the ClientCallStreamObserver to start sending messages only when the stream is ready, to avoid memory overflow.
The client code looks like this:
public static void main(String[] args) throws Exception {
    ManagedChannel channel = ManagedChannelBuilder.forAddress("server", 4242).usePlaintext(true).build();
    ServerGrpc.ServerStub asyncStub = ServerGrpc.newStub(channel);
    StreamObserver<MarketDataOuterClass.Trade> inputStream = asyncStub.reportNewTradeStream(new StreamObserver<Empty>() {
        @Override
        public void onNext(Empty empty) {
        }

        @Override
        public void onError(Throwable throwable) {
            logger.info("on error response stream");
        }

        @Override
        public void onCompleted() {
            logger.info("on completed response stream");
        }
    });
    final ClientCallStreamObserver<MarketDataOuterClass.Trade> clientCallObserver = (ClientCallStreamObserver<MarketDataOuterClass.Trade>) inputStream;
    while (!clientCallObserver.isReady()) {
        Thread.sleep(2000);
        logger.info("stream not ready yet");
    }
    counter.setLastTic(System.nanoTime());
    while (true) {
        counter.inc();
        if (counter.getCounter() % 15000 == 0) {
            long now = System.nanoTime();
            double rate = (double) NANOSEC_TO_SEC * counter.getCounter() / (now - counter.getLastTic());
            logger.info("rate: " + rate + " msgs per sec");
            counter.clear();
            counter.setLastTic(now);
        }
        inputStream.onNext(createRandomTrade());
    }
}
My polling loop over isReady() never ends.
Note: I'm serving my test from a Kubernetes cluster; the server is receiving the call and returning a StreamObserver implementation.
isReady() should eventually return true, as long as the RPC doesn't error/complete immediately. But the code is not observing flow control properly.
After each call to onNext() to send a request, isReady() could begin returning false. Your while (true) loop should instead have the isReady() check at the beginning of each iteration.
Instead of polling, it is better to call clientCallObserver.setOnReadyHandler(yourRunnable) to be notified when the call is ready to send. Note that you should still check isReady() within yourRunnable, as there can be spurious/out-of-date notifications.
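For a client-streaming call, the usual way to get at setOnReadyHandler before the call starts is grpc-java's ClientResponseObserver (from io.grpc.stub). A sketch adapted to the question's stub; treat the wiring as illustrative, with createRandomTrade and logger taken from the question:
asyncStub.reportNewTradeStream(new ClientResponseObserver<MarketDataOuterClass.Trade, Empty>() {
    private ClientCallStreamObserver<MarketDataOuterClass.Trade> requestStream;

    @Override
    public void beforeStart(ClientCallStreamObserver<MarketDataOuterClass.Trade> stream) {
        this.requestStream = stream;
        // Invoked each time the transport transitions from not-ready to ready.
        stream.setOnReadyHandler(() -> {
            // Re-check isReady() before every message: each onNext() can fill
            // the transport buffer again, and notifications may be spurious.
            while (requestStream.isReady()) {
                requestStream.onNext(createRandomTrade());
            }
        });
    }

    @Override
    public void onNext(Empty empty) {
    }

    @Override
    public void onError(Throwable throwable) {
        logger.info("on error response stream");
    }

    @Override
    public void onCompleted() {
        logger.info("on completed response stream");
    }
});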

AsynchronousSocketChannel not reading in entire message

When I run the below locally (on my own computer) it works fine - I can send messages to it and it reads them in properly. As soon as I put this on a remote server and send a message, only half the message gets read.
try {
    this.asynchronousServerSocketChannel = AsynchronousServerSocketChannel.open().bind(new InetSocketAddress(80));
    this.asynchronousServerSocketChannel.accept(null, new CompletionHandler<AsynchronousSocketChannel, Void>() {
        @Override
        public void completed(AsynchronousSocketChannel asynchronousSocketChannel, Void att) {
            try {
                asynchronousServerSocketChannel.accept(null, this);
                ByteBuffer byteBuffer = ByteBuffer.allocate(10485760);
                asynchronousSocketChannel.read(byteBuffer).get(120000, TimeUnit.SECONDS);
                byteBuffer.flip();
                System.out.println("request: " + Charset.defaultCharset().decode(byteBuffer).toString());
            } catch (CorruptHeadersException | CorruptProtocolException | MalformedURLException ex) {
            } catch (InterruptedException | ExecutionException | TimeoutException ex) {
            }
        }

        @Override
        public void failed(Throwable exc, Void att) {
        }
    });
} catch (IOException ex) {
}
I've looked around at other questions and tried some of the answers, but nothing has worked so far. I thought the cause might be that it's timing out due to the network being slower when it's placed remotely, but increasing the timeout didn't resolve the issue. I also considered that the message might be too large, but allocating more capacity to the ByteBuffer didn't resolve the issue either.
I believe your issue is with the asynchronous nature of the code you're using. What you have is an open connection, and you've called the asynchronous read method on your socket.
This reads n bytes from the channel, where n is anything from 0 to the size of your available buffer.
I firmly believe that you have to read in a loop. That is, with Java's asynchronous NIO you'd need to call read again from the completed method of a new CompletionHandler you create for the read (possibly passing in the AsynchronousSocketChannel as an attachment), not from the one you already have for the accept.
This is the same sort of pattern you'd use when you call accept again, with this as the completion handler, from the completed method of the CompletionHandler for the accept call.
It then becomes important to put an "escape" clause into your CompletionHandler: for instance, if the result is -1, or if the ByteBuffer has accumulated the number of bytes you're expecting, or if the final byte in the ByteBuffer is a specific message-termination byte that you've agreed with the sending application.
The Java documentation on the matter goes so far as to say the read method will only read up to the number of bytes remaining in the destination buffer at the time of invocation.
In summary: the completed method of the handler for the read executes once something was written to the channel; but if something is being streamed you could get half of the bytes, so you need to continue reading until you're satisfied you've got the end of what was sent.
Below is some code I knocked together for reading until the end, responding whilst reading, asynchronously. It, unlike myself, can talk and listen at the same time.
public class ReadForeverCompletionHandler implements CompletionHandler<Integer, Pair<AsynchronousSocketChannel, ByteBuffer>> {
    @Override
    public void completed(Integer bytesRead, Pair<AsynchronousSocketChannel, ByteBuffer> statefulStuff) {
        if (bytesRead != -1) {
            final ByteBuffer receivedByteBuffer = statefulStuff.getRight();
            final AsynchronousSocketChannel theSocketChannel = statefulStuff.getLeft();
            if (receivedByteBuffer.position() > 8) {
                // New buffer, as the existing buffer is in use
                ByteBuffer response = ByteBuffer.wrap(receivedByteBuffer.array());
                receivedByteBuffer.clear(); // safe, as we've not got any outstanding or in-progress reads, yet
                theSocketChannel.read(receivedByteBuffer, statefulStuff, this); // basically "WAIT" on more data
                Future<Integer> ignoredBytesWrittenResult = theSocketChannel.write(response);
            }
        } else {
            // connection was closed
            try {
                statefulStuff.getLeft().shutdownOutput(); // maybe
            } catch (IOException somethingBad) {
                // fire
            }
        }
    }

    @Override
    public void failed(Throwable exc, Pair<AsynchronousSocketChannel, ByteBuffer> attachment) {
        // shout fire
    }
}
The read is originally kicked off by a call from the completed method in the handler for the very first asynchronous accept on the server socket, like this:
public class AcceptForeverCompletionHandler implements CompletionHandler<AsynchronousSocketChannel, Pair<AsynchronousServerSocketChannel, Collection<AsynchronousSocketChannel>>> {
    private final ReadForeverCompletionHandler readForeverAndEverAndSoOn = new ReadForeverCompletionHandler();

    @Override
    public void completed(AsynchronousSocketChannel result, Pair<AsynchronousServerSocketChannel, Collection<AsynchronousSocketChannel>> statefulStuff) {
        statefulStuff.getLeft().accept(statefulStuff, this); // accept more new connections as we go
        statefulStuff.getRight().add(result); // collect these in case we want them for some reason
        ByteBuffer buffer = ByteBuffer.allocate(4098); // 4k seems a nice number
        result.read(buffer, Pair.of(result, buffer), readForeverAndEverAndSoOn); // kick off the read "forever"
    }

    @Override
    public void failed(Throwable exc, Pair<AsynchronousServerSocketChannel, Collection<AsynchronousSocketChannel>> attachment) {
        // shout fire
    }
}

How to eliminate race condition in Rox NIO tutorial

I've been using this tutorial to build a simple file-transfer client/server using socket IO. I changed the response handler to accept multiple reads as parts of one file, as I will be dealing with large files, potentially up to 500 MB. The tutorial didn't account for large server responses, so I'm struggling a bit, and I've created a race condition.
Here's the response handler code:
public class RspHandler {
    private byte[] rsp = null;

    public synchronized boolean handleResponse(byte[] rsp) {
        this.rsp = rsp;
        this.notify();
        return true;
    }

    public synchronized void waitForResponse() {
        while (this.rsp == null) {
            try {
                this.wait();
            } catch (InterruptedException e) {
            }
        }
        System.out.println("Received Response : " + new String(this.rsp));
    }

    public synchronized void waitForFile(String filename) throws IOException {
        String filepath = "C:\\a\\received\\" + filename;
        FileOutputStream fos = new FileOutputStream(filepath);
        while (waitForFileChunk(fos) != -1) {}
        fos.close();
    }

    private synchronized int waitForFileChunk(FileOutputStream fos) throws IOException {
        while (this.rsp == null) {
            try {
                this.wait();
            } catch (InterruptedException e) {
            }
        }
        fos.write(this.rsp);
        int length = this.rsp.length;
        this.rsp = null;
        if (length < NioClient.READ_SIZE) { // Probably a bad way to find the end of the file
            return -1;
        } else {
            return length;
        }
    }
}
The program creates a RspHandler on the main thread and passes it to a client created on a separate thread. The main thread tells the client to request a file, then tells the RspHandler to listen for a response. When the client reads from the server (it reads in chunks of about 1 KB right now), it calls the handleResponse(byte[] rsp) method, populating the rsp byte array.
Essentially, I'm not writing the received data to a file as fast as it comes in, so a new chunk can overwrite this.rsp before the previous one has been written. I'm a bit new to threads, so I'm not sure what to do to get rid of this race condition. Any hints?
This is classic consumer/producer. The most straightforward way to handle it is to use a BlockingQueue: the producer calls put(), the consumer calls take().
Note that using a BlockingQueue usually leads to the "how do I finish?" problem. The best way to handle that is the "poison pill" method, where the producer puts a special value on the queue which signals to the consumer that there is no more data, as in the sketch below.
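A minimal sketch of that pattern applied to this handler; the POISON marker and the endOfFile() hook are illustrative additions (the producer would trigger it from a length header or server close), with the selector thread as producer and the main thread as consumer:
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class RspHandler {
    // Zero-length array as the poison pill: the producer never sends an
    // empty chunk, so the consumer can treat it as end-of-stream.
    private static final byte[] POISON = new byte[0];
    private final BlockingQueue<byte[]> chunks = new LinkedBlockingQueue<>();

    // Producer side, called from the selector thread: queuing never loses
    // a chunk, unlike overwriting a single rsp field.
    public boolean handleResponse(byte[] rsp) {
        chunks.add(rsp.clone()); // copy, since the caller may reuse its buffer
        return true;
    }

    // Producer signals that the file is complete.
    public void endOfFile() {
        chunks.add(POISON);
    }

    // Consumer side, called from the main thread: take() blocks until a chunk arrives.
    public void waitForFile(String filename) throws IOException, InterruptedException {
        try (FileOutputStream fos = new FileOutputStream("C:\\a\\received\\" + filename)) {
            byte[] chunk;
            while ((chunk = chunks.take()) != POISON) {
                fos.write(chunk);
            }
        }
    }
}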

Implementing long polling in an asynchronous fashion

Is it possible to take an HttpServletRequest away from its thread, dissolve this thread (i.e. return it to the pool), but keep the underlying connection with the browser alive until I get the results from a time-consuming operation (say, processing an image)? When the resulting data is ready, another method should be called asynchronously and be given the request as well as the data as parameters.
Usually, long polling works in a fairly blocking fashion, where the current thread is not released, which reduces the scalability of the server-side app in terms of concurrent connections.
Yes, you can do this with Servlet 3.0.
Below is a sample that writes the alert after 30 seconds (not tested):
@WebServlet(asyncSupported = true)
public class AsyncServlet extends HttpServlet {
    Timer timer = new Timer("ClientNotifier");

    public void doGet(HttpServletRequest req, HttpServletResponse res) {
        final AsyncContext aCtx = req.startAsync(req, res);
        // Suspend the request; respond after 30 secs
        timer.schedule(new TimerTask() {
            public void run() {
                try {
                    // read unread alerts count
                    int unreadAlertCount = alertManager.getUnreadAlerts(username);
                    // write unread alerts count
                    aCtx.getResponse().getWriter().write(String.valueOf(unreadAlertCount));
                } catch (Exception e) {
                    // ignore
                } finally {
                    aCtx.complete();
                }
            }
        }, 30000);
    }
}
Below is a sample that writes based on an event. The alertManager has to be implemented so that it notifies AlertNotificationHandler when a client has to be alerted.
@WebServlet(asyncSupported = true)
public class AsyncServlet extends HttpServlet {
    public void doGet(HttpServletRequest req, HttpServletResponse res) {
        final AsyncContext asyncCtx = req.startAsync(req, res);
        alertManager.register(new AlertNotificationHandler() {
            public void onNewAlert() { // Notified on new alerts
                try {
                    int unreadAlertCount = alertManager.getUnreadAlerts();
                    ServletResponse response = asyncCtx.getResponse();
                    writeResponse(response, unreadAlertCount); // Write unread alerts count
                } catch (Exception ex) {
                    // ignore
                } finally {
                    asyncCtx.complete(); // Closes the response
                }
            }
        });
    }
}
Yes, it's possible using Servlet spec version 3.0. An implementation I can recommend is the Jetty server.
