I am trying to implement a single request/response on an AsynchronousSocketChannel in a vert.x worker verticle, using CompletionHandlers rather than Futures. From the vert.x documentation:
"Worker verticles are never executed concurrently by more than one thread."
So here is my code (I'm not sure I got the socket handling 100% right, so please comment):
// omitted: asynchronousSocketChannel.open, connect ...
eventBus.registerHandler(address, new Handler<Message<JsonObject>>() {
    @Override
    public void handle(final Message<JsonObject> event) {
        final ByteBuffer receivingBuffer = ByteBuffer.allocateDirect(2048);
        final ByteBuffer sendingBuffer = ByteBuffer.wrap("Foo".getBytes());
        asynchronousSocketChannel.write(sendingBuffer, 0L, new CompletionHandler<Integer, Long>() {
            public void completed(final Integer result, final Long attachment) {
                if (sendingBuffer.hasRemaining()) {
                    long newFilePosition = attachment + result;
                    asynchronousSocketChannel.write(sendingBuffer, newFilePosition, this);
                }
                asynchronousSocketChannel.read(receivingBuffer, 0L, new CompletionHandler<Integer, Long>() {
                    CharBuffer charBuffer = null;
                    final Charset charset = Charset.defaultCharset();
                    final CharsetDecoder decoder = charset.newDecoder();

                    public void completed(final Integer result, final Long attachment) {
                        if (result > 0) {
                            long p = attachment + result;
                            asynchronousSocketChannel.read(receivingBuffer, p, this);
                        }
                        receivingBuffer.flip();
                        try {
                            charBuffer = decoder.decode(receivingBuffer);
                            event.reply(charBuffer.toString()); // pseudo code
                        } catch (CharacterCodingException e) { }
                    }

                    public void failed(final Throwable exc, final Long attachment) { }
                });
            }

            public void failed(final Throwable exc, final Long attachment) { }
        });
    }
});
I am hitting a lot of ReadPendingExceptions and WritePendingExceptions during load testing, which seems strange if there is really only one thread at a time in the handle method. How can a read or a write not have fully completed if only one thread at a time is working with the AsynchronousSocketChannel?
Handlers of an AsynchronousSocketChannel are executed on its AsynchronousChannelGroup, which wraps an ExecutorService. Unless you make special arrangements, those handlers run in parallel with the code that started the I/O operation.
To execute the I/O completion logic within the verticle, register a handler on the event bus from that verticle which does what the AsynchronousSocketChannel's handler does now.
The AsynchronousSocketChannel's handler should then only pack its arguments (result and attachment) into a message and send that message to the event bus.
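For example, the write handler could be reduced to something like this (a minimal sketch against the vert.x 2 API used in the question; the event bus addresses are made up for illustration):

asynchronousSocketChannel.write(sendingBuffer, 0L, new CompletionHandler<Integer, Long>() {
    public void completed(final Integer result, final Long attachment) {
        // No buffer or channel work here -- just forward the event to the
        // verticle, which stays the only place that touches the channel state.
        eventBus.send("socket.write.completed", new JsonObject()
                .putNumber("bytesWritten", result)
                .putNumber("position", attachment));
    }

    public void failed(final Throwable exc, final Long attachment) {
        eventBus.send("socket.write.failed", new JsonObject()
                .putString("error", String.valueOf(exc)));
    }
});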
I have a singleton client with the contract below:
public interface MQPublisher {

    void publish(String message) throws ClientConnectionException, ClientErrorException;

    void start() throws ClientException;

    void stop();
}
The class which uses this publisher is below:
public class MessagePublisher {

    @Autowired
    private MQPublisher publisher;

    private AtomicBoolean isPublisherRunning = new AtomicBoolean(false);

    public void startPublisher() {
        if (!isPublisherRunning.get()) {
            publisher.start();
            isPublisherRunning.compareAndSet(false, true);
        }
    }

    @Retry(name = RETRY_MSG_UPLOAD)
    public void sendMessage(String msg) {
        try {
            startPublisher();
            publisher.publish(msg); // when multiple requests fail with the same exception, what will happen??
        } catch (Exception e) {
            log.error("Exception while publishing message : {}", msg, e);
            publisher.stop();
            isPublisherRunning.compareAndSet(true, false);
            throw e;
        }
    }
}
We are using the resilience4j retry functionality to retry the sendMessage method. This works fine for a single request. Now consider the case where multiple requests are processed in parallel and all of them fail with an exception. These requests will be retried, and there is a chance that one thread starts the publisher while another stops it, so they throw exceptions again. How can this scenario be handled in a cleaner way?
It isn't clear why the whole publisher should be stopped on failure. Nevertheless, if there are real reasons for that, I would change the stop logic to use an atomic timestamp that is reset on each message send, and stop the publisher only after at least 5 seconds (or whatever time a message needs to be sent successfully) have passed since the last send.
Something like this:
@Slf4j
public class MessagePublisher {

    private static final int RETRY_MSG_UPLOAD = 10;

    @Autowired
    private MQPublisher publisher;

    private AtomicBoolean isPublisherRunning = new AtomicBoolean(false);
    private AtomicLong publishStart = new AtomicLong();

    public void startPublisher() {
        if (!isPublisherRunning.get()) {
            publisher.start();
            isPublisherRunning.compareAndSet(false, true);
        }
    }

    @Retryable(maxAttempts = RETRY_MSG_UPLOAD)
    public void sendMessage(String msg) throws InterruptedException {
        try {
            startPublisher();
            publishStart.set(System.nanoTime());
            publisher.publish(msg);
        } catch (Exception e) {
            log.error("Exception while publishing message : {}", msg, e);
            // wait until at least 5 seconds have passed since the last publish attempt
            while (System.nanoTime() < publishStart.get() + 5_000_000_000L) {
                Thread.sleep(1000);
            }
            publisher.stop();
            isPublisherRunning.compareAndSet(true, false);
            throw e;
        }
    }
}
I think it is important to mention (as you just did) that this is a terrible design, and that such calculations should be done by the publisher implementer and not by the caller.
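To illustrate that point, here is a minimal sketch of a wrapper that owns the start/stop lifecycle behind the MQPublisher contract, so callers never coordinate restarts themselves. The ResilientPublisher name and the coarse synchronized locking are assumptions of this sketch, not part of the original code:

// Hypothetical wrapper: all lifecycle bookkeeping lives with the publisher,
// so MessagePublisher.sendMessage() shrinks to a plain publish() call.
public class ResilientPublisher {

    private final MQPublisher delegate;
    private boolean running; // guarded by "this"

    public ResilientPublisher(MQPublisher delegate) {
        this.delegate = delegate;
    }

    public synchronized void publish(String message)
            throws ClientException, ClientConnectionException, ClientErrorException {
        if (!running) {       // lazy, idempotent start
            delegate.start();
            running = true;
        }
        try {
            delegate.publish(message);
        } catch (ClientConnectionException e) {
            delegate.stop();  // safe: we hold the lock, no other thread can interleave
            running = false;
            throw e;          // let the caller's retry re-enter and restart
        }
    }
}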
I have seen lots of questions about chunked streams in Netty, but most of them were about outbound streams, not inbound streams.
I would like to understand how I can get the data from the channel and hand it to my business logic as an InputStream, without loading all of the data into memory first.
Here's what I was trying to do:
public class ServerRequestHandler extends MessageToMessageDecoder<HttpObject> {

    private HttpServletRequest request;
    private PipedOutputStream os;
    private PipedInputStream is;

    @Override
    public void handlerAdded(ChannelHandlerContext ctx) throws Exception {
        super.handlerAdded(ctx);
        this.os = new PipedOutputStream();
        this.is = new PipedInputStream(os);
    }

    @Override
    public void handlerRemoved(ChannelHandlerContext ctx) throws Exception {
        super.handlerRemoved(ctx);
        this.os.close();
        this.is.close();
    }

    @Override
    protected void decode(ChannelHandlerContext ctx, HttpObject msg, List<Object> out)
            throws Exception {
        if (msg instanceof HttpRequest) {
            this.request = new CustomHttpRequest((HttpRequest) msg, this.is);
            out.add(this.request);
        }
        if (msg instanceof HttpContent) {
            ByteBuf body = ((HttpContent) msg).content();
            if (body.readableBytes() > 0)
                body.readBytes(os, body.readableBytes());
            if (msg instanceof LastHttpContent) {
                os.close();
            }
        }
    }
}
And then I have another handler that takes my CustomHttpRequest and forwards it to what I call a ServiceHandler, where my business logic will read from the InputStream.
public class ServiceRouterHandler extends SimpleChannelInboundHandler<CustomHttpRequest> {
    ...
    @Override
    public void channelRead0(ChannelHandlerContext ctx, CustomHttpRequest request) throws IOException {
        ...
        future = serviceHandler.handle(request, response);
        ...
This does not work: when my handler forwards the CustomHttpRequest to the ServiceHandler and it tries to read from the InputStream, the thread blocks and the HttpContent is never handled in my decoder.
I know I could create a separate thread for my business logic, but I have the impression I am overcomplicating things here.
I looked at ByteBufInputStream, but its documentation says:
"Please note that it only reads up to the number of readable bytes determined at the moment of construction."
So I don't think it will work for chunked HTTP requests. Also, I saw ChunkedWriteHandler, which seems fine for outbound chunks, but I couldn't find anything like a ChunkedReadHandler...
So my question is: what's the best way to do this? My requirements are:
- Do not keep the data in memory before handing it to the ServiceHandlers;
- The ServiceHandlers API should be Netty agnostic (that's why I use my CustomHttpRequest instead of Netty's HttpRequest).
UPDATE
I have got this to work using a more reactive approach on the CustomHttpRequest. Now the request does not expose an InputStream for the ServiceHandlers to read from (which was blocking); instead, the CustomHttpRequest has a readInto(OutputStream) method that returns a Future, and the service handler is only executed once this OutputStream has been fulfilled. Here is how it looks:
public class CustomHttpRequest {
    ...constructors and other methods hidden...

    private final SettableFuture<Void> writeCompleteFuture = SettableFuture.create();
    private final SettableFuture<OutputStream> outputStreamFuture = SettableFuture.create();
    private ListenableFuture<Void> lastWriteFuture = Futures.transform(outputStreamFuture, x -> null);

    public ListenableFuture<Void> readInto(OutputStream os) throws IOException {
        outputStreamFuture.set(os);
        return this.writeCompleteFuture;
    }

    ListenableFuture<Void> writeChunk(byte[] buf) {
        this.lastWriteFuture = Futures.transform(lastWriteFuture, (AsyncFunction<Void, Void>) (os) -> {
            outputStreamFuture.get().write(buf);
            return Futures.immediateFuture(null);
        });
        return lastWriteFuture;
    }

    void complete() {
        ListenableFuture<Void> future =
                Futures.transform(lastWriteFuture, (AsyncFunction<Void, Void>) x -> {
                    outputStreamFuture.get().close();
                    return Futures.immediateFuture(null);
                });
        addFinallyCallback(future, () -> {
            this.writeCompleteFuture.set(null);
        });
    }
}
And my updated ServerRequestHandler looks like this:
public class ServerRequestHandler extends MessageToMessageDecoder<HttpObject> {

    private CustomHttpRequest request;

    @Override
    public void handlerAdded(ChannelHandlerContext ctx) throws Exception {
        super.handlerAdded(ctx);
    }

    @Override
    public void handlerRemoved(ChannelHandlerContext ctx) throws Exception {
        super.handlerRemoved(ctx);
    }

    @Override
    protected void decode(ChannelHandlerContext ctx, HttpObject msg, List<Object> out)
            throws Exception {
        if (msg instanceof HttpRequest) {
            HttpRequest request = (HttpRequest) msg;
            this.request = new CustomHttpRequest(request, ctx.channel());
            out.add(this.request);
        }
        if (msg instanceof HttpContent) {
            ByteBuf buf = ((HttpContent) msg).content();
            byte[] bytes = new byte[buf.readableBytes()];
            buf.readBytes(bytes);
            this.request.writeChunk(bytes);
            if (msg instanceof LastHttpContent) {
                this.request.complete();
            }
        }
    }
}
This works pretty well, but note that everything here is still done on a single thread; for large payloads I might want to spawn a new thread to free that thread up for other channels.
You're on the right track: if your serviceHandler.handle(request, response) call does a blocking read, you need to move it to a different thread. Remember, there are supposed to be only a small number of Netty worker threads, so you shouldn't make any blocking calls on them.
The other question to ask is: does your service handler need to block at all? What does it do? If it's shoveling the data over the network anyway, can you incorporate it into the Netty pipeline in a non-blocking way? That way everything is async all the way down, with no blocking calls and no extra threads required.
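For the first option, a small sketch: handlers registered with their own EventExecutorGroup are invoked off the I/O event loop, so the blocking InputStream read no longer stalls the channel. The group size is arbitrary here, and HttpServerCodec stands in for whatever HTTP codec the pipeline already uses:

// Run the blocking ServiceRouterHandler on a separate executor group so
// PipedInputStream reads cannot block the Netty worker threads.
EventExecutorGroup serviceExecutor = new DefaultEventExecutorGroup(16);

ServerBootstrap b = new ServerBootstrap();
b.childHandler(new ChannelInitializer<SocketChannel>() {
    @Override
    protected void initChannel(SocketChannel ch) {
        ch.pipeline().addLast(new HttpServerCodec());
        ch.pipeline().addLast(new ServerRequestHandler());
        // Everything from here on runs on serviceExecutor, not the event loop:
        ch.pipeline().addLast(serviceExecutor, "service", new ServiceRouterHandler());
    }
});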
I have the following setup. A message distributor spreads inbound client messages across a configured number of message queues (LinkedBlockingQueues in my case), based on a unique identifier called appId (one per connected client):
public class MessageDistributor {

    private final List<BlockingQueue<MessageWrapper>> messageQueueBuckets;

    public MessageDistributor(List<BlockingQueue<MessageWrapper>> messageQueueBuckets) {
        this.messageQueueBuckets = messageQueueBuckets;
    }

    public void handle(String appId, MessageWrapper message) {
        // bucket selection: hash modulo bucket count
        int index = Math.abs(hash(appId)) % messageQueueBuckets.size();
        try {
            messageQueueBuckets.get(index).offer(message);
        } catch (Exception e) {
            // handle exception
        }
    }
}
As I also need to answer the message later on, I wrap the message object and the Netty channel inside a MessageWrapper:
public class MessageWrapper {

    private final Channel channel;
    private final Message message;

    public MessageWrapper(Channel channel, Message message) {
        this.channel = channel;
        this.message = message;
    }

    public Channel getChannel() {
        return channel;
    }

    public Message getMessage() {
        return message;
    }
}
Furthermore, there is a message consumer, which implements Runnable and takes new messages from its assigned blocking queue. This consumer performs some expensive/blocking operations that I want to keep off the main Netty event loop, and which should also not block operations for other connected clients too much, hence the use of several queues:
public class MessageConsumer implements Runnable {

    private final BlockingQueue<MessageWrapper> messageQueue;

    public MessageConsumer(BlockingQueue<MessageWrapper> messageQueue) {
        this.messageQueue = messageQueue;
    }

    @Override
    public void run() {
        while (true) {
            try {
                MessageWrapper msgWrap = messageQueue.take();
                Channel channel = msgWrap.getChannel();
                Message msg = msgWrap.getMessage();
                doSthExpensiveOrBlocking(channel, msg);
            } catch (Exception e) {
                // handle exception
            }
        }
    }

    public void doSthExpensiveOrBlocking(Channel channel, Message msg) {
        // some expensive/blocking operations
        channel.writeAndFlush(someResultObj);
    }
}
The setup of all classes looks like the following (the messageExecutor is a DefaultEventExecutorGroup with a size of 8):
int nrOfWorkers = config.getNumberOfClientMessageQueues();

List<BlockingQueue<MessageWrapper>> messageQueueBuckets = new ArrayList<>(nrOfWorkers);
for (int i = 0; i < nrOfWorkers; i++) {
    messageQueueBuckets.add(new LinkedBlockingQueue<>());
}

MessageDistributor distributor = new MessageDistributor(messageQueueBuckets);

List<MessageConsumer> consumers = new ArrayList<>(nrOfWorkers);
for (BlockingQueue<MessageWrapper> messageQueueBucket : messageQueueBuckets) {
    MessageConsumer consumer = new MessageConsumer(messageQueueBucket);
    consumers.add(consumer);
    messageExecutor.submit(consumer);
}
My goal with this approach is to isolate connected clients from each other (not fully, but at least a bit) and also to execute the expensive operations on different threads.
Now my question is: is it valid to wrap the Netty channel object inside this MessageWrapper for later use, and to call its write methods from some other thread?
UPDATE
Instead of building additional message distribution mechanics on top of Netty, I decided to simply go with a separate EventExecutorGroup for my blocking channel handlers and see how that works.
Yes, it is valid to call Channel.* methods from other threads. That said, these methods perform best when called from the EventLoop thread that belongs to the Channel.
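A quick sketch of the difference (someResultObj, partOne, and partTwo are placeholders):

// Safe from any thread: Netty hands the write over to the channel's event
// loop internally if the caller is not already on it.
channel.writeAndFlush(someResultObj);

// Slightly cheaper for several operations: jump onto the event loop once,
// then the calls below run without any cross-thread hand-off.
channel.eventLoop().execute(() -> {
    channel.write(partOne);
    channel.writeAndFlush(partTwo);
});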
I have the following situation: my Netty server will be getting data from a client at a blazing speed. I think the client is using some kind of PUSH mechanism to achieve that speed. I don't know what exactly a PUSH-POP mechanism is, but I do feel the client is using some mechanism for sending data at a very high rate.
My requirement is: I wrote a simple TCP Netty server that receives data from the client and just adds it to a BlockingQueue (implemented using an ArrayBlockingQueue). Since Netty is event based, the time taken to accept the data and store it in the queue is somewhat high, and this raises an exception on the client side saying that the Netty server is not running. My server is actually running perfectly; it is just that accepting a single piece of data and storing it in the queue takes too long. How can I prevent this?
Is there a faster queue for this situation? I am using a BlockingQueue because another thread takes data from the queue and processes it, so I need a thread-safe queue. How can I improve the performance of the server, or is there any way to insert data at a very high speed? All I care about is latency: it needs to be as low as possible.
My Server code:
public class Server implements Runnable {

    private final int port;
    static String message;
    Channel channel;
    ChannelFuture channelFuture;
    int rcvBuf, sndBuf, lowWaterMark, highWaterMark;

    public Server(int port) {
        this.port = port;
        rcvBuf = 2048;
        sndBuf = 2048;
        lowWaterMark = 1024;
        highWaterMark = 2048;
    }

    @Override
    public void run() {
        try {
            startServer();
        } catch (Exception ex) {
            System.err.println("Error in Server : " + ex);
            Logger.error(ex.getMessage());
        }
    }

    public void startServer() {
        EventLoopGroup group = new NioEventLoopGroup();
        try {
            ServerBootstrap b = new ServerBootstrap();
            b.group(group)
                    .channel(NioServerSocketChannel.class)
                    .localAddress(new InetSocketAddress(port))
                    .childOption(ChannelOption.SO_RCVBUF, rcvBuf * 2048)
                    .childOption(ChannelOption.SO_SNDBUF, sndBuf * 2048)
                    .childOption(ChannelOption.WRITE_BUFFER_WATER_MARK,
                            new WriteBufferWaterMark(lowWaterMark * 2048, highWaterMark * 2048))
                    .childOption(ChannelOption.TCP_NODELAY, true)
                    .childHandler(new ChannelInitializer<SocketChannel>() {
                        @Override
                        public void initChannel(SocketChannel ch) throws Exception {
                            channel = ch;
                            System.err.println("OMS connected : " + ch.localAddress());
                            ch.pipeline().addLast(new ReceiveFromOMSDecoder());
                        }
                    });
            channelFuture = b.bind(port).sync();
            this.channel = channelFuture.channel();
            channelFuture.channel().closeFuture().sync();
        } catch (InterruptedException ex) {
            System.err.println("Exception raised in SendToOMS class" + ex);
        } finally {
            group.shutdownGracefully();
        }
    }
}
My ServerHandler code:
@Sharable
public class ReceiveFromOMSDecoder extends MessageToMessageDecoder<ByteBuf> {

    private Charset charset;

    public ReceiveFromOMSDecoder() {
        this(Charset.defaultCharset());
    }

    /**
     * Creates a new instance with the specified character set.
     */
    public ReceiveFromOMSDecoder(Charset charset) {
        if (charset == null) {
            throw new NullPointerException("charset");
        }
        this.charset = charset;
    }

    @Override
    protected void decode(ChannelHandlerContext ctx, ByteBuf msg, List<Object> out) throws Exception {
        String buffer = msg.toString(charset);
        if (buffer != null) {
            Server.sq.insertStringIntoSendingQueue(buffer); // inserting into queue
        } else {
            Logger.error("Null string received" + buffer);
        }
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        System.err.println(cause);
    }
}
Three quickies:
It doesn't look like you're sending a response. You probably should.
Don't block the I/O thread. Use an EventExecutorGroup to dispatch the handling of the incoming payload, i.e. something like ChannelPipeline.addLast(EventExecutorGroup group, String name, ChannelHandler handler).
Just don't block in general. Ditch your ArrayBlockingQueue and take a look at JCTools or some other implementation to find a non-blocking analog (see the sketch below).
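A sketch of points 2 and 3 together (the handler names follow the code above; the JCTools queue type and its capacity are illustrative choices, not requirements):

// Point 2: run the decoder off the I/O thread via an EventExecutorGroup.
EventExecutorGroup decodeGroup = new DefaultEventExecutorGroup(4);
// Inside the ChannelInitializer:
ch.pipeline().addLast(decodeGroup, "omsDecoder", new ReceiveFromOMSDecoder());

// Point 3: a lock-free multi-producer/single-consumer queue from JCTools in
// place of ArrayBlockingQueue; offer() never blocks the producing thread.
MpscArrayQueue<String> sendingQueue = new MpscArrayQueue<>(64 * 1024);
if (!sendingQueue.offer(buffer)) {
    // Queue full: decide here whether to drop, retry, or apply backpressure.
}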
I am trying to play around with the Netty API, using the Netty telnet server, to check whether truly asynchronous behaviour can be observed or not.
Below are the three classes being used:
TelnetServer.java
public class TelnetServer {

    public static void main(String[] args) throws InterruptedException {
        EventLoopGroup bossGroup = new NioEventLoopGroup(1);
        EventLoopGroup workerGroup = new NioEventLoopGroup();
        try {
            ServerBootstrap b = new ServerBootstrap();
            b.group(bossGroup, workerGroup)
                    .channel(NioServerSocketChannel.class)
                    .handler(new LoggingHandler(LogLevel.INFO))
                    .childHandler(new TelnetServerInitializer());
            b.bind(8989).sync().channel().closeFuture().sync();
        } finally {
            bossGroup.shutdownGracefully();
            workerGroup.shutdownGracefully();
        }
    }
}
TelnetServerInitializer.java
public class TelnetServerInitializer extends ChannelInitializer<SocketChannel> {

    private static final StringDecoder DECODER = new StringDecoder();
    private static final StringEncoder ENCODER = new StringEncoder();
    private static final TelnetServerHandler SERVER_HANDLER = new TelnetServerHandler();

    final EventExecutorGroup executorGroup = new DefaultEventExecutorGroup(2);

    public TelnetServerInitializer() {
    }

    @Override
    public void initChannel(SocketChannel ch) throws Exception {
        ChannelPipeline pipeline = ch.pipeline();

        // Add the text line codec combination first,
        pipeline.addLast(new DelimiterBasedFrameDecoder(8192, Delimiters.lineDelimiter()));
        // the encoder and decoder are static as these are sharable
        pipeline.addLast(DECODER);
        pipeline.addLast(ENCODER);

        // and then business logic.
        pipeline.addLast(executorGroup, "handler", SERVER_HANDLER);
    }
}
TelnetServerHandler.java
/**
 * Handles a server-side channel.
 */
@Sharable
public class TelnetServerHandler extends SimpleChannelInboundHandler<String> {

    @Override
    public void channelActive(ChannelHandlerContext ctx) throws Exception {
        // Send greeting for a new connection.
        ctx.write("Welcome to " + InetAddress.getLocalHost().getHostName() + "!\r\n");
        ctx.write("It is " + new Date() + " now.\r\n");
        ctx.flush();
        ctx.channel().config().setAutoRead(true);
    }

    @Override
    public void channelRead0(ChannelHandlerContext ctx, String request) throws Exception {
        // Generate and write a response.
        System.out.println("request = " + request);
        String response;
        boolean close = false;
        if (request.isEmpty()) {
            response = "Please type something.\r\n";
        } else if ("bye".equals(request.toLowerCase())) {
            response = "Have a good day!\r\n";
            close = true;
        } else {
            response = "Did you say '" + request + "'?\r\n";
        }

        // We do not need to write a ChannelBuffer here.
        // We know the encoder inserted at TelnetPipelineFactory will do the conversion.
        ChannelFuture future = ctx.write(response);

        Thread.sleep(10000);

        // Close the connection after sending 'Have a good day!'
        // if the client has sent 'bye'.
        if (close) {
            future.addListener(ChannelFutureListener.CLOSE);
        }
    }

    @Override
    public void channelReadComplete(ChannelHandlerContext ctx) {
        ctx.flush();
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        cause.printStackTrace();
        ctx.close();
    }
}
Now when I connect through a telnet client and send the command hello three times, the next request doesn't reach channelRead until the first call to channelRead has completed. Is there any way I can make this completely asynchronous, so that each hello is received as soon as it is available on the socket?
Netty uses at most one thread per handler for incoming reads, meaning that the next call to channelRead will only be dispatched after the previous call has completed. This is required for the correct working of most handlers, including sending back messages in the proper order. If the computation really is that expensive, another solution is to use a custom thread pool for the messages.
If the slow operation is instead another kind of connection, you should make that an asynchronous connection too. You only get fully asynchronous behaviour if every part does this correctly.
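For instance, if the Thread.sleep(10000) above just simulates slow per-message work, the same delay can be expressed without blocking the handler thread (a sketch reusing the names from channelRead0):

// Write and flush the response immediately, then finish the "slow" part in a
// scheduled task: the event executor stays free to dispatch the next lines.
ChannelFuture future = ctx.writeAndFlush(response);
final boolean closeAfter = close; // local copy: "close" is reassigned above
ctx.executor().schedule(() -> {
    if (closeAfter) {
        future.addListener(ChannelFutureListener.CLOSE);
    }
}, 10, TimeUnit.SECONDS);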