I noticed that when I add a little bit of latency onto one of my handlers (supposed to be executed by the DefaultEventExecutorGroup) that it causes the user to experience timeouts on their end? Without the latency everything seems to be running fine, but 1ms of latency shouldn't cause 1/3 of my users to timeout. My current load is around 20k requests per second, not necessarily all concurrent.
Here is my pipeline setup
@Override
public void initChannel(SocketChannel ch) throws Exception {
ChannelPipeline p = ch.pipeline();
p.addLast("frameDecoder", new LineBasedFrameDecoder(80));
p.addLast("stringDecoder", new StringDecoder(CharsetUtil.UTF_8));
p.addLast("stringEncoder", new StringEncoder(CharsetUtil.UTF_8));
p.addLast(CoolServer.getExecutors(), "cool_request", new CoolServerHandler());
// private static final EventExecutorGroup executors = new DefaultEventExecutorGroup(48);
// CoolServer.getExecutors() returns CoolServer.executors
}
Here is how my CoolServerHandler is set up (it extends SimpleChannelInboundHandler):
public class CoolServerHandler extends SimpleChannelInboundHandler<Object>{
//default constructor used
@Override
public void channelRead0(ChannelHandlerContext ctx, Object msg) throws Exception {
//LOOP TO ADD SOME LATENCY TO MY HANDLER
long currentTime = System.currentTimeMillis();
long timeIncrement = 1; // waits for this many milliseconds
for (long checkTime = System.currentTimeMillis(); checkTime < (currentTime + timeIncrement); checkTime = System.currentTimeMillis()){
//busy waiting
}
//encodeResponse taken out for simplicity, simple string manipulation function
String response = encodeResponse(key, true);
ReferenceCountUtil.retain(msg);
ctx.writeAndFlush(response);
}
@Override
public void channelReadComplete(ChannelHandlerContext ctx) throws Exception {
ctx.flush();
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
cause.printStackTrace();
ctx.close();
}
}
I have tried to keep the code simple in order to prevent this post from being too long; let me know if more information is needed. Thank you.
Related
I created a small Netty server to calculate the factorial of a BigInteger and send the results. The code is as follows.
Factorial.java
public class Factorial {
private int port;
public Factorial(int port) {
this.port = port;
}
public void run(int threadcount) throws Exception {
EventLoopGroup bossGroup = new NioEventLoopGroup();
EventLoopGroup workerGroup = new NioEventLoopGroup(threadcount);
try {
ServerBootstrap b = new ServerBootstrap();
b.group(bossGroup, workerGroup)
.channel(NioServerSocketChannel.class)
.childHandler(new ChannelInitializer<SocketChannel>() {
@Override
public void initChannel(SocketChannel ch) throws Exception {
ch.pipeline().addLast(new FactorialHandler());
}
})
.option(ChannelOption.SO_BACKLOG, 128)
.childOption(ChannelOption.SO_KEEPALIVE, true);
ChannelFuture f = b.bind(port).sync();
f.channel().closeFuture().sync();
} finally {
workerGroup.shutdownGracefully();
bossGroup.shutdownGracefully();
}
}
public static void main(String[] args) throws Exception {
int port = 15000;
new Factorial(port).run(Integer.parseInt(args[0]));
}
}
FactorialHandler.java
public class FactorialHandler extends ChannelInboundHandlerAdapter {
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) {
BigInteger result = BigInteger.ONE;
String resultString;
for (int i=2000; i>0; i--)
result = result.multiply(BigInteger.valueOf(i));
resultString = result.toString().substring(0, 3)+"\n";
ByteBuf buf = Unpooled.copiedBuffer(resultString.getBytes());
ctx.write(buf);
ctx.flush();
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
cause.printStackTrace();
ctx.close();
}
}
When I ran this I got the following error
Jun 08, 2018 5:28:09 PM io.netty.util.ResourceLeakDetector reportTracedLeak
SEVERE: LEAK: ByteBuf.release() was not called before it's garbage-collected. See http://netty.io/wiki/reference-counted-objects.html for more information.
Recent access records:
As explained in the linked page, I released the ByteBuf by calling buf.release() in the channelRead method after ctx.flush().
But when I do that, the server starts throwing the following exception
io.netty.util.IllegalReferenceCountException: refCnt: 0, increment: 1
Can someone please tell me how to fix this issue?
The problem is not the outbound ByteBuf. Outbound ByteBufs are always taken care of for you (See OutboundMessages). The problem is the inbound ByteBuf. I'm looking at you, FactorialHandler. It extends ChannelInboundHandlerAdapter. Note this from the JavaDoc:
Be aware that messages are not released after the
channelRead(ChannelHandlerContext, Object) method returns
automatically. If you are looking for a ChannelInboundHandler
implementation that releases the received messages automatically,
please see SimpleChannelInboundHandler.
Your handler has a signature like this:
public void channelRead(ChannelHandlerContext ctx, Object msg)
That msg (which you don't use, by the way) is actually a ByteBuf, which is exactly what the JavaDoc note above is warning you about. (In the absence of any other ChannelHandlers, messages will always be instances of ByteBuf.)
So your options are:
Use a SimpleChannelInboundHandler which will clean up that reference for you.
At the end of your handler, release the inbound ByteBuf using ReferenceCountUtil.release(java.lang.Object msg).
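For illustration, here is a minimal sketch of the second option applied to FactorialHandler; it reuses the code from the question, and the only real change is the try/finally around the release:
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) {
    try {
        BigInteger result = BigInteger.ONE;
        for (int i = 2000; i > 0; i--) {
            result = result.multiply(BigInteger.valueOf(i));
        }
        String resultString = result.toString().substring(0, 3) + "\n";
        // The outbound buffer is released by Netty once it has been written to the socket.
        ctx.writeAndFlush(Unpooled.copiedBuffer(resultString.getBytes()));
    } finally {
        // The inbound msg is NOT released automatically by ChannelInboundHandlerAdapter,
        // so release it here to avoid the reported leak.
        ReferenceCountUtil.release(msg);
    }
}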
It's because you don't call msg.release() (msg is an instance of ByteBuf).
I have seen lots of questions around about chunked streams in netty, but most of them were solutions about outbound streams, not inbound streams.
I would like to understand how I can get the data from the channel and send it as an InputStream to my business logic without loading all the data into memory first.
Here's what I was trying to do:
public class ServerRequestHandler extends MessageToMessageDecoder<HttpObject> {
private HttpServletRequest request;
private PipedOutputStream os;
private PipedInputStream is;
@Override
public void handlerAdded(ChannelHandlerContext ctx) throws Exception {
super.handlerAdded(ctx);
this.os = new PipedOutputStream();
this.is = new PipedInputStream(os);
}
@Override
public void handlerRemoved(ChannelHandlerContext ctx) throws Exception {
super.handlerRemoved(ctx);
this.os.close();
this.is.close();
}
@Override
protected void decode(ChannelHandlerContext ctx, HttpObject msg, List<Object> out)
throws Exception {
if (msg instanceof HttpRequest) {
this.request = new CustomHttpRequest((HttpRequest) msg, this.is);
out.add(this.request);
}
if (msg instanceof HttpContent) {
ByteBuf body = ((HttpContent) msg).content();
if (body.readableBytes() > 0)
body.readBytes(os, body.readableBytes());
if (msg instanceof LastHttpContent) {
os.close();
}
}
}
}
And then I have another Handler that will get my CustomHttpRequest and send to what I call a ServiceHandler, where my business logic will read from the InputStream.
public class ServiceRouterHandler extends SimpleChannelInboundHandler<CustomHttpRequest> {
...
@Override
public void channelRead0(ChannelHandlerContext ctx, CustomHttpRequest request) throws IOException {
...
future = serviceHandler.handle(request, response);
...
This does not work because when my handler forwards the CustomHttpRequest to the ServiceHandler and it tries to read from the InputStream, the thread blocks, and the HttpContent is never handled in my decoder.
I know I can try to create a separate thread for my Business Logic, but I have the impression I am overcomplicating things here.
I looked at ByteBufInputStream, but it says that
Please note that it only reads up to the number of readable bytes
determined at the moment of construction.
So I don't think it will work for chunked HTTP requests. Also, I saw ChunkedWriteHandler, which seems fine for outbound chunks, but I couldn't find anything like a ChunkedReadHandler...
So my question is: what's the best way to do this? My requirements are:
- Do not keep the data in memory before handing it to the ServiceHandlers;
- The ServiceHandlers API should be Netty agnostic (that's why I use my CustomHttpRequest instead of Netty's HttpRequest);
UPDATE
I have got this to work using a more reactive approach on the CustomHttpRequest. Now, the request does not provide an InputStream for the ServiceHandlers to read from (which was blocking); instead, the CustomHttpRequest has a readInto(OutputStream) method that returns a Future, and the service handler is only executed once this OutputStream has been filled. Here is what it looks like:
public class CustomHttpRequest {
...constructors and other methods hidden...
private final SettableFuture<Void> writeCompleteFuture = SettableFuture.create();
private final SettableFuture<OutputStream> outputStreamFuture = SettableFuture.create();
private ListenableFuture<Void> lastWriteFuture = Futures.transform(outputStreamFuture, x-> null);
public ListenableFuture<Void> readInto(OutputStream os) throws IOException {
outputStreamFuture.set(os);
return this.writeCompleteFuture;
}
ListenableFuture<Void> writeChunk(byte[] buf) {
this.lastWriteFuture = Futures.transform(lastWriteFuture, (AsyncFunction<Void, Void>) (os) -> {
outputStreamFuture.get().write(buf);
return Futures.immediateFuture(null);
});
return lastWriteFuture;
}
void complete() {
ListenableFuture<Void> future =
Futures.transform(lastWriteFuture, (AsyncFunction<Void, Void>) x -> {
outputStreamFuture.get().close();
return Futures.immediateFuture(null);
});
addFinallyCallback(future, () -> {
this.writeCompleteFuture.set(null);
});
}
}
And my updated ServerRequestHandler looks like this:
public class ServerRequestHandler extends MessageToMessageDecoder<HttpObject> {
private NettyHttpServletRequestAdaptor request;
@Override
public void handlerAdded(ChannelHandlerContext ctx) throws Exception {
super.handlerAdded(ctx);
}
@Override
public void handlerRemoved(ChannelHandlerContext ctx) throws Exception {
super.handlerRemoved(ctx);
}
@Override
protected void decode(ChannelHandlerContext ctx, HttpObject msg, List<Object> out)
throws Exception {
if (msg instanceof HttpRequest) {
HttpRequest request = (HttpRequest) msg;
this.request = new CustomHttpRequest(request, ctx.channel());
out.add(this.request);
}
if (msg instanceof HttpContent) {
ByteBuf buf = ((HttpContent) msg).content();
byte[] bytes = new byte[buf.readableBytes()];
buf.readBytes(bytes);
this.request.writeChunk(bytes);
if (msg instanceof LastHttpContent) {
this.request.complete();
}
}
}
}
This works pretty well, but still, note that everything here is done in a single thread, and maybe for large data I might want to spawn a new thread to release that thread for other channels.
You're on the right track - if your serviceHandler.handle(request, response); call is doing a blocking read, you need to create a new thread for it. Remember, there are supposed to be only a small number of Netty worker threads, so you shouldn't do any blocking calls in worker threads.
The other question to ask is, does your service handler need to be blocking? What does it do? If it's shoveling the data over the network anyway, can you incorporate it into the Netty pipeline in a non-blocking way? That way, everything is async all the way, no blocking calls and extra threads required.
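As a rough sketch of the first suggestion (class name HttpServerInitializer, the HttpServerCodec in front of ServerRequestHandler, and the no-argument ServiceRouterHandler constructor are all assumptions for illustration), the blocking handler can be bound to its own EventExecutorGroup so the I/O threads stay free:
public class HttpServerInitializer extends ChannelInitializer<SocketChannel> {
    // Shared across channels; size it for the expected number of concurrent blocking calls.
    private static final EventExecutorGroup blockingGroup = new DefaultEventExecutorGroup(16);

    @Override
    public void initChannel(SocketChannel ch) {
        ChannelPipeline p = ch.pipeline();
        p.addLast(new HttpServerCodec());       // assumed HTTP decoder/encoder
        p.addLast(new ServerRequestHandler());  // the decoder from the question
        // channelRead0 of ServiceRouterHandler may block, so run it on blockingGroup
        // instead of on the NioEventLoopGroup worker threads.
        p.addLast(blockingGroup, "serviceRouter", new ServiceRouterHandler());
    }
}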
I want to send the world state to all channels every 100 ms, but the task runs only once.
My code:
public class IncomeMessageTcpHandler extends SimpleChannelInboundHandler<byte[]> {
@Override
public void channelActive(ChannelHandlerContext ctx) throws Exception {
Channel channel = ctx.channel();
channel.eventLoop().scheduleAtFixedRate(new Runnable() {
@Override
public void run() {
System.out.println("send");
channel.writeAndFlush(GameLogic.Instance.flushMessageData());
}
}, 100, 100, TimeUnit.MILLISECONDS);
}
}
Right now the task runs only once.
I am using Netty 4.1.13.Final.
public class UnityServerTcpChannelInitializer extends ChannelInitializer<SocketChannel> {
@Override
public void initChannel(SocketChannel ch) throws Exception {
ChannelPipeline p = ch.pipeline();
p.addLast("frameDecoder", new FixedLengthFrameDecoder(6));
p.addLast("bytesDecoder", new ByteArrayDecoder());
p.addLast("bytesEncoder", new ByteArrayEncoder());
p.addLast(new IncomeMessageTcpHandler());
}
}
When I comment out channel.writeAndFlush(GameLogic.Instance.flushMessageData()); the task runs every 100 ms.
Omg... my fault.
My method GameLogic.Instance.flushMessageData() throws a NullPointerException, and I did not realize that an exception thrown from the Runnable cancels the scheduled task.
That's why it stopped running without any warning.
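For anyone else hitting this: an exception thrown from a task scheduled with scheduleAtFixedRate silently cancels all further runs, so a defensive sketch of the original schedule is to catch everything inside the Runnable:
channel.eventLoop().scheduleAtFixedRate(() -> {
    try {
        channel.writeAndFlush(GameLogic.Instance.flushMessageData());
    } catch (Throwable t) {
        // Without this catch, the first uncaught exception stops the periodic task
        // with no warning at all.
        t.printStackTrace();
    }
}, 100, 100, TimeUnit.MILLISECONDS);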
I am trying to play around with the Netty API, using the Netty telnet server, to check whether truly asynchronous behaviour can be observed or not.
Below are the three classes being used
TelnetServer.java
public class TelnetServer {
public static void main(String[] args) throws InterruptedException {
// TODO Auto-generated method stub
EventLoopGroup bossGroup = new NioEventLoopGroup(1);
EventLoopGroup workerGroup = new NioEventLoopGroup();
try {
ServerBootstrap b = new ServerBootstrap();
b.group(bossGroup, workerGroup)
.channel(NioServerSocketChannel.class)
.handler(new LoggingHandler(LogLevel.INFO))
.childHandler(new TelnetServerInitializer());
b.bind(8989).sync().channel().closeFuture().sync();
} finally {
bossGroup.shutdownGracefully();
workerGroup.shutdownGracefully();
}
}
}
TelnetServerInitializer.java
public class TelnetServerInitializer extends ChannelInitializer<SocketChannel> {
private static final StringDecoder DECODER = new StringDecoder();
private static final StringEncoder ENCODER = new StringEncoder();
private static final TelnetServerHandler SERVER_HANDLER = new TelnetServerHandler();
final EventExecutorGroup executorGroup = new DefaultEventExecutorGroup(2);
public TelnetServerInitializer() {
}
@Override
public void initChannel(SocketChannel ch) throws Exception {
ChannelPipeline pipeline = ch.pipeline();
// Add the text line codec combination first,
pipeline.addLast(new DelimiterBasedFrameDecoder(8192, Delimiters.lineDelimiter()));
// the encoder and decoder are static as these are sharable
pipeline.addLast(DECODER);
pipeline.addLast(ENCODER);
// and then business logic.
pipeline.addLast(executorGroup,"handler",SERVER_HANDLER);
}
}
TelnetServerHandler.java
/**
* Handles a server-side channel.
*/
@Sharable
public class TelnetServerHandler extends SimpleChannelInboundHandler<String> {
@Override
public void channelActive(ChannelHandlerContext ctx) throws Exception {
// Send greeting for a new connection.
ctx.write("Welcome to " + InetAddress.getLocalHost().getHostName() + "!\r\n");
ctx.write("It is " + new Date() + " now.\r\n");
ctx.flush();
ctx.channel().config().setAutoRead(true);
}
@Override
public void channelRead0(ChannelHandlerContext ctx, String request) throws Exception {
// Generate and write a response.
System.out.println("request = "+ request);
String response;
boolean close = false;
if (request.isEmpty()) {
response = "Please type something.\r\n";
} else if ("bye".equals(request.toLowerCase())) {
response = "Have a good day!\r\n";
close = true;
} else {
response = "Did you say '" + request + "'?\r\n";
}
// We do not need to write a ChannelBuffer here.
// We know the encoder inserted at TelnetPipelineFactory will do the conversion.
ChannelFuture future = ctx.write(response);
Thread.sleep(10000);
// Close the connection after sending 'Have a good day!'
// if the client has sent 'bye'.
if (close) {
future.addListener(ChannelFutureListener.CLOSE);
}
}
@Override
public void channelReadComplete(ChannelHandlerContext ctx) {
ctx.flush();
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
cause.printStackTrace();
ctx.close();
}
}
Now when I connect through a telnet client and send the command hello three times,
the next request doesn't reach channelRead0 until the response to the previous one has been written. Is there any way I can make this completely asynchronous, so that all three hellos are received as soon as they are available on the socket?
Netty uses at most one thread for the incoming reads of a handler, meaning that the next call to channelRead will only be dispatched after the previous call has completed. This is required for the correct operation of most handlers, including sending responses back in the proper order. If the computation is really heavy, another solution is to use a custom thread pool for the messages.
If the other operation is instead another kind of connection, you should make that connection asynchronous too. You only get asynchronous behaviour if every part does this correctly.
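As a hedged illustration of that last point, the blocking Thread.sleep(10000) in TelnetServerHandler could be replaced with a scheduled write, so channelRead0 returns immediately and the next hello can be dispatched right away (the 10-second delay is kept only to mirror the original test, and the "bye" handling is omitted for brevity):
@Override
public void channelRead0(ChannelHandlerContext ctx, String request) {
    String response = request.isEmpty()
            ? "Please type something.\r\n"
            : "Did you say '" + request + "'?\r\n";
    // Schedule the reply instead of sleeping: the handler returns at once,
    // so the following messages can be read without waiting 10 seconds.
    ctx.executor().schedule(() -> ctx.writeAndFlush(response), 10, TimeUnit.SECONDS);
}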
I have created a fairly straightforward server using Netty 4. I have been able to scale it up to handle several thousand connections and it never climbs above ~40 threads.
In order to test it out, I have also created a test client that creates thousands of connections. Unfortunately this creates as many threads as it makes connections. I was hoping to minimize threads for the clients. I have looked at many posts on this. Many examples show a single-connection setup. This and this say to share the NioEventLoopGroup across clients, which I do. I'm getting a limited number of NioEventLoopGroup threads, but a thread per connection is being created somewhere else. I am not purposely creating threads in the pipeline and don't see what could be doing it.
Here is a snippet from the setup of my client code. It seems that it should maintain a fixed thread count based on what I've researched so far. Is there something I'm missing that I should be doing to prevent a thread per client connection?
Main
final EventLoopGroup group = new NioEventLoopGroup();
for (int i = 0; i < 100; i++) {
MockClient client = new MockClient(i, group);
client.connect();
}
MockClient
public class MockClient implements Runnable {
private final EventLoopGroup group;
private int identity;
public MockClient(int identity, final EventLoopGroup group) {
this.identity = identity;
this.group = group;
}
@Override
public void run() {
try {
connect();
} catch (Exception e) {}
}
public void connect() throws Exception{
Bootstrap b = new Bootstrap();
b.group(group)
.channel(NioSocketChannel.class)
.handler(new MockClientInitializer(identity, this));
final Runnable that = this;
// Start the connection attempt
b.connect(config.getHost(), config.getPort()).addListener(new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture future) throws Exception {
if (future.isSuccess()) {
Channel ch = future.sync().channel();
} else {
//if the server is down, try again in a few seconds
future.channel().eventLoop().schedule(that, 15, TimeUnit.SECONDS);
}
}
});
}
}
As has happened to me many times before, explaining the problem in detail made me think about it more and I came across the issue. I wanted to provide it here should anyone else come across the same issue with creating thousands of Netty clients.
I have one path in my pipeline that creates a timeout task to simulate a client connection rebooting. It turns out it was this timer task that was creating the extra threads: each time a connection received a 'reboot' signal from the server (which happens every so often), another thread was created, until there was a thread per connection.
Handler
private final HashedWheelTimer timer;
@Override
protected void channelRead0(ChannelHandlerContext ctx, Packet msg) throws Exception {
Packet packet = reboot();
ChannelFutureListener closeHandler = new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture future) throws Exception {
RebootTimeoutTask timeoutTask = new RebootTimeoutTask(identity, client);
timer.newTimeout(timeoutTask, SECONDS_FOR_REBOOT, TimeUnit.SECONDS);
}
};
ctx.writeAndFlush(packet).addListener(new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture future) throws Exception {
if (future.isSuccess()) {
future.channel().close().addListener(closeHandler);
} else {
future.channel().close();
}
}
});
}
Timeout Task
public class RebootTimeoutTask implements TimerTask {
public RebootTimeoutTask(...) {...}
@Override
public void run(Timeout timeout) throws Exception {
client.connect(identity);
}
}
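If the root cause is a fresh HashedWheelTimer per connection (each instance starts its own worker thread), two hedged alternatives are to share a single timer across all clients, or to skip the extra timer entirely and schedule the reconnect on the event loop that already exists (this sketch uses MockClient's no-argument connect() and the SECONDS_FOR_REBOOT constant from the handler above):
// Option 1: one shared timer for every handler instance, instead of one per connection.
private static final HashedWheelTimer SHARED_TIMER = new HashedWheelTimer();

// Option 2: no extra thread at all - reuse the channel's event loop inside closeHandler.
future.channel().eventLoop().schedule(() -> {
    try {
        client.connect();
    } catch (Exception e) {
        e.printStackTrace();
    }
}, SECONDS_FOR_REBOOT, TimeUnit.SECONDS);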