I am using Netty 3.2.7. I am trying to write functionality into my client such that if no messages have been written for a certain amount of time (say, 30 seconds), a "keep-alive" message is sent to the server.
After some digging, I found that WriteTimeoutHandler should enable me to do this. I found this explanation here: https://issues.jboss.org/browse/NETTY-79.
The example given in the Netty documentation is:
public ChannelPipeline getPipeline() {
    // An example configuration that implements 30-second write timeout:
    return Channels.pipeline(
        new WriteTimeoutHandler(timer, 30), // timer must be shared.
        new MyHandler());
}
In my test client, I have done just this. In MyHandler, I also overrode the exceptionCaught() method:
public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) {
    if (e.getCause() instanceof WriteTimeoutException) {
        log.info("Client sending keep alive!");
        ChannelBuffer keepAlive = ChannelBuffers.buffer(KEEP_ALIVE_MSG_STR.length());
        keepAlive.writeBytes(KEEP_ALIVE_MSG_STR.getBytes());
        Channels.write(ctx, Channels.future(e.getChannel()), keepAlive);
    }
}
No matter how long the client goes without writing anything to the channel, the exceptionCaught() method I have overridden is never called.
Looking at the source of WriteTimeoutHandler, its writeRequested() implementation is:
public void writeRequested(ChannelHandlerContext ctx, MessageEvent e)
        throws Exception {
    long timeoutMillis = getTimeoutMillis(e);
    if (timeoutMillis > 0) {
        // Set timeout only when getTimeoutMillis() returns a positive value.
        ChannelFuture future = e.getFuture();
        final Timeout timeout = timer.newTimeout(
            new WriteTimeoutTask(ctx, future),
            timeoutMillis, TimeUnit.MILLISECONDS);
        future.addListener(new TimeoutCanceller(timeout));
    }
    super.writeRequested(ctx, e);
}
Here, it seems that this implementation says, "When a write is requested, make a new timeout. When the write succeeds, cancel the timeout."
Using a debugger, it does seem that this is what is happening. As soon as the write completes, the timeout is cancelled. This is not the behavior I want. The behavior I want is: "If the client has not written any information to the channel for 30 seconds, throw a WriteTimeoutException."
So, is this not what WriteTimeoutHandler is for? This is how I interpreted it from what I've read online, but the implementation does not seem to work this way. Am I using it wrong? Should I use something else? In our Mina version of the same client I am trying to rewrite, I see that the sessionIdle() method is overridden to achieve the behavior I want, but this method is not available in Netty.
For Netty 4.0 and newer, you should extend ChannelDuplexHandler, as in this example from the IdleStateHandler documentation:
// An example that sends a ping message when there is no outbound traffic
// for 30 seconds. The connection is closed when there is no inbound traffic
// for 60 seconds.
public class MyChannelInitializer extends ChannelInitializer<Channel> {
    @Override
    public void initChannel(Channel channel) {
        channel.pipeline().addLast("idleStateHandler", new IdleStateHandler(60, 30, 0));
        channel.pipeline().addLast("myHandler", new MyHandler());
    }
}
// Handler should handle the IdleStateEvent triggered by IdleStateHandler.
public class MyHandler extends ChannelDuplexHandler {
    @Override
    public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
        if (evt instanceof IdleStateEvent) {
            IdleStateEvent e = (IdleStateEvent) evt;
            if (e.state() == IdleState.READER_IDLE) {
                ctx.close();
            } else if (e.state() == IdleState.WRITER_IDLE) {
                ctx.writeAndFlush(new PingMessage());
            }
        }
    }
}
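For completeness, here is a minimal sketch (not part of the documentation example) of wiring that initializer into a client Bootstrap; the host, the port, and the PingMessage class are placeholders you would replace with your own:

EventLoopGroup group = new NioEventLoopGroup();
try {
    Bootstrap bootstrap = new Bootstrap()
            .group(group)
            .channel(NioSocketChannel.class)
            .handler(new MyChannelInitializer());
    ChannelFuture future = bootstrap.connect("localhost", 8080).sync(); // placeholder address
    future.channel().closeFuture().sync();
} finally {
    group.shutdownGracefully();
}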
I would suggest adding the IdleStateHandler and then your own implementation of IdleStateAwareChannelHandler, which can react to the idle state. This has worked out very well for me on many different projects.
The javadocs list the following example, which you could use as the basis of your implementation:
public class MyPipelineFactory implements ChannelPipelineFactory {

    private final Timer timer;
    private final ChannelHandler idleStateHandler;

    public MyPipelineFactory(Timer timer) {
        this.timer = timer;
        this.idleStateHandler = new IdleStateHandler(timer, 60, 30, 0);
        // timer must be shared.
    }

    public ChannelPipeline getPipeline() {
        return Channels.pipeline(
            idleStateHandler,
            new MyHandler());
    }
}
// Handler should handle the IdleStateEvent triggered by IdleStateHandler.
public class MyHandler extends IdleStateAwareChannelHandler {
    @Override
    public void channelIdle(ChannelHandlerContext ctx, IdleStateEvent e) {
        if (e.getState() == IdleState.READER_IDLE) {
            e.getChannel().close();
        } else if (e.getState() == IdleState.WRITER_IDLE) {
            e.getChannel().write(new PingMessage());
        }
    }
}
ServerBootstrap bootstrap = ...;
Timer timer = new HashedWheelTimer();
...
bootstrap.setPipelineFactory(new MyPipelineFactory(timer));
...
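Translated to the keep-alive from the original question, the WRITER_IDLE branch could send the keep-alive buffer instead of a PingMessage. A rough sketch (it reuses KEEP_ALIVE_MSG_STR from the question; the handler name is made up):

public class KeepAliveHandler extends IdleStateAwareChannelHandler {
    @Override
    public void channelIdle(ChannelHandlerContext ctx, IdleStateEvent e) {
        if (e.getState() == IdleState.WRITER_IDLE) {
            // nothing has been written for 30 seconds, so send the keep-alive message
            ChannelBuffer keepAlive = ChannelBuffers.copiedBuffer(KEEP_ALIVE_MSG_STR.getBytes());
            e.getChannel().write(keepAlive);
        }
    }
}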
I'm quite new to Netty. I want to create a TCP server which does custom application-layer handshaking when a connection is established. After the handshaking I want to pass the messages (ByteBuf) to a queue so that they can be processed by some other threads.
My question is: can I have multiple ChannelInboundHandlerAdapters in the channel pipeline, one for the application-layer handshaking protocol and the other for passing the messages to the queue? Furthermore, I want to know how the messages flow through the pipeline. If a message is received at one handler (or decoder/encoder), how is it passed to another handler?
Specifically, if I change the EchoServer from here and add another ChannelInboundHandlerAdapter, the echo server handler stops receiving any messages.
ServerBootstrap b = new ServerBootstrap();
b.group(group)
 .channel(NioServerSocketChannel.class)
 .localAddress(new InetSocketAddress(port))
 .childHandler(new ChannelInitializer<SocketChannel>() {
     @Override
     public void initChannel(SocketChannel ch) throws Exception {
         ch.pipeline().addLast(new ChannelInboundHandlerAdapter() {
             @Override
             public void channelRead(ChannelHandlerContext ctx, Object msg) {
             }
         });
         ch.pipeline().addLast(new EchoServerHandler());
     }
 });
My logic is: have two ChannelInboundHandlerAdapters, do the handshaking with the first handler and discard packets if they do not match the handshaking criteria, then pass the messages to a queue through the second ChannelInboundHandlerAdapter. Is my logic correct? If not, how should it be?
Thank you very much.
ChannelInboundHandlerAdapter is an adapter class for the ChannelInboundHandler interface. To start with, you can use SimpleChannelInboundHandler (or, for more complicated cases, write your own handler that extends ChannelInboundHandlerAdapter).
The SimpleChannelInboundHandler releases the message automatically after channelRead() (and thereby passes it to the next handler in the ChannelPipeline).
For using the simpler SimpleChannelInboundHandler, see this thread: Netty hello world example not working.
So instead of ch.pipeline().addLast(new ChannelInboundHandlerAdapter() {...}), you have to write a new class that extends SimpleChannelInboundHandler, like:
public class MyHandler extends SimpleChannelInboundHandler<ByteBuf> {
    @Override
    protected void channelRead0(ChannelHandlerContext ctx, ByteBuf msg) throws Exception {
        System.out.println(msg.toString(io.netty.util.CharsetUtil.US_ASCII));
        // no explicit release needed here: SimpleChannelInboundHandler
        // releases the message once channelRead0() returns
    }
}
and add it to the pipeline like this:
public void initChannel(SocketChannel ch) throws Exception {
    ch.pipeline().addLast(new MyHandler());
}
As said above, the SimpleChannelInboundHandler releases the message automatically after channelRead() (and thereby passes it to the next handler in the ChannelPipeline).
If you use ChannelInboundHandlerAdapter, you have to implement the passing of the message/event to the next handler yourself.
A handler has to invoke the event propagation methods on the ChannelHandlerContext ctx to forward an event to its next handler (in the SimpleChannelInboundHandler class this is already implemented).
public class MyInboundHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelActive(ChannelHandlerContext ctx) {
        System.out.println("Connected!");
        ctx.fireChannelActive();
    }
}
See the ChannelPipeline documentation: http://netty.io/4.0/api/io/netty/channel/ChannelPipeline.html
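Applied to the handshake-then-queue setup from the question, a rough sketch could look like the following (HandshakeHandler and QueueingHandler are made-up names): the first handler consumes or drops handshake frames and forwards everything else with ctx.fireChannelRead(msg), the second hands the buffer to a queue.

public class HandshakeHandler extends ChannelInboundHandlerAdapter {
    private boolean handshakeDone;

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        if (!handshakeDone) {
            // validate the handshake frame here; it is consumed, so release it
            handshakeDone = true;
            ReferenceCountUtil.release(msg);
            return;
        }
        // after the handshake, pass the message on to the next inbound handler
        ctx.fireChannelRead(msg);
    }
}

public class QueueingHandler extends SimpleChannelInboundHandler<ByteBuf> {
    private final BlockingQueue<ByteBuf> queue;

    public QueueingHandler(BlockingQueue<ByteBuf> queue) {
        this.queue = queue;
    }

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, ByteBuf msg) {
        // retain, because SimpleChannelInboundHandler releases msg after this method returns
        queue.offer(msg.retain());
    }
}

Add them in that order, e.g. ch.pipeline().addLast(new HandshakeHandler(), new QueueingHandler(queue));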
I must point out that only one SimpleChannelInboundHandler extension can be added to the pipeline chain, because SimpleChannelInboundHandler has a finally block that releases every message it handles:
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
    boolean release = true;
    try {
        if (acceptInboundMessage(msg)) {
            @SuppressWarnings("unchecked")
            I imsg = (I) msg;
            channelRead0(ctx, imsg);
        } else {
            release = false;
            ctx.fireChannelRead(msg);
        }
    } finally {
        if (autoRelease && release) {
            // release all handled messages, so the next handler won't be executed
            ReferenceCountUtil.release(msg);
        }
    }
}
Use ChannelInboundHandlerAdapter instead:
public class CustomizeChannelInboundHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
        System.out.println("do something you like!");
        super.channelRead(ctx, msg); // forwards the message to the next inbound handler
    }
}
I have a scenario where I am establishing a TCP connection using Netty NIO. Suppose the server goes down; how can I automatically reconnect to the server when it comes up again?
Or is there any way to attach an availability listener to the server?
You can have a DisconnectionHandler, as the first thing in your client pipeline, that reacts to channelInactive by immediately trying to reconnect or scheduling a reconnection task.
For example,
public class DisconnectionHandler extends ChannelInboundHandlerAdapter {

    @Override
    public void channelInactive(final ChannelHandlerContext ctx) throws Exception {
        Channel channel = ctx.channel();
        /* If a shutdown is ongoing, ignore */
        if (channel.eventLoop().isShuttingDown()) return;

        ReconnectionTask reconnect = new ReconnectionTask(channel);
        reconnect.run();
    }
}
The ReconnectionTask would be something like this:
public class ReconnectionTask implements Runnable, ChannelFutureListener {

    Channel previous;

    public ReconnectionTask(Channel c) {
        this.previous = c;
    }

    @Override
    public void run() {
        Bootstrap b = createBootstrap(); // configure group, channel and handlers here
        b.remoteAddress(previous.remoteAddress())
         .connect()
         .addListener(this);
    }

    @Override
    public void operationComplete(ChannelFuture future) throws Exception {
        if (!future.isSuccess()) {
            // Will try to connect again in 100 ms.
            // Here you should probably use exponential backoff or some sort of
            // randomization to define the retry period.
            previous.eventLoop()
                    .schedule(this, 100, TimeUnit.MILLISECONDS);
            return;
        }
        // Do something else on success if needed.
    }
}
Check here for an example of an exponential backoff library.
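If you don't want an extra dependency, exponential backoff can also be done directly in the listener above; a minimal sketch, assuming an int attempts field on ReconnectionTask (the 100 ms base delay and 30 s cap are arbitrary values of mine):

@Override
public void operationComplete(ChannelFuture future) throws Exception {
    if (!future.isSuccess()) {
        // double the delay on every failed attempt, capped at 30 seconds
        long delay = Math.min(100L << Math.min(attempts++, 10), 30_000L);
        previous.eventLoop().schedule(this, delay, TimeUnit.MILLISECONDS);
        return;
    }
    attempts = 0; // connected, reset the backoff
}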
I want to send a keep-alive command from the client to the server using Netty. I found the IdleStateHandler option, but I don't know how to handle it on the client side. This is my code:
public void connect() {
    workerGroup = new NioEventLoopGroup();
    Bootstrap bs = new Bootstrap();
    bs.group(workerGroup).channel(NioSocketChannel.class);
    bs.handler(new ChannelInitializer<SocketChannel>() {
        @Override
        protected void initChannel(SocketChannel ch) throws Exception {
            ch.pipeline().addLast("idleStateHandler", new IdleStateHandler(0, 0, 300));
            ch.pipeline().addLast("logger", new LoggingHandler());
            ch.pipeline().addLast("commandDecoder", new CuCommandDecoder());
            ch.pipeline().addLast("commandEncoder", new CuCommandEncoder());
        }
    });
After adding IdleStateHandler to the channel, where should the handling code go?
Do I need to write a new handler that reacts to the events from IdleStateHandler?
According to the JavaDoc, IdleStateHandler will generate new events according to the current status of the channel:
IdleState#READER_IDLE for timeout on Read operation
IdleState#WRITER_IDLE for timeout on Write operation
IdleState#ALL_IDLE for timeout on both Read/Write operation
Then you need to handle those events in your handlers, for example (taken from the documentation here):
// Handler should handle the IdleStateEvent triggered by IdleStateHandler.
public class MyHandler extends ChannelDuplexHandler {
    @Override
    public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
        if (evt instanceof IdleStateEvent) {
            IdleStateEvent e = (IdleStateEvent) evt;
            if (e.state() == IdleState.READER_IDLE) {
                ctx.close();
            } else if (e.state() == IdleState.WRITER_IDLE) {
                ctx.writeAndFlush(new PingMessage());
            }
        }
    }
}
Here the example closes the connection on the first read idle, and tries to send a ping on write idle. You could also implement the "pong" response, or change the read-idle branch to send a ping request too; how you handle your keep-alive depends on your protocol.
This could be done both on client and server side.
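Applied to the client code from the question, IdleStateHandler(0, 0, 300) only fires the ALL_IDLE event (after 300 seconds with neither reads nor writes), so that is the state to check. A minimal sketch, where KeepAliveCommand is a made-up placeholder for whatever your CuCommandEncoder expects:

public class KeepAliveHandler extends ChannelDuplexHandler {
    @Override
    public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
        if (evt instanceof IdleStateEvent
                && ((IdleStateEvent) evt).state() == IdleState.ALL_IDLE) {
            // no read or write for 300 seconds: send the keep-alive command
            ctx.writeAndFlush(new KeepAliveCommand());
        } else {
            super.userEventTriggered(ctx, evt);
        }
    }
}

Register it after the IdleStateHandler, e.g. ch.pipeline().addLast("keepAliveHandler", new KeepAliveHandler());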
Using Netty 4.0.27 & Java 1.8.0_20
So I am attempting to learn how Netty works by building a simple chat server (the typical networking tutorial program, I guess?). I'm designing my own simple protocol, called ARC (Andrew's Relay Chat), so that's why you see ARC in the code a lot. OK, so here's the issue.
So here I start the server and register the various handlers...
public void start()
{
    System.out.println("Registering handlers...");
    ArcServerInboundHandler inboundHandler = new ArcServerInboundHandler(this);
    EventLoopGroup bossGroup = new NioEventLoopGroup();
    EventLoopGroup workerGroup = new NioEventLoopGroup();
    try
    {
        ServerBootstrap bootstrap = new ServerBootstrap();
        bootstrap.group(bossGroup, workerGroup).channel(NioServerSocketChannel.class).childHandler(new ChannelInitializer<SocketChannel>()
        {
            @Override
            public void initChannel(SocketChannel ch) throws Exception
            {
                ch.pipeline().addLast(new ArcDecoder(), inboundHandler);
                ch.pipeline().addLast(new ArcEncoder());
            }
        }).option(ChannelOption.SO_BACKLOG, 128).childOption(ChannelOption.SO_KEEPALIVE, true);

        try
        {
            System.out.println("Starting Arc Server on port " + port);
            ChannelFuture f = bootstrap.bind(port).sync();
            f.channel().closeFuture().sync();
        }
        catch(InterruptedException e)
        {
            e.printStackTrace();
        }
    }
    finally
    {
        workerGroup.shutdownGracefully();
        bossGroup.shutdownGracefully();
    }
}
My "inboundHandler" does get called when the user connects.
@Override
public void channelActive(ChannelHandlerContext ctx) throws Exception
{
    System.out.println("CLIENT CONNECTED"); // THIS PRINTS, REACHES THIS POINT
    ArcPacket packet = new ArcPacket();
    packet.setArc("PUBLIC_KEY");
    packet.setField("KEY", Crypto.bytesToHex(server.getRsaKeys().getPublic().getEncoded()));
    ctx.writeAndFlush(packet);
}
This is my encoder, which does not seem to get called at all...
public class ArcEncoder extends MessageToByteEncoder<ArcPacket>
{
    @Override
    protected void encode(ChannelHandlerContext ctx, ArcPacket msg, ByteBuf out) throws Exception
    {
        System.out.println("ENCODE"); // NEVER GETS HERE
        String message = ArcPacketFactory.encode(msg);
        byte[] data = message.getBytes("UTF-8");
        out.writeBytes(data);
        System.out.println("WROTE");
    }

    @Override
    public boolean acceptOutboundMessage(Object msg) throws Exception
    {
        System.out.println("ACCEPT OUTBOUND MESSAGE"); // NEVER GETS HERE
        return msg instanceof ArcPacket;
    }
}
So, the code that calls ctx.writeAndFlush(packet) runs, but it doesn't seem to invoke the encoder at any point. Am I missing something obvious? Perhaps I'm adding the encoder incorrectly? It looks right when I compare it to other examples I've seen, though.
Thanks for any help.
Your encoder (ArcEncoder) is placed after your inbound handler. That means the write requests made via ctx.*() will never be evaluated by the encoder. To fix your problem, you have to move the ArcEncoder before the inbound handler:
ch.pipeline().addLast(new ArcDecoder(), new ArcEncoder(), inboundHandler);
For more information about the event evaluation order, please read the API documentation of ChannelPipeline.
I think the problem is that you're using the ChannelHandlerContext to write to the Channel. What this does is insert the message into the pipeline at the point of your handler, going outbound. But since your inbound handler is added before your encoder in the pipeline, anything you write using that handler's context enters the outbound path beyond the encoder's position, so the encoder is never reached.
The correct way to do it, ensuring the encoder is called, is to write to the channel itself:
ctx.channel().writeAndFlush(packet);
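So either reorder the pipeline as in the other answer, or write through the channel. A sketch of the latter applied to the channelActive() from the question:

@Override
public void channelActive(ChannelHandlerContext ctx) throws Exception
{
    ArcPacket packet = new ArcPacket();
    packet.setArc("PUBLIC_KEY");
    packet.setField("KEY", Crypto.bytesToHex(server.getRsaKeys().getPublic().getEncoded()));
    // a write on the Channel starts at the tail of the pipeline,
    // so the ArcEncoder is reached even though it sits after this handler
    ctx.channel().writeAndFlush(packet);
}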
I have created a fairly straightforward server using Netty 4. I have been able to scale it up to handle several thousand connections and it never climbs above ~40 threads.
In order to test it out, I have also created a test client that creates thousands of connections. Unfortunately this creates as many threads as it makes connections. I was hoping to minimize threads for the clients. I have looked at many posts on this. Many examples show single-connection setups. This and this say to share the NioEventLoopGroup across clients, which I do. The NioEventLoopGroup threads stay limited, but I'm getting a thread per connection somewhere else. I am not purposely creating threads in the pipeline and don't see what could be.
Here is a snippet from the setup of my client code. It seems that it should maintain a fixed thread count based on what I've researched so far. Is there something I'm missing that I should be doing to prevent a thread per client connection?
Main
final EventLoopGroup group = new NioEventLoopGroup();
for (int i = 0; i < 100; i++) {
    MockClient client = new MockClient(i, group);
    client.connect();
}
MockClient
public class MockClient implements Runnable {

    private final EventLoopGroup group;
    private int identity;

    public MockClient(int identity, final EventLoopGroup group) {
        this.identity = identity;
        this.group = group;
    }

    @Override
    public void run() {
        try {
            connect();
        } catch (Exception e) {}
    }

    public void connect() throws Exception {
        Bootstrap b = new Bootstrap();
        b.group(group)
         .channel(NioSocketChannel.class)
         .handler(new MockClientInitializer(identity, this));

        final Runnable that = this;

        // Start the connection attempt
        b.connect(config.getHost(), config.getPort()).addListener(new ChannelFutureListener() {
            @Override
            public void operationComplete(ChannelFuture future) throws Exception {
                if (future.isSuccess()) {
                    Channel ch = future.sync().channel();
                } else {
                    // if the server is down, try again in a few seconds
                    future.channel().eventLoop().schedule(that, 15, TimeUnit.SECONDS);
                }
            }
        });
    }
}
As has happened to me many times before, explaining the problem in detail made me think about it more, and I came across the issue. I wanted to provide it here in case anyone else runs into the same issue when creating thousands of Netty clients.
I have one path in my pipeline that creates a timeout task to simulate a client connection rebooting. It turns out it was this timer that was creating the extra threads: every time a connection received a 'reboot' signal from the server (which happens every so often), another thread was created, until there was a thread per connection.
Handler
private final HashedWheelTimer timer;

@Override
protected void channelRead0(ChannelHandlerContext ctx, Packet msg) throws Exception {
    Packet packet = reboot();

    ChannelFutureListener closeHandler = new ChannelFutureListener() {
        @Override
        public void operationComplete(ChannelFuture future) throws Exception {
            RebootTimeoutTask timeoutTask = new RebootTimeoutTask(identity, client);
            timer.newTimeout(timeoutTask, SECONDS_FOR_REBOOT, TimeUnit.SECONDS);
        }
    };

    ctx.writeAndFlush(packet).addListener(new ChannelFutureListener() {
        @Override
        public void operationComplete(ChannelFuture future) throws Exception {
            if (future.isSuccess()) {
                future.channel().close().addListener(closeHandler);
            } else {
                future.channel().close();
            }
        }
    });
}
Timeout Task
public class RebootTimeoutTask implements TimerTask {

    public RebootTimeoutTask(...) {...}

    @Override
    public void run(Timeout timeout) throws Exception {
        client.connect(identity);
    }
}
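One way to avoid the extra threads is to share a single HashedWheelTimer across all clients (each HashedWheelTimer instance starts its own worker thread), or to drop the timer entirely and schedule the reboot on the channel's event loop. A sketch of the latter, reusing client and SECONDS_FOR_REBOOT from the handler above:

ChannelFutureListener closeHandler = new ChannelFutureListener() {
    @Override
    public void operationComplete(ChannelFuture future) throws Exception {
        // reuse the event loop thread instead of a per-connection HashedWheelTimer
        future.channel().eventLoop().schedule(() -> {
            try {
                client.connect();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }, SECONDS_FOR_REBOOT, TimeUnit.SECONDS);
    }
};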