Using Netty 4.0.27 & Java 1.8.0_20
So I am attempting to learn how Netty works by building a simple chat server (the typical networking tutorial program, I guess?). I'm designing my own simple protocol, called ARC (Andrew's Relay Chat), which is why you see ARC in the code a lot. OK, so here's the issue.
So here I start the server and register the various handlers...
public void start()
{
    System.out.println("Registering handlers...");
    ArcServerInboundHandler inboundHandler = new ArcServerInboundHandler(this);

    EventLoopGroup bossGroup = new NioEventLoopGroup();
    EventLoopGroup workerGroup = new NioEventLoopGroup();
    try
    {
        ServerBootstrap bootstrap = new ServerBootstrap();
        bootstrap.group(bossGroup, workerGroup).channel(NioServerSocketChannel.class).childHandler(new ChannelInitializer<SocketChannel>()
        {
            @Override
            public void initChannel(SocketChannel ch) throws Exception
            {
                ch.pipeline().addLast(new ArcDecoder(), inboundHandler);
                ch.pipeline().addLast(new ArcEncoder());
            }
        }).option(ChannelOption.SO_BACKLOG, 128).childOption(ChannelOption.SO_KEEPALIVE, true);

        try
        {
            System.out.println("Starting Arc Server on port " + port);
            ChannelFuture f = bootstrap.bind(port).sync();
            f.channel().closeFuture().sync();
        }
        catch(InterruptedException e)
        {
            e.printStackTrace();
        }
    }
    finally
    {
        workerGroup.shutdownGracefully();
        bossGroup.shutdownGracefully();
    }
}
My "inboundHandler" does get called when the user connects.
@Override
public void channelActive(ChannelHandlerContext ctx) throws Exception
{
    System.out.println("CLIENT CONNECTED"); // THIS PRINTS, REACHES THIS POINT

    ArcPacket packet = new ArcPacket();
    packet.setArc("PUBLIC_KEY");
    packet.setField("KEY", Crypto.bytesToHex(server.getRsaKeys().getPublic().getEncoded()));
    ctx.writeAndFlush(packet);
}
This is my encoder, which does not seem to get called at all...
public class ArcEncoder extends MessageToByteEncoder<ArcPacket>
{
    @Override
    protected void encode(ChannelHandlerContext ctx, ArcPacket msg, ByteBuf out) throws Exception
    {
        System.out.println("ENCODE"); // NEVER GETS HERE
        String message = ArcPacketFactory.encode(msg);
        byte[] data = message.getBytes("UTF-8");
        out.writeBytes(data);
        System.out.println("WROTE");
    }

    @Override
    public boolean acceptOutboundMessage(Object msg) throws Exception
    {
        System.out.println("ACCEPT OUTBOUND MESSAGE"); // NEVER GETS HERE
        return msg instanceof ArcPacket;
    }
}
So: the code that calls ctx.writeAndFlush(packet); does run, but it never seems to invoke the encoder. Am I missing something obvious? Perhaps I'm adding the encoder incorrectly? It looks right when I compare it to other examples I've seen.
Thanks for any help.
Your encoder (ArcEncoder) is placed after your inbound handler in the pipeline. That means outbound events triggered by the handler's ctx.*() calls will never be evaluated by the encoder. To fix the problem, move the ArcEncoder before the inbound handler:
ch.pipeline().addLast(new ArcDecoder(), new ArcEncoder(), inboundHandler);
For more information about the event evaluation order, please read the API documentation of ChannelPipeline.
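To make the ordering concrete, here is a sketch of the corrected initializer (names taken from the question's code), with the traversal direction annotated:

ch.pipeline().addLast(new ArcDecoder(),  // inbound: bytes -> ArcPacket
                      new ArcEncoder(),  // outbound: ArcPacket -> bytes
                      inboundHandler);   // inbound: business logic
// Outbound events fired from inboundHandler's context travel toward the
// head of the pipeline, so ArcEncoder now lies in their path.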
I think the problem is that you're using the ChannelHandlerContext to write to the Channel. This inserts the message into the pipeline at your handler's position and sends it outbound from there, toward the head of the pipeline. Because the encoder was added after (closer to the tail than) your inbound handler, the outbound pass never reaches it.
The correct way to ensure the encoder is called is to write to the channel itself, which starts the outbound pass from the tail so every outbound handler is traversed:
ctx.channel().writeAndFlush(packet)
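To illustrate the difference with the question's own names (a sketch, not a required pattern):

// Pipeline order as built in the question:
// head -> ArcDecoder -> inboundHandler -> ArcEncoder -> tail

// Writing through the handler's own context starts the outbound pass at the
// handler's position and moves toward the head, so ArcEncoder (which sits
// between this handler and the tail) is never consulted:
ctx.writeAndFlush(packet);            // skips ArcEncoder

// Writing through the channel starts the outbound pass at the tail, so every
// outbound handler in the pipeline is traversed:
ctx.channel().writeAndFlush(packet);  // goes through ArcEncoder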
I have a situation like this: my Netty server receives data from a client at a blazing speed. I believe the client uses some kind of PUSH mechanism to achieve that speed. I don't know exactly what a PUSH-POP mechanism is, but the client does seem to use some mechanism for sending data very quickly.

My requirement: I wrote a simple TCP Netty server that receives data from the client and just adds it to a BlockingQueue, implemented with ArrayBlockingQueue. Since Netty is event based, the time taken to accept the data and store it in the queue is somewhat high, and this raises an exception on the client side saying that the Netty server is not running. My server is running fine; it just takes too long to accept a single piece of data and store it in the queue.

How can I prevent this? Is there a faster queue for this situation? I'm using a BlockingQueue because another thread takes data from the queue and processes it, so I need a synchronized queue. How can I improve the performance of the server, or is there any way to insert data at very high speed? All I care about is latency; it needs to be as low as possible.
My Server code:
public class Server implements Runnable {

    private final int port;
    static String message;
    Channel channel;
    ChannelFuture channelFuture;
    int rcvBuf, sndBuf, lowWaterMark, highWaterMark;

    public Server(int port) {
        this.port = port;
        rcvBuf = 2048;
        sndBuf = 2048;
        lowWaterMark = 1024;
        highWaterMark = 2048;
    }

    @Override
    public void run() {
        try {
            startServer();
        } catch (Exception ex) {
            System.err.println("Error in Server : " + ex);
            Logger.error(ex.getMessage());
        }
    }

    public void startServer() {
        // System.out.println("8888 Server started");
        EventLoopGroup group = new NioEventLoopGroup();
        try {
            ServerBootstrap b = new ServerBootstrap();
            b.group(group)
             .channel(NioServerSocketChannel.class)
             .localAddress(new InetSocketAddress(port))
             .childOption(ChannelOption.SO_RCVBUF, rcvBuf * 2048)
             .childOption(ChannelOption.SO_SNDBUF, sndBuf * 2048)
             .childOption(ChannelOption.WRITE_BUFFER_WATER_MARK, new WriteBufferWaterMark(lowWaterMark * 2048, highWaterMark * 2048))
             .childOption(ChannelOption.TCP_NODELAY, true)
             .childHandler(new ChannelInitializer<SocketChannel>() {
                 @Override
                 public void initChannel(SocketChannel ch) throws Exception {
                     channel = ch;
                     System.err.println("OMS connected : " + ch.localAddress());
                     ch.pipeline().addLast(new ReceiveFromOMSDecoder());
                 }
             });
            channelFuture = b.bind(port).sync();
            this.channel = channelFuture.channel();
            channelFuture.channel().closeFuture().sync();
        } catch (InterruptedException ex) {
            System.err.println("Exception raised in SendToOMS class" + ex);
        } finally {
            group.shutdownGracefully();
        }
    }
}
My ServerHandler code:
@Sharable
public class ReceiveFromOMSDecoder extends MessageToMessageDecoder<ByteBuf> {

    private Charset charset;

    public ReceiveFromOMSDecoder() {
        this(Charset.defaultCharset());
    }

    /**
     * Creates a new instance with the specified character set.
     */
    public ReceiveFromOMSDecoder(Charset charset) {
        if (charset == null) {
            throw new NullPointerException("charset");
        }
        this.charset = charset;
    }

    @Override
    protected void decode(ChannelHandlerContext ctx, ByteBuf msg, List<Object> out) throws Exception {
        String buffer = msg.toString(charset);
        if (buffer != null) {
            Server.sq.insertStringIntoSendingQueue(buffer); // inserting into queue
        } else {
            Logger.error("Null string received" + buffer);
        }
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        // Logger.error(cause.getMessage());
        System.err.println(cause);
    }
}
Three quickies:
Doesn't look like you're sending a response. You probably should.
Don't block the IO thread. Use an EventExecutorGroup to dispatch the handling of the incoming payload, i.e. something like ChannelPipeline.addLast(EventExecutorGroup group, String name, ChannelHandler handler) (see the sketch after this list).
Just don't block in general. Ditch your ArrayBlockingQueue and take a look at JCTools or some other implementation to find a non-blocking analog.
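As a rough sketch of the second point, using the class names from the question (the group size of 16 is an arbitrary assumption):

// Create one executor group for the whole server and reuse it, so that
// handler callbacks run off the IO event loop.
final EventExecutorGroup handlerGroup = new DefaultEventExecutorGroup(16);

b.childHandler(new ChannelInitializer<SocketChannel>() {
    @Override
    public void initChannel(SocketChannel ch) throws Exception {
        // The decoder's callbacks now run on handlerGroup threads, leaving
        // the NIO event loop free to keep draining the socket.
        ch.pipeline().addLast(handlerGroup, "omsDecoder", new ReceiveFromOMSDecoder());
    }
});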
I use Java Netty as the TCP server and a Delphi TIdTCPClient as the TCP client. The client can connect and send messages to the server, but it cannot receive the message sent back from the server.
Here is the TCP server code, written in Java:
public class NettyServer {
    public static void main(String[] args) throws InterruptedException {
        EventLoopGroup bossGroup = new NioEventLoopGroup();
        EventLoopGroup workerGroup = new NioEventLoopGroup();
        try {
            ServerBootstrap b = new ServerBootstrap();
            b.group(bossGroup, workerGroup)
             .channel(NioServerSocketChannel.class)
             .childHandler(new ChannelInitializer<SocketChannel>() {
                 @Override
                 public void initChannel(SocketChannel ch)
                         throws Exception {
                     ch.pipeline().addLast(new TcpServerHandler());
                 }
             });
            ChannelFuture f = b.bind(8080).sync();
            f.channel().closeFuture().sync();
        } finally {
            workerGroup.shutdownGracefully();
            bossGroup.shutdownGracefully();
        }
    }
}

public class TcpServerHandler extends ChannelInboundHandlerAdapter {

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws UnsupportedEncodingException {
        try {
            ByteBuf in = (ByteBuf) msg;
            System.out.println("channelRead:" + in.toString(CharsetUtil.UTF_8));
            byte[] responseByteArray = "hello".getBytes("UTF-8");
            ByteBuf out = ctx.alloc().buffer(responseByteArray.length);
            out.writeBytes(responseByteArray);
            ctx.writeAndFlush(out);
            //ctx.write("hello");
        } finally {
            ReferenceCountUtil.release(msg);
        }
    }

    @Override
    public void channelActive(ChannelHandlerContext ctx) throws UnsupportedEncodingException {
        System.out.println("channelActive:" + ctx.channel().remoteAddress());
        ChannelGroups.add(ctx.channel());
    }

    @Override
    public void channelInactive(ChannelHandlerContext ctx) {
        System.out.println("channelInactive:" + ctx.channel().remoteAddress());
        ChannelGroups.discard(ctx.channel());
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        cause.printStackTrace();
        ctx.close();
    }
}
Here is the TCP client code, written in Delphi:
AStream := TStringStream.Create;
IdTCPClient.IOHandler.ReadStream(AStream);
I also tried
IdTCPClient.IOHandler.ReadLn()
and still cannot get the returned data.
Your Delphi code does not match your Java code, that is why your client is not working.
The default parameters of TIdIOHandler.ReadStream() expect the stream data to be preceded by the stream length, in bytes, as either a 32-bit or 64-bit integer in network byte order (big endian), depending on the value of the TIdIOHandler.LargeStream property. Your Java code is not sending the array length before sending the array bytes.
The default parameters of TIdIOHandler.ReadLn() expect the line data to be terminated by a CRLF or bare LF terminator. Your Java code is not sending any line terminator at the end of the array bytes.
In short, your Java code is not sending anything that lets the receiver know when the sent data actually ends. Unless it closes the connection after sending the data, in which case you can set the AReadUntilDisconnect parameter of TIdIOHandler.ReadStream() to true, or use TIdIOHandler.AllData().
TCP is stream-oriented, not message-oriented. The sender must be explicit about where a message ends and the next message begins.
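A sketch of two ways the Java handler above could frame its response so the Indy defaults can parse it (either one would work; both assume the response is still the string "hello"):

byte[] payload = "hello".getBytes(CharsetUtil.UTF_8);

// Option 1: length-prefix the payload so TIdIOHandler.ReadStream() (with
// LargeStream = false) can read it. ByteBuf writes big endian by default,
// which matches the network byte order Indy expects.
ByteBuf framed = ctx.alloc().buffer(4 + payload.length);
framed.writeInt(payload.length);  // 32-bit length prefix
framed.writeBytes(payload);
ctx.writeAndFlush(framed);

// Option 2: terminate the payload with CRLF so ReadLn() can find the end
// of the line.
ByteBuf line = ctx.alloc().buffer(payload.length + 2);
line.writeBytes(payload);
line.writeBytes("\r\n".getBytes(CharsetUtil.UTF_8));
ctx.writeAndFlush(line);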
I am trying to play around with the Netty API, using the Netty telnet server to check whether truly asynchronous behaviour can be observed.
Below are the three classes being used:
TelnetServer.java
public class TelnetServer {

    public static void main(String[] args) throws InterruptedException {
        // TODO Auto-generated method stub
        EventLoopGroup bossGroup = new NioEventLoopGroup(1);
        EventLoopGroup workerGroup = new NioEventLoopGroup();
        try {
            ServerBootstrap b = new ServerBootstrap();
            b.group(bossGroup, workerGroup)
             .channel(NioServerSocketChannel.class)
             .handler(new LoggingHandler(LogLevel.INFO))
             .childHandler(new TelnetServerInitializer());
            b.bind(8989).sync().channel().closeFuture().sync();
        } finally {
            bossGroup.shutdownGracefully();
            workerGroup.shutdownGracefully();
        }
    }
}
TelnetServerInitializer.java
public class TelnetServerInitializer extends ChannelInitializer<SocketChannel> {

    private static final StringDecoder DECODER = new StringDecoder();
    private static final StringEncoder ENCODER = new StringEncoder();
    private static final TelnetServerHandler SERVER_HANDLER = new TelnetServerHandler();

    final EventExecutorGroup executorGroup = new DefaultEventExecutorGroup(2);

    public TelnetServerInitializer() {
    }

    @Override
    public void initChannel(SocketChannel ch) throws Exception {
        ChannelPipeline pipeline = ch.pipeline();

        // Add the text line codec combination first,
        pipeline.addLast(new DelimiterBasedFrameDecoder(8192, Delimiters.lineDelimiter()));
        // the encoder and decoder are static as these are sharable
        pipeline.addLast(DECODER);
        pipeline.addLast(ENCODER);

        // and then business logic.
        pipeline.addLast(executorGroup, "handler", SERVER_HANDLER);
    }
}
TelnetServerHandler.java
/**
 * Handles a server-side channel.
 */
@Sharable
public class TelnetServerHandler extends SimpleChannelInboundHandler<String> {

    @Override
    public void channelActive(ChannelHandlerContext ctx) throws Exception {
        // Send greeting for a new connection.
        ctx.write("Welcome to " + InetAddress.getLocalHost().getHostName() + "!\r\n");
        ctx.write("It is " + new Date() + " now.\r\n");
        ctx.flush();
        ctx.channel().config().setAutoRead(true);
    }

    @Override
    public void channelRead0(ChannelHandlerContext ctx, String request) throws Exception {
        // Generate and write a response.
        System.out.println("request = " + request);
        String response;
        boolean close = false;
        if (request.isEmpty()) {
            response = "Please type something.\r\n";
        } else if ("bye".equals(request.toLowerCase())) {
            response = "Have a good day!\r\n";
            close = true;
        } else {
            response = "Did you say '" + request + "'?\r\n";
        }

        // We do not need to write a ChannelBuffer here.
        // We know the encoder inserted at TelnetPipelineFactory will do the conversion.
        ChannelFuture future = ctx.write(response);
        Thread.sleep(10000);

        // Close the connection after sending 'Have a good day!'
        // if the client has sent 'bye'.
        if (close) {
            future.addListener(ChannelFutureListener.CLOSE);
        }
    }

    @Override
    public void channelReadComplete(ChannelHandlerContext ctx) {
        ctx.flush();
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        cause.printStackTrace();
        ctx.close();
    }
}
Now when I connect through a telnet client and send the command hello three times, the next request does not reach channelRead until the first call has finished its response. Is there any way I can make this completely asynchronous, so that I receive all three hellos as soon as they are available on the socket?
Netty uses at most one thread for incoming reads per handler, meaning the next call to channelRead is only dispatched after the previous call has completed. This is required for most handlers to work correctly, including sending back messages in the proper order. If the computation is genuinely expensive, one solution is to hand the messages off to a custom thread pool.
If the slow operation is instead another kind of connection, you should make that connection asynchronous too. You only get true asynchrony if every part behaves asynchronously.
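A hedged sketch of the thread-pool idea against the handler above (the pool size is arbitrary; the sleep stands in for the Thread.sleep(10000) in the question, and the ordering caveat from the first paragraph still applies):

@Sharable
public class TelnetServerHandler extends SimpleChannelInboundHandler<String> {

    // Hypothetical pool for the slow work; size it for your workload.
    private final EventExecutorGroup blockingPool = new DefaultEventExecutorGroup(8);

    @Override
    public void channelRead0(ChannelHandlerContext ctx, String request) {
        // Return immediately so the event loop can dispatch the next read;
        // the expensive step runs on the pool and writes when it is done.
        blockingPool.submit(() -> {
            String response = "Did you say '" + request + "'?\r\n";
            try {
                Thread.sleep(10000); // simulates the expensive step
            } catch (InterruptedException ignored) {
            }
            ctx.writeAndFlush(response);
        });
    }
}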
I want to write a keep-alive command from the client to the server using Netty. I found the IdleStateHandler option. I don't know how to handle it on the client side; this is my code:
public void connect() {
    workerGroup = new NioEventLoopGroup();
    Bootstrap bs = new Bootstrap();
    bs.group(workerGroup).channel(NioSocketChannel.class);
    bs.handler(new ChannelInitializer<SocketChannel>() {

        @Override
        protected void initChannel(SocketChannel ch) throws Exception {
            ch.pipeline().addLast("idleStateHandler", new IdleStateHandler(0, 0, 300));
            ch.pipeline().addLast("logger", new LoggingHandler());
            ch.pipeline().addLast("commandDecoder", new CuCommandDecoder());
            ch.pipeline().addLast("commandEncoder", new CuCommandEncoder());
        }
    });
After adding the IdleStateHandler to the channel, where should the handling code go?
Do I need a new handler that reacts to the events the IdleStateHandler generates?
According to the JavaDoc, IdleStateHandler will generate new events according to the current status of the channel:
IdleState#READER_IDLE for timeout on Read operation
IdleState#WRITER_IDLE for timeout on Write operation
IdleState#ALL_IDLE for timeout on both Read/Write operation
Then you need to handle those events in your handlers, for example (taken from the IdleStateHandler documentation):
// Handler should handle the IdleStateEvent triggered by IdleStateHandler.
public class MyHandler extends ChannelDuplexHandler {

    @Override
    public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
        if (evt instanceof IdleStateEvent) {
            IdleStateEvent e = (IdleStateEvent) evt;
            if (e.state() == IdleState.READER_IDLE) {
                ctx.close();
            } else if (e.state() == IdleState.WRITER_IDLE) {
                ctx.writeAndFlush(new PingMessage());
            }
        }
    }
}
Here the example closes the connection on the first read idle and tries to send a ping on write idle. One could also implement the "pong" response, and change the read-idle branch to send a ping request as well; how you handle your keep-alive depends on your protocol.
This can be done on both the client and the server side.
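Applied to the connect() method from the question, the wiring could look like this (MyHandler as in the example above; note that with IdleStateHandler(0, 0, 300) the event fired is IdleState.ALL_IDLE, so the state check in MyHandler would need to match):

ch.pipeline().addLast("idleStateHandler", new IdleStateHandler(0, 0, 300));
ch.pipeline().addLast("logger", new LoggingHandler());
ch.pipeline().addLast("commandDecoder", new CuCommandDecoder());
ch.pipeline().addLast("commandEncoder", new CuCommandEncoder());
// Sits after the IdleStateHandler so it receives the IdleStateEvents it
// fires, and writes the keep-alive command when the channel goes idle.
ch.pipeline().addLast("keepAliveHandler", new MyHandler());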
I am using Netty 3.2.7. I am trying to write functionality in my client such that if no messages are written after a certain amount of time (say, 30 seconds), a "keep-alive" message is sent to the server.
After some digging, I found that WriteTimeoutHandler should enable me to do this. I found this explanation here: https://issues.jboss.org/browse/NETTY-79.
The example given in the Netty documentation is:
public ChannelPipeline getPipeline() {
    // An example configuration that implements 30-second write timeout:
    return Channels.pipeline(
        new WriteTimeoutHandler(timer, 30), // timer must be shared.
        new MyHandler());
}
In my test client, I have done just this. In MyHandler, I also overrode the exceptionCaught() method:
public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) {
    if (e.getCause() instanceof WriteTimeoutException) {
        log.info("Client sending keep alive!");
        ChannelBuffer keepAlive = ChannelBuffers.buffer(KEEP_ALIVE_MSG_STR.length());
        keepAlive.writeBytes(KEEP_ALIVE_MSG_STR.getBytes());
        Channels.write(ctx, Channels.future(e.getChannel()), keepAlive);
    }
}
No matter how long the client goes without writing to the channel, the exceptionCaught() method I have overridden is never called.
Looking at the source of WriteTimeoutHandler, its writeRequested() implementation is:
public void writeRequested(ChannelHandlerContext ctx, MessageEvent e)
        throws Exception {
    long timeoutMillis = getTimeoutMillis(e);
    if (timeoutMillis > 0) {
        // Set timeout only when getTimeoutMillis() returns a positive value.
        ChannelFuture future = e.getFuture();
        final Timeout timeout = timer.newTimeout(
                new WriteTimeoutTask(ctx, future),
                timeoutMillis, TimeUnit.MILLISECONDS);
        future.addListener(new TimeoutCanceller(timeout));
    }
    super.writeRequested(ctx, e);
}
Here, it seems that this implementation says, "When a write is requested, make a new timeout. When the write succeeds, cancel the timeout."
Using a debugger, it does seem that this is what is happening. As soon as the write completes, the timeout is cancelled. This is not the behavior I want. The behavior I want is: "If the client has not written any information to the channel for 30 seconds, throw a WriteTimeoutException."
So, is this not what WriteTimeoutHandler is for? This is how I interpreted it from what I've read online, but the implementation does not seem to work this way. Am I using it wrong? Should I use something else? In our Mina version of the same client I am trying to rewrite, I see that the sessionIdle() method is overridden to achieve the behavior I want, but this method is not available in Netty.
For Netty 4.0 and newer, you should extend ChannelDuplexHandler, as in this example from the IdleStateHandler documentation:
// An example that sends a ping message when there is no outbound traffic
// for 30 seconds. The connection is closed when there is no inbound traffic
// for 60 seconds.
public class MyChannelInitializer extends ChannelInitializer<Channel> {

    @Override
    public void initChannel(Channel channel) {
        channel.pipeline().addLast("idleStateHandler", new IdleStateHandler(60, 30, 0));
        channel.pipeline().addLast("myHandler", new MyHandler());
    }
}

// Handler should handle the IdleStateEvent triggered by IdleStateHandler.
public class MyHandler extends ChannelDuplexHandler {

    @Override
    public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
        if (evt instanceof IdleStateEvent) {
            IdleStateEvent e = (IdleStateEvent) evt;
            if (e.state() == IdleState.READER_IDLE) {
                ctx.close();
            } else if (e.state() == IdleState.WRITER_IDLE) {
                ctx.writeAndFlush(new PingMessage());
            }
        }
    }
}
I would suggest adding the IdleStateHandler and then your own IdleStateAwareChannelHandler implementation, which can react to the idle state. This has worked out very well for me on many different projects.
The javadocs list the following example, which you could use as the base of your implementation:
public class MyPipelineFactory implements ChannelPipelineFactory {

    private final Timer timer;
    private final ChannelHandler idleStateHandler;

    public MyPipelineFactory(Timer timer) {
        this.timer = timer;
        this.idleStateHandler = new IdleStateHandler(timer, 60, 30, 0);
        // timer must be shared.
    }

    public ChannelPipeline getPipeline() {
        return Channels.pipeline(
            idleStateHandler,
            new MyHandler());
    }
}

// Handler should handle the IdleStateEvent triggered by IdleStateHandler.
public class MyHandler extends IdleStateAwareChannelHandler {

    @Override
    public void channelIdle(ChannelHandlerContext ctx, IdleStateEvent e) {
        if (e.getState() == IdleState.READER_IDLE) {
            e.getChannel().close();
        } else if (e.getState() == IdleState.WRITER_IDLE) {
            e.getChannel().write(new PingMessage());
        }
    }
}

ServerBootstrap bootstrap = ...;
Timer timer = new HashedWheelTimer();
...
bootstrap.setPipelineFactory(new MyPipelineFactory(timer));
...