I am trying to play around with the Netty API, using the Netty Telnet server, to check whether truly asynchronous behaviour can be observed or not.
Below are the three classes being used.
TelnetServer.java
public class TelnetServer {

    public static void main(String[] args) throws InterruptedException {
        // TODO Auto-generated method stub
        EventLoopGroup bossGroup = new NioEventLoopGroup(1);
        EventLoopGroup workerGroup = new NioEventLoopGroup();
        try {
            ServerBootstrap b = new ServerBootstrap();
            b.group(bossGroup, workerGroup)
             .channel(NioServerSocketChannel.class)
             .handler(new LoggingHandler(LogLevel.INFO))
             .childHandler(new TelnetServerInitializer());

            b.bind(8989).sync().channel().closeFuture().sync();
        } finally {
            bossGroup.shutdownGracefully();
            workerGroup.shutdownGracefully();
        }
    }
}
TelnetServerInitializer.java
public class TelnetServerInitializer extends ChannelInitializer<SocketChannel> {

    private static final StringDecoder DECODER = new StringDecoder();
    private static final StringEncoder ENCODER = new StringEncoder();
    private static final TelnetServerHandler SERVER_HANDLER = new TelnetServerHandler();

    final EventExecutorGroup executorGroup = new DefaultEventExecutorGroup(2);

    public TelnetServerInitializer() {
    }

    @Override
    public void initChannel(SocketChannel ch) throws Exception {
        ChannelPipeline pipeline = ch.pipeline();

        // Add the text line codec combination first,
        pipeline.addLast(new DelimiterBasedFrameDecoder(8192, Delimiters.lineDelimiter()));
        // the encoder and decoder are static as these are sharable
        pipeline.addLast(DECODER);
        pipeline.addLast(ENCODER);

        // and then business logic.
        pipeline.addLast(executorGroup, "handler", SERVER_HANDLER);
    }
}
TelnetServerHandler.java
/**
 * Handles a server-side channel.
 */
@Sharable
public class TelnetServerHandler extends SimpleChannelInboundHandler<String> {

    @Override
    public void channelActive(ChannelHandlerContext ctx) throws Exception {
        // Send greeting for a new connection.
        ctx.write("Welcome to " + InetAddress.getLocalHost().getHostName() + "!\r\n");
        ctx.write("It is " + new Date() + " now.\r\n");
        ctx.flush();
        ctx.channel().config().setAutoRead(true);
    }

    @Override
    public void channelRead0(ChannelHandlerContext ctx, String request) throws Exception {
        // Generate and write a response.
        System.out.println("request = " + request);
        String response;
        boolean close = false;
        if (request.isEmpty()) {
            response = "Please type something.\r\n";
        } else if ("bye".equals(request.toLowerCase())) {
            response = "Have a good day!\r\n";
            close = true;
        } else {
            response = "Did you say '" + request + "'?\r\n";
        }

        // We do not need to write a ChannelBuffer here.
        // We know the encoder inserted at TelnetPipelineFactory will do the conversion.
        ChannelFuture future = ctx.write(response);

        Thread.sleep(10000);

        // Close the connection after sending 'Have a good day!'
        // if the client has sent 'bye'.
        if (close) {
            future.addListener(ChannelFutureListener.CLOSE);
        }
    }

    @Override
    public void channelReadComplete(ChannelHandlerContext ctx) {
        ctx.flush();
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        cause.printStackTrace();
        ctx.close();
    }
}
Now when I connect through a telnet client and send the command hello three times in a row, the next request doesn't reach channelRead until the handling of the previous one has finished. Is there any way I can make this completely asynchronous, so that the three hellos are received as soon as they are available on the socket?
Netty uses at most one thread per channel for dispatching reads to a handler, meaning the next call to channelRead will only be dispatched after the previous call has completed. This is required for most handlers to work correctly, including sending messages back in the proper order. If the computation is really expensive, another solution is to hand the messages off to a custom thread pool.
If the blocking operation is instead another kind of connection, you should make that an asynchronous connection too. You only get fully asynchronous behaviour if every part does this correctly.
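To make that concrete, here is a minimal sketch of the custom-thread-pool idea applied to the TelnetServerHandler from the question. The pool and its size are my own assumptions (imports from java.util.concurrent are omitted, like the other imports in the snippets above), and note that once the sleep runs off the event loop, responses on a single channel may be flushed out of order:
private static final ExecutorService businessPool = Executors.newFixedThreadPool(4); // hypothetical pool

@Override
public void channelRead0(final ChannelHandlerContext ctx, final String request) {
    businessPool.submit(new Runnable() {
        @Override
        public void run() {
            // the slow business logic now runs off the event loop, so the next read is not held up
            try {
                Thread.sleep(10000);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            // writeAndFlush() may be called from any thread; Netty hands it over to the channel's event loop
            ctx.writeAndFlush("Did you say '" + request + "'?\r\n");
        }
    });
}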
Related
I created a small Netty server to calculate the factorial of a BigInteger and send the results. The code is as follows.
Factorial.java
public class Factorial {

    private int port;

    public Factorial(int port) {
        this.port = port;
    }

    public void run(int threadcount) throws Exception {
        EventLoopGroup bossGroup = new NioEventLoopGroup();
        EventLoopGroup workerGroup = new NioEventLoopGroup(threadcount);
        try {
            ServerBootstrap b = new ServerBootstrap();
            b.group(bossGroup, workerGroup)
             .channel(NioServerSocketChannel.class)
             .childHandler(new ChannelInitializer<SocketChannel>() {
                 @Override
                 public void initChannel(SocketChannel ch) throws Exception {
                     ch.pipeline().addLast(new FactorialHandler());
                 }
             })
             .option(ChannelOption.SO_BACKLOG, 128)
             .childOption(ChannelOption.SO_KEEPALIVE, true);

            ChannelFuture f = b.bind(port).sync();
            f.channel().closeFuture().sync();
        } finally {
            workerGroup.shutdownGracefully();
            bossGroup.shutdownGracefully();
        }
    }

    public static void main(String[] args) throws Exception {
        int port = 15000;
        new Factorial(port).run(Integer.parseInt(args[0]));
    }
}
FactorialHandler.java
public class FactorialHandler extends ChannelInboundHandlerAdapter {

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        BigInteger result = BigInteger.ONE;
        String resultString;

        for (int i = 2000; i > 0; i--)
            result = result.multiply(BigInteger.valueOf(i));

        resultString = result.toString().substring(0, 3) + "\n";

        ByteBuf buf = Unpooled.copiedBuffer(resultString.getBytes());
        ctx.write(buf);
        ctx.flush();
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        cause.printStackTrace();
        ctx.close();
    }
}
When I ran this I got the following error
Jun 08, 2018 5:28:09 PM io.netty.util.ResourceLeakDetector reportTracedLeak
SEVERE: LEAK: ByteBuf.release() was not called before it's garbage-collected. See http://netty.io/wiki/reference-counted-objects.html for more information.
Recent access records:
As explained at the given link, I released the ByteBuf by calling buf.release() in the channelRead method, after ctx.flush().
But when I do that, the server starts throwing the following exception:
io.netty.util.IllegalReferenceCountException: refCnt: 0, increment: 1
Can someone please tell me how to fix this issue?
The problem is not the outbound ByteBuf. Outbound ByteBufs are always taken care of for you (See OutboundMessages). The problem is the inbound ByteBuf. I'm looking at you, FactorialHandler. It extends ChannelInboundHandlerAdapter. Note this from the JavaDoc:
Be aware that messages are not released after the
channelRead(ChannelHandlerContext, Object) method returns
automatically. If you are looking for a ChannelInboundHandler
implementation that releases the received messages automatically,
please see SimpleChannelInboundHandler.
Your handler has a signature like this:
public void channelRead(ChannelHandlerContext ctx, Object msg)
That msg (which, by the way, you don't use) is actually a ByteBuf, which is exactly what the JavaDoc note above is warning you about. (In the absence of any other ChannelHandlers, messages will always be instances of ByteBuf.)
So your options are:
Use a SimpleChannelInboundHandler which will clean up that reference for you.
At the end of your handler, release the inbound ByteBuf using ReferenceCountUtil.release(java.lang.Object msg).
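For illustration, here is a minimal sketch of option 1 applied to the FactorialHandler above; option 2 would instead keep ChannelInboundHandlerAdapter and wrap the method body in try { ... } finally { ReferenceCountUtil.release(msg); }:
public class FactorialHandler extends SimpleChannelInboundHandler<ByteBuf> {

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, ByteBuf msg) {
        // msg is released automatically by SimpleChannelInboundHandler once this method returns
        BigInteger result = BigInteger.ONE;
        for (int i = 2000; i > 0; i--)
            result = result.multiply(BigInteger.valueOf(i));

        String resultString = result.toString().substring(0, 3) + "\n";
        // the outbound buffer is still released for you after it has been written to the socket
        ctx.writeAndFlush(Unpooled.copiedBuffer(resultString.getBytes()));
    }
}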
It's because you don't call msg.release() (msg is an instance of ByteBuf).
I'm new to Netty and I would like to create a proxy server using Netty that does the following:
_ upon receiving data from a client, the proxy server runs some business logic that may modify the data, and then forwards it to the remote server; this business logic belongs to a transaction.
_ if the remote server returns a success response, the proxy server commits the transaction; otherwise the proxy server rolls the transaction back.
Data flow diagram
I have taken a look at the proxy example at https://netty.io/4.1/xref/io/netty/example/proxy/package-summary.html but I haven't figured out a good and simple way to implement the transaction logic mentioned above.
I should mention that I have created a separate thread pool to execute this business transaction to avoid blocking the NIO threads. My current solution actually uses two thread pools with the same number of threads: one in the frontendHandler and one in the backendHandler; the frontend thread uses wait() to wait for the response from the backend thread.
Here is my current code for the frontend handler:
@Override
public void channelActive(ChannelHandlerContext ctx) {
    final Channel inboundChannel = ctx.channel();

    // Start the connection attempt.
    Bootstrap b = new Bootstrap();
    b.group(inboundChannel.eventLoop())
     .channel(ctx.channel().getClass())
     .handler(new ServerBackendHandler(inboundChannel, response))
     .option(ChannelOption.AUTO_READ, false);
    ChannelFuture f = b.connect(remoteHost, remotePort);

    outboundChannel = f.channel();
    f.addListener(new ChannelFutureListener() {
        public void operationComplete(ChannelFuture future) throws Exception {
            if (future.isSuccess()) {
                // connection complete start to read first data
                inboundChannel.read();
            } else {
                // Close the connection if the connection attempt has failed.
                inboundChannel.close();
            }
        }
    });
}

@Override
public void channelRead(final ChannelHandlerContext ctx, Object msg) {
    if (outboundChannel.isActive()) {
        // Executing business logic within a different thread pool to avoid blocking asynchronous i/o operation
        frontendThreadPool.execute(new Runnable() {
            @Override
            public void run() {
                //System.out.println("Starting business logic operation at front_end for message :" + m);
                synchronized (response) {
                    // sleeping this thread to simulate business operation, insert business logic here.
                    int randomNum = ThreadLocalRandom.current().nextInt(1000, 2001);
                    try {
                        Thread.currentThread().sleep(randomNum);
                    } catch (InterruptedException e1) {
                        e1.printStackTrace();
                    }

                    outboundChannel.writeAndFlush(msg).addListener(new ChannelFutureListener() {
                        public void operationComplete(ChannelFuture future) throws Exception {
                            if (future.isSuccess()) {
                                // was able to flush out data, start to read the next chunk
                                ctx.channel().read();
                            } else {
                                future.channel().close();
                            }
                        }
                    });

                    System.out.println("Blank response : " + response.getResponse());

                    // wait for response from remote server
                    try {
                        response.wait();
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }

                    System.out.println("Returned response from back end: " + response.getResponse());
                    // another piece of business logic here: if the remote server returned success then commit the transaction,
                    // if the remote server returned failure then throw an exception to rollback

                    // stop current thread since we are done with it
                    Thread.currentThread().interrupt();
                }
            }
        });
    }
}
And for the backendHandler:
@Override
public void channelRead(final ChannelHandlerContext ctx, Object msg) {
    ByteBuf m = (ByteBuf) msg;
    m = safeBuffer(m, ctx.alloc());
    String str = m.toString(Charset.forName("UTF-8"));

    backendThreadPool.execute(new Runnable() {
        @Override
        public void run() {
            //System.out.println("Starting business logic operation at back_end.");
            synchronized (response) {
                int randomNum = ThreadLocalRandom.current().nextInt(1000, 2001);
                try {
                    Thread.currentThread().sleep(randomNum);
                } catch (InterruptedException e1) {
                    e1.printStackTrace();
                }

                response.setResponse(str);
                System.out.println("Finished at back_end.");
                response.notify();
                Thread.currentThread().interrupt();
            }
        }
    });

    String s = "Message returned from remote server through proxy : " + str;
    byte[] b = s.getBytes(Charset.forName("UTF-8"));
    defaultResponse.writeBytes(b);

    inboundChannel.writeAndFlush(defaultResponse).addListener(new ChannelFutureListener() {
        public void operationComplete(ChannelFuture future) throws Exception {
            if (future.isSuccess()) {
                ctx.channel().read();
            } else {
                future.channel().close();
            }
        }
    });
}
This solution is not at all optimal, since the server has to use two threads to execute one transaction. So I guess my questions are:
_ Can I (and if I can, should I) use Spring @Transactional on the channelRead method?
_ How can I implement the logic explained above in a simple way using Netty?
I have also used JMeter to test the code above, but it doesn't seem to be very stable: lots of requests didn't even get a response at around 2000 connections with 250 max threads in each thread pool.
Thanks in advance.
I have a situation like this: my Netty server will be getting data from a client at a blazing speed. I think the client is using some sort of PUSH mechanism to achieve that speed. I don't know what exactly a PUSH - POP mechanism is, but I do feel the client is using some mechanism to send data at a very high speed.

My requirement is: I wrote a simple TCP Netty server that receives data from the client and just adds it to a BlockingQueue implemented using ArrayBlockingQueue. As Netty is event based, the time taken to accept the data and store it in the queue is somewhat high, and this is raising an exception on the client side saying that the Netty server is not running. My server is actually running perfectly, but the time to accept a single message and store it in the queue is too long. How can I prevent this? Is there a faster queue for this situation?

I am using a BlockingQueue because another thread takes data from the queue and processes it, so I need a synchronized queue. How can I improve the performance of the server, or is there any way to insert data at a very high speed? All I care about is latency; the latency needs to be as low as possible.
My Server code:
public class Server implements Runnable {

    private final int port;
    static String message;
    Channel channel;
    ChannelFuture channelFuture;
    int rcvBuf, sndBuf, lowWaterMark, highWaterMark;

    public Server(int port) {
        this.port = port;
        rcvBuf = 2048;
        sndBuf = 2048;
        lowWaterMark = 1024;
        highWaterMark = 2048;
    }

    @Override
    public void run() {
        try {
            startServer();
        } catch (Exception ex) {
            System.err.println("Error in Server : " + ex);
            Logger.error(ex.getMessage());
        }
    }

    public void startServer() {
        // System.out.println("8888 Server started");
        EventLoopGroup group = new NioEventLoopGroup();
        try {
            ServerBootstrap b = new ServerBootstrap();
            b.group(group)
             .channel(NioServerSocketChannel.class)
             .localAddress(new InetSocketAddress(port))
             .childOption(ChannelOption.SO_RCVBUF, rcvBuf * 2048)
             .childOption(ChannelOption.SO_SNDBUF, sndBuf * 2048)
             .childOption(ChannelOption.WRITE_BUFFER_WATER_MARK, new WriteBufferWaterMark(lowWaterMark * 2048, highWaterMark * 2048))
             .childOption(ChannelOption.TCP_NODELAY, true)
             .childHandler(new ChannelInitializer<SocketChannel>() {
                 @Override
                 public void initChannel(SocketChannel ch) throws Exception {
                     channel = ch;
                     System.err.println("OMS connected : " + ch.localAddress());
                     ch.pipeline().addLast(new ReceiveFromOMSDecoder());
                 }
             });

            channelFuture = b.bind(port).sync();
            this.channel = channelFuture.channel();
            channelFuture.channel().closeFuture().sync();
        } catch (InterruptedException ex) {
            System.err.println("Exception raised in SendToOMS class" + ex);
        } finally {
            group.shutdownGracefully();
        }
    }
}
My ServerHandler code:
@Sharable
public class ReceiveFromOMSDecoder extends MessageToMessageDecoder<ByteBuf> {

    private Charset charset;

    public ReceiveFromOMSDecoder() {
        this(Charset.defaultCharset());
    }

    /**
     * Creates a new instance with the specified character set.
     */
    public ReceiveFromOMSDecoder(Charset charset) {
        if (charset == null) {
            throw new NullPointerException("charset");
        }
        this.charset = charset;
    }

    @Override
    protected void decode(ChannelHandlerContext ctx, ByteBuf msg, List<Object> out) throws Exception {
        String buffer = msg.toString(charset);
        if (buffer != null) {
            Server.sq.insertStringIntoSendingQueue(buffer); // inserting into queue
        } else {
            Logger.error("Null string received" + buffer);
        }
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        // Logger.error(cause.getMessage());
        System.err.println(cause);
    }
}
Three quickies:
Doesn't look like you're sending a response. You probably should.
Don't block the IO thread. Use an EventExecutorGroup to dispatch the handling of the incoming payload. i.e. something like ChannelPipeline.addLast(EventExecutorGroup group, String name, ChannelHandler handler).
Just don't block in general. Ditch your ArrayBlockingQueue and take a look at JCTools or some other implementation to find a non-blocking analog.
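As a rough sketch of points 2 and 3, using the classes from the question (the executor group size and the JCTools queue choice are assumptions of mine, not something from the original code):
// as fields of the Server class:
private final EventExecutorGroup decoderGroup = new DefaultEventExecutorGroup(2);        // point 2: run the decoder off the I/O threads
private final MpscArrayQueue<String> sendingQueue = new MpscArrayQueue<String>(1 << 16); // point 3: lock-free MPSC queue from JCTools

// and in the ChannelInitializer:
@Override
public void initChannel(SocketChannel ch) throws Exception {
    channel = ch;
    System.err.println("OMS connected : " + ch.localAddress());
    // the decoder's work is now dispatched on decoderGroup instead of the NIO event loop
    ch.pipeline().addLast(decoderGroup, "omsDecoder", new ReceiveFromOMSDecoder());
}
The consuming thread then polls the queue in a loop (backing off briefly when it is empty) instead of blocking on take().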
I use Java Netty as the TCP server and a Delphi TIdTCPClient as the TCP client. The client can connect and send messages to the server, but the client cannot receive the message sent back by the server.
Here is the TCP server code, written in Java:
public class NettyServer {

    public static void main(String[] args) throws InterruptedException {
        EventLoopGroup bossGroup = new NioEventLoopGroup();
        EventLoopGroup workerGroup = new NioEventLoopGroup();
        try {
            ServerBootstrap b = new ServerBootstrap();
            b.group(bossGroup, workerGroup)
             .channel(NioServerSocketChannel.class)
             .childHandler(new ChannelInitializer<SocketChannel>() {
                 @Override
                 public void initChannel(SocketChannel ch) throws Exception {
                     ch.pipeline().addLast(new TcpServerHandler());
                 }
             });

            ChannelFuture f = b.bind(8080).sync();
            f.channel().closeFuture().sync();
        } finally {
            workerGroup.shutdownGracefully();
            bossGroup.shutdownGracefully();
        }
    }
}
public class TcpServerHandler extends ChannelInboundHandlerAdapter {

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws UnsupportedEncodingException {
        try {
            ByteBuf in = (ByteBuf) msg;
            System.out.println("channelRead:" + in.toString(CharsetUtil.UTF_8));

            byte[] responseByteArray = "hello".getBytes("UTF-8");
            ByteBuf out = ctx.alloc().buffer(responseByteArray.length);
            out.writeBytes(responseByteArray);
            ctx.writeAndFlush(out);
            //ctx.write("hello");
        } finally {
            ReferenceCountUtil.release(msg);
        }
    }

    @Override
    public void channelActive(ChannelHandlerContext ctx) throws UnsupportedEncodingException {
        System.out.println("channelActive:" + ctx.channel().remoteAddress());
        ChannelGroups.add(ctx.channel());
    }

    @Override
    public void channelInactive(ChannelHandlerContext ctx) {
        System.out.println("channelInactive:" + ctx.channel().remoteAddress());
        ChannelGroups.discard(ctx.channel());
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        cause.printStackTrace();
        ctx.close();
    }
}
Here is the TCP client code, written in Delphi:
AStream := TStringStream.Create;
IdTCPClient.IOHandler.ReadStream(AStream);
I also tried
IdTCPClient.IOHandler.ReadLn()
and still cannot get the returned data.
Your Delphi code does not match your Java code, which is why your client is not working.
The default parameters of TIdIOHandler.ReadStream() expect the stream data to be preceded by the stream length, in bytes, using either a 32-bit or 64-bit integer in network byte order (big endian), depending on the value of the TIdIOHandler.LargeStream property. Your Java code is not sending the array length before sending the array bytes.
The default parameters of TIdIOHandler.ReadLn() expect the line data to be terminated by either a CRLF or bare-LF terminator. Your Java code is not sending any line terminator at the end of the array bytes.
In short, your Java code is not sending anything that lets the receiver know where the sent data actually ends. Unless it closes the connection after sending the data, in which case you can set the AReadUntilDisconnect parameter of TIdIOHandler.ReadStream() to true, or use TIdIOHandler.AllData().
TCP is stream-oriented, not message-oriented. The sender must be explicit about where a message ends and the next message begins.
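For example, assuming the Delphi side keeps calling ReadLn() with its defaults, the simplest change on the Java side is to terminate the reply with CRLF; a sketch of the relevant lines of channelRead from the handler above:
byte[] responseByteArray = "hello\r\n".getBytes("UTF-8"); // CRLF lets TIdIOHandler.ReadLn() find the end of the line
ByteBuf out = ctx.alloc().buffer(responseByteArray.length);
out.writeBytes(responseByteArray);
ctx.writeAndFlush(out);
If you would rather keep ReadStream() on the client, the server could instead add Netty's LengthFieldPrepender(4) to its pipeline, which prepends a 32-bit big-endian length to each message and should match ReadStream()'s default expectation when LargeStream is False.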
Using Netty 4.0.27 & Java 1.8.0_20
So I am attempting to learn how Netty works by building a simple chat server (the typical networking tutorial program, I guess?). Designing my own simple protocol, called ARC (Andrew's Relay Chat)... so that's why you see ARC in the code a lot. K, so here's the issue.
So here I start the server and register the various handlers...
public void start()
{
    System.out.println("Registering handlers...");
    ArcServerInboundHandler inboundHandler = new ArcServerInboundHandler(this);

    EventLoopGroup bossGroup = new NioEventLoopGroup();
    EventLoopGroup workerGroup = new NioEventLoopGroup();
    try
    {
        ServerBootstrap bootstrap = new ServerBootstrap();
        bootstrap.group(bossGroup, workerGroup).channel(NioServerSocketChannel.class).childHandler(new ChannelInitializer<SocketChannel>()
        {
            @Override
            public void initChannel(SocketChannel ch) throws Exception
            {
                ch.pipeline().addLast(new ArcDecoder(), inboundHandler);
                ch.pipeline().addLast(new ArcEncoder());
            }
        }).option(ChannelOption.SO_BACKLOG, 128).childOption(ChannelOption.SO_KEEPALIVE, true);

        try
        {
            System.out.println("Starting Arc Server on port " + port);
            ChannelFuture f = bootstrap.bind(port).sync();
            f.channel().closeFuture().sync();
        }
        catch(InterruptedException e)
        {
            e.printStackTrace();
        }
    }
    finally
    {
        workerGroup.shutdownGracefully();
        bossGroup.shutdownGracefully();
    }
}
My "inboundHandler" does get called when the user connects.
@Override
public void channelActive(ChannelHandlerContext ctx) throws Exception
{
    System.out.println("CLIENT CONNECTED"); // THIS PRINTS, REACHES THIS POINT

    ArcPacket packet = new ArcPacket();
    packet.setArc("PUBLIC_KEY");
    packet.setField("KEY", Crypto.bytesToHex(server.getRsaKeys().getPublic().getEncoded()));
    ctx.writeAndFlush(packet);
}
This is my encoder, which does not seem to get called at all...
public class ArcEncoder extends MessageToByteEncoder<ArcPacket>
{
    @Override
    protected void encode(ChannelHandlerContext ctx, ArcPacket msg, ByteBuf out) throws Exception
    {
        System.out.println("ENCODE"); // NEVER GETS HERE

        String message = ArcPacketFactory.encode(msg);
        byte[] data = message.getBytes("UTF-8");
        out.writeBytes(data);

        System.out.println("WROTE");
    }

    @Override
    public boolean acceptOutboundMessage(Object msg) throws Exception
    {
        System.out.println("ACCEPT OUTBOUND MESSAGE"); // NEVER GETS HERE
        return msg instanceof ArcPacket;
    }
}
So, the code that calls ctx.writeAndFlush(packet); does run, but it doesn't seem to invoke the encoder at any point. Am I missing something obvious? Perhaps I'm adding the encoder incorrectly? Though it looks right when I compare it to other examples I've seen.
Thanks for any help.
Your encoder (ArcEncoder) is placed after your inbound handler. That means the ctx.*() write calls made from that handler travel outbound towards the head of the pipeline and are never evaluated by the encoder. To fix the problem, you have to move the ArcEncoder before the inbound handler:
ch.pipeline().addLast(new ArcDecoder(), new ArcEncoder(), inboundHandler);
For more information about the event evaluation order, please read the API documentation of ChannelPipeline.
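In the initChannel(...) from the question, that would look like:
@Override
public void initChannel(SocketChannel ch) throws Exception
{
    // encoder placed before the inbound handler, so writes from that handler's context reach it
    ch.pipeline().addLast(new ArcDecoder(), new ArcEncoder(), inboundHandler);
}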
I think the problem is that you're using the ChannelHandlerContext of your inbound handler to write to the Channel. What this does is insert the message into the outbound path at the point of that handler. Since your encoder was added after the inbound handler (closer to the tail of the pipeline), anything you write from that context will never pass through the encoder.
The correct way to ensure that the encoder is called is to write on the Channel itself, which starts the outbound event at the tail of the pipeline:
ctx.channel().writeAndFlush(packet)