Basic Netty echo server - string encoder error? - java

This is my first time using Netty and I'm having trouble making a simple echo server! The docs say to use the string encoder and decoder, which apparently I'm not using properly. For the frame decoder, I'd like to use messages with a one-byte length header, but that doesn't seem to be working either because of the string issue. I assume my implementation of the ChannelPipelineFactory is messed up.
Bonus Question:
Because I'm stupid and ambitious, I tried implementing a timeout/heartbeat handler. That didn't work either.
Here are the console output and Java code:
Console:
>>telnet localhost 6969
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
>>3
Connection closed by foreign host.
Java Console:
Starting server on 6969
channelConnected
channelDisconnected
java.lang.IllegalArgumentException: unsupported message type: class java.lang.String
at org.jboss.netty.channel.socket.nio.SocketSendBufferPool.acquire(SocketSendBufferPool.java:51)
at org.jboss.netty.channel.socket.nio.NioWorker.write0(NioWorker.java:455)
...
Server.java
public class Server {
public static void main(String[] args) throws Exception {
ChannelFactory factory =
new NioServerSocketChannelFactory(
Executors.newCachedThreadPool(),
Executors.newCachedThreadPool());
ServerBootstrap bootstrap = new ServerBootstrap(factory);
Timer timer = new HashedWheelTimer();
bootstrap.setPipelineFactory(new MyPipelineFactory(timer) {
public ChannelPipeline getPipeline() {
return Channels.pipeline(new ServerHandler());
}
});
bootstrap.setOption("child.tcpNoDelay", true);
bootstrap.setOption("child.keepAlive", true);
bootstrap.bind(new InetSocketAddress(6969));
System.out.println("Starting server on 6969");
}
}
ServerHandler.java
public class ServerHandler extends SimpleChannelHandler {
@Override
public void channelConnected(ChannelHandlerContext ctx, ChannelStateEvent e){
Channel ch = e.getChannel();
System.out.println("channelConnected");
}
@Override
public void channelDisconnected(ChannelHandlerContext ctx, ChannelStateEvent e){
Channel ch = e.getChannel();
System.out.println("channelDisconnected");
}
@Override
public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
String msg = (String) e.getMessage();
e.getChannel().write("Did you say '" + msg + "'?\n");
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) {
e.getCause().printStackTrace();
Channel ch = e.getChannel();
ch.close();
}
}
MyPipelineFactory.java
public class MyPipelineFactory implements ChannelPipelineFactory {
private final Timer timer;
private static ChannelHandler idleStateHandler;
public MyPipelineFactory(Timer t) {
this.timer = t;
//this.idleStateHandler = new IdleStateHandler(timer, 5, 20, 0); // timer must be shared
}
public ChannelPipeline getPipeline() {
// create default pipeline from static method
ChannelPipeline pipeline = Channels.pipeline();
// Decoders
int maxFrameLength = 1024;
pipeline.addLast("framer", new DelimiterBasedFrameDecoder(maxFrameLength, Delimiters.lineDelimiter()));
//pipeline.addLast("frameDecoder", new LengthFieldBasedFrameDecoder(maxFrameLength,0,1)); // get header from message
pipeline.addLast("stringDecoder", new StringDecoder(CharsetUtil.UTF_8));
// Encoders
pipeline.addLast("stringEncoder", new StringEncoder(CharsetUtil.UTF_8));
// Idle state handling- heartbeat
//pipeline.addLast("idleStateHandler", idleStateHandler);
return pipeline;
}
}
Bonus, because I'm stupid and want to get in over my head...
HeartbeatHandler.java
public class HeartbeatHandler extends IdleStateAwareChannelHandler {
@Override
public void channelIdle(ChannelHandlerContext ctx, IdleStateEvent e) {
if (e.getState() == IdleState.READER_IDLE) {
System.out.println("Reader idle, closing channel");
e.getChannel().close();
}
else if (e.getState() == IdleState.WRITER_IDLE) {
System.out.println("Writer idle, sending heartbeat");
e.getChannel().write("heartbeat");
}
}
}

It's because you've messed up the ChannelPipeline.
You use:
bootstrap.setPipelineFactory(new MyPipelineFactory(timer) {
public ChannelPipeline getPipeline() {
return Channels.pipeline(new ServerHandler());
}
});
What you need to do is modify the MyPipelineFactory class so that it adds your ServerHandler to the pipeline, and then set the factory directly:
bootstrap.setPipelineFactory(new MyPipelineFactory(timer));
Then everything should work. Even your timeout stuff ;)
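For reference, a minimal sketch of what the modified MyPipelineFactory.getPipeline() could look like, assuming you also want to re-enable the commented-out idle-state/heartbeat handlers (the handler names are just labels):
public ChannelPipeline getPipeline() {
    ChannelPipeline pipeline = Channels.pipeline();
    // framing and string conversion, same as before
    pipeline.addLast("framer", new DelimiterBasedFrameDecoder(1024, Delimiters.lineDelimiter()));
    pipeline.addLast("stringDecoder", new StringDecoder(CharsetUtil.UTF_8));
    pipeline.addLast("stringEncoder", new StringEncoder(CharsetUtil.UTF_8));
    // optional idle-state/heartbeat handling: the Timer is shared, the IdleStateHandler
    // is created per pipeline, and the heartbeat handler must come after it
    pipeline.addLast("idleStateHandler", new IdleStateHandler(timer, 5, 20, 0));
    pipeline.addLast("heartbeatHandler", new HeartbeatHandler());
    // business logic goes last
    pipeline.addLast("handler", new ServerHandler());
    return pipeline;
}
With that in place, the anonymous subclass in Server.main() goes away and the one-liner above is all that's needed.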

Related

Using the Netty library, the server is unable to handle the client reconnection scenario

Initially I am able to make the connection. But if I simply close the connection on the client and try to connect again, or restart the client, the connection is not established; it creates a connection only once.
Can someone help me improve this so it can handle n clients simultaneously?
bossGroup = new NioEventLoopGroup(1);
workerGroup = new NioEventLoopGroup();
try {
ServerBootstrap b = new ServerBootstrap();
b.group(bossGroup, workerGroup).channel(NioServerSocketChannel.class).option(ChannelOption.SO_BACKLOG, 100)
.handler(new LoggingHandler(LogLevel.INFO)).childHandler(new ChannelInitializer<SocketChannel>() {
@Override
public void initChannel(SocketChannel ch) throws Exception {
ChannelPipeline p = ch.pipeline();
p.addLast(new DelimiterBasedFrameDecoder(20000, Delimiters.lineDelimiter()));
// p.addLast(new StringDecoder());
// p.addLast(new StringEncoder());
p.addLast(serverHandler);
}
});
// Start the server.
LOGGER.key("Simulator is opening listen port").low().end();
ChannelFuture f = b.bind(config.getPort()).sync();
LOGGER.key("Simulator started listening at port: " + config.getPort()).low().end();
// Wait until the server socket is closed.
f.channel().closeFuture().sync();
} finally {
// Shut down all event loops to terminate all threads.
LOGGER.key("Shtting down all the thread if anyone is still open.").low().end();
bossGroup.shutdownGracefully();
workerGroup.shutdownGracefully();
}
Server Handler code is below:
public class SimulatorServerHandler extends SimpleChannelInboundHandler<String> {
private AtomicReference<ChannelHandlerContext> ctxRef = new AtomicReference<ChannelHandlerContext>();
private final ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
private AtomicInteger seqNum = new AtomicInteger(1);
private final Configuration configure;
private ScheduledFuture<?> hbTimerWorker;
private final int stx = 0x02;
private final int etx = 0x03;
private final ILogger LOGGER;
public int enablePublishFunction = 0;
public SimulatorServerHandler(Configuration config) {
this.configure = config;
//LOGGER = LogFactory.INSTANCE.createLogger();
LOGGER = new LogFactory().createLogger("SIM SERVER");
}
@Override
public void channelActive(ChannelHandlerContext ctx) throws Exception {
ctxRef.set(ctx);
enablePublishFunction =1;
// System.out.println("Connected!");
LOGGER.low().key("Gateway connected to the Simulator ").end();
startHBTimer();
}
@Override
public void channelInactive(ChannelHandlerContext ctx) throws Exception {
ctx.fireChannelInactive();
hbTimerWorker.cancel(false);
enablePublishFunction =0;
LOGGER.low().key("Gateway disconnected from the Simulator ").end();
}
@Override
public void channelRead0(ChannelHandlerContext ctx, String request) {
// Generate and write a response.
String response;
boolean close = false;
/* if (request.isEmpty()) {
response = "Please type something.\r\n";
} else if ("bye".equals(request.toLowerCase())) {
response = "Have a good day!\r\n";
close = true;
} else {
response = "Did you say '" + request + "'?\r\n";
}
// We do not need to write a ChannelBuffer here.
// We know the encoder inserted at TelnetPipelineFactory will do the conversion.
ChannelFuture future = ctx.write(response);
// Close the connection after sending 'Have a good day!'
// if the client has sent 'bye'.
if (close) {
future.addListener(ChannelFutureListener.CLOSE);
}
*/
System.out.println(request);
}
@Override
public void channelReadComplete(ChannelHandlerContext ctx) {
ctx.flush();
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
LOGGER.key("Unknown exception while network communication :"+ cause.getStackTrace()).high().end();
cause.printStackTrace();
ctx.close();
}
Maybe it's because you always use the very same server handler instance in your pipeline for all connections (instead of creating a new handler per channel)? Side effects in your implementation could prevent the handler from being reusable.
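A minimal sketch of one way to avoid that, assuming a SimulatorServerHandler can be constructed per connection and the Configuration object (config) is in scope where the initializer is built (alternatively, make the handler stateless and annotate it with @Sharable):
.childHandler(new ChannelInitializer<SocketChannel>() {
    @Override
    public void initChannel(SocketChannel ch) throws Exception {
        ChannelPipeline p = ch.pipeline();
        p.addLast(new DelimiterBasedFrameDecoder(20000, Delimiters.lineDelimiter()));
        // a fresh handler per connection, so per-channel state (ctxRef, the heartbeat
        // timer, enablePublishFunction) from one client cannot leak into the next one
        p.addLast(new SimulatorServerHandler(config));
    }
});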

Netty ThreadRenamingRunnable

I'm having a difficult time getting my head around how to use ThreadRenamingRunnable to rename the worker thread in netty. I am new to netty and using netty 3.9.0-Final.
I want to rename the worker threads ("New I/O worker #X"). I'm OK with the name of the boss thread.
This is a basic server which responds to a "ping" with a "pong".
public class NettyPingPong {
public static void main(String[] args) {
ServerBootstrap bootstrap = new ServerBootstrap(
new NioServerSocketChannelFactory(
Executors.newCachedThreadPool(),
Executors.newCachedThreadPool()));
bootstrap.setPipelineFactory(new ChannelPipelineFactory() {
public ChannelPipeline getPipeline() throws Exception {
return Channels.pipeline(
new LineBasedFrameDecoder(255,true,true),
new PongUpstreamHandler(),
new StringEncoder());
}
});
bootstrap.bind(new InetSocketAddress(8899));
out.println("im ready");
}
}
and
public class PongUpstreamHandler extends SimpleChannelUpstreamHandler {
@Override
public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
ChannelBuffer buffer = (ChannelBuffer) e.getMessage();
String message = new String(buffer.array());
if (message.equalsIgnoreCase("ping")){
e.getChannel().write("pong\n");
out.println("ponged...");
}
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) {
e.getCause().printStackTrace();
e.getChannel().close();
out.println("closed...");
}
}
This is a 1.5-year-old question :( I landed here with the same problem, working on Netty 3.5.0.Final, and I resolved it with the code below.
I used a thread factory which also generates meaningful thread names, similar to what Abe has mentioned.
And I configured ThreadRenamingRunnable not to rename the threads I supply, by using a ThreadFactory and setting the ThreadNameDeterminer to CURRENT:
ThreadRenamingRunnable.setThreadNameDeterminer(ThreadNameDeterminer.CURRENT);
Netty (3.5.0 at least) alters the thread name from its original value to a proposed value "New I/O worker #X". The snippet above ensures that it doesn't alter the thread name; the name is then determined by the ThreadFactory below.
public class CustomThreadFactory implements ThreadFactory {
private final AtomicInteger threadIdSequence = new AtomicInteger(0);
private String threadNamePrefix = "Netty-Worker-";
public CustomThreadFactory() {
}
public CustomThreadFactory(String namePrefix) {
this.threadNamePrefix = namePrefix;
}
@Override
public Thread newThread(Runnable runnable) {
Thread newThread = new Thread(runnable, threadNamePrefix + threadIdSequence.incrementAndGet());
if (newThread.isDaemon()) {
newThread.setDaemon(false);
}
if (newThread.getPriority() != Thread.NORM_PRIORITY) {
newThread.setPriority(Thread.NORM_PRIORITY);
}
newThread.setUncaughtExceptionHandler(new Thread.UncaughtExceptionHandler() {
@Override
public void uncaughtException(final Thread thread, final Throwable e) {
System.err.println(thread + " threw exception: " + e.getMessage());
e.printStackTrace();
}
});
return newThread;
}
}
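For completeness, a sketch of how this could be wired into the bootstrap (the name prefixes and the use of cached thread pools are just placeholders):
// keep the names produced by the ThreadFactory instead of Netty's proposed "New I/O worker #X"
ThreadRenamingRunnable.setThreadNameDeterminer(ThreadNameDeterminer.CURRENT);
ServerBootstrap bootstrap = new ServerBootstrap(
        new NioServerSocketChannelFactory(
                Executors.newCachedThreadPool(new CustomThreadFactory("Netty-Boss-")),
                Executors.newCachedThreadPool(new CustomThreadFactory("Netty-Worker-"))));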
There may be better ways to do this, but this is how I got it to work. Thanks to this blog.
public class NettyPingPong {
public static void main(String[] args) {
final String WORKER_THREADNAME_PREFIX = "worker";
NioWorkerPool workerPool = new NioWorkerPool(Executors.newCachedThreadPool(), 20, new ThreadNameDeterminer() {
@Override
public String determineThreadName(String currentThreadName,String proposedThreadName) throws Exception {
StringBuilder sb = new StringBuilder(WORKER_THREADNAME_PREFIX);
sb.append(currentThreadName.substring(currentThreadName.lastIndexOf('-')));
return sb.toString();
}
});
ServerBootstrap bootstrap = new ServerBootstrap(
new NioServerSocketChannelFactory(
Executors.newCachedThreadPool(),
workerPool));
bootstrap.setPipelineFactory(
new ChannelPipelineFactory() {
public ChannelPipeline getPipeline() throws Exception {
return Channels.pipeline(
new LineBasedFrameDecoder(255, true, true),
new PongUpstreamHandler(),
new StringEncoder());
}
});
bootstrap.bind(new InetSocketAddress(8899));
out.println("im ready");
}
}
You can pass in a ThreadFactory which will name your threads. Take a look at the ThreadFactory I am using to name the server threads; example usage is provided below:
serverBootstrap = new ServerBootstrap(
new NioServerSocketChannelFactory(Executors
.newCachedThreadPool(new NamedThreadFactory(
"TCP-Server-Boss")), Executors
.newCachedThreadPool(new NamedThreadFactory(
"TCP-Server-Worker"))));

Proper way to pool client channels in netty?

I'm getting a java.nio.channels.NotYetConnectedException in the following code because I'm trying to write to a channel that is not yet open.
Essentially what I have is a channel pool in which I grab a channel to write to if one is free, and I create a new channel if one is not available. My problem is that when I create a new channel, the channel is not ready for writing when I call connect, and I don't want to wait for the connection to open before returning because I don't want to block the thread. What's the best way to do this? Also, is my logic for retrieving/returning channels valid? See code below.
I have a simple connection pool like the following:
private static class ChannelPool {
private final ClientBootstrap cb;
private Set<Channel> activeChannels = new HashSet<Channel>();
private Deque<Channel> freeChannels = new ArrayDeque<Channel>();
public ChannelPool() {
ChannelFactory clientFactory =
new NioClientSocketChannelFactory(
Executors.newCachedThreadPool(),
Executors.newCachedThreadPool());
cb = new ClientBootstrap(clientFactory);
cb.setPipelineFactory(new ChannelPipelineFactory() {
public ChannelPipeline getPipeline() {
return Channels.pipeline(
new HttpRequestEncoder(),
new HttpResponseDecoder(),
new ResponseHandler());
}
});
}
private Channel newChannel() {
ChannelFuture cf;
synchronized (cb) {
cf = cb.connect(new InetSocketAddress("localhost", 18080));
}
final Channel ret = cf.getChannel();
ret.getCloseFuture().addListener(new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture arg0) throws Exception {
System.out.println("channel closed?");
synchronized (activeChannels) {
activeChannels.remove(ret);
}
}
});
synchronized (activeChannels) {
activeChannels.add(ret);
}
System.out.println("returning new channel");
return ret;
}
public Channel getFreeChannel() {
synchronized (freeChannels) {
while (!freeChannels.isEmpty()) {
Channel ch = freeChannels.pollFirst();
if (ch.isOpen()) {
return ch;
}
}
}
return newChannel();
}
public void returnChannel(Channel ch) {
synchronized (freeChannels) {
freeChannels.addLast(ch);
}
}
}
I'm trying to use this inside a handler as follows:
private static class RequestHandler extends SimpleChannelHandler {
@Override
public void messageReceived(ChannelHandlerContext ctx, final MessageEvent e) {
final HttpRequest request = (HttpRequest) e.getMessage();
Channel proxyChannel = pool.getFreeChannel();
proxyToClient.put(proxyChannel, e.getChannel());
proxyChannel.write(request);
}
}
Instead of adding the new channel to activeChannels immediately after bootstrap.connect(..), you have to add a listener to the ChannelFuture returned by bootstrap.connect(..) and add the channel to activeChannels inside that listener. That way, getFreeChannel() will never hand out a channel that is not connected yet.
Because it is likely that activeChannels is empty even after you call newChannel() (newChannel() returns before the connection is established), you have to decide what to do in that case. If I were you, I would change the return type of getFreeChannel() from Channel to ChannelFuture so that the caller gets notified when the free channel is ready.
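A sketch of what that could look like with the Netty 3 API (error handling and the rest of the pool bookkeeping trimmed for brevity):
private ChannelFuture newChannel() {
    ChannelFuture cf;
    synchronized (cb) {
        cf = cb.connect(new InetSocketAddress("localhost", 18080));
    }
    // register the channel only once the connect attempt has completed,
    // so the pool never tracks a channel that is not connected yet
    cf.addListener(new ChannelFutureListener() {
        @Override
        public void operationComplete(ChannelFuture future) throws Exception {
            if (!future.isSuccess()) {
                return;
            }
            final Channel ch = future.getChannel();
            synchronized (activeChannels) {
                activeChannels.add(ch);
            }
            ch.getCloseFuture().addListener(new ChannelFutureListener() {
                @Override
                public void operationComplete(ChannelFuture closeFuture) {
                    synchronized (activeChannels) {
                        activeChannels.remove(ch);
                    }
                }
            });
        }
    });
    return cf; // the caller writes once this future completes successfully
}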

Netty client sometimes doesn't receive all expected messages

I have a fairly simple test Netty server/client project. I am testing some aspects of the stability of the communication by flooding the server with messages and counting the messages and bytes that I get back to make sure that everything matches.
When I run the flood from the client, the client keeps track of the number of messages it sends and how many it gets back and then when the number equal to each other it prints out some stats.
On certain occasions when running locally (I'm guessing because of congestion?) the client never ends up printing out the final message. I haven't run into this issue when the two components are on remote machines. Any suggestions would be appreciated.
The Encoder is just a simple OneToOneEncoder that encodes an Envelope to a ChannelBuffer, and the Decoder is a simple ReplayingDecoder that does the opposite.
I tried adding a channelInterestChanged method to my client handler to see if the channel's interest was getting changed to not-readable, but that did not seem to be the case.
The relevant code is below:
Thanks!
SERVER
public class Server {
// configuration --------------------------------------------------------------------------------------------------
private final int port;
private ServerChannelFactory serverFactory;
private DeviceIdAwareChannelGroup channelGroup;
// constructors ---------------------------------------------------------------------------------------------------
public Server(int port) {
this.port = port;
}
// public methods -------------------------------------------------------------------------------------------------
public boolean start() {
ExecutorService bossThreadPool = Executors.newCachedThreadPool();
ExecutorService childThreadPool = Executors.newCachedThreadPool();
this.serverFactory = new NioServerSocketChannelFactory(bossThreadPool, childThreadPool);
this.channelGroup = new DeviceIdAwareChannelGroup(this + "-channelGroup");
ChannelPipelineFactory pipelineFactory = new ChannelPipelineFactory() {
@Override
public ChannelPipeline getPipeline() throws Exception {
ChannelPipeline pipeline = Channels.pipeline();
pipeline.addLast("encoder", Encoder.getInstance());
pipeline.addLast("decoder", new Decoder());
pipeline.addLast("handler", new ServerHandler());
return pipeline;
}
};
ServerBootstrap bootstrap = new ServerBootstrap(this.serverFactory);
bootstrap.setOption("reuseAddress", true);
bootstrap.setOption("child.tcpNoDelay", true);
bootstrap.setOption("child.keepAlive", true);
bootstrap.setPipelineFactory(pipelineFactory);
Channel channel = bootstrap.bind(new InetSocketAddress(this.port));
if (!channel.isBound()) {
this.stop();
return false;
}
this.channelGroup.add(channel);
return true;
}
public void stop() {
if (this.channelGroup != null) {
ChannelGroupFuture channelGroupCloseFuture = this.channelGroup.close();
System.out.println("waiting for ChannelGroup shutdown...");
channelGroupCloseFuture.awaitUninterruptibly();
}
if (this.serverFactory != null) {
this.serverFactory.releaseExternalResources();
}
}
// main -----------------------------------------------------------------------------------------------------------
public static void main(String[] args) {
int port;
if (args.length != 3) {
System.out.println("No arguments found using default values");
port = 9999;
} else {
port = Integer.parseInt(args[1]);
}
final Server server = new Server( port);
if (!server.start()) {
System.exit(-1);
}
System.out.println("Server started on port 9999 ... ");
Runtime.getRuntime().addShutdownHook(new Thread() {
@Override
public void run() {
server.stop();
}
});
}
}
SERVER HANDLER
public class ServerHandler extends SimpleChannelUpstreamHandler {
// internal vars --------------------------------------------------------------------------------------------------
private AtomicInteger numMessagesReceived=new AtomicInteger(0);
// constructors ---------------------------------------------------------------------------------------------------
public ServerHandler() {
}
// SimpleChannelUpstreamHandler -----------------------------------------------------------------------------------
@Override
public void channelConnected(ChannelHandlerContext ctx, ChannelStateEvent e) throws Exception {
Channel c = e.getChannel();
System.out.println("ChannelConnected: channel id: " + c.getId() + ", remote host: " + c.getRemoteAddress() + ", isChannelConnected(): " + c.isConnected());
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) throws Exception {
System.out.println("*** EXCEPTION CAUGHT!!! ***");
e.getChannel().close();
}
@Override
public void channelDisconnected(ChannelHandlerContext ctx, ChannelStateEvent e) throws Exception {
super.channelDisconnected(ctx, e);
System.out.println("*** CHANNEL DISCONNECTED ***");
}
@Override
public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) throws Exception {
if(numMessagesReceived.incrementAndGet()%1000==0 ){
System.out.println("["+numMessagesReceived+"-TH MSG]: Received message: " + e.getMessage());
}
if (e.getMessage() instanceof Envelope) {
// echo it...
if (e.getChannel().isWritable()) {
e.getChannel().write(e.getMessage());
}
} else {
super.messageReceived(ctx, e);
}
}
}
CLIENT
public class Client implements ClientHandlerListener {
// configuration --------------------------------------------------------------------------------------------------
private final String host;
private final int port;
private final int messages;
// internal vars --------------------------------------------------------------------------------------------------
private ChannelFactory clientFactory;
private ChannelGroup channelGroup;
private ClientHandler handler;
private final AtomicInteger received;
private long startTime;
private ExecutorService cachedThreadPool = Executors.newCachedThreadPool();
// constructors ---------------------------------------------------------------------------------------------------
public Client(String host, int port, int messages) {
this.host = host;
this.port = port;
this.messages = messages;
this.received = new AtomicInteger(0);
}
// ClientHandlerListener ------------------------------------------------------------------------------------------
@Override
public void messageReceived(Envelope message) {
if (this.received.incrementAndGet() == this.messages) {
long stopTime = System.currentTimeMillis();
float timeInSeconds = (stopTime - this.startTime) / 1000f;
System.err.println("Sent and received " + this.messages + " in " + timeInSeconds + "s");
System.err.println("That's " + (this.messages / timeInSeconds) + " echoes per second!");
}
}
// public methods -------------------------------------------------------------------------------------------------
public boolean start() {
// For production scenarios, use limited sized thread pools
this.clientFactory = new NioClientSocketChannelFactory(cachedThreadPool, cachedThreadPool);
this.channelGroup = new DefaultChannelGroup(this + "-channelGroup");
this.handler = new ClientHandler(this, this.channelGroup);
ChannelPipelineFactory pipelineFactory = new ChannelPipelineFactory() {
@Override
public ChannelPipeline getPipeline() throws Exception {
ChannelPipeline pipeline = Channels.pipeline();
pipeline.addLast("byteCounter", new ByteCounter("clientByteCounter"));
pipeline.addLast("encoder", Encoder.getInstance());
pipeline.addLast("decoder", new Decoder());
pipeline.addLast("handler", handler);
return pipeline;
}
};
ClientBootstrap bootstrap = new ClientBootstrap(this.clientFactory);
bootstrap.setOption("reuseAddress", true);
bootstrap.setOption("tcpNoDelay", true);
bootstrap.setOption("keepAlive", true);
bootstrap.setPipelineFactory(pipelineFactory);
boolean connected = bootstrap.connect(new InetSocketAddress(host, port)).awaitUninterruptibly().isSuccess();
System.out.println("isConnected: " + connected);
if (!connected) {
this.stop();
}
return connected;
}
public void stop() {
if (this.channelGroup != null) {
this.channelGroup.close();
}
if (this.clientFactory != null) {
this.clientFactory.releaseExternalResources();
}
}
public ChannelFuture sendMessage(Envelope env) {
Channel ch = this.channelGroup.iterator().next();
ChannelFuture cf = ch.write(env);
return cf;
}
private void flood() {
if ((this.channelGroup == null) || (this.clientFactory == null)) {
return;
}
System.out.println("sending " + this.messages + " messages");
this.startTime = System.currentTimeMillis();
for (int i = 0; i < this.messages; i++) {
this.handler.sendMessage(new Envelope(Version.VERSION1, Type.REQUEST, 1, new byte[1]));
}
}
// main -----------------------------------------------------------------------------------------------------------
public static void main(String[] args) throws InterruptedException {
final Client client = new Client("localhost", 9999, 10000);
if (!client.start()) {
System.exit(-1);
return;
}
while (client.channelGroup.size() == 0) {
Thread.sleep(200);
}
System.out.println("Client started...");
client.flood();
Runtime.getRuntime().addShutdownHook(new Thread() {
@Override
public void run() {
System.out.println("shutting down client");
client.stop();
}
});
}
}
CLIENT HANDLER
public class ClientHandler extends SimpleChannelUpstreamHandler {
// internal vars --------------------------------------------------------------------------------------------------
private final ClientHandlerListener listener;
private final ChannelGroup channelGroup;
private Channel channel;
// constructors ---------------------------------------------------------------------------------------------------
public ClientHandler(ClientHandlerListener listener, ChannelGroup channelGroup) {
this.listener = listener;
this.channelGroup = channelGroup;
}
// SimpleChannelUpstreamHandler -----------------------------------------------------------------------------------
@Override
public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) throws Exception {
if (e.getMessage() instanceof Envelope) {
Envelope env = (Envelope) e.getMessage();
this.listener.messageReceived(env);
} else {
System.out.println("NOT ENVELOPE!!");
super.messageReceived(ctx, e);
}
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) throws Exception {
System.out.println("**** CAUGHT EXCEPTION CLOSING CHANNEL ***");
e.getCause().printStackTrace();
e.getChannel().close();
}
@Override
public void channelConnected(ChannelHandlerContext ctx, ChannelStateEvent e) throws Exception {
this.channel = e.getChannel();
System.out.println("Server connected, channel id: " + this.channel.getId());
this.channelGroup.add(e.getChannel());
}
// public methods -------------------------------------------------------------------------------------------------
public void sendMessage(Envelope envelope) {
if (this.channel != null) {
this.channel.write(envelope);
}
}
}
CLIENT HANDLER LISTENER INTERFACE
public interface ClientHandlerListener {
void messageReceived(Envelope message);
}
Without knowing how big the envelope is on the network I'm going to guess that your problem is that your client writes 10,000 messages without checking if the channel is writable.
Netty 3.x processes network events and writes in a particular fashion. It's possible that your client is writing so much data, so fast, that Netty isn't getting a chance to process receive events. On the server side this would result in the channel becoming non-writable and your handler dropping the reply.
You probably only see the problem on localhost because the loopback write bandwidth is much higher than your real network bandwidth. The client doesn't check whether the channel is writable, so over a network your messages are buffered by Netty until the network can catch up (if you wrote significantly more than 10,000 messages you might see an OutOfMemoryError). This acts as a natural brake: Netty suspends writing until the network is ready, which lets it process incoming data and prevents the server from ever seeing a non-writable channel.
The DiscardClientHandler in the discard example shows how to test whether the channel is writable and how to resume writing when it becomes writable again. Another option is to have sendMessage return the ChannelFuture associated with the write and, if the channel is not writable after the write, block until the future completes.
Also, your server handler should write the message and then check whether the channel is writable. If it isn't, you should set the channel's readable flag to false. Netty will fire channelInterestChanged when the channel becomes writable again, and you can then set readable back to true to resume reading messages.
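A rough sketch of the client-side variant, assuming sendMessage is changed to return the write future (blocking on the future keeps the example short; the non-blocking alternative is to resume sending from channelInterestChanged as the DiscardClientHandler does):
// ClientHandler: return the write future so callers can throttle on it
public ChannelFuture sendMessage(Envelope envelope) {
    return this.channel == null ? null : this.channel.write(envelope);
}
// Client.flood(): back off while the channel's outbound buffer is full
for (int i = 0; i < this.messages; i++) {
    ChannelFuture f = this.handler.sendMessage(new Envelope(Version.VERSION1, Type.REQUEST, 1, new byte[1]));
    if (f != null && !f.getChannel().isWritable()) {
        // crude but effective: wait for this write to flush before queueing more,
        // so Netty gets a chance to process the echoes coming back
        f.awaitUninterruptibly();
    }
}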

Handling ReadTimeoutHandler time out

I just can't figure out why my read timeout is not working. All I want to do is wait for 10 seconds for some thread to put a message into a BlockingQueue<String>, and on timeout return some kind of response to the client.
public class NioAsynChatPipelineFactory implements ChannelPipelineFactory {
private static Timer timer = new HashedWheelTimer();
private final ChannelHandler timeoutHandler = new ReadTimeoutHandler(timer, 10);
@Override
public ChannelPipeline getPipeline() throws Exception {
ChannelPipeline pipeline = Channels.pipeline();
pipeline.addLast("decoder", new HttpRequestDecoder());
pipeline.addLast("encoder", new HttpResponseEncoder());
pipeline.addLast("handler", new NioAsynChatHandler());
pipeline.addLast("timeout", this.timeoutHandler);
return pipeline;
}
}
Now my handler looks like this.
public class NioAsynChatHandler extends SimpleChannelUpstreamHandler{
@Override
public void handleUpstream(
ChannelHandlerContext ctx, ChannelEvent e) throws Exception {
super.handleUpstream(ctx, e);
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e)
throws Exception {
System.out.println("Exception");
// writing some kind of response and closing the channel
}
@Override
public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) throws Exception {
Thread thread = new Thread(new ConsumerTask(e.getChannel()));
thread.start();
}
and inside ConsumerTask I'm just waiting on the BlockingQueue for a response:
public class ConsumerTask implements Runnable{
private Channel channel;
public ConsumerTask(Channel channel){
this.channel = channel;
}
@Override
public void run() {
try{
while(true){
String message = queue.take();
}
} catch(InterruptedException ex){
Thread.currentThread().interrupt();
} finally{
//write something to channel and close it
}
}
My problem is that I don't see any exception occurring on timeout.
What am I doing wrong?
Update:
public static final BlockingQueue<String> blockingQueue = new LinkedBlockingQueue<String>();
Actually my question is more generic: how do I close the channel on timeout while it is waiting for something in an external thread?
Update 2:
Another question: given that I'm starting an external thread from the channel handler, would it be better to use an OrderedMemoryAwareThreadPoolExecutor in the pipeline? Would it increase performance?
It's basically because you put the ReadTimeoutHandler in the wrong position. Put it in the first position of the pipeline (i.e. before all the other handlers).
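That is, the getPipeline() above could become something like this (same handlers, just reordered):
@Override
public ChannelPipeline getPipeline() throws Exception {
    ChannelPipeline pipeline = Channels.pipeline();
    // the timeout handler goes first so it sees the raw read events before any decoding
    pipeline.addLast("timeout", this.timeoutHandler);
    pipeline.addLast("decoder", new HttpRequestDecoder());
    pipeline.addLast("encoder", new HttpResponseEncoder());
    pipeline.addLast("handler", new NioAsynChatHandler());
    return pipeline;
}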
