Initially I am able to make the connection. But if I simply close the connection on the client and try to connect again, or restart the client, the connection is not established. It creates a connection only once.
Can someone help me improve it so that it can handle n clients simultaneously?
bossGroup = new NioEventLoopGroup(1);
workerGroup = new NioEventLoopGroup();
try {
ServerBootstrap b = new ServerBootstrap();
b.group(bossGroup, workerGroup).channel(NioServerSocketChannel.class).option(ChannelOption.SO_BACKLOG, 100)
.handler(new LoggingHandler(LogLevel.INFO)).childHandler(new ChannelInitializer<SocketChannel>() {
@Override
public void initChannel(SocketChannel ch) throws Exception {
ChannelPipeline p = ch.pipeline();
p.addLast(new DelimiterBasedFrameDecoder(20000, Delimiters.lineDelimiter()));
// p.addLast(new StringDecoder());
// p.addLast(new StringEncoder());
p.addLast(serverHandler);
}
});
// Start the server.
LOGGER.key("Simulator is opening listen port").low().end();
ChannelFuture f = b.bind(config.getPort()).sync();
LOGGER.key("Simulator started listening at port: " + config.getPort()).low().end();
// Wait until the server socket is closed.
f.channel().closeFuture().sync();
} finally {
// Shut down all event loops to terminate all threads.
LOGGER.key("Shtting down all the thread if anyone is still open.").low().end();
bossGroup.shutdownGracefully();
workerGroup.shutdownGracefully();
}
Server Handler code is below:
public class SimulatorServerHandler extends SimpleChannelInboundHandler<String> {
private AtomicReference<ChannelHandlerContext> ctxRef = new AtomicReference<ChannelHandlerContext>();
private final ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
private AtomicInteger seqNum = new AtomicInteger(1);
private final Configuration configure;
private ScheduledFuture<?> hbTimerWorker;
private final int stx = 0x02;
private final int etx = 0x03;
private final ILogger LOGGER;
public int enablePublishFunction = 0;
public SimulatorServerHandler(Configuration config) {
this.configure = config;
//LOGGER = LogFactory.INSTANCE.createLogger();
LOGGER = new LogFactory().createLogger("SIM SERVER");
}
@Override
public void channelActive(ChannelHandlerContext ctx) throws Exception {
ctxRef.set(ctx);
enablePublishFunction =1;
// System.out.println("Connected!");
LOGGER.low().key("Gateway connected to the Simulator ").end();
startHBTimer();
}
@Override
public void channelInactive(ChannelHandlerContext ctx) throws Exception {
ctx.fireChannelInactive();
hbTimerWorker.cancel(false);
enablePublishFunction =0;
LOGGER.low().key("Gateway disconnected from the Simulator ").end();
}
@Override
public void channelRead0(ChannelHandlerContext ctx, String request) {
// Generate and write a response.
String response;
boolean close = false;
/* if (request.isEmpty()) {
response = "Please type something.\r\n";
} else if ("bye".equals(request.toLowerCase())) {
response = "Have a good day!\r\n";
close = true;
} else {
response = "Did you say '" + request + "'?\r\n";
}
// We do not need to write a ChannelBuffer here.
// We know the encoder inserted at TelnetPipelineFactory will do the conversion.
ChannelFuture future = ctx.write(response);
// Close the connection after sending 'Have a good day!'
// if the client has sent 'bye'.
if (close) {
future.addListener(ChannelFutureListener.CLOSE);
}
*/
System.out.println(request);
}
@Override
public void channelReadComplete(ChannelHandlerContext ctx) {
ctx.flush();
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
LOGGER.key("Unknown exception while network communication :"+ cause.getStackTrace()).high().end();
cause.printStackTrace();
ctx.close();
}
Maybe because you always use the very same server handler instance in your pipeline for all connections (instead of creating a new ServerHandler() per connection)? Side effects in your implementation could prevent your handler from being reusable.
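A minimal sketch of that fix, reusing the initializer from the question (it assumes SimulatorServerHandler takes the Configuration shown above):
// Create a fresh handler per accepted connection instead of sharing one instance.
b.group(bossGroup, workerGroup)
 .channel(NioServerSocketChannel.class)
 .option(ChannelOption.SO_BACKLOG, 100)
 .handler(new LoggingHandler(LogLevel.INFO))
 .childHandler(new ChannelInitializer<SocketChannel>() {
     @Override
     public void initChannel(SocketChannel ch) throws Exception {
         ChannelPipeline p = ch.pipeline();
         p.addLast(new DelimiterBasedFrameDecoder(20000, Delimiters.lineDelimiter()));
         p.addLast(new SimulatorServerHandler(config)); // new instance per channel
     }
 });
Alternatively, a handler that really is stateless can be annotated with @Sharable, but the AtomicReference, scheduler, and enablePublishFunction fields above rule that out here.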
I am writing a Minecraft server in Java from scratch for private reasons.
I am new to the Netty API, so please explain how I can fix this.
My problem is pretty simple: my server waits for a connection, then reads data from that connection, but it never reads the next bit of info.
https://wiki.vg/Server_List_Ping
I followed that, and everything goes well up until the request packet, which my server never reads.
I don't know what the problem is. I think the connection is being closed, but I have no idea how to stop that.
Here's the code:
public class DataHandler extends SimpleChannelInboundHandler {
public void initChannel(NioServerSocketChannel nioServerSocketChannel) throws Exception {
try {
System.out.println("Data Handler");
}catch (Exception e) {
}
}
@Override
public void channelReadComplete(ChannelHandlerContext ctx) throws Exception {
System.out.println("[DEBUG] Read complete");
//ctx.writeAndFlush(Unpooled.EMPTY_BUFFER)
// .addListener(ChannelFutureListener.CLOSE);
}
@Override
public void channelActive(ChannelHandlerContext ctx) throws Exception {
System.out.println("Data Handler active");
ctx.channel().read();
//ctx.pipeline().addLast("encoder",new Encoder());
//ctx.fireChannelActive();
}
private int pos = 0;
public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
ByteBuf buf = (ByteBuf) msg;
//ByteBuf packet = buf.readBytes(length);
int length = readVarInt(buf);
int ID = readVarInt(buf);
System.out.println("[DEBUG] PACKET ID: "+ID);
Packet packet = PacketUtil.createPacket(ID,length,buf,ctx);
packet.readBuf();
Object ran = null;
//super.channelRead(ctx, msg);
}
@Override
protected void channelRead0(ChannelHandlerContext channelHandlerContext, Object o) throws Exception {
System.out.println("Test");
}
}
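For reference, readVarInt is not shown above; a standard Minecraft-protocol VarInt decode looks roughly like this (an assumed implementation, not necessarily the one the asker uses):
private int readVarInt(ByteBuf buf) {
    int value = 0;
    int shift = 0;
    byte b;
    do {
        b = buf.readByte();
        value |= (b & 0x7F) << shift; // 7 payload bits per byte
        shift += 7;
        if (shift > 35) {
            // Guard against malformed VarInts longer than 5 bytes.
            throw new RuntimeException("VarInt too long");
        }
    } while ((b & 0x80) != 0); // high bit set means another byte follows
    return value;
}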
There are some trial-and-error comments in there; I did not know if I should have left them in.
Here's the main class:
public class Server {
private int port;
public void run() throws IOException {
port = 25565;
EventLoopGroup mainGroup = new NioEventLoopGroup();
EventLoopGroup threadGroup = new NioEventLoopGroup();
try {
ServerBootstrap b = new ServerBootstrap();
b.group(mainGroup, threadGroup)
.channel(NioServerSocketChannel.class)
.childHandler(new ChannelInitializer<SocketChannel>() {
protected void initChannel(SocketChannel socketChannel) throws Exception {
socketChannel.pipeline().addLast(new DataHandler());
}
})
.option(ChannelOption.SO_BACKLOG, 5)
.option(ChannelOption.AUTO_READ,true)
.childOption(ChannelOption.SO_KEEPALIVE, true);
ChannelFuture channelFuture = b.localAddress(port).bind().sync();
System.out.println(String.format("Started on port %d", port));
System.out.println("Registering packets");
PacketUtil.registerPackets();
}catch(InterruptedException e) {
}
}
}
I'm going through the default manual and I've encountered a problem. My echo server sends messages to the client, but I don't see them. As a telnet program I use PuTTY.
The code is the same:
public class DiscardServerHandler extends ChannelInboundHandlerAdapter { // (1)
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) { // (2)
ChannelFuture cf = ctx.write(msg);
ctx.flush();
if (!cf.isSuccess()) {
System.out.println("Send failed: " + cf.cause());
}else{
System.out.println("Send worked.");
}
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) { // (4)
// Close the connection when an exception is raised.
cause.printStackTrace();
ctx.close();
}
}
And second class:
public class DiscardServer {
private int port;
public DiscardServer(int port) {
this.port = port;
}
public void run() throws Exception {
EventLoopGroup bossGroup = new NioEventLoopGroup(); // (1)
EventLoopGroup workerGroup = new NioEventLoopGroup();
try {
ServerBootstrap b = new ServerBootstrap(); // (2)
b.group(bossGroup, workerGroup)
.channel(NioServerSocketChannel.class) // (3)
.childHandler(new ChannelInitializer<SocketChannel>() { // (4)
@Override
public void initChannel(SocketChannel ch) throws Exception {
ch.pipeline().addLast(new DiscardServerHandler());
}
})
.option(ChannelOption.SO_BACKLOG, 128) // (5)
.childOption(ChannelOption.SO_KEEPALIVE, true); // (6)
// Bind and start to accept incoming connections.
ChannelFuture f = b.bind(port).sync(); // (7)
// Wait until the server socket is closed.
// In this example, this does not happen, but you can do that to gracefully
// shut down your server.
f.channel().closeFuture().sync();
} finally {
workerGroup.shutdownGracefully();
bossGroup.shutdownGracefully();
}
}
public static void main(String[] args) throws Exception {
int port;
if (args.length > 0) {
port = Integer.parseInt(args[0]);
} else {
port = 8080;
}
new DiscardServer(port).run();
}
}
cf.isSuccess() is true, but in the console (PuTTY) I don't see anything. If I try to send just text
ctx.writeAndFlush(Unpooled.copiedBuffer("Netty MAY rock!", CharsetUtil.UTF_8));
it works. But if I try to send "msg", I get nothing.
Thanks in advance for your reply.
To read and write a non-ByteBuf message you need a decoder and encoder.
ch.pipeline().addLast(new LineBasedFrameDecoder(80))
.addLast(new StringDecoder())
.addLast(new StringEncoder())
.addLast(new DiscardServerHandler());
Or you can decode and encode messages manually. To encode a String into a ByteBuf:
ctx.writeAndFlush(Unpooled.copiedBuffer("Netty MAY rock!", CharsetUtil.UTF_8));
The same code you have posted here works as it is. Tested it.
I want to write a Netty-based client. It should have a method public String send(String msg); which returns the response from the server, or some future; it doesn't matter. It should also be multithreaded. Like this:
public class Client {
public static void main(String[] args) throws InterruptedException {
Client client = new Client();
}
private Channel channel;
public Client() throws InterruptedException {
EventLoopGroup loopGroup = new NioEventLoopGroup();
Bootstrap b = new Bootstrap();
b.group(loopGroup).channel(NioSocketChannel.class).handler(new ChannelInitializer<SocketChannel>() {
@Override
protected void initChannel(SocketChannel ch) throws Exception {
ch.pipeline().addLast(new StringDecoder()).
addLast(new StringEncoder()).
addLast(new ClientHandler());
}
});
channel = b.connect("localhost", 9091).sync().channel();
}
public String sendMessage(String msg) {
channel.writeAndFlush(msg);
return ??????????;
}
}
And I don't get how I can retrieve the response from the server after I invoke writeAndFlush(). What should I do?
Also, I use Netty 4.0.18.Final.
Returning a Future<String> from the method is simple; we are going to implement the following method signature:
public Future<String> sendMessage(String msg) {
This is relatively easy to do once you are familiar with the async programming structures. To solve the design problem, we are going to take the following steps:
When a message is written, add a Promise<String> to an ArrayBlockingQueue<Promise>.
This serves as a list of the messages that have recently been sent, and allows us to change the result that our Future<String> objects return.
When a message arrives back at the handler, resolve it against the head of the queue.
This allows us to get the correct future to change.
Update the state of the Promise<String>.
We call promise.setSuccess() to finally set the state on the object; this will propagate back to the future object.
Example code
public class ClientHandler extends SimpleChannelInboundHandler<String> {
private ChannelHandlerContext ctx;
private BlockingQueue<Promise<String>> messageList = new ArrayBlockingQueue<>(16);
@Override
public void channelActive(ChannelHandlerContext ctx) throws Exception {
super.channelActive(ctx);
this.ctx = ctx;
}
@Override
public void channelInactive(ChannelHandlerContext ctx) throws Exception {
super.channelInactive(ctx);
synchronized(this){
Promise<String> prom;
while((prom = messageList.poll()) != null)
prom.setFailure(new IOException("Connection lost"));
messageList = null;
}
}
public Future<String> sendMessage(String message) {
if(ctx == null)
throw new IllegalStateException();
return sendMessage(message, ctx.executor().newPromise());
}
public Future<String> sendMessage(String message, Promise<String> prom) {
synchronized(this){
if(messageList == null) {
// Connection closed
prom.setFailure(new IllegalStateException());
} else if(messageList.offer(prom)) {
// Connection open and message accepted
ctx.writeAndFlush(message);
} else {
// Connection open and message rejected
prom.setFailure(new BufferOverflowException());
}
return prom;
}
}
@Override
protected void channelRead0(ChannelHandlerContext ctx, String msg) {
synchronized(this){
if(messageList != null) {
messageList.poll().setSuccess(msg);
}
}
}
}
Documentation breakdown
private ChannelHandlerContext ctx;
Used to store our reference to the ChannelHandlerContext; we use it so we can create promises.
private BlockingQueue<Promise<String>> messageList = new ArrayBlockingQueue<>(16);
We keep the recently sent messages in this queue so we can change the result of each future.
public void channelActive(ChannelHandlerContext ctx)
Called by netty when the connection becomes active. Init our variables here.
public void channelInactive(ChannelHandlerContext ctx)
Called by netty when the connection becomes inactive, either due to error or normal connection close.
protected void channelRead0(ChannelHandlerContext ctx, String msg)
Called by Netty when a new message arrives; here we pick out the head of the queue and call setSuccess() on it.
A word of warning
When using futures, there is one thing you need to look out for: do not call get() from one of the Netty threads if the future isn't done yet. Failing to follow this simple rule will result in either a deadlock or a BlockingOperationException.
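For example, instead of blocking on get(), attach a listener (a minimal sketch; handler is assumed to be the ClientHandler instance from above):
Future<String> future = handler.sendMessage("hello");
future.addListener(new GenericFutureListener<Future<String>>() {
    @Override
    public void operationComplete(Future<String> f) {
        if (f.isSuccess()) {
            System.out.println("Reply: " + f.getNow()); // runs on the event loop, never blocks
        } else {
            f.cause().printStackTrace();
        }
    }
});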
You can find the full sample in the Netty project.
We can save the result into the last handler's custom fields. In the following code, handler.getFactorial() is what we want.
Refer to http://www.lookatsrc.com/source/io/netty/example/factorial/FactorialClient.java?a=io.netty:netty-all
FactorialClient.java
public final class FactorialClient {
static final boolean SSL = System.getProperty("ssl") != null;
static final String HOST = System.getProperty("host", "127.0.0.1");
static final int PORT = Integer.parseInt(System.getProperty("port", "8322"));
static final int COUNT = Integer.parseInt(System.getProperty("count", "1000"));
public static void main(String[] args) throws Exception {
// Configure SSL.
final SslContext sslCtx;
if (SSL) {
sslCtx = SslContextBuilder.forClient()
.trustManager(InsecureTrustManagerFactory.INSTANCE).build();
} else {
sslCtx = null;
}
EventLoopGroup group = new NioEventLoopGroup();
try {
Bootstrap b = new Bootstrap();
b.group(group)
.channel(NioSocketChannel.class)
.handler(new FactorialClientInitializer(sslCtx));
// Make a new connection.
ChannelFuture f = b.connect(HOST, PORT).sync();
// Get the handler instance to retrieve the answer.
FactorialClientHandler handler =
(FactorialClientHandler) f.channel().pipeline().last();
// Print out the answer.
System.err.format("Factorial of %,d is: %,d", COUNT, handler.getFactorial());
} finally {
group.shutdownGracefully();
}
}
}
public class FactorialClientHandler extends SimpleChannelInboundHandler<BigInteger> {
private ChannelHandlerContext ctx;
private int receivedMessages;
private int next = 1;
final BlockingQueue<BigInteger> answer = new LinkedBlockingQueue<BigInteger>();
public BigInteger getFactorial() {
boolean interrupted = false;
try {
for (;;) {
try {
return answer.take();
} catch (InterruptedException ignore) {
interrupted = true;
}
}
} finally {
if (interrupted) {
Thread.currentThread().interrupt();
}
}
}
@Override
public void channelActive(ChannelHandlerContext ctx) {
this.ctx = ctx;
sendNumbers();
}
@Override
public void channelRead0(ChannelHandlerContext ctx, final BigInteger msg) {
receivedMessages ++;
if (receivedMessages == FactorialClient.COUNT) {
// Offer the answer after closing the connection.
ctx.channel().close().addListener(new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture future) {
boolean offered = answer.offer(msg);
assert offered;
}
});
}
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
cause.printStackTrace();
ctx.close();
}
private void sendNumbers() {
// Do not send more than 4096 numbers.
ChannelFuture future = null;
for (int i = 0; i < 4096 && next <= FactorialClient.COUNT; i++) {
future = ctx.write(Integer.valueOf(next));
next++;
}
if (next <= FactorialClient.COUNT) {
assert future != null;
future.addListener(numberSender);
}
ctx.flush();
}
private final ChannelFutureListener numberSender = new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture future) throws Exception {
if (future.isSuccess()) {
sendNumbers();
} else {
future.cause().printStackTrace();
future.channel().close();
}
}
};
}
Calling channel.writeAndFlush(msg); already returns a ChannelFuture. To handle the result of this method call, you could add a listener to the future like this:
future.addListener(new ChannelFutureListener() {
public void operationComplete(ChannelFuture future) {
// Perform post-closure operation
// ...
}
});
(this is taken from the Netty documentation; see the Netty docs)
I'm getting a java.nio.channels.NotYetConnectedException in the following code because I'm trying to write to a channel that is not yet open.
Essentially what I have is a channel pool in which I grab a channel to write to if one is free, and I create a new channel if one is not available. My problem is that when I create a new channel, the channel is not ready for writing when I call connect, and I don't want to wait for the connection to open before returning because I don't want to block the thread. What's the best way to do this? Also, is my logic for retrieving/returning channels valid? See code below.
I have a simple connection pool like the following:
private static class ChannelPool {
private final ClientBootstrap cb;
private Set<Channel> activeChannels = new HashSet<Channel>();
private Deque<Channel> freeChannels = new ArrayDeque<Channel>();
public ChannelPool() {
ChannelFactory clientFactory =
new NioClientSocketChannelFactory(
Executors.newCachedThreadPool(),
Executors.newCachedThreadPool());
cb = new ClientBootstrap(clientFactory);
cb.setPipelineFactory(new ChannelPipelineFactory() {
public ChannelPipeline getPipeline() {
return Channels.pipeline(
new HttpRequestEncoder(),
new HttpResponseDecoder(),
new ResponseHandler());
}
});
}
private Channel newChannel() {
ChannelFuture cf;
synchronized (cb) {
cf = cb.connect(new InetSocketAddress("localhost", 18080));
}
final Channel ret = cf.getChannel();
ret.getCloseFuture().addListener(new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture arg0) throws Exception {
System.out.println("channel closed?");
synchronized (activeChannels) {
activeChannels.remove(ret);
}
}
});
synchronized (activeChannels) {
activeChannels.add(ret);
}
System.out.println("returning new channel");
return ret;
}
public Channel getFreeChannel() {
synchronized (freeChannels) {
while (!freeChannels.isEmpty()) {
Channel ch = freeChannels.pollFirst();
if (ch.isOpen()) {
return ch;
}
}
}
return newChannel();
}
public void returnChannel(Channel ch) {
synchronized (freeChannels) {
freeChannels.addLast(ch);
}
}
}
I'm trying to use this inside a handler as follows:
private static class RequestHandler extends SimpleChannelHandler {
@Override
public void messageReceived(ChannelHandlerContext ctx, final MessageEvent e) {
final HttpRequest request = (HttpRequest) e.getMessage();
Channel proxyChannel = pool.getFreeChannel();
proxyToClient.put(proxyChannel, e.getChannel());
proxyChannel.write(request);
}
}
Instead of adding the new channel to activeChannels immediately after bootstrap.connect(..), you have to add a listener to the ChannelFuture returned by bootstrap.connect(..), and add the channel to activeChannels in that listener. That way, getFreeChannel() will never return a channel that is not connected yet.
Because it is likely that activeChannels is empty even after you have called newChannel() (newChannel() returns before the connection is established), you have to decide what to do in such a case. If I were you, I would change the return type of getFreeChannel() from Channel to ChannelFuture so that the caller gets notified when the free channel is ready.
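A minimal sketch of that change against the Netty 3 API used in the question (field names taken from the pool above):
private ChannelFuture newChannelFuture() {
    ChannelFuture cf;
    synchronized (cb) {
        cf = cb.connect(new InetSocketAddress("localhost", 18080));
    }
    cf.addListener(new ChannelFutureListener() {
        @Override
        public void operationComplete(ChannelFuture future) {
            if (future.isSuccess()) {
                // Only a fully connected channel is handed to the pool.
                synchronized (activeChannels) {
                    activeChannels.add(future.getChannel());
                }
            }
        }
    });
    return cf;
}
Callers would then register their own listener on the returned future and write once it succeeds, rather than writing to a channel that may still be connecting.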
I have a fairly simple test Netty server/client project. I am testing some aspects of the stability of the communication by flooding the server with messages and counting the messages and bytes that I get back, to make sure that everything matches.
When I run the flood from the client, the client keeps track of the number of messages it sends and how many it gets back, and when the two numbers are equal it prints out some stats.
On certain occasions when running locally (I'm guessing because of congestion?) the client never ends up printing out the final message. I haven't run into this issue when the two components are on remote machines. Any suggestions would be appreciated:
The Encoder is just a simple OneToOneEncoder that encodes an Envelope type to a ChannelBuffer and the Decoder is a simple ReplayDecoder that does the opposite.
I tried adding a channelInterestChanged method to my client handler to see if the channel's interest was getting changed to not read, but that did not seem to be the case.
The relevant code is below:
Thanks!
SERVER
public class Server {
// configuration --------------------------------------------------------------------------------------------------
private final int port;
private ServerChannelFactory serverFactory;
private DeviceIdAwareChannelGroup channelGroup;
// constructors ---------------------------------------------------------------------------------------------------
public Server(int port) {
this.port = port;
}
// public methods -------------------------------------------------------------------------------------------------
public boolean start() {
ExecutorService bossThreadPool = Executors.newCachedThreadPool();
ExecutorService childThreadPool = Executors.newCachedThreadPool();
this.serverFactory = new NioServerSocketChannelFactory(bossThreadPool, childThreadPool);
this.channelGroup = new DeviceIdAwareChannelGroup(this + "-channelGroup");
ChannelPipelineFactory pipelineFactory = new ChannelPipelineFactory() {
@Override
public ChannelPipeline getPipeline() throws Exception {
ChannelPipeline pipeline = Channels.pipeline();
pipeline.addLast("encoder", Encoder.getInstance());
pipeline.addLast("decoder", new Decoder());
pipeline.addLast("handler", new ServerHandler());
return pipeline;
}
};
ServerBootstrap bootstrap = new ServerBootstrap(this.serverFactory);
bootstrap.setOption("reuseAddress", true);
bootstrap.setOption("child.tcpNoDelay", true);
bootstrap.setOption("child.keepAlive", true);
bootstrap.setPipelineFactory(pipelineFactory);
Channel channel = bootstrap.bind(new InetSocketAddress(this.port));
if (!channel.isBound()) {
this.stop();
return false;
}
this.channelGroup.add(channel);
return true;
}
public void stop() {
if (this.channelGroup != null) {
ChannelGroupFuture channelGroupCloseFuture = this.channelGroup.close();
System.out.println("waiting for ChannelGroup shutdown...");
channelGroupCloseFuture.awaitUninterruptibly();
}
if (this.serverFactory != null) {
this.serverFactory.releaseExternalResources();
}
}
// main -----------------------------------------------------------------------------------------------------------
public static void main(String[] args) {
int port;
if (args.length != 3) {
System.out.println("No arguments found using default values");
port = 9999;
} else {
port = Integer.parseInt(args[1]);
}
final Server server = new Server( port);
if (!server.start()) {
System.exit(-1);
}
System.out.println("Server started on port 9999 ... ");
Runtime.getRuntime().addShutdownHook(new Thread() {
@Override
public void run() {
server.stop();
}
});
}
}
SERVER HANDLER
public class ServerHandler extends SimpleChannelUpstreamHandler {
// internal vars --------------------------------------------------------------------------------------------------
private AtomicInteger numMessagesReceived=new AtomicInteger(0);
// constructors ---------------------------------------------------------------------------------------------------
public ServerHandler() {
}
// SimpleChannelUpstreamHandler -----------------------------------------------------------------------------------
@Override
public void channelConnected(ChannelHandlerContext ctx, ChannelStateEvent e) throws Exception {
Channel c = e.getChannel();
System.out.println("ChannelConnected: channel id: " + c.getId() + ", remote host: " + c.getRemoteAddress() + ", isChannelConnected(): " + c.isConnected());
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) throws Exception {
System.out.println("*** EXCEPTION CAUGHT!!! ***");
e.getChannel().close();
}
@Override
public void channelDisconnected(ChannelHandlerContext ctx, ChannelStateEvent e) throws Exception {
super.channelDisconnected(ctx, e);
System.out.println("*** CHANNEL DISCONNECTED ***");
}
@Override
public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) throws Exception {
if(numMessagesReceived.incrementAndGet()%1000==0 ){
System.out.println("["+numMessagesReceived+"-TH MSG]: Received message: " + e.getMessage());
}
if (e.getMessage() instanceof Envelope) {
// echo it...
if (e.getChannel().isWritable()) {
e.getChannel().write(e.getMessage());
}
} else {
super.messageReceived(ctx, e);
}
}
}
CLIENT
public class Client implements ClientHandlerListener {
// configuration --------------------------------------------------------------------------------------------------
private final String host;
private final int port;
private final int messages;
// internal vars --------------------------------------------------------------------------------------------------
private ChannelFactory clientFactory;
private ChannelGroup channelGroup;
private ClientHandler handler;
private final AtomicInteger received;
private long startTime;
private ExecutorService cachedThreadPool = Executors.newCachedThreadPool();
// constructors ---------------------------------------------------------------------------------------------------
public Client(String host, int port, int messages) {
this.host = host;
this.port = port;
this.messages = messages;
this.received = new AtomicInteger(0);
}
// ClientHandlerListener ------------------------------------------------------------------------------------------
@Override
public void messageReceived(Envelope message) {
if (this.received.incrementAndGet() == this.messages) {
long stopTime = System.currentTimeMillis();
float timeInSeconds = (stopTime - this.startTime) / 1000f;
System.err.println("Sent and received " + this.messages + " in " + timeInSeconds + "s");
System.err.println("That's " + (this.messages / timeInSeconds) + " echoes per second!");
}
}
// public methods -------------------------------------------------------------------------------------------------
public boolean start() {
// For production scenarios, use limited sized thread pools
this.clientFactory = new NioClientSocketChannelFactory(cachedThreadPool, cachedThreadPool);
this.channelGroup = new DefaultChannelGroup(this + "-channelGroup");
this.handler = new ClientHandler(this, this.channelGroup);
ChannelPipelineFactory pipelineFactory = new ChannelPipelineFactory() {
@Override
public ChannelPipeline getPipeline() throws Exception {
ChannelPipeline pipeline = Channels.pipeline();
pipeline.addLast("byteCounter", new ByteCounter("clientByteCounter"));
pipeline.addLast("encoder", Encoder.getInstance());
pipeline.addLast("decoder", new Decoder());
pipeline.addLast("handler", handler);
return pipeline;
}
};
ClientBootstrap bootstrap = new ClientBootstrap(this.clientFactory);
bootstrap.setOption("reuseAddress", true);
bootstrap.setOption("tcpNoDelay", true);
bootstrap.setOption("keepAlive", true);
bootstrap.setPipelineFactory(pipelineFactory);
boolean connected = bootstrap.connect(new InetSocketAddress(host, port)).awaitUninterruptibly().isSuccess();
System.out.println("isConnected: " + connected);
if (!connected) {
this.stop();
}
return connected;
}
public void stop() {
if (this.channelGroup != null) {
this.channelGroup.close();
}
if (this.clientFactory != null) {
this.clientFactory.releaseExternalResources();
}
}
public ChannelFuture sendMessage(Envelope env) {
Channel ch = this.channelGroup.iterator().next();
ChannelFuture cf = ch.write(env);
return cf;
}
private void flood() {
if ((this.channelGroup == null) || (this.clientFactory == null)) {
return;
}
System.out.println("sending " + this.messages + " messages");
this.startTime = System.currentTimeMillis();
for (int i = 0; i < this.messages; i++) {
this.handler.sendMessage(new Envelope(Version.VERSION1, Type.REQUEST, 1, new byte[1]));
}
}
// main -----------------------------------------------------------------------------------------------------------
public static void main(String[] args) throws InterruptedException {
final Client client = new Client("localhost", 9999, 10000);
if (!client.start()) {
System.exit(-1);
return;
}
while (client.channelGroup.size() == 0) {
Thread.sleep(200);
}
System.out.println("Client started...");
client.flood();
Runtime.getRuntime().addShutdownHook(new Thread() {
@Override
public void run() {
System.out.println("shutting down client");
client.stop();
}
});
}
}
CLIENT HANDLER
public class ClientHandler extends SimpleChannelUpstreamHandler {
// internal vars --------------------------------------------------------------------------------------------------
private final ClientHandlerListener listener;
private final ChannelGroup channelGroup;
private Channel channel;
// constructors ---------------------------------------------------------------------------------------------------
public ClientHandler(ClientHandlerListener listener, ChannelGroup channelGroup) {
this.listener = listener;
this.channelGroup = channelGroup;
}
// SimpleChannelUpstreamHandler -----------------------------------------------------------------------------------
@Override
public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) throws Exception {
if (e.getMessage() instanceof Envelope) {
Envelope env = (Envelope) e.getMessage();
this.listener.messageReceived(env);
} else {
System.out.println("NOT ENVELOPE!!");
super.messageReceived(ctx, e);
}
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) throws Exception {
System.out.println("**** CAUGHT EXCEPTION CLOSING CHANNEL ***");
e.getCause().printStackTrace();
e.getChannel().close();
}
@Override
public void channelConnected(ChannelHandlerContext ctx, ChannelStateEvent e) throws Exception {
this.channel = e.getChannel();
System.out.println("Server connected, channel id: " + this.channel.getId());
this.channelGroup.add(e.getChannel());
}
// public methods -------------------------------------------------------------------------------------------------
public void sendMessage(Envelope envelope) {
if (this.channel != null) {
this.channel.write(envelope);
}
}
}
CLIENT HANDLER LISTENER INTERFACE
public interface ClientHandlerListener {
void messageReceived(Envelope message);
}
Without knowing how big the envelope is on the network I'm going to guess that your problem is that your client writes 10,000 messages without checking if the channel is writable.
Netty 3.x processes network events and writes in a particular fashion. It's possible that your client is writing so much data so fast that Netty isn't getting a chance to process receive events. On the server side this would result in the channel becoming non writable and your handler dropping the reply.
There are a few reasons why you see the problem on localhost, but it's probably because the write bandwidth is much higher than your network bandwidth. The client doesn't check if the channel is writable, so over a network your messages are buffered by Netty until the network can catch up (if you wrote significantly more than 10,000 messages you might see an OutOfMemoryError). This acts as a natural brake, because Netty will suspend writing until the network is ready, allowing it to process incoming data and preventing the server from seeing a channel that's not writable.
The DiscardClientHandler in the discard example shows how to test if the channel is writable, and how to resume when it becomes writable again. Another option is to have sendMessage return the ChannelFuture associated with the write and, if the channel is not writable after the write, block until the future completes.
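A sketch of that second option on the client side, assuming sendMessage is only ever called from outside the Netty I/O threads:
public ChannelFuture sendMessage(Envelope env) {
    ChannelFuture cf = this.channel.write(env);
    if (!this.channel.isWritable()) {
        // The outbound buffer is full; wait for this write to drain
        // before the caller queues any more messages.
        cf.awaitUninterruptibly();
    }
    return cf;
}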
Also, your server handler should write the message and then check if the channel is writable. If it isn't, you should set the channel's readable flag to false. Netty will fire channelInterestChanged when the channel becomes writable again, and then you can set readable back to true to resume reading messages.
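A sketch of that server-side throttling with the Netty 3 API used above (a drop-in for the echo branch of messageReceived):
@Override
public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) throws Exception {
    Channel ch = e.getChannel();
    ch.write(e.getMessage()); // echo the envelope back
    if (!ch.isWritable()) {
        ch.setReadable(false); // stop reading until the write buffer drains
    }
}

@Override
public void channelInterestChanged(ChannelHandlerContext ctx, ChannelStateEvent e) throws Exception {
    if (e.getChannel().isWritable()) {
        e.getChannel().setReadable(true); // resume reading
    }
}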