I'm new to NIO and I need to create a simple non-blocking client with the following API:
void start();
void send(String msg);
void stop();
start() should open a connection to the specified host and port.
stop() should stop the client and release the connection.
send() should send messages to the server.
So I read the documentation and wrote a simple client:
public class NonBlockingNIOClient {
    public static final int MAX_PACKET_SIZE = 65507;
    private static final Logger LOGGER = LoggerFactory.getLogger(NonBlockingNIOClient.class);
    private final String host;
    private final int port;
    private DatagramChannel channel;

    public NonBlockingNIOClient(String host, int port) {
        this.host = host;
        this.port = port;
    }

    public void start() {
        try {
            channel = DatagramChannel.open();
            channel.configureBlocking(false);
            channel.connect(new InetSocketAddress(host, port));
            // DatagramChannel.connect() completes locally, so this loop exits at once
            while (!channel.isConnected()) {
                LOGGER.debug("still connecting");
            }
            Thread thread = new Thread(new Runnable() {
                @Override
                public void run() {
                    // busy-spins for as long as the channel stays connected
                    while (channel.isConnected()) {
                    }
                }
            });
            thread.start();
        } catch (IOException e) {
            throw new ClientException("Failed to start client", e);
        }
    }

    public void stop() {
        try {
            channel.disconnect();
        } catch (IOException e) {
            throw new ClientException("Failed to stop client", e);
        }
    }

    public void send(String msg) {
        LOGGER.debug("send: {}", msg);
        Validate.notBlank(msg, "message to send cannot be blank");
        ByteBuffer buf = ByteBuffer.allocate(MAX_PACKET_SIZE);
        buf.clear();
        buf.put(msg.getBytes());
        buf.flip();
        try {
            channel.write(buf);
        } catch (IOException e) {
            getErrorHandler().handle(e);
        }
    }
}
As I understand from the docs, channel.configureBlocking(false) does not by itself guarantee that the channel's write method works in non-blocking mode; I guess I need to use selectors to achieve non-blocking behavior. But when I tried the following:
Selector selector = null;
try {
    selector = Selector.open();
    channel.register(selector, SelectionKey.OP_WRITE);
    while (channel.isConnected()) {
        selector.select();
        Iterator<SelectionKey> iterator = selector.selectedKeys().iterator();
        while (iterator.hasNext()) {
            SelectionKey key = iterator.next();
            if (key.isWritable()) {
                // do send
            }
            iterator.remove();
        }
    }
    selector.close();
} catch (IOException e) {
    // handle the error
}
In this case the client does not respond to the send() method, because it is blocked in the while (channel.isConnected()) loop. Do you have any suggestions on how I can keep the start method responsive and at the same time use selectors?
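One way to get both: run the selector loop on a background thread so start() returns immediately, and let send() hand messages to that loop through a queue, toggling OP_WRITE only while there is something to send. This is only a sketch under those assumptions; the names pendingMessages, host, port, LOGGER and ClientException mirror the code above, and the interest-op toggling is one common pattern, not the only one:

private final Queue<ByteBuffer> pendingMessages = new ConcurrentLinkedQueue<>();
private volatile boolean running;
private Selector selector;
private SelectionKey key;

public void start() {
    try {
        channel = DatagramChannel.open();
        channel.configureBlocking(false);
        channel.connect(new InetSocketAddress(host, port));
        selector = Selector.open();
        key = channel.register(selector, 0); // no interest until there is data to write
        running = true;
        new Thread(() -> {
            try {
                while (running) {
                    selector.select(); // blocks here, on the background thread
                    Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                    while (it.hasNext()) {
                        SelectionKey k = it.next();
                        it.remove();
                        if (k.isValid() && k.isWritable()) {
                            ByteBuffer buf = pendingMessages.poll();
                            if (buf != null) {
                                channel.write(buf);
                            }
                            if (pendingMessages.isEmpty()) {
                                k.interestOps(0); // nothing left to write
                            }
                        }
                    }
                }
            } catch (IOException e) {
                LOGGER.error("selector loop failed", e);
            }
        }).start();
    } catch (IOException e) {
        throw new ClientException("Failed to start client", e);
    }
}

public void send(String msg) {
    pendingMessages.add(ByteBuffer.wrap(msg.getBytes(StandardCharsets.UTF_8)));
    key.interestOps(SelectionKey.OP_WRITE); // re-arm write interest
    selector.wakeup();                      // unblock select() so it sees the change
}

stop() would then set running to false, wake the selector, and close both the selector and the channel.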
I've got some questions about Java non-blocking sockets.
I unplug the client's LAN cable, and a Timer thread closes the client's SocketChannel if no heartbeat message has arrived for 5 seconds.
private void startSendHeartBeatMessage() {
new Timer().scheduleAtFixedRate(new TimerTask() {
@SneakyThrows
@Override
public void run() {
if (connections.size() != 0) {
LocalDateTime now = LocalDateTime.now();
Iterator<MessageClient> iterator = connections.iterator();
while (iterator.hasNext()) {
MessageClient client = iterator.next();
if (client.isClientAlive(now)) {
// heartbeat received within the last 5 seconds
client.setData("heartbeat".getBytes());
SelectionKey key = client.getSocketChannel().keyFor(selector);
key.interestOps(SelectionKey.OP_WRITE); // send heartbeat to client
} else {
// remove client and close socketChannel
log.error("time out");
client.getSocketChannel().close();
iterator.remove();
}
}
}
selector.wakeup();
}
}, 0, 1000);
}
Of course, the channel close is called because the LAN cable was unplugged and the heartbeat timed out.
private void startServer() {
executorService.submit(() -> {
try {
while (true) {
if (selector.select(1000) > 0) { // wait for event
Iterator<SelectionKey> keyIterator = selector.selectedKeys().iterator();
while (keyIterator.hasNext()) {
SelectionKey key = keyIterator.next();
keyIterator.remove();
if (key.isAcceptable()) {
accept(key);
} else if (key.isReadable()) {
MessageClient client = (MessageClient) key.attachment();
client.receive(key, selector);
} else if (key.isWritable()) {
MessageClient client = (MessageClient) key.attachment();
client.send(key, selector);
}
}
}
}
} catch (IOException e) {
stopServer();
}
});
startSendHeartBeatMessage();
}
After the client's SocketChannel is closed, the Timer's selector.wakeup() no longer wakes the blocking selector.select(1000), and no events are caught for any SelectionKey.
But the client prints out that the connection and message transmission were successful.
As a result of debugging, KQueueSelectorImpl.wakeup() is called, but interruptTriggered is always true once the LAN cable has been unplugged and the SocketChannel closed.
/**
* KQueue based Selector implementation for macOS
*/
class KQueueSelectorImpl extends SelectorImpl {
public Selector wakeup() {
synchronized (interruptLock) {
if (!interruptTriggered) {
try {
IOUtil.write1(fd1, (byte)0);
} catch (IOException ioe) {
throw new InternalError(ioe);
}
interruptTriggered = true;
}
}
return this;
}
}
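For reference, the relevant part of the Selector contract (a standalone demo, not the server code): a wakeup() issued while no selection operation is in progress is remembered, and the next select call returns immediately and consumes it, resetting interruptTriggered. If the thread running the select loop has died, for example through the IOException branch that calls stopServer(), nothing ever consumes the flag, which would match "interruptTriggered always true" and make every later wakeup() a no-op:

import java.io.IOException;
import java.nio.channels.Selector;

public class WakeupDemo {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        selector.wakeup();             // sets interruptTriggered; no select is running yet
        long start = System.nanoTime();
        int n = selector.select(5000); // returns immediately, consuming the stored wakeup
        System.out.printf("select returned %d after %d ms%n",
                n, (System.nanoTime() - start) / 1_000_000);
        selector.close();
    }
}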
Without unplugging the LAN cable, closing the client via Ctrl+C allows reconnection perfectly normally.
I really don't know the internal workings of NIO. Please help me.
The execution environment is a MacBook running Spring Boot 2.
Thanks for reading.
I'm reading Doug Lea's Scalable I/O in Java, and I followed the Basic Reactor Design example code. But after I start the server, the client can't connect to it.
Here is the Reactor class:
class Reactor implements Runnable {
private static final Logger logger = LogManager.getLogger();
final Selector selector;
final ServerSocketChannel serverSocket;
public Reactor(int port) throws IOException {
selector = Selector.open();
serverSocket = ServerSocketChannel.open();
serverSocket.bind(new InetSocketAddress(port));
serverSocket.configureBlocking(false);
SelectionKey sk = serverSocket.register(selector, SelectionKey.OP_ACCEPT);
sk.attach(new Acceptor());
logger.info("server started.");
}
@Override
public void run() {
while (!Thread.interrupted()) {
for (final Iterator<SelectionKey> it = selector.selectedKeys().iterator(); it.hasNext(); it.remove()) {
dispatch(it.next());
}
}
}
private void dispatch(SelectionKey key) {
Runnable r = (Runnable) key.attachment();
if (r != null) {
r.run();
}
}
private final class Acceptor implements Runnable {
@Override
public void run() {
try {
SocketChannel c = serverSocket.accept();
if (c != null) {
new Handler(selector, c);
}
} catch (IOException ex) {
ex.getMessage();
}
}
}
public static void main(String[] args) throws IOException {
new Reactor(9000).run();
}
}
Handler class
final class Handler implements Runnable {
private static final Logger logger = LogManager.getLogger();
final SocketChannel c;
final SelectionKey key;
ByteBuffer buffer = ByteBuffer.allocate(1024);
public Handler(Selector sel, SocketChannel c) throws IOException {
this.c = c;
c.configureBlocking(false);
key = c.register(sel, SelectionKey.OP_READ | SelectionKey.OP_WRITE);
logger.info("client connected: " + c);
}
void read() throws IOException {
if (!buffer.hasRemaining()) {
return;
}
c.read(buffer);
}
void process() {/* */}
void write() throws IOException {
buffer.flip();
c.write(buffer);
c.close();
}
@Override
public void run() {
try {
read();
process();
write();
} catch (IOException ex) {
ex.getMessage();
}
}
}
I start the server in IDEA, and "server started." is printed to the console.
But after I enter telnet localhost 9000 in a terminal, "client connected:" doesn't appear.
I had to change the Reactor run method a bit. You have to call selector.select() or selector.selectNow():
@Override
public void run() {
while (!Thread.interrupted()) {
try {
int ready = selector.selectNow();
if (ready == 0){
continue;
}
Set<SelectionKey> selected = selector.selectedKeys();
Iterator<SelectionKey> it = selected.iterator();
while (it.hasNext()) {
SelectionKey key = it.next();
if(key.isAcceptable() || key.isReadable()) {
dispatch(key);
}
}
selected.clear();
} catch (IOException e) {
e.printStackTrace();
}
}
}
That allowed the client to connect.
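As a side note (an alternative sketch, not required for the fix): selectNow() in a tight loop busy-spins whenever nothing is ready, while the blocking select() achieves the same dispatch without burning CPU:

@Override
public void run() {
    while (!Thread.interrupted()) {
        try {
            selector.select(); // blocks until at least one key is ready
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove(); // remove so the key isn't processed again next round
                dispatch(key);
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}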
In order to enable an echo service from the Handler, I implemented this:
final class Handler implements Runnable {
private static final Logger logger = LogManager.getLogger();
final SocketChannel c;
final SelectionKey key;
ByteBuffer buffer = ByteBuffer.allocate(1024);
public Handler(Selector selector, SocketChannel c) throws IOException {
this.c = c;
c.configureBlocking(false);
logger.info("client connected: " + c);
key = c.register(selector, 0);
key.attach(this);
key.interestOps(SelectionKey.OP_READ);
selector.wakeup();
}
@Override
public void run() {
try {
SocketChannel client = (SocketChannel) key.channel();
client.read(buffer);
if (new String(buffer.array()).trim().equals("close")) {
client.close();
System.out.println("close connection");
}
buffer.flip();
client.write(buffer);
buffer.clear();
} catch (IOException ex) {
ex.getMessage();
}
}
}
The Handler instance is registered for reading; then, when its selection key becomes readable, the run method of that instance is called to handle the read.
I have a server app using Java NIO.
I have a Runnable class, EventHandler, that processes incoming messages. If the message equals "Bye", the EventHandler closes the related SocketChannel and SelectionKey.
I have one Runnable object, Acceptor, that is activated on OP_ACCEPT events. It creates a new SocketChannel and a new EventHandler to process messages from that channel.
I have a problem.
The first client connects, sends messages, and disconnects. Everything is OK.
After the first client has disconnected, a second client connects. Here the problem begins: the Acceptor object isn't invoked, so no SocketChannel and EventHandler are created for the new client.
What is wrong in my code? Was the SocketChannel closed improperly?
I changed the code to fix the errors that were noted in the comments. Now it works fine.
Reactor, the class with the main loop:
public class Reactor implements Runnable {
final Selector selector;
final ServerSocketChannel serverSocketChannel;
Reactor(int port) throws IOException {
//configure server socket channel
this.selector = Selector.open();
this.serverSocketChannel = ServerSocketChannel.open();
this.serverSocketChannel.socket().bind(new InetSocketAddress(port));
this.serverSocketChannel.configureBlocking(false);
//start acceptor
this.serverSocketChannel.register(this.selector, SelectionKey.OP_ACCEPT, new Acceptor(this.serverSocketChannel, this.selector));
}
public void run() {
System.out.println("Server is listening to port: " + serverSocketChannel.socket().getLocalPort());
try {
while (!Thread.currentThread().isInterrupted()) {
if (this.selector.select() > 0) {
Set<SelectionKey> selected = this.selector.selectedKeys();
for (SelectionKey selectionKey : selected) {
dispatch(selectionKey);
}
selected.clear(); //clear set (thanks to EJP for comment)
}
}
} catch (IOException ex) {
ex.printStackTrace();
}
}
void dispatch(SelectionKey k) {
Runnable r = (Runnable) (k.attachment());
if (r != null) {
r.run();
}
}
}
Acceptor
public class Acceptor implements Runnable {
final ServerSocketChannel serverSocketChannel;
final Selector selector;
public Acceptor(ServerSocketChannel serverSocketChannel, Selector selector) {
this.serverSocketChannel = serverSocketChannel;
this.selector = selector;
}
public void run() {
try {
SocketChannel socketChannel = this.serverSocketChannel.accept();
if (socketChannel != null) {
new EventHandler(this.selector, socketChannel);
System.out.println("Connection Accepted");
}
} catch (IOException ex) {
ex.printStackTrace();
}
}
}
EventHandler
public class EventHandler implements Runnable {
EventHandler(Selector selector, SocketChannel socketChannel) throws IOException {
this.socketChannel = socketChannel;
socketChannel.configureBlocking(false);
this.selectionKey = this.socketChannel.register(selector, SelectionKey.OP_READ, this);
//selector.wakeup(); //we don't need to wake up selector (thanks to EJP for comment)
}
@Override
public void run() {
try {
if (this.state == Status.READING) {
read();
} else if (this.state == Status.SENDING) {
send();
}
} catch (IOException ex) {
ex.printStackTrace();
}
}
/**
* Reading client message
*
* @throws IOException
*/
void read() throws IOException {
int readCount = this.socketChannel.read(this.input);
//check whether the result is equal to -1, and close the connection if it is (thanks to EJP for comment)
if(readCount == -1){
this.socketChannel.close();
System.out.println("Stream is closed. Close connection.");
return;
}
if (readCount > 0) {
processMessage(readCount);
}
if(this.clientMessage.equalsIgnoreCase("Bye")){
this.socketChannel.close();
//this.selectionKey.cancel(); //we don't need to cancel selectionKey if socketChannel is just closed (thanks to EJP for comment)
System.out.println("Client said Bye. Close connection.");
return;
}
this.state = EventHandler.Status.SENDING;
this.selectionKey.interestOps(SelectionKey.OP_WRITE); //mark that we interested in writing
}
/**
* Processing of the read message.
*
* @param readCount Number of bytes read
*/
synchronized void processMessage(int readCount) {
this.input.flip();
StringBuilder sb = new StringBuilder();
sb.append(new String(Arrays.copyOfRange(input.array(), 0, readCount))); // Assuming ASCII (bad assumption but simplifies the example)
this.clientMessage = sb.toString().trim();
this.input.clear();
System.out.println("Client said: " + this.clientMessage);
}
/**
* Sending response to client
*
* @throws IOException
*/
void send() throws IOException {
System.out.println("Answer to client: " + this.clientMessage);
this.socketChannel.write(ByteBuffer.wrap((this.clientMessage + "\n").getBytes()));
this.state = EventHandler.Status.READING;
this.selectionKey.interestOps(SelectionKey.OP_READ); //mark that we interested in reading
}
//----------------------------------------------------------------------------------------------------------------------
// Fields
//----------------------------------------------------------------------------------------------------------------------
final SocketChannel socketChannel;
final SelectionKey selectionKey;
ByteBuffer input = ByteBuffer.allocate(1024);
Status state = Status.READING;
String clientMessage = "";
//----------------------------------------------------------------------------------------------------------------------
// Enum to mark current status of EventHandler
//----------------------------------------------------------------------------------------------------------------------
enum Status {
READING, SENDING
}
}
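To exercise the fixed server, a throwaway blocking test client is enough (a sketch; it assumes the Reactor was started on port 9999):

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

public class TestClient {
    public static void main(String[] args) throws IOException {
        try (Socket socket = new Socket("localhost", 9999);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()))) {
            out.println("Hello");
            System.out.println("Server answered: " + in.readLine()); // echo of "Hello"
            out.println("Bye"); // the server closes the connection on "Bye"
        }
    }
}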
I'm trying to use NIO to build an efficient TCP/IP socket server.
I have a main thread that accepts connections and then hands each one to another thread, which is supposed to wait for messages from the client and read them.
When I use only one thread and one selector for all the operations it works great, but when I try to make it work with two threads and two selectors, the incoming connection accept works but the reading does not. I think it's because my selector is blocking its thread, and therefore isn't aware that I've registered a new SocketChannel.
This is my main thread:
public static void main(String[] args) {
try {
System.out.println("Who's Around Server Started!");
Selector connectionsSelector = null;
ServerSocketChannel server = null;
String host = "localhost";
int port = 80;
LiveConnectionsManager liveConnectionsManager =
new LiveConnectionsManager();
liveConnectionsManager.start();
connectionsSelector = Selector.open();
server = ServerSocketChannel.open();
server.socket().bind(new InetSocketAddress(host,port));
server.configureBlocking(false);
server.register(connectionsSelector, SelectionKey.OP_ACCEPT);
while (true) {
connectionsSelector.select();
Iterator<SelectionKey> iterator =
connectionsSelector.selectedKeys().iterator();
while (iterator.hasNext()) {
SelectionKey incomingConnection = iterator.next();
iterator.remove();
if( incomingConnection.isConnectable()) {
((SocketChannel)incomingConnection.channel()).finishConnect();
}
if( incomingConnection.isAcceptable()){
acceptConnection(server.accept(), liveConnectionsManager);
}
}
}
} catch (Throwable e) {
throw new RuntimeException("Server failure: " + e.getMessage());
}
}
private static void acceptConnection(
SocketChannel acceptedConnection,
LiveConnectionsManager liveConnectionsManager ) throws IOException
{
acceptedConnection.configureBlocking(false);
acceptedConnection.socket().setTcpNoDelay(true);
System.out.println(
"New connection from: " + acceptedConnection.socket().getInetAddress());
liveConnectionsManager.addLiveConnection(acceptedConnection);
}
And this is my LiveConnectionsManager:
private Selector messagesSelector;
public LiveConnectionsManager(){
try {
messagesSelector = Selector.open();
} catch (IOException e) {
System.out.println("Couldn't run LiveConnectionsManager");
}
}
@Override
public void run() {
try {
System.out.println("LiveConnectionManager Started!");
while(true) {
messagesSelector.select();
Iterator<SelectionKey> iterator = messagesSelector.keys().iterator();
while (iterator.hasNext()){
SelectionKey newData = iterator.next();
iterator.remove();
if( newData.isReadable()){
readIncomingData(((SocketChannel)newData.channel()));
}
}
}
} catch (IOException e) {
e.printStackTrace();
}
}
public void addLiveConnection( SocketChannel socketChannel )
throws ClosedChannelException
{
socketChannel.register(messagesSelector, SelectionKey.OP_READ);
}
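Two things stand out here. First, the run loop iterates messagesSelector.keys(), the set of all registered keys, instead of selectedKeys(), and calling iterator.remove() on the keys set throws UnsupportedOperationException. Second, as suspected, register() from the acceptor thread can block while the manager thread is inside select(), because a selection operation holds the selector's key-set locks on many JDK implementations. A common fix (a sketch; pendingConnections is an illustrative name) is to queue new channels and register them on the selector thread after a wakeup():

private final Queue<SocketChannel> pendingConnections = new ConcurrentLinkedQueue<>();

public void addLiveConnection(SocketChannel socketChannel) {
    pendingConnections.add(socketChannel);
    messagesSelector.wakeup(); // unblock select() so the registration can happen
}

@Override
public void run() {
    try {
        System.out.println("LiveConnectionManager Started!");
        while (true) {
            messagesSelector.select();
            // perform registrations queued by the acceptor thread
            SocketChannel pending;
            while ((pending = pendingConnections.poll()) != null) {
                pending.register(messagesSelector, SelectionKey.OP_READ);
            }
            Iterator<SelectionKey> iterator = messagesSelector.selectedKeys().iterator();
            while (iterator.hasNext()) {
                SelectionKey newData = iterator.next();
                iterator.remove();
                if (newData.isReadable()) {
                    readIncomingData((SocketChannel) newData.channel());
                }
            }
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
}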
I have a fairly simple test Netty server/client project. I am testing some aspects of the stability of the communication by flooding the server with messages and counting the messages and bytes that I get back to make sure that everything matches.
When I run the flood from the client, the client keeps track of the number of messages it sends and how many it gets back, and when the two counts are equal it prints out some stats.
On certain occasions when running locally (I'm guessing because of congestion?) the client never ends up printing the final message. I haven't run into this issue when the two components are on remote machines. Any suggestions would be appreciated.
The Encoder is just a simple OneToOneEncoder that encodes an Envelope type to a ChannelBuffer and the Decoder is a simple ReplayDecoder that does the opposite.
I tried adding a channelInterestChanged method to my client handler to see whether the channel's interest was getting changed to not-readable, but that did not seem to be the case.
The relevant code is below:
Thanks!
SERVER
public class Server {
// configuration --------------------------------------------------------------------------------------------------
private final int port;
private ServerChannelFactory serverFactory;
private DeviceIdAwareChannelGroup channelGroup;
// constructors ---------------------------------------------------------------------------------------------------
public Server(int port) {
this.port = port;
}
// public methods -------------------------------------------------------------------------------------------------
public boolean start() {
ExecutorService bossThreadPool = Executors.newCachedThreadPool();
ExecutorService childThreadPool = Executors.newCachedThreadPool();
this.serverFactory = new NioServerSocketChannelFactory(bossThreadPool, childThreadPool);
this.channelGroup = new DeviceIdAwareChannelGroup(this + "-channelGroup");
ChannelPipelineFactory pipelineFactory = new ChannelPipelineFactory() {
@Override
public ChannelPipeline getPipeline() throws Exception {
ChannelPipeline pipeline = Channels.pipeline();
pipeline.addLast("encoder", Encoder.getInstance());
pipeline.addLast("decoder", new Decoder());
pipeline.addLast("handler", new ServerHandler());
return pipeline;
}
};
ServerBootstrap bootstrap = new ServerBootstrap(this.serverFactory);
bootstrap.setOption("reuseAddress", true);
bootstrap.setOption("child.tcpNoDelay", true);
bootstrap.setOption("child.keepAlive", true);
bootstrap.setPipelineFactory(pipelineFactory);
Channel channel = bootstrap.bind(new InetSocketAddress(this.port));
if (!channel.isBound()) {
this.stop();
return false;
}
this.channelGroup.add(channel);
return true;
}
public void stop() {
if (this.channelGroup != null) {
ChannelGroupFuture channelGroupCloseFuture = this.channelGroup.close();
System.out.println("waiting for ChannelGroup shutdown...");
channelGroupCloseFuture.awaitUninterruptibly();
}
if (this.serverFactory != null) {
this.serverFactory.releaseExternalResources();
}
}
// main -----------------------------------------------------------------------------------------------------------
public static void main(String[] args) {
int port;
if (args.length != 3) {
System.out.println("No arguments found using default values");
port = 9999;
} else {
port = Integer.parseInt(args[1]);
}
final Server server = new Server( port);
if (!server.start()) {
System.exit(-1);
}
System.out.println("Server started on port 9999 ... ");
Runtime.getRuntime().addShutdownHook(new Thread() {
@Override
public void run() {
server.stop();
}
});
}
}
SERVER HANDLER
public class ServerHandler extends SimpleChannelUpstreamHandler {
// internal vars --------------------------------------------------------------------------------------------------
private AtomicInteger numMessagesReceived=new AtomicInteger(0);
// constructors ---------------------------------------------------------------------------------------------------
public ServerHandler() {
}
// SimpleChannelUpstreamHandler -----------------------------------------------------------------------------------
@Override
public void channelConnected(ChannelHandlerContext ctx, ChannelStateEvent e) throws Exception {
Channel c = e.getChannel();
System.out.println("ChannelConnected: channel id: " + c.getId() + ", remote host: " + c.getRemoteAddress() + ", isChannelConnected(): " + c.isConnected());
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) throws Exception {
System.out.println("*** EXCEPTION CAUGHT!!! ***");
e.getChannel().close();
}
@Override
public void channelDisconnected(ChannelHandlerContext ctx, ChannelStateEvent e) throws Exception {
super.channelDisconnected(ctx, e);
System.out.println("*** CHANNEL DISCONNECTED ***");
}
@Override
public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) throws Exception {
if(numMessagesReceived.incrementAndGet()%1000==0 ){
System.out.println("["+numMessagesReceived+"-TH MSG]: Received message: " + e.getMessage());
}
if (e.getMessage() instanceof Envelope) {
// echo it...
if (e.getChannel().isWritable()) {
e.getChannel().write(e.getMessage());
}
} else {
super.messageReceived(ctx, e);
}
}
}
CLIENT
public class Client implements ClientHandlerListener {
// configuration --------------------------------------------------------------------------------------------------
private final String host;
private final int port;
private final int messages;
// internal vars --------------------------------------------------------------------------------------------------
private ChannelFactory clientFactory;
private ChannelGroup channelGroup;
private ClientHandler handler;
private final AtomicInteger received;
private long startTime;
private ExecutorService cachedThreadPool = Executors.newCachedThreadPool();
// constructors ---------------------------------------------------------------------------------------------------
public Client(String host, int port, int messages) {
this.host = host;
this.port = port;
this.messages = messages;
this.received = new AtomicInteger(0);
}
// ClientHandlerListener ------------------------------------------------------------------------------------------
@Override
public void messageReceived(Envelope message) {
if (this.received.incrementAndGet() == this.messages) {
long stopTime = System.currentTimeMillis();
float timeInSeconds = (stopTime - this.startTime) / 1000f;
System.err.println("Sent and received " + this.messages + " in " + timeInSeconds + "s");
System.err.println("That's " + (this.messages / timeInSeconds) + " echoes per second!");
}
}
// public methods -------------------------------------------------------------------------------------------------
public boolean start() {
// For production scenarios, use limited sized thread pools
this.clientFactory = new NioClientSocketChannelFactory(cachedThreadPool, cachedThreadPool);
this.channelGroup = new DefaultChannelGroup(this + "-channelGroup");
this.handler = new ClientHandler(this, this.channelGroup);
ChannelPipelineFactory pipelineFactory = new ChannelPipelineFactory() {
@Override
public ChannelPipeline getPipeline() throws Exception {
ChannelPipeline pipeline = Channels.pipeline();
pipeline.addLast("byteCounter", new ByteCounter("clientByteCounter"));
pipeline.addLast("encoder", Encoder.getInstance());
pipeline.addLast("decoder", new Decoder());
pipeline.addLast("handler", handler);
return pipeline;
}
};
ClientBootstrap bootstrap = new ClientBootstrap(this.clientFactory);
bootstrap.setOption("reuseAddress", true);
bootstrap.setOption("tcpNoDelay", true);
bootstrap.setOption("keepAlive", true);
bootstrap.setPipelineFactory(pipelineFactory);
boolean connected = bootstrap.connect(new InetSocketAddress(host, port)).awaitUninterruptibly().isSuccess();
System.out.println("isConnected: " + connected);
if (!connected) {
this.stop();
}
return connected;
}
public void stop() {
if (this.channelGroup != null) {
this.channelGroup.close();
}
if (this.clientFactory != null) {
this.clientFactory.releaseExternalResources();
}
}
public ChannelFuture sendMessage(Envelope env) {
Channel ch = this.channelGroup.iterator().next();
ChannelFuture cf = ch.write(env);
return cf;
}
private void flood() {
if ((this.channelGroup == null) || (this.clientFactory == null)) {
return;
}
System.out.println("sending " + this.messages + " messages");
this.startTime = System.currentTimeMillis();
for (int i = 0; i < this.messages; i++) {
this.handler.sendMessage(new Envelope(Version.VERSION1, Type.REQUEST, 1, new byte[1]));
}
}
// main -----------------------------------------------------------------------------------------------------------
public static void main(String[] args) throws InterruptedException {
final Client client = new Client("localhost", 9999, 10000);
if (!client.start()) {
System.exit(-1);
return;
}
while (client.channelGroup.size() == 0) {
Thread.sleep(200);
}
System.out.println("Client started...");
client.flood();
Runtime.getRuntime().addShutdownHook(new Thread() {
@Override
public void run() {
System.out.println("shutting down client");
client.stop();
}
});
}
}
CLIENT HANDLER
public class ClientHandler extends SimpleChannelUpstreamHandler {
// internal vars --------------------------------------------------------------------------------------------------
private final ClientHandlerListener listener;
private final ChannelGroup channelGroup;
private Channel channel;
// constructors ---------------------------------------------------------------------------------------------------
public ClientHandler(ClientHandlerListener listener, ChannelGroup channelGroup) {
this.listener = listener;
this.channelGroup = channelGroup;
}
// SimpleChannelUpstreamHandler -----------------------------------------------------------------------------------
@Override
public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) throws Exception {
if (e.getMessage() instanceof Envelope) {
Envelope env = (Envelope) e.getMessage();
this.listener.messageReceived(env);
} else {
System.out.println("NOT ENVELOPE!!");
super.messageReceived(ctx, e);
}
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) throws Exception {
System.out.println("**** CAUGHT EXCEPTION CLOSING CHANNEL ***");
e.getCause().printStackTrace();
e.getChannel().close();
}
@Override
public void channelConnected(ChannelHandlerContext ctx, ChannelStateEvent e) throws Exception {
this.channel = e.getChannel();
System.out.println("Server connected, channel id: " + this.channel.getId());
this.channelGroup.add(e.getChannel());
}
// public methods -------------------------------------------------------------------------------------------------
public void sendMessage(Envelope envelope) {
if (this.channel != null) {
this.channel.write(envelope);
}
}
}
CLIENT HANDLER LISTENER INTERFACE
public interface ClientHandlerListener {
void messageReceived(Envelope message);
}
Without knowing how big the envelope is on the wire, I'm going to guess that your problem is that your client writes 10,000 messages without checking whether the channel is writable.
Netty 3.x processes network events and writes in a particular fashion. It's possible that your client is writing so much data so fast that Netty isn't getting a chance to process receive events. On the server side this would result in the channel becoming non-writable and your handler dropping the reply.
There are a few reasons why you see the problem on localhost, but it's probably because the local write bandwidth is much higher than your network bandwidth. The client doesn't check whether the channel is writable, so over a network your messages are buffered by Netty until the network can catch up (if you wrote significantly more than 10,000 messages you might see an OutOfMemoryError). The network acts as a natural brake, because Netty suspends writing until the network is ready, which lets it process incoming data and prevents the server from ever seeing a non-writable channel.
The DiscardClientHandler in the discard example shows how to test whether the channel is writable and how to resume when it becomes writable again. Another option is to have sendMessage return the ChannelFuture associated with the write and, if the channel is not writable after the write, block until the future completes.
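For illustration, here is a minimal sketch of that pattern on the client side (Netty 3.x API; the handler name and the message construction are assumptions based on the code above, not the discard example verbatim):

public class WritabilityAwareClientHandler extends SimpleChannelUpstreamHandler {
    private final int totalMessages;
    private int sent; // only touched from this channel's I/O thread

    public WritabilityAwareClientHandler(int totalMessages) {
        this.totalMessages = totalMessages;
    }

    @Override
    public void channelConnected(ChannelHandlerContext ctx, ChannelStateEvent e) {
        flood(e.getChannel()); // start writing once connected
    }

    @Override
    public void channelInterestChanged(ChannelHandlerContext ctx, ChannelStateEvent e) {
        flood(e.getChannel()); // fired when isWritable() flips; resume if writable
    }

    private void flood(Channel channel) {
        // Write only while the outbound buffer has room; when isWritable()
        // turns false, stop and wait for the next channelInterestChanged.
        while (sent < totalMessages && channel.isWritable()) {
            channel.write(new Envelope(Version.VERSION1, Type.REQUEST, 1, new byte[1]));
            sent++;
        }
    }
}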
Also, your server handler should write the message and then check whether the channel is writable. If it isn't, you should set the channel's readable flag to false. Netty will fire channelInterestChanged when the channel becomes writable again; then you can set readable back to true to resume reading messages.
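And the corresponding server-side sketch (again Netty 3.x; a variant of the messageReceived above rather than confirmed production code):

@Override
public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) throws Exception {
    if (e.getMessage() instanceof Envelope) {
        Channel channel = e.getChannel();
        channel.write(e.getMessage()); // echo unconditionally; Netty buffers it
        if (!channel.isWritable()) {
            channel.setReadable(false); // stop reading until the outbound buffer drains
        }
    } else {
        super.messageReceived(ctx, e);
    }
}

@Override
public void channelInterestChanged(ChannelHandlerContext ctx, ChannelStateEvent e) throws Exception {
    Channel channel = e.getChannel();
    if (channel.isWritable()) {
        channel.setReadable(true); // buffer drained; resume reading
    }
}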