How to automatically connect to TCP server after disconnection in netty - java

I have a scenario where I am establishing a TCP connection using Netty NIO. Suppose the server goes down; how can I automatically reconnect to the server when it comes back up?
Or is there any way to attach an availability listener to the server?

You can have a DisconnectionHandler, as the first thing in your client pipeline, that reacts to channelInactive by immediately trying to reconnect or by scheduling a reconnection task.
For example,
public class DisconnectionHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelInactive(final ChannelHandlerContext ctx) throws Exception {
        Channel channel = ctx.channel();
        /* If a shutdown is ongoing, ignore */
        if (channel.eventLoop().isShuttingDown()) return;
        ReconnectionTask reconnect = new ReconnectionTask(channel);
        reconnect.run();
    }
}
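The ReconnectionTask below calls a createBootstrap() helper that the original answer does not show. A minimal sketch of what such a helper might look like, assuming a shared NioEventLoopGroup and a placeholder MyClientHandler for your own protocol handlers, with the DisconnectionHandler first in the pipeline:
// Hypothetical helper (not part of the original answer): builds a Bootstrap
// whose pipeline starts with the DisconnectionHandler shown above.
static Bootstrap createBootstrap() {
    Bootstrap b = new Bootstrap();
    b.group(workerGroup) // a NioEventLoopGroup shared by all connections
     .channel(NioSocketChannel.class)
     .handler(new ChannelInitializer<SocketChannel>() {
         @Override
         protected void initChannel(SocketChannel ch) {
             ch.pipeline().addLast("disconnection", new DisconnectionHandler());
             ch.pipeline().addLast("handler", new MyClientHandler()); // placeholder
         }
     });
    return b;
}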
The ReconnectionTask would be something like this:
public class ReconnectionTask implements Runnable, ChannelFutureListener {

    private final Channel previous;

    public ReconnectionTask(Channel c) {
        this.previous = c;
    }

    @Override
    public void run() {
        Bootstrap b = createBootstrap();
        b.remoteAddress(previous.remoteAddress())
         .connect()
         .addListener(this);
    }

    @Override
    public void operationComplete(ChannelFuture future) throws Exception {
        if (!future.isSuccess()) {
            // Will try to connect again in 100 ms.
            // Here you should probably use exponential backoff or some sort of
            // randomization to define the retry period.
            previous.eventLoop()
                    .schedule(this, 100, TimeUnit.MILLISECONDS);
            return;
        }
        // Do something else on success if needed.
    }
}
Check here for an example of an exponential backoff library.
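If you would rather not pull in a library, a minimal sketch of exponential backoff with jitter inside operationComplete could look like the following; the attempts counter, the 30-second cap and the ThreadLocalRandom jitter are illustrative choices, not part of the original answer:
// Sketch: double the retry delay on each failed attempt, cap it at 30 s,
// and randomize it so many clients do not reconnect in lockstep.
private int attempts;

@Override
public void operationComplete(ChannelFuture future) throws Exception {
    if (!future.isSuccess()) {
        attempts++;
        long base = Math.min(30_000L, 100L << Math.min(attempts, 18));
        long delay = base / 2 + ThreadLocalRandom.current().nextLong(base / 2 + 1);
        previous.eventLoop().schedule(this, delay, TimeUnit.MILLISECONDS);
        return;
    }
    attempts = 0; // reset the backoff once a connection succeeds
}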

Related

RMI garbage collection of callbacks

I need to create a RMI service which can notify events to clients.
Each client register itself on the server, the client can emit an event and the server will broadcast it to all other clients.
The program works, but the client reference on the server is never garbage collected, and the thread which the server uses to check the client reference never terminates.
So each time a client connects to the server, a new thread is created and never terminated.
The Notifier class can register and unregister a listener.
The broadcast method calls each registered listener and sends the message back.
public class Notifier extends UnicastRemoteObject implements INotifier {

    private List<IListener> listeners = Collections.synchronizedList(new ArrayList<>());

    public Notifier() throws RemoteException {
        super();
    }

    @Override
    public void register(IListener listener) throws RemoteException {
        listeners.add(listener);
    }

    @Override
    public void unregister(IListener listener) throws RemoteException {
        boolean removed = listeners.remove(listener);
        if (removed) {
            System.out.println(listener + " removed");
        } else {
            System.out.println(listener + " NOT removed");
        }
    }

    @Override
    public void broadcast(String msg) throws RemoteException {
        for (IListener listener : listeners) {
            try {
                listener.onMessage(msg);
            } catch (RemoteException e) {
                e.printStackTrace();
            }
        }
    }
}
The listener is just printing each received message.
public class ListenerImpl extends UnicastRemoteObject implements IListener {

    public ListenerImpl() throws RemoteException {
        super();
    }

    @Override
    public void onMessage(String msg) throws RemoteException {
        System.out.println("Received: " + msg);
    }
}
The RunListener client registers a listener, waits a few seconds to receive a message, and then terminates.
public class RunListener {
    public static void main(String[] args) throws Exception {
        Registry registry = LocateRegistry.getRegistry();
        INotifier notifier = (INotifier) registry.lookup("Notifier");
        ListenerImpl listener = new ListenerImpl();
        notifier.register(listener);
        Thread.sleep(6000);
        notifier.unregister(listener);
        UnicastRemoteObject.unexportObject(listener, true);
    }
}
The RunNotifier just publishes the service and periodically sends a message.
public class RunNotifier {

    static AtomicInteger counter = new AtomicInteger();

    public static void main(String[] args) throws RemoteException, AlreadyBoundException, NotBoundException {
        Registry registry = LocateRegistry.createRegistry(1099);
        INotifier notifier = new Notifier();
        registry.bind("Notifier", notifier);
        ScheduledExecutorService executor = Executors.newScheduledThreadPool(1);
        executor.scheduleAtFixedRate(new Runnable() {
            @Override
            public void run() {
                try {
                    int n = counter.incrementAndGet();
                    System.out.println("Broadcasting " + n);
                    notifier.broadcast("Hello (" + n + ")");
                } catch (RemoteException e) {
                    e.printStackTrace();
                }
            }
        }, 5, 5, TimeUnit.SECONDS);
        try {
            System.in.read();
        } catch (IOException e) {
        }
        executor.shutdown();
        registry.unbind("Notifier");
        UnicastRemoteObject.unexportObject(notifier, true);
    }
}
I've seen many Q&As on Stack Overflow about RMI, but none addresses this kind of problem.
I guess I'm doing some very big mistake, but I can't spot it.
As you can see in the picture, a new RMI RenewClean thread is created for each incoming connection, and this thread will never terminate.
Once the client disconnects and terminates, the RenewClean thread silently swallows every ConnectionException thrown and keeps polling a client that will never reply.
As a side note, I even tried keeping only a weak reference to the IListener in the Notifier class, and the results are still the same.
This may not be very helpful if you are stuck on JDK 1.8, but when I test on JDK 17 the multiple RMI server threads created for each incoming client (RMI RenewClean-[IPADDRESS:PORT]) are cleaned up on the server and do not show the "will never terminate" behaviour you may have observed on JDK 1.8. It may be a JDK 1.8 issue, or simply that you are not waiting long enough for the threads to end.
For quicker cleanup, try lowering the system property for the client thread garbage collection setting, shown here with its default value (3600000 ms = 1 hour):
java -Dsun.rmi.dgc.client.gcInterval=3600000 ...
On my server I added this in one of the API callbacks:
Function<Thread,String> toString = t -> t.getName()+(t.isDaemon() ? " DAEMON" :"");
Set<Thread> threads = Thread.getAllStackTraces().keySet();
System.out.println("-".repeat(40)+" Threads x "+threads.size());
threads.stream().map(toString).forEach(System.out::println);
After RMI server startup it printed names of threads and no instances of "RMI RenewClean":
---------------------------------------- Threads x 12
After connecting many times from a client, the server reported corresponding instances of "RMI RenewClean":
---------------------------------------- Threads x 81
Leaving the RMI server running for a while, the thread count gradually shrank back (not to 12 threads, but low enough to suggest that RMI thread handling is not filling up with many unnecessary daemon threads):
---------------------------------------- Threads x 20
After about an hour all the remaining "RMI RenewClean" were removed - probably due to housekeeping performed at the interval defined by the VM setting sun.rmi.dgc.client.gcInterval=3600000:
---------------------------------------- Threads x 13
Note also that RMI server shutdown is instant at any point - the "RMI RenewClean" daemon threads do not hold up rmi server shutdown.
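If you prefer not to touch the launch command, the same properties can be set programmatically, as long as this happens before the first remote object is exported; this is a sketch using an illustrative 10-minute interval, not a value from the original answer:
public class RunNotifierWithFasterDgc {
    public static void main(String[] args) throws Exception {
        // Must run before any UnicastRemoteObject is exported, otherwise the
        // default interval of one hour has already been picked up by the DGC machinery.
        System.setProperty("sun.rmi.dgc.client.gcInterval", "600000"); // 10 minutes
        System.setProperty("sun.rmi.dgc.server.gcInterval", "600000");
        RunNotifier.main(args); // delegate to the original server bootstrap
    }
}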

Closing all open streams in GRPC-Java from client end cleanly

I am using gRPC-Java 1.1.2. In an active gRPC session, I have a few bidirectional streams open. Is there a way to clean them up from the client end when the client is disconnecting? When I try to disconnect, I run the following loop a fixed number of times and then disconnect, but I can see the following error on the server side (not sure if it's caused by another issue, though):
disconnect from client
while (!channel.awaitTermination(3, TimeUnit.SECONDS)) {
// check for upper bound and break if so
}
channel.shutdown().awaitTermination(3, TimeUnit.SECONDS);
error on server
E0414 11:26:48.787276000 140735121084416 ssl_transport_security.c:439] SSL_read returned 0 unexpectedly.
E0414 11:26:48.787345000 140735121084416 secure_endpoint.c:185] Decryption error: TSI_INTERNAL_ERROR
If you want to close gRPC (server-side or bi-di) streams from the client end, you will have to run the rpc call inside a Context.CancellableContext, found in the io.grpc package.
Suppose you have an rpc:
service Messaging {
rpc Listen (ListenRequest) returns (stream Message) {}
}
In the client side, you will handle it like this:
public class Messaging {

    private Context.CancellableContext mListenContext;

    private MessagingGrpc.MessagingStub getMessagingAsyncStub() {
        /* return your async stub */
    }

    public void listen(final ListenRequest listenRequest, final StreamObserver<Message> messageStream) {
        Runnable listenRunnable = new Runnable() {
            @Override
            public void run() {
                Messaging.this.getMessagingAsyncStub().listen(listenRequest, messageStream);
            }
        };
        if (mListenContext != null && !mListenContext.isCancelled()) {
            Log.d(TAG, "listen: already listening");
            return;
        }
        mListenContext = Context.current().withCancellation();
        mListenContext.run(listenRunnable);
    }

    public void cancelListen() {
        if (mListenContext != null) {
            mListenContext.cancel(null);
            mListenContext = null;
        }
    }
}
Calling cancelListen() will cancel the call with the status 'CANCELLED': the connection will be closed, and onError of your StreamObserver<Message> messageStream will be invoked with a throwable whose message is 'CANCELLED'.
If you use shutdownNow(), it will shut down your RPC streams more aggressively. Also, you need to call shutdown() or shutdownNow() before calling awaitTermination().
That said, a better solution would be to end all your RPCs gracefully before closing the channel.
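Putting both points together, a minimal sketch of a client-side teardown might look like this; cancelListen() refers to the Messaging class above, and the 5-second timeouts are arbitrary:
// Sketch: cancel outstanding streams first, then shut the channel down,
// and only then wait for termination.
public void disconnect(ManagedChannel channel, Messaging messaging) throws InterruptedException {
    messaging.cancelListen();            // ends the open server/bi-di streams
    channel.shutdown();                  // must be called before awaitTermination()
    if (!channel.awaitTermination(5, TimeUnit.SECONDS)) {
        channel.shutdownNow();           // force-cancel anything still in flight
        channel.awaitTermination(5, TimeUnit.SECONDS);
    }
}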

Netty client send keep alive to server

I want to write a keep-alive command from the client to the server using Netty. I found the IdleStateHandler option. I don't know how to solve the problem on the client side; this is my code:
public void connect() {
    workerGroup = new NioEventLoopGroup();
    Bootstrap bs = new Bootstrap();
    bs.group(workerGroup).channel(NioSocketChannel.class);
    bs.handler(new ChannelInitializer<SocketChannel>() {
        @Override
        protected void initChannel(SocketChannel ch) throws Exception {
            ch.pipeline().addLast("idleStateHandler", new IdleStateHandler(0, 0, 300));
            ch.pipeline().addLast("logger", new LoggingHandler());
            ch.pipeline().addLast("commandDecoder", new CuCommandDecoder());
            ch.pipeline().addLast("commandEncoder", new CuCommandEncoder());
        }
    });
After adding the IdleStateHandler to the channel, where should the handling code be?
Should it be a new handler that reacts to the events generated by IdleStateHandler?
According to the JavaDoc, IdleStateHandler will generate new events according to the current status of the channel:
IdleState#READER_IDLE for timeout on Read operation
IdleState#WRITER_IDLE for timeout on Write operation
IdleState#ALL_IDLE for timeout on both Read/Write operation
Then you need to implement the handling of those events in your handlers, as in this example (taken from the documentation here):
// Handler should handle the IdleStateEvent triggered by IdleStateHandler.
public class MyHandler extends ChannelDuplexHandler {
    @Override
    public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
        if (evt instanceof IdleStateEvent) {
            IdleStateEvent e = (IdleStateEvent) evt;
            if (e.state() == IdleState.READER_IDLE) {
                ctx.close();
            } else if (e.state() == IdleState.WRITER_IDLE) {
                ctx.writeAndFlush(new PingMessage());
            }
        }
    }
}
In this example, the handler closes the connection on the first read idle and tries to send a ping on write idle. One could also implement the "pong" response, and change the read-idle branch to send a ping request too; how you handle your keep-alive depends on your protocol.
This could be done both on client and server side.
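Applied to the client code from the question, that means adding such a handler after the existing ones in initChannel. A minimal sketch, where PingMessage is a placeholder for whatever keep-alive command your CuCommandEncoder understands, and only the ALL_IDLE event is handled because the question configures IdleStateHandler(0, 0, 300):
bs.handler(new ChannelInitializer<SocketChannel>() {
    @Override
    protected void initChannel(SocketChannel ch) throws Exception {
        ch.pipeline().addLast("idleStateHandler", new IdleStateHandler(0, 0, 300));
        ch.pipeline().addLast("logger", new LoggingHandler());
        ch.pipeline().addLast("commandDecoder", new CuCommandDecoder());
        ch.pipeline().addLast("commandEncoder", new CuCommandEncoder());
        // Reacts to the IdleStateEvents fired by the IdleStateHandler above.
        ch.pipeline().addLast("keepAliveHandler", new ChannelDuplexHandler() {
            @Override
            public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
                if (evt instanceof IdleStateEvent
                        && ((IdleStateEvent) evt).state() == IdleState.ALL_IDLE) {
                    ctx.writeAndFlush(new PingMessage()); // placeholder keep-alive command
                } else {
                    super.userEventTriggered(ctx, evt);
                }
            }
        });
    }
});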

Create thousands of Netty clients without also creating thousands of threads

I have created a fairly straightforward server using Netty 4. I have been able to scale it up to handle several thousand connections, and it never climbs above ~40 threads.
In order to test it out, I have also created a test client that creates thousands of connections. Unfortunately, this creates as many threads as it makes connections. I was hoping to minimize threads for the clients. I have looked at many posts on this. Many examples show single-connection setup. This and this say to share the NioEventLoopGroup across clients, which I do. I get a limited number of NioEventLoopGroup threads, but a thread per connection somewhere else. I am not purposely creating threads in the pipeline and don't see what could be doing it.
Here is a snippet from the setup of my client code. It seems that it should maintain a fixed thread count based on what I've researched so far. Is there something I'm missing that I should be doing to prevent a thread per client connection?
Main
final EventLoopGroup group = new NioEventLoopGroup();
for (int i = 0; i < 100; i++) {
    MockClient client = new MockClient(i, group);
    client.connect();
}
MockClient
public class MockClient implements Runnable {

    private final EventLoopGroup group;
    private int identity;

    public MockClient(int identity, final EventLoopGroup group) {
        this.identity = identity;
        this.group = group;
    }

    @Override
    public void run() {
        try {
            connect();
        } catch (Exception e) {}
    }

    public void connect() throws Exception {
        Bootstrap b = new Bootstrap();
        b.group(group)
         .channel(NioSocketChannel.class)
         .handler(new MockClientInitializer(identity, this));
        final Runnable that = this;
        // Start the connection attempt
        b.connect(config.getHost(), config.getPort()).addListener(new ChannelFutureListener() {
            @Override
            public void operationComplete(ChannelFuture future) throws Exception {
                if (future.isSuccess()) {
                    Channel ch = future.sync().channel();
                } else {
                    // if the server is down, try again in a few seconds
                    future.channel().eventLoop().schedule(that, 15, TimeUnit.SECONDS);
                }
            }
        });
    }
}
As has happened to me many times before, explaining the problem in detail made me think about it more, and I came across the issue. I wanted to provide it here in case anyone else comes across the same issue when creating thousands of Netty clients.
I have one path in my pipeline that creates a timeout task to simulate a client connection rebooting. It turns out it was this timer task that was creating the extra threads whenever a connection received a 'reboot' signal from the server (which happens every so often), up until there was a thread per connection.
Handler
private final HashedWheelTimer timer;

@Override
protected void channelRead0(ChannelHandlerContext ctx, Packet msg) throws Exception {
    Packet packet = reboot();
    final ChannelFutureListener closeHandler = new ChannelFutureListener() {
        @Override
        public void operationComplete(ChannelFuture future) throws Exception {
            RebootTimeoutTask timeoutTask = new RebootTimeoutTask(identity, client);
            timer.newTimeout(timeoutTask, SECONDS_FOR_REBOOT, TimeUnit.SECONDS);
        }
    };
    ctx.writeAndFlush(packet).addListener(new ChannelFutureListener() {
        @Override
        public void operationComplete(ChannelFuture future) throws Exception {
            if (future.isSuccess()) {
                future.channel().close().addListener(closeHandler);
            } else {
                future.channel().close();
            }
        }
    });
}
Timeout Task
public class RebootTimeoutTask implements TimerTask {

    public RebootTimeoutTask(...) {...}

    @Override
    public void run(Timeout timeout) throws Exception {
        client.connect(identity);
    }
}
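The post does not show the fix itself. Two common ways to avoid the extra threads, both assumptions on my part rather than part of the original answer, are to share a single HashedWheelTimer instance across all clients, or to drop the timer and schedule the reboot on the channel's EventLoop, for example:
// Sketch: reschedule the reconnect on the shared event loop instead of a
// per-handler HashedWheelTimer, so no additional timer threads are created.
final ChannelFutureListener closeHandler = new ChannelFutureListener() {
    @Override
    public void operationComplete(ChannelFuture future) throws Exception {
        future.channel().eventLoop().schedule(new Callable<Void>() {
            @Override
            public Void call() throws Exception {
                client.connect(identity); // same work the RebootTimeoutTask did
                return null;
            }
        }, SECONDS_FOR_REBOOT, TimeUnit.SECONDS);
    }
};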

Implementing keep-alive messages in Netty using WriteTimeoutHandler

I am using Netty 3.2.7. I am trying to write functionality in my client such that if no messages are written after a certain amount of time (say, 30 seconds), a "keep-alive" message is sent to the server.
After some digging, I found that WriteTimeoutHandler should enable me to do this. I found this explanation here: https://issues.jboss.org/browse/NETTY-79.
The example given in the Netty documentation is:
public ChannelPipeline getPipeline() {
    // An example configuration that implements 30-second write timeout:
    return Channels.pipeline(
        new WriteTimeoutHandler(timer, 30), // timer must be shared.
        new MyHandler());
}
In my test client, I have done just this. In MyHandler, I also overrode the exceptionCaught() method:
public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) {
    if (e.getCause() instanceof WriteTimeoutException) {
        log.info("Client sending keep alive!");
        ChannelBuffer keepAlive = ChannelBuffers.buffer(KEEP_ALIVE_MSG_STR.length());
        keepAlive.writeBytes(KEEP_ALIVE_MSG_STR.getBytes());
        Channels.write(ctx, Channels.future(e.getChannel()), keepAlive);
    }
}
No matter how long the client goes without writing anything to the channel, the exceptionCaught() method I have overridden is never called.
Looking at the source of WriteTimeoutHandler, its writeRequested() implementation is:
public void writeRequested(ChannelHandlerContext ctx, MessageEvent e)
        throws Exception {
    long timeoutMillis = getTimeoutMillis(e);
    if (timeoutMillis > 0) {
        // Set timeout only when getTimeoutMillis() returns a positive value.
        ChannelFuture future = e.getFuture();
        final Timeout timeout = timer.newTimeout(
            new WriteTimeoutTask(ctx, future),
            timeoutMillis, TimeUnit.MILLISECONDS);
        future.addListener(new TimeoutCanceller(timeout));
    }
    super.writeRequested(ctx, e);
}
Here, it seems that this implementation says, "When a write is requested, make a new timeout. When the write succeeds, cancel the timeout."
Using a debugger, it does seem that this is what is happening. As soon as the write completes, the timeout is cancelled. This is not the behavior I want. The behavior I want is: "If the client has not written any information to the channel for 30 seconds, throw a WriteTimeoutException."
So, is this not what WriteTimeoutHandler is for? This is how I interpreted it from what I've read online, but the implementation does not seem to work this way. Am I using it wrong? Should I use something else? In our Mina version of the same client I am trying to rewrite, I see that the sessionIdle() method is overridden to achieve the behavior I want, but this method is not available in Netty.
For Netty 4.0 and newer, you should extend ChannelDuplexHandler as in the example from the IdleStateHandler documentation:
// An example that sends a ping message when there is no outbound traffic
// for 30 seconds. The connection is closed when there is no inbound traffic
// for 60 seconds.
public class MyChannelInitializer extends ChannelInitializer<Channel> {
    @Override
    public void initChannel(Channel channel) {
        channel.pipeline().addLast("idleStateHandler", new IdleStateHandler(60, 30, 0));
        channel.pipeline().addLast("myHandler", new MyHandler());
    }
}

// Handler should handle the IdleStateEvent triggered by IdleStateHandler.
public class MyHandler extends ChannelDuplexHandler {
    @Override
    public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
        if (evt instanceof IdleStateEvent) {
            IdleStateEvent e = (IdleStateEvent) evt;
            if (e.state() == IdleState.READER_IDLE) {
                ctx.close();
            } else if (e.state() == IdleState.WRITER_IDLE) {
                ctx.writeAndFlush(new PingMessage());
            }
        }
    }
}
I would suggest adding the IdleStateHandler and then adding your own implementation of IdleStateAwareChannelHandler, which can react to the idle state. This has worked out very well for me on many different projects.
The javadocs list the following example, which you could use as the base of your implementation:
public class MyPipelineFactory implements ChannelPipelineFactory {

    private final Timer timer;
    private final ChannelHandler idleStateHandler;

    public MyPipelineFactory(Timer timer) {
        this.timer = timer;
        this.idleStateHandler = new IdleStateHandler(timer, 60, 30, 0);
        // timer must be shared.
    }

    public ChannelPipeline getPipeline() {
        return Channels.pipeline(
            idleStateHandler,
            new MyHandler());
    }
}

// Handler should handle the IdleStateEvent triggered by IdleStateHandler.
public class MyHandler extends IdleStateAwareChannelHandler {
    @Override
    public void channelIdle(ChannelHandlerContext ctx, IdleStateEvent e) {
        if (e.getState() == IdleState.READER_IDLE) {
            e.getChannel().close();
        } else if (e.getState() == IdleState.WRITER_IDLE) {
            e.getChannel().write(new PingMessage());
        }
    }
}

ServerBootstrap bootstrap = ...;
Timer timer = new HashedWheelTimer();
...
bootstrap.setPipelineFactory(new MyPipelineFactory(timer));
...
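Since the question is about a client, the same pipeline factory can be installed on a ClientBootstrap; a minimal sketch, with host and port values that are only placeholders:
// Sketch: Netty 3 client wiring for the keep-alive pipeline factory above.
ClientBootstrap bootstrap = new ClientBootstrap(
        new NioClientSocketChannelFactory(
                Executors.newCachedThreadPool(),
                Executors.newCachedThreadPool()));
Timer timer = new HashedWheelTimer(); // shared across all channels
bootstrap.setPipelineFactory(new MyPipelineFactory(timer));
bootstrap.connect(new InetSocketAddress("localhost", 8080)); // placeholder address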
