I need to create an RMI service that can notify events to clients.
Each client registers itself on the server; a client can emit an event and the server will broadcast it to all other clients.
The program works, but the client reference on the server is never garbage collected, and the thread the server uses to check the liveness of that client reference never terminates.
So each time a client connects to the server, a new thread is created and never terminated.
The Notifier class can register and unregister a listener.
The broadcast method calls each registered listener and sends the message back.
public class Notifier extends UnicastRemoteObject implements INotifier {

    private final List<IListener> listeners = Collections.synchronizedList(new ArrayList<>());

    public Notifier() throws RemoteException {
        super();
    }

    @Override
    public void register(IListener listener) throws RemoteException {
        listeners.add(listener);
    }

    @Override
    public void unregister(IListener listener) throws RemoteException {
        boolean removed = listeners.remove(listener);
        if (removed) {
            System.out.println(listener + " removed");
        } else {
            System.out.println(listener + " NOT removed");
        }
    }

    @Override
    public void broadcast(String msg) throws RemoteException {
        synchronized (listeners) { // iterating a synchronizedList requires holding its lock
            for (IListener listener : listeners) {
                try {
                    listener.onMessage(msg);
                } catch (RemoteException e) {
                    e.printStackTrace();
                }
            }
        }
    }
}
The listener just prints each received message.
public class ListenerImpl extends UnicastRemoteObject implements IListener {

    public ListenerImpl() throws RemoteException {
        super();
    }

    @Override
    public void onMessage(String msg) throws RemoteException {
        System.out.println("Received: " + msg);
    }
}
The RunListener client subscribes a listener, waits a few seconds to receive a message, and then terminates.
public class RunListener {
    public static void main(String[] args) throws Exception {
        Registry registry = LocateRegistry.getRegistry();
        INotifier notifier = (INotifier) registry.lookup("Notifier");
        ListenerImpl listener = new ListenerImpl();
        notifier.register(listener);
        Thread.sleep(6000);
        notifier.unregister(listener);
        UnicastRemoteObject.unexportObject(listener, true);
    }
}
The RunNotifier just publishes the service and periodically sends a message.
public class RunNotifier {

    static AtomicInteger counter = new AtomicInteger();

    public static void main(String[] args) throws RemoteException, AlreadyBoundException, NotBoundException {
        Registry registry = LocateRegistry.createRegistry(1099);
        INotifier notifier = new Notifier();
        registry.bind("Notifier", notifier);
        ScheduledExecutorService executor = Executors.newScheduledThreadPool(1);
        executor.scheduleAtFixedRate(new Runnable() {
            @Override
            public void run() {
                try {
                    int n = counter.incrementAndGet();
                    System.out.println("Broadcasting " + n);
                    notifier.broadcast("Hello (" + n + ")");
                } catch (RemoteException e) {
                    e.printStackTrace();
                }
            }
        }, 5, 5, TimeUnit.SECONDS);
        try {
            System.in.read();
        } catch (IOException e) {
        }
        executor.shutdown();
        registry.unbind("Notifier");
        UnicastRemoteObject.unexportObject(notifier, true);
    }
}
I've seen many Q&As on Stack Overflow about RMI, but none addresses this kind of problem.
I guess I'm making some very big mistake, but I can't spot it.
As you can see in the picture, a new RMI RenewClean thread is created for each incoming connection, and this thread never terminates.
Once the client disconnects and terminates, the RenewClean thread silently swallows every ConnectionException thrown and keeps polling a client that will never reply.
As a side note, I even tried keeping only weak references to the IListener instances in the Notifier class, and the results are still the same.
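A sketch of what that weak-reference variant looks like (illustrative only, since the exact code isn't shown above; it swaps the strong listener references for java.lang.ref.WeakReference wrappers):
private final List<WeakReference<IListener>> listeners =
        Collections.synchronizedList(new ArrayList<>());

@Override
public void register(IListener listener) throws RemoteException {
    listeners.add(new WeakReference<>(listener));
}

@Override
public void broadcast(String msg) throws RemoteException {
    synchronized (listeners) {
        Iterator<WeakReference<IListener>> it = listeners.iterator();
        while (it.hasNext()) {
            IListener l = it.next().get();
            if (l == null) {        // referent collected: drop the stale entry
                it.remove();
                continue;
            }
            try {
                l.onMessage(msg);
            } catch (RemoteException e) {
                e.printStackTrace();
            }
        }
    }
}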
This may not be very helpful if you are stuck on JDK 1.8, but when I test on JDK 17, the multiple RMI server threads created for each incoming client (RMI RenewClean-[IPADDRESS:PORT]) are cleaned up on the server and do not show the "will never terminate" behaviour you may have observed on JDK 1.8. It may be a JDK 1.8 issue, or simply that you are not waiting long enough for the threads to end.
For quicker cleanup, try lowering the system property that controls the client-side DGC interval from its default (3600000 ms = 1 hour), for example:
java -Dsun.rmi.dgc.client.gcInterval=30000 ...
On my server I added this in one of the API callbacks:
Function<Thread,String> toString = t -> t.getName()+(t.isDaemon() ? " DAEMON" :"");
Set<Thread> threads = Thread.getAllStackTraces().keySet();
System.out.println("-".repeat(40)+" Threads x "+threads.size());
threads.stream().map(toString).forEach(System.out::println);
After RMI server startup it printed names of threads and no instances of "RMI RenewClean":
---------------------------------------- Threads x 12
After connecting many times from a client, the server reported corresponding instances of "RMI RenewClean":
---------------------------------------- Threads x 81
Leaving the RMI server alone for a while, these gradually shrank back (not to 12 threads, but low enough to suggest that RMI thread handling is not filling up with many unnecessary daemon threads):
---------------------------------------- Threads x 20
After about an hour all the remaining "RMI RenewClean" were removed - probably due to housekeeping performed at the interval defined by the VM setting sun.rmi.dgc.client.gcInterval=3600000:
---------------------------------------- Threads x 13
Note also that RMI server shutdown is instant at any point: the "RMI RenewClean" daemon threads do not hold up RMI server shutdown.
I'm new to Netty and I would like to create a proxy server using Netty that does the following:
- Upon receiving data from a client, the proxy server runs some business logic that may modify the data and then forwards it to the remote server; this business logic belongs to a transaction.
- If the remote server returns a success response, the proxy server commits the transaction; otherwise the proxy server rolls the transaction back.
Data flow diagram
I have taken a look at the proxy example at https://netty.io/4.1/xref/io/netty/example/proxy/package-summary.html but I haven't figured out a good and simple way to implement the transaction logic mentioned above.
I should mention that I have created a separate thread pool to execute this business logic to avoid blocking the NIO thread. My current solution actually uses two thread pools with the same number of threads: one in the frontend handler and one in the backend handler; the frontend thread uses wait() to wait for the response from the backend thread.
Here is my current code for the frontend handler:
@Override
public void channelActive(ChannelHandlerContext ctx) {
    final Channel inboundChannel = ctx.channel();
    // Start the connection attempt.
    Bootstrap b = new Bootstrap();
    b.group(inboundChannel.eventLoop())
     .channel(ctx.channel().getClass())
     .handler(new ServerBackendHandler(inboundChannel, response))
     .option(ChannelOption.AUTO_READ, false);
    ChannelFuture f = b.connect(remoteHost, remotePort);
    outboundChannel = f.channel();
    f.addListener(new ChannelFutureListener() {
        public void operationComplete(ChannelFuture future) throws Exception {
            if (future.isSuccess()) {
                // Connection complete; start to read the first data.
                inboundChannel.read();
            } else {
                // Close the connection if the connection attempt has failed.
                inboundChannel.close();
            }
        }
    });
}
@Override
public void channelRead(final ChannelHandlerContext ctx, Object msg) {
    if (outboundChannel.isActive()) {
        // Execute the business logic in a separate thread pool to avoid
        // blocking the asynchronous I/O (event loop) thread.
        frontendThreadPool.execute(new Runnable() {
            @Override
            public void run() {
                synchronized (response) {
                    // Sleep to simulate the business operation; insert business logic here.
                    int randomNum = ThreadLocalRandom.current().nextInt(1000, 2001);
                    try {
                        Thread.sleep(randomNum);
                    } catch (InterruptedException e1) {
                        e1.printStackTrace();
                    }
                    outboundChannel.writeAndFlush(msg).addListener(new ChannelFutureListener() {
                        public void operationComplete(ChannelFuture future) throws Exception {
                            if (future.isSuccess()) {
                                // Was able to flush out data; start to read the next chunk.
                                ctx.channel().read();
                            } else {
                                future.channel().close();
                            }
                        }
                    });
                    System.out.println("Blank response : " + response.getResponse());
                    // Wait for the response from the remote server.
                    try {
                        response.wait();
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                    System.out.println("Returned response from back end: " + response.getResponse());
                    // More business logic here: if the remote server returned success,
                    // commit the transaction; on failure, throw to roll back.
                    // Stop the current thread since we are done with it.
                    Thread.currentThread().interrupt();
                }
            }
        });
    }
}
And for the backend handler:
@Override
public void channelRead(final ChannelHandlerContext ctx, Object msg) {
    ByteBuf m = (ByteBuf) msg;
    m = safeBuffer(m, ctx.alloc());
    String str = m.toString(Charset.forName("UTF-8"));
    backendThreadPool.execute(new Runnable() {
        @Override
        public void run() {
            synchronized (response) {
                // Sleep to simulate the business operation.
                int randomNum = ThreadLocalRandom.current().nextInt(1000, 2001);
                try {
                    Thread.sleep(randomNum);
                } catch (InterruptedException e1) {
                    e1.printStackTrace();
                }
                response.setResponse(str);
                System.out.println("Finished at back_end.");
                response.notify();
                Thread.currentThread().interrupt();
            }
        }
    });
    String s = "Message returned from remote server through proxy : " + str;
    byte[] b = s.getBytes(Charset.forName("UTF-8"));
    defaultResponse.writeBytes(b);
    inboundChannel.writeAndFlush(defaultResponse).addListener(new ChannelFutureListener() {
        public void operationComplete(ChannelFuture future) throws Exception {
            if (future.isSuccess()) {
                ctx.channel().read();
            } else {
                future.channel().close();
            }
        }
    });
}
This solution is not at all optimized, since the server has to use two threads to execute one transaction. So I guess my questions are:
- Can I (and if I can, should I) use Spring @Transactional on the channelRead method?
- How can I implement the logic explained above in a simple way using Netty?
I have also used JMeter to test the code above, but it doesn't seem very stable: at around 2000 connections, with 250 max threads in each thread pool, lots of requests didn't even get a response.
Thanks in advance.
I have a scenario where I am establishing a TCP connection using Netty NIO. Suppose the server goes down; how can I automatically reconnect to the server when it comes up again?
Or is there any way to attach an availability listener on the server?
You can have a DisconnectionHandler, as the first thing in your client pipeline, that reacts to channelInactive by immediately trying to reconnect or scheduling a reconnection task.
For example,
public class DisconnectionHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelInactive(final ChannelHandlerContext ctx) throws Exception {
        Channel channel = ctx.channel();
        /* If a shutdown is ongoing, ignore */
        if (channel.eventLoop().isShuttingDown()) return;
        ReconnectionTask reconnect = new ReconnectionTask(channel);
        reconnect.run();
    }
}
The ReconnectionTask would be something like this:
public class ReconnectionTask implements Runnable, ChannelFutureListener {

    Channel previous;

    public ReconnectionTask(Channel c) {
        this.previous = c;
    }

    @Override
    public void run() {
        Bootstrap b = createBootstrap();
        b.remoteAddress(previous.remoteAddress())
         .connect()
         .addListener(this);
    }

    @Override
    public void operationComplete(ChannelFuture future) throws Exception {
        if (!future.isSuccess()) {
            // Will try to connect again in 100 ms. Here you should probably use
            // exponential backoff or some sort of randomization to define the retry period.
            previous.eventLoop()
                    .schedule(this, 100, TimeUnit.MILLISECONDS);
            return;
        }
        // Do something else on success if needed.
    }
}
Check here for an example of an exponential backoff library.
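For illustration, exponential backoff can also be folded directly into the task above; a minimal sketch (the doubling factor and the 30-second cap are arbitrary choices, and createBootstrap() is the same helper assumed above):
public class BackoffReconnectionTask implements Runnable, ChannelFutureListener {

    private final Channel previous;
    private long delayMs = 100; // current retry delay, doubled on every failure

    public BackoffReconnectionTask(Channel c) {
        this.previous = c;
    }

    @Override
    public void run() {
        Bootstrap b = createBootstrap();
        b.remoteAddress(previous.remoteAddress())
         .connect()
         .addListener(this);
    }

    @Override
    public void operationComplete(ChannelFuture future) throws Exception {
        if (!future.isSuccess()) {
            previous.eventLoop().schedule(this, delayMs, TimeUnit.MILLISECONDS);
            delayMs = Math.min(delayMs * 2, 30_000); // exponential backoff, capped at 30 s
        }
    }
}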
I have the following code in my main application:
package acast;

import java.net.SocketException;
import java.util.concurrent.ConcurrentLinkedQueue;

public class ACast {

    private ConcurrentLinkedQueue<String> queue;

    public ACast() throws SocketException {
        queue = new ConcurrentLinkedQueue<String>();
        UDPServer srv = new UDPServer(4321);
        srv.addUDPacketListener(new UDPPacketListener() {
            @Override
            public void onPacketReceived(String packet) {
                ACast.this.queue.offer(packet);
            }
        });
        srv.start();
    }

    public static void main(String[] args) {
        try {
            new ACast();
        } catch (SocketException e) {
            //e.printStackTrace();
            System.out.println("Socket already opened. Can't start application");
            System.exit(1);
        }
    }
}
My UDPServer extends Thread and calls onPacketReceived every time it receives a UDP datagram. I want my main app to do something every time a configured timeout passes since the last received datagram. I would like to avoid running a thread that just checks the timeout second by second; instead I would like to start a countdown exactly at the moment of the last received datagram and cancel any other ongoing timeout threads. Any help?
A simple solution would be to start a Timer with the timeout task and, every time a new datagram is received, cancel the currently running timer and start a new one.
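A minimal sketch of that idea using a single-threaded ScheduledExecutorService instead of Timer (the 5-second timeout and the onTimeout() hook are placeholders):
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public class DatagramTimeoutWatch {

    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    private ScheduledFuture<?> pending;

    // Call this from onPacketReceived: it cancels the previous countdown
    // and starts a fresh one.
    public synchronized void packetReceived() {
        if (pending != null) {
            pending.cancel(false);
        }
        pending = scheduler.schedule(this::onTimeout, 5, TimeUnit.SECONDS);
    }

    private void onTimeout() {
        System.out.println("No datagram received for 5 seconds");
    }
}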
I would lose the asynchronicity altogether, and use blocking I/O with a read timeout.
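That approach could look roughly like this (a sketch; the port and timeout values are placeholders):
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.SocketTimeoutException;

public class BlockingUdpLoop {
    public static void main(String[] args) throws Exception {
        try (DatagramSocket socket = new DatagramSocket(4321)) {
            socket.setSoTimeout(5000); // the read timeout doubles as the inactivity timeout
            byte[] buf = new byte[1500];
            while (true) {
                DatagramPacket packet = new DatagramPacket(buf, buf.length);
                try {
                    socket.receive(packet); // blocks until a datagram arrives or the timeout fires
                    // handle the packet here
                } catch (SocketTimeoutException e) {
                    System.out.println("No datagram received within the timeout");
                }
            }
        }
    }
}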
I have created a fairly straightforward server using Netty 4. I have been able to scale it up to handle several thousand connections and it never climbs above ~40 threads.
In order to test it out, I have also created a test client that creates thousands of connections. Unfortunately this creates as many threads as it makes connections. I was hoping to minimize threads for the clients. I have looked at many posts on this. Many examples show single-connection setup. This and this say to share NioEventLoopGroup across clients, which I do. I get a limited number of NioEventLoopGroup threads, but I'm getting a thread per connection elsewhere. I am not purposely creating threads in the pipeline and don't see what could be doing so.
Here is a snippet from the setup of my client code. It seems that it should maintain a fixed thread count based on what I've researched so far. Is there something I'm missing that I should be doing to prevent a thread per client connection?
Main
final EventLoopGroup group = new NioEventLoopGroup();
for (int i = 0; i < 100; i++) {
    MockClient client = new MockClient(i, group);
    client.connect();
}
MockClient
public class MockClient implements Runnable {

    private final EventLoopGroup group;
    private int identity;

    public MockClient(int identity, final EventLoopGroup group) {
        this.identity = identity;
        this.group = group;
    }

    @Override
    public void run() {
        try {
            connect();
        } catch (Exception e) {}
    }

    public void connect() throws Exception {
        Bootstrap b = new Bootstrap();
        b.group(group)
         .channel(NioSocketChannel.class)
         .handler(new MockClientInitializer(identity, this));
        final Runnable that = this;
        // Start the connection attempt.
        b.connect(config.getHost(), config.getPort()).addListener(new ChannelFutureListener() {
            @Override
            public void operationComplete(ChannelFuture future) throws Exception {
                if (future.isSuccess()) {
                    Channel ch = future.sync().channel();
                } else {
                    // If the server is down, try again in a few seconds.
                    future.channel().eventLoop().schedule(that, 15, TimeUnit.SECONDS);
                }
            }
        });
    }
}
As has happened to me many times before, explaining the problem in detail made me think about it more, and I came across the issue. I wanted to provide it here in case anyone else comes across the same issue when creating thousands of Netty clients.
I have one path in my pipeline that creates a timeout task to simulate a client connection rebooting. It turns out it was this timer task that was creating the extra threads per connection whenever it received a 'reboot' signal from the server (which happens every so often), up until there was a thread per connection.
Handler
private final HashedWheelTimer timer;

@Override
protected void channelRead0(ChannelHandlerContext ctx, Packet msg) throws Exception {
    Packet packet = reboot();
    ChannelFutureListener closeHandler = new ChannelFutureListener() {
        @Override
        public void operationComplete(ChannelFuture future) throws Exception {
            RebootTimeoutTask timeoutTask = new RebootTimeoutTask(identity, client);
            timer.newTimeout(timeoutTask, SECONDS_FOR_REBOOT, TimeUnit.SECONDS);
        }
    };
    ctx.writeAndFlush(packet).addListener(new ChannelFutureListener() {
        @Override
        public void operationComplete(ChannelFuture future) throws Exception {
            if (future.isSuccess()) {
                future.channel().close().addListener(closeHandler);
            } else {
                future.channel().close();
            }
        }
    });
}
Timeout Task
public class RebootTimeoutTask implements TimerTask {

    public RebootTimeoutTask(...) {...}

    @Override
    public void run(Timeout timeout) throws Exception {
        client.connect(identity);
    }
}
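The usual remedy, which the diagnosis above points to, is to share one HashedWheelTimer across all connections instead of holding one per handler, since each instance owns its own worker thread; a sketch (the holder class and field names are illustrative):
import io.netty.util.HashedWheelTimer;

// One timer for the whole client process. A HashedWheelTimer created per
// connection yields one worker thread per connection.
public final class SharedTimer {
    public static final HashedWheelTimer INSTANCE = new HashedWheelTimer();
    private SharedTimer() {}
}
The handler would then schedule reboot timeouts as SharedTimer.INSTANCE.newTimeout(timeoutTask, SECONDS_FOR_REBOOT, TimeUnit.SECONDS) rather than on a per-handler timer field.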
Motivation
I want extra eyes to confirm that I am able to call the method XMPPConnection.sendPacket(Packet) concurrently. In my current code, I invoke a List of Callables (max 3) serially. Each Callable sends/receives XMPP packets over the one shared XMPPConnection. I plan to parallelize these Callables by spinning off multiple threads, where each Callable will invoke sendPacket on the shared XMPPConnection without synchronization.
XMPPConnection
class XMPPConnection
{
    private boolean connected = false;

    public boolean isConnected()
    {
        return connected;
    }

    PacketWriter packetWriter;

    public void sendPacket( Packet packet )
    {
        if (!isConnected())
            throw new IllegalStateException("Not connected to server.");
        if (packet == null)
            throw new NullPointerException("Packet is null.");
        packetWriter.sendPacket(packet);
    }
}
PacketWriter
class PacketWriter
{
    public void sendPacket(Packet packet)
    {
        if (!done) {
            // Invoke interceptors for the new packet that is about to be
            // sent. Interceptors may modify the content of the packet.
            processInterceptors(packet);
            try {
                queue.put(packet);
            }
            catch (InterruptedException ie) {
                ie.printStackTrace();
                return;
            }
            synchronized (queue) {
                queue.notifyAll();
            }
            // Process packet writer listeners. Note that we're using the
            // sending thread, so it's expected that listeners are fast.
            processListeners(packet);
        }
    }

    protected PacketWriter( XMPPConnection connection )
    {
        this.queue = new ArrayBlockingQueue<Packet>(500, true);
        this.connection = connection;
        init();
    }
}
What I conclude
Since the PacketWriter is using a BlockingQueue, there is no problem with my intention to invoke sendPacket from multiple threads. Am I correct?
Yes, you can send packets from different threads without any problems.
Smack's blocking queue exists because what you can't do is let different threads write to the output stream at the same time. Smack takes responsibility for synchronizing the output stream by writing to it with per-packet granularity.
The pattern implemented by Smack is simply a typical producer/consumer concurrency pattern. You may have several producers (your threads) and only one consumer (Smack's PacketWriter, running in its own thread).
Regards.
You haven't provided enough information here.
We don't know how the following are implemented:
processInterceptors
processListeners
Who reads / writes the 'done' variable? If one thread sets it to true, then all the other threads will silently fail.
From a quick glance, this doesn't look thread safe, but there's no way to tell for sure from what you've posted.
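For instance, if done is written by one thread and read by the sender threads, it should at minimum be volatile so the write is visible to all of them (a sketch; the field is assumed to live in PacketWriter):
// Without volatile, a sender thread may never observe done == true
// and could keep queueing packets after shutdown.
private volatile boolean done = false;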
Other issues:
Why is PacketWriter a class member of XMPPConnection when it's only used in one method?
Why does PacketWriter have an XMPPConnection member var and not use it?
You might consider using a BlockingQueue if you can restrict to Java 5+.
From the Java API docs, with a minor change to use ArrayBlockingQueue:
class Producer implements Runnable {
    private final BlockingQueue queue;
    Producer(BlockingQueue q) { queue = q; }
    public void run() {
        try {
            while (true) { queue.put(produce()); }
        } catch (InterruptedException ex) { ... handle ... }
    }
    Object produce() { ... }
}

class Consumer implements Runnable {
    private final BlockingQueue queue;
    Consumer(BlockingQueue q) { queue = q; }
    public void run() {
        try {
            while (true) { consume(queue.take()); }
        } catch (InterruptedException ex) { ... handle ... }
    }
    void consume(Object x) { ... }
}

class Setup {
    void main() {
        BlockingQueue q = new ArrayBlockingQueue(100); // ArrayBlockingQueue requires a capacity
        Producer p = new Producer(q);
        Consumer c1 = new Consumer(q);
        Consumer c2 = new Consumer(q);
        new Thread(p).start();
        new Thread(c1).start();
        new Thread(c2).start();
    }
}
For your usage, you'd have your real sender (the holder of the actual connection) be the consumer, and the packet preparers/senders be the producers.
An interesting additional thought is that you could use a PriorityBlockingQueue to allow high-priority ("flash override") XMPP packets to be sent before any other waiting packets.
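A sketch of that variant (the wrapper type and comparator are illustrative, not part of Smack):
import java.util.Comparator;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.PriorityBlockingQueue;

class PrioritizedPacket {
    final Object packet;  // the XMPP packet to send
    final int priority;   // lower value = sent sooner

    PrioritizedPacket(Object packet, int priority) {
        this.packet = packet;
        this.priority = priority;
    }
}

class PriorityQueueSetup {
    // The writer thread takes from this queue, so urgent packets
    // jump ahead of anything still waiting.
    static final BlockingQueue<PrioritizedPacket> queue =
            new PriorityBlockingQueue<>(100, Comparator.comparingInt((PrioritizedPacket p) -> p.priority));
}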
Also, Glen's points on the design are good points. You might want to take a look at the Smack API (http://www.igniterealtime.org/projects/smack/) rather than creating your own.