As described in a separate question, when using Undertow, all the processing should be done in a dedicated Worker thread pool, which looks like this:
public class Start {
public static void main(String[] args) {
Undertow server = Undertow.builder()
.addListener(8080, "localhost")
.setHandler(new HttpHandler() {
public void handleRequest(HttpServerExchange exchange)
throws Exception {
if (exchange.isInIoThread()) {
exchange.dispatch(this);
return;
}
exchange.getResponseHeaders()
.put(Headers.CONTENT_TYPE, "text/plain");
exchange.getResponseSender()
.send("Hello World");
}
})
.build();
server.start();
}
}
I understand that BlockingHandler can be used to explicitly tell Undertow to schedule the request on a dedicated thread pool for blocking requests. We could adapt the above example by wrapping the HttpHandler in an instance of BlockingHandler, like so:
.setHandler(new BlockingHandler(new HttpHandler() {
This would work for calls that we know are always blocking.
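For completeness, here is a minimal sketch of the fully wrapped variant (same handler as above; BlockingHandler lives in io.undertow.server.handlers):
Undertow server = Undertow.builder()
    .addHttpListener(8080, "localhost")
    // BlockingHandler dispatches every request straight to the worker pool
    .setHandler(new BlockingHandler(new HttpHandler() {
        public void handleRequest(HttpServerExchange exchange) throws Exception {
            exchange.getResponseHeaders().put(Headers.CONTENT_TYPE, "text/plain");
            exchange.getResponseSender().send("Hello World");
        }
    }))
    .build();
server.start();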
However, if some code is non-blocking most of the time but sometimes requires a blocking call, how do we turn that blocking call into a non-blocking one? For example, if the requested value is present in the cache, the following code would not block (it's just fetching from some Map<>), but if it's not, the value has to be fetched from the database.
public class Start {
public static void main(String[] args) {
Undertow server = Undertow.builder()
.addListener(8080, "localhost")
.setHandler(new HttpHandler() {
public void handleRequest(HttpServerExchange exchange)
throws Exception {
if (exchange.isInIoThread()) {
exchange.dispatch(this);
return;
}
if (valueIsPresentInCache(exchange)) {
exchange.getResponseSender().send(valueFromCache); // non-blocking
} else {
exchange.getResponseSender().send(fetchValueFromDatabase()); // blocking!!!
}
}
})
.build();
server.start();
}
}
According to the docs, there is a method HttpServerExchange.startBlocking(), but according to its JavaDoc, unless you really need to use the input stream it changes little, and the work is still a blocking one:
Calling this method puts the exchange in blocking mode, and creates a
BlockingHttpExchange object to store the streams. When an exchange is
in blocking mode the input stream methods become available, other than
that there is presently no major difference between blocking and
non-blocking modes
How would one turn this blocking call into a non-blocking one?
The correct way is to actually do the logic in the IO thread, if it is non-blocking. Otherwise, delegate the request to a dedicated thread, like this:
public class Example {
public static void main(String[] args) {
Undertow server = Undertow.builder()
.addListener(8080, "localhost")
.setHandler(new HttpHandler() {
public void handleRequest(HttpServerExchange exchange)
throws Exception {
if (valueIsPresentInCache(exchange)) {
// non-blocking, safe to do from the IO thread
exchange.getResponseSender().send(getValueFromCache());
} else {
if (exchange.isInIoThread()) {
exchange.dispatch(this);
// we return immediately, otherwise this request would be
// handled both in the IO thread and in a worker thread,
// throwing an exception
return;
}
// blocking!!! -- but by now we are on a worker thread
exchange.getResponseSender().send(fetchValueFromDatabase());
}
}
})
.build();
server.start();
}
}
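As an alternative to re-dispatching the whole handler, you can hand just the blocking branch to the worker pool. A sketch using the dispatch(Runnable) overload (same hypothetical valueIsPresentInCache/getValueFromCache/fetchValueFromDatabase helpers as above):
if (valueIsPresentInCache(exchange)) {
    exchange.getResponseSender().send(getValueFromCache()); // fine on the IO thread
} else {
    // dispatch(Runnable) moves only this block to a worker thread
    exchange.dispatch(() -> {
        String value = fetchValueFromDatabase(); // blocking, but off the IO thread
        exchange.getResponseSender().send(value);
    });
}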
I need to create an RMI service which can notify clients of events.
Each client registers itself on the server; a client can emit an event and the server will broadcast it to all other clients.
The program works, but the client reference on the server is never garbage collected, and the thread which the server uses to check the client reference never terminates.
So each time a client connects to the server, a new thread is created and never terminated.
The Notifier class can register and unregister a listener.
The broadcast method calls each registered listener and sends the message back.
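The remote interfaces themselves are not shown in the post; presumably they look something like this minimal sketch (the names come from the code below, the exact signatures are my assumption; each interface would live in its own file):
import java.rmi.Remote;
import java.rmi.RemoteException;

public interface INotifier extends Remote {
    void register(IListener listener) throws RemoteException;
    void unregister(IListener listener) throws RemoteException;
    void broadcast(String msg) throws RemoteException;
}

public interface IListener extends Remote {
    void onMessage(String msg) throws RemoteException;
}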
public class Notifier extends UnicastRemoteObject implements INotifier {
private List<IListener> listeners = Collections.synchronizedList(new ArrayList<>());
public Notifier() throws RemoteException {
super();
}
@Override
public void register(IListener listener) throws RemoteException{
listeners.add(listener);
}
@Override
public void unregister(IListener listener) throws RemoteException{
boolean remove = listeners.remove(listener);
if(remove) {
System.out.println(listener+" removed");
} else {
System.out.println(listener+" NOT removed");
}
}
@Override
public void broadcast(String msg) throws RemoteException {
for (IListener listener : listeners) {
try {
listener.onMessage(msg);
} catch (RemoteException e) {
e.printStackTrace();
}
}
}
}
The listener is just printing each received message.
public class ListenerImpl extends UnicastRemoteObject implements IListener {
public ListenerImpl() throws RemoteException {
super();
}
@Override
public void onMessage(String msg) throws RemoteException{
System.out.println("Received: "+msg);
}
}
The RunListener client registers a listener, waits a few seconds to receive a message, and then terminates.
public class RunListener {
public static void main(String[] args) throws Exception {
Registry registry = LocateRegistry.getRegistry();
INotifier notifier = (INotifier) registry.lookup("Notifier");
ListenerImpl listener = new ListenerImpl();
notifier.register(listener);
Thread.sleep(6000);
notifier.unregister(listener);
UnicastRemoteObject.unexportObject(listener, true);
}
}
The RunNotifier just publishes the service and periodically sends a message.
public class RunNotifier {
static AtomicInteger counter = new AtomicInteger();
public static void main(String[] args) throws RemoteException, AlreadyBoundException, NotBoundException {
Registry registry = LocateRegistry.createRegistry(1099);
INotifier notifier = new Notifier();
registry.bind("Notifier", notifier);
ScheduledExecutorService executor = Executors.newScheduledThreadPool(1);
executor.scheduleAtFixedRate(new Runnable() {
@Override
public void run() {
try {
int n = counter.incrementAndGet();
System.out.println("Broadcasting "+n);
notifier.broadcast("Hello ("+n+ ")");
} catch (RemoteException e) {
e.printStackTrace();
}
}
}, 5, 5, TimeUnit.SECONDS);
try {
System.in.read();
} catch (IOException e) {
}
executor.shutdown();
registry.unbind("Notifier");
UnicastRemoteObject.unexportObject(notifier, true);
}
}
I've seen many Q&As on Stack Overflow about RMI, but none addresses this kind of problem.
I guess I'm making some very big mistake, but I can't spot it.
As you can see in the picture, a new RMI RenewClean thread is created for each incoming connection, and this thread never terminates.
Once the client disconnects and terminates, the RenewClean thread silently swallows every ConnectionException thrown and keeps polling a client that will never reply.
As a side note, I even tried keeping just a weak reference to the IListener in the Notifier class, and the results are still the same.
This may not be very helpful if you are stuck on JDK 1.8, but when I test on JDK 17, the multiple RMI server threads created for each incoming client (RMI RenewClean-[IPADDRESS:PORT]) are cleaned up on the server and do not show the "will never terminate" behaviour you observed on JDK 1.8. It may be a JDK 1.8 issue, or simply that you are not waiting long enough for the threads to end.
For quicker cleanup, try adjusting the system property for the client thread garbage collection setting from its default of 3600000 ms (one hour):
java -Dsun.rmi.dgc.client.gcInterval=3600000 ...
On my server I added this in one of the API callbacks:
Function<Thread,String> toString = t -> t.getName()+(t.isDaemon() ? " DAEMON" :"");
Set<Thread> threads = Thread.getAllStackTraces().keySet();
System.out.println("-".repeat(40)+" Threads x "+threads.size());
threads.stream().map(toString).forEach(System.out::println);
After RMI server startup it printed names of threads and no instances of "RMI RenewClean":
---------------------------------------- Threads x 12
After connecting many times from a client, the server reported corresponding instances of "RMI RenewClean":
---------------------------------------- Threads x 81
Leaving the RMI server running for a while, these gradually shrank back: not to 12 threads, but low enough to suggest that RMI thread handling is not piling up many unnecessary daemon threads:
---------------------------------------- Threads x 20
After about an hour, all the remaining "RMI RenewClean" threads were removed, probably due to housekeeping performed at the interval defined by the VM setting sun.rmi.dgc.client.gcInterval=3600000:
---------------------------------------- Threads x 13
Note also that RMI server shutdown is instant at any point: the "RMI RenewClean" daemon threads do not hold up RMI server shutdown.
I'm following Jenkov's tutorial on Vert.x. Here I have two files:
MyVerticle.java:
import io.vertx.core.AbstractVerticle;
import io.vertx.core.Future;
public class MyVerticle extends AbstractVerticle {
@Override
public void start(Future<Void> startFuture) {
System.out.println("MyVerticle started!");
}
@Override
public void stop(Future<Void> stopFuture) throws Exception {
System.out.println("MyVerticle stopped!");
}
}
and VertxVerticleMain.java:
import io.vertx.core.Vertx;
public class VertxVerticleMain {
public static void main(String[] args) {
Vertx vertx = Vertx.vertx();
vertx.deployVerticle(new MyVerticle());
}
}
After running VertxVerticleMain.java, I saw "MyVerticle started!" in Eclipse's console, but I don't know how to call stop() in MyVerticle.
Jenkov says that the stop() method is called when Vert.x shuts down and your verticle needs to stop. How exactly do I shut down Vert.x and stop this verticle? I want to see "MyVerticle stopped!" in the console.
From the Vert.x docs:
Vert.x calls this method when un-deploying the instance. You do not call it yourself.
If you run Vert.x from a main method and you terminate the JVM process (by clicking the 'stop' button in Eclipse, for example), Vert.x probably isn't signaled to undeploy the verticles, or the JVM terminates before Vert.x has time to undeploy the verticles.
You can do a number of things to ensure that the verticle will be undeployed and the stop() method will be called:
Start the verticle using the vertx command line. When you stop the process (or tell Vert.x to stop), Vert.x will make sure that all verticles are undeployed.
You can programmatically undeploy the deployed verticles by fetching the list of deploymentId's and calling undeploy for all id's:
vertx.deploymentIDs().forEach(vertx::undeploy);
You can programmatically tell Vert.x to stop:
vertx.close();
You can add a shutdown hook to make sure that one of the options above is executed on JVM termination:
Runtime.getRuntime().addShutdownHook(new Thread() {
public void run() {
vertx.close();
}
});
You can either programmatically undeploy the verticle by calling the Vert.x API, or just stop the Java process, which in turn triggers the Vert.x process to stop.
By the way, it's worth asking yourself whether it's really necessary that the stop() method is always called when the process running the verticle stops. You can never be sure it happens: when the process is forced to stop or killed, the stop() method might not be called.
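Putting the options together, a minimal sketch (assuming the MyVerticle class from the question): deploy, then undeploy explicitly and close Vert.x once the verticle is gone, so stop() runs before the JVM exits:
Vertx vertx = Vertx.vertx();
vertx.deployVerticle(new MyVerticle(), deployed -> {
    // give the verticle a moment to run, then undeploy it;
    // stop() is called during undeploy, printing "MyVerticle stopped!"
    vertx.setTimer(2000, timerId ->
        vertx.undeploy(deployed.result(), undeployed -> vertx.close()));
});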
Your code should call super.start() and super.stop(), like this:
public class MyVerticle extends AbstractVerticle {
@Override
public void start(Future<Void> startFuture) {
//must call super.start() or call startFuture.complete()
super.start(startFuture);
System.out.println("MyVerticle started!");
System.out.println("Verticle_stopFuture.start(): deployId=" + context.deploymentID());
}
@Override
public void stop(Future<Void> stopFuture) throws Exception {
//must call super.stop() or call stopFuture.complete()
super.stop(stopFuture);
System.out.println("MyVerticle stopped!");
}
}
and VertxVerticleMain.java:
public class VertxVerticleMain {
static String verticle_deployId;
public static void main(String[] args) throws InterruptedException {
System.out.println("start main(): thread="+Thread.currentThread().getId());
Vertx vertx = Vertx.vertx();
vertx.deployVerticle(new MyVerticle(), new Handler<AsyncResult<String>>(){
@Override
public void handle(AsyncResult<String> asyncResult) {
if (asyncResult.succeeded()) { // when startFuture.complete() is called
System.out.println("asyncResult = DeployId =" + asyncResult.result());
verticle_deployId = asyncResult.result();
} else { // when startFuture.fail() is called
System.out.println("Deployment failed!"); // no deployment id was assigned
}
}
});
// wait for the verticle context to be allocated by Vert.x
Thread.sleep(500);
Set<String> deploymentIDs = vertx.deploymentIDs();
System.out.println("============== (sleeped 500ms wait for Context allocated), list of deploymentIDs: number Deployments =" + deploymentIDs.size());
for(String depId: deploymentIDs){
System.out.println(depId);
}
// undeploy the verticle here, which triggers stop()
vertx.undeploy(verticle_deployId);
}
}
I'm implementing websockets using Vert.x 3.
The scenario is simple: open a socket from the client, do some 'blocking' work in a Vert.x worker verticle, and when it finishes, respond to the client with the answer (via the open socket).
Please tell me if I am doing it right:
I created VertxWebsocketServerVerticle. As soon as the websocket is opened and a request comes in from the client, I use the eventBus to pass the message to
EventBusReceiverVerticle, where I do the blocking operation.
How do I actually send the response back to VertxWebsocketServerVerticle and from there back to the client?
code:
Main class:
public static void main(String[] args) throws InterruptedException {
Vertx vertx = Vertx.vertx();
vertx.deployVerticle(new EventBusReceiverVerticle("R1"),new DeploymentOptions().setWorker(true));
vertx.deployVerticle(new VertxWebsocketServerVerticle());
}
VertxWebsocketServerVerticle:
public class VertxWebsocketServerVerticle extends AbstractVerticle {
public void start() {
vertx.createHttpServer().websocketHandler(webSocketHandler -> {
System.out.println("Connected!");
Buffer buff = Buffer.buffer().appendInt(12).appendString("foo");
webSocketHandler.writeFinalBinaryFrame(buff);
webSocketHandler.handler(buffer -> {
String inputString = buffer.getString(0, buffer.length());
System.out.println("inputString=" + inputString);
vertx.executeBlocking(future -> {
vertx.eventBus().send("anAddress", inputString, event -> System.out.printf("got back from reply"));
future.complete();
}, res -> {
if (res.succeeded()) {
webSocketHandler.writeFinalTextFrame("output=" + inputString + "_result");
}
});
});
}).listen(8080);
}
@Override
public void stop() throws Exception {
super.stop();
}
}
EventBusReceiverVerticle :
public class EventBusReceiverVerticle extends AbstractVerticle {
private String name = null;
public EventBusReceiverVerticle(String name) {
this.name = name;
}
public void start(Future<Void> startFuture) {
vertx.eventBus().consumer("anAddress", message -> {
System.out.println(this.name +
" received message: " +
message.body());
try {
//doing some looong work..
Thread.sleep(10000);
System.out.printf("finished waiting\n");
startFuture.complete();
} catch (InterruptedException e) {
e.printStackTrace();
}
});
}
}
I always get:
WARNING: Message reply handler timed out as no reply was received - it will be removed
github project at: https://github.com/IdanFridman/VertxAndWebSockets
thank you,
ray.
Since you block your websocket handler until it receives a reply to the message sent over the EventBus, and that reply is never actually sent (the consumer sleeps for 10 s and never calls message.reply()), you will certainly get this warning: the event bus reply handler times out because a message was sent but no response was received before the timeout delay.
Actually I don't know whether you are just experimenting with the Vert.x toolkit or trying to fulfil some requirement, but you certainly have to adapt your code to the Vert.x spirit:
First, you should not block in your websocket handler until a message is received; keep in mind that everything is asynchronous when it comes to Vert.x.
Second, in order to wait for some time, use the Vert.x way and not Thread.sleep(delay), i.e. vertx.setTimer(...).
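Putting both points together, here is a minimal sketch of the consumer (my own adaptation, not from the original post): reply to the event bus message from a vertx.setTimer() callback, so nothing blocks and the websocket side receives the result in its reply handler:
// inside EventBusReceiverVerticle.start(); never block the event loop
vertx.eventBus().consumer("anAddress", message -> {
    System.out.println(name + " received message: " + message.body());
    // simulate the long work with a timer instead of Thread.sleep(10000)
    vertx.setTimer(10000, timerId -> {
        // runs ~10 s later; the event loop stayed free in the meantime
        message.reply(message.body() + "_result");
    });
});
On the websocket side, the reply then arrives in the handler passed to eventBus().send(...); that handler, rather than the executeBlocking result handler, is the natural place to call webSocketHandler.writeFinalTextFrame(...).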
I have created a fairly straightforward server using Netty 4. I have been able to scale it up to handle several thousand connections and it never climbs above ~40 threads.
In order to test it out, I have also created a test client that creates thousands of connections. Unfortunately this creates as many threads as it makes connections. I was hoping to minimize threads for the clients. I have looked at many posts on this; many examples show single-connection setup. This and this say to share NioEventLoopGroup across clients, which I do. I get a limited number of NioEventLoopGroup threads, but a thread per connection elsewhere. I am not purposely creating threads in the pipeline and don't see what could be doing so.
Here is a snippet from the setup of my client code. It seems that it should maintain a fixed thread count based on what I've researched so far. Is there something I'm missing that I should be doing to prevent a thread per client connection?
Main
final EventLoopGroup group = new NioEventLoopGroup();
for (int i=0; i<100; i++)){
MockClient client = new MockClient(i, group);
client.connect();
}
MockClient
public class MockClient implements Runnable {
private final EventLoopGroup group;
private int identity;
public MockClient(int identity, final EventLoopGroup group) {
this.identity = identity;
this.group = group;
}
@Override
public void run() {
try {
connect();
} catch (Exception e) {}
}
public void connect() throws Exception{
Bootstrap b = new Bootstrap();
b.group(group)
.channel(NioSocketChannel.class)
.handler(new MockClientInitializer(identity, this));
final Runnable that = this;
// Start the connection attempt
b.connect(config.getHost(), config.getPort()).addListener(new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture future) throws Exception {
if (future.isSuccess()) {
Channel ch = future.sync().channel();
} else {
//if the server is down, try again in a few seconds
future.channel().eventLoop().schedule(that, 15, TimeUnit.SECONDS);
}
}
});
}
}
As has happened to me many times before, explaining the problem in detail made me think about it more, and I came across the issue. I wanted to provide it here in case anyone else comes across the same issue when creating thousands of Netty clients.
I have one path in my pipeline that creates a timeout task to simulate a client connection rebooting. It turns out it was this timer task that was creating the extra threads: each time a connection received a 'reboot' signal from the server (which happens every so often), another timer thread appeared, up until there was a thread per connection.
Handler
private final HashedWheelTimer timer;
@Override
protected void channelRead0(ChannelHandlerContext ctx, Packet msg) throws Exception {
Packet packet = reboot();
ChannelFutureListener closeHandler = new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture future) throws Exception {
RebootTimeoutTask timeoutTask = new RebootTimeoutTask(identity, client);
timer.newTimeout(timeoutTask, SECONDS_FOR_REBOOT, TimeUnit.SECONDS);
}
};
ctx.writeAndFlush(packet).addListener(new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture future) throws Exception {
if (future.isSuccess()) {
future.channel().close().addListener(closeHandler);
} else {
future.channel().close();
}
}
});
}
Timeout Task
public class RebootTimeoutTask implements TimerTask {
public RebootTimeoutTask(...) {...}
@Override
public void run(Timeout timeout) throws Exception {
client.connect(identity);
}
}
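If the handler above constructs its own HashedWheelTimer per connection, that alone explains the thread growth, because each HashedWheelTimer instance runs its own worker thread. A sketch of the fix (my own; the exact change isn't shown in the post) is to create the timer once and share it:
import io.netty.util.HashedWheelTimer;

// one HashedWheelTimer owns exactly one worker thread, so create a single
// instance and share it across all connections
public final class SharedTimer {
    public static final HashedWheelTimer TIMER = new HashedWheelTimer();
    private SharedTimer() {}
}
The handler then calls SharedTimer.TIMER.newTimeout(timeoutTask, SECONDS_FOR_REBOOT, TimeUnit.SECONDS), and the thread count stays fixed no matter how many connections reboot.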
I am using Undertow to create a simple application.
public class App {
public static void main(String[] args) {
Undertow server = Undertow.builder().addListener(8080, "localhost")
.setHandler(new HttpHandler() {
public void handleRequest(HttpServerExchange exchange) throws Exception {
Thread.sleep(5000);
exchange.getResponseHeaders().put(Headers.CONTENT_TYPE, "text/plain");
exchange.getResponseSender().send("Hello World");
}
}).build();
server.start();
}
}
I open a browser tab on localhost:8080, and then a second tab, also on localhost:8080.
This time the first tab waits 5 seconds, and the second waits 10 seconds.
Why is that?
The HttpHandler is executing in an I/O thread. As noted in the documentation:
IO threads perform non blocking tasks, and should never perform blocking operations because they are responsible for multiple connections, so while the operation is blocking other connections will essentially hang. One IO thread per CPU core is a reasonable default.
The request lifecycle docs discuss how to dispatch a request to a worker thread:
import io.undertow.Undertow;
import io.undertow.server.*;
import io.undertow.util.Headers;
public class Under {
public static void main(String[] args) {
Undertow server = Undertow.builder()
.addListener(8080, "localhost")
.setHandler(new HttpHandler() {
public void handleRequest(HttpServerExchange exchange)
throws Exception {
if (exchange.isInIoThread()) {
exchange.dispatch(this);
return;
}
exchange.getResponseHeaders()
.put(Headers.CONTENT_TYPE, "text/plain");
exchange.getResponseSender()
.send("Hello World");
}
})
.build();
server.start();
}
}
Note that you won't necessarily get one worker thread per request: when I set a breakpoint on the header put, I got roughly one thread per client. There are gaps in both the Undertow and the underlying XNIO docs, so I'm not sure what the intention is.
Undertow uses NIO, which means that a small number of IO threads (by default, one per CPU core) handles all the requests. If you want to do blocking operations in your request handler, you have to dispatch the operation to a worker thread.
In your example, you put the IO thread to sleep, which means that all request handling on that thread is put to sleep.
However, even if you dispatched the operation to a worker thread and put that to sleep, you would still see the blocking behaviour you describe. This is because you open the same URL in several tabs of the same browser. Browsers have an internal blocking of their own: if you open the same URL in different tabs, the second request only starts after the first has finished. Try any URL you like to see for yourself. This browser behaviour can easily be confusing.
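To take the browser out of the picture, a small test (my own sketch, assuming the server from the question is listening on localhost:8080) can fire two genuinely concurrent requests:
import java.net.HttpURLConnection;
import java.net.URL;

public class TwoRequests {
    public static void main(String[] args) throws Exception {
        Runnable get = () -> {
            try {
                long start = System.currentTimeMillis();
                HttpURLConnection conn = (HttpURLConnection)
                        new URL("http://localhost:8080/").openConnection();
                conn.getInputStream().readAllBytes(); // wait for the full response
                System.out.println(Thread.currentThread().getName() + " took "
                        + (System.currentTimeMillis() - start) + " ms");
            } catch (Exception e) {
                e.printStackTrace();
            }
        };
        Thread t1 = new Thread(get, "req-1");
        Thread t2 = new Thread(get, "req-2");
        t1.start();
        t2.start();
        t1.join();
        t2.join();
    }
}
With the sleeping IO-thread handler, the two requests should serialize (roughly 5 s and 10 s); once the work is dispatched to worker threads, both should finish in about 5 s.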
The easiest thing to do would be to wrap your handler in a BlockingHandler.
import io.undertow.Undertow;
import io.undertow.server.*;
import io.undertow.server.handlers.BlockingHandler;
import io.undertow.util.Headers;
public class Under {
public static void main(String[] args) {
Undertow server = Undertow.builder()
.addHttpListener(8080, "localhost")
.setHandler(new BlockingHandler(new HttpHandler() {
public void handleRequest(HttpServerExchange exchange)
throws Exception {
exchange.getResponseHeaders()
.put(Headers.CONTENT_TYPE, "text/plain");
exchange.getResponseSender()
.send("Hello World");
}
})).build();
server.start();
}
}