I am trying to understand the Vert.x (https://vertx.io/) verticle system and the event-loop thread.
Consider the following code:
public class MyVerticle extends AbstractVerticle {
    public void start() {
        vertx.createHttpServer().requestHandler(req -> {
            req.response()
                .putHeader("content-type", "text/plain")
                .end("Hello from Vert.x!");
        }).listen(8080);
    }
}
The code above is going to create a new verticle (MyVerticle) that also owns an event-loop thread.
When the HTTP server is created with vertx.createHttpServer(), does it spawn a new verticle for the HTTP server? If so, the HTTP server would run on its own verticle with its own event-loop thread, and two verticles would be active.
Does the MyVerticle event-loop thread execute the registered request handler:
    requestHandler(req -> {
        req.response()
            .putHeader("content-type", "text/plain")
            .end("Hello from Vert.x!");
    })
If yes, how does MyVerticle receive the events from the HTTP server to run the handler when a request comes in?
It is not clear from the code above how the two verticles would communicate with each other. It would be great if someone could clarify this.
Update
I am trying to depict the scenario:
Assume I deploy two instances of the same verticle; then each verticle will have its own event loop and the HTTP server will be started twice.
When the user sends the first request, it will be processed on verticle 1, and the second request on verticle 2. If my assumption is correct, then the event-loop threads are independent from each other. For me, that means it is no longer single-threaded.
For example:
public class MyVerticle extends AbstractVerticle {
    int state;

    public void start() {
        vertx.createHttpServer().requestHandler(req -> {
            state = state + 1;
            req.response()
                .putHeader("content-type", "text/plain")
                .end("Hello from Vert.x!");
        }).listen(8080);
    }
}
When I change the state, do I then have to synchronize between verticles?
I am pretty sure I am wrong, which means I do not understand the concept of a verticle yet.
A verticle is a deployment unit associated with an event loop. In your example the verticle controls the HTTP server (with the listen and close methods). The HTTP server uses the event loop of the verticle that controls it.
When you deploy two instances of the same verticle (setting the deployment options to two instances), each verticle will have its own event loop and the HTTP server will be started twice. The first verticle that binds the HTTP server triggers the actual server bind operation; the second verticle will instead register its request handler on the server already started by the first verticle (since they use the same port). When the server accepts a new connection, it load-balances the connection over the verticle instances it is aware of. This is explained in this section of the documentation: https://vertx.io/docs/vertx-core/java/#_server_sharing.
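A minimal sketch of such a deployment (Vert.x 3.x API; the verticle class name is just a placeholder):
    // Deploy two instances of the same verticle: each instance gets its own
    // event loop, and the shared HTTP server load-balances connections over them.
    vertx.deployVerticle("com.example.MyVerticle",
            new DeploymentOptions().setInstances(2));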
The recommended way for verticles to communicate is the event bus. It provides lightweight, fast, asynchronous message passing between verticles. A shared data structure can also be appropriate, depending on the use case.
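For example, a minimal event-bus sketch (Vert.x 3.x API; the address string "my.address" is arbitrary):
    // In one verticle: register a consumer on an address.
    vertx.eventBus().consumer("my.address", message -> {
        System.out.println("Received: " + message.body());
        message.reply("pong");
    });

    // In another verticle: send a message to that address and handle the reply.
    vertx.eventBus().send("my.address", "ping", reply -> {
        if (reply.succeeded()) {
            System.out.println("Reply: " + reply.result().body());
        }
    });
Because each consumer runs on its own verticle's event loop, the event bus lets verticles cooperate without sharing mutable state, which also answers the synchronization question above: keep the state inside one verticle and send messages to it.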
Related
I'm a starter in Spring WebFlux. I want to have a reactive service, for example named isConfirmed, and this service must wait until another service of my server, for example named confirm, is called. Both services are located in my server, and the first reactive service must wait until the second service (confirm) is called and then return the confirmation message. I want no threads to be blocked in my server until the second service is called, like an observer pattern. Is this possible with Spring WebFlux?
Update: can we have this feature while the server is using a distributed cache?
I think you could use a CompletableFuture between your two services, something like this:
    // Shared between the two services; completes when confirm(...) is called.
    final CompletableFuture<String> future = new CompletableFuture<>();

    public Mono<String> isConfirmed() {
        // No thread blocks here; the Mono completes when the future completes.
        return Mono.fromFuture(future);
    }

    public void confirm(String confirmation) {
        future.complete(confirmation);
    }
I'm new to Vert.x and just stumbled upon a problem.
I have the following verticle:
public class HelloVerticle extends AbstractVerticle {

    @Override
    public void start() throws Exception {
        String greetingName = config().getString("greetingName", "Welt");
        String greetingNameEnv = System.getenv("GREETING_NAME");
        String greetingNameProp = System.getProperty("greetingName");

        Router router = Router.router(vertx);
        router.get("/hska").handler(routingContext -> {
            routingContext.response().end(String.format("Hallo %s!", greetingName));
        });
        router.get().handler(routingContext -> {
            routingContext.response().end("Hallo Welt");
        });

        vertx
            .createHttpServer()
            .requestHandler(router::accept)
            .listen(8080);
    }
}
I want to unit test this verticle, but I don't know how to wait for the verticle to be deployed.
@Before
public void setup(TestContext context) throws InterruptedException {
    vertx = Vertx.vertx();
    JsonObject config = new JsonObject().put("greetingName", "Unit Test");
    vertx.deployVerticle(HelloVerticle.class.getName(), new DeploymentOptions().setConfig(config));
}
When I set up my test like this, I have to add a Thread.sleep after the deploy call to make the tests execute only after some time of waiting for the verticle.
I heard about Awaitility and that it should be possible to wait for the verticle to be deployed with this library, but I didn't find any examples of how to use Awaitility with vertx-unit and the deployVerticle method.
Could anyone bring some light into this?
Or do I really have to hardcode a sleep timer after calling the deployVerticle method in my tests?
Have a look at the comments of the accepted answer.
First of all, you need to implement start(Future future) instead of just start(). Then you need to add a callback handler (Handler<AsyncResult<HttpServer>> listenHandler) to the listen(...) call, which then resolves the Future you got via start(Future future).
Vert.x is highly asynchronous, and so is the start of a Vert.x HTTP server. In your case, the verticle is only fully functional once the HTTP server has been started successfully. Therefore, you need to implement the steps mentioned above.
Second, you need to tell the TestContext that the asynchronous deployment of your verticle is done. This can be done via another callback handler (Handler<AsyncResult<String>> completionHandler); there is a blog post that shows how to do that.
The deployment of a verticle is always asynchronous, even if you implemented the plain start() method. So you should always use a completionHandler if you want to be sure that your verticle was deployed successfully before the test runs.
So, no, you don't need to, and you definitely shouldn't, hardcode a sleep timer in any of your Vert.x applications. Mind The Golden Rule: Don't Block the Event Loop.
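A sketch of the test-side part, using vertx-unit's built-in completion handler:
    @Before
    public void setup(TestContext context) {
        vertx = Vertx.vertx();
        JsonObject config = new JsonObject().put("greetingName", "Unit Test");
        // asyncAssertSuccess() hands deployVerticle a completionHandler: the
        // test continues only after deployment succeeded, and fails immediately
        // if the deployment fails. No sleep timer needed.
        vertx.deployVerticle(HelloVerticle.class.getName(),
                new DeploymentOptions().setConfig(config),
                context.asyncAssertSuccess());
    }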
Edit:
If the initialisation of your verticle is synchronous, you should override the plain start() method, as mentioned in the docs:
If your verticle does a simple, synchronous start-up then override this method and put your start-up code in there.
If the initialisation of your verticle is asynchronous (e.g. starting a Vert.x HTTP server), you should override start(Future future) and complete the Future when your asynchronous initialisation is finished.
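Applied to your verticle, a minimal sketch of the verticle-side part (Vert.x 3.x Future API):
    @Override
    public void start(Future<Void> startFuture) throws Exception {
        Router router = Router.router(vertx);
        // ... register the routes exactly as in your original start() ...

        vertx.createHttpServer()
                .requestHandler(router::accept)
                .listen(8080, result -> {
                    // Complete the start Future only once the server actually
                    // listens; fail it if the port could not be bound.
                    if (result.succeeded()) {
                        startFuture.complete();
                    } else {
                        startFuture.fail(result.cause());
                    }
                });
    }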
I am analyzing some Jersey 2.0 code and I have a question on how the following method works:
@Stateless
@Path("/mycoolstuff")
public class MyEjbResource {
    …

    @GET
    @Asynchronous // does this mean the method executes on a child thread?
    public void longRunningOperation(@Suspended AsyncResponse ar) {
        final String result = executeLongRunningOperation();
        ar.resume(result);
    }

    private String executeLongRunningOperation() { … }
}
Let's say I'm at a web browser and I type in www.mysite/mycoolstuff.
This will execute the method, but I'm not understanding what the AsyncResponse is used for, nor the @Asynchronous annotation. From the browser, how would I notice it's asynchronous? What would be the difference if the annotation were removed? Also, after reading the documentation, I'm not clear on the purpose of the @Suspended annotation.
Is the @Asynchronous annotation simply telling the program to execute this method on a new thread? Is it a convenience for doing new Thread(...)?
Update: this annotation relieves the server from hanging onto the request-processing thread, so throughput can be better. Anyway, from the official docs:
Request processing on the server works by default in a synchronous processing mode, which means that a client connection of a request is processed in a single I/O container thread. Once the thread processing the request returns to the I/O container, the container can safely assume that the request processing is finished and that the client connection can be safely released including all the resources associated with the connection. This model is typically sufficient for processing of requests for which the processing resource method execution takes a relatively short time. However, in cases where a resource method execution is known to take a long time to compute the result, server-side asynchronous processing model should be used. In this model, the association between a request processing thread and client connection is broken. I/O container that handles incoming request may no longer assume that a client connection can be safely closed when a request processing thread returns. Instead a facility for explicitly suspending, resuming and closing client connections needs to be exposed. Note that the use of server-side asynchronous processing model will not improve the request processing time perceived by the client. It will however increase the throughput of the server, by releasing the initial request processing thread back to the I/O container while the request may still be waiting in a queue for processing or the processing may still be running on another dedicated thread. The released I/O container thread can be used to accept and process new incoming request connections.
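To make this concrete, here is a hedged sketch of the manual equivalent of what the EJB container does for an @Asynchronous method (the ExecutorService named executor is an assumption, not part of the original code):
    @GET
    public void longRunningOperation(@Suspended final AsyncResponse ar) {
        // Hand the work to another thread; the I/O container thread returns
        // immediately while the client connection stays suspended.
        executor.submit(() -> {
            String result = executeLongRunningOperation();
            // resume() sends the response and releases the suspended connection.
            ar.resume(result);
        });
    }
From the browser you would notice no difference; the benefit is server-side throughput, as the quoted docs explain.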
@Suspended only makes a difference if you actually use it; otherwise it changes nothing.
Let's talk about its benefits:
@Suspended will pause/suspend the current request until it gets a response; by default no suspend timeout is set (@NO_TIMEOUT). So it does not by itself mean that your request-processing (I/O) thread will become free and available for other requests.
Now assume you want your service to respond within some specific time, but the method you are calling from the resource does not guarantee a response time. How will you manage your service's response time then? In that case, you can set a suspend timeout for your service using @Suspended, and even provide a fallback response when the time is exceeded.
Below is a code sample for setting the suspend/pause timeout:
    public void longRunningOperation(@Suspended AsyncResponse ar) {
        // customHandler is a TimeoutHandler you provide; see the sketch below.
        ar.setTimeoutHandler(customHandler);
        ar.setTimeout(10, TimeUnit.SECONDS);
        final String result = executeLongRunningOperation();
        ar.resume(result);
    }
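What such a customHandler could look like (the status code and fallback message are just examples):
    // Resume the suspended response with a fallback when the timeout fires.
    TimeoutHandler customHandler = asyncResponse ->
            asyncResponse.resume(Response.status(Response.Status.SERVICE_UNAVAILABLE)
                    .entity("Operation timed out, please retry later")
                    .build());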
For more details, refer to this.
The @Suspended annotation is added before an AsyncResponse parameter on the resource method to tell the underlying web server not to expect this thread to return a response for the remote caller:
    @POST
    public void asyncPost(@Suspended final AsyncResponse ar, ... <args>) {
        someAsyncMethodInYourServer(<args>, new AsyncMethodCallback() {
            @Override
            void completed(<results>) {
                ar.resume(Response.ok(<results>).build());
            }

            @Override
            void failed(Throwable t) {
                ar.resume(t);
            }
        });
    }
Rather, the AsyncResponse object is used by the thread that calls completed or failed on the callback object to return an 'ok' result or an error to the client.
Consider using such asynchronous resources in conjunction with an async Jersey client. If you're trying to implement a REST service that exposes a fundamentally async API, these patterns allow you to project the async API through the REST interface.
We don't create async interfaces because we have a process that takes a long time (minutes or hours) to run, but rather because we don't want our threads to ever sleep. We send the request and register a callback handler to be called later when the result is ready, from milliseconds to seconds later. In a synchronous interface, the calling thread would be sleeping during that time rather than doing something useful. One of the fastest web servers ever written is single-threaded and completely asynchronous: that thread never sleeps, and because there is only one thread, there is no context switching going on under the covers (at least within that process).
The @Suspended annotation makes the caller actually wait until your work is done. Let's say you have a lot of work to do on another thread. When you use Jersey's @Suspended, the caller just sits there and waits (so on a web browser they just see a spinner) until your AsyncResponse object returns data to it.
Imagine you had a really long operation to do and you want to do it on another thread (or multiple threads). Now we can have the user wait until we are done. Don't forget that in Jersey you'll need to add <async-supported>true</async-supported> to the Jersey servlet definition in web.xml to get it to work.
I'm currently trying to build a TCP server with Netty. The server should then be part of my main program.
My application needs to send messages to the connected clients. I know I can keep track of the channels using a ConcurrentHashMap or a ChannelGroup inside a handler. To not block my application, the server itself has to run in a separate thread. From my point of view, the corresponding run method would look like this:
public class Server implements Runnable {

    @Override
    public void run() {
        EventLoopGroup bossEventGroup = new NioEventLoopGroup();
        EventLoopGroup workerEventGroup = new NioEventLoopGroup();
        try {
            ServerBootstrap bootstrap = new ServerBootstrap();
            bootstrap
                .group(bossEventGroup, workerEventGroup)
                .channel(NioServerSocketChannel.class)
                .childHandler(new MyServerInitializer());
            ChannelFuture future = bootstrap.bind(8080).sync().channel().closeFuture().sync();
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            workerEventGroup.shutdownGracefully();
            bossEventGroup.shutdownGracefully();
        }
    }
}
But now I have no idea how to integrate e.g. a sendMessage(Message message) method which can be used by my main application. I believe the function itself has to be defined in the handler to have access to the stored connected channels. But can someone give me an idea how to make such a function usable from the outside? Do I have to implement some sort of message queue which is checked in a loop after the bind? I could imagine that the method invocation would then look like this:
ServerHandlerTest t = (ServerHandlerTest) future.channel().pipeline().last();
if (newMessageInQueue) {
    t.sendMessage(...);
}
Maybe someone is able to explain to me what the preferred implementation for this use case is.
I would create your own application handler to manage the business behavior within your own Netty handler, because that is where the main (event-based) logic lives.
Your own (last) handler takes care of all your application behavior, such that each client is served correctly, directly within the handler, using the ChannelHandlerContext ctx.
Of course, you can still think of a particular application handler that would do something like this:
Creation of the handler (in the pipeline creation within MyServerInitializer) initiates the handler to look for a message queue to send from.
It then polls the message queue and sends to the right client, using a hash map.
But I believe that is far more complicated (which queue for which client, or a global queue; how to handle the queue without blocking the server thread, which you must not do; ...).
Moreover, regarding the sendMessage method: do you mean the write (or writeAndFlush) method? A sketch of a simpler alternative follows below.
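For example, a minimal sketch of a handler built around a ChannelGroup (Netty 4 API; the class name BroadcastHandler is made up for illustration):
    // Sharable, so the same instance can be added to every client pipeline.
    @ChannelHandler.Sharable
    public class BroadcastHandler extends ChannelInboundHandlerAdapter {

        // Thread-safe set of connected channels; channels that close are
        // removed from the group automatically.
        private final ChannelGroup channels =
                new DefaultChannelGroup(GlobalEventExecutor.INSTANCE);

        @Override
        public void channelActive(ChannelHandlerContext ctx) {
            channels.add(ctx.channel());
            ctx.fireChannelActive();
        }

        // Callable from the main application thread: Netty's write methods
        // may safely be invoked from outside the event loop.
        public void sendMessage(Object message) {
            channels.writeAndFlush(message);
        }
    }
You would create this handler once, register the instance in MyServerInitializer, and keep a reference to it in your main program, instead of fishing it out of the pipeline as in your snippet.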
I'm developing a server based on the Netty library and I'm having a problem with how to structure the application with regard to the business logic.
Currently I have the business logic in the last handler, and that's where I access the database. The thing I can't wrap my head around is the latency of accessing the database (blocking code). Is it advisable to do it in the handler, or is there an alternative? Code below:
public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
    super.channelRead(ctx, msg);
    Msg message = (Msg) msg;
    switch (message.messageType) {
        case MType.SIGN_UP:
            userReg.signUp(message.user); // blocking database access
            break;
    }
}
You should execute the blocking calls in a DefaultEventExecutorGroup, or in a custom thread pool of your own. The group can be specified when the handler is added:
    pipeline.addLast(new DefaultEventExecutorGroup(50), "BUSINESS_LOGIC_HANDLER", new BHandler());
Alternatively, you can submit the blocking work to an executor explicitly:
    ctx.executor().execute(new Runnable() {
        @Override
        public void run() {
            // blocking call
        }
    });
Your custom handler is initialized by Netty every time the server accepts a new connection, hence one instance of the handler is responsible for handling one client.
So it is perfectly fine to issue blocking calls in your handler. It will not affect other clients, as long as you don't block indefinitely (or at least not for a very long time), thereby not blocking Netty's thread for long, and as long as you do not put too much load on your server instance.
However, if you want to go for an asynchronous design, there are more than a few design patterns that you can use.
For example, with Netty, if you implement WebSockets, then perhaps you can make the blocking calls in a separate thread and, when the results are available, push them to the client through the WebSocket already established.
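As a hedged sketch combining the two suggestions above (the dedicated pool, its size, and the write-back are illustrative assumptions, not a fixed recipe):
    // A dedicated pool for blocking work, kept off the event loop
    // (the size of 50 mirrors the snippet above and is just an example).
    private static final EventExecutorGroup blockingGroup = new DefaultEventExecutorGroup(50);

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
        Msg message = (Msg) msg;
        switch (message.messageType) {
            case MType.SIGN_UP:
                blockingGroup.execute(() -> {
                    // The blocking database access now runs on a pool thread.
                    userReg.signUp(message.user);
                    // When done, push a result to the client; writeAndFlush
                    // is safe to call from any thread, e.g.:
                    // ctx.writeAndFlush(signUpResponse);
                });
                break;
        }
    }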