Vert.x: unit test a Verticle that does not implement the start method with a Future (Java)

I'm new to Vert.x and just stumbled about a problem.
I've the following Verticle:
public class HelloVerticle extends AbstractVerticle {

    @Override
    public void start() throws Exception {
        String greetingName = config().getString("greetingName", "Welt");
        String greetingNameEnv = System.getenv("GREETING_NAME");
        String greetingNameProp = System.getProperty("greetingName");

        Router router = Router.router(vertx);
        router.get("/hska").handler(routingContext -> {
            routingContext.response().end(String.format("Hallo %s!", greetingName));
        });
        router.get().handler(routingContext -> {
            routingContext.response().end("Hallo Welt");
        });

        vertx
            .createHttpServer()
            .requestHandler(router::accept)
            .listen(8080);
    }
}
I want to unit test this verticle, but I don't know how to wait for the verticle to be deployed.
@Before
public void setup(TestContext context) throws InterruptedException {
    vertx = Vertx.vertx();
    JsonObject config = new JsonObject().put("greetingName", "Unit Test");
    vertx.deployVerticle(HelloVerticle.class.getName(), new DeploymentOptions().setConfig(config));
}
When I set up my test like this, I have to add a Thread.sleep after the deploy call so that the tests only execute after waiting some time for the verticle.
I heard about Awaitility and that it should be possible to wait for the verticle to be deployed with this library, but I didn't find any examples of how to use Awaitility with vertx-unit and the deployVerticle method.
Could anyone shed some light on this?
Or do I really have to hardcode a sleep timer after calling the deployVerticle method in my tests?
Have a look at the comments of the accepted answer.

First of all, you need to implement start(Future future) instead of just start(). Then you need to add a callback handler (Handler<AsyncResult<HttpServer>> listenHandler) to the listen(...) call, which then completes the Future you got via start(Future future).
Vert.x is highly asynchronous, and so is the start of a Vert.x HTTP server. In your case, the Verticle is only fully functional once the HTTP server has been started successfully. Therefore, you need to implement the changes mentioned above.
Second, you need to tell the TestContext that the asynchronous deployment of your Verticle is done. This can be done via another callback handler (Handler<AsyncResult<String>> completionHandler) passed to deployVerticle.
The deployment of a Verticle is always asynchronous, even if you implemented the plain start() method. So you should always use a completionHandler if you want to be sure that your Verticle was deployed successfully before the test runs.
So no, you don't need to, and you definitely shouldn't, hardcode a sleep timer in any of your Vert.x applications. Mind The Golden Rule: Don't Block the Event Loop.
Edit:
If the initialisation of your Verticle is synchronous, you should override the plain start() method, as mentioned in the docs:
If your verticle does a simple, synchronous start-up then override this method and put your start-up code in there.
If the initialisation of your Verticle is asynchronous (e.g. starting a Vert.x HTTP server), you should override start(Future future) and complete the Future when your asynchronous initialisation is finished.
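Putting those pieces together, a minimal sketch of the reworked Verticle might look like this (assuming Vert.x 3.x; the routes are taken from the question):

```java
import io.vertx.core.AbstractVerticle;
import io.vertx.core.Future;
import io.vertx.ext.web.Router;

public class HelloVerticle extends AbstractVerticle {

    @Override
    public void start(Future<Void> startFuture) throws Exception {
        String greetingName = config().getString("greetingName", "Welt");

        Router router = Router.router(vertx);
        router.get("/hska").handler(ctx ->
                ctx.response().end(String.format("Hallo %s!", greetingName)));
        router.get().handler(ctx -> ctx.response().end("Hallo Welt"));

        vertx.createHttpServer()
                .requestHandler(router::accept)
                .listen(8080, result -> {
                    // complete the start Future only once the server is actually up
                    if (result.succeeded()) {
                        startFuture.complete();
                    } else {
                        startFuture.fail(result.cause());
                    }
                });
    }
}
```

On the test side, the completionHandler is just the third argument to deployVerticle; with vertx-unit you can pass context.asyncAssertSuccess(), which makes setup wait for the deployment and fails the test if it doesn't succeed.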

Related

Does the HTTP server run on a verticle with its own event loop?

I am trying to understand the Vert.x (https://vertx.io/) Verticle system and the event loop thread.
Consider the following code:
public class MyVerticle extends AbstractVerticle {
    public void start() {
        vertx.createHttpServer().requestHandler(req -> {
            req.response()
                .putHeader("content-type", "text/plain")
                .end("Hello from Vert.x!");
        }).listen(8080);
    }
}
The code above creates a new Verticle (MyVerticle) that also owns an event loop thread.
When the HTTP server is created with vertx.createHttpServer(), does it spawn a new Verticle for the HTTP server? If so, the HTTP server would run on its own Verticle with its own event loop thread, and two verticles would be active.
Does the MyVerticle event loop thread execute the registered request handler:
requestHandler(req -> {
    req.response()
        .putHeader("content-type", "text/plain")
        .end("Hello from Vert.x!");
})
If yes, how does MyVerticle receive the events from the HTTP server to run the handler when a request comes in?
It is not clear from the code above how the two verticles communicate with each other. It would be great if someone could clarify this.
Update
I am trying to depict the scenario:
Assume I deploy two instances of the same verticle; then each verticle will have its own event loop, and the HTTP server will be started twice.
When the user sends the first request, it will be processed on Verticle 1, and the second request on Verticle 2. If my assumption is correct, the event loop threads are independent from each other. That means, to me, it is no longer single-threaded.
For example:
public class MyVerticle extends AbstractVerticle {
    int state; // must not be final if the handler is to update it

    public void start() {
        vertx.createHttpServer().requestHandler(req -> {
            state = state + 1;
            req.response()
                .putHeader("content-type", "text/plain")
                .end("Hello from Vert.x!");
        }).listen(8080);
    }
}
When I change the state, do I then have to synchronize between verticles?
I'm pretty sure I am wrong, which means I don't understand the verticle concept yet.
A verticle is a deployment unit associated with an event loop. In your example, the verticle controls the HTTP server (with the listen and close methods). The HTTP server uses the event loop of the verticle that controls it.
When you deploy two instances of the same verticle (setting the number of instances to two in the deployment options), each verticle will have its own event loop and the HTTP server will be started twice. The first verticle that binds the HTTP server triggers the actual server bind operation; the second verticle will instead register its request handler on the server already started by the first verticle (since they use the same port). When the server accepts a new connection, it load-balances the connection across the two verticle instances it is aware of. This is explained in this section of the documentation: https://vertx.io/docs/vertx-core/java/#_server_sharing.
The recommended way for verticles to communicate is the event bus. It provides lightweight, fast, asynchronous message passing between verticles. A shared data structure can also be appropriate, depending on the use case.
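To illustrate the event-bus option, here is a minimal sketch (assuming Vert.x 3.x; the address name is made up):

```java
import io.vertx.core.Vertx;

public class EventBusSketch {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();

        // What verticle A would do: register a consumer on an address
        vertx.eventBus().consumer("greetings.address", message ->
                message.reply("Hello, " + message.body() + "!"));

        // What verticle B would do: send a message and handle the async reply
        vertx.eventBus().send("greetings.address", "world", reply -> {
            if (reply.succeeded()) {
                System.out.println(reply.result().body());
            }
            vertx.close();
        });
    }
}
```

With multiple verticle instances, or a clustered Vert.x, the same API works across event loops and even across JVMs, which is why it is the recommended communication channel between verticles.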

How to make Spring WebFlux wait until a specified condition is met in the server and then return the response

I'm a beginner with Spring WebFlux. I want to have a reactive service, for example named isConfirmed, and this service must wait until another service of my server, for example named confirm, is called. Both services are located in my server, and the first (reactive) service must wait until the second service (confirm) is called and then return the confirmation message. I want no threads in my server to be blocked while waiting for the second service to be called, like an observer pattern. Is this possible with Spring WebFlux?
Update: can we have this feature while the server is using a distributed cache?
I think you could use a CompletableFuture between your two services, something like this:
CompletableFuture<String> future = new CompletableFuture<>();

public Mono<String> isConfirmed() {
    return Mono.fromFuture(future);
}

public void confirm(String confirmation) {
    future.complete(confirmation);
}

Using Java Executor Service In Online Application

I have a piece of functionality in an online application: I need to mail a receipt to the customer after generating the receipt. My problem is that the mail function takes nearly 20 to 30 seconds, and the customer cannot wait that long during an online transaction.
So I used Java's ExecutorService to run the mail service [sendMail] independently and return the response page to the customer whether or not the mail was sent.
Is it right to use an ExecutorService in an online application [HTTP request & response]? Below is my code. Kindly advise.
@RequestMapping(value = "/generateReceipt", method = RequestMethod.GET)
public @ResponseBody ReceiptBean generateReceipt(HttpServletRequest httpRequest, HttpServletResponse httpResponse) {
    // Other codes here
    ...
    ...
    // I need to run the line below independently, since it takes more time,
    // so I commented it out and wrote an executor service instead:
    // mailService.sendMail(httpRequest, httpResponse, receiptBean);
    java.util.concurrent.ExecutorService executorService = java.util.concurrent.Executors.newFixedThreadPool(10);
    executorService.execute(new Runnable() {
        ReceiptBean receiptBean1;
        public void run() {
            mailService.sendMail(httpRequest, httpResponse, receiptBean1);
        }
        public Runnable init(ReceiptBean receiptBean) {
            this.receiptBean1 = receiptBean;
            return this;
        }
    }.init(receiptBean));
    executorService.shutdown();
    return receiptBean;
}
You can do that, although I wouldn't expect this code in a controller class but in a separate one (Separation of Concerns and all).
However, since you seem to be using Spring, you might as well use their scheduling framework.
It is fine to use Executor Service to make an asynchronous mail sending request, but you should try to follow SOLID principles in your design. Let the service layer take care of running the executor task.
https://en.wikipedia.org/wiki/SOLID
I agree with both @daniu and @Ankur regarding the separation of concerns you should follow. So just create a dedicated service like "EmailService" and inject it where needed.
Moreover, since you are already leveraging the Spring framework, you can take advantage of its Async feature.
If you prefer to write your own async code, then I'd suggest using a CompletableFuture instead of the raw ExecutorService, to handle failures better (maybe you want to store unsent messages in a queue to implement a retry feature or some other behaviour).
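A minimal sketch of that CompletableFuture suggestion, with a hypothetical sendMail stand-in for the real mail call:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AsyncMailSketch {
    // Hypothetical stand-in for the slow mailService.sendMail(...) call
    static String sendMail(String receiptId) {
        return "sent " + receiptId;
    }

    public static void main(String[] args) throws Exception {
        // Reuse one pool for the whole application instead of creating one per request
        ExecutorService mailExecutor = Executors.newFixedThreadPool(10);

        CompletableFuture<String> mail = CompletableFuture
            .supplyAsync(() -> sendMail("receipt-123"), mailExecutor)
            .exceptionally(ex -> {
                // Failure handling: e.g. enqueue the receipt for a later retry
                return "failed: " + ex.getMessage();
            });

        // The HTTP response could be returned immediately; join() is here only for the demo
        System.out.println(mail.join());
        mailExecutor.shutdown();
    }
}
```

The pool is created once for the application (e.g. as a Spring bean) rather than per request, and exceptionally() gives you a single place to hook in retry or dead-letter handling.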

WLP's MicroProfile (Fault Tolerance) Timeout implementation does not interrupt threads?

I'm testing WebSphere Liberty's Fault Tolerance (MicroProfile) implementation. For that, I made a simple REST service with a resource that sleeps for 5 seconds:
@Path("client")
public class Client {

    @GET
    @Path("timeout")
    public Response getClientTimeout() throws InterruptedException {
        Thread.sleep(5000);
        return Response.ok().entity("text").build();
    }
}
I call this client within the same application, from another REST service:
@Path("mpfaulttolerance")
@RequestScoped
public class MpFaultToleranceController {

    @GET
    @Path("timeout")
    @Timeout(4)
    public Response getFailingRequest() {
        System.out.println("start");
        // calls the 5-seconds resource; should time out
        Response response = ClientBuilder.newClient().target("http://localhost:9080").path("/resilience/api/client/timeout").request().get();
        System.out.println("hello");
        return response;
    }
}
Now I'd expect the method getFailingRequest() to time out after 4 ms and throw an exception. The actual behaviour is that the application prints "start", waits 5 seconds until the client returns, prints "hello", and then throws an org.eclipse.microprofile.faulttolerance.exceptions.TimeoutException.
I turned on further debug information with
<logging traceSpecification="com.ibm.ws.microprofile.*=all" />
in server.xml. The trace shows that the timeout is registered even before the client is called, but the thread is not interrupted.
(If someone tells me how to format the stack trace nicely in here, I can post it.)
Since this is a very basic example: am I doing anything wrong here? What can I do to make this example run properly?
Thanks
Edit: Running this example on WebSphere Application Server 18.0.0.2 (wlp-1.0.21.cl180220180619-0403) on Java HotSpot(TM) 64-Bit Server VM, version 1.8.0_172-b11 (de_DE), with the features webProfile-8.0, mpFaultTolerance-1.0 and localConnector-1.0.
Edit: Solution, thanks to Andy McCright and Azquelt.
Since the call cannot be interrupted, I have to make it asynchronous. So you get two threads: the first one invokes the second, which makes the call. The first thread gets interrupted; the second keeps running until the call finishes. But now you can go on with failure handling, open the circuit, and so on, to prevent further calls to the broken service.
@Path("mpfaulttolerance")
@RequestScoped
public class MpFaultToleranceController {

    @Inject
    private TestBase test;

    @GET
    @Path("timeout")
    @Timeout(4)
    public Response getFailingRequest() throws InterruptedException, ExecutionException {
        Future<Response> resp = test.createFailingRequestToClientAsynch();
        return resp.get();
    }
}
And the client call:
@ApplicationScoped
public class TestBase {

    @Asynchronous
    public Future<Response> createFailingRequestToClientAsynch() {
        Response response = ClientBuilder.newClient().target("http://localhost:9080").path("/resilience/api/client/timeout").request().get();
        return CompletableFuture.completedFuture(response);
    }
}
It does interrupt threads using Thread.interrupt(), but unfortunately not all Java operations respond to thread interrupts.
Lots of things do respond to interrupts by throwing an InterruptedException (like Thread.sleep(), Object.wait(), Future.get() and subclasses of InterruptibleChannel), but InputStreams and Sockets don't.
I suspect that you (or the library you're using to make the request) are using a Socket, which isn't interruptible, so you don't see your method return early.
It's particularly unintuitive because Liberty's JAX-RS client doesn't respond to thread interrupts, as Andy McCright mentioned. We're aware it's not a great situation, and we're working on making it better.
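A small pure-JDK illustration of the difference (not Liberty-specific): Thread.sleep() notices the interrupt immediately, whereas a blocking socket read would simply keep waiting.

```java
public class InterruptDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            try {
                Thread.sleep(10_000);  // responds to interrupt
                System.out.println("slept full time");
            } catch (InterruptedException e) {
                System.out.println("interrupted early");
            }
        });
        worker.start();
        worker.interrupt();            // what the fault-tolerance timeout attempts
        worker.join();
    }
}
```

A socket read inside the same worker would not throw; the interrupt flag would be set, but the thread would stay blocked until the read returns, which matches the behaviour observed in the question.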
I had the same problem: for some URLs I consume, the Fault Tolerance timeout doesn't work.
In my case I use the MicroProfile Rest Client. I solved my problem using readTimeout() of the RestClientBuilder:
MyRestClientClass myRestClientClass = RestClientBuilder.newBuilder().baseUri(uri).readTimeout(3L, TimeUnit.SECONDS).build(MyRestClientClass.class);
One advantage of using this timeout control is that you can pass the timeout as a parameter.

Business Logic in Netty?

I'm developing a server based on the Netty library, and I'm having a problem with how to structure the application with regard to business logic.
Currently I have the business logic in the last handler, and that's where I access the database. The thing I can't wrap my head around is the latency of accessing the database (blocking code). Is it advisable to do this in the handler, or is there an alternative? Code below:
public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
    super.channelRead(ctx, msg);
    Msg message = (Msg) msg;
    switch (message.messageType) {
        case MType.SIGN_UP:
            userReg.signUp(message.user); // blocking database access
            break;
    }
}
You should execute blocking calls on a DefaultEventExecutorGroup, or on a custom thread pool, which can be specified when the handler is added:
pipeline.addLast(new DefaultEventExecutorGroup(50), "BUSINESS_LOGIC_HANDLER", new BHandler());
or hand individual blocking tasks to an executor:
ctx.executor().execute(new Runnable() {
    @Override
    public void run() {
        // blocking call
    }
});
Your custom handler is initialized by Netty every time the server receives a request, hence one instance of the handler is responsible for handling one client.
So it is perfectly fine to issue blocking calls in your handler. It will not affect other clients as long as you don't block indefinitely (or at least not for a very long time), thereby not blocking Netty's thread for long, and as long as you don't put too much load on your server instance.
However, if you want to go for an asynchronous design, there are more than a few design patterns you can use.
For example, with Netty, if you implement WebSockets, you can make the blocking calls in a separate thread and, when the results are available, push them to the client through the already established WebSocket.
