Spring Boot application does not end when using an HttpClient with sendAsync and a newFixedThreadPool - java

When using an HttpClient (java.net.http.HttpClient) to make some GET requests, I want to use a fixed thread pool.
Therefore I wrote a Spring Boot application with this method (to test it):
public class AppStartupRunner implements ApplicationRunner {
    // ....
    private ExecutorService executorService = Executors.newFixedThreadPool(10);

    @Override
    public void run(ApplicationArguments args) throws Exception {
        HttpClient httpClient = HttpClient.newBuilder().executor(executorService).build();
        //HttpClient httpClient = HttpClient.newBuilder().build();
        String url = "https://<url>";
        try {
            httpClient
                .sendAsync(HttpRequest.newBuilder().uri(URI.create(url)).GET().build(), BodyHandlers.ofString())
                .thenApply(HttpResponse::body).get();
        } catch (Exception e) {
            e.printStackTrace();
        }
        System.out.println("Out");
    }
    // ....
}
My problem is that when I run this application, the HTTP call is made and I receive my data. I also checked with the isDone() method, and it confirms that the request is completed. But the application does not end; I think the main thread does, but there is still one (or more?) threads hanging/waiting.
When I replace the httpClient with the one that does not use my executorService, the application ends correctly.
One more point: when I let the HttpClient use the executorService and at the end explicitly shut down the executorService (executorService.shutdown();), the application ends correctly, too. However, I am not sure whether this is the right solution or just hides a problem, because the 10 threads from the executorService are always running (whether the httpClient uses them or not), so I do not see why I should have to shut the executorService down when the httpClient uses it but not otherwise...

The proper way is to call executorService.shutdown() after confirming all submitted tasks are done. Until shutdown() is called, the 10 threads stay alive and waiting, which is what keeps the JVM from exiting.
Bear in mind that after calling executorService.shutdown(), no new tasks can be submitted to the service, which in your case I think is safe, as a new ExecutorService is built every time your application runs.
You can read further in documentation: https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/ExecutorService.html
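For illustration, a minimal sketch of that pattern, reusing the fields from the question's AppStartupRunner (the url is still a placeholder); the shutdown() call is what lets the worker threads, and with them the JVM, terminate:
HttpClient httpClient = HttpClient.newBuilder().executor(executorService).build();
try {
    String body = httpClient
            .sendAsync(HttpRequest.newBuilder().uri(URI.create(url)).GET().build(), BodyHandlers.ofString())
            .thenApply(HttpResponse::body)
            .get();
    System.out.println(body);
} finally {
    // no new tasks can be submitted after this; the idle worker threads terminate
    executorService.shutdown();
}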

Related

WLP's MicroProfile (FaultTolerance) Timeout implementation does not interrupt threads?

I'm testing WebSphere Liberty's fault tolerance (MicroProfile) implementation. Therefore I made a simple REST service with a resource which sleeps for 5 seconds:
#Path("client")
public class Client {
#GET
#Path("timeout")
public Response getClientTimeout() throws InterruptedException {
Thread.sleep(5000);
return Response.ok().entity("text").build();
}
}
I call this resource from another REST service within the same application:
#Path("mpfaulttolerance")
#RequestScoped
public class MpFaultToleranceController {
#GET
#Path("timeout")
#Timeout(4)
public Response getFailingRequest() {
System.out.println("start");
// calls the 5 seconds-ressource; should time out
Response response = ClientBuilder.newClient().target("http://localhost:9080").path("/resilience/api/client/timeout").request().get();
System.out.println("hello");
}
}
Now I'd expect that the method getFailingRequest() would time out after 4 ms and throw an exception. The actual behaviour is that the application prints "start", waits 5 seconds until the client returns, prints "hello" and then throws an "org.eclipse.microprofile.faulttolerance.exceptions.TimeoutException".
I turned on further debug output with
<logging traceSpecification="com.ibm.ws.microprofile.*=all" />
in server.xml. The trace shows that the timeout is registered even before the client is called, but the thread is not interrupted.
(If someone tells me how to format the stack trace nicely here, I can add it.)
Since this is a very basic example: am I doing anything wrong here? What can I do to make this example run properly?
Thanks
Edit: Running this example on WebSphere Application Server 18.0.0.2 (wlp-1.0.21.cl180220180619-0403) on Java HotSpot(TM) 64-Bit Server VM, version 1.8.0_172-b11 (de_DE), with the features webProfile-8.0, mpFaultTolerance-1.0 and localConnector-1.0.
Edit: Solution, thanks to Andy McCright and Azquelt.
Since the call cannot be interrupted, I have to make it asynchronous. So there are two threads: the first one invokes the second, which makes the actual call. The first thread gets interrupted by the timeout; the second keeps running until the call finishes. But now you can go on with failure handling, open the circuit, and so on, to prevent further calls to the broken service.
#Path("mpfaulttolerance")
#RequestScoped
public class MpFaultToleranceController {
#Inject
private TestBase test;
#GET
#Path("timeout")
#Timeout(4)
public Response getFailingRequest() throws InterruptedException, ExecutionException {
Future<Response> resp = test.createFailingRequestToClientAsynch();
return resp.get();
}
}
And the client call:
@ApplicationScoped
public class TestBase {

    @Asynchronous
    public Future<Response> createFailingRequestToClientAsynch() {
        Response response = ClientBuilder.newClient().target("http://localhost:9080").path("/resilience/api/client/timeout").request().get();
        return CompletableFuture.completedFuture(response);
    }
}
It does interrupt threads using Thread.interrupt(), but unfortunately not all Java operations respond to thread interrupts.
Lots of things do respond to interrupts by throwing an InterruptedException (like Thread.sleep(), Object.wait(), Future.get() and subclasses of InterruptibleChannel), but InputStreams and Sockets don't.
I suspect that you (or the library you're using to make the request) are using a Socket, which isn't interruptible, so you don't see your method return early.
As Andy McCright mentioned, it's particularly unintuitive because Liberty's JAX-RS client doesn't respond to thread interrupts. We're aware it's not a great situation and we're working on making it better.
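To illustrate the difference (a plain-Java sketch, independent of Liberty): Thread.sleep() reacts to interrupt(), whereas a blocking socket or stream read would simply keep waiting.
public class InterruptDemo {
    public static void main(String[] args) throws Exception {
        Thread worker = new Thread(() -> {
            try {
                Thread.sleep(5000);   // responds to interruption
            } catch (InterruptedException e) {
                System.out.println("interrupted after ~1 second, not 5");
            }
            // A blocking InputStream/Socket read here would NOT be unblocked
            // by interrupt(); it would keep waiting for data or a timeout.
        });
        worker.start();
        Thread.sleep(1000);
        worker.interrupt();           // roughly what the @Timeout handling does
        worker.join();
    }
}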
I had the same problem: for some URLs I consume, the Fault Tolerance timeout doesn't work.
In my case I use the MicroProfile REST Client. I solved my problem using readTimeout() on the RestClientBuilder:
MyRestClientClass myRestClientClass = RestClientBuilder.newBuilder().baseUri(uri).readTimeout(3L, TimeUnit.SECONDS).build(MyRestClientClass.class);
One advantage of this timeout control is that you can pass the timeout as a parameter.
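For example, the timeout could be supplied by the caller or read from configuration (a sketch; MyRestClientClass is the illustrative interface name from above):
public MyRestClientClass buildClient(URI uri, long timeoutSeconds) {
    return RestClientBuilder.newBuilder()
            .baseUri(uri)
            .readTimeout(timeoutSeconds, TimeUnit.SECONDS)
            .build(MyRestClientClass.class);
}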

Vert.x Unit Test a Verticle that does not implement the start method with future

I'm new to Vert.x and just stumbled upon a problem.
I have the following Verticle:
public class HelloVerticle extends AbstractVerticle {

    @Override
    public void start() throws Exception {
        String greetingName = config().getString("greetingName", "Welt");
        String greetingNameEnv = System.getenv("GREETING_NAME");
        String greetingNameProp = System.getProperty("greetingName");

        Router router = Router.router(vertx);
        router.get("/hska").handler(routingContext -> {
            routingContext.response().end(String.format("Hallo %s!", greetingName));
        });
        router.get().handler(routingContext -> {
            routingContext.response().end("Hallo Welt");
        });

        vertx
            .createHttpServer()
            .requestHandler(router::accept)
            .listen(8080);
    }
}
I want to unit test this Verticle, but I don't know how to wait for the Verticle to be deployed.
@Before
public void setup(TestContext context) throws InterruptedException {
    vertx = Vertx.vertx();
    JsonObject config = new JsonObject().put("greetingName", "Unit Test");
    vertx.deployVerticle(HelloVerticle.class.getName(), new DeploymentOptions().setConfig(config));
}
When I set up my test like this, I have to add a Thread.sleep after the deploy call so the tests only execute after waiting some time for the Verticle.
I heard about Awaitility and that it should be possible to wait for the Verticle to be deployed with this library, but I didn't find any examples of how to use Awaitility with vertx-unit and the deployVerticle method.
Could anyone shed some light on this?
Or do I really have to hardcode a sleep timer after calling the deployVerticle method in my tests?
Have a look at the comments of the accepted answer.
First of all, you need to implement start(Future future) instead of just start(). Then you need to add a callback handler (Handler<AsyncResult<HttpServer>> listenHandler) to the listen(...) call, which then resolves the Future you got via start(Future future).
Vert.x is highly asynchronous, and so is the start of a Vert.x HTTP server. In your case, the Verticle is only fully functional once the HTTP server has successfully started, so you need to implement what I described above.
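A minimal sketch of what that could look like with the Vert.x 3.x API used in the question (the routes are shortened here):
public class HelloVerticle extends AbstractVerticle {

    @Override
    public void start(Future<Void> startFuture) throws Exception {
        Router router = Router.router(vertx);
        router.get().handler(ctx -> ctx.response().end("Hallo Welt"));

        vertx.createHttpServer()
            .requestHandler(router::accept)
            .listen(8080, result -> {
                if (result.succeeded()) {
                    startFuture.complete();           // the Verticle is now fully started
                } else {
                    startFuture.fail(result.cause()); // propagate the startup failure
                }
            });
    }
}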
Second, you need to tell the TestContext that the asynchronous deployment of your Verticle is done. This can be done via another callback handler (Handler<AsyncResult<String>> completionHandler); this blog post shows how to do that.
The deployment of a Verticle is always asynchronous, even if you implemented the plain start() method. So you should always use a completionHandler if you want to be sure that your Verticle was successfully deployed before the test runs.
So no, you don't need to, and you definitely shouldn't, hardcode a sleep timer in any of your Vert.x applications. Mind The Golden Rule: Don't Block the Event Loop.
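Applied to the setup method from the question, a vertx-unit sketch could look like this (asyncAssertSuccess() makes the test wait for the deployment to finish):
@Before
public void setup(TestContext context) {
    vertx = Vertx.vertx();
    JsonObject config = new JsonObject().put("greetingName", "Unit Test");
    vertx.deployVerticle(HelloVerticle.class.getName(),
            new DeploymentOptions().setConfig(config),
            context.asyncAssertSuccess());   // completionHandler: fails the test if deployment fails
}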
Edit:
If the initialisation of your Verticle is synchronous, you should override the plain start() method, as mentioned in the docs:
If your verticle does a simple, synchronous start-up then override this method and put your start-up code in there.
If the initialisation of your Verticle is asynchronous (e.g. starting a Vert.x HTTP server), you should override start(Future future) and complete the Future when your asynchronous initialisation is finished.

When to use a Thread pool instead of calling new Thread

I have a JAX-RS/Jersey REST API which receives a request and needs to do an additional job in a separate thread, but I am not sure whether it would be advisable to use a thread pool or not. I expect a lot of requests to this API (a few thousand a day), but I only have a single additional job in the background.
Would it be bad to just create a new Thread each time, like this? Any advice would be appreciated. I have not used a thread pool before.
@GET
@Path("/myAPI")
public Response myCall() {
    // call load in the background
    load();
    ...
    // do main job here
    mainJob();
    ...
}

private void load() {
    new Thread(new Runnable() {
        @Override
        public void run() {
            doSomethingInTheBackground();
        }
    }).start();
}
Edit:
Just to clarify. I only need a single additional job to run in the background. This job will call another API to log some info and that's it. But it has to do this for every request and I do not need to wait for a response. That's why I thought of just doing this in a new background thread.
Edit2:
So this is what I came up with now. Could anyone please tell me if this seems OK (it works locally), and whether I need to shut down the executor (see my comment in the code)?
// Configuration class
@Bean(name = "executorService")
public ExecutorService executorService() {
    return Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors() + 1);
}

// Some other class
@Qualifier("executorService")
@Autowired
private ExecutorService executorService;

....

private void load() {
    executorService.submit(new Runnable() {
        @Override
        public void run() {
            doSomethingInTheBackground();
        }
    });

    // If I enable this I will get a RejectedExecutionException
    // for a next request.
    // executorService.shutdown();
}
A thread pool is a good way of dealing with this, for two reasons:
1) you reuse existing threads in the pool, which means less overhead;
2) more importantly, your system will not get bogged down if it comes under attack and some party tries to start zillions of sessions at once, because the size of the pool is preset.
Using thread pools is not complicated at all. See here for more about thread pools, and also take a look at the Oracle documentation.
It sounds to me like you don't need to create multiple threads at all (although I might be wrong; I don't know the specifics of your task).
Could you perhaps create exactly one thread that does the background work, and give that thread a LinkedBlockingQueue to hold the parameters of the doSomethingInTheBackground calls?
This solution wouldn't work if it is of the utmost importance that the background task starts right away, even when the server is under heavy load. But for example for my most recent task (retrieve text externally, return it to the API caller, then add the text to the Solr layer with a delay) this was a perfectly fine solution.
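A minimal sketch of that idea (the names are illustrative, not from the question; a single daemon worker thread feeds doSomethingInTheBackground from the queue):
public class BackgroundLogger {

    private final BlockingQueue<String> jobs = new LinkedBlockingQueue<>();

    public BackgroundLogger() {
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    String jobDetails = jobs.take();       // blocks until a job is queued
                    doSomethingInTheBackground(jobDetails);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();        // allow the worker to shut down
            }
        });
        worker.setDaemon(true);
        worker.start();
    }

    // called from the JAX-RS resource instead of new Thread(...).start()
    public void load(String jobDetails) {
        jobs.offer(jobDetails);
    }

    private void doSomethingInTheBackground(String jobDetails) {
        // call the logging API here
    }
}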
I suggest using neither of the approaches you mention, but a JMS queue instead. You can easily embed an ActiveMQ instance in your application. First create one or more separate consumer threads in the background to pick up jobs from the queue.
Then, when a request is received, just push a message with the job details onto the JMS queue. This is a much better architecture and more scalable than fiddling with low-level threads or thread pools.
See also: this answer and the ActiveMQ site.
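A rough sketch of that idea, assuming the activemq-broker dependency is on the classpath (the queue name and message content are illustrative; in a real application the producer and consumer sides would live in different classes):
// Embedded, non-persistent broker; started on first use of the vm:// transport.
ConnectionFactory factory = new ActiveMQConnectionFactory("vm://localhost?broker.persistent=false");
Connection connection = factory.createConnection();
connection.start();

// Consumer side: a listener that picks up background jobs from the queue.
Session consumerSession = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
Queue queue = consumerSession.createQueue("background-jobs");
consumerSession.createConsumer(queue)
        .setMessageListener(message -> doSomethingInTheBackground());

// Producer side: the REST resource just pushes the job details and returns.
Session producerSession = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
producerSession.createProducer(producerSession.createQueue("background-jobs"))
        .send(producerSession.createTextMessage("job details"));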

Spring's DeferredResult setResult interaction with timeouts

I'm experimenting with Spring's DeferredResult on Tomcat, and I'm getting crazy results. Am I doing something wrong, or is there a bug in Spring or Tomcat? My code is simple enough.
@Controller
public class Test {

    private DeferredResult<String> deferred;

    static class DoSomethingUseful implements Runnable {
        public void run() {
            try { Thread.sleep(2000); } catch (InterruptedException e) { }
        }
    }

    @RequestMapping(value="/test/start")
    @ResponseBody
    public synchronized DeferredResult<String> start() {
        deferred = new DeferredResult<>(4000L, "timeout\n");
        deferred.onTimeout(new DoSomethingUseful());
        return deferred;
    }

    @RequestMapping(value="/test/stop")
    @ResponseBody
    public synchronized String stop() {
        deferred.setResult("stopped\n");
        return "ok\n";
    }
}
So. The start request creates a DeferredResult with a 4 second timeout. The stop request will set a result on the DeferredResult. If you send stop before or after the deferred result times out, everything works fine.
However if you send stop at the same time as start times out, things go crazy. I've added an onTimeout action to make this easy to reproduce, but that's not necessary for the problem to occur. With an APR connector, it simply deadlocks. With a NIO connector, it sometimes works, but sometimes it incorrectly sends the "timeout" message to the stop client and never answers the start client.
To test this:
curl http://localhost/test/start & sleep 5; curl http://localhost/test/stop
I don't think I'm doing anything wrong. The Spring documentation seems to say it's okay to call setResult at any time, even after the request has already expired, and from any thread ("the application can produce the result from a thread of its choice").
Versions used: Tomcat 7.0.39 on Linux, Spring 3.2.2.
This is an excellent bug find!
Just adding more information about the bug (which got fixed) for a better understanding.
There was a synchronized block inside setResult() that extended up to the part of submitting a dispatch. This can cause a deadlock if a timeout occurs at the same time since the Tomcat timeout thread has its own locking that permits only one thread to do timeout or dispatch processing.
Detailed explanation:
When you call "stop" at the same time as the request "times out", two threads are attempting to lock the DeferredResult object 'deferred'.
The thread that executes the "onTimeout" handler
Here is the excerpt from the Spring doc:
This onTimeout method is called from a container thread when an async request times out before the DeferredResult has been set. It may invoke setResult or setErrorResult to resume processing.
Another thread that executes the "stop" service.
If the dispatch processing called during the stop() service obtains the 'deferred' lock, it will wait for a tomcat lock (say TomcatLock) to finish the dispatch.
And if the other thread doing timeout handling has already acquired the TomcatLock, that thread waits to acquire a lock on 'deferred' to complete the setResult()!
So, we end up in a classic deadlock situation !

ThreadPoolExecutor for running AbortableHttpRequest - how to call abort?

I'm running a networking service in Android to which I direct all my HTTP requests, and I get callbacks from the service when the requests are complete. I run the requests in a ThreadPoolExecutor to limit the number of concurrent requests. As the requests run within the pool, they eventually create an HttpGet or HttpPost, both of which indirectly implement AbortableHttpRequest, which allows one to cancel the connection (say, if it's blocking for a long time).
If a user cancels a request, I'd like to somehow drill into the thread queue and call the abort routine for that request. If, for example, a web site is not responding and the user chooses to do something else, right now my only option is to wait for the standard 5-minute HTTP timeout to occur for that hung request before its thread is freed up. If I could access the thread that has my request and call abort, that would free things up right away.
From what I understand, it appears that once my request has gone into the thread pool, it's a black box until it comes out the other end. Querying the queue will only hand back Futures, which hide the Runnable.
Is there a better approach for this? I'm fairly new to Java and threading (I mostly do Perl, which doesn't do threads very well at all).
Just because you give a task to a thread pool executor doesn't mean you can't hold a reference to it. Keep a reference to the task, and if the user chooses to cancel it, call abort on your task.
public class MyAbortableRunnable implements Runnable {

    private final Object lock = new Object();
    private AbortableHttpRequest request;

    public void abort() {
        synchronized(lock) {
            if (request != null) {
                request.abort();
            }
        }
    }

    @Override
    public void run() {
        ...
        // create the request
        synchronized(lock) {
            this.request = ...;
        }
        ...
    }
}
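For example, the calling code can keep its own reference to the task alongside the Future returned by the pool (a sketch; threadPoolExecutor stands for the ThreadPoolExecutor mentioned in the question):
// Submit the task but keep a reference to it.
MyAbortableRunnable task = new MyAbortableRunnable();
Future<?> future = threadPoolExecutor.submit(task);

// Later, when the user cancels:
task.abort();          // aborts the in-flight HTTP request, if one was started
future.cancel(true);   // also cancels/interrupts the pool task itself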
