How to implement a retry feature on bad HTTP codes with AsyncHttpClient? - java

I need to call a URL and it's possible that the call returns a bad HTTP code like 404, 500, etc. I want to implement a retry feature when I get those errors: a new call should be made every hour, ten times at most.
I use the async-http-client library to make my POST calls asynchronously.
Do you have any idea?
Thanks in advance for your help.

It's worth considering the Spring Retry functionality.
The API is constructed to be agnostic to what you want to retry, and concerns itself with retry policies, backoffs, limiting the number of retries, etc.
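For the concrete requirement in the question (retry every hour, ten times maximum), a minimal Spring Retry sketch could look like the following; callService() is a placeholder for the actual POST call and is assumed to throw an exception when it gets a bad HTTP code:

import java.util.concurrent.TimeUnit;
import org.springframework.retry.backoff.FixedBackOffPolicy;
import org.springframework.retry.policy.SimpleRetryPolicy;
import org.springframework.retry.support.RetryTemplate;

RetryTemplate template = new RetryTemplate();

// at most 10 attempts in total (the question's "ten times maximum")
template.setRetryPolicy(new SimpleRetryPolicy(10));

// wait one hour between attempts
FixedBackOffPolicy backOff = new FixedBackOffPolicy();
backOff.setBackOffPeriod(TimeUnit.HOURS.toMillis(1));
template.setBackOffPolicy(backOff);

// callService() is a placeholder: it should perform the POST and throw on 404/500
String result = template.execute(context -> callService());

Note that RetryTemplate.execute blocks the calling thread during the back-off, so with one-hour waits it should run on a dedicated executor rather than on a request thread.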
Another possibility, if you're using Java 7/8, is the AsyncRetryExecutor from the async-retry library, e.g.:
ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
RetryExecutor executor = new AsyncRetryExecutor(scheduler).
    retryOn(SocketException.class).
    withExponentialBackoff(500, 2).     // 500ms times 2 after each retry
    withMaxDelay(10_000).               // 10 seconds
    withUniformJitter().                // add between +/- 100 ms randomly
    withMaxRetries(20);

Related

Java + Resilience4J - Time limiter on my own handler?

Small question regarding Resilience4j please.
Currently, Resilience4J proposes a very nice TimeLimiter pattern.
With it, one can very easily configure the time limit on an outbound http call. For instance, my service A is using an http client to call a service B. The time limit is to wait for service B for 1 second.
When properly configured, we can see that when service B takes more than 1 second, a fallback is returned instead: some kind of "no need to wait for service B anymore".
I was wondering, is it possible to configure something similar, but for my own service/handler?
By that, I mean I would like to set a TimeLimiter on my own service A, since I have an SLA of 5 seconds defined by contract, instead of telling all my clients "don't wait for me if I take more than 5 seconds" and letting them configure some kind of TimeLimiter on their end.
Very naively, I put the Resilience4J time limiter annotation on my own handler and put a very long computation in it, but it is not working. I think the Resilience4J time limiter is more for outbound calls.
Would it be possible to have the same mechanism on my own service A please?
Thank you
Yes, it is also possible for your own service A.
But your service must return a CompletableFuture, Mono or Flux so that the TimeLimiter can throw an exception.
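Purely as an illustration (assuming the resilience4j-spring-boot2 starter, a TimeLimiter instance named "serviceA" configured with a 5s timeoutDuration, and a placeholder longComputation() method), the handler could look roughly like this:

import java.util.concurrent.CompletableFuture;
import io.github.resilience4j.timelimiter.annotation.TimeLimiter;

@TimeLimiter(name = "serviceA", fallbackMethod = "fallback")
public CompletableFuture<String> handle(String request) {
    // the work runs on another thread so the TimeLimiter can time it out
    return CompletableFuture.supplyAsync(() -> longComputation(request));
}

private CompletableFuture<String> fallback(String request, Throwable t) {
    // returned when service A itself exceeds the configured 5 second limit
    return CompletableFuture.completedFuture("service A took too long");
}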

API Performance testing tool - JMeter or JUnit?

I am working on a performance testing task. The main goal is to compare the duration of old NCP protocol calls against new REST API calls. I have this scenario:
Client has an authenticated session
Client access protected resource
I have to create two variants:
a) One-by-one variant: The question is: How long does it take to perform 2000 requests sent one by one?
b) Concurrent variant: The question is: How long does it take to perform 2000 requests which are sent concurrently (ideally 300 threads)?
I don't know the best way to solve this problem. My idea is:
a) Creation of 2000 HTTP clients -> Each client sends an HTTP POST with credentials in the body -> Each client sends an HTTP GET and gets the response (I will measure the time between sending the GET request and getting a response for each iteration and sum it).
b) Creation of 2000 HTTP clients -> Use an executor service with a fixed thread pool (300) -> each thread will send a GET request.
Is there any other way? I know that JMeter is a great tool but I am not sure that this scenario can be performed in JMeter. Thanks!
For the second variant: you need to determine what your targeted throughput (TP) is. 2000 requests per hour? Per minute? Per second? Once you have the TP and a guesstimate of the scenario response time (RT), you can estimate the number of VUsers using Little's Law (concurrent users = throughput x response time); for example, 2000 requests per minute is about 33 requests per second, which at a 0.5 s response time needs roughly 17 virtual users. Alternatively, you can use a calculator to determine that number.
JMeter provides a mechanism to submit this workload (scenarios) by using the Arrivals Thread Group. This TG will instantiate the number of threads needed to sustain the targeted TP.
Be aware that there is a possibility that you might not reach the TP goal due to:
the SUT does not have the capacity to handle the load
a bottleneck (resource saturation) somewhere in the environment
the client (JMeter) not having enough resources to produce the load
JUnit itself doesn't provide any multithreading logic; you will have to construct the HTTP requests yourself (or with a 3rd-party library like RestAssured), execute them using e.g. an ExecutorService or JMH, and then come up with something for results analysis.
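As a rough sketch of that ExecutorService approach (the URL is a placeholder and the example assumes Java 11+ for java.net.http.HttpClient):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.*;

public class ConcurrentLoadTest {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create("https://example.com/protected"))
                .GET()
                .build();

        ExecutorService pool = Executors.newFixedThreadPool(300);   // 300 concurrent workers
        CountDownLatch done = new CountDownLatch(2000);
        long start = System.nanoTime();
        for (int i = 0; i < 2000; i++) {
            pool.submit(() -> {
                try {
                    client.send(request, HttpResponse.BodyHandlers.discarding());
                } catch (Exception e) {
                    e.printStackTrace();
                } finally {
                    done.countDown();
                }
            });
        }
        done.await();   // wait until all 2000 requests have completed
        System.out.println("2000 requests took "
                + TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start) + " ms");
        pool.shutdown();
    }
}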
JMeter has everything out of the box, so you won't need to write a single line of code, and reporting is also included. It might not be that CI-friendly, since JMeter .jmx scripts are XML, but on the other hand you will get nice protocol metrics and the ability to correlate increasing load with increasing response time.

Cassandra Retry Policy for NoHostAvailableException [duplicate]

I am using the Datastax Cassandra driver and have a RetryPolicy setup to retry when a host is unavailable. However, I have noticed that it retries as fast as it can. I would like to change it to have an increasing delay between retries rather than hammer the cluster if it is struggling. This is particularly important for OVERLOADED request errors since I do want to retry in these scenarios, but with a substantial delay.
Where is the right place to put a delay and what is the right mechanism? Should I just throw a Thread.sleep(...) in my RetryPolicy?
I don't mind taking up a request on-the-wire slot (towards the maximum number of in-flight requests) but I am not okay with completely blocking other writes if we are not yet at the in-flight request limit.
You can implement your own retry policy by adding a delay. The simplest way is to take the source code of the default retry policy and modify it yourself to implement an exponential delay between retries, or something similar.
For an exponential delay, look at the source code of http://docs.datastax.com/en/drivers/java/3.0/com/datastax/driver/core/policies/ExponentialReconnectionPolicy.html to see how it works.
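The delay calculation itself is straightforward; a minimal, illustrative sketch (the class and method names here are placeholders, not the driver's API) could be:

public class ExponentialDelay {
    private final long baseDelayMs;
    private final long maxDelayMs;

    public ExponentialDelay(long baseDelayMs, long maxDelayMs) {
        this.baseDelayMs = baseDelayMs;
        this.maxDelayMs = maxDelayMs;
    }

    // delay doubles after each retry, capped at maxDelayMs
    public long delayForRetry(int retryNumber) {
        long delay = baseDelayMs * (1L << Math.min(retryNumber, 30)); // guard against overflow
        return Math.min(delay, maxDelayMs);
    }
}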

Reactive event processing with retrying in Java/Groovy

I would like to implement a microservice which, after receiving a request (via a message queue), will try to execute it via REST/SOAP calls to external services. On success the reply should be sent back via MQ, but on failure the request should be rescheduled for execution later (using some custom algorithm like 10 seconds, 1 minute, 10 minutes, timeout - give up). After the specified amount of time, a failure message should be sent back to the requester.
It should run on Java 8 and/or Groovy. Event persistence is not required.
First I thought about Executor and Runnable/Future together with ScheduledExecutorService.scheduleWithFixedDelay, but it looks too low-level for me. The second idea was actors with Akka and its Scheduler (for rescheduling), but I'm sure there could be other approaches.
Question. What technique would you use for reactive event processing with an ability to reschedule them on failure?
"Event" is quite fuzzy term, but most of definitions I met was talking about one of techniques of Inversion of Control. This one was characterized with fact, that you don't care WHEN and BY WHOM some piece of code will be called, but ON WHAT CONDITION. That means that you invert (or more precisely "lose") control over execution flow.
Now, you want event-driven processing (so you don't want to handle WHEN and BY WHOM), yet you want to specify TIMED (so strictly connected to WHEN) behaviour on failure. This is some kind of paradox to me.
I'd say you would do better if you used callbacks for reactive programming, and on failure just started a new thread that sleeps for 10 seconds and re-runs the callback.
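A rough sketch of that "reschedule on failure" idea with plain JDK scheduling (Request, Response, process(), sendReply() and sendFailure() are placeholders for the MQ and REST/SOAP plumbing):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public abstract class RetryingHandler {

    interface Request {}     // placeholder types for the MQ payloads
    interface Response {}

    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
    private final long[] delaysSeconds = {10, 60, 600};  // 10 s, 1 min, 10 min, then give up

    public void attempt(Request request, int attemptNo) {
        try {
            Response response = process(request);  // the REST/SOAP call
            sendReply(response);                   // reply via MQ
        } catch (Exception e) {
            if (attemptNo < delaysSeconds.length) {
                // reschedule the next attempt after the configured delay
                scheduler.schedule(() -> attempt(request, attemptNo + 1),
                        delaysSeconds[attemptNo], TimeUnit.SECONDS);
            } else {
                sendFailure(request);              // failure message via MQ
            }
        }
    }

    protected abstract Response process(Request request) throws Exception;
    protected abstract void sendReply(Response response);
    protected abstract void sendFailure(Request request);
}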
In the end I found the library async-retry, which was written just for this purpose. It allows you to retry the execution asynchronously in a very customizable way. Internally it leverages ScheduledExecutorService and CompletableFuture (or ListenableScheduledFuture from Guava when Java 7 has to be used).
Sample usage (from the project web page):
ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
RetryExecutor executor = new AsyncRetryExecutor(scheduler).
    retryOn(SocketException.class).
    withExponentialBackoff(500, 2).     // 500ms times 2 after each retry
    withMaxDelay(10_000).               // 10 seconds
    withUniformJitter().                // add between +/- 100 ms randomly
    withMaxRetries(20);

final CompletableFuture<Socket> future = executor.getWithRetry(() ->
    new Socket("localhost", 8080)
);

future.thenAccept(socket ->
    System.out.println("Connected! " + socket)
);

What's the effect on a second request of calling Thread.currentThread().sleep(2000) in a Spring MVC request handler?

I need to wait for a condition in a Spring MVC request handler while I call a third party service to update some entities for a user.
The wait averages about 2 seconds.
I'm calling Thread.sleep to allow the remote call to complete and for the entities to be updated in the database:
Thread.currentThread().sleep(2000);
After this, I retrieve the updated models from the database and display the view.
However, what will be the effect on parallel requests that arrive for processing at this controller/request handler?
Will parallel requests also experience a wait?
Or will they be spawned off into separate threads and so not be affected by the delay experienced by the current request?
What you are doing may work sometimes, but it is not a reliable solution.
The Java Future interface, along with a configured ExecutorService allows you to begin some operation and have one or more threads wait until the result is ready (or optionally until a certain amount of time has passed).
You can find documentation for it here:
http://download.oracle.com/javase/6/docs/api/java/util/concurrent/Future.html
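As a rough illustration of that idea (updateEntities() is a placeholder for the call that triggers the third-party update):

import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

ExecutorService executor = Executors.newFixedThreadPool(4);

Future<?> update = executor.submit(() -> updateEntities()); // placeholder: the remote update

try {
    // block only this request's thread, and only until the update finishes,
    // with an upper bound instead of a fixed 2-second sleep
    update.get(5, TimeUnit.SECONDS);
} catch (TimeoutException e) {
    // the update is still running; decide what to show the user
} catch (InterruptedException | ExecutionException e) {
    // handle interruption or a failure in the update itself
}
// other requests are handled on their own container threads and are not delayed by this wait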
