I have a problem with Resilience4j RateLimiter
public static void main(final String[] args) throws InterruptedException {
    final ExternalService service = new ExternalService();
    final ExecutorService executorService = Executors.newFixedThreadPool(30);
    final RateLimiterConfig config = RateLimiterConfig.custom()
            .limitRefreshPeriod(Duration.ofSeconds(10))
            .limitForPeriod(3)
            .timeoutDuration(Duration.ofSeconds(12))
            .build();
    final RateLimiter rateLimiter = RateLimiter.of("RateLimiter", config);
    final Callable<Response<String>> callable = RateLimiter.decorateCallable(
            rateLimiter, () -> service.get(200, "OK")
    );
    executorService.submit(callable); // fine in first period
    executorService.submit(callable); // fine in first period
    executorService.submit(callable); // fine in first period
    executorService.submit(callable); // should wait 10 sec and be fine in second period
    executorService.submit(callable); // should wait 10 sec and be fine in second period
    executorService.submit(callable); // should wait 10 sec and be fine in second period
    executorService.submit(callable); // should exit with timeout after 12 seconds
    executorService.submit(callable); // should exit with timeout after 12 seconds
    executorService.submit(callable); // should exit with timeout after 12 seconds
    Thread.sleep(Duration.ofSeconds(40).toMillis());
    executorService.shutdown();
}
In ExternalService I have some basic logging with the local time of each response. I think it should work as I explained in the comments, but my output is:
> Task :Main.main()
[12:24:53.5] Return standard response
[12:24:53.5] Return standard response
[12:24:53.5] Return standard response
[12:25:03.5] Return standard response
[12:25:03.5] Return standard response
[12:25:03.5] Return standard response
[12:25:03.5] Return standard response
[12:25:03.5] Return standard response
BUILD SUCCESSFUL in 40s
So it seems that the first cycle is fine, but after that the RateLimiter allows the next FIVE threads, and the last thread is never called.
Unfortunately this was a bug introduced in PR #672, which is part of release v1.2.0. The PR added the possibility to request multiple permits per call. The bug has now been fixed.
I'm using Spring Framework's reactive WebClient to make a client HTTP request to another service.
I currently have:
PredictionClientService.java
var response = externalServiceClient.sendPostRequest(predictionDto);
if (response.getStatusCode() == HttpStatus.OK.value()) {
    predictionService.updateStatus(predictionDto, Status.OK);
} else {
    listOfErrors.add(response.getPayload());
    predictionService.updateStatus(predictionDto, Status.FAIL);
    // Perhaps change the above line to Status.PENDING and then
    // poll the DB every 30, 60, 120 mins;
    // if exhausted, then call
    // predictionService.updateStatus(predictionDto, Status.FAILED);??
}
}
ExternalServiceClient.java
public PredictionResponseDto sendPostRequest(PredictionDto predictionDto) {
    var response = webClient.post()
            .uri(url)
            .contentType(MediaType.APPLICATION_JSON)
            .body(BodyInserters.fromValue(predictionDto.getPayload()))
            .exchange()
            .retryWhen(Retry.backoff(3, Duration.ofMinutes(30)))
            // Maybe I can remove the retry logic here
            // and handle retrying in PredictionClientService?
            .onErrorResume(throwable ->
                    Mono.just(ClientResponse.create(TIMEOUT_HTTP_CODE,
                            ExchangeStrategies.empty().build()).build()))
            .blockOptional();
    return response.map(clientResponse ->
                    new PredictionResponseDto(
                            clientResponse.rawStatusCode(),
                            clientResponse.bodyToMono(String.class).block()))
            .orElse(PredictionResponseDto.builder().build());
}
This will retry a maximum of 3 times at intervals of 30, 60, and 120 minutes. The issue is, I don't want to keep a process running for upwards of 30 minutes.
The top code block is probably where I need to add the retry logic (poll from the database if status = PENDING and retries < 3)?
Is there any sensible solution here? I was thinking I could save the failed request to a DB with columns "Request Body", "Retry Attempt", and "Status" and poll from that? Although I'm not sure if cron is the way to go here.
How would I retry sending the HTTP request every 30, 60, 120 minutes to avoid these issues? Would appreciate any code samples or links!
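For reference, one shape the DB-backed approach could take. This is a hedged sketch, not a definitive implementation: FailedRequestRepository, FailedRequest, and the Status values are hypothetical names, and it assumes Spring's @Scheduled support (@EnableScheduling) is enabled:

```java
@Component
public class PredictionRetryPoller {

    private final FailedRequestRepository repository;        // hypothetical DB accessor
    private final ExternalServiceClient externalServiceClient;

    public PredictionRetryPoller(FailedRequestRepository repository,
                                 ExternalServiceClient externalServiceClient) {
        this.repository = repository;
        this.externalServiceClient = externalServiceClient;
    }

    // Poll every minute; each PENDING row carries its own next-retry deadline,
    // so no request-handling thread ever blocks for 30+ minutes.
    @Scheduled(fixedDelay = 60_000)
    public void retryPending() {
        for (FailedRequest req : repository.findByStatus(Status.PENDING)) {
            // Backoff: 30, 60, 120 minutes for attempts 0, 1, 2
            Duration backoff = Duration.ofMinutes(30L << req.getRetryAttempt());
            if (Instant.now().isBefore(req.getLastAttemptAt().plus(backoff))) {
                continue; // not due yet
            }
            var response = externalServiceClient.sendPostRequest(req.toPredictionDto());
            if (response.getStatusCode() == HttpStatus.OK.value()) {
                req.setStatus(Status.OK);
            } else if (req.getRetryAttempt() >= 2) {
                req.setStatus(Status.FAILED); // 3 attempts exhausted
            } else {
                req.setRetryAttempt(req.getRetryAttempt() + 1);
                req.setLastAttemptAt(Instant.now());
            }
            repository.save(req);
        }
    }
}
```

The design point is that retry state lives in the database rather than in a long-lived reactive chain, so a service restart doesn't lose pending retries and you can drop the 30-minute Retry.backoff from the WebClient call entirely.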
Because of a server-side problem, we are trying to disable the connection pool used by OkHttp.
The constructor of OkHttp's ConnectionPool class takes maxIdleConnections and keep-alive duration information.
public ConnectionPool(int maxIdleConnections, long keepAliveDuration, TimeUnit timeUnit) {
    this.delegate = new RealConnectionPool(maxIdleConnections, keepAliveDuration, timeUnit);
}

public RealConnectionPool(int maxIdleConnections, long keepAliveDuration, TimeUnit timeUnit) {
    this.maxIdleConnections = maxIdleConnections;
    this.keepAliveDurationNs = timeUnit.toNanos(keepAliveDuration);

    // Put a floor on the keep alive duration, otherwise cleanup will spin loop.
    if (keepAliveDuration <= 0) {
        throw new IllegalArgumentException("keepAliveDuration <= 0: " + keepAliveDuration);
    }
}
Would it be ok to set maxIdleConnections to 0?
We just need to create a new connection for each request.
Yep, set maxIdleConnections to 0.
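A minimal sketch of wiring that in (assuming OkHttp 3.x). Note that keepAliveDuration must still be positive, or the RealConnectionPool check quoted above throws:

```java
// With maxIdleConnections = 0, no idle connection is ever kept for reuse,
// so each request effectively gets a fresh connection.
// keepAliveDuration must be > 0 to satisfy the constructor check.
OkHttpClient client = new OkHttpClient.Builder()
        .connectionPool(new ConnectionPool(0, 1, TimeUnit.SECONDS))
        .build();
```

An alternative, if you control the requests, is to send a `Connection: close` header so the connection is closed after each response instead of being returned to the pool.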
The code below returns a client-side timeout (Elasticsearch client) when the number of records is high.
CompletableFuture<BulkByScrollResponse> future = new CompletableFuture<>();
client.reindexAsync(request, RequestOptions.DEFAULT, new ActionListener<BulkByScrollResponse>() {
    @Override
    public void onResponse(BulkByScrollResponse bulkByScrollResponse) {
        future.complete(bulkByScrollResponse);
    }

    @Override
    public void onFailure(Exception e) {
        future.completeExceptionally(e);
    }
});
BulkByScrollResponse response = future.get(10, TimeUnit.MINUTES); // client timeout occurred before this timeout
Below is the client config.
connectTimeout: 60000
socketTimeout: 600000
maxRetryTimeoutMillis: 600000
Is there a way to wait indefinitely until the re-indexing complete?
Submit the reindex request as a task:
TaskSubmissionResponse task = esClient.submitReindexTask(reindex, RequestOptions.DEFAULT);
Acquire the task id:
TaskId taskId = new TaskId(task.getTask());
Then check the task status periodically:
GetTaskRequest taskQuery = new GetTaskRequest(taskId.getNodeId(), taskId.getId());
GetTaskResponse taskStatus;
do {
    Thread.sleep(TimeUnit.MINUTES.toMillis(1));
    taskStatus = esClient.tasks()
            .get(taskQuery, RequestOptions.DEFAULT)
            .orElseThrow(() -> new IllegalStateException("Reindex task not found. id=" + taskId));
} while (!taskStatus.isCompleted());
The Elasticsearch Java API docs about task handling just suck.
I don't think it's a good idea to wait indefinitely for the re-indexing to complete, or to set a very high timeout; that's not a proper fix and will cause more harm than good.
Instead, you should examine the response and add more debug logging to find the root cause, then address it. Also, please have a look at my tips for improving re-indexing speed, which should fix some of your underlying issues.
How do you wait for a single value to arrive on an Observable, with a timeout?
I am looking for something like:
Observable<Acknowledgement> acknowledgementObservable;
port.send(new Message());
Optional<Acknowledgement> ack = acknowledgementObservable.getFirst(100, TimeUnit.MILLISECONDS);
First, convert Observable to CompletableFuture as described in Converting between Completablefuture and Observable:
Observable<Acknowledgement> acknowledgementObservable;
port.send(new Message());
CompletableFuture<Acknowledgement> future = new CompletableFuture<>();
acknowledgementObservable
        .doOnError(future::completeExceptionally)
        .single()
        .forEach(future::complete);
Then, wait for the event using timeout:
Acknowledgement ack = future.get(100, TimeUnit.MILLISECONDS);
It throws TimeoutException if timeout occurs.
You can do so by chaining .timeout(), .onErrorComplete(), and .blockingGet().
So in this example it would be:
Acknowledgement ack = acknowledgementObservable
        .firstElement()
        .timeout(3, TimeUnit.SECONDS)
        .onErrorComplete()
        .blockingGet();
If the timeout is hit, ack will be null.
I have a simple HTTP Vert.x-based server with the following code:
public class JdbcVertx extends AbstractVerticle {

    private static int cnt;

    @Override
    public void start() throws Exception {
        this.vertx.createHttpServer()
                .requestHandler(request -> {
                    JdbcVertx.cnt++;
                    System.out.println("Request " + JdbcVertx.cnt + " " + Thread.currentThread().getName());
                    this.vertx.executeBlocking(future -> {
                        System.out.println("Blocking: " + Thread.currentThread().getName());
                        final String resp = this.dbcall();
                        future.complete(resp);
                    }, asyncResp -> {
                        request.response().putHeader("content-type", "text/html");
                        if (asyncResp.succeeded()) {
                            request.response().end(asyncResp.result().toString());
                        } else {
                            request.response().end("ERROR");
                        }
                    });
                }).listen(8080);
    }

    private String dbcall() {
        try {
            Thread.sleep(2000);
            System.out.println("From sleep: " + Thread.currentThread().getName());
        } catch (InterruptedException ex) {
            Logger.getLogger(JdbcVertx.class.getName()).log(Level.SEVERE, null, ex);
        }
        return UUID.randomUUID().toString();
    }
}
From the official docs I have read that the default worker pool size is 20. But this is my output:
Request 1 vert.x-eventloop-thread-0
Blocking: vert.x-worker-thread-0
Request 2 vert.x-eventloop-thread-0
Request 3 vert.x-eventloop-thread-0
Request 4 vert.x-eventloop-thread-0
Request 5 vert.x-eventloop-thread-0
From sleep: vert.x-worker-thread-0
Blocking: vert.x-worker-thread-0
Request 6 vert.x-eventloop-thread-0
From sleep: vert.x-worker-thread-0
I have two questions:
1) Why does my verticle use only one worker thread?
2) From the output
Request 1 vert.x-eventloop-thread-0
Blocking: vert.x-worker-thread-0
Request 2 vert.x-eventloop-thread-0
Request 3 vert.x-eventloop-thread-0
Request 4 vert.x-eventloop-thread-0
Request 5 vert.x-eventloop-thread-0
the server gets the first request, hands it to a worker thread, and then receives requests 2, 3, 4, 5. Why does it work this way? Are the requests put into a queue for the worker pool?
Thanks in advance.
BTW, I deploy using the console (vertx run JdbcVertx.java).
That's an excellent question.
executeBlocking() actually has three parameters: blockingHandler, ordered, and resultHandler.
When you call it with only two arguments, ordered defaults to true.
For that reason, all requests within the same context receive the same worker thread: they're executed sequentially.
Set it to false and you'll see all the worker threads start working.
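Applied to the handler above, that means using the three-argument overload. A sketch against the Vert.x 3.x API:

```java
// Same blocking call as before, but with ordered = false, so blocking
// jobs from this context may run on different worker threads in parallel.
this.vertx.executeBlocking(future -> {
    System.out.println("Blocking: " + Thread.currentThread().getName());
    future.complete(this.dbcall());
}, false, asyncResp -> {   // ordered = false
    request.response().putHeader("content-type", "text/html");
    if (asyncResp.succeeded()) {
        request.response().end(asyncResp.result().toString());
    } else {
        request.response().end("ERROR");
    }
});
```

With this change you should see several vert.x-worker-thread-N names in the "Blocking:" log lines instead of just one.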
You can also check this example of mine:
https://github.com/AlexeySoshin/VertxAnswers/blob/master/src/main/java/clientServer/ClientWithExecuteBlocking.java
And here you can see that it's actually being put on the queue:
https://github.com/eclipse/vert.x/blob/master/src/main/java/io/vertx/core/impl/ContextImpl.java#L280