Retry HTTP Request using Spring's WebClient - java

I'm using springframework's reactive WebClient to make a client HTTP request to another service.
I currently have:
PredictionClientService.java
var response = externalServiceClient.sendPostRequest(predictionDto);
if (response.getStatusCode() == HttpStatus.OK.value()) {
    predictionService.updateStatus(predictionDto, Stage.OK);
} else {
    listOfErrors.add(response.getPayload());
    predictionService.updateStatus(predictionDto, Stage.FAIL);
    // Perhaps change the above line to Stage.PENDING and then
    // poll the DB every 30, 60, 120 mins.
    // If exhausted, then call
    // predictionService.updateStatus(predictionDto, Stage.FAILED);?
}
ExternalServiceClient.java
public PredictionResponseDto sendPostRequest(PredictionDto predictionDto) {
    var response = webClient.post()
            .uri(url)
            .contentType(MediaType.APPLICATION_JSON)
            .body(BodyInserters.fromValue(predictionDto.getPayload()))
            .exchange()
            .retryWhen(Retry.backoff(3, Duration.ofMinutes(30)))
            // Maybe I can remove the retry logic here
            // and handle retrying in PredictionClientService?
            .onErrorResume(throwable ->
                    Mono.just(ClientResponse.create(TIMEOUT_HTTP_CODE,
                            ExchangeStrategies.empty().build()).build()))
            .blockOptional();
    return response.map(clientResponse ->
                    new PredictionResponseDto(
                            clientResponse.rawStatusCode(),
                            clientResponse.bodyToMono(String.class).block()))
            .orElse(PredictionResponseDto.builder().build());
}
This will retry a maximum of 3 times at intervals of 30, 60, and 120 minutes. The issue is, I don't want to keep a process running for upwards of 30 minutes.
The top code block is probably where I need to add the retry logic (poll from the database if status = PENDING and retries < 3)?
Is there any sensible solution here? I was thinking I could save the failed request to a DB with columns 'Request Body', 'Retry Attempt', and 'Status' and poll from this? Although I'm not sure if cron is the way to go here.
How would I retry sending the HTTP request every 30, 60, 120 minutes to avoid these issues? Would appreciate any code samples or links!
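The DB-polling idea described above boils down to a small piece of schedule logic: each failed-request row carries a retry-attempt counter, and a scheduled job re-sends rows whose backoff delay has elapsed. A minimal sketch of just that logic (class and method names here are hypothetical, not from the question's code):

```java
import java.time.Duration;

// Sketch of the backoff bookkeeping for a DB-polled retry: attempt 1 waits
// 30 minutes, attempt 2 waits 60, attempt 3 waits 120, then give up.
public class RetrySchedule {
    static final int MAX_ATTEMPTS = 3;
    static final Duration BASE_DELAY = Duration.ofMinutes(30);

    // Delay before the given attempt (1-based): 30, 60, 120 minutes.
    public static Duration delayFor(int attempt) {
        if (attempt < 1 || attempt > MAX_ATTEMPTS) {
            throw new IllegalArgumentException("attempt out of range: " + attempt);
        }
        return BASE_DELAY.multipliedBy(1L << (attempt - 1));
    }

    // True once all retries have been used, i.e. time to mark Stage.FAILED.
    public static boolean exhausted(int attemptsSoFar) {
        return attemptsSoFar >= MAX_ATTEMPTS;
    }
}
```

A Spring `@Scheduled` job (or a cron-triggered one) could then query rows where status = PENDING and `nextRetryAt` has passed, resend, and either clear the row or bump the counter; this keeps no thread alive between attempts.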

Related

Update authorization header in flux WebClient

I'm trying to create a resilient SSE (server-sent event) client in reactive programming.
The sse endpoint is authenticated, therefore I have to add an authorization header to each request.
The authorization token expires after 1 hour.
Below is a snippet of my code
webClient.get()
        .uri("/events")
        .headers(httpHeaders -> httpHeaders.setBearerAuth(authService.getIdToken()))
        .retrieve()
        .bodyToFlux(ServerSideEvent.class)
        .timeout(Duration.ofSeconds(TIMEOUT))
        .retryWhen(Retry.fixedDelay(Long.MAX_VALUE, Duration.ofSeconds(RETRY_DELAY)))
        .subscribe(
                content -> handleEvent(content),
                error -> logger.error("Error receiving SSE: {}", error),
                () -> logger.info("Completed!!!"));
If the connection is lost for any reason after 1 hour, this code stops working because the token has expired.
How can I refresh the token inside the retry logic, or in some other way?
Thank you
You can use a WebClient filter.
A filter can intercept and modify the client request and response.
Example:
WebClient.builder().filter((request, next) -> {
    ClientRequest newRequest = ClientRequest.from(request)
            .header("Authorization", "YOUR_TOKEN")
            .build();
    return next.exchange(newRequest);
}).build();
UPDATE:
Sorry, I didn't read your question carefully. Try this:
Assume the server returns a 401 code when the token has expired.
WebClient.builder().filter((request, next) -> {
    final Mono<ClientResponse> response = next.exchange(request);
    return response.filter(clientResponse -> clientResponse.statusCode() != HttpStatus.UNAUTHORIZED)
            // handle 401 Unauthorized (token expired)
            .switchIfEmpty(next.exchange(ClientRequest.from(request)
                    .headers(httpHeaders -> httpHeaders.setBearerAuth(getNewToken()))
                    .build()));
}).build();
Or you can cache your token (e.g. save it to Redis with a one-hour TTL); when the token is missing from Redis, get a new one and save it to Redis again.
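The TTL caching idea above can also be done in process memory; here is a minimal sketch (the `getFreshToken` supplier is an assumed stand-in for whatever fetches a new token, and Redis would behave the same way with the TTL set on the key):

```java
import java.time.Duration;
import java.time.Instant;
import java.util.function.Supplier;

// Minimal in-memory token cache with a TTL: callers always go through get(),
// and a new token is fetched only once the cached one has expired.
public class TokenCache {
    private final Supplier<String> getFreshToken;
    private final Duration ttl;
    private String token;
    private Instant expiresAt = Instant.MIN;

    public TokenCache(Supplier<String> getFreshToken, Duration ttl) {
        this.getFreshToken = getFreshToken;
        this.ttl = ttl;
    }

    public synchronized String get() {
        if (Instant.now().isAfter(expiresAt)) {
            token = getFreshToken.get();          // refresh on expiry
            expiresAt = Instant.now().plus(ttl);
        }
        return token;
    }
}
```

Setting the TTL slightly under the real one-hour expiry avoids handing out a token that dies mid-request.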

Avoid refreshing and saving token multiple times in a cache during parallel https calls to service

I am currently in the process of replacing the blocking HTTP calls in one of my projects with asynchronous/reactive calls. saveOrderToExternalAPI is an example of the refactoring I have done: it replaced a blocking HTTP call and now uses the reactive WebClient.
The function saveOrderToExternalAPI calls an endpoint and returns an empty Mono on success. It also retries when the API returns a 401 Unauthorized error, which happens when the token has expired. If you look at the retryWhen logic, you will see the code renews the token (tokenProvider.renewToken()) before actually retrying.
tokenProvider.renewToken() is for now a blocking call that fetches a new token from another endpoint and saves it to a cache, so that we don't have to renew the token again and subsequent calls can just reuse it. The tokenProvider.loadCachedToken() function checks and returns the token if it is not null, otherwise it renews and then returns it.
public Mono<ResponseEntity<Void>> saveOrderToExternalAPI(Order order) {
    log.info("Sending request to save data.");
    return webclient.post()
            .uri("/order")
            .headers(httpHeaders -> httpHeaders.setBearerAuth(tokenProvider.loadCachedToken()))
            .accept(MediaType.APPLICATION_JSON)
            .bodyValue(order)
            .retrieve()
            .toBodilessEntity()
            .doOnError(WebClientResponseException.class, logHttp4xx)
            .retryWhen(unAuthorizedRetrySpec())
            .doOnNext(responseEntity -> log.debug("Response status code: {}",
                    responseEntity.getStatusCodeValue()));
}

private RetryBackoffSpec unAuthorizedRetrySpec() {
    return Retry.fixedDelay(1, Duration.ZERO)
            .doBeforeRetry(retrySignal -> log.info("Renew token before retrying."))
            .doBeforeRetry(retrySignal -> tokenProvider.renewToken())
            .filter(throwable -> throwable instanceof WebClientResponseException.Unauthorized)
            .onRetryExhaustedThrow(propagateException);
}

private final Consumer<WebClientResponseException> logHttp4xx = e -> {
    if (e.getStatusCode().is4xxClientError() && !e.getStatusCode().equals(HttpStatus.UNAUTHORIZED)) {
        log.error("Request failed with message: {}", e.getMessage());
        log.error("Body: {}", e.getResponseBodyAsString());
    }
};

// -------- tokenProvider service
public String loadCachedToken() {
    if (token == null) {
        renewToken();
    }
    return token;
}
Everything works fine for now because saveOrderToExternalAPI ultimately ends up being blocked on in the service layer.
for (ShipmentRequest order : listOfOrders) {
    orderHttpClient.saveOrderToExternalAPI(order).block();
}
But I would like to change this and refactor the service layer. I am wondering what would happen if I keep the existing token-renewal logic and process the saveOrderToExternalAPI calls in parallel as below.
List<Mono<ResponseEntity<Void>>> postedOrdersMono = listOfOrders.stream()
        .map(orderHttpClient::saveOrderToExternalAPI)
        .collect(Collectors.toList());
Mono.when(postedOrdersMono).block();
Now the calls will run in parallel, and the threads will all try to update the token cache at the same time (in the case of a 401 Unauthorized), even though updating it once would be enough. I am really new to reactive programming and would like to know if there's any callback or configuration that can fix this issue?
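One common answer to this is "single-flight" renewal: concurrent callers that find the token expired all join one in-flight renewal instead of each renewing. Below is a plain-Java sketch of that idea (the `fetchToken` supplier is an assumed stand-in for `tokenProvider.renewToken()`; in pure Reactor code the same effect can be had by caching the renewal `Mono`, e.g. with `Mono#cache(Duration)`):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Supplier;

// Single-flight token renewal: the first caller starts the fetch, everyone
// else who arrives while it is running gets the same future back.
public class SingleFlightRenewer {
    private final Supplier<String> fetchToken;
    private final AtomicReference<CompletableFuture<String>> inFlight = new AtomicReference<>();

    public SingleFlightRenewer(Supplier<String> fetchToken) {
        this.fetchToken = fetchToken;
    }

    public CompletableFuture<String> renew() {
        CompletableFuture<String> fresh = new CompletableFuture<>();
        CompletableFuture<String> existing = inFlight.compareAndExchange(null, fresh);
        if (existing != null) {
            return existing;                 // join the renewal already running
        }
        CompletableFuture.supplyAsync(fetchToken)
                .whenComplete((token, err) -> {
                    inFlight.set(null);      // allow a future renewal once done
                    if (err != null) fresh.completeExceptionally(err);
                    else fresh.complete(token);
                });
        return fresh;
    }
}
```

With this in the token provider, N parallel 401 retries trigger exactly one call to the auth endpoint, which is the behaviour the question is after.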

Avoid timeout in Elasticsearch re-indexing in Java

The code below hits a client-side timeout (Elasticsearch client) when the number of records is large.
CompletableFuture<BulkByScrollResponse> future = new CompletableFuture<>();
client.reindexAsync(request, RequestOptions.DEFAULT, new ActionListener<BulkByScrollResponse>() {
    @Override
    public void onResponse(BulkByScrollResponse bulkByScrollResponse) {
        future.complete(bulkByScrollResponse);
    }

    @Override
    public void onFailure(Exception e) {
        future.completeExceptionally(e);
    }
});
BulkByScrollResponse response = future.get(10, TimeUnit.MINUTES); // client timeout occurred before this timeout
Below is the client config.
connectTimeout: 60000
socketTimeout: 600000
maxRetryTimeoutMillis: 600000
Is there a way to wait indefinitely until the re-indexing complete?
submit the reindex request as a task:
TaskSubmissionResponse task = esClient.submitReindexTask(reindex, RequestOptions.DEFAULT);
acquire the task id:
TaskId taskId = new TaskId(task.getTask());
then check the task status periodically:
GetTaskRequest taskQuery = new GetTaskRequest(taskId.getNodeId(), taskId.getId());
GetTaskResponse taskStatus;
do {
    Thread.sleep(TimeUnit.MINUTES.toMillis(1));
    taskStatus = esClient.tasks()
            .get(taskQuery, RequestOptions.DEFAULT)
            .orElseThrow(() -> new IllegalStateException("Reindex task not found. id=" + taskId));
} while (!taskStatus.isCompleted());
The Elasticsearch Java API docs on task handling just suck.
I don't think it's a good idea to wait indefinitely for the re-indexing to complete, and giving a very high timeout value is not a proper fix; it will cause more harm than good.
Instead you should examine the response and add more debug logging to find the root cause and address it. Also please have a look at my tips for improving re-indexing speed, which should fix some of your underlying issues.

Spring Integration Aggregator Throttler

I have one message SomeMessage that looks like this:
class SomeMessage {
    id,
    title
}
Currently, I aggregate messages based on id. Messages are released after 10 seconds.
.aggregate(a -> a
        .outputProcessor(messageProcessor())
        .messageStore(messageGroupStore())
        .correlationStrategy(correlationStrategy())
        .expireGroupsUponCompletion(true)
        .sendPartialResultOnExpiry(true)
        .groupTimeout(TimeUnit.SECONDS.toMillis(10)))
.handle(amqpOutboundEndpoint)
What I need is a way to throttle messages based on title property. If title=="A", it should still wait 10 seconds for aggregation; If title=="B" it should wait 60 seconds for aggregation and it should not be immediately sent to amqpOutboundEndpoint but it should have some throttling (eg. 30 seconds between every message that has title=="B").
What would be the best way to do this? Is there something like throttling on AmqpOutboundEndpoint?
UPDATE
.groupTimeout(messageGroup -> {
    if (anyMessageInGroupHasTitleB(messageGroup)) {
        return TimeUnit.SECONDS.toMillis(60);
    } else {
        return TimeUnit.SECONDS.toMillis(10);
    }
}))
.route(
        (Function<SomeMessage, Boolean>) ec -> ec.getTitle().equals("B"),
        m -> m.subFlowMapping(true, sf ->
                sf.channel(channels -> channels.queue(1))
                        .bridge(e -> e.poller(Pollers
                                .fixedDelay(60, TimeUnit.SECONDS)
                                .maxMessagesPerPoll(1))))
                .subFlowMapping(false, IntegrationFlowDefinition::bridge))
.handle(amqpOutboundEndpoint)
Use groupTimeoutExpression() instead of a fixed timeout...
payload.title == 'A' ? 10000 : 30000
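Plugged into the aggregator DSL from the question, that might look like the sketch below. Note this is an assumption about where the expression goes, and that for a group-timeout expression the SpEL root object is the message group, so the payload is typically reached through one of its messages (here via `one`):

.aggregate(a -> a
        .outputProcessor(messageProcessor())
        .messageStore(messageGroupStore())
        .correlationStrategy(correlationStrategy())
        .expireGroupsUponCompletion(true)
        .sendPartialResultOnExpiry(true)
        // evaluated per group: 'A' groups release after 10s, others after 30s
        .groupTimeoutExpression("one.payload.title == 'A' ? 10000 : 30000"))
.handle(amqpOutboundEndpoint)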

SPA: how to know if user's alive

I have: an SPA application, which means it can even do offline processing. My app asks the server only when it needs some additional info or wants to save something.
I hold user connections in a Guava Cache with an expiry policy, so I don't need to worry about session destruction after the timeout.
Crucial point: each time the user makes a request, I reset the timeout to avoid session destruction. When the user is inactive for some specified period, the Guava Cache just throws his session away.
Problem: the problem is specific to SPAs. With an SPA, the user not sending any requests doesn't mean he's inactive.
I want: to automatically close the user's session and log him out after the timeout.
Question: how can I know whether the user is active or not in an SPA?
The first thing that comes to mind is to use something like an expiring cache on the server side and send a keep_alive request from the client every 5 minutes or so to indicate that the client is active. But that seems like overengineering, plus we'd clog the server with hundreds of unnecessary requests.
So I found (no claim to be a pioneer) a better solution: the client calculates the inactivity period on its own and sends the appropriate request if inactive_period is greater than the timeout.
activityListener = {
    timeout: null,
    activityHandler: function () {
        $.cookie('last_activity', new Date().getTime());
    },
    initialize: function (timeout) {
        this.timeout = timeout;
        $.cookie('last_activity', new Date().getTime());
        // cookies are stored as strings, so compare against 'true', not the boolean
        if ($.cookie('do_activity_check') != 'true') {
            $.cookie('do_activity_check', 'true');
            setInterval(this.activityCheck, timeout / 2);
        }
        addEventListener('click', this.activityHandler, false);
        addEventListener('scroll', this.activityHandler, false);
    },
    handleTimeout: function () {
        if (oLoginPage.authorities != null) {
            le.send({
                "#class": "UserRequest$Logout",
                "id": "UserRequest.Logout"
            });
        }
    },
    activityCheck: function () {
        var after_last_activity_ms = new Date().getTime() - $.cookie('last_activity');
        if (after_last_activity_ms > activityListener.timeout) activityListener.handleTimeout();
    }
}
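The server-side half of this, the expire-after-access session store the question keeps in a Guava Cache, can be sketched in plain Java as follows (Guava's `CacheBuilder.expireAfterAccess` does this for real; the injected clock supplier is only there so expiry can be exercised deterministically):

```java
import java.time.Duration;
import java.time.Instant;
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Expire-after-access session store: any request refreshes the session,
// and sessions idle longer than the timeout are evicted.
public class SessionStore {
    private final Duration timeout;
    private final Supplier<Instant> clock;
    private final Map<String, Instant> lastSeen = new HashMap<>();

    public SessionStore(Duration timeout, Supplier<Instant> clock) {
        this.timeout = timeout;
        this.clock = clock;
    }

    // Called on every user request, mirroring the "reset timeout" behaviour.
    public void touch(String sessionId) {
        lastSeen.put(sessionId, clock.get());
    }

    public boolean isAlive(String sessionId) {
        Instant seen = lastSeen.get(sessionId);
        if (seen == null) return false;
        if (clock.get().isAfter(seen.plus(timeout))) {
            lastSeen.remove(sessionId);   // evict like the cache's expiry policy
            return false;
        }
        return true;
    }
}
```

The client-side inactivity check above then only needs to stop touching the store; the server evicts the session on its own once the timeout elapses.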
