Webflux producer consumer problem (webClient) - java

Hi, I have a problem with WebFlux and backpressure:
Flux.range(0, 100)
    .flatMap((Integer y) -> {
        return reallySlowApi();
    })
    .doOnEach((Signal<String> x1) -> {
        log("next-------");
    })
    .subscribeOn(Schedulers.elastic())
    .subscribe();
How can I limit the calls to one call per 5 seconds? Note: only reallySlowApi can be modified.
private Mono<String> reallySlowApi() {
    return webClient
        .get()
        .retrieve()
        .bodyToMono(String.class);
}
Edit: I know about delayElements, but it won't resolve the issue if the API gets even slower. I need an optimal way of working with reallySlowApi.

One way is with delayElements()
public void run() {
    Flux.range(0, 100)
        .delayElements(Duration.ofSeconds(5)) // only emit every 5 seconds
        .flatMap(y -> reallySlowApi())
        .doOnNext(x1 -> System.out.println("next-------"))
        .blockLast(); // subscribe AND wait for the flux to complete
}
private Mono<String> reallySlowApi() {
    return Mono.just("next");
}
You could also use Flux.interval() plus a take() to limit the number of iterations.
Flux.interval(Duration.ofSeconds(5))
    .take(100)
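A minimal sketch of that interval-driven variant, reusing reallySlowApi() from the question (the blockLast() at the end is only there to keep the example alive until the flux completes), might look like:
Flux.interval(Duration.ofSeconds(5)) // emit a tick every 5 seconds
    .take(100)                       // stop after 100 ticks
    .flatMap(tick -> reallySlowApi())
    .doOnNext(x -> System.out.println("next-------"))
    .blockLast();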
Note that the subscribeOn in your example doesn't do anything in particular, as the subscription applies to the generation of the range 0-100, which is not blocking.

You can use a retry mechanism in your WebClient code:
.doOnError(error -> handleError(error.getMessage()))
.timeout(Duration.ofSeconds(ServiceConstants.FIVE))
.retryWhen(
    Retry.backoff(retryCount, Duration.ofSeconds(ServiceConstants.FIVE))
        .filter(throwable -> throwable instanceof TimeoutException)
)
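Wired into the reallySlowApi() from the question, that could look roughly like the sketch below; handleError, retryCount and ServiceConstants.FIVE are placeholders taken from the snippet above, and Retry is reactor.util.retry.Retry:
private Mono<String> reallySlowApi() {
    return webClient
        .get()
        .retrieve()
        .bodyToMono(String.class)
        .doOnError(error -> handleError(error.getMessage()))  // log/handle the failure
        .timeout(Duration.ofSeconds(ServiceConstants.FIVE))   // give up on a single call after the configured time
        .retryWhen(
            Retry.backoff(retryCount, Duration.ofSeconds(ServiceConstants.FIVE)) // exponential backoff between attempts
                .filter(throwable -> throwable instanceof TimeoutException)      // only retry timeouts
        );
}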

Just to post the solution that I found here: in WebFlux, when mapping the response, we can pass a concurrency parameter to flatMap, which solves this issue.
flatMap(mapper, concurrency)

.flatMap((Integer y) -> {
    return reallySlowApi();
}, 3)
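Applied to the original pipeline, a sketch could look like the following; a concurrency of 1 would make the calls strictly sequential, while 3 allows at most three calls to reallySlowApi() in flight at a time:
Flux.range(0, 100)
    .flatMap((Integer y) -> reallySlowApi(), 3) // at most 3 concurrent calls to the slow API
    .doOnNext(x -> log("next-------"))
    .subscribe();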

Related

Wait for a webclient remote flux with awaitility

I am testing a webclient that returns a flux, and I need to wait for it to initialise properly. Like this:
I set up a flux as null:
private Flux<Event> events = null;
Then I call a webclient to get the Flux from a remote URL:
Flux<String> events = getFlux(guid);
The webclient is
WebClient client; // already setup with headers and URL

public Flux<String> getFlux(String guid) {
    return client.get()
        .uri(Props.getBaseEndpoint() + "?id=" + guid)
        .retrieve()
        .onStatus(status -> status.value() == 401, clientResponse -> Mono.empty())
        .bodyToFlux(String.class)
        .timeout(Duration.ofSeconds(Props.getTimeout()));
}
The getFlux method appears to return before the Flux is completely initialised. So I want to wait a couple of seconds for it:
Awaitility.await().atMost(5, TimeUnit.SECONDS).until(isFluxInitialised());
where something like:
public Callable<Boolean> isFluxInitialised() {
    return new Callable<Boolean>() {
        public Boolean call() throws Exception {
            if (events != null)
                return true;
            return false;
        }
    };
}
Waiting for the Flux to be not null still causes a race condition in the test. I can't figure out what to wait for so that the getFlux has returned an initialised Flux that can then be subscribed to. The test continues with a subscription to the flux as below but finishes before the test data that's sent to the remote endpoint can arrive in the subscription.
events.subscribe(e -> Logs.Info("event: " + e));
I'm not sure I understand the logic of isFluxInitialised, but looking at the description you could be confused by assembly vs. subscription time. Also, please note that subscribe is not a synchronous operation and your program could exit before results are available.
I would suggest starting with a unit test using StepVerifier to make sure your flow is correct:
StepVerifier.create(getFlux(...))
    .expectNextCount(count)
    .verifyComplete();
If you need to wait until the Flux is complete in your logic, you can use the common CountDownLatch pattern. The same can be achieved with Awaitility if you like.
CountDownLatch completionLatch = new CountDownLatch(1);
getFlux(...)
    .doOnComplete(completionLatch::countDown)
    .doOnNext(e -> Logs.Info("event: " + e))
    .subscribe();
completionLatch.await();
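If you prefer Awaitility over the latch, a sketch of the equivalent completion wait (reusing getFlux and Logs from the question) could be:
AtomicBoolean completed = new AtomicBoolean();
getFlux(guid)
    .doOnComplete(() -> completed.set(true))
    .doOnNext(e -> Logs.Info("event: " + e))
    .subscribe();
Awaitility.await().atMost(5, TimeUnit.SECONDS).until(completed::get);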
There is no reason to introduce the blocking operator blockFirst() into the flow. I'm still not sure about the use case, but technically you are trying to wait for the first element from the Flux. The same could be achieved without blocking:
AtomicBoolean elementAvailable = new AtomicBoolean();
getFlux()
    .doOnNext(rec -> elementAvailable.set(true))
    .subscribe();
Awaitility.await().atMost(5, TimeUnit.SECONDS).until(elementAvailable::get);
Thanks to @Alex's answer about assembly vs. subscription time; that was the problem. I actually got Awaitility to work properly by moving it off waiting for "assembly" time (which I couldn't get to work) and instead waiting for the first subscription, by using blockFirst() as below:
Awaitility.await().atMost(5, TimeUnit.SECONDS).until(isFluxInitialised());
and
public Callable<Boolean> isFluxInitialised() {
    return new Callable<Boolean>() {
        public Boolean call() throws Exception {
            if (events.blockFirst() != null)
                return true;
            return false;
        }
    };
}

Throw an exception if a Reactor Flux doesn't complete in a set time

I have a potentially long-running Flux that I'd like to stop after a certain duration has passed. I've found several methods of doing this, however what I'm struggling with is how to be able to tell that the Flux timed out rather than just completed naturally.
Sample (very simple) code:
Flux.range(0, 10000)
    .take(Duration.ofMillis(1))
    .doOnNext(System.out::println)
    .collectList()
    .block();
What I'd like is something like this:
Flux.range(0, 10000)
    .take(Duration.ofMillis(1))
    .doOnNext(System.out::println)
    .doOnError(t -> {
        if (t instanceof TimeoutException) {
            System.out.println("I timed out");
        }
    })
    .collectList()
    .block();
However, take doesn't seem to notify on error; all I see is a terminate and a complete signal, which is what I get if I don't include the take in the Flux and just let it complete naturally.
I've looked briefly into the timeout operator, which does throw an exception; however, timeout looks like it only throws if the Flux doesn't emit an element within a certain time, rather than if the whole Flux doesn't complete within a certain time.
Does anyone have any tips or examples of how they've solved this?
Thanks in advance!
UPDATED
You can use the takeUntilOther operator to signal an error after a given time:
Duration timeout = Duration.ofMillis(500);

Flux.range(0, 10000)
    .delayElements(Duration.ofMillis(10))
    .doOnNext(System.out::println)
    .takeUntilOther(Mono.delay(timeout).then(Mono.error(new TimeoutException())))
    // .takeUntilOther(Mono.never().timeout(Duration.ofMillis(500))) // based on Michael's answer this is also an option
    .doOnError(t -> {
        if (t instanceof TimeoutException) {
            System.out.println("I timed out");
        }
    })
    .blockLast();
If the Flux is converted into a Mono, for example using the .collectList() or .then() operator, then you can simply apply the .timeout(...) operator after that and it will behave as you require:
Flux.range(0, 10000)
    .delayElements(Duration.ofMillis(10))
    .doOnNext(System.out::println)
    .collectList()
    .timeout(Duration.ofMillis(500))
    .doOnError(t -> {
        if (t instanceof TimeoutException) {
            System.out.println("I timed out");
        }
    })
    .block();
Instead of using take(), you can use .mergeWith(Flux.never().timeout(Duration.ofMillis(500))) to merge your flux with another that will always throw a timeout exception after a certain timeframe.
To take your example, you'd do something like:
Flux.range(0, 10000)
    .delayElements(Duration.ofMillis(10))
    .mergeWith(Flux.<Integer>never().timeout(Duration.ofMillis(500)))
    .doOnNext(System.out::println)
    .doOnError(t -> {
        if (t instanceof TimeoutException) {
            System.out.println("I timed out");
        }
    })
    .collectList()
    .block();
...which will give you something like:
(...snip)
24
25
26
27
28
29
30
31
I timed out

RxJava retryWhen (exponential back-off) not working

So I know this has been asked many times before, but I have tried many things and nothing seems to work.
Let's start with these blogs/articles/code:
https://blog.danlew.net/2016/01/25/rxjavas-repeatwhen-and-retrywhen-explained/
https://jimbaca.com/rxjava-retrywhen/
http://blog.inching.org/RxJava/2016-12-12-rx-java-error-handling.html
https://pamartinezandres.com/rxjava-2-exponential-backoff-retry-only-when-internet-is-available-5a46188ab175
https://gist.github.com/wotomas/35006d156a16345349a2e4c8e159e122
And many others.
In a nutshell all of them describe how you can use retryWhen to implement exponential back-off. Something like this:
source
    .retryWhen(
        errors -> {
            return errors
                .zipWith(Observable.range(1, 3), (n, i) -> i)
                .flatMap(
                    retryCount -> {
                        System.out.println("retry count " + retryCount);
                        return Observable.timer((long) Math.pow(1, retryCount), SECONDS);
                    });
        })
Even the documentation in the library agrees with it:
https://github.com/ReactiveX/RxJava/blob/3.x/src/main/java/io/reactivex/rxjava3/core/Observable.java#L11919.
However, I've tried this and some pretty similar variations, not worth describing here, and nothing seems to work. There is a way in which the examples work, and that is using blocking subscribers, but I want to avoid blocking threads.
So if we apply a blocking subscriber to the previous observable, like this:
.blockingForEach(System.out::println);
It works as expected. But that's not the idea. If we try:
.subscribe(
    x -> System.out.println("onNext: " + x),
    Throwable::printStackTrace,
    () -> System.out.println("onComplete"));
The flow runs only once, which is not what I want to achieve.
Does that mean it cannot be used the way I'm trying to? From the documentation, it doesn't seem to be a problem to accomplish my requirement this way.
Any idea what I am missing?
TIA.
Edit: There are 2 ways I'm testing this:
A test method (using TestNG):
Observable<Integer> source =
    Observable.just("test")
        .map(
            x -> {
                System.out.println("trying again");
                return Integer.parseInt(x);
            });

source
    .retryWhen(
        errors -> {
            return errors
                .zipWith(Observable.range(1, 3), (n, i) -> i)
                .flatMap(
                    retryCount -> {
                        return Observable.timer((long) Math.pow(1, retryCount), SECONDS);
                    });
        })
    .subscribe(...);
From a Kafka consumer (using Spring Boot). This is only the subscription to the observer, but the retry logic is what I described earlier in the post.
@KafkaListener(topics = "${kafka.config.topic}")
public void receive(String payload) {
    log.info("received payload='{}'", payload);
    service
        .updateMessage(payload)
        .subscribe(...)
        .dispose();
}
The main issue with your code is that Observable.timer operates on the computation scheduler by default. This adds extra effort when trying to verify the behaviour within a test.
Here is some unit-testing code that verifies that your retry code is actually retrying.
It adds a counter, just so we can easily check how many calls have happened.
It uses the TestScheduler instead of the computation scheduler so that we can simulate the passage of time through advanceTimeBy.
TestScheduler testScheduler = new TestScheduler();
AtomicInteger counter = new AtomicInteger();

Observable<Integer> source =
    Observable.just("test")
        .map(
            x -> {
                System.out.println("trying again");
                counter.getAndIncrement();
                return Integer.parseInt(x);
            });

TestObserver<Integer> testObserver = source
    .retryWhen(
        errors -> {
            return errors
                .zipWith(Observable.range(1, 3), (n, i) -> i)
                .flatMap(
                    retryCount -> {
                        return Observable.timer((long) Math.pow(1, retryCount), SECONDS, testScheduler);
                    });
        })
    .test();

assertEquals(1, counter.get());
testScheduler.advanceTimeBy(1, SECONDS);
assertEquals(2, counter.get());
testScheduler.advanceTimeBy(1, SECONDS);
assertEquals(3, counter.get());
testScheduler.advanceTimeBy(1, SECONDS);
assertEquals(4, counter.get());
testObserver.assertComplete();

Limit for `onErrorContinue(...)` in Flux?

I have a (possibly infinite) Flux source that is supposed to first store each message (e.g. into a database) and then asynchronously forward the messages (e.g. using Spring WebClient).
The forward(s) in case of failure are supposed to log an error, without completing the source Flux.
However, I realized that forwards within the flow (flatMap(...)) block execution of the source Flux after exactly 256 messages that cause exceptions (e.g. reactor.retry.RetryExhaustedException).
Representative example that fails in the assert since only 256 messages are processed:
@Test
@SneakyThrows
public void sourceBlockAfter256Exceptions() {
    int numberOfRequests = 500;
    Set<Integer> sink = new HashSet<>();

    Flux
        .fromStream(IntStream.range(0, numberOfRequests).boxed())
        .map(sink::add)
        .flatMap(i -> Mono
            // normally the forwards are contained here e.g. by means of Mono.when(...).thenReturn(...).retryWhen(...):
            .error(new Exception("any"))
        )
        .onErrorContinue((throwable, o) -> log.error("Error", throwable))
        .subscribe();

    Thread.sleep(3000);
    Assertions.assertEquals(numberOfRequests, sink.size());
}
Doing the forward within subscribe(...) doesn't block the source Flux, but that's certainly no solution, since I don't want to possibly lose messages.
Questions:
What has happened here? (probably related to some state stored in just one bit)
How can I do this correctly?
EDIT:
According to the discussion below, I've constructed an example that uses FluxMessageChannel (which, to my understanding, is made for infinite streams and definitely not expected to block after 256 errors) and has exactly the same behaviour:
@Test
@SneakyThrows
public void maxConnectionWithChannelTest() {
    int numberOfRequests = 500;
    Set<Integer> sink = new HashSet<>();

    FluxMessageChannel fluxMessageChannel = MessageChannels.flux().get();
    fluxMessageChannel.subscribeTo(
        Flux
            .fromStream(IntStream
                .range(0, numberOfRequests).boxed()
                .map(i -> MessageBuilder.withPayload(i).build())
            )
            .map(Message::getPayload)
            .map(sink::add)
            .flatMap(i -> Mono.error(new Exception("whatever")))
    );

    Flux
        .from(fluxMessageChannel)
        .subscribe();

    Thread.sleep(3000);
    Assert.assertEquals(numberOfRequests, sink.size());
}
EDIT:
I just raised an issue in the reactor core project: https://github.com/reactor/reactor-core/issues/2011

Use Fallback Observable x number of times

I have an Observable which implements Error handling in the onErrorResumeNext method.
getMyObservable(params)
    .take(1)
    .doOnError(e -> {
    })
    .onErrorResumeNext(throwable -> {
        if (throwable.getMessage().contains("401")) {
            return getMyObservable(params);
        } else {
            sendServerCommunicationError();
            return Observable.error(throwable);
        }
    })
    .subscribe(result -> {
        // ...
    });
getMyObservable() returns a web service request from a generated client. The use case is: if we receive a 401, we may need to refresh the client with a new UserToken. That is why we use the fallback Observable in onErrorResumeNext() and cannot just use retry.
I have some questions:
Why do I need to implement doOnError? If I don't implement it, I sometimes get an "onError not implemented" exception. I thought that when I use onErrorResumeNext, this method is automatically used in case of an error.
How can I achieve that, on specific errors (like 401), I use a fallback Observable with some backoff time, and after 5 attempts I produce an error? Can I combine retryWhen and onErrorResumeNext somehow, or is it done differently?
Why do I need to implement doOnError?
You don't and doOnError is not an error handler but a peek into the error channel. You have to implement an error handler in subscribe:
.subscribe(result -> {
        // ...
    },
    error -> {
        // ...
    });
How can I achieve that on specific Errors (like 401) I use a fallback Observable with some backoff time and after 5 Times
Use retryWhen:
Observable.defer(() -> getMyObservable(params))
    .retryWhen(errors -> {
        AtomicInteger count = new AtomicInteger();
        return errors.flatMap(error -> {
            if (error.toString().contains("401")) {
                int c = count.incrementAndGet();
                if (c <= 5) {
                    return Observable.timer(c, TimeUnit.SECONDS);
                }
                return Observable.error(new Exception("Failed after 5 retries"));
            }
            return Observable.error(error);
        });
    })
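For completeness, a short usage sketch that attaches the error handler from the first part of this answer to the retrying chain above (retryingObservable stands for the Observable.defer(...).retryWhen(...) chain shown above, and sendServerCommunicationError comes from the question):
retryingObservable.subscribe(
    result -> {
        // handle result
    },
    error -> sendServerCommunicationError()); // reached for non-401 errors and after 5 failed retries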
