I am testing a WebClient call that returns a Flux, and I need to wait for it to initialise properly.
I set up the Flux as null:
private Flux<String> events = null;
Then I call a WebClient to get the Flux from a remote URL:
events = getFlux(guid);
The WebClient method is:
WebClient client; // already setup with headers and URL
public Flux<String> getFlux(String guid) {
    return client.get()
            .uri(Props.getBaseEndpoint() + "?id=" + guid)
            .retrieve()
            .onStatus(status -> status.value() == 401, clientResponse -> Mono.empty())
            .bodyToFlux(String.class)
            .timeout(Duration.ofSeconds(Props.getTimeout()));
}
The getFlux method appears to return before the Flux is completely initialised. So I want to wait a couple of seconds for it:
Awaitility.await().atMost(5, TimeUnit.SECONDS).until(isFluxInitialised());
where something like:
public Callable<Boolean> isFluxInitialised() {
    return new Callable<Boolean>() {
        public Boolean call() throws Exception {
            if (events != null)
                return true;
            return false;
        }
    };
}
Waiting for the Flux to be non-null still causes a race condition in the test. I can't figure out what to wait for so that getFlux has returned an initialised Flux that can then be subscribed to. The test continues with a subscription to the Flux as below, but finishes before the test data sent to the remote endpoint can arrive in the subscription.
events.subscribe(e -> Logs.Info("event: " + e));
Not sure I understand the logic of isFluxInitialised, but looking at the description you could be confusing assembly time with subscription time. Also, please note that subscribe is not a synchronous operation, and your program could exit before results are available.
I would suggest to start with unit test using StepVerifier to make sure your flow is correct.
StepVerifier.create(getFlux(...))
        .expectNextCount(count)
        .verifyComplete();
If you need to wait until the Flux is complete in your logic, you can use the common pattern of a CountDownLatch. The same can be achieved with Awaitility if you like.
CountDownLatch completionLatch = new CountDownLatch(1);
getFlux(...)
        .doOnComplete(completionLatch::countDown)
        .doOnNext(e -> Logs.Info("event: " + e))
        .subscribe();
completionLatch.await();
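As a rough sketch of the Awaitility variant mentioned above (reusing the same placeholder getFlux and Logs helper from the question), you can flip a flag in doOnComplete and poll it instead of blocking on a latch:
AtomicBoolean completed = new AtomicBoolean();
getFlux(...)
        .doOnNext(e -> Logs.Info("event: " + e))
        .doOnComplete(() -> completed.set(true))
        .subscribe();
Awaitility.await().atMost(5, TimeUnit.SECONDS).until(completed::get);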
There is no reason to introduce the blocking operator blockFirst() into the flow. I'm still not sure about the use case, but technically you are trying to wait for the first element from the Flux. The same could be achieved without blocking:
AtomicBoolean elementAvailable = new AtomicBoolean();
getFlux()
        .doOnNext(rec -> elementAvailable.set(true))
        .subscribe();
Awaitility.await().atMost(5, TimeUnit.SECONDS).until(elementAvailable::get);
Thanks to @Alex's answer about assembly vs subscription time; that's the problem. I actually got Awaitility to work properly by moving it off waiting for "assembly" time (which I couldn't get to work) and instead waiting for the first subscription, using blockFirst() as below:
Awaitility.await().atMost(5, TimeUnit.SECONDS).until(isFluxInitialised());
and
public Callable<Boolean> isFluxInitialised() {
    return new Callable<Boolean>() {
        public Boolean call() throws Exception {
            if (events.blockFirst() != null)
                return true;
            return false;
        }
    };
}
We are given a Mono that handles some action (say a database update) and returns a value.
We want to add that Mono (transformed) to a special list that contains actions to be completed, for example, during shutdown.
That Mono may be eagerly subscribed after being added to the list, to start processing now, or .subscribe() might not be called at all, meaning it will only be subscribed during shutdown.
During shutdown we can iterate on the list in the following way:
for (Mono mono : specialList) {
    Object value = mono.block(); // (do something with value)
}
How do we transform the original Mono such that, when the shutdown code executes and the Mono was previously subscribed, the action will not be triggered again, but instead it either waits for the action to complete or replays its stored return value?
OK, it looks like it is as simple as calling mono.cache(). This is how I used it in practice:
public Mono<Void> addShutdownMono(Mono<Void> mono) {
    Mono<Void> cached = mono.cache(); // use a local so it can be captured in the lambda below
    Mono<Void> newMono = cached.doFinally(signal -> shutdownMonos.remove(cached));
    shutdownMonos.add(cached);
    return newMono;
}

public Function<Mono<Void>, Mono<Void>> asShutdownAwaitable() {
    return mono -> addShutdownMono(mono);
}
database.doSomeAction()
        .as(asShutdownAwaitable())
        .subscribe(); // Or don't subscribe at all, deferring until shutdown
Here is the actual shutdown code. It was also important to me that the Monos execute in the order they were added if the user chose not to eagerly subscribe them; that's the reason for Flux.concat instead of Flux.merge.
public void shutdown() {
    Flux.concat(Lists.transform(new ArrayList<>(shutdownMonos), mono -> mono.onErrorResume(err -> {
        logger.error("Async exception during shutdown, ignoring", err);
        return Mono.empty();
    }))).blockLast();
}
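To make the cache() behaviour concrete, here is a minimal standalone sketch (the names are made up for illustration): the first subscription runs the action, and any later subscription replays the stored result instead of triggering the work again.
AtomicInteger executions = new AtomicInteger();
Mono<String> action = Mono.fromCallable(() -> {
    executions.incrementAndGet(); // stands in for the database update
    return "done";
}).cache();

action.subscribe();                   // eager subscription: the action runs once
String replayed = action.block();     // "shutdown" path: replays the cached value
System.out.println(executions.get()); // prints 1, not 2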
PROBLEM
A method needs to wait for the result of a Mono operation, use it in a Flux operation, and return a Flux.
public Flux<My> getMy() {
    Mono<ZonedDateTime> dateTimeMono = getDateTime();
    Flux<My> mies = reactiveMongoTemplate.find(
            new Query(Criteria.where("dateTime").gt(dateTimeMono)),
            My.class,
            collectionName);
    return mies;
}
RESEARCH
I expect that the dateTimeMono stream is subscribed and terminated by the Mongo reactive driver, so I don't subscribe myself. If I use Mono.zip, I get Mono<Flux> as the return type.
TASKS
How do I wait for the dateTimeMono value, use it in the Flux operation, and get a Flux out of it?
You should use flatMapMany:
public Flux<My> getMy() {
    return getDateTime()
            .flatMapMany(date -> reactiveMongoTemplate.find(
                    new Query(Criteria.where("dateTime").gt(date)),
                    My.class,
                    collectionName));
}
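flatMapMany is the Mono operator that switches from the resolved value to a Flux built from it, which is exactly the shape needed here. A tiny unrelated illustration (hypothetical values, just to show the transformation):
Mono<Integer> count = Mono.just(3);
Flux<Integer> numbers = count.flatMapMany(n -> Flux.range(1, n));
numbers.subscribe(System.out::println); // prints 1, 2, 3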
I have a SOAP call that I need to make and then process the results from the SOAP call in a REST call. Each set of calls is based on a batch of records. I am getting completely lost trying to get this to run using JDK 8 streams, as asynchronously as possible. How can I accomplish this?
SOAP Call:
CompletableFuture<Stream<Product>> getProducts(final Set<String> criteria)
{
    return supplyAsync(() -> {
        ...
        return service.findProducts(request);
    }, EXECUTOR_THREAD_POOL);
}
REST Call:
final CompletableFuture<Stream<Result>> validateProducts(final Stream<Product> products)
{
    return supplyAsync(() -> service
            .submitProducts(products, false)
            .stream(), EXECUTOR_THREAD_POOL);
}
I am trying to invoke the SOAP call, pass the result into the REST call, and collect the results using a JDK 8 stream. Each SOAP->REST call handles a "set" of records (or batch), similar to paging. (This is totally not working right now; it's just an example.)
@Test
public void should_execute_validations()
{
    final Set<String> samples = generateSamples();

    // Prepare paging...
    final int total = samples.size();
    final int pages = getPages(total);
    log.debug("Items: {} / Pages: {}", total, pages);

    final Stopwatch stopwatch = createStarted();

    final Set<Result> results = range(0, pages)
            .mapToObj(index -> {
                final Set<String> subset = subset(index, samples);
                return getProducts(subset)
                        .thenApply(this::validateProducts);
            })
            .flatMap(CompletableFuture::join)
            .collect(toSet());

    log.debug("Executed {} calls in {}", pages, stopwatch.stop());
    assertThat(results, notNullValue());
}
I think there are two usages that are incorrect in your example: thenApply and join.
To chain the 1st call (SOAP) and the 2nd call (REST), you need to use thenCompose instead of thenApply. This is because the method validateProducts returns a CompletableFuture, so using thenApply would create CompletableFuture<CompletableFuture<Stream<Result>>> in your stream mapping, while what you need is probably CompletableFuture<Stream<Result>>. Using thenCompose resolves this problem because it is analogous to Optional.flatMap or Stream.flatMap:
.mapToObj(index -> {
    final Set<String> subset = subset(index, samples);
    return getProducts(subset)
            .thenCompose(this::validateProducts);
})
The 2nd incorrect usage is join. Using join blocks the current thread while waiting for the result of that CompletableFuture. In your case there are N completable futures, where N is the number of pages. Instead of waiting for them one by one, the better solution is to wait for all of them using CompletableFuture.allOf(...). This method returns a new CompletableFuture that is completed when all of the given CompletableFutures complete. So I suggest that you modify your stream usage to return a list of futures, then wait for their completion, and finally retrieve the results:
List<CompletableFuture<Stream<Result>>> futures = range(0, pages)
        .mapToObj(index -> {
            final Set<String> subset = subset(index, samples);
            return getProducts(subset).thenCompose(this::validateProducts);
        })
        .collect(Collectors.toList());

CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).join();

for (CompletableFuture<Stream<Result>> cf : futures) {
    // TODO Handle the results and exceptions here
}
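For example, once allOf has completed, join() no longer blocks, so the loop (or a stream) can simply gather the results. A sketch reusing the toSet collector from the original test:
Set<Result> results = futures.stream()
        .flatMap(CompletableFuture::join) // safe here: every future is already complete
        .collect(toSet());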
You can see the complete program on GitHub.
In the following code
public CompletableFuture<String> getMyFuture(String input)
{
    CompletableFuture<String> future = new CompletableFuture<String>().thenApply((result) -> result + "::");
    ExecutorService service = Executors.newFixedThreadPool(6);
    service.submit(() -> {
        try {
            future.complete(getResult(input));
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    });
    return future;
}

public String getResult(String input) throws InterruptedException
{
    Thread.sleep(3000);
    return "hello " + input + " :" + LocalTime.now();
}
I am expecting the output to contain a trailing "::", but the program's output is "hello first :16:49:30.231". Is my implementation of apply correct?
You're invoking the complete() method on the CompletionStage that you got on the first line (where you call the thenApply method).
If your intention is to complete the CompletableFuture with some string value (future.complete(getResult(input))) and then apply some function, you'd better place thenApply() at the end (where you return the future).
public CompletableFuture<String> getMyFuture(String input)
{
    CompletableFuture<String> future = new CompletableFuture<String>();
    ExecutorService service = Executors.newFixedThreadPool(6);
    service.submit(() -> {
        try {
            future.complete(getResult(input));
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    });
    return future.thenApply(result -> result + "::");
}
I don't know how to explain it in a more understandable way, but in short: you're calling the complete() method on the wrong object reference inside your Runnable.
You are creating two CompletableFuture instances. The first, created via new CompletableFuture<String>(), will never get completed; you don't even keep a reference to it that would make completing it possible.
The second, created by calling .thenApply((result) -> result+ "::") on the first one, could get completed by evaluating the specified function once the first one completed, using the first’s result as an argument to the function. However, since the first never completes, the function becomes irrelevant.
But CompletableFuture instances can get completed by anyone, not just a function passed to a chaining method. The possibility to get completed is even prominently displayed in its class name. In case of multiple completion attempts, one would turn out to be the first one, winning the race and all subsequent completion attempts will be ignored. In your code, you have only one completion attempt, which will successfully complete it with the value returned by getResult, without any adaptations.
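A tiny illustration of that last point: only the first completion attempt wins, and later attempts are ignored.
CompletableFuture<String> cf = new CompletableFuture<>();
cf.complete("first");
cf.complete("second");         // ignored, the future is already completed
System.out.println(cf.join()); // prints "first"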
You could change your code to keep a reference to the first CompletableFuture instance to complete it manually, so that the second gets completed using the function passed to thenApply, but on the other hand, there is no need for manual completion here:
public CompletableFuture<String> getMyFuture(String input) {
    ExecutorService service = Executors.newFixedThreadPool(6);
    return CompletableFuture.supplyAsync(() -> getResult(input), service)
            .thenApply(result -> result + "::");
}

public String getResult(String input) {
    LockSupport.parkNanos(TimeUnit.SECONDS.toNanos(3));
    return "hello " + input + " :" + LocalTime.now();
}
When specifying the executor to supplyAsync, the function will be evaluated using that executor. More is not needed.
Needless to say, that's just an example. You should never create a temporary thread pool executor, as the whole point of a thread pool executor is to allow reusing the threads (and you're only ever using one of these six threads), and it should be shut down after use.
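If you do want a dedicated executor for the example, a rough sketch of the proper lifecycle (with a hypothetical input value) is to shut it down once the work is finished:
ExecutorService service = Executors.newFixedThreadPool(6);
try {
    String result = CompletableFuture.supplyAsync(() -> getResult("first"), service)
            .thenApply(r -> r + "::")
            .join();
    System.out.println(result);
} finally {
    service.shutdown(); // release the pooled threads once the work is done
}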
I've been experimenting with different ways to handle blocking methods with disconnected results while maintaining state which might have been interrupted. I've found it to be frustrating having to deal with disparate classes and methods where sending and receiving are difficult to align.
In the following example, SomeBlockingMethod() normally returns void as a message is sent to some other process. But instead I've made it synchronized with a listener which receives the result. By spinning it off to a thread, I can wait() for the result with a timeout or indefinitely.
This is nice because once the result is returned, I can continue working with a particular state which I had to pause while waiting for the result of the threaded task.
Is there anything wrong with my approach?
Although this question may seem generic, I am specifically looking for advice on threading in Java.
Example pseudocode:
public class SomeClass implements Command {
    @Override
    public void onCommand() {
        Object stateObject = new SomeObjectWithState();

        // Do things with stateObject

        Runnable rasync = () -> {
            Object r = SomeBlockingMethod();

            // Blocking method timed out
            if (r == null)
                return;

            Runnable rsync = () -> {
                // Continue operation on r which must be done synchronously
                // Also do things with stateObject
            };
            Scheduler().run(rsync);
        };
        Scheduler().run(rasync);
    }
}
Update with CompletableFuture:
CompletableFuture<Object> f = CompletableFuture.supplyAsync(() -> {
    return SomeBlockingMethod();
});

f.thenRun(() -> {
    Object r = null;
    try {
        r = f.get();
    } catch (Exception e) {
        e.printStackTrace();
    }
    // Continue but done asynchronously
});
or better yet:
CompletableFuture.supplyAsync(() -> {
    return SomeBlockingMethod();
}).thenAccept((Object r) -> {
    // Continue but done asynchronously
});
The problem with using strictly CompletableFuture is that CompletableFuture.thenAccept is run from the global thread pool and is not guaranteed to be synchronous with the calling thread.
Adding the scheduler back for the synchronous task fixes this:
CompletableFuture.supplyAsync(() -> {
    return SomeBlockingMethod();
}).thenAccept((Object r) -> {
    Runnable rsync = () -> {
        // Continue operation on r which must be done synchronously
    };
    Scheduler().run(rsync);
});
A caveat of using CompletableFuture compared to the complete scheduler method is that any previous state which exists outside must be final or effectively final.
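A made-up snippet to show that constraint: a captured local cannot be reassigned inside the lambda, so mutable state has to go through a holder such as AtomicReference (or the existing stateObject can simply be mutated in place, since only reassignment is forbidden).
AtomicReference<Object> latest = new AtomicReference<>();
CompletableFuture.supplyAsync(() -> SomeBlockingMethod())
        .thenAccept(r -> latest.set(r)); // latest is effectively final, its contents are not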
You should check out RxJava; it uses stream manipulation and has threading support.
api.getPeople()
        .observeOn(Schedulers.computation())
        .filter(p -> p.isEmployee())
        .map(p -> String.format("%s %s - %s", p.firstName(), p.lastName(), p.payrollNumber()))
        .toList()
        .observeOn(uiScheduler) // whatever scheduler drives your UI
        .subscribe(p -> screen.setEmployees(p));