I have a use case in my Spring Boot application as follows:
I would like to fetch the id field value from the response with the following function:
String id = getIdFromResponse(response);
If I don't get any id in the response, then I check if the id field is present in the request argument with the following function:
String id = getIdFromRequest(request);
As of now, I am invoking them sequentially, but I would like to run these two functions in parallel and stop as soon as I get an id from either of them.
I am wondering if there is any way to implement this using streams in Java 8.
You can use something like this:
String id = Stream.<Supplier<String>>of(
        () -> getIdFromResponse(response),
        () -> getIdFromRequest(request)
    )
    .parallel()
    .map(Supplier::get)
    .filter(Objects::nonNull)
    .findFirst()
    .orElseThrow();
The suppliers are needed: without them, both method calls are evaluated eagerly and therefore still run sequentially.
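For contrast, a version without the suppliers (shown here only to illustrate the point, not taken from the question) evaluates both calls before the stream is even built:

// Both methods are invoked sequentially right here, as method arguments,
// before Stream.of() ever runs, so nothing happens in parallel.
String id = Stream.of(getIdFromResponse(response), getIdFromRequest(request))
    .filter(Objects::nonNull)
    .findFirst()
    .orElseThrow();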
I also assumed that your methods return null when nothing is found, so I had to filter these values out with .filter(Objects::nonNull).
Depending on your use case, you can replace .orElseThrow() with something different, like .orElse(null).
There is no need to use the Stream API when there is a method designed exactly for this:
ExecutorService::invokeAny(Collection<? extends Callable<T>>)
Executes the given tasks, returning the result of one that has completed successfully (i.e., without throwing an exception), if any do. Upon normal or exceptional return, tasks that have not completed are cancelled.
List<Callable<String>> collection = Arrays.asList(
() -> getIdFromResponse(response),
() -> getIdFromRequest(request)
);
// you want the same number of threads as the size of the collection
ExecutorService executorService = Executors.newFixedThreadPool(collection.size());
String id = executorService.invokeAny(collection);
Three notes:
There is also an overloaded variant with a timeout that throws TimeoutException if no result is available in time: invokeAny(Collection<? extends Callable<T>>, long, TimeUnit).
You need to handle ExecutionException and InterruptedException thrown by the invokeAny method.
Don't forget to shut down the executor service once you are done; see the sketch below.
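A minimal sketch putting those three notes together (assuming the same collection as above; the 5-second timeout is just an example value):

ExecutorService executorService = Executors.newFixedThreadPool(collection.size());
try {
    // Waits at most 5 seconds for the first task that completes successfully.
    String id = executorService.invokeAny(collection, 5, TimeUnit.SECONDS);
    // ... use id ...
} catch (TimeoutException e) {
    // no task produced a result in time
} catch (ExecutionException e) {
    // no task completed successfully
} catch (InterruptedException e) {
    Thread.currentThread().interrupt(); // restore the interrupt flag
} finally {
    executorService.shutdown(); // release the threads once you are done
}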
If you want to be in full control over when to enable the alternative evaluation, you may use CompletableFuture:
CompletableFuture<String> job
= CompletableFuture.supplyAsync(() -> getIdFromResponse(response));
String id;
try {
id = job.get(300, TimeUnit.MILLISECONDS);
}
catch(TimeoutException ex) {
// did not respond within the specified time, set up alternative
id = job.applyToEither(
CompletableFuture.supplyAsync(() -> getIdFromRequest(request)), s -> s).join();
}
catch(InterruptedException | ExecutionException ex) {
    // handle error; assign a fallback (or rethrow) so that id is definitely assigned
    id = null;
}
The second job is only submitted when the first did not complete within the specified time. Then, whichever job responds first will provide the result value.
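If you do not need that gating and are fine with both lookups starting right away, a simpler variant (a sketch, not from the original answer) just races the two futures with applyToEither:

// Both lookups start immediately; whichever completes first provides the result.
String id = CompletableFuture
    .supplyAsync(() -> getIdFromResponse(response))
    .applyToEither(
        CompletableFuture.supplyAsync(() -> getIdFromRequest(request)),
        s -> s)
    .join();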
I've created a reactive flow in my controller endpoint addEntry, where one object inside it should be created only once per request since it holds state.
#Override
public Mono<FileResultDto> addEntry(final Flux<byte[]> body,
final String fileId) {
return keyVaultRepository.findByFiletId(fileId)
.switchIfEmpty(Mono.defer(() -> {
final KeyVault keyVault = KeyVault.of(fileId);
return keyVaultRepository.save(keyVault);
}))
.map(keyVault -> Mono
.just(encryption.createEncryption(keyVault.getKey(), ENCRYPT_MODE)) // creates an Encryption object that holds state
.cache())
.map(encryption -> Flux
.from(body)
.map(bytes -> encryption
.share()
.block()
.update(bytes) // works with the state and changes it per byte[] going through this flux
)
)
.flatMap(flux -> persistenceService.addEntry(flux, fileId));
}
Before I asked this question I used encryption.block(), which was failing.
I found this one and updated my code accordingly (added .share()).
The test itself is working, but I am wondering if this is the proper way to work with an object that should be created and used only once in the reactive flow, provided by
encryptionService.createEncryption(keyVault.getKey(), ENCRYPT_MODE)
Happy to hear your opinion.
Mono.just is only a wrapper around a pre-computed value, so there is no need to cache or share it: on subscription it simply hands back the value it already holds.
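As a small illustration (not taken from the question's code), Mono.just captures an already computed value, while Mono.fromCallable defers the computation until subscription:

// The value is computed right here, once; the Mono merely replays it to every subscriber.
Mono<Encryption> eager = Mono.just(encryption.createEncryption(keyVault.getKey(), ENCRYPT_MODE));

// The callable runs only when someone subscribes, once per subscription.
Mono<Encryption> lazy = Mono.fromCallable(() -> encryption.createEncryption(keyVault.getKey(), ENCRYPT_MODE));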
But, in your example, there is something I do not understand.
If we simplify / decompose it, it gives the following:
Mono<KeyVault> vault = keyVaultRepository.findByFiletId(fileId)
    .switchIfEmpty(Mono.defer(() -> keyVaultRepository.save(KeyVault.of(fileId))));
Mono<Mono<Encryption>> fileEncryption = vault
    .map(it -> Mono.just(createEncryption(it.getKey())).cache()); // <1>
Mono<Flux<Encryption>> encryptedContent = fileEncryption.map(encryption -> Flux
.from(body)
.map(bytes -> encryption
.share()
.block()
.update(bytes))); // <2>
Mono<FileResultDto> file = encryptedContent.flatMap(flux -> persistenceService.addEntry(flux, fileId));
Why are you trying to wrap your encryption object? The result is already part of a reactive pipeline. Doing Mono.just() is redundant because you are already in a map operation, and doing cache() over just() is also redundant, because a "Mono.just" is essentially a permanent cache.
What does your "update(bytes)" method do? Does it mutate the same object every time? Because if it does, you might have a problem here. Reactive streams cannot ensure thread-safety and proper ordering of actions on internal mutated state; that is out of their reach. You might bypass the problem by using the scan operator, though.
Without additional details, I would start refactoring the code like this:
Mono<KeyVault> vault = keyVaultRepository.findByFileId(fileId)
    .switchIfEmpty(Mono.defer(() -> keyVaultRepository.save(KeyVault.of(fileId))));

Mono<Encryption> fileEncryption = vault.map(it -> createEncryption(it.getKey()));
Flux<Encryption> encryptedContent = fileEncryption
.flatMapMany(encryption -> body.scan(encryption, (it, block) -> it.update(block)));
Mono<FileResultDto> result = persistenceService.addEntry(encryptedContent, fileId);
How can we synchronize two asynchronous calls using RxJava? In the example below, the method contentService.listContents, which is an API call, must finish before processSchema is invoked for each schema.
schemaService.listSchema()
.toObservable()
.flatMapIterable(schemas -> {
schemas.forEach(schema -> {
// async call
contentService.listContents(schema.getName()).subscribe(contents -> {
doSomethingWithThe(contents);
});
});
// `contentService.listContents` must complete first before
// processSchema should be called for each schema
return schemas;
}).subscribe(schema -> { processSchema(schema); },
error -> { Console.error(error.getMessage()); });
The problem with the code above is that processSchema does not wait for contentService.listContents, since the calls are asynchronous and not synchronized with each other.
You have to use flatMap to process the schemas and since it is a list, you have to unroll it and flatMap again:
schemaService.listSchema()
.toObservable()
.flatMap(schemas ->
Observable.fromIterable(schemas)
.flatMap(schema ->
contentService.listContents(schema.getName())
.doOnNext(contents -> doSomethingWith(contents))
)
// probably you don't care about the inner contents
.ignoreElements()
// andThen will switch to this only when the sequence above completes
.andThen(Observable.just(schemas))
)
.subscribe(
schema -> processSchema(schema),
error -> Console.error(error.getMessage())
);
Note that you haven't defined the return types of the service calls, so you may have to use flatMapSingle and doOnSuccess, for example.
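For instance, if listContents happened to return a Single<List<Content>> (an assumption, since the return type is not shown in the question), the inner part of the pipeline could look like this:

// Assuming listContents returns a Single; flatMapSingle/doOnSuccess replace flatMap/doOnNext.
Completable inner = Observable.fromIterable(schemas)
    .flatMapSingle(schema ->
        contentService.listContents(schema.getName())
            .doOnSuccess(contents -> doSomethingWith(contents)))
    .ignoreElements();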
You are probably looking for flatMap.
From the docs:

Continuations

Sometimes, when an item has become available, one would like to perform some dependent computations on it. This is sometimes called continuations and, depending on what should happen and what types are involved, may involve various operators to accomplish.

Dependent

The most typical scenario is, given a value, to invoke another service, await and continue with its result:
service.apiCall()
.flatMap(value -> service.anotherApiCall(value))
.flatMap(next -> service.finalCall(next))
It is often the case also that later sequences would require values from earlier mappings. This can be achieved by moving the outer flatMap into the inner parts of the previous flatMap, for example:
service.apiCall()
.flatMap(value ->
service.anotherApiCall(value)
.flatMap(next -> service.finalCallBoth(value, next))
)
How can I return an object or a list after using the Blocking.get() method in Ratpack?
Blocking.get(()->
xRepository.findAvailable()).then(x->x.stream().findFirst().get());
The above line returns void. I want to be able to do something like the following so that it returns the object from the then clause. I tried adding a return statement, but it doesn't work.
Object x = Blocking.get(()->
xRepository.findAvailable()).then(x->x.stream().findFirst().get());
You can use map to work with the value when it's available.
Blocking.get(() -> xRepository.findAvailable())
.map(x -> x.stream().findFirst().get())
.then(firstAvailable -> ctx.render("Here is the first available x " + firstAvailable))
Ratpack's Promise<T> does not provide a blocking operation like Promise.get() that blocks the current thread and returns a result. Instead, you have to subscribe to the promise object. One of the methods you can use is Promise.then(Action<? super T> then), which allows you to specify an action that will be triggered when the value is available. In the above example we use ctx.render() as the action triggered when the value from the blocking operation is ready, but you can do other things as well.
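For example, if the next step were itself asynchronous, a hedged sketch using flatMap (which Ratpack's Promise also provides) could chain it without blocking; xRepository.reserve here is purely hypothetical:

Blocking.get(() -> xRepository.findAvailable())
    .map(x -> x.stream().findFirst().get())
    // flatMap chains another Promise (hypothetical reserve call) instead of a plain value
    .flatMap(first -> Blocking.get(() -> xRepository.reserve(first)))
    .then(reserved -> ctx.render("Reserved " + reserved));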
The problem I am facing is as follows:
I have two observables: one fetches data from the network and the other from the DB. The second one might be empty, but the lack of the first one is considered an error. When the result from the network arrives, I need to compare it with the latest results from the DB (if present), and if they differ I want to store them (if the DB observable is empty I want to store the network results anyway).
Is there any dedicated operator that handles a case like this?
So far I have tried a solution with zipWith (which does not work as expected if the DB is empty), buffer (which works but is far from ideal), and flatMapping (which requires additional casting in the subscriber).
Below is the solution with buffer.
Observable.concat(ratesFromNetwork(), latestRatesFromDB())
.buffer(3000, 2)
.filter(buffer -> !(buffer.size() == 2 && !buffer.get(0).differentThan(buffer.get(1))))
.map(buffer -> buffer.get(0))
.subscribe(this::save,
(ex) -> System.out.println(ex.getMessage()),
() -> System.out.println("completed"));
If I modify latestRatesFromDb so that it returns an Optional instead of an Observable, the whole problem becomes trivial because I can filter using this result. It seems that there is no way to filter in an asynchronous way (or did I miss something?).
Okay, here is how I would go about writing this.
Firstly, whatever class has the differentThan function should be changed to override equals instead. Otherwise you can't use a lot of basic methods with these objects.
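A minimal sketch of that change, assuming a hypothetical Rate class (the real class is not shown in the question):

import java.util.Objects;

// Hypothetical Rate class; equals/hashCode replace the custom differentThan method
// so that operators such as distinct() can compare instances.
final class Rate {
    private final String currency;
    private final double value;

    Rate(String currency, double value) {
        this.currency = currency;
        this.value = value;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Rate)) return false;
        Rate other = (Rate) o;
        return Double.compare(value, other.value) == 0
            && Objects.equals(currency, other.currency);
    }

    @Override
    public int hashCode() {
        return Objects.hash(currency, value);
    }
}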
For the purpose of this example I wrote all the observables using the Integer class as my type parameter. I then use a scheduler to write two mock methods:
static Observable<Integer> ratesFromNetwork(Scheduler scheduler) {
return Observable.<Integer>create(sub -> {
sub.onNext(2);
sub.onCompleted();
}).delay(99, TimeUnit.MILLISECONDS, scheduler);
}
static Observable<Integer> latestRatesFromDB(Scheduler scheduler) {
return Observable.<Integer>create(sub -> {
sub.onNext(1);
sub.onCompleted();
}).delay(99, TimeUnit.MILLISECONDS, scheduler);
}
As you can see, both are similar; however, they will emit different values.
lack of the first one is considered an error
The best way to achieve this is to use a timeout. You can log the error immediately here and continue:
final Observable<Integer> networkRate = ratesFromNetwork(scheduler)
.timeout(networkTimeOut, TimeUnit.MILLISECONDS, scheduler)
.doOnError(e -> System.err.println("Failed to get rates from network."));
When the timeout elapses, an error will be thrown by Rx. doOnError will give you a better idea of where this error started and lets it propagate through the rest of the sequence.
The second one might be empty
In this case I would use a similar strategy; however, do not let the error propagate: use the method onErrorResumeNext. Then you can make sure the observable emits at least one value by using firstOrDefault, passing some dummy value that you expect to never match the network results.
final Observable<Integer> databaseRate = latestRatesFromDB(scheduler)
.timeout(databaseTimeOut, TimeUnit.MILLISECONDS, scheduler)
.doOnError(e -> System.err.println("Failed to get rates from database"))
.onErrorResumeNext(Observable.empty())
.firstOrDefault(-1);
Now by using the distinct method you can grab a value only when it is different than the one that came before it (which is why you need to override equals).
databaseRate.concatWith(networkRate).distinct().skip(1)
.subscribe(i -> System.out.println("Updating to " + i),
System.err::println,
() -> System.out.println("completed"));
Here the database rate was placed before the network rate to take advantage of distinct. A skip is then added to always ignore the database rate value.
Complete Code:
final long networkTimeOut = 100;
final long databaseTimeOut = 100;
final TestScheduler scheduler = new TestScheduler();
final Observable<Integer> networkRate = ratesFromNetwork(scheduler)
.timeout(networkTimeOut, TimeUnit.MILLISECONDS, scheduler)
.doOnError(e -> System.err.println("Failed to get rates from network."));
final Observable<Integer> databaseRate = latestRatesFromDB(scheduler)
.timeout(databaseTimeOut, TimeUnit.MILLISECONDS, scheduler)
.doOnError(e -> System.err.println("Failed to get rates from database"))
.onErrorResumeNext(Observable.empty())
.firstOrDefault(-1);
databaseRate.concatWith(networkRate).distinct().skip(1)
.subscribe(i -> System.out.println("Updating to " + i),
System.err::println,
() -> System.out.println("completed"));
scheduler.advanceTimeBy(200, TimeUnit.MILLISECONDS);
When networkTimeOut and databaseTimeOut are greater than 100 it prints:
Updating to 2
completed
When networkTimeOut is less than 100 it prints:
Failed to get rates from network.
java.util.concurrent.TimeoutException
When databaseTimeOut is less than 100 it prints:
Failed to get rates from database
Updating to 2
completed
And if you modify latestRatesFromDB and ratesFromNetwork to return the same value, it simply prints:
completed
And if you don't care about forcing timeouts or logging then it boils down to:
latestRatesFromDB().firstOrDefault(dummyValue)
.concatWith(ratesFromNetwork())
.distinct().skip(1)
.subscribe(this::save,
System.err::println,
() -> System.out.println("completed"));
I'm writing a server-side program using Twitter Finagle. I do not use the full Twitter server stack, just the part that enables asynchronous processing (Future, Function, etc.). I want the Future objects to have timeouts, so I wrote this:
Future<String> future = Future.value(some_input).flatMap(time_consuming_function1);
future.get(Duration.apply(5, TimeUnit.SECONDS));
time_consuming_function1 runs for longer than 5 seconds, but future doesn't time out after 5 seconds; it waits until time_consuming_function1 has finished.
I think this is because future.get(timeout) only cares about how long the future took to create, not about the whole operation chain. Is there a way to time out the whole operation chain?
Basically if you call map/flatMap on a satisfied Future, the code is executed immediately.
In your example, you're satisfying your future immediately when you call Future.value(some_input), so flatMap executes the code immediately and the call to get doesn't need to wait for anything. Also, everything is happening in one thread. A more appropriate use would be like this:
import scala.concurrent.ops._
import com.twitter.conversions.time._
import com.twitter.util.{Future,Promise}
val p = new Promise[String]
val longOp = (s: String) => {
val p = new Promise[String]
spawn { Thread.sleep(5000); p.setValue("Received: " + s) }
p
}
val both = p flatMap longOp
both.get(1 second) // p is not complete, so longOp hasn't been called yet, so this will fail
p.setValue("test") // we set p, but we have to wait for longOp to complete
both.get(1 second) // this fails because longOp isn't done
both.get(5 seconds) // this will succeed