Using the Vertx mongodb client fluently vs inline vs nested - java

Read through the docs and I'm still confused as to the advantages (if any) of using the mongoClient in a fluent way. Can anyone explain them to me, and whether the fluent style guarantees order?
Running in line - Both will be run at the same time with no guarantee of order.
mongoClient.runCommand("aggregate", getSomeCommand1(), res -> {});
mongoClient.runCommand("aggregate", getSomeCommand2(), res -> {});
Running nested - getSomeCommand1 will be run to completion first before getSomeCommand2.
mongoClient.runCommand("aggregate", getSomeCommand1(), res1 -> {
mongoClient.runCommand("aggregate", getSomeCommand2(), res2 -> {});
});
Running in a fluent way - is this the same as running in line?
mongoClient.runCommand("aggregate", getSomeCommand1(), res -> {})
    .runCommand("aggregate", getSomeCommand2(), res -> {});

Far from a complete answer, but running a few basic tests indicates that running in a fluent way is the same as running in line.
I ran a slow command (aggregate) and a fast command (count) on a large dataset.
mongoClient.runCommand("aggregate", getTotalRecsPerTypeCommand(sellerId, collection), res -> {
    result.put("totalRecsPerType", res.result());
}).count(collection, new JsonObject().put("sellerId", sellerId), res -> {
    result.put("totalRecs", res.result());
    requestMessage.reply(result);
});
Initially only the total is returned; however, when the reply is moved from the fast command to the slow command, both results are returned. This indicates they are both run at the same time with no guarantee of order.
mongoClient.runCommand("aggregate", getTotalRecsPerTypeCommand(sellerId, collection), res -> {
    result.put("totalRecsPerType", res.result());
    requestMessage.reply(result);
}).count(collection, new JsonObject().put("sellerId", sellerId), res -> {
    result.put("totalRecs", res.result());
});

Running in line does not guarantee the order of execution; running the first snippet several times on a lightly loaded machine may happen to preserve order, but that is not guaranteed.
The same is true of the fluent API. In this case it only saves you from repeating the client variable. If you want to create a flow where the next command is fired only after the first one completes, use RxJava (or the nested case, but in the long run you might end up with callback hell).
Take a look here: https://github.com/vert-x3/vertx-mongo-client/blob/master/vertx-mongo-service/src/main/generated/io/vertx/rxjava/ext/mongo/MongoService.java
Although I'm not a big fan of ObservableFuture used in this class (I recommend using http://reactivex.io/RxJava/javadoc/rx/subjects/AsyncSubject.html), it's a good starting point.
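The kind of ordering that composition buys you over the fluent API can be illustrated with plain CompletableFuture chaining (a stdlib stand-in, not the Vert.x or RxJava API; runCommand1/runCommand2 are hypothetical placeholders for the two Mongo commands):

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CopyOnWriteArrayList;

public class SequencingSketch {
    static final List<String> order = new CopyOnWriteArrayList<>();

    // Hypothetical stand-ins for the two async Mongo commands.
    static CompletableFuture<String> runCommand1() {
        return CompletableFuture.supplyAsync(() -> { order.add("cmd1"); return "r1"; });
    }
    static CompletableFuture<String> runCommand2() {
        return CompletableFuture.supplyAsync(() -> { order.add("cmd2"); return "r2"; });
    }

    public static void main(String[] args) {
        // thenCompose starts the second command only after the first completes,
        // which is the guarantee the fluent (chained) style does NOT give you.
        runCommand1().thenCompose(r1 -> runCommand2()).join();
        System.out.println(order); // [cmd1, cmd2]
    }
}
```

With the fluent style both commands are submitted immediately; with composition the second is only submitted from the first one's completion.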

Related

Sync two asynchronous API call with RxJava

In what way can we sync two asynchronous calls using RxJava? In the example below, the contentService.listContents call, which is an API call, must finish before the processSchema method takes place for each schema.
schemaService.listSchema()
    .toObservable()
    .flatMapIterable(schemas -> {
        schemas.forEach(schema -> {
            // async call
            contentService.listContents(schema.getName()).subscribe(contents -> {
                doSomethingWithThe(contents);
            });
        });
        // contentService.listContents must complete first before
        // processSchema should be called for each schema
        return schemas;
    }).subscribe(schema -> { processSchema(schema); },
        error -> { Console.error(error.getMessage()); });
The problem with the code above is that processSchema does not wait for contentService.listContents, since the calls are async and not synchronized with each other.
You have to use flatMap to process the schemas and since it is a list, you have to unroll it and flatMap again:
schemaService.listSchema()
    .toObservable()
    .flatMap(schemas ->
        Observable.fromIterable(schemas)
            .flatMap(schema ->
                contentService.listContents(schema.getName())
                    .doOnNext(contents -> doSomethingWith(contents))
            )
            // probably you don't care about the inner contents
            .ignoreElements()
            // andThen will switch to this only when the sequence above completes
            .andThen(Observable.just(schemas))
    )
    .subscribe(
        schema -> processSchema(schema),
        error -> Console.error(error.getMessage())
    );
Note that you haven't defined the return types of the service calls so you may have to use flatMapSingle and doOnSuccess for example.
You are probably looking for flatMap.
From the docs
Continuations
Sometimes, when an item has become available, one would like to perform some dependent computations on it. This is sometimes called continuations and, depending on what should happen and what types are involved, may involve various operators to accomplish.
Dependent
The most typical scenario is to, given a value, invoke another service, await and continue with its result:
service.apiCall()
    .flatMap(value -> service.anotherApiCall(value))
    .flatMap(next -> service.finalCall(next))
It is often the case also that later sequences would require values from earlier mappings. This can be achieved by moving the outer flatMap into the inner parts of the previous flatMap, for example:
service.apiCall()
    .flatMap(value ->
        service.anotherApiCall(value)
            .flatMap(next -> service.finalCallBoth(value, next))
    )

AssertThrows not throwing exception when going through Java Streams

So I'm writing unit tests in which I'm testing the capability to blacklist and unblacklist users (which is a feature in my code that is itself working fine).
Here's a sample command that works as expected:
assertThrows(ExecutionException.class, () -> onlineStore.lookup("533"));
If I blacklist user "533" and then run the above command, it works fine, because an ExecutionException is raised (because you're trying to look up a user who is blacklisted). Similarly, if I had NOT blacklisted user "533" but still ran the above command, the test would fail, which is expected too for a similar reason (i.e. no exception is now thrown as you're NOT fetching a blacklisted user).
However, if I have a List of user IDs called userIds (which user "533" is now part of) and I blacklist them all (functionality which I know is working fine), and then run the command below:
userIds.stream().map(id -> assertDoesNotThrow(() -> onlineStore.lookup(id)));
... the test passes, even though it should have FAILED. Why? Because all users are now blacklisted, so when fetching these users, ExecutionExceptions should have been thrown.
If I now replace the streams command above with either of the following, they work as expected:
assertThrows(ExecutionException.class, () -> onlineStore.lookup("533"));
assertDoesNotThrow(() -> onlineStore.lookup("533"));
So this all leads me to believe that for some reason, when going through Java Streams, thrown ExecutionExceptions aren't getting caught.
Any explanation for this behavior ?
You're not calling any terminal operation on the stream, so your assertion is never executed.
You're abusing map(), which is supposed to create a new stream by transforming every element. What you actually want to do is to execute a method which has a side effect on every element. That's what forEach is for (and it's also a terminal operation which actually consumes the stream):
userIds.stream().forEach(id -> assertDoesNotThrow(() -> onlineStore.lookup(id)));
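The root cause is stream laziness, which can be demonstrated without JUnit or Mockito: the lambda passed to map never runs until a terminal operation consumes the stream.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class LazyStreamDemo {
    public static void main(String[] args) {
        AtomicInteger calls = new AtomicInteger();

        // No terminal operation: the mapper is never invoked.
        List.of("a", "b", "c").stream().map(s -> {
            calls.incrementAndGet();
            return s.toUpperCase();
        });
        System.out.println(calls.get()); // 0

        // forEach is a terminal operation, so the lambda actually runs.
        List.of("a", "b", "c").stream().forEach(s -> calls.incrementAndGet());
        System.out.println(calls.get()); // 3
    }
}
```

This is why the assertions inside the original map() call never executed: no exception could be thrown from code that never ran.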

Writing unit tests for Java 8 streams

I have a list and I'm streaming this list to get some filtered data as:
List<Future<Accommodation>> submittedRequestList =
    list.stream().filter(Objects::nonNull)
        .map(config -> taskExecutorService.submit(() -> requestHandler
            .handle(jobId, config))).collect(Collectors.toList());
When I wrote tests, I tried to return some data using a when():
List<Future<Accommodation>> submittedRequestList = mock(LinkedList.class);
when(list.stream().filter(Objects::nonNull)
    .map(config -> executorService.submit(() -> requestHandler
        .handle(JOB_ID, config))).collect(Collectors.toList())).thenReturn(submittedRequestList);
I'm getting org.mockito.exceptions.misusing.WrongTypeOfReturnValue:
LinkedList$$EnhancerByMockitoWithCGLIB$$716dd84d cannot be returned by submit() error. How may I resolve this error by using a correct when()?
You can only mock single method calls, not entire fluent interface cascades.
E.g., you could do
Stream<Future> fs = mock(Stream.class);
when(requestList.stream()).thenReturn(fs);
Stream<Future> filtered = mock(Stream.class);
when(fs.filter(Objects::nonNull)).thenReturn(filtered);
and so on.
IMO it's really not worth mocking the whole thing; just verify that all filters were called and check the contents of the result list.
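A sketch of that approach, using a plain ExecutorService and a hypothetical handle method standing in for requestHandler.handle: run the real pipeline against real inputs and assert on the result list instead of mocking the stream.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Objects;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.stream.Collectors;

public class StreamPipelineTest {
    // Hypothetical stand-in for requestHandler.handle(jobId, config).
    static String handle(String jobId, String config) {
        return jobId + ":" + config;
    }

    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        List<String> configs = Arrays.asList("a", null, "b");

        // The production pipeline, run against real inputs instead of mocks.
        List<Future<String>> submitted = configs.stream()
            .filter(Objects::nonNull)
            .map(config -> executor.submit(() -> handle("job-1", config)))
            .collect(Collectors.toList());

        // Assert on the contents of the result list, not on stream internals.
        System.out.println(submitted.size());        // 2 (the null was filtered out)
        System.out.println(submitted.get(0).get());  // job-1:a
        System.out.println(submitted.get(1).get());  // job-1:b
        executor.shutdown();
    }
}
```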

No option for ConcatMap with skip error - RxJava

Consider this example:
I have a file downloading in sequence. If one download fails, it should move to next.
Pseudo code:
Observable.from(urls)
.concatMap(url -> downloadObservable(url))
There is no option for moving to the next url if the download fails.
There is no way to skip with onErrorResumeNext(), as I just want to move to the next url. Can anyone help?
There is an operator for this: concatMapDelayError, available since 1.3. In general, if there is a reason errors could be delayed until all sources have been consumed fully, there is likely an opNameDelayError operator for it.
Observable.from(urls)
    .concatMapDelayError(url -> downloadObservable(url))
    .doOnError(error -> {
        if (error instanceof CompositeException) {
            System.out.println(((CompositeException) error).getExceptions().size());
        } else {
            System.out.println(1);
        }
    });
(The doOnError addendum comes from the updated OP's cross post on the RxJava issue list.)
If you are using RxJava 1, a quick and dirty solution is to return null when the download fails and then filter the nulls out (note that onErrorReturn takes a function, so you return null from it rather than passing null directly):
Observable
    .from(urls)
    .concatMap(url -> downloadObservable(url).onErrorReturn(error -> null))
    .filter(result -> result != null)
A nicer solution would be to create a wrapper for the result having a method like wasSuccessful() for checking in the filter and a method like getResult() for extracting the result from the wrapper. This way you don't have to handle nulls.
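A minimal sketch of such a wrapper (DownloadResult and its method names are illustrative, not from RxJava):

```java
// Wraps either a successful download result or the error that occurred,
// so the stream can carry failures as ordinary items instead of nulls.
public class DownloadResult<T> {
    private final T result;
    private final Throwable error;

    private DownloadResult(T result, Throwable error) {
        this.result = result;
        this.error = error;
    }

    public static <T> DownloadResult<T> success(T result) {
        return new DownloadResult<>(result, null);
    }

    public static <T> DownloadResult<T> failure(Throwable error) {
        return new DownloadResult<>(null, error);
    }

    public boolean wasSuccessful() { return error == null; }

    public T getResult() {
        if (!wasSuccessful()) throw new IllegalStateException("no result", error);
        return result;
    }

    public static void main(String[] args) {
        DownloadResult<String> ok = DownloadResult.success("file.bin");
        DownloadResult<String> bad = DownloadResult.failure(new RuntimeException("boom"));
        System.out.println(ok.wasSuccessful());   // true
        System.out.println(bad.wasSuccessful());  // false
        System.out.println(ok.getResult());       // file.bin
    }
}
```

In the pipeline this would become .onErrorReturn(DownloadResult::failure) followed by .filter(DownloadResult::wasSuccessful).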
According to https://github.com/ReactiveX/RxJava/issues/3870 there is no way to do this with concatMap alone. Of course you can introduce some other error handling, i.e. handle the error inside downloadObservable, then filter out the null answers.
You have to think of it as a pipeline, so if you don't want to stop the emission of the pipeline, you have to handle the error and return something in order to continue with the next emission.
The only way to use onErrorResumeNext and not stop the emission after it is to execute it inside a flatMap:
Observable.from(urls)
    .flatMap(url -> downloadObservable(url)
        .onErrorResumeNext(t -> Observable.just("Something went wrong")))
You can see an example here https://github.com/politrons/reactive/blob/master/src/test/java/rx/observables/errors/ObservableExceptions.java

How do I use RxJava 2 to orchestrate a race with retry?

Suppose we have a set of flaky (sometimes failing) parsers that may or may not be able to handle a given file i.e. a given parser either succeeds with some probability p > 0, or fails always (p=0). Is it possible to use RxJava to have this set of parsers subscribe to a stream of incoming files and 'race' to parse the file?
Given that it is possible for the parser to fail initially but still be able to parse the file, it is necessary to have them retry with some backoff policy. Given that it is also possible for no parser to be able to handle a given file, the retry count should be capped.
Implementing exponential backoff is relatively easy using retryWhen, with something like this (source):
source.retryWhen(errors ->
    errors.zipWith(Observable.range(1, 3), (n, i) -> i)
        .flatMap(retryCount -> Observable.timer((long) Math.pow(5, retryCount), TimeUnit.SECONDS))
);
However, setting up a parallel race is something I cannot figure out how to do. It seems like the amb operator is what we want here, but applying it to an arbitrary number of streams seems to require using blockingIterable, which (I think) defeats the purpose of the race as it blocks. I have been unable to find anything useful relating to this use case of amb on the internet.
My attempts thus far resemble something like this:
Set<Parser> parserSet = new HashSet<>();
parserSet.add(new Parser(..., ..., ...));
// Add more parsers
int numParsers = parserSet.size();
Flowable<Parser> parsers = Flowable.fromIterable(parserSet).repeat();
fileSource
    .flatMap(f -> parsers.take(numParsers)
        .map(p -> p.parse(f))
        .retryWhen(/* snippet from above */)
        .onErrorReturn(/* some error value */)
    ).take(1)
Flowable introduced the .parallel() operator which just recently got the addition of ParallelFailureHandling (see this pr) which has a RETRY method, but I can't seem to get the flowables to stop retrying after one of them has returned.
Is this problem solvable with RxJava?
Making the reasonable assumption that your parsers are synchronous, something like
Set<Parser> parserSet = new HashSet<>();
parserSet.add(new Parser(..., ..., ...));
// Add more parsers
int numParsers = parserSet.size();
ArrayList<Flowable<T>> parserObservableList = new ArrayList<>();
for (Parser p : parserSet) {
    parserObservableList.add(Flowable.fromCallable(() -> p.parse(f))
        .retryWhen(/* Add your retry logic */)
        .onErrorReturn(/* some error value */));
}
Flowable.amb(parserObservableList).subscribe(/* do what you want with the results */);
should meet your requirements.
