Mono.onErrorResume() doesn't always work as expected - java

I'm not sure whether there is something wrong with my code or whether it is a bug in reactor-core.
What I'm trying to do is handle errors such as OptimisticLockingFailureException, which happens when different threads try to save the same database record/entity at the same moment.
The example below is a simplified form of what I have in my code. Assume there is a main stream that contains a lot of operations and a sub-stream, represented here by the "processFlux()" method, that executes some database calls using reactive MongoDB and exhibits the issue.
I'm applying a retry by using onErrorResume() to call the same method from inside "errorFallBack()", and my code goes like this:
import reactor.core.publisher.Flux;

public class TestClass1 {

    public static void main(String[] args) {
        Flux.range(1, 10)
                .doOnNext(System.out::println)
                .publish(integerFlux -> processFlux(integerFlux))
                .doOnNext(System.out::println)
                .onErrorContinue((throwable, o) -> System.out.println("on error continue!!!"))
                .subscribe(x -> sleepMillis(100));
    }

    private static void sleepMillis(long millis) {
        try {
            Thread.sleep(millis);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }

    public static Flux<Integer> processFlux(Flux<Integer> input) {
        return input
                .map(integer -> {
                    if (integer == 5)
                        throw new NumberFormatException();
                    return integer;
                })
                .onErrorResume(NumberFormatException.class, e -> errorFallBack(input));
    }

    private static Flux<Integer> errorFallBack(Flux<Integer> input) {
        return input
                .doOnNext(integer -> System.out.println("inside errorFallBack()"))
                .flatMap(integer -> processFlux(Flux.just(integer)));
    }
}
in the "processFlux()" method i assume there will be an exception thrown at some point of time and i try to catch it with the "onErrorResume()".
Since i'm using "publish()" in the main stream to wrap around "processFlux()" i expect that when the error happens the "onErrorResume()" is the one that will be called...the actual behaviour is that the "onErrorContinue()" of the min stream is the one that is being called.
is this a bug? or shall i change something in my code?
I'm using reactive MongoDB with Spring Boot 2.1.13.RELEASE, Spring Cloud Greenwich.SR5, and reactor-core 3.3.6.RELEASE.
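For anyone trying to reproduce this, here is a stripped-down sketch of the same interaction (not part of the original post): it drops the publish() wrapper and keeps only an upstream onErrorResume and a downstream onErrorContinue around the same failing map.

import reactor.core.publisher.Flux;

public class MinimalRepro {
    public static void main(String[] args) {
        Flux.range(1, 10)
                .map(integer -> {
                    if (integer == 5)
                        throw new NumberFormatException();
                    return integer;
                })
                // upstream recovery path, analogous to processFlux()
                .onErrorResume(NumberFormatException.class, e -> Flux.just(-1))
                // downstream continue handler, analogous to the main stream
                .onErrorContinue((throwable, o) -> System.out.println("on error continue!!!"))
                .subscribe(System.out::println);
    }
}

If the behaviour described above is general, the continue handler rather than the fallback should fire here as well.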

Related

Project Reactor in onErrorContinue value that triggered error is null

I'm having some issues with code written using project reactor:
<dependency>
    <groupId>io.projectreactor</groupId>
    <artifactId>reactor-core</artifactId>
    <version>3.2.12.RELEASE</version>
</dependency>
Please consider the following code:
import java.util.concurrent.ArrayBlockingQueue;

import reactor.core.publisher.Flux;
import reactor.core.publisher.FluxSink;

class Scratch {

    public static void main(String[] args) {
        ArrayBlockingQueue<Long> q = new ArrayBlockingQueue<>(10);
        startProducer(q);
        Flux.<Long>create(sink -> consumeItemsFromQueue(q, sink))
                .doOnNext(ctx -> System.out.println("Processing " + ctx))
                .flatMap(ctx -> Flux.push(sink -> { throw new IllegalArgumentException("bum!"); }))
                .onErrorContinue((ex, obj) ->
                        System.err.println("Caught error " + ex.getMessage() + " in obj:" + obj))
                .doOnNext(element -> System.out.println("Do On NExt: " + element))
                .subscribe();
    }

    private static void consumeItemsFromQueue(ArrayBlockingQueue<Long> q, FluxSink<Long> sink) {
        while (true) {
            try {
                sink.next(q.take());
            } catch (Throwable t) {
                System.err.println("Error in catch");
            }
        }
    }

    private static void startProducer(ArrayBlockingQueue<Long> q) {
        Thread thread = new Thread(() -> {
            while (true) {
                try {
                    q.put(System.currentTimeMillis());
                    Thread.sleep(2000);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        });
        thread.start();
    }
}
This code produces the following output:
Processing 1580494319870
Caught error bum! in obj:null
Processing 1580494321871
Caught error bum! in obj:null
According to the documentation of onErrorContinue, the object should be the value that caused the error. Therefore I would expect it to be the ctx object from flatMap. Instead it is null.
Is this a bug, or is my understanding of the documentation flawed?
Reasoning about onErrorContinue behaviour can be rather counter-intuitive, so I always recommend avoiding its use where possible.
According to the documentation of onErrorContinue, the object should be the value that caused the error. Therefore I would expect it to be the ctx object from flatMap. Instead it is null.
Ah, but ctx isn't the value that caused the error, because your outer flatMap() call is working just fine; it's simply relaying an error that occurred in the inner Flux (the Flux.push() line in your example). Since there's no value that caused this error (it just threw an exception), there's no value reported. So the behaviour you're reporting with this example is exactly what I'd expect.
If you changed that line to something like:
.flatMap(ctx -> Flux.push(sink -> sink.next(ctx)).flatMap(x -> Mono.error(new IllegalArgumentException("bum!"))))
...or:
.flatMap(ctx -> Flux.just(ctx).flatMap(x -> Mono.error(new IllegalArgumentException("bum!"))))
...then you'd see something similar to Caught error bum! in obj:1591657236326, as there the error actually has a cause: it is an error triggered by an operator processing that value.
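To make the distinction concrete, here is a minimal self-contained sketch (not from the original post) built around the second variant above; it replaces the queue-driven source with Flux.just so it can run on its own:

import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

class OnErrorContinueValueDemo {
    public static void main(String[] args) {
        Flux.just(1L, 2L, 3L)
                .doOnNext(ctx -> System.out.println("Processing " + ctx))
                // the inner flatMap fails while processing a concrete value,
                // so onErrorContinue has a value to report
                .flatMap(ctx -> Flux.just(ctx)
                        .flatMap(x -> Mono.error(new IllegalArgumentException("bum!"))))
                .onErrorContinue((ex, obj) ->
                        System.err.println("Caught error " + ex.getMessage() + " in obj:" + obj))
                .subscribe(element -> System.out.println("Do On Next: " + element));
    }
}

Here each element should show up in obj, in contrast to the original snippet, where the inner Flux.push() errors without ever being handed a value.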

asynchronous programming in java with void methods

I have never really worked with asynchronous programming in Java and got very confused about which practice is the best one.
I have this method:
public static CompletableFuture<Boolean> restoreDatabase() {
    DBRestorerWorker dbWork = new DBRestorerWorker();
    dbWork.run();
    return "someresult";
}
and then this one, which calls the first one:
@POST
@Path("{backupFile}")
@Consumes("application/json")
public void createOyster(@PathParam("backupFile") String backupFile) {
    RestUtil.restoreDatabase("utv_johan", backupFile);
    //.then somemethod()
    //.then next method()
}
What I want to do is first call the restoreDatabase() method, which calls dbWork.run() (a void method), and when that method is done I want createOyster to do the next step, and so forth until I have done all the steps needed. Does anyone have a guideline on where to start with this? Which practice is best in today's Java?
As you already use CompletableFuture, you can build your async execution pipeline like this:
CompletableFuture.supplyAsync(new Supplier<String>() {
    @Override
    public String get() {
        DBRestorerWorker dbWork = new DBRestorerWorker();
        dbWork.run();
        return "someresult";
    }
}).thenComposeAsync((Function<String, CompletionStage<String>>) s -> {
    CompletableFuture<String> future = new CompletableFuture<>();
    try {
        //createOyster
        future.complete("oyster created");
    } catch (Exception ex) {
        future.completeExceptionally(ex);
    }
    return future;
});
As you can see, you can call thenComposeAsync or thenCompose to build a chain of CompletionStages and perform tasks using the results of the previous step, or use Void if you don't have anything to return.
Here's a very good guide
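For comparison, a shorter hedged sketch of the same chain written with lambdas; createOyster(...) and nextStep(...) are hypothetical placeholders for your own methods:

CompletableFuture
        .supplyAsync(() -> {
            // the blocking restore runs on a ForkJoinPool worker thread
            new DBRestorerWorker().run();
            return "someresult";
        })
        .thenApply(result -> createOyster(result))   // hypothetical: runs once the restore is done
        .thenAccept(oyster -> nextStep(oyster))      // hypothetical: chain further steps as needed
        .exceptionally(ex -> {
            ex.printStackTrace();                    // failures from any stage end up here
            return null;
        });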
You can use AsyncResponse:
import javax.ws.rs.container.AsyncResponse;

public static CompletableFuture<String> restoreDatabase() {
    DBRestorerWorker dbWork = new DBRestorerWorker();
    dbWork.run();
    return CompletableFuture.completedFuture("someresult");
}
and this
@POST
@Path("{backupFile}")
@Consumes("application/json")
public void createOyster(@PathParam("backupFile") String backupFile,
                         @Suspended AsyncResponse ar) {
    RestUtil.restoreDatabase("utv_johan", backupFile)
            .thenCompose(result -> doSomeAsyncCall())
            .thenApply(result -> doSomeSyncCall())
            .whenComplete(onFinish(ar));
    //.then next method()
}
And a utility function to send the response:
static <R> BiConsumer<R, Throwable> onFinish(AsyncResponse ar) {
    return (R ok, Throwable ex) -> {
        if (ex != null) {
            // do something with the exception
            ar.resume(ex);
        } else {
            ar.resume(ok);
        }
    };
}

Generic rxjava2 database access layer

I just started with Java/RxJava2/Android development and managed to get the following working example:
Observable<Object> source3 = Observable.create(emitter -> {
    cursor = app.dbh.getAlllTransactions2();
    emitter.onNext(cursor);
    emitter.onComplete();
}).subscribeOn(Schedulers.io());

source3.subscribe(c -> {
    transactionAdapter = new TransactionCursorAdapter(this.getActivity(), (Cursor) c);
    LSTVW_transactions.setAdapter(transactionAdapter);
});
Now I have two questions:
1. How is it that I am forced to use Object as the type? If I use anything else, Android Studio says it expects Object. Is it because of the lambda expression? I have done tests before and they allowed me to use any type.
2. I would like to make this more generic. The goal is to have an Observable as the result, with an arbitrary DB function as a parameter which is then called generically. An older example I have found of this can be found here, but I don't see how I could convert it to lambda/RxJava2 style (original link: https://dzone.com/articles/easy-sqlite-android-rxjava).
An example of such a setup, which I would like to convert:
private static <T> Observable<T> makeObservable(final Callable<T> func) {
    return Observable.create(
            new Observable.OnSubscribe<T>() {
                @Override
                public void call(Subscriber<? super T> subscriber) {
                    try {
                        subscriber.onNext(func.call());
                    } catch (Exception ex) {
                        Log.e(TAG, "Error reading from the database", ex);
                    }
                }
            });
}
Try this:
Observable.create((ObservableOnSubscribe<YourType>) e -> { ... });
I don't get exactly what you want to achieve with the second snippet, but I think you can simplify it by just having this body for the makeObservable method (I just removed the try-catch part):
return Observable.create(e -> e.onNext(func.call()));
About Rx abuse: I think it is not a good idea to pass the Cursor as an item of a stream. You would probably rather have a stream of the data read from the database, so that your Observer can react properly.
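As a hedged sketch of what a generic RxJava 2 version of makeObservable could look like (assuming the standard io.reactivex types; getAlllTransactions2() is just the call from the question):

import java.util.concurrent.Callable;

import io.reactivex.Observable;
import io.reactivex.schedulers.Schedulers;

public class DbObservables {

    // Wrap any blocking database call in an Observable that runs on the IO scheduler.
    // fromCallable defers the call until subscription and routes exceptions to onError,
    // so the explicit try/catch from the RxJava 1 example is not needed here.
    public static <T> Observable<T> makeObservable(Callable<T> func) {
        return Observable.fromCallable(func)
                .subscribeOn(Schedulers.io());
    }
}

A call site could then look like makeObservable(() -> app.dbh.getAlllTransactions2()).subscribe(cursor -> ...), with the same caveat as above about passing a Cursor through the stream.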

Rxjava2 + Retrofit2 + Android. Best way to do hundreds of network calls

I have an app with a big button that allows the user to sync all their data at once to the cloud: a re-sync feature that lets them send all their data again (300+ entries).
I am using RxJava2 and Retrofit2. I have my unit test working with a single call; however, I need to make N network calls.
What I want to avoid is having the observable call the next item in a queue. I am at the point where I need to implement my runnable. I have seen a bit about Maps, but I have not seen anyone use one as a queue. Also, I want to avoid having one item fail and have it report back as if ALL items failed, like the zip feature would do. Should I just write the nasty manager class that keeps track of a queue, or is there a cleaner way to send several hundred items?
NOTE: THE SOLUTION CANNOT DEPEND ON JAVA8 / LAMBDAS. That has proved to be way more work than is justified.
Note: all the items are the same kind of object.
@Test
public void test_Upload() {
    TestSubscriber<Record> testSubscriber = new TestSubscriber<>();
    ClientSecureDataToolKit clientSecureDataToolKit = ClientSecureDataToolKit.getClientSecureDataKit();
    clientSecureDataToolKit.putUserDataToSDK(mPayloadSecureDataToolKit).subscribe(testSubscriber);
    testSubscriber.awaitTerminalEvent();
    testSubscriber.assertNoErrors();
    testSubscriber.assertValueCount(1);
    testSubscriber.assertCompleted();
}
My helper to gather and send all my items
public class SecureDataToolKitHelper {

    private final static String TAG = "SecureDataToolKitHelper";
    private final static SimpleDateFormat timeStampSimpleDateFormat =
            new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");

    public static void uploadAll(Context context, RuntimeExceptionDao<EventModel, UUID> eventDao) {
        List<EventModel> eventModels = eventDao.queryForAll();
        QueryBuilder<EventModel, UUID> eventsQuery = eventDao.queryBuilder();
        String[] columns = {...};
        eventsQuery.selectColumns(columns);
        try {
            List<EventModel> models;
            models = eventsQuery.orderBy("timeStamp", false).query();
            if (models == null || models.size() == 0) {
                return;
            }
            ArrayList<PayloadSecureDataToolKit> toSendList = new ArrayList<>();
            for (EventModel eventModel : models) {
                try {
                    PayloadSecureDataToolKit payloadSecureDataToolKit = new PayloadSecureDataToolKit();
                    if (eventModel != null) {
                        // map my items ... not shown
                        toSendList.add(payloadSecureDataToolKit);
                    }
                } catch (Exception e) {
                    Log.e(TAG, "Error adding payload! " + e + " ..... Skipping entry");
                }
            }
            doAllNetworkCalls(toSendList);
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }
}
my Retrofit stuff
public class ClientSecureDataToolKit {

    private static ClientSecureDataToolKit mClientSecureDataToolKit;
    private static Retrofit mRetrofit;

    private ClientSecureDataToolKit() {
        mRetrofit = new Retrofit.Builder()
                .baseUrl(Utilities.getSecureDataToolkitURL())
                .addCallAdapterFactory(RxJavaCallAdapterFactory.create())
                .addConverterFactory(GsonConverterFactory.create())
                .build();
    }

    public static ClientSecureDataToolKit getClientSecureDataKit() {
        if (mClientSecureDataToolKit == null) {
            mClientSecureDataToolKit = new ClientSecureDataToolKit();
        }
        return mClientSecureDataToolKit;
    }

    public Observable<Record> putUserDataToSDK(PayloadSecureDataToolKit payloadSecureDataToolKit) {
        InterfaceSecureDataToolKit interfaceSecureDataToolKit = mRetrofit.create(InterfaceSecureDataToolKit.class);
        Observable<Record> observable = interfaceSecureDataToolKit.putRecord(NetworkUtils.SECURE_DATA_TOOL_KIT_AUTH, payloadSecureDataToolKit);
        return observable;
    }
}

public interface InterfaceSecureDataToolKit {

    @Headers({
            "Content-Type: application/json"
    })
    @POST("/api/create")
    Observable<Record> putRecord(@Query("api_token") String api_token, @Body PayloadSecureDataToolKit payloadSecureDataToolKit);
}
Update: I have been trying to apply this answer, with not much luck, and I am running out of steam for tonight. I am trying to implement it as a unit test, like I did for the original single-item call. It looks like something is not right with the use of lambdas, maybe.
public class RxJavaBatchTest {

    Context context;
    final static List<EventModel> models = new ArrayList<>();

    @Before
    public void before() throws Exception {
        context = new MockContext();
        EventModel eventModel = new EventModel();
        //manually set all my eventmodel data here.. not shown
        eventModel.setSampleId("SAMPLE0");
        models.add(eventModel);
        eventModel.setSampleId("SAMPLE1");
        models.add(eventModel);
        eventModel.setSampleId("SAMPLE3");
        models.add(eventModel);
    }

    @Test
    public void testSetupData() {
        Assert.assertEquals(3, models.size());
    }

    @Test
    public void testBatchSDK_Upload() {
        Callable<List<EventModel>> callable = new Callable<List<EventModel>>() {
            @Override
            public List<EventModel> call() throws Exception {
                return models;
            }
        };
        Observable.fromCallable(callable)
                .flatMapIterable(models -> models)
                .flatMap(eventModel -> {
                    PayloadSecureDataToolKit payloadSecureDataToolKit = new PayloadSecureDataToolKit(eventModel);
                    return doNetworkCall(payloadSecureDataToolKit) // I assume this is just my normal network call.. I am getting incompatibility errors when I apply a testsubscriber...
                            .subscribeOn(Schedulers.io());
                }, true, 1);
    }

    private Observable<Record> doNetworkCall(PayloadSecureDataToolKit payloadSecureDataToolKit) {
        ClientSecureDataToolKit clientSecureDataToolKit = ClientSecureDataToolKit.getClientSecureDataKit();
        Observable observable = clientSecureDataToolKit.putUserDataToSDK(payloadSecureDataToolKit); //.subscribe((Observer<? super Record>) testSubscriber);
        return observable;
    }
}
The result is:
An exception has occurred in the compiler (1.8.0_112-release). Please file a bug against the Java compiler via the Java bug reporting page (http://bugreport.java.com) after checking the Bug Database (http://bugs.java.com) for duplicates. Include your program and the following diagnostic in your report. Thank you.
com.sun.tools.javac.code.Symbol$CompletionFailure: class file for java.lang.invoke.MethodType not found
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':app:compile<MyBuildFlavorhere>UnitTestJavaWithJavac'.
> Compilation failed; see the compiler error output for details.
Edit: I am no longer trying lambdas. Even after setting up the path on my Mac, pointing JAVA_HOME to 1.8, etc., I could not get it to work. If this were a newer project I would push harder, but as this is an inherited Android application written by web developers trying Android, it is just not a great option, nor is it worth the time sink to get it working. I am already days into this assignment instead of the half day it should have taken.
I could not find a good non-lambda flatMap example. I tried it myself and it was getting messy.
If I understand you correctly, you want to make your calls in parallel?
So the rx-y way of doing this would be something like:
Observable.fromCallable(() -> eventsQuery.orderBy("timeStamp", false).query())
        .flatMapIterable(models -> models)
        .flatMap(model -> {
            // map your model
            // avoid throwing exceptions in a chain, just return Observable.error(e) if you really need to
            // try to wrap your methods that throw exceptions in an Observable via Observable.fromCallable()
            return doNetworkCall(someParameter)
                    .subscribeOn(Schedulers.io());
        }, true /* because you don't want to terminate the stream if an error occurs */,
           maxConcurrent /* number of concurrent calls, typically available processors + 1 */)
        .subscribe(result -> { /* handle result */ }, error -> { /* handle error */ });
In your ClientSecureDataToolKit, move this part into the constructor:
InterfaceSecureDataToolKit interfaceSecureDataToolKit = mRetrofit.create(InterfaceSecureDataToolKit.class);
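Since the question rules out Java 8 lambdas, here is a hedged sketch of the same chain written with anonymous classes; it assumes RxJava 2 types (io.reactivex.Observable, io.reactivex.functions.Function and Consumer, io.reactivex.schedulers.Schedulers), reuses the names from the question (EventModel, PayloadSecureDataToolKit, Record, doNetworkCall), and picks 4 as an illustrative concurrency cap:

Observable.fromCallable(new Callable<List<EventModel>>() {
    @Override
    public List<EventModel> call() throws Exception {
        return eventsQuery.orderBy("timeStamp", false).query();
    }
})
.flatMapIterable(new Function<List<EventModel>, Iterable<EventModel>>() {
    @Override
    public Iterable<EventModel> apply(List<EventModel> models) {
        return models;
    }
})
.flatMap(new Function<EventModel, Observable<Record>>() {
    @Override
    public Observable<Record> apply(EventModel model) {
        // map the model to a payload, then run the network call on the IO scheduler
        PayloadSecureDataToolKit payload = new PayloadSecureDataToolKit(model);
        return doNetworkCall(payload).subscribeOn(Schedulers.io());
    }
}, true /* delay errors so one failure does not abort the whole batch */,
   4 /* illustrative cap on concurrent calls */)
.subscribe(new Consumer<Record>() {
    @Override
    public void accept(Record record) {
        // handle each successful upload
    }
}, new Consumer<Throwable>() {
    @Override
    public void accept(Throwable error) {
        // with delayErrors set to true, errors surface after the rest have completed
    }
});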

RxJava with vertx: can't have multiple subscriptions exception

I'm trying to avoid Vert.x callback hell with RxJava.
But I get "rx.exceptions.OnErrorNotImplementedException: Cannot have multiple subscriptions". What's wrong here?
public class ShouldBeBetterSetter extends AbstractVerticle {

    @Override
    public void start(Future<Void> startFuture) throws Exception {
        Func1<AsyncMap<String, Long>, Observable<Void>> obtainAndPutValueToMap = stringLongAsyncMap -> {
            Long value = System.currentTimeMillis();
            return stringLongAsyncMap.putObservable("timestamp", value)
                    .doOnError(Throwable::printStackTrace)
                    .doOnNext(aVoid -> System.out.println("succesfully putted"));
        };

        Observable<AsyncMap<String, Long>> clusteredMapObservable =
                vertx.sharedData().<String, Long>getClusterWideMapObservable("mymap")
                        .doOnError(Throwable::printStackTrace);

        vertx.periodicStream(3000).toObservable()
                .flatMap(l -> clusteredMapObservable.flatMap(obtainAndPutValueToMap))
                .forEach(o -> {
                    System.out.println("just printing.");
                });
    }
}
Working Verticle (without Rx) can be found here:
https://gist.github.com/IvanZelenskyy/9d50de8980b7bdf1e959e19593f7ce4a
vertx.sharedData().getClusterWideMapObservable("mymap") returns an observable which supports a single subscriber only, hence the exception. One solution worth a try is:
Observable<AsyncMap<String, Long>> clusteredMapObservable =
        Observable.defer(
                () -> vertx.sharedData().<String, Long>getClusterWideMapObservable("mymap")
        );
That way, every time clusteredMapObservable.flatMap() is called, it will subscribe to a new observable returned by Observable.defer().
EDIT
In case it's OK to use the same AsyncMap, as pointed out by @Ivan Zelenskyy, the solution can be:
Observable<AsyncMap<String, Long>> clusteredMapObservable =
        vertx.sharedData().<String, Long>getClusterWideMapObservable("mymap").cache();
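A hedged sketch of how the cached observable would plug back into the original verticle: with cache(), the cluster-wide lookup runs once on the first subscription and the resulting AsyncMap is replayed to every later periodic tick (names taken from the question):

Observable<AsyncMap<String, Long>> clusteredMapObservable =
        vertx.sharedData().<String, Long>getClusterWideMapObservable("mymap")
                .doOnError(Throwable::printStackTrace)
                .cache();   // subscribe upstream once, replay the AsyncMap to later subscribers

vertx.periodicStream(3000).toObservable()
        // each tick reuses the cached AsyncMap instead of re-subscribing to the lookup
        .flatMap(l -> clusteredMapObservable.flatMap(obtainAndPutValueToMap))
        .forEach(o -> System.out.println("just printing."));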
What's happening is that on each periodic emission, the forEach is re-subscribing to the clusteredMapObservable variable you defined above.
To fix it, just move the call to vertx.sharedData().<String,Long>getClusterWideMapObservable("mymap") inside your periodic stream flatMap.
Something like this:
vertx.periodicStream(3000).toObservable()
        .flatMap(l -> vertx.sharedData().<String, Long>getClusterWideMapObservable("mymap")
                .doOnError(Throwable::printStackTrace)
                .flatMap(obtainAndPutValueToMap))
        .forEach(o -> {
            System.out.println("just printing.");
        });
UPDATE
If you don't like lambda-in-lambda, then don't use it. Here's an update without it:
vertx.periodicStream(3000).toObservable()
        .flatMap(l -> {
            return vertx.sharedData().<String, Long>getClusterWideMapObservable("mymap");
        })
        .doOnError(Throwable::printStackTrace)
        .flatMap(obtainAndPutValueToMap)
        .forEach(o -> {
            System.out.println("just printing.");
        });
PS: your call to .flatMap(obtainAndPutValueToMap) is also lambda-in-lambda; you've just moved it into a function.
