Execute first emitted element of reactive chain - Java

I am writing Spring WebFlux reactive code:
.flatMap(r -> save1)
.flatMapMany(r -> save2) //save a flux of elements
.flatMap(r -> save3) //need r.getId
.subscribe();
I want to control the emission of events and only run save3 once, for the first element emitted by the previous step (flatMapMany), instead of N times.
So in summary, I need to save N elements, take the first save result and ignore the others. I need the result of save2 to pass as an argument to the save3 method.
thanks,

If I understood the problem correctly you can use .next() to "convert" the Flux into a Mono by taking the first emitted element and then cancelling the subscription.
.flatMap(r -> save1)
.flatMapMany(r -> save2) //save a flux of elements
.next()
.flatMap(r -> save3) //need r.getId
Here is a test for the above flow that shows that only "2.1" was handled in save3
@Test
void testFlow() {
    var flow = Mono.just("1")
            .flatMap(r -> save1(r))
            .flatMapMany(r -> save2(r))
            .next()
            .flatMap(r -> save3(r));

    StepVerifier.create(flow)
            .expectNext("2.1")
            .verifyComplete();
}
where
private Mono<String> save3(String r) {
    return Mono.just(r);
}

private Flux<String> save2(String r) {
    return Flux.just("2.1", "2.2", "2.3");
}

private Mono<String> save1(String r) {
    return Mono.just(r);
}

But without a subscribe() it will not execute the entire chain (save1, save2), right, or am I confused?
Imagine that I have 3 Mono objects to save in save2:
-> save 2.1
-> save 2.2
-> save 2.3
After the 2.1 save operation (first element) -> get the response and save only once:
-> save 3
instead of saving 3 times:
-> save 3
-> save 3
-> save 3
So the idea is to stop chain propagation so that saved objects are not duplicated in the DB.
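Two follow-up notes, offered as a sketch rather than a definitive answer. First, StepVerifier subscribes to the flow when verifyComplete() is called, so save1 and save2 do execute in the test above even without an explicit subscribe(). Second, next() cancels the upstream once the first element arrives, so the remaining save2 operations may not complete; if all N elements must still be saved and save3 should run only once with the first result, collectList() is one way to express that (assuming the same save1/save2/save3 signatures as in the test, and that save2 emits at least one element):
Mono.just("1")
        .flatMap(r -> save1(r))
        .flatMapMany(r -> save2(r))            // saves and emits all N elements
        .collectList()                         // wait for every save2 element to complete
        .flatMap(saved -> save3(saved.get(0))) // call save3 exactly once, with the first element
        .subscribe();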

Related

How to save method returned mono in Reactor's Context to use it the next time the method is called in this reactor stream?

I have a method that returns data from some repo. Of course, this data can change at any given moment. I want to ensure that the data returned from the method remains consistent during a chain of operations (the method can also be called from another component during the chain). For example:
fetchDataThatCanBeChanged()
    .flatMap(...)
    .doOnNext(...)
    .doOnSuccess(...)
    .map(...)
    .zipWith(...)
    .flatMap(...)
    .then(fetchDataThatCanBeChanged())......
I want to be sure that the second call to fetchDataThatCanBeChanged() returns the same data as the first.
I tried to wrap fetchDataThatCanBeChanged() in a method using Mono.deferContextual:
private Mono<Object> fetchDataUsingCache() {
    return fetchDataThatCanBeChanged()
            .flatMap(s ->
                    Mono.deferContextual(ctx -> {
                        System.out.println("cached snapshot=" + ctx.getOrEmpty("KEY"));
                        return Mono.just(s);
                    }).contextWrite(ctx ->
                            ctx.put("KEY", ctx.getOrDefault("KEY", s))
                    ));
}
and call it twice:
fetchDataUsingCache()
    .flatMap(d -> change(d))
    .doOnNext(p -> saveChangedDataInRepo(p))
    .delayElement(Duration.ofMillis(2000))
    .then(fetchDataUsingCache())
    .block();
but fetchDataThatCanBeChanged() is only executed once and the returned data is the updated one.
Any idea for a solution? Many thanks in advance!
Finally, I wrapped the code where I need the old fetched data with Mono.deferContextual and wrote the data to the context like this:
fetchDataThatCanBeChanged()
    .flatMap(s -> Mono.deferContextual(ctx ->
                    changeDataInRepo()
                            .doOnSuccess(b -> fetchDataThatCanBeChanged())
                            .thenReturn(doWhateverNeedWithOldData(ctx.getOrEmpty("KEY"))))
            .contextWrite(ctx -> ctx.put("KEY", s)))
    .block();
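For what it's worth, a simpler sketch that would also pin the first snapshot, offered as a suggestion rather than what the thread settled on: cache() turns the Mono into a replayable source, so re-subscribing to it later in the chain returns the same data instead of querying the repository again (change() and saveChangedDataInRepo() are the methods from the question above):
Mono<Object> snapshot = fetchDataThatCanBeChanged().cache();

snapshot
    .flatMap(d -> change(d))
    .doOnNext(p -> saveChangedDataInRepo(p))
    .delayElement(Duration.ofMillis(2000))
    .then(snapshot)   // replays the cached value; the repository is not called a second time
    .block();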

Conditionally combine Mono with Flux

I need to combine the results from two reactive Publishers, a Mono and a Flux. I tried to do it with the zip and join functions, but I was not able to fulfil two specific conditions:
the result should contain as many elements as the Flux emits, but the corresponding Mono source should be called only once (this condition alone can be implemented with join)
when the Flux is empty, the chain should complete without waiting for the Mono element
The solution for the first condition is presented in the Combine Mono with Flux entry (pasted below). But I was not able to achieve the second condition without blocking the chain, which I would like to avoid.
Flux<Integer> flux = Flux.concat(Mono.just(1).delayElement(Duration.ofMillis(100)),
        Mono.just(2).delayElement(Duration.ofMillis(500))).log();
Mono<String> mono = Mono.just("a").delayElement(Duration.ofMillis(50)).log();

List<String> list = flux.join(mono, (v1) -> Flux.never(), (v2) -> Flux.never(), (x, y) -> {
    return x + y;
}).collectList().block();

System.out.println(list);
If you want to cancel the entire operation when the Flux is empty, you could do the below:
Flux<Integer> flux = Flux.concat(Mono.just(1).delayElement(Duration.ofMillis(100)),
        Mono.just(2).delayElement(Duration.ofMillis(500))).log();
//Uncomment below and comment out above statement for empty flux
//Flux<Integer> flux = Flux.empty();
Mono<String> mono = Mono.just("a").delayElement(Duration.ofMillis(5000)).log();

//Throw exception if flux is empty
flux = flux.switchIfEmpty(Flux.error(IllegalStateException::new));

List<String> list = flux
        .join(mono, s -> Flux.never(), s -> Flux.never(), (x, y) -> x + y)
        //Catch exception and return nothing
        .onErrorResume(s -> Flux.empty())
        .collectList().block();

System.out.println(list);
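An alternative sketch, offered as a suggestion rather than part of the answer above: cache() makes the Mono compute its value only once, and with an empty Flux the inner Mono is never subscribed, so the chain completes immediately without waiting:
Mono<String> cached = mono.cache();

List<String> list = flux
        .concatMap(x -> cached.map(y -> x + y))   // Mono resolved once; flux order preserved
        .collectList()
        .block();

System.out.println(list);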
You could do something like the following if you want the Mono to complete but don't want join to hang:
DirectProcessor<Integer> processor = DirectProcessor.create();
//Could omit the sink and use processor::onComplete in place of sink::complete,
//but the serialized sink is typically recommended as it provides better thread safety
FluxSink<Integer> sink = processor.serialize().sink();

Flux<Integer> flux = Flux.concat(Mono.just(1).delayElement(Duration.ofMillis(100)),
        Mono.just(2).delayElement(Duration.ofMillis(500))).log();
//Uncomment below and comment out above statement for empty flux
//Flux<Integer> flux = Flux.empty();
Mono<String> mono = Mono.just("a").delayElement(Duration.ofMillis(5000)).log();

List<String> list = flux
        .doOnComplete(sink::complete)
        .join(mono, s -> processor, s -> processor, (x, y) -> x + y)
        .collectList().block();

System.out.println(list);
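On recent Reactor versions (3.4+) DirectProcessor is deprecated; here is a roughly equivalent sketch using the Sinks API, offered as an assumption rather than part of the original answer:
Sinks.Many<Integer> endSignal = Sinks.many().multicast().directBestEffort();

Flux<Integer> flux = Flux.concat(Mono.just(1).delayElement(Duration.ofMillis(100)),
        Mono.just(2).delayElement(Duration.ofMillis(500))).log();
Mono<String> mono = Mono.just("a").delayElement(Duration.ofMillis(5000)).log();

List<String> list = flux
        .doOnComplete(endSignal::tryEmitComplete)   // end the join windows when the flux completes
        .join(mono, s -> endSignal.asFlux(), s -> endSignal.asFlux(), (x, y) -> x + y)
        .collectList().block();

System.out.println(list);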

Spring WebFlux: when to use map over flatMap

I am new to Java reactive programming and I have started learning it with Spring WebFlux.
One thing that keeps bothering me is that map is synchronous while flatMap is asynchronous.
I am learning from the book Spring in Action, chapter 10, where I see the example below.
Flux<Player> playerFlux = Flux
        .just("Michael Jordan", "Scottie Pippen", "Steve Kerr")
        .map(n -> {
            String[] split = n.split("\\s");
            return new Player(split[0], split[1]);
        });
and on the very next line it says
What’s important to understand about map() is that the mapping is performed synchronously,
as each item is published by the source Flux. If you want to perform the
mapping asynchronously, you should consider the flatMap() operation.
and showed this example
Flux<Player> playerFlux = Flux
        .just("Michael Jordan", "Scottie Pippen", "Steve Kerr")
        .flatMap(n -> Mono.just(n)
                .map(p -> {
                    String[] split = p.split("\\s");
                    return new Player(split[0], split[1]);
                })
                .subscribeOn(Schedulers.parallel())
        );
Alright, I think I got the point, but when I was practicing I realized I actually hadn't. A lot of questions started to rise in my head when I was trying to populate my class fields.
@Data
@Accessors(chain = true)
public class Response {
    private boolean success;
    private String message;
    private List data;
}
Here is how I was trying to populate it:
Mono<Response> response = Mono.just(new Response())
        .map(res -> res.setSuccess(true))
        .map(res -> res.setMessage("Success"))
        .map(res -> res.setData(new ArrayList()));
After writing this code, one line from the book blinked in my head: map is synchronous, so will this be blocking code? Could it be bad for the entire application? I once read that in a non-blocking application a single piece of blocking code can ruin the entire app.
So I decided to convert it to flatMap, and according to the book it should look like this.
Mono<Response> response1 = Mono.just(new Response())
        .flatMap(m -> Mono.just(m)
                .map(res -> res.setSuccess(true))
        )
        .flatMap(m -> Mono.just(m)
                .map(res -> res.setMessage("Success"))
        )
        .flatMap(m -> Mono.just(m)
                .map(res -> res.setData(new ArrayList()))
        );
Both examples output the same result, so what is the difference here? Does it mean we should always use flatMap?
Thanks
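For context, a minimal sketch of the distinction the quoted passage is drawing; this is an illustration, not code from the book. Cheap in-memory work such as the setter calls above is not blocking in the harmful sense, even inside map. The real concern is running genuinely blocking work on an event-loop thread; the hypothetical blockingLookup() below stands in for JDBC or file I/O, and flatMap plus subscribeOn moves it off that thread:
Flux<Player> playerFlux = Flux
        .just("Michael Jordan", "Scottie Pippen", "Steve Kerr")
        .flatMap(n -> Mono.fromCallable(() -> blockingLookup(n))   // wrap the blocking call lazily
                .subscribeOn(Schedulers.boundedElastic()));        // run it on a scheduler meant for blocking work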

Zip reactive flow with itself

I'm using Java Reactor Core, and I have a reactive Flux of objects. For each object of the Flux I need to do an external query that will return one different object for each input. The newly generated Flux then needs to be zipped with the original one, so the items of the two Flux streams must be synchronized and generated in the same order.
I'm just re-using the same flow twice, like this:
Flux<MyObj> aTest = Flux.fromIterable(aListOfObj);

Flux<String> myObjLists = aTest.map(o -> MyRepository.findById(o.name)).map(o -> {
    if (!o.isPresent()) {
        System.out.println("Fallback to empty-object");
        return "";
    }
    List<String> l = o.get();
    if (l.size() > 1) {
        System.out.println("that's bad");
    }
    return l.get(0);
});

Flux.zip(aTest, myObjLists, (a, b) -> doSomethingWith(a, b));
Is it the right way to do it? If myObjLists emits an error, how do I prevent the zip phase from skipping the failing iteration?
I've finally opted for using Tuples and Optionals (to prevent null items that would break the flux), so that I don't need to re-use the initial Flux:
Flux<Tuple2<MyObj, String>> myObjLists = Flux.fromIterable(aListOfObj)
        .map(o -> Tuples.of(o, Optional.ofNullable(MyRepository.findById(o.name))))
        .flatMap(t -> {
            if (!t.getT2().isPresent()) {
                System.out.println("Discarding this item");
                return Mono.empty();
            }
            List<String> l = t.getT2().get();
            if (l.size() > 1) {
                System.out.println("that's bad");
            }
            return Mono.just(Tuples.of(t.getT1(), l.get(0)));
        });

myObjLists.map(t -> doSomethingWith(t.getT1(), t.getT2()));
Note that the flatMap could be replaced with a map().filter() pair, removing tuples with missing Optional items; see the sketch below.
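A sketch of that map().filter() variant, under the same assumptions about MyRepository and doSomethingWith as above:
Flux.fromIterable(aListOfObj)
        .map(o -> Tuples.of(o, Optional.ofNullable(MyRepository.findById(o.name))))
        .filter(t -> t.getT2().isPresent())                       // drop items with no repository result
        .map(t -> Tuples.of(t.getT1(), t.getT2().get().get(0)))   // unwrap to the first returned string
        .map(t -> doSomethingWith(t.getT1(), t.getT2()));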

RxJava filter and emit other items

Is it possible to filter and continue emitting items like below?
My code, which calls subscribe twice:
Observable<Map.Entry<String, ArrayList<MockOverview>>> requestEntries =
        this.requestView.request(request)
                .map(HashMap::entrySet)
                .flatMapIterable(entries -> entries);

requestEntries.filter(entry -> entry.getKey().equals("featured"))
        .map((Func1<Map.Entry<String, ArrayList<MockOverview>>, List<MockOverview>>) Map.Entry::getValue)
        .subscribe(mockOverviews -> {
            Log.i("subscrive", "featured");
        });

requestEntries.filter(entry -> entry.getKey().equals("done"))
        .map((Func1<Map.Entry<String, ArrayList<MockOverview>>, List<MockOverview>>) Map.Entry::getValue)
        .subscribe(mockOverviews -> {
            Log.i("subscrive", "featured");
        });
What I want:
requestEntries.filter(entry -> entry.getKey().equals("featured"))
        .map((Func1<Map.Entry<String, ArrayList<MockOverview>>, List<MockOverview>>) Map.Entry::getValue)
        .subscribe(mockOverviews -> {
        })
        .filter(entry -> entry.getKey().equals("done"))
        .map((Func1<Map.Entry<String, ArrayList<MockOverview>>, List<MockOverview>>) Map.Entry::getValue)
        .subscribe(mockOverviews -> {
        });
By the looks of things your second version is not equivalent to your first: the former looks at the requestEntries stream twice, filters on the featured and done keys respectively, and does its own thing with each. Your second version, however, first filters on featured, then does some transformations and side effects, and only then filters on done; but the original stream of entries is not in scope at all in that second filter lambda.
What you need to do here is use publish(<lambda>) on requestEntries and, inside the lambda, do the stuff from your first version, use doOnNext instead of subscribe, merge the streams and return that combined stream. Then outside of the publish you subscribe once (and do nothing in there), or go on and use the result of your stream somewhere else.
requestEntries.publish(re -> {
    Observable<...> x = re.filter(...<featured>...).map(...).doOnNext(...Log.i(...));
    Observable<...> y = re.filter(...<done>...).map(...).doOnNext(...Log.i(...));
    return x.mergeWith(y);
})
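A filled-in sketch of that publish(...) approach, assuming the types and keys from the question (RxJava 1.x, entries of type Map.Entry<String, ArrayList<MockOverview>>):
requestEntries.publish(re -> {
    Observable<ArrayList<MockOverview>> featured = re
            .filter(entry -> entry.getKey().equals("featured"))
            .map(Map.Entry::getValue)
            .doOnNext(mockOverviews -> Log.i("subscribe", "featured"));
    Observable<ArrayList<MockOverview>> done = re
            .filter(entry -> entry.getKey().equals("done"))
            .map(Map.Entry::getValue)
            .doOnNext(mockOverviews -> Log.i("subscribe", "done"));
    return featured.mergeWith(done);
}).subscribe();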
You can use doOnNext in the place of the first subscribe()
requestEntry.filter(v -> ...)
        .map(v -> ...)
        .doOnNext(v -> ...)
        .filter(v -> ...)
        .map(v -> ...)
        .subscribe(...)
or use publish(Func1):
requestEntry.filter(v -> ...)
        .map(v -> ...)
        .publish(o -> {
            o.subscribe(...);
            return o;
        })
        .filter(v -> ...)
        .map(v -> ...)
        .subscribe(...)
