Spring WebFlux: when to use map over flatMap (Java)

I am new to Java reactive programming and I have started learning it with Spring WebFlux.
One thing that has always bothered me is that map is synchronous while flatMap is asynchronous.
I am learning from the book Spring in Action, chapter 10, and I see the example below:
Flux<Player> playerFlux = Flux
    .just("Michael Jordan", "Scottie Pippen", "Steve Kerr")
    .map(n -> {
        String[] split = n.split("\\s");
        return new Player(split[0], split[1]);
    });
and right after it the book says:
What’s important to understand about map() is that the mapping is performed synchronously,
as each item is published by the source Flux. If you want to perform the
mapping asynchronously, you should consider the flatMap() operation.
and shows this example:
Flux<Player> playerFlux = Flux
    .just("Michael Jordan", "Scottie Pippen", "Steve Kerr")
    .flatMap(n -> Mono.just(n)
        .map(p -> {
            String[] split = p.split("\\s");
            return new Player(split[0], split[1]);
        })
        .subscribeOn(Schedulers.parallel())
    );
Alright, I thought I got the point, but when I was practicing I realized I actually didn't. Lots of questions started to arise in my head when I was trying to populate my class fields.
@Data
@Accessors(chain = true)
public class Response {
    private boolean success;
    private String message;
    private List data;
}
Here is how I was trying to populate it:
// the chained calls compile because @Accessors(chain = true) makes each setter return this
Mono<Response> response = Mono.just(new Response())
    .map(res -> res.setSuccess(true))
    .map(res -> res.setMessage("Success"))
    .map(res -> res.setData(new ArrayList()));
After writing this code, that line from the book flashed in my head: map is synchronous. Will this be blocking code? Could it be bad for the entire application? I once read that in a non-blocking application, a single piece of blocking code can ruin the whole app.
So I decided to convert it to flatMap, and according to the book it should look like this:
Mono<Response> response1 = Mono.just(new Response())
    .flatMap(m -> Mono.just(m)
        .map(res -> res.setSuccess(true))
    )
    .flatMap(m -> Mono.just(m)
        .map(res -> res.setMessage("Success"))
    )
    .flatMap(m -> Mono.just(m)
        .map(res -> res.setData(new ArrayList()))
    );
Both examples produce the same output, so what is the difference here? Should we always use flatMap?
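My current guess is that flatMap only really pays off when the mapping step itself returns a Publisher, for example a non-blocking lookup. Here is a minimal sketch of what I mean, where findPlayer is a hypothetical reactive lookup returning Mono<Player>:
Flux<Player> playerFlux = Flux
    .just("Michael Jordan", "Scottie Pippen", "Steve Kerr")
    // flatMap flattens each inner Mono<Player> into the outer Flux<Player>;
    // map here would instead produce an awkward Flux<Mono<Player>>
    .flatMap(n -> findPlayer(n));
Is that the right way to think about it, or am I missing something?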
Thanks

Related

Execute first emitted element of reactive chain

I am writing Spring WebFlux reactive code:
.flatMap(r -> save1)
.flatMapMany(r -> save2) //save a flux of elements
.flatMap(r -> save3) //need r.getId
.subscribe();
I want to control the emission of events and perform save3 only for the first element emitted by the previous step (flatMapMany), instead of N times.
So, in summary: I need to save N elements, catch the first save, and ignore the others. I need the result of save2 to pass as an argument to the save3 method.
Thanks,
If I understood the problem correctly, you can use .next() to "convert" the Flux into a Mono by taking the first emitted element and then cancelling the subscription.
.flatMap(r -> save1)
.flatMapMany(r -> save2) //save a flux of elements
.next()
.flatMap(r -> save3) //need r.getId
Here is a test for the above flow that shows that only "2.1" was handled in save3:
@Test
void testFlow() {
    var flow = Mono.just("1")
        .flatMap(r -> save1(r))
        .flatMapMany(r -> save2(r))
        .next()
        .flatMap(r -> save3(r));

    StepVerifier.create(flow)
        .expectNext("2.1")
        .verifyComplete();
}
where
private Mono<String> save3(String r) {
    return Mono.just(r);
}

private Flux<String> save2(String r) {
    return Flux.just("2.1", "2.2", "2.3");
}

private Mono<String> save1(String r) {
    return Mono.just(r);
}
But without a subscribe() it will not execute the entire chain (save1, save2), right? Or am I confused?
Imagine that I have 3 Mono objects to save in save2:
-> save 2.1
-> save 2.2
-> save 2.3
After the 2.1 save operation (the first element), get the response and save only once:
-> save 3
instead of saving 3 times:
-> save 3
-> save 3
-> save 3
So... the idea is to stop chain propagation so that saved objects are not duplicated in the DB.
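Maybe what I actually need is something like collectList(), so that all N save2 operations still run but save3 happens only once, with the first result? A sketch of the idea (assuming save2 emits at least one element):
.flatMap(r -> save1(r))
.flatMapMany(r -> save2(r))
.collectList()                              // waits for all N inner saves to finish
.flatMap(results -> save3(results.get(0))) // then calls save3 once, with the first result
.subscribe();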

How to save a method's returned Mono in Reactor's Context to use it the next time the method is called in the reactive stream?

I have a method that returns data from some repo. Of course, this data can change at any given moment. I want to ensure that the data returned from the method remains consistent during a chain of operations (the method can also be called from another component during the chain). For example:
fetchDataThatCanBeChanged()
    .flatMap(...)
    .doOnNext(...)
    .doOnSuccess(...)
    .map(...)
    .zipWith(...)
    .flatMap(...)
    .then(fetchDataThatCanBeChanged())...
I want to be sure that the second call to fetchDataThatCanBeChanged() returns the same data as the first.
I tried to wrap fetchDataThatCanBeChanged() in a method using Mono.deferContextual:
private Mono<Object> fetchDataUsingCache() {
    return fetchDataThatCanBeChanged()
        .flatMap(s ->
            Mono.deferContextual(ctx -> {
                System.out.println("cached snapshot=" + ctx.getOrEmpty("KEY"));
                return Mono.just(s);
            }).contextWrite(ctx ->
                ctx.put("KEY", ctx.getOrDefault("KEY", s))
            ));
}
and call it twice:
fetchDataUsingCache()
    .flatMap(d -> change(d))
    .doOnNext(p -> saveChangedDataInRepo(p))
    .delayElement(Duration.ofMillis(2000))
    .then(fetchDataUsingCache()) // fetchDataUsingCache() is not an operator, so it is chained via then()
    .block();
but fetchDataThatCanBeChanged() is only executed once, and the data returned is the updated data.
Any idea for a solution?
Many thanks in advance!
Finally, I wrapped the code where I need the old fetched data with Mono.deferContextual and wrote the data to the context like this:
fetchDataThatCanBeChanged()
    .flatMap(s -> Mono.deferContextual(ctx ->
            changeDataInRepo()
                .doOnSuccess(b -> fetchDataThatCanBeChanged())
                .thenReturn(doWhateverNeedWithOldData(ctx.getOrEmpty("KEY"))))
        .contextWrite(ctx -> ctx.put("KEY", s)))
    .block();
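For reference, a simpler alternative sketch that memoizes the first fetch with Mono#cache(), so that re-subscribing replays the original snapshot instead of hitting the repo again (change and saveChangedDataInRepo are the methods from above):
Mono<Object> snapshot = fetchDataThatCanBeChanged().cache();

snapshot
    .flatMap(d -> change(d))
    .doOnNext(p -> saveChangedDataInRepo(p))
    .then(snapshot) // cache() replays the first emitted value, not a fresh fetch
    .block();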

Java 8 stream filter behaves strangely (does not filter)

Can anybody please help me with this issue?
I'm using the Java 8 Stream API's filter method and don't get what I expect.
Can someone explain why this block of code doesn't work, i.e. the first filter doesn't filter:
List<Participant> participants = participantRepository.findAllByConferenceId(conferenceId);
participants
    .stream()
    .filter(participant -> partIdSet.contains(participant.getParticipantId()))
    .filter(participant -> !ParticipantStatusCode.DISCONNECTED.equals(participant.getStatusCode()))
    .forEach(p -> {
        p.setStatusCode(ParticipantStatusCode.DISCONNECTED);
        p.getActivity().forEach(activity -> activity.setEndDatetime(TimeUtils.getTime()));
    });
but used this way, filter works properly:
List<Participant> participants = participantRepository.findAllByConferenceId(conferenceId);
participants = participants
    .stream()
    .filter(part -> partIdSet.contains(part.getParticipantId()))
    .filter(participant -> !ParticipantStatusCode.DISCONNECTED.equals(participant.getStatusCode()))
    .collect(Collectors.toList());
participants.forEach(p -> {
    p.setStatusCode(ParticipantStatusCode.DISCONNECTED);
    p.getActivity().forEach(activity -> activity.setEndDatetime(TimeUtils.getTime()));
});
And please, if you have any other solution that would be nicer and more readable, don't hesitate to suggest it.
The reason is that you are mutating the input of a consumer in both examples, which is not the intended way to use streams. Ideally you would want immutable participants, a way to create new participants, and to collect them afterwards:
List<Participant> processedParticipants = participants.stream()
    .filter(participant -> partIdSet.contains(participant.getParticipantId()))
    .filter(participant -> !ParticipantStatusCode.DISCONNECTED.equals(participant.getStatusCode()))
    .map(p -> new Participant(
        ParticipantStatusCode.DISCONNECTED,
        p.getActivity().stream()
            .map(...) // copy each activity, with its end time set
            .collect(Collectors.toList())))
    .collect(Collectors.toList());
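If you cannot make Participant immutable (say it is a managed JPA entity), then your second version is still the cleaner of the two patterns: collecting the matches first and mutating them in a separate forEach keeps the side effects out of the stream pipeline.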

Efficient way to use fork join pool with multiple parallel streams

I am using three streams that need to make HTTP requests. All of the calls are independent, so I use parallel streams and collect the results from the HTTP responses.
Currently I am using three separate parallel streams for these operations:
Map<String, ClassA> mapA = listOfClassX.stream().parallel()
    .map(item -> {
        ClassA instanceA = httpGetCall(item.id);
        return instanceA;
    })
    .collect(Collectors.toConcurrentMap(item -> item.id, item -> item));

Map<String, ClassB> mapB = listOfClassY.stream().parallel()
    .map(item -> {
        ClassB instanceB = httpGetCall(item.id);
        return instanceB;
    })
    .collect(Collectors.toConcurrentMap(item -> item.id, item -> item));

Map<String, ClassC> mapC = listOfClassZ.stream().parallel()
    .map(item -> {
        ClassC instanceC = httpGetCall(item.id);
        return instanceC;
    })
    .collect(Collectors.toConcurrentMap(item -> item.id, item -> item));
This runs the three parallel streams separately, one after another, even though each call is independent.
Will the common fork-join pool help optimize the use of the thread pool in this case?
Is there any other way to further optimize the performance of this code?
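One idea I have been considering is to start the three pipelines concurrently myself with CompletableFuture on a dedicated executor, so that the blocking HTTP calls do not tie up the common fork-join pool, but I am not sure it is the right approach. A rough sketch, showing only the first pipeline (httpGetCall and the lists are the same as above):
ExecutorService pool = Executors.newFixedThreadPool(3);

// each pipeline becomes its own task, so A, B and C run concurrently
CompletableFuture<Map<String, ClassA>> futureA = CompletableFuture.supplyAsync(
    () -> listOfClassX.parallelStream()
        .map(item -> httpGetCall(item.id))
        .collect(Collectors.toConcurrentMap(a -> a.id, a -> a)),
    pool);
// ... same pattern for listOfClassY/ClassB and listOfClassZ/ClassC ...

Map<String, ClassA> mapA = futureA.join(); // block until that pipeline completes
pool.shutdown();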

Concurrent processing of project reactor's flux

I'm very new to Project Reactor and reactive programming at large, so I'm probably doing something wrong. I'm struggling to build a flow that does the following:
Given a class Entity:
class Entity {
    private Map<String, String> items;

    public Map<String, String> getItems() {
        return items;
    }
}
read Entity from the DB (ListenableFuture<Entity> readEntity())
perform some parallel async processing on every item (boolean processItem(Map.Entry<String, String> item))
when all are finished, call doneProcessing (void doneProcessing(boolean b))
Currently my code is:
handler = this;

Mono
    .fromFuture(readEntity())
    .doOnError(t -> {
        notifyError("some err-msg", t);
        return;
    })
    .doOnSuccess(e -> log.info("Got the Entity: " + e))
    .flatMap(e -> Flux.fromIterable(e.getItems().entrySet()))
    .all(handler::processItem)
    .consume(handler::doneProcessing);
The thing works, but the handler::processItem calls don't run concurrently on all items. I tried using dispatchOn and publishOn with both the io and async SchedulerGroup and with various parameters, but the calls still run serially on one thread.
What am I doing wrong?
Apart from that, I'm sure the above can be improved in general, so any suggestion will be appreciated.
Thanks
You need another flatMap that forks and joins computation for each individual map element:
Mono.fromFuture(readEntity())
    .flatMap(v -> Flux.fromIterable(v.getItems().entrySet()))
    .flatMap(v -> Flux.just(v)
        .publishOn(SchedulerGroup.io())
        .doOnNext(handler::processItem))
    .consume(handler::doneProcessing);
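Note that SchedulerGroup and consume() come from a pre-3.0 Reactor API. In current Reactor 3 the same fork/join idea would look roughly like the sketch below; adapting the ListenableFuture to a CompletableFuture (readEntityAsCompletableFuture() is a hypothetical adapter) and the handler methods are carried over from the question:
Mono.fromFuture(readEntityAsCompletableFuture())
    .flatMapMany(e -> Flux.fromIterable(e.getItems().entrySet()))
    // fork: process each entry as its own inner publisher on a worker thread
    .flatMap(entry -> Mono.fromCallable(() -> handler.processItem(entry))
        .subscribeOn(Schedulers.boundedElastic()))
    // join: emits true only if every processItem call returned true
    .all(result -> result)
    .subscribe(handler::doneProcessing);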
