Is it possible to filter and continue emitting items, as shown below?
My code, which subscribes twice:
Observable<Map.Entry<String, ArrayList<MockOverview>>> requestEntries =
    this.requestView.request(request)
        .map(HashMap::entrySet)
        .flatMapIterable(entries -> entries);
requestEntries.filter(entry -> entry.getKey().equals("featured"))
    .map((Func1<Map.Entry<String, ArrayList<MockOverview>>, List<MockOverview>>) Map.Entry::getValue)
    .subscribe(mockOverviews -> {
        Log.i("subscribe", "featured");
    });
requestEntries.filter(entry -> entry.getKey().equals("done"))
    .map((Func1<Map.Entry<String, ArrayList<MockOverview>>, List<MockOverview>>) Map.Entry::getValue)
    .subscribe(mockOverviews -> {
        Log.i("subscribe", "done");
    });
What I want:
requestEntries.filter(entry -> entry.getKey().equals("featured"))
    .map((Func1<Map.Entry<String, ArrayList<MockOverview>>, List<MockOverview>>) Map.Entry::getValue)
    .subscribe(mockOverviews -> {
    })
    .filter(entry -> entry.getKey().equals("done"))
    .map((Func1<Map.Entry<String, ArrayList<MockOverview>>, List<MockOverview>>) Map.Entry::getValue)
    .subscribe(mockOverviews -> {
    });
By the looks of it, your second version is not equivalent to your first: the first looks at the requestEntries stream twice, filters on the featured and done keys respectively, and does its own thing with each. Your second version, however, first filters on featured, then applies some transformations and side effects, and only then filters on done. Moreover, the Observable<entryset> is not in scope at all in that second filter lambda.
What you need to do here is use publish(<lambda>) on requestEntries; inside the lambda, do the work from your first version but with doOnNext instead of subscribe, merge the streams, and return the combined stream. Then, outside the publish, you subscribe once (and do nothing there), or go on and use the result of your stream somewhere else.
requestEntries.publish(re -> {
    Observable<...> x = re.filter(...<featured>...).map(...).doOnNext(...Log.i(...));
    Observable<...> y = re.filter(...<done>...).map(...).doOnNext(...Log.i(...));
    return x.mergeWith(y);
})
You can use doOnNext in place of the first subscribe():
requestEntry.filter(v -> ...)
    .map(v -> ...)
    .doOnNext(v -> ...)
    .filter(v -> ...)
    .map(v -> ...)
    .subscribe(...)
or use publish(Func1):
requestEntry.filter(v -> ...)
    .map(v -> ...)
    .publish(o -> {
        o.subscribe(...);
        return o;
    })
    .filter(v -> ...)
    .map(v -> ...)
    .subscribe(...)
Related
I am writing Spring WebFlux reactive code:
.flatMap(r -> save1)
.flatMapMany(r -> save2) //save a flux of elements
.flatMap(r -> save3) //need r.getId
.subscribe();
I want to control the emission of events and save the save3 event only for the first element emitted by the previous step (flatMapMany), instead of N times.
So, in summary, I need to save N elements, catch the first save, and ignore the others. I need the result of save2 to pass as an argument to the save3 method.
Thanks.
If I understood the problem correctly, you can use .next() to "convert" the Flux into a Mono by taking the first emitted element and then cancelling the subscription.
.flatMap(r -> save1)
.flatMapMany(r -> save2) //save a flux of elements
.next()
.flatMap(r -> save3) //need r.getId
Here is a test for the above flow that shows that only "2.1" was handled in save3:
@Test
void testFlow() {
    var flow = Mono.just("1")
            .flatMap(r -> save1(r))
            .flatMapMany(r -> save2(r))
            .next()
            .flatMap(r -> save3(r));

    StepVerifier.create(flow)
            .expectNext("2.1")
            .verifyComplete();
}
where
private Mono<String> save3(String r) {
    return Mono.just(r);
}

private Flux<String> save2(String r) {
    return Flux.just("2.1", "2.2", "2.3");
}

private Mono<String> save1(String r) {
    return Mono.just(r);
}
But without a subscribe() it will not execute the entire chain (save1, save2), right? Or am I confused?
Imagine that I have 3 Mono objects to save in save2:
-> save 2.1
-> save 2.2
-> save 2.3
After the 2.1 save operation (the first element), I get the response and save only once:
-> save 3
instead of saving 3 times:
-> save 3
-> save 3
-> save 3
So the idea is to stop chain propagation, to avoid duplicating saved objects in the DB.
I previously wrote the following:
events.stream()
    .map(d -> cli.getItem(d.getValue()))
    .map(event -> Report.builder()
            .id(event.getId())
            .value(event.getValue())
            .build())
    .filter(r -> !excludesSet.contains(r.value))
    .forEach(r -> System.out.println(r.value));
getItem returned a single Item here. I have just changed getItem to getItems, which returns List<Item>, and I want to keep the same logic for every item. That would seem to require a foreach that wraps the original map, filter, and forEach calls under .map(d -> cli.getItem(d.getValue())). How could I do this?
Thanks!
This is a use case for flatMap:
events.stream()
    .flatMap(d -> cli.getItems(d.getValue()).stream())
    // everything below stays the same
flatMap allows you to map each item in a stream to multiple items.
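To make that concrete, here is a self-contained sketch in plain Java, where the hypothetical getItems stands in for cli.getItems: each key expands into several items, and the operations after flatMap see individual items rather than lists.

```java
import java.util.List;
import java.util.stream.Collectors;

public class FlatMapDemo {
    // Hypothetical stand-in for cli.getItems: each key yields several items
    static List<String> getItems(String key) {
        return List.of(key + "-a", key + "-b");
    }

    public static void main(String[] args) {
        List<String> result = List.of("x", "y").stream()
                .flatMap(k -> getItems(k).stream())   // one key -> many items
                .filter(item -> !item.endsWith("-b")) // downstream sees individual items
                .collect(Collectors.toList());
        System.out.println(result); // prints [x-a, y-a]
    }
}
```

Everything after the flatMap call (the map to Report, the filter, the forEach) can stay exactly as it was, because the stream once again carries single items.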
Can anybody please help me with this issue?
I'm using the Java 8 Stream API's filter method, and I don't get what I expect.
Can someone explain why this block of code doesn't work, i.e. why the first filter doesn't filter:
List<Participant> participants = participantRepository.findAllByConferenceId(conferenceId);
participants
    .stream()
    .filter(participant -> partIdSet.contains(participant.getParticipantId()))
    .filter(participant -> !ParticipantStatusCode.DISCONNECTED.equals(participant.getStatusCode()))
    .forEach(p -> {
        p.setStatusCode(ParticipantStatusCode.DISCONNECTED);
        p.getActivity().forEach(activity -> activity.setEndDatetime(TimeUtils.getTime()));
    });
but when I use filter this way, it works properly:
List<Participant> participants = participantRepository.findAllByConferenceId(conferenceId);
participants = participants
    .stream()
    .filter(part -> partIdSet.contains(part.getParticipantId()))
    .filter(participant -> !ParticipantStatusCode.DISCONNECTED.equals(participant.getStatusCode()))
    .collect(Collectors.toList());
participants.forEach(p -> {
    p.setStatusCode(ParticipantStatusCode.DISCONNECTED);
    p.getActivity().forEach(activity -> activity.setEndDatetime(TimeUtils.getTime()));
});
And please, if you have another solution that would be nicer and more readable, don't hesitate to suggest it.
The reason is that you are mutating the input of a consumer in both examples, which is not the intended way to use streams. Ideally you would want immutable participants, a way to create new participants, and to collect them afterwards:
List<Participant> processedParticipants = participants.stream()
    .filter(participant -> partIdSet.contains(participant.getParticipantId()))
    .filter(participant -> !ParticipantStatusCode.DISCONNECTED.equals(participant.getStatusCode()))
    .map(p -> new Participant(
            ParticipantStatusCode.DISCONNECTED,
            p.getActivity().stream()
                    .map(...)
                    .collect(Collectors.toList())))
    .collect(Collectors.toList());
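As a runnable sketch of that idea, using a simplified, hypothetical Participant record (plain strings stand in for the real status codes): matching participants are re-created with the new status instead of being mutated in place.

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class DisconnectDemo {
    // Simplified, hypothetical stand-in for the real Participant entity
    record Participant(String id, String statusCode) {}

    // Filter, then map each match to a *new* Participant instead of mutating it
    static List<Participant> disconnect(List<Participant> participants, Set<String> partIdSet) {
        return participants.stream()
                .filter(p -> partIdSet.contains(p.id()))
                .filter(p -> !"DISCONNECTED".equals(p.statusCode()))
                .map(p -> new Participant(p.id(), "DISCONNECTED"))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Participant> participants = List.of(
                new Participant("p1", "CONNECTED"),
                new Participant("p2", "DISCONNECTED"),
                new Participant("p3", "CONNECTED"));
        // Only p1 matches both filters and is re-created as DISCONNECTED;
        // the input objects are left untouched.
        System.out.println(disconnect(participants, Set.of("p1", "p2")));
    }
}
```

The stream stays side-effect free: the original list is never modified, which also avoids the "first filter doesn't filter" confusion entirely.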
Consider this code:
Mono.just(myVar)
    .flatMap(MyClass::heavyOperation)
    .flatMap(MyClass::anotherHeavyOperation)
    .flatMap(res -> doSomething(res, MyClass.heavyOperation(myVar)));
I don't want to call MyClass.heavyOperation(myVar) twice with the same input, for the sake of performance.
How can I reuse the result of the second operation in the fourth one?
I want to do something like this, which is forbidden:
Object myObj;

Mono.just(myVar)
    .flatMap(var -> {
        myObj = MyClass.heavyOperation(var);
        return myObj;
    })
    .flatMap(MyClass::anotherHeavyOperation)
    .flatMap(res -> doSomething(res, myObj));
Probably the best solution is to put everything that uses myObj into the same pipeline step, like this:
Mono.just(myVar)
    .flatMap(MyClass::heavyOperation)
    .flatMap(myObj -> MyClass.anotherHeavyOperation(myObj)
        .flatMap(res -> doSomething(res, myObj)));
The step that uses myObj can in turn be decomposed into a number of smaller sub-pipelines, and the top-level pipeline can also continue as normal.
This is the basis of monadic operations in functional languages!
You can create a tuple in the second flatMap (note that the tuple has to be built inside the reactive chain, since anotherHeavyOperation returns a Mono):
Mono.just(myVar)
    .flatMap(MyClass::heavyOperation)
    .flatMap(x -> MyClass.anotherHeavyOperation(x)
        .map(y -> Tuples.of(x, y)))
    .flatMap(res -> doSomething(res.getT2(), res.getT1()));
Consider keeping the scope:
Mono.just(myVar)
    .flatMap(var -> MyClass.heavyOperation(var)
        .flatMap(myObj -> MyClass.anotherHeavyOperation(myObj)
            .flatMap(res -> doSomething(res, myObj))));
You could save the Mono to a variable (with cache(), so heavyOperation only runs once) and then zip it with the Mono produced by anotherHeavyOperation:
var heavyOperation = Mono.just(myVar)
    .flatMap(MyClass::heavyOperation)
    .cache();

heavyOperation
    .flatMap(MyClass::anotherHeavyOperation)
    .zipWith(heavyOperation, (res, ho) -> doSomething(res, ho));
I have an Observable that never finishes. It emits a List<Item>, and I need to filter out some of those items every time the list is emitted. Currently I have this as a solution:
mData.getItemsObservable() // Observable<List<Item>>
    .compose(...)
    .flatMapSingle(items -> Observable.fromIterable(items)
        .filter(item -> item.someCondition())
        .toList())
    .subscribe(items -> {
        // ...
    }, error -> {
        // ...
    });
Is this the best way to filter out some items? Is there a simpler (more readable) way to do the same?
I've tried this too, but it didn't emit anything:
mData.getItemsObservable() // Observable<List<Item>>
    .compose(...)
    .flatMap(Observable::fromIterable) // or: flatMapIterable(items -> items)
    .filter(item -> item.someCondition())
    .toList()
    .subscribe(items -> {
        // ...
    }, error -> {
        // ...
    });
The first approach is okay if you want to stick with RxJava. (Your second attempt never emits because toList() only emits once the upstream completes, and your source Observable never does.) Otherwise, you could use IxJava and perform the filtering directly in a map operation:
mData.getItemsObservable() // Observable<List<Item>>
    .compose(...)
    .map(v -> Ix.from(v).filter(w -> w.someCondition()).toList())
    .subscribe(items -> {
        // ...
    }, error -> {
        // ...
    });
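If you'd rather avoid the extra dependency, the same per-emission filtering can be written with a plain java.util.stream call inside map. A minimal sketch, where Item and someCondition are hypothetical stand-ins for your real types, and keepMatching is the function you would pass to map on the Observable<List<Item>>:

```java
import java.util.List;
import java.util.stream.Collectors;

public class FilterListDemo {
    // Hypothetical stand-in for the real Item type
    record Item(int value) {
        boolean someCondition() { return value % 2 == 0; }
    }

    // The per-list transformation to hand to map(...) on the Observable<List<Item>>
    static List<Item> keepMatching(List<Item> items) {
        return items.stream()
                .filter(Item::someCondition)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Item> in = List.of(new Item(1), new Item(2), new Item(3), new Item(4));
        System.out.println(keepMatching(in)); // keeps only the items whose condition holds
    }
}
```

Because map works on each emission independently, this avoids the complete-the-inner-stream issue entirely: no toList() on the outer Observable is ever involved.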