Say I have this:
Flux<GroupedFlux<Integer, Integer>> intsGrouped = Flux.range(0, 12)
.groupBy(i -> i % 3);
and say I have a method:
Mono<Integer> getFromService(Integer key, Integer i);
I want to call getFromService in parallel for each of the groups, but make sure the calls are serial within each group.
For the above example that would be three parallel streams with these input values:
stream 1: 0 -> 3 -> 6 -> 9
stream 2: 1 -> 4 -> 7 -> 10
stream 3: 2 -> 5 -> 8 -> 11
I tried this, but it's not doing what I want:
Flux.range(0, 12)
.groupBy(i -> i % 3)
.flatMap(g -> g.flatMap(i -> getFromService(g.key(), i)))
This is calling the service in parallel for all the ints at once. How do I proceed?
Use either concatMap or flatMapSequential instead of the inner .flatMap.
If you want sequential execution within each group (i.e. at most one active subscription to getFromService at a time within each group), then use .concatMap, like this:
Flux.range(0, 12)
.groupBy(i -> i % 3)
.flatMap(g -> g.concatMap(i -> getFromService(g.key(), i)))
If parallel execution within a group is ok, but you just care about the order in which the sequence is emitted, then use flatMapSequential, like this:
Flux.range(0, 12)
.groupBy(i -> i % 3)
.flatMap(g -> g.flatMapSequential(i -> getFromService(g.key(), i)))
Another option is to use .flatMap with the concurrency argument set to 1, but I'd recommend one of the above instead.
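For reference, here is a minimal, self-contained sketch of the concatMap variant, with getFromService stubbed out as a delayed echo (the stub class, its 100 ms delay, and the printed output are purely illustrative, not part of the original question):
import java.time.Duration;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

public class GroupedSerialDemo {

    // Hypothetical stand-in for the real service call: just delays and echoes its input.
    static Mono<Integer> getFromService(Integer key, Integer i) {
        return Mono.just(i).delayElement(Duration.ofMillis(100));
    }

    public static void main(String[] args) {
        Flux.range(0, 12)
            .groupBy(i -> i % 3)
            // groups are processed in parallel; within a group, concatMap waits for
            // each inner Mono to complete before subscribing to the next one
            .flatMap(g -> g.concatMap(i -> getFromService(g.key(), i))
                .doOnNext(i -> System.out.println("group " + g.key() + " -> " + i)))
            .blockLast();
    }
}
Running it should print the three groups interleaved with each other, but strictly in order within each group (0, 3, 6, 9 / 1, 4, 7, 10 / 2, 5, 8, 11).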
Here is a simple Project Reactor code snippet:
Consumer<String> slowConsumer = x -> {
try {
TimeUnit.MILLISECONDS.sleep(50);
} catch(Exception ignore) {
}
};
Flux<String> publisher = Flux
.just(1, 2, 3)
.parallel(2)
.runOn(Schedulers.newParallel("writer", 2))
.flatMap(rail -> Flux.range(1, 300).map(i -> String.format("Rail %d -> Row %d", rail, i)).log())
.sequential()
.publishOn(Schedulers.newSingle("reader"))
.doOnNext(slowConsumer);
publisher.subscribe();
My expectation is that everything that happens in "flatMap" should be executed within "writer" threads. However, this is what is logged:
onNext(Rail 2 -> Row 300) [Flux.MapFuseable.1] [writer-2]
...
onNext(Rail 1 -> Row 300) [Flux.MapFuseable.2] [writer-1]
...
onNext(Rail 3 -> Row 256) [Flux.MapFuseable.3] [writer-1]
request(256) [Flux.MapFuseable.3] [reader-1]
onNext(Rail 3 -> Row 257) [Flux.MapFuseable.3] [reader-1]
...
onNext(Rail 3 -> Row 300) [Flux.MapFuseable.3] [reader-1]
Can someone explain this? Why is the "reader" thread processing the tail of the last rail? What am I missing?
I wrote this code:
Flux.range(0, 300)
.doOnNext(i -> System.out.println("i = " + i))
.flatMap(i -> Mono.just(i)
.subscribeOn(Schedulers.elastic())
.delayElement(Duration.ofMillis(1000))
)
.doOnNext(i -> System.out.println("end " + i))
.blockLast();
When running it, the first System.out.println shows that the Flux stops emitting numbers at the 256th element, then waits for the older ones to complete before emitting new ones.
Why is this happening?
Why 256?
Why is this happening?
The flatMap operator can be characterized as an operator that (rephrased from its javadoc):
subscribes to its inners eagerly,
does not preserve the ordering of elements,
lets values from different inners interleave.
For this question the first point is the important one. Project Reactor restricts the number of in-flight inner sequences via the concurrency parameter. While flatMap(mapper) uses the default value, the flatMap(mapper, concurrency) overload accepts this parameter explicitly.
The flatMap javadoc describes the parameter as:
The concurrency argument allows to control how many Publisher can be subscribed to and merged in parallel
Consider the following code using concurrency = 500:
Flux.range(0, 300)
.doOnNext(i -> System.out.println("i = " + i))
.flatMap(i -> Mono.just(i)
.subscribeOn(Schedulers.elastic())
.delayElement(Duration.ofMillis(1000)),
500
// ^^^^^^^^^^
)
.doOnNext(i -> System.out.println("end " + i))
.blockLast();
In this case there is no waiting:
i = 297
i = 298
i = 299
end 0
end 1
end 2
In contrast, if you pass 1 as the concurrency, the output will be similar to:
i = 0
end 0
i = 1
end 1
with a one-second wait before each subsequent element is emitted.
Why 256?
256 is the default value of flatMap's concurrency parameter.
Take a look at Queues.SMALL_BUFFER_SIZE:
public static final int SMALL_BUFFER_SIZE = Math.max(16,
Integer.parseInt(System.getProperty("reactor.bufferSize.small", "256")));
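As the snippet shows, that default can in principle be tuned through the reactor.bufferSize.small system property. A minimal sketch, assuming the property is set before any Reactor class is loaded (the class name and the value 64 are illustrative):
public class BufferSizeDemo {
    public static void main(String[] args) {
        // Must run before Reactor's Queues class is initialized; alternatively pass
        // -Dreactor.bufferSize.small=64 on the command line.
        System.setProperty("reactor.bufferSize.small", "64");
        // ... build the Flux pipeline afterwards; flatMap's default
        // concurrency will then be 64 instead of 256.
    }
}
In most cases, though, passing the concurrency argument explicitly (as in the example above) is the clearer option.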
I have an issue with some messages going missing after aggregation. I need to aggregate my messages into groups with the same number of elements. For my current case I have 118 elements in messageChannel. These messages are successfully combined into 11 groups of 10 elements each, but the last 8 are lost.
IntegrationFlows
.from(messageChannel)
.split(s -> s
.applySequence(false).get().getT2().setDelimiters("[\r\n]"))
.aggregate(s -> s
.correlationExpression("payload")
.releaseExpression("size() >= 10")
.expireGroupsUponCompletion(true)
)
.handle(h ->
System.out.println(h))
.get();
I expect to receive the 8 remaining messages in a new group.
Maybe a group timeout can help here: release the incomplete group after a timeout (together with sendPartialResultOnExpiry) so the last 8 messages are not stuck waiting for the size condition:
IntegrationFlows
.from(messageChannel)
.split(s -> s
.applySequence(false).get().getT2().setDelimiters("[\r\n]"))
.aggregate(s -> s
.correlationExpression("payload")
.releaseExpression("size() >= 10")
.expireGroupsUponCompletion(true)
.groupTimeout(500)
.sendPartialResultOnExpiry(true)
)
.handle(h ->
System.out.println(h))
.get();
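Assuming a 500 ms groupTimeout: if a group does not reach 10 elements within the timeout, the aggregator should release it as a partial group (because sendPartialResultOnExpiry is true), so the remaining 8 messages arrive in their own, smaller group instead of being silently discarded.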
I have code where I'm running an interval until a condition is met and then, in the subscribe, sending back the result.
But since it is an interval, the subscription continues.
I was wondering if there is any way to unsubscribe from an Observable interval once it emits something.
Here is the code:
Subscription subscriber = Observable.interval(0, 5, TimeUnit.MILLISECONDS)
.map(i -> eventHandler.getProcessedEvents())
.filter(eventsProcessed -> eventsProcessed >= 10)
.doOnNext(eventsProcessed -> eventHandler.initProcessedEvents())
.doOnNext(eventsProcessed -> logger.info(null, "Total number of events processed:" + eventsProcessed))
.subscribe(t -> resumeRequest(asyncResponse));
new TestSubscriber((Observer) subscriber).awaitTerminalEvent(10, TimeUnit.SECONDS);
subscriber.unsubscribe();
For now, as a hack, I use a timer and then unsubscribe, but it's bad!
Regards
You can use the first operator instead of the filter:
Subscription subscriber = Observable.interval(0, 5, TimeUnit.MILLISECONDS)
.map(i -> eventHandler.getProcessedEvents())
.first(eventsProcessed -> eventsProcessed >= 10)
.doOnNext(eventsProcessed -> eventHandler.initProcessedEvents())
.doOnNext(eventsProcessed -> logger.info(null, "Total number of events processed:" + eventsProcessed))
.subscribe(t -> resumeRequest(asyncResponse));
This ensures that you only get a single emission once your condition is met. Note that you will get an exception if the interval Observable terminates without your condition ever being met.
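If you would rather avoid that exception, one alternative (a sketch, not part of the original answer) is to keep the filter and cap the stream with take(1), which unsubscribes from the interval after the first match and simply completes if the source completes first:
Subscription subscriber = Observable.interval(0, 5, TimeUnit.MILLISECONDS)
    .map(i -> eventHandler.getProcessedEvents())
    .filter(eventsProcessed -> eventsProcessed >= 10)
    .take(1) // unsubscribes from the upstream interval after the first matching value
    .doOnNext(eventsProcessed -> eventHandler.initProcessedEvents())
    .subscribe(t -> resumeRequest(asyncResponse));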
I have an old-style for loop to do some load tests:
for (int i = 0; i < 1000; ++i) {
    if ((i + 1) % 100 == 0) {
        System.out.println("Test number " + i + " started.");
    }
    // The test itself...
}
How can I use the new Java 8 Stream API to do this without the for loop?
Also, using a stream would make it easy to switch to a parallel stream. How do I switch to a parallel stream?
* I'd like to keep the reference to i.
IntStream.range(0, 1000)
/* .parallel() */
.filter(i -> (i + 1) % 100 == 0)
.peek(i -> System.out.println("Test number " + i + " started."))
/* other operations on the stream including a terminal one */;
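A side note on the commented-out .parallel(): once the stream is parallel, the peek side effect may run on multiple threads and in no guaranteed order, so the printed messages can appear shuffled; use forEachOrdered for the terminal operation if the output order matters.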
If the test should run on each iteration regardless of the condition, take the filter out:
IntStream.range(0, 1000)
.peek(i -> {
if ((i + 1) % 100 == 0) {
System.out.println("Test number " + i + " started.");
}
}).forEach(i -> {/* the test */});
Another approach (if you want to iterate over an index with a predefined step, as #Tunaki mentioned) is:
IntStream.iterate(0, i -> i + 100)
.limit(1000 / 100)
.forEach(i -> { /* the test */ });
There is an overloaded method Stream.iterate(seed, hasNext, next) in JDK 9 that fits this situation perfectly: it produces a finite stream and can replace a plain for loop:
Stream<Integer> stream = Stream.iterate(0, i -> i < 1000, i -> i + 100);
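JDK 9 added the same three-argument overload on IntStream as well, which avoids boxing if you prefer to stay with primitive ints:
IntStream stream = IntStream.iterate(0, i -> i < 1000, i -> i + 100);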
You can use IntStream as shown below and explained in the comments:
(1) Iterate over the IntStream range from 1 to 999 (range(1, 1000) is end exclusive)
(2) Convert to a parallel stream
(3) Apply a Predicate that keeps integers where (i + 1) % 100 == 0
(4) Map each integer to the string "Test number " + i + " started."
(5) Print to the console
IntStream.range(1, 1000)                          // iterates 1 to 999 (end exclusive)
    .parallel()                                   // converts to a parallel stream
    .filter(i -> (i + 1) % 100 == 0)              // keeps 99, 199, 299, ...
    .mapToObj(i -> "Test number " + i + " started.") // maps each int to a String
    .forEach(System.out::println);                // prints to the console