Basically, I'm making a queue processor in Spring Boot and want to use Reactor for async processing. I've written a function that needs to loop forever, as it's the one that pulls from the queue and then marks the item as processed.
Here's the blocking version that works (Subscribe() returns a Mono):
while (true) {
    manager.Subscribe().block();
}
I'm not sure how to turn this into a Flux. I've looked at interval, generate, create, etc., and I can't get anything to work without calling block().
Here's an example of what I've tried:
Flux.generate(() -> manager,
    (state, sink) -> {
        state.Subscribe().block();
        sink.next("done");
        return state;
    });
Being a newbie to Reactor, I haven't been able to find anything about just looping and processing the Monos sequentially without blocking.
Here's what the Subscribe method does using the AWS Java SDK v2:
public Mono<Void> Subscribe() {
    return Mono.fromFuture(_client.receiveMessage(ReceiveMessageRequest.builder()
            .waitTimeSeconds(10)
            .queueUrl(_queueUrl)
            .build()))
        .filter(x -> x.messages() != null)
        .flatMap(x -> Mono.when(x.messages()
            .stream()
            .map(y -> {
                _log.warn(y.body());
                return Mono.fromFuture(_client.deleteMessage(DeleteMessageRequest.builder()
                    .queueUrl(_queueUrl)
                    .receiptHandle(y.receiptHandle())
                    .build()));
            })
            .collect(Collectors.toList())));
}
Basically, I'm just polling an SQS queue, deleting the messages, and then I want to do it again. This is all just exploratory for me.
Thanks!
You need two things: a way to subscribe in a loop and a way to ensure that the Subscribe() method is effectively called on each iteration (because the Future needs to be recreated).
repeat() is a built-in operator that resubscribes to its source once the source completes. If the source errors, the repeat cycle stops. The simplest variant does so Long.MAX_VALUE times.
The only problem is that in your case the Mono from Subscribe() must be recreated on each iteration.
To do so, you can wrap the Subscribe() call in a defer: it will re-invoke the method each time a new subscription happens, which includes each repeat attempt:
Flux<Stuff> repeated = Mono
.defer(manager::Subscribe)
.repeat();
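To see the defer + repeat combination in action without the SQS dependency, here is a minimal, self-contained sketch; subscribeOnce() is a hypothetical stand-in for manager.Subscribe(), and each call returns a fresh Mono just as the SQS version recreates its Future:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

import reactor.core.publisher.Mono;

public class RepeatDemo {
    static final AtomicInteger calls = new AtomicInteger();

    // Hypothetical stand-in for manager.Subscribe(): a fresh Mono per invocation
    static Mono<Integer> subscribeOnce() {
        return Mono.fromSupplier(calls::incrementAndGet);
    }

    public static void main(String[] args) {
        // defer re-invokes subscribeOnce() on every subscription,
        // and repeat() resubscribes each time the Mono completes
        List<Integer> results = Mono.defer(RepeatDemo::subscribeOnce)
                .repeat()
                .take(3) // bounded here for demonstration; drop take() to poll forever
                .collectList()
                .block();
        System.out.println(results); // [1, 2, 3]
    }
}
```

The counter confirms that Subscribe() really was re-invoked on each cycle rather than the same Mono being replayed.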
Related
I am using the SpringData MongoDB Reactive Streams driver with code that does something like this:
reactiveMongoOperations.changeStream(changeStreamOptions, MyObject.class)
    .parallel()
    .runOn(Schedulers.newParallel("my-scheduler", 4))
    .map(ChangeStreamEvent::getBody)
    .flatMap(o -> reactiveMongoOperations.findAndModify(query, update, options, MyObject.class))
    .subscribe(this::process);
I would expect everything to execute in my-scheduler. What actually happens is that the flatMap operation does execute in my-scheduler, while the code in my process() method does not.
Can someone please explain why this is so - is this a bug or am I doing something wrong? How can I get all the operations defined in the Flux to execute on the same scheduler?
runOn() specifies the scheduler used to run each "rail" of the ParallelFlux. It doesn't affect subscribers.
If you want to specify a scheduler for subscribers, you should do that using subscribeOn() on the original Flux (before the parallel() call).
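A minimal sketch (with hypothetical names) that makes the thread assignment visible by printing the thread each stage runs on. With no inner publisher hopping threads, the consumer passed to subscribe() stays on the rail threads chosen by runOn(); in the question, it is likely the MongoDB driver's inner publisher emitting on its own threads that moves process() off my-scheduler:

```java
import java.util.concurrent.CountDownLatch;

import reactor.core.publisher.Flux;
import reactor.core.scheduler.Schedulers;

public class SchedulerDemo {
    public static void main(String[] args) throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(4);
        Flux.range(1, 4)
                .parallel()
                .runOn(Schedulers.newParallel("my-scheduler", 4))
                .map(i -> {
                    System.out.println("map on " + Thread.currentThread().getName());
                    return i;
                })
                .subscribe(i -> {
                    // no inner publisher hops threads here, so the consumer
                    // also runs on the rail threads created by runOn()
                    System.out.println("consumer on " + Thread.currentThread().getName());
                    latch.countDown();
                });
        latch.await();
    }
}
```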
I want to implement a lock in my application so that only one chain fragment executes at a time while any others wait for it.
For example:
val demoDao = DemoDao() // data that must be accessed only by one rx-chain fragment at one time
Observable.range(0, 150)
.subscribeOn(Schedulers.io())
.flatMapCompletable {
dataLockManager.lock("action") { // fragment-start
demoDao.get()
.flatMapCompletable { data ->
demoDao.set(...)
}
} // fragment-end
}
.subscribe()
Observable.range(0, 100)
.subscribeOn(Schedulers.io())
.flatMapCompletable {
dataLockManager.lock("action") { // fragment-start
demoDao.get()
.flatMapCompletable { data ->
demoDao.set(...)
}
} // fragment-end
}
.subscribe()
I tried to implement it via a custom Completable.create with a CountDownLatch, but that may lead to deadlock.
I'm stuck at this point. What can you recommend?
To serialize access to demoDao.get(), there are a few ways of achieving this, but try hard not to use a lock, as that can stuff up a reactive stream with deadlocks for starters (as you have found out).
If you do want to use a lock, you should ensure that no lock is held across a stream signal, like an emission to downstream or a request to upstream. In that situation you can use a (short-lived) lock.
One approach is to combine the actions of the two streams into one (with say merge) and do the demoDao stuff on that one stream.
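The merge-based approach can be sketched like this, assuming RxJava 2 and using a plain AtomicInteger as a hypothetical stand-in for DemoDao. Merging the two ranges into one stream and applying concatMapCompletable serializes the read-modify-write without any lock, because concatMap subscribes to one inner Completable at a time:

```java
import java.util.concurrent.atomic.AtomicInteger;

import io.reactivex.Completable;
import io.reactivex.Observable;
import io.reactivex.schedulers.Schedulers;

public class SerializedDaoDemo {
    public static void main(String[] args) {
        AtomicInteger counter = new AtomicInteger(); // hypothetical stand-in for DemoDao state
        Observable.merge(Observable.range(0, 150), Observable.range(0, 100))
                // one inner Completable is subscribed at a time, so each
                // read-modify-write completes before the next one starts
                .concatMapCompletable(i -> Completable
                        .fromAction(() -> {
                            int current = counter.get();  // "demoDao.get()"
                            counter.set(current + 1);     // "demoDao.set(...)"
                        })
                        .subscribeOn(Schedulers.io()))
                .blockingAwait();
        System.out.println(counter.get()); // 250, i.e. no lost updates
    }
}
```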
Another approach is to create a serialized PublishSubject using PublishSubject.create().serialized(), do the demoDao.get() stuff downstream of it, and subscribe to it once only. The two sources you mentioned can then call .doOnNext(x -> subject.onNext(x)). It depends on whether each source must know about failure independently, or whether it is acceptable that the PublishSubject subscription is the only spot where the failure is notified.
In the asynchronous world, the use of locks is strongly discouraged. Instead, locking is modelled by serialized execution of an actor or a serial executor. In turn, an actor can be modelled by an Observer, and a serial executor by Schedulers.single(), though more experienced RxJava programmers may offer better advice.
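A sketch of the serial-executor idea, assuming RxJava 2: every mutation is scheduled on the single shared worker thread of Schedulers.single(), so the state is only ever touched from one thread and needs no lock:

```java
import io.reactivex.Completable;
import io.reactivex.Observable;
import io.reactivex.Scheduler;
import io.reactivex.schedulers.Schedulers;

public class SerialExecutorDemo {
    // deliberately unsynchronized; safe because every write happens on one thread
    static int counter = 0;

    public static void main(String[] args) {
        Scheduler serial = Schedulers.single(); // one shared worker thread
        Completable first = Observable.range(0, 150)
                .flatMapCompletable(i ->
                        Completable.fromAction(() -> counter++).subscribeOn(serial));
        Completable second = Observable.range(0, 100)
                .flatMapCompletable(i ->
                        Completable.fromAction(() -> counter++).subscribeOn(serial));
        first.mergeWith(second).blockingAwait();
        System.out.println(counter); // 250
    }
}
```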
Consider the following Flux
Flux.range(1, 5)
.parallel(10)
.runOn(Schedulers.parallel())
.map(i -> "https://www.google.com")
.flatMap(uri -> Mono.fromCallable(new HttpGetTask(httpClient, uri)))
HttpGetTask is a Callable whose actual implementation is irrelevant here; it makes an HTTP GET call to the given URI and returns the content if successful.
Now, I'd like to slow down the emission by introducing an artificial delay, such that up to 10 threads are started simultaneously, but each one doesn't complete as soon as HttpGetTask is done. For example, say no thread must finish before 3 seconds. How do I achieve that?
If the requirement is really "not less than 3s" you could add a delay of 3 seconds to the Mono inside the flatMap by using Mono.fromCallable(...).delayElement(Duration.ofSeconds(3)).
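A runnable sketch under that suggestion, with a trivial Callable as a hypothetical stand-in for HttpGetTask. Each rail's result is held back by delayElement, so the whole pipeline cannot finish before 3 seconds:

```java
import java.time.Duration;

import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;
import reactor.core.scheduler.Schedulers;

public class DelayedParallelDemo {
    public static void main(String[] args) {
        long start = System.nanoTime();
        Flux.range(1, 5)
                .parallel(10)
                .runOn(Schedulers.parallel())
                // fromCallable stands in for the HTTP call; delayElement holds
                // each result back so no rail finishes before 3 seconds
                .flatMap(i -> Mono.fromCallable(() -> "response-" + i)
                        .delayElement(Duration.ofSeconds(3)))
                .sequential()
                .blockLast();
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("elapsed >= 3000ms: " + (elapsedMs >= 3000));
    }
}
```

Because the delay runs inside the flatMap, the rails still start concurrently; only the completion of each element is postponed.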
I'm trying to create a Flowable that wraps an Iterable. I push elements to my Iterable periodically, but it seems that the completion event is implicit. I don't know how to signal that processing is complete. For example, in my code:
// note that this code is written in Kotlin
val iterable = LinkedBlockingQueue<Int>()
iterable.addAll(listOf(1, 2, 3))
val flowable = Flowable.fromIterable(iterable)
.subscribeOn(Schedulers.computation())
.observeOn(Schedulers.computation())
flowable.subscribe(::println, {it.printStackTrace()}, {println("completed")})
iterable.add(4)
Thread.sleep(1000)
iterable.add(5)
Thread.sleep(1000)
This prints:
1
2
3
4
completed
I checked the source of the Flowable interface, but it seems that I can't explicitly signal that a Flowable is complete. How can I do so? In my program I publish events with some delay between them, and I would like to be explicit about when to complete the event flow.
Clarification:
I have a long running process which emits events. I gather them in a queue and I expose a method which returns a Flowable which wraps around my queue. The problem is that there might be already elements in the queue when I create the Flowable. I will process the events only once and I know when the flow of events stops so I know when I need to complete the Flowable.
Using .fromIterable is the wrong way to create a Flowable for your use case.
I'm not actually clear on what that use case is, but you probably want to use Flowable.create() or a PublishSubject:
val flowable = Flowable.create<Int>( {
it.onNext(1)
it.onNext(2)
it.onComplete()
}, BackpressureStrategy.MISSING)
val publishSubject = PublishSubject.create<Int>()
val flowableFromSubject = publishSubject.toFlowable(BackpressureStrategy.MISSING)
//This data will be dropped unless something is subscribed to the flowable.
publishSubject.onNext(1)
publishSubject.onNext(2)
publishSubject.onComplete()
Of course how you deal with back-pressure will depend on the nature of the source of data.
As suggested by akarnokd, ReplayProcessor does exactly what you want. Replace iterable.add(item) with processor.onNext(item), and call processor.onComplete() when you are done.
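A minimal sketch of that replacement, using RxJava 2's io.reactivex.processors.ReplayProcessor:

```java
import java.util.List;

import io.reactivex.processors.ReplayProcessor;

public class ReplayDemo {
    public static void main(String[] args) {
        ReplayProcessor<Integer> processor = ReplayProcessor.create();
        // items pushed before anyone subscribes are replayed to late subscribers,
        // which covers the "queue already has elements" case
        processor.onNext(1);
        processor.onNext(2);
        processor.onNext(3);
        processor.onNext(4);
        processor.onNext(5);
        processor.onComplete(); // explicit completion, unlike fromIterable
        List<Integer> received = processor.toList().blockingGet();
        System.out.println(received); // [1, 2, 3, 4, 5]
    }
}
```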
I have an Observable that goes to the database and queries for some information. I don't want my observable to execute for longer than 5 seconds, so I use:
myObservable.timeout(5, TimeUnit.SECONDS);
Then I want to handle the error notification as well, so I use:
myObservable.timeout(5, TimeUnit.SECONDS).onErrorReturn(e -> emptyResult);
Then I wonder what will happen to the code in myObservable that performs the database query. Will it also be terminated, or will it continue to run? (The latter is what happens with Java's native Future.get(timeLimit).)
Let's take an example :
Observable.interval(1, TimeUnit.SECONDS)
.timeout(10, TimeUnit.MICROSECONDS)
.onErrorReturn(e -> -1L)
.subscribe(System.out::println,
Throwable::printStackTrace,
() -> System.err.println("completed"));
The timeout operator will emit an error, but preceding operators won't be notified of this error.
The onErrorReturn operator will transform your error into an event and then complete your stream (marking it as finished), and then your source observable will be unsubscribed.
This unsubscription will run some code that, depending on how your source observable is written, may stop your request, do nothing, or free some resources.
In your case, it may call the cancel method on your Future (according to the Subscriptions class)
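The unsubscription can be observed directly with doOnDispose, which runs when timeout() unsubscribes the source. A minimal sketch, with the timeout shortened so the demo finishes quickly:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

import io.reactivex.Observable;

public class TimeoutDisposeDemo {
    public static void main(String[] args) throws InterruptedException {
        AtomicBoolean sourceDisposed = new AtomicBoolean(false);
        Long result = Observable.interval(1, TimeUnit.SECONDS)
                // runs when timeout() unsubscribes the interval source
                .doOnDispose(() -> sourceDisposed.set(true))
                .timeout(100, TimeUnit.MILLISECONDS)
                .onErrorReturn(e -> -1L)
                .blockingFirst();
        Thread.sleep(200); // give the unsubscription a moment to propagate
        System.out.println("result: " + result);
        System.out.println("source disposed: " + sourceDisposed.get());
    }
}
```

The interval never gets to emit; the timeout fires first, onErrorReturn turns the error into -1, and the source is disposed.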