Given the following file inbound:
IntegrationFlows.from(s -> s
                .file(directory, new LastModifiedFileComparator())
                .patternFilter(inputFileNamePattern)
                .preventDuplicates(),
        e -> e.poller(p -> p.trigger(filePollerTrigger)))
and a trigger that throws an exception when a certain time limit is exceeded, how does one receive the exception that was thrown?
Will it appear on the flow's error channel or in an inbound-specific error handler?
What is the correct way to deal with it in the Java DSL?
Thanks in advance.
An exception thrown from trigger.nextExecutionTime() causes the polling task to be stopped, or, more precisely, not rescheduled:
public ScheduledFuture<?> schedule() {
    synchronized (this.triggerContextMonitor) {
        this.scheduledExecutionTime = this.trigger.nextExecutionTime(this.triggerContext);
        if (this.scheduledExecutionTime == null) {
            return null;
        }
        long initialDelay = this.scheduledExecutionTime.getTime() - System.currentTimeMillis();
        this.currentFuture = this.executor.schedule(this, initialDelay, TimeUnit.MILLISECONDS);
        return this;
    }
}
As you can see from the code, it is fully equivalent to returning null from the trigger: we just exit the rescheduling loop.
Consider implementing the exception-handling logic in a custom Trigger that wraps the target one, for example an ErrorHandlingTrigger.
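A minimal sketch of such a wrapper (the class name and the handling logic here are only illustrative, not an existing Spring Integration class):

import java.util.Date;

import org.springframework.scheduling.Trigger;
import org.springframework.scheduling.TriggerContext;

public class ErrorHandlingTrigger implements Trigger {

    private final Trigger delegate;

    public ErrorHandlingTrigger(Trigger delegate) {
        this.delegate = delegate;
    }

    @Override
    public Date nextExecutionTime(TriggerContext triggerContext) {
        try {
            return this.delegate.nextExecutionTime(triggerContext);
        }
        catch (RuntimeException ex) {
            // react to the exception here (log it, send it to some error channel, etc.),
            // then return null to stop polling, or a Date to keep the poller alive
            return null;
        }
    }

}

It can then be plugged into the poller shown in the question: e -> e.poller(p -> p.trigger(new ErrorHandlingTrigger(filePollerTrigger))).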
Related
I've been pulling my hair out over how to implement the retry pattern (retry the WHOLE flow) for a Spring Integration poller flow.
Please find below my (erroneous) source code (it doesn't work).
What am I doing wrong?
(If I put a breakpoint on the line throwing the exception, it's only hit once.)
thanks a lot in advance for your time and your expertise.
Best Regards
nkjp
PS: maybe try to extend AbstractHandleMessageAdvice with a RetryTemplate?
return IntegrationFlows.from(SOME_QUEUE_CHANNEL)
        .transform(p -> p, e -> e.poller(Pollers.fixedDelay(5000)
                .advice(RetryInterceptorBuilder.stateless()
                        .maxAttempts(5)
                        .backOffOptions(1, 2, 10)
                        .build())))
        .transform(p -> {
            if (true) {
                throw new RuntimeException("KABOOM");
            }
            return p;
        })
        .channel(new NullChannel())
        .get();
If you add poller.advice(), the Advice is applied to the whole flow, starting from the poll() call. Since you have already polled a message from that queue, there is nothing to poll from it on the next attempt. It is something of an anti-pattern to use retry with non-transactional queues: you don't roll back a transaction, so your data doesn't go back to the store to be available for the next poll().
There is no way at the moment to retry a whole sub-flow from some point, but you definitely can use a RequestHandlerRetryAdvice on the specific erroneous endpoint, such as your transform() with the KABOOM exception:
.transform(p -> {
    if (true) {
        throw new RuntimeException("KABOOM");
    }
    return p;
}, e -> e.advice(new RequestHandlerRetryAdvice()))
See its setRetryTemplate(RetryTemplate retryTemplate) for more retry options than the default of just 3 attempts.
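For example, roughly like this (a sketch; the policy values are arbitrary and just mirror the backOffOptions(1, 2, 10) used above):

RequestHandlerRetryAdvice advice = new RequestHandlerRetryAdvice();

RetryTemplate retryTemplate = new RetryTemplate();
// 5 attempts instead of the default 3
retryTemplate.setRetryPolicy(new SimpleRetryPolicy(5));
// exponential back-off: initial interval 1 ms, multiplier 2, max interval 10 ms
ExponentialBackOffPolicy backOff = new ExponentialBackOffPolicy();
backOff.setInitialInterval(1);
backOff.setMultiplier(2);
backOff.setMaxInterval(10);
retryTemplate.setBackOffPolicy(backOff);

advice.setRetryTemplate(retryTemplate);

That advice instance is then what you pass into e.advice(...) on the endpoint.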
To do it for a sub-flow, we would need to consider implementing a HandleMessageAdvice.
Something like this:
.transform(p -> p, e -> e.poller(Pollers.fixedDelay(500000))
        .advice(new HandleMessageAdvice() {

            RetryOperationsInterceptor delegate =
                    RetryInterceptorBuilder.stateless()
                            .maxAttempts(5)
                            .backOffOptions(1, 2, 10)
                            .build();

            @Override
            public Object invoke(MethodInvocation invocation) throws Throwable {
                return delegate.invoke(invocation);
            }

        }))
But again: it's not a poller advice, it is an endpoint one on its MessageHandler.handleMessage().
I have the following observable:
ScheduledExecutorService executorService = Executors.newScheduledThreadPool( 1 );

Observable<List<Widget>> findWidgetsObservable = Observable.create( emitter -> {
    executorService.scheduleWithFixedDelay( emitFindWidgets( emitter ), 0, 30, TimeUnit.SECONDS );
} );
private Runnable emitFindWidgets( ObservableEmitter<List<Widget>> emitter ) {
    return () -> {
        emitter.onNext( Collections.emptyList() ); // dummy empty list
    };
}
And I'm returning it in a graphql-java subscription resolver like so:
ConnectableObservable<List<Widget>> connectableObservable = findWidgetsObservable.share().publish();
Disposable connectionDisposable = connectableObservable.connect();
return connectableObservable.toFlowable( BackpressureStrategy.LATEST );
The GraphQL subscription works as expected and emits data to the JavaScript GraphQL client, but when the client unsubscribes, my Runnable seemingly keeps running forever. That said, the flowable's doOnCancel() handler IS being run.
In order to remedy this problem, I've attempted to do the following within the flowable's doOnCancel():
Disposable connectionDisposable = connectableObservable.connect();
return connectableObservable.toFlowable( BackpressureStrategy.LATEST ).doOnCancel( () -> {
    findWidgetsObservable.toFuture().cancel( true );
    connectionDisposable.dispose();
} );
However, the Runnable keeps emitting indefinitely. Is there any way I can solve this problem and completely stop the emissions?
I did have one thought: scheduleWithFixedDelay returns a ScheduledFuture, which has a cancel() method, but I'm not sure there's any way I can call it when the scheduling itself is scoped within an observable! Any help is appreciated.
The Runnable keeps on emitting because you are scheduling the emissions on a scheduler that is not known/bound to the observable stream.
When you dispose of your connection, you stop receiving items from upstream because the connection to the upstream observable is cut. But since you are scheduling the emitter to run repeatedly on a separate scheduler, the Runnable keeps running.
You can describe the custom scheduling behavior with a custom Scheduler and pass it via subscribeOn(yourCustomScheduler).
Also, as you mentioned, you can invoke cancel() on the ScheduledFuture in doOnDispose().
But you should switch schedulers explicitly in the observable chain; otherwise it becomes harder to debug.
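One way to tie the ScheduledFuture to the subscription's lifecycle, reusing the executorService and emitFindWidgets from the question and assuming RxJava 2's ObservableEmitter.setCancellable, would be roughly:

Observable<List<Widget>> findWidgetsObservable = Observable.create( emitter -> {
    ScheduledFuture<?> future = executorService.scheduleWithFixedDelay(
            emitFindWidgets( emitter ), 0, 30, TimeUnit.SECONDS );
    // cancel the periodic task when the subscription is disposed/cancelled
    emitter.setCancellable( () -> future.cancel( true ) );
} );

With that in place, the downstream dispose/cancel propagates to the scheduled task and the emissions stop.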
I have a ThreadPoolExecutor with core pool size 5 and max pool size 10. I have also set the queue size to 0, so when I try to submit 11 tasks, the last one is rejected. For this case I am using a RejectedExecutionHandler. However, from the submitter thread I cannot determine whether my task was submitted or rejected.
Here is the code from the submitter thread:
public void submitToAsyncExecution(Runnable r) {
    Future<?> task = this.threadPool.submit(r);
    // is it possible to use the returned Future object to find out whether my task was rejected or not?
}
I know the alternative is to omit the RejectedExecutionHandler and let the rejection exception be thrown; however, the handler approach is more suitable in my case.
If I understood correctly, you want the code that is using the Future to receive the RejectedExecutionException and handle it, instead of the code calling submitToAsyncExecution.
I can offer you the following:
public Future<?> submitToAsyncExecution(Runnable r) {
    try {
        return this.threadPool.submit(r);
    } catch (RejectedExecutionException e) {
        final CompletableFuture<Void> cf = new CompletableFuture<>();
        // You can wrap it or create another exception if needed.
        cf.completeExceptionally(e);
        return cf;
    }
}
It catches the exception and creates a Future that is already completed with it. Every piece of code using the returned Future will receive the rejection exception and can deal with it.
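On the calling side the rejection then surfaces through the Future, for example (just an illustration of how it comes out):

Future<?> task = submitToAsyncExecution(r);
try {
    task.get();
} catch (ExecutionException e) {
    if (e.getCause() instanceof RejectedExecutionException) {
        // the task never made it into the pool; handle the rejection here
    }
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
}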
I wanted to prototype an example where I call ServiceC with a value returned by ServiceA, using the Spring Reactor Streams API. So I wrote code like this:
final ExecutorService executor = new ThreadPoolExecutor(4, 4, 10, TimeUnit.MINUTES, new LinkedBlockingQueue<Runnable>());

Streams.defer(executor.submit(new CallToRemoteServiceA()))
        .flatMap(s -> Streams.defer(executor.submit(new CallToRemoteServiceC(s))))
        .consume(s -> System.out.println("End Result : " + s));
To simulate the latency involved in ServiceA and ServiceC, the call() methods of CallToRemoteServiceA and CallToRemoteServiceC contain Thread.sleep() calls. The problem is that when I comment out the Thread.sleep() calls, i.e. the service calls have no latency (which is not true in the real world), the consume method gets called. If the Thread.sleep() calls are kept in place, the consume method doesn't get called. I understand that Streams.defer() returns a cold stream and hence probably only executes the consume method for items accepted after its registration, but then I was wondering how I could create a hot stream from a Future returned by the ExecutorService?
I believe this is because of a bug in the reactor.rx.stream.FutureStream.subscribe() method, in these lines:
try {
    // Bug in the line below since unit is never null
    T result = unit == null ? future.get() : future.get(time, unit);
    buffer.complete();
    onNext(result);
    onComplete();
} catch (Throwable e) {
    onError(e); // <-- with the default constructor this gets called if time == 0
                //     and the future has not yet returned
}
In this case, when the default FutureStream(Future) constructor is used, unit is never null, so the above code always calls future.get(0, TimeUnit.SECONDS), leading to an immediate timeout exception in the catch (Throwable) block. If you agree that this is a bug, I can make a pull request with a fix for this issue.
I think what you want is to use Streams.just. You can optionally add .dispatchOn(Dispatcher) if you want, but since you're already on a thread of the thread pool, you'll probably want to use the sync Dispatcher. Here's a quick test to illustrate:
@Test
public void streamsDotJust() throws InterruptedException {
    ExecutorService executor = Executors.newSingleThreadExecutor();

    Streams
            .just(executor.submit(() -> "Hello World!"))
            .map(f -> {
                try {
                    return f.get();
                } catch (Exception e) {
                    throw new IllegalStateException(e);
                }
            })
            .consume(System.out::println);

    Thread.sleep(100);
}
E.g. I am looking for a way to execute an @Async method not absolutely asynchronously.
For example, I want to invoke an @Async task that blocks my process for up to a maximum defined time if the task still hasn't completed.
@Async
public Future<ModelObject> doSomething() {
    // here we will block for a max allowed time if the task still hasn't completed
}
So such code would be semi-asynchronous, but the blocking time could be controlled by the developer.
P.S.: Of course I can achieve this by simply blocking the calling thread for a limited time, but I am looking to achieve it within the Spring layer.
In short, no, there is no way to configure Spring to do this.
The @Async annotation is handled by the AsyncExecutionInterceptor, which delegates the work to an AsyncTaskExecutor. You could, in theory, write your own implementation of the AsyncTaskExecutor, but even then there would be no way to use the @Async annotation to pass the desired wait time to your executor. It's also not clear to me what the caller's interface would look like, since they'd still be getting a Future object back; you would probably need to subclass the Future object as well. Basically, by the time you are finished, you will have written the entire feature again more or less from scratch.
You could always wrap the returned Future object in your own WaitingFuture proxy that provides an alternate get implementation, although even then you'd have no way of specifying the wait value on the callee side:
WaitingFuture<ModelObject> future = new WaitingFuture<ModelObject>(service.doSomething());
// Instead of throwing a timeout, this impl could just return null if 3 seconds pass with no answer
ModelObject result = future.get(3000);
if (result == null) {
    // Path A
} else {
    // Path B
}
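A minimal sketch of what such a WaitingFuture might look like (this class is hypothetical, not something Spring provides):

public class WaitingFuture<T> {

    private final Future<T> delegate;

    public WaitingFuture(Future<T> delegate) {
        this.delegate = delegate;
    }

    // Returns null instead of throwing if the timeout elapses.
    public T get(long timeoutMillis) throws InterruptedException, ExecutionException {
        try {
            return this.delegate.get(timeoutMillis, TimeUnit.MILLISECONDS);
        } catch (TimeoutException ex) {
            return null; // the caller takes "Path A"
        }
    }

}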
Or, if you don't want to write your own class, just catch the TimeoutException:
Future<ModelObject> future = doSomething();
try {
    ModelObject result = future.get(3000, TimeUnit.MILLISECONDS);
    // Path B
} catch (TimeoutException ex) {
    // Path A
}
You can do it with an @Async method that returns a Future:
Future<String> futureString = asyncTimeout(10000);
// blocks for at most 5 seconds; since the task sleeps for 10 seconds, this throws a TimeoutException
futureString.get(5000, TimeUnit.MILLISECONDS);
@Async
public Future<String> asyncTimeout(long mills) throws InterruptedException {
    return new AsyncResult<String>(
            sleepAndWake(mills)
    );
}

public String sleepAndWake(long mills) throws InterruptedException {
    Thread.sleep(mills);
    return "wake";
}