I am trying to make a Flowable with backpressure.
My idea is that a new item of the Flowable won't be emitted until one of the currently processing items finishes. I am using a ResourceSubscriber and the subscribeWith() method to achieve that.
Each element of the Flowable is processed asynchronously on a separate thread pool (which I achieve by using flatMap/subscribeOn).
I expect every element after the second to be emitted only AFTER the subscriber's onNext method has been called. However, when I run this code the Flowable emits elements uncontrollably and the backpressure doesn't work.
Here is the code to reproduce the issue:
import io.reactivex.Flowable;
import io.reactivex.schedulers.Schedulers;
import io.reactivex.subscribers.ResourceSubscriber;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.util.concurrent.atomic.AtomicInteger;

public class RxTest2 {

    private static final Logger log = LoggerFactory.getLogger(RxTest.class);

    static AtomicInteger integer = new AtomicInteger();

    public static void main(String[] args) {
        Flowable.generate(emitter -> {
                    final int i1 = integer.incrementAndGet();
                    if (i1 >= 20) {
                        Thread.sleep(10000);
                        System.exit(0);
                    }
                    emitter.onNext(i1);
                })
                .doOnNext(i -> log.info("Published: " + i))
                .flatMap(i -> Flowable.defer(() -> {
                    log.info("Starting consuming {}", i);
                    Thread.sleep(100);
                    log.info("Finished consuming {}", i);
                    return Flowable.just(i);
                }).subscribeOn(Schedulers.computation()))
                .doOnNext(i -> log.info("Consuming finished, result: " + i))
                .subscribeWith(new BackpressureSubscriber(2));
    }
}
class BackpressureSubscriber extends ResourceSubscriber<Object> {

    private static final Logger log = LoggerFactory.getLogger(BackpressureSubscriber.class);

    private final long initialRequest;

    public BackpressureSubscriber(final long initialRequest) {
        this.initialRequest = initialRequest;
    }

    @Override
    protected void onStart() {
        super.onStart();
        log.info("Starting execution with {} initial requests", initialRequest);
        request(initialRequest);
    }

    @Override
    public void onNext(final Object message) {
        log.info("On next for {}", message);
        request(1);
    }

    @Override
    public void onError(final Throwable throwable) {
        log.error("Unhandled error: ", throwable);
    }

    @Override
    public void onComplete() {
        log.info("On Complete");
    }
}
Expected output, something like:
[main] INFO RxTest - Published: 1
[main] INFO RxTest - Published: 2
[RxComputationThreadPool-1] INFO RxTest - Starting consuming 1
[RxComputationThreadPool-1] INFO RxTest - Finished consuming 1
[RxComputationThreadPool-2] INFO RxTest - Starting consuming 2
[RxComputationThreadPool-1] INFO RxTest - On next for 1
[main] INFO RxTest - Published: 3
[RxComputationThreadPool-1] INFO RxTest - Finished consuming 2
Actual Output:
11:30:32.166 [main] INFO BackpressureSubscriber - Starting execution with 2 initial requests
11:30:32.170 [main] INFO RxTest - Published: 1
11:30:32.189 [main] INFO RxTest - Published: 2
11:30:32.189 [RxComputationThreadPool-1] INFO RxTest - Starting consuming 1
11:30:32.189 [RxComputationThreadPool-2] INFO RxTest - Starting consuming 2
11:30:32.189 [main] INFO RxTest - Published: 3
11:30:32.190 [main] INFO RxTest - Published: 4
11:30:32.190 [RxComputationThreadPool-3] INFO RxTest - Starting consuming 3
11:30:32.190 [main] INFO RxTest - Published: 5
11:30:32.190 [RxComputationThreadPool-4] INFO RxTest - Starting consuming 4
11:30:32.190 [main] INFO RxTest - Published: 6
11:30:32.190 [RxComputationThreadPool-5] INFO RxTest - Starting consuming 5
11:30:32.190 [main] INFO RxTest - Published: 7
11:30:32.191 [RxComputationThreadPool-6] INFO RxTest - Starting consuming 6
11:30:32.191 [main] INFO RxTest - Published: 8
11:30:32.191 [RxComputationThreadPool-7] INFO RxTest - Starting consuming 7
11:30:32.191 [main] INFO RxTest - Published: 9
11:30:32.191 [RxComputationThreadPool-8] INFO RxTest - Starting consuming 8
11:30:32.191 [main] INFO RxTest - Published: 10
11:30:32.191 [RxComputationThreadPool-9] INFO RxTest - Starting consuming 9
11:30:32.191 [main] INFO RxTest - Published: 11
11:30:32.191 [RxComputationThreadPool-10] INFO RxTest - Starting consuming 10
11:30:32.192 [main] INFO RxTest - Published: 12
11:30:32.192 [RxComputationThreadPool-11] INFO RxTest - Starting consuming 11
11:30:32.192 [main] INFO RxTest - Published: 13
11:30:32.192 [main] INFO RxTest - Published: 14
11:30:32.192 [RxComputationThreadPool-12] INFO RxTest - Starting consuming 12
11:30:32.192 [main] INFO RxTest - Published: 15
11:30:32.192 [main] INFO RxTest - Published: 16
11:30:32.192 [main] INFO RxTest - Published: 17
11:30:32.192 [main] INFO RxTest - Published: 18
11:30:32.192 [main] INFO RxTest - Published: 19
11:30:32.294 [RxComputationThreadPool-2] INFO RxTest - Finished consuming 2
11:30:32.294 [RxComputationThreadPool-1] INFO RxTest - Finished consuming 1
11:30:32.294 [RxComputationThreadPool-1] INFO RxTest - Consuming finished, result: 1
11:30:32.294 [RxComputationThreadPool-1] INFO BackpressureSubscriber - On next for 1
Tested on library versions:
2.2.19
2.1.2
As far as I understand the ReactiveX documentation, I think this is an RxJava bug. However, I might be wrong and would be grateful if you could point out what I am missing.
flatMap actually requests from upstream in batches and buffers items until downstream requests them. That fact alone is enough to explain the behaviour you are seeing. If you had set bufferSize to 1 you might see the behaviour you expected; there is an overload that lets you set bufferSize.
In addition, flatMap has a maxConcurrent parameter, which is easier to understand once you realize that flatMap is effectively a map followed by a merge applied to the stream of streams that the map produces. The merge can only realistically subscribe to a limited number of sources at a time, and that limit is maxConcurrent. The default for both bufferSize and maxConcurrent is 128.
Bear in mind that when the merge step receives a request from downstream, it has no idea how many streams (remember, we are dealing with a stream of streams here) it will need to subscribe to in order to fulfil that request. The first 10 streams could return no values at all. If the first stream returns nothing and doesn't complete for an hour, and maxConcurrent is 1, then we receive no events at all for that first hour, even though streams 2 and 3 were ready to send us data. For reasons like these, all-purpose defaults have to be chosen for bufferSize and maxConcurrent, and the values are normally picked to optimize performance in certain benchmark cases and to minimize problems in many edge cases.
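If it helps, here is a rough sketch of those overloads in use, keeping the same pipeline shape as your code. The range source, the 100 ms sleep inside the inner Flowable and the values 2 and 1 are placeholders of mine, not taken from your code:

// flatMap(mapper, delayErrors, maxConcurrency, bufferSize):
// maxConcurrency = 2 caps how many inner Flowables are subscribed to
// (and requested from upstream) at once; bufferSize = 1 replaces the
// default prefetch of 128.
Flowable.range(1, 20)
        .doOnNext(i -> log.info("Published: " + i))
        .flatMap(i -> Flowable.just(i)
                        .subscribeOn(Schedulers.computation())
                        .doOnNext(v -> Thread.sleep(100)),
                false, /* delayErrors */
                2,     /* maxConcurrency */
                1)     /* bufferSize */
        .doOnNext(i -> log.info("Consuming finished, result: " + i))
        .subscribeWith(new BackpressureSubscriber(2));

With values like these, the upstream should be asked for new items roughly in step with the request(1) calls made from onNext, instead of in batches of 128.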
Related
I am trying to understand how doAfterTerminate works with delaySequence. I have the following test:
@Test
fun testDoAfterTerminate() {
    logger.info("Starting test")
    val sch = Schedulers.single()
    val testFlux = Flux.fromArray(intArrayOf(1, 2, 3).toTypedArray())
        .doAfterTerminate { logger.info("Finished processing batch!") }
        .delaySequence(Duration.ofSeconds(1), sch)
        .doOnNext { logger.info("Done $it") }
        .doAfterTerminate { logger.info("Finished v2") }
    StepVerifier.create(testFlux).expectNextCount(3).verifyComplete()
}
The output of this test is:
22:27:54.547 [Test worker] INFO leon.patmore.kafkareactive.TestReactor - Finished processing batch!
22:27:55.561 [single-1] INFO leon.patmore.kafkareactive.TestReactor - Done 1
22:27:55.561 [single-1] INFO leon.patmore.kafkareactive.TestReactor - Done 2
22:27:55.561 [single-1] INFO leon.patmore.kafkareactive.TestReactor - Done 3
22:27:55.562 [single-1] INFO leon.patmore.kafkareactive.TestReactor - Finished v2
Does anyone understand why the first doAfterTerminate is called before the flux completes?
If I remove the .delaySequence(Duration.ofSeconds(1), sch) line, the termination happens as expected:
22:29:37.588 [Test worker] INFO leon.patmore.kafkareactive.TestReactor - Done 1
22:29:37.588 [Test worker] INFO leon.patmore.kafkareactive.TestReactor - Done 2
22:29:37.588 [Test worker] INFO leon.patmore.kafkareactive.TestReactor - Done 3
22:29:37.588 [Test worker] INFO leon.patmore.kafkareactive.TestReactor - Finished v2
22:29:37.588 [Test worker] INFO leon.patmore.kafkareactive.TestReactor - Finished processing batch!
Thanks!
The first doAfterTerminate is triggered on the main thread, without any delay; only the later signals are delayed and continue on the single() Scheduler. Adding some log() operators makes this clearer.
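Here is a rough Java translation of the test with the log() calls at the positions I assume they were added (the exact placement is a guess, inferred from the operator names that show up in the output):

Flux<Integer> testFlux = Flux.fromArray(new Integer[]{1, 2, 3})
        .doAfterTerminate(() -> logger.info("Finished processing batch!"))
        .log()   // shows up as r.F.P.1 (a fuseable peek)
        .delaySequence(Duration.ofSeconds(1), Schedulers.single())
        .doOnNext(i -> logger.info("Done " + i))
        .log()   // shows up as r.Flux.Peek.2
        .doAfterTerminate(() -> logger.info("Finished v2"));

StepVerifier.create(testFlux).expectNextCount(3).verifyComplete();

With the logs in place, the output looks like this: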
INFO main r.F.P.1 - | onSubscribe([Fuseable] FluxPeekFuseable.PeekFuseableSubscriber)
INFO main r.Flux.Peek.2 - onSubscribe(FluxPeek.PeekSubscriber)
INFO main r.Flux.Peek.2 - request(unbounded)
INFO main r.F.P.1 - | request(unbounded)
INFO main r.F.P.1 - | onNext(1)
INFO main r.F.P.1 - | onNext(2)
INFO main r.F.P.1 - | onNext(3)
INFO main r.F.P.1 - | onComplete()
Finished processing batch!
Done 1
Done 2
INFO single-1 r.Flux.Peek.2 - onNext(1)
Done 3
INFO single-1 r.Flux.Peek.2 - onNext(2)
INFO single-1 r.Flux.Peek.2 - onNext(3)
INFO single-1 r.Flux.Peek.2 - onComplete()
Finished v2
I have a stream of outgoing messages. They can occur at arbitrary intervals. If there is no message for a period after the last message was sent, I'd like to emit a new message which acts as a keep-alive or heartbeat.
Here's a code example of what I tried. Suppose I'd like to emit a heartbeat message every 1s after "C" until "D".
Flux.concat(
        Flux.just("A", "B", "C").delayElements(Duration.ofMillis(500)),
        Flux.just("D").delaySequence(Duration.ofSeconds(5))
    )
    .windowTimeout(1, Duration.ofSeconds(1))
    .flatMap(window -> window.switchIfEmpty(Mono.just("*")))
    .log()
    .blockLast();
Here's the output
14:30:14.659 [parallel-2] INFO reactor.Flux.FlatMap.1 - onNext(A)
14:30:15.162 [parallel-3] INFO reactor.Flux.FlatMap.1 - onNext(B)
14:30:15.663 [parallel-4] INFO reactor.Flux.FlatMap.1 - onNext(C)
14:30:16.664 [parallel-1] INFO reactor.Flux.FlatMap.1 - onNext(*)
14:30:17.665 [parallel-1] INFO reactor.Flux.FlatMap.1 - onNext(*)
14:30:18.664 [parallel-1] INFO reactor.Flux.FlatMap.1 - onNext(*)
14:30:19.670 [parallel-1] INFO reactor.Flux.FlatMap.1 - onNext(*)
14:30:20.665 [parallel-1] INFO reactor.Flux.FlatMap.1 - onNext(*)
14:30:20.676 [parallel-1] INFO reactor.Flux.FlatMap.1 - onNext(D)
14:30:20.677 [parallel-1] INFO reactor.Flux.FlatMap.1 - onNext(*) // Why?
14:30:20.679 [parallel-1] INFO reactor.Flux.FlatMap.1 - onComplete()
In this example, D follows C by 5.013 seconds even though I specified 5 seconds, so I'm not bothered whether 4 or 5 items/heartbeats are emitted in between. It doesn't need to be that precise.
But why is there another item emitted after D? Is there a way to fix it? Perhaps I'm using the wrong operator.
I suppose I can achieve it using a processor, but the documentation says
Most of the time, you should try to avoid using a Processor.
What are the situations which cause Flux::flatMap to listen to multiple sources (0...infinity) concurrently?
While experimenting, I found that when the upstream sends signals to flatMap on thread thread-upstream-1, and there are N inner streams that flatMap listens to, each sending its signals on a different thread thread-inner-stream-i (1 <= i <= N), then for every i with thread-upstream-1 != thread-inner-stream-i, flatMap listens to all the inner streams concurrently.
I think that this is not exactly right and that I am missing some other scenarios.
flatMap doesn't do any parallel work, as in: it doesn't change threads. The simplest example is
Flux.range(1, 5).hide()
    .flatMap(v -> Flux.range(10 * v, 2))
    .log()
    .blockLast(); // for test purposes
This prints:
[main] INFO reactor.Flux.FlatMap.1 - onSubscribe(FluxFlatMap.FlatMapMain)
[main] INFO reactor.Flux.FlatMap.1 - request(unbounded)
[main] INFO reactor.Flux.FlatMap.1 - onNext(10)
[main] INFO reactor.Flux.FlatMap.1 - onNext(11)
[main] INFO reactor.Flux.FlatMap.1 - onNext(20)
[main] INFO reactor.Flux.FlatMap.1 - onNext(21)
[main] INFO reactor.Flux.FlatMap.1 - onNext(30)
[main] INFO reactor.Flux.FlatMap.1 - onNext(31)
[main] INFO reactor.Flux.FlatMap.1 - onNext(40)
[main] INFO reactor.Flux.FlatMap.1 - onNext(41)
[main] INFO reactor.Flux.FlatMap.1 - onNext(50)
[main] INFO reactor.Flux.FlatMap.1 - onNext(51)
[main] INFO reactor.Flux.FlatMap.1 - onComplete()
As you can see, everything is produced on main. If you add a publishOn after the initial range, flatMap produces everything on the single thread that publishOn switches to.
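For example, a minimal variation of the snippet above (the choice of Schedulers.single() is just for illustration):

Flux.range(1, 5).hide()
    .publishOn(Schedulers.single())  // switch the rest of the pipeline to a single worker
    .flatMap(v -> Flux.range(10 * v, 2))
    .log()                           // onNext/onComplete are now logged on single-1
    .blockLast(); // for test purposes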
What flatMap does however is subscribe to multiple inner Publisher, up to the concurrency parameter with a default of Queues.SMALL_BUFFER_SIZE (256).
That means that if you set it to 3, flatMap will map 3 source elements to their inner Publisher and subscribe to these publishers, but will wait for at least one to complete before it starts mapping more source elements.
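For instance (a sketch; 3 is an arbitrary value):

Flux.range(1, 5).hide()
    .flatMap(v -> Flux.range(10 * v, 2), 3)  // concurrency = 3: at most 3 inner Publishers subscribed at a time
    .log()
    .blockLast(); // for test purposes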
If the inner Publishers use publishOn or subscribeOn, then flatMap will naturally let their events occur on the threads defined there:

Flux.range(1, 5).hide()
    .flatMap(v -> Flux.range(v * 10, 2)
        .publishOn(Schedulers.newParallel("foo", 3)))
    .log()
    .blockLast(); // for test purposes
Which prints:
[main] INFO reactor.Flux.FlatMap.1 - onSubscribe(FluxFlatMap.FlatMapMain)
[main] INFO reactor.Flux.FlatMap.1 - request(unbounded)
[foo-1] INFO reactor.Flux.FlatMap.1 - onNext(10)
[foo-1] INFO reactor.Flux.FlatMap.1 - onNext(11)
[foo-1] INFO reactor.Flux.FlatMap.1 - onNext(20)
[foo-1] INFO reactor.Flux.FlatMap.1 - onNext(21)
[foo-1] INFO reactor.Flux.FlatMap.1 - onNext(30)
[foo-1] INFO reactor.Flux.FlatMap.1 - onNext(31)
[foo-4] INFO reactor.Flux.FlatMap.1 - onNext(50)
[foo-4] INFO reactor.Flux.FlatMap.1 - onNext(51)
[foo-4] INFO reactor.Flux.FlatMap.1 - onNext(40)
[foo-4] INFO reactor.Flux.FlatMap.1 - onNext(41)
[foo-4] INFO reactor.Flux.FlatMap.1 - onComplete()
I want to send multiple messages that will traverse the same route asynchronously, and I want to be able to know when all the processing has completed.
Since I need to know when each route has terminated, I thought about using ProducerTemplate#asyncRequestBody, which uses the InOut pattern, so that calling get on the returned Future blocks until the route has terminated.
So far so good: each request is sent asynchronously to the route, and looping over all the Futures calling get blocks until all my routes have completed.
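Roughly, the pattern I am describing looks like this (the endpoint URI and the bodies are just placeholders):

ProducerTemplate template = camelContext.createProducerTemplate();

List<Future<Object>> futures = new ArrayList<>();
for (String body : Arrays.asList("R0", "R1", "R2")) {
    // InOut: the Future completes only once the route has produced its reply
    futures.add(template.asyncRequestBody("direct:E1", body));
}
for (Future<Object> future : futures) {
    future.get(); // blocks until that particular exchange has completed (checked exceptions omitted)
}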
The problem is that, while the requests are sent asynchronously, I also want them to be consumed in parallel.
For example, with P being the ProducerTemplate, Rn being requests and En being endpoints, what I want is:
   -> R0 -> from(E1).to(E2).to(E3) : done.
  /
P -> R1 -> from(E1).to(E2).to(E3) : done.
  \
   -> R2 -> from(E1).to(E2).to(E3) : done.
        ^__ Requests consumed in parallel.
After a bit of research, I stumbled onto Competing Consumers, which parallelizes execution by adding more consumers.
However, since there are multiple executions at the same time, this slows down the execution of each route, which causes some ExchangeTimedOutException:
The OUT message was not received within: 20000 millis due reply message with correlationID...
Not a surprise, since I am sending an InOut request. But I don't actually care about the response; I only use it to know when my route has terminated. I would use InOnly (ProducerTemplate#asyncSendBody), but then calling Future#get would not block until the entire task is completed.
Is there another alternative to send requests asynchronously and detect when they have all completed?
Note that changing the timeout is not an option in my case.
My first instinct is to recommend using NotifyBuilder to track the processing, more specifically using whenBodiesDone to target specific bodies.
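For example, a minimal sketch of whenBodiesDone (the context variable, route id and bodies are placeholders):

NotifyBuilder notify = new NotifyBuilder(camelContext)
        .fromRoute("parallel")
        .whenBodiesDone(0, 1, 2, 3, 4)   // wait until these specific bodies are done
        .create();

// ... send the messages ...

boolean allDone = notify.matches(1, TimeUnit.MINUTES);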
EDIT:
Here's a trivial implementation, but it does demonstrate the point:
import java.util.List;
import java.util.concurrent.TimeUnit;
import java.util.stream.IntStream;

import org.apache.camel.ProducerTemplate;
import org.apache.camel.builder.NotifyBuilder;
import org.apache.camel.builder.RouteBuilder;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.stereotype.Component;

import static java.util.stream.Collectors.toList;

@SpringBootApplication
public class DemoApplication {

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }

    @Component
    public static class ParallelProcessingRouteBuilder extends RouteBuilder {

        @Override
        public void configure() throws Exception {
            from("seda:test?concurrentConsumers=5")
                    .routeId("parallel")
                    .log("Received ${body}, processing")
                    .delay(5000)
                    .log("Processed ${body}")
                    .stop();

            from("timer:testStarter?delay=3000&period=300000")
                    .routeId("test timer")
                    .process(exchange -> {
                        // messages we want to track
                        List<Integer> toSend = IntStream.range(0, 5).boxed().collect(toList());

                        NotifyBuilder builder = new NotifyBuilder(getContext())
                                .fromRoute("parallel")
                                .filter(e -> toSend.contains(e.getIn().getBody(Integer.class)))
                                .whenDone(toSend.size())
                                .create();

                        ProducerTemplate template = getContext().createProducerTemplate();

                        // messages we do not want to track
                        IntStream.range(10, 15)
                                .forEach(body -> template.sendBody("seda:test", body));

                        toSend.forEach(body -> template.sendBody("seda:test", body));

                        exchange.getIn().setBody(builder.matches(1, TimeUnit.MINUTES));
                    })
                    .log("Matched? ${body}");
        }
    }
}
And here's a sample of the logs:
2016-08-06 11:45:03.861 INFO 27410 --- [1 - seda://test] parallel : Received 10, processing
2016-08-06 11:45:03.861 INFO 27410 --- [5 - seda://test] parallel : Received 11, processing
2016-08-06 11:45:03.864 INFO 27410 --- [2 - seda://test] parallel : Received 12, processing
2016-08-06 11:45:03.865 INFO 27410 --- [4 - seda://test] parallel : Received 13, processing
2016-08-06 11:45:03.866 INFO 27410 --- [3 - seda://test] parallel : Received 14, processing
2016-08-06 11:45:08.867 INFO 27410 --- [1 - seda://test] parallel : Processed 10
2016-08-06 11:45:08.867 INFO 27410 --- [3 - seda://test] parallel : Processed 14
2016-08-06 11:45:08.867 INFO 27410 --- [4 - seda://test] parallel : Processed 13
2016-08-06 11:45:08.868 INFO 27410 --- [2 - seda://test] parallel : Processed 12
2016-08-06 11:45:08.868 INFO 27410 --- [5 - seda://test] parallel : Processed 11
2016-08-06 11:45:08.870 INFO 27410 --- [1 - seda://test] parallel : Received 0, processing
2016-08-06 11:45:08.872 INFO 27410 --- [4 - seda://test] parallel : Received 2, processing
2016-08-06 11:45:08.872 INFO 27410 --- [3 - seda://test] parallel : Received 1, processing
2016-08-06 11:45:08.872 INFO 27410 --- [2 - seda://test] parallel : Received 3, processing
2016-08-06 11:45:08.872 INFO 27410 --- [5 - seda://test] parallel : Received 4, processing
2016-08-06 11:45:13.876 INFO 27410 --- [1 - seda://test] parallel : Processed 0
2016-08-06 11:45:13.876 INFO 27410 --- [3 - seda://test] parallel : Processed 1
2016-08-06 11:45:13.876 INFO 27410 --- [4 - seda://test] parallel : Processed 2
2016-08-06 11:45:13.876 INFO 27410 --- [5 - seda://test] parallel : Processed 4
2016-08-06 11:45:13.876 INFO 27410 --- [2 - seda://test] parallel : Processed 3
2016-08-06 11:45:13.877 INFO 27410 --- [r://testStarter] test timer : Matched? true
You'll notice how the NotifyBuilder returned the result as soon as the results matched.
If you know that each batch of messages you are consuming contains X messages, you can use an aggregator at the end of your parallel processing. In your example, each group of messages would have its own unique header tag that is picked up by the aggregator. After all the messages have been processed and have ended up at the aggregator, you can aggregate them into whatever format you want and return them.
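A rough sketch of that idea; the route, the batchId header, the batch size of 5 and the aggregation strategy are assumptions of mine, not part of the answer above:

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.processor.aggregate.GroupedBodyAggregationStrategy;

public class BatchAggregatorRoute extends RouteBuilder {
    @Override
    public void configure() {
        // each message of a batch is assumed to carry the same "batchId" header
        // and each batch is assumed to contain exactly 5 messages
        from("seda:test?concurrentConsumers=5")
            .routeId("parallel")
            .log("Processed ${body}")
            .aggregate(header("batchId"), new GroupedBodyAggregationStrategy())
                .completionSize(5)
            .log("Batch ${header.batchId} complete: ${body}");
    }
}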
I'm starting to learn Apache Camel and have run into a problem.
I need to read an XML file from the file system, parse it, and transfer the files specified in this XML to another location.
This is an example of the XML, located in "C:/Users/JuISe/Desktop/jms":
<file>
    <from>C:/Users/JuISe/Desktop/from</from>
    <to>C:/Users/JuISe/Desktop/to</to>
</file>
It means: transfer all files from the "C:/Users/JuISe/Desktop/from" directory to "C:/Users/JuISe/Desktop/to".
Here is my code:
import org.apache.camel.CamelContext;
import org.apache.camel.Exchange;
import org.apache.camel.Processor;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class FileShifter {

    public static void main(String[] args) {
        CamelContext context = new DefaultCamelContext();
        try {
            context.addRoutes(new MyRouteBuilder());
            context.start();
            Thread.sleep(10000);
            context.stop();
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }
}

class MyRouteBuilder extends RouteBuilder {

    private String from;
    private String to;

    public void configure() {
        from("file:C:/Users/JuISe/Desktop/jms?noop=true")
                .setHeader("from", xpath("file/from/text()").stringResult())
                .setHeader("to", xpath("file/to/text()").stringResult())
                .process(new Processor() {
                    @Override
                    public void process(Exchange exchange) throws Exception {
                        from = exchange.getIn().getHeader("from").toString();
                        to = exchange.getIn().getHeader("to").toString();
                    }
                })
                .pollEnrich("file:" + from)
                .to("file:" + to);
    }
}
It doesn't work.
Here are the logs:
[main] INFO org.apache.camel.impl.converter.DefaultTypeConverter - Loaded 216 type converters
[main] INFO org.apache.camel.impl.DefaultRuntimeEndpointRegistry - Runtime endpoint registry is in extended mode gathering usage statistics of all incoming and outgoing endpoints (cache limit: 1000)
[main] INFO org.apache.camel.impl.DefaultCamelContext - AllowUseOriginalMessage is enabled. If access to the original message is not needed, then its recommended to turn this option off as it may improve performance.
[main] INFO org.apache.camel.impl.DefaultCamelContext - StreamCaching is not in use. If using streams then its recommended to enable stream caching. See more details at http://camel.apache.org/stream-caching.html
[main] INFO org.apache.camel.component.file.FileEndpoint - Endpoint is configured with noop=true so forcing endpoint to be idempotent as well
[main] INFO org.apache.camel.component.file.FileEndpoint - Using default memory based idempotent repository with cache max size: 1000
[main] INFO org.apache.camel.impl.DefaultCamelContext - Route: route1 started and consuming from: Endpoint[file://C:/Users/JuISe/Desktop/jms?noop=true]
[main] INFO org.apache.camel.impl.DefaultCamelContext - Total 1 routes, of which 1 is started.
[main] INFO org.apache.camel.impl.DefaultCamelContext - Apache Camel 2.16.1 (CamelContext: camel-1) started in 1.033 seconds
[Camel (camel-1) thread #0 - file://C:/Users/JuISe/Desktop/jms] WARN org.apache.camel.component.file.strategy.MarkerFileExclusiveReadLockStrategy - Deleting orphaned lock file: C:\Users\JuISe\Desktop\jms\message.xml.camelLock
[Camel (camel-1) thread #0 - file://C:/Users/JuISe/Desktop/jms] INFO org.apache.camel.builder.xml.XPathBuilder - Created default XPathFactory com.sun.org.apache.xpath.internal.jaxp.XPathFactoryImpl#2308d4c8
[main] INFO org.apache.camel.impl.DefaultCamelContext - Apache Camel 2.16.1 (CamelContext: camel-1) is shutting down
[main] INFO org.apache.camel.impl.DefaultShutdownStrategy - Starting to graceful shutdown 1 routes (timeout 300 seconds)
[Camel (camel-1) thread #2 - ShutdownTask] INFO org.apache.camel.impl.DefaultShutdownStrategy - Waiting as there are still 2 inflight and pending exchanges to complete, timeout in 300 seconds. Inflights per route: [route1 = 2]
[Camel (camel-1) thread #2 - ShutdownTask] INFO org.apache.camel.impl.DefaultShutdownStrategy - Waiting as there are still 2 inflight and pending exchanges to complete, timeout in 299 seconds. Inflights per route: [route1 = 2]
[Camel (camel-1) thread #2 - ShutdownTask] INFO org.apache.camel.impl.DefaultShutdownStrategy - Waiting as there are still 2 inflight and pending exchanges to complete, timeout in 298 seconds. Inflights per route: [route1 = 2]
[Camel (camel-1) thread #2 - ShutdownTask] INFO org.apache.camel.impl.DefaultShutdownStrategy - Waiting as there are still 2 inflight and pending exchanges to complete, timeout in 297 seconds. Inflights per route: [route1 = 2]
[Camel (camel-1) thread #2 - ShutdownTask] INFO org.apache.camel.impl.DefaultShutdownStrategy - Waiting as there are still 2 inflight and pending exchanges to complete, timeout in 296 seconds. Inflights per route: [route1 = 2]
[Camel (camel-1) thread #2 - ShutdownTask] INFO org.apache.camel.impl.DefaultShutdownStrategy - Waiting as there are still 2 inflight and pending exchanges to complete, timeout in 295 seconds. Inflights per route: [route1 = 2]
[Camel (camel-1) thread #2 - ShutdownTask] INFO org.apache.camel.impl.DefaultShutdownStrategy - Waiting as there are still 2 inflight and pending exchanges to complete, timeout in 294 seconds. Inflights per route: [route1 = 2]
Thanks for the help!
Try using a bean with producer and consumer templates; file endpoint directories cannot be dynamic:
from("file:/Users/smunirat/apps/destination/jms?noop=true")
.setHeader("from", xpath("file/from/text()").stringResult())
.setHeader("to", xpath("file/to/text()").stringResult())
.process(new Processor() {
#Override
public void process(Exchange exchange) throws Exception {
from = exchange.getIn().getHeader("from").toString();
to = exchange.getIn().getHeader("to").toString();
exchange.getOut().setHeader("from", from);
exchange.getOut().setHeader("to", to);
}
})
.to("log:Sundar?showAll=true&multiline=true")
.process(new Processor() {
#Override
public void process(Exchange exchange) throws Exception {
ConsumerTemplate createConsumerTemplate = exchange.getContext().createConsumerTemplate();
ProducerTemplate createProducerTemplate = exchange.getContext().createProducerTemplate();
Exchange receive = createConsumerTemplate.receive("file://"+exchange.getIn().getHeader("from"));
createProducerTemplate.sendBody("file://"+exchange.getIn().getHeader("to"),receive.getIn().getMandatoryBody());
}
})
.log("Message");
This might require a little tweaking to change the file name and delete the original file from the from location