Implementing threads for separate routes in Apache Camel - Java

I would like to implement a distinct thread for each route in Apache Camel. I do not want to use a thread pool or async because I want my process to remain synchronous. Could I please get a code example for this in Java DSL format?

Each route uses its own thread, unless a route uses the direct component (http://camel.apache.org/direct), which re-uses the caller's thread.
For example, the following 2 routes each use their own thread:
from("file:foo").to("bean:blah");
from("jms:queue:bar").to("bean:great");
On the other hand, take the following 2 routes:
from("file:foo").to("bean:blah").to("direct:bar");
from("direct:bar").to("bean:great");
Here the 2nd route, being a direct endpoint, re-uses the caller thread from the 1st route when the 1st route sends the message to it via .to("direct:bar").

You can use camel-direct to get a single-threaded, synchronous request/response route.
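For example, a minimal Java DSL sketch (the endpoint name file:inbox and the log: endpoints are illustrative stand-ins for real processing steps):

import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class SeparateThreadRoutes {
    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();
        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                // This route has its own file-consumer thread.
                from("file:inbox")
                    .to("log:received")
                    .to("direct:process");

                // A direct: route has no thread of its own; it runs on the caller's
                // thread (here the file consumer's), so the flow stays synchronous.
                from("direct:process")
                    .to("log:processed");
            }
        });
        context.start();
        Thread.sleep(10000); // keep the JVM alive briefly for the demo
        context.stop();
    }
}

The thread names in the log output show that log:received and log:processed are written from the same file-consumer thread.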

Related

Is it possible to call a Camel route defined in XML from a Java method?

I am trying to call a Camel route defined in XML from a Java method. Is it possible to call it and then return to the same method again?
Yes, you can use a ProducerTemplate for that.
If your caller has to wait for the route to complete, your route must be executed synchronously (for example direct routes). See exchange pattern for more details.
If you expect a response from the route you should use one of the request* methods of the producer template (not the send* methods).
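For example, a minimal sketch where a direct: route stands in for the XML-defined one (the endpoint name direct:start is illustrative):

import org.apache.camel.CamelContext;
import org.apache.camel.ProducerTemplate;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class CallRouteFromMethod {
    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();
        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                // Stand-in for the route you would normally define in XML.
                from("direct:start").transform(simple("processed: ${body}"));
            }
        });
        context.start();

        ProducerTemplate template = context.createProducerTemplate();
        // requestBody blocks until the route completes and returns the reply;
        // sendBody would be fire-and-forget instead.
        String reply = template.requestBody("direct:start", "hello", String.class);
        System.out.println(reply); // prints: processed: hello

        context.stop();
    }
}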

Producing tasks on-demand in producer-consumer pattern

Let's consider the following situation in producer-consumer pattern:
I cannot have a task waiting around to be performed. I want to produce a task on demand (e.g. with a Supplier) when a consumer is ready to process it. With a SynchronousQueue I need to have an actual task when executing the put() method. How can I solve my problem?
I know that I could solve it by design, just make a set of workers and tell them to produce a task, consume it, and repeat, but I'm looking for another way.
To be more specific:
Let's consider that I have a remote HTTP resource A. I can get a 'task' from it to process in my worker threads. Results are sent back asynchronously. But the thing is that I should not fetch a task from A if I am not able to process it right now.
"I want to produce task on demand (eg. with Supplier) when a consumer is ready to process it."
One example of producing data on demand is the Reactive Streams protocol, where the Subscriber (consumer) asks the Publisher (producer) to push the next chunk of data via the Subscription.request() method.
This protocol is implemented in RxJava and other libraries.
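As a minimal single-subscriber sketch of that idea with the JDK 9+ java.util.concurrent.Flow API (the SupplierPublisher class and the Supplier body are made up for illustration, and it skips the error handling and re-entrancy rules a real Reactive Streams implementation must follow), the producer only calls the Supplier when the consumer has requested another task:

import java.util.concurrent.Flow;
import java.util.function.Supplier;

public class OnDemandTasks {

    // A tiny Publisher that asks the Supplier for a task only when the
    // subscriber requests one (single subscriber, no error handling).
    static class SupplierPublisher implements Flow.Publisher<String> {
        private final Supplier<String> supplier;

        SupplierPublisher(Supplier<String> supplier) {
            this.supplier = supplier;
        }

        @Override
        public void subscribe(Flow.Subscriber<? super String> subscriber) {
            subscriber.onSubscribe(new Flow.Subscription() {
                @Override
                public void request(long n) {
                    // Produce exactly as many tasks as the consumer asked for.
                    for (long i = 0; i < n; i++) {
                        subscriber.onNext(supplier.get());
                    }
                }

                @Override
                public void cancel() {
                }
            });
        }
    }

    public static void main(String[] args) {
        SupplierPublisher publisher =
                new SupplierPublisher(() -> "task fetched at " + System.nanoTime());

        publisher.subscribe(new Flow.Subscriber<String>() {
            private Flow.Subscription subscription;
            private int handled;

            @Override
            public void onSubscribe(Flow.Subscription s) {
                this.subscription = s;
                s.request(1); // pull the first task only when ready
            }

            @Override
            public void onNext(String task) {
                System.out.println("processing " + task);
                if (++handled < 3) {
                    subscription.request(1); // ready for the next one
                }
            }

            @Override
            public void onError(Throwable t) {
                t.printStackTrace();
            }

            @Override
            public void onComplete() {
            }
        });
    }
}

RxJava's Flowable gives you the same request(n) back-pressure with far less boilerplate.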
If I were you, in the case of the producer-consumer pattern I would not use blocking queues; instead, look for a non-blocking asynchronous queue.
Then everybody gets notified just in time.
Or is there some other constraint on the actual tasks? Or have I misunderstood you somehow? Which side of the producer-consumer pair goes hungry?

Spring Integration - channels and threads

I would like to understand how messages are processed in Spring Integration: serially or in parallel. In particular I have an inbound channel adapter with a poller and an HTTP outbound gateway. I guess splitters, transformers, header enrichers etc. are not spawning their own threads.
I could have missed them, but are these details specified somewhere in the documentation?
Also, can I programmatically get all the channels in the system?
Channel types are described here.
The default channel type is Direct (the endpoint runs on the caller's thread); QueueChannel and ExecutorChannel provide for async operations.
context.getBeansOfType(MessageChannel.class)
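For instance, a small sketch that prints every channel together with its concrete type (assuming Spring Integration 4+, where MessageChannel lives in the spring-messaging module):

import java.util.Map;
import org.springframework.context.ApplicationContext;
import org.springframework.messaging.MessageChannel;

public class ChannelLister {

    // Prints each MessageChannel bean with its concrete class (DirectChannel,
    // QueueChannel, ExecutorChannel, ...), which also hints at its threading behaviour.
    public static void printChannels(ApplicationContext context) {
        Map<String, MessageChannel> channels = context.getBeansOfType(MessageChannel.class);
        channels.forEach((name, channel) ->
                System.out.println(name + " -> " + channel.getClass().getSimpleName()));
    }
}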
Actually "threading" dependes on MessageChannel type:
E.g. DirectChannel (<channel id="foo"/> - default config) doesn't do anything with threads and just shifts the message from send to the subscriber to handle it. If the handler is an AbstractReplyProducingMessageHandler it sends its result to the outputChannel and if the last one is DirectChannel, too, the work is done within the same Thread.
Another sample is about your inbound channel adapter. On the background there is a scheduled task, which executes within the Scheduler thread and if your poll is very often the next poll task might be executed within the new Thread.
The last "rule" is acceptable for QueueChannel: their handle work is done with Scheduler thread, too.
ExcecutorChannel just places the handle task to the Executor.
All other info you you can find in the Reference Manual provided by Gary Russell in his answer.
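To make the channel types concrete, here is a hedged Java-config sketch (bean names and pool size are arbitrary) of the three threading behaviours described above:

import java.util.concurrent.Executors;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.channel.DirectChannel;
import org.springframework.integration.channel.ExecutorChannel;
import org.springframework.integration.channel.QueueChannel;
import org.springframework.messaging.MessageChannel;

@Configuration
public class ChannelConfig {

    // Subscribed handlers run on the sender's thread.
    @Bean
    public MessageChannel syncChannel() {
        return new DirectChannel();
    }

    // Messages are buffered; a poller's scheduler thread picks them up later.
    @Bean
    public MessageChannel bufferedChannel() {
        return new QueueChannel(100);
    }

    // Handlers run on a thread from the supplied executor, not the sender's thread.
    @Bean
    public MessageChannel asyncChannel() {
        return new ExecutorChannel(Executors.newFixedThreadPool(4));
    }
}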

Camel guaranteed message order for different routes

I use Apache Camel 2.10.0 with spring-ws component to route some (20+) WS/SOAP operations.
The sample code looks like:
from("spring-ws:rootqname:{http://my/name/space}myOperation1?endpointMapping=#endpointMapping")
from("spring-ws:rootqname:{http://my/name/space}myOperation2?endpointMapping=#endpointMapping")
from("spring-ws:rootqname:{http://my/name/space}myOperation3?endpointMapping=#endpointMapping")
The operations normally access several databases and can last up to a couple of seconds.
It works perfectly, but now I have a new requirement: 3 of the operations must be synchronized.
For example: if client1 calls operation1 1 ms before client2 calls operation1, client1's call must be finished before client2's one starts.
The same is valid for one client calling 2 different operations.
For example: if client1 calls operation1 1 ms before calling operation2, the operation1 call must be finished before the operation2 call starts. Clients call the WS asynchronously and this cannot be changed.
The application is running with WebLogic 10.3.5.
Reducing the container threads to only 1 would affect all operations, so I was thinking about adding some custom queue (JMS style) just for these 3 operations.
Do you have any better idea?
It looks like all the calls should be put into a queue first; then we can decide which one should be invoked next.
ActiveMQ is easy to set up and works with Camel very well.
You need to route the requests to a JMS queue first; the queue itself is the transaction break point. Then you consume the JMS messages sequentially. You get much more control over threading and message consumption by using this messaging pattern.
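A hedged Java DSL sketch of that idea (the queue name serializedOps and the bean name operationHandler are made up, and an ActiveMQ component registered as "activemq" must be configured separately):

import org.apache.camel.builder.RouteBuilder;

public class SerializedOperationsRoutes extends RouteBuilder {
    @Override
    public void configure() {
        // The three operations that must be synchronized all feed the same queue.
        from("spring-ws:rootqname:{http://my/name/space}myOperation1?endpointMapping=#endpointMapping")
            .to("activemq:queue:serializedOps");
        from("spring-ws:rootqname:{http://my/name/space}myOperation2?endpointMapping=#endpointMapping")
            .to("activemq:queue:serializedOps");
        from("spring-ws:rootqname:{http://my/name/space}myOperation3?endpointMapping=#endpointMapping")
            .to("activemq:queue:serializedOps");

        // A single consumer drains the queue, so these operations run strictly one at a time.
        from("activemq:queue:serializedOps?concurrentConsumers=1")
            .to("bean:operationHandler");
    }
}

Note that the spring-ws exchanges are InOut, so the JMS step would do request/reply over the queue; if the callers do not need a SOAP response body, the routes could be switched to InOnly instead.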

Ordinary Queue vs SEDA Queue

Being new to Apache Camel, I was recently reviewing its long list of components and stumbled upon their support for SEDA queue components.
The page didn't make much sense to me, so I did a couple of online searches for the term "SEDA queue" and got the wikipedia article here.
After reading that article, I can't tell what the difference is between a SEDA queue and a normal, "ordinary" queue! Both embrace the notion of decoupling systems through the use of asynchronous queues.
From the article, "SEDA" just sounds like an architecture that consists of placing a queue between each component. Is this correct?
But if it's just an architecture, then why is a "SEDA" queue a special Apache Camel component?
SEDA is an acronym that stands for Staged Event Driven Architecture. It is designed as a mechanism to regulate the flow between different phases of message processing. The idea is to smooth out the frequency of message output from an overall process so that it matches the input. It allows an endpoint's consumer threads to offload the work of long-running operations into the background, thereby freeing them up to consume messages from the transport.
When an exchange is passed to a seda: endpoint, it is placed into a BlockingQueue. The queue exists within the Camel context, which means that only routes within the same context can be joined by this type of endpoint. The queue is unbounded by default, although that can be changed by setting the size attribute on the URI of the consumer.
By default, a single thread assigned to the endpoint reads exchanges off the queue and processes them through the route. It is possible to increase the number of concurrentConsumers to ensure that exchanges are processed from the queue in a timely fashion.
The SEDA pattern is best suited to processing InOnly messages, where one route finishes processing and hands off to another to deal with the next phase. It is possible to ask for a response from a seda: endpoint by calling it when the message exchange pattern is InOut.
Reference:
Apache Camel Developer's Cookbook
SEDA queues are just like a regular queue (and, as Peter said above, in Camel they have a thread pool associated with them as part of the component). SEDA is an architecture. The SEDA component in Camel uses in-memory queues in your process and is a separate component in order to distinguish it from the other queue component in Apache Camel, namely the JMS component.
SEDA offers decoupling of components within a single Camel route, or for that matter within a single process. Meaning it helps you make async calls to other components... it's an in-memory BlockingQueue.
On the other hand, JMS is used for decoupling of the whole system. JMS will have an external broker involved; SEDA will just hand the work off to a separate thread inside the same process.
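To tie the answers together, a minimal Java DSL sketch of that in-memory hand-off (endpoint names and the consumer count are illustrative):

import org.apache.camel.builder.RouteBuilder;

public class SedaHandOffRoutes extends RouteBuilder {
    @Override
    public void configure() {
        from("file:inbox")
            .to("log:received")
            // Hand the exchange off to the in-memory queue (InOnly); this stage is done with it.
            .to("seda:longRunning");

        // Five background threads drain the queue and perform the slow work.
        from("seda:longRunning?concurrentConsumers=5")
            .to("log:processed");
    }
}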
