I have a situation in which I am generating a list of tasks in my Camel producer, and then I need to send multiple exchanges, each containing one of these tasks, so that the next component can process them.
Is there any pattern or producer in Camel which can serve my purpose?
I know there is DefaultProducerTemplate, but the next part of my route is a processor, which does not have an endpoint to point to, so it is of no use to me.
You could perhaps use the Camel splitter pattern to split the list and send each item down the route?
Apache Camel Splitter
It's a bit unclear what you want. Maybe explain your question in more detail. Also make sure to read about and understand the Enterprise Integration Patterns that Camel offers: http://camel.apache.org/eip
And as U2one says, the splitter pattern may be a good start, according to your question.
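For illustration, a minimal sketch of that approach; the endpoint, task-generation helper, and processor names are hypothetical, not from the question:
// Build the task list in a processor, then let the splitter emit one
// exchange per task for the next processor (all names are hypothetical).
from("direct:tasks")
    .process(exchange -> exchange.getIn().setBody(generateTasks())) // hypothetical helper returning a List of tasks
    .split(body()) // one new exchange per element of the list body
        .process(new TaskProcessor()) // the next component handles a single task
    .end();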
I am trying to understand the "best" way to accomplish this goal:
Using FileReadingMessageSource as the source
for each line in the file (using FileSplitter)
augment the row of data, send a REST request to another server, and wait for the response.
Now, sending the REST request is not a transformer; I don't think it is an adapter, and it is not a router.
What is it? what is the right way to design this?
Your question isn't clear, but I'll try my best.
To call a REST service you should use HttpRequestExecutingMessageHandler, which is essentially an implementation of the Gateway EIP. But from the application's perspective it is a service anyway. So, to hand the message to the HttpRequestExecutingMessageHandler, you should use a @ServiceActivator. That is the term to proceed with.
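For illustration, a minimal Java-config sketch of that wiring; the channel names and target URL are assumptions, not from your question:
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.http.HttpMethod;
import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.integration.config.EnableIntegration;
import org.springframework.integration.http.outbound.HttpRequestExecutingMessageHandler;
import org.springframework.messaging.MessageHandler;

@Configuration
@EnableIntegration
public class RestCallConfig {

    // Each augmented row arriving on restRequestChannel (assumed name) is sent
    // as a REST request; the response is published to restReplyChannel.
    @Bean
    @ServiceActivator(inputChannel = "restRequestChannel")
    public MessageHandler restCallHandler() {
        HttpRequestExecutingMessageHandler handler =
                new HttpRequestExecutingMessageHandler("http://other-server/api/rows"); // assumed URL
        handler.setHttpMethod(HttpMethod.POST);
        handler.setExpectedResponseType(String.class);
        handler.setOutputChannelName("restReplyChannel"); // assumed reply channel
        return handler;
    }
}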
I'm fairly new to Apache Camel and have a couple of questions. I want my route to do the following:
load a list of lists in LoadSomeThingsProcessor
split twice so I can handle each item in the inner list
filter out some things I don't need
join all remaining exchanges from the inner split
then eventually join again (back to one exchange)
My route looks something like the following:
from("direct:myRoute")
.process(new LoadSomeThingsProcessor())
.split(body())
.streaming()
.process(new SomeProcessor())
.split(body())
.streaming()
.filter(new SomeFilter())
.aggregate(header("myHeader"), new MyAggregationStrategy())
.completionPredicate(new MyCompletionPredicate())
// more processors
// aggregate again (should just be one exchange after this point)
// more processors
.to("direct:someOtherRoute");
MyCompletionPredicate's matches method is just:
return exchange.getProperty("CamelSplitComplete", Boolean.class);
I want to ensure that ALL exchanges in each split are aggregated together before I continue.
My questions are:
- The CamelSplitComplete header is somehow never true. What could cause this?
- Is trying to aggregate inside a nested split going to cause any issues?
- What happens if the last exchange (the one that is supposed to have CamelSplitComplete = true) is filtered out? How can I know that I have aggregated all my exchanges together?
- Is this even the right way to approach this problem? If no, what else should I consider?
FYI my aggregation strategy just takes the bodies of the new exchange and adds them to the body of the old exchange.
Many thanks in advance.
See the Composed Message Processor EIP: https://camel.apache.org/components/latest/eips/composed-message-processor.html
And there is an example with the title Splitter Only which allows you to do fork/join-style processing. You can still use a filter inside the split to filter out specific items, etc.
And try not to make it too complicated with 2 x splits and all kinds of stuff - keep it a bit simpler, then it's much easier to use, test and work with.
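For illustration, a minimal sketch of that splitter-only fork/join style, reusing the class names from the question (shown with a single split for simplicity):
from("direct:myRoute")
    .process(new LoadSomeThingsProcessor())
    // passing the AggregationStrategy to split() makes Camel re-join all
    // split exchanges into one when the split block ends (fork/join)
    .split(body(), new MyAggregationStrategy())
        .filter(new SomeFilter())
        .process(new SomeProcessor())
    .end() // a single aggregated exchange continues from here
    .to("direct:someOtherRoute");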
Is there functionality built into Kafka Streams that allows for dynamically connecting a single input stream into multiple output streams? KStream.branch allows branching based on true/false predicates, but this isn't quite what I want. I'd like each incoming log to determine the topic it will be streamed to at runtime, e.g., a log {"date": "2017-01-01"} will be streamed to the topic topic-2017-01-01 and a log {"date": "2017-01-02"} will be streamed to the topic topic-2017-01-02.
I could call foreach on the stream, then write to a Kafka producer, but that doesn't seem very elegant. Is there a better way to do this within the Streams framework?
If you want to create topics dynamically based on your data, you do not get any support within Kafka's Streams API at the moment (v0.10.2 and earlier). You will need to create a KafkaProducer and implement your dynamic "routing" yourself (for example using KStream#foreach() or KStream#process()). Note that you need to do synchronous writes to avoid data loss (which are unfortunately not very performant). There are plans to extend the Streams API with dynamic topic routing, but there is no concrete timeline for this feature right now.
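For illustration, a minimal sketch of that approach against the pre-1.0 Streams API the question targets; the broker address, input topic, and the extractDate() helper are assumptions:
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KStreamBuilder;

// Plain producer used for the dynamic writes (kept synchronous, see above).
Properties producerProps = new Properties();
producerProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker
producerProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
producerProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps);

KStreamBuilder builder = new KStreamBuilder();
KStream<String, String> logs = builder.stream("input-logs"); // assumed input topic
logs.foreach((key, value) -> {
    String topic = "topic-" + extractDate(value); // hypothetical helper parsing the "date" field
    try {
        // send().get() blocks until the write is acknowledged, avoiding data loss
        producer.send(new ProducerRecord<>(topic, key, value)).get();
    } catch (Exception e) {
        throw new RuntimeException(e);
    }
});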
There is one more consideration you should take into account. If you do not know your destination topic(s) ahead of time and just rely on the so-called "topic auto creation" feature, you should make sure that those topics are being created with the desired configuration settings (e.g., number of partitions or replication factor).
As an alternative to "topic auto creation" you can also use Admin Client (available since v0.10.1) to create topics with correct configuration. See https://cwiki.apache.org/confluence/display/KAFKA/KIP-4+-+Command+line+and+centralized+administrative+operations
I'm sort of struggling with the dynamic routing concept and consumer rules.
So let's say I have a route with exchange data, and then I want to use a header from that exchange in the "from" endpoint of a different route.
I think it would look something like this:
Route 1:
from("file:/dir1")
...
.to ("direct:start");
Route 2:
from("direct: start")//get the old exchange data
.from("file:/dir1/?fileName=${header.myHeader}")//start consuming from a different endpoint using old exchange data
...
.to("direct: end);
So those steps seem right to me, but I feel like I'm sort of polluting the exchange.
To me, I'm using dynamic routing, but I'm also creating a new consumer at the same time. That means I'm creating a new exchange, right? So how does Camel know which exchange to pick and use in the rest of the route?
At first I thought it probably combined them, but I did a bit more digging and found that you actually need to use "enrich" to add to an existing exchange.
Can someone explain how Camel handles this sort of scenario? If you have an example, that would be great too. I searched for one in the Camel package with no success.
You can achieve a "dynamic from" with the Content Enricher pattern.
Let's say your first route is used to add the file name to a header, for instance like this:
from("timer:trigger?repeatCount=1")
.routeId("define-file-name")
.setHeader("myHeader", constant("file.txt"))
.to("direct:start");
Then your second route can poll for that file using the information from the exchange header, like this:
from("direct:start")
.routeId("poll-file")
.pollEnrich().simple("file://dir1?fileName=${in.header.myHeader}").timeout(10000)
.log("${body}");
So, after reading some documentation and getting a lot of help from you guys here, I finally implemented a recipient list that chooses the endpoints dynamically (a dynamic recipient list):
http://camel.apache.org/recipient-list.html
http://camel.apache.org/recipientlist-annotation.html
In my code, MainApp_A generates reports every 10 seconds, and I want it to send the reports to all the servers at the same time, instead of doing it one by one. Thus I have developed the following route.
MainApp_A
main.addRouteBuilder(new RouteBuilder() {
    @Override
    public void configure() throws Exception {
        from("direct:start").multicast().parallelProcessing()
            .beanRef("recipientListBean", "route").end()
            .log("${body}");
    }
});
RecipientListBean
@RecipientList
public Set<String> route(String body) {
    return servers; // returns a collection of several servers
}
I also checked the documentation for the Multicast pattern and the Dynamic route:
http://camel.apache.org/multicast.html
http://camel.apache.org/dynamic-router.html
Now I am a bit confused, so I have a few questions:
Is the dynamic recipient list simply the combination of the Multicast pattern with the Dynamic Router pattern?
The multicast() call alone is purely sequential, right? What is the difference between using multicast() and recipientList()?
For both multicast() and recipientList() to be concurrent, I have to use parallelProcessing(). In my case, which is more efficient?
The dynamic router is used to evaluate which endpoints to send a specific message to at runtime, one endpoint at a time. Recipient List is a set of endpoints to send the very same message to.
I recommend reading EAI Patterns by Gregor Hohpe and Bobby Woolf for some background and insight.
http://www.eaipatterns.com/
Multicast allows a hard coded recipient list and recipientList(..) allows endpoints computed at runtime. Both are "recipient lists".
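For illustration, a minimal sketch of both styles; the endpoint URIs are assumptions:
// Static multicast: the recipients are hard coded in the route definition.
from("direct:start")
    .multicast().parallelProcessing()
        .to("http://server1/report", "http://server2/report") // assumed endpoints
    .end();

// Dynamic recipient list: the recipients are computed per message at runtime.
from("direct:start")
    .recipientList(method("recipientListBean", "route")) // the bean from the question
    .parallelProcessing();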
They will likely be equally efficient, if you don't have a lot of logic (say DB lookups) computing the endpoints in the recipient list - the invocation of the endpoints is likely by far the hardest part for Camel. Anyway - a static "multicast" is more readable.
In Camel, a reason for choosing one construct over another is often readability of the route. You should be able to look at a route and follow what it does (given you know EIP). At least, that's a good aim.