I'm using Camel to translate an H2 database into Java objects. It works! But now I don't know how to configure Camel to read each table entry only once, unless it has been updated since the last read.
I can't read a database with more than 1 million entries every 2 seconds, so I only want the new or updated entries.
That's my camel-route:
<route id="db-import-route">
  <from uri="timer://queryTimer?period=2s" />
  <setBody>
    <constant>SELECT * FROM address</constant>
  </setBody>
  <to uri="jdbc:dataSource" />
  <split>
    <simple>${body}</simple>
    <process ref="SqlDecoder" />
    <process ref="Validator" />
  </split>
</route>
Thanks a lot!
Related
When selecting a very large amount of data from a DB, I want to use Camel's Split component. It should split the stream into chunks of 100 objects and consume them as an iterator.
I successfully retrieved the data from the DB, but it fails during the split.
I'm getting the data from a <to> endpoint, but I don't know where to place this <to> element.
<routes xmlns="http://camel.apache.org/schema/spring">
  <route id="DbToKafka" streamCache="true">
    <from uri="timer:foo?repeatCount=1" />
    <split streaming="true">
      <tokenize token="{" />
      <to uri="sql:classpath:sql/get.sql" />
    </split>
    <log message="DATA : ${body}" />
    <marshal>
      <json library="Jackson" />
    </marshal>
    <log message="JSON : ${body}" />
    <to uri="kafka:test-topic?brokers={{kafka.brokers}}" />
    <log message="data sended : ${body}" />
  </route>
</routes>
2019-09-06 13:29:29.380 ERROR 10260 --- [3 - timer://foo]
o.a.camel.processor.DefaultErrorHandler : Failed delivery for
(MessageId: ID-DESKTOP-225HFIE-1567744166599-0-2 on ExchangeId:
ID-DESKTOP-225HFIE-1567744166599-0-1). Exhausted after delivery
attempt: 1 caught: java.lang.NullPointerException: source
Message History
---------------------------------------------------------------------------------------------------------
RouteId      ProcessorId  Processor                                                Elapsed (ms)
[DbToKafka ] [DbToKafka ] [timer://foo?repeatCount=1                             ] [         9]
[DbToKafka ] [split1    ] [split[tokenize{body() using token: ,}]                ] [         7]
If I run it without the split statement, I get this result:
[{"ROW_ID":"520","COM_CD_NM":"사용중"},{"ROW_ID":"521","COM_CD_NM":"메모지
수수 "},{"ROW_ID":"522","COM_CD_NM":"상호대화
"},{"ROW_ID":"523","COM_CD_NM":"물품수수
"},{"ROW_ID":"524","COM_CD_NM":"기타
"},{"ROW_ID":"525","COM_CD_NM":"자격증변조 "},...]
Welcome to Stack Overflow!
The error is because you are trying to split nothing. You need to retrieve the data before you go into the split, and then every operation you want to perform on the split data needs to be inside the split element.
I don't think it would work anyway: your marshalling would fail because the tokenizer removes the token {, so the pieces are no longer valid JSON.
Looking at Apache Camel with Json Array split will give you an example of how to do it. I suspect you need something like this; the marshalling may not be necessary, since you are splitting by JSONPath anyway.
<route id="DbToKafka" streamCache="true">
  <from uri="timer:foo?repeatCount=1" />
  <to uri="sql:classpath:sql/get.sql" />
  <split streaming="true">
    <jsonpath>$</jsonpath>
    <log message="DATA : ${body}" />
    <marshal>
      <json library="Jackson" />
    </marshal>
    <log message="JSON : ${body}" />
    <to uri="kafka:test-topic?brokers={{kafka.brokers}}" />
    <log message="data sended : ${body}" />
  </split>
</route>
How do I use the Camel File component consumer with multiple threads?
Meaning, I have this code:
<route id="incomingFile">
  <from
    uri="file://{{incomingFileBaseFolder}}?filter=#fileFilter&amp;recursive=true&amp;readLock=changed&amp;move=${file:parent}/.backup/${date:now:yyyy}/backup_${exchangeId}_${file:onlyname.noext}.${file:name.ext}&amp;sortBy=file:modified&amp;delay={{incomingFileDelay}}" />
  <transacted />
  <threads poolSize="10">
    <convertBodyTo type="java.lang.String" />
    <setHeader headerName="{{incoming_file_backup_date_header_name}}">
      <simple>$simple{date:now:yyyy}</simple>
    </setHeader>
    <bean ref="saveFile" method="duplicateCeck" />
    <to uri="direct:validateFileDirect" />
    <to uri="direct:inputFileContentHandle" />
  </threads>
</route>
but it does not seem to work on more than one file at a time.
How do I make that happen?
Remove <transacted/>, as it does not support asynchronous routing. Also, transactions only work with components/resources that support JTA transactions natively, which is typically only JMS and JDBC.
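A minimal sketch of the same route with <transacted/> removed so the <threads> block can hand files off to the pool (the endpoint options, bean names, and placeholders are the asker's own, abbreviated here):

```xml
<route id="incomingFile">
  <!-- file consumer options abbreviated; keep the full query string from the question -->
  <from uri="file://{{incomingFileBaseFolder}}?filter=#fileFilter&amp;readLock=changed&amp;delay={{incomingFileDelay}}" />
  <!-- no <transacted/>: it forces synchronous routing, pinning all work to the consumer thread -->
  <threads poolSize="10">
    <convertBodyTo type="java.lang.String" />
    <bean ref="saveFile" method="duplicateCeck" />
    <to uri="direct:validateFileDirect" />
    <to uri="direct:inputFileContentHandle" />
  </threads>
</route>
```

Note that the file consumer itself still polls single-threaded; the pool only parallelizes the processing after the handoff.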
I have a Camel route that looks something like the one below. If all records parse successfully, then I get an email from the onCompletion step. If one record gets an exception then the rest of the records will process, which is fine, but the onCompletion step does not fire.
What I'd like is for the onCompletion step to run even if there are errors and to be able to send a message saying "route completed with errors". How can I do this?
<route id="route1">
  <from uri="file://C:/TEMP/load?noop=true&amp;idempotentRepository=#sysoutStore&amp;sorter=#externalDataFilesSorter"/>
  <choice>
    <when>
      <simple>${file:name} regex '*file.*.(txt)'</simple>
      <to uri="direct:RouteFile" />
    </when>
  </choice>
</route>
<route id="testRouteDirect">
  <from uri="direct:RouteFile" />
  <onException>
    <exception>java.lang.IllegalArgumentException</exception>
    <redeliveryPolicy maximumRedeliveries="1" />
    <handled>
      <constant>true</constant>
    </handled>
    <to uri="log:java.lang.IllegalArgumentException"/>
  </onException>
  <onException>
    <exception>java.text.ParseException</exception>
    <redeliveryPolicy maximumRedeliveries="1" />
    <handled>
      <constant>true</constant>
    </handled>
    <to uri="log:java.text.ParseException"/>
  </onException>
  <split parallelProcessing="false" strategyRef="exchangePropertiesAggregatorStrategy">
    <tokenize token="\r\n"/>
    <to uri="log:Record"/>
  </split>
  <onCompletion>
    <to uri="log:completion"/>
    <to uri="smtp://mail.com?contentType=text/html&amp;to=done#test.com&amp;from=route#test.com&amp;subject=we're done" />
  </onCompletion>
</route>
The good part of your route is that you already have onException inside it with handled=true. So move your onCompletion to the parent route (route1) and it should work!
There are a bunch of tickets related to onCompletion on the Camel site: Camel Jira URL. I upgraded to a newer version of Camel and I don't get this issue any more.
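A sketch of that move, assuming route1 otherwise stays as posted (the smtp endpoint is elided; reuse the one from the original onCompletion block):

```xml
<route id="route1">
  <from uri="file://C:/TEMP/load?noop=true&amp;idempotentRepository=#sysoutStore&amp;sorter=#externalDataFilesSorter"/>
  <!-- onCompletion now lives on the parent route, so it fires once the whole exchange
       is done, even when the child route handled exceptions along the way -->
  <onCompletion>
    <to uri="log:completion"/>
    <!-- smtp endpoint as in the original route -->
  </onCompletion>
  <choice>
    <when>
      <simple>${file:name} regex '*file.*.(txt)'</simple>
      <to uri="direct:RouteFile" />
    </when>
  </choice>
</route>
```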
I have a route where I want Camel to visit the following beans:
First, loggingBean
Second, an aggregator that waits for a certain number of messages to aggregate on it
Third (once the aggregator's completionSize of 3 is reached), processorBean
And fourth/last, finalizerBean
Here is my route:
<route id="my-camel-route">
  <from uri="direct:starter" />
  <to uri="bean:loggingBean?method=shutdown" />
  <aggregate strategyRef="myAggregationStrategy" completionSize="3">
    <correlationExpression>
      <simple>${header.id} == 1</simple>
    </correlationExpression>
    <to uri="bean:processorBean?method=process" />
  </aggregate>
  <to uri="bean:finalizerBean?method=shutdown" />
</route>
My question: do I need to place finalizerBean inside the <aggregate> element like so:
<aggregate strategyRef="myAggregationStrategy" completionSize="3">
  <correlationExpression>
    <simple>${header.id} == 1</simple>
  </correlationExpression>
  <to uri="bean:processorBean?method=process" />
  <to uri="bean:finalizerBean?method=shutdown" />
</aggregate>
Basically, I'm wondering if the way I currently have things will prompt Camel to send the message to the aggregator, and then also send it on to finalizerBean (essentially, bypassing the aggregator). In my case, I want it to aggregate until completionSize is 3, and then send the aggregated exchange on to the processorBean and then finally finalizerBean.
Or have I configured this correctly? What's the difference between finalizerBean being inside the <aggregate> element vs being outside it?
The second example is correct.
<aggregate strategyRef="myAggregationStrategy" completionSize="3">
  <correlationExpression>
    <simple>${header.id} == 1</simple>
  </correlationExpression>
  <to uri="bean:processorBean?method=process" />
  <to uri="bean:finalizerBean?method=shutdown" />
</aggregate>
If finalizerBean is "outside" the <aggregate>, it will get executed for every message that comes from direct:starter - which isn't what you want ;)
I have a situation where I want to pass data into an Aggregator, but I don't want the aggregator to do anything until it has received messages from 3 distinct routes:
<route id="route-1">
  <from uri="direct:fizz" />
  <to uri="bean:bean1?method=process" />
  <setHeader headerName="id">
    <constant>1</constant>
  </setHeader>
  <to uri="direct:aggregator" />
</route>
<route id="route-2">
  <from uri="direct:buzz" />
  <to uri="bean:bean2?method=process" />
  <setHeader headerName="id">
    <constant>2</constant>
  </setHeader>
  <to uri="direct:aggregator" />
</route>
<route id="route-3">
  <from uri="direct:foo" />
  <to uri="bean:bean3?method=process" />
  <setHeader headerName="id">
    <constant>3</constant>
  </setHeader>
  <to uri="direct:aggregator" />
</route>
<route id="aggregator-route">
  <from uri="direct:aggregator" />
  <aggregate strategyRef="myAggregationStrategy" completionSize="1">
    <correlationExpression>
      <simple>header.id</simple>
    </correlationExpression>
    <to uri="bean:lastBean?method=process" />
  </aggregate>
</route>
The way this is configured, when the aggregator's completionSize is set to 1 or 2, the aggregated Exchange is routed on to my lastBean. However, if I set completionSize to 3, for some reason, lastBean#process never gets invoked.
I'm sure that I'm using header.id and the aggregator incorrectly here. In the correlationExpression, I just need to make sure that we have 1 Message from each of the 3 routes.
So my question: what do I need to do to make my aggregator "wait" until it has received 1 message from route-1, 1 message from route-2 and 1 message from route-3?
If you are correlating messages from three routes, they all need to carry a matching header.id value by the time they reach the aggregating route.
In your example, each route sets a different id value, so the messages never correlate into one group. If you set the id value to "1" in each route, I think it would start to work as expected.
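A sketch of that suggestion: every route stamps the same correlation value, and completionSize="3" then completes once three messages have arrived (one per route, assuming each fires once). The ${header.id} form of the correlation expression is used here; the plain header.id form from the question should behave the same:

```xml
<route id="route-1">
  <from uri="direct:fizz" />
  <to uri="bean:bean1?method=process" />
  <setHeader headerName="id">
    <constant>1</constant> <!-- same constant in route-2 and route-3 -->
  </setHeader>
  <to uri="direct:aggregator" />
</route>
<!-- route-2 and route-3 are identical apart from their from/bean endpoints,
     but all set the header "id" to the same constant 1 -->
<route id="aggregator-route">
  <from uri="direct:aggregator" />
  <aggregate strategyRef="myAggregationStrategy" completionSize="3">
    <correlationExpression>
      <simple>${header.id}</simple>
    </correlationExpression>
    <to uri="bean:lastBean?method=process" />
  </aggregate>
</route>
```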