I have a Camel route which reads from ActiveMQ and updates inventory, and I am trying to add a delayer to it as follows:
from("activemq:{{vs.inventory.queue.name}}")
.delay(200L)
.filter( body().isNotNull() )
But this doesn't work as expected (the delay is not 200 ms; instead an inconsistent delay seems to be applied each time).
I referred to http://camel.apache.org/delayer.html but couldn't find a working approach from it.
My question is: am I using this in the correct way, or is there something I am missing?
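For reference, here is a minimal sketch of how the Delayer is typically placed in a complete route; the "bean:updateInventory" endpoint is a placeholder of my own, not taken from the original route:
from("activemq:{{vs.inventory.queue.name}}")
    .filter(body().isNotNull())   // only non-null bodies continue
    .delay(200)                   // delay each matching exchange by 200 ms
    .to("bean:updateInventory");  // placeholder processing step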
Related
I am struggling to find a fully fledged example of how to use Apache Camel in Spring Boot framework for the purpose of a polling consumer.
I have looked at this: https://camel.apache.org/manual/latest/polling-consumer.html as well as this: https://camel.apache.org/components/latest/timer-component.html but the code examples are not wide enough for me to understand what it is that I need to do to accomplish my task in Java.
I'm typically a C# developer, so a lot of these small references to things don't make sense.
I am seeking an example of the following to do in Java including all the imports and other dependencies that are required to get this to work.
What I am trying to do is the following:
A web request is made to an endpoint, which should trigger the start of a polling consumer
The polling consumer needs to poll another web endpoint with a provided "ID" that is passed to the consumer at the time it is triggered.
The polling consumer should poll every X seconds (let's say 5 seconds).
Once a specific successful response is received from the endpoint we are polling, the consumer should stop polling and send a message to another web endpoint.
I would like to know if this is possible, and if so, can you provide a small example of everything that is needed to achieve this (as the documentation from the Camel website is extremely sparse in terms of imports and class structure etc.)?
After discussions with some fellow Java colleagues, they have assured me that this use case is not one that Camel is designed for. This is the reason it was so difficult to find anything on the internet before I posted this question.
For those seeking this answer via Google, the best suggested approach is to use a different tool or just use standard Java.
In my case, I ended up using a plain old Java thread to achieve what was required. Once the request is received, I simply start a new thread with a Runnable that handles checking the result from the other service, sleeps for X seconds, and terminates when the response is successful.
A simple example is below:
Runnable runner = new Runnable() {
    @Override
    public void run() {
        boolean cont = true;
        while (cont) {
            // returns true while we should keep polling,
            // false once the successful response has been received
            cont = getResponseFromServer();
            try {
                Thread.sleep(5000);
            } catch (InterruptedException e) {
                // we don't care about this, it just means this iteration didn't sleep
            }
        }
    }
};
new Thread(runner).start();
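A roughly equivalent sketch using a ScheduledExecutorService instead of a hand-rolled sleep loop (my own variation, assuming the same getResponseFromServer() helper as above):
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
scheduler.scheduleAtFixedRate(() -> {
    // getResponseFromServer() returns true while polling should continue
    if (!getResponseFromServer()) {
        scheduler.shutdown(); // stop polling once the successful response arrives
    }
}, 0, 5, TimeUnit.SECONDS);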
I have a problem while trying my hand at the Hello World example explained here.
Kindly note that I have just modified the HelloEntity.java file to be able to return something other than "Hello, World!". Most certainly my changes are taking time, and hence I am getting the timeout error below.
I am currently trying (doing a PoC) on a single node to understand the Lagom framework and do not have liberty to deploy multiple nodes.
I have also tried modifying the default lagom.circuit-breaker in application.conf with "call-timeout = 100s"; however, this does not seem to have helped.
Following is the exact error message for your reference:
{"name":"akka.pattern.AskTimeoutException: Ask timed out on [Actor[akka://hello-impl-application/system/sharding/HelloEntity#1074448247]] after [5000 ms]. Sender[null] sent message of type \"com.lightbend.lagom.javadsl.persistence.CommandEnvelope\".","detail":"akka.pattern.AskTimeoutException: Ask timed out on [Actor[akka://hello-impl-application/system/sharding/HelloEntity#1074448247]] after [5000 ms]. Sender[null] sent message of type \"com.lightbend.lagom.javadsl.persistence.CommandEnvelope\".\n\tat akka.pattern.PromiseActorRef$.$anonfun$defaultOnTimeout$1(AskSupport.scala:595)\n\tat akka.pattern.PromiseActorRef$.$anonfun$apply$1(AskSupport.scala:605)\n\tat akka.actor.Scheduler$$anon$4.run(Scheduler.scala:140)\n\tat scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:866)\n\tat scala.concurrent.BatchingExecutor.execute(BatchingExecutor.scala:109)\n\tat scala.concurrent.BatchingExecutor.execute$(BatchingExecutor.scala:103)\n\tat scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:864)\n\tat akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(LightArrayRevolverScheduler.scala:328)\n\tat akka.actor.LightArrayRevolverScheduler$$anon$4.executeBucket$1(LightArrayRevolverScheduler.scala:279)\n\tat akka.actor.LightArrayRevolverScheduler$$anon$4.nextTick(LightArrayRevolverScheduler.scala:283)\n\tat akka.actor.LightArrayRevolverScheduler$$anon$4.run(LightArrayRevolverScheduler.scala:235)\n\tat java.lang.Thread.run(Thread.java:748)\n"}
Question: Is there a way to increase the Akka timeout by modifying application.conf or any of the Java source files in the Hello World project? Can you please help me with the exact details?
Thanks in advance for your time and help.
The call timeout is the timeout for circuit breakers, which is configured using lagom.circuit-breaker.default.call-timeout. But that's not what is timing out above; the thing that is timing out is the request to your HelloEntity, and that timeout is configured using lagom.persistence.ask-timeout. The reason there's a timeout on requests to entities is that in a multi-node environment your entities are sharded across nodes, so an ask on them may go to another node, which is why a timeout is needed in case that node is not responding.
All that said, I don't think changing the ask-timeout will solve your problem. If you have a single node, then your entities should respond instantly if everything is working ok.
Is that the only error you're seeing in the logs?
Are you seeing this in devmode (ie, using the runAll command), or are you running the Lagom service some other way?
Is your database responding?
Thanks James for the help/pointer.
Adding the following lines to resources/application.conf did the trick for me:
lagom.persistence.ask-timeout = 30s

hello {
  ..
  ..
  call-timeout = 30s
  call-timeout = ${?CIRCUIT_BREAKER_CALL_TIMEOUT}
  ..
}
A Call is service-to-service communication, that is, a ServiceClient communicating with a remote server. It uses a circuit breaker. It is an extra-service call.
An ask (in the context of lagom.persistence) is sending a command to a persistent entity. That happens across the nodes inside your Lagom service. It does not use circuit breaking. It is an intra-service call.
I was wondering if anyone would know if I could use the watch service in a FileInboundChannelAdapter along with a LastModifiedFileListFilter?
The sample code below is giving me fairly inconsistent results. Sometimes the file just sits in the folder and remains unprocessed.
I suspect that the watch service might be incompatible with the LastModifiedFileListFilter. For example:
Suppose the LastModifiedFileListFilter is set to look for files at least 5 seconds old, and the poller is set to poll every 10 seconds.
At the 9th second, a file is created in the watched folder.
At 10 seconds, the poller queries the watch service to find out what changed in the past 10 seconds.
It finds the newly created file.
The newly created file was last modified only 1 second ago, so the filter does not pass it on for processing.
At 20 seconds, the poller queries the watch service a second time; this time it does not see the unprocessed file, as it was created more than 10 seconds ago.
Would anyone else have any experience with this? Would there be a recommended way to get around this issue and allow me to verify that the file has been fully written before proceeding?
@Bean
public IntegrationFlow ftpInputFileWatcher() {
    return IntegrationFlows.from(ftpInboundFolder(), filePoller())
            .handle()
            /*abbreviated*/
            .get();
}

private FileInboundChannelAdapterSpec ftpInboundFolder() {
    LastModifiedFileListFilter lastModifiedFileListFilter = new LastModifiedFileListFilter();
    lastModifiedFileListFilter.setAge(5);
    return Files.inboundAdapter(inboundFolder)
            .preventDuplicates(false)
            .useWatchService(true)
            .filter(fileAgeFilterToPreventPrematurePickup());
}

protected Consumer<SourcePollingChannelAdapterSpec> filePoller() {
    return poller -> poller.poller((Function<PollerFactory, PollerSpec>) p -> p.fixedRate(2000));
}
Thanks!
Yeah, that's a good catch!
Right, they are not compatible. The WatchService is event-based and stores files from the events into an internal queue. When the poller triggers its action, it polls files from that queue and applies its filters. Since the LastModifiedFileListFilter discards the file and there are no further events for it, we won't see that file again.
Please raise a JIRA on the matter and we'll think about how to address it.
Meanwhile, as a workaround, do not use the WatchService for this kind of logic.
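To make the workaround concrete, here is a minimal sketch of the adapter from the question with the WatchService dropped, so the directory is re-scanned on every poll and the LastModifiedFileListFilter keeps seeing the file until it is old enough (my own sketch, reusing the question's inboundFolder and fileAgeFilterToPreventPrematurePickup()):
private FileInboundChannelAdapterSpec ftpInboundFolder() {
    return Files.inboundAdapter(inboundFolder)
            .preventDuplicates(false)
            // useWatchService(true) removed: plain directory polling re-lists the folder on each poll
            .filter(fileAgeFilterToPreventPrematurePickup());
}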
I am following an example REST Service Task
I start my process engine using:
val configuration = new StandaloneProcessEngineConfiguration()
configuration.setProcessEngineName(processEngineName)
Here is my bpmn file snippet
<process id="approve-loan" name="Loan Approval" isExecutable="true">
<serviceTask id="process_task" activiti:class="com.noggin.bpm.loan.ProcessRequestDelegate" activiti:exclusive="true" name="compute
Task">
<extensionElements>
<activiti:connector>
<activiti:connectorId>http-connector</activiti:connectorId>
<activiti:inputOutput>
<activiti:inputParameter name="url">http://127.0.0.1:5004/Hello/sayhello</activiti:inputParameter>
<activiti:inputParameter name="method">POST</activiti:inputParameter>
<activiti:inputParameter name="headers">
<activiti:map>
<activiti:entry key="Accept">application/json</activiti:entry>
<activiti:entry key="Content-type">application/json</activiti:entry>
</activiti:map>
</activiti:inputParameter>
<activiti:inputParameter name="payload"><![CDATA[{"bundleId":"101","script":"def greet = {\n \"Hello World\"\n }\n greet()"}]]></activiti:inputParameter>
<activiti:outputParameter name="isActive">Result</activiti:outputParameter>
</activiti:inputOutput>
</activiti:connector>
</extensionElements>
I start the process like this
val processEngine = ProcessEngines.getProcessEngine(processEngineName)
val runtime = processEngine.getRuntimeService
val processInstance = runtime.startProcessInstanceByKey(processInstanceKey)
I am able to successfully send the payload to http://127.0.0.1:5004/Hello/sayhello.
My question is how to retrieve the response message from where I started the instance, since the response will be a JSON message which should be sent back to the process initiator.
I believe I saw a similar question from you posted to the Camunda forum yesterday.
Either way, I believe the question and answer is the same.
Let me make sure I understand what you are asking.
1. You are starting the instance using the Java API
2. Your process definition includes a single Service Task that makes a REST call.
3. Your JavaDelegate class populates the "Result" process variable with the response of the REST call.
4. You want to capture the response.
If I have captured your requirement, then I think the problem is in your understanding of how the BPMN engine works.
With the process as you have modeled it, the process instance will start, make the REST call, populate the Result variable, and then immediately end.
As you have currently modeled the process, you will not be able to capture the response during process execution.
Your options:
1. Change your model to either send the "Result" using a message service of some sort, or add a wait state where you can retrieve the response.
2. Use the Historical query REST API (or the equivalent Java API, sketched below) to retrieve the Result payload from the completed instance.
It really depends on your use case as to the most appropriate option to take.
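For option 2, a rough Java sketch using the engine's HistoryService; the variable name "Result" matches the output parameter in your BPMN, and this assumes the configured history level records variables:
// HistoryService / HistoricVariableInstance come from the engine's public API
// (org.activiti.engine or org.camunda.bpm.engine, depending on which engine you are on)
HistoryService historyService = processEngine.getHistoryService();
HistoricVariableInstance resultVar = historyService
        .createHistoricVariableInstanceQuery()
        .processInstanceId(processInstance.getId())
        .variableName("Result")
        .singleResult();
// the connector's response, as stored in the "Result" variable (null if not recorded)
Object response = (resultVar != null) ? resultVar.getValue() : null;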
Cheers,
Greg
I have spent a good few hours reading about Spring Integration, and today I started experimenting with the framework. There are aspects of how it works that I have trouble understanding despite all my reading. I hope somebody here can put me back on track.
I have the following channel and endpoint defined:
<in:channel id="orderSource"/>
<in:service-activator input-channel="orderSource"
ref="defaultOrderService"
method="placeOrder"/>
Since the channel is a DirectChannel I expect everything to happen within a single thread and get a return value at the end.
The placeOrder method look as follows:
@Override
public Order placeOrder(Order order) {
    return order;
}
In my main method I have:
MessageChannel input = context.getBean("orderSource", MessageChannel.class);
Message<Order> message = MessageBuilder.withPayload(new Order(123)).build();
MessagingTemplate messenger = new MessagingTemplate(input);
Message<?> result = messenger.sendAndReceive(message);
Object found = result.getPayload();
And this all works like a charm. The found object is the order the service activator sends back.
My problem starts when I want to notify a set of subscribers that the order was placed. For simplicity, let's do this synchronously, like this:
<in:channel id="orderSource"/>
<in:service-activator input-channel="orderSource"
output-channel="savedOrders"
ref="defaultOrderService"
method="validateOrder"/>
<in:publish-subscribe-channel id="savedOrders"/>
<in:outbound-channel-adapter channel="savedOrders"
ref="defaultOrderService"
method="notifyCustomerService"/>
<in:outbound-channel-adapter channel="savedOrders"
ref="defaultOrderService"
method="notifyShipmentManager"/>
<in:outbound-channel-adapter channel="savedOrders"
ref="defaultOrderService"
method="notifyWarehouseManager"/>
The question now is what should the input channel expect in return when I invoke sendAndReceive?
My current code blocks and I never reach the end of the main thread.
How can I make sure I receive a reply containing the result of the service activator as it passed it to all subscribers?
Also, I am really curious about what a given channel can expect in terms of return values when there are asynchronous channels in the flow. I'd like to get the result at the end of a transaction and before a new thread is spawned, but I don't know how to do that.
Any thoughts, advice or guidance?
Presumably, your "notify" methods return null. If that's the case, there's no "reply" sent to the MessagingTemplate.
Make the final one return the order, or add a <bridge/> to nowhere as a fourth subscriber to the pub-sub channel.
A bridge to nowhere is simply a bridge with no output channel. When a message arrives at an endpoint that produces a reply, and there is no output-channel, the message's replyChannel header is used to route the reply to the originator.
It works with async channels too, but I'd need to understand your requirements there before I can provide guidance.
Also, consider using a Messaging Gateway on the calling side instead of building a message yourself and using the MessagingTemplate. Rather than exposing your caller to the messaging infrastructure, the framework will create a proxy for you that will take care of all that and you just interact with the POJI.
I spent some more time reading and discovered that this is all a matter of configuring the reply channel either in the message or in the gateway; using a bridge, just as Gary Russell suggested, did the trick for me.
This is my code, now working:
<in:channel id="arrivals"/>
<in:service-activator input-channel="arrivals"
output-channel="validated"
ref="defaultOrderService"
method="validateOrder"/>
<in:channel id="validated"/>
<in:service-activator input-channel="validated"
output-channel="persisted"
ref="defaultOrderService"
method="placeOrder"/>
<in:publish-subscribe-channel id="persisted"/>
<in:channel id="replyChannel"/>
<in:bridge input-channel="persisted" output-channel="replyChannel"/>
<in:outbound-channel-adapter channel="persisted"
ref="defaultOrderService"
method="notifyCustomerService"/>
<in:outbound-channel-adapter channel="persisted"
ref="defaultOrderService"
method="notifyShipmentManager"/>
<in:outbound-channel-adapter channel="persisted"
ref="defaultOrderService"
method="notifyWarehouseManager"/>
<in:gateway id="orderService"
service-interface="codemasters.services.OrderService"
default-request-channel="arrivals"
default-reply-channel="replyChannel"/>
And using a gateway, this all looks much cooler now:
OrderService service = context.getBean("orderService", OrderService.class);
Order result = service.validateOrder(new Order(4321));
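For completeness, the gateway interface itself isn't shown above; a plausible minimal shape for it (my assumption) would be:
package codemasters.services;

// Hypothetical sketch of the gateway interface wired in the XML above.
// The framework proxies it: calling validateOrder(...) sends the Order to the
// "arrivals" channel and blocks until a reply arrives on "replyChannel".
public interface OrderService {
    Order validateOrder(Order order);
}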