Using EventScheduler does not trigger event handlers in Aggregate - java

When I trigger a scheduled event via a Command, I do not see the expected Event Handlers trigger. I am trying to isolate a one-time business transaction in a Saga while still allowing Aggregates to be event sourced so that state changes can be replayed.
I have configured the following SimpleEventScheduler:

@Bean
public SimpleEventScheduler simpleEventScheduler(EventBus eventBus) {
    return SimpleEventScheduler.builder()
            .eventBus(eventBus)
            .scheduledExecutorService(scheduledExecutorService())
            .build();
}

private ScheduledExecutorService scheduledExecutorService() {
    return Executors.unconfigurableScheduledExecutorService(Executors.newSingleThreadScheduledExecutor());
}
I have an aggregate modeled with a @CommandHandler constructor:

@CommandHandler
public Letter(ScheduleLetterCommand cmd, EventScheduler scheduler) {
    String id = cmd.getLetterId();
    log.info("Received schedule command for letter id {}", id);
    ScheduleToken scheduleToken = scheduler.schedule(Duration.ofSeconds(5), new BeginSendLetterEvent(id, LetterEventType.BEGIN_SEND));
    AggregateLifecycle.apply(new LetterScheduledEvent(id, LetterEventType.SCHEDULED, scheduleToken));
}
and two @EventSourcingHandler methods:

@EventSourcingHandler
public void on(BeginSendLetterEvent event) {
    log.info("Letter sending process started {} {}", event.getLetterId(), event.getEventType());
    scheduleToken = null;
}

@EventSourcingHandler
public void on(LetterSentEvent event) {
    log.info("Letter sent {} {}", event.getLetterId(), event.getEventType());
    this.sent = true;
}
I have a saga that does some 'business logic' when BeginSendLetterEvent is triggered and publishes LetterSentEvent.
@Saga
@Slf4j
public class LetterSchedulingSaga {

    private EventGateway eventGateway;

    public LetterSchedulingSaga() {
        // Axon requires empty constructor
    }

    @StartSaga
    @EndSaga
    @SagaEventHandler(associationProperty = "letterId")
    public void handle(BeginSendLetterEvent event) {
        log.info("Sending letter {}...", event.getLetterId());
        eventGateway.publish(new LetterSentEvent(event.getLetterId(), LetterEventType.SENT));
    }

    @Autowired
    public void setEventGateway(EventGateway eventGateway) {
        this.eventGateway = eventGateway;
    }
}
Here is my output:
com.flsh.web.LetterScheduler : Received request to schedule letter
com.flsh.web.LetterScheduler : Finished request to schedule letter
com.flsh.axon.Letter : Received schedule command for letter id b7338082-e0e1-4ba0-b137-c7ff92afe3a1
com.flsh.axon.Letter : LetterScheduledEvent b7338082-e0e1-4ba0-b137-c7ff92afe3a1 SCHEDULED
com.flsh.axon.LetterSchedulingSaga : Sending letter b7338082-e0e1-4ba0-b137-c7ff92afe3a1...
The thing is, I am not seeing the two event handlers above being triggered at all. Can someone see what I am doing wrong here? :) Any help would be appreciated...
If this is the wrong way to use Sagas and Event Handlers please let me know. I realize my rudimentary example doesn't facilitate a good domain model.

The short answer to your problem, @GoldFish, is that you are expecting to handle events in your Command Model.
An aggregate, in Axon terms, is a Command Handling Component, and as such is part of your Command Model when thinking about CQRS.
It handles command messages and validates whether the given operation (read: command) can be executed. If the outcome of that validation is "yes", that's when you end up publishing an event in the lifecycle of a given aggregate instance.
The @EventSourcingHandler annotated methods you can introduce into an aggregate are there to "source the aggregate instance based on its own events".
Having said that, you can anticipate that an aggregate will never handle events directly from any source other than its own.
The EventScheduler is just as much an external source of events as another aggregate's events would be when sourcing. Hence, they will be disregarded by the aggregate.
The EventScheduler will publish an event at a later stage, so that it might be handled by Event Handling Components, for example Saga instances.
If you want to schedule that something should occur for a specific aggregate or saga instance, you should have a look at the DeadlineManager instead.
Regardless, for what you're trying to achieve, which (I believe) is triggering an operation in your aggregate from a saga, you should use command messages, since the aggregate can only handle command messages.
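For illustration, here is a minimal sketch of both suggestions, assuming Axon 4 and reusing the question's types; SendLetterCommand, the aggregate's letterId field, and carrying a String deadline id in LetterScheduledEvent (instead of a ScheduleToken) are hypothetical adjustments:

// In the aggregate: schedule a deadline instead of an event.
@CommandHandler
public Letter(ScheduleLetterCommand cmd, DeadlineManager deadlineManager) {
    String deadlineId = deadlineManager.schedule(Duration.ofSeconds(5), "beginSendLetter");
    AggregateLifecycle.apply(new LetterScheduledEvent(cmd.getLetterId(), LetterEventType.SCHEDULED, deadlineId));
}

// Runs on the aggregate instance itself once the deadline elapses.
@DeadlineHandler(deadlineName = "beginSendLetter")
public void onBeginSendLetter() {
    AggregateLifecycle.apply(new BeginSendLetterEvent(letterId, LetterEventType.BEGIN_SEND));
}

// In the saga: dispatch a command back to the command model instead of publishing an event.
@SagaEventHandler(associationProperty = "letterId")
public void handle(BeginSendLetterEvent event) {
    commandGateway.send(new SendLetterCommand(event.getLetterId())); // gateway injected like the EventGateway above
}

The aggregate would then apply LetterSentEvent from its own @CommandHandler for SendLetterCommand, so the event stays within the aggregate's own lifecycle.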

Related

Using OTEL Java agent, how to create a new Context without using #WithSpan

The opentelemetry-javaagent-all agent (versions 0.17.0 and 1.0.1) has been the starting point for adding trace information to my Java application. Auto-instrumentation works great.
Some of my application cannot be auto-instrumented. For this part of the application, I began by adding @WithSpan annotations to interesting spots in the code.
I have now reached the limits of what seems possible with simple @WithSpan annotations. However, the framework underlying my app allows me to register callbacks to be invoked at certain points -- e.g. I can provide handlers that are notified when a client connects / disconnects.
What I think I need is to start a new Span when Foo.onConnect() is called, and set it be the parent for the Spans that correspond to each request.
public class Foo {

    void onConnect() {
        // called when a client connects to my app
        // Here I want to create a Span that will be the parent of the Span created in
        // Foo.processEachRequest().
    }

    @WithSpan
    public void processEachRequest() {
        // works, but since it is called for each request... each span is in a separate Trace
    }

    void onDisconnect() {
        // called when the client disconnects from my app
        // Here I can end the parent Span.
    }
}
Other ideas that didn't work out:
1 - The obvious solution would be to add @WithSpan annotations to the underlying framework. For various reasons, this is not going to be a practical way forward.
2 - Next choice might be to search for a way to tell the javaagent about methods in my underlying framework. (The New Relic agent can do something like this.) That doesn't seem to be a feature of the open-telemetry agent, today anyway.
So, I'm left with looking for a way to do this using the callbacks, as above.
Is there a way to do this?
That should be possible by manually instrumenting your code. You would use the Tracer interface of OpenTelemetry, as described in the OpenTelemetry Java docs.
This should give you a general idea:
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.StatusCode;
import io.opentelemetry.api.trace.Tracer;
import io.opentelemetry.context.Scope;

public class Foo {

    private Span parentSpan; // you might need a Map/List/Stack here

    void onConnect() {
        // assumes an OpenTelemetry instance is available, e.g. via GlobalOpenTelemetry.get()
        Tracer tracer =
            openTelemetry.getTracer("instrumentation-library-name", "1.0.0");
        Span span = tracer.spanBuilder("my span").startSpan();
        this.parentSpan = span; // might need to store the span per request/client/connection-id
    }

    public void processEachRequest() {
        final Span parent = this.lookupParentSpan();
        if (parent != null) {
            try (Scope scope = parent.makeCurrent()) {
                yourLogic();
            } catch (Throwable t) {
                parent.setStatus(StatusCode.ERROR, "error message");
                throw t;
            }
        } else {
            yourLogic();
        }
    }

    void onDisconnect() {
        final Span parent = this.lookupParentSpan();
        if (parent != null) {
            parent.end();
        }
    }

    private Span lookupParentSpan() {
        // you probably want to look up the span by client or connection id from a (weak) map
        return this.parentSpan;
    }
}
NB: You must guarantee that a span is always ended and does not leak. Make sure to properly scope your spans and eventually call Span#end().
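To make the per-connection lookup concrete, here is a minimal sketch of that (weak) map idea, assuming your framework hands you some connectionId; the class and method names are made up for illustration:

import io.opentelemetry.api.trace.Span;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ConnectionSpans {

    private final Map<String, Span> spans = new ConcurrentHashMap<>();

    void register(String connectionId, Span span) {
        spans.put(connectionId, span);
    }

    Span lookup(String connectionId) {
        return spans.get(connectionId);
    }

    void endAndRemove(String connectionId) {
        // Remove before ending so the entry cannot leak if end() throws.
        Span span = spans.remove(connectionId);
        if (span != null) {
            span.end();
        }
    }
}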

Log Correlation ID with Vertx [duplicate]

When logging in the multiple modules of Vert.x, it is a basic requirement that we should be able to correlate all the logs for a single request.
As Vert.x is asynchronous, what will be the best place to keep a log ID, conversation ID, and event ID?
Are there any solutions or patterns we can implement?
In a thread-based system, your current context is held by the current thread, thus MDC or any ThreadLocal would do.
In an actor based system such as Vertx, your context is the message, thus you have to add a correlation ID to every message you send.
For any handler/callback you have to pass it as method argument or reference a final method variable.
For sending messages over the event bus, you could either wrap your payload in a JsonObject and add the correlation id to the wrapper object
vertx.eventBus().send("someAddr",
        new JsonObject().put("correlationId", "someId")
                        .put("payload", yourPayload));
or you could add the correlation id as a header using DeliveryOptions
// send
vertx.eventBus().send("someAddr", "someMsg",
        new DeliveryOptions().addHeader("correlationId", "someId"));

// receive
vertx.eventBus().consumer("someAddr", msg -> {
    String correlationId = msg.headers().get("correlationId");
    ...
});
There are also more sophisticated options possible, such as using an Interceptor on the eventbus, which Emanuel Idi used to implement Zipkin support for Vert.x, https://github.com/emmanuelidi/vertx-zipkin, but I'm not sure about the current status of this integration.
There's a surprising lack of good answers published about this, which is odd, given how easy it is.
Assuming you set the correlationId in your MDC context on receipt of a request or message, the simplest way I've found to propagate it is to use interceptors to pass the value between contexts:
vertx.eventBus()
    .addInboundInterceptor(deliveryContext -> {
        MultiMap headers = deliveryContext.message().headers();
        if (headers.contains("correlationId")) {
            MDC.put("correlationId", headers.get("correlationId"));
        }
        deliveryContext.next(); // always continue delivery, with or without the header
    })
    .addOutboundInterceptor(deliveryContext -> {
        String correlationId = MDC.get("correlationId");
        if (correlationId != null) {
            deliveryContext.message().headers().add("correlationId", correlationId);
        }
        deliveryContext.next();
    });
If by multiple module you mean multiple verticles running on the same Vertx instance, you should be able to use a normal logging library such as SLF4J, Log4J, JUL, etc. You can then keep the logs in a directory of your choice, e.g. /var/logs/appName.
If, however, you mean how do you correlate logs between multiple instances of Vertx, then I'd suggest looking into GrayLog or similar applications for distributed/centralised logging. If you use a unique ID per request, you can pass that around and use it in the logs. Or depending on your authorization system, if you use unique tokens per request you can log those. The centralised logging system can be used to aggregate and filter logs based on that information.
The interceptor example presented by Clive Evans works great. I have added a more detailed example showing how this might work:
import io.vertx.core.AbstractVerticle;
import io.vertx.core.DeploymentOptions;
import io.vertx.core.MultiMap;
import io.vertx.core.Promise;
import io.vertx.core.Vertx;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

import java.time.Duration;
import java.util.UUID;

public class PublisherSubscriberInterceptor {

    private static final Logger LOG = LoggerFactory.getLogger(PublisherSubscriberInterceptor.class);

    public static final String ADDRESS = "sender.address";

    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        createInterceptors(vertx);
        vertx.deployVerticle(new Publisher());
        vertx.deployVerticle(new Subscriber1());
        // For our example, let's deploy Subscriber2 two times.
        vertx.deployVerticle(Subscriber2.class.getName(), new DeploymentOptions().setInstances(2));
    }

    private static void createInterceptors(Vertx vertx) {
        vertx.eventBus()
            .addInboundInterceptor(deliveryContext -> {
                MultiMap headers = deliveryContext.message().headers();
                if (headers.contains("myId")) {
                    MDC.put("myId", headers.get("myId"));
                }
                deliveryContext.next();
            })
            .addOutboundInterceptor(deliveryContext -> {
                deliveryContext.message().headers().add("myId", MDC.get("myId"));
                deliveryContext.next();
            });
    }

    public static class Publisher extends AbstractVerticle {
        @Override
        public void start(Promise<Void> startPromise) throws Exception {
            startPromise.complete();
            vertx.setPeriodic(Duration.ofSeconds(5).toMillis(), id -> {
                MDC.put("myId", UUID.randomUUID().toString());
                vertx.eventBus().publish(ADDRESS, "A message for all");
            });
        }
    }

    public static class Subscriber1 extends AbstractVerticle {
        private static final Logger LOG = LoggerFactory.getLogger(Subscriber1.class);

        @Override
        public void start(Promise<Void> startPromise) throws Exception {
            startPromise.complete();
            vertx.eventBus().consumer(ADDRESS, message -> {
                LOG.debug("Subscriber1 Received: {}", message.body());
            });
        }
    }

    public static class Subscriber2 extends AbstractVerticle {
        private static final Logger LOG = LoggerFactory.getLogger(Subscriber2.class);

        @Override
        public void start(Promise<Void> startPromise) throws Exception {
            startPromise.complete();
            vertx.eventBus().consumer(ADDRESS, message -> {
                LOG.debug("Subscriber2 Received: {}", message.body());
            });
        }
    }
}
You can see the example log output for publishing two messages:
13:37:14.315 [vert.x-eventloop-thread-3][myId=a2f0584c-9d4e-48a8-a724-a24ea12f7d80] DEBUG o.s.v.l.PublishSubscribeInterceptor$Subscriber2 - Subscriber2 Received: A message for all
13:37:14.315 [vert.x-eventloop-thread-1][myId=a2f0584c-9d4e-48a8-a724-a24ea12f7d80] DEBUG o.s.v.l.PublishSubscribeInterceptor$Subscriber1 - Subscriber1 Received: A message for all
13:37:14.315 [vert.x-eventloop-thread-4][myId=a2f0584c-9d4e-48a8-a724-a24ea12f7d80] DEBUG o.s.v.l.PublishSubscribeInterceptor$Subscriber2 - Subscriber2 Received: A message for all
13:37:19.295 [vert.x-eventloop-thread-1][myId=63b5839e-3b0b-43a5-b379-92bd1466b870] DEBUG o.s.v.l.PublishSubscribeInterceptor$Subscriber1 - Subscriber1 Received: A message for all
13:37:19.295 [vert.x-eventloop-thread-3][myId=63b5839e-3b0b-43a5-b379-92bd1466b870] DEBUG o.s.v.l.PublishSubscribeInterceptor$Subscriber2 - Subscriber2 Received: A message for all
13:37:19.295 [vert.x-eventloop-thread-4][myId=63b5839e-3b0b-43a5-b379-92bd1466b870] DEBUG o.s.v.l.PublishSubscribeInterceptor$Subscriber2 - Subscriber2 Received: A message for all
I'm surprised no one has mentioned the Reactiverse project Contextual logging for Eclipse Vert.x.
From their page:
In traditional Java development models (e.g. Spring or Java EE), the server implements a one-thread-per-request design. As a consequence, it is possible to store contextual data in ThreadLocal variables and use it when logging. Both logback and log4j2 name this Mapped Diagnostic Context (MDC).
Vert.x implements the reactor pattern. In practice, this means many concurrent requests can be handled by the same thread, thus preventing usage of ThreadLocals to store contextual data.
This project uses an alternative storage method for contextual data and makes it possible to have MDC logging in Vert.x applications.
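A rough usage sketch from memory of that project's README (treat the ContextualData class name and the %vcl pattern converter as assumptions to verify against the docs):

// Store the id once when the request arrives; the library propagates it
// across Vert.x async boundaries, unlike a plain ThreadLocal-backed MDC.
import io.reactiverse.contextual.logging.ContextualData;

void onRequest(String correlationId) {
    ContextualData.put("correlationId", correlationId);
    // Then reference it in the logback/log4j2 pattern, e.g. %vcl{correlationId}.
}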
Use vertx-sync and a ThreadLocal for the correlation ID. (i.e., a "FiberLocal"). Works great for me.

How to create Processor with Transaction and DLQ with Rabbit binding?

I'm just starting to learn Spring Cloud Stream and Data Flow, and I want to ask about one of the important use cases for me. I created an example processor, Multiplier, which takes a message and resends it 5 times to the output.
@EnableBinding(Processor.class)
public class MultiplierProcessor {

    @Autowired
    private Source source;

    private int repeats = 5;

    @Transactional
    @StreamListener(Processor.INPUT)
    public void handle(String payload) {
        for (int i = 0; i < repeats; i++) {
            if (i == 4) {
                throw new RuntimeException("EXCEPTION");
            }
            source.output().send(new GenericMessage<>(payload));
        }
    }
}
What you can see is that before the 5th send this processor crashes. Why? Because it can (programs throw exceptions). In this case I wanted to practice fault handling with Spring Cloud Stream.
What I would like to achieve is to have the input message placed in the DLQ and the 4 messages that were sent before to be rolled back and not consumed by the next component (just like in a normal JMS transaction). I have already tried to define the following properties in my processor project, but without success:
spring.cloud.stream.bindings.output.producer.autoBindDlq=true
spring.cloud.stream.bindings.output.producer.republishToDlq=true
spring.cloud.stream.bindings.output.producer.transacted=true
spring.cloud.stream.bindings.input.consumer.autoBindDlq=true
Could you tell me if it is possible, and also what am I doing wrong? I would be overwhelmingly thankful for some examples.
You have several issues with your configuration:
missing .rabbit in the rabbit-specific properties
you need a group name and durable subscription to use autoBindDlq
autoBindDlq doesn't apply on the output side
The consumer has to be transacted so that the producer sends are performed in the same transaction.
I just tested this with 1.0.2.RELEASE:
spring.cloud.stream.bindings.output.destination=so8400out
spring.cloud.stream.rabbit.bindings.output.producer.transacted=true
spring.cloud.stream.bindings.input.destination=so8400in
spring.cloud.stream.bindings.input.group=so8400
spring.cloud.stream.rabbit.bindings.input.consumer.durableSubscription=true
spring.cloud.stream.rabbit.bindings.input.consumer.autoBindDlq=true
spring.cloud.stream.rabbit.bindings.input.consumer.transacted=true
and it worked as expected.
EDIT
Actually, no, the published messages were not rolled back. Investigating...
EDIT2
OK; it does work, but you can't use republishToDlq - because when that is enabled, the binder publishes the failed message to the DLQ and the transaction is committed.
When that is false, the exception is thrown to the container, the transaction is rolled back, and RabbitMQ moves the failed message to the DLQ.
Note, however, that retry is enabled by default (3 attempts) so, if your processor succeeds during retry, you will get duplicates in your output.
For this to work as you want, you need to disable retry by setting the max attempts to 1 (and don't use republishToDlq).
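In property form (matching the input binding used here), disabling retry is the max-attempts setting that also appears in the EDIT3 property list below:

spring.cloud.stream.bindings.input.consumer.max-attempts=1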
EDIT3
OK, if you want more control over the publishing of the errors, this will work, when the fix for this JIRA is applied to Spring AMQP...
@SpringBootApplication
@EnableBinding({ Processor.class, So39018400Application.Errors.class })
public class So39018400Application {

    public static void main(String[] args) {
        SpringApplication.run(So39018400Application.class, args);
    }

    @Bean
    public Foo foo() {
        return new Foo();
    }

    public interface Errors {
        @Output("errors")
        MessageChannel errorChannel();
    }

    private static class Foo {

        @Autowired
        Source source;

        @Autowired
        Errors errors;

        @StreamListener(Processor.INPUT)
        public void handle(Message<byte[]> in) {
            try {
                source.output().send(new GenericMessage<>("foo"));
                source.output().send(new GenericMessage<>("foo"));
                throw new RuntimeException("foo");
            }
            catch (RuntimeException e) {
                errors.errorChannel().send(MessageBuilder.fromMessage(in)
                        .setHeader("foo", "bar") // add whatever you want, stack trace etc.
                        .build());
                throw e;
            }
        }
    }
}
with properties:
spring.cloud.stream.bindings.output.destination=so8400out
spring.cloud.stream.bindings.errors.destination=so8400errors
spring.cloud.stream.rabbit.bindings.errors.producer.transacted=false
spring.cloud.stream.rabbit.bindings.output.producer.transacted=true
spring.cloud.stream.bindings.input.destination=so8400in
spring.cloud.stream.bindings.input.group=so8400
spring.cloud.stream.rabbit.bindings.input.consumer.transacted=true
spring.cloud.stream.rabbit.bindings.input.consumer.requeue-rejected=false
spring.cloud.stream.bindings.input.consumer.max-attempts=1

Using a timing based PollingConsumer to a direct endpoint

Functionally, I wish to check that a URL is active before I consume from a JMS (WMQ) endpoint.
If the URL cannot be reached or returns a server error, then I do not want to pick up from the queue. So I want to keep trying the URL (with unlimited retries) via a polling consumer, so that as soon as it is available I can pick up from JMS.
I have a RouteBuilder set up with a direct endpoint, which is configured to run a Processor that will ping a service.
So:
public class PingRoute extends RouteBuilder {
    @Override
    public void configureCamel() {
        from("direct:pingRoute").routeId(PingRoute.class.getSimpleName())
            .process(new PingProcessor(url))
            .to("log://PingRoute?showAll=true");
    }
}
In another route I am setting up my timer:
@Override
public void configureCamel() {
    from(timerEndpoint).beanRef(PollingConsumerBean.class.getSimpleName(), "checkPingRoute");
    ...
}
And within the PollingConsumerBean I am attempting to receive the body via a consumer:

public void checkPingRoute() {
    // loop to check the consumer. Check we can carry on with the pick-up from the JMS queue.
    while (true) {
        Boolean pingAvailable = consumer.receiveBody("direct:pingRoute", Boolean.class);
        ...
    }
}
I add the route to the context and use a producer to send:
context.addRoutes(new PingRoute());
context.start();
producer.sendBody(TimerPollingRoute.TIMER_POLLING_ROUTE_ENDPOINT, "a body");
And I get the following IllegalArgumentException:
Cannot add a 2nd consumer to the same endpoint. Endpoint Endpoint[direct://pingRoute] only allows one consumer.
Is there a way to setup the direct route as a polling consumer?
The business logic is not quite clear, unfortunately. As I understand it, you need to wait for a response from the service. IMHO you have to use the Content Enricher EIP: http://camel.apache.org/content-enricher.html. pollEnrich is what you need in the timer route:
.pollEnrich("direct:waitForResponce", -1) or
.pollEnrich("seda:waitForResponce", -1)
public class PingRoute extends RouteBuilder {
    @Override
    public void configureCamel() {
        from("direct:pingRoute").routeId(PingRoute.class.getSimpleName())
            .process(new PingProcessor(url))
            .choice().when(body())
                .to("log://PingRoute?showAll=true")
                .to("direct:waitForResponce")
            .otherwise()
                .to("direct:pingRoute")
            .end();
    }
}
timer:
@Override
public void configureCamel() {
    from(timerEndpoint)
        .inOnly("direct:pingRoute")
        .pollEnrich("direct:waitForResponce", -1)
        ...
}
Based on the OP's clarification of their use case, they have several problems to solve:
Consume the message from the JMS queue if, and only if, the ping to the URL is positive.
If the URL is unresponsive, the JMS message should not disappear from the queue, and a retry must take place until the URL becomes responsive again, in which case the message will ultimately be consumed.
The OP has not specified whether the number of retries is limited or unlimited.
Based on this problem scenario, I suggest a redesign of their solution that leverages ActiveMQ retries, broker-side redelivery and JMS transactions in Camel to:
Return the message to the queue if the URL ping failed (via a transaction rollback).
Ensure that the message is not lost (by using JMS persistence and broker-side redeliveries; AMQ will durably schedule the retry cycle).
Be able to specify a sophisticated retry cycle per message, e.g. with exponential backoffs, maximum retries, etc.
Optionally send the message to a Dead Letter Queue if the retry cycle was exhausted without a positive result, so that some other (possibly manual) action can be planned.
Now, implementation-wise:
from("activemq:queue:abc?transacted=true") // (1)
.to("http4://host.endpoint.com/foo?method=GET") // (2) (3)
.process(new HandleSuccess()); // (4)
Comments:
Note the transacted flag.
If the HTTP invocation fails, the HTTP4 endpoint will raise an Exception.
Since there are no configured exception handlers, Camel will propagate the exception to the consumer endpoint (activemq) which will rollback the transaction.
If the invocation succeeded, the flow will continue and the exchange body will now contain the payload returned by the HTTP server and you can handle it in whichever way you wish. Here I'm using a processor.
Next, what's important is that you configure the redelivery policy in ActiveMQ, as well as enable broker-side redeliveries. You do that in your activemq.xml configuration file:
<plugins>
    <redeliveryPlugin fallbackToDeadLetter="true" sendToDlqIfMaxRetriesExceeded="true">
        <redeliveryPolicyMap>
            <redeliveryPolicyMap>
                <redeliveryPolicyEntries>
                    <redeliveryPolicy queue="my.queue"
                                      initialRedeliveryDelay="30000"
                                      maximumRedeliveries="17"
                                      maximumRedeliveryDelay="259200000"
                                      redeliveryDelay="30000"
                                      useExponentialBackOff="true"
                                      backOffMultiplier="2" />
                </redeliveryPolicyEntries>
            </redeliveryPolicyMap>
        </redeliveryPolicyMap>
    </redeliveryPlugin>
</plugins>
And make sure that the scheduler support is enabled in the top-level <broker /> element:
<broker xmlns="http://activemq.apache.org/schema/core"
        brokerName="mybroker"
        schedulerSupport="true">
    ...
</broker>
I hope that helps.
EDIT 1: The OP is using IBM WebSphere MQ as a broker; I missed that. You could use a JMS QueueBrowser to peek at messages and try their corresponding URLs before actually consuming a message, but it is not possible to selectively consume an individual message; that's not what MOM (messaging-oriented middleware) is about.
So I insist that you should explore JMS transactions, but rather than leaving it up to the broker to redeliver the message, you can start the pinging cycle to the URL within the TX body itself. With regards to Camel, you could implement it as follows:
from("jms:queue:myqueue?transacted=true")
.bean(new UrlPinger());
UrlPinger.java:
public class UrlPinger {

    @EndpointInject
    private ProducerTemplate template;

    private Pattern pattern = Pattern.compile("^(http(?:s)?)\\:");

    @Handler
    public void pingUrl(@Body String url, CamelContext context) throws InterruptedException {
        // Replace http(s): with http(s)4: to use the Camel HTTP4 endpoint.
        Matcher m = pattern.matcher(url);
        if (m.find()) {
            url = m.replaceFirst(m.group(1) + "4:");
        }
        // Try forever until the status code is 200.
        while (getStatusCode(url, context) != 200) {
            Thread.sleep(5000);
        }
    }

    private int getStatusCode(String url, CamelContext context) {
        Exchange response = template.request(url + "?method=GET&throwExceptionOnFailure=false", new Processor() {
            @Override public void process(Exchange exchange) throws Exception {
                // No body since this is a GET request.
                exchange.getIn().setBody(null);
            }
        });
        return response.getIn().getHeader(Exchange.HTTP_RESPONSE_CODE, Integer.class);
    }
}
Notes:
Note the throwExceptionOnFailure=false option. An Exception will not be raised, therefore the loop will execute until the condition is true.
Inside the bean, I'm looping forever until the HTTP status is 200. Of course, your logic will be different.
Between attempt and attempt, I'm sleeping 5000ms.
I'm assuming the URL to ping is in the body of the incoming JMS message. I'm replacing the leading http(s): with http(s)4: in order to use the Camel HTTP4 endpoint.
Performing the pinging inside the TX guarantees that the message will only be consumed once the ping condition is true (in this case HTTP status == 200).
You might want to introduce a desist condition (you don't want to keep trying forever). Maybe introduce some backoff to not overwhelm the other party.
If either Camel or the broker goes down within a retry cycle, the message will be automatically rolled back.
Take into account that JMS transactions are Session-bound, so if you want to start many concurrent consumers (concurrentConsumers JMS endpoint option), you'll need to set cacheLevelName=CACHE_NONE for each thread to use a different JMS Session.
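Putting those two options together on the consumer endpoint, that last point would look roughly like this (option names as mentioned above; the values are illustrative):

from("jms:queue:myqueue?transacted=true&concurrentConsumers=5&cacheLevelName=CACHE_NONE")
    .bean(new UrlPinger());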
I am having a bit of difficulty figuring out exactly what you want to do, but it appears to me that you want to consume data from an endpoint on an interval. For this, the best pattern is a polling consumer: http://camel.apache.org/polling-consumer.html
The error you are currently receiving is because you have two consumers both trying to read from "direct://pingRoute". If this was intended, you could change the direct to seda://pingRoute so that your data sits in an in-memory queue.
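For example, a rough sketch of that change, reusing the route and bean from the question:

from("seda:pingRoute").routeId(PingRoute.class.getSimpleName())
    .process(new PingProcessor(url))
    .to("log://PingRoute?showAll=true");

// and in PollingConsumerBean:
Boolean pingAvailable = consumer.receiveBody("seda:pingRoute", Boolean.class);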
All the answers here pointed me in the right direction, but I finally came up with a solution that managed to fit our code base and framework.
Firstly, I discovered there isn't a need for a bean to act as a polling consumer; a processor could be used instead:
@Override
public void configureCamel() {
    from("timer://fnzPoller?period=2000&delay=2000")
        .processRef(UrlPingProcessor.class.getSimpleName())
        .processRef(StopStartProcessor.class.getSimpleName())
        .to("log://TimerPollingRoute?showAll=true");
}
Then in the UrlPingProcessor there is a CXF service to ping the URL and check the response:
@Override
public void process(Exchange exchange) {
    try {
        // CXF service
        FnzPingServiceImpl fnzPingService = new FnzPingServiceImpl(url);
        fnzPingService.getPing();
    } catch (WebApplicationException e) {
        int responseCode = e.getResponse().getStatus();
        boolean isValidResponseCode = ResponseCodeUtil.isResponseCodeValid(responseCode);
        if (!isValidResponseCode) {
            // Sets a flag to stop for the StopStartProcessor
            stopRoute(exchange);
        }
    }
}
Then the StopStartProcessor uses an ExecutorService to stop or start a route via a new thread:
@Override
public void process(final Exchange exchange) {
    // routeBuilder is set on the constructor.
    final String routeId = routeBuilder.getClass().getSimpleName();
    Boolean stopRoute = ExchangeHeaderUtil.getHeader(exchange, Exchange.ROUTE_STOP, Boolean.class);
    boolean stopRoutePrim = BooleanUtils.isTrue(stopRoute);
    if (stopRoutePrim) {
        StopRouteThread stopRouteThread = new StopRouteThread(exchange, routeId);
        executorService.execute(stopRouteThread);
    } else {
        CamelContext context = exchange.getContext();
        Route route = context.getRoute(routeId);
        if (route == null) {
            try {
                context.addRoutes(routeBuilder);
            } catch (Exception e) {
                String msg = "Unable to add a route: " + routeBuilder;
                LOGGER.warn(msg, e);
            }
        }
    }
}

Does this program introduce a parallel execution?

Here is a simple server application using Bonjour and written in Java. The main part of the code is given here:
public class ServiceAnnouncer implements IServiceAnnouncer, RegisterListener {

    private DNSSDRegistration serviceRecord;
    private boolean registered;

    public boolean isRegistered() {
        return registered;
    }

    public void registerService() {
        try {
            serviceRecord = DNSSD.register(0, 0, null, "_killerapp._tcp", null, null, 1234, null, this);
        } catch (DNSSDException e) {
            // error handling here
        }
    }

    public void unregisterService() {
        serviceRecord.stop();
        registered = false;
    }

    public void serviceRegistered(DNSSDRegistration registration, int flags, String serviceName, String regType, String domain) {
        registered = true;
    }

    public void operationFailed(DNSSDService registration, int error) {
        // do error handling here if you want to.
    }
}
I understand it in the following way. We can try to register a service by calling the "registerService" method, which, in its turn, calls the "DNSSD.register" method. "DNSSD.register" tries to register the service and, in the general case, it can end up with one of two results: the service was successfully registered, or the registration failed. In both cases "DNSSD.register" calls a corresponding method (either "serviceRegistered" or "operationFailed") of the object which was given to DNSSD.register as the last argument, and the programmer decides what to put into "serviceRegistered" and "operationFailed". That much is clear.
But should I try to register the service from within "operationFailed"? I am afraid that this way my application will try to register the service too frequently. Should I put some "sleep" or "pause" into "operationFailed"? In any case, it seems to me that when the application is unable to register a service, it will also be unable to do anything else (for example, take care of the GUI). Or maybe DNSSD.register introduces some kind of parallelism? I mean, if it starts a new thread, then by retrying registration from "operationFailed" I could generate a huge number of threads. Can that happen? If so, is it a problem, and how can I resolve it?
Yes, callbacks from the DNSSD APIs can come asynchronously from another thread. This excerpt from the O'Reilly book on ZeroConf networking gives some useful information.
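One practical consequence: since serviceRegistered and operationFailed can run on a different thread than the one reading isRegistered(), the registered flag in the question's code should at minimum be volatile:

private volatile boolean registered; // safe cross-thread visibility for the callback updates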
I'm not sure retrying the registration from your operationFailed callback is a good idea. At least without some understanding of why the registration failed, is simply retrying it with the same parameters going to make sense?
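If you do decide to retry, scheduling the retry on a single shared scheduler with a backoff avoids the thread explosion the question worries about. A minimal sketch (the delay values and cap are arbitrary assumptions):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class RetryingAnnouncer {

    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
    private volatile long delaySeconds = 1;

    public void operationFailed(DNSSDService registration, int error) {
        // Re-schedule on one shared thread instead of retrying inline,
        // so repeated failures never pile up new threads.
        scheduler.schedule(this::registerService, delaySeconds, TimeUnit.SECONDS);
        delaySeconds = Math.min(delaySeconds * 2, 60); // exponential backoff, capped
    }

    private void registerService() {
        // call DNSSD.register(...) exactly as in the question
    }
}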
