How to create a Processor with transactions and a DLQ with the Rabbit binding?

I'm just starting to learn Spring Cloud Stream and Data Flow, and I want to work through one of the use cases that is important to me. I created an example processor, Multiplier, which takes a message and resends it 5 times to the output.
@EnableBinding(Processor.class)
public class MultiplierProcessor {

    @Autowired
    private Source source;

    private int repeats = 5;

    @Transactional
    @StreamListener(Processor.INPUT)
    public void handle(String payload) {
        for (int i = 0; i < repeats; i++) {
            if (i == 4) {
                throw new RuntimeException("EXCEPTION");
            }
            source.output().send(new GenericMessage<>(payload));
        }
    }
}
As you can see, before the 5th send this processor crashes. Why? Because it can (programs throw exceptions). In this case I wanted to practice fault handling with Spring Cloud Stream.
What I would like to achieve is to have the input message placed in the DLQ and the 4 messages that were sent before the failure rolled back and not consumed by the next operand (just like in a normal JMS transaction). I already tried to define the following properties in my processor project, but without success.
spring.cloud.stream.bindings.output.producer.autoBindDlq=true
spring.cloud.stream.bindings.output.producer.republishToDlq=true
spring.cloud.stream.bindings.output.producer.transacted=true
spring.cloud.stream.bindings.input.consumer.autoBindDlq=true
Could you tell me whether this is possible, and what I am doing wrong? I would be very thankful for some examples.

You have several issues with your configuration:
missing .rabbit in the rabbit-specific properties
you need a group name and durable subscription to use autoBindDlq
autoBindDlq doesn't apply on the output side
The consumer has to be transacted so that the producer sends are performed in the same transaction.
I just tested this with 1.0.2.RELEASE:
spring.cloud.stream.bindings.output.destination=so8400out
spring.cloud.stream.rabbit.bindings.output.producer.transacted=true
spring.cloud.stream.bindings.input.destination=so8400in
spring.cloud.stream.bindings.input.group=so8400
spring.cloud.stream.rabbit.bindings.input.consumer.durableSubscription=true
spring.cloud.stream.rabbit.bindings.input.consumer.autoBindDlq=true
spring.cloud.stream.rabbit.bindings.input.consumer.transacted=true
and it worked as expected.
EDIT
Actually, no, the published messages were not rolled back. Investigating...
EDIT2
OK; it does work, but you can't use republishToDlq - because when that is enabled, the binder publishes the failed message to the DLQ and the transaction is committed.
When that is false, the exception is thrown to the container, the transaction is rolled back, and RabbitMQ moves the failed message to the DLQ.
Note, however, that retry is enabled by default (3 attempts) so, if your processor succeeds during retry, you will get duplicates in your output.
For this to work as you want, you need to disable retry by setting the max attempts to 1 (and don't use republishToDlq).
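For example, with the binding names used in the properties above, disabling the binder retry is just the standard max-attempts consumer property (the same line that appears in the full property set further down):
spring.cloud.stream.bindings.input.consumer.max-attempts=1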
EDIT3
OK, if you want more control over the publishing of the errors, this will work, when the fix for this JIRA is applied to Spring AMQP...
@SpringBootApplication
@EnableBinding({ Processor.class, So39018400Application.Errors.class })
public class So39018400Application {

    public static void main(String[] args) {
        SpringApplication.run(So39018400Application.class, args);
    }

    @Bean
    public Foo foo() {
        return new Foo();
    }

    public interface Errors {

        @Output("errors")
        MessageChannel errorChannel();

    }

    private static class Foo {

        @Autowired
        Source source;

        @Autowired
        Errors errors;

        @StreamListener(Processor.INPUT)
        public void handle(Message<byte[]> in) {
            try {
                source.output().send(new GenericMessage<>("foo"));
                source.output().send(new GenericMessage<>("foo"));
                throw new RuntimeException("foo");
            }
            catch (RuntimeException e) {
                errors.errorChannel().send(MessageBuilder.fromMessage(in)
                        .setHeader("foo", "bar") // add whatever you want, stack trace etc.
                        .build());
                throw e;
            }
        }

    }

}
with properties:
spring.cloud.stream.bindings.output.destination=so8400out
spring.cloud.stream.bindings.errors.destination=so8400errors
spring.cloud.stream.rabbit.bindings.errors.producer.transacted=false
spring.cloud.stream.rabbit.bindings.output.producer.transacted=true
spring.cloud.stream.bindings.input.destination=so8400in
spring.cloud.stream.bindings.input.group=so8400
spring.cloud.stream.rabbit.bindings.input.consumer.transacted=true
spring.cloud.stream.rabbit.bindings.input.consumer.requeue-rejected=false
spring.cloud.stream.bindings.input.consumer.max-attempts=1

Related

Republish message to same queue with updated headers after automatic nack in Spring AMQP

I am trying to configure my Spring AMQP ListenerContainer to allow for a certain type of retry flow that's backwards compatible with a custom rabbit client previously used in the project I'm working on.
The protocol works as follows:
A message is received on a channel.
If processing fails, the message is nacked with the requeue flag set to false
A copy of the message with additional/updated headers (a retry counter) is published to the same queue
The headers are used for filtering incoming messages, but that's not important here.
I would like the behaviour to happen on an opt-in basis, so that more standardised Spring retry flows can be used in cases where compatibility with the old client isn't a concern, and the listeners should be able to work without requiring manual acking.
I have implemented a working solution, which I'll get back to below. Where I'm struggling is to publish the new message after signalling to the container that it should nack the current message, because I can't really find any good hooks after the nack or before the next message.
Reading the documentation, it feels like I'm looking for something analogous to the behaviour of RepublishMessageRecoverer used as the final step of a retry interceptor. The main difference in my case is that I need to republish immediately on failure, not as a final recovery step. I tried to look at the implementation of RepublishMessageRecoverer, but the many layers of indirection made it hard for me to understand where the republishing is triggered, and whether a nack happens before that.
My working implementation looks as follows. Note that I'm using a ThrowsAdvice, but I think an error handler could also be used with nearly identical logic.
/*
 * MyConfig.class, configuring the container factory
 */
@Configuration
public class MyConfig {

    @Bean
    // NB: bean name is important, overwrites autoconfigured bean
    public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(
            ConnectionFactory connectionFactory,
            Jackson2JsonMessageConverter messageConverter,
            RabbitTemplate rabbitTemplate
    ) {
        SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
        factory.setConnectionFactory(connectionFactory);
        factory.setMessageConverter(messageConverter);
        // AOP
        var a1 = new CustomHeaderInspectionAdvice();
        var a2 = new MyThrowsAdvice(rabbitTemplate);
        Advice[] adviceChain = {a1, a2};
        factory.setAdviceChain(adviceChain);
        return factory;
    }
}
/*
 * MyThrowsAdvice.class, hooking into the exception flow from the listener
 */
public class MyThrowsAdvice implements ThrowsAdvice {

    private static final Logger logger = LoggerFactory.getLogger(MyThrowsAdvice.class);

    private final AmqpTemplate amqpTemplate;

    public MyThrowsAdvice(AmqpTemplate amqpTemplate) {
        this.amqpTemplate = amqpTemplate;
    }

    public void afterThrowing(Method method, Object[] args, Object target, ListenerExecutionFailedException ex) {
        var message = message(args);
        var cause = ex.getCause();
        // opt-in to old protocol by throwing an instance of BusinessException in business logic
        if (cause instanceof BusinessException) {
            /*
             * NB: Since we want to trigger execution after the current method fails
             * with an exception we need to schedule it in another thread and delay
             * execution until the nack has happened.
             */
            new Thread(() -> {
                try {
                    Thread.sleep(1000L);
                    var messageProperties = message.getMessageProperties();
                    var count = getCount(messageProperties);
                    messageProperties.setHeader("xb-count", count + 1);
                    var routingKey = messageProperties.getReceivedRoutingKey();
                    var exchange = messageProperties.getReceivedExchange();
                    amqpTemplate.send(exchange, routingKey, message);
                    logger.info("Sent!");
                } catch (InterruptedException e) {
                    logger.error("Sleep interrupted", e);
                }
            }).start();
            // NB: Produce the desired nack.
            throw new AmqpRejectAndDontRequeueException("Business logic exception, message will be re-queued with updated headers", cause);
        }
    }

    private static long getCount(MessageProperties messageProperties) {
        try {
            Long c = messageProperties.getHeader("xb-count");
            return c == null ? 0 : c;
        } catch (Exception e) {
            return 0;
        }
    }

    private static Message message(Object[] args) {
        try {
            return (Message) args[1];
        } catch (Exception e) {
            logger.info("Bad cast parse", e);
            throw new AmqpRejectAndDontRequeueException(e);
        }
    }
}
Now, as you can imagine, I'm not particularly pleased with the indeterminism of scheduling a new thread with a delay.
So my question is simply: is there any way I could produce a deterministic solution to my problem using the provided hooks of the ListenerContainer?
Your current solution risks message loss, since you are publishing on a different thread after a delay; if the server crashes during that delay, the message is lost.
It would be better to publish immediately to another queue with a TTL and dead-letter configuration to republish the expired message back to the original queue.
Using the RepublishMessageRecoverer with retry configured for maxAttempts=1 should do what you need.
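As a rough sketch of that suggestion with Spring AMQP (the queue names and the 30-second delay are illustrative, not from the question): declare a retry queue whose expired messages are dead-lettered back to the original queue.
import org.springframework.amqp.core.Queue;
import org.springframework.amqp.core.QueueBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RetryQueueConfig {

    // Hypothetical retry queue: messages wait here for 30 seconds, then are
    // dead-lettered via the default exchange back to the original queue.
    @Bean
    public Queue retryQueue() {
        return QueueBuilder.durable("my.queue.retry")
                .withArgument("x-message-ttl", 30000)
                .withArgument("x-dead-letter-exchange", "")
                .withArgument("x-dead-letter-routing-key", "my.queue")
                .build();
    }
}
The afterThrowing advice could then send the copy (with the updated headers) to my.queue.retry on the listener thread, before throwing AmqpRejectAndDontRequeueException, instead of spawning a delayed thread.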

Resilience4j circuit breaker does not switch back to closed state from half open after the downstream system is up again

I have two micro-services, order-service and inventory-service. The order-service makes a call to inventory-service to check if the ordered items are in stock. An order is placed only if all the items in the order request are in stock. If the inventory-service is down or slow, the circuit breaker triggers. The case of an order not being placed because of a missing item is not a failure case for the circuit breaker. This works as intended, but only as long as the circuit has never switched from the closed state to the half-open state.
What I mean by that is if more than 5 consecutive orders cannot be placed because of a missing item, the circuit does not switch to open state. This is as expected. If the inventory-service is brought down, 3 more failed requests cause the circuit to move to open and subsequently to half open state. This also is as expected. However when the inventory-service comes up again, and the only requests that are made are requests containing one or more items not in stock, the circuit remains in half_open state continuously. This is not ok. A missing item in an order is a success case and should increment the successful buffered call count, but it doesn't. Looking at the actuator info, it looks like these calls are not counted either as failure or as success cases.
What am I doing wrong?
Note -- if I make a sufficient number of calls where the order gets placed, then the circuit switches to closed again. That's OK, but shouldn't the case of an ignored exception count as a success even if the only calls being made are those with one or more missing items?
Following are the properties of my circuit breaker in the calling microservice.
resilience4j.circuitbreaker.instances.inventory.register-health-indicator=true
resilience4j.circuitbreaker.instances.inventory.event-consumer-buffer-size=10
resilience4j.circuitbreaker.instances.inventory.sliding-window-type=COUNT_BASED
resilience4j.circuitbreaker.instances.inventory.sliding-window-size=5
resilience4j.circuitbreaker.instances.inventory.failure-rate-threshold=50
resilience4j.circuitbreaker.instances.inventory.wait-duration-in-open-state=5s
resilience4j.circuitbreaker.instances.inventory.permitted-number-of-calls-in-half-open-state=3
resilience4j.circuitbreaker.instances.inventory.automatic-transition-from-open-to-half-open-enabled=true
resilience4j.circuitbreaker.instances.inventory.ignore-exceptions=com.mayrevision.orderservice.exception.OrderItemNotFoundException
I have a custom exception handler for OrderItemNotFoundException.
@ControllerAdvice
@ResponseStatus
public class OrderItemNotFoundExceptionHandler extends ResponseEntityExceptionHandler {

    @ExceptionHandler(OrderItemNotFoundException.class)
    public ResponseEntity<ErrorResponse> getErrorMessage(OrderItemNotFoundException exception) {
        ErrorResponse response = new ErrorResponse(HttpStatus.NOT_ACCEPTABLE, exception.getMessage());
        return ResponseEntity.status(HttpStatus.NOT_ACCEPTABLE).body(response);
    }
}
The controller code is --
@PostMapping
@ResponseStatus(HttpStatus.CREATED)
@CircuitBreaker(name = "inventory", fallbackMethod = "fallBackMethod")
public String placeOrder(@RequestBody OrderRequest orderRequest) throws OrderItemNotFoundException {
    orderService.placeOrder(orderRequest);
    return "Order placed successfully";
}

public String fallBackMethod(OrderRequest orderRequest, RuntimeException runtimeException) {
    return "The order could not be placed. Please try back after some time.";
}
Edit -- Edited resilience4j.circuitbreaker.instances.inventory.ignore-exceptions[0]=com.mayrevision.orderservice.exception.OrderItemNotFoundException to resilience4j.circuitbreaker.instances.inventory.ignore-exceptions=com.mayrevision.orderservice.exception.OrderItemNotFoundException in application.properties.
Looks like this is how the exception mechanism in Resilience4j is designed to work. If I want the exception to be treated as a success in all cases (including the case mentioned above), I should catch it. So I changed the code as follows --
@PostMapping
@CircuitBreaker(name = "inventory", fallbackMethod = "fallBackMethod")
public CompletableFuture<ResponseEntity<StatusResponse>> placeOrder(@RequestBody OrderRequest orderRequest) {
    return CompletableFuture.supplyAsync(() -> {
        try {
            String message = orderService.placeOrder(orderRequest);
            StatusResponse response = new StatusResponse(HttpStatus.CREATED, message);
            return ResponseEntity.status(HttpStatus.CREATED).body(response);
        } catch (OrderItemNotFoundException e) {
            return ResponseEntity.status(HttpStatus.NOT_ACCEPTABLE)
                    .body(new StatusResponse(HttpStatus.NOT_ACCEPTABLE, e.getMessage()));
        }
    });
}

public CompletableFuture<ResponseEntity<StatusResponse>> fallBackMethod(OrderRequest orderRequest, RuntimeException e) {
    return CompletableFuture.supplyAsync(() -> {
        //some logic
    });
}
This is working close to expected.

Spring Boot: Kafka health indicator

I have something like the code below, which works well, but I would prefer to check health without sending any message (and not only checking the socket connection). I know Kafka has something like a KafkaHealthIndicator out of the box; does someone have experience or an example using it?
public class KafkaHealthIndicator implements HealthIndicator {

    private final Logger log = LoggerFactory.getLogger(KafkaHealthIndicator.class);

    private KafkaTemplate<String, String> kafka;

    public KafkaHealthIndicator(KafkaTemplate<String, String> kafka) {
        this.kafka = kafka;
    }

    @Override
    public Health health() {
        try {
            kafka.send("kafka-health-indicator", "❥").get(100, TimeUnit.MILLISECONDS);
        } catch (InterruptedException | ExecutionException | TimeoutException e) {
            return Health.down(e).build();
        }
        return Health.up().build();
    }
}
In order to trip the health indicator, retrieve data from one of the future objects; otherwise the indicator is UP even when Kafka is down!
When Kafka is not connected, future.get() throws an exception, which in turn sets the indicator DOWN.
@Configuration
public class KafkaConfig {

    @Autowired
    private KafkaAdmin kafkaAdmin;

    @Bean
    public AdminClient kafkaAdminClient() {
        return AdminClient.create(kafkaAdmin.getConfigurationProperties());
    }

    @Bean
    public HealthIndicator kafkaHealthIndicator(AdminClient kafkaAdminClient) {
        final DescribeClusterOptions options = new DescribeClusterOptions()
                .timeoutMs(1000);
        return new AbstractHealthIndicator() {
            @Override
            protected void doHealthCheck(Health.Builder builder) throws Exception {
                DescribeClusterResult clusterDescription = kafkaAdminClient.describeCluster(options);
                // In order to trip the health indicator DOWN, retrieve data from one of
                // the future objects; otherwise the indicator is UP even when Kafka is down!
                // When Kafka is not connected, future.get() throws an exception, which
                // in turn sets the indicator DOWN.
                clusterDescription.clusterId().get();
                // or clusterDescription.nodes().get().size()
                // or clusterDescription.controller().get();
                builder.up().build();
                // Alternatively directly use data from the futures in the health detail.
                builder.up()
                        .withDetail("clusterId", clusterDescription.clusterId().get())
                        .withDetail("nodeCount", clusterDescription.nodes().get().size())
                        .build();
            }
        };
    }
}
Use the AdminClient API to check the health of the cluster by describing the cluster and/or the topic(s) you'll be interacting with, and verifying those topics have the required number of in-sync replicas, for example (see the sketch below).
Kafka has something like KafkaHealthIndicator out of the box
It doesn't. Spring's Kafka integration might.
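For illustration, a sketch along the lines of the first suggestion (it reuses the AdminClient bean from the configuration above; the topic name is a placeholder): describe a topic and report DOWN if any partition has fewer in-sync replicas than configured replicas.
@Bean
public HealthIndicator kafkaTopicHealthIndicator(AdminClient kafkaAdminClient) {
    return new AbstractHealthIndicator() {
        @Override
        protected void doHealthCheck(Health.Builder builder) throws Exception {
            TopicDescription topic = kafkaAdminClient
                    .describeTopics(Collections.singleton("kafka-health-indicator"))
                    .all()
                    .get(1, TimeUnit.SECONDS)
                    .get("kafka-health-indicator");
            boolean underReplicated = topic.partitions().stream()
                    .anyMatch(p -> p.isr().size() < p.replicas().size());
            if (underReplicated) {
                builder.down().withDetail("topic", topic.name()).withDetail("underReplicated", true);
            } else {
                builder.up().withDetail("topic", topic.name());
            }
        }
    };
}
(TopicDescription and Collections come from org.apache.kafka.clients.admin and java.util respectively.)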

Is there a way to stream a log to a Spring Boot application?

I have a Spring Boot app that is used as an event logger. Each client sends different events via a REST API, which are then saved in a database. But apart from simple events, I need the clients to also send their execution logs to Spring Boot.
Now, uploading a log after a client finishes executing is easy, and there are plenty of examples for it out there. What I need is to stream the log as the client is executing, line by line, and not wait until the client has finished.
I've spent quite some time googling for a possible answer and I couldn't find anything that fits my needs. Any advice how to do this using Spring Boot (future releases included)? Is it feasible?
I see a couple of possibilities here. First, consider using a logback (the default Spring Boot logging implementation) SocketAppender or ServerSocketAppender in your client. See: https://logback.qos.ch/manual/appenders.html. This would let you send log messages to any logging service.
But I might suggest that you not log to your Spring Boot Event App, as I suspect that will add complexity to your app unnecessarily, and I can see a situation where some bug in the Event App causes clients to log a bunch of errors, which in turn all go back to the Event App, making it difficult to determine the initial error.
What I would respectfully suggest is that you instead log to a logging server - logstash (https://www.elastic.co/products/logstash), for example - or, if you already have a db that you are saving the events to, then maybe use the logback DBAppender and write the logs directly to the db.
I wrote here an example of how to stream file updates from a Spring Boot endpoint. The only difference is that the code uses the Java WatchService API to trigger file updates on a given file.
However, in your situation, I would instead have the log appender send messages directly to the connected clients (with SSE - calling template.broadcast from there) rather than watching for changes as I described.
The endpoint:
@GetMapping(path = "/logs", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
public SseEmitter streamSseMvc() {
    return sseService.newSseEmitter();
}
The service:
public class LogsSseService {

    private static final Logger log = LoggerFactory.getLogger(LogsSseService.class);

    private static final String TOPIC = "logs";

    private static final AtomicLong COUNTER = new AtomicLong(0);

    private final SseTemplate template;

    public LogsSseService(SseTemplate template, MonitoringFileService monitoringFileService) {
        this.template = template;

        monitoringFileService.listen(file -> {
            try {
                Files.lines(file)
                        .skip(COUNTER.get())
                        .forEach(line ->
                                template.broadcast(TOPIC, SseEmitter.event()
                                        .id(String.valueOf(COUNTER.incrementAndGet()))
                                        .data(line)));
            } catch (IOException e) {
                e.printStackTrace();
            }
        });
    }

    public SseEmitter newSseEmitter() {
        return template.newSseEmitter(TOPIC);
    }
}
The custom appender (which you have to add to your logger - check here):
public class StreamAppender extends UnsynchronizedAppenderBase<ILoggingEvent> implements SmartLifecycle {

    public static final String TOPIC = "logs";

    private final SseTemplate template;

    public StreamAppender(SseTemplate template) {
        this.template = template;
    }

    @Override
    protected void append(ILoggingEvent event) {
        template.broadcast(TOPIC, SseEmitter.event()
                .id(event.getThreadName())
                .name("log")
                .data(event.getFormattedMessage()));
    }

    @Override
    public boolean isRunning() {
        return isStarted();
    }
}
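The "check here" link is not reproduced above, so as a rough sketch of that wiring (the bean method name and the use of the root logger are assumptions, not from the original answer), the appender can be registered programmatically from a @Configuration class:
@Bean
public StreamAppender streamAppender(SseTemplate template) {
    // Assumes logback is the backing implementation, so the ILoggerFactory can be cast.
    LoggerContext loggerContext = (LoggerContext) LoggerFactory.getILoggerFactory();
    StreamAppender appender = new StreamAppender(template);
    appender.setContext(loggerContext);
    appender.start();
    // Attach to the root logger so every log line is broadcast to the SSE clients.
    loggerContext.getLogger(org.slf4j.Logger.ROOT_LOGGER_NAME).addAppender(appender);
    return appender;
}
(LoggerContext is ch.qos.logback.classic.LoggerContext; LoggerFactory is org.slf4j.LoggerFactory.)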

Using a timing based PollingConsumer to a direct endpoint

Functionally I wish to check a URL is active before I consume from a JMS (WMQ) endpoint.
If the URL cannot be reached or returns a server error, then I do not want to pick up from the queue. So I want to keep trying the URL (with unlimited retries) via a polling consumer, so that as soon as it is available I can pick up from JMS.
I have a RouteBuilder that is set up with a direct endpoint, which is configured to run a Processor that pings a service.
So:
public class PingRoute extends RouteBuilder {

    @Override
    public void configureCamel() {
        from("direct:pingRoute").routeId(PingRoute.class.getSimpleName())
            .process(new PingProcessor(url))
            .to("log://PingRoute?showAll=true");
    }
}
In another route I am setting up my timer:
@Override
public void configureCamel() {
    from(timerEndpoint).beanRef(PollingConsumerBean.class.getSimpleName(), "checkPingRoute");
    ...
}
And with the PollingConsumerBean I am attempting to receive the body via a consumer:
public void checkPingRoute() {
    // loop to check the consumer. Check we can carry on with the pick up from the JMS queue.
    while (true) {
        Boolean pingAvailable = consumer.receiveBody("direct:pingRoute", Boolean.class);
        ...
    }
}
I add the route to the context and use a producer to send:
context.addRoutes(new PingRoute());
context.start();
producer.sendBody(TimerPollingRoute.TIMER_POLLING_ROUTE_ENDPOINT, "a body");
And I get the following IllegalArgumentException:
Cannot add a 2nd consumer to the same endpoint. Endpoint Endpoint[direct://pingRoute] only allows one consumer.
Is there a way to setup the direct route as a polling consumer?
The business logic is not quite clear, unfortunately. As I understand it, you need to wait for a response from the service. IMHO you have to use the Content Enricher EIP (http://camel.apache.org/content-enricher.html). pollEnrich is what you need in the timer route.
.pollEnrich("direct:waitForResponce", -1) or
.pollEnrich("seda:waitForResponce", -1)
public class PingRoute extends RouteBuilder {

    @Override
    public void configureCamel() {
        from("direct:pingRoute").routeId(PingRoute.class.getSimpleName())
            .process(new PingProcessor(url))
            .choice().when(body())
                .to("log://PingRoute?showAll=true")
                .to("direct:waitForResponce")
            .otherwise()
                .to("direct:pingRoute")
            .end();
    }
}
timer:
@Override
public void configureCamel() {
    from(timerEndpoint)
        .inOnly("direct:pingRoute")
        .pollEnrich("direct:waitForResponce", -1)
        ...
}
Based on the OP's clarification of their use case, they have several problems to solve:
Consume the message from the JMS queue if, and only if, the ping to the URL is positive.
If the URL is unresponsive, the JMS message should not disappear from the queue and a retry must take place until the URL becomes responsive again, in which case the message will be ultimately consumed.
The OP has not specified if the amount of retries is limited or unlimited.
Based on this problem scenario, I suggest a redesign of their solution that leverages ActiveMQ retries, broker-side redelivery and JMS transactions in Camel to:
Return the message to the queue if the URL ping failed (via a transaction rollback).
Ensure that the message is not lost (by using JMS persistence and broker-side redeliveries, AMQ will durably schedule the retry cycle).
Be able to specify a sophisticated retry cycle per message, e.g. with exponential backoffs, maximum retries, etc.
Optionally sending the message to a Dead Letter Queue if the retry cycle was exhausted without a positive result, so that some other (possibly manual) action can be planned.
Now, implementation-wise:
from("activemq:queue:abc?transacted=true")            // (1)
    .to("http4://host.endpoint.com/foo?method=GET")   // (2) (3)
    .process(new HandleSuccess());                     // (4)
Comments:
Note the transacted flag.
If the HTTP invocation fails, the HTTP4 endpoint will raise an Exception.
Since there are no configured exception handlers, Camel will propagate the exception to the consumer endpoint (activemq) which will rollback the transaction.
If the invocation succeeded, the flow will continue and the exchange body will now contain the payload returned by the HTTP server and you can handle it in whichever way you wish. Here I'm using a processor.
Next, what's important is that you configure the redelivery policy in ActiveMQ, as well as enable broker-side redeliveries. You do that in your activemq.xml configuration file:
<plugins>
    <redeliveryPlugin fallbackToDeadLetter="true" sendToDlqIfMaxRetriesExceeded="true">
        <redeliveryPolicyMap>
            <redeliveryPolicyMap>
                <redeliveryPolicyEntries>
                    <redeliveryPolicy queue="my.queue"
                                      initialRedeliveryDelay="30000"
                                      maximumRedeliveries="17"
                                      maximumRedeliveryDelay="259200000"
                                      redeliveryDelay="30000"
                                      useExponentialBackOff="true"
                                      backOffMultiplier="2" />
                </redeliveryPolicyEntries>
            </redeliveryPolicyMap>
        </redeliveryPolicyMap>
    </redeliveryPlugin>
</plugins>
And make sure that the scheduler support is enabled in the top-level <broker /> element:
<broker xmlns="http://activemq.apache.org/schema/core"
        brokerName="mybroker"
        schedulerSupport="true">
    ...
</broker>
I hope that helps.
EDIT 1: OP is using IBM WebSphere MQ as a broker; I missed that. You could use a JMS QueueBrowser to peek at messages and try their corresponding URLs before actually consuming a message, but it is not possible to selectively consume an individual message – that's not what MOM (messaging-oriented middleware) is about.
So I insist that you should explore JMS transactions, but rather than leaving it up to the broker to redeliver the message, you can start the pinging cycle to the URL within the TX body itself. With regards to Camel, you could implement it as follows:
from("jms:queue:myqueue?transacted=true")
    .bean(new UrlPinger());
UrlPinger.java:
public class UrlPinger {

    @EndpointInject
    private ProducerTemplate template;

    private Pattern pattern = Pattern.compile("^(http(?:s)?)\\:");

    @Handler
    public void pingUrl(@Body String url, CamelContext context) throws InterruptedException {
        // Replace http(s): with http(s)4: to use the Camel HTTP4 endpoint.
        Matcher m = pattern.matcher(url);
        if (m.find()) {
            url = m.replaceFirst(m.group(1) + "4:");
        }
        // Try forever until the status code is 200.
        while (getStatusCode(url, context) != 200) {
            Thread.sleep(5000);
        }
    }

    private int getStatusCode(String url, CamelContext context) {
        Exchange response = template.request(url + "?method=GET&throwExceptionOnFailure=false", new Processor() {
            @Override
            public void process(Exchange exchange) throws Exception {
                // No body since this is a GET request.
                exchange.getIn().setBody(null);
            }
        });
        return response.getIn().getHeader(Exchange.HTTP_RESPONSE_CODE, Integer.class);
    }
}
Notes:
Note the throwExceptionOnFailure=false option. An Exception will not be raised, therefore the loop will execute until the condition is true.
Inside the bean, I'm looping forever until the HTTP status is 200. Of course, your logic will be different.
Between attempt and attempt, I'm sleeping 5000ms.
I'm assuming the URL to ping is in the body of the incoming JMS message. I'm replacing the leading http(s): with http(s)4: in order to use the Camel HTTP4 endpoint.
Performing the pinging inside the TX guarantees that the message will only be consumed once the ping condition is true (in this case HTTP status == 200).
You might want to introduce a desist condition (you don't want to keep trying forever). Maybe introduce some backoff to not overwhelm the other party.
If either Camel or the broker goes down within a retry cycle, the message will be automatically rolled back.
Take into account that JMS transactions are Session-bound, so if you want to start many concurrent consumers (concurrentConsumers JMS endpoint option), you'll need to set cacheLevelName=CACHE_NONE for each thread to use a different JMS Session.
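For example (the queue name and consumer count are placeholders), the consuming endpoint from the snippet above would then look something like:
from("jms:queue:myqueue?transacted=true&concurrentConsumers=5&cacheLevelName=CACHE_NONE")
    .bean(new UrlPinger());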
I am having a bit of difficulty figuring out exactly what you want to do, but it appears to me that you want to consume data from an endpoint on an interval. For this the best pattern is a polling consumer: http://camel.apache.org/polling-consumer.html
The error you are currently receiving is because you have two consumers both trying to read from "direct://pingRoute". If this was intended, you could change the direct to a seda://pingRoute, so that your data ends up in an in-memory queue.
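As a small sketch of that idea (the seda endpoint name is illustrative): have the ping route publish its result to a seda endpoint and let the polling consumer receive from that in-memory queue, so there is no second consumer on the direct endpoint.
// In PingRoute: push the ping result onto an in-memory seda queue.
from("direct:pingRoute").routeId(PingRoute.class.getSimpleName())
    .process(new PingProcessor(url))
    .to("seda:pingResult");

// In PollingConsumerBean: receive from the seda queue instead of direct:pingRoute.
Boolean pingAvailable = consumer.receiveBody("seda:pingResult", Boolean.class);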
All the answers here pointed me in the right direction, but I finally came up with a solution that fit our code base and framework.
Firstly, I discovered there isn't a need to have a bean act as a polling consumer; a processor could be used instead.
@Override
public void configureCamel() {
    from("timer://fnzPoller?period=2000&delay=2000")
        .processRef(UrlPingProcessor.class.getSimpleName())
        .processRef(StopStartProcessor.class.getSimpleName())
        .to("log://TimerPollingRoute?showAll=true");
}
Then in the UrlPingProcessor there is a CXF service to ping the URL and check the response:
@Override
public void process(Exchange exchange) {
    try {
        // CXF service
        FnzPingServiceImpl fnzPingService = new FnzPingServiceImpl(url);
        fnzPingService.getPing();
    } catch (WebApplicationException e) {
        int responseCode = e.getResponse().getStatus();
        boolean isValidResponseCode = ResponseCodeUtil.isResponseCodeValid(responseCode);
        if (!isValidResponseCode) {
            // Sets a flag to stop for the StopStartProcessor
            stopRoute(exchange);
        }
    }
}
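The stopRoute(exchange) helper is not shown in the post; presumably it just marks the exchange for the StopStartProcessor below, along these lines (an assumption, since the original method body is omitted):
private void stopRoute(Exchange exchange) {
    // Assumed implementation: set the header that StopStartProcessor reads.
    exchange.getIn().setHeader(Exchange.ROUTE_STOP, Boolean.TRUE);
}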
Then the StopStartProcessor uses an ExecutorService to stop or start a route on a new thread:
@Override
public void process(final Exchange exchange) {
    // routeBuilder is set on the constructor.
    final String routeId = routeBuilder.getClass().getSimpleName();

    Boolean stopRoute = ExchangeHeaderUtil.getHeader(exchange, Exchange.ROUTE_STOP, Boolean.class);
    boolean stopRoutePrim = BooleanUtils.isTrue(stopRoute);
    if (stopRoutePrim) {
        StopRouteThread stopRouteThread = new StopRouteThread(exchange, routeId);
        executorService.execute(stopRouteThread);
    } else {
        CamelContext context = exchange.getContext();
        Route route = context.getRoute(routeId);
        if (route == null) {
            try {
                context.addRoutes(routeBuilder);
            } catch (Exception e) {
                String msg = "Unable to add a route: " + routeBuilder;
                LOGGER.warn(msg, e);
            }
        }
    }
}
