We have a Spring Integration DSL pipeline connected to GCP Pub/Sub, and things "work": the data is received and processed as defined in the pipeline, using a collection of Function implementations and .handle().
The problem we have (and the reason for the quotes around "work") is that, in some handlers, when some of the data isn't found in the companion database, we raise an IllegalStateException, which forces the message to be reprocessed (along the way, another service may complete the companion database, so the function will then work). This exception never shows up anywhere.
We tried to capture the content of the errorHandler, but we really can't find the proper way to do it programmatically (no XML).
Our Functions have something like this:
Record record = recordRepository.findById(incomingData).orElseThrow(() -> new IllegalStateException("Missing information: " + incomingData));
This IllegalStateException is the one that is not appearing anywhere in the logs.
Also, maybe it's worth mentioning that we have our channels defined as
@Bean
public DirectChannel cardInputChannel() {
    return new DirectChannel();
}
@Bean
public PubSubInboundChannelAdapter cardChannelAdapter(
        @Qualifier("cardInputChannel") MessageChannel inputChannel,
        PubSubTemplate pubSubTemplate) {
    PubSubInboundChannelAdapter adapter = new PubSubInboundChannelAdapter(pubSubTemplate, SUBSCRIPTION_NAME);
    adapter.setOutputChannel(inputChannel);
    adapter.setAckMode(AckMode.AUTO);
    adapter.setPayloadType(CardDto.class);
    return adapter;
}
I am not familiar with the adapter, but I just looked at the code, and it looks like it just nacks the message and doesn't log anything.
You can add an Advice to the handler's endpoint to capture and log the exception:
.handle(..., e -> e.advice(exceptionLoggingAdvice))
@Bean
public MethodInterceptor exceptionLoggingAdvice() {
    return invocation -> {
        try {
            return invocation.proceed();
        }
        catch (Exception thrown) {
            // log it here, then rethrow so normal error handling still applies
            throw thrown;
        }
    };
}
EDIT
@SpringBootApplication
public class So57224614Application {

    public static void main(String[] args) {
        SpringApplication.run(So57224614Application.class, args);
    }

    @Bean
    public IntegrationFlow flow(MethodInterceptor myAdvice) {
        return IntegrationFlows.from(() -> "foo", endpoint -> endpoint.poller(Pollers.fixedDelay(5000)))
                .handle("crasher", "crash", endpoint -> endpoint.advice(myAdvice))
                .get();
    }

    @Bean
    public MethodInterceptor myAdvice() {
        return invocation -> {
            try {
                return invocation.proceed();
            }
            catch (Exception e) {
                System.out.println("Failed with " + e.getMessage());
                throw e;
            }
        };
    }

}

@Component
class Crasher {

    public void crash(Message<?> msg) {
        throw new RuntimeException("test");
    }

}
and the output:
Failed with nested exception is java.lang.RuntimeException: test
Related
I am consuming batches in Kafka. Retry is not supported in the spring-cloud-stream Kafka binder with batch mode, but the documentation offers an alternative: you can configure a SeekToCurrentBatchErrorHandler (using a ListenerContainerCustomizer) to achieve functionality similar to binder retry.
I tried that with SeekToCurrentBatchErrorHandler, but it retries more often than the configured limit, which is 3 times.
How can I do that? I would like to retry the whole batch.
How can I send the whole batch to a DLQ topic? With a record listener, I used to check in the listener whether deliveryAttempt (retry) had reached 3 and then send the record to the DLQ topic.
I have checked this link, which is exactly my issue, but an example would be a great help. Can I achieve that with the spring-cloud-stream-kafka-binder library? Please explain with an example; I am new to this.
Currently I have the code below.
@Configuration
public class ConsumerConfig {

    @Bean
    public ListenerContainerCustomizer<AbstractMessageListenerContainer<?, ?>> customizer() {
        return (container, dest, group) -> {
            container.getContainerProperties().setAckOnError(false);
            SeekToCurrentBatchErrorHandler seekToCurrentBatchErrorHandler
                    = new SeekToCurrentBatchErrorHandler();
            seekToCurrentBatchErrorHandler.setBackOff(new FixedBackOff(0L, 2L));
            container.setBatchErrorHandler(seekToCurrentBatchErrorHandler);
            // container.setBatchErrorHandler(new BatchLoggingErrorHandler());
        };
    }

}
Listener:
@StreamListener(ActivityChannel.INPUT_CHANNEL)
public void handleActivity(List<Message<Event>> messages,
        @Header(name = KafkaHeaders.ACKNOWLEDGMENT) Acknowledgment acknowledgment,
        @Header(name = "deliveryAttempt", defaultValue = "1") int deliveryAttempt) {
    try {
        log.info("Received activity message with message length {}", messages.size());
        nodeConfigActivityBatchProcessor.processNodeConfigActivity(messages);
        acknowledgment.acknowledge();
        log.debug("Processed activity message {} successfully!!", messages.size());
    } catch (MessagePublishException e) {
        if (deliveryAttempt == 3) {
            log.error(
                    String.format("Exception occurred, sending the message=%s to DLQ due to: ", "message"),
                    e);
            publisher.publishToDlq(EventType.UPDATE_FAILED, "message", e.getMessage());
        } else {
            throw e;
        }
    }
}
After seeing @Gary's response, I added the ListenerContainerCustomizer @Bean with RetryingBatchErrorHandler, but I am not able to import that class (screenshots of the failing import and of my Spring Cloud dependencies attached).
Use a RetryingBatchErrorHandler to send the whole batch to the DLT
https://docs.spring.io/spring-kafka/docs/current/reference/html/#retrying-batch-eh
Use a RecoveringBatchErrorHandler where you can throw a BatchListenerFailedException to tell it which record in the batch failed (a sketch of this option follows below).
https://docs.spring.io/spring-kafka/docs/current/reference/html/#recovering-batch-eh
In both cases provide a DeadLetterPublishingRecoverer to the error handler; disable DLTs in the binder.
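For the RecoveringBatchErrorHandler option, a minimal, untested sketch of how the pieces could fit together (assuming spring-kafka 2.5+; process() is a hypothetical per-record method):

@Bean
ListenerContainerCustomizer<AbstractMessageListenerContainer<?, ?>> recoveringCustomizer(
        KafkaTemplate<byte[], byte[]> template) {
    return (container, dest, group) -> {
        // retries the failed delivery with a back off; when retries are
        // exhausted, the failing record is published to the DLT
        container.setBatchErrorHandler(new RecoveringBatchErrorHandler(
                new DeadLetterPublishingRecoverer(template), new FixedBackOff(5000L, 2L)));
    };
}

@Bean
public Consumer<List<String>> input() {
    return list -> {
        for (int i = 0; i < list.size(); i++) {
            try {
                process(list.get(i)); // hypothetical per-record processing
            }
            catch (Exception e) {
                // the index tells the error handler which record failed; records
                // before it are committed, and the failing one is retried/recovered
                throw new BatchListenerFailedException("record " + i + " failed", e, i);
            }
        }
    };
}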
EDIT
Here's an example; it uses the newer functional style rather than the deprecated @StreamListener, but the same concepts apply (and you should consider moving to the functional style).
@SpringBootApplication
public class So69175145Application {

    public static void main(String[] args) {
        SpringApplication.run(So69175145Application.class, args);
    }

    @Bean
    ListenerContainerCustomizer<AbstractMessageListenerContainer<?, ?>> customizer(
            KafkaTemplate<byte[], byte[]> template) {
        return (container, dest, group) -> {
            container.setBatchErrorHandler(new RetryingBatchErrorHandler(new FixedBackOff(5000L, 2L),
                    new DeadLetterPublishingRecoverer(template,
                            (rec, ex) -> new TopicPartition("errors." + dest + "." + group, rec.partition()))));
        };
    }

    /*
     * DLT topic won't be auto-provisioned since enableDlq is false
     */
    @Bean
    public NewTopic topic() {
        return TopicBuilder.name("errors.so69175145.grp").partitions(1).replicas(1).build();
    }

    /*
     * Functional equivalent of @StreamListener
     */
    @Bean
    public Consumer<List<String>> input() {
        return list -> {
            System.out.println(list);
            throw new RuntimeException("test");
        };
    }

    /*
     * Not needed here - just to show we sent them to the DLT
     */
    @KafkaListener(id = "so69175145", topics = "errors.so69175145.grp")
    public void listen(String in) {
        System.out.println("From DLT: " + in);
    }

}
spring.cloud.stream.bindings.input-in-0.destination=so69175145
spring.cloud.stream.bindings.input-in-0.group=grp
spring.cloud.stream.bindings.input-in-0.content-type=text/plain
spring.cloud.stream.bindings.input-in-0.consumer.batch-mode=true
# for DLT listener
spring.kafka.consumer.auto-offset-reset=earliest
[foo]
2021-09-14 09:55:32.838 ERROR ...
...
[foo]
2021-09-14 09:55:37.873 ERROR ...
...
[foo]
2021-09-14 09:55:42.886 ERROR ...
...
From DLT: foo
I have a Spring Boot application that uses the libraries SimpleMessageListenerContainer (https://docs.spring.io/spring-amqp/docs/current/api/org/springframework/amqp/rabbit/listener/SimpleMessageListenerContainer.html) and SimpleMessageListenerContainerFactory (https://www.javadoc.io/static/org.springframework.cloud/spring-cloud-aws-messaging/2.2.0.RELEASE/org/springframework/cloud/aws/messaging/config/SimpleMessageListenerContainerFactory.html). The application uses AWS SQS and Kafka, but I'm experiencing some out-of-order data and trying to investigate why. Is there a way to view the logging from those two libraries? I know I cannot edit them directly, but when I create the bean, I want to be able to see their logs and, if possible, add to them.
Currently I'm setting up the bean in this way:
@ConditionalOnProperty(value = "application.listener-mode", havingValue = "SQS")
@Component
public class SqsConsumer {

    private final static Logger logger = LoggerFactory.getLogger(SqsConsumer.class);

    @Autowired
    private ConsumerMessageHandler consumerMessageHandler;

    @Autowired
    private KafkaProducer producer;

    @PostConstruct
    public void init() {
        logger.info("Loading SQS Listener Bean");
    }

    @SqsListener("${application.aws-iot.sqs-url}")
    public void receiveMessage(String message) {
        byte[] decodedValue = Base64.getDecoder().decode(message);
        consumerMessageHandler.handle(decodedValue, message);
    }

    @Bean
    public SimpleMessageListenerContainerFactory simpleMessageListenerContainerFactory(AmazonSQSAsync amazonSqs) {
        SimpleMessageListenerContainerFactory factory = new SimpleMessageListenerContainerFactory();
        factory.setAmazonSqs(amazonSqs);
        factory.setMaxNumberOfMessages(10);
        factory.setWaitTimeOut(20);
        logger.info("Created simpleMessageListenerContainerFactory");
        logger.info(factory.toString());
        return factory;
    }

}
For reference, this is a method in the SimpleMessageListenerContainer. It is these logs that I would like to investigate and potentially add to:
@Override
public void run() {
    while (isQueueRunning()) {
        try {
            ReceiveMessageResult receiveMessageResult = getAmazonSqs()
                    .receiveMessage(this.queueAttributes.getReceiveMessageRequest());
            CountDownLatch messageBatchLatch = new CountDownLatch(
                    receiveMessageResult.getMessages().size());
            for (Message message : receiveMessageResult.getMessages()) {
                if (isQueueRunning()) {
                    MessageExecutor messageExecutor = new MessageExecutor(
                            this.logicalQueueName, message, this.queueAttributes);
                    getTaskExecutor().execute(new SignalExecutingRunnable(
                            messageBatchLatch, messageExecutor));
                }
                else {
                    messageBatchLatch.countDown();
                }
            }
            try {
                messageBatchLatch.await();
            }
            catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        catch (Exception e) {
            getLogger().warn(
                    "An Exception occurred while polling queue '{}'. The failing operation will be "
                            + "retried in {} milliseconds",
                    this.logicalQueueName, getBackOffTime(), e);
            try {
                // noinspection BusyWait
                Thread.sleep(getBackOffTime());
            }
            catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
            }
        }
    }
    SimpleMessageListenerContainer.this.scheduledFutureByQueue
            .remove(this.logicalQueueName);
}
How would I be able to see all of that logging from where I create the bean?
Any help would be much appreciated!
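A minimal sketch of how such library logging is typically surfaced, assuming Spring Boot's default logging setup (the package name matches the spring-cloud-aws listener classes shown above):

# application.properties - raise the log level for the listener container's package
logging.level.org.springframework.cloud.aws.messaging=DEBUG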
I've tried to upgrade Spring Boot to version 2.2.4.RELEASE. Everything is fine except a problem with CompositeHealthIndicator, which is deprecated.
I have this bean method
@Autowired
private HealthAggregator healthAggregator;

@Bean
public HealthIndicator solrHealthIndicator() {
    CompositeHealthIndicator composite = new CompositeHealthIndicator(this.healthAggregator);
    composite.addHealthIndicator("solr1", createHealthIndicator(firstHttpSolrClient()));
    composite.addHealthIndicator("solr2", createHealthIndicator(secondHttpSolrClient()));
    composite.addHealthIndicator("querySolr", createHealthIndicator(queryHttpSolrClient()));
    return composite;
}

private CustomSolrHealthIndicator createHealthIndicator(SolrClient source) {
    try {
        return new CustomSolrHealthIndicator(source);
    } catch (Exception ex) {
        throw new IllegalStateException("Unable to create healthCheckIndicator for solr client instance.", ex);
    }
}
That registers a HealthIndicator for 3 instances of Solr (2 for indexing, 1 for querying). Everything worked fine until the Spring Boot update. After the update, the method CompositeHealthIndicator.addHealthIndicator is no longer present, and the whole class is marked as deprecated.
The class which is created in createHealthIndicator is like this:
public class CustomSolrHealthIndicator extends SolrHealthIndicator {

    private final SolrClient solrClient;

    public CustomSolrHealthIndicator(SolrClient solrClient) {
        super(solrClient);
        this.solrClient = solrClient;
    }

    @Override
    protected void doHealthCheck(Health.Builder builder) throws Exception {
        if (!this.solrClient.getClass().isAssignableFrom(HttpSolrClient.class)) {
            super.doHealthCheck(builder);
            return; // not an HttpSolrClient, so skip the base-URL check below
        }
        HttpSolrClient httpSolrClient = (HttpSolrClient) this.solrClient;
        if (StringUtils.isBlank(httpSolrClient.getBaseURL())) {
            return;
        }
        super.doHealthCheck(builder);
    }
}
Is there any easy way to migrate the old registration of the Solr instances I want to check (whether they are up or down) to Spring Boot 2.2.x?
EDIT:
I have tried this:
@Bean
public CompositeHealthContributor solrHealthIndicator() {
    Map<String, HealthIndicator> solrIndicators = Maps.newLinkedHashMap();
    solrIndicators.put("solr1", createHealthIndicator(firstHttpSolrClient()));
    solrIndicators.put("solr2", createHealthIndicator(secondHttpSolrClient()));
    solrIndicators.put("querySolr", createHealthIndicator(queryHttpSolrClient()));
    return CompositeHealthContributor.fromMap(solrIndicators);
}

private CustomSolrHealthIndicator createHealthIndicator(SolrClient source) {
    try {
        return new CustomSolrHealthIndicator(source);
    } catch (Exception ex) {
        throw new IllegalStateException("Unable to create healthCheckIndicator for solr client instance.", ex);
    }
}
The CustomSolrHealthIndicator is unchanged from its original state.
But I cannot create that bean: when createHealthIndicator is called, I get a NoClassDefFoundError.
Does anyone know where the problem is?
Looks like you can just use CompositeHealthContributor. It's not much different from what you have already; it appears something like this would work. You could also override the functionality to add them one at a time if you'd like, which might be preferable if you have a large amount of configuration. There shouldn't be any harm with either approach.
@Bean
public CompositeHealthContributor solrHealthIndicator() {
    Map<String, HealthIndicator> solrIndicators = new LinkedHashMap<>();
    solrIndicators.put("solr1", createHealthIndicator(firstHttpSolrClient()));
    solrIndicators.put("solr2", createHealthIndicator(secondHttpSolrClient()));
    solrIndicators.put("querySolr", createHealthIndicator(queryHttpSolrClient()));
    return CompositeHealthContributor.fromMap(solrIndicators);
}
Instead of the deprecated CompositeHealthIndicator#addHealthIndicator, use the constructor that takes a map:
@Bean
public HealthIndicator solrHealthIndicator() {
    Map<String, HealthIndicator> healthIndicators = new HashMap<>();
    healthIndicators.put("solr1", createHealthIndicator(firstHttpSolrClient()));
    healthIndicators.put("solr2", createHealthIndicator(secondHttpSolrClient()));
    healthIndicators.put("querySolr", createHealthIndicator(queryHttpSolrClient()));
    return new CompositeHealthIndicator(this.healthAggregator, healthIndicators);
}
Is there a way to catch DestinationResolutionException and MessageDispatchingException when using the DSL? These exceptions usually indicate misconfiguration, but I am not sure how I could configure my flow to catch them and apply some custom logic.
@SpringBootApplication
public class IntegrationMisconfigurationExampleApplication {

    public static void main(final String[] args) {
        SpringApplication.run(IntegrationMisconfigurationExampleApplication.class, args);
    }

    @Bean
    public IntegrationFlow loggingFlow() {
        return IntegrationFlows.from("input")
                .<String, String>transform(String::toUpperCase)
                // .nullChannel();
                .get();
    }

    @Bean
    public CommandLineRunner demo() {
        return args -> {
            final MessagingTemplate template = messagingTemplate();
            template.convertAndSend("input", "abc");
        };
    }

    @Bean
    public MessagingTemplate messagingTemplate() {
        return new MessagingTemplate();
    }

}
The example above throws a DestinationResolutionException because loggingFlow.transformer#0 is not properly initialized. Is there a way to catch this exception?
Those exceptions are runtime errors. We really can't determine a misconfiguration at startup.
The way to catch runtime exceptions like that and do some analyzing work is with the ExpressionEvaluatingRequestHandlerAdvice, which you can add to your transform(String::toUpperCase) configuration in the second argument, the endpoint configurer:
.<String, String>transform(String::toUpperCase, e -> e.advice(myExpressionEvaluatingRequestHandlerAdvice()))
See more info about this advice in the Reference Manual: https://docs.spring.io/spring-integration/docs/current/reference/html/#message-handler-advice-chain
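For reference, a minimal sketch of what that advice bean might look like (assuming Spring Integration 5.x; the failure-channel name is illustrative):

@Bean
public ExpressionEvaluatingRequestHandlerAdvice myExpressionEvaluatingRequestHandlerAdvice() {
    ExpressionEvaluatingRequestHandlerAdvice advice = new ExpressionEvaluatingRequestHandlerAdvice();
    // evaluated against the request message when the handler throws
    advice.setOnFailureExpressionString("payload");
    // route the resulting ErrorMessage to a custom channel for analysis
    advice.setFailureChannelName("analyzeErrorsChannel"); // hypothetical channel name
    // true = trap the exception instead of rethrowing it to the caller
    advice.setTrapException(true);
    return advice;
}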
Also, you need to keep in mind that a transformer is really a request-reply component with a required non-null return value. Therefore you really can't configure a transform() for a one-way flow: it is going to throw an exception when there is no next channel in the flow and no replyChannel header in the message.
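If only a one-way flow is needed, a sketch of the usual alternative is to end the flow with the handle() variant that accepts a MessageHandler, which produces no reply:

@Bean
public IntegrationFlow oneWayFlow() {
    return IntegrationFlows.from("input")
            .<String, String>transform(String::toUpperCase)
            // a MessageHandler lambda is one-way: no reply is produced, so no
            // output channel or replyChannel header is required
            .handle(m -> System.out.println("Got " + m.getPayload()))
            .get();
}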
I am using a Spring Boot project on Spring Boot version 1.5.4, with spring-boot-starter-amqp, spring-boot-starter-web-services, and spring-ws-support v2.4.0.
So far, I have successfully created a @RabbitListener component, which does exactly what it is supposed to do when a message is sent to the broker via rabbitTemplate.sendAndReceive(uri, message). I tried to see what would happen if I used AsyncRabbitTemplate for this, as the message processing might take a while, and I don't want to block my application while waiting for a response.
The problem is: the first message I put in the queue is not even picked up by the listener. The callback just acknowledges a success with the published message instead of the returned message.
Listener:
@RabbitListener(queues = KEY_MESSAGING_QUEUE)
public Message processMessage(@Payload byte[] payload, @Headers Map<String, Object> headers) {
    try {
        byte[] resultBody = messageProcessor.processMessage(payload, headers);
        MessageBuilder builder = MessageBuilder.withBody(resultBody);
        if (resultBody.length == 0) {
            builder.setHeader(HEADER_NAME_ERROR_MESSAGE, "Error occurred during processing.");
        }
        return builder.build();
    } catch (Exception ex) {
        return MessageBuilder.withBody(EMPTY_BODY)
                .setHeader(HEADER_NAME_ERROR_MESSAGE, ex.getMessage())
                .setHeader(HEADER_NAME_STACK_TRACE, ex.getStackTrace())
                .build();
    }
}
When I execute my tests, one test fails and the second succeeds. The class is annotated with @RunWith(SpringJUnit4ClassRunner.class) and @SpringBootTest(classes = { Application.class, Test.TestConfiguration.class }) and has a @ClassRule of BrokerRunning.isRunningWithEmptyQueues(QUEUE_NAME).
TestConfiguration (inner class):
public static class TestConfiguration {

    @Bean // referenced in the tests as art
    public AsyncRabbitTemplate asyncRabbitTemplate(ConnectionFactory connectionFactory, RabbitTemplate rabbitTemplate) {
        SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory);
        container.setQueueNames(QUEUE_NAME);
        return new AsyncRabbitTemplate(rabbitTemplate, container);
    }

    @Bean
    public MessageListener messageListener() {
        return new MessageListener();
    }

}
Tests:
@Test
public void shouldListenAndReplyToQueue() throws Exception {
    doReturn(RESULT_BODY)
            .when(innerMock)
            .processMessage(any(byte[].class), anyMapOf(String.class, Object.class));
    Message msg = MessageBuilder
            .withBody(MESSAGE_BODY)
            .setHeader("header", "value")
            .setHeader("auth", "entication")
            .build();
    RabbitMessageFuture pendingReply = art.sendAndReceive(QUEUE_NAME, msg);
    pendingReply.addCallback(new ListenableFutureCallback<Message>() {

        @Override
        public void onSuccess(Message result) { }

        @Override
        public void onFailure(Throwable ex) {
            throw new RuntimeException(ex);
        }

    });
    while (!pendingReply.isDone()) { }
    result = pendingReply.get();
    // assertions omitted
}
Test 2:
@Test
public void shouldReturnExceptionToCaller() throws Exception {
    doThrow(new SSLSenderInstantiationException("I am a message", new Exception()))
            .when(innerMock)
            .processMessage(any(byte[].class), anyMapOf(String.class, Object.class));
    Message msg = MessageBuilder
            .withBody(MESSAGE_BODY)
            .setHeader("header", "value")
            .setHeader("auth", "entication")
            .build();
    RabbitMessageFuture pendingReply = art.sendAndReceive(QUEUE_NAME, msg);
    pendingReply.addCallback(/* same as above */);
    while (!pendingReply.isDone()) { }
    result = pendingReply.get();
    // assertions omitted
}
When I run both tests together, the test that is executed first fails, while the second succeeds.
When I run both tests separately, both fail.
When I add an @Before method that uses the AsyncRabbitTemplate art to put any message into the queue, both tests MAY pass, or the second test MAY not pass, so in addition to being unexpected, the behaviour is inconsistent as well.
The interesting thing is that the callback passed to the method reports a success before the listener is invoked, and reports the sent message as the result.
The only class missing from this is the general configuration class, which is annotated with @EnableRabbit and has this content:
@Bean
public SimpleRabbitListenerContainerFactory simpleRabbitListenerContainerFactory(ConnectionFactory connectionFactory) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    factory.setConcurrentConsumers(10);
    return factory;
}
Other things I have tried:
- specifically creating the AsyncRabbitTemplate myself and starting and stopping it manually before and after every message process -> both tests succeeded
- increasing / decreasing the receive timeout -> no effect
- removing and changing the callback -> no effect
- explicitly creating the queue again with an injected RabbitAdmin -> no effect
- extracting the callback to a constant -> the tests didn't even start correctly
As stated above, when I used RabbitTemplate directly, it worked exactly as intended.
If anyone has any ideas about what is missing, I'd be very happy to hear them.
You can't use the same queue for requests and replies...
@Bean // referenced in the tests as art
public AsyncRabbitTemplate asyncRabbitTemplate(ConnectionFactory connectionFactory, RabbitTemplate rabbitTemplate) {
    SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory);
    container.setQueueNames(QUEUE_NAME);
    return new AsyncRabbitTemplate(rabbitTemplate, container);
}
This will listen for replies on QUEUE_NAME, so...
RabbitMessageFuture pendingReply = art.sendAndReceive(QUEUE_NAME, msg);
...simply sends a message to itself. It looks like you intended...
RabbitMessageFuture pendingReply = art.sendAndReceive(KEY_MESSAGING_QUEUE, msg);
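A minimal sketch of that separation, where REPLY_QUEUE_NAME is a hypothetical constant for a second, dedicated reply queue (it must differ from KEY_MESSAGING_QUEUE):

@Bean // referenced in the tests as art
public AsyncRabbitTemplate asyncRabbitTemplate(ConnectionFactory connectionFactory, RabbitTemplate rabbitTemplate) {
    SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory);
    container.setQueueNames(REPLY_QUEUE_NAME); // reply-only queue, distinct from the request queue
    return new AsyncRabbitTemplate(rabbitTemplate, container);
}

With that in place, art.sendAndReceive(KEY_MESSAGING_QUEUE, msg) sends requests to the @RabbitListener's queue while the replies arrive on the dedicated reply container.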