Using Spring Integration v5.5.14, lots of queued tasks are increasing - java

After upgrading to Spring Boot v2.7.1 we are seeing lots of queued tasks; we never saw queued tasks increase like this on the previous version we were using, v2.2.2.
Our team has tried to check things in v2.7.1 but couldn't find anything in this version.
Can anyone please review the code and let us know what we are missing or have written wrong that is causing the issue? We are using Spring Integration to pull emails from a client server, and for that we have added a TaskExecutor to have concurrent polling.
Versions that we use:
Spring Boot = 2.7.1
Spring Integration = 5.5.14
Earlier we were using:
Spring Boot = 2.2.2.RELEASE
Spring Integration = 5.2.3.RELEASE
I've attached the code below.
Configuration class for IMAP integration
@Configuration
@EnableIntegration
public class ImapIntegrationConfig {

    private final ApplicationContext applicationContext;

    @Autowired
    public ImapIntegrationConfig(ApplicationContext applicationContext) {
        this.applicationContext = applicationContext;
    }

    @Bean("mailTaskExecutor")
    public ThreadPoolTaskExecutor mailTaskExecutor() {
        ThreadPoolTaskExecutor taskExecutor = new ThreadPoolTaskExecutor();
        taskExecutor.setMaxPoolSize(1000);
        taskExecutor.setCorePoolSize(100);
        taskExecutor.setTaskDecorator(new SecurityAwareTaskDecorator(applicationContext));
        taskExecutor.setWaitForTasksToCompleteOnShutdown(true);
        taskExecutor.setAwaitTerminationSeconds(Integer.MAX_VALUE);
        return taskExecutor;
    }

    @Bean("imapMailChannel")
    public ExecutorChannelSpec imapMailChannel() {
        return MessageChannels.executor(mailTaskExecutor());
    }

    @Bean
    public HeaderMapper<MimeMessage> mailHeaderMapper() {
        return new DefaultMailHeaderMapper();
    }
}
ImapListener Class to register the flow
public void registerImapFlow(ImapSetting imapSetting) {
    ImapMailReceiver mailReceiver = createImapMailReceiver(imapSetting);

    // create the flow for an email process
    //@formatter:off
    StandardIntegrationFlow flow = IntegrationFlows
            .from(Mail.imapInboundAdapter(mailReceiver),
                    consumer -> consumer.autoStartup(true)
                            .poller(Pollers.fixedDelay(Duration.ofSeconds(5), Duration.ofMinutes(2))
                                    .taskExecutor(taskExecutor)
                                    .errorHandler(t -> logger.error("Error while polling emails for address " + imapSetting.getUsername(), t))
                                    .maxMessagesPerPoll(10)))
            .enrichHeaders(Map.of(CONCERN_CODE, imapSetting.getConcernCode(), IMAP_CONFIG_ID, imapSetting.getImapSettingId()))
            .channel(imapMailChannel)
            .get();
    //@formatter:on

    // give the bean a unique name to avoid clashes with multiple imap settings
    String flowId = concernIdentifier.getConcernIdentifier() + "-" + imapSetting.getImapSettingId();
    IntegrationFlowContext.IntegrationFlowRegistration existingFlow = integrationFlowContext.getRegistrationById(flowId);
    if (existingFlow != null) {
        // destroy the previous beans
        existingFlow.destroy();
    }
    // register the new flow
    integrationFlowContext.registration(flow).id(flowId).useFlowIdAsPrefix().register();
}
Process message method
@ServiceActivator(inputChannel = "imapMailChannel")
public void processMessage(Message<?> message) throws InvalidMessageException {
    String concern = (String) message.getHeaders().get(CONCERN_CODE);
    if (isEmpty(concern)) {
        logger.error("Received null concern!");
    }
    Long imapConfigId = (Long) message.getHeaders().get(IMAP_CONFIG_ID);
    String logMessage = null;
    String messageId = null;
    try {
        Object payload = message.getPayload();
        if (payload instanceof MimeMultipart) {
            //.......................//
        }
        else if (payload instanceof String) {
            //......................//
        }
    }
    catch (Exception e) {
        logger.error("Error while processing " + logMessage, e);
        if (concern != null) {
            metricUtil.emailFailed(concern);
        }
        throw new MaxxtonException("CCM-MessageID: Exception in processMessage() method", e, MessageErrorCode.UNABLE_TO_PROCESS_EMAIL);
    }
    metricUtil.emailProcessed(concern);
}
ImapMailReceiver method
private ImapMailReceiver createImapMailReceiver(ImapSetting imapSettings) {
    String url = String.format(imapSettings.getImapUrl(), URLEncoder.encode(imapSettings.getUsername(), UTF_8), URLEncoder.encode(imapSettings.getPassword(), UTF_8));
    ImapMailReceiver receiver = new ImapMailReceiver(url);
    receiver.setSimpleContent(true);

    Properties mailProperties = new Properties();
    mailProperties.put("mail.debug", "false");
    mailProperties.put("mail.imap.connectionpoolsize", "5");
    mailProperties.put("mail.imap.fetchsize", 4194304);
    mailProperties.put("mail.imap.connectiontimeout", 15000);
    mailProperties.put("mail.imap.timeout", 30000);
    mailProperties.put("mail.imaps.connectionpoolsize", "5");
    mailProperties.put("mail.imaps.fetchsize", 4194304);
    mailProperties.put("mail.imaps.connectiontimeout", 15000);
    mailProperties.put("mail.imaps.timeout", 30000);

    receiver.setJavaMailProperties(mailProperties);
    receiver.setSearchTermStrategy(this::notSeenTerm);
    receiver.setAutoCloseFolder(false);
    receiver.setShouldDeleteMessages(false);
    receiver.setShouldMarkMessagesAsRead(true);
    receiver.setHeaderMapper(mailHeaderMapper);
    receiver.setEmbeddedPartsAsBytes(false);
    return receiver;
}
I've attached a screenshot from Grafana of active and queued tasks after we upgraded to Spring Boot v2.7.1 and Spring Integration v5.5.14.

At a glance it all looks OK, unless you really don't close that folder manually elsewhere, since you use receiver.setAutoCloseFolder(false).
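For reference, a minimal sketch of the manual close that setAutoCloseFolder(false) requires, e.g. at the end of processMessage() (the placement is an assumption; see the Spring Integration mail documentation on the closeable-resource header):

// the folder stays open until the resource carried in the message headers is closed
Closeable closeable = StaticMessageHeaderAccessor.getCloseableResource(message);
if (closeable != null) {
    closeable.close(); // close() throws IOException; handle as appropriate
}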
There is no reason for that .taskExecutor(taskExecutor), since you use MessageChannels.executor(mailTaskExecutor()) immediately after producing a message from the Mail.imapInboundAdapter().
I remember that on Gitter I suggested you check how it works with spring.task.scheduling.pool.size=10 placed into the application.properties. This is the only obvious difference between the mentioned versions: https://docs.spring.io/spring-boot/docs/current/reference/html/messaging.html#messaging.spring-integration.
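For reference, that would be (the value 10 is the one suggested above; the auto-configured TaskScheduler's default pool size is 1):

# application.properties
spring.task.scheduling.pool.size=10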
Your screenshot doesn't prove that the problem is exactly with Spring Integration. Perhaps tasks are queued somehow by the tool which exports metrics to Grafana. I believe you have upgraded more than just Spring Integration in your project...

Related

Republish message to same queue with updated headers after automatic nack in Spring AMQP

I am trying to configure my Spring AMQP ListenerContainer to allow for a certain type of retry flow that's backwards compatible with a custom rabbit client previously used in the project I'm working on.
The protocol works as follows:
1. A message is received on a channel.
2. If processing fails, the message is nacked with the requeue flag set to false.
3. A copy of the message with additional/updated headers (a retry counter) is published to the same queue.
The headers are used for filtering incoming messages, but that's not important here.
I would like the behaviour to happen on an opt-in basis, so that more standardised Spring retry flows can be used in cases where compatibility with the old client isn't a concern, and the listeners should be able to work without requiring manual acking.
I have implemented a working solution, which I'll get back to below. Where I'm struggling is to publish the new message after signalling to the container that it should nack the current message, because I can't really find any good hooks after the nack or before the next message.
Reading the documentation, it feels like I'm looking for something analogous to the behaviour of RepublishMessageRecoverer used as the final step of a retry interceptor. The main difference in my case is that I need to republish immediately on failure, not as a final recovery step. I tried to look at the implementation of RepublishMessageRecoverer, but the many layers of indirection made it hard for me to understand where the republishing is triggered, and whether a nack happens before that.
My working implementation looks as follows. Note that I'm using a ThrowsAdvice, but I think an error handler could also be used with nearly identical logic.
/*
 * MyConfig.class, configuring the container factory
 */
@Configuration
public class MyConfig {

    @Bean
    // NB: bean name is important, overwrites the auto-configured bean
    public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(
            ConnectionFactory connectionFactory,
            Jackson2JsonMessageConverter messageConverter,
            RabbitTemplate rabbitTemplate
    ) {
        SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
        factory.setConnectionFactory(connectionFactory);
        factory.setMessageConverter(messageConverter);
        // AOP
        var a1 = new CustomHeaderInspectionAdvice();
        var a2 = new MyThrowsAdvice(rabbitTemplate);
        Advice[] adviceChain = {a1, a2};
        factory.setAdviceChain(adviceChain);
        return factory;
    }
}
/*
 * MyThrowsAdvice.class, hooking into the exception flow from the listener
 */
public class MyThrowsAdvice implements ThrowsAdvice {

    private static final Logger logger = LoggerFactory.getLogger(MyThrowsAdvice.class);

    private final AmqpTemplate amqpTemplate;

    public MyThrowsAdvice(AmqpTemplate amqpTemplate) {
        this.amqpTemplate = amqpTemplate;
    }

    public void afterThrowing(Method method, Object[] args, Object target, ListenerExecutionFailedException ex) {
        var message = message(args);
        var cause = ex.getCause();
        // opt-in to the old protocol by throwing an instance of BusinessException in business logic
        if (cause instanceof BusinessException) {
            /*
             * NB: Since we want to trigger execution after the current method fails
             * with an exception, we need to schedule it in another thread and delay
             * execution until the nack has happened.
             */
            new Thread(() -> {
                try {
                    Thread.sleep(1000L);
                    var messageProperties = message.getMessageProperties();
                    var count = getCount(messageProperties);
                    messageProperties.setHeader("xb-count", count + 1);
                    var routingKey = messageProperties.getReceivedRoutingKey();
                    var exchange = messageProperties.getReceivedExchange();
                    amqpTemplate.send(exchange, routingKey, message);
                    logger.info("Sent!");
                } catch (InterruptedException e) {
                    logger.error("Sleep interrupted", e);
                }
            }).start();
            // NB: Produce the desired nack.
            throw new AmqpRejectAndDontRequeueException("Business logic exception, message will be re-queued with updated headers", cause);
        }
    }

    private static long getCount(MessageProperties messageProperties) {
        try {
            Long c = messageProperties.getHeader("xb-count");
            return c == null ? 0 : c;
        } catch (Exception e) {
            return 0;
        }
    }

    private static Message message(Object[] args) {
        try {
            return (Message) args[1];
        } catch (Exception e) {
            logger.info("Bad cast parse", e);
            throw new AmqpRejectAndDontRequeueException(e);
        }
    }
}
Now, as you can imagine, I'm not particularly pleased with the indeterminism of scheduling a new thread with a delay.
So my question is simply: is there any way I could produce a deterministic solution to my problem using the provided hooks of the ListenerContainer?
Your current solution risks message loss, since you are publishing on a different thread after a delay. If the server crashes during that delay, the message is lost.
It would be better to publish immediately to another queue with a TTL and dead-letter configuration that republishes the expired message back to the original queue.
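A minimal sketch of that approach (the queue names and the 5-second TTL are assumptions): declare a wait queue whose expired messages the broker dead-letters back to the original queue, and publish failed messages there instead of sleeping on a side thread.

@Bean
public Queue retryWaitQueue() {
    // messages published here sit for the TTL, then the broker dead-letters them
    // back to the original queue via the default exchange
    return QueueBuilder.durable("myQueue.wait")
            .ttl(5000)                          // redelivery delay in ms
            .deadLetterExchange("")             // default exchange routes by queue name
            .deadLetterRoutingKey("myQueue")    // the original queue
            .build();
}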
Using the RepublishMessageRecoverer with retries set to maxAttempts=1 should do what you need.
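A minimal sketch of the recoverer approach (the exchange and routing key are placeholders): a stateless retry interceptor with maxAttempts(1) performs no in-memory retries and hands the failed message straight to the RepublishMessageRecoverer. The interceptor then goes into the container factory's advice chain, like the custom advice in the question.

@Bean
public RetryOperationsInterceptor retryInterceptor(RabbitTemplate rabbitTemplate) {
    return RetryInterceptorBuilder.stateless()
            .maxAttempts(1) // fail straight through to the recoverer
            .recoverer(new RepublishMessageRecoverer(rabbitTemplate, "retryExchange", "retryRoutingKey"))
            .build();
}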

How to make message processing faster in a Spring Boot and MQ app?

Got a small Spring Boot messaging app that receives a message from the queue and inserts/updates a row in a DB2 table. Noticed this week it received a lot of messages but consumption was so slow that messages were filling the disk (infra complained about it). How can we make reading messages from the queue faster?
JMS Config
...
...
@Bean
public MQXAConnectionFactory mqxaQueueConnectionFactory() {
    MQXAConnectionFactory mqxaConnectionFactory = new MQXAConnectionFactory();
    log.info("Host: {}", host);
    log.info("Port: {}", port);
    log.info("Channel: {}", channel);
    log.info("Timeout: {}", receiveTimeout);
    try {
        mqxaConnectionFactory.setHostName(host);
        mqxaConnectionFactory.setPort(port);
        mqxaConnectionFactory.setQueueManager(queueManager);
        if (channel != null && !channel.trim().isEmpty()) {
            mqxaConnectionFactory.setChannel(channel);
        }
        mqxaConnectionFactory.setTransportType(WMQConstants.WMQ_CM_CLIENT);
    } catch (JMSException e) {
        throw new RuntimeException(e);
    }
    return mqxaConnectionFactory;
}

@Bean
public CachingConnectionFactory cachingConnectionFactory(MQXAConnectionFactory mqxaConnectionFactory) {
    CachingConnectionFactory cachingConnectionFactory = new CachingConnectionFactory();
    cachingConnectionFactory.setTargetConnectionFactory(mqxaConnectionFactory);
    cachingConnectionFactory.setSessionCacheSize(this.sessionCacheSize);
    cachingConnectionFactory.setCacheConsumers(this.cacheConsumers);
    cachingConnectionFactory.setReconnectOnException(true);
    return cachingConnectionFactory;
}

@Bean
@Primary
public SingleConnectionFactory singleConnectionFactory(MQXAConnectionFactory mqxaConnectionFactory) {
    SingleConnectionFactory singleConnectionFactory = new SingleConnectionFactory(mqxaConnectionFactory);
    singleConnectionFactory.setTargetConnectionFactory(mqxaConnectionFactory);
    singleConnectionFactory.setReconnectOnException(true);
    return singleConnectionFactory;
}

@Bean
public PlatformTransactionManager platformTransactionManager(CachingConnectionFactory cachingConnectionFactory) {
    return new JmsTransactionManager(cachingConnectionFactory);
}

@Bean
public JmsOperations jmsOperations(CachingConnectionFactory cachingConnectionFactory) {
    JmsTemplate jmsTemplate = new JmsTemplate(cachingConnectionFactory);
    jmsTemplate.setReceiveTimeout(receiveTimeout);
    return jmsTemplate;
}

@Bean
public DefaultJmsListenerContainerFactory defaultJmsListenerContainerFactory(PlatformTransactionManager transactionManager,
        @Qualifier("singleConnectionFactory") SingleConnectionFactory singleConnectionFactory) {
    EnhancedJmsListenerContainerFactory factory = new EnhancedJmsListenerContainerFactory();
    factory.setConnectionFactory(singleConnectionFactory);
    factory.setTransactionManager(transactionManager);
    factory.setConcurrency(concurrency);
    factory.setSessionAcknowledgeMode(Session.CLIENT_ACKNOWLEDGE);
    factory.setSessionTransacted(true);
    factory.setMaxMessagesPerTask(this.messagesPerTask);
    factory.setIdleTaskExecutionLimit(this.idleTaskExecutionLimit);
    return factory;
}
...
...
We moved from CachingConnectionFactory to SingleConnectionFactory since it opened too many queue connections.
Queue Listener
@Transactional
@Slf4j
@Component
@Profile("!thdtest")
public class QueueListener {

    private CatalogStatusDataProcessor catalogStatusDataProcessor;
    private HostToDestDataAckProcessor hostToDestDataAckProcessor;

    @Autowired
    public QueueListener(CatalogStatusDataProcessor catalogStatusDataProcessor, HostToDestDataAckProcessor hostToDestDataAckProcessor) {
        this.catalogStatusDataProcessor = catalogStatusDataProcessor;
        this.hostToDestDataAckProcessor = hostToDestDataAckProcessor;
    }

    @JmsListener(destination = "${project.mq.queue}", containerFactory = "defaultJmsListenerContainerFactory")
    public void onMessage(MAOMessage message) throws Exception {
        String messageString = message.getStringMessagePayload();
        try {
            if (log.isDebugEnabled())
                log.debug("Message received = " + messageString);
            StopWatch sw = new StopWatch("Received message");
            sw.start();
            Object obj = XMLGenerator.generateTOfromXML(messageString);
            if (obj instanceof ResponseTO) {
                catalogStatusDataProcessor.processCatalogStatusInfo((ResponseTO) obj);
            }
            else if (obj instanceof HostToStoreDataAckWrapper) {
                hostToDestDataAckProcessor.processHostToDestDataACK(messageString);
            }
            sw.stop();
            log.info("Message is processed in = " + sw.getTotalTimeSeconds() + " seconds");
        } catch (Exception e) {
            log.error("Exception in processing message", e);
            throw e;
        }
    }
}
Tried changing the concurrency settings from 2-4 to 4-6 and it didn't improve much.
Using Spring Boot 1.5.4.RELEASE, JDK 8, javax.jms 2.0.1, MQ allclient 9.0.
How do you know this is an MQ problem? In my experience (MQ performance) most performance problems were due to the other processing, for example: MQGET (fast), database update (slow), commit.
I would put some code around the various components and time them, for example (see the sketch after this list):
1. Get time1.
2. MQGET.
3. Get time2; calculate time2 - time1; if the delta > 10 ms, report this (or add it to a global counter).
4. Do the database work.
5. Get time3; calculate time3 - time2; if the delta > 10 ms, report this (or add it to a global counter).
If this does not report any problems, drop the threshold from 10 ms to 2 ms.
If this does not report any problems, try adding additional instances of your program, as it could be that work is coming in faster than one thread can process it.
I've also seen reducing threads help! When they did a database insert/update this caused contention, and threads were all waiting on the thread holding the lock; you would see this in the long database times.
As a first step you could turn on some MQ traces to report how long the MQ calls take, but adding the code I mentioned is good practice, especially if the problem is elsewhere.
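A minimal sketch of that timing idea applied to the onMessage() listener above (the 10 ms threshold and the log messages are assumptions):

long t1 = System.nanoTime();
Object obj = XMLGenerator.generateTOfromXML(messageString);            // the parse step
long t2 = System.nanoTime();
catalogStatusDataProcessor.processCatalogStatusInfo((ResponseTO) obj); // the database work
long t3 = System.nanoTime();
long parseMs = (t2 - t1) / 1_000_000;
long dbMs = (t3 - t2) / 1_000_000;
if (parseMs > 10) log.warn("Slow parse: {} ms", parseMs);              // report, or add to a global counter
if (dbMs > 10) log.warn("Slow database work: {} ms", dbMs);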

Polled Consumer With Functional Programming & Stream bridge

I'm using Spring Cloud Stream with the Kafka broker for microservice inter-communication. As part of this, StreamBridge will be used to send the message, which is fine.
But during consumption of said message, the message need not be consumed immediately; rather, it should only be consumed when a condition is satisfied.
From the documentation, I understand that I need to use polled consumers (do correct me if I'm mistaken) for this.
This is what I've tried from my understanding of the documentation.
application.properties
spring.cloud.stream.pollable-source = consumeResponse
spring.cloud.stream.function.definition = consumeResponse
#stream bridge
spring.cloud.stream.bindings.outputchannel1.destination = REQUEST_TOPIC
spring.cloud.stream.bindings.outputchannel1.binder= kafka1
#polled Consumer
spring.cloud.stream.bindings.consumeResponse-in-0.binder= kafka1
spring.cloud.stream.bindings.consumeResponse-in-0.destination = REQUEST_TOPIC
spring.cloud.stream.bindings.consumeResponse-in-0.group = consumer_cloud_stream1
spring.cloud.stream.binders.kafka1.type=kafka
MainApplication.java
@Bean
public CommandLineRunner commandLineRunner(ApplicationContext ctx) {
    return args -> {
        // produce messages
        for (int i = 0; i < 5; i++) {
            streamBridge.send("outputchannel1", "msg" + i);
            System.out.println("Request :: " + "msg" + i);
        }
    };
}

@Bean
public Consumer<String> consumeResponse() {
    return (response) -> {
        // consume message
        System.out.println("Response :: " + response);
    };
}

@Bean
public ApplicationRunner poller(PollableMessageSource destIn, MessageChannel destOut) {
    return args -> {
        while (someCondition) { // some condition that checks whether or not to consume the message
            try {
                // condition satisfied, so forward the message to the consumer
                if (!destIn.poll(m -> {
                    String newPayload = ((String) m.getPayload());
                    destOut.send(new GenericMessage<>(newPayload));
                })) {
                    Thread.sleep(1000);
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    };
}
But this throws the following exception:
Parameter 1 of method poller in com.MainApplication required a single bean, but 2 were found:
- nullChannel: defined in null
- errorChannel: defined in null
I'd appreciate it if someone could help me out here or point me towards a working example for the same.
Spring boot version: 2.6.4,
Spring cloud version: 2021.0.1
Why don't you just inject the StreamBridge into your runner instead of the message channel?
By default, stream bridge output channels are created on-demand (first send).
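A minimal sketch of that suggestion, reworking the poller from the question (the outputchannel1 binding name is reused from the question's properties):

@Bean
public ApplicationRunner poller(PollableMessageSource destIn, StreamBridge streamBridge) {
    return args -> {
        while (someCondition) {
            try {
                // forward the polled payload via StreamBridge instead of an injected
                // MessageChannel, so no unique MessageChannel bean is required
                if (!destIn.poll(m -> streamBridge.send("outputchannel1", m.getPayload()))) {
                    Thread.sleep(1000);
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    };
}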

Retry max 3 times when consuming batches in Spring Cloud Stream Kafka Binder

I am consuming batches in Kafka, where retry is not supported in the Spring Cloud Stream Kafka binder with batch mode; the option given is that you can configure a SeekToCurrentBatchErrorHandler (using a ListenerContainerCustomizer) to achieve functionality similar to retry in the binder.
I tried that, but with SeekToCurrentBatchErrorHandler it's retrying more than the number of times set, which is 3.
How can I do that? I would like to retry the whole batch.
How can I send the whole batch to a DLQ topic? For a record listener I used to match deliveryAttempt (retry) to 3 and then send to the DLQ topic; check in the listener.
I have checked this link, which is exactly my issue, but an example would be of great help. Can I achieve that with the spring-cloud-stream-kafka-binder library? Please explain with an example; I am new to this.
Currently I have the code below.
@Configuration
public class ConsumerConfig {

    @Bean
    public ListenerContainerCustomizer<AbstractMessageListenerContainer<?, ?>> customizer() {
        return (container, dest, group) -> {
            container.getContainerProperties().setAckOnError(false);
            SeekToCurrentBatchErrorHandler seekToCurrentBatchErrorHandler
                    = new SeekToCurrentBatchErrorHandler();
            seekToCurrentBatchErrorHandler.setBackOff(new FixedBackOff(0L, 2L));
            container.setBatchErrorHandler(seekToCurrentBatchErrorHandler);
            //container.setBatchErrorHandler(new BatchLoggingErrorHandler());
        };
    }
}
Listener:
@StreamListener(ActivityChannel.INPUT_CHANNEL)
public void handleActivity(List<Message<Event>> messages,
        @Header(name = KafkaHeaders.ACKNOWLEDGMENT) Acknowledgment acknowledgment,
        @Header(name = "deliveryAttempt", defaultValue = "1") int deliveryAttempt) {
    try {
        log.info("Received activity message with message length {}", messages.size());
        nodeConfigActivityBatchProcessor.processNodeConfigActivity(messages);
        acknowledgment.acknowledge();
        log.debug("Processed activity message {} successfully!!", messages.size());
    } catch (MessagePublishException e) {
        if (deliveryAttempt == 3) {
            log.error(
                    String.format("Exception occurred, sending the message=%s to DLQ due to: ",
                            "message"),
                    e);
            publisher.publishToDlq(EventType.UPDATE_FAILED, "message", e.getMessage());
        } else {
            throw e;
        }
    }
}
After seeing @Gary's response, I added the ListenerContainerCustomizer @Bean with RetryingBatchErrorHandler, but I am not able to import the class. Attaching screenshots:
[screenshot: not able to import RetryingBatchErrorHandler]
[screenshot: my Spring Cloud dependencies]
Use a RetryingBatchErrorHandler to send the whole batch to the DLT
https://docs.spring.io/spring-kafka/docs/current/reference/html/#retrying-batch-eh
Use a RecoveringBatchErrorHandler where you can throw a BatchListenerFailedException to tell it which record in the batch failed.
https://docs.spring.io/spring-kafka/docs/current/reference/html/#recovering-batch-eh
In both cases provide a DeadLetterPublishingRecoverer to the error handler; disable DLTs in the binder.
EDIT
Here's an example; it uses the newer functional style rather than the deprecated @StreamListener, but the same concepts apply (you should consider moving to the functional style).
@SpringBootApplication
public class So69175145Application {

    public static void main(String[] args) {
        SpringApplication.run(So69175145Application.class, args);
    }

    @Bean
    ListenerContainerCustomizer<AbstractMessageListenerContainer<?, ?>> customizer(
            KafkaTemplate<byte[], byte[]> template) {
        return (container, dest, group) -> {
            container.setBatchErrorHandler(new RetryingBatchErrorHandler(new FixedBackOff(5000L, 2L),
                    new DeadLetterPublishingRecoverer(template,
                            (rec, ex) -> new TopicPartition("errors." + dest + "." + group, rec.partition()))));
        };
    }

    /*
     * DLT topic won't be auto-provisioned since enableDlq is false
     */
    @Bean
    public NewTopic topic() {
        return TopicBuilder.name("errors.so69175145.grp").partitions(1).replicas(1).build();
    }

    /*
     * Functional equivalent of @StreamListener
     */
    @Bean
    public Consumer<List<String>> input() {
        return list -> {
            System.out.println(list);
            throw new RuntimeException("test");
        };
    }

    /*
     * Not needed here - just to show we sent them to the DLT
     */
    @KafkaListener(id = "so69175145", topics = "errors.so69175145.grp")
    public void listen(String in) {
        System.out.println("From DLT: " + in);
    }
}
spring.cloud.stream.bindings.input-in-0.destination=so69175145
spring.cloud.stream.bindings.input-in-0.group=grp
spring.cloud.stream.bindings.input-in-0.content-type=text/plain
spring.cloud.stream.bindings.input-in-0.consumer.batch-mode=true
# for DLT listener
spring.kafka.consumer.auto-offset-reset=earliest
[foo]
2021-09-14 09:55:32.838 ERROR ...
...
[foo]
2021-09-14 09:55:37.873 ERROR ...
...
[foo]
2021-09-14 09:55:42.886 ERROR ...
...
From DLT: foo

RabbitListener does not pick up every message sent with AsyncRabbitTemplate

I am using a Spring Boot project on Spring Boot version 1.5.4, with spring-boot-starter-amqp, spring-boot-starter-web-services and spring-ws-support v2.4.0.
So far, I have successfully created a @RabbitListener component which does exactly what it is supposed to do when a message is sent to the broker via rabbitTemplate.sendAndReceive(uri, message). I tried to see what would happen if I used AsyncRabbitTemplate for this, as it is possible that the message processing might take a while, and I don't want to block my application while waiting for a response.
The problem is: the first message I put in the queue is not even picked up by the listener. The callback just acknowledges a success with the published message instead of the returned message.
Listener:
@RabbitListener(queues = KEY_MESSAGING_QUEUE)
public Message processMessage(@Payload byte[] payload, @Headers Map<String, Object> headers) {
    try {
        byte[] resultBody = messageProcessor.processMessage(payload, headers);
        MessageBuilder builder = MessageBuilder.withBody(resultBody);
        if (resultBody.length == 0) {
            builder.setHeader(HEADER_NAME_ERROR_MESSAGE, "Error occurred during processing.");
        }
        return builder.build();
    } catch (Exception ex) {
        return MessageBuilder.withBody(EMPTY_BODY)
                .setHeader(HEADER_NAME_ERROR_MESSAGE, ex.getMessage())
                .setHeader(HEADER_NAME_STACK_TRACE, ex.getStackTrace())
                .build();
    }
}
When I am executing my tests, one test fails and the second test succeeds. The class is annotated with @RunWith(SpringJUnit4ClassRunner.class) and @SpringBootTest(classes = { Application.class, Test.TestConfiguration.class }) and has a @ClassRule of BrokerRunning.isRunningWithEmptyQueues(QUEUE_NAME).
TestConfiguration (inner class):
public static class TestConfiguration {

    @Bean // referenced in the tests as art
    public AsyncRabbitTemplate asyncRabbitTemplate(ConnectionFactory connectionFactory, RabbitTemplate rabbitTemplate) {
        SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory);
        container.setQueueNames(QUEUE_NAME);
        return new AsyncRabbitTemplate(rabbitTemplate, container);
    }

    @Bean
    public MessageListener messageListener() {
        return new MessageListener();
    }
}
Tests:
@Test
public void shouldListenAndReplyToQueue() throws Exception {
    doReturn(RESULT_BODY)
            .when(innerMock)
            .processMessage(any(byte[].class), anyMapOf(String.class, Object.class));
    Message msg = MessageBuilder
            .withBody(MESSAGE_BODY)
            .setHeader("header", "value")
            .setHeader("auth", "entication")
            .build();
    RabbitMessageFuture pendingReply = art.sendAndReceive(QUEUE_NAME, msg);
    pendingReply.addCallback(new ListenableFutureCallback<Message>() {
        @Override
        public void onSuccess(Message result) { }

        @Override
        public void onFailure(Throwable ex) {
            throw new RuntimeException(ex);
        }
    });
    while (!pendingReply.isDone()) {}
    result = pendingReply.get();
    // assertions omitted
}
Test 2:
@Test
public void shouldReturnExceptionToCaller() throws Exception {
    doThrow(new SSLSenderInstantiationException("I am a message", new Exception()))
            .when(innerMock)
            .processMessage(any(byte[].class), anyMapOf(String.class, Object.class));
    Message msg = MessageBuilder
            .withBody(MESSAGE_BODY)
            .setHeader("header", "value")
            .setHeader("auth", "entication")
            .build();
    RabbitMessageFuture pendingReply = art.sendAndReceive(QUEUE_NAME, msg);
    pendingReply.addCallback(/* same as above */);
    while (!pendingReply.isDone()) {}
    result = pendingReply.get();
    // assertions omitted
}
When I run both tests together, the test that is executed first fails, while the second call succeeds.
When I run both tests separately, both fail.
When I add an @Before method which uses the AsyncRabbitTemplate art to put any message into the queue, both tests MAY pass, or the second test MAY not pass, so in addition to being unexpected, the behaviour is inconsistent as well.
The interesting thing is, that the callback passed to the method reports a success before the listener is invoked, and reports the sent message as result.
The only class missing from this is the general configuration class, which is annotated with @EnableRabbit and has this content:
@Bean
public SimpleRabbitListenerContainerFactory simpleRabbitListenerContainerFactory(ConnectionFactory connectionFactory) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    factory.setConcurrentConsumers(10);
    return factory;
}
Other things I have tried:
- specifically create the AsyncRabbitTemplate myself, start and stop it manually before and after every message process -> both tests succeeded
- increase / decrease the receive timeout -> no effect
- remove and change the callback -> no effect
- explicitly create the queue again with an injected RabbitAdmin -> no effect
- extract the callback to a constant -> tests didn't even start correctly
- as stated above, use RabbitTemplate directly, which worked exactly as intended
If anyone has any ideas what is missing, I'd be very happy to hear them.
You can't use the same queue for requests and replies...
@Bean // referenced in the tests as art
public AsyncRabbitTemplate asyncRabbitTemplate(ConnectionFactory connectionFactory, RabbitTemplate rabbitTemplate) {
    SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory);
    container.setQueueNames(QUEUE_NAME);
    return new AsyncRabbitTemplate(rabbitTemplate, container);
}
Will listen for replies on QUEUE_NAME, so...
RabbitMessageFuture pendingReply = art.sendAndReceive(QUEUE_NAME, msg);
...simply sends a message to itself. It looks like you intended...
RabbitMessageFuture pendingReply = art.sendAndReceive(KEY_MESSAGING_QUEUE, msg);
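In other words, a minimal sketch of the intended wiring (REPLY_QUEUE_NAME is a placeholder for a dedicated reply queue you would declare):

@Bean
public AsyncRabbitTemplate asyncRabbitTemplate(ConnectionFactory connectionFactory, RabbitTemplate rabbitTemplate) {
    SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory);
    container.setQueueNames(REPLY_QUEUE_NAME); // a dedicated reply queue, NOT the request queue
    return new AsyncRabbitTemplate(rabbitTemplate, container);
}
// requests still go to the queue the @RabbitListener consumes:
// RabbitMessageFuture pendingReply = art.sendAndReceive(KEY_MESSAGING_QUEUE, msg);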
