How to make message processing faster in a Spring Boot and MQ app? - java

I have a small Spring Boot messaging app that receives messages from a queue and inserts/updates rows in a DB2 table. This week I noticed it received a lot of messages but consumed them so slowly that the messages were filling the disk (infra complained about it). How can we make reading messages from the queue faster?
JMS Config
...
...
@Bean
public MQXAConnectionFactory mqxaQueueConnectionFactory() {
    MQXAConnectionFactory mqxaConnectionFactory = new MQXAConnectionFactory();
    log.info("Host: {}", host);
    log.info("Port: {}", port);
    log.info("Channel: {}", channel);
    log.info("Timeout: {}", receiveTimeout);
    try {
        mqxaConnectionFactory.setHostName(host);
        mqxaConnectionFactory.setPort(port);
        mqxaConnectionFactory.setQueueManager(queueManager);
        if (channel != null && !channel.trim().isEmpty()) {
            mqxaConnectionFactory.setChannel(channel);
        }
        mqxaConnectionFactory.setTransportType(WMQConstants.WMQ_CM_CLIENT);
    } catch (JMSException e) {
        throw new RuntimeException(e);
    }
    return mqxaConnectionFactory;
}

@Bean
public CachingConnectionFactory cachingConnectionFactory(MQXAConnectionFactory mqxaConnectionFactory) {
    CachingConnectionFactory cachingConnectionFactory = new CachingConnectionFactory();
    cachingConnectionFactory.setTargetConnectionFactory(mqxaConnectionFactory);
    cachingConnectionFactory.setSessionCacheSize(this.sessionCacheSize);
    cachingConnectionFactory.setCacheConsumers(this.cacheConsumers);
    cachingConnectionFactory.setReconnectOnException(true);
    return cachingConnectionFactory;
}

@Bean
@Primary
public SingleConnectionFactory singleConnectionFactory(MQXAConnectionFactory mqxaConnectionFactory) {
    SingleConnectionFactory singleConnectionFactory = new SingleConnectionFactory(mqxaConnectionFactory);
    singleConnectionFactory.setTargetConnectionFactory(mqxaConnectionFactory);
    singleConnectionFactory.setReconnectOnException(true);
    return singleConnectionFactory;
}

@Bean
public PlatformTransactionManager platformTransactionManager(CachingConnectionFactory cachingConnectionFactory) {
    return new JmsTransactionManager(cachingConnectionFactory);
}

@Bean
public JmsOperations jmsOperations(CachingConnectionFactory cachingConnectionFactory) {
    JmsTemplate jmsTemplate = new JmsTemplate(cachingConnectionFactory);
    jmsTemplate.setReceiveTimeout(receiveTimeout);
    return jmsTemplate;
}

@Bean
public DefaultJmsListenerContainerFactory defaultJmsListenerContainerFactory(PlatformTransactionManager transactionManager,
        @Qualifier("singleConnectionFactory") SingleConnectionFactory singleConnectionFactory) {
    EnhancedJmsListenerContainerFactory factory = new EnhancedJmsListenerContainerFactory();
    factory.setConnectionFactory(singleConnectionFactory);
    factory.setTransactionManager(transactionManager);
    factory.setConcurrency(concurrency);
    factory.setSessionAcknowledgeMode(Session.CLIENT_ACKNOWLEDGE);
    factory.setSessionTransacted(true);
    factory.setMaxMessagesPerTask(this.messagesPerTask);
    factory.setIdleTaskExecutionLimit(this.idleTaskExecutionLimit);
    return factory;
}
...
...
We moved from CachingConnectionFactory to SingleConnectionFactory because the caching factory opened too many queue connections.
Queue Listener
@Transactional
@Slf4j
@Component
@Profile("!thdtest")
public class QueueListener {

    private CatalogStatusDataProcessor catalogStatusDataProcessor;
    private HostToDestDataAckProcessor hostToDestDataAckProcessor;

    @Autowired
    public QueueListener(CatalogStatusDataProcessor catalogStatusDataProcessor, HostToDestDataAckProcessor hostToDestDataAckProcessor) {
        this.catalogStatusDataProcessor = catalogStatusDataProcessor;
        this.hostToDestDataAckProcessor = hostToDestDataAckProcessor;
    }

    @JmsListener(destination = "${project.mq.queue}", containerFactory = "defaultJmsListenerContainerFactory")
    public void onMessage(MAOMessage message) throws Exception {
        String messageString = message.getStringMessagePayload();
        try {
            if (log.isDebugEnabled()) {
                log.debug("Message received = " + messageString);
            }
            StopWatch sw = new StopWatch("Received message");
            sw.start();
            Object obj = XMLGenerator.generateTOfromXML(messageString);
            if (obj instanceof ResponseTO) {
                catalogStatusDataProcessor.processCatalogStatusInfo((ResponseTO) obj);
            } else if (obj instanceof HostToStoreDataAckWrapper) {
                hostToDestDataAckProcessor.processHostToDestDataACK(messageString);
            }
            sw.stop();
            log.info("Message is processed in = " + sw.getTotalTimeSeconds() + " seconds");
        } catch (Exception e) {
            // pass the throwable as the last argument so the stack trace is logged
            log.error("Exception in processing message", e);
            throw e;
        }
    }
}
I tried changing the concurrency setting from 2-4 to 4-6 and it didn't improve much.
Using Spring Boot 1.5.4.RELEASE, JDK 8, javax.jms 2.0.1, MQ allclient 9.0.

How do you know this is an MQ problem? In my experience (MQ performance), most performance problems were due to the other processing, for example: MQGET (fast), database update (slow), commit.
I would put some code around the various components and time them. For example:
Get time1
MQGET
Get time2
Calculate time2 - time1; if the delta is > 10 ms, report it (or add it to a global counter).
Do the database work.
Get time3
Calculate time3 - time2; if the delta is > 10 ms, report it (or add it to a global counter).
If this does not report any problems, drop the threshold from 10 ms to 2 ms.
If this still does not report any problems, try adding additional instances of your program, as it could be that work is coming in faster than one thread can process it.
I've also seen reducing threads help! When they did a database insert/update this caused contention, and threads were all waiting on the thread holding the lock; you would see this from the long database times.
As a first step you could turn on some MQ traces to report how long the MQ calls take, but adding the code I mentioned is good practice, especially if the problem is elsewhere.
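As an illustration, here is a minimal sketch of that kind of instrumentation applied to the onMessage() listener from the question. The 10 ms threshold and the slowDbCount counter are illustrative choices, not part of the original code, and the sketch reuses the listener's existing fields (log, catalogStatusDataProcessor, XMLGenerator).
import java.util.concurrent.atomic.AtomicLong;

// inside QueueListener
private static final AtomicLong slowDbCount = new AtomicLong();

public void onMessage(MAOMessage message) throws Exception {
    long received = System.nanoTime();   // "time1": message handed to the listener
    Object obj = XMLGenerator.generateTOfromXML(message.getStringMessagePayload());
    long parsed = System.nanoTime();     // "time2": parsing done, database work starts
    if (obj instanceof ResponseTO) {
        catalogStatusDataProcessor.processCatalogStatusInfo((ResponseTO) obj);
    }
    long dbDone = System.nanoTime();     // "time3": database insert/update finished
    long dbMillis = (dbDone - parsed) / 1_000_000;
    if (dbMillis > 10) {                 // report slow database work, or just count it
        slowDbCount.incrementAndGet();
        log.warn("Slow DB work: {} ms (parse took {} ms)", dbMillis, (parsed - received) / 1_000_000);
    }
}
If the database step dominates, tuning MQ concurrency alone will not help; that is the point of measuring each component separately.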

Related

Republish message to same queue with updated headers after automatic nack in Spring AMQP

I am trying to configure my Spring AMQP ListenerContainer to allow for a certain type of retry flow that's backwards compatible with a custom rabbit client previously used in the project I'm working on.
The protocol works as follows:
A message is received on a channel.
If processing fails the message is nacked with the republish flag set to false
A copy of the message with additional/updated headers (a retry counter) is published to the same queue
The headers are used for filtering incoming messages, but that's not important here.
I would like the behaviour to happen on an opt-in basis, so that more standardised Spring retry flows can be used in cases where compatibility with the old client isn't a concern, and the listeners should be able to work without requiring manual acking.
I have implemented a working solution, which I'll get back to below. Where I'm struggling is to publish the new message after signalling to the container that it should nack the current message, because I can't really find any good hooks after the nack or before the next message.
Reading the documentation, it feels like I'm looking for something analogous to the behaviour of RepublishMessageRecoverer used as the final step of a retry interceptor. The main difference in my case is that I need to republish immediately on failure, not as a final recovery step. I tried to look at the implementation of RepublishMessageRecoverer, but the many layers of indirection made it hard for me to understand where the republishing is triggered, and whether a nack happens before that.
My working implementation looks as follows. Note that I'm using an AfterThrowsAdvice, but I think an error handler could also be used with nearly identical logic.
/*
 * MyConfig.class, configuring the container factory
 */
@Configuration
public class MyConfig {

    @Bean
    // NB: bean name is important, overwrites autoconfigured bean
    public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(
            ConnectionFactory connectionFactory,
            Jackson2JsonMessageConverter messageConverter,
            RabbitTemplate rabbitTemplate
    ) {
        SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
        factory.setConnectionFactory(connectionFactory);
        factory.setMessageConverter(messageConverter);
        // AOP
        var a1 = new CustomHeaderInspectionAdvice();
        var a2 = new MyThrowsAdvice(rabbitTemplate);
        Advice[] adviceChain = {a1, a2};
        factory.setAdviceChain(adviceChain);
        return factory;
    }
}
/*
 * MyThrowsAdvice.class, hooking into the exception flow from the listener
 */
public class MyThrowsAdvice implements ThrowsAdvice {

    private static final Logger logger = LoggerFactory.getLogger(MyThrowsAdvice.class);

    private final AmqpTemplate amqpTemplate;

    public MyThrowsAdvice(AmqpTemplate amqpTemplate) {
        this.amqpTemplate = amqpTemplate;
    }

    public void afterThrowing(Method method, Object[] args, Object target, ListenerExecutionFailedException ex) {
        var message = message(args);
        var cause = ex.getCause();
        // opt-in to old protocol by throwing an instance of BusinessException in business logic
        if (cause instanceof BusinessException) {
            /*
             * NB: Since we want to trigger execution after the current method fails
             * with an exception we need to schedule it in another thread and delay
             * execution until the nack has happened.
             */
            new Thread(() -> {
                try {
                    Thread.sleep(1000L);
                    var messageProperties = message.getMessageProperties();
                    var count = getCount(messageProperties);
                    messageProperties.setHeader("xb-count", count + 1);
                    var routingKey = messageProperties.getReceivedRoutingKey();
                    var exchange = messageProperties.getReceivedExchange();
                    amqpTemplate.send(exchange, routingKey, message);
                    logger.info("Sent!");
                } catch (InterruptedException e) {
                    logger.error("Sleep interrupted", e);
                }
            }).start();
            // NB: Produce the desired nack.
            throw new AmqpRejectAndDontRequeueException("Business logic exception, message will be re-queued with updated headers", cause);
        }
    }

    private static long getCount(MessageProperties messageProperties) {
        try {
            Long c = messageProperties.getHeader("xb-count");
            return c == null ? 0 : c;
        } catch (Exception e) {
            return 0;
        }
    }

    private static Message message(Object[] args) {
        try {
            return (Message) args[1];
        } catch (Exception e) {
            logger.info("Bad cast parse", e);
            throw new AmqpRejectAndDontRequeueException(e);
        }
    }
}
Now, as you can imagine, I'm not particularly pleased with the indeterminism of scheduling a new thread with a delay.
So my question is simply: is there any way I could produce a deterministic solution to my problem using the provided hooks of the ListenerContainer?
Your current solution risks message loss, since you are publishing on a different thread after a delay. If the server crashes during that delay, the message is lost.
It would be better to publish immediately to another queue with a TTL and dead-letter configuration to republish the expired message back to the original queue.
Using the RepublishMessageRecoverer with retries set to max-attempts=1 should do what you need.
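For illustration, a minimal sketch of that suggestion using spring-rabbit's RetryInterceptorBuilder; the exchange and routing key names are placeholders, and unlike the custom xb-count header above, the recoverer adds its own x-exception-* headers to the republished message.
@Bean
public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(
        ConnectionFactory connectionFactory, RabbitTemplate rabbitTemplate) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    // maxAttempts(1) means no in-memory retries: the first failure goes straight to the
    // recoverer, which republishes the message instead of letting it be requeued.
    factory.setAdviceChain(RetryInterceptorBuilder.stateless()
            .maxAttempts(1)
            .recoverer(new RepublishMessageRecoverer(rabbitTemplate, "error.exchange", "error.routing.key")) // placeholder names
            .build());
    return factory;
}
This keeps the republish on the consumer thread, before the container acknowledges or rejects the delivery, so there is no window in which the message exists only in memory.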

Using spring integration v5.5.14 lots of queued task are increasing

After upgrading to Spring Boot v2.7.1 we are seeing lots of queued tasks; we never saw queued tasks increase like this in the previous version we were using, v2.2.2.
Our team has tried to check things in v2.7.1 but couldn't find anything in this version.
Can anyone please review the code and let us know what we are missing or have written wrong that is causing the issue? We are using Spring Integration to pull emails from a client server, and for that we have added a task executor to have concurrent polling.
Versions that we use:
Spring Boot = 2.7.1
Spring Integration = 5.5.14
Earlier we were using:
Spring Boot = 2.2.2 release
Spring Integration = 5.2.3 release
I've attached the code below.
Configuration class for Imap Integration
@Configuration
@EnableIntegration
public class ImapIntegrationConfig {

    private final ApplicationContext applicationContext;

    @Autowired
    public ImapIntegrationConfig(ApplicationContext applicationContext) {
        this.applicationContext = applicationContext;
    }

    @Bean("mailTaskExecutor")
    public ThreadPoolTaskExecutor mailTaskExecutor() {
        ThreadPoolTaskExecutor taskExecutor = new ThreadPoolTaskExecutor();
        taskExecutor.setMaxPoolSize(1000);
        taskExecutor.setCorePoolSize(100);
        taskExecutor.setTaskDecorator(new SecurityAwareTaskDecorator(applicationContext));
        taskExecutor.setWaitForTasksToCompleteOnShutdown(true);
        taskExecutor.setAwaitTerminationSeconds(Integer.MAX_VALUE);
        return taskExecutor;
    }

    @Bean("imapMailChannel")
    public ExecutorChannelSpec imapMailChannel() {
        return MessageChannels.executor(mailTaskExecutor());
    }

    @Bean
    public HeaderMapper<MimeMessage> mailHeaderMapper() {
        return new DefaultMailHeaderMapper();
    }
}
ImapListener Class to register the flow
public void registerImapFlow(ImapSetting imapSetting) {
    ImapMailReceiver mailReceiver = createImapMailReceiver(imapSetting);

    // create the flow for an email process
    //@formatter:off
    StandardIntegrationFlow flow = IntegrationFlows
            .from(Mail.imapInboundAdapter(mailReceiver),
                  consumer -> consumer.autoStartup(true)
                          .poller(Pollers.fixedDelay(Duration.ofSeconds(5), Duration.ofMinutes(2))
                                  .taskExecutor(taskExecutor)
                                  .errorHandler(t -> logger.error("Error while polling emails for address " + imapSetting.getUsername(), t))
                                  .maxMessagesPerPoll(10)))
            .enrichHeaders(Map.of(CONCERN_CODE, imapSetting.getConcernCode(), IMAP_CONFIG_ID, imapSetting.getImapSettingId()))
            .channel(imapMailChannel).get();
    //@formatter:on

    // give the bean a unique name to avoid clashes with multiple imap settings
    String flowId = concernIdentifier.getConcernIdentifier() + "-" + imapSetting.getImapSettingId();
    IntegrationFlowContext.IntegrationFlowRegistration existingFlow = integrationFlowContext.getRegistrationById(flowId);
    if (existingFlow != null) {
        // destroy the previous beans
        existingFlow.destroy();
    }
    // register the new flow
    integrationFlowContext.registration(flow).id(flowId).useFlowIdAsPrefix().register();
}
Process message method
@ServiceActivator(inputChannel = "imapMailChannel")
public void processMessage(Message<?> message) throws InvalidMessageException {
    String concern = (String) message.getHeaders().get(CONCERN_CODE);
    if (isEmpty(concern)) {
        logger.error("Received null concern!");
    }
    Long imapConfigId = (Long) message.getHeaders().get(IMAP_CONFIG_ID);
    String logMessage = null;
    String messageId = null;
    try {
        Object payload = message.getPayload();
        if (payload instanceof MimeMultipart) {
            //.......................//
        }
        else if (payload instanceof String) {
            //......................//
        }
    }
    catch (Exception e) {
        logger.error("Error while processing " + logMessage, e);
        if (concern != null) {
            metricUtil.emailFailed(concern);
        }
        throw new MaxxtonException("CCM-MessageID: Exception in processMessage() method", e, MessageErrorCode.UNABLE_TO_PROCESS_EMAIL);
    }
    metricUtil.emailProcessed(concern);
}
ImapMailReceiver method
private ImapMailReceiver createImapMailReceiver(ImapSetting imapSettings) {
    String url = String.format(imapSettings.getImapUrl(), URLEncoder.encode(imapSettings.getUsername(), UTF_8), URLEncoder.encode(imapSettings.getPassword(), UTF_8));
    ImapMailReceiver receiver = new ImapMailReceiver(url);
    receiver.setSimpleContent(true);
    Properties mailProperties = new Properties();
    mailProperties.put("mail.debug", "false");
    mailProperties.put("mail.imap.connectionpoolsize", "5");
    mailProperties.put("mail.imap.fetchsize", 4194304);
    mailProperties.put("mail.imap.connectiontimeout", 15000);
    mailProperties.put("mail.imap.timeout", 30000);
    mailProperties.put("mail.imaps.connectionpoolsize", "5");
    mailProperties.put("mail.imaps.fetchsize", 4194304);
    mailProperties.put("mail.imaps.connectiontimeout", 15000);
    mailProperties.put("mail.imaps.timeout", 30000);
    receiver.setJavaMailProperties(mailProperties);
    receiver.setSearchTermStrategy(this::notSeenTerm);
    receiver.setAutoCloseFolder(false);
    receiver.setShouldDeleteMessages(false);
    receiver.setShouldMarkMessagesAsRead(true);
    receiver.setHeaderMapper(mailHeaderMapper);
    receiver.setEmbeddedPartsAsBytes(false);
    return receiver;
}
I've attached a screenshot from Grafana of active and queued tasks after we upgraded to Spring Boot v2.7.1 and Spring Integration v5.5.14.
At a glance it all looks OK, unless you really don't close that folder manually elsewhere, since you use receiver.setAutoCloseFolder(false).
There is no reason for that .taskExecutor(taskExecutor), since you use MessageChannels.executor(mailTaskExecutor()) immediately after producing the message from the Mail.imapInboundAdapter().
I remember that in Gitter I suggested you check how it works with spring.task.scheduling.pool.size=10 placed in application.properties. This is the only obvious difference between the mentioned versions: https://docs.spring.io/spring-boot/docs/current/reference/html/messaging.html#messaging.spring-integration.
Your screenshot doesn't prove that the problem is exactly with Spring Integration. Perhaps tasks are queued somehow by the tool which exports metrics to Grafana. I believe you have upgraded more than just Spring Integration in your project...
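For clarity, the suggested check is a single line in application.properties; the value 10 is just the figure mentioned above and can be tuned.
# size of the auto-configured task scheduler pool that Spring Integration pollers use (the setting the answer refers to)
spring.task.scheduling.pool.size=10
Combined with removing the redundant .taskExecutor(taskExecutor) from the poller, this isolates whether the queued tasks come from the scheduler or from the mailTaskExecutor channel.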

Retry max 3 times when consuming batches in Spring Cloud Stream Kafka Binder

I am consuming batches in Kafka, where retry is not supported in the Spring Cloud Stream Kafka binder with batch mode; the option given is that you can configure a SeekToCurrentBatchErrorHandler (using a ListenerContainerCustomizer) to achieve functionality similar to retry in the binder.
I tried that with SeekToCurrentBatchErrorHandler, but it's retrying more than the number of times set, which is 3.
How can I do that?
I would like to retry the whole batch.
How can I send the whole batch to a DLQ topic? For a record listener I used to match deliveryAttempt (retry) to 3 and then send to the DLQ topic; check in the listener.
I have checked this link, which is exactly my issue, but an example would be a great help. Can I achieve that with this library, spring-cloud-stream-kafka-binder? Please explain with an example, I am new to this.
Currently I have the code below.
@Configuration
public class ConsumerConfig {

    @Bean
    public ListenerContainerCustomizer<AbstractMessageListenerContainer<?, ?>> customizer() {
        return (container, dest, group) -> {
            container.getContainerProperties().setAckOnError(false);
            SeekToCurrentBatchErrorHandler seekToCurrentBatchErrorHandler = new SeekToCurrentBatchErrorHandler();
            seekToCurrentBatchErrorHandler.setBackOff(new FixedBackOff(0L, 2L));
            container.setBatchErrorHandler(seekToCurrentBatchErrorHandler);
            //container.setBatchErrorHandler(new BatchLoggingErrorHandler());
        };
    }
}
Listener:
@StreamListener(ActivityChannel.INPUT_CHANNEL)
public void handleActivity(List<Message<Event>> messages,
        @Header(name = KafkaHeaders.ACKNOWLEDGMENT) Acknowledgment acknowledgment,
        @Header(name = "deliveryAttempt", defaultValue = "1") int deliveryAttempt) {
    try {
        log.info("Received activity message with message length {}", messages.size());
        nodeConfigActivityBatchProcessor.processNodeConfigActivity(messages);
        acknowledgment.acknowledge();
        log.debug("Processed activity message {} successfully!!", messages.size());
    } catch (MessagePublishException e) {
        if (deliveryAttempt == 3) {
            log.error(
                    String.format("Exception occurred, sending the message=%s to DLQ due to: ", "message"),
                    e);
            publisher.publishToDlq(EventType.UPDATE_FAILED, "message", e.getMessage());
        } else {
            throw e;
        }
    }
}
After seeing @Gary's response I added the ListenerContainerCustomizer @Bean with RetryingBatchErrorHandler, but I am not able to import the class. Attaching screenshots:
not able to import RetryingBatchErrorHandler
my spring cloud dependencies
Use a RetryingBatchErrorHandler to send the whole batch to the DLT
https://docs.spring.io/spring-kafka/docs/current/reference/html/#retrying-batch-eh
Use a RecoveringBatchErrorHandler where you can throw a BatchListenerFailedException to tell it which record in the batch failed.
https://docs.spring.io/spring-kafka/docs/current/reference/html/#recovering-batch-eh
In both cases provide a DeadLetterPublishingRecoverer to the error handler; disable DLTs in the binder.
EDIT
Here's an example; it uses the newer functional style rather than the deprecated @StreamListener, but the same concepts apply (you should consider moving to the functional style, though).

@SpringBootApplication
public class So69175145Application {

    public static void main(String[] args) {
        SpringApplication.run(So69175145Application.class, args);
    }

    @Bean
    ListenerContainerCustomizer<AbstractMessageListenerContainer<?, ?>> customizer(
            KafkaTemplate<byte[], byte[]> template) {
        return (container, dest, group) -> {
            container.setBatchErrorHandler(new RetryingBatchErrorHandler(new FixedBackOff(5000L, 2L),
                    new DeadLetterPublishingRecoverer(template,
                            (rec, ex) -> new TopicPartition("errors." + dest + "." + group, rec.partition()))));
        };
    }

    /*
     * DLT topic won't be auto-provisioned since enableDlq is false
     */
    @Bean
    public NewTopic topic() {
        return TopicBuilder.name("errors.so69175145.grp").partitions(1).replicas(1).build();
    }

    /*
     * Functional equivalent of @StreamListener
     */
    @Bean
    public Consumer<List<String>> input() {
        return list -> {
            System.out.println(list);
            throw new RuntimeException("test");
        };
    }

    /*
     * Not needed here - just to show we sent them to the DLT
     */
    @KafkaListener(id = "so69175145", topics = "errors.so69175145.grp")
    public void listen(String in) {
        System.out.println("From DLT: " + in);
    }
}
spring.cloud.stream.bindings.input-in-0.destination=so69175145
spring.cloud.stream.bindings.input-in-0.group=grp
spring.cloud.stream.bindings.input-in-0.content-type=text/plain
spring.cloud.stream.bindings.input-in-0.consumer.batch-mode=true
# for DLT listener
spring.kafka.consumer.auto-offset-reset=earliest
[foo]
2021-09-14 09:55:32.838 ERROR ...
...
[foo]
2021-09-14 09:55:37.873 ERROR ...
...
[foo]
2021-09-14 09:55:42.886 ERROR ...
...
From DLT: foo

RabbitListener does not pick up every message sent with AsyncRabbitTemplate

I am using a Spring Boot project on Spring Boot version 1.5.4, with spring-boot-starter-amqp, spring-boot-starter-web-services and spring-ws-support v2.4.0.
So far, I have successfully created a @RabbitListener component, which does exactly what it is supposed to do when a message is sent to the broker via rabbitTemplate.sendAndReceive(uri, message). I tried to see what would happen if I used AsyncRabbitTemplate for this, as it is possible that the message processing might take a while, and I don't want to lock my application while waiting for a response.
The problem is: the first message I put in the queue is not even picked up by the listener. The callback just acknowledges a success with the published message instead of the returned message.
Listener:
@RabbitListener(queues = KEY_MESSAGING_QUEUE)
public Message processMessage(@Payload byte[] payload, @Headers Map<String, Object> headers) {
    try {
        byte[] resultBody = messageProcessor.processMessage(payload, headers);
        MessageBuilder builder = MessageBuilder.withBody(resultBody);
        if (resultBody.length == 0) {
            builder.setHeader(HEADER_NAME_ERROR_MESSAGE, "Error occurred during processing.");
        }
        return builder.build();
    } catch (Exception ex) {
        return MessageBuilder.withBody(EMPTY_BODY)
                .setHeader(HEADER_NAME_ERROR_MESSAGE, ex.getMessage())
                .setHeader(HEADER_NAME_STACK_TRACE, ex.getStackTrace())
                .build();
    }
}
When I execute my tests, one test fails and the second test succeeds. The class is annotated with @RunWith(SpringJUnit4ClassRunner.class) and @SpringBootTest(classes = { Application.class, Test.TestConfiguration.class }) and has a @ClassRule of BrokerRunning.isRunningWithEmptyQueues(QUEUE_NAME).
TestConfiguration (inner class):
public static class TestConfiguration {

    @Bean // referenced in the tests as art
    public AsyncRabbitTemplate asyncRabbitTemplate(ConnectionFactory connectionFactory, RabbitTemplate rabbitTemplate) {
        SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory);
        container.setQueueNames(QUEUE_NAME);
        return new AsyncRabbitTemplate(rabbitTemplate, container);
    }

    @Bean
    public MessageListener messageListener() {
        return new MessageListener();
    }
}
Tests:
@Test
public void shouldListenAndReplyToQueue() throws Exception {
    doReturn(RESULT_BODY)
            .when(innerMock)
            .processMessage(any(byte[].class), anyMapOf(String.class, Object.class));
    Message msg = MessageBuilder
            .withBody(MESSAGE_BODY)
            .setHeader("header", "value")
            .setHeader("auth", "entication")
            .build();
    RabbitMessageFuture pendingReply = art.sendAndReceive(QUEUE_NAME, msg);
    pendingReply.addCallback(new ListenableFutureCallback<Message>() {
        @Override
        public void onSuccess(Message result) { }
        @Override
        public void onFailure(Throwable ex) {
            throw new RuntimeException(ex);
        }
    });
    while (!pendingReply.isDone()) {}
    result = pendingReply.get();
    // assertions omitted
}
Test 2:
@Test
public void shouldReturnExceptionToCaller() throws Exception {
    doThrow(new SSLSenderInstantiationException("I am a message", new Exception()))
            .when(innerMock)
            .processMessage(any(byte[].class), anyMapOf(String.class, Object.class));
    Message msg = MessageBuilder
            .withBody(MESSAGE_BODY)
            .setHeader("header", "value")
            .setHeader("auth", "entication")
            .build();
    RabbitMessageFuture pendingReply = art.sendAndReceive(QUEUE_NAME, msg);
    pendingReply.addCallback(/*same as above*/);
    while (!pendingReply.isDone()) {}
    result = pendingReply.get();
    // assertions omitted
}
When I run both tests together, the test that is executed first fails, while the second call succeeds.
When I run both tests separately, both fail.
When I add an @Before method which uses the AsyncRabbitTemplate art to put any message into the queue, both tests MAY pass, or the second test MAY not pass, so in addition to being unexpected, the behaviour is inconsistent as well.
The interesting thing is that the callback passed to the method reports a success before the listener is invoked, and reports the sent message as the result.
The only class missing from this is the general configuration class, which is annotated with @EnableRabbit and has this content:
@Bean
public SimpleRabbitListenerContainerFactory simpleRabbitListenerContainerFactory(ConnectionFactory connectionFactory) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    factory.setConcurrentConsumers(10);
    return factory;
}
Other things I have tried:
specifically created the AsyncRabbitTemplate myself and started and stopped it manually before and after every message process -> both tests succeeded
increased / decreased the receive timeout -> no effect
removed and changed the callback -> no effect
explicitly created the queue again with an injected RabbitAdmin -> no effect
extracted the callback to a constant -> the tests didn't even start correctly
As stated above, when I used RabbitTemplate directly, it worked exactly as intended.
If anyone has any ideas what is missing, I'd be very happy to hear them.
You can't use the same queue for requests and replies...
@Bean // referenced in the tests as art
public AsyncRabbitTemplate asyncRabbitTemplate(ConnectionFactory connectionFactory, RabbitTemplate rabbitTemplate) {
    SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory);
    container.setQueueNames(QUEUE_NAME);
    return new AsyncRabbitTemplate(rabbitTemplate, container);
}
Will listen for replies on QUEUE_NAME, so...
RabbitMessageFuture pendingReply = art.sendAndReceive(QUEUE_NAME, msg);
...simply sends a message to itself. It looks like you intended...
RabbitMessageFuture pendingReply = art.sendAndReceive(KEY_MESSAGING_QUEUE, msg);
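To make that concrete, here is a minimal sketch of the configuration with a dedicated reply queue; REPLY_QUEUE_NAME is a placeholder constant that must name a queue different from KEY_MESSAGING_QUEUE.
@Bean // referenced in the tests as art
public AsyncRabbitTemplate asyncRabbitTemplate(ConnectionFactory connectionFactory, RabbitTemplate rabbitTemplate) {
    // the reply container listens on its own queue, not on the request queue the @RabbitListener consumes
    SimpleMessageListenerContainer replyContainer = new SimpleMessageListenerContainer(connectionFactory);
    replyContainer.setQueueNames(REPLY_QUEUE_NAME); // placeholder: any queue other than KEY_MESSAGING_QUEUE
    return new AsyncRabbitTemplate(rabbitTemplate, replyContainer);
}

// requests then go to the listener's queue, and replies come back on REPLY_QUEUE_NAME
RabbitMessageFuture pendingReply = art.sendAndReceive(KEY_MESSAGING_QUEUE, msg);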

Spring JMS manually creating MessageListenerContainer leakage when application stops

I have a Spring application that has to consume messages from some JMS queues. The number of queues has to be configurable, and because of this we have to manually create the consumers by reading a config file. So I can have x queues of type1 and y queues of type2, and all the connection details are specified in this config file.
I would say it is rather complicated code, and I need to point out the following facts: I manually create the Spring DefaultMessageListenerContainer and call start and stop on it, and the transaction manager is shared between the JMS and JDBC resources. Also, the application runs on WebLogic and the JMS queues are in WebLogic too.
The flow is that the app reads messages from the queues and tries to put them into the database, but if the database is down, the transaction (shared between both JMS and JDBC) is rolled back, so the message is put back into the queue; this is the failover mechanism for when the database is down.
The issue I am experiencing is that when I stop the application while it performs this failover mechanism, some JMS consumer threads are not stopped. This way I leak threads and overload the system.
So my question is: how can I make sure that when the application stops, it stops all the consumer threads? Calling stop on the message listener container doesn't seem to do the job.
Below are some code snippets:
config:
[
{
"factoryInitial": "weblogic.jndi.WLInitialContextFactory",
"providerUrl": "t3://localhost:7001",
"securityPrincipal": "user",
"securityCredentials": "password",
"connectionFactory": "jms/QCF",
"channels": {
"type1": "jms/queue1"
}
}
]
java:
public class JmsConfig {

    private Map<String, List<DefaultMessageListenerContainer>> channels = new HashMap<>();
    private Map<String, MessageListener> messageConsumers;
    private PlatformTransactionManager transactionManager;

    public JmsConfig(Map<String, MessageListener> messageConsumers, PlatformTransactionManager transactionManager) throws Exception {
        this.messageConsumers = messageConsumers;
        this.transactionManager = transactionManager;
        List<JmsServerConfiguration> serverConfigurationList = readJsonFile();
        for (JmsServerConfiguration jmsServerConfiguration : serverConfigurationList) {
            Properties environment = createEnvironment(jmsServerConfiguration);
            JndiTemplate jndiTemplate = new JndiTemplate();
            jndiTemplate.setEnvironment(environment);
            ConnectionFactory connectionFactory = createConnectionFactory(jndiTemplate, jmsServerConfiguration);
            populateMessageListenerContainers(jmsServerConfiguration, jndiTemplate, connectionFactory);
        }
    }

    @PreDestroy
    public void stopListenerContainers() {
        for (Map.Entry<String, List<DefaultMessageListenerContainer>> channel : channels.entrySet()) {
            for (DefaultMessageListenerContainer listenerContainer : channel.getValue()) {
                listenerContainer.stop();
            }
        }
    }

    private void populateMessageListenerContainers(
            JmsServerConfiguration jmsServerConfiguration,
            JndiTemplate jndiTemplate, ConnectionFactory connectionFactory) throws Exception {
        Set<Map.Entry<String, String>> channelsEntry = jmsServerConfiguration.getChannels().entrySet();
        for (Map.Entry<String, String> channel : channelsEntry) {
            Destination destination = createDestination(jndiTemplate, channel.getValue());
            DefaultMessageListenerContainer listenerContainer =
                    createListenerContainer(connectionFactory, destination, messageConsumers.get(channel.getKey()));
            if (!channels.containsKey(channel.getKey())) {
                channels.put(channel.getKey(), new ArrayList<DefaultMessageListenerContainer>());
            }
            channels.get(channel.getKey()).add(listenerContainer);
        }
    }

    private Properties createEnvironment(JmsServerConfiguration jmsServerConfiguration) {
        Properties properties = new Properties();
        properties.setProperty("java.naming.factory.initial", jmsServerConfiguration.getFactoryInitial());
        properties.setProperty("java.naming.provider.url", jmsServerConfiguration.getProviderUrl());
        properties.setProperty("java.naming.security.principal", jmsServerConfiguration.getSecurityPrincipal());
        properties.setProperty("java.naming.security.credentials", jmsServerConfiguration.getSecurityCredentials());
        return properties;
    }

    private ConnectionFactory createConnectionFactory(JndiTemplate jndiTemplate,
            JmsServerConfiguration jmsServerConfiguration) throws Exception {
        JndiObjectFactoryBean connectionFactory = new JndiObjectFactoryBean();
        connectionFactory.setJndiTemplate(jndiTemplate);
        connectionFactory.setJndiName(jmsServerConfiguration.getConnectionFactory());
        connectionFactory.setExpectedType(ConnectionFactory.class);
        connectionFactory.afterPropertiesSet();
        return (ConnectionFactory) connectionFactory.getObject();
    }

    private Destination createDestination(JndiTemplate jndiTemplate, String jndiName) throws Exception {
        JndiObjectFactoryBean destinationFactory = new JndiObjectFactoryBean();
        destinationFactory.setJndiTemplate(jndiTemplate);
        destinationFactory.setJndiName(jndiName);
        destinationFactory.setExpectedType(Destination.class);
        destinationFactory.afterPropertiesSet();
        return (Destination) destinationFactory.getObject();
    }

    private DefaultMessageListenerContainer createListenerContainer(
            ConnectionFactory connectionFactory, Destination destination,
            MessageListener messageListener) {
        DefaultMessageListenerContainer listenerContainer = new DefaultMessageListenerContainer();
        listenerContainer.setConcurrentConsumers(3);
        listenerContainer.setConnectionFactory(connectionFactory);
        listenerContainer.setDestination(destination);
        listenerContainer.setMessageListener(messageListener);
        listenerContainer.setTransactionManager(transactionManager);
        listenerContainer.setSessionTransacted(true);
        listenerContainer.afterPropertiesSet();
        listenerContainer.start();
        return listenerContainer;
    }
}
So the issue was solved by calling listenerContainer.shutdown() instead of stop().
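For reference, a minimal sketch of that fix applied to the @PreDestroy hook above: shut the containers down rather than just stopping them, so their consumer threads and shared JMS resources are released when the application exits.
@PreDestroy
public void stopListenerContainers() {
    for (List<DefaultMessageListenerContainer> containers : channels.values()) {
        for (DefaultMessageListenerContainer listenerContainer : containers) {
            // shutdown() stops the consumers and destroys the container's JMS resources,
            // whereas stop() only pauses message delivery and keeps the container alive
            listenerContainer.shutdown();
        }
    }
}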
