Spring Integration JMS Consumers not consuming all messages

Setup
I have Spring Boot application called the Dispatcher. It runs on 1 Machine and has an embedded ActiveMQ Broker:
@Bean
public BrokerService broker(ActiveMQProperties properties) throws Exception {
    BrokerService broker = new BrokerService();
    broker.setPersistent(false);
    broker.addConnector(properties.getBrokerUrl());
    return broker;
}
which writes tasks to a JMS queue:
@Bean
public IntegrationFlow outboundFlow(ActiveMQConnectionFactory connectionFactory) {
    return IntegrationFlows
            .from(standardTaskQueue())
            .bridge(Bridges.blockingPoller(outboundTaskScheduler()))
            .transform(outboundTransformer)
            .handle(Jms.outboundAdapter(connectionFactory)
                    .extractPayload(false)
                    .destination(JmsQueueNames.STANDARD_TASKS))
            .get();
}

@Bean
public QueueChannel standardTaskQueue() {
    return MessageChannels.priority()
            .comparator(TASK_PRIO_COMPARATOR)
            .get();
}
// 2 more queues with different names but same config
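(Not shown above: the connection factory handed to the outbound adapter. A minimal sketch, assuming the dispatcher reaches its embedded broker over the VM transport, could look like this; the URL and bean name are illustrative only.)

// Sketch only: dispatcher-side connection factory, assuming the embedded broker
// is reached over the VM transport; a tcp:// URL matching the broker connector
// would work as well. Not part of the original configuration.
@Bean
public ActiveMQConnectionFactory connectionFactory() {
    return new ActiveMQConnectionFactory("vm://localhost?create=false");
}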
The Worker Application runs on 10 Machines with 20 cores each and is configured like this:
@Bean
public IntegrationFlow standardTaskInbound(ConnectionFactory connectionFactory) {
    int maxWorkers = 20;
    return IntegrationFlows
            .from(Jms.channel(connectionFactory)
                    .sessionTransacted(true)
                    .concurrentConsumers(maxWorkers)
                    .taskExecutor(
                            Executors.newFixedThreadPool(maxWorkers, new CustomizableThreadFactory("standard-")))
                    .destination(JmsQueueNames.STANDARD_TASKS))
            .channel(ChannelNames.TASKS_INBOUND)
            .get();
}
// 2 more inbound queues with different names but same config
This is repeated for a 2nd queue, plus 1 special case, so there is a total of 401 consumers (10 machines × 2 queues × 20 consumers each, plus 1).
Observation
Using JConsole, I can see that there are tasks in the ActiveMQ queue:
[TODO insert screenshot]
As expected, on any Worker machine, there are 20 consumer threads:
[TODO insert screenshot]
But most, if not all, of them are idle even though there are still messages in the queue. Using our monitoring tool, I see that about 50 to 400 tasks are being processed at any given time, whereas the expectation is a constant 400.
I also observed that Spring creates an AbstractPollingMessageListenerContainer for each consumer, which seems to result in one JMS connection being opened per application per queue per second (roughly 33 connections per second in total).
Investigation
I came across the question "I do not receive messages in my second consumer", which hints at prefetch being the culprit. This sounded plausible, so I configured tcp://dispatcher:61616?jms.prefetchPolicy.queuePrefetch=1 on each worker. After that, however, only about 25 tasks were being processed at any point, which made no sense to me at all.
Question
I don't seem to understand what's going on, and since I'm running out of time to investigate, I was hoping someone could point me in the right direction. Which factors could be the cause? The number of consumers/connections? The prefetch? Anything else?

It turned out to be caused by the prefetch policy after all. The correct configuration in my case was tcp://dispatcher:61616?jms.prefetchPolicy.all=0.
In my earlier (failed) test I used jms.prefetchPolicy.queuePrefetch=1, but in hindsight I'm not sure whether I configured it in the right place.
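For reference, this is roughly how that broker URL could be applied on the worker side (a sketch only; the bean itself and the programmatic ActiveMQPrefetchPolicy variant are illustrative alternatives, not the exact configuration used):

// Sketch only: worker-side connection factory with prefetch disabled.
// The URL parameter jms.prefetchPolicy.all=0 is what solved the issue above.
@Bean
public ActiveMQConnectionFactory workerConnectionFactory() {
    ActiveMQConnectionFactory factory =
            new ActiveMQConnectionFactory("tcp://dispatcher:61616?jms.prefetchPolicy.all=0");
    // Equivalent programmatic form (org.apache.activemq.ActiveMQPrefetchPolicy):
    ActiveMQPrefetchPolicy prefetchPolicy = new ActiveMQPrefetchPolicy();
    prefetchPolicy.setAll(0);
    factory.setPrefetchPolicy(prefetchPolicy);
    return factory;
}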

Related

Kafka partition blocked when infra problem in a single instance of application

I have a problem with a microservice when running it on Kubernetes with many pods.
I use the manual commit strategy, so I have to acknowledge (or not) every message.
All instances of the application belong to the same Kafka consumer group, and the topic has at least 20 partitions divided between the pods. When consuming a message, the listener calls an external component (such as a REST API via WebClient or RestTemplate, or a Kafka producer for a different topic). The Kafka consumer looks like this:
@KafkaListener(topics = "topic")
@Trace
public void listen(@Payload Object message, Acknowledgment acknowledgment) {
    try {
        api.call(message);
        acknowledgment.acknowledge();
    } catch (InfraException e) {
        acknowledgment.nack(1000);
    }
}
But sometimes this external component has infrastructure problems and is not available. The issue usually happens when, for some reason, a single pod has connectivity problems. As the message is not acknowledged, it continues to be consumed, which is good. The problem is that the message keeps being delivered to the same problematic instance of the application and is never redirected to another 'healthy' consumer. Since the consumer is still able to fetch messages from Kafka and send heartbeats, it is never considered a problematic consumer by Kafka, even after rebalancing.
Is there some strategy, or something we can do in the configuration, to solve this problem or to avoid the partition being blocked?
Thank you for your attention so far.

Handle multiple amqp messages concurrently through one consumer inside one spring-rabbit service

EDIT
Just found out how to run multiple consumers inside one service:
@Bean
SimpleMessageListenerContainer container(ConnectionFactory connectionFactory, MessageListenerAdapter listenerAdapter) {
    SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
    container.setConnectionFactory(connectionFactory);
    container.setQueueNames(RENDER_QUEUE);
    container.setConcurrentConsumers(concurrentConsumers); // set via environment variable
    container.setMessageListener(listenerAdapter);
    return container;
}

@Bean
MessageListenerAdapter listenerAdapter(RenderMessageConsumer receiver) {
    return new MessageListenerAdapter(receiver, "reciveMessageFromRenderQueue");
}
Now the only question that remains is: how can I have a global limit? That is, how do multiple instances of the AMQP receiver share the total number of consumers? I want to set a global number of concurrentConsumers to 10, run 2 instances of the consumer service, and have each instance run around 5 consumers. Can this be managed by RabbitMQ?
I have a Spring service that consumes AMQP messages and calls an HTTP resource for each message.
After the HTTP call completes, another queue is called to report either error or done. Only then is message handling complete and the next message taken from the queue.
// simplified
@RabbitListener(queues = RENDER_QUEUE)
public void reciveMessageFromRenderQueue(String message) {
    try {
        RenderMessage renderMessage = JsonUtils.stringToObject(message, RenderMessage.class);
        String result = renderService.httpCallRenderer(renderMessage);
        messageProducer.sendDoneMessage(result);
    } catch (Exception e) {
        logError(type, e);
        messageProducer.sendErrorMessage(e.getMessage());
    }
}
At times there are hundreds or thousands of render messages in the queue, but the HTTP call is rather long-running and not doing much. This becomes obvious because I can improve the message handling rate by running multiple instances of the service, thus adding more consumers and calling the HTTP endpoint in parallel. One instance has exactly one consumer for the channel, so the number of instances equals the number of consumers. However, that heavily increases memory usage (since the service uses Spring) just for forwarding a message and handling the result.
So I thought I'd do the HTTP call asynchronously and return immediately after accepting the message:
renderService.httpCallRendererAsync(renderMessage)
        .subscribeOn(Schedulers.newThread())
        .subscribe(new Observer<String>() {
            public void onNext(String result) {
                messageProducer.sendDoneMessage(result);
            }
            public void onError(Throwable throwable) {
                messageProducer.sendErrorMessage(throwable.getMessage());
            }
        });
That, however, overloads the HTTP endpoint, which cannot deal with 1000 or more simultaneous requests.
What I need is for my AMQP service to take a certain number of messages from the queue, handle them in separate threads, make the HTTP call in each of them, and only then report the message as handled. The number of messages taken from the queue, however, needs to be shared between multiple instances of that service: if the maximum is 10 and message consumption is round-robin, the first 5 odd messages should be handled by instance one and the first 5 even messages by instance two, and as soon as one instance finishes handling a message it should take another one from the queue.
What I found are things like prefetch, with limits per consumer and per channel, as described by RabbitMQ, and the spring-rabbit implementation which uses prefetchCount and the transactionSize described here. That, however, does not seem to do anything for a single running instance: it will not spawn additional threads to handle more messages concurrently. And of course it will not reduce the number of messages handled in my async scenario, since those messages are immediately considered "handled".
@Bean
public RabbitListenerContainerFactory<SimpleMessageListenerContainer> prefetchContainerFactory(ConnectionFactory rabbitConnectionFactory) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setConnectionFactory(rabbitConnectionFactory);
    factory.setPrefetchCount(5);
    factory.setTxSize(5);
    return factory;
}

// and then using
@RabbitListener(queues = RENDER_QUEUE, containerFactory = "prefetchContainerFactory")
The most important requirement for me seems to be that multiple messages should be handled in one instance, while the maximum number of concurrently handled messages should be shared between instances.
Can that be done using RabbitMQ and Spring? Or do I have to implement something in between?
In an early stage it might be acceptable to just have concurrent message handling in one instance and not share that limit. Then I'll have to configure the limit manually using environment variables while scaling the number of instances.
Now the only question that remains is: how can I have a global limit? That is, how do multiple instances of the AMQP receiver share the total number of consumers? I want to set a global number of concurrentConsumers to 10, run 2 instances of the consumer service, and have each instance run around 5 consumers. Can this be managed by RabbitMQ?
There is no mechanism in either RabbitMQ or Spring to support such a scenario automatically. You can, however, change the concurrency at runtime (setConcurrentConsumers() on the container) so you could use some external agent to manage the concurrency on each instance.
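For illustration, assuming the listener is given an id (hypothetical here, e.g. @RabbitListener(id = "renderListener", ...)), an external agent could adjust the concurrency of the live container roughly like this:

// Sketch only: change a running container's concurrency at runtime.
// The listener id "renderListener" and the surrounding wiring are assumptions.
@Autowired
private RabbitListenerEndpointRegistry registry;

public void applyConcurrency(int concurrentConsumers) {
    SimpleMessageListenerContainer container =
            (SimpleMessageListenerContainer) registry.getListenerContainer("renderListener");
    container.setConcurrentConsumers(concurrentConsumers); // takes effect on the live container
}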

Too many active consumers for ActiveMQ queue

My Spring app consumes an ActiveMQ queue. There are two possible approaches. The initial part of the ActiveMQ integration is the same for both approaches:
@Bean
public ConnectionFactory connectionFactory() {
    return new ActiveMQConnectionFactory();
}

@Bean
public Queue notificationQueue() {
    return resolveAvcQueueByJNDIName("java:comp/env/jms/name.not.important.queue");
}
Single thread approach:
@Bean
public IntegrationFlow orderNotify() {
    return IntegrationFlows
            .from(Jms.inboundAdapter(connectionFactory()).destination(notificationQueue()),
                    c -> c.poller(Pollers.fixedDelay(QUEUE_POLLING_INTERVAL_MS)
                            .errorHandler(e -> logger.error("Can't handle incoming message", e))))
            .handle(...)
            .get();
}
But I want to consume messages using several worker threads, so I refactored the code from an inbound adapter to a message-driven channel adapter:
@Bean
public IntegrationFlow orderNotify() {
    return IntegrationFlows
            .from(Jms.messageDrivenChannelAdapter(connectionFactory())
                    .configureListenerContainer(c -> {
                        final DefaultMessageListenerContainer container = c.get();
                        container.setMaxConcurrentConsumers(notifyThreadPoolSize);
                    })
                    .destination(notificationQueue()))
            .handle(...)
            .get();
}
The problem is that, with the second approach, the app doesn't stop the ActiveMQ consumer when it is redeployed into Tomcat or restarted. It creates a new consumer during its startup, but all new messages are routed to the old "dead" consumer, so they sit in the "Pending messages" section and are never dequeued.
What can be the problem here?
You have to stop Tomcat fully, I believe. Typically, during an application redeploy, the Spring container should be stopped and cleaned up properly, but it looks like that is not happening in your case: something is missing in the Tomcat redeploy hook. Therefore I suggest stopping it fully.
Another option is to forget the external Tomcat and migrate to Spring Boot with its ability to start an embedded servlet container. That way there are no leaks after rebuilding and restarting the application.
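For the second option, a minimal sketch of the migration entry point (the class name is an assumption; the existing orderNotify() flow bean would stay unchanged):

// Sketch only: Spring Boot entry point. A clean shutdown closes the application
// context, which stops the DefaultMessageListenerContainer and releases the
// ActiveMQ consumer, so no "dead" consumer is left behind.
@SpringBootApplication
public class OrderNotifyApplication {

    public static void main(String[] args) {
        SpringApplication.run(OrderNotifyApplication.class, args);
    }
}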

How to open a new jms connection per thread when using jms outbound adapter?

I have a Spring Integration JMS outbound gateway that I'm using to push messages to multiple queues in my queue manager.
@Bean
public IntegrationFlow sendTo101flow() {
    return IntegrationFlows.from("sendTo101Channel")
            .handle(Jms.outboundAdapter(context.getBean("connection101", ConnectionFactory.class))
                            .destinationExpression("headers." + HeaderKeys.DESTINATION_NAME)
                            .configureJmsTemplate(jmsOutboundTemplateSpec())
                            .get(),
                    jmsOutboundEndpointSpec())
            .get();
}
I'm facing problems when we get concurrent requests with huge payloads that need to be inserted into the same queue. On inspection, it looks like even though the threads trying to insert the messages are separate, they are only allowed to do the insertion sequentially.
I have checked the MQ documentation, and it looks like actual parallel insertion will only work if a new connection is opened for each message.
Is there a way to make a JMS outbound gateway open a new connection per message? Or set the number of concurrent connections opened through it (like on the inbound side)?
That is the default behavior, as long as you don't use a CachingConnectionFactory (or its parent SingleConnectionFactory) which shares a single connection across all operations.
Connection (and session, producer) caching is generally recommended, to avoid the overhead of creating connection, session, and producer for each send. But there may be cases, like yours, where this is unavoidable.
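As a rough sketch of that distinction (ActiveMQ classes are used here as a stand-in for whatever provider factory "connection101" actually is, and the broker URL is an assumption):

// Sketch only: a plain, non-caching factory lets each send open its own connection,
// so concurrent sends are not funneled through one shared connection.
@Bean
public ConnectionFactory connection101() {
    return new ActiveMQConnectionFactory("tcp://broker-101:61616");
}

// For comparison: wrapping it like this shares a single connection across all sends,
// which is the behavior described in the question.
@Bean
public ConnectionFactory cachingConnection101() {
    CachingConnectionFactory caching = new CachingConnectionFactory(connection101());
    caching.setSessionCacheSize(10); // caches sessions/producers on the one shared connection
    return caching;
}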

ActiveMQ generating queues which don't exists

I have a problem with using ActiveMQ in a Spring application.
I have a few environments on separate machines, and on each machine I had one ActiveMQ instance installed. Now I have realized that I can have just one ActiveMQ instance installed on one server, and a few applications can use it for sending messages. So I must change the queue names in order to have different queues for different environments ("queue.search.sandbox", "queue.search.production", ...).
After that change, ActiveMQ is generating the new queues, but also the old ones, although there is no configuration telling it to do so.
I am using a Java Spring application with Java configuration, not XML.
First, I create the queue's JmsTemplate as a Spring bean:
@Bean
public JmsTemplate jmsAuditQueueTemplate() {
    log.debug("ActiveMQConfiguration jmsAuditQueueTemplate");
    JmsTemplate jmsTemplate = new JmsTemplate();
    String queueName = "queue.audit.".concat(env.getProperty("activeMqBroker.queueName.suffix"));
    jmsTemplate.setDefaultDestination(new ActiveMQQueue(queueName));
    jmsTemplate.setConnectionFactory(connectionFactory());
    return jmsTemplate;
}
Second, I create the ActiveMQ listener configuration:
@Bean
public DefaultMessageListenerContainer jmsAuditQueueListenerContainer() {
    log.debug("ActiveMQConfiguration jmsAuditQueueListenerContainer");
    DefaultMessageListenerContainer dmlc = new DefaultMessageListenerContainer();
    dmlc.setConnectionFactory(connectionFactory);
    String queueName = "queue.audit.".concat(env.getProperty("activeMqBroker.queueName.suffix"));
    ActiveMQQueue activeMQ = new ActiveMQQueue(queueName);
    dmlc.setDestination(activeMQ);
    dmlc.setRecoveryInterval(30000);
    dmlc.setSessionTransacted(true);
    // To perform actual message processing
    dmlc.setMessageListener(auditQueueListenerService);
    dmlc.setConcurrentConsumers(10);
    // ... more parameters that you might want to inject ...
    return dmlc;
}
After building my application, I get the properly created queue with the suffix ("queue.audit.sandbox"), but after some time ActiveMQ also generates the old one ("queue.audit").
Does anyone know why ActiveMQ is doing this? Thanks in advance.
There is probably still an entry in the index for the queue, so when ActiveMQ restarts it displays the queue again. If you want to be certain about destinations, use startup destinations and disable auto-creation by denying the "admin" permission to the connecting user account in the authorization entry.
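As an illustration of the startup-destinations part (this sketch assumes an embedded broker configured in Java, similar to the first question above; on a standalone broker the same settings go into activemq.xml, and the authorization entries are configured separately):

// Sketch only: declare the expected queues at broker startup so only known
// destinations exist. Locking down auto-creation additionally requires the
// authorization plugin (not shown here).
@Bean
public BrokerService broker() throws Exception {
    BrokerService broker = new BrokerService();
    broker.setDestinations(new ActiveMQDestination[] {
            new ActiveMQQueue("queue.audit.sandbox"),
            new ActiveMQQueue("queue.search.sandbox")
    });
    return broker;
}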
After some time, ActiveMQ simply stopped creating the queues that shouldn't exist.
Now we have the expected behavior, without the unnecessary queues.
To be honest, I still haven't found out what solved the problem...
