ActiveMQ generating queues which don't exist - java

I have a problem using ActiveMQ in a Spring application.
I have a few environments on separate machines. On each machine I had one ActiveMQ instance installed. Then I realized that I can have a single ActiveMQ instance installed on one server, and a few applications can use it for sending messages. So I must change the queue names in order to have different queues for different environments ("queue.search.sandbox", "queue.search.production", ...).
After that change, ActiveMQ generates the new queues, but also the old ones, although there is no configuration telling it to do so.
I am using a Java Spring application with Java configuration, not XML.
First, I create queueTemplate as a Spring bean:
@Bean
public JmsTemplate jmsAuditQueueTemplate() {
    log.debug("ActiveMQConfiguration jmsAuditQueueTemplate");
    JmsTemplate jmsTemplate = new JmsTemplate();
    String queueName = "queue.audit.".concat(env.getProperty("activeMqBroker.queueName.suffix"));
    jmsTemplate.setDefaultDestination(new ActiveMQQueue(queueName));
    jmsTemplate.setConnectionFactory(connectionFactory());
    return jmsTemplate;
}
Second, I create the ActiveMQ listener configuration:
@Bean
public DefaultMessageListenerContainer jmsAuditQueueListenerContainer() {
    log.debug("ActiveMQConfiguration jmsAuditQueueListenerContainer");
    DefaultMessageListenerContainer dmlc = new DefaultMessageListenerContainer();
    dmlc.setConnectionFactory(connectionFactory);
    String queueName = "queue.audit.".concat(env.getProperty("activeMqBroker.queueName.suffix"));
    ActiveMQQueue activeMQ = new ActiveMQQueue(queueName);
    dmlc.setDestination(activeMQ);
    dmlc.setRecoveryInterval(30000);
    dmlc.setSessionTransacted(true);
    // To perform actual message processing
    dmlc.setMessageListener(auditQueueListenerService);
    dmlc.setConcurrentConsumers(10);
    // ... more parameters that you might want to inject ...
    return dmlc;
}
After building my application, I end up with the properly created queue with the suffix ("queue.audit.sandbox"), but after some time ActiveMQ also generates the old version ("queue.audit").
Does someone know why ActiveMQ is doing this? Thanks in advance.

There is probably still an entry for the queue in the persistence index, so when ActiveMQ restarts it displays the queue again. If you want to be certain about which destinations exist, define them as startup destinations and disable auto-creation by denying the "admin" permission to the connecting user account in the authorization entry.
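For illustration, here is a minimal sketch of that setup using an embedded BrokerService; the queue names and connector URL are assumptions, and a standalone broker would carry the equivalent settings in the <destinations> and <authorizationPlugin> sections of activemq.xml:
import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.command.ActiveMQDestination;
import org.apache.activemq.command.ActiveMQQueue;

public class StartupDestinationsBroker {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();
        broker.addConnector("tcp://0.0.0.0:61616");

        // Startup destinations: these queues exist from the moment the broker
        // starts, independent of what clients happen to connect.
        broker.setDestinations(new ActiveMQDestination[] {
                new ActiveMQQueue("queue.audit.sandbox"),
                new ActiveMQQueue("queue.search.sandbox")
        });

        // To stop clients from auto-creating destinations, additionally install
        // an AuthorizationPlugin whose authorization entries grant "admin" only
        // to an administrative group (the programmatic equivalent of the
        // <authorizationPlugin> element in activemq.xml).

        broker.start();
        broker.waitUntilStopped();
    }
}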

After some time ActiveMQ just stopped creating the queues that don't exist.
Now we have the expected behavior, without unnecessary queues.
Still, I haven't found out what solved this problem, to be honest...

Related

Too many active consumers for ActiveMQ queue

My Spring app consumes an ActiveMQ queue. There are two possible approaches. The initial part of the ActiveMQ integration is the same for both approaches:
@Bean
public ConnectionFactory connectionFactory() {
    return new ActiveMQConnectionFactory();
}

@Bean
public Queue notificationQueue() {
    return resolveAvcQueueByJNDIName("java:comp/env/jms/name.not.important.queue");
}
Single thread approach:
@Bean
public IntegrationFlow orderNotify() {
    return IntegrationFlows.from(Jms.inboundAdapter(connectionFactory()).destination(notificationQueue()),
            c -> c.poller(Pollers.fixedDelay(QUEUE_POLLING_INTERVAL_MS)
                    .errorHandler(e -> logger.error("Can't handle incoming message", e))))
            .handle(...).get();
}
But I want to consume messages using several worker threads, so I refactored the code from the inbound adapter to a message-driven channel adapter:
@Bean
public IntegrationFlow orderNotify() {
    return IntegrationFlows.from(Jms.messageDrivenChannelAdapter(connectionFactory()).configureListenerContainer(c -> {
                final DefaultMessageListenerContainer container = c.get();
                container.setMaxConcurrentConsumers(notifyThreadPoolSize);
            }).destination(notificationQueue()))
            .handle(...).get();
}
The problem with the second approach is that the app doesn't stop the ActiveMQ consumer when it is redeployed into Tomcat or restarted. It creates a new consumer during startup, but all new messages are routed to the old "dead" consumer, so they sit in the "Pending messages" section and are never dequeued.
What can be the problem here?
You have to stop Tomcat fully, I believe. Typically, during a redeploy of the application the Spring container should be stopped and cleaned up properly, but it looks like that's not happening in your case: something is missing from the Tomcat redeploy hook. Therefore I suggest stopping it fully.
Another option is to forget the external Tomcat and migrate to Spring Boot, which can start an embedded servlet container. This way there won't be any leaks after rebuilding and restarting the application.
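As a workaround, a sketch along these lines can close the Spring context explicitly when Tomcat destroys the web app, so that DefaultMessageListenerContainer shuts down and its JMS consumer is unregistered from the broker; the listener class name is mine, and it assumes the root context was created by ContextLoaderListener:
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.web.context.support.WebApplicationContextUtils;

// Register in web.xml after ContextLoaderListener (hypothetical helper class).
public class JmsShutdownListener implements ServletContextListener {

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        // nothing to do on startup
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        // Closing the context runs the lifecycle callbacks of the listener
        // container instead of leaving the consumer dangling on the broker.
        ConfigurableApplicationContext ctx = (ConfigurableApplicationContext)
                WebApplicationContextUtils.getWebApplicationContext(sce.getServletContext());
        if (ctx != null) {
            ctx.close();
        }
    }
}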

How to open a new JMS connection per thread when using a JMS outbound adapter?

I have a Spring Integration JMS outbound gateway that I'm using to push messages to multiple queues in my queue manager.
@Bean
public IntegrationFlow sendTo101flow() {
    return IntegrationFlows.from("sendTo101Channel")
            .handle(Jms.outboundAdapter(context.getBean("connection101", ConnectionFactory.class))
                    .destinationExpression("headers." + HeaderKeys.DESTINATION_NAME)
                    .configureJmsTemplate(jmsOutboundTemplateSpec())
                    .get(), jmsOutboundEndpointSpec())
            .get();
}
I'm facing problems when we get concurrent requests with huge payloads which need to be inserted into the same queue. On inspection it looks like even though the threads trying to insert the message are separate, they're only allowed to do the insertion sequentially.
I have checked the MQ documentation and it looks like actual parallel insertion will only work if a new connection is opened for each message.
Is there a way to make a JMS outbound gateway open a new connection per message? Or set the number of concurrent connections opened through it (like on the inbound side)?
Opening a new connection per message is the default behavior, as long as you don't use a CachingConnectionFactory (or its parent SingleConnectionFactory), which shares a single connection across all operations.
Connection (and session, and producer) caching is generally recommended to avoid the overhead of creating a connection, session, and producer for each send, but there may be cases, like yours, where opening a connection per send is unavoidable.
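A minimal sketch of the difference, assuming an ActiveMQ broker at tcp://localhost:61616 for illustration (the configuration class and the second bean name are mine):
import javax.jms.ConnectionFactory;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jms.connection.CachingConnectionFactory;

@Configuration
public class JmsConnectionConfig {

    // A plain factory: every JmsTemplate send opens and closes its own
    // connection, so concurrent sends do not share one connection.
    @Bean
    public ConnectionFactory connection101() {
        return new ActiveMQConnectionFactory("tcp://localhost:61616");
    }

    // The cached variant shares a single underlying connection (and caches
    // sessions and producers) across all operations -- cheaper per send, but
    // exactly the sharing described in the answer above.
    @Bean
    public ConnectionFactory cachedConnection101() {
        return new CachingConnectionFactory(new ActiveMQConnectionFactory("tcp://localhost:61616"));
    }
}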

How to clean HornetQ messaging journal before/after performing a test?

There's an Arquillian integration test using JMS (HornetQ) with persisted messages. Some tests leave the messaging journal filled with unhandled messages that break other tests expecting no data.
Is there a way of telling JMS to clean its messaging journal before or after executing a test?
This does not exist in the JMS API itself, but there's a method 'removeMessages(filter)' in the HornetQ QueueControl management object. This method can be found in the JMX Bean for the Queue, but I wouldn't know how to get that in Arquillian.
Luckily, you can invoke management operations via the 'hornetq.management' queue. See http://docs.jboss.org/hornetq/2.2.5.Final/user-manual/en/html/management.html. In practice, the following should work:
// Assumes an existing, started connection and a non-transacted QueueSession.
Queue managementQueue = HornetQJMSClient.createQueue("hornetq.management");
QueueRequestor requestor = new QueueRequestor(session, managementQueue);
Message m = session.createMessage();
JMSManagementHelper.putOperationInvocation(m,
        "jms.queue.exampleQueue",   // resource name of the JMS queue to purge
        "removeMessages", "*");     // operation name and its filter argument
Message reply = requestor.request(m);
boolean success = JMSManagementHelper.hasOperationSucceeded(reply);
If you're restarting the server, you could remove the paging and data folders (while keeping the bindings).
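A minimal sketch of that clean-up between test runs, assuming a standalone broker whose data directory contains journal/, paging/, large-messages/ and bindings/ subfolders; the DATA_DIR path and folder names are assumptions, and it must only run while the server is down:
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Comparator;
import java.util.stream.Stream;

public class JournalCleaner {

    private static final Path DATA_DIR = Paths.get("target/hornetq-data"); // adjust to your layout

    // Deletes the message journal and paging files but keeps bindings/,
    // so the queue definitions survive the next start.
    public static void wipeJournal() throws IOException {
        for (String dir : new String[] {"journal", "paging", "large-messages"}) {
            Path target = DATA_DIR.resolve(dir);
            if (!Files.exists(target)) {
                continue;
            }
            try (Stream<Path> paths = Files.walk(target)) {
                paths.sorted(Comparator.reverseOrder())   // children before parents
                     .forEach(p -> p.toFile().delete());
            }
        }
    }
}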

Rolling upgrade with MDBs

We have a Java EE application deployed on a Glassfish 3.1.2 cluster which provides a REST API using JAX-RS. We regularly deploy new versions of the application by deploying an EAR to a duplicate cluster instance, then update the HTTP load balancer to send traffic to the updated instance instead of the old one.
This allows us to upgrade with no loss of availability as described here: http://docs.oracle.com/cd/E18930_01/html/821-2426/abdio.html#abdip. We are frequently making significant changes to the application, which makes the new versions "incompatible" (which is why we use two clusters).
We now have to provide a message-queue interface to the application for some high-throughput internal messaging (from C++ producers). However, using Message Driven Beans, I cannot see how it is possible to upgrade the application without any service disruption.
The options I have investigated are:
Single remote JMS queue (openMQ)
Producers send messages to a single message queue, messages are handled by a MDB. When we start a second cluster instance, messages should be load-balanced to the upgraded cluster, but when we stop the "old" cluster, outstanding transactions will be lost.
I considered using JMX to disable the producers/consumers for that message queue during the upgrade, but that only pauses message delivery. Outstanding messages will still be lost when we disable the old cluster (I think?).
I also considered ditching the @MessageDriven annotation and creating a MessageConsumer manually. This does seem to work, but the MessageConsumer cannot then access other EJBs using the @EJB annotation (as far as I know):
// Singleton bean with start()/stop() functions that
// enable/disable message consumption
@Singleton
@Startup
public class ServerControl {

    private boolean running = false;

    @Resource(lookup = "jms/TopicConnectionFactory")
    private TopicConnectionFactory topicConnectionFactory;

    @Resource(lookup = "jms/MyTopic")
    private Topic topic;

    private Connection connection;
    private Session session;
    private MessageConsumer consumer;

    public ServerControl() {
        this.running = false;
    }

    public void start() throws JMSException {
        if (this.running) return;
        connection = topicConnectionFactory.createConnection();
        session = connection.createSession(false, Session.DUPS_OK_ACKNOWLEDGE);
        consumer = session.createConsumer(topic);
        consumer.setMessageListener(new MessageHandler());
        // Start the message queue handlers
        connection.start();
        this.running = true;
    }

    public void stop() throws JMSException {
        if (this.running == false) return;
        // Stop the message queue handlers
        consumer.close();
        this.running = false;
    }
}
// MessageListener has to invoke functions defined in other EJBs
@Stateless
public class MessageHandler implements MessageListener {

    @EJB
    SomeEjb someEjb; // This is null

    public MessageHandler() {
    }

    @Override
    public void onMessage(Message message) {
        // This works but someEjb is null unless I
        // use the @MessageDriven annotation, but then I
        // can't gracefully disconnect from the queue
    }
}
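(The @EJB field stays null here because the listener is created with new MessageHandler() and is therefore not container-managed, so no injection ever happens. A hedged sketch of looking the bean up through JNDI instead follows; the portable name "java:module/SomeEjb" is an assumption and may need the java:app/... or java:global/... form in your deployment.)
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class MessageHandler implements MessageListener {

    private SomeEjb someEjb;

    public MessageHandler() {
        try {
            // Instances created with "new" never get @EJB injection,
            // so resolve the bean explicitly through JNDI instead.
            someEjb = (SomeEjb) new InitialContext().lookup("java:module/SomeEjb");
        } catch (NamingException e) {
            throw new IllegalStateException("Failed to look up SomeEjb", e);
        }
    }

    @Override
    public void onMessage(Message message) {
        // someEjb is now available for use here
    }
}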
Local/Embedded JMS queue for each cluster
Clients would have to connect to two different message queue brokers (one for each cluster).
Clients would have to be notified that a cluster instance is going down and stop sending messages to the queues on that broker.
Generally much less convenient and tidy than the existing http solution.
Alternative message queue providers
Connect Glassfish up to a different type of message queue or a different vendor (e.g. ActiveMQ); perhaps one of these has the ability to balance traffic away from a particular set of consumers?
I have assumed that disabling the application will simply "kill" any outstanding transactions. If disabling the application allows existing transactions to complete, then I could just do that after bringing the second cluster up.
Any help would be appreciated! Thanks in advance.
If you use high availability then all of your messages for the cluster will be stored in a single data store, as opposed to the local data store on each instance. You could then configure both clusters to use the same store. Then, when shutting down the old cluster and spinning up the new one, you have access to all the messages.
This is a good video that helps to explain high-availability JMS for GlassFish.
I don't understand your assumption that when you stop the "old" cluster, outstanding transactions will be lost. The MDBs will be allowed to finish their message processing before the application stops, and any unacknowledged messages will be handled by the "new" cluster.
If the load balancing between the old and new versions is an issue, I would put the MDBs into a separate .ear and stop the old MDBs as soon as the new MDBs are online, or even before that, if your use case allows for a delay in message processing until the new version is deployed.

Message Driven Bean Selectors (JMS)

I have recently discovered message selectors:
@ActivationConfigProperty(
        propertyName = "messageSelector",
        propertyValue = "Fragile IS TRUE")
My question is: how can I make the selector dynamic at runtime?
Let's say a consumer decided they wanted only messages with "Fragile IS FALSE".
Could the consumer change the selector somehow without redeploying the MDB?
Note: I am using Glassfish v2.1
To my knowledge, this is not possible. There may be implementations that will allow it via some custom server hooks, but it would be implementation dependent. For one, it requires a change to the deployment descriptor, which is not read after the EAR is deployed.
JMS (Jakarta Messaging) is designed to provide simple means to do simple things and more complicated means to do more complicated but less frequently needed things. Message-driven beans are an example of the first case. For dynamic reconfiguration, you need to stop using MDBs and consume messages with the programmatic API, using an injected JMSContext and a topic or queue. For example:
@Inject
private JMSContext context;

@Resource(lookup = "jms/queue/thumbnail")
Queue thumbnailQueue;

JMSConsumer connectListener(String messageSelector) {
    JMSConsumer consumer = context.createConsumer(thumbnailQueue, messageSelector);
    consumer.setMessageListener(message -> {
        // process message
    });
    return consumer;
}
You can call connectListener during startup, e.g. in a CDI bean:
public void start(@Observes @Initialized(ApplicationScoped.class) Object startEvent) {
    connectListener("Fragile IS TRUE");
}
Then you can easily reconfigure it by closing the returned consumer and creating it again with a new selector string:
consumer.close();
consumer = connectListener("Fragile IS FALSE");
