ActiveMQ, how can I create only one consumer?

I'm working with Java EE and ActiveMQ. I want to set up a JMS queue: I send messages to the queue, and a consumer with a MessageListener should read those messages.
The code for my consumer is the following:
private void initializeActiveMq() throws JMSException {
    // Create a ConnectionFactory
    ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory(brokerUrl);
    // Create a Connection
    connection = connectionFactory.createConnection();
    connection.start();
    // Create a Session
    session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
    // Create the destination (Queue)
    Queue queue = session.createQueue(queueName);
    // Create a MessageConsumer from the Session to the Queue
    consumer = session.createConsumer(queue);
    consumer.setMessageListener(this);
}
But the problem is, every time I run this code it adds a new consumer to my queue, and then I get strange behavior: the consumers no longer deliver the messages correctly. With only one consumer it works perfectly.
So how can I make sure that I have only one consumer on my queue?

I've experienced the exact same issue. You need to make sure that your war has a graceful shutdown procedure for when it is undeployed.
You can achieve this by having an HTTP servlet that implements init() (do all your initialisation here) and destroy() (do all your cleanup here). destroy() will get called by the container when the war is undeployed.
I use Spring to define all my beans, ActiveMQ connections, message consumers and producers. My init() loads the Spring application context from a master xml file and caches a reference to it locally, then destroy() calls close() on the application context.
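For illustration, here is a minimal sketch of that idea without Spring (the class name, broker URL and queue name are placeholders, not taken from the question): a servlet owns the JMS objects, creates them in init() and closes them in destroy(), so an undeploy or redeploy does not leave an extra consumer attached to the queue.
import javax.jms.Connection;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageListener;
import javax.jms.Session;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import org.apache.activemq.ActiveMQConnectionFactory;

public class JmsLifecycleServlet extends HttpServlet implements MessageListener {

    private Connection connection;
    private Session session;
    private MessageConsumer consumer;

    @Override
    public void init() throws ServletException {
        try {
            // Create the one and only consumer when the war is deployed.
            ActiveMQConnectionFactory factory =
                    new ActiveMQConnectionFactory("tcp://localhost:61616"); // assumed broker URL
            connection = factory.createConnection();
            connection.start();
            session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            consumer = session.createConsumer(session.createQueue("MY.QUEUE")); // assumed queue name
            consumer.setMessageListener(this);
        } catch (JMSException e) {
            throw new ServletException(e);
        }
    }

    @Override
    public void destroy() {
        // Called by the container on undeploy/redeploy: tear down in reverse order
        // so no stale consumer stays registered on the queue.
        try { if (consumer != null) consumer.close(); } catch (JMSException ignored) { }
        try { if (session != null) session.close(); } catch (JMSException ignored) { }
        try { if (connection != null) connection.close(); } catch (JMSException ignored) { }
    }

    @Override
    public void onMessage(Message message) {
        // handle the message here
    }
}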

Related

Producers not getting cleaned while using CachingConnectionFactory

It seems the JMS producer is not getting garbage collected and stays alive after delivering messages to the queue. I'm using Spring 3.2.2 and CachingConnectionFactory with a keep-alive setting for sending messages.
The producer count keeps increasing every time I send a message.
Is it related to the Spring version I am using, or am I doing something wrong in my configuration?
You need to call the close() method on your MessageProducer. As per the Java docs:
void close()
throws JMSException
Closes the message producer.
Since a provider may allocate some resources on behalf of a MessageProducer outside the Java virtual machine, clients should close them when they are not needed. Relying on garbage collection to eventually reclaim these resources may not be timely enough.
As per the Spring CachingConnectionFactory docs:
NOTE: This ConnectionFactory requires explicit closing of all Sessions
obtained from its shared Connection. This is the usual recommendation
for native JMS access code anyway. However, with this
ConnectionFactory, its use is mandatory in order to actually allow for
Session reuse.
So you need to use the cached session proxy (getCachedSessionProxy rather than getSession) and, once done sending the message, call close() in a finally block. As per the source code, the close() call on this session proxy is handled such that the session and MessageProducer are reused. Gary's comments state the same.
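As a rough sketch of that pattern (the class and method names here are illustrative, not taken from the question's configuration): obtain the session from the caching factory's shared connection, send, and close the producer and session in a finally block so the cached proxies are handed back for reuse instead of piling up.
import javax.jms.Connection;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import org.springframework.jms.connection.CachingConnectionFactory;

public class CachedProducerExample {

    public void send(CachingConnectionFactory cachingFactory, String queueName, String text)
            throws JMSException {
        Connection connection = cachingFactory.createConnection(); // shared, cached connection
        Session session = null;
        MessageProducer producer = null;
        try {
            session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue(queueName);
            producer = session.createProducer(queue);
            producer.send(session.createTextMessage(text));
        } finally {
            // close() on the cached proxies returns them to the cache for reuse
            // rather than leaving producers to accumulate.
            if (producer != null) producer.close();
            if (session != null) session.close();
        }
    }
}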

JMS send and recv synchronization

I have a question about the best pattern for JMS message send and receive synchronization...
I have a C++ client talking to a J2EE server using JAX-RS (REST) over HTTP. On the server side I have two EJBs - one for resource manipulation and the other for session state tracking (@Singleton). I need to notify the client when something is changed, created or deleted on the server. So I took this approach:
1 - When the client connects and logs in, the session bean creates a temporary JMS queue (non-transactional) with code like:
connection = connectionFactory.createConnection();
session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
connection.start();
producer = session.createProducer(queue);
consumer = session.createConsumer(queue);
2 - The client calls the "listener" REST resource, which calls:
public void listen() {
    ...
    consumer.receive(timeout);
    ...
}
and blocks until it gets a message or the timeout expires.
3 - When I make some changes in the resource bean, it calls:
public void update(...) {
    em.update(...);
    producer.send(session.createMessage(CHANGE_NOTIFY_MESSAGE));
}
4 - "listener" resource get the message from JMS and returns it to client with changed (created) object information and than client calls "get" method to get created or changed object.
My problem is that "listener" gets message before changes are written to database (I'm using JPA), so when client asks for created or changed object, this one doesn't exist yet or has old information.
How I can modify my alghoritm to be notified only after changes are saved to database?
Thank you for ideas in advance)))
As Steve C says, I think you need to make the session transactional. An example:
// do the operation
em.getTransaction().begin();
em.update(p);
em.getTransaction().commit();
// now you can send
producer.send(session.createMessage(CHANGE_NOTIFY_MESSAGE));
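If you instead follow the first suggestion literally and make the JMS session transactional, the message is not delivered to the consumer until session.commit() is called, so you can commit the database changes first and only then commit the JMS session. A minimal sketch under that assumption (MyEntity is a placeholder type, em.merge and the text payload stand in for the question's pseudocode, and the session is assumed to have been created with createSession(true, Session.SESSION_TRANSACTED)):
public void update(MyEntity entity) throws JMSException {
    em.getTransaction().begin();
    em.merge(entity);                  // stands in for the question's em.update(...)
    em.getTransaction().commit();      // database changes are committed and visible
    producer.send(session.createTextMessage("CHANGE_NOTIFY_MESSAGE")); // placeholder payload
    session.commit();                  // with a transacted session, delivery only happens here
}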

Rolling upgrade with MDBs

We have a Java EE application deployed on a Glassfish 3.1.2 cluster which provides a REST API using JAX-RS. We regularly deploy new versions of the application by deploying an EAR to a duplicate cluster instance, then update the HTTP load balancer to send traffic to the updated instance instead of the old one.
This allows us to upgrade with no loss of availability as described here: http://docs.oracle.com/cd/E18930_01/html/821-2426/abdio.html#abdip. We are frequently making significant changes to the application, which makes the new versions "incompatible" (which is why we use two clusters).
We now have to provide a message queue interface to the application for some high-throughput internal messaging (from C++ producers). However, with Message Driven Beans I cannot see how it is possible to upgrade the application without any service disruption.
The options I have investigated are:
Single remote JMS queue (openMQ)
Producers send messages to a single message queue; messages are handled by an MDB. When we start a second cluster instance, messages should be load-balanced to the upgraded cluster, but when we stop the "old" cluster, outstanding transactions will be lost.
I considered using JMX to disable producers/consumers on that message queue during the upgrade, but that only pauses message delivery. Outstanding messages will still be lost when we disable the old cluster (I think?).
I also considered ditching the @MessageDriven annotation and creating a MessageConsumer manually. This does seem to work, but the MessageConsumer cannot then access other EJBs using the @EJB annotation (as far as I know):
// Singleton bean with start()/stop() functions that
// enable/disable message consumption
@Singleton
@Startup
public class ServerControl {

    private boolean running = false;

    @Resource(lookup = "jms/TopicConnectionFactory")
    private TopicConnectionFactory topicConnectionFactory;

    @Resource(lookup = "jms/MyTopic")
    private Topic topic;

    private Connection connection;
    private Session session;
    private MessageConsumer consumer;

    public ServerControl() {
        this.running = false;
    }

    public void start() throws JMSException {
        if (this.running) return;

        connection = topicConnectionFactory.createConnection();
        session = connection.createSession(false, Session.DUPS_OK_ACKNOWLEDGE);
        consumer = session.createConsumer(topic);
        consumer.setMessageListener(new MessageHandler());

        // Start the message queue handlers
        connection.start();
        this.running = true;
    }

    public void stop() throws JMSException {
        if (!this.running) return;

        // Stop the message queue handlers
        consumer.close();
        this.running = false;
    }
}
// MessageListener has to invoke functions defined in other EJBs
@Stateless
public class MessageHandler implements MessageListener {

    @EJB
    SomeEjb someEjb; // This is null

    public MessageHandler() {
    }

    @Override
    public void onMessage(Message message) {
        // This works but someEjb is null unless I
        // use the @MessageDriven annotation, but then I
        // can't gracefully disconnect from the queue
    }
}
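(As a side note on why someEjb is null above: @EJB injection is only performed on instances the container manages, and MessageHandler is created here with new. One possible workaround, sketched below purely as an assumption, is a manual JNDI lookup using the EJB 3.1 portable naming scheme; the application, module and bean names in the lookup string and the handle() method are illustrative, not from the original code.)
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class MessageHandler implements MessageListener {

    private final SomeEjb someEjb;

    public MessageHandler() {
        try {
            // Manual lookup instead of @EJB injection, since this instance is not container-managed.
            InitialContext ctx = new InitialContext();
            someEjb = (SomeEjb) ctx.lookup("java:global/myApp/myModule/SomeEjb"); // assumed JNDI name
        } catch (NamingException e) {
            throw new IllegalStateException("Could not look up SomeEjb", e);
        }
    }

    @Override
    public void onMessage(Message message) {
        someEjb.handle(message); // illustrative business method
    }
}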
Local/Embedded JMS queue for each cluster
Clients would have to connect to two different message queue brokers (one for each cluster).
Clients would have to be notified that a cluster instance is going down and stop sending messages to the queues on that broker.
Generally much less convenient and tidy than the existing HTTP solution.
Alternative message queue providers
Connect Glassfish up to a different type of message queue or a different vendor (e.g. Apache OpenMQ); perhaps one of these has the ability to balance traffic away from a particular set of consumers?
I have assumed that disabling the application will just "kill" any outstanding transactions. If disabling the application allows existing transactions to complete, then I could just do that after bringing the second cluster up.
Any help would be appreciated! Thanks in advance.
If you use high availability then all of your messages for the cluster will be stored in a single data store, as opposed to the local data store on each instance. You could then configure both clusters to use the same store. Then, when shutting down the old cluster and spinning up the new one, you have access to all the messages.
This is a good video that helps to explain high-availability JMS for GlassFish.
I don't understand your assumption that when we stop the "old" cluster, outstanding transactions will be lost. The MDBs will be allowed to finish their message processing before the application stops and any unacknowledged messages will be handled by the "new" cluster.
If load balancing between the old and new versions is an issue, I would put the MDBs into a separate .ear and stop the old MDBs as soon as the new MDBs are online, or even before that, if your use case allows for a delay in message processing until the new version is deployed.

Can I send messages to a JMS queue from outside the app server?

As I understand it, a J2EE container is required to include a JMS provider. Is it possible for a standalone Java application to send messages to a JMS queue provided by the container? If so, how do I access the JNDI lookups from outside the container?
(I am trying this with Geronimo if it makes any difference, but I am hoping there is a standard way of doing this.)
You should be able to create an InitialContext that uses the JNDI server in Geronimo. You can then use this to look up your JMS connection factory and queue.
The following example was adapted from http://forums.sun.com/thread.jspa?threadID=5283256 to use the Geronimo JNDI Factory.
Context jndiContext = null;
ConnectionFactory connectionFactory = null;
Connection connection = null;
Session session = null;
Queue queue = null;
MessageProducer messageProducer = null;

try
{
    //[1] Create a JNDI API InitialContext object.
    Hashtable properties = new Hashtable(2);
    // CHANGE these to match Geronimo's JNDI service
    properties.put(Context.INITIAL_CONTEXT_FACTORY, "org.apache.openejb.client.RemoteInitialContextFactory");
    properties.put(Context.PROVIDER_URL, "ejbd://127.0.0.1:4201");
    jndiContext = new InitialContext(properties);

    //[2] Look up connection factory and queue.
    connectionFactory = (ConnectionFactory) jndiContext.lookup("jms/ConnectionFactory");
    queue = (Queue) jndiContext.lookup("jms/Queue");

    //[3]
    // - Create connection
    // - Create session from connection; false means session is not transacted.
    // - Create sender and text message.
    // - Send messages, varying text slightly.
    connection = connectionFactory.createConnection();
    session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
    messageProducer = session.createProducer(queue);

    // send a message
    TextMessage message = session.createTextMessage(this.jTextSend.getText());
    messageProducer.send(message);

    // example for sending an object
    //ObjectMessage message = session.createObjectMessage();
    //MyObj myObj = new MyObj("Name"); // this class must be serializable
    //message.setObject(myObj);
    //messageProducer.send(message);
}
catch (Exception ex)
{
    LOG.error(ex);
}
finally
{
    if (connection != null)
    {
        try
        {
            connection.close();
        }
        catch (JMSException e)
        {
            LOG.error(e);
        }
    }
}
You can place messages in a JMS queue without an application server.
However, you'll need to know how to get to the JMS provider directly, without using JNDI, since that is provided by the Java EE application server.
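For example, Geronimo's bundled JMS provider is ActiveMQ, so one way of going to the provider directly, sketched here with a placeholder broker URL and queue name, is to use the ActiveMQ client classes and skip JNDI entirely:
import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class DirectSender {

    public static void main(String[] args) throws Exception {
        // Connect straight to the broker, bypassing the app server's JNDI.
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616"); // assumed broker URL
        Connection connection = factory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("MyQueue"); // assumed queue name
            MessageProducer producer = session.createProducer(queue);
            producer.send(session.createTextMessage("hello from outside the container"));
        } finally {
            connection.close();
        }
    }
}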
You can do it, and there may be multiple ways depending on the thin client that is accessing the queue. The example given by @pjp will work provided you have the correct jar files for accessing the server in question, including a jar which provides your application with a JNDI instance. These jars should be supplied by the vendor and may include instructions on how to connect without using JNDI as well, although I think the JNDI approach is the simplest and keeps your code consistent both on and off the server.
Each vendor will have different jars to provide client access; in IBM's case, they are different for the internal JMS provider vs. WebSphere MQ (since they are two different implementations).

Post a message to a remote JMS queue using JBoss

This looks simple but I can't find a simple answer.
I want to open a connection to a remote JMS broker (IP and port are known), open a session to a specific queue (name known) and post a message to this queue.
Is there any simple Java API (standard if possible) to do that?
EDIT
OK, I understand now that JMS is a driver spec just like JDBC and not a communication protocol as I thought.
Given that I am running in JBoss, I still don't understand how to create a JBossConnectionFactory.
EDIT
I actually gave the problem some thought, and if JMS needs to be treated the same as JDBC, then I need to use a client provided by my MQ implementation. Since we are using SonicMQ for our broker, I decided to embed the sonic_Client.jar library provided with SonicMQ.
This is working in a standalone Java application and in our JBoss service.
Thanks for the help
You'll need to use JMS, create a QueueConnectionFactory and go from there. Exactly how you create the QueueConnectionFactory will be vendor-specific (JMS is basically a driver spec for message queues just as JDBC is for databases), but with IBM MQ it looks something like this:
MQQueueConnectionFactory connectionFactory = new MQQueueConnectionFactory();
connectionFactory.setHostName(<hostname>);
connectionFactory.setPort(<port>);
connectionFactory.setTransportType(JMSC.MQJMS_TP_CLIENT_MQ_TCPIP);
connectionFactory.setQueueManager(<queue manager>);
connectionFactory.setChannel("SYSTEM.DEF.SVRCONN");

QueueConnection queueConnection = connectionFactory.createQueueConnection();
QueueSession queueSession = queueConnection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
Queue queue = queueSession.createQueue(<queue name>);
QueueSender queueSender = queueSession.createSender(queue);
QueueReceiver queueReceiver = queueSession.createReceiver(queue);
EDIT (following question edit)
The best way to access a remote queue, or any queue for that matter, is to add a Queue instance to the JNDI registry. For remote queues this is achieved using MBeans that add the Queue instance when the server starts.
Take a look at http://www.jboss.org/community/wiki/UsingWebSphereMQSeriesWithJBossASPart4, which, while it's an IBM MQ example, is essentially what you have to do to connect to any remote queue.
If you look at jbossmq-destinations-service.xml and org.jboss.mq.server.jmx you'll see the MBeans you need to create in relation to a JBoss queue.
Here is the code we used to connect to the SonicMQ broker using the sonic_Client.jar library:
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Session;

public class JmsClient
{
    public static void main(String[] args) throws JMSException
    {
        ConnectionFactory factory = new progress.message.jclient.ConnectionFactory("tcp://<host>:<port>", "<user>", "<password>");
        Connection connection = factory.createConnection();
        try
        {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            try
            {
                MessageProducer producer = session.createProducer(session.createQueue("<queue>"));
                try
                {
                    producer.send(session.createTextMessage("<message body>"));
                }
                finally
                {
                    producer.close();
                }
            }
            finally
            {
                session.close();
            }
        }
        finally
        {
            connection.close();
        }
    }
}
Actually, I'm using JBoss 4, and JNDI is not difficult to use.
First of all, you have to know where your JNDI service is running.
In my JBoss (conf\jboss-service.xml) I have:
<mbean code="org.jboss.naming.NamingService" name="jboss:service=Naming" xmbean-dd="resource:xmdesc/NamingService-xmbean.xml">
    ...
    <attribute name="Port">7099</attribute>
    ...
</mbean>
This is important: this is the port you want to connect to.
Now you can easily connect to JNDI using this code:
Hashtable<String, String> contextProperties = new Hashtable<String, String>();
contextProperties.put(Context.INITIAL_CONTEXT_FACTORY, "org.jnp.interfaces.NamingContextFactory");
contextProperties.put(Context.PROVIDER_URL, "jnp://localhost:7099");
InitialContext initContext = new InitialContext(contextProperties);
Now that you have the context, it's very similar to @Nick Holt's answer, except for the connection factory creation; you have to use:
QueueConnectionFactory connFactory = (QueueConnectionFactory) initContext.lookup("ConnectionFactory");
Also, you do not need to create a queue if one is already deployed:
Queue queue = (Queue) initContext.lookup("queueName");
All the code above was tested with JBoss 4.2.2 GA and JBossMQ (JBossMQ was, if I'm correct, replaced in 4.2.3 with JBoss messaging).
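To round this off, a short sketch (mirroring the earlier answers, with a placeholder payload) of sending a message with the connFactory and queue looked up above:
QueueConnection queueConnection = connFactory.createQueueConnection();
try {
    QueueSession queueSession = queueConnection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
    QueueSender sender = queueSession.createSender(queue);
    sender.send(queueSession.createTextMessage("hello via JBoss JNDI")); // placeholder payload
} finally {
    queueConnection.close();
}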
