Detecting when an asynchronous JMS MessageConsumer has an Exception?

I'm processing messages using a JMS MessageConsumer with a MessageListener. If something happens that causes the MessageConsumer to stop receiving and processing messages -- for example, if the underlying connection closes -- how can I detect it? There doesn't seem to be any notification mechanism that I can find in the spec.
I think the question is clear as is, but if you'd like me to post code to clarify the question, just ask!
In case it's important, I'm using ActiveMQ 5.8, although obviously I'd like a scheme that's not implementation-specific.

Use ExceptionListener
If the JMS system detects a problem, it calls the listener's onException method:
public class MyConsumer implements ExceptionListener, MessageListener {

    private void init() throws JMSException {
        Connection connection = ... // create connection
        connection.setExceptionListener(this);
        connection.start();
    }

    @Override
    public void onException(JMSException e) {
        String errorCode = e.getErrorCode();
        Exception ex = e.getLinkedException();
        // clean up resources, or attempt to reconnect
    }

    @Override
    public void onMessage(Message m) {
        ...
    }
}
Not much to it, really; the above is standard practice for standalone consumers. It's not implementation-specific either; quite the contrary, ExceptionListener is part of the JMS spec, so all compliant providers support it.
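As a rough illustration of the "attempt to reconnect" option, one possible onException body might look like this (a sketch only; createConnectionAndConsumer() is a hypothetical helper that rebuilds the connection, session and consumer):
// Sketch: retry the connection from onException with a simple backoff loop.
@Override
public void onException(JMSException e) {
    System.err.println("Connection lost, error code: " + e.getErrorCode());
    while (true) {
        try {
            Thread.sleep(5000);                // back off before retrying
            createConnectionAndConsumer();     // hypothetical helper: rebuild connection, session, consumer
            break;                             // reconnected successfully
        } catch (JMSException retryFailed) {
            // broker still unreachable; keep retrying
        } catch (InterruptedException interrupted) {
            Thread.currentThread().interrupt();
            break;                             // give up if this thread is interrupted
        }
    }
}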

Related

How can I acknowledge a JMS message on another thread and request for redelivery of unacknowledged messages?

Step 1: I need to receive message by one thread.
Step 2: Process and sending ack and redelivery request (throwing exception) by another thread.
Sample code:
List<Message> list = new ArrayList<>();

@JmsListener(destination = "${jms.queue-name}", concurrency = "${jms.max-thread-count}")
public void receiveMessage(Message message) throws JMSException, UnsupportedEncodingException {
    list.add(message);
}

void run() {
    for (Message message : list) {
        // need to send ack or throw exception for redelivery if an error occurs
    }
}
Now another thread will start and process the list which contains data then how can I send an acknowledgement or throw an exception for redelivery?
Typically you'd let your framework (e.g. Spring) deal with concurrent message processing. This is, in fact, one of the benefits of such frameworks. I don't see any explicit benefit to dumping all the messages into a List and then manually spawning a thread to process it. Spring is already doing this for you via @JmsListener, by invoking receiveMessage in a thread and providing configurable concurrency.
Furthermore, if you want to trigger redelivery then you'll need to use a transacted JMS session and invoke rollback() but JMS sessions are not threadsafe so you'll have to control access to it somehow. This will almost certainly make your code needlessly complex.
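For illustration, this is roughly what manual redelivery control looks like with a plain transacted JMS session (a sketch only; connectionFactory, the queue name, and process() are assumptions, and the Session must stay confined to a single thread):
// Sketch: transacted session; rollback() causes the broker to redeliver.
Connection connection = connectionFactory.createConnection();
connection.start();
Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
MessageConsumer consumer = session.createConsumer(session.createQueue("myQueue"));

Message message = consumer.receive(5000);
try {
    process(message);     // hypothetical processing method
    session.commit();     // acknowledges the message
} catch (Exception e) {
    session.rollback();   // broker will redeliver the message
}
// close consumer, session and connection when done (omitted)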

JMS client that doesn't terminate

We have a JMS client that needs to sit idle until it receives a message. When it receives a message, it will execute some functionality and then go back to an idle state. My question is, what is the best way to ensure that the client stays up and running? Is it good practice to let the JMS client software handle this or do we need to run the software as a service on the host machine (and/or do something else)? We are currently relying on the JMS client software as it seems to keep a thread active with the open connection but I am unsure if this is best practice. We use ActiveMQ as our message broker and for the client software.
Edit: Added code sample
Below is an example of how the client stays up using the JMS client connection:
import javax.jms.Connection;
import javax.jms.Destination;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageListener;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;
public class JmsTestWithoutClose implements MessageListener {

    private final Connection connection;
    private final Session session;
    private final MessageConsumer consumer;

    public static void main(String[] args) throws JMSException {
        System.out.println("Starting...");
        JmsTestWithoutClose test = new JmsTestWithoutClose("<username>", "<password>", "tcp://<host>:<port>");
        // if you uncomment the line below, the program will terminate
        // if you keep it commented, the program will NOT terminate
        // test.close();
        System.out.println("Last line of main method...");
    }

    public JmsTestWithoutClose(String username, String password, String url) throws JMSException {
        // create connection and session
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(username, password, url);
        this.connection = factory.createConnection();
        connection.start();
        this.session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Destination destination = session.createTopic("Topic_Name");
        consumer = session.createConsumer(destination);
        consumer.setMessageListener(this);
    }

    public void close() throws JMSException {
        session.close();
        connection.close();
    }

    @Override
    public void onMessage(Message message) {
        // process the message
    }
}
Is the current client implementation fulfilling your requirements? If so, I would not change it. If the current client implementation is not fulfilling your requirements then I would change it. Changing software which is working without a problem and fulfilling your needs (and with no clear foreseeable problems) simply to adhere to a "best practice" is almost certainly not the best investment of resources.
Ultimately the behavior of the application is up to the application itself. I don't see how running the application as a service or doing anything else external to the application would actually force it to stay up and running properly while it listens/waits for messages. It would be a bug if the application were programmed, for example, to use the JMS API and create a MessageListener (i.e. the class responsible for receiving messages asynchronously) and then exit without actually waiting for messages. Running such an application as a service so that the OS keeps restarting it after it incorrectly exits wouldn't be a good solution.
Having a properly written client is the best practice. Propping it up with some external mechanism is bad practice.
The sample code you provided is poorly written and has some clear problems:
The Connection and Session objects fall out of scope which means they can never be properly closed. Eventually those objects will be garbage collected by the JVM. This is improper resource management.
The only reason the application doesn't terminate completely is due to the way the ActiveMQ 5.x client is implemented (i.e. the open connection blocks the JVM process from terminating). This implementation detail is not part of the public contract provided by the JMS API and should therefore not be relied upon. If you were to use a different JMS client implementation, e.g. the ActiveMQ Artemis core JMS client, the application would terminate completely as soon as main() exited.
To fix this, your main() method should wait after creating the JmsTestWithoutClose instance. There are lots of ways to do this in Java:
Use a while loop with a Thread.sleep() where the loop's condition can be modified as needed to allow the application to exit.
Use an object specifically designed for thread coordination, like java.util.concurrent.CountDownLatch. With a CountDownLatch, the main() method can invoke await(), and when the application is finished processing messages the MessageListener implementation, for example, can invoke countDown() (a minimal sketch follows after this list).
Use a while loop reading input from the console and only continue when a special string is input (e.g. exit).
In any case, it's important that the application close all of the resources it created (e.g. sessions, connections, etc.).
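For illustration, here is a minimal sketch of the CountDownLatch approach applied to the class above (the shutdown condition is an assumption; anything that calls countDown() will do):
// Sketch: keep main() alive explicitly instead of relying on the ActiveMQ
// client's non-daemon threads. Based on the JmsTestWithoutClose class above.
import java.util.concurrent.CountDownLatch;

public class JmsTestWithClose implements MessageListener {

    private final CountDownLatch shutdownLatch = new CountDownLatch(1);

    // constructor and close() as in JmsTestWithoutClose above

    public static void main(String[] args) throws Exception {
        JmsTestWithClose test = new JmsTestWithClose("<username>", "<password>", "tcp://<host>:<port>");
        test.shutdownLatch.await();  // blocks until countDown() is called
        test.close();                // close consumer, session and connection before exiting
    }

    @Override
    public void onMessage(Message message) {
        // process the message...
        if (shouldShutDown(message)) {   // hypothetical condition for stopping
            shutdownLatch.countDown();   // releases main() so it can clean up and exit
        }
    }
}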

How to handle concurrency with an asynchronous singleton bean

I want to implement a singleton bean in Java EE, which starts a VPN connection on demand.
Thus I created a class like:
@Singleton
class VPNClient {

    private boolean connected;

    @Lock(LockType.READ)
    public boolean isConnected() {
        return this.connected;
    }

    @Asynchronous
    @Lock(LockType.WRITE)
    public void connect() {
        // do connect, including a while loop for the socket:
        while (true) {
            // read socket, do stuff like setting connected when VPN successfully established
        }
    }
}
Then I have another bean, which needs the VPN connection and tries to establish it:
class X {

    @Inject
    VPNClient client;

    private void sendStuffToVPN() throws InterruptedException {
        // call the async connect method
        client.connect();
        // wait for connect (or exception and stuff in original source)
        while (!client.isConnected()) {
            // wait for connection to be established
            Thread.sleep(5000);
        }
    }
}
My problem is that the connect() method never returns until the connection is destroyed, so the write lock it holds blocks all reads of isConnected().
[Update]
This should hopefully illustrate the problem:
Thread 1 (Bean X) calls Thread 2 (singleton bean VPNClient).connect()
Now there is an endless write lock on the singleton bean VPNClient. But because the method was called asynchronously, Thread 1 proceeds:
Thread 1 (Bean X) tries to call Thread 2 (VPNClient.isConnected()), but has to wait for the release of the write lock (which was acquired by connect()).
Then the Java EE container throws a javax.ejb.ConcurrentAccessTimeoutException because the wait hit the access timeout.
Is there a good pattern to solve this kind of concurrency problem?
@Lock(LockType.WRITE) locks all methods in the singleton bean until the called method has completed, even if the caller has moved on via @Asynchronous.
This is the correct behaviour if you think about it: concurrency problems can happen from other method calls to the bean if processing is still in progress.
The way to get around this is to set @ConcurrencyManagement(ConcurrencyManagementType.BEAN) on your singleton to handle concurrency and locking of access to the connection yourself.
Have a look at http://docs.oracle.com/javaee/6/tutorial/doc/gipvi.html#indexterm-1449 for an introduction.
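A minimal sketch of that variant, with a volatile flag standing in for the container locks (the connect body is still the question's pseudo-code):
// Sketch: bean-managed concurrency, so the long-running async connect()
// no longer holds a container write lock that blocks isConnected().
@Singleton
@ConcurrencyManagement(ConcurrencyManagementType.BEAN)
public class VPNClient {

    private volatile boolean connected;   // volatile: safe visibility across threads

    public boolean isConnected() {
        return connected;
    }

    @Asynchronous
    public void connect() {
        // do connect, including a while loop for the socket:
        while (true) {
            // read socket, set connected = true once the VPN is established
        }
    }
}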
Try this.
class X {

    private void sendStuffToVPN() throws InterruptedException {
        final VPNClient client = new VPNClient();
        // call the connect method on a separate thread
        new Thread(new Runnable() {
            public void run() {
                client.connect();
            }
        }).start();
        // wait for connect (or exception and stuff in original source)
        while (!client.isConnected()) {
            // wait for connection to be established
            Thread.sleep(5000);
        }
    }
}

Catch Exceptions inside a Message Driven Bean (MDB)

How should I handle exceptions inside an MDB? I have the funny feeling that the exception happens after the try/catch block, so I'm not able to catch and log it. GlassFish v3 then decides to redeliver the whole message. It runs into an infinite loop and writes lots of log files to the hard drive.
I'm using GlassFish v3.0.1 + EclipseLink 2.0.1
public class SaveAdMessageDrivenBean implements MessageListener {

    @PersistenceContext(unitName = "QIS")
    private EntityManager em;

    @Resource
    private MessageDrivenContext mdc;

    public void onMessage(Message message) {
        try {
            if (message instanceof ObjectMessage) {
                ObjectMessage obj = (ObjectMessage) message;
                AnalyzerResult alyzres = (AnalyzerResult) obj.getObject();
                save(alyzres);
            }
        } catch (Throwable e) {
            mdc.setRollbackOnly();
            log.log(Level.SEVERE, e.getMessage(), e);
        }
    }

    @TransactionAttribute(TransactionAttributeType.REQUIRED)
    private void save(AnalyzerResult alyzres) throws PrdItemNotFoundException {
        Some s = em.find(Some.class, somepk);
        s.setSomeField("newvalue");
        // SQL exception happens after leaving this method because of a missing field, for example
    }
}
You got a bad case of message poisoning...
The main issues I see are that:
you are calling the save() method directly in your onMessage(): this means that the container has no way to inject the proper transaction-handling proxy around the save method
in any case the save() method should have @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW) in order to commit in a separate transaction; otherwise it will join the onMessage transaction (which defaults to REQUIRED), bypass your exception-handling code, and only be committed after the successful execution of onMessage
What I would do is:
Move the save method to a new Stateless session bean:
@Stateless
public class AnalyzerResultSaver {

    @PersistenceContext(unitName = "QIS")
    private EntityManager em;

    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public void save(AnalyzerResult alyzres) throws PrdItemNotFoundException {
        Some s = em.find(Some.class, somepk);
        s.setSomeField("newvalue");
        // SQL exception happens after leaving this method
    }
}
Inject this bean in your MDB:
public class SaveAdMessageDrivenBean implements MessageListener {

    @Inject
    private AnalyzerResultSaver saver;

    @Resource
    private MessageDrivenContext mdc;

    public void onMessage(Message message) {
        try {
            if (message instanceof ObjectMessage) {
                ObjectMessage obj = (ObjectMessage) message;
                AnalyzerResult alyzres = (AnalyzerResult) obj.getObject();
                saver.save(alyzres);
            }
        } catch (Throwable e) {
            mdc.setRollbackOnly();
            log.log(Level.SEVERE, e.getMessage(), e);
        }
    }
}
Another tip: the message poisoning still exists in this code. Now it comes from the line invoking mdc.setRollbackOnly().
I'd suggest logging the exception and transferring the message to a poison queue instead, thus preventing the container from redelivering the message ad infinitum.
UPDATE:
A "poison queue" or "error queue" is simply a means to guarantee that your (hopefully recoverable) discarded messages will not be completely lost. It is used heavily in integration scenarios, where the correctness of the message data is not guaranteed.
Setting up a poison queue implies defining a destination queue or topic and redelivering the "bad" messages to that destination.
Periodically, an operator should inspect this queue (via a dedicated application) and either modify the messages and resubmit them to the "good" queue, or discard them and ask for a resubmission.
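For illustration, a minimal sketch of forwarding the bad message from the catch block instead of calling mdc.setRollbackOnly(); the jms/ConnectionFactory and jms/ErrorQueue resources are assumptions, not something defined in the question:
// Sketch only: park poison messages on an error queue for later inspection.
@Resource(lookup = "jms/ConnectionFactory")
private ConnectionFactory connectionFactory;

@Resource(lookup = "jms/ErrorQueue")
private Queue errorQueue;

private void sendToErrorQueue(Message message) {
    Connection connection = null;
    try {
        connection = connectionFactory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(errorQueue);
        producer.send(message);   // the original message is preserved for an operator to inspect
    } catch (JMSException e) {
        log.log(Level.SEVERE, "Could not forward message to error queue", e);
    } finally {
        if (connection != null) {
            try { connection.close(); } catch (JMSException ignore) { }
        }
    }
}
The onMessage catch block would then call sendToErrorQueue(message) and skip mdc.setRollbackOnly(), so the container considers the original delivery successful and does not redeliver it.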
I believe that the code that you have posted is mostly OK.
Your use of
@TransactionAttribute(TransactionAttributeType.REQUIRED)
is completely ignored, because this annotation (like most others) can only be applied to business methods (including onMessage). That doesn't matter though, because your onMessage method gets an implicit one (REQUIRED) for free.
This leads to the fact that message handling is transactional in a Java EE container. If the transaction fails for any reason the container is required to try and deliver the message again.
Now, your code is catching the exception from the save method, which is good. But then you're explicitly marking the transaction for rollback. This has the effect of telling the container that message delivery failed and that it should try again.
Therefore, if you remove:
mdc.setRollbackOnly();
the container will stop trying to redeliver the message.
If I'm not mistaken, you're letting the container handle the transactions. In that case the entity manager queues the operations and flushes them after the method finishes; that's why you're seeing exceptions after the method has returned.
Calling em.flush() directly as a final step in the method will execute all the pending SQL of the transaction, so the exceptions are thrown there instead of later, when the container flushes while committing the transaction.
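A minimal sketch based on the save() method from the question:
// Sketch: flushing at the end of save() makes the JPA provider execute the
// pending SQL inside this method, so the failure lands in onMessage()'s try/catch.
private void save(AnalyzerResult alyzres) throws PrdItemNotFoundException {
    Some s = em.find(Some.class, somepk);
    s.setSomeField("newvalue");
    em.flush();   // any SQL/constraint error is thrown from here, not at commit time
}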

Rolling upgrade with MDBs

We have a Java EE application deployed on a Glassfish 3.1.2 cluster which provides a REST API using JAX-RS. We regularly deploy new versions of the application by deploying an EAR to a duplicate cluster instance, then update the HTTP load balancer to send traffic to the updated instance instead of the old one.
This allows us to upgrade with no loss of availability as described here: http://docs.oracle.com/cd/E18930_01/html/821-2426/abdio.html#abdip. We are frequently making significant changes to the application, which makes the new versions "incompatible" (which is why we use two clusters).
We now have to provide a message-queue interface to the application for some high throughput internal messaging (from C++ producers). However, using Message Driven Beans I cannot see how it is possible to upgrade an application without any service disruption?
The options I have investigated are:
Single remote JMS queue (openMQ)
Producers send messages to a single message queue, messages are handled by a MDB. When we start a second cluster instance, messages should be load-balanced to the upgraded cluster, but when we stop the "old" cluster, outstanding transactions will be lost.
I considered using JMX to disable producers/consumers on that message queue during the upgrade, but that only pauses message delivery. Outstanding messages will still be lost when we disable the old cluster (I think?).
I also considered ditching the @MessageDriven annotation and creating a MessageConsumer manually. This does seem to work, but the MessageConsumer cannot then access other EJBs using the @EJB annotation (as far as I know):
// Singleton bean with start()/stop() functions that
// enable/disable message consumption
@Singleton
@Startup
public class ServerControl {

    private boolean running = false;

    @Resource(lookup = "jms/TopicConnectionFactory")
    private TopicConnectionFactory topicConnectionFactory;

    @Resource(lookup = "jms/MyTopic")
    private Topic topic;

    private Connection connection;
    private Session session;
    private MessageConsumer consumer;

    public ServerControl() {
        this.running = false;
    }

    public void start() throws JMSException {
        if (this.running) return;

        connection = topicConnectionFactory.createConnection();
        session = connection.createSession(false, Session.DUPS_OK_ACKNOWLEDGE);
        consumer = session.createConsumer(topic);
        consumer.setMessageListener(new MessageHandler());

        // Start the message queue handlers
        connection.start();
        this.running = true;
    }

    public void stop() throws JMSException {
        if (this.running == false) return;

        // Stop the message queue handlers
        consumer.close();
        this.running = false;
    }
}
// MessageListener has to invoke functions defined in other EJBs
@Stateless
public class MessageHandler implements MessageListener {

    @EJB
    SomeEjb someEjb; // This is null

    public MessageHandler() {
    }

    @Override
    public void onMessage(Message message) {
        // This works but someEjb is null unless I
        // use the @MessageDriven annotation, but then I
        // can't gracefully disconnect from the queue
    }
}
Local/Embedded JMS queue for each cluster
Clients would have to connect to two different message queue brokers (one for each cluster).
Clients would have to be notified that a cluster instance is going down and stop sending messages to the queues on that broker.
Generally much less convenient and tidy than the existing http solution.
Alternative message queue providers
Connect Glassfish up to a different type of message queue or different vendor (e.g. Apache OpenMQ), perhaps one of these has the ability to balance traffic away from a particular set of consumers?
I have assumed that disabling the application will simply "kill" any outstanding transactions. If disabling the application allows existing transactions to complete, then I could just do that after bringing the second cluster up.
Any help would be appreciated! Thanks in advance.
If you use high availability then all of your messages for the cluster will be stored in a single data store, as opposed to the local data store on each instance. You could then configure both clusters to use the same store. Then, when shutting down the old cluster and spinning up the new one, you have access to all the messages.
This is a good video that helps explain highly available JMS for GlassFish.
I don't understand your assumption that when we stop the "old" cluster, outstanding transactions will be lost. The MDBs will be allowed to finish their message processing before the application stops and any unacknowledged messages will be handled by the "new" cluster.
If the load balancing between the old and new versions is an issue, I would put the MDBs into a separate .ear and stop the old MDBs as soon as the new MDBs are online, or even before that if your use case allows for a delay in message processing until the new version is deployed.
