JMS client that doesn't terminate - java

We have a JMS client that needs to sit idle until it receives a message. When it receives a message, it will execute some functionality and then go back to an idle state. My question is, what is the best way to ensure that the client stays up and running? Is it good practice to let the JMS client software handle this or do we need to run the software as a service on the host machine (and/or do something else)? We are currently relying on the JMS client software as it seems to keep a thread active with the open connection but I am unsure if this is best practice. We use ActiveMQ as our message broker and for the client software.
Edit: Added code sample
Below is an example of how the client stays up using the JMS client connection:
import javax.jms.Connection;
import javax.jms.Destination;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageListener;
import javax.jms.Session;

import org.apache.activemq.ActiveMQConnectionFactory;

public class JmsTestWithoutClose implements MessageListener {

    private final Connection connection;
    private final Session session;
    private final MessageConsumer consumer;

    public static void main(String[] args) throws JMSException {
        System.out.println("Starting...");
        JmsTestWithoutClose test = new JmsTestWithoutClose("<username>", "<password>", "tcp://<host>:<port>");
        // if you uncomment the line below, the program will terminate
        // if you keep it commented, the program will NOT terminate
        // test.close();
        System.out.println("Last line of main method...");
    }

    public JmsTestWithoutClose(String username, String password, String url) throws JMSException {
        // create connection and session
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(username, password, url);
        this.connection = factory.createConnection();
        connection.start();
        this.session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Destination destination = session.createTopic("Topic_Name");
        consumer = session.createConsumer(destination);
        consumer.setMessageListener(this);
    }

    public void close() throws JMSException {
        session.close();
        connection.close();
    }

    @Override
    public void onMessage(Message message) {
        // process the message
    }
}

Is the current client implementation fulfilling your requirements? If so, I would not change it. If the current client implementation is not fulfilling your requirements then I would change it. Changing software which is working without a problem and fulfilling your needs (and with no clear foreseeable problems) simply to adhere to a "best practice" is almost certainly not the best investment of resources.
Ultimately the behavior of the application is up to the application itself. I don't see how running the application as a service or doing anything else external to the application would actually force it to stay up and running properly while it listens/waits for messages. It would be a bug if the application were programmed, for example, to use the JMS API and create a MessageListener (i.e. the class responsible for receiving messages asynchronously) and then exit without actually waiting for messages. Running such an application as a service so that the OS keeps restarting it after it incorrectly exits wouldn't be a good solution.
Having a properly written client is the best practice. Propping it up with some external mechanism is bad practice.
The sample code you provided is poorly written and has some clear problems:
The Connection and Session objects fall out of scope which means they can never be properly closed. Eventually those objects will be garbage collected by the JVM. This is improper resource management.
The only reason the application doesn't terminate completely is due to the way the ActiveMQ 5.x client is implemented (i.e. the open connection blocks the JVM process from terminating). This implementation detail is not part of the public contract provided by the JMS API and should therefore not be relied upon. If you were to use a different JMS client implementation, e.g. the ActiveMQ Artemis core JMS client, the application would terminate completely as soon as main() exited.
To fix this, your main() method should wait after creating the JmsTestWithoutClose instance. There are lots of ways to do this in Java:
Use a while loop with a Thread.sleep() where the loop's condition can be modified as needed to allow the application to exit.
Use an object specifically designed for thread coordination like java.util.concurrent.CountDownLatch. Using a CountDownLatch, the main() method can invoke await(), and when the application is finished processing messages the MessageListener implementation, for example, can invoke countDown() (see the sketch after this list).
Use a while loop reading input from the console and only continue when a special string is input (e.g. exit).
In any case, it's important that the application close all of the resources it created (e.g. sessions, connections, etc.).
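For illustration, here is a minimal sketch of the CountDownLatch approach. The shutdown trigger here is just a JVM shutdown hook to keep the example self-contained; in your application the MessageListener (or whatever decides the work is done) would hold a reference to the latch and invoke countDown():

import java.util.concurrent.CountDownLatch;

public class JmsTestMain {

    public static void main(String[] args) throws Exception {
        CountDownLatch shutdownLatch = new CountDownLatch(1);
        JmsTestWithoutClose test =
                new JmsTestWithoutClose("<username>", "<password>", "tcp://<host>:<port>");

        // count down on Ctrl-C / SIGTERM so this example can exit cleanly
        Runtime.getRuntime().addShutdownHook(new Thread(shutdownLatch::countDown));

        shutdownLatch.await(); // main() blocks here instead of falling through
        test.close();          // sessions and connections are closed deterministically
    }
}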

Related

WebSocket breaks after Tomcat 7 to 8 upgrade, using NIO Connector, while having ThreadLocal in code along with Spring’s TextWebSocketHandler

The WebSocket connection works using Spring WebSocket 4.1.6's TextWebSocketHandler. For connection establishment we have a HandshakeInterceptor, which sets the user context in its beforeHandshake() method:
AuthenticatedUser authUser = (AuthenticatedUser) httpServletRequest
        .getSession().getAttribute(WEB_SOCKET_USER);
UserContextHolder.setUserContext(new UserContext(authUser));
The UserContextHolder class is modeled like Spring’s org.springframework.context.i18n.LocaleContextHolder - a class to hold the current UserContext in thread local storage:
private static ThreadLocal<UserContext> userContextHolder = new InheritableThreadLocal<UserContext>();

public static void setUserContext(UserContext userContext) {
    userContextHolder.set(userContext);
    if (userContext != null) {
        LocaleContextHolder.setLocaleContext(userContext);
    } else {
        LocaleContextHolder.setLocaleContext(new LocaleContext() {
            public Locale getLocale() {
                return UserContext.getDefaultLocale();
            }
        });
    }
}
This UserContext holds the thread's authenticated user information for the entire app, not only Spring (we also run other software, like Quartz, coexisting in this codebase on the JVM, so we need to communicate between them).
Everything works okay when we run on the previous Tomcat 7 with standard BIO Connector. The problem arises with the upgrade to Tomcat 8 with the new NIO Connector being enabled by default.
When the WebSocket messages arrive, they are processed in a call to service methods that is validated with the @SecurityValidation annotation; the MethodInterceptor checks whether the given thread has the UserContext set:
AuthenticatedUser user = UserContextHolder.getUser();
if (user == null) {
    throw new Exception(/* ... */);
}
But it is null, so the exception gets thrown.
We believe the problem is in the threading change after the switch from the BIO to the NIO connector.
BIO scenario: we have one thread per WebSocket, so one handshake sets one UserContext and everything runs on that exact thread. This works fine even when there are more sockets, because when we have, e.g., 4 different WebSockets open, there are 4 different threads handling them; that's why the ThreadLocal usage works well.
NIO scenario: the non-blocking IO concept is about reducing the number of threads (simplifying for our case); internally, the third-party NIO connector uses NIO's Selector to manage the workload on a single thread (with an event loop, I guess; need to confirm). Since a single thread now handles all the WebSockets (or at least some part of them), the unexpected exception is thrown.
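The core of this can be reproduced without any container: a ThreadLocal value set on one thread is simply invisible from another thread, which is what happens when the connector dispatches the handshake and the message processing to different pooled threads. A standalone illustration (not our application code; note that even InheritableThreadLocal only propagates to threads created by the setting thread, which pooled connector threads are not):

public class ThreadLocalDemo {

    private static final ThreadLocal<String> USER = new ThreadLocal<>();

    public static void main(String[] args) throws InterruptedException {
        USER.set("alice");                          // the "handshake" thread sets the value
        System.out.println("main: " + USER.get()); // prints "main: alice"

        // "message processing" happens on another thread, as with the NIO connector
        Thread worker = new Thread(() -> System.out.println("worker: " + USER.get()));
        worker.start();                             // prints "worker: null"
        worker.join();
    }
}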
I'm not sure why a UserContext that was set gets nullified later; the code investigation brings us no clues, which is why we think this might be a bug (or something like one).
The UserContext's ThreadLocal being null under NIO seems to be the cause of the exception. Has anyone used ThreadLocal with the NIO connector? What is the best way to migrate a Tomcat 7 BIO implementation to a Tomcat 8 NIO implementation when using ThreadLocal in WebSocket communication? Thanks!

JMS Queue Resource Leak Suspected - injecting Connection, Session, and Queue

I am trying to identify where a suspected memory / resource leak is occurring with regards to a JMS Queue I have built. I am new to JMS queues, so I have used many of the standard JMS class objects to ensure stability. But somewhere in my code or configuration I am doing something wrong, and my queue is filling up or resources are slowing down, perhaps inherent to unknown deficiencies within the architecture I am attempting to implement.
When load testing my API (using Gatling), I can run 20 messages a second through (which is a tiny load) for most of a ten-minute duration. But after that, the messages seem to back up, and the ability to process them slows to a crawl. Generally, time-out errors begin to occur once requests take more than 60 seconds to complete. There is more business logic that processes data and persists it to a relational database, but none of that appears to be an issue.
Interestingly, subsequent test runs continue with the poor performance, indicating that whatever resource is leaking transcends the tests. A restart of the application clears out whatever has become bloated or is leaking. Then the tests run fast again, for the first seven or eight minutes... upon which the cycle repeats itself. Only a restart of the app clears the issue. Since the issue doesn't correct itself, even after waiting for a period of time, something has filled up resources.
When pulling the JMS calls from the logic, I am able to process hundreds of messages a second. And I can run back-to-back test runs without leaking or filling up the queue.
Although this is a Spring project, I am not using Spring's JMS Template, so I wrote my own Connection object, which I injected as a Spring Bean and implemented as a single connection to avoid creating a new connection for every JMS message I sent through.
Likewise, I configured my JMS Session to also be an injected bean, in which I use the Connection bean. That way I can persist my Connection and Session objects for sending all of my JMS messages through, which are sent one at a time. A Qpid server I am calling receives these messages. While it is possible I am exceeding its capacity to consume the messages I am producing, I expect that the resource leak is associated with my code, and not the JMS server.
Here are some code snippets to give you an idea of my approach. Any feedback is appreciated.
JmsConfiguration (key methods)
@Bean
public ConnectionFactory jmsConnectionFactory() {
    return new JmsConnectionFactory(user, pass, host);
}

@Bean(name = "jmsSession")
public Session jmsConnection() throws JMSException {
    Connection conn = jmsConnectionFactory().createConnection();
    Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
    return session; // Injected as singleton
}

@Bean(name = "jmsQueue")
public Queue jmsQueue() throws JMSException {
    return jmsConnection().createQueue(queue);
}

// Jackson's ObjectMapper is heavy enough to warrant injecting and re-using it.
@Bean
public ObjectMapper objectMapper() {
    return new ObjectMapper();
}
JmsMessageEnqueuer
@Component
public class MessageJmsEnqueuer extends CommonThreadScope {

    @Autowired
    @Qualifier("jmsSession")
    private Session jmsSession;

    @Autowired
    @Qualifier("jmsQueue")
    private Queue jmsQueue;

    @Value("${acme.jms.queue}")
    private String jmsQueueName;

    @Autowired
    @Qualifier("jmsObjectMapper")
    private ObjectMapper jmsObjectMapper;

    public void enqueue(String message, String dataType) {
        try {
            String messageAsJson = jmsObjectMapper.writeValueAsString(message);
            MessageProducer jmsMessageProducer = jmsSession.createProducer(jmsQueue);
            TextMessage jmsMessage = jmsSession.createTextMessage(messageAsJson);
            jmsMessage.setStringProperty("dataType", dataType);
            jmsMessageProducer.send(jmsMessage);
            logger.log(Level.INFO, "Message successfully sent. Queue=" + jmsQueueName + ", Message -> " + messageAsJson);
        } catch (JMSRuntimeException | JsonProcessingException jmsre) {
            String msg = "JMS Message Processing encountered an error...";
            logService.severe(logger, messagesBuilder() ... msg)
        }
        // Skip the close() method to persist the connection...
        // Reconnect logic exists to reset an expired connection from the server.
    }
}
I was able to solve my resource leak / deadlock issue simply by rewriting my code to use the simplified API provided with the release of JMS 2.0. Although I was never able to determine which of the Connection / Session / Queue objects was giving my code grief, using the Context object to build my connection and session was the golden ticket in this case.
Upon switching to the simplified API (since I was already pulling in the JMS 2.0 dependency), the resource leak immediately vanished! This leads me to believe that the simplified API does more than just give the developer a more convenient API to code against. While that is already an advantage to begin with (even without the few features that the simplified API doesn't support), it is now clear to me that the underlying connection and session objects are being managed by the API, which resolved whatever was filling up or deadlocking.
Furthermore, because the resource build-up was no longer occurring, I was able to triple the number of messages I passed through, allowing me to process 60 users a second, instead of 20. That is a significant increase, and I have fixed the compatibility issues that prevented me from using the simplified JMS API to begin with.
While I would have liked to identify precisely what was fouling up the code, this works as a solution. Plus, the fact that version 2.0 of JMS was released in April of 2013 would indicate that the simplified API is definitely the preferred solution.
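For reference, the shape of the simplified-API code is roughly the following (a sketch with placeholder names, not my production code). The key point is that the JMSContext wraps the connection and session and is AutoCloseable, so try-with-resources releases everything per use:

import javax.jms.ConnectionFactory;
import javax.jms.JMSContext;
import javax.jms.Queue;

public class SimplifiedApiSender {

    private final ConnectionFactory factory; // the JMS 2.0-capable factory already configured

    public SimplifiedApiSender(ConnectionFactory factory) {
        this.factory = factory;
    }

    public void send(String queueName, String json) {
        // the context (and the connection/session it wraps) is closed automatically
        try (JMSContext context = factory.createContext()) {
            Queue queue = context.createQueue(queueName);
            context.createProducer().send(queue, json);
        }
    }
}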
Just a guess, but MessageProducer extends AutoCloseable, suggesting it should be closed once it is no longer of use. Since you're not using try-with-resources or explicitly closing it afterwards, the jmsSession may hold more and more producers over time. I am not sure, though, whether you should close the producer per method call or re-use a single created producer.
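If that guess is right, closing the producer per call should stop the build-up. With JMS 2.0 on the classpath, MessageProducer extends AutoCloseable, so inside the existing try block it could look roughly like this (a sketch against the snippet above, not a tested fix; the surrounding method would also need to handle the checked JMSException):

try (MessageProducer jmsMessageProducer = jmsSession.createProducer(jmsQueue)) {
    TextMessage jmsMessage = jmsSession.createTextMessage(messageAsJson);
    jmsMessage.setStringProperty("dataType", dataType);
    jmsMessageProducer.send(jmsMessage); // producer is closed when the block exits
}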
Have you tried using a profiler such as VisualVM to visualize the heap and metaspace? If so, did you find any significant changes over time?

Detecting when an asynchronous JMS MessageConsumer has an Exception?

I'm processing messages using a JMS MessageConsumer with a MessageListener. If something happens that causes the MessageConsumer to stop receiving and processing messages -- for example, if the underlying connection closes -- how can I detect it? There doesn't seem to be any notification mechanism that I can find in the spec.
I think the question is clear as is, but if you'd like me to post code to clarify the question, just ask!
In case it's important, I'm using ActiveMQ 5.8, although obviously I'd like a scheme that's not implementation-specific.
Use ExceptionListener
If the JMS system detects a problem, it calls the listener's onException method:
public class MyConsumer implements ExceptionListener, MessageListener {

    private void init() {
        Connection connection = ... // create connection
        connection.setExceptionListener(this);
        connection.start();
    }

    public void onException(JMSException e) {
        String errorCode = e.getErrorCode();
        Exception ex = e.getLinkedException();
        // clean up resources, or attempt to reconnect
    }

    public void onMessage(Message m) {
        ...
    }
}
Not much to it, really. The above is standard practice for standalone consumers, and it's not implementation-specific; quite the contrary, it's part of the spec, so all JMS-compliant providers will support it.

Best practice for socket client within EJB or CDI bean

My application must open a TCP socket connection to a server and listen to periodically incoming messages.
What are the best practices to implement this in a JEE 7 application?
Right now I have something like this:
@javax.ejb.Singleton
public class MessageChecker {

    @Asynchronous
    public void startChecking() {
        // set up things
        Socket client = new Socket(...);
        [...]
        // start a loop to retrieve the incoming messages
        while ((line = reader.readLine()) != null) {
            LOG.debug("Message from socket server: " + line);
        }
    }
}
The MessageChecker.startChecking() function is called from a @Startup bean with a @PostConstruct method.
@javax.ejb.Singleton
@Startup
public class Starter {

    @Inject
    private MessageChecker checker;

    @PostConstruct
    public void startup() {
        checker.startChecking();
    }
}
Do you think this is the correct approach?
Actually it is not working well. The application server (JBoss 8 Wildfly) hangs and does not react to shutdown or re-deployment commands any more. I have the feeling that it gets stuck in the while(...) loop.
Cheers
Frank
Frank, it is bad practice to do any I/O operations while you're in an EJB context. The reason behind this is simple. When working in a cluster:
The EJBs will inherently block each other while waiting on I/O connection timeouts and all other I/O-related timeouts. That is, if the connection does not block for an unspecified amount of time; if it does, you will have to create another thread that scans for dead connections.
Only one of the EJBs will be able to connect and send/receive information; the others will just wait in line. This way your system will not scale. No matter how many EJBs you have in your cluster, only one will actually do its work.
Apparently you already ran into problems by doing that :) . JBoss 8 seems not to be able to properly create and destroy the bean.
Now, I know your bean is a #Singleton so your architecture does not rely on transactionality, clustering and distribution of reading from that socket. So you might be ok with that.
However :D, you are asking for a Java EE-compliant way of solving this. Here is what should be done:
Redesign your solution to go with JMS. It 'smells' like you are trying to provide async messaging functionality (send a message & wait for a reply). You might be using a synchronous protocol to do async messaging. Just give it a thought (a minimal sketch of the receiving side follows below).
Create a JCA compliant adapter which will be injected in your EJB as a #Resource
You will have a connection pool configurable at AS level (so you can have different values for different environments)
You will have transactionality and rollback. Of course the rollback behavior will have to be coded by you
You can inject it via a #Resource annotation
There are some adapters out there, some might fit like a glove, some might be a bit overdesigned.
Oracle JCA Adapter
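To make the first point concrete, the receiving side of a JMS redesign can be as small as a single MDB. A minimal sketch (the destination lookup name and the payload handling are placeholders, not part of your application):

import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

@MessageDriven(activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationLookup",
                                  propertyValue = "jms/IncomingMessages"), // placeholder
        @ActivationConfigProperty(propertyName = "destinationType",
                                  propertyValue = "javax.jms.Queue")
})
public class IncomingMessageListener implements MessageListener {

    @Override
    public void onMessage(Message message) {
        try {
            if (message instanceof TextMessage) {
                String line = ((TextMessage) message).getText();
                // same per-message handling the socket loop did,
                // but threading and lifecycle are managed by the container
            }
        } catch (JMSException e) {
            throw new RuntimeException(e);
        }
    }
}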

Rolling upgrade with MDBs

We have a Java EE application deployed on a Glassfish 3.1.2 cluster which provides a REST API using JAX-RS. We regularly deploy new versions of the application by deploying an EAR to a duplicate cluster instance, then update the HTTP load balancer to send traffic to the updated instance instead of the old one.
This allows us to upgrade with no loss of availability as described here: http://docs.oracle.com/cd/E18930_01/html/821-2426/abdio.html#abdip. We are frequently making significant changes to the application, which makes the new versions "incompatible" (which is why we use two clusters).
We now have to provide a message-queue interface to the application for some high-throughput internal messaging (from C++ producers). However, using Message Driven Beans, I cannot see how it is possible to upgrade the application without any service disruption.
The options I have investigated are:
Single remote JMS queue (openMQ)
Producers send messages to a single message queue, messages are handled by a MDB. When we start a second cluster instance, messages should be load-balanced to the upgraded cluster, but when we stop the "old" cluster, outstanding transactions will be lost.
I considered using JMX to disable producers/consumers to that message queue during the upgrade, but that only pauses message delivery. Outstanding messages will still be lost when we disable the old cluster (I think?).
I also considered ditching the @MessageDriven annotation and creating a MessageConsumer manually. This does seem to work, but the MessageConsumer cannot then access other EJBs using the @EJB annotation (as far as I know):
// Singleton bean with start()/stop() functions that
// enable/disable message consumption
@Singleton
@Startup
public class ServerControl {

    private boolean running = false;

    @Resource(lookup = "jms/TopicConnectionFactory")
    private TopicConnectionFactory topicConnectionFactory;

    @Resource(lookup = "jms/MyTopic")
    private Topic topic;

    private Connection connection;
    private Session session;
    private MessageConsumer consumer;

    public ServerControl() {
        this.running = false;
    }

    public void start() throws JMSException {
        if (this.running) return;
        connection = topicConnectionFactory.createConnection();
        session = connection.createSession(false, Session.DUPS_OK_ACKNOWLEDGE);
        consumer = session.createConsumer(topic);
        consumer.setMessageListener(new MessageHandler());
        // Start the message queue handlers
        connection.start();
        this.running = true;
    }

    public void stop() throws JMSException {
        if (this.running == false) return;
        // Stop the message queue handlers
        consumer.close();
        this.running = false;
    }
}

// MessageListener has to invoke functions defined in other EJBs
@Stateless
public class MessageHandler implements MessageListener {

    @EJB
    SomeEjb someEjb; // This is null

    public MessageHandler() {
    }

    @Override
    public void onMessage(Message message) {
        // This works but someEjb is null unless I
        // use the @MessageDriven annotation, but then I
        // can't gracefully disconnect from the queue
    }
}
Local/Embedded JMS queue for each cluster
Clients would have to connect to two different message queue brokers (one for each cluster).
Clients would have to be notified that a cluster instance is going down and stop sending messages to the queues on that broker.
Generally much less convenient and tidy than the existing http solution.
Alternative message queue providers
Connect Glassfish up to a different type of message queue or different vendor (e.g. Apache OpenMQ), perhaps one of these has the ability to balance traffic away from a particular set of consumers?
I have assumed that disabling the application will just "kill" any outstanding transactions. If disabling the application allows existing transactions to complete, then I could just do that after bringing the second cluster up.
Any help would be appreciated! Thanks in advance.
If you use high availability, then all of your messages for the cluster will be stored in a single data store, as opposed to the local data store on each instance. You could then configure both clusters to use the same store. Then, when shutting down the old cluster and spinning up the new one, you have access to all the messages.
This is a good video that helps to explain high-availability JMS for GlassFish.
I don't understand your assumption that when you stop the "old" cluster, outstanding transactions will be lost. The MDBs will be allowed to finish their message processing before the application stops, and any unacknowledged messages will be handled by the "new" cluster.
If the load balancing between the old and new versions is an issue, I would put the MDBs into a separate .ear and stop the old MDBs as soon as the new MDBs are online, or even before that if your use case allows for a delay in message processing until the new version is deployed.
