We have a Java EE application deployed on a GlassFish 3.1.2 cluster which provides a REST API using JAX-RS. We regularly deploy new versions of the application by deploying an EAR to a duplicate cluster instance and then updating the HTTP load balancer to send traffic to the updated instance instead of the old one.
This allows us to upgrade with no loss of availability, as described here: http://docs.oracle.com/cd/E18930_01/html/821-2426/abdio.html#abdip. We frequently make significant changes to the application, which makes the new versions "incompatible" (and is why we use two clusters).
We now have to provide a message-queue interface to the application for some high-throughput internal messaging (from C++ producers). However, with Message Driven Beans I cannot see how it is possible to upgrade the application without any service disruption.
The options I have investigated are:
Single remote JMS queue (openMQ)
Producers send messages to a single message queue, messages are handled by a MDB. When we start a second cluster instance, messages should be load-balanced to the upgraded cluster, but when we stop the "old" cluster, outstanding transactions will be lost.
I considered using JMX to disable producers/consumers on that message queue during the upgrade, but that only pauses message delivery. Outstanding messages will still be lost when we disable the old cluster (I think?).
I also considered ditching the @MessageDriven annotation and creating a MessageConsumer manually. This does seem to work, but the MessageListener cannot then access other EJBs using the @EJB annotation (as far as I know):
// Singleton bean with start()/stop() functions that
// enable/disable message consumption
@Singleton
@Startup
public class ServerControl {

    private boolean running = false;

    @Resource(lookup = "jms/TopicConnectionFactory")
    private TopicConnectionFactory topicConnectionFactory;

    @Resource(lookup = "jms/MyTopic")
    private Topic topic;

    private Connection connection;
    private Session session;
    private MessageConsumer consumer;

    public ServerControl() {
        this.running = false;
    }

    public void start() throws JMSException {
        if (this.running) return;

        connection = topicConnectionFactory.createConnection();
        session = connection.createSession(false, Session.DUPS_OK_ACKNOWLEDGE);
        consumer = session.createConsumer(topic);
        consumer.setMessageListener(new MessageHandler());

        // Start the message queue handlers
        connection.start();
        this.running = true;
    }

    public void stop() throws JMSException {
        if (!this.running) return;

        // Stop the message queue handlers
        consumer.close();
        this.running = false;
    }
}
// MessageListener has to invoke functions defined in other EJBs
@Stateless
public class MessageHandler implements MessageListener {

    @EJB
    SomeEjb someEjb; // This is null

    public MessageHandler() {
    }

    @Override
    public void onMessage(Message message) {
        // This works but someEjb is null unless I
        // use the @MessageDriven annotation, but then I
        // can't gracefully disconnect from the queue
    }
}
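For reference, since the listener is created with new, the container never injects it, but a manual JNDI lookup can fetch the EJB proxy instead. A rough sketch of that workaround (assuming SomeEjb has a no-interface view in the same module; the portable JNDI name below is an assumption):

import javax.jms.Message;
import javax.jms.MessageListener;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class LookupMessageHandler implements MessageListener {

    @Override
    public void onMessage(Message message) {
        try {
            // Manual JNDI lookup instead of @EJB injection; "java:module/SomeEjb"
            // assumes a no-interface view of SomeEjb deployed in the same module.
            SomeEjb someEjb = (SomeEjb) new InitialContext().lookup("java:module/SomeEjb");
            // ... delegate the actual message processing to someEjb ...
        } catch (NamingException e) {
            throw new IllegalStateException("EJB lookup failed", e);
        }
    }
}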
Local/Embedded JMS queue for each cluster
Clients would have to connect to two different message queue brokers (one for each cluster).
Clients would have to be notified that a cluster instance is going down and stop sending messages to the queues on that broker.
Generally much less convenient and tidy than the existing HTTP solution.
Alternative message queue providers
Connect GlassFish up to a different type of message queue or a different vendor (e.g. Apache ActiveMQ); perhaps one of these has the ability to balance traffic away from a particular set of consumers?
I have assumed that simply disabling the application will "kill" any outstanding transactions. If disabling the application allows existing transactions to complete, then I could just do that after bringing the second cluster up.
Any help would be appreciated! Thanks in advance.
If you use high availability, then all of the messages for the cluster will be stored in a single data store, as opposed to the local data store on each instance. You could then configure both clusters to use the same store, so that when shutting down the old cluster and spinning up the new one you still have access to all the messages.
There is a good video that helps to explain high-availability JMS for GlassFish.
I don't understand your assumption that outstanding transactions will be lost when you stop the "old" cluster. The MDBs will be allowed to finish their message processing before the application stops, and any unacknowledged messages will be handled by the "new" cluster.
If load balancing between the old and new versions is an issue, I would put the MDBs into a separate .ear and stop the old MDBs as soon as the new MDBs are online, or even before that, if your use case allows for a delay in message processing until the new version is deployed.
I have a Java server (Tomcat) setup issue that I'm hoping someone can provide some guidance on.
Current Solution
Currently my web application is a single server that has a Java backend running on Tomcat 8.5. To handle WebSocket connections, I keep a Map of all the javax.websocket.Session objects passed in the onOpen() method.
#ServerEndpoint("/status")
public class StatusMessenger
{
private static ConcurrentHashMap<String, Session> sessions = new ConcurrentHashMap();
#OnOpen
public void onOpen(Session session) throws Exception
{
String sessionId = session.getRequestParameterMap().get("sessionId").get(0);
sessions.put(session.getId(), session);
}
My application only broadcasts messages to all users, so the broadcast() in my code simply loops through sessions.values() and sends the message through each javax.websocket.Session.
public static void broadcast(String event, String message)
{
    for (Session session : sessions.values())
    {
        // send the message
    }
}
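For context, the send step inside broadcast() looks roughly like this (a sketch re-using the sessions map from the endpoint above; the "event:message" text format is just illustrative):

import java.io.IOException;
import javax.websocket.Session;

public static void broadcast(String event, String message)
{
    for (Session session : sessions.values())
    {
        try
        {
            // Synchronous send to this client; the "event:message" format is illustrative.
            session.getBasicRemote().sendText(event + ":" + message);
        }
        catch (IOException e)
        {
            // Drop sessions that can no longer be written to, so one dead
            // connection does not break the whole broadcast.
            sessions.remove(session.getId());
        }
    }
}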
I'm not even sure that's the correct way to handle WebSockets in Tomcat, but it's worked for me for years, so I assume it's acceptable.
The Problem
I now want to horizontally scale out my application on AWS to multiple servers. For the most part my application is stateless and I store the regular HTTP session information in the database. My problem is this static Map of javax.websocket.Session - it's not stateless, and there's a different Map on each server, each with its own list of javax.websocket.Sessions.
In my application, the server code in certain situations will need to broadcast a message to all the users. These events may happen on any server in this multi-server setup. The event will trigger the broadcast() method, which loops through the javax.websocket.Sessions. However, it will only loop through the sessions in its own Map.
How do I get the multi-server application to broadcast this message to all WebSocket connections stored across all the servers in the setup? The application works fine on a single server (obviously) because there's only one list of WebSocket sessions. In other words, how do I write a stateless application that needs to store the WebSocket connections so it can communicate with them later?
I found 2 alternative solutions for this...
In my load balancer I put a rule to route all paths with /{my websocket server path} to 1 server so that all the Sessions were on the same server.
Use a 3rd party web push library like Pusher (http://pusher.com)
EDIT
Just found out how to run multiple consumers inside one service:
@Bean
SimpleMessageListenerContainer container(ConnectionFactory connectionFactory, MessageListenerAdapter listenerAdapter) {
    SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
    container.setConnectionFactory(connectionFactory);
    container.setQueueNames(RENDER_QUEUE);
    container.setConcurrentConsumers(concurrentConsumers); // setting this in env
    container.setMessageListener(listenerAdapter);
    return container;
}

@Bean
MessageListenerAdapter listenerAdapter(RenderMessageConsumer receiver) {
    return new MessageListenerAdapter(receiver, "reciveMessageFromRenderQueue");
}
Now the only question that remains is: how can I have a global limit? How do multiple instances of the AMQP receiver share the total number of consumers? I want to set a global number of concurrentConsumers to 10, run 2 instances of the consumer service, and have each instance run around 5 consumers. Can this be managed by RabbitMQ?
I have a Spring service that consumes AMQP messages and calls a http resource for each message.
After the http call completes, a message is sent to another queue to report either an error or completion. Only then is the message handling complete and the next message taken from the queue.
// simplified
@RabbitListener(queues = RENDER_QUEUE)
public void reciveMessageFromRenderQueue(String message) {
    try {
        RenderMessage renderMessage = JsonUtils.stringToObject(message, RenderMessage.class);
        String result = renderService.httpCallRenderer(renderMessage);
        messageProducer.sendDoneMessage(result);
    } catch (Exception e) {
        logError(type, e);
        messageProducer.sendErrorMessage(e.getMessage());
    }
}
There are at times hundreds or thousands of render messages in the queue, but the http call is rather long-running and not doing much. This becomes obvious because I can improve the message handling rate by running multiple instances of the service, thus adding more consumers and calling the http endpoint multiple times in parallel. One instance has exactly one consumer for the channel, so the number of instances is equal to the number of consumers. However, that heavily increases memory usage (since the service uses Spring) just for forwarding a message and handling the result.
So I thought I'd do the http call asynchronously and return immediately after accepting the message:
.httpCallRendererAsync(renderMessage)
    .subscribeOn(Schedulers.newThread())
    .subscribe(new Observer<String>() {
        public void onNext(String result) {
            messageProducer.sendDoneMessage(result);
        }

        public void onError(Throwable throwable) {
            messageProducer.sendErrorMessage(throwable.getMessage());
        }
    });
That, however, overloads the http endpoint, which cannot deal with 1000 or more simultaneous requests.
What I need is for my AMQP service to take a certain number of messages from the queue, handle them in separate threads, make the http call in each of them, and return with "message handled". The number of messages taken from the queue, however, needs to be shared between multiple instances of that service: if the maximum is 10 and message consumption is round robin, the first 5 odd messages should be handled by instance one and the first 5 even messages by instance two, and as soon as one instance finishes handling a message it should take another one from the queue.
What I found are things like prefetch, with limits per consumer and per channel, as described by RabbitMQ, and the spring-rabbit implementation which uses the prefetchCount and transactionSize described here. That, however, does not seem to do anything for a single running instance: it will not spawn additional threads to handle more messages concurrently. And of course it will not reduce the number of messages handled in my async scenario, since those messages are immediately considered "handled".
@Bean
public RabbitListenerContainerFactory<SimpleMessageListenerContainer> prefetchContainerFactory(ConnectionFactory rabbitConnectionFactory) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setConnectionFactory(rabbitConnectionFactory);
    factory.setPrefetchCount(5);
    factory.setTxSize(5);
    return factory;
}

// and then using
@RabbitListener(queues = RENDER_QUEUE, containerFactory = "prefetchContainerFactory")
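As an aside, the factory does accept a concurrency setting, which is what adds listener threads within a single instance (a sketch in the same style as the factory above; this still does not give a shared global limit):

import org.springframework.amqp.rabbit.config.SimpleRabbitListenerContainerFactory;
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.context.annotation.Bean;

@Bean
public SimpleRabbitListenerContainerFactory concurrentContainerFactory(ConnectionFactory rabbitConnectionFactory) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setConnectionFactory(rabbitConnectionFactory);
    factory.setPrefetchCount(5);        // messages buffered per consumer, not extra threads
    factory.setConcurrentConsumers(5);  // parallel listener threads within this instance
    return factory;
}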
The most important requirement for me seems to be that multiple messages should be handled in one instance while the maximum of concurrently handled messages should be shared between instances.
Can that be done using RabbitMQ and Spring? Or do I have to implement something in between?
In an early stage it might be acceptable to just have concurrent message handling in one instance and not share that limit. Then I'll have to configure the limit manually using environment variables while scaling the number of instances.
Now the only question that remains is: how can I have a global limit? How do multiple instances of the AMQP receiver share the total number of consumers? I want to set a global number of concurrentConsumers to 10, run 2 instances of the consumer service, and have each instance run around 5 consumers. Can this be managed by RabbitMQ?
There is no mechanism in either RabbitMQ or Spring to support such a scenario automatically. You can, however, change the concurrency at runtime (setConcurrentConsumers() on the container) so you could use some external agent to manage the concurrency on each instance.
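For example, a rough sketch (the class name, the fixed global budget, and the "external agent" trigger are all assumptions, not an existing API): each instance exposes a hook the agent calls to re-split a global budget via setConcurrentConsumers() at runtime.

import org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

@Component
public class ConcurrencyRebalancer {

    private static final int GLOBAL_LIMIT = 10; // assumed global consumer budget

    @Autowired
    private SimpleMessageListenerContainer container; // the container bean from the question

    // Called by some external agent (HTTP endpoint, JMX, config watcher - an assumption)
    // whenever the number of running service instances changes.
    public void rebalance(int instanceCount) {
        int perInstance = Math.max(1, GLOBAL_LIMIT / Math.max(1, instanceCount));
        container.setConcurrentConsumers(perInstance); // takes effect at runtime
    }
}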
I am trying to identify where a suspected memory / resource leak is occurring with regard to a JMS queue I have built. I am new to JMS queues, so I have used many of the standard JMS classes to ensure stability. But somewhere in my code or configuration I am doing something wrong, and my queue is filling up or resources are slowing down, perhaps due to unknown deficiencies in the architecture I am attempting to implement.
When load testing my API (using Gatling), I can run 20 messages a second through (which is a tiny load) for most of a ten-minute duration. But after that, the messages seem to back up, and the ability to process them slows to a crawl. Generally, time-out errors begin to occur once the overall requests take more than 60 seconds to complete. There is more business logic that processes data and persists it to a relational database, but none of that appears to be an issue.
Interestingly, subsequent test runs continue with the poor performance, indicating that whatever resource is leaking persists across tests. A restart of the application clears out whatever has become bloated or is leaking. Then the tests run fast again, for the first seven or eight minutes... upon which the cycle repeats itself. Only a restart of the app clears the issue. Since the issue doesn't correct itself, even after waiting for a period of time, something must have filled up resources.
When I pull the JMS calls out of the logic, I am able to process hundreds of messages a second, and I can run back-to-back test runs without leaking or filling up the queue.
Although this is a Spring project, I am not using Spring's JMS Template, so I wrote my own Connection object, which I injected as a Spring Bean and implemented as a single connection to avoid creating a new connection for every JMS message I sent through.
Likewise, I configured my JMS Session to also be an injected bean, in which I use the Connection bean. That way I can persist my Connection and Session objects for sending all of my JMS messages through, which are sent one at a time. A Qpid server I am calling receives these messages. While it is possible I am exceeding its capacity to consume the messages I am producing, I expect that the resource leak is associated with my code, and not the JMS server.
Here are some code snippets to give you an idea of my approach. Any feedback is appreciated.
JmsConfiguration (key methods)
@Bean
public ConnectionFactory jmsConnectionFactory() {
    return new JmsConnectionFactory(user, pass, host);
}

@Bean(name = "jmsSession")
public Session jmsConnection() throws JMSException {
    Connection conn = jmsConnectionFactory().createConnection();
    Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
    return session; // Injected as Singleton
}

@Bean(name = "jmsQueue")
public Queue jmsQueue() throws JMSException {
    return jmsConnection().createQueue(queue);
}

// Jackson's ObjectMapper is heavy enough to warrant injecting and re-using it.
@Bean
public ObjectMapper objectMapper() {
    return new ObjectMapper();
}
JmsMessageEnqueuer
@Component
public class MessageJmsEnqueuer extends CommonThreadScope {

    @Autowired
    @Qualifier("jmsSession")
    private Session jmsSession;

    @Autowired
    @Qualifier("jmsQueue")
    private Queue jmsQueue;

    @Value("${acme.jms.queue}")
    private String jmsQueueName;

    @Autowired
    @Qualifier("jmsObjectMapper")
    private ObjectMapper jmsObjectMapper;

    public void enqueue(String message, String dataType) {
        try {
            String messageAsJson = jmsObjectMapper.writeValueAsString(message);
            MessageProducer jmsMessageProducer = jmsSession.createProducer(jmsQueue);
            TextMessage textMessage = jmsSession.createTextMessage(messageAsJson);
            textMessage.setStringProperty("dataType", dataType);
            jmsMessageProducer.send(textMessage);
            logger.log(Level.INFO, "Message successfully sent. Queue=" + jmsQueueName + ", Message -> " + message);
        } catch (JMSException | JsonProcessingException e) {
            String msg = "JMS Message Processing encountered an error...";
            logService.severe(logger, messagesBuilder() ... msg)
        }
        // Skip the close() method to persist connection...
        // Reconnect logic exists to reset an expired connection from server.
    }
}
I was able to solve my resource leak / deadlock issue simply by rewriting my code to use the simplified API provided with the release of JMS 2.0. Although I was never able to determine which of the Connection / Session / Queue objects was giving my code grief, using the Context object to build my connection and session was the golden ticket in this case.
Upon switching to the simplified API (since I was already pulling in the JMS 2.0 dependency), the resource leak immediately vanished! This leads me to believe that the simplified API does more than just give the developer an easier API to code against. While that is already an advantage (even setting aside the few features the simplified API doesn't support), it is now clear to me that the underlying connection and session objects are being managed by the API, which resolved whatever was filling up or deadlocking.
Furthermore, because the resource build-up was no longer occurring, I was able to triple the number of messages I passed through, allowing me to process 60 users a second, instead of 20. That is a significant increase, and I have fixed the compatibility issues that prevented me from using the simplified JMS API to begin with.
While I would have liked to identify precisely what was fouling up the code, this works as a solution. Plus, the fact that version 2.0 of JMS was released in April of 2013 would indicate that the simplified API is definitely the preferred solution.
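For reference, a minimal sketch of what the enqueue path looks like with the simplified API (the class and field names here are illustrative, not my exact code): the JMSContext owns the connection and session, and try-with-resources closes everything.

import javax.jms.ConnectionFactory;
import javax.jms.JMSContext;
import javax.jms.Queue;

public class SimplifiedEnqueuer {

    private final ConnectionFactory connectionFactory;
    private final Queue queue;

    public SimplifiedEnqueuer(ConnectionFactory connectionFactory, Queue queue) {
        this.connectionFactory = connectionFactory;
        this.queue = queue;
    }

    public void enqueue(String messageAsJson, String dataType) {
        // The context wraps the connection and session; closing it releases both.
        try (JMSContext context = connectionFactory.createContext()) {
            context.createProducer()
                   .setProperty("dataType", dataType)
                   .send(queue, messageAsJson);
        }
    }
}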
Just a guess, but MessageProducer extends AutoCloseable, suggesting it should be closed once it is no longer needed. Since you're not using try-with-resources or explicitly closing it afterwards, the jmsSession may accumulate more and more producers over time. I am not sure, though, whether you should close the producer per method call or re-use a single created producer.
Have you tried using a profiler such as VisualVM to visualize the heap and metaspace? If so, did you find any significant changes over time?
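For illustration, a sketch of the enqueue step with the producer scoped to the call (assuming JMS 2.0 is on the classpath, where these resources are AutoCloseable; the method signature is adapted from the code in the question):

import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

public void enqueue(Session jmsSession, Queue jmsQueue, String messageAsJson, String dataType) throws JMSException {
    // The producer lives only for this call and is always closed, so the
    // session does not accumulate producers over time.
    try (MessageProducer producer = jmsSession.createProducer(jmsQueue)) {
        TextMessage textMessage = jmsSession.createTextMessage(messageAsJson);
        textMessage.setStringProperty("dataType", dataType);
        producer.send(textMessage);
    }
}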
I have a problem with using ActiveMQ in Spring application.
I have a few environments on separate machines. On each machine I had one ActiveMQ instance installed. Now I have realized that I can have only one ActiveMQ instance installed on one server, and a few applications can use that ActiveMQ instance for sending messages. So I must change the queue names in order to have different queues for different environments ("queue.search.sandbox", "queue.search.production", ...).
After that change, ActiveMQ is now creating the new queues, but also the old ones, although there is no configuration that should cause that.
I am using Java Spring application with Java configuration, not XML.
First, I create queueTemplate as a Spring bean:
@Bean
public JmsTemplate jmsAuditQueueTemplate() {
    log.debug("ActiveMQConfiguration jmsAuditQueueTemplate");
    JmsTemplate jmsTemplate = new JmsTemplate();
    String queueName = "queue.audit.".concat(env.getProperty("activeMqBroker.queueName.suffix"));
    jmsTemplate.setDefaultDestination(new ActiveMQQueue(queueName));
    jmsTemplate.setConnectionFactory(connectionFactory());
    return jmsTemplate;
}
Second, I create ActiveMQ Listener configuration:
@Bean
public DefaultMessageListenerContainer jmsAuditQueueListenerContainer() {
    log.debug("ActiveMQConfiguration jmsAuditQueueListenerContainer");
    DefaultMessageListenerContainer dmlc = new DefaultMessageListenerContainer();
    dmlc.setConnectionFactory(connectionFactory);
    String queueName = "queue.audit.".concat(env.getProperty("activeMqBroker.queueName.suffix"));
    ActiveMQQueue activeMQ = new ActiveMQQueue(queueName);
    dmlc.setDestination(activeMQ);
    dmlc.setRecoveryInterval(30000);
    dmlc.setSessionTransacted(true);
    // To perform actual message processing
    dmlc.setMessageListener(auditQueueListenerService);
    dmlc.setConcurrentConsumers(10);
    // ... more parameters that you might want to inject ...
    return dmlc;
}
After building my application, the result is a properly created queue with the suffix ("queue.audit.sandbox"), but after some time ActiveMQ also generates the old version ("queue.audit").
Does someone know why ActiveMQ is doing this? Thanks in advance.
There is probably still an entry in the index for the queue, so when ActiveMQ restarts it displays the queue. If you want to be certain about destinations, use startup destinations and disable auto-creation by denying the "admin" permission to the connecting user account in the authorization entry.
After some time, ActiveMQ just stopped creating the queues that should no longer exist.
Now we have the expected behavior, without the unnecessary queues.
Still, I haven't found out what solved this problem, to be honest...
I have recently discovered message selectors
@ActivationConfigProperty(
    propertyName = "messageSelector",
    propertyValue = "Fragile IS TRUE")
My Question is: How can I make the selector dynamic at runtime?
Let's say a consumer decided they wanted only messages with the property "Fragile IS FALSE".
Could the consumer change the selector somehow without redeploying the MDB?
Note: I am using Glassfish v2.1
To my knowledge, this is not possible. There may be implementations that will allow it via some custom server hooks, but it would be implementation dependent. For one, it requires a change to the deployment descriptor, which is not read after the EAR is deployed.
JMS (Jakarta Messaging) is designed to provide simple means for doing simple things, and more complicated means for the more complicated but less frequently needed things. Message-driven beans are an example of the first case. To do dynamic reconfiguration, you need to stop using MDBs and consume messages via the programmatic API instead, using an injected JMSContext and a topic or queue. For example:
@Inject
private JMSContext context;

@Resource(lookup = "jms/queue/thumbnail")
Queue thumbnailQueue;

JMSConsumer connectListener(String messageSelector) {
    JMSConsumer consumer = context.createConsumer(thumbnailQueue, messageSelector);
    consumer.setMessageListener(message -> {
        // process message
    });
    return consumer;
}
You can call connectListener during startup, e.g. in a CDI bean:
public void start(@Observes @Initialized(ApplicationScoped.class) Object startEvent) {
    connectListener("Fragile IS TRUE");
}
Then you can easily reconfigure it by closing the returned consumer and creating it again with a new selector string:
consumer.close();
consumer = connectListener("Fragile IS FALSE");