Module clustering and JMS - Java

I have a module which runs standalone in a JVM (no containers) and communicates with other modules via JMS.
My module is both a producer in one queue and a consumer in a different queue.
I now need to cluster this module, both for HA and for workload reasons, and I'm probably going to go with Terracotta+Hibernate for clustering my entities.
Currently, when my app starts it launches a thread (via Executors.newSingleThreadExecutor()) which serves as the consumer (I can attach an actual code sample if relevant and necessary).
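For reference, a consumer launched that way might look roughly like the sketch below. This is only an assumption about the shape of the code, not the asker's actual implementation; the factory, queue and handler are placeholders.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.JMSException;
    import javax.jms.Message;
    import javax.jms.MessageConsumer;
    import javax.jms.Queue;
    import javax.jms.Session;

    public class ConsumerBootstrap {

        private final ExecutorService executor = Executors.newSingleThreadExecutor();

        public void start(ConnectionFactory factory, Queue inboundQueue) {
            executor.submit(() -> {
                Connection connection = null;
                try {
                    connection = factory.createConnection();
                    Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                    MessageConsumer consumer = session.createConsumer(inboundQueue);
                    connection.start();
                    while (!Thread.currentThread().isInterrupted()) {
                        Message message = consumer.receive(1000); // poll with a timeout
                        if (message != null) {
                            handle(message);
                        }
                    }
                } catch (JMSException e) {
                    e.printStackTrace();
                } finally {
                    if (connection != null) {
                        try { connection.close(); } catch (JMSException ignored) { }
                    }
                }
            });
        }

        private void handle(Message message) {
            // application-specific processing of the incoming message
        }
    }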
What I understood from reading questions here is that if I just start up my module on N different JVMs, then N different subscribers will be created and each message in the queue will arrive at all N subscribers.
What I'd like is for only one of them (for now let's say it doesn't matter which one) to process each message, so that in effect I can process N messages at a time.
How can/should this be done? Am I way off the track?
BTW, I'm using OpenMQ as my implementation but I don't know if that's relevant.
Thanks for any help

A classic case of message handling in a clustered environment. This is what I would do.
Use broadcast (channel-based) messaging in place of a queue. A queue, being meant for point-to-point communication, is not very effective here. Set the validity of each message so that it lasts only until it has been consumed by one of the consumers. This way the other consumers won't even see the message, and only one consumer will consume it.

Take a look at JGroups. You may consider implementing your module/subscribers to use JGroups for the kind of synchronization you need. JGroups provides reliable multicast communication.
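To make that concrete, here is a minimal sketch using the JGroups 3.x-style API (default protocol stack, illustrative cluster name). It only shows joining a group and multicasting; the coordination logic for deciding which node handles a given JMS message is up to you.

    import org.jgroups.JChannel;
    import org.jgroups.Message;
    import org.jgroups.ReceiverAdapter;

    public class ClusterNode extends ReceiverAdapter {

        private JChannel channel;

        public void start() throws Exception {
            channel = new JChannel();             // default UDP-based protocol stack
            channel.setReceiver(this);
            channel.connect("my-module-cluster"); // all instances join the same group
        }

        @Override
        public void receive(Message msg) {
            Object payload = msg.getObject();
            // coordinate with the other nodes, e.g. agree on who handles a given JMS message
        }

        public void broadcast(Object payload) throws Exception {
            channel.send(new Message(null, payload)); // null destination = multicast to all members
        }

        public void stop() {
            channel.close();
        }
    }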

Related

WebSphere Message Queue multithreaded

We are a backend processor and we program against JMS/MQ. We have two queues: one is used to receive messages and the other to send messages. All of the banking users put messages onto Q1 through their IB, MB, etc. We receive messages from Q1, process them, and send a message to Q2.
Currently we do not use multithreading for this. Can we use multithreading here, or is a single thread enough, given that we get messages from Q1 one by one and process them?
Please let me know if the question is unclear.
Yes, JMS allows for multiple readers on the same queue. You can do this by multi-threading, multiple application instances, or a dispatch layer that fetches messages and then passes them to a handler through a callback or other mechanism.
The application must support that, however. For example, if two messages are related and must be processed in order, the order is not preserved if there is more than one listener on the queue. This is one reason why async messaging patterns strongly prefer messages not to have order dependencies or affinities.
If you use multi-threading, it is important to make sure to preserve transactionality. If multiple threads use the same connection and one issues a COMMIT then that commits all outstanding messages across all the threads sharing that connection.
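A sketch of what that looks like in plain JMS is below: the connection may be shared, but each worker thread gets its own transacted session, so a commit only covers that thread's messages. Queue names and the processing step are placeholders, and the connection is assumed to have been created and started elsewhere.

    import javax.jms.Connection;
    import javax.jms.JMSException;
    import javax.jms.Message;
    import javax.jms.MessageConsumer;
    import javax.jms.MessageProducer;
    import javax.jms.Session;

    public class QueueWorker implements Runnable {

        private final Connection sharedConnection; // one Connection may be shared by many threads
        private final String inQueue;
        private final String outQueue;

        public QueueWorker(Connection sharedConnection, String inQueue, String outQueue) {
            this.sharedConnection = sharedConnection;
            this.inQueue = inQueue;
            this.outQueue = outQueue;
        }

        @Override
        public void run() {
            try {
                // Sessions are single-threaded: create one per worker so commits don't collide.
                Session session = sharedConnection.createSession(true, Session.SESSION_TRANSACTED);
                MessageConsumer consumer = session.createConsumer(session.createQueue(inQueue));
                MessageProducer producer = session.createProducer(session.createQueue(outQueue));
                while (!Thread.currentThread().isInterrupted()) {
                    Message request = consumer.receive(1000);
                    if (request == null) {
                        continue;
                    }
                    producer.send(process(session, request));
                    session.commit(); // commits only this session's get and put
                }
            } catch (JMSException e) {
                e.printStackTrace();
            }
        }

        private Message process(Session session, Message request) throws JMSException {
            return session.createTextMessage("processed"); // placeholder business logic
        }
    }

Running several such workers (in threads or in separate JVMs) gives the competing-consumer behavior described above, subject to the ordering caveat.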

Does the MQ API support alias modification?

I can use the Java MQ API to put and get messages.
I can also disable gets and puts on a queue.
During a migration project we'll have an app running in parallel: Old and New. Old and New will have their own separate queues. I regularly have messages from a client going to Old; occasionally I want the messages to flow to New instead.
I'm wondering if MQ supports a gate/switch concept, where via the API I can point a queue to go only to New, or only to Old, for a short time.
I'm trying to avoid going to message-based routing via WMB, since I don't have to do that today. The parallel mode is only for a few months.
You do not mention the version of MQ or whether there are message affinities or dependence on preserving the MQMD.MsgID. These are critical in devising a solution to this problem. I'll try to describe enough options so that at least one will be viable whatever version you are at.
Pub/Sub
The easiest thing to do is to have the messages arrive on an alias over a topic. Any message that arrives is published immediately on that topic. Then it is a simple matter to generate administrative subscriptions to direct messages to the queues on which the apps needing the messages are listening. This is entirely a configuration change and requires no external components, processes or code. It is available from v7.1 of MQ and higher, which is to say any of the currently supported versions of MQ.
The downside is that IBM MQ will change the MQMD.MsgID from the time the message is received on the topic to the time it is published on the application's input queue. This breaks the app's ability to use the MQMD.MsgID of the incoming message as a correlation ID when replying. If the requesting app pre-loads the correlation ID or doesn't rely on a correlation ID, this is not an issue.
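For illustration, a requester that pre-loads its own correlation ID might look like the JMS sketch below; the queues, timeout and selector are made up for the example and are not from the question.

    import java.util.UUID;
    import javax.jms.JMSException;
    import javax.jms.MessageConsumer;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.jms.TextMessage;

    public class Requester {

        public TextMessage requestReply(Session session, Queue requestQ, Queue replyQ, String body)
                throws JMSException {
            // Pre-load a correlation ID chosen by the requester, so nothing depends on
            // the broker preserving MQMD.MsgID across the topic hop.
            String correlationId = UUID.randomUUID().toString();

            TextMessage request = session.createTextMessage(body);
            request.setJMSCorrelationID(correlationId);
            request.setJMSReplyTo(replyQ);

            MessageProducer producer = session.createProducer(requestQ);
            producer.send(request);

            // Select the reply by the correlation ID we chose, not by the provider-assigned message ID.
            String selector = "JMSCorrelationID = '" + correlationId + "'";
            MessageConsumer consumer = session.createConsumer(replyQ, selector);
            return (TextMessage) consumer.receive(30000);
        }
    }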
Aliasing
But for apps where this is an issue, it gets a bit harder. You can alias over a queue and have inbound messages land on the alias. When you need to switch from one queue to another, you change the alias. There are a couple issues with this. The first is that it is never possible to deliver the message stream to more than one of the applications. In a parallel processing test it is often desirable to do exactly that and then compare summary or detail reports.
The second problem is more operational in nature. It isn't possible to change the alias while it is open. If the messages arrive over a RCVR, RQSTR or CLUSRCVR channel, no problem: stop the channel, switch the alias and restart the channel. In a series of MQSC script commands this can be done faster than it can be typed. However, if the applications putting the messages are connected in bindings mode or via client directly to the alias, they must all be stopped in order to change the alias.
That said, aliasing works on all versions of MQ out of the box.
Physical copy
One solution that's been around for quite some time is to use the Q program (SupportPac MA01) to direct the messages. In this scenario, the queue on which messages land is a local queue. The Q program is either triggered or set to constantly listen on the queue. When a message arrives, Q then copies it to one or both of the destination queues.
Switching the behavior if Q is triggered involves pre-defining 2 or 3 processes where each defines a different behavior - move new messages to QUEUEA, to QUEUEB or to both. Changing the queue's PROCESS attribute to point to a different process results in an instantaneous change of the behavior.
Alternatively, if Q is configured to listen on the queue forever then changing the behavior involves use of three different scripts to execute it where one causes messages to be copied to QUEUEA, another to QUEUEB and another to both queues. Changing the behavior involves killing the script and starting a different one.
The Q program works with all versions of MQ, regardless of whether it is triggered or scripted.
Downsides to this approach include the obvious - more moving parts. You have to trigger the queue or else make a transactional program act like a daemon. Not hard, but if you are betting the business on it then perhaps some monitoring is in order to make sure the input queue doesn't start building up.
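The Q program itself is the SupportPac binary, but functionally a copier of this sort boils down to something like the JMS-flavored sketch below (queue names and flags are placeholders; the real Q program uses the native MQ API and is driven by its own options).

    import javax.jms.Connection;
    import javax.jms.JMSException;
    import javax.jms.Message;
    import javax.jms.MessageConsumer;
    import javax.jms.MessageProducer;
    import javax.jms.Session;

    public class MessageSwitch implements Runnable {

        private final Connection connection; // assumed already created and started
        private final boolean toQueueA;
        private final boolean toQueueB;

        public MessageSwitch(Connection connection, boolean toQueueA, boolean toQueueB) {
            this.connection = connection;
            this.toQueueA = toQueueA;
            this.toQueueB = toQueueB;
        }

        @Override
        public void run() {
            try {
                Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
                MessageConsumer in = session.createConsumer(session.createQueue("APP.INPUT"));
                MessageProducer a = session.createProducer(session.createQueue("QUEUEA"));
                MessageProducer b = session.createProducer(session.createQueue("QUEUEB"));
                while (!Thread.currentThread().isInterrupted()) {
                    Message m = in.receive(1000);
                    if (m == null) {
                        continue;
                    }
                    if (toQueueA) { a.send(m); }
                    if (toQueueB) { b.send(m); }
                    session.commit(); // the get and the put(s) succeed or fail together
                }
            } catch (JMSException e) {
                e.printStackTrace();
            }
        }
    }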
Recommendation
Of all these methods, I really like the Pub/Sub version. It is extremely reliable, has the least moving parts, and if anything breaks it's under IBM support. When you need to change something, you can do that with minimal impact to the running applications. If at all possible, use that.

Kafka PointToPoint

The Problem
We have a multi-datacenter ActiveMQ setup, with NFS for each HA pair, and it seems that ActiveMQ isn't really scalable and doesn't play well with NFS issues (we're using 5.7).
The Possible Solution
Move to Kafka
Requirements
We need point-to-point & pub/sub functionality
Message Priorities (I know kafka doesn't provide that out of the box, but there's a workaround for it on our side)
Question
Is this possible with Kafka (not necessarily out-of-the-box, but with some client tweaking)? If not, then what other technology would you suggest? It doesn't have to be JMS, but it needs to be scalable and reliable (and it needs to play well with NFS)
We need point-to-point & pub/sub functionality
Kafka does that; I shared my findings here.
Message Priorities
I'm a little confused about what exactly you mean, but if by priorities you mean consuming from a specific offset, then the low-level (Simple) consumer API provides that. It also supports re-consuming messages.
For point to point delivery of messages, (single producer, single consumer), you can configure the number of partitions as 1. Producer will publish messages on this topic and partition and a single consumer, will read from that topic and partition. A topic would equate to your queue in ActiveMQ terms.
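As an illustration, a queue-like consumer on a single-partition topic might look like the sketch below (recent kafka-clients API; the broker address, group and topic names are placeholders).

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    // Point-to-point over a single-partition topic: one consumer (within its group)
    // reads everything, in order, much like a queue.
    public class QueueLikeConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");    // placeholder broker address
            props.put("group.id", "jobs-processor");             // one group = queue-like semantics
            props.put("key.deserializer",
                      "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                      "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("jobs")); // topic created with 1 partition
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                    for (ConsumerRecord<String, String> record : records) {
                        System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                    }
                }
            }
        }
    }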
If you want to add message priorities, I would use the lower-level Kafka client; you could then increase the number of partitions (one for each priority level) and have the consumer fetch messages from the highest-priority topic first, falling back to the next lower-priority topic when no messages exist.
In your case I would use Kafka with one partition per topic and separate topics for each message priority level, solving prioritization of delivery on the subscriber side.
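One way to sketch that subscriber-side prioritization, assuming one topic per priority level, two pre-configured consumers, and illustrative topic names:

    import java.time.Duration;
    import java.util.Collections;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    // Drain the high-priority topic first; only when it is empty, take one batch
    // from the low-priority topic.
    public class PriorityPoller {

        private final KafkaConsumer<String, String> highConsumer;
        private final KafkaConsumer<String, String> lowConsumer;

        public PriorityPoller(KafkaConsumer<String, String> highConsumer,
                              KafkaConsumer<String, String> lowConsumer) {
            this.highConsumer = highConsumer;
            this.lowConsumer = lowConsumer;
            highConsumer.subscribe(Collections.singletonList("events-high"));
            lowConsumer.subscribe(Collections.singletonList("events-low"));
        }

        public void pollLoop() {
            while (!Thread.currentThread().isInterrupted()) {
                ConsumerRecords<String, String> high = highConsumer.poll(Duration.ofMillis(100));
                if (!high.isEmpty()) {
                    high.forEach(this::handle);
                    continue;               // keep draining high priority before looking lower
                }
                ConsumerRecords<String, String> low = lowConsumer.poll(Duration.ofMillis(100));
                low.forEach(this::handle);
            }
        }

        private void handle(ConsumerRecord<String, String> record) {
            // application-specific processing
        }
    }

A real implementation would also need to keep the lower-priority consumer polling often enough that it is not evicted from its consumer group.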
Kafka provides point-to-point and publish-subscribe usage patterns via its consumer groups concept.
What is not directly supported are selectors and priorities. But you do get the ability to distribute messages across partitions, so you could, for example, distribute messages to partitions based on priority.
You also get message persistence for free (one of Kafka's core principles), limited by the retention policy. Each message in Kafka is essentially a key-value pair; the key has specific semantics for partitioning and log compaction. Unlike traditional messaging systems, there is no custom header you can use for routing and the like. The following article tries to summarize that.
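If you do route by priority to partitions, the producer side could pick the partition explicitly, roughly as in this sketch (topic name, partition layout and broker address are illustrative; the topic must have at least as many partitions as priority levels):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class PriorityProducer {

        private final KafkaProducer<String, String> producer;

        public PriorityProducer() {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder broker address
            props.put("key.serializer",
                      "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                      "org.apache.kafka.common.serialization.StringSerializer");
            this.producer = new KafkaProducer<>(props);
        }

        public void send(String payload, int priority) {
            // partition 0 = highest priority, higher numbers = lower priority
            ProducerRecord<String, String> record =
                    new ProducerRecord<>("events", priority, null, payload);
            producer.send(record);
        }
    }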

Run multiple JMS Consumers on different JVM

I have a JMS queue with hundreds of jobs on it and multiple consumers in the same JVM that poll the queue to pick up jobs and process them. The system is crashing due to a performance issue, so I want to deploy the JMS consumers on different JVMs. I am new to splitting an application (or part of an application) across different JVMs, and I would really appreciate a working example of how to do and implement this.
Update:
Now there are two ways to consume messages from JMS queues: by creating standalone JMS consumers or by creating message-driven beans. I need multiple consumers on different JVMs that will listen to the queue and process the messages. With standalone JMS consumers I can simply run several of them (each one is a standard Java program with a main method, as in the sketch below). My question is how I can run multiple message-driven beans on different JVMs, i.e. how I can run multiple onMessage() methods on different JVMs. I hope I am making sense here. Kindly advise or point me in the right direction.
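For reference, the standalone-consumer variant is just an ordinary main program along these lines (the factory lookup and queue name are placeholders); you package it and start one such process per JVM, and the broker hands each queued message to exactly one of the running consumers.

    import java.util.concurrent.CountDownLatch;
    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.Message;
    import javax.jms.MessageListener;
    import javax.jms.Queue;
    import javax.jms.Session;

    public class StandaloneJobConsumer {

        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = lookupConnectionFactory(); // provider-specific, e.g. via JNDI
            Connection connection = factory.createConnection();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue jobQueue = session.createQueue("JOB.QUEUE");     // placeholder queue name

            session.createConsumer(jobQueue).setMessageListener(new MessageListener() {
                @Override
                public void onMessage(Message message) {
                    // same processing an MDB's onMessage() would perform
                }
            });

            connection.start();
            new CountDownLatch(1).await(); // keep the JVM alive while messages arrive
        }

        private static ConnectionFactory lookupConnectionFactory() {
            throw new UnsupportedOperationException("look up or construct your provider's connection factory");
        }
    }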
Any guidance would be highly appreciated.
Thanks.

Potential pitfalls in using a JMS queue?

I've been asked to design and implement a system for receiving a high volume of automated sensor data from a large number of devices. This data will be produced at regular intervals and sent to the server as XML in an HTTP POST. The devices will keep resending the same data if they don't receive a specific acknowledgment from the server. Some potentially heavy-duty processing of this data will need to occur before it's inserted into a number of tables in the main database via a transaction, and additionally some data points will need to be enqueued to be redirected to other external URLs.
I'm planning on using a Java application server (leaning towards GlassFish) with a servlet to receive the incoming data. I'd like to implement some kind of queuing mechanism to store the data temporarily so that the response back to the sensor isn't dependent on all the intermediate processing. Separate independent queues are also a requirement for the data re-direction piece. After doing some research the two main options seem to be:
1) Install a database on the app server and use tables for the various queues. The queues would be processed by a Java application, either running in the app server or standalone as its own service.
2) Use a database backed JMS solution to implement the queuing.
I'm not that familiar with JMS but from what I've read it seems to be the better solution in this case. The primary requirement is that no sensor data ever be lost or dropped from the queue before being processed and that it be processed more or less sequentially. We'd also like to make it easy to halt the processing of some of the queues at certain times but still have them accumulate data and for these messages to never automatically expire.
With strategy 1 it's obvious to me how to meet these requirements but it may be less robust and scalable, and more complex to develop than strategy 2, since I'll need to write my own multi-threaded code to handle the various independent queues. I'm wondering what the potential pitfalls could be in using JMS queues for this purpose since I've never worked with them before.
Data integrity is a big issue so I need to make sure JMS can guarantee no data loss in the event of a server reboot, power outage, or if the queue gets very large for some reason. For instance could a problem completing transactions to the main database for a period of time potentially cause the JVM to run out of memory, crash, and lose all accumulated data? (This would be the nightmare scenario).
Also, I was wondering if there would be any way to pause the JMS queue processing via an app server admin tool or to easily see what's in the queue (I would be enqueuing an object which would be the message xml plus some other data, including timestamp received, etc.) I've read a few posts on here that deal with related issues but wanted to get some direct feedback. Basically I'd like to know of instances (if any) where JMS is not an appropriate queuing solution and if this is one of those cases. Any advice is greatly appreciated.
Kaleb's answer talks about the benefits of JMS quite eloquently, but since you're asking about pitfalls, here's what I can think of.
Not all JMS implementations are equal. In theory you can use whatever implementation suits your needs, but unless you're prepared to do some serious load testing and failure condition testing, you can't know that a particular implementation isn't going to fail under your particular use case.
Most JMS implementations use a transactional datastore, such as a relational database, as their back end. That means that rather than writing directly to whatever datastore you're familiar with, you have to rely on the JMS implementation's extra layer between you and the stored messages.
While swapping JMS implementations to find the one that perfectly fits your needs may seem like a simple endeavor because of the homogeneous JMS API, the critical features for failure handling, JMS server monitoring, and all the other cool stuff that exists above and beyond messaging is going to be a hassle to deal with if you do change your implementation.
That said, I think you'd be crazy to write to the DB yourself instead of going with JMS. On the first point, ActiveMQ is a venerable JMS server used in many enterprise environments. On the second point, the fact is you'd just end up writing that extra layer yourself in order to implement messaging, and your code won't have the benefit of thousands of eyes (or a set of paid developers whose sole job is to respond to customers and make sure the JMS implementation is solid). On the third point, well, the same ends up being true of your backend datastore. Use JMS; you'll save yourself trouble in the long run.
If you want to go the JMS route, a standalone JMS-compatible message broker (separate from your app server) would be a good choice. Message brokers range from free open-source (like ActiveMQ at http://activemq.apache.org/ or OpenMQ at https://mq.dev.java.net/), to large-scale commercial solutions (IBM's WebSphere MQ at http://www-01.ibm.com/software/integration/wmq/ is one of the largest).
Message brokers offer guaranteed delivery (provided the server's up and listening), and you can do quite a bit to ensure that the system is fail-safe, including integrated backup broker servers and instant power backup. Broker queues can eventually run out of room if your app server isn't picking up the messages, but you can assign a huge queue depth (hundreds of GB) and have the server send alerts if the messages aren't getting processed and the queue reaches a certain percentage.
Your Java app would then run on a different server entirely, and would connect to the broker and pull messages off of the queue as fast as possible. If the app server crashes or stops picking up messages for any other reason, the broker would just keep all messages in that queue until the app server begins picking them up again.
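To make those guarantees concrete, the moving parts usually amount to persistent messages on the producing side and a transacted (or client-acknowledged) consumer that only commits after the database work succeeds. A rough sketch, with placeholder names and the real processing elided:

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.DeliveryMode;
    import javax.jms.Message;
    import javax.jms.MessageConsumer;
    import javax.jms.MessageProducer;
    import javax.jms.Session;
    import javax.jms.TextMessage;

    public class DurableProcessing {

        // Servlet side: enqueue the sensor XML as a persistent message so it survives broker restarts.
        public static void enqueue(Session session, MessageProducer producer, String xml)
                throws Exception {
            producer.setDeliveryMode(DeliveryMode.PERSISTENT);
            TextMessage msg = session.createTextMessage(xml);
            producer.send(msg);
        }

        // Processing side: transacted session, commit only after the DB transaction succeeds.
        public static void process(ConnectionFactory factory, String queueName) throws Exception {
            Connection connection = factory.createConnection();
            Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
            MessageConsumer consumer = session.createConsumer(session.createQueue(queueName));
            connection.start();
            while (true) {
                Message m = consumer.receive(5000);
                if (m == null) {
                    continue;
                }
                try {
                    writeToDatabase(m);     // heavy processing + DB transaction (placeholder)
                    session.commit();       // message is removed from the queue only now
                } catch (Exception e) {
                    session.rollback();     // message stays queued and will be redelivered
                }
            }
        }

        private static void writeToDatabase(Message m) {
            // placeholder for the real processing and DB insert
        }
    }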
You will want to implement a poison message queue in your implementation - this is where messages that could not be processed after some number of retries end up.
You will probably need to write some code that can examine the messages in that queue and re-send them to the appropriate destination after fixing whatever is causing them to fail.
If sequence of message processing is important, a message ending up in the poison queue could mean all processing is halted until that message is corrected.
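A common way to detect such messages is the optional JMSXDeliveryCount property, which many brokers maintain; a sketch is below (the threshold, queue wiring and the transacted-session assumption are illustrative, and you should confirm your broker actually sets this property).

    import javax.jms.JMSException;
    import javax.jms.Message;
    import javax.jms.MessageProducer;
    import javax.jms.Session;

    public class PoisonMessageHandler {

        private static final int MAX_DELIVERIES = 5;

        private final Session session;
        private final MessageProducer poisonProducer; // producer for the poison/dead-letter queue

        public PoisonMessageHandler(Session session, MessageProducer poisonProducer) {
            this.session = session;
            this.poisonProducer = poisonProducer;
        }

        /** Returns true if the message was diverted and should not be processed further. */
        public boolean divertIfPoison(Message message) throws JMSException {
            int deliveries = message.propertyExists("JMSXDeliveryCount")
                    ? message.getIntProperty("JMSXDeliveryCount")
                    : 1;
            if (deliveries > MAX_DELIVERIES) {
                poisonProducer.send(message);   // park it for later inspection and re-submission
                session.commit();               // assumes a transacted session
                return true;
            }
            return false;
        }
    }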
As far as fault tolerance goes, you can have multiple instances of the consuming services subscribe to the same queue or topic, providing the ability to continue processing even if one or more instances go down.
Finally, have a watchdog process that pings the various consumers on your message queue, and if one doesn't respond, have it send a message that results in a new instance being started. In this way, your message processing environment can be somewhat self regulating.
