ActiveMQ - Controlling how many messages are consumed at a time. - java

Apologies for the wording of my question.
I am using TomEE.
I have an ActiveMQ queue set up and receiving messages from a producer (the TomEE-provided example).
It is persisted in MySQL (in case that matters).
My scenario is this...
A message comes into the queue
A Consumer/Monitor reads the message and starts a thread to run a process (backup, copying, processing, etc.) that could take some time to complete.
At any one time I could have 5 messages to process or 500+ (and anything in between).
Ideally, I would like some Java/Apache library designed to monitor the queue, read 10 messages (for example), start the threads, and then wait for one to finish before starting any more. For all intents and purposes I am trying to create a 'thread pool' or 'work queue' that prevents too many processes from starting up at any one time.
OR
Does this need to be thread-pooled outside of ActiveMQ?
I'm new to JMS and am beginning to understand it but still a long way to go.
Any help is appreciated.
Trevor

What you are looking to do sounds like something that could easily be solved using Apache Camel. Take a look at the Camel documentation for the Competing Consumers EIP, which sounds like an ideal fit for your case.
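As a rough sketch of what that could look like with Camel 2.x and ActiveMQ (the broker URL, queue name and the pool size of 10 are assumptions echoing the numbers in the question, not anything prescribed by the Camel docs), competing consumers boil down to setting concurrentConsumers on the consuming endpoint:

import org.apache.activemq.camel.component.ActiveMQComponent;
import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class CompetingConsumersSketch {
    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();
        context.addComponent("activemq",
                ActiveMQComponent.activeMQComponent("tcp://localhost:61616"));
        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                // 10 competing consumers: at most 10 messages are processed at once;
                // the rest stay on the queue until a consumer thread frees up
                from("activemq:queue:backup.jobs?concurrentConsumers=10")
                    .process(exchange -> runLongJob(exchange.getIn().getBody(String.class)));
            }
        });
        context.start();
        Thread.sleep(Long.MAX_VALUE);   // keep the consumers running
    }

    private static void runLongJob(String payload) {
        // long-running backup / copy / processing work goes here
    }
}

Because the work runs inside the consumer threads, the broker itself throttles how many jobs run at once, so in this style there is no need for a separate thread pool outside of ActiveMQ.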

Related

Apache Camel RabbitMQ consumers leave extra threads running

I have built an app which starts multiple RabbitMQ consumers. When I start the app in debug mode in Eclipse, I can see the desired number of threads spawned in the Debug window.
The app deals with several RabbitMQ queues plus some seda queues. The app continues executing by processing and moving messages from one queue to another.
There are at least 7 routes starting from a RabbitMQ consumer. These routes roughly look like this:
from("rabbitmq://url")
    .process(new Processor1())
    .process(new Processor2());
There is one specific start queue. Depending on the messages published, the messages flow through different sequences of queues, so I was testing different flows by publishing different messages to the start queue. After testing a few of these flows I realized that many new threads had been spawned, and even after a flow finishes (that is, the message leaves the final queue and the final processor in the Camel route completes), the thread is left behind in the running state. I found many such threads had accumulated after I tested multiple flows; five of them were visible in the debugger.
Those are just five extra threads, but the count climbs quickly as I test more complex flows; I have seen it reach 44. So I am wondering what I am doing wrong. Do I have to explicitly stop the route threads in some way? Did I miss or forget some configuration that I need on the Camel route? Why is this happening? Is it normal?
PS: My machine is very low on RAM, just 4 GB. It runs two lightweight DB servers, two web apps, Eclipse and my main (above) app, and most of the time 3.7 GB is in use. Sometimes it takes a while for a breakpoint (inside a Camel processor) to be hit after I publish a message to the queue. Could such a machine be the reason for the stray threads being left behind? (Though I primarily think I am missing some setting on the routes.)
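(For reference on the "do I have to explicitly stop the route threads" part: Camel does not stop a route by itself when a particular flow finishes; stopping one is done through the CamelContext. A minimal sketch with the Camel 2.x API, where the route id and RabbitMQ URI are assumed names and this is not a diagnosis of the thread leak above:)

import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class StopRouteSketch {
    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();
        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                from("rabbitmq://localhost/myExchange")   // assumed URI
                    .routeId("rabbitRoute")                // give the route an id so it can be addressed later
                    .to("log:received");
            }
        });
        context.start();
        // ... later, when this flow is no longer needed, stop just that route and
        // the consumer threads it owns (Camel 2.x API; Camel 3 moved this to
        // context.getRouteController().stopRoute("rabbitRoute"))
        context.stopRoute("rabbitRoute");
        context.stop();
    }
}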

Reading messages from MQ using Java

I need some design and development input on reading messages from a queue. I have the following requirements and constraints:
I need to read messages from the queue and insert them into a DB.
Messages can arrive at any interval (hundreds at the same time, or one by one with a few minutes' gap).
I don't have an MDB container to host this in (just a plain Tomcat server).
I need to write a Java application to perform the above.
So I'm not very sure how to put this simple application together.
If I use a Quartz scheduler to trigger a job that reads all the messages in the queue, I'm not sure whether the next instance of the scheduler might start before that one has even completed and create problems.
Please suggest any inputs.
This is basically a utility, so I don't want to spend too much time or too many resources on it.
thanks & regards
LR
Using an ESB like Mule or Camel would greatly simplify your development. You'd find ready-made components (called endpoints) for reading from a queue and writing to a DB, as well as for scheduling jobs with Quartz.
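A minimal sketch of the kind of Camel route this is suggesting, assuming an ActiveMQ queue named "incoming", a DataSource already configured behind the camel-sql endpoint, and a single-column table; none of these names come from the question:

import org.apache.camel.builder.RouteBuilder;

public class QueueToDbRoute extends RouteBuilder {
    @Override
    public void configure() {
        // messages are consumed as they arrive, so no Quartz polling job is needed;
        // each one is converted to text and inserted into the DB by the sql endpoint
        from("activemq:queue:incoming")
            .convertBodyTo(String.class)
            .to("sql:insert into messages (body) values (:#${body})");
    }
}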

Module clustering and JMS

I have a module which runs standalone in a JVM (no containers) and communicates with other modules via JMS.
My module is both a producer in one queue and a consumer in a different queue.
I then need to cluster this module, both for HA reasons and for workload reasons, and I'm probably going to go with Terracotta+Hibernate for clustering my entities.
Currently when my app starts it launches a thread (via Executors.newSingleThreadExecutor()) which serves as the consumer (I can attach an actual code sample if relevant and necessary).
What I understood from reading questions here is that if I just start up my module on N different JVMs, then N different subscribers will be created and each message in the queue will be delivered to all N subscribers.
What I'd like to do is have only one of them (let's say for now that which one is not important) process that message, and so in effect enable me to process N messages at a time.
How can/should this be done? Am I way off the track?
BTW, I'm using OpenMQ as my implementation but I don't know if that's relevant.
Thanks for any help
A classic case of message handling in a clustered environment. This is what I would do.
Use a broadcast (channel-based) message in place of a queue; a queue, being intended for point-to-point communication, is not very effective here. Set the validity of the message until the time it is consumed by one of the consumers. This way, the other consumers won't even see the message and only one consumer will consume it.
Take a look at JGroups. You could implement your module/subscribers with JGroups for the kind of synchronization you need. JGroups provides reliable multicast communication.
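A bare-bones JGroups sketch (3.x/4.x API; the cluster name and payload are made up) just to show what joining a group and multicasting to the other members looks like - the actual "only one member handles it" coordination would still have to be built on top of this:

import org.jgroups.JChannel;
import org.jgroups.Message;
import org.jgroups.ReceiverAdapter;

public class ClusterNode {
    public static void main(String[] args) throws Exception {
        JChannel channel = new JChannel();                 // default UDP protocol stack
        channel.setReceiver(new ReceiverAdapter() {
            @Override
            public void receive(Message msg) {
                System.out.println("received: " + msg.getObject());
            }
        });
        channel.connect("module-cluster");                 // join (or create) the cluster
        channel.send(new Message(null, "work-item"));      // null destination = send to all members
        channel.close();
    }
}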

How to design non-EJB load balanced applications?

I have a Java class, Processor, that is listening to a JMS topic and struggling to keep up with the speed at which messages are arriving, so we've decided to go concurrent:
A single class listening to the topic whose job is to hand the messages out to a pool of worker threads, effectively acting as a load balancer. It also has to prevent 2 workers processing messages for the same customer.
I expected there to be quite a lot of information on the internet about this, but everything seems to suggest the use of EJBs, where the app server manages the pool and does the balancing. I'm sure this must be a really common problem, but I can't seem to find any libraries or design patterns to assist. Am I making more of it than it is, and should I just dive in and write my own code?
Why don't you just use a queue instead of a topic and have several instances of the same application handle messages from this queue?
This is an easy problem to solve with a pool of listeners. That's what the app server would be doing for you.
I'd get a good app server and use its MDBs to solve this quickly. Size the pool to keep up and you'll be fine.
If you insist on writing your own code, get a good open source pool implementation and use it.
If it must be non-EJB, consider Spring. It has message-driven POJOs that could be just what you need.
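As a rough sketch of that Spring option (DefaultMessageListenerContainer driving message-driven POJOs); the broker URL, queue name and concurrency numbers are placeholders, and the per-customer ordering requirement is not addressed here:

import javax.jms.ConnectionFactory;
import javax.jms.MessageListener;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

public class WorkerPool {
    public static void main(String[] args) {
        ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");

        DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
        container.setConnectionFactory(cf);
        container.setDestinationName("work.queue");
        container.setConcurrentConsumers(5);        // fixed pool of listener threads
        container.setMaxConcurrentConsumers(10);    // allowed to grow under load
        container.setMessageListener((MessageListener) message ->
                System.out.println("processing " + message));
        container.afterPropertiesSet();
        container.start();
    }
}

The container manages the listener threads and redelivery for you, which is essentially the pooling work the app server's MDB layer would otherwise do.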

Potential pitfalls in using a JMS queue?

I've been asked to design and implement a system for receiving a high volume of automated sensor data from a large number of devices. This data will be produced at regular intervals and sent to the server as xml in an http post. The devices will keep resending the same data if they don't receive a specific acknowledgment from the server. Some potentially heavy duty processing of this data will need to occur before it's inserted to a number of tables in the main database via a transaction, and additionally some data points will need to be enqueued to be re-directed to other external urls.
I'm planning on using a Java application server (leaning towards GlassFish) with a servlet to receive the incoming data. I'd like to implement some kind of queuing mechanism to store the data temporarily so that the response back to the sensor isn't dependent on all the intermediate processing. Separate independent queues are also a requirement for the data re-direction piece. After doing some research the two main options seem to be:
1) Install a database on the app server and use tables for the various queues. The queues would be processed by a Java application, either running in the app server or standalone as its own service.
2) Use a database backed JMS solution to implement the queuing.
I'm not that familiar with JMS but from what I've read it seems to be the better solution in this case. The primary requirement is that no sensor data ever be lost or dropped from the queue before being processed and that it be processed more or less sequentially. We'd also like to make it easy to halt the processing of some of the queues at certain times but still have them accumulate data and for these messages to never automatically expire.
With strategy 1 it's obvious to me how to meet these requirements but it may be less robust and scalable, and more complex to develop than strategy 2, since I'll need to write my own multi-threaded code to handle the various independent queues. I'm wondering what the potential pitfalls could be in using JMS queues for this purpose since I've never worked with them before.
Data integrity is a big issue, so I need to make sure JMS can guarantee no data loss in the event of a server reboot, power outage, or if the queue gets very large for some reason. For instance, could a problem completing transactions to the main database for a period of time potentially cause the JVM to run out of memory, crash, and lose all accumulated data? (This would be the nightmare scenario.)
Also, I was wondering if there would be any way to pause the JMS queue processing via an app server admin tool, or to easily see what's in the queue (I would be enqueuing an object which would be the message XML plus some other data, including timestamp received, etc.). I've read a few posts on here that deal with related issues but wanted to get some direct feedback. Basically I'd like to know of instances (if any) where JMS is not an appropriate queuing solution and whether this is one of those cases. Any advice is greatly appreciated.
Kaleb's answer talks about the benefits of JMS quite eloquently, but since you're asking about pitfalls, here's what I can think of.
Not all JMS implementations are equal. In theory you can use whatever implementation suits your needs, but unless you're prepared to do some serious load testing and failure condition testing, you can't know that a particular implementation isn't going to fail under your particular use case.
Most JMS implementations use a transactional datastore, such as a relational database, as their back end. That means that rather than writing directly to whatever datastore you're familiar with, you have to rely on the JMS implementation's extra layer between you and those stored messages.
While swapping JMS implementations to find the one that perfectly fits your needs may seem like a simple endeavor because of the homogeneous JMS API, the critical features for failure handling, JMS server monitoring, and all the other cool stuff that exists above and beyond messaging are going to be a hassle to deal with if you do change your implementation.
That said, I think you'd be crazy to write to the DB yourself instead of going with JMS. On the first point, ActiveMQ is a venerable JMS server used in many enterprise environments. On the second point, the fact is you'd just end up writing that extra layer yourself in order to implement messaging, and your code won't have the benefit of thousands of eyes (or a set of paid developers whose sole job it is to respond to customers and make sure the JMS implementation is solid). On the third point, well, the same ends up being true of your back-end datastore. Use JMS; you'll save yourself trouble in the long run.
If you want to go the JMS route, a standalone JMS-compatible message broker (separate from your app server) would be a good choice. Message brokers range from free open-source (like ActiveMQ at http://activemq.apache.org/ or OpenMQ at https://mq.dev.java.net/), to large-scale commercial solutions (IBM's WebSphere MQ at http://www-01.ibm.com/software/integration/wmq/ is one of the largest).
Message brokers offer guaranteed delivery (provided the server is up and listening), and you can do quite a bit to ensure that the system is fail-safe, including integrated backup broker servers and instant power backup. Broker queues can eventually run out of room if your app server isn't picking up the messages, but you can assign a huge queue depth (hundreds of GB) and have the server send alerts if the messages aren't getting processed and the queue reaches a certain percentage.
Your Java app would then run on a different server entirely, and would connect to the broker and pull messages off of the queue as fast as possible. If the app server crashes or stops picking up messages for any other reason, the broker would just keep all messages in that queue until the app server begins picking them up again.
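A sketch of that consuming side in plain JMS, using a transacted session so that a message is only removed from the broker once processing (and the DB transaction) has committed; the broker URL, queue name and the TextMessage assumption are placeholders:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.TextMessage;
import org.apache.activemq.ActiveMQConnectionFactory;

public class SensorDataConsumer {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://broker-host:61616");
        Connection connection = cf.createConnection();
        connection.start();
        Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
        MessageConsumer consumer = session.createConsumer(session.createQueue("sensor.data"));

        while (true) {
            TextMessage message = (TextMessage) consumer.receive();   // blocks until a message arrives
            try {
                processAndStore(message.getText());   // heavy processing + insert into the main DB
                session.commit();                     // only now does the broker discard the message
            } catch (Exception e) {
                session.rollback();                   // message stays on the queue for redelivery
            }
        }
    }

    private static void processAndStore(String xml) { /* ... */ }
}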
You will want to implement a poison message queue in your implementation - this is where messages that cannot be processed after some number of retries end up.
You will probably need to write some code that can examine the messages in that queue and re-send them to the appropriate destination after fixing whatever is causing them to fail.
If sequence of message processing is important, a message ending up in the poison queue could mean all processing is halted until that message is corrected.
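With ActiveMQ, for instance, such messages end up on the ActiveMQ.DLQ destination by default; a sketch of a small repair tool that drains it and re-sends messages might look like the following, where the target queue name and the "fix" step are entirely application-specific assumptions:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class PoisonQueueDrainer {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = cf.createConnection();
        connection.start();
        Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
        MessageConsumer dlq = session.createConsumer(session.createQueue("ActiveMQ.DLQ"));
        Queue original = session.createQueue("sensor.data");   // assumed original destination
        MessageProducer producer = session.createProducer(original);

        Message poisoned;
        while ((poisoned = dlq.receive(1000)) != null) {        // drain until the DLQ is empty
            // examine the message and fix whatever made it fail (application-specific),
            // then put it back on the destination it was supposed to be processed from
            producer.send(poisoned);
            session.commit();   // receive + re-send commit together, so the message cannot be lost
        }
        connection.close();
    }
}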
As far as fault tolerance goes, you can have multiple instances of the consuming services subscribe to the same queue or topic, providing an ability to continue processing even if one or more instances goes down.
Finally, have a watchdog process that pings the various consumers on your message queue, and if one doesn't respond, have it send a message that results in a new instance being started. In this way, your message processing environment can be somewhat self regulating.
