I have a for loop that keeps putting messages onto the JMS queue, but it's quite possible that in future the for loop may execute far faster than the queue can handle requests and might reach the max-pool limit.
I am catching the JMSException, but I don't have any fallback logic in place to resume the job. I can store the state of the last element passed to the queue, but I have no clue how to start putting messages back onto the queue after the exception has been encountered. How can I resume putting messages onto the queue and make sure the same exception won't be thrown again?
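A minimal sketch of one way to do this, assuming you keep an index of the last element successfully sent; the QueueSink stand-in and the 5-second backoff are made up for illustration:

    import java.util.List;
    import javax.jms.JMSException;

    class ResumableSender {
        // Stand-in for whatever wraps your MessageProducer.send() call.
        interface QueueSink { void send(String payload) throws JMSException; }

        void sendAll(List<String> items, QueueSink sink) throws InterruptedException {
            int next = 0; // index of the next element to send; persist it if the JVM may die
            while (next < items.size()) {
                try {
                    for (; next < items.size(); next++) {
                        sink.send(items.get(next)); // may throw when the pool limit is hit
                    }
                } catch (JMSException e) {
                    // Back off so the broker/pool can drain, then resume from 'next'.
                    Thread.sleep(5_000); // arbitrary fixed backoff; could be made exponential
                }
            }
        }
    }

Nothing guarantees the same exception won't recur, so the loop simply retries from where it left off after each failure.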
You should set up your JMS queue with a pool of listeners that's adequate for your peak load. This can be arranged with your app server.
It should also allow a "dead letter queue" where messages that are poisonous in the way you describe will be routed.
It would be good to configure some kind of alerting to let you know when requests are spilling onto the floor.
I don't understand the fascination with queues anymore. I think a web service with a producer/consumer deque and a pool of executors to process requests is a better choice than a queue. That's 1990s IBM technology.
I'm building an application using RabbitMQ/Spring/Spring AMQP and am having trouble handling the way I've laid out my queues.
Essentially I have one queue that every consumer listens to, with each message basically saying "this queue is ready to be processed by a single consumer". The consumer will then listen to the queue indicated in the message, consume all the messages in that queue, and finally delete it when done.
These short-lived queues are all created on the fly as data comes in to be processed, and each can only be consumed by a single consumer (whichever one gets the message from the 'ready' queue).
I'm having trouble gracefully handling the consumers in this situation. Right now I just create a new DirectMessageListenerContainer each time a consumer gets a message from the 'ready' queue and then stop it once it has gotten all the messages it needs. It seems like this solution isn't ideal. Is there any better way to handle a situation like this with Spring AMQP/RabbitMQ?
You can add/remove queues to/from existing container(s) at runtime; it is more efficient with the direct container (see Choosing a container).
The MessageProperties has a consumerQueue property to tell you which queue the message came from.
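A rough sketch of that approach with a single shared container; the wiring of the container and its listener is assumed:

    import org.springframework.amqp.core.Message;
    import org.springframework.amqp.rabbit.listener.DirectMessageListenerContainer;

    class ReadyQueueHandler {
        private final DirectMessageListenerContainer container;

        ReadyQueueHandler(DirectMessageListenerContainer container) {
            this.container = container;
        }

        // When a 'ready' message arrives, attach the short-lived queue at
        // runtime instead of creating a whole new container.
        void onReady(String shortLivedQueue) {
            container.addQueueNames(shortLivedQueue);
        }

        // In the container's listener: consumerQueue tells you which queue
        // the message came from, so you know which one to detach when done.
        void handle(Message message) {
            String queue = message.getMessageProperties().getConsumerQueue();
            // ... process; once the short-lived queue is drained:
            container.removeQueueNames(queue);
        }
    }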
My application acts as an IBM MQ client consuming messages sent by the IBM MQ server. But sometimes the server sends a large number of messages (e.g. 50,000), and our client application cannot "eat" the messages quickly enough.
What I've tried:
Using a caching connection factory (org.springframework.jms.connection.CachingConnectionFactory), but it didn't help much.
I can't open multiple threads for the listener to speed up consumption (concurrency is currently set to 1) because of our business requirements.
Thanks in advance!
Edit:
Processing each message is fast (e.g. 0:00:00.079), but the wait before the next message starts processing is long (e.g. 0:00:00.534).
Consider the transactional and persistence requirements of the messages.
There are a number of options within MQ that could be enabled here to speed up delivery.
MQ is optimized for either persistent/transactional or non-persistent/non-transactional workloads. Don't mix them by, for example, sending persistent messages in a non-transactional session.
If you are using non-persistent/non-transactional messaging then look into the READ_AHEAD options to stream messages down to the client.
In addition, ensure that selectors are not in use.
If the client implementation is negotiable, look at sending aggregate messages that combine individual messages, especially if the business logic can adapt to handle them together before (for example) saving something to a database.
The only legitimate "business" reason that you can't have multiple listener threads is because of event/workflow ordering and the chance of processing two related messages concurrently rather than sequentially. However, perhaps it's possible to redesign the client so that messages are segregated by the sender, using JMS properties of some sort, and then have each listener filter by various properties. As long as all related events/messages get the same property, you might be able to have multiple listeners.
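For example, with standard JMS selectors (the orderGroup property name is made up):

    import javax.jms.*;

    class GroupedConsumers {
        // Sender: tag related messages with the same property value.
        static void send(Session session, MessageProducer producer,
                         String payload, String group) throws JMSException {
            TextMessage msg = session.createTextMessage(payload);
            msg.setStringProperty("orderGroup", group);
            producer.send(msg);
        }

        // Consumer: a selector per listener keeps each group sequential,
        // while different groups are processed in parallel.
        static MessageConsumer consumerFor(Session session, Queue queue,
                                           String group) throws JMSException {
            return session.createConsumer(queue, "orderGroup = '" + group + "'");
        }
    }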
Not ideal, but if you made the listener stateful so that you knew to roll back event B because related event A is currently being processed, that might work; it's difficult to do well, though, and adds processing overhead. Better yet, figure out a way to process messages out of order and still get the correct answer in the end.
Ultimately, for a large number of messages, you really need to figure out how to get n listeners, because otherwise you may never catch up and, worst case, your backlog will keep growing.
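If the ordering problem can be solved that way, scaling to n listeners in Spring JMS is just configuration; a sketch, assuming a DefaultMessageListenerContainer and a made-up queue name:

    import javax.jms.ConnectionFactory;
    import javax.jms.MessageListener;
    import org.springframework.jms.listener.DefaultMessageListenerContainer;

    class ListenerConfig {
        DefaultMessageListenerContainer container(ConnectionFactory cf,
                                                  MessageListener listener) {
            DefaultMessageListenerContainer c = new DefaultMessageListenerContainer();
            c.setConnectionFactory(cf);
            c.setDestinationName("INBOUND.QUEUE"); // assumed queue name
            c.setMessageListener(listener);
            c.setConcurrency("3-10"); // scale between 3 and 10 concurrent consumers
            return c;
        }
    }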
I have a Java program that puts messages onto a queue. On the other side of the queue I have 10-15 consumers, any ONE of which should read each message and process it. Whenever one of the 10-15 consumers gets free, it picks up the next message from the queue.
Basically, a consumer should pick up a message from the queue whenever it is free, and only one consumer must pick it up (without any synchronization blocks or the like).
Also, on the sender's end, can I pause sending messages into the queue if the queue becomes full (or reaches a certain threshold)?
I am really new to the JMS API, so apologies if this is a newbie question. Thanks!
I have to send messages into a queue, and I have 20 threads running as consumers who can pick up the data from the queue (once they are free). So when each thread gets free, it goes to the queue, checks if data is there, picks it up, and so on. Is this doable?
Yes, it's doable; that's the standard way of doing it with JMS queues. Another alternative would be topics, but with topics every listener has to process the same message rather than just one of them, so queues are what you want. Although usually you don't have bare threads as consumers (I'm not even sure what that means), but message-driven beans. You might consider using them; MDBs run in their own threads anyway.
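To make it concrete, the competing-consumers setup is just several consumers on the same queue, each with its own Session (Sessions aren't thread-safe). A minimal sketch; the queue name and consumer count are placeholders:

    import javax.jms.*;

    class CompetingConsumers {
        static void start(Connection connection, int consumers) throws JMSException {
            for (int i = 0; i < consumers; i++) {
                // One Session per consumer: a JMS Session is single-threaded.
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                Queue queue = session.createQueue("WORK.QUEUE"); // placeholder name
                MessageConsumer consumer = session.createConsumer(queue);
                // The broker delivers each message to exactly one consumer,
                // so no synchronization blocks are needed.
                consumer.setMessageListener(CompetingConsumers::process);
            }
            connection.start();
        }

        static void process(Message m) { /* business logic */ }
    }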
I'm currently adding JMS support to an application-server-like framework. The JMS will be implemented by HornetQ (stand-alone broker, HornetQ jars on the server's classpath), but there is neither JBoss nor Spring nor anything else that would provide MDBs.
The next step is to add a message listener to an XA queue to allow parallel processing of incoming messages. Some messages will start long-running tasks, so the basic idea is to spawn worker threads from the onMessage method.
On my long journey through the internet I came across a discussion where one of the participants mentioned that he would not do that, but would use an extra internal queue instead: the (single-threaded) message listener simply grabs the messages from the inbound queue and creates new messages on an internal queue, and at the other end of that internal queue some worker threads compete for the incoming messages. Inbound messages are then acknowledged once they're "copied" to the internal queue (which is OK for me).
Unfortunately, they don't say why it would be better not to spawn worker threads from the onMessage method; maybe because the listener would block if all threads from the pool were busy. So I'm looking for the pros and cons of the two design decisions:
Start worker threads from the onMessage method of the message listener
Use an internal queue to "send messages to the worker threads"
Transaction limits aside, whether or not to have multiple threads (or processes) reading from a queue simply comes down to whether the message order is important. Obviously, if the order is important, then a single thread naturally maintains it, while multiple threads provide no such guarantee.
What you will normally find is that order is important, but only across a subset of all the messages. In this scenario, if a single thread isn't performant enough, you need to get those messages off the queue and re-queued in as short a time as possible, because preserving the order forces you to use a single thread reading from the initial queue; hence the use of one or more internal queues. The problem this incurs is that the transaction will be closed before the messages are fully processed, so you need some sort of temporary storage to ensure messages don't get dropped if the process falls over before the processing has taken place.
If, as your question suggests, you're not too worried about dropping messages, then java.util.concurrent.BlockingQueue sounds like what you need for the internal queues, with a single thread servicing each.
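A sketch of that layout: a single-threaded listener feeding a BlockingQueue with a small worker pool draining it. The queue bound and thread count are arbitrary, and as discussed above a message is lost if the process dies after the hand-off but before a worker finishes with it:

    import java.util.concurrent.*;
    import javax.jms.*;

    class HandoffListener implements MessageListener {
        // Bounded, so a full queue blocks the listener instead of
        // buffering without limit.
        private final BlockingQueue<Message> internal = new LinkedBlockingQueue<>(1000);

        HandoffListener(int workers) {
            ExecutorService pool = Executors.newFixedThreadPool(workers);
            for (int i = 0; i < workers; i++) {
                pool.submit(() -> {
                    while (!Thread.currentThread().isInterrupted()) {
                        try {
                            process(internal.take()); // workers compete for messages
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                        }
                    }
                });
            }
        }

        @Override
        public void onMessage(Message message) {
            try {
                internal.put(message); // with AUTO_ACKNOWLEDGE, the message is
                                       // acknowledged once this method returns
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }

        void process(Message m) { /* long-running task */ }
    }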
I've a Java client which accesses our server side over HTTP, making several small requests to load each new page of data. We maintain a thread pool to handle all non-UI processing, so any background client-side tasks and any tasks which want to make a connection to the server. I've been looking into some performance issues and I'm not certain we've got our thread pool set up as well as possible. Currently we use a ThreadPoolExecutor with a core pool size of 8, and a LinkedBlockingQueue for the work queue, so the max pool size is ignored. No doubt there's no simple "do this one thing in all situations" answer, but are there any best practices? My thinking at the moment is:
1) I'll switch to using a SynchronousQueue instead of a LinkedBlockingQueue so the pool can grow to the max pool size figure.
2) I'll set the max pool size to be unlimited.
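In code, the difference would look roughly like this; the keep-alive value is a placeholder:

    import java.util.concurrent.*;

    class Pools {
        // Current setup: with an unbounded LinkedBlockingQueue the pool never
        // grows past the core size, so maxPoolSize is effectively ignored.
        static ExecutorService current() {
            return new ThreadPoolExecutor(8, 8, 0L, TimeUnit.MILLISECONDS,
                    new LinkedBlockingQueue<>());
        }

        // Proposed: a SynchronousQueue holds no tasks, so each submission either
        // reuses an idle thread or creates a new one, up to maxPoolSize.
        static ExecutorService proposed() {
            return new ThreadPoolExecutor(8, Integer.MAX_VALUE,
                    60L, TimeUnit.SECONDS, new SynchronousQueue<>());
        }
    }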
Basically my current fear is that occasional performance issues on the server side are causing unrelated client-side processing to halt due to the upper limit on the thread pool size. My fear with unbounding it is the additional cost of managing those threads on the client; possibly it's just the lesser of two evils.
Any suggestions, best practices or useful references?
Cheers,
Robin
It sounds like you'd probably be better off limiting the queue size: does your application still behave properly when there are many requests queued (is it acceptable for all tasks to be queued for a long time, or are some more important than others)? What happens if there are still queued tasks left when the user quits the application? If the queue grows very large, is there a chance that the server will catch up (soon enough) to hide the problem completely from the user?
I'd say create one queue for requests whose responses are needed to update the user interface, and keep it very small. If this queue gets too big, notify the user.
For real background tasks, keep a separate pool with a longer, but not infinite, queue. Define graceful behavior for this pool when it grows, or when the user wants to quit but tasks are left: what should happen?
In general, network latencies are easily orders of magnitude higher than anything happening with regard to memory allocation or thread management on the client side. So, as a general rule, if you are running into a performance bottleneck, look first and foremost at the network link.
If the issue is that your server simply cannot keep up with the requests from the clients, bumping up the threads on the client side is not going to help matters: you'll simply go from having 8 threads waiting for a response to having more threads waiting (and you may even aggravate the server-side issues by increasing the number of connections it has to manage).
Both of the concurrent queues in the JDK are high performers; the choice really boils down to usage semantics. If you have non-blocking plumbing, then it is more natural to use the non-blocking queue. If you don't, then using the blocking queues makes more sense. (You can always specify Integer.MAX_VALUE as the limit.) If FIFO processing is not a requirement, make sure you do not specify fair ordering, as that entails a substantial performance hit.
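Concretely, fairness is an opt-in constructor flag; the default is the faster non-fair mode:

    import java.util.concurrent.*;

    class QueueChoices {
        // Effectively unbounded blocking queue via an explicit limit:
        BlockingQueue<Runnable> blocking = new LinkedBlockingQueue<>(Integer.MAX_VALUE);
        // SynchronousQueue: the default (non-fair) mode is faster but gives
        // no FIFO guarantee among waiting threads.
        BlockingQueue<Runnable> fast = new SynchronousQueue<>();
        BlockingQueue<Runnable> fifo = new SynchronousQueue<>(true); // fair, slower
    }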
As alphazero said, if you've got a bottleneck, your number of client side waiting jobs will continue to grow regardless of what approach you use.
The real question is how you want to deal with the bottleneck. Or more correctly, how you want your users to deal with the bottleneck.
If you use an unbounded queue, then you don't get feedback that the bottleneck has occurred. And in some applications, this is fine: if the user is kicking off asynchronous tasks, then there's no need to report a backlog (assuming it eventually clears). However, if the user needs to wait for a response before doing the next client-side task, this is very bad.
If you use LinkedBlockingQueue.offer() on a bounded queue, then you'll immediately get a response that says the queue is full, and can take action such as disabling certain application features, popping a dialog, whatever. This will, however, require more work on your part, particularly if requests can be submitted from multiple places. I'd suggest, if you don't have it already, you create a GUI-aware layer over the server queue to provide common behavior.
And, of course, never ever call LinkedBlockingQueue.put() from the event thread (unless you don't mind a hung client, that is).
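A sketch of that GUI-aware layer: with a bounded work queue, ThreadPoolExecutor's default rejection policy throws as soon as the internal offer() fails, giving you an immediate signal. The bound and the notification hook are placeholders:

    import java.util.concurrent.*;
    import javax.swing.SwingUtilities;

    class RequestLayer {
        // Bounded work queue: when it fills, execute() throws instead of blocking.
        private final ThreadPoolExecutor pool = new ThreadPoolExecutor(
                8, 8, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<>(100));

        // All server requests are submitted through this one place.
        boolean submit(Runnable request) {
            try {
                pool.execute(request);
                return true;
            } catch (RejectedExecutionException full) {
                // Common backlog behavior: disable features, pop a dialog, etc.
                SwingUtilities.invokeLater(this::notifyServerBusy); // hypothetical hook
                return false;
            }
        }

        void notifyServerBusy() { /* app-dependent */ }
    }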
Why not create an unbounded queue but reject tasks (and maybe even inform the user that the server is busy; app dependent!) when the queue reaches a certain size? You can then log this event and find out what happened on the server side for the backlog to occur. Additionally, unless you are connecting to multiple remote servers, there is probably not much point having more than a couple of threads in the pool, although this does depend on your app, what it does, and who it talks to.
Having an unbounded pool is usually dangerous, as it generally doesn't degrade gracefully. Better to log the problem, raise an alert, prevent further actions from being queued, and figure out how to scale the server side (if the problem is there) to prevent this from happening again.
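In practice, the simplest way to get that reject-at-a-threshold behavior is a bounded queue plus a custom RejectedExecutionHandler; a sketch with arbitrary sizes:

    import java.util.concurrent.*;

    class GracefulPool {
        static ThreadPoolExecutor create() {
            return new ThreadPoolExecutor(
                    2, 2,                            // a couple of threads per server is often enough
                    0L, TimeUnit.MILLISECONDS,
                    new LinkedBlockingQueue<>(500),  // threshold: reject beyond 500 queued tasks
                    (task, pool) -> {
                        // Log the event, raise an alert, tell the user the server is busy.
                        System.err.println("Server busy, request rejected: " + task);
                    });
        }
    }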