I am using ActiveMQ version 5.7.x.
I have one ActiveMQ queue that a listener listens to.
The queue's ConnectionFactory has a RedeliveryPolicy with maximumRedeliveries set to 3 and initialRedeliveryDelay set to 5000.
The queue contains both good and bad messages. When a bad message arrives, it is retried 3 times with a wait of 5000 ms between attempts, and during that time the good messages behind it are blocked.
What I want is that during the 5000 ms wait before each retry, the processing of good messages continues instead of waiting on the bad message.
For this I tried one attribute of the connection factory, nonBlockingRedelivery, set to true.
But nonBlockingRedelivery is not working.
Is there any other way to do this?
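For reference, the configuration described above looks roughly like this with the ActiveMQ 5.x client API (broker URL is illustrative; this is a sketch, not a verified fix):

import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.RedeliveryPolicy;

public class RedeliveryConfig {
    public static ActiveMQConnectionFactory createFactory() {
        ActiveMQConnectionFactory factory =
            new ActiveMQConnectionFactory("tcp://localhost:61616");

        RedeliveryPolicy policy = factory.getRedeliveryPolicy();
        policy.setMaximumRedeliveries(3);        // retry a bad message 3 times
        policy.setInitialRedeliveryDelay(5000);  // wait 5000 ms before redelivery

        // Intended to let other messages flow while a bad one waits for redelivery
        factory.setNonBlockingRedelivery(true);

        return factory;
    }
}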
You can always have a retransmission queue for failing messages.
That is, receive messages from a main queue (no redelivery) and if you get an exception, put the message on a redelivery queue.
Let your application listen on both queues and do the same logic to both messages. It should simply be two message listeners invoking the same method. One with redelivery and one without, but with slightly different error handling.
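A minimal sketch of that pattern with plain JMS (queue names, the process method, and error handling are illustrative):

import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.Queue;
import javax.jms.Session;

public class RetransmissionListener implements MessageListener {
    private final Session session;
    private final Queue redeliveryQueue; // second queue for failed messages

    public RetransmissionListener(Session session, Queue redeliveryQueue) {
        this.session = session;
        this.redeliveryQueue = redeliveryQueue;
    }

    @Override
    public void onMessage(Message message) {
        try {
            process(message); // same business logic for both queues
        } catch (Exception e) {
            try {
                // On failure, forward the message to the redelivery queue
                // instead of rolling back and blocking this queue
                session.createProducer(redeliveryQueue).send(message);
            } catch (JMSException jmse) {
                jmse.printStackTrace();
            }
        }
    }

    private void process(Message message) throws Exception {
        // ... shared business logic ...
    }
}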
Please look at the description of that attribute in ActiveMQConnectionFactory.
It says exactly what I wanted:
"When true a MessageConsumer will not stop Message delivery before re-delivering Messages from a rolled back transaction. This implies that message order will not be preserved and also will result in the TransactedIndividualAck option to be enabled"
But it is not working.
Can you please look into it?
I've been using Camel for a while and I'm a huge admirer of its simplicity.
The use case
Given this simple route:
from("mock:some-route")
// split 1
.split().method("splitterBean", "split")
// now we have an ArrayList of n messages (let's say 10)
.to(ExchangePattern.InOut, "jms:some-thing");
If we assume that we have 10 messages after the split(), this route will immediately send 10 messages to the "to" endpoint. So jms:some-thing will receive all 10 messages at once.
The problem
--> Please note that the "out" endpoint is inOut, so we have timeouts in place when the receiver must acknowledge the message.
The application on the receiving end of jms:some-thing has to do quite some work for each message. As all 10 messages were written at the same time, the same timeout applies for all of them.
So we increased that timeout.
But one day we will have 1000 messages, and the timeout will again be too low.
What I want to achieve
I want to implement a pattern where I send only 1 message at a time after the split, sending the next only after the previous message is acknowledged by the receiving system.
So instead of sending the 10 messages at once, I want
Send 1 message
Wait for the acknowledgment of that message
Send the next
Wait again
And so on..
How to implement such behavior?
I looked at the documentation, but none of the EIP components seems to fulfill that need.
Thanks for any input
You can have an intermediate seda queue with only one thread.
from("mock:some-route")
.split().method("splitterBean", "split")
.to("seda:your-seda-queue?waitForTaskToComplete=Always&timeout=0");
from("seda:your-seda-queue?waitForTaskToComplete=Always&timeout=0")
.to(ExchangePattern.InOut, "jms:some-thing");
By default, a SEDA queue has a single consuming thread and blocks the calling thread until a consumer becomes available. The Camel SEDA component documentation has more details.
That said, you are sending to a JMS destination, which is really what you should be using to queue up your requests instead of a SEDA queue. You should look into implementing this logic asynchronously and waiting on a reply destination rather than relying on a timeout.
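A hedged sketch of that request-reply alternative using the Camel JMS component's replyTo and requestTimeout options (endpoint and option values are illustrative):

import org.apache.camel.ExchangePattern;
import org.apache.camel.builder.RouteBuilder;

public class RequestReplyRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("mock:some-route")
            .split().method("splitterBean", "split")
            // InOut over JMS: Camel sends to the request destination,
            // then waits for a correlated reply on the reply destination
            // instead of relying only on a blanket timeout
            .to(ExchangePattern.InOut,
                "jms:some-thing?replyTo=some-thing.reply&requestTimeout=60000");
    }
}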
I have a FIFO SQS queue with a visibility timeout of 30 seconds.
The requirement is to read messages as quickly as possible and clear the queue.
I have Java code along the lines shown below (this is just a representation of the idea, not complete code):
// keep getting messages from the FIFO queue and process them ASAP
while (true) {
    List<Message> messages =
        sqsClient.receiveMessage(receiveMessageRequest).getMessages();
    // logic to process these messages and delete them ASAP
}
In the while loop as soon as the messages are received, they are processed and removed from the queue.
But many times the receiveMessageRequest gives me zero messages.
Also, a single receive from SQS is limited to 10 messages at a time, which is already an issue, but because of these zero-message receives the queue keeps piling up.
I have no clue why this is happening. The documentation is not exactly clear on this part (or am I missing something in the queue configuration?).
Please help!
Note:
1. My FIFO queue always has messages in this scenario, so there is no case of the queue being empty and the receive request returning zero messages.
2. The processing and delete times are also less than the visibility timeout.
Thanks.
Update:
I have started running multiple consumers to process the FIFO queue. Clearly, one consumer is not keeping up with the inflow of messages. I shall update in a few days on how the multiple consumers perform. Thanks
You first have to make sure that all messages you received are deleted within the VisibilityTimeout. If you are using DeleteMessageBatch for deletion, make sure that all 10 messages were actually deleted.
Also, how do you assign message group IDs when you enqueue messages?
Order of messages is guaranteed only within a single message group.
This also means that if you set the same group ID on all messages, you are limited to a single consumer so that the order of messages is preserved. Even if you use multiple consumers, all messages that belong to the same group stay invisible to other consumers until the visibility timeout expires.
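For illustration, spreading messages across several message groups when enqueuing might look like this with the AWS SDK for Java v1 (queue URL and group IDs are made up):

import java.util.UUID;

import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.model.SendMessageRequest;

public class FifoSender {
    // Messages in different groups can be consumed in parallel;
    // ordering is guaranteed only within one group.
    public static void send(AmazonSQS sqs, String queueUrl,
                            String body, String groupId) {
        SendMessageRequest request = new SendMessageRequest()
            .withQueueUrl(queueUrl)
            .withMessageBody(body)
            .withMessageGroupId(groupId) // required for FIFO queues
            .withMessageDeduplicationId(UUID.randomUUID().toString());
        sqs.sendMessage(request);
    }
}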
I have setup a JMS queue that is fed by a single producer, and consumed by 8 different consumers.
I would like to configure my queue/broker so that one message being delivered to a consumer blocks the queue until the consumer is done processing the message. During the processing of this first message, the following messages may not be delivered to another consumer. It doesn't matter which consumer processes which message, and it is acceptable for the same consumer to consume many messages in a row as long as when it dies another consumer is able to pick up the rest of the unprocessed messages.
In order to do this, I have configured all of my consumers to use the CLIENT acknowledgement mode, and I have coded them so that message.acknowledge() is called only at the end of the message processing.
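The setup described above looks roughly like this in plain JMS (a sketch; destination name is illustrative):

import javax.jms.Connection;
import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;

public class ClientAckConsumer {
    public static void consume(Connection connection) throws JMSException {
        // CLIENT_ACKNOWLEDGE: the message is only acknowledged
        // when message.acknowledge() is called explicitly
        Session session =
            connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
        Queue queue = session.createQueue("work.queue");
        MessageConsumer consumer = session.createConsumer(queue);

        consumer.setMessageListener(message -> {
            try {
                // ... long-running processing ...
                message.acknowledge(); // acknowledge only after processing
            } catch (JMSException e) {
                e.printStackTrace();
            }
        });
        connection.start();
    }
}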
My understanding was that this should be sufficient to satisfy my requirements.
However I am apparently wrong, because it looks like my broker (OpenMQ) is delivering the messages to consumers as fast as possible, without waiting for consumer acknowledgement. As a result, I get multiple messages processed in parallel, one per consumer.
I'm obviously doing something wrong, but I can't figure out what.
As a workaround, I figure I could create a durable subscription with a fixed client ID shared between all my consumers. It would probably work by only allowing one consumer to even connect to the broker, but I can't shake the feeling that this is a rather ugly workaround.
Does anyone have an idea of how I should configure my Broker and/or my Client to make this possible?
I am going to use a Session to commit the read of a JMS message after it (and any corresponding write) has successfully completed.
However, if I have an error and have to roll back, I would like to process new messages first rather than retrying the message that caused the error. I do want to eventually reprocess the failed message, but not have it fail over and over while other, yet-unseen messages stall behind it, waiting for someone to remove the offending message or fix the environment that made it fail.
Is this automatic? (will be using Sonic MQ, if that matters). If so, the rest of this question is moot.
Do I need to, or can I even, reset the priority of the failed message to push it further back in the queue (behind other pending messages, if any)? If I need to reset the priority, how do I make that "stick", given that I would have rolled back the transaction that initially read the message in question.
I am not aware of a feature in Sonic MQ that supports your requirement out of the box, but there are other options:
Use a second queue for failed messages, i.e. send failed messages to another queue. Processing of that queue could start when the first queue is empty, for example.
Resend the message on the same queue (with same or even with a lower priority)
In both cases, after the message has been sent, there is a normal commit on the main queue.
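A sketch of the second option, resending on the same (transacted) queue with a lower priority and then committing, using plain JMS (helper names are hypothetical):

import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;

public class RetryWithLowerPriority {
    // Assumes a transacted session; the commit removes the original
    // receive and makes the re-send effective in one unit of work.
    public static void handleFailure(Session session, Queue queue,
                                     Message failed) throws JMSException {
        MessageProducer producer = session.createProducer(queue);
        int priority = Math.max(0, failed.getJMSPriority() - 1);
        // Re-send the failed message with lower priority so newer
        // messages are (hopefully) delivered first
        producer.send(failed, failed.getJMSDeliveryMode(), priority,
                      Message.DEFAULT_TIME_TO_LIVE);
        session.commit();
    }
}

As the quoted spec text below notes, priority ordering is best-effort, so this only pushes the failed message back, it does not guarantee strict ordering.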
Related: Message processing with priorities. A quote from James Shek's answer:
JMS Spec states: "JMS does not require that a provider strictly implement priority ordering of messages; however, it should do its best to deliver expedited messages ahead of normal messages."
Say you have two Spring DefaultMessageListenerContainer listening on the same Queue (ActiveMQ for example), started in different VMs.
Send 1,000,000 messages in. After 500,000 messages you want the rest of them to be handled by only one DefaultMessageListenerContainer, BUT without calling destroy or shutdown on the other (since you might need it in the future, and it must stay manageable with JMX). The numbers are just an example and could be replaced with "after some time, after some messages, etc."
This sounds easy : call stop on the other DefaultMessageListenerContainer. Wrong, since messages are dispatched in a Round Robin fashion and they get registered with the Consumer.
Add transaction support and throw an error in the second DefaultMessageListenerContainer every time a message comes in; it will be rolled back and taken (round-robin) by the first one. Wrong again: the message somehow registers with the consumer, not allowing the first DefaultMessageListenerContainer to take it.
Even if you shut down/destroy the first DMLC, the message is NOT consumed by the other DMLC. The messages ARE consumed only if I kill the JVM that the now shut-down/destroyed DMLC was running in.
My solution so far: because of Session.AUTO_ACKNOWLEDGE, messages are taken off the queue before they enter the onMessage method of the DefaultMessageListenerContainer's MessageListener. In the MessageListener, implement SessionAwareMessageListener and re-send a fresh copy of the message with the same payload.
But this looks really dirty - I wish I could do it more in a "JMS"-ish way.
I don't fully grasp this part: "[the messages] get registered with the Consumer". Do you mean that ActiveMQ decides which listener to send it to? What exactly happens when you call "stop" on the DMLC?
I don't know if this is going to overcome your difficulties, but here's an idea: message selectors in DMLCs are live: you can change them any time and they are effective immediately. Perhaps try changing the message selector to "FALSE"; all cached messages should finish processing and new ones should stop coming.
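A hedged sketch of flipping the selector at runtime on a Spring container (bean wiring omitted; the "matches nothing" selector expression is illustrative):

import org.springframework.jms.listener.DefaultMessageListenerContainer;

public class ListenerSwitch {
    // Effectively stop new deliveries to this container without
    // destroying it; in-flight messages finish processing normally.
    public static void pause(DefaultMessageListenerContainer container) {
        container.setMessageSelector("1 = 0"); // selector that matches nothing
    }

    public static void resume(DefaultMessageListenerContainer container) {
        container.setMessageSelector(null); // back to receiving everything
    }
}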