How to not acknowledge only one message with Spring-JMS?

There is a class 'MyConsumer' which receives messages from a queue and processes them. There are two requirements:
If a message contains invalid content, MyConsumer should not acknowledge it, but should still be able to process later messages
The unacknowledged message will be delivered again when MyConsumer restarts
I tried spring-jms with its listener-container support, but can't find a solution that fits the first requirement.
My code:
<amq:queue id="destination" physicalName="org.springbyexample.jms.test"/>
<amq:connectionFactory id="jmsFactory" brokerURL="tcp://localhost:11111"/>
<bean id="jmsConsumerConnectionFactory"
      class="org.springframework.jms.connection.SingleConnectionFactory"
      p:targetConnectionFactory-ref="jmsFactory"/>
<bean id="jmsConsumerTemplate" class="org.springframework.jms.core.JmsTemplate"
      p:connectionFactory-ref="jmsConsumerConnectionFactory"
      p:defaultDestination-ref="destination"/>
<bean id="jmsMessageListener" class="test.MyConsumer"/>
<bean id="errorHandler" class="test.MyErrorHandler"/>
<jms:listener-container container-type="default"
                        connection-factory="jmsConsumerConnectionFactory"
                        error-handler="errorHandler"
                        acknowledge="client">
    <jms:listener destination="org.springbyexample.jms.test" ref="jmsMessageListener"/>
</jms:listener-container>
Class MyConsumer:
@Override
public void onMessage(Message message) {
    TextMessage textMessage = (TextMessage) message;
    try {
        System.out.println("!!!!!!!!! get message: " + textMessage.getText());
    } catch (JMSException e) {
        e.printStackTrace();
    }
    if (theNumberOfMessageIs(3)) {
        throw new RuntimeException("something is wrong");
    }
}
You may notice that acknowledge in the listener-container is set to client; it actually takes one of 3 values:
auto (default)
client
transacted
I tried all of them, but none fits my requirement. My test scenario:
the producer puts 3 messages on the queue
start a thread to monitor the message count in the queue and print it whenever it changes
start the consumer, which receives messages from the queue and processes them
wait a while, then put another 3 messages on the queue
For auto:
MyConsumer acknowledges each message after receiving it, whether or not an exception is thrown
For client:
MyConsumer acknowledges only if no exception is thrown in onMessage. The 3rd message throws an exception, so one message stays in the queue unconsumed. But when it receives the 4th message and no exception is thrown, the 3rd message disappears from the queue
For transacted:
If an exception is thrown in MyConsumer, the message is not acknowledged and is re-delivered several times. After that, the message disappears from the queue
But none of them fits requirement 1.
I wonder whether I need to look for a solution other than Spring JMS, or whether my usage is simply not correct?

auto The DefaultMessageListenerContainer is really designed for transactions - with auto, as you have found, the message is always acknowledged. You can use a SimpleMessageListenerContainer, which will work as you desire, but it has other limitations; see the JavaDocs.
client That's just the way JMS works: when you ack #4, #3 is automatically acked too - see the Message JavaDocs. Client mode is used to reduce ack traffic (by, say, acking every 10 messages).
transacted That's a function of the broker; you can configure AMQ to send the bad message to a Dead Letter Queue after some number of retries.
You would need some process to move messages from the DLQ back to the main queue for later retry (perhaps during initialization on restart).
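For the transacted route, here is a minimal Java-configuration sketch (MyConsumer, the broker URL and the destination name are taken from the question; the RedeliveryPolicy value is an illustrative assumption, not something from the original post). The listener runs in a local JMS transaction, so a RuntimeException rolls the delivery back, and ActiveMQ moves the message to its DLQ once the redelivery limit is exhausted:

import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.RedeliveryPolicy;
import org.springframework.jms.listener.DefaultMessageListenerContainer;
import test.MyConsumer;

public class TransactedListenerConfig {

    public static DefaultMessageListenerContainer listenerContainer() {
        ActiveMQConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:11111");

        // Illustrative: after 5 rolled-back deliveries ActiveMQ gives up and
        // moves the message to its dead letter queue (ActiveMQ.DLQ by default).
        RedeliveryPolicy policy = new RedeliveryPolicy();
        policy.setMaximumRedeliveries(5);
        cf.setRedeliveryPolicy(policy);

        DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
        container.setConnectionFactory(cf);
        container.setDestinationName("org.springbyexample.jms.test");
        container.setMessageListener(new MyConsumer());
        // Local JMS transaction: a RuntimeException in onMessage() causes a
        // rollback instead of an acknowledgement.
        container.setSessionTransacted(true);
        return container;
    }
}

Declare the container as a Spring bean (or call afterPropertiesSet() and start() yourself) so it is actually initialized; this is the Java-config equivalent of the XML listener-container above with acknowledge="transacted".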

With WMQ you can meet the requirement using the backout feature, via the BOTHRESH and BOQNAME queue configuration parameters: BOTHRESH defines how many times delivery of the message will be attempted, and BOQNAME defines the name of the queue the message is redelivered to once that threshold is reached. You can point BOQNAME at a DLQ and move messages back to the main queue after some time, or point it at the main queue itself so the message rotates back to your consumer.
Hope that helps.

Related

How to move error message to IBM MQ dead letter queue using java?

Currently, my program is processing the messages received from a queue, but we encountered an XML file that has an error, and the program keeps looping on the same message and retrying to process it.
I would like to move the message to the dead letter queue when a message like this occurs again.
What I did right now is create a class that will producer.send(destination, msg) to the dead letter queue and call it in the catch block, but it seems that it is not working.
As @JoshMc hinted, you should be treating the error messages as poison messages. For that you will need to enable transactions and invoke a rollback for the error message.
i.e. logic that looks like:
// Create a connection factory
JmsFactoryFactory ff = JmsFactoryFactory.getInstance(WMQConstants.WMQ_PROVIDER);
JmsConnectionFactory cf = ff.createConnectionFactory();
// Set connection properties
...
context = cf.createContext(JMSContext.SESSION_TRANSACTED);
try {
    ...
    // Message Processing
    ...
    // All is OK
    context.commit();
} catch (Exception e) {
    // Message processing failed
    context.rollback();
}
If a backout queue and backout threshold are set, the poison message is put onto the backout queue (BOQNAME) after BOTHRESH attempts at handling the message.
All this is done for you by the underlying MQ client code.
There is an explanation in this article - https://developer.ibm.com/articles/an-introduction-to-local-transactions-using-mq-and-jms/
which also links to sample code here - https://github.com/ibm-messaging/mq-dev-patterns/tree/master/transactions/JMS/SE

Acknowledge message in SQS queue

I am using Amazon SQS with the Amazon SQS-JMS Java library with Java EE 7. What I want to achieve is, after receiving a message, depending on the business logic of the application, to either confirm (consume) the message or send it back to the queue, and after 3 failed retries move it to the DLQ.
I thought about using CLIENT_ACKNOWLEDGE mode in JMS and only acknowledging the messages that were successfully processed, but this is from their official documentation:
In this mode, when a message is acknowledged, all messages received before this message are implicitly acknowledged as well. For example, if 10 messages are received, and only the 10th message is acknowledged (in the order the messages are received), then all of the previous nine messages are also acknowledged.
This example also seems to confirm this: http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/code-examples.html#example-synchronous-receiver-client-acknowledge-mode.
For me this is kind of strange behavior and the opposite of what I would expect from CLIENT_ACKNOWLEDGE. Is there a more elegant solution here than just manually sending the message throughout the code to the main SQS queue or DLQ depending on the process status?
You can use:
UNORDERED_ACKNOWLEDGE
SQSSession.UNORDERED_ACKNOWLEDGE
It comes from com.amazon.sqs.javamessaging and, as stated in the documentation, it is a variation of CLIENT_ACKNOWLEDGE which only acknowledges the message for which it is called.
/**
* Non standard acknowledge mode. This is a variation of CLIENT_ACKNOWLEDGE
* where Clients need to remember to call acknowledge on message. Difference
* is that calling acknowledge on a message only acknowledge the message
* being called.
*/
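As a rough usage sketch (the queue name is a placeholder; the factory constructor shown is the 2.x style of amazon-sqs-java-messaging-lib, while the 1.0.x releases use SQSConnectionFactory.builder() instead):

import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;
import com.amazon.sqs.javamessaging.ProviderConfiguration;
import com.amazon.sqs.javamessaging.SQSConnection;
import com.amazon.sqs.javamessaging.SQSConnectionFactory;
import com.amazon.sqs.javamessaging.SQSSession;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;

public class UnorderedAckExample {
    public static void main(String[] args) throws Exception {
        // 2.x-style constructor; adjust for the 1.0.x builder if you are on that version.
        SQSConnectionFactory factory = new SQSConnectionFactory(
                new ProviderConfiguration(), AmazonSQSClientBuilder.defaultClient());
        SQSConnection connection = factory.createConnection();

        // UNORDERED_ACKNOWLEDGE: acknowledge() only covers the message it is called on.
        Session session = connection.createSession(false, SQSSession.UNORDERED_ACKNOWLEDGE);
        Queue queue = session.createQueue("my_q"); // queue name is illustrative
        MessageConsumer consumer = session.createConsumer(queue);
        connection.start();

        Message message = consumer.receive(5000);
        if (message != null) {
            // Earlier unacknowledged messages are NOT implicitly acknowledged here.
            message.acknowledge();
        }
        connection.close();
    }
}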
dependency example:
"com.amazonaws:amazon-sqs-java-messaging-lib:1.0.3"
To handle this case you can use the RedrivePolicy attribute of the queue together with the DLQ that you created. A solution for this case can be:
Create two SQS queues, say my_q and my_q_dl (the latter is the DLQ)
Set my_q_dl as the DLQ of my_q by using RedrivePolicy.
Here, care should be taken to specify deadLetterTargetArn and maxReceiveCount. maxReceiveCount is the number of times you want to receive any message without acknowledging it before it is sent to the DLQ. If you set maxReceiveCount=3, the message remains in my_q up to the 3rd pull by the consumer with no ack; a sketch of setting this policy follows the quoted documentation below.
There are 2 cases here:
Normal case: the message gets deleted as soon as the ack is received.
If there is no ack (message delete) for that message up to the third receive, the message is removed from my_q and pushed to my_q_dl.
RedrivePolicy - The string that includes the parameters for the dead-letter queue functionality of the source queue.
deadLetterTargetArn - The Amazon Resource Name (ARN) of the dead-letter queue to which Amazon SQS moves messages after the value of maxReceiveCount is exceeded.
maxReceiveCount - The number of times a message is delivered to the source queue before being moved to the dead-letter queue.
Note
The dead-letter queue of a FIFO queue must also be a FIFO queue. Similarly, the dead-letter queue of a standard queue must also be a standard queue.
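As mentioned in the list above, here is a hedged sketch of setting that policy with the AWS SDK for Java v1 (queue names are the illustrative my_q / my_q_dl from above; credentials and region setup are omitted):

import java.util.HashMap;
import java.util.Map;
import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import com.amazonaws.services.sqs.model.GetQueueAttributesRequest;
import com.amazonaws.services.sqs.model.SetQueueAttributesRequest;

public class RedrivePolicySetup {
    public static void main(String[] args) {
        AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();

        String mainQueueUrl = sqs.getQueueUrl("my_q").getQueueUrl();
        String dlqUrl = sqs.getQueueUrl("my_q_dl").getQueueUrl();

        // The redrive policy references the DLQ by ARN, not by URL.
        String dlqArn = sqs.getQueueAttributes(
                new GetQueueAttributesRequest(dlqUrl).withAttributeNames("QueueArn"))
                .getAttributes().get("QueueArn");

        // After 3 receives without a delete (ack), SQS moves the message to my_q_dl.
        Map<String, String> attributes = new HashMap<>();
        attributes.put("RedrivePolicy",
                "{\"maxReceiveCount\":\"3\",\"deadLetterTargetArn\":\"" + dlqArn + "\"}");
        sqs.setQueueAttributes(new SetQueueAttributesRequest(mainQueueUrl, attributes));
    }
}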

Acknowledgment on Publish with Spring AMQP

Is there any way a publisher can be notified that a published message has been delivered to a listener when using Spring AMQP? I have a number of queues where I set x-message-ttl = 0, which means messages will be discarded if they cannot be immediately delivered, but as I'm using this in a request/reply scenario, I'd like to be able to abort the request and handle an error immediately.
You could publish a message with the mandatory flag.
If this flag is set, the server will return an undeliverable message
with a Return method. If this flag is zero, the server will queue the
message, but with no guarantee that it will ever be consumed.
And set a return callback which will be called if the message is unroutable.
Another solution could be to use an alternate exchange associated with your exchange. The downside is that you need to bind a queue to this AE and consume its messages to be able to know whether a request has failed.
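As a rough sketch of the first suggestion (assuming Spring AMQP 2.3 or later, where setReturnsCallback replaced the older setReturnCallback; the host name is a placeholder):

import org.springframework.amqp.core.ReturnedMessage;
import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitTemplate;

public class MandatoryPublishExample {

    public static RabbitTemplate rabbitTemplate() {
        CachingConnectionFactory cf = new CachingConnectionFactory("localhost");
        // Returned (unroutable) messages only reach the callback if publisher
        // returns are enabled on the connection factory.
        cf.setPublisherReturns(true);

        RabbitTemplate template = new RabbitTemplate(cf);
        template.setMandatory(true);
        // Invoked when the broker cannot route the message; abort the request here.
        template.setReturnsCallback((ReturnedMessage returned) ->
                System.err.println("Unroutable: " + returned.getReplyText()));
        return template;
    }
}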

How to handle producer flow control in jms messaging while using apache qpid

I am trying to handle a flow control situation on the producer end.
I have a queue on a qpid broker with a max queue size set. I also have flow_stop_count and flow_resume_count set on the queue.
The producer keeps continuously producing messages until this flow_stop_count is reached. Upon breach of this count, an exception is thrown, which is handled by the exception listener.
Some time later the consumer on the queue will catch up and the flow_resume_count will be reached. The question is how the producer gets to know of this event.
Here is sample code for the producer:
Connection connection = connectionFactory.createConnection();
connection.setExceptionListener(new MyExceptionListener());
connection.start();
Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
Queue queue = (Queue) context.lookup("Test");
MessageProducer producer = session.createProducer(queue);
while (notStopped) {
    while (suspend) { // --------------------------- how to resume this flag???
        Thread.sleep(1000);
    }
    TextMessage message = session.createTextMessage();
    message.setText("TestMessage");
    producer.send(message);
}
session.close();
connection.close();
and for the exception listener
private class MyExceptionListener implements ExceptionListener {
    public void onException(JMSException e) {
        System.out.println("got exception:" + e.getMessage());
        suspend = true;
    }
}
Now the exception listener is a generic listener for exceptions, so it does not seem like a good idea to suspend the producer flow through it.
What I need is perhaps some method at the producer level, something like producer.isFlowStopped(), which I can use to check before sending a message. Does such functionality exist in the Qpid API?
There is some documentation on the Qpid website which suggests this can be done, but I couldn't find any examples of it being done anywhere.
Is there some standard way of handling this kind of scenario?
From what I have read of the Apache Qpid documentation, it seems that flow_resume_count and flow_stop_count will cause producers to be blocked.
Therefore the only option, software-wise, would be to poll at regular intervals until the messages start flowing again.
Extract from here.
If a producer sends to a queue which is overfull, the broker will respond by instructing the client not to send any more messages. The impact of this is that any future attempts to send will block until the broker rescinds the flow control order.
While blocking the client will periodically log the fact that it is blocked waiting on flow control.
WARN AMQSession - Broker enforced flow control has been enforced
WARN AMQSession - Message send delayed by 5s due to broker enforced flow control
WARN AMQSession - Message send delayed by 10s due to broker enforced flow control
After a set period the send will timeout and throw a JMSException to the calling code.
ERROR AMQSession - Message send failed due to timeout waiting on broker enforced flow control.
This documentation implies that the software managing the producer has to manage the situation itself: basically, when you receive an exception that the queue is overfull, you need to back off and most likely poll and reattempt to send your messages.
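A minimal sketch of that back-off loop (the retry delay and attempt limit are arbitrary illustrative choices, not values from the Qpid documentation):

import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;

public class BackoffSender {

    // Retries the send with a fixed delay when broker-enforced flow control
    // makes the send time out with a JMSException.
    static void sendWithRetry(Session session, MessageProducer producer, String text)
            throws JMSException, InterruptedException {
        int attempts = 0;
        while (true) {
            try {
                TextMessage message = session.createTextMessage(text);
                producer.send(message);
                return;
            } catch (JMSException e) {
                if (++attempts >= 10) {
                    throw e; // give up after 10 attempts (arbitrary limit)
                }
                Thread.sleep(5000); // back off before retrying
            }
        }
    }
}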
You can try setting the capacity (size in bytes at which the queue is thought to be full ) and flowResumeCapacity (the queue size at which producers are unflowed) properties for a queue.
send() will then be blocked if the size exceeds the capacity value.
You can have a look at this test case file in the repo to get an idea.
Producer flow control is not yet implemented on the JMS client.
See https://issues.apache.org/jira/browse/QPID-3388

MDB not listening after startup

I have a Message Producer running on one JVM that puts messages in a JMS queue. I have a Message Consumer which implements the Message-Driven-Bean and MessageListener interfaces and listens to this queue. This Message Consumer is on a different JVM.
The producer puts messages in the queue properly, but the MDB is not able to pop messages off the queue. The weird thing is that when I restart my Message Consumer, all the messages in the queue are popped out by the Message Consumer at once. After this, no matter how many messages the producer puts in the queue, the Message Consumer does not pop them out.
What could be the reason?
The application server I am using is JBoss 4.0.5.GA.
Thanks
Please provide more details. From what you have provided:
is your consumer running and waiting for messages? (inside some sort of while loop or a blocking call; see the sketch below)
you can set the prefetch size for your consumer to 1 in your JMS connection settings so that it fetches only 1 (or whatever number of) messages from the queue.
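For the first point, a minimal standalone consumer that stays alive and blocks on receive() could look like this (the connection factory lookup and queue name are placeholders, not details from the question):

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;

public class BlockingConsumer {

    // Keeps the consumer running and waiting for messages instead of exiting
    // after a single receive.
    static void consume(ConnectionFactory factory, String queueName) throws Exception {
        Connection connection = factory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue(queueName);
        MessageConsumer consumer = session.createConsumer(queue);
        connection.start();
        while (true) {
            Message message = consumer.receive(); // blocks until a message arrives
            System.out.println("received: " + message);
        }
    }
}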
