Camel consume single message and stop, transacted - java

I am trying to use Camel to consume a single message from a JMS queue in a transacted manner, specifically in a flow like this:
1. Wait until a message is published on the JMS queue.
2. Try to consume and process the single message.
3. If processing fails (an exception occurs), roll back the consumption.
4. If processing succeeds, acknowledge the message and stop consuming any further messages.
5. Later in the application lifecycle, another process triggers consumption to start again from (1).
At first I tried to do this with a polling consumer, using the ConsumerTemplate, but I can't figure out whether it's possible to do this transactionally. It seems the transaction is internal to the ConsumerTemplate, so regardless of what I do, the message has already been acknowledged as consumed by the time the ConsumerTemplate returns.
Can I do this using the ConsumerTemplate? Can I do this with Camel at all, and if so, what is the best approach? (Simple examples would be appreciated.)

I ended up using the pollEnrich DSL to achieve this. For example, my route builder looks like:
from("direct:service-endpoint")
    .transacted("PROPAGATION_REQUIRED")
    .setExchangePattern(ExchangePattern.InOut)
    .pollEnrich("activemq:test-queue")
    .bean(myHandler);
I use the direct endpoint as a service: sending a "request" message to the direct endpoint polls the JMS queue for a single message (blocking if required). The transaction that is started extends to the pollEnrich, so if, for example, the myHandler bean fails, the message taken during the pollEnrich is not consumed and is left on the queue.
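For context, a fuller sketch of that route might look like the following. The class name and constructor wiring are illustrative only; the "PROPAGATION_REQUIRED" transaction policy bean and the activemq component still have to be registered separately.

import org.apache.camel.ExchangePattern;
import org.apache.camel.builder.RouteBuilder;

public class SingleMessageRoute extends RouteBuilder {

    private final Object myHandler;   // the processing bean from the original route

    public SingleMessageRoute(Object myHandler) {
        this.myHandler = myHandler;
    }

    @Override
    public void configure() throws Exception {
        from("direct:service-endpoint")
            .transacted("PROPAGATION_REQUIRED")   // Spring transaction policy bean in the registry
            .setExchangePattern(ExchangePattern.InOut)
            .pollEnrich("activemq:test-queue")    // blocks until a single message is available
            .bean(myHandler);                     // an exception here rolls the JMS consume back
    }
}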

Related

Spring cloud kafka binder query

We have a requirement where we consume messages from one topic, enrich them, and then publish the enriched message to another topic. The steps are:
Consumer - consume the message
Enrichment - enrich the consumed message
Producer - publish the enriched message to the other topic
I am using the Spring Cloud Stream Kafka binder and things were working fine. We then observed that the producer was sending duplicate messages to the topic, so we made the producer idempotent. We have autoCommitOffset set to false for better control. Below is what we are doing in the method:
@StreamListener("INPUT")
@SendTo("OUTPUT")
public String consumer(Message<?> message) {
    String inputMessage = message.getPayload().toString();
    String enrichedMessage = inputMessage; // enrichment on inputMessage (omitted here)
    return enrichedMessage;
}
We observed that if ack.acknowledge() fails for some reason, the message is still sent to the outbound channel. How can we handle the entire consume/enrich/produce flow as one transaction, so that if the acknowledgement fails the message is not sent to the topic?
I have also set the transaction properties below:
spring.cloud.stream.kafka.binder.transaction.transactionIdPrefix=TX-
spring.cloud.stream.kafka.binder.transaction.producer.configuration.ack=all
spring.cloud.stream.kafka.binder.transaction.producer.configuration.retries=1
spring.cloud.stream.kafka.bindings.input.consumer.autoCommitOffset=true
spring.cloud.stream.kafka.bindings.input.consumer.enableDlq=true
spring.cloud.stream.kafka.bindings.input.consumer.dlqName=error.topic
spring.cloud.stream.kafka.bindings.input.consumer.autoCommitOnError=true
If there is any example available, that would be really helpful.
Cheers
You need to make the binder transactional. See the documentation
https://docs.spring.io/spring-cloud-stream-binder-kafka/docs/3.1.4/reference/html/spring-cloud-stream-binder-kafka.html#_kafka_binder_properties
spring.cloud.stream.kafka.binder.transaction.transactionIdPrefix
Enables transactions in the binder. See transaction.id in the Kafka documentation and Transactions in the spring-kafka documentation. When transactions are enabled, individual producer properties are ignored and all producers use the spring.cloud.stream.kafka.binder.transaction.producer.* properties.
Default null (no transactions)
Note that consumers on the output topic must be configured with isolation.level=read_committed to avoid receiving rolled-back records.
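For reference, a rough sketch of the two pieces this implies (the TX- prefix and the binding/channel names are placeholders; check the exact keys against the binder documentation linked above):

# in the application that produces to the output topic: enable binder transactions
spring.cloud.stream.kafka.binder.transaction.transactionIdPrefix=TX-

# in any application that consumes that output topic: read only committed records
spring.cloud.stream.kafka.bindings.input.consumer.configuration.isolation.level=read_committed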

Spring AMQP : [RabbitTemplate] Hystrix fallback is not getting invoked when RabbitTemplate ReturnCallback is executed

I am using Hystrix to handle fallback scenarios when my messages are not delivered to the RabbitMQ server. My fallback is invoked when the RabbitMQ server is down (as an AmqpException is thrown).
If the broker is unable to accept or route the messages, then the return callback / publisher confirm (with a nack) is invoked instead.
What I understand is that the RabbitTemplate return callbacks / publisher confirms are executed on a different thread than the Hystrix thread.
Is it possible to throw an exception in these scenarios so that the Hystrix fallback gets executed?
I have referred to these questions: Spring AMQP return callback vs retry callback
Spring RabbitTemplate- How to get hold of the published message for NACKs in Publisher confirm mode
Any pointers on handling this scenario are much appreciated.
No; returns are completely asynchronous, even if you enable transactions. From the RabbitMQ documentation:
AMQP does not specify when errors (e.g. lack of permissions, references to unknown exchanges) in transactional basic.publish and basic.ack commands should be detected. RabbitMQ performs the necessary checks immediately (rather than, say, at the time of commit), but note that both basic.publish and basic.ack are asynchronous commands so any errors will be reported back to the client asynchronously.
If you publish to a non-existent exchange (and call setChannelTransacted(true)*) you will get an exception on the commit, but publishing to an exchange with no routable queue will never get an exception (only an async return callback).
* Enabling transactions is quite expensive for all operations, so consider it carefully if you only want to catch this particular scenario.
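To make the threading point concrete, here is a minimal, purely illustrative sketch (host name, exchange and routing key are made up). On Spring AMQP 2.3+ the method is setReturnsCallback; older versions use setReturnCallback as shown:

import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitTemplate;

// The return callback runs on an AMQP connection thread, not on the thread that called
// convertAndSend, so a Hystrix command wrapping the send never sees an exception for an
// unroutable message.
CachingConnectionFactory connectionFactory = new CachingConnectionFactory("localhost");
connectionFactory.setPublisherReturns(true);

RabbitTemplate template = new RabbitTemplate(connectionFactory);
template.setMandatory(true);   // ask the broker to return messages it cannot route
template.setReturnCallback((message, replyCode, replyText, exchange, routingKey) ->
        System.err.println("Returned: " + replyText + " (routing key " + routingKey + ")"));

// This send returns normally even if the routing key matches no queue;
// the failure is only reported later via the callback above.
template.convertAndSend("some-exchange", "no.such.key", "payload");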

How to Prevent Spring AMQP from Blocking on Unacked Messages?

I have a @RabbitListener annotated method for which Spring AMQP blocks after returning from the method. The underlying SimpleRabbitListenerContainerFactory uses AcknowledgeMode.MANUAL. I don't want to acknowledge the message in the listener method yet.
Is there any way to not have Spring AMQP block in such a scenario?
In more detail
I use a listener like this:
@RabbitListener(queues = "#{ @myQueue }")
void recordRequestsFromMyMessages(
        @Payload MyMessage myMessagePayload,
        @Header(AmqpHeaders.DELIVERY_TAG) long deliveryTag,
        Channel channel) {

    // record relevant parts of the given message and combine them with
    // parts from previous/future messages

    // DON'T acknowledge the consumed message, yet; instead only keep a
    // record of the channel and the delivery tag
}
Since I batch/combine multiple messages before I actually process them (asynchronously) later, I don’t want to acknowledge the consumed message right away. Instead, I only want to do this once the messages have been successfully processed later.
With my current approach, Spring AMQP blocks after recordRequestsFromMyMessages above returns, and no further messages are consumed from the same queue.
This SO answer suggests that batch processing should work, however, I’m not sure how.
It's not the container that's "blocking".
You need to increase the prefetchCount on the container (default 1) - the broker only allows that number of unacked messages to be outstanding.
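A minimal configuration sketch (bean and parameter names are illustrative, and 250 is an arbitrary value) showing where the prefetch is raised while keeping manual acks:

import org.springframework.amqp.core.AcknowledgeMode;
import org.springframework.amqp.rabbit.config.SimpleRabbitListenerContainerFactory;
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.context.annotation.Bean;

@Bean
public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(ConnectionFactory connectionFactory) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    factory.setAcknowledgeMode(AcknowledgeMode.MANUAL);  // the listener still acks later, by delivery tag
    factory.setPrefetchCount(250);                       // how many unacked deliveries the broker may push
    return factory;
}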

spring rabbitmq and UI layer or managed bean

I have a RabbitMQ listener as a separate class and a JSF 2 managed bean.
In my bean I send a message and need to wait for the result. I can't use sendAndReceive... because I send the message to one queue but receive from another queue, so I assign a correlationId before sending.
So I need to wait asynchronously, until the right message arrives at the listener. How do I do that with RabbitMQ?
Looking at the javadoc and source of RabbitTemplate, it waits for the response in a reply queue. Do you set the 'reply-to' property in your messages? If yes, then the RabbitTemplate sendAndReceive methods should wait for the response in the 'reply-to' queue. Be sure to populate the replyTo field correctly and test it.
Side note:
In RabbitMQ you do not send messages to the queue.
You send messages to exchanges. Exchanges route messages to the queue(s) using bindings. With the default or direct exchange type it looks like you send directly to the queue, but this is an over-simplification.
See https://www.rabbitmq.com/tutorials/amqp-concepts.html for details.
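As a small, made-up illustration of that routing model with Spring AMQP (the exchange, queue and routing-key names are placeholders):

import org.springframework.amqp.core.Binding;
import org.springframework.amqp.core.BindingBuilder;
import org.springframework.amqp.core.DirectExchange;
import org.springframework.amqp.core.Queue;

// The publisher addresses "orders.exchange" with routing key "orders.created";
// this binding is what makes such messages land in "orders.queue".
DirectExchange exchange = new DirectExchange("orders.exchange");
Queue queue = new Queue("orders.queue");
Binding binding = BindingBuilder.bind(queue).to(exchange).with("orders.created");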
Edit:
It seems there is a fix for that in Spring AMQP 1.4.5.RELEASE:
https://spring.io/blog/2015/05/08/spring-amqp-1-4-5-release-and-1-5-0-m1-available
Configurable Exchange/Routing Key for Replies
Previously, when using request/reply messaging with the RabbitTemplate, replies were routed to the default exchange and routed with the queue name. It is now possible to supply a reply-address with the form exchange/routingKey to route using a specific exchange and routing key.
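Since the question rules out sendAndReceive (the reply arrives on a different queue), one way to wait asynchronously, not taken from the answer above, is to correlate replies by hand with a map of pending futures. The sketch below is purely illustrative and assumes Spring AMQP 2.x, where MessageProperties.getCorrelationId() returns a String.

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import org.springframework.amqp.core.Message;

public class CorrelatingReplyListener {

    private final ConcurrentMap<String, CompletableFuture<Message>> pending = new ConcurrentHashMap<>();

    // Called by the sender (e.g. the JSF bean) before publishing the request
    // with the same correlationId set on the outgoing message.
    public CompletableFuture<Message> expectReply(String correlationId) {
        CompletableFuture<Message> future = new CompletableFuture<>();
        pending.put(correlationId, future);
        return future;
    }

    // Wired as the listener for the reply queue (e.g. via a message listener container).
    public void onMessage(Message message) {
        String correlationId = message.getMessageProperties().getCorrelationId();
        CompletableFuture<Message> future = correlationId == null ? null : pending.remove(correlationId);
        if (future != null) {
            future.complete(message);
        }
    }
}

The bean can then block on, or attach a callback to, the returned future instead of tying up the listener thread.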

Keeping messages in queue in case of receiver crash

We have a Spring JMS message listener container for receiving messages asynchronously, using DefaultMessageListenerContainer in sessionTransacted mode. I understand that sessionTransacted mode means that, in case of an exception, the message is put back on the queue. But how can I make sure the message won't be deleted from the queue even if the receiver (which picked up the message) crashes, or the machine running it loses power?
At first I thought the CLIENT_ACKNOWLEDGE acknowledge mode would save me, but apparently that's not the case: Spring calls .acknowledge() no matter what.
So here's my question: how can I guarantee delivery? Using a custom MessageListenerContainer? Using a transaction manager?
Use a transacted session and indicate successful message processing by invoking the Session class's commit() method.
Check section 19.4.5, "Processing messages within transactions", for the configuration (you can use a DefaultMessageListenerContainer). Depending on what you're doing with the messages, you may need a JTA transaction manager.
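A minimal configuration sketch of that setup (bean names, queue name and wiring are placeholders):

import javax.jms.ConnectionFactory;
import javax.jms.MessageListener;
import org.springframework.context.annotation.Bean;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

// With sessionTransacted=true the receive and the listener run in one local JMS
// transaction: the message is removed from the queue only when the listener returns
// normally and the session commits, so a crash before commit leaves it on the queue.
@Bean
public DefaultMessageListenerContainer listenerContainer(ConnectionFactory connectionFactory,
                                                         MessageListener myListener) {
    DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
    container.setConnectionFactory(connectionFactory);
    container.setDestinationName("my.queue");
    container.setMessageListener(myListener);
    container.setSessionTransacted(true);   // roll back and redeliver on listener exceptions
    return container;
}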
A Spring message listener in CLIENT_ACKNOWLEDGE mode will acknowledge the message when the client calls message.acknowledge().
However, even if the listener returns without the client acknowledging, Spring assumes the execution was successful and acknowledges the message anyway.
If at any point the consumer hits an exception while processing the message, the Spring listener needs to know that an exception occurred in order to redeliver the message to the queue for another consumer thread to pick up. If you're catching the exception yourself, Spring assumes everything was handled and execution was smooth, and hence acknowledges the message.
The Spring message listener only allows a JMSException to be thrown from the onMessage listener. Catching your custom exception and throwing a JMSException from the listener (after logging the error for future reference) will allow the message to be redelivered, as sketched below.
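One way to follow that advice, shown here only as an illustration (the processing logic is a placeholder), is a SessionAwareMessageListener, whose onMessage is declared to throw JMSException:

import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.Session;
import org.springframework.jms.listener.SessionAwareMessageListener;

public class RedeliveringListener implements SessionAwareMessageListener<Message> {

    @Override
    public void onMessage(Message message, Session session) throws JMSException {
        try {
            process(message);   // application-specific processing (placeholder)
        } catch (RuntimeException e) {
            // log for future reference, then surface a JMSException so the container
            // triggers redelivery instead of acknowledging the message
            JMSException jmsException = new JMSException("Processing failed: " + e.getMessage());
            jmsException.setLinkedException(e);
            throw jmsException;
        }
    }

    private void process(Message message) {
        // business logic; throw (or let a RuntimeException propagate) on failure
    }
}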
Or you can use Session.AUTO_ACKNOWLEDGE with a non-transacted session; see the quote below from this article:
A message is automatically acknowledged when it successfully returns from the receive() method. If the receiver uses the MessageListener interface, the message is automatically acknowledged when it successfully returns from the onMessage() method. If a failure occurs while executing the receive() method or the onMessage() method, the message is automatically redelivered. The JMS provider carefully manages message redelivery and guarantees once-only delivery semantics.
