Spring Cloud Stream with RabbitMQ binder, how to apply @Transactional? - java

I have a Spring Cloud Stream application that receives events from RabbitMQ using the Rabbit binder. My application can be summarized as follows:

@Transactional
@StreamListener(MySink.SINK_NAME)
public void processEvents(Flux<Event> events) {
    // Transform events and store them in MongoDB using
    // spring-boot-data-mongodb-reactive
    ...
}
The problem is that @Transactional doesn't seem to work with Spring Cloud Stream (or at least that's my impression), since if there's an exception when writing to MongoDB the event seems to have already been acked to RabbitMQ and the operation is not retried.
Given that I want to achieve essentially the same behavior as wrapping a method in @Transactional with Spring AMQP:
Do I have to manually ack the messages to RabbitMQ when using Spring Cloud Stream with the Rabbit binder?
If so, how can I achieve this?

There are several issues here.
Transactions are not required for acknowledging messages.
Reactor-based @StreamListener methods are invoked exactly once, just to set up the Flux, so @Transactional on that method is meaningless; messages then flow through the flux, so anything pertaining to individual messages has to be done within the context of the flux.
Spring transactions are bound to the thread; Reactor is non-blocking, so the message will be acked at the first thread handoff.
Yes, you would need to use manual acks, presumably based on the result of the MongoDB store operation. You would probably need to use Flux<Message<Event>> so you have access to the channel and delivery-tag headers.
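A minimal sketch of what that can look like, assuming the binding is switched to manual acknowledgment (e.g. spring.cloud.stream.rabbit.bindings.<binding>.consumer.acknowledgeMode=MANUAL); eventRepository and transform are illustrative placeholders, only the AMQP headers are real API:

import java.io.IOException;

import com.rabbitmq.client.Channel;
import org.springframework.amqp.support.AmqpHeaders;
import org.springframework.messaging.Message;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

@StreamListener(MySink.SINK_NAME)
public void processEvents(Flux<Message<Event>> events) {
    events.concatMap(msg -> {
        Channel channel = msg.getHeaders().get(AmqpHeaders.CHANNEL, Channel.class);
        Long deliveryTag = msg.getHeaders().get(AmqpHeaders.DELIVERY_TAG, Long.class);
        return eventRepository.save(transform(msg.getPayload()))       // reactive Mongo write
                .doOnSuccess(saved -> basicAck(channel, deliveryTag))  // ack only after the write succeeds
                .onErrorResume(ex -> {
                    basicNack(channel, deliveryTag);                   // requeue on failure
                    return Mono.empty();
                });
    }).subscribe();
}

private void basicAck(Channel channel, long deliveryTag) {
    try {
        channel.basicAck(deliveryTag, false);
    } catch (IOException e) {
        throw new IllegalStateException(e);
    }
}

private void basicNack(Channel channel, long deliveryTag) {
    try {
        channel.basicNack(deliveryTag, false, true);
    } catch (IOException e) {
        throw new IllegalStateException(e);
    }
}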

Related

Auto create KafkaListeners

I work with Apache Kafka and WebFlux (Spring Boot) and I want to know if there is a way to auto-create a KafkaListener for each topic I add in application.yml (or properties).
That is not what a consumer is for. A Kafka topic is a stream of constantly changing data. What is the business purpose of that HTTP request? Maybe you want to stream such a topic into a Flux? Then consider using Spring Integration dynamic flows and their toReactivePublisher() feature:
https://docs.spring.io/spring-integration/docs/current/reference/html/dsl.html#java-dsl-runtime-flows
https://docs.spring.io/spring-integration/docs/current/reference/html/reactive-streams.html#java-dsl
This sample shows something about Kafka and dynamic flows: https://github.com/spring-projects/spring-integration-samples/tree/main/dsl/kafka-dsl.
This one also demonstrates a "to WebFlux" technique: https://github.com/artembilan/sandbox/tree/master/amqp-to-webflux.
Or you can look into Reactor Kafka: https://projectreactor.io/docs/kafka/release/reference/.
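As a rough illustration of the dynamic-flow idea (not from the answer; it assumes spring-integration-kafka is on the classpath, and the topic names, ConsumerFactory, and handler body are placeholders you would wire from your own configuration), registering one listener flow per configured topic could look roughly like this:

import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;
import org.springframework.integration.dsl.context.IntegrationFlowContext;
import org.springframework.integration.kafka.dsl.Kafka;
import org.springframework.kafka.core.ConsumerFactory;

public class DynamicKafkaListeners {

    private final IntegrationFlowContext flowContext;
    private final ConsumerFactory<String, String> consumerFactory;

    public DynamicKafkaListeners(IntegrationFlowContext flowContext,
                                 ConsumerFactory<String, String> consumerFactory) {
        this.flowContext = flowContext;
        this.consumerFactory = consumerFactory;
    }

    // Called once per topic listed in your own configuration property.
    public void register(String topic) {
        IntegrationFlow flow = IntegrationFlows
                .from(Kafka.messageDrivenChannelAdapter(consumerFactory, topic))
                .handle(message -> System.out.println(topic + ": " + message.getPayload()))
                .get();
        flowContext.registration(flow).id(topic + ".listener").register();
    }
}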

Spring cloud kafka binder query

We have a requirement where we consume messages from one topic, do some enrichment, and then publish the message to another topic. Below are the steps:
Consumer - consume the message
Enrichment - enrich the consumed message
Producer - publish the enriched message to the other topic
I am using the Spring Cloud Stream Kafka binder and things were working fine. Suddenly we observed that the producer was sending duplicate messages to the topic, so we made the producer idempotent. We have autoCommitOffset set to false for better control. Below is what we are doing in the method:
#StreamListener("INPUT")
#SendTo("OUTPUT")
public void consumer(Message message){
String inputMessage = message.getPayload.toString();
String enrichMessage = // Enrichment on inputMessage
return enrichMessage;
}
We observed that if ack.acknowledge() fails due to some issue, the message is still sent to the outbound channel. How can we handle the entire consume/produce as one transaction, so that if the acknowledgment fails the message is not sent to the topic?
I have also set the transaction properties below:
spring.cloud.stream.kafka.binder.transaction.transactionIdPrefix=TX-
spring.cloud.stream.kafka.binder.transaction.producer.configuration.ack=all
spring.cloud.stream.kafka.binder.transaction.producer.configuration.retries=1
spring.cloud.stream.kafka.bindings.input.consumer.autoCommitOffset=true
spring.cloud.stream.kafka.bindings.input.consumer.enableDlq=true
spring.cloud.stream.kafka.bindings.input.consumer.dlqName=error.topic
spring.cloud.stream.kafka.bindings.input.consumer.autoCommitOnError=true
If there is any example available that would be really helpful.
Cheers
You need to make the binder transactional. See the documentation
https://docs.spring.io/spring-cloud-stream-binder-kafka/docs/3.1.4/reference/html/spring-cloud-stream-binder-kafka.html#_kafka_binder_properties
spring.cloud.stream.kafka.binder.transaction.transactionIdPrefix
Enables transactions in the binder. See transaction.id in the Kafka documentation and Transactions in the spring-kafka documentation. When transactions are enabled, individual producer properties are ignored and all producers use the spring.cloud.stream.kafka.binder.transaction.producer.* properties.
Default null (no transactions)
Note that consumers on the output topic must be configured with isolation.level=read_committed to avoid receiving rolled-back records.
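As a hedged sketch of the relevant configuration (property names follow the binder documentation quoted above; isolation.level is the standard Kafka consumer setting, shown here at the binder level so it applies to clients created by this binder):

# enables the transactional binder; producer bindings then use the
# spring.cloud.stream.kafka.binder.transaction.producer.* properties
spring.cloud.stream.kafka.binder.transaction.transactionIdPrefix=TX-
spring.cloud.stream.kafka.binder.transaction.producer.configuration.acks=all
# consumers of the output topic must not see rolled-back records
spring.cloud.stream.kafka.binder.configuration.isolation.level=read_committed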

Consume messages from RabbitMQ queue using spring cloud stream 3.0+

I have a producer publishing messages to a RabbitMQ queue using a direct exchange.
queue name: TEMP_QUEUE,
exchange name: TEMP_DIRECT_EXCHANGE
Producing to this queue is easy, since on my producer application I use Spring AMQP, which I am familiar with.
On my consumer application, I need to use Spring Cloud Stream version 3.0+.
I want to avoid using legacy annotations like @EnableBinding and @StreamListener because they are about to be deprecated.
Legacy code for my application would look like this:
@EnableBinding(Bindings.class)
public class TempConsumer {

    @StreamListener(target = "TEMP_QUEUE")
    public void consumeFromTempQueue(MyObject object) {
        // do stuff with the object
    }
}

public interface Bindings {

    @Input("TEMP_QUEUE")
    SubscribableChannel myInputBinding();
}
From their docs I have found out that I can do something like this:
@Bean
public Consumer<MyObject> consumeFromTempQueue() {
    return obj -> {
        // do stuff with the object
    };
}
It is not clear to me how I specify that this bean will consume from TEMP_QUEUE. Also, what if I want to consume from multiple queues?
See Consuming from Existing Queues/Exchanges.
You can consume from multiple queues with
spring.cloud.stream.bindings.consumeFromTempQueue-in-0.destination=q1,q2,q3
spring.cloud.stream.rabbit.bindings.consumeFromTempQueue-in-0.consumer.multiplex=true
Without multiplex you'll get 3 bindings; with multiplex, you'll get 1 listener container listening to multiple queues.
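As an illustrative sketch of binding the functional consumer to the existing TEMP_QUEUE (property names follow the Rabbit binder's "Consuming from Existing Queues/Exchanges" recipe; adjust to your broker layout):

spring.cloud.function.definition=consumeFromTempQueue
spring.cloud.stream.bindings.consumeFromTempQueue-in-0.destination=TEMP_DIRECT_EXCHANGE
spring.cloud.stream.bindings.consumeFromTempQueue-in-0.group=TEMP_QUEUE
# use the group as the full queue name and don't re-declare broker objects
spring.cloud.stream.rabbit.bindings.consumeFromTempQueue-in-0.consumer.queueNameGroupOnly=true
spring.cloud.stream.rabbit.bindings.consumeFromTempQueue-in-0.consumer.bindQueue=false
spring.cloud.stream.rabbit.bindings.consumeFromTempQueue-in-0.consumer.declareExchange=false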
You need to use application.yml to bind your bean.
spring.cloud.stream:
  function.definition: consumeFromTempQueue
You can use this configuration to configure a source, processor, or sink as well. In your case you are just using a sink (a Consumer).
You can read this post for more information.

Camel consume single message and stop, transacted

I am trying to use Camel to consume a single message from a JMS queue in a transacted manner. Specifically in a flow like this:
Wait until message is published on JMS queue
Try to consume and process the single message
If processing fails (exception occurs), rollback the consumption
If the processing passes, acknowledge and stop consuming any more messages
Later in the application lifecycle, another process triggers consumption to start again from (1)
At first I tried to do this using a polling consumer with the ConsumerTemplate, but I can't figure out if it's possible to do this transactionally - it seems like the transaction is internal to the ConsumerTemplate, so regardless of what I do the message is already acknowledged as consumed by the time the ConsumerTemplate returns.
Can I do this using the ConsumerTemplate? Can I do this using Camel and if so what is the best approach (Simple examples would be appreciated)?
I ended up using the pollEnrich DSL to achieve this. For example, my route builder looks like:
from("direct:service-endpoint").transacted("PROPAGATION_REQUIRED").setExchangePattern(ExchangePattern.InOut).pollEnrich("activemq:test-queue").bean(myHandler);
I use the direct endpoint as a service; sending a "request" message to the direct endpoint polls the JMS queue for a single message (blocking if required). The transaction that was started extends to the pollEnrich, so if, for example, the myHandler bean fails, the message taken during the pollEnrich is not consumed and is left on the queue.
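Spelled out as a full RouteBuilder, that one-liner might look roughly like this (a sketch; it assumes a Spring TransactionPolicy bean named PROPAGATION_REQUIRED is defined, and MyHandler is a hypothetical handler class):

import org.apache.camel.ExchangePattern;
import org.apache.camel.builder.RouteBuilder;

public class SingleMessageRoute extends RouteBuilder {

    private final MyHandler myHandler;   // hypothetical handler bean

    public SingleMessageRoute(MyHandler myHandler) {
        this.myHandler = myHandler;
    }

    @Override
    public void configure() {
        from("direct:service-endpoint")
            .transacted("PROPAGATION_REQUIRED")         // reference to the Spring transaction policy bean
            .setExchangePattern(ExchangePattern.InOut)
            .pollEnrich("activemq:test-queue")          // blocks until a single message is available
            .bean(myHandler);                           // an exception here rolls back the JMS consume
    }
}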

How to clean HornetQ messaging journal before/after performing a test?

There's an Arquillian integration test using JMS HornetQ with persisted messages. Some tests leave the messaging journal filled with unhandled messages that break other tests expecting no data.
Is there a way of telling JMS to clean its messaging journal before or after executing a test?
This does not exist in the JMS API itself, but there's a removeMessages(filter) method on the HornetQ QueueControl management object. This method can be found in the JMX bean for the queue, but I wouldn't know how to get at that from Arquillian.
Luckily, you can invoke management operations via the hornetq.management queue. See http://docs.jboss.org/hornetq/2.2.5.Final/user-manual/en/html/management.html. In practice, the following should work:
Queue managementQueue = HornetQJMSClient.createQueue("hornetq.management");
QueueRequestor requestor = new QueueRequestor(session, managementQueue);
Message m = session.createMessage();
JMSManagementHelper.putOperationInvocation(m,
        "jms.queue.exampleQueue",
        "removeMessages", "*");
Message reply = requestor.request(m);
boolean success = JMSManagementHelper.hasOperationSucceeded(reply);
If you're restarting the server, you could remove the paging and data folders (while keeping the bindings).
