Consume messages from a RabbitMQ queue using Spring Cloud Stream 3.0+ (Java)

I have a Producer producing messages in a RabbitMQ queue by using a direct exchange.
queue name: TEMP_QUEUE,
exchange name: TEMP_DIRECT_EXCHANGE
Producing to this queue is easy, since my producer application uses Spring AMQP, which I am familiar with.
On my Consumer application, I need to use Spring cloud stream version 3.0+.
I want to avoid using legacy annotations like @EnableBinding and @StreamListener because they are about to be deprecated.
Legacy code for my application would look like this:
@EnableBinding(Bindings.class)
public class TempConsumer {

    @StreamListener(target = "TEMP_QUEUE")
    public void consumeFromTempQueue(MyObject object) {
        // do stuff with the object
    }
}

public interface Bindings {

    @Input("TEMP_QUEUE")
    SubscribableChannel myInputBinding();
}
From the docs I have found out that I can do something like this:
@Bean
public Consumer<MyObject> consumeFromTempQueue() {
    return obj -> {
        // do stuff with the object
    };
}
It is not clear to me how to specify that this bean should consume from TEMP_QUEUE. Also, what if I want to consume from multiple queues?

See Consuming from Existing Queues/Exchanges in the RabbitMQ binder documentation.
You can consume from multiple queues with
spring.cloud.stream.bindings.consumeFromTempQueue-in-0.destination=q1,q2,q3
spring.cloud.stream.bindings.consumeFromTempQueue-in-0.consumer.multiplex=true
Without multiplex you'll get 3 bindings; with multiplex, you'll get 1 listener container listening to multiple queues.
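For the single-queue case in the question, a rough sketch of the configuration following that existing-queue recipe (the rabbit-specific properties below come from the binder docs, but treat this as a starting point and adjust to your setup):
spring.cloud.stream.function.definition=consumeFromTempQueue
# the binding name is derived from the bean name: consumeFromTempQueue-in-0
spring.cloud.stream.bindings.consumeFromTempQueue-in-0.destination=TEMP_DIRECT_EXCHANGE
spring.cloud.stream.bindings.consumeFromTempQueue-in-0.group=TEMP_QUEUE
# use the group as the complete queue name and don't re-declare or re-bind anything
spring.cloud.stream.rabbit.bindings.consumeFromTempQueue-in-0.consumer.queueNameGroupOnly=true
spring.cloud.stream.rabbit.bindings.consumeFromTempQueue-in-0.consumer.declareExchange=false
spring.cloud.stream.rabbit.bindings.consumeFromTempQueue-in-0.consumer.bindQueue=false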

You need to use application.yml to bind your bean.
spring.cloud.stream:
  function.definition: consumeFromTempQueue
You can use this style of configuration for sources, processors, and sinks as well. In your case you are just using a sink (a Consumer).
You can read this post for more information.

Related

How to create a DGS GraphQL Subscription to an ActiveMQ Topic

I have a bit of a complicated technology stack. I am leveraging Netflix DGS to provide a GraphQL service. Behind the scenes are a bunch of JMS components sending and receiving data from various services. I have everything working outside of a GraphQL subscription.
Specifically, what I am trying to do is create a GraphQL subscription for messages from an ActiveMQ topic.
So I have a SubscriptionDataFetcher as follows:
@DgsComponent
public class SurveyResultsSubscriptionDataFetcher {

    private final Publisher<SurveyResult> surveyResultsReactiveSource;

    @Autowired
    public SurveyResultsSubscriptionDataFetcher(Publisher<SurveyResult> surveyResultsReactiveSource) {
        this.surveyResultsReactiveSource = surveyResultsReactiveSource;
    }

    @DgsData(parentType = DgsConstants.SUBSCRIPTION.TYPE_NAME, field = DgsConstants.SUBSCRIPTION.SurveyResultStream)
    public Publisher<SurveyResult> surveyResults() {
        return surveyResultsReactiveSource;
    }
}
Inside my Spring configuration, I am using the following Spring Integration Flow:
@Bean
@Scope(BeanDefinition.SCOPE_PROTOTYPE)
public Publisher<SurveyResult> surveyResultsReactiveSource() {
    SurveyResultMessageConverter converter = new SurveyResultMessageConverter();
    return Flux.from(
            IntegrationFlows.from(Jms.messageDrivenChannelAdapter(connectionFactory()).destination(surveyDestination))
                    .log(LoggingHandler.Level.DEBUG)
                    .log()
                    .toReactivePublisher())
            .map((message) -> converter.fromMessage(message, SurveyResult.class));
}
I will say a few things:
I have a separate @JmsListener that is receiving these messages off the topic.
I do not see more than one consumer, even after a web socket connection is established.
If I hook up a Mongo Reactive Spring Data Repository to this GraphQL subscription, data is received by the client.
When I connect the client to the subscription, I see the following logs:
PublishSubscribeChannel : Channel 'unknown.channel.name' has 1 subscriber(s).
DgsWebSocketHandler : Subscription started for 1
I suspect that the message listener container isn't activated when the web socket connection is established. Am I supposed to "activate" the channel adapter? What am I missing?
Tech Stack:
// spring boot - version 2.4.3
implementation "org.springframework.boot:spring-boot-starter-web"
implementation "org.springframework.boot:spring-boot-starter-activemq"
implementation "org.springframework.boot:spring-boot-starter-data-mongodb-reactive"
implementation "org.springframework.boot:spring-boot-starter-integration"
implementation 'org.springframework.boot:spring-boot-starter-security'
// spring integration
implementation group: 'org.springframework.integration', name: 'spring-integration-jms', version: '5.4.4'
// dgs
implementation "com.netflix.graphql.dgs:graphql-dgs-spring-boot-starter:3.10.2"
implementation 'com.netflix.graphql.dgs:graphql-dgs-subscriptions-websockets-autoconfigure:3.10.2'
Update 1:
For what it's worth, if I update the subscription to the following, I get results on the client side.
@DgsData(parentType = DgsConstants.SUBSCRIPTION.TYPE_NAME, field = DgsConstants.SUBSCRIPTION.SurveyResultStream)
public Publisher<SurveyResult> surveyResults() {
    // repository is a ReactiveMongoRepository
    return repository.findAll();
}
Update 2:
This is the finalized bean, based on the accepted solution, in case it helps someone out. I needed the listener on a topic, not a queue.
@Bean
public Publisher<Message<SurveyResult>> surveyResultsReactiveSource() {
    SurveyResultMessageConverter converter = new SurveyResultMessageConverter();
    return IntegrationFlows.from(Jms.messageDrivenChannelAdapter(
                    Jms.container(connectionFactory(), surveyDestination).pubSubDomain(true))
                    .jmsMessageConverter(converter))
            .toReactivePublisher();
}
Your problem is here:
return Flux.from(
IntegrationFlows.from(
The framework just doesn't see that inner IntegrationFlow instance, so it is never parsed and its beans are never registered properly.
To make it work, you need to declare that IntegrationFlow as a top-level bean.
Something like this:
@Bean
public Publisher<Message<SurveyResult>> surveyResultsReactiveSource() {
    return IntegrationFlows.from(Jms.messageDrivenChannelAdapter(connectionFactory()).destination(surveyDestination))
            .log(LoggingHandler.Level.DEBUG)
            .transform([transform to SurveyResult])
            .toReactivePublisher();
}
Now the framework knows that this logical IntegrationFlow container has to be parsed and all the beans have to be registered and started.
You probably need to rethink your SurveyResultMessageConverter logic and move it into a plain transform() if you can't supply the Jms.messageDrivenChannelAdapter with your converter.
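For illustration only, and assuming SurveyResultMessageConverter has the fromMessage(Message, Class) signature used in the question, the placeholder transform step could be filled in roughly like this (passing Message.class asks the DSL to hand the whole message to the lambda):
.transform(Message.class, message -> converter.fromMessage(message, SurveyResult.class))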
Then in your SurveyResultsSubscriptionDataFetcher you just need to unwrap the payload, wrapping the Publisher in a Flux since Publisher itself has no map operator:
return Flux.from(surveyResultsReactiveSource).map(Message::getPayload);
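Putting it together, a sketch of the adjusted data fetcher, assuming the injected bean is now typed Publisher<Message<SurveyResult>> to match Update 2:
@DgsComponent
public class SurveyResultsSubscriptionDataFetcher {

    private final Publisher<Message<SurveyResult>> surveyResultsReactiveSource;

    @Autowired
    public SurveyResultsSubscriptionDataFetcher(Publisher<Message<SurveyResult>> surveyResultsReactiveSource) {
        this.surveyResultsReactiveSource = surveyResultsReactiveSource;
    }

    @DgsData(parentType = DgsConstants.SUBSCRIPTION.TYPE_NAME, field = DgsConstants.SUBSCRIPTION.SurveyResultStream)
    public Publisher<SurveyResult> surveyResults() {
        // unwrap the Spring Integration Message and expose the plain payload to the GraphQL subscription
        return Flux.from(surveyResultsReactiveSource).map(Message::getPayload);
    }
}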

Connecting to multiple clusters spring kafka

I would like to consume messages from one Kafka cluster and publish them to another Kafka cluster. How can I configure this using spring-kafka?
Simply configure the consumer and producer factories with different bootstrap.servers properties.
If you are using Spring Boot, see
https://docs.spring.io/spring-boot/docs/current/reference/html/appendix-application-properties.html#spring.kafka.consumer.bootstrap-servers
and
https://docs.spring.io/spring-boot/docs/current/reference/html/appendix-application-properties.html#spring.kafka.producer.bootstrap-servers
If you are creating your own factory @Beans, set the properties there.
https://docs.spring.io/spring-kafka/docs/current/reference/html/#connecting
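For example, a rough sketch with plain spring-kafka; the bootstrap addresses, group id, and String serializers are placeholders, so substitute your own types and properties:
@Configuration
public class TwoClustersConfig {

    // consume from cluster A
    @Bean
    public ConsumerFactory<String, String> consumerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "cluster-a:9092"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        return new DefaultKafkaConsumerFactory<>(props);
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        return factory;
    }

    // produce to cluster B
    @Bean
    public ProducerFactory<String, String> producerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "cluster-b:9092"); // placeholder
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        return new DefaultKafkaProducerFactory<>(props);
    }

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }
}
A @KafkaListener wired to this container factory then reads from cluster A, while anything sent through the KafkaTemplate goes to cluster B.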
Alternatively, you can use Spring Cloud Stream Kafka binders.
Create two streams, one for consuming and one for producing.
For the consumer:
public interface InputStreamExample {

    String INPUT = "consumer-in";

    @Input(INPUT)
    MessageChannel readFromKafka();
}
For the producer:
public interface ProducerStreamExample {

    String OUTPUT = "produce-out";

    @Output(OUTPUT)
    MessageChannel produceToKafka();
}
For consuming messages, use:
@StreamListener(InputStreamExample.INPUT)
public void processMessage(String message) {
    /*
     * code goes here
     */
}
For producing:
// here producerStreamExample is an instance of ProducerStreamExample
producerStreamExample.produceToKafka().send(/* message goes here */);
Now configure the consumer and producer clusters using binders; consumer-in is bound to the consumer cluster and produce-out to the producing cluster.
Properties file:
# cluster you consume from
spring.cloud.stream.binders.kafka-a.environment.spring.cloud.stream.kafka.binder.brokers=<consumer cluster>
# other properties for this binder

# bind kafka-a to consumer-in
spring.cloud.stream.bindings.consumer-in.binder=kafka-a
# similarly the other properties of consumer-in, like
spring.cloud.stream.bindings.consumer-in.destination=<topic>
spring.cloud.stream.bindings.consumer-in.group=<consumer group>

# now configure the cluster to produce to
spring.cloud.stream.binders.kafka-b.environment.spring.cloud.stream.kafka.binder.brokers=<cluster where to produce>
spring.cloud.stream.bindings.produce-out.binder=kafka-b
# similarly you can do other configuration, like the topic
spring.cloud.stream.bindings.produce-out.destination=<topic>
Refer to this for more configuration: https://cloud.spring.io/spring-cloud-stream-binder-kafka/spring-cloud-stream-binder-kafka.html

Publishing different types of events to different queues

I'm trying to create a simple microservices project to learn working with the Axon Framework.
I've set up messaging through RabbitMQ with the following code:
@Bean
public Exchange exchange() {
    return ExchangeBuilder.fanoutExchange("Exchange").build();
}

@Bean
public Queue queue() {
    return QueueBuilder.durable("QueueA").build();
}

@Bean
public Binding binding() {
    return BindingBuilder.bind(queue()).to(exchange()).with("*").noargs();
}

@Autowired
public void configure(AmqpAdmin admin) {
    admin.declareExchange(exchange());
    admin.declareQueue(queue());
    admin.declareBinding(binding());
}
And the following in my application.properties:
axon.amqp.exchange=Exchange
With this configuration all events published through the Axon Framework will be sent to QueueA. But now I want to make all EventA events go to QueueA and all EventB events go to QueueB. How can I do that?
By default, Axon Framework uses the package name of the event as the AMQP Routing Key. This means you can bind queues to topic exchanges using patterns to match against these routing keys.
See https://www.rabbitmq.com/tutorials/tutorial-five-java.html for more information.
You can customize Axon's behavior, by providing a custom RoutingKeyResolver (a simple Function that returns a String for a given EventMessage). This is then configured in the AMQPMessageConverter, which is responsible for creating an AMQP Message based on an Axon EventMessage (and vice versa). You can use the DefaultAMQPMessageConverter if you're fine with the default AMQP Message format.
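As a sketch, and assuming EventA and EventB live in different packages (otherwise a custom RoutingKeyResolver is the way to go), you could switch to a topic exchange and bind each queue with a routing-key pattern; the package names below are placeholders:
@Bean
public Exchange exchange() {
    // a topic exchange honours routing-key patterns
    return ExchangeBuilder.topicExchange("Exchange").build();
}

@Bean
public Queue queueA() {
    return QueueBuilder.durable("QueueA").build();
}

@Bean
public Queue queueB() {
    return QueueBuilder.durable("QueueB").build();
}

@Bean
public Binding bindingA() {
    // Axon's default routing key is the event's package name
    return BindingBuilder.bind(queueA()).to(exchange()).with("com.example.events.a.#").noargs();
}

@Bean
public Binding bindingB() {
    return BindingBuilder.bind(queueB()).to(exchange()).with("com.example.events.b.#").noargs();
}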
However, you are using a fanout exchange, so it will deliver events to every bound queue. You can just create another queue, bind it to the same exchange, and handle the events on the query side.

Spring Cloud Stream with RabbitMQ binder, how to apply @Transactional?

I have a Spring Cloud Stream application that receives events from RabbitMQ using the Rabbit Binder. My application can be summarized as this:
@Transactional
@StreamListener(MySink.SINK_NAME)
public void processEvents(Flux<Event> events) {
    // Transform events and store them in MongoDB using
    // spring-boot-data-mongodb-reactive
    ...
}
The problem is that @Transactional doesn't seem to work with Spring Cloud Stream (or at least that's my impression): if there's an exception when writing to MongoDB, the event appears to have already been acked to RabbitMQ and the operation is not retried.
Given that I want to achieve basically the same functionality as when using @Transactional around a function with spring-amqp:
Do I have to manually ACK the messages to RabbitMQ when using Spring Cloud Stream with the Rabbit binder?
If so, how can I achieve this?
There are several issues here.
Transactions are not required for acknowledging messages.
Reactor-based @StreamListener methods are invoked exactly once, just to set up the Flux, so @Transactional on that method is meaningless; messages then flow through the flux, so anything pertaining to individual messages has to be done within the context of the flux.
Spring transactions are bound to the thread; Reactor is non-blocking, so the message will be acked at the first handoff.
Yes, you would need to use manual acks, presumably on the result of the MongoDB store operation. You would probably need to use Flux<Message<Event>> so you have access to the channel and delivery tag headers.
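A rough sketch of how that could look, assuming the Rabbit binder's acknowledge mode is set to MANUAL for this binding and a hypothetical saveToMongo helper that returns a Mono:
@StreamListener(MySink.SINK_NAME)
public void processEvents(Flux<Message<Event>> events) {
    // nothing is returned to the framework, so subscribe to the flux here
    events.concatMap(message ->
            saveToMongo(message.getPayload())                  // hypothetical reactive store operation
                    .doOnSuccess(result -> ack(message, true))
                    .doOnError(error -> ack(message, false))
                    .onErrorResume(error -> Mono.empty()))
          .subscribe();
}

private void ack(Message<Event> message, boolean success) {
    Channel channel = message.getHeaders().get(AmqpHeaders.CHANNEL, Channel.class);
    Long deliveryTag = message.getHeaders().get(AmqpHeaders.DELIVERY_TAG, Long.class);
    try {
        if (success) {
            channel.basicAck(deliveryTag, false);
        } else {
            // requeue so the broker redelivers the message
            channel.basicNack(deliveryTag, false, true);
        }
    } catch (IOException e) {
        throw new UncheckedIOException(e);
    }
}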

Message Driven Bean Selectors (JMS)

I have recently discovered message selectors:
@ActivationConfigProperty(
        propertyName = "messageSelector",
        propertyValue = "Fragile IS TRUE")
My Question is: How can I make the selector dynamic at runtime?
Let's say a consumer decided they wanted only messages with the property "Fragile IS FALSE".
Could the consumer change the selector somehow without redeploying the MDB?
Note: I am using Glassfish v2.1
To my knowledge, this is not possible. There may be implementations that will allow it via some custom server hooks, but it would be implementation dependent. For one, it requires a change to the deployment descriptor, which is not read after the EAR is deployed.
JMS (Jakarta Messaging) is designed to provide simple means to do simple things, and more complicated means for the more complicated but less frequently needed things. Message-driven beans are an example of the first case. To do dynamic reconfiguration, you need to stop using MDBs and consume messages using the programmatic API, with an injected JMSContext and a topic or queue. For example:
@Inject
private JMSContext context;

@Resource(lookup = "jms/queue/thumbnail")
Queue thumbnailQueue;

JMSConsumer connectListener(String messageSelector) {
    JMSConsumer consumer = context.createConsumer(thumbnailQueue, messageSelector);
    consumer.setMessageListener(message -> {
        // process message
    });
    return consumer;
}
You can call connectListener during startup, e.g. in a CDI bean:
public void start(@Observes @Initialized(ApplicationScoped.class) Object startEvent) {
    connectListener("Fragile IS TRUE");
}
Then you can easily reconfigure it by closing the returned consumer and creating it again with a new selector string:
consumer.close();
consumer = connectListener("Fragile IS FALSE");
