How to create a DGS GraphQL Subscription to an ActiveMQ Topic - java

I have a bit of a complicated technology stack. I am leveraging Netflix DGS to provide a GraphQL service. Behind the scenes are a bunch of JMS components sending and receiving data from various services. I have everything working outside of a GraphQL subscription.
Specifically, what I am trying to do is create a GraphQL subscription for messages from an ActiveMQ topic.
So I have a SubscriptionDataFetcher as follows:
@DgsComponent
public class SurveyResultsSubscriptionDataFetcher {

    private final Publisher<SurveyResult> surveyResultsReactiveSource;

    @Autowired
    public SurveyResultsSubscriptionDataFetcher(Publisher<SurveyResult> surveyResultsReactiveSource) {
        this.surveyResultsReactiveSource = surveyResultsReactiveSource;
    }

    @DgsData(parentType = DgsConstants.SUBSCRIPTION.TYPE_NAME, field = DgsConstants.SUBSCRIPTION.SurveyResultStream)
    public Publisher<SurveyResult> surveyResults() {
        return surveyResultsReactiveSource;
    }
}
Inside my Spring configuration, I am using the following Spring Integration Flow:
@Bean
@Scope(BeanDefinition.SCOPE_PROTOTYPE)
public Publisher<SurveyResult> surveyResultsReactiveSource() {
    SurveyResultMessageConverter converter = new SurveyResultMessageConverter();
    return Flux.from(
            IntegrationFlows.from(Jms.messageDrivenChannelAdapter(connectionFactory()).destination(surveyDestination))
                    .log(LoggingHandler.Level.DEBUG)
                    .log()
                    .toReactivePublisher())
            .map((message) -> converter.fromMessage(message, SurveyResult.class));
}
I will say a few things:
I have a separate @JmsListener that is receiving these messages off the topic
I do not see more than one consumer, even after a web socket connection is established.
If I hook up a Mongo Reactive Spring Data Repository to this GraphQL subscription, data is received by the client.
When I connect the client to the subscription, I see the following logs:
PublishSubscribeChannel : Channel 'unknown.channel.name' has 1 subscriber(s).
DgsWebSocketHandler : Subscription started for 1
I suspect that the message listener container isn't activated when the web socket connection is established. Am I supposed to "activate" the channel adapter? What am I missing?
Tech Stack:
// spring boot - version 2.4.3
implementation "org.springframework.boot:spring-boot-starter-web"
implementation "org.springframework.boot:spring-boot-starter-activemq"
implementation "org.springframework.boot:spring-boot-starter-data-mongodb-reactive"
implementation "org.springframework.boot:spring-boot-starter-integration"
implementation 'org.springframework.boot:spring-boot-starter-security'
// spring integration
implementation group: 'org.springframework.integration', name: 'spring-integration-jms', version: '5.4.4'
// dgs
implementation "com.netflix.graphql.dgs:graphql-dgs-spring-boot-starter:3.10.2"
implementation 'com.netflix.graphql.dgs:graphql-dgs-subscriptions-websockets-autoconfigure:3.10.2'
Update 1:
For what it's worth, if I update the subscription to the following, I get results on the client side.
@DgsData(parentType = DgsConstants.SUBSCRIPTION.TYPE_NAME, field = DgsConstants.SUBSCRIPTION.SurveyResultStream)
public Publisher<SurveyResult> surveyResults() {
    // repository is a ReactiveMongoRepository
    return repository.findAll();
}
Update 2:
This is the finalized bean, based on the accepted solution, in case it helps someone out. I needed the listener on a topic, not a queue.
@Bean
public Publisher<Message<SurveyResult>> surveyResultsReactiveSource() {
    SurveyResultMessageConverter converter = new SurveyResultMessageConverter();
    return IntegrationFlows.from(Jms.messageDrivenChannelAdapter(
                    Jms.container(connectionFactory(), surveyDestination).pubSubDomain(true))
                    .jmsMessageConverter(converter))
            .toReactivePublisher();
}

Your problem is here:
return Flux.from(
IntegrationFlows.from(
The framework just doesn't see that inner IntegrationFlow instance, so it is never parsed and its beans are never registered.
To make it work, you need to declare that IntegrationFlow as a top-level bean.
Something like this:
@Bean
public Publisher<Message<SurveyResult>> surveyResultsReactiveSource() {
    return IntegrationFlows.from(Jms.messageDrivenChannelAdapter(connectionFactory()).destination(surveyDestination))
            .log(LoggingHandler.Level.DEBUG)
            .transform([transform to SurveyResult])
            .toReactivePublisher();
}
Now the framework knows that this logical IntegrationFlow container has to be parsed and all the beans have to be registered and started.
You probably need to rethink your SurveyResultMessageConverter logic to a plain transform() if you can't supply a Jms.messageDrivenChannelAdapter with your converter.
Then in your SurveyResultsSubscriptionDataFetcher you just need:
return Flux.from(surveyResultsReactiveSource).map(Message::getPayload);
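Putting it together, the data fetcher might look roughly like this (a sketch based on the bean above; the org.springframework.messaging.Message type and the reactor Flux import are assumed):
@DgsComponent
public class SurveyResultsSubscriptionDataFetcher {

    private final Publisher<Message<SurveyResult>> surveyResultsReactiveSource;

    @Autowired
    public SurveyResultsSubscriptionDataFetcher(Publisher<Message<SurveyResult>> surveyResultsReactiveSource) {
        this.surveyResultsReactiveSource = surveyResultsReactiveSource;
    }

    @DgsData(parentType = DgsConstants.SUBSCRIPTION.TYPE_NAME, field = DgsConstants.SUBSCRIPTION.SurveyResultStream)
    public Publisher<SurveyResult> surveyResults() {
        // unwrap the Spring Integration Message to its SurveyResult payload
        return Flux.from(surveyResultsReactiveSource).map(Message::getPayload);
    }
}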

Related

Consume messages from RabbitMQ queue using spring cloud stream 3.0+

I have a Producer producing messages in a RabbitMQ queue by using a direct exchange.
queue name: TEMP_QUEUE,
exchange name: TEMP_DIRECT_EXCHANGE
Producing to this queue is easy, since on my producer application I use Spring AMQP, which I am familiar with.
On my Consumer application, I need to use Spring cloud stream version 3.0+.
I want to avoid using legacy annotations like @EnableBinding and @StreamListener because they are about to be deprecated.
Legacy code for my application would look like this:
@EnableBinding(Bindings.class)
public class TempConsumer {

    @StreamListener(target = "TEMP_QUEUE")
    public void consumeFromTempQueue(MyObject object) {
        // do stuff with the object
    }
}

public interface Bindings {

    @Input("TEMP_QUEUE")
    SubscribableChannel myInputBinding();
}
From their docs I have found out that I can do something like this:
@Bean
public Consumer<MyObject> consumeFromTempQueue() {
    return obj -> {
        // do stuff with the object
    };
}
It is not clear to me how I specify that this bean will consume from TEMP_QUEUE. Also, what if I want to consume from multiple queues?
See Consuming from Existing Queues/Exchanges.
You can consume from multiple queues with
spring.cloud.stream.bindings.consumeFromTempQueue-in-0.destination=q1,q2,q3
spring.cloud.stream.bindings.consumeFromTempQueue-in-0.consumer.multiplex=true
Without multiplex you'll get 3 bindings; with multiplex, you'll get 1 listener container listening to multiple queues.
You need to use the application.yml to bind your bean.
spring.cloud.stream:
  function.definition: consumeFromTempQueue
You can use this configuration to configure sources, processors and sinks as well. In your case you are just using a consumer (sink).
You can read this post for more information.
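Putting the two answers together, a minimal sketch of the configuration might look like this (the binding name follows the <functionName>-in-0 convention; queueNameGroupOnly is a Rabbit-binder consumer property which, if I remember correctly, makes the binder use the existing queue name rather than declaring <destination>.<group>):
spring.cloud.stream.function.definition=consumeFromTempQueue
spring.cloud.stream.bindings.consumeFromTempQueue-in-0.destination=TEMP_DIRECT_EXCHANGE
spring.cloud.stream.bindings.consumeFromTempQueue-in-0.group=TEMP_QUEUE
# consume from the existing TEMP_QUEUE as-is instead of <destination>.<group>
spring.cloud.stream.rabbit.bindings.consumeFromTempQueue-in-0.consumer.queueNameGroupOnly=true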

Spring Cloud Stream doesn't use Kafka channel binder to send a message

I'm trying to achieve the following:
Use Spring Cloud Stream 2.1.3.RELEASE with the Kafka binder to send a message to an input channel and achieve publish-subscribe behaviour, where every consumer is notified and able to handle a message sent to a Kafka topic.
I understand that in Kafka, if every consumer belongs to its own consumer group, it will be able to read every message from a topic.
In my case Spring creates an anonymous, unique consumer group for every instance of my Spring Boot application running. The Spring Boot application has only one stream listener configured to listen to the input channel.
Test example case:
Configured an example Spring Cloud Stream app with an input channel which is bound to a Kafka topic.
Using a Spring REST controller to send a message to the input channel, expecting that the message will be delivered to every running Spring Boot application instance.
On startup I can see in both applications that the Kafka partition is assigned properly.
Problem:
However, when I send a message using output().send(), Spring doesn't even send the message to the configured Kafka topic; instead, in the same thread, it triggers the @StreamListener method of the same application instance.
During debugging I see that the Spring code has two handlers for the message: StreamListenerMessageHandler and KafkaProducerMessageHandler.
Spring simply chains them, and if the first handler completes successfully, it doesn't even go further. StreamListenerMessageHandler simply invokes my @StreamListener method in the same thread and the message never reaches Kafka.
Question:
Is this by design, and if so, why? How can I achieve the behaviour mentioned at the beginning of the post?
PS.
If I use KafkaTemplate and a @KafkaListener method, it works as I want: the message is sent to the Kafka topic and both application instances receive it and handle it in the Kafka listener annotated method.
Code:
The stream listener method is configured the following way:
@SpringBootApplication
@EnableBinding(Processor.class)
@EnableTransactionManagement
public class ProcessorApplication {

    private Logger logger = LoggerFactory.getLogger(this.getClass().getName());

    private PersonRepository repository;

    public ProcessorApplication(PersonRepository repository) {
        this.repository = repository;
    }

    public static void main(String[] args) {
        SpringApplication.run(ProcessorApplication.class, args);
    }

    @Transactional
    @StreamListener(Processor.INPUT)
    @SendTo(Processor.OUTPUT)
    public PersonEvent process(PersonEvent data) {
        logger.info("Received event={}", data);
        Person person = new Person();
        person.setName(data.getName());
        logger.info("Saving person={}", person);
        Person savedPerson = repository.save(person);
        PersonEvent event = new PersonEvent();
        event.setName(savedPerson.getName());
        event.setType("PersonSaved");
        logger.info("Sent event={}", event);
        return event;
    }
}
Sending a message to the input channel:
@RestController
@RequestMapping("/persons")
public class PersonController {

    @Autowired
    private Sink sink;

    @PostMapping("/events")
    public void createPerson(@RequestBody PersonEvent event) {
        sink.input().send(MessageBuilder.withPayload(event).build());
    }
}
Spring Cloud Stream config:
spring:
  cloud.stream:
    bindings:
      output.destination: person-event-output
      input.destination: person-event-input
sink.input().send
You are bypassing the binder altogether and sending it directly to the stream listener.
You need to send the message to Kafka (to the person-event-input topic) and then each stream listener will receive the message from Kafka.
You need to configure another output binding and send it there, not directly to the input channel.
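A sketch of what that separate output binding could look like with the annotation model used in the question (the binding interface, channel name and property below are my own illustration, not from the answer):
// an extra output binding, pointed at the listeners' topic via
// spring.cloud.stream.bindings.personEventOutput.destination=person-event-input
public interface PersonEventSource {

    @Output("personEventOutput")
    MessageChannel personEventOutput();
}

// register it next to the existing Processor binding
@EnableBinding({Processor.class, PersonEventSource.class})

// and send through the binder instead of directly into the input channel
@RestController
@RequestMapping("/persons")
public class PersonController {

    @Autowired
    private PersonEventSource source;

    @PostMapping("/events")
    public void createPerson(@RequestBody PersonEvent event) {
        // the message now goes to Kafka; each instance, in its own anonymous group, receives it
        source.personEventOutput().send(MessageBuilder.withPayload(event).build());
    }
}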

How to implement the JMS inbound & outbound configuration in separate Spring Integration applications?

My use case is implementing a notification service in our project.
We have used Spring with JMS and it's working fine with REST services. We are able to send a message to a queue from one application and receive it from the queue in another application using JmsTemplate's convertAndSend() and convertAndReceive() functions.
Now I want to do this using Spring Integration's Java DSL approach. We built a sample using Jms.inboundGateway() and Jms.outboundGateway() in a single application with ActiveMQ.
How do I use Spring Integration's Java DSL in separate applications for only sending/receiving messages?
Here is my code,
@Bean
public IntegrationFlow jmsInboundFlow() {
    return IntegrationFlows
            .from(Jms.inboundGateway(this.connectionFactory)
                    .destination("pan.outbound"))
            .transform((String s) -> s.toUpperCase())
            .channel("in-bound-request-channel")
            .get();
}

@Bean
public IntegrationFlow jmsOutboundGatewayFlow() {
    return IntegrationFlows.from("out-bound-request-channel")
            .handle(Jms.outboundGateway(this.connectionFactory)
                    .requestDestination("pan.outbound"))
            .log()
            .bridge()
            .get();
}

@MessagingGateway
interface EchoGateway {

    @Gateway(requestChannel = "out-bound-request-channel")
    String send(String message);
}
You only have to split the configuration: the JMS inbound part in one application and the JMS outbound part in the second, as sketched below.
The approach is exactly the same as in a single application. (Or maybe I missed some information?)
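As a rough sketch of that split, reusing the flows from the question (destination and channel names unchanged):
// sending application: only the outbound gateway and the messaging gateway
@Bean
public IntegrationFlow jmsOutboundGatewayFlow() {
    return IntegrationFlows.from("out-bound-request-channel")
            .handle(Jms.outboundGateway(this.connectionFactory)
                    .requestDestination("pan.outbound"))
            .get();
}

@MessagingGateway
interface EchoGateway {

    @Gateway(requestChannel = "out-bound-request-channel")
    String send(String message);
}

// receiving application: only the inbound gateway; because the flow ends with the
// transformer, its result is sent back over JMS as the reply to the sending application
@Bean
public IntegrationFlow jmsInboundFlow() {
    return IntegrationFlows
            .from(Jms.inboundGateway(this.connectionFactory)
                    .destination("pan.outbound"))
            .transform((String s) -> s.toUpperCase())
            .get();
}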

Publishing different types of events to different queues

I'm trying to create a simple microservices project to learn how to work with the Axon Framework.
I've set up messaging through RabbitMQ with the following code:
@Bean
public Exchange exchange() {
    return ExchangeBuilder.fanoutExchange("Exchange").build();
}

@Bean
public Queue queue() {
    return QueueBuilder.durable("QueueA").build();
}

@Bean
public Binding binding() {
    return BindingBuilder.bind(queue()).to(exchange()).with("*").noargs();
}

@Autowired
public void configure(AmqpAdmin admin) {
    admin.declareExchange(exchange());
    admin.declareQueue(queue());
    admin.declareBinding(binding());
}
And the following in my application.properties:
axon.amqp.exchange=Exchange
With this configuration all events published through the Axon Framework will be sent to QueueA. But now I want to make all EventA events go to QueueA and all EventB events go to QueueB. How can I do that?
By default, Axon Framework uses the package name of the event as the AMQP Routing Key. This means you can bind queues to topic exchanges using patterns to match against these routing keys.
See https://www.rabbitmq.com/tutorials/tutorial-five-java.html for more information.
You can customize Axon's behavior by providing a custom RoutingKeyResolver (a simple function that returns a String for a given EventMessage). This is then configured in the AMQPMessageConverter, which is responsible for creating an AMQP message based on an Axon EventMessage (and vice versa). You can use the DefaultAMQPMessageConverter if you're fine with the default AMQP message format.
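For illustration, routing by package with plain Spring AMQP bindings might look roughly like this (the package names are assumptions; if EventA and EventB live in the same package, you would need a custom RoutingKeyResolver instead):
@Bean
public Exchange exchange() {
    // a topic exchange so queues can be bound with routing-key patterns
    return ExchangeBuilder.topicExchange("Exchange").build();
}

@Bean
public Queue queueA() {
    return QueueBuilder.durable("QueueA").build();
}

@Bean
public Queue queueB() {
    return QueueBuilder.durable("QueueB").build();
}

@Bean
public Binding bindingA() {
    // assumes EventA is in the package com.example.events.a (the default routing key)
    return BindingBuilder.bind(queueA()).to(exchange()).with("com.example.events.a").noargs();
}

@Bean
public Binding bindingB() {
    // assumes EventB is in the package com.example.events.b
    return BindingBuilder.bind(queueB()).to(exchange()).with("com.example.events.b").noargs();
}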
However, you are currently using a fanout exchange, so events go to every bound queue. You can also just create another queue, bind it to the same exchange, and handle the relevant events on the query side.

Publish & Subscribe with Same Connection using Spring Integration MQTT

Due to the design of MQTT where you can only make a connection with a unique client id, is it possible to use the same connection to publish and subscribe in Spring Framework/Boot using Integration?
Taking this very simple example, it connects to the MQTT broker to subscribe and receive messages, but if you want to publish a message, the first connection disconnects and re-connects after the message is sent.
@Bean
public MqttPahoClientFactory mqttClientFactory() {
    DefaultMqttPahoClientFactory factory = new DefaultMqttPahoClientFactory();
    factory.setServerURIs("tcp://localhost:1883");
    factory.setUserName("guest");
    factory.setPassword("guest");
    return factory;
}

// publisher
@Bean
public IntegrationFlow mqttOutFlow() {
    return IntegrationFlows.from(CharacterStreamReadingMessageSource.stdin(),
                    e -> e.poller(Pollers.fixedDelay(1000)))
            .transform(p -> p + " sent to MQTT")
            .handle(mqttOutbound())
            .get();
}

@Bean
public MessageHandler mqttOutbound() {
    MqttPahoMessageHandler messageHandler = new MqttPahoMessageHandler("siSamplePublisher", mqttClientFactory());
    messageHandler.setAsync(true);
    messageHandler.setDefaultTopic("siSampleTopic");
    return messageHandler;
}

// consumer
@Bean
public IntegrationFlow mqttInFlow() {
    return IntegrationFlows.from(mqttInbound())
            .transform(p -> p + ", received from MQTT")
            .handle(logger())
            .get();
}

private LoggingHandler logger() {
    LoggingHandler loggingHandler = new LoggingHandler("INFO");
    loggingHandler.setLoggerName("siSample");
    return loggingHandler;
}

@Bean
public MessageProducerSupport mqttInbound() {
    MqttPahoMessageDrivenChannelAdapter adapter = new MqttPahoMessageDrivenChannelAdapter("siSampleConsumer",
            mqttClientFactory(), "siSampleTopic");
    adapter.setCompletionTimeout(5000);
    adapter.setConverter(new DefaultPahoMessageConverter());
    adapter.setQos(1);
    return adapter;
}
Working with 2 separate connections becomes difficult if you need to wait for an answer/result after publishing a message...
the first connection will disconnect and re-connect after the message is sent.
Not sure what you mean by that; both components will keep open a persistent connection.
Since the factory doesn't connect the client (the adapters do), it's not designed for using a shared client.
Using a single connection won't really help with coordination of requests/replies because the reply will still come back asynchronously on another thread.
If you have some data in the request/reply that you can use for correlation of replies to requests, you can use a BarrierMessageHandler to perform that task. See my answer here for an example; it uses the standard correlation id header, but that's not possible with MQTT, you need something in the message.
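As a rough sketch of that idea (this is not the code from the linked answer; correlationId here is a hypothetical field carried in both the request and the reply payloads, and the request message sent to publishWithReplyChannel must carry the matching correlationId header):
// BarrierMessageHandler and HeaderAttributeCorrelationStrategy are in org.springframework.integration.aggregator
@Bean
@ServiceActivator(inputChannel = "publishWithReplyChannel")
public BarrierMessageHandler barrier() {
    // suspends the publishing thread for up to 10s until trigger() is called
    // with a message whose correlationId header matches
    return new BarrierMessageHandler(10_000,
            new HeaderAttributeCorrelationStrategy("correlationId"));
}

@Bean
public IntegrationFlow mqttReplyFlow(BarrierMessageHandler barrier) {
    return IntegrationFlows.from(mqttInbound())
            // hypothetical: copy the correlation key from the reply payload into a header
            .enrichHeaders(h -> h.headerExpression("correlationId", "payload.correlationId"))
            .handle((MessageHandler) barrier::trigger)
            .get();
}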
TL;DR
The answer is no, not with the current Spring Boot MQTT Integration implementation (and maybe not even with future ones).
Answer
I'm facing the exact same situation: I need an MQTT client to be used for both inbound and outbound, keeping the connection persistent and sharing the same configuration (client ID, credentials, etc.), using Spring Integration flows as close to the original design as possible.
In order to achieve this, I had to reimplement MqttPahoMessageDrivenChannelAdapter, MqttPahoMessageHandler and a client factory.
In both MqttPahoMessageDrivenChannelAdapter and MqttPahoMessageHandler I chose the async client (IMqttAsyncClient) so that both sides use the same client type. Then I had to review the parts of the code where the client instance is used, checking whether it had already been instantiated by the other flow and checking its status (e.g. not trying to connect it if it was already connected).
Regarding the client factory, it was easier: I reimplemented getAsyncClientInstance(String url, String clientId), using the concatenation of url and clientId as the key under which the client instance is stored in a map, so the existing instance can be retrieved when the other flow requests it.
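A minimal sketch of that caching idea (my own illustration, not the author's actual code; as noted above, caching alone isn't enough, since the adapter and handler each still try to connect/disconnect the shared client):
// DefaultMqttPahoClientFactory is from org.springframework.integration.mqtt.core;
// IMqttAsyncClient is from the Eclipse Paho client
public class SharedMqttPahoClientFactory extends DefaultMqttPahoClientFactory {

    private final Map<String, IMqttAsyncClient> clients = new ConcurrentHashMap<>();

    @Override
    public IMqttAsyncClient getAsyncClientInstance(String url, String clientId) {
        // one client per url+clientId, shared by the inbound adapter and the outbound handler
        return this.clients.computeIfAbsent(url + clientId, key -> {
            try {
                return super.getAsyncClientInstance(url, clientId);
            }
            catch (Exception e) {
                throw new IllegalStateException(e);
            }
        });
    }
}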
It somehow works, but it's just a test and I'm not even sure it's a good approach. (I've started another StackOverflow question in order to track my specific scenario).
Can you share how you managed your situation?
