Understanding Spring Cloud messaging with RabbitMQ - Java

I think I have a problem understanding Spring Cloud messaging and can't find an answer to a "problem" I'm facing.
I have the following setup (using spring-boot 2.0.3.RELEASE).
application.yml
spring:
  rabbitmq:
    host: localhost
    port: 5672
    username: guest
    password: guest
    virtual-host: /
  cloud:
    stream:
      bindings:
        input:
          destination: foo
          group: fooGroup
        fooChannel:
          destination: foo
Service class
@Autowired
FoodOrderController foodOrderController;

@Bean
public CommandLineRunner runner() {
    return (String[] args) -> {
        IntStream.range(0, 50).forEach(e -> foodOrderController.orderFood());
    };
}

@StreamListener(target = FoodOrderSource.INPUT)
public void processCheapMeals(String meal) {
    System.out.println("This was a great meal!: " + meal);
}

@StreamListener(target = FoodOrderSource.INPUT)
public void processCheapMeals1(String meal) {
    System.out.println("This was a great meal!: " + meal);
}
FoodOrderController
public class FoodOrderController {

    @Autowired
    FoodOrderSource foodOrderSource;

    public String orderFood() {
        var foodOrder = new FoodOrder();
        foodOrder.setCustomerAddress(UUID.randomUUID().toString());
        foodOrder.setOrderDescription(UUID.randomUUID().toString());
        foodOrder.setRestaurant("foo");
        foodOrderSource.foodOrders().send(MessageBuilder.withPayload(foodOrder).build());
        // System.out.println(foodOrder.toString());
        return "food ordered!";
    }
}
FoodOrderSource
public interface FoodOrderSource {

    String INPUT = "foo";
    String OUTPUT = "fooChannel";

    @Input("foo")
    SubscribableChannel foo();

    @Output("fooChannel")
    MessageChannel foodOrders();
}
FoodOrderPublisher
@EnableBinding(FoodOrderSource.class)
public class FoodOrderPublisher {
}
The setup is working, except that both StreamListeners receive the same messages, so everything gets logged twice. Reading the documentation, my understanding was that by specifying a group on the queue's binding, both listeners would be registered inside that group and only one listener would receive a single message. I know that the example above is not sensible, but I want to mimic a multi-node environment with multiple listeners set up.
Why is the message received by both listeners? And how can I make sure that a message is only received once within the configured group?
According to the documentation, messages should also be auto-acknowledged by default, but I can't find anything that indicates that the messages actually get acknowledged. Am I missing something here?
(Screenshots of the RabbitMQ admin UI omitted.)
Reading the documentation, it says specifying a group inside the queues bindings, both the listeners will be registered inside the group and only one listener will receive a single message.
That is true when the listeners are in different application instances. When there are multiple listeners in the same instance they all get the same message. This is typically used with a condition where each listener can express interest in which meals they are interested in. Documented here.
Basically, the competing consumer is the binding itself, which dispatches the message to the actual @StreamListeners in the application.
So, you can't "mimic a multi-node environment with multiple listeners setup" this way.
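For illustration, a minimal sketch of that condition-based dispatch, assuming the producer sets a hypothetical "mealType" header on each order:

// Sketch: both listeners share the binding, but each one only handles
// messages whose (hypothetical) mealType header matches its condition.
@StreamListener(target = FoodOrderSource.INPUT, condition = "headers['mealType']=='cheap'")
public void processCheapMeals(String meal) {
    System.out.println("Cheap meal: " + meal);
}

@StreamListener(target = FoodOrderSource.INPUT, condition = "headers['mealType']=='expensive'")
public void processExpensiveMeals(String meal) {
    System.out.println("Expensive meal: " + meal);
}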
but I can't find anything that indicates that the messages actually get acknowledged
What do you mean by that? If the message is processed successfully, the container acks the message and it is removed from the queue.
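If you want to observe the acknowledgment yourself, one option (a sketch, assuming the RabbitMQ binder) is to switch the binding to manual acknowledgment and ack in the listener:

spring:
  cloud:
    stream:
      rabbit:
        bindings:
          input:
            consumer:
              acknowledge-mode: MANUAL

@StreamListener(FoodOrderSource.INPUT)
public void processCheapMeals(@Payload String meal,
        @Header(AmqpHeaders.CHANNEL) Channel channel,
        @Header(AmqpHeaders.DELIVERY_TAG) Long deliveryTag) throws IOException {
    System.out.println("This was a great meal!: " + meal);
    // this basicAck is exactly what the container does for you in the default AUTO mode
    channel.basicAck(deliveryTag, false);
}

Here Channel is com.rabbitmq.client.Channel; with the default AUTO mode you never see this call, because the listener container performs it after the method returns normally.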

The correct answer is already given above, but you can still look into this:
https://github.com/jinternals/spring-cloud-stream

Related

Spring Cloud Stream doesn't use Kafka channel binder to send a message

I'm trying to achieve the following:
Use Spring Cloud Stream 2.1.3.RELEASE with the Kafka binder to send a message to an input channel and achieve publish-subscribe behaviour, where every consumer will be notified and be able to handle a message sent to a Kafka topic.
I understand that in Kafka, if every consumer belongs to its own consumer group, it will be able to read every message from a topic.
In my case, Spring creates an anonymous unique consumer group for every instance of my Spring Boot application running. The Spring Boot application has only one stream listener configured to listen to the input channel.
Test example case:
Configured an example Spring Cloud Stream app with an input channel which is bound to a Kafka topic.
Using a Spring REST controller to send a message to the input channel, expecting the message to be delivered to every running Spring Boot application instance.
In both applications I can see on startup that the Kafka partition is assigned properly.
Problem:
However, when I send a message using output().send(), Spring doesn't even send the message to the configured Kafka topic; instead, in the same thread, it triggers the @StreamListener method of the same application instance.
During debugging I see that the Spring code has two handlers for the message: StreamListenerMessageHandler and KafkaProducerMessageHandler.
Spring simply chains them, and if the first handler succeeds it does not even go further. StreamListenerMessageHandler simply invokes my @StreamListener method in the same thread, and the message never reaches Kafka.
Question:
Is this by design, and in that case why is that? How can I achieve behaviour mentioned at the beginning of the post?
PS.
If I use KafkaTemplate and a @KafkaListener method, then it works as I want. The message is sent to the Kafka topic, and both application instances receive it and handle it in the @KafkaListener-annotated method.
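For reference, a rough sketch of that working variant (topic name and group id are hypothetical, not from the original code):

// Producer side: KafkaTemplate publishes straight to the topic.
@Autowired
private KafkaTemplate<String, PersonEvent> kafkaTemplate;

@PostMapping("/events")
public void createPerson(@RequestBody PersonEvent event) {
    kafkaTemplate.send("person-event-input", event);
}

// Consumer side: give each instance its own groupId (e.g. via a per-instance
// property) so that every instance receives every message.
@KafkaListener(topics = "person-event-input", groupId = "${instance.group.id}")
public void onPersonEvent(PersonEvent event) {
    // handle the event
}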
Code:
The stream listener method is configured the following way:
@SpringBootApplication
@EnableBinding(Processor.class)
@EnableTransactionManagement
public class ProcessorApplication {

    private Logger logger = LoggerFactory.getLogger(this.getClass().getName());

    private PersonRepository repository;

    public ProcessorApplication(PersonRepository repository) {
        this.repository = repository;
    }

    public static void main(String[] args) {
        SpringApplication.run(ProcessorApplication.class, args);
    }

    @Transactional
    @StreamListener(Processor.INPUT)
    @SendTo(Processor.OUTPUT)
    public PersonEvent process(PersonEvent data) {
        logger.info("Received event={}", data);
        Person person = new Person();
        person.setName(data.getName());
        logger.info("Saving person={}", person);
        Person savedPerson = repository.save(person);
        PersonEvent event = new PersonEvent();
        event.setName(savedPerson.getName());
        event.setType("PersonSaved");
        logger.info("Sent event={}", event);
        return event;
    }
}
Sending a message to the input channel:
@RestController
@RequestMapping("/persons")
public class PersonController {

    @Autowired
    private Sink sink;

    @PostMapping("/events")
    public void createPerson(@RequestBody PersonEvent event) {
        sink.input().send(MessageBuilder.withPayload(event).build());
    }
}
Spring Cloud Stream config:
spring:
  cloud.stream:
    bindings:
      output.destination: person-event-output
      input.destination: person-event-input
sink.input().send
You are bypassing the binder altogether and sending it directly to the stream listener.
You need to send the message to Kafka (to the person-event-input topic), and then each stream listener will receive the message from Kafka.
You need to configure another output binding and send it there, not directly to the input channel.
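A minimal sketch of that arrangement, with hypothetical binding names (the key point is that the output binding's destination matches the topic the input binding consumes):

// Hypothetical source interface; bind it with @EnableBinding(EventSource.class).
public interface EventSource {
    @Output("eventOutput")
    MessageChannel eventOutput();
}

@RestController
@RequestMapping("/persons")
public class PersonController {

    @Autowired
    private EventSource source;

    @PostMapping("/events")
    public void createPerson(@RequestBody PersonEvent event) {
        // goes through the binder to Kafka, not straight to the local listener
        source.eventOutput().send(MessageBuilder.withPayload(event).build());
    }
}

With spring.cloud.stream.bindings.eventOutput.destination set to person-event-input and no consumer group configured on the input binding, each instance gets an anonymous group and therefore its own copy of every message.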

Spring cloud stream: dynamic output channel strange behavior

I am using Spring Cloud Stream version 2.1.0.RELEASE to send messages (in this case to Kafka) to channels dynamically defined based on the input received. The issue is that only every other message ends up in the correct channel; the other half ends up in the default channel.
I used this sample as a starting point.
I am placing the channel I want to send to into a specific message header, then using a HeaderValueRouter to check that same header value to see which channel to output to.
I am configuring my application as follows:
@EnableBinding(CloudStreamConfig.DynamicSource.class)
public class CloudStreamConfig {

    @Autowired
    private BinderAwareChannelResolver resolver;

    public static final String CHANNEL_HEADER = "channelHeader";
    public static final String OUTPUT_CHANNEL = "outputChannel";
    private final String defaultChannel = "defaultChannel";

    @ServiceActivator(inputChannel = OUTPUT_CHANNEL)
    @Bean
    public HeaderValueRouter router() {
        HeaderValueRouter router = new HeaderValueRouter(CHANNEL_HEADER);
        router.setDefaultOutputChannelName(defaultChannel);
        router.setChannelResolver(resolver);
        return router;
    }

    public interface DynamicSource {
        @Output(OUTPUT_CHANNEL)
        MessageChannel output();
    }
}
And in my controller I take in an object as well as a parameter defining what channel to send it to, then send it to the MessageChannel. The relevant code is below:
@Autowired
@Qualifier(CloudStreamConfig.OUTPUT_CHANNEL)
public MessageChannel localChannel;

...

@GetMapping(path = "/error/{channel}")
@ResponseStatus(HttpStatus.OK)
public void error(@PathVariable String channel) {
    // build my object
    Message message = MessageBuilder.createMessage(myObject,
            new MessageHeaders(Collections.singletonMap(CloudStreamConfig.CHANNEL_HEADER, channel)));
    localChannel.send(message);
}
If I send 10 messages to /error/someChannel I would expect to see 10 messages in someChannel. However, I see half of the messages in someChannel and the other half in defaultChannel. I have put a debugging counter variable in my messages and it sends the first message to the correct channel, and then every second message to the correct channel, while the others all go to the default channel.
What would be causing this and how can I fix it? Am I misusing my DynamicSource class? I assumed it would be tied to any autowired MessageChannel of the same name (and it does appear to be) but I'm wondering if there's something I'm missing. Or is there an unintended interaction with the BinderAwareChannelResolver? (I honestly don't know what this does, I only included it because the samples do)
There are two subscribers on the output channel - the channel binding (in the binder) and your router.
For DirectChannels, the default dispatching algorithm is round robin so you are sending messages alternately to the router and directly to the binder.
You need a different DirectChannel @Bean for the service activator so all messages go there, and thence to the binder after routing.
See sourceChannel in that sample.
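A sketch of that fix (channel name hypothetical): give the router its own plain DirectChannel that is not also a binding output, so the router is the channel's only subscriber:

// org.springframework.integration.channel.DirectChannel
@Bean
public MessageChannel routerChannel() {
    return new DirectChannel();
}

// The router is now the sole subscriber on routerChannel.
@ServiceActivator(inputChannel = "routerChannel")
@Bean
public HeaderValueRouter router() {
    HeaderValueRouter router = new HeaderValueRouter(CHANNEL_HEADER);
    router.setDefaultOutputChannelName(defaultChannel);
    router.setChannelResolver(resolver);
    return router;
}

The controller would then autowire @Qualifier("routerChannel") instead of the binding's output channel, and the router forwards each message to the binder destination resolved from the header.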

Spring Cloud AWS SQS Listener Issue

I am preparing a demo program which listens to Amazon SQS. Below is my code.
xml config
<aws-messaging:annotation-driven-queue-listener amazon-sqs="sqsClient" max-number-of-messages="10" wait-time-out="20" visibility-timeout="100" />
UserServiceListenr.java
@Configuration
@EnableSqs
@Component
public class UserServiceListenr {

    @SqsListener(value = "CMR", deletionPolicy = SqsMessageDeletionPolicy.ON_SUCCESS)
    public void myQueueListener(Message message) throws Exception {
        try {
            System.out.println("Message Listen start");
            System.out.println("Message part " + message);
        } catch (Exception e) {
            System.out.println(" message Exception " + e);
        }
    }
}
I have put 2 messages on the SQS queue. When I try to fetch them with this demo program, the messages show up in the Messages_in_flight column in my AWS console, but they never reach my @SqsListener method, and after a few minutes they show up in the Messages_available column again.
Below is the log output I got when running the program:
QueueMessageHandler:294 - 1 message handler methods found on class com.sophos.cmr.demo.UserServiceListenr: {public void com.sophos.cmr.demo.UserServiceListenr.myQueueListener(java.lang.String) throws java.lang.Exception=org.springframework.cloud.aws.messaging.listener.QueueMessageHandler$MappingInformation#5f0e9815}
So what's going wrong? Any clue?
When you see a message move to the Messages_in_flight column, it means that the message was picked up by a consumer, but the acknowledgment of successful handling for this specific message has not yet been received.
So the reason could be one of the following:
1) An error/exception appears during the handling of the message from SQS.
2) Spring can't find an appropriate ArgumentResolver to convert the incoming SQS message to your bean. I see you are using your custom bean - 'Message'.
You can look through the documentation, section 5.2.5.
I had a similar issue and was able to solve it. The docs aren't clear on this matter, so I am adding it here in case someone sees this later on.
First, when you insert an item into the queue, you need to set some additional message attributes so Spring knows to marshal it into your bean. If you set Name: contentType, Type: String, Value: application/json in the GUI when adding a message to the queue, then Spring will try to marshal it with Jackson.
Second, I wasn't able to get @SqsListener to work with a non-String argument unless it was inside a class annotated with @Controller. For non-controller classes, I had to add a second method argument (@Header("SenderId") String senderId worked for me) and then it routed correctly.
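For completeness, a sketch of setting that attribute programmatically with the AWS SDK for Java v1 rather than the console GUI (queue URL and body are hypothetical):

// Send a JSON body with a contentType message attribute so the listener's
// message converter can pick Jackson for a non-String @SqsListener argument.
AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();

SendMessageRequest request = new SendMessageRequest()
        .withQueueUrl("https://sqs.us-east-1.amazonaws.com/123456789012/CMR") // hypothetical URL
        .withMessageBody("{\"field\":\"value\"}")
        .addMessageAttributesEntry("contentType",
                new MessageAttributeValue()
                        .withDataType("String")
                        .withStringValue("application/json"));

sqs.sendMessage(request);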

Scheduled websocket push with Spring Boot

I want to create a simple news feed feature on the front end that will automatically update through websocket push notifications.
The technologies involved are:
Angular for the general front-end application
SockJS for creating websocket communication
Stomp over websocket for receiving messages from a message broker
Springboot Websockets
Stomp Message Broker (the java related framework)
What I want to achieve on the front end is:
Create a websocket connection when the view is loaded
Create a Stomp provider using that websocket
Have my client subscribe to it
Catch server pushed messages and update the angular view
As far as the server side code:
Configure the websocket stuff and manage the connection
Have the server push messages every X amount of time (through an executor or @Scheduled?).
I think I have achieved everything so far except the last part of the server side code. The example I was following uses the websocket in full duplex mode and when a client sends something then the server immediately responds to the message queue and all subscribed clients update. But what I want is for the server itself to send something over Stomp WITHOUT waiting for the client to make any requests.
At first I created a Spring @Controller and added a method to it with the @SendTo("/my/subscribed/path") annotation. However, I have no idea how to trigger it. Also, I tried adding @Scheduled, but this annotation works only on methods with a void return type (and I'm returning a NewsMessage object).
Essentially what I need is to have the client initialize a websocket connection, and after have the server start pushing messages through it at a set interval (or whenever an event is triggered it doesn't matter for now). Also, every new client should listen to the same message queue and receive the same messages.
Before starting, make sure that you have the websocket dependencies in your pom.xml. For instance, the most important one:
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-websocket</artifactId>
    <version>${org.springframework-version}</version>
</dependency>
Then, you need to have your configuration in place. I suggest you start with the simple broker.
@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig implements WebSocketMessageBrokerConfigurer {

    @Override
    public void registerStompEndpoints(StompEndpointRegistry registry) {
        registry.addEndpoint("/portfolio").withSockJS();
    }

    @Override
    public void configureMessageBroker(MessageBrokerRegistry config) {
        config.setApplicationDestinationPrefixes("/app");
        config.enableSimpleBroker("/topic", "/queue");
    }
}
Then your controller should look like this. When your AngularJS app opens a connection on /portfolio and sends a message to /app/greeting, you will reach the controller, and the return value is broadcast to everyone subscribed to /topic/greeting.
@Controller
public class GreetingController {

    @MessageMapping("/greeting")
    public String handle(String greeting) {
        // without @SendTo, the return value goes to "/topic/greeting" by default
        return "[" + getTimestamp() + "]: " + greeting;
    }
}
With regard to your scheduler question, you need to enable it via configuration:
@Configuration
@EnableScheduling
public class SchedulerConfig {}
And then schedule it:
@Component
public class ScheduledUpdatesOnTopic {

    @Autowired
    private SimpMessagingTemplate template;

    @Autowired
    private MessagesSupplier messagesSupplier; // not final: field injection cannot set final fields

    @Scheduled(fixedDelay = 300)
    public void publishUpdates() {
        template.convertAndSend("/topic/greetings", messagesSupplier.get());
    }
}
Hope this somehow clarified the concept and steps to be taken to make things work for you.
First of all you can't send (push) messages to clients without their subscriptions.
Secondly, to send messages to all subscribers, you should take a look at the topic abstraction.
Those are the fundamentals of STOMP.
I think you are fine with @Scheduled, but you just need to inject SimpMessagingTemplate to send messages to the STOMP broker for pushing afterwards.
Also see Spring WebSockets XML configuration not providing brokerMessagingTemplate

Spring user destinations

I have a Spring Boot application that publishes to user-defined destination channels, as such:
@Autowired
private SimpMessagingTemplate template;

public void send() {
    //..
    String uniqueId = "123";
    this.template.convertAndSendToUser(uniqueId, "/event", "Hello");
}
Then a Stomp-over-SockJS client can subscribe to it and receive the message. Suppose I have a Stomp endpoint registered in my Spring application called "/data":
var ws = new SockJS("/data");
var client = Stomp.over(ws);

var connect_callback = function() {
    client.subscribe("/user/123/event", sub_callback);
};

var sub_callback = function(msg) {
    alert(msg);
};

client.connect('', '', connect_callback);
Actually, there will be more than one user client subscribing to the same distinct user destination, so each publish/subscribe channel is not one-to-one. I am only doing it this way since Spring's "/topic" destinations have to be defined programmatically and "/queue" destinations can only be consumed by one user. How do I know when a user destination no longer has any subscribers? And how can I delete a user destination?
@SendToUser("/queue/dest")
public String send() {
    //..
    String uniqueId = "123";
    return "hello";
}
On the client you would subscribe to '/user/queue/dest'.
http://docs.spring.io/spring/docs/current/spring-framework-reference/html/websocket.html#websocket-stomp-user-destination
After adding a channel interceptor and setting breakpoints within a code segment running in debug mode in Eclipse, I found that there is a collection of websocket session and destination mappings in the registry objects held by the message handlers, which in turn are stored inside the message channel object. I found this for topics, though I am not sure about purely user destinations.
However, Spring does not leave the API open for me to just call a function to get a list of all subscribers to every topic, at least not without passing in the message. Everything else that would have been helpful is set private, so it cannot be programmatically accessed. This is not helpful, as I would like to trigger an action upon unsubscribe or disconnect of a client ONLY when the topic being unsubscribed/disconnected from has no other listening clients left.
I am now looking at embedding a full-featured broker such as ActiveMQ to see if it can potentially solve my problem.
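One avenue worth noting before embedding an external broker: with the simple broker, Spring publishes application events for STOMP SUBSCRIBE, UNSUBSCRIBE, and DISCONNECT frames (SessionSubscribeEvent, SessionUnsubscribeEvent, SessionDisconnectEvent in org.springframework.web.socket.messaging), so subscriber counts per destination can be tracked by hand. A rough sketch, with all class and map names hypothetical:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

import org.springframework.context.event.EventListener;
import org.springframework.messaging.simp.stomp.StompHeaderAccessor;
import org.springframework.stereotype.Component;
import org.springframework.web.socket.messaging.SessionDisconnectEvent;
import org.springframework.web.socket.messaging.SessionSubscribeEvent;
import org.springframework.web.socket.messaging.SessionUnsubscribeEvent;

@Component
public class SubscriptionTracker {

    // (sessionId:subscriptionId) -> destination, so UNSUBSCRIBE frames can be resolved
    private final Map<String, String> destinationsBySubscription = new ConcurrentHashMap<>();
    // destination -> live subscriber count
    private final Map<String, AtomicInteger> countsByDestination = new ConcurrentHashMap<>();

    @EventListener
    public void onSubscribe(SessionSubscribeEvent event) {
        StompHeaderAccessor sha = StompHeaderAccessor.wrap(event.getMessage());
        String key = sha.getSessionId() + ":" + sha.getSubscriptionId();
        String destination = sha.getDestination();
        destinationsBySubscription.put(key, destination);
        countsByDestination.computeIfAbsent(destination, d -> new AtomicInteger()).incrementAndGet();
    }

    @EventListener
    public void onUnsubscribe(SessionUnsubscribeEvent event) {
        StompHeaderAccessor sha = StompHeaderAccessor.wrap(event.getMessage());
        release(sha.getSessionId() + ":" + sha.getSubscriptionId());
    }

    @EventListener
    public void onDisconnect(SessionDisconnectEvent event) {
        // drop every subscription belonging to the closed session
        String prefix = event.getSessionId() + ":";
        destinationsBySubscription.keySet().stream()
                .filter(k -> k.startsWith(prefix))
                .forEach(this::release);
    }

    private void release(String key) {
        String destination = destinationsBySubscription.remove(key);
        if (destination != null
                && countsByDestination.get(destination).decrementAndGet() == 0) {
            // last subscriber gone: trigger whatever per-destination cleanup is needed
        }
    }
}

This only sees the simple broker's view of subscriptions; with an external broker relay, the unsubscribe bookkeeping lives in the broker instead.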
