Spring Boot 2.6 WebSocket Message Order

We are using Spring Boot with WebSocket:
@Slf4j
@Component
public class MyWebsocketConnector extends TextWebSocketHandler {
    // ...
    @Override
    protected void handleTextMessage(WebSocketSession session, TextMessage message) throws Exception {
        // do something...
    }
}
But now I found in our traces that different messages are handled by different threads.
Therefore, I ask myself how I can guarantee that messages of a session that arrive one after the other are also processed one after the other.
Unfortunately, I can't find anything about a guaranteed order or synchronisation.
Does anyone know more about this?
The only thing I found was this: https://docs.spring.io/spring-framework/docs/current/reference/html/web.html#websocket-stomp-ordered-messages
But this is the STOMP implementation - we are not using STOMP :-(

Even though I used STOMP and set the flag "setPreservePublishOrder" to true, it couldn't guarantee the order. The only solution I could find that works was to set the number of threads for the client inbound channel to 1:
@Override
public void configureClientInboundChannel(final ChannelRegistration registration) {
    ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
    executor.setCorePoolSize(1);
    executor.setMaxPoolSize(1);
    executor.initialize();
    registration.taskExecutor(executor);
}
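Since no ordering guarantee could be found for the plain (non-STOMP) handler, another option is to serialize handling per session in the application itself. A minimal sketch (my own suggestion, not from the original question or the Spring docs): chain every incoming message onto a per-session CompletableFuture, so messages of one session run in arrival order while different sessions still run in parallel.

import java.util.Map;
import java.util.concurrent.*;
// plus the usual spring-websocket imports as in the snippet above

@Component
public class OrderedWebsocketConnector extends TextWebSocketHandler {

    private final ExecutorService pool = Executors.newFixedThreadPool(8);
    private final Map<String, CompletableFuture<Void>> tails = new ConcurrentHashMap<>();

    @Override
    protected void handleTextMessage(WebSocketSession session, TextMessage message) {
        // atomically append this message to the session's chain;
        // thenRunAsync guarantees FIFO execution per session
        tails.compute(session.getId(), (id, tail) ->
                (tail == null ? CompletableFuture.<Void>completedFuture(null) : tail)
                        .thenRunAsync(() -> process(session, message), pool));
    }

    @Override
    public void afterConnectionClosed(WebSocketSession session, CloseStatus status) {
        tails.remove(session.getId()); // don't leak chains of closed sessions
    }

    private void process(WebSocketSession session, TextMessage message) {
        // do something...
    }
}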

Related

Invoke method call asynchronously without blocking main thread

I have a scenario where the Spring Boot application has to download a file from a downstream application and pass it to the client. The API also needs to update a read flag in the database without blocking the response (main thread).
A basic async use case is what I thought of and implemented in the respective API. But I am getting a behavioural issue with @Async. The annotation is able to spawn a new thread, but it's blocking the main thread and holding the response. The expectation was to return without holding the main thread.
Actually, the async update is the last operation of the main thread, and I guess that is why @Async appears to block it.
Can anyone please suggest a better solution for this scenario?
Calling class:
ResponseEntity<byte[]> parsedResponse = retrieverService.retrieve(id, "html");
retrieverService.update(id);
return parsedResponse;
Async method:
@Override
@Async("updateTaskExecutor")
public void update(String id) {
    LOG.info("Updating data for metaTagId: {}", id);
    db.updateReadFlag(id);
}
Async config:
@Configuration
@EnableAsync
public class AsyncConfiguration {

    @Bean(name = "updateTaskExecutor")
    public Executor updateTaskExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(100);
        executor.setMaxPoolSize(100);
        executor.setQueueCapacity(100);
        executor.setThreadNamePrefix("UpdateTaskClient-");
        executor.initialize();
        return executor;
    }
}
The configurations were correct. I was using the debugger to check the parallelism; as suggested by @M. Deinum, that is not the correct way to check it. After adding Thread.sleep() I could see that the asynchronous calls are working as expected. I am able to send the response back while performing the update query asynchronously.
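To make the asynchrony visible, a hypothetical variant of the check described above (my sketch, not the poster's code): put an artificial delay inside the @Async method; if the HTTP response comes back before the log line appears on an "UpdateTaskClient-" thread, the update really runs without blocking the caller.

@Override
@Async("updateTaskExecutor")
public void update(String id) {
    try {
        Thread.sleep(5000); // simulated slow work; the response should return long before this elapses
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
    LOG.info("Updating data for metaTagId: {}", id);
    db.updateReadFlag(id);
}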

Handle multiple amqp messages concurrently through one consumer inside one spring-rabbit service

EDIT
Just found out how to run multiple consumers inside one service:
@Bean
SimpleMessageListenerContainer container(ConnectionFactory connectionFactory, MessageListenerAdapter listenerAdapter) {
    SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
    container.setConnectionFactory(connectionFactory);
    container.setQueueNames(RENDER_QUEUE);
    container.setConcurrentConsumers(concurrentConsumers); // setting this in env
    container.setMessageListener(listenerAdapter);
    return container;
}

@Bean
MessageListenerAdapter listenerAdapter(RenderMessageConsumer receiver) {
    return new MessageListenerAdapter(receiver, "reciveMessageFromRenderQueue");
}
Now the only question that remains is: how can I have a global limit? How do multiple instances of the AMQP receiver share the total number of consumers? I want to set a global number of concurrentConsumers to 10, run 2 instances of the consumer service, and have each instance run around 5 consumers. Can this be managed by RabbitMQ?
I have a Spring service that consumes AMQP messages and calls an HTTP resource for each message.
After the HTTP call completes, a message is published to another queue to report either error or done. Only then does the message handling complete and the next message get taken from the queue.
// simplified
@RabbitListener(queues = RENDER_QUEUE)
public void reciveMessageFromRenderQueue(String message) {
    try {
        RenderMessage renderMessage = JsonUtils.stringToObject(message, RenderMessage.class);
        String result = renderService.httpCallRenderer(renderMessage);
        messageProducer.sendDoneMessage(result);
    } catch (Exception e) {
        logError(type, e);
        messageProducer.sendErrorMessage(e.getMessage());
    }
}
There are at times hundreds or thousands of render messages in the queue, but the HTTP call is rather long-running and not doing much. This becomes obvious as I can improve the message handling rate by running multiple instances of the service, thus adding more consumers and calling the HTTP endpoint multiple times in parallel. One instance has exactly one consumer for the channel, so the number of instances equals the number of consumers. However, that heavily increases memory usage (since the service uses Spring) just for forwarding a message and handling the result.
So I thought I'd make the HTTP call asynchronously and return immediately after accepting the message:
renderService.httpCallRendererAsync(renderMessage)
        .subscribeOn(Schedulers.newThread())
        .subscribe(new Observer<String>() {
            public void onNext(String result) {
                messageProducer.sendDoneMessage(result);
            }

            public void onError(Throwable throwable) {
                messageProducer.sendErrorMessage(throwable.getMessage());
            }
        });
That, however, overloads the HTTP endpoint, which cannot deal with 1000 or more simultaneous requests.
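One way to protect the endpoint while keeping the call asynchronous (my sketch, not from the post): schedule the calls on a fixed-size pool instead of a new thread per message, so at most N HTTP calls run at once. Note the AMQP message is still acked immediately, which is exactly the drawback discussed below.

// at most 10 HTTP calls in flight, regardless of how fast messages arrive
ExecutorService httpPool = Executors.newFixedThreadPool(10);
renderService.httpCallRendererAsync(renderMessage)
        .subscribeOn(Schedulers.from(httpPool))
        .subscribe(
                result -> messageProducer.sendDoneMessage(result),
                throwable -> messageProducer.sendErrorMessage(throwable.getMessage()));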
What I need is for my AMQP service to take a certain number of messages from the queue, handle them in separate threads, make the HTTP call in each of them, and only then return with "message handled". The number of messages taken from the queue needs to be shared between multiple instances of that service: if the maximum is 10 and message consumption is round robin, the first 5 odd messages should be handled by instance one and the first 5 even messages by instance two, and as soon as one instance finishes handling a message it should take another one from the queue.
What I found are things like prefetch with limits by consumer and by channel, as described by RabbitMQ, and the spring-rabbit implementation, which uses prefetchCount and the transactionSize described here. That, however, does not seem to do anything for a single running instance: it will not spawn additional threads to handle more messages concurrently. And of course it will not reduce the number of messages handled in my async scenario, since those messages are immediately considered "handled".
@Bean
public RabbitListenerContainerFactory<SimpleMessageListenerContainer> prefetchContainerFactory(ConnectionFactory rabbitConnectionFactory) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setConnectionFactory(rabbitConnectionFactory);
    factory.setPrefetchCount(5);
    factory.setTxSize(5);
    return factory;
}
// and then using
@RabbitListener(queues = RENDER_QUEUE, containerFactory = "prefetchContainerFactory")
The most important requirement for me seems to be that multiple messages should be handled in one instance while the maximum number of concurrently handled messages is shared between instances.
Can that be done using RabbitMQ and Spring? Or do I have to implement something in between?
In an early stage it might be acceptable to just have concurrent message handling in one instance and not share that limit. Then I'll have to configure the limit manually using environment variables while scaling the number of instances, for example as sketched below.
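A minimal sketch of that interim approach (my assumption; the property name is hypothetical): drive the per-instance consumer count from configuration, so each instance is started with its share of the global budget.

@Bean
public SimpleRabbitListenerContainerFactory concurrentContainerFactory(ConnectionFactory connectionFactory,
        @Value("${render.concurrent-consumers:5}") int concurrentConsumers) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    factory.setConcurrentConsumers(concurrentConsumers); // N listener threads => N messages handled concurrently
    factory.setPrefetchCount(1); // each consumer holds only one unacked message at a time
    return factory;
}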
Now the only question that remains is: how can I have a global limit? How do multiple instances of the AMQP receiver share the total number of consumers? I want to set a global number of concurrentConsumers to 10, run 2 instances of the consumer service, and have each instance run around 5 consumers. Can this be managed by RabbitMQ?
There is no mechanism in either RabbitMQ or Spring to support such a scenario automatically. You can, however, change the concurrency at runtime (setConcurrentConsumers() on the container), so you could use some external agent to manage the concurrency on each instance.
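Building on that suggestion, a hedged sketch (the endpoint and names are my own invention): expose the container's runtime setter so an external agent can divide the global budget across instances.

@RestController
public class ConcurrencyController {

    private final SimpleMessageListenerContainer container;

    public ConcurrencyController(SimpleMessageListenerContainer container) {
        this.container = container;
    }

    // e.g. the agent calls PUT /concurrency/5 on each of 2 instances to enforce a global limit of 10
    @PutMapping("/concurrency/{count}")
    public void setConcurrency(@PathVariable int count) {
        container.setConcurrentConsumers(count); // takes effect at runtime
    }
}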

Spring Cloud Stream doesn't use Kafka channel binder to send a message

I'm trying to achieve the following:
Use Spring Cloud Stream 2.1.3.RELEASE with the Kafka binder to send a message to an input channel and achieve publish-subscribe behaviour, where every consumer is notified and able to handle a message sent to a Kafka topic.
I understand that in Kafka, every consumer that belongs to its own consumer group is able to read every message from a topic.
In my case Spring creates an anonymous unique consumer group for every running instance of my Spring Boot application. The Spring Boot application has only one stream listener configured, listening on the input channel.
Test example case:
Configured an example Spring Cloud Stream app with an input channel which is bound to a Kafka topic.
Using a Spring REST controller to send a message to the input channel, expecting that the message will be delivered to every running Spring Boot application instance.
In both applications I can see on startup that the Kafka partition is assigned properly.
Problem:
However, when I send a message using output().send(), Spring doesn't even send the message to the configured Kafka topic; instead, in the same thread, it triggers the @StreamListener method of the same application instance.
During debugging I see that the Spring code has two handlers for the message: the StreamListenerMessageHandler and the KafkaProducerMessageHandler.
Spring simply chains them, and if the first handler succeeds it will not even go further. The StreamListenerMessageHandler simply invokes my @StreamListener method in the same thread, and the message never reaches Kafka.
Question:
Is this by design, and if so, why? How can I achieve the behaviour described at the beginning of the post?
PS.
If I use KafkaTemplate and a @KafkaListener method, it works as I want: the message is sent to the Kafka topic and both application instances receive it and handle it in the Kafka-listener-annotated method.
Code:
The stream listener method is configured the following way:
@SpringBootApplication
@EnableBinding(Processor.class)
@EnableTransactionManagement
public class ProcessorApplication {

    private Logger logger = LoggerFactory.getLogger(this.getClass().getName());

    private PersonRepository repository;

    public ProcessorApplication(PersonRepository repository) {
        this.repository = repository;
    }

    public static void main(String[] args) {
        SpringApplication.run(ProcessorApplication.class, args);
    }

    @Transactional
    @StreamListener(Processor.INPUT)
    @SendTo(Processor.OUTPUT)
    public PersonEvent process(PersonEvent data) {
        logger.info("Received event={}", data);
        Person person = new Person();
        person.setName(data.getName());
        logger.info("Saving person={}", person);
        Person savedPerson = repository.save(person);
        PersonEvent event = new PersonEvent();
        event.setName(savedPerson.getName());
        event.setType("PersonSaved");
        logger.info("Sent event={}", event);
        return event;
    }
}
Sending a message to the input channel:
@RestController
@RequestMapping("/persons")
public class PersonController {

    @Autowired
    private Sink sink;

    @PostMapping("/events")
    public void createPerson(@RequestBody PersonEvent event) {
        sink.input().send(MessageBuilder.withPayload(event).build());
    }
}
Spring Cloud Stream config:
spring:
  cloud.stream:
    bindings:
      output.destination: person-event-output
      input.destination: person-event-input
sink.input().send
You are bypassing the binder altogether and sending it directly to the stream listener.
You need to send the message to Kafka (to the person-event-input topic); each stream listener will then receive the message from Kafka.
You need to configure another output binding and send the message there, not directly to the input channel.
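A hedged sketch of that fix (the binding name and yaml key are my assumptions): declare a separate output binding whose destination is the topic the input binding consumes, and send through it instead of sink.input().

public interface EventGateway {
    @Output("personEventOut")
    MessageChannel personEventOut();
}

// register it next to the existing bindings: @EnableBinding({Processor.class, EventGateway.class})
// and in application.yml: spring.cloud.stream.bindings.personEventOut.destination: person-event-input

@RestController
@RequestMapping("/persons")
public class PersonController {

    private final EventGateway gateway;

    public PersonController(EventGateway gateway) {
        this.gateway = gateway;
    }

    @PostMapping("/events")
    public void createPerson(@RequestBody PersonEvent event) {
        // the message now really goes through the Kafka binder, so every
        // anonymous-group instance receives it in its @StreamListener
        gateway.personEventOut().send(MessageBuilder.withPayload(event).build());
    }
}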

Publish & Subscribe with Same Connection using Spring Integration MQTT

Due to the design of MQTT, where you can only make a connection with a unique client ID, is it possible to use the same connection to publish and subscribe in Spring Framework/Boot using Integration?
Taking this very simple example: it connects to the MQTT broker to subscribe and receive messages, but if you want to publish a message, the first connection disconnects and re-connects after the message is sent.
@Bean
public MqttPahoClientFactory mqttClientFactory() {
    DefaultMqttPahoClientFactory factory = new DefaultMqttPahoClientFactory();
    factory.setServerURIs("tcp://localhost:1883");
    factory.setUserName("guest");
    factory.setPassword("guest");
    return factory;
}
// publisher
@Bean
public IntegrationFlow mqttOutFlow() {
    return IntegrationFlows.from(CharacterStreamReadingMessageSource.stdin(),
                    e -> e.poller(Pollers.fixedDelay(1000)))
            .transform(p -> p + " sent to MQTT")
            .handle(mqttOutbound())
            .get();
}

@Bean
public MessageHandler mqttOutbound() {
    MqttPahoMessageHandler messageHandler = new MqttPahoMessageHandler("siSamplePublisher", mqttClientFactory());
    messageHandler.setAsync(true);
    messageHandler.setDefaultTopic("siSampleTopic");
    return messageHandler;
}
// consumer
@Bean
public IntegrationFlow mqttInFlow() {
    return IntegrationFlows.from(mqttInbound())
            .transform(p -> p + ", received from MQTT")
            .handle(logger())
            .get();
}

private LoggingHandler logger() {
    LoggingHandler loggingHandler = new LoggingHandler("INFO");
    loggingHandler.setLoggerName("siSample");
    return loggingHandler;
}

@Bean
public MessageProducerSupport mqttInbound() {
    MqttPahoMessageDrivenChannelAdapter adapter = new MqttPahoMessageDrivenChannelAdapter("siSampleConsumer",
            mqttClientFactory(), "siSampleTopic");
    adapter.setCompletionTimeout(5000);
    adapter.setConverter(new DefaultPahoMessageConverter());
    adapter.setQos(1);
    return adapter;
}
Working with 2 separate connections becomes difficult if you need to wait for an answer/result after publishing a message...
the first connection will disconnect and re-connect after the message is sent.
Not sure what you mean by that; both components will keep open a persistent connection.
Since the factory doesn't connect the client (the adapters do), it's not designed for using a shared client.
Using a single connection won't really help with coordination of requests/replies because the reply will still come back asynchronously on another thread.
If you have some data in the request/reply that you can use for correlation of replies to requests, you can use a BarrierMessageHandler to perform that task. See my answer here for an example; it uses the standard correlation id header, but since that's not possible with MQTT you need something in the message.
TL;DR
The answer is no, not with the current Spring Boot MQTT integration implementation (and maybe not even with future ones).
Answer
I'm facing the exact same situation: I need an MQTT client to be opened for both inbound and outbound, keeping the connection persistent and sharing the same configuration (client ID, credentials, etc.), while staying as close as possible to the Spring Integration flow design.
To achieve this, I had to reimplement MqttPahoMessageDrivenChannelAdapter, MqttPahoMessageHandler and a client factory.
In both MqttPahoMessageDrivenChannelAdapter and MqttPahoMessageHandler I settled on the async client (IMqttAsyncClient) so that both flows use the same client type. Then I had to review the parts of the code where the client instance is called/used, checking whether it had already been instantiated by the other flow and checking its status (e.g. not trying to connect it if it was already connected).
The client factory was easier: I reimplemented getAsyncClientInstance(String url, String clientId), using the concatenation of url and clientId as the key under which the instance is stored in a map, so the existing instance can be retrieved if the other flow requests it.
It somehow works, but it's just a test and I'm not even sure it's a good approach. (I've started another StackOverflow question to track my specific scenario.)
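A minimal sketch of the caching factory described in this answer (my reconstruction, not the author's actual code; error handling is illustrative):

public class SharedMqttClientFactory extends DefaultMqttPahoClientFactory {

    private final Map<String, IMqttAsyncClient> clients = new ConcurrentHashMap<>();

    @Override
    public IMqttAsyncClient getAsyncClientInstance(String url, String clientId) {
        // url + clientId identifies the connection, so the inbound and outbound
        // flows configured with the same pair share a single client instance
        return clients.computeIfAbsent(url + clientId, key -> {
            try {
                return new MqttAsyncClient(url, clientId);
            } catch (MqttException e) {
                throw new IllegalStateException("Could not create MQTT client", e);
            }
        });
    }
}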
Can you share how you managed your situation?

Frequent send to spring-websocket session: lost in transit

I have a load-test setup of a Spring WebSocket server (based on Jetty and Spring version 4.3.2.RELEASE) and a client that generates many connections (based on Spring's sample Java WebSocket client). The code below sends data to a given WebSocket session; the snippet exploits the fact that the sessionId can be used instead of the user ID (see Spring WebSocket @SendToSession: send message to specific session). I may execute this code very often, every 2-3 milliseconds. I use the SimpleMessageBroker.
public void publishToSessionUsingTopic(String sessionId, String subscriptionTopic, Map<String, CacheRowModel> payload) {
    String subscriptionTopicWithoutUser = subscriptionTopic.replace(USER_ENDPOINT, "");

    // necessary message headers for a per-session send
    SimpMessageHeaderAccessor headerAccessor = SimpMessageHeaderAccessor.create(SimpMessageType.MESSAGE);
    headerAccessor.setSessionId(sessionId);
    headerAccessor.setLeaveMutable(true);

    simpMessagingTemplate.convertAndSendToUser(sessionId, subscriptionTopicWithoutUser, Collections.singletonList(payload), headerAccessor.getMessageHeaders());
}
When this code is executed very frequently (every 2-3 milliseconds) for ~100 sessions, I can see in my logs that it ran and called convertAndSendToUser, yet some of the sessions don't receive the message. I'd appreciate any suggestions on how this could be resolved.
Well, I think your problem is with the:
@Bean
public ThreadPoolTaskExecutor clientOutboundChannelExecutor() {
    TaskExecutorRegistration reg = getClientOutboundChannelRegistration().getOrCreateTaskExecRegistration();
    ThreadPoolTaskExecutor executor = reg.getTaskExecutor();
    executor.setThreadNamePrefix("clientOutboundChannel-");
    return executor;
}
where it uses this config for the executor:
protected ThreadPoolTaskExecutor getTaskExecutor() {
    ThreadPoolTaskExecutor executor = (this.taskExecutor != null ? this.taskExecutor : new ThreadPoolTaskExecutor());
    executor.setCorePoolSize(this.corePoolSize);
    executor.setMaxPoolSize(this.maxPoolSize);
    executor.setKeepAliveSeconds(this.keepAliveSeconds);
    executor.setQueueCapacity(this.queueCapacity);
    executor.setAllowCoreThreadTimeOut(true);
    return executor;
}
See, there is no RejectedExecutionHandler configured, and by default it is:
private RejectedExecutionHandler rejectedExecutionHandler = new ThreadPoolExecutor.AbortPolicy();
So, when you have so many messages that their tasks exceed the thread pool's capacity, any extra ones are simply aborted.
To fix the issue you should implement WebSocketMessageBrokerConfigurer and override its configureClientOutboundChannel() to provide a custom taskExecutor(ThreadPoolTaskExecutor taskExecutor), for example with new ThreadPoolExecutor.CallerRunsPolicy().
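A sketch of that fix (pool sizes are illustrative assumptions; on Spring 4.3 extend AbstractWebSocketMessageBrokerConfigurer instead of implementing the interface):

@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig implements WebSocketMessageBrokerConfigurer {

    @Override
    public void configureClientOutboundChannel(ChannelRegistration registration) {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(4);
        executor.setMaxPoolSize(8);
        executor.setQueueCapacity(1000);
        // overflow runs on the sending thread instead of being aborted
        executor.setRejectedExecutionHandler(new ThreadPoolExecutor.CallerRunsPolicy());
        executor.initialize();
        registration.taskExecutor(executor);
    }
}

With CallerRunsPolicy the sender slows down under load instead of silently dropping outbound messages.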
