Live reload on database update - Java

Previously I worked with Meteor and MongoDB. While working with them I noticed that the Meteor server reloads the data whenever anything changes in MongoDB.
Can we do the same in Spring Boot (Java)? I checked live-reload tools and plugins, but these reload or restart the server when the code changes, not when the database changes.

I assume that you are talking about MongoDB change streams.
Yes, you can register a listener:
Imperative Style
Change stream events can be consumed using a MessageListener registered within a MessageListenerContainer. The container takes care of running the task in a separate thread and pushes events to the MessageListener.
@Configuration
class Config {

    @Bean
    MessageListenerContainer messageListenerContainer(MongoTemplate template) {
        return new DefaultMessageListenerContainer(template);
    }
}
Once the MessageListenerContainer is in place, MessageListeners can be registered.
MessageListener<ChangeStreamDocument<Document>, Person> messageListener = (message) -> {
    System.out.println("Hello " + message.getBody().getFirstname());
};

ChangeStreamRequest<Person> request = ChangeStreamRequest.builder()
    .collection("person")
    .filter(newAggregation(match(where("operationType").is("insert"))))
    .publishTo(messageListener)
    .build();

Subscription subscription = messageListenerContainer.register(request, Person.class);
// ...
Reactive Style
Change stream events can be consumed directly via a Flux connected to the change stream.
Flux<ChangeStreamEvent<Person>> changeStream = reactiveTemplate
    .changeStream(newAggregation(match(where("operationType").is("insert"))),
        Person.class, ChangeStreamOptions.empty(), "person");

changeStream.doOnNext(event -> System.out.println("Hello " + event.getBody().getFirstname()))
    .subscribe();
Read more about that:
https://github.com/spring-projects/spring-data-examples/tree/master/mongodb/change-streams

Related

Spring Integration transaction for subflow of router

I have a channel named "creationChannel" backed by a MongoMessageStore like this:
@Bean
ChannelMessageStore messageStore() {
    return new MongoDbChannelMessageStore(mongoDatabaseFactory);
}

@Bean
PollableChannel creationChannel(ChannelMessageStore messageStore) {
    return MessageChannels.queue("creationChannel", messageStore, "create").get();
}
And I want to use it in my flow here, but I want to be sure that a message from there is consumed only if "createOrderHandler" worked fine (the same applies to "updateOrderHandler", but with a different channel).
...some code here...
.<HeadersElement, OperationType>route(
    payload -> route(payload),
    spec -> spec
        .transactional(transactionHandleMessageAdvice)
        .subFlowMapping(
            OperationType.New,
            sf -> sf
                .channel("creationChannel")
                .log(Level.DEBUG, "Creation of a new order", Message::getPayload)
                .transform(Mapper::mapCreate)
                .handle(createOrderHandler,
                    handlerSpec -> handlerSpec.advice(retryOperationsInterceptor))
        )
        .subFlowMapping(
            OperationType.Update,
            sf -> sf
                .channel("updateChannel")
                .log(Level.DEBUG, "Update for existing order", Message::getPayload)
                .transform(Mapper::mapUpdate)
                .handle(updateOrderHandler,
                    handlerSpec -> handlerSpec.advice(retryOperationsInterceptor))
        )
)
...some code here...
I tried to configure "transactionHandleMessageAdvice" like this:
@Bean
TransactionHandleMessageAdvice transactionHandleMessageAdvice(MongoTransactionManager transactionManager) {
    return new TransactionHandleMessageAdvice(transactionManager, new Properties());
}
But messages are still being deleted from the database after the handler fails with an exception.
Maybe I should configure a Poller for the subflows and configure it with the MongoTransactionManager somehow?
Maybe I should configure a Poller for the subflows and configure it with the MongoTransactionManager somehow?
That's a correct assumption. As soon as you have a thread shift in the flow (like your PollableChannel creationChannel), the current transaction is committed at the moment the message is placed into the store. Nothing more happens on the current thread and, therefore, in the current transaction which you started with that .transactional(transactionHandleMessageAdvice).
To make reading transactional, you indeed have to configure a Poller on the .transform(Mapper::mapCreate) endpoint. That way, every poll from that queue channel is transactional until you shift to a different thread again.
There is just no way (and there must not be one) to make the whole async flow transactional, since transactions are tied to the ThreadLocal: at the moment the call stack returns to the transaction initiator, the transaction is committed or rolled back. With async logic we intend to "send-and-forget" from the producer side and let the consumer deal with the data whenever it is ready. That is not what transactions were designed for.
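For reference, a minimal sketch of that idea against the creation subflow from the question (bean and type names are reused from there; the poll interval is illustrative):
.subFlowMapping(
    OperationType.New,
    sf -> sf
        .channel("creationChannel")
        // The poller sits on the first endpoint after the queue channel; each
        // poll (the receive() from the Mongo store plus everything downstream
        // on the polling thread) then runs in one MongoDB transaction, so the
        // message is removed from the store only if the handler succeeds.
        .transform(Mapper::mapCreate,
            e -> e.poller(Pollers.fixedDelay(1000)
                .transactional(transactionManager)))
        .handle(createOrderHandler,
            handlerSpec -> handlerSpec.advice(retryOperationsInterceptor))
)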

WebFlux WebSocketClient: how to send multiple requests in the same session [designing a client library]

TL;DR
We are trying to design a WebSocket server using the Spring WebFlux WebSocket implementation. The server has the usual HTTP server operations, e.g. create/fetch/update/fetchall. Using WebSockets, we want to expose a single endpoint so clients can leverage one connection for all kinds of operations, given that WebSockets are meant for this purpose. Is this the right design with WebFlux and WebSockets?
Long Version
We are starting a project which is going to use the reactive WebSocket support from spring-webflux. We need to build a reactive client library which consumers can use to connect to the server.
On the server, we get a request, read a message, save it, and return a static response:
public Mono<Void> handle(WebSocketSession webSocketSession) {
    Flux<WebSocketMessage> response = webSocketSession.receive()
        .map(WebSocketMessage::retain)
        .concatMap(webSocketMessage -> Mono.just(webSocketMessage)
            .map(parseBinaryToEvent) // logic to get the domain object
            .flatMap(e -> service.save(e))
            .thenReturn(webSocketSession.textMessage(SAVE_SUCCESSFUL))
        );
    return webSocketSession.send(response);
}
On the client, we want to make a call when someone calls the save method and return the response from the server.
public Mono<String> save(Event message) {
    new ReactorNettyWebSocketClient().execute(uri, session -> {
        return session
            .send(Mono.just(session.binaryMessage(formatEventToMessage)))
            .then(session.receive()
                .map(WebSocketMessage::getPayloadAsText)
                .doOnNext(System.out::println).then()); // how to return this to the client?
    });
    return null;
}
We are not sure how to go about designing this. Ideally, we think:
1) client.execute should be called only once, somehow holding on to the session; the same session should be used to send data in subsequent calls.
2) How do we return the response from the server which we get in session.receive?
3) What about fetch, where the response is huge (not just a static string but a list of events) in session.receive?
We have done some research but could not find proper resources on the WebFlux WebSocket client documentation/implementation online. Any pointers on how to move ahead?
Please! Use RSocket!
It is an absolutely correct design, and it is worth saving resources by using only one connection per client for all possible operations.
However, don't reinvent the wheel; use a protocol that gives you all of these kinds of communication.
RSocket has a request-response model which covers the most common client-server interaction today.
RSocket has a request-stream communication model, so you can fulfill all your needs and return a stream of events asynchronously while reusing the same connection. RSocket does all the mapping of logical streams onto the physical connection and back, so you will not feel the pain of doing that yourself.
RSocket has far more interaction models, such as fire-and-forget and stream-stream, which could be useful for sending a stream of data in both directions.
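For a taste of what that looks like in code, here is a hedged sketch of a client using the pre-1.0 RSocket-Java API (RSocketFactory style; the endpoint URI and payload contents are illustrative, and exact signatures vary between versions):
import io.rsocket.Payload;
import io.rsocket.RSocket;
import io.rsocket.RSocketFactory;
import io.rsocket.transport.netty.client.WebsocketClientTransport;
import io.rsocket.util.DefaultPayload;

import java.net.URI;

import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

public class RSocketClientSketch {

    public static void main(String[] args) {
        // One physical WebSocket connection, shared by all logical streams.
        RSocket rSocket = RSocketFactory.connect()
                .transport(WebsocketClientTransport.create(URI.create("ws://localhost:8080/rsocket")))
                .start()
                .block();

        // request-response: a single save-style operation.
        Mono<Payload> saveAck = rSocket.requestResponse(DefaultPayload.create("save event"));

        // request-stream: a fetch-all returning many events over the same connection.
        Flux<Payload> events = rSocket.requestStream(DefaultPayload.create("fetch all"));

        saveAck.map(Payload::getDataUtf8).subscribe(System.out::println);
        events.map(Payload::getDataUtf8).subscribe(System.out::println);
    }
}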
How to use RSocket in Spring
One of the options to do so is RSocket-Java, an implementation of the RSocket protocol. RSocket-Java is built on top of Project Reactor, so it fits the Spring WebFlux ecosystem naturally.
Unfortunately, there is no official integration with the Spring ecosystem yet. Fortunately, I spent a couple of hours putting together a simple RSocket Spring Boot Starter that integrates Spring WebFlux with RSocket and exposes a WebSocket RSocket server along with the WebFlux HTTP server.
Why is RSocket a better approach?
Basically, RSocket hides the complexity of implementing the same approach yourself. With RSocket we don't have to define the interaction model as a custom protocol, nor implement it in Java. RSocket takes care of delivering data to the particular logical channel for us. It provides a built-in client that sends messages over the same WebSocket connection, so we don't have to invent a custom implementation of that.
Make it even better with RSocket-RPC
Since RSocket is just a protocol, it does not prescribe any message format; that challenge is left to the business logic. However, there is the RSocket-RPC project, which provides Protocol Buffers as a message format and reuses the same code-generation technique as gRPC. Using RSocket-RPC we can easily build an API for the client and server and not care about transport and protocol abstractions at all.
The same RSocket Spring Boot integration provides an example of RSocket-RPC usage as well.
Alright, that hasn't convinced me; I still want a custom WebSocket server
For that purpose, you will have to implement that hell yourself. I have already done it once before, but I can't point you to that project since it is an enterprise one.
Nevertheless, I can share a couple of code samples that can help you in building a proper client and server.
Server side
Handler and open logical subscriptions mapping
The first point that must be taken into account is that all logical streams within one physical connection should be stored somewhere:
class MyWebSocketRouter implements WebSocketHandler {

    final Map<String, EnumMap<ActionMessage.Type, ChannelHandler>> channelsMapping;

    @Override
    public Mono<Void> handle(WebSocketSession session) {
        final Map<String, Disposable> channelsIdsToDisposableMap = new HashMap<>();
        ...
    }
}
There are two maps in the sample above. The first is your route mapping, which allows you to identify a route based on the incoming message's parameters. The second is created for the request-stream use case (in my case it was a map of active subscriptions), so you can send a message frame that creates a subscription, or subscribes you to a specific action, and keep that subscription around so that once the unsubscribe action is executed you can be unsubscribed, if the subscription exists.
Use a Processor for message multiplexing
In order to send messages back from all logical streams, you have to multiplex them into one stream. For example, using Reactor, you can do that with a UnicastProcessor:
@Override
public Mono<Void> handle(WebSocketSession session) {
    final UnicastProcessor<ResponseMessage<?>> funIn =
        UnicastProcessor.create(Queues.<ResponseMessage<?>>unboundedMultiproducer().get());
    ...
    return Mono
        .subscriberContext()
        .flatMap(context -> Flux.merge(
            session
                .receive()
                ...
                .cast(ActionMessage.class)
                .publishOn(Schedulers.parallel())
                .doOnNext(am -> {
                    switch (am.type) {
                        case CREATE:
                        case UPDATE:
                        case CANCEL: {
                            ...
                        }
                        case SUBSCRIBE: {
                            Flux<ResponseMessage<?>> flux = Flux
                                .from(
                                    channelsMapping.get(am.getChannelId())
                                        .get(ActionMessage.Type.SUBSCRIBE)
                                        .handle(am) // returns Publisher<>
                                );
                            if (flux != null) {
                                channelsIdsToDisposableMap.compute(
                                    am.getChannelId() + am.getSymbol(), // you can generate a unique uuid on the client side if needed
                                    (cid, disposable) -> {
                                        ...
                                        return flux
                                            .subscriberContext(context)
                                            .subscribe(
                                                funIn::onNext, // send the message to the Processor manually
                                                e -> {
                                                    funIn.onNext(
                                                        new ResponseMessage<>( // send errors as messages to the Processor here
                                                            0,
                                                            e.getMessage(),
                                                            ...
                                                            ResponseMessage.Type.ERROR
                                                        )
                                                    );
                                                }
                                            );
                                    }
                                );
                            }
                            return;
                        }
                        case UNSUBSCRIBE: {
                            Disposable disposable = channelsIdsToDisposableMap.get(am.getChannelId() + am.getSymbol());
                            if (disposable != null) {
                                disposable.dispose();
                            }
                        }
                    }
                })
                .then(Mono.empty()),
            funIn
                ...
                .map(p -> new WebSocketMessage(WebSocketMessage.Type.TEXT, p))
                .as(session::send)
        ).then()
    );
}
As we can see from the sample above, there are a bunch of things going on:
The message should include route info
The message should include a unique stream id to which it relates.
A separate Processor is used for message multiplexing, and errors should be sent as messages as well.
Each channel should be stored somewhere. In this case we have a simple use case where each message can provide a Flux of messages or just a Mono (in the case of a Mono it could be implemented more simply on the server side, since you don't have to keep a unique stream ID).
This sample does not include message encoding and decoding, so that challenge is left to you.
Client side
The client side is not simple either:
Handle session
To handle the connection we have to allocate two processors, which we can then use to multiplex and demultiplex messages:
UnicastProcessor<> outgoing = ...
UnicastProcessor<> incoming = ...

(session) -> {
    return Flux.merge(
        session.receive()
            .subscribeWith(incoming)
            .then(Mono.empty()),
        session.send(outgoing)
    ).then();
}
Keep all logical streams somewhere
All created streams, whether Mono or Flux, should be stored somewhere so that we can distinguish which stream an incoming message relates to:
Map<String, MonoSink> monoSinksMap = ...;
Map<String, FluxSink> fluxSinksMap = ...;
We have to keep two maps, since MonoSink and FluxSink do not share a common parent interface.
Message Routing
The above samples covered only the initial part of the client side. Now we have to build a message-routing mechanism:
...
.subscribeWith(incoming)
.doOnNext(message -> {
    if (monoSinksMap.containsKey(message.getStreamId())) {
        MonoSink sink = monoSinksMap.get(message.getStreamId());
        monoSinksMap.remove(message.getStreamId());
        if (message.getType() == SUCCESS) {
            sink.success(message.getData());
        }
        else {
            sink.error(message.getCause());
        }
    } else if (fluxSinksMap.containsKey(message.getStreamId())) {
        FluxSink sink = fluxSinksMap.get(message.getStreamId());
        if (message.getType() == NEXT) {
            sink.next(message.getData());
        }
        else if (message.getType() == COMPLETE) {
            fluxSinksMap.remove(message.getStreamId());
            sink.next(message.getData()); // deliver the final element, then complete
            sink.complete();
        }
        else {
            fluxSinksMap.remove(message.getStreamId());
            sink.error(message.getCause());
        }
    }
})
The above code sample shows how we can route incoming messages.
Multiplex requests
The final part is message multiplexing. For that purpose, let's look at a possible sender class implementation:
class Sender {

    UnicastProcessor<> outgoing = ...
    UnicastProcessor<> incoming = ...

    Map<String, MonoSink> monoSinksMap = ...;
    Map<String, FluxSink> fluxSinksMap = ...;

    public Sender() {
        // create the websocket connection here and put the code mentioned earlier
    }

    Mono<R> sendForMono(T data) {
        // generate a message with a unique stream id
        return Mono.<R>create(sink -> {
            monoSinksMap.put(streamId, sink);
            outgoing.onNext(message); // send the message to the server only when the Mono is subscribed
        });
    }

    Flux<R> sendForFlux(T data) {
        return Flux.<R>create(sink -> {
            fluxSinksMap.put(streamId, sink);
            outgoing.onNext(message); // send the message to the server only when the Flux is subscribed
        });
    }
}
Summary of the custom implementation
Hardcore
No backpressure support is implemented, so that could be another challenge
Easy to shoot yourself in the foot
Takeaways
PLEASE, use RSocket; don't invent a protocol yourself, it is HARD!!!
To learn more about RSocket from the Pivotal folks: https://www.youtube.com/watch?v=WVnAbv65uCU
To learn more about RSocket from one of my talks: https://www.youtube.com/watch?v=XKMyj6arY2A
There is a featured framework built on top of RSocket called Proteus; you might be interested in that: https://www.netifi.com/
To learn more about Proteus from a core developer of the RSocket protocol: https://m.youtube.com/watch?v=_rqQtkIeNIQ
Not sure if this is your problem, but
I see that you are sending a static Flux response (which is a closeable stream);
you need an open stream to send messages to that session. For example, you can create a processor:
public class SocketMessageComponent {

    private DirectProcessor<String> emitterProcessor;
    private Flux<String> subscriber;

    public SocketMessageComponent() {
        emitterProcessor = DirectProcessor.create();
        subscriber = emitterProcessor.share();
    }

    public Flux<String> getSubscriber() {
        return subscriber;
    }

    public void sendMessage(String message) {
        emitterProcessor.onNext(message);
    }
}
And then you can send:
public Mono<Void> handle(WebSocketSession webSocketSession) {
    this.webSocketSession = webSocketSession;
    return webSocketSession.send(socketMessageComponent.getSubscriber()
            .map(webSocketSession::textMessage))
        .and(webSocketSession.receive()
            .map(WebSocketMessage::getPayloadAsText).log());
}

Spring WebFlux rest controller serves only first two subscriptions

There's a Flux created programmatically via the Flux.create method:
Flux<Tweet> flux = Flux.<Tweet>create(emitter -> ...);
There's a REST controller:
@RestController
public class StreamController {
    ...
    @GetMapping("/top-words")
    public Flux<TopWords> streamTopWords() {
        return topWordsStream.getTopWords();
    }
}
There are a couple of web clients (in standalone processes):
WebClient.create(".../top-words")
    .method(HttpMethod.GET)
    .accept(MediaType.TEXT_EVENT_STREAM)
    .retrieve()
    .bodyToFlux(TopWords.class)
    .subscribe(System.out::println);
There are a couple of EventSource instances in JavaScript:
var eventSource = new EventSource(".../top-words");
eventSource.onmessage = function (e) {
    console.log("Processing message: ", e.data);
};
Only the first two subscribers start receiving messages (no matter whether they are web clients or EventSource instances). The others open the connection and get an HTTP 200 status, but their event stream stays empty. There are no errors on either the client or the server side.
I don't understand where the limit of 2 subscribers is imposed. What do I have to do to support more than 2 subscribers?
The application is built with Spring Boot 2.0.0.RELEASE and auto-configured with spring-boot-starter-webflux. The default configuration is not changed.
There was a limitation in the underlying API that I was adapting (the Twitter streaming API).
The goal was to connect to Twitter once and have the tweet stream processed by multiple subscribers.
Originally I thought the emitter passed to the Flux.create method always uses the same FluxSink for all subscribers. That of course doesn't make sense; instead, the FluxSink is provided per subscriber, as the javadoc clearly states.
I implemented my use case with a Twitter listener that allows registration (and un-registration) of a number of FluxSink instances. That way, the single tweet stream can be consumed by multiple subscribers:
Flux<Tweet> flux = Flux.<Tweet>create(twitterListener::addSink);
My twitterListener implements org.springframework.social.twitter.api.StreamListener from spring-social-twitter project.
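A hedged sketch of what such a listener can look like (the StreamListener callbacks are from spring-social-twitter; the fan-out logic is my illustration):
import org.springframework.social.twitter.api.StreamDeleteEvent;
import org.springframework.social.twitter.api.StreamListener;
import org.springframework.social.twitter.api.StreamWarningEvent;
import org.springframework.social.twitter.api.Tweet;
import reactor.core.publisher.FluxSink;

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class MultiSinkTwitterListener implements StreamListener {

    // One sink per downstream subscriber; CopyOnWriteArrayList keeps
    // registration thread-safe without blocking the tweet callback.
    private final List<FluxSink<Tweet>> sinks = new CopyOnWriteArrayList<>();

    public void addSink(FluxSink<Tweet> sink) {
        sinks.add(sink);
        // Un-register automatically when the subscriber cancels or completes.
        sink.onDispose(() -> sinks.remove(sink));
    }

    @Override
    public void onTweet(Tweet tweet) {
        sinks.forEach(sink -> sink.next(tweet)); // fan the single stream out
    }

    @Override
    public void onDelete(StreamDeleteEvent deleteEvent) { }

    @Override
    public void onLimit(int numberOfLimitedTweets) { }

    @Override
    public void onWarning(StreamWarningEvent warningEvent) { }
}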

Publish & Subscribe with Same Connection using Spring Integration MQTT

Due to the design of MQTT, where a connection requires a unique client ID, is it possible to use the same connection to publish and subscribe in Spring Framework/Boot using Spring Integration?
Take this very simple example: it connects to the MQTT broker to subscribe and receive messages, but if you want to publish a message, the first connection disconnects and re-connects after the message is sent.
@Bean
public MqttPahoClientFactory mqttClientFactory() {
    DefaultMqttPahoClientFactory factory = new DefaultMqttPahoClientFactory();
    factory.setServerURIs("tcp://localhost:1883");
    factory.setUserName("guest");
    factory.setPassword("guest");
    return factory;
}

// publisher
@Bean
public IntegrationFlow mqttOutFlow() {
    return IntegrationFlows.from(CharacterStreamReadingMessageSource.stdin(),
            e -> e.poller(Pollers.fixedDelay(1000)))
        .transform(p -> p + " sent to MQTT")
        .handle(mqttOutbound())
        .get();
}

@Bean
public MessageHandler mqttOutbound() {
    MqttPahoMessageHandler messageHandler = new MqttPahoMessageHandler("siSamplePublisher", mqttClientFactory());
    messageHandler.setAsync(true);
    messageHandler.setDefaultTopic("siSampleTopic");
    return messageHandler;
}

// consumer
@Bean
public IntegrationFlow mqttInFlow() {
    return IntegrationFlows.from(mqttInbound())
        .transform(p -> p + ", received from MQTT")
        .handle(logger())
        .get();
}

private LoggingHandler logger() {
    LoggingHandler loggingHandler = new LoggingHandler("INFO");
    loggingHandler.setLoggerName("siSample");
    return loggingHandler;
}

@Bean
public MessageProducerSupport mqttInbound() {
    MqttPahoMessageDrivenChannelAdapter adapter = new MqttPahoMessageDrivenChannelAdapter("siSampleConsumer",
            mqttClientFactory(), "siSampleTopic");
    adapter.setCompletionTimeout(5000);
    adapter.setConverter(new DefaultPahoMessageConverter());
    adapter.setQos(1);
    return adapter;
}
Working with two separate connections becomes difficult if you need to wait for an answer/result after publishing a message...
the first connection will disconnect and re-connect after the message is sent.
Not sure what you mean by that; both components will keep a persistent connection open.
Since the factory doesn't connect the client (the adapters do), it's not designed for using a shared client.
Using a single connection won't really help with the coordination of requests/replies, because the reply will still come back asynchronously on another thread.
If you have some data in the request/reply that you can use to correlate replies with requests, you can use a BarrierMessageHandler to perform that task. See my answer here for an example; it uses the standard correlation-id header, but that's not possible with MQTT, so you need something in the message itself.
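To illustrate, here is a hedged sketch of the barrier approach (BarrierMessageHandler and its trigger(...) method are real Spring Integration APIs; the channel names and the extractKey(...) helper are hypothetical):
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.aggregator.BarrierMessageHandler;
import org.springframework.integration.aggregator.CorrelationStrategy;
import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.messaging.Message;

@Configuration
public class BarrierConfig {

    // Suspends the sending thread until a correlated MQTT reply arrives or times out.
    @Bean
    @ServiceActivator(inputChannel = "requestChannel")
    public BarrierMessageHandler barrier() {
        // Correlate on a value carried in the payload itself, since MQTT has no
        // reply-to or correlation-id headers; extractKey(...) is hypothetical.
        BarrierMessageHandler barrier = new BarrierMessageHandler(
                10_000, (CorrelationStrategy) m -> extractKey(m.getPayload()));
        barrier.setOutputChannelName("repliesOut");
        return barrier;
    }

    // The MQTT inbound flow releases the thread waiting on the matching key.
    @ServiceActivator(inputChannel = "mqttReplies")
    public void release(Message<?> reply) {
        barrier().trigger(reply);
    }

    private String extractKey(Object payload) {
        return payload.toString(); // hypothetical: pull your correlation field out of the payload
    }
}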
TL;DR
The answer is no, not with the current Spring Boot MQTT Integration implementation (and maybe not even with future ones).
Answer
I'm facing the exact same situation: I need an MQTT client to be open for both inbound and outbound, keeping the connection persistent and sharing the same configuration (client ID, credentials, etc.), while staying as close as possible to the Spring Integration flow design.
In order to achieve this, I had to reimplement MqttPahoMessageDrivenChannelAdapter, MqttPahoMessageHandler, and a client factory.
In both MqttPahoMessageDrivenChannelAdapter and MqttPahoMessageHandler I settled on the async client (IMqttAsyncClient) in order to fix which one is used. I then had to review the parts of the code where the client instance is called/used, checking whether it had already been instantiated by the other flow and checking its status (e.g. not trying to connect it if it was already connected).
The client factory was easier: I reimplemented getAsyncClientInstance(String url, String clientId), using the concatenation of url and clientId as the key under which the instance is stored in a map, from which the existing instance is retrieved when the other flow requests it.
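For illustration, a hedged sketch of such a caching factory (getAsyncClientInstance(String, String) is the factory method being overridden; declared exceptions and internals may differ across spring-integration-mqtt versions):
import org.eclipse.paho.client.mqttv3.IMqttAsyncClient;
import org.eclipse.paho.client.mqttv3.MqttException;
import org.springframework.integration.mqtt.core.DefaultMqttPahoClientFactory;

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SharedMqttPahoClientFactory extends DefaultMqttPahoClientFactory {

    private final Map<String, IMqttAsyncClient> clients = new ConcurrentHashMap<>();

    @Override
    public synchronized IMqttAsyncClient getAsyncClientInstance(String url, String clientId)
            throws MqttException {
        // url + clientId is the cache key, so the inbound and outbound flows that
        // share the same broker URI and client id get the same client instance.
        String key = url + clientId;
        IMqttAsyncClient client = clients.get(key);
        if (client == null) {
            client = super.getAsyncClientInstance(url, clientId);
            clients.put(key, client);
        }
        return client;
    }
}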
It somehow works, but it's just a test and I'm not even sure it's a good approach. (I've started another StackOverflow question to track my specific scenario.)
Can you share how you managed your situation?

Spring user destinations

I have a Spring Boot application that publishes to user-defined destination channels, like so:
@Autowired
private SimpMessagingTemplate template;

public void send() {
    //..
    String uniqueId = "123";
    this.template.convertAndSendToUser(uniqueId, "/event", "Hello");
}
Then a STOMP-over-SockJS client can subscribe to it and receive the message. Suppose I have a STOMP endpoint registered in my Spring application called "/data":
var ws = new SockJS("/data");
var client = Stomp.over(ws);
var connect_callback = function() {
    client.subscribe("/user/123/event", sub_callback);
};
var sub_callback = function(msg) {
    alert(msg);
};
client.connect('', '', connect_callback);
Actually, there will be more than one client subscribing to the same user destination, so each publish/subscribe channel is not one-to-one. I am only doing it this way because Spring's "/topic" destinations have to be defined programmatically and "/queue" destinations can be consumed by only one user. How do I know when a user destination no longer has any subscribers? And how do I delete a user destination?
@SendToUser("/queue/dest")
public String send() {
    //..
    String uniqueId = "123";
    return "hello";
}
On the client you would subscribe to '/user/queue/dest'.
http://docs.spring.io/spring/docs/current/spring-framework-reference/html/websocket.html#websocket-stomp-user-destination
After adding a channel interceptor and setting breakpoints while running in debug mode in Eclipse, I found that there is a collection of WebSocket session and destination mappings in the registry objects held by the message handlers, which in turn are stored inside the message channel object. I found this for topics, though I'm not sure about the purely user-destination approach.
However, Spring does not leave the API open for me to simply call a function that returns a list of all subscribers to every topic, at least not without passing in a message. Everything else that would have been helpful is private and cannot be accessed programmatically. This is unhelpful, as I would like to trigger an action upon a client's unsubscribe or disconnect ONLY when the topic being unsubscribed/disconnected from has no other listening clients left.
I'm now looking at embedding a full-featured broker such as ActiveMQ to see if it can solve my problem.
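For what it's worth, a hedged sketch of an event-driven alternative that avoids the private registries (SessionUnsubscribeEvent, SessionDisconnectEvent, and SimpUserRegistry are standard Spring APIs; the hard-coded destination is this question's example, and matching may need adjusting for how user destinations are expanded per session):
import org.springframework.context.event.EventListener;
import org.springframework.messaging.simp.user.SimpUserRegistry;
import org.springframework.stereotype.Component;
import org.springframework.web.socket.messaging.SessionDisconnectEvent;
import org.springframework.web.socket.messaging.SessionUnsubscribeEvent;

@Component
public class UserDestinationWatcher {

    private final SimpUserRegistry userRegistry;

    public UserDestinationWatcher(SimpUserRegistry userRegistry) {
        this.userRegistry = userRegistry;
    }

    @EventListener
    public void onUnsubscribe(SessionUnsubscribeEvent event) {
        cleanUpIfOrphaned("/user/123/event");
    }

    @EventListener
    public void onDisconnect(SessionDisconnectEvent event) {
        cleanUpIfOrphaned("/user/123/event");
    }

    private void cleanUpIfOrphaned(String destination) {
        // Count the subscriptions still attached to the destination in question.
        int remaining = userRegistry
                .findSubscriptions(sub -> destination.equals(sub.getDestination()))
                .size();
        if (remaining == 0) {
            // The last client is gone: trigger the cleanup/delete action here.
        }
    }
}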
