Reactor / WebFlux: implement a reactive HTTP news ticker (Java)

I have a request that is rather simple to formulate, but I cannot pull it off without leaking resources.
I want to return a response of type application/stream+json, featuring news events someone posted. I do not want to use WebSockets - not because I don't like them, I just want to know how to do it with a stream.
For this I need to return a Flux<News> from my REST controller that is continuously fed with news once someone posts any.
My attempt for this was creating a Publisher:
public class UpdatePublisher<T> implements Publisher<T> {

    private List<Subscriber<? super T>> subscribers = new ArrayList<>();

    @Override
    public void subscribe(Subscriber<? super T> s) {
        subscribers.add(s);
    }

    public void pushUpdate(T message) {
        subscribers.forEach(s -> s.onNext(message));
    }
}
And a simple News Object:
public class News {
    String message;
    // Constructor, getters, some properties omitted for readability...
}
And here are the endpoints to publish news and to get the stream of news, respectively:
// ...
private UpdatePublisher<String> updatePublisher = new UpdatePublisher<>();
@GetMapping(value = "/news/ticker", produces = "application/stream+json")
public Flux<News> getUpdateStream() {
    return Flux.from(updatePublisher).map(News::new);
}

@PutMapping("/news")
public void putNews(@RequestBody News news) {
    updatePublisher.pushUpdate(news.getMessage());
}
This WORKS, but I cannot unsubscribe or access any given subscription again - so once a client disconnects, the updatePublisher will just continue to push onto a growing number of dead channels, as I have no way to call the onComplete() handler on the subscriptions.
TL;DR:
Can one push messages onto a possibly endless Flux from a different thread and still terminate the Flux on demand, without relying on a "connection reset by peer" exception or something along those lines?

You should never try to implement the Publisher interface yourself, as it boils down to getting the Reactive Streams specification right - which is exactly the issue you're facing here.
Instead, you should use one of the generator operators provided by Reactor itself (this is really a Reactor question, nothing specific to Spring WebFlux).
In this case, Flux.create or Flux.push are probably the best candidates, given your code uses some type of event listener to push events down the stream. See the Reactor reference documentation on programmatically creating a sequence.
Without more details, it's hard to give you a concrete code sample that solves your problem. Here are a few pointers though (a minimal sketch follows the list):
you might want to .share() the stream of events for all subscribers if you'd like some multicast-like communication pattern
pay attention to the push/pull/push+pull model that you'd like to have here; how is backpressure supposed to work? What if we produce more events than the subscribers can handle?
this model would only work on a single application instance. If you'd like this to work on multiple application instances, you might want to look into messaging patterns using a broker
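To make the first pointer concrete, here is a minimal sketch (not a definitive implementation) of the controller built around Flux.create and share(). The listener registry, class names and the teardown hook are assumptions for illustration, not taken from your code:

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PutMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

import reactor.core.publisher.Flux;

@RestController
public class NewsTickerController {

    // stands in for "some type of event listener" feeding the stream (an assumption)
    private final List<Consumer<News>> listeners = new CopyOnWriteArrayList<>();

    // one shared upstream: Flux.create registers a listener for the (single) shared
    // subscription, and share() multicasts it to every HTTP client
    private final Flux<News> newsStream = Flux.<News>create(sink -> {
        Consumer<News> listener = sink::next;
        listeners.add(listener);
        // runs when the shared subscription is cancelled, so nothing leaks
        sink.onDispose(() -> listeners.remove(listener));
    }).share();

    @GetMapping(value = "/news/ticker", produces = "application/stream+json")
    public Flux<News> getUpdateStream() {
        return newsStream;
    }

    @PutMapping("/news")
    public void putNews(@RequestBody News news) {
        listeners.forEach(listener -> listener.accept(news));
    }
}

With share(), the listener registration happens when the first client subscribes and is torn down when the last one cancels, which is the unsubscription hook the hand-rolled Publisher was missing. Backpressure still deserves thought: Flux.create buffers by default, while Flux.push or an explicit OverflowStrategy let you choose a different behavior.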

Related

Manage sagas between a microservice that uses Axon and one that doesn't?

I'm working on a project where there are, for the sake of this question, two microservices:
A new OrderService (Spring Boot)
A "legacy" Invoice Service (Jersey Web Application)
Additionally, there is a RabbitMQ message broker.
In the OrderService, we've used the Axon framework for event-sourcing and CQRS.
We would now like to use sagas to manage the transactions between the OrderService and InvoiceService.
From what I've read, in order to make this change, we would need to do the following:
Switch from a SimpleCommandBus -> DistributedCommandBus
Change configuration to enable distributed commands
Connect the microservices using either Spring Cloud or JGroups
Add AxonFramework to the legacy InvoiceService project and handle the saga events received.
It is the fourth point where we have trouble: the InvoiceService is maintained by a separate team which is unwilling to make changes.
In this case, is it feasible to use the message broker instead of the command gateway? For instance, instead of doing something like this:
public class OrderManagementSaga {

    private boolean paid = false;
    private boolean delivered = false;

    @Inject
    private transient CommandGateway commandGateway;

    @StartSaga
    @SagaEventHandler(associationProperty = "orderId")
    public void handle(OrderCreatedEvent event) {
        // client generated identifiers
        ShippingId shipmentId = createShipmentId();
        InvoiceId invoiceId = createInvoiceId();
        // associate the Saga with these values, before sending the commands
        SagaLifecycle.associateWith("shipmentId", shipmentId);
        SagaLifecycle.associateWith("invoiceId", invoiceId);
        // send the commands
        commandGateway.send(new PrepareShippingCommand(...));
        commandGateway.send(new CreateInvoiceCommand(...));
    }

    @SagaEventHandler(associationProperty = "shipmentId")
    public void handle(ShippingArrivedEvent event) {
        delivered = true;
        if (paid) { SagaLifecycle.end(); }
    }

    @SagaEventHandler(associationProperty = "invoiceId")
    public void handle(InvoicePaidEvent event) {
        paid = true;
        if (delivered) { SagaLifecycle.end(); }
    }

    // ...
}
We would do something like this:
public class OrderManagementSaga {

    private boolean paid = false;
    private boolean delivered = false;

    @Inject
    private transient RabbitTemplate rabbitTemplate;

    @StartSaga
    @SagaEventHandler(associationProperty = "orderId")
    public void handle(OrderCreatedEvent event) {
        // client generated identifiers
        ShippingId shipmentId = createShipmentId();
        InvoiceId invoiceId = createInvoiceId();
        // associate the Saga with these values, before sending the commands
        SagaLifecycle.associateWith("shipmentId", shipmentId);
        SagaLifecycle.associateWith("invoiceId", invoiceId);
        // send the commands
        rabbitTemplate.convertAndSend(new PrepareShippingCommand(...));
        rabbitTemplate.convertAndSend(new CreateInvoiceCommand(...));
    }

    @SagaEventHandler(associationProperty = "shipmentId")
    public void handle(ShippingArrivedEvent event) {
        delivered = true;
        if (paid) { SagaLifecycle.end(); }
    }

    @SagaEventHandler(associationProperty = "invoiceId")
    public void handle(InvoicePaidEvent event) {
        paid = true;
        if (delivered) { SagaLifecycle.end(); }
    }

    // ...
}
In this case, when we receive a message from the InvoiceService on the exchange, we would publish the corresponding event through the event gateway or using the SpringAMQPPublisher.
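For illustration (queue name, payload type and the event mapping are placeholders, not actual code we have), the translation from the broker back into an Axon event could look roughly like this:

import org.axonframework.eventhandling.gateway.EventGateway;
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.stereotype.Component;

@Component
public class InvoiceReplyListener {

    private final EventGateway eventGateway;

    public InvoiceReplyListener(EventGateway eventGateway) {
        this.eventGateway = eventGateway;
    }

    @RabbitListener(queues = "invoice-replies") // hypothetical queue name
    public void onMessage(InvoicePaidMessage message) { // hypothetical payload type
        // republish as a domain event so the saga's @SagaEventHandler on invoiceId fires
        eventGateway.publish(new InvoicePaidEvent(message.getInvoiceId()));
    }
}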
Questions:
Is this a valid approach?
Is there a documented way of handling this kind of scenario in Axon? If so, can you please provide a link to the documentation or any sample code?
First off, and not completely tailored towards your question: you're referring to the Axon extensions to enable distributed messaging. Although this is indeed an option, know that it will require you to configure several separate solutions dedicated to distributed commands, events, and event storage. Using a unified solution like Axon Server ensures that you do not have to dive into three (or more) different approaches to make it all work. Instead, Axon Server is attached to the Axon Framework application, and it does all the distribution and event storage for you.
That means that things like the DistributedCommandBus and SpringAMQPPublisher are unnecessary to fulfill your goal if you use Axon Server.
That's a piece of FYI which can simplify your life; by no means a necessity, of course. So let's move to your actual question:
Is this a valid approach?
I think it is perfectly fine for a Saga to act as an anti-corruption layer in this form. A Saga reacts on events and sends out operations; whether those operations take the form of commands or of calls to another third-party service is entirely up to you.
Note though that I feel AMQP is more a solution for distributed events (a broadcast approach) than a means to send commands (to one direct handler). It can be morphed to suit your needs, but I'd regard it as suboptimal for command dispatching, as it needs to be adjusted for that.
Lastly, make sure that your Saga can cope with exceptions from sending those operations over RabbitMQ. You wouldn't want the InvoiceService to fail on that message while your OrderService's Saga assumes it's on the happy path of the transaction, of course.
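For example, a minimal sketch of guarding the send inside the saga shown above (exchange, routing key, constructor arguments and the compensating action are placeholder assumptions):

@StartSaga
@SagaEventHandler(associationProperty = "orderId")
public void handle(OrderCreatedEvent event) {
    InvoiceId invoiceId = createInvoiceId();
    SagaLifecycle.associateWith("invoiceId", invoiceId);
    try {
        // exchange, routing key and command arguments are illustrative
        rabbitTemplate.convertAndSend("invoice-exchange", "invoice.create",
                new CreateInvoiceCommand(event.getOrderId(), invoiceId));
    } catch (AmqpException e) {
        // don't stay silently on the happy path: compensate, schedule a retry,
        // or end the saga explicitly as failed
        SagaLifecycle.end();
    }
}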
Concluding: it is indeed feasible to use another message broker within a Saga.
Is there a documented way of handling this kind of scenario in Axon?
There's no documented Axon way to deal with such a scenario at the moment, as there is no one-size-fits-all solution. From a pure Axon standpoint, using commands would be the way to go. But as you stated, that's not an option within your domain due to the other (unwilling?) team. Hence you fall back to the actual intent of a Saga, without taking Axon into account, which I would summarize as follows:
A Saga manages a complex business transaction.
It reacts on things happening (events) and sends out operations (commands).
The Saga has a notion of time.
The Saga maintains state over time to know how to react.
That's my two cents, hope this helps you out!

"sharing" parts of a reactive stream over multiple rest calls

I have this Spring WebFlux controller:
@RestController
public class Controller
{
    @PostMapping("/doStuff")
    public Mono<Response> doStuff(@RequestBody Mono<Request> request)
    {
        ...
    }
}
Now, say I wanted to relate separate requests coming to this controller from different clients to group processing based on some property of the Request object.
Take 1:
@PostMapping("/doStuff")
public Mono<Response> doStuff(@RequestBody Mono<Request> request)
{
    return request.flux()
            .groupBy(r -> r.someProperty())
            .flatMap(gf -> gf.map(r -> doStuff(r)));
}
This will not work, because every call gets its own instance of the stream. The whole flux() call doesn't make sense: there will only ever be one Request object going through the stream, even if many of those streams are fired at the same time as a result of simultaneous calls coming from clients. What I need, I gather, is some part of the stream that is shared between all requests, where I could do my grouping - which led me to this slightly over-engineered code.
Take 2:
private AtomicReference<FluxSink<Request>> sink = new AtomicReference<>();
private Flux<Response> serializingStream;

public Controller()
{
    this.serializingStream =
            Flux.<Request>create(fluxSink -> sink.set(fluxSink), ERROR)
                .groupBy(r -> r.someProperty())
                .flatMap(gf -> gf.map(r -> doStuff(r)))
                .publish()
                .autoConnect();
    this.serializingStream.subscribe().dispose(); // dummy subscription to set the sink
}

@PostMapping("/doStuff")
public Mono<Response> doStuff(@RequestBody Request request)
{
    request.setReqId(UUID.randomUUID().toString());
    return
        serializingStream
            .doOnSubscribe(__ -> sink.get().next(request))
            .filter(resp -> resp.getReqId().equals(request.getReqId()))
            .take(1)
            .single();
}
And this kind of works, though it looks like I am doing things I shouldn't (or at least they don't feel right): leaking the FluxSink and then injecting a value through it while subscribing, and adding a request ID so that I can filter out the right response. Also, if an error happens in the serializingStream, it breaks everything for everyone, but I guess I could try to isolate the errors to keep things going.
The question is: is there a better way of doing this that doesn't feel like open-heart surgery?
Also, a related question for a similar scenario: I was thinking about using Akka Persistence to implement event sourcing and have it triggered from inside that Reactor stream. I was reading about Akka Streams, which allow wrapping an Actor and converting that into something that can be hooked up with Reactor (a Publisher or Subscriber), but if every request gets its own stream, I am effectively losing backpressure and risk an OOME from flooding the persistent actor's mailbox - so I guess that problem falls into the same category as the one I described above.
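For what it's worth, a minimal sketch of the same "Take 2" idea without leaking the FluxSink through an AtomicReference, assuming a Reactor version where the Sinks API is available (process() stands in for the actual per-request work and is not from the original code):

private final Sinks.Many<Request> requests = Sinks.many().multicast().onBackpressureBuffer();

private final Flux<Response> responses = requests.asFlux()
        .groupBy(r -> r.someProperty())
        .flatMap(gf -> gf.map(r -> process(r))) // per-element error isolation is still needed here
        .share();

@PostMapping("/doStuff")
public Mono<Response> doStuff(@RequestBody Request request)
{
    request.setReqId(UUID.randomUUID().toString());
    return responses
            // note: concurrent emissions may need emitNext with an EmitFailureHandler
            .doOnSubscribe(subscription -> requests.tryEmitNext(request))
            .filter(resp -> resp.getReqId().equals(request.getReqId()))
            .next();
}

This removes the dummy subscription and the sink handoff, but the request-ID correlation and the shared-stream error concern remain, so it is a cleanup rather than a full answer.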

RxJava 2.0 - handling resources for an uncaught subscriber error in publish().refCount()

I am rather new to RxJava and - as so many others - am trying to get my head around exception handling. I read quite a few posts online (e.g. this discussion on how to handle exceptions thrown by an observer's onNext) and think that I get the basic idea of the concepts.
In the above-mentioned discussion, one of the posters says that when an exception is thrown in a subscriber, RxJava does the following:
Implement generic handling to log the failure and stop sending it events
(of any kind) and clean up any resources due to that subscriber and carry
on with any remaining subscriptions.
This is also more or less what I see; the only thing I have problems with is the "clean up any resources" bit. To make that clear, let's assume the following example:
I want to create an Observable that listens to an async event source (e.g. a JMS queue) and calls onNext() for every received message. So in (pseudo-)code I would do something similar to this:
Observable<String> observable = Observable.create(s -> {
    createConnectionToBroker();
    getConsumer().setMessageListener(message -> s.onNext(transform(message)));
    // tear the connection down once the subscription is disposed
    s.setDisposable(Disposables.fromRunnable(() -> tearDownBrokerConnection()));
});
Since I want to reuse the message listener for many subscribers/observers, I do not subscribe directly to the created Observable, but use the publish().refCount() pair instead. Something similar to this:
Observable<String> observableToSubscribeTo = observable.publish().refCount();
Disposable d1 = observableToSubscribeTo.subscribe(s -> ...);
Disposable d2 = observableToSubscribeTo.subscribe(s -> ...);
This all works as expected. The code connects to JMS only when the first subscription is established, and the connection to the broker is closed when the last observer is dispose()d.
However, when a subscriber throws an exception from onNext(), things seem to get messy. As expected, the observer that threw is nuked and won't be notified of any further events. My problem is that when all the remaining subscribers are dispose()d, the Observable that maintains the connection to the message broker is no longer notified. It looks to me as if the subscriber that threw the exception is in some sort of zombie state: it is ignored when it comes to event distribution, but it somehow prevents the root Observable from being notified when the last subscriber is dispose()d.
I understand that RxJava expects observers not to throw, but rather to handle an eventual exception properly. Unfortunately, in the case where I want to provide a library that returns an Observable to the caller, I have no control over my subscribers whatsoever. This means I would never be able to protect my library against misbehaving observers.
So I am asking myself: am I missing something here? Is there really no chance to clean up properly when a subscriber throws? Is this a bug, or is it just me not understanding the library?
Any insights greatly appreciated!
If you could show some unit tests that demonstrate the problem (without the need for JMS), that would be great.
Also, onNext in RxJava 2 should never throw; if it does, it is undefined behavior. If you don't trust your consumers, you can apply an end-of-chain transformer that uses safeSubscribe instead of the plain subscribe, which adds protection against a misbehaving downstream:
.compose(o -> v -> o.safeSubscribe(v))
or
.compose(new ObservableTransformer<T, T>() {
    @Override
    public Observable<T> apply(final Observable<T> source) {
        return new Observable<T>() {
            @Override
            public void subscribeActual(Observer<? super T> observer) {
                source.safeSubscribe(observer);
            }
        };
    }
})
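Applied to the stream from the question, a hedged usage sketch (mightThrow and handle are illustrative consumer methods); the intent is that a throwing onNext gets the subscriber disposed cleanly, releasing the refCount, instead of the zombie state described above:

Observable<String> protectedStream = observableToSubscribeTo
        .compose(o -> v -> o.safeSubscribe(v));

Disposable d1 = protectedStream.subscribe(s -> mightThrow(s));
Disposable d2 = protectedStream.subscribe(s -> handle(s));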

How to execute asynchronous database insert in Actor's onReceive method?

I have a Play Framework 2 application that also uses Akka. I have an Actor that receives messages from a remote system; the number of such messages can be very large. After a message is received, I log it to the database (using the built-in Ebean ORM) and then continue to process it. I don't care how fast this database logging works, but it definitely should not block further processing. Here is a simplified code sample:
public class MessageReceiver extends UntypedActor {

    @Override
    public void onReceive(Object message) throws Exception {
        if (message instanceof ServerMessage) {
            ServerMessage serverMessage = (ServerMessage) message;
            ServerMessageModel serverMessageModel = new ServerMessageModel(serverMessage);
            serverMessageModel.save();
            // now send the message to another actor for further processing
        } else {
            unhandled(message);
        }
    }
}
As I understand it, the database insert is blocking in this implementation, so it does not meet my needs. But I can't figure out how to make it non-blocking. I've read about the Future class, but I can't get it to work, since it should return some value and serverMessageModel.save() returns void. I understand that writing a lot of messages one-by-one into the database is inefficient, but that is not the issue at the moment.
Am I right that this implementation is blocking? If it is, how can I make it run asynchronously?
The Future solution seems good to me. I haven't used Futures from Java, but you can just return an arbitrary Integer or String if you definitely need some return value.
Another option is to send the message to some other actor which does the saving to the DB; then you should make sure that the mailbox of that actor does not overflow. A rough sketch of that option follows.
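A rough sketch of that second option (class and variable names are made up for illustration):

public class DbWriterActor extends UntypedActor {
    @Override
    public void onReceive(Object message) throws Exception {
        if (message instanceof ServerMessageModel) {
            // the blocking save now happens here instead of in MessageReceiver;
            // consider giving this actor its own dispatcher so it cannot starve others
            ((ServerMessageModel) message).save();
        } else {
            unhandled(message);
        }
    }
}

// in MessageReceiver.onReceive, instead of calling save() inline:
dbWriter.tell(serverMessageModel, getSelf());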
Have you considered akka-persistence for this? Maybe that would suit your use case.
If you wish to use a Future, construct an Akka Future with a Callable (anonymous class) whose call() actually runs the db save code. You can put all of this (future creation and call()) in your ServerMessageModel class - maybe call it asyncSave(). Your Future may be a Future<Status>, where Status is the result of asyncSave:
public Future<Status> asyncSave(...) { /* should the params be ServerMessageModel? */
    // future(...) is akka.dispatch.Futures.future; ec is an ExecutionContext,
    // e.g. the actor system's dispatcher
    return future(new Callable<Status>() {
        public Status call() {
            /* do db work here and return a Status */
        }
    }, ec);
}
In your onReceive you can then go ahead with the tell to the other actor. NOTE: if you want to make sure that you fire the tell to the other actor only after this future completes, you can use the Future's onSuccess.
Future<Status> f = serverMessageModel.asyncSave();
f.onSuccess(new OnSuccess<Status>() {
    public void onSuccess(Status result) { otherActor.tell(serverMessage, self()); }
}, getContext().dispatcher());
You can also do failure handling... see http://doc.akka.io/docs/akka/2.3.4/java/futures.html for further details.
Hope that helps.
Persist actor state with Martin Krasser's akka-persistence extension and my JDBC persistence provider, akka-persistence-jdbc: https://github.com/dnvriend/akka-persistence-jdbc

Desktop program communicating with server

I am in the process of moving the business logic of my Swing program onto the server.
What would be the most efficient way to communicate client-server and server-client?
The server will be responsible for authentication, and for fetching and storing data, so the program will have to communicate frequently.
It depends on a lot of things. If you want a real answer, you should clarify exactly what your program will be doing and exactly what falls under your definition of "efficient".
If rapid productivity falls under your definition of efficient, a method that I have used in the past involves serialization to send plain old Java objects down a socket. Recently I have found that, in combination with the Netty API, I am able to rapidly prototype fairly robust client/server communication.
The guts are fairly simple: the client and server both run Netty with an ObjectDecoder and ObjectEncoder in the pipeline. A class is made for each kind of message - for example, a HandshakeRequest class and a HandshakeResponse class.
A handshake request could look like:
public class HandshakeRequest extends Message {
    private static final long serialVersionUID = 1L;
}
and a handshake response may look like:
public class HandshakeResponse extends Message {
    private static final long serialVersionUID = 1L;
    private final HandshakeResult handshakeResult;

    public HandshakeResponse(HandshakeResult handshakeResult) {
        this.handshakeResult = handshakeResult;
    }

    public HandshakeResult getHandshakeResult() {
        return handshakeResult;
    }
}
In Netty, the server would send a handshake request when a client connects, like so:
@Override
public void channelConnected(ChannelHandlerContext ctx, ChannelStateEvent e) {
    Channel ch = e.getChannel();
    ch.write(new HandshakeRequest());
}
The client receives the HandshakeRequest object, but it needs a way to tell what kind of message the server just sent. For this, a Map<Class<?>, Method> can be used. When your program starts, it should iterate through the methods of a class with reflection and place them in the map. Here is an example:
public HashMap<Class<?>, Method> populateMessageHandler() {
    HashMap<Class<?>, Method> temp = new HashMap<Class<?>, Method>();
    for (Method method : getClass().getMethods()) {
        if (method.getAnnotation(MessageHandler.class) != null) {
            Class<?>[] methodParameters = method.getParameterTypes();
            temp.put(methodParameters[1], method);
        }
    }
    return temp;
}
This code iterates through the current class and looks for methods marked with a @MessageHandler annotation, then takes the second parameter of each such method (handlers have a signature like public void handleHandshakeRequest(ChannelHandlerContext ctx, HandshakeRequest request), so the second parameter is the message type) and places that class into the map as a key, with the actual method as its value.
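For completeness, a hedged sketch of the pieces the prose assumes: a runtime-retained marker annotation and a handler whose second parameter is the message type (HandshakeResult.SUCCESS and the reply itself are illustrative):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface MessageHandler {
}

// example handler; getMethods() only sees it if it is public
@MessageHandler
public void handleHandshakeRequest(ChannelHandlerContext ctx, HandshakeRequest request) {
    // reply to the server's handshake request
    ctx.getChannel().write(new HandshakeResponse(HandshakeResult.SUCCESS));
}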
With this map in place, it is very easy to receive a message and dispatch it directly to the method that should handle it:
@Override
public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
    try {
        Message message = (Message) e.getMessage();
        Method method = messageHandlers.get(message.getClass());
        if (method == null) {
            System.out.println("No handler for message!");
        } else {
            method.invoke(this, ctx, message);
        }
    } catch (Exception exception) {
        exception.printStackTrace();
    }
}
There's not really anything left to it. Netty handles all of the messy stuff, allowing us to send serialized objects back and forth with ease. If you decide that you do not want to use Netty, you can wrap your own protocol around Java's ObjectOutputStream. You will have to do a little bit more work overall, but the simplicity of the communication remains intact.
It's a bit hard to say which method is "most efficient" - in terms of what? - and I don't know your use cases, but here are a couple of options:
The most basic way is to simply use "raw" TCP sockets. The upside is that there's nothing extra moving across the network and you create your protocol yourself; the latter is also a downside, since you have to design and implement your own protocol for the communication, plus the basic framework for handling multiple connections on the server end (if there is a need for such).
Using UDP sockets, you'll probably save a little latency and bandwidth (not much; unless you're using something like mobile data, you probably won't notice any difference from TCP in terms of latency), but the networking code is a bit harder: UDP sockets are "connectionless", meaning all the clients' messages end up in the same handler and must be distinguished from one another. If the server needs to keep track of client state, this can be somewhat troublesome to implement correctly.
MadProgrammer brought up RMI (remote method invocation); I've personally never used it, and it seems a bit cumbersome to set up, but it might be pretty good in the long run in terms of implementation.
Probably one of the most common ways is to use HTTP for the communication, for example via a REST interface for web services. There are multiple frameworks (I personally prefer Spring MVC) to help with the implementation, but learning a new framework might be out of your scope for now. Also, complex HTTP queries or long URLs could eat your bandwidth a bit more, but unless we're talking about very large numbers of simultaneous clients, this usually isn't a problem (assuming you run your server(s) in a datacenter with something like 100/100 Mbit connections). This is probably the easiest solution to scale, if it ever comes to that, as there are lots of load-balancing solutions available for web servers.
