I'm building my first event-sourced system. It will have multiple domains, with projects that follow a publication lifecycle at its core. How can I effectively replay or re-apply events from two domains to a new aggregate inside a third domain?
To be more specific, imagine 4 domains, each with its own bounded context and purpose. A short description of these contexts:
Project - A project is a complex object at the core of the system; almost every domain requires project data to operate. A project has one or more ProductTypes, which contain the limited supply of Products.
Media - The media domain covers operations around images, documents and generated reports, and functions as a file server.
Delivery - Delivery allows for the configuration of which content channels to publish all publications to.
Publication - The publication domain handles the complex task of verifying whether a project can be published to the requested status in its current state.
The states of publication follow the lifecycle: concept (not yet published) > announced (optional) > sale > sold-out (publication ended). In my description I focus on the announced status. Concept is not actually a thing for the publication domain, since a project is always in concept if publication does not know about it yet.
My first attempt was setting up a normal aggregate which handled the incoming event AnnouncementPublishedEvent. This requires a project to meet some basic requirements like 'it has a name', 'it has a description', 'it has at least one image' and so on. This means I need to validate this information before the event is applied and therefore I somehow need to supply a project instance in the command.
While doing this I suspected this method defeats the purpose of CQRS, and that I should be looking at the real data source: events. My next attempt was creating a saga that starts when the AnnouncementPublicationRequestedEvent occurs. This saga needs to review which events occurred around the given projectId and apply those to this new 'published project' projection in order to (at least) validate whether the request can be accepted.
I researched and experimented with tracking processors but could not find a good example of how this is done in version 4 of Axon. I also started reading several other questions on Stack Overflow that made me think I might need to reconsider my approach.
Unfortunately, the exact code cannot be shared as it's not open source, and even if I could share it, it's far from a working state. I can use example code to show what I'm trying to do.
@Saga
@ProcessingGroup("AnnouncementPublication")
public class AnnouncementPublicationSaga {

    private static int NUMBER_OF_ALLOWED_IMAGES;

    private PublicationId publicationId;
    private ProjectId projectId;
    private int numberOfImages = 0;
    //...other fields, including targetPublicationStatus and date

    @StartSaga
    @SagaEventHandler(associationProperty = "projectId")
    public void handle(AnnouncementPublicationRequestedEvent event) {
        publicationId = generatePublicationId();
        //set parameters from the event for the saga to use
        projectId = event.getProjectId();
        targetPublicationStatus = event.getPublicationStatus();
        date = event.getDate();
        //initialize the 'published project' aggregate
        //start a replay of associated events for this @ProcessingGroup
    }

    ...

    @SagaEventHandler(associationProperty = "projectId")
    public void handle(ProjectCreatedEvent event) {
        //verify the project exists and has a valid name
    }

    ...

    /* Assumption* on how the association resolver works: */
    @SagaEventHandler(associationProperty = "projectId", associationResolver = MediaProjectAssociator.class)
    public void handle(ProjectImageAdded event) {
        numberOfImages += 1;
    }

    /* Assumption* on how the association resolver works: */
    @SagaEventHandler(associationProperty = "projectId", associationResolver = MediaProjectAssociator.class)
    public void handle(ProjectImageRemoved event) {
        numberOfImages -= 1;
    }

    ...

    /* In my head this should trigger once all events have been replayed
       up to the PublicationRequestedEvent. */
    @SagaEventHandler(associationProperty = "publicationId")
    public void handle(ValidationRequestCompleted event) {
        //ValidationResult validationResult = ValidationResult.builder();
        ...
        if (numberOfImages > NUMBER_OF_ALLOWED_IMAGES) {
            //reason to trigger PublicationRequestDeniedEvent
            //update validationResult
        }
        ...
        if (validationResult.isAcceptable()) {
            //trigger AnnouncementPublicationAcceptedEvent
        } else {
            //trigger AnnouncementPublicationDeniedEvent
        }
    }

    ...

    @EndSaga
    @SagaEventHandler(associationProperty = "publicationId")
    public void handle(AnnouncementPublicationDeniedEvent event) {
        //do stuff to inform why the publication failed
    }

    @EndSaga
    @SagaEventHandler(associationProperty = "publicationId")
    public void handle(AnnouncementPublicationAcceptedEvent event) {
        //do stuff to notify the user of success
        //choice: delegate to delivery for the actual sharing of data,
        //        or delivery itself listens for these events
    }
}
*The association resolver code is an assumption as to how it actually works, as I'm not even close to that part yet. My media context uses a file id as aggregate identifier, since not every event is bound to a project. But all the media events this saga needs to replay will have a projectId field in them. Any feedback on this is welcome, but it's not my main problem right now.
In the end the result should be: a record of the publication or a record of the attempt and why it failed.
The record of the publication contains all data from project or media events that are relevant to a publication. This is mostly information that potential buyers need to make a decision.
For the purpose of this question I don't expect the above to be solved completely. I just want to know whether I'm on the right track with thinking in events, whether my approach of replaying relevant events is the right way to go, and if so, how this can be done in Axon 4.
From your problem description, Martin, I assume you have several distinct Bounded Contexts. Following the definition of a Bounded Context:
Explicitly define the context within which a model applies.
Explicitly set boundaries in terms of team organization,
usage within specific parts of the application,
and physical manifestations such as code bases and database schemas.
Keep the model strictly consistent within these bounds,
but don’t be distracted or confused by issues outside.
From this I'd like to emphasize that within a given Bounded Context, you speak the same language/API with any component.
Between contexts, however, you will share very consciously, using dedicated context mappings such as an anti-corruption layer, to ensure another domain doesn't bleed into your own.
Having said the above, events are part of a specific Bounded Context.
Thus, using multiple streams of events from other contexts to recreate/replay an aggregate in another context should ideally be out of the question.
On top of this, in Axon an Aggregate can only ever be recreated based on events it has published itself.
To still arrive at a solution where a given application ingests events from other applications to re-hydrate an Aggregate, I would take the following steps:
Have a dedicated component (e.g. the anti-corruption layer) which translates the incoming events into a different form of message within your application.
If these events should result in the reconstruction of an Aggregate, you are required to translate the events into commands (see the sketch below). The Aggregate infrastructure components in Axon are meant for the Command Model when talking about CQRS.
Said Aggregate would then handle the commands, perform some business logic and publish an event (or several) as a result.
From here on out, the Framework will deal with replaying all events for the given Aggregate, granted you follow Event Sourcing practices to update the Aggregate's state.
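To make that translation step concrete, here is a minimal sketch of such an anti-corruption component. Only @EventHandler and CommandGateway are Axon API; the event, command and class names are hypothetical stand-ins for your own:
import org.axonframework.commandhandling.gateway.CommandGateway;
import org.axonframework.eventhandling.EventHandler;
import org.springframework.stereotype.Component;

//Hypothetical anti-corruption layer: listens to an event from another
//context and translates it into a command for the local command model.
@Component
public class ProjectEventTranslator {

    private final CommandGateway commandGateway;

    public ProjectEventTranslator(CommandGateway commandGateway) {
        this.commandGateway = commandGateway;
    }

    //ProjectImageAdded originates from the Media context.
    @EventHandler
    public void on(ProjectImageAdded event) {
        //RegisterPublicationImageCommand would be owned by the Publication
        //context; the aggregate handling it publishes its own events, and
        //those are the events Axon will replay to re-hydrate it.
        commandGateway.send(new RegisterPublicationImageCommand(
                event.getProjectId(), event.getFileId()));
    }
}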
Lastly, I'd like to point out that the replay specifics Axon provides around the TrackingEventProcessor are meant for event processing on the query side of a CQRS application.
Hope this clarifies things for you Martin! If not, feel free to comment under this answer and I'll update my response accordingly.
I'm working on a project where there are, for the sake of this question, two microservices:
A new OrderService (Spring Boot)
A "legacy" Invoice Service (Jersey Web Application)
Additionally, there is a RabbitMQ message broker.
In the OrderService, we've used the Axon framework for event-sourcing and CQRS.
We would now like to use sagas to manage the transactions between the OrderService and InvoiceService.
From what I've read, in order to make this change, we would need to do the following:
Switch from a SimpleCommandBus to a DistributedCommandBus
Change configuration to enable distributed commands
Connect the Microservices either using SpringCloud or JCloud
Add AxonFramework to the legacy InvoiceService project and handle the saga events received.
It is the fourth point where we have trouble: the invoice service is maintained by a separate team which is unwilling to make changes.
In this case, is it feasible to use the message broker instead of the command gateway? For instance, instead of doing something like this:
public class OrderManagementSaga {

    private boolean paid = false;
    private boolean delivered = false;

    @Inject
    private transient CommandGateway commandGateway;

    @StartSaga
    @SagaEventHandler(associationProperty = "orderId")
    public void handle(OrderCreatedEvent event) {
        // client generated identifiers
        ShippingId shipmentId = createShipmentId();
        InvoiceId invoiceId = createInvoiceId();
        // associate the Saga with these values, before sending the commands
        SagaLifecycle.associateWith("shipmentId", shipmentId);
        SagaLifecycle.associateWith("invoiceId", invoiceId);
        // send the commands
        commandGateway.send(new PrepareShippingCommand(...));
        commandGateway.send(new CreateInvoiceCommand(...));
    }

    @SagaEventHandler(associationProperty = "shipmentId")
    public void handle(ShippingArrivedEvent event) {
        delivered = true;
        if (paid) { SagaLifecycle.end(); }
    }

    @SagaEventHandler(associationProperty = "invoiceId")
    public void handle(InvoicePaidEvent event) {
        paid = true;
        if (delivered) { SagaLifecycle.end(); }
    }

    // ...
}
We would do something like this:
public class OrderManagementSaga {

    private boolean paid = false;
    private boolean delivered = false;

    @Inject
    private transient RabbitTemplate rabbitTemplate;

    @StartSaga
    @SagaEventHandler(associationProperty = "orderId")
    public void handle(OrderCreatedEvent event) {
        // client generated identifiers
        ShippingId shipmentId = createShipmentId();
        InvoiceId invoiceId = createInvoiceId();
        // associate the Saga with these values, before sending the commands
        SagaLifecycle.associateWith("shipmentId", shipmentId);
        SagaLifecycle.associateWith("invoiceId", invoiceId);
        // send the commands
        rabbitTemplate.convertAndSend(new PrepareShippingCommand(...));
        rabbitTemplate.convertAndSend(new CreateInvoiceCommand(...));
    }

    @SagaEventHandler(associationProperty = "shipmentId")
    public void handle(ShippingArrivedEvent event) {
        delivered = true;
        if (paid) { SagaLifecycle.end(); }
    }

    @SagaEventHandler(associationProperty = "invoiceId")
    public void handle(InvoicePaidEvent event) {
        paid = true;
        if (delivered) { SagaLifecycle.end(); }
    }

    // ...
}
In this case, when we receive a message from the InvoiceService on the exchange, we would publish the corresponding event on the event gateway or via the SpringAMQPPublisher.
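For illustration, a minimal sketch of that receiving side, assuming a Spring AMQP listener; the queue name and message type are hypothetical, while EventGateway is Axon's standard gateway for publishing events:
import org.axonframework.eventhandling.gateway.EventGateway;
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.stereotype.Component;

@Component
public class InvoiceReplyListener {

    private final EventGateway eventGateway;

    public InvoiceReplyListener(EventGateway eventGateway) {
        this.eventGateway = eventGateway;
    }

    //"invoice-replies" is a hypothetical queue bound to the exchange on
    //which the legacy InvoiceService publishes its replies.
    @RabbitListener(queues = "invoice-replies")
    public void onMessage(InvoicePaidMessage message) {
        //Translate the broker message into a domain event so that the
        //saga's @SagaEventHandler for InvoicePaidEvent picks it up.
        eventGateway.publish(new InvoicePaidEvent(message.getInvoiceId()));
    }
}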
Questions:
Is this a valid approach?
Is there a documented way of handling this kind of scenario in Axon? If so, can you please provide a link to the documentation or any sample code?
First off, and not completely tailored to your question: you're referring to the Axon Extensions to enable distributed messaging. Although this is indeed an option, know that it will require you to configure several separate solutions dedicated to distributed commands, events and event storage. Using a unified solution like Axon Server will ensure that you don't have to dive into three (or more) different approaches to make it all work. Instead, Axon Server is attached to the Axon Framework application, and it does all the distribution and event storage for you.
That means that things like the DistributedCommandBus and SpringAMQPPublisher are unnecessary to fulfill your goal if you use Axon Server.
That's a piece of FYI which can simplify your life; by no means a necessity, of course. So let's move on to your actual question:
Is this a valid approach?
I think it is perfectly fine for a Saga to act as an anti-corruption layer in this form. A Saga, simply put, reacts to events and sends out operations. Whether those operations take the form of commands or calls to another (third-party) service is entirely up to you.
Note though that I feel AMQP is more a solution for distributed events (in a broadcast approach) than a means to send commands (to a single, direct handler). It can be morphed to suit your needs, but I'd regard it as suboptimal for command dispatching, since it needs to be adjusted for that purpose.
Lastly, make sure that your Saga can cope with exceptions thrown while sending those operations over RabbitMQ. You wouldn't want the InvoiceService to fail on that message while your Order service's Saga thinks it's still on the happy path of the transaction.
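As a rough illustration of that last point, assuming a failed send should immediately deny the order, the saga could guard the send itself; the compensating event name is hypothetical:
//Inside the saga: guard the broker send so a failure is handled
//explicitly instead of leaving the saga waiting forever.
@StartSaga
@SagaEventHandler(associationProperty = "orderId")
public void handle(OrderCreatedEvent event) {
    try {
        rabbitTemplate.convertAndSend(new CreateInvoiceCommand(event.getOrderId()));
    } catch (AmqpException e) {
        //Hypothetical compensating event, injected EventGateway assumed;
        //ends the happy path early instead of waiting for a reply.
        eventGateway.publish(new InvoiceRequestFailedEvent(event.getOrderId()));
    }
}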
Concluding though, it's indeed feasible to use another message broker within a Saga.
Is there a documented way of handling this kind of scenario in Axon?
There's no documented Axon way to deal with such a scenario at the moment, as there is no one-size-fits-all solution. From a pure Axon standpoint, using commands would be the way to go. But as you stated, that's not an option within your domain due to the other (unwilling?) team. Hence you would fall back to the actual intent of a Saga, without taking Axon into account, which I would summarize as follows:
A Saga manages a complex business transaction.
It reacts to things happening (events) and sends out operations (commands).
The Saga has a notion of time.
The Saga maintains state over time to know how to react.
That's my two cents, hope this helps you out!
I have a requirement that is rather simple to formulate, but I cannot pull it off without leaking resources.
I want to return a response of type application/stream+json, featuring news events someone posted. I do not want to use WebSockets; not because I don't like them, I just want to know how to do it with a stream.
For this I need to return a Flux<News> from my REST controller that is continuously fed with news once someone posts any.
My attempt for this was creating a Publisher:
public class UpdatePublisher<T> implements Publisher<T> {

    private List<Subscriber<? super T>> subscribers = new ArrayList<>();

    @Override
    public void subscribe(Subscriber<? super T> s) {
        subscribers.add(s);
    }

    public void pushUpdate(T message) {
        subscribers.forEach(s -> s.onNext(message));
    }
}
And a simple News Object:
public class News {
    String message;
    // Constructor, getters, some properties omitted for readability...
}
And endpoints to publish news and to get the stream of news, respectively:
// ...
private UpdatePublisher<String> updatePublisher = new UpdatePublisher<>();

@GetMapping(value = "/news/ticker", produces = "application/stream+json")
public Flux<News> getUpdateStream() {
    return Flux.from(updatePublisher).map(News::new);
}

@PutMapping("/news")
public void putNews(@RequestBody News news) {
    updatePublisher.pushUpdate(news.getMessage());
}
This WORKS, but I cannot unsubscribe or access any given subscription again - so once a client disconnects, the updatePublisher will just continue to push onto a growing number of dead channels, as I have no way to call the onComplete() handler on the subscriptions.
TL;DR:
Can one push messages onto a possibly endless Flux from a different thread and still terminate the Flux on demand, without relying on a 'connection reset by peer' exception or something along those lines?
You should never try to implement the Publisher interface yourself, as that boils down to getting a Reactive Streams implementation right. This is exactly the issue you're facing here.
Instead you should use one of the generator operators provided by Reactor itself (this is actually a Reactor question; nothing specific to Spring WebFlux).
In this case, Flux.create or Flux.push are probably the best candidates, given your code uses some type of event listener to push events down the stream. See the Reactor reference documentation on those.
Without more details, it's hard to give you a concrete code sample that solves your problem. Here are a few pointers though, followed by a short sketch:
you might want to .share() the stream of events for all subscribers if you'd like some multicast-like communication pattern
pay attention to the push/pull/push+pull model that you'd like to have here; how is backpressure supposed to work? What if we produce more events than the subscribers can handle?
this model would only work on a single application instance. If you'd like this to work on multiple application instances, you might want to look into messaging patterns using a broker
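As the promised sketch: on recent Reactor versions (3.4+), the Sinks API packages exactly this create/push-and-share pattern and does the per-subscriber bookkeeping that the hand-rolled Publisher gets wrong. Class and endpoint names mirror the question; everything else is an assumption:
import org.springframework.web.bind.annotation.*;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Sinks;

@RestController
public class NewsController {

    //Multicast sink: fans each emitted element out to all current subscribers.
    private final Sinks.Many<News> sink = Sinks.many().multicast().onBackpressureBuffer();

    @GetMapping(value = "/news/ticker", produces = "application/stream+json")
    public Flux<News> getUpdateStream() {
        //Each subscriber gets its own subscription, which Reactor removes
        //on cancellation, so disconnecting clients no longer leak.
        return sink.asFlux();
    }

    @PutMapping("/news")
    public void putNews(@RequestBody News news) {
        //Thread-safe alternative to calling onNext on raw subscribers.
        sink.tryEmitNext(news);
    }
}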
I maintain a DDD/CQRS application.
My question concerns the handling of item creation through POST (REST).
CQRS (based on the CQS principle) promotes that commands should never return a value;
queries are there for that.
So I wonder how to handle the use case of item creation.
Here's my current command handler pattern (simplified for the sample; no interfaces, etc.):
@Service
@Transactional
public class CreateItem {

    private final CustomerRepository customerRepository;
    private final ItemRepository itemRepository;
    // constructor omitted

    public void handle(CreateItemCommand command) {
        Customer customer = customerRepository.findById(command.customerId);
        ItemId generatedItemId = itemRepository.nextIdentity(); //generating the GUID
        customer.createItem(generatedItemId, .....);
    }
}
Reading this article, an easy method would be to declare an output property on the command, populated at the end of the handle method like this:
public void handle(CreateItemCommand command) {
    Customer customer = customerRepository.findById(command.customerId);
    ItemId generatedItemId = itemRepository.nextIdentity(); //generating the GUID
    customer.createItem(generatedItemId, .....);
    command.itemId = generatedItemId; //populating the output property
}
However, I see one drawback with this approach:
- A command, in theory, is meant to be immutable.
This itemId would then be returned by the calling controller (web app) through the Location header with status 201 or 202 (depending on whether I expect async handling or not).
Another solution would be to let the controller initialize the GUID by accessing the repository itself, thus keeping the command immutable:
//in my controller:
ItemId generatedItemId = itemRepository.nextIdentity(); //controller generating the GUID
createItem.handle(command);
// setting here the Location header (201/202) containing the URL to the newly created item, using the previous itemId
Drawback: the controller (adapter layer) accesses the repository directly ..., which is too low-level IMO.
Since my outermost client is a JavaScript application, a third solution would be to let the JavaScript itself generate a GUID and feed CreateItemCommand with it before sending the whole command to the server (a sketch of this follows below).
Advantage: no more issues about potential violation of CQ(R)S guidelines.
Drawback: the validity of the passed id has to be checked server-side, although a unique index on it would prevent an unexpected insertion into the database.
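To make that third option concrete, a minimal sketch assuming a Spring controller; the names are illustrative, and the command stays immutable because the id arrives fully formed from the client:
import java.net.URI;
import java.util.UUID;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;

//Immutable command carrying a client-generated id.
public final class CreateItemCommand {
    public final UUID itemId;     //generated by the JavaScript client
    public final UUID customerId;

    public CreateItemCommand(UUID itemId, UUID customerId) {
        this.itemId = itemId;
        this.customerId = customerId;
    }
}

//In the controller:
@PostMapping("/items")
public ResponseEntity<Void> createItem(@RequestBody CreateItemCommand command) {
    createItemHandler.handle(command); //command already contains the id
    //The Location header can be built without any return value from the handler.
    return ResponseEntity.created(URI.create("/items/" + command.itemId)).build();
}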
What is the best (or just a good) strategy to handle this?
I am the developer of a CRM application based on the CQRS pattern, and I tend to see commands as immutable. The team decided early on that all IDs are generated on the client, to keep commands immutable. This is perfectly OK, as we are using UUIDs, so we are quite confident that the IDs are valid and there are no collisions. That approach has served us well up to this point - I can definitely recommend it. In that scenario the client simply knows the IDs.
Sometimes it happens though - especially in manual testing - that a create command is dispatched twice with the same ID. In that case appending the events to the event store fails (we use event sourcing) with a duplicate-key exception, and the exception is passed back to the controller. In fact, we do return results from command executions via a callback, even though it's just "everything ok" most of the time - so no exception is thrown. Command validation is done this way as well. We do this using a command bus concept.
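In Axon terms (see the recommendation below), that callback style maps onto the CommandGateway, whose send method hands back a CompletableFuture; a rough sketch, reusing the command from the question:
import java.util.concurrent.CompletableFuture;
import org.axonframework.commandhandling.gateway.CommandGateway;

//Dispatch returns a future: "everything ok" completes normally,
//a duplicate-ID failure completes exceptionally.
CompletableFuture<Object> result = commandGateway.send(new CreateItemCommand(itemId, customerId));
result.whenComplete((r, ex) -> {
    if (ex != null) {
        //e.g. duplicate ID detected while appending to the event store
    }
});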
I would recommend taking a look at the Axon Framework. We use it, it provides the common building blocks, and it just works. Maybe you can get some inspiration from that!
I am using the PostContextCreate part of the life cycle in an e4 RCP application to create the back-end "business logic" part of my application. I then inject it into the context using an IEclipseContext. I now have a requirement to persist some business logic configuration options between executions of my application. I have some questions:
It looks like properties (e.g. accessible from MContext) would be really useful here, a straightforward Map<String,String> sounds ideal for my simple requirements, but how can I get them in PostContextCreate?
Will my properties persist if my application is being run with clearPersistedState set to true? (I'm guessing not).
If I turn clearPersistedState off then will it try and persist the other stuff that I injected into the context?
Or am I going about this all wrong? Any suggestions would be welcome. I may just give up and read/write my own properties file.
I think the Map returned by MApplicationElement.getPersistedState() is intended to be used for persistent data. This will be cleared by -clearPersistedState.
The PostContextCreate method of the life cycle is run quite early in the startup and not everything is available at this point. So you might have to wait for the app startup complete event (UIEvents.UILifeCycle.APP_STARTUP_COMPLETE) before accessing the persisted state data.
You can always use the traditional Platform.getStateLocation(bundle) to get a location in the workspace .metadata to store arbitrary data. This is not touched by clearPersistedState.
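For that last option, a minimal sketch of reading and writing a properties file in the bundle's state location (the file name and property key are arbitrary; run this inside a method that may throw IOException):
import java.io.*;
import java.util.Properties;
import org.eclipse.core.runtime.Platform;
import org.osgi.framework.Bundle;
import org.osgi.framework.FrameworkUtil;

Bundle bundle = FrameworkUtil.getBundle(getClass());
File file = Platform.getStateLocation(bundle).append("config.properties").toFile();

Properties props = new Properties();
if (file.exists()) {
    try (Reader reader = new FileReader(file)) {
        props.load(reader); //read previously persisted options
    }
}
props.setProperty("some.option", "value");
try (Writer writer = new FileWriter(file)) {
    props.store(writer, "business logic configuration"); //survives -clearPersistedState
}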
Update:
To subscribe to the app startup complete:
@PostContextCreate
public void postContextCreate(IEventBroker eventBroker)
{
    eventBroker.subscribe(UIEvents.UILifeCycle.APP_STARTUP_COMPLETE, new AppStartupCompleteEventHandler());
}

private static final class AppStartupCompleteEventHandler implements EventHandler
{
    @Override
    public void handleEvent(final Event event)
    {
        // ... your code here
    }
}
I am creating a set of widgets in Java that decodes and displays messages received at a serial interface.
The message type is defined by a unique identifier.
Each widget is only interested in a particular identifier.
How do I program the application in a way that distributes the messages correctly to the relevant widgets?
If this is for a single app (i.e. a main and couple of threads), JMS is overkill.
The basics of this is a simple queue (of which Java has several good ones, BlockingQueue waving its hand in the back over there).
The serial port reads its data, formats a relevant message object, and dumps it on a central message queue. This can be as simple as a BlockingQueue singleton.
Next, you'll need a queue listener/dispatcher.
This is a separate thread that sits on the queue, waiting for messages.
When it gets a message it then dispatches it to the waiting "widgets".
How it "knows" what widgets get what is up to you.
It can be a simple registration scheme:
String messageType = "XYZ";
MyMessageListener listener = new MyMessageListener();
EventQueueFactory.registerListener(messageType, listener);
Then you can do something like:
// Assumes a Map<String, List<MessageListener>> registrationMap field, and
// that Message exposes its type identifier via getType().
public void registerListener(String type, MessageListener listener) {
    List<MessageListener> listeners = registrationMap.get(type);
    if (listeners == null) {
        listeners = new ArrayList<MessageListener>();
        registrationMap.put(type, listeners);
    }
    listeners.add(listener);
}

public void dispatchMessage(Message msg) {
    List<MessageListener> listeners = registrationMap.get(msg.getType());
    if (listeners != null) {
        for (MessageListener listener : listeners) {
            listener.send(msg);
        }
    }
}
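For completeness, a sketch of the dispatcher thread connecting the BlockingQueue to the dispatch method above; the Message type and the wiring are assumptions from the description:
import java.util.concurrent.BlockingQueue;
import java.util.function.Consumer;

public class QueueDispatcher implements Runnable {
    private final BlockingQueue<Message> queue;
    private final Consumer<Message> dispatcher; //e.g. registry::dispatchMessage

    public QueueDispatcher(BlockingQueue<Message> queue, Consumer<Message> dispatcher) {
        this.queue = queue;
        this.dispatcher = dispatcher;
    }

    @Override
    public void run() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                dispatcher.accept(queue.take()); //take() blocks until a message arrives
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); //restore the flag and exit
        }
    }
}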
Also, if you're using Swing, it has a whole suite of Java Bean property listeners and what not that you could leverage as well.
That's the heart of it. That should give you enough rope to keep you in trouble.
Sounds like a JMS topic/subscription. Why reinvent the wheel?
One easy way to do this is to add each widget to a map keyed by ID, and to deliver each message by pulling the widget out of the map and calling some method on it. This means each widget has to implement an interface that you can call to display the message. If the widgets are not under your control, you can create a thin wrapper class (implementing the interface) and add this wrapper - with its widget, one instance per ID - to the map.
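A minimal sketch of that map-by-ID idea, again assuming Message exposes its identifier via getType(); the interface and class names are illustrative:
import java.util.HashMap;
import java.util.Map;

//Interface every widget (or thin wrapper around one) implements.
interface MessageDisplay {
    void display(Message msg);
}

class WidgetRegistry {
    private final Map<String, MessageDisplay> widgets = new HashMap<>();

    void register(String messageType, MessageDisplay widget) {
        widgets.put(messageType, widget);
    }

    //Route a decoded message to the single widget interested in its identifier.
    void route(Message msg) {
        MessageDisplay widget = widgets.get(msg.getType());
        if (widget != null) {
            widget.display(msg);
        }
    }
}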