I have a use case where I would like to publish a non-state-changing event as a trigger.
In the vast majority of cases, the Aggregates will publish events by applying them. However, occasionally, it is necessary to publish an event (possibly from within another component), directly to the Event Bus. To publish an event, simply wrap the payload describing the event in an EventMessage. The GenericEventMessage.asEventMessage(Object) method allows you to wrap any object into an EventMessage ...
The event is published from inside a Saga.
When I use asEventMessage and look at the events table, I'm a little confused. The event has an aggregate identifier that does not exist in the rest of the system, and the type entry is null (reading the docs, for a moment it sounds like asEventMessage is expected to behave the same as applying events from within aggregates).
Since I consider this event conceptually part of the aggregate, it should refer to it, right?
So I craft a GenericDomainEventMessage myself and set its aggregate identifier, sequence number and type manually:
@SagaEventHandler
public void on(AnotherEvent event, @SequenceNumber long sequenceNr) {
    // ...
    // 'identifier' is the aggregate identifier this saga is tracking and
    // 'payload' is the event payload to publish.
    GenericDomainEventMessage<?> myEvent = new GenericDomainEventMessage<>(
            MyAggregate.class.getSimpleName(),
            identifier.toString(),
            sequenceNr + 1,
            payload);
    eventStore.publish(myEvent);
}
This event does not introduce a (data) state change to its underlying aggregate. I treat it as a flag/trigger that has a strong meaning in the domain.
I could also publish the event from within the aggregate, in a command handler, but some of the operations that need to be performed are outside the aggregate's scope. That's why a Saga seems more suitable.
So my questions are:
Is publishing a GenericDomainEventMessage equal to the behavior of AggregateLifecycle#apply?
Should there be a no-op handler in the aggregate, or will Axon handle this correctly?
In Axon 3, publishing the event to the EventBus is the same as apply()ing it from within the Aggregate, with one difference: apply() will also invoke any available handlers. If no handler is available, there is no difference.
EventBus.publish is meant to publish Events that should not be directly tied to the Aggregate. In the Event Store table, it does get an Aggregate identifier (equal to the message identifier), but that's a technical detail.
In your case, the recommended solution would be to apply() the Event. The fact that the event doesn't trigger a state change doesn't matter at that point. You're not obliged to define an @EventSourcingHandler for it.
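For illustration, a minimal sketch of that approach (the command and event names are made up for this example; note the absence of an @EventSourcingHandler for the trigger event):

@Aggregate
public class MyAggregate {

    @AggregateIdentifier
    private String identifier;

    @CommandHandler
    public void handle(TriggerSomethingCommand command) {
        // Publish the trigger event. No @EventSourcingHandler is required,
        // since the event carries no state change for this aggregate.
        AggregateLifecycle.apply(new SomethingTriggered(identifier));
    }
}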
Related
Scenario:
We have two instances of the same microservice, which receives two events (pictured as Event1 and Event2 below) from Kafka. The instances need to combine the result of their own individual transformations, with that of the other, and send only 1 notification downstream.
I am trying to understand what is the best way to do all this. Specifically, how to make:
each instance of the microservice to wait for the other instance,
and then combine the individual transforms into one
and then check whether the other instance has already combined and sent the notification; if yes, then skip!
A diagram was included below to help visualize this (not reproduced here).
Consider using the temporal.io open source project to implement this. You can code your logic as a simple stateful class that reacts to the events. The idea of Temporal is that the instance of that class is not linked to a specific instance of the service. Each object is unique and identified by a business ID. So all the cross-process coordination is handled by the Temporal runtime.
Here is sample code using the Temporal Java SDK. Go, TypeScript/JavaScript, PHP and Python are also supported.
@WorkflowInterface
public interface CombinerWorkflow {

    @WorkflowMethod
    void combine();

    @SignalMethod
    void event1(Event1 event);

    @SignalMethod
    void event2(Event2 event);
}
// Workflow implementation of the combine() workflow method: waits for both
// events, merges them, and sends a single notification downstream.
public static class CombinerWorkflowImpl implements CombinerWorkflow {

    private Event1 event1;
    private Event2 event2;

    private final Notifier notifier = Workflow.newActivityStub(Notifier.class);

    @Override
    public void combine() {
        // Blocks durably until both signals have arrived.
        Workflow.await(() -> event1 != null && event2 != null);
        Event3 result = combine(event1, event2);
        notifier.notify(result);
    }

    @Override
    public void event1(Event1 event) {
        this.event1 = event;
    }

    @Override
    public void event2(Event2 event) {
        this.event2 = event;
    }

    // Domain-specific merge of the two events; the real logic is elided
    // in the original sample.
    private Event3 combine(Event1 e1, Event2 e2) {
        return null; // placeholder
    }
}
This code looks too simple, as it doesn't talk to persistence. But Temporal ensures that all the data and even threads are durably preserved as long as needed (potentially years), so infrastructure and process failures will not stop its execution.
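For completeness, a hedged sketch of how each service instance might deliver its event to the shared workflow, using signal-with-start so that whichever instance arrives first creates the workflow (the client setup, task queue name and businessId are placeholders, not from the original answer):

WorkflowClient client = WorkflowClient.newInstance(
        WorkflowServiceStubs.newLocalServiceStubs());
CombinerWorkflow workflow = client.newWorkflowStub(
        CombinerWorkflow.class,
        WorkflowOptions.newBuilder()
                .setWorkflowId(businessId)            // the shared business ID
                .setTaskQueue("combiner-task-queue")  // placeholder queue name
                .build());

// Atomically start the workflow (if not already running) and deliver the signal.
BatchRequest request = client.newSignalWithStartRequest();
request.add(workflow::combine);
request.add(workflow::event1, event1);
client.signalWithStart(request);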
It's worth noting that you cannot guarantee all of:
a notification which needs to be sent will be sent in some finite period of time (this is a reasonable working definition of availability in this context)
no notification will be sent more than once
tolerance of instance failures and arbitrary network delays
Fundamentally, you will need each instance to tell the other one that it's claiming responsibility for sending the notification, or to ask the other instance whether it has claimed that responsibility. If you tell without waiting for acknowledgement, you cannot guarantee "not more than once". If you tell and wait for acknowledgement, you cannot guarantee "will be sent in a finite period". If you ask, you will likewise have to decide whether or not to send when no reply comes.
You could have the instances use some external referee: this only punts the CAP tradeoff to that referee. If you choose a CP referee, you will be giving up on guaranteeing a notification will be sent. If you choose AP, you will be giving up on guaranteeing that no notification gets sent more than once.
You'll have to choose which of those three guarantees you want to weaken; deciding how you weaken it will guide your design.
There are multiple ways to get around this kind of data-sync problem. But since you are using Kafka, you should use the out-of-the-box capabilities Kafka offers.
Option 1 (Preferable)
Kafka guarantees the order of events within the same partition. Therefore, if your producer sends these events to the same partition, they will be received by the same consumer in any given consumer group (in your case, the same instance; or, if you are using threads as consumers, the same thread of the same consumer). With this you wouldn't need to worry about syncing events across multiple consumers.
If you are using Spring Boot, this can easily be achieved by providing a partition key in the KafkaTemplate.
More on this topic: How to maintain message ordering and no message duplication
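As a rough sketch of what that looks like with Spring Boot (the topic name and the choice of correlation id as key are placeholders; the point is only that records sharing a key land on the same partition):

@Autowired
private KafkaTemplate<String, String> kafkaTemplate;

public void publish(String correlationId, String payload) {
    // Records with the same key hash to the same partition, so a single
    // consumer in the group receives them, in order.
    kafkaTemplate.send("events-topic", correlationId, payload);
}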
Option 2
Now, if you don't have control over the producer, you will need to handle this on the application side. You are going to need distributed caching support, e.g. Redis. Simply maintain a boolean state (completed: true or false) for these events, and only when all related events have been received, process the downstream logic.
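A minimal sketch of Option 2, assuming Spring Data Redis (the key names, the instanceId field and the sendNotification call are placeholders for illustration):

@Autowired
private StringRedisTemplate redis;

public void onTransformed(String correlationId) {
    // Record that this instance has finished its own transformation.
    redis.opsForValue().set("done:" + correlationId + ":" + instanceId, "true");

    boolean bothDone =
            Boolean.TRUE.equals(redis.hasKey("done:" + correlationId + ":A"))
            && Boolean.TRUE.equals(redis.hasKey("done:" + correlationId + ":B"));

    if (bothDone) {
        // SETNX-style atomic claim: only one instance wins it, so only one
        // notification goes downstream.
        Boolean claimed = redis.opsForValue()
                .setIfAbsent("notified:" + correlationId, "true", Duration.ofHours(1));
        if (Boolean.TRUE.equals(claimed)) {
            sendNotification(correlationId);
        }
    }
}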
NOTE: Assuming you are using a persistence layer, combining and transforming the events should be trivial. But if you are not using any persistence, you would need an in-memory cache for Option 1. For Option 2 it should be trivial, because you already have the distributed cache (and can store whatever you want in it).
In my software solution I use Java EE with EJBs. On certain occasions I fire different events depending on what happened in the system. In my specific case I fire two different events that should be executed after the transaction has finished successfully.
As far as I know, the order of execution of observers of the same event is not specified, but how does CDI execute the events when they are of different types and fired one after the other?
So in my code I fire the first event and then, in the same transaction, fire the second event. Is the first event handled before the second? I researched this but could not find an answer.
Here it is stated that the execution order of observers of the same event is not guaranteed, but there is nothing about different events: http://www.next-presso.com/2014/06/you-think-you-know-everything-about-cdi-events-think-again/
Until CDI 1.2 (check here, chapter 10.5):
The order in which observer methods are called [after firing an event] is not defined, and so portable applications should not rely upon the order in which observers are called.
In fact, the CDI container may enqueue your fired events in a given list, especially when you mark the observer as a transactional observer method. That list may be ordered (FIFO or otherwise), but you have no guarantee of it.
Since CDI 2.0 (check here, Chapter 10.5.2), you may define an order using the @Priority annotation, specifying a number as its value. Observers with smaller priority values are called first, and observers with no @Priority annotation get the default priority (Priority.APPLICATION + 500). As in CDI 1.2, observers with the same priority have no defined order and may be called by the CDI container in any order.
CDI 2.0 observer ordering does not apply to asynchronous observer methods (per the spec), as it is expected that observer methods are called as soon as possible and in different contexts. If you need some kind of ordering in your use case, you should make your asynchronous observer trigger the next event instead of calling it from your "main" method.
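For illustration, a small sketch of that ordering in CDI 2.0 (the OrderPlaced event and method names are made up for this example):

import javax.annotation.Priority;
import javax.enterprise.event.Observes;
import javax.interceptor.Interceptor;

public class OrderObservers {

    // Called first: smaller @Priority value.
    public void audit(@Observes @Priority(Interceptor.Priority.APPLICATION + 100) OrderPlaced event) {
        // ...
    }

    // Called second: larger @Priority value.
    public void mail(@Observes @Priority(Interceptor.Priority.APPLICATION + 200) OrderPlaced event) {
        // ...
    }
}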
How do you actually manage sagas with multiple JVMs running the same app?
Should each JVM of this app use the same database?
Otherwise, tracking tokens will not "be shared" across the same app?
How are events split among running instances of the same app for sagas? Is one saga type, or saga instance, always handled on the same app instance (until it is shut down, so that another instance takes charge of it)?
Or does each JVM receive the events, so that each saga of the same type will run (resulting in duplicate commands sent, and errors)?
Anything else to take care of?
Example scenario:
3 instances of the same app on 3 different PCs/VMs.
A saga named "SagaA", which can start with EventA and end with EventB.
Both events have a field "id"; the saga has two event handlers to handle the events in the saga.
How will the events be handled for, say, 3 EventA and EventB pairs, each with a different "id" value?
Etc.
Many more questions.
A Saga in Axon terms is nothing more than a specific type of Event Handler.
As such, Axon will use an Event Processor to give a Saga instance its events.
Event Processors come in two flavors:
SubscribingEventProcessor
TrackingEventProcessor
You can describe a subscribing processor as "receiving the events from the EventBus within the same JVM".
A tracking processor can be described as "pulling events from the EventStore, keeping track of its progress through a shareable token".
The answer to your question now highly depends on which Event Processor is being used.
With a SubscribingEventProcessor, you would by definition not share the event load between different instances of the same app.
Thus a given Saga would be loaded on any live instance, given that both receive events associated with the same saga.
Needless to say, using the subscribing processor for Sagas does not work well if you are going to distribute the application running those Saga instances.
Instead, it is highly recommended to use a TrackingEventProcessor as the source of events for a specific Saga instance.
In doing so, any load sharing follows from the requirement that a TrackingToken must be claimed by such a processor to be able to do any work (aka, handling events).
Thus, to share the workload of providing events from the event store to your Saga instances in Axon, you would have to do the following:
Set up a TrackingEventProcessor for said saga type
Set up a TokenStore, where the underlying storage mechanism is shared among all app instances
[Optional] If you want parallel processing of the event stream, you will have to segment the TrackingToken for the given saga type. [EDIT] On top of this, the saga_entry table used by the SagaStore should also be shared among all app instances running the given Saga type. A rough configuration sketch follows below.
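As a rough illustration of those setup steps (a hedged, Axon 4-style sketch with Spring; the processor name "SagaAProcessor" and the wiring are placeholders, adjust to your Axon version):

@Configuration
public class SagaProcessingConfig {

    // A TokenStore backed by a database table shared by every app instance,
    // so that token claims coordinate which instance does the work.
    @Bean
    public TokenStore tokenStore(EntityManagerProvider entityManagerProvider,
                                 Serializer serializer) {
        return JpaTokenStore.builder()
                .entityManagerProvider(entityManagerProvider)
                .serializer(serializer)
                .build();
    }

    @Autowired
    public void configure(EventProcessingConfigurer configurer) {
        // Make the processor that handles the saga a tracking processor.
        configurer.registerTrackingEventProcessor("SagaAProcessor");
    }
}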
Hope this answer suffices for the "many more questions" you have, @Yoann!
Studying Spring event handlers, I cannot see what the gain or benefit of implementing our own event listener is. I mean, what is the difference between object A calling object B directly and synchronously, versus object A, using a listener and an event handler, publishing an event and then object B being called? Maybe the gain is about architecture, or lower coupling? What is the real gain? Thanks.
As you mentioned, this approach lowers the coupling between classes, as the sender and receiver of the event don't know about each other.
That said, it's not ideal to use this approach for all method invocations; it makes the most sense when the operations are not closely related.
For example, imagine the following scenario: when a user completes his/her registration, we send a welcome email.
In this case, coupling the registration process to the sending of the email is not great. But if you have a listener on the UserRegistered event, then the email can be triggered by that event. I really like this way of building applications, as it makes them more decoupled. But, depending on which event dispatcher you use, it can become more difficult to understand the flow (e.g. if the event dispatcher receives a string as the event name and a map with the data, it is difficult to get a list of all consumers).
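To make that concrete, a minimal sketch using Spring's own event mechanism (the UserRegistered, RegistrationService and WelcomeEmailListener names are made up for this example; each class would live in its own file):

import org.springframework.context.ApplicationEventPublisher;
import org.springframework.context.event.EventListener;
import org.springframework.stereotype.Component;
import org.springframework.stereotype.Service;

public class UserRegistered {
    private final String email;

    public UserRegistered(String email) { this.email = email; }

    public String getEmail() { return email; }
}

@Service
public class RegistrationService {
    private final ApplicationEventPublisher publisher;

    public RegistrationService(ApplicationEventPublisher publisher) {
        this.publisher = publisher;
    }

    public void register(String email) {
        // ... persist the user ...
        // The registration flow neither knows nor cares who reacts to this.
        publisher.publishEvent(new UserRegistered(email));
    }
}

@Component
public class WelcomeEmailListener {
    @EventListener
    public void on(UserRegistered event) {
        // send the welcome email here
    }
}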
One important aspect (or I should say smell) is that two listeners for the same event shouldn't depend on the order in which they consume it. This applies to all event dispatchers, not just the one implemented by Spring.
As is widely known, anything related to Swing components must be done on the event dispatch thread. This also applies to the models behind the components, such as TableModel. Easy enough in elementary cases, but things become pretty complicated if the model is a "live view" of something that must run on a separate thread because it's changing quickly. For example, a live view of a stock market on a JTable. Stock markets don't usually happen on the EDT.
So, what is the preferable pattern to (de)couple the Swing model that must be on the EDT, and a "real", thread-safe model that must be updateable from anywhere, anytime? One possible solution would be to actually split the model into two separate copies: the "real" model plus its Swing counterpart, which is a snapshot of the "real" model. They're then (bidirectionally) synchronized on the EDT every now and then. But this feels like bloat. Is this really the only viable approach, or are there other, or more standard, ways? Helpful libraries? Anything?
I can recommend the following approach:
Place events that should modify the table on a "pending event" queue. When an event is placed on the queue and the queue was previously empty, invoke the event dispatch thread to drain the queue of all events and update the table model. This optimisation means you are no longer invoking the event dispatch thread for every event received, which solves the problem of the event dispatch thread not keeping up with the underlying event stream.
Avoid creation of a new Runnable when invoking the event dispatch thread by using a stateless inner class to drain the pending event queue within your table panel implementation.
Optional further optimisation: When draining the pending event queue minimise the number of table update events fired by remembering which table rows need to be repainted and then firing a single event (or one event per row) after processing all events.
Example Code
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import javax.swing.JPanel;
import javax.swing.SwingUtilities;

public class MyStockPanel extends JPanel {

    private final BlockingQueue<StockEvent> stockEvents = new LinkedBlockingQueue<StockEvent>();

    // Runnable invoked on the event dispatch thread, responsible for applying
    // any pending events to the table model.
    private final Runnable processEventsRunnable = new Runnable() {
        public void run() {
            StockEvent evt;
            while ((evt = stockEvents.poll()) != null) {
                // Update table model and fire table event.
                // Could optimise here by firing a single table changed event
                // when the queue is empty if processing a large # of events.
            }
        }
    };

    // Called by threads other than the event dispatch thread. Adds the event
    // to the "pending" queue ready to be processed.
    public void addStockEvent(StockEvent evt) {
        stockEvents.add(evt);

        // Optimisation 1: Only invoke the EDT if the queue was previously
        // empty before adding this event. If the size is 0 at this point then
        // the EDT must have already been active and removed the event from
        // the queue, and if the size is > 0 we know that the EDT must have
        // already been invoked in a previous method call but not yet drained
        // the queue (i.e. no need to invoke it again).
        if (stockEvents.size() == 1) {
            // Optimisation 2: Do not create a new Runnable each time but use
            // a stateless inner class to drain the queue and update the
            // table model.
            SwingUtilities.invokeLater(processEventsRunnable);
        }
    }
}
As far as I understand, you don't want to implement the Swing model interfaces in your real model, do you? Can you implement a Swing model as a "view" over a part of the real model? It would translate its read-access getValueAt() into calls to the real model, and the real model would notify the Swing model about changes, either providing a list of changes or assuming that the Swing model will take care of querying the new values of everything it is currently showing.
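That could look roughly like this (a sketch; RealModel and its listener interface are assumptions, not an existing API):

import javax.swing.SwingUtilities;
import javax.swing.table.AbstractTableModel;

public class StockTableModel extends AbstractTableModel {

    private final RealModel realModel;

    public StockTableModel(RealModel realModel) {
        this.realModel = realModel;
        // The real model may fire change notifications from any thread;
        // hop onto the EDT before touching Swing.
        realModel.addChangeListener(row ->
                SwingUtilities.invokeLater(() -> fireTableRowsUpdated(row, row)));
    }

    @Override
    public int getRowCount() {
        return realModel.rowCount();
    }

    @Override
    public int getColumnCount() {
        return realModel.columnCount();
    }

    @Override
    public Object getValueAt(int row, int col) {
        // Read access translated into calls to the real model, which must be
        // safe to call from the EDT.
        return realModel.valueAt(row, col);
    }
}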
The usual approach is to send "signals" of some kind to which the UI listens. In my code, I often use a central dispatcher which sends signals that contain the object that was modified, the name of the field/property plus the old and new value. No signal is sent for the case oldValue.equals(newValue) or oldValue.compareTo(newValue) == 0 (the latter for dates and BigDecimal).
The UI thread then registers a listener for all signals. It then examines the object and the name and then translates that to a change in the UI which is executed via asyncExec().
You could turn that into a listener per object and have each UI element register itself with the model. But I've found that this just spreads the code all over the place. When I have a huge set of objects on both sides, I sometimes just use several signals (or events) to make things more manageable.
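A compact sketch of that central-dispatcher idea (all names are illustrative, each class in its own file; the commented line at the end shows the single UI-side listener hopping onto the UI thread, here with Swing's invokeLater):

import java.util.List;
import java.util.Objects;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

public final class Signal {
    public final Object source;
    public final String property;
    public final Object oldValue;
    public final Object newValue;

    public Signal(Object source, String property, Object oldValue, Object newValue) {
        this.source = source;
        this.property = property;
        this.oldValue = oldValue;
        this.newValue = newValue;
    }
}

public final class Dispatcher {
    private final List<Consumer<Signal>> listeners = new CopyOnWriteArrayList<>();

    public void addListener(Consumer<Signal> listener) {
        listeners.add(listener);
    }

    public void fire(Signal signal) {
        // Suppress no-op changes, as described above.
        if (Objects.equals(signal.oldValue, signal.newValue)) {
            return;
        }
        listeners.forEach(l -> l.accept(signal));
    }
}

// UI side: one listener for all signals, re-dispatched onto the UI thread:
// dispatcher.addListener(s -> SwingUtilities.invokeLater(() -> applyToUi(s)));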