In my software solution I use Java EE with EJBs. On certain occasions I fire different events depending on what happened in the system. In my specific case I fire two different events that should be executed after the transaction has finished successfully.
As far as I know, the order of execution of observers for the same event is not specified, but how does CDI execute the events when they are of different types and fired one after the other?
So in my code I do fire(EventA) and then, in the same transaction, fire(EventB). Is EventA handled before EventB? I researched this but could not find an answer.
Here it is stated that the execution order of observers for the same event is not guaranteed, but there is nothing about different events: http://www.next-presso.com/2014/06/you-think-you-know-everything-about-cdi-events-think-again/
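For illustration, a minimal sketch of the situation (all names are hypothetical): two different event types fired from the same transactional method, each with an after-success observer:

import javax.ejb.Stateless;
import javax.enterprise.event.Event;
import javax.enterprise.event.Observes;
import javax.enterprise.event.TransactionPhase;
import javax.inject.Inject;

class EventA { }
class EventB { }

@Stateless
public class SomeService {

    @Inject
    private Event<EventA> eventA;
    @Inject
    private Event<EventB> eventB;

    public void doWork() {
        eventA.fire(new EventA()); // fired first
        eventB.fire(new EventB()); // fired second, in the same transaction
    }
}

class SomeObservers {
    void onA(@Observes(during = TransactionPhase.AFTER_SUCCESS) EventA e) { /* ... */ }
    void onB(@Observes(during = TransactionPhase.AFTER_SUCCESS) EventB e) { /* ... */ }
}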
Until CDI 1.2 (check here, chapter 10.5):
The order in which observer methods are called [after firing an event] is not defined, and so portable applications should not rely upon the order in which observers are called.
In fact, the CDI container may enqueue your fired events in a list, especially when you mark an observer as a transactional observer method. The implementation's list may be ordered (FIFO or otherwise), but you have no guarantee of it.
Since CDI 2.0 (check here, chapter 10.5.2), you may define an order using the @Priority annotation and specifying a number as its value. Observers with smaller priority values are called first, and observers with no @Priority annotation get the default priority (Priority.APPLICATION + 500). As with CDI 1.2, observers with the same priority have no defined order and may be called by the CDI container in any order.
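As a minimal sketch (the event and observer names are hypothetical), CDI 2.0 observer ordering looks like this:

import javax.annotation.Priority;
import javax.enterprise.event.Observes;

class PaymentReceived { } // hypothetical event type

public class OrderedObservers {

    // Smaller priority value: called first.
    public void first(@Observes @Priority(100) PaymentReceived event) { /* ... */ }

    // Larger priority value: called second.
    public void second(@Observes @Priority(200) PaymentReceived event) { /* ... */ }
}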
CDI 2.0 observer ordering does not apply to asynchronous observer methods (per the spec), as observer methods are expected to be called as soon as possible and in different contexts. If you need some kind of ordering in your use case, you should make your asynchronous observer trigger the next event instead of calling it from your "main" method.
Related
How to actually manage sagas with multiple JVMs of the same app running?
Should each JVM of this app use the same database?
Otherwise, tracking tokens will not "be shared" across the same app?
How are events split among instances of the same app for sagas? Is one saga type or saga instance always handled on the same instance (until it is shut down, so that another instance takes charge of it)?
Or does each JVM receive the events, so that each saga of the same type will run (resulting in duplicate commands being sent, and errors)?
Anything else to take care of?
Example scenario:
Three instances of the same app on three different PCs/VMs.
A saga named "SagaA", which can start with EventA and end with EventB.
Both events have a field "id"; the saga has two event handlers to handle these events in the saga.
How will the events be handled, for example three EventA and EventB events, each with a different "id" value?
Etc.
Many more questions.
A Saga in Axon terms is nothing more than a specific type of Event Handler.
As such, Axon will use an Event Processor to give a Saga instance its events.
Event Processors come in two flavors:
SubscribingEventProcessor
TrackingEventProcessor
A subscribing processor can be described as "receiving the events from the EventBus, within the same JVM".
A tracking processor can be described as "pulling events from the EventStore, keeping track of its progress through a shareable token".
The nature of your question now highly depends on which Event Processor is being used.
With a SubscribingEventProcessor you would, by definition, not share the event load between different instances of the same app.
Thus a given Saga would be loaded on every live instance, since each instance receives the events associated with that saga.
Needless to say, using the subscribing processor for Sagas does not work well if you are going to distribute the application running those Saga instances.
Instead, it is highly recommended to use a TrackingEventProcessor to be the source of events for a specific Saga instance.
In doing so, any load sharing follows from the requirement that a TrackingToken must be claimed by such a processor before it can do any work (i.e. handle events).
Thus, to share the workload of providing events from the event store to your Saga instances in Axon, you would have to do the following:
Set up a TrackingEventProcessor for said saga type
Set up a TokenStore, where the underlying storage mechanism is shared among all app instances
[Optional] If you want parallel processing of the event stream, you will have to segment the TrackingToken for the given saga type. [EDIT] On top of this, the saga_entry table used by the SagaStore should also be shared among all app instances running the given Saga type.
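As a hedged sketch of those steps, assuming Axon 4's configuration API (the saga class, connection provider and choice of a JDBC token store are placeholders, not something prescribed by Axon):

import org.axonframework.common.jdbc.ConnectionProvider;
import org.axonframework.config.Configurer;
import org.axonframework.config.DefaultConfigurer;
import org.axonframework.eventhandling.tokenstore.jdbc.JdbcTokenStore;

public class SagaProcessingConfig {

    // Register a tracking processor for the saga and back it with a
    // TokenStore whose storage (here a JDBC data source) is shared by
    // all app instances, so token claims coordinate the workload.
    public static Configurer configure(ConnectionProvider sharedConnectionProvider) {
        return DefaultConfigurer.defaultConfiguration()
                .eventProcessing(processing -> processing
                        .usingTrackingEventProcessors()
                        .registerSaga(SagaA.class)
                        .registerTokenStore(conf -> JdbcTokenStore.builder()
                                .connectionProvider(sharedConnectionProvider)
                                .serializer(conf.serializer())
                                .build()));
    }
}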
Hope this answer suffices for the "many more questions" you have, @Yoann!
I have a use case where I would like to publish a non-state-changing event as a trigger.
In the vast majority of cases, the Aggregates will publish events by applying them. However, occasionally, it is necessary to publish an event (possibly from within another component), directly to the Event Bus. To publish an event, simply wrap the payload describing the event in an EventMessage. The GenericEventMessage.asEventMessage(Object) method allows you to wrap any object into an EventMessage ...
The event is published from inside a Saga.
When I use asEventMessage and look at the events table, I'm a little confused. The event has an aggregate identifier that does not exist in the rest of the system, and the type entry is null (when reading the docs, for a moment it sounds like the expected behavior of asEventMessage equals that of applying events from within aggregates).
Since I consider the event I'm talking about conceptually part of the aggregate, it should be referring to it, right?
So I craft a GenericDomainEventMessage myself and set its aggregate identifier, sequence number and type manually:
@SagaEventHandler(associationProperty = "id") // association property assumed
public void on(AnotherEvent event, @SequenceNumber long sequenceNr) {
    // ...
    GenericDomainEventMessage<Object> myEvent = new GenericDomainEventMessage<>(
            MyAggregate.class.getSimpleName(), // aggregate type
            identifier.toString(),             // aggregate identifier
            sequenceNr + 1,                    // next sequence number
            payload);
    eventStore.publish(myEvent);
}
This event does not introduce a (data) state change to its underlying aggregate. I treat it as a flag/trigger that has a strong meaning in the domain.
I could also publish the event from within the aggregate, in a command handler but some of the operations that need to be performed are outside of the aggregate's scope. That's why a Saga seems to be more suitable.
So my questions are:
Is publishing a GenericDomainEventMessage equal to the behavior of AggregateLifecycle#apply?
Should there be a no-op handler in the aggregate, or will Axon handle this correctly?
In Axon 3, publishing an event to the EventBus is the same as apply()ing it from within the Aggregate, with one difference: apply() will invoke any available handlers as well. If no handler is available, there is no difference.
EventBus.publish is meant to publish Events that should not be directly tied to the Aggregate. In the Event Store table, it does get an Aggregate identifier (equal to the message identifier), but that's a technical detail.
In your case, the recommended solution would be to apply() the Event. The fact that the event doesn't trigger a state change doesn't matter at that point. You're not obliged to define an @EventSourcingHandler for it.
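For illustration, a minimal sketch of that recommendation (Axon 3 API; the command and event names are hypothetical):

import org.axonframework.commandhandling.CommandHandler;
import org.axonframework.commandhandling.model.AggregateLifecycle;

public class MyAggregate {

    @CommandHandler
    public void handle(TriggerSomethingCommand command) {
        // apply() stores the event against this aggregate and publishes it;
        // since the event causes no state change, no @EventSourcingHandler
        // is needed for it.
        AggregateLifecycle.apply(new SomethingTriggeredEvent(command.getId()));
    }
}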
I'm using Java EE 6.
I'd like to trigger an action upon successful commit of a transaction. For now, my plan is to use a CDI transactional event within an EJB:
@Asynchronous
public void triggerAction(@Observes(during = TransactionPhase.AFTER_SUCCESS) MyEvent myEvent) {
    // Do something with the event
}
The transaction triggering the event can be involved in a XA distributed transaction.
At which phase of the two-phase commit will the observer be called?
The documentation states:
An after success observer method is called during the after completion phase of the transaction, only when the transaction completes successfully.
I'm not sure what this implies when using distributed transactions.
Furthermore, is there any guarantee that the data is already in the DB (i.e. can my observer method be called when the decision to commit has been reached, but the data is not yet persisted to the DB)?
Unfortunately, CDI 1.x does not define the behavior of events in async call stacks. The behavior you will see will be container specific, including some containers that invoke this method synchronously instead of async. CDI 2.0 is introducing an async observer for events.
The XA transaction manager is responsible for applying XA semantics, meaning it has to go through the two-phase commit dialog with all the involved (distributed) parties, and then it will commit and, if there are no errors, consider the transaction done.
In your case, the observer will be called once the data has been committed to the DB in the context of the transaction, and not in any previous or interim state.
Studying Spring event handlers, I cannot see the gain or benefit of implementing our own event listener. I mean, what is the difference between object A calling object B directly and synchronously, versus object A publishing an event via a listener and an event handler, and object B then being called? Maybe there is some architectural gain, or lower coupling? What is the real gain? Thanks.
As you mentioned, this approach lowers the coupling between classes, as the sender and the receiver of the event don't know about each other.
That said, it's not ideal to use this approach for all method invocations, but it makes a lot of sense when the operations are not really related.
For example, imagine the following scenario: When a user completes his registration, we'll send him/her a welcome email.
In this case, coupling the registration process to sending the email is not great. But if you have a listener on the UserRegistered event, then the email can be triggered from that event. I really like this way of building applications, as it makes them more decoupled. But depending on which event dispatcher you use, it can become more difficult to understand the flow (e.g. if the event dispatcher receives a string as the event name and a map with the data, then it is difficult to get a list of all consumers).
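A minimal sketch of that scenario using Spring's @EventListener (available since Spring 4.2; all names are hypothetical):

import org.springframework.context.ApplicationEventPublisher;
import org.springframework.context.event.EventListener;
import org.springframework.stereotype.Component;
import org.springframework.stereotype.Service;

// Hypothetical event published when a user completes registration.
class UserRegistered {
    private final String email;
    UserRegistered(String email) { this.email = email; }
    String getEmail() { return email; }
}

@Service
class RegistrationService {
    private final ApplicationEventPublisher publisher;

    RegistrationService(ApplicationEventPublisher publisher) {
        this.publisher = publisher;
    }

    void register(String email) {
        // ... persist the new user ...
        // The registration code knows nothing about emails.
        publisher.publishEvent(new UserRegistered(email));
    }
}

@Component
class WelcomeEmailListener {
    @EventListener
    public void on(UserRegistered event) {
        // send the welcome email to event.getEmail()
    }
}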
One important aspect (or I should say smell) is that two listeners listening for the same event shouldn't depend on the order in which they consume the event. This applies to all event dispatchers, not just the one implemented by Spring.
We have a situation where we want to perform some tasks at the end of a request or transaction. More specifically, we need to collect some data during that request, and at the end we use that data to do some automatic database updates.
This process should be as transparent as possible, i.e. users of the EJBs that require this should not have to worry about that.
In addition we can't control the exact call stack since there are multiple entry points to the process.
To achieve our goal, we're currently considering the following concept:
certain low level operations (that are always called) fire a CDI event
a stateless EJB listens for those events and upon receiving one it collects the data and stores it into a scoped CDI bean (either request scope or conversation scope would be fine)
at the end of the request another event is fired which causes the data in the scoped CDI bean to be processed
So far, we managed to get steps 1 and 2 up and running.
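For reference, a minimal sketch of those first two steps (all names are hypothetical):

import java.util.ArrayList;
import java.util.List;
import javax.ejb.Stateless;
import javax.enterprise.context.RequestScoped;
import javax.enterprise.event.Observes;
import javax.inject.Inject;

// Hypothetical event fired by the low-level operations (step 1).
class DataCollectedEvent {
    final Object data;
    DataCollectedEvent(Object data) { this.data = data; }
}

// Request scoped holder for everything collected during the request.
@RequestScoped
class CollectedData {
    private final List<Object> entries = new ArrayList<Object>();
    void add(Object entry) { entries.add(entry); }
    List<Object> getEntries() { return entries; }
}

// Stateless EJB observing the events and filling the holder (step 2).
@Stateless
public class DataCollector {

    @Inject
    private CollectedData collectedData;

    public void collect(@Observes DataCollectedEvent event) {
        collectedData.add(event.data);
    }
}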
However, the problem is step 3:
As I already said, there are multiple entry points to the process (originating from web requests, scheduled jobs or remote calls) and thus we thought of the following approach:
3a. A CDI extension scans all beans and adds an annotation to every EJB.
3b. An interceptor is registered for the added annotation and thus on every call to an EJB method the interceptor is invoked.
3c. The first invocation of that interceptor will fire an event after the invoked method has returned.
And here's the problem (again in the 3rd step :) ):
How would the interceptor know whether it was the first invocation or not?
We thought of the following, but neither worked so far:
get a request/conversation scoped bean
this failed because no context was active
get the request/conversation context and activate it (which should then mark the first invocation, since in subsequent invocations the context would already be active)
the system created another request context, so Weld ended up with at least two active request contexts and complained about this
the conversation context either stayed active or was deactivated prematurely (we couldn't yet figure out why)
start a long-running conversation and end it after the invocation
this failed because there was no active request context :(
Another option we didn't try yet but which seems to be discouraged:
use a ThreadLocal to either store some context data or at least use invocation context propagation as described here: http://blog.dblevins.com/2009/08/pattern-invocationcontext-propagation.html
However, AFAIK there's no guarantee that the request will be handled entirely by the same thread, so wouldn't even invocation context propagation break when the container decides to switch to another thread?
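For what it's worth, a hedged sketch of that ThreadLocal option (the binding annotation and names are hypothetical, and it assumes the whole request stays on one thread, which, as noted, is not guaranteed):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import javax.interceptor.AroundInvoke;
import javax.interceptor.Interceptor;
import javax.interceptor.InterceptorBinding;
import javax.interceptor.InvocationContext;

// Hypothetical binding the CDI extension would add to every EJB.
@InterceptorBinding
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.TYPE, ElementType.METHOD})
@interface Collected { }

@Collected
@Interceptor
public class FirstInvocationInterceptor {

    // EJB call depth per thread; reaching 0 again means the outermost
    // invocation has just returned.
    private static final ThreadLocal<Integer> DEPTH = new ThreadLocal<Integer>() {
        @Override
        protected Integer initialValue() {
            return 0;
        }
    };

    @AroundInvoke
    public Object around(InvocationContext ctx) throws Exception {
        DEPTH.set(DEPTH.get() + 1);
        try {
            return ctx.proceed();
        } finally {
            int depth = DEPTH.get() - 1;
            DEPTH.set(depth);
            if (depth == 0) {
                DEPTH.remove();
                // outermost invocation finished: fire the final event here
            }
        }
    }
}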
So, thanks to all who endured with me and read through all that lengthy description.
Any ideas of how to solve this are welcome.
Btw, here are some of the software components/standards we're using (and which we can't switch):
JBoss 7.1.0.Final (along with WELD and thus CDI 1.0)
EJB 3.1
Hibernate 3.6.9 (can't switch to 4.0.0 yet)
UPDATE:
With the suggestions you gave so far, we came up with the following solution:
use a request scoped object to store the data in
the first time something is stored in that object, an event is fired
a listener is invoked before the end of the transaction (using @Observes(during = TransactionPhase.BEFORE_COMPLETION) - thanks, @bkail)
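A minimal sketch of that listener (the event name is hypothetical; CollectedData is the request scoped holder from the earlier sketch):

import javax.ejb.Stateless;
import javax.enterprise.event.Observes;
import javax.enterprise.event.TransactionPhase;
import javax.inject.Inject;

// Hypothetical event fired the first time data is stored for the request.
class ProcessCollectedDataEvent { }

@Stateless
public class CollectedDataProcessor {

    @Inject
    private CollectedData collectedData;

    // Invoked just before the transaction completes, still inside the
    // transaction, so the automatic database updates are committed
    // together with the rest of the request's work.
    public void process(@Observes(during = TransactionPhase.BEFORE_COMPLETION)
                        ProcessCollectedDataEvent event) {
        // perform the automatic database updates using collectedData
    }
}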
This works so far but there's still one problem:
We also have MBeans that are managed by CDI and automatically registered to the MBean server. Thus those MBeans can get EJB references injected.
However, when we try and call an MBean method which in turn calls an EJB and thus causes the above process to start we get a ContextNotActiveException. This indicates that within JBoss the request context is not started when executing an MBean method.
This also doesn't work when using JNDI lookups to get the service instead of DI.
Any ideas on this as well?
Update 2:
Well, seems like we got it running now.
Basically we did what I described in the previous update and solved the problem of the context not being active by creating our own scope and context (which is activated the first time an EJB method is called and deactivated when the corresponding interceptor finishes).
Normally we should have been able to do the same with the request scope (at least if we didn't miss anything in the spec), but due to a bug in JBoss 7.1 there is not always an active request context when calling EJBs from MBeans or scheduled jobs (which do a JNDI lookup).
In the interceptor we could try to get an active context and on failure activate one of those present in the bean manager (most likely EjbRequestContext in that case) but despite our tests we'd rather not count on that working in every case.
A custom scope, however, should be independent of any JBoss scope and thus should not interfere here.
Thanks to all who answered/commented.
So there's one last problem though: whose answer should I accept, since you all helped us move in the right direction? - I'll try to solve that myself and attribute the points to jan - he's got the fewest :)
Do the job in a method annotated with @PreDestroy.
import javax.annotation.PreDestroy;
import javax.enterprise.context.RequestScoped;
import javax.inject.Named;

@Named
@RequestScoped
public class Foo {

    @PreDestroy
    public void requestDestroyed() {
        // Here.
    }
}
It's invoked right before the bean instance is destroyed by the container.
What you're looking for is SessionSynchronization. This lets an EJB tie in to the transaction lifecycle and be notified when transactions are being completed.
Note, I am being specific about transactions: you mention "requests and transactions", and I don't know if you specifically mean EJB transactions or something tied to your application.
But I'm talking about EJB Transactions.
The downside is that it is only invoked when the specific EJB is invoked, not to "all" transactions in general. But that may well be appropriate anyway.
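A minimal sketch (the bean name is hypothetical; note that SessionSynchronization only works on stateful session beans):

import javax.ejb.SessionSynchronization;
import javax.ejb.Stateful;

@Stateful
public class TransactionAwareBean implements SessionSynchronization {

    @Override
    public void afterBegin() {
        // a transaction has started for this bean
    }

    @Override
    public void beforeCompletion() {
        // the transaction is about to be committed
    }

    @Override
    public void afterCompletion(boolean committed) {
        if (committed) {
            // run the post-commit work collected during the transaction
        }
    }
}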
Finally, be careful in these interim callback areas - I've had weird things happen with transactional behavior in these lifecycle methods. In the end, we ended up putting stuff into a local, memory-based queue that another thread reaped for committing to JMS or whatever. The downside was that they were tied to the transaction at hand; the upside was that they actually worked.
Phew, that's a complex scenario :)
From how I understand what you've tried so far you are pretty advanced with the CDI techniques - there is nothing big you are missing.
I would say that you should be able to activate a conversation context at the entry point (you've probably seen the relevant documentation?) and work with it for the whole processing. It might actually be worthwhile to consider implementing your own scope. I did that once in a distantly related scenario where we could not tell whether we'd been invoked by an HTTP request or via EJB remoting.
But to be honest, all this feels far too complex. It's a rather fragile construct of interceptors notifying each other with events, which all in all seems just too easy to break.
Could it be that there is another approach which better fits your needs? E.g. you might try to hook into the transaction management itself and execute your data accumulation from there.