CDI 2 Event ordering with @Priority not working - java

The answer to this question describes how CDI events can be ordered with the @Priority annotation:
Execution order of different CDI Events in same transaction
Since CDI 2.0 (see here, Chapter 10.5.2), you may define an order using the @Priority annotation, specifying a number as its value. Observers with smaller priority values are called first, and observers with no @Priority annotation get the default priority (Interceptor.Priority.APPLICATION + 500). As with CDI 1.2, observers with the same priority have no defined order and may be called by the CDI container in any order.
CDI 2.0 observer ordering does not apply to asynchronous observer methods (per the spec), as asynchronous observers are expected to be called as soon as possible and in different contexts. If you need some kind of ordering in your use case, you should make your asynchronous observer trigger the next event, instead of calling it from your "main" method.
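That chaining advice can be illustrated without any framework: the plain-Java sketch below uses CompletableFuture standing in for fireAsync (all names here are hypothetical), with the "next event" triggered only after the first asynchronous step completes.

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CopyOnWriteArrayList;

public class AsyncChainDemo {
    // Run the "second observer" only after the first async step completes,
    // mirroring the fire-the-next-event-from-the-first-observer advice.
    static List<String> runChained() {
        List<String> order = new CopyOnWriteArrayList<>();
        CompletableFuture
                .runAsync(() -> order.add("first"))   // first async "observer"
                .thenRun(() -> order.add("second"))   // "fires the next event"
                .join();
        return order;
    }

    public static void main(String[] args) {
        System.out.println(order());
    }

    static List<String> order() {
        return runChained();   // ordering is now guaranteed by the chaining
    }
}
```

Had both steps been submitted independently with runAsync, their completion order would be up to the scheduler, just as independent async observers are.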
So even if I fire two different event objects, the order is not specified? – Noixes Jul 22 '20 at 6:45
Yes. Unless you are using CDI 2 and defining different priorities, the order is unspecified. You must realize that you may "discover" the order of a given implementation in such cases, but it is not recommended to rely on it, because a future version of the same implementation may change it without violating the spec. –
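The sorting rule described above can be simulated in plain Java (no CDI container involved; Observer and callOrder are illustrative names, and 2000 is the value of Interceptor.Priority.APPLICATION):

```java
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

public class ObserverOrderDemo {
    // javax.interceptor.Interceptor.Priority.APPLICATION is 2000, so the
    // spec's default observer priority is APPLICATION + 500 = 2500.
    static final int DEFAULT_PRIORITY = 2000 + 500;

    static final class Observer {
        final String name;
        final int priority;
        Observer(String name, int priority) {
            this.name = name;
            this.priority = priority;
        }
    }

    // Sort the way the container orders synchronous observers:
    // smaller priority values are called first.
    static String callOrder(List<Observer> observers) {
        return observers.stream()
                .sorted(Comparator.comparingInt(o -> o.priority))
                .map(o -> o.name)
                .collect(Collectors.joining(","));
    }

    public static void main(String[] args) {
        System.out.println(callOrder(List.of(
                new Observer("unannotated", DEFAULT_PRIORITY),
                new Observer("p2", 2),
                new Observer("p1", 1))));
    }
}
```

Note that a stable sort like this still leaves equal priorities in arbitrary relative order from the container's point of view, matching the spec's "any order" wording.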
But it doesn't work in my example:
@Stateless
public class EventTest {

    @Inject
    @QualifierA
    private Event<String> eventA;

    @Inject
    @QualifierB
    private Event<String> eventB;

    @Test
    public void test() throws VerarbeitungsException {
        eventB.fire("B");
        eventA.fire("A");
    }

    public void observerA(@Observes(during = TransactionPhase.AFTER_SUCCESS) @Priority(value = 1) @QualifierA String xml) {
        send(xml);
    }

    public void observerB(@Observes(during = TransactionPhase.AFTER_SUCCESS) @Priority(value = 2) @QualifierB String xml) {
        send(xml);
    }

    private void send(String xml) {
        System.out.println(xml);
    }
}
In my test class I fire event B and then A. The test log shows B/A, but I would expect A/B as defined with @Priority. I'm using WildFly 14 with CDI 2.0. Does sorting of events only work for observers of the same event/qualifier?

The ordering is between observers of the same event, but you defined two events with different qualifiers.
To properly test the priority you should fire only one event, and define two observers for that event.
For example:
@Stateless
public class EventTest {

    @Inject
    @QualifierA
    private Event<String> eventA;

    @Test
    public void test() throws VerarbeitungsException {
        eventA.fire("A");
    }

    // both observers use @QualifierA, so both receive the single event
    public void observerA(@Observes(during = TransactionPhase.AFTER_SUCCESS) @Priority(value = 1) @QualifierA String xml) {
        send("A: " + xml);
    }

    public void observerB(@Observes(during = TransactionPhase.AFTER_SUCCESS) @Priority(value = 2) @QualifierA String xml) {
        send("B: " + xml);
    }

    private void send(String xml) {
        System.out.println(xml);
    }
}

Event qualifiers
The event qualifiers act as topic selectors, allowing the consumer to narrow the set of events it observes. An event qualifier may be an instance of any qualifier type.
Observer methods
An observer method allows the application to receive and respond synchronously to event notifications, and an async observer method allows the application to receive and respond asynchronously to event notifications. They both act as event consumers, observing events of a specific type, with a specific set of qualifiers. Any Java type may be observed by an observer method.
An observer method is a method of a bean class or extension with a parameter annotated @Observes or @ObservesAsync.
An observer method will be notified of an event if:
the event object is assignable to the type observed by the observer method,
the observer method has all the event qualifiers of the event, and
either the event is not a container lifecycle event, or the observer method belongs to an extension.
In your code, you are specifically requesting that each observer sees events with different qualifiers, so the ordering will not be enforced.
Change it to the following and try again:
public void observesFirst(@Observes(during = TransactionPhase.AFTER_SUCCESS) @Priority(value = 1) String xml) {
    send(xml);
}

public void observesSecond(@Observes(during = TransactionPhase.AFTER_SUCCESS) @Priority(value = 2) String xml) {
    send(xml);
}

Related

How to disable a particular event handler from axon?

We have a few event handlers configured in the code, but I'd like to disable a particular event handler from Axon.
Please note: I can do this using @Profile or a @Conditional parameter, but I am interested to know whether there is any way, such as a config at EventConfig, to exclude a particular event handler from processing.
Please refer to the code below.
Event Source Config
public class AxonConfig {

    public void configureProcessorDefault(EventProcessingConfigurer processingConfigurer) {
        processingConfigurer.usingSubscribingEventProcessors();
    }
}
Event handlers
@ProcessingGroup("this")
class ThisEventHandler {
    // your event handlers here...
}

@ProcessingGroup("that")
class ThatEventHandler {
    // your event handlers here...
}

@ProcessingGroup("other")
class OtherEventHandler {
    // your event handlers here...
}
The EventProcessingConfiguration that's constructed as a result of the EventProcessingConfigurer provides a means to retrieve your Event Processor instances. Furthermore, any EventProcessor implementation has a start() and shutDown() method.
You can thus stop the Event Processors that you want to stop.
The following piece of code would get that done for you:
class EventProcessorControl {

    private final EventProcessingConfiguration processingConfig;

    EventProcessorControl(EventProcessingConfiguration processingConfig) {
        this.processingConfig = processingConfig;
    }

    public void startProcessor(String processorName) {
        processingConfig.eventProcessor(processorName)
                        .ifPresent(EventProcessor::start);
    }

    public void stopProcessor(String processorName) {
        processingConfig.eventProcessor(processorName)
                        .ifPresent(EventProcessor::shutDown);
    }
}
As a side note, I would warn against using only the SubscribingEventProcessor. Of the Event Processor implementations, it provides the least flexibility when it comes to performance, distribution and error handling. I'd much rather try the PooledStreamingEventProcessor instead.
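A hedged configuration sketch of that suggestion, assuming a recent Axon 4.x where a pooled-streaming default can be set on the configurer (check your version's EventProcessingConfigurer API before relying on this):

```java
public class AxonConfig {

    public void configureProcessorDefault(EventProcessingConfigurer processingConfigurer) {
        // Default every processing group to a pooled streaming processor
        // instead of a subscribing one.
        processingConfigurer.usingPooledStreamingEventProcessors();
    }
}
```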

Why does my Spring @EventListener show different transactional behavior on event submission than when being called directly?

When using the @EventListener functionality with Spring Data's repositories, the behavior is different than when calling the same code procedurally.
My persistent objects publish events using the following base class:
public abstract class Aggregate {

    @Transient
    private transient final Set<Object> events = new LinkedHashSet<>();

    protected <T> T registerEvent(T event) {
        this.events.add(event);
        return event;
    }

    @DomainEvents
    Collection<Object> events() {
        return Collections.unmodifiableSet(events);
    }

    @AfterDomainEventPublication
    void clearEvents() {
        this.events.clear();
    }
}
My event listening class is implemented as follows:
class Service {

    @EventListener
    public void listener(SomeEvent event) {
        someOtherRepository.save(someOtherPersistentObject);
        someOtherCode();
    }
}
When the listener is triggered and someOtherRepository's save(…) method fails, a rollback is issued, but someOtherCode() is executed regardless of the rollback.
However, when I remove all @EventListener functionality and call the listener(…) method directly, after the point where the originating repository is responsible for firing the event, I get a different behavior: someOtherCode() is never executed and the someOtherRepository.save(…) method fails immediately.
The original service responsible for publishing the event looks like this:
public class OriginatingService {

    @Transactional
    public void someMethod() {
        originatingRepoDifferentFromSomeOtherRepo.save(something);
    }
}
Why is this happening and is there a way to force the same behavior onto my event listening implementation?
Because writes to the database may be delayed until transaction commit, i.e. when the transactional method returns.
Update as below to explicitly trigger an immediate flush:
@EventListener
public void listener(SomeEvent event) {
    someOtherRepository.saveAndFlush(someOtherPersistentObject);
    someOtherCode();
}

Designing custom workflow in JAVA and Spring

I am working on a Spring 2.0.1.RELEASE application.
Brief of the application:
1. I have separate Transformer beans that transform my DTO to Domain and vice versa.
2. I have separate Validator beans that validate the domain object being passed.
3. I have Service classes that take care of applying rules and calling the persistence layer.
Now, I want to build a workflow in my application, where I will just call the start of the workflow and the steps below will be executed in order, with exception handling done per step:
1. First, Transformation: the transformToDomain() method will be called for that object type.
2. Second, Validator: the class's valid() method will be called for that object.
3. Third, Service: the class's save() method will be called for that object.
4. Fourth, Transformation: the transformToDTO() method will be called for that object type.
After this my workflow ends and I return the DTO object as the response of my REST API.
Exception handling is something I also want to take care of: if a particular exception handler exists for a step, call it; otherwise call a global exception handler.
I designed a prototype of this, but I am looking for expert advice on how it can be achieved with a better design in Java.
An explanation with an example considering the above use case would be highly appreciated.
I'm not so sure that what you are describing is a workflow system in its true sense; perhaps a Chain of Responsibility is more what you are talking about?
Following what you described as a sequence of execution, here is a simplified example of how I would implement the chain:
Transformer.java
public interface Transformer<IN, OUT> {
    OUT transformToDomain(IN dto);
    IN transformToDTO(OUT domainObject);
}
Validator.java
public interface Validator<T> {
    boolean isValid(T object);
}
Service.java
public interface Service {
    void save(Object object);
}
And the implementation that binds everything:
ProcessChain.java
public class ProcessChain {

    private Transformer transformer;
    private Service service;
    private Validator validator;

    Object process(Object dto) throws MyValidationException {
        Object domainObject = transformer.transformToDomain(dto);
        if (!validator.isValid(domainObject)) {
            throw new MyValidationException("Validation message here");
        }
        service.save(domainObject);
        return transformer.transformToDTO(domainObject);
    }
}
I haven't specified anything Spring-related here because your question seems to be a design question rather than a technology question.
Hope this helps
Brief of what I implemented, without much hassle:
This is how I created the flow of handlers:
Stream.<Supplier<RequestHandler>>of(
        TransformToDomainRequestHandler::new,
        ValidateRequestHandler::new,
        PersistenceHandler::new,
        TransformToDTORequestHandler::new)
    .sequential()
    .map(Supplier::get)                     /* create the handler instance */
    .reduce((processed, unProcessed) -> {   /* chain all handlers together */
        RequestHandler previous = processed;
        RequestHandler target = previous.getNextRequestHandler();
        while (target != null) {            /* walk to the end of the chain */
            previous = target;
            target = target.getNextRequestHandler();
        }
        previous.setNextRequestHandler(unProcessed);
        return processed;
    }).get();
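The same wiring can be sketched more compactly with function composition instead of a RequestHandler class. This is a framework-free illustration with hypothetical step names, not the actual handler API:

```java
import java.util.List;
import java.util.function.UnaryOperator;

public class ChainDemo {
    // Compose the steps left to right; reduce plays the role of the
    // setNextRequestHandler wiring above.
    static UnaryOperator<String> chain(List<UnaryOperator<String>> steps) {
        return steps.stream()
                .reduce(UnaryOperator.identity(), (a, b) -> s -> b.apply(a.apply(s)));
    }

    public static void main(String[] args) {
        UnaryOperator<String> pipeline = chain(List.of(
                s -> s + "->domain",   // transformToDomain
                s -> s + "->valid",    // validate
                s -> s + "->saved",    // persist
                s -> s + "->dto"));    // transformToDTO
        System.out.println(pipeline.apply("req"));
    }
}
```

Each step here takes and returns a String for brevity; in the real chain the payload would be the DTO/domain object moving through the handlers.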
This is my RequestHandler, which all other handlers extend.

Request-scoped ApplicationEventListener fails to receive events

I have the need to register a separate application event listener for each request. The listener's purpose is to catch events coming in from other REST requests, while the listener's request is blocked awaiting all the required events to come in.
I have code such as this:
@Component
// @Scope(WebApplicationContext.SCOPE_REQUEST)
public static class WhistleEventListener implements ApplicationListener<WhistleEvent> {

    volatile Consumer<WhistleEvent> handler;

    @Override
    public void onApplicationEvent(WhistleEvent we) {
        final Consumer<WhistleEvent> h = handler;
        if (h != null) h.accept(we);
    }
}

@Autowired WhistleEventListener whistleEventListener;
This code receives events, but as soon as I uncomment the @Scope annotation, it stops receiving events.
Are request-scoped application event listeners supported, are they supposed to work? If so, can I do something to make my listener work?
I suspect you have a misunderstanding of the application event dispatching mechanics: the event is dispatched against bean definitions, not bean instances, and each bean definition is resolved into an instance at the moment, and in the context, of event publication. That means that your event will be dispatched only to the request-scoped bean belonging to the request inside which the event is published, but you want the listeners of all current requests to be notified.
More generally, the purpose of a scope is to isolate scope instances, which contain separate bean instances. If you do not want isolation, you should use a scope that does not have separate instances, for instance the application scope.
That is, to dispatch events to other scope instances, you'd have to do the dispatching yourself, for instance like:
@Component
public class WhistleEventMediator implements ApplicationListener<WhistleEvent> {

    // CopyOnWriteArraySet keeps subscription and dispatch thread safe
    final Set<Consumer<WhistleEvent>> consumers = new CopyOnWriteArraySet<>();

    void subscribe(Consumer<WhistleEvent> c) {
        consumers.add(c);
    }

    void unsubscribe(Consumer<WhistleEvent> c) {
        consumers.remove(c);
    }

    @Override
    public void onApplicationEvent(WhistleEvent we) {
        // delegate to every currently subscribed consumer
        consumers.forEach(c -> c.accept(we));
    }
}
@Component
@Scope(WebApplicationContext.SCOPE_REQUEST)
public class WhateverBean implements Consumer<WhistleEvent> {

    @Inject
    WhistleEventMediator mediator;

    @PostConstruct
    void init() {
        mediator.subscribe(this);
    }

    @PreDestroy
    void destroy() {
        mediator.unsubscribe(this);
    }

    // handle whistle event
}
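The subscribe/dispatch mechanics can be demonstrated without Spring at all. Below is a minimal sketch (all names hypothetical) of the mediator fanning events out to whichever consumers are currently subscribed:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

public class MediatorDemo {
    // Framework-free mediator: one application-wide dispatcher delivering
    // each event to every currently subscribed consumer.
    static class Mediator<E> {
        private final List<Consumer<E>> consumers = new CopyOnWriteArrayList<>();
        void subscribe(Consumer<E> c)   { consumers.add(c); }
        void unsubscribe(Consumer<E> c) { consumers.remove(c); }
        void publish(E event)           { consumers.forEach(c -> c.accept(event)); }
    }

    public static void main(String[] args) {
        Mediator<String> mediator = new Mediator<>();
        List<String> seen = new CopyOnWriteArrayList<>();
        Consumer<String> requestA = e -> seen.add("A:" + e);
        Consumer<String> requestB = e -> seen.add("B:" + e);
        mediator.subscribe(requestA);      // two "request-scoped" consumers
        mediator.subscribe(requestB);
        mediator.publish("whistle");       // both requests see this event
        mediator.unsubscribe(requestA);    // request A ends (@PreDestroy)
        mediator.publish("again");         // only B sees this one
        System.out.println(seen);
    }
}
```

The subscribe call corresponds to @PostConstruct and the unsubscribe call to @PreDestroy in the Spring version above.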

How to select between different CDI-bean implementations runtime

I have a MessageListener whose purpose is to start clients implementing the Client interface. The different implementations of the Client interface are not known at compile time.
The MessageListener uses the Launcher bean to start the clients. So my problem is that I need to construct a Launcher bean that has the selected implementation of the Client interface injected into it. I'm not sure how to do this; should I approach the problem differently?
public class MyMessageConsumer implements MessageListener {

    public void onMessage(Message message) {
        String clientType = message.getClientType();
        // Here I need to construct a Launcher bean which has the correct Client implementation injected
        launcher.startClient(message);
    }
}

public class Launcher {

    @Inject
    private Client client;

    public void startClient(Message message) {
        ...
        client.start(message);
    }
}
Edit: I realized that the tricky part is not finding the correct implementation, but that I need the consumption of a message to happen as a new request. Does that make clear what I'm after?
What you want is a producer.
This way you separate the client of the contextual instance from the producer. So inject the implementations into a producer and have it decide which one to use.
For this to be transparent, and to avoid ambiguous dependencies, you could produce a value with a @Dynamic qualifier.
@Qualifier
@Retention(RetentionPolicy.RUNTIME)
@Target({ ElementType.FIELD, ElementType.METHOD, ElementType.TYPE, ElementType.PARAMETER })
public @interface Dynamic {}

@Inject
@Dynamic
Foo foo;
..............................
@Produces
@Dynamic
public Foo getFoo() {
    // find out what implementation to use and return it
}
Creating your own qualifier and producer is very simple to google.
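Framework aside, the lookup a producer method performs can be sketched in plain Java (the registry, keys, and Client interface below are hypothetical stand-ins for CDI's contextual lookup):

```java
import java.util.Map;
import java.util.function.Supplier;

public class ProducerDemo {
    interface Client {
        String start(String message);
    }

    // Pick an implementation by a runtime key (the message's clientType),
    // the way a producer method would inspect state before returning a bean.
    static final Map<String, Supplier<Client>> REGISTRY = Map.of(
            "typeA", () -> msg -> "A handled " + msg,
            "typeB", () -> msg -> "B handled " + msg);

    static Client produce(String clientType) {
        return REGISTRY.getOrDefault(clientType, () -> msg -> "no client for " + msg).get();
    }

    public static void main(String[] args) {
        System.out.println(produce("typeB").start("hello"));
    }
}
```

In real CDI the same dispatch is often done by injecting Instance<Client> into the producer and selecting by qualifier, rather than keeping a map by hand.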
