I was looking at this project https://github.com/MSzturc/cdi-async-events-extension/,
which provides async events in CDI 1.x (built-in async events arrived in CDI 2.0).
Now I'm questioning this piece of code inside the custom Extension:
public <X> void processAnnotatedType(@Observes ProcessAnnotatedType<X> event, final BeanManager beanManager) {
    final AnnotatedType<X> type = event.getAnnotatedType();
    for (AnnotatedMethod<?> method : type.getMethods()) {
        for (final AnnotatedParameter<?> param : method.getParameters()) {
            if (param.isAnnotationPresent(Observes.class) && param.isAnnotationPresent(Async.class)) {
                asyncObservers.add(ObserverMethodHolder.create(this.pool, beanManager, type, method, param));
            }
        }
    }
}
public void afterBeanDiscovery(@Observes AfterBeanDiscovery event) {
    for (ObserverMethod<?> om : this.asyncObservers) {
        event.addObserverMethod(om);
    }
}
Basically, while each bean is being registered, it looks at each method to see whether a parameter carries the @Async annotation.
Then, after the discovery step, it registers the @Observes @Async methods.
Looking inside the addObserverMethod() method, provided by JBoss Weld 2, I see:
additionalObservers.add(observerMethod);
My question then is: wouldn't those methods be called twice? I mean, they may be registered twice - first by the container itself, and then by the call to addObserverMethod().
I am not familiar with the project, but at first glance it seems pretty outdated and unmaintained.
As for the extension - it basically adds the "same" observer method (OM) again, with its own OM implementation. So I would say the behaviour depends on the CDI implementation, as the spec does not guarantee what happens when you register "the same" OM again - is it replaced, or is it just added, like you say?
And by "the same" I mean the exact same underlying Java method although wrapped in a fancier coat.
Ultimately, you can easily try it and see for yourself, but I would advise against using that project as any problems you bump into are unlikely to be resolved on the project side.
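If you do want to try it, a quick experiment is an observer that counts its own invocations: fire a single event and check the counter afterwards. A minimal sketch (the Ping event and the counter are illustrative; @Async is the extension's annotation):

import java.util.concurrent.atomic.AtomicInteger;
import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.event.Observes;

@ApplicationScoped
public class PingObserver {

    static final AtomicInteger CALLS = new AtomicInteger();

    // fire one Ping via Event<Ping> and inspect CALLS afterwards;
    // a value greater than 1 per fired event would confirm that the
    // method really was registered twice
    public void onPing(@Observes @Async Ping ping) {
        CALLS.incrementAndGet();
    }
}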
Related
We are using Spring Cloud Stream as the underlying implementation for event messaging in our microservice-based architecture. We wanted to go a step further and provide an abstraction layer between our services and the Spring Cloud Stream library to allow for dynamic channel subscriptions without too much boilerplate configuration code in the services themselves.
The original idea was as follows:
The messaging-library provides a BaseEventHandler abstract class which all individual services must extend. All handlers of a specific service would listen to the same input channel, though only the one corresponding to the type of the event to handle would be called. This looks as follows:
public abstract class BaseEventHandler<T extends Event> {

    @StreamListener
    public abstract void handle(T event);
}
Each service offers its own events package, which contains N EventHandlers. These are plain POJOs which must be instantiated programmatically. This would look as follows:
public class ServiceEventHandler extends BaseEventHandler<ImportantServiceEvent> {

    @Override
    public void handle(ImportantServiceEvent event) {
        // todo stuff
    }
}
Note that these are simple classes and not Spring beans at this point, with ImportantServiceEvent implementing Event.
Our messaging-library is scanned on start-up as early as possible, and performs handler initialization. To do this, the following steps are done:
We scan all available packages in the classpath which provide some sort of event handling and retrieve all subclasses of BaseEventHandler (this step is sketched below).
We retrieve the @StreamListener annotation in the hierarchy of the subclass, and change its value to the corresponding input channel for this service.
Since our handlers might need to speak to some other application components (repositories etc.), we use DefaultListableBeanFactory to instantiate our handlers as singleton, as follows:
val bean = beanFactory.createBean(eventHandlerClass, AutowireCapableBeanFactory.AUTOWIRE_BY_TYPE, true);
beanFactory.registerSingleton(eventHandlerClass.getSimpleName(), bean);
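For the scanning step, a hedged sketch using Spring's own classpath scanner (the base package name is illustrative):

import org.springframework.beans.factory.config.BeanDefinition;
import org.springframework.context.annotation.ClassPathScanningCandidateComponentProvider;
import org.springframework.core.type.filter.AssignableTypeFilter;

// find all concrete BaseEventHandler subclasses on the classpath
ClassPathScanningCandidateComponentProvider scanner =
        new ClassPathScanningCandidateComponentProvider(false);
scanner.addIncludeFilter(new AssignableTypeFilter(BaseEventHandler.class));

for (BeanDefinition candidate : scanner.findCandidateComponents("com.example.events")) {
    Class<?> eventHandlerClass = Class.forName(candidate.getBeanClassName());
    // instantiate and register via the bean factory as shown above
}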
After this, we ran into several issues.
The Spring Cloud Stream @StreamListener annotation cannot be inherited, as it is a method annotation. Despite this, some mechanism seems to be able to find it on the parent (as the StreamListenerAnnotationBeanPostProcessor is registered) and attempts to perform post-processing when the ServiceEventHandler is initialized. Our assumption is that Spring Cloud Stream uses something like AnnotatedElementUtils.findAllMergedAnnotations().
As a result, we thought we might be able to alter the annotation value on the base class prior to each instantiation of a child class. The idea was that, although our BaseEventHandler would simply end up with whatever value was set last once this initialization phase is over, each child class would be instantiated with the correct channel name at the moment of its instantiation, since we do not expect to rebind. However, this is not the case: the value of the @StreamListener annotation that is used is always the one on the base.
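That assumption about merged-annotation lookup is easy to check in isolation; a hedged sketch of such a check:

import java.lang.reflect.Method;
import org.springframework.core.annotation.AnnotatedElementUtils;

Method handle = ServiceEventHandler.class.getMethod("handle", ImportantServiceEvent.class);
// "find" semantics traverse the class hierarchy, so the @StreamListener declared
// on BaseEventHandler.handle() is visible through the concrete override
StreamListener listener = AnnotatedElementUtils.findMergedAnnotation(handle, StreamListener.class);
System.out.println(listener != null); // prints "true"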
The question is then: is what we want possible with Spring Cloud Stream? Or is it rather a plain Java problem that we have here (does not seem to be the case)? Did the Spring Cloud Stream team foresee a use case like this, and are we simply doing it completely wrong?
This question was also posted on the Spring Cloud Stream tracker in case it might help garner a bit more attention.
Since the same people monitor SO and GitHub issues, it's rather pointless to post in both places. Stack Overflow is preferred for questions.
You should be able to subclass the BPP; it specifically has this extension point:
/**
 * Extension point, allowing subclasses to customize the {@link StreamListener}
 * annotation detected by the postprocessor.
 *
 * @param originalAnnotation the original annotation
 * @param annotatedMethod the method on which the annotation has been found
 * @return the postprocessed {@link StreamListener} annotation
 */
protected StreamListener postProcessAnnotation(StreamListener originalAnnotation, Method annotatedMethod) {
    return originalAnnotation;
}
Then override the bean definition with yours:
@Bean(name = STREAM_LISTENER_ANNOTATION_BEAN_POST_PROCESSOR_NAME)
public static StreamListenerAnnotationBeanPostProcessor streamListenerAnnotationBeanPostProcessor() {
    return new StreamListenerAnnotationBeanPostProcessor();
}
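Putting the two together, a hedged sketch of such a subclass; resolveChannelFor() is a hypothetical hook for your own handler-to-channel mapping:

import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.Map;
import org.springframework.core.annotation.AnnotationUtils;

public class DynamicChannelBeanPostProcessor extends StreamListenerAnnotationBeanPostProcessor {

    @Override
    protected StreamListener postProcessAnnotation(StreamListener originalAnnotation, Method annotatedMethod) {
        // copy the original attributes and swap in the per-service channel name
        Map<String, Object> attrs =
                new HashMap<>(AnnotationUtils.getAnnotationAttributes(originalAnnotation));
        attrs.put("value", resolveChannelFor(annotatedMethod.getDeclaringClass()));
        return AnnotationUtils.synthesizeAnnotation(attrs, StreamListener.class, annotatedMethod);
    }

    private String resolveChannelFor(Class<?> handlerClass) {
        // hypothetical: derive the input channel from the concrete handler type
        return handlerClass.getSimpleName() + "-input";
    }
}

Returning this subclass from the @Bean method above, in place of the default post-processor, should then apply your channel mapping to every detected @StreamListener method.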
I am reading the Spring documentation, and I keep coming across the word "callback".
For example:
Lifecycle callbacks
Initialization callbacks etc.
How should I understand a callback function? And what does "lifecycle callbacks" mean in Spring?
I have put effort into understanding this, but I am not sure I have understood it correctly.
Please help.
LifeCycle
In the context of Spring beans (which I believe is the context of what you are reading - hard to tell with the little info you've provided), beans go through different lifecycle phases (like creation and destruction). Here are the lifecycle phases of a Spring bean that you can hook into:
[diagram: the Spring bean lifecycle phases]
Callback
@R.T.'s Wikipedia link about what a callback is makes a good starting point for understanding callbacks. In Java, the concept of a callback is implemented differently:
In object-oriented programming languages without function-valued arguments, such as in Java before its 1.7 version, callbacks can be simulated by passing an instance of an abstract class or interface, of which the receiver will call one or more methods, while the calling end provides a concrete implementation.
A good example is given by @SotiriosDelamanolis in this answer, which I'll post here just for context.
/**
 * @author @SotiriosDelamanolis
 * see https://stackoverflow.com/a/19405498/2587435
 */
public class Test {

    public static void main(String[] args) throws Exception {
        new Test().doWork(new Callback() { // implementing class
            @Override
            public void call() {
                System.out.println("callback called");
            }
        });
    }

    public void doWork(Callback callback) {
        System.out.println("doing work");
        callback.call();
    }

    public interface Callback {
        void call();
    }
}
LifeCycle Callback
By looking at the image above, you can see that Spring allows you to hook into the bean lifecycle with some interfaces and annotations. For example:
To hook into the bean-creation part of the lifecycle, you can implement InitializingBean, which has the callback method afterPropertiesSet(). When you implement this interface, Spring picks up on it and calls afterPropertiesSet().
For example
public class SomeBean implements InitializingBean {

    @Override
    public void afterPropertiesSet() {
        // this is the callback method for the bean-creation phase of the
        // Spring bean lifecycle: do something after the properties
        // are set during bean creation
    }
}
Alternatively, you can annotate a method with @PostConstruct if you don't want to implement InitializingBean, or use the init-method attribute in XML config.
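A minimal sketch of that alternative (class and method names are arbitrary):

import javax.annotation.PostConstruct;

public class SomeOtherBean {

    // Spring invokes this callback once dependency injection is done,
    // comparable in timing to afterPropertiesSet() above
    @PostConstruct
    public void init() {
        // initialization logic
    }
}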
The diagram shows other lifecycle phases you can hook into and provide "callback" methods for. The lifecycle phases are underlined at the top of the diagram.
You can see more at Spring reference - Lifecycle Callbacks
The wiki has a good explanation:
In computer programming, a callback is a reference to executable code,
or a piece of executable code, that is passed as an argument to other
code. This allows a lower-level software layer to call a subroutine
(or function) defined in a higher-level layer.
Also check this interesting article Java Tip 10: Implement callback routines in Java
A sample example:
interface CallBack {
void methodToCallBack();
}
class CallBackImpl implements CallBack {
public void methodToCallBack() {
System.out.println("I've been called back");
}
}
class Caller {
public void register(CallBack callback) {
callback.methodToCallBack();
}
public static void main(String[] args) {
Caller caller = new Caller();
CallBack callBack = new CallBackImpl();
caller.register(callBack);
}
}
Paul Jakubik, Callback Implementations in C++.
Callbacks are most easily described in terms of the telephone system.
A function call is analogous to calling someone on a telephone, asking
her a question, getting an answer, and hanging up; adding a callback
changes the analogy so that after asking her a question, you also give
her your name and number so she can call you back with the answer.
I have written some code which I thought was quite well-designed, but then I started writing unit tests for it and stopped being so sure.
It turned out that in order to write some reasonable unit tests, I need to change some of my variables' access modifiers from private to default, i.e. expose them (only within the package, but still...).
Here is a rough overview of my code in question. There is supposed to be some sort of address-validation framework that enables address validation by different means, e.g. validating via some external webservice, via data in a DB, or via any other source. So I have the notion of a Module, which is just this: a separate way to validate addresses. I have an interface:
interface Module {
    public void init(InitParams params);
    public ValidationResponse validate(Address address);
}
There is some sort of factory that, based on request or session state, chooses the proper module:
class ModuleFactory {
    Module selectModule(HttpRequest request) {
        Module module = chooseModule(request); // analyze request and choose a module
        module.init(createInitParams(request)); // init module
        return module;
    }
}
And then I have written a Module that uses an external webservice for validation, implemented like this:
class WebServiceModule implements Module {

    private WebServiceFacade webservice;

    public void init(InitParams params) {
        webservice = new WebServiceFacade(createParamsForFacade(params));
    }

    public ValidationResponse validate(Address address) {
        WebServiceResponse wsResponse = webservice.validate(address);
        ValidationResponse response = processWsResponse(wsResponse);
        return response;
    }
}
So basically I have this WebServiceFacade, which is a wrapper over the external web service, and my module calls this facade, processes its response and returns some framework-standard response.
I want to test whether WebServiceModule processes responses from the external web service correctly. Obviously, I can't call the real web service in unit tests, so I'm mocking it. But then again, in order for the module to use my mocked web service, the webservice field must be accessible from the outside. This breaks my design, and I wonder if there is anything I can do about it. Obviously, the facade cannot be passed in the init parameters, because ModuleFactory does not and should not know that it is needed.
I have read that dependency injection might be the answer to such problems, but I can't see how. I have not used any DI framework before, like Guice, so I don't know whether it could easily be used in this situation. But maybe it could?
Or maybe I should just change my design?
Or screw it and make this unfortunate field package private (but leaving a sad comment like // default visibility to allow testing (oh well...) doesn't feel right)?
Bah! While I was writing this, it occurred to me that I could create a WebServiceProcessor which takes a WebServiceFacade as a constructor argument, and then test just the WebServiceProcessor. That would be one solution to my problem. What do you think about it? I have one problem with it: my WebServiceModule would then be sort of useless, just delegating all its work to other components; I would say, one layer of abstraction too far.
Yes, your design is wrong. You should use dependency injection instead of new ... inside your class (which is also called a "hardcoded dependency"). The inability to easily write a test is a perfect indicator of a wrong design (read about the "listen to your tests" paradigm in Growing Object-Oriented Software, Guided by Tests).
BTW, using reflection or a dependency-breaking framework like PowerMock is a very bad practice in this case and should be your last resort.
I agree with what yegor256 said and would like to suggest that the reason why you ended up in this situation is that you have assigned multiple responsibilities to your modules: creation and validation. This goes against the Single responsibility principle and effectively limits your ability to test creation separately from validation.
Consider constraining the responsibility of your "modules" to creation alone. When they only have this responsibility, the naming can be improved as well:
interface ValidatorFactory {
    public Validator createValidator(InitParams params);
}
The validation interface becomes separate:
interface Validator {
    public ValidationResponse validate(Address address);
}
You can then start by implementing the factory:
class WebServiceValidatorFactory implements ValidatorFactory {
    public Validator createValidator(InitParams params) {
        return new WebServiceValidator(new ProdWebServiceFacade(createParamsForFacade(params)));
    }
}
This factory code becomes hard to unit-test, since it is explicitly referencing prod code, so keep this impl very concise. Put any logic (like createParamsForFacade) on the side, so that you can test it separately.
The web service validator itself only gets the responsibility of validation, and takes in the façade as a dependency, following the Inversion of Control (IoC) principle:
class WebServiceValidator implements Validator {

    private final WebServiceFacade facade;

    public WebServiceValidator(WebServiceFacade facade) {
        this.facade = facade;
    }

    public ValidationResponse validate(Address address) {
        WebServiceResponse wsResponse = facade.validate(address);
        ValidationResponse response = processWsResponse(wsResponse);
        return response;
    }
}
Since WebServiceValidator is not controlling the creation of its dependencies anymore, testing becomes a breeze:
@Test
public void aTest() {
    WebServiceValidator validator = new WebServiceValidator(new MockWebServiceFacade());
    ...
}
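For completeness, the MockWebServiceFacade used above could be a trivial hand-rolled double; since the factory creates a ProdWebServiceFacade, WebServiceFacade is assumed to be an interface here, and the canned-response field is purely illustrative:

// a test double standing in for the real web service facade
class MockWebServiceFacade implements WebServiceFacade {

    WebServiceResponse cannedResponse; // the test sets whatever response it needs

    @Override
    public WebServiceResponse validate(Address address) {
        return cannedResponse; // no network call involved
    }
}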
This way you have effectively inverted the control of the creation of the dependencies: Inversion of Control (IoC)!
Oh, and by the way, write your tests first. This way you will naturally gravitate towards a testable solution, which is usually also the best design. I think that this is due to the fact that testing requires modularity, and modularity is coincidentally the hallmark of good design.
Scenario
The same interceptor used twice in the same EJB, on two methods:
...
@Interceptors(PerformanceAuditor.class)
public Date refreshIfNecessary() {
    // there is also the PerformanceAuditor interceptor on this method
    Pair<Date, String> lastImportDate = importDbDAO.findLatestImportLog();
    someContainer.reloadIfNecessary(lastImportDate);
    return lastImportDate.getLeft();
}

@Interceptors(PerformanceAuditor.class)
public boolean checkAndRefreshIfNecessary(final Date importDate) {
    Date lastImportDate = refreshIfNecessary();
    return lastImportDate.after(importDate);
}
...
Now we call the methods on this EJB externally, with the following outcome:
calling refreshIfNecessary() -> PerformanceAuditor is called 2 times
calling checkAndRefreshIfNecessary() -> PerformanceAuditor is also called only 2 times!
(but 3 times were expected, since there is one more nesting level!)
So what's happening here?
The answer is simple:
Interceptors "trigger" only if called as an "EJB call" (i.e. via their EJB interface). But in checkAndRefreshIfNecessary() the call of checkAndRefreshIfNecessary() is a simple Java method call and therefor not noticed by the container.
Solution: To call a method in an EJB internally as EJB call you have to go via its Interface which can be accessed e.g. via injected SessionContext and then via
context.getEJBLocalObject()... But this is certainly no a pretty solution! Better rethink your design in such cases!
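A hedged sketch of that workaround: with EJB 3 business interfaces, SessionContext.getBusinessObject() is the analogous call to getEJBLocalObject(), and the ImportBean/ImportLocal names are made up:

import java.util.Date;
import javax.annotation.Resource;
import javax.ejb.SessionContext;
import javax.ejb.Stateless;
import javax.interceptor.Interceptors;

@Stateless
public class ImportBean implements ImportLocal {

    @Resource
    private SessionContext context;

    @Interceptors(PerformanceAuditor.class)
    public boolean checkAndRefreshIfNecessary(final Date importDate) {
        // go through the container proxy, so the nested call is a real
        // EJB call and its interceptor chain runs as well
        Date lastImportDate =
                context.getBusinessObject(ImportLocal.class).refreshIfNecessary();
        return lastImportDate.after(importDate);
    }

    // refreshIfNecessary() stays exactly as shown above
}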
PS: This is a beautiful example of why one still has to understand the internals of an application server. Unfortunately, from EJB 3 onwards most features are so easy to use that more and more developers are never confronted with the internals, which leads ever more often to such errors, or at least to bad design...
I have this code that allows functions to be executed in a separate thread if the "Asynch" annotation is present on them. Everything worked fine until the day I realized I also have to handle the return value of some new functions I had just added. I could use handlers and message passing for this, but due to the already-built project structure (which is huge and working fine), I can't change the existing functions to work with message passing.
Here's the code:
/**
 * Defining the Asynch annotation
 */
@Retention(RetentionPolicy.RUNTIME)
public @interface Asynch {}

/**
 * Implementation of the Asynch interceptor. Every method in our controllers
 * goes through this interceptor. If the Asynch annotation is present,
 * this implementation invokes a new Thread to execute the method. Simple!
 */
public class AsynchInterceptor implements MethodInterceptor {

    public Object invoke(final MethodInvocation invocation) throws Throwable {
        Method method = invocation.getMethod();
        Annotation[] declaredAnnotations = method.getDeclaredAnnotations();
        if (declaredAnnotations != null && declaredAnnotations.length > 0) {
            for (Annotation annotation : declaredAnnotations) {
                if (annotation instanceof Asynch) {
                    // start the requested task in a new thread and immediately
                    // return control to the caller
                    new Thread(invocation.getMethod().getName()) {
                        public void run() { // Thread executes run() when started
                            try {
                                invocation.proceed();
                            } catch (Throwable t) {
                                // swallowed here; a real implementation needs
                                // an error-handling strategy
                            }
                        }
                    }.start();
                    return null;
                }
            }
        }
        return invocation.proceed();
    }
}
Now, how can I convert it so that if it's something like:
@Asynch
public MyClass getFeedback(int clientId) {
}
MyClass mResult = getFeedback(12345);
"mResult" gets updated with the returned value?
Thanx in advance...
You can't, fundamentally. getFeedback has to return something in a synchronous way - and while in some cases you could update the returned object later on, in other cases you clearly couldn't - immutable classes like String are obvious examples. You can't change the value of the variable mResult later... it's quite possibly a local variable, after all. Indeed, by the time the result has been computed the method in which it was used may have completed... using a bogus value.
You're not going to be able to get clean asynchrony by just adding annotations on top of a synchronous language. Ideally, an asynchronous operation should return something like a Future<T> to say "at some point later, there'll be a result" - along with ways of finding out what that result is, whether it's been computed or not, whether there was an exception etc. This sort of thing is precisely why async/await was added in C# 5 - because you can't just do it transparently at the library level, even with AOP. Writing asynchronous code should be a very deliberate decision - not just something which is bolted onto synchronous code via an annotation.
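To make that concrete, here is a hedged sketch of the Future route applied to the interceptor from the question. It assumes the async methods are rewritten to declare Future<MyClass> as their return type and to return an already-completed future (e.g. CompletableFuture.completedFuture(result)), which is exactly the kind of deliberate API change described above:

import java.lang.reflect.Method;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class AsynchFutureInterceptor implements MethodInterceptor {

    private final ExecutorService executor = Executors.newCachedThreadPool();

    public Object invoke(final MethodInvocation invocation) throws Throwable {
        Method method = invocation.getMethod();
        if (method.isAnnotationPresent(Asynch.class)
                && Future.class.isAssignableFrom(method.getReturnType())) {
            // hand the caller a Future immediately; the intercepted method runs
            // on the pool and itself returns a completed Future, which we unwrap
            return executor.submit(new Callable<Object>() {
                public Object call() throws Exception {
                    try {
                        return ((Future<?>) invocation.proceed()).get();
                    } catch (Exception e) {
                        throw e;
                    } catch (Throwable t) {
                        throw new ExecutionException(t);
                    }
                }
            });
        }
        return invocation.proceed();
    }
}

The caller then writes Future<MyClass> result = getFeedback(12345); and decides explicitly when to block on result.get().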