I am aware of the BeanPostProcessor and it is working, but I am not sure how it helps us in a real-world application. What should go inside the methods defined below in a real application? Could it be:
1. Some configuration code?
2. Some validation code for the bean?
public class MyBeanInitProcessor implements BeanPostProcessor {

    @Override
    public Object postProcessBeforeInitialization(Object bean, String beanName)
            throws BeansException {
        System.out.println("before initialization: " + beanName);
        return bean;
    }

    @Override
    public Object postProcessAfterInitialization(Object bean, String beanName)
            throws BeansException {
        System.out.println("after initialization: " + beanName);
        return bean;
    }
}
In most real-world applications, you won't be interacting with them directly. Spring provides 28 implementations out of the box that handle standard functions like autowiring and applying AOP advice. You use them indirectly through standard Spring features: applying validation annotations to a method parameter is handled by MethodValidationPostProcessor, and marking methods @Async is handled by AsyncAnnotationBeanPostProcessor.
BeanPostProcessor is a means of running a bit of code each time a bean is initialized.
Say you had an algorithm to process an undetermined number of Customer objects.
Say each Customer was a bean, and (as you may find in a LinkedList) each bean could tell if there was a bean following it, or not.
Say further that you need an event to be thrown when the last bean in that list was initialized.
You could do that, if you added code in postProcessAfterInitialization(). Knowing Spring, there are better ways, no doubt. Still, to me, this would be a case where BeanPostProcessor could be helpful.
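A minimal sketch of that idea follows. Customer and LastCustomerInitializedEvent are hypothetical classes made up for this example, and the "is this the last one?" check is reduced to counting initialized beans against the Customer bean definitions the container knows about:
@Component
public class LastCustomerEventProcessor implements BeanPostProcessor {

    private final ApplicationEventPublisher publisher;
    private final ConfigurableListableBeanFactory beanFactory;
    private int initializedCustomers = 0;

    public LastCustomerEventProcessor(ApplicationEventPublisher publisher,
                                      ConfigurableListableBeanFactory beanFactory) {
        this.publisher = publisher;
        this.beanFactory = beanFactory;
    }

    @Override
    public Object postProcessAfterInitialization(Object bean, String beanName) throws BeansException {
        if (bean instanceof Customer) { // Customer is a hypothetical bean type
            initializedCustomers++;
            // compare against how many Customer bean definitions the container knows about
            if (initializedCustomers == beanFactory.getBeanNamesForType(Customer.class).length) {
                publisher.publishEvent(new LastCustomerInitializedEvent(this)); // hypothetical event class
            }
        }
        return bean;
    }
}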
Related
If I have something like a JdbcTemplate that is created and managed by Spring, am I able to take that reference and pass it down to a non-Spring-managed class?
If I can, how do lifecycle methods such as @PreDestroy know that there are now these extra references, unknown to Spring, floating around?
Singleton beans managed by Spring are retained in the application context.
You can think of the ApplicationContext as a map from keys (bean ids) to objects that are essentially references to the beans you have.
Now you can easily pass a reference to the bean to some object not managed by Spring.
class NonManagedBySpring {

    private final JdbcTemplate tpl;

    public NonManagedBySpring(JdbcTemplate tpl) {
        this.tpl = tpl;
    }

    public void bar() {
        // ...
        tpl.execute("..."); // or whatever
    }
}
@Service // this is a Spring-managed service
class MyService {

    @Autowired // or constructor injection, doesn't matter for the sake of this example
    private JdbcTemplate tpl;

    public void foo() {
        NonManagedBySpring obj = new NonManagedBySpring(tpl);
        obj.bar();
    }
}
Now, from the point of view of lifecycle, it doesn't matter that NonManagedBySpring holds a reference to the JdbcTemplate object, which is a bean.
When @PreDestroy should be called, Spring checks the references in the ApplicationContext, and since, as I stated at the beginning of the answer, it holds references to the singleton beans, Spring will find those objects and invoke their pre-destroy callbacks.
Having said that, it is worth mentioning that if the bean is of scope "prototype" it won't be held in the ApplicationContext and its @PreDestroy won't be called anyway, but that has nothing to do with managed/non-managed objects. That's just how Spring works.
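If you do need a prototype's @PreDestroy to run, one option is to trigger the destruction callbacks yourself. A minimal sketch, assuming a hypothetical prototype-scoped ExpensiveTask bean with a run() method and a @PreDestroy callback:
import org.springframework.beans.factory.config.AutowireCapableBeanFactory;
import org.springframework.stereotype.Service;

@Service
public class TaskRunner {

    private final AutowireCapableBeanFactory beanFactory;

    public TaskRunner(AutowireCapableBeanFactory beanFactory) {
        this.beanFactory = beanFactory;
    }

    public void runOnce() {
        // fresh prototype instance on every call
        ExpensiveTask task = beanFactory.getBean(ExpensiveTask.class);
        try {
            task.run();
        } finally {
            // the container won't do this for prototypes; destroyBean() applies the
            // destruction callbacks (including @PreDestroy) to the given instance
            beanFactory.destroyBean(task);
        }
    }
}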
I'm working on a Spring application using beans of different scopes. Many beans are singletons, others are request scoped or custom scoped. Especially with those custom scopes, it is sometimes difficult to find out which scope can be safely injected into which other scope, or when e.g. a Provider<T> needs to be used.
I am aware that I can just create scope proxies for all beans that are basically not singletons, but in many cases that does not seem to be necessary. For example, a bean might only be supposed to be injected into other beans of the same scope, but not everyone working on the project might be aware of that. Thus, it would be great if one could somehow prevent "misuse" of those beans, especially if one might not always recognize the mistake in time.
So my question is: Is there some way to define which scope can be safely injected into which other scope, and then prevent beans with a narrower scope from being injected directly (without using Provider<T>) into e.g. singleton beans?
It looks like this can be achieved fairly simply using a custom BeanPostProcessor. Within postProcessBeforeInitialization, you can simply check the scope of the bean and the scope of all its dependencies. Here is a simple example:
@Component
public class BeanScopeValidator implements BeanPostProcessor {

    private final ConfigurableListableBeanFactory configurableBeanFactory;

    @Autowired
    public BeanScopeValidator(ConfigurableListableBeanFactory configurableBeanFactory) {
        this.configurableBeanFactory = configurableBeanFactory;
    }

    @Override
    public Object postProcessBeforeInitialization(Object bean, String beanName) throws BeansException {
        String beanScope = configurableBeanFactory.getBeanDefinition(beanName).getScope();
        String[] dependenciesForBean = configurableBeanFactory.getDependenciesForBean(beanName);
        for (String dependencyBeanName : dependenciesForBean) {
            String dependencyBeanScope = configurableBeanFactory.getBeanDefinition(dependencyBeanName).getScope();
            // TODO: Check if the scopes are compatible and throw an exception
        }
        return bean;
    }
}
This example is still very basic and is not really convenient to use. Most prominently, it lacks the capability of defining which scope can be injected into which other scope. Thus I've created a more complete example here. Using this project, the following injections are allowed by default:
Singletons can be injected into everything
Everything can be injected into prototypes
AOP proxies can be injected into everything
Everything can be injected into beans of the same scope
If you want to allow a bean to be injected into another scope, that has to be allowed explicitly with a dedicated annotation:
@Bean
@Scope("prototype")
@InjectableInto("singleton")
MyBean getMyBean() {
    //...
}
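For completeness, here is a rough sketch of what the check behind the TODO in the first snippet could look like if you only wanted the default rules listed above. The helper name and the exception choice are made up for this sketch and are not taken from the linked project:
import org.springframework.beans.factory.BeanCreationException;

private void checkScopes(String beanName, String beanScope,
                         String dependencyBeanName, String dependencyBeanScope) {
    boolean allowed =
            dependencyBeanScope.isEmpty()                     // empty scope on a bean definition means singleton
            || "singleton".equals(dependencyBeanScope)        // singletons go everywhere
            || "prototype".equals(beanScope)                  // anything may go into a prototype
            || dependencyBeanScope.equals(beanScope);         // same scope is always fine
    if (!allowed) {
        throw new BeanCreationException(beanName, "Dependency '" + dependencyBeanName
                + "' with scope '" + dependencyBeanScope
                + "' must not be injected into a bean of scope '" + beanScope + "'");
    }
}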
I'm reading the Spring documentation and found this:
One possible way to get the Spring container to release resources used
by prototype-scoped beans is through the use of a custom bean
post-processor which would hold a reference to the beans that need to
be cleaned up.
But if the bean post-processor holds a reference to the prototype object, then the garbage collector won't collect it, and prototype beans with their resources will reside in the heap until the ApplicationContext is closed?
Could you clarify it, please?
Spring has an interface you can implement called DestructionAwareBeanPostProcessor. Instances of this interface are asked first if a bean needs destruction via the requiresDestruction() method. If you return true, you will eventually get called back again with that bean when it is about to be destroyed via the postProcessBeforeDestruction method.
What this does is give you a chance to clean up that bean's resources. For example, if your bean has a reference to a File, you could close any streams you might have open. The important point is that your class shouldn't hold a reference to the bean that is about to be destroyed, or you'll prevent it from being garbage collected, as you've pointed out.
To define a post-processor, you would do something like this
@Component
public class MyDestructionAwareBeanPostProcessor implements DestructionAwareBeanPostProcessor {

    @Override
    public boolean requiresDestruction(final Object bean) {
        // Insert logic here
        return bean instanceof MyResourceHolder;
    }

    @Override
    public void postProcessBeforeDestruction(final Object bean, final String beanName) throws BeansException {
        // Clean up the bean here.
        // Example:
        ((MyResourceHolder) bean).cleanup();
    }
}
Can't find a definitive answer so asking here - Is it possible to turn on CGLIB proxying for only one bean? The scenario is the following: I have a class which is autowired and doesn't implement any interfaces; now I want to add an interface which would cover a small subset of its methods. Is it possible to keep proxying only this class using CGLIB without impacting Spring's default behavior (JDK dynamic proxies are preferred)?
I'm using java-based configuration.
There is (currently) no out-of-the-box support for enabling class-based proxies for a single class. Instead you would have to create the proxy yourself. The drawback of this is that you would need some intimate knowledge of how Spring works (which I happen to have :) ).
You should/could use the ProxyFactory or ProxyFactoryBean to create a class-based proxy for your given class. Your @Bean method would return the proxy instead of the actual class. Spring is then clever enough (at least it should be) to detect that it is already a proxy and, instead of proxying it again, it should add the advices to the already created proxy. To make this work without breaking autowiring and all the other nice things Spring gives you, you probably want to create a specific BeanPostProcessor that handles this.
public class YourBeanPostProcessor implements BeanPostProcessor {

    @Override
    public Object postProcessBeforeInitialization(Object bean, String beanName) throws BeansException {
        if (bean instanceof YourBean) {
            ProxyFactory factory = new ProxyFactory(bean);
            factory.setProxyTargetClass(true);
            return factory.getProxy();
        }
        return bean;
    }

    @Override
    public Object postProcessAfterInitialization(Object bean, String beanName) throws BeansException {
        return bean;
    }
}
Register this as a bean as you normally would do with a BeanPostProcessor.
@Bean
public static YourBeanPostProcessor yourBeanPostProcessor() {
    return new YourBeanPostProcessor();
}
Now you have a pre-created class-based proxy which should be detected and used by Spring.
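For reference, the @Bean-method variant mentioned at the start of this answer would look roughly like this sketch (YourBean is the hypothetical class from the question):
import org.springframework.aop.framework.ProxyFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ProxyConfig {

    @Bean
    public YourBean yourBean() {
        ProxyFactory factory = new ProxyFactory(new YourBean());
        factory.setProxyTargetClass(true); // force a class-based (CGLIB) proxy for this bean only
        return (YourBean) factory.getProxy();
    }
}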
I'm working with some existing code and it is doing things I haven't seen before. I've dealt with autowiring prototype beans into singletons using method injection or getting the bean from the context using getBean(). What I am seeing in this code is a prototype bean retrieved using getBean(), and it has autowired dependencies. Most of these are singleton beans, which makes sense. But there is an autowire of another prototype bean, and from what I see, it does seem like it is getting a new bean. My question is: when you autowire a prototype into a prototype, will that give you a new instance? Since the autowire request happens not at startup but when this bean is created, does it go and create a new instance? This goes against what I thought about autowiring and prototype beans, and I wanted to hear an answer from out in the wild. Thanks for any insight. I'm trying to minimize my refactoring of this code as it is a bit spaghetti-ish.
example:
#Scope("prototype")
public class MyPrototypeClass {
#Autowired
private ReallyGoodSingletonService svc;
#Autowired
private APrototypeBean bean;
public void doSomething() {
bean.doAThing();
}
}
#Scope("prototype)
public class APrototypeBean {
private int stuffgoeshere;
public void doAThing() {
}
}
So when doSomething() in MyPrototypeClass is called, is that "bean" a singleton or a new one for each instance of MyPrototypeClass?
In your example, the APrototypeBean field will be set to a brand new bean which will live until the instance of MyPrototypeClass that you created is destroyed.
If you create a second instance of MyPrototypeClass then that second instance will receive its own APrototypeBean. With your current configuration, every time you call doSomething(), the method will be invoked on an instance of APrototypeBean that is unique for that MyPrototypeClass object.
Your understanding of @Autowired, or autowiring in general, is flawed. Autowiring occurs when an instance of the bean is created, not at startup.
If you have a singleton bean that is lazy and that bean isn't directly used, nothing happens. As soon as you retrieve the bean, for instance with getBean on the application context, an instance is created, dependencies get wired, BeanPostProcessors get applied, and so on.
This is the same for each and every type of bean: it is processed as soon as it is created, not before.
Now to answer your question: a prototype bean is a prototype bean, so yes, you will receive a fresh instance with each call to getBean.
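A quick way to convince yourself of this (AppConfig here is a hypothetical configuration class assumed to register APrototypeBean from the question):
try (AnnotationConfigApplicationContext ctx = new AnnotationConfigApplicationContext(AppConfig.class)) {
    APrototypeBean first = ctx.getBean(APrototypeBean.class);
    APrototypeBean second = ctx.getBean(APrototypeBean.class);
    System.out.println(first == second); // false - each getBean() call returns a new instance
}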
Adding more explanation to @Mark Laren's answer.
As explained in the Spring 4.1.6 docs:
In most application scenarios, most beans in the container are
singletons. When a singleton bean needs to collaborate with another
singleton bean, or a non-singleton bean needs to collaborate with
another non-singleton bean, you typically handle the dependency by
defining one bean as a property of the other. A problem arises when
the bean lifecycles are different. Suppose singleton bean A needs to
use non-singleton (prototype) bean B, perhaps on each method
invocation on A. The container only creates the singleton bean A once,
and thus only gets one opportunity to set the properties. The
container cannot provide bean A with a new instance of bean B every
time one is needed.
The approach below will solve this problem, but it is not desirable because it couples the business code to the Spring Framework and violates the IoC principle. The following is an example of this approach:
// a class that uses a stateful Command-style class to perform some processing
package fiona.apple;

import java.util.Map;

// Spring-API imports
import org.springframework.beans.BeansException;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationContextAware;

public class CommandManager implements ApplicationContextAware {

    private ApplicationContext applicationContext;

    public Object process(Map commandState) {
        // grab a new instance of the appropriate Command
        Command command = createCommand();
        // set the state on the (hopefully brand new) Command instance
        command.setState(commandState);
        return command.execute();
    }

    protected Command createCommand() {
        // notice the Spring API dependency!
        return this.applicationContext.getBean("command", Command.class);
    }

    @Override
    public void setApplicationContext(ApplicationContext applicationContext) throws BeansException {
        this.applicationContext = applicationContext;
    }
}
So, there are two desirable ways to solve this problem.
1. Using Spring's method injection
As the name suggests, Spring will implement and inject our lookup method, using the @Lookup annotation from Spring 4 or the <lookup-method> tag if you use the XML version. Refer to this DZone article.
By using @Lookup:
From the Javadoc:
An annotation that indicates 'lookup' methods, to be overridden by the
container to redirect them back to the BeanFactory for a getBean call.
This is essentially an annotation-based version of the XML
lookup-method attribute, resulting in the same runtime arrangement.
Since:
4.1
@Component
public class MyClass1 {

    public void doSomething() {
        myClass2();
    }

    // I want this method to return a MyClass2 prototype
    @Lookup
    public MyClass2 myClass2() {
        // No need to declare this method as abstract, as we did with earlier versions
        // of Spring and the <lookup-method> XML version. Spring treats this method as
        // abstract and provides the implementation dynamically.
        return null;
    }
}
The above example will create a new MyClass2 instance each time.
2. Using Provider from JSR 330 (Dependency Injection for Java).
@Scope(BeanDefinition.SCOPE_PROTOTYPE)
@Component
public static class SomeRequest {}

@Service
public static class SomeService {

    @Autowired
    javax.inject.Provider<SomeRequest> someRequestProvider;

    SomeRequest doSomething() {
        return someRequestProvider.get();
    }
}
The above example will create a new SomeRequest instance each time.
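Given a SomeService instance wired by the container (someService below stands for such an injected instance), you can verify the behaviour the same way as with the @Lookup example:
SomeRequest first = someService.doSomething();
SomeRequest second = someService.doSomething();
System.out.println(first == second); // false - the Provider asks the container for a new prototype each time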