Spring selectively use implementation instead of interface - java

Can't find a definitive answer so asking here - is it possible to turn on CGLIB proxying for only one bean? The scenario is the following: I have a class which is autowired and doesn't implement any interfaces; now I want to add an interface which would cover a small subset of its methods. Is it possible to keep proxying only this class with CGLIB without impacting Spring's default behavior (JDK dynamic proxies are preferred)?
I'm using Java-based configuration.

There is (currently) no out-of-the-box support for enabling class-based proxies for a single bean. Instead you would have to create the proxy yourself. The drawback of this is that you need some intimate knowledge of how Spring works (which I happen to have :) ).
You should/could use the ProxyFactory or ProxyFactoryBean to create a class-based proxy for your given class. Your @Bean method would return the proxy instead of the actual class. Spring is then clever enough (at least it should be) to detect that it is already a proxy and, instead of proxying it again, it adds the advices to the already created proxy. To make this work without destroying autowiring and all the other nice things Spring gives you, you probably want to create a specific BeanPostProcessor that handles this.
public class YourBeanPostProcessor implements BeanPostProcessor {

    @Override
    public Object postProcessBeforeInitialization(Object bean, String beanName) throws BeansException {
        if (bean instanceof YourBean) {
            ProxyFactory factory = new ProxyFactory(bean);
            factory.setProxyTargetClass(true);
            return factory.getProxy();
        }
        return bean;
    }

    @Override
    public Object postProcessAfterInitialization(Object bean, String beanName) throws BeansException {
        return bean;
    }
}
Register this as a bean as you normally would do with a BeanPostProcessor.
@Bean
public static YourBeanPostProcessor yourBeanPostProcessor() {
    return new YourBeanPostProcessor();
}
Now you have a pre-created class-based proxy which should be detected and used by Spring.
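Alternatively, as mentioned above, the @Bean method itself could return the hand-built proxy instead of the plain instance. A minimal sketch, assuming YourBean is the class in question and AppConfig is your configuration class:
import org.springframework.aop.framework.ProxyFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class AppConfig {

    // Build the class-based (CGLIB) proxy by hand and expose it as the bean,
    // leaving Spring's default JDK-proxy behavior untouched for all other beans.
    @Bean
    public YourBean yourBean() {
        ProxyFactory factory = new ProxyFactory(new YourBean());
        factory.setProxyTargetClass(true);
        return (YourBean) factory.getProxy();
    }
}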

Related

If Spring can successfully intercept intra-class function calls in a @Configuration class, why does it not support it in a regular bean?

I have recently noticed that Spring successfully intercepts intra-class function calls in a @Configuration class but not in a regular bean.
A call like this
@Repository
public class CustomerDAO {

    @Transactional(value = TxType.REQUIRED)
    public void saveCustomer() {
        // some DB stuff here...
        saveCustomer2();
    }

    @Transactional(value = TxType.REQUIRES_NEW)
    public void saveCustomer2() {
        // more DB stuff here
    }
}
fails to start a new transaction because while the code of saveCustomer() executes in the CustomerDAO proxy, the code of saveCustomer2() gets executed in the unwrapped CustomerDAO class, as I can see by looking at 'this' in the debugger, and so Spring has no chance to intercept the call to saveCustomer2.
However, in the following example, when transactionManager() calls createDataSource() it is correctly intercepted and calls createDataSource() of the proxy, not of the unwrapped class, as evidenced by looking at 'this' in the debugger.
@Configuration
public class PersistenceJPAConfig {

    @Bean
    public DriverManagerDataSource createDataSource() {
        DriverManagerDataSource dataSource = new DriverManagerDataSource();
        // dataSource.set ... DB stuff here
        return dataSource;
    }

    @Bean
    public PlatformTransactionManager transactionManager() {
        DataSourceTransactionManager transactionManager = new DataSourceTransactionManager(createDataSource());
        return transactionManager;
    }
}
So my question is, why can Spring correctly intercept the intra-class function calls in the second example, but not in the first? Is it using different types of dynamic proxies?
Edit:
From the answers here and other sources I now understand the following:
@Transactional is implemented using Spring AOP, where the proxy pattern is carried out by wrapping/composition of the user class. The AOP proxy is generic enough so that many aspects can be chained together, and it may be a CGLIB proxy or a Java dynamic proxy.
In the @Configuration class, Spring also uses CGLIB to create an enhanced class which inherits from the user's @Configuration class and overrides the user's @Bean methods with ones that do some extra work before calling the user's/super method, such as checking whether this is the first invocation of the method or not. Is this class a proxy? It depends on the definition. You may say that it is a proxy which uses inheritance from the real object instead of wrapping it using composition.
To sum up, from the answers given here I understand these are two entirely different mechanisms. Why these design choices were made is another, open question.
Is it using different types of dynamic proxies?
Almost exactly
Let's figure out the difference between @Configuration classes and AOP proxies by answering the following questions:
Why does a self-invoked @Transactional method have no transactional semantics even though Spring is capable of intercepting self-invoked methods?
How are @Configuration and AOP related?
Why does a self-invoked @Transactional method have no transactional semantics?
Short answer:
This is how AOP is made.
Long answer:
Declarative transaction management relies on AOP (for the majority of Spring applications on Spring AOP)
The Spring Framework’s declarative transaction management is made possible with Spring aspect-oriented programming (AOP)
It is proxy-based (§5.8.1. Understanding AOP Proxies)
Spring AOP is proxy-based.
From the same section, SimplePojo.java:
public class SimplePojo implements Pojo {

    public void foo() {
        // this next method invocation is a direct call on the 'this' reference
        this.bar();
    }

    public void bar() {
        // some logic...
    }
}
And a snippet proxying it:
public class Main {

    public static void main(String[] args) {
        ProxyFactory factory = new ProxyFactory(new SimplePojo());
        factory.addInterface(Pojo.class);
        factory.addAdvice(new RetryAdvice());

        Pojo pojo = (Pojo) factory.getProxy();
        // this is a method call on the proxy!
        pojo.foo();
    }
}
The key thing to understand here is that the client code inside the main(..) method of the Main class has a reference to the proxy.
This means that method calls on that object reference are calls on the proxy.
As a result, the proxy can delegate to all of the interceptors (advice) that are relevant to that particular method call.
However, once the call has finally reached the target object (the SimplePojo reference in this case), any method calls that it may make on itself, such as this.bar() or this.foo(), are going to be invoked against the this reference, and not the proxy.
This has important implications. It means that self-invocation is not going to result in the advice associated with a method invocation getting a chance to execute.
(Key parts are emphasized.)
You might think that AOP works as follows:
Imagine we have a Foo class which we want to proxy:
Foo.java:
public class Foo {

    public int getInt() {
        return 42;
    }
}
There is nothing special here, just a getInt method returning 42.
An interceptor:
Interceptor.java:
public interface Interceptor {

    Object invoke(InterceptingFoo interceptingFoo);
}
LogInterceptor.java (for demonstration):
public class LogInterceptor implements Interceptor {

    @Override
    public Object invoke(InterceptingFoo interceptingFoo) {
        System.out.println("log. before");
        try {
            return interceptingFoo.getInt();
        } finally {
            System.out.println("log. after");
        }
    }
}
InvokeTargetInterceptor.java:
public class InvokeTargetInterceptor implements Interceptor {

    @Override
    public Object invoke(InterceptingFoo interceptingFoo) {
        try {
            System.out.println("Invoking target");
            Object targetRetVal = interceptingFoo.method.invoke(interceptingFoo.target);
            System.out.println("Target returned " + targetRetVal);
            return targetRetVal;
        } catch (Throwable t) {
            throw new RuntimeException(t);
        } finally {
            System.out.println("Invoked target");
        }
    }
}
Finally InterceptingFoo.java:
public class InterceptingFoo extends Foo {

    public Foo target;
    public List<Interceptor> interceptors = new ArrayList<>();
    public int index = 0;
    public Method method;

    @Override
    public int getInt() {
        try {
            Interceptor interceptor = interceptors.get(index++);
            return (Integer) interceptor.invoke(this);
        } finally {
            index--;
        }
    }
}
Wiring everything together:
public static void main(String[] args) throws Throwable {
    Foo target = new Foo();

    InterceptingFoo interceptingFoo = new InterceptingFoo();
    interceptingFoo.method = Foo.class.getDeclaredMethod("getInt");
    interceptingFoo.target = target;
    interceptingFoo.interceptors.add(new LogInterceptor());
    interceptingFoo.interceptors.add(new InvokeTargetInterceptor());

    interceptingFoo.getInt();
    interceptingFoo.getInt();
}
Will print:
log. before
Invoking target
Target returned 42
Invoked target
log. after
log. before
Invoking target
Target returned 42
Invoked target
log. after
Now let's take a look at ReflectiveMethodInvocation.
Here is a part of its proceed method:
Object interceptorOrInterceptionAdvice = this.interceptorsAndDynamicMethodMatchers.get(++this.currentInterceptorIndex);
++this.currentInterceptorIndex should look familiar now.
Just like the toy InterceptingFoo above, ReflectiveMethodInvocation holds the target, the interceptors, the method and the current index.
You may try introducing several aspects into your application and watch the stack grow at the proceed method when an advised method is invoked.
Finally everything ends up at MethodProxy.
From its invoke method javadoc:
Invoke the original method, on a different object of the same type.
And as the documentation I quoted previously says:
once the call has finally reached the target object any method calls that it may make on itself are going to be invoked against the this reference, and not the proxy
I hope it is now more or less clear why.
How are @Configuration and AOP related?
The answer is: they are not related.
So here Spring is free to do whatever it wants, because it is not tied to proxy-based AOP semantics.
It enhances such classes using ConfigurationClassEnhancer.
Take a look at:
CALLBACKS
BeanMethodInterceptor
BeanFactoryAwareMethodInterceptor
Returning to the question
If Spring can successfully intercept intra-class function calls in a @Configuration class, why does it not support it in a regular bean?
I hope it is now clear why, from a technical point of view.
Now my thoughts from the non-technical side:
I think it is not done because Spring AOP has been around long enough...
Since Spring Framework 5 the Spring WebFlux framework has been introduced.
Currently the Spring team is working hard on enhancing the reactive programming model.
See some notable recent blog posts:
Reactive Transactions with Spring
Spring Data R2DBC 1.0 M2 and Spring Boot starter released
Going Reactive with Spring, Coroutines and Kotlin Flow
More and more features towards a less-proxying approach of building Spring applications are being introduced (see this commit for example).
So I think that even though it might be possible to do what you've described, it is far from the Spring team's #1 priority for now.
Because AOP proxies and @Configuration classes serve different purposes and are implemented in significantly different ways (even though both involve using proxies).
Basically, AOP uses composition while @Configuration uses inheritance.
AOP proxies
The way these work is basically that they create proxies that do the relevant advice logic before/after delegating the call to the original (proxied) object. The container registers this proxy instead of the proxied object itself, so all dependencies are set to this proxy and all calls from one bean to another go through this proxy. However, the proxied object itself has no pointer to the proxy (it doesn't know it's proxied, only the proxy has a pointer to the target object). So any calls within that object to other methods don't go through the proxy.
(I'm only adding this here for contrast with @Configuration, since you seem to have a correct understanding of this part.)
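For illustration, here is a hand-written sketch of that delegation, using the CustomerDAO from the question (conceptual only; the real proxy is generated at runtime by JDK proxies or CGLIB, and the transaction handling is just a placeholder):
// Extends CustomerDAO only to be type-compatible; the actual work is delegated
// to a separate target instance (composition), which is why the target's own
// this.saveCustomer2() call never reaches the proxy.
public class TransactionalCustomerDAOProxy extends CustomerDAO {

    private final CustomerDAO target;  // the proxied object; it has no pointer back to this proxy

    public TransactionalCustomerDAOProxy(CustomerDAO target) {
        this.target = target;
    }

    @Override
    public void saveCustomer() {
        // begin transaction ...
        target.saveCustomer();  // inside, the target calls this.saveCustomer2() on itself,
                                // bypassing this proxy entirely
        // commit / rollback ...
    }
}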
@Configuration
Now while the objects that you usually apply the AOP proxy to are a standard part of your application, the @Configuration class is different - for one, you probably never intend to create any instances of that class directly yourself. This class truly is just a way to write configuration for the bean container, has no meaning outside Spring, and you know that it will be used by Spring in a special way and that it has some special semantics outside of just plain Java code - e.g. that @Bean-annotated methods actually define Spring beans.
Because of this, Spring can do much more radical things to this class without worrying that it will break something in your code (remember, you know that you only provide this class for Spring, and you aren't going to ever create or use its instance directly).
What it actually does is it creates a proxy that's a subclass of the @Configuration class. This way, it can intercept the invocation of every (non-final, non-private) method of the @Configuration class, even within the same object (because the methods are effectively all overridden by the proxy, and in Java all methods are virtual). The proxy does exactly this to redirect any method calls that it recognizes to be (semantically) references to Spring beans to the actual bean instances instead of invoking the superclass method.
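To illustrate the inheritance-based approach, here is a hand-written sketch of roughly what such a subclass does for the PersistenceJPAConfig from the question (conceptual only; Spring generates this with CGLIB, and the real interception logic lives in BeanMethodInterceptor, not in a simple map):
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.springframework.jdbc.datasource.DriverManagerDataSource;
import org.springframework.transaction.PlatformTransactionManager;

public class PersistenceJPAConfigEnhanced extends PersistenceJPAConfig {

    private final Map<String, Object> beanCache = new ConcurrentHashMap<>();

    @Override
    public DriverManagerDataSource createDataSource() {
        // Every call goes through this override, even the one made from
        // transactionManager() inside the same object, because the method is virtual.
        return (DriverManagerDataSource) beanCache.computeIfAbsent(
                "createDataSource", name -> super.createDataSource());
    }

    @Override
    public PlatformTransactionManager transactionManager() {
        return (PlatformTransactionManager) beanCache.computeIfAbsent(
                "transactionManager", name -> super.transactionManager());
    }
}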
Having read a bit of the Spring source code, I'll try to answer it.
The point is how Spring deals with @Configuration and @Bean.
In ConfigurationClassPostProcessor, which is a BeanFactoryPostProcessor, Spring enhances all configuration classes and creates an Enhancer as a subclass.
This Enhancer registers two CALLBACKS (BeanMethodInterceptor, BeanFactoryAwareMethodInterceptor).
Calls to PersistenceJPAConfig methods go through these CALLBACKS; in BeanMethodInterceptor, the bean is obtained from the Spring container.
It may not be entirely clear; you can look at the source code in ConfigurationClassEnhancer.java (BeanMethodInterceptor) and ConfigurationClassPostProcessor.java (enhanceConfigurationClasses).
You can't call a @Transactional method from within the same class.
It's a limitation of Spring AOP (dynamic proxies and CGLIB).
If you configure Spring to use AspectJ to handle the transactions, your code will work.
The simple and probably best alternative is to refactor your code: for example, one class that handles users and one that processes each user. Then default transaction handling with Spring AOP will work, as sketched below.
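A minimal sketch of that refactoring, with method names borrowed from the question (CustomerDetailsDAO is a made-up second class; each class goes in its own file). Because saveCustomer2() now lives in a separate injected bean, the call goes through that bean's Spring proxy and REQUIRES_NEW takes effect:
import javax.transaction.Transactional;
import javax.transaction.Transactional.TxType;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Repository;

// CustomerDAO.java
@Repository
public class CustomerDAO {

    @Autowired
    private CustomerDetailsDAO customerDetailsDAO;  // hypothetical second DAO

    @Transactional(TxType.REQUIRED)
    public void saveCustomer() {
        // some DB stuff here...
        customerDetailsDAO.saveCustomer2();  // proxied call -> a new transaction is started
    }
}

// CustomerDetailsDAO.java (separate file)
@Repository
public class CustomerDetailsDAO {

    @Transactional(TxType.REQUIRES_NEW)
    public void saveCustomer2() {
        // more DB stuff here
    }
}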
Also, @Transactional should be on the service layer and not on a @Repository.
Transactions belong on the service layer. It's the layer that knows about units of work and use cases. It's the right answer if you have several DAOs injected into a service that need to work together in a single transaction.
So you need to rethink your transaction approach so your methods can be reused in a flow including several other DAO operations that can be rolled back together.
Spring uses proxying for method invocation, and when you use this... it bypasses that proxy. For @Bean annotations Spring uses reflection to find them.

Prevent injection of bean with narrower scope using Spring

I'm working on a Spring application using beans of different scopes. Many beans are singletons, other request or custom scoped. Especially using those custom scopes makes it sometimes difficult to find out which scope can be safely injected into which other scope or when e.g. a Provider<T> needs to be used.
I am aware that I can just create scope proxies for all beans that are basically not singletons, but in many cases that does not seem to be necessary. For example, a bean might only be supposed to be injected into other beans of the same scope, but not everyone working on the project might be aware of that. Thus, it would be great if one could somehow prevent "misuse" of those beans, especially if one might not always recognize the mistake in time.
So my question is: is there some way to define which scope can be safely injected into which other scope, and then prevent beans with a narrower scope from being directly (without using Provider<T>) injected into e.g. singleton beans?
It looks like this can be achieved fairly simply using a custom BeanPostProcessor. Within postProcessBeforeInitialization, you can simply check the scope of the bean and the scope of all its dependencies. Here is a simple example:
@Component
public class BeanScopeValidator implements BeanPostProcessor {

    private final ConfigurableListableBeanFactory configurableBeanFactory;

    @Autowired
    public BeanScopeValidator(ConfigurableListableBeanFactory configurableBeanFactory) {
        this.configurableBeanFactory = configurableBeanFactory;
    }

    @Override
    public Object postProcessBeforeInitialization(Object bean, String beanName) throws BeansException {
        String beanScope = configurableBeanFactory.getBeanDefinition(beanName).getScope();
        String[] dependenciesForBean = configurableBeanFactory.getDependenciesForBean(beanName);
        for (String dependencyBeanName : dependenciesForBean) {
            String dependencyBeanScope = configurableBeanFactory.getBeanDefinition(dependencyBeanName).getScope();
            // TODO: Check if the scopes are compatible and throw an exception
        }
        return bean;
    }
}
This example is still very basic and is not really convenient to use. Most prominently, it lacks the capability of defining which scope can be injected into which other scope. Thus I've created a more complete example here. Using this project, the following injections are allowed by default:
Singletons can be injected into everything
Everything can be injected into prototypes
AOP proxies can be injected into everything
Everything can be injected into beans of the same scope
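For illustration, the check left as a TODO in the snippet above could be filled in roughly like this for just these default rules (the helper name and exact rule set are my own simplification, ignoring the AOP-proxy rule and the annotation support described next):
import org.springframework.beans.factory.BeanCreationException;

// Belongs inside BeanScopeValidator: singletons may be injected anywhere,
// prototypes may receive anything, otherwise the scopes have to match.
private void assertScopesCompatible(String beanName, String beanScope,
                                    String dependencyName, String dependencyScope) {
    boolean dependencyIsSingleton = dependencyScope.isEmpty() || "singleton".equals(dependencyScope);
    boolean beanIsPrototype = "prototype".equals(beanScope);
    boolean sameScope = beanScope.equals(dependencyScope);

    if (!(dependencyIsSingleton || beanIsPrototype || sameScope)) {
        throw new BeanCreationException(beanName, "Dependency '" + dependencyName
                + "' (scope '" + dependencyScope + "') must not be injected into a bean of scope '"
                + beanScope + "'");
    }
}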
If you want to allow a bean to be injected into another scope, it needs to be explicitly allowed by using a respective annotation:
@Bean
@Scope("prototype")
@InjectableInto("singleton")
MyBean getMyBean() {
    //...
}

Spring Custom Converter - To Bean or Not to Bean

I am implementing a custom converter in Spring so my beans can convert from java.util.Date to java.time.LocalDateTime. I have already implemented the converter (by implementing Spring's Converter interface).
Here is the bean definition in a @Configuration class:
@Bean
ConversionService conversionService() {
    DefaultConversionService service = new DefaultConversionService();
    service.addConverter(new DateToLocalDateTimeConverter());
    return service;
}
My question is: shall I pass my custom converter to service.addConverter as a plain Java object or as a Spring bean?
In general, what are the guidelines (criteria) for whether to bean or not to bean in such scenarios?
Making an object a Spring bean makes sense when you want that object to benefit from Spring features (injection, transactions, AOP, etc...).
In your case, it does not seem required.
As conversionService is a singleton Spring bean that will be instantiated once, creating a plain Java instance of DateToLocalDateTimeConverter during its instantiation seems fine: new DateToLocalDateTimeConverter().
Now, if later you want to inject the DateToLocalDateTimeConverter instance into other Spring beans, it would make sense to turn it into a Spring bean.
For information, Spring already provides this conversion in the Jsr310Converters class (included in the spring-data-commons dependency):
import static java.time.LocalDateTime.*;

public abstract class Jsr310Converters {
    ...
    public static enum DateToLocalDateTimeConverter implements Converter<Date, LocalDateTime> {

        INSTANCE;

        @Override
        public LocalDateTime convert(Date source) {
            return source == null ? null : ofInstant(source.toInstant(), ZoneId.systemDefault());
        }
    }
    ...
}
You could directly use it.
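For example, a small sketch of the conversionService bean reusing the converters shipped with spring-data-commons instead of a hand-written one (assuming spring-data-commons is on the classpath):
import org.springframework.context.annotation.Bean;
import org.springframework.core.convert.ConversionService;
import org.springframework.core.convert.support.DefaultConversionService;
import org.springframework.data.convert.Jsr310Converters;

// Registers all JSR-310 converters provided by spring-data-commons,
// including the Date -> LocalDateTime one shown above.
@Bean
ConversionService conversionService() {
    DefaultConversionService service = new DefaultConversionService();
    Jsr310Converters.getConvertersToRegister().forEach(service::addConverter);
    return service;
}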
If you intend to inject this as a dependency of some kind into your application, and/or you intend to reuse it in multiple places, then it makes sense to register it as a bean. If you're not, then newing an instance up is acceptable.
Dependency injection and inversion of control are just that - how you inject dependencies into your app, and an acknowledgment that you no longer control how that's instantiated. Should you desire either of these, beans are suitable; if you don't, then new it up.
In your simple case, it does not seem necessary to add DateToLocalDateTimeConverter as a Spring bean.
Reasons to add DateToLocalDateTimeConverter as a Spring bean:
It would make the implementation of conversionService() more readable (not the case in the question's example)
You need the DateToLocalDateTimeConverter in other beans
The implementation of DateToLocalDateTimeConverter itself would need to have Spring beans injected, i.e. using @Autowired

What is the real world use of BeanPostProcessor in spring?

I am aware of the BeanPostProcessor and how it works, but I am not sure how it helps us in a real-world application. What should go inside the methods defined below in a real application? Could it be:
1. Some configuration code?
2. Some validation code for the bean?
public class MyBeanInitProcessor implements BeanPostProcessor {

    @Override
    public Object postProcessAfterInitialization(Object bean, String beanName)
            throws BeansException {
        System.out.println("after initialization: " + beanName);
        return bean;
    }

    @Override
    public Object postProcessBeforeInitialization(Object bean, String beanName)
            throws BeansException {
        System.out.println("before initialization: " + beanName);
        return bean;
    }
}
In most real-world applications, you won't be interacting with them directly. Spring provides 28 implementations out of the box that handle standard functions like autowiring and applying AOP advice. You use them indirectly by using the standard Spring features such as applying validation annotations on a method parameter, which applies MethodValidationPostProcessor, or making method calls @Async, which applies AsyncAnnotationBeanPostProcessor.
BeanPostProcessor is a means of running a bit of code each time a bean is initialized.
Say you had an algorithm to process an undetermined number of Customer objects.
Say each Customer was a bean, and (as you may find in a LinkedList) each bean could tell if there was a bean following it, or not.
Say further that you need an event to be thrown when the last bean in that list was initialized.
You could do that, if you added code in postProcessAfterInitialization(). Knowing Spring, there are better ways, no doubt. Still, to me, this would be a case where BeanPostProcessor could be helpful.
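A rough sketch of that idea (the Customer type, its hasNext() method and the event class are all made up for this example, following the linked-list analogy above):
import org.springframework.beans.BeansException;
import org.springframework.beans.factory.config.BeanPostProcessor;
import org.springframework.context.ApplicationEventPublisher;

public class LastCustomerInitializedProcessor implements BeanPostProcessor {

    private final ApplicationEventPublisher eventPublisher;

    public LastCustomerInitializedProcessor(ApplicationEventPublisher eventPublisher) {
        this.eventPublisher = eventPublisher;
    }

    @Override
    public Object postProcessAfterInitialization(Object bean, String beanName) throws BeansException {
        if (bean instanceof Customer) {
            Customer customer = (Customer) bean;
            if (!customer.hasNext()) {  // last Customer in the chain
                eventPublisher.publishEvent(new LastCustomerInitializedEvent(customer));
            }
        }
        return bean;
    }
}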

Spring session-scoped beans (controllers) and references to services, in terms of serialization

A standard case - you have a controller (@Controller) with @Scope("session").
Classes put in the session are usually expected to implement Serializable so that they can be stored physically, in case the server is restarted, for example.
If the controller implements Serializable, this means all services (other Spring beans) it refers to will also be serialized. They are often proxies, with references to transaction managers, entity manager factories, etc.
It is not unlikely that some service, or even a controller, holds a reference to the ApplicationContext by implementing ApplicationContextAware, so this can effectively mean that the whole context is serialized. And given that it holds many connections - i.e. things that are not serializable by design - it will be restored in a corrupt state.
So far I've mostly ignored these issues. Recently I thought of declaring all my Spring dependencies transient and getting them back in readResolve() via static utility classes like WebApplicationContextUtils and such that hold the request/ServletContext in a ThreadLocal. This is tedious, but it guarantees that, when the object is deserialized, its dependencies will be "up to date" with the current application context.
Is there any accepted practice for this, or any guidelines for serializing parts of the Spring context?
Note that in JSF, managed beans (~controllers) are stateful (unlike in action-based web frameworks). So perhaps my question stands more for JSF than for spring-mvc.
In this presentation (around 1:14) the speaker says that this issue is resolved in Spring 3.0 by providing a proxy for non-serializable beans, which obtains an instance from the current application context (on deserialization).
It appears that the bounty didn't attract a single answer, so I'll document my limited understanding:
@Configuration
public class SpringConfig {

    @Bean
    @Scope(proxyMode = ScopedProxyMode.TARGET_CLASS)
    MyService myService() {
        return new MyService();
    }

    @Bean
    @Scope("request")
    public IndexBean indexBean() {
        return new IndexBean();
    }

    @Bean
    @Scope("request")
    public DetailBean detailBean() {
        return new DetailBean();
    }
}
public class IndexBean implements Serializable {

    @Inject MyService myService;

    public void doSomething() {
        myService.sayHello();
    }
}

public class MyService {

    public void sayHello() {
        System.out.println("Hello World!");
    }
}
Spring will then not inject the naked MyService into IndexBean, but a serializable proxy to it. (I tested that, and it worked).
However, the spring documentation writes:
You do not need to use the <aop:scoped-proxy/> in conjunction with beans that are scoped as singletons or prototypes. If you try to create a scoped proxy for a singleton bean, the BeanCreationException is raised.
At least when using java based configuration, the bean and its proxy can be instantiated just fine, i.e. no Exception is thrown. However, it looks like using scoped proxies to achieve serializability is not the intended use of such proxies. As such I fear Spring might fix that "bug" and prevent the creation of scoped proxies through Java based configuration, too.
Also, there is a limitation: The class name of the proxy is different after restart of the web application (because the class name of the proxy is based on the hashcode of the advice used to construct it, which in turn depends on the hashCode of an interceptor's class object. Class.hashCode does not override Object.hashCode, which is not stable across restarts). Therefore the serialized sessions can not be used by other VMs or across restarts.
I would expect to scope controllers as 'singleton', i.e. once per application, rather than in the session.
Session-scoping is typically used more for storing per-user information or per-user features.
Normally I just store the 'user' object in the session, and maybe some beans used for authentication or such. That's it.
Take a look at the spring docs for configuring some user data in session scope, using an aop proxy:
http://static.springsource.org/spring/docs/2.5.x/reference/beans.html#beans-factory-scopes-other-injection
Hope that helps
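A Java-config sketch of that pattern (UserPreferences is a made-up example class holding per-user data; the scoped proxy is what lets it be injected into singleton controllers):
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Scope;
import org.springframework.context.annotation.ScopedProxyMode;

@Configuration
public class SessionScopeConfig {

    @Bean
    @Scope(value = "session", proxyMode = ScopedProxyMode.TARGET_CLASS)
    public UserPreferences userPreferences() {
        return new UserPreferences();
    }
}

// Made-up example class with per-user data.
class UserPreferences {
    String theme = "default";
}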
I recently combined JSF with Spring. I use RichFaces and the @KeepAlive feature, which serializes the JSF bean backing the page. There are two ways I have gotten this to work:
1) Use @Component("session") on the JSF backing bean
2) Get the bean from the ELContext whenever you need it, something like this:
@SuppressWarnings("unchecked")
public static <T> T getBean(String beanName) {
    return (T) FacesContext.getCurrentInstance().getApplication().getELResolver()
            .getValue(FacesContext.getCurrentInstance().getELContext(), null, beanName);
}
After trying all the different alternatives suggested, all I had to do was add <aop:scoped-proxy/> to my bean definition and it started working.
<bean id="securityService"
      class="xxx.customer.engagement.service.impl.SecurityContextServiceImpl">
    <aop:scoped-proxy/>
    <property name="identityService" ref="identityService" />
</bean>
securityService is injected into my managed bean, which is view scoped. This seems to work fine. According to the Spring documentation this is supposed to throw a BeanCreationException since securityService is a singleton. However, this does not seem to happen and it works fine. Not sure whether this is a bug or what the side effects would be.
Serialization of dynamic proxies works well, even between different JVMs, e.g. as used for session replication.
@Configuration
public class SpringConfig {

    @Bean
    @Scope(proxyMode = ScopedProxyMode.INTERFACES)
    MyService myService() {
        return new MyService();
    }
    .....
You just have to set the id of the ApplicationContext before the context is refreshed (see: org.springframework.beans.factory.support.DefaultListableBeanFactory.setSerializationId(String))
AnnotationConfigApplicationContext ctx = new AnnotationConfigApplicationContext();
// all other initialisation part ...
// before! refresh
ctx.setId("portal-lasg-appCtx-id");
// now refresh ..
ctx.refresh();
ctx.start();
Works fine on Spring version 4.1.2.RELEASE.
