Please explain why self-invocation on a proxy is performed on the target object and not on the proxy. If that is done on purpose, then why? If proxies are created by subclassing, it is possible to have some code executed before each method call, even on self-invocation. I tried it, and my proxy is used on self-invocation:
public class DummyPrinter {
    public void print1() {
        System.out.println("print1");
    }

    public void print2() {
        System.out.println("print2");
    }

    public void printBoth() {
        print1();
        print2();
    }
}
public class PrinterProxy extends DummyPrinter {
    @Override
    public void print1() {
        System.out.println("Before print1");
        super.print1();
    }

    @Override
    public void print2() {
        System.out.println("Before print2");
        super.print2();
    }

    @Override
    public void printBoth() {
        System.out.println("Before print both");
        super.printBoth();
    }
}
public class Main {
    public static void main(String[] args) {
        DummyPrinter p = new PrinterProxy();
        p.printBoth();
    }
}
Output:
Before print both
Before print1
print1
Before print2
print2
Here each method is called on the proxy. Why does the documentation say that AspectJ should be used in the case of self-invocation?
Please read this chapter in the Spring manual, then you will understand. Even the term "self-invocation" is used there. If you still do not understand, feel free to ask follow-up questions, as long as they are in context.
Update: Okay, now that we have established that you really read that chapter, and after re-reading your question and analysing your code, I see that the question is actually quite profound (I even upvoted it) and worth answering in more detail.
Your (false) assumption about how it works
Your misunderstanding is about how dynamic proxies work, because they do not work like your sample code. Let me add the object ID (hash code) to the log output of your own code for illustration:
package de.scrum_master.app;

public class DummyPrinter {
    public void print1() {
        System.out.println(this + " print1");
    }

    public void print2() {
        System.out.println(this + " print2");
    }

    public void printBoth() {
        print1();
        print2();
    }
}
package de.scrum_master.app;

public class PseudoPrinterProxy extends DummyPrinter {
    @Override
    public void print1() {
        System.out.println(this + " Before print1");
        super.print1();
    }

    @Override
    public void print2() {
        System.out.println(this + " Before print2");
        super.print2();
    }

    @Override
    public void printBoth() {
        System.out.println(this + " Before print both");
        super.printBoth();
    }

    public static void main(String[] args) {
        new PseudoPrinterProxy().printBoth();
    }
}
Console log:
de.scrum_master.app.PseudoPrinterProxy@59f95c5d Before print both
de.scrum_master.app.PseudoPrinterProxy@59f95c5d Before print1
de.scrum_master.app.PseudoPrinterProxy@59f95c5d print1
de.scrum_master.app.PseudoPrinterProxy@59f95c5d Before print2
de.scrum_master.app.PseudoPrinterProxy@59f95c5d print2
See? There is always the same object ID, which is no surprise. Self-invocation for your "proxy" (which is not really a proxy but a statically compiled subclass) works due to polymorphism. This is taken care of by the Java compiler.
How it really works
Now please remember we are talking about dynamic proxies here, i.e. subclasses and objects created during runtime:
JDK proxies work for classes implementing interfaces, which means that classes implementing those interfaces are being created during runtime. In this case there is no superclass anyway, which also explains why it only works for public methods: interfaces only have public methods.
CGLIB proxies also work for classes not implementing any interfaces, and thus also for protected and package-scoped methods (not private ones, though, because you cannot override those; hence the term private).
The crucial point, though, is that in both of the above cases the original object already (and still) exists when the proxies are created, so polymorphism does not come into play. The situation is that we have a dynamically created proxy object delegating to the original object, i.e. we have two objects: a proxy and a delegate.
I want to illustrate it like this:
package de.scrum_master.app;

public class DelegatingPrinterProxy extends DummyPrinter {
    DummyPrinter delegate;

    public DelegatingPrinterProxy(DummyPrinter delegate) {
        this.delegate = delegate;
    }

    @Override
    public void print1() {
        System.out.println(this + " Before print1");
        delegate.print1();
    }

    @Override
    public void print2() {
        System.out.println(this + " Before print2");
        delegate.print2();
    }

    @Override
    public void printBoth() {
        System.out.println(this + " Before print both");
        delegate.printBoth();
    }

    public static void main(String[] args) {
        new DelegatingPrinterProxy(new DummyPrinter()).printBoth();
    }
}
See the difference? Consequently the console log changes to:
de.scrum_master.app.DelegatingPrinterProxy@59f95c5d Before print both
de.scrum_master.app.DummyPrinter@5c8da962 print1
de.scrum_master.app.DummyPrinter@5c8da962 print2
This is the behaviour you see with Spring AOP or other parts of Spring using dynamic proxies, and also in non-Spring applications using JDK or CGLIB proxies in general.
Is this a feature or a limitation? I, as an AspectJ (not Spring AOP) user, think it is a limitation. Maybe someone else might consider it a feature, because due to the way proxy usage is implemented in Spring, you can in principle (un-)register aspect advices or interceptors dynamically at runtime: you have one proxy per original object (delegate), but for each proxy there is a dynamic list of interceptors called before and/or after the delegate's original method. This can be a nice thing in very dynamic environments, although I have no idea how often you might actually want to use it. But AspectJ also has the if() pointcut designator, with which you can determine at runtime whether or not to apply certain advices (AOP language for interceptors).
Solutions
What you can do in order to solve the problem is:
Switch to native AspectJ, using load-time weaving as described in the Spring manual. Alternatively, you can use compile-time weaving, e.g. via the AspectJ Maven plugin.
If you want to stick with Spring AOP, you need to make your bean proxy-aware, i.e. indirectly also AOP-aware, which is less than ideal from a design point of view. I do not recommend it, but it is easy enough to implement: simply self-inject a reference to the component, e.g. @Autowired MyComponent INSTANCE, and then always call methods via that bean instance: INSTANCE.internalMethod(). This way, all calls go through proxies and Spring AOP aspects get triggered.
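For illustration, here is a minimal sketch of that self-injection workaround. The ReportService name and its methods are made up; @Lazy sidesteps the circular-reference check that newer Spring Boot versions enable by default:
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Lazy;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class ReportService {
    // Self-injection: Spring injects the proxy, not 'this'.
    @Lazy
    @Autowired
    private ReportService self;

    public void generateReports() {
        // Going through the proxy reference triggers the aspect;
        // a plain internalMethod() call on 'this' would bypass it.
        self.internalMethod();
    }

    @Transactional
    public void internalMethod() {
        // transactional work here
    }
}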
@Dolphin
It is a late reply, but maybe this text will help you: Spring AOP and self-invocation.
In short, the link leads you to a simple example of why running the code from another class works, but running it from "self" does not.
Notice in the example from the link that when you run the code from another class, you are asking Spring to inject the bean; Spring sees that the bean asks for caching and creates a runtime proxy for that bean.
On the other hand, when you do the same in the "self" class, you make a compile-time method call and Spring won't do anything about it.
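A minimal sketch of that difference, using Spring's @Cacheable on a hypothetical PriceService (names are illustrative, and caching is assumed to be enabled via @EnableCaching):
import java.math.BigDecimal;

import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class PriceService {
    @Cacheable("prices")
    public BigDecimal lookupPrice(String sku) {
        // Stands in for an expensive lookup; the proxy caches the result
        // when this method is invoked from another bean.
        return new BigDecimal("9.99");
    }

    public BigDecimal priceWithTax(String sku) {
        // Compile-time call on 'this': it bypasses the proxy,
        // so the cache is never consulted here.
        return lookupPrice(sku).multiply(new BigDecimal("1.19"));
    }
}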
Related
I'm working with Java Spring.
I want to make simple util methods like 'get the current time as a string', 'split a string on a special condition', etc.
I think there are two ways to develop this:
1. Using a static method
/* using a static method */
public class TestUtil {
    public static void printTest() {
        System.out.println("test!");
    }
}

// Calling the method
public class Caller {
    public void callTest() {
        TestUtil.printTest();
    }
}
2. Registering it as a component
/* using @Component */
@Component
public class TestUtil {
    public void printTest() {
        System.out.println("test!");
    }
}

// Calling the method
public class Caller {
    @Autowired
    TestUtil testutil;

    public void callTest() {
        testutil.printTest();
    }
}
Which of the two looks better? And what is the main difference?
If you can answer, have great mercy on this miserable junior developer.
Thanks for your answer.
First of all, you shouldn't use a static method to get the time. You can inject a Clock implementation into Spring components so that tests can substitute their own predictable clock. That way, tests of code that gets the time can assert against an expected value.
Spring has control over how it creates component objects, but it has no control over class loading; Spring can't control which class loads first or make sure static fields have valid values at a given time. It's better not to use static methods in Spring for anything but code that has no dependencies or side effects whatsoever.
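A minimal sketch of that idea (class and bean names are illustrative): expose java.time.Clock as a bean, inject it via the constructor, and let tests swap in a fixed clock:
import java.time.Clock;
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.stereotype.Component;

@Configuration
class ClockConfig {
    @Bean
    Clock clock() {
        return Clock.systemDefaultZone(); // the real clock in production
    }
}

@Component
class TimeUtil {
    private final Clock clock;

    TimeUtil(Clock clock) { // constructor injection, easy to replace in tests
        this.clock = clock;
    }

    String currentTimeAsString() {
        return LocalDateTime.now(clock).format(DateTimeFormatter.ISO_LOCAL_DATE_TIME);
    }
}
In a test you can then construct TimeUtil directly with Clock.fixed(Instant.parse("2024-01-01T10:00:00Z"), ZoneOffset.UTC) and assert the exact string.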
I have multiple methods in my codebase annotated with Spring's @Transactional, with different propagation levels (let's ignore the idea behind choosing the propagation levels). Example -
public class X {
    @Transactional(Propagation.NOT_SUPPORTED)
    public void A() { do_something; }

    @Transactional(Propagation.REQUIRED)
    public void B() { do_something; }

    @Transactional(Propagation.REQUIRES_NEW)
    public void C() { do_something; }
}
Now I have a new use case where I want to perform all these operations in a single transaction (for this specific use case only, without modifying existing behavior), overriding any annotated propagation levels. Example -
public class Y {
    private X x;

    // Stores the application's global state
    private GlobalState globalState;

    @Transactional
    public void newOperation() {
        // Set the current operation as the new operation in the global state,
        // in case this info might be required somewhere
        globalState.setCurrentOperation("newOperation");

        // For this new operation A, B, C should be performed in the current
        // transaction regardless of the propagation level defined on them
        x.A();
        x.B();
        x.C();
    }
}
Does Spring provide some way to achieve this? Is this not possible?
One way I can think of is to split the original methods:
@Transactional(Propagation.NOT_SUPPORTED)
public void A() { A_actual(); }

// Call A_actual from A and newOperation
public void A_actual() { do_something; }
But this might not be as simple to do as in this example (there can be a lot of such methods, and doing this might not scale), and it does not look very clean.
The use case might also appear counter-intuitive, but let's keep that out of the scope of this question.
I believe the only option is to replace the TransactionInterceptor via a BeanPostProcessor, something like:
public class TransactionInterceptorExt extends TransactionInterceptor {
    @Override
    public Object invoke(MethodInvocation invocation) throws Throwable {
        // here some logic determining how to proceed with the invocation
        return super.invoke(invocation);
    }
}
public class TransactionInterceptorPostProcessor implements BeanFactoryPostProcessor, BeanPostProcessor, BeanFactoryAware {
    @Setter
    private BeanFactory beanFactory;

    @Override
    public void postProcessBeanFactory(@NonNull ConfigurableListableBeanFactory beanFactory) throws BeansException {
        beanFactory.addBeanPostProcessor(this);
    }

    @Override
    public Object postProcessBeforeInitialization(@NonNull Object bean, @NonNull String beanName) throws BeansException {
        if (bean instanceof TransactionInterceptor) {
            TransactionInterceptor interceptor = (TransactionInterceptor) bean;
            TransactionInterceptor result = new TransactionInterceptorExt();
            result.setTransactionAttributeSource(interceptor.getTransactionAttributeSource());
            result.setTransactionManager(interceptor.getTransactionManager());
            result.setBeanFactory(beanFactory);
            return result;
        }
        return bean;
    }
}
@Configuration
public class CustomTransactionConfiguration {
    @Bean
    //@ConditionalOnBean(TransactionInterceptor.class)
    public static BeanFactoryPostProcessor transactionInterceptorPostProcessor() {
        return new TransactionInterceptorPostProcessor();
    }
}
However, I would agree with @jim-garrison's suggestion to refactor your Spring beans.
UPD.
But you favour refactoring the beans instead of following this approach. So, for the sake of completeness, can you please mention any issues/shortcomings with it?
Well, there are plenty of things/concepts/ideas in the Spring framework which were implemented without understanding/anticipating the consequences (I believe the goal was to make the framework attractive to inexperienced developers), and the @Transactional annotation is one of those things. Let's consider the following code:
@Transactional(Propagation.REQUIRED)
public void doSomething() {
    do_something;
}
The question is: why do we put the @Transactional(Propagation.REQUIRED) annotation on that method? Someone might say something like this:
that method modifies multiple rows/tables in the DB and we would like to avoid inconsistencies in our DB; moreover, Propagation.REQUIRED does not hurt anything, because according to the contract it either starts a new transaction or joins an existing one.
and that would be wrong:
the @Transactional annotation poisons stack traces with irrelevant information
in case of an exception, it marks the existing transaction it joined as rollback-only; after that, the caller side has no way to compensate for that exception
In most cases developers should not use @Transactional(Propagation.REQUIRED) at all: technically we just need a simple assertion about the transaction status.
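Such an assertion could look something like this sketch, using Spring's TransactionSynchronizationManager (the method name is illustrative; declaratively, Propagation.MANDATORY achieves a similar fail-fast check):
import org.springframework.transaction.support.TransactionSynchronizationManager;

public void doSomething() {
    // Fail fast if the caller did not open a transaction,
    // instead of silently starting one via Propagation.REQUIRED.
    if (!TransactionSynchronizationManager.isActualTransactionActive()) {
        throw new IllegalStateException("doSomething() requires an active transaction");
    }
    do_something;
}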
Using @Transactional(Propagation.REQUIRES_NEW) is even more harmful:
in case of an existing transaction, it acquires another JDBC connection from the connection pool, so you start getting 2+ connections per thread; this hurts performance sizing
you need to watch the data you are working with carefully; data corruption and self-locks are the consequences of using @Transactional(Propagation.REQUIRES_NEW), because now you have two incarnations of the same data within the same thread
In most cases, @Transactional(Propagation.REQUIRES_NEW) is an indicator that your code requires refactoring.
So, the general idea about the @Transactional annotation is: do not use it everywhere just because you can. Your question actually confirms this idea: you have failed to tie 3 methods together just because the developer had some assumptions about how those methods should be executed.
Below is my code snippet:
ServiceImpl.java
@Service
public class ServiceImpl implements Service {
    private Response worker(Audit send) throws ArgumentException {
        System.out.println("STEP_1");
        worker(send.getRequest(), send.getId());
    }

    private Response worker(Request request, String id) throws ArgumentException {
        System.out.println("STEP_2");
        try {
            // throwing some exception
        } catch (Exception e) {
            System.out.println("STEP_3");
        }
    }
}
Now, what I want is this: whenever a NullPointerException is thrown from the method worker(Request request, String id) shown above, I want to perform some specific task. For that I have written the following aspect class:
MyAspect.java
@Aspect
@Component
public class MyAspect {
    @Pointcut("com.xyz.myapp.ServiceImpl.worker() && args(request,..)")
    private void someOperation(Request request) {}

    @Before("someOperation(request)")
    public void process(Request request) {
        System.out.println("SUCCESS");
    }

    @AfterThrowing("com.xyz.myapp.ServiceImpl.worker() && args(request,..)")
    public void doRecoveryActions() {
        System.out.println("EXCEPTION_SUCCESS");
    }
}
Current Output:
STEP_1
STEP_2
STEP_3
Desired Output:
STEP_1
STEP_2
STEP_3
SUCCESS
EXCEPTION_SUCCESS
As you can see, MyAspect.java is not getting triggered, hence it is NOT printing the values.
What can be the reason for this?
Note:
I tried making the worker methods public too, but it didn't work.
I also tried renaming the methods to rule out any overloading issue; that didn't work either.
I have tried various other pointcut expressions, all in vain so far.
Other aspect classes in my application work absolutely fine.
You made a typical Spring AOP beginner's mistake: You assume that it works for private methods, but as the documentation clearly says, it does not. Spring AOP is based on dynamic proxies, and those only work for public methods when implementing interfaces via JDK proxies and additionally for protected and package-scoped methods when using CGLIB proxies.
You should make the worker() method public if you want to intercept it from an aspect.
P.S.: Full-fledged AspectJ also works for private methods, but switching to another AOP framework would be overkill here.
Update: You also have other problems in your code:
The first worker method, even if you make it public, does not return anything. The last statement should be return worker(send.getRequest(), send.getId());, not just worker(send.getRequest(), send.getId());.
Your pointcut com.xyz.myapp.ServiceImpl.worker() will never match because it has an empty argument list, but your method has arguments. The args() does not help you here.
The syntax of your pointcut is also wrong because it does not specify a return type for the method, not even *. Furthermore, the method name itself is not enough; it should be enclosed in an actual pointcut type such as execution(). I.e. you want to write something like:
#Pointcut("execution(* com.xyz.myapp.ServiceImpl.worker(..)) && args(request, ..)")
private void someOperation(Request request) {}
To intercept a method that throws an exception, you can use this code (it works only if the methods are public):
@AfterThrowing(pointcut = "com.xyz.myapp.SystemArchitecture.dataAccessOperation()", throwing = "ex")
public void doRecoveryActions(NullPointerException ex) {
    // ...
}
Source: Spring AOP
Please look at the code I posted below. FYI, this is from the Oracle website's websocket sample:
https://netbeans.org/kb/docs/javaee/maven-websocketapi.html
My question is: how does this work? Especially the broadcastFigure function of MyWhiteboard: it is not an abstract function that is overridden, and it is not "registered" with another class in the traditional sense. The only way I see it is that when the compiler sees the @OnMessage annotation, it inserts the broadcastFigure call into the compiled code for when a new message is received. But before calling this function, it runs the received data through the FigureDecoder class, based on this decoder being specified in the @ServerEndpoint annotation. Within broadcastFigure, when sendObject is called, the compiler inserts a reference to FigureEncoder, based on what's specified in @ServerEndpoint. Is this accurate?
If so, why did this implementation do things this way, using annotations? Before looking at this, I would have expected an abstract OnMessage function which needs to be overridden, and explicit registration functions for the Encoder and Decoder. Instead of such a "traditional" approach, why does the websocket implementation do it via annotations?
Thank you.
MyWhiteboard.java:
@ServerEndpoint(value = "/whiteboardendpoint", encoders = {FigureEncoder.class}, decoders = {FigureDecoder.class})
public class MyWhiteboard {
    private static Set<Session> peers = Collections.synchronizedSet(new HashSet<Session>());

    @OnMessage
    public void broadcastFigure(Figure figure, Session session) throws IOException, EncodeException {
        System.out.println("broadcastFigure: " + figure);
        for (Session peer : peers) {
            if (!peer.equals(session)) {
                peer.getBasicRemote().sendObject(figure);
            }
        }
    }

    @OnError
    public void onError(Throwable t) {
    }

    @OnClose
    public void onClose(Session peer) {
        peers.remove(peer);
    }

    @OnOpen
    public void onOpen(Session peer) {
        peers.add(peer);
    }
}
FigureEncoder.java
public class FigureEncoder implements Encoder.Text<Figure> {
    @Override
    public String encode(Figure figure) throws EncodeException {
        return figure.getJson().toString();
    }

    @Override
    public void init(EndpointConfig config) {
        System.out.println("init");
    }

    @Override
    public void destroy() {
        System.out.println("destroy");
    }
}
FigureDecoder.java:
public class FigureDecoder implements Decoder.Text<Figure> {
    @Override
    public Figure decode(String string) throws DecodeException {
        JsonObject jsonObject = Json.createReader(new StringReader(string)).readObject();
        return new Figure(jsonObject);
    }

    @Override
    public boolean willDecode(String string) {
        try {
            Json.createReader(new StringReader(string)).readObject();
            return true;
        } catch (JsonException ex) {
            ex.printStackTrace();
            return false;
        }
    }

    @Override
    public void init(EndpointConfig config) {
        System.out.println("init");
    }

    @Override
    public void destroy() {
        System.out.println("destroy");
    }
}
Annotations have their advantages and disadvantages, and there is a lot to say about choosing an annotation-based API versus a (how you say) "traditional" API using interfaces. I won't go into that, since you'll find plenty of wars online.
Used correctly, annotations provide better information about what a class's or method's responsibility is. Many prefer annotations, and as such they have become a trend and are used everywhere.
With that out of the way, let's get back to your question:
Why did this implementation do things this way using annotations? Before looking at this, I would have expected there to be an abstract OnMessage function which needs to be overridden and explicit registration functions for Encoder and Decoder. Instead of such a "traditional" approach, why does the websocket implementation do it via annotations?
Actually, they don't. Annotations are just one provided way of using the API. If you don't like them, you can do it the old way. Here is an excerpt from the JSR-356 spec:
There are two main means by which an endpoint can be created. The first means is to implement certain of the API classes from the Java WebSocket API with the required behavior to handle the endpoint lifecycle, consume and send messages, publish itself, or connect to a peer. Often, this specification will refer to this kind of endpoint as a programmatic endpoint. The second means is to decorate a Plain Old Java Object (POJO) with certain of the annotations from the Java WebSocket API. The implementation then takes these annotated classes and creates the appropriate objects at runtime to deploy the POJO as a websocket endpoint. Often, this specification will refer to this kind of endpoint as an annotated endpoint.
Again, people prefer using annotations, and that's what you'll find most tutorials using, but you can do without them if you want it badly enough.
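For comparison, here is a rough sketch of the programmatic style under JSR-356 (the class name and handler body are illustrative, not from the sample):
import javax.websocket.Endpoint;
import javax.websocket.EndpointConfig;
import javax.websocket.MessageHandler;
import javax.websocket.Session;

public class WhiteboardEndpoint extends Endpoint {
    @Override
    public void onOpen(final Session session, EndpointConfig config) {
        // Explicit registration instead of the @OnMessage annotation.
        session.addMessageHandler(new MessageHandler.Whole<String>() {
            @Override
            public void onMessage(String message) {
                // handle the raw text frame here
            }
        });
    }
}
Such an endpoint is then deployed with something like ServerEndpointConfig.Builder.create(WhiteboardEndpoint.class, "/whiteboardendpoint").build(), which is also where encoders and decoders would be registered explicitly.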
Here's my use case:
I need to do some generic operation before and after each method of a given class, which is based on the parameter(s) of the method. For example:
void process(Processable object) {
    LOGGER.log(object.getDesc());
    object.process();
}

class BaseClass {
    String method1(Object o) { // o may or may not be Processable (add process logic only in the former case)
        if (o instanceof Processable) {
            Processable p = (Processable) o;
            LOGGER.log(p.getDesc());
            p.process();
        }
        // method logic
    }
}
My BaseClass has a lot of methods, and I know for a fact that the same functionality will be added to several similar classes in the future.
Is something like the following possible?
@MarkForProcessing
String method1(@Process Object o) {
    // method logic
}
PS: Can AspectJ/Guice be used? I also want to know how to implement this from scratch, for understanding.
Edit: I forgot to mention what I have tried (not complete or working):
@Retention(RetentionPolicy.RUNTIME) // required so the annotation is visible to reflection at runtime
public @interface MarkForProcessing {
    String getMetadata();
}
public final class Handler {
    public boolean process(Object instance) throws Exception {
        Class<?> clazz = instance.getClass();
        for (Method m : clazz.getDeclaredMethods()) {
            if (m.isAnnotationPresent(MarkForProcessing.class)) {
                MarkForProcessing annotation = m.getAnnotation(MarkForProcessing.class);
                Class<?> returnType = m.getReturnType();
                Class<?>[] inputParamTypes = m.getParameterTypes();
                Class<?> inputType = null;
                // We are interested in just the 1st param
                if (inputParamTypes.length != 0) {
                    inputType = inputParamTypes[0];
                }
                // But all I have access to here are the types; I need access to the method arguments.
            }
        }
        return false;
    }
}
Yes, it can be done. Yes, you can use AspectJ. No, Guice would only be tangentially related to this problem.
The traditional aspect approach creates a proxy which is basically a subclass of the class you've given it (e.g. a subclass of BaseClass) but that subclass is created at runtime. The subclass delegates to the wrapped class for all methods. However, when creating this new subclass you can specify some extra behavior to add before or after (or both) the call to the wrapped class. In other words, if you have:
public class Foo {
    public void doFoo() {...}
}
Then the dynamic proxy would be a subclass of Foo created at runtime that looks something like:
public class Foo$Proxy extends Foo {
    public void doFoo() {
        // Custom pre-invocation code
        super.doFoo();
        // Custom post-invocation code
    }
}
Actually creating a dynamic proxy is a magical process known as bytecode manipulation. If you want to do that yourself, you can use tools such as cglib or ASM. Or you can use JDK dynamic proxies. The main downside of JDK proxies is that they can only wrap interfaces.
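Here is a minimal sketch of a JDK dynamic proxy, assuming a small made-up Printer interface (only interfaces can be wrapped this way):
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

interface Printer {
    void print();
}

public class ProxyDemo {
    public static void main(String[] args) {
        final Printer target = () -> System.out.println("print");

        Printer proxy = (Printer) Proxy.newProxyInstance(
                Printer.class.getClassLoader(),
                new Class<?>[] { Printer.class },
                new InvocationHandler() {
                    @Override
                    public Object invoke(Object p, Method method, Object[] methodArgs) throws Throwable {
                        System.out.println("Before " + method.getName()); // custom pre-invocation code
                        Object result = method.invoke(target, methodArgs); // delegate to the wrapped object
                        System.out.println("After " + method.getName()); // custom post-invocation code
                        return result;
                    }
                });

        proxy.print();
    }
}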
AOP tools like AspectJ provide an abstraction on top of the raw bytecode manipulation for doing the above (while you can do a lot with bytecode manipulation, adding behavior before and after methods is all that aspects allow). Typically they define 'aspects', which are classes that have special methods called 'advice', along with a 'pointcut' which defines when to apply that advice. In other words, you may have:
@Aspect
public class FooAspect {
    @Around("@annotation(MarkForProcessing)")
    public Object doProcessing(final ProceedingJoinPoint joinPoint) throws Throwable {
        // Do some before processing
        Object result = joinPoint.proceed(); // Invokes the underlying method
        // Do some after processing
        return result; // pass the target method's return value through
    }
}
The aspect is FooAspect, the advice is doProcessing, and the pointcut is "@annotation(MarkForProcessing)", which matches all methods annotated with @MarkForProcessing. It's worth pointing out that the ProceedingJoinPoint has a reference to the actual parameter values (unlike java.lang.reflect.Method).
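For instance, the advice body could inspect the actual arguments at runtime (a sketch, reusing the Processable idea from the question):
@Around("@annotation(MarkForProcessing)")
public Object doProcessing(final ProceedingJoinPoint joinPoint) throws Throwable {
    Object[] args = joinPoint.getArgs(); // the actual argument values at runtime
    if (args.length > 0 && args[0] instanceof Processable) {
        ((Processable) args[0]).process(); // pre-processing driven by the first parameter
    }
    return joinPoint.proceed();
}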
The last step is actually applying your aspect to an instance of your class. Typically this is either done with a container (e.g. Guice or Spring). Most containers have some way of knowing about a collection of aspects and when to apply them to classes constructed by that container. You can also do this programmatically. For example, with AspectJ you would do:
AspectJProxyFactory factory = new AspectJProxyFactory(baseClassInstance);
factory.addAspect(FooAspect.class);
BaseClass proxy = factory.getProxy();
Last, but not least, there are AOP implementations which use compile-time "weaving" which is a second compilation step run on the class files that applies the aspects. In other words, you don't have to do the above or use a container, the aspect will be injected into the class file itself.