Consider a class OriginalClass that might or might not be available at runtime. OriginalClass has a method doSomething which should be executed only if the class is available.
One way of solving this is to create a class that also has a doSomething method, which calls OriginalClass.doSomething using reflection. Something like this:
import java.lang.reflect.Method;

public class CompatibilityClass {
    private static Method originalClass_doSomething = null;

    static {
        initCompatibility();
    }

    private static void initCompatibility() {
        try {
            // Look up OriginalClass and its no-arg doSomething reflectively,
            // so this class still loads when OriginalClass is absent.
            originalClass_doSomething =
                    Class.forName("OriginalClass").getMethod("doSomething", new Class[] {});
        } catch (NoSuchMethodException | SecurityException | ClassNotFoundException e) {
            // OriginalClass is not available; doSomething() stays a no-op.
        }
    }

    public static void doSomething() {
        if (originalClass_doSomething != null) {
            try {
                originalClass_doSomething.invoke(null, new Object[] {});
            } catch (Exception e) {
                // Swallow reflection failures; the call degrades to a no-op.
            }
        }
    }
}
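Calling code can then invoke the wrapper without knowing whether OriginalClass is present; a minimal usage sketch:
// Runs OriginalClass.doSomething() if the class was found, otherwise silently does nothing.
CompatibilityClass.doSomething();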
What is the name of the design pattern applied here? I suspect it's either Adapter, Bridge, Facade or Proxy, but I'm not sure which.
I'd say it's the proxy pattern.
You've created a proxy class that wraps the gory reflection stuff and delegates the method call to a different object.
A proxy, in its most general form, is a class functioning as an interface to something else. The proxy could interface to anything: a network connection, a large object in memory, a file, or some other resource that is expensive or impossible to duplicate.
Your pattern is quite similar to something like performing a method call over a network.
Smells like proxy to me. But aren't you better off using Java's default Dynamic Proxy API?
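For illustration, a rough sketch of what that could look like with java.lang.reflect.Proxy; the OptionalService interface and the no-op fallback are assumptions for this example, not part of the original code:

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

// Hypothetical interface describing what OriginalClass offers.
interface OptionalService {
    void doSomething();
}

class OptionalServiceFactory {
    static OptionalService create() {
        InvocationHandler handler = (proxy, method, args) -> {
            try {
                // Delegate reflectively to OriginalClass's static method if the class is present.
                Class.forName("OriginalClass")
                     .getMethod(method.getName())
                     .invoke(null);
            } catch (ClassNotFoundException e) {
                // Class absent: degrade to a no-op.
            }
            return null;
        };
        return (OptionalService) Proxy.newProxyInstance(
                OptionalService.class.getClassLoader(),
                new Class<?>[] {OptionalService.class},
                handler);
    }
}

The dynamic proxy saves you from hand-writing one wrapper class per optional dependency, at the cost of an interface per wrapped class.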
Definition of proxy:
A proxy forces object method calls to occur indirectly through the proxy object, which acts as a surrogate or delegate for the underlying object being proxied. Proxy objects are usually declared so that the client objects have no indication that they have a proxy object instance.
Simple explanation:
Adapter: used when you have two classes (A and B) that are semantically equivalent or similar but have different interfaces. The adapter implements the interface of A but delegates to B, or vice versa, so that A and B can be used interchangeably.
Bridge: typically applied to a whole inheritance tree, decoupling an abstraction from its implementations (I have never used it, though).
Facade: hides the complexity of one or more classes behind a simpler interface.
Proxy: has the same interface as the target object and delegates to it; typically used for lazy loading and for decoupling the client from the target.
So your code sample looks like a Proxy.
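For contrast, a minimal Adapter sketch (all names invented for the example): an adapter changes the interface, whereas your proxy keeps it the same:

// The interface the client expects.
interface Printer {
    void print(String text);
}

// An existing class with a different, incompatible interface.
class LegacyPrinter {
    void printDocument(String text) { System.out.println(text); }
}

// Adapter: implements the expected interface and delegates to the legacy class.
class LegacyPrinterAdapter implements Printer {
    private final LegacyPrinter legacy = new LegacyPrinter();

    @Override
    public void print(String text) {
        legacy.printDocument(text);
    }
}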
Related
As we all know, self-invocation of a bean's method does not work in Spring without AspectJ.
See this question for example.
I think this is because the Spring-created proxy calls the target object's methods using the delegate pattern. Like this:
class MyClass {
    @Autowired
    private MyClass self; // actually a MyProxy instance

    @Transactional // or any other proxy magic
    public void myMethod() {}

    public void myOtherMethod() {
        this.myMethod(); // or self.myMethod() to avoid the self-invocation problem
    }
}
class MyProxy extends MyClass { // or implements MyInterface if proxyMode is not TARGET_CLASS and MyClass also implements MyInterface
    private final MyClass delegate;

    @Override
    public void myMethod() {
        // some proxy magic: caching, transaction management etc.
        delegate.myMethod();
        // some proxy magic: caching, transaction management etc.
    }

    @Override
    public void myOtherMethod() {
        delegate.myOtherMethod();
    }
}
Am I right?
With this code:
public void myOtherMethod() {
    this.myMethod();
}
this.myMethod() will bypass the proxy (and so all the @Transactional or @Cacheable magic), because it is just an internal call on the delegate. So we should inject the MyClass bean (which is actually a MyProxy instance) into MyClass and call self.myMethod() instead. That much is understandable.
But why is the proxy implemented this way?
Why does it not simply extend the target class, overriding all public methods and calling super instead of a delegate?
Like this:
class MyProxy extends MyClass {
    // private final MyClass delegate; // no delegate

    @Override
    public void myMethod() {
        // some proxy magic: caching, transaction management etc.
        super.myMethod();
        // some proxy magic: caching, transaction management etc.
    }

    @Override
    public void myOtherMethod() {
        super.myOtherMethod();
    }
}
It should solve the self-invocation problem, where this.myMethod() bypasses the proxy: in this case this.myMethod(), invoked from MyClass.myOtherMethod() (remember that the MyClass bean is actually a MyProxy instance), would invoke the overridden child method, MyProxy.myMethod().
So, my main question is: why is it not implemented this way?
Your assumption that Spring AOP uses delegation for its proxies is correct. This is also documented.
Using CGLIB, you can theoretically use proxy.invokeSuper() in order to achieve the effect you want, i.e. that self-invocation is registered by the aspect implemented by the proxy's method interceptor (I am using Spring's embedded version of CGLIB here, thus the package names):
package spring.aop;
import org.springframework.cglib.proxy.Enhancer;
import org.springframework.cglib.proxy.MethodInterceptor;
import org.springframework.cglib.proxy.MethodProxy;
import java.lang.reflect.Method;
class SampleClass {
public void x() {
System.out.println("x");
y();
}
public void y() {
System.out.println("y");
}
public static void main(String[] args) {
Enhancer enhancer = new Enhancer();
enhancer.setSuperclass(SampleClass.class);
enhancer.setCallback(new MethodInterceptor() {
@Override
public Object intercept(Object obj, Method method, Object[] args, MethodProxy proxy)
throws Throwable {
if(method.getDeclaringClass() == Object.class)
return proxy.invokeSuper(obj, args);
System.out.println("Before proxy.invokeSuper " + method.getName());
Object result = proxy.invokeSuper(obj, args);
System.out.println("After proxy.invokeSuper " + method.getName());
return result;
}
});
SampleClass proxy = (SampleClass) enhancer.create();
proxy.x();
}
}
Console log:
Before proxy.invokeSuper x
x
Before proxy.invokeSuper y
y
After proxy.invokeSuper y
After proxy.invokeSuper x
This is exactly what you want. The problem starts, however, when you have several aspects: transactions, logging, whatever else. How do you make sure that they all work together?
Option 1: Each aspect gets its own proxy. This obviously will not work unless you nest the proxies into each other according to aspect precedence. But nesting them into each other means inheritance, i.e. one proxy would have to inherit from the other, outside-in. Try proxying a CGLIB proxy: it does not work; you get exceptions. Furthermore, CGLIB proxies are quite expensive and use perm-gen memory, see the descriptions in this CGLIB primer.
Option 2: Use composition instead of inheritance. Composition is more flexible. Having one proxy to which you can register aspects as needed solves the inheritance problem, but also means delegation: the proxy registers the aspects and calls their methods during runtime in the right order before/after the actual real object's code is executed (or not, if an @Around advice never calls proceed()). See this example from the Spring manual about manually registering aspects to a proxy:
// create a factory that can generate a proxy for the given target object
AspectJProxyFactory factory = new AspectJProxyFactory(targetObject);

// add an aspect, the class must be an @AspectJ aspect
// you can call this as many times as you need with different aspects
factory.addAspect(SecurityManager.class);

// you can also add existing aspect instances, the type of the object supplied must be an @AspectJ aspect
factory.addAspect(usageTracker);

// now get the proxy object...
MyInterfaceType proxy = factory.getProxy();
As to why the Spring developers chose this approach, and whether it might have been possible to use the one-proxy approach while still making self-invocation work like in my little CGLIB sample "logging aspect" above, I can only speculate. You can ask them on the developers mailing list or look into the source code. Maybe the reason was that CGLIB proxies should behave similarly to the default Java dynamic proxies, so as to make switching between the two for interface types seamless. Or maybe the reason is something else entirely.
I did not mean to be rude in my comments, only straightforward: your question is really not suited to Stack Overflow, because it is not a technical problem to which someone can find a solution. It is a historical design question, and rather philosophic in nature, because with AspectJ a solution to the technical problem beneath the actual question (self-invocation) already exists. But maybe you still want to dive into the Spring source code, change the Spring AOP implementation from delegation to proxy.invokeSuper(), and file a pull request. I am not sure such a breaking change would be accepted, though.
In addition, you will not be able to use inheritance + super in the following cases:
What if the RealSubject is final? Then the proxy cannot extend it.
What if the Proxy needs to extend something other than the RealSubject?
What if you need to hide some functionality (methods) of the RealSubject?
Prefer composition over inheritance (as recommended by many developers).
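A tiny sketch of the final-class case (names invented for illustration): composition still works where inheritance is impossible:

// A final class cannot be subclassed, so an inheritance-based proxy is ruled out.
final class RealSubject {
    void doWork() { System.out.println("working"); }
}

// A composition-based proxy still works: it holds the instance and delegates to it.
class RealSubjectProxy {
    private final RealSubject delegate = new RealSubject();

    void doWork() {
        // proxy magic before (caching, transactions, ...)
        delegate.doWork();
        // proxy magic after
    }
}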
I have an interface, Resource, which is supposed to wrap something and expose a few operations on the wrapped object.
My first approach was to write the following, with the Strategy pattern in mind.
interface Resource<T> {
    ResourceState read();
    void write(ResourceState state);
}
abstract class AbstractResource<T> implements Resource<T> {
// This is where the Strategy comes in.
protected AbstractResource(ResourceStrategy<T> strat) {
// ...
}
// Both the read and write implementations delegate to the strategy.
}
class ExclusiveResource<T> extends AbstractResource<T> { ... }
class ShareableResource<T> extends AbstractResource<T> { ... }
The two implementations above differ in the locking scheme used (regular locks, or read-write locks).
There is also a ResourceManager, an entity responsible for managing these things.
My idea of usage by the client, would be:
ResourceManager rm = ...
MyCustomObject o = ...
MyCustomReadWriteStrategy strat = ...
rm.newResourceFor(o, "id", strat);
This way, the client would know about resources but wouldn't have to deal with them directly (hence the package-private classes). Also, I could provide my own implementations of some common resources, like sockets, and the client would only ask for them (i.e., I would have to write a SocketStrategy implements ResourceStrategy<Socket>).
ResourceManager rm = ...
rm.newSocketResource("id", host, port);
To access resources, the client would request a handler from the manager. This is because each thread has specific access privileges, so the manager creates a handler with the appropriate access privileges.
// This is in the ResourceManager class.
public ResourceHandler getHandlerFor(String id) {
if (!canThreadUseThisResource(id)) throw ...;
if (isThreadReaderOnly()) {
return new ResourceReadHandler( ... );
} else {
return new ResourceWriteHandler( ... );
}
}
This is where the problem kicks in.
This approach seems clean and clear to me, it also seems to be intuitive for the user.
But, as hinted, the manager keeps a mapping from identifiers to resources. How would this be declared, and how would the manager retrieve the resources from the map?
Map<String, Resource<?>> map;
// Can I go around without any specific cast? Not sure yet.
Resource<?> r = map.get(id);
// This could have an enum ResourceType, to check if thread has privileges
// for the specific type.
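A sketch of what I think the lookup could look like, assuming the handlers only ever go through read() and write(), which never mention T, so the wildcard never needs resolving (the handler constructors and the SecurityException are assumptions):

private final Map<String, Resource<?>> map = new HashMap<>();

public ResourceHandler getHandlerFor(String id) {
    if (!canThreadUseThisResource(id)) throw new SecurityException(id);
    Resource<?> r = map.get(id); // no cast needed
    return isThreadReaderOnly()
            ? new ResourceReadHandler(r)   // only ever calls r.read()
            : new ResourceWriteHandler(r); // may call r.write(state)
}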
Is this design acceptable, and/or following good practices?
Alternatively, I could wipe out the generics, and have ExclusiveResource and ShareableResource be abstract and public.
These classes would then be extended, both by me and the client, for every type of resource needed (FileResource extends ExclusiveResource, SocketResource extends ExclusiveResource, ...).
This would probably eliminate the need for the strategy pattern, but would expose more of my package to the user.
Which of these alternatives is the most correct, or widely accepted as good practice?
Edit: After some thought, I think I should be able to remove the generic from the Resource interface, since that's the one causing trouble, and leave it on AbstractResource and its subclasses. The latter would still grant me compile-time verification of the strategies used.
public <T> void newExclusiveResourceFor(
T obj, String id, ResourceStrategy<T> strat) {
ExclusiveResource<T> r = new ExclusiveResource<>(obj, strat);
map.put(id, r);
}
However, following the inheritance way seems to be more correct.
As suggested by dkaustubh and Paul Bellora, as it stands, there is no plausible justification for the generic in the Resource interface. This had gone completely unnoticed by me, at first, since I wanted the implementations to be generic, so I assumed the interface should also be generic. That's not the case.
I still have two options here.
Using Generics
I should remove the generic in the interface. Then, I would end up with the following.
interface Resource {
    ResourceState read();
    void write(ResourceState state);
    void dispose();
}
abstract class AbstractResource<T> implements Resource {
/* This is where the Strategy comes in.
* The generic ensures compile-time verification of the
* strategy's type. */
protected AbstractResource(ResourceStrategy<T> strat) {
// ...
}
// Both the read and write implementations delegate to the strategy.
}
class ExclusiveResource<T> extends AbstractResource<T> { ... }
class ShareableResource<T> extends AbstractResource<T> { ... }
// This is the behaviour the client implements, for custom resources.
public abstract class ResourceStrategy<T> {
public abstract ResourceState read(T obj);
public abstract void write(ResourceState state);
public abstract void dispose(T obj);
}
Only ResourceHandler, ResourceManager, ResourceState and ResourceStrategy need to be public, to the client.
Using Inheritance
Using inheritance, I can achieve the same results, with some trade-offs.
public interface Resource {
    ResourceState read();
    void write(ResourceState state);
    void dispose();
}
/* These implement only the locking schemes. */
abstract class ExclusiveResource implements Resource { ... }
abstract class ShareableResource implements Resource { ... }
/* The user extends these for custom content and behaviour. */
public abstract class CustomExclusiveResource
extends ExclusiveResource { ... }
public abstract class CustomShareableResource
extends ShareableResource { ... }
Resources are now public to the client.
Conclusions
There are ways to misuse resources with both approaches, bypassing the expected contracts and thread permissions. Both approaches are equal here.
With generics, the inner representation of resources need not be known by the client, since the manager creates the resources in the background. With inheritance, resource creation takes place on the client side, so the manager's API would change to accept provided resources.
Even though Resources are not public, using generics, the client needs to know about the strategies. With inheritance, these are gone, and the public status is assigned to resources instead.
With strategies, the behaviour can be changed at runtime, or there could be different behaviours for the same kind of resource. Without them, the client needs to dispose of a resource and then re-create it using another subclass that implements the different behaviour.
E.g.: small files can be completely read to memory, while large files may require an appropriately sized buffer.
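To make that concrete, a sketch of one such strategy (this class is invented for the example; the ResourceState constructor is assumed):

import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Strategy for small files: read the whole file into memory in one go.
class InMemoryFileStrategy extends ResourceStrategy<Path> {
    @Override
    public ResourceState read(Path file) {
        try {
            return new ResourceState(Files.readAllBytes(file)); // assumed constructor
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    @Override
    public void write(ResourceState state) { /* omitted in this sketch */ }

    @Override
    public void dispose(Path file) { /* nothing to release */ }
}

A BufferedFileStrategy for large files could be a second subclass, and the same resource could switch between the two at runtime.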
Unless something else is missing, it may just be a matter of choice, and thinking about the desired API and use cases.
I have a generated object that I want to:
Preserve the existing functionality of, without injecting it into the constructor and rewriting every method to call injectedObject.sameMethod().
Add additional functionality to, without modifying the generated object.
For example:
public class GeneratedObject {
public String getThis() { ... }
public String getThat() { ... }
}
public interface ObjectWrapper {
String doThisWithThat();
}
public class ObjectWrapperImpl extends GeneratedObject implements ObjectWrapper {
    @Override
    public String doThisWithThat() { ... }
}
However, downcasting is not allowed, what is the proper implementation without rewriting a bunch of redundant code just to wrap the object?
I think decorator pattern may help you: "The decorator pattern can be used to extend (decorate) the functionality of a certain object at run-time, independently of other instances of the same class"
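A minimal decorator-style sketch of that idea (this wrapper class is invented for the example): hold the generated instance and forward to it, adding the new operation on top:

// Decorator-style wrapper: holds the generated instance and forwards to it.
class DecoratedObject implements ObjectWrapper {
    private final GeneratedObject wrapped;

    DecoratedObject(GeneratedObject wrapped) { this.wrapped = wrapped; }

    // Existing functionality, forwarded by hand.
    public String getThis() { return wrapped.getThis(); }
    public String getThat() { return wrapped.getThat(); }

    // The added functionality, composed from the existing methods.
    @Override
    public String doThisWithThat() {
        return wrapped.getThis() + wrapped.getThat();
    }
}

This does still forward the two getters by hand; for generated classes with many methods, the dynamic proxy described below avoids that boilerplate.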
Have you tried aspectj? http://www.eclipse.org/aspectj/doc/next/progguide/semantics-declare.html It's a bit complicated but so is your request.
If you can extract an interface from GeneratedObject, then it would be possible to do this using a dynamic proxy. You would make a proxy which implemented the extracted interface and ObjectWrapper, with an invocation handler which passed all calls to methods in the GeneratedObject interface through to the delegate, and sent the doThisWithThat() calls elsewhere.
Proxies aren't pretty, but the ugliness is at least well-localised.
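A rough sketch of that, assuming a hypothetical extracted interface (passed in as api) whose methods match GeneratedObject's:

import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

class WrapperFactory {
    @SuppressWarnings("unchecked")
    static <T> T wrap(GeneratedObject target, Class<T> api) {
        return (T) Proxy.newProxyInstance(
                api.getClassLoader(),
                new Class<?>[] {api, ObjectWrapper.class},
                (proxy, method, args) -> {
                    if (method.getName().equals("doThisWithThat")) {
                        // Send the added operation elsewhere (composed here for the sketch).
                        return target.getThis() + target.getThat();
                    }
                    // Pass every other interface method through to the wrapped instance.
                    Method m = GeneratedObject.class
                            .getMethod(method.getName(), method.getParameterTypes());
                    return m.invoke(target, args);
                });
    }
}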
I have 3 classes:
Error
ShellError
WebError
where
ShellError extends Error
and
WebError extends Error
In ShellError there are fields, some of which are optional and others required. I am building the object in the following manner:
shellError = new ShellError.Builder().setFile(filePattern)
.setHost(host).setPath(path).setSource(file.isSource())
.setJobName(p.getJobName()).build();
Since ShellError extends Error, I further:
shellError.setDescription(msg.toString());
shellError.setExceptionClass("MyEvilException");
shellError.setExceptionMessage("Some clever error message");
shellError.setStacktrace(stack);
So ... why bother with Builder? I like the fact that my build() amongst other things conveniently validates that all fields are set appropriately etc.
I would love it if I could .. build() ShellError and add to it the fields from the Error class.
What I did works.
The question is:
Is there a better way, or does it make sense what I did?
-- EDIT
I updated Builder() to accept some of the parameters which were in Error class before. Now I have
shellError = new ShellError.Builder(exception, "Some description")
        .setFile(filePattern).setHost(host)
        .setPath(path).setSource(file.isSource())
        .setJobName(p.getJobName()).build();
What do you say? Better? Worse?
The builder pattern, popularized by Josh Bloch, has several benefits, but it doesn't work so elegantly on parent/subclasses, as explained in this discussion by our colleagues in the C# world. The best solution I have seen so far is this one (or a slight variant of it).
Based on the functions you've referenced, this is clearly not the standard java.lang.Error class. Typically builders are used to allow for an immutable object to be easily constructed or to provide functionality similar to "named parameters" in cases where there are lots of configuration / construction parameters.
For this particular case, it would be more sensible if the Error class were immutable after construction, and if these additional setter functions were on the builder instead of on the error class. I don't know how much control you have over any of these classes, but if you can modify them, I would suggest first making the builder support the same setters, so you can do all the configuration on the builder. Then, if it is feasible, you could try removing these setter methods and instead allowing these fields to be configured from the constructor. If you don't have any control at all over those classes, you could potentially extend the builder class with another one that supports these additional methods.
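A sketch of the first suggestion (setter names copied from the question; the builder internals are assumed): with the Error fields on the builder, everything is validated in one build() call and the result can stay immutable:

ShellError shellError = new ShellError.Builder()
        .setFile(filePattern)
        .setHost(host)
        .setPath(path)
        .setSource(file.isSource())
        .setJobName(p.getJobName())
        // Fields inherited from Error, now set before construction:
        .setDescription(msg.toString())
        .setExceptionClass("MyEvilException")
        .setExceptionMessage("Some clever error message")
        .setStacktrace(stack)
        .build(); // validates required fields, then constructs the ShellError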
What you did makes sense. It seems like the design of the builder and error classes don't necessarily make a whole lot of sense, forcing you to write code that feels inelegant or inconsistent.
As was already said, the builder pattern is not something that fits organically into the existing Java object-initialization policies. There are several approaches to achieve the required result. Though, of course, it is always better to avoid ambiguous practices, that is not always possible. My hack is based on the Java reflection API with generics:
import java.lang.reflect.Constructor;
import java.lang.reflect.InvocationTargetException;

abstract public class AbstractClass {
    public static class Builder {
        public <T extends AbstractClass> T build(Class<T> implementingClass) {
            try {
                // Each implementing class must expose a constructor taking this Builder.
                Constructor<T> constructor = implementingClass
                        .getConstructor(new Class[]{Builder.class});
                return constructor.newInstance(this);
            } catch (NoSuchMethodException e) {
                // TODO handle the exception properly
                throw new IllegalStateException("No Builder constructor on " + implementingClass, e);
            } catch (InvocationTargetException | InstantiationException |
                     IllegalAccessException e) {
                // TODO handle the exception properly
                throw new IllegalStateException(e);
            }
        }
    }

    protected AbstractClass(Builder builder) {
    }
}
public class ImplementingClass extends AbstractClass {
public ImplementingClass (Builder builder) {
super(builder);
}
}
The initialization:
ImplementingClass instance = new AbstractClass.Builder()
.build(ImplementingClass.class);
I have a Command class like the following:
public class Command {
...
private String commandName;
private Object[] commandArgs;
...
public void executeCommand() {}
}
I also have a subclass of Command, AuthenticateCommand:
public class AuthenticateCommand extends Command {
    ...
    @Override
    public void executeCommand() {
        ...
    }
}
Now imagine a class, Server, that has a method processCommand(Command command). It takes the command param, inspects the commandName field, and uses that name to cast the command to the subclass of Command responsible for implementing the command logic. In this example, you might have a Command with a commandName of "authenticate" and the username and pw stored in the commandArgs array. processCommand() would cast the Command to AuthenticateCommand and invoke the executeCommand() method. I'm trying to accomplish this with the following (commandMap is just a Map that maps a commandName to its implementor class name):
public void processCommand(Command command) {
String commandName = command.getCommandName();
String implementorClassString = commandMap.get(commandName);
try {
Class<?> implementorClass = Class.forName(implementorClassString);
Object implementor = implementorClass.cast(command);
Method method = implementorClass.getDeclaredMethod("executeCommand");
method.invoke(implementor);
} catch (ClassNotFoundException e) {
logger.error("Could not find implementor class: " + implementorClassString, e);
} catch (NoSuchMethodException e) {
logger.error("Could not find executeCommand method on implementor class: " + implementorClassString, e);
} catch (IllegalAccessException e) {
logger.error("Could not access private member/method on implementor class: " + implementorClassString, e);
} catch (InvocationTargetException e) {
logger.error("Could not invoke executeCommand method on implementor class: " + implementorClassString, e);
}
}
The call to implementorClass.cast() is throwing a ClassCastException. Shouldn't it be able to downcast to the AuthenticateCommand class in this manner?
UPDATE
Some more background. The Server class handles more than just AuthenticateCommands; there could be any number of Command subclasses, depending on the project. I'm trying to make it simple for someone writing a Client to pass a serialized Command object with just a name and arguments. I could force the client to "know about" AuthenticateCommand and all the others, and then serialize those and pass them, but that seems sub-optimal, because the only difference between the subclasses is the implementation of executeCommand, which the client doesn't know or care about. So I just want a way to have the Client pass the parent class, and use data within that parent class to cast it to the appropriate subclass.
I suppose I could use newInstance() instead of cast and just create a new object, but that seems wasteful. I suppose I could also do away with the concept of subclasses handling the logic and move those into methods, and then processCommand would call the appropriate method. That feels janky to me as well, though.
Why are you casting at all? You're just trying to call executeCommand, and that's available on Command... so just write:
command.executeCommand();
which should compile and run. It's not clear where the map comes in at all.
As for why the cast is failing... my guess is that the ClassLoader for the command isn't the default ClassLoader at this point, so that implementorClass is the same class, but loaded by a different ClassLoader... which makes it a different class as far as the JVM is concerned.
EDIT: I'd say your design is broken. The Command object you're being passed isn't fulfilling its role properly. One option would be to have a new RemoteCommand subclass which knows the name, and when its executeCommand method is called, it builds the appropriate subclass instance. And yes, it will need to build an instance of the class. You can't call an instance method on a class without an instance of that class, and you can't make one object "pretend" that it's actually an object of a different type. What if AuthenticationCommand has some extra fields it tries to use? Where would the values come from?
A nicer alternative is to make your serialization/deserialization layer do this, so that by the time you've reached this bit of code, you've already got an AuthenticationCommand - and you can use the code at the top of this answer.
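A rough sketch of the RemoteCommand idea (the commandMap lookup and the setCommandArgs/getCommandArgs accessors are assumptions for illustration): the wire-level command resolves the real subclass, builds an instance, copies its own state across, and delegates:

public class RemoteCommand extends Command {
    @Override
    public void executeCommand() {
        try {
            // Resolve the implementing class from the command name.
            Class<?> implementorClass = Class.forName(commandMap.get(getCommandName()));
            // Build an actual instance; there is no way around creating one.
            Command target = (Command) implementorClass.getDeclaredConstructor().newInstance();
            // Copy the wire-level state across, then delegate.
            target.setCommandArgs(getCommandArgs()); // assumed accessor
            target.executeCommand();
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException("Cannot execute " + getCommandName(), e);
        }
    }
}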
You really need to instantiate it. You can't "convert" a Class<T> to a concrete instance by just casting. Also, the casting should be done the other way round compared to your code snippet.
Class<?> implementorClass = Class.forName(implementorClassString);
Command instance = Command.class.cast(implementorClass.newInstance());
instance.executeCommand();
Not to mention that this all is a design smell.
You would be able to downcast only when the Command object actually references an AuthenticateCommand instance at runtime. This is what polymorphism is about, isn't it?