I already posted some relevant code in this question:
Specify object type of a returned array list dynamically
Now my question is a little bit more specific.
In fact I am using the following "handler" class to invoke methods of classes which implement the interface IMSSQLStatement:
public class MSSQLHandler {
IMSSQLStatement statement;
public MSSQLHandler(IMSSQLStatement statement) {
this.statement = statement;
}
public void invoke() throws SQLException {
statement.executeStatement();
}
public List<?> getDataList() throws SQLException {
return statement.getDataList();
}
}
The question now is how to force me (or a developer who implements my interface) to pass created objects of the implementing class to MSSQLHandler?
Maybe this is bad design, but I did not find any information or use cases regarding my problem.
Yes, you can use an abstract class with an explicit constructor that is automatically called for all subclasses:
public abstract class IMSSQLStatement {

    protected MSSQLHandler handler;

    public IMSSQLStatement() {
        handler = new MSSQLHandler(this);
    }
}
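For illustration, a concrete statement might then look like this. This is only a sketch, assuming the abstract IMSSQLStatement also declares executeStatement() and getDataList() as abstract methods; the class name CustomerListStatement is hypothetical:

import java.sql.SQLException;
import java.util.Collections;
import java.util.List;

public class CustomerListStatement extends IMSSQLStatement {
    // the superclass constructor has already wired this object into a handler

    @Override
    public void executeStatement() throws SQLException {
        // run the query here
    }

    @Override
    public List<?> getDataList() throws SQLException {
        // return the rows collected by executeStatement()
        return Collections.emptyList();
    }
}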
Edit: (in reference to comment)
If you want only the handler to be able to call the methods in IMSSQLStatement, both classes should be placed in the same package. Give the methods the protected modifier, which allows only package-private and subclass access. The methods could still be called from within the subclass itself, but they would not be accessible from outside the package.
This won't solve your problem completely. The other (really hacky) way around would be reflection.
To use reflection, you should document the exact method signature the subclass has to use (and, of course, not define an abstract method in the superclass), giving it the private modifier. The handler should then access these methods through reflection.
Refer to a document that describes how to use reflection; it is complicated and beyond the scope of SO.
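That said, a rough sketch of the reflective call inside the handler could look like this (the private method name executeStatement is just an assumed, documented convention):

// inside the handler class
public void invoke() throws Exception {
    // look up the privately declared, documented method by name
    java.lang.reflect.Method m = statement.getClass().getDeclaredMethod("executeStatement");
    m.setAccessible(true); // needed because the method is private
    m.invoke(statement);
}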
Related
Let's say I have 1 complete class with around 20 methods which provide different functionalities.
Now we have multiple clients using this class, but we want them to have restricted access.
For e.g. -
Client 1 - Gets access to method1/m3/m5/m7/m9/m11
Client 2 - Gets access to method2/m4/m6/m8/m10/m12
Is there any way I can restrict this access?
One solution which I thought:
Create 2 new classes extending Parent class and override methods which are not accessible and throw Exception from them.
But then if a 3rd client comes with different requirements, we have to create a new subclass for them.
Is there any other way to do this?
Create 2 new classes extending Parent class and override methods which are not accessible and throw Exception from them. But then if a 3rd client comes with different requirements, we have to create a new subclass for them.
It is a bad solution because it violates polymorphism and the Liskov Substitution Principle, and it will make your code less clear.
First, think about your class: are you sure it isn't overloaded with methods? Are you sure all of those methods relate to one abstraction? Perhaps it makes sense to split the methods into different abstractions and classes?
If there is a point to having all of those methods in one class, then you should expose different interfaces to different clients. For example, you can define one interface per client:
interface InterfaceForClient1 {
public void m1();
public void m3();
public void m5();
public void m7();
public void m9();
public void m11();
}
interface InterfaceForClient2 {
public void m2();
public void m4();
public void m6();
public void m8();
public void m10();
public void m12();
}
And implement them in your class
class MyClass implements InterfaceForClient1, InterfaceForClient2 {
}
After that, clients must use those interfaces instead of the concrete implementation of the class to implement their own logic.
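A short sketch of how the same instance can then be handed out under two different views (assuming MyClass actually implements all twelve methods):

MyClass shared = new MyClass();

// client 1 only ever receives this reference and sees m1, m3, m5, m7, m9, m11
InterfaceForClient1 viewForClient1 = shared;

// client 2 only ever receives this reference and sees m2, m4, m6, m8, m10, m12
InterfaceForClient2 viewForClient2 = shared;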
You can create an Interface1 which defines methods only for Client1, and an Interface2 which defines methods only for Client2. Then, your class implements Interface1 and Interface2.
When you declare Client1 you can do something like: Interface1 client1.
With this approach, client1 can access only the methods of this interface.
I hope this will help you.
The other answers already present the idiomatic approach. Another idea is a dynamic proxy decorating the API with an access check.
In essence, you generate a proxy API that has additional checks on method calls to implement a form of Access Control.
Example Implementation:
package com.example;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
@FunctionalInterface
public interface ACL<P, Q> {
boolean allowed(P accessor, Q target, Method method, Object[] args);
class ACLException extends RuntimeException {
ACLException(String message) {
super(message);
}
}
@SuppressWarnings("unchecked")
default Q protect(P accessor, Q delegate, Class<Q> dType) {
if (!dType.isInterface()) {
throw new IllegalArgumentException("Delegate type must be an Interface type");
}
final InvocationHandler handler = (proxy, method, args) -> {
if (allowed(accessor, delegate, method, args)) {
try {
return method.invoke(delegate, args);
} catch (InvocationTargetException e) {
throw e.getCause();
}
} else {
throw new ACLException("Access denies as per ACL");
}
};
return (Q) Proxy.newProxyInstance(dType.getClassLoader(), new Class[]{dType}, handler);
}
}
Example Usage:
package com.example;
import java.lang.reflect.Method;
public class Main {
interface API {
void doAlpha(int arg);
void doBeta(String arg);
void doGamma(Object arg);
}
static class MyAPI implements API {
@Override
public void doAlpha(int arg) {
System.out.println("Alpha");
}
@Override
public void doBeta(String arg) {
System.out.println("Beta");
}
@Override
public void doGamma(Object arg) {
System.out.println("Gamma");
}
}
static class AlphaClient {
void use(API api) {
api.doAlpha(100);
api.doBeta("100");
api.doGamma(this);
}
}
public static class MyACL implements ACL<AlphaClient, API> {
@Override
public boolean allowed(AlphaClient accessor, API target, Method method, Object[] args) {
final String callerName = accessor.getClass().getName().toLowerCase();
final String methodName = method.getName().toLowerCase().replace("do", "");
return callerName.contains(methodName);
}
}
public static void main(String[] args) {
final MyACL acl = new MyACL();
final API api = new MyAPI();
final AlphaClient client = new AlphaClient();
final API guardedAPI = acl.protect(client, api, API.class);
client.use(guardedAPI);
}
}
Notes:
The accessor does not have to be the client object itself; it can be a string key or token that helps the ACL identify the client.
The ACL implementation here is rudimentary; more interesting ones could read the ACL from a file or use method and client annotations as rules.
If you don't want to define an interface for the API class, consider a tool like Javassist to proxy a class directly.
Consider other popular Aspect-Oriented Programming (AOP) solutions as well.
You should create one superclass with all the methods and then provide client-specific implementations in corresponding subclasses extending that superclass.
If some methods have a common implementation for all clients, leave their implementations in the superclass.
It seems like you are a bit confused about the purpose of classes and interfaces. As far as I know, an interface is a contract defining which functionality a piece of software provides. This is from the official Java tutorial:
There are a number of situations in software engineering when it is important for disparate groups of programmers to agree to a "contract" that spells out how their software interacts. Each group should be able to write their code without any knowledge of how the other group's code is written. Generally speaking, interfaces are such contracts.
Then you can write a Class which implements this Interface/contract, that is, provides the code that actually perform what was specified. The List interface and the ArrayList class are both an example of this.
Interfaces and classes have access modifiers, but they aren't designed to specify permissions for specific clients. They specify what is visible to other pieces of software depending on the location where it is defined: class, package, subclass, world. For example, a private method can be accessed only inside the class where it is defined.
From official Java tutorial again:
Access level modifiers determine whether other classes can use a particular field or invoke a particular method. There are two levels of access control:
At the top level—public, or package-private (no explicit modifier).
At the member level—public, private, protected, or package-private (no explicit modifier).
Maybe you want something more powerful, like an Access Control List (ACL).
Your question is a little unclear, leading to different possible answers. I'll try to cover some of the possible areas:
Object encapsulation
If your goal is to provide interfaces to different clients that expose only certain functionality or a specific view, there are several solutions. Which one matches best depends on the purpose of your class:
Refactoring
The question somehow suggests that your class is responsible for different tasks. That might be an indicator that you could split it into distinct classes that provide the different interfaces.
Original
class AllInOne {
A m1() {}
B m2() {}
C m3() {}
}
client1.useClass(allInOneInstance);
client2.useClass(allInOneInstance);
client3.useClass(allInOneInstance);
Derived
class One {
A m1() {}
}
class Two {
B m2() {}
}
class Three {
C m3() {}
}
client1.useClass(oneInstance);
client2.useClass(twoInstance);
client3.useClass(threeInstance);
Interfaces
If you choose to keep the class together (there might be good reasons for it), you could have the class implement interfaces that model the view required by different clients. By passing instances of the appropriate interface to the clients they will not see the full class interface:
Example
class AllInOne implements I1, I2, I3 {
...
}
interface I1 {
A m1();
}
But be aware that clients will still be able to cast to the full class like ((AllInOne) i1Instance).m2().
Inheritance
This was already outlined in other answers, so I'll skip it here. I don't think it is a good solution, as it might easily break in a lot of scenarios.
Delegation
If casting is a risk to you, you can create classes that only offer the desired interface and delegate to the actual implementation:
Example
class Delegate1 {
private AllInOne allInOne;
public A m1() {
return allInOne.m1();
}
}
Implementing this can be done in various ways depending on your environment: explicit classes, dynamic proxies, code generation, ...
Framework
If you are using an Application Framework like Spring you might be able to use functionality from this Framework.
Aspects
AOP allows you to intercept method calls and therefore apply access control logic there.
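For instance, with Spring AOP a call can be rejected before it reaches the target. This is only a sketch: the pointcut expression and class names below are assumptions, and the aspect still has to be registered with the framework.

import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;

@Aspect
public class AccessControlAspect {

    // runs before AllInOne.m2(); throwing here prevents the target method from executing
    @Before("execution(* com.example.AllInOne.m2(..))")
    public void denyM2() {
        throw new SecurityException("m2 is not allowed for this client");
    }
}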
Security
Please note that all of the above solutions will not give you actual security. Using casts, reflection or other techniques will still allow clients to obtain access to the full functionality.
If you require stronger access limitations, there are techniques that I will just briefly outline, as they depend on your environment and are more complex.
Class Loader
Using different class loaders, you can make sure that parts of your code have no access to class definitions outside their scope (used e.g. in Tomcat to isolate different deployments).
SecurityManager
Java offers the possibility to implement your own SecurityManager, which offers a way to add an extra level of access checking.
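A bare-bones sketch of that idea (note that the SecurityManager API is deprecated for removal in recent Java releases, so treat this only as an illustration):

import java.security.Permission;

public class DenyingSecurityManager extends SecurityManager {

    @Override
    public void checkPermission(Permission perm) {
        // inspect the permission (and, if needed, the current call stack)
        // and throw a SecurityException for anything that should be forbidden
    }
}

// installed once at startup:
// System.setSecurityManager(new DenyingSecurityManager());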
Custom-built security
Of course you can add your own access-checking logic, but I don't think this will be a viable solution for in-JVM method access.
I hear that in Java I can achieve polymorphism through injection at runtime. Can someone please show a simple example of how that is done? I searched online but I can't find anything; maybe I am searching for the wrong thing. I know about polymorphism through interfaces and extension, such as
class MyClass extends Parent implements Naming
In such a case I am achieving polymorphism twice: MyClass is at once of type Parent and of type Naming. But I don't get how injection works. The idea is that I would not be using the @Override annotation during injection. I hope the question is clear. Thanks.
So the end result here, per my understanding, is to change the behavior of a method through injection instead of by overriding it (@Override) during development.
I know about polymorphism through interfaces and extension, such as
class MyClass extends Parent implements Naming
This is known as inheritance, not polymorphism. MyClass is a Parent and MyClass is also a Naming. That being said, inheritance allows you to achieve polymorphism.
Consider a class other than MyClass that also implements Naming:
class SomeOtherClass implements Naming {
@Override
public void someMethodDefinedInTheInterface() {
}
}
Now consider a method that takes a Naming argument somewhere in your code base:
public void doSomething(Naming naming) {
naming.someMethodDefinedInTheInterface();
}
The doSomething method can be passed an instance of any class that implements Naming. So both of the following calls are valid:
doSomething(new MyClass());//1
doSomething(new SomeOtherClass());//2
Observe how you can call doSomething with different parameters. At runtime, the first call will call someMethodDefinedInTheInterface from MyClass and the second call will call someMethodDefinedInTheInterface from SomeOtherClass. This is known as runtime polymorphism, which can be achieved through inheritance.
But I don't get how injection works. The idea is that I would not be using the @Override annotation during injection
That's true in the broader sense. To inject something into a class, the class should ideally favor composition over inheritance. See this answer, which does a good job of explaining the reasons for favoring composition over inheritance.
To extend the above example from my answer, let's modify the doSomething method as follows:
public class ClassHasANaming {
private Naming naming;
public ClassHasANaming(Naming naming) {
this.naming = naming;
}
public void doSomething() {
naming.someMethodDefinedInTheInterface();
}
}
Observe how ClassHasANaming now has-a Naming dependency that can be injected from the outside world:
ClassHasANaming callMyClass = new ClassHasANaming(new MyClass());
callMyClass.doSomething();
If you use the Factory pattern, you can actually choose which subclass gets instantiated at runtime.
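For example, a small (hypothetical) factory could pick the Naming implementation from a configuration value:

public class NamingFactory {

    // 'kind' could come from a config file, a system property, user input, ...
    public static Naming create(String kind) {
        switch (kind) {
            case "mine":
                return new MyClass();
            case "other":
                return new SomeOtherClass();
            default:
                throw new IllegalArgumentException("Unknown kind: " + kind);
        }
    }
}

The caller then wires it up with something like new ClassHasANaming(NamingFactory.create("other")).doSomething();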
Do you think we could have done what we did above using inheritance?
public class ClassIsANaming implements Naming {
public void doSomething() {
someMethodDefinedInTheInterface();
}
@Override
public void someMethodDefinedInTheInterface() {
//....
}
}
The answer is No. ClassIsANaming is bound to a single implementation of the someMethodDefinedInTheInterface method at compile time itself.
Take a contrived example: you have a class Store that stores things:
class Store {
    private List<Object> l;

    void store(Object o) {
        l.add(o);
    }

    void setStoreProvider(List<Object> l) {
        this.l = l;
    }
}
You can inject the actual List used as the backing storage via setStoreProvider; it could be a linked list, an array-backed list, whatever.
Hence, depending on the injected type, your Store class would have the characteristics of that type (with regard to memory usage, speed, etc.).
This is a kind of polymorphism without the class implementing an interface.
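A quick sketch of that injection in use:

Store arrayBackedStore = new Store();
arrayBackedStore.setStoreProvider(new ArrayList<>()); // array-backed storage
arrayBackedStore.store("first item");

Store linkedStore = new Store();
linkedStore.setStoreProvider(new LinkedList<>());     // linked-list-backed storage
linkedStore.store("first item");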
I have a generated object that I want to:
Preserve the existing functionality of, without injecting it into the constructor and rewriting every method to call injectedObject.sameMethod().
Add additional functionality to, without modifying the generated object.
For example:
public class GeneratedObject {
public String getThis() { ... }
public String getThat() { ... }
}
public interface ObjectWrapper {
String doThisWithThat();
}
public class ObjectWrapperImpl extends GeneratedObject implements ObjectWrapper {
public String doThisWithThat() { ... }
}
However, downcasting is not allowed, so what is the proper implementation without rewriting a bunch of redundant code just to wrap the object?
I think the decorator pattern may help you: "The decorator pattern can be used to extend (decorate) the functionality of a certain object at run-time, independently of other instances of the same class."
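A brief sketch of the decorator applied to the example above (the concatenation in doThisWithThat() is just a placeholder; note that the delegation still has to be written out by hand, which is the main cost of this approach):

public class GeneratedObjectDecorator implements ObjectWrapper {

    private final GeneratedObject delegate;

    public GeneratedObjectDecorator(GeneratedObject delegate) {
        this.delegate = delegate;
    }

    // existing functionality is exposed through delegation
    public String getThis() { return delegate.getThis(); }
    public String getThat() { return delegate.getThat(); }

    // the added behaviour, built on top of the delegate
    @Override
    public String doThisWithThat() {
        return delegate.getThis() + delegate.getThat();
    }
}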
Have you tried AspectJ? http://www.eclipse.org/aspectj/doc/next/progguide/semantics-declare.html It's a bit complicated, but so is your request.
If you can extract an interface from GeneratedObject, then it would be possible to do this using a dynamic proxy. You would make a proxy which implemented the extracted interface and ObjectWrapper, with an invocation handler which passed all calls to methods in the GeneratedObject interface through to the delegate, and sent the doThisWithThat() calls elsewhere.
Proxies aren't pretty, but the ugliness is at least well-localised.
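A rough sketch of that approach, assuming the interface extracted from GeneratedObject is called Generated and declares getThis() and getThat() with matching signatures (all names here are placeholders):

import java.lang.reflect.Proxy;

interface Generated {
    String getThis();
    String getThat();
}

// the proxy implements both the extracted interface and ObjectWrapper
interface Combined extends Generated, ObjectWrapper { }

class GeneratedProxyFactory {

    static Combined wrap(GeneratedObject target) {
        return (Combined) Proxy.newProxyInstance(
                Combined.class.getClassLoader(),
                new Class<?>[] { Combined.class },
                (proxy, method, args) -> {
                    // the added method is handled here...
                    if (method.getName().equals("doThisWithThat")) {
                        return target.getThis() + target.getThat();
                    }
                    // ...everything else is forwarded to the generated object by name,
                    // so GeneratedObject does not even have to implement Generated
                    return GeneratedObject.class
                            .getMethod(method.getName(), method.getParameterTypes())
                            .invoke(target, args);
                });
    }
}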
I have an init method that is used and overridden throughout an extensive hierarchy. Each init call, however, extends the work that the previous one did. So naturally, I would:
@Override public void init() {
super.init();
}
And naturally this would ensure that everything is called and instantiated. What I'm wondering is: can I create a way to ensure that the super method was called? If all of the inits are not called, there is a breakdown in the object, so I want to throw an exception or an error if somebody forgets to call super.
TYFT ~Aedon
Rather than trying to do that -- I don't think it's achievable btw! -- how about a different approach:
abstract class Base {
public final void baseFunction() {
...
overridenFunction(); //call the function in your base class
...
}
public abstract void overridenFunction();
}
...
class Child extends Base {
public void overridenFunction() {...};
}
...
Base object = new Child();
object.baseFunction(); //this now calls your base class function and the overridenFunction in the child class!
Would that work for you?
Here's one way to raise an exception if a derived class fails to call up to the superclass:
public class Base {
private boolean called;
public Base() { // doesn't have to be the c'tor; works elsewhere as well
called = false;
init();
if (!called) {
// throw an exception
}
}
protected void init() {
called = true;
// other stuff
}
}
Android actually accomplishes this in the Activity class. I'm not sure how or whether they had to build support into the runtime for it, but I'd check out the open source code for the Activity class implementation. Specifically, in any of the lifecycle methods, you have to call the corresponding super class method before you do anything otherwise it throws SuperNotCalledException.
For instance, in onCreate(), the first thing you have to do is call super.onCreate().
I frequently like to use this solution. It won't throw a runtime error, but it will flag an error at build time (via Lint):
@CallSuper
public void init() {
// do stuff
}
This is part of the Android support annotations.
Make the class at the top of the inheritance tree set a flag on initialization. Then a class at the bottom of the inheritance tree can check for that flag to see whether the whole tree has been traversed. I would document that every child of Base should include the following init code:
super.init()
if (!_baseIsInitialized) {
// throw exception or do w/e you wish
}
where base uses
_baseIsInitialized = true;
The other way around, forcing your children to call super.init(), is a lot tougher and would most likely involve ugly hacks.
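Putting those fragments together, a minimal sketch (with a three-level hierarchy, since that is where the check actually catches something) could look like this:

class Base {
    protected boolean _baseIsInitialized = false;

    public void init() {
        _baseIsInitialized = true;
        // base initialisation ...
    }
}

class Middle extends Base {
    @Override
    public void init() {
        // if this call to super.init() were forgotten, the flag would stay false
        super.init();
        // middle initialisation ...
    }
}

class Child extends Middle {
    @Override
    public void init() {
        super.init();
        if (!_baseIsInitialized) {
            throw new IllegalStateException("some class in the hierarchy skipped super.init()");
        }
        // child initialisation ...
    }
}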
I don't know of any way to do this with a method.
However, note that this is exactly how constructors work. Every constructor must, directly or indirectly, call one of its superclass's constructors. This is statically guaranteed.
I note that you are writing an init method. Could you refactor so that your code uses constructors rather than init methods? That would give you this behaviour right out of the gate. Some people (eg me) prefer constructors to init methods anyway, partly for just this reason.
Note that using constructors rather than init methods might not mean using them on the class you're currently looking at - there might be a refactoring which moves the state needing initialisation out into a parallel class hierarchy which can use proper constructors.
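For comparison, a tiny sketch of the constructor-based variant, where the compiler enforces the chain:

class Base {
    Base() {
        // base initialisation
    }
}

class Child extends Base {
    Child() {
        // super() is inserted implicitly here; there is no way to skip
        // the superclass constructor, which is exactly the guarantee you want
        // child initialisation
    }
}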
Nowadays you can annotate your method with @CallSuper. Lint will then check that any override of that method calls super. Here's an example:
@CallSuper
protected void onAfterAttached(Activity activity) {
if (activity instanceof ActivityMain) {
mainActivity = (ActivityMain) activity;
}
}
In the example above, any methods in descendant classes that override onAfterAttached but do not call super will make Lint raise an error.
In Java, there are three levels of access:
Public - Open to the world
Private - Open only to the class
Protected - Open only to the class and its subclasses (inheritance).
So why does the java compiler allow this to happen?
TestBlah.java:
public class TestBlah {
public static void main(String[] args) {
Blah a = new Blah("Blah");
Bloo b = new Bloo("Bloo");
System.out.println(a.getMessage());
System.out.println(b.getMessage()); //Works
System.out.println(a.testing);
System.out.println(b.testing); //Works
}
}
Blah.java:
public class Blah {
protected String message;
public Blah(String msg) {
this.message = msg;
}
protected String getMessage(){
return(this.message);
}
}
Bloo.java:
public class Bloo extends Blah {
public Bloo(String testing) {
super(testing);
}
}
Actually it should be:
Open only to classes in the same package, the class itself, and its subclasses (inheritance).
That's why.
Because protected means subclasses or other classes in the same package.
And there's actually a fourth "default" level of access, when the modifier is omitted, which provides access to other classes in the same package.
So protected is between default and public access.
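A small sketch of the difference (the package and class names are made up for illustration):

package one;

public class Parent {
    protected void forSubclassesAndPackage() { }
    void forPackageOnly() { } // "default" / package-private
}

// in another source file:
package two;

import one.Parent;

public class OutsideChild extends Parent {
    void demo() {
        forSubclassesAndPackage();  // OK: protected is visible to subclasses
        // forPackageOnly();        // would not compile: different package, not protected
    }
}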
To be more specific, you're expecting protected to work as it does in C++.
However, in Java, it has a different meaning. In Java, a protected method is available to the class (obviously), all the other classes in the same package and any subclasses of this class. Classes in other packages will not have access unless they subclass this original class.
See this similar question for more specific information on inheritance markers.
Personally, I almost never use protected. I develop applications rather than frameworks so I'm much more likely to define public methods, private data and, quite often, mark my whole class as final.
There are actually four levels of access: "public", "protected", "private" & default, also known as package-private or package-protected. Default limits accessibility to the package. Default is quite useful and I use it frequently.
You're able to call b.getMessage() because b is of type Bloo, which extends Blah, and getMessage() is protected. Protected, as you mentioned, allows subclasses to access the method.
You've got the following errors, though:
Calling super() with no arguments in the Bloo constructor is an error. The compiler can't find the no-parameter Blah constructor because you defined one with a String parameter.
Calling new Blah() in TestBlah main method is an error for the same reason as above.
Referring to a.testing and b.testing is an error because you didn't define the variable testing for any class.