I was just wondering if you can limit the number of interface files in a project. A couple of projects I work with have dozens of interfaces with nothing special in them, and I was wondering: can you write all the interfaces in a single file, and have each class point to the interface it needs? I.e.:
// interface file
interface InterfaceOne {
}
interface InterfaceTwo {
}
// foo file
public class foo implements InterfaceTwo {
    public void foo() {
        // ...
    }
}
// foo1 file
public class foo1 implements InterfaceOne {
    public void foo() {
        // ...
    }
}
or something similar ?
Yes, you can fill a file with interfaces, but at most one of them can be public, and that one must match the file name; the rest are package-private, so they are only accessible from the package they are defined in. If that's OK with you, you can collect your interfaces into a single file.
Note that this might make your interface definitions harder to find.
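For illustration, a minimal sketch (the file name and interface names are made up): only the top-level type whose name matches the file may be public; everything else in the file is package-private.
// InterfaceOne.java -- one source file holding several interfaces.
// At most one top-level type may be public, and it must match the file name.
public interface InterfaceOne {
    void doOne();
}

// Package-private: visible only to classes in the same package.
interface InterfaceTwo {
    void doTwo();
}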
Related
Let's say I have one class with around 20 methods that provide different functionalities.
Now we have multiple clients using this class, but we want them to have restricted access.
For example:
Client 1 - Gets access to method1/m3/m5/m7/m9/m11
Client 2 - Gets access to method2/m4/m6/m8/m10/m12
Is there any way I can restrict this access?
One solution I thought of:
Create two new classes extending the parent class, override the methods that should not be accessible, and throw an exception from them.
But then, if a third client comes with a different requirement, we have to create a new subclass for them.
Is there any other way to do this?
Create two new classes extending the parent class, override the methods that should not be accessible, and throw an exception from them. But then, if a third client comes with a different requirement, we have to create a new subclass for them.
It is a bad solution because it breaks polymorphism and violates the Liskov Substitution Principle, and it will make your code less clear.
First, think about your class: are you sure it isn't overloaded with methods? Are you sure all of those methods relate to a single abstraction? Perhaps it makes sense to split them across different abstractions and classes.
If those methods do belong in this class, then you should expose different interfaces to different clients. For example, you can define one interface for each client:
interface InterfaceForClient1 {
public void m1();
public void m3();
public void m5();
public void m7();
public void m9();
public void m11();
}
interface InterfaceForClient2 {
public void m2();
public void m4();
public void m6();
public void m8();
public void m10();
public void m12();
}
And implement them in your class
class MyClass implements InterfaceForClient1, InterfaceForClient2 {
}
After that, clients must use those interfaces instead of the concrete class to implement their own logic.
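As a rough sketch (the Client1 class and its doWork() method are made up for illustration), a client that receives the object through the narrow interface type only sees its own methods at compile time:
// Sketch: the client is handed the narrow interface, not MyClass itself.
class Client1 {
    private final InterfaceForClient1 api;

    Client1(InterfaceForClient1 api) {
        this.api = api;
    }

    void doWork() {
        api.m1();     // visible through InterfaceForClient1
        // api.m2();  // would not compile: m2() belongs to InterfaceForClient2
    }
}

// Wiring it up: the same MyClass instance, seen through two different interfaces.
// InterfaceForClient1 view1 = new MyClass();
// InterfaceForClient2 view2 = new MyClass();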
You can create an Interface1 which defines methods only for Client1, and an Interface2 which defines methods only for Client2. Then, your class implements Interface1 and Interface2.
When you declare client1, you can type it as the interface: Interface1 client1.
With this approach, client1 can access only the methods of that interface.
I hope this will help you.
The other answers already present the idiomatic approach. Another idea is a dynamic proxy decorating the API with an access check.
In essence, you generate a proxy API that has additional checks on method calls to implement a form of Access Control.
Example Implementation:
package com.example;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
@FunctionalInterface
public interface ACL<P, Q> {
boolean allowed(P accessor, Q target, Method method, Object[] args);
class ACLException extends RuntimeException {
ACLException(String message) {
super(message);
}
}
@SuppressWarnings("unchecked")
default Q protect(P accessor, Q delegate, Class<Q> dType) {
if (!dType.isInterface()) {
throw new IllegalArgumentException("Delegate type must be an Interface type");
}
final InvocationHandler handler = (proxy, method, args) -> {
if (allowed(accessor, delegate, method, args)) {
try {
return method.invoke(delegate, args);
} catch (InvocationTargetException e) {
throw e.getCause();
}
} else {
throw new ACLException("Access denied as per ACL");
}
};
return (Q) Proxy.newProxyInstance(dType.getClassLoader(), new Class[]{dType}, handler);
}
}
Example Usage:
package com.example;
import java.lang.reflect.Method;
public class Main {
interface API {
void doAlpha(int arg);
void doBeta(String arg);
void doGamma(Object arg);
}
static class MyAPI implements API {
@Override
public void doAlpha(int arg) {
System.out.println("Alpha");
}
@Override
public void doBeta(String arg) {
System.out.println("Beta");
}
@Override
public void doGamma(Object arg) {
System.out.println("Gamma");
}
}
static class AlphaClient {
void use(API api) {
api.doAlpha(100);
api.doBeta("100");
api.doGamma(this);
}
}
public static class MyACL implements ACL<AlphaClient, API> {
@Override
public boolean allowed(AlphaClient accessor, API target, Method method, Object[] args) {
final String callerName = accessor.getClass().getName().toLowerCase();
final String methodName = method.getName().toLowerCase().replace("do", "");
return callerName.contains(methodName);
}
}
public static void main(String[] args) {
final MyACL acl = new MyACL();
final API api = new MyAPI();
final AlphaClient client = new AlphaClient();
final API guardedAPI = acl.protect(client, api, API.class);
client.use(guardedAPI);
}
}
Notes:
The accessor does not have to be the client object itself; it can be a string key or token that helps the ACL identify the client.
The ACL implementation here is rudimentary; more interesting ones could read the rules from a file, or derive them from annotations on methods and clients.
If you don't want to define an interface for the API class, consider a tool like Javassist to proxy a class directly.
Consider other popular Aspect-Oriented Programming solutions as well.
You should create one superclass with all the methods and then provide client-specific implementations in corresponding subclasses extending that superclass.
If some methods have a common implementation for all clients, leave their implementations in the superclass.
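A minimal sketch of that idea, with made-up names: the shared behaviour lives in the superclass and each client-specific subclass adds only what its client needs.
// Sketch: common implementation shared by all clients.
abstract class BaseService {
    public void commonMethod() {
        System.out.println("behaviour shared by every client");
    }
}

// Client-specific subclasses expose only the extra operations their client needs.
class Client1Service extends BaseService {
    public void method1() { /* client 1 specific behaviour */ }
    public void method3() { /* client 1 specific behaviour */ }
}

class Client2Service extends BaseService {
    public void method2() { /* client 2 specific behaviour */ }
    public void method4() { /* client 2 specific behaviour */ }
}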
It seems like you are a bit confused about the purpose of classes and interfaces. As far as I know, an interface is a contract defining which functionality a piece of software provides. This is from the official Java tutorial:
There are a number of situations in software engineering when it is important for disparate groups of programmers to agree to a "contract" that spells out how their software interacts. Each group should be able to write their code without any knowledge of how the other group's code is written. Generally speaking, interfaces are such contracts.
Then you can write a class which implements this interface/contract, that is, provides the code that actually performs what was specified. The List interface and the ArrayList class are an example of this.
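For instance, code can be written against the List contract and keep working regardless of the concrete implementation behind it:
import java.util.ArrayList;
import java.util.List;

public class ContractExample {
    public static void main(String[] args) {
        // Declared against the interface (the contract)...
        List<String> names = new ArrayList<>();  // ...implemented by ArrayList.
        names.add("Alice");
        names.add("Bob");
        System.out.println(names.size());        // prints 2
    }
}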
Interfaces and classes have access modifiers, but they aren't designed to specify permissions for specific clients. They specify what is visible to other pieces of software depending on where it is declared: class, package, subclass, or world. For example, a private method can be accessed only inside the class where it is defined.
From official Java tutorial again:
Access level modifiers determine whether other classes can use a particular field or invoke a particular method. There are two levels of access control:
At the top level—public, or package-private (no explicit modifier).
At the member level—public, private, protected, or package-private (no explicit modifier).
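A small illustration of those levels (class, package, and member names are made up):
package com.example.accounts;

public class Account {               // public: usable from any package
    private double balance;          // private: visible only inside Account

    double peekBalance() {           // package-private: visible within com.example.accounts
        return balance;
    }

    protected void audit() {         // protected: package plus subclasses
        // ...
    }

    public void deposit(double amount) {  // public member: visible wherever Account is
        balance += amount;
    }
}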
Maybe you want something more powerful, like an Access Control List (ACL).
Your question is a little unclear, leading to different possible answers. I'll try to cover some of the possible areas:
Object encapsulation
If your goal is to provide interfaces to different clients that expose only certain functionality or a specific view, there are several solutions. Which one fits best depends on the purpose of your class:
Refactoring
The question somewhat suggests that your class is responsible for different tasks. That might be an indicator that you could split it into distinct classes that provide the different interfaces.
Original
class AllInOne {
A m1() {}
B m2() {}
C m3() {}
}
client1.useClass(allInOneInstance);
client2.useClass(allInOneInstance);
client3.useClass(allInOneInstance);
Derived
class One {
A m1() {}
}
class Two {
B m2() {}
}
class Three {
C m3() {}
}
client1.useClass(oneInstance);
client2.useClass(twoInstance);
client3.useClass(threeInstance);
Interfaces
If you choose to keep the class together (there might be good reasons for it), you could have the class implement interfaces that model the view required by different clients. By passing instances of the appropriate interface to the clients they will not see the full class interface:
Example
class AllInOne implements I1, I2, I3 {
...
}
interface I1 {
A m1();
}
But be aware that clients will still be able to cast to the full class like ((AllInOne) i1Instance).m2().
Inheritance
This was already outlined in other answers, so I'll skip it here. I don't think it is a good solution, as it can easily break in a lot of scenarios.
Delegation
If casting is a risk to you, you can create classes that only offer the desired interface and delegate to the actual implementation:
Example
class Delegate1 {
private AllInOne allInOne;
public A m1() {
return allInOne.m1();
}
}
Implementing this can be done in various ways and depends on your environment, e.g. explicit classes, dynamic proxies, code generation, ...
Framework
If you are using an application framework like Spring, you might be able to use functionality from that framework.
Aspects
AOP allows you to intercept method calls and therefore apply some access control logic there.
Security
Please note that none of the above solutions will give you actual security. Using casts, reflection or other techniques will still allow clients to obtain access to the full functionality.
If you require stronger access limitations there are techniques that I will just briefly outline as they might depend on your environment and are more complex.
Class Loader
Using different class loaders you can make sure that parts of your code have no access to class definitions outside their scope (used e.g. in Tomcat to isolate different deployments).
SecurityManager
Java lets you implement your own SecurityManager, which offers a way to add an extra level of access checking.
Custom build Security
Of course you can add your own access-checking logic. Yet I don't think this would be a viable solution for in-JVM method access.
I am working on a GWT project with JDK 7. It has two entry points (two clients) that are located in separate packages of the project. The clients share some code that is located in a /common package, which is universal and accessible to both by having the following line in their respective XML build files:
<source path='ui/common' />
Both clients have their own specific implementations of the Callback class, which serve their running environments and perform various actions in case of failure or success. I have the following abstract class that implements the AsyncCallback interface and then gets extended by the respective client classes.
public abstract class AbstractCallback<T> implements AsyncCallback<T> {
public void handleSuccess( T result ) {}
...
}
Here are the client's classes:
public class Client1Callback<T> extends AbstractCallback<T> {...}
and
public class Client2Callback<T> extends AbstractCallback<T> {...}
In the common package, which also contains these callback classes, I am working on implementing the service layer that serves both clients. The clients use the same back-end services, they just handle the results differently. Based on the type of the client, I want to build a corresponding instance of an AbstractCallback child without duplicating the anonymous class creation for each call. I am going to have many declarations that look like the following:
AsyncCallback<MyVO> nextCallback = isClient1 ?
new Client1Callback<MyVO>("ABC") {
public void handleSuccess(MyVO result) {
doThatSameAction(result);
}
}
:
new Client2Callback<MyVO>("DEF") {
public void handleSuccess(MyVO result) {
doThatSameAction(result);
}
};
That will result in very verbose code.
The intent (in pseudo-code) is to have the below instead:
AsyncCallback<MyVO> nextCallback = new CallbackTypeResolver.ACallback<MyVO>(clientType, "ABC"){
public void handleSuccess(MyVO result) {
doThatSameAction(result);
}
};
I was playing with the factory pattern to get the right child instance, but quickly realized that I am not able to override handleSuccess() method after the instance is created.
I think the solution may come from one of the two sources:
A different GWT way of dealing with custom Callback implementations; let's call it an alternative existing solution.
Java generics/type-juggling magic.
I may be missing something obvious, and would appreciate any advice.
I've read some articles here and on Oracle's site about type erasure for generics, so I understand that my question may have no direct answer.
Refactor out the handleSuccess behavior into its own class.
The handleSuccess behavior is a separate concern from what else is going on in the AsyncCallback classes; therefore, separate it out into a more useful form. See Why should I prefer composition over inheritance?
Essentially, by doing this refactoring, you are transforming an overridden method into injected behavior that you have more control over. Specifically, you would have instead:
public interface SuccessHandler<T> {
public void handleSuccess(T result);
}
Your callback would look something like this:
public abstract class AbstractCallback<T> implements AsyncCallback<T> {
private final SuccessHandler<T> handler; // Inject this in the constructor
// etc.
// not abstract anymore
public void handleSuccess( T result ) {
handler.handleSuccess(result);
}
}
Then your pseudocode callback creation statement would be something like:
AsyncCallback<MyVO> nextCallback = new CallbackTypeResolver.ACallback<MyVO>(
clientType,
"ABC",
new SuccessHandler<MyVO>() {
public void handleSuccess(MyVO result) {
doThatSameMethod(result);
}
});
The implementations of SuccessHandler don't have to be anonymous; they can be top-level classes or even inner classes, depending on your needs. There's a lot more you can do once you're using this injection-based approach, including creating these handlers with automatically injected dependencies using Gin and Guice Providers. (Gin is a project that integrates Guice, a dependency injection framework, with GWT.)
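For example, a sketch of a reusable top-level handler (the SharedActions utility holding doThatSameAction is an assumption about where that method lives):
// A reusable, non-anonymous SuccessHandler: it can be unit tested on its own
// and injected (e.g. via Gin/Guice) instead of being re-created at every call site.
public class MyVoSuccessHandler implements SuccessHandler<MyVO> {
    @Override
    public void handleSuccess(MyVO result) {
        SharedActions.doThatSameAction(result);  // hypothetical static utility
    }
}
With a top-level handler like this, the callback creation from the pseudocode above shrinks to passing new MyVoSuccessHandler() as the third argument.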
So, this is going to sound like an odd question, but I need to know how to get the Class object of a child object in an inheritance situation, for Java reflection.
The situation is this: I'm writing CraftBukkit plugins, Java plugins that work with CraftBukkit, a server-side-only plugin API for Minecraft. At the moment, I'm making a plugin that is supposed to be like a "parent" to all of the other plugins I'm writing. It contains a large amount of extra useful objects and utilities.
One class in the plugin is a class called myPlugin that I want all the main classes of all the other plugins to extend. (I know class names shouldn't start with a lowercase letter, but the lowercase "my" is a trademark of my CraftBukkit plugins.)
One of the things that I want this myPlugin class to do is be able to handle commands to load plugins' data. Therefore, when the command is called, I want the plugin to basically call all of the methods in the plugin's main class that start with "load".
I know how to search through all the Methods in the Class for ones starting with "load" if I can just retrieve the Class, but if I try to call getClass() in the myPlugin class, I believe it's just going to return the myPlugin Class instead of the Class that extends myPlugin.
So, how can I retrieve the Class that extends myPlugin instead of the myPlugin class itself?
EDIT:
I feel that I should mention that I've considered creating an abstract method called mainClass() that returns the Class, and making each plugin implement it to return its main class, but this is an ugly fix that I would prefer to avoid.
No, it's the subclass name that is returned. Consider:
public class ClassOne {
}
public class ClassTwo extends ClassOne {
}
public class Test {
public void someMethod(ClassOne one) {
System.out.println(one.getClass().getName());
}
}
public class Main {
public static void main(String[] args) {
ClassTwo t = new ClassTwo();
Test tst = new Test();
tst.someMethod(t);
}
}
The output is: ClassTwo
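Applied to the plugin scenario from the question above, here is a rough sketch (the loadAll() name and the error handling are made up) of how the parent myPlugin class could find and invoke the subclass's load* methods via reflection:
import java.lang.reflect.Method;

public abstract class myPlugin {
    // getClass() returns the runtime class, i.e. the plugin's main class that
    // extends myPlugin, and getDeclaredMethods() lists the methods declared
    // directly in that subclass.
    public void loadAll() {
        for (Method method : getClass().getDeclaredMethods()) {
            if (method.getName().startsWith("load")
                    && method.getParameterTypes().length == 0) {
                try {
                    method.setAccessible(true);
                    method.invoke(this);
                } catch (ReflectiveOperationException e) {
                    e.printStackTrace();  // illustrative error handling only
                }
            }
        }
    }
}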
I'm designing UI Tests for a web application with Selenium in JUnit. I have a base test class with something like this from which I inherit my tests:
public class BaseTest {
protected TestSteps test;
protected Assertions assertion;
// set everything up...
}
and the tests then only look like this:
public class TestX extends BaseTest {
@Test
public void testFeature1() {
test.clickSomething().enterSomething(); // method chaining
assertion.assertSomething();
//...
}
}
The problem I'm having: There are different modules in the web app, and Assertions/TestSteps methods that only apply to one module clutter the interface of the Assertions/TestSteps class for the other modules.
Thus I tried to split the Assertions/TestSteps up.
The problem is, the method chaining returns instances of TestSteps. Of course, when I have Module1TestSteps with a method doSomethingSpecific(), I would expect test.clickSomething().doSomethingSpecific() to work, but it does not, because clickSomething() returns a TestSteps instance, not a Module1TestSteps instance.
I "solved" this by making an AbstractTestSteps<T extends AbstractTestSteps<T> class (which contains all the base TestSteps methods) protected abstract T getThis();.
I then extend this class like this:
public class BaseTestSteps extends AbstractTestSteps<BaseTestSteps> {
// Constructors
protected BaseTestSteps getThis() {
return this;
}
// that's it, the "base methods" are all inherited from AbstractTestSteps...
}
for the base TestSteps and
public class Module1TestSteps extends AbstractTestSteps<Module1TestSteps> {
// same constructors...
protected Module1TestSteps getThis() {
return this;
}
public Module1TestSteps doSomeThingSpecific() {
// do something
return getThis();
}
}
for my specialized TestSteps. It works for now, but I don't like it for the following reasons:
All the general methods are in the AbstractTestSteps class, but they are used through an instance of BaseTestSteps
What if I have a submodule of Module1? I can't inherit from Module1TestSteps, only from AbstractTestSteps.
I think it's not trivial to understand the relation of these classes when one of my colleagues tries to add a new TestSteps class.
How can this be made better?
Use the Page Object pattern. That is, create an API for each page so that your tests describe navigating and interacting with pages in a way that describes the user's experience.
It has a few benefits that address your concerns:
It uses composition, not inheritance
It is easy to understand and explain to people maintaining the tests because the tests read like a description of somebody using the application
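A minimal sketch of the pattern (the page names, element locators, and the isLoggedIn() check are all assumptions): each page wraps its own interactions, and navigation methods return the next page object, so the flow is built from composition rather than inheritance.
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// Page object for the login page: its API describes what a user can do there.
class LoginPage {
    private final WebDriver driver;

    LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    HomePage loginAs(String user, String password) {
        driver.findElement(By.id("username")).sendKeys(user);
        driver.findElement(By.id("password")).sendKeys(password);
        driver.findElement(By.id("submit")).click();
        return new HomePage(driver);  // navigation returns the next page object
    }
}

// Page object for the page shown after login.
class HomePage {
    private final WebDriver driver;

    HomePage(WebDriver driver) {
        this.driver = driver;
    }

    boolean isLoggedIn() {
        return driver.findElement(By.id("logout")).isDisplayed();
    }
}
A test then reads like the user's flow, e.g. new LoginPage(driver).loginAs("alice", "secret").isLoggedIn(), and module-specific steps live in that module's page objects instead of cluttering one shared TestSteps class.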
I've got the following classes set up:
public abstract class Process<T,S> {
...
}
public abstract class Resource<T, S extends Process<T, S>> {
protected S processer;
...
}
public class ProcessImpl<EventType1, EventType2> {
...
}
public class ResourceImpl extends Resource<EventType1, ProcessImpl> {
processer = new ProcessImpl();
...
}
Everything is fine until I get to the ResourceImpl. I'm told that ProcessImpl is not a valid substitute for the bounded parameter <S extends Process<T,S>> of the type Resource<T,S>.
I've tried various ways of getting around this and keep hitting a wall.
Does anyone have any ideas?
public class ProcessImpl<EventType1, EventType2> {
...
}
Because ProcessImpl doesn't extend Process. Your ProcessImpl is not derived from Process, which is what you're declaring that parameter should be.
You might want to do something like this:
public abstract class Process<T, S> {
}
public abstract class Resource<T, S extends Process<T, S>> {
S processor;
}
public class ProcessImpl extends Process<EventType1, ProcessImpl> {
}
public class ResourceImpl extends Resource<EventType1, ProcessImpl> {
}
If you constrain the S parameter of Resource to be a processor, you also need to declare it properly on the ProcessImpl class. I don't know what EventType2 is, but it should implement the Process interface; I assumed you actually meant ProcessImpl.
I can't see a way to edit the original version, or comment on given answers without a better rep.
This code will live in the web layer; EventType2 is defined in the persistence layer and accessible only in the core layer, which sits below this level.
So unfortunately, without introducing tight coupling, which I would like to avoid, I don't have access to EventType2.
If you don't want your code to depend on some existing package which contains Process, you could also introduce a new interface package, depending on nothing, at the very bottom of the class hierarchy (if you are able to change the constraints of the inheritance, of course).
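A rough sketch of that idea (all names are made up, and it assumes you can change the type parameters): put a tiny, dependency-free interface at the bottom of the hierarchy and declare the web-layer classes against it instead of against the persistence-layer EventType2.
// events-api package: depends on nothing else.
public interface Event {
}

// Core/persistence layer: EventType2 can implement the shared interface
// without the web layer ever depending on it.
// public class EventType2 implements Event { ... }

// Web layer: parameterized against the shared Event interface, so it never
// has to see the persistence layer's EventType2 directly.
public class ProcessImpl extends Process<Event, ProcessImpl> {
}

public class ResourceImpl extends Resource<Event, ProcessImpl> {
}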