I am building an application with two layers: a web layer and a business layer.
Inside the business layer I have some public methods that can be called from within the business layer or from the web layer.
I only want some of these methods (the safe ones) to be callable from the web layer.
I was wondering if I can create an annotation in my business layer, for example @Public, which means I can call this method from the web layer, and @Private, which means I should not use this method from the web layer.
And when I try to call a @Private method from the web layer (in Eclipse), could it give me a warning?
Also: is there a way to automatically list all of these private and public methods?
AFAIK you can't make Eclipse use annotations to determine whether you can access a method from a certain file. For this to be possible, Eclipse would have to know whether the file is part of the web layer or the business layer.
In order to list all methods having a certain annotation, you could use reflection at runtime. In Eclipse there might be filters, but I don't know of any annotation-based filters.
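As a minimal sketch of the reflection approach (assuming a hypothetical @Public marker annotation; an @Private variant would work the same way):

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;

// Hypothetical marker annotation; it needs RUNTIME retention to be visible via reflection.
@Retention(RetentionPolicy.RUNTIME)
@interface Public {}

class AnnotationLister {
    // Prints every method of the given class that carries the @Public annotation.
    static void listPublicMethods(Class<?> clazz) {
        for (Method m : clazz.getDeclaredMethods()) {
            if (m.isAnnotationPresent(Public.class)) {
                System.out.println(m);
            }
        }
    }
}

Calling AnnotationLister.listPublicMethods(SomeService.class) would then print the "safe" methods of that service.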
Maybe you should choose another approach; I'll briefly describe how we do it:
We have two interfaces that our services may implement:
one public interface that contains all the methods the web layer may see
one private interface that contains all the methods internal to the business logic
We split those interfaces into two Eclipse projects - one public API project and one implementation project that contains the services and the internal API - and only allow the web layer access to the public API project.
Since our services (EJB 3.0) need an interface, we have to add the internal one if we have internal methods. With other technology (like EJB 3.1), however, you might provide just the public interface.
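As a minimal sketch of that split (all names invented for illustration):

// In the public API project - the only project the web layer may reference.
public interface OrderService {
    void placeOrder(long customerId, long productId);
}

// In the implementation project - internal API that the web layer never sees.
interface OrderServiceInternal extends OrderService {
    void recalculateAllOrders(); // internal/unsafe operation
}

The web layer project then declares a dependency on the public API project only, so the internal interface is simply not on its compile path.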
Another approach might be to split the interfaces into two packages, e.g. myproject.api.pub (public is a keyword) and myproject.api.internal, and then use package-based filters in Eclipse.
The first thing that comes to my mind is that this would only be needed in a badly designed two-layer app.
You can use access control to make sure that the web GUI can only access the safe methods, and keep the other methods internal to the business layer.
It is probably possible to simply make those "public" methods that you don't want the web interface to use private; that way you can still use them from the public, safe methods in the business logic.
Without knowing how your project is set up, giving concrete examples is kind of impossible.
But say you have:
com.somecompany.gui > contains all web-gui stuff
com.somecompany.logic > contains business logic.
In the logic package, you create classes that have public methods to be used from the gui package, and private or - if needed by other logic components - package-private (default visibility) methods that cannot be accessed from the gui package. That way you can separate your logic from the interface without needing the annotation you want to create.
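A small sketch of that layout (hypothetical names):

package com.somecompany.logic;

public class AccountService {
    // Callable from com.somecompany.gui (and anywhere else).
    public void transfer(long from, long to, long amount) {
        validate(from, to, amount);
        // ...
    }

    // Package-private: visible only to other classes in com.somecompany.logic,
    // so the gui package cannot call it.
    void validate(long from, long to, long amount) {
        // ...
    }
}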
In general I'd say: yes, it could work. At least to produce compiler warnings.
We have the @Override annotation for methods. This annotation is used for a similar reason: to verify at compile time that certain conditions are met - in this case, that the annotated method overrides a method from a superclass or implements an interface method or an abstract method. If the verifier finds out that this is not the case, the compiler will produce a compile-time error.
So it should be possible here too. We could think of annotations like
@Layer("servicelayer") // class annotation
@Private(layer = "servicelayer") // method annotation
And now we could verify at compile time that annotated methods are only called from classes that have the same layer annotation. If the condition is not met, the compiler could produce a warning (in other words, the compiler could detect if we accidentally call an internal service layer method from a web layer class).
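Declaring such annotations would be straightforward; the actual check would have to live in an annotation processor or an IDE plug-in. A sketch of what the declarations could look like:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Marks which layer a class belongs to.
@Retention(RetentionPolicy.CLASS) // visible to compile-time tools, not needed at runtime
@Target(ElementType.TYPE)
@interface Layer {
    String value();
}

// Marks a method as internal to the given layer.
@Retention(RetentionPolicy.CLASS)
@Target(ElementType.METHOD)
@interface Private {
    String layer();
}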
Assume the following code:
import java.util.ArrayList;
import java.util.List;

public class Main {
    public static final List<Object> configuration = new ArrayList<>();

    public static void main(String[] args) {
        System.out.println(configuration);
    }
}
I now want to be able to provide "self-configuring" classes. This means they should be able to simply provide something like a static block that will get called automatically, like this:
public class Custom {
    static {
        Main.configuration.add(Custom.class);
    }
}
If you execute this code, the configuration list is empty (because of the way static blocks are executed). The class is "reachable", but not "loaded". You could add the following to the Main class before the System.out call:
Class.forName("Custom");
and the list would now contain the Custom class object (since the class was not initialized yet, this call initializes it). But because the control should be inverted (Custom should know Main, and not the other way around), this is not a usable approach. Custom should never be called directly from Main or from any class that is associated with Main.
What would be possible, though, is the following: you could add an annotation to the class, collect all classes with said annotation using something like the ClassGraph framework, and call Class.forName on each of them.
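With ClassGraph, that could look roughly like the following sketch (assuming a runtime-retained annotation com.example.AutoLoad; check the ClassGraph documentation for the exact API of your version):

import io.github.classgraph.ClassGraph;
import io.github.classgraph.ClassInfo;
import io.github.classgraph.ScanResult;

public class Bootstrap {
    public static void main(String[] args) throws ClassNotFoundException {
        try (ScanResult result = new ClassGraph().enableAnnotationInfo().scan()) {
            for (ClassInfo info : result.getClassesWithAnnotation("com.example.AutoLoad")) {
                // Class.forName initializes the class by default,
                // which runs its static initializer blocks.
                Class.forName(info.getName());
            }
        }
    }
}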
TL;DR
Is there a way to automatically call the static block without the need to analyze all classes and without knowing the concrete "self-configuring" class? Ideal would be an approach that, upon starting the application, automatically initializes classes if they are annotated with a certain annotation. I thought about custom ClassLoaders, but from what I understand, they are lazy and therefore not usable for this approach.
The background of this is that I want to incorporate it into an annotation processor which creates "self-configuring code".
Example (warning: design-talk and in depth)
To make this a little less abstract, imagine the following:
You develop a framework. Let's call it Foo. Foo has the classes GlobalRepository and Repository. GlobalRepository follows the Singleton design pattern (only static methods). The Repository as well as the GlobalRepository have the methods "void add(Object)" and "<T> T get(Class<T>)". If you call get on the Repository and the class cannot be found, it calls GlobalRepository.get(Class).
For convenience, you want to provide an annotation called @Add. This annotation can be placed on type declarations (i.e. classes). An annotation processor creates some configurations which automatically add all annotated classes to the GlobalRepository and therefore reduce boilerplate code. This should only happen once (in all cases). Therefore the generated code has a static initializer in which the GlobalRepository is filled, just like you would do with the local repository. Because your configurations have names that are designed to be as unique as possible and for some reason even contain the date of creation (this is a bit arbitrary, but stay with me), they are nearly impossible to guess.
So you also add an annotation to those configurations, called @AutoLoad. You require the using developer to call GlobalRepository.load(), after which all classes are analyzed and all classes with this annotation are initialized, and therefore their respective static blocks are called.
This is not a very scalable approach. The bigger the application, the bigger the realm to search, the longer it takes, and so on. A better approach would be that, upon starting the application, all such classes are automatically initialized, for example through a ClassLoader. Something like this is what I am looking for.
First, don't hold Class objects in your registry. These Class objects would require you to use reflection to perform the actual operations, like instantiating them or invoking certain methods, whose signatures you need to know beforehand anyway.
The standard approach is to use an interface to describe the operations which the dynamic components ought to support. Then have a registry of implementation instances. These still allow you to defer expensive operations if you separate them into an operational interface and a factory interface.
E.g. a CharsetProvider is not the actual Charset implementation, but provides access to them on demand. So the existing registry of providers does not consume much memory as long as only common charsets are used.
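For illustration, such an operational interface might look like this (MyService is a made-up name, used consistently in the snippets below):

public interface MyService {
    // Cheap metadata, available without doing any expensive work.
    String name();

    // The actual (possibly expensive) operation, invoked on demand.
    void configure();
}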
Once you have defined such a service interface, you may use the standard service discovery mechanism. In the case of jar files or directories containing class files, you create a subdirectory META-INF/services/ containing a file whose name is the qualified name of the interface and whose content is the qualified names of the implementation classes. Each class path entry may have such a resource.
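For example, assuming a made-up interface com.example.MyService implemented by com.example.CustomService, a jar would contain the resource META-INF/services/com.example.MyService with the content:

# comment lines (starting with #) are allowed in this file
com.example.CustomService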
In the case of Java modules, you can declare such an implementation even more robustly, using
provides service.interface.name with actual.implementation.class;
statements in your module declaration.
Then, the main class may look up the implementations, knowing only the interface, as
List<MyService> registered = new ArrayList<>();
for (Iterator<MyService> i = ServiceLoader.load(MyService.class).iterator(); i.hasNext(); ) {
    registered.add(i.next());
}
or, starting with Java 9
List<MyService> registered = ServiceLoader.load(MyService.class)
    .stream()
    .map(ServiceLoader.Provider::get)
    .collect(Collectors.toList());
The class documentation of ServiceLoader contains a lot more details about this architecture. When you go through the package list of the standard API looking for packages whose names end with .spi, you get an idea how often this mechanism is already used within the JDK itself. The interfaces are not required to be in packages with such names, though; e.g. implementations of java.sql.Driver are also searched for through this mechanism.
Starting with Java 9, you could even use this to do something like “finding the Class objects for all classes having a certain annotation”, e.g.
List<Class<?>> configuration = ServiceLoader.load(MyService.class)
    .stream()
    .map(ServiceLoader.Provider::type)
    .filter(c -> c.isAnnotationPresent(MyAnnotation.class))
    .collect(Collectors.toList());
but since this still requires the classes to implement a service interface and to be declared as implementations of the interface, it's preferable to use the methods declared by the interface for interacting with the modules.
What I know so far:
annotations were added in Java 5
annotations can be used on methods, classes, and properties
annotations can have RUNTIME, CLASS, or SOURCE retention (I don't know how to work with CLASS and SOURCE, or what their features are)
annotations with RUNTIME retention can be processed while the Java program is running
And I want to implement an annotation with the following features:
ensure a class is only allowed to create a single instance
ensure a method is only allowed to be accessed by certain other methods
it is like friend in C++
it is the same as public and private, but more dynamic, like
@MyAnnotation(allowMethods={xxx.doSomething})
public void getValue(){}
the getValue method can only be accessed by the instance itself and by the xxx.doSomething() method
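For reference, declaring an annotation like that so it is readable at runtime might look like the sketch below. Note that annotation members cannot reference methods directly, so the allowed methods would have to be given as strings:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Must be RUNTIME so the check can happen while the program runs.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface MyAnnotation {
    // Annotation members cannot refer to methods, so "friends" are named as strings.
    String[] allowMethods();
}

// Usage would then be:
// @MyAnnotation(allowMethods = {"xxx.doSomething"})
// public void getValue() {}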
What should I do and learn next?
And where can I learn about these things?
I think you might be misunderstanding something there. Annotations are descriptive elements, not parts of your program. You can write as many annotations as you want, and people who use your code will still be able to ignore them.
That said, an annotation that enforces a policy (as yours does) can actually be implemented, either at compile time or at runtime, but you need an external mechanism to help you. I can think of three:
Annotation processing lets you interact with the compiler and process annotations by generating code or by emitting compiler errors. Unfortunately, I don't think it will work for your case, as you want to protect your annotated type from instantiation, and that means the call site doesn't actually have an annotation. Annotation processing only gives you access to the actual code pieces that have annotations, not to those that refer to them.
AspectJ allows you to write policy enforcement aspects and emit compiler errors based on static pointcuts. The problem here is that static pointcuts have very limited semantics, so while you could forbid the instantiation of your class altogether, or from certain packages, you could not limit your class to a single instantiation.
The third way, and probably the only sane way, is to use a container like Spring or Guice and configure your class as a singleton. As long as you only retrieve your class from the container, it will never create a second instance.
Finally: if you want to limit the number of instantiations of your class, you can always use a classic Singleton pattern approach.
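For example, the initialization-on-demand holder idiom, a common way to write such a singleton (class name invented):

public class Registry {
    private Registry() {} // no instantiation from outside

    // The holder class is initialized on first access only,
    // which makes this lazy and thread-safe without synchronization.
    private static class Holder {
        static final Registry INSTANCE = new Registry();
    }

    public static Registry getInstance() {
        return Holder.INSTANCE;
    }
}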
This concept is unclear to me.
I have worked with several frameworks, for instance Spring.
To implement a feature, we always implement some interfaces provided by the framework.
For instance, if I have to create a custom scope in Spring, my class implements the org.springframework.beans.factory.config.Scope interface, which has some predefined low-level functionality that helps in defining a custom scope for a bean.
Whereas in Java, I read that an interface is just a declaration which classes can implement and define their own functionality for. The methods of an interface have no predefined functionality.
interface Car {
    int topSpeed();
    void accelerate();
    void decelerate();
}
The methods here don't have any functionality. They are just declared.
Can anyone explain this discrepancy in the concept? How does the framework put predefined functionality into interface methods?
It doesn't put predefined functionality in the methods. But when you implement some interface (say I) in your class C, the framework knows that your object (of type C) implements the I interface, and can call certain methods (defined in I) on your object, thus sending signals/events to your object. These events can be e.g. 'app initialized', 'app started', 'app stopped', 'app destroyed'. So usually this is what frameworks do. I am talking about frameworks in general here, not Spring in particular.
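A toy illustration of that inversion of control (all names invented; this is not any particular framework's API):

// Defined by the framework; implementors receive lifecycle events.
interface LifecycleListener {
    void onStart();
    void onStop();
}

// Inside the framework: it only knows the interface, never your concrete class.
class MiniFramework {
    void run(LifecycleListener listener) {
        listener.onStart(); // the framework decides when to call you
        // ... framework does its work ...
        listener.onStop();
    }
}

// Your code: you supply the behavior, the framework supplies the calls.
class MyApp implements LifecycleListener {
    public void onStart() { System.out.println("app started"); }
    public void onStop()  { System.out.println("app stopped"); }
}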
There is no conceptual difference, actually. Each Java interface method has a very clear responsibility (usually described in its Javadoc). Take Collection.size() as an example. It is defined to return the number of elements in your collection. Having it return a random number is possible, but will cause no end of grief for any caller. Interface methods have defined semantics ;)
As I mentioned in the comments, to some extent, implementing interfaces provided by the framework has been replaced by the use of stereotype annotations. For example, you might annotate a class as @Entity to let Spring know to manage it and weave a transaction manager into it.
I have a suspicion that what you are seeing relates to how Spring and other frameworks make use of dynamic proxies to inject functionality.
For an example of Spring injecting functionality: if you annotate a method as @Transactional, the framework will attempt to create a dynamic proxy which wraps access to your method. I.e. when something calls your "save()" method, the call actually goes to the proxy, which might do things like starting a transaction before passing the call to your implementation, and then closing the transaction after your method has completed.
Spring is able to do this at runtime if you have defined an interface, because it is able to create a dynamic proxy which implements the same interface as your class. So where you have:
@Autowired
MyServiceInterface myService;
That is injected with SpringDynamicProxyToMyServiceImpl instead of MyServiceImpl.
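To make the mechanism less magical, here is a stripped-down sketch of such a wrapping proxy using the plain JDK Proxy API (not actual Spring code; the transaction handling is simulated with println calls):

import java.lang.reflect.Proxy;

interface MyServiceInterface {
    void save();
}

class ProxyDemo {
    static MyServiceInterface wrap(MyServiceInterface target) {
        return (MyServiceInterface) Proxy.newProxyInstance(
                target.getClass().getClassLoader(),
                new Class<?>[] { MyServiceInterface.class },
                (proxy, method, args) -> {
                    System.out.println("begin transaction"); // work done around the call
                    try {
                        return method.invoke(target, args); // delegate to the real object
                    } finally {
                        System.out.println("commit/rollback transaction");
                    }
                });
    }
}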
However, with Spring you may have noticed that you don't always need to use interfaces. This is because it also permits AspectJ compile-time weaving. Using AspectJ actually injects the functionality into your class at compile time, so that you are no longer forced to use an interface and an implementation. You can read more about Spring AOP here:
http://docs.spring.io/spring/docs/4.0.0.RELEASE/spring-framework-reference/htmlsingle/#aop-introduction-defn
I should point out that although Spring does generally enable you to avoid defining both interface and implementation for your beans, it's not such a good idea to take advantage of it. Using separate interface and implementation is very valuable for unit testing, as it enables you to do things like inject a stub which implements an interface, instead of a full-blown implementation of something which needs database access and other rich functionality.
I've worked on (or seen) a few Spring-Hibernate web application projects having as many interfaces as there are actual service and DAO classes.
I always thought of these two as the main reasons for having these single-implementation interfaces:
Spring can wire the actual implementations as dependencies into a given class (loose coupling):
public class Person {
    @Autowired
    private Address address;
    @Autowired
    private AccountDetail accountDetail;

    public Person(Address address, AccountDetail accountDetail) {
        // constructor
    }
}
While unit testing, I can create mock classes and test a class in isolation:
Address mockedAddress = mock(Address.class);
AccountDetail mockedAccountDetail = mock(AccountDetail.class);
Person underTestPerson = new Person(mockedAddress, mockedAccountDetail);
// unit test follows
But, of late, I realized that:
Spring can wire concrete implementation classes as dependencies:
public class Person {
    @Autowired
    private AddressImpl address;
    @Autowired
    private AccountDetailImpl accountDetail;

    public Person(AddressImpl address, AccountDetailImpl accountDetail) {
        // constructor
    }
}
Mock frameworks like EasyMock can mock concrete classes as well:
AddressImpl mockedAddress = mock(AddressImpl.class);
AccountDetailImpl mockedAccountDetail = mock(AccountDetailImpl.class);
Person underTestPerson = new Person(mockedAddress, mockedAccountDetail);
// unit test follows
Also, as per this discussion, I think the summary is that within a single app, interfaces are mostly overused, probably out of convention or habit. They generally make the most sense in cases where we are interfacing with another application, for example slf4j, which is used by many apps around the world. Within a single app, a class is almost as much an abstraction as an interface is.
So, my question is: why do we still need interfaces and then have single implementations like *ServiceImpl and *DaoImpl classes, unnecessarily increasing our code base size? Is there some issue in mocking concrete classes that I'm not aware of?
Whenever I've discussed this with my team-mates, the only answer I get is that implementing service and DAO classes based on interfaces is THE DESIGN everybody follows - they mention Spring best practices, OOP, DDD, etc. But I still don't get a pragmatic reason behind having so many interfaces within an isolated application.
There are more advantages to interfaces, for instance in proxying: if your class implements an interface, JDK dynamic proxies will be used by default for AOP. If you use the implementations directly, you'll be forced to use CGLIB proxies by setting proxy-target-class=true. These require bytecode manipulation, unlike JDK proxies.
Read here for more on this.
Read another discussion at "what reasons are there to use interfaces (Java EE or Spring and JPA)" for more info.
It's a very controversial subject. In brief, there is no pragmatic reason, at least not for you, the developer.
In the EJB 2 world, the Home and Remote interfaces were a must, exactly for the reason @AravindA mentions: proxies. Security, remoting, pooling, etc. could all be wrapped in a proxy, and the requested services provided strictly within the standard library (as in DynamicProxy).
Now that we have Javassist and cglib, Spring (or Hibernate, or EJB 3 if you prefer) is perfectly capable of instrumenting your classes however the framework developer likes. The problem is, what they do is a very annoying thing: they usually require you to add a no-parameter constructor. ("Wait, I had parameters here?" "Never mind, just add the constructor.")
So interfaces are here to maintain your sanity. Still, it's strange: a no-argument constructor for a class with a proper constructor is not something that makes sense to me, right? It turns out (I should've read the spec, I know) that Spring creates a functional equivalent of an interface out of your class: an instance with no (or ignored) state and all the methods overridden. So you have a "real" instance and a "fake interface" one, and what the fake interface one does is serve all the security/transactional/remoting magic for you. Nice, but hard to understand, and it looks like a bug if you haven't taken it apart.
Moreover, if you happen to implement an interface in your class, (at least some versions of) Spring suddenly decides you were going to proxy this interface only, and the application just doesn't work for no apparent reason.
Thus, so far, the reasons are safety and sanity. There are reasons why it is considered good practice, but from your post, I see you already read all of those. The most important reason I can see today is the WTH/minute metric, especially for newcomers to your project.
I have the following situation:
Three concrete service classes implement a service interface: one is for persistence, another deals with notifications, and the third deals with adding points to specific actions (gamification). The interface has roughly the following structure:
public interface IPhotoService {
    void upload();
    Photo get(Long id);
    void like(Long id);
    // etc...
}
I did not want to mix the three types of logic into one service (or even worse, into the controller class), because I want to be able to change them (or shut them off) without any problems. The problem comes when I have to inject a concrete service into the controller to use. Usually, I create a fourth class, named roughly ApplicationNamePhotoService, which implements the same interface and works as a wrapper (mediator) between the other three services: it gets input from the controller and calls each service correspondingly. It is a working approach, though one which creates a lot of boilerplate code.
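For reference, such a wrapper could look like the following sketch (AppPhotoService and the constructor wiring are invented names; the three delegates are assumed to implement IPhotoService, as the question describes):

public class AppPhotoService implements IPhotoService {
    private final IPhotoService persistence;
    private final IPhotoService notifications;
    private final IPhotoService gamification;

    public AppPhotoService(IPhotoService persistence,
                           IPhotoService notifications,
                           IPhotoService gamification) {
        this.persistence = persistence;
        this.notifications = notifications;
        this.gamification = gamification;
    }

    @Override
    public void upload() {
        persistence.upload(); // only persistence cares about the raw upload
    }

    @Override
    public Photo get(Long id) {
        return persistence.get(id);
    }

    @Override
    public void like(Long id) {
        persistence.like(id);   // store the like
        notifications.like(id); // notify the photo owner
        gamification.like(id);  // award points for the action
    }
}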
Is this the right approach? Currently, I am not aware of a better one, although I would highly appreciate knowing whether it is possible to declare the execution sequence declaratively (in the context) and to inject the controller with an on-the-fly generated wrapper instance.
Also, it would be nice to cache some stuff between the three services. For example, all of them are using DAOs, i.e. sometimes making the same calls to the DB over and over again. If all the logic were in one place, that could have been avoided, but now... I know that it is possible to enable some request- or session-based caching. Can you suggest some example code? BTW, I am using Hibernate for the persistence part. Is there already some caching provided (probably, if the calls reside in the same transaction or something - with that one I am totally lost)?
The service layer should consist of classes whose methods are units of work, with actions that belong in the same transaction. It sounds like you are splitting up service classes when they could be in the same class and method. You can inject service classes into one another when required, too, rather than creating another "mediator".
It is perfectly acceptable to "mix the three types of logic"; in fact, it is preferable if they form an expected use case/unit of work.
For caching, I would look at Ehcache, which is, I believe, well integrated with Hibernate.