Why always have single implementation interfaces in service and dao layers? - java

I've worked on (and seen) a few Spring/Hibernate web application projects that have as many interfaces as there are actual service and DAO classes.
I always thought these were the two main reasons for having these single-implementation interfaces:
Spring can wire the actual implementation as a dependency into a given class (loose coupling):
public class Person {
    @Autowired
    private Address address;
    @Autowired
    private AccountDetail accountDetail;
    public Person(Address address, AccountDetail accountDetail) {
        // constructor
        this.address = address;
        this.accountDetail = accountDetail;
    }
}
While unit testing, I can create mock classes and test a class in isolation.
Address mockedAddress = mock(Address.class);
AccountDetail mockedAccountDetail = mock(AccountDetail.class);
Person underTestPerson = new Person(mockedAddress, mockedAccountDetail);
// unit test follows
But, of late, I realized that:
Spring can wire concrete implementation classes as dependencies:
public class Person {
    @Autowired
    private AddressImpl address;
    @Autowired
    private AccountDetailImpl accountDetail;
    public Person(AddressImpl address, AccountDetailImpl accountDetail) {
        // constructor
        this.address = address;
        this.accountDetail = accountDetail;
    }
}
Mock frameworks like EasyMock can mock concrete classes as well:
AddressImpl mockedAddress = mock(AddressImpl.class);
AccountDetailImpl mockedAccountDetail = mock(AccountDetailImpl.class);
Person underTestPerson = new Person(mockedAddress, mockedAccountDetail);
// unit test follows
Also, as per this discussion, I think the summary is that within a single app, interfaces are mostly overused, probably out of convention or habit. They generally make the most sense when we are interfacing with another application, for example SLF4J, which is used by many apps around the world. Within a single app, a class is almost as much an abstraction as an interface is.
So, my question is: why do we still need interfaces and then single implementations like *ServiceImpl and *DaoImpl classes, unnecessarily increasing our code base size? Is there some issue with mocking concrete classes that I'm not aware of?
Whenever I've discussed this with my team-mates, the only answer I get is that implementing service and DAO classes based on interfaces is THE DESIGN everybody follows - they mention Spring best practices, OOP, DDD, etc. But I still don't get a pragmatic reason for having so many interfaces within an isolated application.

There are more advantages to interfaces, for example proxying. If your class implements an interface, JDK dynamic proxies will be used by default for AOP. If you use the implementations directly, you'll be forced to use CGLIB proxies by setting proxy-target-class="true". These require bytecode manipulation, unlike JDK proxies.
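To make that concrete, here is a minimal, hand-rolled sketch of what a JDK dynamic proxy does behind the scenes. All class names are illustrative, and this is plain java.lang.reflect code, not Spring's actual AOP machinery:

import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

interface AccountService {
    void transfer(String from, String to, long amount);
}

class AccountServiceImpl implements AccountService {
    public void transfer(String from, String to, long amount) {
        System.out.println("transferring " + amount + " from " + from + " to " + to);
    }
}

public class JdkProxyDemo {
    public static void main(String[] args) {
        AccountService target = new AccountServiceImpl();

        // A JDK dynamic proxy can only implement interfaces, which is why
        // interface-based beans get this lightweight proxy by default.
        AccountService proxy = (AccountService) Proxy.newProxyInstance(
                AccountService.class.getClassLoader(),
                new Class<?>[] { AccountService.class },
                (Object p, Method method, Object[] methodArgs) -> {
                    System.out.println("before " + method.getName()); // e.g. open a transaction
                    Object result = method.invoke(target, methodArgs);
                    System.out.println("after " + method.getName());  // e.g. commit
                    return result;
                });

        proxy.transfer("A", "B", 100L);
    }
}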
Read here for more on this.
Read another discussion at "What reasons are there to use interfaces (Java EE or Spring and JPA)?" for more info.

It's a very controversial subject. In brief, there is no pragmatic reason, at least not for you, the developer.
In the EJB2 world, the Home and Remote interfaces were a must, and existed exactly for the reason @AravindA mentions: proxies. Security, remoting, pooling, etc. could all be wrapped in a proxy, and provide the requested services strictly within the standard library (as in DynamicProxy).
Now that we have Javassist and CGLIB, Spring (or Hibernate, or EJB3 if you prefer) is perfectly capable of instrumenting your classes however the framework developer likes. The problem is that what they do is very annoying: they usually require you to add a no-parameter constructor. Wait, I had parameters here? Never mind, just add the constructor.
So interfaces are here to maintain your sanity. Still, it's strange: a no-argument constructor for a class with a proper constructor doesn't make sense, right? It turns out (I should have read the spec, I know) that Spring creates a functional equivalent of an interface out of your class: an instance with no (or ignored) state and all of its methods overridden. So you have a "real" instance and a "fake interface" one, and what the fake one does is serve up all the security/transactional/remoting magic for you. Nice, but hard to understand, and it looks like a bug if you haven't taken it apart.
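A rough, hand-written illustration of the idea described above. This is only a sketch of what such a subclass proxy amounts to, not what CGLIB actually generates, and the class names are made up:

class AddressImpl {
    private final String city;

    AddressImpl(String city) {
        this.city = city;
    }

    String city() {
        return city;
    }
}

// The generated proxy extends your class, ignores its own inherited state and
// delegates every call to the "real" instance. That is why the proxy side
// effectively behaves like an interface, and why frameworks keep asking for
// constructors they can call without your real arguments.
class AddressImplProxy extends AddressImpl {
    private final AddressImpl target;

    AddressImplProxy(AddressImpl target) {
        super(null);          // the proxy's own fields are never used
        this.target = target;
    }

    @Override
    String city() {
        // security/transactional/remoting "magic" would be woven in here
        return target.city();
    }
}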
Moreover, if you happen to implement an interface in your class, (at least some versions of) Spring suddenly decides you meant to proxy that interface only, and the application just stops working for no apparent reason.
So, for now, the reason is safety and sanity. There are reasons why it is considered good practice, but from your post I can see you have already read all of those. The most important one I can see today is the WTH/minute metric, especially if we're talking about newcomers to your project.

Related

Implementing an interface from a framework vs simple java interface

This concept is unclear to me.
I have worked with several frameworks, for instance Spring.
To implement a feature, we always implement some interfaces provided by the framework.
For instance, if I have to create a custom scope in Spring, my class implements the org.springframework.beans.factory.config.Scope interface, which has some predefined low-level functionality that helps in defining a custom scope for a bean.
Whereas in Java, I read that an interface is just a declaration that classes can implement and define their own functionality for. The methods of an interface have no predefined functionality:
interface Car
{
    int topSpeed();
    void accelerate();
    void decelerate();
}
The methods here don't have any functionality. They are just declared.
Can anyone explain this discrepancy in the concept? How does the framework associate predefined functionality with interface methods?
It doesn't put predefined functionality in the methods. But when you implement some interface (say I) in your class C, the framework knows that your object (of type C) implements the I interface, and can call certain methods (defined in I) on your object, thus sending signals/events to it. These events can be, for example, 'app initialized', 'app started', 'app stopped', 'app destroyed'. This is usually what frameworks do.
I am talking about frameworks in general here, not Spring in particular.
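A toy illustration of that flow. All names here are made up; the point is simply that the framework only knows the interface, while your class supplies the behaviour:

import java.util.ArrayList;
import java.util.List;

interface LifecycleListener {
    void onStart();
    void onStop();
}

// Your class: the framework has no idea what it does internally.
class MyComponent implements LifecycleListener {
    public void onStart() { System.out.println("component started"); }
    public void onStop()  { System.out.println("component stopped"); }
}

// The "framework": it only sees the interface and fires events through it.
class ToyFramework {
    private final List<LifecycleListener> listeners = new ArrayList<>();

    void register(LifecycleListener listener) {
        listeners.add(listener);
    }

    void start() { listeners.forEach(LifecycleListener::onStart); }
    void stop()  { listeners.forEach(LifecycleListener::onStop); }
}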
There is no conceptual difference, actually. Each java interface method has a very clear responsibility (usually described in its javadoc). Take Collection.size() as an example. It is defined to return the number of elements in your collection. Having it return a random number is possible, but will cause no end of grief for any caller. Interface methods have defined semantics ;)
As I mentioned in the comments, to some extent, implementing interfaces provided by the framework has been replaced by the use of stereotype annotations. For example, you might annotate a class with @Entity to let Spring know to manage it and weave a transaction manager into it.
I have a suspicion that what you are seeing relates to how Spring and other frameworks make use of dynamic proxies to inject functionality.
For an example of Spring injecting functionality: if you annotate a method with @Transactional, the framework will attempt to create a dynamic proxy which wraps access to your method. That is, when something calls your "save()" method, the call actually goes to the proxy, which might do things like starting a transaction before passing the call to your implementation, and then closing the transaction after your method has completed.
Spring is able to do this at runtime if you have defined an interface, because it is able to create a dynamic proxy which implements the same interface as your class. So where you have:
@Autowired
MyServiceInterface myService;
That is injected with SpringDynamicProxyToMyServiceImpl instead of MyServiceImpl.
However, with Spring you may have noticed that you don't always need to use interfaces. This is because it also permits AspectJ compile-time weaving. Using AspectJ actually injects the functionality into your class at compile-time, so that you are no longer forced to use an interface and implementation. You can read more about Spring AOP here:
http://docs.spring.io/spring/docs/4.0.0.RELEASE/spring-framework-reference/htmlsingle/#aop-introduction-defn
I should point out that although Spring does generally enable you to avoid defining both interface and implementation for your beans, it's not such a good idea to take advantage of it. Using separate interface and implementation is very valuable for unit testing, as it enables you to do things like inject a stub which implements an interface, instead of a full-blown implementation of something which needs database access and other rich functionality.
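As a hedged sketch of that last point, here is what a hand-written stub behind an interface might look like in a unit test. EmployeeDao, StubEmployeeDao and EmployeeService are illustrative names, not anything from Spring:

import java.util.Arrays;
import java.util.List;

interface EmployeeDao {
    List<String> findAllNames();
}

// Stub used in tests: canned data, no database, no Spring context needed.
class StubEmployeeDao implements EmployeeDao {
    public List<String> findAllNames() {
        return Arrays.asList("Ada", "Linus");
    }
}

class EmployeeService {
    private final EmployeeDao dao;

    EmployeeService(EmployeeDao dao) {
        this.dao = dao;
    }

    int countEmployees() {
        return dao.findAllNames().size();
    }
}

// In a test you would simply do:
//   EmployeeService service = new EmployeeService(new StubEmployeeDao());
//   assertEquals(2, service.countEmployees());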

Advice wanted on a complex structure in java (DAO and Service Layer linking/coupling)

Introduction
I am trying to make a rather complex structure in Java with interfaces, abstract classes and generics. Since I have no experience with generics and only average experience with creating good OOP designs, this is proving quite a challenge.
I have a feeling that what I'm trying to do cannot actually be done, but that I could come close enough to it. I'll try to explain it as briefly as I can. I'll say straight away that this structure will represent my DAO and service layers for accessing the database. Making this question more abstract would only make it more difficult.
My DAO layer is completely fine as it is. There is a generic DAO interface, and for each entity there is a DAO interface that extends the generic one and fills in the generic types. Then there's an abstract class that is extended by each DAO implementation, which in turn implements the corresponding interface. A confusing read for most, probably, so here's the diagram showing the DAO for Products as an example:
Now for the service classes, I had a similar construction in mind. Most of the methods in a service class map to the DAO methods anyway. If you replace every "DAO" in the diagram above with "Service", you get the basis for my service layer. But there is one thing that I want to do, based on the following idea I have:
Every service class for an entity will access at least one DAO object, namely the DAO of the entity it is designed for.
Which is...
The question/problem
If I could make a proper OO design in which each service class has one instance variable for the DAO object of its respective entity, my service layer would be perfect, in my view. Advice on this is welcome, in case my design is not as good as it seemed.
I have implemented it like this:
Class AbstractService
public abstract class AbstractService<EntityDAO> {

    EntityDAO entityDAO;

    public AbstractService() {
        entityDAO = makeEntityDAO(); // compiler/IDE warning: overridable method call in constructor
    }

    abstract EntityDAO makeEntityDAO();
}
Class ProductServiceImpl
public class ProductServiceImpl extends AbstractService<ProductDAOImpl> {

    public ProductServiceImpl() {
        super();
    }

    @Override
    ProductDAOImpl makeEntityDAO() {
        return new ProductDAOImpl();
    }
}
The problem with this design is a compiler warning I don't like: there is an overridable method call in the constructor (see the comment). Now, it is designed to be overridable; in fact, I enforce it to make sure that each service class has a reference to the corresponding DAO. Is this the best I can do?
I have done my absolute best to include everything you might need, and only what you need, for this question. All I have to say now is that comments are welcome, and extensive answers even more so. Thanks for taking the time to read.
Additional resources on StackOverflow
Understanding Service and DAO layers
DAO and Service layers (JPA/Hibernate + Spring)
Just a little note first: in an application organized in layers, like Presentation / Service / DAO for example, you usually have the following rules:
Each layer knows only the layer immediately below.
It knows it only by its interfaces, and not by its implementation classes.
This provides easier testing, better code encapsulation, and a sharper definition of the different layers (through interfaces that are easily identified as the public API).
That said, there is a very common way to handle that kind of situation that allows the most flexibility: dependency injection. And Spring is the industry-standard implementation of dependency injection (and of a lot of other things).
The idea (in short) is that your service only knows that it needs an IEntityDAO, and that someone will inject an implementation of that interface into it before the service is actually used. That someone is called an IoC (Inversion of Control) container. It can be Spring, and what it does is usually described by an application configuration file and happens at application startup.
Important note: the concept is brilliant and powerful, yet dead simple. You can also use the Inversion of Control architectural pattern without a framework, with a very simple implementation consisting of a large static method that "assembles" your application parts, as sketched below. But in an industrial context it's better to have a framework which will also let you inject other things like database connections, web service stub clients, JMS queues, etc.
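A minimal sketch of that framework-free approach, a hand-rolled "composition root". All names here are illustrative:

public final class ApplicationAssembler {

    interface ProductDAO { /* data-access methods */ }

    static class JdbcProductDAO implements ProductDAO { }

    static class ProductService {
        private final ProductDAO productDAO;

        ProductService(ProductDAO productDAO) {
            this.productDAO = productDAO;
        }
    }

    // The whole object graph is wired in one place; swapping JdbcProductDAO
    // for a stub is a one-line change here, and nowhere else.
    public static ProductService assemble() {
        return new ProductService(new JdbcProductDAO());
    }
}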
Benefits:
You have an easy time mocking and testing, as the only things a class depends on are interfaces.
You have a single file, or a small set of XML files, that describes the whole structure of your application, which is really handy when your application grows.
It's a very widely adopted standard, well known by many Java developers.
Sample Java code:
public abstract class AbstractService<IEntityDAO> {

    private IEntityDAO entityDAO; // you don't know the concrete implementation; maybe it's a mock for testing purposes

    public AbstractService() {
    }

    protected IEntityDAO getEntityDAO() { // only subclasses need this method
        return entityDAO;
    }

    public void setEntityDAO(IEntityDAO dao) { // the IoC container will call this method
        this.entityDAO = dao;
    }
}
And in the Spring configuration file, you will have something like this:
<bean id="ProductDAO" class="com.company.dao.ProductDAO" />
[...]
<bean id="ProductService" class="com.company.service.ProductService">
<property name="entityDAO" ref="ProductDAO"/>
</bean>
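As a side note, a hedged alternative sketch: constructor injection avoids both the setter and the overridable-call-in-constructor warning from the question entirely. ProductDAO is assumed here to be the per-entity DAO interface from the question's DAO layer:

public abstract class AbstractService<D> {

    private final D entityDAO;

    protected AbstractService(D entityDAO) {
        this.entityDAO = entityDAO;
    }

    protected D getEntityDAO() {
        return entityDAO;
    }
}

public class ProductServiceImpl extends AbstractService<ProductDAO> {

    public ProductServiceImpl(ProductDAO productDAO) { // the container supplies the DAO
        super(productDAO);
    }
}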

Shortcut methods

My original question was quite incorrect: I have classes (not POJOs) that have shortcut methods for business-logic classes, to give the consumer of my API the ability to use it like this:
Connector connector = new ConnectorImpl();
Entity entity = new Entity(connector);
entity.createProperty("propertyName", propertyValue);
entity.close();
Instead of:
Connector connector = new ConnectorImpl();
Entity entity = new Entity();
connector.createEntityProperty(entity, "propertyName", propertyValue);
connector.closeEntity(entity);
Is it good practice to create such shortcut methods?
Old question
At the moment I am developing a small framework and have a pretty nice separation of the business logic into different classes (connectors, authentication tokens, etc.), but one thing still bothers me. I have methods that manipulate POJOs, like this:
public class BusinessLogicImpl implements BusinessLogic {

    public void closeEntity(Entity entity) {
        // Business Logic
    }
}
And POJO entities which also have a close method:
public class Entity {

    private BusinessLogic businessLogic; // wired in elsewhere

    public void close() {
        businessLogic.closeEntity(this);
    }
}
Is it good practice to provide two ways to do the same thing? Or better, just remove all "proxy" methods from POJOs for clarity sake?
You should remove the methods from the "POJOs"... They aren't really POJOs if you encapsulate functionality like this. The reason for this comes from SOA design principles, which basically say you want loose coupling between the different layers of your application.
If you are familiar with Inversion of Control containers, like Google Guice or the Spring Framework, this separation is a requirement. For instance, let's say you have a CreditCard POJO, a CreditCardProcessor service, and a DebugCreditCardProcessor service that doesn't actually charge the card money (for testing).
@Inject
private CardProcessor processor;
...
CreditCard card = new CreditCard(...params...);
processor.process(card);
In my example, I am relying on an IoC container to provide me with a CardProcessor. Whether this is the debug one, or the real one... I don't really care and neither does the CreditCard object. The one that is provided is decided by your application configuration.
If you had coupling between the processor and credit card where I could say card.process(), you would always have to pass in the processor in the card constructor. CreditCards can be used for other things besides processing however. Perhaps you just want to load a CreditCard from the database and get the expiration date... It shouldn't need a processor to do this simple operation.
You may argue: "The credit card could get the processor from a static factory." While true, singletons are widely regarded as an anti-pattern because they require keeping global state in your application.
Keeping your business logic separate from your data model is always a good thing to do to reduce the coupling required. Loose coupling makes testing easier, and it makes your code easier to read.
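For completeness, a minimal Guice-style sketch of the "application configuration" that decides which processor gets injected. CardProcessor, RealCardProcessor and DebugCreditCardProcessor are assumed from the example above and are purely illustrative:

import com.google.inject.AbstractModule;

public class PaymentModule extends AbstractModule {

    @Override
    protected void configure() {
        // Production wiring; a test configuration would bind
        // CardProcessor to DebugCreditCardProcessor instead.
        bind(CardProcessor.class).to(RealCardProcessor.class);
    }
}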
I do not see your case as "two methods", because the logic of the implementation is kept in businessLogic. It would be akin to asking whether it is a good idea for java.lang.System to have both a getProperties() method and a getProperty(String) method; the latter is not really a different method, just a shortcut to the former.
But, in general, no, it is not good practice. Mainly because:
a) if the way to do that thing changes in the future, you need to remember to touch both implementations.
b) when reading your code, other programmers will wonder whether the two methods exist because they do different things.
Also, it does not fit very well with assigning responsibilities for a given task to a specific class, which is one of the tenets of OOP.
Of course, every absolute rule may have a special case where some consideration (mainly performance) suggests breaking it. Think about whether you actually gain something by doing so, and document it heavily.

spring and interfaces

I read all over the place about how Spring encourages you to use interfaces in your code. I don't see it. There is no notion of an interface in your Spring XML configuration. What part of Spring actually encourages you to use interfaces (other than the docs)?
The Dependency Inversion Principle explains this well. In particular, figure 4.
A. High level modules should not depend on low level modules. Both should depend upon abstractions.
B. Abstractions should not depend upon details. Details should depend upon abstractions.
Translating the examples from the link above into Java:
public class Copy {

    private Keyboard keyboard = new Keyboard(); // concrete dependency
    private Printer printer = new Printer();    // concrete dependency

    public void copy() {
        for (int c = keyboard.read(); c != Keyboard.EOF; c = keyboard.read()) {
            printer.print(c);
        }
    }
}
Now with dependency inversion:
public class Copy {

    private Reader reader; // any dependency satisfying the Reader interface will work
    private Writer writer; // any dependency satisfying the Writer interface will work

    public Copy(Reader reader, Writer writer) {
        this.reader = reader;
        this.writer = writer;
    }

    public void copy() {
        for (int c = reader.read(); c != Reader.EOF; c = reader.read()) {
            writer.write(c);
        }
    }
}
Now Copy supports more than just copying from a keyboard to a printer.
It is capable of copying from any Reader to any Writer without requiring any modifications to its code.
And now with Spring:
<bean id="copy" class="Copy">
<constructor-arg ref="reader" />
<constructor-arg ref="writer" />
</bean>
<bean id="reader" class="KeyboardReader" />
<bean id="writer" class="PrinterWriter" />
or perhaps:
<bean id="reader" class="RemoteDeviceReader" />
<bean id="writer" class="DatabaseWriter" />
When you define an interface for your classes, it helps with dependency injection. Your Spring configuration files don't have anything about interfaces in them; you just put in the name of the class.
But if you want to inject another class that offers "equivalent" functionality, using an interface really helps.
For example, say you've got a class that analyzes a website's content, and you're injecting it with Spring. If the classes you're injecting it into know what the actual class is, then in order to swap it out you'll have to change a whole lot of code to use a different concrete class. But if you create an Analyzer interface, you can just as easily inject your original DefaultAnalyzer as a mocked-up DummyAnalyzer, or even another one that does essentially the same thing, like a PageByPageAnalyzer. To use one of those, you just change the class name you're injecting in your Spring config files, rather than going through your code changing classes around.
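A short sketch of that swap. The names are taken from the paragraph above and are all illustrative:

interface Analyzer {
    String analyze(String content);
}

class DefaultAnalyzer implements Analyzer {
    public String analyze(String content) {
        return "real analysis of: " + content;
    }
}

class DummyAnalyzer implements Analyzer {
    public String analyze(String content) {
        return "canned result"; // handy for tests
    }
}

// The consumer depends only on the interface, so switching analyzers is a
// config change, not a code change here.
class SiteAuditor {
    private final Analyzer analyzer;

    SiteAuditor(Analyzer analyzer) {
        this.analyzer = analyzer;
    }

    String audit(String page) {
        return analyzer.analyze(page);
    }
}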
It took me about a project and a half before I really started to see the usefulness. Like most things (in enterprise languages) that end up being useful, it seems like a pointless addition of work at first, until your project starts to grow and then you discover how much time you saved by doing a little bit more work up front.
Most of the answers here are some form of "you can easily swap out implementations", but what I think they fail to answer is the "why?" part. To that, I think the answer is almost definitively testability. Regardless of whether or not you use Spring or any other IoC framework, using dependency injection makes your code easier to test. In the case of, say, a Writer rather than a PrinterWriter, you can mock the Writer interface in a unit test and ensure that your code is calling it the way you expect. If you depend directly on the class implementation, your only option is to walk to the printer and check it, which isn't very automated. Furthermore, if you depend upon the result of a call to a class, not being able to mock it may prevent you from reaching all code paths in your test, thus (potentially) reducing their quality. Simply put, you should decouple object graph creation from application logic. Doing so makes your code easier to test.
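A hedged sketch of that kind of test, reusing the Copy/Reader/Writer example from the earlier answer. It assumes JUnit 5 and Mockito are on the classpath, and that Reader and Writer are the example's own interfaces (with an EOF constant), not java.io:

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;

class CopyTest {

    @Test
    void copiesEverythingFromReaderToWriter() {
        Reader reader = mock(Reader.class);
        Writer writer = mock(Writer.class);
        when(reader.read()).thenReturn(65, 66, Reader.EOF); // 'A', 'B', then end of input

        new Copy(reader, writer).copy();

        verify(writer).write(65);
        verify(writer).write(66);
    }
}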
No one has mentioned yet that on many occasions it won't be necessary to create an interface so that the implementing class can be switched quickly, simply because there will never be more than one implementing class.
When interfaces are created without need, classes get created in pairs (interface plus implementation), adding unnecessary boilerplate interfaces and creating potential dependency confusion because, in the XML configuration files, components will sometimes be referenced by their interface and sometimes by their implementation, with no consequences at runtime but with incoherent code conventions.
You probably want to try it for yourself to see this better; it may not be clear from the docs how Spring encourages interface use.
Here are a couple of examples:
Say you're writing a class that needs to read from a resource (e.g., a file) that may be referenced in several ways (e.g., on the classpath, by absolute file path, as a URL, etc.). You'd want to define an org.springframework.core.io.Resource (interface) property on your class. Then, in your Spring configuration file, you simply select the actual implementation class (e.g., org.springframework.core.io.ClassPathResource, org.springframework.core.io.FileSystemResource, org.springframework.core.io.UrlResource, etc.). Spring is basically functioning as an extremely generic factory.
If you want to take advantage of Spring's AOP integration (for adding transaction interceptors for instance), you'll pretty much need to define interfaces. You define the interception points in your Spring configuration file, and Spring generates a proxy for you, based on your interface.
These are examples I personally have experience with. I'm sure there are much more out there.
It's easy to generate proxies from interfaces.
If you look at any Spring app, you'll see service and persistence interfaces. Making that the Spring idiom certainly does encourage the use of interfaces. It doesn't get any more explicit than that.
Writing separate interfaces adds complexity and boilerplate code that's normally unnecessary. It also makes debugging harder because when you click a method call in your IDE, it shows the interface instead of the implementation. Unless you're swapping implementations at runtime, there's no need to go down that path.
Tools like Mockito make it very easy to test code using dependency injection without piling on interfaces.
Spring won't force you to use interfaces anywhere; it's just good practice. If you have a bean with some properties that are interfaces instead of concrete classes, then you can simply switch out some objects with mock-ups that implement the same interface, which is useful for certain test cases.
If you use, for example, the Hibernate support classes, you can define an interface for your DAO and then implement it separately. The advantage of having the interface is that you will be able to configure it using the Spring interceptors, which allows you to simplify your code: you won't have to write any code catching HibernateExceptions and closing the session in a finally block, and you won't have to define any transactions programmatically either; you just configure all that stuff declaratively with Spring.
When you're writing quick-and-dirty apps, you can implement a simple DAO using JDBC or some simple framework that you won't end up using in the final version; you will be able to easily switch those components out if they implement some common interfaces.
If you don't use interfaces you risk an autowiring failure:
Sometimes Spring creates a proxy class for a bean. This proxy class is not a child class of the service implementation, but it re-implements all of its interfaces.
Spring will try to autowire instances of this bean; however, the proxy class is incompatible with the bean's class, so declaring a field with the bean's class can lead to "unsafe field assignment" exceptions.
You cannot reasonably know when Spring is going to proxy a service (nor should you), so to protect yourself against those surprises, your best move is to declare an interface and use that interface when declaring autowired fields.
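A hedged illustration of that advice. All names are made up, and @Transactional is just one of several triggers that can cause Spring to proxy the bean:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

interface ReportService {
    String monthlyReport();
}

@Service
@Transactional // can cause Spring to wrap the bean in a JDK proxy
class ReportServiceImpl implements ReportService {
    public String monthlyReport() {
        return "report";
    }
}

@Component
class ReportController {

    // Safe: the proxy always implements the interface.
    @Autowired
    private ReportService reportService;

    // Risky: the JDK proxy is not a ReportServiceImpl, so a field declared
    // with the implementation type can fail to autowire:
    //
    // @Autowired
    // private ReportServiceImpl reportServiceImpl;
}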

Is there ever a case for 'new' when using dependency injection?

Does dependency injection mean that you don't ever need the 'new' keyword? Or is it reasonable to directly create simple leaf classes such as collections?
In the example below I inject the comparator, query and dao, but the SortedSet is directly instantiated:
public Iterable<Employee> getRecentHires()
{
SortedSet<Employee> entries = new TreeSet<Employee>(comparator);
entries.addAll(employeeDao.findAll(query));
return entries;
}
Just because Dependency Injection is a useful pattern doesn't mean that we use it for everything. Even when using DI, there will often be a need for new. Don't delete new just yet.
One way I typically decide whether or not to use dependency injection is whether or not I need to mock or stub out the collaborating class when writing a unit test for the class under test. For instance, in your example you (correctly) are injecting the DAO because if you write a unit test for your class, you probably don't want any data to actually be written to the database. Or perhaps a collaborating class writes files to the filesystem or is dependent on an external resource. Or the behavior is unpredictable or difficult to account for in a unit test. In those cases it's best to inject those dependencies.
For collaborating classes like TreeSet, I normally would not inject those because there is usually no need to mock out simple classes like these.
One final note: when a field cannot be injected for whatever reason, but I still would like to mock it out in a test, I have found the JUnit-addons PrivateAccessor class helpful for switching the class's private field to a mock object created by EasyMock (or jMock or whatever other mocking framework you prefer).
There is nothing wrong with using new like how it's shown in your code snippet.
Consider the case of wanting to append String snippets. Why would you want to ask the injector for a StringBuilder?
In another situation I've faced, I needed to have a thread running in accordance with the lifecycle of my container. In that case, I had to do a new Thread() because my injector was created after the callback method for container startup was called. And once the injector was ready, I hand-injected some managed classes into my Thread subclass.
Yes, of course.
Dependency injection is meant for situations where there could be several possible instantiation targets, of which the client may not be aware (or be capable of choosing) at compile time.
However, there are enough situations where you do know exactly what you want to instantiate, so there is no need for DI.
This is just like invoking functions in object-oriented languages: just because you can use dynamic binding doesn't mean that you can't use good old static dispatching (e.g., when you split your method into several private operations).
My thinking is that DI is awesome and great for wiring layers and also pieces of your code that need to be flexible to potential change. Sure, we can say everything can potentially need changing, but we all know that in practice some stuff just won't be touched.
So when DI is overkill, I use 'new' and just let it roll.
For example: wiring the Model to the View to the Controller layer is always done via DI. Any algorithms my app uses: DI; any pluggable reflective code: DI; the database layer: DI. But pretty much any other object used in my system is handled with a plain 'new'.
Hope this helps.
It is true that in today's framework-driven environment you instantiate objects less and less. For example, servlets are instantiated by the servlet container, beans in Spring are instantiated by Spring, etc.
Still, when using a persistence layer, you will instantiate your persisted objects before they have been persisted. When using Hibernate, for example, you will call new on your persisted object before calling save on your HibernateTemplate.
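A small hedged sketch of that point. Employee is assumed to be a mapped entity defined elsewhere, and the example assumes spring-orm's Hibernate support (HibernateTemplate) is on the classpath:

import org.springframework.orm.hibernate5.HibernateTemplate;

public class EmployeeService {

    private final HibernateTemplate hibernateTemplate;

    public EmployeeService(HibernateTemplate hibernateTemplate) {
        this.hibernateTemplate = hibernateTemplate; // injected collaborator
    }

    public void hire(String name) {
        Employee employee = new Employee(name); // plain 'new': no DI needed for the entity itself
        hibernateTemplate.save(employee);       // persistence is the framework's job
    }
}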
