How To Create Proxy For OSGi Service - java

Let's say I have a really simple interface for getting files from somewhere:
interface FileManager {
    File getFile(Object data);
}
We can assume there are multiple implementations of this interface and all applications only use the interface and are blissfully unaware of which implementation the OSGi context provides them with.
Since some methods to get files are really slow, I want to add an optional cache. But I don't want the applications to change from the FileManager interface to another one, since that would make them aware of which implementation they are using (and if it's slow or not).
So I came up with this:
class FileManagerCache implements FileManager {

    private final Map<Object, File> cache = new HashMap<>();

    public File getFile(final Object data) {
        if (this.cache.containsKey(data)) {
            return this.cache.get(data);
        }
        final File result = getDelegate().getFile(data);
        this.cache.put(data, result);
        return result;
    }

    private FileManager getDelegate() {
        for (final FileManager fileManager : ServiceUtil.findServices(FileManager.class)) {
            if (this != fileManager) {
                return fileManager;
            }
        }
        throw new UnsupportedOperationException("No FileManager is present!"); //$NON-NLS-1$
    }
}
This implementation is registered with a very high "service.ranking", so it is the first one the applications get, and it delegates to the next one in line in the list of possible implementations.
Now this approach is not very elegant, and probably error-prone. How would I create a proxy in OSGi using standard mechanisms?

A safer way to define a proxy for another service is to use service properties.
For example you could give the slow FileManager a property like "name=A".
Then you could give the proxy the properties name=A,cached=true. On initialization, you could give the proxy a filter like name=A to search for a service to proxy.
So the user of the service could either use any service (by ranking) or filter for cached=true if it needs the cached variant.
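A minimal sketch of that registration and lookup, assuming a plain BundleContext (SlowFileManager and CachingFileManager are placeholder names; the delegate filter additionally excludes cached=true so the proxy never picks itself up):

import java.util.Hashtable;
import org.osgi.framework.BundleContext;
import org.osgi.framework.Constants;
import org.osgi.framework.ServiceReference;

public class FileManagerRegistration {

    void register(BundleContext context) throws Exception {
        // The slow implementation, identified by name=A
        Hashtable<String, Object> slowProps = new Hashtable<>();
        slowProps.put("name", "A");
        context.registerService(FileManager.class, new SlowFileManager(), slowProps);

        // The caching proxy: same name, marked cached=true, higher ranking
        Hashtable<String, Object> proxyProps = new Hashtable<>();
        proxyProps.put("name", "A");
        proxyProps.put("cached", "true");
        proxyProps.put(Constants.SERVICE_RANKING, 10);
        context.registerService(FileManager.class, new CachingFileManager(), proxyProps);

        // The lookup the proxy would perform to find its delegate
        ServiceReference<FileManager> delegateRef = context
                .getServiceReferences(FileManager.class, "(&(name=A)(!(cached=true)))")
                .iterator().next();
        FileManager delegate = context.getService(delegateRef);
    }
}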

Why not just create a service which collects registered implementations of other services? You can get a sample implementation and the idea from here.

I think what you describe is a 2 service model because you combine multiple responsibilities. You combine the caching responsibility with the abstraction of where the file comes from. Or in other words, your design is mixing concerns and it is therefore not cohesive.
The easiest solution is therefore to have a FileManager and a FileManagerProvider service. You can then provide a cached File Manager and a transparent File Manager depending on your situation.
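To make the suggested split concrete, here is a rough sketch; everything except FileManager is an assumed name, and the cached variant here takes its delegate directly instead of looking it up:

import java.io.File;
import java.util.HashMap;
import java.util.Map;

interface FileManagerProvider {
    FileManager getFileManager();
}

// One possible provider: hands out a caching wrapper around the raw manager.
class CachingFileManagerProvider implements FileManagerProvider {

    private final FileManager raw;

    CachingFileManagerProvider(FileManager raw) {
        this.raw = raw;
    }

    @Override
    public FileManager getFileManager() {
        Map<Object, File> cache = new HashMap<>();
        return data -> cache.computeIfAbsent(data, raw::getFile);
    }
}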
When I started in software more than 35 years ago I got coupling but it took me many years to understand how much more important cohesion is. This problem is a very archetypical example.
If you do want to pursue the proxy approach, you could implement it with OSGi service hooks: you register an alternative and hide the original services. However, that is a lot of work for a technically inferior solution, as all proxy-related solutions have their own problems. Keeping it simple, straightforward, and using actual types to represent your abstractions is imho the best solution. (Though I admit I frequently find that I initially made uncohesive designs as well before I understood the problem well.)

Felix DependencyManager supports it.
Here is the summary from http://felix.apache.org/documentation/subprojects/apache-felix-dependency-manager/reference/component-aspect.html
Dependency Manager - Aspect
Aspects, as part of aspect oriented programming, can be used in a
dynamic environment such as OSGi to "extend" existing services and add
certain "capabilities" to them. Examples of these are adding a
specific caching mechanism to a storage service or implementing
logging. Aspects in OSGi can be applied to services and can be added
and removed at runtime.
Aspects allow you to define an "interceptor", or chain of interceptors for a service to add features like caching or logging, etc. An aspect will be applied to any service that matches the specified interface and filter. For each matching service an aspect will be created based on the aspect implementation class. The aspect will be registered with the same interface and properties as the original service, plus any extra properties you supply here. It will also inherit all dependencies, and if you declare the original service as a member it will be injected.
@AspectService(ranking = 10, properties = {@Property(name = "param", value = "value")})
class AspectService implements InterceptedService {

    // The service we are intercepting (injected by reflection)
    protected InterceptedService intercepted;

    public void doWork() {
        intercepted.doWork();
    }
}
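Applied to the question, a hedged sketch of a caching aspect over the FileManager service could look like this (assuming the Felix DM annotation processing is set up; the cache mirrors the FileManagerCache from the question):

import java.io.File;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.felix.dm.annotation.api.AspectService;

@AspectService(ranking = 10)
public class CachingFileManagerAspect implements FileManager {

    // The original (lower-ranked) FileManager, injected by DependencyManager by type
    private volatile FileManager original;

    private final Map<Object, File> cache = new ConcurrentHashMap<>();

    @Override
    public File getFile(Object data) {
        return cache.computeIfAbsent(data, original::getFile);
    }
}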


What's the benefit of having Jersey instantiate components

All of the code samples of Spring Boot and Jersey that I have seen register their components with Jersey by passing the component's class.
From here:
public static class JerseyServletConfig extends ResourceConfig {
    public JerseyServletConfig() {
        register(RequestContextFilter.class);
        packages("com.github.cthiebault");
        register(LoggingFilter.class);
    }
}
Or here:
register(ApiListingResource.class);
ResourceConfig's javadoc says:
Register an instance of a custom JAX-RS component (such as an
extension provider or a feature meta-provider) to be instantiated and
used in the scope of this configurable context.
My questions are:
What is the benefit of letting those resources be instantiated by Jersey?
If we should let Jersey manage those components, why does it still provide a register(Object component) method, why not keep it limited to register(Class<?> componentClass)?
When should we pass our own instances instead of letting Jersey instantiate our classes?
To start, Dependency Injection is a GoodThing(tm) in general - it allows for separating concerns and it can greatly simplify testing. In general separating object creation from object use gives benefits around separating business/application logic (i.e. object use) from implementation concerns (deciding what objects are wired together).
Allowing Jersey to manage your resources / components is also a GoodThing(tm). It's a part of what Jersey is for. If you allow Jersey to manage your resource lifecycle then you have less code to write / maintain and the code which you do end up writing / maintaining becomes more about what your application does and less about how your objects fit together.
Jersey provides a standard lifecycle, which gives you a convention that allows developers a mental framework to work in - making it easier for new developers to join and existing developers to switch between applications. The lifecycle can be configured if need be, which allows your special-snowflake application to have special-snowflake behaviour if necessary.
The register(Object) method is an example of how you can opt-out of Jersey controlling a component's lifecycle. You may want to do that for lots of reasons, but you should generally look to avoid doing it - let the library do its job. Examples of exceptional cases would be if you're integrating with some legacy code which, for obscure/arcane reasons of its own, means that some crucial class must be an application-level singleton. There may even be some non-legacy reasons why you only want a single instance of something in your application - object mappers were always a good example of this. Typically, you'd use JSR-330 support for that nowadays but there might be some cases where that's not possible.
By integrating with JSR-330, you can also provide custom named scopes for some objects - which allows you to control how Jersey creates and uses objects while also revealing what you're intending (via the scope name). This generally provides a clean structure which is intention-revealing rather than intention-hiding.
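As a small illustration of the two registration styles (the filter class here is a trivial stand-in, not taken from the original question):

import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerRequestFilter;
import org.glassfish.jersey.server.ResourceConfig;

public class ApiConfig extends ResourceConfig {

    /** Trivial filter used only for the illustration. */
    public static class AuditFilter implements ContainerRequestFilter {
        @Override
        public void filter(ContainerRequestContext ctx) { /* ... */ }
    }

    public ApiConfig() {
        // Jersey instantiates and manages the component for you
        register(AuditFilter.class);

        // Or: opt out and hand Jersey a pre-built instance (e.g. a shared singleton)
        // register(new AuditFilter());
    }
}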

Why is Spring's ApplicationContext.getBean with Interface considered bad? [duplicate]

I asked a general Spring question: Auto-cast Spring Beans and had multiple people respond that calling Spring's ApplicationContext.getBean() should be avoided as much as possible. Why is that?
How else should I gain access to the beans I configured Spring to create?
I'm using Spring in a non-web application and had planned on accessing a shared ApplicationContext object as described by LiorH.
Amendment
I accept the answer below, but here's an alternate take by Martin Fowler who discusses the merits of Dependency Injection vs. using a Service Locator (which is essentially the same as calling a wrapped ApplicationContext.getBean()).
In part, Fowler states, "With service locator the application class asks for it [the service] explicitly by a message to the locator. With injection there is no explicit request, the service appears in the application class - hence the inversion of control.
Inversion of control is a common feature of frameworks, but it's something that comes at a price. It tends to be hard to understand and leads to problems when you are trying to debug. So on the whole I prefer to avoid it [Inversion of Control] unless I need it. This isn't to say it's a bad thing, just that I think it needs to justify itself over the more straightforward alternative."
I mentioned this in a comment on the other question, but the whole idea of Inversion of Control is to have none of your classes know or care how they get the objects they depend on. This makes it easy to change what type of implementation of a given dependency you use at any time. It also makes the classes easy to test, as you can provide mock implementations of dependencies. Finally, it makes the classes simpler and more focused on their core responsibility.
Calling ApplicationContext.getBean() is not Inversion of Control! While it's still easy to change what implementation is configured for the given bean name, the class now relies directly on Spring to provide that dependency and can't get it any other way. You can't just make your own mock implementation in a test class and pass that to it yourself. This basically defeats Spring's purpose as a dependency injection container.
Everywhere you want to say:
MyClass myClass = (MyClass) applicationContext.getBean("myClass");
you should instead, for example, declare a method:
public void setMyClass(MyClass myClass) {
    this.myClass = myClass;
}
And then in your configuration:
<bean id="myClass" class="MyClass">...</bean>
<bean id="myOtherClass" class="MyOtherClass">
<property name="myClass" ref="myClass"/>
</bean>
Spring will then automatically inject myClass into myOtherClass.
Declare everything in this way, and at the root of it all have something like:
<bean id="myApplication" class="MyApplication">
<property name="myCentralClass" ref="myCentralClass"/>
<property name="myOtherCentralClass" ref="myOtherCentralClass"/>
</bean>
MyApplication is the most central class, and depends at least indirectly on every other service in your program. When bootstrapping, in your main method, you can call applicationContext.getBean("myApplication") but you should not need to call getBean() anywhere else!
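A minimal bootstrap sketch of that (the XML file name and the run() method on MyApplication are assumptions):

import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class Main {
    public static void main(String[] args) {
        ApplicationContext context =
                new ClassPathXmlApplicationContext("applicationContext.xml");
        // the one and only getBean() call: fetch the root of the object graph
        MyApplication app = (MyApplication) context.getBean("myApplication");
        app.run(); // hypothetical entry-point method on MyApplication
    }
}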
Reasons to prefer Service Locator over Inversion of Control (IoC) are:
Service Locator is much, much easier for other people to follow in your code. IoC is 'magic': maintenance programmers must understand your convoluted Spring configurations and the myriad locations involved to figure out how you wired your objects.
IoC is terrible for debugging configuration problems. In certain classes of applications the application will not start when misconfigured and you may not get a chance to step through what is going on with a debugger.
IoC is primarily XML based (annotations improve things but there is still a lot of XML out there). That means developers can't work on your program unless they know all the magic tags defined by Spring. It is not good enough to know Java anymore. This hinders less experienced programmers (i.e. it is actually poor design to use a more complicated solution when a simpler solution, such as Service Locator, will fulfill the same requirements). Plus, support for diagnosing XML problems is far weaker than support for Java problems.
Dependency injection is more suited to larger programs. Most of the time the additional complexity is not worth it.
Often Spring is used in case you "might want to change the implementation later". There are other ways of achieving this without the complexity of Spring IoC.
For web applications (Java EE WARs) the Spring context is effectively bound at compile time (unless you want operators to grub around the context in the exploded war). You can make Spring use property files, but with servlets property files will need to be at a pre-determined location, which means you can't deploy multiple servlets of the same type on the same box. You can use Spring with JNDI to change properties at servlet startup time, but if you are using JNDI for administrator-modifiable parameters the need for Spring itself lessens (since JNDI is effectively a Service Locator).
With Spring you can lose program control if Spring is dispatching to your methods. This is convenient and works for many types of applications, but not all. You may need to control program flow when you need to create tasks (threads etc.) during initialization or need modifiable resources that Spring didn't know about when the context was bound to your WAR.
Spring is very good for transaction management and has some advantages. It is just that IoC can be over-engineering in many situations and introduce unwarranted complexity for maintainers. Do not automatically use IoC without thinking of ways of not using it first.
It's true that including the class in application-context.xml avoids the need to use getBean. However, even that is actually unnecessary. If you are writing a standalone application and you DON'T want to include your driver class in application-context.xml, you can use the following code to have Spring autowire the driver's dependencies:
public class AutowireThisDriver {

    private MySpringBean mySpringBean;

    public static void main(String[] args) {
        AutowireThisDriver atd = new AutowireThisDriver(); // get an instance

        ClassPathXmlApplicationContext ctx = new ClassPathXmlApplicationContext(
                "/WEB-INF/applicationContext.xml"); // get the Spring context

        // the magic: auto-wire the instance with all its dependencies:
        ctx.getAutowireCapableBeanFactory().autowireBeanProperties(atd,
                AutowireCapableBeanFactory.AUTOWIRE_BY_TYPE, true);

        // code that uses mySpringBean ...
        atd.mySpringBean.doStuff(); // no need to instantiate it yourself - thanks to Spring
    }

    public void setMySpringBean(MySpringBean bean) {
        this.mySpringBean = bean;
    }
}
I've needed to do this a couple of times when I have some sort of standalone class that needs to use some aspect of my app (eg for testing) but I don't want to include it in application-context because it is not actually part of the app. Note also that this avoids the need to look up the bean using a String name, which I've always thought was ugly.
One of the coolest benefits of using something like Spring is that you don't have to wire your objects together. Zeus's head splits open and your classes appear, fully formed with all of their dependencies created and wired-in, as needed. It's magical and fantastic.
The more you say ClassINeed classINeed = (ClassINeed)ApplicationContext.getBean("classINeed");, the less magic you're getting. Less code is almost always better. If your class really needed a ClassINeed bean, why didn't you just wire it in?
That said, something obviously needs to create the first object. There's nothing wrong with your main method acquiring a bean or two via getBean(), but you should avoid it because whenever you're using it, you're not really using all of the magic of Spring.
The motivation is to write code that doesn't depend explicitly on Spring. That way, if you choose to switch containers, you don't have to rewrite any code.
Think of the container as something is invisible to your code, magically providing for its needs, without being asked.
Dependency injection is a counterpoint to the "service locator" pattern. If you are going to lookup dependencies by name, you might as well get rid of the DI container and use something like JNDI.
Using #Autowired or ApplicationContext.getBean() is really the same thing. In both ways you get the bean that is configured in your context and in both ways your code depends on spring.
The only thing you should avoid is instantiating your ApplicationContext. Do this only once! In other words, a line like
ApplicationContext context = new ClassPathXmlApplicationContext("AppContext.xml");
should only be used once in your application.
One of Spring's premises is to avoid coupling. Define and use interfaces, DI, AOP, and avoid using ApplicationContext.getBean() :-)
One of the reasons is testability. Say you have this class:
interface HttpLoader {
    String load(String url);
}

interface StringOutput {
    void print(String txt);
}

@Component
class MyBean {
    @Autowired
    MyBean(HttpLoader loader, StringOutput out) {
        out.print(loader.load("http://stackoverflow.com"));
    }
}
How can you test this bean? E.g. like this:
class MyBeanTest {
    public void creatingMyBean_writesStackoverflowPageToOutput() {
        // setup
        String stackOverflowHtml = "dummy";
        StringBuilder result = new StringBuilder();
        // execution
        new MyBean(Collections.singletonMap("http://stackoverflow.com", stackOverflowHtml)::get, result::append);
        // evaluation
        assertEquals(result.toString(), stackOverflowHtml);
    }
}
Easy, right?
While you still depend on Spring (due to the annotations) you can remove your dependency on Spring without changing any code (only the annotation definitions), and the test developer does not need to know anything about how Spring works (maybe he should anyway, but it allows you to review and test the code separately from what Spring does).
It is still possible to do the same when using the ApplicationContext. However, then you need to mock ApplicationContext, which is a huge interface. You either need a dummy implementation or you can use a mocking framework such as Mockito:
@Component
class MyBean {
    @Autowired
    MyBean(ApplicationContext context) {
        HttpLoader loader = context.getBean(HttpLoader.class);
        StringOutput out = context.getBean(StringOutput.class);
        out.print(loader.load("http://stackoverflow.com"));
    }
}
class MyBeanTest {
    public void creatingMyBean_writesStackoverflowPageToOutput() {
        // setup
        String stackOverflowHtml = "dummy";
        StringBuilder result = new StringBuilder();
        ApplicationContext context = Mockito.mock(ApplicationContext.class);
        Mockito.when(context.getBean(HttpLoader.class))
                .thenReturn(Collections.singletonMap("http://stackoverflow.com", stackOverflowHtml)::get);
        Mockito.when(context.getBean(StringOutput.class)).thenReturn(result::append);
        // execution
        new MyBean(context);
        // evaluation
        assertEquals(result.toString(), stackOverflowHtml);
    }
}
This is quite possible, but I think most people would agree that the first option is more elegant and makes the test simpler.
The only option that is really a problem is this one:
@Component
class MyBean {
    @Autowired
    MyBean(StringOutput out) {
        out.print(new HttpLoader().load("http://stackoverflow.com"));
    }
}
Testing this requires huge effort, or your bean is going to attempt to connect to stackoverflow on each test run. And as soon as you have a network failure (or the admins at stackoverflow block you due to an excessive access rate) you will have randomly failing tests.
So as a conclusion I would not say that using the ApplicationContext directly is automatically wrong and should be avoided at all costs. However if there are better options (and there are in most cases), then use the better options.
The idea is that you rely on dependency injection (inversion of control, or IoC). That is, your components are configured with the components they need. These dependencies are injected (via the constructor or setters) - you don't get them yourself.
ApplicationContext.getBean() requires you to name a bean explicitly within your component. Instead, by using IoC, your configuration can determine what component will be used.
This allows you to rewire your application with different component implementations easily, or configure objects for testing in a straightforward fashion by providing mocked variants (e.g. a mocked DAO so you don't hit a database during testing)
Others have pointed to the general problem (and are valid answers), but I'll just offer one additional comment: it's not that you should NEVER do it, but rather that you should do it as little as possible.
Usually this means that it is done exactly once: during bootstrapping. And then it's just to access the "root" bean, through which other dependencies can be resolved. This can be reusable code, like base servlet (if developing web apps).
There is another time when using getBean makes sense: if you're reconfiguring a system that already exists, where the dependencies are not explicitly called out in Spring context files. You can start the process by putting in calls to getBean, so that you don't have to wire it all up at once. This way you can slowly build up your Spring configuration, putting each piece in place over time and getting the bits lined up properly. The calls to getBean will eventually be replaced, but as you understand the structure of the code, or lack thereof, you can start the process of wiring more and more beans and using fewer and fewer calls to getBean.
I've only found two situations where getBean() was required:
Others have mentioned using getBean() in main() to fetch the "main" bean for a standalone program.
Another use I have made of getBean() is in situations where an interactive user configuration determines the bean makeup for a particular situation. So that, for instance, part of the boot system loops through a database table using getBean() with a scope='prototype' bean definition and then sets additional properties. Presumably, there is a UI that adjusts the database table that would be friendlier than attempting to (re)write the application context XML.
However, there are still cases where you need the service locator pattern.
For example, I have a controller bean; this controller might have some default service beans, which can be dependency-injected by configuration.
But there could also be many additional or new services this controller can invoke now or later, which then need the service locator to retrieve the service beans.
You should use ConfigurableApplicationContext instead of ApplicationContext.

Are domain objects meant to be injected?

I'm already using CDI #Inject to get some stateless services in some of my classes.
I wonder if it would also make sense to inject domain objects, like the following example:
class UserSettings {
}

class User {
    // @Inject
    private UserSettings settings = new UserSettings();
}
A user should always have some default settings attached, that can be altered later. Would you use CDI here, or just stick with manual creation of a new object?
Or more generally speaking: where does it make sense to use CDI, and where not?
Update Producer:
class Preferences {
    @Produces @DefaultSettings
    public UserSettings getDefaultSettings() {
        UserSettings settings = new UserSettings();
        // configure defaults
        return settings;
    }
}

class User {
    @Inject @DefaultSettings
    private UserSettings settings;
}
Domain objects can be injected. You probably don't want to inject the default domain object, instead you want to provide a producer for it. This producer would essentially create the domain object based on some setup. You could have a "manager" type class that loads the object with the necessary properties based on something, like the current logged in user. Right now I do something similar, taking the principal and using that to look up the information for the logged in user, creating something like UserSettings. You just need to ensure that your UserSettings is not injectable, using a veto extension or not even installing it.
The alternative (which I don't particularly like, but could work) is for your domain object to inject references to persistence domains to look up the data. Conceptually, it looks a little cleaner. The setup code would go in a #PostConstruct method.
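A hedged sketch of that producer idea, assuming a Java EE / Jakarta EE container provides the Principal, and with UserSettingsRepository as a hypothetical lookup service:

import java.security.Principal;
import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.context.RequestScoped;
import javax.enterprise.inject.Produces;
import javax.inject.Inject;

@ApplicationScoped
public class UserSettingsProducer {

    @Inject
    private Principal principal;               // current caller, provided by the container

    @Inject
    private UserSettingsRepository repository; // hypothetical persistence lookup

    @Produces
    @RequestScoped
    public UserSettings currentUserSettings() {
        return repository.findByUserName(principal.getName());
    }
}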
A user should always have some default settings attached, that can be
altered later. Would you use CDI here, or just stick with manual
creation of a new object?
If your application is CDI enabled, then you should use CDI rather than manual creation of a new object.
Where does it make sense to use CDI, and where not?
CDI has many broader uses, allowing developers a great deal of flexibility to integrate various kinds of components in a loosely coupled but typesafe way. So CDI should be used in all Java EE 6 and EE 7 applications. If CDI is not supported by your application, then you should not use it.
I would like to add one more aspect: good code can be tested. And using dependency injection to support "Inversion of Control" is always a good idea. Think about how you would test your code, if the Settings are created internally via
private final UserSettings s = new UserSettings(); ...
It's much easier to make it injectable and then use an injection framework in test scope (tip: use needle (https://github.com/akquinet/needle)).
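For instance, one way to make it injectable is to let the settings come in from outside, so a plain unit test can pass a hand-built instance without any container at all (a minimal sketch, not the only option):

import javax.inject.Inject;

class User {

    private final UserSettings settings;

    @Inject
    User(UserSettings settings) {
        this.settings = settings;
    }
}

// In a plain unit test, no CDI container needed:
// User user = new User(new UserSettings());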

Advice wanted on a complex structure in java (DAO and Service Layer linking/coupling)

Introduction
I am trying to make a rather complex structure in Java with interfaces, abstract classes and generics. Since I have no experience with generics and only average experience with creating good OOP designs, this is beginning to prove quite a challenge.
I have some feeling that what I'm trying to do cannot actually be done, but that I could come close enough to it. I'll try to explain it as briefly as I can. I'm going to say straight away that this structure will represent my DAO and service layers for accessing the database. Making this question more abstract would only make it more difficult.
My DAO layer is completely fine as it is. There is a generic DAO interface and for each entity, there is a DAO interface that extends the generic one and fills in the generic types. Then there's an abstract class that is extended by each DAO implementation, which in turn implement the corresponding interface. Confusing read for most probably, so here's the diagram showing the DAO for Products as an example:
Now for the service classes, I had a similar construction in mind. Most of the methods in a service class map to the DAO methods anyway. If you replace every "DAO" in the diagram above with "Service", you get the basis for my service layer. But there is one thing that I want to do, based on the following idea I have:
Every service class for an entity will at least access one DAO object, namely the DAO of the entity that it is designed for.
Which is...
The question/problem
If I could make a proper OO design to make each service class have one instance variable for the DAO object of their respective entity my service layer would be perfect, in my view. Advice on this is welcome, in case my design is not so good as it seemed.
I have implemented it like this:
Class AbstractService
public abstract class AbstractService<EntityDAO> {

    EntityDAO entityDAO;

    public AbstractService() {
        entityDAO = makeEntityDAO(); // compiler/IDE warning: overridable method call in constructor
    }

    abstract EntityDAO makeEntityDAO();
}
Class ProductServiceImpl
public class ProductServiceImpl extends AbstractService<ProductDAOImpl> {

    public ProductServiceImpl() {
        super();
    }

    @Override
    ProductDAOImpl makeEntityDAO() {
        return new ProductDAOImpl();
    }
}
The problem with this design is a compiler warning I don't like: it has an overridable method call in the constructor (see the comment). Now, it is designed to be overridden; in fact, I enforce it to make sure that each service class has a reference to the corresponding DAO. Is this the best I can do?
I have done my absolute best to include everything you might need and only what you need for this question. All I have to say now is, comments are welcome and extensive answers even more, thanks for taking your time to read.
Additional resources on StackOverflow
Understanding Service and DAO layers
DAO and Service layers (JPA/Hibernate + Spring)
Just a little note first: usually in an application organized in layers like Presentation / Service / DAO for example, you have the following rules:
Each layer knows only the layer immediately below.
It knows it only by its interfaces, and not by its implementation classes.
This will provide easier testing, a better code encapsulation, and a sharper definition of the different layers (through interfaces that are easily identified as public API)
That said, there is a very common way to handle that kind of situation in a way that allow the most flexibility: dependency injection. And Spring is the industry standard implementation of dependency injection (and of a lot of other things)
The idea (in short) is that your service will know that it needs an IEntityDAO, and that someone will inject into it an implementation of the interface before the service is actually used. That someone is called an IoC container (Inversion of Control container). It can be Spring, and what it does is usually described by an application configuration file and done at application startup.
Important note: the concept is brilliant and powerful but dead simple. You can also use the Inversion of Control architectural pattern without a framework, with a very simple implementation consisting of a large static method "assembling" your application parts. But in an industrial context it's better to have a framework which will also allow you to inject other things like database connections, web service stub clients, JMS queues, etc.
Benefits:
You have an easy time mocking and testing, as the only things a class depends on are interfaces.
You have a single file or a small set of XML files that describe the whole structure of your application, which is really handy when your application grows.
It's a very widely adopted standard, well known by many Java developers.
Sample java code:
public abstract class AbstractService<IEntityDAO> {

    private IEntityDAO entityDAO; // you don't know the concrete implementation, maybe it's a mock for testing purposes

    public AbstractService() {
    }

    protected IEntityDAO getEntityDAO() { // only subclasses need this method
        return entityDAO;
    }

    public void setEntityDAO(IEntityDAO dao) { // the IoC container will call this method
        this.entityDAO = dao;
    }
}
And in spring configuration file, you will have something like that:
<bean id="ProductDAO" class="com.company.dao.ProductDAO" />
[...]
<bean id="ProductService" class="com.company.service.ProductService">
<property name="entityDAO" ref="ProductDAO"/>
</bean>
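As a side note on the compiler warning from the question: if you construct services by hand (or use a container that supports constructor injection), passing the DAO up through super() avoids the overridable call in the constructor entirely. A sketch, with ProductDAO standing for the DAO interface:

public abstract class AbstractService<D> {

    private final D entityDAO;

    protected AbstractService(D entityDAO) {
        this.entityDAO = entityDAO;
    }

    protected D getEntityDAO() {
        return entityDAO;
    }
}

public class ProductServiceImpl extends AbstractService<ProductDAO> {

    public ProductServiceImpl(ProductDAO productDAO) {
        super(productDAO);
    }
}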

spring and interfaces

I read all over the place about how Spring encourages you to use interfaces in your code. I don't see it. There is no notion of interface in your spring xml configuration. What part of Spring actually encourages you to use interfaces (other than the docs)?
The Dependency Inversion Principle explains this well. In particular, figure 4.
A. High level modules should not depend on low level modules. Both should depend upon abstractions.
B. Abstraction should not depend upon details. Details should depend upon abstractions.
Translating the examples from the link above into java:
public class Copy {

    private Keyboard keyboard = new Keyboard(); // concrete dependency
    private Printer printer = new Printer();    // concrete dependency

    public void copy() {
        for (int c = keyboard.read(); c != Keyboard.EOF; c = keyboard.read()) {
            printer.print(c);
        }
    }
}
Now with dependency inversion:
public class Copy {

    private Reader reader; // any dependency satisfying the Reader interface will work
    private Writer writer; // any dependency satisfying the Writer interface will work

    public Copy(Reader reader, Writer writer) {
        this.reader = reader;
        this.writer = writer;
    }

    public void copy() {
        for (int c = reader.read(); c != Reader.EOF; c = reader.read()) {
            writer.write(c);
        }
    }
}
Now Copy supports more than just copying from a keyboard to a printer.
It is capable of copying from any Reader to any Writer without requiring any modifications to its code.
And now with Spring:
<bean id="copy" class="Copy">
<constructor-arg ref="reader" />
<constructor-arg ref="writer" />
</bean>
<bean id="reader" class="KeyboardReader" />
<bean id="writer" class="PrinterWriter" />
or perhaps:
<bean id="reader" class="RemoteDeviceReader" />
<bean id="writer" class="DatabaseWriter" />
When you define an interface for your classes, it helps with dependency injection. Your Spring configuration files don't have anything about interfaces in them themselves -- you just put in the name of the class.
But if you want to inject another class that offers "equivalent" functionality, using an interface really helps.
For example, say you've got a class that analyzes a website's content, and you're injecting it with Spring. If the classes you're injecting it into know what the actual class is, then in order to change it out you'll have to change a whole lot of code to use a different concrete class. But if you created an Analyzer interface, you could just as easily inject your original DefaultAnalyzer as you could a mocked-up DummyAnalyzer or even another one that does essentially the same thing, like a PageByPageAnalyzer or anything else. In order to use one of those, you just have to change the class name you're injecting in your Spring config files, rather than go through your code changing classes around.
It took me about a project and a half before I really started to see the usefulness. Like most things (in enterprise languages) that end up being useful, it seems like a pointless addition of work at first, until your project starts to grow and then you discover how much time you saved by doing a little bit more work up front.
Most of the answers here are some form of "You can easily swap out implementations", but what I think they fail to answer is the why? part. To that I think the answer is almost definitively testability. Regardless of whether or not you use Spring or any other IoC framework, using dependency injection makes your code easier to test. In the case of, say, a Writer rather than a PrinterWriter, you can mock the Writer interface in a unit test and ensure that your code is calling it the way you expect it to. If you depend directly on the class implementation, your only option is to walk to the printer and check it, which isn't very automated. Furthermore, if you depend upon the result of a call to a class, not being able to mock it may prevent you from being able to reach all code paths in your test, thus reducing their quality (potentially). Simply put, you should decouple object graph creation from application logic. Doing so makes your code easier to test.
No one has mentioned yet that on many occasions it won't be necessary to create an interface so that the implementing class can be switched quickly, because there simply won't be more than one implementing class.
When interfaces are created without need, classes will be created in pairs (interface plus implementation), adding unnecessary boilerplate interfaces and creating potential dependency confusion because, in XML configuration files, components will sometimes be referenced by their interface and sometimes by their implementation, with no consequences at runtime but being incoherent regarding code conventions.
You probably want to try it for yourself to see this better; it may not be clear from the docs how Spring encourages interface use.
Here are a couple of examples:
Say you're writing a class that needs to read from a resource (e.g., file) that may be referenced in several ways (e.g., in classpath, absolute file path, as a URL etc). You'd want to define a org.springframework.core.io.Resource (interface) property on your class. Then in your Spring configuration file, you simply select the actual implementation class (e.g., org.springframework.core.io.ClassPathResource, org.springframework.core.io.FileSystemResource, org.springframework.core.io.UrlResource etc). Spring is basically functioning as an extremely generic factory.
If you want to take advantage of Spring's AOP integration (for adding transaction interceptors for instance), you'll pretty much need to define interfaces. You define the interception points in your Spring configuration file, and Spring generates a proxy for you, based on your interface.
These are examples I personally have experience with. I'm sure there are much more out there.
It's easy to generate proxies from interfaces.
If you look at any Spring app, you'll see service and persistence interfaces. Making that the Spring idiom certainly does encourage the use of interfaces. It doesn't get any more explicit than that.
Writing separate interfaces adds complexity and boilerplate code that's normally unnecessary. It also makes debugging harder because when you click a method call in your IDE, it shows the interface instead of the implementation. Unless you're swapping implementations at runtime, there's no need to go down that path.
Tools like Mockito make it very easy to test code using dependency injection without piling on interfaces.
Spring won't force you to use interfaces anywhere, it's just good practice. If you have a bean that has a some properties that are interfaces instead of concrete classes, then you can simply switch out some objects with mockups that implement the same interface, which is useful for certain test cases.
If you use, for example, the Hibernate support classes, you can define an interface for your DAO, then implement it separately; the advantage of having the interface is that you will be able to configure it using the Spring interceptors, which will allow you to simplify your code; you won't have to write any code catching HibernateExceptions and closing the session in a finally block, and you won't have to define any transactions programmatically either, you just configure all that stuff declaratively with Spring.
When you're writing quick and dirty apps, you can implement some simple DAO using JDBC or some simple framework which you won't end up using in the final version; you will be able to easily switch those components out if they implement some common interfaces.
If you don't use interfaces you risk an autowiring failure:
Sometimes Spring creates a proxy class for a bean. This proxy class is not a child class of the service implementation, but it re-implements all of its interfaces.
Spring will try to autowire instances of this bean; however, this proxy class is incompatible with the bean class. So declaring a field with the bean class can lead to "unsafe field assignment" exceptions.
You cannot reasonably know when Spring is going to proxy a service (nor should you), so to protect yourself against those surprises, your best move is to declare an interface and use this interface when declaring autowired fields.
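A small illustration of the difference (FileService and FileServiceImpl are hypothetical names):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
import org.springframework.stereotype.Service;

interface FileService {
    void store(String name);
}

@Service
class FileServiceImpl implements FileService {
    public void store(String name) { /* ... */ }
}

@Component
class Consumer {

    @Autowired
    private FileService fileService;        // safe: a JDK proxy still implements FileService

    // @Autowired
    // private FileServiceImpl fileService; // risky: fails if Spring injects an interface-based proxy
}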
