I have come across some Java classes used in the Spring framework. First, there are the beans in applicationContext.xml:
<bean id="someBean" parent="txProxyTemplate">
    <property name="target">
        <bean class="path.to.bean.impl.SomeBeanImpl">
            ...
        </bean>
    </property>
    ...
</bean>
And I have the interface ISomeBean, and its implementation SomeBeanImpl
Then, I have another class which uses ISomeBean.
public class SomeOtherClass {
    ...
    public void doStuff() {
        ...
        ApplicationContext ctx; // obtained or injected elsewhere
        ISomeBean theBean = (ISomeBean) ctx.getBean("someBean");
    }
}
What I want to know is: why do we cast to the interface instead of casting to the implementation class?
Why would you want to tie SomeOtherClass to a specific implementation? Using the interface, you get loose coupling - you can test against a fake implementation, or switch to a different implementation later.
This is a large part of the benefit of inversion of control - you aren't as tightly coupled to your dependencies as if you instantiate them directly within the class.
The logic behind this kind of cast is to give freedom to Spring's IoC container and dependency injection. One of the benefits of this approach is that the coupling between classes is very loose:
for example, in your case, if some fine morning you decide to change the code of SomeBeanImpl, you don't need to change anything else as long as the functionality doesn't change.
Have a look at the Spring IoC documentation; the idea will become clearer.
One great benefit of using dependency injection is that your classes don't have to know which implementation is used, only that they need one. It therefore makes sense to cast to an interface.
You should not, however, confuse what is good practice with what can or can't be done.
You cast to an interface to keep your code generic and loosely coupled. Once you cast an object to a concrete class you have added a tight coupling between your object and its collaborator. Should the class type of the collaborator change then your object's cast operation will potentially break - causing a ClassCastException. If instead we use an interface you can freely change the implementing class type of the collaborator without worrying about implications on clients.
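For instance, here is a minimal sketch of the question's doStuff() method (assuming the someBean id from the XML above; doSomething() is a made-up method on ISomeBean):
import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class SomeOtherClass {

    public void doStuff() {
        // Bean and file names follow the question's XML; doSomething() is hypothetical.
        ApplicationContext ctx = new ClassPathXmlApplicationContext("applicationContext.xml");

        // Cast to the interface: this works whether the container returns
        // SomeBeanImpl itself or the transaction proxy wrapped around it.
        ISomeBean theBean = (ISomeBean) ctx.getBean("someBean");
        theBean.doSomething();

        // Casting to the implementation would break as soon as the bean is
        // wrapped in a JDK dynamic proxy:
        // SomeBeanImpl impl = (SomeBeanImpl) ctx.getBean("someBean"); // ClassCastException
    }
}
With txProxyTemplate as the parent, the object you get back is usually a dynamically generated proxy that implements ISomeBean but is not a SomeBeanImpl, which is why the commented-out cast is fragile.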
There is the style-aspect of it - loose coupling and all that.
More concretely:
If you are using AOP (which Spring applies by default for many built-in features, transactional support for instance) together with, for instance, CGLIB proxying, the actual bean will not be of the implementation class: it will be a dynamically created proxy class. In that case, your application will fail with all sorts of ClassCast-like exceptions. This can come as a nasty surprise down the road if you make a change that inadvertently activates the proxying behaviour in Spring.
A dependency injection framework has two goals:
to manage the lifecycle of objects, that is, how they are created;
to decouple the code from concrete implementations, that is, to let you switch between implementations of an interface without recompiling (a sketch follows below).
If you are interested in neither 1 nor 2, you obviously don't need dependency injection: simply create the bean yourself with SomeBeanImpl someBean = new SomeBeanImpl().
If you are interested only in 1 but not 2, you could cast the bean you obtain to SomeBeanImpl, but then you are bound to always use that class. I suspect there is a way to tell Spring that you have just one implementation and no interface, but I'm not sure.
If you are interested in 2, you are by definition interested in 1 as well. Indeed, if you don't want to explicitly state which concrete class to instantiate, you delegate that to a factory that controls the creation of objects. That is what a dependency injection framework actually is: a sophisticated factory.
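To make point 2 concrete, here is a hedged sketch (all class names invented; each type would normally live in its own file): the client compiles once against the interface, and configuration alone decides which implementation it gets.
interface PaymentService {
    void pay(long amountInCents);
}

class PaypalPaymentService implements PaymentService {
    public void pay(long amountInCents) { /* call PayPal */ }
}

class StripePaymentService implements PaymentService {
    public void pay(long amountInCents) { /* call Stripe */ }
}

// The client depends only on the interface, so switching the wired-in bean from
// PaypalPaymentService to StripePaymentService is purely a configuration change;
// CheckoutService is not recompiled.
class CheckoutService {
    private final PaymentService paymentService;

    CheckoutService(PaymentService paymentService) {
        this.paymentService = paymentService;
    }

    void checkout() {
        paymentService.pay(4999);
    }
}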
Related
I am studying Spring and I have the following doubt.
Consider the following bean definition:
<bean id="clientService" class="com.myapp.service.ClientServiceImpl" />
Now consider the case in which a pointcut is declared targeting all methods inside the clientService bean.
Consider also that the ClientServiceImpl class implements 3 interfaces.
Now I know that using AOP the clientService bean is proxied and that this proxy implements all 3 of the interfaces.
But what is the exact reason why all 3 of these interfaces are implemented?
So it seems to me that there exist 2 kinds of proxies (correct me if I am making wrong assertions):
JDK Proxy: used by default by Spring (is that true?), in which I have an interface that defines the methods of the object that I want to proxy. The concrete implementation of this interface is wrapped by the proxy, so when I call a method on my object I am actually calling it on its proxy. The call is intercepted by a method interceptor that eventually performs the aspect and then invokes the target method.
CGLIB Proxy: in which, it seems to me, the proxy extends the implementation of the wrapped object, adding the extra logic features to it.
Something like this:
So it seems to me that Spring uses the first kind of proxy, the one based on the implementation of interfaces (is that right?):
I think that in AOP the extra logic is represented by the implementation of the method interceptor (is that true?) and the standard logic is represented by the implementation of the methods defined in the interfaces.
But, if the previous reasoning is correct, my doubt is: why do I need to define these interfaces and have the wrapped object implement them? (I can't understand whether the proxy itself implements these interfaces.)
Why? How exactly does it work?
Thanks.
But what is the exact reason why all 3 of these interfaces are implemented?
If the proxy didn't implement all of those interfaces, the bean couldn't be wired into other beans that use that interface (you'd get a ClassCastException). For example, autowiring all of the beans of that interface into a bean. Additionally, things like getBeanNamesForType wouldn't work if the proxy didn't implement the interface.
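For example (a hypothetical sketch; ClientService stands in for one of the question's three interfaces):
import java.util.List;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

// Hypothetical names: ClientService stands in for one of the three interfaces
// that ClientServiceImpl implements in the question.
interface ClientService {
    void process();
}

// Because the JDK proxy implements ClientService too, both injection styles
// below keep working once the pointcut causes the bean to be proxied.
@Service
class ReportingService {

    @Autowired
    private ClientService clientService;           // resolves to the proxy

    @Autowired
    private List<ClientService> allClientServices; // the proxied bean still shows up here
}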
So it seems to me that there exist 2 kinds of proxies (correct me if I am making wrong assertions)
Yes that's correct. See ScopedProxyMode. By default, Spring won't create a proxy. It only creates a proxy if it needs to wrap the bean to add additional behavior (AOP). Note that there's also a special case of the CGLIB based proxy that uses Objenesis to deal with subclassing targets that don't have a default constructor.
CGLIB Proxy: in which, it seems to me, the proxy extends the implementation of the wrapped object, adding the extra logic features to it
When you use CGLIB based proxies, the constructor for your bean gets called twice: once when the dynamically generated subclass is instantiated (to create the proxy) and a second time when the actual bean is created (the target).
I think that in AOP the extra logic is represented by the implementation of the method interceptor (is that true?)
The proxy essentially just invokes the chain of advice that needs to be applied. That advice isn't implemented in the proxy itself. For example, the advice for @Transactional lives in TransactionAspectSupport. Take a look at the source of JdkDynamicAopProxy.
and the standard logic is represented by the implementation of the methods defined in the interfaces.
Assuming that you're programming against interfaces and using JDK proxies that's correct.
But, if the previous reasoning is correct, my doubt is: why do I need to define these interfaces and have the wrapped object implement them? (I can't understand whether the proxy itself implements these interfaces.)
If you want to use interface based proxies you need to use interfaces. Just make sure all of your beans implement interfaces, all of your advised methods are defined by those interfaces, and that when one bean depends on another bean, that dependency is specified using an interface. Spring will take care of constructing the proxy and making sure it implements all of the interfaces.
In your diagram, you have "Spring AOP Proxy (this)". You have to be really careful with using this when you're using any type of proxying.
Calls within the same class won't have advice applied because those calls won't pass through the proxy.
If in one of your beans you pass this to some outside code, you're passing the target of the AOP advice. If some other code uses that reference, the calls won't have AOP advice applied (again, you're bypassing the proxy).
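A classic way to trip over this, sketched with invented names:
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
class OrderService {

    public void importOrders() {
        // This is a plain call on 'this', not on the Spring proxy, so the
        // @Transactional advice on saveOrder() is silently skipped.
        saveOrder();
    }

    @Transactional
    public void saveOrder() {
        // persistence work that expects an active transaction
    }
}
Because importOrders() reaches saveOrder() via this rather than via the proxy, no transaction is started for that internal call.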
This concept is unclear to me.
I have worked with several frameworks, for instance Spring.
To implement a feature, we always implement some interfaces provided by the framework.
For instance, if I have to create a custom scope in Spring, my class implements the org.springframework.beans.factory.config.Scope interface, which has some predefined low-level functionality that helps in defining a custom scope for a bean.
Whereas in Java, I read that an interface is just a declaration which classes can implement, defining their own functionality. The methods of an interface have no predefined functionality.
interface Car
{
    int topSpeed();
    void accelerate();
    void decelerate();
}
The methods here don't have any functionality. They are just declared.
Can anyone explain this discrepancy in the concept? How does the framework put some predefined functionality with interface methods?
It doesn't put predefined functionality in the methods. But when you implement some interface (say I) in your class C, the framework knows that your object (of type C) implements the I interface, and can call certain methods (defined in I) on your object, thus sending some signals/events to your object. These events can be e.g. 'app initialized', 'app started', 'app stopped', 'app destroyed'. So usually this is what frameworks do.
I am talking about frameworks in general here, not Spring in particular.
There is no conceptual difference, actually. Each java interface method has a very clear responsibility (usually described in its javadoc). Take Collection.size() as an example. It is defined to return the number of elements in your collection. Having it return a random number is possible, but will cause no end of grief for any caller. Interface methods have defined semantics ;)
As I mentioned in the comments, to some extent, implementing interfaces provided by the framework is replaced by the use of stereotype annotations. For example, you might annotate a class as @Entity to let Spring know to manage it and weave a transaction manager into it.
I have a suspicion that what you are seeing relates to how Spring and other frameworks make use of dynamic proxies to inject functionality.
For an example of Spring injecting functionality: if you annotate a method as @Transactional, the framework will attempt to create a dynamic proxy which wraps access to your method, i.e. when something calls your "save()" method, the call actually goes to the proxy, which might do things like starting a transaction before passing the call to your implementation, and then closing the transaction after your method has completed.
Spring is able to do this at runtime if you have defined an interface, because it is able to create a dynamic proxy which implements the same interface as your class. So where you have:
@Autowired
MyServiceInterface myService;
That is injected with SpringDynamicProxyToMyServiceImpl instead of MyServiceImpl.
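Under the hood this is roughly what a JDK dynamic proxy does. The following is a stripped-down sketch of the mechanism (not Spring's actual code; all names are invented):
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

class TransactionalProxyDemo {

    interface MyServiceInterface {
        void save();
    }

    static class MyServiceImpl implements MyServiceInterface {
        public void save() {
            System.out.println("saving...");
        }
    }

    static MyServiceInterface wrap(MyServiceInterface target) {
        InvocationHandler handler = (proxy, method, args) -> {
            System.out.println("begin transaction");      // "extra logic" before the call
            try {
                return method.invoke(target, args);        // delegate to the real bean
            } finally {
                System.out.println("commit transaction");  // "extra logic" after the call
            }
        };
        return (MyServiceInterface) Proxy.newProxyInstance(
                MyServiceInterface.class.getClassLoader(),
                new Class<?>[] { MyServiceInterface.class },
                handler);
    }

    public static void main(String[] args) {
        MyServiceInterface myService = wrap(new MyServiceImpl());
        myService.save(); // prints: begin transaction / saving... / commit transaction
    }
}
The proxy implements the same interface as the target, which is exactly why code that depends on the interface never notices the substitution.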
However, with Spring you may have noticed that you don't always need to use interfaces. This is because it also permits AspectJ compile-time weaving. Using AspectJ actually injects the functionality into your class at compile-time, so that you are no longer forced to use an interface and implementation. You can read more about Spring AOP here:
http://docs.spring.io/spring/docs/4.0.0.RELEASE/spring-framework-reference/htmlsingle/#aop-introduction-defn
I should point out that although Spring does generally enable you to avoid defining both interface and implementation for your beans, it's not such a good idea to take advantage of it. Using separate interface and implementation is very valuable for unit testing, as it enables you to do things like inject a stub which implements an interface, instead of a full-blown implementation of something which needs database access and other rich functionality.
I'd like to introduce Guice for the use of an existing mid-sized project.
For my demands I need a custom scope (session is too big, while request to small for my project).
Imagine that I request Guice to provide me an instance of class A, which has direct and indirect dependencies on many other classes (composition).
My custom provider is able to provide the instance of classes which are used as constructor arguments of all involved classes.
Question:
Do I really have to put an @Inject annotation (and my custom scope annotation) on the constructors of all involved classes, or is there a way for Guice to require these annotations only on the top-level class which I request, with all further dependencies resolved by "asking" my custom scope for a provider of the dependent types?
If this is true, it would increase the effort of introducing Guice because I would have to adjust more than 1000 classes. Any help and experience with introducing Guice is appreciated.
First of all, it's possible to use Guice without putting an @Inject annotation anywhere. Guice supports Provider bindings, @Provides methods and constructor bindings, all of which allow you to bind types however you choose. However, for its normal operation it requires @Inject annotations to serve as metadata telling it what dependencies a class requires and where it can inject them.
The reason for this is that otherwise it cannot deterministically tell what it should inject and where. For example, classes may have multiple constructors, and Guice needs some way of choosing one to inject that doesn't rely on any guessing. You could say "well, my classes only have one constructor so it shouldn't need @Inject on that", but what happens when someone adds a new constructor to a class? Then Guice no longer has a basis for deciding, and the application breaks. Additionally, this all assumes that you're only doing constructor injection. While constructor injection is certainly the best choice in general, Guice allows injection of methods (and fields) as well, and the need to specify a class's injection points explicitly is even stronger there, since most classes will have many methods that are not used for injection and at most a few that are.
In addition to its importance in telling Guice where to inject dependencies, @Inject also serves as documentation of how a class is intended to be used: it marks the class as part of an application's dependency-injection-wired infrastructure. It also helps to be consistent in applying @Inject annotations across your classes, even where they wouldn't currently be strictly necessary, such as classes with just a single constructor. I'd also note that you can use JSR-330's @javax.inject.Inject annotation in Guice 3.0 if you prefer a standard Java annotation to a Guice-specific one.
I'm not too clear on what you mean by asking the scope for a provider. Scopes generally do not create objects themselves; they control when to ask the unscoped provider of a dependency for a new instance and how to control the scope of that instance. Providers are part of how they operate, of course, but I'm not sure if that's what you mean. If you have some custom way of providing instances of objects, Provider bindings and @Provides methods are the way to go and don't require @Inject annotations on the classes themselves.
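For example, a hedged sketch of an @Provides method (types invented): the class itself carries no @Inject annotations at all, and the module decides how it is built.
import com.google.inject.AbstractModule;
import com.google.inject.Guice;
import com.google.inject.Injector;
import com.google.inject.Provides;

// Invented class with no @Inject annotations anywhere.
class LegacyMailer {
    LegacyMailer(String host, int port) { /* ... */ }
}

class MailModule extends AbstractModule {

    @Override
    protected void configure() {
        // other bindings, if any
    }

    @Provides
    LegacyMailer provideMailer() {
        return new LegacyMailer("smtp.example.com", 25); // constructed by hand
    }
}

class MailDemo {
    public static void main(String[] args) {
        Injector injector = Guice.createInjector(new MailModule());
        LegacyMailer mailer = injector.getInstance(LegacyMailer.class);
    }
}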
No, you don't.
Guice does not ask you to inject every single object. Guice will try to create only the objects that are injected, so you only need to put @Inject on the objects that you want to be injected.
On the scope bit: a scope essentially controls how your objects get created by Guice. When you write your own custom scope, you can have a data structure that controls the way objects are created. When you scope a class with your custom annotation, Guice will call your scope method before creation with a Provider for that class. You can then decide whether to create a new object or use an existing object from a data structure (such as a hash map). If you want to use an existing one, you fetch and return that object; otherwise you call provider.get() and return the result.
Notice this
public <T> Provider<T> scope(final Key<T> key, final Provider<T> unscoped) {
    return new Provider<T>() {
        public T get() {
            Map<Key<?>, Object> scopedObjects = getScopedObjectMap(key);

            @SuppressWarnings("unchecked")
            T current = (T) scopedObjects.get(key);
            if (current == null && !scopedObjects.containsKey(key)) {
                current = unscoped.get();
                scopedObjects.put(key, current);
            }
            // What you return here is going to be injected.
            // In this scope object you can have a data structure that holds all references
            // and choose to return one of those instead, depending on your logic and external
            // dependencies such as session variables etc.
            return current;
        }
    };
}
Here's a tutorial ...
http://code.google.com/p/google-guice/wiki/CustomScopes
At the most basic level, the @Inject annotation identifies the things Guice will need to set for you. You can have Guice inject into a field directly, into a method, or into a constructor. You must use the @Inject annotation every time you want Guice to inject an object.
Here is a Guice tutorial.
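A minimal sketch of what that looks like (invented types; the interface would be bound to an implementation in a module):
import com.google.inject.Inject;

// The @Inject annotation marks the injection point; without it
// Guice would not know which constructor (or field) to populate.
interface CreditCardProcessor {
    void charge(long amountInCents);
}

class BillingService {

    private final CreditCardProcessor processor; // bound to an implementation elsewhere

    @Inject
    BillingService(CreditCardProcessor processor) { // constructor injection
        this.processor = processor;
    }

    void bill(long amountInCents) {
        processor.charge(amountInCents);
    }
}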
I read all over the place about how Spring encourages you to use interfaces in your code. I don't see it. There is no notion of interface in your spring xml configuration. What part of Spring actually encourages you to use interfaces (other than the docs)?
The Dependency Inversion Principle explains this well. In particular, figure 4.
A. High level modules should not depend on low level modules. Both should depend upon abstractions.
B. Abstraction should not depend upon details. Details should depend upon abstractions.
Translating the examples from the link above into java:
public class Copy {
    private Keyboard keyboard = new Keyboard(); // concrete dependency
    private Printer printer = new Printer();    // concrete dependency

    public void copy() {
        for (int c = keyboard.read(); c != Keyboard.EOF; c = keyboard.read()) {
            printer.print(c);
        }
    }
}
Now with dependency inversion:
public class Copy {
    private Reader reader; // any dependency satisfying the Reader interface will work
    private Writer writer; // any dependency satisfying the Writer interface will work

    public Copy(Reader reader, Writer writer) {
        this.reader = reader;
        this.writer = writer;
    }

    public void copy() {
        for (int c = reader.read(); c != Reader.EOF; c = reader.read()) {
            writer.write(c);
        }
    }
}
Now Copy supports more than just copying from a keyboard to a printer.
It is capable of copying from any Reader to any Writer without requiring any modifications to its code.
And now with Spring:
<bean id="copy" class="Copy">
    <constructor-arg ref="reader" />
    <constructor-arg ref="writer" />
</bean>
<bean id="reader" class="KeyboardReader" />
<bean id="writer" class="PrinterWriter" />
or perhaps:
<bean id="reader" class="RemoteDeviceReader" />
<bean id="writer" class="DatabaseWriter" />
When you define an interface for your classes, it helps with dependency injection. Your Spring configuration files don't have anything about interfaces in them themselves -- you just put in the name of the class.
But if you want to inject another class that offers "equivalent" functionality, using an interface really helps.
For example, say you've got a class that analyzes a website's content, and you're injecting it with Spring. If the classes you're injecting it into know what the actual class is, then in order to change it out you'll have to change a whole lot of code to use a different concrete class. But if you create an Analyzer interface, you can just as easily inject your original DefaultAnalyzer as you could a mocked-up DummyAnalyzer, or even another one that does essentially the same thing, like a PageByPageAnalyzer. In order to use one of those, you just have to change the class name you're injecting in your Spring config files, rather than go through your code changing classes around.
It took me about a project and a half before I really started to see the usefulness. Like most things (in enterprise languages) that end up being useful, it seems like a pointless addition of work at first, until your project starts to grow and then you discover how much time you saved by doing a little bit more work up front.
Most of the answers here are some form of "You can easily swap out implementations", but what I think they fail to answer is the why? part. To that I think the answer is almost definitively testability. Regardless of whether you use Spring or any other IoC framework, using dependency injection makes your code easier to test. In the case of, say, a Writer rather than a PrinterWriter, you can mock the Writer interface in a unit test and ensure that your code is calling it the way you expect it to. If you depend directly on the class implementation, your only option is to walk to the printer and check it, which isn't very automated. Furthermore, if you depend upon the result of a call to a class, not being able to mock it may prevent you from reaching all code paths in your test, thus reducing their quality (potentially). Simply put, you should decouple object graph creation from application logic. Doing so makes your code easier to test.
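For example, a sketch of such a test using Mockito and JUnit 4, assuming the Reader, Writer and Copy types from the dependency-inversion answer above:
import static org.mockito.Mockito.*;

import org.junit.Test;

public class CopyTest {

    @Test
    public void copiesEveryCharacterFromReaderToWriter() {
        Reader reader = mock(Reader.class);
        Writer writer = mock(Writer.class);
        // Simulate two characters followed by end-of-input.
        when(reader.read()).thenReturn((int) 'h', (int) 'i', Reader.EOF);

        new Copy(reader, writer).copy();

        verify(writer).write('h');
        verify(writer).write('i');
        verifyNoMoreInteractions(writer);
    }
}
No printer, no keyboard, no hardware: the interfaces are the seams that make the test possible.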
No one has mentioned yet that on many occasions it won't be necessary to create an interface so that the implementing class can be switched quickly, simply because there will never be more than one implementing class.
When interfaces are created without need, classes get created in pairs (interface plus implementation), adding unnecessary boilerplate interfaces and creating potential dependency confusion because, in XML configuration files, components will sometimes be referenced by their interface and sometimes by their implementation, with no consequences at runtime but incoherent code conventions.
You'll probably want to try it for yourself to see this better; it may not be clear from the docs how Spring encourages interface use.
Here are a couple of examples:
Say you're writing a class that needs to read from a resource (e.g., a file) that may be referenced in several ways (e.g., on the classpath, by absolute file path, as a URL, etc.). You'd want to define an org.springframework.core.io.Resource (interface) property on your class. Then, in your Spring configuration file, you simply select the actual implementation class (e.g., org.springframework.core.io.ClassPathResource, org.springframework.core.io.FileSystemResource, org.springframework.core.io.UrlResource, etc.). Spring is basically functioning as an extremely generic factory; a short sketch of this appears after these examples.
If you want to take advantage of Spring's AOP integration (for adding transaction interceptors for instance), you'll pretty much need to define interfaces. You define the interception points in your Spring configuration file, and Spring generates a proxy for you, based on your interface.
These are examples I personally have experience with. I'm sure there are much more out there.
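To flesh out the Resource example a little (class and property names are invented):
import java.io.IOException;
import java.io.InputStream;

import org.springframework.core.io.Resource;

// The class only sees the interface; whether the template comes from the classpath,
// the file system or a URL is decided entirely in the bean definition.
public class TemplateLoader {

    private Resource template;

    public void setTemplate(Resource template) {
        this.template = template;
    }

    public InputStream openTemplate() throws IOException {
        return template.getInputStream(); // same call regardless of where the resource lives
    }
}
In the bean definition you can wire in a ClassPathResource, FileSystemResource or UrlResource explicitly, or simply set the property to a location string such as classpath:templates/invoice.txt and let Spring's resource editor choose the implementation.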
It's easy to generate proxies from interfaces.
If you look at any Spring app, you'll see service and persistence interfaces. Making that the Spring idiom certainly does encourage the use of interfaces. It doesn't get any more explicit than that.
Writing separate interfaces adds complexity and boilerplate code that's normally unnecessary. It also makes debugging harder because when you click a method call in your IDE, it shows the interface instead of the implementation. Unless you're swapping implementations at runtime, there's no need to go down that path.
Tools like Mockito make it very easy to test code using dependency injection without piling on interfaces.
Spring won't force you to use interfaces anywhere; it's just good practice. If you have a bean that has some properties that are interfaces instead of concrete classes, then you can simply switch out some objects with mock-ups that implement the same interface, which is useful for certain test cases.
If you use, for example, the Hibernate support classes, you can define an interface for your DAO and then implement it separately. The advantage of having the interface is that you will be able to configure it using the Spring interceptors, which allows you to simplify your code: you won't have to write any code catching HibernateExceptions and closing the session in a finally block, and you won't have to define any transactions programmatically either; you just configure all that stuff declaratively with Spring.
When you're writing quick and dirty apps, you can implement some simple DAO using JDBC or some simple framework which you won't end up using in the final version; you will be able to easily switch those components out if they implement some common interfaces.
If you don't use interfaces, you risk an autowiring failure:
Sometimes Spring creates a proxy class for a bean. This proxy class is not a child class of the service implementation, but it re-implements all of its interfaces.
Spring will try to autowire instances of this bean; however, the proxy class is not compatible with the bean class, so declaring a field with the bean class can lead to "unsafe field assignment" exceptions.
You cannot reasonably know when Spring is going to proxy a service (nor should you), so to protect yourself against those surprises, your best move is to declare an interface and use that interface when declaring autowired fields.
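Concretely (a hedged sketch with invented names):
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

interface InvoiceService {
    void send(long invoiceId);
}

@Component
class InvoiceController {

    // Safe: if Spring wraps InvoiceServiceImpl in a JDK proxy, that proxy still
    // implements InvoiceService, so this field can always be populated.
    @Autowired
    private InvoiceService invoiceService;

    // Risky: a JDK proxy is not an InvoiceServiceImpl, so this injection can fail
    // (typically with a BeanNotOfRequiredTypeException) once proxying kicks in.
    // @Autowired
    // private InvoiceServiceImpl invoiceServiceImpl;
}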
Does dependency injection mean that you don't ever need the 'new' keyword? Or is it reasonable to directly create simple leaf classes such as collections?
In the example below I inject the comparator, query and dao, but the SortedSet is directly instantiated:
public Iterable<Employee> getRecentHires()
{
    SortedSet<Employee> entries = new TreeSet<Employee>(comparator);
    entries.addAll(employeeDao.findAll(query));
    return entries;
}
Just because Dependency Injection is a useful pattern doesn't mean that we use it for everything. Even when using DI, there will often be a need for new. Don't delete new just yet.
One way I typically decide whether or not to use dependency injection is whether or not I need to mock or stub out the collaborating class when writing a unit test for the class under test. For instance, in your example you (correctly) are injecting the DAO because if you write a unit test for your class, you probably don't want any data to actually be written to the database. Or perhaps a collaborating class writes files to the filesystem or is dependent on an external resource. Or the behavior is unpredictable or difficult to account for in a unit test. In those cases it's best to inject those dependencies.
For collaborating classes like TreeSet, I normally would not inject those because there is usually no need to mock out simple classes like these.
One final note: when a field cannot be injected for whatever reason, but I still would like to mock it out in a test, I have found the Junit-addons PrivateAccessor class helpful to be able to switch the class's private field to a mock object created by EasyMock (or jMock or whatever other mocking framework you prefer).
There is nothing wrong with using new like how it's shown in your code snippet.
Consider the case of wanting to append String snippets. Why would you want to ask the injector for a StringBuilder?
In another situation that I've faced, I needed to have a thread running in accordance with the lifecycle of my container. In that case, I had to do a new Thread() because my injector was created after the callback method for container startup was called. And once the injector was ready, I hand-injected some managed classes into my Thread subclass.
Yes, of course.
Dependency injection is meant for situations where there could be several possible instantiation targets of which the client may not be aware (or capable of choosing between) at compile time.
However, there are enough situations where you do know exactly what you want to instantiate, so there is no need for DI.
This is just like invoking functions in object-oriented languages: just because you can use dynamic binding doesn't mean that you can't use good old static dispatch (e.g., when you split your method into several private operations).
My thinking is that DI is awesome and great for wiring layers and the pieces of your code that need to be flexible to potential change. Sure, we can say everything can potentially need changing, but we all know in practice some stuff just won't be touched.
So when DI is overkill I use 'new' and just let it roll.
Ex: for me, wiring a Model to the View to the Controller layer is always done via DI. Any algorithms my app uses: DI. Any pluggable reflective code: DI. The database layer: DI. But pretty much any other object used in my system is handled with a plain 'new'.
hope this helps.
It is true that in today's framework-driven environment you instantiate objects less and less. For example, servlets are instantiated by the servlet container, beans in Spring are instantiated by Spring, etc.
Still, when using a persistence layer, you will instantiate your persisted objects before they are persisted. When using Hibernate, for example, you will call new on your persisted object before calling save on your HibernateTemplate.
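For example, a hedged sketch (Employee and its constructor are assumed; the HibernateTemplate package depends on your Hibernate version):
import org.springframework.orm.hibernate3.HibernateTemplate; // package varies by Hibernate version

// Sketch: Employee is assumed to be a mapped entity; the template is injected by Spring.
public class EmployeeRepository {

    private final HibernateTemplate hibernateTemplate;

    public EmployeeRepository(HibernateTemplate hibernateTemplate) {
        this.hibernateTemplate = hibernateTemplate;
    }

    public void hire(String name) {
        Employee employee = new Employee(name); // plain 'new' for the persisted object
        hibernateTemplate.save(employee);       // the framework takes over only at persistence time
    }
}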