Proper Class Construction: Using Multiple Hard Dependencies - java

I'm trying to apply the Single Responsibility Principle to my Java code by refactoring large classes (2000+ lines) into smaller, cohesive classes (~200 lines). However, I'm confused about how to properly reduce coupling between classes, since certain classes seem bound to create multiple "hard dependencies" via the new keyword.
I'm using dependency injection via constructors primarily, followed by setter methods, or methods which accept the dependency as a parameter and use it amongst other logic within the method body (not just a simple this.val = val; setter).
IntelliJ's automatic refactoring instantiates the newly extracted class and passes (injects) into it a this reference to the LoadController. If I have to refactor a 2000-line class, this auto-instantiation + injection will of course occur each time I extract a new class. The following LoadController is a JavaFX controller class for the program's main stage, which acts as the starting point for various features:
public class LoadController {
    private final DBConnection dbConnection = new DBConnection(this);
    private final UpdateLabels updateLabels = new UpdateLabels(this);
    private final OpenCloseMenu openCloseMenu = new OpenCloseMenu(this);
    private final CreateVBox createVBox = new CreateVBox(this, dbConnection);
    private final ...
    private final ...
}
Is this wrong? My understanding is that large, separate functions should be in their own class ... BUT some classes must have multiple hard dependencies like above, in order to "guide" the flow of logic between the use of various other classes.

If you are doing dependency injection into JavaFX controllers, you might want to look into using something like Gluon Ignite to assist you.
Gluon Ignite allows developers to use popular dependency injection frameworks in their JavaFX applications, including inside their FXML controllers. Gluon Ignite creates a common abstraction over several popular dependency injection frameworks.
The injection framework you choose to use (e.g. Guice or Spring) will be responsible for creating the injectable components (e.g. you don't invoke new) and injecting the relevant references into your code (e.g. you don't need to write dbConnection = <some value>). The injection framework will have extensive documentation and blog articles on how it works and how it may best be used, so a full discussion of that is outside the scope of this answer.
An alternative to Gluon Ignite is afterburner.fx, which is similar but uses a small custom implementation for @Inject, so it is more lightweight (and a little less powerful) than the more established dependency injection frameworks (it is very simple to use, though).
This is just one option; there are other ways you can handle this, but since you state that you wish to perform dependency injection with JavaFX, it seems to make sense to use proven frameworks for this rather than trying to roll your own implementation.
some classes must have multiple hard dependencies like above, in order to "guide" the flow of logic between the use of various other classes.
Using something like Guice you provide a module that defines bindings between interface types and implementations. These bindings tell Guice how to construct the dependencies, so you don't need to hard code the dependencies in your classes. See the BillingModule in the Guice getting started guide for a module example. If you need multiple instances of injectable objects, you can use Providers in Guice. Spring has similar concepts, but different names.
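As a rough sketch of what such a module might look like (the DataAccess/DbDataAccess names here are invented for illustration, not taken from your code):

import com.google.inject.AbstractModule;
import com.google.inject.Guice;
import com.google.inject.Injector;

// Hypothetical interface/implementation pair, standing in for something like your DBConnection.
interface DataAccess { }
class DbDataAccess implements DataAccess { }

class AppModule extends AbstractModule {
    @Override
    protected void configure() {
        // Whenever something needs a DataAccess, Guice supplies a DbDataAccess.
        bind(DataAccess.class).to(DbDataAccess.class);
    }
}

public class Bootstrap {
    public static void main(String[] args) {
        Injector injector = Guice.createInjector(new AppModule());
        DataAccess dataAccess = injector.getInstance(DataAccess.class); // no 'new' in your own classes
    }
}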
Deciding whether or not to use a dependency injection framework is a tradeoff between the work you would need to do without an injection framework and the additional time and complexity of integrating the framework into your application. So the decision of whether or not to use one needs to be an architectural decision that you make; there is no generic right or wrong answer for every application on whether the use of such frameworks is justified.
If I decide that using an injection framework is superfluous for my requirements, then I am not doing anything inherently incorrect by having multiple hard dependencies in some classes, as shown above?
Well, the dependencies need to be defined somewhere: either in an inferred or centralized location, as dependency injection systems use, or locally in the given classes, as you might determine from a traditional responsibility-driven design approach. So you are not necessarily doing anything wrong by having hard dependencies. Abstract decoupling patterns such as dependency injection aren't always required.
The trick is determining what dependencies to have where and how to manage them. Often it is just obvious and falls out naturally from the problem domain; sometimes techniques such as CRC modeling can help structure dependencies.
Related article:
Inversion of Control Containers and the Dependency Injection pattern by Martin Fowler.
My assumption is that I can refactor my large classes into smaller, cohesive classes with some of these classes having multiple hard dependencies, using new.
Yes, you can certainly do that.
Can an injection framework be integrated later on in a project's life, rather than early on when it may not be required yet?
Yes, it can. It will be a bit of work to do so, but if the application is well structured, it is not all that difficult. It is more difficult to go the other way and try to remove usage of a dependency injection framework from applications and libraries that are already based on it.
Related:
Passing Parameters JavaFX FXML


Android Dependency Injection using Dagger 2

I just started using Dagger 2 for dependency injection in Android. The way I'm using it now, I made sure I don't have
new Class();
but I have a feeling I'm overusing dependency injection. I inject anything that needs an instance. Is this right? Or is there a set of things I can inject, or can I inject everything?
It is very easy and common to overuse dependency injection, and I wouldn't endorse the practice of "inject anything that needs an instance". However, you'll need to decide which aspects fall into which group.
One distinction I've seen drawn is "injectables" vs "newables", as in this oft-cited article by Miško Hevery (also on the Google Testing Blog), this article by Giorgio Sironi, and this Dagger 2 StackOverflow answer.
You may want to weigh the advantages of dependency injection, which include:
ability for the environment to substitute out implementations, particularly in testing against unwritten, heavy, or nondeterministic implementations
insulation from your dependencies' dependencies, which may change and evolve independently
...against the costs, which include:
difficulty in telling which implementation may be supplied
additional Provider classes and instances, which may be expensive on embedded/mobile platforms
complex syntax and build steps to handle mixing constructor parameters and factories, such as through AutoFactory
Value and model objects, which are unlikely to have multiple or risky implementations, are often squarely in the newable camp; interconnected and interdependent services are often far into the injectable camp. For lightweight services and utils, you'll need to identify the benefits provided above and draw the line based on the benefits you need.
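As a rough illustration of that split (all names here are invented for the example, not tied to any particular framework):

// "Newable": a small value object with no interesting collaborators.
// Just construct it where you need it; injecting it would only add noise.
class Money {
    private final long cents;
    Money(long cents) { this.cents = cents; }
    long cents() { return cents; }
}

// "Injectable": a service that touches the outside world and may be replaced
// by a fake in tests, so it is asked for in the constructor instead of created with 'new'.
interface PaymentGateway {
    void charge(Money amount);
}

class CheckoutService {
    private final PaymentGateway gateway;

    CheckoutService(PaymentGateway gateway) { // injected (by hand or by Dagger/Guice/etc.)
        this.gateway = gateway;
    }

    void checkout() {
        gateway.charge(new Money(499)); // the newable is created inline
    }
}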

Can you give me an example of dependency injection without frameworks?

I want to reduce the coupling between two components, so I thought of dependency injection, but for a long time I have just used Spring to implement it. Now, however, I am working on a project for which this framework is not suitable (it is too heavy).
So can you give me an example of how to implement dependency injection myself?
Dependency Injection is a pattern that can easily be used without framework support. Some even prefer it without framework support; in any case, whether or not a framework is actually beneficial depends on the way you use such a framework and the type and size of application you are building or maintaining.
Dependency injection is simply about injecting the dependencies into a component from the outside. The most common and advised way to do so is through constructor injection. This means that a class should specify all its dependencies as constructor arguments.
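A minimal sketch of that idea, with invented names and no framework involved:

// The dependency is expressed as an interface...
interface MessageSender {
    void send(String to, String text);
}

// ...and the class states in its constructor exactly what it needs.
class WelcomeService {
    private final MessageSender sender;

    WelcomeService(MessageSender sender) {
        this.sender = sender; // no 'new SomeConcreteSender()' anywhere in this class
    }

    void welcome(String user) {
        sender.send(user, "Welcome!");
    }
}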
You should always design your code as if there is no DI framework at all; your application code should be oblivious to the existence of such a framework. This means that you should never decorate your code with framework-specific annotations. They pollute your code and cause vendor lock-in. If the DI library you use requires its own annotations, switch to a different library.
The use of dependency injection will 'bubble up' through the application. A class that applies the dependency injection pattern moves the responsibility for creating its dependencies up the call stack: the consumer of that class becomes responsible for creating them. But since that consumer should apply dependency injection as well, it pushes the responsibility of creating the dependencies up yet again. When all classes apply the dependency injection pattern, the complete object graph(s) need to be created in a single place in the application. This is actually a good thing. This place is called the composition root.
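Continuing the sketch above, the composition root is simply the place (often main) where the whole graph is wired up by hand; SmtpMessageSender is an invented implementation of the MessageSender interface from the previous snippet:

// Hypothetical concrete implementation of the MessageSender from the previous sketch.
class SmtpMessageSender implements MessageSender {
    @Override
    public void send(String to, String text) {
        System.out.println("SMTP -> " + to + ": " + text);
    }
}

public class Main {
    public static void main(String[] args) {
        // Composition root: the only place that knows about concrete classes.
        MessageSender sender = new SmtpMessageSender();
        WelcomeService welcomeService = new WelcomeService(sender);
        welcomeService.welcome("alice@example.com");
    }
}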
Again, you don't need to use a DI library (a.k.a. IoC container), and your application code should definitely not depend on it. You should apply the Dependency Injection pattern (and the SOLID principles) to make your application maintainable. A DI library can be used to make your composition root maintainable, but it should ONLY be used IF it makes the composition root more maintainable. Not using a DI library gives you complete compile-time support over the creation of your object graphs. Using a DI library will make you lose this compile-time support, so the advantages of its use should outweigh the disadvantage of losing compile-time support. Furthermore, you need to make sure that you can verify the building of your object graphs during application start-up or at least in a test suite. If your DI container makes this hard to impossible, switching your library or building your object graphs by hand might be a better option.
Why do you want to reinvent the wheel? Java has an excellent CDI framework. It's lightweight and easy to use. For more info, see http://docs.oracle.com/javaee/6/tutorial/doc/giwhl.html

How does Spring (DI) help in development by wiring the components?

I have been learning Spring for quite some time (just learning, without actual hands-on experience on a real project). In my understanding, Spring provides a DI framework, which allows a centralized way of connecting/wiring all the classes in one place. The classes themselves do not compose/instantiate other components.
I can understand that DI allows easier unit testing for each component, as they depend on interfaces.
My question is: why does wiring all the classes in a centralized way (externally) help in the development process (besides testing), compared to the traditional way (each class instantiating another class)?
This link on DI explains it pretty well:
http://en.wikipedia.org/wiki/Dependency_injection#Motivation
The primary purpose of the dependency injection pattern is to allow selection among multiple implementations of a given dependency interface at runtime, or via configuration files, instead of at compile time. The pattern is particularly useful for providing "mock" test implementations of complex components when testing; but is often used for locating plugin components, or locating and initializing software services.
Unit testing of components in large software systems is difficult, because components under test often require the presence of a substantial amount of infrastructure and set up in order to operate at all. Dependency injection simplifies the process of bringing up a working instance of an isolated component for testing. Because components declare their dependencies, a test can automatically bring up only those dependent components required to perform testing.
It improves the quality of your code by reducing coupling between classes.
If a class instantiates an instance of another class, then there is a direct dependency between the two classes (= tight coupling). For example, if Class A has a has-a relationship with Interface B and Class A handles the instantiation of B itself, then Class A must specify a concrete implementation to instantiate, and the two classes become tightly coupled.
Let's say we have the following interface:
interface B{}
and then the following class:
class A {
    private B b = new BImpl();
    ...
}
In the above example (without DI), Class A has an explicit dependency on BImpl, which means if you ever want to use a different implementation of B then you also have to change Class A.
DI (and loose coupling in general) aims to remove these kinds of dependencies so that you have a code base where changes to one part of the code do not "ripple" through the entire application, requiring lots of changes. The above example is pretty trivial, but if you have a medium to large codebase with tight coupling, this problem can get pretty bad.
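With constructor injection the same example loses its compile-time dependency on BImpl; a sketch, reusing the B and BImpl from the snippet above:

class A {
    private final B b;

    A(B b) {            // whoever constructs A chooses the implementation
        this.b = b;
    }
}

// At the point where the application is wired together:
// A a = new A(new BImpl());    // or new A(new SomeOtherBImpl())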
why does wiring all the classes in a centralized way
Centralized configuration is an implementation detail rather than part of DI. Guice for example can spread the configuration about a bit (I've not used Spring in anger so I can not comment on it).
why does wiring all the classes ... externally
Because this allows you to change the implementation. DI is not the only way, but it is the most popular; Factories and Service Locators are the main alternatives. Without some way of swapping out the implementation, testing becomes nearly impossible.
development process (besides testing)
Testing is a very important part. It alone is a good reason to separate creation and use.
Unlike the other two methods above and direct initialization, DI also makes the dependencies visible (especially constructor injection), which can help other users of the class. Making the dependencies so visible can also warn you when your class is doing too much (as it will require a lot of dependencies).
The DI concept is independent of Spring. Dependency injection is possible even without using Spring, i.e. manual injection of dependencies is also possible.
Please refer to the Java example given on the wiki: http://en.wikipedia.org/wiki/Dependency_injection#Manually_injected_dependency
DI's main purpose is loose coupling.
Spring provides IoC (Inversion of Control). When we use Spring to inject dependencies via the Spring IoC container, we get features like:
1) Loose coupling, which reduces the time needed to add new features. Coding to interfaces provides this: add a new service that complies with the interface and it can be swapped in via the bean configuration.
2) No code changes or recompilation are required when changing a dependency.
3) Easier and faster testing, so you can cover more cases in the same time frame, which leads to a better product.
4) Spring provides lots of different templates to make developers' lives easier. These templates all use the DI/IoC concept, which leads to a faster development cycle. Such templates are available for batch processing, JMS, JMX, JDBC operations and many more.

Java Dependency injection: XML or annotations

Annotations are becoming popular. Spring 3 supports them. CDI depends on them heavily (I cannot use CDI without annotations, right?).
My question is why?
I heard several issues:
"It helps get rid of XML". But what is bad about xml? Dependencies are declarative by nature, and XML is very good for declarations (and very bad for imperative programming).
With a good IDE (like IDEA) it is very easy to edit and validate XML, isn't it?
"In many cases there is only one implementation for each interface". That is not true!
Almost all interfaces in my system have mock implementations for tests.
Any other issues?
And now my pluses for XML:
You can inject anything anywhere (not only code that has annotations)
What should I do if I have several implementations of one interface? Use qualifiers? But it forces my class to know what kind of injection it needs.
It is not good for design.
XML based DI makes my code clear: each class has no idea about injection, so I can configure it and unit-test it in any way.
What do you think?
I can only speak from experience with Guice, but here's my take. The short of it is that annotation-based configuration greatly reduces the amount you have to write to wire an application together and makes it easier to change what depends on what... often without even having to touch the configuration files themselves. It does this by making the most common cases absolutely trivial at the expense of making certain relatively rare cases slightly more difficult to handle.
I think it's a problem to be too dogmatic about having classes have "no idea about injection". There should be no reference to the injection container in the code of a class. I absolutely agree with that. However, we must be clear on one point: annotations are not code. By themselves, they change nothing about how a class behaves... you can still create an instance of a class with annotations as if they were not there at all. So you can stop using a DI container completely and leave the annotations there and there will be no problem whatsoever.
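For example (a sketch with invented names), a class annotated for injection can still be constructed directly; the annotation simply sits there unused:

import javax.inject.Inject;

interface Repository { }
class InMemoryRepository implements Repository { }

class ReportService {
    private final Repository repository;

    @Inject // metadata only; nothing happens unless a DI container chooses to read it
    ReportService(Repository repository) {
        this.repository = repository;
    }
}

class NoContainerNeeded {
    public static void main(String[] args) {
        // Plain construction works exactly as if the annotation were not there.
        ReportService service = new ReportService(new InMemoryRepository());
    }
}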
When you choose not to provide metadata hints about injection within a class (i.e. annotations), you are throwing away a valuable source of information on what dependencies that class requires. You are forced to either repeat that information elsewhere (in XML, say) or to rely on unreliable magic like autowiring which can lead to unexpected issues.
To address some of your specific questions:
It helps get rid of XML
Many things are bad about XML configuration.
It's terribly verbose.
It isn't type-safe without special tools.
It mandates the use of string identifiers. Again, not safe without special tool support.
It doesn't take any advantage of the features of the language, requiring all kinds of ugly constructs to do what could be done with a simple method in code.
That said, I know a lot of people have been using XML for long enough that they are convinced that it is just fine and I don't really expect to change their minds.
In many cases there is only one implementation for each interface
There is often only one implementation of each interface for a single configuration of an application (e.g. production). The point is that when starting up your application, you typically only need to bind an interface to a single implementation. It may then be used in many other components. With XML configuration, you have to tell every component that uses that interface to use this one particular binding of that interface (or "bean" if you like). With annotation-based configuration, you just declare the binding once and everything else is taken care of automatically. This is very significant, and dramatically reduces the amount of configuration you have to write. It also means that when you add a new dependency to a component, you often don't have to change anything about your configuration at all!
That you have mock implementations of some interface is irrelevant. In unit tests you typically just create the mock and pass it in yourself... it's unrelated to configuration. If you set up a full system for integration tests with certain interfaces using mocks instead... that doesn't change anything. For the integration test run of the system, you're still only using 1 implementation and you only have to configure that once.
XML: You can inject anything anywhere
You can do this easily in Guice and I imagine you can in CDI too. So it's not like you're absolutely prevented from doing this by using an annotation-based configuration system. That said, I'd venture to say that the majority of injected classes in the majority of applications are classes that you can add an @Inject to yourself if it isn't already there. The existence of a lightweight standard Java library for annotations (JSR-330) makes it even easier for more libraries and frameworks to provide components with an @Inject annotated constructor in the future, too.
More than one implementation of an interface
Qualifiers are one solution to this, and in most cases should be just fine. However, in some cases you do want to do something where using a qualifier on a parameter in a particular injected class would not work... often because you want to have multiple instances of that class, each using a different interface implementation or instance. Guice solves this with something called PrivateModules. I don't know what CDI offers in this regard. But again, this is a case that is in the minority and it's not worth making the rest of your configuration suffer for it as long as you can handle it.
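For what a qualifier can look like in Guice, here is a rough sketch with invented names, using named bindings (Guice also honours the standard javax.inject annotations):

import com.google.inject.AbstractModule;
import com.google.inject.name.Names;
import javax.inject.Inject;
import javax.inject.Named;

interface Formatter {
    String format(String text);
}

class UpperFormatter implements Formatter {
    @Override
    public String format(String text) { return text.toUpperCase(); }
}

class LowerFormatter implements Formatter {
    @Override
    public String format(String text) { return text.toLowerCase(); }
}

class FormattingModule extends AbstractModule {
    @Override
    protected void configure() {
        // Two implementations of the same interface, distinguished by a qualifier.
        bind(Formatter.class).annotatedWith(Names.named("upper")).to(UpperFormatter.class);
        bind(Formatter.class).annotatedWith(Names.named("lower")).to(LowerFormatter.class);
    }
}

class Printer {
    private final Formatter formatter;

    @Inject
    Printer(@Named("upper") Formatter formatter) { // the injection point picks one by name
        this.formatter = formatter;
    }
}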
I have the following principle: configuration-related beans are defined with XML. Everything else - with annotations.
Why? Because you don't want to change configuration in classes. On the other hand, it's much simpler to write @Service and @Inject in the class that you want to enable.
This does not interfere with testing in any way - annotations are only metadata that is parsed by the container. If you like, you can set different dependencies.
As for CDI - it has an extension for XML configuration, but you are right it uses mainly annotations. That's something I don't particularly like in it though.
In my opinion, this is more a matter of taste.
1) In our project (using Spring 3), we want the XML configuration files to be just that: configuration. If something doesn't need to be configured (from an end-user perspective) and no other issue forces it to be done in XML, don't put the bean definitions/wirings into the XML configuration; use @Autowired and such.
2) With Spring, you can use @Qualifier to match a certain implementation of the interface if multiple exist (see the sketch below). Yes, this means you have to name the actual implementations, but I don't mind.
In our case, using XML to handle all the DI would bloat the XML configuration files a lot, although it could be done in a separate XML file (or files), so that's not that valid a point ;). As I said, it's a matter of taste and I just think it's easier and cleaner to handle the injections via annotations (you can see what services/repositories/whatever something uses just by looking at the class, instead of going through the XML file looking for the bean declaration).
Edit: Here's an opinion about @Autowired vs. XML that I completely agree with: Spring @Autowired usage
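As a small sketch of point 2) above (invented names, assuming a component-scanned Spring setup):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.stereotype.Service;

interface Notifier {
    void send(String message);
}

@Service("emailNotifier")
class EmailNotifier implements Notifier {
    @Override
    public void send(String message) { /* send an e-mail */ }
}

@Service("smsNotifier")
class SmsNotifier implements Notifier {
    @Override
    public void send(String message) { /* send an SMS */ }
}

@Service
class AlertService {
    private final Notifier notifier;

    @Autowired
    AlertService(@Qualifier("smsNotifier") Notifier notifier) { // names the implementation to use
        this.notifier = notifier;
    }
}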
I like to keep my code clear, as you pointed out. XML fits better, at least for me, with the IoC principle.
The fundamental principle of Dependency Injection for configuration is that application objects should not be responsible for looking up the resources or collaborators they depend on. Instead, an IoC container should configure the objects, externalizing resource lookup from application code into the container. (J2EE Development without EJB - Rod Johnson - page 131)
Again, it's just my point of view, no fundamentalism in there :)
EDIT: Some useful discussions out there:
http://forum.springsource.org/showthread.php?t=95126
http://www.theserverside.com/discussions/thread.tss?thread_id=61217
"But what is bad about xml?" It's yet another file to manage and yet another place to have to go look for a bug. If your annotations are right next to your code it's much easier to mange and debug.
Like all things, dependency injection should be used in moderation. Moreover, all trappings of the injections should be segregated from the application code and relegated to the code associated with main.
In general, applications should have a boundary that separates the abstract application code from the concrete implementation details. All the source code dependencies that cross that boundary should point towards the application. I call the concrete side of that boundary the main partition, because that's where 'main' (or its equivalent) should live.
The main partition consists of factory implementations, strategy implementations, etc. It is on this side of the boundary that the dependency injection framework should do its work. Those injected dependencies can then be passed across the boundary into the application by normal means (e.g. as arguments).
The number of injected dependencies should be relatively small. A dozen or less. In which case, the decision between XML or annotations is moot.
Also don't forget Spring JavaConfig.
In my case, the developers writing the application are different from the ones configuring it (different departments, different technologies/languages), and the latter group doesn't even have access to the source code (which is the case in many enterprise setups). That makes Guice unusable, since I would have to expose source code rather than consuming the XMLs configured by the developers implementing the app.
Overall, I think it is important to recognize that providing the components and assembling/configuring an application are two different exercises, and to provide this separation of concerns where needed.
I just have a couple of things to add to what's already here.
To me, DI configuration is code. I would like to treat it as such, but the very nature of XML prevents this without extra tooling.
Spring JavaConfig is a major step forward in this regard, but it still has complications. Component scanning, auto-magic selection of interface implementations, and semantics around CGLIB interception of @Configuration annotated classes make it more complex than it needs to be. But it's still a step forward from XML.
The benefit of separating IoC metadata from application objects is overstated, especially with Spring. Perhaps if you confined yourself to the Spring IoC container only, this would be true. But Spring offers a wide application stack built on the IoC container (Security, Web MVC, etc). As soon as you leverage any of that, you're tied to the container anyway.
XML's only benefit is a declarative style that is clearly separated from the application code itself and stays independent of DI concerns. The downsides are verbosity, poor refactoring robustness, and failures that generally surface only at runtime. There is only generic (XML) tool support, which offers little benefit compared to the IDE support available for, e.g., Java. On top of this, XML comes with a performance overhead, so it is usually slower than code-based solutions.
Annotations are often said to be more intuitive and robust when refactoring application code. They also benefit from better IDE guidance, such as Guice provides. But they mix application code with DI concerns, the application becomes dependent on a framework, and clear separation is almost impossible. Annotations are also limited when describing different injection behaviour at the same place (constructor, field) depending on other circumstances (e.g. the robot legs problem). Moreover, they don't allow you to treat external classes (library code) like your own source. That said, they are generally considered to run faster than XML.
Both techniques have serious downsides. Therefore I recommend using Silk DI. It is declaratively defined in code (great IDE support) but 100% separated from your application code (no framework dependency). It allows you to treat all code the same, no matter whether it comes from your own source or an external library. Problems like the robot legs problem are easy to solve with ordinary bindings. Furthermore, it has good support for adapting it to your needs.

Dependency Injection: What's wrong with good old-fashioned refactoring?

DI creates an extra layer of abstraction so that if your implementation class ever changes you can simply plug in a different class with the same interface.
But why not simply refactor when you want to use a different implementation class? Other languages like Python and Ruby work fine this way. Why not Java?
That is an incorrect characterization of dependency injection. It is not that you have one implementation of a particular interface that changes over time; rather, it is possible that there will be many different implementations of an interface all at once, and which implementation will be used can vary over multiple different runs of the program. For example, in your actual program, you might want to use one implementation, while during unit testing, you might want to "mock out" that implementation with an alternative version that is easier to test. In this case, refactoring is not a solution, because you need to be able to test all the time without interrupting the rest of the development process.
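As a small illustration of the testing case (invented names, using a hand-written fake rather than a mocking library, and JUnit 4 assertions):

import org.junit.Assert;
import org.junit.Test;

public class PriceServiceTest {

    // The dependency the production code would normally get injected.
    interface ExchangeRateSource {
        double rateFor(String currency);
    }

    // The class under test only knows the interface, never a concrete implementation.
    static class PriceService {
        private final ExchangeRateSource rates;

        PriceService(ExchangeRateSource rates) {
            this.rates = rates;
        }

        double inCurrency(double usd, String currency) {
            return usd * rates.rateFor(currency);
        }
    }

    @Test
    public void convertsUsingTheInjectedRate() {
        // A fixed, deterministic fake replaces the real (slow, networked) implementation.
        ExchangeRateSource fake = currency -> 2.0;
        PriceService service = new PriceService(fake);
        Assert.assertEquals(20.0, service.inCurrency(10.0, "EUR"), 1e-9);
    }
}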
It should also be noted that dependency injection is usually used as a solution to the Singleton anti-pattern; it allows one to have a singleton object that can be easily mocked out during testing. And, if later it turns out that the singleton assumption really is incorrect, that singleton can be replaced with various implementations.
Some resources which you may find helpful in better understanding the topic:
Java on Guice: Dependency Injection the Java Way
Big Modular Java with Google Guice
Singletons are Pathological Liars
Why Singletons are Evil
Root Cause of Singletons
Dependency Injection Myth: Reference Passing
So you are saying Python and Ruby can't have dependency injection? Or Java can't work fine without DI?
Besides, you've missed one of the most characteristic features of DI: you can have dynamic DI, not just at compile time but at run time. In software engineering there is always the question of whether there is too much abstraction or too little, and it really comes down to how you design a solution to your problem.
Not quite. The issue here is that when you write a code snippet like:
Runnable r = new MyFooRunnable();
you essentially decide that the Runnable you will need is a MyFooRunnable (and not a MyBarRunnable or a third one). Occasionally you will want to postpone that decision from compile time to deployment time, so that the deployer can decide how the individual modules your application consists of are to be glued together.
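One way to postpone that choice, sketched without any framework (TaskRunner is an invented name; MyFooRunnable and MyBarRunnable are the hypothetical classes from the snippet above):

class TaskRunner {
    private final Runnable task;

    TaskRunner(Runnable task) { // the concrete Runnable is decided by whoever builds this object
        this.task = task;
    }

    void runOnce() {
        task.run();
    }
}

// At the point where the modules are glued together (or inside a DI module/config):
// new TaskRunner(new MyFooRunnable());   // could just as well be new MyBarRunnable()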
Traditionally this has been done with factories, but that just moves the actual decision around in the code, and you still have to know all the possibilities when coding the factory, or let it read instructions from a configuration file (which tends to be fragile under refactoring).
Dependency Injection is a formalization of configured factories in a way that means the code hardly needs to know anything about how things are wired. This is also why annotations have been found so useful for pointing out where the dependency injection should happen. If you run the code in a non-DI setting (like a JUnit test), nothing happens (which would have been hard to achieve with factories littered all over).
So, Dependency Injection used liberally allows you to write modules that "snap" well together without knowing of each other at compile time. This is very similar to the jar-file concept, but it has taken longer to mature.
