Background
I'm writing an application in Java and I'm using Guice for DI.
(It's Android and RoboGuice to be exact, although it probably makes no difference in this context).
Class design
It's an app for keeping score in a popular card game - Hearts.
The game consists of various consecutive deals whose rules differ. E.g. in one deal players are penalized for taking Hearts, in another for taking Jokers and Kings.
My object design involves several classes:
Deal for identifying each deal
ScoreCalculator for calculating penalties (e.g. each Heart may be worth -2 points). Penalties may vary from deal to deal.
ScoreValidator for validating the scores (for instance, it's not possible for each player to take 4 Hearts, because there aren't that many in the deck)
If each Deal class had one corresponding ScoreCalculator and ScoreValidator, dependency injection would be trivial.
But this is not the case. Calculating score for some of the deals can be very specific (distinguishing them from the others), while for the rest it's based on nothing more but multiplying the number of taken cards by the penalty parameter (-2 or -4 etc.)
So, LotteryDeal is associated with a LotteryCalculator, but NoQueens and NoGents both require a class I named SimpleCalculator.
It takes a single integer parameter, which is a multiplier (penalty score).
This is my current solution, in which I implemented Deal as an enum (but I'm not happy with it and I want to drop it):
public enum Deal
{
    TakeNothing(-2, PossibleDealResults.fullRange()),
    NoHearts(-2, PossibleDealResults.fullRange()),
    NoQueens(-2, PossibleDealResults.rangeUpTo(4)),
    NoGents(-2, PossibleDealResults.rangeUpTo(8)),
    NoKingOfHearts(-18, PossibleDealResults.rangeUpTo(1)),
    NoLastOne(
        new NoLastOneCalculator(),
        new NoLastOneValidator(new NoLastOneCalculator())),
    Trump1(2, PossibleDealResults.fullRange()),
    Trump2(2, PossibleDealResults.fullRange()),
    Trump3(2, PossibleDealResults.fullRange()),
    Trump4(2, PossibleDealResults.fullRange()),
    Lottery(new LotteryCalculator(), PossibleDealResults.rangeUnique(1, 4));

    protected ScoreCalculator calculator;
    protected PlainScoreValidator validator;

    Deal(int multiplier, PossibleDealResults possibleResults)
    {
        this(new SimpleCalculator(multiplier), possibleResults);
    }

    Deal(ScoreCalculator calculator, PossibleDealResults possibleResults)
    {
        this(calculator, new PlainScoreValidator(possibleResults, calculator));
    }

    Deal(ScoreCalculator calculator, PlainScoreValidator validator)
    {
        Preconditions.checkNotNull(calculator, "calculator");
        Preconditions.checkNotNull(validator, "validator");
        this.calculator = calculator;
        this.validator = validator;
    }
}
I've left out some complexities that are beyond the scope of this question (such as the PossibleDealResults class, which I haven't described), as they don't seem very relevant.
The main point is that, as you can see, all dependencies are hard-coded, which is not very flexible; for example, there are many different variations of the game, each with its own scoring rules.
Let's say I'd like to use dependency injection to allow for more flexibility and perhaps even switching between different rules sets more easily - by switching to a different Module in order to re-resolve dependencies if there is a need.
Where's the problem?
I think I have some grasp on how to do it in general.
My question concerns injecting the SimpleCalculator object.
I'd need it with a parameter of -2 for TakeNothingDeal, but -18 for the NoKingOfHeartsDeal.
How to achieve it with Guice?
I'd like to keep the class parameterized and avoid creating a MinusTwoSimpleCalculator and a MinusEighteen... one.
I'm not really sure what the proper way to achieve that is, without abusing the framework (or more general DI design guidelines).
What have you tried?
Not much in terms of actual code. I'm a bit stuck.
I know there's bindConstant, but I can't see how I could make use of it in this case. It requires annotations, but if I use Deal-specific annotations - I mean, create a Deal.multiplier field and then annotate it with something to the effect of "inject -2 here, please" - what did I really do? I just went back to hard-coding dependencies manually, and I'm not really using Guice anymore.
I read about AssistedInject, too, but I can't figure out how it could be of help here, either.
I don't want to overengineer this nor to work against the framework. What's the correct approach? Happy to clarify if the problem is somehow unclear.
You have a lot of options, actually. Here are three:
Factory object
Frankly, I don't think this design needs Guice for this particular problem. Instead, create a simple interface to populate with relevant switch statements:
interface DealFactory {
    // Two distinct method names: Java overloads cannot differ by return type alone.
    ScoreCalculator getCalculatorFor(Deal deal);
    ScoreValidator getValidatorFor(Deal deal);
}
You might be thinking, "But that works on objects telescopically! Those methods would be better left on Deal." You'd be right, mostly, but one key factor of OOP (and Dependency Injection) is to encapsulate what varies. Having a single set of rules declared statically in Deal is the opposite of the flexibility you want. (The enum itself is fine; there are a finite number of deal types regardless of the rules in play.)
Here, you could easily bind the DealFactory to some lightweight object that provides exactly the right ScoreCalculator and ScoreValidator for any given Deal, and write as many DealFactory objects as you'd like for each set of rules. At that point you can declare the currently-in-play DealFactory in a module and inject it wherever you want.
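For instance, declaring the currently-in-play rules set could look like this (the module and implementation names here are made up for illustration):

```java
import com.google.inject.AbstractModule;
import com.google.inject.Singleton;

// Hypothetical Guice module selecting one rules set; install a different
// module (or change the binding) to switch to a different set of rules.
public class StandardRulesModule extends AbstractModule {
    @Override
    protected void configure() {
        // StandardRulesDealFactory is an assumed DealFactory implementation.
        bind(DealFactory.class)
            .to(StandardRulesDealFactory.class)
            .in(Singleton.class);
    }
}
```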
Also bear in mind that a factory implementation could easily work with Guice and injected bindings:
class DealFactoryImpl implements DealFactory {
    @Inject Provider<DefaultScoreCalculator> defaultScoreCalculatorProvider;
    @Inject MultiplierScoreCalculator.Factory multiplierScoreCalculatorFactory;

    @Override public ScoreCalculator getCalculatorFor(Deal deal) {
        if (Deal.TakeNothing.equals(deal)) {
            return defaultScoreCalculatorProvider.get();
        } else {
            return multiplierScoreCalculatorFactory.create(-2); // assisted inject
        }
    } /* ... */
}
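The MultiplierScoreCalculator.Factory injected above is the assisted-inject piece. A sketch of how it could look, assuming ScoreCalculator exposes a single calculate(int) method (the method name and class names are my assumptions):

```java
import com.google.inject.Inject;
import com.google.inject.assistedinject.Assisted;

class MultiplierScoreCalculator implements ScoreCalculator {
    // Guice generates an implementation of this factory when you
    // install(new FactoryModuleBuilder().build(MultiplierScoreCalculator.Factory.class))
    interface Factory {
        MultiplierScoreCalculator create(int multiplier);
    }

    private final int multiplier;

    @Inject
    MultiplierScoreCalculator(@Assisted int multiplier) {
        this.multiplier = multiplier; // only this value comes from the caller
    }

    @Override
    public int calculate(int cardsTaken) {
        return cardsTaken * multiplier;
    }
}
```

Any other constructor parameters (without @Assisted) would be resolved by Guice as usual, which is exactly what keeps the class parameterized without a MinusTwo/MinusEighteen subclass per multiplier.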
Private modules
A similar problem to yours is sometimes known as the "robot legs" problem, as if you're writing a common Leg object that needs to refer to a LeftFoot in some trees and a RightFoot in others. Setting aside the (better) above solution for a second, you can set up private modules, which allow you to bind things privately and expose only a few public dependencies:
// in your module
install(new PrivateModule() {
    @Override public void configure() {
        SimpleCalculator calculator = new SimpleCalculator(-2);
        bind(ScoreCalculator.class).toInstance(calculator);
        bind(ScoreValidator.class).toInstance(
            new PlainScoreValidator(calculator, PossibleDealResults.fullRange()));
        expose(Deal.class).annotatedWith(TakeNothing.class); // custom annotation
    }
});
See what I mean? Certainly possible, but a lot of work for Hearts rules. A custom class is a better match for this particular problem. Your need to specify multipliers is shallow; if Deal needed P, which needed Q, which needed R, which needed S, which needed the same ScoreCalculator, then a PrivateModule solution would look much more appealing than passing your calculator around everywhere.
@Provides methods
If you still want to solve this in Guice, but you don't want to write a lot of private modules that expose one binding each, you can take matters into your own hands with @Provides methods.
// in your module, adjacent to configure()
@Provides @TakeNothing Deal anyMethodNameWorks(/* dependencies here */) {
    SimpleCalculator calculator = new SimpleCalculator(-2);
    ScoreValidator validator = new PlainScoreValidator(calculator,
        PossibleDealResults.fullRange());
    return new Deal(calculator, validator);
}
Again, this will create a binding for every type of Deal, which is probably a bad idea, but it's a bit more lightweight than the above. To some degree you're doing Guice's job for it (creating objects, I mean), but should you need any dependencies Guice can provide, you can inject them as method parameters on the @Provides method itself.
Related
I'm trying to implement the Strategy Pattern for some custom validation that doesn't only involve validating input with basic operations; I also need to call some other services to validate the data.
At the beginning I used the example mentioned here, which uses enums to hold the different strategies, but of course it was not possible to inject my services into the enum, so now I'm looking at this approach, which leverages Java 8 for more clarity.
My idea is to have an interface with one method, validate(), taking a generic input for the different objects I could send it, and then a class implementing that interface that would run different validations based on the object type and dispatch requests to different services. On the other hand, I'm kind of losing the enum advantage of having distinct strategies which I could select, for instance, based on specific user settings.
Any idea how to have both of these advantages?
I would say that enums and the strategy pattern don't really mix.
The ideal use case for an enum is something that represents an exhaustive set of choices. DayOfWeek representing Monday to Sunday, for example. The problem with using one here, in addition to not being able to autowire any other beans, is that your enum will keep growing as the number of validations increases.
The strategy pattern allows you to use a potentially infinite number of possible strategies, provided it obeys the contract. Take Collections.sort(List<T> list, Comparator<? super T> c), for example. There could be no exhaustive list of possible comparators, because it could never fulfill everyone's use-cases.
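The Comparator example can be made concrete: the sorting code below never changes, while callers supply any of infinitely many strategies.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

class SortDemo {
    // The strategy (Comparator) is supplied per call; this method is closed
    // to modification no matter how many new strategies are invented.
    static List<String> sortedBy(List<String> input, Comparator<String> strategy) {
        List<String> copy = new ArrayList<>(input);
        copy.sort(strategy);
        return copy;
    }
}
```

For example, `Comparator.naturalOrder()` and `Comparator.comparing(String::length)` are two interchangeable strategies for the same method.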
It makes more sense to define each of your possible strategies as a component:
@Component
class NonNullValidationStrategy implements ValidationStrategy {
    private final MyService service;

    // constructor omitted

    @Override
    public boolean isValid(MyClass foo) {
        return foo != null;
    }
}
How you would obtain an instance of the right strategy when you need it will depend on details you haven't provided. Maybe autowiring with a qualifier is a possibility.
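As a sketch of the selection idea without committing to Spring specifics, a plain registry keyed by a settings value preserves the "selectable strategy" property the enum gave you (all names here are made up):

```java
import java.util.HashMap;
import java.util.Map;

interface Validation {
    boolean isValid(Object candidate);
}

// Maps a user-setting key to a strategy. In Spring, a map like this could
// be populated automatically by injecting all strategy beans.
class ValidationRegistry {
    private final Map<String, Validation> strategies = new HashMap<>();

    void register(String settingKey, Validation strategy) {
        strategies.put(settingKey, strategy);
    }

    Validation forSetting(String settingKey) {
        Validation s = strategies.get(settingKey);
        if (s == null) {
            throw new IllegalArgumentException("No strategy for: " + settingKey);
        }
        return s;
    }
}
```

This keeps an enum-like lookup by user setting while each strategy remains an ordinary class that can have services injected into it.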
Spring already has its own way of defining validations, via interfaces and annotations. I suggest you pursue that before rolling your own solution.
I'd like to suggest using javax.validation groups; see more about it here, especially @Validated(OnCreate.class).
If you want to apply the strategy pattern at the controller level and deeper, then see this article and especially my comment, because the solution described there is not particularly clean.
If I follow dependency injection principle for a class design, I should make sure my class does not try to instantiate its dependencies within my class, but rather ask for the object through the constructor.
This way I am in control of the dependencies I provide for the class while unit-testing it. This I understand.
But what I am not sure is does this mean that a good class design which follows dependency injection principle means that its fields should never be initialized inline? Should we totally avoid inline initialization to produce testable code?
EDIT
Which is better 1 or 2?
1
public class Car {
    private Tire tire = new Tire(); // inline initialization
}
2
public class Car {
    private Tire tire;

    public Car(Tire tire) {
        this.tire = tire;
    }
}
No, it surely doesn't mean inline initialization.
I'm not a Java user but this term is relevant to many programming languages.
Basically, instead of having your objects creating a dependency or asking a factory object to make one for them, you pass the needed dependencies into the object externally, and you make it somebody else's problem.
public SomeClass() {
    myObject = Factory.getObject(); // the class fetches its own dependency
}
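The injected counterpart of that constructor takes the dependency from outside instead (MyObject here is a stand-in for whatever the factory was building):

```java
// Constructor injection: the caller, not the class, decides which
// MyObject instance is used - which is what makes testing easy.
class SomeClass {
    private final MyObject myObject;

    SomeClass(MyObject myObject) {
        this.myObject = myObject;
    }

    MyObject dependency() {
        return myObject;
    }
}

// Stand-in dependency type for this sketch.
class MyObject { }
```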
"Dependency Injection" is a 25-dollar term for a 5-cent concept. [...]
Dependency injection means giving an object its instance variables.
[...].
Dependency injection is basically providing the objects that an object needs (its dependencies) instead of having it construct them itself. It's a very useful technique for testing since it allows dependencies to be mocked or stubbed out.
Any application is composed of many objects that collaborate with each other to perform some useful stuff. Traditionally each object is responsible for obtaining its own references to the dependent objects (dependencies) it collaborates with. This leads to highly coupled classes and hard-to-test code.
Much of this is referenced from here.
In my opinion, in general, if you think that a data field is a dependency, then let the dependency-management container manage it. What counts as a dependency and what doesn't is a tough question, and only you can decide.
For example: if your design of Car allows working with different types of Tires, then the tire is a dependency. Otherwise, inside the Car class you'll have to work with some kind of TireFactory, which is not really reasonable.
If you work with a DI container, testing the class will be a breeze: you'll be able to provide a stub/mock to the class, which is an obvious benefit. So, in your example, if you go with the first solution, how will you test that your Car works with different types of Tires?
I have a situation where I have a lot of model classes (~1000) which implement any number of five interfaces. So I have classes which implement one, and others which implement four or five.
This means I can have any combination of those five interfaces. In the classical model, I would have to implement 2^5 - 5 - 1 = 26 "meta interfaces" which "join" the interfaces into a bundle (one for each combination of two or more). Often this is not a problem because IB usually extends IA, etc., but in my case the five interfaces are orthogonal/independent.
In my framework code, I have methods which need instances that have any number of these interfaces implemented. So lets assume that we have the class X and the interfaces IA, IB, IC, ID and IE. X implements IA, ID and IE.
The situation gets worse because some of these interfaces have formal type parameters.
I now have three options:
I could define an interface IADE (or rather IPersistable_MasterSlaveCapable_XmlIdentifierProvider; underscores just for your reading pleasure)
I could define a generic type as <T extends IPersistable & IMasterSlaveCapable & IXmlIdentifierProvider> which would give me a handy way to mix & match interfaces as I need them.
I could use code like this: IA a = ...; ID d = (ID)a; IE e = (IE)a; and then use the local variable with the correct type to call methods, even though all three refer to the same instance. Or use a cast in every second method call.
The first solution means that I get a lot of empty interfaces with very unreadable names.
The second uses a kind of "ad-hoc" typing. And Oracle's javac sometimes stumbles over them while Eclipse gets it right.
The last solution uses casts. Nuff said.
Questions:
Is there a better solution for mixing any number of interfaces?
Are there any reasons to avoid the temporary types which solution #2 offers me (except for shortcomings in Oracle's javac)?
Note: I'm aware that writing code which doesn't compile with Oracle's javac is a risk. We know that we can handle this risk.
[Edit] There seems to be some confusion what I try to attempt here. My model instances can have one of these traits:
They can be "master slave capable" (think cloning)
They can have an XML identifier
They might support tree operations (parent/child)
They might support revisions
etc. (yes, the model is even more complex than that)
Now I have support code which operates on trees. An extensions of trees are trees with revisions. But I also have revisions without trees.
When I'm in the code that adds a child in the revision tree manager, I know that each instance must implement ITree and IRevisionable, but there is no common interface for both because these are completely independent concerns.
But in the implementation, I need to call methods on the nodes of the tree:
public void addChild( T parent, T child ) {
    T newRev = parent.createNewRevision();
    newRev.addChild( child );
    ... possibly more method calls to other interfaces ...
}
If createNewRevision is in the interface IRevisionable and addChild is in the interface ITree, what are my options to define T?
Note: Assume that I have several other interfaces which work in a similar way: There are many places where they are independent but some code needs to see a mix of them. IRevisionableTree is not a solution but another problem.
I could cast the type for each call but that seems clumsy. Creating all permutations of interfaces would be boring and there seems no reasonable pattern to compress the huge interface names. Generics offer a nice way out:
public
<T extends IRevisionable & ITree>
void addChild( T parent, T child ) { ... }
This doesn't always work with Oracle's javac but it seems compact and useful. Any other options/comments?
Loosely coupled capabilities might be interesting. An example here.
It is an entirely different approach; decoupling things instead of typing.
Basically, the interfaces are hidden, implemented as delegating fields.
IA ia = x.lookupCapability(IA.class);
if (ia != null) {
    ia.a();
}
It fits here: with many interfaces, the wish to decouple grows, and you can more easily combine cases of interdependent interfaces (if (ia != null && ib != null) ...).
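A minimal sketch of what lookupCapability could look like behind the scenes (the map-based registry is my assumption, not necessarily how the linked example implements it):

```java
import java.util.HashMap;
import java.util.Map;

// The model object holds its capabilities as delegating objects in a
// registry instead of implementing the interfaces itself.
class CapabilityHolder {
    private final Map<Class<?>, Object> capabilities = new HashMap<>();

    <T> void addCapability(Class<T> type, T implementation) {
        capabilities.put(type, implementation);
    }

    // Returns null when the capability is absent, matching the
    // "if (ia != null)" usage above.
    <T> T lookupCapability(Class<T> type) {
        return type.cast(capabilities.get(type));
    }
}
```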
If you have a method (semicode)
void doSomething(IA & ID & IE thing);
then my main concern is: couldn't doSomething be better tailored? Might it be better to split up the functionality? Or are the interfaces themselves badly tailored?
I have stumbled over similar things several times and each time it proved to be better to take big step backward and rethink the complete partitioning of the logic - not only due to the stuff you mentioned but also due to other concerns.
Since you formulated your question very abstractly (i.e. without a concrete example), I cannot tell you whether that's advisable in your case as well.
I would avoid all "artificial" interfaces/types that attempt to represent combinations. It's just bad design... what happens if you add 5 more interfaces? The number of combinations explodes.
It seems you want to know if some instance implements some interface(s). Reasonable options are:
use instanceof - there is no shame
use reflection to discover the interfaces via object.getClass().getInterfaces() - you may be able to write some general code to process stuff
use reflection to discover the methods via object.getClass().getMethods() and just invoke those that match a known list of methods of your interfaces (this approach means you don't have to care what it implements - sounds simple and therefore sounds like a good idea)
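The first two options can be sketched in a few lines:

```java
import java.util.Arrays;

class InterfaceProbe {
    // Option 1: a plain runtime type test (what instanceof does).
    static boolean supports(Object o, Class<?> iface) {
        return iface.isInstance(o);
    }

    // Option 2: discover the interfaces the object's class declares directly.
    static boolean declaresInterface(Object o, Class<?> iface) {
        return Arrays.asList(o.getClass().getInterfaces()).contains(iface);
    }
}
```

Note that getInterfaces() only returns interfaces declared directly on the class, not those inherited from superclasses, which is one reason the instanceof form is usually what you want.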
You've given us no context as to exactly why you want to know, so it's hard to say what the "best" approach is.
Edited
OK. Since your extra info was added, it's starting to make sense. The best approach here is to use a callback: instead of passing in a parent object, pass in an interface that accepts a "child".
It's a simplistic version of the visitor pattern. Your calling code knows what it is calling with and how it can handle a child, but the code that navigates around and/or decides to add a child doesn't have context of the caller.
Your code would look something like this (caveat: May not compile; I just typed it in):
public interface Parent<T> {
    void accept(T child);
}

// Central code - I assume the parent is passed in somewhere earlier
public void process(Parent<T> parent) {
    // some logic that decides to add a child
    addChild(parent, child);
}

public void addChild(Parent<T> parent, T child) {
    parent.accept(child);
}

// Calling code
final IRevisionable revisionable = ...;
someServer.process(new Parent<T>() {
    public void accept(T child) {
        T newRev = revisionable.createNewRevision();
        newRev.addChild(child);
    }
});
You may have to juggle things around, but I hope you understand what I'm trying to say.
Actually solution 1 is a good solution, but you should find a better naming.
What would you actually name a class that implements the IPersistable_MasterSlaveCapable_XmlIdentifierProvider interface? If you follow good naming conventions, it should have a meaningful name originating from a model entity. You can then give the interface the same name, prefixed with I.
I don't see having many interfaces as a disadvantage, because it lets you write mock implementations for testing purposes.
My situation is the opposite: I know that at a certain point in the code, foo must implement IA, ID and IE (otherwise, it couldn't get that far). Now I need to call methods in all three interfaces. What type should foo get?
Are you able to bypass the problem entirely by passing (for example) three objects? So instead of:
doSomethingWithFoo(WhatGoesHere foo);
you do:
doSomethingWithFoo(IA fooAsA, ID fooAsD, IE fooAsE);
Or, you could create a proxy that implements all interfaces, but allows you to disable certain interfaces (i.e. calling the 'wrong' interface causes an UnsupportedOperationException).
One final wild idea - it might be possible to create Dynamic Proxies for the appropriate interfaces, that delegate to your actual object.
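That dynamic-proxy idea can be sketched with java.lang.reflect.Proxy: the proxy claims whatever interfaces you name and forwards every call to the underlying object.

```java
import java.lang.reflect.Proxy;

class Mixin {
    // Returns an object that implements all the given interfaces by
    // delegating each method invocation to the target instance.
    static Object of(Object target, Class<?>... interfaces) {
        return Proxy.newProxyInstance(
                Mixin.class.getClassLoader(),
                interfaces,
                (proxy, method, args) -> method.invoke(target, args));
    }
}
```

The price of this trick is that a call through an interface the target doesn't actually implement fails at invocation time rather than compile time.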
I'm currently trying to improve the testability of a legacy system, written in Java. The most present problem is the presence of “inner” dependencies which cannot be mocked out. The solution to this is pretty simple: Introduce dependency injection.
Unfortunately the code base is pretty large, so it would be a huge effort to introduce dependency injection throughout the whole application, up to the “bootstrapping”. For each class I want to test, I would have to change another hundred (maybe I’m exaggerating a bit here, but it definitely would be a lot) classes, which depend on the changed component.
Now to my question: Would it be OK to use two constructors: a default constructor which initializes the instance fields with default values, and another which allows the injection of dependencies? Are there any disadvantages to this approach? It would allow dependency injection for future uses but still wouldn't require the existing code to change (apart from the class under test).
Current code (“inner” dependencies):
public class ClassUnderTest {
    private ICollaborator collab = new Collaborator();

    public void methodToTest() {
        //do something
        collab.doComplexWork();
        //do something
    }
}
With default / di constructor:
public class ClassUnderTest {
    private ICollaborator collab;

    public ClassUnderTest() {
        collab = new Collaborator();
    }

    public ClassUnderTest(ICollaborator collab) {
        this.collab = collab;
    }

    public void methodToTest() {
        //do something
        collab.doComplexWork();
        //do something
    }
}
This is absolutely fine, I do this occasionally also, especially with service-style classes where a default constructor is necessary because of legacy code or framework constraints.
By using two constructors, you're clearly separating out the default collaborator objects while still allowing tests to inject mocks. You can reinforce the intent by making the second constructor package-private and keeping the unit test in the same package.
It's not as good a pattern as full-blown dependency injection, but that's not always an option.
I think this is perfectly OK. I would write the default constructor in terms of the parameterized one, but that is probably a question of style and preference.
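Writing the default constructor in terms of the parameterized one keeps the choice of the "real" collaborator in exactly one place. A sketch, reusing the types from the question:

```java
interface ICollaborator {
    void doComplexWork();
}

class Collaborator implements ICollaborator {
    @Override
    public void doComplexWork() {
        // real (heavy) implementation
    }
}

class ClassUnderTest {
    private final ICollaborator collab;

    public ClassUnderTest() {
        this(new Collaborator()); // default wiring, defined once
    }

    public ClassUnderTest(ICollaborator collab) {
        this.collab = collab;
    }

    public void methodToTest() {
        collab.doComplexWork();
    }
}
```

A side benefit: the field can now be final, since both constructors assign it exactly once.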
What I would be more cautious about is adding a setCollaborator() method, because this radically changes the contract (and assumptions) for ClassUnderTest, which can lead to strange error scenarios if it's called from code written by someone who doesn't know the history or hasn't read the docs properly (or maybe there aren't any docs at all...)
I have a pipeline-based application that analyzes text in different languages (say, English and Chinese). My goal is to have a system that can work in both languages, in a transparent way. NOTE: This question is long because it has many simple code snippets.
The pipeline is composed of three components (let's call them A, B, and C), and I've created them in the following way so that the components are not tightly coupled:
public class Pipeline {
    private A componentA;
    private B componentB;
    private C componentC;

    // I really just need the language attribute of Locale,
    // but I use it because it's useful to load language-specific ResourceBundles.
    public Pipeline(Locale locale) {
        componentA = new A();
        componentB = new B();
        componentC = new C();
    }

    public Output runPipeline(Input input) {
        Language lang = LanguageIdentifier.identify(input);
        ResultOfA resultA = componentA.doSomething(input);
        ResultOfB resultB = componentB.doSomethingElse(resultA); // uses result of A
        return componentC.doFinal(resultA, resultB); // uses result of A and B
    }
}
Now, every component of the pipeline has something inside which is language specific. For example, in order to analyze Chinese text, I need one lib, and for analyzing English text, I need another different lib.
Moreover, some tasks can be done in one language and cannot be done in the other. One solution to this problem is to make every pipeline component abstract (to implement some common methods), and then have a concrete language-specific implementation. Exemplifying with component A, I'd have the following:
public abstract class A {
    private CommonClass x; // common to all languages
    private AnotherCommonClass y; // common to all languages

    abstract SomeTemporaryResult getTemp(Input input); // language specific
    abstract AnotherTemporaryResult getAnotherTemp(Input input); // language specific

    public ResultOfA doSomething(Input input) {
        // template method
        SomeTemporaryResult t = getTemp(input); // language specific
        AnotherTemporaryResult tt = getAnotherTemp(input); // language specific
        return new ResultOfA(t, tt, x.get(), y.get());
    }
}
public class EnglishA extends A {
    private EnglishSpecificClass something;
    // implementation of the abstract methods ...
}
In addition, since each pipeline component is very heavy and I need to reuse them, I thought of creating a factory that caches up the component for further use, using a map that uses the language as the key, like so (the other components would work in the same manner):
public enum AFactory {
    SINGLETON;

    private Map<String, A> cache; // this map will only have one or two keys - is there anything more efficient than HashMap?

    public A getA(Locale locale) {
        // look up by locale.getLanguage(), inserting if it doesn't exist, et cetera
        return cache.get(locale.getLanguage());
    }
}
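A sketch of the lookup-and-insert I have in mind, using computeIfAbsent so the heavy component is built at most once per language (the generic cache is just an illustration; the loader function would call the real component constructor):

```java
import java.util.Locale;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Language-keyed cache: builds each heavy component at most once per language.
class ComponentCache<T> {
    private final Map<String, T> cache = new ConcurrentHashMap<>();
    private final Function<String, T> loader;

    ComponentCache(Function<String, T> loader) {
        this.loader = loader;
    }

    T get(Locale locale) {
        // atomic lookup-or-insert; safe if the pipeline runs on several threads
        return cache.computeIfAbsent(locale.getLanguage(), loader);
    }
}
```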
So, my question is: What do you think of this design? How can it be improved? I need the "transparency" because the language can be changed dynamically, based on the text that it's being analyzed. As you can see from the runPipeline method, I first identify the language of the Input, and then, based on this, I need to change the pipeline components to the identified language. So, instead of invoking the components directly, maybe I should get them from the factory, like so:
public Output runPipeline(Input input) {
    Language lang = LanguageIdentifier.identify(input);
    ResultOfA resultA = AFactory.SINGLETON.getA(lang).doSomething(input);
    ResultOfB resultB = BFactory.SINGLETON.getB(lang).doSomethingElse(resultA);
    return CFactory.SINGLETON.getC(lang).doFinal(resultA, resultB);
}
Thank you for reading this far. I very much appreciate every suggestion that you can make on this question.
The factory idea is good, as is the idea, if feasible, of encapsulating the A, B, and C components into a single class per language. One thing I would urge you to consider is using interface inheritance instead of class inheritance. You could then incorporate an engine that would run the runPipeline process for you. This is similar to the Builder/Director pattern. The steps in this process would be as follows:
get input
use factory method to get correct interface (english/chinese)
pass interface into your engine
runPipeline and get result
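Sketching those steps with made-up names (and plain String standing in for the real Input/Result types), the engine drives the sequence without knowing which language it is working in:

```java
// Assumed per-language bundle of the A, B and C operations.
interface LanguagePipeline {
    String doSomething(String input);
    String doSomethingElse(String resultA);
    String doFinal(String resultA, String resultB);
}

// The engine owns the sequencing; a factory (not shown) would pick the
// language-specific LanguagePipeline implementation for each input.
class PipelineEngine {
    String run(LanguagePipeline pipeline, String input) {
        String a = pipeline.doSomething(input);
        String b = pipeline.doSomethingElse(a);
        return pipeline.doFinal(a, b);
    }
}
```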
On the extends vs implements topic, Allen Holub goes a bit over the top to explain the preference for Interfaces.
Follow-up to your comments:
My interpretation of applying the Builder pattern here would be that you have a Factory that returns a PipelineBuilder. The PipelineBuilder in my design is one that encompasses A, B, and C, but you could have separate builders for each if you like. This builder is then given to your PipelineEngine, which uses the builder to generate your results.
As this makes use of a Factory to provide the builders, your idea above for a Factory remains intact, replete with its caching mechanism.
With regard to your choice of abstract extension, you do have the choice of giving your PipelineEngine ownership of the heavy objects. However, if you do go the abstract way, note that the shared fields that you have declared are private and therefore would not be available to your subclasses.
I like the basic design. If the classes are simple enough, I might consider consolidating the A/B/C factories into a single class, as it seems there could be some sharing in behavior at that level. I'm assuming that these are really more complex than they appear, though, and that's why that is undesirable.
The basic approach of using Factories to reduce coupling between components is sound, imo.
If I'm not mistaken, what you are calling a factory is actually a very nice form of dependency injection. You are selecting the object instance best able to meet the needs of your parameters and returning it.
If I'm right about that, you might want to look into DI frameworks. They do what you did (which is pretty simple, right?), then add a few more abilities that you may not need now but may find helpful later.
I'm just suggesting you look at what problems are already solved. DI is so easy to do yourself that you hardly need any other tools, but the framework authors may have found situations you haven't considered yet. Google finds many great-looking links right off the bat.
From what I've seen of DI, it's likely that you'll want to move the entire creation of your "Pipe" into the factory, having it do the linking for you and just handing you what you need to solve a specific problem, but now I'm really reaching--my knowledge of DI is just a little better than my knowledge of your code (in other words, I'm pulling most of this out of my butt).