I'm working on a legacy Java application that deals with "fruits" and "vegetables", let's say, for the sake of the question.
They are treated as different things internally, because they don't have all methods/properties in common, but a lot of things are done in a very similar way for both of them.
So, we have a ton of methods doSomethingWithAFruit(Fruit f) and doSomethingWithAVegetable(Veg v), which call the appropriate doOtherStuffWithAFruit(Fruit f) / doOtherStuffWithAVeg(Veg v). And those are very similar, except that methods that do things with fruits only call the methods that do things with fruits, and the same for vegetables.
I want to refactor this to reduce the duplication, but I'm not sure what is the best way to accomplish that. I've read a bit about some design patterns, but I don't know if it has made anything clearer to me. (I can recognize some patterns in the code I use, but I don't really know when I should be applying a pattern to improve things. Maybe I should be reading more about refactoring itself...)
I was thinking of these two options:
1. Creating a class that can have an instance of either a Fruit or a Vegetable and pass it around to the methods, trying to minimize the duplication. It would go like this:
public void doSomething(Plant p) {
    // do the stuff that is common, and then...
    if (p.hasFruit()) {
        doThingWithFruit(p.getFruit());
    } else {
        doThingWithVegetable(p.getVegetable());
    }
}
This would get things a bit better, but I don't know... it still feels wrong.
2. The other alternative I thought of was to have Fruit and Vegetable implement an interface with the stuff that is common to them, and use that to pass them around. I feel this is the cleaner approach, although I will have to use instanceof and cast to Fruit/Vegetable whenever something needs stuff that is specific to them.
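Roughly, I imagine option 2 would look something like this (a sketch only; the Produce name is just a placeholder for the common interface):
public void doSomething(Produce p) {   // Produce = interface implemented by Fruit and Vegetable
    // do the stuff that is common, and then...
    if (p instanceof Fruit) {
        doThingWithFruit((Fruit) p);
    } else if (p instanceof Vegetable) {
        doThingWithVegetable((Vegetable) p);
    }
}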
So, what more can I do here? And what are the shortcomings of these approaches?
UPDATE: Note that the question is a bit simplified. I'm looking for a way to do things WITH the "plants", that is, code that mostly "uses" them instead of doing things TO them. Having said that, those similar methods I refer to cannot be inside the "Plant" classes, and they usually have another argument, like:
public void createSomethingUsingFruit(Something s, Fruit f);
public void createSomethingUsingVegetable(Something s, Vegetable v);
Namely, those methods have other concerns besides Fruits/Vegetables, and aren't really appropriate to be in any Fruit/Vegetable class.
UPDATE 2: Most code in those methods only reads state from the Fruit/Vegetable objects, creates instances of other classes according to the appropriate type, stores them in the database and so on -- from my answer to a question in the comments, which I think is important.
I think the 2nd approach would be better. Designing to an interface is always a better way to design; that way you can switch between implementations easily.
And if you use interfaces, you won't need to typecast, as you can easily exploit polymorphism: you will have a base class reference pointing to a subclass object.
But if you want to keep only methods common to fruits and vegetables in your interface, and specific implementation in your implementation class, then typecasting would be required.
So, you can have a generic method at the interface level, and a more specific method at the implementation level.
public interface Food {
    public void eat(Food food);
}

public class Fruit implements Food {
    // Can have an interface reference as parameter, but will take a Fruit object
    public void eat(Food food) {
        /* Fruit specific task */
    }
}

public class Vegetable implements Food {
    // Can have an interface reference as parameter, but will take a Vegetable object
    public void eat(Food food) {
        /* Vegetable specific task */
    }
}

public class Test {
    public static void main(String args[]) {
        Food fruit = new Fruit();
        fruit.eat(new Fruit());         // Invoke Fruit version
        Food vegetable = new Vegetable();
        vegetable.eat(new Vegetable()); // Invoke Vegetable version
    }
}
OK, I have modified the code to make the eat() method take a parameter of type Food. That will not make much of a difference; you can pass a Vegetable object to a Food reference.
Another option you can use, or perhaps include it as part of your solution, is to ask the consumer if it can manage the object that you are passing it. At this point, it becomes the consumer's responsibility to ensure it knows how to handle the object you are sending it.
For instance, if your consumer is called Eat, you would do something like:
Consumer e = new Eat();
Consumer w = new Water();
if( e.canProcess( myFruit ) )
    e.doSomethingWith( myFruit );
else if ( w.canProcess( myFruit ) )
    w.doSomethingWith( myFruit );
// ... etc
But then you end up with a lot of if/else clauses, so you create yourself a Factory which determines which consumer you want. Your Factory basically does the if/else branching to determine which consumer can handle the object you pass, and returns the consumer to you.
So it looks something like
public class Factory {
    public static Consumer getConsumer( Object o ){
        Consumer e = new Eat();
        Consumer w = new Water();
        if( e.canProcess( o ) )
            return e;
        else if ( w.canProcess( o ) )
            return w;
        throw new IllegalArgumentException( "no consumer can process " + o );
    }
}
Then your code becomes:
Consumer c = Factory.getConsumer( myFruit );
c.doSomethingWith( myFruit );
Of course in the canProcess method of the consumer, it would basically be an instanceof check or some other function you derive to determine if it can handle your class.
public class Eat implements Consumer{
    public boolean canProcess(Object o ){
        return o instanceof Fruit;
    }

    public void doSomethingWith(Object o ){
        // eat the fruit
    }
}
So you end up shifting the responsibility from your class to a Factory class to determine which objects can be handled. The trick, of course, is that all Consumers must implement a common interface.
I realize that my pseudo-code is very basic, but it is just to point out the general idea. This may or may not work in your case, and/or become overkill depending on how your classes are structured, but if well designed it can significantly improve the readability of your code and truly keep all logic for each type self-contained in its own class, without instanceof and if/then scattered everywhere.
If you have functionality that is specific to fruits and vegetables respectively, and a client using both types has to distinguish between them (using instanceof) - that is a cohesion vs. coupling problem.
Maybe consider whether said functionality is not better placed near Fruit and Vegetable themselves instead of with the client. The client may then somehow be referred to the functionality (through a generic interface) without caring what instance it is dealing with. Polymorphism would be preserved, at least from the client's perspective.
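A rough sketch of what that could look like; the Renderable name and method here are invented, not taken from the original code:
interface Renderable {
    String render();            // the part that differs per type
}

class Fruit implements Renderable {
    @Override public String render() { return "<fruit/>"; }
}

class Vegetable implements Renderable {
    @Override public String render() { return "<vegetable/>"; }
}

class Client {
    // The client only sees the generic interface and never needs instanceof.
    void doSomething(Renderable plant) {
        String html = plant.render();   // the common code continues from here
        System.out.println(html);
    }
}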
But that is theoretical and may not be practical or be over-engineered for your use case. Or you could end up actually just hiding instanceof somewhere else in your design. instanceof is going to be a bad thing when you start having more inheritance siblings next to fruits and vegetables. Then you would start violating the Open Closed Principle.
I would create an abstract base class (let's say Food, but I don't know your domain, something else might fit better) and start to migrate methods to it one after another.
In case you see that 'doSomethingWithVeg' and 'doSomethingWithFruit' are slightly different - create a 'doSomething' method in the base class and use abstract methods to do only the parts that are different (I guess the main business logic can be unified, and only minor things like writing to the DB/file are different).
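A sketch of that migration target (template-method style); the class and method names here are placeholders, not from the original code:
abstract class Food {
    // Former doSomethingWithFruit / doSomethingWithVeg, merged into one method:
    public void doSomething() {
        // ... shared business logic ...
        persist();                      // only the differing part is abstract
    }

    // The small part that really differs (e.g. writing to DB/file):
    protected abstract void persist();
}

class Fruit extends Food {
    @Override protected void persist() { /* fruit-specific write */ }
}

class Veg extends Food {
    @Override protected void persist() { /* veg-specific write */ }
}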
When you have one method ready - test it. After you're sure it's ok - go to the other one. When you are done, the Fruit and Veg classes shouldn't have any methods but the implementations of the abstract ones (the tiny differences between them).
Hope it helps..
This is a basic OOD question, since fruits and vegetables are both kinds of Plant.
I would suggest:
interface Plant {
    void doSomething();
}

class Vegetable implements Plant {
    public void doSomething() {
        // ....
    }
}
And the same with the fruit.
It seems to me that the doOtherStuff methods should be private to the relevant class.
You can also consider having them both implement multiple interfaces instead of just one. That way you code against the most meaningful interface according to the circumstances, which will help avoid casting. This is kind of what Comparable<T> does: it helps methods (like the ones that sort objects) that don't care what the objects are; the only requirement is that they be comparable. E.g. in your case both can implement some interface called Edible, and then you take both of them as Edible wherever an Edible is expected.
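A small sketch of that idea (the Edible interface and the numbers are invented for illustration):
interface Edible {
    int calories();
}

class Fruit implements Edible /*, plus other role interfaces as needed */ {
    @Override public int calories() { return 52; }
}

class Vegetable implements Edible {
    @Override public int calories() { return 25; }
}

class NutritionReport {
    // This code only cares that the argument is Edible, exactly like sorting code
    // only cares that its arguments are Comparable.
    void add(Edible edible) {
        System.out.println(edible.calories() + " kcal");
    }
}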
Related
Given the following class:
class Example implements Interface1, Interface2 {
...
}
When I instantiate the class using Interface1:
Interface1 example = new Example();
...then I can call only the Interface1 methods, and not the Interface2 methods, unless I cast:
((Interface2) example).someInterface2Method();
Of course, to make this runtime safe, I should also wrap this with an instanceof check:
if (example instanceof Interface2) {
((Interface2) example).someInterface2Method();
}
I'm aware that I could have a wrapper interface that extends both interfaces, but then I could end up with multiple interfaces to cater for all the possible permutations of interfaces that can be implemented by the same class. The Interfaces in question do not naturally extend one another so inheritance also seems wrong.
Does the instanceof/cast approach break LSP as I am interrogating the runtime instance to determine its implementations?
Whichever implementation I use seems to have some side-effect either in bad design or usage.
I'm aware that I could have a wrapper interface that extends both
interfaces, but then I could end up with multiple interfaces to cater
for all the possible permutations of interfaces that can be
implemented by the same class
I suspect that if you're finding that lots of your classes implement different combinations of interfaces then either: your concrete classes are doing too much; or (less likely) your interfaces are too small and too specialised, to the point of being useless individually.
If you have good reason for some code to require something that is both a Interface1 and a Interface2 then absolutely go ahead and make a combined version that extends both. If you struggle to think of an appropriate name for this (no, not FooAndBar) then that's an indicator that your design is wrong.
Absolutely do not rely on casting anything. It should only be used as a last resort and usually only for very specific problems (e.g. serialization).
My favourite and most-used design pattern is the decorator pattern. As such most of my classes will only ever implement one interface (except for more generic interfaces such as Comparable). I would say that if your classes are frequently/always implementing more than one interface then that's a code smell.
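For reference, a minimal decorator sketch (not from the original answer; the Notifier names are invented): one interface, one plain implementation, and a decorator that wraps another instance of the same interface instead of subclassing it.
interface Notifier {
    void send(String message);
}

class EmailNotifier implements Notifier {
    @Override public void send(String message) { System.out.println("email: " + message); }
}

class LoggingNotifier implements Notifier {
    private final Notifier delegate;
    LoggingNotifier(Notifier delegate) { this.delegate = delegate; }
    @Override public void send(String message) {
        System.out.println("about to send: " + message);   // added behaviour
        delegate.send(message);                             // then delegate to the wrapped Notifier
    }
}
Usage would be something like new LoggingNotifier(new EmailNotifier()).send("hi"), and each class still implements only the one Notifier interface.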
If you're instantiating the object and using it within the same scope then you should just be writing
Example example = new Example();
Just so it's clear (I'm not sure if this is what you were suggesting), under no circumstances should you ever be writing anything like this:
Interface1 example = new Example();
if (example instanceof Interface2) {
((Interface2) example).someInterface2Method();
}
Your class can implement multiple interfaces fine, and it is not breaking any OOP principles. On the contrary, it is following the interface segregation principle.
It is confusing why you would have a situation where something of type Interface1 is expected to provide someInterface2Method(). That is where your design is wrong.
Think about it in a slightly different way: Imagine you have another method, void method1(Interface1 interface1). It can't expect interface1 to also be an instance of Interface2. If it was the case, the type of the argument should have been different. The example you have shown is precisely this, having a variable of type Interface1 but expecting it to also be of type Interface2.
If you want to be able to call both methods, you should have the type of your variable example set to Example. That way you avoid the instanceof and type casting altogether.
If your two interfaces Interface1 and Interface2 are not that loosely coupled, and you will often need to call methods from both, maybe separating the interfaces wasn't such a good idea, or maybe you want to have another interface which extends both.
In general (although not always), instanceof checks and type casts often indicate some OO design flaw. Sometimes the design would fit for the rest of the program, but you would have a small case where it is simpler to type cast rather than refactor everything. But if possible you should always strive to avoid it at first, as part of your design.
You have two different options (I bet there are a lot more).
The first is to create your own interface which extends the other two:
interface Interface3 extends Interface1, Interface2 {}
And then use that throughout your code:
public void doSomething(Interface3 interface3){
...
}
The other way (and in my opinion the better one) is to use generics per method:
public <T extends Interface1 & Interface2> void doSomething(T t){
...
}
The latter option is in fact less restricted than the former, because the generic type T gets inferred at each call site and thus leads to less coupling (a class doesn't have to implement a specific grouping interface, as in the first example).
The core issue
Slightly tweaking your example so I can address the core issue:
public void DoTheThing(Interface1 example)
{
    if (example instanceof Interface2)
    {
        ((Interface2) example).someInterface2Method();
    }
}
So you defined the method DoTheThing(Interface1 example). This is basically saying "to do the thing, I need an Interface1 object".
But then, in your method body, it appears that you actually need an Interface2 object. Then why didn't you ask for one in your method parameters? Quite obviously, you should've been asking for an Interface2.
What you're doing here is assuming that whatever Interface1 object you get will also be an Interface2 object. This is not something you can rely on. You might have some classes which implement both interfaces, but you might as well have some classes which only implement one and not the other.
There is no inherent requirement whereby Interface1 and Interface2 need to both be implemented on the same object. You can't know (nor rely on the assumption) that this is the case.
Unless you define the inherent requirement and apply it.
interface InterfaceBoth extends Interface1, Interface2 {}
public void DoTheThing(InterfaceBoth example)
{
example.someInterface2Method();
}
In this case, you've required InterfaceBoth object to both implement Interface1 and Interface2. So whenever you ask for an InterfaceBoth object, you can be sure to get an object which implements both Interface1 and Interface2, and thus you can use methods from either interface without even needing to cast or check the type.
You (and the compiler) know that this method will always be available, and there's no chance of this not working.
Note: You could've used Example instead of creating the InterfaceBoth interface, but then you would only be able to use objects of type Example and not any other class which would implement both interfaces. I assume you're interested in handling any class which implements both interfaces, not just Example.
Deconstructing the issue further.
Look at this code:
ICarrot myObject = new Superman();
If you assume this code compiles, what can you tell me about the Superman class? That it clearly implements the ICarrot interface. That is all you can tell me. You have no idea whether Superman implements the IShovel interface or not.
So if I try to do this:
myObject.SomeMethodThatIsFromSupermanButNotFromICarrot();
or this:
myObject.SomeMethodThatIsFromIShovelButNotFromICarrot();
Should you be surprised if I told you this code compiles? You should be, because this code doesn't compile.
You may say "but I know that it's a Superman object which has this method!". But then you'd be forgetting that you only told the compiler it was an ICarrot variable, not a Superman variable.
You may say "but I know that it's a Superman object which implements the IShovel interface!". But then you'd be forgetting that you only told the compiler it was an ICarrot variable, not a Superman or IShovel variable.
Knowing this, let's look back at your code.
Interface1 example = new Example();
All you've said is that you have an Interface1 variable.
if (example instanceof Interface2) {
((Interface2) example).someInterface2Method();
}
It makes no sense for you to assume that this Interface1 object also happens to implement a second, unrelated interface. Even if this code works on a technical level, it is a sign of bad design: the developer is expecting some inherent correlation between two interfaces without actually having created this correlation.
You may say "but I know I'm putting an Example object in, the compiler should know that too!" but you'd be missing the point that if this were a method parameter, you would have no way of knowing what the callers of your method are sending.
public void DoTheThing(Interface1 example)
{
    if (example instanceof Interface2)
    {
        ((Interface2) example).someInterface2Method();
    }
}
When other callers call this method, the compiler is only going to stop them if the passed object does not implement Interface1. The compiler is not going to stop someone from passing an object of a class which implements Interface1 but does not implement Interface2.
Your example does not break LSP, but it seems to break SRP. If you encounter such a case where you need to cast an object to its 2nd interface, the method that contains such code can be considered to be doing too much.
Implementing 2 (or more) interfaces in a class is fine. Deciding which interface to use as its data type depends entirely on the context of the code that will use it.
Casting is fine, especially when changing context.
class Payment implements Expirable, Limited {
    /* ... */
}

class PaymentProcessor {
    private final ExpirationChecker expirationChecker = new ExpirationChecker();
    private final LimitChecker limitChecker = new LimitChecker();

    // Using Payment here because I'm working with payments.
    public void process(Payment payment) {
        boolean expired = expirationChecker.check(payment);
        boolean pastLimit = limitChecker.check(payment);
        if (!expired && !pastLimit) {
            acceptPayment(payment);
        }
    }
}

class ExpirationChecker {
    // This is the `Expirable` world, so I'm using Expirable here
    public boolean check(Expirable expirable) {
        // code
    }
}

class LimitChecker {
    // This class is about checking limits, that's why I'm using `Limited` here
    public boolean check(Limited limited) {
        // code
    }
}
Usually, many client-specific interfaces are fine, and somewhat part of the Interface Segregation Principle (the "I" in SOLID). Some more specific points, on a technical level, have already been mentioned in other answers.
Particularly that you can go too far with this segregation, by having a class like
class Person implements FirstNameProvider, LastNameProvider, AgeProvider ... {
    @Override String getFirstName() {...}
    @Override String getLastName() {...}
    @Override int getAge() {...}
    ...
}
Or, conversely, that you have an implementing class that is too powerful, as in
class Application implements DatabaseReader, DataProcessor, UserInteraction, Visualizer {
...
}
I think that the main point in the Interface Segregation Principle is that the interfaces should be client-specific. They should basically "summarize" the functions that are required by a certain client, for a certain task.
To put it that way: The issue is to strike the right balance between the extremes that I sketched above. When I'm trying to figure out interfaces and their relationships (mutually, and in terms of the classes that implement them), I always try to take a step back and ask myself, in an intentionally naïve way: Who is going to receive what, and what is he going to do with it?
Regarding your example: When all your clients always need the functionality of Interface1 and Interface2 at the same time, then you should consider either defining an
interface Combined extends Interface1, Interface2 { }
or not have different interfaces in the first place. On the other hand, when the functionalities are completely distinct and unrelated and never used together, then you should wonder why the single class is implementing them at the same time.
At this point, one could refer to another principle, namely Composition over inheritance. Although it is not classically related to implementing multiple interfaces, composition can also be favorable in this case. For example, you could change your class to not implement the interfaces directly, but only provide instances that implement them:
class Example {
Interface1 getInterface1() { ... }
Interface2 getInterface2() { ... }
}
It looks a bit odd in this Example (sic!), but depending on the complexity of the implementation of Interface1 and Interface2, it can really make sense to keep them separated.
Edited in response to the comment:
The intention here is not to pass the concrete class Example to methods that need both interfaces. A case where this could make sense is rather when a class combines the functionalities of both interfaces, but does not do so by directly implementing them at the same time. It's hard to make up an example that does not look too contrived, but something like this might bring the idea across:
interface DatabaseReader { String read(); }
interface DatabaseWriter { void write(String s); }

class Database {
    DatabaseConnection connection = create();
    DatabaseReader reader = createReader(connection);
    DatabaseWriter writer = createWriter(connection);

    DatabaseReader getReader() { return reader; }
    DatabaseWriter getWriter() { return writer; }
}
The client will still rely on the interfaces. Methods like
void create(DatabaseWriter writer) { ... }
void read (DatabaseReader reader) { ... }
void update(DatabaseReader reader, DatabaseWriter writer) { ... }
could then be called with
create(database.getWriter());
read (database.getReader());
update(database.getReader(), database.getWriter());
respectively.
With the help of various posts and comments on this page, a solution has been produced, which I feel is correct for my scenario.
The following shows the iterative changes to the solution to meet SOLID principles.
Requirement
To produce the response for a web service, key + object pairs are added to a response object. There are lots of different key + object pairs that need to be added, each of which may have unique processing required to transform the data from the source to the format required in the response.
From this it is clear that whilst the different key / value pairs may have different processing requirements to transform the source data to the target response object, they all have a common goal of adding an object to the response object.
Therefore, the following interface was produced in solution iteration 1:
Solution Iteration 1
public interface ResponseObjectProvider<T, S> {
    void addObject(T targetObject, S sourceObject, String targetKey);
}
Any developer that needs to add an object to the response can now do so using an existing implementation that matches their requirement, or add a new implementation given a new scenario.
This is great as we have a common interface which acts as a contract for this common practice of adding response objects.
However, one scenario requires that the target object should be taken from the source object given a particular key, "identifier".
There are options here, the first is to add an implementation of the existing interface as follows:
public class GetIdentifierResponseObjectProvider<T extends Map, S extends Map> implements ResponseObjectProvider<T, S> {
    public void addObject(final T targetObject, final S sourceObject, final String targetKey) {
        targetObject.put(targetKey, sourceObject.get("identifier"));
    }
}
This works; however, this scenario could be required for other source object keys ("startDate", "endDate" etc...), so this implementation should be made more generic to allow for reuse in this scenario.
Additionally, other implementations may require more context information to perform the addObject operation... so a new generic type should be added to cater for this.
Solution Iteration 2
public interface ResponseObjectProvider<T, S, U> {
    void addObject(T targetObject, S sourceObject, String targetKey);
    void setParams(U params);
    U getParams();
}
This interface caters for both usage scenarios: the implementations that require additional params to perform the addObject operation and the implementations that do not.
However, considering the latter of the usage scenarios, the implementations that do not require additional parameters will break the SOLID Interface Segregation Principle, as these implementations will override the getParams and setParams methods but not implement them. E.g.:
public class GetIdentifierResponseObjectProvider<T extends Map, S extends Map, U> implements ResponseObjectProvider<T, S, U> {
    public void addObject(final T targetObject, final S sourceObject, final String targetKey) {
        targetObject.put(targetKey, sourceObject.get("identifier"));
    }

    public void setParams(U params) {
        // unimplemented method - this provider does not need params
    }

    public U getParams() {
        // unimplemented method - this provider does not need params
        return null;
    }
}
Solution Iteration 3
To fix the Interface Segregation issue, the getParams and setParams interface methods were moved into a new Interface:
public interface ParametersProvider<T> {
void setParams(T params);
T getParams();
}
The implementations that require parameters can now implement the ParametersProvider interface:
public class GetObjectBySourceKeyResponseObjectProvider<T extends Map, S extends Map, U extends String> implements ResponseObjectProvider<T, S>, ParametersProvider<U> {

    private U params;

    public void setParams(U params) {
        this.params = params;
    }

    public U getParams() {
        return this.params;
    }

    public void addObject(final T targetObject, final S sourceObject, final String targetKey) {
        targetObject.put(targetKey, sourceObject.get(params));
    }
}
This solves the Interface Segregation issue but causes two more issues... If the calling client wants to program to an interface, i.e:
ResponseObjectProvider responseObjectProvider = new GetObjectBySourceKeyResponseObjectProvider<>();
Then the addObject method will be available to the instance, but NOT the getParams and setParams methods of the ParametersProvider interface... To call these a cast is required, and to be safe an instanceof check should also be performed:
if(responseObjectProvider instanceof ParametersProvider) {
((ParametersProvider)responseObjectProvider).setParams("identifier");
}
Not only is this undesirable it also breaks the Liskov Substitution Principle - "if S is a subtype of T, then objects of type T in a program may be replaced with objects of type S without altering any of the desirable properties of that program"
i.e. if we replaced an implementation of ResponseObjectProvider that also implements ParametersProvider with an implementation that does not implement ParametersProvider, then this could alter some of the desirable properties of the program... Additionally, the client needs to be aware of which implementation is in use in order to call the correct methods.
An additional problem is the usage for calling clients. If the calling client wanted to use an instance that implements both interfaces to perform addObject multiple times, the setParams method would need to be called before addObject... This could cause avoidable bugs if care is not taken when calling.
Solution Iteration 4 - Final Solution
The interfaces produced from Solution Iteration 3 solve all of the currently known usage requirements, with some flexibility provided by generics for implementation using different types. However, this solution breaks the Liskov Substitution Principle and has a non-obvious usage of setParams for the calling client
The solution is to have two separate interfaces, ParameterisedResponseObjectProvider and ResponseObjectProvider.
This allows the client to program to an interface, and would select the appropriate interface depending on whether the objects being added to the response require additional parameters or not
The new interface was first implemented as an extension of ResponseObjectProvider:
public interface ParameterisedResponseObjectProvider<T,S,U> extends ResponseObjectProvider<T, S> {
void setParams(U params);
U getParams();
}
However, this still had the usage issue, where the calling client would first need to call setParams before calling addObject and also make the code less readable.
So the final solution has two separate interfaces defined as follows:
public interface ResponseObjectProvider<T, S> {
void addObject(T targetObject, S sourceObject, String targetKey);
}
public interface ParameterisedResponseObjectProvider<T,S,U> {
void addObject(T targetObject, S sourceObject, String targetKey, U params);
}
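A hypothetical usage sketch against these two final interfaces (the provider parameters, the java.util.Map type arguments and the key names are made up; the point is that the caller picks whichever interface matches its needs, with no setParams ordering issue):
void buildResponse(Map<String, Object> response, Map<String, Object> source,
                   ResponseObjectProvider<Map<String, Object>, Map<String, Object>> plainProvider,
                   ParameterisedResponseObjectProvider<Map<String, Object>, Map<String, Object>, String> keyedProvider) {
    plainProvider.addObject(response, source, "identifier");               // no extra context needed
    keyedProvider.addObject(response, source, "startDate", "startDate");   // params passed in the same call
}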
This solution solves the breaches of Interface Segregation and Liskov Substitution principles and also improves the usage for calling clients and improves the readability of the code.
It does mean that the client needs to be aware of the different interfaces, but since the contracts are different this seems to be a justified decision especially when considering all the issues that the solution has avoided.
The problem you describe often comes about through over-zealous application of the Interface Segregation Principle, encouraged by languages' inability to specify that members of one interface should, by default, be chained to static methods which could implement sensible behaviors.
Consider, for example, a basic sequence/enumeration interface and the following behaviors:
Produce an enumerator which can read out the objects if no other iterator has yet been created.
Produce an enumerator which can read out the objects even if another iterator has already been created and used.
Report how many items are in the sequence
Report the value of the Nth item in the sequence
Copy a range of items from the object into an array of that type.
Yield a reference to an immutable object that can accommodate the above operations efficiently with contents that are guaranteed never to change.
I would suggest that such abilities should be part of the basic sequence/enumeration interface, along with a method/property to indicate which of the above operations are meaningfully supported. Some kinds of single-shot on-demand enumerators (e.g. an infinite truly-random sequence generator) might not be able to support any of those functions, but segregating such functions into separate interfaces will make it much harder to produce efficient wrappers for many kinds of operations.
One could produce a wrapper class that would accommodate all of the above operations, though not necessarily efficiently, on any finite sequence which supports the first ability. If, however, the class is being used to wrap an object that already supports some of those abilities (e.g. access the Nth item), having the wrapper use the underlying behaviors could be much more efficient than having it do everything via the second function above (e.g. creating a new enumerator, and using that to iteratively read and ignore items from the sequence until the desired one is reached).
Having all objects that produce any kind of sequence support an interface that includes all of the above, along with an indication of what abilities are supported, would be cleaner than trying to have different interfaces for different subsets of abilities, and requiring that wrapper classes make explicit provision for any combinations they want to expose to their clients.
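A rough Java sketch of that single-interface-with-capability-flags idea (the interface name, the Capability enum and the method names are all invented here for illustration):
import java.util.Iterator;
import java.util.Set;

interface RichSequence<T> {
    enum Capability { RESTARTABLE_ITERATION, KNOWN_SIZE, RANDOM_ACCESS, SNAPSHOT }

    Set<Capability> capabilities();   // reports which optional operations are meaningful
    Iterator<T> iterator();           // always available

    default int size() { throw new UnsupportedOperationException(); }
    default T get(int index) { throw new UnsupportedOperationException(); }
    default RichSequence<T> snapshot() { throw new UnsupportedOperationException(); }
}
A wrapper class can then check capabilities() and delegate to get(int) when RANDOM_ACCESS is reported, instead of iterating from the start of the sequence each time.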
I am studying Design Patters and I have a situation where I am not sure what would be a better practice:
I have a class "Category" which has several fields: name, 2 kinds of urls, list of related objects. There is a method 'toHtml()' which basically generates some HTML from instances of that class.
There are 4 different types of 'Categories' which have exactly the same fields but 'toHtml()' method should give different result for each one.
I am not sure if I should pass a parameter "type" and series of ifs/switch statement to generate different html or should I make a Category class abstract and create several sub-classes that override the toHtml() method and then use CategoryFactory class to generate them? In both cases I need to pass 'type' parameter.
I tried to think about the 'Closed for modification, open for extension' OOP rule. But in this case, if I want to add a fifth category type that generates different HTML - for the first solution I need to modify only the toHtml method (adding one more if), while going for the second solution I need to create an additional sub-class AND modify the CategoryFactory class.
What would be better practice? Is there any extra rule I should follow when I have similar kind of dilemma?
First, I believe you are referring to the Factory Method, and not the Abstract Factory Pattern.
The main difference being, in the former you define a common template for a single product, whereas in the latter you define a template for a family of products. For more information, you could look here.
In your case, you wish to define a template for Category. With this assumption, here is what your group of classes would look like:
abstract class Category {
    public void doSomething() {
        Foo f = makeFoo();
        f.whatever();
    }

    // Factory Method: each subclass decides which Foo to create
    protected abstract Foo makeFoo();

    abstract void toHtml();
}

class Category1 extends Category {
    @Override
    public void toHtml() {
        ... // do something here
    }
}

class Category2 extends Category {
    @Override
    public void toHtml() {
        ... // do something else here
    }
}
It is true that this is certainly a lot of code, and it could easily be represented like this:
class Category {
    public void toHtml(Integer param) {
        if(param == 1) {
            // do something for Category1
        }
        else {
            // do something for Category2
        }
    }
}
At the end of the day, it really is a design decision. There are some factors you can consider. Is this going to be a constantly changing class? Is this going to be declared global for the customer to use? How do you want the customer to be able to use this?
The easier thing at this point would be to take the path of least resistance. Having one class to service all categories certainly results in less code, and in Salesforce less code is always a better thing. But consider this: abstracting your functionality into separate classes makes for more maintainable code. You may find it easier to write one class and a wall of if statements, but tomorrow when you're not around and there's a critical failure and someone has to look through your code to figure out exactly which if caused the problem, they'll curse you for it.
Keep in mind that inheritance is an all or nothing mechanism. You may find it particularly useful to use if you have some common functionality, in which case you can choose to abstract that out into the parent class and have your children take care of the specifics.
If you create a subclass of Category and override the toHtml() method, why do you need to have a factory pattern? The toHtml() method of the runtime-resolved class will be called if you are calling it through the base-class reference. This implies that if you add a new Category subclass then you override the toHtml() method and it should work fine.
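A minimal runnable sketch of that run-time dispatch (assuming a Category hierarchy like the one above; the HTML strings are made up, and toHtml() returns a String here just so the result can be printed):
abstract class Category { abstract String toHtml(); }
class Category1 extends Category { String toHtml() { return "<div class=\"category-1\"></div>"; } }
class Category2 extends Category { String toHtml() { return "<div class=\"category-2\"></div>"; } }

class Demo {
    public static void main(String[] args) {
        Category c = new Category2();      // declared type is the base class
        System.out.println(c.toHtml());    // prints Category2's HTML: resolved at run time
    }
}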
I am reading a Java book and am stuck again, this time thinking about what this whole paragraph actually means:
Interfaces are designed to support dynamic method resolution at run time. Normally, in order for a method to be called from one class to another, both classes need to be present at compile time so the Java compiler can check to ensure that the method signatures are compatible. This requirement by itself makes for a static and nonextensible classing environment. Inevitably in a system like this, functionality gets pushed up higher and higher in the class hierarchy so that the mechanisms will be available to more and more subclasses. Interfaces are designed to avoid this problem. They disconnect the definition of a method or set of methods from the inheritance hierarchy. Since interfaces are in a different hierarchy from classes, it is possible for classes that are unrelated in terms of the class hierarchy to implement the same interface. This is where the real power of interfaces is realized.
First question: what does the author mean by saying from one class to another? Does he mean that those classes are related in terms of the hierarchy? I mean, assigning subclass object reference to its superclass type variable and then calling a method?
Second question: what does the author again mean by saying This requirement by itself makes for a static and nonextensible classing environment? I don't understand the makes for meaning (english is not my main language) and why the environment is called static and nonextensible.
Third question: what does he mean by saying functionality gets pushed up higher and higher? Why does it get pushed up higher and higher? What functionality? Also, mechanisms will be available to more and more subclasses. What mechanisms? Methods?
Fourth question: Interfaces are designed to avoid this problem. What problem???
I know the answers must be obvious but I don't know them, maybe mainly because I don't understand some magic English phrases. Please help me to understand what this whole paragraph is saying.
Between any two classes. If your code contains a call to String.substring() for example, the String class and its substring() method must be available at compile time.
As said, "makes for" means the same as "creates". The environment is non-extensible because everything you may want to use must be available at compile time. (This isn't 100% true though. Abstract classes and methods provide extension points even when no interfaces are present, but they aren't very flexible as we're going to see.)
Imagine that you have two classes: Foo and Bar. Both classes extend the class Thingy. But then you want to add a new functionality, let's say you want to display both in HTML on a web page. So you add a method to both that does that.
The basic problem
abstract class Thingy { ... }

class Foo extends Thingy {
    ...
    public String toHTMLString() {
        ...
    }
}

class Bar extends Thingy {
    ...
    public String toHTMLString() {
        ...
    }
}
This is great but how do you call this method?
public String createWebPage( Thingy th ) {
    ...
    if (th instanceof Foo)
        return ((Foo)th).toHTMLString();
    if (th instanceof Bar)
        return ((Bar)th).toHTMLString();
    ...
}
Clearly this way isn't flexible at all. So what can you do? Well, you can push toHTMLString() up into their common ancestor, Thingy. (And this is what the book is talking about.)
A naive attempt to resolve it
abstract class Thingy {
    ...
    public abstract String toHTMLString();
}

class Foo extends Thingy {
    ...
    public String toHTMLString() {
        ...
    }
}

class Bar extends Thingy {
    ...
    public String toHTMLString() {
        ...
    }
}
And then you can call it like this:
public String createWebPage( Thingy th ) {
...
return th.toHTMLString();
}
Success! Except now you've forced every class extending Thingy to implement a toHTMLString() method, even if it doesn't make sense for some of them. Even worse, what if the two objects do not extend anything explicitly, they're completely unrelated? You'd have to push the method up all the way into their common ancestor, which is java.lang.Object. And you can't do that.
Solution with interfaces
So what can we do with interfaces?
abstract class Thingy { ... }

interface HTMLPrintable {
    public String toHTMLString();
}

class Foo extends Thingy implements HTMLPrintable {
    ...
    public String toHTMLString() {
        ...
    }
}

class Bar extends Thingy implements HTMLPrintable {
    ...
    public String toHTMLString() {
        ...
    }
}

// We've added another class that isn't related to all of the above but is still HTMLPrintable;
// with interfaces we can do this.
class NotEvenAThingy implements HTMLPrintable {
    public String toHTMLString() {
        ...
    }
}
And the calling code will be simply
public String createWebPage( HTMLPrintable th ) {
...
return th.toHTMLString(); // "implements HTMLPrintable" guarantees that this method exists
}
What are interfaces then?
There are many metaphors used to understand interfaces, the most popular is probably the idea of a contract. What it says to the caller is this: "If you need X done, we'll get it done. Don't worry about how, that's not your problem." (Although the word "contract" is often used in a more general sense, so be careful.)
Or in another way: if you want to buy a newspaper, you don't care if it's sold in a supermarket, a newsagents or a small stall in the street, you just want to buy a newspaper. So NewspaperVendor in this case is an interface with one method: sellNewsPaper(). And if someone later decides to sell newspaper online or door-to-door, all they need to do is implement the interface and people will buy from them.
But my favourite example is the little sticker in shop windows that says "we accept X,Y and Z credit cards". That's the purest real-world example of an interface. The shops could sell anything (they may not even be shops, some might be restaurants), the card readers they use are different too. But you don't care about all of that, you look at the sign and you know you can pay with your card there.
The key to the paragraph is "classes need to be present at compile time" in line 2. Classes are more concrete, while interfaces are abstract.
As classes are concrete, the designer and programmer need to know all about the class structure and how the methods are implemented. Interfaces, on the other hand, are more abstract (they contain abstract methods only), so the programmer only needs to know what methods an interface has and the signatures of those methods; he does not need to know the details of how they are implemented.
Thus using interfaces is easier and better when making subclasses: you only need to know the method signatures of the interface.
Using a concrete class, we have to implement the functionality of a method high in the class hierarchy, while using an interface avoids this problem. (There is a related concept of polymorphism that you will probably learn about later.)
I know this is a basic question, but I can't find other StackOverflow posts or any good API docs on this.
Say I have an abstract class like Appliance and then I have some classes like Toaster and Blender that extend Appliance. Now suppose that I want to create an ArrayList that will contain mixed elements, all of which are ultimately members of Appliance but could also be Toaster or Blender as well. The Blender class has a method called turnBlenderOff() and the Toaster class has a method called turnToasterOff(), and I will want to iterate over my ArrayList and call these methods, depending on which subclass the element actually belongs to.
Currently I make a class called PowerPoint and try:
// Constructor given an ArrayList of appliances.
public PowerPoint(ArrayList<Appliance> initial_list_of_appliances){
int listSize = initial_list_of_appliances.size();
for(int ii = 0; ii < listSize; ii++){
this.applianceList.add(initial_list_of_appliances.get(ii));
}
}
/////
// Method to switch everything in the list OFF simultaneously.
/////
public void switchOff(){
int N = this.applianceList.size();
String cur_name;
for(int ii = 0; ii < N; ii++){
cur_name = this.applianceList.get(ii).getClassName();
if(cur_name.equals("Blender")){
this.turnBlenderOff(this.applianceList.get(ii));
}
else if(cur_name.equals("Toaster")){
this.turnToasterOff(this.applianceList.get(ii));
}
else if(cur_name.equals("Oven")){
this.turnOvenOff(this.applianceList.get(ii));
}
}
}
Most of my code compiles fine, but I keep getting this error message:
PowerPoint.java:83: turnBlenderOff(appliances.ApplianceWrapper.Blender) in PowerPoint cannot be applied to (appliances.ApplianceWrapper.Appliance)
this.turnBlenderOff(this.applianceList.get(ii));
I see that this method, which is implemented to work only on Blender objects, is being called on an Appliance object that happens to be a Blender, but the compiler doesn't realize this.
I tried to replace the <Appliance> type with <? extends Appliance> in the ArrayList specifications, but that gave additional errors and no longer compiled.
What is the proper way to make a list based on the abstract type, but then call methods of the subclassed type by using something like getClassName() to retrieve the subclass type?
Added
Since a lot of folks immediately pointed out the obvious: use inheritance better, I need to explain. For this project, we have to assume that all of the subclasses of Appliance were created by third-party people and put into some package that we cannot change. This was done in a bad, crufty way in which all different subclasses have different on/off methods and this can't be changed. So the option of designing a smooth Appliance abstract class is not open to me. For example, Toaster has the method startToasting(), while Oven has the method heatUp(), each of which serves as the 'on' method for the two different classes.
I think the exercise is meant to teach us: how would you retro-fit someone else's bad design job so as to minimize future damage.
If you want to use an abstract class and all subclasses actually have the same functions, why don't you use a function in the abstract base class called "turnDeviceOff" and override it in the subclasses accordingly? That's the OO approach.
The ArrayList is ok.
but you could do this:
public abstract class Appliance{
//declare an abstract method
abstract void switchOff();
}
then
public class Toaster extends Appliance{
//implement the abstract method
void switchOff(){
//do toaster switchOff
}
}
for other subclasses, do the same.
finally,
for(Appliance element: yourList){
element.switchOff();
}
Use instanceof or getClass, not rolling your own getClassName, and then do an explicit cast to the type you just identified.
That said, prefer @guitarflow's answer, though that approach might not work if there is state that you can't just pass to the switchOff method.
There are two ways you can do this. The first (and less-recommended) way is by using the instanceof keyword and casting your Appliance instance into a Blender instance:
for(Appliance a : list){
if(a instanceof Blender) this.turnBlenderOff((Blender)a);
...
}
This is bad because instanceof is slow and doesn't allow you to take advantage of Java's most powerful counterpart to polymorphism, late binding. The better way would be to have the Appliance class have an abstract public method called turnOff(). Then you could do something like:
for(Appliance a : list){
a.turnOff();
...
}
The proper way to solve is by adding a turnOff() method in Appliance class, and have the various subclasses override them appropriately. If you do that, then the big "if" code goes away.
You're rather defeating the point of inheritance: Appliance should define a turnOff() method, which its children should implement appropriately for their needs. That way you can just work with a list of Appliances and not have to worry about what's what.
Otherwise, what's the point of them extending Appliance in the first place?
If you do need to figure out the type of something, use instanceof; testing class names as strings is a terribly brittle way of doing it.
Edit
Random style tip: it's almost never necessary to use index access on Lists anymore:
public PowerPoint(List<Appliance> initialList){
for(Appliance app : initialList)
applianceList.add(app);
}
Of course there's also:
applianceList.addAll(initialList);
Edit 2
A more direct translation:
public void switchOff(){
    for(Appliance app : applianceList)
        switchOff(app);
}

private void switchOff(Appliance app){
    if(app instanceof Blender)
        turnBlenderOff((Blender) app);
    else if(app instanceof Toaster)
        turnToasterOff((Toaster) app);
    else if(app instanceof Oven)
        turnOvenOff((Oven) app);
    else
        throw new RuntimeException("unknown appliance: " + app);
}
You could also add a wrapper around the different appliance classes that normalizes the API, but it may not be worth it (depending on how involved it is).
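For illustration, a hedged sketch of that wrapper idea: adapt each third-party appliance to one interface you control. The SwitchableAppliance interface and the adapter classes are invented here, the Toaster/Blender stubs only stand in for the real third-party classes, and their off-method names are assumptions:
// Stand-ins for the third-party classes from the question (method names assumed):
class Toaster { void stopToasting() { /* third-party off logic */ } }
class Blender { void stopBlending() { /* third-party off logic */ } }

interface SwitchableAppliance {
    void switchOff();
}

class ToasterAdapter implements SwitchableAppliance {
    private final Toaster toaster;
    ToasterAdapter(Toaster toaster) { this.toaster = toaster; }
    public void switchOff() { toaster.stopToasting(); }   // delegate to the assumed third-party method
}

class BlenderAdapter implements SwitchableAppliance {
    private final Blender blender;
    BlenderAdapter(Blender blender) { this.blender = blender; }
    public void switchOff() { blender.stopBlending(); }   // delegate to the assumed third-party method
}
PowerPoint could then hold a List<SwitchableAppliance> and call switchOff() on every element without any instanceof checks.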
For example suppose I have a class Vehicle and I wish for a subclass ConvertibleVehicle which has extra methods such as foldRoof(), turboMode(), foldFrontSeats() etc. I wish to instantiate as follows
Vehicle convertible = new ConvertibleVehicle()
so I still have access to common methods such as openDoor(), startEngine() etc. How do I design such a solution?
To clarify my two initial solutions, neither of which I am happy with are:
Have dummy methods foldRoof(), turboMode(), foldFrontSeats() which I override in ConvertibleVehicle only, leaving them to do nothing in other subclasses
Have abstract methods foldRoof(), turboMode(), foldFrontSeats() and force each subclass to provide an implementation even if it will be blank in all instances other than ConvertibleVehicle
The above seem slightly convoluted since they both pollute the base class as I add an increasing number of subclasses each with their own unique functions
After reading some of the responses perhaps there is some type of fundamental flaw in my design. Suppose I have a class VehicleFleet which takes vehicles and instructs them to drive as follows:
public VehicleFleet(Vehicle[] myVehicles) {
for (int i=0; i < myVehicles.length; i++) {
myVehicles[i].drive();
}
}
Suppose this works for dozens of subclasses of Vehicle but for ConvertibleVehicle I also want to fold the roof before driving. To do so I subclass VehicleFleet as follows:
public ConvertibleVehicleFleet(Vehicle[] myVehicles) {
for (int i=0; i < myVehicles.length; i++) {
myVehicles[i].foldRoof();
myVehicles[i].drive();
}
}
This leaves me with a messy function foldRoof() stuck in the base class where it doesn't really belong which is overridden only in the case of ConvertibleVehicle and does nothing in all the other cases. The solution works but seems very inelegant. Does this problem lend itself to a better architecture?
I'm using Java although I would hope that a general solution could be found that will work in any object oriented language and that I will not need to rely upon language specific quirks
Any objects that use Vehicle shouldn't know about ConvertibleVehicle and its specialized methods. In proper loosely coupled object-oriented design Driver would only know about the Vehicle interface. Driver might call startEngine() on a Vehicle, but it's up to subclasses of Vehicle to override startEngine() to handle varying implementations such as turning a key versus pushing a button.
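A minimal sketch of that idea (the concrete class names and the println bodies are invented here): the Driver depends only on the Vehicle abstraction, and each subclass supplies its own startEngine() implementation.
abstract class Vehicle {
    abstract void startEngine();
}

class KeyStartCar extends Vehicle {
    void startEngine() { System.out.println("turning the key"); }
}

class PushButtonCar extends Vehicle {
    void startEngine() { System.out.println("pushing the start button"); }
}

class Driver {
    void drive(Vehicle vehicle) {
        vehicle.startEngine();   // the Driver never needs to know the concrete type
    }
}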
Consider reviewing the following two links which should help to explain this concept:
http://en.wikipedia.org/wiki/Liskov_substitution_principle
http://en.wikipedia.org/wiki/Open/closed_principle
Consider posting a real world problem that you feel leads to the dilemma you describe here and someone will be more than happy to demonstrate a better approach.
I've done this in similar situations.
Option A)
If the specialized operations are part of the same sequence as a base operation (e.g. ConvertibleVehicle needs to fold its roof before it can drive) then just put the specialized operation inside the base operation.
abstract class Vehicle {
    public abstract void drive();
}

class ConvertibleVehicle extends Vehicle {
    public void drive() {
        this.foldRoof();
        .... // drive
    }

    private void foldRoof() {
        ....
    }
}
So the effect of driving a fleet will be that some of them fold their roof before being driven.
for( Vehicle v : vehicleFleet ) {
v.drive();
}
The specialized method is not exposed in the object's public interface but is called when needed.
Option B)
If the specialized operations are not part of the same sequence and must be called under certain "special" circumstances, then let a specialized version of the client call those specialized operations. Warning: this is not as pure nor as loosely coupled, but when both objects (the client and the service) are created by the same "condition" or builder, then most of the time it is OK.
class Vehicle {
    public void drive() {
        ....
    }
}

class ConvertibleVehicle extends Vehicle {
    // specialized version may override base operation or may not.
    public void drive() {
        ...
    }

    public void foldRoof() { // specialized operation
        ...
    }
}
Almost the same as the previous example, only in this case foldRoof is public also.
The difference is that I need a specialized client:
// Client ( base handler )
public class FleetHandler {
    public void handle( Vehicle [] fleet ) {
        for( Vehicle v : fleet ) {
            v.drive();
        }
    }
}

// Specialized client ( sophisticated handler, that is )
public class RoofAwareFleetHandler extends FleetHandler {
    public void handle( Vehicle [] fleet ) {
        for( Vehicle v : fleet ) {
            // There are two options.
            // Either all vehicles are ConvertibleVehicles (risky):
            //     ((ConvertibleVehicle) v).foldRoof();
            // Or only some of them are (safer):
            if( v instanceof ConvertibleVehicle ) {
                ((ConvertibleVehicle) v).foldRoof();
            }
            v.drive();
        }
    }
}
That instanceof looks kind of ugly there, but it may be inlined by a modern VM.
The point here is that only the specialized client knows about, and can invoke, the specialized methods. That is, only RoofAwareFleetHandler can invoke foldRoof() on ConvertibleVehicle.
The final code doesn't change ...
public class Main {
    public static void main( String [] args ) {
        FleetHandler fleetHandler = .....
        Vehicle [] fleet = ....
        fleetHandler.handle( fleet );
    }
}
Of course, I always make sure the fleet handler and the array of Vehicles are compatible (probably using an abstract factory or builder).
I hope this helps.
This is a good question. What it implies is that you have (or expect to have) code that asks a Vehicle to (for instance) foldRoof(). And that's a problem, because most vehicles shouldn't fold their roofs. Only code that knows it's dealing with a ConvertibleVehicle should call that method, which means it is a method that should be only in the ConvertibleVehicle class. It's better this way; as soon as you try to call Vehicle.foldRoof(), your editor will tell you it can't be done. Which means you either need to arrange your code so that you know you're dealing with a ConvertibleVehicle, or cancel the foldRoof() call.
I think most people are missing the point of Delta's question. It looks to me like he/she isn't asking about what inheritance is. He/She is asking about subclasses implementing functionality that is not a natural fit for a base class, and the resulting mess that can ensue. I.e. the pushing of specific methods / functionality up the hierarchy chain, or requiring that subclasses implement a contract for functionality that isn't a natural fit.
There is also the matter of whether it is valuable to be able to treat a base class like the subclass in every case (to avoid casting and use them interchangeably). Edit: this is called the Liskov substitution principle (thanks for reminding me, Kyle).
This is just what subclassing does: adds functionality not present in a base class.
class MyVehicle : public Vehicle {
public:
void MyNewFunction()
...
There are two (really just) different flavors of inheritance: public and private, reflecting the Is-A and Has-A relationships respectively. With public inheritance, you're directly adding stuff to a class. If I have class Animal with methods Eat() and Walk(), I may make a subclass called Cat which has the method Purr(). A Cat then has public methods Eat, Walk, and Purr.
In the case of a Stack based on a LinkedList however, I may say that a Stack HAS-A LinkedList internally. As such, I do not expose any features of the base class publicly; I retain them as private and have to explicitly offer whatever I choose as public. A list may have a method Insert(), but for the Stack, I restrict the implementation and re-expose it as Push(). No previous public methods are exposed.
In C++, this is defined by the access modifier given before the base class. Above, I'm using public inheritance. Here, I use private inheritance:
class MyVehicle : private Engine {
This reflects that MyVehicle HAS-An Engine.
Ultimately, subclassing takes everything available in the base class and adds new stuff to it.
EDIT:
With this new information it seems that what you're really looking for is interfaces, as stated by an earlier (voted down) comment. This is one of the big problems with inheritance - granularity. One of the big complaints about C++ is its implementation of multiple inheritance (one option to accomplish this). Can you state specifically what language you're using so we can advise properly?
To add on to Kyle W. Cartmell's excellent answer, and to perhaps simplify Oscar Reyes's answer a tad...
You might want to consider having the base class define a method called prepareToDrive() where inherited classes could put any setup tasks that need to be done before starting up. Calling drive() would be the way to start everything up from the user's perspective, so we would need to refactor drive into a "setup" phase and a "go" phase.
public abstract class Vehicle {
    protected void prepareToDrive() {
        // no-op in the base class
    }

    protected abstract void go();

    public final void drive() {
        prepareToDrive();
        go();
    }
}
Now, subclasses must implement the protected method go() (really bad method name, but you get the idea), which is where they do their class-specific handling of driving.
Now, your inherited class could look like this:
public class ConvertibleVehicle extends Vehicle {
    // override setup method
    protected void prepareToDrive() {
        foldRoof();
    }

    protected void go() {
        // however this works
    }

    protected void foldRoof() {
        // ... whatever ...
    }
}
This structure would also help when you run into class TractorTrailerRig that needs to make sure the trailer is loaded and correctly attached before it can drive.
How does the user of Vehicle know it's a ConvertibleVehicle? Either they need to dynamic cast to ensure it is correct, or you've provided a method in Vehicle to get the object's real type.
In the first case the user already has a ConvertibleVehicle as part of the dynamic cast. They can just use the new pointer/reference to access ConvertibleVehicle's methods.
In the second case, where the user verifies the object's type with one of Vehicle's methods, they can just cast the Vehicle to ConvertibleVehicle and use it.
Generally, casting is a bad idea. Try to do everything with the base class pointer. Your car example doesn't work well because the methods are too low level; build higher level virtual functions.
All that said, I have needed to call a derived class's methods from the base class. I could have cast to the derived class, but it was involved in a framework and would have required much more effort. The old adage "all problems can be solved with one more layer of indirection" is how I solved this: I called a virtual method in the base class with the 'name' of the function I wanted to call. 'Name' can be a string or an integer depending on your needs. It's slower, but you should only need to do it rarely, if your class hierarchy is expressive enough.
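A hedged Java sketch of that "one more layer of indirection" idea (all names here are made up; the original answer is language-agnostic): the base class exposes a generic perform(name) hook, and derived classes handle the names they understand.
abstract class Vehicle {
    // Returns true if the named operation was recognised and performed.
    boolean perform(String operation) {
        return false;   // the base class knows no special operations
    }
    abstract void drive();
}

class ConvertibleVehicle extends Vehicle {
    @Override boolean perform(String operation) {
        if ("foldRoof".equals(operation)) {
            foldRoof();
            return true;
        }
        return super.perform(operation);
    }
    private void foldRoof() { System.out.println("folding roof"); }
    @Override void drive() { System.out.println("driving convertible"); }
}
Calling code holding only a Vehicle reference can then attempt vehicle.perform("foldRoof") before vehicle.drive(), without casting.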
Having ConvertibleVehicle subclass Vehicle and add its own methods as you describe is perfectly fine. That part of the design is OK. The trouble you have is with fleet. ConvertibleFleet should not be a subclass of VehicleFleet. An example will show you why. Let's say VehicleFleet is like this:
public class VehicleFleet {
// other stuff...
public void add(Vehicle vehicle) {
// adds to collection...
}
}
This is perfectly fine and sensible, you can add any Vehicle or subclass of it to a VehicleFleet. Now, let's say we also have another kind of vehicle:
public class TruckVehicle extends Vehicle {
// truck-specific methods...
}
We can also add this to a VehicleFleet since it's a vehicle. The problem is this: if ConvertibleFleet is a subclass of VehicleFleet, that means we can also add trucks to ConvertibleFleet. That's wrong. A ConvertibleFleet is not a proper subclass, since an operation that's valid for its parent (adding a truck) is not valid for the child.
The typical solution is to use a type parameter:
public class VehicleFleet<T extends Vehicle> {
void add(T vehicle) {
// add to collection...
}
}
This will let you define fleets specific to certain vehicle types. Note that this also means there is no "base" VehicleFleet class that you can pass to functions that don't care what kind of vehicle the fleet has. This can be remedied using another layer of base class (or interface):
public interface VehicleFleetBase {
    Vehicle find(String name);

    // note that 'add' methods or methods that pass in vehicles to the fleet
    // should *not* be in here
}

public class VehicleFleet<T extends Vehicle> implements VehicleFleetBase {
    void add(T vehicle) {
        // add to collection...
    }

    public Vehicle find(String name) {
        // look up vehicle...
    }
}
For methods that are pulling vehicles out of fleets and don't care what kind they are, you can pass around VehicleFleetBase. Methods that need to insert vehicles use VehicleFleet<T> which is safely strongly-typed.