Abstract Factory Pattern implemented as Interfaces - java

I'm curious about the abstract factory pattern. Since Java 8 we have default methods; does that mean we can replace our abstract classes with interfaces? The one downside I can see is the case where we need a non-static, non-final field, which we can't have in an interface. Could you give me some examples (besides the one I listed) where old-fashioned factories have more advantages?

You technically can, but you shouldn't.
The default implementation on an interface is a tool with a few very specific purposes: primarily, adding functionality to an interface that has likely been implemented by clients outside of your control, or to an interface that has been implemented so many times that re-implementing the new method everywhere would be onerous.
They are not intended as a replacement for (or even a supplement to) abstract classes when you are extending some common parent's behavior.
Now, that said, the abstract factory pattern has little to do with Java's use of the abstract keyword. The abstract factory pattern is about hiding (or abstracting away) the concrete factory implementation a given client is actually using to produce an object. What the factory's factory methods are defined as returning may be a concrete class, an abstract class, or an interface.
So, for example-
Suppose you have some class, GuiPainter. It has a method, #paintWindow.
Under the covers, you've introduced a Window, with OS-specific implementations like AppleWindow, AndroidWindow, UbuntuWindow, and so on. Each Window implementation varies a little in how it needs to be constructed.
One approach would be to construct GuiPainter with an AppleWindowFactory, an AndroidWindowFactory, an UbuntuWindowFactory (and so on), along with a means to find the OS and decide which factory to use. However, all GuiPainter really wants is any instance of Window; it has no other OS-specific knowledge.
So, instead, we introduce a WindowFactory, which returns a Window. WindowFactory is a Factory which has that knowledge of discovering the OS and deciding which of the concrete Window factories to use, abstracting that responsibility away from GuiPainter.
Window itself might be a concrete class with a single implementation and just configuration difference based on OS. It might be an abstract class with OS-specific implementations (like AppleWindow, AndroidWindow, etc). It might even be an Interface which is implemented anonymously by the factories. What Window is doesn't change that the client no longer has to worry about OS specific nonsense to get the window it wants.
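Purely as an illustration (all class and method names below are invented for this sketch, not taken from any real API), the arrangement might look something like this:
interface Window { void draw(); }

class AppleWindow implements Window { public void draw() { /* Cocoa-specific drawing */ } }
class UbuntuWindow implements Window { public void draw() { /* X11-specific drawing */ } }

interface WindowFactory {
    Window createWindow();

    // Hides OS discovery from GuiPainter; callers just ask for "the" factory.
    static WindowFactory forCurrentOs() {
        String os = System.getProperty("os.name").toLowerCase();
        if (os.contains("mac")) {
            return AppleWindow::new;
        }
        return UbuntuWindow::new;   // fallback for this sketch
    }
}

class GuiPainter {
    private final WindowFactory factory;
    GuiPainter(WindowFactory factory) { this.factory = factory; }
    void paintWindow() { factory.createWindow().draw(); } // no OS-specific knowledge here
}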
Does that make sense?

Yes, and it's a widely used technique.
You'll often stumble upon public interfaces, e.g:
public interface Cache<K, V> {
public Optional<V> get(K k);
public V put(K k, V v);
// ...
}
and hidden (i.e. package-private or nested private) implementations.
class SoftCache<K, V> implements Cache<K, V> {
    private final Map<K, SoftReference<V>> dataMap;
    // ...
}
class WeakCache<K, V> implements Cache<K, V> {
    private final Map<K, WeakReference<V>> dataMap;
    // ...
}
There is no need for multiple implementations; the pattern is valid and fully applicable even with just one subclass.
Your classes using your caching system do not care about the implementation details. They only care about the behavior exposed, and this is describable very well through an interface.
You are talking about default and factory methods, but they really differ from each other, and I feel you are mixing things up a little bit.
default methods were mostly added because if you have 20 classes implementing MyInterface and you add a method within your interface, it'd be an extremely painful job to implement a behavior (which is usually the same across all classes) in 20 different places.
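For instance, a minimal sketch (MyInterface and its methods are made up here):
interface MyInterface {
    void process(String input);

    // Added later: the 20 existing implementing classes keep compiling,
    // because they inherit this default behaviour instead of being forced
    // to implement the new method themselves.
    default void processAll(java.util.List<String> inputs) {
        for (String input : inputs) {
            process(input);
        }
    }
}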
I feel Java 8/9+ is moving heavily towards this pattern: factory methods within interfaces. Take as examples the APIs Set.of(...), Map.of(...), and more.
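Applied to the Cache example above, that could look roughly like this (the newSoftCache/newWeakCache names are illustrative, and it assumes the hidden SoftCache/WeakCache classes have suitable constructors):
import java.util.Optional;

public interface Cache<K, V> {
    Optional<V> get(K k);
    V put(K k, V v);

    // Static factory methods: callers never see SoftCache or WeakCache,
    // which can stay package-private.
    static <K, V> Cache<K, V> newSoftCache() {
        return new SoftCache<>();
    }

    static <K, V> Cache<K, V> newWeakCache() {
        return new WeakCache<>();
    }
}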
Take java.util.Stream<T> as an example.
You usually use it this way (for Objects):
private Stream<Element> stream;
or
private IntStream intStream;
You do not care whether the stream instance you are actually holding is a Head, an OfRef, or anything else.
These are hidden details, unreachable from your code.
Yet, the interface Stream<T> does expose the factory method of (among others):
/**
 * Returns a sequential {@code Stream} containing a single element.
 *
 * @param t the single element
 * @param <T> the type of stream elements
 * @return a singleton sequential stream
 */
public static <T> Stream<T> of(T t) {
    return StreamSupport.stream(new Streams.StreamBuilderImpl<>(t), false);
}
There is nothing wrong with having your interface expose factory methods instead of using abstract classes, but at the end of the day it depends entirely on you and how you feel most comfortable doing things.
It's also worth noting that Java usually uses this pattern:
interface > abstract class > other classes
See also java.util.Collection<E> and its sub-classes.


How do I avoid breaking the Liskov substitution principle with a class that implements multiple interfaces?

Given the following class:
class Example implements Interface1, Interface2 {
...
}
When I instantiate the class using Interface1:
Interface1 example = new Example();
...then I can call only the Interface1 methods, and not the Interface2 methods, unless I cast:
((Interface2) example).someInterface2Method();
Of course, to make this runtime safe, I should also wrap this with an instanceof check:
if (example instanceof Interface2) {
((Interface2) example).someInterface2Method();
}
I'm aware that I could have a wrapper interface that extends both interfaces, but then I could end up with multiple interfaces to cater for all the possible permutations of interfaces that can be implemented by the same class. The Interfaces in question do not naturally extend one another so inheritance also seems wrong.
Does the instanceof/cast approach break LSP as I am interrogating the runtime instance to determine its implementations?
Whichever implementation I use seems to have some side-effect either in bad design or usage.
I'm aware that I could have a wrapper interface that extends both interfaces, but then I could end up with multiple interfaces to cater for all the possible permutations of interfaces that can be implemented by the same class
I suspect that if you're finding that lots of your classes implement different combinations of interfaces then either: your concrete classes are doing too much; or (less likely) your interfaces are too small and too specialised, to the point of being useless individually.
If you have good reason for some code to require something that is both an Interface1 and an Interface2, then absolutely go ahead and make a combined version that extends both. If you struggle to think of an appropriate name for this (no, not FooAndBar) then that's an indicator that your design is wrong.
Absolutely do not rely on casting anything. It should only be used as a last resort and usually only for very specific problems (e.g. serialization).
My favourite and most-used design pattern is the decorator pattern. As such most of my classes will only ever implement one interface (except for more generic interfaces such as Comparable). I would say that if your classes are frequently/always implementing more than one interface then that's a code smell.
If you're instantiating the object and using it within the same scope then you should just be writing
Example example = new Example();
Just so it's clear (I'm not sure if this is what you were suggesting), under no circumstances should you ever be writing anything like this:
Interface1 example = new Example();
if (example instanceof Interface2) {
((Interface2) example).someInterface2Method();
}
Your class can implement multiple interfaces fine, and it is not breaking any OOP principles. On the contrary, it is following the interface segregation principle.
It is confusing why you would have a situation where something of type Interface1 is expected to provide someInterface2Method(). That is where your design is wrong.
Think about it in a slightly different way: Imagine you have another method, void method1(Interface1 interface1). It can't expect interface1 to also be an instance of Interface2. If it was the case, the type of the argument should have been different. The example you have shown is precisely this, having a variable of type Interface1 but expecting it to also be of type Interface2.
If you want to be able to call both methods, you should have the type of your variable example set to Example. That way you avoid the instanceof and type casting altogether.
If your two interfaces Interface1 and Interface2 are not that loosely coupled, and you will often need to call methods from both, maybe separating the interfaces wasn't such a good idea, or maybe you want to have another interface which extends both.
In general (although not always), instanceof checks and type casts often indicate some OO design flaw. Sometimes the design would fit for the rest of the program, but you would have a small case where it is simpler to type cast rather than refactor everything. But if possible you should always strive to avoid it at first, as part of your design.
You have two different options (I bet there are a lot more).
The first is to create your own interface which extends the other two:
interface Interface3 extends Interface1, Interface2 {}
And then use that throughout your code:
public void doSomething(Interface3 interface3){
...
}
The other way (and in my opinion the better one) is to use generics per method:
public <T extends Interface1 & Interface2> void doSomething(T t){
...
}
The latter option is in fact less restricted than the former, because the generic type T is inferred at each call site and thus leads to less coupling (a class doesn't have to implement a specific grouping interface, like the first example).
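A small self-contained sketch of the second option, using the question's method names, to show that both calls compile with no cast:
interface Interface1 { void someInterface1Method(); }
interface Interface2 { void someInterface2Method(); }

class Example implements Interface1, Interface2 {
    public void someInterface1Method() { System.out.println("one"); }
    public void someInterface2Method() { System.out.println("two"); }
}

class Demo {
    // Both calls compile without a cast, because the bound guarantees
    // the argument is an Interface1 AND an Interface2.
    static <T extends Interface1 & Interface2> void doSomething(T t) {
        t.someInterface1Method();
        t.someInterface2Method();
    }

    public static void main(String[] args) {
        doSomething(new Example()); // T is inferred as Example at the call site
    }
}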
The core issue
Slightly tweaking your example so I can address the core issue:
public void DoTheThing(Interface1 example)
{
if (example instanceof Interface2)
{
((Interface2) example).someInterface2Method();
}
}
So you defined the method DoTheThing(Interface1 example). This is basically saying "to do the thing, I need an Interface1 object".
But then, in your method body, it appears that you actually need an Interface2 object. Then why didn't you ask for one in your method parameters? Quite obviously, you should have been asking for an Interface2.
What you're doing here is assuming that whatever Interface1 object you get will also be an Interface2 object. This is not something you can rely on. You might have some classes which implement both interfaces, but you might as well have some classes which only implement one and not the other.
There is no inherent requirement whereby Interface1 and Interface2 need to both be implemented on the same object. You can't know (nor rely on the assumption) that this is the case.
Unless you define the inherent requirement and apply it.
interface InterfaceBoth extends Interface1, Interface2 {}
public void DoTheThing(InterfaceBoth example)
{
example.someInterface2Method();
}
In this case, you've required the InterfaceBoth object to implement both Interface1 and Interface2. So whenever you ask for an InterfaceBoth object, you can be sure to get an object which implements both Interface1 and Interface2, and thus you can use methods from either interface without needing to cast or check the type.
You (and the compiler) know that this method will always be available, and there's no chance of this not working.
Note: You could've used Example instead of creating the InterfaceBoth interface, but then you would only be able to use objects of type Example and not any other class which would implement both interfaces. I assume you're interested in handling any class which implements both interfaces, not just Example.
Deconstructing the issue further.
Look at this code:
ICarrot myObject = new Superman();
If you assume this code compiles, what can you tell me about the Superman class? That it clearly implements the ICarrot interface. That is all you can tell me. You have no idea whether Superman implements the IShovel interface or not.
So if I try to do this:
myObject.SomeMethodThatIsFromSupermanButNotFromICarrot();
or this:
myObject.SomeMethodThatIsFromIShovelButNotFromICarrot();
Would you be surprised if I told you this code compiles? You should be, because this code doesn't compile.
You may say "but I know that it's a Superman object which has this method!". But then you'd be forgetting that you only told the compiler it was an ICarrot variable, not a Superman variable.
You may say "but I know that it's a Superman object which implements the IShovel interface!". But then you'd be forgetting that you only told the compiler it was an ICarrot variable, not a Superman or IShovel variable.
Knowing this, let's look back at your code.
Interface1 example = new Example();
All you've said is that you have an Interface1 variable.
if (example instanceof Interface2) {
((Interface2) example).someInterface2Method();
}
It makes no sense for you to assume that this Interface1 object also happens to implement a second, unrelated interface. Even if this code works on a technical level, it is a sign of bad design: the developer is expecting some inherent correlation between two interfaces without actually having created that correlation.
You may say "but I know I'm putting an Example object in, the compiler should know that too!" but you'd be missing the point that if this were a method parameter, you would have no way of knowing what the callers of your method are sending.
public void DoTheThing(Interface1 example)
{
if (example instanceof Interface2)
{
((Interface2) example).someInterface2Method();
}
}
When other callers call this method, the compiler is only going to stop them if the passed object does not implement Interface1. The compiler is not going to stop someone from passing an object of a class which implements Interface1 but does not implement Interface2.
Your example does not break LSP, but it seems to break SRP. If you encounter a case where you need to cast an object to its second interface, the method that contains such code can be considered busy (it is doing more than one thing).
Implementing 2 (or more) interfaces in a class is fine. Deciding which interface to use as its data type depends entirely on the context of the code that will use it.
Casting is fine, especially when changing context.
class Payment implements Expirable, Limited {
    /* ... */
}
class PaymentProcessor {
    // The checkers this processor delegates to
    private final ExpirationChecker expirationChecker = new ExpirationChecker();
    private final LimitChecker limitChecker = new LimitChecker();

    // Using Payment here because I'm working with payments.
    public void process(Payment payment) {
        boolean expired = expirationChecker.check(payment);
        boolean pastLimit = limitChecker.check(payment);
        if (!expired && !pastLimit) {
            acceptPayment(payment);
        }
    }
}
class ExpirationChecker {
    // This is the `Expirable` world, so I'm using Expirable here
    public boolean check(Expirable expirable) {
        // code
    }
}
class LimitChecker {
    // This class is about checking limits, that's why I'm using `Limited` here
    public boolean check(Limited limited) {
        // code
    }
}
Usually, many, client-specific interfaces are fine, and somewhat part of the Interface segregation principle (the "I" in SOLID). Some more specific points, on a technical level, have already been mentioned in other answers.
Particularly that you can go too far with this segregation, by having a class like
class Person implements FirstNameProvider, LastNameProvider, AgeProvider ... {
    @Override String getFirstName() {...}
    @Override String getLastName() {...}
    @Override int getAge() {...}
    ...
}
Or, conversely, that you have an implementing class that is too powerful, as in
class Application implements DatabaseReader, DataProcessor, UserInteraction, Visualizer {
...
}
I think that the main point in the Interface Segregation Principle is that the interfaces should be client-specific. They should basically "summarize" the functions that are required by a certain client, for a certain task.
To put it that way: The issue is to strike the right balance between the extremes that I sketched above. When I'm trying to figure out interfaces and their relationships (mutually, and in terms of the classes that implement them), I always try to take a step back and ask myself, in an intentionally naïve way: Who is going to receive what, and what is he going to do with it?
Regarding your example: When all your clients always need the functionality of Interface1 and Interface2 at the same time, then you should consider either defining an
interface Combined extends Interface1, Interface2 { }
or not have different interfaces in the first place. On the other hand, when the functionalities are completely distinct and unrelated and never used together, then you should wonder why the single class is implementing them at the same time.
At this point, one could refer to another principle, namely Composition over inheritance. Although it is not classically related to implementing multiple interfaces, composition can also be favorable in this case. For example, you could change your class to not implement the interfaces directly, but only provide instances that implement them:
class Example {
Interface1 getInterface1() { ... }
Interface2 getInterface2() { ... }
}
It looks a bit odd in this Example (sic!), but depending on the complexity of the implementation of Interface1 and Interface2, it can really make sense to keep them separated.
Edited in response to the comment:
The intention here is not to pass the concrete class Example to methods that need both interfaces. A case where this could make sense is rather when a class combines the functionalities of both interfaces, but does not do so by directly implementing them at the same time. It's hard to make up an example that does not look too contrived, but something like this might bring the idea across:
interface DatabaseReader { String read(); }
interface DatabaseWriter { void write(String s); }
class Database {
    DatabaseConnection connection = create();
    DatabaseReader reader = createReader(connection);
    DatabaseWriter writer = createWriter(connection);
    DatabaseReader getReader() { return reader; }
    DatabaseWriter getWriter() { return writer; }
}
The client will still rely on the interfaces. Methods like
void create(DatabaseWriter writer) { ... }
void read (DatabaseReader reader) { ... }
void update(DatabaseReader reader, DatabaseWriter writer) { ... }
could then be called with
create(database.getWriter());
read (database.getReader());
update(database.getReader(), database.getWriter());
respectively.
With the help of various posts and comments on this page, a solution has been produced, which I feel is correct for my scenario.
The following shows the iterative changes to the solution to meet SOLID principles.
Requirement
To produce the response for a web service, key + object pairs are added to a response object. There are lots of different key + object pairs that need to be added, each of which may have unique processing required to transform the data from the source to the format required in the response.
From this it is clear that whilst the different key / value pairs may have different processing requirements to transform the source data to the target response object, they all have a common goal of adding an object to the response object.
Therefore, the following interface was produced in solution iteration 1:
Solution Iteration 1
public interface ResponseObjectProvider<T, S> {
    void addObject(T targetObject, S sourceObject, String targetKey);
}
Any developer that needs to add an object to the response can now do so using an existing implementation that matches their requirement, or add a new implementation given a new scenario.
This is great, as we have a common interface which acts as a contract for this common practice of adding response objects.
However, one scenario requires that the target object should be taken from the source object given a particular key, "identifier".
There are options here; the first is to add an implementation of the existing interface as follows:
public class GetIdentifierResponseObjectProvider<T extends Map, S extends Map> implements ResponseObjectProvider<T, S> {
public void addObject(final T targetObject, final S sourceObject, final String targetKey) {
targetObject.put(targetKey, sourceObject.get("identifier"));
}
}
This works; however, this scenario could be required for other source object keys ("startDate", "endDate", etc.), so this implementation should be made more generic to allow for reuse in this scenario.
Additionally, other implementations may require more context information to perform the addObject operation, so a new generic type should be added to cater for this.
Solution Iteration 2
public interface ResponseObjectProvider<T, S, U> {
    void addObject(T targetObject, S sourceObject, String targetKey);
    void setParams(U params);
    U getParams();
}
This interface caters for both usage scenarios: the implementations that require additional params to perform the addObject operation and the implementations that do not.
However, considering the latter usage scenario, the implementations that do not require additional parameters will break the SOLID Interface Segregation Principle, as these implementations will override the getParams and setParams methods but not implement them. e.g.:
public class GetObjectBySourceKeyResponseObjectProvider<T extends Map, S extends Map, U extends String> implements ResponseObjectProvider<T, S, U> {
    public void addObject(final T targetObject, final S sourceObject, final String targetKey) {
        targetObject.put(targetKey, sourceObject.get(getParams()));
    }
    public void setParams(U params) {
        //unimplemented method
    }
    public U getParams() {
        //unimplemented method
        return null;
    }
}
Solution Iteration 3
To fix the Interface Segregation issue, the getParams and setParams interface methods were moved into a new Interface:
public interface ParametersProvider<T> {
void setParams(T params);
T getParams();
}
The implementations that require parameters can now implement the ParametersProvider interface:
public class GetObjectBySourceKeyResponseObjectProvider<T extends Map, S extends Map, U extends String> implements ResponseObjectProvider<T, S>, ParametersProvider<U> {
    private U params;
    public void setParams(U params) {
        this.params = params;
    }
    public U getParams() {
        return this.params;
    }
    public void addObject(final T targetObject, final S sourceObject, final String targetKey) {
        targetObject.put(targetKey, sourceObject.get(params));
    }
}
This solves the Interface Segregation issue but causes two more issues... If the calling client wants to program to an interface, i.e:
ResponseObjectProvider responseObjectProvider = new GetObjectBySourceKeyResponseObjectProvider<>();
Then the addObject method will be available to the instance, but NOT the getParams and setParams methods of the ParametersProvider interface... To call these a cast is required, and to be safe an instanceof check should also be performed:
if(responseObjectProvider instanceof ParametersProvider) {
((ParametersProvider)responseObjectProvider).setParams("identifier");
}
Not only is this undesirable it also breaks the Liskov Substitution Principle - "if S is a subtype of T, then objects of type T in a program may be replaced with objects of type S without altering any of the desirable properties of that program"
i.e. if we replaced an implementation of ResponseObjectProvider that also implements ParametersProvider with an implementation that does not implement ParametersProvider, then this could alter some of the desirable properties of the program... Additionally, the client needs to be aware of which implementation is in use to call the correct methods.
An additional problem is the usage for calling clients. If the calling client wanted to use an instance that implements both interfaces to perform addObject multiple times, the setParams method would need to be called before addObject... This could cause avoidable bugs if care is not taken when calling.
Solution Iteration 4 - Final Solution
The interfaces produced from Solution Iteration 3 solve all of the currently known usage requirements, with some flexibility provided by generics for implementation using different types. However, this solution breaks the Liskov Substitution Principle and has a non-obvious usage of setParams for the calling client
The solution is to have two separate interfaces, ParameterisedResponseObjectProvider and ResponseObjectProvider.
This allows the client to program to an interface, and would select the appropriate interface depending on whether the objects being added to the response require additional parameters or not
The new interface was first implemented as an extension of ResponseObjectProvider:
public interface ParameterisedResponseObjectProvider<T,S,U> extends ResponseObjectProvider<T, S> {
void setParams(U params);
U getParams();
}
However, this still had the usage issue, where the calling client would first need to call setParams before calling addObject, and it also made the code less readable.
So the final solution has two separate interfaces defined as follows:
public interface ResponseObjectProvider<T, S> {
void addObject(T targetObject, S sourceObject, String targetKey);
}
public interface ParameterisedResponseObjectProvider<T,S,U> {
void addObject(T targetObject, S sourceObject, String targetKey, U params);
}
This solution solves the breaches of Interface Segregation and Liskov Substitution principles and also improves the usage for calling clients and improves the readability of the code.
It does mean that the client needs to be aware of the different interfaces, but since the contracts are different this seems to be a justified decision especially when considering all the issues that the solution has avoided.
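A rough usage sketch of the final two interfaces (the provider implementation names and map types below are hypothetical; the point is only that each client picks the interface matching its needs, with no casts and no setParams ordering to remember):
// Hypothetical client code; assumes map-based implementations of each interface exist.
void buildResponse(Map<String, Object> response, Map<String, Object> source) {
    ResponseObjectProvider<Map<String, Object>, Map<String, Object>> plainProvider =
            new GetIdentifierResponseObjectProvider<>();
    plainProvider.addObject(response, source, "id");

    // The parameterised variant takes the extra context directly in addObject,
    // so there is no separate setParams call to forget.
    ParameterisedResponseObjectProvider<Map<String, Object>, Map<String, Object>, String> keyedProvider =
            new GetBySourceKeyProvider<>();   // hypothetical implementation of the parameterised interface
    keyedProvider.addObject(response, source, "startDate", "startDate");
}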
The problem you describe often comes about through over-zealous application of the Interface Segregation Principle, encouraged by languages' inability to specify that members of one interface should, by default, be chained to static methods which could implement sensible behaviors.
Consider, for example, a basic sequence/enumeration interface and the following behaviors:
Produce an enumerator which can read out the objects if no other iterator has yet been created.
Produce an enumerator which can read out the objects even if another iterator has already been created and used.
Report how many items are in the sequence
Report the value of the Nth item in the sequence
Copy a range of items from the object into an array of that type.
Yield a reference to an immutable object that can accommodate the above operations efficiently with contents that are guaranteed never to change.
I would suggest that such abilities should be part of the basic sequence/enumeration interface, along with a method/property to indicate which of the above operations are meaningfully supported. Some kinds of single-shot on-demand enumerators (e.g. an infinite truly-random sequence generator) might not be able to support any of those functions, but segregating such functions into separate interfaces will make it much harder to produce efficient wrappers for many kinds of operations.
One could produce a wrapper class that would accommodate all of the above operations, though not necessarily efficiently, on any finite sequence which supports the first ability. If, however, the class is being used to wrap an object that already supports some of those abilities (e.g. access the Nth item), having the wrapper use the underlying behaviors could be much more efficient than having it do everything via the second function above (e.g. creating a new enumerator, and using that to iteratively read and ignore items from the sequence until the desired one is reached).
Having all objects that produce any kind of sequence support an interface that includes all of the above, along with an indication of what abilities are supported, would be cleaner than trying to have different interfaces for different subsets of abilities, and requiring that wrapper classes make explicit provision for any combinations they want to expose to their clients.
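As a rough Java sketch of that idea (all names invented for illustration), a single interface can carry the optional operations as defaults and report which ones it actually supports:
import java.util.Iterator;
import java.util.Set;

interface ReadableSequence<T> {
    enum Capability { RESTARTABLE_ITERATION, KNOWN_SIZE, RANDOM_ACCESS, RANGE_COPY, IMMUTABLE_SNAPSHOT }

    // Reports which of the optional operations below this sequence meaningfully supports.
    Set<Capability> capabilities();

    // Always available, even if only as a single-shot enumerator.
    Iterator<T> iterator();

    // Optional operations; a wrapper can delegate to these when the underlying
    // object supports them, and fall back to plain iteration otherwise.
    default int size() { throw new UnsupportedOperationException(); }
    default T get(int index) { throw new UnsupportedOperationException(); }
    default void copyTo(T[] dest, int offset, int count) { throw new UnsupportedOperationException(); }
    default ReadableSequence<T> snapshot() { throw new UnsupportedOperationException(); }
}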

Purpose of factory classes in Java

I have used Java for quite a long time, but I never did find out what makes factories so special. Can somebody please explain it to me? Is there any reason why I should want to implement my own factory (if it's even possible)?
Factory patterns are fantastic for decoupling classes. For example, let's assume a common interface, Animal, and some classes that implement it: Dog and Cat.
If you do not know what the user will want to generate at run time, then you cannot hard-code new Dog() anywhere. It simply won't work!
What you can do, though, is move that decision to a separate class and, at run time, pass a discriminator to the factory class. The factory will then return the object. For example:
public static Animal getInstance(String discriminator)
{
    if (discriminator.equals("Dog")) {
        return new Dog();
    }
    // etc.
    throw new IllegalArgumentException("Unknown animal: " + discriminator);
}
And the calling class simply uses:
String type = "Dog";
Animal value = Factory.getInstance(type);
This makes code extremely readable, separates decision logic from the logic performed on the value and decouples the classes via some common interface. All in all, pretty nice pattern!
IMO the biggest benefits of Factory classes are configuration and data encapsulation. API design is perhaps the most common scenario where such classes come in really handy.
By offering developers Factory classes instead of direct access, I have a way to easily prevent someone from messing with the internals of my API, for instance. Through Factory classes I also control how much you know about my infrastructure. If I don't want to tell you everything about how I provide a service internally, I will give you a Factory class instead.
For reliability and uniform design purposes, the most crucial internals should be enclosed in Factory classes.
They are a way to make sure that a random developer that comes along will not:
Break internals
Gain unwanted privileges
Break configuration formats
Another good example is the following. Imagine that the user wants to use some collections to perform actions (in this case, choose the collections used to create an inverted index). You can create an interface like:
interface CollectionsFactory {
<E> Set<E> newSet();
<K, V> Map<K, V> newMap();
}
Then you could create a class which takes a collections factory as a parameter.
public class ConcreteInvertedIndex implements InvertedIndex {
private final Map<String, Set<Document>> index;
private final Set<Document> emptyDocSet;
private final CollectionsFactory c;
public ConcreteInvertedIndex(CollectionsFactory c){
this.c = c;
this.index = c.newMap();
this.emptyDocSet = c.newSet();
}
//some methods
}
And finally it's up to the user to decide which collections he wants to use to perform such actions:
CollectionsFactory c = new CollectionsFactory() {
    @Override
    public <E> Set<E> newSet() {
        return new HashSet<E>(); // or you could return a TreeSet, for example
    }
    @Override
    public <K, V> Map<K, V> newMap() {
        return new TreeMap<K, V>(); // or you could return a HashMap, for example
    }
};
InvertedIndex ii = new ConcreteInvertedIndex(c);
Use the Factory Method pattern when
· a class can't anticipate the class of objects it must create.
· a class wants its subclasses to specify the objects it creates.
· classes delegate responsibility to one of several helper subclasses, and you want to localize the knowledge of which helper subclass is the delegate.
Factories are widely used in JDK to support the concept of SPI (service provider interface) - API intended to be implemented or extended by a third party. E.g. DocumentBuilderFactory.newInstance() uses a complicated 4-step search algorithm to find the actual implementation.
I think programmers who develop applications rather than libraries should favour a "simple is best" approach in software design.
You use them in combination with a private constructor to create singletons. This ensures that the bean (probably a service) cannot be created as a non-singleton.
You can also forbid creating the class via its constructor by making it private, so that you can add some logic to the factory method and nobody can bypass it.
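A minimal sketch of that idiom (the class name is just an example):
public class PaymentService {
    private static final PaymentService INSTANCE = new PaymentService();

    // Private constructor: the factory method below is the only way to obtain
    // an instance, so any logic placed there cannot be bypassed.
    private PaymentService() { }

    public static PaymentService getInstance() {
        return INSTANCE;
    }
}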
EDIT: I know a person who would replace the constructor in every bean with a "create" method taking parameters (in order to centralize the bean population logic), but I am not a big fan of this approach.

interface and contract : in this example

So I was trying to understand interfaces, but I almost only see articles that explain "how" to use an interface; my problem is understanding the "why":
so it's better to use an interface than to create and subclass a class, which might be useless,
so we implement the interface methods in the class, but I don't understand why this is a good thing,
Let's say :
a class like Car.java defines all the code to make the car
we create the interface Working.java with several methods like start(), stop(), etc.
we implement the methods in Diesel_Car.java, Electric_Car.java, etc.
so what does it change for Car.java? This might not be the best example, as it seems that Car should be the parent of Diesel_Car.java etc.,
but what is the point of implementing the methods in those classes?
Is there a method in Car.java that somehow "calls" the Diesel_Car.java class and its interface methods?
I've read that the interface is like a "Contract", but I only see the second part of this contract (where the methods are implemented) and I'm having some trouble imagining where the first part happens.
Thanks for your help
Let's take your example of a base class Car with Electric_Car and Diesel_Car subclasses, and expand the model a bit.
Car may have the following interfaces:
Working : with start() & stop() methods
Moving : with move(), turn() & stop() methods
The Car might contain an instance of class AirConditioner which should also implement the interface Working.
The Driver object can interact with objects that implement Working; the driver can start() or stop() them. (The driver can start or stop the car and the A/C separately.)
Also, since the Driver can walk around on his own (and does not always need a car), he should implement the interface Moving.
The object Ground can now interact with anything that implements Moving : either car or driver.
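Sketched in code (simplified, with made-up method bodies), the two halves of the contract meet like this:
interface Working {
    void start();
    void stop();
}

interface Moving {
    void move();
    void turn();
    void stop();
}

class AirConditioner implements Working {
    public void start() { System.out.println("A/C on"); }
    public void stop()  { System.out.println("A/C off"); }
}

class Car implements Working, Moving {
    private final AirConditioner airConditioner = new AirConditioner();

    public void start() { System.out.println("engine started"); }
    public void stop()  { System.out.println("engine stopped"); }  // satisfies Working and Moving
    public void move()  { System.out.println("driving"); }
    public void turn()  { System.out.println("turning"); }
}

class Driver implements Moving {
    public void move() { System.out.println("walking"); }
    public void turn() { System.out.println("turning around"); }
    public void stop() { System.out.println("standing still"); }

    // The "first part of the contract": this code only asks for a Working,
    // so it can start or stop a Car, an AirConditioner, or anything else that signs up.
    void operate(Working machine) {
        machine.start();
        machine.stop();
    }
}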
(Very) contrived example (non-generic, error handling removed, etc. for clarity).
List theList = new ArrayList();
theList is a List, in this case implemented by an ArrayList. Let's say we pass this to a third-party API that somewhere in its bowels adds something to the list.
public void frobList(List list) {
list.add(new Whatever());
}
Now let's say for some reason we want to do something unusual to items that are added to the list. We can't change the third-party API. We can, however, create a new list type.
public class FrobbableList<E> extends ArrayList<E> {
    public boolean add(E e) {
        return super.add(Frobnicator.frob(e));
    }
}
Now in our code we change the list we instantiate and call the API as before:
List theList = new FrobbableList();
frobber.frobList(theList);
If the third-party API had taken an ArrayList (the actual type) instead of a List (the interface), we'd be unable to do this as easily. By not locking the API in to a specific implementation, it provided us the opportunity to create custom behavior.
Taken further, this is a concept fundamental to extensible, debuggable, testable code. Things like dependency injection/Inversion of Control rely on coding to interfaces to function.
I am making another attempt to explain the concept of interface as a contract.
A typical usage scenario is when you'd like to sort a List of elements using java.util.Collections : <T extends java.lang.Comparable<? super T>> void sort(java.util.List<T> ts)
What does this signature mean? The sort() method will accept a java.util.List<T> of objects of type T, where T is an object that implements the interface Comparable.
So, if you would like to use Collections.sort() with a list of your objects, you will need them to implement the Comparable interface:
public interface Comparable<T> {
int compareTo(T t);
}
So, if you implement a class of type Car and want to compare cars by their weight using Collections.sort(), you will have to implement the Comparable interface/contract in the class Car.
public class Car implements Comparable<Car> {
    private int weight;
    //..other class implementation stuff
    @Override
    public int compareTo(Car otherCar) {
        if (this.weight == otherCar.weight) return 0;
        else if (this.weight > otherCar.weight) return 1;
        else return -1;
    }
}
Under the hood, Collections.sort() will call your implementation of compareTo when it sorts the list.
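So the first half of the contract is simply the call site. For example (assuming Car has a constructor that sets its weight, which isn't shown above):
// Assumes a Car(int weight) constructor.
List<Car> cars = new ArrayList<>();
cars.add(new Car(1500));
cars.add(new Car(900));
cars.add(new Car(1200));

Collections.sort(cars);   // compiles only because Car implements Comparable<Car>;
                          // sort() calls your compareTo, so cars is now lightest-first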
The contract is a concept of how classes work with each other. The idea is that the interface defines each method's name and return type, but doesn't say how it is implemented. That is done by the implementing class.
The concept is that when an interface InterfaceA defines methodA and methodB, any class implementing that interface MUST implement methodA and methodB along with its own methods. So it might work like this:
interface InterfaceA {
void methodA();
void methodB(String s);
}
public class ClassA implements InterfaceA {
public void methodA() {
System.out.println("MethodA");
}
public void methodB(String s) {
System.out.println(s);
}
}
The contract principle is that anything implementing an interface must implement the whole interface. Anything that doesn't do this must be abstract.
Hope this helps.
Design by contract (DbC), also known as programming by contract and design-by-contract programming, is an approach for designing computer software. It prescribes that software designers should define formal, precise and verifiable interface specifications for software components, which extend the ordinary definition of abstract data types with preconditions, postconditions and invariants. These specifications are referred to as "contracts", in accordance with a conceptual metaphor with the conditions and obligations of business contracts. Wikipedia
Short-cut.
If you follow the good practice of coding against interfaces, you know that the interface defines the contract all implementation classes must adhere to.
We designed Contract Java, an extension of Java in which method contracts are specified in interfaces. We identified three design goals.
First, Contract Java programs without contracts and programs with fully-satisfied contracts should behave as if they were run without contracts in Java.
Second, programs compiled with a conventional Java compiler must be able to interoperate with programs compiled under Contract Java.
Finally, unless a class declares that it meets a particular contract, it must never be blamed for failing to meet that contract. Abstractly, if the method m of an object with type t is called, the caller should only be blamed for the pre-condition contracts associated with t, and m should only be blamed for post-condition contracts associated with t.
These design goals raise several interesting questions and demand decisions that balance language design with software engineering concerns. This section describes each of the major design issues, the alternatives, our decisions, our rationale, and the ramifications of the decisions. The decisions are not orthogonal; some of the later decisions depend on earlier ones.
Contracts in Contract Java are decorations of method signatures in interfaces. Each method declaration may come with a pre-condition expression and a post-condition expression; both expressions must evaluate to booleans. The pre-condition specifies what must be true when the method is called. If it fails, the context of the method call is to blame for not using the method in a proper context. The post-condition expression specifies what must be true when the method returns. If it fails, the method itself is to blame for not establishing the promised conditions.
Contract Java does not restrict the contract expressions. Still, good programming discipline dictates that the expressions should not contribute to the result of the program. In particular, the expressions should not have any side-effects.
Both the pre- and post-condition expressions are parameterized over the arguments of the method and the pseudo-variable this. The latter is bound to the current object. Additionally, the post-condition of the contract may refer to the name of the method, which is bound to the result of the method call.
Contracts are enforced based on the type-context of the method call. If an object’s type is an interface type, the method call must meet all of the contracts in the interface. For instance, if an object implements the interface I, a call to one of I’s methods must check the pre-condition and the post-condition specified in I. If the object’s type is a class type, the object has no contractual obligations. Since a programmer can always create an interface for any class, we leave objects with class types unchecked for efficiency reasons.
For an example, consider the interface RootFloat:
interface RootFloat {
    float getValue ();
    float sqRoot ();
    @pre { this.getValue() >= 0f }
    @post { Math.abs(sqRoot * sqRoot - this.getValue()) < 0.01f }
}
It describes the interface for a float wrapper class that provides a sqRoot method. The first method, getValue, has no contracts. It accepts no arguments and returns the unwrapped float. The sqRoot method also accepts no arguments, but has a contract. The pre-condition asserts that the unwrapped value is greater than or equal to zero. The result type of sqRoot is float. The post-condition states that the square of the result must be within 0.01 of the value of the float.
Even though the contract language is sufficiently strong to specify the complete behavior in some cases, such as the previous example, total or even partial correctness is not our goal in designing these contracts. Typically, the contracts cannot express the full behavior of a method. In fact, there is a tension between the amount of information revealed in the interface and the amount of validation the contracts can satisfy.
For an example, consider this stack interface:
interface Stack {
void push (int i);
int pop ();
}
With only push and pop operations available in the interface, it is impossible to specify that, after a push, the top element in the stack is the element that was just pushed. But, if we augment the interface with a top operation that reveals the topmost item on the stack (without removing it), then we can specify that push adds items to the top of the stack:
interface Stack {
    void push (int x);
    @post { x = this.top() }
    int pop ();
    int top ();
}
In summary, we do not restrict the language of contracts. This makes the contract language as flexible as possible; contract expression evaluation may even contribute to the final result of a computation. Despite the flexibility of the contract language, not all desirable contracts are expressible. Some contracts are inexpressible because they may involve checking undecidable properties, while others are inexpressible because the interface does not permit enough observations.

When and where to use abstract classes and when and where to use interfaces in Java?

Abstract classes and interfaces play a very important role in Java and they have their own importance in certain situations. They possess certain special characteristics. There are some observable differences between them. Let me describe a few of them.
One of the major differences between an interface and an abstract class is that an abstract class can never be instantiated, an interface however can.
Both of them can never be declared as final obviously, because they are to be inherited by some other non-abstract class(es).
Both of them can never have static methods. Neither concrete nor abstract (abstract static methods indeed and in fact, don't exist at all).
An interface can never never have concrete methods (a method with it's actual implementation), an abstract class however can have concrete methods too.
An interface can not have constructors, an abstract class can however can have.
Two obvious questions are likely to arise here.
An abstract class can never be instantiated because it is by nature, not a fully implemented class and it's full implementation requires it to be inherited by some other non-abstract class(es). If it is so then, an abstract class should not have a constructor of it's own because a constructor implicitly returns an object of it's own class and an abstract class by itself can not be instantiated hence, it should not be able to have a constructor of it's own.
An interface somewhat looks better and more appropriate to use than an abstract class, since it imposes less restrictions than what those are imposed by an abstract class. In which very specific situations, an interface is useful and in which very specific situations, an abstract class is appropriate? Hope! the boldface letters would be taken into much consideration.
First off, you're factually wrong in a few places:
One of the major differences between an interface and an abstract class is that an abstract class can never be instantiated, an interface however can.
Wrong. An abstract class and an interface can both be instantiated anonymously.
Both of them can never be declared as final obviously, because they are to be inherited by some other non-abstract class(es).
True, although I personally see no reason why interfaces couldn't have been allowed to be final so they couldn't be extended, but that's just me. I see why they made the decision they did.
Both of them can never have static methods. Neither concrete nor abstract (abstract static methods indeed and in fact, don't exist at all).
Abstract classes can have static methods; sorry!
An interface can never never have concrete methods (a method with it's actual implementation), an abstract class however can have concrete methods too.
Yes, that's one of the primary differences between them.
An interface can not have constructors, an abstract class can however can have.
Yes, that's true.
Now, let's move on to your questions:
Your first paragraph doesn't have a question in it. What was the question there? If it was "Why allow abstract classes to have constructors if you can't instantiate them?" the answer is so child classes can use it. Here's an example
abstract class Parent {
    String name;
    int id;
    public Parent(String n, int i) { name = n; id = i; }
}
class Child extends Parent {
    float foo;
    public Child(String n, int i, float f) {
        super(n, i);
        foo = f;
    }
}
// later
Parent p = new Parent("bob", 12);       // error: Parent is abstract
Child c = new Child("bob", 12, 1.5f);   // fine!
Your second paragraph has a question but is malformed. I think you're simply missing an 'is' in there... :) The answer to it is as follows:
You use an interface when you want to define a contract. Here's a very specific example:
public interface Set<E> {
int size(); // determine size of the set
boolean isEmpty(); // determine if the set is empty or not
void add(E data); // add to the set
boolean remove(E data); // remove from the set
boolean contains(E data); // determine if set holds something
}
Five methods common to all sets.
You use an abstract class when you want to define SOME of the behavior, but still have the contract
public abstract class AbstractSet<E> implements Set<E> {
// we define the implementation for isEmpty by saying it means
// size is 0
public boolean isEmpty() { return this.size() == 0; }
// let all the other methods be determined by the implementer
}
Most of the time, when the discussion comes up between deciding if you should use interfaces or abstract classes, it ends up with definitions of how to use them, but not always why and when. Also, the other obvious concrete classes and utility classes that you may end up using aren't always brought up. Really, in my thinking, the correct way to answer the question is to determine the context you are dealing with regarding the domain or entity objects, namely: what is your use case?
From a very high level, Java consists of objects (entities or domain objects that can model objects in the real world) that communicate with each other using methods. In any event, you want to model behavior with interfaces and use abstract classes when you have inheritance.
In my personal experience, I do this using a top-down and then bottom-up approach. I start looking for inheritance by looking at the use case and seeing what classes I will need. Then I look to see if there is a superClassOrInterfaceType (since both classes and interfaces define types, I'm combining them into one word for simplicity; hopefully it doesn't make it more confusing) domain object that would encompass all the objects, as in a superClassOrInterfaceType of vehicle if I'm working on a use case dealing with subtypeClassOrInterfaceTypes like cars, trucks, jeeps, and motorcycles, for example. If there is a hierarchy relationship, then I define the superClassOrInterfaceType and subtypeClassOrInterfaceTypes.
As I said, what I generally do first is to look for a common domain superClassOrInterfaceType for the objects I'm dealing with. If so, I look for common method operations between the subtypeClassOrInterfaceTypes. If not, I look to see if there are common method implementations, because even though you may have a superClassOrInterfaceType and may have common methods, the implementations may not favor code reuse. At this point, if I have common methods, but no common implementations, I lean towards an interface. However, with this simplistic example, I should have some common methods with some common implementations between the vehicle subtypeClassOrInterfaceTypes that I can reuse code with.
On the other hand, if there is no inheritance structure, then I start from the bottom up to see if there are common methods. If there are no common methods and no common implementations, then I opt for a concrete class.
Generally, if there is inheritance with common methods and common implementations and a need for multiple subtype implementation methods in the same subtype, then I go with an abstract class, which is rare, but I do use it. If you just go with abstract classes because there is inheritance, you can run into problems if the code changes a lot. This is detailed very well in the example here: Interfaces vs Abstract Classes in Java, for the different types of domain objects of motors. One of them required a dual-powered motor, which required multiple subtype implementation methods to be used in a single subtype class.
To sum it all up, as a rule you want to define behaviors (what the objects will do) with interfaces and not in Abstract classes. Abstract classes focus on an implementation hierarchy and code reuse.
Here are some links that go into greater details on this.
Thanks Type & Gentle Class
The Magic behind Subtype Polymorphism
Maximize Flexibility with Interfaces & Abstract Classes
Interfaces vs Abstract Classes in Java

Should separation between API and Implementation be total?

Separation between API design and implementation is often recommended in large software projects. But somewhere, they have to be reconnected (i.e., the implementation has to be reconnected to the API).
The following example shows an API design and an invocation of its implementation via the INSTANCE object:
import java.util.List;
public abstract class Separation {
    public static final Separation INSTANCE = new SeparationImpl();

    // Defining a special list
    public static interface MySpecialList<T> extends List<T> {
        void specialAdd(T item);
    }

    // Creation of a special list
    public abstract <T> MySpecialList<T> newSpecialList(Class<T> c);

    // Merging of a special list
    public abstract <T> MySpecialList<? extends T> specialMerge(
            MySpecialList<? super T> a, MySpecialList<? super T> b);

    // Implementation of separation
    public static class SeparationImpl extends Separation {
        @Override
        public <T> MySpecialList<T> newSpecialList(Class<T> c) {
            return ...;
        }

        @Override
        public <T> MySpecialList<? extends T> specialMerge(
                MySpecialList<? super T> a, MySpecialList<? super T> b) {
            return ...;
        }
    }
}
Some will argue that API should not refer to implementation code. Even if we separate API code from implementation via separate files, one often has to import implementation code (at least the class name) in the API.
There are techniques to avoid such references by using a string representation of the fully qualified name. The class is loaded with that string and then instantiated. It makes the code more complicated.
My question: Is there any benefit to completely separate or isolate API code from implementation code? Or is this just a purist's attempt to reach perfection with little practical benefits?
I've always understood the requirement to separate interface from implementation to mean that you don't mix the how of your implementation with the what. So in your above example, mixing API and implementation would mean exposing in the API something that was specific to how SeparationImpl implemented your API.
As an example, look at how iteration is implemented in various collection classes. There are more specific ways you can retrieve elements in specific collections (e.g. by position in an ArrayList) but those are not exposed in Collection because they're specific to how the concrete ArrayList is implemented.
I've also seen projects with huge directories of interfaces, each of which has a single concrete implementation, and each of which mechanically reproduces every method in their concrete implementation, which seems like a completely pointless "pretend" abstraction, as it's not actually providing any logical abstraction.
One technique which is often used in OSGi is to have the API in a separate module from the implementation. The API should compile by itself, avoiding any direct reference to the implementation.
Peter's and Steve's answers are enough, but I would like to add more: if you only ever have a single implementation of the interface or abstract class, then it's pointless to have an interface or abstract class, as it defeats the purpose of abstraction.
In your case I really didn't understand why you implemented Separation as an abstract class; SeparationImpl itself can be the API class, or, if you have different implementations, Separation can be an interface. If you have some common functionality, you can then have another abstract class implementing your interface, with SeparationImpl inheriting from that abstract class. The sample class hierarchy would look like:
interface Separation --> AbstractSeparation --> SeparationImpl
just like the standard collection library
interface List --> AbstractList --> ArrayList
In addition to the good points from the other authors, I would mention unit testing:
Mocking objects is far easier when you have interfaces instead of concrete classes.
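For example, production code that depends only on a small interface can be tested with a trivial fake, no mocking library required (names here are hypothetical):
// Production code depends only on this interface.
interface Clock {
    long now();
}

class SessionTimer {
    private final Clock clock;

    SessionTimer(Clock clock) { this.clock = clock; }

    boolean isExpired(long startedAtMillis, long ttlMillis) {
        return clock.now() - startedAtMillis > ttlMillis;
    }
}

// In a unit test, the interface can be faked with a one-liner; a hard-coded
// dependency on the system clock would be much harder to control.
class SessionTimerTest {
    void expiresAfterTtl() {
        Clock fixedClock = () -> 10_000L;          // lambda works: Clock has a single method
        SessionTimer timer = new SessionTimer(fixedClock);
        assert timer.isExpired(0L, 5_000L);        // 10s elapsed > 5s TTL
    }
}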
