What are the benefits of declaring an object as interface? [duplicate] - java

Possible Duplicate:
What does it mean to “program to an interface”?
I noticed that some people like to declare an object as one of the interfaces it implements even though, within the scope of the variable, it is not necessary to treat it as the interface, e.g. there is no external API that expects an interface.
For example:
Map<String, Object> someMap = new HashMap<String, Object>();
Or you can just do
HashMap<String, Object> someMap = new HashMap<String, Object>();
and avoid importing java.util.Map altogether.
What are the advantages of declaring it through an interface (first above) as opposed to the class itself (second above)?
Thanks

An interface such as Map<A, B> declares what an object can do. On the other hand, a class such as HashMap<A, B> defines how an object does what the interface declares it does.
If you declare a variable (or field, or whatever) as a Map<A, B>, you are stating that your code depends only on the contract defined by that interface; it does not depend on the specifics of an implementation.
If you declare it as a HashMap<A, B>, it is to be understood that you need that specific version of a map (for whatever reason), and that it cannot be replaced with something else.
I tend to dislike the common answers like "because you can change the implementation" simply because, after years and years of practice, you will see that it doesn't happen as often as that; the major benefit is really this subtle (but clear) expression of your intent.
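As a minimal sketch of that intent (the class and method names here are invented for illustration), declaring the field and the parameter as Map keeps everything except a single initializer independent of the concrete class:

import java.util.HashMap;
import java.util.Map;

public class WordCounter {

    // The field promises only Map behaviour; swapping HashMap for, say,
    // a TreeMap later is a one-line change confined to this initializer.
    private final Map<String, Integer> counts = new HashMap<>();

    public void add(String word) {
        counts.merge(word, 1, Integer::sum);
    }

    // Accepting the interface lets callers pass any Map implementation.
    public void addAll(Map<String, Integer> other) {
        other.forEach((key, value) -> counts.merge(key, value, Integer::sum));
    }
}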

Since it's not part of the API, it is an implementation detail. It's better to be specific in implementations; there's no point in being abstract here.
My previous answer:
Use interface or type for variable definition in java?

If the variable is not used later, there is no advantage/disadvantage. The reason you would use an interface rather than a concrete class is to allow for more flexibility, but if that variable isn't used, there is no difference from a performance perspective.

If you use Map<String, Object> someMap, you are designing to an interface rather than to an implementation, so you can switch to another implementation easily.
So your Map can point to a HashMap, a LinkedHashMap, or any other class that implements Map.
So, if you have:
Map<String, Integer> someMap = new HashMap<>();
you can change the implementation later on (if you want) to point to a LinkedHashMap:
someMap = new LinkedHashMap<>();
Whereas if you use HashMap on the LHS, you can only make it point to an object of type HashMap.
As such there is no difference in performance, but it is suggested to always design to an interface rather than to an implementation.

An interface defines which methods a class has to implement.
This way - if you want to call a method defined by an interface - you don't need to know the exact class type of an object, you only need to know that it implements a specific interface.
Example:
interface Printer {
    public void print(String text);
}

class FilePrinter implements Printer {
    public void print(String text) {
        // append the text to a file
    }
}

class ScreenPrinter implements Printer {
    public void print(String text) {
        // write the text on the screen
    }
}

class SomeClass {
    public void printSomething(Printer myPrinter) {
        myPrinter.print("Hello");
    }
}
If you call someClass.printSomething(...) it does not matter whether you pass an instance of FilePrinter or of ScreenPrinter, because the method simply does not care. It knows that the object implements the interface Printer and therefore implements its methods.
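A minimal usage sketch of that polymorphism (the demo class name is invented):

public class PrinterDemo {
    public static void main(String[] args) {
        SomeClass someClass = new SomeClass();
        // The same call site works with either implementation;
        // only the behaviour behind print(...) differs.
        someClass.printSomething(new FilePrinter());
        someClass.printSomething(new ScreenPrinter());
    }
}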
Another important point about interfaces is that a class can implement multiple interfaces.

Related

Specific Collection type returned by Convenience Factory Method in Java 9

In Java 9 we have convenience factory methods to create and instantiate immutable List, Set and Map.
However, it is unclear about the specific type of the returned object.
For ex:
List list = List.of("item1", "item2", "item3");
In this case which type of list is actually returned? Is it an ArrayList or a LinkedList or some other type of List?
The API documentation just mentions this line, without explicitly mentioning that it's a LinkedList:
The order of elements in the list is the same as the order of the
provided arguments, or of the elements in the provided array.
The class returned by List.of is one of the package-private static classes and therefore not part of the public API:
package java.util;
...
class ImmutableCollections {
    ...
    // Java 9-10
    static final class List0<E> extends AbstractImmutableList<E> {
        ...
    }
    static final class List1<E> extends AbstractImmutableList<E> {
        ...
    }
    static final class List2<E> extends AbstractImmutableList<E> {
        ...
    }
    static final class ListN<E> extends AbstractImmutableList<E> {
        ...
    }

    // Java 11+
    static final class List12<E> extends AbstractImmutableList<E> {
        ...
    }
    static final class ListN<E> extends AbstractImmutableList<E> {
        ...
    }
}
So this is not an ArrayList (nor a LinkedList). The only things you need to know are that it is immutable and that it satisfies the List interface contract.
However, it is unclear about the specific type of the returned object.
And that is all you need to know! The whole point is: these methods do return some List<Whatever> on purpose. The thing that you get back is guaranteed to fulfill the public contract denoted by the List interface. It is a list of the things given to that method, in the same order as those things were written down.
You absolutely should avoid writing any code that needs to know more about the lists returned by these methods! That is an implementation detail which should not matter to client code invoking these methods!
In that sense: your focus should be much more on the client side - for example by avoiding that raw type you are using in your example (using List without a specific generic type).
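For example, a sketch of the question's snippet with the raw type removed - the client states the element type and nothing about the implementation:

// Only the element type is declared; the implementation class stays hidden.
List<String> list = List.of("item1", "item2", "item3");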
Actually the same idea applies to Collectors.toList, for example - you get a List back and the documentation specifically says: There are no guarantees on the type, mutability, serializability, or thread-safety of the List returned. At the moment an ArrayList is returned, but obviously this can change at any point in time.
I wonder if the same should be done here - explicitly mentioning that the type is a List and nothing more. This leaves a lot of room for future implementations to decide what type to return, whichever fits best - speed, space, etc.
List.of returns a List of a special type, similar to Collections.UnmodifiableList. It is neither an ArrayList nor a LinkedList. It will throw an exception if you try to modify it.
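A quick sketch of that behaviour:

List<String> list = List.of("item1", "item2", "item3");
list.add("item4");      // throws UnsupportedOperationException
list.set(0, "other");   // throws UnsupportedOperationException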
The question has already been answered by ZhekaKozlov and GhostCat, both in terms of what the return type is (the package-private ImmutableCollections classes shown above, which are not a public API) and of the fact that these factory methods are guaranteed to fulfill the public contract denoted by the List interface.
To give a further gist of the implementation of ImmutableCollections for Set, Map and List, I will add a sample representation for List; the approach is quite similar for Map and Set.
ImmutableCollections currently implements these factory methods for the List interface via
abstract static class AbstractImmutableList<E>
which extends the AbstractList class and overrides all the mutating operations to throw an UnsupportedOperationException. This means no modifying operations are allowed on the collection, which makes these classes behave as Java immutable collections.
Furthermore, this is extended by a class
static final class ListN<E> extends AbstractImmutableList<E>
whose constructor performs certain checks on the input (for example that the elements are non-null) and stores the elements in an E[] array (an array of elements of type E); it then overrides implementations such as .get(int idx) and .contains(Object o) on top of these elements.
The story for Map and Set is similar, except that the validations are applied to the elements or to the key/value pairs. As a result you can't create the following (they throw IllegalArgumentException and NullPointerException respectively):
Set<String> set = Set.of("a", "b", "a");                              // IllegalArgumentException: duplicate element
Map<String, String> map = Map.of("key1", "value1", "key1", "value1"); // IllegalArgumentException: duplicate key
Set<String> set2 = Set.of("a", null);                                 // NullPointerException
Map<String, String> map2 = Map.of("key1", null);                      // NullPointerException

Why would a map helper extending hashmap be useful? Why not just use hashmap?

In a large java code base in my recent job, I see the below code:
public class MapHelper extends HashMap<String, Object> {

    private static final long serialVersionUID = 1L;

    public MapHelper() {
        super();
    }

    public MapHelper(MapHelper mh) {
        super(mh);
    }

    public MapHelper as_dict(String key) {
        return (MapHelper) this.get(key);
    }
}
I'm not sure how this would be useful. Are there examples you have that could shed light on the above MapHelper's usefulness?
The class does seem relatively frivolous the way it is now; however:
It lets them refer to HashMap<String, Object> as MapHelper which is shorter and guarantees consistency. See also 'Is there a Java equivalent or methodology for the typedef keyword in C++?'.
as_dict is a utility method that performs a cast. They appear to have foresight about what the Map contains. This is safer than doing the cast inline because the cast is defined in only one place. Less margin for error.
It lets them add additional functionality later without having to update the entire code-base.
Extending HashMap as a top-level class makes the generic type arguments reified, that is, they are available at runtime through reflection (see the sketch at the end of this answer). See this blog post by Neal Gafter that explains this feature in more detail.
So there are actually quite a few small but legitimate reasons to do this.
There are some examples like this in the Java API such as:
Properties extends Hashtable<Object, Object>
UIDefaults extends Hashtable<Object, Object>
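Returning to the reflection point above, here is a minimal sketch (the demo class is invented) showing that the type arguments baked into the MapHelper declaration can be read back at runtime:

import java.lang.reflect.ParameterizedType;
import java.util.Arrays;

public class ReifiedDemo {
    public static void main(String[] args) {
        // MapHelper extends HashMap<String, Object>, and that supertype is
        // recorded in the class file, so reflection can recover it.
        ParameterizedType superType =
                (ParameterizedType) MapHelper.class.getGenericSuperclass();
        // Prints: [class java.lang.String, class java.lang.Object]
        System.out.println(Arrays.toString(superType.getActualTypeArguments()));
    }
}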
It seems like the only purpose of this class is having a shortcut.
Instead of doing
Map<String, Map<String, Map<String, Map<String, Object>>>> myMap = new HashMap<String, Map<String, Map<String, Map<String, Object>>>>();
(which is hard to read)
you could use
MapHelper myHelper = new MapHelper(new MapHelper(new MapHelper(new MapHelper())));
since MapHelper fixes the generic type arguments to exactly String and Object.
Finally, imagine you want to check whether ANY element equals "1" without knowing the depth - you could write a recursive method again and again, or implement it once on the MapHelper:
if (myHelper.contains("1")) { ... }
The purpose of the class is to prevent you from having to hardcode HashMap<String, Object> every place you want to use that type of map (it increases abstraction). By extending it into MapHelper, you reduce repetition and avoid cluttering your code with repeated generic type parameters.
A better name for the class would be something that describes what the underlying map types are (without being too specific):
public class StringMapHelper extends HashMap<String, Object>
If you do it that way, the class name is still descriptive even if you end up changing the key datatype, and again, you don't have to replace the key type everywhere in your code:
public class StringMapHelper extends HashMap<FancyString, Object>

If I need serializable, should I use a concrete List (e.g. ArrayList) or (Serializable)List?

We are having a discussion in the office and cannot agree on which approach is better.
I have a class (SomeClass) with a method which receives a Serializable object. The signature is the following:
public void someMethod(Serializable serializableObject){
...
}
And I need to call this method from another class, but I have to provide it with some List as the actual parameter. There are two different approaches:
1. Serializable
private SomeClass someClass;

public void doSomething() {
    List<String> al = new ArrayList<String>();
    al.add("text");
    someClass.someMethod((Serializable) al);
}
2. ArrayList
private SomeClass someClass;

public void doSomething() {
    ArrayList<String> al = new ArrayList<String>();
    al.add("text");
    someClass.someMethod(al);
}
The benefit of the first example is that it adheres to the Java best practice which says: use an interface instead of a concrete implementation for the reference type, and any programmer reading that source will understand that we don't need any special behavior of the ArrayList. In the one place where we need its serializable behavior, we add that behavior by casting it to the Serializable interface. A programmer can also simply change the current implementation of the List to some other serializable implementation, for example LinkedList, without any side effect on this element, because we use the List interface as its reference type.
The benefit of the second example is that we refer to ArrayList as a class which has not only List behavior but also Serializable behavior. So if someone looked at this code and tried to change ArrayList to List, they would get a compile-time error, which reduces the time the programmer needs to understand what is going on there.
UPDATE: we can't change the someMethod signature. It comes from a third-party company and we use it not only for serializable Lists but also for Strings, Integers and some other Serializable objects.
You should use an interface when all you need are the methods the interface provides (this covers most cases). However, if you need more than one interface, you can use generics, but the simplest approach is to use the concrete type.
It's better to declare it as ArrayList because this combines the two interfaces - List + Serializable. You need both of them in one place.
It doesn't matter that much, but note that using interfaces should be applied more strictly for return types, and less strictly for local variables.
I would change the signature of the someMethod so that it reflects what it requires from the invoker of the method:
import java.awt.Image;
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;

public class SomeClass {

    public <T extends List<? extends Serializable> & Serializable> void someMethod(T t) {
    }

    public static void main(String[] args) {
        SomeClass test = new SomeClass();
        test.someMethod(new ArrayList<String>()); // Works
        test.someMethod(new ArrayList<Image>());  // Compile-time error, Image is not Serializable
        List<String> l = null;
        test.someMethod(l);                       // Compile-time error, List is not Serializable
    }
}
The signature of someMethod now says that you must invoke it with something that is a List, that is itself Serializable, and that contains elements which are Serializable.
In this case, I would just use List, and not worry that the compiler cannot guarantee that your object is serializable (it most likely will be anyway, if you've done things right elsewhere).
Note that methods of the following type (which accept a Serializable parameter) provide a false sense of security, because the compiler can never guarantee that the entire object graph which needs to be serialized will actually be serializable.
public void write(Serializable s);
Consider an ArrayList (serializable) which contains non-serializable objects. The signature may as well just be:
public void write(Object o);
And then you don't have to worry about all the extraneous casting.
Also consider that, although you cannot change the signature of the API you are using, you can very easily create a wrapper API which has a different signature.
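For example, a rough sketch of such a wrapper (the class and method names are invented; SomeClass and someMethod are the third-party API from the question), which confines the cast and the runtime check to one place:

import java.io.Serializable;
import java.util.List;

public class SerializationFacade {

    private final SomeClass someClass;

    public SerializationFacade(SomeClass someClass) {
        this.someClass = someClass;
    }

    // The signature we actually want; the third-party someMethod(Serializable)
    // is only ever called from here.
    public void send(List<String> list) {
        if (!(list instanceof Serializable)) {
            throw new IllegalArgumentException("list must also implement Serializable");
        }
        someClass.someMethod((Serializable) list);
    }
}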
Option 1 is generally the right thing to do. However, in this case my opinion would be to bend that and declare it as ArrayList<>. This avoids the cast and guarantees that nobody can change the implementation of the List to one that isn't Serializable.
You can't do (1), because you're not free to change the List implementation type arbitrarily (which is the whole idea of doing that): you can only use a List implementation that also implements Serializable. So you may as well express that in the code.

Refactoring advice: maps to POJOs

I currently am part of a project where there is an interface like this:
public interface RepositoryOperation {
public OperationResult execute(Map<RepOpParam, Object> params);
}
This interface has about ~100 implementers.
To call an implementer one needs to do the following:
final Map<RepOpParam, Object> opParams = new HashMap<RepOpParam, Object>();
opParams.put(ParamName.NAME1, val1);
opParams.put(ParamName.NAME2, val2);
Now I think there is obviously something wrong with anything that has a <Something, Object> generic declaration.
Currently this forces a caller of an OperationImpl to actually read the code of the operation in order to know how to build the argument map (and this is not even the worst of the problems, but I don't want to list them all since they are fairly obvious).
After some discussion I managed to convince my colleagues to let me do some refactoring.
It seems to me that the simplest 'fix' would be to change the interface like so:
public interface RepositoryOperation {
public OperationResult execute(OperationParam param);
}
After all, the concrete operations will define (extend) their own OperationParam, and the needed arguments will be visible to everybody (which is the 'normal way' to do things like that, IMHO).
So, as I see it, since the interface implementers are quite numerous I have several choices:
Try to change the interface and rewrite all the Operation calls to use objects instead of maps. This seems the cleanest, but since there are so many operations it might be too much work in practice (~2 weeks with tests, probably).
Add an additional method to the interface like so:
public interface RepositoryOperation {
public OperationResult execute(Map<String, Object> params);
public OperationResult execute(OperationParam params);
}
and fix the map calls whenever I come across them during functionality implementation.
Live with it (please no !).
So my question is.
Does anyone see a better approach for 'fixing' the maps and if you do would you fix them with method 1 or 2 or not fix them at all.
EDIT:
Thanks for the great answers. I would accept both Max's and Riduidel's answers if I could, but since I can't I'm leaning a bit more towards Riduidel's.
I can see a third way.
You have a map made of <RepOpParam, Object>. If I understand you correctly, what bothers you is the fact that there is no type checking, and obviously that's not ideal. But it is possible to move the type-checking issue from the whole parameter (your OperationParam) to the individual RepOpParam. Let me explain.
Suppose your RepOpParam interface (which currently seems to be a tagging interface) is modified like this:
public interface RepOpParam<Value> {
public Value getValue(Map<RepOpParam, Object> parameters);
}
You can then update existing code by replacing old calls to
String myName = (String) params.get(ParamName.NAME1);
with new calls to
String myName = ParamName.NAME1.getValue(params);
The obvious collateral advantage is that you can now have a default value for your parameter, hidden in its very definition.
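To make the idea concrete, a small sketch of what such a typed key could look like (the class and constant names here are invented; the project's actual ParamName could be migrated to something similar):

import java.util.Map;

// A typed key: its generic parameter records which value type it holds,
// and a default value can live right next to the key definition.
public final class TypedParam<V> implements RepOpParam<V> {

    public static final TypedParam<String>  NAME1 = new TypedParam<>("unknown");
    public static final TypedParam<Integer> SIZE  = new TypedParam<>(0);

    private final V defaultValue;

    private TypedParam(V defaultValue) {
        this.defaultValue = defaultValue;
    }

    @Override
    @SuppressWarnings("unchecked")
    public V getValue(Map<RepOpParam, Object> parameters) {
        Object raw = parameters.get(this);
        return raw != null ? (V) raw : defaultValue;
    }
}

Call sites then read String name = TypedParam.NAME1.getValue(params); with no cast and no knowledge of the map's internals.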
I have to make clear, however, that this third way is nothing more than a way to merge the two operations of your second option into one, respecting the old code's prototype while adding new powers to it. As a consequence, I would personally go the first way and rewrite all that "stuff" using modern objects (besides, consider taking a look at configuration libraries, which may lead you to interesting answers to this problem).
First of all, I think the interface is not perfect. You could add some generics to make it prettier:
public interface RepositoryOperation<P extends OperationParam, R extends OperationResult> {
    public R execute(P params);
}
Now, we will need some backward compatibility code. I'd go with this:
//We are using basic types for deprecated operations
public abstract class DeprecatedRepositoryOperation implements RepositoryOperation<OperationParam, OperationResult> {
//Each of our operations will need to implement this method
public abstract OperationResult execute(Map<String, Object> params);
//This will be the method that we can call from the outside
public OperationResult execute(OperationParam params) {
Map<String, Object> paramMap = getMapFromObj(params);
return execute(paramMap);
}
}
Here is how an old operation will look:
public class SomeOldOperation extends DeprecatedRepositoryOperation {
    public OperationResult execute(Map<String, Object> params) {
        // Same old code as was here before. Nothing changes
    }
}
New operation will be prettier:
public class DeleteOperation implements RepositoryOperation<DeleteParam, DeleteResult> {
    public DeleteResult execute(DeleteParam param) {
        database.delete(param.getID());
        ...
    }
}
But the calling code can use both functions now (an example of code):
String operationName = getOperationName();                        //="Delete"
Map<String, RepositoryOperation> operationMap = getOperations();  //=Map of all operations
OperationParam param = getParam();                                //=DeleteParam
operationMap.get(operationName).execute(param);
In case the operation is an old one, it will use the converter method from DeprecatedRepositoryOperation.
In case the operation is a new one, it will use the new public R execute(P params) function.
It sounds like you have an unnecessary and misguided abstraction. Anytime I see an interface with one method in it, I think Strategy pattern or Action pattern, depending on whether you make the decision at runtime or not.
One way to cleanup the code is have each RepositoryOperation implementation have a constructor which takes the specific arguments it needs for the execute method to run correctly. That way there is no messy casting of the Object values in the map.
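A hedged sketch of that constructor-based style (the operation name and helper are invented; RepositoryOperation, RepOpParam and OperationResult are the types from the question):

import java.util.Map;

public class DeleteUserOperation implements RepositoryOperation {

    private final long userId;

    // The operation's real input arrives typed, at construction time.
    public DeleteUserOperation(long userId) {
        this.userId = userId;
    }

    @Override
    public OperationResult execute(Map<RepOpParam, Object> params) {
        // The map is ignored; it is kept only to satisfy the existing interface.
        return doDelete(userId);
    }

    private OperationResult doDelete(long id) {
        // ... the actual repository work would go here ...
        return null;
    }
}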
If you want to keep the execute method signature, you might be able to use generics to put tighter bounds on the values of the Map.

Java, declare variable with multiple interfaces?

In Java, is it possible to declare a field/variable whose type is multiple interfaces? For example, I need to declare a Map that is also Serializable. I want to make sure the variable references a serializable map. The Map interface does not extend Serializable, but most of Map's implementations are Serializable.
I'm pretty sure the answer is no.
Follow up: I'm fully aware of creating a new interface that extends both Map and Serializable. This will not work as existing implementations (such as HashMap) do not implement my new interface.
You can do it with generics, but it's not pretty:
class MyClass<K, V, T extends Map<K, V> & Serializable> {
    T myVar;
}
There is no need to declare the field/variable like that, especially since it can only be tested at runtime and not at compile time. Create a setter and report an error should the passed Map not implement Serializable.
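A minimal sketch of that setter-with-check approach (the holder class and field names are invented):

import java.io.Serializable;
import java.util.Map;

public class Holder {

    private Map<String, String> map;

    // Fail fast at the boundary instead of trying to encode Serializable in the field's type.
    public void setMap(Map<String, String> map) {
        if (!(map instanceof Serializable)) {
            throw new IllegalArgumentException("map must also implement Serializable");
        }
        this.map = map;
    }
}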
The answers recommending that you create your own interface are of course not very practical, as doing so will actively prohibit passing in things that are Maps and Serializable but do not implement your special interface.
It's possible to do this using some generics tricks:
public <T extends Map<?,?> & Serializable> void setMap(T map)
The above code uses generics to force you to pass a map which implements both interfaces. However, note that a consequence of this is that when you actually pass it the maps, they will probably need to be either marked as serializable or of a map type which is already serializable. It also is quite a bit more difficult to read. I would document that the map must be serializable and perform the test for it.
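Usage then looks roughly like this (a sketch; HashMap and TreeMap are just two Map implementations that happen to be Serializable):

import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

public class MapConsumer {

    public <T extends Map<?, ?> & Serializable> void setMap(T map) {
        // both bounds are guaranteed by the compiler here
    }

    public static void main(String[] args) {
        MapConsumer consumer = new MapConsumer();
        consumer.setMap(new HashMap<String, String>()); // compiles
        consumer.setMap(new TreeMap<String, String>()); // compiles
        // Map<String, String> m = fetchMap();
        // consumer.setMap(m);  // would not compile: Map alone is not Serializable
    }
}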
public interface MyMap extends Map, Serializable {
}
will define a new interface that is the union of Map and Serializable.
You obviously have to then provide a suitable implementation of this (e.g. MyMapImpl) and you can then provide variable references of the type MyMap (or Map, or Serializable, depending on the requirements).
To address your clarification, you can't retrofit behaviour (e.g. a serializable map). You have to have the interface and some appropriate implementation.
I voted up Brian's answer, but wanted to add a little higher-level thought..
If you look through the SDK, you'll find that they rarely (if ever) pass around actual collection objects.
The reason for that is that it's not a very good idea. Collections are extremely unprotected.
Most of the time you want to make a copy before passing it off and pass the copy so that any modifications to the collection won't change the environment for something else that's relying on it. Also, threading becomes a nightmare--even with a synchronized collection!
I've seen two solutions, one is to always extract an array and pass it. This is how the SDK does it.
The other is to ALWAYS wrap collections in a parent class (and I mean encapsulate, not extend). I've gotten into this habit and it's very worthwhile. It doesn't really cost anything because you don't duplicate all the collection methods anyway (actually you rarely duplicate any of them). In fact, what you end up doing is moving "utility" functionality from other classes scattered all over your code into the wrapper class, which is where it should have been in the first place.
Any method with a signature that matches "method(collection,...)" should almost certainly be a member method of that collection, as should any loops that iterate over the collection.
I just have to throw this out every now and then because it's one of those things I didn't get for a while (because nobody championed the concept). It always seems like it's going to have some drawback but having done this for a while and seeing the problems it solved and code it eliminated, I can't even imagine any possible drawbacks myself, it's just all good.
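A rough sketch of that encapsulation style (the domain names are invented): the collection never escapes, and the "utility" logic lives on the wrapper instead of being scattered around the code base:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Encapsulates (does not extend) the collection; callers never touch the live List.
public class OrderBook {

    private final List<String> orderIds = new ArrayList<>();

    public synchronized void add(String orderId) {
        orderIds.add(orderId);
    }

    public synchronized boolean contains(String orderId) {
        return orderIds.contains(orderId);
    }

    // Hand out a defensive, read-only copy instead of the internal collection.
    public synchronized List<String> snapshot() {
        return Collections.unmodifiableList(new ArrayList<>(orderIds));
    }
}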
You can achieve this by making your own Interface, which extends the interfaces you want
public interface SerializableMap<K, V> extends Map<K, V>, Serializable {
}
In my case it worked just to declare the concrete type:
HashMap<String, String> mySerializableMap = new HashMap<>();
It allowed me to use the Map methods (like put) and pass the map to methods that required a Serializable, without casting. Not perfect when we’ve learned to program towards interfaces, but good enough for me in the situation I was in.
If you really insist: As has been noted, declaring a combined interface alone does not solve the problem since the concrete classes we already have do not implement our combined interface even when they do implement each of the two interfaces we combine. I use it as a first step on the way, though. For example:
public interface SerializableMap<K, V> extends Map<K, V>, Serializable {
// No new methods or anything
}
The next step is also declaring a new class:
public class SerializableHashMap<K, V> extends HashMap<K, V> implements SerializableMap<K, V> {
    private static final long serialVersionUID = 4302237185522279700L;
}
This class is declared to implement the combined interface and thus can be used wherever one of those types is required. It extends a class that already implements each of the interfaces separately, therefore there’s nothing more we need to do. And now we have got what you asked for. Example of use:
public static void main(String[] args) {
    SerializableMap<String, String> myMap = new SerializableHashMap<>();
    // myMap works as a Map
    myMap.put("colour1", "red");
    myMap.put("colour2", "green");
    // myMap works as a Serializable too
    consumeSerializable(myMap);
}

public static void consumeSerializable(Serializable s) {
    // Do something with the Serializable
}
For most purposes I suggest that this is overkill, but now I have at least presented it as an option.
Link: What does it mean to “program to an interface”?
You can't really do it if you want to keep using the existing Map implementations.
An alternative would be to make a helper class, and add a method like this one:
public static Serializable serializableFromMap(Map<?, ?> map) {
if (map instanceof Serializable) {
return (Serializable)map;
}
throw new IllegalArgumentException("map wasn't serializable");
}
Nope, you'll pretty much need to cast.
