How to subclass Guava's ImmutableList? - java

When I try to implement my own ImmutableList (actually a wrapper that delegates to the underlying list) I get the following compiler error:
ImmutableListWrapper is not abstract and does not override abstract method isPartialView() in com.google.common.collect.ImmutableCollection
But in fact, it seems to be impossible to override isPartialView() because it is package-private and I'd like to declare the wrapper in my own package.
Why don't I simply extend ImmutableCollection? Because I want ImmutableList.copyOf() to return my instance without making a defensive copy.
The only approach I can think of is declaring a subclass in Guava's package which changes isPartialView() from package-private to public, and then having my wrapper extend that. Is there a cleaner way?
What I am trying to do
I am attempting to fix https://github.com/google/guava/issues/2029 by creating a wrapper that would delegate to the underlying ImmutableList for all methods except spliterator(), which it would override.
I am working under the assumption that users may define variables of type ImmutableList and expect the wrapper to be a drop-in replacement (i.e. it isn't enough to implement List; they are expecting an ImmutableList).

If you want your own immutable list but don't want to implement it from scratch, just use a ForwardingList. Also, to actually force a copy, pass an Iterator to copyOf; a List or Iterable won't. Here's a solution that should fulfill all the requirements described in the question and your answer.
public final class MyVeryOwnImmutableList<T> extends ForwardingList<T> {

    public static <T> MyVeryOwnImmutableList<T> copyOf(List<T> list) {
        // Passing an Iterator forces a real copy; a List or Iterable doesn't.
        return new MyVeryOwnImmutableList<T>(list.iterator());
    }

    private final ImmutableList<T> delegate;

    private MyVeryOwnImmutableList(Iterator<T> it) {
        this.delegate = ImmutableList.copyOf(it);
    }

    @Override
    protected List<T> delegate() {
        return delegate;
    }
}
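For example, a hypothetical caller (note that this copyOf always makes a defensive copy, even when handed a list that is already immutable):

List<String> source = Arrays.asList("a", "b", "c");
MyVeryOwnImmutableList<String> list = MyVeryOwnImmutableList.copyOf(source);
// Every List method on list is served by the ImmutableList copy of source.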

If you want different behavior than ImmutableList.copyOf() provides, simply define a different method, e.g.
public class MyList {
    public static <E> List<E> copyOf(Iterable<E> iter) {
        if (iter instanceof MyList) {
            return (List<E>) iter; // assumes MyList implements List<E>
        }
        return ImmutableList.copyOf(iter);
    }
}
Guava's immutable classes provide a number of guarantees and make a number of assumptions about how their implementations work. These would be violated if other authors could implement their own classes that extend Guava's immutable types. Even if you correctly implemented your class to work with these guarantees and assumptions, there's nothing stopping these implementation details from changing in a future release, at which point your code could break in strange or undetectable ways.
Please do not attempt to implement anything in Guava's Immutable* hierarchy; you're only shooting yourself in the foot.
If you have a legitimate use case, file a feature request and describe what you need, maybe it'll get incorporated. Otherwise, just write your wrappers in a different package and provide your own methods and guarantees. There's nothing forcing you, for instance, to use ImmutableList.copyOf(). If you need different behavior, just write your own method.

Upon digging further, it looks like this limitation is by design:
Quoting
http://docs.guava-libraries.googlecode.com/git/javadoc/com/google/common/collect/ImmutableList.html:
Note: Although this class is not final, it cannot be subclassed as it has no public or protected constructors. Thus, instances of this type are guaranteed to be immutable.
So it seems I need to create my wrapper in the guava package.

Related

New instance of an object referenced by an interface in Java

I am working on a JDK6 project.
I have a pojo like:
public class MyPojo implements Serializable {

    private List<Integer> ids;

    public List<Integer> getIds() {
        return ids;
    }

    public void setIds(List<Integer> ids) {
        // idsCopy = copy of ids; // how can I do it?
        this.ids = idsCopy;
    }
}
I'd like to store a copy of the parameter ids passed to the setter, but I don't want to specialize the setIds method signature by declaring the reference as a particular implementation of the List interface: depending on where the pojo is used, ids could be a LinkedList, an ArrayList, etc.
I'd like the copy to keep the same implementation as the ids parameter.
How can I do the copy?
The first thing I thought was: ids.getClass().newInstance(), but it needs to be surrounded by a try/catch block for InstantiationException and IllegalAccessException, in particular because I am not sure that the actual implementation of ids has an empty constructor. Is there something more immediate?
Update
In this case the most common, straightforward and reasonable thing to do is to avoid making the copy yourself and let whoever uses the MyPojo class pass in a copy of the object to set, for instance:

ArrayList<Integer> ls = new ArrayList<Integer>();
MyPojo pj = new MyPojo();
pj.setIds(new ArrayList<Integer>(ls)); // or clone(), or anything else that copies
At the beginning I had the idea to do something like this:

public <T extends List<Integer> & Cloneable> void setIds(T ids) {
    this.ids = ids.clone(); // does not compile: Cloneable itself does not expose clone()
}

enforcing the use of a class that also implements Cloneable, but the javadoc of the Cloneable interface explains very well why this is not intended to work (and also why reflection on clone would not work either):
A class implements the Cloneable interface to indicate to the Object.clone() method that it is legal for that method to make a field-for-field copy of instances of that class.
Invoking Object's clone method on an instance that does not implement the Cloneable interface results in the exception CloneNotSupportedException being thrown.
By convention, classes that implement this interface should override Object.clone (which is protected) with a public method. See Object.clone() for details on overriding this method.
Note that this interface does not contain the clone method. Therefore, it is not possible to clone an object merely by virtue of the fact that it implements this interface. Even if the clone method is invoked reflectively, there is no guarantee that it will succeed.
In the end, regardless of the context, the answer to my question
Is there something more immediate?
is "no", most probably because there should be no need to do it...
..even though..
https://rules.sonarsource.com/java/type/Vulnerability/RSPEC-2384
Is there something more immediate?
There are two possible ways to do this:
The method you proposed: use reflection to invoke the no-args constructor to create a new empty list, and then use List::addAll to copy the elements into the new list.
Use the Object::clone method to create the copy.
Either approach (if it works!) will create a list object of the same type as the original. But neither method is guaranteed to work for all possible List classes.
The reflective approach fails if the list implementation class has no usable / accessible no-args constructor.
The clone approach fails if the list implementation class doesn't override Object::clone appropriately.
Note that these are not hypothetical concerns. A list implementation class may be deliberately designed to prevent programs making clones of the list. For example, there may be some relationship between the list and (say) a database query result set, such that cloning makes no sense.
But the flip-side is that if the library designer didn't provide a no-args constructor or a clone override, they probably did it for a good reason.
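For illustration, here is a minimal sketch of that first approach with a fallback (the fallback to ArrayList is an assumption of this sketch; it deliberately gives up on preserving the original class when reflection fails):

@SuppressWarnings("unchecked")
static List<Integer> copyPreservingClass(List<Integer> original) {
    try {
        // Approach 1: no-args constructor via reflection, then addAll.
        List<Integer> copy = original.getClass().newInstance();
        copy.addAll(original);
        return copy;
    } catch (InstantiationException e) {
        // No usable no-args constructor; fall through.
    } catch (IllegalAccessException e) {
        // Constructor not accessible; fall through.
    }
    // Fallback: a plain ArrayList copy (loses the original class).
    return new ArrayList<Integer>(original);
}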
Finally, it is questionable whether you should be trying to do this at all. There is no obvious reason for your MyPojo class's ids property to use the same List class as the original argument. This actually makes the behavior of the MyPojo class dependent on the behavior of the supplied list. I would argue that this is a bad thing, because you are weakening the abstraction boundary of your MyPojo class.
As you have said it yourself:
The first thing I thought was: ids.getClass().newInstance(), but it
needs to be surrounded by a try/catch block for InstantiationException
and IllegalAccessException, in particular because I am not sure that
the actual implementation of ids has an empty constructor. Is there
something more immediate?
How will you be able to construct an instance of the underlying class when you don't even know what the constructor looks like? Say, through reflection, you learn what a constructor (there could be more than one) looks like; you still have to worry about things like:
Private vs package-private vs public - How do you handle each case? Some implementations of List, such as the ones found in the Guava library, use the builder pattern to construct the collection and therefore make the constructor private.
Number of constructor arguments - If more than one, how do you construct the rest?
Exceptions thrown by the constructor - How do you catch them all? Don't say "catch Exception" because you may miss Throwable.
Multiple constructors - How do you decide which one to call?
Many more corner cases to deal with...
You should adopt the KISS (Keep It Simple Soldier) principle as well as the SoC (Separation of Concerns) principle. Keep it simple and you will be rewarded. Let the person using the Pojo decide how they want to retrieve the list. As long as you make it clear (through documentation) what the method does, it is up to the user to make rational decisions as to how they use that method.
If they do something like:
LinkedList<Integer> myIds = ...;
myPojo.setIds(myIds);
then let them decide how to retrieve the ids; They can either cast the returned type or make a copy.
If you made it clear in the documentation that the list is copied, then the person using this POJO should know to make a copy as a LinkedList when they retrieve it, otherwise it should be safe for them to cast the list to a LinkedList.
I would still argue that even the above is not simple enough. The simplest thing to do (and this is what is widely accepted) is to use the most general type for the work needed. If you need a Mapping type, use Map, not HashMap or List/Set of tuples; If you need a unique collection, use Set, if you need a general collection, use List, etc.
You can use Arrays.asList for that, e.g.

public void setIds(List<Integer> ids) {
    this.ids = Arrays.asList(ids.toArray(new Integer[ids.size()]));
}

This should work for Java 1.6. Note that Arrays.asList returns a fixed-size list backed by the copied array, so add and remove on the copy will throw UnsupportedOperationException.
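If the copy also needs to be resizable, a hedged alternative (it always yields an ArrayList rather than preserving the caller's implementation) is the copy constructor, equally available on Java 1.6:

public void setIds(List<Integer> ids) {
    // Resizable copy; the concrete class of the incoming list is not preserved.
    this.ids = new ArrayList<Integer>(ids);
}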

How do I avoid breaking the Liskov substitution principle with a class that implements multiple interfaces?

Given the following class:
class Example implements Interface1, Interface2 {
...
}
When I instantiate the class using Interface1:
Interface1 example = new Example();
...then I can call only the Interface1 methods, and not the Interface2 methods, unless I cast:
((Interface2) example).someInterface2Method();
Of course, to make this runtime safe, I should also wrap this with an instanceof check:
if (example instanceof Interface2) {
((Interface2) example).someInterface2Method();
}
I'm aware that I could have a wrapper interface that extends both interfaces, but then I could end up with multiple interfaces to cater for all the possible permutations of interfaces that can be implemented by the same class. The Interfaces in question do not naturally extend one another so inheritance also seems wrong.
Does the instanceof/cast approach break LSP as I am interrogating the runtime instance to determine its implementations?
Whichever implementation I use seems to have some side-effect either in bad design or usage.
I'm aware that I could have a wrapper interface that extends both interfaces, but then I could end up with multiple interfaces to cater for all the possible permutations of interfaces that can be implemented by the same class
I suspect that if you're finding that lots of your classes implement different combinations of interfaces then either: your concrete classes are doing too much; or (less likely) your interfaces are too small and too specialised, to the point of being useless individually.
If you have good reason for some code to require something that is both a Interface1 and a Interface2 then absolutely go ahead and make a combined version that extends both. If you struggle to think of an appropriate name for this (no, not FooAndBar) then that's an indicator that your design is wrong.
Absolutely do not rely on casting anything. It should only be used as a last resort and usually only for very specific problems (e.g. serialization).
My favourite and most-used design pattern is the decorator pattern. As such most of my classes will only ever implement one interface (except for more generic interfaces such as Comparable). I would say that if your classes are frequently/always implementing more than one interface then that's a code smell.
If you're instantiating the object and using it within the same scope then you should just be writing
Example example = new Example();
Just so it's clear (I'm not sure if this is what you were suggesting), under no circumstances should you ever be writing anything like this:
Interface1 example = new Example();
if (example instanceof Interface2) {
    ((Interface2) example).someInterface2Method();
}
Your class can implement multiple interfaces fine, and it is not breaking any OOP principles. On the contrary, it is following the interface segregation principle.
It is unclear why you would have a situation where something of type Interface1 is expected to provide someInterface2Method(). That is where your design is wrong.
Think about it in a slightly different way: Imagine you have another method, void method1(Interface1 interface1). It can't expect interface1 to also be an instance of Interface2. If it was the case, the type of the argument should have been different. The example you have shown is precisely this, having a variable of type Interface1 but expecting it to also be of type Interface2.
If you want to be able to call both methods, you should have the type of your variable example set to Example. That way you avoid the instanceof and type casting altogether.
If your two interfaces Interface1 and Interface2 are not that loosely coupled, and you will often need to call methods from both, maybe separating the interfaces wasn't such a good idea, or maybe you want to have another interface which extends both.
In general (although not always), instanceof checks and type casts often indicate some OO design flaw. Sometimes the design would fit for the rest of the program, but you would have a small case where it is simpler to type cast rather than refactor everything. But if possible you should always strive to avoid it at first, as part of your design.
You have two different options (I bet there are a lot more).
The first is to create your own interface which extends the other two:
interface Interface3 extends Interface1, Interface2 {}
And then use that throughout your code:
public void doSomething(Interface3 interface3) {
    ...
}
The other way (and in my opinion the better one) is to use generics per method:
public <T extends Interface1 & Interface2> void doSomething(T t) {
    ...
}

The latter option is in fact less restrictive than the former, because the generic type T is inferred at each call site and thus leads to less coupling (a class doesn't have to implement a specific grouping interface, like the first example).
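For illustration, assuming a hypothetical class Both that implements both interfaces, the generic version accepts it without any grouping interface:

class Both implements Interface1, Interface2 { /* ... */ }

// T is inferred as Both at the call site; a class implementing only
// Interface1 would be rejected at compile time.
doSomething(new Both());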
The core issue
Slightly tweaking your example so I can address the core issue:
public void DoTheThing(Interface1 example)
{
    if (example instanceof Interface2)
    {
        ((Interface2) example).someInterface2Method();
    }
}
So you defined the method DoTheThing(Interface1 example). This is basically saying "to do the thing, I need an Interface1 object".
But then, in your method body, it appears that you actually need an Interface2 object. Then why didn't you ask for one in your method parameters? Quite obviously, you should've been asking for an Interface2.
What you're doing here is assuming that whatever Interface1 object you get will also be an Interface2 object. This is not something you can rely on. You might have some classes which implement both interfaces, but you might as well have some classes which only implement one and not the other.
There is no inherent requirement whereby Interface1 and Interface2 need to both be implemented on the same object. You can't know (nor rely on the assumption) that this is the case.
Unless you define the inherent requirement and apply it.
interface InterfaceBoth extends Interface1, Interface2 {}
public void DoTheThing(InterfaceBoth example)
{
    example.someInterface2Method();
}
In this case, you've required InterfaceBoth object to both implement Interface1 and Interface2. So whenever you ask for an InterfaceBoth object, you can be sure to get an object which implements both Interface1 and Interface2, and thus you can use methods from either interface without even needing to cast or check the type.
You (and the compiler) know that this method will always be available, and there's no chance of this not working.
Note: You could've used Example instead of creating the InterfaceBoth interface, but then you would only be able to use objects of type Example and not any other class which would implement both interfaces. I assume you're interested in handling any class which implements both interfaces, not just Example.
Deconstructing the issue further.
Look at this code:
ICarrot myObject = new Superman();
If you assume this code compiles, what can you tell me about the Superman class? That it clearly implements the ICarrot interface. That is all you can tell me. You have no idea whether Superman implements the IShovel interface or not.
So if I try to do this:
myObject.SomeMethodThatIsFromSupermanButNotFromICarrot();
or this:
myObject.SomeMethodThatIsFromIShovelButNotFromICarrot();
Should you be surprised if I told you this code compiles? You should, because this code doesn't compile.
You may say "but I know that it's a Superman object which has this method!". But then you'd be forgetting that you only told the compiler it was an ICarrot variable, not a Superman variable.
You may say "but I know that it's a Superman object which implements the IShovel interface!". But then you'd be forgetting that you only told the compiler it was an ICarrot variable, not a Superman or IShovel variable.
Knowing this, let's look back at your code.
Interface1 example = new Example();
All you've said is that you have an Interface1 variable.
if (example instanceof Interface2) {
    ((Interface2) example).someInterface2Method();
}
It makes no sense for you to assume that this Interface1 object also happens to implement a second, unrelated interface. Even if this code works on a technical level, it is a sign of bad design: the developer is expecting some inherent correlation between two interfaces without actually having created this correlation.
You may say "but I know I'm putting an Example object in, the compiler should know that too!" but you'd be missing the point that if this were a method parameter, you would have no way of knowing what the callers of your method are sending.
public void DoTheThing(Interface1 example)
{
    if (example instanceof Interface2)
    {
        ((Interface2) example).someInterface2Method();
    }
}
When other callers call this method, the compiler is only going to stop them if the passed object does not implement Interface1. The compiler is not going to stop someone from passing an object of a class which implements Interface1 but does not implement Interface2.
Your example does not break LSP, but it seems to break SRP. If you encounter a case where you need to cast an object to its 2nd interface, the method that contains such code can be considered busy.
Implementing 2 (or more) interfaces in a class is fine. Deciding which interface to use as the data type depends entirely on the context of the code that will use it.
Casting is fine, especially when changing context.
class Payment implements Expirable, Limited {
    /* ... */
}

class PaymentProcessor {
    private ExpirationChecker expirationChecker = new ExpirationChecker();
    private LimitChecker limitChecker = new LimitChecker();

    // Using Payment here because I'm working with payments.
    public void process(Payment payment) {
        boolean expired = expirationChecker.check(payment);
        boolean pastLimit = limitChecker.check(payment);
        if (!expired && !pastLimit) {
            acceptPayment(payment);
        }
    }
}

class ExpirationChecker {
    // This is the Expirable world, so I'm using Expirable here.
    public boolean check(Expirable expirable) {
        // code
    }
}

class LimitChecker {
    // This class is about checking limits, that's why I'm using Limited here.
    public boolean check(Limited limited) {
        // code
    }
}
Usually, many client-specific interfaces are fine, and somewhat part of the Interface segregation principle (the "I" in SOLID). Some more specific points, on a technical level, have already been mentioned in other answers.
Particularly that you can go too far with this segregation, by having a class like
class Person implements FirstNameProvider, LastNameProvider, AgeProvider ... {
    @Override String getFirstName() {...}
    @Override String getLastName() {...}
    @Override int getAge() {...}
    ...
}
Or, conversely, that you have an implementing class that is too powerful, as in
class Application implements DatabaseReader, DataProcessor, UserInteraction, Visualizer {
    ...
}
I think that the main point in the Interface Segregation Principle is that the interfaces should be client-specific. They should basically "summarize" the functions that are required by a certain client, for a certain task.
To put it that way: The issue is to strike the right balance between the extremes that I sketched above. When I'm trying to figure out interfaces and their relationships (mutually, and in terms of the classes that implement them), I always try to take a step back and ask myself, in an intentionally naïve way: Who is going to receive what, and what is he going to do with it?
Regarding your example: When all your clients always need the functionality of Interface1 and Interface2 at the same time, then you should consider either defining an
interface Combined extends Interface1, Interface2 { }
or not have different interfaces in the first place. On the other hand, when the functionalities are completely distinct and unrelated and never used together, then you should wonder why the single class is implementing them at the same time.
At this point, one could refer to another principle, namely Composition over inheritance. Although it is not classically related to implementing multiple interfaces, composition can also be favorable in this case. For example, you could change your class to not implement the interfaces directly, but only provide instances that implement them:
class Example {
    Interface1 getInterface1() { ... }
    Interface2 getInterface2() { ... }
}
It looks a bit odd in this Example (sic!), but depending on the complexity of the implementation of Interface1 and Interface2, it can really make sense to keep them separated.
Edited in response to the comment:
The intention here is not to pass the concrete class Example to methods that need both interfaces. A case where this could make sense is rather when a class combines the functionalities of both interfaces, but does not do so by directly implementing them at the same time. It's hard to make up an example that does not look too contrived, but something like this might bring the idea across:
interface DatabaseReader { String read(); }
interface DatabaseWriter { void write(String s); }

class Database {
    DatabaseConnection connection = create();
    DatabaseReader reader = createReader(connection);
    DatabaseWriter writer = createWriter(connection);
    DatabaseReader getReader() { return reader; }
    DatabaseWriter getWriter() { return writer; }
}
The client will still rely on the interfaces. Methods like
void create(DatabaseWriter writer) { ... }
void read (DatabaseReader reader) { ... }
void update(DatabaseReader reader, DatabaseWriter writer) { ... }
could then be called with
create(database.getWriter());
read (database.getReader());
update(database.getReader(), database.getWriter());
respectively.
With the help of various posts and comments on this page, a solution has been produced, which I feel is correct for my scenario.
The following shows the iterative changes to the solution to meet SOLID principles.
Requirement
To produce the response for a web service, key + object pairs are added to a response object. There are lots of different key + object pairs that need to be added, each of which may have unique processing required to transform the data from the source to the format required in the response.
From this it is clear that whilst the different key / value pairs may have different processing requirements to transform the source data to the target response object, they all have a common goal of adding an object to the response object.
Therefore, the following interface was produced in solution iteration 1:
Solution Iteration 1
public interface ResponseObjectProvider<T, S> {
    void addObject(T targetObject, S sourceObject, String targetKey);
}
Any developer that needs to add an object to the response can now do so using an existing implementation that matches their requirement, or add a new implementation given a new scenario
This is great as we have a common interface which acts as a contract for this common practice of adding response objects.
However, one scenario requires that the target object should be taken from the source object given a particular key, "identifier".
There are options here, the first is to add an implementation of the existing interface as follows:
public class GetIdentifierResponseObjectProvider<T extends Map, S extends Map> implements ResponseObjectProvider<T, S> {
    public void addObject(final T targetObject, final S sourceObject, final String targetKey) {
        targetObject.put(targetKey, sourceObject.get("identifier"));
    }
}
This works, however this scenario could be required for other source object keys ("startDate", "endDate" etc...) so this implementation should be made more generic to allow for reuse in this scenario.
Additionally, other implementations may require more context information to perform the addObject operation... So a new generic type should be added to cater for this
Solution Iteration 2
public interface ResponseObjectProvider<T, S, U> {
    void addObject(T targetObject, S sourceObject, String targetKey);
    void setParams(U params);
    U getParams();
}
This interface caters for both usage scenarios; the implementations that require additional params to perform the addObject operation and the implementations that do not
However, considering the latter of the usage scenarios, the implementations that do not require additional parameters will break the SOLID Interface Segregation Principle, as these implementations will override the getParams and setParams methods but not implement them, e.g.:
public class GetObjectBySourceKeyResponseObjectProvider<T extends Map, S extends Map, U extends String> implements ResponseObjectProvider<T, S, U> {

    public void addObject(final T targetObject, final S sourceObject, final String targetKey) {
        targetObject.put(targetKey, sourceObject.get(getParams()));
    }

    public void setParams(U params) {
        // unimplemented method
    }

    public U getParams() {
        // unimplemented method
        return null;
    }
}
Solution Iteration 3
To fix the Interface Segregation issue, the getParams and setParams interface methods were moved into a new Interface:
public interface ParametersProvider<T> {
    void setParams(T params);
    T getParams();
}
The implementations that require parameters can now implement the ParametersProvider interface:
public class GetObjectBySourceKeyResponseObjectProvider<T extends Map, S extends Map, U extends String> implements ResponseObjectProvider<T, S>, ParametersProvider<U> {

    private U params;

    public void setParams(U params) {
        this.params = params;
    }

    public U getParams() {
        return this.params;
    }

    public void addObject(final T targetObject, final S sourceObject, final String targetKey) {
        targetObject.put(targetKey, sourceObject.get(params));
    }
}
This solves the Interface Segregation issue but causes two more issues... If the calling client wants to program to an interface, i.e.:

ResponseObjectProvider responseObjectProvider = new GetObjectBySourceKeyResponseObjectProvider<>();

then the addObject method will be available on the instance, but NOT the getParams and setParams methods of the ParametersProvider interface... To call these, a cast is required, and to be safe an instanceof check should also be performed:
if (responseObjectProvider instanceof ParametersProvider) {
    ((ParametersProvider) responseObjectProvider).setParams("identifier");
}
Not only is this undesirable it also breaks the Liskov Substitution Principle - "if S is a subtype of T, then objects of type T in a program may be replaced with objects of type S without altering any of the desirable properties of that program"
i.e. if we replaced an implementation of ResponseObjectProvider that also implements ParametersProvider with an implementation that does not implement ParametersProvider, then this could alter some of the desirable properties of the program... Additionally, the client needs to be aware of which implementation is in use in order to call the correct methods
An additional problem is the usage for calling clients. If the calling client wanted to use an instance that implements both interfaces to perform addObject multiple times, the setParams method would need to be called before addObject... This could cause avoidable bugs if care is not taken when calling.
Solution Iteration 4 - Final Solution
The interfaces produced from Solution Iteration 3 solve all of the currently known usage requirements, with some flexibility provided by generics for implementation using different types. However, this solution breaks the Liskov Substitution Principle and has a non-obvious usage of setParams for the calling client
The solution is to have two separate interfaces, ParameterisedResponseObjectProvider and ResponseObjectProvider.
This allows the client to program to an interface, and would select the appropriate interface depending on whether the objects being added to the response require additional parameters or not
The new interface was first implemented as an extension of ResponseObjectProvider:
public interface ParameterisedResponseObjectProvider<T,S,U> extends ResponseObjectProvider<T, S> {
    void setParams(U params);
    U getParams();
}
However, this still had the usage issue, where the calling client would first need to call setParams before calling addObject and also make the code less readable.
So the final solution has two separate interfaces defined as follows:
public interface ResponseObjectProvider<T, S> {
    void addObject(T targetObject, S sourceObject, String targetKey);
}

public interface ParameterisedResponseObjectProvider<T,S,U> {
    void addObject(T targetObject, S sourceObject, String targetKey, U params);
}
This solution solves the breaches of Interface Segregation and Liskov Substitution principles and also improves the usage for calling clients and improves the readability of the code.
It does mean that the client needs to be aware of the different interfaces, but since the contracts are different this seems to be a justified decision especially when considering all the issues that the solution has avoided.
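As a hedged sketch of how this lands in practice (the implementation class below is hypothetical, re-done against the new contract), a provider that needs a source key now receives it with every call, so the setParams ordering problem cannot occur:

public class BySourceKeyResponseObjectProvider<T extends Map, S extends Map>
        implements ParameterisedResponseObjectProvider<T, S, String> {

    public void addObject(final T targetObject, final S sourceObject,
                          final String targetKey, final String params) {
        // The parameter travels with the call; no prior setParams is needed.
        targetObject.put(targetKey, sourceObject.get(params));
    }
}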
The problem you describe often comes about through over-zealous application of the Interface Segregation Principle, encouraged by languages' inability to specify that members of one interface should, by default, be chained to static methods which could implement sensible behaviors.
Consider, for example, a basic sequence/enumeration interface and the following behaviors:
Produce an enumerator which can read out the objects if no other iterator has yet been created.
Produce an enumerator which can read out the objects even if another iterator has already been created and used.
Report how many items are in the sequence
Report the value of the Nth item in the sequence
Copy a range of items from the object into an array of that type.
Yield a reference to an immutable object that can accommodate the above operations efficiently with contents that are guaranteed never to change.
I would suggest that such abilities should be part of the basic sequence/enumeration interface, along with a method/property to indicate which of the above operations are meaningfully supported. Some kinds of single-shot on-demand enumerators (e.g. an infinite truly-random sequence generator) might not be able to support any of those functions, but segregating such functions into separate interfaces will make it much harder to produce efficient wrappers for many kinds of operations.
One could produce a wrapper class that would accommodate all of the above operations, though not necessarily efficiently, on any finite sequence which supports the first ability. If, however, the class is being used to wrap an object that already supports some of those abilities (e.g. access the Nth item), having the wrapper use the underlying behaviors could be much more efficient than having it do everything via the second function above (e.g. creating a new enumerator, and using that to iteratively read and ignore items from the sequence until the desired one is reached).
Having all objects that produce any kind of sequence support an interface that includes all of the above, along with an indication of what abilities are supported, would be cleaner than trying to have different interfaces for different subsets of abilities, and requiring that wrapper classes make explicit provision for any combinations they want to expose to their clients.
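As a minimal sketch of the idea (all names here are illustrative, not from any real library), such a combined interface with capability reporting might look like this:

interface Sequence<T> extends Iterable<T> {
    enum Capability { RE_ITERABLE, COUNT, RANDOM_ACCESS, RANGE_COPY, IMMUTABLE_SNAPSHOT }

    // Which of the optional operations below are meaningfully supported.
    java.util.Set<Capability> capabilities();

    int count();                            // COUNT
    T itemAt(int index);                    // RANDOM_ACCESS
    void copyRange(int from, T[] target);   // RANGE_COPY
    Sequence<T> snapshot();                 // IMMUTABLE_SNAPSHOT
}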

Protect "default" methods from overriding

I'm looking for a solution that allows me to protect the default methods from being overridden. The easiest solution could be to extend from a class, etc., but in my case that's not possible.
Can someone suggest how to solve this problem? Could there be any workarounds?
At the moment I have the following code, which needs to be reworked (if at all possible):
public interface MyInterface1 {
    default boolean isA(Object obj) {
        return (boolean) obj.equals("A") ? true : false;
    }
    default boolean isB(Object obj) {
        return (boolean) obj.equals("B") ? true : false;
    }
}
public class MyClass extends MyLogic implements MyInterface, MyInterface1 {
    // this class allows to inherit methods from both interfaces,
    // but from my perspective I'd like to use the methods from MyInterface1 as they are,
    // with a 'protection' from inheritance. is that possible?
}
You seem to want a way to write your interface so that implementing classes cannot provide their own implementations of its default methods. There is no way to do this, and indeed it runs counter to the purpose of interfaces in general and default members in particular.
The point of default methods is to provide a way to add methods to existing interfaces without instantly breaking all their existing implementations. Generally speaking, this is a binary compatibility issue, not a functionality issue. There's no particular reason to suppose in general that default implementations can provide the intended functionality, but without them, even old code that doesn't rely on the new methods at all is incompatible with interface revisions that add methods.
I think you have a factoring issue. Rather than trying to force classes to provide a specific implementation of a specific method -- which cannot even refer to that class's members, except possibly others defined by the same interface -- you should provide the common methods in a class of their own. After all, since you want all classes involved to provide identical implementations, it doesn't matter which class's implementations you actually use. Moreover, there is therefore no particular usefulness in marking any given class as providing implementations of the well-known methods.
Example:
public class MyImplementation1 {
    public static boolean isA(Object obj) {
        return obj.equals("A");
    }
    public static boolean isB(Object obj) {
        return obj.equals("B");
    }
}
// Wherever needed, use as MyImplementation1.isA(o), etc.
You can do this even if you want these pre-baked implementations to operate in terms of the other methods of your interface. In that case, just add an argument to the fixed methods that provides the object to operate on. Perhaps that's what the obj arguments in your example were supposed to be; in that case, this may be closer to what you're after:
public interface MyInterface3 {
    public String someInterfaceMethod();
}

public class MyImplementation2 {
    public static boolean isA(MyInterface3 subject) {
        return subject.someInterfaceMethod().equals("A");
    }
    public static boolean isB(MyInterface3 subject) {
        return subject.someInterfaceMethod().equals("B");
    }
}
You can't, at least if you restrict yourself to a pure Java compiler solution.
And the reason is that it was not designed to do that: the purpose of default methods is to add new methods to an existing interface (like java.util.Collection) without breaking the implementations. That way, we have sort(), stream(), forEach() on the collections.
If you were to allow such a thing (forbidding implementation), then it would mean that a change in the interface could result in a compilation error for implementations (because they would be overriding a method that had since been rendered final). That was not the purpose.
There are several other options to achieve that, depending on your need:
An abstract class with final methods in place of the previously default methods.
Testing the default behavior using unit tests.
Testing the possible implementations and checking that they don't override it.
The last case can probably be done easily with the Reflections library: you would have to list all implementations, and check for each of the interface's default methods that there is no override.
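A sketch of that last option, using the org.reflections library plus JUnit (the scanned package name is an assumption of this example):

Reflections reflections = new Reflections("com.example.app");
for (Class<? extends MyInterface1> impl : reflections.getSubTypesOf(MyInterface1.class)) {
    try {
        // getDeclaredMethod only finds methods declared by impl itself,
        // so this succeeds exactly when impl re-declares the default method.
        impl.getDeclaredMethod("isA", Object.class);
        fail(impl + " overrides the default method isA(Object)");
    } catch (NoSuchMethodException expected) {
        // OK: the default implementation is untouched.
    }
}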
I take it you mean you want to write a class that uses the default methods of an interface, but does not inherit them.
In your example code, you attempted to use the default methods by implementing the interface. When you implement an interface, by design you also inherit all its methods. This is the Liskov Substitution Principle. By implementing the interface you are telling your users that all instances of your class are substitutable for instances of the interface. But if the interface default methods weren't inherited, this wouldn't be true, so you would be lying to users of your class.
To have your class use the interface's default methods without inheriting them, don't implement the interface! Instead, use a helper class that does:
public interface MyInterface1 {
    default boolean isA(Object obj) {
        return obj.equals("A"); // or "A".equals(obj) to avoid NullPointerException
    }
    default boolean isB(Object obj) {
        return obj.equals("B");
    }
}
public class MyClass extends MyLogic implements MyInterface {

    private static class Helper implements MyInterface1 {
        void doSomeWork() {
            // do something that calls isA() and isB()...
        }
    }

    private final Helper helper = new Helper();

    public void someMethodOfMyClass() {
        // ...
        helper.doSomeWork();
        // ...
    }
}
No, this is not possible, due to the way Java implements the interface (pun intended). For more information as to the reason, see the answers to this question: Why is "final" not allowed in Java 8 interface methods?
However here are some other ways to guide a developer not to override a default method:
A source code comment
//Do not inherit please
A javadoc comment

Are there any Java standard classes that implement Iterable without implementing Collection?

I have a conundrum that's caused me to ponder whether there are any standard Java classes that implement Iterable<T> without also implementing Collection<T>. I'm implementing one interface that requires me to define a method that accepts an Iterable<T>, but the object I'm using to back this method requires a Collection<T>.
This has me writing some really kludgy-feeling code that gives unchecked warnings when compiled.
public ImmutableMap<Integer, Optional<Site>> loadAll(
        Iterable<? extends Integer> keys
) throws Exception {
    Collection<Integer> _keys;
    if (keys instanceof Collection) {
        _keys = (Collection<Integer>) keys;
    } else {
        _keys = Lists.newArrayList(keys);
    }
    final List<Site> sitesById = siteDBDao.getSitesById(_keys);
    // snip: convert the list to a map
Changing my resulting collection to use the more generified Collection<? extends Integer> type doesn't eliminate the unchecked warning for that line. Also, I can't change the method signature to accept a Collection instead of an Iterable because then it's no longer overriding the super method and won't get called when needed.
There doesn't seem to be a way around this cast-or-copy problem: other questions have been asked here and elsewhere, and it seems deeply rooted in Java's generics and type-erasure systems. But I'm asking instead: are there ever any classes that implement Iterable<T> without also implementing Collection<T>? I've taken a look through the Iterable JavaDoc and certainly everything I expect to be passed to my interface will actually be a collection. I'd like to use an in-the-wild, pre-written class instead, as that seems much more likely to actually be passed as a parameter and would make the unit test that much more valuable.
I'm certain the cast-or-copy bit I've written works with the types I'm using it for in my project, due to some unit tests I'm writing. But I'd like to write a unit test for some input that is an Iterable yet isn't a Collection, and so far all I've been able to come up with is writing a dummy test class myself.
For the curious, the method I'm implementing is Guava's CacheLoader<K, V>.loadAll(Iterable<? extends K> keys) and the backing method is a JDBI-instantiated data-access object, which requires a collection to be used as the parameter type for the @BindIn interface. I think I'm correct in thinking this is tangential to the question, but just in case anyone wants to try lateral thinking on my problem: I'm aware I could just fork the JDBI project and rewrite the @BindIn annotation to accept an iterable...
Although there is no class that would immediately suit your needs and be intuitive to the readers of your test code, you can easily create your own anonymous class that is easy to understand:
static Iterable<Integer> range(final int from, final int to) {
    return new Iterable<Integer>() {
        public Iterator<Integer> iterator() {
            return new Iterator<Integer>() {
                int current = from;
                public boolean hasNext() { return current < to; }
                public Integer next() {
                    if (!hasNext()) { throw new NoSuchElementException(); }
                    return current++;
                }
                public void remove() {
                    // Optional operation; not supported by this iterator.
                    throw new UnsupportedOperationException();
                }
            };
        }
    };
}
This implementation is anonymous, and it does not implement Collection<Integer>. On the other hand, it produces a non-empty enumerable sequence of integers, which you can fully control.
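For example, in a test (the assertion style assumed here is JUnit):

Iterable<Integer> numbers = range(0, 3);
for (int i : numbers) {
    System.out.println(i); // prints 0, 1, 2
}
assertFalse(numbers instanceof Collection);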
To answer the question as per the title:
Are there any Java standard classes that implement Iterable without implementing Collection?
From the text:
If there ever are any classes that can implement Iterable<T> that don't also implement Collection<T>?
Answer:
Yes
See the following javadoc page: https://docs.oracle.com/javase/8/docs/api/java/lang/class-use/Iterable.html
Any section that says "Classes in XXX that implement Iterable" will list Java standard classes implementing the interface. Many of those don't implement Collection.
Kludgy, yes, but I think the code
Collection<Integer> _keys;
if (keys instanceof Collection) {
    _keys = (Collection<Integer>) keys;
} else {
    _keys = Lists.newArrayList(keys);
}
is perfectly sound. The interface Collection<T> extends Iterable<T> and you are not allowed to implement the same interface with 2 different type parameters, so there is no way a class could implement Collection<String> and Iterable<Integer>, for example.
The class Integer is final, so the difference between Iterable<? extends Integer> and Iterable<Integer> is largely academic.
Taken together, the last 2 paragraphs prove that if something is both an Iterable<? extends Integer> and a Collection, it must be a Collection<Integer>. Therefore your code is guaranteed to be safe. The compiler can't be sure of this so you can suppress the warning by writing
@SuppressWarnings("unchecked")

above the statement. You should also include a comment next to the annotation to explain why the code is safe.
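For instance, scoped to a local variable declaration (a condensed variant of the snippet above, not the only way to write it):

@SuppressWarnings("unchecked") // safe: an Iterable<? extends Integer> that is
                               // a Collection must be a Collection<Integer>
Collection<Integer> _keys = (keys instanceof Collection)
        ? (Collection<Integer>) keys
        : Lists.<Integer>newArrayList(keys);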
As for the question of whether there are any classes that implement Iterable but not Collection, as others have pointed out, the answer is yes. However, I think what you are really asking is whether there is any point in having two interfaces. Many others have asked this. Often when a method has a Collection argument (e.g. addAll()) it could, and probably should, be an Iterable.
Edit
@Andreas has pointed out in the comments that Iterable was only introduced in Java 5, whereas Collection was introduced in Java 1.2, and most existing methods taking a Collection could not be retrofitted to take an Iterable for compatibility reasons.
In core APIs, the only types that are Iterable but not Collection --
interface java.nio.file.Path
interface java.nio.file.DirectoryStream
interface java.nio.file.SecureDirectoryStream
class java.util.ServiceLoader
class java.sql.SQLException (and subclasses)
Arguably these are all bad designs.
As mentioned in @bayou.io's answer, one such implementation for Iterable is the new Path class for filesystem traversal introduced in Java 7.
If you happen to be on Java 8, Iterable has been retrofitted with (i.e. given a default method) spliterator() (pay attention to its Implementation Note), which lets you use it in conjunction with StreamSupport:
public static <T> Collection<T> convert(Iterable<T> iterable) {
    // using Collectors.toList() for illustration,
    // there are other collectors available
    return StreamSupport.stream(iterable.spliterator(), false)
            .collect(Collectors.toList());
}
This comes at the slight expense that any argument which is already a Collection implementation goes through an unnecessary stream-and-collect operation. You probably should only use it if the desire for a standardized JDK-only approach outweighs the potential performance hit, compared to your original casting or Guava-based methods, which is likely moot since you're already using Guava's CacheLoader.
To test this out, consider this snippet and sample output:
// Snippet
System.out.println(convert(Paths.get(System.getProperty("java.io.tmpdir"))));
// Sample output on Windows
[Users, MyUserName, AppData, Local, Temp]
After reading the excellent answers and provided docs, I poked around in a few more classes and found what looks to be the winner, both in terms of straightforwardness for test code and for a direct question title. Java's main ArrayList implementation contains this gem:
public Iterator<E> iterator() {
    return new Itr();
}
Where Itr is a private inner class with a highly optimized, customized implementation of Iterator<E>. Unfortunately, Iterator doesn't itself implement Iterable, so if I want to shoehorn it into my helper method for testing the code path that doesn't do the cast, I have to wrap it in my own junk class that implements Iterable (and not Collection) and returns the Itr. This is a handy way to turn a collection into an Iterable without having to write the iteration code yourself.
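For the unit test, rather than reaching for ArrayList's Itr directly, a tiny adapter (my own sketch, not anything from the JDK) yields an Iterable that is deliberately not a Collection:

static <T> Iterable<T> asPlainIterable(final Collection<T> source) {
    return new Iterable<T>() {
        public Iterator<T> iterator() {
            // Reuses the collection's optimized iterator; the wrapper
            // itself implements Iterable only, never Collection.
            return source.iterator();
        }
    };
}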
On a final note, my final version of the code doesn't even do the cast itself, because Guava's Lists.newArrayList does pretty much exactly what I was doing with the runtime type detection in the question.
@GwtCompatible(serializable = true)
public static <E> ArrayList<E> newArrayList(Iterable<? extends E> elements) {
    checkNotNull(elements); // for GWT
    // Let ArrayList's sizing logic work, if possible
    if (elements instanceof Collection) {
        @SuppressWarnings("unchecked")
        Collection<? extends E> collection = (Collection<? extends E>) elements;
        return new ArrayList<E>(collection);
    } else {
        return newArrayList(elements.iterator());
    }
}

If I need serializable should i use concrete List (e.g. ArrayList) or (Serializable)List?

We are having a discussion in the office and cannot agree on which approach is better.
I have a class (SomeClass) with a method which receives a Serializable object. The signature is as follows:
public void someMethod(Serializable serializableObject) {
    ...
}
And I need to call this method from another class, but I should provide it with some List as the actual parameter. There are two different approaches:
1. Serializable
private SomeClass someClass;

public void doSomething() {
    List<String> al = new ArrayList<String>();
    al.add("text");
    someClass.someMethod((Serializable) al);
}
2. ArrayList
private SomeClass someClass;

public void doSomething() {
    ArrayList<String> al = new ArrayList<String>();
    al.add("text");
    someClass.someMethod(al);
}
The benefit of the first example is that it follows the Java best practice of using an interface rather than a concrete implementation as the reference type: any programmer reading the source will understand that we don't need any special behavior of ArrayList, and in the one place where we need serializable behavior we add it by casting to the Serializable interface. A programmer can then simply swap the current List implementation for some other serializable implementation, for example LinkedList, without any side effect on this element, because we use the List interface as the reference type.
The benefit of the second example is that we refer to ArrayList as a class that has not only List behavior but also Serializable behavior. So if someone looked at this code and tried to change ArrayList to List, they would get a compile-time error, which reduces the time needed to understand what is going on there.
UPDATE: we can't change the someMethod signature. It comes from a third-party company, and we use it not only for Serializable Lists but also for Strings, Integers and some other Serializable objects.
You should use an interface when all you need is the methods that interface provides (this is most cases). However, if you need more than one interface, you can use generics, but the simplest approach is to use the concrete type.
It's better to declare it as ArrayList, because that combines the two types you need in one place: List + Serializable.
It doesn't matter that much, but note that using interfaces should be applied more strictly for return types, and less strictly for local variables.
I would change the signature of the someMethod so that it reflects what it requires from the invoker of the method:
public class SomeClass {
    public <T extends List<? extends Serializable> & Serializable> void someMethod(T t) {
    }

    public static void main(String[] args) {
        SomeClass test = new SomeClass();
        test.someMethod(new ArrayList<String>()); // Works
        test.someMethod(new ArrayList<Image>()); // Compile-time error, Image is not Serializable
        List<String> l = null;
        test.someMethod(l); // Compile-time error
    }
}
The signature of someMethod now says that you must invoke it with something that is a List, that is Serializable, and that contains elements that are Serializable.
In this case, I would just use List, and not worry that the compiler cannot guarantee that your object is serializable (it most likely will be anyway, if you've done things right elsewhere).
Note that methods of the following type (which accept a Serializable parameter) provide a false sense of security, because the compiler can never guarantee that the entire object graph which needs to be serialized will actually be serializable.
public void write(Serializable s);
Consider an ArrayList (serializable) which contains non-serializable objects. The signature may as well just be:
public void write(Object o);
And then you don't have to worry about all the extraneous casting.
Also consider that, although you cannot change the signature of the API you are using, you can very easily create a wrapper API which has a different signature.
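For example, a minimal sketch of such a wrapper (the names are illustrative): the generic bound makes the compiler check both the List-ness and the Serializable-ness of the argument up front, so no cast appears at the call sites:

public final class SerializableSender {
    private final SomeClass someClass;

    public SerializableSender(SomeClass someClass) {
        this.someClass = someClass;
    }

    public <T extends List<?> & Serializable> void send(T list) {
        someClass.someMethod(list); // T is statically known to be Serializable
    }
}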
1 is generally the right thing to do. However, in this case my opinion would be to bend that rule and declare the variable as ArrayList. This avoids the cast and guarantees that someone can't change the implementation of the List to one that isn't Serializable.
You can't do (1) because you're not free to change the List implementation type arbitrarily, which is the whole idea of doing that. You can only use a List implementation that implements Serializable. So you may as well express that in the code.
