The recurring explanation I find is that an upper bounded wildcard relaxes the restriction on which types a type parameter can accept. This concept applies to bounded generics as well, for example:
static <T extends Number> void gMethod(ArrayList<T> list) {}
This method's generic will accept objects of type Number or any of its subclasses when specified:
ArrayList<Integer> intList = new ArrayList<>();
gMethod(intList); // allowed
To further demonstrate that a generic bounded to Number will accept type arguments of Number or any of its subclasses:
class Thing<T extends Number> {}
Thing<Number> numThing = new Thing<>();
Thing<Integer> intThing = new Thing<>();
Thing<Double> dubThing = new Thing<>(); // All three instances work
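By contrast, a type argument outside the bound is rejected at compile time, e.g.:
Thing<String> strThing = new Thing<>(); // compile error: String is not within the bound of T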
Given this, the only benefit I can see to using an upper bounded wildcard vs a bounded generic is that an upper bounded wildcard type argument can be declared without relying on a type parameter already declared by either a class or method. Is there a more important benefit that I'm missing?
„…the only benefit I can see to using an upper bounded wildcard vs a bounded generic…“
If your use case only ever calls for you working with one Thing at a time, then the simple usage scenario you outline is all you'll ever need.
But eventually you'll have a use case where you will need to work with a heterogeneous assortment of Things. That's when you'll need to pull slightly more advanced polymorphism out from your toolbox.
„…Is there a more important benefit that I'm missing?…“
One super important benefit that it sounds like you're missing is a substitutability relationship between types, called covariance.
For example, since it's legal to do this:
Integer[] intAry = {2,4,6,8};
Number[] numAry = intAry;
Then intuitively, it seems like you should be able to do this:
List<Integer> intList = List.of(8,6,7,5,3,0,9);
List<Number> numList = intList; // but this fails to compile
A wildcard with an upper bound effectively makes collections covariant:
List<? extends Number> numList = intList;
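To see why the direct List<Number> assignment is forbidden while the wildcard assignment is allowed, here is a minimal sketch of the unsoundness the compiler is preventing:
List<Integer> ints = new ArrayList<>(List.of(8, 6, 7));
// List<Number> nums = ints; // if this compiled...
// nums.add(2.5);            // ...a Double would end up inside a List<Integer>
List<? extends Number> safe = ints; // the wildcard assignment is fine
Number first = safe.get(0);         // reading as Number is safe
// safe.add(2.5);                   // writing does not compile, so soundness is preserved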
Since Integer extends Number:
Thing<Number> numThing = new Thing<>();
Thing<Integer> intThing = new Thing<>();
Then intuitively, it seems like you should be able to do this:
numThing = intThing; // but this fails to compile
A wildcard with an upper bound effectively makes Thing covariant as well:
Thing<? extends Number> numThing = new Thing<>();
numThing = intThing; /* That makes sense! */
Same deal with methods. With this declaration:
public static void use(Thing<Number> oneThing) {
    /*...*/
}
This would fail to compile:
Thing<Integer> intThing = new Thing<>();
use(intThing); /* error: no suitable method found for use(Thing<Integer>) */
Wildcards with upper bounds make it possible to use Things the way you would intuitively expect:
public static void use(Thing<? extends Number> anyThing) {
    /*...*/
}
...
Thing<Integer> intThing = new Thing<>();
use(intThing); /* Perfectly fine! */
„…applies to bounded generics…This method's generic will accept…an upper bounded wildcard vs a bounded generic…“
The things you've incorrectly called „generics“ are actually called type parameters, type variables, or type arguments, depending on the context.
Related
The Collections.fill method has the following header:
public static <T> void fill(List<? super T> list, T obj)
Why is the wildcard necessary? The following header seems to work just as well:
public static <T> void fill(List<T> list, T obj)
I cannot see a reason why the wildcard is needed; code such as the following works with the second header as well as the first:
List<Number> nums = new ArrayList<>();
Integer i = 43;
fill(nums, i); //fill method written using second header
My question is: For what specific call of fill would the first header work but not the second? And if there is no such call, why include the wildcard? In this case, the wildcard does not make the method more concise nor add to readability (in my opinion).
This is a really good question, and the simple answer has already been guessed:
For the current version of fill(List<? super T> list, T obj) there is no input that would be rejected if the signature were changed to fill(List<T> list, T obj), so there is no practical benefit; the devs likely just followed the PECS principle.
The above statement derives from the following principle: if there is a type X such that X is a supertype of T, then List<X> is a subtype of List<? super T>, because the lower-bounded wildcard makes the list type contravariant in T.
Since we can always find such an X (in the worst case it's Object), the compiler can infer a suitable T for the list argument given either form of fill.
Knowing that, we can override the compiler's inference and pin the type ourselves using a "type witness", and the code breaks (fillNew is the hypothetical variant that uses List<T>, defined below):
List<Object> target = new ArrayList<>();
//Compiles OK as we can represent List<Object> as List<? super Integer> and it fits
Collections.<Integer>fill(target, 1);
//Compilation error as List<Object> is invariant to List<Integer> and not a valid substitute
Collections.<Integer>fillNew(target, 1);
This is all of course purely theoretical and nobody in their right mind would use the type argument there.
HOWEVER
While answering the question "What is the benefit of using wildcards here?" we have so far considered only one side of the equation: us, the consumers of the method, but not the library developers.
Hence this question is somewhat similar to asking why Collections.enumeration(final Collection<T> c) is declared the way it is rather than enumeration(Collection<T> c), since final seems superfluous to the end user.
We can speculate here about the real intention, but I can give a few subjective reasons:
First: using List<? super T> (like final for enumeration) immediately disambiguates the code that tiny bit more. For <? super T> specifically, it is useful to show that only partial knowledge about the type parameter is required and that the list cannot be used to produce values of T, only to consume them.
Quote:
Wildcards are useful in situations where only partial knowledge about the type parameter is required.
JLS 4.5.1. Type Arguments of Parameterized Types
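A small sketch of that consume-but-not-produce property (the method name is illustrative):
static <T> void demoConsumeOnly(List<? super T> sink, T value) {
    sink.add(value);          // OK: the list can consume values of type T
    Object got = sink.get(0); // but all it can produce is Object
}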
Second: it gives some freedom to the library owners to improve/update the method without breaking backward compatibility while conforming to the existing constraints.
Now let's make up some hypothetical "improvements" to see what I mean (I'll call the form of fill that uses List<T> fillNew):
#1 The decision is made to return the obj value (used to fill up the list) back to the caller:
public static <T> void fill(List<? super T> list, T obj)
//becomes ↓↓↓
public static <T> T fill(List<? super T> list, T obj)
The updated method would work just fine with the fill signature, but with fillNew the inferred return type isn't so obvious anymore:
List<Number> target = new ArrayList<>();
Long val = fill(target, 1L); //<<Here Long is the most specific type that fits both arguments
//Compilation error
Long val = fillNew(target, 1L); //<<Here Number is, so it cannot be assigned back
//More exotic case:
Integer val = fill(asList(true), 0); //val is Integer as expected
Comparable<?> val = fillNew(asList(true), 0); //val is now Comparable<?> as the most specific type
#2 The decision is made to add an overloaded version of fill that is 10x more performant when T is Comparable<T>:
/* Extremely performant 10x version */
public static <T extends Comparable<T>> void fill(List<? super T> list, T value)
/* Normal version */
public static <T> void fill(List<? super T> list, T value)
List<Number> target = new ArrayList<>();
fill(target, 1); //<<< The more performant version is used, as T is inferred as Integer, which implements Comparable<Integer>
fillNew(target, 1); //<< Still uses the slow version, because T is inferred as Number, which is not Comparable
To sum up: the current signature of fill is, in my opinion, more flexible and descriptive for all parties (users and library designers).
For your example, the reason it 'works' with your basic <T> signature is that an Integer is also a Number. The only T that works is T = Number, and then the whole thing just works out.
In this case, the expression you have for the T obj parameter is a reified type: You have an Integer. You could have a T instead. Perhaps you have this:
class AtomicReference<T> {
    // The actual impl of j.u.concurrent.AtomicReference...
    // but with this one additional method:
    public void fillIntoList(List<? super T> list) {
        T currentValue = get();
        Collections.fill(list, currentValue);
    }
}
I may perhaps want to write something like this:
AtomicReference<String> ref = new AtomicReference<String>("hello");
List<CharSequence> texts = new ArrayList<>();
...
ref.fillIntoList(texts);
If my hypothetical fillIntoList method simply had List<T> in its signature, that call wouldn't compile. Fortunately it has List<? super T>, so the code does compile. Had the Collections.fill method not done the <? super T> thing, the invocation of Collections.fill inside my fillIntoList method would have failed.
It's highly exotic for any of this to come up. But it can come up. List<? super T> is the strictly superior signature here - it can do everything List<T> does, and more, and it is also semantically correct: Of course I can fill a list-of-foos by writing into every slot a ref to something that I know for sure is a bar, if bar is a child of foo.
That is because inheritance is useful in some cases.
For example, if you have the following class structure:
public class Parent {
    // some code
}
public class Child extends Parent {
    // some other code
}
You could use the first method like this:
List<Parent> parents = new ArrayList<>();
Child child = new Child(); // after this line, set the values for the object
fill(parents, child); // fill method using first signature: T is inferred as Child, and List<Parent> matches List<? super Child>
I gather that you cannot bind a Java generics type parameter to a lower bound (i.e. using the super keyword). I was reading what the Angelika Langer Generics FAQ had to say on the subject. They say it basically comes down to a lower bound being useless ("not making any sense").
I'm not convinced. I can imagine a use for them to help you be more flexible to callers of a library method that produces a typed result. Imagine a method that created an array list of a user-specified size and filled it with the empty string. A simple declaration would be
public static ArrayList<String> createArrayListFullOfEmptyStrings(int i);
But that's unnecessarily restrictive to your clients. Why can't they invoke your method like this:
//should compile
List<Object> l1 = createArrayListFullOfEmptyStrings(5);
List<CharSequence> l2 = createArrayListFullOfEmptyStrings(5);
List<String> l3 = createArrayListFullOfEmptyStrings(5);
//shouldn't compile
List<Integer> l4 = createArrayListFullOfEmptyStrings(5);
At this point I would be tempted to try the following definition:
public static <T super String> List<T> createArrayListFullOfEmptyStrings(int size) {
    List<T> list = new ArrayList<T>(size);
    for (int i = 0; i < size; i++) {
        list.add("");
    }
    return list;
}
But it will not compile; the super keyword is illegal in this context.
Is my example above a bad example (ignoring what I say below)? Why isn't a lower bound useful here? And if it would be useful, what's the real reason that it is not permitted in Java?
P.S.
I know that a better organization might be something like this:
public static void populateListWithEmptyStrings(List<? super String> list, int size);
List<CharSequence> list = new ArrayList<CharSequence>();
populateListWithEmptyStrings(list, 5);
Can we for the purpose of this question pretend that due to a requirement, we need to do both operations in one method call?
Edit
@Tom G (justifiably) asks what benefit having a List<CharSequence> would have over a List<String>. For one, nobody said the returned list is immutable, so here's one advantage:
List<CharSequence> l2 = createArrayListFullOfEmptyStrings(5);
l2.add(new StringBuilder("foo").append("bar"));
Basically, it's not useful enough.
I think your example points out the only advantage of a lower bound, a feature the FAQ calls Restricted Instantiation:
The bottom line is: all that a "super" bound would buy you is the restriction that only supertypes of Number can be used as type arguments. ...
But as the other posts point out, the usefulness of even this feature can be limited.
Due to the nature of polymorphism and specialization, upper bounds are far more useful than lower bounds, as described by the FAQ (Access To Non-Static Members and Type Erasure). I suspect the complexity introduced by lower bounds isn't worth their limited value.
OP: I want to add I think you did show it is useful, just not useful enough. Come up with the irrefutable killer use cases and I'll back the JSR. :-)
The spec does talk about lower bounds of type variables, for example:
4.10.2
a type variable is a direct supertype of its lower bound.
5.1.10
a fresh type variable ... whose lower bound
It appears that a type variable only has a (non-null) lower bound if it's a synthetic one resulting from wildcard capture. What if the language allowed lower bounds on all type parameters? Probably it wouldn't cause a lot of trouble, and it's excluded only to keep generics simpler (well ...). Update: it is said that a theoretical investigation of lower-bounded type parameters has not been thoroughly conducted.
Update: a paper claiming lower bounds are OK: "Java Type Inference Is Broken: Can We Fix It?" by Daniel Smith.
RETRACT: the following argument is wrong. OP's example is legitimate.
Your particular example is not very convincing. First, it's not type safe. The returned list is indeed a List<String>; it's unsafe to view it as another type. Suppose your code compiled:
List<CharSequence> l2 = createArrayListFullOfEmptyStrings(5);
then we could add a non-String to it, which is wrong:
CharSequence chars = new StringBuilder();
l2.add(chars);
Well, a List<String> is not a List<CharSequence>, but it is somewhat like a list of CharSequences. Your need can be solved by using a wildcard:
public static List<String> createArrayListFullOfEmptyStrings(int size)
// a list of some specific subtype of CharSequence
List<? extends CharSequence> l2 = createArrayListFullOfEmptyStrings(5);
// legal. can retrieve elements as CharSequence
CharSequence chars = l2.get(0);
// illegal, won't compile. cannot insert elements as CharSequence
l2.add(new StringBuilder());
More than an answer, this is another (possibly killer?) use case.
I have a ModelDecorator helper. I want it to have the following public API
class ModelDecorator<T> {
    public static <T> ModelDecorator<T> create(Class<T> clazz);
    public <SUPER> T from(SUPER fromInstance);
}
So, given classes A, B extends A, it can be used like this:
A a = new A();
B b = ModelDecorator.create(B.class).from(a);
But I want to have bounds on T and SUPER, to make sure that only subclasses can be instantiated using the API. At the moment, I can do:
C c = new C();
B b = ModelDecorator.create(B.class).from(c);
where B does NOT inherit from C.
Obviously, if I could do:
public <SUPER super T> T from(SUPER fromInstance);
That would solve my problem.
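For what it's worth, since <T extends SUPER> is legal where <SUPER super T> is not, one possible restructuring is to fix the supertype first and bound the subtype at the second call. This is only a hedged sketch (the class name, method names, and body are hypothetical):
class ModelDecorator2<SUPER> {
    private final SUPER instance;
    private ModelDecorator2(SUPER instance) { this.instance = instance; }
    public static <SUPER> ModelDecorator2<SUPER> from(SUPER instance) {
        return new ModelDecorator2<>(instance);
    }
    public <T extends SUPER> T to(Class<T> clazz) {
        // ... instantiate T and copy state over from instance ...
        throw new UnsupportedOperationException("sketch only");
    }
}
// B b = ModelDecorator2.from(a).to(B.class);  // compiles: B extends A
// B b2 = ModelDecorator2.from(c).to(B.class); // compile error: B does not extend C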
What advantage does typing the List give you at that point? When you iterate over the returned collection, you should still be able to do the following:
for (String s : returnedList) {
    CharSequence cs = s;
    // do something with your CharSequence
}
Edit: I bring good news. There is a way to get most of what you want.
public static <R extends List<? super String>> R createListFullOfEmptyString(IntFunction<R> creator, int size)
{
    R list = creator.apply(size);
    for (int i = 0; i < size; i++)
    {
        list.add("");
    }
    return list;
}
// compiles
List<Object> l1 = createListFullOfEmptyString(ArrayList::new, 5);
List<CharSequence> l2 = createListFullOfEmptyString(ArrayList::new, 5);
List<String> l3 = createListFullOfEmptyString(ArrayList::new, 5);
// doesn't compile
List<Integer> l4 = createListFullOfEmptyString(ArrayList::new, 5);
The downside is clients do need to provide either an instance of R to mutate, or some means to construct an R. There is no other way to safely construct it.
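For completeness, the "instance to mutate" flavor could look like this (a hedged sketch; the method name is made up):
public static <R extends List<? super String>> R fillWithEmptyStrings(R list, int size) {
    for (int i = 0; i < size; i++) {
        list.add("");
    }
    return list;
}
List<CharSequence> l = fillWithEmptyStrings(new ArrayList<>(), 5); // compiles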
I'll retain my original answer below for informational purposes.
In summary:
There is no good reason; it just has not been done.
And until such time as it is, it will be impossible to write exact types with correct variance for methods that do all of:
A) Accept or create parametrized data structure
B) Write computed (not-passed-in) value(s) to that data structure
C) Return that data structure
Writing/accepting values is exactly the case where contravariance applies, which means the type parameter on the data structure must be lower-bounded by the type of the value being written to the data structure. The only way to express that in Java currently is using a lower-bounded wildcard on the data structure, e.g. List<? super T>.
If we are designing an API such as the OP's, which might naturally (but not legally) be expressed as:
// T is the type of the value(s) being computed and written to the data structure
// Method creates the data structure
<S super T> Container<S> create()
// Method writes to the data structure
<S super T> Container<S> write(Container<S> container)
Then the options available to us are:
A) Use a lower-bounded wildcard, and force callers to cast the output:
// This one is actually useless - there is no type the caller can cast to that is both read- and write-safe.
Container<? super T> create()
// Caller must cast result to the same type they passed in.
Container<? super T> write(Container<? super T> container)
B) Overly restrict the type parameter on the data structure to match the type of the value being written, and force callers to cast the input and output:
// Caller must accept as-is; cannot write values of type S (S super T) into the result.
Container<T> create()
// Caller must cast Container<S> (S super T) to Container<T> before calling, then cast the result back to Container<S>.
Container<T> write(Container<T> container)
C) Use a new type parameter and do our own unsafe casting internally:
// Caller must ensure S is a supertype of T - we cast T to S internally!
<S> Container<S> create()
// Caller must ensure S is a supertype of T - we cast T to S internally!
<S> Container<S> write(Container<S> container)
Pick your poison.
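To make option (C) concrete, here is a minimal sketch for the case T = String (names are hypothetical; the unchecked cast is exactly the unsafe part):
@SuppressWarnings("unchecked") // safe only if callers ensure S is a supertype of String
static <S> List<S> createFilled(int size) {
    List<S> list = new ArrayList<>(size);
    for (int i = 0; i < size; i++) {
        list.add((S) ""); // unchecked cast: we assert that "" is an S
    }
    return list;
}
// List<CharSequence> ok = createFilled(5); // fine: CharSequence is a supertype of String
// List<Integer> bad = createFilled(5);     // also compiles, which is exactly the danger of option (C)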
Hmm, ok - let's work with this. You define a method:
public static <T super String> List<T> createArrayListFullOfEmptyStrings(int size) {
What does that mean? It means that if I call your method, then I get back a list of some superclass of String. Maybe it returns a list of String. Maybe it returns a list of Object. I don't know.
Cool.
List<Object> l1 = createArrayListFullOfEmptyStrings(5);
According to you, that should compile. But that's not right! I can put an Integer into a list of Object - l1.add(3). But if you are returning a list of String, then doing that should be illegal.
List<String> l3 = createArrayListFullOfEmptyStrings(5);
According to you, that should compile. But that's not right! l3.get(1) should always return a String ... but that method might have returned a list of Object, meaning that l3.get(1) could conceivably be an Integer.
The only thing that works is
List<? super String> l5 = createArrayListFullOfEmptyStrings(5);
All I know is that I can safely call l5.add("foo"), and I can safely get Object o = l5.get(2).
Please explain this generic code wildcard compile time error:
//no compile time error.
List<? extends Number> x = new ArrayList<>();
//compile time error.
List<? extends Number> x = new ArrayList<? extends Number>();
It's invalid syntax to instantiate a generic type with wildcards. The type List<? extends Number> means a List of some type that is or extends Number. Creating an instance of this type doesn't make sense, because instantiation creates something specific:
new ArrayList<? extends Number>();//compiler:"Wait, what am I creating exactly?"
Generic types with wildcards only make sense for variables and method parameters, because this allows greater freedom in what can be assigned/passed into them.
//compiler:"Okay, so passing in a List<Integer> or a List<Double> are both fine"
public void eatSomeNumbers(List<? extends Number> numbers) {
    for (Number number : numbers) {
        System.out.println("om nom " + number + " nom");
    }
}
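For instance (a small usage sketch):
List<Integer> ints = List.of(1, 2, 3);
List<Double> doubles = List.of(1.5, 2.5);
eatSomeNumbers(ints);    // fine: List<Integer> matches List<? extends Number>
eatSomeNumbers(doubles); // also fine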
Make sure to keep in mind the limitations that come with using wildcards.
List<? extends Number> numList = ...
numList.add(3);//compiler:"Nope, cause that might be a List<Double>"
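Reading through the wildcard, on the other hand, is always safe:
List<? extends Number> nums = List.of(1, 2.5);
Number n = nums.get(0);//compiler:"Sure, whatever it holds is at least a Number"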
As for your first example, the diamond is a new feature in Java 7 that allows the compiler to infer the type of the new generic instance, based on the type of the variable it's assigned to. In this case:
List<? extends Number> x = new ArrayList<>();
The compiler is most likely inferring new ArrayList<Number>() here, but what's inferred hardly matters, as long as it's a valid assignment to the given variable. This was the reason the diamond operator was introduced: specifying the generic type of a new object was redundant, as long as some generic type would make it a valid assignment/argument.
This reasoning only makes sense if you remember that generics in Java are a purely compile-time language feature, because of type erasure, and have no meaning at runtime. Wildcards exist only because of this limitation. By contrast, in C# generic type information sticks around at runtime - and generic wildcards don't exist in that language.
Use
List<? extends Number> x = new ArrayList<Number>();
instead.
Is there a difference between
<N extends Number> Collection<N> getThatCollection(Class<N> type)
and
Collection<? extends Number> getThatCollection(Class<? extends Number> type)
They expose different interfaces and contracts for the method.
The first declaration should return a collection whose element type is the same as that of the argument class. The compiler infers the type of N (if not specified). So the following two statements are valid when using the first declaration:
Collection<Integer> c1 = getThatCollection(Integer.class);
Collection<Double> c2 = getThatCollection(Double.class);
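For illustration, an implementation matching the first declaration might look like this (a minimal sketch; the body is hypothetical):
static <N extends Number> Collection<N> getThatCollection(Class<N> type) {
    // the Class token ties the result's element type to the caller's request
    return new ArrayList<N>();
}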
The second declaration doesn't declare any relationship between the returned Collection's type argument and the argument class. The compiler assumes they are unrelated, so the client has to use the returned value as a Collection<? extends Number>, regardless of what the argument is:
// Invalid statements
Collection<Integer> c1 = getThatCollection(Integer.class); // invalid
Collection<Double> c2 = getThatCollection(Double.class); // invalid
Collection<Number> cN = getThatCollection(Number.class); // invalid
// Valid statements
Collection<? extends Number> c3 = getThatCollection(Integer.class); // valid
Collection<? extends Number> c4 = getThatCollection(Double.class); // valid
Collection<? extends Number> cNC = getThatCollection(Number.class); // valid
Recommendation
If there is indeed a relationship between the returned type argument and the passed argument, it is much better to use the first declaration; the client code is cleaner, as shown above.
If no such relationship exists, it is still better to avoid the second declaration. A return type with a bounded wildcard forces the client to use wildcards everywhere, so the client code becomes cluttered and unreadable. Joshua Bloch emphasizes that you should Avoid Bounded Wildcards in Return Types (slide 23). While bounded wildcards in return types may be useful in some cases, the ugliness of the resulting code should, IMHO, outweigh the benefit.
In this particular case, no. However, the second option is more flexible, since it allows the method to return a collection whose elements are of a different type (though still a Number) than the type represented by the class parameter.
Concrete example:
Collection<? extends Number> getRoot(Class<? extends Number> number) {
    ArrayList<Integer> result = new ArrayList<Integer>();
    result.add(Math.round(2.5f)); // an Integer goes in, no matter what class was requested
    return result;
}