Confusion regarding super use in Generics Java [duplicate] - java

I gather that you cannot bind a Java generics type parameter to a lower bound (i.e. using the super keyword). I was reading what the Angelika Langer Generics FAQ had to say on the subject. They say it basically comes down to a lower bound being useless ("not making any sense").
I'm not convinced. I can imagine a use for them to help you be more flexible to callers of a library method that produces a typed result. Imagine a method that created an array list of a user-specified size and filled it with the empty string. A simple declaration would be
public static ArrayList<String> createArrayListFullOfEmptyStrings(int i);
But that's unnecessarily restrictive to your clients. Why can't they invoke your method like this:
//should compile
List<Object> l1 = createArrayListFullOfEmptyStrings(5);
List<CharSequence> l2 = createArrayListFullOfEmptyStrings(5);
List<String> l3 = createArrayListFullOfEmptyStrings(5);
//shouldn't compile
List<Integer> l4 = createArrayListFullOfEmptyStrings(5);
At this point I would be tempted to try the following definition:
public static <T super String> List<T> createArrayListFullOfEmptyStrings(int size) {
    List<T> list = new ArrayList<T>(size);
    for (int i = 0; i < size; i++) {
        list.add("");
    }
    return list;
}
But it will not compile; the super keyword is illegal in this context.
Is my example above a bad example (ignoring what I say below)? Why isn't a lower bound useful here? And if it would be useful, what's the real reason that it is not permitted in Java?
P.S.
I know that a better organization might be something like this:
public static void populateListWithEmptyStrings(List<? super String> list, int size);
List<CharSequence> list = new ArrayList<CharSequence>();
populateListWithEmptyStrings(list, 5);
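The P.S. version can be sketched as a complete method; the lower-bounded wildcard on the parameter is what lets callers pass a list of any supertype of String:

```java
import java.util.ArrayList;
import java.util.List;

public class Populate {
    // Caller supplies the list; '? super String' accepts List<String>,
    // List<CharSequence>, List<Object>, etc.
    public static void populateListWithEmptyStrings(List<? super String> list, int size) {
        for (int i = 0; i < size; i++) {
            list.add("");
        }
    }

    public static void main(String[] args) {
        List<CharSequence> list = new ArrayList<>();
        populateListWithEmptyStrings(list, 5);
        System.out.println(list.size()); // prints 5
    }
}
```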
Can we for the purpose of this question pretend that due to a requirement, we need to do both operations in one method call?
Edit
@Tom G (justifiably) asks what benefit having a List<CharSequence> would have over a List<String>. For one, nobody said the returned list is immutable, so here's one advantage:
List<CharSequence> l2 = createArrayListFullOfEmptyStrings(5);
l2.add(new StringBuilder("foo").append("bar"));

Basically, it's not useful enough.
I think your example points out the only advantage of a lower bound, a feature the FAQ calls Restricted Instantiation:
The bottom line is: all that a " super " bound would buy you is the restriction that only supertypes of Number can be used as type arguments. ....
But as the other posts point out, the usefulness of even this feature can be limited.
Due to the nature of polymorphism and specialization, upper bounds are far more useful than lower bounds, as the FAQ describes (Access To Non-Static Members and Type Erasure). I suspect the complexity introduced by lower bounds isn't worth their limited value.
OP: I want to add I think you did show it is useful, just not useful enough. Come up with the irrefutable killer use cases and I'll back the JSR. :-)

the spec does talk about lower bounds of type parameters, for example
4.10.2
a type variable is a direct supertype of its lower bound.
5.1.10
a fresh type variable ... whose lower bound
It appears that a type variable only has a (non-null) lower bound if it is a synthetic one resulting from wildcard capture. What if the language allowed lower bounds on all type parameters? It probably wouldn't cause a lot of trouble, and may have been excluded only to keep generics simpler (well ...). Update: it is said that the theoretical investigation of lower-bounded type parameters has not been thoroughly conducted.
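A minimal sketch of where those synthetic lower bounds arise in practice: when the compiler applies capture conversion to a ? super wildcard, the fresh type variable it invents carries String as its lower bound, which is exactly what makes the add call type-check:

```java
import java.util.ArrayList;
import java.util.List;

public class Capture {
    // The compiler captures '? super String' as a fresh type variable
    // whose lower bound is String, so adding a String is legal.
    static void addEmpty(List<? super String> list) {
        list.add("");
    }

    public static void main(String[] args) {
        List<Object> objects = new ArrayList<>();
        addEmpty(objects);
        System.out.println(objects.size()); // prints 1
    }
}
```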
Update: a paper claiming lower bounds are ok: "Java Type Inference Is Broken: Can We Fix It?" by Daniel Smith
RETRACT: the following argument is wrong. OP's example is legitimate.
Your particular example is not very convincing. First it's not type safe. The returned list is indeed a List<String>, it's unsafe to view it as another type. Suppose your code compiles:
List<CharSequence> l2 = createArrayListFullOfEmptyStrings(5);
then we can add non-String to it, which is wrong
CharSequence chars = new StringBuilder();
l2.add(chars);
Well, a List<String> is not a List<CharSequence>, but it is somewhat like one. Your need can be met by using a wildcard:
public static List<String> createArrayListFullOfEmptyStrings(int size)
// a list of some specific subtype of CharSequence
List<? extends CharSequence> l2 = createArrayListFullOfEmptyStrings(5);
// legal. can retrieve elements as CharSequence
CharSequence chars = l2.get(0);
// illegal, won't compile. cannot insert elements as CharSequence
l2.add(new StringBuilder());

More than an answer, this is another (possibly killer?) use case.
I have a ModelDecorator helper. I want it to have the following public API
class ModelDecorator<T> {
    public static <T> ModelDecorator<T> create(Class<T> clazz);
    public <SUPER> T from(SUPER fromInstance);
}
So, given classes A, B extends A, it can be used like this:
A a = new A();
B b = ModelDecorator.create(B.class).from(a);
But I want to have bounds on T and SUPER, so I make sure that only subclasses can be instantiated using the API. At the moment, I can do:
C c = new C();
B b = ModelDecorator.create(B.class).from(c);
where B does NOT inherit from C.
Obviously, if I could do:
public <SUPER super T> T from(SUPER fromInstance);
That would solve my problem.

What advantage does typing the List give you at that point? When you iterate over the returned collection, you should still be able to do the following:
for (String s : returnedList) {
    CharSequence cs = s;
    // do something with your CharSequence
}

Edit: I bring good news. There is a way to get most of what you want.
public static <R extends List<? super String>> R createListFullOfEmptyString(IntFunction<R> creator, int size)
{
    R list = creator.apply(size);
    for (int i = 0; i < size; i++)
    {
        list.add("");
    }
    return list;
}
// compiles
List<Object> l1 = createListFullOfEmptyString(ArrayList::new, 5);
List<CharSequence> l2 = createListFullOfEmptyString(ArrayList::new, 5);
List<String> l3 = createListFullOfEmptyString(ArrayList::new, 5);
// doesn't compile
List<Integer> l4 = createListFullOfEmptyString(ArrayList::new, 5);
The downside is clients do need to provide either an instance of R to mutate, or some means to construct an R. There is no other way to safely construct it.
I'll retain my original answer below for informational purposes.
In summary:
There is not a good reason, it just has not been done.
And until such time as it is, it will be impossible to write exact types with correct variance for methods that do all of:
A) Accept or create parametrized data structure
B) Write computed (not-passed-in) value(s) to that data structure
C) Return that data structure
Writing/accepting values is exactly the case where contravariance applies, which means the type parameter on the data structure must be lower-bounded by the type of the value being written to the data structure. The only way to express that in Java currently is using a lower-bounded wildcard on the data structure, e.g. List<? super T>.
If we are designing an API such as the OP's, which might naturally (but not legally) be expressed as:
// T is the type of the value(s) being computed and written to the data structure
// Method creates the data structure
<S super T> Container<S> create()
// Method writes to the data structure
<S super T> Container<S> write(Container<S> container)
Then the options available to us are:
A) Use a lower-bounded wildcard, and force callers to cast the output:
// This one is actually useless - there is no type the caller can cast to that is both read- and write-safe.
Container<? super T> create()
// Caller must cast result to the same type they passed in.
Container<? super T> write(Container<? super T> container)
B) Overly restrict the type parameter on the data structure to match the type of the value being written, and force callers to cast the input and output:
// Caller must accept as-is; cannot write values of type S (S super T) into the result.
Container<T> create()
// Caller must cast Container<S> (S super T) to Container<T> before calling, then cast the result back to Container<S>.
Container<T> write(Container<T> container)
C) Use a new type parameter and do our own unsafe casting internally:
// Caller must ensure S is a supertype of T - we cast T to S internally!
<S> Container<S> create()
// Caller must ensure S is a supertype of T - we cast T to S internally!
<S> Container<S> write(Container<S> container)
Pick your poison.
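For concreteness, here is a hedged sketch of option (C) applied to the original example: a free type parameter plus an internal unchecked cast. The class and method names follow the question; the @SuppressWarnings and the cast are the poison:

```java
import java.util.ArrayList;
import java.util.List;

public class OptionC {
    // Caller must ensure S is a supertype of String - the compiler cannot.
    @SuppressWarnings("unchecked")
    public static <S> List<S> createListFullOfEmptyStrings(int size) {
        List<S> list = new ArrayList<>(size);
        for (int i = 0; i < size; i++) {
            list.add((S) ""); // unchecked: erased at run time, so it never fails here
        }
        return list;
    }

    public static void main(String[] args) {
        List<CharSequence> l2 = OptionC.createListFullOfEmptyStrings(5);
        l2.add(new StringBuilder("foo")); // works: the list is writable as CharSequence
        System.out.println(l2.size()); // prints 6
        // The poison: List<Integer> bad = OptionC.createListFullOfEmptyStrings(5);
        // also compiles, and fails only when an element is later read as Integer.
    }
}
```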

Hmm, ok - let's work with this. You define a method:
public static <T super String> List<T> createArrayListFullOfEmptyStrings(int size) {
What does that mean? It means that if I call your method, then I get back a list of some superclass of String. Maybe it returns a list of String. Maybe it returns a list of Object. I don't know.
Cool.
List<Object> l1 = createArrayListFullOfEmptyStrings(5);
According to you, that should compile. But that's not right! I can put an Integer into a list of Object - l1.add(3) . But if you are returning a list of String, then doing that should be illegal.
List<String> l3 = createArrayListFullOfEmptyStrings(5);
According to you, that should compile. But that's not right! l3.get(1) should always return a String ... but that method might have returned a list of Object, meaning that l3.get(1) could conceivably be an Integer.
The only thing that works is
List<? super String> l5 = createArrayListFullOfEmptyStrings(5);
All I know is that I can safely call l5.add("foo"), and I can safely get Object o = l5.get(2).
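That safe-but-limited usage can be spelled out against the straightforward List<String> declaration, since any List<String> is already a valid List<? super String>:

```java
import java.util.ArrayList;
import java.util.List;

public class WildcardView {
    public static List<String> createArrayListFullOfEmptyStrings(int size) {
        List<String> list = new ArrayList<>(size);
        for (int i = 0; i < size; i++) {
            list.add("");
        }
        return list;
    }

    public static void main(String[] args) {
        List<? super String> l5 = createArrayListFullOfEmptyStrings(5);
        l5.add("foo");        // writing a String is always safe
        Object o = l5.get(2); // reads come back only as Object
        System.out.println(l5.size()); // prints 6
    }
}
```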

Related

Java Generics: What is the benefit of using wildcards here?

The Collections.fill method has the following header:
public static <T> void fill(List<? super T> list, T obj)
Why is the wildcard necessary? The following header seems to work just as well:
public static <T> void fill(List<T> list, T obj)
I cannot see a reason why the wildcard is needed; code such as the following works with the second header as well as the first:
List<Number> nums = new ArrayList<>();
Integer i = 43;
fill(nums, i); //fill method written using second header
My question is: For what specific call of fill would the first header work but not the second? And if there is no such call, why include the wildcard? In this case, the wildcard does not make the method more concise nor add to readability (in my opinion).
This is a really good question and the simple answer was guessed already:
For the current version of fill(List<? super T> list, T obj) there is no input that would be rejected if the signature were changed to fill(List<T> list, T obj), so there is no benefit, and the devs likely just followed the PECS principle.
The above statement derives from the principle that if there is a type X such that X is a supertype of T, then List<X> is a valid argument for a List<? super T> parameter, because of type contravariance.
Since we can always find such an X (in the worst case it's the Object class), the compiler can infer a suitable T for the given List<X> argument under either form of fill.
So, knowing that fact, we can override the compiler's inference and supply the type ourselves using a "type witness", and the code breaks (fillNew here denotes the hypothetical variant declared as fillNew(List<T> list, T obj)):
List<Object> target = new ArrayList<>();
//Compiles OK as we can represent List<Object> as List<? super Integer> and it fits
Collections.<Integer>fill(target, 1);
//Compilation error as List<Object> is invariant to List<Integer> and not a valid substitute
Collections.<Integer>fillNew(target, 1);
This is all of course purely theoretical and nobody in their right mind would use the type argument there.
HOWEVER
While answering the question "What is the benefit of using wildcards here?" we yet considered only one side of the equation - us, consumers of the method and our experience but not library developers.
Hence this question is somewhat similar to why Collections.enumeration(final Collection<T> c) is declared the way it is and not enumeration(Collection<T> c) as final seems superfluous for the end-user.
We can speculate here about the real intention, but I can give a few subjective reasons:
First: using List<? super T> (as well as final for enumeration) immediately disambiguates the code that tiny bit more, and for <? super T> specifically, it is useful to show that only partial knowledge about the type parameter is required and that the list cannot be used to produce values of T, only to consume them.
Quote:
Wildcards are useful in situations where only partial knowledge about the type parameter is required.
JLS 4.5.1. Type Arguments of Parameterized Types
Second: it gives some freedom to the library owners to improve/update the method without breaking backward compatibility while conforming to the existing constraints.
Now let's try to make up some hypothetical "improvements" to see what I mean (I'll call the form of fill that uses List<T> fillNew):
#1 The decision is to make method to return the obj value (used to fill up the list) back:
public static <T> void fill(List<? super T> list, T obj)
//becomes ↓↓↓
public static <T> T fill(List<? super T> list, T obj)
The updated method would work just fine for fill signature, but for fillNew - the inferred return type now isn't that obvious:
List<Number> target = new ArrayList<>();
Long val = fill(target, 1L); //<<Here Long is the most specific type that fits both arguments
//Compilation error
Long val = fillNew(target, 1L); //<<Here Number is, so it cannot be assigned back
//More exotic case:
Integer val = fill(asList(true), 0); //val is Integer as expected
Comparable<?> val = fillNew(asList(true), 0); //val is now Comparable<?> as the most specific type
#2 The decision to add an overloaded version of fill that is 10x more performant in cases when T is Comparable<T>:
/* Extremely performant 10x version */
public static <T extends Comparable<T>> void fill(List<? super T> list, T value)
/* Normal version */
public static <T> void fill(List<? super T> list, T value)
List<Number> target = new ArrayList<>();
fill(target, 1); //<<< Here the more performant version is used as T inferred to Integer and it implements Comparable<Integer>
fillNew(target, 1); //<< Still uses the slow version just because T is inferred to Number which is not Comparable
To sum up - the current signature of fill is more flexible/descriptive in my opinion for all parties (developers and library designers)
For your example, the reason it 'works' with your basic <T> signature, is that an Integer is also a Number. The only 'T' that works is T = Number, and then the whole thing just works out.
In this case, the expression you have for the T obj parameter is a reified type: You have an Integer. You could have a T instead. Perhaps you have this:
class AtomicReference<T> {
// The actual impl of j.u.concurrent.AtomicReference...
// but with this one additional method:
public void fillIntoList(List<? super T> list) {
T currentValue = get();
Collections.fill(list, currentValue);
}
}
I may perhaps want to write something like this:
AtomicReference<String> ref = new AtomicReference<String>("hello");
List<CharSequence> texts = new ArrayList<>();
...
ref.fillIntoList(texts);
If my hypothetical fillIntoList method simply had List<T> in the signature that wouldn't compile. Fortunately it does, so the code does compile. Had the Collections.fill method not done the <? super T> thing, the invocation of the Collections.fill method in my fillIntoList method would have failed.
It's highly exotic for any of this to come up. But it can come up. List<? super T> is the strictly superior signature here - it can do everything List<T> does, and more, and it is also semantically correct: Of course I can fill a list-of-foos by writing into every slot a ref to something that I know for sure is a bar, if bar is a child of foo.
That is because inheritance is useful in some cases.
For example, if you have the following class structure:
public class Parent {
//some code
}
public class Child extends Parent {
//some another code
}
You could use the first method by writing:
Child childObject = new Child(); //after this line, set the values for the object
List<Parent> outParentList = new ArrayList<>();
fill(outParentList, childObject); //fill method using first signature: T is inferred as Child, and List<Parent> is a valid List<? super Child>

Differences between these generic statements [duplicate]

I'm a newbie in Generic and my question is: what difference between two functions:
function 1:
public static <E> void funct1 (List<E> list1) {
}
function 2:
public static void funct2(List<?> list) {
}
The first signature says: list1 is a List of Es.
The second signature says: list is a List of instances of some type, but we don't know the type.
The difference becomes obvious when we try to change the method so it takes a second argument, which should be added to the list inside the method:
import java.util.List;
public class Experiment {
public static <E> void funct1(final List<E> list1, final E something) {
list1.add(something);
}
public static void funct2(final List<?> list, final Object something) {
list.add(something); // does not compile
}
}
The first one works nicely. And you can't change the second argument into anything that will actually compile.
Actually I just found an even nicer demonstration of the difference:
public class Experiment {
public static <E> void funct1(final List<E> list) {
list.add(list.get(0));
}
public static void funct2(final List<?> list) {
list.add(list.get(0)); // !!!!!!!!!!!!!! won't compile !!!!!!!!!
}
}
One might ask why we need <?> when it only restricts what we can do with it (as @Babu_Reddy_H did in the comments). I see the following benefits of the wildcard version:
The caller has to know less about the object they pass in. For example, if I have a map of lists, Map<String, List<?>>, I can pass its values to your function without specifying the type of the list elements.
If I hand out objects parameterized like this, I actively limit what people know about these objects and what they can do with them (as long as they stay away from unsafe casting).
These two make sense when I combine them: List<? extends T>. For example, consider a method List<T> merge(List<? extends T>, List<? extends T>), which merges the two input lists into a new result list. Sure, you could introduce two more type parameters, but why would you want to? It would be over-specifying things.
Finally, wildcards can have lower bounds, so with lists you can make the add method work while get doesn't give you anything useful. Of course, that triggers the next question: why don't type parameters have lower bounds?
For a more in depth answer see: When to use generic methods and when to use wild-card? and http://www.angelikalanger.com/GenericsFAQ/FAQSections/TypeArguments.html#FAQ203
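The merge method mentioned above might be sketched like this (the name and behavior are the answer's hypothetical):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class Merge {
    // Two upper-bounded wildcards: callers may pass lists of different
    // element types, as long as both are subtypes of some common T.
    static <T> List<T> merge(List<? extends T> a, List<? extends T> b) {
        List<T> result = new ArrayList<>(a.size() + b.size());
        result.addAll(a);
        result.addAll(b);
        return result;
    }

    public static void main(String[] args) {
        List<Integer> ints = Arrays.asList(1, 2);
        List<Double> doubles = Arrays.asList(3.0);
        List<Number> numbers = merge(ints, doubles); // T inferred as Number
        System.out.println(numbers.size()); // prints 3
    }
}
```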
Generics makes the collection more type safe.
List<E> : E here is the Type Parameter, which can be used to determine the content type of the list, but there was No way to check what was the content during the runtime.
Generics are checked only during compilation time.
<? extends String> : This was specially built into Java to handle a problem with the type parameter. "? extends String" means this list can hold objects that have an IS-A relationship with String.
For eg:
Animal class
Dog class extends Animal
Tiger class extends Animal
So using "public void go(ArrayList<Animal> a)" will NOT accept Dog or Tiger as its content but Animal.
"public void go(ArrayList<? extends Animal> a)" is whats needed to make the ArrayList take in Dog and Tiger type.
Check for references in Head First Java.
List<E> as a parameter type says that the parameter must be a list of items of some element type E, for any type E. Moreover, you can bind the E parameter to declare references to list items inside the function body, or use it in other parameter types.
The List<?> as a parameter type has the same semantics, except that there is no way to declare references to the items in the list other than to use Object. Other posts give additional subtle differences.
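A minimal sketch of that point: inside a method taking List<?>, elements can only be referenced as Object (the helper name is illustrative):

```java
import java.util.Arrays;
import java.util.List;

public class PrintAll {
    // With List<?>, the only safe reference type for elements is Object.
    static int countNonNull(List<?> list) {
        int n = 0;
        for (Object o : list) {
            if (o != null) n++;
        }
        return n;
    }

    public static void main(String[] args) {
        System.out.println(countNonNull(Arrays.asList("a", 1, null))); // prints 2
    }
}
```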
The first is a function that accepts a parameter that must be a list of items of E type.
the second example type is not defined
List<?> list
so you can pass list of any type of objects.
I usually explain the difference between <E> and <?> by a comparison with logical quantifications, that is, universal quantification and existential quantification.
corresponds to "forall E, ..."
corresponds to "there exists something(denoted by ) such that ...."
Therefore, the following generic method declaration means that, for all class type E, we define funct1
public static <E> void funct1(List<E> list1) {
}
The following generic method declaration means that, for some existing class denoted by <?>, we define funct2.
public static void funct2(List<?> list) {
}
(Since your edit) Those two function signatures have the same effect to outside code -- they both take any List as argument. A wildcard is equivalent to a type parameter that is used only once.
In addition to those differences mentioned before, there is also an additional difference: You can explicitly set the type arguments for the call of the generic method:
List<Apple> apples = ...
ClassName.<Banana>funct2(apples); // for some reason the compiler seems to be ok
// with type parameters, even though the method has none
ClassName.<Banana>funct1(apples); // compiler error: incompatible types: List<Apple>
// cannot be converted to List<Banana>
(ClassName is the name of the class containing the methods.)
In this context, both the wildcard (?) and the type parameter (E) will do the same for you, but there are certain advantages depending on the use case.
Let's say you want a method that has more than one parameter, like:
public void function1(ArrayList<?> a, ArrayList<?> b){
// some process
}
public <T> void function2(ArrayList<T> a, ArrayList<T> b){
// some process
}
In function1, a can be an ArrayList of String and b an ArrayList of Integer, so it is not possible to relate the types of the two params, but this is easy with function2.
We should use type parameters (function 2) if we want to use the type later in the method or class.
There are some features in WildCard and Type param:
WildCard(?)
It supports both upper and lower bounds on the type, while the type param (E) supports only upper bounds.
Type Param(E)
Sometimes we do not need to pass the actual type, e.g.:
ArrayList<Integer> ai = new ArrayList<Integer>();
ArrayList<Double> ad = new ArrayList<Double>();
function2(ai, ad);
// Does NOT compile: generics are invariant, so T would have to be both
// Integer and Double at once. It would compile if the parameters were
// declared as ArrayList<? extends T>, with T inferred as Number.
When a call like this is legal, the compiler infers the type argument for us based on the types of the actual arguments.
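A variant with an upper-bounded wildcard on each parameter does accept lists of different element types, since each wildcard captures independently (the method name and shape are illustrative):

```java
import java.util.ArrayList;
import java.util.Arrays;

public class Variance {
    // Each parameter only needs to hold *some* subtype of T, so an
    // ArrayList<Integer> and an ArrayList<Double> can be passed together.
    static <T> int totalSize(ArrayList<? extends T> a, ArrayList<? extends T> b) {
        return a.size() + b.size();
    }

    public static void main(String[] args) {
        ArrayList<Integer> ai = new ArrayList<>(Arrays.asList(1));
        ArrayList<Double> ad = new ArrayList<>(Arrays.asList(2.0));
        int total = totalSize(ai, ad); // compiles: T infers to a common supertype of Integer and Double
        System.out.println(total); // prints 2
    }
}
```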


How do parameterized methods resolve <T> if it's not an input parameter?

How are references to <T> handled by the compiler in the following code, since the method takes no parameters that would allow inference of T? Are any restrictions being placed on what type of object can be placed into the list? Is a cast even taking place on the line where I add the String to the list? My first thought is that without anything to infer T from, T becomes an Object type. Thanks in advance.
public class App {
    private <T> void parameterizedMethod()
    {
        List<T> list = new ArrayList<>();
        for (int i = 0; i < 10; i++)
        {
            list.add((T) new String()); //is a cast actually occurring here?
        }
    }

    public App()
    {
        parameterizedMethod();
    }

    public static void main(String[] args) {
        new App();
    }
}
This is initially determined by 18.1.3:
When inference begins, a bound set is typically generated from a list of type parameter declarations P1, ..., Pp and associated inference variables α1, ..., αp. Such a bound set is constructed as follows. For each l (1 ≤ l ≤ p):
If Pl has no TypeBound, the bound αl <: Object appears in the set.
Otherwise, for each type T delimited by & in the TypeBound, the bound αl <: T[P1:=α1, ..., Pp:=αp] appears in the set; [...].
At the end of inference, the bound set gets "resolved" to the inferred type. Without any additional context, the bound set will only consist of the initial bounds based on the declaration of the type parameter.
A bound with a form like αl <: Object means αl (an inference variable) is Object or a subtype of Object. This bound is resolved to Object.
So in your case, yes, Object is inferred.
If we declared a type bound:
private <T extends SomeType> void parameterizedMethod()
then SomeType will be inferred.
No cast actually happens in this case (erasure). That's why it's "unchecked". A cast only happens when the object is exposed due to e.g.:
<T> T parameterizedMethodWithAResult()
{
    return (T) new String();
}

// the cast happens out here
Integer i = parameterizedMethodWithAResult();
// parameterizedMethodWithAResult actually returns Object,
// and we are implicitly doing this:
Integer i = (Integer) parameterizedMethodWithAResult();
Are any restrictions being placed on what type of object can be placed into the list?
Semantically (compile-time), yes. And note that the restriction is determined outside the method. Inside the method, we don't know what that restriction actually is. So we should not be putting String in a List<T>. We don't know what T is.
Practically (run-time), no. It's just a List and there's no checked cast. parameterizedMethod won't cause an exception...but that only holds for this kind of isolated example. This kind of code may very well lead to issues.
Inside the method body, Java provides us no way to get any information about the substitution for T, so how can we do anything useful with T?
Sometimes, T is not really important to the method body; it's just more convenient for the caller
public static <T> List<T> emptyList() {...}
List<String> emptyStringList = emptyList();
But if T is important to method body, there must be an out-of-band protocol, not enforceable by the compiler, that both the caller and the callee must obey. For example
class Conf
<T> T get(String key)
//
<conf>
<param name="size" type="int" ...
//
String name = conf.get("name");
Integer size = conf.get("size");
The API uses <T> here just so that the caller doesn't need to do an explicit cast. It is the caller's responsibility to ensure that the correct T is supplied.
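A minimal sketch of such a Conf class (the implementation details are assumptions, not from any real API):

```java
import java.util.HashMap;
import java.util.Map;

class Conf {
    private final Map<String, Object> values = new HashMap<>();

    void put(String key, Object value) {
        values.put(key, value);
    }

    // The unchecked cast is erased; the real cast happens at the call site
    @SuppressWarnings("unchecked")
    <T> T get(String key) {
        return (T) values.get(key);
    }

    public static void main(String[] args) {
        Conf conf = new Conf();
        conf.put("name", "demo");
        conf.put("size", 42);
        String name = conf.get("name");   // ok: the caller supplied the correct T
        Integer size = conf.get("size");  // ok
        // String oops = conf.get("size"); // compiles, but throws ClassCastException here
        System.out.println(name + " " + size);
    }
}
```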
In your example, the callee assumes that T is a supertype of String; the caller must uphold that assumption. It would be nice if such constraint can be expressed to the compiler as
<T super String> void parameterizedMethod()
{
List<T> list
...
list.add( new String() ); // obviously correct; no cast is needed
}
//
this.<Integer>parameterizedMethod(); // compile error
unfortunately, java does not support <T super Foo> ... :) So you need to javadoc the constraint instead
/** T must be a supertype of String! **/
<T> void parameterizedMethod()
I have an actual API example just like that.
List<T> list = new ArrayList<>();
for(int i = 0; i < 10; i++)
{
list.add((T)new String()); //is a cast actually occurring here?
}
No, no cast is actually occurring there. If you did anything with list that forced it to be a List<T> -- such as returning it -- then that may cause ClassCastExceptions at the point where the compiler inserted the real cast.
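For example (a hypothetical sketch): returning the list moves the failure to wherever an element is read back through the wrong type:

```java
import java.util.ArrayList;
import java.util.List;

public class ReturnedListDemo {
    @SuppressWarnings("unchecked")
    static <T> List<T> tenEmptyStrings() {
        List<T> list = new ArrayList<>();
        for (int i = 0; i < 10; i++) {
            list.add((T) ""); // unchecked: no cast actually occurs here
        }
        return list;
    }

    public static void main(String[] args) {
        List<Integer> bad = tenEmptyStrings(); // still no exception
        try {
            Integer x = bad.get(0); // the compiler-inserted (Integer) cast fails here
            System.out.println("unreachable: " + x);
        } catch (ClassCastException e) {
            System.out.println("failed only when an element was read");
        }
    }
}
```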

Why can't a Generic Type Parameter have a lower bound in Java?

I gather that you cannot bind a Java generics type parameter to a lower bound (i.e. using the super keyword). I was reading what the Angelika Langer Generics FAQ had to say on the subject. They say it basically comes down to a lower bound being useless ("not making any sense").
I'm not convinced. I can imagine a use for them to help you be more flexible to callers of a library method that produces a typed result. Imagine a method that created an array list of a user-specified size and filled it with the empty string. A simple declaration would be
public static ArrayList<String> createArrayListFullOfEmptyStrings(int i);
But that's unnecessarily restrictive to your clients. Why can't they invoke your method like this:
//should compile
List<Object> l1 = createArrayListFullOfEmptyStrings(5);
List<CharSequence> l2 = createArrayListFullOfEmptyStrings(5);
List<String> l3 = createArrayListFullOfEmptyStrings(5);
//shouldn't compile
List<Integer> l4 = createArrayListFullOfEmptyStrings(5);
At this point I would be tempted to try the following definition:
public static <T super String> List<T> createArrayListFullOfEmptyStrings(int size) {
List<T> list = new ArrayList<T>(size);
for(int i = 0; i < size; i++) {
list.add("");
}
return list;
}
But it will not compile; the super keyword is illegal in this context.
Is my example above a bad example (ignoring what I say below)? Why isn't a lower bound useful here? And if it would be useful, what's the real reason that it is not permitted in Java?
P.S.
I know that a better organization might be something like this:
public static void populateListWithEmptyStrings(List<? super String> list, int size);
List<CharSequence> list = new ArrayList<CharSequence>();
populateListWithEmptyStrings(list, 5);
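For reference, a body for that method is straightforward (sketch, wrapped in a hypothetical class), since writing a String into a List<? super String> is always safe:

```java
import java.util.ArrayList;
import java.util.List;

public class PopulateDemo {
    public static void populateListWithEmptyStrings(List<? super String> list, int size) {
        for (int i = 0; i < size; i++) {
            list.add(""); // safe: any supertype-of-String list accepts a String
        }
    }

    public static void main(String[] args) {
        List<CharSequence> list = new ArrayList<>();
        populateListWithEmptyStrings(list, 5);
        System.out.println(list.size()); // 5
    }
}
```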
Can we for the purpose of this question pretend that due to a requirement, we need to do both operations in one method call?
Edit
#Tom G (justifiably) asks what benefit having a List<CharSequence> would have over a List<String>. For one, nobody said the returned list is immutable, so here's one advantage:
List<CharSequence> l2 = createArrayListFullOfEmptyStrings(5);
l2.add(new StringBuilder("foo").append("bar"));
Basically, it's not useful enough.
I think your example points out the only advantage of a lower bound, a feature the FAQ calls Restricted Instantiation:
The bottom line is: all that a " super " bound would buy you is the restriction that only supertypes of Number can be used as type arguments. ....
But as the other posts point out, the usefulness of even this feature can be limited.
Due to the nature of polymorphism and specialization, upper bounds are far more useful than lower bounds as described by the FAQ (Access To Non-Static Members and Type Erasure). I suspect the complexity introduced by lower bounds isn't worth their limited value.
OP: I want to add that I think you did show it is useful, just not useful enough. Come up with the irrefutable killer use cases and I'll back the JSR. :-)
The spec does talk about lower bounds of type parameters, for example:
JLS 4.10.2: "a type variable is a direct supertype of its lower bound."
JLS 5.1.10: "a fresh type variable ... whose lower bound"
It appears that a type variable only has a (non-null) lower bound if it's a synthetic one resulting from wildcard capture. What if the language allowed lower bounds on all type parameters? Probably it wouldn't cause a lot of trouble, and it's excluded only to keep generics simpler (well ...). Update: reportedly, the theoretical investigation of lower-bounded type parameters has not been conducted thoroughly.
Update: a paper claiming lower bounds are ok: "Java Type Inference Is Broken: Can We Fix It?" by Daniel Smith
RETRACT: the following argument is wrong. OP's example is legitimate.
Your particular example is not very convincing. First, it's not type safe. The returned list is indeed a List<String>; it's unsafe to view it as another type. Suppose your code compiles:
List<CharSequence> l2 = createArrayListFullOfEmptyStrings(5);
then we can add non-String to it, which is wrong
CharSequence chars = new StringBuilder();
l2.add(chars);
Granted, a List<String> is not a List<CharSequence>, but it is somewhat like one. Your need can be solved by using a wildcard:
public static List<String> createArrayListFullOfEmptyStrings(int size)
// a list of some specific subtype of CharSequence
List<? extends CharSequence> l2 = createArrayListFullOfEmptyStrings(5);
// legal. can retrieve elements as CharSequence
CharSequence chars = l2.get(0);
// illegal, won't compile. cannot insert elements as CharSequence
l2.add(new StringBuilder());
More than an answer, this is another (possibly killer?) use case.
I have a ModelDecorator helper. I want it to have the following public API
class ModelDecorator<T>{
public static <T> ModelDecorator<T> create(Class<T> clazz);
public <SUPER> T from(SUPER fromInstance);
}
So, given classes A, B extends A, it can be used like this:
A a = new A();
B b = ModelDecorator.create(B.class).from(a);
But I want to have bounds on T and SUPER, so that only subclasses can be instantiated using the API. At the moment, I can do:
C c = new C();
B b = ModelDecorator.create(B.class).from(c);
Where B does NOT inherit from C.
Obviously, if I could do:
public <SUPER super T> T from(SUPER fromInstance);
That would solve my problem.
What advantage does typing the List give you at that point? When you iterate over the returned collection, you should still be able to do the following:
for(String s : returnedList) {
CharSequence cs = s;
//do something with your CharSequence
}
Edit: I bring good news. There is a way to get most of what you want.
public static <R extends List<? super String>> R createListFullOfEmptyString(IntFunction<R> creator, int size)
{
R list = creator.apply(size);
for (int i = 0; i < size; i++)
{
list.add("");
}
return list;
}
// compiles
List<Object> l1 = createListFullOfEmptyString(ArrayList::new, 5);
List<CharSequence> l2 = createListFullOfEmptyString(ArrayList::new, 5);
List<String> l3 = createListFullOfEmptyString(ArrayList::new, 5);
// doesn't compile
List<Integer> l4 = createListFullOfEmptyString(ArrayList::new, 5);
The downside is that clients do need to provide either an instance of R to mutate, or some means of constructing an R. There is no other way to construct it safely.
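The other half of that trade-off, mutating a caller-supplied list instead of constructing one, can be sketched like this (class and method names hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

public class FillDemo {
    // R is inferred from the argument, so the caller keeps its precise list type
    static <R extends List<? super String>> R fillWithEmptyStrings(R list, int size) {
        for (int i = 0; i < size; i++) {
            list.add("");
        }
        return list;
    }

    public static void main(String[] args) {
        List<CharSequence> l = fillWithEmptyStrings(new ArrayList<CharSequence>(), 5);
        l.add(new StringBuilder("foo")); // still writable as CharSequence
        System.out.println(l.size()); // 6
    }
}
```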
I'll retain my original answer below for informational purposes.
In summary:
There is not a good reason, it just has not been done.
And until such time as it is, it will be impossible to write exact types with correct variance for methods that do all of:
A) Accept or create parametrized data structure
B) Write computed (not-passed-in) value(s) to that data structure
C) Return that data structure
Writing/accepting values is exactly the case where contravariance applies, which means the type parameter on the data structure must be lower-bounded by the type of the value being written to the data structure. The only way to express that in Java currently is using a lower-bounded wildcard on the data structure, e.g. List<? super T>.
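Concretely, for the write case (a sketch; names hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

public class ContravarianceDemo {
    // Writing a String only requires "some supertype of String" in the sink
    static void writeTo(List<? super String> sink) {
        sink.add("x");
    }

    public static void main(String[] args) {
        writeTo(new ArrayList<Object>());       // compiles
        writeTo(new ArrayList<CharSequence>()); // compiles
        writeTo(new ArrayList<String>());       // compiles
        // writeTo(new ArrayList<Integer>());   // does not compile
    }
}
```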
If we are designing an API such as the OP's, which might naturally (but not legally) be expressed as:
// T is the type of the value(s) being computed and written to the data structure
// Method creates the data structure
<S super T> Container<S> create()
// Method writes to the data structure
<S super T> Container<S> write(Container<S> container)
Then the options available to us are:
A) Use a lower-bounded wildcard, and force callers to cast the output:
// This one is actually useless - there is no type the caller can cast to that is both read- and write-safe.
Container<? super T> create()
// Caller must cast result to the same type they passed in.
Container<? super T> write(Container<? super T> container)
B) Overly restrict the type parameter on the data structure to match the type of the value being written, and force callers to cast the input and output:
// Caller must accept as-is; cannot write values of type S (S super T) into the result.
Container<T> create()
// Caller must cast Container<S> (S super T) to Container<T> before calling, then cast the result back to Container<S>.
Container<T> write(Container<T> container)
C) Use a new type parameter and do our own unsafe casting internally:
// Caller must ensure S is a supertype of T - we cast T to S internally!
<S> Container<S> create()
// Caller must ensure S is a supertype of T - we cast T to S internally!
<S> Container<S> write(Container<S> container)
Pick your poison.
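Applied to the OP's method, option (C) looks like this (a sketch; the caller must uphold S super String, and nothing checks it):

```java
import java.util.ArrayList;
import java.util.List;

public class OptionCDemo {
    /** Caller must ensure S is a supertype of String; not compiler-enforced! */
    @SuppressWarnings("unchecked")
    static <S> List<S> createArrayListFullOfEmptyStrings(int size) {
        List<S> list = new ArrayList<>(size);
        for (int i = 0; i < size; i++) {
            list.add((S) ""); // unsafe internal cast; erased at runtime
        }
        return list;
    }

    public static void main(String[] args) {
        List<CharSequence> ok = createArrayListFullOfEmptyStrings(5);  // fine
        List<Integer> bad = createArrayListFullOfEmptyStrings(5);      // also compiles!
        System.out.println(ok.size() + " " + bad.size());
    }
}
```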
Hmm, ok - let's work with this. You define a method:
public static <T super String> List<T> createArrayListFullOfEmptyStrings(int size) {
What does that mean? It means that if I call your method, then I get back a list of some supertype of String. Maybe it returns a list of String. Maybe it returns a list of Object. I don't know.
Cool.
List<Object> l1 = createArrayListFullOfEmptyStrings(5);
According to you, that should compile. But that's not right! I can put an Integer into a list of Object - l1.add(3) . But if you are returning a list of String, then doing that should be illegal.
List<String> l3 = createArrayListFullOfEmptyStrings(5);
According to you, that should compile. But that's not right! l3.get(1) should always return a String ... but that method might have returned a list of Object, meaning that l3.get(1) could conceivably be an Integer.
The only thing that works is
List<? super String> l5 = createArrayListFullOfEmptyStrings(5);
All I know is that I can safely call l5.add("foo"), and I can safely get Object o = l5.get(2) .
