Type parameter E in Enum is defined as <E extends Enum<E>>. So why in Enum implementation source code, we still need to check getClass() and getDeclaringClass() in compareTo method? I don't think compiler can pass when I set a different enum type object in compareTo.
It covers the case of comparing raw types and values obtained via unsafe or unchecked casts and conversions (for example, values held in variables typed as a raw Enum, Comparable, or Object):
public class Main {
    static enum Fruit { Apple, Orange, Banana }
    static enum Animal { Cat, Dog, Horse }

    public static void main(String[] args) {
        Enum f = Fruit.Apple;   // raw Enum, so the compiler cannot catch the mismatch
        Enum a = Animal.Cat;
        f.compareTo(a);         // compiles (with an unchecked warning), fails at runtime
    }
}
There, compareTo fails with a ClassCastException at the explicit getDeclaringClass comparison; the initial cast (Enum other = (Enum) o) succeeds without issue.
As for comparing getClass, it's tagged as an "optimization" in that source. The reason this is an optimization is that if the value classes are the same, then they're definitely from the same enum, and so there's no need to call the slightly more expensive getDeclaringClass. Since the vast majority of enums are likely simple enums (no value class bodies), it's optimized for that case.
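For reference, the JDK's Enum.compareTo looks roughly like this (a paraphrase from memory, not a verbatim quote of any particular JDK version):

public final int compareTo(E o) {
    Enum<?> other = (Enum<?>) o;   // only fails if o is not an enum constant at all
    Enum<E> self = this;
    if (self.getClass() != other.getClass() &&                 // optimization
        self.getDeclaringClass() != other.getDeclaringClass()) // the real type check
        throw new ClassCastException();
    return self.ordinal - other.ordinal;                       // ordinal is a field of Enum
}

Both halves of the condition matter: getClass() can differ for constants with class bodies (which compile to anonymous subclasses), while getDeclaringClass() always identifies the enum type itself.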
It can, if you use Enum as a raw type. For instance in this program:
public class Main {
    public static void main(String[] args) {
        Enum e = A.x;      // raw Enum: all occurrences of E are erased to Enum
        e.compareTo(B.i);  // B.i is also an Enum, so this compiles (with an unchecked warning)
    }

    enum A { x, y, z }
    enum B { i, j, k }
}
The following code compiles:
import java.util.List;

public enum Foo {
    A,
    B {};

    public static void main(String[] args) {
        Foo f = Foo.A;
        List s = (List) f;
    }
}
This one doesn't:
import java.util.List;

public enum Foo {
    A,
    B;

    public static void main(String[] args) {
        Foo f = Foo.A;
        List s = (List) f;  // compile error: Foo is implicitly final and does not implement List
    }
}
I could also replace Foo.A with Foo.B and get the same result.
What is going on here? How could Foo.A ever be a List in the first example?
For this type of casting, the spec on Narrowing Reference Conversion defines the rules. There is no special case for enums, only a distinction between final and non-final classes.
The basic enum falls into the "final class" category, but your extended enum doesn't, as it introduces a subclass via the {} syntax.
Of course, even with the extended enum, there's no way that one of your enum constants could ever implement List, but the current spec simply doesn't cover that situation.
A future revision of the spec might improve that, and then I'd expect compilers to implement the additional checks. But right now, that degree of compile-time safety isn't available.
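To see the subclass that the {} body introduces, here is a quick standalone variant of the question's first Foo:

public enum Foo {
    A,
    B {};

    public static void main(String[] args) {
        System.out.println(Foo.A.getClass());           // class Foo
        System.out.println(Foo.B.getClass());           // class Foo$1, a subclass of Foo
        System.out.println(Foo.B.getDeclaringClass());  // class Foo
    }
}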
I know that a method can be overridden with one that returns a subclass.
Can it also be overloaded by a method which returns a subclass?
public Object foo() {...}
public String foo(int a) {...}
Is the above code valid (if placed within a class)?
What about?
public Object foo() {...}
public String foo() {...}
Beginning with Java 5, covariant return types are allowed for overridden methods. This means that an overridden method in a subclass is allowed to use a signature which returns a type that may be a subclass of the parent signature's return type.
To make this concrete, say you have this interface:
public interface MyInterface {
    Number apply(double op1, double op2);
    :
    :
}
The following is legal because the return type is a subclass of Number:
public class MyClass implements MyInterface {
    :
    :
    @Override
    public Integer apply(double op1, double op2) {
        :
        :
        return Integer.valueOf(result);
    }
}
Because overloaded methods (which have the same name but different signatures) are effectively different methods, you are free to use different return types if you like. However, this is discouraged: methods that have the same name but different return types can be confusing to programmers and can complicate the use of your API. It's best not to overload methods when you can reasonably avoid it.
This example:
public Object foo() {...}
public String foo(int a) {...}
As long as the two methods take different parameter lists, there's no problem with returning different types (even if they are not related by subclassing).
The logic is very simple: if the compiler can choose without doubt which one to use, there's no issue. In this case, if you pass the method an int it's one overload, and with no arguments it's the other; there is no dilemma (and the shared name does not matter here).
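A minimal sketch of that first case (the class name OverloadDemo and the method bodies are illustrative, not from the question):

public class OverloadDemo {
    public Object foo() { return new Object(); }
    public String foo(int a) { return "got " + a; }

    public static void main(String[] args) {
        OverloadDemo d = new OverloadDemo();
        Object o = d.foo();    // no argument: the Object-returning overload is chosen
        String s = d.foo(42);  // int argument: the String-returning overload is chosen
        System.out.println(o.getClass().getSimpleName() + ", " + s);
    }
}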
as for:
public Object foo() {...}
public String foo() {...}
This one is not valid, since the compiler can't 'choose' which one to use (the return type is not part of a method's signature).
Yes, and it does not even need to be a subclass. foo() and foo(int) are two completely different and unrelated methods as far as the compiler is concerned; each can have any return type you want.
Yes, you overloaded foo with foo(int a) and gave it a new return type, String. The compiler sees this as valid code, but the other pair, Object foo() and String foo(), is totally invalid in Java.
I would like to write a generic method that takes a bounded parameter that extends Enum. For example, if I have an Enum as follows:
public enum InputFlags {
    ONE   (0000001),
    TWO   (0000002),
    THREE (00000004);

    public final int value;

    InputFlags(int value) {
        this.value = value;
    }
}
I can then do the following:
for (InputFlags ifg : InputFlags.values()) {
    // Do something with ifg
}
However, if I try to do the above in a generic method whose return type is a bounded type parameter, I cannot access the values() method:
public static <T extends Enum> T getFlags(int f) {
    T.values(); // NOT allowed, even though I have bounded by extending Enum.
}
It seems as though I cannot access values() in the generic method. Is this a peculiarity of Enums or is there a way round this?
values() is a very strange thing in Java. Look in the documentation for Enum: values() isn't even there! values() is not a method of Enum at all. Instead, a static method called values() is implicitly added to every class extending Enum, and the values() method of one enum is different from the values() method of another enum.
The fact that T extends Enum means that if t has type T, you can call Enum's instance methods on t. You can't call static methods of Enum (and even if you could, values() doesn't exist on Enum anyway!).
values() is only useful when you know the actual enum by name. It cannot be used when you only have a type parameter T.
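For example, using the InputFlags enum from the question:

InputFlags[] all = InputFlags.values();  // fine: the compiler generates values() on InputFlags itself
// Enum[] none = Enum.values();          // does not compile: Enum declares no values() method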
The way around this problem is to pass a Class object. Like this:
public static <T extends Enum<T>> T getFlags(Class<T> clazz, int f) {
    T[] array = clazz.getEnumConstants();  // This is how you can get an array.
    Set<T> set = EnumSet.allOf(clazz);     // This is how you can get a Set (needs the java.util imports).
    return array[f];                       // placeholder return: the real lookup depends on what f means
}
values() is a static method inserted by the compiler in the InputFlags class. Thus, it is not possible to use T.values(), especially since T is a generic type. However, if you can get the Class object of T (usually via getClass(), or by passing it to getFlags(int f, Class<T> clazz)), you can use Class#getEnumConstants() on that object.
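A self-contained sketch of that approach (the FlagsDemo/flagAt names and the lookup-by-ordinal logic are illustrative assumptions, not part of the original code):

import java.util.EnumSet;

public class FlagsDemo {
    enum InputFlags { ONE, TWO, THREE }

    // Generic helper: obtains the constants via Class#getEnumConstants()
    // instead of the inaccessible T.values().
    static <T extends Enum<T>> T flagAt(Class<T> clazz, int ordinal) {
        T[] constants = clazz.getEnumConstants();
        return constants[ordinal];
    }

    public static void main(String[] args) {
        System.out.println(flagAt(InputFlags.class, 1));      // TWO
        System.out.println(EnumSet.allOf(InputFlags.class));  // [ONE, TWO, THREE]
    }
}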
This is a peculiarity of static methods. There is no way in Java, with generics or otherwise, to define an interface that applies to static methods, i.e. to require that a class implement a static method.
Also, type erasure (among other things) prevents you from using a type variable T as a type name in a static method invocation expression.
According to this entry in the Java Generics FAQ, there are some circumstances where a generic method has no equivalent non-generic method that uses wildcard types. According to that answer,
If a method signature uses multi-level wildcard types then there is always a difference between the generic method signature and the wildcard version of it.
They give the example of a method <T> void print1( List <Box<T>> list), which "requires a list of boxes of the same type." The wildcard version, void print2( List <Box<?>> list), "accepts a heterogenous list of boxes of different types," and thus is not equivalent.
How do you interpret the differences between the following two method signatures:
<T extends Iterable<?>> void f(Class<T> x) {}
void g(Class<? extends Iterable<?>> x) {}
Intuitively, it seems like these definitions should be equivalent. However, the call f(ArrayList.class) compiles using the first method, but the call g(ArrayList.class) using the second method results in a compile-time error:
g(java.lang.Class<? extends java.lang.Iterable<?>>) in Test
cannot be applied to (java.lang.Class<java.util.ArrayList>)
Interestingly, both methods can be called with each other's arguments, because the following compiles:
class Test {
    <T extends Iterable<?>> void f(Class<T> x) {
        g(x);
    }

    void g(Class<? extends Iterable<?>> x) {
        f(x);
    }
}
Using javap -verbose Test, I can see that f() has the generic signature
<T::Ljava/lang/Iterable<*>;>(Ljava/lang/Class<TT;>;)V;
and g() has the generic signature
(Ljava/lang/Class<+Ljava/lang/Iterable<*>;>;)V;
What explains this behavior? How should I interpret the differences between these methods' signatures?
Well, going by the spec, neither invocation is legal. But why does the first one type check while the second does not?
The difference is in how the methods are checked for applicability (see §15.12.2 and §15.12.2.2 in particular).
For simple, non-generic g to be applicable, the argument Class<ArrayList> would need to be a subtype of Class<? extends Iterable<?>>. That means ? extends Iterable<?> needs to contain ArrayList, written ArrayList <= ? extends Iterable<?>. Rules 4 and 1 can be applied transitively, so that ArrayList needs to be a subtype of Iterable<?>.
Going by §4.10.2 any parameterization C<...> is a (direct) subtype of the raw type C. So ArrayList<?> is a subtype of ArrayList, but not the other way around. Transitively, ArrayList is not a subtype of Iterable<?>.
Thus g is not applicable.
f is generic; for simplicity, let us assume the type argument ArrayList is explicitly specified. To test f for applicability, Class<ArrayList> needs to be a subtype of Class<T> [T=ArrayList] = Class<ArrayList>. Since subtyping is reflexive, that is true.
Also for f to be applicable, the type argument needs to be within its bounds. It is not because, as we've shown above, ArrayList is not a subtype of Iterable<?>.
So why does it compile anyways?
It's a bug. Following a bug report and subsequent fix the JDT compiler explicitly rules out the first case (type argument containment). The second case is still happily ignored, because the JDT considers ArrayList to be a subtype of Iterable<?> (TypeBinding.isCompatibleWith(TypeBinding)).
I don't know why javac behaves the same, but I assume for similar reasons. You will notice that javac does not issue an unchecked warning when assigning a raw ArrayList to an Iterable<?> either.
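For instance, the following assignment is accepted without an unchecked warning, mirroring the leniency described above (the class name RawAssignmentDemo is illustrative):

import java.util.ArrayList;

class RawAssignmentDemo {
    void demo() {
        ArrayList raw = new ArrayList();  // raw type
        Iterable<?> it = raw;             // accepted; javac issues no unchecked warning here
    }
}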
If the type parameter were a wildcard-parameterized type, then the problem does not occur:
Class<ArrayList<?>> foo = null;
f(foo);
g(foo);
I think this is almost certainly a weird case arising out of the fact that the type of the class literal is Class<ArrayList>, and so the type parameter in this case (ArrayList) is a raw type, and the subtyping relationship between raw ArrayList and wildcard-parameterized ArrayList<?> is complicated.
I haven't read the language specification closely, so I'm not exactly sure why the subtyping works in the explicit type parameter case but not in the wildcard case. It could also very well be a bug.
Guess: The thing representing the first ? (ArrayList) does not 'implement' ArrayList<E> (by virtue of the double nested wildcard). I know this sounds funny but....
Consider (for the original listing):
void g(Class<? extends Iterable<Object>> x) {} // Fail
void g(Class<? extends Iterable<?>> x) {}      // Fail
void g(Class<? extends Iterable> x) {}         // OK
And
// Compiles
import java.util.ArrayList;

public class Test {
    <T extends Iterable<?>> void f(ArrayList<T> x) {}

    void g(ArrayList<? extends Iterable<?>> x) {}

    void d() {
        ArrayList<ArrayList<Integer>> d = new ArrayList<ArrayList<Integer>>();
        f(d);
        g(d);
    }
}
This
// Does not compile on g(d)
import java.util.ArrayList;

public class Test {
    <T extends Iterable<?>> void f(ArrayList<T> x) {}

    void g(ArrayList<? extends Iterable<?>> x) {}

    void d() {
        ArrayList<ArrayList> d = new ArrayList<ArrayList>();
        f(d);
        g(d);  // error: ArrayList<ArrayList> does not match ArrayList<? extends Iterable<?>>
    }
}
These are not quite the same:
<T extends Iterable<?>> void f(Class<T> x) {}
void g(Class<? extends Iterable<?>> x) {}
The difference is that g accepts a "Class of unknown that implements Iterable of unknown", but ArrayList<T> is constrained to implement Iterable<T>, not Iterable<?>, so it doesn't match.
To make it clearer, g will accept a Foo implements Iterable<?>, but not an ArrayList<T> implements Iterable<T>.
Suppose I have the following class (for demonstration purposes)
package flourish.lang.data;
public class Toyset implements Comparable<Toyset> {

    private Comparable<?>[] trains;

    @Override
    public int compareTo(Toyset o) {
        for (int i = 0; i < trains.length; i++) {
            if (trains[i].compareTo(o.trains[i]) < 0)
                return -1;
        }
        return 1;
    }
}
The compiler tells me
"The method compareTo(capture#1-of ?) in the type Comparable<capture#1-of ?> is not applicable for the arguments (Comparable<capture#2-of ?>)"
How can I deal with the fact that I want to put different Comparables into trains?
Sure I could remove the parameters and go with raw types, but that seems like a little bit of a cop-out.
EDIT:
Perhaps the example I've given is a little obtuse. What I'm trying to understand is whether generics should always be used with Comparables, e.g. if the class of the object I want to compare is not known until runtime:
public class ComparisonTool {

    public static int compareSomeObjects(final Class<? extends Comparable> clazz,
                                         final Object o1, final Object o2) {
        return clazz.cast(o1).compareTo(clazz.cast(o2));
    }

    public static void main(String[] args) {
        System.out.println(compareSomeObjects(Integer.class, new Integer(22), new Integer(33)));
    }
}
If I replace Comparable with Comparable<?>, then the compiler complains (as above) because the two cast operations are not guaranteed to produce the same class (capture#1 of ? vs capture#2 of ?). On the other hand, I can't replace them with Comparable<Object> either, because then the call in main() doesn't match the method signature (i.e. Integer implements Comparable<Integer>, not Comparable<Object>). Using the raw type certainly 'works', but is this the right approach?
The problem is that one instance might hold a Comparable<TrainA> and the other a Comparable<TrainB>, and the compareTo method of Comparable<TrainA> will not accept an instance of TrainB. This is what you have set up with the wildcard.
Your better bet is to put a common supertype in the Comparable, i.e. Comparable<Toy> or Comparable<Object>.
By declaring your trains field to be of type Comparable<?>[], you're asserting that it's an array of some specific type, and that you don't happen to know which type it is. Two different instances of Toyset will each have trains fields that hold sequences of some specific type, but each has a different specific type in mind. The compiler is warning you that nothing in the code guarantees that the element types of the arrays pointed to by the various trains fields of Toyset instances have any subtype or supertype relationship.
In this case, falling back to a raw type is honest; you don't have anything meaningful to say about the type of objects being compared. You could instead try using Comparable<Object>, which allows rather weak use of a type parameter.
The design strikes me as odd. I'm assuming it's elided from something much larger. Toy sets can be compared, which in turn depends on a lexicographic comparison of the trains contained in each toy set. That's fine, but why is there no upper bound type that all trains have in common?
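A hedged sketch of that last suggestion, introducing a hypothetical Train upper bound (the interface name and the tightened comparison loop are illustrative, not from the question):

public class Toyset implements Comparable<Toyset> {

    // Hypothetical common supertype for everything stored in trains.
    interface Train extends Comparable<Train> {}

    private Train[] trains;

    @Override
    public int compareTo(Toyset o) {
        // Lexicographic comparison; with a shared upper bound the element
        // comparisons type-check without raw types or wildcards.
        for (int i = 0; i < trains.length; i++) {
            int c = trains[i].compareTo(o.trains[i]);
            if (c != 0) {
                return c;
            }
        }
        return 0;
    }
}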