What happens when working with variable-length arguments in Java? [duplicate]

There seems to be a bug in the Java varargs implementation. Java can't distinguish the appropriate type when a method is overloaded with different types of vararg parameters.
It gives me an error The method ... is ambiguous for the type ...
Consider the following code:
public class Test
{
    public static void main(String[] args) throws Throwable
    {
        doit(new int[]{1, 2});        // <- no problem
        doit(new double[]{1.2, 2.2}); // <- no problem
        doit(1.2f, 2.2f);             // <- no problem
        doit(1.2d, 2.2d);             // <- no problem
        doit(1, 2);                   // <- The method doit(double[]) is ambiguous for the type Test
    }

    public static void doit(double... ds)
    {
        System.out.println("doubles");
    }

    public static void doit(int... is)
    {
        System.out.println("ints");
    }
}
The docs say: "Generally speaking, you should not overload a varargs method, or it will be difficult for programmers to figure out which overloading gets called."
However, they don't mention this error, and it's not the programmers who are finding it difficult, it's the compiler.
Thoughts?
EDIT - Compiler: Sun JDK 1.6.0 u18

The problem is that it is ambiguous.
doIt(1, 2);
could be a call to doIt(int ...), or doIt(double ...). In the latter case, the integer literals will be promoted to double values.
I'm pretty sure that the Java spec says that this is an ambiguous construct, and the compiler is just following the rules laid down by the spec. (I'd have to research this further to be sure.)
EDIT - the relevant part of the JLS is "15.12.2.5 Choosing the Most Specific Method", but it is making my head hurt.
I think that the reasoning would be that void doIt(int[]) is not more specific (or vice versa) than void doIt(double[]) because int[] is not a subtype of double[] (and vice versa). Since the two overloads are equally specific, the call is ambiguous.
By contrast, void doItAgain(int) is more specific than void doItAgain(double) because int is a subtype of double according to the JLS. Hence, a call to doItAgain(42) is not ambiguous.
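The contrast is easy to check with a small sketch (doItAgain is just the hypothetical name used above):
public class DoItAgain {
    static void doItAgain(double d) { System.out.println("double"); }
    static void doItAgain(int i) { System.out.println("int"); }

    public static void main(String[] args) {
        // int is a subtype of double for the purposes of the specificity
        // rules, so doItAgain(int) is strictly more specific and is
        // chosen without ambiguity:
        doItAgain(42); // prints "int"
    }
}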
EDIT 2 - @finnw is right, it is a bug. Consider this part of 15.12.2.5 (edited to remove non-applicable cases):
One variable arity member method named m is more specific than another variable arity member method of the same name if:
One member method has n parameters and the other has k parameters, where n ≥ k. The types of the parameters of the first member method are T1, ..., Tn-1, Tn[], and the types of the parameters of the other method are U1, ..., Uk-1, Uk[]. Let Si = Ui for 1 ≤ i ≤ k. Then:
for all j from 1 to k-1, Tj <: Sj, and
for all j from k to n, Tj <: Sk
Apply this to the case where n = k = 1, and we see that doIt(int[]) is more specific than doIt(double[]).
In fact, there is a bug report for this and Sun acknowledges that it is indeed a bug, though they have prioritized it as "very low". The bug is now marked as Fixed in Java 7 (b123).

There is a discussion about this over at the Sun Forums.
No real resolution there, just resignation.
Varargs (and auto-boxing, which also leads to hard-to-follow behaviour, especially in combination with varargs) were bolted on later in Java's life, and this is one area where it shows. So it is more a bug in the spec than in the compiler.
At least, it makes for good(?) SCJP trick questions.

Interesting. Fortunately, there are a couple different ways to avoid this problem:
You can use the wrapper types instead in the method signatures:
public static void doit(Double... ds) {
    for (Double currD : ds) {
        System.out.println(currD);
    }
}

public static void doit(Integer... is) {
    for (Integer currI : is) {
        System.out.println(currI);
    }
}
Or, you can use generics:
public static <T> void doit(T... ts) {
    for (T currT : ts) {
        System.out.println(currT);
    }
}
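A quick sanity check of the wrapper-type workaround (a minimal sketch; the generic version resolves the same calls, with T inferred as Integer or Double):
public class Workaround {
    public static void doit(Double... ds) { System.out.println("Doubles"); }
    public static void doit(Integer... is) { System.out.println("Integers"); }

    public static void main(String[] args) {
        // int literals box to Integer, and int -> Double is not a valid
        // method invocation conversion, so only doit(Integer...) applies:
        doit(1, 2);     // prints "Integers"
        // double literals box to Double:
        doit(1.2, 2.2); // prints "Doubles"
    }
}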

Related

Why is a type parameter stronger than a method parameter

Why is
public <R, F extends Function<T, R>> Builder<T> withX(F getter, R returnValue) {...}
stricter than
public <R> Builder<T> with(Function<T, R> getter, R returnValue) {...}
This is a follow-up to Why is lambda return type not checked at compile time.
I found that using the method withX() like
.withX(MyInterface::getLength, "I am not a Long")
produces the desired compile-time error:
The type of getLength() from the type BuilderExample.MyInterface is long, this is incompatible with the descriptor's return type: String
while using the method with() does not.
full example:
import java.util.function.Function;

public class SO58376589 {

    public static class Builder<T> {

        public <R, F extends Function<T, R>> Builder<T> withX(F getter, R returnValue) {
            return this;
        }

        public <R> Builder<T> with(Function<T, R> getter, R returnValue) {
            return this;
        }
    }

    static interface MyInterface {
        public Long getLength();
    }

    public static void main(String[] args) {
        Builder<MyInterface> b = new Builder<MyInterface>();
        Function<MyInterface, Long> getter = MyInterface::getLength;

        b.with(getter, 2L);
        b.with(MyInterface::getLength, 2L);
        b.withX(getter, 2L);
        b.withX(MyInterface::getLength, 2L);

        b.with(getter, "No NUMBER");                  // error
        b.with(MyInterface::getLength, "No NUMBER");  // NO ERROR !!
        b.withX(getter, "No NUMBER");                 // error
        b.withX(MyInterface::getLength, "No NUMBER"); // error !!!
    }
}
javac SO58376589.java
SO58376589.java:32: error: method with in class Builder<T> cannot be applied to given types;
b.with(getter, "No NUMBER"); // error
^
required: Function<MyInterface,R>,R
found: Function<MyInterface,Long>,String
reason: inference variable R has incompatible bounds
equality constraints: Long
lower bounds: String
where R,T are type-variables:
R extends Object declared in method <R>with(Function<T,R>,R)
T extends Object declared in class Builder
SO58376589.java:34: error: method withX in class Builder<T> cannot be applied to given types;
b.withX(getter, "No NUMBER"); // error
^
required: F,R
found: Function<MyInterface,Long>,String
reason: inference variable R has incompatible bounds
equality constraints: Long
lower bounds: String
where F,R,T are type-variables:
F extends Function<MyInterface,R> declared in method <R,F>withX(F,R)
R extends Object declared in method <R,F>withX(F,R)
T extends Object declared in class Builder
SO58376589.java:35: error: incompatible types: cannot infer type-variable(s) R,F
b.withX(MyInterface::getLength, "No NUMBER"); // error
^
(argument mismatch; bad return type in method reference
Long cannot be converted to String)
where R,F,T are type-variables:
R extends Object declared in method <R,F>withX(F,R)
F extends Function<T,R> declared in method <R,F>withX(F,R)
T extends Object declared in class Builder
3 errors
Extended Example
The following example boils the different behaviour of a method parameter versus a type parameter down to a Supplier. In addition, it shows how the behaviour differs for a Consumer when a type parameter is used, and that it makes no difference whether it is a Consumer or a Supplier when a method parameter is used.
import java.util.function.Consumer;
import java.util.function.Supplier;

interface TypeInference {

    Number getNumber();
    void setNumber(Number n);

    @FunctionalInterface
    interface Method<R> {
        TypeInference be(R r);
    }

    // Supplier:
    <R> R letBe(Supplier<R> supplier, R value);
    <R, F extends Supplier<R>> R letBeX(F supplier, R value);
    <R> Method<R> let(Supplier<R> supplier); // return (x) -> this;

    // Consumer:
    <R> R lettBe(Consumer<R> consumer, R value);
    <R, F extends Consumer<R>> R lettBeX(F consumer, R value);
    <R> Method<R> lett(Consumer<R> consumer);

    public static void main(TypeInference t) {
        t.letBe(t::getNumber, (Number) 2);   // Compiles :-)
        t.lettBe(t::setNumber, (Number) 2);  // Compiles :-)
        t.letBe(t::getNumber, 2);            // Compiles :-)
        t.lettBe(t::setNumber, 2);           // Compiles :-)
        t.letBe(t::getNumber, "NaN");        // !!!! Compiles :-(
        t.lettBe(t::setNumber, "NaN");       // Does not compile :-)

        t.letBeX(t::getNumber, (Number) 2);  // Compiles :-)
        t.lettBeX(t::setNumber, (Number) 2); // Compiles :-)
        t.letBeX(t::getNumber, 2);           // !!! Does not compile :-(
        t.lettBeX(t::setNumber, 2);          // Compiles :-)
        t.letBeX(t::getNumber, "NaN");       // Does not compile :-)
        t.lettBeX(t::setNumber, "NaN");      // Does not compile :-)

        t.let(t::getNumber).be(2);           // Compiles :-)
        t.lett(t::setNumber).be(2);          // Compiles :-)
        t.let(t::getNumber).be("NaN");       // Does not compile :-)
        t.lett(t::setNumber).be("NaN");      // Does not compile :-)
    }
}
This is a really interesting question. The answer, I'm afraid, is complicated.
tl;dr
Working out the difference involves some quite in-depth reading of Java's type inference specification, but basically boils down to this:
All other things equal, the compiler infers the most specific type it can.
However, if it can find a substitution for a type parameter that satisfies all the requirements, then compilation will succeed, however vague the substitution turns out to be.
For with there is an (admittedly vague) substitution that satisfies all the requirements on R: Serializable
For withX, the introduction of the additional type parameter F forces the compiler to resolve R first, without considering the constraint F extends Function<T,R>. R resolves to the (much more specific) String which then means that inference of F fails.
This last bullet point is the most important, but also the most hand-wavy. I can't think of a better concise way of phrasing it, so if you want more details, I suggest you read the full explanation below.
Is this intended behaviour?
I'm gonna go out on a limb here, and say no.
I'm not suggesting there's a bug in the spec, more that (in the case of withX) the language designers have put their hands up and said "there are some situations where type inference gets too hard, so we'll just fail". Even though the compiler's behaviour with respect to withX seems to be what you want, I would consider that to be an incidental side-effect of the current spec, rather than a positively intended design decision.
This matters, because it informs the question Should I rely on this behaviour in my application design? I would argue that you shouldn't, because you can't guarantee that future versions of the language will continue to behave this way.
While it's true that language designers try very hard not to break existing applications when they update their spec/design/compiler, the problem is that the behaviour you want to rely on is one where the compiler currently fails (i.e. not an existing application). Language updates turn non-compiling code into compiling code all the time. For example, the following code could be guaranteed not to compile in Java 7, but would compile in Java 8:
static Runnable x = () -> System.out.println();
Your use-case is no different.
Another reason I'd be cautious about using your withX method is the F parameter itself. Generally, a generic type parameter on a method (that doesn't appear in the return type) exists to bind the types of multiple parts of the signature together. It's saying:
I don't care what T is, but want to be sure that wherever I use T it's the same type.
Logically, then, we would expect each type parameter to appear at least twice in a method signature, otherwise "it's not doing anything". F in your withX only appears once in the signature, which suggests to me a use of a type parameter not in line with the intent of this feature of the language.
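For contrast, here is a minimal sketch (hypothetical names) of the usual, intended use of a method type parameter, where it appears twice and actually ties two arguments together:
import java.util.ArrayList;
import java.util.List;

public class BindDemo {
    // T appears twice: it forces the element to match the list.
    static <T> void addTo(List<T> list, T element) {
        list.add(element);
    }

    public static void main(String[] args) {
        List<String> names = new ArrayList<>();
        addTo(names, "ok");  // T = String
        // addTo(names, 42); // would not compile: List<String> fixes T = String
    }
}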
An alternative implementation
One way to implement this in a slightly more "intended behaviour" way would be to split your with method up into a chain of 2:
public class Builder<T> {

    public final class With<R> {
        private final Function<T, R> method;

        private With(Function<T, R> method) {
            this.method = method;
        }

        public Builder<T> of(R value) {
            // TODO: Body of your old 'with' method goes here
            return Builder.this;
        }
    }

    public <R> With<R> with(Function<T, R> method) {
        return new With<>(method);
    }
}
This can then be used as follows:
b.with(MyInterface::getLength).of(1L); // Compiles
b.with(MyInterface::getLength).of("Not a long"); // Compiler error
This doesn't include an extraneous type parameter like your withX does. By breaking down the method into two signatures, it also better expresses the intent of what you're trying to do, from a type-safety point of view:
The first method sets up a class (With) that defines the type based on the method reference.
The second method (of) constrains the type of the value to be compatible with what you previously set up.
The only way a future version of the language would be able to compile this is if they implemented full duck-typing, which seems unlikely.
One final note to make this whole thing irrelevant: I think Mockito (and in particular its stubbing functionality) might basically already do what you're trying to achieve with your "type safe generic builder". Maybe you could just use that instead?
The full(ish) explanation
I'm going to work through the type inference procedure for both with and withX. This is quite long, so take it slowly. Despite being long, I've still left quite a lot of details out. You may wish to refer to the spec for more details (follow the links) to convince yourself that I'm right (I may well have made a mistake).
Also, to simplify things a little, I'm going to use a more minimal code sample. The main difference is that it swaps out Function for Supplier, so there are fewer types and parameters in play. Here's a full snippet that reproduces the behaviour you described:
import java.util.function.Supplier;

public class TypeInference {

    static long getLong() { return 1L; }

    static <R> void with(Supplier<R> supplier, R value) {}

    static <R, F extends Supplier<R>> void withX(F supplier, R value) {}

    public static void main(String[] args) {
        with(TypeInference::getLong, "Not a long");       // Compiles
        withX(TypeInference::getLong, "Also not a long"); // Does not compile
    }
}
Let's work through the type applicability inference and type inference procedure for each method invocation in turn:
with
We have:
with(TypeInference::getLong, "Not a long");
The initial bound set, B0, is:
R <: Object
All parameter expressions are pertinent to applicability.
Hence, the initial constraint set for applicability inference, C, is:
TypeInference::getLong is compatible with Supplier<R>
"Not a long" is compatible with R
This reduces to bound set B2 of:
R <: Object (from B0)
Long <: R (from the first constraint)
String <: R (from the second constraint)
Since this does not contain the bound 'false', and (I assume) resolution of R succeeds (giving Serializable), then the invocation is applicable.
So, we move on to invocation type inference.
The new constraint set, C, with associated input and output variables, is:
TypeInference::getLong is compatible with Supplier<R>
Input variables: none
Output variables: R
This contains no interdependencies between input and output variables, so can be reduced in a single step, and the final bound set, B4, is the same as B2. Hence, resolution succeeds as before, and the compiler breathes a sigh of relief!
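To see that lub-based resolution concretely, consider this hypothetical helper (not from the original code), whose inferred R behaves just like the R in with:
import java.io.Serializable;

public class LubDemo {
    static <R> R choose(R a, R b) { return a; }

    public static void main(String[] args) {
        // R is inferred as the least upper bound of Long and String,
        // which includes Serializable, so this compiles without a cast:
        Serializable s = choose(1L, "Not a long");
        System.out.println(s); // prints 1
    }
}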
withX
We have:
withX(TypeInference::getLong, "Also not a long");
The initial bound set, B0, is:
R <: Object
F <: Supplier<R>
Only the second parameter expression is pertinent to applicability. The first one (TypeInference::getLong) is not, because it meets the following condition:
If m is a generic method and the method invocation does not provide explicit type arguments, an explicitly typed lambda expression or an exact method reference expression for which the corresponding target type (as derived from the signature of m) is a type parameter of m.
Hence, the initial constraint set for applicability inference, C, is:
"Also not a long" is compatible with R
This reduces to bound set B2 of:
R <: Object (from B0)
F <: Supplier<R> (from B0)
String <: R (from the constraint)
Again, since this does not contain the bound 'false', and resolution of R succeeds (giving String), then the invocation is applicable.
Invocation type inference once more...
This time, the new constraint set, C, with associated input and output variables, is:
TypeInference::getLong is compatible with F
Input variables: F
Output variables: none
Again, we have no interdependencies between input and output variables. However this time, there is an input variable (F), so we must resolve this before attempting reduction. So, we start with our bound set B2.
We determine a subset V as follows:
Given a set of inference variables to resolve, let V be the union of this set and all variables upon which the resolution of at least one variable in this set depends.
By the second bound in B2, the resolution of F depends on R, so V := {F, R}.
We pick a subset of V according to the rule:
let { α1, ..., αn } be a non-empty subset of uninstantiated variables in V such that i) for all i (1 ≤ i ≤ n), if αi depends on the resolution of a variable β, then either β has an instantiation or there is some j such that β = αj; and ii) there exists no non-empty proper subset of { α1, ..., αn } with this property.
The only subset of V that satisfies this property is {R}.
Using the third bound (String <: R) we instantiate R = String and incorporate this into our bound set. R is now resolved, and the second bound effectively becomes F <: Supplier<String>.
Using the (revised) second bound, we instantiate F = Supplier<String>. F is now resolved.
Now that F is resolved, we can proceed with reduction, using the new constraint:
TypeInference::getLong is compatible with Supplier<String>
...reduces to Long is compatible with String
...which reduces to false
... and we get a compiler error!
Additional notes on the 'Extended Example'
The Extended Example in the question looks at a few interesting cases that aren't directly covered by the workings above:
Where the value type is a subtype of the method return type (Integer <: Number)
Where the functional interface is contravariant in the inferred type (i.e. Consumer rather than Supplier)
In particular, 3 of the given invocations stand out as potentially suggesting 'different' compiler behaviour to that described in the explanations:
t.lettBe(t::setNumber, "NaN"); // Does not compile :-)
t.letBeX(t::getNumber, 2); // !!! Does not compile :-(
t.lettBeX(t::setNumber, 2); // Compiles :-)
The second of these 3 will go through exactly the same inference process as withX above (just replace Long with Number and String with Integer). This illustrates yet another reason why you shouldn't rely on this failed type inference behaviour for your class design, as the failure to compile here is likely not a desirable behaviour.
For the other 2 (and indeed any of the other invocations involving a Consumer you wish to work through), the behaviour should be apparent if you work through the type inference procedure laid out for one of the methods above (i.e. with for the first, withX for the third). There's just one small change you need to take note of:
The constraint on the first parameter (t::setNumber is compatible with Consumer<R>) will reduce to R <: Number instead of Number <: R as it does for Supplier<R>. This is described in the linked documentation on reduction.
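To see that variance difference in isolation, here is a hypothetical pair of helpers, mirroring letBe/lettBe from the Extended Example:
import java.util.function.Consumer;
import java.util.function.Supplier;

public class VarianceDemo {
    static Number getNumber() { return 0; }
    static void setNumber(Number n) { }

    static <R> void fromSupplier(Supplier<R> s, R value) { }
    static <R> void fromConsumer(Consumer<R> c, R value) { }

    public static void main(String[] args) {
        // Supplier<R> yields the constraint Number <: R, so R can widen
        // to lub(Number, String) and this (surprisingly) compiles:
        fromSupplier(VarianceDemo::getNumber, "NaN");

        // Consumer<R> yields R <: Number instead; together with
        // String <: R there is no solution, so uncommenting this line
        // produces a compile error:
        // fromConsumer(VarianceDemo::setNumber, "NaN");
    }
}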
I leave it as an exercise for the reader to carefully work through one of the above procedures, armed with this piece of additional knowledge, to demonstrate to themselves exactly why a particular invocation does or doesn't compile.

What is the basic concept of overloading a method

When we overload a method, why can't we just write a new method that does the same thing as the overloaded one? Either way we have to write the same number of lines of code. For example, in my code below, why can't I write a new method b() that multiplies two numbers?
public class que {

    public void a(int a) {
        System.out.println(a);
    }

    public void a(int b, int c) {
        System.out.println(b * c);
    }

    public static void main(String[] args) {
        que queObject = new que();
        queObject.a(5);
        queObject.a(3, 4);
    }
}
You can make all your methods have different names. The point is you don't have to. This reduces the number of names a developer using the API needs to learn.
e.g. in the PrintWriter class you have lots of methods called print and println which conceptually all do the same thing. They could have been given different names, but then you would need to know which method you wanted to call.
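For example, a quick sketch of that idea with PrintWriter:
import java.io.PrintWriter;

public class PrintDemo {
    public static void main(String[] args) {
        PrintWriter out = new PrintWriter(System.out, true);
        // One conceptual operation, one name; the compiler picks the
        // overload from the static type of the argument:
        out.println(42);      // println(int)
        out.println(3.14);    // println(double)
        out.println("hello"); // println(String)
        out.println(true);    // println(boolean)
    }
}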
At runtime, each method signature is unique, as it includes the return type and the erased (non-generic) argument types; i.e. in bytecode the names are made unique for you.
In Java, a method cannot be distinguished/overloaded by its return type, though in Java 6 there was a bug which allowed overloading of methods with different return types.
Nobody says you can't do this. It just isn't method overloading. That's defined as two or more methods of the same name, but with different (parameter) signatures.
A method name signifies what a method does, so if you have two methods that have the same name but expect different arguments, keeping the same name is beneficial for understanding the code and for better design.
So if you add another method
public void a(int b, int c, int d) {
    System.out.println(b * c * d);
}
you are basically keeping the same behaviour, i.e. multiplication, but with more arguments. So overloading is better for understanding and good coding principles.
Consider This
public void eat(Orange o) {
    // eat Orange
}

public void eat(Mango m) {
    // eat Mango
}
You want different implementation on the basis of what you pass as a parameter but want to keep method name same.
For More Info --> Polymorphism vs Overriding vs Overloading
Here's my explanation:
public class Calculator {

    // used for integer numbers
    public int sum(int a, int b) {
        return a + b;
    }

    // used for double numbers
    public double sum(double a, double b) {
        return a + b;
    }
}
In this case you don't care whether you use the sum method with ints or doubles - the sum method is overloaded, so it will take both. It's much easier than using sumInts() and sumDoubles() separately.
Method overloading is when, in a class, there is more than one method with the same name but different arguments. The overloads may also have different return types, but a different return type by itself is not a distinguishing feature; changing only the return type results in a compile error.
For a complete discussion, see: http://beginnersbook.com/2013/03/polymorphism-in-java/
Why do we need it?
An example will be enlightening: let's take the ubiquitous StringBuilder class and its append methods. They are a prime example of overloading. A specific instance of method overloading is constructor overloading as well. See the StringBuilder again: http://docs.oracle.com/javase/7/docs/api/java/lang/StringBuilder.html.
As another example suitable to your case: we could have an appendBoolean(boolean b), an appendString(String b) and an appendChar(char c) in the StringBuilder class as well. It is a matter of clarity and choice to either have that or just have a set of overloaded append methods. To me - since the operation is to append, but we are appending different instance types - having an overloaded append makes sense and provides clarity and concision. On the other hand, you have no such choice for overloaded constructors: they need to have the same name as the class - that is by convention and by design :-)
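For instance, a short sketch of the overloaded append methods in action:
public class AppendDemo {
    public static void main(String[] args) {
        StringBuilder sb = new StringBuilder();
        // One overloaded name covers String, int, char and boolean:
        sb.append("count: ").append(42).append(' ').append(true);
        System.out.println(sb); // prints: count: 42 true
    }
}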

reference to call ambiguous in java

class A {
    public void printFirst(int... va) throws IOException {
        System.out.print("A");
    }

    public static void main(String args[]) {
        try {
            new B().printFirst(2);
        } catch (Exception ex) {
        }
    }
}

class B extends A {
    // @Override
    public void printFirst(float... va) throws IOException {
        System.out.print("B");
    }
}
Why is it showing "reference to printFirst is ambiguous"?
It actually compiles if you remove the varargs notation. The literal 2 should be considered an int, not a float, so I would expect that the printFirst in A would be chosen by the compiler.
It looks like this has to do with how the compiler does method invocation conversions. This SO question says it's in the spec, but the part of the accepted answer that relates to this question appears to be contradictory (it says you can't combine a widening conversion (int to float) with varargs, but then later it says this is okay). A similar problem was discussed in this question, and the accepted answer concludes that this case is actually unspecified (unfortunately the link to the discussion is now broken). Making matters worse, the language guide simply suggests avoiding this type of overloading.
This appears to be a bug in your compiler; I can reproduce your compile-error in one compiler (Eclipse), but not in another (javac), and I believe the latter is correct.
According to §15.12.2.5 "Choosing the Most Specific Method" of The Java Language Specification, Java SE 7 Edition, the compile-error that you're seeing should only happen if "no method is the most specific, because there are two or more methods that are maximally specific" (plus various other restrictions). But that's not the case here: in your case, B.printFirst(float...) is not maximally specific, because a method is maximally specific "if it is accessible and applicable and there is no other method that is applicable and accessible that is strictly more specific", and in your case, A.printFirst(int...) is strictly more specific, because int is a subtype of float and float is not a subtype of int.
By the way, your class B is most likely a red herring; in Eclipse, at least, you can trigger the same compile-error by simply writing:
class A
{
    public static void printFirst(float... va)
    { System.out.print("float..."); }

    public static void printFirst(int... va)
    { System.out.print("int..."); }

    public static void main(String args[])
    { printFirst(2); }
}

Java overloading and inheritance rules

I've been studying because I have an exam and I don't have many problems with most of Java but I stumbled upon a rule I can't explain. Here's a code fragment:
public class A {
    public int method(Object o) {
        return 1;
    }

    public int method(A a) {
        return 2;
    }
}

public class AX extends A {
    public int method(A a) {
        return 3;
    }

    public int method(AX ax) {
        return 4;
    }
}

public static void main(String[] args) {
    Object o = new A();
    A a1 = new A();
    A a2 = new AX();
    AX ax = new AX();

    System.out.println(a1.method(o));
    System.out.println(a2.method(a1));
    System.out.println(a2.method(o));
    System.out.println(a2.method(ax));
}
This returns:
1
3
1
3
While I would expect it to return:
1
3
1
4
Why is it that the type of a2 determines which method is called in AX?
I've been reading on overloading rules and inheritance but this seems obscure enough that I haven't been able to find the exact rule. Any help would be greatly appreciated.
The behavior of these method calls is dictated and described by the Java Language Specification (reference section 8.4.9).
When a method is invoked (§15.12), the number of actual arguments (and any explicit type arguments) and the compile-time types of the arguments are used, at compile time, to determine the signature of the method that will be invoked (§15.12.2). If the method that is to be invoked is an instance method, the actual method to be invoked will be determined at run time, using dynamic method lookup (§15.12.4).
In your example, the Java compiler determines the closest match based on the compile-time type of the reference you are invoking your method on. In this case:
A.method(AX)
The closest method is from type A, with signature A.method(A). At runtime, dynamic dispatch is performed on the actual runtime type of the object (which is an instance of AX), and hence this is the method that is actually called:
AX.method(A)
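Annotating the four calls from the question (given the classes A and AX above) makes the two-step process visible:
// Compile time picks a signature from the reference's declared type;
// run time then dispatches on the object's actual class.
System.out.println(a1.method(o));  // A.method(Object)                 -> 1
System.out.println(a2.method(a1)); // picks A.method(A) statically,
                                   // dispatched to AX.method(A)       -> 3
System.out.println(a2.method(o));  // A.method(Object), not overridden -> 1
System.out.println(a2.method(ax)); // a2 is declared as A, so AX.method(AX)
                                   // is not visible; A.method(A) is chosen
                                   // and dispatched to AX.method(A)   -> 3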
Let me clarify it in a simpler way. Here you are creating a subclass object through a superclass reference.
Always keep one thing in mind: when you call a method through a superclass reference, the compiler checks the superclass for a method with that name and a matching signature, no matter that the object is of the subclass.
If it finds one, it then checks whether that method is overridden. If it is, the call goes to the subclass method at runtime, as it did here; otherwise it executes the superclass method.
I can give you an example of it: just hide
public int method(A a) {
    return 3;
}
and check your answer; you will get 1 2 1 2, because the compiler gives first priority to the reference type. Because you overrode the method and then called it, it gives 3. It's long, but I hope it's easy to understand. Happy Learning!
a2 is referenced as an A, so the compiler uses the reference type first (not the actual object type, as you expected).

Reference is ambiguous with generics

I'm having quite a tricky case here with generics and method overloading. Check out this example class:
public class Test {

    public <T> void setValue(Parameter<T> parameter, T value) {
    }

    public <T> void setValue(Parameter<T> parameter, Field<T> value) {
    }

    public void test() {
        // This works perfectly. <T> is bound to String.
        // Ambiguity between setValue(.., String) and setValue(.., Field)
        // is impossible, as String and Field are incompatible.
        Parameter<String> p1 = getP1();
        Field<String> f1 = getF1();
        setValue(p1, f1);

        // This causes issues. <T> is bound to Object.
        // Ambiguity between setValue(.., Object) and setValue(.., Field)
        // is possible, as Object and Field are compatible.
        Parameter<Object> p2 = getP2();
        Field<Object> f2 = getF2();
        setValue(p2, f2);
    }

    private Parameter<String> getP1() {...}
    private Parameter<Object> getP2() {...}
    private Field<String> getF1() {...}
    private Field<Object> getF2() {...}
}
The above example compiles perfectly in Eclipse (Java 1.6), but not with the Ant javac command (or with the JDK's javac command), where I get this sort of error message on the second invocation of setValue:
reference to setValue is ambiguous,
both method setValue(org.jooq.Parameter<T>,T) in Test
and method setValue(org.jooq.Parameter<T>,org.jooq.Field<T>) in Test match
According to the specification and to my understanding of how the Java compiler works, the most specific method should always be chosen: http://java.sun.com/docs/books/jls/third_edition/html/expressions.html#20448
In any case, even if <T> is bound to Object, which makes both setValue methods acceptable candidates for invocation, the one with the Field parameter always seems to be more specific. And it works in Eclipse, just not with the JDK's compiler.
UPDATE:
Like this, it would work both in Eclipse and with the JDK compiler (with rawtypes warnings, of course). I understand that the rules specified in the spec are quite special when generics are involved. But I find this rather confusing:
public <T> void setValue(Parameter<T> parameter, Object value) {
}

// Here, it's easy to see that this method is more specific
public <T> void setValue(Parameter<T> parameter, Field value) {
}
UPDATE 2:
Even with generics, I can create this workaround where I avoid the type <T> being bound to Object at setValue invocation time, by adding an additional, unambiguous indirection called setValue0. This makes me think that the binding of T to Object is really what's causing all the trouble here:
public <T> void setValue(Parameter<T> parameter, T value) {
}

public <T> void setValue(Parameter<T> parameter, Field<T> value) {
}

public <T> void setValue0(Parameter<T> parameter, Field<T> value) {
    // This call wasn't ambiguous in Java 7.
    // It is now ambiguous in Java 8!
    setValue(parameter, value);
}

public void test() {
    Parameter<Object> p2 = p2();
    Field<Object> f2 = f2();
    setValue0(p2, f2);
}
Am I misunderstanding something here? Is there a known compiler bug related to this? Or is there a workaround/compiler setting to help me?
Follow-Up:
For those interested, I have filed a bug report both with Oracle and with Eclipse. Oracle has accepted the bug; so far, Eclipse has analysed it and rejected it! It looks as though my intuition is right and this is a bug in javac:
http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=7031404
https://bugs.eclipse.org/bugs/show_bug.cgi?id=340506
https://bugs.eclipse.org/bugs/show_bug.cgi?id=469014 (a new issue in Eclipse Mars)
The JDK is right. The 2nd method is not more specific than the 1st. From JLS3 §15.12.2.5:
"The informal intuition is that one method is more specific than another if any invocation handled by the first method could be passed on to the other one without a compile-time type error."
This is clearly not the case here. I emphasized any invocation. The property of one method being more specific than the other purely depends on the two methods themselves; it doesn't change per invocation.
Formal analysis on your problem: is m2 more specific than m1?
m1: <R> void setValue(Parameter<R> parameter, R value)
m2: <V> void setValue(Parameter<V> parameter, Field<V> value)
First, the compiler needs to infer R from the initial constraints:
Parameter<V> << Parameter<R>
Field<V> << R
The result is R=V, per inference rules in 15.12.2.7
Now we substitute R and check subtype relations
Parameter<V> <: Parameter<V>
Field<V> <: V
The 2nd line does not hold, per subtyping rules in 4.10.2. So m2 is not more specific than m1.
V is not Object in this analysis; the analysis considers all possible values of V.
I would suggest using different method names. Overloading is never a necessity.
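A minimal sketch of that suggestion, with stand-in Parameter and Field interfaces (hypothetical here; the question's types come from org.jooq):
public class DistinctNames {
    interface Parameter<T> { }
    interface Field<T> { }

    // Distinct names side-step overload resolution entirely:
    static <T> void setValue(Parameter<T> parameter, T value) { }
    static <T> void setFieldValue(Parameter<T> parameter, Field<T> value) { }

    public static void main(String[] args) {
        Parameter<Object> p2 = null;
        Field<Object> f2 = null;
        setFieldValue(p2, f2); // only one candidate: never ambiguous
    }
}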
This appears to be a significant bug in Eclipse. The spec quite clearly indicates that the type variables are not substituted in this step. Eclipse apparently does type variable substitution first, then check method specificity relation.
If such behavior is more "sensible" in some examples, it is not in other examples. Say,
m1: <T extends Object> void check(List<T> list, T obj) { print("1"); }
m2: <T extends Number> void check(List<T> list, T num) { print("2"); }
void test() {
    check(new ArrayList<Integer>(), new Integer(0));
}
"Intuitively", and formally per spec, m2 is more specific than m1, and the test prints "2". However, if substitution T=Integer is done first, the two methods become identical!
for Update 2
m1: <R> void setValue(Parameter<R> parameter, R value)
m2: <V> void setValue(Parameter<V> parameter, Field<V> value)
m3: <T> void setValue2(Parameter<T> parameter, Field<T> value)
s4: setValue(parameter, value)
Here, m1 is not applicable for method invocation s4, so m2 is the only choice.
Per §15.12.2.2, to see if m1 is applicable for s4, type inference is carried out first, leading to the conclusion that R=T; then we check Ai <: Si, which leads to Field<T> <: T, which is false.
This is consistent with the previous analysis - if m1 is applicable to s4, then any invocation handled by m2 (essentially same as s4) can be handled by m1, which means m2 would be more specific than m1, which is false.
in a parameterized type
Consider the following code
class PF<T>
{
    public void setValue(Parameter<T> parameter, T value) {
    }

    public void setValue(Parameter<T> parameter, Field<T> value) {
    }
}

void test() {
    PF<Object> pf2 = null;
    Parameter<Object> p2 = getP2();
    Field<Object> f2 = getF2();
    pf2.setValue(p2, f2);
}
This compiles without problem. Per 4.5.2, the types of the methods in PF<Object> are methods in PF<T> with substitution T=Object. That is, the methods of pf2 are
public void setValue(Parameter<Object> parameter, Object value)
public void setValue(Parameter<Object> parameter, Field<Object> value)
The 2nd method is more specific than the 1st.
My guess is that the compiler is doing method overload resolution as per JLS, Section 15.12.2.5.
For this section, the compiler uses strong subtyping (thus not allowing any unchecked conversion), so T value becomes Object value and Field<T> value becomes Field<Object> value. The following rules will apply:
The method m is applicable by subtyping if and only if both of the following conditions hold:
* For 1 ≤ i ≤ n, either:
  o Ai is a subtype (§4.10) of Si (Ai <: Si), or
  o Ai is convertible to some type Ci by unchecked conversion (§5.1.9), and Ci <: Si.
* If m is a generic method as described above, then Ul <: Bl[R1 = U1, ..., Rp = Up], for 1 ≤ l ≤ p.
(Refer to bullet 2.) Since Field<Object> is a subtype of Object, the most specific method must be chosen. The field f2 matches both of your methods (because of bullet 2 above), which makes the call ambiguous.
For String and Field<String>, there is no subtype relationship between the two.
PS. This is my understanding of things, don't quote it as kosher.
Edit: This answer is wrong. Take a look at the accepted answer.
I think the issue comes down to this: the compiler does not see the type of f2 (i.e. Field<Object>) and the inferred type of the formal parameter (i.e. Field<T> -> Field<Object>) as the same type.
In other words, it looks like the type of f2 (Field<Object>) is considered to be a subtype of the type of the formal parameter Field<T> (Field<Object>). Since Field<Object> is at the same time a subtype of Object, the compiler cannot pick one method over the other.
Edit: Let me expand my statement a bit
Both methods are applicable, and it looks like Phase 1: Identify Matching Arity Methods Applicable by Subtyping is used to decide which methods are candidates, and then the rules from Choosing the Most Specific Method are applied, but for some reason they fail to pick the second method over the first one.
Phase 1 section uses this notation: X <: S (X is subtype of S). Based on my understanding of <:, X <: X is a valid expression, i.e. the <: is not strict and includes the type itself (X is subtype of X) in this context. This explains the result of Phase 1: both methods are picked as candidates, since Field<Object> <: Object and Field<Object> <: Field<Object>.
Choosing the Most Specific Method section uses same notation to say that one method is more specific than another. The interesting part the paragraph that starts with "One fixed-arity member method named m is more specific than another member...". It has, among other things:
For all j from 1 to n, Tj <: Sj.
This makes me think that in our case second method must be chosen over the first one, because following holds:
Parameter<Object> <: Parameter<Object>
Field<Object> <: Object
while the other way around does not hold due to Object <: Field<Object> being false (Object is not a subtype of Field).
Note: In case of String examples, Phase 1 will simply pick the only method applicable: the second one.
So, to answer your questions: I think this is a bug in the compiler implementation. Eclipse has its own incremental compiler, which does not seem to have this bug.
