Reference is ambiguous with generics - java

I'm having quite a tricky case here with generics and method overloading. Check out this example class:
public class Test {
    public <T> void setValue(Parameter<T> parameter, T value) {
    }

    public <T> void setValue(Parameter<T> parameter, Field<T> value) {
    }

    public void test() {
        // This works perfectly. <T> is bound to String.
        // Ambiguity between setValue(.., String) and setValue(.., Field)
        // is impossible, as String and Field are incompatible.
        Parameter<String> p1 = getP1();
        Field<String> f1 = getF1();
        setValue(p1, f1);

        // This causes issues. <T> is bound to Object.
        // Ambiguity between setValue(.., Object) and setValue(.., Field)
        // is possible, as Object and Field are compatible.
        Parameter<Object> p2 = getP2();
        Field<Object> f2 = getF2();
        setValue(p2, f2);
    }

    private Parameter<String> getP1() {...}
    private Parameter<Object> getP2() {...}
    private Field<String> getF1() {...}
    private Field<Object> getF2() {...}
}
The above example compiles perfectly in Eclipse (Java 1.6), but not with the Ant javac command (or with the JDK's javac command), where I get this sort of error message on the second invocation of setValue:
reference to setValue is ambiguous,
both method
setValue(org.jooq.Parameter<T>,T)
in Test and method
setValue(org.jooq.Parameter<T>,org.jooq.Field<T>)
in Test match
According to the specification and to my understanding of how the Java compiler works, the most specific method should always be chosen: http://java.sun.com/docs/books/jls/third_edition/html/expressions.html#20448
In any case, even if <T> is bound to Object, which makes both setValue methods acceptable candidates for invocation, the one with the Field parameter always seems to be more specific. And it works in Eclipse, just not with the JDK's compiler.
UPDATE:
Like this, it would work both in Eclipse and with the JDK compiler (with rawtypes warnings, of course). I understand that the rules specified in the spec are quite special when generics are involved. But I find this rather confusing:
public <T> void setValue(Parameter<T> parameter, Object value) {
}

// Here, it's easy to see that this method is more specific
public <T> void setValue(Parameter<T> parameter, Field value) {
}
UPDATE 2:
Even with generics, I can create this workaround where I avoid the type <T> being bound to Object at setValue invocation time, by adding an additional, unambiguous indirection called setValue0. This makes me think that the binding of T to Object is really what's causing all the trouble here:
public <T> void setValue(Parameter<T> parameter, T value) {
}

public <T> void setValue(Parameter<T> parameter, Field<T> value) {
}

public <T> void setValue0(Parameter<T> parameter, Field<T> value) {
    // This call wasn't ambiguous in Java 7.
    // It is now ambiguous in Java 8!
    setValue(parameter, value);
}

public void test() {
    Parameter<Object> p2 = getP2();
    Field<Object> f2 = getF2();
    setValue0(p2, f2);
}
Am I misunderstanding something here? Is there a known compiler bug related to this? Or is there a workaround/compiler setting to help me?
Follow-Up:
For those interested, I have filed a bug report both with Oracle and with Eclipse. Oracle has accepted the bug; Eclipse, so far, has analysed it and rejected it! It looks as though my intuition is right and this is a bug in javac:
http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=7031404
https://bugs.eclipse.org/bugs/show_bug.cgi?id=340506
https://bugs.eclipse.org/bugs/show_bug.cgi?id=469014 (a new issue in Eclipse Mars)

The JDK is right. The 2nd method is not more specific than the 1st. From JLS3 §15.12.2.5:
"The informal intuition is that one method is more specific than another if any invocation handled by the first method could be passed on to the other one without a compile-time type error."
This is clearly not the case here. I emphasized any invocation. The property of one method being more specific than the other purely depends on the two methods themselves; it doesn't change per invocation.
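For instance, take the String invocation from the question (a sketch): it is handled by the Field overload, but could not be passed on to the other one, since a Field<String> is not a String:

Parameter<String> p1 = getP1();
Field<String> f1 = getF1();
// Handled by <T> setValue(Parameter<T>, Field<T>) with T = String.
// Forwarding it to <T> setValue(Parameter<T>, T) would force T = String
// and pass a Field<String> where a String is expected - a compile-time error.
setValue(p1, f1);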
Formal analysis on your problem: is m2 more specific than m1?
m1: <R> void setValue(Parameter<R> parameter, R value)
m2: <V> void setValue(Parameter<V> parameter, Field<V> value)
First, the compiler needs to infer R from the initial constraints:
Parameter<V> << Parameter<R>
Field<V> << R
The result is R=V, per the inference rules in §15.12.2.7.
Now we substitute R and check the subtype relations:
Parameter<V> <: Parameter<V>
Field<V> <: V
The 2nd line does not hold, per the subtyping rules in §4.10.2. So m2 is not more specific than m1.
V is not Object in this analysis; the analysis considers all possible values of V.
I would suggest using different method names; overloading is never a necessity.
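As a sketch of that suggestion, using the question's types (setFieldValue is a made-up name):

public <T> void setValue(Parameter<T> parameter, T value) {
}

// A distinct name removes overload resolution from the picture entirely:
public <T> void setFieldValue(Parameter<T> parameter, Field<T> value) {
}

public void test() {
    Parameter<Object> p2 = getP2();
    Field<Object> f2 = getF2();
    setFieldValue(p2, f2); // unambiguous on every compiler
}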
This appears to be a significant bug in Eclipse. The spec quite clearly indicates that the type variables are not substituted in this step. Eclipse apparently does the type variable substitution first, and then checks the method specificity relation.
If such behavior is more "sensible" in some examples, it is not in others. Say,
m1: <T extends Object> void check(List<T> list, T obj) { print("1"); }
m2: <T extends Number> void check(List<T> list, T num) { print("2"); }

void test() {
    check(new ArrayList<Integer>(), new Integer(0));
}
"Intuitively", and formally per spec, m2 is more specific than m1, and the test prints "2". However, if substitution T=Integer is done first, the two methods become identical!
For Update 2:
m1: <R> void setValue(Parameter<R> parameter, R value)
m2: <V> void setValue(Parameter<V> parameter, Field<V> value)
m3: <T> void setValue0(Parameter<T> parameter, Field<T> value)
s4: setValue(parameter, value)
Here, m1 is not applicable for method invocation s4, so m2 is the only choice.
Per §15.12.2.2, to see if m1 is applicable for s4, first, type inference is carried out, reaching the conclusion that R=T; then we check Ai <: Si, which leads to Field<T> <: T, which is false.
This is consistent with the previous analysis - if m1 is applicable to s4, then any invocation handled by m2 (essentially same as s4) can be handled by m1, which means m2 would be more specific than m1, which is false.
In a parameterized type
Consider the following code
class PF<T> {
    public void setValue(Parameter<T> parameter, T value) {
    }

    public void setValue(Parameter<T> parameter, Field<T> value) {
    }
}

void test() {
    PF<Object> pf2 = null;
    Parameter<Object> p2 = getP2();
    Field<Object> f2 = getF2();
    pf2.setValue(p2, f2);
}
This compiles without problem. Per 4.5.2, the types of the methods in PF<Object> are methods in PF<T> with substitution T=Object. That is, the methods of pf2 are
public void setValue(Parameter<Object> parameter, Object value)
public void setValue(Parameter<Object> parameter, Field<Object> value)
The 2nd method is more specific than the 1st.

My guess is that the compiler is doing method overload resolution as per JLS, Section 15.12.2.5.
For this section, the compiler uses strong subtyping (thus not allowing any unchecked conversion), so T value becomes Object value and Field<T> value becomes Field<Object> value. The following rules then apply:
The method m is applicable by subtyping if and only if both of the following conditions hold:
* For 1 ≤ i ≤ n, either:
o Ai is a subtype (§4.10) of Si (Ai <: Si), or
o Ai is convertible to some type Ci by unchecked conversion (§5.1.9), and Ci <: Si.
* If m is a generic method as described above, then Ul <: Bl[R1 = U1, ..., Rp = Up], 1 ≤ l ≤ p.
(Refer to the subtyping bullet above.) Since Field<Object> is a subtype of Object, f2 matches both of your methods, which makes the call ambiguous.
For String and Field<String>, there is no subtype relationship between the two.
PS. This is my understanding of things, don't quote it as kosher.

Edit: This answer is wrong. Take a look at the accepted answer.
I think the issue comes down to this: the compiler does not see the type of f2 (i.e. Field<Object>) and the inferred type of the formal parameter (i.e. Field<T> -> Field<Object>) as the same type.
In other words, it looks like the type of f2 (Field<Object>) is considered to be a subtype of the type of the formal parameter Field<T>. Since Field<Object> is at the same time a subtype of Object, the compiler cannot pick one method over the other.
Edit: Let me expand my statement a bit
Both methods are applicable, and it looks like Phase 1: Identify Matching Arity Methods Applicable by Subtyping is used to decide which method to call, and then the rules from Choosing the Most Specific Method are applied, but for some reason fail to pick the second method over the first one.
Phase 1 section uses this notation: X <: S (X is subtype of S). Based on my understanding of <:, X <: X is a valid expression, i.e. the <: is not strict and includes the type itself (X is subtype of X) in this context. This explains the result of Phase 1: both methods are picked as candidates, since Field<Object> <: Object and Field<Object> <: Field<Object>.
The Choosing the Most Specific Method section uses the same notation to say that one method is more specific than another. The interesting part is the paragraph that starts with "One fixed-arity member method named m is more specific than another member...". It has, among other things:
For all j from 1 to n, Tj <: Sj.
This makes me think that in our case the second method must be chosen over the first one, because the following holds:
Parameter<Object> <: Parameter<Object>
Field<Object> <: Object
while the other way around does not hold, due to Object <: Field<Object> being false (Object is not a subtype of Field<Object>).
Note: In case of String examples, Phase 1 will simply pick the only method applicable: the second one.
So, to answer your questions: I think this is a bug in the compiler implementation. Eclipse has its own incremental compiler, which does not seem to have this bug.

Related

Is the built-in LambdaMetafactory parameter type checking correct?

For example, when I execute the following:
public static int addOne(Number base) {
    return base.intValue() + 1;
}

public static interface AddOneLambda {
    public int addOne(Integer base);
}

public static void main(String[] a) throws Throwable {
    Method lambdaMethod = AddOneLambda.class.getMethod("addOne", Integer.class);
    Class<?>[] lambdaParameters = Stream.of(lambdaMethod.getParameters())
            .map(p -> p.getType()).toArray(Class[]::new);

    Method sourceMethod = Main.class.getMethod("addOne", Number.class);
    Class<?>[] sourceParameters = Stream.of(sourceMethod.getParameters())
            .map(p -> p.getType()).toArray(Class[]::new);

    MethodHandles.Lookup lookup = MethodHandles.lookup();
    CallSite site = LambdaMetafactory.metafactory(lookup, //
            lambdaMethod.getName(), //
            MethodType.methodType(lambdaMethod.getDeclaringClass()), //
            MethodType.methodType(lambdaMethod.getReturnType(), lambdaParameters), //
            lookup.unreflect(sourceMethod), //
            MethodType.methodType(sourceMethod.getReturnType(), sourceParameters));

    AddOneLambda addOneLambda = (AddOneLambda) site.getTarget().invoke();
    System.out.println("1 + 1 = " + addOneLambda.addOne(1));
}
I receive the following exception from metafactory:
LambdaConversionException: Type mismatch for dynamic parameter 0:
class java.lang.Number is not a subtype of class java.lang.Integer
I don't understand this. Passing an Integer to the AddOneLambda should always be fine, because the underlying addOne method can accept Integers as part of its Number signature, so I believe this configuration should be "safe".
On the other hand, when I execute the above with this change:
public static int addOne(Integer base) {
    return base.intValue() + 1;
}

public interface AddOneLambda {
    public int addOne(Number base);
}
The metafactory now allows this without exception, but it doesn't seem right. I can pass any kind of Number to AddOneLambda, even though the underlying method can only handle Integers, so I believe this configuration to be "unsafe". Indeed, if I now call addOneLambda.addOne(1.5) I receive an exception for the inability to cast Double to Integer.
Why then is my initial code not allowed, while the change that ultimately allows invalid types to be passed is accepted? Is it something to do with the values I'm passing to metafactory, is metafactory incorrectly checking the types, or does this prevent some other kind of situation I haven't considered? If relevant, I'm using JDK 17.0.3.
You seem to have misunderstood the purpose of the dynamicMethodType argument of metafactory (the sixth one, which you have as MethodType.methodType(sourceMethod.getReturnType(), sourceParameters)). Its point is to produce runtime type checks: the generated method will have the type given by interfaceMethodType (the fourth parameter) but will check that its parameters and return value obey dynamicMethodType. For example, after erasure, Consumer contains void accept(Object), and if you write something like (Consumer<String>) String::intern, the generated accept(Object) must do a checked cast to String. The underlying call to metafactory will pass the method type void(Object) as interfaceMethodType and void(String) as dynamicMethodType.
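Here is a minimal, hand-rolled sketch of that Consumer<String> scenario, mimicking what the compiler's invokedynamic would set up (the class name and variables are made up for illustration):

import java.lang.invoke.CallSite;
import java.lang.invoke.LambdaMetafactory;
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;
import java.util.function.Consumer;

public class DynamicCheckDemo {
    public static void main(String[] args) throws Throwable {
        MethodHandles.Lookup lookup = MethodHandles.lookup();
        // Implementation method: String.intern(), a (String)String handle.
        MethodHandle intern = lookup.findVirtual(
                String.class, "intern", MethodType.methodType(String.class));

        CallSite site = LambdaMetafactory.metafactory(
                lookup,
                "accept",
                MethodType.methodType(Consumer.class),            // factory: () -> Consumer
                MethodType.methodType(void.class, Object.class),  // erased accept(Object)
                intern,                                           // return value is discarded (void)
                MethodType.methodType(void.class, String.class)); // dynamic check: must be a String

        @SuppressWarnings("unchecked")
        Consumer<String> c = (Consumer<String>) site.getTarget().invoke();
        c.accept("hello");          // fine: passes the dynamic String check
        ((Consumer) c).accept(42);  // ClassCastException at runtime: Integer is not a String
    }
}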
As the purpose of dynamicMethodType is to impose "further" restrictions on interfaceMethodType, it is considered an error for it to be looser. From a theoretical/user standpoint this seems a bit ugly, but from an implementation standpoint (why generate a useless cast? does producing a wider type indicate a bug in the caller?) you might consider it justified.
To get all language-lawyery: you have violated the "linkage invariants" in LambdaMetafactory's documentation.
Assume the linkage arguments are as follows:
...
interfaceMethodType (describing the implemented method type) has N parameters, of types (U1..Un) and return type Ru;
...
dynamicMethodType (allowing restrictions on invocation) has N parameters, of types (T1..Tn) and return type Rt.
Then the following linkage invariants must hold:
interfaceMethodType and dynamicMethodType have the same arity N, and for i=1..N, Ti and Ui are the same type, or Ti and Ui are both reference types and Ti is a subtype of Ui
...
As you are not dealing with generics or anything similarly interesting, you should do as the Javadoc for metafactory suggests and pass the same int(Integer) type for interfaceMethodType and dynamicMethodType.
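Applied to the question's code, that amounts to deriving both method types from the interface method rather than the implementation; a sketch reusing the question's lambdaMethod, lambdaParameters, sourceMethod and lookup:

// Derive both method types from the interface method, as the Javadoc suggests.
MethodType samType = MethodType.methodType(
        lambdaMethod.getReturnType(), lambdaParameters); // int (Integer)

CallSite site = LambdaMetafactory.metafactory(lookup,
        lambdaMethod.getName(),
        MethodType.methodType(lambdaMethod.getDeclaringClass()),
        samType,                        // implemented method: int addOne(Integer)
        lookup.unreflect(sourceMethod), // int addOne(Number) accepts an Integer fine
        samType);                       // dynamicMethodType: same, no extra narrowing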

Why is a type parameter stronger than a method parameter

Why is
public <R, F extends Function<T, R>> Builder<T> withX(F getter, R returnValue) {...}
more strict than
public <R> Builder<T> with(Function<T, R> getter, R returnValue) {...}
This is a follow-up to Why is lambda return type not checked at compile time.
I found that using the method withX() like
.withX(MyInterface::getLength, "I am not a Long")
produces the desired compile-time error:
The type of getLength() from the type BuilderExample.MyInterface is long, this is incompatible with the descriptor's return type: String
while using the method with() does not.
full example:
import java.util.function.Function;

public class SO58376589 {
    public static class Builder<T> {
        public <R, F extends Function<T, R>> Builder<T> withX(F getter, R returnValue) {
            return this;
        }

        public <R> Builder<T> with(Function<T, R> getter, R returnValue) {
            return this;
        }
    }

    static interface MyInterface {
        public Long getLength();
    }

    public static void main(String[] args) {
        Builder<MyInterface> b = new Builder<MyInterface>();
        Function<MyInterface, Long> getter = MyInterface::getLength;
        b.with(getter, 2L);
        b.with(MyInterface::getLength, 2L);
        b.withX(getter, 2L);
        b.withX(MyInterface::getLength, 2L);
        b.with(getter, "No NUMBER"); // error
        b.with(MyInterface::getLength, "No NUMBER"); // NO ERROR !!
        b.withX(getter, "No NUMBER"); // error
        b.withX(MyInterface::getLength, "No NUMBER"); // error !!!
    }
}
javac SO58376589.java
SO58376589.java:32: error: method with in class Builder<T> cannot be applied to given types;
b.with(getter, "No NUMBER"); // error
^
required: Function<MyInterface,R>,R
found: Function<MyInterface,Long>,String
reason: inference variable R has incompatible bounds
equality constraints: Long
lower bounds: String
where R,T are type-variables:
R extends Object declared in method <R>with(Function<T,R>,R)
T extends Object declared in class Builder
SO58376589.java:34: error: method withX in class Builder<T> cannot be applied to given types;
b.withX(getter, "No NUMBER"); // error
^
required: F,R
found: Function<MyInterface,Long>,String
reason: inference variable R has incompatible bounds
equality constraints: Long
lower bounds: String
where F,R,T are type-variables:
F extends Function<MyInterface,R> declared in method <R,F>withX(F,R)
R extends Object declared in method <R,F>withX(F,R)
T extends Object declared in class Builder
SO58376589.java:35: error: incompatible types: cannot infer type-variable(s) R,F
b.withX(MyInterface::getLength, "No NUMBER"); // error
^
(argument mismatch; bad return type in method reference
Long cannot be converted to String)
where R,F,T are type-variables:
R extends Object declared in method <R,F>withX(F,R)
F extends Function<T,R> declared in method <R,F>withX(F,R)
T extends Object declared in class Builder
3 errors
Extended Example
The following example boils the different behaviour of method and type parameters down to a Supplier. In addition, it shows the difference from a Consumer's behaviour for a type parameter, and it shows that it makes no difference whether it is a Consumer or a Supplier for a method parameter.
import java.util.function.Consumer;
import java.util.function.Supplier;

interface TypeInference {
    Number getNumber();
    void setNumber(Number n);

    @FunctionalInterface
    interface Method<R> {
        TypeInference be(R r);
    }

    // Supplier:
    <R> R letBe(Supplier<R> supplier, R value);
    <R, F extends Supplier<R>> R letBeX(F supplier, R value);
    <R> Method<R> let(Supplier<R> supplier); // return (x) -> this;

    // Consumer:
    <R> R lettBe(Consumer<R> supplier, R value);
    <R, F extends Consumer<R>> R lettBeX(F supplier, R value);
    <R> Method<R> lett(Consumer<R> consumer);

    public static void main(TypeInference t) {
        t.letBe(t::getNumber, (Number) 2);   // Compiles :-)
        t.lettBe(t::setNumber, (Number) 2);  // Compiles :-)
        t.letBe(t::getNumber, 2);            // Compiles :-)
        t.lettBe(t::setNumber, 2);           // Compiles :-)
        t.letBe(t::getNumber, "NaN");        // !!!! Compiles :-(
        t.lettBe(t::setNumber, "NaN");       // Does not compile :-)

        t.letBeX(t::getNumber, (Number) 2);  // Compiles :-)
        t.lettBeX(t::setNumber, (Number) 2); // Compiles :-)
        t.letBeX(t::getNumber, 2);           // !!! Does not compile :-(
        t.lettBeX(t::setNumber, 2);          // Compiles :-)
        t.letBeX(t::getNumber, "NaN");       // Does not compile :-)
        t.lettBeX(t::setNumber, "NaN");      // Does not compile :-)

        t.let(t::getNumber).be(2);           // Compiles :-)
        t.lett(t::setNumber).be(2);          // Compiles :-)
        t.let(t::getNumber).be("NaN");       // Does not compile :-)
        t.lett(t::setNumber).be("NaN");      // Does not compile :-)
    }
}
This is a really interesting question. The answer, I'm afraid, is complicated.
tl;dr
Working out the difference involves some quite in-depth reading of Java's type inference specification, but basically boils down to this:
All other things equal, the compiler infers the most specific type it can.
However, if it can find a substitution for a type parameter that satisfies all the requirements, then compilation will succeed, however vague the substitution turns out to be.
For with there is a (admittedly vague) substitution that satisfies all the requirements on R: Serializable
For withX, the introduction of the additional type parameter F forces the compiler to resolve R first, without considering the constraint F extends Function<T,R>. R resolves to the (much more specific) String which then means that inference of F fails.
This last bullet point is the most important, but also the most hand-wavy. I can't think of a better concise way of phrasing it, so if you want more details, I suggest you read the full explanation below.
Is this intended behaviour?
I'm gonna go out on a limb here, and say no.
I'm not suggesting there's a bug in the spec, more that (in the case of withX) the language designers have put their hands up and said "there are some situations where type inference gets too hard, so we'll just fail". Even though the compiler's behaviour with respect to withX seems to be what you want, I would consider that to be an incidental side-effect of the current spec, rather than a positively intended design decision.
This matters, because it informs the question Should I rely on this behaviour in my application design? I would argue that you shouldn't, because you can't guarantee that future versions of the language will continue to behave this way.
While it's true that language designers try very hard not to break existing applications when they update their spec/design/compiler, the problem is that the behaviour you want to rely on is one where the compiler currently fails (i.e. not an existing application). Language updates turn non-compiling code into compiling code all the time. For example, the following code could be guaranteed not to compile in Java 7, but would compile in Java 8:
static Runnable x = () -> System.out.println();
Your use-case is no different.
Another reason I'd be cautious about using your withX method is the F parameter itself. Generally, a generic type parameter on a method (that doesn't appear in the return type) exists to bind the types of multiple parts of the signature together. It's saying:
I don't care what T is, but I want to be sure that wherever I use T it's the same type.
Logically, then, we would expect each type parameter to appear at least twice in a method signature; otherwise "it's not doing anything". F in your withX only appears once in the signature, which suggests to me a use of a type parameter not in line with the intent of this feature of the language.
An alternative implementation
One way to implement this in a slightly more "intended behaviour" way would be to split your with method up into a chain of 2:
public class Builder<T> {
    public final class With<R> {
        private final Function<T, R> method;

        private With(Function<T, R> method) {
            this.method = method;
        }

        public Builder<T> of(R value) {
            // TODO: Body of your old 'with' method goes here
            return Builder.this;
        }
    }

    public <R> With<R> with(Function<T, R> method) {
        return new With<>(method);
    }
}
This can then be used as follows:
b.with(MyInterface::getLength).of(1L);           // Compiles
b.with(MyInterface::getLength).of("Not a long"); // Compiler error
This doesn't include an extraneous type parameter like your withX does. By breaking down the method into two signatures, it also better expresses the intent of what you're trying to do, from a type-safety point of view:
The first method sets up a class (With) that defines the type based on the method reference.
The second method (of) constrains the type of the value to be compatible with what you previously set up.
The only way a future version of the language would be able to compile this is if it implemented full duck-typing, which seems unlikely.
One final note to make this whole thing irrelevant: I think Mockito (and in particular its stubbing functionality) might basically already do what you're trying to achieve with your "type safe generic builder". Maybe you could just use that instead?
The full(ish) explanation
I'm going to work through the type inference procedure for both with and withX. This is quite long, so take it slowly. Despite being long, I've still left quite a lot of details out. You may wish to refer to the spec for more details (follow the links) to convince yourself that I'm right (I may well have made a mistake).
Also, to simplify things a little, I'm going to use a more minimal code sample. The main difference is that it swaps out Function for Supplier, so there are fewer types and parameters in play. Here's a full snippet that reproduces the behaviour you described:
import java.util.function.Supplier;

public class TypeInference {
    static long getLong() { return 1L; }

    static <R> void with(Supplier<R> supplier, R value) {}
    static <R, F extends Supplier<R>> void withX(F supplier, R value) {}

    public static void main(String[] args) {
        with(TypeInference::getLong, "Not a long");       // Compiles
        withX(TypeInference::getLong, "Also not a long"); // Does not compile
    }
}
Let's work through the type applicability inference and type inference procedure for each method invocation in turn:
with
We have:
with(TypeInference::getLong, "Not a long");
The initial bound set, B0, is:
R <: Object
All parameter expressions are pertinent to applicability.
Hence, the initial constraint set for applicability inference, C, is:
TypeInference::getLong is compatible with Supplier<R>
"Not a long" is compatible with R
This reduces to bound set B2 of:
R <: Object (from B0)
Long <: R (from the first constraint)
String <: R (from the second constraint)
Since this does not contain the bound 'false', and (I assume) resolution of R succeeds (giving Serializable), then the invocation is applicable.
So, we move on to invocation type inference.
The new constraint set, C, with associated input and output variables, is:
TypeInference::getLong is compatible with Supplier<R>
Input variables: none
Output variables: R
This contains no interdependencies between input and output variables, so can be reduced in a single step, and the final bound set, B4, is the same as B2. Hence, resolution succeeds as before, and the compiler breathes a sigh of relief!
withX
We have:
withX(TypeInference::getLong, "Also not a long");
The initial bound set, B0, is:
R <: Object
F <: Supplier<R>
Only the second parameter expression is pertinent to applicability. The first one (TypeInference::getLong) is not, because it meets the following condition:
If m is a generic method and the method invocation does not provide explicit type arguments, an explicitly typed lambda expression or an exact method reference expression for which the corresponding target type (as derived from the signature of m) is a type parameter of m.
Hence, the initial constraint set for applicability inference, C, is:
"Also not a long" is compatible with R
This reduces to bound set B2 of:
R <: Object (from B0)
F <: Supplier<R> (from B0)
String <: R (from the constraint)
Again, since this does not contain the bound 'false', and resolution of R succeeds (giving String), then the invocation is applicable.
Invocation type inference once more...
This time, the new constraint set, C, with associated input and output variables, is:
TypeInference::getLong is compatible with F
Input variables: F
Output variables: none
Again, we have no interdependencies between input and output variables. However, this time there is an input variable (F), so we must resolve it before attempting reduction. So, we start with our bound set B2.
We determine a subset V as follows:
Given a set of inference variables to resolve, let V be the union of this set and all variables upon which the resolution of at least one variable in this set depends.
By the second bound in B2, the resolution of F depends on R, so V := {F, R}.
We pick a subset of V according to the rule:
let { α1, ..., αn } be a non-empty subset of uninstantiated variables in V such that i) for all i (1 ≤ i ≤ n), if αi depends on the resolution of a variable β, then either β has an instantiation or there is some j such that β = αj; and ii) there exists no non-empty proper subset of { α1, ..., αn } with this property.
The only subset of V that satisfies this property is {R}.
Using the third bound (String <: R) we instantiate R = String and incorporate this into our bound set. R is now resolved, and the second bound effectively becomes F <: Supplier<String>.
Using the (revised) second bound, we instantiate F = Supplier<String>. F is now resolved.
Now that F is resolved, we can proceed with reduction, using the new constraint:
TypeInference::getLong is compatible with Supplier<String>
...reduces to Long is compatible with String
...which reduces to false
... and we get a compiler error!
Additional notes on the 'Extended Example'
The Extended Example in the question looks at a few interesting cases that aren't directly covered by the workings above:
Where the value type is a subtype of the method return type (Integer <: Number)
Where the functional interface is contravariant in the inferred type (i.e. Consumer rather than Supplier)
In particular, 3 of the given invocations stand out as potentially suggesting 'different' compiler behaviour to that described in the explanations:
t.lettBe(t::setNumber, "NaN"); // Does not compile :-)
t.letBeX(t::getNumber, 2); // !!! Does not compile :-(
t.lettBeX(t::setNumber, 2); // Compiles :-)
The second of these 3 will go through exactly the same inference process as withX above (just replace Long with Number and String with Integer). This illustrates yet another reason why you shouldn't rely on this failed type inference behaviour for your class design, as the failure to compile here is likely not a desirable behaviour.
For the other 2 (and indeed any of the other invocations involving a Consumer you wish to work through), the behaviour should be apparent if you work through the type inference procedure laid out for one of the methods above (i.e. with for the first, withX for the third). There's just one small change you need to take note of:
The constraint on the first parameter (t::setNumber is compatible with Consumer<R>) will reduce to R <: Number instead of Number <: R as it does for Supplier<R>. This is described in the linked documentation on reduction.
I leave it as an exercise for the reader to carefully work through one of the above procedures, armed with this piece of additional knowledge, to demonstrate to themselves exactly why a particular invocation does or doesn't compile.

Ambiguous method call when overloading method with generics and lambdas

I've noticed a weird behavior for overloading methods with generics and lambdas. This class works fine:
public <T> void test(T t) { }
public <T> void test(Supplier<T> t) { }

public void test() {
    test("test");
    test(() -> "test");
}
No ambiguous method call. However, changing it to this makes the second call ambiguous:
public <T> void test(Class<T> c, T t) { }
public <T> void test(Class<T> c, Supplier<T> t) { }

public void test() {
    test(String.class, "test");
    test(String.class, () -> "test"); // this line does not compile
}
How can this be? Why would adding another argument cause the method resolution to be ambiguous? Why can it tell the difference between a Supplier and an Object in the first example, but not the second?
Edit: This is using 1.8.0_121. This is the full error message:
error: reference to test is ambiguous
test(String.class, () -> "test");
^
both method <T#1>test(Class<T#1>,T#1) in TestFoo and method <T#2>test(Class<T#2>,Supplier<T#2>) in TestFoo match
where T#1,T#2 are type-variables:
T#1 extends Object declared in method <T#1>test(Class<T#1>,T#1)
T#2 extends Object declared in method <T#2>test(Class<T#2>,Supplier<T#2>)
/workspace/com/test/TestFoo.java:14: error: incompatible types: cannot infer type-variable(s) T
test(String.class, () -> "test");
^
(argument mismatch; String is not a functional interface)
where T is a type-variable:
T extends Object declared in method <T>test(Class<T>,T)
If my understanding of chapters 15 and 18 of the JLS for Java SE 8 is correct, the key to your question lies in the following quote from paragraph 15.12.2:
Certain argument expressions that contain implicitly typed lambda expressions (§15.27.1) or inexact method references (§15.13.1) are ignored by the applicability tests, because their meaning cannot be determined until a target type is selected.
When a Java compiler encounters a method call expression such as test(() -> "test"), it has to search for accessible (visible) and applicable (i.e. with matching signature) methods to which this method call can be dispatched. In your first example, both <T> void test(T) and <T> void test(Supplier<T>) are accessible and applicable w.r.t. the test(() -> "test") method call. In such cases, when there are multiple matching methods, the compiler attempts to determine the most specific one. Now, while this determination for generic methods (as covered in
JLS 15.12.2.5 and JLS 18.5.4) is quite complicated, we can use the intuition from the opening of 15.12.2.5:
The informal intuition is that one method is more specific than another if any invocation handled by the first method could be passed on to the other one without a compile-time error.
Since for any valid call to <T> void test(Supplier<T>) we can find a corresponding instantiation of the type parameter T in <T> void test(T), the former is more specific than the latter.
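For example (a sketch reusing the question's overloads):

Supplier<String> s = () -> "test";
// Fits test(Supplier<T>) with T = String, but also test(T) with T = Supplier<String>,
// so the Supplier overload is the more specific one and is chosen:
test(s);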
Now, the surprising part is that in your second example, both <T> void test(Class<T>, Supplier<T>) and <T> void test(Class<T>, T) are considered applicable for the method call test(String.class, () -> "test"), even though it's clear to us that the latter shouldn't be. The problem is that the compiler acts very conservatively in the presence of implicitly typed lambdas, as quoted above. See in particular JLS 18.5.1:
A set of constraint formulas, C, is constructed as follows.
...
To test for applicability by strict invocation:
If k ≠ n, or if there exists an i (1 ≤ i ≤ n) such that e_i is pertinent to applicability (§15.12.2.2) (...) Otherwise, C includes, for all i (1 ≤ i ≤ k) where e_i is pertinent to applicability, ‹e_i → F_i θ›.
To test for applicability by loose invocation:
If k ≠ n, the method is not applicable and there is no need to proceed with inference.
Otherwise, C includes, for all i (1 ≤ i ≤ k) where e_i is pertinent to applicability, ‹e_i → F_i θ›.
and JLS 15.12.2.2:
An argument expression is considered pertinent to applicability for a potentially applicable method m unless it has one of the following forms:
An implicitly typed lambda expression (§15.27.1).
...
So, the constraints from implicitly typed lambdas passed as arguments take no part in resolving type inference in the context of method applicability checks.
Now, if we assume that both methods are applicable, the problem - and the difference from the previous example - is that neither of these methods is more specific. There exist calls which are valid for <T> void test(Class<T>, Supplier<T>) but not for <T> void test(Class<T>, T), and vice versa.
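To make that concrete (a sketch reusing the question's overloads):

Supplier<String> s = () -> "test";
test(String.class, s);   // only test(Class<T>, Supplier<T>) applies:
                         // with T = String, a Supplier<String> is not a String
test(String.class, "x"); // only test(Class<T>, T) applies: a String is no Supplier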
This also explains why test(String.class, (Supplier<String>) () -> "test") compiles, as mentioned by @Aominè in the comment above. (Supplier<String>) () -> "test" is an explicitly typed lambda, and as such is considered pertinent to applicability; the compiler is able to correctly deduce that only one of these methods is applicable, and no conflict occurs.
This problem has been bugging me for some days now.
I won't go into the JLS details, but will offer a more intuitive explanation.
What if I'm calling:
test(Callable.class, () -> new Callable() {
    @Override
    public Object call() throws Exception {
        return null;
    }
});
Which method can the compiler choose?
public <T> void test(Class<T> c, T t)
with T = Callable, this is effectively:
public void test(Class<Callable> c, Callable t)
Callable is a @FunctionalInterface, so the lambda fits; this seems perfectly valid.
public <T> void test(Class<T> c, Supplier<T> t)
with T = Callable, this is effectively:
public void test(Class<Callable> c, Supplier<Callable> t)
Supplier is a @FunctionalInterface, so this also seems perfectly valid.
When the provided class is itself a functional interface, both calls are valid, and neither is more specific than the other, which leads to the ambiguous-method error.
Then what about the methods with only one parameter? What if I'm calling:
What if i'm calling:
test(() -> new Callable() {
    @Override
    public Object call() throws Exception {
        return null;
    }
});
Which method can the compiler choose?
public <T> void test(T t)
A lambda expression has to be mapped to a @FunctionalInterface, but in this case we aren't providing any explicit type to map it to, so this call can't be valid.
public <T> void test(Supplier<T> t)
Supplier is a @FunctionalInterface, so the lambda expression can be mapped to it; the call is then valid.
Only one of the two methods is applicable.
During compilation all generic types are erased, so Object is used, and an Object can be a FunctionalInterface.
I hope someone more versed in the JLS than I am can validate or correct my explanation; that would be great.

What happens when working with variable-length arguments in Java? [duplicate]

There seems to be a bug in the Java varargs implementation. Java can't distinguish the appropriate type when a method is overloaded with different types of vararg parameters.
It gives me an error The method ... is ambiguous for the type ...
Consider the following code:
public class Test {
    public static void main(String[] args) throws Throwable {
        doit(new int[]{1, 2});        // <- no problem
        doit(new double[]{1.2, 2.2}); // <- no problem
        doit(1.2f, 2.2f);             // <- no problem
        doit(1.2d, 2.2d);             // <- no problem
        doit(1, 2);                   // <- The method doit(double[]) is ambiguous for the type Test
    }

    public static void doit(double... ds) {
        System.out.println("doubles");
    }

    public static void doit(int... is) {
        System.out.println("ints");
    }
}
The docs say: "Generally speaking, you should not overload a varargs method, or it will be difficult for programmers to figure out which overloading gets called."
However, they don't mention this error, and it's not the programmers who are finding it difficult; it's the compiler.
Thoughts?
EDIT - Compiler: Sun jdk 1.6.0 u18
The problem is that it is ambiguous.
doit(1, 2);
could be a call to doit(int...) or to doit(double...). In the latter case, the integer literals will be promoted to double values.
I'm pretty sure that the Java spec says that this is an ambiguous construct, and the compiler is just following the rules laid down by the spec. (I'd have to research this further to be sure.)
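For illustration, a sketch of how to force one overload at the call site (using the question's methods):

doit(new int[]{1, 2});        // explicit array: only doit(int...) matches
doit((double) 1, (double) 2); // widened arguments: only doit(double...) matches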
EDIT - the relevant part of the JLS is "15.12.2.5 Choosing the Most Specific Method", but it is making my head hurt.
I think that the reasoning would be that neither void doit(int[]) nor void doit(double[]) is more specific than the other, because int[] is not a subtype of double[] (nor vice versa). Since the two overloads are equally specific, the call is ambiguous.
By contrast, void doItAgain(int) is more specific than void doItAgain(double) because int is a subtype of double according to the JLS. Hence, a call to doItAgain(42) is not ambiguous.
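A sketch of that non-varargs contrast (doItAgain is a hypothetical pair of methods):

public static void doItAgain(int i)    { System.out.println("int"); }
public static void doItAgain(double d) { System.out.println("double"); }

// doItAgain(42) prints "int": int <: double, so the int overload is more specific.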
EDIT 2 - @finnw is right, it is a bug. Consider this part of §15.12.2.5 (edited to remove non-applicable cases):
One variable arity member method named m is more specific than another variable arity member method of the same name if:
One member method has n parameters and the other has k parameters, where n ≥ k. The types of the parameters of the first member method are T1, ..., Tn-1, Tn[]; the types of the parameters of the other method are U1, ..., Uk-1, Uk[]. Let Si = Ui, 1 ≤ i ≤ k. Then:
for all j from 1 to k-1, Tj <: Sj, and,
for all j from k to n, Tj <: Sk
Apply this to the case where n = k = 1, and we see that doit(int[]) is more specific than doit(double[]).
In fact, there is a bug report for this and Sun acknowledges that it is indeed a bug, though they have prioritized it as "very low". The bug is now marked as Fixed in Java 7 (b123).
There is a discussion about this over at the Sun Forums.
No real resolution there, just resignation.
Varargs (and auto-boxing, which also leads to hard-to-follow behaviour, especially in combination with varargs) were bolted on later in Java's life, and this is one area where it shows. So it is more a bug in the spec than in the compiler.
At least, it makes for good(?) SCJP trick questions.
Interesting. Fortunately, there are a couple different ways to avoid this problem:
You can use the wrapper types instead in the method signatures:
public static void doit(Double... ds) {
    for (Double currD : ds) {
        System.out.println(currD);
    }
}

public static void doit(Integer... is) {
    for (Integer currI : is) {
        System.out.println(currI);
    }
}
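Assuming only these wrapper overloads are present, the ambiguity disappears because boxing is more constrained than primitive widening:

doit(1, 2);     // ints box to Integer but not to Double: doit(Integer...) wins
doit(1.5, 2.5); // doubles box to Double but not to Integer: doit(Double...) wins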
Or, you can use generics:
public static <T> void doit(T... ts) {
    for (T currT : ts) {
        System.out.println(currT);
    }
}
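Assuming the generic method is the only doit, both call shapes compile, though the two cases are no longer distinguished by separate overloads:

doit(1, 2);     // T inferred as Integer
doit(1.2, 2.2); // T inferred as Double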

Inferred wildcard generics in return type

Java can often infer generics based on the arguments (and even on the return type, in contrast to e.g. C#).
Case in point: I've got a generic class Pair<T1, T2> which just stores a pair of values and can be used in the following way:
Pair<String, String> pair = Pair.of("Hello", "World");
The method of looks just like this:
public static <T1, T2> Pair<T1, T2> of(T1 first, T2 second) {
    return new Pair<T1, T2>(first, second);
}
Very nice. However, this no longer works for the following use-case, which requires wildcards:
Pair<Class<?>, String> pair = Pair.of((Class<?>) List.class, "hello");
(Notice the explicit cast to make List.class the correct type.)
The code fails with the following error (provided by Eclipse):
Type mismatch: cannot convert from TestClass.Pair<Class<capture#1-of ?>,String> to TestClass.Pair<Class<?>,String>
However, explicitly calling the constructor still works as expected:
Pair<Class<?>, String> pair =
new Pair<Class<?>, String>((Class<?>) List.class, "hello");
Can someone explain this behaviour? Is it by design? Is it wanted? Am I doing something wrong or did I stumble upon a flaw in the design / bug in the compiler?
Wild guess: the “capture#1-of ?” somehow seems to imply that the wildcard is filled in by the compiler on the fly, making the type a Class<List>, and thus failing the conversion (from Pair<Class<?>, String> to Pair<Class<List>, String>). Is this right? Is there a way to work around this?
For completeness’ sake, here is a simplified version of the Pair class:
public final class Pair<T1, T2> {
    public final T1 first;
    public final T2 second;

    public Pair(T1 first, T2 second) {
        this.first = first;
        this.second = second;
    }

    public static <T1, T2> Pair<T1, T2> of(T1 first, T2 second) {
        return new Pair<T1, T2>(first, second);
    }
}
The reason the constructor works is that you're explicitly specifying the type parameters. The static method also will work if you do that:
Pair<Class<?>, String> pair = Pair.<Class<?>, String>of(List.class, "hello");
Of course, the whole reason you have a static method in the first place is probably just to get the type inference (which doesn't work with constructors at all).
The problem here (as you suggested) is that the compiler is performing capture conversion. I believe this is a result of §15.12.2.6 of the JLS:
The result type of the chosen method is determined as follows:
* If the method being invoked is declared with a return type of void, then the result is void.
* Otherwise, if unchecked conversion was necessary for the method to be applicable, then the result type is the erasure (§4.6) of the method's declared return type.
* Otherwise, if the method being invoked is generic, then for 1 ≤ i ≤ n, let Fi be the formal type parameters of the method, let Ai be the actual type arguments inferred for the method invocation, and let R be the declared return type of the method being invoked. The result type is obtained by applying capture conversion (§5.1.10) to R[F1 := A1, ..., Fn := An].
* Otherwise, the result type is obtained by applying capture conversion (§5.1.10) to the type given in the method declaration.
If you really want the inference, one possible workaround is to do something like this:
Pair<? extends Class<?>, String> pair = Pair.of(List.class, "hello");
The variable pair will have a wider type, and it does mean a bit more typing in the variable's type name, but at least you don't need to cast in the method call anymore.
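If you want to keep the exact Pair<Class<?>, String> type without the cast, another option is a factory that fixes the wildcard up front (a sketch; ofClass is a hypothetical helper, not part of the question's Pair class):

// The first component is fixed to Class<?>, so no type argument is inferred
// for it and no capture conversion gets in the way at the call site.
public static <T2> Pair<Class<?>, T2> ofClass(Class<?> first, T2 second) {
    return new Pair<Class<?>, T2>(first, second);
}

// Usage: List.class (a Class<List>) is a subtype of Class<?>, so no cast is needed.
Pair<Class<?>, String> pair = ofClass(List.class, "hello");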
