I've noticed some weird behavior when overloading methods with generics and lambdas. This class works fine:
public <T> void test(T t) { }

public <T> void test(Supplier<T> t) { }

public void test() {
    test("test");
    test(() -> "test");
}
No ambiguous method call. However, changing it to this makes the second call ambiguous:
public <T> void test(Class<T> c, T t) { }

public <T> void test(Class<T> c, Supplier<T> t) { }

public void test() {
    test(String.class, "test");
    test(String.class, () -> "test"); // this line does not compile
}
How can this be? Why would adding another argument cause the method resolution to be ambiguous? Why can it tell the difference between a Supplier and an Object in the first example, but not the second?
Edit: This is using 1.8.0_121. This is the full error message:
error: reference to test is ambiguous
test(String.class, () -> "test");
^
both method <T#1>test(Class<T#1>,T#1) in TestFoo and method <T#2>test(Class<T#2>,Supplier<T#2>) in TestFoo match
where T#1,T#2 are type-variables:
T#1 extends Object declared in method <T#1>test(Class<T#1>,T#1)
T#2 extends Object declared in method <T#2>test(Class<T#2>,Supplier<T#2>)
/workspace/com/test/TestFoo.java:14: error: incompatible types: cannot infer type-variable(s) T
test(String.class, () -> "test");
^
(argument mismatch; String is not a functional interface)
where T is a type-variable:
T extends Object declared in method <T>test(Class<T>,T)
If my understanding of chapters 15 and 18 of the JLS for Java SE 8 is correct, the key to your question lies in the following quote from paragraph 15.12.2:
Certain argument expressions that contain implicitly typed lambda expressions (§15.27.1) or inexact method references (§15.13.1) are ignored by the applicability tests, because their meaning cannot be determined until a target type is selected.
When a Java compiler encounters a method call expression such as test(() -> "test"), it has to search for accessible (visible) and applicable (i.e. with matching signature) methods to which this method call can be dispatched. In your first example, both <T> void test(T) and <T> void test(Supplier<T>) are accessible and applicable w.r.t. the test(() -> "test") method call. In such cases, when there are multiple matching methods, the compiler attempts to determine the most specific one. Now, while this determination for generic methods (as covered in
JLS 15.12.2.5 and JLS 18.5.4) is quite complicated, we can use the intuition from the opening of 15.12.2.5:
The informal intuition is that one method is more specific than another if any invocation handled by the first method could be passed on to the other one without a compile-time error.
Since for any valid call to <T> void test(Supplier<T>) we can find a corresponding instantiation of the type parameter T in <T> void test(T), the former is more specific than the latter.
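For instance, a minimal sketch (reusing the two methods from the first example): every invocation the Supplier overload handles could also be handled by the generic one, but not the other way around.

Supplier<String> s = () -> "test";
test(s);          // matches test(Supplier<T>), and would also fit test(T) with T = Supplier<String>
test((Object) s); // only test(T) applies here, with T = Object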
Now, the surprising part is that in your second example, both <T> void test(Class<T>, Supplier<T>) and <T> void test(Class<T>, T) are considered applicable for the method call test(String.class, () -> "test"), even though it's clear to us that the latter shouldn't be. The problem is that the compiler acts very conservatively in the presence of implicitly typed lambdas, as quoted above. See in particular JLS 18.5.1:
A set of constraint formulas, C, is constructed as follows.
...
To test for applicability by strict invocation:
If k ≠ n, or if there exists an i (1 ≤ i ≤ n) such that e_i is pertinent to applicability (§15.12.2.2) (...) Otherwise, C includes, for all i (1 ≤ i ≤ k) where e_i is pertinent to applicability, ‹e_i → F_i θ›.
To test for applicability by loose invocation:
If k ≠ n, the method is not applicable and there is no need to proceed with inference.
Otherwise, C includes, for all i (1 ≤ i ≤ k) where e_i is pertinent to applicability, ‹e_i → F_i θ›.
and JLS 15.12.2.2:
An argument expression is considered pertinent to applicability for a potentially applicable method m unless it has one of the following forms:
An implicitly typed lambda expression (§15.27.1).
...
So, constraints from implicitly typed lambdas passed as arguments play no part in the type inference performed for method applicability checks.
Now, if we assume that both methods are applicable, the problem - and the difference between this and the previous example - is that neither of these methods is more specific than the other. There exist calls which are valid for <T> void test(Class<T>, Supplier<T>) but not for <T> void test(Class<T>, T), and vice versa, as the sketch below shows.
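A hypothetical pair of such calls, using the overloads from the question:

test(String.class, "value");                          // only <T> test(Class<T>, T) applies
test(String.class, (Supplier<String>) () -> "value"); // only <T> test(Class<T>, Supplier<T>) applies,
                                                      // since T = String is forced by the Class argument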
This also explains why test(String.class, (Supplier<String>) () -> "test"); compiles, as mentioned by @Aominè in the comment above. (Supplier<String>) () -> "test" is an explicitly typed lambda, and as such is considered pertinent to applicability; the compiler is able to correctly deduce that only one of these methods is applicable, and no conflict occurs.
This problem has been bugging me for some days now.
I won't go into the JLS details; here is a more intuitive, logical explanation.
What if I'm calling:
test(Callable.class, () -> new Callable() {
    @Override
    public Object call() throws Exception {
        return null;
    }
});
Which method can the compiler choose?
public <T> void test(Class<T> c, T t)
which, after substituting T = Callable, is the same as:
public <Callable> void test(Class<Callable> c, Callable t)
Callable is a @FunctionalInterface, so it seems perfectly valid.
public <T> void test(Class<T> c, Supplier<T> t)
which, after substituting T = Callable, is the same as:
public <Callable> void test(Class<Callable> c, Supplier<Callable> t)
Supplier is a @FunctionalInterface, so it also seems perfectly valid.
When the provided class is itself a functional interface, both calls are valid, and neither call is more specific than the other, which leads to the ambiguous method error.
Then what about the methods with only one parameter?
What if I'm calling:
test(() -> new Callable() {
    @Override
    public Object call() throws Exception {
        return null;
    }
})
Which method can the compiler choose?
public <T> void test(T t)
A lambda expression has to be mapped to a @FunctionalInterface, but in this case we aren't providing any explicit type to map it to, so this call can't be valid.
public <T> void test(Supplier<T> t)
Supplier is a @FunctionalInterface, so the lambda expression can be mapped to a @FunctionalInterface (Supplier); the call is then valid.
Only one of the two methods is applicable.
During compilation all generic types are erased, so Object is used and an Object can be a FunctionalInterface
I hope someone more fluent in the JLS than me can validate or correct my explanation; that would be great.
The following class will not compile:
public class Thing {
    public static <T> T foo(java.util.function.Supplier<T> supplier) {
        return supplier.get();
    }

    public static <T> T bar(java.util.function.Function<Integer, T> function) {
        return function.apply(42);
    }

    public static void main(String... args) {
        System.out.println(foo(() -> "hello")); // 1
        System.out.println(bar(any -> "hello")); // 2 !!!
        System.out.println(bar((Integer any) -> "hello")); // 3
        System.out.println(Thing.<String>bar(any -> "hello")); // 4
        println(bar(any -> "hello")); // 5
    }

    private static void println(String string) {
        System.out.println(string);
    }
}
The issue is on line [2] in the main() method (all other lines are fine):
[ERROR] .../Thing.java:[13,19] reference to println is ambiguous
both method println(char[]) in java.io.PrintStream and method println(java.lang.String) in java.io.PrintStream match
[ERROR] .../Thing.java:[13,31] incompatible types: inference variable T has incompatible bounds
lower bounds: char[],java.lang.Object
lower bounds: java.lang.String
I don't understand why the compiler thinks there is an ambiguity on the return type of bar() (char[] or String) and can't decide which flavor of the println() method should be used. On line [1] it can infer that the return type of foo() is String, and on line [3] I don't understand how specifying the type of any (Integer) helps, because it could not be anything else given the signature of the bar() method.
This is because the implicitly typed lambda any -> "hello" is not pertinent to applicability, and so it is practically ignored when determining what type the call bar(any -> "hello") should be (See Invocation Type Inference sections of the JLS), in order to select the correct println call.
The justification can be found from the first link:
The meaning of an implicitly typed lambda expression or an inexact method reference expression is sufficiently vague prior to resolving a target type that arguments containing these expressions are not considered pertinent to applicability; they are simply ignored (except for their expected arity) until overload resolution is finished.
So other than the fact that we are passing it to System.out.println, there are practically no bounds on the return type of bar at all - it could be any reference type. The compiler tries to infer which overload we are trying to call using the rules in that second link above, but println(String) and println(char[]) both seem like valid overloads!
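One hedged workaround (my own sketch, keeping the method signatures as they are) is to give the call a target type before it ever reaches the overloaded println:

String s = bar(any -> "hello"); // assignment context pins T to String
System.out.println(s);          // now unambiguously println(String)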
The reasons why the other lines work are:
System.out.println(foo(() -> "hello"));
System.out.println(bar((Integer any) -> "hello"));
these lambdas are all pertinent to applicability, because they are explicitly typed lambdas (though this rule has some exceptions). Note that a lambda with no parameters counts as an explicitly typed lambda.
System.out.println(Thing.<String>bar(any -> "hello"));
Since you have provided the type parameter of bar, the compiler does not need to determine it using type inference. It can now be trivially determined that bar returns String, so you must be calling println(String).
println(bar(any -> "hello"));
There is only one overload of println to pick, so even though we can't know that bar returns String from the lambda alone, the compiler can still infer it by seeing that the result of bar is being passed to println(String).
I recommend that you have a look at the list of expressions that are not pertinent to applicability and play around with them :)
Why is
public <R, F extends Function<T, R>> Builder<T> withX(F getter, R returnValue) {...}
stricter than
public <R> Builder<T> with(Function<T, R> getter, R returnValue) {...}
This is a follow up on Why is lambda return type not checked at compile time.
I found that using the method withX() like
.withX(MyInterface::getLength, "I am not a Long")
produces the wanted compile time error:
The type of getLength() from the type BuilderExample.MyInterface is long, this is incompatible with the descriptor's return type: String
while using the method with() does not.
full example:
import java.util.function.Function;
public class SO58376589 {

    public static class Builder<T> {
        public <R, F extends Function<T, R>> Builder<T> withX(F getter, R returnValue) {
            return this;
        }

        public <R> Builder<T> with(Function<T, R> getter, R returnValue) {
            return this;
        }
    }

    static interface MyInterface {
        public Long getLength();
    }

    public static void main(String[] args) {
        Builder<MyInterface> b = new Builder<MyInterface>();
        Function<MyInterface, Long> getter = MyInterface::getLength;

        b.with(getter, 2L);
        b.with(MyInterface::getLength, 2L);
        b.withX(getter, 2L);
        b.withX(MyInterface::getLength, 2L);

        b.with(getter, "No NUMBER");                  // error
        b.with(MyInterface::getLength, "No NUMBER");  // NO ERROR !!
        b.withX(getter, "No NUMBER");                 // error
        b.withX(MyInterface::getLength, "No NUMBER"); // error !!!
    }
}
javac SO58376589.java
SO58376589.java:32: error: method with in class Builder<T> cannot be applied to given types;
b.with(getter, "No NUMBER"); // error
^
required: Function<MyInterface,R>,R
found: Function<MyInterface,Long>,String
reason: inference variable R has incompatible bounds
equality constraints: Long
lower bounds: String
where R,T are type-variables:
R extends Object declared in method <R>with(Function<T,R>,R)
T extends Object declared in class Builder
SO58376589.java:34: error: method withX in class Builder<T> cannot be applied to given types;
b.withX(getter, "No NUMBER"); // error
^
required: F,R
found: Function<MyInterface,Long>,String
reason: inference variable R has incompatible bounds
equality constraints: Long
lower bounds: String
where F,R,T are type-variables:
F extends Function<MyInterface,R> declared in method <R,F>withX(F,R)
R extends Object declared in method <R,F>withX(F,R)
T extends Object declared in class Builder
SO58376589.java:35: error: incompatible types: cannot infer type-variable(s) R,F
b.withX(MyInterface::getLength, "No NUMBER"); // error
^
(argument mismatch; bad return type in method reference
Long cannot be converted to String)
where R,F,T are type-variables:
R extends Object declared in method <R,F>withX(F,R)
F extends Function<T,R> declared in method <R,F>withX(F,R)
T extends Object declared in class Builder
3 errors
Extended Example
The following example boils the different behaviour of the method parameter and the type parameter down to a Supplier. In addition, it shows the contrasting Consumer behaviour for a type parameter, and that it makes no difference whether a method parameter is a Consumer or a Supplier.
import java.util.function.Consumer;
import java.util.function.Supplier;

interface TypeInference {
    Number getNumber();
    void setNumber(Number n);

    @FunctionalInterface
    interface Method<R> {
        TypeInference be(R r);
    }

    // Supplier:
    <R> R letBe(Supplier<R> supplier, R value);
    <R, F extends Supplier<R>> R letBeX(F supplier, R value);
    <R> Method<R> let(Supplier<R> supplier); // return (x) -> this;

    // Consumer:
    <R> R lettBe(Consumer<R> supplier, R value);
    <R, F extends Consumer<R>> R lettBeX(F supplier, R value);
    <R> Method<R> lett(Consumer<R> consumer);

    public static void main(TypeInference t) {
        t.letBe(t::getNumber, (Number) 2);   // Compiles :-)
        t.lettBe(t::setNumber, (Number) 2);  // Compiles :-)
        t.letBe(t::getNumber, 2);            // Compiles :-)
        t.lettBe(t::setNumber, 2);           // Compiles :-)
        t.letBe(t::getNumber, "NaN");        // !!!! Compiles :-(
        t.lettBe(t::setNumber, "NaN");       // Does not compile :-)

        t.letBeX(t::getNumber, (Number) 2);  // Compiles :-)
        t.lettBeX(t::setNumber, (Number) 2); // Compiles :-)
        t.letBeX(t::getNumber, 2);           // !!! Does not compile :-(
        t.lettBeX(t::setNumber, 2);          // Compiles :-)
        t.letBeX(t::getNumber, "NaN");       // Does not compile :-)
        t.lettBeX(t::setNumber, "NaN");      // Does not compile :-)

        t.let(t::getNumber).be(2);           // Compiles :-)
        t.lett(t::setNumber).be(2);          // Compiles :-)
        t.let(t::getNumber).be("NaN");       // Does not compile :-)
        t.lett(t::setNumber).be("NaN");      // Does not compile :-)
    }
}
This is a really interesting question. The answer, I'm afraid, is complicated.
tl;dr
Working out the difference involves some quite in-depth reading of Java's type inference specification, but basically boils down to this:
All other things equal, the compiler infers the most specific type it can.
However, if it can find a substitution for a type parameter that satisfies all the requirements, then compilation will succeed, however vague the substitution turns out to be.
For with, there is an (admittedly vague) substitution that satisfies all the requirements on R: Serializable
For withX, the introduction of the additional type parameter F forces the compiler to resolve R first, without considering the constraint F extends Function<T,R>. R resolves to the (much more specific) String which then means that inference of F fails.
This last bullet point is the most important, but also the most hand-wavy. I can't think of a better concise way of phrasing it, so if you want more details, I suggest you read the full explanation below.
Is this intended behaviour?
I'm gonna go out on a limb here, and say no.
I'm not suggesting there's a bug in the spec, more that (in the case of withX) the language designers have put their hands up and said "there are some situations where type inference gets too hard, so we'll just fail". Even though the compiler's behaviour with respect to withX seems to be what you want, I would consider that to be an incidental side-effect of the current spec, rather than a positively intended design decision.
This matters, because it informs the question Should I rely on this behaviour in my application design? I would argue that you shouldn't, because you can't guarantee that future versions of the language will continue to behave this way.
While it's true that language designers try very hard not to break existing applications when they update the spec/design/compiler, the problem is that the behaviour you want to rely on is one where the compiler currently fails (i.e. not an existing application). Language updates turn non-compiling code into compiling code all the time. For example, the following code could be guaranteed not to compile in Java 7, but would compile in Java 8:
static Runnable x = () -> System.out.println();
Your use-case is no different.
Another reason I'd be cautious about using your withX method is the F parameter itself. Generally, a generic type parameter on a method (that doesn't appear in the return type) exists to bind the types of multiple parts of the signature together. It's saying:
I don't care what T is, but want to be sure that wherever I use T it's the same type.
Logically, then, we would expect each type parameter to appear at least twice in a method signature, otherwise "it's not doing anything". F in your withX only appears once in the signature, which suggests to me a use of a type parameter not in line with the intent of this feature of the language.
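For comparison, a minimal illustration (my own example, not from the question) of the usual pattern, where the type parameter genuinely ties parts of the signature together:

// T links both parameter types and the return type.
static <T> T firstNonNull(T preferred, T fallback) {
    return preferred != null ? preferred : fallback;
}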
An alternative implementation
One way to implement this in a slightly more "intended behaviour" way would be to split your with method up into a chain of 2:
public class Builder<T> {

    public final class With<R> {
        private final Function<T,R> method;

        private With(Function<T,R> method) {
            this.method = method;
        }

        public Builder<T> of(R value) {
            // TODO: Body of your old 'with' method goes here
            return Builder.this;
        }
    }

    public <R> With<R> with(Function<T,R> method) {
        return new With<>(method);
    }
}
This can then be used as follows:
b.with(MyInterface::getLong).of(1L); // Compiles
b.with(MyInterface::getLong).of("Not a long"); // Compiler error
This doesn't include an extraneous type parameter like your withX does. By breaking down the method into two signatures, it also better expresses the intent of what you're trying to do, from a type-safety point of view:
The first method sets up a class (With) that defines the type based on the method reference.
The second method (of) constrains the type of the value to be compatible with what you previously set up.
The only way a future version of the language would be able to compile this is if they implemented full duck typing, which seems unlikely.
One final note to make this whole thing irrelevant: I think Mockito (and in particular its stubbing functionality) might basically already do what you're trying to achieve with your "type safe generic builder". Maybe you could just use that instead?
The full(ish) explanation
I'm going to work through the type inference procedure for both with and withX. This is quite long, so take it slowly. Despite being long, I've still left quite a lot of details out. You may wish to refer to the spec for more details (follow the links) to convince yourself that I'm right (I may well have made a mistake).
Also, to simplify things a little, I'm going to use a more minimal code sample. The main difference is that it swaps out Function for Supplier, so there are fewer types and parameters in play. Here's a full snippet that reproduces the behaviour you described:
import java.util.function.Supplier;

public class TypeInference {
    static long getLong() { return 1L; }

    static <R> void with(Supplier<R> supplier, R value) {}

    static <R, F extends Supplier<R>> void withX(F supplier, R value) {}

    public static void main(String[] args) {
        with(TypeInference::getLong, "Not a long");       // Compiles
        withX(TypeInference::getLong, "Also not a long"); // Does not compile
    }
}
Let's work through the type applicability inference and type inference procedure for each method invocation in turn:
with
We have:
with(TypeInference::getLong, "Not a long");
The initial bound set, B0, is:
R <: Object
All parameter expressions are pertinent to applicability.
Hence, the initial constraint set for applicability inference, C, is:
TypeInference::getLong is compatible with Supplier<R>
"Not a long" is compatible with R
This reduces to bound set B2 of:
R <: Object (from B0)
Long <: R (from the first constraint)
String <: R (from the second constraint)
Since this does not contain the bound 'false', and (I assume) resolution of R succeeds (giving Serializable), then the invocation is applicable.
So, we move on to invocation type inference.
The new constraint set, C, with associated input and output variables, is:
TypeInference::getLong is compatible with Supplier<R>
Input variables: none
Output variables: R
This contains no interdependencies between input and output variables, so can be reduced in a single step, and the final bound set, B4, is the same as B2. Hence, resolution succeeds as before, and the compiler breathes a sigh of relief!
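As a quick sanity check on that resolution (an illustrative sketch of the bound set, not compiler output), Serializable does satisfy both lower bounds from B2:

java.io.Serializable fromMethodRef = 1L;       // Long <: Serializable
java.io.Serializable fromValue = "Not a long"; // String <: Serializable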
withX
We have:
withX(TypeInference::getLong, "Also not a long");
The initial bound set, B0, is:
R <: Object
F <: Supplier<R>
Only the second parameter expression is pertinent to applicability. The first one (TypeInference::getLong) is not, because it meets the following condition:
If m is a generic method and the method invocation does not provide explicit type arguments, an explicitly typed lambda expression or an exact method reference expression for which the corresponding target type (as derived from the signature of m) is a type parameter of m.
Hence, the initial constraint set for applicability inference, C, is:
"Also not a long" is compatible with R
This reduces to bound set B2 of:
R <: Object (from B0)
F <: Supplier<R> (from B0)
String <: R (from the constraint)
Again, since this does not contain the bound 'false', and resolution of R succeeds (giving String), then the invocation is applicable.
Invocation type inference once more...
This time, the new constraint set, C, with associated input and output variables, is:
TypeInference::getLong is compatible with F
Input variables: F
Output variables: none
Again, we have no interdependencies between input and output variables. However this time, there is an input variable (F), so we must resolve this before attempting reduction. So, we start with our bound set B2.
We determine a subset V as follows:
Given a set of inference variables to resolve, let V be the union of this set and all variables upon which the resolution of at least one variable in this set depends.
By the second bound in B2, the resolution of F depends on R, so V := {F, R}.
We pick a subset of V according to the rule:
let { α1, ..., αn } be a non-empty subset of uninstantiated variables in V such that i) for all i (1 ≤ i ≤ n), if αi depends on the resolution of a variable β, then either β has an instantiation or there is some j such that β = αj; and ii) there exists no non-empty proper subset of { α1, ..., αn } with this property.
The only subset of V that satisfies this property is {R}.
Using the third bound (String <: R) we instantiate R = String and incorporate this into our bound set. R is now resolved, and the second bound effectively becomes F <: Supplier<String>.
Using the (revised) second bound, we instantiate F = Supplier<String>. F is now resolved.
Now that F is resolved, we can proceed with reduction, using the new constraint:
TypeInference::getLong is compatible with Supplier<String>
...reduces to Long is compatible with String
...which reduces to false
... and we get a compiler error!
Additional notes on the 'Extended Example'
The Extended Example in the question looks at a few interesting cases that aren't directly covered by the workings above:
Where the value type is a subtype of the method return type (Integer <: Number)
Where the functional interface is contravariant in the inferred type (i.e. Consumer rather than Supplier)
In particular, 3 of the given invocations stand out as potentially suggesting 'different' compiler behaviour to that described in the explanations:
t.lettBe(t::setNumber, "NaN"); // Does not compile :-)
t.letBeX(t::getNumber, 2); // !!! Does not compile :-(
t.lettBeX(t::setNumber, 2); // Compiles :-)
The second of these 3 will go through exactly the same inference process as withX above (just replace Long with Number and String with Integer). This illustrates yet another reason why you shouldn't rely on this failed type inference behaviour for your class design, as the failure to compile here is likely not a desirable behaviour.
For the other 2 (and indeed any of the other invocations involving a Consumer you wish to work through), the behaviour should be apparent if you work through the type inference procedure laid out for one of the methods above (i.e. with for the first, withX for the third). There's just one small change you need to take note of:
The constraint on the first parameter (t::setNumber is compatible with Consumer<R>) will reduce to R <: Number instead of Number <: R as it does for Supplier<R>. This is described in the linked documentation on reduction.
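A small sketch of that variance difference, using plain assignments (my own illustration, not from the question):

// A Supplier produces values, so t::getNumber yields the constraint Number <: R.
// A Consumer accepts values, so t::setNumber yields the constraint R <: Number.
Supplier<Number> produces = () -> 2;
Consumer<Number> accepts = n -> {};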
I leave it as an exercise for the reader to carefully work through one of the above procedures, armed with this piece of additional knowledge, to demonstrate to themselves exactly why a particular invocation does or doesn't compile.
As usual, I was looking through the JDK 8 sources and found some very interesting code:
@Override
default void forEachRemaining(Consumer<? super Integer> action) {
    if (action instanceof IntConsumer) {
        forEachRemaining((IntConsumer) action);
    }
}
The question is: how could a Consumer<? super Integer> be an instance of IntConsumer? They are in different hierarchies.
I made a similar code snippet to test the casting:
public class InterfaceExample {
    public static void main(String[] args) {
        IntConsumer intConsumer = i -> { };
        Consumer<Integer> a = (Consumer<Integer>) intConsumer;
        a.accept(123);
    }
}
But it throws ClassCastException:
Exception in thread "main"
java.lang.ClassCastException:
com.example.InterfaceExample$$Lambda$1/764977973
cannot be cast to
java.util.function.Consumer
You can find this code at java.util.Spliterator.OfInt#forEachRemaining(java.util.function.Consumer)
Let's look at the code below to see why:
class IntegerConsumer implements Consumer<Integer>, IntConsumer {
...
}
Any class can implement multiple interfaces, so a class may implement both Consumer<Integer> and IntConsumer. This happens when we want to adapt an IntConsumer to a Consumer<Integer> while preserving its original type (IntConsumer); the code then looks like this:
class IntConsumerAdapter implements Consumer<Integer>, IntConsumer {
    @Override
    public void accept(Integer value) {
        accept(value.intValue());
    }

    @Override
    public void accept(int value) {
        // todo
    }
}
Note: this is a use of the Class Adapter design pattern.
Then you can use IntConsumerAdapter both as a Consumer<Integer> and as an IntConsumer, for example:
Consumer<? extends Integer> consumer1 = new IntConsumerAdapter();
IntConsumer consumer2 = new IntConsumerAdapter();
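A hypothetical usage sketch: because the adapter really is an IntConsumer, the instanceof test from the question succeeds and the primitive overload can be invoked directly:

Consumer<? super Integer> action = new IntConsumerAdapter();
if (action instanceof IntConsumer) {
    ((IntConsumer) action).accept(42); // dispatches to accept(int), no boxing
}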
Sink.OfInt is a concrete usage of the Class Adapter design pattern in JDK 8. The downside of Sink.OfInt#accept(Integer) is clearly that the JVM will throw a NullPointerException when it accepts a null value; that is why Sink is package-visible.
interface OfInt extends Sink<Integer>, IntConsumer {
    @Override
    void accept(int value);

    @Override
    default void accept(Integer i) {
        if (Tripwire.ENABLED)
            Tripwire.trip(getClass(), "{0} calling Sink.OfInt.accept(Integer)");
        accept(i.intValue());
    }
}
I also found out why we need to cast a Consumer<Integer> to an IntConsumer when a consumer such as IntConsumerAdapter is passed.
One reason is that when we use a Consumer to accept an int, the compiler needs to auto-box it to an Integer, and then in the method accept(Integer) the Integer has to be unboxed to an int again.
In other words, each accept(Integer) call performs two additional boxing/unboxing operations. To improve performance, the library does this special check in its algorithms.
Another reason is code reuse. The body of OfInt#forEachRemaining(Consumer) is a good example of applying the Adapter design pattern to reuse OfInt#forEachRemaining(IntConsumer):
default void forEachRemaining(Consumer<? super Integer> action) {
    if (action instanceof IntConsumer) {
        // action's implementation is an example of the Class Adapter design pattern
        forEachRemaining((IntConsumer) action);
    }
    else {
        // the method reference expression is an example of the Object Adapter design pattern
        forEachRemaining((IntConsumer) action::accept);
    }
}
Because the implementing class might implement both interfaces.
It is legal to cast any type to any interface type, as long as the object being passed might implement the destination interface. This is known at compile-time to be false when the source type is a final class that does not implement the interface, or when it can be proven to have different type parameterization that results in the same erasure. At run-time, if the object does not implement the interface, you'll get a ClassCastException. Checking with instanceof before attempting to cast allows you to avoid the exception.
From the Java Language Specification, §5.5.1 Reference Type Casting:
Given a compile-time reference type S (source) and a compile-time reference type T (target), a casting conversion exists from S to T if no compile-time errors occur due to the following rules.
...
• If T is an interface type:
– If S is not a final class (§8.1.1), then, if there exists a supertype X of T, and a supertype Y of S, such that both X and Y are provably distinct parameterized types, and that the erasures of X and Y are the same, a compile-time error occurs.
Otherwise, the cast is always legal at compile time (because even if S does not implement T, a subclass of S might).
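A minimal sketch of both halves of that rule (my own example):

Object o = "hello";
Runnable r = (Runnable) o;     // compiles: a subtype of Object could implement Runnable;
                               // throws ClassCastException at run time here
String s = "hello";
// Runnable r2 = (Runnable) s; // compile-time error: String is final and does not implement Runnable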
Note that you could have found this behavior without looking into the source code, just by looking at the official API documentation, you have linked yourself:
Implementation Requirements:
If the action is an instance of IntConsumer then it is cast to IntConsumer and passed to forEachRemaining(java.util.function.IntConsumer); otherwise the action is adapted to an instance of IntConsumer, by boxing the argument of IntConsumer, and then passed to forEachRemaining(java.util.function.IntConsumer).
So in either case, forEachRemaining(IntConsumer) will be called, which is the actual implementation method. But when possible, the creation of a boxing adapter will be omitted. The reason is that a Spliterator.OfInt is also a Spliterator<Integer>, which only offers the forEachRemaining(Consumer<Integer>) method. The special behavior allows treating generic Spliterator instances and their primitive (Spliterator.OfPrimitive) counterparts equally, with an automatic selection of the most efficient method.
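A hedged example of both paths (my own demo class; note each spliterator can be traversed only once, hence two of them):

import java.util.Spliterator;
import java.util.function.Consumer;
import java.util.function.IntConsumer;
import java.util.stream.IntStream;

public class SpliteratorPaths {
    public static void main(String[] args) {
        Spliterator.OfInt s1 = IntStream.range(0, 3).spliterator();
        s1.forEachRemaining((IntConsumer) System.out::println);       // primitive path, no boxing

        Spliterator.OfInt s2 = IntStream.range(0, 3).spliterator();
        s2.forEachRemaining((Consumer<Integer>) System.out::println); // boxing adapter path
    }
}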
As said by others, you can implement multiple interfaces with an ordinary class. Also, you can implement multiple interfaces with a lambda expression, if you create a helper type, e.g.
interface UnboxingConsumer extends IntConsumer, Consumer<Integer> {
    public default void accept(Integer t) {
        System.out.println("unboxing " + t);
        accept(t.intValue());
    }
}

public static void printAll(BaseStream<Integer, ?> stream) {
    stream.spliterator().forEachRemaining((UnboxingConsumer) System.out::println);
}

public static void main(String[] args) {
    System.out.println("Stream.of(1, 2, 3):");
    printAll(Stream.of(1, 2, 3));
    System.out.println("IntStream.range(0, 3)");
    printAll(IntStream.range(0, 3));
}
Stream.of(1, 2, 3):
unboxing 1
1
unboxing 2
2
unboxing 3
3
IntStream.range(0, 3)
0
1
2
How to overload a Function with generic parameter in Java 8?
public class Test<T> {
    List<T> list = new ArrayList<>();

    public int sum(Function<T, Integer> function) {
        return list.stream().map(function).reduce(Integer::sum).get();
    }

    public double sum(Function<T, Double> function) {
        return list.stream().map(function).reduce(Double::sum).get();
    }
}
Error: java: name clash:
sum(java.util.function.Function<T,java.lang.Double>) and
sum(java.util.function.Function<T,java.lang.Integer>) have the same erasure
Benji Weber once wrote of a way to circumvent this. What you need to do is to define custom functional interfaces that extend the types for your parameters:
public class Test<T> {
    List<T> list = new ArrayList<>();

    // Nested interfaces are implicitly static, so they must declare
    // their own type parameter rather than reusing the outer T.
    @FunctionalInterface
    public interface ToIntFunction<T> extends Function<T, Integer>{}

    public int sum(ToIntFunction<T> function) {
        return list.stream().map(function).reduce(Integer::sum).get();
    }

    @FunctionalInterface
    public interface ToDoubleFunction<T> extends Function<T, Double>{}

    public double sum(ToDoubleFunction<T> function) {
        return list.stream().map(function).reduce(Double::sum).get();
    }
}
Another way is to use java.util.function.ToIntFunction and java.util.function.ToDoubleFunction instead:
public class Test<T> {
    List<T> list = new ArrayList<>();

    public int sum(ToIntFunction<T> function) {
        return list.stream().mapToInt(function).sum();
    }

    public double sum(ToDoubleFunction<T> function) {
        return list.stream().mapToDouble(function).sum();
    }
}
The example you present in your question has nothing to do with Java 8 and everything to do with how generics work in Java. Function<T, Integer> function and Function<T, Double> function will go through type erasure when compiled and will both be transformed to Function. The rule of thumb for method overloading is to have a different number, type, or sequence of parameters. Since both your methods erase to take a Function argument, the compiler complains about it.
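You can observe this erasure directly with a little reflection (my own demo, not from the question):

import java.lang.reflect.Method;
import java.util.function.Function;

public class ErasureDemo {
    static <T> void take(Function<T, Integer> f) { }

    public static void main(String[] args) throws Exception {
        Method m = ErasureDemo.class.getDeclaredMethod("take", Function.class);
        // Prints the erased signature: the Integer type argument is gone,
        // which is why two overloads differing only in it would clash.
        System.out.println(m);
    }
}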
That being said, srborlongan has already provided one way to resolve the issue. The problem with that solution is that you have to keep modifying your Test class for each and every type of operation (addition, subtraction, etc.) on different types (Integer, Double, etc.). An alternative solution is to use method overriding instead of method overloading:
Change the Test class a bit as follows:
public abstract class Test<I, O extends Number> {
    List<I> list = new ArrayList<>();

    public O performOperation(Function<I, O> function) {
        return list.stream().map(function).reduce((a, b) -> operation(a, b)).get();
    }

    public void add(I i) {
        list.add(i);
    }

    public abstract O operation(O a, O b);
}
Create a subclass of Test that will add two Integers.
public class MapStringToIntAdditionOperation extends Test<String, Integer> {
    @Override
    public Integer operation(Integer a, Integer b) {
        return a + b;
    }
}
Client code can then use the above as follows:
public static void main(String[] args) {
    Test<String, Integer> test = new MapStringToIntAdditionOperation();
    test.add("1");
    test.add("2");
    System.out.println(test.performOperation(Integer::parseInt));
}
The advantage of using this approach is that your Test class is in line with the open-closed principle. To add a new operation such as multiplication, all you have to do is add a new subclass of Test and override the operation method to multiply two numbers. Combine this with the Decorator pattern and you can even minimize the number of subclasses that you have to create.
Note: The example in this answer is indicative. There are a lot of areas for improvement (such as making Test a functional interface instead of an abstract class) which are beyond the scope of the question.
@srborlongan's solution won't work very well :)
See a similar example - the Comparator methods comparingDouble(ToDoubleFunction), comparingInt(ToIntFunction), etc. The methods have different names because overloading is not a good idea here.
The reason is that when you write sum(t -> {...}), the compiler is unable to infer which method to call; it needs to resolve the method overload first, to pick one method, before it can infer the type of the implicit lambda expression (based on that method's signature).
This is disappointing. At an earlier stage, Java 8 had a more sophisticated inference engine, and Comparator had overloaded comparing() methods; sum(t -> {...}) would have been inferred correctly too. Unfortunately, they decided to simplify it :( And here we are now.
Rule of thumb for overloading methods with functional arguments: the arities of the functional interfaces must be different, unless both are 0.
// OK, different arity
m1( X->Y )
m1( (X1, X2)->Y )
// not OK, both are arity 1
m2( X->Y )
m2( A->B )
m2( t->{...} ); // fail; type of `t` cannot be inferred
// OK! both are arity 0
m3( ()->Y )
m3( ()->B )
The reason why overloading with arity 0 is OK is that such lambda expressions won't be implicitly typed - all argument types are known (because there are no arguments!), so we don't need contextual information to infer the lambda type:
m3( () -> new Y() ); // lambda type is ()->Y
m3( () -> new B() ); // lambda type is ()->B
I'm having quite a tricky case here with generics and method overloading. Check out this example class:
public class Test {

    public <T> void setValue(Parameter<T> parameter, T value) {
    }

    public <T> void setValue(Parameter<T> parameter, Field<T> value) {
    }

    public void test() {
        // This works perfectly. <T> is bound to String.
        // Ambiguity between setValue(.., String) and setValue(.., Field)
        // is impossible, as String and Field are incompatible.
        Parameter<String> p1 = getP1();
        Field<String> f1 = getF1();
        setValue(p1, f1);

        // This causes issues. <T> is bound to Object.
        // Ambiguity between setValue(.., Object) and setValue(.., Field)
        // is possible, as Object and Field are compatible.
        Parameter<Object> p2 = getP2();
        Field<Object> f2 = getF2();
        setValue(p2, f2);
    }

    private Parameter<String> getP1() {...}
    private Parameter<Object> getP2() {...}
    private Field<String> getF1() {...}
    private Field<Object> getF2() {...}
}
The above example compiles perfectly in Eclipse (Java 1.6), but not with the Ant javac command (or with the JDK's javac directly), where I get this sort of error message on the second invocation of setValue:
reference to setValue is ambiguous,
both method
setValue(org.jooq.Parameter,T)
in Test and method
setValue(org.jooq.Parameter,org.jooq.Field)
in Test match
According to the specification and to my understanding of how the Java compiler works, the most specific method should always be chosen: http://java.sun.com/docs/books/jls/third_edition/html/expressions.html#20448
In any case, even if <T> is bound to Object, which makes both setValue methods acceptable candidates for invocation, the one with the Field parameter always seems to be more specific. And it works in Eclipse, just not with the JDK's compiler.
UPDATE:
Like this, it would work both in Eclipse and with the JDK compiler (with rawtypes warnings, of course). I understand that the rules specified in the spec are quite special when generics are involved. But I find this rather confusing:
public <T> void setValue(Parameter<T> parameter, Object value) {
}
// Here, it's easy to see that this method is more specific
public <T> void setValue(Parameter<T> parameter, Field value) {
}
UPDATE 2:
Even with generics, I can create this workaround where I avoid the type <T> being bound to Object at setValue invocation time, by adding an additional, unambiguous indirection called setValue0. This makes me think that the binding of T to Object is really what's causing all the trouble here:
public <T> void setValue(Parameter<T> parameter, T value) {
}

public <T> void setValue(Parameter<T> parameter, Field<T> value) {
}

public <T> void setValue0(Parameter<T> parameter, Field<T> value) {
    // This call wasn't ambiguous in Java 7.
    // It is now ambiguous in Java 8!
    setValue(parameter, value);
}

public void test() {
    Parameter<Object> p2 = p2();
    Field<Object> f2 = f2();
    setValue0(p2, f2);
}
Am I misunderstanding something here? Is there a known compiler bug related to this? Or is there a workaround/compiler setting to help me?
Follow-Up:
For those interested, I have filed a bug report with both Oracle and Eclipse. Oracle has accepted the bug; Eclipse, so far, has analysed it and rejected it! It looks as though my intuition is right and this is a bug in javac:
http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=7031404
https://bugs.eclipse.org/bugs/show_bug.cgi?id=340506
https://bugs.eclipse.org/bugs/show_bug.cgi?id=469014 (a new issue in Eclipse Mars)
The JDK is right. The 2nd method is not more specific than the 1st. From JLS3 §15.12.2.5:
"The informal intuition is that one method is more specific than another if any invocation handled by the first method could be passed on to the other one without a compile-time type error."
This is clearly not the case here. I emphasized any invocation. The property of one method being more specific than the other purely depends on the two methods themselves; it doesn't change per invocation.
Formal analysis of your problem: is m2 more specific than m1?
m1: <R> void setValue(Parameter<R> parameter, R value)
m2: <V> void setValue(Parameter<V> parameter, Field<V> value)
First, the compiler needs to infer R from the initial constraints:
Parameter<V> << Parameter<R>
Field<V> << R
The result is R=V, per inference rules in 15.12.2.7
Now we substitute R and check subtype relations
Parameter<V> <: Parameter<V>
Field<V> <: V
The 2nd line does not hold, per subtyping rules in 4.10.2. So m2 is not more specific than m1.
V is not Object in this analysis; the analysis considers all possible values of V.
I would suggest to use different method names. Overloading is never a necessity.
This appears to be a significant bug in Eclipse. The spec quite clearly indicates that the type variables are not substituted in this step. Eclipse apparently does type variable substitution first, then check method specificity relation.
If such behavior is more "sensible" in some examples, it is not in other examples. Say,
m1: <T extends Object> void check(List<T> list, T obj) { print("1"); }
m2: <T extends Number> void check(List<T> list, T num) { print("2"); }
void test() {
    check(new ArrayList<Integer>(), new Integer(0));
}
"Intuitively", and formally per spec, m2 is more specific than m1, and the test prints "2". However, if substitution T=Integer is done first, the two methods become identical!
For Update 2:
m1: <R> void setValue(Parameter<R> parameter, R value)
m2: <V> void setValue(Parameter<V> parameter, Field<V> value)
m3: <T> void setValue2(Parameter<T> parameter, Field<T> value)
s4: setValue(parameter, value)
Here, m1 is not applicable for method invocation s4, so m2 is the only choice.
Per 15.12.2.2, to see if m1 is applicable for s4, first, type inference is carried out, leading to the conclusion that R=T; then we check Ai <: Si, which leads to Field<T> <: T, which is false.
This is consistent with the previous analysis - if m1 is applicable to s4, then any invocation handled by m2 (essentially same as s4) can be handled by m1, which means m2 would be more specific than m1, which is false.
in a parameterized type
Consider the following code
class PF<T> {
    public void setValue(Parameter<T> parameter, T value) {
    }

    public void setValue(Parameter<T> parameter, Field<T> value) {
    }
}

void test() {
    PF<Object> pf2 = null;
    Parameter<Object> p2 = getP2();
    Field<Object> f2 = getF2();
    pf2.setValue(p2, f2);
}
This compiles without problem. Per 4.5.2, the types of the methods in PF<Object> are the methods in PF<T> with the substitution T=Object. That is, the methods of pf2 are:
public void setValue(Parameter<Object> parameter, Object value)
public void setValue(Parameter<Object> parameter, Field<Object> value)
The 2nd method is more specific than the 1st.
My guess is that the compiler is doing method overload resolution as per JLS, Section 15.12.2.5.
For this section, the compiler uses strong subtyping (thus not allowing any unchecked conversion), so T value becomes Object value and Field<T> value becomes Field<Object> value. The following rules then apply:
The method m is applicable by subtyping if and only if both of the following conditions hold:
* For 1 ≤ i ≤ n, either:
  o Ai is a subtype (§4.10) of Si (Ai <: Si), or
  o Ai is convertible to some type Ci by unchecked conversion (§5.1.9), and Ci <: Si.
* If m is a generic method as described above, then Ul <: Bl[R1 = U1, ..., Rp = Up] (1 ≤ l ≤ p).
(Refer to the subtyping bullet above.) Since Field<Object> is a subtype of Object, f2 matches both of your methods, which makes the call ambiguous.
For String and Field<String>, there is no subtype relationship between the two.
PS. This is my understanding of things; don't quote it as kosher.
Edit: This answer is wrong. Take a look at the accepted answer.
I think the issue comes down to this: the compiler does not see the type of f2 (i.e. Field<Object>) and the inferred type of the formal parameter (i.e. Field<T> -> Field<Object>) as the same type.
In other words, it looks like the type of f2 (Field<Object>) is considered to be a subtype of the type of the formal parameter Field<T> (Field<Object>). Since Field<Object> is at the same time a subtype of Object, the compiler cannot pick one method over the other.
Edit: Let me expand my statement a bit
Both methods are applicable, and it looks like Phase 1 (Identify Matching Arity Methods Applicable by Subtyping) is used to decide which method to call; the rules from Choosing the Most Specific Method are then applied, but for some reason they fail to pick the second method over the first one.
Phase 1 section uses this notation: X <: S (X is subtype of S). Based on my understanding of <:, X <: X is a valid expression, i.e. the <: is not strict and includes the type itself (X is subtype of X) in this context. This explains the result of Phase 1: both methods are picked as candidates, since Field<Object> <: Object and Field<Object> <: Field<Object>.
The Choosing the Most Specific Method section uses the same notation to say that one method is more specific than another. The interesting part is the paragraph that starts with "One fixed-arity member method named m is more specific than another member...". It has, among other things:
For all j from 1 to n, Tj <: Sj.
This makes me think that in our case the second method must be chosen over the first one, because the following holds:
Parameter<Object> <: Parameter<Object>
Field<Object> <: Object
while the other way around does not hold due to Object <: Field<Object> being false (Object is not a subtype of Field).
Note: In case of String examples, Phase 1 will simply pick the only method applicable: the second one.
So, to answer your questions: I think this is a bug in the compiler implementation. Eclipse has its own incremental compiler, which does not seem to have this bug.