import java.io.IOException;

class A {
    public void printFirst(int... va) throws IOException {
        System.out.print("A");
    }

    public static void main(String[] args) {
        try {
            new B().printFirst(2);
        } catch (Exception ex) {
        }
    }
}

class B extends A {
    //@Override
    public void printFirst(float... va) throws IOException {
        System.out.print("B");
    }
}
Why is it reporting that the reference to printFirst is ambiguous?
It actually compiles if you remove the varargs notation. The literal 2 should be considered an int, not a float, so I would expect that the printFirst in A would be chosen by the compiler.
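For reference, a minimal sketch of the non-varargs variant (my reconstruction, not the OP's exact code); it compiles, and the int overload wins as the more specific match:

import java.io.IOException;

class A {
    public void printFirst(int va) throws IOException {
        System.out.print("A");
    }

    public static void main(String[] args) {
        try {
            new B().printFirst(2); // int literal: picks printFirst(int), prints "A"
        } catch (Exception ex) {
        }
    }
}

class B extends A {
    public void printFirst(float va) throws IOException {
        System.out.print("B");
    }
}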
It looks like this has to do with how the compiler does method invocation conversions. This SO question says it's in the spec, but the part of the accepted answer that relates to this question appears to be contradictory (it says you can't combine a widening conversion (int to float) with varargs, but then later it says this is okay). A similar problem was discussed in this question, and the accepted answer concludes that this case is actually unspecified (unfortunately the link to the discussion is now broken). Making matters worse, the language guide simply suggests avoiding this type of overloading.
This appears to be a bug in your compiler; I can reproduce your compile-error in one compiler (Eclipse), but not in another (javac), and I believe the latter is correct.
According to §15.12.2.5 "Choosing the Most Specific Method" of The Java Language Specification, Java SE 7 Edition, the compile-error that you're seeing should only happen if "no method is the most specific, because there are two or more methods that are maximally specific" (plus various other restrictions). But that's not the case here: in your case, B.printFirst(float...) is not maximally specific, because a method is maximally specific "if it is accessible and applicable and there is no other method that is applicable and accessible that is strictly more specific", and in your case, A.printFirst(int...) is strictly more specific, because int is a subtype of float and float is not a subtype of int.
By the way, your class B is most likely a red herring; in Eclipse, at least, you can trigger the same compile-error by simply writing:
class A
{
    public static void printFirst(float... va)
        { System.out.print("float..."); }
    public static void printFirst(int... va)
        { System.out.print("int..."); }
    public static void main(String[] args)
        { printFirst(2); }
}
Primitives are at it again, breaking the rules I learned before. Well, not technically primitives, but arrays composed of them.
I learned that whenever no method is more specific than the rest, a compile-time error occurs, as it does here:
public static void caller() {
    z5();     // Error. Neither Integer... nor String... is more specific
    z5(null); // Error for the same reason
}

public static void z5(Integer... integers) {
    System.out.println("Integer z5 called");
}

public static void z5(String... strings) {
    System.out.println("String z5 called");
}
Now primitives come into the picture.
public static void caller() {
    z1(null); // Error cuz [I, [J, [F are all subclasses of Object.
    z1();     // SURPRISINGLY works and calls the int one. WHY?
}

public static void z1(int... integers) {
    System.out.println("int z1 called");
}

public static void z1(long... longs) {
    System.out.println("long z1 called");
}

public static void z1(float... floats) {
    System.out.println("float z1 called");
}
The expected compile-time errors occur here:
public static void caller() {
    z1(null); // Error
    z1();     // Error
}

public static void z1(int... integers) {
    System.out.println("int z1 called");
}

public static void z1(boolean... bools) {
    System.out.println("bool z1 called");
}
Now my question is: int[], float[], and other arrays of primitives are not themselves primitive types, so why are they treated differently from other reference types?
--UPDATE--
@john16384 You don't think I read your "Possible duplicate" Varargs in method overloading in Java?
The top answer there says "You cannot combine var-args with either widening or boxing." Besides, I forgot to mention: the OP's code posted there works fine on my JDK 7.
What exactly is going on that works for (int... is) & (float... fs), but not for (Integer... is) & (Float... fs), and not for (int... is) & (boolean... bools)?
Quote from the JLS about varargs invocations when multiple methods are applicable:
15.12.2.5. Choosing the Most Specific Method
If more than one member method is both accessible and applicable to a
method invocation, it is necessary to choose one to provide the
descriptor for the run-time method dispatch. The Java programming
language uses the rule that the most specific method is chosen.
The informal intuition is that one method is more specific than
another if any invocation handled by the first method could be passed
on to the other one without a compile-time error. In cases such as an
explicitly typed lambda expression argument (§15.27.1) or a variable
arity invocation (§15.12.2.4), some flexibility is allowed to adapt
one signature to the other.
The important part here is how methods are defined to be more specific. It basically says that int... is more specific than long... because any values you could pass to the first method could also be passed to the second method.
This also applies to the case where you pass no arguments: int... will be the most specific of these (it would even see byte... as more specific!).
public static void main(String[] args) {
    bla();
}

private static void bla(long... x) {}
private static void bla(int... x) {}
private static void bla(short... x) {}
private static void bla(byte... x) {} // <-- calls this one
The reason you get an error when you also create a boolean... overload is that it is now ambiguous which one to call: boolean and int are not convertible to each other, so neither overload is more specific than the other.
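If the overloads must coexist, a hedged workaround sketch: give the argument an explicit static type so that only one overload is applicable and no most-specific choice is needed:

public static void caller() {
    z1(new int[0]);       // explicit int[]: only z1(int...) is applicable
    z1((boolean[]) null); // the cast's static type selects z1(boolean...)
}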
For the code below, why do I get this compile-time error:
The method overloadedMethod(IOException) is ambiguous for the type Test.
import java.io.FileNotFoundException;
import java.io.IOException;

public class Test {
    public static void main(String[] args) {
        Test test = new Test();
        test.overloadedMethod(null);
    }

    void overloadedMethod(IOException e) {
        System.out.println("1");
    }

    void overloadedMethod(FileNotFoundException e) {
        System.out.println("2");
    }

    void overloadedMethod(Exception e) {
        System.out.println("3");
    }

    void overloadedMethod(ArithmeticException e) {
        System.out.println("4");
    }
}
Both FileNotFoundException and ArithmeticException sit at the same level of the Java exception hierarchy: neither is a subtype of the other. The compiler cannot choose the most specific method, since both methods are eligible for the invocation.
Note that removing FileNotFoundException alone won't solve the problem, because IOException and ArithmeticException are still unrelated siblings under Exception. Removing ArithmeticException, on the other hand, would leave FileNotFoundException as the unique most specific method, and the call would compile, as sketched below.
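A minimal sketch of that claim (the class name Disambiguated is hypothetical): with the ArithmeticException overload removed, null matches three applicable methods, and FileNotFoundException is the unique most specific one:

import java.io.FileNotFoundException;
import java.io.IOException;

public class Disambiguated {
    public static void main(String[] args) {
        new Disambiguated().overloadedMethod(null); // compiles, prints "2"
    }

    void overloadedMethod(IOException e)           { System.out.println("1"); }
    void overloadedMethod(FileNotFoundException e) { System.out.println("2"); }
    void overloadedMethod(Exception e)             { System.out.println("3"); }
}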
From JLS 15.12.2.5
If more than one member method is both accessible and applicable to a
method invocation, it is necessary to choose one to provide the
descriptor for the run-time method dispatch. The Java programming
language uses the rule that the most specific method is chosen.
The informal intuition is that one method is more specific than
another if any invocation handled by the first method could be passed
on to the other one without a compile-time type error.
You have four versions of overloadedMethod that accept parameters of type IOException, FileNotFoundException, Exception and ArithmeticException.
When you call
test.overloadedMethod(null);
the compiler cannot know which version of the method you intend to call. That is why you get the ambiguity error.
You need to be more specific on which version you want to call. You can do that by casting the parameter:
test.overloadedMethod((IOException)null);
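For illustration, the cast's static type (the value is still null) is what selects the overload:

test.overloadedMethod((IOException) null);           // prints "1"
test.overloadedMethod((FileNotFoundException) null); // prints "2"
test.overloadedMethod((Exception) null);             // prints "3"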
There seems to be a bug in the Java varargs implementation. Java can't distinguish the appropriate type when a method is overloaded with different types of vararg parameters.
It gives me an error The method ... is ambiguous for the type ...
Consider the following code:
public class Test
{
    public static void main(String[] args) throws Throwable
    {
        doit(new int[]{1, 2});        // <- no problem
        doit(new double[]{1.2, 2.2}); // <- no problem
        doit(1.2f, 2.2f);             // <- no problem
        doit(1.2d, 2.2d);             // <- no problem
        doit(1, 2);                   // <- The method doit(double[]) is ambiguous for the type Test
    }

    public static void doit(double... ds)
    {
        System.out.println("doubles");
    }

    public static void doit(int... is)
    {
        System.out.println("ints");
    }
}
The docs say: "Generally speaking, you should not overload a varargs method, or it will be difficult for programmers to figure out which overloading gets called."
However, they don't mention this error, and it's not the programmers who are finding it difficult, it's the compiler.
Thoughts?
EDIT - Compiler: Sun jdk 1.6.0 u18
The problem is that it is ambiguous.
doit(1, 2);
could be a call to doit(int...) or doit(double...). In the latter case, the integer literals will be promoted to double values.
I'm pretty sure that the Java spec says that this is an ambiguous construct, and the compiler is just following the rules laid down by the spec. (I'd have to research this further to be sure.)
EDIT - the relevant part of the JLS is "15.12.2.5 Choosing the Most Specific Method", but it is making my head hurt.
I think the reasoning would be that void doit(int[]) is not more specific than void doit(double[]) (or vice versa), because int[] is not a subtype of double[] (and vice versa). Since the two overloads are equally specific, the call is ambiguous.
By contrast, void doItAgain(int) is more specific than void doItAgain(double), because int is a subtype of double according to the JLS. Hence, a call to doItAgain(42) is not ambiguous.
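A compact sketch of that contrast (doItAgain is the answer's hypothetical name, not from the question's code):

public class Again {
    public static void main(String[] args) {
        doItAgain(42); // unambiguous: prints "int", the more specific overload
    }

    static void doItAgain(int x)    { System.out.println("int"); }
    static void doItAgain(double x) { System.out.println("double"); }
}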
EDIT 2 - @finnw is right, it is a bug. Consider this part of §15.12.2.5 (edited to remove non-applicable cases):
One variable arity member method named m is more specific than another variable arity member method of the same name if:
One member method has n parameters and the other has k parameters, where n ≥ k. The types of the parameters of the first member method are T1, ..., Tn-1, Tn[], and the types of the parameters of the other method are U1, ..., Uk-1, Uk[]. Let Si = Ui for 1 ≤ i ≤ k. Then:
for all j from 1 to k-1, Tj <: Sj, and
for all j from k to n, Tj <: Sk
Apply this to the case where n = k = 1, and we see that doit(int[]) is more specific than doit(double[]).
In fact, there is a bug report for this and Sun acknowledges that it is indeed a bug, though they have prioritized it as "very low". The bug is now marked as Fixed in Java 7 (b123).
There is a discussion about this over at the Sun Forums.
No real resolution there, just resignation.
Varargs (and auto-boxing, which also leads to hard-to-follow behaviour, especially in combination with varargs) were bolted on later in Java's life, and this is one area where it shows. So it is more a bug in the spec than in the compiler.
At least, it makes for good(?) SCJP trick questions.
Interesting. Fortunately, there are a couple different ways to avoid this problem:
You can use the wrapper types instead in the method signatures:
public static void doit(Double... ds) {
    for (Double currD : ds) {
        System.out.println(currD);
    }
}

public static void doit(Integer... is) {
    for (Integer currI : is) {
        System.out.println(currI);
    }
}
Or, you can use generics:
public static <T> void doit(T... ts) {
    for (T currT : ts) {
        System.out.println(currT);
    }
}
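Assuming the generic version is the only doit overload present, the previously ambiguous call now compiles, because the arguments are boxed and T is inferred per call site:

doit(1, 2);     // T inferred as Integer; prints 1 then 2
doit(1.2, 2.2); // T inferred as Double; prints 1.2 then 2.2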
We have been simplifying some definitions and usages of generics in our code.
Now we have hit an interesting case; take this example:
public class MyWeirdClass {
    public void entryPoint() {
        doSomethingWeird();
    }

    @SuppressWarnings("unchecked")
    private <T extends A & B> T getMyClass() {
        if (System.currentTimeMillis() % 2 == 0) {
            return (T) new MyClass_1();
        } else {
            return (T) new MyClass_2();
        }
    }

    private <T extends A & B> void doSomethingWeird() {
        T obj = getMyClass();
        obj.methodFromA();
        obj.methodFromB();
    }

    static interface A {
        void methodFromA();
    }

    static interface B {
        void methodFromB();
    }

    static class MyClass_1 implements A, B {
        public void methodFromA() {}
        public void methodFromB() {}
    }

    static class MyClass_2 implements A, B {
        public void methodFromA() {}
        public void methodFromB() {}
    }
}
Now look at the method doSomethingWeird() in MyWeirdClass:
This code compiles correctly using the Eclipse JDT compiler, but it fails with the Oracle compiler. Since the JDT is able to produce working bytecode, this is valid code at the JVM level, and it is 'only' the Oracle compiler that refuses to compile such dirty(!?) stuff.
We understand that Oracle's compiler won't accept the call 'T obj = getMyClass();', since T is not an actually existing type. However, since we know that the returned object implements both A and B, why not allow it? (The JDT compiler and the JVM do.)
Note also that since the generics are used only internally in private methods, we do not want to expose them at class level, polluting external code with generic declarations that are of no interest from outside the class.
The schoolbook solution would be to create an interface AB extends A, B. However, since we have a large number of interfaces, used in different combinations and coming from different modules, creating shared interfaces for all the combinations would significantly increase the number of 'dummy' interfaces and ultimately make the code less readable. In theory it could require up to N permutations of different wrapper interfaces to cover all the cases.
The 'business-oriented-engineer' (other people call it the 'lazy-engineer') solution would be to leave the code this way and start using only the JDT for compiling the code.
Edit: it's a bug in Oracle's javac 6; the code also works without problems on Oracle's javac 7.
What do you mean? Are there any hidden dangers by adopting this 'strategy'?
Addition, in order to avoid discussion of points that are (for me) not relevant:
I am not asking why the code above does not compile on Oracle's compiler. I know the reason, and I do not want to modify this kind of code without a very good reason if it works perfectly when using another compiler.
Please concentrate on the definition and usage (without giving a specific type) of the method 'doSomethingWeird()'.
Is there a good reason why we should not use only the JDT compiler, which allows writing and compiling this code, and stop compiling with Oracle's compiler, which will not accept the code above?
(Thanks for input)
Edit: The code above compiles correctly on Oracle javac 7 but not on javac 6. It is a javac 6 bug. So this means that there is nothing wrong with our code and we can stick with it.
The question is answered, and I'll mark it as such after the two-day timeout on my own answer.
Thanks everybody for the constructive feedback.
In Java, generic methods work when the generic type is used in a parameter type or the return type of the method signature. In your sample, doSomethingWeird is generic, but T is never used in its signature.
See the following sample:
class MyWeirdClass
{
    public void entryPoint()
    {
        doSomethingWeird(new MyClass_1());
    }

    private <T extends A & B> T getMyClass()
    {
        if (System.currentTimeMillis() % 2 == 0)
        {
            return (T) new MyClass_1();
        }
        else
        {
            return (T) new MyClass_2();
        }
    }

    private <T extends A & B> void doSomethingWeird(T a)
    {
        T obj = getMyClass();
        obj.methodFromA();
        obj.methodFromB();
    }
}
This code works fine.
The JLS (Java Language Specification) says in the section on generic methods:
Type parameters of generic methods need not be provided explicitly when a
generic method is invoked. Instead, they are almost always inferred as specified in
§15.12.2.7
Given this quotation: when you don't use T anywhere in the doSomethingWeird method signature, what should T be inferred as at invocation time (in the entryPoint method)?
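For what it's worth, a hedged sketch: even with T absent from the parameter list, an explicit type witness at the call site pins T without relying on inference (MyClass_1 is taken from the question's code):

public void entryPoint() {
    // Explicit type argument: T is fixed to MyClass_1, no inference required.
    this.<MyClass_1>doSomethingWeird();
}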
I did not check the code (compile it with both compilers). There is a lot of weird stuff in the language specification at even more basic levels (well, look at array declarations...). However, I believe that the design above is a little "over-engineered", and if I understand the need correctly, the required functionality can be achieved with the Factory pattern; or, if you are using an IoC framework (Spring?), lookup method injection can do the magic for you. I think the code will be more intuitive and easier to read and maintain.
I think the cause is different. It is not true that the type T on the line "T obj = getMyClass();" is unknown; in fact, because of the definition "T extends A & B", its erasure is A. This is called multiple bounds, and the following applies: "When a multiple bound is used, the first type mentioned in the bound is used as the erasure of the type variable."
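A short sketch of what that means in practice (the method name use is hypothetical; the comments are my reading of how javac compiles multiple bounds, per JLS §4.6):

static <T extends A & B> void use(T t) { // erased signature: use(A t)
    t.methodFromA(); // invoked directly through the erasure A
    t.methodFromB(); // compiled as a checkcast to B, then B.methodFromB()
}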
Based on @MJM's answer, I suggest you update the code as below. Then your code will not rely on the compiler's type inference at all.
public void entryPoint() {
    doSomethingWeird(getMyClass());
}

private <T extends A & B> T getMyClass() {
    if (System.currentTimeMillis() % 2 == 0) {
        return (T) new MyClass_1();
    } else {
        return (T) new MyClass_2();
    }
}

private <T extends A & B> void doSomethingWeird(T t) {
    t.methodFromA();
    t.methodFromB();
}
I would again go for creating what you called a "dummy" interface AB.
First of all, I don't find it dummy at all. There are two classes with the same common method definitions, and in one place you need to use one of them regardless of which one it actually is. That is the exact usage of inheritance. So the interface AB fits perfectly here, and generics are the wrong solution: generics were never meant to implement inheritance.
Second, defining the interface will remove all the generics stuff from your code and make it much more readable. Actually, adding an interface (or class) never makes your code less readable; otherwise it would be better to put all the code in a single class.
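A hedged sketch of that approach, reusing the question's names; the cast, the type parameter, and the @SuppressWarnings all disappear:

interface A { void methodFromA(); }
interface B { void methodFromB(); }
interface AB extends A, B {} // the "dummy" interface

class MyClass_1 implements AB {
    public void methodFromA() {}
    public void methodFromB() {}
}

class MyClass_2 implements AB {
    public void methodFromA() {}
    public void methodFromB() {}
}

class MyWeirdClass {
    public void entryPoint() {
        doSomethingWeird();
    }

    private AB getMyClass() {
        return (System.currentTimeMillis() % 2 == 0) ? new MyClass_1() : new MyClass_2();
    }

    private void doSomethingWeird() {
        AB obj = getMyClass(); // no cast, no @SuppressWarnings needed
        obj.methodFromA();
        obj.methodFromB();
    }
}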
This is what the OpenJDK guys replied to my question:
These failures are caused by the fact that JDK 6 compiler doesn't
implement type-inference correctly. A lot of effort has been put into
JDK 7 compiler in order to get rid of all these problems (your program
compiles fine in JDK 7). However, some of those inference improvements
require source incompatible changes, which is why we cannot backport
these fixes in the JDK 6 release.
So this means for us: there is absolutely nothing wrong with our code, and it is officially supported by Oracle as well. We can stick to this kind of code and use javac 7 with target=1.6 for our Maven builds, while developing in Eclipse will guarantee that we do not use Java 7 APIs :D yaaahyyy!!!
Your approach is questionable, because the unchecked casts sacrifice runtime type safety. Consider this example:
interface A {
    void methodFromA();
}

interface B {
    void methodFromB();
}

class C implements A { // but not B!
    @Override public void methodFromA() {
        // do something
    }
}

class D implements A, B {
    @Override
    public void methodFromA() {
        // TODO implement
    }

    @Override
    public void methodFromB() {
        // do something
    }
}

class Factory {
    @SuppressWarnings("unchecked")
    public static <T extends A & B> T getMyClass() {
        if (System.currentTimeMillis() % 2 == 0) {
            return (T) new C();
        } else {
            return (T) new D();
        }
    }
}
public class Innocent {
    public static <T extends A & B> void main(String[] args) {
        T t = Factory.getMyClass();
        // Sometimes this line throws a ClassCastException.
        // Really weird, there isn't even a cast here!
        //   -- the maintenance programmer
        t.methodFromB();
    }
}
(You may have to run the program several times to see what confused the maintenance programmer.)
Yes, in this simple program the error is rather obvious, but what if the object is passed around half your program before the missing interface is noticed? How would you find out where the bad object came from?
If that didn't convince you, what about this:
class NotQuiteInnocent {
    public static void main(String[] args) {
        // Sometimes this line throws a ClassCastException
        D d = Factory.getMyClass();
    }
}
Is eliminating a couple interface declarations really worth that?
I've run into Java code similar to the following:
public interface BaseArg {
}

public class DerivedArg implements BaseArg {
}

public abstract class Base<A extends BaseArg> {
    A arg;

    void doIt() {
        printArg(arg);
    }

    void printArg(A a) {
        System.out.println("Base: " + a);
    }
}

public class Derived extends Base<DerivedArg> {
    void printArg(DerivedArg a) {
        System.out.println("Derived: " + a);
    }

    public static void main(String[] args) {
        Derived d = new Derived();
        d.arg = new DerivedArg();
        d.doIt();
    }
}
(feel free to split it into files and run it).
This code ends up invoking the Derived printArg. I realize it's the only logical thing to do. However, if I perform "erasure" on the generic Base manually, replacing all occurrences of A with BaseArg, the overriding breaks down: I now get Base's version of printArg.
It seems like "erasure" is not total; somehow printArg(A a) is not the same as printArg(BaseArg a). I can't find any basis for this in the language spec...
What am I missing in the language spec? It's not really important, but it bugs me :) .
Please note that the derived method is invoked. The question is why, considering their erased signatures are not override-equivalent.
When compiling class Derived, the compiler actually emits two methods: the method printArg(DerivedArg), and a synthetic method printArg(BaseArg), which overrides the superclass method in terms even a virtual machine ignorant of type parameters can understand, and delegates to printArg(DerivedArg). You can verify this by throwing an exception in printArg(DerivedArg) while calling it on a reference of type Base, and examining the stack trace:
Exception in thread "main" java.lang.RuntimeException
at Derived.printArg(Test.java:28)
at Derived.printArg(Test.java:1) << synthetic
at Base.doIt(Test.java:14)
at Test.main(Test.java:39)
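In source form, the bridge behaves roughly like this sketch (conceptual only: the real bridge is generated by the compiler and marked synthetic):

public class Derived extends Base<DerivedArg> {
    void printArg(DerivedArg a) {
        System.out.println("Derived: " + a);
    }

    // Synthetic bridge (compiler-generated): after erasure this is what
    // overrides Base.printArg(BaseArg); it downcasts and delegates.
    void printArg(BaseArg a) {
        printArg((DerivedArg) a);
    }
}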
As for finding this in the Java Language Specification, I first missed it as well, as it is not, as one might expect, specified where overriding or the subsignature relation is discussed, but in "Members and Constructors of Parameterized Types" (§4.5.2), which reveals that formal type parameters of the superclass are syntactically replaced by the actual type parameter in the subclass prior to checking for override equivalence.
That is, override equivalence is not affected by erasure, contrary to popular assumption.
If you do "manual" type erasure, you define the arg instance in BaseArg as type "BaseArg", not type "DerivedArg", so that's resolved to Base's "doIt(BaseArg)" method rather than Derived's "doIt(DerivedArg)" method. If you then alter Derived's method signature to
void printArg( BaseArg a )
from
void printArg(DerivedArg a)
it will print "Derived: arg" as expected.
I believe the behaviour you encountered is due to overload resolution.
See the Java Language Specification section on overloading, and also this wonderful resource on Java generics regarding the topic.
At first sight, the printArg in Derived should not override the printArg in Base: by JLS 8.4.8.1, the overriding method's signature must be a "subsignature" of the overridden method's, and by JLS 8.4.2 a subsignature must either have the same argument types or an erasure equal to the other signature, neither of which seems to hold here. The catch is that the subsignature check is made against printArg as a member of the supertype Base<DerivedArg> (JLS 4.5.2), where A has already been substituted by DerivedArg, so the signatures do match and the override is real.
First of all, you can compile the source code in a single file if you get rid of the "public" declarations for all of the classes/interfaces except "Derived".
Second, go ahead and do the type erasure by hand. Here's what I got when I did it:
interface BaseArg {}

class DerivedArg implements BaseArg {}

abstract class Base {
    BaseArg arg;

    void doIt() {
        printArg(arg);
    }

    void printArg(BaseArg a) {
        System.out.println("Base: " + a);
    }
}

public class Derived extends Base {
    void printArg(BaseArg a) {
        System.out.println("Derived: " + a);
    }

    public static void main(String[] args) {
        Derived d = new Derived();
        d.arg = new DerivedArg();
        d.doIt();
    }
}
In the generic version of the code, it may look like methods Derived.printArg and Base.printArg have different signatures. However, if that were the case, then Derived.printArg could never be invoked by doIt. The type-erased version of the code makes it clear that Derived.printArg overrides Base.printArg, so doIt polymorphically calls the right method.
How is printArg in Base defined after your manual erasure?
void printArg(BaseArg a)
So printArg(DerivedArg a) does NOT override it and will not be called.
EDIT:
If you use the @Override annotation in Derived, you'll get an error when doing the manual erasure.
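A minimal sketch of that check, applied to the manually erased classes from above:

class Derived extends Base {
    @Override // compile error: the method does not override anything,
              // because Base only declares printArg(BaseArg)
    void printArg(DerivedArg a) {
        System.out.println("Derived: " + a);
    }
}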