Here is what I know about overload resolution in Java:
The process by which the compiler resolves a method call against the available overloaded method definitions is called overload resolution. If the compiler cannot find an exact match, it looks for the closest match using upcasts only (downcasts are never done).
Here is a class:
public class MyTest {
    public static void main(String[] args) {
        MyTest test = new MyTest();
        Integer i = 9;
        test.TestOverLoad(i);
    }

    void TestOverLoad(int a) {
        System.out.println(8);
    }

    void TestOverLoad(Object a) {
        System.out.println(10);
    }
}
As expected, the output is 10.
However, if I change the class definition slightly and change the second overloaded method:
public class MyTest {
    public static void main(String[] args) {
        MyTest test = new MyTest();
        Integer i = 9;
        test.TestOverLoad(i);
    }

    void TestOverLoad(int a) {
        System.out.println(8);
    }

    void TestOverLoad(String a) {
        System.out.println(10);
    }
}
The output is 8.
Here I am confused. If downcasting is never used, then why did 8 get printed at all? Why did the compiler pick the TestOverLoad method that takes an int as its argument, which is a downcast from Integer to int?
The compiler considers not a downcast, but an unboxing conversion for overload resolution. Here, the Integer i is successfully unboxed to an int. The String method isn't considered because an Integer cannot be widened to a String. The only possible overload is the one that uses unboxing, so 8 is printed.
The reason the first code's output is 10 is that the compiler prefers a widening reference conversion (Integer to Object) over an unboxing conversion (see the sketch after the quoted phases below).
Section 15.12.2 of the JLS, when considering which methods are applicable, states:
The first phase (§15.12.2.2) performs overload resolution without permitting boxing or unboxing conversion, or the use of variable arity method invocation. If no applicable method is found during this phase then processing continues to the second phase.
The second phase (§15.12.2.3) performs overload resolution while allowing boxing and unboxing [...]
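To make the two phases concrete, here is a minimal sketch (the class and method names are mine, not from the question). In the first call the Object overload wins in phase one via a widening reference conversion; unboxing the argument yourself forces the int overload instead:
public class PhaseDemo {
    static void call(int a) {
        System.out.println("int overload");
    }

    static void call(Object a) {
        System.out.println("Object overload");
    }

    public static void main(String[] args) {
        Integer i = 9;
        call(i);            // phase 1: Integer widens to Object, prints "Object overload"
        call(i.intValue()); // the argument is already an int, prints "int overload"
        call((int) i);      // an explicit unboxing cast has the same effect, prints "int overload"
    }
}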
In Java, resolving methods in case of method overloading is done with the following precedence:
1. Widening
2. Auto-boxing
3. Var-args
The Java compiler considers widening a primitive parameter more desirable than performing an auto-boxing operation.
In other words, since auto-boxing was introduced in Java 5, the compiler chooses the older mechanism (widening) before the newer one (auto-boxing), so that code written before Java 5 keeps behaving the same way. The same applies to var-args.
In your 1st code snippet, widening of a reference type occurs (Integer to Object) rather than unboxing (Integer to int). In your 2nd snippet, widening cannot happen from Integer to String, so unboxing happens.
Consider the program below, which demonstrates all of the above statements:
class MethodOverloading {
    static void go(Long x) {
        System.out.print("Long ");
    }

    static void go(double x) {
        System.out.print("double ");
    }

    static void go(Double x) {
        System.out.print("Double ");
    }

    static void go(int x, int y) {
        System.out.print("int,int ");
    }

    static void go(byte... x) {
        System.out.print("byte... ");
    }

    static void go(Long x, Long y) {
        System.out.print("Long,Long ");
    }

    static void go(long... x) {
        System.out.print("long... ");
    }

    public static void main(String[] args) {
        byte b = 5;
        short s = 5;
        long l = 5;
        float f = 5.0f;
        // widening beats autoboxing
        go(b);
        go(s);
        go(l);
        go(f);
        // widening beats var-args
        go(b, b);
        // auto-boxing beats var-args
        go(l, l);
    }
}
The output is:
double double double double int,int Long,Long
P.S.: My answer is a modified version of an example given in the SCJP book.
Widening beats boxing, and boxing beats var-args. In your example, widening cannot happen, so boxing is applied and the Integer is unboxed. Nothing out of the ordinary.
Actually, in the second example no downcasting occurs. What happens is the following:
1. The Integer is unwrapped/unboxed to the primitive type int.
2. Then the TestOverLoad(int a) method is called.
In the main method you declare an Integer:
Integer i = 9;
Then call:
test.TestOverLoad(i);
whereas you have 2 overloaded versions of TestOverLoad():
TestOverLoad(int a);
TestOverLoad(String a);
Here the second overloaded version of TestOverLoad() takes a completely different argument type, String. That's why Integer i is unboxed to the primitive type int, and then the first overloaded version is called.
All classes in Java extend the class Object, including the class Integer. The two classes have the following relationship: an Integer "is an" Object because Integer extends Object. In your first example, the method with the Object parameter is used.
In the second example, no method is found that accepts an Integer. In this case Java uses what is called auto-unboxing to convert the Integer wrapper to a primitive int. Thus, the method with the int parameter is used.
While the accepted answer by @rgettman points to the right source, §15.12.2.2 and §15.12.2.3 of JLS Section 15.12.2 discuss applicability, not resolution, which is what the OP asked about. In the example the OP provided, both TestOverLoad methods are applicable, i.e. each would be successfully resolved in the absence of the other.
Instead, §15.12.2.5 Choosing the Most Specific Method discusses the resolution among the applicable methods.
It reads:
One applicable method m1 is more specific than another applicable
method m2, for an invocation with argument expressions e1, ..., ek, if
any of the following are true:
...
m1 and m2 are applicable by strict or loose invocation, and where m1 has formal parameter types S1, ..., Sn and m2 has formal parameter
types T1, ..., Tn, the type Si is more specific than Ti for argument
ei for all i (1 ≤ i ≤ n, n = k).
So, in the first example provided by the OP, for the argument i of type Integer, the method TestOverLoad(Object a) is more specific than TestOverLoad(int a).
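To illustrate what "more specific" means here, consider this minimal sketch (the class and method names are mine): both overloads are applicable to an Integer argument via a widening reference conversion, and the overload with the more specific parameter type is chosen:
public class MostSpecificDemo {
    static void print(Object o) {
        System.out.println("Object");
    }

    static void print(Number n) {
        System.out.println("Number");
    }

    public static void main(String[] args) {
        Integer i = 42;
        print(i); // both overloads apply, but Number is more specific than Object, so "Number" is printed
    }
}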
This is happening due to widening and narrowing type casting.
Widening means a smaller type can be accommodated in a larger type without any loss of information.
Widening type casting is automatic.
That means a byte value can be automatically cast to short, int, long, float, or double.
byte -> short -> int -> long -> float -> double
Widening goes from left to right.
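As a small sketch of that chain (the variable and method names are mine), every assignment and call below compiles without an explicit cast because it only moves from left to right along the chain:
public class WideningDemo {
    static void takesLong(long x) {
        System.out.println("long: " + x);
    }

    static void takesDouble(double x) {
        System.out.println("double: " + x);
    }

    public static void main(String[] args) {
        byte b = 5;
        int i = b;       // byte -> int, automatic
        long l = i;      // int -> long, automatic
        double d = l;    // long -> double, automatic
        takesLong(b);    // byte widens to long in the method call
        takesDouble(i);  // int widens to double in the method call
        // byte b2 = i;  // would not compile: int -> byte is a narrowing conversion
        System.out.println(d);
    }
}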
Hope this answers your question!
You can check with one more example:
public class HelloWorld {
    void show(String c) {
        System.out.println("String overloaded method");
    }

    void show(Object c) {
        System.out.println("Object overloaded method");
    }

    public static void main(String[] args) {
        Integer i = 9;
        new HelloWorld().show(i);
    }
}
Here you will get: Object overloaded method, because an Integer can be widened to an Object but cannot be converted to a String.
I am curious to know how casting works in function calls. I have two overloads of a specific function in my class called execute. The first overload is the base implementation and accepts double type parameters. The second overload accepts int type parameters. The second overload also calls the base overload and does so by casting the parameters to double.
The base implementation looks like this:
public double execute(double leftVal, double rightVal) throws IOException {
    ...
    return solution;
}
and the overloaded version looks like this:
public int execute(int leftVal, int rightVal) throws IOException {
    return (int) execute((double) leftVal, (double) rightVal);
}
Why is the above, specifically the (double) leftVal, (double) rightVal part, redundant, and why does it work with one of the casts removed? The call to the first overload works no matter which of the two casts is kept: both execute(leftVal, (double) rightVal) and execute((double) leftVal, rightVal) execute normally, producing no errors. I have always thought Java was very strict about explicitly identifying types. I would expect some warning or error that there is no function that accepts a call like execute(double, int). I have an idea, though, that the first cast helps the compiler determine which overload to choose, so the second cast is implicit. Maybe because the types are primitive and easily castable? Also, a mini-question: are both functions called overloads, or is every definition after the first an overload of the original?
Removing either one of the casts will still work, because in an invocation context a widening primitive conversion is permitted. So no, Java is not as strict about types as you thought :)
As per the Java Language Specification §5.3 Invocation Contexts
Invocation contexts allow an argument value in a method or constructor
invocation (§8.8.7.1, §15.9, §15.12) to be assigned to a corresponding
formal parameter.
Strict invocation contexts allow the use of one of the following:
an identity conversion (§5.1.1)
a widening primitive conversion (§5.1.2)
a widening reference conversion (§5.1.5)
A "widening primitive conversion" is... (§5.1.2)
19 specific conversions on primitive types are called the widening
primitive conversions:
byte to short, int, long, float, or double
short to int, long, float, or double
char to int, long, float, or double
int to long, float, or double
long to float or double
float to double
int to double is one of the above conversions, and is therefore permitted when you call a method. On the other hand, double to int is a narrowing primitive conversion, which is not allowed in an invocation context. This explains why the (double, double) method is unambiguously called.
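A minimal sketch of that rule (the class and method names are mine): passing an int where a double is expected compiles, while passing a double where only an int parameter exists does not:
public class ConversionDemo {
    static void wantsDouble(double d) {
        System.out.println("got double " + d);
    }

    static void wantsInt(int i) {
        System.out.println("got int " + i);
    }

    public static void main(String[] args) {
        wantsDouble(3);      // fine: int -> double is a widening primitive conversion
        // wantsInt(3.5);    // would not compile: double -> int is a narrowing conversion
        wantsInt((int) 3.5); // allowed only with an explicit (and lossy) cast, prints "got int 3"
    }
}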
Why is the above, specifically the (double) leftVal, (double) rightVal part, a redundancy?
At least one of the casts you made above is necessary to explicitly state that you want to call that base-level, double-taking overload. Otherwise, with no cast at all, the compiler would resolve the call to the same int overload and the method would just recurse into itself. So yes, you are right in saying "the first cast helps the compiler determine which overload to choose".
So the second cast is implicit. Maybe because the types are primitive easily cast-able?
If you go from a primitive type that uses a smaller number of bits (int is 32 bits) to a primitive type that accommodates a larger number of bits (double is 64 bits), Java performs the conversion automatically. However, if you go from a double (64 bits) to an int (32 bits), you as a developer need to cast explicitly, since information can be lost. This is why the (int) cast is necessary on the return value: you are going from a double to an int, which can be lossy.
Some Example Code:
public class Test {
    public int intMethod(int intArg) {
        return intArg;
    }

    public long longMethod(long longArg) {
        // will NOT compile, since we didn't explicitly cast to narrower int type (potentially lossy)
        return intMethod(longArg);
    }
}
public class Test {
    public int intMethod(int intArg) {
        return intArg;
    }

    public long longMethod(long longArg) {
        // WILL compile, since we explicitly cast to narrower int type. Also note we do not need to cast the result, since it goes from int --> long, which is not lossy
        return intMethod((int) longArg);
    }
}
public class Test {
    public int intMethod(int intArg) {
        // WILL compile. We don't need to cast the argument since int -> long is not lossy, but we do cast the returned value since long -> int is lossy
        return (int) longMethod(intArg);
    }

    public long longMethod(long longArg) {
        return 3L;
    }
}
I have always thought Java was very strict on explicitly identifying types
Primitives are a much more nuanced feature of the Java language. I think by types you're implicitly referring to Objects in Java. You are right that Java is much stricter with respect to Object casting. If we take the above example and use Long and Integer objects, it won't compile, and the compiler reports an error along the lines of Cannot cast java.lang.Long to java.lang.Integer.
Examples that Won't Compile due to Invalid Casting
public class Test {
    public Integer intMethod(Integer intArg) {
        return intArg;
    }

    public Long longMethod(Long longArg) {
        return intMethod((Integer) longArg);
    }
}
public class Test {
    public Integer intMethod(Integer intArg) {
        return longMethod((Long) intArg);
    }

    public Long longMethod(Long longArg) {
        return 3L;
    }
}
P.S.: it's standard to refer to all methods with the same name as overloads; it's not common to refer to one method as the "base" and say the rest overload that method.
You can understand overloading as early binding: the compiler verifies at compile time which of the candidate methods is the suitable one to bind to.
In the following piece of code:
class Main {
    void m1(double x) {
        System.out.println("double");
    }

    void m1(float x) {
        System.out.println("float");
    }

    void m1(long x) {
        System.out.println("long");
    }

    void m1(byte x) {
        System.out.println("byte");
    }

    void m1(short x) {
        System.out.println("short");
    }

    void m1(int x) {
        System.out.println("int");
    }

    public static void main(String[] args) {
        Main m = new Main();
        m.m1(1);
    }
}
Why is the output "int" instead of "byte" or "long" or "short" or "float" or "double"?
If the automatic conversion chain "byte" -> "short" -> "int" -> "long" -> "float" -> "double" were followed, then shouldn't the output be "double"?
(https://www.geeksforgeeks.org/type-conversion-java-examples/)
int literals (like 1) are expressions of type int.
Overload resolution proceeds in a number of stages, looking for a method which can accept the parameters with the types you are passing:
1. If there is a single overload where the actual parameters have the same types as the formal parameters, invoke that.
2. If there is a single non-varargs overload where the actual parameters can be automatically converted (e.g. by widening or un/boxing) to the types of the formal parameters, invoke that.
3. If there is a single varargs overload where the actual parameters can be automatically converted to the types of the formal parameters, invoke that.
4. Otherwise, it is an error.
These phases are applied in turn, continuing until a match is found.
Since there is an overload of m1 which takes int, a match is found in phase 1, so no conversion of that value to another type needs to happen.
The automatic conversion only happens when a method accepts a wider type than the argument. For example, if you remove all the m1(..) methods except the long version, you can pass in an int and it will be automatically widened to a long.
In your example there is a method that accepts an int, so Java uses that method instead, and no widening happens.
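Here is a small sketch of that point (a trimmed-down variant of the class from the question): with only the long overload left, the int argument is widened automatically:
class Main {
    void m1(long x) {
        System.out.println("long");
    }

    public static void main(String[] args) {
        Main m = new Main();
        m.m1(1); // no int overload exists, so 1 is widened from int to long and "long" is printed
    }
}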
By definition, that primitive literal is of type int. Any numeric literal without a decimal point is of type int, unless you add an l/L suffix: 1L turns it into a long. There is no byte literal; you have to write (byte) 1 to get one.
The compiler looks for the best fit, and here that is the method that takes an int.
That is all there is to it. If you want to see the other methods being invoked, either cast the value, to (long) for example, or use values such as 1.0 instead of 1.
Regarding your comment: that automatic conversion only kicks in when required. But in your case: A) you are using an int value, and B) there is an int-taking method. The compiler doesn't turn ints into longs or doubles for no reason!
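As a quick sketch of that (assuming the Main class with all the m1 overloads from the question, and replacing the body of its main method with the calls below), each call picks a different overload:
Main m = new Main();
m.m1(1);        // int literal, exact match: prints "int"
m.m1((byte) 1); // explicit cast to byte: prints "byte"
m.m1((long) 1); // explicit cast to long: prints "long"
m.m1(1L);       // long literal: prints "long"
m.m1(1.0f);     // float literal: prints "float"
m.m1(1.0);      // double literal: prints "double"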
I have a method call passing the parameters (4, 5).
I have two methods with these parameter lists:
method1(int a, int b);
method1(Integer a, Integer b);
Which method will be called, and why?
This has been specified in The Java® Language Specification, §15.12.2. Compile-Time Step 2: Determine Method Signature:
…
The remainder of the process is split into three phases, to ensure compatibility with versions of the Java programming language prior to Java SE 5.0. The phases are:
The first phase (§15.12.2.2) performs overload resolution without permitting boxing or unboxing conversion, or the use of variable arity method invocation. If no applicable method is found during this phase then processing continues to the second phase.
…
The second phase (§15.12.2.3) performs overload resolution while allowing boxing and unboxing, but still precludes the use of variable arity method invocation. If no applicable method is found during this phase then processing continues to the third phase.
…
The third phase (§15.12.2.4) allows overloading to be combined with variable arity methods, boxing, and unboxing.
Therefore, since method1(int a, int b) is found in the first phase, it will be used. method1(Integer a, Integer b) would only be found in the second phase, if no matching method has been found in the first phase.
4 and 5 are int literals. They match your method1(int a, int b) without any conversion, so method1(int a, int b) will be chosen.
In order to choose method1(Integer a, Integer b), the compiler would have to box the two int literals to Integers. That only happens if no method matching the name and the passed parameters can be found without boxing/unboxing conversions. Clearly, that's not the case here, since method1(int a, int b) exists. Even a method1(long a, int b), method1(long a, long b), or method1(int a, long b) would be preferred over method1(Integer a, Integer b), since they don't require a boxing conversion.
Both @Eran's and @Holger's answers are correct and very well explained, but if you want to try the code yourself, here it is:
public class Answer {
    static void method1(int a, int b) {
        System.out.println("Inside the method1(int, int)");
    }

    static void method1(Integer a, Integer b) {
        System.out.println("Inside the method1(Integer, Integer)");
    }

    // Test
    public static void main(String[] args) {
        method1(4, 5);
        method1(Integer.valueOf(4), Integer.valueOf(5));
        // method1(4, Integer.valueOf(5)); // Ambiguous method call - won't compile
    }
}
The output is:
Inside the method1(int, int)
Inside the method1(Integer, Integer)
public class Class2 extends Class1 {
    public static void main(String[] args) {
        Class2 c2 = new Class2();
        c2.m3(10);
        c2.m3(10.5f);
        c2.m3('a');
        c2.m3(10l);
        c2.m3(10.5);
    }

    public void m2() {
        System.out.println("m2 method of class2");
    }

    public void m3(int i) {
        System.out.println("int argument");
    }

    public void m3(float j) {
        System.out.println("float argument");
    }
}
I get an error when trying to call c2.m3(10.5);. Could you please explain why this is happening?
I get an error when trying to call c2.m3(10.5);. Could you please explain why this is happening?
What's happening here is that you are passing a double to the m3() method when you call m3(10.5), while you only have two m3() methods, accepting either an int or a float; that's where your problem comes from.
Just change your last line like this:
c2.m3((float) 10.5);
You need to cast the double value to a float or pass a float like 10.5f.
You're calling the method with the wrong type: it seems you intended to invoke this overload: void m3(float j), but it cannot be called with 10.5. The literal 10.5 is a double, and there's no method overload taking a double parameter.
You should change the call to c2.m3(10.5f), or add yet another overload with the signature void m3(double j).
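A minimal sketch of that second option (the class name is mine and only the relevant overloads are shown): adding a double overload makes the original call compile:
public class Class2Fixed {
    public void m3(int i) {
        System.out.println("int argument");
    }

    public void m3(float j) {
        System.out.println("float argument");
    }

    public void m3(double j) { // the added overload
        System.out.println("double argument");
    }

    public static void main(String[] args) {
        Class2Fixed c2 = new Class2Fixed();
        c2.m3(10.5);  // now compiles: the double literal matches m3(double) exactly, prints "double argument"
        c2.m3(10.5f); // still picks m3(float), prints "float argument"
    }
}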
Let's refer to the language spec §5.3 Invocation Contexts:
Loose invocation contexts allow a more permissive set of conversions,
because they are only used for a particular invocation if no
applicable declaration can be found using strict invocation contexts.
Loose invocation contexts allow the use of one of the following:
an identity conversion (§5.1.1)
a widening primitive conversion (§5.1.2)
a widening reference conversion (§5.1.5)
a boxing conversion (§5.1.7) optionally followed by widening reference conversion
an unboxing conversion (§5.1.8) optionally followed by a widening primitive conversion
Basically, when you call a method, the arguments you give can only undergo one of the conversions described above.
If the type of the expression cannot be converted to the type of the
parameter by a conversion permitted in a loose invocation context,
then a compile-time error occurs.
You are trying to convert from a double (10.5 is a double literal) to a float, which is a narrowing primitive conversion, which is not permitted in an invocation context.
10 works as an argument because an identity conversion (from int to int) is allowed.
10.5f works because it is a float literal and an identity conversion (from float to float) is allowed.
'a' works because this is a primitive widening conversion (from char to int), which is allowed.
10l works because this is also a widening primitive conversion (from long to float).
For more information on the kinds of conversions, see §5.1.
I have a question related to the following code snippet:
class VarArgsTricky {
    static void wide_vararg(long... x) {
        System.out.println("long...");
    }

    static void wide_vararg(Integer... x) {
        System.out.println("Integer...");
    }

    public static void main(String[] args) {
        int i = 5;
        wide_vararg(i, i, i); // needs to widen and use var-args
        Long l = 9000000000l;
        wide_vararg(l, l); // prints "long..." successfully
    }
}
The first call to wide_vararg fails to compile (saying that the method call is ambiguous), while the second compiles just fine.
Any explanation for this behaviour?
Thanks!
The first wide_vararg call is ambiguous because the compiler could either:
widen the ints to longs, and call the first wide_vararg method, or
autobox the ints to Integers, and call the second wide_vararg.
It doesn't know which it should do, however, so it refuses to compile the ambiguous method call. If you want the first call to compile, declare i as an Integer or a long, not an int.
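As a quick sketch of that fix (reusing the wide_vararg methods from the question and replacing the body of main), declaring the values as long leaves only the long... overload applicable, so the call compiles:
long a = 5;
wide_vararg(a, a, a);                      // a long cannot be boxed to an Integer, so only long... applies: prints "long..."

int i = 5;
wide_vararg((long) i, (long) i, (long) i); // explicit casts achieve the same thing: prints "long..."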
When a var-args method is invoked, the arguments are packed into an array of the parameter's element type at compile time.
In the first call, the arguments would be packed into an int[]. As all arrays in Java are direct subtypes of Object, the concept of primitive widening does not apply here, so both overloads become equally applicable because long[] and Integer[] are at the same level. Hence the ambiguity.