public class Class2 extends Class1 {
    public static void main(String[] args) {
        Class2 c2 = new Class2();
        c2.m3(10);
        c2.m3(10.5f);
        c2.m3('a');
        c2.m3(10l);
        c2.m3(10.5); // compile error on this call
    }
    public void m2() {
        System.out.println("M2 method of class2");
    }
    public void m3(int i) {
        System.out.println("int argument");
    }
    public void m3(float j) {
        System.out.println("float argument");
    }
}
I get a compile error when trying to call c2.m3(10.5). Could you please explain why this is happening?
What's happening here is that the call m3(10.5) passes a double, but your two m3() overloads accept only an int or a float; that is where the problem comes from.
Just change your last line like this:
c2.m3((float) 10.5);
You need to cast the double value to a float or pass a float like 10.5f.
You're calling the method with the wrong type. It seems you intended to invoke the overload void m3(float j), but that cannot be called with 10.5: the literal 10.5 is a double, and there is no overload taking a double parameter.
You should change the call to c2.m3(10.5f), or add yet another overload with the signature void m3(double j).
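As a minimal sketch of that second suggestion (the extra overload is an assumption, not part of the original class), adding something like this to Class2 would make the call compile:

public void m3(double d) {
    System.out.println("double argument");
}

With this overload in place, c2.m3(10.5) resolves to m3(double) directly; otherwise the one-character change to c2.m3(10.5f) keeps the existing m3(float) overload in play.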
Let's refer to the language spec §5.3 Invocation Contexts:
Loose invocation contexts allow a more permissive set of conversions,
because they are only used for a particular invocation if no
applicable declaration can be found using strict invocation contexts.
Loose invocation contexts allow the use of one of the following:
an identity conversion (§5.1.1)
a widening primitive conversion (§5.1.2)
a widening reference conversion (§5.1.5)
a boxing conversion (§5.1.7) optionally followed by widening reference conversion
an unboxing conversion (§5.1.8) optionally followed by a widening primitive conversion
Basically, when you call a method, the arguments you give can only undergo one of the conversions described above.
If the type of the expression cannot be converted to the type of the
parameter by a conversion permitted in a loose invocation context,
then a compile-time error occurs.
You are trying to convert from a double (10.5 is a double literal) to a float, which is a narrowing primitive conversion, which is not permitted in an invocation context.
10 works as an argument because an identity conversion (from int to int) is allowed.
10.5f works because it is a float literal and an identity conversion (from float to float) is allowed.
'a' works because this is a primitive widening conversion (from char to int), which is allowed.
10l works because this is also a widening primitive conversion (from long to float).
For more information on the kinds of conversions, see §5.1.
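To make the points above concrete, here is a small self-contained sketch (class and method names are mine, modeled on the question's overloads) with each call annotated by the conversion that makes it compile:

public class ConversionRecap {
    static void m3(int i)   { System.out.println("int argument"); }
    static void m3(float f) { System.out.println("float argument"); }

    public static void main(String[] args) {
        m3(10);            // identity conversion: int -> int
        m3(10.5f);         // identity conversion: float -> float
        m3('a');           // widening primitive conversion: char -> int
        m3(10L);           // widening primitive conversion: long -> float
        m3((float) 10.5);  // explicit cast required: double -> float is a narrowing conversion
    }
}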
Related
I am curious to know how casting works in function calls. I have two overloads of a specific function in my class called execute. The first overload is the base implementation and accepts double type parameters. The second overload accepts int type parameters. The second overload also calls the base overload and does so by casting the parameters to double.
The base implementation looks like this:
public double execute(double leftVal, double rightVal) throws IOException {
    ...
    return solution;
}
and the overloaded version looks like this:
public int execute(int leftVal, int rightVal) throws IOException {
    return (int) execute((double) leftVal, (double) rightVal);
}
Why is the above, specifically the (double) leftVal, (double) rightVal part, a redundancy, and why does it work with one of the casts removed? The call to the first overload works no matter which of the two arguments is cast: both execute(leftVal, (double) rightVal) and execute((double) leftVal, rightVal) execute normally, producing no errors. I have always thought Java was very strict about explicitly identifying types, so I would expect some warning or error that there isn't a function that accepts a call like execute(double, int). I have an idea, though, that the first cast helps the compiler determine which overload to choose, so the second cast is implicit. Maybe because the types are primitive and easily castable? Also a mini-question: are both functions called overloads, or is every definition after the first an overload of the original?
Removing either one of the casts will still work, because in an invocation context a widening primitive conversion is permitted. So no, Java is not as strict about types as you thought :)
As per the Java Language Specification §5.3 Invocation Contexts
Invocation contexts allow an argument value in a method or constructor
invocation (§8.8.7.1, §15.9, §15.12) to be assigned to a corresponding
formal parameter.
Strict invocation contexts allow the use of one of the following:
an identity conversion (§5.1.1)
a widening primitive conversion (§5.1.2)
a widening reference conversion (§5.1.5)
A "widening primitive conversion" is... (§5.1.2)
19 specific conversions on primitive types are called the widening
primitive conversions:
byte to short, int, long, float, or double
short to int, long, float, or double
char to int, long, float, or double
int to long, float, or double
long to float or double
float to double
int to double is one of the above conversions, and is therefore permitted when you call a method. On the other hand, double to int is a narrowing primitive conversion, which is not allowed in an invocation context. This explains why the (double, double) method is unambiguously called.
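As a hedged illustration of that rule (a simplified sketch; the method bodies are my own placeholders, not the asker's implementation):

public class ExecuteDemo {
    static double execute(double leftVal, double rightVal) {
        System.out.println("double overload");
        return leftVal + rightVal;
    }

    static int execute(int leftVal, int rightVal) {
        // One explicit cast is enough: the remaining int argument is widened to double
        // automatically, so this call resolves to execute(double, double) instead of
        // recursing into execute(int, int).
        return (int) execute((double) leftVal, rightVal);
    }

    public static void main(String[] args) {
        System.out.println(execute(2, 3)); // prints "double overload", then 5
    }
}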
Why is the above, specifically the (double) leftVal, (double) rightVal part, a redundancy?
The cast you made is necessary for you to explicitly state that you want to call that base-level overloaded function; otherwise the call would just recurse into the same method. So yes, you are right in saying "the first cast helps the compiler determine which overload to choose".
So the second cast is implicit. Maybe because the types are primitive easily cast-able?
So if you are going from a primitive type with fewer bits (int is 32 bits) to a primitive type with more bits (double is 64 bits), Java does this automatically. However, if you go from a double (64 bits) to an int (32 bits), you as a developer need to cast explicitly, since you can run into information loss. This is why the (int) cast is necessary on the return: you are going from a double to an int, which can be lossy.
Some Example Code:
public class Test {
    public int intMethod(int intArg) {
        return intArg;
    }
    public long longMethod(long longArg) {
        // will NOT compile, since we didn't explicitly cast to the narrower int type (potentially lossy)
        return intMethod(longArg);
    }
}
public class Test {
    public int intMethod(int intArg) {
        return intArg;
    }
    public long longMethod(long longArg) {
        // WILL compile, since we explicitly cast to the narrower int type. Also note we do not need
        // to cast the result, since it goes from int --> long, which is not lossy
        return intMethod((int) longArg);
    }
}
public class Test {
    public int intMethod(int intArg) {
        // WILL compile. We don't need to cast the argument since it's not lossy,
        // but we do cast the return value since that conversion is lossy
        return (int) longMethod(intArg);
    }
    public long longMethod(long longArg) {
        return 3L;
    }
}
I have always thought Java was very strict on explicitly identifying types
Primitives are a more nuanced feature of the Java language. I think by types you're implicitly referring to objects in Java. You are right that Java is much stricter with respect to object casting. If we take the above example and use Long and Integer objects, it won't compile, with an error like "Cannot cast java.lang.Long to java.lang.Integer".
Examples that Won't Compile due to Invalid Casting
public class Test {
    public Integer intMethod(Integer intArg) {
        return intArg;
    }
    public Long longMethod(Long longArg) {
        return intMethod((Integer) longArg);
    }
}
public class Test {
    public Integer intMethod(Integer intArg) {
        return longMethod((Long) intArg);
    }
    public Long longMethod(Long longArg) {
        return 3L;
    }
}
PS: it's standard to refer to all methods with the same name as overloads of one another; it's not common to call one the "base" and say the rest overload that method.
Overload resolution is essentially early binding: the compiler verifies at compile time that the call binds to a suitable method.
In the following piece of code:
class Main {
    void m1(double x) {
        System.out.println("double");
    }
    void m1(float x) {
        System.out.println("float");
    }
    void m1(long x) {
        System.out.println("long");
    }
    void m1(byte x) {
        System.out.println("byte");
    }
    void m1(short x) {
        System.out.println("short");
    }
    void m1(int x) {
        System.out.println("int");
    }
    public static void main(String[] args) {
        Main m = new Main();
        m.m1(1);
    }
}
Why is the output "int" instead of "byte" or "long" or "short" or "float" or "double"?
If automatic conversion follows "byte" -> "short" -> "int" -> "long" -> "float" -> "double", then shouldn't the output be "double"?
(https://www.geeksforgeeks.org/type-conversion-java-examples/)
Int literals (like 1) are expressions of type int.
Overload resolution proceeds in a number of stages, looking for a method which can accept the parameters with the types you are passing.
If there is a single overload where the actual parameters have the same types as the formal parameters, invoke that.
If there is a single non-varargs overload where the actual parameters can be automatically converted (e.g. by widening or un/boxing) to be of the same types as the formal parameters, invoke that.
If there is a single varargs overload where the actual parameters can be automatically converted to be of the same types as the formal parameters invoke that.
Otherwise, error.
These phases are applied in turn, continuing until a match is found.
Since there is an overload of m1 which takes int, a match is found in phase 1, so no conversion of that value to another type needs to happen.
The automatic conversion only happens when a method accepts a wider type than the input. For example, if you remove all of the m1(..) methods except the long version, you can still pass in an int, and it will be automatically widened to a long.
In your example, there is a method that accepts an int, so Java will use that method instead, and no widening happens.
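A minimal sketch of that scenario (the class name is mine): with only the long overload left, the int argument is widened automatically.

public class WideningOnly {
    static void m1(long x) {
        System.out.println("long");
    }

    public static void main(String[] args) {
        m1(1); // the int literal 1 is widened to long; prints "long"
    }
}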
By definition, that primitive literal is of type int. Any numeric literal without a decimal point is of type int, unless you suffix it with l/L (1L) to turn it into a long. There is no byte literal for 1; you have to write (byte) 1 to get there.
The compiler looks for the best fit and uses the method that takes an int.
That is all there is to this. If you want to see the other methods invoked, either cast the value, to (long) for example, or use a value such as 1.0 instead of 1.
Regarding your comment: that automatic widening only kicks in when required. But in your case: A) you are using an int value, and B) there is an int-taking method. The compiler doesn't turn ints into longs into doubles for no reason!
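For instance (a small hedged sketch, assuming the Main class and the m instance from the question), these alternative calls in main would each pick a different overload:

m.m1((long) 1); // prints "long":   the cast makes the argument a long
m.m1(1.0);      // prints "double": 1.0 is a double literal
m.m1(1.0f);     // prints "float":  1.0f is a float literal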
I have always heard (and thought of) Java as a strongly typed language. But only recently did I notice something that I have been using almost on a daily basis: int and double overloading.
I can write the following, and it is valid Java code:
int i = 1;
double j = 1.5;
double k = i + j;
But, if I have a method, one of whose arguments is a double, I need to specify it:
public static <K, V> V getOrDefault(K k, Map<K, V> fromMap, V defaultvalue) {
    V v = fromMap.get(k);
    return (v == null) ? defaultvalue : v;
}
When I call the above method on a Map<String, Double>, the defaultvalue argument cannot be an int:
getOrDefault(aString, aStringDoubleMap, 0); // won't compile
getOrDefault(aString, aStringDoubleMap, 0d); // compiles and runs just fine
Why does Java overload an int to double (just like it does in addition), and then autobox it to Double? I think the answer lies in how Java does operator overloading (i.e. the overloading happens in the + operator, and not from int to double), but I am not sure.
Here's hoping that SO can help me out on this.
That's because primitives don't work with generics. They need to be boxed.
For the invocation
getOrDefault(aString, aStringDoubleMap, 0); // won't compile
to work, Java would have to box the 0 to an Integer, then somehow convert that to a Double. That's not allowed by the language. It's similar to why you can't do
Double value = 3; // Type mismatch: cannot convert from int to Double
From the JLS, on invocation contexts
If the type of the expression cannot be converted to the type of the
parameter by a conversion permitted in a loose invocation context,
then a compile-time error occurs.
The type of the expression, 0, an integer literal, is int. Loose invocation contexts are defined as
Loose invocation contexts allow a more permissive set of conversions,
because they are only used for a particular invocation if no
applicable declaration can be found using strict invocation contexts.
Loose invocation contexts allow the use of one of the following:
an identity conversion (§5.1.1)
a widening primitive conversion (§5.1.2)
a widening reference conversion (§5.1.5)
a boxing conversion (§5.1.7) optionally followed by widening reference conversion
an unboxing conversion (§5.1.8) optionally followed by a widening primitive conversion
int to Double is not supported by any of those.
If you simply had
public static void main(String[] args) throws Exception {
method(3);
}
public static void method(double d) {
}
it would work.
You're looking for the exciting section 5.2 of the Java Language Specification.
Basically, when you add an int and a double, a widening conversion is performed. But Java does not do this when trying to autobox an int to a Double; in fact, that is explicitly disallowed.
Java does not support operator overloading (the one exception is String concatenation with +).
double k = i + j;
Here, what happens is an implicit widening conversion: a value of a smaller type is widened to a larger type automatically.
As for getOrDefault, primitives won't work with generics, and that is where autoboxing comes in.
When you call getOrDefault(aString, aStringDoubleMap, 0d);, the 0d is autoboxed to a Double object.
But 0 cannot be autoboxed to a Double object in your first case.
Java will not implicitly perform a widening primitive conversion (0 to 0d) followed by a boxing conversion (double to Double).
An implicit cast from int to double, followed by boxing to Double, is not allowed.
0 can only be autoboxed to Integer.
0d can be autoboxed to Double.
The int -> double conversion is a widening conversion. Widening conversions do not lose data, so they are performed automatically.
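A small hedged sketch of those rules (the variable names are mine):

public class BoxingDemo {
    public static void main(String[] args) {
        Integer a = 0;      // the int literal 0 is boxed to Integer
        Double  b = 0d;     // the double literal 0d is boxed to Double
        double  c = 0;      // plain widening int -> double, no boxing involved
        // Double bad = 0;  // does not compile: widening followed by boxing is not applied
        System.out.println(a + " " + b + " " + c);
    }
}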
Here is what I know about overload resolution in Java:
The process of compiler trying to resolve the method call from given
overloaded method definitions is called overload resolution. If the
compiler can not find the exact match it looks for the closest match
by using upcasts only (downcasts are never done).
Here is a class:
public class MyTest {
    public static void main(String[] args) {
        MyTest test = new MyTest();
        Integer i = 9;
        test.TestOverLoad(i);
    }
    void TestOverLoad(int a) {
        System.out.println(8);
    }
    void TestOverLoad(Object a) {
        System.out.println(10);
    }
}
As expected the output is 10.
However if I change the class definition slightly and change the second overloaded method.
public class MyTest {
    public static void main(String[] args) {
        MyTest test = new MyTest();
        Integer i = 9;
        test.TestOverLoad(i);
    }
    void TestOverLoad(int a) {
        System.out.println(8);
    }
    void TestOverLoad(String a) {
        System.out.println(10);
    }
}
The output is 8.
Here I am confused. If downcasting is never used, then why did 8 get printed at all? Why did the compiler pick the TestOverLoad method that takes an int argument, which is a downcast from Integer to int?
The compiler will consider not a downcast, but an unboxing conversion for overload resolution. Here, the Integer i will be unboxed to an int successfully. The String method isn't considered because an Integer cannot be widened to a String. The only possible overload is the one that considers unboxing, so 8 is printed.
The reason that the first code's output is 10 is that the compiler will consider a widening reference conversion (Integer to Object) over an unboxing conversion.
Section 15.12.2 of the JLS, when considering which methods are applicable, states:
The first phase (§15.12.2.2) performs overload resolution without permitting boxing or unboxing conversion, or the use of variable arity method invocation. If no applicable method is found during this phase then processing continues to the second phase.
The second phase (§15.12.2.3) performs overload resolution while allowing boxing and unboxing [...]
In Java, resolving methods in case of method overloading is done with the following precedence:
1. Widening
2. Auto-boxing
3. Var-args
The Java compiler considers widening a primitive parameter more desirable than performing an auto-boxing operation.
In other words, since auto-boxing was introduced in Java 5, the compiler chooses the older style (widening) before the newer style (auto-boxing), keeping existing code working as it did before. The same applies to var-args.
In your first code snippet, a widening reference conversion occurs (Integer to Object) rather than unboxing (Integer to int). In your second snippet, widening from Integer to String cannot happen, so unboxing happens instead.
Consider the below program which proves all the above statements:
class MethodOverloading {
    static void go(Long x) {
        System.out.print("Long ");
    }
    static void go(double x) {
        System.out.print("double ");
    }
    static void go(Double x) {
        System.out.print("Double ");
    }
    static void go(int x, int y) {
        System.out.print("int,int ");
    }
    static void go(byte... x) {
        System.out.print("byte... ");
    }
    static void go(Long x, Long y) {
        System.out.print("Long,Long ");
    }
    static void go(long... x) {
        System.out.print("long... ");
    }
    public static void main(String[] args) {
        byte b = 5;
        short s = 5;
        long l = 5;
        float f = 5.0f;
        // widening beats autoboxing
        go(b);
        go(s);
        go(l);
        go(f);
        // widening beats var-args
        go(b, b);
        // auto-boxing beats var-args
        go(l, l);
    }
}
The output is:
double double double double int,int Long,Long
P.S: My answer is a modified version of an example given in SCJP.
Widening beats boxing, and boxing beats var-args. In your example widening cannot happen, so boxing/unboxing is applied and the Integer is unboxed. Nothing out of the ordinary.
Actually, in the second example no downcasting occurs. What happens is the following:
1. The Integer is unwrapped/unboxed to the primitive type int.
2. Then the TestOverLoad(int a) method is called.
In the main method you declare an Integer:
Integer i = 9;
Then call:
test.TestOverLoad(i);
whereas you have two overloaded versions of TestOverLoad():
TestOverLoad(int a);
TestOverLoad(String a);
Here the second overloaded version of TestOverLoad() takes a completely different argument type, String. That's why Integer i is unboxed to the primitive type int, and after that the first overloaded version is called.
All objects in Java extend the class Object, including Integer. The two classes have the following relationship: an Integer "is an" Object, because Integer extends Object. In your first example, the method with the Object parameter is used.
In the second example, no methods are found that accept an Integer. In this case Java uses what is called auto-unboxing to resolve the Integer wrapper class to a primitive int. Thus, the method with the int parameter is used.
While the accepted answer of @rgettman points to the right source, to be precise §15.12.2.2 and §15.12.2.3 of JLS Section 15.12.2 discuss applicability, not resolution, which is what the OP asked about. In the example the OP provided, both TestOverLoad methods are applicable, i.e. each would be resolved successfully in the absence of the other.
Instead, 15.12.2.5. Choosing the Most Specific Method discusses the resolution of the applicable methods.
It reads:
One applicable method m1 is more specific than another applicable
method m2, for an invocation with argument expressions e1, ..., ek, if
any of the following are true:
...
m1 and m2 are applicable by strict or loose invocation, and where m1 has formal parameter types S1, ..., Sn and m2 has formal parameter
types T1, ..., Tn, the type Si is more specific than Ti for argument
ei for all i (1 ≤ i ≤ n, n = k).
So, in the first example provided by the OP, for an argument i of type Integer, testOverLoad(Object a) is considered more specific than testOverLoad(int a).
This is happening due to widening and narrowing type casting.
Widening means a smaller type can be accommodated in a larger type without any loss of information.
Widening type casting is automatic.
That means a byte value can be automatically cast to short, int, long, float, or double.
byte -> short -> int -> long -> float -> double
Widening goes from left to right.
Type Casting in Java
Hope this answers your question!
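A minimal sketch of that widening chain (all names here are mine):

public class WideningChain {
    public static void main(String[] args) {
        byte b = 5;
        short s = b;     // byte  -> short  (automatic)
        int i = s;       // short -> int    (automatic)
        long l = i;      // int   -> long   (automatic)
        float f = l;     // long  -> float  (automatic)
        double d = f;    // float -> double (automatic)
        // int back = l; // does not compile: long -> int is narrowing and needs an explicit cast
        System.out.println(d);
    }
}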
You can check with one more example:
public class HelloWorld {
    void show(String c) {
        System.out.println("String overloaded method");
    }
    void show(Object c) {
        System.out.println("Object overloaded method");
    }
    public static void main(String[] args) {
        Integer i = 9;
        new HelloWorld().show(i);
    }
}
Here you will get "Object overloaded method": an Integer cannot be converted to a String, so the Object overload is chosen.
Could anyone please explain why this code throws an ambiguous overload error? Surely the Integer method is more specific and applicable?
Thanks,
Ned
package object_orientation;

public class Ambiguous {
    // ambiguous error: the compiler cannot decide between boxing (Integer...) and widening (long...)
    static void overload(Integer... d) {
        System.out.println("Integer");
    }
    static void overload(long... d) {
        System.out.println("Long");
    }
    public static void main(String a[]) {
        int i = 1;
        overload(i);
    }
}
These concepts in Java should help:
Boxing + widening is allowed, but not widening + boxing.
These rules of widening, boxing, and varargs should help:
Precedence: primitive widening > boxing > varargs.
Widening then boxing (WB) is not allowed.
Boxing then widening (BW) is allowed.
While overloading, widening + varargs and boxing + varargs can only be used in a mutually exclusive manner.
Widening between wrapper classes is not allowed.
Widening + varargs and boxing + varargs are individually allowed (but not in overloaded versions of the same method).
Boxing + widening is preferred over boxing + varargs.
Hope this helps.
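As a hedged sketch of how the ambiguity can be resolved at the call site (the class name and the fixes are mine, not from the original answer):

public class AmbiguousFixed {
    static void overload(Integer... d) {
        System.out.println("Integer");
    }
    static void overload(long... d) {
        System.out.println("Long");
    }
    public static void main(String[] args) {
        int i = 1;
        // overload(i);              // ambiguous: boxing to Integer... and widening to long... both apply
        overload((long) i);          // only widening applies, so this resolves to the long... overload
        overload(new Integer[] {i}); // an explicit Integer[] matches only the Integer... overload
    }
}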