Function ambiguity in Java

In Java, I'm facing a function ambiguity while overloading a variadic function.
I'm defining the functions like this:
static void f(Integer... a)
{
    // .. some statements
}
static void f(float f, Integer... a)
{
    // .. some other statements
}
and calling them with the following function calls:
f(1,2);
f(1.2f,1,2);
This error message pops up:
error: reference to f is ambiguous, both method f(Integer...) in Test and method f(float,Integer...) in Test match
f(1,2);
^
Can someone help me understand which basic Java concept I'm missing here? Thanks.

Both methods can accept the first parameter you've entered in f(1,2);, and that's why you get the ambiguity.
If you write
f((float)1,2);
you won't get the error, for example.

In the Java language, int values can be automatically widened to float values (the opposite is not allowed).
Therefore, when you make the call
f(1,2)
the Java compiler matches all the signatures that automatic type conversions allow, i.e.:
f(int, int)
f(float, int)
f(float, float)
f(int, float)
f(int, ...)
f(float, ...)
There lies the ambiguity: the compiler does not know whether you meant to call f(int, ...) or f(float, ...).

When several methods are applicable, the compiler tries to find the most specific one. If two methods are maximally specific, there is an ambiguity and you get an error.
In summary (the actual rules are a little more complicated):
the compiler determines that for f(1, 2), both methods are applicable (1 can be an Integer or a float)
the compiler then needs to determine which method is more specific: in your case, neither is more specific in the sense defined by the specification, because there is no subtype relationship between float and Integer. (Examples: if you had f(int i, Integer... j) vs. f(float f, Integer... j), the former would be more specific, because int is more specific than float among primitives. Similarly, if you had f(Number n, Integer... i) vs. f(Integer... i), the latter would be more specific, because Integer extends Number.)

The compiler could match either function against the list of integers; that's where the confusion lies. Any call with a list of integers you supply can also be read as a call with a single leading float (the first int widened) followed by the remaining integers.
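To make the answers above concrete, here is a minimal sketch (the class name and return strings are illustrative, not from the question) showing the ambiguous call and two ways to force one overload:

```java
public class AmbiguityDemo {
    static String f(Integer... a) {
        return "f(Integer...)";
    }

    static String f(float f, Integer... a) {
        return "f(float, Integer...)";
    }

    public static void main(String[] args) {
        // f(1, 2);  // does not compile: reference to f is ambiguous
        System.out.println(f((float) 1, 2));           // float can't box to Integer, so only one match
        System.out.println(f(new Integer[] { 1, 2 })); // Integer[] can't widen to float, so only one match
    }
}
```

Passing the array directly sidesteps the varargs machinery entirely, which is the other standard way out of such ambiguities.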

Related

Why Does Java Compiler Treat Long Data Type As Double Instead of Integer By Default?

I'm new to Java and I stumbled upon this while testing some code. Why does Java pass x (of data type long) into the function that takes a double parameter instead of the one with an int parameter? I would appreciate it if somebody could kindly explain why (even though it might be an easy question to most of you!). Thank you in advance!
public class Hello {
    public static void main(String[] args) {
        long x = 1;
        System.out.println("Before calling the method, x is " + x);
        increase(x);
        System.out.println("After calling the method, x is " + x);
        System.out.println();
        double y = 1;
        System.out.println("Before calling the method, y is " + y);
        increase(y);
        System.out.println("After calling the method, y is " + y);
    }

    public static void increase(int p) {
        p += 1;
        System.out.println(" Inside the method is " + p);
    }

    public static void increase(double p) {
        p += 2;
        System.out.println(" Inside the method is " + p);
    }
}
The conversions allowed when calling a method are defined by JLS Chapter 5.
Implicit primitive conversions must be widening, that is, not result in a loss of magnitude (though in the case of long to double, may result in a loss of precision).
There are six kinds of conversion contexts in which poly expressions
may be influenced by context or implicit conversions may occur. Each
kind of context has different rules for poly expression typing and
allows conversions in some of the categories above but not others. The
contexts are:
...
Strict invocation contexts (§5.3, §15.9, §15.12), in which an argument
is bound to a formal parameter of a constructor or method. Widening
primitive, widening reference, and unchecked conversions may occur.
Loose invocation contexts (§5.3, §15.9, §15.12), in which, like strict
invocation contexts, an argument is bound to a formal parameter.
Method or constructor invocations may provide this context if no
applicable declaration can be found using only strict invocation
contexts. In addition to widening and unchecked conversions, this
context allows boxing and unboxing conversions to occur.
JLS 11 Chapter 5
Casting from long to int is a narrowing primitive conversion, as it may result in a loss of magnitude information. So the int overload is not called unless you first explicitly cast the argument to (int).
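A short sketch of that last point, reusing the question's overloads (the return strings are illustrative): the long argument widens to double, and only an explicit narrowing cast reaches the int overload:

```java
public class NarrowingDemo {
    static String increase(int p) {
        return "int overload: " + (p + 1);
    }

    static String increase(double p) {
        return "double overload: " + (p + 2);
    }

    public static void main(String[] args) {
        long x = 1;
        System.out.println(increase(x));       // long widens to double: "double overload: 3.0"
        System.out.println(increase((int) x)); // explicit narrowing cast: "int overload: 2"
    }
}
```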
You have two overloaded methods for increase(), with int and double input parameters, and you are passing an argument of type long.
In Java, widening conversions happen along this chain:
byte -> short -> int -> long -> float -> double
So when you pass a long value, the compiler first looks for an exact match among the parameter types. If none is found, it widens to the next type in the chain.
Hence a long value can be accepted by the method taking a double parameter.
Please go through the URLs below.
Type Conversion In Java
Type Conversion - Oracle Documentation
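The widening chain above also explains which overload wins when several would fit: the compiler picks the most specific applicable one, i.e. the nearest type to the right of the argument's type. A hedged sketch (the method names and return strings are illustrative):

```java
public class WideningDemo {
    static String pick(long v)   { return "long"; }
    static String pick(float v)  { return "float"; }
    static String pick(double v) { return "double"; }

    public static void main(String[] args) {
        int i = 42;
        // All three overloads are applicable via widening, but pick(long) is
        // the most specific (long widens to float, which widens to double).
        System.out.println(pick(i)); // prints "long"
    }
}
```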

Operator '+' cannot be applied to 'T', 'T' for bounded generic type [duplicate]

This question already has answers here:
Generic type extending Number, calculations
(3 answers)
Closed 5 years ago.
The following code snippet throws the error shown in the title. I can't figure out why it doesn't work: since T is of type Number, I expected operator '+' to be fine.
class MathOperationV1<T extends Number> {
    public T add(T a, T b) {
        return a + b; // error: Operator '+' cannot be applied to 'T', 'T'
    }
}
I would appreciate it if anyone could provide some clues. Thanks!
There is a fundamental problem with the implementation of this idea of generic arithmetic. The problem is not in your reasoning of how, mathematically speaking, this ought to work, but in the implications of how it should be compiled to bytecodes by the Java compiler.
In your example you have this:
class MathOperationV1<T extends Number> {
    public T add(T a, T b) {
        return a + b; // error: Operator '+' cannot be applied to 'T', 'T'
    }
}
Leaving boxing and unboxing aside, the problem is that the compiler does not know how it should compile your + operator. Which of the multiple overloaded versions of + should it use? The JVM has different arithmetic instructions (i.e. opcodes) for different primitive types; the sum operator for integers is an entirely different opcode from the one for doubles (see, for example, iadd vs. dadd). If you think about it, that makes total sense, because integer arithmetic and floating-point arithmetic are completely different, and different types have different sizes, etc. (see, for example, ladd).
Also think about BigInteger and BigDecimal, which extend Number as well but have no autoboxing support, and therefore no opcodes to deal with them directly. There are probably dozens of other Number implementations like those in other libraries out there. How could the compiler know how to deal with them?
So, when the compiler infers that T is a Number, that is not sufficient to determine which opcodes are valid for the operation (i.e. boxing, unboxing and arithmetic).
Later you suggest changing the code a bit to:
class MathOperationV1<T extends Integer> {
    public T add(T a, T b) {
        return a + b;
    }
}
And now the + operator can be implemented with an integer sum opcode, but the result of the sum would be an Integer, not a T, which would still make this code invalid, since from the compiler's standpoint T could be something other than Integer.
I believe there is no way to make your code generic enough that you can forget about these underlying implementation details.
--Edit--
To answer your question in the comment section, consider the following scenario based on the last definition of MathOperationV1<T extends Integer> above.
You're correct that the compiler will perform type erasure on the class definition, and it will be compiled as if it were:
class MathOperationV1 {
    public Integer add(Integer a, Integer b) {
        return a + b;
    }
}
Given this type erasure it would seem as if using a subclass of Integer ought to work here, but that's not true since it would make the type system unsound. Let me try to demonstrate that.
The compiler cannot only worry for the declaration site, it also has to consider what happens in the multiple call sites, possibly using a different type argument for T.
For example, imagine (for the sake of my argument) that there is a subclass of Integer that we'll call SmallInt, and assume our code above compiled fine (this is actually your question: why doesn't it compile?).
What would happen then if we did the following?
MathOperationV1<SmallInt> op = new MathOperationV1<>();
SmallInt res = op.add(SmallInt.of(1), SmallInt.of(2));
As you can see, the result of the op.add() method is expected to be a SmallInt, not an Integer. However, the result of a + b in our erased class definition would always be an Integer, not a SmallInt (because + uses the JVM integer arithmetic opcodes), and therefore this result would be unsound.
You may now wonder: if the type erasure of MathOperationV1 always returns an Integer, how can the call site expect something else (like SmallInt) anyway?
Well, the compiler adds some extra magic here by casting the result of add to SmallInt, but only after it has ensured that the operation can't return anything other than the expected type (this is why you see a compiler error).
In other words, your call site would look like this after erasure:
MathOperationV1 op = new MathOperationV1(); // with Integer type erasure
SmallInt res = (SmallInt) op.add(SmallInt.of(1), SmallInt.of(2));
But that would only work if you could ensure that add always returns a SmallInt (which we cannot, due to the operator problems described in my original answer).
So, as you can see, type erasure just ensures that, per the rules of subtyping, you can return anything that extends Integer; but once your call site declares a type argument for T, you're supposed to assume that same type wherever T appeared in the original code, in order to keep the type system sound.
You can actually verify these points by using the class file disassembler that ships with the JDK (a tool in its bin directory called javap). I could provide finer examples if you think you need them, but you would do well to try it yourself and see what's happening under the hood :-)
Auto(un)boxing only works for types that can be converted to their primitive equivalents. Addition is only defined for the numeric primitive types (int, long, short, char, double, float, byte) plus String. Number does not have a primitive equivalent, so it can't be unboxed; that's why you can't add them.
+ isn't defined for Number. You can see this by writing (with no generics):
Number a = 1;
Number b = 2;
System.out.println(a + b);
This simply won't compile.
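If you do need to combine arbitrary Numbers, one common workaround is to unbox explicitly through doubleValue(). Treat this as a sketch: it loses precision for BigInteger/BigDecimal and for very large longs:

```java
public class NumberSum {
    static double add(Number a, Number b) {
        // a + b does not compile; Number has no primitive equivalent,
        // but every Number can report a double approximation of itself.
        return a.doubleValue() + b.doubleValue();
    }

    public static void main(String[] args) {
        System.out.println(add(1, 2));     // 3.0
        System.out.println(add(1.5f, 2L)); // 3.5
    }
}
```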
You can't do addition generically directly: you need a BiFunction, a BinaryOperator, or similar, which is able to apply the operation to the inputs:
import java.util.function.BinaryOperator;

class MathOperationV1<T extends Number> {
    private final BinaryOperator<T> combiner;

    MathOperationV1(BinaryOperator<T> combiner) {
        this.combiner = combiner; // initialize the combiner in the constructor
    }

    public T add(T a, T b) {
        return combiner.apply(a, b);
    }
}
But then again, you may as well just use BinaryOperator<T> directly: MathOperationV1 adds nothing over and above that standard interface (actually, it provides less).
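For completeness, here is a self-contained sketch of the BinaryOperator approach with a usage example (the constructor is an assumption, since the answer only hints at it with a comment):

```java
import java.util.function.BinaryOperator;

public class MathDemo {
    static class MathOperationV1<T extends Number> {
        private final BinaryOperator<T> combiner;

        MathOperationV1(BinaryOperator<T> combiner) {
            this.combiner = combiner;
        }

        public T add(T a, T b) {
            return combiner.apply(a, b);
        }
    }

    public static void main(String[] args) {
        // The caller supplies the type-appropriate addition:
        MathOperationV1<Integer> ints = new MathOperationV1<>(Integer::sum);
        MathOperationV1<Double> doubles = new MathOperationV1<>(Double::sum);
        System.out.println(ints.add(1, 2));        // 3
        System.out.println(doubles.add(1.5, 2.5)); // 4.0
    }
}
```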

Variable Argument List in Between as a Parameter with different data type

I was working with varargs and came to learn that
public void myMethod(String... args, int val){}
produces the error: The variable argument type String of the method myMethod must be the last parameter.
If both parameters were of type String, the error would be understandable, but in this case I am setting int as the 2nd parameter, so at runtime the JVM could check the argument types and differentiate a call like:
myMethod("HI", "HELLO", 9)
Wouldn't that be feasible? Is there some other point I am missing that makes this an error?
There are a number of reasons why the language designers chose to disallow this:
Differentiation has to be done by the compiler, because varargs are entirely a compiler feature. They are simply converted to implicit array constructors.
Your example could in theory work out, but the restrictions would be very harsh: For example, it would be hard for the compiler to see where the first varargs parameter ends in a situation like this
void foo(Object... objs, String... s)
foo("a", "b", "c")
Another example is this:
void bar(int... ints, long... longs)
bar(1, 2, 3, 4)
You could argue that int and long are different data types, but unfortunately it is possible to use integers where long is expected due to widening conversions. And another example involves boxing:
void baz(Object... objs, int... ints)
baz(1, 2, 3, 4)
int and Object are not directly related, but int can be converted to Integer, which is a subclass of Object.
It gets even more complicated the more overloaded methods and varargs parameters you have.
A bit technical, but still relevant: in the bytecode, varargs is not a parameter attribute, but a method modifier flag (ACC_VARARGS). This means that either a method is variadic (the last parameter is varargs) or it's not.
If you really need a varargs parameter in your method, move it to the last position. The only situation in which you couldn't do this is when you have multiple varargs parameters, which is impossible for a good reason.
If the compiler allowed this at the declaration site without an error, it would be almost impossible to produce a useful error at the use site.
String... is the same as String[], except that you don't need to create an array explicitly at the use site. You can declare a method
void foo(String[] strings, int i)
and call it as
foo(new String[] { "a", "b" }, 2)
with just a few more keystrokes, and without all the struggle introduced by varargs parameters.
I have been working on a JVM-language compiler for more than a year now, and I considered adding this feature as well. However, the method resolution system is already extremely complicated (and my decision to include custom infix and prefix operators / methods and named and default parameters doesn't make it any easier), and multiple varargs parameters wouldn't make it any easier.
The error message is correct: if you use varargs, it has to be the last parameter. You might argue the compiler could work it out, but it doesn't. Just swap the order of your arguments.
This is specified in the JLS, section 8.4.1. The language specification calls for this as part of a formal parameter list:
FormalParameterList:
ReceiverParameter
FormalParameters , LastFormalParameter
LastFormalParameter
The LastFormalParameter token is where varargs are defined and permissible.
LastFormalParameter:
{VariableModifier} UnannType {Annotation} ... VariableDeclaratorId
FormalParameter
Pay attention to the .... That's the only place in the grammar of formal parameters in which this is allowed. FormalParameters does not make that allowance.
FormalParameters:
FormalParameter {, FormalParameter}
ReceiverParameter {, FormalParameter}
Disambiguation will become a nightmare when you overload methods with varargs in random positions... (not to say multiple varargs).
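A sketch of the suggested fix: move the varargs parameter to the last position (the names are taken from the question; the body is illustrative):

```java
public class VarargsDemo {
    // myMethod(String... args, int val) does not compile;
    // reordering so the varargs parameter comes last is legal:
    static String myMethod(int val, String... args) {
        return val + ":" + String.join(",", args);
    }

    public static void main(String[] args) {
        System.out.println(myMethod(9, "HI", "HELLO")); // 9:HI,HELLO
    }
}
```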

overloaded method call ambiguity with ternary operator

I am creating a simple wrapper class for numbers. Simply put, I want it to display the value 42 instead of 42.0; however, it should display the value 1.6180338 as that number. Simple enough.
Code
private double number;
...
@Override
public String toString() {
    return String.valueOf(
        number == longValue()
            ? longValue()
            : number );
}
...
@Override
public long longValue() {
    return (long) number;
}
Issue
The problem is that the value 42.0 is always displayed in the toString(...) method, not the correct 42.
My Thoughts
Although the String.valueOf(...) method has many overloads to display the correct primitive values as strings, there is ambiguity about which overload to use: it could be String.valueOf(double) or String.valueOf(long). This is because of the ternary operator and its result type.
I thought that the compiler would be able to discern the long type and call the appropriate String.valueOf(long) method. That appears not to be the case; instead, the compiler promotes the ternary expression to a single type at compile time. In this case that type is double, because a long can be safely converted to a double, so String.valueOf(double) is selected.
Question
I know this isn't possible in Java right now, but is something like this available in other languages currently? And is there some kind of definition that explains this method, and can you explain it in more detail?
I mean a definition like Covariance or Contra-variance. Note: I realize that the definition is not one of those two :)
As Java is a statically typed language, the result of the ternary operator must have a single type, determined during compilation, so the compiler can continue handling the outer expression. As both branches of the ternary are numeric, they are promoted to the more precise type, as described in JLS 15.25 and 5.6.2. You can work around this by casting the operands to Object:
return String.valueOf(
    number == longValue()
        ? (Object) longValue()
        : (Object) number );
This way you box the numbers and use String.valueOf(Object), which works nicely for both branches.
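A self-contained sketch contrasting the promoted ternary with the Object-cast fix (the class and method names are illustrative, not from the question):

```java
public class TernaryDemo {
    static String render(double number) {
        long l = (long) number;
        // The long and double branches are promoted to double,
        // so String.valueOf(double) is chosen:
        String promoted = String.valueOf(number == l ? l : number);
        // Casting both branches to Object suppresses numeric promotion,
        // so String.valueOf(Object) is chosen instead:
        String boxed = String.valueOf(number == l ? (Object) l : (Object) number);
        return promoted + " / " + boxed;
    }

    public static void main(String[] args) {
        System.out.println(render(42.0));      // 42.0 / 42
        System.out.println(render(1.6180338)); // 1.6180338 / 1.6180338
    }
}
```

An even simpler fix, of course, is to call String.valueOf inside each branch, so each call gets exactly the overload its operand wants.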

Why does Java's type erasure not break this?

public class Test {
    public static class Nested<T> {
        public T val;
        Nested(T val) { this.val = val; }
    }

    public static void main(String[] args) {
        Nested<Integer> a = new Nested<Integer>(5);
        Nested<Integer> b = new Nested<Integer>(2);
        Integer diff = a.val - b.val;
    }
}
The above code works fine. However, if I add a method to Nested:
T diff(Nested<T> other) { return this.val - other.val; }
I get a compilation error:
operator - cannot be applied to T,T
This makes sense to me. The type of T gets erased at runtime, so Java can't apply an operator that's only defined for certain classes like Integer. But why does a.val - b.val work?
Edit:
Lots of good answers. Thanks, everyone. The gist of it, if I understand correctly, is that the compiler can add casts to Integer in a.val - b.val because it knows a and b were instantiated as Nested<Integer>. However, since this.val - other.val occurs inside the body of a generic method definition (where T could still be anything), the compiler cannot add the casts that would be necessary to make - work. This leads to a more interesting question: if the Java compiler were capable of inlining, would it be possible for a generic function like diff to work?
The difference between the two is whether you are inside a generic method or you are outside of it.
You got it absolutely right that inside the method T is not known to be an Integer, so operator minus - cannot be applied. However, when you are in main(), outside the generic method, the compiler knows that you've instantiated Nested with Integer, so it knows very well how to apply the operator. Even though the implementation of the generic has erased the type to produce the code for Nested<T>, the compiler does not think of a and b in terms of Nested<T>: it has enough knowledge to insert an appropriate cast, unbox the results, and apply the minus - operator.
You are getting a compile-time error, not a runtime one.
public static void main(String[] args) {
    Nested<Integer> a = new Nested<Integer>(5);
    Nested<Integer> b = new Nested<Integer>(2);
    Integer diff = a.val - b.val;
}
Here, the compiler knows that both Ts are Integer: you declared <Integer>.
T diff(Nested<T> other) { return this.val - other.val; }
Here, the compiler is not certain about T. It could be anything, and the numeric-only operator - is not allowed for just anything.
a.val - b.val works because it is validated by the compiler, not at runtime. The compiler "sees" that you're using <Integer>, so it compiles and runs OK; at runtime there is no problem even with erasure, because the compiler has already validated it.
Because the code doesn't live within Nested, the type is known. The compiler can clearly see that a.val - b.val is an Integer minus an Integer, which can be auto-boxed. The compiler essentially rewrites it to
Integer diff = Integer.valueOf(((Integer) a.val).intValue() - ((Integer) b.val).intValue())
The .intValue and .valueOf calls come from auto-unboxing and auto-boxing.
The type casts are safe for the compiler to insert because you used a parameterized type Nested.
True, technically, a.val could be something else, like a Calendar object, since the type is unknown at runtime. But if you are using generics, the compiler trusts that you aren't doing anything dumb to circumvent it. Therefore, if a.val or b.val were anything other than Integers, a ClassCastException would be thrown at runtime.
Because the generic method body must compile for any T, while a.val - b.val is checked at compile time with a concrete type.
In the first case, the compiler knows that the type is Integer, and the - operation is allowed for integers.
In the second case, the type of T is not known to the compiler in advance, so it cannot be sure whether the - operation is valid. Hence the compiler error.
Consider calling the method as diff(Nested<Book> other): there is no way a Book can be subtracted from another.
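A runnable sketch of the point all the answers make: the subtraction compiles at the use site, where T is known to be Integer, while the same expression inside the generic class would not:

```java
public class ErasureDemo {
    static class Nested<T> {
        final T val;
        Nested(T val) { this.val = val; }
        // T diff(Nested<T> other) { return this.val - other.val; }
        // ^ would not compile: inside the class, T could be anything
    }

    public static void main(String[] args) {
        Nested<Integer> a = new Nested<>(5);
        Nested<Integer> b = new Nested<>(2);
        // Here the compiler knows T is Integer, so it inserts the casts and
        // unboxing itself, roughly:
        // Integer.valueOf(((Integer) a.val).intValue() - ((Integer) b.val).intValue())
        Integer diff = a.val - b.val;
        System.out.println(diff); // 3
    }
}
```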
