Calling Java vararg method from Scala with primitives - java

I have the following code in Java:
public class JavaClass {
    public static void method(Object x) {
    }
    public static void varargsMethod(Object... x) {
    }
}
When I try and access it from Scala,
object FooUser {
  JavaClass.method(true)
  JavaClass.varargsMethod(true) // <-- compile error
}
I get the following compile error:
type mismatch;
 found   : Boolean(true)
 required: java.lang.Object
Note: primitive types are not implicitly converted to AnyRef.
You can safely force boxing by casting x.asInstanceOf[AnyRef].
The error message is very helpful and shows how to fix the error, but I was wondering why the compiler is (apparently) happy to implicitly convert a scala.Boolean in one method call but not the other. Is this a bug or intentional?
Updated to add:
I'm using Scala 2.8. If I make the varargsMethod signature
public static <T> void varargsMethod(T... xs) {
instead, then the error also goes away. I'm still puzzled as to why the compiler can't figure it out.

Scala varargs and Java varargs are different. You need to do a conversion:
def g(x: Any*) = x.asInstanceOf[scala.runtime.BoxedObjectArray]
  .unbox(x.getClass)
  .asInstanceOf[Array[Object]]
...
JavaClass.varargsMethod(g(true))
or (in 2.8.0+)
JavaClass.varargsMethod(java.util.Arrays.asList(true))

Since scala.Boolean is a subclass of scala.AnyVal but not scala.AnyRef (translated to java.lang.Object), a Boolean cannot be passed to a method expecting Object(s).
You can use the companion object scala.Boolean to "box" (in Java's sense, of course) a boolean into java.lang.Boolean:
JavaClass.varargsMethod(Boolean.box(true))
The other AnyVal classes have corresponding box methods (e.g. Int.box). There are also unbox methods to do the opposite.
A more complicated use case:
JavaClass.varargsMethod(Seq(1, 2, 3, 4).map(Int.box): _*) // passes 1, 2, 3, 4
I don't know when these were added to the standard library, but with these you don't have to use the implementation classes of scala.runtime.*.

Note: with Scala 2.13.x, this works out-of-the-box (no pun intended), without having to manually box the value.

You could probably file a bug about it; it seems like it should be an error in both cases or in neither. I'm not sure it's something that will ever be fixed, though, as it's probably caused by some cleverness in the implementation of varargs that prevents the boxing from taking place.

Related

Need help in understanding Java tutorial on Generics

I was reading a Java tutorial from here. I am having trouble understanding a simple line.
The tutorial says the declaration of Collections.emptyList is:
static <T> List<T> emptyList();
So if we write List<String> listOne = Collections.emptyList();, it works because the Java compiler is able to infer the type parameter from the assignment: the returned value should be of type List<String>.
Now consider a method: void processStringList(List<String> stringList). Now it states:
processStringList(Collections.emptyList());
The Java SE 7 compiler generates an error message similar to the following:
List<Object> cannot be converted to List<String>
The compiler requires a value for the type argument T, so it starts with the value Object. Consequently, the invocation of Collections.emptyList returns a value of type List<Object>, which is incompatible with the method processStringList.
Now what do they mean by: so it starts with the value Object? I mean start doing what?
Basically this is about the capabilities of the compiler. In other words: to a certain degree, the "amount" of possible type inference is an implementation detail.
With Java 7, you sometimes have to use type helpers/hints/witnesses, where you would go Collections.<String>emptyList() to tell the compiler about that missing part.
Later versions of the compiler improved the situation, so that you can almost always just write Collections.emptyList().
Regarding "The compiler requires a value for the type argument T so it starts with the value Object": that is actually quite simple. The Java compiler has to implement an algorithm that, in the end, infers a specific type. As pseudo-code, that might look like:
Class<?> inferType(SomeSyntaxTree construct) {
I am just using Class here to indicate that the algorithm will return something that resembles a known type. Now, that method could be implemented like this:
    Class<?> inferredType = Object.class;
    while (whatever) {
        refine inferredType
    }
    return inferredType;
}
In other words: that is a very common approach when you "search" for some value: you initialize with the "most generic" value (in the Java type system, that would be Object.class), and then you see if that generic value can be refined by applying whatever algorithm.
In our case, the refinement might end up determining that "the most specific type that can be used is String"; but if no further refinement is possible, you end up with your "initial default", which is Object.
The statement
processStringList(Collections.emptyList());
works fine in Java 8 (and in later versions as well, I assume). The compiler in this case is smart enough to infer the type argument by checking the expected parameter type of the method.
In older versions, the compiler only inferred <T> from the assignment target (as in List<String> listOne = Collections.emptyList();); when the call is used directly as a method argument, it defaults <T> to java.lang.Object. But note that List<Object> and List<String> are not compatible.
You can declare the method like void processString(List<? super String> list) to avoid the error.
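For illustration, here is a minimal, self-contained sketch of that wildcard workaround (the class name WildcardSketch is assumed, not from the question): List<Object> is a valid List<? super String>, so even a compiler that infers T as Object for Collections.emptyList() accepts the call.
import java.util.Collections;
import java.util.List;
class WildcardSketch {
    // List<? super String> accepts List<Object>, List<CharSequence>, List<String>, ...
    static void processString(List<? super String> list) {
        System.out.println(list.isEmpty());
    }
    public static void main(String[] args) {
        processString(Collections.emptyList()); // compiles on Java 7 and Java 8 alike
    }
}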

Why does Java allow calling methods with type arguments that don't have type parameters? [duplicate]

Say we have two methods like the following:
public static <T> T genericReturn() { /*...*/ }
public static String stringReturn() { /*...*/ }
In calling any method, you can supply the type witness regardless of whether or not there is any requirement:
String s;
s = Internet.<String>genericReturn(); //Type witness used in return type, returns String
s = Internet.<Integer>stringReturn(); //Type witness ignored, returns String
However I'm not seeing any realistic use for this in Java at all, unless the type cannot be inferred (which is usually indicative of a bigger issue). Additionally the fact that it is simply ignored when it is not appropriately used seems counterintuitive. So what's the point of having this in Java at all?
From the JLS, §15.12.2.1:
If the method invocation includes explicit type arguments, and the member is a generic method, then the number of type arguments is equal to the number of type parameters of the method.
This clause implies that a non-generic method may be potentially applicable to an invocation that supplies explicit type arguments. Indeed, it may turn out to be applicable. In such a case, the type arguments will simply be ignored.
It's followed by justification
This rule stems from issues of compatibility and principles of substitutability. Since interfaces or superclasses may be generified independently of their subtypes, we may override a generic method with a non-generic one. However, the overriding (non-generic) method must be applicable to calls to the generic method, including calls that explicitly pass type arguments. Otherwise the subtype would not be substitutable for its generified supertype.
Along this line of reasoning, let's construct an example. Suppose that in Java 1.4 the JDK had a class
public class Foo
{
    /** check obj, and return it */
    public Object check(Object obj) { ... }
}
Some user wrote a proprietary class that extends Foo and overrides the check method
public class MyFoo extends Foo
{
    public Object check(Object obj) { ... }
}
When Java 1.5 introduced generics, Foo.check was generified as
public <T> T check(T obj)
The ambitious backward compatibility goal requires that MyFoo still compiles in Java 1.5 without modification, and that MyFoo.check[Object->Object] is still an overriding method of Foo.check[T->T].
Now, according to aforementioned justification, since this compiles:
MyFoo myFoo = new MyFoo();
((Foo)myFoo).<String>check("");
This must compile too:
myFoo.<String>check("");
even though MyFoo.check is not generic.
That sounds like a stretch. But even if we buy that argument, the solution is still too broad and overreaching. The JLS could have tightened it up so that myFoo.<String,String>check and obj.<Blah>toString() are illegal, because the type-argument arity doesn't match. They probably didn't have time to iron it out, so they just took a simple route.
You need the type witness (the explicit type in the angle brackets) when type inference will not work (see http://docs.oracle.com/javase/tutorial/java/generics/genTypeInference.html).
The example given for this is when daisy-chaining calls like:
processStringList(Collections.emptyList());
where processStringList is defined as:
void processStringList(List<String> stringList)
{
// process stringList
}
This will result in an error because a List<Object> cannot be converted to a List<String>; thus the witness is required. Admittedly, you could do this in multiple steps (see the sketch below), but the witness can be far more convenient.
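For completeness, a minimal sketch of that multi-step alternative (the class name TwoStepSketch is assumed): giving the result a typed local variable lets even pre-Java-8 inference pick T = String from the assignment target, so no witness is needed.
import java.util.Collections;
import java.util.List;
class TwoStepSketch {
    static void processStringList(List<String> stringList) {
        System.out.println(stringList);
    }
    public static void main(String[] args) {
        List<String> empty = Collections.emptyList(); // T inferred from the variable's declared type
        processStringList(empty);                     // no type witness needed
    }
}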
It is because of backward- and/or forward-compatibility (at source level).
Imagine something like the introduction of generic parameters for some classes in Swing in JDK 7. It might happen with methods too (even with restrictions). If something turned out to be a bad idea, you can remove them and the source using it would still compile. In my opinion that is the reason why this is allowed, even if it is not used.
The flexibility, though, is limited. If you introduced type parameters with n types, you cannot later change to m types (in a source-compatible way) if m != 0 and m != n.
(I understand this might not answer your question as I am not the designer of Java, this is only my idea/opinion.)
Wondering why "Type Witness" was thrown around in Java? :D
To understand this we should start the story from Type Inference.
Type inference is a Java compiler's ability to look at each method invocation and corresponding declaration to determine the type argument (or arguments) that make the invocation applicable. The inference algorithm determines the types of the arguments and, if available, the type that the result is being assigned, or returned. Finally, the inference algorithm tries to find the most specific type that works with all of the arguments.
If the above algorithm is still not able to determine the type we have "Type Witness" to explicitly state what type we need. For example:
public class TypeWitnessTest {
    public static void main(String[] args) {
        print(Collections.emptyList());
    }
    static void print(List<String> list) {
        System.out.println(list);
    }
}
The above code does not compile:
TypeWitnessTest.java:11: error: method print in class TypeWitnessTest cannot be applied to given types;
print(Collections.emptyList());
^
required: List<String>
found: List<Object>
reason: actual argument List<Object> cannot be converted to List<String> by method invocation conversion
1 error
So you have the type witness to rescue you from this:
public class TypeWitnessTest {
    public static void main(String[] args) {
        print(Collections.<String>emptyList());
    }
    static void print(List<String> list) {
        System.out.println(list);
    }
}
This compiles and works fine; however, this has been further improved in Java 8:
JEP 101: Generalized Target-Type Inference
PS: I started from fundamentals so that other StackOverflow readers can also benefit.
EDIT:
Type witness on a non-generic method!
public class InternetTest {
    public static void main(String[] args) {
        String s;
        s = Internet.<String>genericReturn(); // Witness used in return type, returns a string
        s = Internet.<Integer>stringReturn(); // Witness ignored, returns a string
    }
}
class Internet {
    public static <T> T genericReturn() { return null; }
    public static String stringReturn() { return null; }
}
I tried to simulate @Rogue's example using javac 1.6.0_65, but it fails compilation with the following error:
javac InternetTest.java
InternetTest.java:5: stringReturn() in Internet cannot be applied to <java.lang.Integer>()
s = Internet.<Integer>stringReturn(); //Witness ignored, returns a string
^
1 error
@Rogue: If you were using an earlier version than the one I used, do let me know your javac version. And if you were, then it is not allowed now. :P

Java 8 generic type conversion fails whereas it was passing with Java 7

I faced this issue and hence am posting it as a complete solution.
With Java 8, the code below fails with a runtime exception. The problem is that the getInteger method is returning a generic Integer type and the print method expects an exact Object type.
public static void main(String[] args) {
    print(getInteger());
}
private static <T> T getInteger() {
    return (T) new Integer(10);
}
private static void print(Object... o1) {
    for (Object o : o1) {
        System.out.println(o);
    }
}
Your code mixes type inference with varargs.
In Java 7, it worked because there was no target-type inference here: the type of the getInteger call was resolved as just Object, and then that object was wrapped in an Object[] to fit the varargs call.
With Java 8, T is inferred from the target type as Object[]. Since you perform an unchecked cast in getInteger, this is completely valid, and mandated by the method signature resolution rules, which consider varargs wrapping only if resolution fails without it; here, that was not the case.
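To make that inference concrete, here is a small self-contained sketch (the class name InferenceDemo is assumed, not from the question) that writes the inferred type argument by hand; it fails in the same way as the original call.
public class InferenceDemo {
    @SuppressWarnings("unchecked")
    private static <T> T getInteger() {
        return (T) Integer.valueOf(10); // unchecked cast: nothing verifies T here
    }
    private static void print(Object... o1) {
        for (Object o : o1) {
            System.out.println(o);
        }
    }
    public static void main(String[] args) {
        // Spelling out the witness shows what Java 8 resolves print(getInteger()) to:
        // T = Object[]. The Integer returned through the unchecked cast is then
        // checkcast to Object[] at the call site, so a ClassCastException is thrown
        // before print() ever runs.
        print(InferenceDemo.<Object[]>getInteger());
    }
}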
Lesson: by performing the unchecked type cast you have waived your right to expect type safety and correctness. You should be prepared to take care of it yourself.
As a solution, choose either of the following.
Break the method call and the argument passing into two steps:
Integer i = getInteger();
print(i);
Or
Call the print method with an explicit type argument:
print(Main.<Integer>getInteger());
I hope it will help.

Scala Tuple type inference in Java

This is probably a very noobish question, but I was playing a bit with Scala/Java interaction and was wondering how well Tuples play along.
Now, I know that the (Type1, Type2) syntax is merely syntactic sugar for Tuple2<Type1, Type2>, and so, when calling a Scala method that returns a Tuple2 from a plain Java class, I was expecting to get a return type of Tuple2<Type1, Type2>.
For clarity, my Scala code:
def testTuple:(Int,Int) = (0,1)
Java code:
Tuple2<Object,Object> objectObjectTuple2 = Test.testTuple();
It seems the compiler expects this to have the type parameters <Object,Object> instead of, in my case, <Integer,Integer> (which is what I was expecting, at least).
Is my thinking deeply flawed and is there a perfectly reasonable explanation for this?
OR
Is there a problem in my Scala code, and there's a way of being more... explicit, in the cases that I know will provide an API for Java code?
OR
Is this simply a limitation?
Int is Scala's integer type, which is a value class, so it gets special treatment. It is different from java.lang.Integer. You can specify java.lang.Integer specifically if that's what you need.
[dlee@dlee-mac scala]$ cat SomeClass.scala
class SomeClass {
  def testIntTuple: (Int, Int) = (0, 1)
  def testIntegerTuple: (java.lang.Integer, java.lang.Integer) = (0, 1)
}
[dlee@dlee-mac scala]$ javap SomeClass
Compiled from "SomeClass.scala"
public class SomeClass implements scala.ScalaObject {
  public scala.Tuple2<java.lang.Object, java.lang.Object> testIntTuple();
  public scala.Tuple2<java.lang.Integer, java.lang.Integer> testIntegerTuple();
  public SomeClass();
}
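For illustration, a hypothetical Java caller of the class above (assuming SomeClass and the Scala standard library are on the classpath) shows the practical difference between the two signatures:
public class TupleCaller {
    public static void main(String[] args) {
        SomeClass sc = new SomeClass();
        // testIntTuple erases Scala's Int to java.lang.Object, so a cast is needed:
        scala.Tuple2<Object, Object> raw = sc.testIntTuple();
        int a = (Integer) raw._1();
        // testIntegerTuple exposes java.lang.Integer, so it unboxes directly:
        scala.Tuple2<Integer, Integer> boxed = sc.testIntegerTuple();
        int b = boxed._1();
        System.out.println(a + b);
    }
}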

Method has the same erasure as another method in type

Why is it not legal to have the following two methods in the same class?
class Test {
    void add(Set<Integer> ii) {}
    void add(Set<String> ss) {}
}
I get the compilation error
Method add(Set) has the same erasure add(Set) as another method in type Test.
While I can work around it, I was wondering why javac doesn't like this.
I can see that in many cases, the logic of those two methods would be very similar and could be replaced by a single
public void add(Set<?> set){}
method, but this is not always the case.
This is extra annoying if you want to have two constructors that take those arguments, because then you can't just change the name of one of the constructors.
This rule is intended to avoid conflicts in legacy code that still uses raw types.
Here's an illustration of why this was not allowed, drawn from the JLS. Suppose, before generics were introduced to Java, I wrote some code like this:
class CollectionConverter {
    List toList(Collection c) { ... }
}
You extend my class, like this:
class Overrider extends CollectionConverter {
    List toList(Collection c) { ... }
}
After the introduction of generics, I decided to update my library.
class CollectionConverter {
    <T> List<T> toList(Collection<T> c) { ... }
}
You aren't ready to make any updates, so you leave your Overrider class alone. In order to correctly override the toList() method, the language designers decided that a raw type was "override-equivalent" to any generified type. This means that although your method signature is no longer formally equal to my superclass' signature, your method still overrides.
Now, time passes and you decide you are ready to update your class. But you screw up a little, and instead of editing the existing, raw toList() method, you add a new method like this:
class Overrider extends CollectionConverter {
    @Override
    List toList(Collection c) { ... }
    @Override
    <T> List<T> toList(Collection<T> c) { ... }
}
Because of the override equivalence of raw types, both methods are in a valid form to override the toList(Collection<T>) method. But of course, the compiler needs to resolve a single method. To eliminate this ambiguity, classes are not allowed to have multiple methods that are override-equivalent—that is, multiple methods with the same parameter types after erasure.
The key is that this is a language rule designed to maintain compatibility with old code using raw types. It is not a limitation required by the erasure of type parameters; because method resolution occurs at compile-time, adding generic types to the method identifier would have been sufficient.
Java generics uses type erasure. The bit in the angle brackets (<Integer> and <String>) gets removed, so you'd end up with two methods that have an identical signature (the add(Set) you see in the error). That's not allowed because the runtime wouldn't know which to use for each case.
If Java ever gets reified generics, then you could do this, but that's probably unlikely now.
This is because Java generics are implemented with type erasure. Your methods would be translated, at compile time, to something like:
void add(Set ii);
void add(Set ss);
Method resolution occurs at compile time and doesn't consider type parameters (see erickson's answer). Both methods have the same signature without the type parameters, hence the error.
The problem is that Set<Integer> and Set<String> are actually treated as a plain Set by the JVM. Selecting a type for the Set (String or Integer in your case) is only syntactic sugar used by the compiler; the JVM can't distinguish between Set<String> and Set<Integer>.
Define a single method without a type argument, like void add(Set ii) {}.
You can decide on the element type when calling the method, based on your choice; it will work for any type of set.
It could be that the compiler translates Set<Integer> to Set<Object> in the Java byte code. If this is the case, Set<Integer> would be used only at the compile phase, for type checking.
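A tiny sketch (the class name ErasureDemo is assumed) shows that the element type really is gone at runtime: both sets share the same runtime class, so the erased signatures are all the JVM ever sees.
import java.util.HashSet;
import java.util.Set;
class ErasureDemo {
    public static void main(String[] args) {
        Set<Integer> ints = new HashSet<Integer>();
        Set<String> strings = new HashSet<String>();
        // Both report class java.util.HashSet; the type arguments exist only at compile time.
        System.out.println(ints.getClass());
        System.out.println(ints.getClass() == strings.getClass()); // true
    }
}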
I bumped into this when I tried to write something like:
Continuable<T> callAsync(Callable<T> code) {....}
and
Continuable<Continuable<T>> callAsync(Callable<Continuable<T>> veryAsyncCode) {...}
To the compiler, they both become the same definition:
Continuable callAsync(Callable veryAsyncCode) {...}
Type erasure literally means erasing the type argument information from generics.
This is VERY annoying, but it is a limitation that will be with Java for a while.
For the constructor case, not much can be done; for example, create two new subclasses specialized with different constructor parameters.
Or use initialization methods instead (virtual constructors?) with different names.
For methods performing similar operations, renaming would help, like:
class Test {
    void addIntegers(Set<Integer> ii) {}
    void addStrings(Set<String> ss) {}
}
Or with some more descriptive, self-documenting names for your cases, like addNames and addIndexes or such.
In this case you can use this structure:
class Test {
    void add(Integer... ii) {}
    void add(String... ss) {}
}
and inside the methods you can build the target collections (assuming, say, a values field of type List<Integer>):
void add(Integer... values) {
    this.values = Arrays.asList(values);
}
