Java 8 method references: validation of methods at compile time

I'd like to use the new method references of Java 8 to provide more validation of some code at compile time.
Let's say I have a validateMethod method which requires one parameter: a "method" to be validated. For example:
validateMethod(foo, "methodA");
Here, the method would validate that foo#methodA() exists, at runtime.
Using method references, I'd like to be able to do:
validateMethod(foo::methodA);
So the existence of the method would be validated at compile time.
The problem is that it seems method references have to be assigned to a functional interface. For example, this:
Object dummy = foo::methodA;
Generates the error : "The target type of this expression must be a functional interface".
If I create a functional interface whose method signature is compatible with methodA, it works:
@FunctionalInterface
public interface MyFunctionalInterface
{
    public String run();
}
MyFunctionalInterface dummy = foo::methodA;
Now the existence of foo#methodA() is validated at compile time, which is what I want!
But...
Let's say validateMethod doesn't know the signature of the method it has to validate. Is it still possible to implement it then?
Let's pretend we don't care about ambiguity and overloaded methods. Is it possible in Java 8 to implement some kind of method which would trigger the validation of any method reference?
For example:
public class Foo
{
    public String methodA()
    {
        return "methodA";
    }

    public String methodB(String str)
    {
        return "methodB";
    }

    public String methodC(String str, int nbr)
    {
        return "methodC";
    }
}
Foo foo = new Foo();
validateMethod(foo::methodA); // Compiles
validateMethod(foo::methodB); // Compiles
validateMethod(foo::methodC); // Compiles
validateMethod(foo::methodD); // Error!
Would it be possible to implement validateMethod in such a way that any method reference would be accepted, so the existence of the method would be validated at compile time?
I tried:
public void validateMethod(Object obj){}
But it doesn't work: "The target type of this expression must be a functional interface"
This would work:
@FunctionalInterface
public interface MyFunctionalInterface
{
    public String run();
}
public void validateMethod(MyFunctionalInterface param){}
But only for methodA of the Foo class, because its signature (no parameter) is compatible with the functional interface's method signature!
Would it be possible to implement the functional interface MyFunctionalInterface in such a way that any method reference would be a valid parameter and therefore would be validated at compile time?
Any other ways you see to validate the existence of a method at compile time?

You seem to be trying to use method references, which are really shorthands for lambda expressions, as method literals, i.e. syntactic references to methods (much like Foo.class is a syntactic reference to the Class instance of Foo). These two are not the same, and this is the reason for the impedance you encounter. What you are trying is an abuse of the language feature, which the javac compiler utterly resists.
Unfortunately, there are no method literals in Java, so you will have to describe the method by other means, e.g. Reflection, MethodHandles.Lookup, etc. I think it is very easy to come up with a reflective checker for this kind of thing, or even to build an annotation processor that checks the existence of given methods at compile time.
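For example, a minimal sketch of such a reflective check (this is a runtime check, not a compile-time one; the signature below is just one possible shape, not the validateMethod from the question):

public static void validateMethod(Object target, String name, Class<?>... paramTypes) {
    try {
        // getMethod throws NoSuchMethodException if no public method with this
        // name and parameter list exists on the runtime class of target.
        target.getClass().getMethod(name, paramTypes);
    } catch (NoSuchMethodException e) {
        throw new IllegalArgumentException("No such method: " + name, e);
    }
}

// e.g. validateMethod(foo, "methodB", String.class);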

You could try something like the following:
public class Validate {
    public String methodA() { return "methodA"; }
    public String methodB(String s) { return "methodB"; }
    public String methodC(String s, int n) { return "methodC"; }

    public static void main(String[] args) {
        Validate foo = new Validate();
        validateMethod(foo::methodA);
        validateMethod(foo::methodB);
        validateMethod(foo::methodC);
    }

    private interface Func0 { void method(); }
    private interface Func1<T> { void method(T t); }
    private interface Func2<T, U> { void method(T t, U u); }
    private interface Func3<T, U, V> { void method(T t, U u, V v); }

    public static void validateMethod(Func0 f) { }
    public static <T> void validateMethod(Func1<T> f) { }
    public static <T, U> void validateMethod(Func2<T, U> f) { }
    public static <T, U, V> void validateMethod(Func3<T, U, V> f) { }
}
But you'll need to provide an interface and an overload of validateMethod for every arity of method you need to validate. Also, it will not work if the method to validate is overloaded, unless you add an explicit cast:
// if there are two methodA's:
public String methodA() { return "methodA"; }
public String methodA(long x) { return "methodA"; }
validateMethod(foo::methodA); // this doesn't work
validateMethod((Func0)foo::methodA); // this does
validateMethod((Func1<Long>)foo::methodA); // so does this

Would interface Method { public Object runMethod(Object... args); } work? The only potential problem I see is methods that deal with primitive types, but perhaps they could be boxed automatically to Doubles/Longs; I don't have a running Java 8 compiler at the moment.

Related

Can I define the Negatable interface in Java?

Asking this question to clarify my understanding of type classes and higher kinded types, I'm not looking for workarounds in Java.
In Haskell, I could write something like
class Negatable t where
negate :: t -> t
normalize :: (Negatable t) => t -> t
normalize x = negate (negate x)
Then assuming Bool has an instance of Negatable,
v :: Bool
v = normalize True
And everything works fine.
In Java, it does not seem possible to declare a proper Negatable interface. We could write:
interface Negatable {
Negatable negate();
}
Negatable normalize(Negatable a) {
    return a.negate().negate();
}
But then, unlike in Haskell, the following would not compile without a cast (assume MyBoolean implements Negatable):
MyBoolean val = normalize(new MyBoolean()); // does not compile; val is a Negatable, not a MyBoolean
Is there a way to refer to the implementing type in a Java interface, or is this a fundamental limitation of the Java type system? If it is a limitation, is it related to higher-kinded type support? I think not: it looks like this is another sort of limitation. If so, does it have a name?
Thanks, and please let me know if the question is unclear!
Actually, yes. Not directly, but you can do it. Simply include a generic parameter and then derive from the generic type.
public interface Negatable<T> {
T negate();
}
public static <T extends Negatable<T>> T normalize(T a) {
return a.negate().negate();
}
You would implement this interface like so
public static class MyBoolean implements Negatable<MyBoolean> {
    public boolean a;

    public MyBoolean(boolean a) {
        this.a = a;
    }

    @Override
    public MyBoolean negate() {
        return new MyBoolean(!this.a);
    }
}
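A small usage sketch built on the classes above (my own example, showing that no cast is needed):

MyBoolean val = normalize(new MyBoolean(true)); // T is inferred as MyBoolean
System.out.println(val.a); // true, negated twice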
In fact, the Java standard library uses this exact trick to implement Comparable.
public interface Comparable<T> {
int compareTo(T o);
}
In general, no.
You can use tricks (as suggested in the other answers) that will make this work, but they do not provide all of the same guarantees that the Haskell typeclass does. Specifically, in Haskell, I could define a function like this:
doublyNegate :: Negatable t => t -> t
doublyNegate v = negate (negate v)
It is now known that the argument and return value of doublyNegate are both t. But the Java equivalent:
public <T extends Negatable<T>> T doublyNegate (Negatable<T> v)
{
return v.negate().negate();
}
doesn't, because Negatable<T> could be implemented by another type:
public class X implements Negatable<SomeNegatableClass> {
    public SomeNegatableClass negate () { return new SomeNegatableClass(); }

    public static void main (String[] args) {
        new X().negate().negate(); // results in a SomeNegatableClass, not an X
    }
}
This isn't particularly serious for this application, but it does cause trouble for other Haskell typeclasses, e.g. Equatable. There is no way of implementing a Java Equatable typeclass without using an additional object and sending an instance of that object around wherever we send values that need comparing, e.g.:
public interface Equatable<T> {
    boolean equal (T a, T b);
}

public class MyClass
{
    String str;

    public static class MyClassEquatable implements Equatable<MyClass>
    {
        public boolean equal (MyClass a, MyClass b) {
            return a.str.equals(b.str);
        }
    }
}
...
public <T> void methodThatNeedsToEquateThings (T a, T b, Equatable<T> eq)
{
    if (eq.equal (a, b)) { System.out.println ("they're equal!"); }
}
(In fact, this is exactly how Haskell implements type classes, but it hides the parameter passing from you so you don't need to figure out which implementation to send where)
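A usage sketch of that explicit-dictionary style, assuming the snippets above live in one class:

MyClass a = new MyClass();
MyClass b = new MyClass();
a.str = "same";
b.str = "same";
methodThatNeedsToEquateThings(a, b, new MyClass.MyClassEquatable()); // prints "they're equal!"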
Trying to do this with just plain Java interfaces leads to some counterintuitive results:
public interface Equatable<T extends Equatable<T>>
{
    boolean equalTo (T other);
}

public class MyClass implements Equatable<MyClass>
{
    String str;

    public boolean equalTo (MyClass other)
    {
        return str.equals(other.str);
    }
}

public class Another implements Equatable<MyClass>
{
    public boolean equalTo (MyClass other)
    {
        return true;
    }
}
....
MyClass a = ....;
Another b = ....;
if (b.equalTo(a))
assertTrue (a.equalTo(b));
....
You'd expect, due to the fact that equalTo really ought to be defined symmetrically, that if the if statement there compiles, the assertion would also compile, but it doesn't, because MyClass isn't equatable with Another even though the other way around is true. But with a Haskell Equatable type class, we know that if areEqual a b works, then areEqual b a is also valid. [1]
Another limitation of interfaces versus type classes is that a type class can provide a means of creating a value which implements the type class without having an existing value (e.g. the return operator for Monad), whereas for an interface you must already have an object of the type in order to be able to invoke its methods.
You ask whether there is a name for this limitation, but I'm not aware of one. It's simply because type classes are actually different to object-oriented interfaces, despite their similarities, because they are implemented in this fundamentally different way: an object is a subtype of its interface, thus carries around a copy of the interface's methods directly without modifying their definition, while a type class is a separate list of functions each of which is customised by substituting type variables. There is no subtype relationship between a type and a type class that has an instance for the type (a Haskell Integer isn't a subtype of Comparable, for example: there simply exists a Comparable instance that can be passed around whenever a function needs to be able to compare its parameters and those parameters happen to be Integers).
[1]: The Haskell == operator is actually implemented using a type class, Eq ... I haven't used this because operator overloading in Haskell can be confusing to people not familiar with reading Haskell code.
I interpret the question as
How can we implement ad-hoc polymorphism using typeclasses in Java?
You can do something very similar in Java, but without the type safety guarantees of Haskell - the solution presented below can throw errors at runtime.
Here is how you can do it:
Define interface that represents the typeclass
interface Negatable<T> {
T negate(T t);
}
Implement some mechanism that allows you to register instances of the typeclass for various types. Here, a static HashMap will do:
static HashMap<Class<?>, Negatable<?>> instances = new HashMap<>();

static <T> void registerInstance(Class<T> clazz, Negatable<T> inst) {
    instances.put(clazz, inst);
}

@SuppressWarnings("unchecked")
static <T> Negatable<T> getInstance(Class<?> clazz) {
    return (Negatable<T>) instances.get(clazz);
}
Define the normalize method that uses the above mechanism to get the appropriate instance based on the runtime class of the passed object:
public static <T> T normalize(T t) {
Negatable<T> inst = Negatable.<T>getInstance(t.getClass());
return inst.negate(inst.negate(t));
}
Register actual instances for various classes:
Negatable.registerInstance(Boolean.class, new Negatable<Boolean>() {
public Boolean negate(Boolean b) {
return !b;
}
});
Negatable.registerInstance(Integer.class, new Negatable<Integer>() {
public Integer negate(Integer i) {
return -i;
}
});
Use it!
System.out.println(normalize(false)); // Boolean `false`
System.out.println(normalize(42)); // Integer `42`
The main drawback is that, as already mentioned, the typeclass instance lookup can fail at runtime, not at compile time (as in Haskell). Using a static hash map is suboptimal too, because it brings all the problems of a shared global variable; this could be mitigated with more sophisticated dependency injection mechanisms. Automatically generating typeclass instances from other typeclass instances would require even more infrastructure (it could be done in a library). But in principle, it implements ad-hoc polymorphism using typeclasses in Java.
Full code:
import java.util.HashMap;
class TypeclassInJava {
static interface Negatable<T> {
T negate(T t);
static HashMap<Class<?>, Negatable<?>> instances = new HashMap<>();
static <T> void registerInstance(Class<T> clazz, Negatable<T> inst) {
instances.put(clazz, inst);
}
@SuppressWarnings("unchecked")
static <T> Negatable<T> getInstance(Class<?> clazz) {
return (Negatable<T>)instances.get(clazz);
}
}
public static <T> T normalize(T t) {
Negatable<T> inst = Negatable.<T>getInstance(t.getClass());
return inst.negate(inst.negate(t));
}
static {
Negatable.registerInstance(Boolean.class, new Negatable<Boolean>() {
public Boolean negate(Boolean b) {
return !b;
}
});
Negatable.registerInstance(Integer.class, new Negatable<Integer>() {
public Integer negate(Integer i) {
return -i;
}
});
}
public static void main(String[] args) {
System.out.println(normalize(false));
System.out.println(normalize(42));
}
}
You're looking for generics, plus self typing. Self typing is the notion of a generic placeholder that equates to the class of the instance.
However, self typing doesn't exist in Java.
You can get close with generics, but it's clunky:
public interface Negatable<T> {
public T negate();
}
Then
public class MyBoolean implements Negatable<MyBoolean> {
    @Override
    public MyBoolean negate() {
        //your impl
    }
}
Some implications for implementers:
They must specify themselves when they implement the interface, e.g. MyBoolean implements Negatable<MyBoolean>
Extending MyBoolean would require one to override the negate method again.
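A hypothetical subclass illustrates the second point (the name is invented for illustration):

class FancyBoolean extends MyBoolean {
    // The inherited negate() still returns MyBoolean, so the following line
    // would not compile unless FancyBoolean overrides negate() with a
    // covariant return type:
    // FancyBoolean b = new FancyBoolean().negate();
}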

Overloading a method: both methods have same erasure

I have the following code and it doesn't work: the error "both methods have same erasure" appears.
public class Foo<V> {
public static void main(String[] args) {
}
public void Bar(V value) {
}
public void Bar(Object value) {
}
}
Also I have this code:
public class Foo<V> {
public static void main(String[] args) {
}
public void Bar(B value) {
}
public void Bar(A value) {
}
}
class A {
}
class B extends A {
}
And this works. In the first case V is a child of Object, just like in the second case B is a child of A. Then why does the first case result in an error, while the second compiles successfully?
EDIT: What should I do to achieve method overloading, without raising an error?
What should I do to achieve method overloading, without raising an error?
Simple: don't try to overload the method with parameters with the same erasure.
A few options:
Just give the methods different names (i.e. don't try to use overloading)
Add further parameters to one of the overloads, to allow disambiguation (ideally only do this if you actually need those parameters; but there are examples in the Java API where there are junk parameters simply to avoid overloading issues).
Bound the type variable, as suggested by @Kayaman:
<V extends SomethingOtherThanObject>
V isn't "a child of Object". V is an unbounded generic type that erases to Object, resulting in the error. If the generic type were bounded, such as <V extends Comparable<V>>, it would erase to Comparable and you wouldn't get the error.
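For illustration, a small sketch of the bounded variant (reusing the names from the question; Comparable is just one possible bound):

public class Foo<V extends Comparable<V>> {
    // V now erases to Comparable, not Object, so the two signatures no longer clash:
    public void Bar(V value) {
    }
    public void Bar(Object value) {
    }
}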

Different return types of abstract method in java without casting

I am trying to implement and override a method with different return types without being forced to cast the return type.
public abstract class A {
public abstract Object getValue(String content);
}
public class B extends A {
public String getValue(String content) {...}
}
public class C extends A {
public int getValue(String content) {...}
}
public class D extends A {
public boolean getValue(String content) {...}
}
// Main loop:
for (A a : allAs)
{
// I want to use the method getValue() and corresponding to the type return a String, int or boolean without casting the return type
}
My question:
Is it possible to return different types without being forced to cast?
What does the abstract method have to look like to solve the problem?
I think there has to be a solution because the compiler should know the return type...
In your example, classes C and D will not compile. The overridden methods in them violate the Liskov substitution principle; that is, their return type is incompatible with the parent class's. What you are looking to do can be accomplished with generics, as long as you are willing to forego the use of primitives as your return type.
abstract class A<T> {
public abstract T getValue(String content);
}
class B extends A<String> {
public String getValue(String content) { }
}
class C extends A<Integer> {
public Integer getValue(String content) { }
}
class D extends A<Boolean> {
public Boolean getValue(String content) { }
}
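When the static type of the reference is the concrete subclass, the result can then be used without a cast, e.g. (a small usage sketch; note that iterating over a mixed collection of A<?> still only gives you Object):

B b = new B();
String s = b.getValue("content"); // no cast needed

C c = new C();
Integer i = c.getValue("content"); // no cast needed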
What you describe is not possible in general. However, if the subclass returns a "narrower" subtype of the superclass method return, this is called a "covariant return type" and is allowed in Java since JDK 1.5. However, based on your example I do not think covariant return is what you are looking for.
I assume what you want is
for (A a : allAs)
{
String b = a.getValue();
int c = a.getValue();
}
The problem here is, of course, that the compiler has no way of knowing at compile time which of those two statements is correct, and they can't both be correct.
You could use generics.
public abstract class A<T> {
public abstract T getValue(String content);
}
public class B extends A<String> {
public String getValue(String content) {...}
}
etc... int doesn't work as a return type for this, but Integer would.
I'm not typing at a compiler so there may be typos...
As noted by Jim and Chris, if you are looping over As, you can only get the "A" result, which is Object.
In your example, the definition of class B is OK, since String is a subclass of Object. The other two won't compile, since they use primitive types. You could replace them with Integer and Boolean returns to resolve that, though.
As for your main loop, if you're iterating over them as references to A, you'll only be able to use A's definition of the method, which returns Object.

Is there an interface similar to Callable but with arguments?

Is there an interface in Java similar to the Callable interface, that can accept an argument to its call method?
Like so:
public interface MyCallable<V> {
V call(String s) throws Exception;
}
I would rather avoid creating a new type if there already exists something that I can use. Or is there a better strategy to having multiple clients implement and plug in a callable routine?
Copied from here http://www.programmingforums.org/thread27905.html
Since Java 8 there is a whole set of Function-like interfaces in the java.util.function package. The one you're asking for specifically is simply Function.
Prior to Java 8, there was no general-purpose, built-in interface for this, but some libraries provided it.
For example Guava has the Function<F,T> interface with the method T apply(F input). It also makes heavy use of that interface in several places.
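For instance, a quick sketch with the Java 8 interfaces from java.util.function (note that, unlike the MyCallable in the question, apply does not declare any checked exception):

// java.util.function.Function and java.util.function.BiFunction
Function<String, Integer> length = s -> s.length();          // one argument
BiFunction<String, Integer, String> concat = (s, n) -> s + n; // two arguments

Integer len = length.apply("hello");   // 5
String joined = concat.apply("id-", 42); // "id-42"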
At first I thought that this is done with an interface, but then I found that it should be done using an abstract class.
I have solved it this way:
Edit: lately I just use this:
public static abstract class callback1<T>{
public abstract void run(T value);
}
public static abstract class callback2<T,J>{
public abstract void run(T value,J value2);
}
public static abstract class callback3<T,J,Z>{
public abstract void run(T value,J value2,Z value3);
}
public static abstract class callbackret1<R,T>{
public abstract R run(T value);
}
public static abstract class callbackret2<R,T,J>{
public abstract R run(T value,J value2);
}
public static abstract class callbackret3<R,T,J,Z>{
public abstract R run(T value,J value2,Z value3);
}
CallBack.java
public abstract class CallBack<TRet,TArg> {
public abstract TRet call(TArg val);
}
define method:
class Sample2
{
CallBack<Void,String> cb;
void callcb(CallBack<Void,String> CB)
{
cb=CB; //save the callback
cb.call("yes!"); // call the callback
}
}
use method:
sample2.callcb(new CallBack<Void,String>(){
    @Override
    public Void call(String val) {
        // TODO Auto-generated method stub
        return null;
    }
});
two arguments sample: CallBack2.java
public abstract class CallBack2<TRet,TArg1,TArg2> {
public abstract TRet call(TArg1 val1,TArg2 val2);
}
Notice that when you use Void as the return type you have to write return null;,
so here is a variation to fix that, because usually callbacks do not return any value.
void as return type: SimpleCallBack.java
public abstract class SimpleCallBack<TArg> {
public abstract void call(TArg val);
}
void as return type 2 args: SimpleCallBack2.java
public abstract class SimpleCallBack2<TArg1,TArg2> {
public abstract void call(TArg1 val1,TArg2 val2);
}
An interface is not useful for this.
Interfaces allow multiple types to match the same type by having a shared, predefined set of functions.
Abstract classes allow empty methods inside them to be completed later, when extending or instantiating.
I've had the same requirement recently. As others have explained many libs do provide 'functional' methods, but these do not throw exceptions.
An example of how some projects have provided a solution is the RxJava library where they use interfaces such as ActionX where 'X' is 0 ... N, the number of arguments to the call method. They even have a varargs interface, ActionN.
My current approach is to use a simple generic interface:
public interface Invoke<T,V> {
public T call(V data) throws Exception;
// public T call(V... data) throws Exception;
}
The second method is preferable in my case but it exhibits that dreaded "Type safety: Potential heap pollution via varargs parameter data" in my IDE, and that is a whole other issue.
Another approach I am looking at is to use existing interfaces such as java.util.concurrent.Callable, and in my implementation wrap checked exceptions in unchecked ones rather than letting them propagate.
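As a rough sketch of that wrapping idea, here using java.util.function.Function (my own choice of interface for illustration, since its apply method declares no checked exceptions):

// java.util.function.Function, java.nio.file.Files/Paths, java.io.UncheckedIOException
Function<String, byte[]> readBytes = path -> {
    try {
        return Files.readAllBytes(Paths.get(path));
    } catch (IOException e) {
        // wrap the checked exception so apply() can stay exception-free
        throw new UncheckedIOException(e);
    }
};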
Since Java 1.8 there is a Supplier<T> interface. It has a get() method instead of call() and it does not declare any Exception thrown.
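A one-line sketch:

Supplier<String> greeting = () -> "hello"; // java.util.function.Supplier
String s = greeting.get(); // no argument, no checked exception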
The answer is ambiguous. Strictly speaking, that is, "for the same purpose as the Callable interface", there is not.
There are similar classes, and depending on what you want, they may or may not be convenient. One of them is the SwingWorker. However, as the name implies, it was designed for use within the Swing framework. It could be used for other purposes, but this would be a poor design choice.
My best advice is to use one provided by an extension library (Jakarta-Commons, Guava, and so on), depending on what libraries you already use in your system.
Normally arguments are not required, as the anonymous class implementing Callable can capture any number of values from its enclosing scope.
final int i = 5;
final String word = "hello";
Future<String> future = service.submit(new Callable<String>() {
public String call() {
return word + i;
}
});
In this example i and word have been used, but you can "pass" any number of parameters.
I just had the same issue. You can wrap any method to return a Callable, and execute its returned value.
i.e.
main {
Callable<Integer> integerCallable = getIntegerCallable(400);
Future<Integer> future = executor.submit(integerCallable);
}
private Callable<Integer> getIntegerCallable(int n) {
return () -> {
try {
TimeUnit.SECONDS.sleep(n);
return 123;
} catch (InterruptedException e) {
throw new IllegalStateException("task interrupted", e);
}
};
}
Define an interface like so:
interface MyFunction<I, O> {
O call(I input);
}
Define a method:
void getOrderData(final MyFunction<JSONArray, Void> func){
...
JSONArray json = getJSONArray();
if(func!=null) func.call(json);
}
Usage example:
//call getOrderData somewhere in your code
getOrderData(new MyFunction<JSONArray, Void>() {
    @Override
    public Void call(JSONArray input) {
        parseJSON(input);
        return null;
    }
});

Overriding a method using type erasure

Today I stumbled upon something interesting.
Assume the following Java 6 class:
public class Ereasure {
    public Object get(Object o) {
        return null; // dummy
    }

    public static class Derived<T> extends Ereasure {
        // (1)
        @Override
        public Object get(T o) {
            return super.get(o);
        }
        // (2)
        /*
        @Override
        public Object get(Object o) {
            return super.get(o);
        }
        */
    }
}
If you try to compile the above example, the compiler says
Ereasure.java:9: method does not override or implement a method from a supertype
@Override
If you remove the @Override annotation (which should not be necessary!), it says
Ereasure.java:8: name clash: get(T) in Ereasure.Derived and get(java.lang.Object) in Ereasure have the same erasure, yet neither overrides the other
This is a bit contradictory, since T should erase to Object and therefore override the parent class's get method.
If you leave (1) unannotated and uncomment (2), so that (1) overloads (2), it does not work either.
Compiler output:
Ereasure.java:15: get(T) is already defined in Ereasure.Derived
public Object get(Object o) {
In conclusion, T is erased to Object, but cannot override the parent's get method.
My question now is: why doesn't at least one of the examples compile?
You can see in the example below why it is impossible to do what you want:
public class Erasure {
    public void set(Object o) {
        return;
    }

    // method overloading: (which is valid)
    public void set(String s) {
        return;
    }

    public static class Derived<S> extends Erasure {
        // Oops... which one am I supposed to override?
        // (It would actually be overloading if S was a concrete type
        // that is neither Object nor String.)
        @Override
        public void set(S o) { // does not compile
            super.set(o);
        }
    }
}
The solution to your problem is that Erasure should be a parameterized class.
At a simple guess, the compiler does not use the generic view when calculating overloads, which of course would not make sense, because sometimes T might be Object and other times another type. The overriding would then become dependent on a moving target T, which is downright wrong, especially if there were multiple methods all called "get" but with different single parameter types. In such a case it just wouldn't make sense, and at a guess they chose to keep things simple.
Consider a case where you have both a getter and a setter overridden as generics.
Derived<String> d = new Derived<String>();
Erasure e = d;
e.set(new Object());
String s = d.get(); //Class cast exception
The fundamental principle of generics is that a class cast exception can only happen if there is either (a) an explicit cast or (b) a warning. If you were allowed to do what you wanted, the above would throw an exception without either.
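For reference, here is roughly the class that scenario presupposes (my own reconstruction, not code from the question); javac rejects the generic setter for exactly the reason discussed above:

public class Erasure {
    public void set(Object o) { }
    public Object get() { return null; }

    public static class Derived<T> extends Erasure {
        private T value;
        // Fine: T erases to Object and a covariant return is allowed.
        public T get() { return value; }
        // Rejected by javac: same erasure as set(Object), yet it does not override it.
        // If it were accepted as an override, e.set(new Object()) above would silently
        // store an Object inside a Derived<String>, and String s = d.get() would then
        // throw a ClassCastException without any explicit cast or unchecked warning.
        public void set(T o) { value = o; }
    }
}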
