Compiler interpretation of overriding vs overloading - java

Forgive me if this question is primarily opinion based, but I have the feeling that it is not and there is a good reason for the choice. So, here's an example. Sorry, it's really long, but super simple:
Interface:
public interface Shape
{
    double area();
}
Implementing class 1:
import static java.lang.Math.PI;

public class Circle implements Shape
{
    private double radius;

    public Circle(double radius)
    {
        this.radius = radius;
    }

    public double area()
    {
        return PI * radius * radius;
    }
}
Implementing class 2:
public class Square implements Shape
{
    private double size;

    public Square(double sideLength)
    {
        size = sideLength;
    }

    public double area()
    {
        return size * size;
    }
}
Driver:
Shape[] shapes = new Shape[]{new Circle (5.3), new Square (2.4)};
System.out.println(shapes[0].area()); //prints 88.247...
System.out.println(shapes[1].area()); //prints 5.76
This works since .area() is overridden by Circle and Square. Now, here's where my question truly begins. Let's say that the driver has these methods:
public static void whatIs(Shape s)
{
    System.out.println("Shape");
}

public static void whatIs(Circle s)
{
    System.out.println("Circle");
}

public static void whatIs(Square s)
{
    System.out.println("Square");
}
If we call:
whatIs(shapes[0]); //prints "Shape"
whatIs(shapes[1]); //prints "Shape"
This happens because Java interprets the objects as Shapes and not Circle and Square. Of course we can get the desired results through:
if (shapes[0] instanceof Circle)
{
    whatIs((Circle) shapes[0]); //prints "Circle"
}
if (shapes[1] instanceof Square)
{
    whatIs((Square) shapes[1]); //prints "Square"
}
Now that we have a background my question is:
What reasons contributed to the compiler/language design such that whatIs(shapes[0]); will print "Shape?" As in, why can the Java compiler accurately distinguish between overridden methods for related objects, but not overloaded methods? More specifically, if the only methods that the driver has access to are:
public static void whatIs(Circle s)
{
    System.out.println("Circle");
}

public static void whatIs(Square s)
{
    System.out.println("Square");
}
and we attempt to call,
whatIs(shapes[0]);
whatIs(shapes[1]);
we will get two errors (one for Square and one for Circle) indicating that:
method Driver.whatIs(Square) is not applicable
actual argument Shape cannot be converted to Square by method invocation conversion
So, again, now that we've gotten to the nitty-gritty, why can Java not handle a situation like this? Is this done due to efficiency concerns, is it simply not possible due to some design decisions, is it considered bad practice for some reason, etc.?

Why can the Java compiler accurately distinguish between overridden methods for related objects, but not overloaded methods?
It can't.
It checks strictly by the type it can see and guarantee. If your code is shapes[0].area(), the compiler checks that Shape has an area() method and compiles the call into "call area() on that object". The concrete object that exists at runtime is then guaranteed to have that method; which version from which class is actually used is resolved dynamically at runtime.
Calling overloaded methods works the same way. The compiler sees a Shape and compiles the call into "invoke the whatIs(Shape) version". If you wanted to change that (and even allow there to be no Shape version at all), the compiler would need to determine the runtime type at compile time.
But it is, as far as I know, impossible to build a compiler that can determine at that point which type an object will have at runtime. Think, for example:
final Shape[] shapes = new Shape[] { new Circle(5.3), new Square(2.4) };
new Thread() {
    public void run() {
        shapes[0] = new Square(1.5);
    }
}.start();
whatIs(shapes[0]);
You must execute that code to find out.
The compiler could auto-generate code like
if (shapes[0] instanceof Circle)
{
    whatIs((Circle) shapes[0]); //prints "Circle"
}
for you, to achieve dynamic method invocation at runtime, but it does not. I don't know the reason, but it would be neat to have sometimes. That said, instanceof is often a sign of bad class design: rather than inspecting an object from the outside for differences, let the class itself behave differently so the outside does not need to know.
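One common alternative, sketched below under the assumption that you may add an accept method to the example's classes, is double dispatch (the visitor pattern): each subclass calls back with its own concrete type, so the right overload is chosen without any instanceof checks. The ShapeVisitor interface is hypothetical and not part of the original code.
interface ShapeVisitor {
    void visit(Circle c);
    void visit(Square s);
}

interface Shape {
    double area();
    void accept(ShapeVisitor v); // each implementation calls back with its concrete type
}

class Circle implements Shape {
    private final double radius;
    Circle(double radius) { this.radius = radius; }
    public double area() { return Math.PI * radius * radius; }
    public void accept(ShapeVisitor v) { v.visit(this); } // 'this' is a Circle here, so visit(Circle) is chosen
}

class Square implements Shape {
    private final double size;
    Square(double sideLength) { this.size = sideLength; }
    public double area() { return size * size; }
    public void accept(ShapeVisitor v) { v.visit(this); } // 'this' is a Square here, so visit(Square) is chosen
}
A caller would then write shapes[0].accept(new ShapeVisitor() { ... }) and get "Circle" or "Square" behavior chosen at runtime, with no casts.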

Java, with object-oriented features, supports polymorphism, so calling area will call the area method of the specific instance, whatever it is. This is determined at runtime.
However, this polymorphism is not supported with overloaded methods. The Java Language Specification, Section 8.4.9 covers this:
When a method is invoked (§15.12), the number of actual arguments (and
any explicit type arguments) and the compile-time types of the
arguments are used, at compile time, to determine the signature of the
method that will be invoked (§15.12.2). If the method that is to be
invoked is an instance method, the actual method to be invoked will be
determined at run time, using dynamic method lookup (§15.12.4).
That is, with overloaded methods, the method is chosen at compile time, using the compile time types of the variables, not at runtime like with polymorphism.

The dispatch to one of the whatIs methods is decided by the compiler at compile time. The call to one of the area methods is decided at runtime, based on the actual class of the object that is referenced.

Q: Why can the Java compiler accurately distinguish between overridden methods for related objects, but not overloaded methods ... why can Java not handle a situation like this?
A: You've got the question backwards.
Java ALLOWS you to distinguish between "overloading" and "overriding".
It doesn't try to second-guess what you mean, it gives you a choice to use one or the other.

Well, as a stupid answer, you could get the whatIs function to work fine THIS way (without any type checking)
abstract class Shape {
    public abstract String whatIs();
}

class Square extends Shape {
    public String whatIs() { return "Square"; }
}

class Circle extends Shape {
    public String whatIs() { return "Circle"; }
}
And then call them like this
Shape square = new Square();
Shape circle = new Circle();
System.out.println(square.whatIs()); // prints "Square"
System.out.println(circle.whatIs()); // prints "Circle"
Not at all the answer to the question you asked... But I couldn't resist.

Related

How to distinguish between objects of the same class with different references?

Sorry if the title is poorly worded, I don't really know how to ask this. But I want to distinguish between instances of the same class, but referenced as different classes. Please consider following code:
class Shape {}

class Circle extends Shape {}

class Main {
    public static void main(String[] args) {
        Circle myCircle = new Circle();
        Shape myOtherCircle = new Circle();

        System.out.print(myCircle.getClass() + ", ");
        System.out.println(myOtherCircle.getClass());

        System.out.print((myCircle instanceof Circle) + ", ");
        System.out.println(myOtherCircle instanceof Circle);

        System.out.print((myCircle instanceof Shape) + ", ");
        System.out.println(myOtherCircle instanceof Shape);

        System.out.print(Circle.class.isInstance(myCircle) + ", ");
        System.out.println(Circle.class.isInstance(myOtherCircle));

        System.out.print(Shape.class.isInstance(myCircle) + ", ");
        System.out.println(Shape.class.isInstance(myOtherCircle));
    }
}
We can distinguish objects by the type of their instance using the methods or operators shown above, but as shown, when trying to compare objects by the type of the reference there are no differences; the code prints this:
class Circle, class Circle
true, true
true, true
true, true
true, true
How can I distinguish myCircle and myOtherCircle by the type of their references? Thank you for reading, I appreciate all answers.
I don't think that is possible. The closest you can get is if these variables are fields of a class. Then you can access the type via the class definition:
class Main {
    Circle myMainCircle = new Circle();
    Shape myMainOtherCircle = new Circle();

    static class Shape {
    }

    static class Circle extends Shape {
    }

    public static void main(String[] args) throws Exception {
        System.out.println(Main.class.getDeclaredField("myMainCircle").getGenericType());
        System.out.println(Main.class.getDeclaredField("myMainOtherCircle").getGenericType());
    }
}
output:
class Main$Circle
class Main$Shape
The problem I see is "When would you not know the reference type?" For example we could make two methods.
public int special(Circle a) {
    return 1;
}

public int special(Shape a) {
    return 2;
}
Then using your example.
System.out.println(special(myCircle) + ", " + special(myOtherCircle));
(This will print 1, 2 because Java uses the most specific applicable method. myCircle is both a Circle and a Shape, but Circle is the more specific type.)
For this to work though, we already know that one class is referenced as a Shape and the other a Circle.
In other words, you want to know, at runtime, the declared type of a local variable (and not of a class field, since you can inspect those via reflection, as shown in Conffusion's answer).
Why would you need to check it at runtime? In what case could it be useful to wait until then? The compiler knows much earlier -- at compile time, as it keeps track of the declared types of all identifiers. In your code, if you write
Shape myOtherCircle = new Circle();
// ...
Circle c = myOtherCircle; // compile-time error: invalid implicit cast
This warns you, at compile time, that you are doing something fishy, because the compiler does not allow implicit narrowing conversions (that is, without an explicit (Circle) cast). Implicitly casting from a Shape to a Circle is bad, because the Shape could actually be a Square, which would lead to runtime errors. Going from a Circle to a Shape is a widening conversion, so no errors can occur.
So, my short answer would be:
you cannot do this at run-time because the compiler (and your IDE) already has this information at compile-time
On the other hand, with computers almost everything is possible, although some things are quite complicated. You could detect such cases at runtime by using the JDK's built-in Java compiler to compile a piece of code and report on the declared types of its variables, but that is certainly not a typical use (most people using it just want to compile and run code at runtime rather than inspect the AST), and it requires a deep dive into internals. Aside from the JDK's own compiler, you could use any of a number of other Java compilers to do something similar (but beware of possible incompatibilities with the standard one).
In Java, a Circle instance is a Circle, no matter if you store a reference to it in a variable declared as e.g. Circle, Shape or Object.
So, anything you do with the instance found in a variable only depends on the instance's class, not on the variable's declared type. That applies to things like the instanceof operator, the getClass() method and so on.
There's one exception: if you have some overloaded methods like
String myType(Object x) { return "Object"; }
String myType(Shape x) { return "Shape"; }
String myType(Circle x) { return "Circle"; }
then the compiler will decide which version to call, based on the type as it is known at compile-time. And if you pass a variable into a call of myType(), the type that the compiler assumes will be the variable's type, not knowing about the class of the instance that will later be referenced in the variable.
So then the following snippet might do what you want:
System.out.print(myType(myCircle) + ", ");
System.out.println(myType(myOtherCircle));
But, as for any given variable you statically know how you declared it, I don't see how such a construct might be useful.

Overridden and Overloaded Methods

What does it mean by "Casting affects the selection of overloaded methods at compile time but not overridden methods"?
I read the following passage on "Overridden methods and dynamic binding" (https://www.oreilly.com/library/view/learning-java-4th/9781449372477/ch06s01.html) and I couldn't understand the last paragraph
"In a previous section, we mentioned that overloaded methods are selected by the compiler at compile time. Overridden methods, on the other hand, are selected dynamically at runtime. Even if we create an instance of a subclass our code has never seen before (perhaps a new class loaded over the network), any overriding methods that it contains are located and used at runtime, replacing those that existed when we last compiled our code.
In contrast, if we created a new class that implements an additional, more specific, overloaded method, and replace the compiled class in our classpath with it, our code would continue to use the implementation it discovered originally. This situation would persist until we recompiled our code along with the new class. Another effect of this is that casting (i.e., explicitly telling the compiler to treat an object as one of its assignable types) affects the selection of overloaded methods at compile time but not overridden methods."
I couldn't understand the "casting" line: "Another effect of this is that casting (i.e., explicitly telling the compiler to treat an object as one of its assignable types) affects the selection of overloaded methods at compile time but not overridden methods."
That line is referring to the fact that
overloaded versions of a method are chosen at compile time, based on the compile-time types of the arguments that you are passing; whereas
overridden methods are chosen at run time, based on the classes of the objects on which you call each method.
To understand this distinction, consider a situation where you have both overrides and overloads, like this.
public class Person {
}
---------------------------------------------------------
public class Postman extends Person {
}
---------------------------------------------------------
public class Dog {
    public void barkAt(Person p) {
        System.out.println("Woof woof");
    }

    public void barkAt(Postman p) {
        System.out.println("Grrrr");
    }
}
---------------------------------------------------------
public class Rottweiler extends Dog {
    @Override
    public void barkAt(Person p) {
        System.out.println("I'm going to eat you.");
    }

    @Override
    public void barkAt(Postman p) {
        System.out.println("I'm going to rip you apart.");
    }
}
In this situation, we call one of these barkAt methods, like this.
Dog cujo = new Rottweiler();
Person pat = new Postman();
cujo.barkAt(pat);
Now in this particular case, it's the compiler that chooses whether cujo.barkAt(pat); calls a method like public void barkAt(Person p) or public void barkAt(Postman p). These methods are overloads of one another.
To do this, the compiler looks at the type of the expression being passed to the method - that is, the variable pat. The variable pat is of type Person, so the compiler chooses the method public void barkAt(Person p).
What the compiler doesn't do is choose whether it's the method from the Rottweiler class or the Dog class that gets called. That happens at run time, based on the class of the object on which the method gets called, NOT on the type of the variable that you call the method on.
So in this case, what matters is the class of the object called cujo. And in this example, cujo is a Rottweiler, so we get the overridden version of the method - the one defined in the Rottweiler class.
This example will print out I'm going to eat you.
To summarise:
The overload is chosen at compile time based on the parameter type.
The override is chosen at run time based on the object class.
Now, it's possible to use casting to change the compiler's choice of overload. It's not possible to use casting to change the run time choice of override. So, we could write
cujo.barkAt((Postman) pat);
This time, the parameter passed to the method is an expression of type Postman. The compiler chooses an overload accordingly, and this will print I'm going to rip you apart..
Casting affects the selection of overloaded methods at compile time but not overridden methods
Overloaded methods are selected at compile time, but overridden methods are selected at runtime.
Rule of thumb:
Java calls an overridden method based on the object a reference variable actually points to, not on the declared type of the reference variable.
The example below is self-explanatory. Hope it helps.
class Animal {
    public void speak() {
        System.out.print("Animal sounds/roars.");
    }
}

class Human extends Animal {
    @Override // Method is overridden.
    public void speak() {
        System.out.print("Humans talking english.");
    }

    public void speak(String words) { // Method is overloaded.
        System.out.print("We have brain. We are intelligent. " + words);
    }
}

class Earth {
    public static void main(String[] args) {
        Animal a = new Animal();
        a.speak(); // Prints "Animal sounds/roars."

        Human h = new Human();
        h.speak(); // Prints "Humans talking english."

        a = h; // Assign to the superclass reference variable. The underlying object is still a Human.
        a.speak(); // Prints "Humans talking english." because speak() is known to Animal at compile time,
                   // but at runtime the reference holds a Human object, so Java calls the overridden method.

        // a.speak("I want to be human."); // Compile-time error: speak(String) is not known to Animal.
    }
}

what is dynamic method resolution

I am currently reading Herbert Schildt's "Java: The Complete Reference", and there he uses the term "dynamic method resolution" and provides a little explanation, but I am not getting the full import of it, so I am asking for help in this forum.
While discussing interfaces, what he is saying is that dynamic method resolution helps resolve a method name at run time, and that it is achieved by declaring an interface variable and using it to refer to a class object, i.e.
interface i = new object();
Now what is so unique about that? You can also use a class variable to refer to the same object, like:
class c = new object();
So what is the use of the interface here? And why introduce this new term, "dynamic method resolution"?
Second he makes a point by saying: " when we use an interface variable to refer to instance of any class, and when you call a method through these interface variables, the method to be executed is looked up dynamically at run time allowing classes to be created later than the code which calls method on them. The calling code can dispatch through an interface without having to know anything about the callee".
Now, anything dealing with objects has to happen at run time, since objects are created at runtime. I don't understand what he meant by "allowing classes to be created ... on them".
Any help will be appreciated.
Here is a little example:
public interface Animal {
    public String sound();
}

public class Cat implements Animal {
    public String sound() { return "meow"; }
}

public class Dog implements Animal {
    public String sound() { return "woof"; }
}

public class Test {
    public static void main(String[] args) {
        Animal a;
        if (args.length > 0) {
            a = new Cat();
        } else {
            a = new Dog();
        }
        System.out.println(a.sound()); // prints "meow" or "woof"
    }
}
What is so unique about it? You can use a class variable also to refer to the same object
Yes. But you cannot use a single class variable to refer to an instance that can be an instance of any class that implements the interface.
In the Test class, if I declared a to have type Dog or Cat, there would be no way to get the code to compile. Without the ability to declare Animal a, I would need two distinct variables and two separate print statements.
This is what dynamic method resolution (aka polymorphism) gives you.
To understand his second point:
public class Test2 {
    public static void main(String[] args) {
        Animal a = PetShop.buyPet(args);
        System.out.println(a.sound()); // prints "meow" or "woof"
    }
}
The Test2 class will work with my Cat and Dog class from above. It will also continue to work without recompilation if in 3 years time I implement a Goldfish class and modify my PetShop class to stock aquatic pets. And indeed, it is even possible to implement the PetShop class so that it doesn't need to be changed or recompiled to support other kinds of pets.
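The PetShop class itself is not shown in the original answer; the following is only a sketch of how it could avoid recompilation, by instantiating whatever Animal implementation is named at runtime (the class-name convention and no-argument constructor are assumptions):
public class PetShop {
    // Instantiates whatever Animal implementation is named on the command line,
    // e.g. "Cat", "Dog", or a "Goldfish" class written years later
    // (a fully qualified class name in a real program).
    public static Animal buyPet(String[] args) {
        try {
            Class<?> petClass = Class.forName(args.length > 0 ? args[0] : "Dog");
            return (Animal) petClass.getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            throw new IllegalArgumentException("No such pet", e);
        }
    }
}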
Now, these examples are clearly not practical. However, the Java features that they illustrate are useful in real Java applications. Indeed, a program as simple as a classic "hello world" program relies on dynamic method lookup.
Dynamic method resolution means that a single method name can resolve to different implementations depending on the object. For example, consider a Shape interface that has a method named draw.
Suppose you have Rectangle and Circle classes that implement the Shape interface. When you create a Rectangle instance and call the draw method, it draws a rectangle; when you instantiate a Circle and call draw, it draws a circle.
With an interface you may assign a child object to a parent container.
Ex: Shape p = new Rectangle();
In this case it creates an instance of Rectangle and assigns it to the Shape variable p.
But through the Shape variable p you can call only the draw method; you cannot call other methods of the Rectangle object, since the reference has the parent interface type and the parent declares only draw.
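A minimal sketch of that last point (the method names here are illustrative):
interface Shape {
    void draw();
}

class Rectangle implements Shape {
    public void draw() { System.out.println("drawing a rectangle"); }
    public void resize() { /* Rectangle-only method */ }
}

class Demo {
    public static void main(String[] args) {
        Shape p = new Rectangle();
        p.draw();      // OK: resolved at runtime to Rectangle's draw()
        // p.resize(); // compile error: Shape declares no resize() method
    }
}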

Why do I receive a "cannot find symbol" error when compiling this interface method?

I'm using the interface Place:
public interface Place
{
    int distance(Place other);
}
But when I try to implement the interface and compile the following code, a "cannot find symbol - variable xcor" error returns.
public class Point implements Place
{
    private double xcor, ycor;

    public Point (double myX, double myY)
    {
        xcor = myX;
        ycor = myY;
    }

    public int distance(Place other)
    {
        double a = Math.sqrt( (other.xcor - xcor) * (other.xcor - xcor) + (other.ycor - ycor) * (other.ycor - ycor) ) + 0.5;
        return (int) a;
    }
}
Any ideas for what I might be doing wrong? Does it have something to do with the scope of the fields?
The interface Place has no member xcor. Add a method double getXcor() to your interface and implement it in your class. The same applies to ycor. Then you can use these getters in your implementation of the distance method.
public interface Place
{
    int distance(Place other);
    double getXcor();
    double getYcor();
}
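With those getters in place, distance can be written purely against the interface; a minimal sketch:
public class Point implements Place
{
    private double xcor, ycor;

    public Point(double myX, double myY)
    {
        xcor = myX;
        ycor = myY;
    }

    public double getXcor() { return xcor; }
    public double getYcor() { return ycor; }

    public int distance(Place other)
    {
        double dx = other.getXcor() - xcor;
        double dy = other.getYcor() - ycor;
        return (int) (Math.sqrt(dx * dx + dy * dy) + 0.5); // round to nearest int
    }
}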
It is because the Place interface doesn't expose a symbol named xcor; it only exposes the method distance. So when you have a variable of type Place, the compiler doesn't know which underlying type it is. You either have to have Place expose getters for xcor/ycor, or downcast the instance of Place to Point. Downcasting is usually frowned upon when you have multiple implementations of Place, but this is the usual problem with an interface that covers implementations with different underlying properties - like a Shape with area() and Rectangle and Circle implementations that compute the area in different ways.
A Place does not have xcor and ycor members; a Point does.
The parameter to the distance method is a Place, not a Point. Only the Point class has a field named xcor.
Several earlier posters mention the problem, which is that distance is being given a Place that has no xcor. I'm going to go a little further and suggest this is a place for generics. It probably makes no sense to define a distance function between arbitrary places. (If it does, then xcor and ycor can be pulled up into an abstract class between Place and Point.)
public interface Place<T> {
    int distance(Place<T> other);
}

class Point implements Place<Point> etc.

What's the nearest substitute for a function pointer in Java?

I have a method that's about ten lines of code. I want to create more methods that do exactly the same thing, except for a small calculation that's going to change one line of code. This is a perfect application for passing in a function pointer to replace that one line, but Java doesn't have function pointers. What's my best alternative?
Anonymous inner class
Say you want to have a function passed in with a String param that returns an int.
First you have to define an interface with the function as its only member, if you can't reuse an existing one.
interface StringFunction {
    int func(String param);
}
A method that takes the pointer would just accept a StringFunction instance, like so:
public void takingMethod(StringFunction sf) {
    int i = sf.func("my string");
    // do whatever ...
}
And would be called like so:
ref.takingMethod(new StringFunction() {
    public int func(String param) {
        // body
    }
});
EDIT: In Java 8, you could call it with a lambda expression:
ref.takingMethod(param -> bodyExpression);
For each "function pointer", I'd create a small functor class that implements your calculation.
Define an interface that all the classes will implement, and pass instances of those objects into your larger function. This is a combination of the "command pattern", and "strategy pattern".
@sblundy's example is good.
When there is a predefined number of different calculations you can do in that one line, using an enum is a quick, yet clear way to implement a strategy pattern.
public enum Operation {
    PLUS {
        public double calc(double a, double b) {
            return a + b;
        }
    },
    TIMES {
        public double calc(double a, double b) {
            return a * b;
        }
    };

    ...

    public abstract double calc(double a, double b);
}
Obviously, the strategy method declaration, as well as exactly one instance of each implementation are all defined in a single class/file.
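A minimal usage sketch (the Calculator class is illustrative):
public class Calculator {
    // The strategy is passed around like any other object.
    static double apply(Operation op, double a, double b) {
        return op.calc(a, b);
    }

    public static void main(String[] args) {
        System.out.println(apply(Operation.PLUS, 2.0, 3.0));  // 5.0
        System.out.println(apply(Operation.TIMES, 2.0, 3.0)); // 6.0
    }
}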
You need to create an interface that provides the function(s) that you want to pass around. eg:
/**
 * A simple interface to wrap up a function of one argument.
 *
 * @author rcreswick
 */
public interface Function1<S, T> {
    /**
     * Evaluates this function on its arguments.
     *
     * @param a The first argument.
     * @return The result.
     */
    public S eval(T a);
}
Then, when you need to pass a function, you can implement that interface:
List<Integer> result = CollectionUtilities.map(list,
    new Function1<Integer, Integer>() {
        @Override
        public Integer eval(Integer a) {
            return a * a;
        }
    });
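CollectionUtilities.map above is the answerer's own utility and is not shown here; the following is only a sketch of what such a map could look like, with its name and shape assumed from the usage above:
import java.util.ArrayList;
import java.util.List;

public class CollectionUtilities {
    // Applies fn to every element of list and collects the results.
    public static <S, T> List<S> map(List<T> list, Function1<S, T> fn) {
        List<S> result = new ArrayList<S>();
        for (T item : list) {
            result.add(fn.eval(item));
        }
        return result;
    }
}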
Finally, a related utility, zipWith, uses passed-in functions in the same way (here with a two-argument Function2 interface defined analogously to Function1):
public static <K, R, S, T> Map<K, R> zipWith(Function2<R, S, T> fn,
        Map<K, S> m1, Map<K, T> m2, Map<K, R> results) {
    Set<K> keySet = new HashSet<K>();
    keySet.addAll(m1.keySet());
    keySet.addAll(m2.keySet());

    results.clear();

    for (K key : keySet) {
        results.put(key, fn.eval(m1.get(key), m2.get(key)));
    }

    return results;
}
You can often use Runnable instead of your own interface if you don't need to pass in parameters, or you can use various other techniques to make the param count less "fixed" but it's usually a trade-off with type safety. (Or you can override the constructor for your function object to pass in the params that way.. there are lots of approaches, and some work better in certain circumstances.)
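For example, when nothing needs to be passed in or returned, a plain Runnable already serves as the "function pointer" interface (a minimal sketch):
public class RunnableDemo {
    // Runnable stands in for a zero-argument, void "function pointer".
    static void runTwice(Runnable action) {
        action.run();
        action.run();
    }

    public static void main(String[] args) {
        runTwice(new Runnable() {          // pre-Java-8 anonymous class
            public void run() { System.out.println("hello"); }
        });
    }
}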
Method references using the :: operator
You can use method references in method arguments where the method accepts a functional interface. A functional interface is any interface that contains only one abstract method. (A functional interface may contain one or more default methods or static methods.)
IntBinaryOperator is a functional interface. Its abstract method, applyAsInt, accepts two ints as its parameters and returns an int. Math.max also accepts two ints and returns an int. In this example, A.method(Math::max); makes parameter.applyAsInt send its two input values to Math.max and return the result of that Math.max.
import java.util.function.IntBinaryOperator;

class A {
    static void method(IntBinaryOperator parameter) {
        int i = parameter.applyAsInt(7315, 89163);
        System.out.println(i);
    }
}

import java.lang.Math;

class B {
    public static void main(String[] args) {
        A.method(Math::max);
    }
}
In general, you can use:
method1(Class1::method2);
instead of:
method1((arg1, arg2) -> Class1.method2(arg1, arg2));
which is short for:
method1(new Interface1() {
    int method1(int arg1, int arg2) {
        return Class1.method2(arg1, arg2);
    }
});
For more information, see :: (double colon) operator in Java 8 and Java Language Specification §15.13.
You can also do this (which in some RARE occasions makes sense). The issue (and it is a big issue) is that you lose all the typesafety of using a class/interface and you have to deal with the case where the method does not exist.
It does have the "benefit" that you can ignore access restrictions and call private methods (not shown in the example, but you can call methods that the compiler would normally not let you call).
Again, it is a rare case that this makes sense, but on those occasions it is a nice tool to have.
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;

class Main
{
    public static void main(final String[] argv)
        throws NoSuchMethodException,
               IllegalAccessException,
               IllegalArgumentException,
               InvocationTargetException
    {
        final String methodName;
        final Method method;
        final Main main;

        main = new Main();

        if (argv.length == 0)
        {
            methodName = "foo";
        }
        else
        {
            methodName = "bar";
        }

        method = Main.class.getDeclaredMethod(methodName, int.class);

        main.car(method, 42);
    }

    private void foo(final int x)
    {
        System.out.println("foo: " + x);
    }

    private void bar(final int x)
    {
        System.out.println("bar: " + x);
    }

    private void car(final Method method,
                     final int val)
        throws IllegalAccessException,
               IllegalArgumentException,
               InvocationTargetException
    {
        method.invoke(this, val);
    }
}
If you have just one line which is different, you could add a parameter such as a flag and an if (flag) statement which calls one line or the other.
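A minimal sketch of that flag-based alternative (the names are illustrative):
public class FlagDemo {
    // One method; the single line that differs is selected by the flag.
    static double calculate(double x, double y, boolean useProduct) {
        double value = useProduct ? x * y : x + y; // the one differing line
        return value * 2.0;                        // shared remainder of the method
    }

    public static void main(String[] args) {
        System.out.println(calculate(3, 4, false)); // 14.0
        System.out.println(calculate(3, 4, true));  // 24.0
    }
}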
You may also be interested to hear about work going on for Java 7 involving closures:
What’s the current state of closures in Java?
http://gafter.blogspot.com/2006/08/closures-for-java.html
http://tech.puredanger.com/java7/#closures
New Java 8 functional interfaces and method references using the :: operator.
Java 8 can treat method references (e.g. MyClass::new) as "function pointers" via @FunctionalInterface types. The method name does not need to match; only the method signature has to match.
Example:
@FunctionalInterface
interface CallbackHandler {
    public void onClick();
}

public class MyClass {
    public void doClick1() { System.out.println("doClick1"); }
    public void doClick2() { System.out.println("doClick2"); }

    public CallbackHandler mClickListener = this::doClick1;

    public static void main(String[] args) {
        MyClass myObjectInstance = new MyClass();
        CallbackHandler pointer = myObjectInstance::doClick1;
        Runnable pointer2 = myObjectInstance::doClick2;
        pointer.onClick();
        pointer2.run();
    }
}
So, what do we have here?
Functional interface - an interface, annotated or not with @FunctionalInterface, which contains only one abstract method declaration.
Method reference - just special syntax that looks like objectInstance::methodName, nothing more, nothing less.
Usage example - just an assignment and then a call of the interface method.
YOU SHOULD USE FUNCTIONAL INTERFACES FOR LISTENERS ONLY AND ONLY FOR THAT!
Because all other such function pointers are really bad for code readability and understandability. However, direct method references sometimes come in handy, with forEach for example.
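For example, with forEach a direct method reference reads cleanly (a small illustrative sketch):
import java.util.Arrays;
import java.util.List;

public class ForEachDemo {
    public static void main(String[] args) {
        List<String> names = Arrays.asList("Ann", "Bob");
        names.forEach(System.out::println); // Consumer<String> supplied as a method reference
    }
}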
There are several predefined Functional Interfaces:
Runnable -> void run( );
Supplier<T> -> T get( );
Consumer<T> -> void accept(T);
Predicate<T> -> boolean test(T);
UnaryOperator<T> -> T apply(T);
BinaryOperator<T> -> T apply(T, T);
Function<T,R> -> R apply(T);
BiFunction<T,U,R> -> R apply(T, U);
//... and some more of it ...
Callable<V> -> V call() throws Exception;
Readable -> int read(CharBuffer) throws IOException;
AutoCloseable -> void close() throws Exception;
Iterable<T> -> Iterator<T> iterator();
Comparable<T> -> int compareTo(T);
Comparator<T> -> int compare(T,T);
For earlier Java versions you should try the Guava Libraries, which have similar functionality and syntax, as Adrian Petrescu has mentioned above.
For additional research look at Java 8 Cheatsheet
and thanks to The Guy with The Hat for the Java Language Specification §15.13 link.
@sblundy's answer is great, but anonymous inner classes have two small flaws: the primary is that they tend not to be reusable, and the secondary is a bulky syntax.
The nice thing is that his pattern expands into full classes without any change in the main class (the one performing the calculations).
When you instantiate a new class you can pass parameters into that class which can act as constants in your equation--so if one of your inner classes looks like this:
f(x,y)=x*y
but sometimes you need one that is:
f(x,y)=x*y*2
and maybe a third that is:
f(x,y)=x*y/2
rather than making two anonymous inner classes or adding a "passthrough" parameter, you can make a single ACTUAL class that you instantiate as:
InnerFunc f=new InnerFunc(1.0);// for the first
calculateUsing(f);
f=new InnerFunc(2.0);// for the second
calculateUsing(f);
f=new InnerFunc(0.5);// for the third
calculateUsing(f);
It would simply store the constant in the class and use it in the method specified in the interface.
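A sketch of what such a class could look like; the Func interface name and shape are assumptions, standing in for whatever interface calculateUsing actually expects:
interface Func {
    double eval(double x, double y);
}

// One real, reusable class instead of several anonymous ones;
// the constructor argument is the constant in f(x,y) = x * y * constant.
class InnerFunc implements Func {
    private final double constant;

    InnerFunc(double constant) {
        this.constant = constant;
    }

    public double eval(double x, double y) {
        return x * y * constant;
    }
}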
In fact, if you KNOW that your function won't be stored/reused, you could do this:
InnerFunc f=new InnerFunc(1.0);// for the first
calculateUsing(f);
f.setConstant(2.0);
calculateUsing(f);
f.setConstant(0.5);
calculateUsing(f);
But immutable classes are safer--I can't come up with a justification to make a class like this mutable.
I really only post this because I cringe whenever I hear anonymous inner class--I've seen a lot of redundant code that was "Required" because the first thing the programmer did was go anonymous when he should have used an actual class and never rethought his decision.
The Google Guava libraries, which are becoming very popular, have a generic Function and Predicate object that they have worked into many parts of their API.
One of the things I really miss when programming in Java is function callbacks. One situation where the need for these kept presenting itself was in recursively processing hierarchies where you want to perform some specific action for each item. Like walking a directory tree, or processing a data structure. The minimalist inside me hates having to define an interface and then an implementation for each specific case.
One day I found myself wondering why not? We have method pointers - the Method object. With optimizing JIT compilers, reflective invocation really doesn't carry a huge performance penalty anymore. And besides next to, say, copying a file from one location to another, the cost of the reflected method invocation pales into insignificance.
As I thought more about it, I realized that a callback in the OOP paradigm requires binding an object and a method together - enter the Callback object.
Check out my reflection based solution for Callbacks in Java. Free for any use.
Sounds like a strategy pattern to me. Check out fluffycat.com Java patterns.
OK, this thread is already old enough, so very probably my answer is not helpful for the question. But since this thread helped me to find my solution, I'll put it out here anyway.
I needed to invoke a static method that varies at runtime, with known input and known output (both double). So, knowing the method's package, class and name, I could work as follows:
java.lang.reflect.Method Function = Class.forName(String classPath).getMethod(String method, Class[] params);
for a function that accepts one double as a parameter.
So, in my concrete situation I initialized it with
java.lang.reflect.Method Function = Class.forName("be.qan.NN.ActivationFunctions").getMethod("sigmoid", double.class);
and invoked it later in a more complex situation with
java.lang.Object[] args = new java.lang.Object[] { activity };
return (java.lang.Double) this.Function.invoke(null, args);
// or, inside a larger expression:
someOtherFunction() + 234 + (java.lang.Double) Function.invoke(null, args);
where activity is an arbitrary double value. I am thinking of maybe doing this a bit more abstract and generalizing it, as SoftwareMonkey has done, but currently I am happy enough with the way it is. Three lines of code, no classes and interfaces necessary, that's not too bad.
To do the same thing without interfaces for an array of functions:
class NameFuncPair
{
    public String name;                                    // name each func
    void f(String x) {}                                    // stub gets overridden
    public NameFuncPair(String myName) { this.name = myName; }
}

public class ArrayOfFunctions
{
    public static void main(String[] args)
    {
        final A a = new A();
        final B b = new B();

        NameFuncPair[] fArray = new NameFuncPair[]
        {
            new NameFuncPair("A") { @Override void f(String x) { a.g(x); } },
            new NameFuncPair("B") { @Override void f(String x) { b.h(x); } },
        };

        // Go through the whole func list and run the func named "B"
        for (NameFuncPair fInstance : fArray)
        {
            if (fInstance.name.equals("B"))
            {
                fInstance.f(fInstance.name + "(some args)");
            }
        }
    }
}

class A { void g(String args) { System.out.println(args); } }
class B { void h(String args) { System.out.println(args); } }
Check out lambdaj
http://code.google.com/p/lambdaj/
and in particular its new closure feature
http://code.google.com/p/lambdaj/wiki/Closures
and you will find a very readable way to define a closure or function pointer without creating a meaningless interface or using ugly inner classes
Wow, why not just create a Delegate class? It's not all that hard, given that I already did it for Java, and use it to pass in a parameter, where T is the return type. I am sorry, but as a C++/C# programmer in general just learning Java, I need function pointers because they are very handy. If you are familiar with any class which deals with method information, you can do it. In the Java libraries that would be java.lang.reflect.Method.
If you always use an interface, you always have to implement it. In event handling there really isn't a better way around registering/unregistering from the list of handlers, but for delegates, where you need to pass in functions and not the value type, making a delegate class to handle it outclasses an interface.
None of the Java 8 answers have given a full, cohesive example, so here it comes.
Declare the method that accepts the "function pointer" as follows:
void doCalculation(Function<Integer, String> calculation, int parameter) {
    final String result = calculation.apply(parameter);
}
Call it by providing the function with a lambda expression:
doCalculation((i) -> i.toString(), 2);
If anyone is struggling to pass a function that takes one set of parameters to define its behavior but another set of parameters on which to execute, like Scheme's:
(define (function scalar1 scalar2)
(lambda (x) (* x scalar1 scalar2)))
see Pass Function with Parameter-Defined Behavior in Java
Since Java 8, you can use lambdas, which are also supported by the functional interfaces in the official SE 8 API.
Usage:
You need an interface with only one abstract method.
Make an instance of it (you may want to use one of those Java SE 8 already provides), like this:
Function<InputType, OutputType> functionname = (inputvariablename) -> {
    ...
    return outputinstance;
};
For more information, check out the documentation: https://docs.oracle.com/javase/tutorial/java/javaOO/lambdaexpressions.html
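For instance, a minimal concrete sketch using the built-in Function interface (class and variable names are illustrative):
import java.util.function.Function;

public class LambdaDemo {
    public static void main(String[] args) {
        Function<Integer, String> describe = (n) -> {
            return "value = " + n;
        };
        System.out.println(describe.apply(42)); // prints "value = 42"
    }
}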
Prior to Java 8, the nearest substitute for function-pointer-like functionality was an anonymous class. For example:
Collections.sort(list, new Comparator<CustomClass>() {
    public int compare(CustomClass a, CustomClass b)
    {
        // Logic to compare objects of class CustomClass which returns int as per contract.
    }
});
But now in Java 8 we have a very neat alternative known as lambda expression, which can be used as:
list.sort((a, b) -> a.isBiggerThan(b));
where isBiggerThan is a method in CustomClass. We can also use method references here:
list.sort(CustomClass::isBiggerThan);
The open source safety-mirror project generalizes some of the above mentioned solutions into a library that adds functions, delegates and events to Java.
See the README, or this stackoverflow answer, for a cheat sheet of features.
As for functions, the library introduces a Fun interface, and some sub-interfaces that (together with generics) make up a fluent API for using methods as types.
Fun.With0Params<String> myFunctionField = " hello world "::trim;
Fun.With2Params<Boolean, Object, Object> equals = Objects::equals;

public void foo(Fun.With1ParamAndVoid<String> printer) throws Exception {
    printer.invoke("hello world");
}

public void test() {
    foo(System.out::println);
}
Notice:
that you must choose the sub-interface that matches the number of parameters in the signature you are targeting. E.g., if it has one parameter, choose Fun.With1Param.
that Generics are used to define A) the return type and B) the parameters of the signature.
Also, notice that the signature of the method reference passed to a call of the foo() method must match the Fun defined by foo(). If it does not, the compiler will emit an error.
