I'm confused about the functionality of type checking and method lookup in Java.
From what I understand, type checking is done at compile time and method lookup is done at run time.
Type checking is based on the declared type of the reference, whereas method lookup is based on the actual (runtime) type of the object it refers to.
So suppose the class MyInt is a superclass of the class GaussInt as follows:
class MyInt
{
    private int n;

    public MyInt(int n)
    {
        this.n = n;
    }

    public int getval()
    {
        return n;
    }

    public void increment(int n)
    {
        this.n += n;
    }

    public MyInt add(MyInt N)
    {
        return new MyInt(this.n + N.getval());
    }

    public void show()
    {
        System.out.println(n);
    }
}
class GaussInt extends MyInt
{
    private int m; // represents the imaginary part

    public GaussInt(int x, int y)
    {
        super(x);
        this.m = y;
    }

    public void show()
    {
        System.out.println("realpart is: " + this.getval() + " imagpart is: " + m);
    }

    public int realpart()
    {
        return getval();
    }

    public int imagpart()
    {
        return m;
    }

    public GaussInt add(GaussInt z)
    {
        return new GaussInt(z.realpart() + realpart(), z.imagpart() + imagpart());
    }
}
And suppose in the main method we have the following:
GaussInt z = new GaussInt(3,4);
MyInt b = z;
MyInt d = b.add(b);
d.show();
Which add method would be used in the call b.add(b), and what would the show() call at the end print?
From what I understand, b is declared to be MyInt, but it is, in fact, GaussInt. The type checker only sees that b is of MyInt type and that it has add(MyInt), so the code makes sense and compiles.
But then at run time, the method lookup sees that b is of type GaussInt, which has two add() methods, so it will use the add(GaussInt) method by looking at the method signature, and that produces a GaussInt. But d is of type MyInt and method lookup will think it won't work, so will it then go back to add(MyInt)?
How does the mechanism behind compiling and running a program work?
From what I understand, b is declared to be MyInt, but it is, in fact, GaussInt.
You are CORRECT. b's reference type is MyInt but it is pointing to an object of GaussInt type.
But then at run time, the method lookup sees that b is of type GaussInt and it has two add() methods, so it will use the add(GaussInt) method by looking at the method signature and it produces a GaussInt. But d is of type MyInt and method lookup will think it won't work, so will it then go back to add(MyInt)?
The add method in GaussInt takes a GaussInt parameter, not a MyInt parameter, so it does not override the inherited add(MyInt); it merely overloads it. Which overload gets called is decided at compile time from the declared type of b, which is MyInt, so b.add(b) is bound to add(MyInt). Since GaussInt never overrides add(MyInt), the version inherited from MyInt (the superclass) is the one that runs at run time.
What you are trying to achieve is method overriding. For it to work, the method signatures must be the same: the parent and child class methods must match in every respect, except that the return type of the child class method may be a subtype of the return type of the parent class method (a covariant return type).
So, to achieve what you describe, that is, to have b.add(b) call the add method of GaussInt, make the parameter type of the add method the same in both classes, as sketched below.
You should also read up on dynamic polymorphism (run-time method dispatch) and static polymorphism (compile-time overload resolution).
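For example, a minimal sketch of that fix (only the changed method is shown; it belongs inside GaussInt, and the instanceof handling is just one possible choice, not part of the original code):
@Override
public MyInt add(MyInt other)
{
    // The parameter type now matches the superclass method, so this
    // overrides MyInt.add, and b.add(b) dispatches here at run time
    // even though b is declared as MyInt.
    if (other instanceof GaussInt) {
        GaussInt z = (GaussInt) other;
        return new GaussInt(z.realpart() + realpart(), z.imagpart() + imagpart());
    }
    // A plain MyInt argument only contributes a real part
    return new GaussInt(other.getval() + realpart(), imagpart());
}
With this in place, b.add(b) creates a GaussInt at run time, although the compiler still only knows d as a MyInt.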
I am writing some classes and all of them implement a certain method they inherit from an interface. This method is nearly the same for all the classes, apart from one call to a certain other method.
For example:
public void doSomething() {
    int a = 6;
    int b = 7;
    int c = anOtherMethod(a, b);
    while (c < 50) {
        c++;
    }
}
What if multiple classes have the function doSomething() but the implementation of the method anOtherMethod() is different?
How do I avoid code duplication in this situation? (This is not my actual code but a simplified version that helps me describe what I mean a bit better.)
This looks like a good example of the template method pattern.
Put doSomething in a base class.
Declare an abstract protected anOtherMethod in that base class as well, but don't provide an implementation.
Each subclass then provides the proper implementation for anOtherMethod.
This is how you could implement the technique that Thilo talked about in the following demo:
Main class:
public class Main extends Method {
    public static void main(String[] args) {
        Method m = new Main();
        m.doSomething();
    }

    @Override
    public int anOtherMethod(int a, int b) {
        return a + b;
    }
}
Abstract class:
public abstract class Method {
    public abstract int anOtherMethod(int a, int b);

    public void doSomething() {
        int a = 6;
        int b = 7;
        int c = anOtherMethod(a, b);
        System.out.println("Output: " + c);
    }
}
This way, all you have to do is override anOtherMethod() in each class that you want to use doSomething() with a different implementation of the method anOtherMethod().
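For instance, a hypothetical second subclass (the name Multiplier is mine, not from the question) could supply a different operation without touching doSomething():
public class Multiplier extends Method {
    @Override
    public int anOtherMethod(int a, int b) {
        return a * b; // multiplies instead of adding
    }
}
new Main().doSomething() then prints "Output: 13", while new Multiplier().doSomething() prints "Output: 42".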
Assuming every version of anOtherMethod takes two integers and returns an integer, I would just have the method accept a function as an argument, making it a higher-order function.
A function that takes two arguments of some type and returns a result of the same type is known as a BinaryOperator. You can add an argument of that type to the method so a function can be passed in:
// Give the method an operator argument
public void doSomething(BinaryOperator<Integer> otherMethod) {
    int a = 6;
    int b = 7;
    // Then use it here basically like before;
    // "apply" is needed to call the passed function
    int c = otherMethod.apply(a, b);
    while (c < 50) {
        c++;
    }
}
How you use it though will depend on your use case. As a simple example using a lambda, you can now call it like:
doSomething((a, b) -> a + b);
That lambda simply returns the sum of a and b.
For your particular case though, you may find that having doSomething as part of an interface isn't necessary or optimal. What if, instead, anOtherMethod is what's required to be supplied? Instead of expecting your classes to supply a doSomething, have them supply a BinaryOperator<Integer>. Then, when you need results from doSomething, get the operator from the class and pass it in. Something like:
public void callDoSomething(HasOperator obj) {
    // There may be a better way than having a "HasOperator" interface;
    // this is just an example though
    BinaryOperator<Integer> f = obj.getOperator();
    doSomething(f);
}
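The HasOperator interface mentioned above is not spelled out here; a minimal sketch of what it might look like is:
import java.util.function.BinaryOperator;

// Hypothetical interface: each class simply exposes its own operation
public interface HasOperator {
    BinaryOperator<Integer> getOperator();
}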
What are Java's runtime overriding rules for the two scenarios below?
Scenario 1:
class A {
    void display(Integer i) {
        System.out.println("In A Integer");
    }

    void display(Object obj) {
        System.out.println("In A Object");
    }
}

class B extends A {
    void display(int i) {
        System.out.println("In B int");
    }
}

public static void main(String[] args) {
    A a = new B();
    a.display(2);               // In A Integer
    a.display(new Integer(2));  // In A Integer
    a.display("hello");         // In A Object
    a.display(new Object());    // In A Object
    a.display(null);
}
Scenario 2:
class A {
    void display(int i) {
        System.out.println("In A int");
    }
}

class B extends A {
    void display(Integer i) {
        System.out.println("In B Integer");
    }

    void display(Object obj) {
        System.out.println("In B Object");
    }
}

public static void main(String[] args) {
    A a = new B();
    a.display(2);               // In A int
    a.display(new Integer(2));  // In A int
    a.display("hello");         // Compilation error
    a.display(new Object());    // Compilation error
    a.display(null);            // Compilation error
}
I have a few queries here:
1. How does method resolution for overriding generally happen at run time? Is there any reference available?
2. Why, in Scenario 2, does a.display(new Integer(2)) not throw a compilation error?
First question: you can find the rules for overriding within this tutorial from Oracle. Another good starting point might be that site. Of course, if you are looking for a spec, then only the Java Language Specification does the job.
On your second question:
a.display(new Integer(2)); //In A int
That works because the compiler sees:
a is of class A
A has a method display that takes an int
It knows how to turn an Integer into an int
Therefore it can use display(int) from class A. The compiler does unbox the Integer object into a primitive int value for you behind the covers.
And finally: in your scenario 1, you are not overriding anything. Your display method in B does not override anything in A, as it has a different signature. Thus, you are still calling the method from A! You will notice that immediately when you put @Override on your method in B.
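To illustrate (a sketch, reusing the Scenario 1 classes), annotating B's method makes the compiler reject it, because A has no display(int) to override:
class B extends A {
    @Override              // compile-time error: method does not override
    void display(int i) {  // or implement a method from a supertype
        System.out.println("In B int");
    }
}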
To put it simply, Java tries to match the argument to the closest (most specific) applicable parameter type among the declared overloads. Object is the broadest type, so everything that is not an Integer falls through to display(Object). And thanks to Java's autoboxing and unboxing, an int can be boxed to Integer to match display(Integer) (Scenario 1), and an Integer can be unboxed to int to match display(int) (Scenario 2), instead of being treated as a plain Object.
From my understanding, all method calls in Java are virtual, and numeric literals have the type int. But why does the output in the example below differ?
public class A {
    public int f(long d) {
        return 2;
    }
}

public class B extends A {
    public int f(int d) {
        return 1;
    }
}

public class M {
    public static void main(String[] args) {
        B b = new B();
        A ab = b;
        System.out.println(b.f(1));
        System.out.println(ab.f(1));
    }
}
You don't override anything.
The first call, System.out.println(b.f(1));, prints 1 because it is resolved against class B: the methods have the same name but different parameter types (long is not the same as int), so B's f(int) is the closer match for the int literal.
If the parameter types were the same, the result would also be 1, because B's method would then override (@Override) the method from class A.
Now you know why the second call, System.out.println(ab.f(1));, prints 2: look at the class of the reference it is called through. ab is declared as A, so the compiler picks A's f(long), and B does not override that method.
Actually, the subclass B has inherited the method f (taking a long) and has added (overloaded) another method f (taking an int).
When you write down a value such as 1, the compiler parses it as an int even before it is assigned to a typed reference. More here: Java's L number (long) specification.
When you call the method f through the reference ab (whose type is A), the compiler can only bind the call to f (taking a long), so it implicitly widens the int 1 to a long.
Let's try changing the parameter type of the method in class A to f(short);
then System.out.println(ab.f(1)); will give you this error:
"The method f(short) in the type A is not applicable for the arguments (int)"
So, I have these 3 classes in Java.
When I run the program I get:
20,
15,
10,
My question is, why do I get this instead of:
15,
20 (doesn't public int getX(); in class B get us to 15+5=20?),
10
for example?
Can you please explain to me, step by step, what really happens in this program? I am very confused by the output (and the sequence).
public class A {
    private int x = 15;

    public int getX() {
        return x;
    }

    public void setX(int x) {
        this.x = x;
    }

    public void printX() {
        System.out.println(this.getX());
    }
}
Child:
public class B extends A {
    private int x = 5;

    @Override
    public int getX() {
        return super.getX() + x;
    }

    @Override
    public void setX(int x) {
        super.setX(x);
        super.printX();
    }

    @Override
    public void printX() {
        System.out.println(super.getX());
    }
}
and
public class C {
    public static void main(String[] args) {
        A a = new B();
        System.out.println(a.getX());
        a.setX(10);
        a.printX();
    }
}
This is happening because you instantiate a as a B object: A a = new B(). It doesn't matter that the declared type of a is A; that type is more general than the object's real type B (because B inherits from A), so polymorphism dispatches the method calls to B's implementations. Note that the same does not apply to fields: field access is not polymorphic, and each class's code reads its own x.
When a.getX() is called, B's getX() runs. It calls super.getX(), which returns A's x (15), because the superclass method reads A's own field; then B's x (5) is added, resulting in 20.
The subsequent calls behave in a similar manner.
I have given a basic example similar to your problem; go through it and you will get your answer. This is the concept of runtime polymorphism.
Inheritance creates type compatibility. It allows a superclass reference to refer to an object of a subclass. (The reverse is not true.)
A superclass reference that refers to an object of a subclass can only be used to access the inherited and overridden methods of the subclass. Members newly defined in the subclass are not accessible through a reference of the superclass type.
class A
{
    void f1()
    {
        System.out.println("A f1");
    }

    void f2()
    {
        System.out.println("A f2");
    }
}//A

class B extends A
{
    void f3() //new method
    {
        System.out.println("B f3");
    }

    void f2() //overrides A's f2; this is what runs when ref holds a B object
    {
        System.out.println("B f2 starts");
        f3(); //this.f3()
        System.out.println("B f2 ends ");
    }
}//B

class TypeCmptbl
{
    public static void main(String args[])
    {
        A ref;         //reference of A
        ref = new B(); //object of B

        //ref.inherited() allowed
        ref.f1();

        //ref.overridden() allowed
        ref.f2();

        //ref.newMembersOfChild() not allowed
        //ref.f3();
    }//main
}
Consider the statement
ref.f2();
Here ref is a reference of class A, and it holds the address of an object of class B; f2() is an overridden method.
When the compiler detects such a statement, it doesn't bind the call to any particular definition. It only validates the call.
Binding of such calls is left to the runtime environment. At run time, the system identifies the actual class of the object and binds the call to the definition provided by that class. This kind of binding between a method call and a method definition is called runtime polymorphism.
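For reference, running TypeCmptbl above should print the following, since f1() is inherited from A while the f2() that runs is B's override:
A f1
B f2 starts
B f3
B f2 ends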
Your question is about runtime polymorphism in Java. Note that methods are bound at run time, whereas fields (variables) are bound at compile time.
In your example
public class C {
    public static void main(String[] args) {
        A a = new B();
        System.out.println(a.getX());
        a.setX(10);
        a.printX();
    }
}
The reference is of type A but the object is of type B, so at run time the JVM sees that the object in memory is a B and calls B's method; this is runtime polymorphism.
When the statement below is executed, the getX() of class B is invoked.
System.out.println(a.getX());
@Override
public int getX() {
    return super.getX() + x; // adds the 15 from class A to the 5 in this class
}
The statement above displays 20.
When the next statement is executed,
a.setX(10);
B's setX(int) runs:
@Override
public void setX(int x) {
    super.setX(x);  // sets A's x to 10; prints nothing by itself
    super.printX(); // A's printX calls this.getX(), which dispatches to B's getX(): 10 + 5, so 15 is printed
}
super.setX(x) calls A's setX:
public void setX(int x) {
    this.x = x; // A's x changes from 15 to 10
}
Here this.x is A's field and x is the value 10 passed through a.setX(10). Finally, a.printX() runs B's printX(), which prints super.getX(), i.e. A's x, which is now 10.
A a = new B();
So the concrete type of a is B.
System.out.println(a.getX());
You're calling getX() on an object of type B. So the following method is called, because the concrete type of a is B, and B has overridden the getX() method defined by A:
public int getX() {
return super.getX() + x;
}
It adds B's x (whose value is 5) to the result of super.getX(). That method, in A, is defined as
public int getX() {
return x;
}
So it returns A's x, which is initialized to 15.
The result is thus 5 + 15 = 20.
The rest can be explained the same way. Remember that fields are not accessed polymorphically; only methods are. So, inside the code of A, x always means "the value of the field x in A", and inside the code of B, x always means "the value of the field x in B" (see the small sketch below).
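A minimal sketch of that rule (the class names here are mine, not from the question):
class Parent {
    int x = 15;
    int getX() { return x; }
}

class Child extends Parent {
    int x = 5;
    @Override int getX() { return x; }
}

public class FieldVsMethod {
    public static void main(String[] args) {
        Parent p = new Child();
        System.out.println(p.x);      // 15 - field access is resolved by the reference type
        System.out.println(p.getX()); // 5  - the method call is dispatched on the runtime type
    }
}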
When you say:
A a = new B();
it means that the variable a is declared with type A but is instantiated with class B. So, in memory, space is reserved for an object of class B, which contains B's members as well as everything it inherits from A.
So, when the first print statement is executed, it executes the getX() method of class B and not the getX() method of class A.
For the other methods called through the reference 'a', the overriding methods of class B are likewise the ones that run.
This is also known as dynamic binding: the JVM chooses which method implementation to run at run time, based on the object's actual class, rather than at compile time based on the reference type.
For more details on dynamic binding check these links:
http://www.tutorialspoint.com/java/java_overriding.htm
http://www.tutorialspoint.com/java/java_polymorphism.htm
I also suggest you install Eclipse on your machine and run your code in debug mode.
This is the best way you can study your code.
I have a generic Callback object which provides a (primitive) callback capability for Java, in the absence of closures. The Callback object contains a Method, and returns the parameter and return types for the method via a couple of accessor methods that just delegate to the equivalent methods in Method.
I am trying to validate that a Callback I have been supplied points to a valid method. I need the return type to be assignment compatible with Number and all parameters to be assignment compatible with Double. My validating method looks like this:
static public void checkFunctionSpec(Callback cbk) {
    Class[] prms = cbk.getParmTypes();
    Class ret = cbk.getReturnType();
    if (!Number.class.isAssignableFrom(ret)) {
        throw new IllegalArgumentException(
            "A function callback must return a Number type " +
            "(any Number object or numeric primitive) - function '" +
            cbk + "' is not permitted");
    }
    for (Class prm : prms) {
        if (!Double.class.isAssignableFrom(prm)) {
            throw new IllegalArgumentException(
                "A function callback must take parameters " +
                "assignment compatible with double " +
                "(a Double or Float object or a double or float primitive) " +
                "- function '" + cbk + "' is not permitted");
        }
    }
}
The problem I encounter is that when I try this with, e.g., Math.abs(), it throws an exception for the return type as follows:
java.lang.IllegalArgumentException:
A function callback must return a Number type (any Number object or numeric primitive)
- function 'public static double java.lang.Math.abs(double)' is not permitted
This was surprising to me because I expected primitives to simply work because (a) they are reflected using their wrapper classes, and (b) the Double.TYPE is declared to be of type Class<Double>.
Does anyone know how I can achieve this without modifying my checks to be:
if(!Number.class.isAssignableFrom(ret)
&& ret!=Double.TYPE
&& ret!=Float.TYPE
&& ret!=...) {
Clarification
When you invoke the method double abs(double) using Method.invoke(), you pass in an Object[] containing a Double and get back a Double. However, my validation appears to be failing because Double.TYPE is not assignable to Double. Since I require all these callbacks to return some sort of number, which invoke() will return as a Number, I am trying to validate that the supplied method returns either a Number or a numeric primitive.
Validation of the parameters works the same way.
In other words, when using reflection, the parameter and return types Double and double are effectively identical, and I would like to validate them just as easily.
EDIT: To further clarify: I want to validate that a Method will, when invoke() is called return an Object of type Number (from which I can call obj.doubleValue() to get the double I want).
Looking more closely at the documentation for Class.isAssignableFrom(), it specifically states that the Class object for a primitive type matches no class except itself. So I will need to check for == equality against Byte.TYPE, Double.TYPE, Float.TYPE, Integer.TYPE, Long.TYPE, and Short.TYPE for the return type.
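One way to keep that check readable is to collect the numeric primitive Class objects in a set; a small sketch (the helper names are mine, not part of the original code):
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

final class NumericTypes {
    // The Class objects that reflection reports for numeric primitives
    private static final Set<Class<?>> NUMERIC_PRIMITIVES = new HashSet<Class<?>>(
            Arrays.<Class<?>>asList(Byte.TYPE, Short.TYPE, Integer.TYPE,
                                    Long.TYPE, Float.TYPE, Double.TYPE));

    // true if Method.invoke() on a method with this return type hands back a Number
    static boolean returnsNumber(Class<?> ret) {
        return Number.class.isAssignableFrom(ret) || NUMERIC_PRIMITIVES.contains(ret);
    }
}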
Why not have the compiler do it?
public interface F<A, B> {
public B $(A a);
}
Then you can pass an F<Double, Double> to a method that expects an F<? extends Number, ? extends Number>.
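A small sketch of that idea (a variant that keeps the argument type concrete so the method can actually invoke the callback; the names Demo and apply are mine):
public class Demo {
    interface F<A, B> {
        B $(A a);
    }

    // Accepts any callback that maps a Double to some kind of Number
    static Number apply(F<Double, ? extends Number> f, double x) {
        return f.$(x); // x is autoboxed to Double
    }

    public static void main(String[] args) {
        F<Double, Double> abs = new F<Double, Double>() {
            public Double $(Double d) {
                return Math.abs(d);
            }
        };
        System.out.println(apply(abs, -3.5)); // prints 3.5
    }
}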
EDIT:
You say you want to provide a single class for the type of a function with any number of arguments. This can be done with the Java type system. Conceptually every function has only one argument. A function with two arguments is equivalent to a function that returns another function. So here's a variable whose value is a function that takes two doubles:
F<Double, F<Double, Double>> f;
Here's a method that passes two doubles to a given function:
public Double operate(F<Double, F<Double, Double>> f, double a, double b) {
return f.$(a).$(b);
}
Or, consider a type L<A extends L<A>> with two subclasses: C<E, T extends L<T>>, representing a "cons" cell, and a terminator type N:
public abstract class L<A extends L<A>> {
    private L() {}

    private static final N nil = new N();

    public static N nil() {
        return nil;
    }

    public static final class N extends L<N> {
        private N() {}

        public <E> C<E, N> cons(final E e) {
            return new C<E, N>(e, this);
        }
    }

    public static final class C<E, T extends L<T>> extends L<C<E, T>> {
        private E e;
        private T l;

        private C(final E e, final T l) {
            this.e = e;
            this.l = l;
        }

        public E head() {
            return e;
        }

        public T tail() {
            return l;
        }

        public <X> C<X, C<E, T>> cons(final X x) {
            return new C<X, C<E, T>>(x, this);
        }
    }
}
In such a case, you can implement a function type thusly:
public interface F<A extends L<A>, B> {
public B $(A args);
}
The following method expects a function with two Double arguments (and returns a Double), along with two doubles to apply it to:
public Double operate(F<C<Double, C<Double, N>>, Double> f, double a, double b) {
    return f.$(N.nil().cons(b).cons(a));
}
The implementation of the F interface would have to get the arguments from the list using head and tail. So in effect, you're implementing LISP in Java. :)
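For illustration, assuming the same context as the operate method above (so that F, C and N are in scope), an addition callback might look like this (the variable name plus is mine):
F<C<Double, C<Double, N>>, Double> plus = new F<C<Double, C<Double, N>>, Double>() {
    public Double $(final C<Double, C<Double, N>> args) {
        return args.head() + args.tail().head(); // first argument plus second argument
    }
};
System.out.println(operate(plus, 1.5, 2.5)); // prints 4.0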
Having said that, check out Functional Java, which is a library that has a lot of this stuff already. I'm sure there's also one out there that uses reflection so you don't have to write it yourself.
The parameter to Math.abs() is the primitive double. I'm not quite sure what you mean by a primitive being "assignment compatible" with an object (what the reflection API essentially means is "can be a cast of"). But if you mean "can be passed into a Double constructor", then that's essentially a primitive double (or a String)! Perhaps you could clarify a bit more what you need to do?