public class John {
    private int value;

    public void setValue() {
        this.value = 0;
    }

    public void setValue(int v) {
        this.value = v;
    }
}
Now, how would I call these two methods?
John j = new John();
j.setValueO();
j.setValue(10);
Correct me if I am wrong.
Is function overloading a concept of polymorphism? If not, under which OOP branch does it fall?
Encapsulation means hiding the information and abstraction means hiding the implementation details. So when I overload a method, do I touch on either of these two... {abstraction and encapsulation}?
Is overloading compile time or runtime? Why is it called that for overloading and overriding?
Yes, you are right, except for the typo you made (j.setValueO() should be j.setValue()).
Is function overloading a concept of polymorphism? If not, under
which OOP branch does it fall?
Well, speaking from a historical point of view it does, but many still argue that it's not a form of polymorphism.
Overloading
The method functions differently based on the arguments.
Overriding
The method functions differently based on which class was used to instantiate it. A bark method could sound different for class Cat and class Dog.
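As a rough sketch of the two ideas (the Animal, Cat, Dog, and Demo names here are illustrative, not from the original posts):

class Animal {
    // Overloading: same name, different parameter lists, resolved at compile time.
    void bark() { System.out.println("..."); }
    void bark(int times) {
        for (int i = 0; i < times; i++) bark();
    }
}

class Cat extends Animal {
    // Overriding: same signature, behavior depends on the runtime class.
    @Override void bark() { System.out.println("meow"); }
}

class Dog extends Animal {
    @Override void bark() { System.out.println("woof"); }
}

public class Demo {
    public static void main(String[] args) {
        Animal a = new Dog();
        a.bark();      // "woof": overriding, decided at runtime
        a.bark(2);     // overloading: the two-argument version chosen at compile time
    }
}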
Encapsulation means hiding the information and abstraction means
hiding the implementation details. So when I overload a method, do I
touch on either of these two... {abstraction and encapsulation}?
Nope. Maybe someone else can answer this more clearly.
Is overloading compile time or runtime? Why is it called that for
overloading and overriding?
Compile time. In overriding, the decision about which class's method is to be called is made at runtime, hence it is runtime.
In overloading, which method definition applies, based on the parameters passed in the method call, is checked at compile time only.
Java does not identify methods by their names alone, but by their signatures. A signature is composed of the method name and the ordered list of parameter types. So, from the compiler's and the JVM's point of view, those are two completely different methods. The fact that they share the name (and as a consequence a similar signature) has meaning only for humans.
Since signatures are used in .class files, the compiler is responsible for computing the signature of a method call, using the method name and parameters, at compile time. The late binding that happens at runtime is related to polymorphism, because the current instance on which a certain method is called could be an instance of a subclass that overrides that method; whether that method is also overloaded is, in Java, not considered by the runtime at all.
You cannot have two methods with the same signature in the same class. Notably, the return type of a method is not part of its signature, so you cannot have two methods with the same name and same parameters but returning two different types.
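A small illustration of that rule (the Calculator class is hypothetical, only for the example):

class Calculator {
    int add(int a, int b) { return a + b; }

    // Would not compile: same name and parameter list as above; only the
    // return type differs, and the return type is not part of the signature.
    // double add(int a, int b) { return a + b; }

    // Fine: the ordered parameter list differs, so the signature differs.
    double add(double a, double b) { return a + b; }
}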
In other languages, JavaScript for example, parameters are always dynamic, so the signature is composed only of the name of the method, which makes overloading impossible.
As to the first part of your question: yes, the code you showed is an example of overloading, assuming the first part is correct and the 0 in the second part is a typo.
I'm not familiar with how these topics are formally taught these days, but to my mind overloading isn't really related to polymorphism. It's just a convenient way for methods that more or less do the same thing (and often call each other) to share a name. I have no idea how to answer your second question. What is an "OOP branch"?
Again, I'm not quite sure how these tie in. Doesn't it depend on what the method actually does?
Well, think about it this way. In Java, when you call a method in general, leaving overloading aside, at what phase does the system figure out which method you're calling (as opposed to which class's implementation of that method)? As to the origin of those terms, honestly that should be pretty easy to look up.
Since function overloading can work very well without objects, I do not see any reason for it to be an OOP concept at all. For the question whether it's polymorphism, it does fulfill the general requirements and according to Wikipedia is a form of polymorphism.
In general, when you create a method you always do both (you abstract away some general functionality and you hide the information about the internal workings of the function). Overloading does not add to either, IMO. (Even though, through overloading and the gained polymorphism, you could argue that you gain in abstraction since the function becomes more generic, it is IMO still on the same level of abstraction.)
Overload resolution happens, which was surprising to me at first, at compile time. This is in contrast to the mentioned overriding. (So it is, in that sense, not the same kind of polymorphism, as one is runtime and the other is compile time.)
Related
I referred to some Java tutorials and found that Java has two types of polymorphism:
compile-time polymorphism (static polymorphism)
runtime polymorphism (dynamic polymorphism)
Some learning references mention "If you overload a static method in Java, it is the example of compile-time polymorphism", and some mention "Method overloading is an example of compile-time polymorphism".
What I want to know is whether only static method overloading, or every method overloading, counts as compile-time polymorphism in Java.
Because when reading the first statement I wonder why static methods are specially mentioned and why instance methods and constructors are not.
Thanks
In Java, the choice of which overload to invoke is always made at compile time. This applies to static methods, instance methods, and also constructors.
Note that the two statements are not contradicting. The first says, “If you overload a static method…”, which names a correct example, but doesn’t preclude other examples. Likewise, the other statement “Method overloading is an example of compile time polymorphism” is broader, still correct, while not mentioning constructors. As long as these statements do not claim to have named all existing examples, they are correct.
Still, in the case of instance methods the chosen overload can also be subject to runtime polymorphism when it is overridden, in addition to the compile-time polymorphism. Having methods which are both overloaded and overridden can easily lead to errors, and therefore should be used with care or avoided.
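A small sketch of how the compile-time choice plays out for instance methods (the Printer and OverloadDemo classes and their overloads are illustrative, not from the question):

class Printer {
    void print(Object o) { System.out.println("Object overload"); }
    void print(String s) { System.out.println("String overload"); }
}

public class OverloadDemo {
    public static void main(String[] args) {
        Printer p = new Printer();
        Object o = "hello";   // runtime type String, static type Object
        p.print(o);           // "Object overload": the overload is chosen from
                              // the static type of the argument at compile time
        p.print("hello");     // "String overload"
    }
}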
Before I ask my question, let me explain my understanding and opinion.
By overriding alone we can't achieve polymorphism unless there is upcasting. And as this can only be seen at runtime, people might have named it runtime polymorphism. (I have no objection to calling this runtime polymorphism.)
I do object to calling method overloading compile-time polymorphism, or polymorphism at all.
I agree that method overloading is static binding (compile time binding), but I can't see polymorphism in that.
According to the javadoc, there is only polymorphism. There is no compile-time or runtime polymorphism.
The javadoc, in a section called Defining Methods, explains overloading of methods, but there is nothing about compile-time polymorphism.
According to me:
If you put polymorphism into the category of runtime polymorphism, you can only see "compile-time polymorphism" when you change your JDK version:
If you switch from a higher JDK version to a lower one, you will start seeing compilation errors: the same code behaves differently at compile time, e.g. lambda expressions, the diamond operator, strings in switch, generics, etc.
Let me elaborate my opinion, my prediction about how run time polymorphism and compile time polymorphism came to blogs/tutorials:
Conversation 1
Developer 1: Hey, I read about polymorphism today. If you code to an interface, you can achieve polymorphism. Writing code that is not tightly coupled to a class but instead to an interface, making for loose coupling, means that calling a superclass method or interface method actually invokes the method of the subclass, depending on the instance passed.
Developer 2: Sorry, I didn't get you.
Developer 1: It's simple: at run time, whichever object instance you pass, that instance's method will be executed.
Developer 2: Ohh! Run time. I got it.
Developer 1: Yes, the same piece of code, but at run time the passed instances are different.
Developer 2: Run time! Okay, I got it.
Conversation 2
Developer 2: Hey, yesterday I met Developer 1; he was telling me about some runtime polymorphism. By overriding methods in a subclass we can achieve it.
Developer 3: Runtime polymorphism by overriding methods? Then what is overloading? Compile-time polymorphism?
Developer 2: Why do you call overloading compile-time polymorphism?
Developer 3: Because it is decided at compile time only.
Developer 2: (silence)
The best example of polymorphism and coding to an interface is java.sql:
java.sql.Connection conn = DriverManager.getConnection(DB_URL,USER,PASS);
java.sql.Statement stmt = conn.createStatement();
java.sql.ResultSet rs = stmt.executeQuery(sql);
The same piece of code behaves differently based on the driver registered. This means that if we register the MySQL driver, due to polymorphism this code executes the methods of the MySQL driver's classes, i.e. it executes overridden methods. If you register the Oracle driver, it works for Oracle, and so on.
Above we see the same code behaving differently.
Now, can anyone show me the same code behaving differently at compile time? In other words, show me the call add(3,4) bound to different methods (methods with other signatures) at compile time?
According to javadoc,
The Java programming language supports overloading methods, and Java can distinguish between methods with different method signatures.
The method executed depends on the signature matched. Methods having the same name does not mean there is polymorphism, because the calling signatures are different:
Question 1: If you don't change the calling signature, will it ever call a method other than the one whose signature matched? Under any condition, will it behave differently?
Let's look at method overloading:
public static void add(int a, int b)
{
    System.out.println(a + b);
}
public static void add(int a, int b, int c)
{
    System.out.println(a + b + c);
}
public static void main(String[] args)
{
    add(3, 4);
    add(3, 4, 5);
}
Question 1: If method overloading is polymorphism, which piece of code is behaving differently in the above block? Where is the polymorphism here?
Question 2: In what scenario does the invocation add(3,4) show polymorphism, unless it is modified to add(3,4,5)?
EDIT
To future visitors: since this thread found no answers in favor of method overloading being a type of polymorphism (even a month after the question was asked), and no justification was offered, I am accepting the answer in favor of "method overloading is not polymorphism". If any answer points out a problem in my argument that method overloading is not polymorphism, it will be accepted and its view supported.
In the Java world, polymorphism means polymorphism between classes, i.e. referring to possibly multiple child classes through their common parent. In Java, there is no polymorphism between methods.
void add(int a, int b) and void add(int a, int b, int c) are entirely different methods in the Java syntax. It needn't be so (in C++, for example, you can cast them to each other), but it is so in Java.
The key concept to understand here is the method signature. The method signature defines, in a language, what identifies the individual methods. (For example, beside a void add(int a, int b);, you simply can't declare an int add(int a, int b); method: the return value is not part of the method signature in Java, thus the compiler would interpret it as a method re-definition.)
People say
overriding is run time polymorphism and
overloading is compile time polymorphism.
Both are wrong.
Overriding alone is not polymorphism, but overriding helps to achieve polymorphism. If there is no upcasting, you can't achieve polymorphism.
Overloading is a concept where the method name along with the argument signature is used to bind a method call to a method body, and this can be determined at compile time only. But it is not related to polymorphism at all. There is no behavioral change to be found in the case of method overloading: whether you compile today or compile after one year, a behavioral change can only be found if you change the calling signature, i.e. only if you modify the code from, say, add(3,4) to add(3,4,5). Hence method overloading is not polymorphism.
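To make the contrast concrete, here is a small sketch (the Shape, Circle, Square, and BindingDemo names are illustrative): the same call site behaves differently under overriding, while an overloaded call is fixed once and for all at compile time.

abstract class Shape { abstract void draw(); }
class Circle extends Shape { void draw() { System.out.println("circle"); } }
class Square extends Shape { void draw() { System.out.println("square"); } }

public class BindingDemo {
    // Overriding: the same call site s.draw() behaves differently depending
    // on the instance passed at runtime.
    static void render(Shape s) {
        s.draw();
    }

    // Overloading: each call site is bound to exactly one method at compile
    // time; add(3, 4) can never reach the three-argument version.
    static int add(int a, int b)        { return a + b; }
    static int add(int a, int b, int c) { return a + b + c; }

    public static void main(String[] args) {
        render(new Circle());            // "circle"
        render(new Square());            // "square"
        System.out.println(add(3, 4));   // always the two-argument overload
    }
}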
One of the most useful features of Java 8 is the new default methods on interfaces. There are essentially two reasons (there may be others) why they have been introduced:
Providing actual default implementations. Example: Iterator.remove()
Allowing for JDK API evolution. Example: Iterable.forEach()
From an API designer's perspective, I would have liked to be able to use other modifiers on interface methods, e.g. final. This would be useful when adding convenience methods, preventing "accidental" overrides in implementing classes:
interface Sender {
    // Convenience method to send an empty message
    default final void send() {
        send(null);
    }

    // Implementations should only implement this method
    void send(String message);
}
The above is already common practice if Sender were a class:
abstract class Sender {
    // Convenience method to send an empty message
    final void send() {
        send(null);
    }

    // Implementations should only implement this method
    abstract void send(String message);
}
Now, default and final are obviously contradicting keywords, but the default keyword itself would not have been strictly required, so I'm assuming that this contradiction is deliberate, to reflect the subtle differences between "class methods with body" (just methods) and "interface methods with body" (default methods), i.e. differences which I have not yet understood.
At some point in time, support for modifiers like static and final on interface methods was not yet fully explored; citing Brian Goetz:
The other part is how far we're going to go to support class-building
tools in interfaces, such as final methods, private methods, protected
methods, static methods, etc. The answer is: we don't know yet
Since that time in late 2011, obviously, support for static methods in interfaces was added. Clearly, this added a lot of value to the JDK libraries themselves, such as with Comparator.comparing().
Question:
What is the reason final (and also static final) never made it to Java 8 interfaces?
This question is, to some degree, related to What is the reason why “synchronized” is not allowed in Java 8 interface methods?
The key thing to understand about default methods is that the primary design goal is interface evolution, not "turn interfaces into (mediocre) traits". While there's some overlap between the two, and we tried to be accommodating to the latter where it didn't get in the way of the former, these questions are best understood when viewed in this light. (Note too that class methods are going to be different from interface methods, no matter what the intent, by virtue of the fact that interface methods can be multiply inherited.)
The basic idea of a default method is: it is an interface method with a default implementation, and a derived class can provide a more specific implementation. And because the design center was interface evolution, it was a critical design goal that default methods be able to be added to interfaces after the fact in a source-compatible and binary-compatible manner.
The too-simple answer to "why not final default methods" is that the body would then not simply be the default implementation, it would be the only implementation. While that's a little too simple an answer, it gives us a clue that the question is already heading in a questionable direction.
Another reason why final interface methods are questionable is that they create impossible problems for implementors. For example, suppose you have:
interface A {
    default void foo() { ... }
}

interface B {
}

class C implements A, B {
}
Here, everything is good; C inherits foo() from A. Now supposing B is changed to have a foo method, with a default:
interface B {
    default void foo() { ... }
}
Now, when we go to recompile C, the compiler will tell us that it doesn't know what behavior to inherit for foo(), so C has to override it (and could choose to delegate to A.super.foo() if it wanted to retain the same behavior.) But what if B had made its default final, and A is not under the control of the author of C? Now C is irretrievably broken; it can't compile without overriding foo(), but it can't override foo() if it was final in B.
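For reference, a minimal sketch of the resolution described above, in the case where B's foo() is not final and C chooses to keep A's behavior:

class C implements A, B {
    @Override
    public void foo() {
        A.super.foo();   // explicitly delegate to A's default implementation
    }
}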
This is just one example, but the point is that finality for methods is really a tool that makes more sense in the world of single-inheritance classes (generally which couple state to behavior), than to interfaces which merely contribute behavior and can be multiply inherited. It's too hard to reason about "what other interfaces might be mixed into the eventual implementor", and allowing an interface method to be final would likely cause these problems (and they would blow up not on the person who wrote the interface, but on the poor user who tries to implement it.)
Another reason to disallow them is that they wouldn't mean what you think they mean. A default implementation is only considered if the class (or its superclasses) don't provide a declaration (concrete or abstract) of the method. If a default method were final, but a superclass already implemented the method, the default would be ignored, which is probably not what the default author was expecting when declaring it final. (This inheritance behavior is a reflection of the design center for default methods -- interface evolution. It should be possible to add a default method (or a default implementation to an existing interface method) to existing interfaces that already have implementations, without changing the behavior of existing classes that implement the interface, guaranteeing that classes that already worked before default methods were added will work the same way in the presence of default methods.)
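A small sketch of that inheritance rule, where a superclass implementation takes precedence over the interface default (the Greeter, Base, Derived, and InheritanceDemo names are illustrative):

interface Greeter {
    default String greet() { return "hello from the interface"; }
}

class Base {
    public String greet() { return "hello from the superclass"; }
}

class Derived extends Base implements Greeter {
    // No greet() here: the inherited superclass implementation wins, and the
    // interface default is never consulted.
}

public class InheritanceDemo {
    public static void main(String[] args) {
        System.out.println(new Derived().greet());   // "hello from the superclass"
    }
}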
In the lambda mailing list there are plenty of discussions about it. One of those that seems to contain a lot of discussion about all that stuff is the following: On Varied interface method visibility (was Final defenders).
In this discussion, Talden, the author of the original question asks something very similar to your question:
The decision to make all interface members public was indeed an
unfortunate decision. That any use of interface in internal design
exposes implementation private details is a big one.
It's a tough one to fix without adding some obscure or compatibility
breaking nuances to the language. A compatibility break of that
magnitude and potential subtlety would seem unconscionable so a
solution has to exist that doesn't break existing code.
Could reintroducing the 'package' keyword as an access-specifier be
viable? The absence of a specifier in an interface would imply
public-access and the absence of a specifier in a class implies
package-access. Which specifiers make sense in an interface is unclear
- especially if, to minimise the knowledge burden on developers, we have to ensure that access-specifiers mean the same thing in both
class and interface if they're present.
In the absence of default methods I'd have speculated that the
specifier of a member in an interface has to be at least as visible as
the interface itself (so the interface can actually be implemented in
all visible contexts) - with default methods that's not so certain.
Has there been any clear communication as to whether this is even a
possible in-scope discussion? If not, should it be held elsewhere.
Eventually Brian Goetz's answer was:
Yes, this is already being explored.
However, let me set some realistic expectations -- language / VM
features have a long lead time, even trivial-seeming ones like this.
The time for proposing new language feature ideas for Java SE 8 has
pretty much passed.
So, most likely it was never implemented because it was never part of the scope. It was never proposed in time to be considered.
In another heated discussion about final defender methods on the subject, Brian said again:
And you have gotten exactly what you wished for. That's exactly what
this feature adds -- multiple inheritance of behavior. Of course we
understand that people will use them as traits. And we've worked hard
to ensure that the model of inheritance they offer is simple and
clean enough that people can get good results doing so in a broad
variety of situations. We have, at the same time, chosen not to push
them beyond the boundary of what works simply and cleanly, and that
leads to "aw, you didn't go far enough" reactions in some case. But
really, most of this thread seems to be grumbling that the glass is
merely 98% full. I'll take that 98% and get on with it!
So this reinforces my theory that it simply was not part of the scope or part of their design. What they did was to provide enough functionality to deal with the issues of API evolution.
It will be hard to find and identify "THE" answer, for the reasons mentioned in the comments from @EJP: there are roughly 2 (+/- 2) people in the world who can give the definitive answer at all. And when in doubt, the answer might just be something like "Supporting final default methods did not seem to be worth the effort of restructuring the internal call resolution mechanisms". This is speculation, of course, but it is at least backed by subtle evidence, like this statement (by one of the two persons) on the OpenJDK mailing list:
"I suppose if "final default" methods were allowed, they might need rewriting from internal invokespecial to user-visible invokeinterface."
and trivial facts like that a method is simply not considered to be a (really) final method when it is a default method, as currently implemented in the Method::is_final_method method in the OpenJDK.
Further, really "authoritative" information is indeed hard to find, even with extensive web searches and by reading commit logs. I thought that it might be related to potential ambiguities during the resolution of interface method calls with the invokeinterface instruction versus class method calls, which correspond to the invokevirtual instruction: for the invokevirtual instruction, there may be a simple vtable lookup, because the method must either be inherited from a superclass or implemented by the class directly. In contrast to that, an invokeinterface call must examine the respective call site to find out which interface the call actually refers to (this is explained in more detail on the InterfaceCalls page of the HotSpot Wiki). However, final methods either do not get inserted into the vtable at all, or replace existing entries in the vtable (see klassVtable.cpp, line 333), and similarly, default methods replace existing entries in the vtable (see klassVtable.cpp, line 202). So the actual reason (and thus, the answer) must be hidden deeper inside the (rather complex) method call resolution mechanisms, but maybe these references will nevertheless be considered helpful, be it only for others who manage to derive the actual answer from them.
I wouldn't think it is necessary to specify final on a convenience interface method. I can agree, though, that it may be helpful, but seemingly the costs have outweighed the benefits.
What you are supposed to do, either way, is to write proper javadoc for the default method, showing exactly what the method is and is not allowed to do. In that way the classes implementing the interface "are not allowed" to change the implementation, though there are no guarantees.
Anyone could write a Collection that adheres to the interface and then does things in the methods that are absolutely counterintuitive; there is no way to shield yourself from that, other than writing extensive unit tests.
We add the default keyword to a method inside an interface when we know that the class implementing the interface may or may not override our implementation. But what if we want to add a method that we don't want any implementing class to override? Well, two options were available to us:
Add a default final method.
Add a static method.
Now, Java says that if we have a class implementing two or more interfaces such that they have a default method with exactly the same method name and signature, i.e. they are duplicates, then we need to provide an implementation of that method in our class. Now, in the case of default final methods, we couldn't provide an implementation and we would be stuck. And that's why the final keyword isn't used in interfaces.
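For illustration, a sketch of the second option, reusing the Sender example from earlier in the thread (the sendEmpty name is hypothetical):

interface Sender {
    void send(String message);

    // A static interface method is not inherited by implementing classes and
    // cannot be overridden; callers must invoke it as Sender.sendEmpty(s).
    static void sendEmpty(Sender s) {
        s.send(null);
    }
}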
I have code that looks like this:
void a(Object m) {}
void a(BasicDbObject m) {}
void a(TypeA m) {}
void a(TypeB m) {}
void a(TypeC m) {}
.....
void b(Object m) {
    // Some function using Java reflection to determine the class of m
    Class<?> x = c(m);
    a(x.cast(m));
}
Here is the problem: it always executes a(Object m) rather than a(BasicDbObject m), even when m is a BasicDbObject.
My end goal is to execute the method that matches the passed object most closely.
What you are trying cannot be done, because Java is statically typed, and the method overload is resolved at compile-time, not run-time.
The only way to resolve the overload at runtime is for the method call itself to be done with reflection.
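A fragile sketch of what such a reflective call could look like, assuming the a(...) overloads are public methods of the same class. Note that getMethod matches the parameter type exactly and does not perform the compiler's "most specific overload" search, so an instance of a subclass of BasicDbObject would still miss the BasicDbObject overload:

import java.lang.reflect.Method;

void b(Object m) throws Exception {
    // Look up the overload whose single parameter type is exactly m's runtime class.
    Method target = this.getClass().getMethod("a", m.getClass());
    target.invoke(this, m);
}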
Serious non-answer: wrong approach.
You don't use reflection to dynamically determine a type and then figure out which overloaded method to call.
Instead, use polymorphism. Meaning: don't overload, but override.
Rest assured: getting "reflection" working is hard. Getting it correct, and robust and stable is a super challenging, uphill battle.
You basically want to invent your own personal dynamic dispatch implementation. Unless you have super hard pressing reasons to do so, that is a terrible idea. Because chances are that you will get it wrong. Many many times. And even when your code is working, there will be many incidents later on, when unforeseen things happen in production.
As said: don't do this. Don't fight the language, instead use the means that the language offers you to solve such problems: an inheritance tree of classes, and polymorphic methods. Then let the JVM decide which method to invoke. Most likely, the JVM will do a much better job, compared to what you will come up with.
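A minimal sketch of the override-based approach the answer recommends (the type names here are illustrative, not from the question):

interface DbHandler {
    void handle();
}

class BasicDbHandler implements DbHandler {
    @Override public void handle() { System.out.println("basic handling"); }
}

class OtherDbHandler implements DbHandler {
    @Override public void handle() { System.out.println("other handling"); }
}

class DispatchDemo {
    static void process(DbHandler h) {
        h.handle();   // the JVM selects the right implementation at runtime
    }
}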
void a(Object m) {}
void a(BasicDbObject m) {}
When methods are overloaded, it may not be intuitive which method gets invoked for a given set of parameters because, unlike the situation with overridden methods, the overload that gets invoked is determined at compile time (i.e. statically) rather than at run time (i.e. dynamically). This behavior is confusing because overriding is more common and it sets our expectations for method invocation.
There are some rules for doing method overloading as robustly and as simply as possible. These are all nicely enumerated in Effective Java (J. Bloch, 2nd and 3rd eds.).
Your situation is made complex because:
You have two overloadings with the same number of parameters whose types are not radically different ... and ...
The behavior of the overloadings is apparently dependent on the type of the parameter (if the behavior were identical, you would simply have one overloading forward to the other)
When this situation arises, you should try to correct it by giving the overloadings different names. It should always be possible to do this and doing so often improves the clarity and maintainability of the code.
If this can't be done for any reason, then the best workaround is to replace the overloadings with a method that accepts the most general parameter type and which invokes helper methods based on the most specific type of the passed argument.
So instead of the above, you can get the behavior you want by using...
public Function a(Object m) {
    if (m instanceof BasicDbObject) return doDbObject((BasicDbObject) m);
    if (m instanceof OtherDbObject) return doOtherDbObject((OtherDbObject) m);
    return doGenericObject(m);
}
Note that this isn't the code that you would use when Java adopts pattern matching in the language. Note also that the effect of this code is to give your overloadings different names, but the selection of the distinct method is made at run time using instanceof comparisons rather than at compile time by simply using a distinct name.
TLDR; if you are doing method overloading in a circumstance in which the parameter types are not (or may not be) radically different then you are better off not overloading and using distinct method names.
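As an aside on the pattern matching mentioned above: since Java 16, instanceof pattern matching would let the same hypothetical helpers be called without explicit casts, roughly like this:

public Function a(Object m) {
    if (m instanceof BasicDbObject dbObject)    return doDbObject(dbObject);
    if (m instanceof OtherDbObject otherObject) return doOtherDbObject(otherObject);
    return doGenericObject(m);
}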
I know this might be crazy, but today one of my friends puzzled me by asking whether, when we implement an interface in Java, it is considered method overriding. I told him it is not overriding, as we are providing the working (definition) of the method for the first time when we implement an interface. Java provides interfaces to support multiple inheritance, but he was not convinced and kept arguing. Please shed some light on the topic.
The term "overriding" applies when there is an existing implementation of the method . The correct term is "implementing" for interfaces and other abstract declarations.
The @Override annotation is used for both cases; it is used when:
The method does override or implement a method declared in a supertype. --javadocs
And from Wikipedia:
Method overriding, in object oriented programming, is a language feature that allows a subclass or child class to provide a specific implementation of a method that is already provided by one of its superclasses or parent classes.
Note that interfaces can have default methods - redefining these methods overrides them:
When you extend an interface that contains a default method, you can ... redefine the default method, which overrides it.
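A small sketch of that case (the Greeting and LoudGreeting names are illustrative): redefining a default method in an implementing class is an override, and @Override is accepted there:

interface Greeting {
    default String text() { return "hi"; }
}

class LoudGreeting implements Greeting {
    @Override
    public String text() { return "HI!"; }   // overrides the interface's default method
}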
Besides linking to "canonical" sources, I'm not sure what advice to offer on winning a semantic argument with your friend. Perhaps you could ask him what the distinction is between "implementing" and "overriding", and what word he would use instead of "overriding" for the concept of redefining an existing method.
At first glance, interfaces just define an API. Since there is no super method to override, the implementation is the first method.
But since Java 5, it's customary to add @Override annotations even for methods which come from interfaces. The main reason here is to catch problems which happen when people change an interface: now you have a method which is "dangling", with no API saying that the method has to be there. The annotation causes an error if you remove a method from the interface, catching this so you can properly clean up all the code.
But that still doesn't mean the implementing method overrides anything.
Except that an interface is very much an abstract class with abstract methods in the bytecode. And abstract methods do get overridden.
My feeling is that you can argue both ways but the argument is moot unless you have a use case there the answer to the question actually has a real impact on the code. And here, it doesn't really matter since the compiler hides all the ugly details.