I have some class X with a field of type A in it. It is a final field, initialized in the constructor. Now I have a derived class Y where this field must always be an instance of B, a class derived from A. The problem is that class Y needs to call a number of specific methods that are only available on class B but not on its ancestor, A.
I see several solutions:
1. Reuse the field of type A inherited from X, and inside class Y cast A to B. This makes my code full of nasty type casts.
2. Add another field of type B. Now there are no type casts, but we have two fields of slightly different types that must always hold the same value, which also does not feel good.
3. Add all the methods that B provides to A as well, and have them throw NotImplementedException there. This adds strange methods that knowingly make no sense in that class.
Which is the right way to deal with this field? Or is there some other, better approach? I do not think this is very language-specific, but it must be doable in Java, which I am using.
The X type should probably be a generic type:
public class X<T extends A> {
    protected T a;

    public X(T a) {
        this.a = a;
    }
}
public class Y extends X<B> {
    public Y(B b) {
        super(b);
    }

    public void foo() {
        // here, this.a is of type B, without any type cast
    }
}
The only sane solution I can think of is a variation of #1: add a getter method to class Y that returns the field cast to type B, and access the field only through this getter. That requires only one cast, and as a beneficial side effect it also documents which parts of the code require the field to actually be a B.
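A minimal sketch of that getter, assuming the inherited field is called a and is accessible to subclasses; the B-only method name is purely illustrative:
public class Y extends X {
    public Y(B b) {
        super(b); // B is-a A, so the existing constructor accepts it
    }

    private B getB() {
        return (B) a; // the single, documented cast
    }

    public void doSomethingSpecific() {
        getB().someMethodOnlyInB(); // hypothetical B-only method; no casts needed here
    }
}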
Solution #1 is the most correct way, in my opinion. If you know that an object is of a specific class, there's nothing wrong with casting it to that class. It'd be wrong if you were making assumptions while doing this, but this is not an assumption, according to what you're saying. Maybe make a getter method that simply casts the underlying field to the correct type and be done with it? That way, you'll only have to cast once (per subclass).
Solution #2 will cause elusive runtime errors if the two fields ever get out of sync with each other. This seriously sounds like the worst solution.
Solution #3 still feels like bad software design. If a method exists and it's not in an abstract class and you are not in a prototyping phase, then that method should be implemented. Otherwise, you're probably just planting unnecessary traps for the user (of the class) by giving it a misleading interface.
A quick example
class A {
    protected int foo(int x) {
        return x;
    }
}

class B extends A {
    public int foo(int x) {
        return x * x;
    }
}
This is allowed in Java and works without any issue. But let's say in another package you declare
A b = new B();
int z = b.foo(5);
Then this won't work, because A's foo() is protected. But then why allow subclasses to have more accessible methods in the first place? Is there a case where this is helpful?
In general, subclasses can add methods to the interface they inherit from their parent class. Making a method more accessible is effectively adding to the interface.
But then why allow subclasses to have more accessible methods in the first place?
Because it can be useful for code that holds a subclass reference.
Is there a case where this is helpful?
One good example is Object.clone(), a protected method. All Java classes are subclasses, directly or indirectly, of Object. Subclasses that support cloning can choose to make this method public.
Foo foo1 = new Foo();
Foo foo2 = foo1.clone(); // Sometimes you're holding a subclass reference.
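For example, a hypothetical Foo (not from the original post) that widens the protected clone() to public could look like this:
public class Foo implements Cloneable {
    @Override
    public Foo clone() { // widened from protected to public, with a covariant return type
        try {
            return (Foo) super.clone();
        } catch (CloneNotSupportedException e) {
            throw new AssertionError(e); // cannot happen: Foo implements Cloneable
        }
    }
}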
Because subclassing is supposed to allow you to think of subclasses as instances of their superclasses. I think this is called an is-a relationship (namely, B is-a A, but not the other way around). I don't really know a good way to explain this succinctly.
In your example, all instances of B are also instances of A (be mindful this may be a gross oversimplification, but I think for this example it is enough). A only guarantees to have the methods that are public in its own definition. B just happens to make one of the protected methods public, but suppose you had another class called C that did not do this. You can cast both B and C to A, but if you were allowed to call the protected method on A because B makes it public, then you would get an error when you passed a C.
In the other package, you're casting B to A, so your object effectively takes on the interface of A, and you are restricted to using only the interface given to you by A. Could the compiler or JVM go to the trouble of noticing that 1) this A is in fact a B and 2) B makes foo public, so foo should be callable? It could, I suppose, but how much effort would that take to implement and how error-prone would it be? Keep it simple. If the type is A, then the interface is A. If the type is B, then the interface is A plus whatever else B includes on top of that.
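Here is a hedged sketch of that argument, adding a hypothetical class C that keeps foo() protected (assuming A and B are public so they are visible from the other package):
// In the other package:
class C extends A {} // hypothetical subclass that does not widen foo()

class Demo {
    void run() {
        A asB = new B();
        A asC = new C();
        // asB.foo(5);            // does not compile: via the static type A, foo() is protected
        // asC.foo(5);            // same here; the compiler only looks at the declared type A
        int z = ((B) asB).foo(5); // fine: B declares foo() public
    }
}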
You could make the second example compile if you replaced A with B. That's the point: B can be used everywhere A can, but can also add additional features (such as making methods public).
Let's say class A has a method a() which uses the this keyword in its body, and class B extends class A. Now class B has inherited method a(); however, I am not sure whether the compile-time type of this in B.a() is now A or B.
I suspect it is the type of the class in which it is written, but I am not 100% sure. Could someone please confirm my suspicion and perhaps give a reference to the Java Language Specification where this behaviour is defined?
I am asking this because I am trying to understand how the visitor pattern works, as it is described in this Robert C. Martin's Visitor chapter from The Principles, Patterns, and Practices of Agile Software Development.
It seems to be crucial to know the compile time type of this if one wants to fully understand the visitor pattern because overloaded method calls are resolved at compile time. More specifically, I refer to the compile time type of this in the accept methods in the visitor pattern.
The type of this is the type of the class in which it is used. In fact, this is crucial for the visitor pattern from the article to work.
Visitor pattern implements double dispatch in two steps - selecting the appropriate accept method in the object being visited (first leg), and then selecting the appropriate visit method in the visitor (second leg). The first leg is implemented through overriding; the second leg is implemented through overloading.
Note that it is not necessary to use overloading for the second leg. In fact, it is common not to use it there for better readability. Compare these two implementations:
// Copied from Listing 29-2
public interface ModemVisitorOverload
{
    void visit(HayesModem modem);
    void visit(ZoomModem modem);
    void visit(ErnieModem modem);
}

public interface ModemVisitorNoOverload
{
    void visitHayes(HayesModem modem);
    void visitZoom(ZoomModem modem);
    void visitErnie(ErnieModem modem);
}
The second implementation does not use overloading. It works in exactly the same way, except that human readers of the code immediately see what is going on.
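Here is a minimal sketch of both legs, loosely following the Modem example (the Modem interface and the empty class bodies are simplifications here, not Martin's actual listing):
interface Modem {
    void accept(ModemVisitorOverload v); // first leg: overriding selects the right accept()
}

class HayesModem implements Modem {
    public void accept(ModemVisitorOverload v) {
        v.visit(this); // 'this' has compile-time type HayesModem, so visit(HayesModem) is chosen
    }
}

class ZoomModem implements Modem {
    public void accept(ModemVisitorOverload v) {
        v.visit(this); // overload resolution picks visit(ZoomModem) at compile time
    }
}

class ErnieModem implements Modem {
    public void accept(ModemVisitorOverload v) {
        v.visit(this); // overload resolution picks visit(ErnieModem) at compile time
    }
}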
this is a reference to the current object, and as JLS 15.8.3 states:
The type of this is the class C within which the keyword this occurs.
At run time, the class of the actual object referred to may be the class C or any subclass of C.
Thank you for the answers.
After reading the answers I got inspired to try to compile the code below. The result of this little experiment might give also some insight into the answer to the original question.
The code below does not compile, because this in A is not of the correct type (according to the compiler). So for the visitor pattern to work, the accept method needs to be repeated in the subclasses of A, and the methods called by v.visit(this) are fixed already at compile time (as others have pointed out in their answers).
The point of the original question was whether one could avoid this redundancy in code but it seems not.
class A {
    void a(VisitorI v) { v.visit(this); }
}

class B1 extends A {}
class B2 extends A {}

interface VisitorI {
    void visit(B1 b1);
    void visit(B2 b2);
}

class Visitor implements VisitorI {
    public void visit(B1 b1) {
        System.out.print("b1");
    }

    public void visit(B2 b2) {
        System.out.print("b2");
    }
}
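For comparison, here is a sketch of a version that does compile, reusing VisitorI and Visitor from above: the accept-style method a() is repeated in each subclass, which is exactly the redundancy in question.
abstract class A {
    abstract void a(VisitorI v); // each subclass has to provide its own accept-style method
}

class B1 extends A {
    void a(VisitorI v) { v.visit(this); } // compile-time type of 'this' is B1, so visit(B1) is chosen
}

class B2 extends A {
    void a(VisitorI v) { v.visit(this); } // compile-time type of 'this' is B2, so visit(B2) is chosen
}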
There's no such thing as B.a. There is only one method here, and it is in class A, so this obviously has the static type A. This isn't C++, where function bodies get duplicated willy-nilly (if you use templates).
Now, if you created a method named a inside class B, then this would have the static type B inside that method. And thanks to the magic of virtual methods, the right one would get called based on the runtime type when you invoke .a() on an object that could be either type A or B. But this is only if you override A's method.
An object can only have one runtime class. In the case of an instance of B extends A, this will always be a B, even inside methods of A.
However, you can always use an object of type B as though it were an object of type A, so even though getClass() will return B, all the methods within A can use this as though it were an A.
For example, it's completely valid to do:
A a = new B(); // Valid as B extends A
a.test();
If B declares (overrides) a method test(), then that version of the method will be called, even though you are referring to the object as though it were an A.
This is also how overriding methods can be used to change the behaviour of a base class from a subclass. When you have a method test() in A and then override it in B, whenever code in A calls this.test() it will actually call the overridden method in B.
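A small sketch of that last point, with hypothetical class names:
class Base {
    void greet() {
        System.out.println("Hello from " + name()); // calls name() through 'this'
    }

    String name() { return "Base"; }
}

class Derived extends Base {
    @Override
    String name() { return "Derived"; } // overrides the method that Base calls internally
}

// new Derived().greet() prints "Hello from Derived": greet() lives in Base,
// but the call to name() is dispatched on the runtime class of the object.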
Your mistake comes from the assumption that overriding is resolved at compile time. It's not; it's resolved at run time. Overloaded methods are resolved at compile time. Overridden methods are resolved at run time. Overloading and overriding are very different things.
I'm having some trouble finding the specifics on what happens when you return a subclass in a method of superclass type in Java. For example:
public class SuperClass
{
    int a;
}

public class SubClass extends SuperClass
{
    int b;
}
static SuperClass superObj;
static SubClass subObj = new SubClass();

private static SuperClass getObject()
{
    return subObj;
}

public static void main(String[] args)
{
    superObj = getObject();
}
What exactly happens to subObj when it's returned as its superclass? I realise while typing this example that I could probably just as easily test it myself, but I'm still curious as to what exactly the process is when this happens, and whether it's considered good (if it works, that is) or bad practice.
I'm asking because I'm currently working on a project in which I have two abstract base classes, and several subclasses for each of them. I'm trying to find out good/bad ways to handle having to change from one subclass to another while still using the convenience polymorphism adds when using abstract base classes.
EDIT: I fixed main and the class declarations, sorry about that.
Casting does not fundamentally change an object to a different type; it simply tells the compiler to view the object as its superclass (or subclass, when downcasting). In your example, superObj still refers to an instance of SubClass, but it now appears to be a SuperClass. What this means is that if you try to reference superObj.b, you will get a compilation error (since b does not exist in SuperClass). However, you could reference ((SubClass) superObj).b. In this case you are telling the compiler to consider superObj as an instance of SubClass (which it really is).
Let's take this a step further and add another class to your code:
public class SisterClass extends SuperClass
{
    int c;
}
Without changing anything else in your code (other than the syntax problems), suppose you try to reference ((SisterClass) superObj). This will compile but fail with a ClassCastException at runtime. Although SisterClass is a subclass of SuperClass, superObj is not an instance of SisterClass. So, you can only cast to what the object actually is.
There are some oddities in your code (defining a method inside main?), but that notwithstanding... The method getObject will not change your subObj; it will simply return a reference to it that looks like type SuperClass. By "looks like" I mean that it will only expose the methods and members of SuperClass. However, if you take that returned value and downcast it to SubClass, the cast will succeed and you will find that the fields and methods of SubClass work as you expect, without any loss of information from having been returned as SuperClass.
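Putting the two answers together in one hedged sketch (the demo method and variable names are new; the classes are the ones from the question and the SisterClass example above):
static void demo() {
    SuperClass s = getObject();           // static type SuperClass, runtime class SubClass
    // int x = s.b;                       // would not compile: b is not a member of SuperClass
    int x = ((SubClass) s).b;             // fine: the downcast restores access, nothing was lost
    // SisterClass sis = (SisterClass) s; // compiles, but throws ClassCastException at run time
    System.out.println(x);
}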
SubClass is an extension of SuperClass. You are just casting it down to its base class, so the extensions are not available through that reference. You should not be looking to get them back by guessing at concrete implementations, as the class could be extended numerous times in various ways.
The returned reference would expose a but not b.
Suppose I have class A
public class A
{
    public void method()
    {
        //do stuff
    }
}
Also another class B
public class B extends A
{
    public void method()
    {
        //do other stuff
    }
}
Now I have the following statements:
A a = new B();
a.method();
Is this an example of run time polymorphism? If yes, then is no binding done for the reference a at compile time?
The compiler will tell you that this can't work, because there's no relationship between A and B that will allow you to write A a = new B();
B either has to extend A or both have to implement a common interface with void method() in it.
You could have answered this very quickly by trying it with the compiler. Be an experimentalist - it's faster than forums.
UPDATE:
It works, now that B extends A. The binding that you care about, dynamic binding, is done at runtime. The static, compile-time type of the variable a is class A; at runtime the call is dispatched to B's method because a refers to an object of class B. Yes, I would consider this to be an example of polymorphism.
I was waiting to put my answer from another post into a more correctly classified question, and yours fits the concept; I've tried to explain it in detail. Since I don't know how to link answers, I'll just copy/paste the complete explanation here. Kindly go through it, and you'll understand everything there is to understand about the problem you've asked about.
Let's say the compiler has to resolve a call like this:
A a = new AA(); // assume AA is some subclass of class A
a.someFunc();   // and we're invoking a method someFunc() on a
Now, the compiler will go over the following steps methodically.
1.) First, the compiler knows the declared type of the variable a, so it will check whether the declared type of a (let's call it class A for the time being) has a method with the name someFunc() and that it is public. This method could either be declared in class A or be inherited from one of the base class(es) of class A, but that doesn't matter to the compiler; it just checks for its existence and for its access specifier being public.
Needless to say, any error in this step will produce a compiler error.
2.) Second, once the method is validated to be a part of class A, the compiler has to resolve the call to the correct method, since many methods could exist with the same name (thanks to method overloading). This process of resolving the correct method is called overload resolution. The compiler achieves this by matching the signature of the called method against all the overloaded methods that are part of the class. So, of all the someFunc()s, only the correct someFunc() (the one matching the signature of the call) will be found and considered further.
3.) Now comes the difficult part. It may very well happen that someFunc() has been overridden in one of the subclasses of class A (let's call it class AA; needless to say, it is some subclass of A), and that the variable a (declared to be of type A) may actually be referring to an object of class AA (this is permissible, to enable polymorphism). Now, if someFunc() is virtual in the base class (i.e. class A) — which instance methods in Java are by default — and someFunc() has been overridden by a subclass of A (either AA or a class between A and AA), the correct version of someFunc() has to be found.
Now, imagine you're the compiler and you have the task of finding out whether class AA has this method. Obviously, class AA will have this method, since it is a subclass of A and the public access of someFunc() in class A has already been validated in step 1. But, as mentioned in the previous paragraph, someFunc() may be overridden by class AA (or any other class between A and AA), which is what needs to be caught. Therefore, you (since you're playing the compiler) could do a systematic check to find the bottom-most (lowest in the inheritance tree) overriding someFunc(), starting from class A and ending at class AA. In this search, you'll look for the same method signature that was selected during overload resolution. That method will be the one that is invoked.
Now, you may be wondering, "What the heck, is this search done every time?" Well, not really. The overhead of searching every time is well known, so a data structure called a virtual table is maintained for every class type. Think of the virtual table as a mapping from (publicly accessible) method signatures to function pointers. It is laid out during compilation and kept in memory during program execution. In our example, class A and class AA will both have their own virtual tables. When the call executes and the actual object referred to by a is of type AA, the correct someFunc() is found through the virtual table of class AA. This is as simple as indexing into the table and is a constant-time operation.
The code example you've given isn't legal, so I guess the answer is that it's not a kind of polymorphism.
You can't assign a class to a variable of an unrelated type like that. I'm guessing that you might be intending for B to derive from A?
Is this an example of run time polymorphism?
As of the edit: yes that would be runtime polymorphism, since which method is actually executed depends on what is assigned to a.
Edit:
If method() were static, then calling a.method() would always result in A's version being called. However, you'd then just write A.method(), and if you don't, any decent IDE will warn you about it (because it can be misleading to call a static method on an instance).
End Edit.
Compile time polymorphism would be the overloading mechanism, i.e. the compiler decides whether to use someMethod(Number) or someMethod(Integer) depending on what it knows about the parameter that is passed (if you pass a Double or just a Number it would be the first one, if you pass an Integer it would be the second).
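For instance, here is a small sketch of that compile-time choice (the method and class names are made up for illustration):
class OverloadDemo {
    static void someMethod(Number n)  { System.out.println("Number overload"); }
    static void someMethod(Integer i) { System.out.println("Integer overload"); }

    public static void main(String[] args) {
        Number boxedInt = Integer.valueOf(1);
        someMethod(1.5);      // boxed to Double, so only the Number overload applies
        someMethod(1);        // boxed to Integer: the more specific overload is chosen
        someMethod(boxedInt); // Number overload: the compile-time type decides, not the runtime class
    }
}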
If yes, then is no binding done for the reference a at compile time?
What do you mean exactly? Note that the type of a will be A, so you can assign anything that extends A at runtime. The compiler will just complain if the assigned value is not an A (you might trick the compiler with a cast, but that is just evil, and if it breaks your program, you have been warned ;) )
It should be
public class B extends A {
    public void method()
    {
        //do other stuff
    }
}
If it compiled, it would be B's method() that would be called (but only if you changed your code to make class B inherit from class A).
In this case you use your object as if it were only an A, but if some methods are overridden in B, then those methods are called instead of A's methods.
Possible Duplicate:
java Enum definition
Better formulated question, that is not considered a duplicate:
What would be different in Java if Enum declaration didn't have the recursive part
If the language designers had simply used Enum<E extends Enum>, how would that affect the language?
The only difference now would be that someone could write
A extends Enum<B>
but since it is not allowed in Java to extend enums, that would still be illegal.
I was also thinking about someone supplying the JVM with bytecode that defines something as extending an enum, but generics can't affect that, as they are all erased.
So what is the whole point of such declaration?
Thank you!
Edit
for simplicity let's look at an example:
interface MyComparable<T> {
    int myCompare(T o);
}

class MyEnum<E extends MyEnum> implements MyComparable<E> {
    public int myCompare(E o) { return -1; }
}
class FirstEnum extends MyEnum<FirstEnum> {}
class SecondEnum extends MyEnum<SecondEnum> {}
What's wrong with this class structure? What could someone do that MyEnum<E extends MyEnum<E>> would restrict?
This is a common question, and understandably so. Have a look at this part of the generics FAQ for the answer (and actually, read as much of the whole document as you feel comfortable with, it's rather well done and informative).
The short answer is that it forces the class to be parameterized on itself; this is required for superclasses to define methods, using the generic parameter, that work transparently ("natively", if you will) with their subclasses.
Edit: As a (non-)example for instance, consider the clone() method on Object. Currently, it's defined to return a value of type Object. Thanks to covariant return types, specific subclasses can (and often do) define that they return a more specific class, but this cannot be enforced and hence cannot be inferred for an arbitrary class.
Now, if Object were defined like Enum, i.e. Object<T extends Object<T>>, then you'd have to define all classes as something like public class MyFoo extends Object<MyFoo>. Consequently, clone() could be declared to return a type of T, and you could ensure at compile time that the returned value is always exactly the same class as the object itself (not even subclasses would match the parameter).
Now in this case, Object isn't parameterized like this because it would be extremely annoying to have this baggage on all classes when 99% of them aren't going to utilise it at all. But for some class hierarchies it can be very useful - I've used a similar technique myself before with types of abstract, recursive expression parsers with several implementations. This construct made it possible to write code that was "obvious" without having to cast everywhere, or copy-and-paste just to change concrete class definitions.
Edit 2 (To actually answer your question!):
If Enum was defined as Enum<E extends Enum>, then as you rightly say, someone could define a class as A extends Enum<B>. This defeats the point of the generic construct, which is to ensure that the generic parameter is always the exact type of the class in question. Giving a concrete example, Enum declares its compareTo method as
public final int compareTo(E o)
In this case, since you defined A to extend Enum<B>, instances of A could only be compared against instances of B (whatever B is), which is almost certainly not very useful. With the additional construct, you know that any class that extends Enum is comparable only against itself. And hence you can provide method implementations in the superclass that remain useful, and specific, in all subclasses.
(Without this recursive generics trick, the only other option would be to define compareTo as public final int compareTo(Enum o). This is not really the same thing, as then one could compare a java.math.RoundingMode against a java.lang.Thread.State without the compiler complaining, which again isn't very useful.)
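To make the difference concrete with the MyEnum sketch from the question (ThirdEnum and Odd are hypothetical names, not from the original posts):
// Under class MyEnum<E extends MyEnum> (the raw bound), both of these compile:
class ThirdEnum extends MyEnum<FirstEnum> {} // its inherited myCompare() now takes FirstEnum, not ThirdEnum
class Odd extends MyEnum<ThirdEnum> {}       // allowed: ThirdEnum is "some MyEnum", nothing more is checked

// Under class MyEnum<E extends MyEnum<E>>, the declaration of Odd is rejected,
// because ThirdEnum does not extend MyEnum<ThirdEnum>. The self-referential bound
// forces every type argument to itself be self-bounded, which is what keeps
// superclass methods such as compareTo(E o) meaningfully typed in the subclasses.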
OK, let's get away from Enum itself as we appear to be getting hung up on it. Instead, here is an abstract class:
public abstract class Manipulator<T extends Manipulator<T>>
{
    /**
     * This method actually does the work, whatever that is
     */
    public abstract void manipulate(DomainObject o);

    /**
     * This creates a child that can be used for divide and conquer-y stuff
     */
    public T createChild()
    {
        // Some really useful implementation here based on
        // state contained in this class
        return null; // placeholder so the sketch compiles
    }
}
We are going to have several concrete implementations of this - SaveToDatabaseManipulator, SpellCheckingManipulator, whatever. Additionally we also want to let people define their own, as this is a super-useful class. ;-)
Now - you will notice that we're using the recursive generic definition, and then returning T from the createChild method. This means that:
1) We know and the compiler knows that if I call:
SpellCheckingManipulator obj = ...; // We have a reference somehow
return obj.createChild();
then the returned value is definitely a SpellCheckingManipulator, even though it's using the definition from the superclass. The recursive generics here allow the compiler to know what is obvious to us, so you don't have to keep casting the return values (like you often have to do with clone(), for example).
2) Notice that I didn't declare the method final, since perhaps some specific subclasses will want to override it with a more suitable version for themselves. The generics definition means that regardless of who creates a new class or how it is defined, we can still assert that the return from e.g. BrandNewSloppilyCodedManipulator.createChild() is typed for that class. If a careless developer tries to override it to return just Manipulator, the compiler won't let them. And if they try to declare their class as extending Manipulator<SpellCheckingManipulator>, createChild() will be typed to return SpellCheckingManipulator rather than their own class, so the mistake surfaces immediately.
Basically, the conclusion is that this trick is useful when you want to provide some functionality in a superclass that somehow gets more specific in subclasses. By declaring the superclass like this, you are locking down the generic parameter for any subclasses to be the subclass itself. This is why you can write a generic compareTo or createChild method in the superclass and prevent it from becoming overly vague when you're dealing with specific subclasses.