One of the most useful features of Java 8 is the new default methods on interfaces. There are essentially two reasons (there may be others) why they were introduced:
Providing actual default implementations. Example: Iterator.remove()
Allowing for JDK API evolution. Example: Iterable.forEach()
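To illustrate the second point: a default body is exactly what allowed the JDK to add forEach to an interface that had existed for years without breaking a single implementor. The following is a rough, simplified sketch of that idea, not the actual JDK source; SimpleIterable is an invented stand-in:

import java.util.Iterator;
import java.util.Objects;
import java.util.function.Consumer;

// Hypothetical pre-Java-8-style interface that later gains a method without breaking implementors.
interface SimpleIterable<T> {

    Iterator<T> iterator(); // the only method old implementations ever had to supply

    // Added later as a default method: existing implementing classes keep compiling
    // and inherit this body for free.
    default void forEach(Consumer<? super T> action) {
        Objects.requireNonNull(action);
        Iterator<T> it = iterator();
        while (it.hasNext()) {
            action.accept(it.next());
        }
    }
}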
From an API designer's perspective, I would have liked to be able to use other modifiers on interface methods, e.g. final. This would be useful when adding convenience methods, preventing "accidental" overrides in implementing classes:
interface Sender {

    // Convenience method to send an empty message
    default final void send() {
        send(null);
    }

    // Implementations should only implement this method
    void send(String message);
}
The above would already be common practice if Sender were a class:
abstract class Sender {

    // Convenience method to send an empty message
    final void send() {
        send(null);
    }

    // Implementations should only implement this method
    abstract void send(String message);
}
Now, default and final are obviously contradictory keywords, but the default keyword itself would not have been strictly required, so I'm assuming that this contradiction is deliberate, to reflect the subtle differences between "class methods with body" (just methods) and "interface methods with body" (default methods), i.e. differences which I have not yet understood.
At some point in time, support for modifiers like static and final on interface methods had not yet been fully explored. Citing Brian Goetz:
The other part is how far we're going to go to support class-building
tools in interfaces, such as final methods, private methods, protected
methods, static methods, etc. The answer is: we don't know yet
Since that time in late 2011, support for static methods in interfaces has obviously been added. Clearly, this added a lot of value to the JDK libraries themselves, such as with Comparator.comparing().
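For a sense of what that enables, here is a small, hedged usage sketch of Comparator.comparing, a static method declared directly on the Comparator interface; the Person class is invented purely for illustration:

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class ComparingDemo {

    // Invented data carrier, only here for illustration.
    static class Person {
        final String name;
        final int age;
        Person(String name, int age) { this.name = name; this.age = age; }
        String getName() { return name; }
        int getAge() { return age; }
        @Override public String toString() { return name + "(" + age + ")"; }
    }

    public static void main(String[] args) {
        List<Person> people = new ArrayList<>();
        people.add(new Person("Bea", 31));
        people.add(new Person("Al", 42));

        // comparing and thenComparingInt are static/default methods on the Comparator
        // interface itself -- no separate utility class is required.
        people.sort(Comparator.comparing(Person::getName).thenComparingInt(Person::getAge));

        System.out.println(people); // [Al(42), Bea(31)]
    }
}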
Question:
What is the reason final (and also static final) never made it to Java 8 interfaces?
This question is, to some degree, related to What is the reason why “synchronized” is not allowed in Java 8 interface methods?
The key thing to understand about default methods is that the primary design goal is interface evolution, not "turn interfaces into (mediocre) traits". While there's some overlap between the two, and we tried to be accommodating to the latter where it didn't get in the way of the former, these questions are best understood when viewed in this light. (Note too that class methods are going to be different from interface methods, no matter what the intent, by virtue of the fact that interface methods can be multiply inherited.)
The basic idea of a default method is: it is an interface method with a default implementation, and a derived class can provide a more specific implementation. And because the design center was interface evolution, it was a critical design goal that default methods be able to be added to interfaces after the fact in a source-compatible and binary-compatible manner.
The too-simple answer to "why not final default methods" is that then the body would not simply be the default implementation, it would be the only implementation. While that's a little too simple an answer, it gives us a clue that the question is already heading in a questionable direction.
Another reason why final interface methods are questionable is that they create impossible problems for implementors. For example, suppose you have:
interface A {
    default void foo() { ... }
}

interface B {
}

class C implements A, B {
}
Here, everything is good; C inherits foo() from A. Now supposing B is changed to have a foo method, with a default:
interface B {
    default void foo() { ... }
}
Now, when we go to recompile C, the compiler will tell us that it doesn't know what behavior to inherit for foo(), so C has to override it (and could choose to delegate to A.super.foo() if it wanted to retain the same behavior.) But what if B had made its default final, and A is not under the control of the author of C? Now C is irretrievably broken; it can't compile without overriding foo(), but it can't override foo() if it was final in B.
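For contrast, here is a minimal sketch of how the non-final case resolves, reusing the A, B, C names from above (the bodies are placeholders):

class C implements A, B {
    // The compiler now demands an explicit choice, because A and B both contribute a default foo().
    @Override
    public void foo() {
        A.super.foo(); // keep A's behavior -- possible only because neither default is final
    }
}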
This is just one example, but the point is that finality for methods is really a tool that makes more sense in the world of single-inheritance classes (which generally couple state to behavior) than in interfaces, which merely contribute behavior and can be multiply inherited. It's too hard to reason about "what other interfaces might be mixed into the eventual implementor", and allowing an interface method to be final would likely cause these problems (and they would blow up not on the person who wrote the interface, but on the poor user who tries to implement it.)
Another reason to disallow them is that they wouldn't mean what you think they mean. A default implementation is only considered if the class (or its superclasses) don't provide a declaration (concrete or abstract) of the method. If a default method were final, but a superclass already implemented the method, the default would be ignored, which is probably not what the default author was expecting when declaring it final. (This inheritance behavior is a reflection of the design center for default methods -- interface evolution. It should be possible to add a default method (or a default implementation to an existing interface method) to existing interfaces that already have implementations, without changing the behavior of existing classes that implement the interface, guaranteeing that classes that already worked before default methods were added will work the same way in the presence of default methods.)
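The following is a minimal sketch of that resolution rule, with names invented for illustration: the concrete method inherited from the superclass wins, and the interface default is silently ignored, which is exactly why a final marker on the default would be misleading.

interface Greeter {
    // Default added later, as part of "interface evolution".
    default String greet() { return "hello from the interface default"; }
}

class LegacyBase {
    // Concrete method that existed long before Greeter gained its default.
    public String greet() { return "hello from the superclass"; }
}

// Legacy already worked before the default was added; its behavior must not change.
class Legacy extends LegacyBase implements Greeter { }

class ClassWinsDemo {
    public static void main(String[] args) {
        System.out.println(new Legacy().greet()); // prints "hello from the superclass"
    }
}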
In the lambda mailing list there are plenty of discussions about this. One thread that seems to cover much of this ground is the following: On Varied interface method visibility (was Final defenders).
In this discussion, Talden, the author of the original question, asks something very similar to your question:
The decision to make all interface members public was indeed an
unfortunate decision. That any use of interface in internal design
exposes implementation private details is a big one.
It's a tough one to fix without adding some obscure or compatibility
breaking nuances to the language. A compatibility break of that
magnitude and potential subtlety would seem unconscionable so a
solution has to exist that doesn't break existing code.
Could reintroducing the 'package' keyword as an access-specifier be
viable. It's absence of a specifier in an interface would imply
public-access and the absence of a specifier in a class implies
package-access. Which specifiers make sense in an interface is unclear
- especially if, to minimise the knowledge burden on developers, we have to ensure that access-specifiers mean the same thing in both
class and interface if they're present.
In the absence of default methods I'd have speculated that the
specifier of a member in an interface has to be at least as visible as
the interface itself (so the interface can actually be implemented in
all visible contexts) - with default methods that's not so certain.
Has there been any clear communication as to whether this is even a
possible in-scope discussion? If not, should it be held elsewhere.
Eventually Brian Goetz's answer was:
Yes, this is already being explored.
However, let me set some realistic expectations -- language / VM
features have a long lead time, even trivial-seeming ones like this.
The time for proposing new language feature ideas for Java SE 8 has
pretty much passed.
So, most likely it was never implemented because it was never part of the scope. It was never proposed in time to be considered.
In another heated discussion about final defender methods, Brian said again:
And you have gotten exactly what you wished for. That's exactly what
this feature adds -- multiple inheritance of behavior. Of course we
understand that people will use them as traits. And we've worked hard
to ensure that the model of inheritance they offer is simple and
clean enough that people can get good results doing so in a broad
variety of situations. We have, at the same time, chosen not to push
them beyond the boundary of what works simply and cleanly, and that
leads to "aw, you didn't go far enough" reactions in some case. But
really, most of this thread seems to be grumbling that the glass is
merely 98% full. I'll take that 98% and get on with it!
So this reinforces my theory that it simply was not part of the scope or part of their design. What they did was to provide enough functionality to deal with the issues of API evolution.
It will be hard to find and identify "THE" answer, for the reasons mentioned in the comments from #EJP: there are roughly 2 (+/- 2) people in the world who can give the definitive answer at all. And in doubt, the answer might just be something like "Supporting final default methods did not seem to be worth the effort of restructuring the internal call resolution mechanisms". This is speculation, of course, but it is at least backed by subtle evidence, like this statement (by one of the two persons) in the OpenJDK mailing list:
"I suppose if "final default" methods were allowed, they might need rewriting from internal invokespecial to user-visible invokeinterface."
and by trivial facts, like the fact that a method is simply not considered to be a (really) final method when it is a default method, as currently implemented in the Method::is_final_method method in the OpenJDK.
Further, really "authoritative" information is indeed hard to find, even with extensive web searches and by reading commit logs. I thought that it might be related to potential ambiguities during the resolution of interface method calls with the invokeinterface instruction and class method calls, corresponding to the invokevirtual instruction: for the invokevirtual instruction, there may be a simple vtable lookup, because the method must either be inherited from a superclass or implemented by the class directly. In contrast to that, an invokeinterface call must examine the respective call site to find out which interface this call actually refers to (this is explained in more detail in the InterfaceCalls page of the HotSpot Wiki). However, final methods either do not get inserted into the vtable at all, or replace existing entries in the vtable (see klassVtable.cpp, line 333), and similarly, default methods replace existing entries in the vtable (see klassVtable.cpp, line 202). So the actual reason (and thus, the answer) must be hidden deeper inside the (rather complex) method call resolution mechanisms, but maybe these references will nevertheless be considered helpful, be it only for others who manage to derive the actual answer from them.
I wouldn't think it is necessary to specify final on a convenience interface method; I can agree, though, that it may be helpful, but seemingly the costs have outweighed the benefits.
What you are supposed to do, either way, is to write proper Javadoc for the default method, showing exactly what the method is and is not allowed to do. That way, the classes implementing the interface "are not allowed" to change the implementation, though there are no guarantees.
Anyone could write a Collection that adheres to the interface and then does things in the methods that are absolutely counterintuitive; there is no way to shield yourself from that, other than writing extensive unit tests.
We add the default keyword to a method inside an interface when we know that a class implementing the interface may or may not override our implementation. But what if we want to add a method that we don't want any implementing class to override? Well, two options were available to us:
Add a default final method.
Add a static method.
Now, Java says that if we have a class implementing two or more interfaces such that they have default methods with exactly the same method name and signature, i.e. they are duplicates, then we need to provide an implementation of that method in our class. Now, in the case of default final methods, we couldn't provide an implementation and we would be stuck. And that's why the final keyword isn't used in interfaces.
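A minimal sketch of that duplicate-default rule, with hypothetical names: the implementing class must pick a winner, which would be impossible if either default were final.

interface Left {
    default String hello() { return "left"; }
}

interface Right {
    default String hello() { return "right"; }
}

class Both implements Left, Right {
    // Without this override the class does not compile: Both inherits unrelated
    // defaults for hello() from types Left and Right.
    @Override
    public String hello() {
        return Left.super.hello(); // or Right.super.hello(), or something entirely new
    }
}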
From the PDF of a Java course: http://www.ccs.neu.edu/home/riccardo/courses/csu370-fa07/lect4.pdf
It says:
For those of you that follow at home, let me emphasize that subclassing is not inheritance. We will see inheritance later in the course.
Of course, subclassing and inheritance are related. As we will see inheritance is a code reuse mechanism that lets you reuse code easily when defining subclasses. But subclassing makes sense even when you do not have inheritance.
(Indeed, some languages have subclassing but no inheritance, at least, not inheritance like Java implements.)
Subclassing is a property of classes, and is properly part of the type system of Java. Subclassing is used by Java to determine what methods can possibly be invoked on an object, and to return an error at compile-time when an object does not supply a given method.
I want to know which languages have subclassing but no inheritance, at least not inheritance like Java implements. (Since I don't quite understand the concepts, seeing it in some actual languages would make it clearer.)
This is a distinction without a difference. Clearly he is talking about inheritance of methods only when he uses the word "inheritance". He isn't using the term in the canonical way introduced by Wegner87, which is inextricably entwined with subclassing:
Inheritance: A class may inherit operations from “superclasses” and may have its operations inherited by “subclasses”. An object of the class C created by the operation “C new” has C as its “base class” and may use operations defined in its base class as well as operations defined in superclasses.
CS teachers often have strange notions. This has been one of them.
Most design pattern books say we should "favor object composition over class inheritance."
But can anyone give me an example where inheritance is better than object composition?
Inheritance is appropriate for is-a relationships. It is a poor fit for has-a relationships.
Since most relationships between classes/components fall into the has-a bucket (for example, a Car class is likely not a HashMap, but it may have a HashMap), it then follows that composition is often a better idea than inheritance for modeling relationships between classes.
This is not to say however that inheritance is not useful or not the correct solution for some scenarios.
My simple answer is that you should use inheritance for behavioral purposes. Subclasses should override methods to change the behaviour of the method and the object itself.
This article (an interview with Erich Gamma, one of the GoF) clearly elaborates on why you should favor object composition over class inheritance.
In Java, whenever you inherit from a class, your new class also automatically becomes a subtype of the original class type. Since it is a subtype, it needs to adhere to the Liskov substitution principle.
This principle basically says that you must be able to use the subtype anywhere where the supertype is expected. This severely limits how the behavior of your new inherited class can differ from the original class.
No compiler will be able to make you adhere to this principle, but you can get in trouble if you don't, especially when other programmers are using your classes.
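As a hedged illustration of the kind of substitution problem the compiler cannot catch (the Rectangle/Square pair is the classic textbook example, not something from the question):

class Rectangle {
    protected int width, height;
    void setWidth(int w) { width = w; }
    void setHeight(int h) { height = h; }
    int area() { return width * height; }
}

// Compiles fine, yet silently violates the expectations callers have of Rectangle.
class Square extends Rectangle {
    @Override void setWidth(int w) { width = w; height = w; }
    @Override void setHeight(int h) { width = h; height = h; }
}

class LspDemo {
    static int stretch(Rectangle r) { // written against the supertype's contract
        r.setWidth(5);
        r.setHeight(2);
        return r.area(); // a Rectangle caller expects 10
    }

    public static void main(String[] args) {
        System.out.println(stretch(new Rectangle())); // 10
        System.out.println(stretch(new Square()));    // 4 -- substitution broke the contract
    }
}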
In languages that allow subclassing without subtyping (like the CZ language), the rule "Favor object composition over inheritance" is not as important as in languages like Java or C#.
Inheritance allows an object of the derived type to be used in nearly any circumstance where one would use an object of the base type. Composition does not allow this. Use inheritance when such substitution is required, and composition when it is not.
Just think of it as having an "is-a" or a "has-a" relationship.
In this example, Human "is-a" Animal and may inherit different data from the Animal class. Therefore inheritance is used:
abstract class Animal {
    private String name;

    public String getName() {
        return name;
    }

    abstract int getLegCount();
}
class Dog extends Animal {
    public int getLegCount() {
        return 4;
    }
}

class Human extends Animal {
    public int getLegCount() {
        return 2;
    }
}
Composition makes sense if one object is the owner of another object, like a Human object owning a Dog object. So in the following example a Human object "has-a" Dog object:
class Dog {
    private String name;
}

class Human {
    private Dog pet;
}
hope that helped...
It is a fundamental design principle of good OOD. You can assign a behaviour to a class dynamically at runtime if you use composition in your design rather than inheritance, as in the Strategy pattern. Say:
interface Xable {
    void doSomething();
}

class Aable implements Xable {
    public void doSomething() { /* behave like A */ }
}

class Bable implements Xable {
    public void doSomething() { /* behave like B */ }
}

class Bar {
    Xable ability;

    public void setAbility(Xable a) { ability = a; }

    public void behave() {
        ability.doSomething();
    }
}
/* now we can set our ability dynamically at runtime */
/* somewhere in your code */
Bar bar = new Bar();
bar.setAbility( new Aable() );
bar.behave(); /* behaves like A */
bar.setAbility( new Bable() );
bar.behave(); /* behaves like B */
If you used inheritance, Bar would get its behaviour "statically" through inheritance.
Inheritance is necessary for subtyping. Consider:
class Base {
    void Foo() { /* ... */ }
    void Bar() { /* ... */ }
}

class Composed {
    void Foo() { mBase.Foo(); }
    void Bar() { mBase.Bar(); }

    private Base mBase = new Base();
}
Even though Composed supports all of the methods of Base, it cannot be passed to a function that expects a value of type Base:
void TakeBase(Base b) { /* ... */ }
TakeBase(new Composed()); // ERROR
So, if you want polymorphism, you need inheritance (or its cousin interface implementation).
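That said, if the contract of Base is pulled up into an interface, composition regains the substitutability; here is a hedged reworking of the snippet above (BaseContract and the demo class are invented names, and the original method naming is kept):

interface BaseContract {
    void Foo();
    void Bar();
}

class Base implements BaseContract {
    public void Foo() { /* ... */ }
    public void Bar() { /* ... */ }
}

// Composition plus interface implementation: Composed simply forwards to the wrapped Base.
class Composed implements BaseContract {
    private final Base mBase = new Base();
    public void Foo() { mBase.Foo(); }
    public void Bar() { mBase.Bar(); }
}

class SubtypeDemo {
    static void TakeContract(BaseContract b) { b.Foo(); }

    public static void main(String[] args) {
        TakeContract(new Base());     // OK
        TakeContract(new Composed()); // also OK -- polymorphism without class inheritance
    }
}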
This is a great question, one I've been asking for years, at conferences, in videos, in blog posts. I've heard all kinds of answers. The only good answer I've heard is performance:
Performance differences in languages. Sometimes, classes take advantage of built-in engine optimizations that dynamic compositions don't. Most of the time, this is a much smaller concern than the problems associated with class inheritance, and usually, you can inline everything you need for that performance optimization into a single class and wrap a factory function around it and get the benefits you need without a problematic class hierarchy.
You should never worry about this unless you detect a problem. Then you should profile and test differences in perf to make informed tradeoffs as needed. Often, there are other performance optimizations available that don't involve class inheritance, including tricks like inlining, method delegation, memoizing pure functions, etc... Perf will vary depending on the specific application and language engine. Profiling is essential, here.
Additionally, I've heard lots of common misconceptions. The most common is confusion about type systems:
Conflating types with classes (there are a couple of existing answers here that concentrate on that already). Compositions can satisfy polymorphism requirements by implementing interfaces. Classes and types are orthogonal, though in most class-supporting languages, subclasses automatically implement the superclass interface, so it can seem convenient.
There are three very good reasons to avoid class inheritance, and they crop up again and again:
The gorilla/banana problem
"I think the lack of reusability comes in object-oriented languages, not functional languages. Because the problem with object-oriented languages is they’ve got all this implicit environment that they carry around with them. You wanted a banana but what you got was a gorilla holding the banana and the entire jungle." ~ Joe Armstrong, quoted in "Coders at Work" by Peter Seibel.
This problem basically refers to the lack of selective code reuse in class inheritance. Composition lets you select just the pieces you need by approaching software design from a "small, reusable parts" approach rather than building monolithic designs that encapsulate everything related to some given functionality.
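A small, hedged Java sketch of that "small, reusable parts" idea (all names invented): the client composes only the one capability it needs, and no jungle comes along for the ride.

// One small, focused capability -- the "banana".
interface Peeler {
    String peel(String fruit);
}

class SimplePeeler implements Peeler {
    public String peel(String fruit) { return "peeled " + fruit; }
}

// The client composes exactly the piece it needs.
class Smoothie {
    private final Peeler peeler;

    Smoothie(Peeler peeler) { this.peeler = peeler; }

    String blend(String fruit) { return "blended " + peeler.peel(fruit); }
}

class CompositionDemo {
    public static void main(String[] args) {
        System.out.println(new Smoothie(new SimplePeeler()).blend("banana"));
        // prints: blended peeled banana
    }
}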
The fragile base class problem
Class inheritance is the tightest coupling available in object-oriented design, because the base class becomes part of the implementation of the child classes. This is why you'll also hear the advice from the Gang of Four's "Design Patterns" classic: "Program to an interface, not an implementation."
The problem with implementation inheritance is that even the smallest change to the inner details of that implementation could potentially break child classes. If the interface is public, exposed to user-land in any way, it could break code you are not even aware of.
This is the reason that class hierarchies become brittle -- hard to change as you grow them with new use-cases.
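A well-known Java illustration of that fragility (adapted from the classic counting-set example; the class names here are hypothetical): the subclass's correctness hinges on whether the inherited addAll happens to call add internally, an implementation detail the base class is free to change.

import java.util.Collection;
import java.util.HashSet;
import java.util.List;

// Tries to count insertions by overriding what look like two independent methods.
class CountingHashSet<E> extends HashSet<E> {
    private int addCount = 0;

    @Override public boolean add(E e) {
        addCount++;
        return super.add(e);
    }

    @Override public boolean addAll(Collection<? extends E> c) {
        addCount += c.size();
        return super.addAll(c); // the inherited addAll calls add(), so every element is counted twice
    }

    int getAddCount() { return addCount; }
}

class FragileDemo {
    public static void main(String[] args) {
        CountingHashSet<String> s = new CountingHashSet<>();
        s.addAll(List.of("a", "b", "c"));
        System.out.println(s.getAddCount()); // 6, not 3 -- correctness depends on a base-class detail
    }
}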
The common refrain is that we should be constantly refactoring our code (see Martin Fowler et al on extreme programming, agile, etc...). The key to refactor success is that you can't break things -- but as we've just seen, it's difficult to refactor a class hierarchy without breaking things.
The reason is that it's impossible to create the correct class hierarchy without knowing everything you need to know about the use-cases, but you can't know that in evolving software. Use cases get added or changed in projects all the time.
There is also a discovery process in programming, where you discover the right design as you implement the code and learn more about what works and what doesn't. But with class inheritance, once you get a class taxonomy going, you've painted yourself into a corner.
You need to know the information before you start the implementation, but part of learning the information you need involves building the implementation. It's a catch-22.
The duplication by necessity problem. This is where the death spiral really gets going. Sometimes, you really just want a banana, not the gorilla holding the banana, and the entire jungle. So you copy and paste it. Now there's a bug in a banana, so you fix it. Later, you get the same bug report and close it. "I already fixed that". And then you get the same bug report again. And again. Uh-oh. It's not fixed. You forgot the other banana! Google "copy pasta".
Other times, you really need to work a new use-case into your software, but you can't change the original base class, so instead, you copy and paste the entire class hierarchy into a new one and rename all the classes you need in the hierarchy to force that new use-case into the code base. 6 months later a new dev is looking at the code and wondering which class hierarchy to inherit from and nobody can provide a good answer.
Duplication by necessity leads to copy pasta messes, and pretty soon people start throwing around the word "rewrite" like it's no big deal. The problem with that is that most rewrite projects fail. I can name several orgs off the top of my head that are currently maintaining two development teams instead of one while they work on a rewrite project. I've seen such orgs cut funding to one or the other, and I've seen projects like that chew through so much cash that a startup or small business runs out of money and shuts down.
Developers underestimate the impact of class inheritance all the time. It's an important choice, and you need to be aware of the trade offs you opt into every time you create or inherit from a base class.
How do you name different classes / interfaces you create?
Sometimes I don't have implementation information to add to the implementation name - like interface FileHandler and class SqlFileHandler.
When this happens I usually give the interface the "normal" name, like Truck, and name the actual class TruckClass.
How do you name interfaces and classes in this regard?
Name your Interface what it is. Truck. Not ITruck because it isn't an ITruck it is a Truck.
An Interface in Java is a Type. Then you have DumpTruck, TransferTruck, WreckerTruck, CementTruck, etc. that implement Truck.
When you are using the Interface in place of a sub-class you just cast it to Truck, as in List<Truck>. Putting I in front is just Hungarian-style notation tautology that adds nothing but more stuff to type in your code.
All modern Java IDEs mark Interfaces and Implementations and whatnot without this silly notation. Don't call it TruckClass; that is a tautology just as bad as the IInterface tautology.
If it is an implementation it is a class. The only real exception to this rule, and there are always exceptions, could be something like AbstractTruck. Since only the sub-classes will ever see this and you should never cast to an Abstract class it does add some information that the class is abstract and to how it should be used. You could still come up with a better name than AbstractTruck and use BaseTruck or DefaultTruck instead since the abstract is in the definition. But since Abstract classes should never be part of any public facing interface I believe it is an acceptable exception to the rule. Making the constructors protected goes a long way to crossing this divide.
And the Impl suffix is just more noise as well. More tautology. Anything that isn't an interface is an implementation, even abstract classes which are partial implementations. Are you going to put that silly Impl suffix on every name of every Class?
The Interface is a contract on what the public methods and properties have to support, it is also Type information as well. Everything that implements Truck is a Type of Truck.
Look to the Java standard library itself. Do you see IList, ArrayListImpl, LinkedListImpl? No, you see List, ArrayList, and LinkedList. Here is a nice article about this exact question. Any of these silly prefix/suffix naming conventions violate the DRY principle as well.
Also, if you find yourself adding DTO, JDO, BEAN or other silly repetitive suffixes to objects then they probably belong in a package instead of all those suffixes. Properly packaged namespaces are self documenting and reduce all the useless redundant information in these really poorly conceived proprietary naming schemes that most places don't even internally adhere to in a consistent manner.
If all you can come up with to make your Class name unique is suffixing it with Impl, then you need to rethink having an Interface at all. So when you have a situation where you have an Interface and a single Implementation that is not uniquely specialized from the Interface you probably don't need the Interface in most cases.
However, in general for maintainability, testability, mocking, it's best practice to provide interfaces. See this answer for more details.
Also refer to this interesting article by Martin Fowler on this topic: InterfaceImplementationPair.
I've seen answers here that suggest that if you only have one implementation then you don't need an interface. This flies in the face of the Dependency Injection/Inversion of Control principle (don't call us, we'll call you!).
So yes, there are situations in which you wish to simplify your code and make it easily testable by relying on injected interface implementations (which may also be proxied - your code doesn't know!). Even if you only have two implementations - one a Mock for testing, and one that gets injected into the actual production code - this doesn't make having an interface superfluous. A well documented interface establishes a contract, which can also be maintained by a strict mock implementation for testing.
In fact, you can establish tests that have mocks implement the most strict interface contract (throwing exceptions for arguments that shouldn't be null, etc.) and catch errors in testing, while using a more efficient implementation in production code (not checking arguments that should not be null for being null, since the mock threw exceptions in your tests and you know the arguments aren't null after fixing the code in response to those tests, for example).
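As a hedged sketch of that testing strategy (all names are hypothetical): the mock enforces the contract aggressively, while the production implementation skips the checks the tests have already proven unnecessary.

interface MessageStore {
    void save(String message); // contract: message must not be null
}

// Strict test double: fails fast on any contract violation so tests catch bad callers.
class StrictMockMessageStore implements MessageStore {
    public void save(String message) {
        if (message == null) {
            throw new IllegalArgumentException("message must not be null");
        }
        // ...record the call for later verification...
    }
}

// Production implementation: trusts the callers that the tests have already vetted.
class FastMessageStore implements MessageStore {
    public void save(String message) {
        // persist the message as cheaply as possible; no argument validation here
    }
}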
Dependency Injection/IOC can be hard to grasp for a newcomer, but once you understand its potential you'll want to use it all over the place and you'll find yourself making interfaces all the time - even if there will only be one (actual production) implementation.
For this one implementation (you can infer, and you'd be correct, that I believe the mocks for testing should be called Mock(InterfaceName)), I prefer the name Default(InterfaceName). If a more specific implementation comes along, it can be named appropriately. This also avoids the Impl suffix that I particularly dislike (if it's not an abstract class, OF COURSE it is an "impl"!).
I also prefer "Base(InterfaceName)" as opposed to "Abstract(InterfaceName)" because there are some situations in which you want your base class to become instantiable later, but now you're stuck with the name "Abstract(InterfaceName)", and this forces you to rename the class, possibly causing a little minor confusion - but if it was always Base(InterfaceName), removing the abstract modifier doesn't change what the class was.
The name of the interface should describe the abstract concept the interface represents. Any implementation class should have some sort of specific traits that can be used to give it a more specific name.
If there is only one implementation class and you can't think of anything that makes it specific (implied by wanting to name it -Impl), then it looks like there is no justification to have an interface at all.
I tend to follow the pseudo-conventions established by Java Core/Sun, e.g. in the Collections classes:
List - interface for the "conceptual" object
ArrayList - concrete implementation of interface
LinkedList - concrete implementation of interface
AbstractList - abstract "partial" implementation to assist custom implementations
I used to do the same thing modeling my event classes after the AWT Event/Listener/Adapter paradigm.
The standard C# convention, which works well enough in Java too, is to prefix all interfaces with an I - so your file handler interface will be IFileHandler and your truck interface will be ITruck. It's consistent, and makes it easy to tell interfaces from classes.
I like interface names that indicate what contract an interface describes, such as "Comparable" or "Serializable". Nouns like "Truck" don't really describe truck-ness -- what are the Abilities of a truck?
Regarding conventions: I have worked on projects where every interface starts with an "I"; while this is somewhat alien to Java conventions, it makes finding interfaces very easy. Apart from that, the "Impl" suffix is a reasonable default name.
Some people don't like this, and it's more of a .NET convention than Java, but you can name your interfaces with a capital I prefix, for example:
IProductRepository - interface
ProductRepository, SqlProductRepository, etc. - implementations
The people opposed to this naming convention might argue that you shouldn't care whether you're working with an interface or an object in your code, but I find it easier to read and understand on-the-fly.
I wouldn't name the implementation class with a "Class" suffix. That may lead to confusion, because you can actually work with "class" (i.e. Type) objects in your code, but in your case, you're not working with the class object, you're just working with a plain-old object.
I use both conventions:
If the interface is a specific instance of a well-known pattern (e.g. Service, DAO), then it may not need an "I" (e.g. UserService, AuditService, UserDao all work fine without the "I"), because the post-fix determines the meta pattern.
But if you have something one-off or two-off (usually for a callback pattern), then it helps to distinguish it from a class (e.g. IAsynchCallbackHandler, IUpdateListener, IComputeDrone). These are special-purpose interfaces designed for internal use; occasionally the I prefix calls attention to the fact that an operand is actually an interface, so at first glance it is immediately clear.
In other cases you can use the I to avoid colliding with other commonly known concrete classes (ISubject, IPrincipal vs Subject or Principal).
TruckClass sounds as if it were the class of Truck; I think the recommended solution is to add the Impl suffix. In my opinion the best solution is for the implementation name to carry some information about what's going on in that particular implementation (as with the List interface and its implementations ArrayList or LinkedList), but sometimes you have just one implementation and need an interface due to remote usage (for example); then (as mentioned at the beginning) Impl is the solution.
In Java, is there ever a case for allowing a non-abstract class to be extended?
It always seems to indicate bad code when there are class hierarchies. Do you agree, and why/why not?
There are certainly times when it makes sense to have non-final concrete classes. However, I agree with Kent - I believe that classes should be final (sealed in C#) by default, and that Java methods should be final by default (as they are in C#).
As Kent says, inheritance requires careful design and documentation - it's very easy to think you can just override a single method, but not know the situations in which that method may be called from the base class as part of the rest of the implementation.
See "How do you design a class for inheritance" for more discussion on this.
I agree with Jon and Kent but, like Scott Meyers (in Effective C++), I go much further. I believe that every class should be either abstract or final. That is, only leaf classes in any hierarchy are really apt for direct instantiation. All other classes (i.e. inner nodes in the inheritance tree) are "unfinished" and should consequently be abstract.
It simply makes no sense for usual classes to be further extended. If an aspect of the class is worth extending and/or modifying, the cleaner way would be to take that one class and separate it into one abstract base class and one concrete interchangeable implementation.
There are good reasons to keep your code non-final. Many frameworks such as Hibernate, Spring, and Guice sometimes depend on non-final classes that they extend dynamically at runtime.
For example, Hibernate uses proxies for lazy association fetching.
Especially when it comes to AOP, you will want your classes non-final, so that the interceptors can attach to them.
See also this question at SO.
This question is equally applicable to other platforms such as C# .NET. There are those (myself included) that believe types should be final/sealed by default and need to be explicitly unsealed to allow inheritance.
Extension via inheritance is something that needs careful design and is not as simple as just leaving a type unsealed. Therefore, I think it should be an explicit decision to allow inheritance.
Your best reference here is Item 15 of Joshua Bloch's excellent book "Effective Java", called "Design and document for inheritance or else prohibit it". However the key to whether extension of a class should be allowed is not "is it abstract" but "was it designed with inheritance in mind". There is sometimes a correlation between the two, but it's the second that is important. To take a simple example most of the AWT classes are designed to be extended, even those that are not abstract.
The summary of Bloch's chapter is that the interaction of inherited classes with their parents can be surprising and unpredictable if the ancestor wasn't designed to be inherited from. Classes should therefore come in two kinds: a) classes designed to be extended, with enough documentation to describe how it should be done, and b) classes marked final. Classes in (a) will often be abstract, but not always.
I disagree. If hierarchies were bad, there'd be no reason for object oriented languages to exist. If you look at UI widget libraries from Microsoft and Sun, you're certain to find inheritance. Is that all "bad code" by definition? No, of course not.
Inheritance can be abused, but so can any language feature. The trick is to learn how to do things appropriately.
In some cases you want to make sure there's no subclassing; in other cases you want to ensure subclassing (abstract). But there's always a large subset of classes where you, as the original author, don't care and shouldn't care. It's part of being open/closed. Deciding that something should be closed should also be done for a reason.
I couldn't disagree more. Class hierarchies make sense for concrete classes when the concrete classes know the possible return types of methods that they have not marked final. For instance, a concrete class may have a subclass hook:
protected SomeType doSomething() {
    return null;
}
This doSomething is guaranteed to return either null or a SomeType instance. Say that you have the ability to process the SomeType instance but don't have a use case for it in the current class, yet you know that this functionality would be really good to have in subclasses, and most everything is concrete. It makes no sense to make the current class abstract if it can be used directly with the default of doing nothing with its null value. If you made it an abstract class, then you would have its children in this type of hierarchy:
Abstract base class
Default class (the class that could have been non-abstract, only implements the protected method and nothing else)
Other subclasses.
You thus have an abstract base class that can't be used directly, when the default class may be the most common case. In the other hierarchy, there is one less class, so that the functionality can be used without making an essentially useless default class because abstraction just had to be forced onto the class.
Default class
Other subclasses.
Now, sure, hierarchies can be used and abused, and if things are not documented clearly or classes not well designed, subclasses can run into problems. But these same problems exist with abstract classes as well, you don't get rid of the problem just because you add "abstract" to your class. For instance, if the contract of the "doSomething()" method above required SomeType to have populated x, y and z fields when they were accessed via getters and setters, your subclass would blow up regardless if you used the concrete class that returned null as your base class or an abstract class.
The general rule of thumb for designing a class hierarchy is pretty much a simple questionnaire:
Do I need the behavior of my proposed superclass in my subclass? (Y/N)
This is the first question you need to ask yourself. If you don't need the behavior, there's no argument for subclassing.
Do I need the state of my proposed superclass in my subclass? (Y/N)
This is the second question. If the state fits the model of what you need, this may be a candidate for subclassing.
If the subclass was created from the proposed superclass, would it truly be an IS-A relation, or is it just a shortcut to inherit behavior and state?
This is the final question. If it is just a shortcut and you cannot qualify your proposed subclass "as-a" superclass, then inheritance should be avoided. The state and logic can be copied and pasted into the new class with a different root, or delegation can be used.
Only if a class needs the behavior, needs the state, and can be considered an IS-A(n) instance of the superclass should it be considered to inherit from a superclass. Otherwise, other options exist that would be better suited to the purpose; although they may require a little more work up front, they are cleaner in the long run.
There are a few cases where we don't want to allow the behavior to be changed, for instance the String class or Math.
I don't like inheritance because there's always a better way to do the same thing, but when you're making maintenance changes in a huge system, sometimes the best way to fix the code with minimal changes is to extend a class a little. Yes, it usually leads to bad code, but to working code, and without months of rewriting first. So giving a maintenance programmer as much flexibility as he can handle is a good way to go.