Which languages "have subclassing but no inheritance"? - java

From the PDF of a Java course: http://www.ccs.neu.edu/home/riccardo/courses/csu370-fa07/lect4.pdf
It says:
For those of you that follow at home, let me emphasize that subclassing is not inheritance. We will see inheritance later in the course.
Of course, subclassing and inheritance are related. As we will see inheritance is a code reuse mechanism that lets you reuse code easily when defining subclasses. But subclassing makes sense even when you do not have inheritance.
(Indeed, some languages have subclassing but no inheritance, at least, not inheritance like Java implements.)
Subclassing is a property of classes, and is properly part of the type system of Java. Subclassing is used by Java to determine what methods can possibly be invoked on an object, and to return an error at compile-time when an object does not supply a given method.
I want to know: which languages have subclassing but no inheritance, or at least not inheritance as Java implements it? (I don't quite understand the concepts yet, so seeing this in some actual languages would make it clearer.)

This is a distinction without a difference. Clearly, when he uses the word "inheritance", he is talking only about inheritance of methods. He isn't using the term in the canonical way introduced by Wegner (1987), which is inextricably entwined with subclassing:
Inheritance: A class may inherit operations from “superclasses” and may have its operations inherited by “subclasses”. An object of the class C created by the operation “C new” has C as its “base class” and may use operations defined in its base class as well as operations defined in superclasses.
CS teachers often have strange notions. This has been one of them.
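For what it's worth, Java itself can illustrate the type-system half of the distinction: implementing an interface makes a class a subtype without inheriting a single method body. A minimal sketch, with hypothetical names:

public class SubtypingDemo {
    // Shape is a pure type: it says what can be invoked, but supplies no code to reuse.
    interface Shape {
        double area();
    }

    // Circle is a subtype of Shape ("subclassing" in the type-system sense),
    // yet every method body below is Circle's own; nothing is inherited from Shape.
    static class Circle implements Shape {
        private final double radius;

        Circle(double radius) {
            this.radius = radius;
        }

        @Override
        public double area() {
            return Math.PI * radius * radius;
        }
    }

    public static void main(String[] args) {
        Shape s = new Circle(2.0);     // allowed because Circle is a subtype of Shape
        System.out.println(s.area());  // dispatches to Circle's own code
        // s.perimeter();              // compile-time error: Shape declares no such method
    }
}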

Related

Java 17 Sealed Classes Business Use Case

Java 17 has introduced sealed classes, which permit only specific classes to extend them and are otherwise effectively final.
I understand the technical use-case, but can't think of any real life use cases where this would be useful?
When would we want only specific classes to be able to extend a particular class?
In our own projects, if we want a new class to extend the sealed class can't we just add it to the permitted classes? Wouldn't it be better to just not make the class final or sealed in that case to avoid the slight overhead?
On the other hand, while exposing a library for external use how would a sealed class know beforehand which classes it should permit for extension?
sealed classes provide the opposite guarantee to open classes (the default in Java). An open class says to implementors "Hey, you can subclass me and override my non-final methods", but it says to users "I have no idea what subclasses look like, so you can only use my own methods directly". On the flipside, sealed classes are very restrictive to implementors "You cannot subclass me, and you can only use me in these prescribed ways", but very powerful to users: "I know in advance all of my subclasses, so you know that if you have an instance of me, then it must be one of X, Y, or Z". Consequently, adding a subclass to a sealed class is a breaking change.
It may be helpful to think of sealed classes less as "restricted classes" and more as "enums with superpowers". An enum in Java is a finite set of data values, all constructed in advance. A sealed class is a finite set of classes that you set forth, but those classes may have an infinite number of possible instances.
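As a rough illustration of the Java 17 syntax (hypothetical type names; shown in one compilation unit for brevity):

// The sealed interface fixes the complete set of direct subtypes up front.
sealed interface Payment permits CardPayment, CashPayment { }

// Each permitted subtype must itself be final, sealed, or non-sealed.
// Records are implicitly final, so they qualify.
record CardPayment(String cardNumber, long amountCents) implements Payment { }

record CashPayment(long amountCents) implements Payment { }

// Adding a new subtype later means editing the permits clause, which is why the
// answer above calls adding a subclass to a sealed class a breaking change.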
Here's a real-world example that I wrote myself recently. This was in Kotlin, which also has sealed classes, but it's the same idea. I was writing a wrapper for some Minecraft code and I needed a class that could uniformly represent all of the ways you can die in Minecraft. Long story short, I ended up partitioning the death reasons into "killed by another living thing" and "all other death reasons". So I wrote a sealed interface CauseOfDeath. Its two implementors were VanillaDeath (which took a "cause of damage" as its constructor argument, basically an enum listing all of the possible causes) and VanillaMobDeath (which took the specific entity that killed you as its constructor argument).
Since this was clearly exhaustive, I made it sealed. If Minecraft later adds more death reasons, they will fit into one of the two categories ("death by entity" or "death by other causes"), so it makes no sense for me or anyone else to ever subclass this interface again.
Further, since I'm providing very strong guarantees about the type of death reason, it's reasonable for users to discriminate based on type. Downcasting in Java has always been a bit of a code smell, on the basis that it can't possibly know every possible subclass of a class. The logic is "okay, you've handled cases X and Y, but what if someone comes along and writes class Z that you've never heard of". But that can't happen here. The class is sealed, so it's perfectly reasonable for someone to write a sort of pseudo-visitor that does one thing for "death by entity" and another for "death by other", since Java (or Kotlin, in my case) can be fully confident that there are not, and never will be, any other possibilities.
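Roughly what that design might look like translated into Java 17 (the original was Kotlin; the type names below are paraphrased from the description above, not taken from any real Minecraft API):

// Stand-ins for the types mentioned above (assumed for this sketch).
enum DamageCause { FALL, DROWNING, FIRE, OTHER }
record Entity(String name) { }

// The sealed interface: exactly two kinds of death reason, now and forever.
sealed interface CauseOfDeath permits VanillaDeath, VanillaMobDeath { }

// "All other death reasons", tagged with the cause of damage.
record VanillaDeath(DamageCause cause) implements CauseOfDeath { }

// "Killed by another living thing", carrying the entity responsible.
record VanillaMobDeath(Entity killer) implements CauseOfDeath { }

class DeathReport {
    // Because CauseOfDeath is sealed, these two branches really are exhaustive.
    static String describe(CauseOfDeath death) {
        if (death instanceof VanillaMobDeath byMob) {
            return "killed by " + byMob.killer().name();
        } else if (death instanceof VanillaDeath other) {
            return "died of " + other.cause();
        }
        throw new IllegalStateException("unreachable: CauseOfDeath is sealed");
    }
}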
This makes more sense as well if you've used algebraic data types in Haskell or OCaml. The sealed keyword originated in Scala as a way to encode ADTs, and they're exactly what I just described: a type defined as the (tagged) union of a finite number of possible collections of data. And in Haskell and OCaml, it's entirely normal to discriminate on ADTs as well using match (or case) expressions.

Inheritance vs Composition: Does composition effectively solve dependency issues? [Effective Java]

I'm semi-familiar with Java and came across something in Effective Java (2017) that didn't make much sense to me.
Below is a piece from the book (Item 18).
Unlike method invocation, inheritance violates encapsulation. In other words, a subclass depends on the implementation details of its superclass for its proper function. The superclass's implementation may change from release to release, and if it does, the subclass may break, even though its code has not been touched. As a consequence, a subclass must evolve in tandem with its superclass, unless the superclass's authors have designed and documented it specifically for the purpose of being extended.
I understand composition may be favored over inheritance in some cases, and I understood the other parts of Item 18. However, I find it hard to understand how the composition approach prevents the problem mentioned in the paragraph above (dependency on implementation details), since the author speaks as though composition is better than inheritance because of this. Later in the chapter, Bloch gives an example of a custom Set implementation where he uses a forwarding class (which is obviously dependent on the details of the Set interface). One could argue the Set interface doesn't change as often, but in practice changes in the interface could just as well cause the wrapper class to break (note the book gives the example via a forwarding class and a wrapper class).
I guess it makes sense if Bloch meant composition is relatively safer than inheritance because class implementations change more often than interfaces. However, I think there is still a dependency issue between the wrapper class and the interface, and I am confused about why the author didn't mention this more clearly.
Am I mistaken in thinking like this?
(In addition, I'm not sure what encapsulation has to do with this. My understanding of encapsulation is hiding variables using private..)
You should actually provide more on the examples i.e. "Later in the chapter, Bloch gives an example of a custom Set implementation"
Basically, with inheritance, the child class is affected by changes to the parent class. See the code below:
public class Baby extends Human {
}
In the code above, if Human adds another public or protected method, Baby is forced to inherit it automatically. That is quite stringent.
But with composition, a change in the owning object does not really require a change in the contained object, and vice versa, up to a certain degree.
public class Human {
    private Baby baby;
}
In the code above, Human can have any implementation without impacting Baby, and vice versa. There is more leeway in designing what Baby and Human can do; they can have entirely different properties and methods.
Ok, so I looked up what #LeiYang recommended and came to realize that my question wasn't valid. The given paragraph states that "a subclass depends on the implementation details of its superclass for its proper function" - which object composition has no problem with, as it merely uses the provided methods as-is (without overriding). Therefore object composition doesn't violate encapsulation and is relatively stable compared to inheritance.
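For reference, the forwarding/wrapper pattern from Item 18 looks roughly like this (a condensed sketch, not the book's exact code). The point is that the wrapper depends only on the Set interface, never on the internals of whichever Set implementation it wraps:

import java.util.Collection;
import java.util.Iterator;
import java.util.Set;

// Reusable forwarding class: every method just delegates to the wrapped Set.
class ForwardingSet<E> implements Set<E> {
    private final Set<E> s;   // the composed instance

    public ForwardingSet(Set<E> s) { this.s = s; }

    public boolean add(E e)                          { return s.add(e); }
    public boolean addAll(Collection<? extends E> c) { return s.addAll(c); }
    public boolean remove(Object o)                  { return s.remove(o); }
    public boolean removeAll(Collection<?> c)        { return s.removeAll(c); }
    public boolean retainAll(Collection<?> c)        { return s.retainAll(c); }
    public boolean contains(Object o)                { return s.contains(o); }
    public boolean containsAll(Collection<?> c)      { return s.containsAll(c); }
    public int size()                                { return s.size(); }
    public boolean isEmpty()                         { return s.isEmpty(); }
    public Iterator<E> iterator()                    { return s.iterator(); }
    public Object[] toArray()                        { return s.toArray(); }
    public <T> T[] toArray(T[] a)                    { return s.toArray(a); }
    public void clear()                              { s.clear(); }
}

// The wrapper adds behaviour purely through the interface, so internal changes in
// HashSet, TreeSet, etc. cannot break it the way they can break a subclass.
class InstrumentedSet<E> extends ForwardingSet<E> {
    private int addCount = 0;

    public InstrumentedSet(Set<E> s) { super(s); }

    @Override public boolean add(E e) {
        addCount++;
        return super.add(e);
    }

    @Override public boolean addAll(Collection<? extends E> c) {
        addCount += c.size();
        return super.addAll(c);
    }

    public int getAddCount() { return addCount; }
}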

Subclasses in clojure

I'm learning Clojure and I was wondering how to deal with OO-like subclasses in Clojure. For example: a master abstract class, two abstract subclasses (each one redefining some functions), and at the third level, final subclasses that create the "objects" that will be used by the functions. I have no clue how to do this. I did manage to do it for one abstract class and one child class, with defprotocol and defrecord, but I can't implement one protocol inside another. Thanks.
You don't need classes or subclasses. Represent your data as maps with attributes. The "subclasses" might have more attributes.
If you have a function that varies on attribute, then either use conditional logic based on attribute (if, cond, etc) or use polymorphism based on multimethods or protocols if you really need to.
In the words of the Matrix, there is no spoon.
You can do inheritance with protocols like this:
(extend <subtype>
  <protocol>
  (merge (get-in <protocol> [:impls <basetype>])
         <map-of-redefined-methods>))
Multimethods provide direct support for inheritance with derive.
Actual Java subclass relationships can be specified with the :extends keyword to gen-class. This is meant exclusively for Java interop, though.
Generally, it is worth checking whether you really need inheritance. It is usually not the preferred method of modeling in Clojure.

Java - what is a prototype?

In a lecture on Java, a computer science professor states that Java interfaces of a class are prototypes for public methods, plus descriptions of their behaviors.
(Source https://www.youtube.com/watch?v=-c4I3gFYe3w #8:47)
And at 8:13 in the video he says go to discussion section with teaching assistants to learn what he means by prototype.
What does "prototype" mean in Java in the above context?
I think the use of the word prototype in this context is unfortunate; some languages like JavaScript use something called prototypical inheritance, which is totally different from what is being discussed in the lecture. I think the word 'contract' would be more appropriate. A Java interface is a language feature that allows the author of a class to declare that any concrete implementations of that class will provide implementations of all methods declared in the interfaces they implement.
It is used to allow Java classes to form several is-a relationships without resorting to multiple inheritance (not allowed in Java). You could have a Car class that inherits from a Vehicle class but implements a Product interface; therefore a Car is both a Vehicle and a Product.
What does "prototype" mean in Java in the above context?
The word "prototype" is not standard Java terminology. It is not used in the JLS, and it is not mentioned in the Java Tutorial Glossary. In short there is no Java specific meaning.
Your lecturer is using this word in a broader sense rather than a Java-specific sense. In fact, his usage matches "function prototype" as described in this Wikipedia page.
Unfortunately, "IT English" is full of examples where a word or phrase means different (and sometimes contradictory) things in different contexts. There are other meanings for "prototype" that you will come across in IT. For instance:
In C and C++, a function "prototype" is a declaration that gives the function's name, return type and parameters, but not its body.
In JavaScript, every object has a "prototype" from which it inherits properties and methods.
More generally, prototype-based typing is an alternative (more dynamic) way of doing OO typing.
But the fact that these meanings exist does not mean that your lecturer was wrong to refer to interface method signatures as "prototypes".
"prototype" is not the the best/right terminus to be used. interfaces are more like "contracts", that implementing classes have to fulfill.
The method's heads/definitions will have to be implemented in the implementing class (using implements keyword in the class head/class definition/public class xy implements ...).
I guess this naming conventions leave much room for many ideological debates.
Or the author had some sort of a mental lapsus and mapped the construct of prototypical inheritance from javascript into java in his mind somehow.
Interfaces are not prototypes for classes in Java.
In languages like C and C++, which compile directly to machine code, the compiler has to know the nature of any identifier (variable/class/function) before it is referenced anywhere in the program. That means those languages need to know the nature of the identifier in order to generate the machine code related to it.
In simple words, a C++ compiler has to be aware of the methods and members of a class before that class is used anywhere in the code. To accomplish that, you must define the class before the line of code where it is used, or you must at least declare its nature. Declaring only the nature of a function or a class creates a 'prototype'.
In Java, an 'interface' is something like a description of a class. It defines which methods a particular kind of class must have. You can then create classes that implement those interfaces. The main purpose interfaces serve in Java is that a variable declared with a particular interface type can hold objects of any class that implements that interface.
He explains it the C/C++ way. Let me elaborate: in C++ you can put prototypes for methods in a class's header file so that other classes can recognize those methods; likewise in C, where there is no class concept, you can put prototypes at the beginning of a file and implement them somewhere later in the same file, so that the functions can be used before their implementation appears. Java interfaces work in pretty much the same way: you define prototypes for methods (method headers) that will be implemented by the classes that implement the interface.
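To make that analogy concrete, here is a minimal sketch with hypothetical names; the interface carries only the method headers, and the implementing class supplies the bodies:

// Plays the role of the C/C++ prototypes: names and signatures only, no bodies.
interface Greeter {
    void politeHello(String msg);
    void bigThankYou();
}

// The "definitions" live in whatever class implements the interface.
class ConsoleGreeter implements Greeter {
    @Override
    public void politeHello(String msg) {
        System.out.println("Hello, " + msg);
    }

    @Override
    public void bigThankYou() {
        System.out.println("Thank you very much!");
    }
}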
In a lecture on Java, a computer science professor states that:
Java interfaces of a class:
1. are prototypes for public methods,
2. plus descriptions of their behaviors.
For 1: this is fine - yes, they are prototypes for the public methods a class implements.
For 2: this part could be a little bit tricky. :)
Why?
We know an interface declaration contains prototypes, but it doesn't define (describe) the methods' behavior.
The professor states "... plus descriptions of their behaviors." This is correct only if we look inside a class that implements the interface (interface implementation = prototype definitions or descriptions).
Yes, a little bit tricky to understand :)
Bibliography:
Definition vs Description
Context-dependent
Name visibility - C++ Tutorials
ExtraWork:
Note: not tested, just thinking! :)
C++:
// C++ namespace containing just prototypes:
// could it be used like a Java-style interface?
// hm, could we then define (describe) the prototypes elsewhere?
// could we then inherit the namespace? :)
namespace anIntf {
    void politeHello(char *msg);
    void bigThankYou();
}
Prototypes provide the signatures of the functions you will use within your code. They are somewhat optional: if you can order your code such that you only use functions that have previously been defined, then you can get away without declaring them.
Below is a prototype for a function that sums two integers.
int add(int a, int b);
I found this question because I have the same impression as that teacher.
In early C (and C++, I think), a function - call it a - cannot be called, for example inside main, before its declaration (something to do with lexical or syntactic analysis, whatever), because the compiler doesn't know about it yet.
The way to solve this was either to define the function before its usage (before main in the example), or to create a prototype for it (before main in the example) which specifies just the name, return type and parameters, but not the code of the function itself, leaving the body to be placed wherever is convenient, even after the point where it is called.
These prototypes are basically the contents of the include (.h) files.
So I think this is a way to understand interfaces, or what in Java they call "a contract": something that states the "header" but not the real body - in this case, of a class or its methods.

Why avoid the final keyword?

In Java, is there ever a case for allowing a non-abstract class to be extended?
It always seems to indicate bad code when there are class hierarchies. Do you agree, and why/ why not?
There are certainly times when it makes sense to have non-final concrete classes. However, I agree with Kent - I believe that classes should be final (sealed in C#) by default, and that Java methods should be final by default (as they are in C#).
As Kent says, inheritance requires careful design and documentation - it's very easy to think you can just override a single method, but not know the situations in which that method may be called from the base class as part of the rest of the implementation.
See "How do you design a class for inheritance" for more discussion on this.
I agree with Jon and Kent but, like Scott Meyers (in Effective C++), I go much further. I believe that every class should be either abstract, or final. That is, only leaf classes in any hierarchy are really apt for direct instantiation. All other classes (i.e. inner nodes in the inheritance tree) are “unfinished” and should consequently be abstract.
It simply makes no sense for usual classes to be further extended. If an aspect of the class is worth extending and/or modifying, the cleaner way would be to take that one class and separate it into one abstract base class and one concrete interchangeable implementation.
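A rough sketch of that split, with hypothetical names:

// "Unfinished" inner node: only abstract classes sit above the leaves.
abstract class Repository {
    public abstract void save(String record);
}

// Leaf: a final, directly instantiable implementation.
final class FileRepository extends Repository {
    @Override
    public void save(String record) {
        // write the record to a file (omitted)
    }
}

// To change behaviour, you swap in another final leaf rather than
// subclassing a concrete class.
final class InMemoryRepository extends Repository {
    @Override
    public void save(String record) {
        // keep the record in memory (omitted)
    }
}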
There are good reasons to keep your code non-final. Many frameworks, such as Hibernate, Spring and Guice, sometimes depend on non-final classes that they extend dynamically at runtime.
For example, Hibernate uses proxies for lazy association fetching.
Especially when it comes to AOP, you will want your classes to be non-final, so that interceptors can attach to them.
see also the question at SO
This question is equally applicable to other platforms such as C# .NET. There are those (myself included) that believe types should be final/sealed by default and need to be explicitly unsealed to allow inheritance.
Extension via inheritance is something that needs careful design and is not as simple as just leaving a type unsealed. Therefore, I think it should be an explicit decision to allow inheritance.
Your best reference here is Item 15 of Joshua Bloch's excellent book "Effective Java", called "Design and document for inheritance or else prohibit it". However the key to whether extension of a class should be allowed is not "is it abstract" but "was it designed with inheritance in mind". There is sometimes a correlation between the two, but it's the second that is important. To take a simple example most of the AWT classes are designed to be extended, even those that are not abstract.
The summary of Bloch's item is that the interaction of inherited classes with their parents can be surprising and unpredictable if the ancestor wasn't designed to be inherited from. Classes should therefore come in two kinds: a) classes designed to be extended, with enough documentation to describe how it should be done, and b) classes marked final. Classes in (a) will often be abstract, but not always.
I disagree. If hierarchies were bad, there'd be no reason for object oriented languages to exist. If you look at UI widget libraries from Microsoft and Sun, you're certain to find inheritance. Is that all "bad code" by definition? No, of course not.
Inheritance can be abused, but so can any language feature. The trick is to learn how to do things appropriately.
In some cases you want to make sure there's no subclassing, in other cases you want to ensure subclassing (abstract). But there's always a large subset of classes where you as the original author don't care and shouldn't care. It's part of being open/closed. Deciding that something should be closed is also to be done for a reason.
I couldn't disagree more. Class hierarchies make sense for concrete classes when the concrete classes know the possible return types of methods that they have not marked final. For instance, a concrete class may have a subclass hook:
protected SomeType doSomething() {
    return null;
}
This doSomething is guaranteed to return either null or a SomeType instance. Say you have the ability to process the SomeType instance but don't have a use case for it in the current class, yet you know this functionality would be really good to have in subclasses, and most everything else is concrete. It makes no sense to make the current class an abstract class if it can be used directly with the default of doing nothing with its null value. If you made it an abstract class, then you would have its children in this type of hierarchy:
Abstract base class
Default class (the class that could have been non-abstract, only implements the protected method and nothing else)
Other subclasses.
You thus have an abstract base class that can't be used directly, when the default class may be the most common case. In the other hierarchy, there is one less class, so that the functionality can be used without making an essentially useless default class because abstraction just had to be forced onto the class.
Default class
Other subclasses.
Now, sure, hierarchies can be used and abused, and if things are not documented clearly or classes not well designed, subclasses can run into problems. But those same problems exist with abstract classes as well; you don't get rid of them just because you add "abstract" to your class. For instance, if the contract of the doSomething() method above required SomeType to have populated x, y and z fields when they were accessed via getters and setters, your subclass would blow up regardless of whether your base class was the concrete class that returned null or an abstract class.
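To make the hook example above concrete, here is a minimal sketch with hypothetical names (DefaultProcessor stands in for the concrete class being discussed):

class SomeType { /* whatever data the hook can supply */ }

class DefaultProcessor {
    // Subclass hook: the base class promises to cope with a null return.
    protected SomeType doSomething() {
        return null;
    }

    public void run() {
        SomeType extra = doSomething();
        if (extra != null) {
            // process the SomeType instance supplied by a subclass
        }
        // ... shared processing that every subclass reuses as-is ...
    }
}

// The subclass overrides only the hook; everything else is inherited unchanged.
class CustomProcessor extends DefaultProcessor {
    @Override
    protected SomeType doSomething() {
        return new SomeType();
    }
}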
The general rule of thumb for designing a class hierarchy is pretty much a simple questionnaire:
Do I need the behavior of my proposed superclass in my subclass? (Y/N)
This is the first question you need to ask yourself. If you don't need the behavior, there's no argument for subclassing.
Do I need the state of my proposed superclass in my subclass? (Y/N)
This is the second question. If the state fits the model of what you need, this may be a candidate for subclassing.
If the subclass was created from the proposed superclass, would it truly be an IS-A relation, or is it just a shortcut to inherit behavior and state?
This is the final question. If it is just a shortcut and you cannot qualify your proposed subclass "as-a" superclass, then inheritance should be avoided. The state and logic can be copied and pasted into the new class with a different root, or delegation can be used.
Only if a class needs the behavior and the state, and the subclass can truly be considered an instance of (IS-A) the superclass, should it inherit from that superclass. Otherwise, other options exist that are better suited to the purpose; although they may require a little more work up front, they are cleaner in the long run.
There are a few cases where we don't want to allow the behavior to be changed - for instance, the String and Math classes.
I don't like inheritance, because there's always a better way to do the same thing, but when you're making maintenance changes in a huge system, sometimes the best way to fix the code with minimal changes is to extend a class a little. Yes, it usually leads to bad code, but to working code, and without months of rewriting first. So giving the maintenance programmer as much flexibility as they can handle is a good way to go.
