Why is java.util.Observable not an abstract class?

I just noticed that java.util.Observable is a concrete class. Since the purpose of Observable is to be extended, this seems rather odd to me. Is there a reason why it was implemented this way?
I found this article which says that
The observable is a concrete class, so the class deriving from it must be determined upfront, as Java allows only single inheritance.
But that doesn't really explain it to me. In fact, if Observable were abstract, the user would be forced to determine the class deriving from it.

Quite simply it's a mistake that Observable is a class at all, abstract or otherwise.
Observable should have been an interface and the JDK should have provided a convenient implementation (much like List is an interface and ArrayList is an implementation)
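As a hedged sketch of what that design might have looked like (Subject, Listener and DefaultSubject are made-up names for illustration, not JDK API):

// The role is an interface...
interface Listener {
    void update(Object source, Object arg);
}

interface Subject {
    void addListener(Listener l);
    void removeListener(Listener l);
    void notifyListeners(Object arg);
}

// ...and the JDK could have shipped one convenient implementation that any
// class can hold by composition instead of having to extend it.
class DefaultSubject implements Subject {
    private final java.util.List<Listener> listeners =
            new java.util.concurrent.CopyOnWriteArrayList<>();

    @Override public void addListener(Listener l)    { listeners.add(l); }
    @Override public void removeListener(Listener l) { listeners.remove(l); }
    @Override public void notifyListeners(Object arg) {
        for (Listener l : listeners) {
            l.update(this, arg);
        }
    }
}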
There are quite a few "mistakes" in java, including:
java.util.Stack is a class, not an interface (like Observable, bad choice)
java.util.Properties extends java.util.Hashtable (rather than uses one)
The java.util.Date class is a bit of a mess, and is not immutable!
The java.util.Calendar class is a real mess
No unsigned 'byte' type (this is a real pain and the source of many low-level bugs)
java.sql.SQLException is a checked exception
Arrays don't use Arrays.toString(array) as their default toString() (how many SO questions has this caused?)
Cloneable shouldn't be a marker interface; it should have the clone() method and Object.clone() should not exist
While on the soapbox, in terms of the language itself, IMHO:
== should execute the .equals() method (this causes loads of headaches)
identity comparison == should either be === like javascript or a dedicated method like boolean isIdentical(Object o), because you hardly ever need it!
< should execute compareTo(Object o) < 0 for Comparable objects (and similarly for >, <=, >=)

As a first approach, one could think that this was done to allow the user to use composition instead of inheritance, which is very convenient if your class already inherits from another class and you cannot also inherit from Observable.
But if we look at the source code of Observable, we see that there is an internal flag
private boolean changed = false;
which is checked every time notifyObservers is invoked:
public void notifyObservers(Object arg) {
    Object[] arrLocal;
    synchronized (this) {
        if (!changed)
            return;
        arrLocal = obs.toArray();
        clearChanged();
    }
    for (int i = arrLocal.length - 1; i >= 0; i--)
        ((Observer) arrLocal[i]).update(this, arg);
}
But from a class that holds an Observable by composition, we cannot change this flag, since it is private, and the methods provided to change it (setChanged() and clearChanged()) are protected.
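A minimal sketch of the problem (WeatherStation is a made-up example class):

import java.util.Observable;
import java.util.Observer;

// Attempt to use Observable by composition rather than inheritance.
class WeatherStation {
    private final Observable observable = new Observable();

    public void addObserver(Observer o) { observable.addObserver(o); }

    public void temperatureChanged(double newTemp) {
        // observable.setChanged();          // does NOT compile: setChanged() is protected
        observable.notifyObservers(newTemp); // silently does nothing, because 'changed' is still false
    }
}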
This means that the user is forced to subclass the Observable class, and I would say that the lack of the "abstract" keyword is just a "mistake".
I would say that this class is a complete screwup.


Interface method referencing a concrete class as parameter causes coupling?

I was thinking about programming to interfaces and not to concrete classes, but I had a doubt: should any interface method be able to hold references to concrete classes?
Suppose the following scenarios:
1)
public interface AbsType1 {
public boolean method1(int a); // it's ok, only primitive types here
}
2)
public interface AbsType2 {
public boolean method2(MyClass a); // I think I have some coupling here
}
Should I choose a different design here in order to avoid the latter? e.g.
public interface MyInterface {} // yes, this is empty
public class MyClass implements MyInterface {
// basically identical to the previous "MyClass"
}
public interface AbsType2 {
public boolean method2(MyInterface a); // this is better (as long as the
// interface is really stable)
}
But there's still something that doesn't convince me... I feel uncomfortable with declaring an empty interface, though I saw someone else doing so.
Maybe an abstract class would work better here?
I am a little bit confused.
EDIT:
Ok, I'll try to be more specific by making an example. Let's say I'm designing a ShopCart and of course I want to add items to the cart:
public interface ShopCart {
public void addArticle(Article a);
}
Now, if Article were a concrete class, what if its implementation changes over time? This is why I could think of making it an Interface, but then again, it's probably not suitable at least at a semantic level because interfaces should specify behaviours and an Article has none (or almost none... I guess it's a sort of entity class).
So, probably I'm ending up right now to the conclusion that making Article an abstract class in this case would be the best thing... what do you think about it?
I would use interfaces because composition is much better than inheritance. "Should any interface method be able to hold references to concrete classes?" Why shouldn't it? Some classes within a package are coupled; that is a fact and a commonly used technique. When you make this relation explicit in the interface, you can see which classes your implementation depends on. Dependency and composition relations are not inheritance, so I would avoid an abstract class.
In my opinion interfaces are fine for all types where the implementation may vary. But if you define a module which introduces a new type that isn't intended to have alternative implementations, then there is no need to define it as an interface in the first place. Often that would be over-design, in my opinion. It depends on the problem domain and often on how you want to support testing or AOP weaving.
For example consider a 2D problem domain where you need to model a Location as a type. If it is clear that a Location is always represented by a x and y coordinate, you may provide it as a Class. But if you do not know which properties a Location could have (GPS data, x, y, z coordinates, etc.) but you rely on some behavior like distance(), you should model it as an Interface instead.
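For instance, a rough sketch of the second case (the names are illustrative only):

// Behaviour-only contract: callers rely on distance(), not on how a
// location is represented internally.
interface Location {
    double distance(Location other);
}

// One possible representation: plain x/y coordinates.
class CartesianLocation implements Location {
    private final double x, y;

    CartesianLocation(double x, double y) { this.x = x; this.y = y; }

    @Override
    public double distance(Location other) {
        if (other instanceof CartesianLocation) {
            CartesianLocation o = (CartesianLocation) other;
            return Math.hypot(x - o.x, y - o.y);
        }
        throw new IllegalArgumentException("Unsupported location type: " + other);
    }
}

A GPS-based or 3D implementation could later replace CartesianLocation without changing any caller that depends only on Location.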
If there are no public methods which AbsType would access in MyClass then the empty interface is probably not a good way to go.
There is no interface declaration (contract) for static methods, which otherwise might make sense here.
So, if AbsType is not going to use any methods from MyClass/MyInterface, then I assume it's basically only storing the class object for some other purpose. In this case, consider using generics to make clear how you want AbsType to be used without coupling closely to the client's code, like
public class AbsType3<C extends Class<?>> {
    public boolean method3(C classType) { ... }
}
Then you can restrict the types of classes to allow if needed by exchanging the <C extends Class<?>> type parameter for something else which may also be an interface, like
<C extends Class<Collection<?>>>.
Empty interfaces are somewhat like boolean flags for classes: either a class implements the interface (true) or it doesn't (false). If at all, these marker interfaces should be used to convey a significant statement about how a class is meant to be (or not to be) used, see Serializable for example.

interface and contract : in this example

So I was trying to understand interfaces, but I almost only see articles that explain "how" to use an interface; my problem is to understand the "why":
so it's better to use an interface than creating and subclassing a class, which might be useless,
so we implement the interface methods in the class, but I don't understand why this is a good thing.
Let's say :
a class like Car.java defines all the code to make the car
we create the interface Working.java with several methods like start(), stop(), etc.
we implement the methods in Diesel_Car.java, Electric_Car.java, etc.
so what does it change for Car.java? This might not be the best example, as it seems that Car should be the parent of Diesel_Car.java etc,
but what was the meaning to implement the methods in those classes?
Is there a method in Car.java that somehow "calls" the Diesel_Car.java class and its interface methods?
I've read that the interface is like a "Contract", but I only see the second part of this contract (where the method is implemented) and I'm having some trouble imagining where the first part happens.
Thanks for your help
Let's take your example of a base class Car with Electric_Car and Diesel_Car subclasses, and expand the model a bit.
Car may implement the following interfaces:
Working : with start() & stop() methods
Moving : with move(), turn() & stop() methods
The Car might contain an instance of class AirConditioner, which should also implement the interface Working.
The Driver object can interact with objects that implement Working; the driver can start() or stop() them. (The driver can start or stop the car and the A/C separately.)
Also, since the Driver can walk around on his own (and does not always need a car), he should implement the interface Moving.
The object Ground can now interact with anything that implements Moving : either car or driver.
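A rough sketch of that model in code (all names are illustrative, and Moving is simplified to move() and turn() to keep the example short):

interface Working {
    void start();
    void stop();
}

interface Moving {
    void move();
    void turn();
}

class AirConditioner implements Working {
    public void start() { System.out.println("A/C on"); }
    public void stop()  { System.out.println("A/C off"); }
}

class Car implements Working, Moving {
    private final AirConditioner airConditioner = new AirConditioner();

    public void start() { System.out.println("Engine started"); }
    public void stop()  { System.out.println("Engine stopped"); }
    public void move()  { System.out.println("Car moving"); }
    public void turn()  { System.out.println("Car turning"); }

    public AirConditioner getAirConditioner() { return airConditioner; }
}

class Driver implements Moving {
    public void move() { System.out.println("Driver walking"); }
    public void turn() { System.out.println("Driver turning"); }

    // The driver can operate anything that is Working: the car or its A/C.
    public void operate(Working w) { w.start(); }
}

The Ground object from the answer could, in the same way, accept anything that implements Moving.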
(Very) contrived example (non-generic, error handling removed, etc. for clarity).
List theList = new ArrayList();
theList is a List, in this case implemented by an ArrayList. Let's say we pass this to a third-party API that somewhere in its bowels adds something to the list.
public void frobList(List list) {
    list.add(new Whatever());
}
Now let's say for some reason we want to do something unusual to items that are added to the list. We can't change the third-party API. We can, however, create a new list type.
public class FrobbableList<E> extends ArrayList<E> {
    public boolean add(E e) {
        return super.add(Frobnicator.frob(e));
    }
}
Now in our code we change the list we instantiate and call the API as before:
List theList = new FrobbableList();
frobber.frobList(theList);
If the third-party API had taken an ArrayList (the actual type) instead of a List (the interface), we'd be unable to do this as easily. By not locking the API in to a specific implementation, it provided us the opportunity to create custom behavior.
Taken further, this is a concept fundamental to extensible, debuggable, testable code. Things like dependency injection/Inversion of Control rely on coding to interfaces to function.
I am making another attempt to explain the concept of interface as a contract.
A typical usage scenario is when you'd like to sort a List of elements using java.util.Collections : <T extends java.lang.Comparable<? super T>> void sort(java.util.List<T> ts)
What does this signature mean? The sort() method will accept a java.util.List<T> of objects of type T, where T is a type that implements the interface Comparable.
So, if you would like to use Collections.sort() with a list of your objects, you will need them to implement the Comparable interface:
public interface Comparable<T> {
    int compareTo(T t);
}
So, if you implement a class of type Car and want to compare cars by their weight using Collections.sort(), you will have to implement the Comparable interface/contract in class Car.
public class Car implements Comparable<Car> {
    private int weight;
    // ...other class implementation stuff
    @Override
    public int compareTo(Car otherCar) {
        if (this.weight == otherCar.weight) return 0;
        else if (this.weight > otherCar.weight) return 1;
        else return -1;
    }
}
Under the hood, Collections.sort() will call your implementation of compareTo when it sorts the list.
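For completeness, a small usage sketch (it assumes Car gains a constructor that sets the weight, which is not shown above):

List<Car> cars = new ArrayList<>();
cars.add(new Car(1500));
cars.add(new Car(900));
Collections.sort(cars); // calls Car.compareTo(Car) to order the list by weight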
The contract is a concept of how classes work with each other. The idea is that the interface defines the method names and return types, but doesn't say how they are implemented; that is done by the implementing class.
The concept is that when an interface InterfaceA defines methodA and methodB, any class implementing that interface MUST implement methodA and methodB along with its own methods. So it might work like this:
interface InterfaceA {
    void methodA();
    void methodB(String s);
}

public class ClassA implements InterfaceA {
    public void methodA() {
        System.out.println("MethodA");
    }
    public void methodB(String s) {
        System.out.println(s);
    }
}
The contract principle is that anything implementing an interface must implement the whole interface. Any class that doesn't must be declared abstract.
Hope this helps.
Design by contract (DbC), also known as programming by contract and design-by-contract programming, is an approach for designing computer software. It prescribes that software designers should define formal, precise and verifiable interface specifications for software components, which extend the ordinary definition of abstract data types with preconditions, postconditions and invariants. These specifications are referred to as "contracts", in accordance with a conceptual metaphor with the conditions and obligations of business contracts. Wikipedia
Short-cut.
If you follow the good practice of coding against interfaces, you know that the interface defines the contract all implementation classes must adhere to.
We designed Contract Java, an extension of Java in which method contracts are specified in interfaces. We identified three design goals.
First, Contract Java programs without contracts and programs with fully-satisfied contracts should behave as if they were run without contracts in Java.
Second, programs compiled with a conventional Java compiler must be able to interoperate with programs compiled under Contract Java.
Finally, unless a class declares that it meets a particular contract, it must never be blamed for failing to meet that contract. Abstractly, if the method m of an object with type t is called, the caller should only be blamed for the pre-condition contracts associated with t and m, and m should only be blamed for post-condition contracts associated with t.
These design goals raise several interesting questions and demand decisions that balance language design with software engineering concerns. This section describes each of the major design issues, the alternatives, our decisions, our rationale, and the ramifications of the decisions. The decisions are not orthogonal; some of the later decisions depend on earlier ones.
Contracts in Contract Java are decorations of method signatures in interfaces. Each method declaration may come with a pre-condition expression and a post-condition expression; both expressions must evaluate to booleans. The pre-condition specifies what must be true when the method is called. If it fails, the context of the method call is to blame for not using the method in a proper context. The post-condition expression specifies what must be true when the method returns. If it fails, the method itself is to blame for not establishing the promised conditions.
Contract Java does not restrict the contract expressions. Still, good programming discipline dictates that the expressions should not contribute to the result of the program. In particular, the expressions should not have any side-effects.
Both the pre- and post-condition expressions are parameterized over the arguments of the method and the pseudo-variable this. The latter is bound to the current object. Additionally, the post-condition of the contract may refer to the name of the method, which is bound to the result of the method call.
Contracts are enforced based on the type-context of the method call. If an object's type is an interface type, the method call must meet all of the contracts in the interface. For instance, if an object implements the interface I, a call to one of I's methods must check the pre-condition and the post-condition specified in I. If the object's type is a class type, the object has no contractual obligations. Since a programmer can always create an interface for any class, we leave objects with class types unchecked for efficiency reasons.
For an example, consider the interface RootFloat:
interface RootFloat {
    float getValue();
    float sqRoot();
    @pre { this.getValue() >= 0f }
    @post { Math.abs(sqRoot * sqRoot - this.getValue()) < 0.01f }
}
It describes the interface for a float wrapper class that provides a sqRoot method. The first method, getValue, has no
contracts. It accepts no arguments and returns the unwrapped float. The sqRoot method also accepts no arguments,
but has a contract. The pre-condition asserts that the unwrapped value is greater than or equal to zero. The result type
of sqRoot is float. The post-condition states that the square of the result must be within 0.01 of the value of the float.
Even though the contract language is sufficiently strong to specify the complete behavior in some cases, such as the
previous example, total or even partial correctness is not our goal in designing these contracts. Typically, the contracts
cannot express the full behavior of a method. In fact, there is a tension between the amount of information revealed in
the interface and the amount of validation the contracts can satisfy.
For an example, consider this stack interface:
interface Stack {
    void push(int i);
    int pop();
}
With only push and pop operations available in the interface, it is impossible to specify that, after a push, the top
element in the stack is the element that was just pushed. But, if we augment the interface with a top operation that
reveals the topmost item on the stack (without removing it), then we can specify that push adds items to the top of the
stack:
interface Stack {
    void push(int x);
    @post { x = this.top() }
    int pop();
    int top();
}
In summary, we do not restrict the language of contracts. This makes the contract language as flexible as possible;
contract expression evaluation may even contribute to the final result of a computation. Despite the flexibility of the
contract language, not all desirable contracts are expressible. Some contracts are inexpressible because they may
involve checking undecidable properties, while others are inexpressible because the interface does not permit enough
observations.

"instance of" "type of" What is the use case of this?

While doing some casual reading I came across an interesting quote by Scott Meyers
Anytime you find yourself writing
code of the form "if the object is of
type T1, then do something, but if
it's of type T2, then do something
else," slap yourself.
I was just wondering why Java has an instanceof operator when you could do the same thing with overridden methods. When is it actually used?
Sometimes you have to use objects whose behavior (e.g. source code) you do not control so you cannot always rely on object-oriented solutions to type-related matters. (Especially consider that authors of libraries cannot anticipate every use case you might have; of course, you could argue that extension and implementation provide workarounds but they require much more effort than direct type checking.)
The "instanceof" operator gives you a way to inspect the type of an object and act conditionally.
It's ideal to avoid it, but sometimes necessary.
Use of instanceof can interfere with the Open/Closed Principle (the "O" in SOLID). If you implement instanceof tests, then your class may need to be modified as new implementation classes are created.
However, it is sometimes necessary. For example, it can be used in implementations of the Object.equals() method. The argument is an Object -- so that the method may be overridden by arbitrary subclasses -- but you usually need to cast it to your class's type to compare.
I actually use it when I'm using a 3rd party library and classes are final (the jerks!).
An if-type-do-something in code is a sign that the do-something should be a method defined in the class or interface with overriding behavior. But that assumes you control the implementation. Sometimes you don't.
When I'm implementing equals() for a class Foo it often looks like this:
public boolean equals(Object o) {
    if (o instanceof Foo) {
        Foo that = (Foo) o;
        [ compare this to that ]
    } else {
        return false;
    }
}
Since I'm overriding equals the signature is forced on me, but I need to know whether I have an instance of Foo or not for a meaningful comparison.
For example:
public void eat(Eatable eatable) {
    if (eatable instanceof Fruit) {
        // eat it directly
    }
}

class Eatable {
}

class Fruit extends Eatable {
}
While writing a complicated class structure, as in the Wrapper design pattern, you never know what kind of object you will encounter. In these situations you check the object with the instanceof operator.

Why in java enum is declared as Enum<E extends Enum<E>> [duplicate]

Possible Duplicate:
java Enum definition
Better formulated question, that is not considered a duplicate:
What would be different in Java if Enum declaration didn't have the recursive part
if language designers were to use simply Enum<E extends Enum> how would that affect the language?
The only difference now would be that someone could write
A extends Enum<B>
but since it is not allowed in java to extend enums that would be still illegal.
I was also thinking about someone supplying the JVM with bytecode that defines something as extending an enum, but generics can't affect that since they are all erased.
So what is the whole point of such declaration?
Thank you!
Edit
for simplicity let's look at an example:
interface MyComparable<T> {
    int myCompare(T o);
}

class MyEnum<E extends MyEnum> implements MyComparable<E> {
    public int myCompare(E o) { return -1; }
}

class FirstEnum extends MyEnum<FirstEnum> {}

class SecondEnum extends MyEnum<SecondEnum> {}
what's wrong with this class structure? What can be done that "MyEnum<E extends MyEnum<E>>" would restrict?
This is a common question, and understandably so. Have a look at this part of the generics FAQ for the answer (and actually, read as much of the whole document as you feel comfortable with, it's rather well done and informative).
The short answer is that it forces the class to be parameterized on itself; this is required for superclasses to define methods, using the generic parameter, that work transparently ("natively", if you will) with their subclasses.
Edit: As a (non-)example for instance, consider the clone() method on Object. Currently, it's defined to return a value of type Object. Thanks to covariant return types, specific subclasses can (and often do) define that they return a more specific class, but this cannot be enforced and hence cannot be inferred for an arbitrary class.
Now, if Object were defined like Enum, i.e. Object<T extends Object<T>> then you'd have to define all classes as something like public class MyFoo<MyFoo>. Consequently, clone() could be declared to return a type of T and you can ensure at compile time that the returned value is always exactly the same class as the object itself (not even subclasses would match the parameters).
Now in this case, Object isn't parameterized like this because it would be extremely annoying to have this baggage on all classes when 99% of them aren't going to utilise it at all. But for some class hierarchies it can be very useful - I've used a similar technique myself before with types of abstract, recursive expression parsers with several implementations. This construct made it possible to write code that was "obvious" without having to cast everywhere, or copy-and-paste just to change concrete class definitions.
Edit 2 (To actually answer your question!):
If Enum was defined as Enum<E extends Enum>, then as you rightly say, someone could define a class as A extends Enum<B>. This defeats the point of the generic construct, which is to ensure that the generic parameter is always the exact type of the class in question. Giving a concrete example, Enum declares its compareTo method as
public final int compareTo(E o)
In this case, since you defined A to extend Enum<B>, instances of A could only be compared against instances of B (whatever B is), which is almost certainly not very useful. With the additional construct, you know that any class that extends Enum is comparable only against itself. And hence you can provide method implementations in the superclass that remain useful, and specific, in all subclasses.
(Without this recursive generics trick, the only other option would be to define compareTo as public final int compareTo(Enum o). This is not really the same thing, as then one could compare a java.math.RoundingMode against a java.lang.Thread.State without the compiler complaining, which again isn't very useful.)
OK, let's get away from Enum itself as we appear to be getting hung up on it. Instead, here is an abstract class:
public abstract class Manipulator<T extends Manipulator<T>>
{
    /**
     * This method actually does the work, whatever that is
     */
    public abstract void manipulate(DomainObject o);

    /**
     * This creates a child that can be used for divide and conquer-y stuff
     */
    public T createChild()
    {
        // Some really useful implementation here based on
        // state contained in this class
    }
}
We are going to have several concrete implementations of this - SaveToDatabaseManipulator, SpellCheckingManipulator, whatever. Additionally we also want to let people define their own, as this is a super-useful class. ;-)
Now - you will notice that we're using the recursive generic definition, and then returning T from the createChild method. This means that:
1) We know and the compiler knows that if I call:
SpellCheckingManipulator obj = ...; // We have a reference somehow
return obj.createChild();
then the returned value is definitely a SpellCheckingManipulator, even though it's using the definition from the superclass. The recursive generics here allow the compiler to know what is obvious to us, so you don't have to keep casting the return values (like you often have to do with clone(), for example).
2) Notice that I didn't declare the method final, since perhaps some specific subclasses will want to override it with a more suitable version for themselves. The generics definition means that regardless of who creates a new class or how it is defined, we can still assert that the return from e.g. BrandNewSloppilyCodedManipulator.createChild() will still be an instance of BrandNewSloppilyCodedManipulator. If a careless developer tries to define it to return just Manipulator, the compiler won't let them. And if they try to define the class as BrandNewSloppilyCodedManipulator<SpellCheckingManipulator>, it won't let them either.
Basically, the conclusion is that this trick is useful when you want to provide some functionality in a superclass that somehow gets more specific in subclasses. By declaring the superclass like this, you are locking down the generic parameter for any subclasses to be the subclass itself. This is why you can write a generic compareTo or createChild method in the superclass and prevent it from becoming overly vague when you're dealing with specific subclasses.
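As a short, hedged sketch of what a concrete subclass might look like (SpellCheckingManipulator and DomainObject are the illustrative names from the answer above, and it assumes createChild() has a real implementation):

class SpellCheckingManipulator extends Manipulator<SpellCheckingManipulator> {
    @Override
    public void manipulate(DomainObject o) {
        // spell-check the domain object here
    }
}

// No cast is needed: the compiler already knows the concrete type.
SpellCheckingManipulator parent = new SpellCheckingManipulator();
SpellCheckingManipulator child = parent.createChild();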

abstract methods in skeletal implementations of interfaces

I was re-reading Effective Java (2nd edition) item 18, prefer interfaces to abstract classes. In that item Josh Bloch provides an example of a skeletal implementation of the Map.Entry<K,V> interface:
// Skeletal Implementation
public abstract class AbstractMapEntry<K,V>
        implements Map.Entry<K,V> {
    // Primitive operations
    public abstract K getKey();
    public abstract V getValue();
    // ... remainder omitted
}
Two questions stem from this example:
Why are getKey and getValue explicitly declared here as abstract methods? They are part of the Map.Entry interface, so I don't see a reason for the redundant declaration in the abstract class.
Why use the idiom of leaving these primitive methods, as Mr. Bloch refers to them, abstract? Why not just do this:
// Skeletal Implementation
public abstract class AbstractMapEntry<K,V>
        implements Map.Entry<K,V> {
    private K key;
    private V value;

    // Primitive operations
    public K getKey() { return key; }
    public V getValue() { return value; }
    // ... remainder omitted
}
The benefits of this are that each subclass doesn't have to define its own set of fields, and can still access the key and value by their accessors. If a subclass truly needs to define its own behavior for the accessors, it can implement the Map.Entry interface directly. A possible downside is that in the equals method provided by the skeletal implementation, the abstract accessors are called:
// Implements the general contract of Map.Entry.equals
@Override public boolean equals(Object o) {
    if (o == this)
        return true;
    if (! (o instanceof Map.Entry))
        return false;
    Map.Entry<?,?> arg = (Map.Entry) o;
    return equals(getKey(), arg.getKey()) &&
           equals(getValue(), arg.getValue());
}
Bloch warns against calling overridable methods (item 17) from classes designed for inheritance as it leaves the superclass vulnerable to changes made by subclasses.
Maybe this is a matter of opinion, but I was hoping to determine whether there's more to the story, as Bloch doesn't really elaborate on this in the book.
I would say it helps emphasize what the concrete class is intended to deal with, instead of just leaving it up to the compiler to tell you (or you having to compare both to see what is missing). Kind of self-documenting code. But it certainly isn't necessary, it is more of a style thing, as far as I can see.
There is more significant logic in returning these values than simple getting and setting. Every class I spot-checked in the standard JDK (1.5) did something non-trivial in at least one of these methods, so I would guess that he views such an implementation as too naive, and that it would encourage subclasses to use it instead of thinking through the problem on their own.
Regarding the issue with equals, nothing would change if the abstract class implemented the accessors, because the issue is that they are overridable. In this case I would say that equals is carefully implemented to anticipate subclass implementations. Normally, equals should not be implemented to return true between a class and its subclass (although plenty of implementations do) because of symmetry issues (the superclass will think it equals the subclass, but the subclass won't think it equals the superclass), so this type of equals implementation is tricky no matter what you do.
Bloch warns against calling overridable methods (item 17) from classes designed for inheritance as it leaves the superclass vulnerable to changes made by subclasses
He warns about calling overridable methods in the constructor, not in other methods.
One reason that AbstractMapEntry#getKey and getValue are abstract (i.e. unimplemented) is that Map.Entry is an inner interface to Map. Using nested classes/interfaces is how Java implements composition. The idea in composition is that the composed part is not a first-class concept. Rather, the composed part only make sense if it is contained in the whole. In this case, the composed part is Map.Entry and the root object of the composite is Map. Obviously the concept expressed is that a Map has many Map.Entrys.
Therefore the semantics of AbstractMapEntry#getKey and getValue will depend essentially on the implementation of Map that we're talking about. A plain old getter implementation as you've written will work just fine for HashMap. It won't work for something like ConcurrentHashMap which demands thread-safety. It's likely that ConcurrentHashMap's implementation of getKey and getValue make defensive copies. (Recommend checking the source code for yourself).
Another reason not to implement getKey and getValue is that the classes that implement Map are radically different, ranging from ones that arguably should never have been Maps (e.g. Properties) to ones from completely different universes than an intuitive Map implementation (e.g. Provider, TabularDataSupport).
In conclusion, leave AbstractMapEntry#getKey and getValue unimplemented, because of this golden rule of API design:
When in doubt, leave it out (see here)
I don't see any reason (for the redundant declaration).
Leaving them abstract allows the implementation to define how the key and value are stored.
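To illustrate that last point, here is a hedged sketch of two subclasses that store the key and value in completely different ways (the class names are made up, and it assumes the omitted remainder of AbstractMapEntry leaves setValue to subclasses as well):

// Immutable entry: the key and value are stored directly in fields.
class ImmutableEntry<K,V> extends AbstractMapEntry<K,V> {
    private final K key;
    private final V value;
    ImmutableEntry(K key, V value) { this.key = key; this.value = value; }
    public K getKey()   { return key; }
    public V getValue() { return value; }
    public V setValue(V value) { throw new UnsupportedOperationException(); }
}

// Entry backed by parallel arrays inside some map implementation:
// the "storage" is an index into shared arrays, not a pair of fields.
class ArrayBackedEntry<K,V> extends AbstractMapEntry<K,V> {
    private final K[] keys;
    private final V[] values;
    private final int index;
    ArrayBackedEntry(K[] keys, V[] values, int index) {
        this.keys = keys; this.values = values; this.index = index;
    }
    public K getKey()   { return keys[index]; }
    public V getValue() { return values[index]; }
    public V setValue(V value) {
        V old = values[index];
        values[index] = value;
        return old;
    }
}

Had the skeletal implementation hard-wired private key and value fields, the second style would not be possible without carrying around unused fields.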
