I don't think there is an efficient way of doing this (if one exists at all), but I figured I'd ask in case someone else knows otherwise. I'm looking to create my own Cache/lookup table. To make it as useful as possible, I'd like it to be able to store generic objects. The problem with this approach is that even though you can make a Collections.unmodifiableMap, an ImmutableMap, etc., these implementations only prevent you from changing the map itself. They don't prevent you from getting a value out of the map and modifying its underlying state. Essentially what I'd need is something to the effect of HashMap<K, ? extends Immutable>, but to my knowledge nothing like that exists.
I had originally thought that I could just return a copy of the values in the cache from the get method, but since Java's Cloneable interface is jacked up, you cannot simply call
public V getItem(K key) {
    return (V) map.get(key).clone();
}
Your thinking is good, and you're right that there's no built-in way of handling immutability.
However, you could try this:
interface Copyable<T> {
    T getCopy();
}
Then override the get() method to return copies instead of the value itself:
class CopyMap<K, V extends Copyable<V>> extends HashMap<K, V> {

    @Override
    public V get(Object key) {
        V value = super.get(key);
        return value == null ? null : value.getCopy(); // guard against a cache miss
    }
}
Then it's up to the implementation to return a copy of itself, rather than this (unless the class itself is immutable). Although you can't enforce that in code, you would be within your rights to publicly humiliate the programmer that doesn't conform.
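For illustration, here is a minimal sketch of a class conforming to that contract (the Settings name and its field are invented for the example):

class Settings implements Copyable<Settings> {

    private String endpoint; // mutable state we don't want cache clients to share

    Settings(String endpoint) {
        this.endpoint = endpoint;
    }

    void setEndpoint(String endpoint) {
        this.endpoint = endpoint;
    }

    @Override
    public Settings getCopy() {
        return new Settings(endpoint); // defensive copy, never "return this"
    }
}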
I'm looking to create my own Cache/lookup table.
Why not use Guava's cache?
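A minimal sketch with Guava's CacheBuilder (the Widget type and loadWidget method are hypothetical; only the Guava calls are real):

import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;

LoadingCache<String, Widget> cache = CacheBuilder.newBuilder()
        .maximumSize(1000) // evict entries beyond 1000
        .build(new CacheLoader<String, Widget>() {
            @Override
            public Widget load(String key) {
                return loadWidget(key); // hypothetical loader
            }
        });

Widget w = cache.getUnchecked("some-key"); // loaded on first access, cached afterwards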
The problem with this approach is that even though you can make a
Collections.unmodifiableMap, immutableMap, etc, these implementations
only prevent you from changing the Map itself. They don't prevent you
from getting the value from the map and modifying its underlying
values.
This is not something any collection can enforce for you. You need to make the classes themselves immutable. There is a hacky approach using Reflection (which can also be used to make a class mutable!), but really, you should avoid this and simply create classes that are immutable.
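For what it's worth, a minimal sketch of such an immutable class (names are arbitrary):

public final class Point { // final: no subclass can reintroduce mutability

    private final int x; // final fields, assigned exactly once
    private final int y;

    public Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    public int getX() { return x; } // accessors only, no setters
    public int getY() { return y; }
}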
There are other options for object cloning in Java: Making a copy of an object Dynamically?
Be aware, though, that deep cloning any object can be dangerous. The objects stored in this map must be isolated from each other; otherwise a whole object graph could get copied when returning a single entry.
There is no formal concept of "mutability" or "immutability" in the language. The compiler cannot tell whether a type is "mutable" or "immutable". To determine whether something is immutable, we humans have to examine every field and method of the class, and reason through the behavior of the methods to discover that none of them will alter the state of the object, then we call it "immutable". But there is no difference from the perspective of the language.
Related
What are the reasons behind the decision to not have a fully generic get method
in the interface of java.util.Map<K, V>.
To clarify the question, the signature of the method is
V get(Object key)
instead of
V get(K key)
and I'm wondering why (same thing for remove, containsKey, containsValue).
As mentioned by others, the reason get(), etc. are not generic is that the key of the entry you are retrieving does not have to be the same type as the object you pass in to get(); the specification of the method only requires that they be equal. This follows from how the equals() method takes an Object as a parameter, not just the same type as the object.
Although it may be commonly true that many classes have equals() defined so that its objects can only be equal to objects of its own class, there are many places in Java where this is not the case. For example, the specification for List.equals() says that two List objects are equal if they are both Lists and have the same contents, even if they are different implementations of List. So coming back to the example in this question, according to the specification of the method it is possible to have a Map<ArrayList, Something> and for me to call get() with a LinkedList as argument, and it should match the key which is a list with the same contents. This would not be possible if get() were generic and restricted its argument type.
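A small sketch of that scenario:

import java.util.*;

Map<List<String>, String> map = new HashMap<List<String>, String>();
map.put(new ArrayList<String>(Arrays.asList("a", "b")), "found");

// List.equals() compares contents, not implementation classes, so a
// LinkedList argument matches the ArrayList key:
String result = map.get(new LinkedList<String>(Arrays.asList("a", "b"))); // "found"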
An awesome Java coder at Google, Kevin Bourrillion, wrote about exactly this issue in a blog post a while ago (admittedly in the context of Set instead of Map). The most relevant sentence:
Uniformly, methods of the Java Collections Framework (and the Google Collections Library too) never restrict the types of their parameters except when it's necessary to prevent the collection from getting broken.
I'm not entirely sure I agree with it as a principle - .NET seems to be fine requiring the right key type, for example - but it's worth following the reasoning in the blog post. (Having mentioned .NET, it's worth explaining that part of the reason why it's not a problem in .NET is that there's the bigger problem in .NET of more limited variance...)
The contract is expressed thus:
More formally, if this map contains a mapping from a key k to a value v such that (key==null ? k==null : key.equals(k)), then this method returns v; otherwise it returns null. (There can be at most one such mapping.)
(my emphasis)
and as such, a successful key lookup depends on the input key's implementation of the equality method. That is not necessarily dependent on the class of k.
It's an application of Postel's Law, "be conservative in what you do, be liberal in what you accept from others."
Equality checks can be performed regardless of type; the equals method is defined on the Object class and accepts any Object as a parameter. So, it makes sense for key equivalence, and operations based on key equivalence, to accept any Object type.
When a map returns key values, it conserves as much type information as it can, by using the type parameter.
I think this section of the Generics Tutorial explains the situation (my emphasis):
"You need to make certain that the generic API is not unduly restrictive; it must
continue to support the original contract of the API. Consider again some examples
from java.util.Collection. The pre-generic API looks like:
interface Collection {
    public boolean containsAll(Collection c);
    ...
}
A naive attempt to generify it is:
interface Collection<E> {
    public boolean containsAll(Collection<E> c);
    ...
}
While this is certainly type safe, it doesn't live up to the API's original contract. The containsAll() method works with any kind of incoming collection. It will only succeed if the incoming collection really contains only instances of E, but:

The static type of the incoming collection might differ, perhaps because the caller doesn't know the precise type of the collection being passed in, or perhaps because it is a Collection<S>, where S is a subtype of E.

It's perfectly legitimate to call containsAll() with a collection of a different type. The routine should work, returning false."
Compatibility.
Before generics were available, there was just get(Object o).
Had they changed this method to get(K key), it would have potentially forced massive code maintenance onto Java users just to make working code compile again.
They could have introduced an additional method, say get_checked(K key), and deprecated the old get() method so there was a gentler transition path. But for some reason, this was not done. (The situation we are in now is that you need to install tools like FindBugs to check for type compatibility between the get() argument and the declared key type <K> of the map.)
The arguments relating to the semantics of .equals() are bogus, I think. (Technically they're correct, but I still think they're bogus. No designer in their right mind is ever going to make o1.equals(o2) true if o1 and o2 do not have any common superclass other than Object.)
The reason is that containment is determined by equals and hashCode which are methods on Object and both take an Object parameter. This was an early design flaw in Java's standard libraries. Coupled with limitations in Java's type system, it forces anything that relies on equals and hashCode to take Object.
The only way to have type-safe hash tables and equality in Java is to eschew Object.equals and Object.hashCode and use a generic substitute. Functional Java comes with type classes for just this purpose: Hash<A> and Equal<A>. A wrapper for HashMap<K, V> is provided that takes Hash<K> and Equal<K> in its constructor. This class's get and contains methods therefore take a generic argument of type K.
Example:
HashMap<String, Integer> h =
    new HashMap<String, Integer>(Equal.stringEqual, Hash.stringHash);
h.add("one", 1);
h.get("one"); // All good
h.get(Integer.valueOf(1)); // Compiler error
There is one more weighty reason: it cannot be done technically, because it would break Map.
Java has bounded wildcard types such as <? extends SomeClass>. A reference marked this way can point to a type parameterized with any subclass of SomeClass, but the wildcard makes that reference effectively read-only: the compiler lets you use the wildcard type as a method return type (as in simple getters), but blocks methods that take the wildcard type as an argument (as in ordinary setters).
This means that if you write Map<? extends KeyType, ValueType>, the compiler would not let you call a method declared as get(K key), and the map would be useless. The only solution is to make this method non-generic: get(Object).
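A short illustration of the point (type names are arbitrary):

Map<? extends Number, String> m = new HashMap<Integer, String>();

// If the signature were V get(K key), no argument would be accepted here,
// because K is the unknown capture of "? extends Number":
// m.get(Integer.valueOf(1));  // would not compile

// With the actual signature V get(Object key), lookups still work:
String s = m.get(Integer.valueOf(1));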
Backwards compatibility, I guess. Map (or HashMap) still needs to support get(Object).
I was looking at this and wondering why they did it this way. I don't think any of the existing answers explains why they couldn't just make the new generic interface accept only the proper type for the key. The actual reason is that even though they introduced generics, they did NOT create a new interface. The Map interface is the same old non-generic Map; it just serves as both the generic and the non-generic version. This way, if you have a method that accepts a non-generic Map, you can pass it a Map<String, Customer> and it will still work. At the same time, the contract for get accepts Object, so the new interface has to support this contract too.
In my opinion they should have added a new interface and implemented both on the existing collections, but they decided in favor of compatible interfaces even though it means a worse design for the get method. Note that the collections themselves would have remained compatible with existing methods; only the interfaces wouldn't.
We were doing a big refactoring recently and missed having a strongly typed get() to check that we hadn't left behind some get() calls using the old type.
But I found a workaround (an ugly trick) for a compile-time check: create a Map interface with strongly typed get, containsKey, remove, etc., and put it in the java.util package of your project.
You will get compilation errors just for calls to get() and friends with the wrong types; everything else seems fine to the compiler (at least in Eclipse Kepler).
Do not forget to delete this interface after checking your build, as it is not what you want at runtime.
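For reference, a sketch of that temporary shadow interface; this worked with older Eclipse compilers, though module-aware toolchains may refuse user classes in java.util:

// Temporary file, e.g. src/java/util/Map.java -- delete it after the check!
package java.util;

// Shadows the JDK interface at compile time, so calls to get(),
// containsKey() or remove() with a wrong key type become compile errors.
public interface Map<K, V> {
    V get(K key);
    boolean containsKey(K key);
    V remove(K key);
    V put(K key, V value);
    int size();
    // ... whatever other methods your code actually uses
}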
I'd like to use a special Java collection that can accept, at initialization, a strategy for determining whether member objects are "equal".
The reason I need this is that the equals method of the class I need to add to this collection is already implemented to satisfy other (more appropriate) functionality. In one specific case, the criterion for uniqueness in this collection instance needs to check only one field of the class, as opposed to the several fields checked in the equals method. I would prefer to avoid decorating the objects, as I am gathering them from disparate libraries and it would be costly to loop through them for decoration (and it might muddy my code).
I realize this would not be a Set as it would break the Java contract for Set, but I just feel as though this problem must have been encountered previously. I figured Guava or Apache Collections would have provided something, but no luck it seems. Does anybody know of any available library that does provide this type of functionality? Should I be entertaining a different solution altogether?
Can you use a custom Comparator with a TreeSet or TreeMap? Or use a Map where the key captures your criteria? A HashSet is just a wrapper around a HashMap, so using a map instead shouldn't be much more expensive.
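For example, a sketch where uniqueness is decided by a single field (Item and getId() are hypothetical); note that a TreeSet treats compare() == 0 as a duplicate:

Set<Item> byId = new TreeSet<Item>(new Comparator<Item>() {
    @Override
    public int compare(Item a, Item b) {
        return a.getId().compareTo(b.getId()); // only the id decides uniqueness here
    }
});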
That is not really practical. Consider for instance two instances of a class C which you consider equivalent.
Now you do:
set.add(c1);
set.remove(c2);
Should the set be empty after that? What about .retainAll(), .removeAll()?
Your best bet here is to create your own class which wraps over class C, delegates whatever needs to be delegated, and has this wrapper class implement .hashCode() and .equals() (and possibly Comparable of itself too). With such a class, you can just go on and use classical sets and maps.
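A sketch of such a wrapper, assuming the relevant field is exposed via a hypothetical getId():

public final class CWrapper {

    private final C delegate;

    public CWrapper(C delegate) { this.delegate = delegate; }

    public C unwrap() { return delegate; }

    @Override
    public boolean equals(Object o) {
        return o instanceof CWrapper
            && ((CWrapper) o).delegate.getId().equals(delegate.getId());
    }

    @Override
    public int hashCode() {
        return delegate.getId().hashCode();
    }
}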
Guava has an Equivalence, which lets you define whether two objects are equivalent.
It also has Equivalence.Wrapper which wraps arbitrary objects and delegates equals() and hashCode() to the implementations in the equivalence, rather than their own.
So you could do something like this:
import java.util.HashSet;
import java.util.Set;
import com.google.common.base.Equivalence;
import com.google.common.base.Equivalence.Wrapper;

public class MySet<T> implements Set<T> {

    private final Equivalence<T> equivalence;
    private final Set<Wrapper<T>> delegate = new HashSet<Wrapper<T>>();

    public MySet(Equivalence<T> equivalence) {
        this.equivalence = equivalence;
    }

    @Override
    public boolean add(T t) {
        return delegate.add(equivalence.wrap(t)); // equality delegated to the Equivalence
    }

    // other Set methods
}
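Usage would then look something like this (Item and getId() are hypothetical; doEquivalent() and doHash() are the two methods recent Guava versions ask you to implement):

Set<Item> items = new MySet<Item>(new Equivalence<Item>() {
    @Override
    protected boolean doEquivalent(Item a, Item b) {
        return a.getId().equals(b.getId());
    }

    @Override
    protected int doHash(Item item) {
        return item.getId().hashCode();
    }
});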
Collections.unmodifiableList(...) returns a new instance of the static inner class UnmodifiableList. Other unmodifiable collection classes are constructed the same way.
Had these classes been public, there would be two advantages:
ability to indicate a more specific return value (such as UnmodifiableList), so an API user wouldn't come to the idea of modifying that collection;
ability to check during runtime if a List is instanceof UnmodifiableList.
So, were there any advantages not to make those classes public?
EDIT: No definitively convincing arguments were presented, so I chose the most upvoted answer.
Personally I completely agree with you. At the core of the problem is the fact that Java's generics are not covariant, which, in turn, is because Java's collections are mutable.
It is not possible for Java's type system to codify that a type which appears to have mutators is actually immutable. Imagine if we were to start designing some solution:
interface Immutable //marker for immutability
interface ImmutableMap<K, V> extends Map<K, V>, Immutable
But then ImmutableMap is a subtype of Map, and hence Map is assignable from ImmutableMap, so any method which returns such an immutable map:
public ImmutableMap<K, V> foo();
can be assigned to a plain Map reference and therefore be mutated, with the compiler's blessing:
Map<K, V> m = foo();
m.put(k, v); //oh dear
So, you can see that the addition of this type has not actually prevented us from doing anything bad. I think for this reason a judgement was made that it did not have enough to offer.
A language like Scala has declaration-site variance annotations. That is, you could specify a type as covariant (and hence immutable), as Scala's Map is (actually, it's covariant in its V parameter). Hence your API can declare whether its return type is mutable or immutable.
As another aside, Scala lets you declare intersection types so that you don't even need to create the ImmutableXYZ interface as a separate entity, you could specify a method to return:
def foo : XYZ with Immutable
But then Scala has a proper type system, whereas Java does not.
I think both advantages are there but are not that useful. The main problems remain the same: an UnmodifiableList still is a List, so all the mutator methods are available, and the underlying collection is still modifiable. Making the class UnmodifiableList public would add to the illusion of unmodifiability.
The nicer way would be for the compiler to help, but for that the collection class hierarchies would have to changed a lot. E.g., the collection API of Scala is way more advanced in that respect.
A disadvantage would be the introduction of at least three additional classes / interfaces into the API. Because of them not being that useful, I think leaving them out of the API is a good choice.
If it is important for you to check whether a list was created with Collections.unmodifiableList, then you can create an instance and ask for its class. Now you can compare this class with the class of any list.
private static final Class<?> UNMODIFIABLE_LIST_CLASS =
    Collections.unmodifiableList(new ArrayList<Object>()).getClass();
...
// note: this only matches wrappers of RandomAccess lists such as ArrayList
if (UNMODIFIABLE_LIST_CLASS == listToTest.getClass()) {
    ...
}
The answer to the "why" is quite simple: at the time, back in 1998, API design practice was still a bit shaky. People thought about it, but it apparently wasn't a priority; there was no true, deep thinking about it.
If you want to use such a mechanism, use Guava's ImmutableList/Set/Map/...
They are explicitly immutable, and good practice when using that library is not to return, for instance, a List but an ImmutableList. That way you know that a List/Set/Map/... is immutable.
Example:
private final ImmutableList<String> constants = ...;

public final ImmutableList<String> getConstants() {
    return constants;
}
About the design itself of UnmodifiableXxx, one could have done the following:
public static final class UnmodifiableXxx implements Xxx { // don't allow extension

    // static, if nested inside Collections
    UnmodifiableXxx(Xxx backend) { // don't allow direct instantiation
        ...
    }
    ...
}
Suppose UnmodifiableList was a public class. I suspect that it would lull programmers into a false sense of security. Remember, UnmodifiableList is a view of a modifiable List. This means that the contents of an UnmodifiableList can still change via any changes made to its underlying List. A naive programmer may not understand this nuance and may expect instances of UnmodifiableList to be immutable.
ability to indicate a more specific return value (such as UnmodifiableList), so an API user wouldn't come to the idea of modifying that collection;
In a proper API, this should already be documented in the javadoc of the method returning the unmodifiable list.
ability to check during runtime if a List is instanceof UnmodifiableList.
Such a need indicates that the actual problem lies somewhere else. It's a flaw in the code design. Ask yourself: have you ever had the need to check if a List is an instance of ArrayList or LinkedList? Whether it's an ArrayList, LinkedList or UnmodifiableList is clearly a decision to be made at code-writing time, not at run time. If you're encountering problems because you're attempting to modify an UnmodifiableList (for which the API developer may have very good reasons, which should already be documented), then it's rather your own fault, not a runtime fault.
All in all, it makes no sense. The Collections#unmodifiableXXX(), synchronizedXXX() and checkedXXX() methods do not in any way represent concrete implementations. They are all just decorators which can be applied regardless of the underlying concrete implementation.
I think the answer is that the method form properly knows about the generics used and requires no extra programming to pass this information through, whilst the class form would require more messing about. The method form of unmodifiableMap has two free generic parameters, which it maps to both the generic arguments of the return type and those of the passed argument.
public static <K,V> Map<K,V> unmodifiableMap(Map<? extends K, ? extends V> m) {
    return new UnmodifiableMap<K,V>(m);
}
Why wasn't the java.lang.Object class declared to be abstract ?
Surely for an Object to be useful it needs added state or behaviour; an Object class is an abstraction, and as such it should have been declared abstract... why did they choose not to?
An Object is useful even if it does not have any state or behaviour specific to it.
One example would be its use as a generic guard that's used for synchronization:
public class Example {

    private final Object o = new Object();

    public void doSomething() {
        synchronized (o) {
            // do possibly dangerous stuff
        }
    }
}
While this class is a bit simple in its implementation (it isn't evident here why it's useful to have an explicit lock object; you could just declare the method synchronized), there are several cases where this is really useful.
Ande, I think you are approaching this -- pun NOT intended -- with an unnecessary degree of abstraction. I think this (IMHO) unnecessary level of abstraction is what is causing the "problem" here. You are perhaps approaching this from a mathematical theoretical approach, where many of us are approaching this from a "programmer trying to solve problems" approach. I believe this difference in approach is causing the disagreements.
When programmers look at practicalities and how to actually implement something, there are a number of times when you need some totally arbitrary Object whose actual instance is totally irrelevant. It just cannot be null. The example I gave in a comment to another post is the implementation of *Set (* == Hash or Concurrent or type of choice), which is commonly done by using a backing *Map and using the Map keys as the Set. You often cannot use null as the Map value, so what is commonly done is to use a static Object instance as the value, which will be ignored and never used. However, some non-null placeholder is needed.
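A sketch of that idiom, which is essentially how java.util.HashSet is implemented (E is the element type of the enclosing hypothetical set class):

private static final Object PRESENT = new Object(); // arbitrary non-null marker

private final HashMap<E, Object> map = new HashMap<E, Object>();

public boolean add(E e) {
    return map.put(e, PRESENT) == null; // null return means e was newly added
}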
Another common use is with the synchronized keyword where some Object is needed to synchronize on, and you want to ensure that your synchronizing item is totally private to avoid deadlock where different classes are unintentionally synchronizing on the same lock. A very common idiom is to allocate a private final Object to use in a class as the lock. To be fair, as of Java 5 and java.util.concurrent.locks.Lock and related additions, this idiom is measurably less applicable.
Historically, it has been quite useful in Java to have Object be instantiable. You could make a good point that with small changes in design or with small API changes, this would no longer be necessary. You're probably correct in this.
And yes, the API could have provided a Placeholder class that extends Object without adding anything at all, to be used as a placeholder for the purposes described above. But -- if you're extending Object but adding nothing, what is the value in the class other than allowing Object to be abstract? Mathematically, theoretically, perhaps one could find a value, but pragmatically, what value would it add to do this?
There are times in programming where you need an object, some object, any concrete object that is not null, something that you can compare via == and/or .equals(), but you just don't need any other feature to this object. It exists only to serve as a unique identifier and otherwise does absolutely nothing. Object satisfies this role perfectly and (IMHO) very cleanly.
I would guess that this is part of the reason why Object was not declared abstract: It is directly useful for it not to be.
Does Object specify methods that classes extending it must implement in order to be useful? No, and therefore it needn't be abstract.
The concept of a class being abstract has a well defined meaning that does not apply to Object.
You can instantiate Object for synchronization locks:
Object lock = new Object();

void someMethod() {
    // safe stuff
    synchronized (lock) {
        // some code avoiding a race condition
    }
}

void someOtherMethod() {
    // safe code
    synchronized (lock) {
        // some other stuff avoiding a race condition
    }
}
I am not sure this is the reason, but it allows (or allowed, as there are now better ways of doing it) for an Object to be used as a lock:
Object lock = new Object();
....
synchronized (lock) {
    // critical section
}
How is Object any more offensive than null?
It makes a good place marker (as good as null anyway).
Also, I don't think it would be good design to make an object abstract without an abstract method that needs to go on it.
I'm not saying null is the best thing since sliced bread; I read an article the other day by the "inventor" discussing the cost/value of having the concept of null... (I didn't even think null was inventable! I guess someone somewhere could claim he invented zero.) I'm just saying that being able to instantiate Object is no worse than being able to pass null.
You never know when you might want to use a simple Object as a placeholder. Think of it as like having a zero in a numerical system (and null doesn't work for this, since null represents the absence of data).
There should be a reason to make a class abstract. One is to prevent clients from instantiating the class and force them into using only subclasses (for whatever reasons). Another is if you wish to use it as an interface by providing abstract methods, which subclasses must implement. Probably, the designers of Java saw no such reasons, so java.lang.Object remains concrete.
As always, Guava comes to the rescue, with Optional: http://docs.guava-libraries.googlecode.com/git/javadoc/com/google/common/base/Optional.html
It can be used to banish both nulls and bare Object instances serving as "not-null placeholders" from the code.
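A minimal sketch (findCustomer() and process() are hypothetical):

import com.google.common.base.Optional;

Optional<Customer> customer = Optional.fromNullable(findCustomer(id));
if (customer.isPresent()) {
    process(customer.get());
}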
There are two entirely separate questions here:
why didn't they make Object abstract?
how much havoc would ensue if they decided to make it abstract in a future release?
I'll just throw in another reason that I've found Object useful to instantiate on its own. I have a pool of objects I've created that has a number of slots. Those slots can contain any of a number of objects, all of which inherit from an abstract class. But what do I put in the pool to represent "empty"? I could use null, but for my purposes, it made more sense to ensure that there was always some object in each slot. I can't instantiate the abstract class to put in there, and I wouldn't have wanted to. So I could have created a concrete subclass of my abstract class to represent "not a useful foo", but that seemed unnecessary when using an instance of Object was just as good... in fact better, as it clearly says that what's in the slot has no functionality. So when I initialize my pool, I do so by creating an Object to assign to each slot as the initial condition of the pool.
I agree that it might have made sense for the original Java crew to have defined a Placeholder object as a concrete subclass of Object, and then made Object abstract, but it doesn't rub me the wrong way at all that they went the way they did. I would then have used Placeholder in place of Object.