I have an application that uses a data structure Point. Let's say that overall there are 50 distinct instances of Point (meaning p1.equals(p2) == false for any two of them). However, during calculation loads of new instances are created that are actually equal to already instantiated objects.
As these instances are stored, this has a heavy impact on memory consumption:
50 distinct Points end up being represented by 500,000 instances of Point. There is nothing in the data structure that would prevent the reuse of already present instances. For that reason I created a cache:
HashMap<Point, Point> pointCache = new HashMap<>();
So I can check if the point is present and add it if it is not. This kind of cache however seems like a bit of overkill, as the key and the value are essentially the same.
Furthermore I already have a map present:
HashMap<Point, Boolean> flag = new HashMap<>();
What I am curious about is: is there a map-like data structure that I could use for flag that would allow the retrieval of the key? If not, is there any other data structure that I could use for the cache that would be more like a set and would allow easy checking and retrieval?
EDIT: For completeness, the Point class I am using is javafx.geometry.Point2D and therefore nothing that I can change.
Let's assume, for the sake of this answer, that the uniqueness of a Point is determined by two int coordinates, x and y (you can change that easily to fit the actual parameters that determine your Point's uniqueness).
You don't want to create a Point instance in order to determine if that Point already exists in some HashSet or HashMap. That defeats the purpose of avoiding creation of multiple instances (though using a HashMap or HashSet would prevent you from keeping all those duplicate instances, and the GC will release them soon, so it may be enough to solve the memory consumption issue).
I'm suggesting that you have a static Point getPoint(int x,int y) method in your Point class. That method would check inside a static internal HashMap<Integer,HashMap<Integer,Point>> whether those x,y coordinates already have a corresponding Point instance and return that instance. If an instance doesn't exist, it will be created and added to the HashMap.
This is similar to what Integer.valueOf(int) does for small integers - it returns a cached Integer instance instead of creating a new one.
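A minimal sketch of that idea; since javafx.geometry.Point2D (from the question's edit) can't be modified, the method lives in a made-up utility class, and int coordinates are assumed as in this answer:

import java.util.HashMap;
import java.util.Map;

import javafx.geometry.Point2D;

// Hypothetical utility: interns Point2D instances by (x, y) so the cache can be
// probed without constructing a throwaway Point2D first.
final class PointCache {
    private static final Map<Integer, Map<Integer, Point2D>> CACHE = new HashMap<>();

    static Point2D getPoint(int x, int y) {
        return CACHE.computeIfAbsent(x, k -> new HashMap<>())
                    .computeIfAbsent(y, k -> new Point2D(x, y));
    }
}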
Your map is entirely reasonable. You could create your own wrapper class if you wanted to, but I'd probably stick with the map for the moment. If Set<E> exposed an operation of "get the existing entry which is equal to this one" then you could use that, but a) it doesn't and b) HashSet is built on HashMap anyway.
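If you do stick with the map, a minimal sketch of the lookup-or-insert idiom on that Map<Point2D, Point2D> cache (the holder class is made up; a probe instance is still created, but the duplicates become garbage immediately):

import java.util.HashMap;
import java.util.Map;

import javafx.geometry.Point2D;

final class CanonicalPoints {
    private final Map<Point2D, Point2D> pointCache = new HashMap<>();

    // Returns the canonical instance for the given coordinates,
    // adding the probe to the cache only if no equal point is present yet.
    Point2D canonical(double x, double y) {
        Point2D candidate = new Point2D(x, y);
        return pointCache.computeIfAbsent(candidate, p -> p);
    }
}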
Better, you could use a HashSet instead of a HashMap, which would save you from storing a separate boolean against each point. A Set does use a HashMap internally, but in place of your values it stores a single shared dummy object reference, which is better than storing some 50 boolean values that carry no meaning and are useless in your case.
You could do a lookup like:
if (set.contains(point)) {
...
}
Imagine the following problem:
// Class PhoneNumber implements hashCode() and equals()
PhoneNumber obj = new PhoneNumber("mgm", "089/358680");
System.out.println("Hashcode: " + obj.hashCode());     // prints "1476725853"

// Add PhoneNumber object to HashSet
Set<PhoneNumber> set = new HashSet<>();
set.add(obj);

// Modify object after it has been inserted
obj.setNumber("089/358680-0");

// Modification causes a different hash value
System.out.println("New hashcode: " + obj.hashCode()); // prints "7130851"

// ... Later or in another class, code such as the following
// is operating on the Set:

// Unexpected Result!
// Output: obj is set member: FALSE
System.out.println("obj is set member: " + set.contains(obj));
If I've got a class and I want all my fields to be editable and still be able to use a set / hashCode. Would it be a good idea to create an artificial uneditable field in the class that is set at creation of the object? For example the current time in ms. When I've got that field, I can base the hashcode upon it and I would still be able to edit all the "real" fields. Would this be a good idea?
I strongly believe you are presenting a bad use case: if you need to modify an object in a Set, you should definitely remove the old one and re-add the new one (or use another java.util.Collection). Taking from your example:
Set<PhoneNumber> set = new HashSet<>();
set.add(obj);
// Modify object after it has been inserted
set.remove(obj);
obj.setNumber("089/358680-0");
set.add(obj);
The whole purpose of hashCode is to create buckets of similar objects in order to reduce the search space, so it should be immutable yet still useful to you. If you use an artificial field, how do you find the object in your set later on? How do you retrieve the artificial field in the first place, given that you are not using persistent storage of any kind? (The id in a database is an exception to the rule against artificial fields, IMHO.)
To explain the meaning of "The whole purpose of hashCode is to create a bucket of similar objects to reduce the search space", have a look at this sample code: http://ideone.com/MJ2MQT. I (deliberately, and wrongly from a design standpoint) created two objects with the same hash code, then added both to a set; as expected, the set contains both of them, because the hash code is only used to locate the elements which collide, and the equals method is then called to resolve the collision. Collisions (that is, different objects which return the same hash code) are unavoidable, and the goal of a properly designed hash code function is to reduce them as much as possible.
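In case the linked snippet is unavailable, here is a minimal sketch of the same experiment; the BadKey class is made up for the illustration:

import java.util.HashSet;
import java.util.Set;

final class CollisionDemo {
    // Made-up key class whose hashCode deliberately collides for all instances.
    static final class BadKey {
        private final String name;
        BadKey(String name) { this.name = name; }
        @Override public int hashCode() { return 42; }   // every instance collides
        @Override public boolean equals(Object o) {
            return o instanceof BadKey && ((BadKey) o).name.equals(name);
        }
    }

    public static void main(String[] args) {
        Set<BadKey> set = new HashSet<>();
        set.add(new BadKey("a"));
        set.add(new BadKey("b"));
        // Same bucket, but equals() keeps them apart, so the set holds both.
        System.out.println(set.size()); // 2
    }
}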
Storing mutable objects in a hash set, or using them as keys in a hash map, is definitely not a good idea, precisely for the reason that you illustrate in your code.
On the other hand, defining an artificial number that serves as an ID of an object defeats the purpose of having a hash code in the first place, because it does not help you find an object that is equal to a given object by limiting the search to objects with identical hash codes.
In fact, your solution is no different from constructing a Map<Integer,PhoneNumber> from an "artificial hash code" to your mutable PhoneNumber object. If finding objects by association is what you need, a HashMap from an artificial ID to the mutable object is the way to go.
It usually makes sense to have a unique identifier for your data objects, especially if you are persisting them in some database. It will allow you to have an easy implementation of equals and hashCode, which will only depend on this single identifier.
I'm not sure the current time in ms is the best choice (two objects created within the same millisecond would collide), but you should definitely generate some unique ID.
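A minimal sketch of that approach, using a hypothetical variant of the PhoneNumber class with a generated UUID as its identity:

import java.util.UUID;

// Hypothetical variant of the PhoneNumber above: identity is fixed at construction,
// every other field stays freely editable without disturbing the hash code.
final class PhoneNumberWithId {
    private final UUID id = UUID.randomUUID();
    private String number;                       // mutable "real" field

    public void setNumber(String number) { this.number = number; } // safe: hash unchanged

    @Override public int hashCode() { return id.hashCode(); }
    @Override public boolean equals(Object o) {
        return o instanceof PhoneNumberWithId && ((PhoneNumberWithId) o).id.equals(id);
    }
}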
I understand that only one instance of any object (according to .equals()) is allowed in a Set, and that you shouldn't "need to" get an object from the Set if you already have an equivalent object, but I would still like to have a .get() method that returns the actual instance of the object in the Set (or null), given an equivalent object as a parameter.
Any ideas/theories as to why it was designed like this?
I usually have to hack around this by using a Map and making the key and the value same, or something like that.
EDIT: I don't think people understand my question so far. I want the exact object instance that is already in the set, not a possibly different object instance where .equals() returns true.
As to why I would want this behavior, typically .equals() does not take into account all the properties of the object. I want to provide some dummy lookup object and get back the actual object instance in the Set.
While the purity argument does make the method get(Object) suspect, the underlying intent is not moot.
There are various class and interface families that slightly redefine equals(Object). One need look no further than the collections interfaces. For example, an ArrayList and a LinkedList can be equal; their respective contents merely need to be the same and in the same order.
Consequently, there are very good reasons for finding the matching element in a set. Perhaps a clearer way of indicating intent is to have a method like
public interface Collection<E> extends ... {
...
public E findMatch(Object o) throws UnsupportedOperationException;
...
}
Note that this API has value broader than just within Set.
As to the question itself, I don't have any theory as to why such an operation was omitted. I will say that the minimal spanning set argument does not hold, because many operations defined in the collections APIs are motivated by convenience and efficiency.
The problem is: a Set is not for "getting" objects, it is for adding elements and testing for their presence.
I understand what you are looking for; I had a similar situation and ended up using a map with the same object as key and value.
EDIT: Just to clarify: http://en.wikipedia.org/wiki/Set_(abstract_data_type)
I asked the same question on a Java forum years ago. They told me that the Set interface is fixed; it cannot be changed because that would break the current implementations of the Set interface. Then they started to claim bullshit, like you see here: "Set does not need the get method", and started to drill me that a Map must always be used to get elements from a set.
If you use the set only for mathematical operations, like intersection or union, then maybe contains() is sufficient. However, Set is defined in the collections framework to store data. I explained the need for get() in Set using the relational data model.
In what follows, an SQL table is like a class. The columns define attributes (known as fields in Java) and records represent instances of the class, so an object is a vector of fields. Some of the fields are primary keys: they define the uniqueness of the object. This is what you do for contains() in Java:
class Element {
    public int hashCode() { return sumOfKeyFields(); }
    public boolean equals(Object o) { return o instanceof Element
        && keyField1.equals(((Element) o).keyField1) && keyField2.equals(((Element) o).keyField2); }
}
I'm not aware of DB internals, but you specify key fields only once, when you define a table; you just annotate the key fields as primary. You do not specify the keys a second time when you add a record to the table, and you do not separate the keys from the data, as you do with a map. SQL tables are sets; they are not maps. Yet they provide get() in addition to maintaining uniqueness and the contains() check.
In "Art of Computer Programming", introducing the search, D. Knuth says the same:
Most of this chapter is devoted to the study of a very simple search problem: how to find the data that has been stored with a given identification.
You see, the data is stored with its identification: not identification pointing to data, but data with identification. He continues:
For example, in a numerical application we might want to find f(x), given x and a table of the values of f; in a nonnumerical application, we might want to find the English translation of a given Russian word.
It looks like he starts to speak about mapping. However,
In general, we shall suppose that a set of N records has been stored, and the problem is to locate the appropriate one. We generally require the N keys to be distinct, so that each key uniquely identifies its record. The collection of all records is called a table or file, where the word "table" is usually used to indicate a small file, and "file" is usually used to indicate a large table. A large file or a group of files is frequently called a database.
Algorithms for searching are presented with a so-called argument, K, and the problem is to find which record has K as its key. Although the goal of searching is to find the information stored in the record associated with K, the algorithms in this chapter generally ignore everything but the keys themselves. In practice we can find the associated data once we have located K; for example, if K appears in location TABLE + i, the associated data (or a pointer to it) might be in location TABLE + i + 1.
That is, the search locates the key field of the record; it does not "map" the key to the data. Both are located in the same record, as fields of a Java object. That is, the search algorithm examines the key fields of the record, as it would in a set, rather than some external key, as it would in a map.
We are given N items to be sorted; we shall call them records, and the entire collection of N records will be called a file. Each record Rj has a key Kj, which governs the sorting process. Additional data, besides the key, is usually also present; this extra "satellite information" has no effect on sorting except that it must be carried along as part of each record.
Likewise, I see no need to duplicate the keys in an extra "key set" in his discussion of sorting.
... ["The Art of Computer Programming", Chapter 6, Introduction]
An entity set is a collection or set of all entities of a particular entity type.
[http://wiki.answers.com/Q/What_is_entity_and_entity_set_in_dbms]
The objects of a single class share their class attributes; similarly, records in a DB share column attributes.
A special case of a collection is a class extent, which is the collection of all objects belonging to the class. Class extents allow classes to be treated like relations
... ["Database System Concepts", 6th Edition]
Basically, a class describes the attributes common to all its instances. A table in a relational DB does the same. "The easiest mapping you will ever have is a property mapping of a single attribute to a single column." This is the case I'm talking about.
I'm so verbose in proving the analogy (isomorphism) between objects and DB records because there are stupid people who do not accept it (in order to prove that their Set must not have the get method).
You can see in the replies how people who do not understand this claim that a Set with get would be redundant. It is because their abused map, which they impose in place of a set, introduces the redundancy. Their call to put(obj.getKey(), obj) stores two keys: the original key as part of the object, and a copy of it in the key set of the map. That duplication is the redundancy. It also means more bloat in the code and wasted memory at runtime. I do not know about DB internals, but principles of good design and database normalization say that such duplication is a bad idea: there must be only one source of truth. Redundancy means that inconsistency may happen: the key may map to an object that has a different key. Inconsistency is a manifestation of redundancy. Edgar F. Codd proposed DB normalization precisely to get rid of redundancies and the inconsistencies they entail. The teachers are explicit on normalization: normalization will never generate two tables with a one-to-one relationship between them; there is no theoretical reason to split a single entity like this, with some fields in a single record of one table and others in a single record of another table.
So, we have four arguments for why using a map to implement get in a set is a bad idea:
the map is unnecessary when we have a set of unique objects
a map introduces redundancy in runtime storage (see the sketch after this list)
a map introduces code bloat in the DB (that is, in the Collections code)
using a map contradicts data storage normalization
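A minimal sketch of the redundancy argument, with a made-up User record as the stored object:

import java.util.HashMap;
import java.util.Map;

final class RedundancyDemo {
    // Hypothetical stored object: its key (the id) is already a field of the object.
    record User(String id, String name) {}

    public static void main(String[] args) {
        Map<String, User> byId = new HashMap<>();
        User u = new User("u-42", "Alice");
        byId.put(u.id(), u);        // "u-42" is now stored twice: in the key set and in the object
        byId.put("stale-key", u);   // and nothing keeps the two copies consistent
        System.out.println(byId.get("stale-key").id()); // prints "u-42", not "stale-key"
    }
}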
Even if you are not aware of the record-set idea and of data normalization, just by playing with collections you may discover this data structure and algorithm yourself, as we did, and as the designers of org.eclipse.KeyedHashSet and of the C++ STL did.
I was banned from the Sun forum for pointing out these ideas. Bigotry is the only argument against reason, and this world is dominated by bigots. They do not want to see the concepts and how things could be different/improved. They see only the actual world and cannot imagine that the design of the Java Collections may have deficiencies and could be improved. It is dangerous to point out rational things to such people. They teach you their blindness and punish you if you do not obey.
Added Dec 2013: SICP also says that a DB is a set of keyed records rather than a map:
A typical data-management system spends a large amount of time accessing or modifying the data in the records and therefore requires an efficient method for accessing records. This is done by identifying a part of each record to serve as an identifying key. Now we represent the data base as a set of records.
Well, if you've already "got" the thing from the set, you don't need to get() it, do you? ;-)
Your approach of using a Map is The Right Thing, I think. It sounds like you're trying to "canonicalize" objects via their equals() method, which I've always accomplished using a Map as you suggest.
I'm not sure if you're looking for an explanation of why Sets behave this way, or for a simple solution to the problem it poses. Other answers dealt with the former, so here's a suggestion for the latter.
You can iterate over the Set's elements and test each one of them for equality using the equals() method. It's easy to implement and hardly error-prone. Obviously if you're not sure if the element is in the set or not, check with the contains() method beforehand.
This isn't efficient compared to, for example, HashSet's contains() method, which does "find" the stored element, but won't return it. If your sets may contain many elements it might even be a reason to use a "heavier" workaround like the map implementation you mentioned. However, if it's that important for you (and I do see the benefit of having this ability), it's probably worth it.
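For example, the linear scan can also be written with streams (Java 8+); the holder class and method name here are made up:

import java.util.Optional;
import java.util.Set;

final class SetLookup {
    // Returns the stored element equal to target, if any; still a linear scan.
    static <E> Optional<E> find(Set<E> set, E target) {
        return set.stream().filter(target::equals).findFirst();
    }
}

It is still O(n), exactly like the explicit loop.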
So I understand that you may have two equal objects but they are not the same instance.
Such as
Integer a = new Integer(3);
Integer b = new Integer(3);
In which case a.equals(b) because they refer to the same intrinsic value but a != b because they are two different objects.
There are other Set implementations which do a different comparison between items, such as an identity-based set (the JDK has no IdentitySet class, but you can get one via Collections.newSetFromMap(new IdentityHashMap<>())), which compares elements by reference.
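A minimal sketch of such an identity-based set, reusing the Integer example above:

import java.util.Collections;
import java.util.IdentityHashMap;
import java.util.Set;

final class IdentitySetDemo {
    public static void main(String[] args) {
        // Membership is decided by ==, not equals().
        Set<Integer> identitySet = Collections.newSetFromMap(new IdentityHashMap<>());
        Integer a = new Integer(3);
        Integer b = new Integer(3);
        identitySet.add(a);
        identitySet.add(b);
        System.out.println(identitySet.size());      // 2: equal values, different instances
        System.out.println(identitySet.contains(a)); // true: same reference
    }
}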
However, I think that you are trying to apply a different philosophy to Java. If your objects are equal (a.equals(b)) although a and b have different state or meaning, something is wrong there. You may want to split that class into two or more semantic classes which implement a common interface, or maybe reconsider .equals and .hashCode.
If you have Joshua Bloch's Effective Java, have a look at the chapters called "Obey the general contract when overriding equals" and "Minimize mutability".
Just use the Map solution... a TreeSet and a HashSet would do the same work anyway, since they are backed by a TreeMap and a HashMap respectively, so there is no penalty in doing so (actually it should be a minimal gain).
You may also extend your favorite Set to add the get() method.
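For instance, a minimal sketch of such an extension: a wrapper named GetterSet (a made-up name) built on AbstractSet and the same map-of-self idiom.

import java.util.AbstractSet;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

// Made-up "set with get()": a thin wrapper over a map whose keys and values are the same objects.
final class GetterSet<E> extends AbstractSet<E> {
    private final Map<E, E> elements = new HashMap<>();

    @Override public boolean add(E e) { return elements.putIfAbsent(e, e) == null; }
    @Override public boolean contains(Object o) { return elements.containsKey(o); }
    @Override public Iterator<E> iterator() { return elements.keySet().iterator(); }
    @Override public int size() { return elements.size(); }

    /** Returns the instance stored in the set that equals the probe, or null if absent. */
    public E get(E probe) { return elements.get(probe); }
}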
I think your only solution, given some Set implementation, is to iterate over its elements to find one that is equals() -- then you have the actual object in the Set that matched.
K target = ...;
Set<K> set = ...;

for (K element : set) {
    if (target.equals(element)) {
        return element;
    }
}
If you think about it as a mathematical set, you can derive a way to find the object.
Intersect the set with a collection of object containing only the object you want to find. If the intersection is not empty, the only item left in the set is the one you were looking for.
public <T> T findInSet(T findMe, Set<T> inHere) {
    // Note: retainAll mutates the given set, leaving at most the sought element in it.
    inHere.retainAll(Arrays.asList(findMe));
    if (!inHere.isEmpty()) {
        return inHere.iterator().next();
    }
    return null;
}
It's not the most efficient use of memory (and it removes everything else from the set you pass in), but it's functionally and mathematically correct.
"I want the exact object instance that is already in the set, not a possibly different object instance where .equals() returns true."
This doesn't make sense. Say you do:
Set<Foo> s = new HashSet<Foo>();
s.add(new Foo(...));
...
Foo newFoo = ...;
You now do:
s.contains(newFoo)
If you want that to only be true if an object in the set is == newFoo, implement Foo's equals and hashCode with object identity. Or, if you're trying to map multiple equal objects to a canonical original, then a Map may be the right choice.
I think the expectation is that equals truly represents some equality, not simply that the two objects have the same primary key, for example. And if equals represents two genuinely equal objects, then a get would be redundant. The use case you want suggests a Map, and perhaps a different value for the key, something that represents a primary key rather than the whole object, with equals and hashCode then implemented accordingly.
Functional Java has an implementation of a persistent Set (backed by a red/black tree) that incidentally includes a split method that seems to do kind of what you want. It returns a triplet of:
The set of all elements that appear before the found object.
An object of type Option that is either empty or contains the found object if it exists in the set.
The set of all elements that appear after the found object.
You would do something like this:
MyElementType found = hayStack.split(needle)._2().orSome(hay);
Object fromSet = set.tailSet(obj).first();
if (!obj.equals(fromSet)) fromSet = null;
does what you are looking for (assuming set is a TreeSet or another SortedSet; note that first() throws NoSuchElementException if the tail set is empty). I don't know why Java hides it.
Say, I have a User POJO with ID and name.
ID keeps the contract between equals and hashcode.
name is not part of object equality.
I want to update the name of the user based on the input from somewhere say, UI.
As the Java Set doesn't provide a get method, I need to iterate over the set in my code and update the name when I find the equal object (i.e., when the ID matches).
If you had get method, this code could have been shortened.
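A minimal sketch of that iterate-and-update workaround, with a made-up stand-in for the User POJO:

import java.util.Set;

final class UserRenamer {
    // Minimal stand-in for the User POJO: equals()/hashCode() use only the id.
    static final class User {
        final String id;
        String name;
        User(String id, String name) { this.id = id; this.name = name; }
        @Override public boolean equals(Object o) { return o instanceof User && ((User) o).id.equals(id); }
        @Override public int hashCode() { return id.hashCode(); }
    }

    static void rename(Set<User> users, User lookup, String newName) {
        for (User u : users) {
            if (u.equals(lookup)) {   // true when the IDs match
                u.name = newName;     // safe: name is not part of hashCode/equals
                return;
            }
        }
    }
}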
Java now comes with all kinds of stupid things like JavaDB and the enhanced for loop; I don't understand why they are being purists in this particular case.
I had the same problem. I fixed it by converting my set to a Map, and then getting them from the map. I used this method:
public Map<MyObject, MyObject> convertSetToMap(Set<MyObject> set)
{
    Map<MyObject, MyObject> myObjectMap = new HashMap<MyObject, MyObject>();
    for (MyObject myObject : set) {
        myObjectMap.put(myObject, myObject);
    }
    return myObjectMap;
}
Now you can get items from your set by calling this method like this:
convertSetToMap(myset).get(myobject);
You can override equals in your class to make it check only certain properties, such as id or name.
If you have made a request for this in the Java bug parade, list it here and we can vote it up. I think at the least a convenience method in java.util.Collections that just takes a set and an object would do,
and it could be implemented something like:
Object searchSet(Set ss, Object searchFor) {
    Iterator it = ss.iterator();
    while (it.hasNext()) {
        Object s = it.next();
        if (s != null && s.equals(searchFor)) {
            return s;
        }
    }
    return null;
}
This is obviously a shortcoming of the Set API.
Simply, I want to lookup an object in my Set and update its property.
And I HAVE TO loop through my (Hash)Set to get to my object... Sigh...
I agree that I'd like to see Set implementations provide a get() method.
As one option, in the case where your Objects implement (or can implement) java.lang.Comparable, you can use a TreeSet. Then the get() type function can be obtained by calling ceiling() or floor(), followed by a check for the result being non-null and equal to the comparison Object, such as:
TreeSet<MyObject> myTreeSet = new TreeSet<>();
// ...

// Equivalent of a get() and a null-check, except that in the not-equal case
// returnedMyObject holds the next-higher element rather than null.
MyObject returnedMyObject = myTreeSet.ceiling(comparisonMyObject);
if ((null != returnedMyObject) && returnedMyObject.equals(comparisonMyObject)) {
    // ...
}
The reason why there is no get is simple:
If you need to get the object X from the set, it is because you need something from X and you don't have the object.
If you do not have the object, then you need some means (a key) to locate it: its name, a number, whatever. That's what maps are for, right?
map.get( "key" ) -> X!
Sets do not have keys; you need to traverse them to get at the objects.
So, why not add a handy get( X ) -> X?
That makes no sense, right? Because you already have X, the purists will say.
But now look at it as a non-purist, and see if you really want this:
Say I make an object Y which matches the equals of X, so that set.get(Y) -> X. Voila, then I can access the data of X that I didn't have. Say, for example, X has a method called getFlag() and I want the result of that.
Now look at this code.
X = set.get(Y);              // hypothetical get()
Y.equals(X);                 // true!
// but..
Y.getFlag() == X.getFlag();  // false. (Weren't they "equal"?)
So, you see, if Set allowed you to get objects like that, it would surely break the basic semantics of equals. Later you are going to live with little clones of X, all claiming that they are the same when they are not.
You need a map, to store stuff and use a key to retrieve it.
I understand that only one instance of any object according to .equals() is allowed in a Set and that you shouldn't "need to" get an object from the Set if you already have an equivalent object, but I would still like to have a .get() method that returns the actual instance of the object in the Set (or null) given an equivalent object as a parameter.
The simple interface/API gives more freedom during implementation. For example, if the Set interface were reduced to a single contains() method, we would get a set definition typical of functional programming: it is just a predicate, and no objects are actually stored. Something similar is true of java.util.EnumSet, which holds only a bit vector with one bit per possible value.
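To illustrate, a set reduced to its contains() operation really is just a predicate; nothing is stored, so there is no instance a get() could hand back:

import java.util.function.IntPredicate;

final class PredicateSetDemo {
    public static void main(String[] args) {
        // A "set" reduced to contains(): the even numbers.
        IntPredicate evens = n -> n % 2 == 0;
        System.out.println(evens.test(12)); // true
        System.out.println(evens.test(7));  // false
    }
}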
It's just an opinion. I believe we need to recognize that we have several Java classes without fields/properties, i.e., only methods. In that case equality cannot be determined by comparing those functions; one such example is a set of request handlers. See the example below of a JAX-RS application. In this context a Set makes more sense than any other data structure.
@ApplicationPath("/")
public class GlobalEventCollectorApplication extends Application {
    @Override
    public Set<Class<?>> getClasses() {
        Set<Class<?>> classes = new HashSet<Class<?>>();
        classes.add(EventReceiverService.class);
        classes.add(VirtualNetworkEventSerializer.class);
        return classes;
    }
}
To answer your question: if you have a shallow Employee object (i.e., only an EMPID, which is what the equals method uses to determine uniqueness), and you want to get the deep object by doing a lookup in a set, a Set is not the right data structure, as its purpose is different.
A List is an ordered data structure, so it follows the insertion order: the data you put in will be available at the exact position at which you inserted it.
List<Integer> list = new ArrayList<>();
list.add(1);
list.add(2);
list.add(3);
list.get(0); // will return value 1
Think of it as a simple array.
A Set is an unordered data structure, so it follows no order: an element you insert has no fixed position.
Set<Integer> set = new HashSet<>();
set.add(1);
set.add(2);
set.add(3);
//assume it has get method
set.get(0); // what are you expecting this to return. 1?..
But it might return something else. Hence it does not make any sense to have a positional get method on a Set.
Note: for the explanation I used int values; the same applies to Object types as well.
I think you've answered your own question: it is redundant.
Set provides Set#contains(Object o), which performs the equivalent membership test to your desired Set#get(Object o) and returns a boolean, as would be expected.
Hi,
I want to create a HashMap (in Java) that stores Expression, a little object I've created.
How do I choose what type of key to use? What's the difference for me between an Integer key and a String key? I guess I just don't fully understand the idea behind HashMap, so I'm not sure what keys to use.
Thanks!
Java HashMap relies on two things:
the hashCode() method, which returns an integer generated from the key and used inside the map
the equals(..) method, which must be consistent with that hash: keys that are equal must have the same hash code, and ideally unequal keys should have different hash codes
The specific requirements, taken from Java API doc are the following:
Whenever it is invoked on the same object more than once during an execution of a Java application, the hashCode method must consistently return the same integer, provided no information used in equals comparisons on the object is modified. This integer need not remain consistent from one execution of an application to another execution of the same application.
If two objects are equal according to the equals(Object) method, then calling the hashCode method on each of the two objects must produce the same integer result.
It is not required that if two objects are unequal according to the equals(java.lang.Object) method, then calling the hashCode method on each of the two objects must produce distinct integer results. However, the programmer should be aware that producing distinct integer results for unequal objects may improve the performance of hashtables.
If you don't provide any specific implementation, then the default identity hash code (typically derived from the object's memory address) is used. This is usually good in most situations, but if you have, for example:
Expression e1 = new Expression(2,4,PLUS);
Expression e2 = new Expression(2,4,PLUS);
(I don't actually know what you need to place inside your hashmap so I'm just guessing)
Then, since they are two different objects, although with the same parameters, they will have different hash codes. This may or may not be a problem for your specific situation.
If it isn't, just use the HashMap without caring about these details; if it is, you will need to provide a better way to compute the hash code and equality of your Expression class.
You could do it in a recursive way (by computing the hashcode as a result of the hashcodes of children) or in a naive way (maybe computing the hashcode over a toString() representation).
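For illustration, a minimal sketch of what a content-based equals()/hashCode() for Expression could look like; the fields and the Op enum are assumptions, since the real class isn't shown:

import java.util.Objects;

// Made-up shape of the Expression class, with content-based equals()/hashCode(),
// so the e1 and e2 above would compare equal and share a hash code.
final class Expression {
    enum Op { PLUS, MINUS }

    private final int left, right;
    private final Op op;

    Expression(int left, int right, Op op) {
        this.left = left;
        this.right = right;
        this.op = op;
    }

    @Override public boolean equals(Object o) {
        if (!(o instanceof Expression)) return false;
        Expression e = (Expression) o;
        return left == e.left && right == e.right && op == e.op;
    }

    @Override public int hashCode() { return Objects.hash(left, right, op); }
}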
Finally, if you are planning to use just simple types as keys (like you said, integers or strings), don't worry: there's no difference. In both cases two distinct instances representing the same value will have the same hash code. Some examples:
assert(new String("hello").hashCode() == new String("hello").hashCode());
int x = 123;
assert(new Integer(x).hashCode() == new Integer(123).hashCode());
Mind that the example with strings does not hold in general (as I explained before); it works only because the hashCode method of String computes the value from the content of the string itself.
The key is what you use to identify objects. You might have a situation where you want to identify numbers by their name.
Map<String,Integer> numbersByName = new HashMap<String,Integer>();
numbersByName.put("one",Integer.valueOf(1));
numbersByName.put("two",Integer.valueOf(2));
numbersByName.put("three",Integer.valueOf(3));
... etc
Then later you can get them out by doing
Integer three = numbersByName.get("three");
Or you might have a need to go the other way. If you know you're going to have integer values, and want the names, you can map integers to strings
Map<Integer,String> numbersByValue = new HashMap<Integer,String>();
numbersByValue.put(Integer.valueOf(1),"one");
numbersByValue.put(Integer.valueOf(2),"two");
numbersByValue.put(Integer.valueOf(3),"three");
... etc
And get it out
String three = numbersByValue.get(Integer.valueOf(3));
Keys and their associated values are both objects. When you get something from a raw (non-generic) HashMap, you have to cast it to the actual type of object it represents (we can do this because all objects in Java inherit from Object). So, if your keys are Strings and your values are Integers, you would do something like:
Integer myValue = (Integer)myMap.get("myKey");
However, you can use Java generics to tell the compiler that you're only going to be using Strings and Integers:
HashMap<String,Integer> myMap = new HashMap<String,Integer>();
See http://download.oracle.com/javase/1.4.2/docs/api/java/util/HashMap.html for more details on HashMap.
If you do not want to look up the expressions, why do you want to store them in a map at all?
But if you do, then the key is the item you use for the lookup.
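For instance, if you look expressions up by the text they were built from, that text is the natural key (the Expression record here is just a placeholder for your class):

import java.util.HashMap;
import java.util.Map;

final class ExpressionLookup {
    // Placeholder for the asker's Expression object.
    record Expression(int left, int right, String op) {}

    public static void main(String[] args) {
        // The text an expression was built from is a natural lookup key.
        Map<String, Expression> byText = new HashMap<>();
        byText.put("2 + 4", new Expression(2, 4, "+"));
        System.out.println(byText.get("2 + 4")); // Expression[left=2, right=4, op=+]
    }
}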
Could anyone please tell what are the important use cases of IdentityHashMap?
Whenever you want your keys not to be compared by equals but by == you would use an IdentityHashMap. This can be very useful if you're doing a lot of reference-handling but it's limited to very special cases only.
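A minimal contrast of the two lookups, using two equal but distinct String keys:

import java.util.HashMap;
import java.util.IdentityHashMap;
import java.util.Map;

final class IdentityVsEquals {
    public static void main(String[] args) {
        String k1 = new String("key");
        String k2 = new String("key");        // equals(k1), but a different reference

        Map<String, Integer> byEquals = new HashMap<>();
        Map<String, Integer> byReference = new IdentityHashMap<>();
        byEquals.put(k1, 1);
        byReference.put(k1, 1);

        System.out.println(byEquals.get(k2));    // 1: found via equals()/hashCode()
        System.out.println(byReference.get(k2)); // null: k2 is a different reference
    }
}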
The documentation says:
A typical use of this class is topology-preserving object graph transformations, such as serialization or deep-copying. To perform such a transformation, a program must maintain a "node table" that keeps track of all the object references that have already been processed. The node table must not equate distinct objects even if they happen to be equal. Another typical use of this class is to maintain proxy objects. For example, a debugging facility might wish to maintain a proxy object for each object in the program being debugged.
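A minimal sketch of that "node table" idea, deep-copying a tiny made-up Node graph while tracking already-copied nodes by reference:

import java.util.ArrayList;
import java.util.IdentityHashMap;
import java.util.List;
import java.util.Map;

final class DeepCopyDemo {
    // Minimal hypothetical node type, just enough for the sketch.
    static final class Node {
        final int value;
        final List<Node> children = new ArrayList<>();
        Node(int value) { this.value = value; }
    }

    // The "node table" from the quote: keyed by reference, so distinct-but-equal
    // nodes stay distinct, shared nodes are copied exactly once, and cycles terminate.
    static Node deepCopy(Node original, Map<Node, Node> visited) {
        Node existing = visited.get(original);
        if (existing != null) return existing;
        Node copy = new Node(original.value);
        visited.put(original, copy);
        for (Node child : original.children) {
            copy.children.add(deepCopy(child, visited));
        }
        return copy;
    }

    public static void main(String[] args) {
        Node shared = new Node(1);
        Node root = new Node(0);
        root.children.add(shared);
        root.children.add(shared);  // the same node referenced twice
        Node copy = deepCopy(root, new IdentityHashMap<>());
        System.out.println(copy.children.get(0) == copy.children.get(1)); // true: copied once
    }
}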
One case where you can use IdentityHashMap is if your keys are Class objects. This is about 33% faster than HashMap for gets! It probably uses less memory too.
You can also use the IdentityHashMap as a general purpose map if you can make sure the objects you use as keys will be equal if and only if their references are equal.
To what gain? Obviously it will be faster and will use less memory than using implementations like HashMap or TreeMap.
Actually, there are quite a lot of cases when this stands. For example:
Enums. Although for enums there is even a better alternative: EnumMap
Class objects. They are also comparable by reference.
Interned Strings. Either by specifying them as literals or calling String.intern() on them.
Cached instances. Some classes provide caching of their instances. For example quoting from the javadoc of Integer.valueOf(int):
This method will always cache values in the range -128 to 127, inclusive...
Certain libraries/frameworks will manage exactly one instance of certain types, for example Spring beans.
Singleton types. If you use instances of types that are built with the Singleton pattern, you can also be sure that (at most) one instance exists of them, and therefore a reference-equality test qualifies as an equality test.
Any other type where you explicitly take care of using only the same references for accessing values that were used to putting values into the map.
To demonstrate the last point:
Map<Object, String> m = new IdentityHashMap<>();

// Any keys; we keep their references
Object[] keys = { "strkey", new Object(), new Integer(1234567) };

for (int i = 0; i < keys.length; i++) {
    m.put(keys[i], "Key #" + i);
}

// We query values from the map by the same references:
for (Object key : keys) {
    System.out.println(key + ": " + m.get(key));
}
Output will be, as expected (because we used the same Object references to query values from the map):
strkey: Key #0
java.lang.Object@1c29bfd: Key #1
1234567: Key #2
HashMap creates Entry objects every time you add an object, which can put a lot of stress on the GC when you've got lots of objects. In a HashMap with 1,000 objects or more, you'll end up using a good portion of your CPU just having the GC clean up entries (in situations like pathfinding or other one-shot collections that are created and then cleaned up). IdentityHashMap doesn't have this problem, so will end up being significantly faster.
See a benchmark here: http://www.javagaming.org/index.php/topic,21395.0/topicseen.html
This is a practical experience from me:
IdentityHashMap leaves a much smaller memory footprint compared to HashMap for large cardinalities.
One important case is where you are dealing with reference types (as opposed to values) and you really want the correct result. Malicious objects can have overridden hashCode and equals methods getting up to all sorts of mischief. Unfortunately, it's not used as often as it should be. If the interface types you are dealing with don't override hashCode and equals, you should typically go for IdentityHashMap.