Java hashCode, artificial fields?

Imagine the following problem:
// Class PhoneNumber implements hashCode() and equals()
PhoneNumber obj = new PhoneNumber("mgm", "089/358680");
System.out.println("Hashcode: " +
obj.hashCode()); //prints "1476725853"
// Add PhoneNumber object to HashSet
Set<PhoneNumber> set = new HashSet<>();
set.add(obj);
// Modify object after it has been inserted
obj.setNumber("089/358680-0");
// Modification causes a different hash value
System.out.println("New hashcode: " +
obj.hashCode()); //prints "7130851"
// ... Later or in another class, code such as the following
// is operating on the Set:
// Unexpected Result!
// Output: obj is set member: FALSE
System.out.println("obj is set member: " +
set.contains(obj));
Suppose I've got a class and I want all of its fields to be editable while still being able to use it in a Set (and thus hashCode). Would it be a good idea to create an artificial, uneditable field in the class that is set when the object is created, for example the current time in milliseconds? With that field in place I could base the hash code on it and still be able to edit all the "real" fields. Would this be a good idea?

I strongly believe you are presenting a bad use case: if you need to modify an object that is in a Set, you should definitely remove the old one and re-add the new one (or use another java.util.Collection). Taking from your example:
Set<PhoneNumber> set = new HashSet<>();
set.add(obj);
// Modify object after it has been inserted
set.remove(obj);
obj.setNumber("089/358680-0");
set.add(obj);
The whole purpose of hashCode is to create a bucket of similar objects to reduce the search space, so it should be immutable but still useful to you. If you use an artificial field, how do you find the object in your set later on? How do you obtain that artificial field again, given that you have no persistent storage of any kind? (The id in a database is, IMHO, the one legitimate use of an artificial field.)
To explain the meaning of
The whole purpose of hashCode is to create a bucket of similar
objects to reduce the search space
have a look at this sample code: http://ideone.com/MJ2MQT. I (wrongly) created two objects with the same hash code, then added both to a set; as expected, the set contains both of them, because the hash code is only used to pick the bucket of elements that collide, and the equals method is then called to resolve the collision. Collisions (read: different objects returning the same hash code) are unavoidable, and the goal of a properly designed hash code function is to reduce them as much as possible.
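In case the linked snippet disappears, here is a minimal sketch of the same experiment (the class name and fields are made up for illustration): two distinct, non-equal objects deliberately return the same hash code, yet the HashSet keeps both, because equals() ultimately decides whether two colliding elements are the same.
import java.util.HashSet;
import java.util.Set;

class BadHash {
    private final String name;
    BadHash(String name) { this.name = name; }

    @Override
    public int hashCode() { return 42; } // every instance collides on purpose

    @Override
    public boolean equals(Object o) {
        return o instanceof BadHash && ((BadHash) o).name.equals(name);
    }

    public static void main(String[] args) {
        Set<BadHash> set = new HashSet<>();
        set.add(new BadHash("a"));
        set.add(new BadHash("b"));      // same hash code, but not equals()
        System.out.println(set.size()); // prints 2: the collision is resolved by equals()
    }
}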

Storing mutable objects in a hash set, or using them as keys in a hash map, is definitely not a good idea, precisely for the reason that you illustrate in your code.
On the other hand, defining an artificial number that serves as an ID of an object defeats the purpose of having a hash code in the first place, because it does not help you find an object that is equal to a given object by limiting the search to objects with identical hash codes.
In fact, your solution is no different from building a Map<Integer,PhoneNumber> from an "artificial hash code" to your mutable PhoneNumber object. If finding objects by association is what you need, a HashMap from an artificial ID to the mutable object is the way to go.
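A rough sketch of that suggestion, reusing the PhoneNumber object from the question (the numeric ID is an assumption made up for illustration): the immutable Integer key is what the map hashes, so the value can be mutated freely.
import java.util.HashMap;
import java.util.Map;

Map<Integer, PhoneNumber> byId = new HashMap<>();
byId.put(1, obj);                // 1 is the artificial ID, not derived from the mutable fields

obj.setNumber("089/358680-0");   // mutate the value as much as you like

PhoneNumber found = byId.get(1); // still found: the key never changed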

It usually makes sense to have a unique identifier for your data objects, especially if you are persisting them in some database. It will allow you to have an easy implementation of equals and hashCode, which will only depend on this single identifier.
I'm not sure the current time in milliseconds is the best choice, but you should definitely generate some unique ID.
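For illustration, a minimal sketch of what such a class could look like, using a random UUID instead of the current time in milliseconds (note that a random ID is only stable for the lifetime of the object; a persisted ID, e.g. from a database, would also survive restarts):
import java.util.UUID;

public class PhoneNumber {
    private final UUID id = UUID.randomUUID(); // assigned once, never edited
    private String name;                       // freely editable
    private String number;                     // freely editable

    public PhoneNumber(String name, String number) {
        this.name = name;
        this.number = number;
    }

    public void setNumber(String number) { this.number = number; } // does not affect hashCode

    @Override
    public boolean equals(Object o) {
        return o instanceof PhoneNumber && ((PhoneNumber) o).id.equals(id);
    }

    @Override
    public int hashCode() { return id.hashCode(); }
}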

Related

Usage of identity hash map

I googled the usage of identity hash map but didn't find a good answer. I also didn't understand the Javadoc explanation below:
A typical use of this class is topology-preserving object graph transformations, such as serialization or deep-copying. To perform such a transformation, a program must maintain a "node table" that keeps track of all the object references that have already been processed. The node table must not equate distinct objects even if they happen to be equal. Another typical use of this class is to maintain proxy objects. For example, a debugging facility might wish to maintain a proxy object for each object in the program being debugged.
Can some one please provide a good use case of identity hash map ?
I guess the important point here is
The node table must not equate distinct objects even if they happen to be equal
If you add a key-value pair to a map, a HashMap, for example, will check whether the key already exists using the equals method. But there are cases where you want to compare keys by their real identity, which in Java is the object reference (address). As stated in the Javadoc, one use case could be a map that manages proxy objects: if you have two objects that are "equal", you still want to create a separate proxy object for each of them, and as a form of caching you want to store those proxy objects in a map. Then you use the identity map with the source object as key and the proxy object as value.
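A minimal sketch of that difference (names and values are made up for illustration): a HashMap collapses two equal keys into one entry, while an IdentityHashMap keeps one proxy per distinct instance.
import java.util.HashMap;
import java.util.IdentityHashMap;
import java.util.Map;

public class IdentityMapDemo {
    public static void main(String[] args) {
        String k1 = new String("key");          // two distinct instances ...
        String k2 = new String("key");          // ... that are equal()

        Map<String, String> byEquality = new HashMap<>();
        byEquality.put(k1, "proxy for k1");
        byEquality.put(k2, "proxy for k2");     // replaces the first entry: k1.equals(k2)

        Map<String, String> byIdentity = new IdentityHashMap<>();
        byIdentity.put(k1, "proxy for k1");
        byIdentity.put(k2, "proxy for k2");     // kept: k1 != k2 (different references)

        System.out.println(byEquality.size());  // 1
        System.out.println(byIdentity.size());  // 2
    }
}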
Hope this makes it a bit clearer.

Multiple keys pointing/refering to same Object in values in HashMap

I have a HashMap whose key is an object (with 2 String member variables) and whose value is an object containing 3 different Strings.
Say:
Map<ReqDTO , RespDTO> map = new HashMap<ReqDTO ,RespDTO> ();
suppose I have following values :
KEY VALUE
1 ("str1","1") - ("1","2","3")
2 ("str2","2") - ("a","b","c")
3 ("str3","3") - ("1","2","3")
4 ("str4","4") - ("v","b","g")
5 ("str5","5") - ("1","2","3")
When I have thousands of such records (this is a cache in my application), the VALUE part of records 1, 3 and 5 occupies the memory of 3 separate objects. I want the KEYS of records 1, 3 and 5 to point to the same instance of the VALUE ("1","2","3" in this case) instead of separate copies.
Is there any variant in HashMap for the same? or Any other Datastructure will do..
NOTE: It is loaded only once and all the operations performed on this are READ only..
What data structure should I prefer to make this perform well? In other words, insertion is allowed to be costly.
You could use a technique called interning, which is essentially mapping all objects that are equal() to each other to a single authoritative instance.
That's used in Java for Strings using String.intern().
But there are some drawbacks to using this method ('though they have been reduced quite a lot with modern JVMs). As an alternative you can use the Guava interface Interner.
Just create a single Interner using the Interners helper class:
Interner<String> strInterner = Interners.newStrongInterner();
and pass each String value through the interner before using it in a key or value:
String v1 = strInterner.intern(param1);
This way for any given value, you'll only ever use 1 String instance. The same can be done for any other class (as long as it correctly implements equals() and is immutable).
You can even discard the Interner after you've constructed the map.
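A minimal sketch of the effect, assuming Guava is on the classpath (only the String case is shown, but the same works for any immutable class with a proper equals()):
import com.google.common.collect.Interner;
import com.google.common.collect.Interners;

Interner<String> strInterner = Interners.newStrongInterner();

String a = strInterner.intern(new String("089/358680"));
String b = strInterner.intern(new String("089/358680"));

System.out.println(a.equals(b)); // true, as before
System.out.println(a == b);      // also true: both refer to the same canonical instance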
Well, if you put the same object into the map for both keys, then they'll both be the same object. If you have different instances of the object that are .equals() to each other, it gets more interesting. You could try using Flyweight for your value objects, or you could walk through the values() of the map - if you find an equals() value object, put your key with that object instead of the one passed in.
Someone, somewhere has probably already written a Map implementation that does what you want, but my best recommendation there is to use Google and hope they're good at SEO.
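For what it's worth, here is a hand-rolled sketch of that idea (ReqDTO and RespDTO are the classes from the question and are assumed to have proper equals()/hashCode()): a side map of already-seen values acts as the canonical pool, so equal values are stored only once.
import java.util.HashMap;
import java.util.Map;

class DedupingCache {
    private final Map<RespDTO, RespDTO> canonical = new HashMap<>();
    private final Map<ReqDTO, RespDTO> cache = new HashMap<>();

    void put(ReqDTO key, RespDTO value) {
        // Reuse an equal value instance if one was stored before (the Flyweight idea).
        RespDTO existing = canonical.putIfAbsent(value, value);
        cache.put(key, existing != null ? existing : value);
    }

    RespDTO get(ReqDTO key) {
        return cache.get(key);
    }
}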

Why are immutable objects in hashmaps so effective?

So I read about HashMap. At one point it was noted:
"Immutability also allows caching the hashcode of different keys which makes the overall retrieval process very fast and suggest that String and various wrapper classes (e.g., Integer) provided by Java Collection API are very good HashMap keys."
I don't quite understand... why?
String#hashCode:
private int hash;
...
public int hashCode() {
    int h = hash;
    if (h == 0 && count > 0) {
        int off = offset;
        char val[] = value;
        int len = count;
        for (int i = 0; i < len; i++) {
            h = 31*h + val[off++];
        }
        hash = h;
    }
    return h;
}
Since the contents of a String never change, the makers of the class chose to cache the hash after it had been calculated once. This way, time is not wasted recalculating the same value.
Quoting the linked blog entry:
final object with proper equals () and hashcode () implementation would act as perfect Java HashMap keys and improve performance of Java hashMap by reducing collision.
I fail to see how both final and equals() have anything to do with hash collisions. This sentence raises my suspicion about the credibility of the article. It seems to be a collection of dogmatic Java "wisdoms".
Immutability also allows caching there hashcode of different keys which makes overall retrieval process very fast and suggest that String and various wrapper classes e.g Integer provided by Java Collection API are very good HashMap keys.
I see two possible interpretations of this sentence, both of which are wrong:
HashMap caches hash codes of immutable objects. This is not correct. The map doesn't have the possibility to find out if an object is "immutable".
Immutability is required for an object to cache its own hash code. Ideally, an object's hash value should always just rely on non-mutating state of the object, otherwise the object couldn't be sensibly used as a key. So in this case, too, the author fails to make a point: If we assume that our object is not changing its state, we also don't have to recompute the hash value every time, even if our object is mutable!
Example
So if we are really crazy and actually decide to use a List as a key for a HashMap and make the hash value dependent on the contents, rather than the identity of the list, we could just decide to invalidate the cached hash value on every modification, thus limiting the number of hash computations to the number of modifications to the list.
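A rough sketch of that "crazy" idea (all names made up): the wrapper caches its content-based hash and throws the cache away whenever the content changes, so the hash is recomputed at most once per modification.
import java.util.ArrayList;
import java.util.List;

class HashCachingList<E> {
    private final List<E> elements = new ArrayList<>();
    private Integer cachedHash;    // null means "needs recomputing"

    void add(E e) {
        elements.add(e);
        cachedHash = null;         // invalidate the cache on every modification
    }

    @Override
    public int hashCode() {
        if (cachedHash == null) {
            cachedHash = elements.hashCode();
        }
        return cachedHash;
    }

    @Override
    public boolean equals(Object o) {
        return o instanceof HashCachingList && ((HashCachingList<?>) o).elements.equals(elements);
    }
}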
It's very simple. Since an immutable object doesn't change over time, it only needs to perform the calculation of the hash code once. Calculating it again will yield the same value. Therefore it is common to calculate the hash code in the constructor (or lazily) and store it in a field. The hashcode function then returns just the value of the field, which is indeed very fast.
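For illustration, a minimal sketch of that pattern with a made-up immutable class, using the same lazy-caching trick as String:
final class Point {
    private final int x, y;
    private int hash;              // 0 means "not computed yet"

    Point(int x, int y) { this.x = x; this.y = y; }

    @Override
    public int hashCode() {
        int h = hash;
        if (h == 0) {              // computed at most once (unless the hash happens to be 0)
            h = 31 * x + y;
            hash = h;
        }
        return h;
    }

    @Override
    public boolean equals(Object o) {
        return o instanceof Point && ((Point) o).x == x && ((Point) o).y == y;
    }
}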
Basically, immutability in Java is achieved by making the class non-extendable and by having none of its operations change the object's state. If you look at an operation of String like replace(), it does not change the state of the current object you are manipulating; rather, it gives you a new String object with the replaced content. So if you use such objects as keys, the state doesn't change and hence the hash code also remains unchanged. Caching the hash code is therefore effective for performance during retrievals.
Think of the hashmap as a big array of numbered boxes. The number is the hashcode, and the boxes are ordered by number.
Now if the object can't change, the hash function will always reproduce the same value. Therefore the object will always stay in its box.
Now suppose a changeable object. It is changed after adding it to the hash, so now it is sitting in the wrong box, like a Mrs. Jones who happened to marry Mister Doe and is now named Doe too, but is still listed as Jones in many registers.
Immutable classes are unmodifiable; that's why they are used as keys in a Map.
For an example -
StringBuilder key1=new StringBuilder("K1");
StringBuilder key2=new StringBuilder("K2");
Map<StringBuilder, String> map = new HashMap<>();
map.put(key1, "Hello");
map.put(key2, "World");
key1.append("00");
System.out.println(map); // This line prints - {K100=Hello, K2=World}
You can see that the key K1 (which is an object of the mutable class StringBuilder) inserted into the map is effectively lost due to an inadvertent change to it. This won't happen if you use immutable classes as keys with the Map family of collections.
Hash tables will only work if the hash code of an object can never change while it is stored in the table. This implies that the hash code cannot take into account any aspect of the object which could change while it's in the table. If the most interesting aspects of an object are mutable, that implies that either:
The hash code will have to ignore most of the interesting aspects of the object, thus causing many hash collisions, or...
The code which owns the hash table will have to ensure that the objects therein are not exposed to anything that might change them while they are stored in the hash table.
If Java hash tables allowed clients to supply an EqualityComparer (the way .NET dictionaries do), code which knows that certain aspects of the objects in a hash table won't unexpectedly change could use a hash code which took those aspects into account, but the only way to accomplish that in Java would be to wrap each item stored in the hash table in a wrapper. Such wrapping may not be the most evil thing in the world, however, since the wrapper would be able to cache hash values in a way which an EqualityComparer could not, and could also cache further equality-related information [e.g. if the things being stored were nested collections, it might be worthwhile to compute multiple hash codes, and confirm that all hash codes match before doing any detailed inspection of the elements].
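A rough sketch of that wrapping idea (all names are made up, not a standard API): the wrapper pins down which aspect of the wrapped object takes part in equality and caches the resulting hash, so it cannot drift while the entry sits in the table.
import java.util.Objects;

final class KeyView<T> {
    private final T wrapped;
    private final Object keyAspect; // snapshot of the aspect we agreed not to change
    private final int hash;         // cached once, stays stable for the table's lifetime

    KeyView(T wrapped, Object keyAspect) {
        this.wrapped = wrapped;
        this.keyAspect = keyAspect;
        this.hash = Objects.hashCode(keyAspect);
    }

    T unwrap() { return wrapped; }

    @Override
    public int hashCode() { return hash; }

    @Override
    public boolean equals(Object o) {
        return o instanceof KeyView && Objects.equals(((KeyView<?>) o).keyAspect, keyAspect);
    }
}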

Java sets: why there is no T get(Object o)? [duplicate]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 4 years ago.
I understand that only one instance of any object according to .equals() is allowed in a Set and that you shouldn't "need to" get an object from the Set if you already have an equivalent object, but I would still like to have a .get() method that returns the actual instance of the object in the Set (or null) given an equivalent object as a parameter.
Any ideas/theories as to why it was designed like this?
I usually have to hack around this by using a Map and making the key and the value same, or something like that.
EDIT: I don't think people understand my question so far. I want the exact object instance that is already in the set, not a possibly different object instance where .equals() returns true.
As to why I would want this behavior, typically .equals() does not take into account all the properties of the object. I want to provide some dummy lookup object and get back the actual object instance in the Set.
While the purity argument does make the method get(Object) suspect, the underlying intent is not moot.
There are various class and interface families that slightly redefine equals(Object). One need look no further than the collections interfaces. For example, an ArrayList and a LinkedList can be equal; their respective contents merely need to be the same and in the same order.
Consequently, there are very good reasons for finding the matching element in a set. Perhaps a clearer way of indicating intent is to have a method like
public interface Collection<E> extends ... {
    ...
    public E findMatch(Object o) throws UnsupportedOperationException;
    ...
}
Note that this API has value broader than just within Set.
As to the question itself, I don't have any theory as to why such an operation was omitted. I will say that the minimal spanning set argument does not hold, because many operations defined in the collections APIs are motivated by convenience and efficiency.
The problem is: Set is not for "getting" objects; it is for adding and testing for presence.
I understand what you are looking for; I had a similar situation and ended up using a map with the same object as both key and value.
EDIT: Just to clarify: http://en.wikipedia.org/wiki/Set_(abstract_data_type)
I had the same question in java forum years ago. They told me that the Set interface is defined. It cannot be changed because it will break the current implementations of Set interface. Then, they started to claim bullshit, like you see here: "Set does not need the get method" and started to drill me that Map must always be used to get elements from a set.
If you use the set only for mathematical operations, like intersection or union, then maybe contains() is sufficient. However, Set is defined in the collections framework to store data. I will explain the need for get() in Set using the relational data model.
In what follows, an SQL table is like a class. The columns define attributes (known as fields in Java) and records represent instances of the class. So an object is a vector of fields. Some of the fields are primary keys; they define the uniqueness of the object. This is what you do for contains() in Java:
class Element {
    public int hashCode() { return sumOfKeyFields(); }
    public boolean equals(Object o) {
        Element e = (Element) o; // type/null checks omitted for brevity
        return keyField1.equals(e.keyField1) && keyField2.equals(e.keyField2); // && ..
    }
}
I'm not aware of DB internals. But you specify key fields only once, when you define a table. You just annotate key fields with #primary. You do not specify the keys a second time when you add a record to the table. You do not separate keys from data, as you do in a mapping. SQL tables are sets; they are not maps. Yet they provide get() in addition to maintaining uniqueness and a contains() check.
In "Art of Computer Programming", introducing the search, D. Knuth says the same:
Most of this chapter is devoted to the study of a very simple search
problem: how to find the data that has been stored with a given
identification.
You see, data is stored with identification. Not identification pointing to data, but data with identification. He continues:
For example, in a numerical application we might want
to find f(x), given x and a table of the values of f; in a
nonnumerical application, we might want to find the English
translation of a given Russian word.
It looks like he starts to speak about mapping. However,
In general, we shall suppose that a set of N records has been stored,
and the problem is to locate the appropriate one. We generally require
the N keys to be distinct, so that each key uniquely identifies its
record. The collection of all records is called a table or file,
where the word "table" is usually used to indicate a small file, and
"file" is usually used to indicate a large table. A large file or a
group of files is frequently called a database.
Algorithms for searching are presented with a so-called argument, K,
and the problem is to find which record has K as its key. Although the
goal of searching is to find the information stored in the record
associated with K, the algorithms in this chapter generally ignore
everything but the keys themselves. In practice we can find the
associated data once we have located K; for example, if K appears in
location TABLE + i, the associated data (or a pointer to it) might be
in location TABLE + i + 1
That is, the search locates the key field of the record; it does not "map" the key to the data. Both are located in the same record, as fields of a Java object. That is, the search algorithm examines the key fields of the record, as it does in a set, rather than some remote key, as it does in a map.
We are given N items to be sorted; we shall call them records, and
the entire collection of N records will be called a file. Each
record Rj has a key Kj, which governs the sorting process. Additional
data, besides the key, is usually also present; this extra "satellite
information" has no effect on sorting except that it must be carried
along as part of each record.
Nor do I see any need to duplicate the keys in an extra "key set" in his discussion of sorting.
... ["The Art of Computer Programming", Chapter 6, Introduction]
An entity set is a collection or set of all entities of a particular entity type
[http://wiki.answers.com/Q/What_is_entity_and_entity_set_in_dbms]
The objects of a single class share their class attributes. Similarly, records in a DB share column attributes.
A special case of a collection is a class extent, which is the
collection of all objects belonging to the class. Class extents allow
classes to be treated like relations
... ["Database System Concepts", 6th Edition]
Basically, class describes the attributes common to all its instances. A table in relational DB does the same. "The easiest mapping you will ever have is a property mapping of a single attribute to a single column." This is the case I'm talking about.
I'm so verbose on proving the analogy (isomorphism) between objects and DB records because there are stupid people who do not accept it (to prove that their Set must not have the get method)
You can see in the replies how people who do not understand this say that a Set with get would be redundant. It is because their abused map, which they insist on using in place of a set, introduces the redundancy. Their call to put(obj.getKey(), obj) stores two keys: the original key as part of the object and a copy of it in the key set of the map. The duplication is the redundancy. It also involves more bloat in the code and wastes memory at runtime. I do not know about DB internals, but principles of good design and database normalization say that such duplication is a bad idea - there must be only one source of truth. Redundancy means that inconsistency may happen: the key maps to an object that has a different key. Inconsistency is a manifestation of redundancy. Edgar F. Codd proposed DB normalization precisely to get rid of redundancies and the inconsistencies they imply. The teachers are explicit on normalization: normalization will never generate two tables with a one-to-one relationship between them. There is no theoretical reason to separate a single entity like this, with some fields in a single record of one table and others in a single record of another table.
So, we have 4 arguments, why using a map for implementing get in set is bad:
the map is unnecessary when we have a set of unique objects
map introduces redundancy in Runtime storage
map introduces code bloat in the DB (in the Collections)
using map contradicts the data storage normalization
Even if you are not aware of the record-set idea and data normalization, by playing with collections you may discover this data structure and algorithm yourself, as we did, and as the designers of org.eclipse.KeyedHashSet and the C++ STL did.
I was banned from the Sun forum for pointing out these ideas. Bigotry is the only argument against reason, and this world is dominated by bigots. They do not want to see concepts and how things can be different/improved. They see only the actual world and cannot imagine that the design of the Java Collections may have deficiencies and could be improved. It is dangerous to point out rational things to such people. They teach you their blindness and punish you if you do not obey.
Added Dec 2013: SICP also says that DB is a set with keyed records rather than a map:
A typical data-management system spends a large amount of time
accessing or modifying the data in the records and therefore requires
an efficient method for accessing records. This is done by identifying
a part of each record to serve as an identifying key. Now we represent
the data base as a set of records.
Well, if you've already "got" the thing from the set, you don't need to get() it, do you? ;-)
Your approach of using a Map is The Right Thing, I think. It sounds like you're trying to "canonicalize" objects via their equals() method, which I've always accomplished using a Map as you suggest.
I'm not sure if you're looking for an explanation of why Sets behave this way, or for a simple solution to the problem it poses. Other answers dealt with the former, so here's a suggestion for the latter.
You can iterate over the Set's elements and test each one of them for equality using the equals() method. It's easy to implement and hardly error-prone. Obviously if you're not sure if the element is in the set or not, check with the contains() method beforehand.
This isn't efficient compared to, for example, HashSet's contains() method, which does "find" the stored element, but won't return it. If your sets may contain many elements it might even be a reason to use a "heavier" workaround like the map implementation you mentioned. However, if it's that important for you (and I do see the benefit of having this ability), it's probably worth it.
So I understand that you may have two equal objects but they are not the same instance.
Such as
Integer a = new Integer(3);
Integer b = new Integer(3);
In which case a.equals(b) because they refer to the same intrinsic value but a != b because they are two different objects.
There are other Set implementations, such as identity-based sets (for example one built via Collections.newSetFromMap(new IdentityHashMap<>())), which compare items by reference instead.
However, I think that you are trying to apply a different philosophy to Java. If your objects are equal (a.equals(b)) although a and b have a different state or meaning, there is something wrong here. You may want to split that class into two or more semantic classes which implement a common interface - or maybe reconsider .equals and .hashCode.
If you have Joshua Bloch's Effective Java, have a look at the chapters called "Obey the general contract when overriding equals" and "Minimize mutability".
Just use the Map solution... a TreeSet and a HashSet also do it since they are backed by a TreeMap and a HashMap respectively, so there is no penalty in doing so (actually it should be a minimal gain).
You may also extend your favorite Set to add the get() method.
I think your only solution, given some Set implementation, is to iterate over its elements to find one that is equals() -- then you have the actual object in the Set that matched.
K target = ...;
Set<K> set = ...;
for (K element : set) {
    if (target.equals(element)) {
        return element;
    }
}
If you think about it as a mathematical set, you can derive a way to find the object.
Intersect the set with a collection of object containing only the object you want to find. If the intersection is not empty, the only item left in the set is the one you were looking for.
public <T> T findInSet(T findMe, Set<T> inHere) {
    inHere.retainAll(Arrays.asList(findMe)); // note: this mutates the set that was passed in
    if (!inHere.isEmpty()) {
        return inHere.iterator().next();
    }
    return null;
}
It's not the most efficient use of memory, but it is functionally and mathematically correct.
"I want the exact object instance that is already in the set, not a possibly different object instance where .equals() returns true."
This doesn't make sense. Say you do:
Set<Foo> s = new HashSet<Foo>();
s.add(new Foo(...));
...
Foo newFoo = ...;
You now do:
s.contains(newFoo)
If you want that to only be true if an object in the set is == newFoo, implement Foo's equals and hashCode with object identity. Or, if you're trying to map multiple equal objects to a canonical original, then a Map may be the right choice.
I think the expectation is that equals truly represents some equality, not simply that the two objects have the same primary key, for example. And if equals represented two really equal objects, then a get would be redundant. The use case you want suggests a Map, and perhaps a different value for the key, something that represents a primary key rather than the whole object, and then properly implement equals and hashCode accordingly.
Functional Java has an implementation of a persistent Set (backed by a red/black tree) that incidentally includes a split method that seems to do kind of what you want. It returns a triplet of:
The set of all elements that appear before the found object.
An object of type Option that is either empty or contains the found object if it exists in the set.
The set of all elements that appear after the found object.
You would do something like this:
MyElementType found = hayStack.split(needle)._2().orSome(hay);
Object fromSet = set.tailSet(obj).first();
if (! obj.equals(fromSet)) fromSet = null;
does what you are looking for. I don't know why java hides it.
Say, I have a User POJO with ID and name.
ID keeps the contract between equals and hashcode.
name is not part of object equality.
I want to update the name of the user based on the input from somewhere say, UI.
As java set doesn't provide get method, I need to iterate over the set in my code and update the name when I find the equal object (i.e. when ID matches).
If you had get method, this code could have been shortened.
Java now comes with all kinds of stupid things like JavaDB and the enhanced for loop; I don't understand why in this particular case they are being purist.
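For completeness, the loop being complained about, sketched with a hypothetical User POJO (getId()/setName() are assumed; equals() and hashCode() depend on the ID only):
static void rename(Set<User> users, long id, String newName) {
    for (User u : users) {
        if (u.getId() == id) {   // the ID (the basis of hashCode) is never modified
            u.setName(newName);  // only a non-key field changes, so the set stays consistent
            return;
        }
    }
}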
I had the same problem. I fixed it by converting my set to a Map, and then getting them from the map. I used this method:
public Map<MyObject, MyObject> convertSetToMap(Set<MyObject> set)
{
    Map<MyObject, MyObject> myObjectMap = new HashMap<MyObject, MyObject>();
    for (MyObject myObject : set) {
        myObjectMap.put(myObject, myObject);
    }
    return myObjectMap;
}
Now you can get items from your set by calling this method like this:
convertSetToMap(myset).get(myobject);
You can override equals in your class to make it check only certain properties, like id or name.
If you have made a request for this in the Java bug parade, list it here and we can vote it up. I think at the least a convenience method in java.util.Collections that just takes a set and an object
and is implemented something like
static Object searchSet(Set ss, Object searchFor) {
    Iterator it = ss.iterator();
    while (it.hasNext()) {
        Object s = it.next();
        if (s != null && s.equals(searchFor)) {
            return s;
        }
    }
    return null;
}
This is obviously a shortcoming of the Set API.
Simply, I want to lookup an object in my Set and update its property.
And I HAVE TO loop through my (Hash)Set to get to my object... Sigh...
I agree that I'd like to see Set implementations provide a get() method.
As one option, in the case where your Objects implement (or can implement) java.lang.Comparable, you can use a TreeSet. Then the get() type function can be obtained by calling ceiling() or floor(), followed by a check for the result being non-null and equal to the comparison Object, such as:
TreeSet<MyObject> myTreeSet = new TreeSet<>();
:
:
// Equivalent of a get() and a null-check, except for the incorrect value sitting in
// returnedMyObject in the not-equal case.
MyObject returnedMyObject = myTreeSet.ceiling(comparisonMyObject);
if ((null != returnedMyObject) && returnedMyObject.equals(comparisonMyObject)) {
:
:
}
The reason why there is no get is simple:
If you need to get the object X from the set, it is because you need something from X and you don't have the object.
If you do not have the object, then you need some means (a key) to locate it: its name, a number, whatever. That's what maps are for, right?
map.get( "key" ) -> X!
Sets do not have keys; you need to traverse them to get the objects.
So, why not add a handy get( X ) -> X?
That makes no sense, right? Because you have X already, purists will say.
But now look at it as a non-purist, and see if you really want this:
Say I make an object Y which matches the equals of X, so that set.get(Y) -> X. Voila, then I can access the data of X that I didn't have. Say, for example, X has a method called getFlag() and I want the result of that.
Now look at this code.
Y
X = set.get( Y );
So Y.equals( X ) is true!
but..
Y.flag() == X.flag() is false. (Weren't they equal?)
So, you see, if a set allowed you to get objects like that, it would surely break the basic semantics of equals. Later you are going to live with little clones of X, all claiming that they are the same when they are not.
You need a map, to store stuff and use a key to retrieve it.
I understand that only one instance of any object according to .equals() is allowed in a Set and that you shouldn't "need to" get an object from the Set if you already have an equivalent object, but I would still like to have a .get() method that returns the actual instance of the object in the Set (or null) given an equivalent object as a parameter.
The simple interface/API gives more freedom during implementation. For example, if the Set interface were reduced to just a single contains() method, we would get a set definition typical of functional programming: it is just a predicate, and no objects are actually stored. It is also true for java.util.EnumSet - it contains only a bit vector covering the possible values.
It's just an opinion. I believe we need to understand that we have several Java classes without fields/properties, i.e. only methods. In that case equality cannot be measured by comparing fields; one such example is request handlers. See the example below from a JAX-RS application. In this context a Set makes more sense than any other data structure.
@ApplicationPath("/")
public class GlobalEventCollectorApplication extends Application {
    @Override
    public Set<Class<?>> getClasses() {
        Set<Class<?>> classes = new HashSet<Class<?>>();
        classes.add(EventReceiverService.class);
        classes.add(VirtualNetworkEventSerializer.class);
        return classes;
    }
}
To answer your question: if you have a shallow Employee object (i.e. only EMPID, which is used in the equals method to determine uniqueness) and you want to get the deep object back by doing a lookup in a set, a Set is not the right data structure, as its purpose is different.
A List is an ordered data structure, so it follows the insertion order. Hence the data you put in will be available at the exact position where you inserted it.
List<Integer> list = new ArrayList<>();
list.add(1);
list.add(2);
list.add(3);
list.get(0); // will return value 1
Think of it as a simple array.
A Set is an unordered data structure, so it follows no order. Data you insert at a certain position may internally end up at any position.
Set<Integer> set = new HashSet<>();
set.add(1);
set.add(2);
set.add(3);
//assume it has get method
set.get(0); // what are you expecting this to return. 1?..
But it might well return something else. Hence it does not make sense to have an index-based get method in Set.
Note: for the explanation I used the int type; the same applies to object types as well.
I think you've answered your own question: it is redundant.
Set provides Set#contains(Object o), which performs the same equality test your desired Set#get(Object o) would perform and returns a boolean, as would be expected.

Algorithm to get unique and same hashcode for the object when we run the application multiple times

I'm using Java. I want to know whether there is an algorithm that will give me a unique and stable hash code when I run the application multiple times, so that hash code collisions are avoided.
I know that for equal objects the JVM returns the same hash code, and that for different objects it may return the same or a different hash code. But I want some logic that will help generate a unique hash code for every object.
Unique means that the hash code of one object should not collide with any other object's hash code, and same means that when I run the application multiple times it should return the same hash code that it returned previously.
The default hash code function in Java might return different hash codes for each JVM invokation, because it is able to use the memory address of the object, mangle it, and return it.
This is however not good coding practice, since objects which are equal should always return the same hashcode! Please read about the hash code contract to learn more. And most Classes in Java already have a hashcode function implemented that returns the same value on each JVM invocation.
To make it simple: All your data holding objects which might be stored in some collection should have an equals and hashcode implemention. If you code with Eclipse or any other reasonable IDE, you can use a wizard that creates the functions automatically.
And while we are at it: It is IMHO good practice to also implement the Comparable<T> interface, so you can use the objects within SortedSets and TreeMaps, too.
And while we are at it: if others should use your objects, don't forget Serializable and Cloneable.
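As a sketch of what such a generated pair typically looks like (the class and its fields are made up for illustration), both methods depend on the same value fields, so equal objects always produce equal, run-independent hash codes:
import java.util.Objects;

public final class Person {
    private final String name;
    private final int age;

    public Person(String name, int age) {
        this.name = name;
        this.age = age;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Person)) return false;
        Person p = (Person) o;
        return age == p.age && Objects.equals(name, p.name);
    }

    @Override
    public int hashCode() {
        return Objects.hash(name, age); // same inputs, same result, on every run
    }
}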
Unique means that hashcode of one object should not collide with any other object's hashcode. Same means when I run the application multiple times, it should return me the same hash code whatever it returned me previously.
It is impossible to meet these requirements for a number of reasons:
It is not possible to guarantee that hashcodes are unique. Whatever you do in your class's hashCode method, some other class's hashCode method may give a value for some instance that is the same as the hash code of one of your instances.
It is impossible to guarantee that hashcodes are unique across application runs even just for instances of your class.
The second requires justification. The way to create a unique hashcode is to do something like this:
static HashSet<Integer> usedCodes = ...
static IdentityHashMap<YourClass, Integer> codeMap = ...

public int hashCode() {
    Integer code = codeMap.get(this);
    if (code == null) {
        code = ... // generate a value-based hashcode for 'this'
        while (usedCodes.contains(code)) {
            code = rehash(code);
        }
        usedCodes.add(code);
        codeMap.put(this, code);
    }
    return code;
}
This gives the hashcodes with the desired uniqueness property, but the sameness property is not guaranteed ... unless the application always generates / accesses the hashcodes for all objects in the same order.
The only way to get this to work would be to persist the usedCode and codeMap data structures in a suitable form. Even (just) storing the unique hashcodes as part of the persisted objects is not sufficient, because there is a risk that the application may reissue a hashcode to a newly created object before reading the existing object that has the hashcode.
Finally, it should be noted that you have to be careful with using identity hashcodes anywhere in the solution. Identity hashcodes are not unique across different runs of an application. Indeed, if there are differences in any inputs, or if there is any non-determinism, it is highly likely that a given object will have a different identity hashcode value each time you run the application.
FOLLOW UP
Suppose you are storing millions of URLs in a database. While retrieving these URLs, I want to generate a unique hashcode that will make searching faster.
You need to store the hashcodes in a separate column of the table. But given the constraints discussed above, I don't see how this is going to make search faster. Basically you have to search the database for the URL in order to work out its unique hashcode.
I think you are better off using hashcodes that are not unique with a small probability. If you use a good enough "cryptographic" hashing function and a large enough hash size you can (in theory) make the probability of collision arbitrarily small ... but not zero.
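As one possible sketch of that last suggestion (the method name is made up): hash the URL's bytes with SHA-256 and fold the first 8 bytes into a long. The result is stable across runs and JVMs; it is not guaranteed unique, but collisions are astronomically unlikely for realistic data sets.
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

static long stableHash(String url) {
    try {
        byte[] digest = MessageDigest.getInstance("SHA-256")
                                     .digest(url.getBytes(StandardCharsets.UTF_8));
        long h = 0;
        for (int i = 0; i < 8; i++) {
            h = (h << 8) | (digest[i] & 0xFF); // fold the first 8 digest bytes into a long
        }
        return h;
    } catch (NoSuchAlgorithmException e) {
        throw new IllegalStateException(e);    // SHA-256 is available on every standard JVM
    }
}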
Based on my understanding of your question...
If it is your own custom object, then you can override the hashCode method (along with equals) to get a consistent hash code based on the instance variables of your class. You can even return a constant hash code; it will still satisfy the hashCode contract.
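For instance, this degenerate (but legal) pair, sketched for a hypothetical class with a single value field, still honours the contract because equal objects get equal hash codes; every instance lands in the same bucket, though, so hash-based lookups degrade to a linear scan.
@Override
public boolean equals(Object o) {
    return o instanceof MyValue && ((MyValue) o).value.equals(value);
}

@Override
public int hashCode() {
    return 1; // constant: allowed by the contract, but terrible for HashMap performance
}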
