I have a Java HashMap whose keys are instances of java.lang.Object, that is: the keys are of different types. The hashCode values of two key objects of different types are likely to be the same when they contain identical variable values.
In order to improve the performance of the get method for my HashMap, I'm inclined to mix the name of the Java type into the hashCode methods of my key objects. I have not seen examples of this elsewhere, and so my this-might-be-wacky alarm went off. Do you think mixing the type into hashCode is a good idea? Should I mix in the class name, or the hashCode of the relevant Class object?
I wouldn't mix the type name in - but if you're controlling the hashCode algorithm already, why not just change it so that they won't clash? For example, if you're using the common "add and multiply" approach, you could start off with different base cases or use different multipliers.
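For instance, here is a minimal sketch (the key types and field names are invented for illustration) of two key classes using the add-and-multiply approach with different base cases and multipliers, so identical field values no longer produce identical hash codes:

    // Two key types whose hash codes start from different bases and use
    // different multipliers, so equal field values hash differently per type.
    final class PointKey {
        final int x, y;
        PointKey(int x, int y) { this.x = x; this.y = y; }

        @Override public boolean equals(Object o) {
            return o instanceof PointKey && ((PointKey) o).x == x && ((PointKey) o).y == y;
        }

        @Override public int hashCode() {
            int h = 17;           // base case for this type
            h = 31 * h + x;
            h = 31 * h + y;
            return h;
        }
    }

    final class SizeKey {
        final int width, height;
        SizeKey(int width, int height) { this.width = width; this.height = height; }

        @Override public boolean equals(Object o) {
            return o instanceof SizeKey && ((SizeKey) o).width == width && ((SizeKey) o).height == height;
        }

        @Override public int hashCode() {
            int h = 23;           // different base case...
            h = 37 * h + width;   // ...and different multiplier
            h = 37 * h + height;
            return h;
        }
    }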
Before you worry about this too much though, have you actually measured how often you're really getting collisions with real data? Is this definitely a problem, or are you just concerned that it might be a problem?
I think your this-might-be-wacky alarm should have gone off when you decided to have keys of different types. But let's assume this is a case where Object is really the way to go.
You should try it without mixing in the type name first, and stress-test the performance only if this particular lookup turns out to be a hotspot in the system. Chances are the performance doesn't matter that much.
Like Jon implied, the performance of a hash map is improved by reducing collisions. Mixing in the type name is just as likely to increase collisions as to reduce them. To keep your HashMap in peak condition, you want the likelihood of any particular hash code to be about the same as any other over the domain of valid key values: the probability of a hash code of 10 should be about the same as the probability of 100 or any other number. That way the hash table buckets fill evenly (in all likelihood). So whether you have an object of type A or type B should not matter; what matters is the probability distribution of the hash codes over all occurring key values.
Years later...
Apart from it being a premature optimization, it's not a bad idea and the overhead is tiny. Choy's recommendation to profile first is surely good in general, but sometimes a simple optimization takes much less time than the profiling. This seems to be such a case.
I'd use a different multiplier as already suggested, and mix in getClass().hashCode().
Or maybe getClass().getName().hashCode(), as it stays consistent across JVM invocations, which might be helpful if you want a reproducible HashMap iteration order for easier debugging. Note that you should never rely on such reproducibility, and that quite a few things can destroy it.
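A sketch of what that could look like (the key class and its fields are illustrative only):

    // Mixing the class name's hash into the key's own hash code.
    final class TypedKey {
        final int a, b; // illustrative fields
        TypedKey(int a, int b) { this.a = a; this.b = b; }

        @Override public boolean equals(Object o) {
            return o instanceof TypedKey && ((TypedKey) o).a == a && ((TypedKey) o).b == b;
        }

        @Override public int hashCode() {
            // getName().hashCode() is stable across JVM runs,
            // unlike getClass().hashCode(), which is identity-based.
            int h = getClass().getName().hashCode();
            h = 31 * h + a;
            h = 31 * h + b;
            return h;
        }
    }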
So I am working on a data structure whose operations tend to generate lots of different instances of a certain type. The datasets can be large enough to potentially yield millions of such objects. It is crucial that I "memoize" them, since there are recurring patterns in these calculations.
Typically, memoization is done by keeping a set (say, a HashMap whose keys are also its values) of all instances ever created. Any time an operation would return a result, the set is first searched for an existing, identical object. If one is found, it is returned instead, and the result we used for the lookup instantly becomes garbage.
Now, this is straightforward enough. However, HashMap is not a perfect fit for this use case. There are two points I will address here:
Lack of support for custom equality and hashing function
Lack of support for "equivalent key" lookup.
The first has already been discussed (question 5453226), though not sufficiently for my purposes; solutions in the form of "just wrap your key types with another object" are a non-starter. If the key is a relatively small object (e.g. an array or a string of small size) the overhead cost of these wrappers can be nearly 2X.
To illustrate the second point, let's say the data type I'd like to memoize is a (typically small) int[]. Suppose I have performed an operation and it has yielded a result, but not exactly as an int[]; rather, it consists of an int[] x and, separately, an int length. What I would like to do now is to search the set for an array that is equal (as a sequence of ints) to the data in x[0..length-1]. In principle, there should be no problem accomplishing this, provided that the user supplies equality and hashing predicates that match the ones used for the key type itself. In principle, the "compatible key" (x and length here) doesn't even have to be materialized as an object.
The lack of "compatible key" lookups may cause the program to create temporary key objects that are likely to be thrown away afterwards.
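To make the idea concrete, here is a rough sketch (the class and method names are mine, purely for illustration) of a memo table with such a "compatible key" lookup for int[]:

    import java.util.Arrays;

    // Rough sketch of an open-addressed memo table with a "compatible key"
    // lookup: we probe with (x, length) and only materialize a trimmed copy
    // of the array if no equivalent key is already interned.
    final class IntArrayMemoTable {
        private int[][] table = new int[1024][]; // power-of-two capacity
        private int size;

        private static int hash(int[] a, int length) {
            int h = 1;
            for (int i = 0; i < length; i++) h = 31 * h + a[i];
            return h;
        }

        private static boolean equalRange(int[] key, int[] a, int length) {
            if (key.length != length) return false;
            for (int i = 0; i < length; i++) if (key[i] != a[i]) return false;
            return true;
        }

        // Returns the canonical array equal to x[0..length-1], inserting a
        // trimmed copy only when no equivalent array is present.
        int[] intern(int[] x, int length) {
            if ((size + 1) * 2 > table.length) resize();
            int mask = table.length - 1;
            int i = hash(x, length) & mask;
            while (table[i] != null) {
                if (equalRange(table[i], x, length)) return table[i]; // memo hit
                i = (i + 1) & mask; // linear probing
            }
            int[] canonical = Arrays.copyOf(x, length);
            table[i] = canonical;
            size++;
            return canonical;
        }

        private void resize() {
            int[][] old = table;
            table = new int[old.length * 2][];
            int mask = table.length - 1;
            for (int[] key : old) {
                if (key == null) continue;
                int i = hash(key, key.length) & mask;
                while (table[i] != null) i = (i + 1) & mask;
                table[i] = key; // reinsert the same instance, preserving identity
            }
        }
    }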
Problem 1 prevents me from using something like int[] as a key type in a Map, since it doesn't have the hashing and equality functions that I want. Further, I actually want to use shallow / reference-only comparisons and hashing on these objects once I'm past the memoization step, since then x is an identical object to y iff x == y. Here we have the same key type, but different desired equality/hashing behavior.
In C++, the functionality in (1) is offered out of the box by most hash-table based containers. (2) is given by some container types, for example the associative container templates in the Boost library.
A third function I'd be interested in (less important) is the insert_check/insert_commit idea, where we first check whether a matching key exists, and also get some implementation-defined marker back (e.g. a bucket index). Then, if we do create a new key and want to insert it, we give back the marker and it is inserted in the right place in the data structure. There is no reason why the hash of the same key should be computed twice.
If the compiler (or JIT) is clever enough to avoid the double lookup in this scenario, that is a preferable solution - it's easier not to have to worry about it. I just never actually tested whether it is.
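To make the shape of this API concrete, a hypothetical sketch (every name here is invented; no Java library I know of offers exactly this):

    // Hypothetical interface for the two-phase insert described above.
    interface InternTable<K> {
        // Opaque marker recording where a failed lookup ended
        // (e.g. the hash and bucket index), so commit can skip re-probing.
        final class Slot {
            final int hash, bucket;
            Slot(int hash, int bucket) { this.hash = hash; this.bucket = bucket; }
        }

        // Returns the existing equivalent key, or null after recording
        // the probe position in `out` for a later insertCommit.
        K insertCheck(K key, Slot[] out);

        // Inserts `key` at the position discovered by insertCheck,
        // without hashing or searching a second time.
        void insertCommit(K key, Slot slot);
    }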
Are there any libraries known to offer any of these capabilities in Java? Do I have to roll my own? Am I missing something?
Hash Collision (or Hashing Collision) in HashMap is not a new topic, and I've come across several blogs and discussion boards explaining how to produce hash collisions or how to avoid them, in ways both vague and detailed. I recently came across this question in an interview. I had a lot of things to explain, but I think it was really hard to give a precise explanation. Sorry if my questions are repeated here; please route me to the precise answers:
What exactly is Hash Collision - is it a feature, or common phenomenon which is mistakenly done but good to avoid?
What exactly causes Hash Collision - the bad definition of custom class' hashCode() method, OR to leave the equals() method un-overridden while imperfectly overriding the hashCode() method alone, OR is it not up to the developers and many popular Java libraries also have classes which can cause Hash Collision?
Does anything go wrong or unexpected when Hash Collision happens? I mean is there any reason why we should avoid Hash Collision?
Does Java generate, or at least try to generate, a unique hashCode per class during object instantiation? If not, is it right to rely on Java alone to ensure that my program would not run into Hash Collision for JRE classes? If it is not right, then how do I avoid hash collisions for HashMaps with final classes like String as the key?
I'll be grateful if you could please share your answers to one or all of these questions.
What exactly is Hash Collision - is it a feature, or common phenomenon which is mistakenly done but good to avoid?
It's a feature. It arises out of the nature of a hashCode: a mapping from a large value space to a much smaller value space. There are going to be collisions, by design and intent.
What exactly causes Hash Collision - the bad definition of custom class' hashCode() method,
A bad design can make it worse, but it is endemic in the notion.
OR to leave the equals() method un-overridden while imperfectly overriding the hashCode() method alone,
No.
OR is it not up to the developers and many popular Java libraries also have classes which can cause Hash Collision?
This doesn't really make sense. Hashes are bound to collide sooner or later, and poor algorithms can make it sooner. That's about it.
Does anything go wrong or unexpected when Hash Collision happens?
Not if the hash table is competently written. A hash collision only means that the hashCode is not unique, which puts you into calling equals(), and the more duplicates there are the worse the performance.
I mean is there any reason why we should avoid Hash Collision?
You have to trade off ease of computation against spread of values. There is no single black and white answer.
Does Java generate, or at least try to generate, a unique hashCode per class during object instantiation?
No. 'Unique hash code' is a contradiction in terms.
If not, is it right to rely on Java alone to ensure that my program would not run into Hash Collision for JRE classes? If it is not right, then how do I avoid hash collisions for HashMaps with final classes like String as the key?
The question is meaningless. If you're using String you don't have any choice about the hashing algorithm, and you are also using a class whose hashCode has been slaved over by experts for twenty or more years.
Actually, I think hash collisions are normal. Let's talk through a case. Say we have 1,000,000 big numbers (a set S of values x), where each x is less than 2^64, and we want to map this set S into [0, 1000000).
But how? Use a hash!
Define a hash function f(x) = x mod 1000000. Now each x in S is mapped into [0, 1000000). But you will find that many numbers in S map to the same value: for example, every number of the form k * 1000000 + y ends up at y, because (k * 1000000 + y) mod 1000000 = y. This is a hash collision.
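A tiny demonstration of exactly this collision (values chosen to match the example):

    // Two distinct inputs that f(x) = x mod 1000000 sends to the same bucket.
    public class CollisionDemo {
        static long f(long x) { return x % 1_000_000; }

        public static void main(String[] args) {
            System.out.println(f(1_000_005L)); // 5  (k = 1, y = 5)
            System.out.println(f(2_000_005L)); // 5  (k = 2, y = 5) -- a collision
        }
    }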
And how do we deal with collisions? In the case above, it is very difficult to eliminate collisions completely, because this is a matter of probability. We can look for a more complex, better hash function, but we can never say we have definitely eliminated collisions. We should make the effort to find a better hash function that decreases collisions, because collisions increase the time it costs to find something via the hash.
Simply put, there are two ways to deal with hash collisions. A linked list is the more direct way (separate chaining): if two numbers hash to the same value, we create a linked list at that bucket, and all values with the same hash go into that bucket's list. The other way (open addressing) is to find a new position for the later number: for example, if the number 1000005 has taken position 5, and 2000005 also hashes to 5, it cannot be located at position 5, so it moves ahead to find an empty position to take.
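A minimal sketch of the linked-list (separate chaining) approach, using the same f(x) as above:

    import java.util.LinkedList;

    // Each bucket holds a linked list of every value whose hash lands there.
    final class ChainedHashSet {
        @SuppressWarnings("unchecked")
        private final LinkedList<Long>[] buckets = new LinkedList[1_000_000];

        private int f(long x) { return (int) (x % buckets.length); } // non-negative x assumed

        void add(long x) {
            int i = f(x);
            if (buckets[i] == null) buckets[i] = new LinkedList<>();
            if (!buckets[i].contains(x)) buckets[i].add(x); // colliding values share the list
        }

        boolean contains(long x) {
            int i = f(x);
            return buckets[i] != null && buckets[i].contains(x);
        }
    }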
For the last question: does Java generate, or at least try to generate, a unique hashCode per class during object instantiation?
The hashCode of Object is typically implemented by converting the internal address of the object into an integer, so you can think of different objects as having different hash codes if you use Object's hashCode().
What exactly is Hash Collision - is it a feature, or common phenomenon which is mistakenly done but good to avoid?
A hash collision is exactly that: a collision of the hashCode values of two different objects...
What exactly causes Hash Collision - the bad definition of custom class' hashCode() method, OR to leave the equals() method un-overridden while imperfectly overriding the hashCode() method alone, OR is it not up to the developers and many popular Java libraries also have classes which can cause Hash Collision?
No; collisions may happen regardless, because they are governed by mathematical probability, and in such cases the birthday paradox is the best way to explain that.
Does anything go wrong or unexpected when Hash Collision happens? I mean is there any reason why we should avoid Hash Collision?
No. The String class in Java is a very well developed class, and yet you don't need to search far to find a collision: check the hash codes of the Strings "Aa" and "BB" - both hash to 2112.
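You can verify this in a few lines:

    // Same hash code, different (non-equal) strings.
    public class StringCollision {
        public static void main(String[] args) {
            System.out.println("Aa".hashCode());   // 2112
            System.out.println("BB".hashCode());   // 2112
            System.out.println("Aa".equals("BB")); // false
        }
    }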
To summarize:
A hashCode collision is harmless if you know what a hash code is for, and why it is not the same as an ID used to prove equality.
What exactly is Hash Collision - is it a feature, or common phenomenon which is mistakenly done but good to avoid?
Neither... both... it is a common phenomenon, but not mistakenly done, and it is good to avoid.
What exactly causes Hash Collision - the bad definition of custom class' hashCode() method, OR to leave the equals() method un-overridden while imperfectly overriding the hashCode() method alone, OR is it not up to the developers and many popular Java libraries also have classes which can cause Hash Collision?
By poorly designing your hashCode() method you can produce too many collisions. Leaving your equals() method un-overridden should not directly affect the number of collisions. And many popular Java libraries do have classes that can cause collisions (nearly all classes, actually).
Does anything go wrong or unexpected when Hash Collision happens? I mean is there any reason why we should avoid Hash Collision?
There is a degradation in performance; that is a reason to avoid them, but the program should continue to work.
Does Java generate, or at least try to generate, a unique hashCode per class during object instantiation? If not, is it right to rely on Java alone to ensure that my program would not run into Hash Collision for JRE classes? If it is not right, then how do I avoid hash collisions for HashMaps with final classes like String as the key?
Java doesn't try to generate a unique hash code during object instantiation, but it does provide default implementations of hashCode() and equals(). The default implementation only tells you whether two object references point to the same instance; it doesn't rely on the content (field values) of the objects. That is why the String class provides its own implementation.
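A small demonstration of the difference:

    // Object's equals/hashCode are identity-based; String overrides both
    // to depend on content.
    public class DefaultVsOverridden {
        public static void main(String[] args) {
            Object a = new Object();
            Object b = new Object();
            System.out.println(a.equals(b)); // false: identity comparison

            String s1 = new String("hello");
            String s2 = new String("hello");
            System.out.println(s1 == s2);                       // false: distinct instances
            System.out.println(s1.equals(s2));                  // true: same content
            System.out.println(s1.hashCode() == s2.hashCode()); // true: equal objects, equal hashes
        }
    }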
We use a HashMap<Integer, SomeType>() with more than a million entries. I consider that large.
But integers are their own hash code. Couldn't we save memory with, say, an IntegerHashMap<Integer, SomeType>() that uses a special Map.Entry, using int directly instead of a pointer to an Integer object? In our case, that would save the memory required for a million Integer objects.
Any faults in my line of thought? Too special to be of general interest? (At least, there is an EnumMap.)
add1. The first generic parameter of IntegerHashMap is used to make it closely similar to the other Map implementations. It could be dropped, of course.
add2. The same should be possible with other maps and collections. For example ToIntegerHashMap<KeyType, Integer>, IntegerHashSet<Integer>, etc.
What you're looking for is a "primitive collections" library. They are usually much better in memory usage and performance. One of the oldest and most popular libraries was called "Trove". However, it is a bit outdated now. The main active libraries in use now are:
Goldman Sachs Collections
fastutil
Koloboke
See Benchmarks Here
Some words of caution:
"integers are their own hash code" I'd be very careful with this statement. Depending on the integers you have, the distribution of keys may be anything from optimal to terrible. Ideally, I'd design the map so that you can pass in a custom IntFunction as hashing strategy. You can still default this to (i) -> i if you want, but you probably want to introduce a modulo factor, or your internal array will be enormous. You may even want to use an IntBinaryOperator, where one param is the int and the other is the number of buckets.
I would drop the first generic param. You probably don't want to implement Map<Integer, SomeType>, because then you will have to box / unbox in all your methods, and you will lose all your optimizations (except space). Trying to make a primitive collection compatible with an object collection will make the whole exercise pointless.
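For illustration, this is roughly what usage looks like with fastutil (assuming it is on the classpath); the map works directly with primitive int keys:

    import it.unimi.dsi.fastutil.ints.Int2ObjectOpenHashMap;

    // Primitive-int-keyed map: no Integer boxing for keys.
    public class FastutilExample {
        public static void main(String[] args) {
            Int2ObjectOpenHashMap<String> map = new Int2ObjectOpenHashMap<>();
            map.put(42, "answer");            // int key, no boxing
            System.out.println(map.get(42));  // "answer"
            System.out.println(map.get(7));   // null (the default return value)
        }
    }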
We know that EnumSet and EnumMap are faster than HashSet/HashMap due to the power of bit manipulation. But are we actually harnessing the true power of EnumSet/EnumMap when it really matters? If we have a set of millions of records and we want to find out if some object is present in that set or not, can we take advantage of EnumSet's speed?
I checked around but haven't found anything discussing this. Everywhere I find the usual explanation: because EnumSet and EnumMap use a predefined set of keys, lookups on small collections are very fast. I know enums are compile-time constants, but can we have the best of both worlds - an EnumSet-like data structure without needing enums as keys?
Interesting insight; the short answer is no, but your question is exploring some good data-structure design concepts which I'll try to discuss.
First, let's talk about HashMap (HashSet uses a HashMap internally, so they share most behavior); a hash-based data structure is powerful because it is fast and general. It's fast (i.e. approximately O(1)) because we can find the key we're looking for with a very small number of computations. Roughly, we have an array of lists of keys, convert the key to an integer index into that array, then look through the associated list for the key. As the mapping gets bigger, the backing array is repeatedly resized to hold more lists. Assuming the lists are evenly distributed, this lookup is very fast. And because this works for any generic object (that has a proper .hashCode() and .equals()) it's useful for just about any application.
Enums have several interesting properties, but for the purpose of efficient lookup we only care about two of them - they're generally small, and they have a fixed number of values. Because of this, we can do better than HashMap; specifically, we can map every possible value to a unique integer, meaning we don't need to compute a hash, and we don't need to worry about hashes colliding. So EnumMap simply stores an array of the same size as the enum and looks up directly into it:
// From Java 7's EnumMap
public V get(Object key) {
    return (isValidKey(key) ?
            unmaskNull(vals[((Enum)key).ordinal()]) : null);
}
Stripping away some of the required Map sanity checks, it's simply:
return vals[key.ordinal()];
Note that this is conceptually no different than a standard HashMap, it's simply avoiding a few computations. EnumSet is a little more clever, using the bits in one or more longs to represent array indices, but functionally it's no different than the EnumMap case - we allocate enough slots to cover all possible enum values and can use their integer .ordinal() rather than compute a hash.
So how much faster than HashMap is EnumMap? It's clearly faster, but in truth it's not that much faster. HashMap is already a very efficient data structure, so any optimization on it will only yield marginally better results. In particular, both HashMap and EnumMap are asymptotically the same speed (O(1)), meaning as they get bigger, they behave equally well. This is the primary reason why there isn't a more general data-structure like EnumMap - because it wouldn't be worth the effort relative to HashMap.
The second reason we don't want a more general "FiniteKeysMap" is it would make our lives as users more complicated, which would be worthwhile if it were a marked speed increase, but since it's not would just be a hassle. We would have to define an interface (and probably also a factory pattern) for any type that could be a key in this map. The interface would need to provide a guarantee that every unique instance returns a unique hashcode in the range [0-n), and also provide a way for the map to get n and potentially all n elements. Those last two operations would be better as static methods, but since we couldn't define static methods in an interface (before Java 8), they'd have to either be passed in directly to every map we create, or a separate factory object with this information would have to exist and be passed to the map/set at construction. Because enums are part of the language, they get all of those benefits for free, meaning there's no such cost for end-user programmers to need to take advantage of.
Furthermore, it would be very easy to make mistakes with this interface; say you have a type that has exactly 100,000 unique values. Should it implement our interface? It could. But you'd likely actually be shooting yourself in the foot. This would eat up a lot of unnecessary memory, since our FiniteKeysMap would allocate a new 100,000 length array to represent even an empty map. Generally speaking, that sort of wasted space is not worth the marginal improvement such a data structure would provide.
In short, while your idea is possible, it is not practical. HashMap is so efficient that attempting to create a separate data structure for a very limited number of cases would add far more complexity than value.
For the specific case of faster .contains() checks, you might like Bloom filters. A Bloom filter is a set-like data structure that very efficiently stores very large sets, with the caveat that it may sometimes incorrectly say an element is in the set when it is not (but never the other way around: if it says an element isn't in the set, it definitely isn't). Guava provides a nice BloomFilter implementation.
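Usage is straightforward (the sizing parameters here are illustrative):

    import com.google.common.hash.BloomFilter;
    import com.google.common.hash.Funnels;
    import java.nio.charset.StandardCharsets;

    // Membership checks with a small false-positive rate and no false negatives.
    public class BloomExample {
        public static void main(String[] args) {
            // Sized for ~1M insertions with a 1% false-positive rate.
            BloomFilter<String> filter = BloomFilter.create(
                    Funnels.stringFunnel(StandardCharsets.UTF_8), 1_000_000, 0.01);
            filter.put("alice");
            System.out.println(filter.mightContain("alice")); // true
            System.out.println(filter.mightContain("bob"));   // almost certainly false
        }
    }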
I have read in a lot of places that immutability in Java is a desirable feature. Why is this so?
Immutability makes it easier to reason about the life cycle of the object. It is especially useful in multi-threaded programs as it makes it simpler to share between threads.
Some data structures assume the keys or elements are immutable, or are not changed in critical ways. e.g. Maps and Sets. They don't have to be strictly immutable but it makes it a lot easier if they are.
The downside of immutable objects is that it makes recycling them much harder, which can significantly impact performance.
In short: if performance is an issue, consider mutable objects; if not, use immutable objects as much as possible.
The main benefit of immutability is that an object cannot be changed "out from under you". E.g., consider a String that is a key in a Map. If it were mutable, one could insert a key/value pair and then modify the String that is the key. This would break the hash scheme used in the map and lead to potentially very unpredictable behavior.
In some cases such mutability could lead to security exposures, due to intentional abuse, but more commonly it would simply lead to mysterious bugs. And "defensive" copying of the mutable objects (to avoid such bugs) would lead to many more objects being created (especially if a formulaic approach to the problem is taken).
(On the other hand, lots of objects are created anyway, e.g. because a new String is created when someone appends to a String or "chops off" a part of it, versus simply mutating the existing String, so both approaches end up causing more objects to be created, for different reasons.)
Immutability is highly desired when you need to know that a class hasn't changed when you don't expect. Consider the keys in a hash map. If you set a mutable object for a key, then store it at the key's current hash value (mod hash array size), what happens when you change it? The hash changes and suddenly it's in the wrong spot! You (the HashMap) don't know that it has changed, so you don't know to update its location, so the next time someone tries to look up that key, you won't find it. Immutability removes this problem - if you put an object somewhere based on its hash, you know it will always be where you expect it to be.
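Here's a quick demonstration of that failure mode, using a mutable List as a key:

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Mutating a key after insertion strands the entry in its old bucket.
    public class MutableKeyDemo {
        public static void main(String[] args) {
            Map<List<Integer>, String> map = new HashMap<>();
            List<Integer> key = new ArrayList<>();
            key.add(1);
            map.put(key, "value");

            key.add(2); // the key's hashCode changes

            System.out.println(map.get(key));         // null: searched in the wrong bucket
            System.out.println(map.containsKey(key)); // false
            System.out.println(map.size());           // 1: the entry is still there, unreachable
        }
    }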
And, as pointed out elsewhere, being immutable means it's safe to read concurrently from multiple threads without synchronization.
In programming, an immutable class or object is one whose state cannot be modified after it is created. In some cases objects are still considered immutable even if some internal attributes (fields) change, as long as the object's externally visible nature stays the same.
Immutable objects are often desired for three main reasons:
Thread safety
Higher security than mutable objects
Simplicity
There is a reference you can review for a better understanding of immutability for Java classes and objects:
Effective Java, Item 15: "Immutable classes are easier to design, implement, and use than mutable classes. They are less prone to error and are more secure."