I noticed in the Java 6 source code for String that hashCode only caches values other than 0. The difference in performance is exhibited by the following snippet:
public class Main {
    static void test(String s) {
        long start = System.currentTimeMillis();
        for (int i = 0; i < 10000000; i++) {
            s.hashCode();
        }
        System.out.format("Took %d ms.%n", System.currentTimeMillis() - start);
    }

    public static void main(String[] args) {
        String z = "Allocator redistricts; strict allocator redistricts strictly.";
        test(z);
        test(z.toUpperCase());
    }
}
Running this in ideone.com gives the following output:
Took 1470 ms.
Took 58 ms.
So my questions are:
Why doesn't String's hashCode() cache 0?
What is the probability that a Java string hashes to 0?
What's the best way to avoid the performance penalty of recomputing the hash value every time for strings that hash to 0?
Is this the best-practice way of caching values? (i.e. cache all except one?)
For your amusement, each line here is a string that hashes to 0:
pollinating sandboxes
amusement & hemophilias
schoolworks = perversive
electrolysissweeteners.net
constitutionalunstableness.net
grinnerslaphappier.org
BLEACHINGFEMININELY.NET
WWW.BUMRACEGOERS.ORG
WWW.RACCOONPRUDENTIALS.NET
Microcomputers: the unredeemed lollipop...
Incentively, my dear, I don't tessellate a derangement.
A person who never yodelled an apology, never preened vocalizing transsexuals.
You're worrying about nothing. Here's a way to think about this issue.
Suppose you have an application that does nothing but sit around hashing Strings all year long. Let's say it takes a thousand strings, all in memory, calls hashCode() on them repeatedly in round-robin fashion, a million times through, then gets another thousand new strings and does it again.
And suppose that the likelihood of a string's hash code being zero were, in fact, much greater than 1/2^32. I'm sure it is somewhat greater than 1/2^32, but let's say it's a lot worse than that, like 1/2^16 (the square root! now that's a lot worse!).
In this situation, you stand to benefit more than anyone else alive from Oracle's engineers improving how these strings' hash codes are cached. So you write to them and ask them to fix it. And they work their magic so that whenever s.hashCode() is zero, it returns instantaneously (even the first time! a 100% improvement!). And let's say that they do this without degrading the performance at all for any other case.
Hooray! Now your app is... let's see... 0.0015% faster!
What used to take an entire day now takes only 23 hours, 59 minutes and 58.7 seconds!
And remember, we set up the scenario to give every possible benefit of the doubt, often to a ludicrous degree.
Does this seem worth it to you?
EDIT: since posting this a couple hours ago, I've let one of my processors run wild looking for two-word phrases with zero hash codes. So far it's come up with: bequirtle zorillo, chronogrammic schtoff, contusive cloisterlike, creashaks organzine, drumwood boulderhead, electroanalytic exercisable, and favosely nonconstruable. This is out of about 2^35 possibilities, so with perfect distribution we'd expect to see only 8. Clearly by the time it's done we'll have a few times that many, but not outlandishly more. What's more significant is that I've now come up with a few interesting band names/album names! No fair stealing!
It uses 0 to indicate "I haven't worked out the hashcode yet". The alternative would be to use a separate Boolean flag, which would take more memory. (Or to not cache the hashcode at all, of course.)
I don't expect many strings to hash to 0; arguably it would make sense for the hashing routine to deliberately avoid 0 (e.g. translate a hash of 0 to 1, and cache that). That would increase collisions but avoid rehashing. It's too late to do that now, though, as the String hashCode algorithm is explicitly documented.
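For illustration, a minimal sketch of that "avoid 0" variant, reusing the hash/value field names from the Java 8 source quoted further down this page (the remapping of 0 to 1 is the hypothetical change, not anything the JDK does):

public int hashCode() {
    int h = hash;
    if (h == 0) {
        for (int i = 0; i < value.length; i++) {
            h = 31 * h + value[i];
        }
        if (h == 0) {
            h = 1; // hypothetical remap: 0 stays free to mean "not cached"
        }
        hash = h; // every string, including the remapped ones, is now cached
    }
    return h;
}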
As for whether this is a good idea in general: it's certainly an efficient caching mechanism, and might (see edit) be even better with a change to avoid rehashing values which end up with a hash of 0. Personally I would be interested to see the data which led Sun to believe this was worth doing in the first place - it's taking up an extra 4 bytes for every string ever created, however often or rarely it's hashed, and the only benefit is for strings which are hashed more than once.
EDIT: As KevinB points out in a comment elsewhere, the "avoid 0" suggestion above may well have a net cost because it helps a very rare case, but requires an extra comparison for every hash calculation.
I think there's something important that the other answers so far are missing: the zero value exists so that the hashCode-caching mechanism works robustly in a multi-threaded environment.
If you had two variables, like cachedHashCode itself and an isHashCodeCalculated boolean to indicate whether cachedHashCode had been calculated, you'd need thread synchronization for things to work in a multithreaded environment. And synchronization would be bad for performance, especially since Strings are very commonly reused in multiple threads.
My understanding of the Java memory model is a little sketchy, but here's roughly what's going on:
1. When multiple threads access a variable (like the cached hashCode), there's no guarantee that each thread will see the latest value. If a variable starts at zero, then thread A updates it (sets it to a non-zero value), and thread B reads it shortly afterwards, thread B could still see the zero value.

2. There's another problem with accessing shared values from multiple threads (without synchronization) - you can end up trying to use an object that's only been partly initialized (constructing an object is not an atomic process). Multi-threaded reads and writes of 64-bit primitives like longs and doubles are not necessarily atomic either, so if two threads try to read and change the value of a long or a double, one thread can end up seeing something weird and partially set. Or something like that anyway. There are similar problems if you try to use two variables together, like cachedHashCode and isHashCodeCalculated - a thread can easily come along and see the latest version of one of those variables, but an older version of the other.

3. The usual way to get around these multi-threading issues is to use synchronization. For example, you could put all access to the cached hashCode inside a synchronized block, or you could use the volatile keyword (although be careful with that, because the semantics are a little confusing).

4. However, synchronization slows things down. Bad idea for something like a string hashCode. Strings are very often used as keys in HashMaps, so you need the hashCode method to perform well, including in multi-threaded environments.

5. Java primitives that are 32 bits or smaller, like int, are special. Unlike with, say, a long (64-bit value), you can be sure that you will never read a partially written value of an int (32 bits). When you read an int without synchronization, you can't be sure that you'll get the latest value that was set, but you can be sure that the value you do get is a value that has explicitly been set at some point by your thread or another thread.
The hashCode caching mechanism in java.lang.String is set up to rely on point 5 above. You might understand it better by looking at the source of java.lang.String.hashCode(). Basically, with multiple threads calling hashCode at once, hashCode might end up being calculated multiple times (either if the calculated value is zero or if multiple threads call hashCode at once and both see a zero cached value), but you can be sure that hashCode() will always return the same value. So it's robust, and it's performant too (because there's no synchronization to act as a bottleneck in multi-threaded environments).
Like I said, my understanding of the Java memory model is a little sketchy, but I'm pretty sure I've got the gist of the above right. Ultimately it's a very clever idiom for caching the hashCode without the overhead of synchronization.
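The idiom generalizes to any immutable class: keep a single non-volatile int cache field, treat 0 as "not computed yet", and tolerate the benign race (several threads may recompute, but they all write the same value). A minimal sketch using a hypothetical Point class of my own:

public final class Point {
    private final int x, y;
    private int cachedHash; // 0 means "not computed yet"; subject to a benign data race

    Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    @Override
    public int hashCode() {
        int h = cachedHash;   // read the field once; 32-bit reads are never torn
        if (h == 0) {
            h = 31 * x + y;   // idempotent: derived only from final (immutable) state
            cachedHash = h;   // unsynchronized write: racing threads all write the same value
        }
        return h;             // a hash of exactly 0 is recomputed every call, as with String
    }

    // equals() omitted for brevity; a real class would override it consistently
}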
0 isn't cached as the implementation interprets a cached value of 0 as "cached value not yet initialised". The alternative would have been to use a java.lang.Integer, whereby null implied that the value was not yet cached. However, this would have meant an additional storage overhead.
Regarding the probability of a String's hash code being computed as 0 I would say the probability is quite low and can happen in the following cases:
The String is empty (although recomputing this hash code each time is effectively O(1)).
The computation overflows int arithmetic and the wrapped sum lands exactly on 0 (this is what happens for the strings listed in the question, e.g. "pollinating sandboxes").
The String contains only Unicode character 0. Very unlikely as this is a control character with no meaning apart from in the "paper tape world" (!):
From Wikipedia:

"Code 0 (ASCII code name NUL) is a special case. In paper tape, it is the case when there are no holes. It is convenient to treat this as a fill character without meaning otherwise."
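The overflow case is easy to check against one of the strings from the question's own list (I'm asserting only what the question already claims):

public class ZeroHashCheck {
    public static void main(String[] args) {
        // Per the question's list, this phrase wraps around to exactly 0:
        System.out.println("pollinating sandboxes".hashCode()); // 0
        System.out.println("".hashCode()); // 0: the empty string's hash is defined as 0
    }
}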
This turns out to be a good question, related to a security vulnerability.
"When hashing a string, Java also caches the
hash value in the hash attribute, but only if the result is different from zero.
Thus, the target value zero is particularly interesting for an attacker as it prevents caching
and forces re-hashing."
Ten years later, and things have changed. I honestly can't believe this (but the geek in me is ultra-happy).

As you have noted, there are Strings whose String::hashCode is zero, and that value was not cached (we will get to that). A lot of people argued (including in this Q&A) for adding a field to java.lang.String, something like hashAlreadyComputed, and simply using that. The problem is obvious: extra space for every single String instance. There is, by the way, a reason java-9 introduced Compact Strings: many benchmarks have shown that this is a rather (over)used class in the majority of applications. Add more space? The decision was: no. Especially since the smallest possible addition would have been 1 byte, not 1 bit (for 32-bit JVMs, the extra space would have been 8 bytes: 1 for the flag, 7 for alignment).

So, Compact Strings came along in java-9, and if you look carefully (or care), they did add a field to java.lang.String: coder. Didn't I just argue against that? It's not that easy. It seems that the importance of Compact Strings outweighed the "extra space" argument. It is also important to say that the extra space matters for 32-bit VMs only (because there was no alignment gap there to absorb it). In contrast, in jdk-8 the layout of java.lang.String is:
java.lang.String object internals:
 OFFSET  SIZE     TYPE DESCRIPTION                               VALUE
      0    12          (object header)                           N/A
     12     4   char[] String.value                              N/A
     16     4      int String.hash                               N/A
     20     4          (loss due to the next object alignment)
Instance size: 24 bytes
Space losses: 0 bytes internal + 4 bytes external = 4 bytes total
Notice an important thing right there:
Space losses : ... 4 bytes total.
Because every Java object is aligned (to how much depends on the JVM and some start-up flags, like UseCompressedOops for example), in String there was a gap of 4 unused bytes. So when coder was added, it simply took 1 byte of that gap without increasing the instance size. As such, after Compact Strings were added, the layout changed:
java.lang.String object internals:
 OFFSET  SIZE     TYPE DESCRIPTION                               VALUE
      0    12          (object header)                           N/A
     12     4   byte[] String.value                              N/A
     16     4      int String.hash                               N/A
     20     1     byte String.coder                              N/A
     21     3          (loss due to the next object alignment)
Instance size: 24 bytes
Space losses: 0 bytes internal + 3 bytes external = 3 bytes total
coder eats 1 byte and the gap shrank to 3 bytes. So the "damage" was already done in jdk-9. For 32-bit JVMs there was an increase of 8 bytes (1 for coder + 7 of gap), and for 64-bit JVMs there was no increase: coder occupied some of the existing gap.
And now, in jdk-13, they decided to leverage that gap, since it exists anyway. Let me just remind you that the probability of a String having a zero hashCode is about 1 in 4 billion; still, there are people who say: so what? Let's fix this! Voilà: the jdk-13 layout of java.lang.String:
java.lang.String object internals:
 OFFSET  SIZE     TYPE DESCRIPTION                               VALUE
      0    12          (object header)                           N/A
     12     4   byte[] String.value                              N/A
     16     4      int String.hash                               N/A
     20     1     byte String.coder                              N/A
     21     1  boolean String.hashIsZero                         N/A
     22     2          (loss due to the next object alignment)
Instance size: 24 bytes
Space losses: 0 bytes internal + 2 bytes external = 2 bytes total
And here it is : boolean String.hashIsZero. And here it is in the code-base:
public int hashCode() {
    int h = hash;
    if (h == 0 && !hashIsZero) {
        h = isLatin1() ? StringLatin1.hashCode(value)
                       : StringUTF16.hashCode(value);
        if (h == 0) {
            hashIsZero = true;
        } else {
            hash = h;
        }
    }
    return h;
}
Wait! h == 0 and a hashIsZero field? Shouldn't that field be named something like hashAlreadyComputed? Why isn't the implementation something along the lines of:
@Override
public int hashCode() {
    if (!hashCodeComputed) {
        // or any other sane computation
        hash = 42;
        hashCodeComputed = true;
    }
    return hash;
}
Even when I read the comment in the source code:
// The hash or hashIsZero fields are subject to a benign data race,
// making it crucial to ensure that any observable result of the
// calculation in this method stays correct under any possible read of
// these fields. Necessary restrictions to allow this to be correct
// without explicit memory fences or similar concurrency primitives is
// that we can ever only write to one of these two fields for a given
// String instance, and that the computation is idempotent and derived
// from immutable state
It only made sense after I read this. Rather tricky, but the key is that for a given String instance only one of the two fields is ever written: if the computed hash is zero, only hashIsZero is set; otherwise only hash is. With the naive two-field version above, a thread could observe hashCodeComputed == true while still reading a stale hash field and return a wrong value; with the one-write-per-instance scheme, any combination of reads a thread can observe still yields a correct result. Lots more details in the discussion above.
Why doesn't String's hashCode() cache 0?
The value zero is reserved as meaning "the hash code is not cached".
What is the probability that a Java string hashes to 0?
According to the Javadoc, the formula for a String's hashcode is:
s[0]*31^(n-1) + s[1]*31^(n-2) + ... + s[n-1]
using int arithmetic, where s[i] is the ith character of the string and n is the length of the string. (The hash of the empty String is defined to be zero as a special case.)
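A direct transcription of that documented formula, just to make the arithmetic concrete (the method name is mine; by the documented contract it reproduces what String.hashCode() returns):

public class HashFormula {
    // s[0]*31^(n-1) + s[1]*31^(n-2) + ... + s[n-1], in wrapping int arithmetic
    static int stringHash(String s) {
        int h = 0;
        for (int i = 0; i < s.length(); i++) {
            h = 31 * h + s.charAt(i); // Horner's-rule form of the polynomial
        }
        return h;
    }

    public static void main(String[] args) {
        String s = "hello";
        System.out.println(stringHash(s) == s.hashCode()); // true
    }
}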
My intuition is that the hashcode function above gives a uniform spread of String hash values across the range of int values. A uniform spread would mean that the probability of a randomly generated String hashing to zero is 1 in 2^32.
What's the best way to avoid the performance penalty of recomputing the hash value every time for strings that hash to 0?
The best strategy is to ignore the issue. If you are repeatedly hashing the same String value, there is something rather strange about your algorithm.
Is this the best-practice way of caching values? (i.e. cache all except one?)
This is a space versus time trade-off. AFAIK, the alternatives are:
Add a cached flag to each String object, making every Java String take an extra word.
Use the top bit of the hash member as the cached flag. That way you can cache all hash values, but you only have half as many possible String hash values.
Don't cache hashcodes on Strings at all.
I think that the Java designers have made the right call for Strings, and I'm sure that they have done extensive profiling that confirms the soundness of their decision. However, it does not follow that this would always be the best way to deal with caching.
(Note that there are two "common" String values which hash to zero; the empty String, and the String consisting of just a NUL character. However, the cost of calculating the hashcodes for these values is small compared with the cost of calculating the hashcode for a typical String value.)
Well folks, it keeps 0 because a zero-length string will end up with zero as its hash anyway.

And it doesn't take long to figure out that the length is zero, and so the hashcode must be too.

So, for your code review: here it is in all its Java 8 glory:
public int hashCode() {
    int h = hash;
    if (h == 0 && value.length > 0) {
        char val[] = value;

        for (int i = 0; i < value.length; i++) {
            h = 31 * h + val[i];
        }
        hash = h;
    }
    return h;
}
As you can see, this will always return a quick zero if the string is empty:
if (h == 0 && value.length > 0) ...
The "avoid 0" suggestion seems appropriate to recommend as best practice as it helps a genuine problem (seriously unexpected performance degradation in constructible cases that can be attacker supplied) for the meager cost of a branch operation prior to a write. There is some remaining 'unexpected performance degradation' that can be exercised if the only things going into a set hash to the special adjusted value. But this is at worst a 2x degradation rather than unbounded.
Of course, String's implementation can't be changed but there is no need to perpetuate the problem.
Related
public class ConstantsAndCasting {
    public static void main(String args[]) {
        long hugeNum = 23456013477456L;
        int smallNum = (int) hugeNum;

        System.out.println(hugeNum);
        System.out.println(smallNum);
    }
}
The output to the code listed above is:
23456013477456
1197074000
It appears as though when casting a long to an int, Java retains the 32 bits starting from the right and working left. What results is a value that is closer to 0 than the original long value. This totally makes sense from a machine perspective, but what is the practical use of it? It seems like you'd be better off using a random number generator to produce 10 random characters.
Thanks in advance!
It's a little unclear how to answer that... As with any type conversion, it is used when you have a value of one type (in this case, long) but you need a value of a different type (in this case, int).
Yes, sometimes it will give you unwanted results, because of the limitations of the casting operation (Which, in turn, are based on the limitations of the data types). But picking one of those out, and saying "so what's the point in ever doing this operation", would be like saying "if I have two ints, a and b, each valued at 2,000,000,000 - adding them doesn't get the desired result... so what's the point adding ints?"
The primary useful purpose of casting a long to an int is to take a long value which is in the range -2147483648..2147483647 and use that number with code that expects an int value in that range. The behavior for values outside that range was chosen because:
The designers of Java wanted to fully specify its behavior whenever practical.
Taking the bottom 32 bits and ignoring the top 32 bits was faster than any other consistent course of action which would work sensibly with values in the range -2147483648..2147483647 (demonstrated in the sketch after this list).
There are a number of situations where other courses of action might sometimes allow more efficient code generation if consistency wasn't required, or more useful semantics if speed wasn't required. The approach that was taken, however, offers the best compromise between speed, usefulness, and consistency.
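That bottom-32-bits rule is straightforward to demonstrate (the class name is mine; the JLS specifies long-to-int narrowing as discarding all but the low-order 32 bits):

public class NarrowingDemo {
    public static void main(String[] args) {
        long hugeNum = 23456013477456L;
        int viaCast = (int) hugeNum;                 // narrowing conversion
        int viaMask = (int) (hugeNum & 0xFFFFFFFFL); // keep only the low 32 bits explicitly

        System.out.println(viaCast);            // 1197074000, as in the question
        System.out.println(viaCast == viaMask); // true: the cast keeps exactly the low 32 bits
    }
}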
I am going through the source code of java.util.Random and I have noticed that the source code for nextBoolean()
public boolean nextBoolean() {
    return next(1) != 0;
}
calls for a new draw of pseudorandom bits, via next(int bits) which "iterates" the LC-PRNG to the next state, i.e. "draws" a whole set of new bits, even though only one bit is used in nextBoolean. This effectively means that rest of the bits (47 to be exact) are pretty much wasted in that particular iteration.
I have looked at another PRNG which appears to do essentially the same thing, even though the underlying generator is different. Multiple bits from the same iteration are already used together for other method calls (such as nextInt(), nextLong(), ...), and consecutive bits are assumed to be "independent enough" from one another.

So my question is: why is only one bit used from a draw of the PRNG in nextBoolean()? It should be possible to cache, say, 16 bits (if one wants to use the highest-quality bits) for successive calls to nextBoolean(), am I mistaken here?
Edit: What I mean by caching the results is something like this:
private long booleanBits = 0L;
private int c = 0; // bits left in the cache; starts at 0 to force a fill on the first call

public boolean nextBoolean() {
    if (c == 0) {
        booleanBits = nextLong();
        c = Long.SIZE;
    }
    boolean b = (booleanBits & 1) != 0;
    booleanBits >>>= 1;
    c--;
    return b;
    //return ( next() & 1 ) != 0;
}
Sure, it's not as short and pretty as the commented-out line, but it ends up doing 64x fewer draws, at the cost of one int comparison, one decrement, and one right-shift per call to nextBoolean(). Am I mistaken?
Edit2: Ok, I had to test the timings, see the code here. The output is as follows:
Uncached time lapse: 13891
Cached time lapse: 8672
Testing if the order matters..:
Cached time lapse: 6751
Uncached time lapse: 8737
Which suggest that caching the bits is not a computational burden but an improvement instead. A couple of things I should note about this test:
I use a custom implementation of xorshift* generators that is heavily inspired from Sebastiano Vigna's work on xorshift* generators.
xorshift* generators are actually much faster than Java's native generator. So if I were to use java.util.Random to draw my bits, caching would make a larger impact. Or that's what I would expect at least.
A single-threaded application is assumed here, so there are no synchronization issues. That is of course common to both conditions.
Conditionals of any kind can be quite expensive (see Why is it faster to process a sorted array than an unsorted array?), and next itself doesn't do that many more operations: I count five arithmetic operations plus a compareAndSet, which shouldn't cost much in a single-threaded context.
The compareAndSet points out another issue -- thread-safety -- which is much harder when you have two variables that need to be kept in sync, such as booleanBits and c. The synchronization overhead of keeping those in sync for multithreaded use would almost certainly exceed the cost of a next() call.
So I was reading Peter Norvig's IAQ (infrequently asked questions - link) and stumbled upon this:
You might be surprised to find that an Object takes 16 bytes, or 4 words, in the Sun JDK VM. This breaks down as follows: There is a two-word header, where one word is a pointer to the object's class, and the other points to the instance variables. Even though Object has no instance variables, Java still allocates one word for the variables. Finally, there is a "handle", which is another pointer to the two-word header. Sun says that this extra level of indirection makes garbage collection simpler. (There have been high performance Lisp and Smalltalk garbage collectors that do not use the extra level for at least 15 years. I have heard but have not confirmed that the Microsoft JVM does not have the extra level of indirection.)

An empty new String() takes 40 bytes, or 10 words: 3 words of pointer overhead, 3 words for the instance variables (the start index, end index, and character array), and 4 words for the empty char array. Creating a substring of an existing string takes "only" 6 words, because the char array is shared. Putting an Integer key and Integer value into a Hashtable takes 64 bytes (in addition to the four bytes that were pre-allocated in the Hashtable array): I'll let you work out why.
So well I obviously tried, but I can't figure it out. In the following I only count words:
A Hashtable put creates one Hashtable$Entry: 3 (overhead) + 4 variables (3 references, which I assume are 1 word each, + 1 int). I further assume that he means the Integers are newly allocated (so not cached by the Integer class or already existing), which comes to 2 * (3 [overhead] + 1 [int value]).

So in the end we end up with... 15 words, or 60 bytes. What I first thought was that Entry, as an inner class, needs a reference to its outer object, but alas it's static, so that doesn't make much sense (sure, we have to store a pointer to the enclosing class, but I'd think that information is stored in the class header by the VM).
Just idle curiosity and I'm well aware that all this depends to a good bit on the actual JVM implementation (and on a 64bit version the results would be different), but still I don't like questions I can't answer :)
Edit: Just to make this a bit clearer: While I'm well aware that more compact structures can get us some performance benefits, I agree that in general worrying about a few bytes here or there is a waste of time. I surely wouldn't stop using a Hashtable just because of a few bytes overhead here or there just like I wouldn't use plain char arrays instead of Strings (or start using C). This is purely of academic interest to learn a bit more about the insides of Java/the JVM :)
The author appears to assume there are 3 objects with 16 bytes of overhead each, plus 2 32-bit references in the Map.Entry and 2 32-bit int values. That would total 64 bytes.

This is flawed in that Sun/Oracle's JVM only allocates on 8-byte boundaries, so while technically an Integer occupies 20 bytes of memory, 24 bytes are used (the next multiple of 8).
Additionally many JVMs now use 64-bit references so the Map.Entry would use another 16 bytes.
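If you'd rather measure than count words by hand, the JOL tool that produced the layout dumps earlier on this page can report this directly. A minimal sketch, assuming the org.openjdk.jol:jol-core dependency is on the classpath (exact output varies by JVM and flags):

import org.openjdk.jol.info.ClassLayout;
import org.openjdk.jol.info.GraphLayout;

public class LayoutCheck {
    public static void main(String[] args) {
        // Field-by-field layout of Integer, including header and padding
        System.out.println(ClassLayout.parseClass(Integer.class).toPrintable());

        // Total retained size of a freshly boxed value (1000 is outside the Integer cache)
        System.out.println(GraphLayout.parseInstance(Integer.valueOf(1000)).toFootprint());
    }
}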
This is all very inefficient, which is why you might use a class like TIntIntHashMap instead, which uses primitives.
However, usually it doesn't matter, as memory is surprisingly cheap when you compare it to the cost of your time. If you work on server applications and you cost your company about $40/hour, you need to be saving about 10 MB every minute to save as much in memory as you are costing in time. (Ideally you need to be saving much more than this.) Saving 10 MB each and every minute is hard.
Memory is reusable, but your time isn't.
I have an object with a String that holds a unique id (such as "ocx7gf" or "67hfs8").

I need to supply it with an implementation of int hashCode() which will be unique, obviously.

How do I cast a string to a unique int in the easiest/fastest way?

Thanks.

Edit - OK, I already know that String.hashCode is possible. But it is not recommended anywhere. Actually, if some other method is not recommended, should I still use it when I have my object in a collection and I need the hashcode? Should I concat it to another string to make it more successful?
No, you don't need to have an implementation that returns a unique value, "obviously", as obviously the majority of implementations would be broken.
What you want to do, is to have a good spread across bits, especially for common values (if any values are more common than others). Barring special knowledge of your format, then just using the hashcode of the string itself would be best.
With special knowledge of the limits of your id format, it may be possible to customise and result in better performance, though false assumptions are more likely to make things worse than better.
Edit: On good spread of bits.
As stated here and in other answers, being completely unique is impossible and hash collisions are possible. Hash-using methods know this and can deal with it, but it does impact upon performance, so we want collisions to be rare.
Further, hashes are generally re-hashed, so our 32-bit number may end up being reduced to, e.g., one in the range 0 to 22, and we want as good a distribution within that as possible too.
We also want to balance this with not taking so long to compute our hash, that it becomes a bottleneck in itself. An imperfect balancing act.
A classic example of a bad hash method is one for a co-ordinate pair of X, Y ints that does:
return X ^ Y;
While this does a perfectly good job of returning 2^32 possible values out of the 4^32 possible inputs, in real world use it's quite common to have sets of coordinates where X and Y are equal ({0, 0}, {1, 1}, {2, 2} and so on) which all hash to zero, or matching pairs ({2,3} and {3, 2}) which will hash to the same number. We are likely better served by:
return ((X << 16) | (X >>> 16)) ^ Y;
Now, there are just as many possible values for which this is dreadful as for the former, but it tends to serve better in real-world cases.
Of course, it is a different job if you are writing a general-purpose class (with no idea what possible inputs there are) than if you have a better idea of the purpose at hand. For example, if I was using Date objects but knew that they would all be dates only (time part always midnight) and only within a few years of each other, then I might prefer a custom hash code that used only the day, month and lower digits of the year, over the standard one. The writer of Date, though, can't work from such knowledge and has to try to cater for everyone.
Hence, if I for instance knew that a given string will always consist of 6 case-insensitive characters in the range [a-z] or [0-9] (which yours seem to, but it isn't clear from your question that they do), then I might use an algorithm that assigns a value from 0 to 35 (the 36 possible values for each character) to each character, and then walks through the string, each time multiplying the current value by 36 and adding the value of the next char.
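A sketch of that scheme, as I understand it (the class, method name and digit mapping are mine; it assumes lowercase [a-z0-9] ids):

public class IdHash {
    // Per-character base-36 accumulation as described above. Since 36^6 < 2^32,
    // the result is distinct for every distinct fixed-length 6-char id, although
    // the highest values wrap past Integer.MAX_VALUE into negative ints - fine
    // for a hash, but it is not a unique encoding across ids of mixed lengths.
    static int base36Hash(String id) {
        int h = 0;
        for (int i = 0; i < id.length(); i++) {
            char c = id.charAt(i);
            int v = (c >= '0' && c <= '9') ? c - '0' : 10 + (c - 'a');
            h = h * 36 + v;
        }
        return h;
    }

    public static void main(String[] args) {
        System.out.println(base36Hash("ocx7gf")); // one of the question's example ids
    }
}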
Assuming a good spread in the ids, this would be the way to go, especially if I made the order such that the lower-significant digits in my hash matched the most frequently changing char in the id (if such a call could be made), hence surviving re-hashing to a smaller range well.
However, lacking such knowledge of the format for sure, I can't make that call with certainty, and I could well be making things worse (slower algorithm for little or even negative gain in hash quality).
One advantage you have is that since it's an ID in itself, then presumably no other non-equal object has the same ID, and hence no other properties need be examined. This doesn't always hold.
You can't get a unique integer from a String of unlimited length. There are 4 billionish (2^32) unique integers, but an almost infinite number of unique strings.
String.hashCode() will not give you unique integers, but it will do its best to give you differing results based on the input string.
EDIT
Your edited question says that String.hashCode() is not recommended. This is not true, it is recommended, unless you have some special reason not to use it. If you do have a special reason, please provide details.
Looks like you've got a base-36 number there (a-z + 0-9). Why not convert it to an int using Integer.parseInt(s, 36)? Obviously, if there are too many unique IDs, it won't fit into an int, but in that case you're out of luck with unique integers and will need to get by using String.hashCode(), which does its best to be close to unique.
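For example (hedging on the exact id format; note that Integer.parseInt throws NumberFormatException on overflow rather than wrapping):

public class Base36Ids {
    public static void main(String[] args) {
        System.out.println(Integer.parseInt("ocx7gf", 36)); // this id happens to fit in an int
        System.out.println(Long.parseLong("zzzzzz", 36));   // 2176782335: too big for an int

        try {
            Integer.parseInt("zzzzzz", 36);                 // 36^6 - 1 > Integer.MAX_VALUE
        } catch (NumberFormatException e) {
            System.out.println("overflows int: " + e.getMessage());
        }
    }
}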
Unless your strings are limited in some way or your integers hold more bits than the strings you're trying to convert, you cannot guarantee the uniqueness.
Let's say you have a 32 bit integer and a 64-character character set for your strings. That means six bits per character. That will allow you to store five characters into an integer. More than that and it won't fit.
Represent each string character by a five-digit binary number, e.g. a by 00001, b by 00010, etc., so 32 combinations are possible. For example, cat might be written as 00100 00001 01100; then convert this binary into decimal, which here would be 4140, so cat would be 4140. Similarly, you can get cat back from 4140 by converting it to binary first and mapping each five-digit group back to a character.
One way to do it is to assign each letter a value and give each position in the string its own multiplier: i.e. a = 1, b = 2, and so on; then everything in the first position (reading left to right) would be multiplied by a prime number, the next by the next prime number, and so on, such that the final position is multiplied by a prime larger than the number of possible values in that position (26+1 for a space, or 52+1 with capitals, and so on for other supported characters). If the number is mapped back to the first position (the leftmost character), any number you generate from a unique string maps back to 1 or 6 or whatever the first letter is, giving a unique value.

Dog might be 30, 3(15), 101(7), or 782, while God is 33, 3(15), 101(4), or 482. More important than unique strings being generated, they can be useful in generation if the original digit is kept: 30(782) would be unique from some 12(782), for the purposes of differentiating similar strings if you ever managed to go over the unique possibilities. Dog would always be Dog, but it would never be Cat or Mouse.
I understand that the String class's hashCode() method is not guaranteed to generate unique hash codes for distinct Strings. I see a lot of usage of putting String keys into HashMaps (using the default String hashCode() method). A lot of this usage could result in significant application issues if a map put displaced a HashMap entry that was previously put onto the map with a truly distinct String key.
What are the odds that you will run into the scenario where String.hashCode() returns the same value for distinct String-s? How do developers work around this issue when the key is a String?
Developers do not have to work around the issue of hash collisions in HashMap in order to achieve program correctness.
There are a couple of key things to understand here:
Collisions are an inherent feature of hashing, and they have to be. The number of possible values (Strings in your case, but it applies to other types as well) is vastly bigger than the range of integers.
Every usage of hashing has a way to handle collisions, and the Java Collections (including HashMap) is no exception.
Hashing is not involved in equality testing. It is true that equal objects must have equal hashcodes, but the reverse is not true: many values will have the same hashcode. So don't try using a hashcode comparison as a substitute for equality. Collections don't. They use hashing to select a sub-collection (called a bucket in the Java Collections world), but they use .equals() to actually check for equality.
Not only do you not have to worry about collisions causing incorrect results in a collection, but for most applications, you also *usually* don't have to worry about performance - Java hashed Collections do a pretty good job of managing hashcodes.
Better yet, for the case you asked about (Strings as keys), you don't even have to worry about the hashcodes themselves, because Java's String class generates a pretty good hashcode. So do most of the supplied Java classes.
Some more detail, if you want it:
The way hashing works (in particular, in the case of hashed collections like Java's HashMap, which is what you asked about) is this:
The HashMap stores the values you give it in a collection of sub-collections, called buckets. These are actually implemented as linked lists. There are a limited number of these: iirc, 16 to start by default, and the number increases as you put more items into the map. There should always be more buckets than values. To provide one example, using the defaults, if you add 100 entries to a HashMap, there will be 256 buckets.
Every value which can be used as a key in a map must be able to generate an integer value, called the hashcode.
The HashMap uses this hashcode to select a bucket. Ultimately, this means taking the integer value modulo the number of buckets, but before that, Java's HashMap has an internal method (called hash()) which tweaks the hashcode to reduce some known sources of clumping (sketched just after this list).
When looking up a value, the HashMap selects the bucket, and then searches for the individual element by a linear search of the linked list, using .equals().
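As a concrete sketch of that bucket-selection step, here is my simplification of the Java 8+ OpenJDK code (the real code also maps a null key to bucket 0):

public class BucketDemo {
    // Simplified from java.util.HashMap (Java 8+): spread the hashcode, then
    // mask it down to a bucket index. n is the table length, always a power of two.
    static int bucketIndex(Object key, int n) {
        int h = key.hashCode();
        h ^= (h >>> 16);      // HashMap.hash(): fold the high bits into the low bits
        return (n - 1) & h;   // equivalent to h mod n, because n is a power of two
    }

    public static void main(String[] args) {
        System.out.println(bucketIndex("hello", 16)); // bucket for "hello" in a 16-bucket table
    }
}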
So: you don't have to work around collisions for correctness, and you usually don't have to worry about them for performance, and if you're using native Java classes (like String), you don't have to worry about generating the hashcode values either.
In the case where you do have to write your own hashcode method (which means you've written a class with a compound value, like a first name/last name pair), things get slightly more complicated. It's quite possible to get it wrong here, but it's not rocket science. First, know this: the only thing you must do in order to assure correctness is to assure that equal objects yield equal hashcodes. So if you write a hashcode() method for your class, you must also write an equals() method, and you must examine the same values in each.
It is possible to write a hashcode() method which is bad but correct, by which I mean that it would satisfy the "equal objects must yield equal hashcodes" constraint, but still perform very poorly, by having a lot of collisions.
The canonical degenerate worst case of this would be to write a method which simply returns a constant value (e.g., 3) for all cases. This would mean that every value would be hashed into the same bucket.
It would still work, but performance would degrade to that of a linked list.
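A quick way to convince yourself of both halves of that claim (still correct, just slow), using a deliberately terrible but legal hashCode (class names are mine):

import java.util.HashMap;
import java.util.Map;

public class WorstHash {
    static final class Key {
        final int id;
        Key(int id) { this.id = id; }

        @Override public boolean equals(Object o) {
            return o instanceof Key && ((Key) o).id == id;
        }

        // Legal but awful: equal objects do get equal hashcodes, so correctness
        // holds, but every key lands in the same bucket.
        @Override public int hashCode() { return 3; }
    }

    public static void main(String[] args) {
        Map<Key, Integer> map = new HashMap<>();
        for (int i = 0; i < 10_000; i++) {
            map.put(new Key(i), i);
        }
        System.out.println(map.get(new Key(1234))); // 1234: correct, despite total collision
    }
}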
Obviously, you won't write such a terrible hashcode() method. If you're using a decent IDE, it's capable of generating one for you. Since StackOverflow loves code, here's the code for the firstname/lastname class above.
public class SimpleName {
    private String firstName;
    private String lastName;

    public SimpleName(String firstName, String lastName) {
        super();
        this.firstName = firstName;
        this.lastName = lastName;
    }

    @Override
    public int hashCode() {
        final int prime = 31;
        int result = 1;
        result = prime * result
                + ((firstName == null) ? 0 : firstName.hashCode());
        result = prime * result
                + ((lastName == null) ? 0 : lastName.hashCode());
        return result;
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj)
            return true;
        if (obj == null)
            return false;
        if (getClass() != obj.getClass())
            return false;
        SimpleName other = (SimpleName) obj;
        if (firstName == null) {
            if (other.firstName != null)
                return false;
        } else if (!firstName.equals(other.firstName))
            return false;
        if (lastName == null) {
            if (other.lastName != null)
                return false;
        } else if (!lastName.equals(other.lastName))
            return false;
        return true;
    }
}
I direct you to the answer here. While it is not a bad idea to use strings (@CPerkins explained why, perfectly), storing the values in a hashmap with integer keys is better, since it is generally quicker (although unnoticeably so) and has a lower chance (actually, no chance) of collisions.

See this chart of collisions using 216,553 keys in each case (stolen from this post, reformatted for our discussion):
Hash               Lowercase       Random UUID     Numbers
=============      ============    ============    ============
Murmur             145 ns          259 ns           92 ns
                     6 collis        5 collis        0 collis
FNV-1a             152 ns          504 ns           86 ns
                     4 collis        4 collis        0 collis
FNV-1              184 ns          730 ns           92 ns
                     1 collis        5 collis        0 collis*
DBJ2a              158 ns          443 ns           91 ns
                     5 collis        6 collis        0 collis***
DJB2               156 ns          437 ns           93 ns
                     7 collis        6 collis        0 collis***
SDBM               148 ns          484 ns           90 ns
                     4 collis        6 collis        0 collis**
CRC32              250 ns          946 ns          130 ns
                     2 collis        0 collis        0 collis

Avg Time per key   0.8ps           2.5ps           0.44ps
Collisions (%)     0.002%          0.002%          0%
Of course, the number of integers is limited to 2^32, whereas there is no limit to the number of strings (and there is no theoretical limit to the number of keys that can be stored in a HashMap). If you use a long (or even a float), collisions will be inevitable, and therefore no "better" than a string. However, even despite hash collisions, put() and get() will always put/get the correct key-value pair (see the edit below).
In the end, it really doesn't matter, so use whatever is more convenient. But if convenience makes no difference, and you do not intend to have more than 2^32 entries, I suggest you use ints as keys.
EDIT
While the above is definitely true, NEVER use "StringKey".hashCode() to generate a key in place of the original String key for performance reasons - two different strings can have the same hashCode, causing one put() to overwrite another. Java's implementation of HashMap is smart enough to handle strings (any type of key, actually) with the same hashcode automatically, so it is wise to let Java handle these things for you.
I strongly suspect that the HashMap.put method does not determine whether the key is the same by looking only at String.hashCode.

There is definitely a chance of a hash collision, so one would expect that the String.equals method will also be called to make sure the Strings are truly equal, in the case where two Strings have the same value returned from hashCode.

Therefore, a new key String is only judged to be the same as a key String already in the HashMap if the value returned by hashCode is equal and the equals method returns true.
Also to add, this thought would also be true for classes other than String, as the Object class itself already has the hashCode and equals methods.
Edit
So, to answer the question, no, it would not be a bad idea to use a String for a key to a HashMap.
This is not an issue, it's just how hashtables work. It's provably impossible to have distinct hashcodes for all distinct strings, because there are far more distinct strings than integers.
As others have written, hash collisions are resolved via the equals() method. The only problem this can cause is degeneration of the hashtable, leading to bad performance. That's why Java's HashMap has a load factor, a ratio between buckets and inserted elements which, when exceeded, will cause rehashing of the table with twice the number of buckets.
This generally works very well, but only if the hash function is good, i.e. does not result in more than the statistically expected number of collisions for your particular input set. String.hashCode() is good in this regard, but this was not always so. Allegedly, prior to Java 1.2 it only included every n'th character. This was faster, but caused predictable collisions for all Strings sharing every n'th character - very bad if you're unlucky enough to have such regular input, or if someone wants to run a DoS attack on your app.
You are talking about hash collisions. Hash collisions are an issue regardless of the type being hashCode'd. All classes that use hashCode (e.g. HashMap) handle hash collisions just fine. For example, HashMap can store multiple objects per bucket.
Don't worry about it unless you are calling hashCode yourself. Hash collisions, though rare, don't break anything.