I'm trying to develop a reduction function for use within a rainbow table generator.
The basic principle behind a reduction function is that it takes in a hash, performs some calculations, and returns a string of a certain length.
At the moment I'm using SHA1 hashes, and I need to return a string with a length of three. I need the string to be made up of any three characters from:
abcdefghijklmnopqrstuvwxyz0123456789
The major problem I'm facing is that any reduction function I write always returns strings that have already been generated, and a good reduction function should only rarely return duplicate strings.
Could anyone suggest any ideas on a way of accomplishing this? Or any suggestions at all on hash to string manipulation would be great.
Thanks in advance
Josh
So it sounds like you've got 20 digits of base 256 (the 20 bytes of a SHA1 hash) that you need to map into three digits of base 36. I would simply make a BigInteger from the hash bytes, take it modulo 36^3, and return the string in base 36.
public static final BigInteger N36POW3 = new BigInteger("" + 36 * 36 * 36); // 46656
public static String threeDigitBase36(byte[] bs) {
return new BigInteger(bs).mod(N36POW3).toString(36);
}
// ...
threeDigitBase36(sha1("foo")); // => "96b"
threeDigitBase36(sha1("bar")); // => "y4t"
threeDigitBase36(sha1("bas")); // => "p55"
threeDigitBase36(sha1("zip")); // => "ej8"
Of course there will be collisions, as there must be when you map any space into a smaller one, but the entropy should still be reasonable.
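The sha1(...) helper used in the examples above isn't part of the answer; a minimal sketch using the JDK's MessageDigest (the helper name and charset choice are assumptions) could look like this:
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public static byte[] sha1(String s) {
    try {
        // MessageDigest is the standard JDK way to compute a SHA-1 digest
        return MessageDigest.getInstance("SHA-1").digest(s.getBytes(StandardCharsets.UTF_8));
    } catch (NoSuchAlgorithmException e) {
        throw new IllegalStateException(e); // every JRE is required to provide SHA-1
    }
}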
Applying the KISS principle:
An SHA is just a String
The JDK hashcode for String is "random enough"
Integer can render in any base
This single line of code does it:
public static String shortHash(String sha) {
return Integer.toString(sha.hashCode() & 0x7FFFFFFF, 36).substring(0, 3);
}
Note: The & 0x7FFFFFFF is to zero the sign bit (hash codes can be negative numbers, which would otherwise render with a leading minus sign).
Edit - Guaranteeing hash length
My original solution was naive - it didn't deal with the case when the int hash is less than 100 (base 36), meaning the base-36 string would have fewer than 3 chars (and the substring() call would fail). This code fixes that, while still keeping the value "random". It also avoids the substring() call, so performance should be better.
static int min = Integer.parseInt("100", 36);
static int range = Integer.parseInt("zzz", 36) - min;
public static String shortHash(String sha) {
return Integer.toString(min + (sha.hashCode() & 0x7FFFFFFF) % range, 36);
}
This code guarantees the final hash has 3 characters by forcing it to be between 100 and zzz - the lowest and highest 3-char hash in base 36, while still making it "random".
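For illustration, a quick usage sketch (the input here is an arbitrary placeholder for a hex-encoded SHA string; the output is simply whatever 3-character base-36 value its hashCode maps to):
String sha = "2fd4e1c67a2d28fced849ee1bb76e7391b93eb12"; // placeholder hex SHA string
System.out.println(shortHash(sha)); // always prints exactly 3 base-36 characters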
Related
Reading Algorithms, Fourth Edition by Robert Sedgewick and Kevin Wayne, I found the following question:
Hash attack: find 2^N strings, each of length 2^N, that have the same hashCode() value, supposing that the hashCode() implementation for String is the following:
public int hashCode() {
int hash = 0;
for (int i = 0; i < length(); i++)
hash = (hash * 31) + charAt(i);
return hash;
}
Strong hint: Aa and BB have the same value.
What comes to my mind is generating all possible strings of length 2^N and comparing their hashCodes. This, however, is very expensive for large N, and I doubt it's the intended solution.
Can you give me hints about what I'm missing in the whole picture?
Andreas' and Glains' answers are both correct, but they aren't quite what you need if your goal is to produce 2^N distinct strings of length 2N.
Rather, a simpler approach is to build strings consisting solely of concatenated sequences of Aa and BB. For length 2×1 you have { Aa, BB }; for length 2×2 you have { AaAa, AaBB, BBAa, BBBB }; for length 2×3 you have { AaAaAa, AaAaBB, AaBBAa, AaBBBB, BBAaAa, BBAaBB, BBBBAa, BBBBBB }; and so on.
(Note: you've quoted the text as saying the strings should have length 2^N. I'm guessing that you misquoted, and it's actually asking for length 2N; but if it is indeed asking for length 2^N, then you can simply drop elements as you proceed.)
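A minimal sketch of that construction (the method name collidingStrings is mine, not the book's): for each of the 2^N bit patterns of length N, emit Aa for a 0 bit and BB for a 1 bit, which yields 2^N strings of length 2N that all share a single hashCode().
static java.util.List<String> collidingStrings(int n) {
    java.util.List<String> result = new java.util.ArrayList<>();
    for (int mask = 0; mask < (1 << n); mask++) {
        StringBuilder sb = new StringBuilder(2 * n);
        for (int bit = 0; bit < n; bit++) {
            // both pairs contribute the same amount to the hash, so any choice collides
            sb.append(((mask >> bit) & 1) == 0 ? "Aa" : "BB");
        }
        result.add(sb.toString());
    }
    return result;
}
Every string returned for a given n has the same hashCode(), which is easy to verify with a loop.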
"Strong hint" explained.
Strong hint: Aa and BB have the same value.
In ASCII / Unicode, B has a value 1 higher than A. Since those are the second-to-last characters, that value is multiplied by 31, so the hash code is increased by 31 when you change xxxxAa to xxxxBa.
To offset that, you need the last character to change by -31. Since lowercase letters are 32 higher than uppercase letters, changing a to A is -32, and changing one letter up to B is then -31.
So, to get the same hash code, change the second-to-last letter to the next letter (e.g. A to B), and change the last letter from lowercase to the next uppercase letter (e.g. a to B).
You can now use that hint to generate up to 26 strings with the same hash code.
Let's take a look at the hashCode() implementation and the given hint:
public int hashCode() {
int hash = 0;
for (int i = 0; i < length(); i++)
hash = (hash * 31) + charAt(i);
return hash;
}
We know that Aa and BB produce the same hash and we can easily verify that:
(65 * 31) + 97 = 2112
(66 * 31) + 66 = 2112
From here on, the hash is the same for both inputs. That means we can append any sequence of characters to both strings and will always receive the same value.
One example could be:
hashCode("AaTest") = 1953079538
hashCode("BBTest") = 1953079538
So, you can generate as many colliding strings as you need by just appending the same sequence of characters to both strings; more formally:
hashCode("Aa" + x) == hashCode("BB" + x)
Another note on your idea to generate all possible strings and search for duplicates: have a look at the birthday paradox, and you will see that it takes far fewer attempts than you might expect to find two different inputs with the same hash value.
It will be very difficult to find the original hashed value (indeed, you would have to try out all possible inputs if the hash algorithm is good).
Duplicate hash values are rare (there have to be duplicates since the hash has a fixed length). If a duplicate is found, the duplicate should be meaningless (random characters), so it cannot be abused by an attacker.
Taking a closer look at the hash function, it works like a positional number system (e.g. hexadecimal) where the weight of each digit is a power of 31. That is, think of it as a number written in base 31, which makes your final hash code something like hashCode = 31^(n-1) * first char + 31^(n-2) * second char + ... + 31^0 * last char, for a string of n characters.
The second observation is that the ASCII distance between a capital letter and its lowercase counterpart is 32. In terms of the hash function, replacing a capital letter by the lowercase one adds 32 to that digit, which is the same as adding one unit to the next higher digit (worth 31) plus one to the current digit. For example:
BB = 31*B + 31^0 * B, which also equals 31*(B - 1) + 31^0 * (31 + B); notice that I have just taken one unit from the higher digit and added it to the lower digit without changing the overall value. That last expression equals 31*A + a, which is Aa.
So, to generate all of the possible Strings with a given hash code, start with the initial String and shift value between neighbouring characters from right to left: replace a capital letter by the lowercase letter one step back in the alphabet (e.g. B becomes a, adding 31) while decreasing the character at the next higher position by one (subtracting 31), where applicable. Each such step runs in O(1).
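A sketch of that shifting step (the method name shiftPair is mine): take one unit from the character at position i and add 31 to the character at position i + 1; the hash code is unchanged because position i carries 31 times the weight of position i + 1.
static String shiftPair(String s, int i) {
    char[] cs = s.toCharArray();
    cs[i] -= 1;       // removes 31 * (weight of position i + 1) from the hash
    cs[i + 1] += 31;  // adds the same amount back
    return new String(cs);
}
// Example: shiftPair("BB", 0) returns "Aa", and both strings have the same hashCode().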
Hope this helps.
Ok, I have a project that requires me to build a dynamic hash table that counts the frequency of words in a file. I must use Java; however, we are not allowed to use any built-in data types or built-in classes at all except standard arrays. Also, I am not allowed to use any hash functions off the internet that are known to be fast; I have to make my own hash functions. Lastly, my instructor also wants my table to start at size 1 and double in size every time a new key is added.
My first idea was to sum the ASCII values of the letters composing a word and use that as a hash function, but different words made up of the same letters would then hash to the same value.
How can I get started? Is the ASCII idea on the right track?
In general, a hash table isn't expected to have a one-to-one mapping between a value and a hash; a hash table is expected to have collisions. That is, the domain of the hash function is expected to be larger than its range (the set of possible hash values). However, the general idea is that you come up with a hash function where the probability of collision is drastically small. If your hash function is uniform, i.e., designed such that each possible hash value has the same probability of being generated, then you minimize collisions this way.
Getting a collision isn't the end of the world. That just means that you have to search the list of values for that hash. If your hashing function is good, overall your performance for lookup should still be O(1).
Generating hashing functions is a subject of its own, and there is no one answer. But a good place for you to start could be to work with the bitwise representations of the characters in the string, and perform some sort of convolution operations on them (rotate, shift, XOR) in series. You could perform these in some way based on some initial seed-value, and then use the output of the first step of hashing as a seed for the next step. This way you can end up magnifying the effects of your convolution.
For example, let's say you get the character A, which is 41 in hex, or 0100 0001 in binary. You could designate each bit to mean some operation (maybe bit 0 is a ROR when it is 0, and a ROL when it is 1; bit 1 is an OR when it is 0, and a XOR when it is 1, etc.). You could even decide how much convolution you want to do based on the value itself. For example, you could say that the lower nibble specifies how much right-rotation you will do, and the upper nibble specifies how much left-rotation you will do. Then once you have the final value, you will use that as the seed for the next character. These are just some ideas. Use your imagination and see what you get!
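Purely as an illustration (the seed, constants, and choice of operations below are arbitrary, not a standard algorithm), a sketch of that seed-and-rotate idea could look like this:
static int convolutionHash(String s) {
    int h = 0x12345678;                              // arbitrary initial seed
    for (int i = 0; i < s.length(); i++) {
        char c = s.charAt(i);
        h ^= c;                                      // mix the character into the state
        h = Integer.rotateRight(h, c & 0x0F);        // lower nibble picks the right-rotation amount
        h = Integer.rotateLeft(h, (c >>> 4) & 0x0F); // upper nibble picks the left-rotation amount
    }
    return h;
}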
No matter how good your hash function is, you will always have collisions that you need to resolve.
If you want to keep your approach of using the ASCII values of the characters, you shouldn't just add the values; that would lead to a lot of collisions. You should weight the values by powers of a base instead; for example, for the word "Help" you would compute 'H' * 256^0 + 'e' * 256^1 + 'l' * 256^2 + 'p' * 256^3. Or, in Java:
int hash(String word, int hashSize) {
    long res = 0;
    int count = 0;
    for (char c : word.toCharArray()) {
        res += c * (long) Math.pow(256, count);
        count = (count + 1) % 5;              // keep the exponent small, as in the original idea
    }
    return (int) ((res & Long.MAX_VALUE) % hashSize);  // clear the sign bit so the index is never negative
}
Now you just have to write your own hash table (this version uses linear probing and doubles the table every time a new key is added, as required):
class WordCounterMap {
    Entry[] entries = new Entry[1];

    void add(String s) {
        int index = hash(s, entries.length);   // hash(...) as defined above
        // linear probing: walk until we find the word or hit an empty slot
        while (entries[index] != null) {
            if (entries[index].word.equals(s)) {
                entries[index].count++;
                return;
            }
            index = (index + 1) % entries.length;
        }
        // new key: double the table first, then insert
        grow();
        index = hash(s, entries.length);
        while (entries[index] != null) {
            index = (index + 1) % entries.length;
        }
        entries[index] = new Entry(s);
    }

    void grow() {
        Entry[] temp = new Entry[entries.length * 2];
        for (Entry e : entries) {
            if (e != null) {
                int index = hash(e.word, temp.length);
                while (temp[index] != null) {
                    index = (index + 1) % temp.length;
                }
                temp[index] = e;
            }
        }
        entries = temp;
    }

    int getCount(String s) {
        int index = hash(s, entries.length);
        while (entries[index] != null) {
            if (entries[index].word.equals(s)) {
                return entries[index].count;
            }
            index = (index + 1) % entries.length;
        }
        return 0;
    }
}

class Entry {
    int count;
    String word;

    Entry(String s) {
        this.word = s;
        this.count = 1;
    }
}
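A usage sketch for the class above (the words are arbitrary):
WordCounterMap map = new WordCounterMap();
map.add("hello");
map.add("world");
map.add("hello");
System.out.println(map.getCount("hello"));   // 2
System.out.println(map.getCount("missing")); // 0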
I need to represent both very large and small numbers in the shortest string possible. The numbers are unsigned. I have tried just straight Base64 encode, but for some smaller numbers, the encoded string is longer than just storing the number as a string. What would be the best way to store a very large or small number in the shortest string possible, while keeping it URL safe?
I have tried just straight Base64 encode, but for some smaller numbers, the encoded string is longer than just storing the number as a string
Base64 encoding of binary byte data will make it longer, by about a third. It is not supposed to make it shorter, but to allow safe transport of binary data in formats that are not binary safe.
However, base 64 is more compact than decimal representation of a number (or of byte data), even if it is less compact than base 256 (the raw byte data). Encoding your numbers in base 64 directly will make them more compact than decimal. This will do it:
private static final String base64Chars =
"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_";
static String encodeNumber(long x) {
char[] buf = new char[11];
int p = buf.length;
do {
buf[--p] = base64Chars.charAt((int)(x % 64));
x /= 64;
} while (x != 0);
return new String(buf, p, buf.length - p);
}
static long decodeNumber(String s) {
long x = 0;
for (char c : s.toCharArray()) {
int charValue = base64Chars.indexOf(c);
if (charValue == -1) throw new NumberFormatException(s);
x *= 64;
x += charValue;
}
return x;
}
Using this encoding scheme, Long.MAX_VALUE will be the string H__________, which is 11 characters long, compared to its decimal representation 9223372036854775807 which is 19 characters long. Numbers up to about 16 million will fit in a mere 4 characters. That's about as short as you'll get it. (Technically there are two other characters which do not need to be encoded in URLs: . and ~. You can incorporate those to get base 66, which would be a smidgin shorter for some numbers, although that seems a bit pedantic.)
To extend on Stephen C's answer, here is a piece of code to convert to base 62 (you can increase the base by adding more characters to the digits String; just pick whatever characters are valid for you):
public static String toString(long n) {
    String digits = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz";
    int base = digits.length();
    if (n == 0) {
        return "0";
    }
    String s = "";
    while (n > 0) {
        int d = (int) (n % base);   // charAt() needs an int index
        s = digits.charAt(d) + s;
        n = n / base;
    }
    return s;
}
This will never result in the string representation being longer than the decimal one.
Assuming that you don't do any compression, and that you restrict yourself to URL safe characters, then the following procedure will give you the most compact encoding possible.
Make a list of all URL safe characters
Count them. Suppose you have N.
Represent your number in base N, representing 0 by the first character, 1 by the 2nd and so on.
So, what about compression ...
If you assume that the numbers you are representing are uniformly distributed across their range, then there is no real opportunity for compression.
Otherwise, there is potential for compression. If you can reduce the size of the common numbers then you can typically achieve a saving by compression. This is how Huffman encoding works.
But the downside is that compression at this level is not perfect across the range of numbers. It reduces the size of some numbers, but it inevitably increases the size of others.
So what does this mean for your use-case?
I think it means that you are looking at the problem the wrong way. You should not be aiming for a minimal encoded size for every number. You should be aiming to minimize the size on average ... averaged over the actual distribution of your numbers.
This question is a result of the responses submitted to my post at CodeReview.
I have a class called Point, which is basically "intended to encapsulate a point represented in 2D space." I have overridden the hashCode() function, which is as follows:
...
@Override
public int hashCode() {
int hash = 0;
hash += (int) (Double.doubleToLongBits(this.getX())
^ (Double.doubleToLongBits(this.getX()) >>> 32));
hash += (int) (Double.doubleToLongBits(this.getY())
^ (Double.doubleToLongBits(this.getY()) >>> 32));
return hash;
}
...
Let me clarify (for those who didn't check the above link) that my Point uses the two doubles: x and y to represent its coordinates.
Problem:
My Problem is evident when I run this method:
public static void main(String[] args) {
Point p1 = Point.getCartesianPoint(12, 0);
Point p2 = Point.getCartesianPoint(0, 12);
System.out.println(p1.hashCode());
System.out.println(p2.hashCode());
}
I get the Output:
1076363264
1076363264
This is clearly a problem. Basically, I intend my hashCode() to return equal hash codes for equal Points. If I reverse the order in one of the parameter declarations (i.e. swap 12 with 0 in one of them to get equal Points), I get the correct (same) result. How can I correct my approach while maintaining the quality or uniqueness of the hash?
You cannot get an integer hash code for two doubles that is unique, without some further information being available about the nature of the numbers in the doubles.
Why?
int is stored as a 32-bit representation, double as a 64-bit representation (see the Java tutorial).
So you are trying to store 128 bits of information in a 32-bit space, so it can never give a unique hash.
However:
This really isn't the purpose of a hash code; hash codes just need to have fairly uncommon collisions to be useful.
If you know something about the double numbers that reduces their entropy/information content, then you could use this to compress the number of bits they use. This will be dependent on the application of this class, which you have not discussed yet.
This is why equals normally does not make use of the hashCode to check for equality; use getX and getY of each Point to do the comparison instead.
Try this
public int hashCode() {
long bits = Double.doubleToLongBits(x);
int hash = 31 + (int) (bits ^ (bits >>> 32));
bits = Double.doubleToLongBits(y);
hash = 31 * hash + (int) (bits ^ (bits >>> 32));
return hash;
}
This implementation follows the Arrays.hashCode(double[]) pattern.
It produces these hash codes:
-992476223
1076364225
You can find suggestions on how to write a good hashCode in Effective Java, Item 9.
Can't you use the code that is present in Arrays.hashCode already?
Arrays.hashCode(new double[]{x,y});
This is what Guava, for example, uses in its Objects.hashCode.
If you have Java 7, simply:
Objects.hash(x,y)
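In context, assuming the fields are named x and y as described, the whole override can then be as simple as this (Java 7+; it autoboxes the doubles, which is usually fine):
@Override
public int hashCode() {
    return java.util.Objects.hash(x, y);
}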
It might be a silly idea, but you are using +, which is a symmetric operation, and you are getting symmetric problems. What if you use a non-symmetric operation such as division (check for denominator == 0 though) or subtraction? Or any other that you can find in the literature or invent yourself.
I'm currently trying to parse some long values stored as Strings in Java; the problem I have is this:
String test = "fffff8000261e000";
long number = Long.parseLong(test, 16);
This throws a NumberFormatException:
java.lang.NumberFormatException: For input string: "fffff8000261e000"
However, if I knock the first 'f' off the string, it parses it fine.
I'm guessing this is because the number is large and what I'd normally do is put an 'L' on the end of the long to fix that problem. I can't however work out the best way of doing that when parsing a long from a string.
Can anyone offer any advice?
Thanks
There are two different ways of answering your question, depending on exactly what sort of behavior you're really looking for.
Answer #1: As other people have pointed out, your string (interpreted as a positive hexadecimal integer) is too big for the Java long type. So if you really need (positive) integers that big, then you'll need to use a different type, perhaps java.math.BigInteger, which also has a constructor taking a String and a radix.
Answer #2: I wonder, though, if your string represents the "raw" bytes of the long. In your example it would represent a negative number. If that's the case, then Java's built-in long parser doesn't handle values where the high bit is set (i.e. where the first digit of a 16 digit string is greater than 7).
If you're in case #2, then here is one (pretty inefficient) way of handling it:
String test = "fffff8000261e000";
long number = new java.math.BigInteger(test, 16).longValue();
which produces the value -8796053053440. (If your string is more than 16 hex digits long, it would silently drop any higher bits.)
If efficiency is a concern, you could write your own bit-twiddling routine that takes the hex digits off the end of the string two at a time, perhaps building a byte array, then converting to long. Some similar code is here:
How to convert a Java Long to byte[] for Cassandra?
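(Since Java 8 there is also Long.parseUnsignedLong(test, 16), which handles this case directly.) If you do want to roll it yourself, a sketch along those lines (the method name parseHexAsRawLong is mine) that shifts the hex digits in one at a time would be:
static long parseHexAsRawLong(String hex) {
    if (hex.length() > 16) throw new NumberFormatException("more than 64 bits: " + hex);
    long value = 0;
    for (int i = 0; i < hex.length(); i++) {
        int digit = Character.digit(hex.charAt(i), 16);
        if (digit < 0) throw new NumberFormatException(hex);
        value = (value << 4) | digit;   // shift four bits and append the next digit
    }
    return value;   // "fffff8000261e000" comes out as -8796053053440
}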
The primitive long variable can hold values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807 inclusive.
The calculation shows that fffff8000261e000 hexadecimal is 18,446,735,277,656,498,176 decimal, which is obviously out of bounds. Instead, ffff8000261e000 hexadecimal (the same string with the first 'f' knocked off) is 1,152,912,708,553,793,536 decimal, which is just as obviously within bounds.
As everybody here proposed, use BigInteger to account for such cases. For example, BigInteger bi = new BigInteger("fffff8000261e000", 16); will solve your problem. Also, new java.math.BigInteger("fffff8000261e000", 16).toString() will yield 18446735277656498176 exactly.
The number you are parsing is too large to fit in a Java long. Adding an L wouldn't help. If long had been an unsigned data type, it would have fit.
One way to cope is to divide the string in two parts and then use bit shift when adding them together:
String s= "fffff8000261e000";
long number;
long n1, n2;
if (s.length() < 16) {
number = Long.parseLong(s, 16);
}
else {
String s1 = s.substring(0, 1);
String s2 = s.substring(1, s.length());
n1=Long.parseLong(s1, 16) << (4 * s2.length());
n2= Long.parseLong(s2, 16);
number = (Long.parseLong(s1, 16) << (4 * s2.length())) + Long.parseLong(s2, 16);
System.out.println( Long.toHexString(n1));
System.out.println( Long.toHexString(n2));
System.out.println( Long.toHexString(number));
}
Note:
If the number is bigger than Long.MAX_VALUE the resulting long will be a negative value, but the bit pattern will match the input.