I am looking for a method to count the number of 1's in a 32-bit number
without using a loop.
Can anybody help me and provide the code or an algorithm
to do so?
Thanks in advance.
See Integer.bitCount(int). You can refer to the source code if you want to see how it works; many of the Integer class's bit twiddling routines are taken from Hacker's Delight.
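For example (45 is just an arbitrary test value):
int x = 45;                               // 101101 in binary
System.out.println(Integer.bitCount(x));  // prints 4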
See the canonical reference: Bit Twiddling Hacks
Short, obscenely optimized answer (in C):
int pop(unsigned x) {
x = x - ((x >> 1) & 0x55555555);
x = (x & 0x33333333) + ((x >> 2) & 0x33333333);
x = (x + (x >> 4)) & 0x0F0F0F0F;
x = x + (x >> 8);
x = x + (x >> 16);
return x & 0x0000003F;
}
To see why this magic works, see "The Quest for an Accelerated Population Count" by Henry S. Warren, Jr., chapter 10 of Beautiful Code.
Split the 32-bit number into four 8-bit numbers (see bit-shifting operators, casting, etc.).
Then use a lookup table with 256 entries that converts each 8-bit number into a count of bits set. Add the four results, presto!
Also, see what Mitch Wheat said - bit fiddling can be a lot of fun ;)
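A minimal Java sketch of that table approach (class and method names here are illustrative only; the table is built once up front, so the per-call path stays loop-free):
public final class ByteTablePopCount {
    // TABLE[b] holds the number of 1 bits in the byte value b (0..255).
    private static final int[] TABLE = new int[256];
    static {
        for (int b = 1; b < 256; b++) {
            TABLE[b] = TABLE[b >> 1] + (b & 1);
        }
    }

    // Split the 32-bit value into four bytes and sum the per-byte counts.
    public static int popCount(int x) {
        return TABLE[x & 0xFF]
             + TABLE[(x >>> 8) & 0xFF]
             + TABLE[(x >>> 16) & 0xFF]
             + TABLE[(x >>> 24) & 0xFF];
    }
}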
You can define it recursively:
int bitcount(int x) {
return (x == 0) ? 0 : ((x & 1) + bitcount(x / 2));
}
The code above is not tested and probably only works for x >= 0. Hopefully you get the idea anyway...
My personal favourite, directly from Bit Twiddling Hacks:
v = v - ((v >> 1) & 0x55555555);
v = (v & 0x33333333) + ((v >> 2) & 0x33333333);
c = ((v + (v >> 4) & 0xF0F0F0F) * 0x1010101) >> 24;
The following is the JDK 1.5 implementation of Integer.bitCount:
public static int bitCount(int i) {
// HD, Figure 5-2
i = i - ((i >>> 1) & 0x55555555);
i = (i & 0x33333333) + ((i >>> 2) & 0x33333333);
i = (i + (i >>> 4)) & 0x0f0f0f0f;
i = i + (i >>> 8);
i = i + (i >>> 16);
return i & 0x3f;
}
Let us start simple. Say you want to find how many 1's a long has in its binary representation. For example, how many 1's are in 228₁₀? The binary representation is 11100100₂. We can use Long.bitCount(228), which returns 4.
Now, let us say we will interpret two bits as a single quaternary digit (with the two rightmost bits being the first digit):
00₂ = 0₄
01₂ = 1₄
10₂ = 2₄
11₂ = 3₄
Hence, 228₁₀ = 11100100₂ = 3210₄. The goal is to find how many non-zero quaternary digits there are in the binary representation. For example, 3210₄ yields 3, 121100₄ yields 4, 000032₄ yields 2, etc.
The code of the Long.bitCount(i) method from Java's documentation is given by:
public static int bitCount(long i) {
i = i - ((i >>> 1) & 0x5555555555555555L);
i = (i & 0x3333333333333333L) + ((i >>> 2) & 0x3333333333333333L);
i = (i + (i >>> 4)) & 0x0f0f0f0f0f0f0f0fL;
i = i + (i >>> 8);
i = i + (i >>> 16);
i = i + (i >>> 32);
return (int)i & 0x7f;
}
The goal is to find how many non-zero quaternary digits there are in the binary representation, without using any kind of loop or Strings. I am trying to adapt the code so that it works for quaternary digits. This is what I currently have:
public static int bitCountQuat(long i) {
i = i - ((i >>> 2) & 0x3333333333333333L);
i = (i & 0x3333333333333333L) + ((i >>> 4) & 0x0f0f0f0f0f0f0f0fL);
i = (i + (i >>> 8)) & 0x00ff00ff00ff00ffL;
i = i + (i >>> 16);
i = i + (i >>> 32);
i = i + (i >>> 64);
return (int) i & 0x7f7f;
}
For more reference: Efficient Implementation of Hamming Weight.
Here are the values from 0 to 9:
Binary, desired output, current output
0000, 0, 0
0001, 1, 2
0010, 1, 4
0011, 1, 6
0100, 1, 6
0101, 2, 0
0110, 2, 2
0111, 2, 4
1000, 1, 4
1001, 2, 6
Start by converting all non-zero quaternary digits to 1s ...
i = (i & 0x5555555555555555L) | ((i >> 1) & 0x5555555555555555L);
... then count the bits in the result.
One way to do that bit count would be to continue from there with
i = (i & 0x3333333333333333L) + ((i >> 2) & 0x3333333333333333L);
i = (i & 0x0f0f0f0f0f0f0f0fL) + ((i >> 4) & 0x0f0f0f0f0f0f0f0fL);
i = (i & 0x00ff00ff00ff00ffL) + ((i >> 8) & 0x00ff00ff00ff00ffL);
i = (i & 0x0000ffff0000ffffL) + ((i >> 16) & 0x0000ffff0000ffffL);
i = (i & 0x00000000ffffffffL) + (i >> 32);
These are the trailing steps of the implementation presented in the Wikipedia article you referenced.
Alternatively, you could use the (whole) implementation of Long.bitCount(), or even Long.bitCount() itself, for the bit-counting part. Its variant uses few enough operations compared to the (full) Wikipedia version that it comes out about even with the shortcut above.
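Putting that together with the bitCountQuat signature from the question, a minimal sketch (collapse each non-zero quaternary digit to a single 1 bit, then let Long.bitCount do the counting):
public static int bitCountQuat(long i) {
    // A quaternary digit is non-zero iff either of its two bits is set,
    // so OR the high bit of each pair down onto the low bit.
    i = (i & 0x5555555555555555L) | ((i >>> 1) & 0x5555555555555555L);
    // Every non-zero digit now contributes exactly one 1 bit.
    return Long.bitCount(i);
}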
The two pieces of code below are methods that reverse the bits of an unsigned 32-bit integer. What is the difference between them?
Why is the first one wrong and the second one correct?
I can't see the difference between the two.
public int reverseBits(int n) {
int result = 0;
for (int i = 0; i < 32; i++) {
result = result << 1 | (n & (1 << i));
}
return result;
}
public int reverseBits(int n) {
int result = 0;
for (int i = 0; i < 32; i++) {
result = result << 1 | ((n >> i) & 1);
}
return result;
}
Appreciate any help.
The first code is wrong because it extracts the given bit and puts it in the same position in the resulting number. Suppose you are on iteration i = 5. Then n & (1 << 5) = n & 32, which is either 0 or 0b100000. The intention is to put the one-bit into the lowest position, but the | operation actually puts it into the same position #5. On subsequent iterations you move this bit even higher, so in effect all the bits end up OR'ed into the highest bit position.
Please note that there are more efficient algorithms to reverse bits, like the one implemented in the standard JDK Integer.reverse method:
public static int reverse(int i) {
// HD, Figure 7-1
i = (i & 0x55555555) << 1 | (i >>> 1) & 0x55555555;
i = (i & 0x33333333) << 2 | (i >>> 2) & 0x33333333;
i = (i & 0x0f0f0f0f) << 4 | (i >>> 4) & 0x0f0f0f0f;
i = (i << 24) | ((i & 0xff00) << 8) |
((i >>> 8) & 0xff00) | (i >>> 24);
return i;
}
It has to do with whether the bit being grabbed from n is being stored in the rightmost bit of the result or being stored back into the same position.
Suppose n is 4 (for example).
Then when i is 2, the expression (n & (1 << i))
becomes (4 & (1 << 2)), which should equal 4 & 4, so it evaluates to 4.
But the expression ((n >> i) & 1)
becomes ((4 >> 2) & 1), which should equal 1 & 1, so it evaluates to 1.
The two expressions do not have the same result.
But both versions of the function try to use those results in the exact same way, so the two versions of the function do not have the same result.
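A two-line check of that example (n = 4, i = 2):
int n = 4, i = 2;
System.out.println(n & (1 << i));   // prints 4
System.out.println((n >> i) & 1);   // prints 1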
I am not sure how to translate this from C++ to Java.
It is a function that computes the Hamming weight.
/** This is popcount_3() from:
* http://en.wikipedia.org/wiki/Hamming_weight */
unsigned int popcnt32(uint32_t n) const
{
n -= ((n >> 1) & 0x55555555);
n = (n & 0x33333333) + ((n >> 2) & 0x33333333);
return (((n + (n >> 4))& 0xF0F0F0F)* 0x1010101) >> 24;
}
More concretely, I don't know what to use instead of uint32_t, and if I use that type, whatever it is, can I just leave the rest of the code unchanged?
Thanks
It's implemented for you in Integer.bitCount(int i)
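If you do want a direct translation rather than calling Integer.bitCount, here is a sketch: Java's 32-bit int stands in for uint32_t, the logical shift >>> replaces the unsigned right shifts, and the extra parentheses are only for readability.
public static int popcnt32(int n) {
    n -= (n >>> 1) & 0x55555555;
    n = (n & 0x33333333) + ((n >>> 2) & 0x33333333);
    return (((n + (n >>> 4)) & 0x0F0F0F0F) * 0x01010101) >>> 24;
}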
Possible Duplicate:
Best algorithm to count the number of set bits in a 32-bit integer?
How do I count the number of 1's a number will have in binary?
So let's say I have the number 45, which is equal to 101101 in binary and has 4 1's in it. What's the most efficient way to write an algorithm to do this?
Instead of writing an algorithm to do this, it's best to use the built-in function Integer.bitCount().
What makes this especially efficient is that the JVM can treat it as an intrinsic, i.e. recognise and replace the whole call with a single machine-code instruction on a platform which supports it, e.g. Intel/AMD.
To demonstrate how effective this optimisation is:
public static void main(String... args) {
perfTestIntrinsic();
perfTestACopy();
}
private static void perfTestIntrinsic() {
long start = System.nanoTime();
long countBits = 0;
for (int i = 0; i < Integer.MAX_VALUE; i++)
countBits += Integer.bitCount(i);
long time = System.nanoTime() - start;
System.out.printf("Intrinsic: Each bit count took %.1f ns, countBits=%d%n", (double) time / Integer.MAX_VALUE, countBits);
}
private static void perfTestACopy() {
long start2 = System.nanoTime();
long countBits2 = 0;
for (int i = 0; i < Integer.MAX_VALUE; i++)
countBits2 += myBitCount(i);
long time2 = System.nanoTime() - start2;
System.out.printf("Copy of same code: Each bit count took %.1f ns, countBits=%d%n", (double) time2 / Integer.MAX_VALUE, countBits2);
}
// Copied from Integer.bitCount()
public static int myBitCount(int i) {
// HD, Figure 5-2
i = i - ((i >>> 1) & 0x55555555);
i = (i & 0x33333333) + ((i >>> 2) & 0x33333333);
i = (i + (i >>> 4)) & 0x0f0f0f0f;
i = i + (i >>> 8);
i = i + (i >>> 16);
return i & 0x3f;
}
prints
Intrinsic: Each bit count took 0.4 ns, countBits=33285996513
Copy of same code: Each bit count took 2.4 ns, countBits=33285996513
Each bit count using the intrinsic version (including the loop) takes just 0.4 nanoseconds on average. Using a copy of the same code takes 6x longer (and gets the same result).
The most efficient way I know of to count the number of 1's in a 32-bit variable v is:
v = v - ((v >> 1) & 0x55555555);
v = (v & 0x33333333) + ((v >> 2) & 0x33333333);
c = ((v + (v >> 4) & 0xF0F0F0F) * 0x1010101) >> 24; // c is the result
Updated: I want to make clear that it's not my code; in fact, it's older than me. According to Donald Knuth (The Art of Computer Programming Vol. IV, p. 11), the code first appeared in the first textbook on programming, The Preparation of Programs for an Electronic Digital Computer by Wilkes, Wheeler and Gill (2nd ed. 1957, reprinted 1984). Pages 191–193 of the 2nd edition of the book presented the Nifty Parallel Count by D. B. Gillies and J. C. P. Miller.
See Bit Twiddling Hacks and study all the 'counting bits set' algorithms. In particular, Brian Kernighan's way is simple and quite fast if you expect a small answer. If you expect an evenly distributed answer, a lookup table might be better.
This is called Hamming weight. It is also called the population count, popcount or sideways sum.
The following is either from the "Bit Twiddling Hacks" page or from Knuth's books (I don't remember). It is adapted to unsigned 64-bit integers and works in C#. I don't know if the lack of unsigned values in Java creates a problem.
By the way, I include the code only for reference; the best answer is to use Integer.bitCount(), as #Lawrey said, since some (but not all) CPUs have a dedicated machine-code instruction for this operation.
const UInt64 m1 = 0x5555555555555555;
const UInt64 m2 = 0x3333333333333333;
const UInt64 m4 = 0x0f0f0f0f0f0f0f0f;
const UInt64 h01 = 0x0101010101010101;
public int Count(UInt64 x)
{
x -= (x >> 1) & m1;
x = (x & m2) + ((x >> 2) & m2);
x = (x + (x >> 4)) & m4;
return (int) ((x * h01) >> 56);
}
public int f(int n)
{
int result = 0;
for(;n > 0; n = n >> 1)
result += ((n & 1) == 1 ? 1 : 0);
return result;
}
The following Ruby code works for positive numbers.
# Helper method (name is arbitrary); works for positive num.
def count_ones(num)
  count = 0
  while num > 1
    count = (num % 2 == 1) ? count + 1 : count
    num = num >> 1
  end
  count + 1
end
The fastest I have used and also seen in a practical implementation (in the open source Sphinx Search Engine) is the MIT HAKMEM algorithm. It runs superfast over a very large stream of 1's and 0's.
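That answer doesn't include code, so here is a sketch of the 32-bit HAKMEM-style count adapted to Java (the mask constants are the usual 32-bit truncations of HAKMEM's octal masks, and the final modulo is taken on a long so the sign bit cannot interfere):
static int popCountHakmem(int n) {
    // Within each 3-bit group, v - v/2 - v/4 leaves the number of set bits.
    int tmp = n - ((n >>> 1) & 0xDB6DB6DB) - ((n >>> 2) & 0x49249249);
    // Fold adjacent 3-bit counts into fields spaced 6 bits apart, then sum
    // the fields with the mod-63 trick (64 % 63 == 1).
    long folded = (tmp + (tmp >>> 3)) & 0xC71C71C7L;
    return (int) (folded % 63);
}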
There is a method in Java that reverses the bits in an Integer, reverseBytes(). I wanted to try another implementation, and this is what I have:
public static int reverse(int num) {
int num_rev = 0;
for (int i = 0; i < Integer.SIZE; i++) {
System.out.print((num >> i) & 1);
if (((num >> i) & 1)!=0) {
num_rev = num_rev | (int)Math.pow(2, Integer.SIZE-i);
}
}
return num_rev;
}
The result num_rev is not correct. Does anyone have any idea how to "reconstruct" the value? Maybe there is a better way to perform it?
Thanks for any suggestions.
The normal way to reverse bits would be via bit manipulation, and certainly not via floating-point math routines!
e.g. (NB: untested):
int reverse(int x) {
int y = 0;
for (int i = 0; i < 32; ++i) {
y <<= 1; // make space
y |= (x & 1); // copy LSB of X into Y
x >>>= 1; // shift X right
}
return y;
}
Because x is right-shifted and y is left-shifted, the original LSB of x eventually becomes the MSB of y.
A nice (and reasonably well known) method is this:
unsigned int reverse(unsigned int x)
{
x = (((x & 0xaaaaaaaa) >> 1) | ((x & 0x55555555) << 1));
x = (((x & 0xcccccccc) >> 2) | ((x & 0x33333333) << 2));
x = (((x & 0xf0f0f0f0) >> 4) | ((x & 0x0f0f0f0f) << 4));
x = (((x & 0xff00ff00) >> 8) | ((x & 0x00ff00ff) << 8));
return ((x >> 16) | (x << 16));
}
This is actually C code, but as Java doesn't have unsigned types, to port it to Java all you should need to do is remove the unsigned qualifiers and use >>> instead of >> to ensure that you don't get any "sign extension".
It works by first swapping adjacent bits, then adjacent pairs of bits, then adjacent nybbles, then adjacent bytes, and finally the top and bottom 16-bit halves. This actually works :)
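A sketch of that port, following the advice above (unsigned qualifiers dropped, >>> used on the right shifts):
public static int reverse(int x) {
    x = ((x & 0xaaaaaaaa) >>> 1) | ((x & 0x55555555) << 1);   // swap adjacent bits
    x = ((x & 0xcccccccc) >>> 2) | ((x & 0x33333333) << 2);   // swap adjacent pairs
    x = ((x & 0xf0f0f0f0) >>> 4) | ((x & 0x0f0f0f0f) << 4);   // swap nybbles
    x = ((x & 0xff00ff00) >>> 8) | ((x & 0x00ff00ff) << 8);   // swap bytes
    return (x >>> 16) | (x << 16);                            // swap 16-bit halves
}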
There are 2 problems with your code:
1. You're using an int cast. You should be doing (long)Math.pow(... The problem with the int cast is that pow(2, n) will always be a positive number, so pow(2, 31) cast to an int will be clamped to (2^31)-1, because that's the largest positive int. The bit pattern for 2^31-1 is 0x7fffffff, but in this case you want 0x80000000, which is exactly what the lower 32 bits of the cast long will be.
2. You're doing pow(2, Integer.SIZE - i). That should be Integer.SIZE - i - 1. You basically want to map bit 0 to the last bit, bit 1 to the second-last, and so on. However, the last bit is bit 31, not bit 32. Your code right now is trying to set bit 0 to bit Integer.SIZE - 0 == 32, so you need to subtract 1.
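For reference, a sketch of the method from the question with just those two fixes applied (still Math.pow-based, purely to illustrate the points above):
public static int reverse(int num) {
    int num_rev = 0;
    for (int i = 0; i < Integer.SIZE; i++) {
        if (((num >> i) & 1) != 0) {
            // Cast to long first so 2^31 is not clamped to Integer.MAX_VALUE,
            // then narrow to int to keep only the low 32 bits (0x80000000).
            num_rev |= (int) (long) Math.pow(2, Integer.SIZE - i - 1);
        }
    }
    return num_rev;
}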
The above is assuming this is just for fun. If you really need to reverse bits, however, please don't use floating-point ops. Do what some of the other answers suggest.
Why don't you want to use:
public static int reverseBytes(int i) {
return ((i >>> 24) ) |
((i >> 8) & 0xFF00) |
((i << 8) & 0xFF0000) |
((i << 24));
}
edited:
Integer also has:
public static int reverse(int i)
Returns the value obtained by
reversing the order of the bits in the
two's complement binary representation
of the specified int value.
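A quick usage check of Integer.reverse (the input value 11 is arbitrary):
int r = Integer.reverse(11);                   // 11 is ...1011 in binary
System.out.println(Integer.toBinaryString(r)); // prints 1101 followed by twenty-eight 0s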