Bitshifting in Java

I'm trying to understand how bit shift works. Can someone please explain the meaning of this line:
while ((n&1)==0) n >>= 1;
where n is an integer, and give me an example of an n for which the shift is executed.

Breaking it down:
n & 1 will do a binary comparison between n and 1, which is 00000000000000000000000000000001 in binary. As such, it will return 00000000000000000000000000000001 when n is odd (its lowest bit is 1) and 00000000000000000000000000000000 otherwise.
(n & 1) == 0 will hence be true if n is even and false otherwise.
n >>= 1 is equivalent to n = n >> 1. As such it shifts all bits to the right, which is roughly equivalent to a division by two (rounding down).
If e.g. n started as 12 then in binary it would be 1100. After one loop it will be 110 (6), after another it will be 11 (3) and then the loop will stop.
If n is 0 then after the next loop it will still be 0, and the loop will be infinite.
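For reference, here is a minimal, self-contained sketch of that loop wrapped in a method (the method name oddPart and the n != 0 guard against the infinite-loop case are my own additions, not part of the original snippet):
public class OddPart {
    // Strips trailing zero bits, i.e. divides n by 2 until it is odd.
    // The n != 0 guard avoids the infinite loop described above.
    static int oddPart(int n) {
        while (n != 0 && (n & 1) == 0) {
            n >>= 1;
        }
        return n;
    }

    public static void main(String[] args) {
        System.out.println(oddPart(12)); // prints 3  (12 -> 6 -> 3)
        System.out.println(oddPart(40)); // prints 5  (40 -> 20 -> 10 -> 5)
        System.out.println(oddPart(7));  // prints 7  (already odd, loop never runs)
    }
}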

Let n be 4, which in binary is represented as:
00000000 00000000 00000000 00000100
(n&1) bitwise ands the n with 1. 1 has the binary representation of:
00000000 00000000 00000000 00000001
The result of the bitwise anding is 0:
00000000 00000000 00000000 00000100 = n
00000000 00000000 00000000 00000001 = 1
------------------------------------
00000000 00000000 00000000 00000000 = 0
so the condition of the while loop is true. Effectively, (n & 1) extracts the least significant bit of n.
Inside the while loop you right shift (>>) n by 1. Right shifting a non-negative number by k is the same as dividing it by 2^k (discarding the remainder).
n which is now 00000000 00000000 00000000 00000100 on right shifting once becomes
00000000 00000000 00000000 00000010 which is 2.
Next we extract the LSB (least significant bit) of n again, which is 0, and right shift again to give 00000000 00000000 00000000 00000001, which is 1.
Next we again extract LSB of n, which is now 1 and the loop breaks.
So effectively you keep dividing your number n by 2 till it becomes odd as odd numbers have their LSB set.
Also note that if n is 0 to start with, you'll go into an infinite loop, because no matter how many times you divide 0 by 2 you'll never get an odd number.
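To watch this happen, a small trace (my own illustration, not part of the answer) prints n in binary on every pass:
int n = 4;
while ((n & 1) == 0) {
    System.out.println(n + " = " + Integer.toBinaryString(n));
    n >>= 1;
}
System.out.println("loop stops at n = " + n); // 4 -> 2 -> 1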

Assume n = 12. The bits for this would be 1100 (1*8 + 1*4 + 0*2 + 0*1 = 12).
The first time through the loop n & 1 == 0 because the last digit of 1100 is 0 and when you AND that with 1, you get 0. So n >>= 1 will cause n to change from 1100 (12) to 110 (6). As you may notice, shifting right has the same effect as dividing by 2.
The last bit is still zero, so n & 1 will still be 0, so it will shift right one more time. n>>=1 will cause it to shift one more digit to the right changing n from 110 (6) to 11 (3).
Now you can see the last bit is 1, so n & 1 will be 1, causing the while loop to stop executing. The purpose of the loop appears to be to shift the number to the right until it finds the first turned-on bit (until the result is odd).

Let's assume n equals 42 (just because):
int n = 42;
while ((n & 1) == 0) {
    n >>= 1;
}
Iteration 0:
n = 42 (or 0000 0000 0000 0000 0000 0000 0010 1010)
n & 1 == 0 is true (because n&1 = 0 or 0000 0000 0000 0000 0000 0000 0000 0000)
Iteration 1:
n = 21 (or 0000 0000 0000 0000 0000 0000 0001 0101)
n & 1 == 0 is false (because n & 1 == 1 or 0000 0000 0000 0000 0000 0000 0000 0001)
What it does:
Basically, your loop divides n by 2 as long as n is an even number:
n & 1 is a simple parity check,
n >>= 1 shifts the bits to the right, which just divides by 2.
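As an aside (my own observation, not part of the answer): for non-zero n the whole loop collapses to a single call to the standard library, since it just strips the trailing zero bits:
int n = 42;
if (n != 0) {
    n >>= Integer.numberOfTrailingZeros(n); // 42 (101010) becomes 21 (10101)
}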

For example, if n were
n= b11110000
then
n&1= b11110000 &
b00000001
---------
b00000000
n>>=1 b11110000 >> 1
---------
b01111000
n= b01111000
Since the lowest bit is still 0, the loop keeps running, and after a few more iterations it ends with
n = b00001111 (the first value whose lowest bit is 1)

n & 1 is actually a bitwise AND operation. Here the bit pattern of n is ANDed against the bit pattern of 1, and the result is compared against zero. If it is zero, n is right shifted once. You can think of one right shift as a division by 2, and so on.

Related

Complexity of a loop where j shrinks as j = (j - 1) & i

What is the time complexity of this code snippet? Why, mathematically, is that?
for (int i = 0; i < n; i++) {
for (int j = i; j > 0; j = (j - 1) & i) {
System.out.println(j);
}
}
The short version:
The runtime of the code is Θ(n^(log2 3)), which is approximately Θ(n^1.585).
The derivation involves counting the number of 1 bits set in ranges of numbers.
Your connection to Pascal's triangle is not a coincidence!
Here's the route that I used to work this out. There's a really nice pattern that plays out in the bits of the numbers as you're doing the subtractions. For example, suppose that our number i is given by 10101001 in binary. Here's the sequence of values we'll see for j:
10101001
10101000
10100001
10100000
10001001
10001000
10000001
10000000
00101001
00101000
00100001
00100000
00001001
00001000
00000001
00000000
To see the pattern, focus on the columns of the number where there were 1 bits in the original number. Then you get this result:
v v v v
10101001 1111
10101000 1110
10100001 1101
10100000 1100
10001001 1011
10001000 1010
10000001 1001
10000000 1000
00101001 0111
00101000 0110
00100001 0101
00100000 0100
00001001 0011
00001000 0010
00000001 0001
00000000 0000
In other words, the sequence of values j takes on is basically counting down from the binary number 1111 all the way down to zero!
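To convince yourself of this pattern, here is a small check (my own snippet, not part of the original answer). It prints the non-zero values from the table above, i.e. the non-zero "submasks" of i, in decreasing order (the loop body never runs for the final 0):
int i = 0b10101001;
for (int j = i; j > 0; j = (j - 1) & i) {
    // every j satisfies (j & ~i) == 0, i.e. its 1 bits are a subset of i's 1 bits
    System.out.println(String.format("%8s", Integer.toBinaryString(j)).replace(' ', '0'));
}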
More generally, suppose that the number i has b(i) 1 bits in it. Then we're counting down from a number made of b(i) 1 bits down to 0, which requires 2^b(i) steps. Therefore, the amount of work the inner loop does is 2^b(i).
That gives us the complexity of the inner loop, but to figure out the total complexity of the loop, we need to figure out how much work is done across all n iterations, not just one of them. So the question then becomes: if you count from 0 up to n, and you sum up 2^b(i), what do you get? Or, stated differently, what is
2^b(0) + 2^b(1) + 2^b(2) + ... + 2^b(n-1)
equal to?
To make this easier, let's assume that n is a perfect power of two. Say, for example, that n = 2^k. This will make this easier because that means that the numbers 0, 1, 2, ..., n-1 all fit in the same number of bits. There's a really nice pattern at play here. Look at the numbers from 0 to 7 in binary and work out what 2^b(i) is for each:
000 1
001 2
010 2
011 4
100 2
101 4
110 4
111 8
Now look at the numbers from 0 to 15 in binary:
0000 1
0001 2
0010 2
0011 4
0100 2
0101 4
0110 4
0111 8
----
1000 2
1001 4
1010 4
1011 8
1100 4
1101 8
1110 8
1111 16
In writing out the numbers from 8 to 15, we're basically writing out the numbers from 0 to 7, but with a 1 prefixed. This means each of those numbers has one more 1 bit than its counterpart in the previous block, so 2^b(i) is doubled for each of them. So if we know the sum of these terms from 0 to 2^k - 1, and we want to know the sum of the terms from 0 to 2^(k+1) - 1, then we basically take the sum we have, then add two more copies of it.
More formally, let's define S(k) = 2^b(0) + 2^b(1) + ... + 2^b(2^k - 1). Then we have
S(0) = 1
S(k + 1) = S(k) + 2S(k) = 3S(k)
This recurrence solves to S(k) = 3^k. In other words, the sum 2^b(0) + 2^b(1) + ... + 2^b(2^k - 1) works out to 3^k.
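A quick brute-force check of this identity (my own verification snippet, not part of the answer):
for (int k = 0; k <= 10; k++) {
    long sum = 0;
    for (int i = 0; i < (1 << k); i++) {
        sum += 1L << Integer.bitCount(i);
    }
    // sum should equal 3^k for every k
    System.out.println("k=" + k + "  sum=" + sum + "  3^k=" + (long) Math.pow(3, k));
}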
Of course, in general, we won't have n = 2^k. However, if we write k = log2 n, then we can get an approximation of the number of iterations at roughly
3^(log2 n)
= n^(log2 3)
≈ n^1.585...
So we'd expect the runtime of the code to be Θ(n^(log2 3)). To see if that's the case, I wrote a program that ran the function and counted the number of times the inner loop executed. I then plotted the number of iterations of the inner loop against the function n^(log2 3). Here's what it looks like:
[plot: number of inner-loop iterations plotted against n^(log2 3)]
As you can see, this fits pretty well!
So how does this connect to Pascal's triangle? It turns out that the number 2^b(i) has another interpretation: it's the number of odd entries in the ith row of Pascal's triangle! And that might explain why you're seeing combinations pop out of the math.
Thanks for posting this problem - it's super interesting! Where did you find it?
Here is a Java Code snippet:
int bit = 10;
int[] mp = new int[bit + 1];
int n = 1 << bit;
for (int i = 0; i < n; i++) {
    mp[Integer.bitCount(i)]++;
    if ((i & (i + 1)) == 0) { // i == 2^k - 1, i.e. all k low bits are set
        System.out.printf("\nfor %d\n", i);
        for (int j = 0; j <= bit; j++) {
            System.out.printf("%d ", mp[j]);
        }
    }
}
Output:
for 0 // 2^0 - 1
1 0 0 0 0 0 0 0 0 0 0
for 1 // 2^1 - 1
1 1 0 0 0 0 0 0 0 0 0
for 3 // 2^2 - 1
1 2 1 0 0 0 0 0 0 0 0
for 7 // 2^3 - 1
1 3 3 1 0 0 0 0 0 0 0
for 15 // 2^4 - 1
1 4 6 4 1 0 0 0 0 0 0
for 31 // 2^5 - 1
1 5 10 10 5 1 0 0 0 0 0
for 63 // 2^6 - 1
1 6 15 20 15 6 1 0 0 0 0
for 127 // 2^7 - 1
1 7 21 35 35 21 7 1 0 0 0
for 255 // 2^8 - 1
1 8 28 56 70 56 28 8 1 0 0
for 511 // 2^9 - 1
1 9 36 84 126 126 84 36 9 1 0
for 1023 // 2^10 - 1
1 10 45 120 210 252 210 120 45 10 1
So it looks like Pascal's triangle:
0C0
1C0 1C1
2C0 2C1 2C2
3C0 3C1 3C2 3C3
4C0 4C1 4C2 4C3 4C4
5C0 5C1 5C2 5C3 5C4 5C5
6C0 6C1 6C2 6C3 6C4 6C5 6C6
7C0 7C1 7C2 7C3 7C4 7C5 7C6 7C7
8C0 8C1 8C2 8C3 8C4 8C5 8C6 8C7 8C8
9C0 9C1 9C2 9C3 9C4 9C5 9C6 9C7 9C8 9C9
10C0 10C1 10C2 10C3 10C4 10C5 10C6 10C7 10C8 10C9 10C10
In the question above, the inner loop executes exactly 2^(number of set bits of i) - 1 times for each i.
So if k = number of bits, then N = 2^k.
Then the total count becomes: (kC0*2^0 + kC1*2^1 + kC2*2^2 + kC3*2^3 + ... + kCk*2^k) - N, and by the binomial theorem the sum in parentheses is (1 + 2)^k = 3^k.
If k = 10 then N = 2^k = 1024, so the count becomes:
(10C0*2^0 + 10C1*2^1 + 10C2*2^2 + 10C3*2^3 + ... + 10C10*2^10) - 1024
=(1*1 +10*2 + 45*4+ 120*8+210*16+252*32+210*64+120*128+45*256+10*512+1*1024) - 1024
=59049 - 1024
=58025
Here is another code snippet that helps to verify the number 58025.
int n = 1024;
int cnt = 0;
for (int i = 0; i < n; i++) {
    for (int j = i; j > 0; j = (j - 1) & i) {
        cnt++;
    }
}
System.out.println(cnt);
The output of the above code is 58025.

Switching the first 4 bits of a byte and the last half

I need to switch the first half and the second half of a byte: make 0011 0101 into 0101 0011, for example.
I thought it might work this way:
For example, I have 1001 1100.
I bit shift to the right 4 times and get 1111 1001 (because if the first bit is a 1, the bits shifted in become 1 too).
I bit shift to the left 4 times and get 1100 0000 (the second half of the byte gets filled with 0s).
I don't want 1111 1001 but 0000 1001, so I do 0x00001111 & 1111 1001 (which is meant to filter out the first 4 bits) to turn 1111 1001 into 0000 1001.
Then I add everything up:
0000 1001 + 1100 0000 = 1100 1001
I got this:
bytes[i] = (byte) ((0x00001111 & (bytes[i] >> 4)) + (bytes[i] << 4));
Here is one output: 11111111 becomes 00000001.
I do not really understand why this is happening. I know the binary system and I think I know how bit shifting works, but I can't explain this one.
Sorry for bad English :)
Be careful with the >>> operator, which shifts without sign extension, so zero bits fill in on the left. The problem is that it is an int operation. The >> operator works the same way except that it sign extends through the int.
int i = -1;
i >>>= 30;
System.out.println(i); // prints 3 as expected

byte b = -1;
b >>>= 6;
System.out.println(b); // prints -1 ???
The byte is still -1 because byte b = -1 was shifted as though it was an int then reassigned to a byte. So the byte remained negative. To get 3, you would need to do something that seems strange, like the following.
byte b = -1;
b >>>= 30;
System.out.println(b); // prints 3
So to do your swap you need to do the following:
byte b = (byte) 0b10100110;
b = (byte) (((b >>> 4) & 0xF) | (b << 4));
The 0xF mask clears the lingering high-order bits left over from the byte being sign-extended to an int before the shift.
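A quick check of that expression with a sample value (my own test, using the byte 1010 0110 from above):
byte b = (byte) 0b10100110;
byte swapped = (byte) (((b >>> 4) & 0xF) | (b << 4));
// prints 01101010: the nibbles 1010 and 0110 have traded places
System.out.println(String.format("%8s", Integer.toBinaryString(swapped & 0xFF)).replace(' ', '0'));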
I'm not sure about the syntax for bit manipulation in Java, although here's how you can do it.
bitmask = 0x0F;
firstHalf = (bytes[i] >> 4) & bitmask;
secondHalf = (bytes[i] & bitmask) << 4;
result = firstHalf | secondHalf;
I don't want 1111 1001 but 0000 1001
If so, you need to use the shift-right-zero-fill operator (>>>) instead of >>, which preserves the sign of the number.
I don't think the formula given above works properly. Here is a version that reverses all 8 bits of the byte:
public byte reverseBitsByte(byte x) {
    int intSize = 8;
    byte y = 0;
    for (int position = intSize - 1; position >= 0; position--) { // >= 0 so the last bit is not dropped
        y += ((x & 1) << position);
        x >>= 1;
    }
    return y;
}
static int swapBits(int a) {
    // Write the solution here ↓ (swap the two nibbles of the low byte)
    int right = (a & 0b00001111);
    right = (right << 4);
    int left = (a & 0b11110000);
    left = (left >> 4);
    return (right | left);
}
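For example, calling the method above (my own demo line):
System.out.println(Integer.toBinaryString(swapBits(0b00110101))); // prints 1010011, i.e. 0101 0011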

Maximum number of consecutive 1s in a given number

public int tobinary(int x)
{
    int count = 0;
    while (x != 0)
    {
        x = (x & (x << 1)); // how is this working?
        count++;
    }
    return count;
}
The above code is working fine, but I actually just copied and pasted it, so I want to know how the line of code I marked above works. It would be a great help to me.
For example, if I give the input 7, the binary format for it is 0111, so the answer will be 3. But how?
As we know, the << operator shifts all bits in its operand to the left, here by 1. Also, the & operator performs a bitwise-and on all bits of both its operands.
When will x not be 0? When the bitwise-and operation finds at least one position where a bit is set in both operands. The only time this happens is when there are 2 or more consecutive 1 bits in x.
x : 0000 0111
x << 1: 0000 1110
-----------------
&: 0000 0110
count = 1
If it's not 0, then there are at least 2 consecutive 1 bits in the number, and the (maximum) number of consecutive 1 bits in the number has been reduced by 1. Because we entered the loop in the first place, count it and try again. Eventually there won't be any more consecutive 1 bits, so we exit the loop with the correct count.
x : 0000 0110
x << 1: 0000 1100
-----------------
&: 0000 0100
count = 2
x : 0000 0100
x << 1: 0000 1000
-----------------
&: 0000 0000
count = 3, exit loop and return.
x = (x & (x << 1)) is performed enough times to eliminate the longest consecutive groups of 1 bits. Each loop iteration reduces each consecutive group of 1s by one because the number is logically ANDed with itself shifted left by one bit. This continues until no consecutive group of 1s remains.
To illustrate it for number 110111101:
110111101 // initial x, longest sequence 4
1101111010 // x << 1, count 1
100111000 // new x, longest sequence 3
1001110000 // x << 1, count 2
110000 // new x, longest sequence 2
1100000 // x << 1, count 3
100000 // new x, longest sequence 1
1000000 // x << 1, count 4
0 // new x, end of loop
Do note that since Java 7 it's handy to declare binary literals with int input = 0b110111101.
<< (left shift): binary left shift operator. The left operand's value is moved left by the number of bits specified by the right operand.
i.e. x << 1 shifts every bit one position to the left and puts a 0 in the units position. So, for x = 7 the bit representation is 0111, and x << 1 returns 1110 (14): the bits move left and a 0 is appended at the end.
This while loop can be broken down into the iterations below for the initial value x = 7:
iteration 1) 0111 & 1110 = 0110
iteration 2) 0110 & 1100 = 0100
iteration 3) 0100 & 1000 = 0000
As the value of x is now 0, there are no more iterations.
Hope this explains the behaviour of this code. You can add System.out.println(Integer.toBinaryString(x)); inside the loop to see the bits of x change yourself.
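Putting that suggestion into practice, this is the method from the question with just the print statement added inside the loop:
public int tobinary(int x) {
    int count = 0;
    while (x != 0) {
        System.out.println(Integer.toBinaryString(x)); // for x = 7 this prints 111, 110, 100
        x = (x & (x << 1));
        count++;
    }
    return count;
}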
So what happens is, for each iteration in the while loop,
x = x & (2 * x), since shifting left by one bit is the same as multiplying by 2.
For example, when x = 7
count = 0
Iteration 1:
//x = 7 & 14 which results in 6 ie
0111
1110 (AND)
-------
0110 => 6
-------
count = 1
Iteration 2:
//x = 6 & 12 results in 4
0110
1100 AND
-------
0100 => 4
-------
count = 2
Iteration 3:
// x = 4 & 8 results in 0
0100
1000 AND
-----
0000
-----
count = 3
That's how you get 3 for 7

Java bitwise code purpose, &

// The following code prints out the letters aA bB cC dD eE ...
class UpCase {
    public static void main(String args[]) {
        char ch;
        for (int i = 0; i < 10; i++) {
            ch = (char) ('a' + i);
            System.out.print(ch);
            ch = (char) ((int) ch & 66503);
            System.out.print(ch + " ");
        }
    }
}
I'm still learning Java but struggling to understand bitwise operations. Both pieces of code work, but I don't understand the binary reasoning behind them. Why is ch cast to (int) and back to char, and what is 66503 used for that enables it to print the letter in a different case?
// The following code displays the bits within a byte
class Showbits {
    public static void main(String args[]) {
        int t;
        byte val;
        val = 123;
        for (t = 128; t > 0; t = t / 2) {
            if ((val & t) != 0)
                System.out.print("1 ");
            else
                System.out.print("0 ");
        }
    }
}
//output is 0 1 1 1 1 0 1 1
For this code's output, what is the step-by-step breakdown to achieve it? If 123 is 01111011 and 128 (and likewise 64 and 32) is 10000000, shouldn't the output be 00000000, since & turns anything ANDed with a 0 into a 0? Really confused.
Second piece of code (Showbits):
The code is actually converting decimal to binary. The algorithm uses some bit magic, mainly the AND(&) operator.
Consider the number 123 = 01111011 and 128 = 10000000. When we AND them together, we get 0 or a non-zero number depending on whether the 1 in 128 lines up with a 1 or a 0 in 123.
10000000
& 01111011
----------
00000000
In this case, the answer is a 0 and we have the first bit as 0.
Moving forward, we take 64 = 01000000 and AND it with 123. Notice the shift of the 1 rightwards.
01000000
& 01111011
----------
01000000
AND-ing with 123 produces a non-zero number this time, and the second bit is 1. This procedure is repeated.
First piece of code (UpCase):
Here 65503 is the negation of 32 (in 16 bits).
32 = 0000 0000 0010 0000
~32 = 1111 1111 1101 1111
Essentially, we subtract 32 from the lowercase letter by AND-ing it with the negation of 32. As we know, subtracting 32 from a lowercase ASCII character converts it to uppercase.
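In code, that observation looks like this (my own illustration; it assumes plain ASCII letters, and the low 16 bits of ~32 are exactly 65503):
for (char c = 'a'; c <= 'j'; c++) {
    // clearing bit 5 (value 32) of an ASCII lowercase letter gives its uppercase form
    System.out.print("" + c + (char) (c & ~32) + " "); // prints aA bB cC dD eE fF gG hH iI jJ
}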
UpCase
The decimal number 66503 represented by a 32 bit signed integer is 00000000 00000001 00000011 11000111 in binary.
The ASCII letter a, written as an 8-bit value, is 01100001 in binary (97 in decimal).
Casting the char to a 32 bit signed integer gives 00000000 00000000 00000000 01100001.
&ing the two integers together gives:
00000000 00000000 00000000 01100001
00000000 00000001 00000011 11000111
===================================
00000000 00000000 00000000 01000001
which, cast back to char, gives 01000001, which is decimal 65, which is the ASCII letter A.
Showbits
No idea why you think that 128, 64 and 32 are all 10000000. They obviously can't be the same number, since they are, well, different numbers. 10000000 is 128 in decimal.
What the for loop does is start at 128 and go through every consecutive next smallest power of 2: 64, 32, 16, 8, 4, 2 and 1.
These are the following binary numbers:
128: 10000000
64: 01000000
32: 00100000
16: 00010000
8: 00001000
4: 00000100
2: 00000010
1: 00000001
So in each loop it &s the given value together with each of these numbers, printing "0 " when the result is 0, and "1 " otherwise.
Example:
val is 123, which is 01111011.
So the loop will look like this:
128: 10000000 & 01111011 = 00000000 -> prints "0 "
64: 01000000 & 01111011 = 01000000 -> prints "1 "
32: 00100000 & 01111011 = 00100000 -> prints "1 "
16: 00010000 & 01111011 = 00010000 -> prints "1 "
8: 00001000 & 01111011 = 00001000 -> prints "1 "
4: 00000100 & 01111011 = 00000000 -> prints "0 "
2: 00000010 & 01111011 = 00000010 -> prints "1 "
1: 00000001 & 01111011 = 00000001 -> prints "1 "
Thus the final output is "0 1 1 1 1 0 1 1", which is exactly right.
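If you just want the bits of a byte as a string, a common alternative (not part of the original code) is to zero-pad the output of Integer.toBinaryString:
byte val = 123;
String bits = String.format("%8s", Integer.toBinaryString(val & 0xFF)).replace(' ', '0');
System.out.println(bits); // prints 01111011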

count leading zeros (clz) or number of leading zeros (nlz) in Java

I need the int 32 in binary as 00100000, or the int 127 in binary as 0111 1111.
Integer.toBinaryString only returns the digits starting from the first 1, i.e. it drops the leading zeros.
If I build the for loop this way:
for (int i = 32; i <= 127; i++) {
    System.out.println(i);
    System.out.println(Integer.toBinaryString(i));
}
From those binary numbers I need the number of leading zeros (count leading zeros (clz), also called number of leading zeros (nlz)). I really mean the exact count of leading 0s in the 8-bit form, e.g. 00100000 -> 2 and 0111 1111 -> 1.
How about
int lz = Integer.numberOfLeadingZeros(i & 0xFF) - 24;
int tz = Integer.numberOfTrailingZeros(i | 0x100); // trailing zeros, max is 8.
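Applied to the two examples from the question, a small demo around that expression (my own snippet):
for (int i : new int[] {32, 127}) {
    int lz = Integer.numberOfLeadingZeros(i & 0xFF) - 24; // leading zeros within an 8-bit view
    System.out.println(i + " -> " + lz); // prints 32 -> 2 and 127 -> 1
}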
Count the number of leading zeros as follows:
int lz = 8;
while (i != 0)
{
    lz--;
    i >>>= 1;
}
Of course, this assumes the number doesn't exceed 255; otherwise you would get negative results.
An efficient solution is int ans = 8 - (floor(log2(x)) + 1).
You can calculate log2(x) = log(x) / log(2) using a logarithm in any base.
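As a sketch of that formula (my own snippet; note it breaks for x = 0 and relies on floating point, so results near exact powers of two should be double-checked):
int x = 100; // 0110 0100, one leading zero in 8 bits
int ans = 8 - ((int) (Math.log(x) / Math.log(2)) + 1);
System.out.println(ans); // prints 1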
public class UtilsInt {

    int leadingZerosInt(int i) {
        return leadingZeros(i, Integer.SIZE);
    }

    /**
     * Uses recursion to find the first set bit:
     * shift right by one bit, adjust the count,
     * and stop counting/recursing once the shifted value is zero.
     * (logger and UtilsInt.intToString are helpers assumed to exist elsewhere in this class.)
     * @param i - integer to check
     * @param maxBitsCount - size of the type (in this case int);
     *        if we only want to check positive values we can set this to Integer.SIZE / 2
     *        (as int is signed in Java - positive values are in the lower 16 bits)
     */
    private synchronized int leadingZeros(int i, int maxBitsCount) {
        try {
            logger.debug("checking if bit: " + maxBitsCount
                    + " is set | " + UtilsInt.intToString(i, 8));
            return (i >>>= 1) != 0 ? leadingZeros(i, --maxBitsCount) : maxBitsCount;
        } finally {
            if (i == 0) logger.debug("bits in this integer from: " + --maxBitsCount
                    + " up to last are not set (i'm counting from msb->lsb)");
        }
    }
}
test statement:
int leadingZeros = new UtilsInt().leadingZerosInt(255); // 8
test output:
checking if bit: 32 is set |00000000 00000000 00000000 11111111
checking if bit: 31 is set |00000000 00000000 00000000 01111111
checking if bit: 30 is set |00000000 00000000 00000000 00111111
checking if bit: 29 is set |00000000 00000000 00000000 00011111
checking if bit: 28 is set |00000000 00000000 00000000 00001111
checking if bit: 27 is set |00000000 00000000 00000000 00000111
checking if bit: 26 is set |00000000 00000000 00000000 00000011
checking if bit: 25 is set |00000000 00000000 00000000 00000001
bits in this integer from: 24 up to last are not set (i'm counting from msb->lsb)
