public int tobinary(int x)
{
    int count = 0;
    while (x != 0)
    {
        x = (x & (x << 1)); // how this stuff is working
        count++;
    }
    return count;
}
The above code is working fine, but actually I just copied and pasted it, so I want to know how the line of code I marked above works. It would be a great help for me.
For example, if I give the input 7, the binary format for this is 0111, so our answer will be 3, but how?
As we know, the << operator shifts all bits in its operand to the left, here by 1. Also, the & operator performs a bitwise-and on all bits of both its operands.
When will x not be 0? When the bitwise AND finds at least one bit position that is set in both operands. The only time that happens is when x has 2 or more consecutive 1 bits, so that a set bit in x lines up with a set bit in x << 1.
x : 0000 0111
x << 1: 0000 1110
-----------------
&: 0000 0110
count = 1
If it's not 0, then there are at least 2 consecutive 1 bits in the number, and the (maximum) number of consecutive 1 bits in the number has been reduced by 1. Because we entered the loop in the first place, count it and try again. Eventually there won't be any more consecutive 1 bits, so we exit the loop with the correct count.
x : 0000 0110
x << 1: 0000 1100
-----------------
&: 0000 0100
count = 2
x : 0000 0100
x << 1: 0000 1000
-----------------
&: 0000 0000
count = 3, exit loop and return.
x = (x & (x << 1)) is performed enough times to eliminate the longest consecutive group of 1 bits. Each loop iteration shortens every consecutive group of 1s by one, because the number is bitwise ANDed with itself shifted left by one bit. This continues until no consecutive group of 1s remains.
To illustrate it for number 110111101:
110111101 // initial x, longest sequence 4
1101111010 // x << 1, count 1
100111000 // new x, longest sequence 3
1001110000 // x << 1, count 2
110000 // new x, longest sequence 2
1100000 // x << 1, count 3
100000 // new x, longest sequence 1
1000000 // x << 1, count 4
0 // new x, end of loop
Do note that since Java 7 it's handy to declare binary literals with int input = 0b110111101.
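For instance, a minimal trace harness (illustrative only, using that literal):
int x = 0b110111101;
int count = 0;
while (x != 0) {
    System.out.println(Integer.toBinaryString(x)); // show x before each shrink
    x = x & (x << 1); // every run of consecutive 1s loses one bit
    count++;
}
System.out.println("longest run of 1s: " + count); // prints 4 for 0b110111101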
<< (left shift): binary left shift operator. The left operand's value is moved left by the number of bits specified by the right operand.
i.e. x << 1 shifts every bit one position to the left and adds a 0 in the lowest (units) position. So, let's say for x = 7 the bit representation is 0111. Performing x << 1 returns 1110: the bits are shifted to the left and a 0 is added at the end.
This while loop can be broken down into the iterations below for the initial value x = 7:
iteration 1) 0111 & 1110 = 0110
iteration 2) 0110 & 1100 = 0100
iteration 3) 0100 & 1000 = 0000
As the value of x is now 0, there are no more iterations.
Hope this explains the behaviour of this code. You can put System.out.println(Integer.toBinaryString(x)); inside the loop to see the change in the bit value of x yourself.
So what happens is, for each iteration in the while loop,
x = x (AND operator) (2 * x).
For example, when x = 7
count = 0
Iteration 1:
//x = 7 & 14 which results in 6 ie
0111
1110 (AND)
-------
0110 => 6
-------
count = 1
Iteration 2:
//x = 6 & 12 results in 4
0110
1100 AND
-------
0100 => 4
-------
count = 2
Iteration 3:
// x = 4 & 8 results in 0
0100
1000 AND
-----
0000
-----
count = 3
That's how you get 3 for 7
Related
What is the time complexity of this code snippet? Why, mathematically, is that?
for (int i = 0; i < n; i++) {
    for (int j = i; j > 0; j = (j - 1) & i) {
        System.out.println(j);
    }
}
The short version:
The runtime of the code is Θ(n^(log2 3)), which is approximately Θ(n^1.585).
The derivation involves counting the number of 1 bits set in ranges of numbers.
Your connection to Pascal's triangle is not a coincidence!
Here's the route that I used to work this out. There's a really nice pattern that plays out in the bits of the numbers as you're doing the subtractions. For example, suppose that our number i is given by 10101001 in binary. Here's the sequence of values we'll see for j:
10101001
10101000
10100001
10100000
10001001
10001000
10000001
10000000
00101001
00101000
00100001
00100000
00001001
00001000
00000001
00000000
To see the pattern, focus on the columns of the number where there were 1 bits in the original number. Then you get this result:
v v v v
10101001 1111
10101000 1110
10100001 1101
10100000 1100
10001001 1011
10001000 1010
10000001 1001
10000000 1000
00101001 0111
00101000 0110
00100001 0101
00100000 0100
00001001 0011
00001000 0010
00000001 0001
00000000 0000
In other words, the sequence of values j takes on is basically counting down from the binary number 1111 all the way down to zero!
More generally, suppose that the number i has b(i) 1 bits in it. Then we're counting down from a number made of b(i) 1 bits down to 0, which requires 2^b(i) steps. Therefore, the amount of work the inner loop does is 2^b(i).
That gives us the complexity of the inner loop, but to figure out the total complexity of the loop, we need to figure out how much work is done across all n iterations, not just one of them. So the question then becomes: if you count from 0 up to n, and you sum up 2^b(i), what do you get? Or, stated differently, what is
2^b(0) + 2^b(1) + 2^b(2) + ... + 2^b(n-1)
equal to?
To make this easier, let's assume that n is a perfect power of two. Say, for example, that n = 2^k. This will make this easier because that means that the numbers 0, 1, 2, ..., n-1 can all be written using the same number of bits. There's a really nice pattern at play here. Look at the numbers from 0 to 7 in binary and work out what 2^b(i) is for each:
000 1
001 2
010 2
011 4
100 2
101 4
110 4
111 8
Now look at the numbers from 0 to 15 in binary:
0000 1
0001 2
0010 2
0011 4
0100 2
0101 4
0110 4
0111 8
----
1000 2
1001 4
1010 4
1011 8
1100 4
1101 8
1110 8
1111 16
In writing out the numbers from 8 to 15, we're basically writing out the numbers from 0 to 7, but with a 1 prefixed. This means each of those numbers has one more 1 bit set than the corresponding number in the previous range, so 2^b(i) is doubled for each of them. So if we know the sum of these terms from 0 to 2^k - 1, and we want to know the sum of the terms from 0 to 2^(k+1) - 1, then we basically take the sum we have and add two more copies of it.
More formally, let's define S(k) = 2^b(0) + 2^b(1) + ... + 2^b(2^k - 1). Then we have
S(0) = 1
S(k + 1) = S(k) + 2S(k) = 3S(k)
This recurrence solves to S(k) = 3^k. In other words, the sum 2^b(0) + 2^b(1) + ... + 2^b(2^k - 1) works out to 3^k.
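As a quick sanity check, a minimal sketch (not part of the derivation itself) that sums 2^b(i) for i from 0 to 2^k - 1:
int k = 10;
long sum = 0;
for (int i = 0; i < (1 << k); i++) {
    sum += 1L << Integer.bitCount(i); // 2^b(i)
}
System.out.println(sum); // prints 59049, which is 3^10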
Of course, in general, we won't have n = 2^k. However, if we write k = log2 n, then we can get an approximation of the number of iterations at roughly
3^(log2 n)
= n^(log2 3)
≈ n^1.584...
So we'd expect the runtime of the code to be Θ(n^(log2 3)). To see if that's the case, I wrote a program that ran the function and counted the number of times the inner loop executed. I then plotted the number of iterations of the inner loop against the function n^(log2 3). Here's what it looks like:
[plot: number of inner-loop iterations plotted against n^(log2 3)]
As you can see, this fits pretty well!
So how does this connect to Pascal's triangle? It turns out that the number 2^b(i) has another interpretation: it's the number of odd entries in the ith row of Pascal's triangle! And that might explain why you're seeing combinations pop out of the math.
Thanks for posting this problem - it's super interesting! Where did you find it?
Here is a Java Code snippet:
int i, j, n, cnt;
int bit = 10;
int[] mp = new int[bit + 1];
n = (1 << bit);
for (i = 0; i < n; i++) {
    mp[Integer.bitCount(i)]++;
    if ((i & (i + 1)) == 0) { // check 2^k - 1: all bits set, max value of a k-bit number
        System.out.printf("\nfor %d\n", i);
        for (j = 0; j <= bit; j++) {
            System.out.printf("%d ", mp[j]);
        }
    }
}
Output:
for 0 // 2^0 - 1
1 0 0 0 0 0 0 0 0 0 0
for 1 // 2^1 - 1
1 1 0 0 0 0 0 0 0 0 0
for 3 // 2^2 - 1
1 2 1 0 0 0 0 0 0 0 0
for 7 // 2^3 - 1
1 3 3 1 0 0 0 0 0 0 0
for 15 // 2^4 - 1
1 4 6 4 1 0 0 0 0 0 0
for 31 // 2^5 - 1
1 5 10 10 5 1 0 0 0 0 0
for 63 // 2^6 - 1
1 6 15 20 15 6 1 0 0 0 0
for 127 // 2^7 - 1
1 7 21 35 35 21 7 1 0 0 0
for 255 // 2^8 - 1
1 8 28 56 70 56 28 8 1 0 0
for 511 // 2^9 - 1
1 9 36 84 126 126 84 36 9 1 0
for 1023 // 2^10 - 1
1 10 45 120 210 252 210 120 45 10 1
So it looks like Pascal triangle…
0C0
1C0 1C1
2C0 2C1 2C2
3C0 3C1 3C2 3C3
4C0 4C1 4C2 4C3 4C4
5C0 5C1 5C2 5C3 5C4 5C5
6C0 6C1 6C2 6C3 6C4 6C5 6C6
7C0 7C1 7C2 7C3 7C4 7C5 7C6 7C7
8C0 8C1 8C2 8C3 8C4 8C5 8C6 8C7 8C8
9C0 9C1 9C2 9C3 9C4 9C5 9C6 9C7 9C8 9C9
10C0 10C1 10C2 10C3 10C4 10C5 10C6 10C7 10C8 10C9 10C10
In the question above the inner loop executes exactly 2^(number of set bits) - 1 times.
So if we observe, we can see that if k = number of bits, then N = 2^k.
Then the complexity becomes: (kC0*2^0 + kC1*2^1 + kC2*2^2 + kC3*2^3 + ... + kCk*2^k) - N
If k=10 then N=2^k=1024 So the complexity becomes as follows:
(10C0*2^0 + 10C1*2^1 + 10C2*2^2 + 10C3*2^3 + ... + 10C10*2^10) - 1024
= (1*1 + 10*2 + 45*4 + 120*8 + 210*16 + 252*32 + 210*64 + 120*128 + 45*256 + 10*512 + 1*1024) - 1024
=59049 - 1024
=58025
Here is another code snippet that helps to verify the number 58025.
int i, j, n, cnt;
n = 1024;
cnt = 0;
for (i = 0; i < n; i++) {
    for (j = i; j > 0; j = (j - 1) & i) {
        cnt++;
    }
}
System.out.println(cnt);
The output of the above code is 58025.
I need to switch the first half and the second half of a byte: Make 0011 0101 to 0101 0011 for example
I thought it might work this way:
For example, I have 1001 1100
I shift to the right 4 times and get 1111 1001 (because if the first bit is a 1, the new bits become 1 too)
I shift to the left 4 times and get 1100 0000 (the second half of the byte gets filled with 0s)
I don't want 1111 1001 but 0000 1001, so I do 0x00001111 & 1111 1001 (which filters the first 4 bits) to turn 1111 1001 into 0000 1001
Then I add everything up:
0000 1001 + 1100 0000 = 1100 1001
I got this:
bytes[i] = (byte) ((0x00001111 & (bytes[i] >> 4)) + (bytes[i] << 4));
Here is one output: 11111111 to 00000001
I do not really understand why this is happening. I know the binary system, and I think I know how bit shifting works, but I can't explain this one.
Sorry for bad english :)
Be careful with the >>> operator, which shifts without sign extending, so zero bits fill in on the left. The problem is that it is an int operation. The >> operator works the same way except that it sign extends through the int.
int i = -1;
i >>>= 30;
System.out.println(i); // prints 3 as expected

byte b = -1;
b >>>= 6;
System.out.println(b); // prints -1 ???
The byte is still -1 because byte b = -1 was shifted as though it was an int then reassigned to a byte. So the byte remained negative. To get 3, you would need to do something that seems strange, like the following.
byte b = -1;
b >>>= 30;
System.out.println(b); // prints 3
So to do your swap you need to do the following:
byte b = (byte) 0b10100110; // the cast is needed: 0b10100110 is an int literal outside the byte range
b = (byte) (((b >>> 4) & 0xF) | (b << 4));
The 0xF mask masks off those lingering high-order bits left over from the byte being sign extended when it is promoted to int.
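A quick check of that swap, as an illustrative snippet:
byte b = (byte) 0b10100110;
byte swapped = (byte) (((b >>> 4) & 0xF) | (b << 4));
// prints 01101010, i.e. the two nibbles of 10100110 exchanged
System.out.println(String.format("%8s", Integer.toBinaryString(swapped & 0xFF)).replace(' ', '0'));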
I'm not sure about the syntax for bit manipulation in Java, but here's how you can do it:
int bitmask = 0xF; // the low four bits, binary 1111
int firstHalf = (bytes[i] >> 4) & bitmask;  // high nibble moved down
int secondHalf = (bytes[i] & bitmask) << 4; // low nibble moved up
int result = firstHalf | secondHalf;
I don't want 1111 1001 but 0000 1001
If so, you need to use the shift-right-zero-fill operator (>>>) instead of >>, which preserves the sign of the number.
I don't think the formula found works properly. This reverses all the bits of the byte:
public byte reverseBitsByte(byte x) {
    int intSize = 8;
    byte y = 0;
    for (int position = intSize - 1; position >= 0; position--) { // include position 0 so the original top bit is not lost
        y += ((x & 1) << position); // place the current lowest bit of x at 'position'
        x >>= 1;
    }
    return y;
}
static int swapBits(int a) {
    // write the solution here ↓
    int right = (a & 0b00001111);
    right = (right << 4);
    int left = (a & 0b11110000);
    left = (left >> 4);
    return (right | left);
}
This question is related to an earlier one, but it is different in that it asks how the code actually works. More precisely, I do not understand how numberOfTrailingZeros(int i) in Java 8 computes the final result. The code is as follows:
public static int numberOfTrailingZeros(int i) {
    // HD, Figure 5-14
    int y;
    if (i == 0) return 32;
    int n = 31;
    y = i << 16; if (y != 0) { n = n - 16; i = y; }
    y = i << 8;  if (y != 0) { n = n - 8;  i = y; }
    y = i << 4;  if (y != 0) { n = n - 4;  i = y; }
    y = i << 2;  if (y != 0) { n = n - 2;  i = y; }
    return n - ((i << 1) >>> 31);
}
Now I understand the purpose of the shift operations from 16 down to 2, but won't n already hold the number of trailing zeros after the last shift operation:
y = i << 2; if (y != 0) { n = n - 2; i = y; }.
That is I do not understand the purpose of this particular line
n - ((i << 1) >>> 31);
why do we need that when n has already the right value?
Can anyone give a detailed example of what is going on?
Thanks!
I will try to explain the algorithm. It is a bit optimized, but I'll start with a (hopefully) more simplified approach.
It is used to determine the number of trailing zero bits of a 32-bit number, that is, how many zeros are on the right side (assuming the most significant bit is on the left). The main idea is to divide the field into two halves: if the right one is all zero, we add the number of bits in the right half to the result and continue examining the left one (again dividing it); if the right one is not all zero, we can ignore the left one and continue examining the right one (adding nothing to the result).
Example: 0000 0000 0000 0010 0000 0000 0000 0000 (0x0002 0000)
first step (not java code, all numbers base 2, result is decimal):
i = 0000 0000 0000 0010 0000 0000 0000 0000
left = 0000 0000 0000 0010
right = 0000 0000 0000 0000
result = 0
since right is zero, we add 16 (actual number of bits in right part) to the result and continue examining the left part
second step:
i = 0000 0000 0000 0010 // left from previous
left = 0000 0000
right = 0000 0010
result = 16
now right is not zero, so we add nothing to result and continue with right part
third step:
i = 0000 0010 // right from previous
left = 0000
right = 0010
result = 16
right is not zero, nothing added to result, continue with right part
4th step:
i = 0010 // right from previous
left = 00
right = 10
result = 16
5th step:
i = 10 // right from previous
left = 1
right = 0
result = 16
now right is zero, so we add 1 (the number of bits in the right part) and there is nothing else to divide (that is the return line of the original code)
result = 17
Optimizations: instead of keeping separate left and right parts, the algorithm only ever examines the leftmost bits of the number and simply shifts the right part into the left if the right one is not zero; example for the first step:
y = i << 16; if (y != 0) { ... i = y;}
and, to avoid having an else part (I think), it starts the result with 31 (sum of all part lengths 1+2+4+8+16) and subtracts the bit count if the right side (after shifting the now left one) is not zero. Again for the first step:
y = i << 16; if (y != 0) { n = n - 16; ....}
Second optimization, last step, instead of
y = i << 1; if (y != 0) { n = n - 1; /* i = y not needed since we are done*/ }
return n;
it does just
return n - ((i << 1) >>> 31);
here ((i << 1) >>> 31) shifts the second bit of i (the second leftmost, second highest bit) to the leftmost position (discarding the first bit) and then shifts it all the way to the rightmost position; the result is 0 if that second bit is zero and 1 otherwise. That value is then subtracted from the result (to undo the initial 31).
The first (leftmost) bit need not be directly examined. It only matters if all other bits are zero, that is, either the number is 0, which is checked at the very beginning (if (i == 0) return 32;), or it is 0x80000000 (only the leftmost bit set), in which case the initial value of the result, 31, is returned.
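A straightforward, unoptimized sketch of that halving idea (written for illustration, not the actual JDK code) could look like this:
static int trailingZeros(int i) {
    if (i == 0) return 32;
    int count = 0;
    for (int width = 16; width >= 1; width >>= 1) {
        int mask = (1 << width) - 1;  // selects the lower 'width' bits
        if ((i & mask) == 0) {        // right half is all zero:
            count += width;           //   all of its bits are trailing zeros
            i >>>= width;             //   continue with the left half
        }
        // otherwise keep examining the right half (the low 'width' bits)
    }
    return count;
}
For 0x00020000 this returns 17, matching the trace above.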
I was trying to solve this question but the automated judge is returning "time limit exceeded" (TLE).
On the occasion of Valentine's Day, Adam and Eve took part in a competition. They cleared all rounds and got into the finals. In the final round, Adam is given an even number N and an integer K, and he has to find the greatest odd number M less than N such that the sum of digits in the binary representation of M is at most K.
Input format:
For each test case you are given an even number N and an integer K
Output format:
For each test case, output the integer M if it exists, else print -1
Constraints:
1 ≤ T ≤ 10^4
2 ≤ N ≤ 10^9
0 ≤ K ≤ 30
Sample input:
2
10 2
6 1
Sample output:
9
1
This is what I have done so far.
static long play(long n, int k) {
    if (k == 0) return -1;
    if (k == 1) return 1;
    long m = n - 1;
    while (m > 0) {
        long value = Long.bitCount(m); // built-in function to count set bits
        if (value <= k) {
            return m;
        }
        m = m - 2;
    }
    return -1;
}

public void solve(InputReader in, OutputWriter out) {
    long start = System.currentTimeMillis();
    int t = in.readInt();
    while (t-- > 0) {
        long n = in.readLong();
        int k = in.readInt();
        long result = play(n, k);
        out.printLine(result);
    }
    long end = System.currentTimeMillis();
    out.printLine((end - start) / 1000d + "ms");
}
}
According to the updated question, N can be between 2 and 10^9. You're starting with N-1 and looping down by 2, so you get up to about 10^9 / 2 iterations of the loop per test case. Not good.
Starting with M = N - 1 is good. And using bitCount(M) is good, to get started. If the initial bitcount is <= K you're done.
But if it's not, do not loop with step -2.
See the number in your mind as binary, e.g. 110101011. The bit count is 6. Let's say K is 4; that means you have to remove 2 bits. The rightmost bit must stay on, and you want the largest number, so clear the two lowest 1 bits other than the rightmost one. Result: 110100001.
Now, you figure out how to write that. And do it without converting to text.
Note: With N <= 10^9, it will fit in an int. No need for long.
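A minimal sketch of that approach (assuming Integer.bitCount and Integer.lowestOneBit; not the original poster's code): keep the rightmost bit and clear the lowest remaining 1 bits until at most K are set.
static int play(int n, int k) {
    if (k == 0) return -1;  // an odd number needs at least one set bit
    int m = n - 1;          // largest odd candidate below the even n
    while (Integer.bitCount(m) > k) {
        // m & ~1 ignores the rightmost bit; lowestOneBit picks the next 1 bit to clear
        m &= ~Integer.lowestOneBit(m & ~1);
    }
    return m;
}
Each pass clears one bit, so this needs at most about 30 iterations per test case instead of up to N/2.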
You'll have to perform bitwise operations to compute the answer quickly. Let me give you a few hints.
The number 1 is the same in binary and decimal notation: 1₂ = 1₁₀
To make the number 10₂ = 2₁₀, shift 1 to the left by one position. In Java and many other languages, we can write this:
(1 << 1) == 2
To make the binary number 100₂ = 4₁₀, shift 1 to the left by two positions:
(1 << 2) == 4
To make the binary number 1000₂ = 8₁₀, shift 1 to the left by three positions:
(1 << 3) == 8
You get the idea.
To see if a bit at a certain position is 1 or 0, use &, the bitwise AND operator. For example, we can determine that 5₁₀ = 101₂ has a 1 at the third least significant bit, a 0 at the second least significant bit, and a 1 at the least significant bit:
5 & (1 << 2) != 0
5 & (1 << 1) == 0
5 & (1 << 0) != 0
To set a bit to 0, use ^, the bitwise XOR operator. For example, we can set the second most significant bit of 7₁₀ = 111₂ to 0 and thus obtain 5₁₀ = 101₂:
7 ^ (1 << 1) == 5
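One caveat: XOR toggles a bit, so it only clears a bit that is already 1. To clear a bit unconditionally, AND with the complement of the mask, for example:
int toggled = 7 ^ (1 << 1);  // 111 -> 101, works here because bit 1 was set
int cleared = 7 & ~(1 << 1); // clears bit 1 regardless of its previous value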
As the answer is odd,
let ans = 1; this uses one set bit, so k = k - 1.
Now the binary representation of ans is
ans(binary) = 00000000000000000000000000000001
while (k > 0):
    set the 30th position:
    ans(binary) = 01000000000000000000000000000001
    if ans(decimal) < N:
        k -= 1
    else:
        reset the 30th position:
        ans(binary) = 00000000000000000000000000000001
    do the same from the 29th down to the 1st position
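Roughly in Java, a minimal sketch of that greedy idea (the method name is just illustrative):
static int greatestOdd(int n, int k) {
    if (k == 0) return -1;
    int ans = 1;               // the answer must be odd, so bit 0 is set
    int remaining = k - 1;     // one of the k allowed bits is already used
    for (int pos = 30; pos >= 1 && remaining > 0; pos--) {
        int candidate = ans | (1 << pos);
        if (candidate < n) {   // keep the bit only if we stay below N
            ans = candidate;
            remaining--;
        }
    }
    return ans;
}
For the sample input this returns 9 for (10, 2) and 1 for (6, 1).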
I'm trying to understand how bit shifting works. Can someone please explain the meaning of this line:
while ((n & 1) == 0) n >>= 1;
where n is an integer, and give me an example of an n for which the shift is executed.
Breaking it down:
n & 1 performs a bitwise AND between n and 1, which is 00000000000000000000000000000001 in binary. As such, it returns 00000000000000000000000000000001 when n ends in a 1 (that is, n is odd, whether positive or negative) and 00000000000000000000000000000000 otherwise.
(n & 1) == 0 will hence be true if n is even and false otherwise.
n >>= 1 is equivalent to n = n >> 1. As such it shifts all bits to the right, which is roughly equivalent to a division by two (rounding down).
If e.g. n started as 12 then in binary it would be 1100. After one loop it will be 110 (6), after another it will be 11 (3) and then the loop will stop.
If n is 0 then after the next loop it will still be 0, and the loop will be infinite.
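A guarded variant, as a small sketch, makes the n == 0 caveat explicit:
static int stripTrailingZeros(int n) {
    if (n == 0) return 0; // 0 has no set bits; the original loop would never terminate
    while ((n & 1) == 0) {
        n >>= 1;          // drop one trailing zero bit (divide by 2)
    }
    return n;             // now odd: its lowest bit is 1
}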
Let n be 4, which in binary is represented as:
00000000 00000000 00000000 00000100
(n & 1) bitwise ANDs n with 1. 1 has the binary representation:
00000000 00000000 00000000 00000001
The result of the bitwise anding is 0:
00000000 00000000 00000000 00000100 = n
00000000 00000000 00000000 00000001 = 1
------------------------------------
00000000 00000000 00000000 00000000 = 0
so the condition of the while loop is true. Effectively, (n & 1) was used to extract the least significant bit of n.
In the while loop you right shift (>>) n by 1. Right shifting a number by k is the same as dividing the number by 2^k.
n which is now 00000000 00000000 00000000 00000100 on right shifting once becomes
00000000 00000000 00000000 00000010 which is 2.
Next we extract the LSB (least significant bit) of n again, which is 0, and right shift again to give 00000000 00000000 00000000 00000001, which is 1.
Next we again extract the LSB of n, which is now 1, and the loop breaks.
So effectively you keep dividing your number n by 2 till it becomes odd, as odd numbers have their LSB set.
Also note that if n is 0 to start with, you'll go into an infinite loop, because no matter how many times you divide 0 by 2 you'll not get an odd number.
Assume n = 12. The bits for this would be 1100 (1*8 + 1*4 + 0*2 + 0*1 = 12).
The first time through the loop n & 1 == 0 because the last digit of 1100 is 0 and when you AND that with 1, you get 0. So n >>= 1 will cause n to change from 1100 (12) to 110 (6). As you may notice, shifting right has the same effect as dividing by 2.
The last bit is still zero, so n & 1 will still be 0, so it will shift right one more time. n>>=1 will cause it to shift one more digit to the right changing n from 110 (6) to 11 (3).
Now you can see the last bit is 1, so n & 1 will be 1, causing the while loop to stop executing. The purpose of the loop appears to be to shift the number to the right until it finds the first turned-on bit (until the result is odd).
Let's assume n equals 42 (just because):
int n = 42;
while ((n & 1) == 0) {
    n >>= 1;
}
Iteration 0:
n = 42 (or 0000 0000 0000 0000 0000 0000 0010 1010)
n & 1 == 0 is true (because n&1 = 0 or 0000 0000 0000 0000 0000 0000 0000 0000)
Iteration 1:
n = 21 (or 0000 0000 0000 0000 0000 0000 0001 0101)
n & 1 == 0 is false (because n & 1 == 1 or 0000 0000 0000 0000 0000 0000 0000 0001)
What it does:
Basically, your loop divides n by 2 as long as n is an even number:
n & 1 is a simple parity check,
n >>= 1 shifts the bits to the right, which just divides by 2.
For example, if n was
n = b11110000
then
n & 1 =  b11110000 &
         b00000001
         ---------
         b00000000
n >>= 1: b11110000 >> 1
         ---------
         b01111000
n = b01111000
If the loop keeps going, a few iterations later it becomes
n = b00001111
n & 1 is actually a bitwise AND operation. Here the bit pattern of n is ANDed against the bit pattern of 1, and the result is compared against zero. If it is zero, then n is right shifted once. You can take one right shift as a division by 2, and so on.