I came across this piece of code in Java and would be delighted if someone could explain the logic to me.
public boolean name(int n) {
    return ((n >> n) & 1L) > 0;
}
I guess this is some kind of check operation, but what boolean value will this code return? And is there an alternative to this code? I am trying my best to understand bit manipulation in Java.
That's a bizarre piece of code. It checks whether the number n, having been shifted right n % 32 bits, is odd.
The first non-negative values which pass are 37 (100101 in binary), 70 (1000110 in binary), and 101 (1100101 in binary).
I doubt that it actually works as the original coder intended it to - it doesn't obviously signify anything useful (and the method name, name, is pretty unhelpful...)
Perhaps the point of this puzzle was to see if you would consider shifting outside 0 to 31 bits and what would happen.
It gets more bizarre for negative numbers.
for (int n = -70; n <= 200; n++)
    if (((n >> n) & 1L) > 0)
        System.out.print(n + " ");
prints
-70 -69 -68 -67 -66 -65 -58 -57 -56 -55 -54 -53 -52 -51 -50 -49 -48 -47 -46 -45 -44 -43 -42 -41 -40 -39 -38 -37 -36 -35 -34 -33 -27 -26 -25 -24 -23 -22 -21 -20 -19 -18 -17 -16 -15 -14 -13 -12 -11 -10 -9 -8 -7 -6 -5 -4 -3 -2 -1 37 70 101 102 135 165 167 198 199
A similar formula when n is an int:
(n & (1 << (n & 31))) != 0
and if n were a long:
(n & (1L << (n & 63))) != 0
More negative numbers have a 1 set after shifting because they get sign extended.
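A small, self-contained sketch (class and variable names are mine) demonstrating both effects at once, the masked shift distance and the sign extension:

```java
public class ShiftDemo {
    public static void main(String[] args) {
        // For int operands only the low 5 bits of the shift distance are
        // used, so n >> n is effectively n >> (n & 31).
        int n = 37;                         // 100101 in binary
        System.out.println(n >> n);         // 1  (shifted by 37 & 31 == 5)
        System.out.println(n >> (n & 31));  // 1  (same thing, written out)

        // Negative numbers are sign-extended: >> fills the vacated high
        // bits with copies of the sign bit.
        System.out.println(-1 >> 5);        // -1 (all bits stay set)
    }
}
```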
A similar puzzle
http://vanillajava.blogspot.co.uk/2012/01/another-shifty-challenge.html
http://vanillajava.blogspot.co.uk/2012/01/shifting-challenge.html
For positive numbers, it seems that the function returns true iff a number is of the form:
sum_k (alpha_k * 2^k + d(k)), where
alpha_k = 0 or 1
k >= 5
d(k) = k for exactly one of the k where alpha_k = 1, and d(k) = 0 otherwise
Example:
alpha_k = 1 for k = 5, 0 otherwise => 32 + 5 = 37
alpha_k = 1 for k = 6, 0 otherwise => 64 + 6 = 70
alpha_k = 1 for k = 5 and 6, 0 otherwise => 32 + 5 + 64 = 101
or 32 + 64 + 6 = 102
etc.
All those numbers will work:
shifting such a number right by itself mod 32 shifts it by d(k) for the single k where d(k) is nonzero: the alpha_k * 2^k terms are all multiples of 32 (k >= 5), so the low five bits of the number are just that d(k).
the bit that ends up in position 0 comes from position k, which is 1 by definition (alpha_k = 1).
Proving that only those numbers work is a bit more challenging...
Next question is obviously: what's the point?!
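The characterization is easy to spot-check empirically. Here is a throwaway sketch of mine (not from the answer above) that prints every positive value up to 200 accepted by the original method:

```java
public class CheckForm {
    public static void main(String[] args) {
        // Enumerate the positive values accepted by ((n >> n) & 1L) > 0
        // and eyeball them against the 32+5, 64+6, 32+64+5, ... pattern.
        for (int n = 1; n <= 200; n++) {
            if (((n >> n) & 1L) > 0) {
                System.out.print(n + " ");
            }
        }
        System.out.println();
        // prints: 37 70 101 102 135 165 167 198 199
    }
}
```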
>> is the signed right shift operator: the left operand is the integer to be shifted, and the right operand is the number of bit positions to shift it by. The final & 1L tests the lowest bit: the function returns true if bit 0 of the shifted value is 1. The true purpose of this is unknown to me, but the set of values for which the function returns true depends on the operand size: for a 32-bit int the shift distance wraps at multiples of 32 and then grows again, as the traces below show.
32: (n>>n): 32 (n>>n)&1L: 0
33: (n>>n): 16 (n>>n)&1L: 0
34: (n>>n): 8 (n>>n)&1L: 0
35: (n>>n): 4 (n>>n)&1L: 0
36: (n>>n): 2 (n>>n)&1L: 0
37: (n>>n): 1 (n>>n)&1L: 1
or
192: (n>>n): 192 (n>>n)&1L: 0
193: (n>>n): 96 (n>>n)&1L: 0
194: (n>>n): 48 (n>>n)&1L: 0
195: (n>>n): 24 (n>>n)&1L: 0
196: (n>>n): 12 (n>>n)&1L: 0
197: (n>>n): 6 (n>>n)&1L: 0
198: (n>>n): 3 (n>>n)&1L: 1
199: (n>>n): 1 (n>>n)&1L: 1
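The traces above can be reproduced with a few lines (a quick sketch of mine):

```java
public class ShiftTrace {
    public static void main(String[] args) {
        // Reproduce the traces: the effective shift distance is n & 31,
        // so 32 is shifted by 0, 33 by 1, ..., 198 by 6, 199 by 7.
        for (int n = 32; n <= 37; n++) {
            System.out.println(n + ": (n>>n): " + (n >> n)
                    + " (n>>n)&1L: " + ((n >> n) & 1L));
        }
        for (int n = 192; n <= 199; n++) {
            System.out.println(n + ": (n>>n): " + (n >> n)
                    + " (n>>n)&1L: " + ((n >> n) & 1L));
        }
    }
}
```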
Related
What is the time complexity of this code snippet? Why, mathematically, is that?
for (int i = 0; i < n; i++) {
for (int j = i; j > 0; j = (j - 1) & i) {
System.out.println(j);
}
}
The short version:
The runtime of the code is Θ(n^(log2 3)), which is approximately Θ(n^1.585).
The derivation involves counting the number of 1 bits set in ranges of numbers.
Your connection to Pascal's triangle is not a coincidence!
Here's the route that I used to work this out. There's a really nice pattern that plays out in the bits of the numbers as you're doing the subtractions. For example, suppose that our number i is given by 10101001 in binary. Here's the sequence of values we'll see for j:
10101001
10101000
10100001
10100000
10001001
10001000
10000001
10000000
00101001
00101000
00100001
00100000
00001001
00001000
00000001
00000000
To see the pattern, focus on the columns of the number where there were 1 bits in the original number. Then you get this result:
v v v v
10101001 1111
10101000 1110
10100001 1101
10100000 1100
10001001 1011
10001000 1010
10000001 1001
10000000 1000
00101001 0111
00101000 0110
00100001 0101
00100000 0100
00001001 0011
00001000 0010
00000001 0001
00000000 0000
In other words, the sequence of values j takes on is basically counting down from the binary number 1111 all the way down to zero!
More generally, suppose that the number i has b(i) 1 bits in it. Then we're counting down from a number made of b(i) 1 bits to zero, which requires 2^b(i) steps. Therefore, the amount of work the inner loop does is 2^b(i).
That gives us the complexity of the inner loop, but to figure out the total complexity of the loop, we need to figure out how much work is done across all n iterations, not just one of them. So the question then becomes: if you count from 0 up to n, and you sum up 2^b(i), what do you get? Or, stated differently, what is
2^b(0) + 2^b(1) + 2^b(2) + ... + 2^b(n-1)
equal to?
To make this easier, let's assume that n is a perfect power of two. Say, for example, that n = 2^k. This will make things easier because then the numbers 0, 1, 2, ..., n-1 can all be written with the same number of bits. There's a really nice pattern at play here. Look at the numbers from 0 to 7 in binary and work out what 2^b(i) is for each:
000 1
001 2
010 2
011 4
100 2
101 4
110 4
111 8
Now look at the numbers from 0 to 15 in binary:
0000 1
0001 2
0010 2
0011 4
0100 2
0101 4
0110 4
0111 8
----
1000 2
1001 4
1010 4
1011 8
1100 4
1101 8
1110 8
1111 16
In writing out the numbers from 8 to 15, we're basically writing out the numbers from 0 to 7, but with a 1 prefixed. This means each of those numbers has one more 1 bit set than its counterpart in 0 to 7, so 2^b(i) is doubled for each of them. So if we know the sum of these terms from 0 to 2^k - 1, and we want to know the sum of the terms from 0 to 2^(k+1) - 1, then we basically take the sum we have, then add two more copies of it.
More formally, let's define S(k) = 2^b(0) + 2^b(1) + ... + 2^b(2^k - 1). Then we have
S(0) = 1
S(k + 1) = S(k) + 2S(k) = 3S(k)
This recurrence solves to S(k) = 3^k. In other words, the sum 2^b(0) + 2^b(1) + ... + 2^b(2^k - 1) works out to 3^k.
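The closed form S(k) = 3^k is easy to verify directly; a quick sketch of mine that compares the brute-force sum against powers of three:

```java
public class SumCheck {
    public static void main(String[] args) {
        // Verify S(k) = 2^b(0) + ... + 2^b(2^k - 1) == 3^k for small k.
        for (int k = 0; k <= 12; k++) {
            long s = 0;
            for (int i = 0; i < (1 << k); i++) {
                s += 1L << Integer.bitCount(i);   // 2^b(i)
            }
            long pow3 = 1;
            for (int t = 0; t < k; t++) pow3 *= 3;
            System.out.println("S(" + k + ") = " + s + ", 3^" + k + " = " + pow3);
        }
    }
}
```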
Of course, in general, we won't have n = 2^k. However, if we write k = log2 n, then we can approximate the number of iterations as roughly
3^(log2 n)
= n^(log2 3)
≈ n^1.585...
So we'd expect the runtime of the code to be Θ(n^(log2 3)). To see if that's the case, I wrote a program that ran the function and counted the number of times the inner loop executed. I then plotted the number of iterations of the inner loop against the function n^(log2 3). Here's what it looks like:
[plot: measured inner-loop iteration count plotted against n^(log2 3)]
As you can see, this fits pretty well!
So how does this connect to Pascal's triangle? It turns out that the number 2^b(i) has another interpretation: it's the number of odd entries in the ith row of Pascal's triangle! And that might explain why you're seeing combinations pop out of the math.
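That claim can be checked mechanically. Here is a sketch of mine (the Lucas'-theorem framing is my addition) that builds each row with the usual binomial recurrence and counts the odd entries:

```java
import java.math.BigInteger;

public class PascalOdd {
    public static void main(String[] args) {
        // Check that 2^b(i) counts the odd entries in row i of Pascal's
        // triangle (a consequence of Lucas' theorem). Row entries follow
        // the recurrence C(i, r+1) = C(i, r) * (i - r) / (r + 1).
        for (int i = 0; i < 32; i++) {
            int odd = 0;
            BigInteger c = BigInteger.ONE;
            for (int r = 0; r <= i; r++) {
                if (c.testBit(0)) odd++;   // lowest bit set => entry is odd
                c = c.multiply(BigInteger.valueOf(i - r))
                     .divide(BigInteger.valueOf(r + 1));
            }
            if (odd != (1 << Integer.bitCount(i))) {
                System.out.println("mismatch at row " + i);
            }
        }
        System.out.println("rows 0..31 all match 2^bitCount(i)");
    }
}
```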
Thanks for posting this problem - it's super interesting! Where did you find it?
Here is a Java code snippet:
int i, j, n, cnt;
int bit = 10;
int[] mp = new int[bit + 1];
n = (1 << bit);
for (i = 0; i < n; i++) {
    mp[Integer.bitCount(i)]++;
    if ((i & (i + 1)) == 0) { // i == 2^k - 1: all k low bits are set
        System.out.printf("\nfor %d\n", i);
        for (j = 0; j <= bit; j++) {
            System.out.printf("%d ", mp[j]);
        }
    }
}
Output:
for 0 // 2^0 - 1
1 0 0 0 0 0 0 0 0 0 0
for 1 // 2^1 - 1
1 1 0 0 0 0 0 0 0 0 0
for 3 // 2^2 - 1
1 2 1 0 0 0 0 0 0 0 0
for 7 // 2^3 - 1
1 3 3 1 0 0 0 0 0 0 0
for 15 // 2^4 - 1
1 4 6 4 1 0 0 0 0 0 0
for 31 // 2^5 - 1
1 5 10 10 5 1 0 0 0 0 0
for 63 // 2^6 - 1
1 6 15 20 15 6 1 0 0 0 0
for 127 // 2^7 - 1
1 7 21 35 35 21 7 1 0 0 0
for 255 // 2^8 - 1
1 8 28 56 70 56 28 8 1 0 0
for 511 // 2^9 - 1
1 9 36 84 126 126 84 36 9 1 0
for 1023 // 2^10 - 1
1 10 45 120 210 252 210 120 45 10 1
So it looks like Pascal's triangle:
0C0
1C0 1C1
2C0 2C1 2C2
3C0 3C1 3C2 3C3
4C0 4C1 4C2 4C3 4C4
5C0 5C1 5C2 5C3 5C4 5C5
6C0 6C1 6C2 6C3 6C4 6C5 6C6
7C0 7C1 7C2 7C3 7C4 7C5 7C6 7C7
8C0 8C1 8C2 8C3 8C4 8C5 8C6 8C7 8C8
9C0 9C1 9C2 9C3 9C4 9C5 9C6 9C7 9C8 9C9
10C0 10C1 10C2 10C3 10C4 10C5 10C6 10C7 10C8 10C9 10C10
In the question above, the inner loop executes exactly 2^(number of set bits) - 1 times.
So if k = number of bits, then N = 2^k, and the total complexity becomes:
(kC0*2^0 + kC1*2^1 + kC2*2^2 + ... + kCk*2^k) - N
By the binomial theorem, the sum in parentheses is (1 + 2)^k = 3^k.
If k = 10 then N = 2^k = 1024, so the complexity becomes:
(10C0*2^0 + 10C1*2^1 + 10C2*2^2 + ... + 10C10*2^10) - 1024
= (1*1 + 10*2 + 45*4 + 120*8 + 210*16 + 252*32 + 210*64 + 120*128 + 45*256 + 10*512 + 1*1024) - 1024
= 59049 - 1024
= 58025
Here is another code snippet that helps to verify the number 58025.
int i, j, n, cnt;
n = 1024;
cnt = 0;
for (i = 0; i < n; i++) {
    for (j = i; j > 0; j = (j - 1) & i) {
        cnt++;
    }
}
System.out.println(cnt);
The output of the above code is 58025.
public class PrintStrLenK {
    public static void main(String[] args) {
        int k = 2;
        char[] set = {'0', '1', '2', '3'};
        char[] str = new char[k];
        generate(k, set, str, 0);
    }

    static void generate(int k, char[] set, char[] str, int index) {
        if (index == k) {
            System.out.println(new String(str));
        } else {
            for (int i = 0; i < set.length; i++) {
                str[index] = set[i];
                generate(k, set, str, index + 1);
            }
        }
    }
}
I found this code, but the problem is that I was asked to have just one character change between consecutive strings.
Output:
00
01
02
03
10 --> 2 Char changes. Not OK.
11
12
13
20 --> 2 Char changes. Not OK.
21
22
23
30 --> 2 Char changes. Not OK.
31
32
33
It should be:
00
01
02
03
13 --> 1 Char change. OK
12
11
10
20 -- > 1 Char change. OK
21
22
23
33 -- > 1 Char change. OK
32
31
30
It has to work with different sets and k. For example:
set = {'0', '1'} and k= 3.
000 001 011 010 110 111 101 100
set = {'0', '1','2','3'} and k= 3.
000 001 002 003 013 012 011 010 020 021 022 023 033 032 031 030 130 131 132 133 123 122 121 120 110 111 112 113 103 102 101 100 200 201 202 203 213 212 211 210 220 221 222 223 233 232 231 230 330 331 332 333 323 322 321 320 310 311 312 313 303 302 301 300
I've been trying to find a solution for two days with nothing so far. Java, C++, or pseudocode for a solution is fine.
Thanks
The problem is actually like counting in base sizeof(set) on length k (assuming the set has 10 items maximum).
For instance, with a set of { 0, 1, 2 } on length 2, you count from 00 to 22, base 3.
To satisfy the "one digit change only" constraint, instead of always counting upward, count upward only until the next higher digit has to change. Then count downward, then upward again, and so on...
For instance in the example above
00 -> 02 then increase the next tenth (12), then count downward
12 -> 10 then again +10 to get 20, then go up again
20 -> 22
On length 3, keep the same reasoning: change the next higher digit, then go up or down depending on the current value of the digit.
000 -> 002, 012 -> 010, 020 -> 022
122 -> 120, 110 -> 112, 102 -> 100
200 -> 202, 212 -> 210, 220 -> 222
A recursive algorithm is one approach. The function at depth 0 takes care of the first (left) digit, i.e. the highest place, and counts up or down depending on that digit's current state: if 0, count up, otherwise count down. For each state, before incrementing, the function calls itself recursively for the next (right) digit (whose value is either 0 or the last item in the set). The maximum depth is the length k.
Keep the digit states in an array of length k, initialized to {0 ... 0}. Give the function the index into the array (starting at 0). For each iteration, if we're at max depth (i.e. i == k-1), print the array; otherwise call the function recursively with i+1.
Pseudo code
k length of number (number of digits)
N size of set (1 .. 10), values from 0 to N-1
A array of size k
A[0 .. k-1] = 0
function f ( i )
begin
inc = -1 if (A[i] > 0), 1 otherwise # Increment
end = 0 if (A[i] > 0), N-1 otherwise # Max value
j is the counter
j = A[ i ] # Init
while ( (inc<0 AND j>=end) OR (inc>0 AND j<=end) )
do
A[ i ] = j
if (i < k-1) call f ( i+1 )
otherwise print array A
j = j + inc
done
end
call f ( 0 )
This is what you should get for N = 3 and k = 4:
0000 0001 0002 0012 0011 0010 0020 0021 0022
0122 0121 0120 0110 0111 0112 0102 0101 0100
0200 0201 0202 0212 0211 0210 0220 0221 0222
1222 1221 1220 1210 1211 1212 1202 1201 1200
1100 1101 1102 1112 1111 1110 1120 1121 1122
1022 1021 1020 1010 1011 1012 1002 1001 1000
2000 2001 2002 2012 2011 2010 2020 2021 2022
2122 2121 2120 2110 2111 2112 2102 2101 2100
2200 2201 2202 2212 2211 2210 2220 2221 2222
Note that you should always get N^k numbers...
This is the C code that generated the above:
#include <stdio.h>

int a[20] = {0}; // Put here the right size instead of 20, or use #define...
int N, k;

void f(int i) {
    int inc = a[i] ? -1 : 1;
    int end = a[i] ? 0 : N - 1;
    int j;
    for (j = a[i]; (inc < 0 && j >= end) || (inc > 0 && j <= end); j += inc) {
        a[i] = j;
        if (i < k - 1) f(i + 1);
        else {
            int z;
            for (z = 0; z < k; z++) printf("%d", a[z]);
            printf("\n");
        }
    }
}
in main() initialize N and k and call
f(0);
An iterative version that does basically the same thing:
void fi() {
int z,i,inc[k];
for(i=0 ; i<k ; i++) {
a[i] = 0; // initialize our array if needed
inc[i] = 1; // all digits are in +1 mode
}
int p = k-1; // p, position: start from last digit (right)
while(p >= 0) {
if (p == k-1) {
for(z=0 ; z<k ; z++) printf("%d", a[z]);
printf("\n");
}
if ((inc[p] < 0 && a[p] > 0) || (inc[p] > 0 && a[p] < N-1)) {
a[p] += inc[p];
p = k-1;
}
else {
inc[p] = -inc[p];
p--;
}
}
}
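Since the question also asked for Java, here is a sketch of a direct Java port of the recursive version (class and method names are mine):

```java
import java.util.ArrayList;
import java.util.List;

public class ReflectedCount {
    // Generate all k-digit base-N strings so that consecutive entries
    // differ in exactly one digit, by alternating the counting direction
    // of each digit (same idea as the C versions above).
    static List<String> generate(int N, int k) {
        List<String> out = new ArrayList<>();
        f(new int[k], 0, N, k, out);
        return out;
    }

    static void f(int[] a, int i, int N, int k, List<String> out) {
        int inc = a[i] > 0 ? -1 : 1;     // count down if digit is nonzero
        int end = a[i] > 0 ? 0 : N - 1;  // ... and stop at 0 or N-1
        for (int j = a[i]; inc < 0 ? j >= end : j <= end; j += inc) {
            a[i] = j;
            if (i < k - 1) {
                f(a, i + 1, N, k, out);
            } else {
                StringBuilder sb = new StringBuilder();
                for (int d : a) sb.append(d);
                out.add(sb.toString());
            }
        }
    }

    public static void main(String[] args) {
        // Same parameters as the listing above: N = 3, k = 4.
        for (String s : generate(3, 4)) System.out.println(s);
    }
}
```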
You're just changing the iteration direction of the least significant element. If you're generating the permutations into a container, you can simply reverse every other block of size(set) permutations.
The alternative is to write your own permutation generator that takes care of this for you. For example in C++ a simple permutation generator and printer would look like this:
vector<vector<int>::const_iterator> its(k, cbegin(set));
do {
transform(cbegin(its), cend(its), ostream_iterator<int>(cout), [](const auto& i) { return *i; });
cout << endl;
for (auto it = rbegin(its); it != rend(its) && ++*it == cend(set); ++it) *it = cbegin(set);
} while (count(cbegin(its), cend(its), cbegin(set)) != k);
Live Example
The modification you would need to make would be to alternate the iteration direction of the least significant iterator each time it reached an end of the set, something like this:
vector<vector<int>::const_iterator> its(k, cbegin(set));
vector<bool> ns(k);
for(int i = k - 1; its.front() != cend(set); its[i] = next(its[i], ns[i] ? -1 : 1), i = k - 1) {
transform(cbegin(its), cend(its), ostream_iterator<int>(cout), [](const auto& i) { return *i; });
cout << endl;
while (i > 0 && (!ns[i] && its[i] == prev(cend(set)) || ns[i] && its[i] == cbegin(set))) {
ns[i] = !ns[i];
--i;
}
}
Live Example
public class UnaryOperator {
    public static void main(String[] args) {
        byte a = -5;
        System.out.println(~a); // prints 4
    }
}
When I do it manually, I get the answer as 6.
Here is how I did it:
128 64 32 16 8 4 2 1
0 0 0 0 0 1 0 1
As it is a negation I inverted it to the following:
128 64 32 16 8 4 2 1
0 0 0 0 0 1 0 1
sign -1 1 1 1 1 0 1 0
-----------------------------
0 0 0 0 1 0 1
add one--> 0 0 0 0 0 1 1
------------------------------
0 0 0 0 1 1 0 = 6
------------------------------
I know there's something wrong with what I am doing but I am not able to figure it out.
5 is 00000101
-5 is 11111010+00000001 = 11111011
~(-5) is 00000100
so you get 4.
You're starting out with -5, which is in two's complement. Thus:
-128 64 32 16 8 4 2 1
1 1 1 1 1 0 1 1 (= -5)
flip: 0 0 0 0 0 1 0 0 (= +4)
I haven't done much bitwise stuff, but after reading Wikipedia for a few seconds it seems that NOT -5 = 4; Wikipedia gives the identity NOT x = -x - 1. So the program is correct.
Edit: For unsigned integers, you would use NOT x = y - x, where y is the maximum number that the integer type can hold.
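The identity NOT x = -x - 1 is easy to confirm in Java; a minimal sketch (my own, assuming 32-bit int two's complement, which Java guarantees):

```java
public class NotDemo {
    public static void main(String[] args) {
        // In two's complement, ~x == -x - 1 for every int.
        int a = -5;
        System.out.println(~a);      // 4
        System.out.println(-a - 1);  // 4
        // The bit patterns make this visible:
        System.out.println(Integer.toBinaryString(a));   // 11111111111111111111111111111011
        System.out.println(Integer.toBinaryString(~a));  // 100
    }
}
```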
I am playing with shifting and I'm puzzled by one case:
int maxint = Integer.MAX_VALUE;
LOG.debug("maxint << 31 ---> {} ({})", maxint << 31 , Integer.toBinaryString(maxint << 31 ));
LOG.debug("maxint << 32 ---> {} ({})", maxint << 32 , Integer.toBinaryString(maxint << 32 ));
LOG.debug("maxint << 33 ---> {} ({})", maxint << 33 , Integer.toBinaryString(maxint << 33 ));
and it prints:
maxint << 31 ---> -2147483648 (10000000000000000000000000000000)
maxint << 32 ---> 2147483647 (1111111111111111111111111111111)
maxint << 33 ---> -2 (11111111111111111111111111111110)
So the question is: if shifting by 31 leaves a '1' at the MSB, shouldn't shifting by 32 move it out and return 0?
Going further, I do the same thing starting from the shift-by-31 result (which is Integer.MIN_VALUE) and shift by 1:
int minInt = -2147483648;
LOG.debug("minInt << 1 ---> {} ({})", minInt << 1 , Integer.toBinaryString(minInt << 1 ));
LOG.debug("minInt << 2 ---> {} ({})", minInt << 2 , Integer.toBinaryString(minInt << 2 ));
and it prints:
minInt << 1 ---> 0 (0)
minInt << 2 ---> 0 (0)
which is what I expect.
http://docs.oracle.com/javase/specs/jls/se8/html/jls-15.html#jls-15.19
If the promoted type of the left-hand operand is int, only the five
lowest-order bits of the right-hand operand are used as the shift
distance. It is as if the right-hand operand were subjected to a
bitwise logical AND operator & (§15.22.1) with the mask value 0x1f
(0b11111). The shift distance actually used is therefore always in the
range 0 to 31, inclusive.
and similarly six bits for long. This behavior is also allowed and commonly implemented in C and C++, though not required as in Java.
Also duplicate of Shift operator in Java bizarre program output which my first search missed.
Shifts work mod 32: a << b == a << (b % 32).
PS: for long it's mod 64.
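A small sketch of mine illustrating the masking for both int and long:

```java
public class ShiftMod {
    public static void main(String[] args) {
        int maxint = Integer.MAX_VALUE;
        // For int the shift distance is masked with 0x1f, so << 32 is
        // really << 0 and << 33 is really << 1 -- bits are never shifted
        // "all the way out" by a large distance.
        System.out.println(maxint << 32);  // 2147483647 (unchanged)
        System.out.println(maxint << 33);  // -2 (same as maxint << 1)
        // For long the mask is 0x3f, i.e. the distance is taken mod 64.
        System.out.println(1L << 64);      // 1 (same as 1L << 0)
    }
}
```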
As far as I'm currently aware, the following is correct:
char: 8-bit value, e.g. 0 0 0 0 0 0 0 0
short: 16-bit value, e.g. 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
int: 32-bit value, e.g. 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
I know the above sounds idiot-proof, but I want to describe every step.
So I have the values 1 and 29, which both fit in 8 bits if I'm correct.
1:  0 0 0 0 0 0 0 1
29: 0 0 0 1 1 1 0 1
Now, as these fit in 8 bits, I can do the following:
char ff = (char) 1;
char off = (char) 29;
So that's me storing my two values.
I now want to concatenate these values so it looks like 1 followed by 29; in binary that would be: 0 0 0 0 0 0 0 1 0 0 0 1 1 1 0 1
I'm currently doing:
short concat = (short) (ff | off);
But I get the result 29 when it should be 285, as the binary would be
32768 16384 8192 4096 2048 1024 512 256 128 64 32 16 8 4 2 1
    0     0    0    0    0    0   0   1   0  0  0  1 1 1 0 1
Where am I going wrong? :(
-- UPDATE: CODE SOLUTION --
byte of = (byte) 29;
byte fm1 = (byte) 1;
char ph1 = (char) (fm1 << 8 | of);
or
short ph2 = (short) (fm1 << 8 | of);
Which is better, as they're both 16-bit?
System.out.println((int) ph1);
You need to shift the bits left by 8:
short concat = (short) (ff << 8 | off);
The pipe is the bitwise OR, so you just end up putting the bits in the same places, setting a 1 wherever either the first or the second char has a 1.
char is two bytes
byte is one byte
byte ff = (byte) 1;
byte off = (byte) 29;
short concat = (short) ((ff << 8) | off);
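One caveat worth adding (my own note, not from the answer above): byte is signed in Java, so if the low byte can be 0x80 or more, you need to mask it with 0xFF before combining, or the sign extension will clobber the high half. A sketch:

```java
public class ConcatBytes {
    public static void main(String[] args) {
        byte hi = (byte) 1;
        byte lo = (byte) 29;
        // Bytes are sign-extended when promoted to int, so mask with 0xFF
        // before combining; the | result is an int, hence the cast.
        char concat = (char) (((hi & 0xFF) << 8) | (lo & 0xFF));
        System.out.println((int) concat);             // 285

        // Without the mask, a low byte >= 0x80 sign-extends and clobbers
        // the high half:
        byte bad = (byte) 0x99;                       // -103 as a signed byte
        System.out.println((1 << 8) | bad);           // -103 (wrong)
        System.out.println((1 << 8) | (bad & 0xFF));  // 409  (right)
    }
}
```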