I have a question about this problem, and any help would be great!
Write a program that takes one integer N as an
argument and prints out its truncated binary logarithm [log2 N]. Hint: [log2 N] = l is the largest integer l such that
2^l <= N.
I got this much down:
int N = Integer.parseInt(args[0]);
double l = Math.log(N) / Math.log(2);
double a = Math.pow(2, l);
But I can't figure out how to truncate l while keeping 2^l <= N
Thanks
This is what I have now:
int N = Integer.parseInt(args[0]);
int i = 0; // loop control counter
int v = 1; // current power of two
while (Math.pow(2, i) <= N) {
    i = i + 1;
    v = 2 * v;
}
System.out.println(Integer.highestOneBit(N));
This prints out the integer that is equal to 2^i, which is less than N. My test still comes out false, and I think that is because the question asks me to print the largest i rather than N. So when I do
Integer.highestOneBit(i)
the correct i does not print out. For example, if N = 38 then the highest i should be 5, but instead it prints out 4.
Then I tried this:
int N = Integer.parseInt(args[0]);
int i; // loop control counter
for (i = 0; Math.pow(2, i) == N; i++) {
}
System.out.println(Integer.highestOneBit(i));
Here, if I make N = 2, i should print out as 1, but instead it prints out 0.
I've tried a bunch of things on top of that, but can't figure out what I am doing wrong. Help would be greatly appreciated. Thanks
I believe the answer you're looking for here is based on the underlying notion of how a number is actually stored in a computer, and how that can be used to your advantage in a problem such as this.
Numbers in a computer are stored in binary - a series of ones and zeros where each column represents a power of 2:
(Image of binary place values omitted; see http://www.mathincomputers.com/binary.html for more info on binary.)
The zeroth power of 2 is over on the right. So 01001, for example, represents the decimal value 2^0 + 2^3 = 9.
This storage format, interestingly, gives us some additional information about the number. We can see that 2^3 is the highest power of 2 that 9 is made up of. Let's imagine it's the only power of two it contains, by chopping off all the other 1's except the highest. This is a truncation, and results in this:
01000
You'll now notice this value represents 8, or 2^3. Taking it down to basics, let's now look at what log base 2 really represents: it's the number you raise 2 to the power of to get the thing you're finding the log of. log2(8) is 3. Can you see the pattern emerging here?
The position of the highest bit can be used as an approximation to its log base 2 value.
2^3 is the 3rd bit over in our example, so a truncated approximation to log base 2 of 9 is 3.
So the truncated binary logarithm of 9 is 3. 2^3 is less than 9; this is where the "less than" comes from, and the algorithm to find its value simply involves finding the position of the highest bit that makes up the number.
Some more examples:
12 = 1100. Position of the highest bit = 3 (starting from zero on the right). Therefore the truncated binary logarithm of 12 = 3. 2^3 is <= 12.
38 = 100110. Position of the highest bit = 5. Therefore the truncated binary logarithm of 38 = 5. 2^5 is <= 38.
This level of pushing bits around is known as bitwise operations in Java.
Integer.highestOneBit(n) returns essentially the truncated value. So if n was 9 (1001), highestOneBit(9) returns 8 (1000), which may be of use.
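If you do want to lean on that library call, here's a minimal sketch (mine, not part of the algorithm below) that turns the value highestOneBit returns into the exponent itself, assuming N >= 1:
int N = 38;
// highestOneBit(38) is 32 (100000); the number of trailing zeros of that
// power of two is its exponent, i.e. the truncated binary log.
int truncatedLog = Integer.numberOfTrailingZeros(Integer.highestOneBit(N));
// equivalently: 31 - Integer.numberOfLeadingZeros(N)
System.out.println(truncatedLog); // prints 5, since 2^5 = 32 <= 38 < 2^6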
A simple way of finding the position of that highest bit of a number involves doing a bitshift until the value is zero. Something a little like this:
// Input number - 1001:
int n = 9;
int position = -1;
// Cache the input number - the loop destroys it.
int originalN = n;
while (n != 0) {
    position++;   // Also position = position + 1;
    n = n >> 1;   // Shift the bits over one spot (overwriting n).
    // 1001 becomes 0100, then 0010, then 0001, then 0000 on each iteration.
    // Hopefully you can then see that n is zero when we've
    // pushed all the bits off.
}
// Position started at -1, so it is now the index of the highest set bit
// (counting from zero on the right).
// In your case, this is also the value of your truncated binary log.
System.out.println("Binary log of " + originalN + " is " + position);
Link of the question: https://www.codechef.com/problems/MATPH
So, I've been stuck on this question for hours and I don't know where I'm wrong.
I have used the Sieve of Eratosthenes for finding primes and saved all the prime numbers in a hash map. The online judge is giving me a wrong answer on the test cases.
static void dri(int n) {
    long large = 0;
    int r = 0, x, count = 0, count1 = 0;
    // Only primes up to sqrt(n) need to be checked: for any number greater
    // than sqrt(n), its square is already greater than n, so it can never
    // appear with an exponent of 2 or more.
    x = (int) Math.sqrt(n);
    while (r < x) {
        r = map.get(++count); // fetch the next prime from the map
        // Largest exponent such that r^exp <= n.
        // For example, with n = 64 and r = 3 this gives exp = 3.
        int exp = (int) (Math.log(n) / Math.log(r));
        if (exp != 1) { // this is just to work around an error, don't mind this line
            if (map.containsValue(exp) == false) {
                // exp is not prime, so walk down to the nearest smaller prime
                count1 = exp;
                while (!map.containsValue(--count1)) ;
                exp = count1;
            }
            int temp = (int) Math.pow(r, exp);
            if (large < temp)
                large = temp;
        }
    }
    System.out.println(large);
}
For each testcase, output in a single line containing the largest
beautiful number ≤ N. Print −1 if no such number exists.
I believe that 4 is the smallest beautiful number, since 2 is the smallest prime number and 2^2 equals 4. N is only required to be ≥ 0. So dri(0), dri(1), dri(2) and dri(3) should all print −1. I tried; they don't. I believe this is the reason for your failure on CodeChef.
I am leaving it to yourself to find out how the mentioned calls to your method behave and what to do about it.
As an aside, what’s the point in keeping your prime numbers in a map? Wouldn’t a list or a sorted set be more suitable?
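For what it's worth, here is a minimal sketch of those two suggestions combined (an early return for N < 4, and an ascending list of primes instead of the map). The parameters primes and isPrime are assumptions of the sketch, not names from your code, and the sieve is assumed to cover exponents up to log2(N):
static void dri(int n, java.util.List<Integer> primes, boolean[] isPrime) {
    if (n < 4) {                      // 4 = 2^2 is the smallest beautiful number
        System.out.println(-1);
        return;
    }
    long large = -1;
    for (int r : primes) {
        if ((long) r * r > n) break;  // primes above sqrt(n) can never carry a prime exponent
        int exp = (int) (Math.log(n) / Math.log(r));
        while (Math.pow(r, exp) > n) exp--;        // guard against floating-point overshoot
        while (exp >= 2 && !isPrime[exp]) exp--;   // step down to the nearest prime exponent
        if (exp >= 2) {
            long temp = (long) Math.pow(r, exp);
            if (temp > large) large = temp;
        }
    }
    System.out.println(large);
}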
Another description of the problem: Compute a matrix which satisfies certain constraints
Given a function whose only argument is a 4x4 matrix (int[4][4] matrix), determine the maximal possible output (return value) of that function.
The 4x4 matrix must satisfy the following constraints:
All entries are integers between -10 and 10 (inclusively).
It must be symmetric, entry(x,y) = entry(y,x).
Diagonal entries must be positive, entry(x,x) > 0.
The sum of all 16 entries must be 0.
The function must only sum up values of the matrix, nothing fancy.
My question:
Given such a function which sums up certain values of a matrix (matrix satisfies above constraints), how do I find the maximal possible output/return value of that function?
For example:
/* The function sums up certain values of the matrix,
a value can be summed up multiple or 0 times. */
// for this example I arbitrarily chose values at (0,0), (1,2), (0,3), (1,1).
int exampleFunction(int[][] matrix) {
    int a = matrix[0][0];
    int b = matrix[1][2];
    int c = matrix[0][3];
    int d = matrix[1][1];
    return a + b + c + d;
}
/* The result (max output of the above function) is 40,
it can be achieved by the following matrix: */
0. 1. 2. 3.
0. 10 -10 -10 10
1. -10 10 10 -10
2. -10 10 1 -1
3. 10 -10 -1 1
// Another example:
// for this example I arbitrarily chose values at (0,3), (0,1), (0,1), (0,3), ...
int exampleFunction2(int[][] matrix) {
    int a = matrix[0][3] + matrix[0][1] + matrix[0][1];
    int b = matrix[0][3] + matrix[0][3] + matrix[0][2];
    int c = matrix[1][2] + matrix[2][1] + matrix[3][1];
    int d = matrix[1][3] + matrix[2][3] + matrix[3][2];
    return a + b + c + d;
}
/* The result (max output of the above function) is -4, it can be achieved by
the following matrix: */
0. 1. 2. 3.
0. 1 10 10 -10
1. 10 1 -1 -10
2. 10 -1 1 -1
3. -10 -10 -1 1
I don't know where to start. Currently I'm trying to estimate the number of 4x4 matrices which satisfy the constraints, if the number is small enough the problem could be solved by brute force.
Is there a more general approach?
Can the solution to this problem be generalized such that it can be easily adapted to arbitrary functions on the given matrix and arbitrary constraints for the matrix?
You can try to solve this using linear programming techniques.
The idea is to express the problem as some inequalities, some equalities, and a linear objective function and then call a library to optimize the result.
Python code:
import scipy.optimize as opt

c = [0]*16

def use(y, x):
    c[y*4+x] -= 1

if 0:
    use(0, 0)
    use(1, 2)
    use(0, 3)
    use(1, 1)
else:
    use(0, 3)
    use(0, 1)
    use(0, 1)
    use(0, 3)
    use(0, 3)
    use(0, 2)
    use(1, 2)
    use(2, 1)
    use(3, 1)
    use(1, 3)
    use(2, 3)
    use(3, 2)

bounds = [[-10, 10] for i in range(4*4)]
for i in range(4):
    bounds[i*4+i] = [1, 10]

A_eq = [[1] * 16]
b_eq = [0]
for x in range(4):
    for y in range(x+1, 4):
        D = [0]*16
        D[x*4+y] = 1
        D[y*4+x] = -1
        A_eq.append(D)
        b_eq.append(0)

r = opt.linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
for y in range(4):
    print(r.x[4*y:4*y+4])
print(-r.fun)
This prints:
[ 1. 10. -10. 10.]
[ 10. 1. 8. -10.]
[-10. 8. 1. -10.]
[ 10. -10. -10. 1.]
16.0
saying that the best value for your second case is 16, with the given matrix.
Strictly speaking, you want integer solutions. Linear programming solves this type of problem when the inputs can be any real values, while integer programming solves it when the inputs must be integers.
In your case you may well find that the linear programming method already provides integer solutions (it does for the two given examples). When this happens, it is certain that this is the optimal answer.
However, if the variables are not integral you may need to find an integer programming library instead.
Sort the elements of the matrix in descending order and store them in an array. Iterate through the elements of the array one by one
and add each to a variable. Stop iterating at the point where adding an element would decrease the variable's value. The value stored in the variable is the maximum.
static int maxFunction(int[][] matrix) {
    // Flatten the matrix and sort the entries in descending order.
    Integer[] n = new Integer[16];
    int k = 0;
    for (int[] row : matrix)
        for (int value : row)
            n[k++] = value;
    java.util.Arrays.sort(n, java.util.Collections.reverseOrder());
    int max = n[0];
    for (int i = 1; i < n.length; i++) {
        int temp = max;
        max = max + n[i];
        if (max < temp) { // adding n[i] made the sum smaller, so undo and stop
            max = temp;
            break;
        }
    }
    return max;
}
You need to first consider what matrices will satisfy the rules. The 4 numbers on the diagonal must be positive, with the minimal sum of the diagonal being 4 (four 1 values), and the maximum being 40 (four 10 values).
The total sum of all 16 items is 0 - or to put it another way, sum(diagonal) + sum(rest-of-matrix) = 0.
Since you know that sum(diagonal) is positive, that means that sum(rest-of-matrix) must be negative and equal in magnitude - basically sum(diagonal)*(-1).
We also know that the rest of the matrix is symmetrical - so you're guaranteed that sum(rest-of-matrix) is an even number. That means the diagonal sum must also be even, and the sum of the top half of the matrix (above the diagonal) is exactly half of sum(diagonal)*(-1).
For any given function, you take a handful of cells and sum them. Now you can consider the functions as fitting into categories. For functions that take all 4 cells from the diagonal only, the maximum will be 40. If the function takes all 12 cells which are not the diagonal, the maximum is -4 (negative minimal diagonal).
Other categories of functions that have an easy answer:
1) one from the diagonal and an entire half of the matrix above/below the diagonal - the max is 3. The diagonal cell will be 10, the rest will be 1, 1, 2 (minimal to get to an even number) and the half-matrix will sum at -7.
2) two cells of the diagonal and half a matrix - the max is 9. the two diagonal cells are maximised to two tens, the remaining cells are 1,1 - and so the half matrix sums at -11.
3) three cells from the diagonal and half a matrix - the max is 14.
4) the entire diagonal and half the matrix - the max is 20.
You can continue with the categories of selecting functions (using some from the diagonal and some from the rest), and easily calculating the maximum for each category of selecting function. I believe they can all be mapped.
Then the only step is to put your new selecting function in the correct category and you know the maximum.
I was going through some coding exercises, and had some trouble with this question:
From 5 dice (6-sided) rolls, generate a random number in the range [1 - 100].
I implemented the following method, but the returned number is not uniformly distributed (I called the function 1,000,000 times and several numbers in 1 - 100 never show up).
public static int generator() {
    Random rand = new Random();
    int dices = 0;
    for (int i = 0; i < 5; i++) {
        dices += rand.nextInt(6) + 1;
    }
    int originalStart = 5;
    int originalEnd = 30;
    int newStart = 1;
    int newEnd = 100;
    double scale = (double) (newEnd - newStart) / (originalEnd - originalStart);
    return (int) (newStart + ((dices - originalStart) * scale));
}
OK, so 5 dice rolls, each with 6 options. If they are treated as un-ordered you have a range of 5-30 as mentioned above - never sufficient for 1-100.
You need to assume an order. This gives you a scale of 1,1,1,1,1 - 6,6,6,6,6 (base 6); mapping each roll value 1..6 to a digit 0..5, you have a 5-digit base-6 number. As we all know, 6^5 = 7776 unique possibilities. ;)
For this I am going to give you a biased random solution.
// diceRolls holds the five ordered rolls, each in 1..6
static int biasedGenerator(int[] diceRolls) {
    int total = 0;
    for (int roll : diceRolls) {
        total = total * 6 + roll - 1; // treat the rolls as digits of a base-6 number
    }
    return total % 100 + 1;
}
thanks to JosEdu for clarifying bracket requirement
Also if you wanted to un-bias this, you could divide range by the maxval given in my description above, and subsequently multiply by your total (then add offset), but you would still need to determine what rounding rules you used.
Rolling a 6 sided die 5 times results in 6^5 = 7776 possible sequences, all equally probable. Ideally you'd want to partition those sequences into 100 groups of equal size and you'd have your [1 - 100] rng, but since 7776 isn't evenly divisible by 100 this isn't possible. The best you can do to minimize the bias is 76 groups mapped to by 78 sequences each and 24 groups mapped to by 77 sequences each. Encode the (ordered) dice rolls as a base 6 number n, and return 1 + (n % 100).
Not only is there no way to remove the bias with 5 dice rolls, there is no number of dice rolls that will remove the bias entirely. There is no value of k for which 6^k is evenly divisible by 100 (consider the prime factorizations). That doesn't mean there's no way to remove the bias, it just means you can't remove the bias using a procedure that is guaranteed to terminate after any specific number of dice rolls. But you could for example do 3 dice rolls producing 6^3 = 216 sequences encoded as the base 6 number n, and return 1 + (n % 100) if n < 200. The catch is that if n >= 200 you have to repeat the procedure, and keep repeating until you get n < 200. That way there's no bias but there's also no limit to how long you might be stuck in the loop. But since the probability of having to repeat is only 16/216 each time, from a practical standpoint it's not really much of a problem.
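For illustration, a minimal sketch of that rejection scheme (the names rollDie and uniform1To100 are made up for the example):
import java.util.Random;

public class DiceRng {
    private static final Random rand = new Random();

    private static int rollDie() {
        return rand.nextInt(6) + 1;           // a single roll in 1..6
    }

    public static int uniform1To100() {
        while (true) {
            int n = 0;
            for (int i = 0; i < 3; i++) {
                n = n * 6 + (rollDie() - 1);  // three rolls as base-6 digits: n in [0, 215]
            }
            if (n < 200) {                    // accept 200 of the 216 sequences
                return 1 + n % 100;           // each result is hit by exactly 2 sequences
            }
            // otherwise reject (probability 16/216) and roll again
        }
    }
}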
The problem is there aren't enough random values in 5-30 to map one-to-one to the 1-100 interval. This means certain values are destined to never show up; the number of these "lost" values depends on the size ratio of the two intervals.
You can leverage the power of your dice in a way more efficient way, however. Here's how I'd do it:
Approach 1
Use the result of the first dice to choose one subinterval from the
6 equal subintervals with size 16.5 (99/6).
Use the result of the second dice to choose one subinterval from the 6 equal sub-subintervals of the subinterval you chose in step one.
Use... I guess you know what follows next.
Approach 2
Construct your random number using digits in a base-6 system. I.E. The first dice will be the first digit of the base-6 number, the second dice - the second digit, etc.
Then convert to base 10 and divide by (7776/99). You should have your random number. You could in fact use only 3 dice; the other two are just redundant.
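A small sketch of Approach 2 in Java, assuming rolls holds the five ordered values (each 1..6). As noted above it is still slightly biased, and this particular scaling never produces 100:
static int approach2(int[] rolls) {
    int n = 0;
    for (int roll : rolls) {
        n = n * 6 + (roll - 1);                 // base-6 number in [0, 7775]
    }
    return (int) (n / (7776.0 / 99.0)) + 1;     // scaled down into [1, 99]
}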
I'd like to round manually, without the round() method.
So I can tell my program: here is my number, and this is the position at which I want you to round.
Let me give you some examples:
Input number: 144
Input rounding: 2
Output rounded number: 140
Input number: 123456
Input rounding: 3
Output rounded number: 123500
And as a little add-on, maybe rounding after the decimal point:
Input number: 123.456
Input rounding: -1
Output rounded number: 123.460
I don't know how to start programming that...
Does anyone have a clue how I can get started with this problem?
Thanks for helping me :)
I'd like to get better at programming, so I don't want to use round() but write my own, so I can understand it better :)
A simple way to do it is:
Multiply the number by a power of ten
Round it by any desired method
Divide the result by the same power of ten from step 1
Let me show you an example:
You want to round the number 1234.567 to two decimal positions (the desired result is 1234.57).
x = 1234.567;
p = 2;
x = x * pow(10, p); // x = 123456.7
x = floor(x + 0.5); // x = floor(123456.7 + 0.5) = floor(123457.2) = 123457
x = x / pow(10,p); // x = 1234.57
return x;
Of course you can compact all these steps in one. I made it step-by-step to show you how it works. In a compact java form it would be something like:
public double roundItTheHardWay(double x, int p) {
    return Math.floor(x * Math.pow(10, p) + 0.5) / Math.pow(10, p);
}
As for the integer positions, you can easily check that this also works (with p < 0).
Hope this helps
If you need some advice on how to start:
step by step, write down the calculations you need to do to get from (144, 2) --> 140
then replace your math with Java commands; that should be easy, but if you have a problem, just look here and here
public static int round(int input, int places) {
    int factor = (int) java.lang.Math.pow(10, places);
    return (input / factor) * factor;
}
Basically, what this does is divide the input by your factor, then multiply again. When dividing integers in languages like Java, the remainder of the division is dropped from the result.
edit: the code was faulty, fixed it. Also, the java.lang.Math.pow is so that you get 10 to the n-th power, where n is the value of places. In the OP's example, the number of places to consider is upped by one.
Re-edit: as pointed out in the comments, the above will give you the floor, that is, the result of rounding down. If you don't want to always round down, you must also keep the modulus in another variable. Like this:
int mod = input % factor;
If you want to always get the ceiling, that is, rounding up, check whether mod is zero. If it is, leave it at that. Otherwise, add factor to the result.
int ceil = (input / factor) * factor + (mod == 0 ? 0 : factor);
If you want to round to nearest, then get the floor if mod is smaller than factor / 2, or the ceiling otherwise.
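Putting those pieces together, a small sketch (the method names are illustrative, and non-negative input is assumed):
static int roundDown(int input, int places) {
    int factor = (int) Math.pow(10, places);
    return (input / factor) * factor;                            // floor
}

static int roundUp(int input, int places) {
    int factor = (int) Math.pow(10, places);
    int mod = input % factor;
    return (input / factor) * factor + (mod == 0 ? 0 : factor);  // ceiling
}

static int roundNearest(int input, int places) {
    int factor = (int) Math.pow(10, places);
    int mod = input % factor;
    return (input / factor) * factor + (mod < factor / 2 ? 0 : factor);
}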
Divide (for a positive rounding value) or multiply (for a negative one) by 10 to the power of (input rounding − 1); for example 144 / 10^(2 − 1) = 14 with remainder 4. Look at that remainder (4) and determine whether it is greater than or equal to 5 (here it is less). Depending on that answer, either drop the remainder (round down) or drop it and add 10 after multiplying back (round up). Multiply (or divide) back by the same power of ten; here 14 × 10 = 140. This should give you your value.
If this is for homework, the purpose is to teach you to think for yourself. I may have given you the answer, but you still need to write the code by yourself.
Next time, you should write your own code and ask what is wrong.
For integers, one way would be to use a combination of the mod operator, which is the percent symbol %, and the divide operator. In your first example, you would compute 144 % 10, resulting in 4. And compute 144 / 10, which gives 14 (as an integer). You can compare the result of the mod operation to half of the denominator, to find out if you should round the 14 up to 15 or not (in this case not), and then multiply back by the denominator to get your answer.
In pseudo code, assuming n is the number to round, and p is the power of 10 representing the position of the significant digits:
denom = power(10, p)
remainder = n % denom
dividend = n / denom
if (remainder < denom/2)
return dividend * denom
else
return (dividend + 1) * denom
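A direct Java rendering of that pseudocode could look like this (a sketch; the method name is made up and n, p are assumed non-negative). With places counted this way, roundToPosition(144, 1) gives 140 and roundToPosition(123456, 2) gives 123500:
static long roundToPosition(long n, int p) {
    long denom = (long) Math.pow(10, p);  // 10^p, the position of the significant digits
    long remainder = n % denom;
    long dividend = n / denom;
    if (remainder < denom / 2)
        return dividend * denom;          // round down
    else
        return (dividend + 1) * denom;    // round up
}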
This question directly follows after reading through Bits counting algorithm (Brian Kernighan) in an integer time complexity . The Java code in question is
int count_set_bits(int n) {
    int count = 0;
    while (n != 0) {
        n &= (n - 1);
        count++;
    }
    return count;
}
I want to understand what n &= (n-1) is achieving here? I have seen a similar kind of construct in another nifty algorithm for detecting whether a number is a power of 2, like:
if ((n & (n - 1)) == 0) {
    System.out.println("The number is a power of 2");
}
Stepping through the code in a debugger helped me.
If you start with
n = 1010101 & n-1=1010100 => 1010100
n = 1010100 & n-1=1010011 => 1010000
n = 1010000 & n-1=1001111 => 1000000
n = 1000000 & n-1=0111111 => 0000000
So this iterates 4 times. Each iteration updates the value in such a way that the least significant bit that is set to 1 disappears.
Decrementing by one flips every bit up to and including the lowest set bit. E.g. 1000....0000 - 1 = 0111....1111, no matter how many bits it has to flip, and it stops there, leaving any higher set bits untouched. When you AND this with n, the lowest set bit - and only the lowest set bit - becomes 0.
Subtracting 1 from a number toggles all the bits (from right to left) up to and including the rightmost set bit.
So if we subtract 1 from a number and do a bitwise AND with the original (n & (n-1)), we unset the rightmost set bit. In this way we can unset the 1s one by one, from right to left, in a loop.
The number of times the loop iterates is equal to the number of set
bits.
Source : Brian Kernighan's Algorithm
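To tie the two constructs together, here is a small self-contained sketch (class and method names are just for illustration) that uses n &= (n - 1) to count set bits and (n & (n - 1)) == 0 to test for a power of two:
public class BitTricks {

    // Brian Kernighan's trick: each iteration clears the lowest set bit.
    static int countSetBits(int n) {
        int count = 0;
        while (n != 0) {
            n &= (n - 1);
            count++;
        }
        return count;
    }

    // A positive number is a power of two exactly when it has a single set bit.
    static boolean isPowerOfTwo(int n) {
        return n > 0 && (n & (n - 1)) == 0;
    }

    public static void main(String[] args) {
        System.out.println(countSetBits(0b1010101)); // 4, as in the trace in the earlier answer
        System.out.println(isPowerOfTwo(64));        // true
        System.out.println(isPowerOfTwo(38));        // false
    }
}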