Here's the problem: https://projecteuler.net/problem=15
I've come up with a pattern which I thought would work for this, and I've looked at what other people have done and they've done the same thing, such as here: http://code.jasonbhill.com/python/project-euler-problem-15/. But I always get a different answer. Here's my code.
import java.util.*;

public class Problem15 {
    public static void main(String[] args) {
        ArrayList<Long> list = new ArrayList<Long>();
        list.add((long) 1);
        int size;
        for (int x = 1; x < 20; x++) {
            size = list.size();
            for (int y = 1; y < size; y++) {
                long sum = list.get(y - 1) + list.get(y);
                list.set(y, sum);
            }
            list.add(list.get(size - 1) * 2);
            System.out.println(list);
        }
    }
}
edit:
In response to Edward, I think my method is already what you described before your edit: this isn't about brute force, I'm just summing the possible ways from each point in the grid. However, I don't need a 2D array to do this because I'm only looking at possible moves along one side. Here's something I drew up to hopefully explain my process.
So for a 1x1: like you said, once you reach the limit of one direction, you can only travel to the limit of the other, so there are 2 ways. This isn't particularly helpful for a 1x1 but it is for larger ones. For a 2x2, you know that the top corner, being the limit of right, only has 1 possible path from that point. The same logic applies to the bottom corner. And, because you have a square which you have already solved, a 1x1, you know that the middle point has 2 paths from there. Now, if you look at the sides, you see that the point which, for instance, has 2 beneath it and 1 to the right has the sum of the number of paths in those adjacent points, so that point must have 3 paths. The same goes for the other side, giving the top-left corner the sum of 3 and 3, or 2 times 3.
Now if you look at my code, that's what it's trying to do. The element with index 0 is always 1, and for the rest of the array, it adds together the previous term and itself and replaces the current term. Lastly, to find the total number of paths, it just doubles the last number. So if the program were to try and solve for a 4x4, the array would currently look like {1, 4, 10, 20}. So the program would change it to {1, 5, 10, 20}, then {1, 5, 15, 20}, then {1, 5, 15, 35}, and finally, adds the total number of paths, {1, 5, 15, 35, 70}. I think this is what you were trying to explain to me in your answer however my answer always comes out incorrect.
Realize that it's more about mathematical complexity than brute force searching.
You have a two-dimensional array of points, where you can choose to travel away from the origin only in the x or the y direction. As such, you can represent your travel like so:
(0, 0), (1, 0), (1, 1), (2, 1), (2, 2)
Some things become immediately obvious. The first is that any path through the mess is going to require x + y steps, travelling through x + y + 1 locations. It is a feature of a Manhattan-distance-style path.
The second is that at any one point until you hit the maximum x or y, you can select either of the two options (x or y); but, as soon as one or the other is at its limit, the only option left is to choose the non-maximum value repeatedly until it also becomes a maximum.
With this you might have enough of a hint to solve the math problem. Then you won't even need to search through the different paths to get an algorithm that can solve the problem.
--- edited to give a bit more of a hint ---
Each two dimensional array of paths can be broken down into smaller two dimensional arrays of paths. So the solution to f(3, 5) where the function f yields the number of paths is equal to f(2, 5) + f(3, 4). Note that f(0, 5) directly equals 1, as does f(3, 0) because you no longer have "choices" when the paths are forced to be linear.
Once you model the function, you don't even need the array to walk the paths....
f(1, 1) = f(0, 1) + f(1, 0)
f(0, 1) = 1
f(1, 0) = 1
f(1, 1) = 1 + 1
f(1, 1) = 2
and for a set of 3 x 3 vertices (like the example cited has)
f(2, 2) = f(1, 2) + f(2, 1)
f(1, 2) = f(0, 2) + f(1, 1)
(from before)
f(1, 1) = 2
f(0, 2) = 1
f(1, 2) = 2 + 1 = 3
likewise (because it's the mirror image)
f(2, 1) = 1 + 2 = 3
so
f(2, 2) = 3 + 3 = 6
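To make the recurrence concrete, here is a minimal sketch (class and method names are mine, not from the answer above) that evaluates f directly; the memo table is just an optimization so the same cell is never recomputed:

public class PathCount {
    // f(x, y) = f(x-1, y) + f(x, y-1), with f(0, y) = f(x, 0) = 1
    static long[][] memo = new long[21][21];

    static long f(int x, int y) {
        if (x == 0 || y == 0) return 1;      // only one straight path left
        if (memo[x][y] != 0) return memo[x][y];
        return memo[x][y] = f(x - 1, y) + f(x, y - 1);
    }

    public static void main(String[] args) {
        System.out.println(f(2, 2));   // 6, matching the worked example
        System.out.println(f(20, 20)); // answer for the 20x20 grid
    }
}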
--- last edit (I hope!) ---
Ok, so now you may get the idea that you have really two choices (go down) or (go right). Consider a bag containing four "commands", 2 of "go down" and 2 of "go right". How many different ways can you select the commands from the bag?
Such a "selection" is a permutation, but since we are selecting all of them, it is a special type of permutation called an "order" or "ordering".
The number of binomial (one or the other) orderings is ruled by the mathematical formula
number of orderings = (A + B)!/(A! * B!)
where A is the "count" of items of type A, and B is the "count" of items of type B
3x3 vertices, 2 down choices, 2 right choices
number of orderings = (2+2)! / (2! * 2!)
                    = 4! / (1*2 * 1*2)
                    = (1*2*3*4) / (1*2*1*2)
                    = ((1*2) * 3*4) / ((1*2) * 1*2)
                    = (3*4) / 2
                    = 12 / 2
                    = 6
You could probably do a 20x20 grid by hand if you needed to, but the factorial formula is simple enough to do by computer (although keep an eye out that you don't ruin the answer with an integer overflow).
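For illustration, here is a minimal sketch of that factorial formula (my own code, not part of the answer above), using BigInteger so the intermediate factorials cannot overflow:

import java.math.BigInteger;

public class LatticePaths {
    // number of orderings = (A + B)! / (A! * B!)
    static BigInteger orderings(int a, int b) {
        return factorial(a + b).divide(factorial(a).multiply(factorial(b)));
    }

    static BigInteger factorial(int n) {
        BigInteger result = BigInteger.ONE;
        for (int i = 2; i <= n; i++) {
            result = result.multiply(BigInteger.valueOf(i));
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(orderings(2, 2));   // 6, the 3x3-vertices example
        System.out.println(orderings(20, 20)); // the 20x20 grid from the problem
    }
}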
Another implementation:
public static void main(String[] args) {
    int n = 20;
    long matrix[][] = new long[n][n];
    for (int i = 0; i < n; i++) {
        matrix[i][0] = i + 2;
        matrix[0][i] = i + 2;
    }
    for (int i = 1; i < n; i++) {
        for (int j = i; j < n; j++) { // j >= i
            matrix[i][j] = matrix[i - 1][j] + matrix[i][j - 1];
            matrix[j][i] = matrix[i][j]; // avoids double computation (difference)
        }
    }
    System.out.println(matrix[n - 1][n - 1]);
}
Time: 43 microseconds (without printing)
It is based on the following matrix:
      |   1    2    3    4   ...
------+--------------------------
   1  |   2    3    4    5   ...
   2  |   3    6   10   15
   3  |   4   10   20   35
   4  |   5   15   35   70
   .  |   .
   .  |   .
   .  |   .
where
6 = 3 + 3
10 = 6 + 4
15 = 10 + 5
...
70 = 35 + 35
Notice that I used i + 2 instead of i + 1 in the implementation because the first index is 0.
Of course, the fastest solution is to use a mathematical formula (see Edwin's post) and the code for it:
public static void main(String[] args) {
    int n = 20;
    long result = 1;
    for (int i = 1; i <= n; i++) {
        result *= (i + n);
        result /= i;
    }
    System.out.println(result);
}
takes only 5 microseconds (without printing).
If you are afraid about the loss of precision, notice that the product of n consecutive numbers is divisible by n!.
To have a better understanding why the formula is:
         (d+r)!
F  =  ------------ ,   where |D| = d and |R| = r
        d! * r!
instead of F = (d+r)!, imagine that every "down" and "right" has an index:
down1,right1,right2,down2,down3,right3
The second formula counts all possible permutations for the "commands" above, but in our case there is no difference between down1, down2 and down3. So, the second formula will count 6 (3!) times the same thing:
down1,down2,down3
down1,down3,down2
down2,down1,down3
down2,down3,down1
down3,down1,down2
down3,down2,down1
This is why we divide (d+r)! by d!. The same argument applies to r!.
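If you want to convince yourself of that division, here is a tiny sketch (mine, not from the answer above) that brute-force counts the distinct orderings of 2 "down" and 2 "right" commands and indeed finds 6:

public class CommandBag {
    public static void main(String[] args) {
        int d = 2, r = 2, count = 0;
        int len = d + r;
        // Enumerate every sequence of len moves (bit 0 = down, bit 1 = right)
        // and keep the ones that use exactly r rights (and therefore d downs).
        for (int seq = 0; seq < (1 << len); seq++) {
            if (Integer.bitCount(seq) == r) count++;
        }
        System.out.println(count); // 6 = (2+2)! / (2! * 2!)
    }
}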
I was going through a more versatile explanation of binary search, which has a proposition that the maximum number of key compares in a binary search is (1 + log(n)). I tried to form an intuition as to how that is possible. I understand the running time of binary search is (log(n)). I also speculate that the worst case, i.e. the maximum number of key lookups, will happen when we search for an element which is not present in the array. But each time I go over a hypothetical array doing a binary search I end up with (log(n)) compares/steps, never more than that. Here is the code which was used to form that proposition.
public static int binarySearch(int[] a, int key){
int lo = 0, hi = a.length-1;
while (lo <= hi){
int mid = lo + (hi - lo) / 2;
if (key < a[mid])
hi = mid - 1;
else if (key > a[mid])
lo = mid + 1;
else
return mid;
}
return -1;
}
Probably my speculation is wrong or I am missing some point. If you could explain how the maximum possible number of key compares could be (1 + log(n)), that would be great. Thanks.
Don't forget that even when you have only 1 element, you still have to guess it, because it is possible that the target is not in the array. So we need to add on the +1 for the last guess when we are down to the final remaining element.
It may be clearer if you think about when n=1. We still need 1 guess, but log_2(1) = 0. So we need to add a +1 to fix up the formula.
When n is not a power of 2, we can just go up to the next higher power of 2. For an array whose length is 1000, the next higher power of 2 is 1024, which is 2^10. Therefore, for a 1000-element array, binary search would require at most 11 (10 + 1) guesses.
Why?
In the worst case, binary search would need 10 steps to separate the remaining numbers and 1 final step to check whether the only one number left is what you want, or it's not in the array.
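To see the +1 empirically, here is a small sketch (my own, not part of the answer) that counts the loop iterations of the binary search above when the key is absent, which is the worst case; for power-of-two sizes it prints exactly 1 + log2(n):

public class BinarySearchCount {
    // Returns how many times the loop body runs (one "level" of comparison per iteration).
    static int countIterations(int[] a, int key) {
        int lo = 0, hi = a.length - 1, count = 0;
        while (lo <= hi) {
            count++;
            int mid = lo + (hi - lo) / 2;
            if (key < a[mid])      hi = mid - 1;
            else if (key > a[mid]) lo = mid + 1;
            else                   return count;
        }
        return count;
    }

    public static void main(String[] args) {
        for (int n : new int[]{1, 2, 8, 1024}) {       // powers of two for a clean comparison
            int[] a = new int[n];
            for (int i = 0; i < n; i++) a[i] = 2 * i;  // even numbers, so an odd key is absent
            int expected = 1 + Integer.numberOfTrailingZeros(n); // 1 + log2(n), exact for powers of two
            System.out.println("n=" + n + " iterations=" + countIterations(a, 2 * n - 1)
                    + " expected=" + expected);
        }
    }
}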
Here's a different way to think about what you're doing. Rather than thinking of binary search as searching over the elements of the array, think of binary search as searching over the separators between the elements in the array. Specifically, imagine numbering the array like this:
+-----+-----+-----+-----+-----+
|  0  |  1  |  2  | ... | n-1 |
+-----+-----+-----+-----+-----+
Now, number the separators:
+-----+-----+-----+-----+-----+
|  0  |  1  |  2  | ... | n-1 |
+-----+-----+-----+-----+-----+
0     1     2     3  ...    n-1   n
Notice that there are n+1 total separators, one before each element and one after the very last element.
Whenever you do a binary search, you're probing the index of the middle separator (do you see why?) and throwing half of the separators away. You can only throw half of a collection of k items away log2 k times before you're down to a single remaining element. This means that the number of probes needed will be ⌈log2 (n+1)⌉, and it happens to be the case that
log2 n < ⌈log2 (n+1)⌉ ≤ log2 n + 1,
so the "1 + log n" bit ends up arising more from "throw away half the separators" than from other sources.
Hope this helps!
Imagine an array of size 8.
l=0, h = 7, mid = 0 + (7-0)/2 = 3 go right
l=4, h = 7, mid = 4 + (7-4)/2 = 5 go right
l=6, h = 7, mid = 6 + (7-6)/2 = 6 go right
l=7, h = 7, mid = 7 + (7-7)/2 = 7 go left
l=7, h=6 ====> terminates
Total comparisons = 1 + Log 8 = 4
EDIT1: Imagine this array and use a pen and paper and trace out the above steps. Search for value 13.
index: 0 1 2 3 4 5 6 7
------------------------------
element: 1 3 5 6 7 9 11 15
I was given the following code:
public class alg
{
    public static int hmm(int x)
    {
        if (x == 1)
        {
            return 2;
        }
        return 2*x + hmm(x-1);
    }

    public static void main(String[] args)
    {
        int x = Integer.parseInt(args[0]);
        System.out.println(hmm(x));
    }
}
So first question is, what does this algorithm count?
I have just typed it up and ran it in Eclipse so I can see better what it does (it was pseudocode before; I couldn't type it here, so I typed up the code). I have realized that this algorithm does the following: it takes the input and multiplies it by the following number.
So as examples:
input = 3, output = 12 because 3*4 = 12.
Or input = 6, output = 42 because 6*7 = 42.
Alright, the next question is my problem. I'm asked to analyze the runtime of this algorithm but I have no idea where to start.
I would say that at the beginning, when we define x, we already have time = 1.
The if statement gives time = 1 too, I believe.
The last part, return 2x + hmm(x-1), should give "something^x" or..?
So in the end we get something like "something^x" + 2; I doubt that's right :/
edit, managed to type pseudocode too :)
Input: Integer x with x > 1

if x = 1 then
    return 2;
end if
return 2x + hmm(x-1);
When you have trouble, try to walk through the code with a (small) number.
What does this calculate?
Let's take hmm(3) as an example:
3 != 1, so we calculate 2 * 3 + hmm(3-1). Down a recursion level.
2 != 1, so we calculate 2 * 2 + hmm(2-1). Down a recursion level.
1 == 1, so we return 2. No more recursions, thus hmm(2-1) == hmm(1) == 2.
Back up one recursion level, we get 2 * 2 + hmm(1) = 2 * 2 + 2 = 4 + 2 = 6. Thus hmm(2) = 6
Another level back up, we get 2 * 3 + hmm(2) = 6 + 6 = 12
If you look closely, the algorithm calculates:
2*x + ... + 4 + 2
We can reverse this and factor out 2 and get
2 * (1 + 2 + ... + x).
This is an arithmetic progression, for which we have a well-known formula: the sum is x(x+1)/2, so the whole expression equals x² + x.
How long does it take?
The asymptotic running time is O(n).
There are no loops, so we only have to count the number of recursions. One might be tempted to count the individual steps of calculation, but those are constant with every step, so we usually combine them into a constant factor k.
What does O(n) mean?
Well ... we make x - 1 recursion steps, decreasing x by 1 in every step until we reach x == 1. From x = n to x = 1 there are n - 1 such steps. We thus need k * (n - 1) operations.
If you consider n to be very large, the -1 becomes negligible, so we drop it. We also drop the constant factor, because for large n, O(k·n) and O(n) aren't that much different, either.
The function calculates
f(x) = 2(x + x-1 + x-2 + ... + 1)
it will run in O(x), i.e. the function will be called x times, each call taking constant time O(1).
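A tiny sketch (mine, just to check the two claims above): the recursion returns x² + x, and the number of calls grows linearly with x (exactly x calls, i.e. x - 1 recursive steps plus the initial call):

public class HmmCheck {
    static int calls = 0;

    static int hmm(int x) {
        calls++;
        if (x == 1) return 2;
        return 2 * x + hmm(x - 1);
    }

    public static void main(String[] args) {
        for (int x = 1; x <= 6; x++) {
            calls = 0;
            System.out.println("hmm(" + x + ") = " + hmm(x)
                    + ", closed form x*x + x = " + (x * x + x)
                    + ", recursive calls = " + calls);
        }
    }
}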
The question about Single Number II from leetcode is:
Given an array of integers, every element appears three times except for one. Find that single one.
Note:
Your algorithm should have a linear runtime complexity. Could you implement it without using extra memory?
Actually, I already found the solution from website, the solution is:
public int singleNumber(int[] A) {
    int one = 0, two = 0;
    for (int i = 0; i < A.length; i++) {
        int one_ = (one ^ A[i]) & ~two;
        int two_ = A[i] & one | ~A[i] & two;
        one = one_;
        two = two_;
    }
    return one;
}
However, I do not understand why this code works, and I do not know how I would have approached this problem when I first saw it. Any help? Thanks!
So, I was going through some coding problems and was stuck on this one for quite some time, and after a ton of research on Google, going through different posts and portals, I have understood this problem. I will try to explain it as simply as I can.
The problem has 3 solutions:
Using a HashMap: as we know, that will add O(N) space complexity and we don't want that. But for a little bit of understanding, the approach is to iterate the array, get the count of each number, and maintain it in the map. Then iterate the map, and wherever the count is 1, that will be your answer.
Using bitwise operators: the logic for this approach is to think of the numbers in bits and then add up all the bits at each position. After adding, you will see that the sum at each position is either a multiple of 3 or a multiple of 3 plus 1 (because the remaining number occurs only once). After this, if you take the sums modulo 3, you will have the result. You will understand better with the example.
Example: Array - [5, 5, 5, 6]
5 represented in bits: 101
6 represented in bits: 110
[101, 101, 101, 110] (binary representation of the values). After adding the bits at each position, we will have
0th -> 3, 1st -> 1, 2nd -> 4
and if you take it mod 3 it will become
0th -> 0, 1st -> 1, 2nd -> 1
which in decimal representation is our answer 6.
Now we need to code the same. I have explained the code using comments.
public class SingleNumberII {
    /*
     * Because the max integer value will go up to 32 bits
     * */
    private static final int INT_SIZE = 32;

    public int singleNumber(final int[] A) {
        int result = 0;
        for (int bitIterator = 0; bitIterator < INT_SIZE; bitIterator++) {
            int sum = 0, mask = (1 << bitIterator);
            /*
             * Mask:
             * 1 << k means -> shift the 1 left by k places, adding k zeros to its right
             * 1 << 2 -> 100
             * Why do we need the mask? When we do the addition we only want to count 1's,
             * and this mask lets us do that for one position at a time.
             * */
            for (int arrIterator = 0; arrIterator < A.length; arrIterator++) {
                /*
                 * The next lines do the sum.
                 * 1 & 1 -> 1
                 * 1 & 0 -> 0
                 * The if statement adds only when there is a 1 present at the position.
                 * */
                if ((A[arrIterator] & mask) != 0) {
                    sum++;
                }
            }
            /*
             * So if there were 3 ones and 1 zero at this position,
             * sum % 3 will be 0 and the result bit stays 0.
             * */
            if (sum % 3 == 1) {
                result |= mask;
            }
        }
        /* If we dry run the code with the above example:
         * bitIterator = 0; result = 0; mask = 1;
         * after the inner loop the sum at position 0 is 3. The if
         * condition is true for 5 (101 & 001 -> 001) and false for 6
         * (110 & 001 -> 000); 3 % 3 != 1, so result stays 0
         *
         * bitIterator = 1; result = 0; mask = 10;
         * after the inner loop the sum at position 1 is 1. The if
         * condition is true for 6 (110 & 010 -> 010) and false for 5
         * (101 & 010 -> 000); result -> 00 | 10 -> 10
         *
         * bitIterator = 2; result = 10; mask = 100;
         * after the inner loop the sum at position 2 is 4. The if
         * condition is true for 6 (110 & 100 -> 100) and true for 5
         * (101 & 100 -> 100); 4 % 3 == 1, so result -> 10 | 100 -> 110 (answer)
         * */
        return result;
    }
}
As we can see, this is not the best solution, because we unnecessarily iterate 32 times over the array and it is also not that generalized. Which brings us to our next approach.
Using Bitwise operators (Optimized and Generalized):
For this approach, I'll explain the idea first, then the code, and then how to generalize it.
We will take 2 flags (one, two); for analogy, consider them as sets.
When a number appears for the 1st time, it is added to one, but only if it is not already present in two. We do the same for two: when the number appears a 2nd time, we remove it from one and add it to two (only if it is not present in one). When the number appears a 3rd time, it is removed from two and no longer exists in either set.
You might ask why we need 2 sets (or variables); the reason is explained in the 4th point.
public int singleNumberOptimized(int[] A) {
    int one = 0, two = 0;
    /*
     * Two sets to track how often a number has appeared:
     * one   -> appeared 1 time
     * two   -> appeared 2 times
     * three -> not in any set
     * */
    for (int arrIterator = 0; arrIterator < A.length; arrIterator++) {
        /*
         * If one already has this number, remove it; if the number has not
         * appeared before and it is not in two, add it to one.
         * */
        one = (one ^ A[arrIterator]) & ~two;
        /*
         * If two already has this number, remove it; if the number has not
         * appeared before (in two) and it is not in one, add it to two.
         * */
        two = (two ^ A[arrIterator]) & ~one;
    }
    /*
     * Dry run:
     * First appearance:  one will have it, two will not.
     * Second appearance: one removes it and two adds it.
     * Third appearance:  one cannot add it, as it is there in two,
     *                    and two removes it because it was there.
     *
     * So one will contain only the number that occurred once, and two will have nothing.
     * */
    return one;
}
How do we solve these types of questions more generically?
The number of sets you need to create depends on the value of k (the number of times every other integer appears).
m >= log2(k). (We want to count the number of 1's at each bit position such that whenever the count reaches a certain value, say k, it returns to zero and starts over. To keep track of how many 1's we have encountered so far, we need a counter; suppose the counter has m bits.) m will be the number of sets we need.
For everything else, we use the same logic. But wait, what should we return? The trick is to OR together all the sets, which effectively ORs the single number with itself and some 0s, which in turn gives the single number.
For a better understanding of this particular part go through this post here.
I have tried my best to explain to you the solution. Hope you like it. #HappyCoding
For more such content refer: https://thetechnote.web.app/
The idea is to reinterpret the numbers as vectors over GF(3). Each bit of the original number becomes a component of the vector. The important part is that for each vector v in a GF(3) vector space the summation v+v+v yields 0. Thus the sum over all vectors will leave the unique vector and cancel all others. Then the result is interpreted again as a number which is the desired single number.
Each component of a GF(3) vector may have the values 0, 1, 2 with addition being performed mod 3. The "one" captures the low bits and the "two" captures the high bits of the result. So although the algorithm looks complicated all that it does is "digitwise addition modulo 3 without carry".
Here is another solution.
public class Solution {
    public int singleNumber(int[] nums) {
        int p = 0;
        int q = 0;
        for (int i = 0; i < nums.length; i++) {
            p = q & (p ^ nums[i]);
            q = p | (q ^ nums[i]);
        }
        return q;
    }
}
Analysis from this blog post.
The code seems tricky and hard to understand at first glance.
However, if you consider the problem in Boolean algebra form, everything becomes clear.
What we need to do is store the number of 1's at every bit position. Since each of the 32 bits follows the same rules, we just need to consider 1 bit. We know a number appears 3 times at most, so we need 2 bits to store that count. Now we have 4 states, 00, 01, 10 and 11, but we only need 3 of them.
In your solution, 01 for one and 10 for two are chosen. Let one represent the first bit and two the second bit (so the state is written as "one two"). Then we need to set rules for one_ and two_ so that they act as we hope. The complete cycle is 00 -> 10 -> 01 -> 00 (count 0 -> 1 -> 2 -> 3/0).
It's better to make a Karnaugh map, a.k.a. K-map (see a Karnaugh map reference).
3-state counter
Respective values of two_ and one_ for each combination of A[i], two and one:

A[i]  two  one  |  two_  one_
 0     0    0   |   0     0
 0     0    1   |   0     1
 0     1    0   |   1     0
 0     1    1   |   X     X
 1     0    0   |   0     1
 1     0    1   |   1     0
 1     1    0   |   0     0
 1     1    1   |   X     X
Here X denotes that we don't care about that case: the 4th and 8th rows can never occur, because two and one cannot be 1 at the same time, so whatever values we assign there the result will be the same.
If you're wondering how I came up with the above table, I'll explain one of the rows. Consider the 7th row: A[i] is 1 and two is 1, i.e. A[i] has already appeared two times, so this is the third occurrence of A[i]. Since there are now 3 of them, both two_ and one_ should reset to 0.
Considering one_:
Its value is 1 in two cases, i.e. the 2nd and 5th rows. Taking the 1s is the same as considering minterms in a K-map.
one_ = (~A[i] & ~two & one) | (A[i] & ~two & ~one)
If ~two is factored out, then
(~A[i] & one) | (A[i] & ~one) is the same as (A[i] ^ one)
Then,
one_ = (one ^ A[i]) & ~two
Considering two_:
Its value is 1 in two cases, i.e. the 3rd and 6th rows. Taking the 1s is the same as considering minterms in a K-map.
two_ = (~A[i] & two & ~one) | (A[i] & ~two & one)
The two_ expression calculated this way will work for the problem. But, as you've mentioned,
two_ = (A[i] & one) | (~A[i] & two)
The above expression can easily be obtained from the K-map by also using the don't cares, i.e. the X rows above, since including them won't affect our solution.
Considering two_ with the X's included:
Its value is 1 in two cases, the 3rd and 6th rows, and X in two cases, the 4th and 8th rows. Now, considering the minterms:
two_ = (~A[i] & two & ~one) | (A[i] & ~two & one) | (~A[i] & two & one) | (A[i] & two & one)
Now, factoring out (A[i] & one) and (~A[i] & two) in the above expression, you'll be left with (two | ~two) and (one | ~one), which equal 1. So we are left with
two_ = (A[i] & one) | (~A[i] & two)
For more insights!
There are three statuses: 0, 1, 2.
So we cannot use a single bit; we have to use a high/low bit pair to represent them as: 00, 01, 10.
Here's the logic:

high/low   00   01   10
  x=0      00   01   10
  x=1      01   10   00

The new high is a function of high, low and x.
If low == 1 then high = x, else high = high & ~x
We have
high = low & x | high & ~x
This equals your: "int two_ = A[i] & one | ~A[i] & two;"
Similarly, the new low is a function of high, low and x:
If high == 1 then low = 0, else low = low XOR x
We have
low = (low ^ x) & ~high
This equals your: "int one_ = (one ^ A[i]) & ~two;"
I have a more straightforward solution:
int singleNumber(int A[], int n) {
    int one = 0, two = 0, three = ~0;
    for (int i = 0; i < n; ++i) {
        int cur = A[i];
        int one_next   = (one & (~cur))   | (cur & three);
        int two_next   = (two & (~cur))   | (cur & one);
        int three_next = (three & (~cur)) | (cur & two);
        one = one_next;
        two = two_next;
        three = three_next;
    }
    return one;
}
This is the first thing that came to my head; it's bigger but simpler to understand. Just implement addition mod 3.
class Solution {
public:
    int sum3[34] = {}, bit[34] = {};
    int singleNumber(int A[], int n) {
        int ans(0);
        for (int i = 0; i < 32; i++) {   // bit[1..32] hold the 32 single-bit masks
            bit[i + 1] = 1 << i;
        }
        int aj;
        for (int i = 0; i < n; i++) {
            for (int j = 1; j < 33; j++) {
                aj = abs(A[i]);
                if (bit[j] & aj) sum3[j]++;   // count set bits position by position
            }
        }
        for (int i = 1; i < 33; i++) {
            sum3[i] %= 3;
            if (sum3[i] == 1) ans += bit[i];  // bits left over after taking counts mod 3
        }
        int positive(0);
        for (int i = 0; i < n; i++) {
            if (A[i] == ans) {
                positive++;
            }
        }
        if (positive % 3 == 1)
            return ans;
        else
            return -ans;   // the single number was negative
    }
};
I am trying to figure out a function f(x) that would calculate the number of leaves in a k-ary tree. For example, assume we created a tree with root 4 that has 3 children with values 3, 2 and 1 (the root minus 1, minus 2 and minus 3, respectively), and so on down the tree. Our leaves are only the 0-valued nodes, not null values. I have spent the past day trying to figure out a function and it seems like nothing I do goes in the correct direction.
EX:
          4
       /  |  \
      3   2   1
    / | \ | \  \
   2  1 0 1  0  0
  / \   |    |
 1   0  0    0
 |
 0
7 Leaves.
Any help would be very much appreciated! Thanks!
To clarify, I need a mathematical equation that derives the same answer as the code would if I recursively traversed the tree.
More examples:
{4,7}{5,13}{6,24}{7,44}{8,81}{9,149}{10,274}{11,504}{12,927}{13,1705}{14,3136}{15,5768}{16,10609}{17,19513}{18,35890}{19,66012}{20,121415}
public int numleaves(TreeNode node) {
    if (node == null)
        return 0;
    else if (node.getLeft() == null && node.getMiddle() == null && node.getRight() == null)
        return 1;
    else
        return numleaves(node.getLeft()) + numleaves(node.getMiddle()) + numleaves(node.getRight());
}
I cannot answer your question, but it has a solution. I can only outline the case for the number of children k being equal to 2. The case k=3 leads to a cubic polynomial with two complex and one real solution, I lack the tools here to derive them in a non-numerical way.
But let's have a look at the case k=2. Interestingly, this problem is very closely related to the Fibonacci numbers, except for having different boundary conditions.
Writing down the recursive formula is easy:
a(n) = a(n-1) + a(n-2)
with boundary conditions a(1)=1 and a(0)=1. The characteristic polynomial of this is
x^2 = x + 1
with the solutions x1 = 1/2 + sqrt(5)/2 and x2 = 1/2 - sqrt(5)/2. It means that
a(n) = u*x1^n + v*x2^n
for some u and v is the explicit formula for the sequence we're looking for. Putting in the boundary conditions we get
u = (sqrt(5)+1)/(2*sqrt(5))
v = (sqrt(5)-1)/(2*sqrt(5))
i.e.
a(n) = (sqrt(5)+1)/(2*sqrt(5))*(1/2 + sqrt(5)/2)^n + (sqrt(5)-1)/(2*sqrt(5))*(1/2 - sqrt(5)/2)^n
for k=2.
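Here is a quick sketch (mine, only for the k = 2 case outlined above) that evaluates this closed form next to the recurrence a(n) = a(n-1) + a(n-2); rounding absorbs the small floating-point error:

public class LeafCountK2 {
    public static void main(String[] args) {
        double x1 = 0.5 + Math.sqrt(5) / 2;
        double x2 = 0.5 - Math.sqrt(5) / 2;
        double u = (Math.sqrt(5) + 1) / (2 * Math.sqrt(5));
        double v = (Math.sqrt(5) - 1) / (2 * Math.sqrt(5));

        long a = 1, b = 1;   // a = a(n), b = a(n+1), starting at n = 0
        for (int n = 0; n <= 10; n++) {
            long closedForm = Math.round(u * Math.pow(x1, n) + v * Math.pow(x2, n));
            System.out.println("n=" + n + "  recurrence=" + a + "  closed form=" + closedForm);
            long next = a + b;
            a = b;
            b = next;
        }
    }
}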
Your code seems to be computing a Tribonacci sequence with starting values 1, 1 and 2. This is sequence A000073 from the On-Line Encyclopedia of Integer Sequences, starting from the third entry of that sequence rather than the first. The comments section of the encyclopedia page gives an explicit formula: since this is a linear recurrence relation with a degree 3 characteristic polynomial, there's a closed form solution in terms of the roots of that polynomial. Here's a short piece of Python 2 code based on the given formula that produces the first few values. (See the edit below for a simplification.)
from math import sqrt

c = (1 + (19 - 3 * sqrt(33))**(1/3.) + (19 + 3 * sqrt(33))**(1/3.)) / 3.
m = (1 - c) / 2
p = sqrt(((3*c - 5)*(c+1)/4))
j = 1/((c-m)**2 + p**2)
b = (c - m) / (2 * p*((c - m)**2 + p**2))
k = complex(-j / 2, b)
r1 = complex(m, p)

def f(n):
    return int(round(j*c**(n+2) + (2*k*r1**(n+2)).real))

for n in range(0, 21):
    print n, f(n)
And the output:
0 1
1 1
2 2
3 4
4 7
5 13
6 24
7 44
8 81
9 149
10 274
11 504
12 927
13 1705
14 3136
15 5768
16 10609
17 19513
18 35890
19 66012
20 121415
EDIT: the above code is needlessly complicated. With the round operation, the second term in f(n) can be omitted (it converges to zero as n increases), and the formula for the first term can be simplified. Here's some simpler code that generates the same output.
s = (19 + 297**0.5)**(1/3.)
c = (1 + s + 4/s)/3
j = 3 - (2 + 1/c)/c

for n in range(0, 32):
    print n, int(round(c**n / j))
I can't help it, but I see Binomial tree in it. http://en.wikipedia.org/wiki/Binomial_heap
I think that a good approximation could be the sum of the k-th row of Pascal's triangle, where k is the value of the root node.
Isn't this easier to understand:
We set the starting values for the tribonacci sequence into a list called result. Then we put these values into 3 variables. We change the variable contents based on the tribonacci formula (the new a is a+b+c, the new b is the old a, the new c is the old b). Then we calculate up to whatever tribonacci number we want to go to and store each result in our result list. At the end, we print out the indexed list.
result = [1, 1, 2]
a, b, c = result[-1], result[-2], result[-3]

for i in range(40):
    a, b, c = a+b+c, a, b
    result.append(a)

for e, f in enumerate(result):
    print e, f
I'm a high school Computer Science student, and today I was given a problem to:
Program Description: There is a belief among dice players that in throwing three dice a ten is easier to get than a nine. Can you write a program that proves or disproves this belief?

Have the computer compute all the possible ways three dice can be thrown: 1 + 1 + 1, 1 + 1 + 2, 1 + 1 + 3, etc. Add up each of these possibilities and see how many give nine as the result and how many give ten. If more give ten, then the belief is proven.
I quickly worked out a brute force solution, as such
int sum, tens, nines;
tens = nines = 0;
for (int i = 1; i <= 6; i++) {
    for (int j = 1; j <= 6; j++) {
        for (int k = 1; k <= 6; k++) {
            sum = i + j + k;
            // Ternary operators are fun!
            tens += ((sum == 10) ? 1 : 0);
            nines += ((sum == 9) ? 1 : 0);
        }
    }
}
System.out.println("There are " + tens + " ways to roll a 10");
System.out.println("There are " + nines + " ways to roll a 9");
Which works just fine, and a brute force solution is what the teacher wanted us to do. However, it doesn't scale, and I am trying to find a way to make an algorithm that can calculate the number of ways to roll n dice to get a specific number. Therefore, I started generating the number of ways to get each sum with n dice. With 1 die, there is obviously 1 solution for each. I then calculated, through brute force, the combinations with 2 and 3 dice. These are for two:
There are 1 ways to roll a 2 There are 2 ways to roll a 3
There are 3 ways to roll a 4 There are 4 ways to roll a 5
There are 5 ways to roll a 6 There are 6 ways to roll a 7
There are 5 ways to roll a 8 There are 4 ways to roll a 9
There are 3 ways to roll a 10 There are 2 ways to roll a 11
There are 1 ways to roll a 12
Which looks straightforward enough; it can be calculated with a simple linear absolute value function. But then things start getting trickier. With 3:
There are 1 ways to roll a 3 There are 3 ways to roll a 4
There are 6 ways to roll a 5 There are 10 ways to roll a 6
There are 15 ways to roll a 7 There are 21 ways to roll a 8
There are 25 ways to roll a 9 There are 27 ways to roll a 10
There are 27 ways to roll a 11 There are 25 ways to roll a 12
There are 21 ways to roll a 13 There are 15 ways to roll a 14
There are 10 ways to roll a 15 There are 6 ways to roll a 16
There are 3 ways to roll a 17 There are 1 ways to roll a 18
So I look at that, and I think: Cool, Triangular numbers! However, then I notice those pesky 25s and 27s. So it's obviously not triangular numbers, but still some polynomial expansion, since it's symmetric.
So I take to Google, and I come across this page that goes into some detail about how to do this with math. It is fairly easy (albeit long) to find this using repeated derivatives or expansion, but it would be much harder for me to program that. I didn't quite understand the second and third answers, since I have never encountered that notation or those concepts in my math studies before. Could someone please explain how I could write a program to do this, or explain the solutions given on that page, for my own understanding of combinatorics?
EDIT: I'm looking for a mathematical way to solve this, that gives an exact theoretical number, not by simulating dice
The solution using the generating-function method with N(d, s) is probably the easiest to program. You can use recursion to model the problem nicely:
public int numPossibilities(int numDice, int sum) {
    if (numDice == sum)
        return 1;
    else if (numDice == 0 || sum < numDice)
        return 0;
    else
        return numPossibilities(numDice, sum - 1) +
               numPossibilities(numDice - 1, sum - 1) -
               numPossibilities(numDice - 1, sum - 7);
}
At first glance this seems like a fairly straightforward and efficient solution. However you will notice that many calculations of the same values of numDice and sum may be repeated and recalculated over and over, making this solution probably even less efficient than your original brute-force method. For example, in calculating all the counts for 3 dice, my program ran the numPossibilities function a total of 15106 times, as opposed to your loop which only takes 6^3 = 216 executions.
To make this solution viable, you need to add one more technique - memoization (caching) of previously calculated results. Using a HashMap object, for example, you can store combinations that have already been calculated and refer to those first before running the recursion. When I implemented a cache, the numPossibilities function only runs 151 times total to calculate the results for 3 dice.
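As a sketch of that caching idea (my own wrapper around the recursion above; the HashMap key format is just one possible choice):

import java.util.HashMap;
import java.util.Map;

public class DiceCounts {
    private final Map<String, Long> cache = new HashMap<>();

    // Same recurrence as above, but results are memoized so each (numDice, sum)
    // pair is computed only once.
    public long numPossibilities(int numDice, int sum) {
        if (numDice == sum) return 1;
        if (numDice == 0 || sum < numDice) return 0;
        String key = numDice + ":" + sum;
        Long cached = cache.get(key);
        if (cached != null) return cached;
        long result = numPossibilities(numDice, sum - 1)
                    + numPossibilities(numDice - 1, sum - 1)
                    - numPossibilities(numDice - 1, sum - 7);
        cache.put(key, result);
        return result;
    }

    public static void main(String[] args) {
        DiceCounts counts = new DiceCounts();
        System.out.println(counts.numPossibilities(3, 9));   // 25
        System.out.println(counts.numPossibilities(3, 10));  // 27
    }
}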
The efficiency improvement grows larger as you increase the number of dice (results are based on simulation with my own implemented solution):
# Dice | Brute Force Loop Count | Generating-Function Exec Count
3 | 216 (6^3) | 151
4 | 1296 (6^4) | 261
5 | 7776 (6^5) | 401
6 | 46656 (6^6) | 571
7 | 279936 (6^7) | 771
...
20 | 3656158440062976 | 6101
You don't need to brute force it, since your first roll determines what values can be used in the second roll, and the first and second rolls together determine the third roll. Let's take the tens example: suppose you roll a 6, so 10-6=4, meaning you still need 4. For the second roll you need at most 3, because your third roll has to count for at least 1. So the second roll goes from 1 to 3. Suppose your second roll is 2; you have 10-6-2=2, meaning your third roll IS a 2, there is no other way.
Pseudo code for tens:
tens = 0
for i = [1..6]                  // first roll can freely go from 1 to 6
    from_j = max(1, 10 - i - 6) // best case is we roll a 6 in the third roll
    top_j  = min(6, 10 - i - 1) // we need to sum 10, minus i, minus at least 1 for the third roll
    for j = [from_j..top_j]
        tens++
Note that each loop adds 1, so at the end you know this code loops exactly 27 times.
Here is the Ruby code for all 18 values (sorry it's not Java, but it can be easily followed). Note the min and max, that determine what values can have each of the dice rolls.
counts = [0] * 18

1.upto(18) do |n|
  from_i = [1, n - 6 - 6].max  # best case is we roll 6 in the 2nd and 3rd rolls
  top_i  = [6, n - 1 - 1].min  # at least 1 for the 2nd and 3rd rolls
  from_i.upto(top_i) do |i|
    from_j = [1, n - i - 6].max  # again we have the 2nd roll defined, best case for the 3rd roll is 6
    top_j  = [6, n - i - 1].min  # at least 1 for the 3rd roll
    from_j.upto(top_j) do
      # no need to calculate k since it's already defined as n - first roll - second roll
      counts[n-1] += 1
    end
  end
end

print counts
For a mathematical approach, take a look at https://math.stackexchange.com/questions/4632/how-can-i-algorithmically-count-the-number-of-ways-n-m-sided-dice-can-add-up-t
The mathematical description is just a 'trick' to do the same counting. It uses a polynomial to express a die: 1*x^6 + 1*x^5 + 1*x^4 + 1*x^3 + 1*x^2 + 1*x means that each value 1-6 is counted once, and it uses polynomial multiplication P_1*P_2 to count the different combinations. That works because the coefficient at some exponent k is calculated by summing, over all pairs of terms from P_1 and P_2 whose exponents sum to k, the products of their coefficients.
E.g. for two dice we have:
(1*x^6 + 1*x^5 + 1*x^4 + 1*x^3 + 1*x^2 + 1*x) * (x^6 + x^5 + x^4 + x^3 + x^2 + x) =
(1*1)*x^12 + (1*1 + 1*1)*x^11 + (1*1 + 1*1 + 1*1)*x^10 + ... + (1*1 + 1*1)*x^3 + (1*1)*x^2
Calculation by this method has the same complexity as the 'counting' one.
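Here is a small sketch of that polynomial multiplication (my own code, assuming standard 6-sided dice): the coefficient array plays the role of the polynomial, and each multiplication by (x + x^2 + ... + x^6) is a convolution:

public class DicePolynomial {
    // counts[s] = number of ways to roll sum s with the dice processed so far
    public static long[] waysForDice(int nDice) {
        long[] counts = new long[]{1};            // the polynomial "1": one way to roll sum 0 with 0 dice
        for (int d = 0; d < nDice; d++) {
            long[] next = new long[counts.length + 6];
            for (int s = 0; s < counts.length; s++) {
                for (int face = 1; face <= 6; face++) {
                    next[s + face] += counts[s];  // multiply the term by x^face and add the coefficients
                }
            }
            counts = next;
        }
        return counts;
    }

    public static void main(String[] args) {
        long[] three = waysForDice(3);
        System.out.println("ways to roll 9 with 3 dice:  " + three[9]);   // 25
        System.out.println("ways to roll 10 with 3 dice: " + three[10]);  // 27
    }
}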
Since the function (x^6 + x^5 + x^4 + x^3 + x^2 + x)^n has the simpler expression (x(x^6-1)/(x-1))^n, it is possible to use the derivative approach. (x(x^6-1)/(x-1))^n is a polynomial, and we are looking for the coefficient at x^s (a_s). The constant coefficient (at x^0) of the s-th derivative is s! * a_s, so the s-th derivative evaluated at 0 is s! * a_s.
So, we have to differentiate this function s times. It can be done using the differentiation rules, but I think it will have even worse complexity than the counting approach, since each derivative produces a 'more complex' function. Here are the first three derivatives from Wolfram Alpha: first, second and third.
In general, I prefer counting solution, and mellamokb gave nice approach and explanation.
Check out Monte Carlo methods; they usually scale linearly with input size. In this case the example is easy: we assume that one throw of a die doesn't affect the others, so instead of counting combinations we can simply count the sums of the faces of dice thrown randomly (enough times).
import java.util.HashMap;
import java.util.Map;
import java.util.Random;

public class MonteCarloDice {

    private Map<Integer, Integer> histogram;
    private Random rnd;
    private int nDice;
    private int n;

    public MonteCarloDice(int nDice, int simulations) {
        this.nDice = nDice;
        this.n = simulations;
        this.rnd = new Random();
        this.histogram = new HashMap<>(1000);
        start();
    }

    private void start() {
        for (int simulation = 0; simulation < n; simulation++) {
            int sum = 0;
            for (int i = 0; i < nDice; i++) {
                sum += rnd.nextInt(6) + 1;
            }
            if (histogram.get(sum) == null)
                histogram.put(sum, 0);
            histogram.put(sum, histogram.get(sum) + 1);
        }
        System.out.println(histogram);
    }

    public static void main(String[] args) {
        new MonteCarloDice(3, 100000);
        new MonteCarloDice(10, 1000000);
    }
}
The error decreases with the number of simulations, at the cost of CPU time, but the above values were pretty fast to compute.
3 dice
{3=498, 4=1392, 5=2702, 6=4549, 7=7041, 8=9844, 9=11583, 10=12310, 11=12469, 12=11594, 13=9697, 14=6999, 15=4677, 17=1395, 16=2790, 18=460}
10 dice
{13=3, 14=13, 15=40, 17=192, 16=81, 19=769, 18=396, 21=2453, 20=1426, 23=6331, 22=4068, 25=13673, 24=9564, 27=25136, 26=19044, 29=40683, 28=32686, 31=56406, 30=48458, 34=71215, 35=72174, 32=62624, 33=68027, 38=63230, 39=56008, 36=71738, 37=68577, 42=32636, 43=25318, 40=48676, 41=40362, 46=9627, 47=6329, 44=19086, 45=13701, 51=772, 50=1383, 49=2416, 48=3996, 55=31, 54=86, 53=150, 52=406, 59=1, 57=2, 56=7}