Edit: It appears people are confusing this question with another. Both questions are about the same Foobar challenge. The other question asked for an approach better than the exponential-time or Ω(answer) brute-force search, since a brute-force search took too long. The answers there suggested using dynamic programming, which is a good idea that is much faster than a brute-force search or backtracking, though not the best possible. This question starts with dynamic programming, which works on 4 out of 5 tests but seems to get the wrong answer for the 5th and perhaps largest test case. It doesn't take too long; it completes but gets the wrong answer. The answers to the other question do not help with this question, nor does the answer to this question help with that one, so they are not duplicates. They are about different aspects of the same task.
I am working on a Foobar challenge, trying to determine the number of possible "winning" roll combinations an individual could make using a 3-sided die. The simulated user rolls t times on a 1-dimensional game board that is n spaces wide. The 3-sided die has 3 possible values: left (-1), stay (0), right (1). The user starts at location 0 on the board. If you are at 0 and you roll a -1 (left), the game is invalid. If you are on the final square, the only valid roll is 0 (stay). The objective is to determine the total number of roll combinations a user could make that end with their marker on the last square. (Read the full challenge description below.)
I have a semi-functioning solution to this challenge; however, when I submit it for review it fails 1 out of 5 test scenarios. The problem is that Foobar doesn't disclose which scenario failed; it simply says 'Test 5 failed!'. Would anybody be able to look at my Java code (below) and see what I am missing?
Here is my code:
public static int answer(int t, int n) {
    if ((n - t) > 1) {
        return 0;
    }
    if (n == 2) {
        return t * 1;
    }
    if (t == n) {
        return n;
    }
    // Use dynamic programming:
    int lst[] = new int[n];  // lst[k] holds the # valid paths to position k using i-1 steps
    int lst2[] = new int[n]; // put # valid paths to position k using i steps into lst2[k]
    int total = 0;
    lst[0] = 1;
    lst[1] = 1;
    int max = 1;
    for (int i = 1; i < t; i++) {
        lst2 = new int[n];
        if (max < (n - 1)) {
            max++;
        }
        for (int j = 0; j < n && j < (max + 1); j++) {
            if (j == 0) {
                lst2[j] = lst[j] + lst[j + 1];
            } else if (j == max) {
                if (j == (n - 1)) {
                    total += lst[j - 1];
                } else {
                    lst2[j] = lst[j - 1];
                }
            } else {
                lst2[j] = lst[j - 1] + lst[j] + lst[j + 1];
            }
        }
        lst = lst2;
    }
    return total % 123454321;
}
Original Challenge Text
There you have it. Yet another pointless "bored" game created by the bored minions of Professor Boolean.
The game is a single player game, played on a board with n squares in a horizontal row. The minion places a token on the left-most square and rolls a special three-sided die.
If the die rolls a "Left", the minion moves the token to a square one space to the left of where it is currently. If there is no square to the left, the game is invalid, and you start again.
If the die rolls a "Stay", the token stays where it is.
If the die rolls a "Right", the minion moves the token to a square, one space to the right of where it is currently. If there is no square to the right, the game is invalid and you start again.
The aim is to roll the dice exactly t times, and be at the rightmost square on the last roll. If you land on the rightmost square before t rolls are done then the only valid dice roll is to roll a "Stay". If you roll anything else, the game is invalid (i.e., you cannot move left or right from the rightmost square).
To make it more interesting, the minions have leaderboards (one for each n,t pair) where each minion submits the game he just played: the sequence of dice rolls. If some minion has already submitted the exact same sequence, they cannot submit a new entry, so the entries in the leader-board correspond to unique games playable.
Since the minions refresh the leaderboards frequently on their mobile devices, as an infiltrating hacker, you are interested in knowing the maximum possible size a leaderboard can have.
Write a function answer(t, n), which given the number of dice rolls t, and the number of squares in the board n, returns the possible number of unique games modulo 123454321. i.e. if the total number is S, then return the remainder upon dividing S by 123454321, the remainder should be an integer between 0 and 123454320 (inclusive).
n and t will be positive integers, no more than 1000. n will be at least 2.
Languages
To provide a Python solution, edit solution.py. To provide a Java solution, edit solution.java.
Test cases
Inputs: (int) t = 1, (int) n = 2; Output: (int) 1
Inputs: (int) t = 3, (int) n = 2; Output: (int) 3
The counts grow exponentially in t. My guess is that the error is that you are overflowing the integer range. Reduce intermediate results mod m, or use a java.math.BigInteger.
OK, to make it simple: yes, there is a problem with int overflow. However, you don't need to use a larger container such as BigInteger. All you need to store in the second array is the remainder, e.g. lst2[j] = (lst[j - 1] + lst[j] + lst[j + 1]) % 123454321;. By doing this, your value will never exceed 123454321, which easily fits within an int. Then, after every iteration of i, use total %= 123454321;, and at the end you just return total. Since we are just adding paths, taking the intermediate result mod 123454321 simply reduces it back to a manageable number.
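For illustration, here is a compact, self-contained sketch of the same DP idea with every intermediate sum reduced mod 123454321. It is written from scratch so it can be run on its own; the class and variable names are mine, not the asker's.

public class DiceGames {
    static final int MOD = 123454321;

    public static int answer(int t, int n) {
        if (t < n - 1) return 0;                 // not enough rolls to reach the right edge
        long[] ways = new long[n];               // ways[k]: paths of the current length ending at k (k < n-1)
        ways[0] = 1;
        long total = 0;
        for (int step = 1; step <= t; step++) {
            // a path that steps onto square n-1 now is forced to "Stay" for the
            // rest of the game, so it contributes exactly one complete game
            total = (total + ways[n - 2]) % MOD;
            long[] next = new long[n];
            for (int k = 0; k < n - 1; k++) {
                long sum = ways[k];                      // Stay
                if (k > 0) sum += ways[k - 1];           // Right from k-1
                if (k + 1 < n - 1) sum += ways[k + 1];   // Left from k+1
                next[k] = sum % MOD;                     // keep every intermediate value small
            }
            ways = next;
        }
        return (int) total;
    }
}

For the sample cases this gives answer(1, 2) = 1 and answer(3, 2) = 3, and the values stay far below overflow because every sum is reduced immediately.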
Related
The description of the problem and its solution(s) can be found here:
https://www.geeksforgeeks.org/count-the-number-of-ways-to-divide-n-in-k-groups-incrementally/
Basically, the problem is: given N people, how many ways can you divide them into K groups such that each group has at least as many people as the one before it?
The solution is to recurse through every possibility, and its complexity can be cut down from O(N^K) to O(N^2 * K) through dynamic programming.
I understand the complexity of the old recursive solution, but have trouble understanding why the dynamic programming solution has O(N^2 * K) complexity. How does one come to this conclusion about the dynamic programming solution's time complexity? Any help would be appreciated!
First of all, big O notation gives us an idea of the relation between two functions, t(n)/i(n), as n -> infinity. More specifically, it is an upper bound on that relation, meaning f(n) >= t(n)/i(n). Here t(n) stands for the growth of the time spent on execution, and i(n) describes the growth of the input. In function space (where we work with functions rather than numbers, yet treat functions almost like numbers: we can divide or compare them, for example) the relation between two elements is also a function. Hence, t(n)/i(n) is a function.
Secondly, there are two methods of determining bounds for that relation.
The scientific, observational approach involves the following steps. Let's see how much time it takes to execute an algorithm with 10 pieces of input. Then let's increase the input to 100 pieces, then to 1000, and so on. The speed of growth of the input i(n) is exponential (10^1, 10^2, 10^3, ...). Suppose we get an exponential speed of growth of time as well (10^1 sec, 10^2 sec, 10^3 sec, ... respectively).
That means t(n)/i(n) = exp(n)/exp(n) = 1 as n -> infinity (for the sake of scientific purity, we can divide and compare functions only as n -> infinity, though that doesn't affect the practicality of the method). We can say at least (remember, it's an upper bound) that the execution time of our algorithm doesn't grow faster than its input does. We might instead have measured, say, a squared exponential speed of growth of time. In that case t(n)/i(n) = exp^2(n)/exp(n) = a^(2n)/a^n = exp(n), a > 1, as n -> infinity, which means our time complexity is O(exp(n)); big O notation only reminds us that it's not a tight bound. Also, it's worth pointing out that it doesn't matter which speed of growth of input we choose. We might have wanted to increase our input linearly. Then t(n)/i(n) = exp(n)*n/n = exp(n) would express the same thing as t(n)/i(n) = exp^2(n)/exp(n) = a^(2n)/a^n = exp(n), a > 1. What matters here is the quotient.
The second approach is theoretical and mostly used in the analysis of relatively obvious cases. Say, we have a piece of code from the example:
// DP table
static int[][][] dp = new int[500][500][500];

// Function to count the number of ways to divide the remaining amount `left`
// among the groups from position `pos` onward, where each group must have at
// least `prev` elements and k groups are required in total
static int calculate(int pos, int prev, int left, int k)
{
    // Base case: all k groups have been formed
    if (pos == k)
    {
        if (left == 0)
            return 1;
        else
            return 0;
    }

    // N was divided up completely in fewer than k groups
    if (left == 0)
        return 0;

    // If the subproblem has already been solved, use the stored value
    if (dp[pos][prev][left] != -1)
        return dp[pos][prev][left];

    int answer = 0;

    // Try all possible group sizes greater than or equal to prev
    for (int i = prev; i <= left; i++)
    {
        answer += calculate(pos + 1, i, left - i, k);
    }

    return dp[pos][prev][left] = answer;
}

// Function to count the number of ways to divide the number N into k groups
static int countWaystoDivide(int n, int k)
{
    // Initialize the DP table with -1
    for (int i = 0; i < 500; i++)
    {
        for (int j = 0; j < 500; j++)
        {
            for (int l = 0; l < 500; l++)
                dp[i][j][l] = -1;
        }
    }
    return calculate(0, 1, n, k);
}
The first thing to notice here is the 3-d array dp. It gives us a hint about the time complexity of a DP algorithm because usually we traverse it once. Then we are concerned with the size of the array. It's initialized with the size 500*500*500, which doesn't tell us a lot, because 500 is a number, not a function, and it doesn't depend on the input variables, strictly speaking. It's done for the sake of simplicity, though. Effectively, dp has size k*n*n, under the assumption that k <= 500 and n <= 500.
Let's prove it. The method static int calculate(int pos, int prev, int left, int k) has three actual variables, pos, prev and left, while k remains constant. The range of pos is 0 to k, because it starts from 0 in return calculate(0, 1, n, k); and the base case is if (pos == k). The range of prev is 1 to left, because it starts from 1 and iterates up to left in for (int i = prev; i <= left; i++). Finally, the range of left is n down to 0, because it starts from n in return calculate(0, 1, n, k); and is decreased down to 0 by the same loop. To recap, the number of possible combinations of pos, prev and left is simply their product, k*n*n.
The second thing is to prove that each range of pos, prev and left is traversed only once. From the code, it can be determined by analysing this block:
for (int i = prev; i <= left; i++)
{
    answer += calculate(pos + 1, i, left - i, k);
}
All three variables are changed only here. pos grows from 0 by adding 1 on each step. For each particular value of pos, prev changes by adding 1, from prev up to left. For each particular combination of values of pos and prev, left changes by subtracting i (which ranges from prev to left) from left.
The idea behind this approach is that once we iterate over an input variable by some rule, we get a corresponding time complexity. We could iterate over a variable by halving the remaining range on each step, for example; in that case we would get logarithmic complexity. Or we could step on every element of the input, in which case we would get linear complexity.
In other words, from common sense we can safely assume a minimum time complexity of t(n)/i(n) = 1 for every algorithm, meaning that t(n) and i(n) grow equally fast; that corresponds to doing nothing with the input. Once we do something with the input, t(n) becomes f(n) times bigger than i(n), and by the logic shown above, it is that f(n) we need to estimate.
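To make the state-count argument concrete, here is a small, hypothetical experiment (not part of the original answer): it memoizes the same recursion and counts how many distinct (pos, prev, left) cells actually get filled, which stays within the k*n*n bound discussed above.

import java.util.Arrays;

public class StateCount {
    static int[][][] dp;
    static int cellsFilled = 0;   // each memo cell is computed exactly once

    static int calculate(int pos, int prev, int left, int k) {
        if (pos == k) return left == 0 ? 1 : 0;
        if (left == 0) return 0;
        if (dp[pos][prev][left] != -1) return dp[pos][prev][left];
        int answer = 0;
        for (int i = prev; i <= left; i++)
            answer += calculate(pos + 1, i, left - i, k);
        cellsFilled++;            // this (pos, prev, left) state is filled for the first time
        return dp[pos][prev][left] = answer;
    }

    public static void main(String[] args) {
        int n = 20, k = 5;
        dp = new int[k][n + 1][n + 1];   // pos < k, prev <= n, left <= n
        for (int[][] plane : dp)
            for (int[] row : plane) Arrays.fill(row, -1);
        int ways = calculate(0, 1, n, k);
        System.out.println("ways = " + ways + ", cells filled = " + cellsFilled
                + ", k*n*n bound = " + k * n * n);
    }
}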
I got this interview question and I am still very confused about it.
The question was as the title suggests; I'll explain.
You are given a random creation function to use.
The function's input is an integer n; let's say I call it with 3.
It should give me a permutation of the numbers from 1 to 3, so for example it will give me 2, 3, 1.
When I call the function again, it won't give me the same permutation; now it will give me 1, 2, 3, for example.
Now if I call it with n = 4, I may get 1, 4, 3, 2.
Calling it with 3 again will output neither 2, 3, 1 nor 1, 2, 3, which were output before; it will give me a different permutation out of the 3! possible permutations.
I was confused about this question then and I still am now. How is this possible within a normal running time? As I see it, there has to be some static variable that remembers what was returned before, even after the function finishes executing.
So my thought is to create a static hash table whose key is the input n and whose value is an array of length n!.
Then we use a random method to output a random instance out of these and move that instance to the back, so it will not be returned again, thus keeping the output unique.
The space and time complexity seem huge to me.
Am I missing something in this question ?
Jonathan Rosenne's answer was downvoted because it was link-only, but it is still the right answer in my opinion, given that this is such a well-known problem. You can also see a minimal explanation on Wikipedia: https://en.wikipedia.org/wiki/Permutation#Generation_in_lexicographic_order.
To address your space-complexity concern, generating permutations in lexicographic order has O(1) space complexity; you don't need to store anything other than the current permutation. The algorithm is quite simple, and above all its correctness is quite intuitive. Imagine you had the set of all permutations and ordered them lexicographically. Advancing to the next one in order, and cycling back to the first after the last, gives you the maximum cycle without repetitions. The problem with literally doing that is again space complexity, since you would need to store all possible permutations; the algorithm gives you a way to get the next permutation without storing anything. It may take a while to understand, but once I got it, it was quite enlightening.
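For reference, here is a minimal Java sketch of that next-permutation step (following the Wikipedia description linked above); it advances the array in place with O(1) extra space and wraps around to the first permutation after the last one:

public static void nextPermutation(int[] a) {
    int i = a.length - 2;
    while (i >= 0 && a[i] >= a[i + 1]) i--;   // find the rightmost ascent a[i] < a[i+1]
    if (i >= 0) {
        int j = a.length - 1;
        while (a[j] <= a[i]) j--;             // rightmost element larger than a[i]
        int tmp = a[i]; a[i] = a[j]; a[j] = tmp;
    }
    // reverse the suffix; if no ascent was found this reverses the whole
    // array, wrapping from the last permutation back to the first
    for (int l = i + 1, r = a.length - 1; l < r; l++, r--) {
        int tmp = a[l]; a[l] = a[r]; a[r] = tmp;
    }
}

Starting from 1, 2, ..., n and calling this repeatedly cycles through all n! permutations while storing only the current one.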
You can store a static variable as a seed for the next permutation.
In this case, we can encode which slot each number will be put in with an int (this example is hard-coded to sets of 4 numbers):
private static int seed = 0;

public static int[] generate()
{
    //s is a copy of seed, and increment seed for the next generation
    int s = seed++ & 0x7FFFFFFF; //ensure s is positive
    int[] out = new int[4];
    //place 4-2
    for (int i = out.length; i > 1; i--)
    {
        int pos = s % i;
        s /= i;
        for (int j = 0; j < out.length; j++)
            if (out[j] == 0)
                if (pos-- == 0)
                {
                    out[j] = i;
                    break;
                }
    }
    //place 1 in the last spot open
    for (int i = 0; i < out.length; i++)
        if (out[i] == 0)
        {
            out[i] = 1;
            break;
        }
    return out;
}
Here's a version that takes the size as an input, and uses a HashMap to store the seeds
private static Map<Integer, Integer> seeds = new HashMap<Integer, Integer>();

public static int[] generate(int size)
{
    //s is a copy of seed, and increment seed for the next generation
    int s = seeds.containsKey(size) ? seeds.get(size) : 0; //can replace 0 with a Math.random() call to seed randomly
    seeds.put(size, s + 1);
    s &= 0x7FFFFFFF; //ensure s is positive
    int[] out = new int[size];
    //place numbers 2+
    for (int i = out.length; i > 1; i--)
    {
        int pos = s % i;
        s /= i;
        for (int j = 0; j < out.length; j++)
            if (out[j] == 0)
                if (pos-- == 0)
                {
                    out[j] = i;
                    break;
                }
    }
    //place 1 in the last spot open
    for (int i = 0; i < out.length; i++)
        if (out[i] == 0)
        {
            out[i] = 1;
            break;
        }
    return out;
}
This method works because the seed stores the locations of each element to be placed
For size 4:
Get the lowest digit in base 4, since there are 4 slots remaining
Place a 4 in that slot
Shift the number to remove the data used (divide by 4)
Get the lowest digit in base 3, since there are 3 slots remaining
Place a 3 in that slot
Shift the number to remove the data used (divide by 3)
Get the lowest digit in base 2, since there are 2 slots remaining
Place a 2 in that slot
Shift the number to remove the data used (divide by 2)
There is only one slot remaining
Place a 1 in that slot
This method is expandable up to 12! for ints, 13! overflows, or 20! for longs (21! overflows)
If you need to use bigger numbers, you may be able to replace the seeds with BigIntegers
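As a rough illustration of that last point (my own sketch, not code from the answer above), the seed arithmetic can be done with BigInteger so the scheme keeps working past 20!; the per-size HashMap is dropped here for brevity:

import java.math.BigInteger;

public class BigPermutation {
    private static BigInteger seed = BigInteger.ZERO;

    public static int[] generate(int size) {
        BigInteger s = seed;
        seed = seed.add(BigInteger.ONE);              // advance the seed for the next call
        int[] out = new int[size];
        // place size, size-1, ..., 2 using the factorial number system
        for (int i = size; i > 1; i--) {
            BigInteger[] qr = s.divideAndRemainder(BigInteger.valueOf(i));
            int pos = qr[1].intValue();               // lowest "digit" in base i
            s = qr[0];                                // shift the used digit away
            for (int j = 0; j < size; j++)
                if (out[j] == 0 && pos-- == 0) { out[j] = i; break; }
        }
        for (int j = 0; j < size; j++)                // 1 goes in the last open slot
            if (out[j] == 0) { out[j] = 1; break; }
        return out;
    }
}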
I am coding a blackjack game and I am very far through. However, I just got to the point of adding together the score after each hand (something I thought would be easy), but Aces are proving to rack my brain endlessly. Since casinos play with multiple decks, it's mathematically possible to get up to 21 aces in the same hand.
How do I make a loop to go through an ArrayList of Integers called Hand, which holds ints corresponding to the cards in the hand? For example, a player hits and now has an Ace, a 5, a 2, a King, and then draws another Ace. The ArrayList representing his hand is [1, 10, 2, 5, 1].
My idea:
public void calculatePlayerScore()
{
    int counter = 0;
    int aceCount = 0;
    for (int i = 0; i < hand.size(); i++)
    {
        if (hand.get(i) != 1)
        {
            counter++;
        }
        else
        {
            aceCount++;
        }
    }
    for (int i = 0; i < aceCount; i++)
    {
        //Now that we know the regular cards
        //and we know how many aces there are
        //we should be able to loop to find the most
        //optimal score without busting, and pick the highest among those
    }
}
If anyone has an idea for this, please let me know. Thanks so much for the help.
Note that only one ace can count as 11 in a hand. (Otherwise, the total will be at least 22.)
Calculate hand with all aces counting 1
if ( (hand total is <= 11) AND (there is at least one ace) )
hand total += 10 // one ace counts as 11
Only one ace can ever be 11. Summing a hand looks like this:
public int calculateScore(Hand hand) {
    int total = 0;
    boolean hasAce = false;
    for (int i = 0; i < hand.size(); i++) {
        int r = hand.get(i);
        total += r;
        if (1 == r) { hasAce = true; }
    }
    if (hasAce && total < 12) { total += 10; }
    return total;
}
In a real blackjack game, you might also want to return whether the hand total is soft or hard.
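A minimal sketch of that idea (my own illustration, not code from the answer above), assuming the hand is a List<Integer> as in the question and using a small hypothetical Score holder:

static class Score {
    final int total;
    final boolean soft;
    Score(int total, boolean soft) { this.total = total; this.soft = soft; }
}

static Score calculateScore(java.util.List<Integer> hand) {
    int total = 0;
    boolean hasAce = false;
    for (int r : hand) {
        total += r;
        if (r == 1) hasAce = true;
    }
    boolean soft = hasAce && total < 12;  // the promoted ace makes the total "soft"
    return new Score(soft ? total + 10 : total, soft);
}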
If the total of the hand is over 21, but one of the cards in the hand is being counted as 11, then count that 11 as 1 instead.
Take the sum of your "Hand" ArrayList; for any location where you see an Ace, calculate that sum with both 1 and 11 as the value. If your sum is greater than 21, you bust. If not, continue. If it is exactly 21, add that hand to your successes and stop hitting.
The brute-force method of solving this problem is to implement a "look-ahead" function: look at your entire hand once, then calculate all the possible totals your hand provides, treating each Ace within your hand as either a 1 or an 11. After you generate this list of possibilities, you can see which possibility uses the fewest cards to create a blackjack or the highest hand value, and choose that possibility. It's a common algorithm problem, and there are probably very efficient solutions out there for you to look at.
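Here is a rough sketch of that look-ahead idea (a hypothetical helper, not the asker's code): enumerate how many aces to count as 11 and keep the highest total that does not bust, falling back to the all-ones total if everything busts.

// Hypothetical brute-force helper: try every number of aces promoted to 11
// and keep the best total that stays at or under 21.
public static int bestTotal(java.util.List<Integer> hand) {
    int base = 0, aces = 0;
    for (int card : hand) {
        if (card == 1) aces++;
        base += card;                      // count every ace as 1 for now
    }
    int best = base;                       // all-ones total (may already bust)
    for (int promoted = 1; promoted <= aces; promoted++) {
        int total = base + 10 * promoted;  // each promoted ace adds 10 more
        if (total <= 21 && total > best) best = total;
    }
    return best;
}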
I have a program that searches a 2D array using binary search. In this case I am using the matrix below and searching for the integers 4, 12, 110, 5, and 111. The program finds all of them except for 110 and 111. Why is this?
{1,3,7,8,8,9,12},
{2,4,8,9,10,30,38},
{4,5,10,20,29,50,60},
{8,10,11,30,50,60,61},
{11,12,40,80,90,100,111},
{13,15,50,100,110,112,120},
{22,27,61,112,119,138,153},
public static boolean searchMatrix(int[][] matrix, int p, int n) {
    int low = 0, high = n - 1;
    while (low < high) {
        int mid = (low + high) / 2;
        if (p == matrix[mid][0]) return true;
        else if (p < matrix[mid][0]) high = mid - 1;
        else if (p < matrix[mid + 1][0]) { low = mid; break; }
        else low = mid + 1;
    }
    int row = low;
    low = 0; high = matrix[row].length - 1;
    while (low <= high) {
        int mid = (low + high) / 2;
        if (p == matrix[row][mid]) return true;
        else if (p < matrix[row][mid]) high = mid - 1;
        else low = mid + 1;
    }
    return false;
}
I would rather say it's more or less a surprise that your algorithm finds 4, 5, and 12 in the first place. The reason is that 4 occurs in the first position of a row, and 5 and 12 satisfy the condition that they are less than the first element of the next row. Only due to that fact are they discovered in the second half of your algorithm. The algorithm is a bit hard to read and I did not evaluate the +/-1 magic, but it seems the algorithm expects 110 and 111 to occur in the last row (since both 110 and 111 are greater than 22), where they are not.
If I understand you correctly, your approach is flawed in that it is actually impossible, by looking at a single number, to tell which row it will occur in, which is what your first loop tries to achieve. So any two-phase algorithm that first picks a row and then searches within that row must fail.
With the few constraints you have on your matrix (each row and each column is sorted), it does not seem like binary search will work at all: even if your bounds low and high were 2D points, it would not help much. Consider any element of the matrix that is greater than your search value. All you can say is that your search value is not below and to the right of that element (whereas what you were hoping to conclude was that it is above and to the left, which is not necessarily true: it can be above and to the right, or below and to the left), so you are cutting off too small a part of the search space.
Your issue is that you are making the false assumption that you can first lock down the row of your search value, and then easily do binary search on that row. That's not the case at all.
For 110 and 111, the first element of each row is always less than your search value, and your algorithm comes to the false conclusion that this means your row must be the array with index 6 after the first loop. This is simply not true.
The reason it works for the small numbers is that your algorithm is just lucky enough to lock down the right row in the first loop...
One correct algorithm to quickly search a 2D matrix in which every row and column is sorted in ascending order is as follows:
1) Start with the top-right element e.
2) Loop: compare e with x:
i) if they are equal, return its position;
ii) if e < x, move down (if that goes out of the bounds of the matrix, break and return false);
iii) if e > x, move left (if that goes out of the bounds of the matrix, break and return false).
3) Repeat i), ii) and iii) until you find the element or return false.
I found this algorithm here: http://www.geeksforgeeks.org/search-in-row-wise-and-column-wise-sorted-matrix/
It's O(n) for an n x n matrix.
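For completeness, here is a small Java sketch of that staircase search (my own illustration of the linked algorithm), assuming the rows and columns are sorted in ascending order:

// Staircase search from the top-right corner: O(rows + cols) comparisons.
public static boolean staircaseSearch(int[][] matrix, int x) {
    if (matrix.length == 0) return false;
    int row = 0;
    int col = matrix[0].length - 1;   // start at the top-right element
    while (row < matrix.length && col >= 0) {
        int e = matrix[row][col];
        if (e == x) return true;
        if (e < x) row++;             // everything left of e in this row is even smaller, so move down
        else col--;                   // everything below e in this column is even larger, so move left
    }
    return false;
}

On the matrix from the question, staircaseSearch(matrix, 110) visits 12, 38, 60, 61, 111, 100, 112 and then finds 110.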
I'm solving Project Euler Problem 14 using Java. I am NOT asking for help solving the problem. I have already solved it, but I ran into something I can't figure out.
The problem is like this:
The following iterative sequence is defined for the set of positive
integers:
n = n/2, if n is even
n = 3n + 1, if n is odd
Using the rule above and starting with 13, we generate the following
sequence:
13 -> 40 -> 20 -> 10 -> 5 -> 16 -> 8 -> 4 -> 2 -> 1. Here, the length of the chain is 10 numbers.
Find the starting number below 1,000,000 that produces the longest chain.
So I wrote this code:
public class Euler014 {
    public static void main(String[] args) {
        int maxChainCount = 0;
        int answer = 0;
        int n;
        int chainCount = 1;
        for (int i = 0; i < 1000000; i++) {
            n = i;
            while (n > 1) {
                if (n % 2 == 0) { // check if even
                    n /= 2;
                } else {          // else: odd
                    n = 3 * n + 1;
                }
                chainCount++;
            }
            if (chainCount > maxChainCount) { // check if it's the longest chain so far
                maxChainCount = chainCount;
                answer = i;
            }
            chainCount = 1;
        }
        System.out.println("\n\nLongest chain: i = " + answer);
    }
}
This gives me the answer 910107, which is wrong.
HOWEVER, if I change the type of my n variable to double, it runs and gives me the answer 837799, which is right!
This really confuses me, as I can't see what the difference would be at all. I understand that if we use int and do divisions we can end up rounding numbers when we don't intend to. But in this case, we always check whether n is divisible by 2 BEFORE dividing by 2, so I thought it would be totally safe to use integers. What am I not seeing?
This is the code in its entirety; copy, paste and run it if you'd like to see for yourself. It runs in a couple of seconds despite all the iteration. =)
Your problem is overflow. If you change int n to long n, you'll get the right answer.
Remember: The numbers in the sequence can be really big. So big they overflow int's range. But not (in this case) double's, or long's.
At one point in the chain, n is 827,370,449 and you follow the 3n + 1 branch. That value wants to be 2,482,111,348, but it overflows the capacity of int (which is 2,147,483,647 in the positive realm) and takes you to -1,812,855,948. And things go south from there. :-)
So your theory that you'd be fine with integer (I should say integral) numbers is correct. But they have to have the capacity for the task.
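For reference, here is a minimal sketch of the corrected program with n held in a long; the structure otherwise mirrors the original:

public class Euler014Fixed {
    public static void main(String[] args) {
        int maxChainCount = 0;
        int answer = 0;
        for (int i = 1; i < 1000000; i++) {
            long n = i;                        // long, so 3n + 1 cannot overflow here
            int chainCount = 1;
            while (n > 1) {
                n = (n % 2 == 0) ? n / 2 : 3 * n + 1;
                chainCount++;
            }
            if (chainCount > maxChainCount) {  // longest chain so far?
                maxChainCount = chainCount;
                answer = i;
            }
        }
        System.out.println("Longest chain: i = " + answer);
    }
}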