I am solving a question where there is a grid with r rows and c columns. We start at the top-left corner cell and end at the bottom-right corner cell. The constraint is that we can move only one cell at a time, either downward or to the right. Also, some of the cells may be blacklisted. The question is to find the total number of ways we can go from the source to the target.
This is my solution, which is straightforward but runs in exponential time:
// array[r][c] is true if the cell is open, false if it is blacklisted.
// Counts the paths from (0, 0) to (r, c) moving only down or right.
int count(boolean[][] array, int r, int c)
{
    if ((r < 0 || c < 0) || !array[r][c]) return 0; // off the grid or blacklisted
    if (r == 0 && c == 0) return 1;                 // back at the source: one path
    return count(array, r - 1, c) + count(array, r, c - 1);
}
The problem I am having is with memoizing it.
Can memoization make this solution more efficient?
If so: I cannot blacklist all the cells on a path that fails, because there might be other paths through those cells that do lead to the target. So I'm confused about what I should cache here, and where to add the additional check to avoid re-exploring paths I have already gone through.
And if the answer to the first question is yes: if there were no blacklisted cells, would the memoization have served any purpose at all?
Can memoization make this solution more efficient?
Yes!
If so: I cannot blacklist all the cells on a path that fails, because there might be other paths through those cells that do lead to the target.
Correct.
So I'm confused about what I should cache here, and where to add the additional check to avoid re-exploring paths I have already gone through.
Here's what you do.
Make an r x c 2-d array of nullable integers, let's call it a. The meaning of the array is "a[x][y] gives the number of paths from (x, y) to (r-1, c-1)" -- this is supposing that (r-1, c-1) is the "exit" cell that we're trying to get to.
The array will start with every element null. That's great. Null means "I don't know".
Fill in every "blocked" cell in the array with zero. That means "there is no way to get from this cell to the exit".
If a[r-1][c-1] is zero, then the exit is blocked, and we're done. The answer to every query is zero because there is no way to get to the exit. Let's assume the exit cell is not blocked.
There is one way to get from the exit cell to itself, so fill in a[r-1][c-1] with 1.
Now the algorithm proceeds like this:
We are asked for a solution starting from cell (x, y).
Consult the array. If a[x][y] is null, then recurse on the right and down neighbours and fill in a[x][y] with the sum of those answers.
Now the array is definitely filled in, so return a[x][y].
Let's work an example. Suppose we have
n n n
n n 0
n n 1
And we are asked for the solution for (0, 1). We don't have a solution. So we try to find the solutions for (1, 1) and (0, 2).
We don't have a solution for (1, 1). So we have to get solutions for (1, 2) and (2, 1).
(1, 2) we've got. It's 0.
(2, 1) we don't have but (2, 2) we do, and that's the only neighbour. (2, 2) is 1, so we fill in (2, 1):
n n n
n n 0
n 1 1
Now we have enough information to fill in (1, 1):
n n n
n 1 0
n 1 1
We still haven't done (0, 2). It has one neighbour which is zero, so that's:
n n 0
n 1 0
n 1 1
And now we can fill in (0, 1)
n 1 0
n 1 0
n 1 1
Which is what we were looking for, so we're done.
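If it helps, here is a minimal Java sketch of that memoized scheme, using Integer[][] for the nullable array. The names countPaths/countFrom are mine, and open[x][y] == true is assumed to mean the cell is not blocked:

// memo[x][y] caches the number of paths from (x, y) to the exit at
// (rows - 1, cols - 1); null means "I don't know yet".
int countPaths(boolean[][] open)
{
    Integer[][] memo = new Integer[open.length][open[0].length];
    return countFrom(open, memo, 0, 0);
}

int countFrom(boolean[][] open, Integer[][] memo, int x, int y)
{
    if (x >= open.length || y >= open[0].length || !open[x][y]) return 0; // off the grid or blocked
    if (memo[x][y] == null)
    {
        if (x == open.length - 1 && y == open[0].length - 1)
            memo[x][y] = 1;                                // the exit cell itself
        else
            memo[x][y] = countFrom(open, memo, x + 1, y)   // down neighbour
                       + countFrom(open, memo, x, y + 1);  // right neighbour
    }
    return memo[x][y];
}

Each cell is computed at most once, so the running time is O(r * c) instead of exponential.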
Alternative solution: Pre-compute the array.
We start by filling in all the zeros and the one at the exit as before.
Now fill in the rightmost column going from the bottom up: it is all ones, until you get to the first zero, at which point it becomes all zeros.
Now fill in the bottommost row going right to left. Again, it is all ones, until you get to the first zero, at which point it becomes all zeros.
Now we have enough information to fill in the second-from-the-right column and the second-from-the-bottom row; do you see how?
Proceed like that until the entire array is filled in.
And now all the answers are in the array.
Example:
first step:
n n n
n n 0
n n 1
Fill in the outer row and column:
n n 0
n n 0
1 1 1
Fill in the next row and column:
n 1 0
2 1 0
1 1 1
And the last:
3 1 0
2 1 0
1 1 1
And we're done; the whole problem is solved.
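A rough Java sketch of this pre-computation, under the same assumptions (true = open cell). It sweeps the grid from the bottom-right corner, which fills in the same values as the column-and-row peeling described above, just in a slightly different order:

// paths[x][y] = number of down/right paths from (x, y) to the exit;
// blocked cells keep the initial value 0.
long[][] precompute(boolean[][] open)
{
    int rows = open.length, cols = open[0].length;
    long[][] paths = new long[rows][cols];
    for (int x = rows - 1; x >= 0; x--)
    {
        for (int y = cols - 1; y >= 0; y--)
        {
            if (!open[x][y]) continue;                     // blocked: leave 0
            if (x == rows - 1 && y == cols - 1)
                paths[x][y] = 1;                           // the exit cell
            else
                paths[x][y] = (x + 1 < rows ? paths[x + 1][y] : 0)
                            + (y + 1 < cols ? paths[x][y + 1] : 0);
        }
    }
    return paths;
}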
if there were no blacklisted cells, would the memoization have served any purpose at all?
If there are no cells blacklisted then the array looks like this:
20 10 4 1
10 6 3 1
4 3 2 1
1 1 1 1
which is a shape you should have seen before and should know how to compute each element of directly. Hint: you've usually seen it as a triangle, not a square.
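Spelling out the hint: with no blacklisted cells, each entry is a binomial coefficient (Pascal's triangle laid out on a square), so it can be computed directly. A small sketch, assuming the counts fit in a long:

// With no blocked cells, the number of down/right paths from (x, y) to
// (rows - 1, cols - 1) is C(down + right, down), with down = rows - 1 - x
// and right = cols - 1 - y.
long pathsWithoutBlacklist(int rows, int cols, int x, int y)
{
    int down = rows - 1 - x, right = cols - 1 - y;
    long result = 1;
    for (int i = 1; i <= down; i++)
        result = result * (right + i) / i;                 // builds C(down + right, down) step by step
    return result;
}

For the 4 x 4 table above, pathsWithoutBlacklist(4, 4, 0, 0) gives C(6, 3) = 20, the top-left entry.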
Okay, so here is the scenario:
I have a 2D integer array representing my graph / adjacency matrix: a 1 means there is a connection, a 0 means there is none. Pretty simple. However, I then iterate back through the array to remove a given subset of vertices. So if I have the graph:
{ 0 1 1 1
1 0 1 1
1 1 0 1
1 1 1 0 }
and I pass the subset {3,1}
then I will be left with
{ 0 0 1 0
0 0 0 0
1 0 0 0
0 0 0 0 }
Now my question is: how do I go about counting the number of vertices in each component? The output I want is the maximum number of vertices in a single component. My problem is that I don't understand how I'm supposed to tell the components apart. It's easy for me to work out on paper, but I am stumped on how to express it in code. I am doing this in Java.
Any insight would be helpful
Edit Note:
I am trying to use BFS or some other search method to count each vertex and its connections, then iterate over each vertex that has not yet been seen or checked, and continue, and finally output the size of the largest component.
Let's say I have a graph with connections as above, before the subset is given. The subset will be removed, and then we are left with pieces of a graph. I then need to iterate over those pieces to find which piece has the most connections.
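In case a concrete starting point helps, here is a rough Java sketch of that idea: run a BFS from every not-yet-visited vertex of the reduced adjacency matrix and keep the size of the largest component found. The method name is mine, and it counts isolated vertices as components of size 1, so you may want to skip the vertices you removed:

import java.util.ArrayDeque;
import java.util.Queue;

// Returns the number of vertices in the largest connected component of an
// undirected graph given as an adjacency matrix (1 = edge, 0 = no edge).
static int largestComponentSize(int[][] adj)
{
    int n = adj.length;
    boolean[] visited = new boolean[n];
    int best = 0;
    for (int start = 0; start < n; start++)
    {
        if (visited[start]) continue;
        int size = 0;                                      // BFS over the component containing 'start'
        Queue<Integer> queue = new ArrayDeque<>();
        queue.add(start);
        visited[start] = true;
        while (!queue.isEmpty())
        {
            int v = queue.remove();
            size++;
            for (int w = 0; w < n; w++)
            {
                if (adj[v][w] == 1 && !visited[w])
                {
                    visited[w] = true;
                    queue.add(w);
                }
            }
        }
        best = Math.max(best, size);
    }
    return best;
}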
There was a question asked:
"Presented with the integer n, find the 0-based position of the second
rightmost zero bit in its binary representation (it is guaranteed that
such a bit exists), counting from right to left.
Return the value of 2^position_of_the_found_bit."
I had written the solution below, which works fine.
int secondRightmostZeroBit(int n) {
    String bits = Integer.toBinaryString(n);
    int rightmostZero = bits.lastIndexOf('0');
    int secondRightmostZero = bits.lastIndexOf('0', rightmostZero - 1);
    return (int) Math.pow(2, bits.length() - 1 - secondRightmostZero);
}
But below was the best-voted solution, which I also liked as it is just a few characters of code and serves the purpose, but I could not understand it. Can someone explain how the bit manipulation achieves it?
int secondRightmostZeroBit(int n) {
    return ~(n | (n + 1)) & ((n | (n + 1)) + 1);
}
Consider some number having at least two 0 bits. Here is an example of such a number with the 2 rightmost 0 bits marked (x...x are bits we don't care about which can be either 0 or 1, and 1...1 are the sequences of zero or more 1 bits to the right and to the left of the rightmost 0 bit) :
x...x01...101...1 - that's n
If you add 1 to that number you get :
x...x01...110...0 - that's (n+1)
which means the rightmost 0 bit flips to 1 (and the 1 bits to its right flip to 0)
therefore n|(n+1) would give you:
x...x01...111...1 - that's n|(n+1)
If you add 1 to n|(n+1) you get:
x...x100........0 - that's (n|(n+1))+1
which means the second rightmost 0 bit also flips to 1
Now, ~(n|(n+1)) is
y...y10.........0 - that's ~(n|(n+1))
where each y bit is the inverse of the corresponding x bit
therefore ~(n|(n+1)) & ((n|(n+1))+1) gives
0...010.........0
where the only 1 bit is at the location of the second rightmost 0 bit of the input number.
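A quick sanity check with a concrete number (my own example, n = 37 = 100101 in binary, whose second rightmost zero bit is at position 3):

public static void main(String[] args) {
    int n = 37;                 // 100101: rightmost 0 at position 1, second rightmost 0 at position 3
    int a = n | (n + 1);        // 100101 | 100110 = 100111 (rightmost 0 bit filled in)
    int result = ~a & (a + 1);  // ...1011000 & 0101000 = 0001000
    System.out.println(result); // prints 8, i.e. 2^3
}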
I have an algorithm that checks whether or not a game row can be solved. The game row is an array of non-negative integers whose last element is 0. The game marker starts at index 0 and moves along the array the number of steps indicated by the integer it is positioned at.
For example, [1, 1, 0] returns true, while [1, 2, 0] returns false.
At each step the marker can move either left or right in order to solve the game.
That is, [3, 3, 2, 2, 0] is solvable.
Algorithm recursiveSolvable(gameArray, index)
    if index = gameArray.length - 1      // the last element has been reached
        return true
    if index < 0 || index >= gameArray.length || arrayList.contains(index)
        return false
    arrayList.add(index)                 // store visited indices to avoid an infinite loop
    // move towards the goal (last element) if possible;
    // otherwise, trace back steps to find another way
    return recursiveSolvable(gameArray, index + gameArray[index])
        || recursiveSolvable(gameArray, index - gameArray[index])
I have tried with a few examples of game rows and calculated the time complexity in the worst case:
[2, 0] has 2 recursive calls, where the first one returns false and so does the second
[1, 1, 2, 0] has 5:
go right || go left - false
  |
  go right || go left - false
    |
    go right || go left - false (because index 0 has been visited)
      |
      false (then go left)
For other cases I got numbers whose relation to the input size I couldn't work out, but when I run the program with input size n = 100, the output appears instantly, so I assume the time complexity is not O(2^n) (like binary recursion). I'm leaning more towards O(n)...
As for the space complexity, I have no idea how to find it.
The run time is indeed more like O(n). This is because each index position is investigated only once (due to the test with the arrayList).
The exact bound also depends on the data structure used for arrayList. Is it really a List or a HashSet? With a List, contains is O(n) per call, which makes the total O(n²); with a HashSet the whole run stays O(n).
The space complexity is O(n) for the same reason. There can only be one incarnation of the recursive method for each index position.
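For reference, here is roughly what that looks like in Java with a HashSet for the visited indices, so contains/add are O(1); the method names are mine:

import java.util.HashSet;
import java.util.Set;

static boolean isSolvable(int[] gameArray)
{
    return solve(gameArray, 0, new HashSet<Integer>());
}

static boolean solve(int[] gameArray, int index, Set<Integer> visited)
{
    if (index == gameArray.length - 1) return true;            // reached the final 0
    if (index < 0 || index >= gameArray.length) return false;  // stepped off the row
    if (!visited.add(index)) return false;                     // this index was already tried
    return solve(gameArray, index + gameArray[index], visited)     // step right
        || solve(gameArray, index - gameArray[index], visited);    // step left
}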
I have a challenge where the objective is to find the lowest cost of a path through a matrix. The path can proceed horizontally or diagonally, not vertically, and the first and last rows are also adjacent (the matrix wraps around vertically). For example, see the matrices below:
Output for the 1st matrix:
16
1 2 3 4 4 5 --> path row numbers
Output for the second matrix:
11
1 2 1 5 4 5 --> path row numbers
I am doing it in Java. I am getting the lowest cost, but I am not able to recover the path (the row numbers) in order to print it.
// r and c are assumed to be fields holding the number of rows and columns;
// min(...) is assumed to be a helper returning the smallest of its three arguments.
int minCost(int[][] cost, int m, int n)
{
    if (n < 0 || m < 0)
        return Integer.MAX_VALUE;
    else if ((m == r - 1 && n == c - 1) || n + 1 >= c)      // reached the last column
        return cost[m][n];
    else
        return cost[m][n] + min(minCost(cost, m + 1 >= r ? 0 : m + 1, n + 1),   // wrap: last row is adjacent to the first
                                minCost(cost, m, n + 1),
                                minCost(cost, m - 1 >= 0 ? m - 1 : r - 1, n + 1));
}
// calling it
minCost(cost, 0, 0);
How do I get the row numbers of the lowest-cost path?
Your algorithm is quite inefficient. The best solution I can think of is to calculate it backwards (from right to left). Consider the rightmost two columns of your second matrix:
8 6
7 4
9 5
2 6
2 3
If we are now on the cell with value 8, the next step can be 6/4/3. Of course we choose 3 because we want the smaller cost. If we are now on the cell with value 7, the next step can be 6/4/5, and we choose 4. So the two columns can be merged into one column:
11 //8+3
11 //7+4
13 //9+4
5 //2+3
5 //2+3
Now repeat with the last two columns:
2 11
2 11
9 13
3 5
1 5
Finally the matrix will be merged into a single column, and the smallest value in that column is the lowest cost.
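Here is a rough Java sketch of that backwards merge which also records, for every cell, which row was chosen in the next column, so the row numbers of the path can be printed afterwards. The names are mine, the row wrap-around is handled with a modulo, the printed row numbers are 1-based to match the expected output, and ties may be broken differently than in the sample paths:

static void lowestCostPath(int[][] cost)
{
    int rows = cost.length, cols = cost[0].length;
    long[] dp = new long[rows];                             // dp[i] = lowest cost from row i of the current column to the right edge
    int[][] choice = new int[rows][cols];                   // choice[i][j] = row chosen in column j + 1 when standing at (i, j)
    for (int i = 0; i < rows; i++) dp[i] = cost[i][cols - 1];   // last column: just the cell cost
    for (int j = cols - 2; j >= 0; j--)
    {
        long[] next = new long[rows];
        for (int i = 0; i < rows; i++)
        {
            int up = (i - 1 + rows) % rows, down = (i + 1) % rows;  // first and last rows are adjacent
            int bestRow = i;
            if (dp[up] < dp[bestRow]) bestRow = up;
            if (dp[down] < dp[bestRow]) bestRow = down;
            next[i] = cost[i][j] + dp[bestRow];
            choice[i][j] = bestRow;
        }
        dp = next;
    }
    int row = 0;                                            // the path may start in any row of the first column
    for (int i = 1; i < rows; i++) if (dp[i] < dp[row]) row = i;
    System.out.println(dp[row]);                            // lowest cost
    StringBuilder path = new StringBuilder();
    for (int j = 0; j < cols; j++)
    {
        path.append(row + 1);                               // 1-based row numbers
        if (j < cols - 1)
        {
            path.append(' ');
            row = choice[row][j];
        }
    }
    System.out.println(path);                               // the path as row numbers
}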
I'll try to expand on fabian's comment:
It's clear that your minCost function will return the same values if called with the same arguments. In your algorithm, it indeed does get called lots of times with the same values. Every call for column zero will generate 3 calls for column 1, which in turn generate 9 calls for column 2, etc. The last column will get a huge number of calls (3^r, as fabian pointed out), most of them recalculating the very same values as other calls.
The idea is to store these values so they don't need to be recalculated every time they are needed. A very simple way of doing this is to create a new matrix of the same size as the original and calculate, column by column, the minimum sum for getting to each cell. The first column will be trivial (just copy from the original array, as there is only one step involved), and then proceed for the other columns reusing the values already calculated.
After that, you can optimize space usage by replacing the second matrix by only two columns, as you are not going to need column n-1 once you have column n fully calculated. This can be a bit tricky, so if you're unsure I recommend using the full array the first time.
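A sketch of that forward, column-by-column computation with the two-column space optimization (the names are mine). Note that with only a rolling column you get the final cost but not the path; to reconstruct the path, keep the full table, or a separate table of choices as in the sketch above:

static long lowestCostOnly(int[][] cost)
{
    int rows = cost.length, cols = cost[0].length;
    long[] prev = new long[rows];
    for (int i = 0; i < rows; i++) prev[i] = cost[i][0];    // first column is trivial: just copy it
    for (int j = 1; j < cols; j++)
    {
        long[] cur = new long[rows];
        for (int i = 0; i < rows; i++)
        {
            int up = (i - 1 + rows) % rows, down = (i + 1) % rows;   // wrap-around rows
            cur[i] = cost[i][j] + Math.min(prev[i], Math.min(prev[up], prev[down]));
        }
        prev = cur;
    }
    long best = prev[0];                                    // cheapest way to reach any cell of the last column
    for (int i = 1; i < rows; i++) best = Math.min(best, prev[i]);
    return best;
}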
I'm attempting to make a Sudoku solving program. To reach the puzzle's solution, the program interprets 0's as empty slots, and then makes an array that has a length equivalent to the number of zeros in the entire puzzle. From there, it sets all of the values in the array to 1 (the minimum value any slot can have in a Sudoku puzzle). What I'm trying to do is simulate a number's incremental pattern in the array starting from the element with the greatest index.
For example, a puzzle with three empty slots would result in an array of 3 elements. The array would then increase based on the pattern mentioned above:
0 0 0 (Initiation)
1 1 1 (Set to possible values)
1 1 2
1 1 3
1 1 4
1 1 5
1 1 6
1 1 7
1 1 8
1 1 9
1 2 1 (Skips 1 2 0 since it would include a 0)
1 2 2
etc.
This is a modified form of a base 10 number increment. Instead of 0-9, it uses 1-9. How may I build a method that will increment the array in this manner?
The basic algorithm here is to increment the rightmost digit; then, if it overflows, increment the next one to the left, and so on. Recursion is a neat way of solving this. I'll do it in pseudocode and leave you to convert it to Java:
function increment(array, digit)
    if (array[digit] < 9)
        array[digit] += 1
    else if (digit > 0)
        array[digit] = 1
        increment(array, digit - 1)
    else
        you are finished
Then each time you call this with: increment(array, array.length - 1)
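Since the conversion to Java is left to you, here is one possible translation; the boolean return value (signalling that every digit has rolled over) is my own addition:

// Increments the array in the 1-9 "odometer" pattern described above.
// Returns false once the leftmost digit has also overflowed, i.e. all combinations have been tried.
static boolean increment(int[] digits, int digit)
{
    if (digits[digit] < 9)
    {
        digits[digit]++;            // simple case: bump the current digit
        return true;
    }
    if (digit > 0)
    {
        digits[digit] = 1;          // overflow: reset to 1 and carry to the left
        return increment(digits, digit - 1);
    }
    return false;                   // leftmost digit overflowed: we're finished
}

Call it as increment(array, array.length - 1) after each attempt; when it returns false, every combination has been generated.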