I have a challenge where the objective is to find the lowest cost of a path through a matrix.
The path can proceed horizontally or diagonally, not vertically, like below,
and the first and last rows are also adjacent (they wrap around).
For example, see the matrices below:
Output for the 1st matrix:
16
1 2 3 4 4 5 --> path row numbers
Output for the 2nd matrix:
11
1 2 1 5 4 5 --> path row numbers
I'm doing it in Java. I'm getting the lowest cost, but I haven't managed to print the path itself as row numbers.
int minCost(int[][] cost, int m, int n) // r (rows) and c (columns) are fields
{
    if (n < 0 || m < 0)
        return Integer.MAX_VALUE;
    else if ((m == r - 1 && n == c - 1) || n + 1 >= c)
        return cost[m][n];
    else
        return cost[m][n]
             + Math.min(minCost(cost, (m + 1) % r, n + 1),        // last row wraps to first
               Math.min(minCost(cost, m, n + 1),
                        minCost(cost, (m - 1 + r) % r, n + 1)));  // first row wraps to last
}
// calling it
minCost(cost, 0, 0);
How to get the row numbers for shortest path?
Your algorithm is quite inefficient. The best solution I can think of is to calculate it backwards (from right to left). Consider the rightmost 2 columns of your second matrix:
8 6
7 4
9 5
2 6
2 3
If we are now on the cell with value 8, the next step can be 6/4/3. Of course we choose 3 because we want a smaller cost. If we are on the cell with value 7, the next step can be 6/4/5, and we will choose 4. So the two columns can be merged into one column:
11 //8+3
11 //7+4
13 //9+4
5 //2+3
5 //2+3
Now repeat the process on the (new) last two columns:
2 11
2 11
9 13
3 5
1 5
Finally the matrix will be merged into a single column; the smallest value in that column is the lowest cost.
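One way to sketch this right-to-left merge in Java, with a choice table added so the path's row numbers can be printed (class, method, and variable names are mine, not from the question):

```java
class MinCostPath {
    // best[i][j]: cheapest cost from cell (i, j) to the last column.
    // choice[i][j]: row the optimal path moves to in column j + 1.
    // Returns {minCost, row1, row2, ...} with 1-based row numbers.
    static int[] solve(int[][] cost) {
        int r = cost.length, c = cost[0].length;
        int[][] best = new int[r][c];
        int[][] choice = new int[r][c];
        for (int i = 0; i < r; i++) best[i][c - 1] = cost[i][c - 1];
        for (int j = c - 2; j >= 0; j--) {
            for (int i = 0; i < r; i++) {
                // first and last rows are adjacent, hence the wrap-around
                int up = (i - 1 + r) % r, down = (i + 1) % r;
                int next = i;
                if (best[up][j + 1] < best[next][j + 1]) next = up;
                if (best[down][j + 1] < best[next][j + 1]) next = down;
                best[i][j] = cost[i][j] + best[next][j + 1];
                choice[i][j] = next;
            }
        }
        int start = 0;
        for (int i = 1; i < r; i++) if (best[i][0] < best[start][0]) start = i;
        int[] result = new int[c + 1];
        result[0] = best[start][0];
        for (int j = 0, i = start; j < c; j++) {
            result[j + 1] = i + 1;              // report rows 1-based
            if (j < c - 1) i = choice[i][j];
        }
        return result;
    }
}
```

Run on the two-column example above, it reports cost 5 with path rows 4 5, matching the merged column.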
I'll try to expand on fabian's comment:
It's clear that your minCost function will return the same value if called with the same arguments. In your algorithm, it indeed gets called many times with the same values. Every call for column zero generates 3 calls for column 1, which in turn generate 9 calls for column 2, etc. The last column gets a huge number of calls (3^r, as fabian pointed out), most of them recalculating the very same values already computed for other calls.
The idea is to store these values so they don't need to be recalculated every time they are needed. A very simple way of doing this is to create a new matrix of the same size as the original and calculating, column by column, the minimum sum for getting to each cell. The first column will be trivial (just copy from the original array, as there is only one step involved), and then proceed for the other columns reusing the values already calculated.
After that, you can optimize space usage by replacing the second matrix by only two columns, as you are not going to need column n-1 once you have column n fully calculated. This can be a bit tricky, so if you're unsure I recommend using the full array the first time.
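A minimal sketch of that two-column optimization, assuming (as in the question) that the first and last rows are adjacent; names are mine:

```java
class MinCostDP {
    // prev holds the finished column j - 1, cur is filled for column j,
    // then the two are swapped; only O(r) extra space is needed.
    static int minCost(int[][] cost) {
        int r = cost.length, c = cost[0].length;
        int[] prev = new int[r], cur = new int[r];
        for (int i = 0; i < r; i++) prev[i] = cost[i][0]; // first column: just copy
        for (int j = 1; j < c; j++) {
            for (int i = 0; i < r; i++) {
                int up = (i - 1 + r) % r, down = (i + 1) % r; // rows wrap around
                cur[i] = cost[i][j] + Math.min(prev[i], Math.min(prev[up], prev[down]));
            }
            int[] tmp = prev; prev = cur; cur = tmp;          // reuse the old column
        }
        int best = prev[0];
        for (int i = 1; i < r; i++) best = Math.min(best, prev[i]);
        return best;
    }
}
```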
When doing QR or SVD decomposition on an m x n matrix A in ojalgo, I've hit a snag. My purpose is to find a basis for the column null space. If m >= n, things work fine. For instance, QR decomposition of the transpose A' of a 5 x 4 matrix A with rank 2 gives me a 4 x 4 Q matrix whose last two columns span the null space of A.
On the other hand, if I start with a 5 x 7 matrix A with rank 5 (and do a QR decomposition of A'), I get the correct rank, but Q is 5 x 5 rather than 7 x 7, and I don't get the null space basis. Similarly, SVD with that same matrix A gets me five positive singular values (no zeros), and the Q2 matrix is 5 x 7 rather than 7 x 7 (no null vectors).
Is this expected behavior? I found a work-around for matrices with n > m (adding n-m rows of zeros to A), but it's clunky.
The matrices can be any size/shape, but calculating the economy-sized decomposition is the default behaviour, as it is what most users need/want. There is an interface, MatrixDecomposition.EconomySize, that lets you control this (to optionally get the full-size decomposition). Currently the QR, SVD and Bidiagonal decompositions implement it.
I am solving a question where there is a grid with r rows and c columns. We start at the top-left corner cell and end at the bottom-right corner cell. The constraint is that we can move only one cell at a time, either downward or right. Also, some of the cells may be blacklisted. The question is to find the total number of ways to go from the source to the target.
This is my solution which is straightforward but runs in exponential time:
int count(boolean[][] array, int r, int c)
{
    // array[r][c] == false means the cell is blacklisted
    if (r < 0 || c < 0 || !array[r][c]) return 0;
    if (r == 0 && c == 0) return 1;
    return count(array, r - 1, c) + count(array, r, c - 1);
}
The problem I am having is while memoizing this.
Can memoization make this solution be made more efficient?
If so, then I cannot blacklist all the cells that are on a failed path, because there might be other paths through those cells which do lead to the target. So I'm confused as to what I should cache here, and where I add the additional check to avoid re-exploring paths I have already gone through.
If the answer to (1) is yes: if there were no blacklisted cells, would the memoization serve any purpose at all?
Can memoization make this solution be made more efficient?
Yes!
If so, then I cannot blacklist all the cells that are on a failed path, because there might be other paths through those cells which do lead to the target.
Correct.
So I'm confused as to what I should cache here, and where I add the additional check to avoid re-exploring paths I have already gone through.
Here's what you do.
Make an r x c 2-d array of nullable integers; let's call it a. The meaning of the array is "a[x][y] gives the number of paths from (x, y) to (r-1, c-1)", supposing that (r-1, c-1) is the "exit" cell we're trying to reach.
The array will start with every element null. That's great. Null means "I don't know".
Fill in every "blocked" cell in the array with zero. That means "there is no way to get from this cell to the exit".
If a[r-1][c-1] is zero, then the exit is blocked, and we're done. The answer to every query is zero because there is no way to get to the exit. Let's assume the exit cell is not blocked.
There is one way to get from the exit cell to itself, so fill in a[r-1][c-1] with 1.
Now the algorithm proceeds like this:
We are asked for a solution starting from cell (x, y).
Consult the array. If a[x][y] is null, recurse on the right and down neighbours, and fill in a[x][y] with the sum of those answers.
Now the array is definitely filled in, so return a[x][y].
Let's work an example. Suppose we have
n n n
n n 0
n n 1
And we are asked for the solution for (0, 1). We don't have a solution. So we try to find the solutions for (1, 1) and (0, 2).
We don't have a solution for (1, 1). So we have to get solutions for (1, 2) and (2, 1).
(1, 2) we've got. It's 0.
(2, 1) we don't have but (2, 2) we do, and that's the only neighbour. (2, 2) is 1, so we fill in (2, 1):
n n n
n n 0
n 1 1
Now we have enough information to fill in (1, 1):
n n n
n 1 0
n 1 1
We still haven't done (0, 2). Its only neighbour inside the grid, (1, 2), is zero, so that's:
n n 0
n 1 0
n 1 1
And now we can fill in (0, 1)
n 1 0
n 1 0
n 1 1
Which is what we were looking for, so we're done.
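This memoized recursion can be sketched in Java using Integer (nullable) for the "I don't know" state; grid[i][j] == false marking a blocked cell is my encoding assumption:

```java
class PathCounter {
    static Integer[][] a;   // null = unknown, otherwise #paths from (x, y) to the exit
    static int r, c;

    static int paths(int x, int y) {
        if (x >= r || y >= c) return 0;             // stepped off the grid
        if (a[x][y] == null)                        // unknown: recurse right and down once
            a[x][y] = paths(x + 1, y) + paths(x, y + 1);
        return a[x][y];
    }

    static int countPaths(boolean[][] grid) {
        r = grid.length;
        c = grid[0].length;
        a = new Integer[r][c];
        for (int i = 0; i < r; i++)
            for (int j = 0; j < c; j++)
                if (!grid[i][j]) a[i][j] = 0;       // blocked: no way to the exit
        if (a[r - 1][c - 1] == null) a[r - 1][c - 1] = 1; // the exit itself
        return paths(0, 0);
    }
}
```

On the 3 x 3 worked example above (only (1, 2) blocked), this returns 3, the value filled in at the top-left corner.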
Alternative solution: Pre-compute the array.
We start by filling in all the zeros and the one at the exit as before.
Now fill in the rightmost column going from the bottom up: it is all ones, until you get to the first zero, at which point it becomes all zeros.
Now fill in the bottommost row going right to left. Again, it is all ones, until you get to the first zero, at which point it becomes all zeros.
Now we have enough information to fill in the second-from-the-right column and the second-from-the-bottom row; do you see how?
Proceed like that until the entire array is filled in.
And now all the answers are in the array.
Example:
first step:
n n n
n n 0
n n 1
Fill in the outer row and column:
n n 0
n n 0
1 1 1
Fill in the next row and column:
n 1 0
2 1 0
1 1 1
And the last:
3 1 0
2 1 0
1 1 1
And we're done; the whole problem is solved.
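The pre-computation can be sketched by filling the array from the bottom-right corner, row by row, which covers the outer column/row and the inner cells in one pass (again, grid[i][j] == false meaning blocked is my assumption):

```java
class PathTable {
    // Fills a[i][j] = number of paths from (i, j) to (r-1, c-1), scanning
    // from the bottom-right corner so every dependency is ready when used.
    static int[][] precompute(boolean[][] grid) {
        int r = grid.length, c = grid[0].length;
        int[][] a = new int[r][c];
        for (int i = r - 1; i >= 0; i--)
            for (int j = c - 1; j >= 0; j--) {
                if (!grid[i][j]) continue;                               // blocked: stays 0
                if (i == r - 1 && j == c - 1) { a[i][j] = 1; continue; } // the exit
                int down = i + 1 < r ? a[i + 1][j] : 0;
                int right = j + 1 < c ? a[i][j + 1] : 0;
                a[i][j] = down + right;
            }
        return a;
    }
}
```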
if there were no blacklisted cells, would the memoization serve any purpose at all?
If there are no cells blacklisted then the array looks like this:
20 10 4 1
10 6 3 1
4 3 2 1
1 1 1 1
which is a shape you should have seen before, and you should know how to compute each element directly. Hint: you've usually seen it as a triangle, not a square.
I have a row of int numbers, e.g. 1 2 3 7 8 9. They are sorted.
I often need to insert numbers into this row so that it stays sorted, e.g. inserting 4 gives 1 2 3 4 7 8 9. Sometimes I have to read the row from the start up to some number that depends on the numbers in the row.
Which data type is the best choice, and how do I best insert the new number?
If your sequence does not have repetitions you can use a SortedSet<Integer>, say a TreeSet<Integer>, so that every time you add an element the sequence will remain sorted.
If the sequence does have repetitions check out Guava's sorted multiset.
Try ArrayList. It is much easier to manipulate than a simple array. If you need to work only with primitives, this can still be done with an array of ints.
ArrayList<Integer> foo = new ArrayList<Integer>();
foo.add(2, 4); // puts a 4 at index 2, shifting everything from that index on one position to the right
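To keep the list sorted on every insert, a minimal sketch is to find the insertion point with Collections.binarySearch (which returns -(insertionPoint) - 1 for a key that is absent) and add there:

```java
import java.util.Collections;
import java.util.List;

class SortedInsert {
    static void insertSorted(List<Integer> list, int value) {
        int pos = Collections.binarySearch(list, value);
        if (pos < 0) pos = -(pos + 1);  // absent: convert to the insertion point
        list.add(pos, value);           // shifts later elements one index to the right
    }
}
```

This finds the position in O(log n); the add itself still shifts elements, which is the price of an array-backed list.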
First time here at Stack Overflow. I hope someone can help me with my search for an algorithm.
I need to generate N random numbers in given ranges that sum up to a given sum.
For example: generate 3 numbers that sum up to 11.
Ranges:
value between 1 and 3,
value between 5 and 8,
value between 3 and 7.
The generated numbers for this example could be: 2, 5, 4.
I already searched a lot and couldn't find the solution I need.
It is possible to generate N numbers with a constant sum using modulo, like this:
generate random numbers of which the sum is constant
But I couldn't get that to work with ranges.
Another approach is to generate N random values, sum them up, divide the target sum by the random sum, and then multiply each random number by that quotient, as proposed here.
The main problem why I can't adopt those solutions is that each of my random values has a different range, and I need the values to be uniformly distributed within their ranges (no frequency spikes at min/max, for example, which happens if I clamp values that fall below min or above max).
I also thought of a solution: take a random value (in the example, value 1, 2 or 3), generate it within its range (either between min and max, or between min and the remaining sum, whichever is smaller), subtract it from the given sum, and keep going until everything is distributed. But that would be horribly inefficient. I could really use an algorithm with a fixed runtime.
I'm trying to get this running in Java, but that info is not that important, except if someone already has a solution ready. All I need is a description of, or an idea for, an algorithm.
First, note that the problem is equivalent to:
Generate k numbers x_1, ..., x_k that sum to a number y, such that each x_i has an upper limit.
The second form is achieved by simply subtracting each lower bound from its variable (and from the sum) - so in your example, it is equivalent to:
Generate 3 numbers such that x1 <= 2; x2 <= 3; x3 <= 4; x1+x2+x3 = 2
Note that the 2nd problem can be solved in various ways; one of them is:
Generate a list with h_i repeats per element - where h_i is the upper limit for element i - shuffle the list, and pick the first y elements.
In your example, the list is [x1,x1,x2,x2,x2,x3,x3,x3,x3] - shuffle it and choose the first two elements.
(*) Note that shuffling the list can be done using the Fisher-Yates algorithm (you can abort the algorithm in the middle, after you have picked the desired number of elements).
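A sketch of that list-and-shuffle idea, reducing to the zero-lower-bound problem first (method and parameter names are mine; min/max are inclusive bounds):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

class RangeSum {
    static int[] generate(int[] min, int[] max, int total, Random rnd) {
        int n = min.length;
        int excess = total;
        List<Integer> tokens = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            excess -= min[i];                        // reduce to lower bound 0
            for (int k = 0; k < max[i] - min[i]; k++)
                tokens.add(i);                       // h_i repeats of index i
        }
        Collections.shuffle(tokens, rnd);            // Fisher-Yates underneath
        int[] result = min.clone();
        for (int k = 0; k < excess; k++)
            result[tokens.get(k)]++;                 // first `excess` tokens get a unit
        return result;
    }
}
```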
Add up the minimum values. In this case 1 + 5 + 3 = 9
11 - 9 = 2, so you have to distribute 2 among the three numbers (e.g. +2/+0/+0 or +0/+1/+1).
I leave the rest for you, it's relatively easy to create a uniform distribution after this transformation.
This problem is equivalent to randomly distributing an excess of 2 over the minimum of 9 across 3 positions.
So you start with the minima (1/5/3) and then cycle 2 times, generating a (pseudo-)random index in [0-2] (3 positions) and incrementing the value at that index.
e.g.
Start 1/5/3
1st random=1 ... increment index 1 ... 1/6/3
2nd random=0 ... increment index 0 ... 2/6/3
2+6+3=11
Edit
Reading this a second time, I understand this is exactly what @KarolyHorvath mentioned.
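A sketch of this incremental distribution, with an added check so a position never exceeds its upper bound (names are mine; it assumes a feasible input, i.e. the minima sum to at most the target and the maxima to at least it):

```java
import java.util.Random;

class ExcessDistributor {
    static int[] distribute(int[] min, int[] max, int total, Random rnd) {
        int[] result = min.clone();              // start with the minima, e.g. 1/5/3
        int excess = total;
        for (int v : min) excess -= v;           // 11 - 9 = 2 in the example
        while (excess > 0) {
            int i = rnd.nextInt(min.length);     // random position to increment
            if (result[i] < max[i]) {            // skip positions already at max
                result[i]++;
                excess--;
            }
        }
        return result;
    }
}
```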
I have a situation where I show a matrix. The matrix contains values greater than or equal to 0. For the user's convenience I added a slider whose minimum value is 0 and whose maximum is the maximum value in the matrix. When the user moves the slider, I filter the matrix cells and show only those whose value is greater than the slider value.
My issue is that the values in the matrix are quite sparse: there are a lot of values in one range, sometimes 1-5, and then 20-25 (other ranges are possible too). So when I move the slider, the table shrinks a lot at once.
I want the slider to filter out only a few values per step. I was thinking there might be a logarithmic way of solving this problem, or maybe some other way.
If your matrix is M x N, consider storing the data in an array data[L][2] where L = M x N, data[i][0] is the actual value, and data[i][1] is the value's position in your original M x N matrix.
If the array is sorted by data[i][0], you can quickly find the positions with values under a given threshold.
Example:
Matrix
1 2 3
0 7 1
5 1 9
Array (value, position), sorted by value:
value:    0 1 1 1 2 3 5 7 9
position: 4 1 6 8 2 3 7 5 9
Cells with values less than 3 are at positions: 4 1 6 8 2
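A sketch of that layout in Java: flatten to (value, 1-based position) pairs, sort by value, and a threshold query becomes an early-terminating scan over the prefix (class and method names are mine):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

class SparseFilter {
    static int[][] flattenSorted(int[][] m) {
        int rows = m.length, cols = m[0].length;
        int[][] pairs = new int[rows * cols][2];
        for (int i = 0; i < rows; i++)
            for (int j = 0; j < cols; j++) {
                pairs[i * cols + j][0] = m[i][j];          // value
                pairs[i * cols + j][1] = i * cols + j + 1; // 1-based position
            }
        Arrays.sort(pairs, Comparator.comparingInt(p -> p[0])); // stable sort by value
        return pairs;
    }

    static List<Integer> positionsBelow(int[][] pairs, int threshold) {
        List<Integer> out = new ArrayList<>();
        for (int[] p : pairs) {
            if (p[0] >= threshold) break;  // sorted, so we can stop early
            out.add(p[1]);
        }
        return out;
    }
}
```

On the example matrix, positionsBelow with threshold 3 yields 4 1 6 8 2, matching the table above.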