Android path-seeking algorithm - Java

Context
I am trying to develop a simple 2D game where some "zombies" are going to chase me.
My idea to calculate the paths was the following (X = path not usable):
[4] [4] [X] [1] [1] [2] [3] [4] [5]
[3] [X] [X] [0] [1] [X] [X] [X] [5]
[3] [2] [1] [1] [1] [X] [3] [4] [5]
[3] [2] [2] [2] [2] [2] [3] [4] [5]
Starting from 0, give the positions around it the value 1; to those next to a 1, give the value 2, and so on. This way I just need to search for a lower value to know the quickest way to reach 0.
Questions
(1) I don't know if this algorithm has a name, so I could not really find information about it.
(2) What is the most efficient solution/algorithm/flow to calculate this?
(3) On my phone, the game screen has a 1700 x 1440 resolution, and my code takes 15 seconds. I created a final constant to scale everything down and reduce the matrix size; however, it still takes far too long, literally unplayable.
(4) Is there anything else I should try? Maybe adding threads? I don't know if that would work, though...
My code (debug code deleted)
Code
private void expandAllFrom(int x, int y) { // x and y already scaled down
    nodes = new ArrayList<Point>();        // "nodes" is a global variable
    nodes.add(new Point(x, y));
    while (nodes.size() > 0) {
        Point p = nodes.remove(0);
        expand(p.x, p.y);
    }
}

private void expand(int x, int y) {
    int limXMin = x - 1, limXMax = x + 1, limYMin = y - 1, limYMax = y + 1;
    int value = map[x][y];
    // Clamp to the limits of the screen
    if (limXMin < 0) limXMin = 0;
    if (limXMax > SCREEN_X_DIV - 1) limXMax = SCREEN_X_DIV - 1;
    if (limYMin < 0) limYMin = 0;
    if (limYMax > SCREEN_Y_DIV - 1) limYMax = SCREEN_Y_DIV - 1;
    for (int i = limXMin; i <= limXMax; i++) {
        for (int j = limYMin; j <= limYMax; j++) {
            if (map[i][j] == 0 && (i != x || j != y)) {
                nodes.add(new Point(i, j));
                map[i][j] = value + 1;
            }
        }
    }
}
Explanation
I use a FIFO list. I add the nodes there; the flow is roughly:
(1) Add the 0 position to the expand-node list.
(2) Expand 0 by setting the value 1 around it, then add those cells to the expand-node list.
(3) Expand each 1 by setting the value 2 around it, then add those cells to the expand-node list.
(...)
(X) Expand each 2 by setting the value 3 around it, then add those cells to the expand-node list.
(Y) Expand each 3 by setting the value 4 around it, then add those cells to the expand-node list.
(...)

This is just breadth-first search (BFS), used here to find single-source shortest paths. The numbers you want to calculate correspond exactly to the BFS level of each grid cell. The nice thing is that with a proper implementation of BFS, you don't need the numbers at all: just start the BFS at the player's location, store a parent pointer per cell, and let each zombie step to the parent of the cell it is currently in.
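A minimal sketch of that parent-pointer idea (all names and the 0 = walkable / 1 = wall encoding are illustrative, not from the question; 4-neighbour moves are used for brevity):

```java
import java.util.ArrayDeque;
import java.util.Arrays;

public class BfsParents {
    // walls[y][x]: 0 = walkable, 1 = blocked ("X").
    // Returns parent[y][x]: the neighbouring cell to step to on the way back
    // to the start, packed as py * width + px; -1 = unreached, and the
    // start cell points to itself.
    public static int[][] bfs(int[][] walls, int startX, int startY) {
        int h = walls.length, w = walls[0].length;
        int[][] parent = new int[h][w];
        for (int[] row : parent) Arrays.fill(row, -1);
        ArrayDeque<int[]> queue = new ArrayDeque<>(); // O(1) removal, unlike ArrayList.remove(0)
        parent[startY][startX] = startY * w + startX;
        queue.add(new int[]{startX, startY});
        int[] dx = {1, -1, 0, 0}, dy = {0, 0, 1, -1};
        while (!queue.isEmpty()) {
            int[] p = queue.poll();
            for (int d = 0; d < 4; d++) {
                int nx = p[0] + dx[d], ny = p[1] + dy[d];
                if (nx >= 0 && nx < w && ny >= 0 && ny < h
                        && walls[ny][nx] == 0 && parent[ny][nx] == -1) {
                    parent[ny][nx] = p[1] * w + p[0]; // points back towards the player
                    queue.add(new int[]{nx, ny});
                }
            }
        }
        return parent;
    }
}
```

A zombie at (x, y) simply unpacks parent[y][x] to get its next cell; no distance values need to be stored at all.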

As mentioned, what you are doing is called breadth-first search, which is a special case of Dijkstra's algorithm. Well done for finding it out for yourself!
The problem is, that the time complexity of BFS is O(V+E), where V is the number of nodes, E is the number of edges. In your case, it will be in the order of the size of the map, depending on the sparsity of the map (that is, how many X-es there are). That is in any case in the order of millions for a map of size 1700x1440.
If the number of zombies is not too big, it would be much faster to calculate the shortest path for each zombie one by one (you could still share and re-use the expanded nodes between the zombies), using variations of BFS with heuristics. For example, jump point search is optimized for uniform cost mazes (jump point search is a special case of the A-star algorithm).
The idea there is to take a start point (a zombie's position) and an end point (the player's position), and to calculate the shortest path between them by expanding the nodes that are closer to the destination first. Closer here means that the approximated distance to the end point is smaller. The distance to the already-reached nodes is known, and A* will choose the node where the sum of the distance from the start to the node, plus the approximated distance from the node to the end, is smallest. Since you allow diagonal moves, the distance approximation cannot be the Manhattan distance, nor the Euclidean distance, as the approximation has to be a lower bound on the real distance. You could take e.g. max(│x-x'│, │y-y'│). Jump point search improves this further by taking advantage of the maze structure of the map to exclude further nodes.
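As a small illustration of that heuristic (the class and method names are mine):

```java
public class Heuristic {
    // Chebyshev distance: with 8-directional movement where a diagonal step
    // costs the same as a straight one, this never overestimates the true
    // path length, so it is a valid (admissible) A* heuristic.
    public static int chebyshev(int x, int y, int goalX, int goalY) {
        return Math.max(Math.abs(x - goalX), Math.abs(y - goalY));
    }
}
```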
This site animates several such algorithms so you can get a feel how these work.
The nice thing about this approach is that you would not search through the whole map, only a small fraction of it that lies between the zombie and the player. This can already be orders of magnitude faster than any full-scale BFS: if you watch visualizations of these searches, only a small set of marked nodes is ever explored.
Another advantage is, that you could compromise between running time and the 'cleverness' of the zombies. All you have to do is not run such an algorithm all the way to the end point. You can stop after a pre-defined number of steps and get just an approximation for the beginning of the path (by looking at the shortest path between the start point and the most promising node to expand next). So depending on how much time you have for the calculation, you could have optimal or less optimal zombies.

Related

How priority queue is used with heap to solve min distance

Please bear with me I am very new to data structures.
I am getting confused about how a priority queue is used to find the minimum distance. For example, if I have a matrix and want to find the minimum distance from the source to the destination, I know that I would perform Dijkstra's algorithm, in which, with a queue, I can easily find the distance between the source and all elements in the matrix.
However, I am confused how a heap + priority queue is used here. For example say that I start at (1,1) on a grid and want to find the min distance to (3,3) I know how to implement the algorithm in the sense of finding the neighbours and checking the distances and marking as visited. But I have read about priority queues and min heaps and want to implement that.
Right now, my only understanding is that a priority queue uses a key to position elements. My issue is that when I insert the first neighbours (1,0), (0,0), (2,1), (1,2), they are inserted into the PQ based on a key (which would be distance in this case). So the next search would be the element in the matrix with the shortest distance. But with the PQ, how can a heap be used here with more than 2 neighbours? For example, the children of (1,1) are the 4 neighbours stated above. This seems to go against the 2*i, 2*i + 1 and i/2 indexing.
In conclusion, I don't understand how a min-heap + priority queue works for finding the minimum of something like distance.
      0   1   2   3
     _______________
0 - | 2 | 1 | 3 | 2 |
1 - | 1 | 3 | 5 | 1 |
2 - | 5 | 2 | 1 | 4 |
3 - | 2 | 4 | 2 | 1 |
You need a priority queue to extract the minimum-weight move at every step, so a MinPQ is the right fit here.
A MinPQ internally uses the heap technique to keep elements in the right position, via operations such as sink() and swim().
So the MinPQ is the data structure, and the heap is the technique it uses internally.
If I'm interpreting your question correctly, you're getting stuck at this point:
But with the PQ, how can a heap be used here with more than 2 neighbours? For example, the children of (1,1) are the 4 neighbours stated above. This seems to go against the 2*i, 2*i + 1 and i/2 indexing.
It sounds like what's tripping you up is that there are two separate concepts here that you may be combining. First, there's the notion of "two places in a grid might be next to one another." In that world, you have (up to) four neighbors for each location. Next, there's the shape of the binary heap, in which each node has two children whose locations are given by certain arithmetic computations on array indices. Those are completely independent of one another: the binary heap has no idea that the items it's storing come from a grid, and the grid has no idea that there's an array where each node has two children stored at particular positions.
For example, suppose you want to store locations (0, 0), (2, 0), (-2, 0) and (0, 2) in a binary heap, and that the weights of those locations are 1, 2, 3, and 4, respectively. Then the shape of the binary heap might look like this:
         (0, 0)
        Weight 1
       /        \
  (2, 0)        (0, 2)
 Weight 2      Weight 4
    /
(-2, 0)
Weight 3
This tree still gives each node two children; those children just don't necessarily map back to the relative positions of nodes in the grid.
More generally, treat the priority queue as a black box. Imagine that it's just a magic device that says "you can give me some new thing to store" and "I can give you the cheapest thing you've given me so far," and that's it. The fact that, internally, it happens to be implemented as a binary heap is essentially irrelevant.
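To make the black-box view concrete, here is a hedged sketch of Dijkstra on the weighted grid above using java.util.PriorityQueue (the class and method names are illustrative). Note that the heap only orders entries by distance; the grid's 4-neighbour adjacency is handled separately when a cell is expanded:

```java
import java.util.Arrays;
import java.util.PriorityQueue;

public class GridDijkstra {
    // Minimum total weight of a path from (sr, sc) to (tr, tc),
    // counting the weights of both the start and the end cell.
    public static int minDistance(int[][] w, int sr, int sc, int tr, int tc) {
        int rows = w.length, cols = w[0].length;
        int[][] dist = new int[rows][cols];
        for (int[] row : dist) Arrays.fill(row, Integer.MAX_VALUE);
        // heap entries are {distance, row, col}, ordered by distance only
        PriorityQueue<int[]> pq =
            new PriorityQueue<>((a, b) -> Integer.compare(a[0], b[0]));
        dist[sr][sc] = w[sr][sc];
        pq.add(new int[]{dist[sr][sc], sr, sc});
        int[] dr = {1, -1, 0, 0}, dc = {0, 0, 1, -1};
        while (!pq.isEmpty()) {
            int[] e = pq.poll();
            if (e[0] > dist[e[1]][e[2]]) continue; // stale entry, skip
            for (int d = 0; d < 4; d++) {          // the 4 grid neighbours
                int nr = e[1] + dr[d], nc = e[2] + dc[d];
                if (nr >= 0 && nr < rows && nc >= 0 && nc < cols
                        && e[0] + w[nr][nc] < dist[nr][nc]) {
                    dist[nr][nc] = e[0] + w[nr][nc];
                    pq.add(new int[]{dist[nr][nc], nr, nc});
                }
            }
        }
        return dist[tr][tc];
    }
}
```

Nowhere does the code care how the heap stores its entries; it only ever asks for the cheapest one.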
Hope this helps!

Algorithm to slice into square matrix a matrix

I'm searching for an algorithm that takes a matrix (in fact, a two-dimensional array) and returns an array of matrices such that each:
is square (WIDTH = HEIGHT)
contains elements that all have the same value.
I don't know if that is clear, so imagine that you have an image made of pixels that are red, blue or green, and I want to get an array containing the fewest possible squares, like the pictures show.
EDIT:
Ok, maybe it's not clear: I have a grid of elements that can have values like this:
0011121
0111122
2211122
0010221
0012221
That was my input, and I want as output something like this:
|0|0|111|2|1|
|0|1|111|22|
|2|2|111|22|
|00|1|0|22|1|
|00|1|2|22|1|
Where each |X| is an array that is a piece of the input array.
My goal is to minimize the number of output arrays.
This problem does not seem to have an efficient solution.
Consider a subset of instances of your problem defined as follows:
There are only 2 values of matrix elements, say 0 and 1.
Consider only matrix elements with value 0.
Identify each matrix element m_ij with a unit square in a rectangular 2D grid whose lower left corner has the coordinates (i, n-j).
The set of unit squares SU chosen this way must be 'connected' and must not have 'holes'; formally, for each pair of units squares (m_ij, m_kl) \in SU^2: (i, j) != (k, l) there is a sequence <m_ij = m_i(0)j(0), m_i(1)j(1), ..., m_i(q)j(q) = m_kl> of q+1 unit squares such that (|i(r)-i(r+1)| = 1 _and_ j(r)=j(r+1)) _or_ (i(r)=i(r+1) _and_ |j(r)-j(r+1)| = 1 ); r=0...q (unit squares adjacent in the sequence share one side), and the set SUALL of all unit squares with lower left corner coordinates from the integers minus SU is also 'connected'.
Slicing matrices that admit this construction into a minimal number of square submatrices is equivalent to tiling the smallest orthogonal polygon enclosing SU ( which is the union of all elements of SU ) into the minimum number of squares.
This SE.CS post gives the references (and one proof) that show that this problem is NP-complete for integer side lengths of the squares of the tiling set.
Note that according to the same post, a tiling into rectangles runs in polynomial time.
Some hints may be useful.
For representing the reduced matrix, a vector may be better, because what needs to be stored is (start_x, start_y, value, ...); I am not sure another matrix is very useful.
Step 1: loop on x for n occurrences (start with y = 0).
Step 2: loop on y for/until n occurrences. In most cases m will be less than n here. (The case m greater than n is excluded, since that cannot form a square.) Fine, just keep the minimum value [m].
Step 3: record (start_x, start_y, value) in the vector.
Repeat steps 1-3 from x = m until the end of x.
Step 4: at the end of x, adjust y, starting from the left-most x found (m in the vector; re-iterate the vector).
...
Keep going until the end of the matrix.
You need to be very careful about how the boundaries (squares) are formed, in order for the result to fully cover the initial matrix.
Reformulated: the full initial matrix must be recomposable exactly from the result vector (you need to find the gaps and place them in the vector derived from step 4).
Note! This is not a full solution; it may just be how to start, so you can figure out at each step what needs to be adjusted.
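For what it's worth, a naive greedy pass can be sketched as follows. It will not, in general, produce the minimal number of squares (as the other answer notes, the exact problem is hard), but it always produces a valid slicing. All names here are illustrative:

```java
import java.util.ArrayList;
import java.util.List;

public class SquareSlicer {
    // Greedy sketch: scan row by row; at each uncovered cell, grow the
    // largest square of equal values that fits, record it, mark it covered.
    // Each result entry is {row, col, size}.
    public static List<int[]> slice(int[][] grid) {
        int h = grid.length, w = grid[0].length;
        boolean[][] covered = new boolean[h][w];
        List<int[]> squares = new ArrayList<>();
        for (int r = 0; r < h; r++)
            for (int c = 0; c < w; c++) {
                if (covered[r][c]) continue;
                int size = 1;
                while (fits(grid, covered, r, c, size + 1)) size++;
                for (int i = r; i < r + size; i++)
                    for (int j = c; j < c + size; j++) covered[i][j] = true;
                squares.add(new int[]{r, c, size});
            }
        return squares;
    }

    // true if an s-by-s square of the value at (r, c) fits there,
    // entirely inside the grid and over uncovered cells only
    private static boolean fits(int[][] g, boolean[][] cov, int r, int c, int s) {
        if (r + s > g.length || c + s > g[0].length) return false;
        for (int i = r; i < r + s; i++)
            for (int j = c; j < c + s; j++)
                if (cov[i][j] || g[i][j] != g[r][c]) return false;
        return true;
    }
}
```

The design choice is deliberate: a left-to-right, top-to-bottom greedy keeps the covering property trivially true, at the cost of sometimes cutting squares that a smarter tiling would merge.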

Clustering of a set of 3D points

I have a 2D array of size n representing n points in 3D space, position[][] for XYZ (e.g. position[0][0] is X, position[0][1] is Y, and position[0][2] is the Z coordinate of point 0).
What I need to do is cluster the points, so as to have n/k clusters of size k, where each cluster consists of k points that are closest neighbours in 3D space. For instance, if n=100 and k=5, I want to have 20 clusters of 5 points each, which are the closest neighbours in space.
How can I achieve that? (Pseudo-code is fine; snippets preferably in Java.)
What I was doing so far was a simple sort based on each component. But this does NOT necessarily give me the closest neighbours.
Sort based on X (position[0][0])
Then sort based on Y (position[0][1])
Then sort based on Z (position[0][2])
for (int i = 0; i < position.length; i++) {
    for (int j = i + 1; j < position.length; j++) {
        if (position[i][0] > position[j][0]) {
            // swap the whole rows, not single coordinates
            double[] tmp = position[i];
            position[i] = position[j];
            position[j] = tmp;
        }
    }
}
// and do this for position[i][1] (i.e. Y) and then position[i][2] (i.e. Z)
I believe my question slightly differs from the Nearest neighbor search with kd-trees because neighbors in each iteration should not overlap with others. I guess we might need to use it as a component, but how, that's the question.
At the start you do not have an octree, but a list of points instead, like:
float position[n][3];
So to ease the clustering and octree creation you can use a 3D point density map. It is similar to creating a histogram:
compute the bounding box of your points, O(n)
so process all points and determine the min and max coordinates.
create a density map, O(max(m^3, n))
So divide the used space (bbox) into a 3D voxel grid (use whatever resolution m you want/need) and build a density map like:
int map[m][m][m];
and clear it to zero:
for (int x = 0; x < m; x++)
    for (int y = 0; y < m; y++)
        for (int z = 0; z < m; z++)
            map[x][y][z] = 0;
Then process all points, determine each point's cell position from its x, y, z coordinates, and increment that cell:
for (int i = 0; i < n; i++)
{
    int x = (int)((m - 1) * (position[i][0] - xmin) / (xmax - xmin));
    int y = (int)((m - 1) * (position[i][1] - ymin) / (ymax - ymin));
    int z = (int)((m - 1) * (position[i][2] - zmin) / (zmax - zmin));
    map[x][y][z]++;
    // here you can add point i into the octree leaf representing this cell
}
That will give you a low-res density map. The higher the number in a cell map[x][y][z], the more points are in it, which means a cluster is there, and you can also move those points into that cluster in your octree.
This can be repeated recursively for cells that have enough points. To build your octree, create a 2x2x2 density map and recursively split each cell until its count is less than a threshold or the cell size is too small.
For more info see similar QAs
Finding holes in 2d point sets? for the density map
Effective gif/image color quantization? for the clustering
What you want is not clustering. From what you said, I think you want to divide your N points into N/k groups, with each group having k points, while keeping the points in each group as close as possible in 3D space.
Consider an easy example: to do the same thing in one dimension, just sort the numbers, put the first k points into cluster 1, the second k points into cluster 2, and so on.
Then return to the 3D-space problem; the answer is similar. First find the point with the minimum x, y and z coordinates, and put it together with its closest k-1 points into cluster 1. Then, for the rest of the points, again find the minimum point and its k-1 closest points not yet clustered for cluster 2, and so on.
The above process will get you results, but they may not be meaningful in practice; clustering algorithms such as k-means could help you.
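The greedy process described above might be sketched like this (a rough illustration, not a tuned implementation; all names are mine):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class GreedyGrouping {
    // Repeatedly take the minimal remaining point (by x, then y, then z),
    // pair it with its k-1 nearest unclustered neighbours, and remove them
    // all from the pool. Each point is a double[]{x, y, z}.
    public static List<List<double[]>> group(List<double[]> pts, int k) {
        List<double[]> pool = new ArrayList<>(pts);
        List<List<double[]>> clusters = new ArrayList<>();
        while (pool.size() >= k) {
            double[] seed = pool.stream().min(
                Comparator.<double[]>comparingDouble(p -> p[0])
                          .thenComparingDouble(p -> p[1])
                          .thenComparingDouble(p -> p[2])).get();
            pool.remove(seed);
            // sort the remaining points by squared distance to the seed
            pool.sort(Comparator.comparingDouble((double[] p) -> dSq(p, seed)));
            List<double[]> cluster = new ArrayList<>();
            cluster.add(seed);
            for (int i = 0; i < k - 1; i++) cluster.add(pool.remove(0));
            clusters.add(cluster);
        }
        return clusters;
    }

    static double dSq(double[] a, double[] b) {
        double dx = a[0] - b[0], dy = a[1] - b[1], dz = a[2] - b[2];
        return dx * dx + dy * dy + dz * dz;
    }
}
```

As noted, the groups this produces are not globally optimal; it is only the literal translation of the greedy idea above.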

Given a set of coordinates, find the max distance (using steps)

Given a set of coordinates representing a flight path, the exercise is to find the maximum distance (given a number n of points to pass through). To illustrate the problem, the flight path is represented on a 2D grid (illustration omitted).
Here is what the algorithm should do with a parameter n (integer).
The question is to find an algorithm that can scan through all the points, try all combinations of distances, and return the length of the final path.
We already have a method that can get the distance of two points:
/**
 * @return the distance between the two coordinates
 */
public double distance(Coordinate destination) {}

/**
 * @return the farthest coordinate from start
 */
public Coordinate coordMax() {}

/**
 * @return max distance using n points
 * I would maybe try to go for a recursive solution
 * and already have the 2 corner cases down.
 */
public double statMaxDistance(int n) {
    if (n == 0)
        return coordTable[0].distance(coordTable[coordTable.length - 1]);
    if (n == 1)
        return coordTable[0].distance(coordMax());
    // TODO recursive step
    return statMaxDistance(n - 1); // placeholder until the recursion is worked out
}
The question is :
Is there a way to complete this task without iterating over every point of the whole path, one by one, trying all possible combinations and computing all possible distances, to eventually end up with the farthest one?
It would seem rather sensible to follow such an approach when only 1 or 2 points shift along the whole path, but such an algorithm would become quite expensive when computing the maximum distance given 3 or more reference points.
This can be solved using dynamic programming. Assume that D[i][j] is the maximum distance you can get from the start point to the (i+1)-th intermediate point, where that last point is j. Your solution will be D[n][endPoint].
So how to solve this? The first column D[0][...] is easy to calculate. This will just be the distances of the according point to the start point. The other columns are a bit trickier. You need to check all entries in the previous column where the last point is strictly before the current point. So, to calculate entry D[i][j], you have to calculate:
D[i][j] = max_{k < j} (D[i - 1][k] + distance(k, j))
This means: Iterate all possible k such that k is smaller than j (i.e. the point k is located before the current point j). Calculate the resulting distance as the sum of the distance up to k (this is the D[i - 1][k] part) and the distance from k to j. Put the maximum of these values into D[i][j]. You may also want to keep track of k if you need to reconstruct the path at the end (i.e. you are not just interested in the maximum distance). Note that there may be cells with no valid solutions (e.g. D[1][0] - you cannot get to point 0 as the second (index 1) intermediate point).
Do this for every intermediate point in every column up to D[n - 1][...]. The final step is to do this process once more for D[n][endPoint], which strictly speaking does not need to be located in the D array (because you are interested in only a single value, not an entire column).
Once you have calculated this value, that is your solution. If you want to find the actual path, you have to backtrack using the stored k values for every cell.
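A sketch of that DP, using 1D coordinates to keep the distance function trivial (the recurrence is unchanged for 2D points; class and method names are illustrative):

```java
import java.util.Arrays;

public class MaxPathDP {
    // coord: points along the path (index 0 = start, last index = end);
    // n: number of intermediate points the route must pass through.
    // Implements D[i][j] = max over k < j of D[i-1][k] + distance(k, j),
    // keeping only one DP row at a time.
    public static double maxDistance(double[] coord, int n) {
        int end = coord.length - 1;
        if (n == 0) return dist(coord, 0, end);    // direct start-to-end
        double[] prev = new double[coord.length];  // D[0][j] = distance(0, j)
        for (int j = 1; j < end; j++) prev[j] = dist(coord, 0, j);
        for (int i = 1; i < n; i++) {
            double[] cur = new double[coord.length];
            Arrays.fill(cur, Double.NEGATIVE_INFINITY); // marks invalid cells
            for (int j = i + 1; j < end; j++)  // j needs i valid points before it
                for (int k = i; k < j; k++)
                    cur[j] = Math.max(cur[j], prev[k] + dist(coord, k, j));
            prev = cur;
        }
        double best = Double.NEGATIVE_INFINITY;    // the final D[n][endPoint] step
        for (int k = n; k < end; k++)
            best = Math.max(best, prev[k] + dist(coord, k, end));
        return best;
    }

    static double dist(double[] c, int a, int b) { return Math.abs(c[a] - c[b]); }
}
```

To recover the path itself, you would additionally store the maximizing k for each cell, as the answer notes.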

Extract nearest point index from a linestring to a given point?

Let's consider a linestring as a list of points; I named it trail. I need to detect which trail point is closest to a given point. I have another linestring called interest points, and for each interest point I need to return the index of the closest point in the trail linestring. I want to mention that these interest points are not included in the trail linestring, so I somehow have to evaluate the index in the trail from a given interest point. The resulting interest point will get the value existing in the trail list.
[EDIT]:
I will convert this problem to plain numbers; I find that easier.
Input list: [0.5, 1, 1.5, 2, 2.5, 3, 3.5, 4, 4.5, 5]
Input number: 3.30
I can easily see the condition: list[n] < number < list[n+1]
Then I can check the costs:
cost1 = number - list[n]
cost2 = list[n+1] - number
Then I can get the index: if (cost1 < cost2) return n, else return n+1.
[IMPORTANT]:
Point objects are not comparable as numbers; this drives me into a blind spot.
If you only need to do this once, or speed is not really important, and your trace doesn't tend to loop back on itself (so that detecting the closest two points could differ from detecting the line segment between them), then it's pretty easy.
First, you need a distance squared formula for points (this takes the place of the difference of plain numbers):
def dSq(x0: Double, y0: Double, x1: Double, y1: Double) =
  (x1 - x0)*(x1 - x0) + (y1 - y0)*(y1 - y0)
Note that you have no real advantage in using plain distance over distance squared; the plain distance just costs an extra square root. (But feel free to def d(...) = sqrt(dSq(...)) and use that instead.)
Now you find the distances from your trace to your target point:
val ds = trace.map(p => dSq(target.x, target.y, p.x, p.y))
And you find the distances between pairs of points:
val pairDs = ds.sliding(2).map(xx => xx(0) + xx(1)).toVector // .toVector so it can be traversed more than once
And you find the index of the smallest of these:
val smallest = pairDs.min
val index = pairDs.indexWhere(_ == smallest)
Then the two points in your original list are at index and index+1.
Alternatively, you could find the closest single point and then decide whether you want the next or previous one by comparing the distances to those. (Again, note that all of these are inexact--to really do it right you have to compute the point of closest approach of the point to the line segment between two points, which is a longer formula.)
If you have to do this a lot, then you'll want to have a faster way to ignore big distances that aren't relevant. One common way is to place a grid over your trace, and then make subtraces inside each grid element. Then, when given a point of interest, you first look up its grid location, then do the same searching trick just inside the grid. (You may need to search the 8 adjacent neighbors also depending on how you clip the points inside the grid.)
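The grid idea in the last paragraph might be sketched like this (the cell size and all names are assumptions to tune for your data):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class TrailGrid {
    final double cell;                    // grid cell size, tuned to the typical query radius
    final double[][] trail;               // trail[i] = {x, y}
    final Map<Long, List<Integer>> buckets = new HashMap<>();

    TrailGrid(double[][] trail, double cell) {
        this.trail = trail;
        this.cell = cell;
        for (int i = 0; i < trail.length; i++)
            buckets.computeIfAbsent(key(trail[i][0], trail[i][1]),
                                    k -> new ArrayList<>()).add(i);
    }

    long key(double x, double y) {
        long cx = (long) Math.floor(x / cell), cy = (long) Math.floor(y / cell);
        return (cx << 32) | (cy & 0xffffffffL); // pack both cell coordinates into one key
    }

    // Index of the closest trail point to (x, y), scanning only the 3x3
    // cell neighbourhood; returns -1 if no trail point is that close.
    int closestIndex(double x, double y) {
        long cx = (long) Math.floor(x / cell), cy = (long) Math.floor(y / cell);
        int best = -1;
        double bestD = Double.MAX_VALUE;
        for (long gx = cx - 1; gx <= cx + 1; gx++)
            for (long gy = cy - 1; gy <= cy + 1; gy++) {
                List<Integer> b = buckets.get((gx << 32) | (gy & 0xffffffffL));
                if (b == null) continue;
                for (int i : b) {
                    double dx = trail[i][0] - x, dy = trail[i][1] - y;
                    double d = dx * dx + dy * dy; // squared distance suffices for comparison
                    if (d < bestD) { bestD = d; best = i; }
                }
            }
        return best;
    }
}
```

Each query then touches only the points in at most 9 cells rather than the whole trail, which is where the speedup comes from.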
