Simple A* algorithm tower defense path trapped - Java

So first of all, I'm in a 100-level CS college class that uses Java. Our assignment is to make a tower defense game and I am having trouble with the pathing. From searching around, A* seems to be the best fit for this. However, my pathing gets stuck when the towers form a U shape around the path. I'll show some beginner pseudocode, since I haven't taken a data structures class yet and my code looks pretty messy (working on that).
Assume that I will not be using diagonals.
while (Castle not reached) {
    new OpenList
    if (up, down, left, right == passable && isn't previous node) {
        // Adds in alternating order to create a more diagonal-like path
        OpenList.add(passable nodes)
    }
    BestPath.add(FindLeastDistanceToEnd(OpenList));
    CheckCastleReached(BestPath[last index]);
}

private Node FindLeastDistanceToEnd(OpenList) {
    return the node with the smallest calculated (X + Y distance to EndPoint)
}
I've stripped A* down (too much, which is most likely my problem). I'm now adding parents to my nodes and calculating the correct parent, though I don't believe that alone will solve my problem. Here's a visual of my issue.
X = impassable (Towers)
O = OpenList
b = ClosedList (BestPath)
C = Castle (EndPoint)
S = Start
OOOOXX
SbbbBX C
OOOOXX
Now the capital B is where my issue is. When the towers are placed in that configuration and my nav path is recalculated, it gets stuck: nothing is put into the OpenList, since the previous node is ignored and the rest are impassable.
Writing it out now, I suppose I could make B impassable and backtrack... lol. But I'm starting to do a lot of what my professor calls "hacking the code", where I keep adding patches to fix issues because I don't want to erase my "baby" and start over. That said, I am open to redoing it; looking at how messy and unorganized some of my code is bothers me. Can't wait to take data structures.
Any advice would be appreciated.

Yes, data structures would help you a lot with this sort of problem. I'll try to explain how A* works and give some better pseudocode afterwards.
A* is a best-first search algorithm. This means that it's supposed to guess which options are best and try to explore those first. This requires you to keep track of a list of options, typically called the "Front" (as in front line). It doesn't keep track of the path found so far, as your current algorithm does. The algorithm works in two phases.
Phase 1
Basically, you start from the starting position S, and all the neighbouring positions (north, west, south and east) go into the Front. The algorithm then finds the most promising of the options in the Front (let's call it P) and expands on that: P is removed from the Front, and all of its neighbours are added instead. Well, not all of its neighbours; only the ones that are actually options to move to. We can't go walking into a tower, and we wouldn't want to go back to a place we've seen before. From the new Front, the most promising option is chosen, and so on. When the most promising option is the goal C, the algorithm stops and enters phase 2.
Normally, the most promising option is the one that is closest to the goal as the crow flies (ignoring obstacles), so the algorithm always explores the position closest to the goal first. This causes it to walk towards the goal in a more or less straight line. However, if that line is blocked by some obstacle, the obstacle's positions are not added to the Front; they are not viable options. So in the next round, some other position in the Front is selected as the best option, and the search continues from there. That is how it gets out of dead ends like the one in your example. Take a look at this illustration to see what I mean: https://upload.wikimedia.org/wikipedia/commons/5/5d/Astar_progress_animation.gif The Front is the hollow blue dots; positions that have already been explored are shaded from red to green, and impassable places are marked with thick blue dots.
In phase 2, we will need some extra information to help us find the shortest path back once we have found the goal. For this, we store in every position the position we came from. If the algorithm works correctly, the position we came from is necessarily closer to S than any other neighbour. Take a look at the pseudocode below if you don't get what I mean.
Phase 2
When the castle C is found, the next step is to find your way back to the start, gathering the best path as you go. In phase 1, we stored, in every position that we explored, the position we came from. We know that this position must always be closer to S (not ignoring obstacles). The task in phase 2 is thus very simple: repeatedly step back to the position we came from, and keep track of these positions in a list. At the end, you'll have a list that forms the shortest path from C to S. Then you simply reverse this list and you have your answer.
I'll give some pseudocode to explain it. There are plenty of real code examples (in Java too) on the internet. This pseudocode assumes you use a 2D array to represent the grid. An alternative would be to have Node objects, which is simpler to read in pseudocode but harder to program; I suspect you'd use a 2D array anyway.
//Phase 1
origins = new array[gridLength][gridWidth]; //Keeps track of 'where we came from'.
front = new Set(); //Empty set. You could use an array for this.
front.add(all neighbours of S);
origins[each of those neighbours] = S; //So phase 2 can walk all the way back to S.
while(true) { //This keeps on looping forever, unless it hits the "break" statement below.
    best = findBestOption(front);
    front.remove(best);
    if(best == C) {
        break; //The castle is reached. Stops the loop and ends phase 1.
    }
    for(neighbour in (best's neighbours)) {
        if(neighbour is not a tower and origins[neighbour x][neighbour y] == null) { //Not a tower, and not a position that we explored before.
            front.add(neighbour);
            origins[neighbour x][neighbour y] = best;
        }
    }
}
//Phase 2
bestPath = new List(); //You should probably use Java's ArrayList class for this if you're allowed to do that. Otherwise select an array size that you know is large enough.
currentPosition = C; //Start at the endpoint.
bestPath.add(C);
while(currentPosition != S) { //Until we're back at the start.
    currentPosition = origins[currentPosition.x][currentPosition.y];
    bestPath.add(currentPosition);
}
bestPath.reverse();
And for the findBestOption method in that pseudocode:
findBestOption(front) {
    bestPosition = null;
    distanceOfBestPosition = Float.MAX_VALUE; //Some very high number to start with.
    for(position in front) {
        distance = Math.sqrt((position.x - C.x) * (position.x - C.x) + (position.y - C.y) * (position.y - C.y)); //Euclidean distance (Pythagoras' theorem). This does the diagonal thing for you.
        if(distance < distanceOfBestPosition) {
            distanceOfBestPosition = distance;
            bestPosition = position;
        }
    }
    return bestPosition;
}
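If it helps to see it as real Java rather than pseudocode, here is a minimal compilable sketch under the same assumptions: a boolean[][] marks which cells are blocked by towers, java.awt.Point stands in for a grid position, and the class and method names are only for illustration. Like the pseudocode, it is a greedy best-first search with an origins table, not full A* with path costs.
import java.awt.Point;
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
import java.util.Deque;
import java.util.List;

public class GreedyPathFinder {

    public static List<Point> findPath(boolean[][] blocked, Point start, Point goal) {
        int w = blocked.length, h = blocked[0].length;
        Point[][] origins = new Point[w][h];          // 'where we came from'
        Deque<Point> front = new ArrayDeque<Point>(); // the Front; a plain collection is enough here
        origins[start.x][start.y] = start;            // mark the start as explored
        front.add(start);

        while (!front.isEmpty()) {
            Point best = findBestOption(front, goal); // most promising option
            front.remove(best);
            if (best.equals(goal)) {                  // phase 1 ends when the castle is reached
                return walkBack(origins, start, goal);
            }
            for (Point n : neighbours(best, w, h)) {
                if (!blocked[n.x][n.y] && origins[n.x][n.y] == null) {
                    origins[n.x][n.y] = best;
                    front.add(n);
                }
            }
        }
        return null; // the Front ran empty: no path exists
    }

    // Phase 2: follow the origins back from the goal to the start, then reverse.
    private static List<Point> walkBack(Point[][] origins, Point start, Point goal) {
        List<Point> path = new ArrayList<Point>();
        Point current = goal;
        while (!current.equals(start)) {
            path.add(current);
            current = origins[current.x][current.y];
        }
        path.add(start);
        Collections.reverse(path);
        return path;
    }

    // Phase 1 helper: the option closest to the goal as the crow flies.
    private static Point findBestOption(Collection<Point> front, Point goal) {
        Point best = null;
        double bestDistance = Double.MAX_VALUE;
        for (Point p : front) {
            double distance = Math.hypot(p.x - goal.x, p.y - goal.y); // straight-line distance
            if (distance < bestDistance) {
                bestDistance = distance;
                best = p;
            }
        }
        return best;
    }

    private static List<Point> neighbours(Point p, int w, int h) {
        List<Point> result = new ArrayList<Point>();
        int[][] deltas = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};
        for (int[] d : deltas) {
            int nx = p.x + d[0], ny = p.y + d[1];
            if (nx >= 0 && nx < w && ny >= 0 && ny < h) {
                result.add(new Point(nx, ny));
            }
        }
        return result;
    }
}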
I hope this helps. Feel free to ask further questions!

Implement the A* algorithm properly. See: http://en.wikipedia.org/wiki/A%2A_search_algorithm
On every iteration, you need to:
sort the open nodes into heuristic order;
pick the best;
check if you have reached the goal, and potentially terminate if so;
mark it as 'closed' now, since it will be fully explored from;
explore all neighbours from it (adding them to the open nodes map or list, if not already closed).
Based on the ASCII diagram you posted, it's not absolutely clear that the board is more than 3 squares tall or that there actually is a path around -- but let's assume there is.
The proper A* algorithm doesn't "get stuck": when the open list is empty, no path exists, and it terminates by returning null (no path).
I suspect you may not be closing the open nodes (this should be done as you start processing them), or may not be processing all open nodes on every iteration.
Using a Map<GridPosition, AStarNode> will help performance when checking, for all those neighbouring positions, whether they are in the open or closed sets/lists.
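Here is a rough, self-contained sketch of those iteration steps for a 4-way grid. It is not your code: java.awt.Point stands in for the GridPosition, a few plain maps and a set stand in for the AStarNode bookkeeping, and every step is assumed to cost 1.
import java.awt.Point;
import java.util.*;

public class AStarSketch {

    /** Queue entry: a position plus its f = g + heuristic value at the time it was enqueued. */
    private static final class Entry {
        final Point pos;
        final double f;
        Entry(Point pos, double f) { this.pos = pos; this.f = f; }
    }

    public static List<Point> findPath(boolean[][] blocked, Point start, Point goal) {
        Map<Point, Double> g = new HashMap<>();        // best known cost from the start
        Map<Point, Point> parent = new HashMap<>();    // where we came from
        Set<Point> closed = new HashSet<>();           // fully explored positions
        PriorityQueue<Entry> open = new PriorityQueue<>(Comparator.comparingDouble((Entry e) -> e.f));

        g.put(start, 0.0);
        open.add(new Entry(start, heuristic(start, goal)));

        while (!open.isEmpty()) {
            Point current = open.poll().pos;           // best open node in heuristic order
            if (!closed.add(current)) continue;        // stale queue entry: already closed
            if (current.equals(goal)) return reconstruct(parent, start, goal);

            for (Point n : neighbours(current, blocked)) {
                if (closed.contains(n)) continue;
                double tentative = g.get(current) + 1; // uniform step cost, no diagonals
                if (tentative < g.getOrDefault(n, Double.MAX_VALUE)) {
                    g.put(n, tentative);
                    parent.put(n, current);
                    open.add(new Entry(n, tentative + heuristic(n, goal)));
                }
            }
        }
        return null;                                   // open list ran empty: no path exists
    }

    private static double heuristic(Point a, Point b) {
        return Math.abs(a.x - b.x) + Math.abs(a.y - b.y);   // Manhattan distance
    }

    private static List<Point> neighbours(Point p, boolean[][] blocked) {
        List<Point> out = new ArrayList<>();
        int[][] steps = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};
        for (int[] s : steps) {
            int nx = p.x + s[0], ny = p.y + s[1];
            if (nx >= 0 && nx < blocked.length && ny >= 0 && ny < blocked[0].length && !blocked[nx][ny]) {
                out.add(new Point(nx, ny));
            }
        }
        return out;
    }

    private static List<Point> reconstruct(Map<Point, Point> parent, Point start, Point goal) {
        LinkedList<Point> path = new LinkedList<>();
        for (Point p = goal; p != null && !p.equals(start); p = parent.get(p)) path.addFirst(p);
        path.addFirst(start);
        return path;
    }
}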

Related

Find the pairings such that the sum of the weights is minimized?

When solving the Chinese postman problem (route inspection problem), how can we find the pairings (between odd vertices) such that the sum of the weights is minimized?
This is the most crucial step in the algorithm that solves the Chinese Postman Problem for a non-Eulerian graph. Though it is easy to work out on paper, I am having difficulty implementing it in Java.
I was thinking about ways to find all possible pairings: if one runs a first loop over all the odd vertices and a nested loop over the other vertices, that only yields one pair at a time; to find all the other pairs you would need another two loops, and so on. This feels rather crude, as one ends up 'looping over loops'. Is there a better way to solve this problem?
I have read about the Edmonds-Johnson algorithm, but I don't understand the motivation behind constructing a bipartite graph. I have also read Chinese Postman Problem: finding best connections between odd-degree nodes, but the author does not explain how to implement a brute-force algorithm.
The question How should I generate the partitions / pairs for the Chinese Postman problem? has also been asked previously on Stack Overflow, but the reply to that post gives a Python implementation. I am not familiar with Python, and I would appreciate it if someone could rewrite the code in Java or, if possible, explain the algorithm.
Thank You.
Economical recursion
These tuples are normally called edges, aren't they?
You need recursion.
0. Create the main stack of edge lists.
1. Take all edges into a current edge list. Empty the found-edge stack.
2. Take the next edge from the current edge list and push it onto the found-edge stack.
3. Create the next edge list from the current edge list. Push the current edge list onto the main stack. Make the next edge list current.
4. Remove from the current edge list the current edge and all edges adjacent to it.
5. If the current edge list is not empty, loop to 2.
6. Record the current state of the found-edge stack - it is the next result set of edges that you need.
7. Pop the found-edge stack into the current edge. Pop the main stack into the current edge list. If the stacks are empty, return. Repeat until the current edge has a next edge after it.
8. Loop to 2.
As a result, you get all possible sets of edges, and you never need to check whether you have already seen the same set in a different order.
It's actually fairly simple when you wrap your head around it. I'm just sharing some code here in the hope it will help the next person!
The function below returns all the valid odd-vertex combinations, which one then needs to check to find the shortest one.
// Recursively builds every way of pairing up the odd vertices.
private static ObjectArrayList<ObjectArrayList<IntArrayList>> getOddVertexCombinations(IntArrayList oddVertices,
                                                                                        ObjectArrayList<IntArrayList> buffer) {
    ObjectArrayList<ObjectArrayList<IntArrayList>> toReturn = new ObjectArrayList<>();
    if (oddVertices.isEmpty()) {
        // Every vertex has been paired: the buffer holds one complete pairing.
        toReturn.add(buffer.clone());
    } else {
        int first = oddVertices.removeInt(0);          // fix the first remaining vertex...
        for (int c = 0; c < oddVertices.size(); c++) {
            int second = oddVertices.removeInt(c);     // ...and try every possible partner for it
            buffer.add(new IntArrayList(new int[]{first, second}));
            toReturn.addAll(getOddVertexCombinations(oddVertices, buffer));
            buffer.pop();                              // undo the choice (backtrack)
            oddVertices.add(c, second);
        }
        oddVertices.add(0, first);
    }
    return toReturn;
}
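For context, a quick usage sketch (assuming the method above is called from the same class and the fastutil collections are on the classpath); the vertex ids 1, 3, 5 and 7 are made up for the example:
IntArrayList odd = new IntArrayList(new int[]{1, 3, 5, 7});
ObjectArrayList<ObjectArrayList<IntArrayList>> pairings =
        getOddVertexCombinations(odd, new ObjectArrayList<IntArrayList>());
// pairings now holds [[1,3],[5,7]], [[1,5],[3,7]] and [[1,7],[3,5]];
// compute the total shortest-path weight of each pairing and keep the cheapest.
System.out.println(pairings.size()); // 3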

Why is this called backtracking?

I have read the Wikipedia article and have also Googled it, but I cannot figure out what "backtracking algorithm" means. I saw this solution from "Cracking the Code Interviews" and wonder why it is a backtracking algorithm.
Backtracking is often implemented as a form of recursion.
The boolean-based algorithm here is faced with a choice, makes that choice, and is then presented with a new set of choices after that initial choice.
Conceptually, you start at the root of a tree; the tree probably has some good leaves and some bad leaves, though it may be that the leaves are all good or all bad. You want to get to a good leaf. At each node, beginning with the root, you choose one of its children to move to, and you keep this up until you get to a leaf (see the example below).
Explanation of Example:
Starting at Root, your options are A and B. You choose A.
At A, your options are C and D. You choose C.
C is bad. Go back to A.
At A, you have already tried C, and it failed. Try D.
D is bad. Go back to A.
At A, you have no options left to try. Go back to Root.
At Root, you have already tried A. Try B.
At B, your options are E and F. Try E.
E is good. Congratulations!
Source: upenn.edu
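If you prefer to see that tree walk as code, here is a tiny Java sketch. The Node type and its good/bad flags are hypothetical; the point is the shape of the recursion: try a child, and if nothing below it works, fall back to the parent and try the next child.
import java.util.Arrays;
import java.util.List;

class TreeBacktracking {

    static class Node {
        final boolean leaf;
        final boolean good;            // only meaningful for leaves
        final List<Node> children;
        Node(boolean leaf, boolean good, List<Node> children) {
            this.leaf = leaf; this.good = good; this.children = children;
        }
    }

    /** Returns true if a good leaf is reachable from this node. */
    static boolean search(Node node) {
        if (node.leaf) {
            return node.good;              // base case: a good leaf (or a dead end)
        }
        for (Node child : node.children) { // try each choice in turn
            if (search(child)) {
                return true;               // some choice below worked: done
            }
            // otherwise: that choice failed, so "backtrack" and try the next child
        }
        return false;                      // no child worked: report failure upwards
    }

    public static void main(String[] args) {
        Node c = new Node(true, false, null), d = new Node(true, false, null);  // bad leaves
        Node e = new Node(true, true, null),  f = new Node(true, true, null);   // good leaves
        Node a = new Node(false, false, Arrays.asList(c, d));
        Node b = new Node(false, false, Arrays.asList(e, f));
        Node root = new Node(false, false, Arrays.asList(a, b));
        System.out.println(search(root)); // true: E is reachable after backing out of A
    }
}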
"Backtracking" is a term that occurs in enumerating algorithms.
You built a "solution" (that is a structure where every variable is assigned a value).
It is however possible that during construction, you realize that the solution is not successful (does not satisfy certain constraints), then you backtrack: you undo certain assignments of values to variables in order to reassign them.
Example:
Based on your example you want to construct a path in a 2D grid. So you start generating paths from (0,0). For instance:
(0,0)
(0,0) (1,0) go right
(0,0) (1,0) (1,1) go up
(0,0) (1,0) (1,1) (0,1) go left
(0,0) (1,0) (1,1) (0,1) (0,0) go down
Oops, visiting a cell a second time, this is not a path anymore
Backtrack: remove the last cell from the path
(0,0) (1,0) (1,1) (0,1)
(0,0) (1,0) (1,1) (0,1) (1,1) go right
Oops, visiting a cell a second time, this is not a path anymore
Backtrack: remove the last cell from the path
....
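Here is a small Java sketch of that grid example. The grid size and target are made up, and instead of stepping onto a repeated cell and then undoing it, this version skips cells already on the path; the backtracking is the removal of the last cell when every move from it fails.
import java.awt.Point;
import java.util.*;

class GridPaths {
    static final int SIZE = 3;

    static boolean findPath(Point current, Point target, List<Point> path, Set<Point> visited) {
        path.add(current);
        visited.add(current);
        if (current.equals(target)) return true;

        int[][] moves = {{1, 0}, {0, 1}, {-1, 0}, {0, -1}}; // right, up, left, down
        for (int[] m : moves) {
            Point next = new Point(current.x + m[0], current.y + m[1]);
            boolean onBoard = next.x >= 0 && next.x < SIZE && next.y >= 0 && next.y < SIZE;
            if (onBoard && !visited.contains(next) && findPath(next, target, path, visited)) {
                return true;
            }
        }
        // Backtrack: undo this step so other branches can try a different route.
        path.remove(path.size() - 1);
        visited.remove(current);
        return false;
    }

    public static void main(String[] args) {
        List<Point> path = new ArrayList<>();
        findPath(new Point(0, 0), new Point(2, 2), path, new HashSet<>());
        System.out.println(path); // one (not necessarily shortest) path from (0,0) to (2,2)
    }
}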
From Wikipedia:
Backtracking is a general algorithm for finding all (or some) solutions to some computational problem, that incrementally builds candidates to the solutions, and abandons each partial candidate c ("backtracks") as soon as it determines that c cannot possibly be completed to a valid solution.
Backtracking is easily implemented as a recursive algorithm. You look for the solution to a problem of size n by looking for solutions of size n - 1, and so on. If the smaller solution doesn't work, you discard it.
That's basically what the code above is doing: it returns true in the base case; otherwise it 'tries' the right path or the left path, discarding the solution that doesn't work.
Since the code above is recursive, it might not be clear where the "backtracking" comes into play, but what the algorithm actually does is build a solution from a partial one, where the smallest possible solution is handled at line 5 in your example. A non-recursive version of the algorithm would have to start from the smallest solution and build up from there.
I cannot figure out what "backtracking algorithm" means.
An algorithm is "back-tracking" when it tries a solution, and on failure, returns to a simpler solution as the basis for new attempts.
In this implementation,
current_path.remove(p)
goes back along the path when the current path does not succeed so that a caller can try a different variant of the path that led to current_path.
Indeed, the word "back" in the term "backtracking" could sometimes be confusing when a backtracking solution "keeps going forward" as in my solution of the classic N queens problem:
/**
 * Given *starting* row, try all columns.
 * Recurse into subsequent rows if can put.
 * When reached last row (stopper), increment count if put successfully.
 *
 * By recursing into all rows (of a given single column), an entire placement is tried.
 * Backtracking is the avoidance of recursion as soon as "cannot put"...
 * (eliminating current col's placement and proceeding to the next col).
 */
int countQueenPlacements(int row) { // queen# is also queen's row (y axis)
    int count = 0;
    for (int col = 1; col <= N; col++) { // try all columns for each row
        if (canPutQueen(col, row)) {
            putQueen(col, row);
            count += (row == N) ? print(board, ++solutionNum) : countQueenPlacements(row + 1);
        }
    }
    return count;
}
Note that my comment defines backtracking as the avoidance of recursion as soon as "cannot put" -- but this is not the full picture. Backtracking in this solution can also mean that once a proper placement is found, the recursion stack unwinds (or backtracks).
Backtracking basically means trying all possible options. It's usually the naive, inefficient solution to a problem.
In your example solution, that's exactly what's going on - you simply try out all possible paths, recursively:
You try each possible direction; if you find a successful path - good. If not - backtrack and try another direction.

How to make my path-finding algorithm not go in reverse?

My path-finding method is given two objects, each containing an id, a name, x/y coordinates, and a path stored in an ArrayList. The path data is the id of each object that it can connect to directly. The objective is to call my method recursively until it finds its goal using the shortest distance; when it reaches the end, it returns true.
The problem:
If the distance to the node you came from is shorter than to the other nodes in the current node's path, it causes an infinite loop, bouncing back and forth between the two nodes. I have struggled with this problem for several hours and could be overthinking it. Any advice or suggestions will be greatly appreciated!
The Algorithm:
while (!pathfound) {
    current = findPath(current, end);
}

public static Place findPath(Place curPlace, Place endPlace) {
    ArrayList<Integer> path = curPlace.path;
    int id;
    double lastdist = 999;
    double distance;
    Place bestPlace = null;
    for (int i = 0; i < path.size(); i++) {
        id = curPlace.path.get(i);
        distance = distance(getPlace(id), curPlace)
                + distance(getPlace(id), endPlace);
        if (distance < lastdist) {
            bestPlace = getPlace(id);
        }
        lastdist = distance;
    }
    if (result.length() == 0) {
        result += bestPlace.name;
    } else {
        result += ", " + bestPlace.name;
    }
    System.out.println("CURCITY: " + bestPlace.id);
    System.out.println(result);
    System.out.println(lastdist);
    if (bestPlace == endPlace) {
        pathfound = true;
    }
    return bestPlace;
}
You can ignore result; it is only for keeping track of the nodes that are passed through. If there are any other details you would like to know, please ask.
If it is acceptable to modify Place you can add a boolean "visited" flag. Reset them all to false prior to running the algorithm; set to true when you visit and false when you leave (don't forget to unset them on the way out of the recursion - if you do this properly you can even avoid having to explicitly reset the flags before starting). Skip nodes where the flag is true.
A more short-sighted option is to pass the last visited Place as a parameter to the function, and skip that one. This will not prevent larger loops but may be entirely appropriate for your situation and is the simplest to implement.
Both of the above are O(1) with minimal overhead. If you cannot modify Place you could store a Set of visited places (remove them from the set on the way out of recursion), and skip places that are already in that set. Depending on your performance requirements, if you use a HashSet you will want to come up with an appropriate hashing function.
Along those lines, at the expense of more memory, if your ID numbers are unique and cover a reasonably sized finite range, a boolean[] indexed by ID number is a constant time alternative to a set here (it is essentially the "visited" flag option with the flags stored externally).
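For concreteness, here is a rough sketch of the first option (a boolean "visited" flag), written as if it lived next to the findPath method in the question. It assumes Place gains a public boolean visited field and reuses the question's getPlace helper; it only shows how the flag stops the back-and-forth loop, not how to pick the shortest route.
public static boolean findPath(Place current, Place end, ArrayList<Place> route) {
    route.add(current);
    if (current == end) {
        return true;                       // goal reached; route now holds the whole path
    }
    current.visited = true;                // don't bounce back into this node
    for (int id : current.path) {
        Place next = getPlace(id);
        if (!next.visited && findPath(next, end, route)) {
            current.visited = false;       // unset on the way out of the recursion
            return true;
        }
    }
    current.visited = false;               // unset on the way out of the recursion
    route.remove(route.size() - 1);        // this branch failed: drop it from the route
    return false;
}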
Using a recursive approach to a path-finding algorithm can be quite tricky, as you always need some kind of global information to evaluate which of two paths is more suitable. While following a single path, you can never be sure whether it is the right one. Even if you always follow the nearest node, it doesn't have to be the right path. This is called a best-first search strategy, and although it is not the best, it can be made to work, but you have to make sure to try other paths as well, because you can't pull it off by simply always sticking to the closest node.
If you want to do a path-finding algorithm, you will need to keep track of all the nodes that you have already explored exhaustively and therefore will never need to visit again. This can be done explicitly, by storing the visited nodes in a structure of some kind, or you can be smarter about it and enforce it through a good policy for selecting the next node to visit.
In other words, if you keep track of the nodes to be visited along with the distances to each node (a priority queue), and you always make sure to visit the nearest not-yet-visited node, you will never revisit the same nodes, without having to enforce it explicitly, as in the A* algorithm or Dijkstra's algorithm.
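To illustrate that last paragraph, here is a rough Dijkstra-style sketch over your Place graph. It assumes the java.util collections are imported and reuses your getPlace and distance helpers; everything else is illustrative.
public static List<Place> shortestPath(Place start, Place end) {
    Map<Place, Double> dist = new HashMap<>();            // best known distance from start
    Map<Place, Place> previous = new HashMap<>();         // where we came from
    Set<Place> done = new HashSet<>();                    // exhaustively explored nodes
    PriorityQueue<AbstractMap.SimpleEntry<Place, Double>> queue =
            new PriorityQueue<>((a, b) -> Double.compare(a.getValue(), b.getValue()));

    dist.put(start, 0.0);
    queue.add(new AbstractMap.SimpleEntry<>(start, 0.0));
    while (!queue.isEmpty()) {
        Place current = queue.poll().getKey();            // nearest not-yet-visited node
        if (!done.add(current)) continue;                 // already settled: skip stale entry
        if (current == end) break;
        for (int id : current.path) {
            Place next = getPlace(id);
            double candidate = dist.get(current) + distance(current, next);
            if (candidate < dist.getOrDefault(next, Double.MAX_VALUE)) {
                dist.put(next, candidate);
                previous.put(next, current);
                queue.add(new AbstractMap.SimpleEntry<>(next, candidate));
            }
        }
    }

    // Walk the 'previous' links back from the end (if end was never reached, this is just [end]).
    List<Place> path = new ArrayList<>();
    for (Place p = end; p != null; p = previous.get(p)) path.add(0, p);
    return path;
}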

Finding Rectangle which contains a Point

In Java SE 7, I'm trying to solve a problem where I have a series of Rectangles. Through some user interaction, I get a Point. What I need to do is find the (first) Rectangle which contains the Point (if any).
Currently, I'm doing this via the very naive solution of just storing the Rectangles in an ArrayList and searching for the containing Rectangle by iterating over the list and using contains(). The problem is that, because this needs to be interactive for the user, this technique starts to be too slow for even a relatively small number of Rectangles (say, 200).
My current code looks something like this:
// Given rects is an ArrayList<Rectangle>, and p is a Point:
for (Rectangle r : rects) {
    if (r.contains(p)) {
        return r;
    }
}
return null;
Is there a more clever way to solve this problem (namely, in O(log n) instead of O(n), and/or with fewer calls to contains() by eliminating obviously bad candidates early)?
Yes, there is. Build two interval trees, which will tell you whether there is a rectangle between x1 and x2 and between y1 and y2. Then, when you have the coordinates of the point, perform O(log n) searches in both trees.
That will tell you whether there are possibly rectangles around the point of interest. You still need to check whether there is a common rectangle returned by the two trees.
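If a full interval tree feels like too much to start with, a simpler (and weaker) variant of the same idea is to sort the rectangles once by their left edge and binary-search so that every rectangle starting to the right of the point is skipped. This is not an interval tree and is still O(n) in the worst case, but it is a small, self-contained step in that direction (note it returns some containing rectangle, not necessarily the first one inserted).
import java.awt.Point;
import java.awt.Rectangle;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

class RectangleIndex {
    private final List<Rectangle> byMinX = new ArrayList<Rectangle>();

    RectangleIndex(List<Rectangle> rects) {
        byMinX.addAll(rects);
        // Sort once by left edge (Java 7 friendly syntax).
        Collections.sort(byMinX, new Comparator<Rectangle>() {
            @Override public int compare(Rectangle a, Rectangle b) { return Integer.compare(a.x, b.x); }
        });
    }

    Rectangle find(Point p) {
        // Binary search for the first rectangle whose left edge is strictly right of p.x;
        // rectangles from that index onwards cannot contain p and are never examined.
        int lo = 0, hi = byMinX.size();
        while (lo < hi) {
            int mid = (lo + hi) >>> 1;
            if (byMinX.get(mid).x <= p.x) lo = mid + 1; else hi = mid;
        }
        for (int i = 0; i < lo; i++) {
            Rectangle r = byMinX.get(i);
            if (r.contains(p)) return r;
        }
        return null;
    }
}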

Collision Detection with MANY objects

I have mainly focused on the graphics aspects of creating a little 2D game. I've watched/looked at several tutorials, but none of them were that pleasing. I already have a player (a square) moving and colliding with other squares on the screen. Gravity etc. is also done.
With only as many objects as are visible on the screen (30*20), everything works perfectly fine. But if I increase it to, say, 300*300, the program starts to run very slowly, since it has to check so many objects.
I really don't get how games like Minecraft can work with ALL THOSE blocks when my program already gives up at 300*300 blocks.
I already tried to ONLY check for collisions when the objects are visible, but that just makes the program check every single object for its visibility, which leads to the same problem.
What am I doing wrong? Help appreciated.
Here is some code showing how I handle the collisions.
player.collision(player, wall);

public void collision(Tile object1, Tile[] object2) {
    collisionCheckUp(object1, object2);
    collisionCheckDown(object1, object2);
    collisionCheckLeft(object1, object2);
    collisionCheckRight(object1, object2);
}

public void collisionCheckDown(Tile object1, Tile[] object2) {
    for (int i = 0; i < Map.tileAmount; i++) {
        if (object2[i] != null && object2[i].visible) {
            if (object1.isCollidingDown(object2[i])) {
                object1.collisionDown = true;
                return;
            }
        }
    }
    object1.collisionDown = false;
}

public void compileHullDown() {
    collisionHull = new Rectangle((int) x + 3, (int) y + 3, width - 6, height);
}

int wallCount = 0;
for (int x = 0; x < Map.WIDTH; x++) {
    for (int y = 0; y < Map.HEIGHT; y++) {
        if (Map.data[x][y] == Map.BLOCKED) {
            wall[wallCount] = new Tile(x * Map.TILE_SIZE, y * Map.TILE_SIZE);
            wallCount++;
        }
    }
}
The usual approach to optimize collision detection is to use a space partition to classify/manage your objects.
The general idea of the approach is that you build a tree representing the space and put your objects into that tree according to their positions. When you calculate the collisions, you traverse the tree. This way, you will have to perform significantly fewer calculations than with the brute-force approach, because you ignore all objects in branches other than the one you're traversing. Minecraft and similar games probably use octrees for collision (and maybe for rendering too).
The most common space-partition structures are BSP trees and kd-trees (a special type of BSP tree). A simpler approach to start with is a uniform space partition - splitting your space into axis-aligned halves.
The best resource on collision that I have discovered is this book. It should clarify all your questions on the topic.
That's if you want to do it right. If you want to do it quickly, you could just sample the color buffer around your character, or only in the movement direction, to determine whether an obstacle is close.
As Kostja mentioned, it will be useful for you to partition your space. However, you will need to use quadtrees instead of octrees, as you are in 2D, not 3D.
Here is a nice article to get you started on quadtrees.
You can cut your overhead by a factor of 4 by calculating collisions once, instead of separately for up/down/left/right, and using the relative positions of the two objects to find out whether you hit a floor, wall, or ceiling. Another good idea is to only pay attention to the objects that are nearby - for example, once every 0.25 seconds, make a list of all objects that are probably close enough to collide with in the next 0.25 seconds.
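Building on that, since your walls already live on a tile grid (Map.data), the cheapest way to "only look at nearby objects" is to convert the player's collision rectangle to tile coordinates and test just the cells it can overlap. Here is a rough sketch that reuses the Map.data, Map.BLOCKED, Map.TILE_SIZE, Map.WIDTH and Map.HEIGHT names from your code; the class and method names are only illustrative.
import java.awt.Rectangle;

public class TileCollision {

    /** Returns true if the given collision hull overlaps any blocked tile. */
    public static boolean collides(Rectangle hull) {
        int firstTileX = Math.max(0, hull.x / Map.TILE_SIZE);
        int firstTileY = Math.max(0, hull.y / Map.TILE_SIZE);
        int lastTileX  = Math.min(Map.WIDTH  - 1, (hull.x + hull.width)  / Map.TILE_SIZE);
        int lastTileY  = Math.min(Map.HEIGHT - 1, (hull.y + hull.height) / Map.TILE_SIZE);

        // Only a handful of tiles are visited here, no matter how big the map is.
        for (int tx = firstTileX; tx <= lastTileX; tx++) {
            for (int ty = firstTileY; ty <= lastTileY; ty++) {
                if (Map.data[tx][ty] == Map.BLOCKED) {
                    return true;
                }
            }
        }
        return false;
    }
}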
