I have a problem that requires me to implement an algorithm that finds a path from one character to another character, with obstacles along the way.
I know there are many well-known pathfinding algorithms (A*, BFS, DFS, Dijkstra, ...). However, after plenty of research and attempts I am still struggling to implement these concepts in my code, and I also don't think I am required to implement such advanced algorithms.
The "shortest" path is not a requirement; all I need is a path that leads my character to the other character while avoiding stepping onto obstacles.
Can anyone give me an idea (maybe an algorithm better than backtracking) or a useful website (with similar examples) for this problem?
Any help would be much appreciated.
I can recommend the A* algorithm.
It's easy to implement. For my A* I used the Wikipedia pseudocode and the GeeksforGeeks code.
I'll post my code as well; it's in C# but very similar to Java:
public List<ANote> findPath(ANote start, ANote end)
{
    if (start == null || end == null || start == end || !start.walkable || !end.walkable)
        return null;

    List<ANote> openSet = new List<ANote>();
    List<ANote> closedSet = new List<ANote>();

    start.parent = null;
    openSet.Add(start);
    start.h = getDistance(start, end);

    while (openSet.Any())
    {
        openSet = openSet.OrderBy(o => o.f).ToList();
        ANote current = openSet[0];

        if (current == end)
            break;

        openSet.Remove(current);
        closedSet.Add(current);

        foreach (ANote neighbor in current.adjacted)
        {
            if (closedSet.Contains(neighbor))
                continue;

            double _gScore = current.g + 1; // For me every distance was 1

            if (openSet.Contains(neighbor) && _gScore >= neighbor.g)
                continue;

            if (!openSet.Contains(neighbor))
                openSet.Add(neighbor);

            neighbor.parent = current;
            neighbor.g = _gScore;
            neighbor.h = getDistance(neighbor, end);
        }
    }
    return reconstructPath(start, end);
}

private List<ANote> reconstructPath(ANote start, ANote end)
{
    List<ANote> backNotes = new List<ANote>();
    ANote current = end;
    while (current.parent != null)
    {
        backNotes.Add(current);
        current = current.parent;
    }
    return backNotes;
}

public class ANote
{
    public ANote parent { get; set; }

    public double f { get { return g + h; } }
    public double g { get; set; }
    public double h { get; set; }

    public int x { get; set; }
    public int y { get; set; }

    public bool walkable { get; set; }

    public List<ANote> adjacted { get; set; } = new List<ANote>();

    public ANote(int x, int y)
    {
        this.x = x;
        this.y = y;
        walkable = true;
    }
}
Important for this code: you must define the adjacted (neighbor) nodes, and which nodes are walkable and which are not, before you search.
I hope my code helps you implement A* in your own project.
Since I have no idea how your "Grid" is described and what "Cell" is, I assume the grid is rectangular and the cells only map the objects on it, not the empty spaces.
I suggest you make a char[][] array = new char[rows][columns];, initialize it with some placeholder value, and iterate over your Cells to fill the 2D array in some meaningful manner, for example G for the goal, # for an obstacle, and so on. Then you start a DFS from the Goal to look for the Start.
You also need to store the resulting path somewhere, so you need an ArrayList<String> list; variable too. Since it's a list, you can add items to it with list.add(item);, which is handy.
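For illustration, here is a tiny self-contained sketch of that setup; the positions are hard-coded here, whereas in your code you would fill the array by iterating over your own Cell objects:
import java.util.Arrays;

public class GridSetup {
    public static void main(String[] args) {
        int rows = 5, columns = 8;
        char[][] grid = new char[rows][columns];
        for (char[] row : grid) Arrays.fill(row, '.');  // '.' = empty, walkable space
        grid[2][1] = 'S';   // Start
        grid[2][5] = 'G';   // Goal
        grid[1][3] = '#';   // an obstacle
        grid[3][3] = '#';   // another obstacle
        for (char[] row : grid) System.out.println(new String(row));
    }
}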
Non-optimal path: DFS
DFS is a really basic recursive algorithm; in your case it will go like this:
bool DFS(list, array, row, column):
    if(array[row][column] == '#' or
       array[row][column] == 'v') return False;  # Obstacle or visited, ignore it
    if(array[row][column] == 'S') return True;   # It's the Start, path found

    array[row][column] = 'v';  # mark as visited, nothing interesting.
    # If you want the shortest path, you're supposed to put here the distance from the goal,
    # which you would pass on and increment as an additional argument of DFS

    # Check for edge cases, or initialize the array to be bigger and place obstacles on the edge #

    if( DFS(list, array, row-1, column) ){  # If DFS found a path to Start from the cell above
        list.add("Move Up");                # Then you add the direction to move to the list
        return True;                        # And then tell the previous DFS that the path was found
    }
    if( ... ){ }  # And then you add the checks for the other three directions in a similar manner
    if( ... ){ }
    if( ... ){ }

    return False;  # You didn't find anything anywhere
}
That is not real code, but it should be enough for you to do your assignment from there.
Chances are it'll find a path like this:
...→→→→↓
...↑.↓←↓
...S.F↑↓
......↑↓
......↑←
But in grids with a lot of obstacles, or with only one correct path, it will produce more reasonable paths. You can also improve it by choosing the order in which you try the directions, so that it always tries to go towards the Goal first, but that's a pain.
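If it helps, here is a minimal runnable Java sketch of that DFS on a char grid. The grid layout, method names and direction strings are my own choices for the example, not something your assignment prescribes; the directions are recorded as the character's moves from Start towards Goal:
import java.util.ArrayList;
import java.util.List;

public class DfsPathDemo {
    // Returns true if the Start 'S' is reachable from (row, column).
    // Moves are recorded as the character's moves from Start towards Goal,
    // built up as the recursion unwinds.
    static boolean dfs(List<String> moves, char[][] grid, int row, int column) {
        if (row < 0 || column < 0 || row >= grid.length || column >= grid[0].length)
            return false;                                    // off the grid
        if (grid[row][column] == '#' || grid[row][column] == 'v')
            return false;                                    // obstacle or already visited
        if (grid[row][column] == 'S')
            return true;                                     // reached the Start
        grid[row][column] = 'v';                             // mark as visited

        // If the path to Start goes through the cell above, the character enters this cell by moving down, etc.
        if (dfs(moves, grid, row - 1, column)) { moves.add("Move Down");  return true; }
        if (dfs(moves, grid, row + 1, column)) { moves.add("Move Up");    return true; }
        if (dfs(moves, grid, row, column - 1)) { moves.add("Move Right"); return true; }
        if (dfs(moves, grid, row, column + 1)) { moves.add("Move Left");  return true; }
        return false;                                        // dead end
    }

    public static void main(String[] args) {
        char[][] grid = {
            "........".toCharArray(),
            "...#....".toCharArray(),
            ".S.#.G..".toCharArray(),
            "...#....".toCharArray(),
            "........".toCharArray()
        };
        List<String> moves = new ArrayList<>();
        // The search starts at the Goal (row 2, column 5) and looks for 'S'.
        if (dfs(moves, grid, 2, 5))
            System.out.println(moves);   // the character's moves, in Start -> Goal order
        else
            System.out.println("No path");
    }
}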
Optimal path: augmented DFS
To find the shortest path people usually refer to A*, but I read up on it and it's not as I remember it, and it's unnecessarily complicated here, so I'll explain an expanded DFS instead. It takes a little longer to find the answer than A* or BFS would, but for reasonably sized grids the difference isn't noticeable.
The idea of the algorithm is to map the entire grid with distances to the Goal and then walk from the Start to the Goal following decreasing distances.
First you will need an int[][] array instead of the char[][] from the previous case. That is because you need to store distances (which char could also do, to some extent) as well as non-distance markers in the grid, such as obstacles.
The idea is that you call DFS(array, row, col, distance), where distance is the distance of the cell that made the call, incremented by 1. The DFS in the next cell then checks whether the distance it was passed is smaller than its currently recorded distance; if it is, a shorter path to that cell has been found and all of its neighbors need to be recalculated too. Otherwise the new path is longer and can be disregarded. By calling DFS recursively you gradually map the entire maze.
After that you call another function, FindPath(list, array, row, col), which checks the cell it started in, adds to the list the direction towards the neighbor with neighbor.distance == (this.distance - 1), and then calls FindPath on that neighbor, until the distance is 0, at which point it's the Goal.
It should look something like this:
main()
{
    # initialize the grid with Integer.MAX_VALUE or just a big enough number
    # for Cell in Cells -> put obstacles on the grid as -1,
    #   find the Start and Goal and record them somewhere
    # DFS_plus(array, Goal, 0);
    # FindPath(list, array, Start);
    # Done
}

void DFS_plus(array, row, column, distance):
    if(array[row][column] <= distance) return;  # There already exists a shorter path to this cell,
                                                # or it's an obstacle; we store obstacles as -1.
                                                # -1 is smaller than any possible path and thus blocks further search
    array[row][column] = distance;  # update the distance.
                                    # If this happened, its neighbors will need to be updated too.

    # Check for edge cases, or initialize the array to be bigger and place obstacles on the edge #

    DFS_plus(array, row-1, column, distance+1);  # You just map everything, no returns expected
    DFS_plus(array, row+1, column, distance+1);  # For all 4 directions
    DFS_plus(array, row, column-1, distance+1);
    DFS_plus(array, row, column+1, distance+1);
}
FindPath(list, array, row, col){
    if(array[row][col] == 0) return;  # It's the Goal

    if(array[row-1][col] == (array[row][col] - 1)){  # Check if the cell above is 1 closer to the Goal
        list.add("MoveUp");                          # Add the direction
        FindPath(list, array, row-1, col);           # Look for the next direction
        return;                                      # You don't need to check other directions as the path is guaranteed
    }
    if(){};  # Check the other directions if Up wasn't the one
    if(){};
    if(){};
}
It is not much more complicated, but it gets you the shortest path. It's not the quickest way to find the shortest path, but it's relatively simple, as recursive algorithms go.
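Here is a compact, runnable Java sketch of the same idea, with my own naming: obstacles are stored as -1, unreached cells as Integer.MAX_VALUE, and findPath simply walks "downhill" along decreasing distances:
import java.util.ArrayList;
import java.util.List;

public class DistanceMapDemo {
    static final int FREE = Integer.MAX_VALUE;  // "not reached yet"
    static final int WALL = -1;                 // obstacle: smaller than any distance, so it blocks the search

    // Flood the grid with distances to the Goal (the cell this is first called on, with distance 0).
    static void dfsPlus(int[][] dist, int row, int col, int distance) {
        if (row < 0 || col < 0 || row >= dist.length || col >= dist[0].length) return;
        if (dist[row][col] <= distance) return;   // a shorter path is already known, or it's a wall (-1)
        dist[row][col] = distance;                // record the improvement, then push it to the neighbors
        dfsPlus(dist, row - 1, col, distance + 1);
        dfsPlus(dist, row + 1, col, distance + 1);
        dfsPlus(dist, row, col - 1, distance + 1);
        dfsPlus(dist, row, col + 1, distance + 1);
    }

    // Walk "downhill" from the Start: each step moves to a neighbor whose distance is exactly one less.
    static void findPath(List<String> moves, int[][] dist, int row, int col) {
        if (dist[row][col] == FREE || dist[row][col] == WALL) return;  // Start unreachable or on a wall
        while (dist[row][col] != 0) {
            if (row > 0 && dist[row - 1][col] == dist[row][col] - 1) { moves.add("Move Up"); row--; }
            else if (row < dist.length - 1 && dist[row + 1][col] == dist[row][col] - 1) { moves.add("Move Down"); row++; }
            else if (col > 0 && dist[row][col - 1] == dist[row][col] - 1) { moves.add("Move Left"); col--; }
            else { moves.add("Move Right"); col++; }  // the only remaining neighbor that can be one closer
        }
    }

    public static void main(String[] args) {
        int[][] dist = new int[5][8];
        for (int[] row : dist) java.util.Arrays.fill(row, FREE);
        dist[1][3] = WALL; dist[2][3] = WALL; dist[3][3] = WALL;   // a small wall
        dfsPlus(dist, 2, 5, 0);                                    // map all distances from the Goal at (2, 5)
        List<String> moves = new ArrayList<>();
        findPath(moves, dist, 2, 1);                               // then walk from the Start at (2, 1) to the Goal
        System.out.println(moves);                                 // 8 moves around the wall
    }
}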
My chess algorithm is based on negamax. The relevant part is:
private double deepEvaluateBoard(Board board, int currentDepth, double alpha, double beta, Move initialMove) {
    if (board.isCheckmate() || board.isDraw() || currentDepth <= 0) {
        this.moveHistorys.put(initialMove, board.getMoveHistory()); // this is not working
        return evaluateBoard(board); // evaluateBoard evaluates from the perspective of the color whose turn it is.
    } else {
        double totalPositionValue = -1e40;
        List<Move> allPossibleMoves = board.getAllPossibleMoves();

        for (Move move : allPossibleMoves) {
            board.makeMove(move);
            totalPositionValue = max(-deepEvaluateBoard(board, currentDepth - 1, -beta, -alpha, initialMove), totalPositionValue);
            board.unMakeMove(1);

            alpha = max(alpha, totalPositionValue);
            if (alpha >= beta) {
                break;
            }
        }
        return totalPositionValue;
    }
}
It would greatly help debugging if I were able to access the move sequence that the negamax algorithm bases its evaluation on (i.e. where in the decision tree the evaluated value was found).
Currently I am trying to save the move history of the board into a HashMap that is a field of the enclosing class. However, it is not working for some reason: the produced move sequences are not optimal.
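To illustrate what I am after, here is a rough, self-contained sketch (with made-up interfaces, not my actual classes) of the kind of "best line" I would like to extract, where the recursion returns the move sequence alongside the score:
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only: these interfaces and names are made up, not my real classes.
interface Move {}
interface Board {
    boolean isCheckmate();
    boolean isDraw();
    List<Move> getAllPossibleMoves();
    void makeMove(Move move);
    void unMakeMove(int count);
}

class SearchResult {
    final double score;
    final List<Move> line;  // the sequence of moves the score is based on
    SearchResult(double score, List<Move> line) { this.score = score; this.line = line; }
}

class NegamaxSketch {
    SearchResult negamax(Board board, int depth, double alpha, double beta) {
        if (depth <= 0 || board.isCheckmate() || board.isDraw()) {
            return new SearchResult(evaluate(board), new ArrayList<>());  // empty line at a leaf
        }
        SearchResult best = new SearchResult(Double.NEGATIVE_INFINITY, new ArrayList<>());
        for (Move move : board.getAllPossibleMoves()) {
            board.makeMove(move);
            SearchResult child = negamax(board, depth - 1, -beta, -alpha);
            board.unMakeMove(1);
            double score = -child.score;                 // negamax: negate the child's score
            if (score > best.score) {
                List<Move> line = new ArrayList<>();
                line.add(move);                          // this move, followed by...
                line.addAll(child.line);                 // ...the child's best continuation
                best = new SearchResult(score, line);
            }
            alpha = Math.max(alpha, score);
            if (alpha >= beta) break;                    // alpha-beta cutoff
        }
        return best;
    }

    double evaluate(Board board) { return 0; }  // placeholder evaluation
}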
Since developing an intuition for negamax is not very easy, I have ended up banging my head against the wall on this one for quite some time now. I would much appreciate it if someone could point me in the right direction!
I am trying to write a MinMax program in Java for Connect Four, but the program should also be applicable to other games. I have encountered a problem which I have not been able to get past for a few days: the values for the nodes are not set properly. I am sharing the piece of my code that is responsible for generating the tree.
Maybe you will notice where I made a mistake.
If anyone could help me with this, I will be very happy.
public Node generateTree(Board board, int depth) {
    Node rootNode = new Node(board);
    generateSubtree(rootNode, depth);
    minMax(rootNode, depth);
    return rootNode;
}

private void generateSubtree(Node subRootNode, int depth) {
    Board board = subRootNode.getBoard();
    if (depth == 0) {
        subRootNode.setValue(board.evaluateBoard());
        return;
    }
    for (Move move : board.generateMoves()) {
        Board tempBoard = board.makeMove(move);
        Node tempNode = new Node(tempBoard);
        subRootNode.addChild(tempNode);
        generateSubtree(tempNode, depth - 1);
    }
}

public void minMax(Node rootNode, int depth) {
    maxMove(rootNode, depth);
}

public int maxMove(Node node, int depth) {
    if (depth == 0) {
        return node.getValue();
    }
    int bestValue = Integer.MIN_VALUE;
    for (Node childNode : node.getChildren()) {
        int tempValue = minMove(childNode, depth - 1);
        childNode.setValue(tempValue);
        if (tempValue > bestValue) {
            bestValue = tempValue;
        }
    }
    return bestValue;
}

public int minMove(Node node, int depth) {
    if (depth == 0) {
        return node.getValue();
    }
    int bestValue = Integer.MAX_VALUE;
    for (Node childNode : node.getChildren()) {
        int tempValue = maxMove(childNode, depth - 1);
        childNode.setValue(tempValue);
        if (tempValue < bestValue) {
            bestValue = tempValue;
        }
    }
    return bestValue;
}
The Board class is the representation of the board state.
The Move class holds the move to perform (an integer [0-8] for tic-tac-toe, [0-6] for Connect Four).
The Node class holds the Move and a value for how good the given move is. It also holds all of its children.
In my code I use this method like this:
Node newNode = minmax.generateTree(board, depth, board.getPlayer());
Move newMove = new TicTacToeMove(board.getPlayer(), newNode.getBestMove().getMove(), depth);
board = board.makeMove(newMove);
And when it is obvious that a given move is a losing move (or a winning one), I do not receive that move.
Alright, you did make a couple of mistakes. About 3-4, depending on how you count ;) Took me a bit of debugging to figure it all out, but I finally got an answer for you :D
Mistake #1: All your parents always get twins (that poor mother)
This is only the case with the code you uploaded, not the code in your question, so maybe we count it as half a mistake?
Since your trees aren't that big yet and it won't destroy your algorithm, this was the least important one anyway. Still, it's something to watch out for.
In your uploaded code, you do this in your generateSubtree method:
Node tempNode = new Node(tempBoard, move, subRootNode);
subRootNode.addChild(tempNode);
As that constructor already adds the child to the subRootNode, the second line always adds it a second time.
Mistake #2: That darn depth
If you haven't reached your desired depth yet, but the game is already decided, you completely ignore that. So in your provided example it won't work if, for example, you look at making move 7 instead of 3 (which would be the 'right' move) and then the opponent plays move 3: you don't count it as -10 points, because you haven't reached your depth yet. The node still won't get any children, so even in your minmax it will never realize this is a losing way to go.
Which is why every move is 'possible' in this scenario and you just get the first one returned.
In the previous moves there was luckily always a way to reach a losing move with your opponent's third move (aka move #5), which is why those were evaluated correctly.
Alright, so how do we fix it?
private void generateSubtree(Node subRootNode, int depth, int player) {
    Board board = subRootNode.getBoard();
    List<Move> moveList = board.generateMoves();
    if (depth == 0 || moveList.isEmpty()) {
        subRootNode.setValue(board.evaluateBoard(player));
        return;
    }
    for (Move move : moveList) {
        Board tempBoard = board.makeMove(move);
        Node tempNode = new Node(tempBoard, move, subRootNode);
        generateSubtree(tempNode, depth - 1, player);
    }
}
Just get the move list beforehand and then check whether it is empty. Your generateMoves() method of the Board class (thank god you provided that, by the way ;)) already checks if the game is over; if it is, no moves are generated, so that's the perfect time to check the score.
Mistake #3: That darn depth again
Didn't we just go over this?
Sadly, your Min Max algorithm itself has the same problem: it will only ever look at your values if you have reached the desired depth. You need to change that.
However, this is a bit more complicated, since you don't have a nice little method that already checks if the game is finished for you.
You could check whether your value was already set, but here's the problem: it might legitimately be set to 0, and you need to take that into account as well (so you can't just do if (node.getValue() != 0)).
I just set the initial value of each node to -1 instead and did a check against -1. It's not... you know... pretty. But it works.
public class Node {
    private Board board;
    private Move move;
    private Node parent;
    private List<Node> children = new ArrayList<Node>();
    private boolean isRootNode = false;
    private int value = -1;
    ...
And this in the maxMove:
public int maxMove(Node node, int depth) {
    if (depth == 0 || node.getValue() != -1) {
        return node.getValue();
    }
    int bestValue = Integer.MIN_VALUE;
    for (Node childNode : node.getChildren()) {
        int tempValue = minMove(childNode, depth - 1);
        childNode.setValue(tempValue);
        if (tempValue > bestValue) {
            bestValue = tempValue;
        }
    }
    return bestValue;
}
It works the same for minMove of course.
Mistake #4: The player is screwing with you
Once I changed all that, it took me a moment with the debugger to realize why it still wouldn't work.
This last mistake was not in the code you provided in the question btw. Shame on you! ;)
Turns out it was this wonderful piece of code in your TicTacToeBoard class:
@Override
public int getPlayer() {
    // TODO Auto-generated method stub
    return 0;
}
And since you called
MinMax minmax = new MinMax();
Node newNode = minmax.generateTree(board, (Integer) spinner.getValue(), board.getPlayer());
in your makeMove method of TicTacToeMainWindow, you would always start out with the wrong player.
As you can probably guess yourself, you just need to change it to:
public int getPlayer() {
    return this.player;
}
And it should do the trick.
Also:
Just a couple of things I'd like to remark at this point:
Clean up your imports! Your TicTacToe actually still imports your ConnectFour classes! And for no reason.
Your board is rotated and mirrored in your board array. WHY? You know how annoying that is to debug? I mean, I guess you probably do :D Also, if you're having problems with your code and need to debug it, it's extremely helpful to override your board's toString() method, because that gives you a very nice and easy way to look at the board in the debugger. You can even use it to rotate the board back, so you don't have to look at it lying on its side ;)
While we're on the subject of the board... this is just me, but... I always tried clicking on the painted surface first and then had to remember: oh yeah, there were buttons :D I mean... why not just put the images on the buttons, or implement a MouseListener so you can actually click on the painted surface?
When providing code and/or example images, please take out your test outputs. I'm talking about the Player 1 won!s of course ;)
Please learn what a complete, verifiable and minimal example is before the next time you ask a question on Stack Overflow. The one in your question wasn't complete or verifiable, and the one you provided on GitHub was... well... not complete (the images were missing), but complete enough. It was also verifiable, but it was NOT minimal. You will get answers a LOT sooner if you follow the guidelines.
I've been using examples of others' implementations of the A* pathfinding algorithm as a crutch to help me write my first implementation. I'm having some trouble with the logic in one of the more readable examples I've found.
I'm not here to pick apart this code, really. I'm trying to figure out whether I am right or whether I am misunderstanding the mechanics here. If I need to review how A* works I will, but if this code is incorrect I need to find other sources to learn from.
It appears to me that the logic found here is flawed in two places, both contained here:
for(Node neighbor : current.getNeighborList()) {
    boolean neighborIsBetter;
    //if we have already searched this Node, don't bother and continue to the next
    if (closedList.contains(neighbor))
        continue;
    //also just continue if the neighbor is an obstacle
    if (!neighbor.isObstacle) {
        // calculate how long the path is if we choose this neighbor as the next step in the path
        float neighborDistanceFromStart = (current.getDistanceFromStart() + map.getDistanceBetween(current, neighbor));
        //add neighbor to the open list if it is not there
        if(!openList.contains(neighbor)) {
-->         openList.add(neighbor);
            neighborIsBetter = true;
        //if neighbor is closer to start it could also be better
-->     } else if(neighborDistanceFromStart < current.getDistanceFromStart()) {
            neighborIsBetter = true;
        } else {
            neighborIsBetter = false;
        }
        // set neighbors parameters if it is better
        if (neighborIsBetter) {
            neighbor.setPreviousNode(current);
            neighbor.setDistanceFromStart(neighborDistanceFromStart);
            neighbor.setHeuristicDistanceFromGoal(heuristic.getEstimatedDistanceToGoal(neighbor.getX(), neighbor.getY(), map.getGoalLocationX(), map.getGoalLocationY()));
        }
    }
}
source
The first line I marked (-->) seems incorrect to me. If you look at the implementation of the list being used (below), it sorts based on heuristicDistanceFromGoal, which is only set several lines below the .add.
public int compareTo(Node otherNode) {
    float thisTotalDistanceFromGoal = heuristicDistanceFromGoal + distanceFromStart;
    float otherTotalDistanceFromGoal = otherNode.getHeuristicDistanceFromGoal() + otherNode.getDistanceFromStart();
    if (thisTotalDistanceFromGoal < otherTotalDistanceFromGoal) {
        return -1;
    } else if (thisTotalDistanceFromGoal > otherTotalDistanceFromGoal) {
        return 1;
    } else {
        return 0;
    }
}
The second line I marked should always evaluate to false. It reads:
} else if(neighborDistanceFromStart < current.getDistanceFromStart()) {
Which can be simplified to:
if((current.getDistanceFromStart() + map.getDistanceBetween(current, neighbor)) < current.getDistanceFromStart())
And again to:
if(map.getDistanceBetween(current, neighbor) < 0)
Which would be fine except getDistanceBetween() should always return a positive value (see here).
Am I on or off track?
First of all, you are mostly on track. I strongly suspect the code you posted is still under development and has some problems. Still, your assumption that distances are always positive is not correct in general: A* is a graph search algorithm, and in general edges can have negative weights as well, so I assume they tried to implement the most general case.
The openList.add seems totally fine to me, though. Your queue should be sorted by the heuristic distance. Just check the wiki page https://en.wikipedia.org/wiki/A_star; the relevant line is f_score[neighbor] := g_score[neighbor] + heuristic_cost_estimate(neighbor, goal). The main idea behind this is that you always underestimate the distance (an admissible heuristic); hence, if the goal is found, none of the not-yet-explored nodes can be optimal. You can read more at http://en.wikipedia.org/wiki/Admissible_heuristic
Second of all, if you want a stable AI code base, you can simply use http://code.google.com/p/aima-java/. It is an implementation of the algorithms in AIMA (Artificial Intelligence: A Modern Approach).
SOLVED: I'm sorry, I was reconstructing the path improperly. I thought closedSet held only the waypoints from start to end, but it holds some other waypoints too. I misunderstood the concept. Now it's working okay!
I'm still having some trouble with A*.
My character finds his path, but sometimes, depending on where I click on the map, the algorithm finds the shortest path, and other times it finds a path that includes many nodes that shouldn't be selected.
I've tried to follow Wikipedia's implementation and A* Pathfinding for Beginners, but they give me the same result. I don't know if it is the heuristic or the algorithm itself, but something's not right.
And this is an example of the problem when clicking two different nodes: http://i.imgur.com/gtgxi.jpg
Here's the Pathfind class:
import java.util.ArrayList;
import java.util.Collections;
import java.util.TreeSet;

public class Pathfind {

    public Pathfind(){
    }

    public ArrayList<Node> findPath(Node start, Node end, ArrayList<Node> nodes){
        ArrayList<Node> openSet = new ArrayList<Node>();
        ArrayList<Node> closedSet = new ArrayList<Node>();
        Node current;

        openSet.add(start);

        while(openSet.size() > 0){
            current = openSet.get(0);
            current.setH_cost(ManhattanDistance(current, end));

            if(start == end) return null;
            else if(closedSet.contains(end)){
                System.out.println("Path found!");
                return closedSet;
            }

            openSet.remove(current);
            closedSet.add(current);

            for(Node n : current.getNeigbours()){
                if(!closedSet.contains(n)){
                    if(!openSet.contains(n) || (n.getG_cost() < (current.getG_cost()+10))){
                        n.setParent(current);
                        n.setG_cost(current.getG_cost()+10);
                        n.setH_cost(ManhattanDistance(n, end));

                        if(!openSet.contains(n))
                            openSet.add(n);

                        Collections.sort(openSet);
                    }
                }
            }
        }
        return null;
    }

    private int ManhattanDistance(Node start, Node end){
        int cost = start.getPenalty();
        int fromX = start.x, fromY = start.y;
        int toX = end.x, toY = end.y;
        return cost * (Math.abs(fromX - toX) + Math.abs(fromY - toY));
    }
}
I believe the bug is with the condition:
if(n.getCost() < current.getCost()){
You shouldn't prevent advancing if the cost (g(node) + h(node)) is decreasing from the current node's. Have a look at this counter-example (S is the source and T is the target):
 _________
|S |x1|x2|
----------
|x3|x4|x5|
----------
|x6|x7|T |
----------
Now, assume you are at S. You haven't moved yet, so g(S) = 0, and under the Manhattan distance heuristic h(S) = 4, so you get f(S) = 4.
Now have a look at x1 and x3: assuming you take one step to each, they will have g(x1) = g(x3) = 1, and both will have h(x1) = h(x3) = 3 under the same heuristic. That results in f(x1) = f(x3) = 4, and your if condition will cause neither of them to be "opened"; thus once you finish iterating on S you will not push anything to the open list, and your search will terminate.
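For comparison, here is a rough sketch of the usual neighbour update in A* (the identifiers are mine, not from your code): a neighbour is only skipped when the new tentative g-cost is not an improvement on the one already recorded for it.
import java.util.List;

// Rough sketch of the usual A* neighbour update (names are illustrative, not from your code).
class SketchNode {
    int x, y, gCost, hCost;
    SketchNode parent;
    int fCost() { return gCost + hCost; }
}

class Relaxation {
    // Called for each neighbour n of the node `current` that was just taken off the open list.
    static void relax(SketchNode current, SketchNode n, int stepCost, int hToGoal,
                      List<SketchNode> openList, List<SketchNode> closedList) {
        if (closedList.contains(n)) return;          // already finished with this node
        int tentativeG = current.gCost + stepCost;   // cost of reaching n through current
        boolean inOpen = openList.contains(n);
        if (inOpen && tentativeG >= n.gCost) return; // the path we already have is at least as good
        n.parent = current;                          // remember the better way in
        n.gCost = tentativeG;
        n.hCost = hToGoal;
        if (!inOpen) openList.add(n);                // (re)insert and let the open list re-sort on f = g + h
    }
}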
As a side note:
I believe the choice of an ArrayList for closedSet is not efficient: each contains() call is O(n) (where n is the number of closed nodes). You should use a Set for better performance. A HashSet is a wise choice, and if you want to maintain insertion order, use a LinkedHashSet. (Note that you will have to override the equals() and hashCode() methods of Node.)
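A minimal sketch of what that could look like, assuming a Node is uniquely identified by its (x, y) coordinates:
import java.util.HashSet;
import java.util.Objects;
import java.util.Set;

// Sketch only: assumes a Node is uniquely identified by its (x, y) coordinates.
class Node {
    final int x, y;
    // ... g/h costs, parent, neighbours as in your class ...

    Node(int x, int y) { this.x = x; this.y = y; }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof Node)) return false;
        Node other = (Node) o;
        return x == other.x && y == other.y;
    }

    @Override
    public int hashCode() {
        return Objects.hash(x, y);
    }
}

class ClosedSetDemo {
    public static void main(String[] args) {
        Set<Node> closedSet = new HashSet<>();   // contains() is now O(1) on average
        closedSet.add(new Node(3, 4));
        System.out.println(closedSet.contains(new Node(3, 4)));  // true, thanks to equals/hashCode
    }
}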
Do your units walk up/down/left/right only, or can they take diagonals as well?
The one requirement for the A* heuristic is that it is admissible: it must never over-estimate the actual path length. If your units can walk diagonally, then Manhattan distance will over-estimate the path length, and A* is then not guaranteed to work.
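For illustration, here is a quick sketch of the two heuristics side by side; with uniform-cost diagonal moves, the Chebyshev distance never over-estimates, so it stays admissible:
public class Heuristics {
    // 4-directional movement: Manhattan distance is admissible.
    static int manhattan(int x1, int y1, int x2, int y2) {
        return Math.abs(x1 - x2) + Math.abs(y1 - y2);
    }

    // 8-directional movement where a diagonal step costs the same as a straight one:
    // Chebyshev distance never over-estimates the path length, so A* stays correct.
    static int chebyshev(int x1, int y1, int x2, int y2) {
        return Math.max(Math.abs(x1 - x2), Math.abs(y1 - y2));
    }
}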
I'm having problems with a pathfinder (it's my first, so that was to be expected): it doesn't always take the shortest way. For example, if I want to go one square down, the path will be: one square left, one down, one right.
public void getSquares(){
    actPath = new String[Map.x][Map.y];
    isDone = new boolean[Map.x][Map.y];
    squareListener = new SquareListener[Map.x][Map.y];
    getSquares2(x,y,0,new String());
}

public void getSquares2(int x, int y, int movesused, String path){
    boolean test1 = false;
    boolean test2 = false;
    test1 = (x < 0 || y < 0 || x > Map.x || y > Map.y);
    if(!test1){
        test2 = Map.landTile[y][x].masterID != 11;
    }
    if(movesused <= 6 && (test1 || test2)){
        addMoveSquare2(x,y, path);
        getSquares2(x+1,y,movesused+1,path+"r");
        getSquares2(x,y+1,movesused+1,path+"d");
        getSquares2(x,y-1,movesused+1,path+"u");
        getSquares2(x-1,y,movesused+1,path+"l");
    }
}

public void addMoveSquare2(int x, int y, String path){
    if(x >= 0 && y>=0 && x < Map.x && y < Map.y && (actPath[x][y] == null || actPath[x][y].length() > path.length())){
        if(squareListener[x][y] == null){
            actPath[x][y] = new String();
            actPath[x][y] = path;
            JLabel square = new JLabel();
            square.setBounds(x*16,y*16,16,16);
            square.setIcon(moveSquare);
            squareListener[x][y] = new SquareListener(x,y,path);
            square.addMouseListener(squareListener[x][y]);
            Map.cases.add(square);
        }
        else{
            squareListener[x][y].path = path;
        }
    }
}
SquareListener is a simple MouseListener which prints the square's location and the path to it.
Map.x and Map.y are the map size.
getSquares2 is called with the start point; it draws every square that is at most 6 moves away and treats every square with the value 11 as an obstacle.
Can you please help me find what I've done wrong?
Here is a screenshot of the result :
http://img808.imageshack.us/img808/96/screen.gif
The red squares are the possible goals. The real one will be defined only when the player clicks on one of the squares (the MouseListener being SquareListener, it's supposed to know the path to take). The houses are the squares with a value of 11, i.e. the obstacles.
Your algorithm looks nearly correct. Nearly, because you forget to assign actPath[x][y] when a second path to a node is found, which makes your length check against actPath[x][y] incorrect. You should do:
else{
    actPath[x][y] = path;
    squareListener[x][y].path = path;
}
Your algorithm also has abominable time complexity, as it will iterate over all paths of length up to 6 (all 4^6 = 4096 of them) instead of just the shortest ones (6*6 + 5*5 = 61).
For inspiration, I recommend looking at Dijkstra's algorithm (the precursor to A*), which manages to visit only the shortest paths and concludes in O(number of reachable nodes) when the path lengths are small integers, as is the case here.
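To illustrate: on a grid where every step costs 1, Dijkstra's algorithm degenerates into a plain breadth-first search. Here is a minimal, self-contained sketch (names and layout are mine, not from your code) that finds all squares within 6 moves and records a shortest path string to each, visiting every square only once:
import java.util.ArrayDeque;
import java.util.Queue;

public class BfsReachability {
    public static void main(String[] args) {
        int rows = 8, cols = 8, maxMoves = 6;
        boolean[][] obstacle = new boolean[rows][cols];
        obstacle[3][3] = true;                          // an example obstacle ("house")
        int[][] dist = new int[rows][cols];
        String[][] path = new String[rows][cols];
        for (int[] row : dist) java.util.Arrays.fill(row, Integer.MAX_VALUE);

        int startR = 4, startC = 4;
        dist[startR][startC] = 0;
        path[startR][startC] = "";
        Queue<int[]> queue = new ArrayDeque<>();
        queue.add(new int[]{startR, startC});

        int[] dr = {-1, 1, 0, 0};
        int[] dc = {0, 0, -1, 1};
        String[] dir = {"u", "d", "l", "r"};

        // Each square is visited exactly once, so the work is O(number of reachable squares).
        while (!queue.isEmpty()) {
            int[] cur = queue.poll();
            int r = cur[0], c = cur[1];
            if (dist[r][c] == maxMoves) continue;       // don't expand beyond 6 moves
            for (int i = 0; i < 4; i++) {
                int nr = r + dr[i], nc = c + dc[i];
                if (nr < 0 || nc < 0 || nr >= rows || nc >= cols) continue;
                if (obstacle[nr][nc] || dist[nr][nc] != Integer.MAX_VALUE) continue;
                dist[nr][nc] = dist[r][c] + 1;          // first visit is always via a shortest path
                path[nr][nc] = path[r][c] + dir[i];
                queue.add(new int[]{nr, nc});
            }
        }
        System.out.println("Path to (2, 6): " + path[2][6]);   // a shortest sequence of moves, e.g. "uurr"
    }
}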
You can take a look here at my answer with example code for A*; it's not a direct answer, but the code is readable and it points you to a good book that deals (among many other things) with pathfinding. I never did get around to commenting the code...
I'm not sure what you mean, in the comment to Daniel, by "Thanks for the link, however, I don't have 1 goal but a number of moves, which makes a lot of possible goals."
You might be interested in this tutorial on the A* search algorithm.