Why is this called backtracking? - java

I have read the Wikipedia article and have also Googled it,
but I cannot figure out what "backtracking algorithm" means.
I saw this solution in "Cracking the Coding Interview"
and wonder why this is a backtracking algorithm.

Backtracking is, in many cases, a form of recursion.
This boolean-based algorithm is faced with a choice, makes that choice, and is then presented with a new set of choices that follow from the initial one.
Conceptually, you start at the root of a tree; the tree probably has some good leaves and some bad leaves, though it may be that the leaves are all good or all bad. You want to get to a good leaf. At each node, beginning with the root, you choose one of its children to move to, and you keep this up until you get to a leaf. (See the image below.)
Explanation of Example:
Starting at Root, your options are A and B. You choose A.
At A, your options are C and D. You choose C.
C is bad. Go back to A.
At A, you have already tried C, and it failed. Try D.
D is bad. Go back to A.
At A, you have no options left to try. Go back to Root.
At Root, you have already tried A. Try B.
At B, your options are E and F. Try E.
E is good. Congratulations!
Source: upenn.edu
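
In Java, the tree search described above can be sketched in a few lines; this is just an illustrative sketch, and Node, isLeaf, isGood and children are made-up names rather than anything from the quoted source:

boolean search(Node node) {
    if (node.isLeaf()) {
        return node.isGood();            // good leaf: success, bad leaf: failure
    }
    for (Node child : node.children()) {
        if (search(child)) {
            return true;                 // a good leaf was found somewhere below this child
        }
        // otherwise we are "back at" this node, and the loop tries the next child
    }
    return false;                        // no child worked: go back up to the parent
}

Returning false is the "go back to Root" step in the walkthrough above: the caller's loop then moves on to its next child.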

"Backtracking" is a term that occurs in enumerating algorithms.
You build a "solution" (that is, a structure where every variable is assigned a value).
It is possible, however, that during construction you realize the solution does not satisfy certain constraints. Then you backtrack: you undo certain assignments of values to variables in order to reassign them.
Example:
Based on your example, you want to construct a path in a 2D grid, so you start generating paths from (0,0). For instance:
(0,0)
(0,0) (1,0) go right
(0,0) (1,0) (1,1) go up
(0,0) (1,0) (1,1) (0,1) go left
(0,0) (1,0) (1,1) (0,1) (0,0) go down
Oops, visiting a cell a second time, this is not a path anymore
Backtrack: remove the last cell from the path
(0,0) (1,0) (1,1) (0,1)
(0,0) (1,0) (1,1) (0,1) (1,1) go right
Oops, visiting a cell a second time, this is not a path anymore
Backtrack: remove the last cell from the path
....
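
That add / recurse / undo pattern can be written as a short Java sketch; the boolean[][] visited grid, the List<int[]> path and the starting call are my own choices for illustration, not part of the original answer:

static final int[][] DIRS = {{1, 0}, {0, 1}, {-1, 0}, {0, -1}};   // right, up, left, down

static void extendPath(boolean[][] visited, java.util.List<int[]> path, int x, int y) {
    visited[x][y] = true;
    path.add(new int[]{x, y});
    // "path" now holds one valid simple path; record or print it here if you need to.
    for (int[] d : DIRS) {
        int nx = x + d[0], ny = y + d[1];
        boolean inGrid = nx >= 0 && ny >= 0 && nx < visited.length && ny < visited[0].length;
        if (inGrid && !visited[nx][ny]) {         // visiting a cell a second time is not a path: skip it
            extendPath(visited, path, nx, ny);
        }
    }
    // Backtrack: remove the last cell so the caller can try the other directions.
    path.remove(path.size() - 1);
    visited[x][y] = false;
}

Calling extendPath(new boolean[n][n], new java.util.ArrayList<>(), 0, 0) enumerates every simple path starting at (0,0).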

From Wikipedia:
Backtracking is a general algorithm for finding all (or some) solutions to some computational problem, that incrementally builds candidates to the solutions, and abandons each partial candidate c ("backtracks") as soon as it determines that c cannot possibly be completed to a valid solution.
Backtracking is easily implemented as a recursive algorithm: you look for a solution to a problem of size n by looking for solutions of size n - 1, and so on. If the smaller solution doesn't work, you discard it.
That's basically what the code above is doing: it returns true in the base case; otherwise it 'tries' the right path or the left path, discarding the solution that doesn't work.
Since the code above is recursive, it might not be clear where the "backtracking" comes into play, but what the algorithm actually does is build a solution from a partial one, where the smallest possible solution is handled at line 5 in your example. A non-recursive version of the algorithm would have to start from the smallest solution and build up from there.

I cannot figure out what "backtracking algorithm" means.
An algorithm is "back-tracking" when it tries a solution, and on failure, returns to a simpler solution as the basis for new attempts.
In this implementation,
current_path.remove(p)
goes back along the path when the current path does not succeed so that a caller can try a different variant of the path that led to current_path.
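
For context, the grid-path solution being discussed typically has roughly the following shape; this is a sketch from memory, not the book's exact code, and Point and isFree are placeholders:

// Sketch of the kind of implementation being discussed; isFree(x, y) is a
// hypothetical check for whether a cell can be entered.
boolean getPath(int x, int y, java.util.List<Point> currentPath) {
    Point p = new Point(x, y);
    currentPath.add(p);
    if (x == 0 && y == 0) {
        return true;                              // reached the origin: this path works
    }
    boolean success = false;
    if (x >= 1 && isFree(x - 1, y)) {             // try going left
        success = getPath(x - 1, y, currentPath);
    }
    if (!success && y >= 1 && isFree(x, y - 1)) { // try going up
        success = getPath(x, y - 1, currentPath);
    }
    if (!success) {
        currentPath.remove(p);                    // backtrack: this point leads nowhere
    }
    return success;
}

The remove(p) call is exactly the "go back along the path" step described above.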

Indeed, the word "back" in the term "backtracking" can sometimes be confusing, since a backtracking solution "keeps going forward", as in my solution to the classic N-queens problem:
/**
 * Given a *starting* row, try all columns.
 * Recurse into subsequent rows if the queen can be put.
 * When the last row (the stopper) is reached, increment the count if the queen was put successfully.
 *
 * By recursing into all rows (for a given single column), an entire placement is tried.
 * Backtracking is the avoidance of recursion as soon as the queen "cannot be put"
 * (eliminating the current column's placement and proceeding to the next column).
 */
int countQueenPlacements(int row) {              // queen# is also the queen's row (y axis)
    int count = 0;
    for (int col = 1; col <= N; col++) {         // try all columns for each row
        if (canPutQueen(col, row)) {
            putQueen(col, row);
            count += (row == N) ? print(board, ++solutionNum) : countQueenPlacements(row + 1);
        }
    }
    return count;
}
Note that my comment defines backtracking as the avoidance of recursion as soon as the queen "cannot be put", but that is not the whole picture. Backtracking in this solution can also mean that once a proper placement is found, the recursion stack unwinds (or backtracks).
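
The helpers used above are not shown in the original; one possible way to write them, assuming the board is stored as an int[] indexed by row with board[row] holding the column of that row's queen (that representation is an assumption on my part):

// Assumed field: int[] board = new int[N + 1];   // board[row] == column of that row's queen, rows 1..N
boolean canPutQueen(int col, int row) {
    for (int r = 1; r < row; r++) {
        int c = board[r];
        if (c == col || Math.abs(c - col) == row - r) {
            return false;                 // same column or same diagonal as an earlier queen
        }
    }
    return true;
}

void putQueen(int col, int row) {
    // Trying the next column in the caller's loop simply overwrites this entry,
    // which is why this representation needs no explicit removeQueen call.
    board[row] = col;
}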

Backtracking basically means trying all possible options. It's usually a naive, inefficient solution to a problem.
In your example solution, that's exactly what's going on: you simply try out all possible paths, recursively.
You try each possible direction; if you find a successful path, good; if not, backtrack and try another direction.

Related

How do I find the optimal path through a grid?

Overview of the problem: You are a truffle collector, and are given a grid of numbers representing plots of land with truffles on them. Each plot has a certain number of truffles on it. You must find the optimal path from the top of the grid to the bottom (the one that collects the most truffles). Importantly, you can start from any cell in the top row. When you are at a cell, you can move diagonally down to the left, directly down, or diagonally down to the right. A truffle field might look like this:
The truffle fields also do not have to be square. They can have any dimensions.
So, I have created an iterative algorithm for this problem. Essentially, what I have done is iterate through each cell in the top row, finding the greedy path emanating from each and choosing the one with the largest truffle yield. To elaborate, the greedy path is one in which at every step, the largest value that can be reached in the next row from the current cell is chosen.
This algorithm yields the correct result for some truffle fields, like the one above, but it fails on fields like this:
This is because when the algorithm hits the 100 in the third column, it will go directly down to the 3 because it is the largest immediate value it can move to, but it does not consider that moving to the 2 to the left of it will enable it to reach another 100. The optimal path through this field obviously involves both cells with a value of 100, but the greedy algorithm I have now will never yield this path.
So, I have a hunch that the correct algorithm for this problem involves recursion, likely recursive backtracking in particular, but I am not sure how to approach creating a recursive algorithm to solve it. I have always struggled with recursion and find it difficult to come up with algorithms using it. I would really appreciate any ideas you all could provide.
Here is the code. My algorithm is being executed in the findPath method: https://github.com/jhould007/Programming-Assignment-3/blob/master/Truffle.java.
You could use recursion, but there's a simple iterative fix to your approach as well.
Instead of the top row, start with the bottom one. Create a 1D array values and initialise it with the values of the bottom row.
Start iterating curr_row from row max_row-1 down to 0. For each iteration, create a temporary array temp and initialise it with 0's.
For the row curr_row in the current iteration, values[i] represents the max value that you can collect if you start from row curr_row+1 (the next row) at column i.
To update temp in each iteration, we just need to pick the best continuation in the next row, which can be fetched from the values array:
for column in range [0, max_column]:
    temp[column] = truffle_value[column] + max(values[column], values[column+1], values[column-1])   // skip the out-of-range neighbour at the edges
// temp holds the values for the next iteration of our loop
values = temp
In the end, the answer will simply be max(values).
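A Java sketch of that bottom-up approach might look like this; the int[][] field parameter and the method name are my own assumptions about how the grid is stored, not taken from the linked code:

static int bestTruffleYield(int[][] field) {
    int rows = field.length, cols = field[0].length;
    int[] values = field[rows - 1].clone();                // start from the bottom row
    for (int row = rows - 2; row >= 0; row--) {
        int[] temp = new int[cols];
        for (int col = 0; col < cols; col++) {
            int best = values[col];                        // straight down
            if (col > 0)        best = Math.max(best, values[col - 1]);  // down-left
            if (col < cols - 1) best = Math.max(best, values[col + 1]);  // down-right
            temp[col] = field[row][col] + best;
        }
        values = temp;                                     // temp becomes the "next row" values
    }
    int answer = values[0];
    for (int v : values) answer = Math.max(answer, v);     // best starting cell in the top row
    return answer;
}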

Understanding Recursion logic

I really need your help to understand recursion properly. I can understand basic recursive functions and their logic, like factorial:
int factorial(int n) {
    if (n <= 1)
        return n;
    else
        return n * factorial(n - 1);
}
That's easy: the function keeps calling factorial until n gets down to 1, and finally multiplies all the results together. But recursion like tree traversal is hard for me to understand.
void inorderTraverse(Node* head) {
    if (head != NULL) {
        inorderTraverse(head->left);
        cout << head->data;
        inorderTraverse(head->right);
    }
}
Here I lose track of the logic: if the first recursive call goes back into the function, how does execution ever reach the cout line, and how does it print the right child's data? I really need your help.
Thank you
A binary search tree in alphabetical order:
  B
 / \
A   C
inorderTraverse(&B) will first descend to A and print it (recursively printing any subtree), then print B, then descend to C and print it (recursively printing any subtree).
So in this case, A B C.
For a more complicated tree:
    D
   / \
  B   E
 / \
A   C
This will be printed as A B C D E. Notice how the original tree is on the left of D, so is printed in its entirety first; the problem is reduced to a smaller instance of the starting problem. This is the essence of recursion. Next D is printed, then everything on the right of D (which is just E).
Note that in this implementation, the nodes don't know about their parent. The recursion means this information is stored on the call stack. You could replace the whole thing with an iterative implementation, but you would need some stack-like data structure to keep track of moving back up through the tree.
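To make that last point concrete, here is an iterative in-order traversal in Java using an explicit stack; the Node class with left, right and data fields mirrors the question's code and is otherwise an assumption:

static void inorderIterative(Node head) {
    java.util.Deque<Node> stack = new java.util.ArrayDeque<>();
    Node current = head;
    while (current != null || !stack.isEmpty()) {
        while (current != null) {               // keep descending to the left
            stack.push(current);
            current = current.left;
        }
        current = stack.pop();                  // "move back up" to the nearest saved ancestor
        System.out.println(current.data);       // visit it
        current = current.right;                // then handle its right subtree
    }
}

The Deque here stores exactly the information that the recursive version keeps on the call stack.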
In-order traversal means you traverse Left-Root-Right. For a single level that is easy: you print in left-root-right order. But as the number of levels grows, you need to make sure your algorithm traverses in the same way, so at every level you print the left subtree first, then the root of that subtree, and then the right subtree.
The recursive call inorderTraverse(head->left) says: while the node is not null, keep going down the left side of the tree. Once it reaches the end, it prints the left node, then prints the root node of that subtree, and then whatever you did on the left subtree you do the same on the right subtree; that's why you write inorderTraverse(head->right). Start debugging by creating a three-level tree. Happy learning.
Try to picture a binary tree and then start traversing it from the root. You always go left. If there are no more lefts, you go right, and after that you go back up. You will finish back at the root (arriving from its right side).
It is similar to walking through a maze: you can decide that you will always go left (always touching the left wall). In the end you will arrive at the exit, or back at the entrance if there is no other exit.
In this code it is important that there are two recursive calls in the body. The first is for the left subtree and the second is for the right subtree. When one call finishes, the function returns you to the node where you started.
Binary search trees have the property that for every node, the left subtree contains values that are smaller than the current node's value, and the right subtree contains values that are larger.
Thus, for a given node to yield the values in its subtree in-order the function needs to:
Handle the values less than the current value;
Handle its value;
Handle the values greater than the current value.
If you think of your function initially as a black box that deals with a subtree, then the recursion magically appears
Apply the function to the left subtree;
Deal with the current value;
Apply the function to the right subtree.
Basically, the trick is to initially think of the function as a shorthand way to invoke an operation, without worrying about how it might be accomplished. When you think of it abstractly like that, and you note that the current problem's solution can be achieved by applying that same functionality to a subset of the current problem, you have a recursion.
Similarly, post-order traversal boils down to:
Deal with all my children (left subtree, then right subtree, or vice-versa if you're feeling contrary);
Now I can deal with myself.
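As a Java sketch (again assuming a Node with left, right and data, as in the question's code):

static void postorderTraverse(Node head) {
    if (head != null) {
        postorderTraverse(head.left);           // deal with the left subtree
        postorderTraverse(head.right);          // then the right subtree
        System.out.println(head.data);          // now deal with the node itself
    }
}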

Find the pairings such that the sum of the weights is minimized?

When solving the Chinese postman problem (route inspection problem), how can we find the pairings (between odd vertices) such that the sum of the weights is minimized?
This is the most crucial step in the algorithm that solves the Chinese Postman Problem for a non-Eulerian graph. Though it is easy to carry out on paper, I am facing difficulty implementing it in Java.
I was thinking about ways to find all possible pairs: if one runs the first loop over all the odd vertices and a nested loop over all the other possible partners, this only gives one pair; to find all the other pairs you would need another two loops, and so on. This is rather strange, as one would be 'looping over loops' in a crude sense. Is there a better way to resolve this problem?
I have read about the Edmonds-Johnson algorithm, but I don't understand the motivation behind constructing a bipartite graph. I have also read Chinese Postman Problem: finding best connections between odd-degree nodes, but the author does not explain how to implement a brute-force algorithm.
The question How should I generate the partitions / pairs for the Chinese Postman problem? has also been asked previously on Stack Overflow, but the reply to that post gives a Python implementation. I am not familiar with Python, and I would ask any community member to rewrite the code in Java, or if possible explain the algorithm.
Thank You.
Economical recursion
These tuples are normally called edges, aren't they?
You need recursion:
0. Create a main stack of edge lists.
1. Put all edges into a current edge list. Empty the found-edge stack.
2. Take the next edge from the current edge list as the current edge and push it onto the found-edge stack.
3. Create the next edge list from the current edge list. Push the current edge list onto the main stack. Make the next edge list current.
4. Remove from the current edge list the current edge and every edge adjacent to it.
5. If the current edge list is not empty, loop to 2.
6. Record the current state of the found-edge stack; it is the next result set of edges that you need.
7. Pop the found-edge stack into the current edge. Pop the main stack into the current edge list. If the stacks are empty, return. Repeat until the current edge has a next edge after it.
8. Loop to 2.
As a result, you get every possible set of edges, and you never need to check "have I already seen this same set in a different order?"
It's actually fairly simple when you wrap your head around it. I'm just sharing some code here in the hope it will help the next person!
The function below returns all the valid odd vertex combinations that one then needs to check for the shortest one.
private static ObjectArrayList<ObjectArrayList<IntArrayList>> getOddVertexCombinations(
        IntArrayList oddVertices, ObjectArrayList<IntArrayList> buffer) {
    ObjectArrayList<ObjectArrayList<IntArrayList>> toReturn = new ObjectArrayList<>();
    if (oddVertices.isEmpty()) {
        toReturn.add(buffer.clone());
    } else {
        int first = oddVertices.removeInt(0);
        for (int c = 0; c < oddVertices.size(); c++) {
            int second = oddVertices.removeInt(c);
            buffer.add(new IntArrayList(new int[]{first, second}));
            toReturn.addAll(getOddVertexCombinations(oddVertices, buffer));
            buffer.pop();
            oddVertices.add(c, second);
        }
        oddVertices.add(0, first);
    }
    return toReturn;
}
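A possible call site, using the same fastutil collections; the odd-vertex IDs here are made up for illustration:

// Hypothetical usage: say vertices 1, 3, 4 and 6 have odd degree.
IntArrayList oddVertices = new IntArrayList(new int[]{1, 3, 4, 6});
ObjectArrayList<ObjectArrayList<IntArrayList>> pairings =
        getOddVertexCombinations(oddVertices, new ObjectArrayList<>());
// pairings now holds {(1,3),(4,6)}, {(1,4),(3,6)} and {(1,6),(3,4)};
// compute each pairing's total shortest-path cost and keep the minimum.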

Simple A star algorithm Tower Defense Path Trapped

So, first of all, I'm in a 100-level CS college class that uses Java. Our assignment is to make a tower defense game, and I am having trouble with the pathing. From searching I found that A* seems to be the best fit for this, but my pathing gets stuck when I put a U shape around the path. I'll show some beginner pseudocode, since I haven't taken a data structures class yet and my code looks pretty messy (working on that).
Assume that I will not be using diagonals.
while (Castle not reached) {
    new OpenList
    if (up, down, left, right are passable && aren't the previous node) {
        // Adds in alternating order to create a more diagonal-like path
        OpenList.add(passable nodes)
    }
    BestPath.add(FindLeastDistanceToEnd(OpenList));
    CheckCastleReached(BestPath[last index]);
}

private node FindLeastDistanceToEnd(node n) {
    return first node with the smallest calculated (X + Y distance to EndPoint)
}
I've stripped A* down (too much, which is most likely my problem). I'm now adding parents to my nodes and calculating the correct parent, though I don't believe this will solve my problem. Here's a visual of my issue:
X = impassable(Towers)
O = OpenList
b = ClosedList(BestPath)
C = Castle(EndPoint)
S = Start
OOOOXX
SbbbBX C
OOOOXX
Now the capital B is where my issue is. When the towers are placed in that configuration and my nav path is recalculated, it gets stuck: nothing is put into the OpenList, since the previous node is ignored and the rest are impassable.
Writing it out now, I suppose I could make B impassable and backtrack... Lol. Though I'm starting to do a lot of what my professor calls "hacking the code", where I keep adding patches to fix issues because I don't want to erase my "baby" and start over. Although I am open to redoing it; looking at how messy and unorganized some of my code is bothers me. Can't wait to take data structures.
Any advice would be appreciated.
Yes, data structures would help you a lot on this sort of problem. I'll try to explain how A* works and give some better Pseudocode afterwards.
A* is a Best-First search algorithm. This means that it's supposed to guess which options are best, and try to explore those first. This requires you to keep track of a list of options, typically called the "Front" (as in front-line). It doesn't keep track of a path found so far, like in your current algorithm. The algorithm works in two phases...
Phase 1
Basically, you start from the starting position S, and all the neighbouring positions (north, west, south and east) will be in the Front. The algorithm then finds the most promising of the options in the Front (let's call it P) and expands on that. The position P is removed from the Front, but all of its neighbours are added instead. Well, not all of its neighbours; only the neighbours that are actual options to go to. We can't go walking into a tower, and we wouldn't want to go back to a place we've seen before. From the new Front, the most promising option is chosen, and so on. When the most promising option is the goal C, the algorithm stops and enters phase 2.
Normally, the most promising option is the one that is closest to the goal as the crow flies (ignoring obstacles), so the algorithm always explores the one that is closest to the goal first. This causes it to walk towards the goal in a more or less straight line. However, if that line is blocked by some obstacle, the positions of the obstacle are not added to the Front; they are not viable options. So in the next round, some other position in the Front is selected as the best option, and the search continues from there. That is how it gets out of dead ends like the one in your example. Take a look at this illustration to see what I mean: https://upload.wikimedia.org/wikipedia/commons/5/5d/Astar_progress_animation.gif The Front is the hollow blue dots; places that have already been explored are marked in shades from red to green, and impassable places with thick blue dots.
In phase 2, we will need some extra information to help us find the shortest path back when we found the goal. For this, we store in every position the position we came from. If the algorithm works, the position we came from necessarily is closer to S than any other neighbour. Take a look at the pseudocode below if you don't get what I mean.
Phase 2
When the castle C is found, the next step is to find your way back to the start, gathering what was the best path. In phase 1, we stored the position we came from in every position that we explored. We know that this position must always be closer to S (not ignoring obstacles). The task in phase 2 is thus very simple: Follow the way back to the position we came from, every time, and keep track of these positions in a list. At the end, you'll have a list that forms the shortest path from C to S. Then you simply need to reverse this list and you have your answer.
I'll give some pseudocode to explain it. There are plenty of real code examples (in Java too) on the internet. This pseudocode assumes you use a 2D array to represent the grid. An alternative would be to have Node objects, which is simpler to understand in Pseudocode but harder to program and I suspect you'd use a 2D array anyway.
//Phase 1
origins = new array[gridLength][gridWidth]; //Keeps track of 'where we came from'.
origins[S x][S y] = S;                      //So S itself is never re-added to the front.
front = new Set();                          //Empty set. You could use an array for this.
for(neighbour in (S's neighbours)) {
    front.add(neighbour);
    origins[neighbour x][neighbour y] = S;
}
while(true) {                               //Loops until it hits the "break" statement below.
    best = findBestOption(front);
    front.remove(best);
    if(best == C) {
        break;                              //Castle found. Ends phase 1.
    }
    for(neighbour in (best's neighbours)) {
        if(neighbour is not a tower and origins[neighbour x][neighbour y] == null) { //Not a tower, and not a position that we explored before.
            front.add(neighbour);
            origins[neighbour x][neighbour y] = best;
        }
    }
}

//Phase 2
bestPath = new List(); //You should probably use Java's ArrayList class for this if you're allowed to do that. Otherwise select an array size that you know is large enough.
currentPosition = C;   //Start at the endpoint.
bestPath.add(C);
while(currentPosition != S) { //Until we're back at the start.
    currentPosition = origins[currentPosition.x][currentPosition.y];
    bestPath.add(currentPosition);
}
bestPath.reverse();
And for the findBestOption method in that pseudocode:
findBestOption(front) {
    bestPosition = null;
    distanceOfBestPosition = Float.MAX_VALUE; //Some very high number to start with.
    for(position in front) {
        //Straight-line (Euclidean) distance to the castle, by Pythagoras' theorem.
        distance = Math.sqrt((position.x - C.x) * (position.x - C.x)
                           + (position.y - C.y) * (position.y - C.y));
        if(distance < distanceOfBestPosition) {
            distanceOfBestPosition = distance;
            bestPosition = position;
        }
    }
    return bestPosition;
}
I hope this helps. Feel free to ask on!
Implement the A* algorithm properly. See: http://en.wikipedia.org/wiki/A%2A_search_algorithm
On every iteration, you need to:
sort the open nodes into heuristic order;
pick the best;
check whether you have reached the goal, and terminate if so;
mark it as 'closed' now, since it will be fully explored from;
explore all neighbours from it (adding them to the open nodes map or list, if not already closed).
Based on the ASCII diagram you posted, it's not absolutely clear that the height of the board is more than 3 and that there actually is a path around, but let's assume there is.
A proper A* implementation doesn't "get stuck": when the open list is empty, no path exists and it terminates, returning null for "no path".
I suspect you may not be closing the open nodes (this should be done as you start processing them), or may not be processing all open nodes on every iteration.
Using a Map<GridPosition, AStarNode> will help performance when checking, for all those neighbouring positions, whether they are in the open or closed sets/lists.
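To show the shape of that loop, here is a hedged Java sketch; Pos (a small x, y class with equals/hashCode), neighbours() and passable() are placeholders for your own grid types, and the heuristic is plain Manhattan distance since diagonals are ruled out:

import java.util.*;

static List<Pos> findPath(Pos start, Pos goal) {
    Map<Pos, Integer> g = new HashMap<>();        // best known cost from the start
    Map<Pos, Pos> cameFrom = new HashMap<>();     // where each position was reached from
    Set<Pos> closed = new HashSet<>();            // fully explored positions
    Set<Pos> open = new HashSet<>();              // positions waiting to be explored
    g.put(start, 0);
    open.add(start);
    while (!open.isEmpty()) {
        Pos best = null;                          // pick the open node with the smallest f = g + h
        int bestF = Integer.MAX_VALUE;
        for (Pos p : open) {
            int f = g.get(p) + manhattan(p, goal);
            if (f < bestF) { bestF = f; best = p; }
        }
        if (best.equals(goal)) {                  // goal reached: rebuild the path from cameFrom
            List<Pos> path = new ArrayList<>();
            for (Pos p = goal; p != null; p = cameFrom.get(p)) path.add(p);
            Collections.reverse(path);
            return path;
        }
        open.remove(best);
        closed.add(best);                         // close it as you start processing it
        for (Pos n : neighbours(best)) {          // up, down, left, right only
            if (!passable(n) || closed.contains(n)) continue;
            int tentative = g.get(best) + 1;
            if (tentative < g.getOrDefault(n, Integer.MAX_VALUE)) {
                g.put(n, tentative);
                cameFrom.put(n, best);
                open.add(n);
            }
        }
    }
    return null;                                  // open set empty: no path exists
}

static int manhattan(Pos a, Pos b) {              // admissible heuristic when there are no diagonals
    return Math.abs(a.x - b.x) + Math.abs(a.y - b.y);
}

Note how the U-shaped dead end is handled for free: its cells end up closed, the open set still contains cells outside the U, and the search simply continues from those.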

What's the simplest algorithm/solution for a single pair shortest path through a real-weighted undirected graph?

I need to find the shortest path through an undirected graph whose nodes carry real (positive and negative) weights. These weights are like resources, which you can gain or lose by entering the node.
The total cost (resource sum) of the path isn't very important, but it must stay above 0, and the length has to be the shortest possible.
For example consider a graph like so:
A-start node; D-end node
A(+10)--B( 0 )--C(-5 )
      \   |   /
       \  |  /
D(-5 )--E(-5 )--F(+10)
The shortest path would be A-E-F-E-D
Dijkstra's algorithm alone doesn't do the trick, because it can't handle negative values. So, I thought about a few solutions:
The first one uses Dijkstra's algorithm to calculate the length of the shortest path from each node to the exit node, ignoring the weights. This could be used as a sort of heuristic value, like in A*. I'm not sure whether this solution would work, and it is also very costly. I also thought about implementing the Floyd-Warshall algorithm, but I'm not sure how.
Another solution was to calculate the shortest path with Dijkstra's algorithm, ignoring the weights, and if the path's resource sum turns out to be less than zero, go through each node on it to find a neighbouring node that could quickly increase the resource sum and add it to the path (several times if needed). This solution won't work if the node that could sufficiently increase the resource sum lies farther away from the calculated shortest path.
For example:
A- start node; E- end node
A(+10)--B(-5 )--C(+40)
  \
   D(-5 )--E(-5 )
Could you help me solve this problem?
EDIT: If, when calculating the path, you reach a point where the sum of the resources equals zero, that path is not valid, since you can't go on if there's no more petrol.
Edit: I didn't read the question well enough; the problem is more advanced than a regular single-source shortest path problem. I'm leaving this post up for now just to give you another algorithm that you might find useful.
The Bellman-Ford algorithm solves the single-source shortest-path problem, even in the presence of edges with negative weight. However, it does not handle negative cycles (a circular path in the graph whose weight sum is negative). If your graph contains negative cycles, you are probably in trouble, because I believe that that makes the problem NP-complete (because it corresponds to the longest simple path problem).
This doesn't seem like an elegant solution, but given the ability to create cyclic paths I don't see a way around it. I would just solve it iteratively. Using the second example: start with a point at A and give it A's value. Move one 'turn': now I have two points, one at B with a value of 5 and one at D also with a value of 5. Move again: now I have 4 points to track: C: 45, A: 15, A: 15 and E: 0. The one at E might oscillate and become valid later, so we can't toss it out yet. Move and accumulate, and so on. The first time you reach the end node with a positive value, you are done (though there may be additional equivalent paths that arrive on the same turn).
Obviously this is problematic in that the number of points to track rises pretty quickly, and I assume your actual graph is much more complex than the example.
I would do it similarly to what Mikeb suggested: do a breadth-first search over the graph of possible states, i.e. (position, fuel-left)-pairs.
Using your example graph (in the original figure, octagons mark states that have run out of fuel and boxes mark nodes whose children are omitted for space reasons):
Searching this graph breadth-first is guaranteed to give you the shortest route that actually reaches the goal if such a route exists. If it does not, you will have to give up after a while (after x nodes searched, or maybe when you reach a node with a score greater than the absolute value of all negative scores combined), as the graph can contain infinite loops.
You have to make sure not to abort immediately on finding the goal if you want to find the cheapest path (fuel wise) too, because you might find more than one path of the same length, but with different costs.
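A hedged Java sketch of that state search; the adjacency list, the weight array and the fuelCap used to keep the state space finite are all assumptions about how the graph is stored:

// BFS over (node, fuelLeft) states. weight[v] is the resource gained or lost on entering v.
static int shortestValidPathLength(java.util.List<java.util.List<Integer>> adj, int[] weight,
                                   int start, int goal, int fuelCap) {
    java.util.Deque<int[]> queue = new java.util.ArrayDeque<>();   // entries: {node, fuelLeft, pathLength}
    java.util.Set<Long> seen = new java.util.HashSet<>();          // encoded (node, fuelLeft) states
    int startFuel = Math.min(weight[start], fuelCap);
    if (startFuel <= 0) return -1;                                 // cannot even stand on the start node
    queue.add(new int[]{start, startFuel, 0});
    seen.add(start * 1_000_000L + startFuel);
    while (!queue.isEmpty()) {
        int[] s = queue.poll();
        int node = s[0], fuel = s[1], length = s[2];
        if (node == goal) return length;                           // breadth-first: the first hit is the shortest route
        for (int next : adj.get(node)) {
            int nextFuel = Math.min(fuel + weight[next], fuelCap); // the cap keeps the state space finite
            if (nextFuel <= 0) continue;                           // ran out of fuel: dead state
            long key = next * 1_000_000L + nextFuel;
            if (seen.add(key)) {
                queue.add(new int[]{next, nextFuel, length + 1});
            }
        }
    }
    return -1;                                                     // no valid route reaches the goal
}

If you also want the cheapest route among those of the same length, collect all goal states at that BFS depth instead of returning on the first one.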
Try adding the absolute value of the minimum weight (in this case 5) to all weights. That will avoid negative cyclic paths.
Current shortest-path algorithms require calculating the shortest path to every node, because they combine partial solutions at some nodes to help adjust the shortest paths at other nodes. There is no way to do it for only one node.
Good luck
