I'm writing a class in Java to represent a Graph data structure. This is specific to an undirected, unweighted graph, and its purpose is mainly edge testing (is node A connected to node B, either directly or indirectly?).
I need help implementing the indirectEdgeTest method. In the code below, I've only commented this method and I'm returning false so the code will compile as-is.
I have put some time into coming up with an algorithm, but I can't seem to find anything simpler than this, and I fear I'm making it more complicated than it needs to be:
test first for a direct connection
if no direct connection exists from node a to node b:
for every edge i connected to node a:
create a new graph that does not contain edge a -> i
test new graph for indirect connectivity between nodes i and b
Either pseudocode or actual Java code is welcome in your answers. Here's the code I have:
class Graph {
// This is for an undirected, unweighted graph
// This implementation uses an adjacency matrix for speed in edge testing
private boolean[][] edge;
private int numberOfNodes;
public Graph(int numNodes) {
// The indices of the matrix will not be zero-based, for clarity,
// so the size of the array will be increased by 1.
edge = new boolean[numNodes + 1][numNodes + 1];
numberOfNodes = numNodes;
}
public void addEdge(int a, int b) {
if (a <= numberOfNodes && a >= 1) {
if (b <= numberOfNodes && b >= 1) {
edge[a][b] = true;
edge[b][a] = true;
}
}
}
public void removeEdge(int a, int b) {
if (a <= numberOfNodes && a >= 1) {
if (b <= numberOfNodes && b >= 1) {
edge[a][b] = false;
edge[b][a] = false;
}
}
}
public boolean directEdgeTest(int a, int b) {
// if node a and node b are directly connected, return true
boolean result = false;
if (a <= numberOfNodes && a >= 1) {
if (b <= numberOfNodes && b >= 1) {
if (edge[a][b] == true) {
result = true;
}
}
}
return result;
}
public boolean indirectEdgeTest(int a, int b) {
// if there exists a path from node a to node b, return true
// implement indirectEdgeTest algorithm here.
return false;
}
}
Erm, that approach sounds horribly inefficient. What about this one:
void walk(Node origin, Set<Node> visited) {
for (Node n : origin.neighbours) {
if (!visited.contains(n)) {
visited.add(n);
walk(n, visited);
}
}
}
boolean hasPath(Node origin, Node target) {
Set<Node> reachables = new HashSet<Node>();
walk(origin, reachables);
return reachables.contains(target);
}
Also, using an adjacency matrix is of questionable use for graph traversal, since you cannot efficiently iterate over a node's neighbours in a sparse graph.
If that method is frequently used, and the graph changes rarely, you can speed queries up by doing the decomposition into connected regions up front, and storing for each node the region it belongs to. Then, two nodes are connected if they belong to the same region.
Edit: To clarify on how to best represent the graph. For direct edge testing, an adjacency matrix is preferred. For path testing, a decomposition into regions is. The latter is not trivial to keep current as the graph changes, but there may be algorithms for this in the literature. Alternatively, adjacency lists are serviceable for graph traversal and thus path testing, but they remain less efficient than directly recording the decomposition into connected regions. You can also use adjacency sets to combine the more efficient neighbor iteration in sparse graphs with constant-time edge testing.
Keep in mind that you can also store information redundantly, keeping, for each kind of query, a tailored, separate data structure.
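If the graph only ever grows (edges are added but never removed), one simple way to maintain that decomposition is a disjoint-set (union-find) structure. The following is only a rough sketch of that idea, not part of the code above, and the class and method names are mine:

class ConnectedRegions {
    private final int[] parent;

    public ConnectedRegions(int numNodes) {
        parent = new int[numNodes + 1];        // 1-based, like the Graph above
        for (int i = 1; i <= numNodes; i++) {
            parent[i] = i;                     // every node starts in its own region
        }
    }

    private int find(int x) {
        while (parent[x] != x) {
            parent[x] = parent[parent[x]];     // path halving keeps the trees shallow
            x = parent[x];
        }
        return x;
    }

    public void addEdge(int a, int b) {
        parent[find(a)] = find(b);             // merge the two regions
    }

    public boolean hasPath(int a, int b) {
        return find(a) == find(b);             // same region means connected
    }
}

Edge removals are the hard part, as noted above: a removal can split a region, so this structure would have to be rebuilt.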
Your solution will work, but a better solution would be to construct a spanning tree from the root "a" node. This way you will eventually have only one tree to consider, instead of multiple sub-graphs that are each only missing particular edges.
Once you get the idea, how you implement it is up to you. Assuming you can implement the algorithm in a reasonable manner, you should only have one tree to search for connectivity, which would speed things up considerably.
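For illustration only, here is a sketch of how that idea could look as a method of the adjacency-matrix Graph class from the question; it grows the spanning tree with a breadth-first search, which is one possible reasonable manner, not the only one:

public boolean indirectEdgeTest(int a, int b) {
    // Grow a spanning tree rooted at a; b is connected to a exactly when it ends up in the tree.
    boolean[] inTree = new boolean[numberOfNodes + 1];
    java.util.ArrayDeque<Integer> frontier = new java.util.ArrayDeque<Integer>();
    inTree[a] = true;
    frontier.add(a);
    while (!frontier.isEmpty()) {
        int current = frontier.remove();
        for (int i = 1; i <= numberOfNodes; i++) {
            if (edge[current][i] && !inTree[i]) {
                inTree[i] = true;          // i joins the tree via the edge current-i
                frontier.add(i);
            }
        }
    }
    return inTree[b];
}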
I credit meriton for his or her answer, but I've coded the idea into working Java classes and a unit test, so I'm supplying a separate answer here in case anyone is looking for reusable code.
Thanks meriton. I agree it's important to make a distinction between direct edge testing and path testing, and that there are different implementations of graphs that are better suited to a particular type of testing. In the case of path testing, it seems adjacency lists are much more efficient than an adjacency matrix representation.
My code below is probably not as efficient as it could be, but for now it is solving my problem. If anyone has improvements to suggest, please feel free.
To compile: javac Graph.java
To execute: java GraphTest
class Graph {
private java.util.ArrayList<Node> nodeList;
private int numberOfNodes;
public Graph(int size) {
nodeList = new java.util.ArrayList<Node>(size + 1);
numberOfNodes = size;
for (int i = 0; i <= numberOfNodes; i++) {
nodeList.add(new Node());
}
}
public void addEdge(int a, int b) {
if (a >= 1 && a <= numberOfNodes) {
if (b >= 1 && b <= numberOfNodes) {
nodeList.get(a).addNeighbour(nodeList.get(b));
nodeList.get(b).addNeighbour(nodeList.get(a));
}
}
}
public void walk(Node origin, java.util.Set<Node> visited) {
for (Node n : origin.getNeighbours()) {
if (!visited.contains(n)) {
visited.add(n);
walk(n, visited);
}
}
}
public boolean hasPath(Node origin, Node target) {
java.util.Set<Node> reachables = new java.util.HashSet<Node>();
walk(origin, reachables);
return reachables.contains(target);
}
public boolean hasPath(int a, int b) {
java.util.Set<Node> reachables = new java.util.HashSet<Node>();
Node origin = nodeList.get(a);
Node target = nodeList.get(b);
walk(origin, reachables);
return reachables.contains(target);
}
}
class Node {
private java.util.Set<Node> neighbours;
public Node() {
neighbours = new java.util.HashSet<Node>();
}
public void addNeighbour(Node n) {
neighbours.add(n);
}
public java.util.Set<Node> getNeighbours() {
return neighbours;
}
}
class GraphTest {
private static Graph g;
public static void main(String[] args) {
g = new Graph(6);
g.addEdge(1,5);
g.addEdge(4,1);
g.addEdge(4,3);
g.addEdge(3,6);
printTest(1, 2);
printTest(1, 4);
printTest(6, 1);
}
public static void printTest(int a, int b) {
System.out.print("Are nodes " + a + " and " + b + " connected?");
if (g.hasPath(a, b)) {
System.out.println(" YES.");
} else {
System.out.println(" NO.");
}
}
}
Related
I am attempting to use the functionality of Binary Search Trees without actually creating Node objects and giving them left/right children, instead using the basic idea of a Binary Search Tree within three parallel arrays: left, data, and right. At a particular index in these arrays, left holds the index in data where the current entry's left child lives, while right holds the index in data where the current entry's right child lives. This table gives a better example of what I am talking about:
The -1 values represent where a node does not have a left or right child. Before any nodes are inserted, all of the arrays hold the value 0, and every time a node is inserted, its left and right child index values are set to -1 (indicating that what we just inserted is a leaf). What I'm struggling to figure out is how to do this recursively without accidentally accessing an index of -1. My current attempt, seen below, is running into this issue:
public void insert(int d) {
//PRE: the tree is not full
//if d is not in the tree insert d otherwise the tree does not change
if(root == -1) {
root = d;
}
insert(d, 0);
}
private void insert(int d, int index) {
if(data[index] == d) {
return;
}
if(data[index] == 0) {
data[index] = d;
right[index] = -1;
left[index] = -1;
}
if(data[index] > d) {
if(left[index] == 0) {
data[index] = d;
right[index] = -1;
left[index] = -1;
} else {
insert(d, left[index]);
}
}
if(data[index] < d) {
if(right[index] == 0) {
data[index] = d;
right[index] = -1;
left[index] = -1;
} else {
insert(d, right[index]);
}
}
return;
}
I'm curious for ideas as to how I can prevent accessing an array at index -1, while still being able to indicate that a node does not have a child on a particular side.
I understand the concept that every time I'm inserting a node, I'm inserting a leaf, so when a node is placed at a particular index, its left and right can automatically be set to -1, but my current recursive calls end up passing in -1 at one point or another. Even if I change this value to 0, or something else, that doesn't necessarily help me make any progress in my recursion.
Some remarks on your code:
The root variable should not be assigned d, but the index where d will be stored; only then does it make sense that an empty tree is encoded with root equal to -1 (realise that d itself could be -1).
Your code has no logic to determine at which index to store a new node. This is really your question. A simple solution is to maintain a size variable. This is then the index at which the next node will be stored, after which the size member should be incremented.
There is then never a reason to think of 0 as some special indicator, and your code should only check for -1 references, not 0.
You have some code repetition which you can avoid by creating a method that will "create" a node: it will use size for its index, and will take a value as argument.
Here is the suggested code:
class BinaryTree {
public static final int MAXSIZE = 100;
int left[] = new int[BinaryTree.MAXSIZE];
int right[] = new int[BinaryTree.MAXSIZE];
int data[] = new int[BinaryTree.MAXSIZE];
int root = -1;
int size = 0;
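// Allocate the next free slot as a leaf holding the given value and return its index.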
private int createNode(int value) {
data[size] = value;
left[size] = -1;
right[size] = -1;
return size++;
}
public void insert(int value) {
if (root == -1) {
root = createNode(value);
} else {
insert(value, 0);
}
}
private void insert(int value, int index) {
if (data[index] == value) {
return;
}
if (data[index] > value) {
if (left[index] == -1) {
left[index] = createNode(value);
} else {
insert(value, left[index]);
}
} else {
if (right[index] == -1) {
right[index] = createNode(value);
} else {
insert(value, right[index]);
}
}
return;
}
}
This code can be further extended with:
verification that the tree has not already reached its maximum size before inserting,
node deletion
"memory management" for reusing the indexes of deleted nodes (by maintaining a "free list")
self-balancing (like AVL or red-black tree)
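As a rough sketch of the free-list idea mentioned in the list above (this is an assumption layered on top of the class, not part of it): deleted slots can be chained through the left array, and node creation can reuse them before growing size.

int freeHead = -1;                 // index of the first reusable slot, -1 if none

// Called by a (not shown) delete operation once a slot is no longer referenced.
private void releaseNode(int index) {
    left[index] = freeHead;        // chain the freed slot onto the free list
    freeHead = index;
}

// createNode would call this instead of using size directly.
private int allocateSlot() {
    if (freeHead != -1) {
        int index = freeHead;      // reuse a previously deleted slot
        freeHead = left[index];
        return index;
    }
    return size++;                 // otherwise take a fresh slot, as before
}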
Doing the "memory management" (array slot management) yourself is really going to mimic the powerful heap memory management that Java offers out of the box when using class instances. For that reason I would advise to implement a tree the OOP way.
How do we perform depth-first search on a directed graph using an adjacency matrix so that it explores all of the vertices, starting from a random vertex? I attempted to implement DFS, but it's only exploring the vertices that are reachable from the starting vertex.
public static void dfs(int [] [] adjMatrix, int startingV,int n)
{
boolean [] visited = new boolean[n];
Stack<Integer> s = new Stack<Integer>();
s.push(startingV);
while(!s.isEmpty())
{
int vertex = s.pop();
if(visited[vertex]==false)
{
System.out.print("\n"+(v));
visited[vertex]=true;
}
for ( int i = 0; i < n; i++)
{
if((adjMatrix[vertex][i] == true) && (visited[i] == false))
{
s.push(vertex);
visited[i]=true;
System.out.print(" " + i);
vertex = i;
}
}
}
}
}
In a directed graph there might be no node from which you can reach all other nodes. So what do you expect in this case?
If there is at least one node from which you can reach all other nodes, you just do not know which one it is; you can select a random node and walk against the direction of an incoming edge to find a root node from which you can reach all other nodes.
Your code has a couple of issues, one of which is that you do a int vertex = s.pop(); and later an s.push(vertex); with the same vertex. The latter should probably be s.push(i); instead.
The easiest way to implement DF traversal is to just use recursion. Then the code reduces to
function dfs(v) {
if v not visited before {
mark v as visited;
for every adjacent vertex a of v do {
dfs(a);
}
do something with v; // this is *after* all descendants have been visited.
}
}
Of course, every recursive implementation can be equivalently implemented using a stack and iteration instead, but in your case that'd be somewhat more complicated because you'd not only have to store the current vertex on the stack but also the state of iteration over its descendants (loop variable i in your case).
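In Java, against the adjacency-matrix representation from the question, the recursive version might look like the sketch below (it assumes adjMatrix holds 0/1 entries, as in the question):

public static void dfs(int[][] adjMatrix, int v, boolean[] visited, int n) {
    if (!visited[v]) {
        visited[v] = true;                 // mark v as visited
        for (int i = 0; i < n; i++) {
            if (adjMatrix[v][i] == 1) {    // i is adjacent to v
                dfs(adjMatrix, i, visited, n);
            }
        }
        System.out.print(" " + v);         // "do something" with v, after its descendants
    }
}

To cover every vertex even when not all of them are reachable from the starting vertex (the original concern), you can call dfs for each vertex in turn with one shared visited array: for (int v = 0; v < n; v++) dfs(adjMatrix, v, visited, n);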
I'm researching how to find the k values in a BST that are closest to a target, and came across the following implementation with these rules:
Given a non-empty binary search tree and a target value, find k values in the BST that are closest to the target.
Note:
Given target value is a floating point.
You may assume k is always valid, that is: k ≤ total nodes.
You are guaranteed to have only one unique set of k values in the BST that are closest to the target. Assume that the BST is balanced.
And the idea of the implementation is:
Compare the predecessors and successors of the node closest to the target. We can use two stacks to track the predecessors and successors; then, like the merge step in merge sort, we compare and pick the one closest to the target and put it in the result list. As we know, inorder traversal gives us sorted predecessors, whereas reverse-inorder traversal gives us sorted successors.
Code:
import java.util.*;
class TreeNode {
int val;
TreeNode left, right;
TreeNode(int x) {
val = x;
}
}
public class ClosestBSTValueII {
List<Integer> closestKValues(TreeNode root, double target, int k) {
List<Integer> res = new ArrayList<>();
Stack<Integer> s1 = new Stack<>(); // predecessors
Stack<Integer> s2 = new Stack<>(); // successors
inorder(root, target, false, s1);
inorder(root, target, true, s2);
while (k-- > 0) {
if (s1.isEmpty()) {
res.add(s2.pop());
} else if (s2.isEmpty()) {
res.add(s1.pop());
} else if (Math.abs(s1.peek() - target) < Math.abs(s2.peek() - target)) {
res.add(s1.pop());
} else {
res.add(s2.pop());
}
}
return res;
}
// inorder traversal
void inorder(TreeNode root, double target, boolean reverse, Stack<Integer> stack) {
if (root == null) {
return;
}
inorder(reverse ? root.right : root.left, target, reverse, stack);
// early terminate, no need to traverse the whole tree
if ((reverse && root.val <= target) || (!reverse && root.val > target)) {
return;
}
// track the value of current node
stack.push(root.val);
inorder(reverse ? root.left : root.right, target, reverse, stack);
}
public static void main(String args[]) {
ClosestBSTValueII cv = new ClosestBSTValueII();
TreeNode root = new TreeNode(53);
root.left = new TreeNode(30);
root.left.left = new TreeNode(20);
root.left.right = new TreeNode(42);
root.right = new TreeNode(90);
root.right.right = new TreeNode(100);
System.out.println(cv.closestKValues(root, 40, 2));
}
}
And my question is, what's the reason for having two stacks and how is in-order a good approach? What's the purpose of each? Wouldn't traversing it with one stack be enough?
And what's the point of having a reverse boolean, such as for inorder(reverse ? ...);? And in the case of if ((reverse && root.val <= target) || (!reverse && root.val > target)), why do you terminate early?
Thank you in advance and will accept answer/up vote.
The idea of the algorithm you found is quite simple. They just do an in-order traversal of the tree from the place where the target would be inserted. They use two stacks to store predecessors and successors. Let's take this tree as an example:
5
/ \
3 9
/ \ \
2 4 11
Let the target be 8. When all inorder method calls are finished, the stacks will be: s1 = {2, 3, 4, 5}, s2 = {11, 9}. As you see, s1 contains all predecessors of the target and s2 all its successors. Moreover, both stacks are sorted in such a way that the top of each stack is closer to the target than all other values in that stack. As a result, we can easily find the k closest values just by always comparing the tops of the stacks and popping the closer value until we have k values. The running time of their algorithm is O(n).
Now about your questions. I don't know how to implement this algorithm effectively using only one stack; the problem with a stack is that we only have access to its top. But it is extremely easy to implement the algorithm with one array. Let's just do a usual in-order traversal of the tree. For my example we will get: arr = {2, 3, 4, 5, 9, 11}. Then let's place l and r indices at the values closest to the target from both sides: l = 3, r = 4 (arr[l] = 5, arr[r] = 9). What is left is just to always compare arr[l] and arr[r] and choose which to add to the result (exactly the same as with two stacks). This algorithm also takes O(n) operations.
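A sketch of that array-based variant, reusing the TreeNode class from the question (it assumes it sits in the same class as the code above, which already imports java.util.*; the helper name inorderToList is mine):

List<Integer> closestKValuesByArray(TreeNode root, double target, int k) {
    List<Integer> arr = new ArrayList<>();
    inorderToList(root, arr);                      // sorted values, e.g. {2, 3, 4, 5, 9, 11}
    int r = 0;
    while (r < arr.size() && arr.get(r) < target) {
        r++;                                       // r: first value >= target
    }
    int l = r - 1;                                 // l: last value < target
    List<Integer> res = new ArrayList<>();
    while (k-- > 0) {
        if (l < 0) {
            res.add(arr.get(r++));
        } else if (r >= arr.size()
                || Math.abs(arr.get(l) - target) < Math.abs(arr.get(r) - target)) {
            res.add(arr.get(l--));
        } else {
            res.add(arr.get(r++));
        }
    }
    return res;
}

void inorderToList(TreeNode node, List<Integer> arr) {
    if (node == null) return;
    inorderToList(node.left, arr);
    arr.add(node.val);                             // in-order gives ascending order
    inorderToList(node.right, arr);
}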
Their approach to the problem seems to me a bit too hard to understand in code, though it is rather elegant.
I'd like to introduce another approach to the problem with a different running time. This algorithm takes O(k*log n) time, which is better than the previous algorithm for small k and worse for larger k.
Let's also store in the TreeNode class a pointer to the parent node. Then we can find the predecessor or successor of any node in the tree in O(log n) time. So, let's first find the predecessor and successor of the target in the tree (without doing any full traversals!). Then do the same as with the stacks: compare predecessor/successor, choose the closer one, and for the chosen one move on to its predecessor/successor.
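For reference, finding the in-order successor with a parent pointer could look like this sketch (it assumes a parent field has been added to TreeNode, which the class in the question does not have; the predecessor is the mirror image):

TreeNode successor(TreeNode node) {
    if (node.right != null) {
        TreeNode cur = node.right;        // leftmost node of the right subtree
        while (cur.left != null) {
            cur = cur.left;
        }
        return cur;
    }
    TreeNode cur = node;
    while (cur.parent != null && cur.parent.right == cur) {
        cur = cur.parent;                 // climb while we are a right child
    }
    return cur.parent;                    // null if node was the maximum
}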
I hope I answered your questions and you understood my explanations. If not, feel free to ask!
The reason why you need two stacks is that you must traverse the tree in two directions, and you must compare the current value of each stack with the value you're searching for (you may end up having k values greater than the searched value, or k/2 greater and k/2 lower).
I think you should use stacks of TreeNodes rather than stacks of Integer; you could avoid recursion.
UPDATE:
I see two phases in the algorithm:
1) locate the closest value in the tree, which would simultaneously build the initial stack.
2) make a copy of that stack and move it back one element; this will give you the second stack. Then iterate at most k times: see which of the two elements on top of each stack is the closest to the searched value, add it to the result list, and move that stack forward or backward.
UPDATE 2: A little code
public static List<Integer> closest(TreeNode root, int val, int k) {
Stack<TreeNode> right = locate(root, val);
Stack<TreeNode> left = new Stack<>();
left.addAll(right);
moveLeft(left);
List<Integer> result = new ArrayList<>();
for (int i = 0; i < k; ++i) {
if (left.isEmpty()) {
if (right.isEmpty()) {
break;
}
result.add(right.peek().val);
moveRight(right);
} else if (right.isEmpty()) {
result.add(left.peek().val);
moveLeft(left);
} else {
int lval = left.peek().val;
int rval = right.peek().val;
if (Math.abs(val-lval) < Math.abs(val-rval)) {
result.add(lval);
moveLeft(left);
} else {
result.add(rval);
moveRight(right);
}
}
}
return result;
}
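// Push the search path from the root toward val; the top of the returned stack is the last node on that path.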
private static Stack<TreeNode> locate(TreeNode p, int val) {
Stack<TreeNode> stack = new Stack<>();
while (p != null) {
stack.push(p);
if (val < p.val) {
p = p.left;
} else {
p = p.right;
}
}
return stack;
}
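// Advance the stack so that its top becomes the in-order predecessor of the current top (the stack empties when there is none).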
private static void moveLeft(Stack<TreeNode> stack) {
if (!stack.isEmpty()) {
TreeNode p = stack.peek().left;
if (p != null) {
do {
stack.push(p);
p = p.right;
} while (p != null);
} else {
do {
p = stack.pop();
} while (!stack.isEmpty() && stack.peek().left == p);
}
}
}
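// Advance the stack so that its top becomes the in-order successor of the current top (the stack empties when there is none).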
private static void moveRight(Stack<TreeNode> stack) {
if (!stack.isEmpty()) {
TreeNode p = stack.peek().right;
if (p != null) {
do {
stack.push(p);
p = p.left;
} while (p != null);
} else {
do {
p = stack.pop();
} while (!stack.isEmpty() && stack.peek().right == p);
}
}
}
UPDATE 3
Wouldn't traversing it with one stack be enough?
And what's the point of having a reverse boolean, such as for inorder(reverse ? ...);? And in the case of if ((reverse && root.val <= target) || (!reverse && root.val > target)), why do you terminate early?
I don't know where you got the solution you gave in your question from, but to summarize, it builds two lists of Integer, one in ascending order and one in reverse order. It terminates "early" when the searched value is reached. This solution sounds very inefficient since it requires traversing the whole tree. Mine, of course, is much better, and it conforms to the given rules.
I'm trying to implement a program to solve the n-puzzle problem.
I have written a simple implementation in Java in which a state of the problem is characterized by a matrix representing the tiles. I am also able to auto-generate the graph of all the states, given the starting state. On the graph, then, I can do a BFS to find the path to the goal state.
But the problem is that I run out of memory and I cannot even create the whole graph.
I tried with 2x2 tiles and it works. Also with some 3x3 configurations (it depends on the starting state and how many nodes are in the graph). But in general this way is not suitable.
So I tried generating the nodes at runtime, while searching. It works, but it is slow (sometimes after several minutes it still has not finished and I terminate the program).
Btw: I give as starting state only solvable configurations and I don't create duplicated states.
So, I cannot create the graph. This leads to my main problem: I have to implement the A* algorithm and I need the path cost (i.e. for each node the distance from the starting state), but I think I cannot calculate it at runtime. I need the whole graph, right? Because A* does not follow a BFS exploration of the graph, so I don't know how to estimate the distance for each node. Hence, I don't know how to perform an A* search.
Any suggestion?
EDIT
State:
private int[][] tiles;
private int pathDistance;
private int misplacedTiles;
private State parent;
public State(int[][] tiles) {
this.tiles = tiles;
pathDistance = 0;
misplacedTiles = estimateHammingDistance();
parent = null;
}
public ArrayList<State> findNext() {
ArrayList<State> next = new ArrayList<State>();
int[] coordZero = findCoordinates(0);
int[][] copy;
if(coordZero[1] + 1 < Solver.SIZE) {
copy = copyTiles();
int[] newCoord = {coordZero[0], coordZero[1] + 1};
switchValues(copy, coordZero, newCoord);
State newState = checkNewState(copy);
if(newState != null)
next.add(newState);
}
if(coordZero[1] - 1 >= 0) {
copy = copyTiles();
int[] newCoord = {coordZero[0], coordZero[1] - 1};
switchValues(copy, coordZero, newCoord);
State newState = checkNewState(copy);
if(newState != null)
next.add(newState);
}
if(coordZero[0] + 1 < Solver.SIZE) {
copy = copyTiles();
int[] newCoord = {coordZero[0] + 1, coordZero[1]};
switchValues(copy, coordZero, newCoord);
State newState = checkNewState(copy);
if(newState != null)
next.add(newState);
}
if(coordZero[0] - 1 >= 0) {
copy = copyTiles();
int[] newCoord = {coordZero[0] - 1, coordZero[1]};
switchValues(copy, coordZero, newCoord);
State newState = checkNewState(copy);
if(newState != null)
next.add(newState);
}
return next;
}
private State checkNewState(int[][] tiles) {
State newState = new State(tiles);
for(State s : Solver.states)
if(s.equals(newState))
return null;
return newState;
}
@Override
public boolean equals(Object obj) {
if(this == null || obj == null)
return false;
if (obj.getClass().equals(this.getClass())) {
for(int r = 0; r < tiles.length; r++) {
for(int c = 0; c < tiles[r].length; c++) {
if (((State)obj).getTiles()[r][c] != tiles[r][c])
return false;
}
}
return true;
}
return false;
}
Solver:
public static final HashSet<State> states = new HashSet<State>();
public static void main(String[] args) {
solve(new State(selectStartingBoard()));
}
public static State solve(State initialState) {
TreeSet<State> queue = new TreeSet<State>(new Comparator1());
queue.add(initialState);
states.add(initialState);
while(!queue.isEmpty()) {
State current = queue.pollFirst();
for(State s : current.findNext()) {
if(s.goalCheck()) {
s.setParent(current);
return s;
}
if(!states.contains(s)) {
s.setPathDistance(current.getPathDistance() + 1);
s.setParent(current);
states.add(s);
queue.add(s);
}
}
}
return null;
}
Basically here is what I do:
- Solver's solve has a SortedSet. Elements (States) are sorted according to Comparator1, which calculates f(n) = g(n) + h(n), where g(n) is the path cost and h(n) is a heuristic (the number of misplaced tiles).
- I give the starting configuration and look for all the successors.
- If a successor has not been already visited (i.e. if it is not in the global set States) I add it to the queue and to States, setting the current state as its parent and parent's path + 1 as its path cost.
- Dequeue and repeat.
I think it should work because:
- I keep all the visited states so I'm not looping.
- Also, there won't be any useless edge because I immediately store current node's successors. E.g.: if from A I can go to B and C, and from B I could also go to C, there won't be the edge B->C (since path cost is 1 for each edge and A->B is cheaper than A->B->C).
- Each time I choose to expand the path with the minimum f(n), according to A*.
But it does not work. Or at least, after a few minutes it still can't find a solution (and I think that is a lot of time in this case).
If I try to create a tree structure before executing A*, I run out of memory building it.
EDIT 2
Here are my heuristic functions:
private int estimateManhattanDistance() {
int counter = 0;
int[] expectedCoord = new int[2];
int[] realCoord = new int[2];
for(int value = 1; value < Solver.SIZE * Solver.SIZE; value++) {
realCoord = findCoordinates(value);
expectedCoord[0] = (value - 1) / Solver.SIZE;
expectedCoord[1] = (value - 1) % Solver.SIZE;
counter += Math.abs(expectedCoord[0] - realCoord[0]) + Math.abs(expectedCoord[1] - realCoord[1]);
}
return counter;
}
private int estimateMisplacedTiles() {
int counter = 0;
int expectedTileValue = 1;
for(int i = 0; i < Solver.SIZE; i++)
for(int j = 0; j < Solver.SIZE; j++) {
if(tiles[i][j] != expectedTileValue)
if(expectedTileValue != Solver.ZERO)
counter++;
expectedTileValue++;
}
return counter;
}
If I use a simple greedy algorithm they both work (using Manhattan distance is really quick (around 500 iterations to find a solution), while with number of misplaced tiles it takes around 10k iterations). If I use A* (evaluating also the path cost) it's really slow.
Comparators are like that:
public int compare(State o1, State o2) {
if(o1.getPathDistance() + o1.getManhattanDistance() >= o2.getPathDistance() + o2.getManhattanDistance())
return 1;
else
return -1;
}
EDIT 3
There was a little error. I fixed it and now A* works. Or at least, for the 3x3 it finds the optimal solution with only 700 iterations. For the 4x4 it's still too slow. I'll try IDA*, but one question: how long could it take with A* to find the solution? Minutes? Hours? I left it running for 10 minutes and it didn't finish.
There is no need to generate all state-space nodes to solve a problem using BFS, A*, or any tree search; you just add the states you can reach from the current state to the fringe, and that's why there is a successor function.
It is normal for BFS to consume a lot of memory, although I don't know exactly for what n it becomes a problem. Use DFS instead.
For A*, you know how many moves you made to reach the current state, and you can estimate the moves needed to solve the problem simply by relaxing it. As an example, you can pretend that any two tiles can swap places and then count the moves needed to solve this relaxed problem. Your heuristic just needs to be admissible, i.e. your estimate must not exceed the actual number of moves needed to solve the problem.
Add a path cost to your state class, and every time you go from a parent state P to a child state C do C.cost = P.cost + 1; this will compute the path cost for every node automatically.
This is also a very good and simple implementation of an 8-puzzle solver with A* in C#; take a look at it and you will learn many things:
http://geekbrothers.org/index.php/categories/computer/12-solve-8-puzzle-with-a
Given the adjacency matrix of a graph, I need to obtain the chromatic number (minimum number of colours needed to paint every node of a graph so that adjacent nodes get different colours).
Preferably it should be a Java algorithm, and I don't care about performance.
Thanks.
Edit:
I recently introduced a fix so the answer is more accurate. Now it will recheck a node's position against the previous positions.
Now a new question comes up: which is it better to increment the 'number-color' of, the node I am standing on, or the node I am visiting (the one I am asking whether I am adjacent to)?
public class Modelacion {
public static void main(String args[]) throws IOException{
// given the matrix ... which i have hidden the initialization here
int[][] matriz = new int[40][40];
int color[] = new int[40];
for (int i = 0 ; i<40;i++)
color[i]=1;
Cromatico c = new Cromatico(matriz, color, 40);
}
}
import java.io.IOException;
public class Cromatico {
Cromatico(int[][]matriz, int[] color, int fila) throws IOException{
for (int i = 0; i<fila;i++){
for (int j = 0 ; j<fila;j++){
if (matriz[i][j] == 1 && color[i] == color [j]){
if (j<i)
color [i] ++;
else
color [j] ++;
}
}
}
int numeroCromatico = 1;
for (int k = 0; k<fila;k++){
System.out.print(".");
numeroCromatico = Math.max(numeroCromatico, color[k]);
}
System.out.println();
System.out.println("el numero cromatico del grafo es: " + numeroCromatico);
}
}
Finding the chromatic number of a graph is NP-Complete (see Graph Coloring). It is NP-Complete even to determine if a given graph is 3-colorable (and also to find a coloring).
The wiki page linked to in the previous paragraph has some algorithms descriptions which you can probably use.
btw, since it is NP-Complete and you don't really care about performance, why don't you try using brute force?
Guess a chromatic number k, try all possibilities of vertex colouring (max k^n possibilities), if it is not colorable, new guess for chromatic number = min{n,2k}. If it is k-colorable, new guess for chromatic number = max{k/2,1}. Repeat, following the pattern used by binary search and find the optimal k.
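In code, that guess-and-refine pattern might look roughly like the sketch below. Here isColorable is a hypothetical helper that brute-forces all assignments of k colors and reports whether any of them is proper; it is not defined here:

static int chromaticNumber(boolean[][] adjMatrix) {
    int n = adjMatrix.length;
    // Phase 1: double k until the graph is k-colorable (k = n always works).
    int hi = 1;
    while (hi < n && !isColorable(adjMatrix, hi)) {
        hi = Math.min(n, 2 * hi);
    }
    // Phase 2: binary search for the smallest k that still works.
    int lo = Math.max(hi / 2, 1);
    while (lo < hi) {
        int mid = (lo + hi) / 2;
        if (isColorable(adjMatrix, mid)) {
            hi = mid;
        } else {
            lo = mid + 1;
        }
    }
    return hi;
}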
Good luck!
And to answer your edit.
Neither option of incrementing the color will work. Also, your algorithm is O(n^2). That itself is enough to tell it is highly likely that your algorithm is wrong, even without looking for counterexamples. This problem is NP-Complete!
Super slow, but it should work:
int chromaticNumber(Graph g) {
for (int ncolors = 1; true; ncolors++) {
if (canColor(g, ncolors)) return ncolors;
}
}
boolean canColor(Graph g, int ncolors) {
return canColorRemaining(g, ncolors, 0);
}
// recursive routine - the first colors_so_far nodes have been colored,
// check if there is a coloring for the rest.
boolean canColorRemaining(Graph g, int ncolors, int colors_so_far) {
if (colors_so_far == g.nodes()) return true;
for (int c = 0; c < ncolors; c++) {
boolean ok = true;
for (int v : g.adjacent(colors_so_far)) {
if (v < colors_so_far && g.getColor(v) == c) ok = false;
}
if (ok) {
g.setColor(colors_so_far, c);
if (canColorRemaining(g, ncolors, colors_so_far + 1)) return true;
}
}
return false;
}