I am looking at a piece of code used for an IDA* search. When a node is expanded, its child states are put into the StateCache data structure, which is a stack as far as I can tell. My question is this: is there any reason one would choose to set the maximum size of an array to a particular value? The cache data member holds 10*1024 elements, which suggests to me that it is only supposed to store up to 10 KB (?) worth of elements, but I am really not sure. What would be the justification for the 10*1024 number?
Note that I have found this and this Stack Overflow post, which discuss cache hits/misses w.r.t. row-major vs. column-major access of arrays, but they don't answer my question. Also, this code had no comments, otherwise I would've included more.
public class StateCache {
    public static final int MAX_CACHE_SIZE = 10 * 1024;

    int size;
    State[] cache;

    public StateCache() {
        size = 0;
        cache = new State[MAX_CACHE_SIZE];
    }

    // push and pop operations
    public State get(State original) {
        if (size > 0) {
            size--;
            State result = cache[size];
            result.init(original);
            return result;
        } else {
            return new State(original);
        }
    }

    public void put(State[] children) {
        for (State child : children) {
            if (child == null) {
                return;
            }
            if (size >= MAX_CACHE_SIZE) {
                return;
            }
            cache[size] = child;
            size++;
        }
    }
}
I have a BinaryTree and I want to get all nodes of a specific level. Order does not matter. I want to try to do this with recursion. My method looks like this:
public List<T> getNodesOnLevel(int i) {
    int recursionTool = i;
    // to do
    recursionTool -= 1;
}
I tried while (recursionTool != 0) { ... method ... } and then recursionTool - 1, but I ended up getting all nodes up to the wanted level, not just the nodes on that level.
My Node looks like this:
class Node<T> {
    T val;
    Node<T> left;
    Node<T> right;

    Node(T v) {
        val = v;
        left = null;
        right = null;
    }
}
It is possible to implement this as a pure functional algorithm by concatenating the lists returned by recursive calls. Unfortunately, that is rather inefficient in Java because all retrieved values are copied by list creation or concatenation once at each recursion level.
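For comparison, a minimal sketch of that pure functional variant might look like the following (the method name nodesOnLevel and the use of java.util.Collections are my own choices, not part of the original code):
public List<T> nodesOnLevel(int level) {
    // Base case: this node is on the requested level.
    if (level == 0) {
        return Collections.singletonList(this.val);
    }
    // Concatenate the lists returned by the recursive calls; each addAll copies
    // the retrieved values once per recursion level, hence the inefficiency.
    List<T> result = new ArrayList<>();
    if (this.left != null) {
        result.addAll(this.left.nodesOnLevel(level - 1));
    }
    if (this.right != null) {
        result.addAll(this.right.nodesOnLevel(level - 1));
    }
    return result;
}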
If you are willing to use mutation, here is a solution that avoids the copying (assuming that this is a Node<T>):
private void getNodesOnLevel(int level, List<T> list) {
    if (level == 0) {
        list.add(this.val);
    } else {
        if (this.left != null) {
            this.left.getNodesOnLevel(level - 1, list);
        }
        if (this.right != null) {
            this.right.getNodesOnLevel(level - 1, list);
        }
    }
}
The above method needs to be called with an empty (mutable) list as the 2nd argument, so we need another method:
public List<T> getNodesOnLevel(int level) {
    List<T> list = new ArrayList<>();
    this.getNodesOnLevel(level, list);
    return list;
}
(In complexity terms, the pure functional solution is O(LN), where L is the level and N is the number of nodes at that level. My solution is O(N). Each value in the list will be copied twice on average, due to the way ArrayList.add implements list resizing. The resizing could be avoided by creating the list with a capacity of 2^level.)
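For example, a sketch of that pre-sizing (assuming level stays small enough that 1 << level does not overflow an int):
public List<T> getNodesOnLevel(int level) {
    // 2^level is the maximum number of nodes a binary tree can have on this level,
    // so the backing array never needs to be grown and copied.
    List<T> list = new ArrayList<>(1 << level);
    this.getNodesOnLevel(level, list);
    return list;
}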
This may help you. I used this method to print nodes, but you can adapt it.
public void printGivenLevel(TNode root, int level) {
    if (root == null)
        return;
    if (level == 1 && root.getValue() != null) {
        // here, add root.getValue() to list
    } else if (level > 1) {
        printGivenLevel(root.getLeft(), level - 1);
        printGivenLevel(root.getRight(), level - 1);
    }
}
I have to make a so-called "Hit Balanced Tree". The difference is that, as you can see, my node class has an instance variable called numberOfHits, which is incremented any time you call the contains or findNode method. The point of this exercise is to keep the nodes with the highest hit counts at the top, so the tree basically restructures itself (or rotates). The root obviously has the highest hit count.
I have a question regarding a method I have to write that returns the node with the highest hit count. I will later need it to make the tree rotate itself (I guess, at least that's the plan). Here is my node class (it has all the getters, of course):
public class HBTNode<T> {
    private HBTNode<T> left;
    private HBTNode<T> right;
    private T element;
    private int numberOfHits;

    public HBTNode(T element) {
        this.left = null;
        this.right = null;
        this.element = element;
        this.numberOfHits = 0;
    }
}
What I have so far is this:
public int findMaxCount(HBTNode<T> node) {
    int max = node.getNumberOfHits();
    if (node.getLeft() != null) {
        max = Math.max(max, findMaxCount(node.getLeft()));
    }
    if (node.getRight() != null) {
        max = Math.max(max, findMaxCount(node.getRight()));
    }
    return max;
}
This works fine, except it returns an integer. I need to return the node itself. Since I have to do this recursively, I decided to find the biggest hit count first and then use this method in another method that returns a node, like this (it's probably really inefficient, so if you have tips on improvement, I am listening):
public int findMaxCount() {
    return findMaxCount(root);
}

public HBTNode<T> findMaxCountNode(HBTNode<T> node) {
    if (node.getNumberOfHits() == this.findMaxCount()) {
        return node;
    }
    if (node.getLeft() != null) {
        return findMaxCountNode(node.getLeft());
    }
    if (node.getRight() != null) {
        return findMaxCountNode(node.getRight());
    }
    return null;
}
I call the method like this:
public HBTNode<T> findMaxCountNode() {
    return findMaxCountNode(root);
}
It returns null even though I think it should be fine. I am not that good at recursion, so obviously I am missing something. I am open to any help, and also to new suggestions about this exercise if you have any. Thanks a lot.
Test code:
public static void main(String[] args) {
HBTree<Integer> tree = new HBTree<Integer>();
tree.add(50);
tree.add(25);
tree.add(74);
tree.add(19);
tree.add(8);
tree.add(6);
tree.add(57);
tree.add(108);
System.out.println(tree.contains(108)); //contains method increases the count by one
System.out.println(tree.contains(8));
System.out.println(tree.contains(8));
System.out.println(tree.contains(108));
System.out.println(tree.contains(8));
System.out.println(tree.contains(108));
System.out.println(tree.contains(108));
System.out.println(tree.contains(108));
System.out.println(tree.findMaxCountNode());
}
Current output:
true
true
true
true
true
true
true
true
null
Expected output:
true
true
true
true
true
true
true
true
Element: 108
Left child: 6 //this is just a toString, doesn't matter at this point
Right child: null
Number of hits: 5
It seems like your two functions should look like the following. What I'm assuming here is that these functions, which are defined inside the HBTNode class, are meant to find the highest hit-count node at or below the given node:
public HBTNode<T> findMaxCountNode(HBTNode<T> node) {
    return findMaxCountNode(node, node);
}

public HBTNode<T> findMaxCountNode(HBTNode<T> node, HBTNode<T> maxNode) {
    HBTNode<T> currMax = (node.getNumberOfHits() > maxNode.getNumberOfHits()) ? node : maxNode;
    if (node.getLeft() != null) {
        currMax = findMaxCountNode(node.getLeft(), currMax);
    }
    if (node.getRight() != null) {
        currMax = findMaxCountNode(node.getRight(), currMax);
    }
    return currMax;
}
public int findMaxCount(HBTNode<T> node) {
    HBTNode<T> maxNode = findMaxCountNode(node);
    if (maxNode != null)
        return maxNode.getNumberOfHits();
    else
        return -1;
}
Let me know if there are any issues; this is off the top of my head. But I thought it would be helpful to point out that the "integer" version of your method should just use the "node finding" version of the method. The method you wrote to find the maximum value is quite similar to the one I wrote here to find the maximum node.
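For completeness, a rough sketch of how this could be wired up from the tree side (this assumes the two methods above are reachable from an HBTree class that has a root field, matching the calls in your test code; the wrapper names mirror yours):
// Hypothetical wrappers inside HBTree<T>
public HBTNode<T> findMaxCountNode() {
    if (root == null) {
        return null; // empty tree: nothing to return
    }
    return root.findMaxCountNode(root);
}

public int findMaxCount() {
    return (root == null) ? -1 : root.findMaxCount(root);
}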
I read some material about memory leaks in Java. It implements a FIFO queue with an array that semantically leaks memory, but I don't understand why it causes a memory leak. Is it because it doesn't nullify the unused slot in the pop operation, i.e. it's missing this line? Can anyone explain it to me?
queue[head] = null
The FIFO Queue implementation is as follows:
import java.util.Arrays;
import java.util.EmptyStackException;

public class FIFOQueue {
    private Object[] queue;
    private int size = 0, head = 0, tail = 0;
    private static final int INITIAL_CAPACITY = 16;

    public FIFOQueue() {
        queue = new Object[INITIAL_CAPACITY];
    }

    public void push(Object e) {
        ensureCapacity();
        queue[tail] = e;
        size++;
        tail = increment(tail);
    }

    public Object pop() throws EmptyStackException {
        if (size == 0)
            throw new EmptyStackException();
        size--;
        Object returnValue = queue[head];
        head = increment(head);
        return returnValue;
    }

    /** doubling the capacity each time the array needs to grow. */
    private void ensureCapacity() {
        if (queue.length == size)
            queue = Arrays.copyOf(queue, 2 * size + 1);
    }

    /** make sure the pointers are wrapped around at the end of the array */
    private int increment(int x) {
        if (++x == queue.length)
            x = 0;
        return x;
    }
}
You answered your own question :)
Since you don't clear the queue slot's reference, the garbage collector can't reclaim the object, because your FIFOQueue still holds a valid reference to it. That way you pollute your memory with unused objects, reducing the memory effectively available to your program.
Also, as pointed out in the comments, your ensureCapacity function only works when the tail is at the end of the array; otherwise you'll lose elements from your queue on push. This is more critical than the queue[head] reference problem.
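To illustrate the reference problem concretely, here is a sketch of a pop that clears the slot before moving head (my own variant of your pop, not code from the material you read):
public Object pop() {
    if (size == 0)
        throw new EmptyStackException();
    size--;
    Object returnValue = queue[head];
    queue[head] = null; // drop the obsolete reference so the GC can reclaim the object
    head = increment(head);
    return returnValue;
}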
I need some help with my A* algorithm implementation.
When I run the algorithm it does find the goal, but the path is definitely not the shortest :-P
Here is my code; please help me spot the bugs!
I think it might be the reconstruct_path method that is my problem, but I'm not sure.
public class Pathfinder {

    public List<Node> aStar(Node start, Node goal, WeightedGraph graph) {
        Node x, y;
        int tentative_g_score;
        boolean tentative_is_better;
        FScoreComparator comparator = new FScoreComparator();
        List<Node> closedset = new ArrayList<Node>();
        Queue<Node> openset = new PriorityQueue<Node>(10, comparator);
        openset.add(start);
        start.g_score = 0;
        start.h_score = heuristic_cost_estimate(start, goal);
        start.f_score = start.h_score;
        while (!openset.isEmpty()) {
            x = openset.peek();
            if (x == goal) {
                return reconstruct_path(goal);
            }
            x = openset.remove();
            closedset.add(x);
            for (Edge e : graph.adj(x)) {
                if (e.v == x) {
                    y = e.w;
                } else {
                    y = e.v;
                }
                if (closedset.contains(y) || y.illegal) {
                    continue;
                }
                tentative_g_score = x.g_score + e.weight;
                if (!openset.contains(y)) {
                    openset.add(y);
                    tentative_is_better = true;
                } else if (tentative_g_score < y.g_score) {
                    tentative_is_better = true;
                } else {
                    tentative_is_better = false;
                }
                if (tentative_is_better) {
                    y.g_score = tentative_g_score;
                    y.h_score = heuristic_cost_estimate(y, goal);
                    y.f_score = y.g_score + y.h_score;
                    y.parent = x;
                }
            }
        }
        return null;
    }

    private int heuristic_cost_estimate(Node start, Node goal) {
        return Math.abs(start.x - goal.x) + Math.abs(start.y - goal.y);
    }

    private List<Node> reconstruct_path(Node current_node) {
        List<Node> result = new ArrayList<Node>();
        while (current_node != null) {
            result.add(current_node);
            current_node = current_node.parent;
        }
        return result;
    }

    private class FScoreComparator implements Comparator<Node> {
        public int compare(Node n1, Node n2) {
            if (n1.f_score < n2.f_score) {
                return 1;
            } else if (n1.f_score > n2.f_score) {
                return -1;
            } else {
                return 0;
            }
        }
    }
}
Thanks to everyone for all the great answers!
My A* algorithm now works perfectly thanks to you guys! :-)
This was my first post and this forum is really amazing!
You are changing the priority of an element in the PriorityQueue after having inserted it. This isn't supported, as the priority queue isn't aware that an object has changed. What you can do is remove and re-add the object when it changes.
The priority is changed in the line: y.f_score = y.g_score + y.h_score;. This line happens after adding y to the priority queue. Note that simply moving the line openset.add(y); to after calculating the cost won't be enough, since y may have been added in a previous iteration.
It also isn't clear from your code whether the heuristic you used is admissible. If it isn't it will also cause you to get suboptimal paths.
Finally, a performance note: the contains method on ArrayList and PriorityQueue takes linear time to run, which will make the running time of your implementation non-optimal. You can improve this by adding boolean properties to the nodes to indicate whether they are in the closed/open sets, or by using a set data structure.
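As a rough sketch of the remove-and-re-add idea (my own adaptation of your relaxation block, not a complete fix for the method):
if (tentative_is_better) {
    // Take y out before touching its priority; PriorityQueue only orders
    // elements when they are inserted.
    openset.remove(y);
    y.g_score = tentative_g_score;
    y.h_score = heuristic_cost_estimate(y, goal);
    y.f_score = y.g_score + y.h_score;
    y.parent = x;
    openset.add(y); // re-insert so y is placed according to its new f_score
}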
A priority queue does not update an item's position when you change its priority, so the heap property no longer holds. The changed priority affects later additions/removals of other items, but it does not repair the heap property. Therefore you don't get the best item from the open set, and you don't find the shortest path.
You can:
1) write your own heap and maintain an index into it, or
2) add another object into the PQ and mark the old one as invalid (instead of the node itself, put a wrapper object with a validity flag and a reference to the node into the queue).
Option 2) has worse performance and I advise against it, but some navigation software uses this approach (or at least did a few years back).
Edit: best practice is to insert immutable objects (or at least objects whose priority-determining parts are immutable) into a PriorityQueue.
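A minimal sketch of that immutable-entry idea (the QueueEntry wrapper and its field names are my own; stale entries are simply skipped when polled):
// Immutable wrapper: the priority is fixed at insertion time.
final class QueueEntry {
    final Node node;
    final int fScore;

    QueueEntry(Node node, int fScore) {
        this.node = node;
        this.fScore = fScore;
    }
}

// Order entries by the f_score recorded at insertion time.
PriorityQueue<QueueEntry> openset =
        new PriorityQueue<>((a, b) -> Integer.compare(a.fScore, b.fScore));

// Whenever a node's f_score improves, insert a fresh entry instead of mutating the old one.
openset.add(new QueueEntry(y, y.f_score));

// When polling, discard entries whose recorded priority no longer matches the node.
QueueEntry entry = openset.poll();
while (entry != null && entry.fScore != entry.node.f_score) {
    entry = openset.poll();
}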
For my data structures class, our homework is to create a generic heap ADT. In the siftUp() method I need to do a comparison, and if the parent is smaller I need to do a swap. The problem I am having is that the comparison operators are not valid on generic types. I believe I need to use the Comparable interface, but from what I read it's not a good idea to use it with arrays. I have also searched this site and found good information related to this post, but none of it helped me find the solution.
I removed some of the code that wasn't relevant.
Thanks
public class HeapQueue<E> implements Cloneable {
    private int highest;
    private Integer manyItems;
    private E[] data;

    public HeapQueue(int a_highest) {
        data = (E[]) new Object[10];
        highest = a_highest;
    }

    public void add(E item, int priority) {
        // check to see if the priority value is within range
        if (priority < 0 || priority > highest) {
            throw new IllegalArgumentException
                ("Priority value is out of range: " + priority);
        }
        // increase the heap's capacity if the array is out of space
        if (manyItems == data.length)
            ensureCapacity();
        manyItems++;
        data[manyItems - 1] = item;
        siftUp(manyItems - 1);
    }

    private void siftUp(int nodeIndex) {
        int parentIndex;
        E tmp;
        if (nodeIndex != 0) {
            parentIndex = parent(nodeIndex);
            if (data[parentIndex] < data[nodeIndex]) { // <-- problem ****
                tmp = data[parentIndex];
                data[parentIndex] = data[nodeIndex];
                data[nodeIndex] = tmp;
                siftUp(parentIndex);
            }
        }
    }

    private int parent(int nodeIndex) {
        return (nodeIndex - 1) / 2;
    }
}
Technically you're using the Comparable interface on an item, not an array (one item in the array, specifically). I think the best solution here is to accept, in the constructor, a Comparator that the user can pass in to compare his generic objects.
Comparator<E> comparator;

public HeapQueue(int a_highest, Comparator<E> compare) {
    this.comparator = compare;
Then, you would store that comparator in a member field and use
if (comparator.compare(data[parentIndex],data[nodeIndex]) < 0)
In place of the less than operator.
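Putting those pieces together, a rough sketch of what the comparator-based version could look like (only the parts that change are shown; the rest of your class stays the same):
import java.util.Comparator;

public class HeapQueue<E> implements Cloneable {
    private final Comparator<E> comparator;
    private int highest;
    private E[] data;
    // ... other fields, add(), etc. as in your class ...

    @SuppressWarnings("unchecked")
    public HeapQueue(int a_highest, Comparator<E> compare) {
        this.comparator = compare;
        this.highest = a_highest;
        this.data = (E[]) new Object[10];
    }

    private void siftUp(int nodeIndex) {
        if (nodeIndex != 0) {
            int parentIndex = parent(nodeIndex);
            // A negative result means the parent orders before ("is smaller than") the child.
            if (comparator.compare(data[parentIndex], data[nodeIndex]) < 0) {
                E tmp = data[parentIndex];
                data[parentIndex] = data[nodeIndex];
                data[nodeIndex] = tmp;
                siftUp(parentIndex);
            }
        }
    }

    private int parent(int nodeIndex) {
        return (nodeIndex - 1) / 2;
    }
}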
If I am reading this right, E simply needs to extend Comparable, and then your problem line becomes...
if (data[parentIndex].compareTo(data[nodeIndex]) < 0)
This is not breaking any best-practice rules that I know of.
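For example, a sketch of the declaration change this implies (the bound on E is the standard idiom for this; everything else stays as in your class):
// Restrict E so that compareTo is guaranteed to exist on the elements.
public class HeapQueue<E extends Comparable<? super E>> implements Cloneable {
    // ... fields, constructor, add(), parent() unchanged ...

    private void siftUp(int nodeIndex) {
        if (nodeIndex != 0) {
            int parentIndex = parent(nodeIndex);
            if (data[parentIndex].compareTo(data[nodeIndex]) < 0) {
                E tmp = data[parentIndex];
                data[parentIndex] = data[nodeIndex];
                data[nodeIndex] = tmp;
                siftUp(parentIndex);
            }
        }
    }
}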