I tried Dijkstra's algorithm and got confused between two implementations: one keeps track of visited nodes (code 1) and the other does not (code 2).
Code1:
dis[S] = 0; // S is source
vis[S] = 1;
PriorityQueue<Node> queue = new PriorityQueue<>();
queue.offer(new Node(S, 0));
while (!queue.isEmpty()) {
    Node node = queue.poll();
    int u = node.v;
    vis[u] = 1;
    for (Node n : adj1.get(u)) {
        int v = n.v;
        int w = n.w;
        if (vis[v] == 0) {
            if (dis[u] + w < dis[v]) {
                dis[v] = dis[u] + w;
            }
            queue.offer(n);
        }
    }
}
code2:
dis[S] = 0;
//vis[S] = 1;
PriorityQueue<Node> queue = new PriorityQueue<>();
queue.offer(new Node(S, 0));
while (!queue.isEmpty()) {
    Node node = queue.poll();
    int u = node.v;
    // vis[u] = 1;
    for (Node n : adj1.get(u)) {
        int v = n.v;
        int w = n.w;
        // if(vis[v] == 0){
        if (dis[u] + w < dis[v]) {
            dis[v] = dis[u] + w;
            queue.offer(n);
        }
        // }
    }
}
Code 1 fails some test cases while code 2 passes all of them. Can anyone explain why code 1 fails, i.e. what edge cases am I missing?
Both of your implementations are incorrect. "code 2" might work, but it's probably slow.
The most obvious problem in both implementations is that your priority queue is full of edges. In Dijkstra's algorithm, the queue orders vertices by their currently best discovered cost. There is just no way that your priority queue could be doing this properly.
I would guess that you're actually ordering edges by weight. This will give you vertices in the wrong order, which could cause "code 1" to fail because of the visited check.
The next problem is the implementation of the visited check. If you're not using a heap that supports a decrease_key() operation, then you have to add a vertex to the queue every time you decrease the weight, so it could end up in the queue multiple times. When it comes out of the queue, you will know its best cost, so you can save time by ignoring the other instances in the queue when you see them.
You should not use the visited check to ensure that a vertex is only added once, because then keys can never be decreased and the implementation is broken.
A proper implementation looks like this:
// initialize dis to max value
for (int i = 0; i < dis.length; ++i) {
    dis[i] = Integer.MAX_VALUE;
}
dis[S] = 0; // S is source
// Note that S isn't scanned yet, so vis[S] == 0
PriorityQueue<PriorityNode> queue = new PriorityQueue<>();
queue.offer(new PriorityNode(S, 0));
while (!queue.isEmpty()) {
    PriorityNode node = queue.poll();
    int u = node.vertex;
    if (vis[u] != 0) {
        // already resolved this vertex
        continue;
    }
    vis[u] = 1;
    for (Edge n : adj1.get(u)) {
        int v = n.v;
        int w = n.w;
        if (dis[u] + w < dis[v]) {
            dis[v] = dis[u] + w;
            // We found a better cost for v
            queue.offer(new PriorityNode(v, dis[v]));
        }
    }
}
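The snippet above assumes a PriorityNode class that orders queue entries by the cost a vertex was inserted with. A minimal sketch of what that class could look like (the name and fields are taken from the snippet, not from your original code):

class PriorityNode implements Comparable<PriorityNode> {
    // A vertex together with the distance it was inserted with.
    // Ordering by that distance is what makes the priority queue pop the
    // currently cheapest unresolved vertex first.
    final int vertex;
    final int dist;

    PriorityNode(int vertex, int dist) {
        this.vertex = vertex;
        this.dist = dist;
    }

    @Override
    public int compareTo(PriorityNode other) {
        return Integer.compare(this.dist, other.dist);
    }
}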
Given a rooted tree with N nodes, where node 1 is the root. Each node i has a value val[i] associated with it.
For each node i (1 <= i <= N) we want to know the MEX of the values on the path from the root to node i.
The MEX of an array is the smallest positive integer not present in the array; for instance, the MEX of {1,2,4} is 3.
Example: say we are given a tree with 4 nodes. The node values are [1,3,2,8], and we also have the parent of each node i (other than node 1, which is the root). The parent array is [1,2,2] for this example, meaning the parent of node 2 is node 1, the parent of node 3 is node 2, and the parent of node 4 is also node 2.
Node 1 : MEX(1) = 2
Node 2 : MEX(1,3) = 2
Node 3 : MEX(1,3,2) = 4
Node 4 : MEX(1,3,8) = 2
Hence answer is [2,2,4,2]
In the worst case the total number of nodes can be up to 10^6 and each node value can be up to 10^9.
Attempt:
Approach 1: We know the MEX of N elements always lies between 1 and N+1. I was trying to use this observation for the tree problem, but here N keeps changing dynamically as one proceeds towards the leaf nodes.
Approach 2: Another thought was to create an array of N+1 empty slots and fill them in as we walk down from the root. The challenge I faced was keeping track of the first unfilled value in this array.
public class TestClass {
    public static void main(String[] args) throws IOException {
        BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
        PrintWriter wr = new PrintWriter(System.out);
        int T = Integer.parseInt(br.readLine().trim());
        for (int t_i = 0; t_i < T; t_i++) {
            int N = Integer.parseInt(br.readLine().trim());
            String[] arr_val = br.readLine().split(" ");
            int[] val = new int[N];
            for (int i_val = 0; i_val < arr_val.length; i_val++) {
                val[i_val] = Integer.parseInt(arr_val[i_val]);
            }
            String[] arr_parent = br.readLine().split(" ");
            int[] parent = new int[N-1];
            for (int i_parent = 0; i_parent < arr_parent.length; i_parent++) {
                parent[i_parent] = Integer.parseInt(arr_parent[i_parent]);
            }
            int[] out_ = solve(N, val, parent);
            System.out.print(out_[0]);
            for (int i_out_ = 1; i_out_ < out_.length; i_out_++) {
                System.out.print(" " + out_[i_out_]);
            }
            System.out.println();
        }
        wr.close();
        br.close();
    }

    static int[] solve(int N, int[] val, int[] parent) {
        // Write your code here
        int[] result = new int[val.length];
        ArrayList<ArrayList<Integer>> temp = new ArrayList<>();
        ArrayList<Integer> curr = new ArrayList<>();
        if (val[0] == 1) {
            curr.add(2);
        } else {
            curr.add(1);
            curr.add(val[0]);
        }
        result[0] = curr.get(0);
        temp.add(new ArrayList<>(curr));
        for (int i = 1; i < val.length; i++) {
            int parentIndex = parent[i-1] - 1;
            curr = new ArrayList<>(temp.get(parentIndex));
            int nodeValue = val[i];
            boolean enter = false;
            while (curr.size() > 0 && nodeValue == curr.get(0)) {
                curr.remove(0);
                nodeValue++;
                enter = true;
            }
            if (curr.isEmpty())
                curr.add(nodeValue);
            else if (!curr.isEmpty() && curr.contains(nodeValue) == false && (enter || curr.get(0) < nodeValue))
                curr.add(nodeValue);
            Collections.sort(curr);
            temp.add(new ArrayList<>(curr));
            result[i] = curr.get(0);
        }
        return result;
    }
}
This can be done in time O(n log n) using augmented BSTs.
Imagine you have a data structure that supports the following operations:
insert(x), which adds a copy of the number x.
remove(x), which removes a copy of the number x.
mex(), which returns the MEX of the collection.
With something like this available, you can easily solve the problem by doing a recursive tree walk, inserting items when you start visiting a node and removing those items when you leave a node. That will make n calls to each of these functions, so the goal will be to minimize their costs.
We can do this using augmented BSTs. For now, imagine that all the numbers in the original tree are distinct; we’ll address the case when there are duplicates later. Start off with Your BST of Choice and augment it by having each node store the number of nodes in its left subtree. This can be done without changing the asymptotic cost of an insertion or deletion (if you haven’t seen this before, check out the order statistic tree data structure). You can then find the MEX as follows. Starting at the root, look at its value and the number of nodes in its left subtree. One of the following will happen:
The node’s value k is exactly one plus the number of nodes in the left subtree. That means that all the values 1, 2, 3, …, k are in the tree, so the MEX will be the smallest value missing from the right subtree. Recursively find the MEX of the right subtree. As you do, remember that you’ve already seen the values from 1 to k by subtracting k off of all the values you find there as you encounter them.
The node’s value k is at least two more than the number of nodes in its left subtree. That means there’s a gap somewhere among the values in the left subtree together with the root. Recursively find the MEX of the left subtree.
Once you step off the tree, you can look at the last node where you went right and add one to it to get the MEX. (If you never went right, the MEX is 1).
This is a top-down pass on a balanced tree that does O(1) work per node, so it takes a total of O(log n) work.
The only complication is what happens if a value in the original tree (not the augmented BST) is duplicated on a path. But that’s easy to fix: just add a count field to each BST node tracking how many times it’s there, incrementing it when an insert happens and decrementing it when a remove happens. Then, only remove the node from the BST in the case where the frequency drops to zero.
Overall, each operation on such a tree takes time O(log n), so this gives an O(n log n)-time algorithm for your original problem.
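To make the MEX query concrete, here is a minimal sketch of the top-down walk described above. It assumes a BST node class with value, count, leftCount, left, and right fields, where leftCount (the number of distinct values in the left subtree) and the per-value frequency are kept up to date by the insert/remove operations; balancing and those operations are omitted here.

class BstNode {
    int value;
    int count;      // how many copies of 'value' are currently on the path
    int leftCount;  // number of distinct values in the left subtree
    BstNode left, right;
}

class MexQuery {
    // MEX of the stored values: the smallest positive integer not present.
    // 'prefix' is the largest k such that 1..k are all known to be present.
    static int mex(BstNode root) {
        int prefix = 0;
        BstNode node = root;
        while (node != null) {
            // All values in this subtree are > prefix, and node.leftCount of
            // them lie strictly between prefix and node.value.
            if (node.value - prefix == node.leftCount + 1) {
                // prefix+1 .. node.value are all present; any gap is to the right.
                prefix = node.value;
                node = node.right;
            } else {
                // Some value in (prefix, node.value) is missing; look left.
                node = node.left;
            }
        }
        return prefix + 1;
    }
}

For example, with the values {1, 2, 4} stored in the BST, the walk confirms 1 and 2 are present, steps right, finds the gap before 4, and returns 3.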
public class PathMex {
    static void dfs(int node, int mexVal, int[] res, int[] values, ArrayList<ArrayList<Integer>> adj, HashMap<Integer, Integer> map) {
        if (!map.containsKey(values[node])) {
            map.put(values[node], 1);
        } else {
            map.put(values[node], map.get(values[node]) + 1);
        }
        while (map.containsKey(mexVal)) mexVal++;
        res[node] = mexVal;
        ArrayList<Integer> children = adj.get(node);
        for (Integer child : children) {
            dfs(child, mexVal, res, values, adj, map);
        }
        if (map.containsKey(values[node])) {
            if (map.get(values[node]) == 1) {
                map.remove(values[node]);
            } else {
                map.put(values[node], map.get(values[node]) - 1);
            }
        }
    }

    static int[] findPathMex(int nodes, int[] values, int[] parent) {
        ArrayList<ArrayList<Integer>> adj = new ArrayList<>(nodes);
        HashMap<Integer, Integer> map = new HashMap<>();
        int[] res = new int[nodes];
        for (int i = 0; i < nodes; i++) {
            adj.add(new ArrayList<Integer>());
        }
        for (int i = 0; i < nodes - 1; i++) {
            adj.get(parent[i] - 1).add(i + 1);
        }
        dfs(0, 1, res, values, adj, map);
        return res;
    }

    public static void main(String args[]) {
        Scanner sc = new Scanner(System.in);
        int nodes = sc.nextInt();
        int[] values = new int[nodes];
        int[] parent = new int[nodes - 1];
        for (int i = 0; i < nodes; i++) {
            values[i] = sc.nextInt();
        }
        for (int i = 0; i < nodes - 1; i++) {
            parent[i] = sc.nextInt();
        }
        int[] res = findPathMex(nodes, values, parent);
        for (int i = 0; i < nodes; i++) {
            System.out.print(res[i] + " ");
        }
    }
}
I need some help: I'm writing a program that processes a sequence of requests (int keys) and looks each one up in a cache stored as a list.
Let's say i have a cache with 3 numbers,
20 30 10.
Sequence of requests with 6 numbers,
20 30 5 30 5 20.
The program starts with the first number in the sequence of requests and goes through the cache, comparing the request with every number in the cache one at a time, stopping when it finds a match. A match increments a variable hit. A variable compCount measures the number of comparisons it took to find the match. If compCount is more than 1, in other words if the key found in the cache is not at the head of the LinkedList, the program moves the key to the head of the LinkedList.
Below shows the new cache after 30 is compared with the cache:
30 20 10
On the other hand, if it is a miss, the program will add the key to the head of the LinkedList.
Below shows the new cache after 5 is compared with the cache:
5 30 20 10
Below is what I have done so far:
static void moveToFront() {
    int key = 0;
    int hit = 0;
    int cacheSize = initCount;
    boolean found = false;
    int[] comparisons = new int[reqCount];
    for (int i = 0; i < reqCount; i++) {
        found = false;
        key = reqData[i];
        int compCount = 0;
        Node curr = head;
        while (curr != null) {
            compCount++;
            if (curr.data == key) {
                found = true;
                comparisons[i] = compCount;
            }
            curr = curr.next;
        }
        if (found == true) {
            hit++;
        } else {
            Node newNode = new Node(key);
            newNode.next = null;
            newNode.prev = tail;
            if (tail != null) {
                tail.next = newNode;
            } else {
                head = newNode;
            }
            tail = newNode;
            cacheSize++;
            comparisons[i] = compCount;
        }
    }
    for (int x = 0; x < reqCount; x++) {
        System.out.print(comparisons[x] + " ");
    }
    System.out.println();
    System.out.println(hit + " h");
    printList(); // prints the updated list
}
There are multiple things wrong with this chunk of code. Instead of adding it to the front, I added the key to the tail of the LinkedList if it is a miss. Also, I have not found a way to move the number in the LinkedList to the head. I figured this chunk of code may be a good place to start from but I'm all out of ideas.
Below is the chunk of code for the Doubly Linked List:
class Node {
    public int data;
    public Node next;
    public Node prev;
    public int freq;

    // constructor to create a new node with data equals to parameter i
    public Node(int i) {
        next = null;
        data = i;
        freq = 1;
    }
}
I am also not allowed to use any built in methods. I am open to any thoughts and suggestions. Thank you!
Edit: The comparisons array holds the number of comparisons for each request in the sequence of requests.
Edit 2: The output is as shown below:
1 2 3 2 4 1
5 h
List: 20 30 10 5
The first line is from the comparisons array, second line is total number of hits and the last line is the updated list.
Instead of adding it to the front, I added the key to the tail of the
LinkedList if it is a miss.
The code should be as follows:
if (found == true) {
    hit++;
} else {
    Node newNode = new Node(key);
    newNode.next = head;
    if (head != null) {
        head.prev = newNode;
    } else {
        tail = newNode; // list was empty, so the new node is also the tail
    }
    head = newNode;     // the new node becomes the new head of the list
    cacheSize++;
    comparisons[i] = compCount;
}
Also, I have not found a way to move the number in the LinkedList to
the head.
After the following loop:
for (int x = 0; x < reqCount; x++) {
    System.out.print(comparisons[x] + " ");
}
you need to put the following code:
// Note: this assumes the cache is kept in an array cacheData rather than
// the linked list used in the question.
for (int x = 0; x < reqCount; x++) {
    if (comparisons[x] > 1) {
        int temp = cacheData[0];
        for (int i = cacheSize - 1; i >= 1; i--) {
            cacheData[i] = cacheData[i-1];
        }
        cacheData[0] = reqData[x];
    }
}
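Since the cache in the question is a doubly linked list rather than an array, here is an alternative minimal sketch of moving an already-found node to the head, using the head/tail fields and Node class from the question. The helper name is hypothetical; it would be called once the matching node has been found with compCount greater than 1:

// Unlink 'curr' from its current position and splice it in at the head.
// Assumes curr is a node already in the list and head/tail are the list
// fields from the question.
static void moveNodeToHead(Node curr) {
    if (curr == head) {
        return;                     // already at the front, nothing to do
    }
    curr.prev.next = curr.next;     // unlink: predecessor skips over curr
    if (curr.next != null) {
        curr.next.prev = curr.prev;
    } else {
        tail = curr.prev;           // curr was the tail, so the tail moves back
    }
    curr.prev = null;               // relink curr in front of the old head
    curr.next = head;
    head.prev = curr;
    head = curr;
}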
I am trying to find the average of each level in a binary tree. I am doing BFS, using a dummy node as a level marker: whenever I poll the dummy node, that means I have reached the end of that level. The problem I am facing is that the average of the last level is never added with this approach. Can someone help me?
Consider example [3,9,20,15,7]
I am getting the output [3.00000, 14.50000], i.e. I am not getting the average of the last level (15 and 7).
Here's my code
/**
 * Definition for a binary tree node.
 * public class TreeNode {
 *     int val;
 *     TreeNode left;
 *     TreeNode right;
 *     TreeNode(int x) { val = x; }
 * }
 */
public class Solution {
    public List<Double> averageOfLevels(TreeNode root) {
        List<Double> list = new ArrayList<Double>();
        double sum = 0.0;
        Queue<TreeNode> q = new LinkedList<TreeNode>();
        TreeNode temp = new TreeNode(0);
        q.offer(root);
        q.offer(temp);
        int count = 0;
        while (!q.isEmpty()) {
            root = q.poll();
            sum += root.val;
            if (root != temp) {
                count++;
                if (root.left != null) {
                    q.offer(root.left);
                }
                if (root.right != null) {
                    q.offer(root.right);
                }
            } else {
                if (!q.isEmpty()) {
                    list.add(sum / count);
                    sum = 0;
                    count = 0;
                    q.add(temp);
                }
            }
        }
        return list;
    }
}
Take a look at this code, which executes whenever you find the marker for the end of the current level:
if (!q.isEmpty()) {
    list.add(sum / count);
    sum = 0;
    count = 0;
    q.add(temp);
}
This if statement seems to be designed to check whether you've finished the last row in the tree, which you could detect by noting that there are no more entries in the queue that would correspond to the next level. In that case, you're correct that you don't want to add the dummy node back into the queue (that would cause an infinite loop), but notice that you're also not computing the average in the row you just finished.
To fix this, you'll want to compute the average of the last row independently of reseeding the queue, like this:
if (!q.isEmpty()) {
    q.add(temp);
}
list.add(sum / count);
sum = 0;
count = 0;
There's a new edge case to watch out for, and that's what happens if the tree is totally empty. I'll let you figure out how to proceed from here. Good luck!
I would use a recursive depth-first scan of the tree. For each node I would push its value into a map keyed by level (a Map<Integer, List<Integer>>).
I did NOT test this code, but it should be along these lines.
void scan(int level, TreeNode n, Map<Integer, List<Integer>> m) {
    List<Integer> l = m.get(level);
    if (l == null) {
        l = new ArrayList<>();
        m.put(level, l);
    }
    l.add(n.val);
    int nextLevel = level + 1;
    if (n.left != null) scan(nextLevel, n.left, m);
    if (n.right != null) scan(nextLevel, n.right, m);
}
Once the scan is done I can calculate the average for each level.
for (int lvl : m.keySet()) {
    List<Integer> l = m.get(lvl);
    // MathUtils.avg() - it is obvious what it should be
    double avg = MathUtils.avg(l);
    // your code here
}
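As a usage sketch, assuming the TreeNode class from the question and a non-null root, the two pieces could be wired together like this (a TreeMap keeps the levels in order, so the resulting list runs from the root level downward; the averaging is written out instead of using a MathUtils helper):

// Build the per-level value lists, then reduce each list to its average.
Map<Integer, List<Integer>> byLevel = new TreeMap<>(); // keys = levels, kept sorted
scan(0, root, byLevel);

List<Double> averages = new ArrayList<>();
for (List<Integer> levelValues : byLevel.values()) {
    long sum = 0;
    for (int v : levelValues) {
        sum += v;
    }
    averages.add((double) sum / levelValues.size());
}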
I have a Graph class with a bunch of nodes, edges, etc., and I'm trying to perform Dijkstra's algorithm. I start off adding all the nodes to a priority queue. Each node has a boolean flag for whether it is already 'known', a reference to the node that comes before it, and an int dist field that stores its distance from the source node. After adding all the nodes to the PQ and flagging the source node appropriately, I've noticed that the wrong node is pulled off the PQ first. The node with the smallest dist field should come off first (since they are all initialized to a very high number except for the source, the first node off the PQ should be the source... except it isn't, for some reason).
Below is my code for the algorithm followed by my compare method within my Node class.
public void dijkstra() throws IOException {
    buildGraph_u();
    PriorityQueue<Node> pq = new PriorityQueue<>(200, new Node());
    for (int y = 0; y < input.size(); y++) {
        Node v = input.get(array.get(y));
        v.dist = 99999;
        v.known = false;
        v.prnode = null;
        pq.add(v);
    }
    source.dist = 0;
    source.known = true;
    source.prnode = null;
    int c = 1;
    while (c != input.size()) {
        Node v = pq.remove();
        //System.out.println(v.name);
        //^ Prints a node that isn't the source
        v.known = true;
        c++;
        List<Edge> listOfEdges = getAdjacent(v);
        for (int x = 0; x < listOfEdges.size(); x++) {
            Edge edge = listOfEdges.get(x);
            Node w = edge.to;
            if (!w.known) {
                int cvw = edge.weight;
                if (v.dist + cvw < w.dist) {
                    w.dist = v.dist + cvw;
                    w.prnode = v;
                }
            }
        }
    }
}
public int compare(Node d1, Node d2) {
    int dist1 = d1.dist;
    int dist2 = d2.dist;
    if (dist1 > dist2)
        return 1;
    else if (dist1 < dist2)
        return -1;
    else
        return 0;
}
Can anyone help me find the issue with my PQ?
The priority queue assumes that an element's ordering does not change after it has been inserted.
So instead of inserting all of the elements into the priority queue up front, you can:
Start with just one node.
Loop while the priority queue is not empty.
Do nothing if the polled element is already "known".
Whenever you find a smaller distance for a node, add it to the priority queue with the "right" weight.
So you need to store something else in the priority queue: a pair consisting of the distance at insertion time and the node itself, as sketched below.
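A minimal sketch of that idea, reusing the Node, Edge, and getAdjacent() pieces from the question and a hypothetical QueueEntry pair class (it assumes every node's dist field has already been initialized to a large value, as in the question):

// Hypothetical queue entry: a node paired with its distance at insertion time.
// Only that snapshot distance is compared, so stale entries sort correctly
// and are simply skipped when polled.
class QueueEntry implements Comparable<QueueEntry> {
    final Node node;
    final int distAtInsert;

    QueueEntry(Node node, int distAtInsert) {
        this.node = node;
        this.distAtInsert = distAtInsert;
    }

    @Override
    public int compareTo(QueueEntry other) {
        return Integer.compare(this.distAtInsert, other.distAtInsert);
    }
}

PriorityQueue<QueueEntry> pq = new PriorityQueue<>();
source.dist = 0;
pq.add(new QueueEntry(source, 0));          // start with just the source
while (!pq.isEmpty()) {
    Node v = pq.remove().node;
    if (v.known) continue;                  // stale entry, this node is already settled
    v.known = true;
    for (Edge edge : getAdjacent(v)) {
        Node w = edge.to;
        if (!w.known && v.dist + edge.weight < w.dist) {
            w.dist = v.dist + edge.weight;
            w.prnode = v;
            pq.add(new QueueEntry(w, w.dist)); // re-insert with the smaller distance
        }
    }
}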
I have attempted to implement Dijkstra's algorithm from the pseudocode on the Wikipedia page. I have set a condition after the queue is polled that tests whether the current node is the target node, b. If so, the algorithm is to break and return the path from a to b.
This condition will always be satisfied, as I know that all nodes within the range of the adjacency matrix do indeed exist. The program models the connections of the London Underground map.
Anyway, I have been trying to figure this out for a while now, and thus far it eludes me. Maybe somebody can spot the issue. Oh, adj is just the adjacency matrix for my graph.
/**
 Implementation of Dijkstra's Algorithm taken from "Introduction to
 Algorithms" by Cormen, Leiserson, Rivest and Stein. Third edition.

 d  = Array of all distances.
 pi = Previous vertices.
 S  = Set of vertices whose final shortest path weights have been determined.
 Q  = Priority queue of vertices.
**/
public ArrayList<Integer> dijkstra(Integer a, Integer b) {
    final double[] d = new double[adj.length];
    int[] pi = new int[adj.length];
    HashSet<Integer> S = new HashSet<Integer>();
    PriorityQueue<Integer> Q = new PriorityQueue<Integer>(d.length, new Comparator<Integer>() {
        public int compare(Integer a, Integer b) {
            Double dblA = d[a-1];
            Double dblB = d[b-1];
            return dblA.compareTo(dblB);
        }
    });
    for (int i = 0; i < d.length; i++) {
        d[i] = Double.POSITIVE_INFINITY;
    }
    d[a] = 0f;
    for (int i = 0; i < d.length; i++) {
        Q.add(i+1);
    }
    while (Q.size() > 0) {
        int u = Q.poll();
        if (u == b) {
            System.out.println("jjd");
            ArrayList<Integer> path = new ArrayList<Integer>();
            for (int i = pi.length-1; i >= 0; i--) {
                path.add(pi[i]);
            }
            return path;
        }
        S.add(u);
        if (d[u] == Double.POSITIVE_INFINITY) {
            break;
        }
        for (int v = 0; v < adj.length; v++) {
            double tmp = d[u] + adj[u][v];
            if (tmp < d[v]) {
                d[v] = tmp;
                pi[v] = u;
            }
        }
    }
    return new ArrayList<Integer>();
}
EDIT: After doing some debugging, it seems that the body of the while loop is executed only once.
if (d[u] == Double.POSITIVE_INFINITY) {
    break;
}
for (int v = 0; v < adj.length; v++) {
    double tmp = d[u] + adj[u][v];
    if (tmp < d[v]) {
        d[v] = tmp;
        pi[v] = u;
    }
}
Changing the d values in the loop body doesn't rearrange the priority queue, so unless the element that happens to be at the top of the queue after popping the initial node is one of its neighbours, you will have d[u] == Double.POSITIVE_INFINITY in the next iteration and break.
In Dijkstra's algorithm, it is important that the queue be updated when the distance of a node changes. java.util.PriorityQueue<E> doesn't offer that functionality, so using it is non-trivial; I see no way to use it other than removing and re-adding the updated nodes on every update (see the sketch below). That is of course not very efficient, since removal is O(size).
The inefficiency can be mitigated by not having all nodes in the queue. Start with adding only the initial node, and in the loop, insert only the neighbours not yet seen, and remove and reinsert the neighbours that already are in the queue. That keeps the queue shorter and makes removal cheaper on average.
For an efficient implementation, you would need a custom priority queue that allows faster (O(log size)?) update of priorities.
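For illustration, a hedged sketch of what the relaxation step would have to look like with java.util.PriorityQueue, using the variables from the question (and assuming the queue entries and the indices into d[] are kept consistent):

// Relaxing edge (u, v): PriorityQueue does not re-sort when d[v] changes,
// so the entry must be removed and re-added to restore the heap order.
double tmp = d[u] + adj[u][v];
if (tmp < d[v]) {
    Q.remove(Integer.valueOf(v)); // O(size) linear scan: this is the expensive part
    d[v] = tmp;
    pi[v] = u;
    Q.add(v);                     // re-inserted under the new, smaller key
}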
If 'jjd' is printed to the console when you run the program (from your System.out.println), your problem should be this:
if (u == b) {
    System.out.println("jjd");
    ArrayList<Integer> path = new ArrayList<Integer>();
    for (int i = pi.length-1; i >= 0; i--) {
        path.add(pi[i]);
    }
    return path;
}
When you call 'return path;' you exit the whole method immediately and return 'path'.