I believe the space complexity here is O(n) because of the recursion, but since I'm also storing everything in lists, I'm not sure whether that pushes it higher.
The algorithm returns the right side view of a binary tree: if root = 5, root.left = 3, root.right = 4, then the right side view is [5, 4]. If there were no root.right, the right side view would be [5, 3].
What is the space complexity of this code?
Code:
import java.util.*;

public class TestingJavaCode {

    public List<Integer> rightSideView(TreeNode root) {
        ArrayList<Integer> res = new ArrayList<>();
        List<Integer> leftS = new ArrayList<>();
        if (root == null) return res;
        res.add(root.val);
        leftS.add(root.val);
        // View contributed by the right subtree.
        if (root.right != null) {
            for (int i : rightSideView(root.right)) {
                res.add(i);
            }
        }
        // View contributed by the left subtree.
        if (root.left != null) {
            for (int j : rightSideView(root.left)) {
                leftS.add(j);
            }
        }
        // Levels deeper than the right-side view are filled in from the left-side view.
        for (int k = 0; k < leftS.size(); k++) {
            if (k > res.size() - 1) res.add(leftS.get(k));
        }
        return res;
    }

    public static void main(String[] args) {
        TestingJavaCode c = new TestingJavaCode();
        TreeNode s4 = new TreeNode(1);
        s4.left = new TreeNode(2);
        s4.right = new TreeNode(3);
        s4.left.left = new TreeNode(4);
        s4.left.right = new TreeNode(5);
        // Expected output: [1, 3, 5]
        System.out.println(c.rightSideView(s4).toString());
    }
}

// Definition for a binary tree node.
class TreeNode {
    int val;
    TreeNode left;
    TreeNode right;

    TreeNode(int x) {
        val = x;
    }
}
The space complexity is O(n). Storing the data in lists instead of arrays adds a small constant overhead, but it does not change the asymptotic space usage.
I suppose you take n to be the number of nodes in the tree. In that case the recursion does not take O(n) space but space proportional to the tree height. If you assume the tree is somewhat balanced, the space complexity of the recursion itself is O(log n) (but it is O(n) for an arbitrary tree, which degenerates to a list in the worst case).
But since you end up storing every node in a list, the space complexity degenerates to O(n) in all cases.
So yes, the space complexity is O(n), but your reasoning was not correct.
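For comparison, here is a hedged sketch (my own, not from the question) of an iterative level-order version of the same method: the recursion stack disappears, and the dominant space term becomes the queue, which in the worst case holds roughly half the nodes of the tree at once, i.e. O(n). It reuses the TreeNode class from the code above.

import java.util.*;

// Sketch of an iterative, level-order right side view (not the question's code).
// The queue never holds more than one full level of nodes; the widest level of a
// tree can contain up to about n/2 nodes.
class RightSideViewBFS {
    public List<Integer> rightSideView(TreeNode root) {
        List<Integer> res = new ArrayList<>();
        if (root == null) return res;
        Queue<TreeNode> queue = new LinkedList<>();
        queue.offer(root);
        while (!queue.isEmpty()) {
            int levelSize = queue.size();
            for (int i = 0; i < levelSize; i++) {
                TreeNode node = queue.poll();
                // The last node polled on this level is the one visible from the right.
                if (i == levelSize - 1) res.add(node.val);
                if (node.left != null) queue.offer(node.left);
                if (node.right != null) queue.offer(node.right);
            }
        }
        return res;
    }
}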
Related
Given a rooted tree having N nodes, with node 1 as the root. Each node i has some value val[i] associated with it.
For each node i (1 <= i <= N) we want to know the MEX of the path values from the root node to node i.
The MEX of an array is the smallest positive integer not present in the array; for instance, the MEX of {1,2,4} is 3.
Example: Say we are given a tree with 4 nodes. The values of the nodes are [1,3,2,8] and we are also given the parent of each node i (other than node 1, as it is the root). The parent array is [1,2,2] for this example: the parent of node 2 is node 1, the parent of node 3 is node 2, and the parent of node 4 is also node 2.
Node 1 : MEX(1) = 2
Node 2 : MEX(1,3) = 2
Node 3 : MEX(1,3,2) = 4
Node 4 : MEX(1,3,8) = 2
Hence answer is [2,2,4,2]
In the worst case, the total number of nodes can be up to 10^6 and the value of each node can go up to 10^9.
Attempt:
Approach 1: We know the MEX of N elements is always between 1 and N+1. I was trying to use this observation on the tree problem, but here N keeps changing dynamically as one proceeds towards the leaf nodes.
Approach 2: Another thought was to create an array of N+1 empty slots and fill them in as we walk down from the root node. The challenge I faced there was how to keep track of the first unfilled value in that array (one way to do this is sketched right after this list).
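For the bookkeeping in Approach 2, one simple trick (a hedged sketch with my own names, not part of the judge's template) is to keep the values on the current root-to-node path in a set together with a pointer that only ever moves forward, since a path only gains values as you descend from the root:

import java.util.*;

// Sketch: tracking the "first unfilled value" with a forward-moving pointer.
class MexPointerSketch {
    // Advance the MEX candidate past every value already present on the path.
    static int advanceMex(Set<Integer> present, int mex) {
        while (present.contains(mex)) {
            mex++;
        }
        return mex;
    }

    public static void main(String[] args) {
        Set<Integer> present = new HashSet<>(Arrays.asList(1, 3, 2, 8));
        System.out.println(advanceMex(present, 1)); // prints 4, the MEX of {1, 3, 2, 8}
    }
}

The remaining difficulty is undoing this bookkeeping when the traversal backtracks into a sibling subtree.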
import java.io.*;
import java.util.*;

public class TestClass {
    public static void main(String[] args) throws IOException {
        BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
        PrintWriter wr = new PrintWriter(System.out);
        int T = Integer.parseInt(br.readLine().trim());
        for (int t_i = 0; t_i < T; t_i++) {
            int N = Integer.parseInt(br.readLine().trim());
            String[] arr_val = br.readLine().split(" ");
            int[] val = new int[N];
            for (int i_val = 0; i_val < arr_val.length; i_val++) {
                val[i_val] = Integer.parseInt(arr_val[i_val]);
            }
            String[] arr_parent = br.readLine().split(" ");
            int[] parent = new int[N - 1];
            for (int i_parent = 0; i_parent < arr_parent.length; i_parent++) {
                parent[i_parent] = Integer.parseInt(arr_parent[i_parent]);
            }
            int[] out_ = solve(N, val, parent);
            System.out.print(out_[0]);
            for (int i_out_ = 1; i_out_ < out_.length; i_out_++) {
                System.out.print(" " + out_[i_out_]);
            }
            System.out.println();
        }
        wr.close();
        br.close();
    }

    static int[] solve(int N, int[] val, int[] parent) {
        // Write your code here
        int[] result = new int[val.length];
        ArrayList<ArrayList<Integer>> temp = new ArrayList<>();
        ArrayList<Integer> curr = new ArrayList<>();
        if (val[0] == 1) {
            curr.add(2);
        } else {
            curr.add(1);
            curr.add(val[0]);
        }
        result[0] = curr.get(0);
        temp.add(new ArrayList<>(curr));
        for (int i = 1; i < val.length; i++) {
            int parentIndex = parent[i - 1] - 1;
            curr = new ArrayList<>(temp.get(parentIndex));
            int nodeValue = val[i];
            boolean enter = false;
            while (curr.size() > 0 && nodeValue == curr.get(0)) {
                curr.remove(0);
                nodeValue++;
                enter = true;
            }
            if (curr.isEmpty())
                curr.add(nodeValue);
            else if (!curr.isEmpty() && !curr.contains(nodeValue) && (enter || curr.get(0) < nodeValue))
                curr.add(nodeValue);
            Collections.sort(curr);
            temp.add(new ArrayList<>(curr));
            result[i] = curr.get(0);
        }
        return result;
    }
}
This can be done in time O(n log n) using augmented BSTs.
Imagine you have a data structure that supports the following operations:
insert(x), which adds a copy of the number x.
remove(x), which removes a copy of the number x.
mex(), which returns the MEX of the collection.
With something like this available, you can easily solve the problem by doing a recursive tree walk, inserting items when you start visiting a node and removing those items when you leave a node. That will make n calls to each of these functions, so the goal will be to minimize their costs.
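A hedged sketch of that walk, assuming a hypothetical MexStructure interface with exactly the three operations listed above (the interface and the dfs signature are my own, not from any library):

import java.util.*;

// Hypothetical interface matching the three operations described above.
interface MexStructure {
    void insert(int x);
    void remove(int x);
    int mex();
}

class MexWalkSketch {
    // Insert on entry, record the answer for this node, recurse, remove on exit.
    static void dfs(int node, List<List<Integer>> children, int[] val,
                    MexStructure ds, int[] answer) {
        ds.insert(val[node]);
        answer[node] = ds.mex();
        for (int child : children.get(node)) {
            dfs(child, children, val, ds, answer);
        }
        ds.remove(val[node]);
    }
}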
We can do this using augmented BSTs. For now, imagine that all the numbers in the original tree are distinct; we’ll address the case when there are duplicates later. Start off with Your BST of Choice and augment it by having each node store the number of nodes in its left subtree. This can be done without changing the asymptotic cost of an insertion or deletion (if you haven’t seen this before, check out the order statistic tree data structure). You can then find the MEX as follows. Starting at the root, look at its value and the number of nodes in its left subtree. One of the following will happen:
The node’s value k is exactly one plus the number of nodes in the left subtree. That means that all the values 1, 2, 3, …, k are in the tree, so the MEX will be the smallest value missing from the right subtree. Recursively find the MEX of the right subtree. As you do, remember that you’ve already seen the values from 1 to k by subtracting k off of all the values you find there as you encounter them.
The node’s value k is at least two more than the number of nodes in its left subtree. That means there’s a gap somewhere among the values in the left subtree plus the root, so recursively find the MEX of the left subtree.
Once you step off the tree, you can look at the last node where you went right and add one to it to get the MEX. (If you never went right, the MEX is 1).
This is a single top-down pass that does O(1) work per node it visits, and on a balanced tree the path has O(log n) nodes, so a mex() query takes O(log n) work in total.
The only complication is what happens if a value in the original tree (not the augmented BST) is duplicated on a path. But that’s easy to fix: just add a count field to each BST node tracking how many times it’s there, incrementing it when an insert happens and decrementing it when a remove happens. Then, only remove the node from the BST in the case where the frequency drops to zero.
Overall, each operation on such a tree takes time O(log n), so this gives an O(n log n)-time algorithm for your original problem.
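A minimal sketch of the mex() descent described above, assuming a node type that already maintains the size of its left subtree (the Node fields and names are mine; keeping leftCount up to date during inserts, removes, and rebalancing is omitted):

// Sketch only: the mex() walk on an order-statistic BST as described above.
class OrderStatMexSketch {
    static class Node {
        int key;        // a distinct positive integer currently on the path
        int leftCount;  // number of keys stored in the left subtree
        Node left, right;
    }

    static int mex(Node root) {
        int alreadyCovered = 0;   // we know 1..alreadyCovered are all present
        Node cur = root;
        while (cur != null) {
            // This node's key, relative to the prefix we have already accounted for.
            int relKey = cur.key - alreadyCovered;
            if (relKey == cur.leftCount + 1) {
                // Everything up to cur.key is present: the gap lies to the right.
                alreadyCovered = cur.key;
                cur = cur.right;
            } else {
                // A gap exists among the smaller values: go left.
                cur = cur.left;
            }
        }
        return alreadyCovered + 1;  // smallest positive integer not present
    }
}

Here alreadyCovered plays the role of the "subtract k off the values" bookkeeping from the description above.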
import java.util.*;

// DFS with a HashMap of the values on the current root-to-node path; the MEX
// pointer only moves forward as the path grows.
public class PathMex {

    static void dfs(int node, int mexVal, int[] res, int[] values,
                    ArrayList<ArrayList<Integer>> adj, HashMap<Integer, Integer> map) {
        // Add this node's value to the multiset of path values.
        if (!map.containsKey(values[node])) {
            map.put(values[node], 1);
        } else {
            map.put(values[node], map.get(values[node]) + 1);
        }
        // The MEX can only grow as the path grows, so continue from the parent's MEX.
        while (map.containsKey(mexVal)) mexVal++;
        res[node] = mexVal;
        ArrayList<Integer> children = adj.get(node);
        for (Integer child : children) {
            dfs(child, mexVal, res, values, adj, map);
        }
        // Backtrack: remove this node's value from the path.
        if (map.containsKey(values[node])) {
            if (map.get(values[node]) == 1) {
                map.remove(values[node]);
            } else {
                map.put(values[node], map.get(values[node]) - 1);
            }
        }
    }

    static int[] findPathMex(int nodes, int[] values, int[] parent) {
        ArrayList<ArrayList<Integer>> adj = new ArrayList<>(nodes);
        HashMap<Integer, Integer> map = new HashMap<>();
        int[] res = new int[nodes];
        for (int i = 0; i < nodes; i++) {
            adj.add(new ArrayList<Integer>());
        }
        for (int i = 0; i < nodes - 1; i++) {
            adj.get(parent[i] - 1).add(i + 1);
        }
        dfs(0, 1, res, values, adj, map);
        return res;
    }

    public static void main(String args[]) {
        Scanner sc = new Scanner(System.in);
        int nodes = sc.nextInt();
        int[] values = new int[nodes];
        int[] parent = new int[nodes - 1];
        for (int i = 0; i < nodes; i++) {
            values[i] = sc.nextInt();
        }
        for (int i = 0; i < nodes - 1; i++) {
            parent[i] = sc.nextInt();
        }
        int[] res = findPathMex(nodes, values, parent);
        for (int i = 0; i < nodes; i++) {
            System.out.print(res[i] + " ");
        }
    }
}
I used the following recursive algorithm to generate all possible binary search trees with n nodes:
public List<TreeNode> generateTrees(int n) {
    if (n == 0) {
        List<TreeNode> empty = new ArrayList<TreeNode>();
        return empty;
    }
    return recurHelper(1, n);
}

public List<TreeNode> recurHelper(int start, int end) {
    if (start > end) {
        TreeNode nan = null;
        List<TreeNode> empty = new ArrayList<TreeNode>();
        empty.add(nan);
        return empty;
    }
    List<TreeNode> result = new ArrayList<TreeNode>();
    for (int i = start; i <= end; i++) {
        List<TreeNode> left = recurHelper(start, i - 1);
        List<TreeNode> right = recurHelper(i + 1, end);
        for (TreeNode leftBranch : left) {
            for (TreeNode rightBranch : right) {
                TreeNode tree = new TreeNode(i);
                tree.left = leftBranch;
                tree.right = rightBranch;
                result.add(tree);
            }
        }
    }
    return result;
}
I wonder what the space complexity of this recursion is. Should it be O(h), where h is the height of the tree?
I do not think so because at each level we are storing a result consisting of O(lG_l) elements, where l stands for the level and G_l stands for the number of possible trees with l nodes.
Then it seems to me that the space complexity of the recursion should be nG_n + ... + 1G_1.
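As a worked note on that estimate (my own back-of-the-envelope addition, not part of the original question): G_n is the n-th Catalan number, and the top-level result list alone already contains G_n roots, so the space cannot be O(h). Concretely,

G_n = C_n = \frac{1}{n+1}\binom{2n}{n} \sim \frac{4^n}{n^{3/2}\sqrt{\pi}},

so even before counting the intermediate lists built inside the recursion, the stored output is at least proportional to C_n, which dwarfs the O(n) depth of the recursion stack.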
I am trying to find the average of each level in a binary tree. I am doing BFS with a dummy marker node: whenever I dequeue the dummy node, I know I am at the end of the current level. The problem I am facing is that the average of the last level never gets added. Can someone help me?
Consider example [3,9,20,15,7]
I am getting the output as [3.00000, 14.50000], i.e. I am not getting the average of the last level (15 and 7).
Here's my code
/**
* Definition for a binary tree node.
* public class TreeNode {
* int val;
* TreeNode left;
* TreeNode right;
* TreeNode(int x) { val = x; }
* }
*/
public class Solution {
    public List<Double> averageOfLevels(TreeNode root) {
        List<Double> list = new ArrayList<Double>();
        double sum = 0.0;
        Queue<TreeNode> q = new LinkedList<TreeNode>();
        TreeNode temp = new TreeNode(0);
        q.offer(root);
        q.offer(temp);
        int count = 0;
        while (!q.isEmpty()) {
            root = q.poll();
            sum += root.val;
            if (root != temp) {
                count++;
                if (root.left != null) {
                    q.offer(root.left);
                }
                if (root.right != null) {
                    q.offer(root.right);
                }
            } else {
                if (!q.isEmpty()) {
                    list.add(sum / count);
                    sum = 0;
                    count = 0;
                    q.add(temp);
                }
            }
        }
        return list;
    }
}
Take a look at this code, which executes whenever you find the marker for the end of the current level:
if (!q.isEmpty()) {
    list.add(sum / count);
    sum = 0;
    count = 0;
    q.add(temp);
}
This if statement seems to be designed to check whether you've finished the last row in the tree, which you could detect by noting that there are no more entries in the queue that would correspond to the next level. In that case, you're correct that you don't want to add the dummy node back into the queue (that would cause an infinite loop), but notice that you're also not computing the average in the row you just finished.
To fix this, you'll want to compute the average of the last row independently of reseeding the queue, like this:
if (!q.isEmpty()) {
    q.add(temp);
}
list.add(sum / count);
sum = 0;
count = 0;
There's a new edge case to watch out for, and that's what happens if the tree is totally empty. I'll let you figure out how to proceed from here. Good luck!
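For reference, a hedged sketch of the whole method with that fix applied plus one possible guard for the empty tree (my own variant, not necessarily the only way to handle the edge case); it drops into the same Solution class with the same imports:

public List<Double> averageOfLevels(TreeNode root) {
    List<Double> list = new ArrayList<Double>();
    if (root == null) return list;          // empty tree: nothing to average
    Queue<TreeNode> q = new LinkedList<TreeNode>();
    TreeNode temp = new TreeNode(0);        // dummy level marker
    q.offer(root);
    q.offer(temp);
    double sum = 0.0;
    int count = 0;
    while (!q.isEmpty()) {
        TreeNode node = q.poll();
        if (node != temp) {
            sum += node.val;
            count++;
            if (node.left != null) q.offer(node.left);
            if (node.right != null) q.offer(node.right);
        } else {
            // End of a level: always record its average, then re-seed the marker
            // only if there is another level left to process.
            list.add(sum / count);
            sum = 0;
            count = 0;
            if (!q.isEmpty()) {
                q.add(temp);
            }
        }
    }
    return list;
}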
I would use a recursive deep scan of the tree. At each node I would push the value into a map keyed by level, i.e. a pair of (level, list of values at that level).
I did NOT test this code, but it should be along these lines.
void scan(int level, TreeNode n, Map<Integer, List<Integer>> m) {
    List<Integer> l = m.get(level);
    if (l == null) {
        l = new ArrayList<Integer>();
        m.put(level, l);
    }
    l.add(n.val);
    int nextLevel = level + 1;
    if (n.left != null) scan(nextLevel, n.left, m);
    if (n.right != null) scan(nextLevel, n.right, m);
}
Once the scan is done I can calculate the average for each level.
// use a TreeMap (or sort the keys) if the levels must come out in order
for (int lvl : m.keySet()) {
    List<Integer> l = m.get(lvl);
    // average of the values on this level (the original sketch delegated this
    // to a MathUtils.avg() helper)
    double avg = 0;
    for (int v : l) avg += v;
    avg /= l.size();
    // your code here
}
I'm researching how to find the k values in a BST that are closest to a target, and came across the following implementation of the problem with these rules:
Given a non-empty binary search tree and a target value, find k values in the BST that are closest to the target.
Note:
Given target value is a floating point.
You may assume k is always valid, that is: k ≤ total nodes.
You are guaranteed to have only one unique set of k values in the BST that are closest to the target. Assume that the BST is balanced.
And the idea of the implementation is:
Compare the predecessors and successors of the target: we can use two stacks to track them, and then, as in the merge step of merge sort, repeatedly compare the two tops, pick the one closer to the target, and put it into the result list. In-order traversal gives us the predecessors in sorted order, whereas reverse in-order traversal gives us the successors in sorted order.
Code:
import java.util.*;

class TreeNode {
    int val;
    TreeNode left, right;

    TreeNode(int x) {
        val = x;
    }
}

public class ClosestBSTValueII {

    List<Integer> closestKValues(TreeNode root, double target, int k) {
        List<Integer> res = new ArrayList<>();
        Stack<Integer> s1 = new Stack<>(); // predecessors
        Stack<Integer> s2 = new Stack<>(); // successors
        inorder(root, target, false, s1);
        inorder(root, target, true, s2);
        while (k-- > 0) {
            if (s1.isEmpty()) {
                res.add(s2.pop());
            } else if (s2.isEmpty()) {
                res.add(s1.pop());
            } else if (Math.abs(s1.peek() - target) < Math.abs(s2.peek() - target)) {
                res.add(s1.pop());
            } else {
                res.add(s2.pop());
            }
        }
        return res;
    }

    // inorder traversal
    void inorder(TreeNode root, double target, boolean reverse, Stack<Integer> stack) {
        if (root == null) {
            return;
        }
        inorder(reverse ? root.right : root.left, target, reverse, stack);
        // early terminate, no need to traverse the whole tree
        if ((reverse && root.val <= target) || (!reverse && root.val > target)) {
            return;
        }
        // track the value of current node
        stack.push(root.val);
        inorder(reverse ? root.left : root.right, target, reverse, stack);
    }

    public static void main(String args[]) {
        ClosestBSTValueII cv = new ClosestBSTValueII();
        TreeNode root = new TreeNode(53);
        root.left = new TreeNode(30);
        root.left.left = new TreeNode(20);
        root.left.right = new TreeNode(42);
        root.right = new TreeNode(90);
        root.right.right = new TreeNode(100);
        System.out.println(cv.closestKValues(root, 40, 2));
    }
}
And my question is, what's the reason for having two stacks and how is in-order a good approach? What's the purpose of each? Wouldn't traversing it with one stack be enough?
And what's the point of having a reverse boolean, such as for inorder(reverse ? ...);? And in the case of if ((reverse && root.val <= target) || (!reverse && root.val > target)), why do you terminate early?
Thank you in advance; I will accept an answer and upvote.
The idea of the algorithm you found is quite simple. They just do an in-order traversal of the tree from the place where the target would be inserted, using two stacks to store predecessors and successors. Let's take this tree as an example:
    5
   / \
  3   9
 / \   \
2   4   11
Let the target be 8. When all the inorder calls are finished, the stacks will be s1 = {2, 3, 4, 5} and s2 = {11, 9}. As you can see, s1 contains all predecessors of the target and s2 all its successors. Moreover, both stacks are ordered so that the top of each stack is closer to the target than any other value in that stack. As a result, we can easily find the k closest values just by repeatedly comparing the tops of the stacks and popping the closer one until we have k values. The running time of their algorithm is O(n).
Now about your questions. I don't know how to implement this algorithm effectively with only one stack; the problem with a stack is that we only have access to its top. But it is extremely easy to implement the algorithm with one array. Just do a usual in-order traversal of the tree; for my example we get arr = {2, 3, 4, 5, 9, 11}. Then place indexes l and r at the values closest to the target from each side: l = 3, r = 4 (arr[l] = 5, arr[r] = 9). What is left is to repeatedly compare arr[l] and arr[r], choose which one to add to the result (exactly the same as with the two stacks), and move the chosen index outward. This algorithm also takes O(n) operations.
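A hedged sketch of that one-array variant (my own code, reusing the TreeNode class from the question's code):

import java.util.*;

// Sketch of the one-array approach described above: collect the in-order values,
// place l/r around the target, then merge outward like the two-stack version.
class ClosestKWithArray {
    static List<Integer> closestKValues(TreeNode root, double target, int k) {
        List<Integer> arr = new ArrayList<>();
        inorder(root, arr);
        // r = first index whose value is greater than target, l = the index before it.
        int r = 0;
        while (r < arr.size() && arr.get(r) <= target) r++;
        int l = r - 1;
        List<Integer> res = new ArrayList<>();
        while (k-- > 0 && (l >= 0 || r < arr.size())) {
            if (l < 0) {
                res.add(arr.get(r++));
            } else if (r >= arr.size()) {
                res.add(arr.get(l--));
            } else if (Math.abs(arr.get(l) - target) < Math.abs(arr.get(r) - target)) {
                res.add(arr.get(l--));
            } else {
                res.add(arr.get(r++));
            }
        }
        return res;
    }

    static void inorder(TreeNode node, List<Integer> arr) {
        if (node == null) return;
        inorder(node.left, arr);
        arr.add(node.val);
        inorder(node.right, arr);
    }
}

For the question's tree with target 40 and k = 2 this yields [42, 30], the same result as the two-stack version.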
Their approach to the problem seems to me a bit too hard to understand in code, though it is rather elegant.
I'd like to introduce another approach with a different running time. It takes O(k * log n) time, which is better than the previous algorithm for small k and worse for large k.
Let's also store a pointer to the parent node in the TreeNode class. Then we can find the predecessor or successor of any node easily in O(log n) time (look up in-order successor with parent pointers if you don't know how). So, first find the predecessor and successor of the target in the tree (without doing any full traversals!). Then do the same as with the stacks: compare the predecessor and successor, choose the closer one, and replace the chosen one with its own predecessor/successor.
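For completeness, a sketch of the successor step with parent pointers (the question's TreeNode has no parent field, so this uses a small hypothetical node type of its own; the predecessor is symmetric):

// Sketch only: in-order successor using a parent pointer.
class PNode {
    int val;
    PNode left, right, parent;
}

class SuccessorSketch {
    static PNode successor(PNode node) {
        if (node.right != null) {
            // Successor is the leftmost node of the right subtree.
            PNode cur = node.right;
            while (cur.left != null) cur = cur.left;
            return cur;
        }
        // Otherwise climb until we arrive at a parent from its left child.
        PNode cur = node;
        while (cur.parent != null && cur.parent.right == cur) {
            cur = cur.parent;
        }
        return cur.parent; // null if node held the maximum value
    }
}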
I hope I answered your questions and that my explanations make sense. If not, feel free to ask!
The reason why you need two stacks is that you must traverse the tree in two directions and compare the current value of each stack with the value you're searching for (you may end up with k values greater than the searched value, or k/2 greater and k/2 lower).
I think you should use stacks of TreeNode rather than stacks of Integer; that way you can avoid recursion.
UPDATE:
I see two phases in the algorithm:
1) Locate the closest value in the tree; this simultaneously builds the initial stack.
2) Make a copy of the stack and move it back one element; this gives you the second stack. Then iterate at most k times: see which of the two elements on top of the stacks is closer to the searched value, add it to the result list, and move that stack forward or backward.
UPDATE 2: A little code
public static List<Integer> closest(TreeNode root, int val, int k) {
    Stack<TreeNode> right = locate(root, val);
    Stack<TreeNode> left = new Stack<>();
    left.addAll(right);
    moveLeft(left);
    List<Integer> result = new ArrayList<>();
    for (int i = 0; i < k; ++i) {
        if (left.isEmpty()) {
            if (right.isEmpty()) {
                break;
            }
            result.add(right.peek().val);
            moveRight(right);
        } else if (right.isEmpty()) {
            result.add(left.peek().val);
            moveLeft(left);
        } else {
            int lval = left.peek().val;
            int rval = right.peek().val;
            if (Math.abs(val - lval) < Math.abs(val - rval)) {
                result.add(lval);
                moveLeft(left);
            } else {
                result.add(rval);
                moveRight(right);
            }
        }
    }
    return result;
}

private static Stack<TreeNode> locate(TreeNode p, int val) {
    Stack<TreeNode> stack = new Stack<>();
    while (p != null) {
        stack.push(p);
        if (val < p.val) {
            p = p.left;
        } else {
            p = p.right;
        }
    }
    return stack;
}

private static void moveLeft(Stack<TreeNode> stack) {
    if (!stack.isEmpty()) {
        TreeNode p = stack.peek().left;
        if (p != null) {
            do {
                stack.push(p);
                p = p.right;
            } while (p != null);
        } else {
            do {
                p = stack.pop();
            } while (!stack.isEmpty() && stack.peek().left == p);
        }
    }
}

private static void moveRight(Stack<TreeNode> stack) {
    if (!stack.isEmpty()) {
        TreeNode p = stack.peek().right;
        if (p != null) {
            do {
                stack.push(p);
                p = p.left;
            } while (p != null);
        } else {
            do {
                p = stack.pop();
            } while (!stack.isEmpty() && stack.peek().right == p);
        }
    }
}
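A possible usage sketch (my own addition, placed in the same class as the methods above and reusing the TreeNode class and example tree from the question):

public static void main(String[] args) {
    TreeNode root = new TreeNode(53);
    root.left = new TreeNode(30);
    root.left.left = new TreeNode(20);
    root.left.right = new TreeNode(42);
    root.right = new TreeNode(90);
    root.right.right = new TreeNode(100);
    // Expected: the two values closest to 40, printed in order of closeness: [42, 30]
    System.out.println(closest(root, 40, 2));
}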
UPDATE 3
Wouldn't traversing it with one stack be enough? And what's the point of having a reverse boolean, such as for inorder(reverse ? ...);? And in the case of if ((reverse && root.val <= target) || (!reverse && root.val > target)), why do you terminate early?
I don't know where you got the solution you gave in your question from, but to summarize, it builds two lists of Integer, one in ascending order and one in descending order, and it terminates "early" when the searched value is reached. This solution sounds very inefficient to me since it requires the traversal of the whole tree. Mine, of course, is much better, and it conforms to the given rules.
I am computing the intersection of two linked lists, where one list is of size n and the other is of size m. The code below stores the items of the smaller list in a set, so the space complexity is O(m), where m < n, i.e. m is the length of the smaller list.
But it is possible that the two linked lists are of equal length, m = n. So is the complexity O(n)?
public IntersectionAndUnionLinkedList<T> intersection(IntersectionAndUnionLinkedList<T> list) {
    final Set<T> items = new HashSet<>();
    Node<T> firstSmall = null;
    Node<T> firstBig = null;
    if (size <= list.size) {
        firstSmall = first;
        firstBig = list.first;
    } else {
        firstSmall = list.first;
        firstBig = first;
    }
    Node<T> n = null;
    for (n = firstSmall; n != null; n = n.next) {
        items.add(n.item);
    }
    IntersectionAndUnionLinkedList<T> intersectionlist = new IntersectionAndUnionLinkedList<>();
    for (n = firstBig; n != null; n = n.next) {
        if (items.contains(n.item)) {
            intersectionlist.add(n.item);
        }
    }
    return intersectionlist;
}
Well, in the case m = n we have O(m) = O(n), but it is safe to state the memory complexity as O(m), since the smaller list is the only real factor.
On the other hand, a HashSet<T> can under extreme circumstances be less memory efficient: after all it uses buckets, and the buckets can be filled in an unfortunate way. It depends on the exact implementation of the backing HashMap. Still, one expects linear memory, so O(m).
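If the size of the smaller list is known up front, one small optional tweak (my own illustration, not something the original code does) is to presize the HashSet so its bucket array is not resized while it is filled; the bound stays O(m) either way:

import java.util.*;

// Tiny illustration of the point above. The lists and values are made up for the
// example; the capacity constructor is the standard java.util.HashSet one, and the
// division by 0.75 accounts for the default load factor.
class IntersectionSpaceDemo {
    public static void main(String[] args) {
        List<Integer> smaller = Arrays.asList(1, 2, 3);          // m elements
        List<Integer> bigger  = Arrays.asList(2, 3, 4, 5, 6);    // n elements
        Set<Integer> items = new HashSet<>((int) (smaller.size() / 0.75f) + 1);
        items.addAll(smaller);                                   // O(m) extra space
        List<Integer> intersection = new ArrayList<>();
        for (int x : bigger) {
            if (items.contains(x)) intersection.add(x);
        }
        System.out.println(intersection);                        // prints [2, 3]
    }
}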