public void dfsSearch(TreeNode root, List<String> curr, List<List<String>> res) {
    curr.add(String.valueOf(root.val));
    // a leaf node has been reached
    if (root.left == null && root.right == null) {
        res.add(curr);
        return;
    }
    if (root.left != null) {
        List<String> temp = new ArrayList<>(curr);
        dfsSearch(root.left, temp, res);
    }
    if (root.right != null) {
        List<String> temp = new ArrayList<>(curr);
        dfsSearch(root.right, temp, res);
    }
}
The code above is a method that uses DFS to find all the paths from the root to the leaves in a binary tree. My question is about the line just before each recursive call: why do I need to instantiate a new list (temp) and pass it to the recursive call? Why can't I just pass curr (the argument of the function)?
Imagine this is your binary tree:

        1
       / \
      2   3
     / \
    4   5
Let's say you didn't use temp. Then you would recurse down the left side of the binary tree, so curr would accumulate 1, 2, and 4. Because lists are mutable objects shared by every call, the values stay in the list even after a recursive call returns. So after adding 4 to curr, you go back up to node 2, go right, and add 5. Thus curr ends up containing 1, 2, 4, 5 instead of what you want: 1, 2, 4 and 1, 2, 5.
The copies are there to prevent concurrent modification of curr when dfsSearch is called recursively. The first line, curr.add(String.valueOf(root.val));, modifies the curr collection, and you can't loop through a collection while modifying it.
Concurrent modification is one reason. But additionally, the algorithm collects "all paths" from the root to the leaves in the result list of lists. If you didn't create new lists at each level on the recursive way down, you would end up with one big jumbled list after the recursion bubbles back up: all paths from root to leaves collected in one mixed list instead of each path in its own list.
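For contrast, a common alternative (a sketch, not the poster's code) is to reuse a single list and backtrack: append the node, recurse, copy the path only when a leaf is reached, and remove the node again on the way out. The TreeNode fields (val, left, right) are assumed to match the code above:

// Sketch of a backtracking variant: one shared list, copied only at leaves.
public void dfsBacktrack(TreeNode root, List<String> curr, List<List<String>> res) {
    if (root == null) return;
    curr.add(String.valueOf(root.val));          // choose: extend the current path
    if (root.left == null && root.right == null) {
        res.add(new ArrayList<>(curr));          // copy the finished path once, at the leaf
    } else {
        dfsBacktrack(root.left, curr, res);
        dfsBacktrack(root.right, curr, res);
    }
    curr.remove(curr.size() - 1);                // un-choose, so sibling calls see the right prefix
}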
Related
I was wondering if someone could help explain how to reverse a singly linked list without creating new nodes or changing data in the existing nodes. I am trying to study for finals and we had this question on a previous test. They don't release answers to the coding portions of the test and I haven't been able to figure it out.
They told us the best way to reverse it was by using a "runner technique," which I believe I understand. They described it as using two pointers or counters to run through a list and gather information, but I'm not sure how to use that to reverse a singly linked list. I was able to brute-force code to reverse a list of length 2, 3, and 4, but I was unable to make a loop or do it recursively. Any code or an explanation on how to go about this would be appreciated, thank you.
You can derive the code by starting with the idea of popping elements off the input list, one by one, and pushing them onto an initially empty result list:
NODE reverse(NODE list) {
    NODE result = null;
    while (list != null) {
        NODE head = <pop the first node off list>
        <push head onto result>
    }
    return result;
}
The result will be the reverse of the input. Now substitute Java for the missing pieces:
NODE reverse(NODE list) {
    NODE result = null;
    while (list != null) {
        // pop
        NODE head = list;
        list = list.next;
        // push
        head.next = result;
        result = head;
    }
    return result;
}
And you're done...
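For completeness, here is a minimal, self-contained sketch of the same iterative reversal; the Node class and its field names (data, next) are my own assumptions, not taken from the question:

// Hypothetical minimal node type, only for demonstration purposes.
class ReverseDemo {
    static class Node {
        int data;
        Node next;
        Node(int data, Node next) { this.data = data; this.next = next; }
    }

    static Node reverse(Node list) {
        Node result = null;
        while (list != null) {
            Node head = list;     // pop the first node off the input list
            list = list.next;
            head.next = result;   // push it onto the result list
            result = head;
        }
        return result;
    }

    public static void main(String[] args) {
        Node list = new Node(1, new Node(2, new Node(3, null))); // 1 -> 2 -> 3
        for (Node n = reverse(list); n != null; n = n.next) {
            System.out.print(n.data + " ");                      // prints: 3 2 1
        }
    }
}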
It depends on your implementation of the list, but I would recurse to the end and then reverse the references.
void Reverse(List pList) {
    Reverse(pList, null, pList.First); // initial call
}

void Reverse(List pList, Node pPrevious, Node pCurrent) {
    if (pCurrent != null)
        Reverse(pList, pCurrent, pCurrent.Next); // advance to the end
    else { // once we get to the end, make the last element the first element
        pList.First = pPrevious;
        return;
    }
    pCurrent.Next = pPrevious; // reverse the references on all nodes
}
EDIT
I decided to use a HashSet instead, since that makes the pass O(N). However, I am still having an issue: it's not deleting all of the repeating numbers. For the input 10 13 11 11 12 11 10 12 11, it returns 10 13 11 12 10 11.
static void removeDups(Node node) {
    HashSet<Integer> values = new HashSet<Integer>();
    Node previous = null;
    while (node != null) {
        if (values.contains(node.data))
            previous.next = node.next;
        else
            values.add(node.data);
        previous = node;
        node = node.next;
    }
}
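My best guess at the remaining problem (an assumption, since the Node class isn't shown, and assuming the same HashSet import and Node type as the code above): previous is advanced even when a node has just been unlinked, so the next duplicate gets spliced out of the already-removed node rather than out of the live list. A possible fix is to advance previous only when the current node is kept:

// Sketch of a possible fix: only move `previous` forward past nodes that stay in the list.
static void removeDups(Node node) {
    HashSet<Integer> values = new HashSet<Integer>();
    Node previous = null;
    while (node != null) {
        if (values.contains(node.data)) {
            previous.next = node.next;   // unlink the duplicate; previous stays put
        } else {
            values.add(node.data);
            previous = node;             // advance only past nodes we keep
        }
        node = node.next;
    }
}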
Irrelevant
I am trying to remove duplicate elements from a linked list, but for some reason it does not remove the last repeating element. For instance, if the list is 10, 11, 12, 11, 12, 9, 11, it returns 10, 11, 12, 9, 11.
public static void removeDups1(Node head) {
    if (head == head.next)
        head = head.next.next;
    Node fastptr = head;
    Node slowptr = head;
    while (slowptr.next != null && fastptr.next.next != null) {
        if (slowptr.data == fastptr.data) {
            fastptr.next = fastptr.next.next;
        }
        slowptr = slowptr.next;
        fastptr = fastptr.next;
    }
}
Checking fastptr.next.next == null prematurely exits your loop.
Your algorithm is trying to find if there are any duplicates for each element from the current position to the next two positions in the linked list. But duplicates can occur anywhere in the linked list. Therefore, for each element, it should traverse through the linked list once again.
That would be an O(n^2) solution.
A better approach would be to maintain a hash to keep track of already-visited data.
This would be an O(n) solution.
I think that, since at the beginning you point both fastptr and slowptr to the same node and you always end up pointing them to the same node at the end of the while, you are always comparing the same node with itself here, which does nothing valuable for the algorithm:
if(slowptr.data == fastptr.data)
Anyway, the algorithm's logic seems all wrong.
Like others said, you should use two loops, one inside the other: the outer one for slowptr, the inner one for fastptr.
Try to implement it based on this proposition:
For each node (pointed to by slowptr, outer loop), check all subsequent nodes (inner loop) to see if they hold the same value; a sketch follows below.
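This is only a rough sketch of that proposition, assuming Node exposes an int data field and a next reference (neither is shown in the question):

// Hypothetical O(n^2) version: for each node, scan the remainder of the list
// and unlink every later node that carries the same value.
public static void removeDups1(Node head) {
    for (Node slow = head; slow != null; slow = slow.next) {
        Node runner = slow;
        while (runner.next != null) {
            if (runner.next.data == slow.data) {
                runner.next = runner.next.next; // skip the duplicate
            } else {
                runner = runner.next;
            }
        }
    }
}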
I don't see real Java when I look at your code. But anyway, as #Aishwarya said, a better solution is to build a map or hash set for better performance. Using Java's built-in classes it is even simpler. Just do:
LinkedList<Node> yourList = ...
LinkedList<Node> filteredList = new LinkedList<>(new HashSet<Node>(yourList));
To make this work properly you must make sure that Node's equals(Object o) and hashCode() are correctly implemented.
Then your (generic) duplicate removal function might be:
public static LinkedList<Node> removeDups(LinkedList<Node> pList) {
    // indeed, spelling out <Node> explicitly is not really needed
    return new LinkedList<>(new HashSet<>(pList));
}
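The question never shows the Node class, so this is only a guess at what "correctly implemented" could look like if Node wraps an int payload; the field name data is an assumption:

// Hypothetical Node with value-based equality, so HashSet can recognize duplicates.
class Node {
    int data;
    Node next;

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Node)) return false;
        return this.data == ((Node) o).data;  // equality based on the payload only
    }

    @Override
    public int hashCode() {
        return Integer.hashCode(data);        // must agree with equals
    }
}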
Essentially, this method takes a linked list as a parameter and subtracts its contents from the calling linked list wherever the same contents exist in both. I have to do it this way (so no changing the parameters).
For instance: l1.subtractList(l2) would subtract the contents of l2 from l1.
The trouble here is that the calling Linked List has 2 of the same number and that number is also in the Linked List passed as a parameter. I only have to remove one instance of it as well.
I've managed to subtract everything but that duplicate number, but I'm not sure what I'm doing wrong. Keep in mind this is a very new subject to me, so I may be way off base. But I appreciate any and all help you may offer. Thanks.
public void subtractList(LinkedList list)
{
    Node current = head;
    Node<Integer> temp = list.getFirst();
    Integer count = -1;
    while (current != null)
        if (current == temp) {
            count++;
            list.listRemove(count);
            temp = list.getFirst();
        }
        else
        {
            current = current.getNext();
        }
}
What is the listRemove method? Why do you need count? Just traverse the argument list and check whether each of its elements (temp) exists in the calling linked list. You will need an outer loop traversing the list passed as an argument, and an inner loop iterating over the calling list to check for the value that needs to be removed and to remove it if required:
while (temp != null)
{
    while (current != null) {
        // Check if the element exists in the list
        // If yes, remove it from the calling list
    }
    // Repeat
    temp = temp.getNext();
}
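For concreteness, here is a self-contained sketch of that two-loop idea using a bare-bones node type of my own (plain data/next fields) rather than the poster's LinkedList API; it removes at most one matching node from the calling list per element of the argument list:

// Hypothetical sketch: subtract `other` from this list, removing one match per element.
class IntList {
    static class Node {
        int data;
        Node next;
        Node(int data) { this.data = data; }
    }

    Node head;

    public void subtractList(IntList other) {
        for (Node t = other.head; t != null; t = t.next) {        // outer loop: argument list
            Node prev = null;
            for (Node cur = head; cur != null; cur = cur.next) {  // inner loop: calling list
                if (cur.data == t.data) {
                    if (prev == null) head = cur.next;            // matched the first node: move head
                    else prev.next = cur.next;                    // matched an interior node: unlink it
                    break;                                        // remove only one instance per match
                }
                prev = cur;
            }
        }
    }
}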
I have two solutions for printing all paths from the root to all leaves in a binary tree, shown below.
In Sol 1, I used a List as an argument in the recursion to collect the path from the root to each leaf node, and after returning from the recursion I have to remove the node that was added.
Based on my knowledge, this is because the List is an object stored on the heap and shared by every call.
So each recursive call uses the same List object, and thus the remove is needed.
However, in Sol 2, I used an array as an argument and I don't need to remove anything after the recursion, unlike with the List. I don't understand why.
Based on my understanding, since an array is also an object stored on the heap and shared by every recursive call, it should behave the same as the List case, and I thought I would need to undo the write after the recursive call. But that is not true.
Could you explain why nothing has to be removed after the recursive call in the array case, unlike with the List? Is my understanding of the List case correct? Please clarify, as it's confusing for me.
Sol 1: recursive 1 - using List
void printPathsFromRootToLeavesRec1(BTNode node, List<BTNode> list) {
    if (node == null) return;
    list.add(node);
    // visited and added nodes from root --> left subtree --> right subtree
    if (node.left == null && node.right == null)
        printNodeList(list);
    printPathsFromRootToLeavesRec1(node.left, list);
    printPathsFromRootToLeavesRec1(node.right, list);
    // Note 1: KEY POINT = Remove after print !!!
    list.remove(node);
}
Sol 2: Recursive 2 - using array
void printPathFromRootToLeavsRec2(BTNode node, BTNode[] array, int index) {
    if (node == null) return;
    array[index] = node;
    index++;
    if (node.left == null && node.right == null) printPathArray(array, index);
    printPathFromRootToLeavsRec2(node.left, array, index);
    printPathFromRootToLeavsRec2(node.right, array, index);
    // We don't need to remove the node in the array case, unlike with the List
}
Because of index++. With the List, every call appends to the end, so the only reason the path stays correct is that you remove the element again at the end of the call. With the array, because the incremented index is passed along by value, each call writes to a fixed position, and the sibling call simply overwrites that slot.
Because, in the array, we just overwrite the element in the next function call, so there's no need to remove it.
Note that with the List we're doing an add (which always appends to the end, which obviously won't be overwritten by doing another add, thus we need a remove), but with the array we're simply setting the index-th element (so, if there's already an element at that position, we simply overwrite it).
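A tiny self-contained demo of that difference, with Strings standing in for tree nodes (the names are made up, not taken from the original code):

import java.util.ArrayList;
import java.util.List;

class OverwriteVsAppend {
    public static void main(String[] args) {
        // Array: both "sibling" recursive calls write to the same slot, so the second
        // simply overwrites the first and no cleanup is needed in between.
        String[] path = new String[3];
        int index = 2;
        path[index] = "leftLeaf";
        path[index] = "rightLeaf";

        // List: add() always appends, so without a remove() the second leaf piles up
        // behind the first and the list is no longer a real root-to-leaf path.
        List<String> pathList = new ArrayList<>();
        pathList.add("root");
        pathList.add("leftLeaf");   // path for the left leaf: [root, leftLeaf]
        pathList.add("rightLeaf");  // without removing leftLeaf first: [root, leftLeaf, rightLeaf]
    }
}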
I am writing a function that will take in the head of a linked list, remove all duplicates, and return the new head. I've tested it but I want to see if you can catch any bugs or improve on it.
Node removeDuplicates(Node head) {
    if (head == null) throw new RuntimeException("Invalid linked list");
    Node cur = head.next;
    while (cur != null) {
        if (head.data == cur.data) {
            head = head.next;
        } else {
            Node runner = head;
            while (runner.next != cur) {
                if (runner.next.data == cur.data) {
                    runner.next = runner.next.next;
                    break;
                }
                runner = runner.next;
            }
            cur = cur.next;
        }
    }
    return head;
}
If you are willing to spend a little more RAM on the process, you can make it go much faster without changing the structure.
For desktop apps, I normally favor using more RAM to gain some speed. So I would do something like this:
void removeDuplicates(Node head) {
    if (head == null) {
        throw new RuntimeException("Invalid List");
    }
    Node current = head;
    Node prev = null;
    // T is the type of your data; it must implement the necessary methods (equals/hashCode) to be added to a Set properly.
    Set<T> data = new HashSet<T>();
    while (current != null) {
        if (!data.contains(current.data)) {
            data.add(current.data);
            prev = current;
            current = current.next;
        } else {
            if (prev != null) {
                prev.next = current.next;
                current = current.next;
            }
        }
    }
}
This should run in O(n) time.
EDIT
I hope I was correct in assuming that this is some kind of project / homework where you are being forced to use a linked list, otherwise, as noted, you would be better off using a different data structure.
I didn't check your code for bugs, but I do have a suggestion for improving it. Allocate a Hashtable or HashMap that maps Node to Boolean. As you process each element, if it is not a key in the hash, add it (with Boolean.TRUE as the value). If it does exist as a key, then it already appeared in the list and you can simply remove it.
This is faster than your method because hash lookups work in roughly constant time, while you have an inner loop that has to go down the entire remainder of the list for each list element.
Also, you might consider whether using an equals() test instead of == makes better sense for your application.
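A hedged sketch of that suggestion follows; note it keys the map on the node's data (assumed here to be an int, as in the question) rather than on the Node object itself, which sidesteps the equals()/hashCode() question:

import java.util.HashMap;
import java.util.Map;

// Sketch only; assumes the question's Node type with an int data field and a next reference.
class DedupWithMap {
    static Node removeDuplicates(Node head) {
        Map<Integer, Boolean> seen = new HashMap<>();
        Node prev = null;
        Node current = head;
        while (current != null) {
            if (seen.containsKey(current.data)) {
                prev.next = current.next;             // value already appeared earlier: unlink this node
            } else {
                seen.put(current.data, Boolean.TRUE); // first time we see this value
                prev = current;                       // only advance prev past nodes we keep
            }
            current = current.next;
        }
        return head;
    }
}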
To efficiently remove duplicates you should stay away from a linked list: use java.util.PriorityQueue instead; it is a sorted collection for which you can define the sorting criteria. If you always insert into a sorted collection, removing duplicates can either be done directly upon insertion or on demand with a single O(n) pass.
Aside from using the elements of the list to build a hash map and testing each element by using it as a key (which is only worthwhile for a large number of elements, where "large" depends on the resources required to create the hash map), sequentially scanning the list is a practical option, but there are others which will be faster. See user138170's answer here: an in-place merge sort is an O(n log(n)) operation which does not use extra space, whereas a solution using a separately allocated array would work in O(n) time. Practically, you may want to profile the code and settle on a reasonable value of n, where n is the number of elements in the list, after which a solution that allocates memory is used instead of one which does not.
Edit: If efficiency is really important, I would suggest not using a linked list (largely to preserve cache coherency) and perhaps using JNI and implementing the function natively; the data would also have to be supplied in a natively allocated buffer.