I was looking through the LinkedList API for Java 7 and noticed something curious: there does not appear to be a "remove before (or after)" type of method. For example, if I have a 100-element LinkedList and I want to remove the first 20 elements, Java seems to force you to remove them one at a time, rather than moving the start pointer to the 21st element and deleting the link between elements 20 and 21. It seems like this is an operation that could be done in O(1) time, instead of the O(n) time Java seems to force on you.
Am I missing something here, or is it just a glaring hole in Java?
EDIT
I am aware of the subList(int, int) method in the List interface. I still think it will be slightly less efficient than a dedicated "chop off the first (or last) n" method would be.
EDIT 2
Touché to everyone who pointed out that finding the nth element is not O(1). However easy it is to chop off the first n-1 elements, it still takes O(n) time to find the nth.
However, as Dilum Ranatunga points out, there is the possibility that an iterator already exists at the given position. In this case, it would still be useful to say "I am here, remove all before me".
It's still an O(n) operation no matter what you do.
You don't have direct access to the individual nodes of the linked list, so you would have to traverse the list first to get access to the 21st node. Once you were there it would be O(1) to 're-head' the list, but it's still O(n) for the entire, atomic operation.
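To make that concrete, here is a minimal sketch using a hand-rolled singly linked node (java.util.LinkedList does not expose its internal nodes, so this Node class is purely illustrative):

final class Node {
    int value;
    Node next;
    Node(int value, Node next) { this.value = value; this.next = next; }
}

// O(n): walk to the node at index n.
static Node nodeAt(Node head, int n) {
    Node cur = head;
    for (int i = 0; i < n; i++) cur = cur.next;
    return cur;
}

// Dropping the first 20 elements is then a single O(1) pointer assignment,
// but only after the O(n) walk above:
// head = nodeAt(head, 20);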
LinkedList inherits a subList method from the List interface. Information can be found here: http://docs.oracle.com/javase/6/docs/api/java/util/List.html#subList(int, int)
It is impossible to jump immediately to the nth element, because each node only holds the address of the next node in the chain (Node.next). So the list naturally has to traverse 20 elements, giving O(n) runtime.
For example:
[Node 1, address of Node 2] -> [Node 2, address of node 3] -> etc... -> [Node 20, address of Node 21]
You cannot get the "address of Node 21" without getting to Node 20 first, and to do so you need the "address of Node 20" from Node 19, and so on.
What is your algorithm for finding item n in O(1) time? Locating the nth item and setting it as the head would still be an O(n) operation. You would save some intermediate pointer assignments compared to 20 removes, but not a huge gain.
You can call list = list.subList(20, list.size()) to get the sublist from index 20 (the 21st element, since indices are zero-based) to the end of the list.
The operation is O(1), but you do not get a LinkedList back; you get an AbstractList.SubList, which acts as a wrapper around the original LinkedList, delegating method calls with the sublist's offset applied.
Although this is constant time, it is important to note that this is not a new list. If your list size went from n to m, subsequent linear-time methods called on the sublist will still run in O(n), not O(m).
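For example, a minimal sketch of the call in context (assuming java.util.LinkedList and java.util.List are imported):

LinkedList<Integer> list = new LinkedList<>();
for (int i = 0; i < 100; i++) list.add(i);

// O(1): returns a view over the original list; nothing is copied or removed.
List<Integer> tail = list.subList(20, list.size());
System.out.println(tail.size()); // 80
System.out.println(tail.get(0)); // 20, i.e. the 21st element of the original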
Java's java.util.List defines the API listIterator(). ListIterator has previous().
So the idiom you want is:
// assuming listIter is already positioned at the cut point, e.g.
// ListIterator<String> listIter = list.listIterator(20);
while (listIter.hasPrevious()) {
    listIter.previous();
    listIter.remove();
}
Related
I am trying to write a report where I evaluate the time complexity of an algorithm I have designed; I know for sure that its complexity is O(n). From what I got from Wikipedia, the best case would be O(1). If I have understood correctly, that means the best case is when the ArrayList I am using contains only one element. But I don't completely get the worst case: what does "O(1) iterative" mean, and how can it occur?
In a comment you write:
In my case I am not looking for an element of the list in particular, but I need to check if every single element's attribute is true or false.
This is not a linear search. Searching (linear or otherwise) is answering the question "is there at least one matching element". Your question is "do all elements match".
I would always need to go through the whole list from the first to the last element, so what would be the worst and best case?
The best case is still O(1). If you find that one element's attribute is false, you can terminate the scan immediately. The best case is when that happens for the first element...
Consider this: checking that "all elements are true" is equivalent to checking that "NOT (some element is false)".
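In code, that equivalence looks something like this (a hedged sketch; Item, isFlag(), and items are made-up stand-ins for your element type, its boolean attribute, and your collection):

boolean allTrue = true;
for (Item item : items) {       // Item and items are hypothetical names
    if (!item.isFlag()) {       // found "some element is false"
        allTrue = false;
        break;                  // best case: this happens at the very first element
    }
}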
The reason the best case is O(1) is not JUST a list with 1 element (although it would also be O(1) in that scenario). Imagine you have a list of 10 numbers.
[44,6,1,2,6,10,93,187,33,55]
Let's say we run Linear Search and are searching for the integer 44. Since it's the first element in the list, our time complexity is O(1), the best case scenario, because we only have to search 1 element out of the entire list before we find what we're looking for.
Let's look at a variant of that list.
[55,6,1,2,6,10,93,187,33,44]
In this case we swapped the first and last numbers. So when we run Linear Search for the integer 44, the time complexity is O(n), the worst case, since we have to traverse the entire list of n elements before we find our desired element (if it even exists in the list; in our case it does).
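For reference, here is a minimal Java sketch of Linear Search (not from the original posts) that makes both cases visible:

// Returns the index of target, or -1 if it is absent.
// Best case O(1) (target is first); worst case O(n) (target is last or missing).
static int linearSearch(int[] a, int target) {
    for (int i = 0; i < a.length; i++) {
        if (a[i] == target) return i;
    }
    return -1;
}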
Regarding the "O(1) iterative" on Wikipedia, I wouldn't let it confuse you. Also notice that it's referring to space complexity on the Wikipedia page, and not time complexity performance. We don't need any extra space to store anything during Linear Search, we just compare our desired value (such as 44 in the example) with the elements in the array one by one, so we have a space complexity of O(1).
EDIT: Based upon your comment:
In my case I am not looking for an element of the list in particular
Keep in mind "Linear Search" is a particular algorithm with a specific purpose of finding a particular element in a list, which you mention is NOT what you're trying to do. It doesn't seem Linear Search is what you're looking for. Linear Search is given an array/list and a desired element. It will return the index of where the desired element occurs in the list, assuming it does exist in the list.
I would always need to go through the whole list from the first to the last element
From your comment description, I believe you're just trying to traverse a list from beginning to end, every time. That is always O(n), since you traverse the entire list each time. Consider this simple Python example:
L1 = [1,2,3,4,5,6,7,8,9,10] #Size n, where n = 10
for item in L1:
    print(item)
This will just print every item in the list. Our list is of size n. So the time complexity of the list traversal is O(n). This only applies if you want to traverse the entire list every time.
If I have a linked list of objects and I want the sublist from index 2 to 5, is this operation O(1)? All you would need to do is null the reference to prev on the node at index 2 and return that node, correct? Does this require copying the contents of the linked list into another one and returning that, or just setting the head to be the node at index 2?
Is this operation O(1)?
In general, getting a sublist of a linked list is O(k), not O(1)*.
However, for any specific index value, such as 2, 5, or 5000, any operation is O(1), because a specific index becomes a constant that gets factored out from Big-O notation.
* Construction of the sublist may be optimized such that you pay the construction cost on the first traversal of the sublist rather than at construction time. In other words, constructing a sublist without iterating it is O(1).
It appears that the sublist method runs in O(1) time. See the source code for the method.
All that this code does is return a new instance of SubList that is initialized with the list that sublist is invoked upon. No iteration is happening here, so the operation runs in constant time.
It's O(n) if you consider the general case of the algorithm. Even if you do what you said, finding the n-th and m-th elements would take a complete traversal of the list.
It's wrong to consider finding the sublist from 2 to 5 to be O(1). It does take a constant number of operations, of course, but are you really writing an algorithm just for sublist(2, 5)? If you are, then of course it is always O(1).
A better example: sorting 100 numbers is O(1), and so is sorting 10,000 numbers, but that is not what we are concerned about. We want to know the nature of the algorithm as a function of the input fed to it.
I understand that linked list insertions are constant time due to a simple rearrangement of pointers, but doesn't this require knowing the element next to which you're doing the insert?
And getting access to that element requires a linear search. So why don't we say that inserts are still bound by a linear search bottleneck first?
Edit: I am not talking about head or tail appends, but rather insertions anywhere in between.
Yes, it requires already having a reference to the node next to which you're going to insert.
So why don't we say that inserts are still bound by a linear search bottleneck first?
Because that isn't necessarily the case, if you can arrange things such that you actually do know the insertion point (not just the index, but the node).
Obviously you can "insert" at the front or end; that seems like a bit of a cheat perhaps, as it stretches the meaning of the word "insert" a bit. But consider another case: while you're appending to the list, at some point you remember a node. Just any node of your choosing, using any criterion you want to select it. Then you can easily insert after or before that node later.
That sounds like a very "constructed" situation, because it is. For a more practical case that is a lot like this (but not exactly), you could look at the Dancing Links algorithm.
Why do we say linked list inserts are constant time?
Because the insert operation is constant time.
Note that locating the position of the insert is not considered part of the insert operation itself. That would be a different operation, which may or may not be constant time, e.g. if including search time, you get:
Insert at head: Constant
Insert at tail: Constant
Insert before current element while iterating1: Constant
Insert at index position: Linear
1) Assuming you're iterating anyway.
By contrast, ArrayList's insert operation is linear time. If including search time, you get:
Insert at head: Linear
Insert at tail: Constant (amortized)
Insert before current element while iterating1: Linear
Insert at index position: Linear
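As a sketch of the "insert while iterating" row for LinkedList (assuming java.util imports): ListIterator.add places the new element at the iterator's current position, and for a LinkedList each such insert is O(1) because the iterator already holds the node.

List<Integer> list = new LinkedList<>(List.of(1, 2, 3, 4));
ListIterator<Integer> it = list.listIterator();
while (it.hasNext()) {
    if (it.next() == 3) {
        it.add(99); // O(1): spliced in immediately after the just-returned 3
    }
}
// list is now [1, 2, 3, 99, 4]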
The following two operations are different:
Operation A: Insert anywhere in the linked list
Operation B: Insert at a specific position in the linked list
Operation A can be achieved in O(1). The new element can be inserted at the head (or at the tail, if a tail pointer is maintained and that is desired).
Operation B involves finding followed by inserting. The finding part is O(n). The inserting is as above, i.e. O(1). If, however, the result of the finding is provided as input, for example if there are APIs like
Node * Find(Node * head, int find_property);
Node * InsertAfter(Node * head, Node * existing_node, Node * new_node);
then the insert part of the operation is O(1).
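A minimal Java sketch of that split (the ListNode class and method name are illustrative, not a real library API):

final class ListNode {
    int value;
    ListNode next;
    ListNode(int value) { this.value = value; }
}

// O(1): splice newNode in right after existing; no traversal involved.
static void insertAfter(ListNode existing, ListNode newNode) {
    newNode.next = existing.next; // new node adopts the old successor
    existing.next = newNode;      // predecessor now points at the new node
}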
I know that given a balanced binary search tree, getting the min and max element takes O(log n). I want to ask about their implementation in C++ and Java respectively.
C++
Take std::set for example: getting the min/max can be done by calling *set.begin() / *set.rbegin(), and it's constant time.
Java
Take TreeSet for example: getting the min/max can be done by calling TreeSet.first() and TreeSet.last(), but it's logarithmic time.
I wonder if this is because std::set does some extra trick to always keep the begin() and rbegin() pointers updated. If so, can anyone show me that code? Btw, why didn't Java add this trick too? It seems pretty convenient to me...
EDIT:
My question might not be very clear. I want to see how std::set implements insert/erase; I'm curious how the begin() and rbegin() iterators are updated during those operations.
EDIT2:
I'm very confused now. Say I have the following code:
set<int> s;
s.insert(5);
s.insert(3);
s.insert(7);
... // say I inserted a total of n elements.
s.insert(0);
s.insert(9999);
cout<<*s.begin()<<endl; //0
cout<<*s.rbegin()<<endl; //9999
Aren't both *s.begin() and *s.rbegin() O(1) operations? Are you saying they aren't, and that s.rbegin() actually iterates to the last element?
My answer isn't language specific.
To fetch the MIN or MAX in a BST, you always need to iterate to the leftmost or the rightmost node respectively. This operation is O(height), which is roughly O(log n) in a balanced tree.
Now, to optimize this retrieval, a class that implements a tree can always store two extra pointers to the leftmost and the rightmost nodes, and then retrieving them becomes O(1). Of course, these pointers bring in the overhead of updating them on each insert operation into the tree.
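A hedged sketch of that idea in Java (MinMaxSet is a made-up wrapper, not a standard class; note that supporting removal would complicate the bookkeeping, since deleting the cached min or max forces a recomputation):

import java.util.TreeSet;

final class MinMaxSet {
    private final TreeSet<Integer> set = new TreeSet<>();
    private Integer min, max; // cached leftmost/rightmost values

    void insert(int value) {
        if (set.add(value)) { // O(log n) tree insert
            if (min == null || value < min) min = value; // O(1) cache update
            if (max == null || value > max) max = value;
        }
    }

    Integer min() { return min; } // O(1)
    Integer max() { return max; } // O(1)
}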
begin() and rbegin() only return iterators, in constant time. Iterating them isn't constant-time.
There is no 'begin()/rbegin() pointer'. The min and max are reached by iterating to the leftmost or rightmost elements, just as in Java, only Java doesn't have the explicit iterator.
In Java LinkedLists, we have iterators.
I can use a ListIterator and then do a linear scan to find the last element the iterator points to, but that takes O(n) time. How can I get an iterator that points to the last element in O(1) time?
The java.util.LinkedList is actually the doubly-linked variant. It can traverse from both ends. Hence, getting the first and getting the last element are equally fast.
At least this is the case with Sun's (Oracle's?) implementation.
I'm not sure if this is O(1), but if you really want an iterator to the last element (instead of just getting the last element), you can use the listIterator(index) method, where index is the position in the list you want the iterator to start from.
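For example (assuming java.util imports):

LinkedList<String> list = new LinkedList<>(List.of("a", "b", "c"));
ListIterator<String> it = list.listIterator(list.size()); // positioned just past the end
String last = it.previous(); // "c"

For java.util.LinkedList this is O(1) in practice: positioning at index list.size() requires no traversal, and the implementation walks from whichever end of the doubly linked list is closer.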
By definition of a (singly) linked list, there is a pointer only to the head or first item, and access to every other node is possible only sequentially! So I am afraid the answer is NO! There is no way you can access the last element in O(1) time. You could put the linked-list contents into an array or map, and then subsequent accesses could be O(1).