I am trying to write a report where I evaluate the time complexity of an algorithm I have designed, and I know for sure that its complexity is O(n). From what I gathered from Wikipedia, the best case would be O(1). If I have understood correctly, that means the best case is when the ArrayList I am using contains only one element. But I don't completely get the worst case: what does "O(1) iterative" mean, and how can it occur?
In a comment you write:
In my case I am not looking for an element of the list in particular, but I need to check if every single element's attribute is true or false.
This is not a linear search. Searching (linear or otherwise) is answering the question "is there at least one matching element". Your question is "do all elements match".
I would always need to go through the whole list from the first to the last element, so what would be the worst and best case?
The best case is still O(1). If you find that an element's attribute is false, you can terminate the scan immediately. The best case is when that happens for the first element.
Consider this: checking that "all elements are true" is equivalent to checking that "NOT (some element is false)".
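For instance, here is a minimal Java sketch of that early-exit scan (the Item class and its flag are made-up stand-ins for whatever attribute you are actually checking):

import java.util.List;

public class AllTrueCheck {
    record Item(boolean flag) {}   // hypothetical element with the attribute being checked

    // "All elements are true" == "NOT (some element is false)",
    // so we can stop at the first false we see.
    static boolean allTrue(List<Item> items) {
        for (Item item : items) {
            if (!item.flag()) {
                return false;      // best case: the very first element is false -> O(1)
            }
        }
        return true;               // worst case: all n elements inspected -> O(n)
    }
}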
The reason it's O(1) best case is not JUST for a list with 1 element (although this would be the case in that scenario too). Imagine you have a list of 10 numbers.
[44,6,1,2,6,10,93,187,33,55]
Let's say we run Linear Search and are searching for the integer 44. Since it's the first element in the list, our time complexity is O(1), the best case scenario, because we only have to search 1 element out of the entire list before we find what we're looking for.
Let's look at a variant of that list.
[55,6,1,2,6,10,93,187,33,44]
In this case we swapped the first and last numbers. So when we run Linear Search for the integer 44, the time complexity is O(n), the worst case, since we have to traverse the entire list of n elements before we find our desired element (if it even exists in the list; in our case it does).
Regarding the "O(1) iterative" on Wikipedia, I wouldn't let it confuse you. Also notice that it's referring to space complexity on the Wikipedia page, and not time complexity performance. We don't need any extra space to store anything during Linear Search, we just compare our desired value (such as 44 in the example) with the elements in the array one by one, so we have a space complexity of O(1).
EDIT: Based upon your comment:
In my case I am not looking for an element of the list in particular
Keep in mind "Linear Search" is a particular algorithm with a specific purpose of finding a particular element in a list, which you mention is NOT what you're trying to do. It doesn't seem Linear Search is what you're looking for. Linear Search is given an array/list and a desired element. It will return the index of where the desired element occurs in the list, assuming it does exist in the list.
I would always need to go through the whole list from the first to the last element
From your comment description, I believe you're just trying to traverse a list from beginning to end, always. This would be O(N) always, since you are always traversing the entire list. Consider this simple Python example:
L1 = [1,2,3,4,5,6,7,8,9,10]  # size n, where n = 10
for item in L1:
    print(item)
This will just print every item in the list. Our list is of size n. So the time complexity of the list traversal is O(n). This only applies if you want to traverse the entire list every time.
I had an interview task: the idea is to store elements with the fields id, name, and updateTime.
There should be methods add(Element), getElement(id), getLastUpdatedElements()
Requirements:
Code should be in Java
Should be thread-safe
The upper bound of computational complexity for all these methods should be O(log(n))
Notes
The update time of any element can change at runtime
getLastUpdatedElements - returns the elements updated within the last minute
My thoughts
I cannot use a CopyOnWriteArrayList because it would take O(N) to find the last updated elements if the key is the id, which breaks the requirement.
To fit the O(log(N)) complexity of getLastUpdatedElements() I can use a ConcurrentSkipListSet with a comparator by updateTime, but in that case it takes O(N) to get an element by id. (Note that add(Element) is then O(log(N)), since we know the updateTime of newly created elements.)
I can use two trees, the first with a comparator by id, the second with a comparator by updateTime, but then I would have to make all access methods synchronized, which effectively makes my program single-threaded.
I think I'm close; I just need a way to get an element in O(log(N)), but I'm running out of ideas.
I hope I understood you correctly.
If you need to store the elements and have add and get run in O(log(N)) time, that sounds like the classic HashMap (which, since Java 8 I believe, converts a bucket's linked list into a balanced tree once it grows past a threshold), so in the worst case a lookup is O(log(N)).
for the "get last updated" function: you can store each updated element in a stack (not really a stack, just a list you keep adding into) and when the function is performed. just perform a binary search on the list. when you reach the first item that has been updated in the last minute - just return the index to that item.
that way you only perform binary search (log(N)).
oh and of course just have a lock for those two data structures.
if you really want to dig into it performance-wise, you can implement two locks: one for inserting/updating entries, and one just for reading them.
similar to the "readers-writers problem" like so: https://www.tutorialspoint.com/readers-writers-problem
I have an array like MyArr = {1,3,5,7,9,2,4,6,8,10}. I need to iterate and print "Not Found" until I reach 2. After that, I need to print "Found" for the remaining elements.
My approach was to use Arrays.binarySearch(MyArr, 2), which returns the index of 2. I have no idea how to proceed from there.
Binary search can't be used because it only works on sorted arrays.
You need to iterate over the array. For each element, you must check if it's your target value and have your code remember the result, and print the output appropriate for the value of the result.
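For example, a minimal sketch of that loop (assuming the matching element itself should already print "Found"):

public class FoundDemo {
    public static void main(String[] args) {
        int[] myArr = {1, 3, 5, 7, 9, 2, 4, 6, 8, 10};
        boolean found = false;
        for (int value : myArr) {
            if (value == 2) {
                found = true;   // remember the match from here on
            }
            System.out.println(found ? "Found" : "Not Found");
        }
    }
}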
binarySearch only works for an input array that is sorted (in ascending order), and your input array isn't sorted.
Solution #1:
Sort the array first. Then binarySearch will find you the correct offset for the element in the sorted array.
Hint: look at the other methods in Arrays.
Note that this is not the correct solution. The actual problem statement says that you need to step through the array, 1) printing "not found" for non-matching elements and 2) printing "found" when you find the first match. Solution #1 only addresses the 2nd requirement, not the first one.
In fact, binary search cannot satisfy the first requirement.
Aside: sorting an array so that you can do a binary search ... just once ... is inefficient. You spend more time sorting than you save in searching. In complexity terms, sorting is O(N log N) and each search O(log N), giving an overall complexity of O(N log N); by contrast, a simple linear search is O(N). Hence you only "break even" if you do O(log N) binary searches per sort: k linear searches cost k*N, while one sort plus k binary searches costs N*log N + k*log N, and the two are roughly equal when k is about log N.
Solution #2:
Forget about binary search, and write a loop that steps through all of the elements of the array.
Hint: a for loop would be best, but which kind of for loop?
Suppose I have an array with duplicates:
{3,2,3,4,2,2,1,4}
I want a data structure that supports searching for and removing the first occurrence of some value faster than O(n). Say the value is 4; then the array becomes:
{3,2,3,2,2,1,4}
I also need to iterate the list from its head, in the same order. Other operations like get(index) or insert are not needed.
You may take O(n) time to load the original data (say it's an int[]) into your data structure; I just need the later search and remove to be faster than O(n).
"Search and remove" is considered ONE operation, as shown above.
If I had to build it myself, I would use a LinkedList to store the data, and a HashMap mapping every value to a list of all the nodes containing that value, together with their previous and next nodes.
Is this the right approach? Are there better choices already available in Java?
The data structure you describe, essentially a hybrid of a linked list and a map, is I think the most efficient way of handling your stated problem. You'll have to keep track of the nodes yourself, since Java's LinkedList doesn't provide access to the actual nodes. AbstractSequentialList may be helpful here.
The index structure you'll need is a map from an element value to the appearances of that element in the list. I recommend a hash table from hashCode % modulus to a linked list of (value, list of main-list nodes).
Note that this approach is still O(n) in the worst case, when you have universal hash collisions; this applies whether you use open or closed hashing. In the average case it should be something closer to O(ln(n)), but I'm not prepared to prove that.
Consider also whether the overhead of keeping track of all of this is really worth the gains. Unless you've actually profiled running code and determined that a LinkedList is causing problems because remove is O(n), stick with that until you do.
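As a rough illustration of that hybrid (a hand-rolled doubly linked list plus a map from each value to its nodes in order; the class name is made up, and it uses java.util.HashMap rather than the custom hash table suggested above):

import java.util.*;

public class FirstOccurrenceList {
    private static final class Node {
        final int value;
        Node prev, next;
        Node(int value) { this.value = value; }
    }

    private final Node head = new Node(0), tail = new Node(0);          // sentinels
    private final Map<Integer, ArrayDeque<Node>> index = new HashMap<>();

    FirstOccurrenceList(int[] data) {                                   // O(n) one-time setup
        head.next = tail; tail.prev = head;
        for (int v : data) {
            Node n = new Node(v);
            n.prev = tail.prev; n.next = tail;
            tail.prev.next = n; tail.prev = n;
            index.computeIfAbsent(v, k -> new ArrayDeque<>()).addLast(n);
        }
    }

    // Removes the first occurrence of value; O(1) on average with HashMap.
    boolean removeFirst(int value) {
        ArrayDeque<Node> nodes = index.get(value);
        if (nodes == null || nodes.isEmpty()) return false;
        Node n = nodes.pollFirst();                  // first occurrence in list order
        n.prev.next = n.next; n.next.prev = n.prev;  // unlink in O(1)
        return true;
    }

    void printAll() {                                // iterate from the head in order
        for (Node n = head.next; n != tail; n = n.next) System.out.print(n.value + " ");
        System.out.println();
    }

    public static void main(String[] args) {
        FirstOccurrenceList list = new FirstOccurrenceList(new int[]{3, 2, 3, 4, 2, 2, 1, 4});
        list.removeFirst(4);
        list.printAll();                             // 3 2 3 2 2 1 4
    }
}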
Since your requirement is that the first occurrence of the element should be removed and the remaining occurrences retained, there would be no way to do it faster than O(n), as you would definitely have to move through the list to find out whether there is another occurrence. There is no standard API in the Java packages from Oracle that does this.
I'm trying to find a data structure to use in my Java project. What I'm trying to do is get the next greatest value below an arbitrary number from a set of numbers, or be notified if no such number exists.
Example 1)
My arbitrary number is 7.0.
{3.1, 6.0, 7.13131313, 8.0}
The number I'd need to get from this set would be 6.0.
Example 2)
My arbitrary number is 1.0.
{2.0, 3.5555, 999.0}
A number below my arbitrary number doesn't exist in the set, so I'd need to know that it doesn't exist.
The best I can think of is indexing through an array and comparing as I go, stepping back once I pass my arbitrary number. In the worst case, though, my time complexity would be O(n). Is there a better way?
If you can pre-process the list of values, then you can sort it (O(N log N) time) and perform a binary search, which takes O(log N) for each value you want an answer for. Otherwise you can't do better than O(N).
You need to sort the numbers first.
Then you can do a simple binary search with its comparison modified to your need: at every point, check whether the element is bigger than the input, and search the left or the right half accordingly. At the end, your modified binary search should be able to give you both the immediate bigger and the immediate smaller number, with which you can solve your problem easily. The complexity is O(log n).
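Something along these lines (a sketch of that modified binary search over a sorted double[]; it returns the index of the greatest element strictly below the target, or -1 if there is none):

public class FloorSearch {
    // 'a' must be sorted in ascending order.
    static int indexBelow(double[] a, double target) {
        int lo = 0, hi = a.length - 1, answer = -1;
        while (lo <= hi) {
            int mid = (lo + hi) >>> 1;
            if (a[mid] < target) {
                answer = mid;       // candidate; a bigger one may exist to the right
                lo = mid + 1;
            } else {
                hi = mid - 1;       // a[mid] >= target; look left
            }
        }
        return answer;
    }

    public static void main(String[] args) {
        double[] set = {3.1, 6.0, 7.13131313, 8.0};
        System.out.println(indexBelow(set, 7.0));   // 1 -> value 6.0
        System.out.println(indexBelow(set, 1.0));   // -1 -> no such number
    }
}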
I suggest that you look at either TreeSet.floor(...) or TreeSet.lower(...). One of those should satisfy your requirements, and both have O(logN) complexity ... assuming that you have already built the TreeSet instance.
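For instance, a quick sketch using TreeSet.lower, which returns the greatest element strictly less than its argument, or null if no such element exists:

import java.util.List;
import java.util.TreeSet;

public class LowerDemo {
    public static void main(String[] args) {
        TreeSet<Double> set1 = new TreeSet<>(List.of(3.1, 6.0, 7.13131313, 8.0));
        System.out.println(set1.lower(7.0));    // 6.0

        TreeSet<Double> set2 = new TreeSet<>(List.of(2.0, 3.5555, 999.0));
        System.out.println(set2.lower(1.0));    // null -> no such number exists
    }
}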
If you only have a sorted array and don't want the overhead of building a TreeSet, then a custom binary search is probably the best bet.
Both of your example sets look sorted ...
If that is the case, then you need a binary search...
If it's not the case, then you need to visit every element exactly once, so it takes O(n) time.
For the method add of the ArrayList Java API states:
The add operation runs in amortized constant time, that is, adding n elements requires O(n) time.
I wonder if it is the same time complexity, linear, when using the add method of a LinkedList.
This depends on where you're adding. E.g. if in an ArrayList you add to the front of the list, the implementation will have to shift all items every time, so adding n elements will run in quadratic time.
Similarly for the linked list: the implementation in the JDK keeps a pointer to both the head and the tail. If you keep appending to the tail, or prepending in front of the head, the operation runs in linear time for n elements. If you insert at a different place, the implementation has to walk the list to find the right position, which can give you a worse runtime. Again, this depends on the insertion position; you get the worst time complexity when inserting in the middle of the list, as the maximum number of elements has to be traversed to find the insertion point.
The actual complexity depends on whether your insertion position is constant (e.g. always at the 10th position), or a function of the number of items in the list (or some arbitrary search on it). The first one will give you O(n) with a slightly worse constant factor, the latter O(n^2).
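A rough way to see this position-dependence for yourself (an illustrative sketch, not a rigorous benchmark; JIT warm-up and garbage collection will skew the absolute numbers):

import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

public class AddBenchmark {
    static void time(String label, List<Integer> list, int n, boolean prepend) {
        long start = System.nanoTime();
        for (int i = 0; i < n; i++) {
            if (prepend) list.add(0, i);    // insert at the front
            else list.add(i);               // append at the tail
        }
        System.out.printf("%-20s %d ms%n", label, (System.nanoTime() - start) / 1_000_000);
    }

    public static void main(String[] args) {
        int n = 50_000;
        time("ArrayList append", new ArrayList<>(), n, false);     // amortized O(1) per add
        time("LinkedList append", new LinkedList<>(), n, false);   // O(1) per add
        time("ArrayList prepend", new ArrayList<>(), n, true);     // O(n) per add -> quadratic overall
        time("LinkedList prepend", new LinkedList<>(), n, true);   // O(1) per add -> linear overall
    }
}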
In most cases, ArrayList outperforms LinkedList on the add() method, since it simply stores the element in the backing array and increments a counter.
If the working array is not large enough, though, ArrayList grows it, allocating a new array and copying the contents. That's slower than adding a new element to a LinkedList, but if you keep adding elements, it only happens O(log(N)) times.
When we talk about "amortized" complexity, we average the cost over a long sequence of operations.
So, answering your question: it's not the same complexity. ArrayList's add is much faster (though still O(1)) in most cases, and much slower (O(N)) sometimes. What's better for you is best checked with a profiler.
If you mean the add(E) method (not the add(int, E) method), the answer is yes: the time complexity of adding a single element to a LinkedList is constant (adding n elements requires O(n) time).
As Martin Probst indicates, with different positions you get different complexities, but the add(E) operation will always append the element to the tail, resulting in a constant (amortized) time operation.