Comparing efficiency of two O(n) algorithms in Java

I'm studying Linked Lists and the question is - Write a function to print the middle term of a given linked list (assume that LL has odd number of nodes).
Method 1 - Traverse the LL and count the number of nodes using a counter. Add 1 (to make it an even number) and divide the counter by 2 (ignore minor math discrepancies). Traverse the LL again, but this time only up to the counter-th term, and return.
void GetMiddleTermMethod1(){
    //Count the number of nodes
    int counter = 0;
    Node n = FirstNode;
    while (n.next != null){
        counter = counter + 1;
        n = n.next;
    }
    counter=counter+1/2;
    //now counter is equal to the half of the number of nodes
    //now a loop to return the nth term of a LL
    Node temp = FirstNode;
    for(int i=2; i<=counter; i++){
        temp = temp.next;
    }
    System.out.println(temp.data);
}
Method 2 - Initialize 2 references to nodes. One traverses 2 nodes at a time and the other only traverses 1. When the fast reference reaches null (the end of LL), the slow one would have reached the middle and return.
void GetMiddleTermMethod2(){
    Node n = FirstNode;
    Node mid = FirstNode;
    while(n.next != null){
        n = n.next.next;
        mid = mid.next;
    }
    System.out.println(mid.next.data);
}
I have 3 questions -
Q1 - How can I know which algorithm is more efficient in case I'm asked this question in a job interview? I mean, both functions traverse the LL one and a half times (the second one does it in one loop instead of two, but it still traverses the LL one and a half times)...
Q2 - Since both algorithms have the Big O of O(n), what parameters will decide which one is more efficient?
Q3 - What is the general method of calculating the efficiency of such algorithms? I'd really appreciate If you could link me towards the suitable tutorial...
Thanks

Well, there is no real simple answer for that; the answer may differ based on compiler optimization, JIT optimization, and the actual machine that runs the program (which might, for some reason, be better optimized for one of the algorithms).
The truth is, other than the theoretical big O notation that gives us asymptotic behavior, there is seldom a "clean, theoretical" way to determine that Algorithm A is faster than Algorithm B under conditions (1), (2), ..., (k).
However, that doesn't mean there is nothing you can do: you can benchmark the code by creating various random data sets and timing the duration each algorithm takes. It is very important to do this more than once. How many times? Until you reach statistical significance under some known and accepted statistical test, such as the Wilcoxon signed-rank test.
In addition, in many cases an insignificant performance gain isn't worth the time spent optimizing the code, and it is even worse if the optimization makes the code less readable - and thus harder to maintain.

I just implemented your solution in Java and tested it on a LinkedList of 1,111,111 random integers up to 1000. The results are very much the same:
Method 1:
time: 162ms
Method 2:
time: 171ms
Furthermore I wanted to point out that you have two major flaws in your methods:
Method 1:
Change counter = counter + 1 / 2; to counter = (counter + 1) / 2; otherwise you end up at the end of the list, since integer division makes 1 / 2 equal to 0 and counter remains unchanged :)
Method 2:
Change System.out.println(mid.next.data); to System.out.println(mid.data);
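For completeness, here's a self-contained sketch of both methods with those two fixes applied. The class, the minimal Node type, and the build helper are my own stand-ins for the question's list class, not the original code:

```java
// Sketch of both corrected methods; Node/build are assumptions standing in
// for the question's linked-list class.
class MiddleDemo {
    static class Node {
        int data;
        Node next;
        Node(int data) { this.data = data; }
    }

    // Method 1 (fixed): count every node, then walk to the middle.
    static int middleByCounting(Node first) {
        int counter = 0;
        for (Node n = first; n != null; n = n.next) {
            counter = counter + 1;        // counts all nodes, not n - 1
        }
        counter = (counter + 1) / 2;      // parentheses matter: 1/2 == 0
        Node temp = first;
        for (int i = 2; i <= counter; i++) {
            temp = temp.next;
        }
        return temp.data;
    }

    // Method 2 (fixed): fast/slow pointers; return mid.data, not mid.next.data.
    static int middleByTwoPointers(Node first) {
        Node fast = first;
        Node slow = first;
        while (fast.next != null && fast.next.next != null) {
            fast = fast.next.next;
            slow = slow.next;
        }
        return slow.data;
    }

    // Helper to build a list from values, for testing.
    static Node build(int... values) {
        Node head = null;
        for (int i = values.length - 1; i >= 0; i--) {
            Node n = new Node(values[i]);
            n.next = head;
            head = n;
        }
        return head;
    }
}
```

Both versions return the same middle element on odd-length lists; the difference is purely in how many pointer hops each performs.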

Related

Time complexity of traversing a trie

Would it be O(26n) where 26 is the number of letters of the alphabet and n is the number of levels of the trie? For example this is the code to print a trie:
public void print()
{
    for(int i = 0; i < 26; i++)
    {
        if(this.next[i] != null)
        {
            this.next[i].print();
        }
    }
    if(this.word != null)
    {
        System.out.println(this.word.getWord());
    }
}
So reading this code makes me think that my approximation of the time complexity is correct, in the worst case where all 26 children are present for n levels.
Would it be O(26n) where 26 is the number of letters of the alphabet and n is the number of levels of the trie?
No. Each node in the trie must be visited, and O(1) work performed for each one (ignoring the work attributable to processing the children, which is accounted separately). The number of children does not matter on a per-node basis, as long as it is bounded by a constant (e.g. 26).
How many nodes are there altogether? Generally more than the number of words stored in the trie, and possibly a lot more. For a naively-implemented, perfectly balanced, complete trie with n levels below the root, each level has 26 times as many nodes as the previous, and so the total number of nodes is 1 + 26 + 26^2 + ... + 26^n. That is O(26^(n+1)) == O(26^n), or "exponential" in the number of levels, which also corresponds to the length of the longest word stored within.
But one is more likely to be interested in measuring the complexity in terms of the number of words stored in the trie. With a careful implementation, it is possible to have nodes only for those words and for each maximal initial substring that is common to two or more of those words. In that event, every node has either zero children or at least two, so for w total words, the total number of nodes is bounded by w + w/2 + w/4 + ..., which converges to 2w. Therefore, a traversal of a trie with those structural properties costs O(2w) == O(w).
Moreover, with a little more thought, it is possible to conclude that the particular structural property I described is not really necessary to have O(w) traversal.
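To make that argument concrete, here is a minimal instrumented trie sketch (the class and method names are mine, not from the question): the traversal touches each node exactly once, so its cost is proportional to the total node count rather than to 26^n.

```java
// Sketch: a minimal 26-way trie. traverse() visits every node exactly once
// and returns the number of nodes touched, so traversal cost is O(#nodes).
class TrieSketch {
    TrieSketch[] next = new TrieSketch[26];
    String word; // non-null only at word-terminating nodes

    void insert(String w) {
        TrieSketch node = this;
        for (char c : w.toCharArray()) {
            int i = c - 'a';
            if (node.next[i] == null) node.next[i] = new TrieSketch();
            node = node.next[i];
        }
        node.word = w;
    }

    // Returns the number of nodes visited (including this one).
    int traverse() {
        int count = 1;
        for (int i = 0; i < 26; i++) {
            if (next[i] != null) count += next[i].traverse();
        }
        return count;
    }
}
```

Inserting "cat", "car", and "dog" creates a root plus seven child nodes ("cat" and "car" share the c-a prefix), so a traversal does eight node-visits, not anything like 26^3.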
I am not familiar with a trie, but big O notation mainly depicts approximately how quickly the running time or resource consumption grows relative to the input size. The way I think of it is as referring to the general shape of the curve on the graph rather than exact points on it. An O(1) algorithm looks like a flat line, while O(n) looks like a line at a 45-degree angle, etc.
source: https://medium.com/dataseries/how-to-calculate-time-complexity-with-big-o-notation-9afe33aa4c46
Now for the algorithm in the question. I am not familiar with a trie, but at first glance I would say it is O(1) (constant time), because the number of iterations of the loop is constant (always 26). However, in the loop it has this.next[i].print(), which could completely change the answer depending on its complexity, and which uncovers an important question we need to answer: what is n?
I am going to assume that this.next[i] is of the same type as this, making this.next[i].print() effectively a recursive call. In that scenario, the time it takes to finish executing depends entirely on the number of instances (nodes) that have to be visited. This algorithm resembles Depth First Search but does not safeguard against infinite recursion; that may be justified by some additional property known about the next[i] instances, such as each instance being referenced by at most one other instance. In that case the runtime complexity would be on the order of O(n), where n is the number of instances or nodes.
... assuming that this.word.getWord() runs in constant time as well. If it depends on the word length, the runtime may well be O(n * w), where n is the number of nodes and w is the size of the words.

Task is to create a code in O(n log n), what is the time complexity of this code that I wrote? [duplicate]

I have gone through Google and Stack Overflow search, but nowhere I was able to find a clear and straightforward explanation for how to calculate time complexity.
What do I know already?
Say for code as simple as the one below:
char h = 'y'; // This will be executed 1 time
int abc = 0; // This will be executed 1 time
Say for a loop like the one below:
for (int i = 0; i < N; i++) {
Console.Write("Hello, World!!");
}
int i=0; This will be executed only once.
The time is actually charged to the assignment i=0, not to the declaration.
i < N; This will be executed N+1 times
i++ This will be executed N times
So the number of operations required by this loop are {1+(N+1)+N} = 2N+2. (But this still may be wrong, as I am not confident about my understanding.)
OK, so these small basic calculations I think I know, but in most cases I have seen the time complexity as O(N), O(n^2), O(log n), O(n!), and many others.
How to find time complexity of an algorithm
You add up how many machine instructions it will execute as a function of the size of its input, then simplify the expression to the largest term (for when N is very large) and drop any simplifying constant factor.
For example, lets see how we simplify 2N + 2 machine instructions to describe this as just O(N).
Why do we remove the two 2s ?
We are interested in the performance of the algorithm as N becomes large.
Consider the two terms 2N and 2.
What is the relative influence of these two terms as N becomes large? Suppose N is a million.
Then the first term is 2 million and the second term is only 2.
For this reason, we drop all but the largest terms for large N.
So, now we have gone from 2N + 2 to 2N.
Traditionally, we are only interested in performance up to constant factors.
This means that we don't really care if there is some constant multiple of difference in performance when N is large. The unit of 2N is not well-defined in the first place anyway. So we can multiply or divide by a constant factor to get to the simplest expression.
So 2N becomes just N.
This is an excellent article: Time complexity of algorithm
The below answer is copied from above (in case the excellent link goes bust)
The most common metric for calculating time complexity is Big O notation. This removes all constant factors so that the running time can be estimated in relation to N as N approaches infinity. In general you can think of it like this:
statement;
Is constant. The running time of the statement will not change in relation to N.
for ( i = 0; i < N; i++ )
    statement;
Is linear. The running time of the loop is directly proportional to N. When N doubles, so does the running time.
for ( i = 0; i < N; i++ ) {
    for ( j = 0; j < N; j++ )
        statement;
}
Is quadratic. The running time of the two loops is proportional to the square of N. When N doubles, the running time increases by a factor of four.
while ( low <= high ) {
    mid = ( low + high ) / 2;
    if ( target < list[mid] )
        high = mid - 1;
    else if ( target > list[mid] )
        low = mid + 1;
    else
        break;
}
Is logarithmic. The running time of the algorithm is proportional to the number of times N can be divided by 2. This is because the algorithm divides the working area in half with each iteration.
void quicksort (int list[], int left, int right)
{
    if (left >= right)
        return; // base case, needed to stop the recursion
    int pivot = partition (list, left, right);
    quicksort (list, left, pivot - 1);
    quicksort (list, pivot + 1, right);
}
Is N * log (N). The running time consists of N loops (iterative or recursive) that are logarithmic, thus the algorithm is a combination of linear and logarithmic.
In general, doing something with every item in one dimension is linear, doing something with every item in two dimensions is quadratic, and dividing the working area in half is logarithmic. There are other Big O measures such as cubic, exponential, and square root, but they're not nearly as common. Big O notation is described as O(<type>) where <type> is the measure. The quicksort algorithm would be described as O(N * log(N)).
Note that none of this has taken into account best, average, and worst case measures. Each would have its own Big O notation. Also note that this is a VERY simplistic explanation. Big O is the most common, but it's also more complex than I've shown. There are also other notations such as big omega, little o, and big theta. You probably won't encounter them outside of an algorithm analysis course. ;)
Taken from here - Introduction to Time Complexity of an Algorithm
1. Introduction
In computer science, the time complexity of an algorithm quantifies the amount of time taken by an algorithm to run as a function of the length of the string representing the input.
2. Big O notation
The time complexity of an algorithm is commonly expressed using big O notation, which excludes coefficients and lower order terms. When expressed this way, the time complexity is said to be described asymptotically, i.e., as the input size goes to infinity.
For example, if the time required by an algorithm on all inputs of size n is at most 5n^3 + 3n, the asymptotic time complexity is O(n^3). More on that later.
A few more examples:
1 = O(n)
n = O(n^2)
log(n) = O(n)
2n + 1 = O(n)
3. O(1) constant time:
An algorithm is said to run in constant time if it requires the same amount of time regardless of the input size.
Examples:
array: accessing any element
fixed-size stack: push and pop methods
fixed-size queue: enqueue and dequeue methods
4. O(n) linear time
An algorithm is said to run in linear time if its time execution is directly proportional to the input size, i.e. time grows linearly as input size increases.
Consider the following examples. Below I am linearly searching for an element, and this has a time complexity of O(n).
int find = 66;
var numbers = new int[] { 33, 435, 36, 37, 43, 45, 66, 656, 2232 };
for (int i = 0; i < numbers.Length; i++)
{
    if (find == numbers[i])
    {
        return;
    }
}
More Examples:
Array: Linear Search, Traversing, Find minimum etc
ArrayList: contains method
Queue: contains method
5. O(log n) logarithmic time:
An algorithm is said to run in logarithmic time if its time execution is proportional to the logarithm of the input size.
Example: Binary Search
Recall the "twenty questions" game - the task is to guess the value of a hidden number in an interval. Each time you make a guess, you are told whether your guess is too high or too low. Twenty questions game implies a strategy that uses your guess number to halve the interval size. This is an example of the general problem-solving method known as binary search.
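A quick sketch of that guessing strategy (the class and method names are mine): each probe halves the interval, so even a range of a million values needs at most about 20 guesses.

```java
// Sketch of the "twenty questions" strategy: each guess halves the
// remaining interval, so the guess count grows as log2 of the range size.
class GuessGame {
    static int guessesNeeded(int hidden, int low, int high) {
        int guesses = 0;
        while (low <= high) {
            int mid = (low + high) / 2; // guess the midpoint
            guesses++;
            if (hidden < mid) high = mid - 1;      // "too high"
            else if (hidden > mid) low = mid + 1;  // "too low"
            else break;                            // found it
        }
        return guesses;
    }
}
```

With a range of 1..1,000,000 the loop never needs more than 20 probes, since 2^20 > 1,000,000.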
6. O(n^2) quadratic time
An algorithm is said to run in quadratic time if its time execution is proportional to the square of the input size.
Examples:
Bubble Sort
Selection Sort
Insertion Sort
7. Some useful links
Big-O Misconceptions
Determining The Complexity Of Algorithm
Big O Cheat Sheet
Several examples of loop.
O(n): The time complexity of a loop is considered O(n) if the loop variable is incremented / decremented by a constant amount. For example, the following loops have O(n) time complexity.
// Here c is a positive integer constant
// Here c is a positive integer constant
for (int i = 1; i <= n; i += c) {
    // some O(1) expressions
}

for (int i = n; i > 0; i -= c) {
    // some O(1) expressions
}
O(n^c): The time complexity of nested loops is proportional to the number of times the innermost statement is executed. For example, the following sample loops have O(n^2) time complexity:
for (int i = 1; i <= n; i += c) {
    for (int j = 1; j <= n; j += c) {
        // some O(1) expressions
    }
}

for (int i = n; i > 0; i -= c) {
    for (int j = i+1; j <= n; j += c) {
        // some O(1) expressions
    }
}
For example, selection sort and insertion sort have O(n^2) time complexity.
O(log n): The time complexity of a loop is considered O(log n) if the loop variable is divided / multiplied by a constant amount.
for (int i = 1; i <= n; i *= c) {
    // some O(1) expressions
}

for (int i = n; i > 0; i /= c) {
    // some O(1) expressions
}
For example, binary search has O(log n) time complexity.
O(log log n): The time complexity of a loop is considered O(log log n) if the loop variable is reduced / increased exponentially by a constant amount.
// Here c is a constant greater than 1
for (int i = 2; i <= n; i = pow(i, c)) {
    // some O(1) expressions
}

// Here fun is sqrt or cuberoot or any other constant root
for (int i = n; i > 0; i = fun(i)) {
    // some O(1) expressions
}
One example of time complexity analysis
int fun(int n)
{
    for (int i = 1; i <= n; i++)
    {
        for (int j = 1; j < n; j += i)
        {
            // Some O(1) task
        }
    }
}
Analysis:
For i = 1, the inner loop is executed n times.
For i = 2, the inner loop is executed approximately n/2 times.
For i = 3, the inner loop is executed approximately n/3 times.
For i = 4, the inner loop is executed approximately n/4 times.
…………………………………………………….
For i = n, the inner loop is executed approximately n/n times.
So the total time taken by the above algorithm is (n + n/2 + n/3 + ... + n/n), which equals n * (1/1 + 1/2 + 1/3 + ... + 1/n).
The important thing about the series (1/1 + 1/2 + 1/3 + ... + 1/n) is that it is the n-th harmonic number, which grows as Θ(log n). So the time complexity of the above code is O(n log n).
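If you want to sanity-check the harmonic-series claim empirically, here is a hypothetical instrumented version of the loop (class and method names are mine) that compares the measured iteration count against n * H(n):

```java
// Sketch: count the inner-loop iterations of the analyzed code and compare
// against n * H(n), the harmonic-number estimate used in the analysis.
class HarmonicLoop {
    static long countIterations(int n) {
        long count = 0;
        for (int i = 1; i <= n; i++) {
            for (int j = 1; j < n; j += i) {
                count++; // stands in for the O(1) task
            }
        }
        return count;
    }

    // n * (1/1 + 1/2 + ... + 1/n)
    static double harmonicEstimate(int n) {
        double h = 0;
        for (int k = 1; k <= n; k++) h += 1.0 / k;
        return n * h;
    }
}
```

For n = 1000 the measured count lands between n·ln(n) and 2·n·ln(n), consistent with Θ(n log n).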
Time complexity with examples
1 - Basic operations (arithmetic, comparisons, accessing array’s elements, assignment): The running time is always constant O(1)
Example:
read(x) // O(1)
a = 10; // O(1)
a = 1,000,000,000,000,000,000 // O(1)
2 - If then else statement: Only taking the maximum running time from two or more possible statements.
Example:
age = read(x) // (1+1) = 2
if age < 17 then begin // 1
    status = "Not allowed!"; // 1
end else begin
    status = "Welcome! Please come in"; // 1
    visitors = visitors + 1; // 1+1 = 2
end;
So, the complexity of the above pseudo code is T(n) = 2 + 1 + max(1, 1+2) = 6. Thus, its big oh is still constant T(n) = O(1).
3 - Looping (for, while, repeat): Running time for this statement is the number of loops multiplied by the number of operations inside that looping.
Example:
total = 0; // 1
for i = 1 to n do begin // (1+1)*n = 2n
    total = total + i; // (1+1)*n = 2n
end;
writeln(total); // 1
So, its complexity is T(n) = 1+4n+1 = 4n + 2. Thus, T(n) = O(n).
4 - Nested loop (looping inside looping): Since there is at least one loop inside the main loop, the running time of this statement is O(n^2) or O(n^3).
Example:
for i = 1 to n do begin // (1+1)*n = 2n
    for j = 1 to n do begin // (1+1)*n*n = 2n^2
        x = x + 1; // (1+1)*n*n = 2n^2
        print(x); // (n*n) = n^2
    end;
end;
Common running time
There are some common running times when analyzing an algorithm:
O(1) – Constant time
Constant time means the running time is constant, it’s not affected by the input size.
O(n) – Linear time
When an algorithm accepts n input size, it would perform n operations as well.
O(log n) – Logarithmic time
An algorithm that has running time O(log n) is slightly faster than O(n). Commonly, the algorithm divides the problem into subproblems of the same size. Examples: binary search algorithm, binary conversion algorithm.
O(n log n) – Linearithmic time
This running time is often found in "divide & conquer algorithms" which divide the problem into sub problems recursively and then merge them in n time. Example: Merge Sort algorithm.
O(n^2) – Quadratic time
Look at the Bubble Sort algorithm!
O(n^3) – Cubic time
It has the same principle as O(n^2).
O(2^n) – Exponential time
It is very slow as the input gets larger: if n = 1,000,000, T(n) would be 2^1,000,000. The Brute Force algorithm has this running time.
O(n!) – Factorial time
The slowest!!! Example: Travelling salesman problem (TSP)
It is taken from this article. It is very well explained and you should give it a read.
When you're analyzing code, you have to analyse it line by line, counting every operation/recognizing time complexity. In the end, you have to sum it to get whole picture.
For example, you can have one simple loop with linear complexity, but later in that same program you can have a triple loop that has cubic complexity, so your program will have cubic complexity. Function order of growth comes into play right here.
Let's look at what are possibilities for time complexity of an algorithm, you can see order of growth I mentioned above:
Constant time has an order of growth 1, for example: a = b + c.
Logarithmic time has an order of growth log N. It usually occurs when you're dividing something in half (binary search, trees, and even loops), or multiplying something in the same way.
Linear. The order of growth is N, for example
int p = 0;
for (int i = 1; i < N; i++)
    p = p + 2;
Linearithmic. The order of growth is N log N. It usually occurs in divide-and-conquer algorithms.
Cubic. The order of growth is N^3. A classic example is a triple loop where you check all triplets:
int x = 0;
for (int i = 0; i < N; i++)
    for (int j = 0; j < N; j++)
        for (int k = 0; k < N; k++)
            x = x + 2;
Exponential. The order of growth is 2^N. It usually occurs when you do exhaustive search, for example, checking all subsets of some set.
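A tiny illustration of that last point (the class name is mine): enumerating every subset of an N-element set by bitmask visits exactly 2^N subsets, which is why exhaustive subset search is exponential.

```java
// Sketch: each bitmask from 0 to 2^N - 1 encodes one subset of an N-element
// set, so exhaustive enumeration visits exactly 2^N subsets.
class Subsets {
    static int countSubsets(int n) {
        int count = 0;
        for (int mask = 0; mask < (1 << n); mask++) {
            count++; // visit one subset (the set bits of mask)
        }
        return count;
    }
}
```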
Loosely speaking, time complexity is a way of summarising how the number of operations or run-time of an algorithm grows as the input size increases.
Like most things in life, a cocktail party can help us understand.
O(N)
When you arrive at the party, you have to shake everyone's hand (do an operation on every item). As the number of attendees N increases, the time/work it will take you to shake everyone's hand increases as O(N).
Why O(N) and not cN?
There's variation in the amount of time it takes to shake hands with people. You could average this out and capture it in a constant c. But the fundamental operation here --- shaking hands with everyone --- would always be proportional to O(N), no matter what c was. When debating whether we should go to a cocktail party, we're often more interested in the fact that we'll have to meet everyone than in the minute details of what those meetings look like.
O(N^2)
The host of the cocktail party wants you to play a silly game where everyone meets everyone else. Therefore, you must meet N-1 other people and, because the next person has already met you, they must meet N-2 people, and so on. The sum of this series is N^2/2 - N/2. As the number of attendees grows, the N^2 term gets big fast, so we just drop everything else.
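If you want to verify that arithmetic, here's a quick sketch (class name is mine) that counts the meetings directly:

```java
// Sketch: count "everyone meets everyone else" pairings directly.
// The total is N*(N-1)/2 = N^2/2 - N/2, i.e. O(N^2).
class Handshakes {
    static int countMeetings(int n) {
        int meetings = 0;
        for (int i = 0; i < n; i++) {
            for (int j = i + 1; j < n; j++) {
                meetings++; // person i meets person j exactly once
            }
        }
        return meetings;
    }
}
```

For 10 attendees this gives 45 meetings, matching 10^2/2 - 10/2.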
O(N^3)
You have to meet everyone else and, during each meeting, you must talk about everyone else in the room.
O(1)
The host wants to announce something. They ding a wineglass and speak loudly. Everyone hears them. It turns out it doesn't matter how many attendees there are, this operation always takes the same amount of time.
O(log N)
The host has laid everyone out at the table in alphabetical order. Where is Dan? You reason that he must be somewhere between Adam and Mandy (certainly not between Mandy and Zach!). Given that, is he between George and Mandy? No. He must be between Adam and Fred, and between Cindy and Fred. And so on... we can efficiently locate Dan by looking at half the set and then half of that set. Ultimately, we look at O(log_2 N) individuals.
O(N log N)
You could find where to sit down at the table using the algorithm above. If a large number of people came to the table, one at a time, and all did this, that would take O(N log N) time. This turns out to be how long it takes to sort any collection of items when they must be compared.
Best/Worst Case
You arrive at the party and need to find Inigo - how long will it take? It depends on when you arrive. If everyone is milling around you've hit the worst-case: it will take O(N) time. However, if everyone is sitting down at the table, it will take only O(log N) time. Or maybe you can leverage the host's wineglass-shouting power and it will take only O(1) time.
Assuming the host is unavailable, we can say that the Inigo-finding algorithm has a lower-bound of O(log N) and an upper-bound of O(N), depending on the state of the party when you arrive.
Space & Communication
The same ideas can be applied to understanding how algorithms use space or communication.
Knuth has written a nice paper about the former entitled "The Complexity of Songs".
Theorem 2: There exist arbitrarily long songs of complexity O(1).
PROOF: (due to Casey and the Sunshine Band). Consider the songs Sk defined by (15), but with
V_k = 'That's the way,' U 'I like it,' U
U = 'uh huh, uh huh'
for all k.
For the mathematically-minded people: The master theorem is another useful thing to know when studying complexity.
O(n) is the big O notation used for describing the time complexity of an algorithm. When you add up the number of executions in an algorithm, you get an expression such as 2N + 2. In this expression, N is the dominating term (the term having the largest effect on the expression as its value increases or decreases). Now O(N) is the time complexity, with N the dominating term.
Example
For i = 1 to n;
    j = 0;
    while (j <= n);
        j = j + 1;
Here j is reset on each pass, so the inner loop body runs n + 1 times for each of the n outer iterations, giving roughly n(n + 1) = n^2 + n executions for the whole algorithm.
Here n^2 is the dominating term, so the time complexity of this algorithm is O(n^2).
Other answers concentrate on the big-O-notation and practical examples. I want to answer the question by emphasizing the theoretical view. The explanation below is necessarily lacking in details; an excellent source to learn computational complexity theory is Introduction to the Theory of Computation by Michael Sipser.
Turing Machines
The most widespread model to investigate any question about computation is a Turing machine. A Turing machine has a one dimensional tape consisting of symbols which is used as a memory device. It has a tapehead which is used to write and read from the tape. It has a transition table determining the machine's behaviour, which is a fixed hardware component that is decided when the machine is created. A Turing machine works at discrete time steps doing the following:
It reads the symbol under the tapehead.
Depending on the symbol and its internal state, which can only take finitely many values, it reads three values s, σ, and X from its transition table, where s is an internal state, σ is a symbol, and X is either Right or Left.
It changes its internal state to s.
It changes the symbol it has read to σ.
It moves the tapehead one step according to the direction in X.
Turing machines are powerful models of computation. They can do everything that your digital computer can do. They were introduced before the advent of digital modern computers by the father of theoretical computer science and mathematician: Alan Turing.
Time Complexity
It is hard to define the time complexity of a single problem like "Does white have a winning strategy in chess?" because there is a machine which runs for a single step giving the correct answer: Either the machine which says directly 'No' or directly 'Yes'. To make it work we instead define the time complexity of a family of problems L each of which has a size, usually the length of the problem description. Then we take a Turing machine M which correctly solves every problem in that family. When M is given a problem of this family of size n, it solves it in finitely many steps. Let us call f(n) the longest possible time it takes M to solve problems of size n. Then we say that the time complexity of L is O(f(n)), which means that there is a Turing machine which will solve an instance of it of size n in at most C.f(n) time where C is a constant independent of n.
Isn't it dependent on the machines? Can digital computers do it faster?
Yes! Some problems can be solved faster by other models of computation, for example two tape Turing machines solve some problems faster than those with a single tape. This is why theoreticians prefer to use robust complexity classes such as NL, P, NP, PSPACE, EXPTIME, etc. For example, P is the class of decision problems whose time complexity is O(p(n)) where p is a polynomial. The class P do not change even if you add ten thousand tapes to your Turing machine, or use other types of theoretical models such as random access machines.
A Difference in Theory and Practice
It is usually assumed that the time complexity of integer addition is O(1). This assumption makes sense in practice because computers use a fixed number of bits to store numbers for many applications. There is no reason to assume such a thing in theory, so time complexity of addition is O(k) where k is the number of bits needed to express the integer.
Finding The Time Complexity of a Class of Problems
The straightforward way to show the time complexity of a problem is O(f(n)) is to construct a Turing machine which solves it in O(f(n)) time. Creating Turing machines for complex problems is not trivial; one needs some familiarity with them. A transition table for a Turing machine is rarely given, and it is described in high level. It becomes easier to see how long it will take a machine to halt as one gets themselves familiar with them.
Showing that a problem is not O(f(n)) time complexity is another story... Even though there are some results like the time hierarchy theorem, there are many open problems here. For example whether problems in NP are in P, i.e. solvable in polynomial time, is one of the seven millennium prize problems in mathematics, whose solver will be awarded 1 million dollars.

Big-Oh notation for a single while loop covering two halves of an array with two iterator vars

Trying to brush up on my Big-O understanding for a test (A very basic Big-O understanding required obviously) I have coming up and was doing some practice problems in my book.
They gave me the following snippet
public static void swap(int[] a)
{
    int i = 0;
    int j = a.length-1;
    while (i < j)
    {
        int temp = a[i];
        a[i] = a[j];
        a[j] = temp;
        i++;
        j--;
    }
}
Pretty easy to understand I think. It has two iterators each covering half the array with a fixed amount of work (which I think clocks them both at O(n/2))
Therefore O(n/2) + O(n/2) = O(2n/2) = O(n)
Now please forgive as this is my current understanding and that was my attempt at the solution to the problem. I have found many examples of big-o online but none that are quite like this where the iterators both increment and modify the array at basically the same time.
The fact that it has one loop is making me think it's O(n) anyway.
Would anyone mind clearing this up for me?
Thanks
The fact that it has one loop is making me think it's O(n) anyway.
This is correct. Not because it is making one loop, but because it is one loop that depends on the size of the array by a constant factor: the big-O notation ignores any constant factor. O(n) means that the only influence on the algorithm is based on the size of the array. That it actually takes half that time, does not matter for big-O.
In other words: if your algorithm takes time n + X, X·n, or X·n + Y (with X and Y constants), all of these come down to O(n).
It gets different if the number of iterations relates to the size other than by a constant factor, for instance logarithmically or exponentially: say size 100 takes 2 iterations, size 1,000 takes 3, and size 10,000 takes 4. In that case, it would be, for instance, O(log(n)).
It would also be different if the loop is independent of size. I.e., if you would always loop 100 times, regardless of loop size, your algorithm would be O(1) (i.e., operate in some constant time).
I was also wondering if the equation I came up with to get there was somewhere in the ballpark of being correct.
Yes. In fact, if your equation ends up being some form of n * C + Y, where C is some constant and Y is some other value, the result is O(n), regardless of whether C is greater than 1 or smaller than 1.
You are right about the loop. The loop determines the big O. But the loop runs only for half the array.
So it's 2 + 6 * (n/2).
If we make n very large, the other numbers are really small, so they won't matter.
So it's O(n).
Let's say you are running 2 separate loops: 2 + 6 * (n/2) + 6 * (n/2). In that case it will be O(n) again.
But if we run a nested loop, 2 + 6 * (n * n), then it will be O(n^2).
Always remove the constants and do the math. You've got the idea.
As j - i decreases by 2 units on each iteration, N/2 iterations are taken (assuming N = length(a)).
Hence the running time is indeed O(N/2), and O(N/2) is strictly equivalent to O(N).
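A hypothetical instrumented version of the loop (class name is mine) makes the N/2 count visible directly:

```java
// Sketch: the swap loop from the question, instrumented to count iterations.
// It runs floor(n/2) times -- the constant 1/2 that big-O discards.
class SwapCount {
    static int swapIterations(int[] a) {
        int i = 0;
        int j = a.length - 1;
        int iterations = 0;
        while (i < j) {
            int temp = a[i]; // swap the outer pair
            a[i] = a[j];
            a[j] = temp;
            i++;
            j--;
            iterations++;
        }
        return iterations;
    }
}
```

An array of length 10 takes 5 iterations; length 7 takes 3 (the middle element stays put).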

The best, worst, and average-case runtime of a function to check for duplicates?

I'm having some trouble finding the big O for the if statement in the code below:
public static boolean areUnique (int[] ar)
{
    for (int i = 0; i < ar.length-1; i++) // O(n)
    {
        for (int j = i+1; j < ar.length; j++) // O(n)
        {
            if (ar[i] == ar[j]) // O(???)
                return false; // O(1)
        }
    }
    return true; //O(1)
}
I'm trying to do a time complexity analysis for the best, worst, and average case
Thank you everyone for answering so quickly! I'm not sure my best, worst, and average cases are correct... Shouldn't there be a difference between the cases because of the if statement? But when I do my analysis they all end up as O(n^2):
Best: O(n) * O(n) * [O(1) + O(1)] = O(n^2)
Worst: O(n) * O(n) * [O(1) + O(1) + O(1)] = O(n^2)
Average: O(n) * O(n) * [O(1) + O(1) + O(1)] = O(n^2)
Am I doing this right? My textbook is not very helpful
For starters, this line
if (ar[i] == ar[j])
always takes time Θ(1) to execute. It does only a constant amount of work (a comparison plus a branch), so the work done here won't asymptotically contribute to the overall runtime.
Given this, we can analyze the worst-case behavior by considering what happens if this statement is always false. That means that the loops run as long as possible. As you noticed, since each loop runs O(n) times, the total work done is Θ(n^2) in the worst case.
In the best case, however, the runtime is much lower. Imagine any array where the first two elements are the same. In that case, the function will terminate almost instantly when the conditional is encountered for the first time. In this case, the runtime is Θ(1), because a constant number of statements will be executed.
The average case, however, is not well-defined here. Average case is typically defined relative to some distribution - the average over what? - and it's not clear what that is here. If you assume that the array consists of truly random int values and that ints can take on any integer value (not a reasonable assumption, but it's fine for now), then the probability that a randomly-chosen array has a duplicate is 0 and we're back in the worst case (runtime Θ(n^2)). However, if the values are more constrained, the runtime changes. Let's suppose that there are n numbers in the array and the integers range from 0 to k - 1, inclusive. Given a random array, the runtime depends on
whether there are any duplicates at all, and
if there is a duplicate, where the first duplicated value appears in the array.
I am fairly confident that this math is going to be very hard to work out and if I have the time later today I'll come back and try to get an exact value (or at least something asymptotically appropriate). I seriously doubt this is what was expected since this seems to be an introductory big-O assignment, but it's an interesting question and I'd like to look into it more.
Hope this helps!
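To make the best/worst-case gap concrete, here is a sketch that instruments the question's comparison loop (the counter and its return value are added purely for measurement; they are not part of the original method):

```java
// Counts how many times the if-statement comparison runs before the
// method would return, illustrating best vs. worst case.
public class UniqueCount {
    static int comparisons(int[] ar) {
        int count = 0;
        for (int i = 0; i < ar.length - 1; i++) {
            for (int j = i + 1; j < ar.length; j++) {
                count++;                          // the Θ(1) comparison
                if (ar[i] == ar[j]) return count; // duplicate found: early exit
            }
        }
        return count;                             // no duplicates: full n(n-1)/2 scan
    }

    public static void main(String[] args) {
        // Worst case: all distinct -> n(n-1)/2 comparisons.
        System.out.println(comparisons(new int[]{1, 2, 3, 4, 5})); // 10
        // Best case: first two elements equal -> 1 comparison.
        System.out.println(comparisons(new int[]{7, 7, 3, 4, 5})); // 1
    }
}
```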
The if itself is O(1).
Big-O analysis does not count the individual micro-operations inside the ALU or the CPU; even if if (ar[i] == ar[j]) took, say, 6 machine operations in reality, a constant number of operations still translates into O(1).
You can regard it as O(1). No matter what you consider as 'one' step, the instructions for carrying out a[i] == a[j] don't depend on the value n in this case.

Time and complexity of an integer array

I am working on an assignment and don't get answer for some of questions.
I Have been asked :
Input: an array A of length N that can only contain integers from 1 to N
Output: TRUE - A contains duplicate, FALSE - otherwise.
I have created a class which is passing my test cases.
public class ArrayOfIntegers {
    public boolean isArrayContainsDuplicates(int[] intArray) {
        int arraySize = intArray.length;
        // cast to long before multiplying so the formula doesn't overflow int
        long expectedOutPut = (long) arraySize * (arraySize + 1) / 2;
        long actualOutput = 0;
        for (int i = 0; i < arraySize; i++) {
            actualOutput = actualOutput + intArray[i];
        }
        if (expectedOutPut == actualOutput)
            return false;
        return true;
    }
}
Now further questions on this
Is it possible to provide the answer and NOT to destroy the input array A?
I have not destroyed the array. So is what I have done correct?
Analyze time and space complexity of your algorithm?
So do I need to write something about the for loop, i.e. that as soon as I find a duplicate element I should break out of the loop? Frankly speaking, I am not very clear on the concepts of time and space complexity.
Is O(n) for both time and space possible?
Should this be No, as n could be any number? Again, I am not very clear about O(n).
Thanks
Is it possible to provide the answer and NOT to destroy the input array A?
Yes. For example, if you don't care about the time it takes, you can loop over the array once for every possible number and check if you see it exactly once (if not, there must be a duplicate). That would be O(N^2).
Usually, you would use an additional array (or other data structure) as a scratch-list, though (which also does not destroy the input array, see the third question below).
Analyze time and space complexity of your algorithm?
Your algorithm runs in O(n), doing just a single pass over the input array, and requires no additional space. However, it does not work.
Is O(n) for both time and space possible?
Yes.
Have another array of the same size (size = N), count in there how often you see every number (single pass over input), then check the counts (single pass over output, or short-circuit when you have an error).
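The counting-array idea described above can be sketched like this (class and method names are made up for this example; it assumes the stated input constraint that values lie in 1..N):

```java
// O(n) time, O(n) extra space; the input array is left untouched.
public class CountDuplicates {
    static boolean hasDuplicates(int[] a) {
        int[] counts = new int[a.length];         // counts[v-1] = occurrences of v
        for (int v : a) {
            if (++counts[v - 1] > 1) return true; // short-circuit on first repeat
        }
        return false;                             // every value seen at most once
    }

    public static void main(String[] args) {
        System.out.println(hasDuplicates(new int[]{1, 3, 2, 4})); // false
        System.out.println(hasDuplicates(new int[]{1, 3, 3, 4})); // true
    }
}
```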
So do I need to write something about the for loop that as soon as I find the duplicate elements I should break the loop.
No. Complexity considerations are always about the worst case (or sometimes the average case). In the worst case, you cannot break out of the loop. In the average case, you can break out after half the loop. Either way, while being important for someone waiting on a real-life implementation to finish the calculation, this does not make a difference for scalability (complexity as N grows infinite). Constant offsets and multipliers (such as 50% for breaking out early) are not considered.
public boolean hasDuplicates(int[] arr) {
    // For each value i in 1..N, check that it appears somewhere; by the
    // pigeonhole principle, a missing value implies some value is duplicated.
    for (int i = 1; i <= arr.length; i++) {
        boolean found = false;    // must be reset for every i (bug fix)
        for (int a : arr)
            if (a == i) found = true;
        if (!found) return true;
    }
    return false;
}
I believe this method would work (as yours currently doesn't). It's O(n^2).
I'm quite sure that it is impossible to attain O(n) for both time and space since two nested for-loops would be required, thereby increasing the method's complexity.
Edit
I was wrong (sometimes it's good to admit it), this is O(n):
public boolean hasDuplicates(int[] arr) {
    int[] newarr = new int[arr.length];
    for (int a : arr) newarr[a - 1]++;            // count occurrences of each value
    for (int a : newarr) if (a != 1) return true; // any count != 1 means a duplicate
    return false;
}
Yes, the input array is not destroyed.
The method directly above is O(n) (by that I mean its runtime and space requirements would grow linearly with the argument array length).
Yes, see above.
As hints:
Yes, it is possible to provide an answer and not destroy the array. Your code* provides an example.
Time complexity can be viewed as, "how many meaningful operations does this algorithm do?" Since your loop goes from 0 to N, at minimum, you are doing O(N) work.
Space complexity can be viewed as, "how much space am I using during this algorithm?" You don't make any extra arrays, so your extra space is O(1).
You should really revisit how your algorithm is comparing the numbers for duplicates. But I leave that as an exercise to you.
*: Your code also does not find all of the duplicates in an array. You may want to revisit that.
It's possible by adding all of the elements to a hashset = O(n), then comparing the number of values in the hashset to the size of the array = O(1). If they aren't equal, then there are duplicates.
Creating the hashset will also take up less space on average than creating an integer array to count each element. It's also an improvement from 2n to n, although that has no effect on big-O.
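A sketch of that HashSet approach (names invented for this example): the set silently drops repeats, so a set smaller than the array means a duplicate exists.

```java
import java.util.HashSet;
import java.util.Set;

// O(n) expected time: add everything, then compare sizes.
public class SetDuplicates {
    static boolean hasDuplicates(int[] a) {
        Set<Integer> seen = new HashSet<>();
        for (int v : a) seen.add(v);    // O(1) expected per insert
        return seen.size() != a.length; // smaller set => some value repeated
    }

    public static void main(String[] args) {
        System.out.println(hasDuplicates(new int[]{1, 2, 3})); // false
        System.out.println(hasDuplicates(new int[]{1, 2, 2})); // true
    }
}
```

(You could also return true as soon as `seen.add(v)` returns false, which short-circuits on the first repeat.)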
1) This will not require much effort, and leaves the array intact:
public boolean isArrayContainsDuplicates(int[] intArray) {
    // cast to long before multiplying so the formula doesn't overflow int
    long expectedOutPut = (long) intArray.length * (intArray.length + 1) / 2;
    long actualOutput = 0;
    for (int i = 0; i < intArray.length; i++) {
        if (intArray[i] > intArray.length) return true; // value outside 1..N
        actualOutput += intArray[i];
    }
    return expectedOutPut != actualOutput;
}
2) This will touch a varying number of elements in the array. Best case, the very first element is already out of range and it returns immediately, which is O(1); worst case, it goes all the way through and returns at the end, which is O(n). On average it stops somewhere in the middle, but n/2 steps is still O(n), since constant factors are dropped.
O(1) refers to a number of operations that does not depend on the total number of items. Here, only the first element gives that case.
O(log n) refers to a number of operations that grows with the logarithm of the number of items, e.g. when each step halves the remaining work, as in binary search. It is asymptotically smaller than the number of items.
O(n) is when the number of operations grows linearly with the number of items.
These are all big-O notations for the required time.
This algorithm uses only a fixed number of extra variables regardless of n, so its space complexity is O(1).
3) Yes, this is possible: the single-pass check above runs in O(n) time in the worst case and uses only O(1) extra space.
