Time complexity of multiple for loops - java

I was calculating the time complexity for a method that I have:
for (int i = 0; i < P; i++) {
    for (int j = P - 1; j >= 0; j--) {
        for (int k = 0; k < D; k++) {}
        for (int z = 0; z < D; z++) {}
    }
}
D is always less than or equal to P, but my doubt is about the last two for loops. I have the time complexity as O(2P^3), but I've read that the 2 can be removed, giving O(P^3). Is either of these correct?

O(2P^3) is equivalent to O(P^3).
O-notation, specifically, describes how worst-case time/space usage grows with the size of the input (in this case, it grows cubically, as opposed to linearly or quadratically). As such, we're interested in the class of that complexity: 2P^3 and P^3 grow at the same cubic rate as P gets arbitrarily large, so we can ignore the coefficient 2.
Meanwhile, if we were to compare the scaling to something that was O(P^2), the resource usage would be much less as P got arbitrarily large. Even if we compared a complexity of P^3 to a complexity of (1000P^2), there's a point at which P gets so large that the factor of 1000 becomes insignificant, so we can ignore it.
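To make the constant-factor point concrete, here is a small sketch (the class and the chosen values of P and D are mine, not from the question) that counts the iterations of the loops above. The total is exactly 2 * P^2 * D, and since D <= P it is at most 2 * P^3:

public class LoopCount {
    public static void main(String[] args) {
        int P = 100, D = 50;  // D <= P, as in the question
        long count = 0;
        for (int i = 0; i < P; i++) {
            for (int j = P - 1; j >= 0; j--) {
                for (int k = 0; k < D; k++) count++;
                for (int z = 0; z < D; z++) count++;
            }
        }
        System.out.println(count);           // 1000000
        System.out.println(2L * P * P * D);  // 1000000, matches the loop count
    }
}

Doubling P multiplies the count by roughly eight whether or not the factor 2 is there, which is why only the cubic class matters.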

What is the running time of this Triple nested for loop?

Here is the code:
for (int i = 0; i < 60; i++) {
    for (int j = i - 1; j < N; j++) {
        for (int k = 0; k + 2 < N; k++) {
            System.out.println(i * j);
            System.out.println(i);
            i = i + 1;
        }
    }
}
I believe it is O(N^2) since there are two N's in the for loops, but I'm not too sure.
Any help is appreciated, thanks!
It's rather because the i-loop has a fixed limit, not because there are two N's. Saying it's O(N^2) is not wrong IMO, but if we are strict, the complexity is O(N^2 * log(N)).
We can prove it more formally:
First, let's get rid of the i-loop for the analysis. The value of i is incremented within the k-loop, so when N is large, i will exceed 60 during the very first iteration of the outer loop, and the i-loop will execute just once. For simplicity, let's say i is just a variable initialized with 0. Now the code is:
int i = 0;
for (int j = -1; j < N; j++) {          // j from -1 to N - 1
    for (int k = 0; k + 2 < N; k++) {   // k from 0 to N - 3
        System.out.println(i * j);
        System.out.println(i);
        i = i + 1;
    }
}
If we were very strict, we'd have to consider the cases where the i-loop executes more than once, but that only happens when N is small, which is not what we are interested in for big O. So let's just forget about the i-loop for simplicity.
We look at the loops first, treating the inner statements as constant; then we look at the statements separately.
We can see that the complexity of the loops is O(N^2).
Now the statements: the interesting ones are the printing statements. Printing a number is done digit by digit (simply speaking), so it's not constant; the number of digits grows logarithmically with the number's value. The last statement (i = i + 1) is just constant.
Now we need to look at the numbers (roughly). The value of i grows up to N * N. The value of j grows up to N. So the first print prints a number that grows up to N * N * N, and the second print prints a number that grows up to N * N. So the inner body has the complexity O(log(N^3) + log(N^2)), which is just O(3 log(N) + 2 log(N)), which is just O(5 log(N)). Constant factors are dropped in big O, so the body's complexity is O(log(N)).
Combining the complexity of the loops and the complexity of the executed body yields the overall complexity: O(N^2 * log(N)).
Ask your teacher if the printing statements were supposed to be considered.
The answer is O(N^2 log N).
First of all, the outer loop can be ignored, since it has a constant number of iterations, and hence only contributes by a constant factor. Also, the i = i+1 has no effect on the time complexity since it only manipulates the outer loop.
The println(i * j) statement has a time complexity of O(bitlength(i * j)), which is bounded by O(bitlength(N^2)) = O(log N^2) = O(log N) (and similarly for the other println statement). Now, how often are these println statements executed?
The two inner loops are nested and both run from a constant up to something that is linear in N, so they iterate O(N^2) times. Hence the total time complexity is O(N^2 log N).
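If you want to sanity-check the Θ(N^2) iteration count empirically (ignoring the cost of printing), a counter like the following works; the method name and the counting are my own, not part of the question:

static long bodyExecutions(int N) {
    long count = 0;
    for (int i = 0; i < 60; i++) {
        for (int j = i - 1; j < N; j++) {
            for (int k = 0; k + 2 < N; k++) {
                count++;    // stands in for the two println calls
                i = i + 1;  // kept from the original; pushes i far past 60
            }
        }
    }
    return count;
}

For N = 1000 this returns 998998 and for N = 2000 it returns 3997998, a ratio close to 4, as expected for quadratic growth.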

How do I find worst-case complexity of the following code?

I need to know the worst-case complexity of the following code and would be grateful if you could provide the steps for how to solve it. I was thinking the answer is n^3 log n, but I'm not sure.
int complexity(int n)
    int i, j, c
    for (i = 1; i < n; i = i*3)
        for (j = n; j < n*n; j++)
            c++
    for (i = c; i > 0; i--)
        if (random(0...999) > 0)
            for (j = 0; j < 3*n; j++)
                c++
        else
            for (j = 2*n*n; j > 0; j--)
                c++
    return c
Let's look at the first nested loop
for (i = 1; i < n; i = i*3)
    for (j = n; j < n*n; j++)
        c++
The outer loop runs log(n) times [log base 3, but changing the base only multiplies by a constant, which does not affect the asymptotic complexity] and the inner loop runs roughly n^2 times, thus after this loop c = n^2 * log(n).
For the second loop:
for (i = c; i > 0; i--)
    if (random(0...999) > 0)
        for (j = 0; j < 3*n; j++)
            c++
    else
        for (j = 2*n*n; j > 0; j--)
            c++
In the worst case the else case always happens, so we can modify it to
for (i = c; i > 0; i--)
    for (j = 2*n*n; j > 0; j--)
        c++
The outer loop runs c times, which is O(n^2 * log(n)), and on each iteration the inner loop increments c by 2*n^2, so c grows by 2 * n^2 * n^2 * log(n) in total. Adding the initial value, we get that c (and thus the overall complexity) is in O(2*n^4*log(n) + n^2 * log(n)) = O(n^4 * log(n)).
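As a sanity check, here is a rough Java translation of the pseudocode (my own sketch, with the else branch forced so the worst case always happens and the count is deterministic):

static long complexityWorstCase(long n) {
    long c = 0;
    for (long i = 1; i < n; i *= 3)           // ~log_3(n) iterations
        for (long j = n; j < n * n; j++)      // ~n^2 iterations
            c++;
    for (long i = c; i > 0; i--)              // c ~ n^2 * log(n) iterations
        for (long j = 2 * n * n; j > 0; j--)  // 2 * n^2 iterations
            c++;
    return c;
}

Keep n small (say n <= 100): the returned count itself grows like n^4 * log(n), so the run time explodes quickly.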
I hope I'm not just doing your homework for you. In future I recommend showing your thought processes thus far, rather than just your final answer.
Let's look at this code one section at a time.
int complexity(int n)
    int i, j, c
    for (i = 1; i < n; i = i*3)
        for (j = n; j < n*n; j++)
            c++
So in the outer loop, i goes from 1 to n, but each time i is tripled. This means it will finish after log_3(n) iterations. Changing the base of a log is just a constant factor, which doesn't matter in computational complexity, so we can just say log(n).
The inner loop has j going from n to n^2. O(n^2 - n) = O(n^2) since lower powers are dwarfed by higher ones (i.e. quadratic dwarfs linear).
So putting this all together, the first section has computational complexity O(n^2 logn). Now let's look at the second section.
for (i = c; i > 0; i--)
    if (random(0...999) > 0)
        for (j = 0; j < 3*n; j++)
            c++
    else
        for (j = 2*n*n; j > 0; j--)
            c++
return c
Now because the outer loop's initialization depends on c, we need to know what c is. c was incremented every time in the first two nested loops, so c's value is proportional to n^2 logn. Thus the outer loop will run O(n^2 logn) times.
For the inner loops, remember we are always considering the worst-case scenario. So the random number generator is a bit misleading: compute the computational complexity of both j loops, and assume the worst case always happens.
The first j loop goes from 0 to 3n, which is simply O(n). The second j loop goes from 2n^2 to 0 which is simply O(n^2). Since the second case is worse, we assume it always happens (even though this is highly improbable). So the inner loop is O(n^2).
Multiplying the two nested loops together, we get O(n^2 logn x n^2) = O(n^4 logn).
Finally, remember we have to see which of the two sections dominates. The first section was O(n^2 logn) and the second was O(n^4 logn), so we want O(n^2 logn + n^4 logn). The latter term obviously dominates, so the final answer is O(n^4 logn).
Hope this helps. Please ask if some part was unclear.
p.s. The current top answer states that c will be ~n^3/3. Because i is tripling every time, this is incorrect; it will be n^2 log_3(n). Otherwise their work is correct.
The way to solve this is to work out a formula that gives c as a function of n. (We can use the number of c++ operations as a proxy for the overall number of operations.)
In this case, the random function means that you can't get an exact formula. But you can work out two formulae, one for the case where random always returns zero, and the other for the case where random always returns > zero.
How do you work out the formula / formulae? Maths!
Hint: the worst-case will be one of the two cases that I mentioned above. (Which one? The one whose complexity is worst, of course!)
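Following that hint, one hypothetical way to compare the two cases is a counter with the branch forced one way or the other (my own sketch, not part of the question). Forcing the 3n loop gives a count that grows like n^3 * log(n); forcing the 2n^2 loop gives n^4 * log(n), so the latter is the worst case:

static long countForced(long n, boolean elseBranch) {
    long c = 0;
    for (long i = 1; i < n; i *= 3)     // first section: c becomes ~n^2 * log(n)
        for (long j = n; j < n * n; j++)
            c++;
    for (long i = c; i > 0; i--) {      // second section, branch fixed by the flag
        if (!elseBranch)
            for (long j = 0; j < 3 * n; j++) c++;      // "random > 0" case
        else
            for (long j = 2 * n * n; j > 0; j--) c++;  // "random == 0" case, the worst
    }
    return c;
}

Comparing countForced(n, false) and countForced(n, true) for a few small n shows the two growth rates directly.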

Big O Notation with the use of methods with different parameters and time

What is the time complexity if faa() takes linear time,
for (int j = 0; j < n; j++)
    faa(k)
or
for (int j = 0; j < n; j++)
    faa(j)
From what I learned, with Big O notation the time complexity is found by looking at n, so both loops are O(n), which is then multiplied by the cost of the line inside to give the time complexity of each problem. Would the answers simply be O(kn) and O(jn), or am I lost? According to my book, faa() takes linear time, so calling faa(k) is O(k); would the same hold for j?
With faa(k): the result is O(n), as faa is executed n times at a cost that does not depend on j; since k does not change during the loop, O(kn) reduces to O(n) if k is treated as a constant.
With faa(j): the result is O(n^2), as the linear computation is done n times and each call costs O(j); since j goes up to n, we bound each call by n.
It is easier to just call it quadratic.
According to my book, faa() takes linear time, so calling faa(k) is O(k); would the same hold for j?

for (int j = 0; j < n; j++)
    faa(k)
Given an input of k, the Big O of faa(k) is O(k), and it will be executed n times; it's perfectly acceptable to say that the Big O complexity for that case is O(k*n).
I disagree with azro.
Even if k and n are independent, there is no indication that k is constant; rather the opposite.
They are both variable inputs that matter for the calculation of the algorithmic complexity.
for (int j = 0; j < n; j++)
    faa(j)
This is pretty similar to faa(k).
j averages n/2, so this will execute about n * (n/2) steps in total, which simplifies to O(n^2).
Either way this appears to reach quadratic behavior. So you're not lost at all.
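To see both cases concretely, here is a minimal sketch (faa and the steps counter are hypothetical stand-ins, not from the book) where faa(m) performs m units of work:

public class FaaDemo {
    static long steps = 0;

    // faa(m) stands for any method that takes linear time in its argument
    static void faa(int m) {
        for (int x = 0; x < m; x++) steps++;
    }

    public static void main(String[] args) {
        int n = 1000, k = 500;

        steps = 0;
        for (int j = 0; j < n; j++) faa(k);
        System.out.println(steps);  // 500000 = k * n

        steps = 0;
        for (int j = 0; j < n; j++) faa(j);
        System.out.println(steps);  // 499500 = n * (n - 1) / 2, quadratic in n
    }
}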

Complexity in terms of Θ

I know that the following code is considered "Linear or Θ(n)" what I don't understand is how we know that it is.
Assuming that n has been declared appropriately and has been set to a value.
int sum = 0;
for (int i = 0; i < n*2; i++)
    sum++;
Below is an additional loop that is non-linear from what I can tell, but again my question is: how do we determine possible running times by looking only at the code? Θ complexity, in terms of n, to be more specific.
int sum = 0;
for (int i = 0; i < n; i++)
    for (int j = i; j < n; j += 2)
        sum++;
In your first example you have only one for loop. The i value increases linearly until the condition i < n*2 fails. The execution time of the loop is directly proportional to n, so its time complexity is O(n).
In your second example you have nested for loops. The i value increases linearly, and for each i value the inner for loop executes about (n-i)/2 times, since j starts at i and is increased by 2 in each iteration. Summed over all i, the total running time is about n^2/4. The constant factor is negligible (or rather, dropped) in asymptotic analysis, so we can say its running time is O(n^2).
Coming to the difference between Big O and Theta notation: Big O is used to represent an upper bound for the growth function, and Theta is used to represent a tight bound for the growth function. For more info, refer to the difference between Big O and Theta notation.
Big O notation can be thought of as asking: given an input of size n, how many times is each element processed/accessed in the following program?
So for your first example
int sum = 0;
for (int i = 0; i < n*2; i++)
    sum++;
It doesn't matter what size n is; it only matters how many times the loop runs with respect to n. Since this piece of code loops n*2 times, its running time is also n*2. The reason this is called linear, or O(n), even though it runs n*2 times, is that we are interested in the growth of the running time at astronomically large values of n. At that point, the multiplier 2 in front becomes irrelevant, and that's why it is O(n).
int sum = 0;
for (int i = 0; i < n; i++)
    for (int j = i; j < n; j += 2)
        sum++;
In this example, the program will run roughly n * (n/2) times: n times in the i loop and at most n/2 times in the j loop (strictly, the j loop runs about (n-i)/2 times, but that only changes the constant factor).
Again, since we are interested in the growth of this function at astronomically large values of n, the factor of 1/2 becomes irrelevant, making the running time n*n, or n^2.
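A quick empirical check (the helper below is mine, not from the question) confirms the quadratic growth of the second example:

static long countInner(int n) {
    long sum = 0;
    for (int i = 0; i < n; i++)
        for (int j = i; j < n; j += 2)  // about (n - i) / 2 iterations
            sum++;
    return sum;
}

countInner(1000) returns 250500 and countInner(2000) returns 1001000, both close to n^2 / 4: doubling n roughly quadruples the count, the signature of Θ(n^2).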

Big O Notation, adding loops of different significance

I promise this is the last Big O question
Big O Notation for the following loops...
for (int i = n; i > 0; i = i / 2) {
    for (int j = 0; j < n; j++) {
        count++;
    }
}

for (int k = 0; k < n; k++) {
    for (int m = 0; m < n; m++) {
        count++;
    }
}
Here is what I think I'm sure of:
The first set of nested loops is O(n*log2(n)) and the second set of nested loops is O(n^2). When adding these, is it correct to drop the first term and say that the overall Big O is O(n^2)?
Second question: when adding Big O notation for loops in series, is it always correct to drop the less significant terms?
The answer to both your questions is yes. You always drop smaller terms, as they are dominated by the larger terms for sufficiently large n, and you only care about large n when doing Big O analysis.
The reason is that n*log2(n) is dominated by n^2 asymptotically: for sufficiently large n, |n * log2(n)| < |n^2|.
If you don't see why this means you can drop the n*log2(n) term, try adding n^2 to both sides:
n^2 + n*log2(n) < n^2 + n^2
n^2 + n*log2(n) < 2 * n^2
Thus, since we know we can ignore a constant factor (here the 2), we know we can ignore the less significant term: O(n^2 + n*log2(n)) = O(n^2).
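If you want to see the domination numerically, a counter like this (hypothetical, mirroring the two loop nests above) helps:

static long[] countBoth(int n) {
    long first = 0, second = 0;
    for (int i = n; i > 0; i = i / 2)  // ~log2(n) outer iterations
        for (int j = 0; j < n; j++)
            first++;
    for (int k = 0; k < n; k++)
        for (int m = 0; m < n; m++)
            second++;
    return new long[] { first, second };
}

For n = 1024 this returns 11264 (about n * log2(n)) and 1048576 (n^2); the n^2 term already accounts for about 99% of the total, and the gap only widens as n grows.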
