How to calculate Time Complexity in nested for loop? [duplicate] - java

This question already has answers here:
Time complexity of nested for-loop
(10 answers)
Closed 5 years ago.
What is the time complexity of the following snippet? and could you explain it?
for (int i = 0; i < n; i++) {
    for (int j = 0; j <= i; j++) {
        // print something
    }
}

The outer loop has n iterations.
The inner loop has i+1 iterations for each iteration of the outer loop.
Therefore the total number of iterations of the inner loop is:
1 + 2 + 3 + ... + n
which is equal to
n*(n+1)/2
This is O(n^2)
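As a sanity check, here is a small sketch (the class and method names are my own, not from the answer) that counts the iterations and compares the total with the closed form:

```java
public class TriangularCount {
    // Count how many times the inner body runs for a given n.
    static long countIterations(int n) {
        long count = 0;
        for (int i = 0; i < n; i++) {
            for (int j = 0; j <= i; j++) {
                count++; // stands in for "print something"
            }
        }
        return count;
    }

    public static void main(String[] args) {
        int n = 1000;
        // Matches the closed form n*(n+1)/2, which grows as O(n^2).
        System.out.println(countIterations(n) == (long) n * (n + 1) / 2); // true
    }
}
```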

Whenever you face a question about time complexity, just look at how many times you are going to do the work.
Here, whatever work you are doing, let's say printing something, is done once per inner-loop iteration, and the inner loop runs i+1 times for each of the n iterations of the outer loop.
Hence, you will do the work 1 + 2 + 3 + ... + n times,
which becomes
n*(n+1)/2
times.
Hence, it will simply be O(n^2)

For i=0, the inner loop runs 1 time
For i=1, the inner loop runs 2 times
For i=2, the inner loop runs 3 times
...
For i=n-1, the inner loop runs n times
So, total time = 1 + 2 + 3 + ... + n = n*(n+1)/2,
which is represented as O(n^2)

Time complexity is Big O of n squared, i.e. O(n^2).
For the outer loop, it is n.
The cost of the inner loop is 1 + 2 + 3 + ... + (n-2) + (n-1) + n.
So the total cost is O(n^2).

Quadratic Function (Quadratic Run Time)
An algorithm is said to be quadratic if its running time grows proportionally to the square of the input size.
The main reason the quadratic function appears is nested loops, where the inner loop performs a linear number of operations n and the outer loop also performs a linear number of operations n; thus the whole algorithm performs n * n operations.
Here n is the number of elements in the given array.
If the number of elements in your array is 10, it would take 10 * 10 = 100 operations to finish executing the logic.
A good example is checking how many times a single item occurs in a list: this requires comparing each element of the list with every element of the list, including itself.
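That occurrence-count example can be sketched like this (the class and method names are hypothetical, not from the original answer); two nested linear scans give n * n comparisons:

```java
import java.util.Arrays;

public class OccurrenceCount {
    // For each element, count how many times it appears in the whole
    // array: two nested scans of length n, hence n * n comparisons.
    static int[] countOccurrences(int[] a) {
        int[] counts = new int[a.length];
        for (int i = 0; i < a.length; i++) {
            for (int j = 0; j < a.length; j++) {
                if (a[i] == a[j]) {
                    counts[i]++; // includes comparing the element with itself
                }
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        // Each '1' occurs twice, '2' and '3' once.
        System.out.println(Arrays.toString(countOccurrences(new int[]{1, 2, 1, 3})));
        // prints [2, 1, 2, 1]
    }
}
```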
I have some old Big-O notation notes that cover the seven most common run-time complexities; see if they help.

Related

Time complexity of while inside for loop? [duplicate]

This question already has answers here:
Big O, how do you calculate/approximate it?
(24 answers)
Closed 27 days ago.
In a Java app, I have the following algorithm that is used for "Longest Substring with K Distinct Characters" as shown below:
Input: String="araaci", K=2
Output: 4
Explanation: The longest substring with no more than '2' distinct characters is "araa".
Input: String="cbbebi", K=3
Output: 5
Explanation: The longest substrings with no more than '3' distinct characters are "cbbeb" & "bbebi".
Here is the code:
public static int longestSubstring(String str, int k) {
    Map<Character, Integer> map = new HashMap<>();
    int maxLength = 0;
    int l = 0;
    for (int r = 0; r < str.length(); r++) {
        char cRight = str.charAt(r);
        map.put(cRight, map.getOrDefault(cRight, 0) + 1);
        while (map.size() > k) {
            char cLeft = str.charAt(l);
            map.put(cLeft, map.getOrDefault(cLeft, 0) - 1);
            if (map.get(cLeft) == 0) {
                map.remove(cLeft);
            }
            l++;
        }
        maxLength = Math.max(maxLength, r - l + 1);
    }
    return maxLength;
}
I could not understand the time complexity in the following definition:
Time Complexity
The time complexity of the above algorithm will be O(N) where ‘N’ is the number of characters in the input string. The outer for loop runs for all characters and the inner while loop processes each character only once, therefore the time complexity of the algorithm will be O(N+N) which is asymptotically equivalent to O(N).
So I thought that when there is a while loop inside a for loop, the time complexity is O(n^2). But here I could not understand "the inner while loop processes each character only once". Can you explain this statement, if it is correct?
In order to analyse the complexity of an algorithm, most of the time you'll need to understand what the code does in detail (you don't need to understand whether what it does is correct, though). Using the structure of the code (i.e. whether loops are nested) or just looking at the big picture is usually a bad idea. In other words, computing the complexity of an algorithm takes a lot of (your) time.
As you stated "the inner while loop processes each character only once", which is indeed important to notice, but that's not sufficient in my opinion.
Loops do not matter per se, what matters is the total number of instructions your program will run depending on the input size. You can read "instruction" as "a function that runs in constant time" (independently of the input size).
Making sure all function calls are in O(1)
Let's first look at the complexity of all function calls:
We have several map reads, all in O(1) (read this as "constant-time read"):
map.getOrDefault(cRight, 0)
map.getOrDefault(cLeft, 0)
Also several map insertions, also all in O(1):
map.put(cRight, ...)
map.put(cLeft, ...)
And map item deletion map.remove(cLeft), also in O(1)
The Math.max(..., ...) is also in O(1)
str.charAt(..) is also in O(1)
There are also increments/decrements of loop variables, checks of their values, and a few other +1/-1 operations, all in O(1).
Ok, now we can safely say that all external functions are "instructions" (or, more accurately, that all these functions use a constant number of instructions). Notice that the hashmap complexities are not exactly O(1) in all cases, but this is a detail you can look at separately.
This means we now only need to count how many times these functions are called.
Analyzing the number of function calls
The argument made in the comments is accurate, but using the fact that char cLeft = str.charAt(l) will crash the program if l >= N is not very satisfactory in my opinion. It is a valid point though: it is impossible for the inner loop to be executed more than N times in total (which leads directly to the expected O(N) time complexity).
If this was given as an exercise, I doubt that was the expected answer. Let's analyze the program as if it were written using char cLeft = str.charAt(l % str.length()) instead, to make it a little more interesting.
I feel like the main argument should be based on the total character count stored in map (a map of character-to-counter pairs). Here are some facts, mainly:
1. The outer loop always increases a single counter by exactly one.
2. The inner loop always decreases a single counter by exactly one.
Also:
3. The inner loop ensures that all counters are positive (it removes counters when they reach 0).
4. The inner loop runs only while the number of (positive) counters is > k.
Lemma: for the inner loop to be executed C times in total, the outer loop needs to be executed at least C times in total.
Proof: suppose the outer loop is executed C times and the inner loop at least C + 1 times; then there exists an iteration r of the outer loop during which the number of inner-loop executions first exceeds the number of outer-loop executions. During this iteration r, at some point (by 1. and 2.) the sum of all character counters in map will equal 0. By fact 3., this means that there are no counters left (map.size() equals 0). Since k > 0, by 4. it is then impossible to enter the inner loop one more time during iteration r. This contradiction proves the Lemma.
Less formally, the inner loop can never execute when the sum of counters is 0, because entering it requires more than k (> 0) positive counters, whose sum is necessarily greater than 0. In other words, the consumer (the inner loop) can't consume more than what is produced (by the outer loop).
Because of this Lemma, and because the outer loop executes exactly N times, the inner loop executes at most N times. In total we execute at most A * N function calls in the outer loop and B * N function calls in the inner loop, where A and B are constants and all functions are in O(1); therefore (A + B) * N ∈ O(N).
Also note that writing O(N + N) is a pleonasm (or doesn't make sense), because big-O is supposed to ignore all constant factors (both multiplicative and additive). Usually people do not write equations using big-O notation because it is hard to write something correct and formal (apart from obvious set inclusions like O(log N) ⊆ O(N)). Usually you would say something like "all operations are in O(N), therefore the algorithm is in O(N)".
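To see the argument concretely, here is a sketch (the class name and counter variables are mine) that instruments both loops of the algorithm above and reports how often each one runs:

```java
import java.util.HashMap;
import java.util.Map;

public class SlidingWindowCount {
    // Same sliding-window algorithm, instrumented with counters to show
    // that the inner while loop runs at most N times in total.
    static long[] countLoopWork(String str, int k) {
        Map<Character, Integer> map = new HashMap<>();
        long outer = 0, inner = 0;
        int l = 0;
        for (int r = 0; r < str.length(); r++) {
            outer++;
            char cRight = str.charAt(r);
            map.put(cRight, map.getOrDefault(cRight, 0) + 1);
            while (map.size() > k) {
                inner++; // each iteration advances l, which never passes r
                char cLeft = str.charAt(l);
                map.put(cLeft, map.get(cLeft) - 1);
                if (map.get(cLeft) == 0) map.remove(cLeft);
                l++;
            }
        }
        return new long[]{outer, inner};
    }

    public static void main(String[] args) {
        long[] work = countLoopWork("araaci", 2);
        // Outer runs exactly N times, inner at most N times: O(N + N) = O(N).
        System.out.println("outer=" + work[0] + " inner=" + work[1]);
        // prints outer=6 inner=4
    }
}
```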

Time complexity of the following program [duplicate]

This question already has answers here:
How can I find the time complexity of an algorithm?
(10 answers)
Closed 4 years ago.
So, basically I wanted to find, for each element of the first array, all the elements in the second array that are less than or equal to it. Both arrays are sorted. I know the solution; I don't want that.
I just want to know the time complexity of this program and how will we calculate it. Thank you in advance.
int count = 0;
for (int i = 0; i < n; i++)
{
    for (int j = count; j < m; j++)
    {
        if (arr[i] >= arr[j])
            // some O(1) code
        else
        {
            count = j;
            break;
        }
    }
}
The complexity will be O(n*m), simply because for each of the n values of the outer loop the inner loop can run up to m times.
Well, there is only one array in your code, contrary to your explanation, which says there are two arrays.
Assuming there is a typo and there should be a second array:
Worst: you can establish an upper bound of O(n * m). This happens if all elements in the second array are smaller than those in the first.
Best: you can establish a lower bound of O(n).
This happens if all elements of the second array are bigger than those in the first (the first element breaks the loop).
Average: if you assume an even distribution, you get an average of O(n * m / 2), which is the same as O(n * m).
Conclusion: it's an O(n^2)-league algorithm.
Only one array:
However, if I take the code "as is" (only one array) and also take into account that it is sorted:
If arr[i] < arr[j] for i < j holds:
the inner loop stops at j == i, skipping the iterations with j > i. That gives an upper bound of O(n * m / 2), still in the O(n^2) league.
Reverse order,
so arr[i] < arr[j] for i > j holds:
the loop breaks immediately for j < i, so the inner loop executes at most one time: O(n + m), resp. O(n).
But I guess it is a typo and you meant two arrays, so I skip the case of sorted with duplicates (it is again O(n * m), e.g. if all elements are the same).
O(n*m), since you are going through n outer elements,
and for each outer element you have an inner loop with m elements.
A for loop's time complexity is basically how many times the loop will run; a single loop over n elements is O(n).
Complexity: O(m*n)
As two for loops are involved, it may vary in different cases, but it is O(m*n) if both get fully executed.
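Since the question apparently intended two arrays, here is a sketch of that two-array version (the class, method, and array names are my own), instrumented to count inner-loop comparisons; it illustrates the O(n * m) worst case and the O(n) best case discussed above:

```java
public class CompareCount {
    // Two-array version of the question's snippet (assumed from its text),
    // instrumented to count how many comparisons the inner loop performs.
    static long countComparisons(int[] arr1, int[] arr2) {
        long comparisons = 0;
        int count = 0;
        for (int i = 0; i < arr1.length; i++) {
            for (int j = count; j < arr2.length; j++) {
                comparisons++;
                if (arr1[i] >= arr2[j]) {
                    // some O(1) code would go here
                } else {
                    count = j;
                    break;
                }
            }
        }
        return comparisons;
    }

    public static void main(String[] args) {
        // Worst case: every element of arr2 is <= every element of arr1,
        // the inner loop never breaks, so count never advances: n * m.
        System.out.println(countComparisons(new int[]{10, 20, 30},
                                            new int[]{1, 2, 3, 4})); // prints 12
        // Best case: every element of arr2 is bigger, the first comparison
        // breaks the inner loop each time: n comparisons.
        System.out.println(countComparisons(new int[]{10, 20, 30},
                                            new int[]{100, 200})); // prints 3
    }
}
```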

Order of growth according to Big-O Notation [duplicate]

This question already has answers here:
Big O, how do you calculate/approximate it?
(24 answers)
How can I find the time complexity of an algorithm?
(10 answers)
Closed 4 years ago.
Hello Stack users, I am having trouble finishing this order-of-growth problem. I was able to figure out the first two parts, which are not listed in the picture: 1. O(1) and 3. O(N), and I was able to place these into their correct slots. I still cannot figure out how to determine the growth rate of 2, 4, 5, 6 for the slots provided. Any suggestions on how to determine this?
O(N)
The first for loop takes N and the second also takes N, but they run one after the other, so
N + N = 2N, which is O(N)
O(N^2)
The first for loop takes N and the second also N, but in this case they are nested: the inner loop takes N for every iteration of the outer loop, so
N * N = O(N^2)
O(N)
The first for loop takes N and the second takes 5, but they are nested, so
5 * N = 5N = O(N)
O(log(N))
To divide a number N by 2 continuously until it reaches 1 takes
log2(N) steps
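The halving argument can be checked with a small sketch (the class and method names are mine):

```java
public class HalvingCount {
    // Count how many times n can be halved before it reaches 1.
    static int halvings(int n) {
        int steps = 0;
        while (n > 1) {
            n = n / 2;
            steps++;
        }
        return steps;
    }

    public static void main(String[] args) {
        // For a power of two this is exactly log2(N): 1024 -> 10 halvings.
        System.out.println(halvings(1024)); // prints 10
    }
}
```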

Big-Oh notation for a single while loop covering two halves of an array with two iterator vars

Trying to brush up on my Big-O understanding for a test (A very basic Big-O understanding required obviously) I have coming up and was doing some practice problems in my book.
They gave me the following snippet
public static void swap(int[] a)
{
    int i = 0;
    int j = a.length - 1;
    while (i < j)
    {
        int temp = a[i];
        a[i] = a[j];
        a[j] = temp;
        i++;
        j--;
    }
}
Pretty easy to understand I think. It has two iterators each covering half the array with a fixed amount of work (which I think clocks them both at O(n/2))
Therefore O(n/2) + O(n/2) = O(2n/2) = O(n)
Now please forgive me, as this is my current understanding and that was my attempt at the solution to the problem. I have found many examples of big-O online, but none quite like this, where the two iterators move toward each other and modify the array at basically the same time.
The fact that it has one loop is making me think it's O(n) anyway.
Would anyone mind clearing this up for me?
Thanks
The fact that it has one loop is making me think it's O(n) anyway.
This is correct. Not because it is one loop, but because it is one loop whose iteration count depends on the size of the array by a constant factor: big-O notation ignores any constant factor. O(n) means that the only influence on the running time is the size of the array. That it actually takes half that time does not matter for big-O.
In other words: if your algorithm takes time n + X, X*n, or X*n + Y, it all comes down to O(n).
It gets different if the number of iterations changes other than by a constant factor, for instance as a logarithmic or exponential function of n: if size 100 needs 2 iterations, size 1000 needs 3, and size 10000 needs 4, then it would be, for instance, O(log(n)).
It would also be different if the loop were independent of the size. I.e., if you always looped 100 times regardless of the array size, your algorithm would be O(1) (i.e., it would run in constant time).
I was also wondering if the equation I came up with to get there was somewhere in the ballpark of being correct.
Yes. In fact, if your equation ends up being some form of n * C + Y, where C is some constant and Y is some other value, the result is O(n), regardless of whether C is greater than 1 or smaller than 1.
You are right about the loop. The loop determines the Big O, but it runs only for half the array.
So it's 2 + 6 * (n/2).
If we make n very large, the other numbers are really small, so they won't matter.
So it's O(n).
Let's say you run 2 separate loops: 2 + 6 * (n/2) + 6 * (n/2). In that case it will be O(n) again.
But if we run a nested loop, 2 + 6 * (n * n), then it will be O(n^2).
Always remove the constants and do the math. You've got the idea.
As j - i decreases by 2 units on each iteration, N/2 iterations are performed (assuming N = length(a)).
Hence the running time is indeed O(N/2), and O(N/2) is strictly equivalent to O(N).
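A quick instrumented version of the loop (the counter and return value are my additions) confirms the floor(n/2) iteration count:

```java
public class SwapCount {
    // The reversal loop from the question, instrumented with a counter.
    static int swapIterations(int[] a) {
        int i = 0, j = a.length - 1, iterations = 0;
        while (i < j) {
            int temp = a[i];
            a[i] = a[j];
            a[j] = temp;
            i++;
            j--;
            iterations++;
        }
        return iterations;
    }

    public static void main(String[] args) {
        // floor(n / 2) iterations: a constant factor of n, hence O(n).
        System.out.println(swapIterations(new int[10])); // prints 5
        System.out.println(swapIterations(new int[11])); // prints 5
    }
}
```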

Time complexity of the java code

I am taking up algorithm course on coursera,and I am stuck on this particular problem. I am supposed to find the time complexity of this code.
int sum = 0;
for (int i = 1; i <= N*N; i = i*2)
{
    for (int j = 0; j < i; j++)
        sum++;
}
I checked it in Eclipse itself; for any value of N, the number of times the sum statement is executed is less than N.
final value of sum:
for N=8 sum=3
for N=16 sum=7
for N=100000 sum=511
so the time complexity should be less than N,
but the answer that is given is N raised to the power 2. How is this possible?
What I have done so far:
the first loop will run log(N^2) times, and consequently the second loop will execute 1, 2, 4, ..., up to 2^(2*log N) times
The inner loop will run 1 + 2 + 4 + 8 + ... + 2^M times in total, where 2^M <= N * N.
The sum of the powers of 2 up to N * N is approximately 2 * N * N, or O(N^2).
Note: when N=100000, N*N overflows int, so the result is misleading. If you consider overflow to be part of the problem, the sum is pretty random for large numbers, so you could argue it is O(1); i.e. if N=2^15, N^2 = 2^30 and the sum will be Integer.MAX_VALUE. No higher value of N will give a higher sum.
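To check the geometric-series argument while avoiding the int overflow noted above, here is a sketch (the class and method names are mine) that counts the sum++ executions with long arithmetic:

```java
public class GeometricCount {
    // Count sum++ executions of the question's snippet, using long
    // arithmetic so N*N does not overflow for the tested values.
    static long countSum(long n) {
        long sum = 0;
        for (long i = 1; i <= n * n; i = i * 2) {
            for (long j = 0; j < i; j++) {
                sum++;
            }
        }
        return sum;
    }

    public static void main(String[] args) {
        for (int n : new int[]{8, 16, 64}) {
            // The total is 1 + 2 + 4 + ... + 2^M with 2^M <= N*N, which
            // stays below 2*N*N: quadratic, not logarithmic, growth.
            System.out.println(n + " -> " + countSum(n)
                    + " (2*N*N = " + 2L * n * n + ")");
        }
        // prints 8 -> 127 (2*N*N = 128), 16 -> 511 (...), 64 -> 8191 (...)
    }
}
```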
There is a lot of confusion here, but the important thing is that big-O notation is all about growth rate, or limiting behavior, as mathematicians say. That a function executes in O(n*n) means that the execution time increases faster than, for example, n, but slower than, for example, 2^n.
When reasoning with big-O notation, remember that constants "do not count". There are a few quirks in this particular question.
The N*N bound itself looks quadratic...
...however, the for-loop increment is i = i*2, causing the outer loop to be executed only about log(n*n) = 2*log(n) times, and the function would be in O(log n) if the contents of the inner loop ran in time independent of n.
But the inner loop's run time does depend on n: across those roughly 2*log(n) outer iterations it performs 1 + 2 + 4 + ... up to about n*n operations, a geometric series summing to roughly 2*n*n. Remembering that "constants don't count", we end up in O(n*n).
Hope this cleared things up.
So sum++ will be executed 1 + 2 + 4 + 8 + ... + N*N times in total, with about log2(N*N) terms. The sum of this geometric progression is 1 * (2^(log2(N*N) + 1) - 1) / (2 - 1) = 2*N*N - 1 = O(N*N).
Your outer loop runs log(N^2) -> 2*log(N) -> O(log(N)) times, and your inner loop runs at most N^2 times, so N^2*log(N) is an upper bound; it is not tight, though, because the inner iteration counts 1, 2, 4, ..., N^2 form a geometric series that sums to about 2*N^2, giving O(N^2) overall.
About the benchmark: values such as N=8 or N=16 are far too small; the time spent in the loop is marginal relative to JVM startup, cache misses, and so on. You must:
Begin with the biggest N, and check how the time evolves.
Make multiple runs for each value of N.
Remember that time complexity is a measure of how the algorithm behaves when N becomes really big.
