Find the time complexity and Big O of the following code - Java

Find the time complexity and Big O of the following code.
I am confused about the time complexity of the if/else statements and of the bar and foo functions. Some friends say the time complexity is O(n^2) and some say it is O(n). I think it is O(n), because the return statement inside the for loops makes both foo and bar O(1), and the main loop runs n times, so the overall time complexity is O(n).
// Sample Code
import java.util.Scanner;

public static void main(String[] args) {
    Scanner in = new Scanner(System.in);
    int n = in.nextInt();
    int sum = 0;
    for (int i = 0; i < n; ++i) {
        if (i % 2 != 0) {
            sum += foo(i, n) + foo(1 + i, n);
        } else {
            sum += foo(i, n) + bar(i, n);
        }
    }
}

static int bar(int a, int n) {
    for (int i = 0; i < n; ++i) {
        for (int j = 0; j < i; ++j) {
            return a * (i + j);
        }
    }
    return 0;
}

static int foo(int a, int n) {
    for (int i = 0; i < n; ++i) {
        return a * i;
    }
    return 0;
}

Since you expressed your algorithm using an iterative paradigm, all you have to do is count the number of operations.
The complexity of bar is O(n^2), because you have n(n-1)/2 operations of the type "add i and j, then multiply by a".
The complexity of foo is O(n) (n operations of the type "multiply by a").
In your main loop, half of the iterations call foo twice, which yields (n/2) * 2n operations, which is O(n^2).
The other half of the iterations call foo then bar, which yields (n/2) * [n + n(n-1)/2] = 0.25n^3 + 0.25n^2 operations, which is O(n^3).
Therefore, the overall complexity of your loop is O(n^3). This is generally referred to as the time complexity of your algorithm, even though we counted a number of operations: the intuition is to choose a set of unit operations and acknowledge that each of them takes a constant amount of time (in our case, the arithmetic operations + and *).
We can also analyse your algorithm from the perspective of memory consumption. In your case, the memory consumed is constant whatever the value of n is. This is why we would say that the space complexity is O(1) with respect to n.
Edit: I thought the "premature" return in foo and bar was a typo. If the functions really do return on the first pass through the loop, then the complexity of foo and bar is O(1), and your main loop is O(n).
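To see this concretely, here is a minimal sketch of my own (the class name ComplexityCheck and the iterations counter are my additions, not part of the original post) that instruments foo and bar; with the early returns taken literally, the counter grows linearly with n:

public class ComplexityCheck {
    static long iterations = 0; // counts loop-body entries across all calls

    static int foo(int a, int n) {
        for (int i = 0; i < n; ++i) {
            iterations++;
            return a * i; // returns on the very first iteration -> O(1)
        }
        return 0; // only reached when n == 0
    }

    static int bar(int a, int n) {
        for (int i = 0; i < n; ++i) {
            for (int j = 0; j < i; ++j) {
                iterations++;
                return a * (i + j); // inner body first runs when i == 1 -> O(1)
            }
        }
        return 0;
    }

    public static void main(String[] args) {
        for (int n : new int[] {1_000, 10_000, 100_000}) {
            iterations = 0;
            int sum = 0;
            for (int i = 0; i < n; ++i) {
                if (i % 2 != 0) sum += foo(i, n) + foo(1 + i, n);
                else            sum += foo(i, n) + bar(i, n);
            }
            // with the early returns, iterations ~ 2n, i.e. linear in n
            System.out.println("n = " + n + ", sum = " + sum + ", loop-body entries = " + iterations);
        }
    }
}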

foo() -> O(n), because its loop runs from i = 0 to n (treating the loop as if it ran to completion).
bar() -> O(n^2), because its inner loop performs 1 + 2 + 3 + ... + (n - 1) iterations in total, which is O(n^2).
In main() you go from i = 0 to n. For every odd value of i you call foo(n) twice, but foo is O(n), which is not the worst case, so we do not care. For every even value of i you also call bar(n), which is the worst case.
Considering the worst case, bar(n) is called n/2 times (every other i from 0 to n), giving (n/2) * O(n^2) = O(n^3) overall.

Related

If my method with time complexity O(n^2) uses another method with time complexity O(n), does its time complexity change?

The method what has a time complexity of O(n^2), and it uses the method f, which has a time complexity of O(n). I need to calculate the overall time complexity of the method what. Do I also take into account the time complexity of f, so that the overall time complexity of what would be O(n^2 * n) = O(n^3), or does each method take care of its own time complexity, so that what stays at time complexity O(n^2)?
private static int f(int[] a, int low, int high)
{
    int res = 0;
    for (int i = low; i <= high; i++)
        res += a[i];
    return res;
}

public static int what(int[] a)
{
    int temp = 0;
    for (int i = 0; i < a.length; i++)
    {
        for (int j = i; j < a.length; j++)
        {
            int c = f(a, i, j);
            if (c % 2 == 0)
            {
                if (j - i + 1 > temp)
                    temp = j - i + 1;
            }
        }
    }
    return temp;
}
Time complexity gives a high-level estimate of how much time the code takes to execute. So yes, you need to include every line of code that takes time, i.e. the time complexity of the f function as well; it also takes time.
The what function has two nested loops, plus the loop inside the f function, which is called from what.
Let's calculate the time complexity.
First consider the cost of f when it is called from what with i = 0, while j is incremented by the inner loop from 0 to n - 1:
1. When i = 0 and j = 0, the loop in f executes one time.
2. When i = 0 and j = 1, it executes two times.
3. When i = 0 and j = 2, it executes three times.
...
n. When i = 0 and j = n - 1, it executes n times.
So
Total = 1 + 2 + 3 + 4 + ... + n = n(n+1)/2 (the sum of the first n consecutive numbers)
Thus the cost of all calls to f while the j loop runs from 0 to n - 1 with i = 0 is n(n+1)/2.
The i loop then repeats this with a shorter j range each time, so the total cost of the whole program is equivalent to
n(n+1)/2 + ((n-1)n)/2 + ((n-2)(n-1))/2 + ... + 1
which means the time complexity is O(n^3).
First of all, there is no n in the problem, so you cannot express the complexity as some function of n unless you define it.
Clearly, the function f takes time proportional to high - low + 1. It gets called in the inner loop of what for low = i and all high in [i, l) (where l := a.length). So the cost of this inner loop is proportional to 1 + 2 + ... + (l - i) = (l - i)(l - i + 1)/2, which is a quadratic polynomial in l - i.
Finally, the outer loop repeats this for all i in [0, l), i.e. l - i in [1, l], resulting in a complexity of ~l³/6.
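For what it is worth, the ~l³/6 count is easy to check empirically. Below is a small harness of my own (the class name WhatCost and the adds counter are hypothetical additions, not part of the question) that counts the additions performed inside f:

public class WhatCost {
    static long adds = 0; // counts res += a[i] executions

    static int f(int[] a, int low, int high) {
        int res = 0;
        for (int i = low; i <= high; i++) {
            res += a[i];
            adds++; // one addition per array element visited
        }
        return res;
    }

    static int what(int[] a) {
        int temp = 0;
        for (int i = 0; i < a.length; i++) {
            for (int j = i; j < a.length; j++) {
                int c = f(a, i, j);
                if (c % 2 == 0 && j - i + 1 > temp) {
                    temp = j - i + 1;
                }
            }
        }
        return temp;
    }

    public static void main(String[] args) {
        for (int l : new int[] {100, 200, 400}) {
            adds = 0;
            what(new int[l]); // all zeros; the values do not affect the count
            // the ratio should approach 1 as l grows, i.e. ~l^3/6 additions
            System.out.println("l = " + l + ", adds = " + adds
                    + ", adds / (l^3/6) = " + adds / (Math.pow(l, 3) / 6.0));
        }
    }
}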

What is the running time of this triple nested for loop?

Here is the code:
for (int i = 0; i < 60; i++) {
    for (int j = i - 1; j < N; j++) {
        for (int k = 0; k + 2 < N; k++) {
            System.out.println(i * j);
            System.out.println(i);
            i = i + 1;
        }
    }
}
I believe it is O(N^2) since there are two Ns in the for loops, but I am not too sure.
Any help is appreciated, thanks!
It is close to that, but rather because the i-loop has a fixed limit than because there are two Ns. Saying it is O(N^2) is not wrong IMO, but if we are strict, the complexity is O(N^2 * log(N)).
We can prove it more formally:
First, let's get rid of the i-loop for the analysis. The value of i is incremented within the k-loop, so when N is large, i will exceed 60 already during the first iteration of the outer loop, and the i-loop will execute just one iteration. For simplicity, treat i as a plain variable initialized to 0. Now the code is:
int i = 0;
for (int j = -1; j < N; j++) {        // j from -1 to N - 1
    for (int k = 0; k + 2 < N; k++) { // k from 0 to N - 3
        System.out.println(i * j);
        System.out.println(i);
        i = i + 1;
    }
}
If we were very strict, we would have to look at the cases where the i-loop executes more than once, but that only happens when N is small, which is not what we are interested in for big O. So let's just forget about the i-loop for simplicity.
We first look at the loops, treating the inner statements as constant; then we look at the statements separately.
We can see that the complexity of the loops alone is O(N^2).
Now the statements: the interesting ones are the printing statements. Printing a number is done digit by digit (simply speaking), so it is not constant; the number of digits of a number grows logarithmically with the number. The last statement (i = i + 1) is just constant.
Now we need to look at the numbers (roughly). The value of i grows up to N * N. The value of j grows up to N. So the first print prints a number that grows up to N * N * N. The second print prints a number that grows up to N * N. So the inner body has the complexity O(log(N^3) + log(N^2)), which is just O(3log(N) + 2log(N)), which is just O(5log(N)). Constant factors are dropped in big O so the final complexity is O(log(N)).
Combining the complexity of the loops and the complexity of the executed body yields the overall complexity: O(N^2 * log(N)).
Ask your teacher if the printing statements were supposed to be considered.
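As a side note, the claim that the digit count grows logarithmically is easy to check; this tiny sketch (entirely my own, with hypothetical example values) prints the digit count of a number next to log10(x) + 1:

public class DigitLength {
    public static void main(String[] args) {
        for (long x : new long[] {9, 99, 9_999, 9_999_999, 999_999_999_999L}) {
            int digits = Long.toString(x).length(); // digits printed by println
            // digits == floor(log10(x)) + 1, so it grows logarithmically in x
            System.out.println("x = " + x + ", digits = " + digits
                    + ", log10(x) + 1 = " + (Math.log10(x) + 1));
        }
    }
}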
The answer is O(N^2 log N).
First of all, the outer loop can be ignored, since it has a constant number of iterations and hence only contributes a constant factor. Also, the i = i + 1 statement has no effect on the asymptotic complexity, since it only manipulates the outer loop's counter.
The println(i * j) statement has a time complexity of O(bitlength(i * j)), which is bounded by O(bitlength(N^3)) = O(log N^3) = O(log N) (and similarly for the other println statement). Now, how often are these println statements executed?
The two inner loops are nested and both run from a constant up to something that is linear in N, so they iterate O(N^2) times. Hence the total time complexity is O(N^2 log N).
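To double-check the O(N^2) iteration count (and that the outer loop effectively runs only once for large N), here is a sketch of my own that replaces the two println calls with a counter; the class name and the counter are hypothetical additions:

public class TripleLoopCount {
    public static void main(String[] args) {
        for (int N : new int[] {100, 1_000, 10_000}) {
            long count = 0;
            int i; // declared outside so we can inspect it afterwards
            for (i = 0; i < 60; i++) {
                for (int j = i - 1; j < N; j++) {
                    for (int k = 0; k + 2 < N; k++) {
                        count++; // stands in for the two println calls
                        i = i + 1;
                    }
                }
            }
            // count is (N + 1)(N - 2) ~ N^2, and i ends up far beyond 60,
            // which is why the outer loop runs only one iteration
            System.out.println("N = " + N + ", body ran " + count
                    + " times (N^2 = " + (long) N * N + "), final i = " + i);
        }
    }
}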

What will be the time complexity of the below code? [duplicate]

I have the below code to return the indices of 2 numbers that add up to the given target. What is the time complexity of the code? Explanations are appreciated. Thanks.
int[] result = new int[2];
int i = 0, k = i + 1;
while (i < nums.length)
{
    if (nums[i] + nums[k] == target && i != k)
    {
        result[0] = i;
        result[1] = k;
        break;
    }
    else if (k < nums.length - 1)
        k++;
    else
    {
        i++;
        k = i;
    }
}
return result;
Premise
It is hard to analyze this without any additional information on how nums and target correspond to each other.
Since you do not provide any, I have to assume that all inputs are possible, in which case the worst case is that no pair built from nums sums to target at all.
A simple example of what I am referring to would be target = 2 with nums = [4, 5, 10, 3, 9]. You cannot build target by adding up any pair from nums.
Iterations
So you would never hit your break statement and would go through the full execution of the algorithm.
That is, k runs over its full range up to nums.length - 1, then i is incremented once and k runs its full range again, starting from the new i, and so on until i reaches the end as well.
In total, you will thus have the following amount of iterations (where n denotes nums.length):
n, n - 1, n - 2, n - 3, ..., 2, 1
Summed up, those are exactly
(n^2 + n) / 2
iterations.
Complexity
Since all you do inside each iteration takes constant time O(1), the Big-O complexity of your algorithm is bounded by
(n^2 + n) / 2 <= n^2 + n <= n^2 + n^2 = 2n^2
which by definition is in O(n^2).
Alternative code
Your code is very hard to read and a rather unusual way to express what you are doing here (namely forming all pairs, leaving out duplicates). I would suggest rewriting it like this:
for (int i = 0; i < nums.length; i++) {
    for (int j = i; j < nums.length; j++) {
        int first = nums[i];
        int second = nums[j];
        if (first + second == target) {
            return new int[] {i, j};
        }
    }
}
return null;
Also, do yourself a favor and do not return result filled with 0 in case you did not find any hit. Either return null as shown or use Optional, for example:
return Optional.of(...);
...
return Optional.empty();
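Putting both suggestions together, one possible complete sketch looks like this (the method name twoSum and the demo values are my own choices, not from the question):

import java.util.Optional;

public class TwoSum {
    static Optional<int[]> twoSum(int[] nums, int target) {
        for (int i = 0; i < nums.length; i++) {
            for (int j = i; j < nums.length; j++) {
                if (nums[i] + nums[j] == target) {
                    return Optional.of(new int[] {i, j}); // first matching pair
                }
            }
        }
        return Optional.empty(); // no pair found; the caller must handle this
    }

    public static void main(String[] args) {
        twoSum(new int[] {4, 5, 10, 3, 9}, 14)
                .ifPresentOrElse(
                        p -> System.out.println("indices: " + p[0] + ", " + p[1]),
                        () -> System.out.println("no pair sums to target"));
    }
}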
Time Complexity
The worst-case time complexity of the given code would be O(N^2), where N is nums.length.
This is because you check each pair in the array until you find two numbers that add up to the target; in the worst case, you end up checking all the pairs. The number of pairs the algorithm checks is about N(N+1)/2 (see the analysis below), and since lower-order terms and constant factors are neglected, the upper bound is O(N^2).
Flow of Code
In the code sample, here's how the flow occurs -
i will start from 0 and k will be i + 1, which is 1. Now suppose you never find a pair that adds up to the target.
In that case, for each i (from i = 0 to i = nums.length - 1), only the else if (k < nums.length - 1) branch will run.
Once k reaches nums.length - 1, i is incremented and k is reset to the new i.
This continues until i becomes nums.length - 1. In that iteration the last pair is checked, and only then does the loop end. So the worst-case time complexity comes out to be O(N^2).
Time Complexity Analysis -
So you are checking N pairs in the first pass, N - 1 pairs in the next, N - 2 in the next, and so on. The total number of checked pairs is
N + (N - 1) + (N - 2) + (N - 3) + ... + 2 + 1
= N * (N + 1) / 2
= (N^2 + N) / 2
which has an upper bound of O(N^2); that is your worst-case Big-O time complexity.
The average-case time complexity is also O(N^2).
The best-case time complexity is O(1), where only the first pair needs to be checked.
Hope this helps!

Time complexity of leetcode 561

The question is:
Given an array of 2n integers, your task is to group these integers into n pairs of integers, say (a1, b1), (a2, b2), ..., (an, bn), which makes the sum of min(ai, bi) for all i from 1 to n as large as possible.
The solution provided is:
public class Solution {
    public int arrayPairSum(int[] nums) {
        int[] arr = new int[20001];
        int lim = 10000;
        for (int num : nums)
            arr[num + lim]++;
        int d = 0, sum = 0;
        for (int i = -10000; i <= 10000; i++) {
            sum += (arr[i + lim] + 1 - d) / 2 * i;
            d = (2 + arr[i + lim] - d) % 2;
        }
        return sum;
    }
}
I think it is unfair to say that the time complexity is O(n). Although in O(n + K), K = 20001 is a constant that seemingly could be omitted, n here is also less than K. If so, why can't I say the time complexity is O(1)?
The asymptotic complexity is measured as a function of n, for ALL n. We are concerned with what happens when n gets large. Really, really large.
Maybe in practice n will always be tiny. Fine.
But when you give a complexity measure for an algorithm, you are by definition saying what happens as n grows. And grows and grows. And when it does, it will dwarf K.
So O(n) it is.
Clarification:
It is true that the problem specification says:
n is a positive integer, which is in the range of [1, 10000].
All the integers in the array will be in the range of [-10000, 10000].
But remember, that is just for this problem! The solution given hard-codes the value of K. The algorithm used here should indeed be written as O(n + K), as you noticed; that K is not a mere constant factor and arguably should not be dropped.
However, with asymptotic complexity (Big-O, Big-Theta, etc.), even for an arbitrary but finite K you can still find constants k and N such that for all n > N, kn exceeds the number of operations needed by this algorithm, which is the Big-O definition. This is why you will see a lot of people say O(n).
Hope that helps.
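To make the O(n + K) dependence visible, here is a sketch of my own that turns the hard-coded bound 10000 into an explicit parameter K (class and method names are my choices; it assumes every value in nums lies in [-K, K]):

public class ArrayPairSum {
    static int arrayPairSum(int[] nums, int K) {
        int[] count = new int[2 * K + 1]; // counting buckets for values -K..K
        for (int num : nums)              // O(n) pass over the input
            count[num + K]++;
        int d = 0, sum = 0;
        for (int v = -K; v <= K; v++) {   // O(K) pass over the buckets
            sum += (count[v + K] + 1 - d) / 2 * v;
            d = (2 + count[v + K] - d) % 2;
        }
        return sum;                       // total cost: O(n + K)
    }

    public static void main(String[] args) {
        // pairs (1, 2) and (3, 4) give min sums 1 + 3 = 4
        System.out.println(arrayPairSum(new int[] {1, 4, 3, 2}, 10_000));
    }
}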

Big O notation with recursion

I have a question about the Big O notation of this code:
int strange_sumA(int[] arr) {
    if (arr.length == 1) {
        return arr[0];
    } else {
        int newlen = arr.length / 2;
        int[] arrLeft = new int[newlen];
        int[] arrRight = new int[newlen];
        for (int i = 0; i < newlen; i++) {
            arrLeft[i] = arr[i];
        }
        for (int i = newlen; i < arr.length - 1; i++) {
            arrRight[i - newlen] = arr[i];
        }
        return strange_sumA(arrLeft) + strange_sumA(arrRight);
    }
}
From what I understand, the first for loop is O(n/2) and the second for loop is O(n/2), making the entire first call O(n). Then the next two recursive calls together are still O(n), since 2 * (n/2) = n, and the level after that as well, since 4 * (n/4) = n. So, will the entire Big O of this algorithm be O(n^2)? I think the code will run n times, but I am not sure.
When doing runtime analysis, it is important to think about what it is that you are measuring. For example, it looks like you are summing all of the values in the array. However, you are not doing it iteratively: you are doing it recursively, by splitting the array in half on every call. Since the array is halved every time, the depth of the recursion is logarithmic:
O(log n)
Now, if you also want to take into account each array operation such as
arrLeft[i] = arr[i];
note that a call on an array of length m performs two O(m/2) copy loops, i.e. O(m) array operations. At any given depth of the recursion, the subarrays across all calls at that depth add up to the original n elements, so each level of the recursion performs
O(n)
array operations in total. For the overall count, we multiply the O(n) array operations per level by the O(log n) levels:
O(n * log n)
You can also prove this via the master theorem.
Let T(n) be the time complexity of this algorithm on an array of length n. We get:
T(n) = 2T(n/2) + O(n)
T(n) = 2(2T(n/4) + O(n/2)) + O(n)
...
T(n) = 2^(log_2 n) T(1) + log_2(n) * O(n) = O(n) + O(n) * log_2(n) = O(n log n)
Start with T(n) = 2T(n/2) + n
T(n/2) = 2T(n/4) + n/2
Substitute this into the original
T(n) = 2[2T(n/4) + n/2] + n
T(n) = 2^2T(n/4) + 2n
Do it again
T(n/4) = 2T(n/8) + n/4
Substitute again
T(n) = 2^2[2T(n/8) + n/4] + 2n
T(n) = 2^3T(n/8) + 3n
You can repeat as much as you want but you will eventually see the pattern
T(n) = 2^k T(n/2^k) + kn
We want to get to T(1), so set n/2^k = 1 and solve for k:
n/2^k = 1, so
k = lg n
Now replace k with lg n in the general form:
T(n) = 2^(lg n) T(1) + n lg n
T(n) = n + n lg n
n lg n grows faster than n, so it is the dominant term. Thus O(n lg n).
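As a sanity check of T(n) = n + n lg n, here is a sketch of my own that counts the element copies for powers of two (note it deliberately fixes the off-by-one in the question's second loop, copying up to arr.length instead of arr.length - 1, so every level copies exactly n elements):

public class StrangeSumCount {
    static long copies = 0; // counts individual element copies

    static int strangeSum(int[] arr) {
        if (arr.length == 1) return arr[0];
        int newlen = arr.length / 2;
        int[] left = new int[newlen];
        int[] right = new int[arr.length - newlen];
        for (int i = 0; i < newlen; i++) { left[i] = arr[i]; copies++; }
        for (int i = newlen; i < arr.length; i++) { right[i - newlen] = arr[i]; copies++; }
        return strangeSum(left) + strangeSum(right);
    }

    public static void main(String[] args) {
        for (int exp = 10; exp <= 14; exp++) {
            int n = 1 << exp; // n = 2^exp, so lg n = exp
            copies = 0;
            strangeSum(new int[n]);
            // each of the lg n levels copies all n elements once
            System.out.println("n = " + n + ", copies = " + copies
                    + ", n * lg n = " + (long) n * exp);
        }
    }
}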
