I have a question about the Big O notation of this code:
int strange_sumA(int[] arr) {
    if (arr.length == 1) {
        return arr[0];
    } else {
        int newlen = arr.length / 2;
        int[] arrLeft = new int[newlen];
        int[] arrRight = new int[arr.length - newlen];
        for (int i = 0; i < newlen; i++) {
            arrLeft[i] = arr[i];
        }
        for (int i = newlen; i < arr.length; i++) {
            arrRight[i - newlen] = arr[i];
        }
        return strange_sumA(arrLeft) + strange_sumA(arrRight);
    }
}
From what I understand, the first for loop is O(n/2) and the second for loop is O(n/2), making the entire first call O(n). After the first level of recursion, the big O of the next two recursive calls together is still O(n), since 2 * (n/2) = n, and the level after that is too, since 4 * (n/4) = n. So will the entire big O of this algorithm be O(n^2)? I think this O(n) level repeats n times, but I am not sure.
When doing runtime analysis, it is important to think about what it is that you are measuring. For example, it looks like you are summing all of the elements in the array. However, you are not doing it iteratively - you are doing it recursively. So if your most "expensive" operation (the step that takes the most time) is a function call, then you may choose to express the running time in terms of the recursion.
Since you are dividing your array in half every time, the depth of the recursion is logarithmic, i.e. the number of levels is
O(log n)
Now, if you also want to take each array operation into account:
arrLeft[i] = arr[i];
You do this O(n/2) operation twice per call, so a call on an array of length n does O(n) array operations. One level down, the calls are half as long but twice as numerous, so every level of the recursion still does O(n) array operations in total.
O(n)
For the overall number of array operations, we multiply the array operations per level by the number of levels.
O(n * log n)
You can also prove this via the master theorem (or by unrolling the recurrence directly).
Let T(n) be the time complexity of this algorithm on an array of length n. We get:
T(n) = 2T(n/2) + n
T(n) = 2(2T(n/4) + n/2) + n = 4T(n/4) + 2n
...
T(n) = 2^(log2 n) * T(1) + n log2 n = O(n) + O(n log n) = O(n log n)
Start with T(n) = 2T(n/2) + n
T(n/2) = 2T(n/4) + n/2
Substitute this into the original
T(n) = 2[2T(n/4) + n/2] + n
T(n) = 2^2T(n/4) + 2n
Do it again
T(n/4) = 2T(n/8) + n/4
Substitute again
T(n) = 2^2[2T(n/8) + n/4] + 2n
T(n) = 2^3T(n/8) + 3n
You can repeat as much as you want but you will eventually see the pattern
T(n) = 2^kT(n/2^k) + kn
We want to get to T(1), so set n/2^k = 1 and solve for k:
n/2^k = 1, so n = 2^k, and therefore k = lg n
Now replace k with lg n in the general form:
T(n) = 2^(lg n) * T(1) + n lg n
T(n) = n + n lg n
n lg n grows faster than n, so it is the dominant term. Thus O(n lg n).
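As a sanity check, you can also instrument the code and count the copies directly. Below is a hypothetical instrumented variant (countedSum and the copies counter are my additions, not part of the original code). For n a power of two, every level of the recursion copies exactly n elements and there are lg n copying levels, so the counter comes out to exactly n * lg n:
public class CopyCount {
    static long copies = 0;

    static int countedSum(int[] arr) {
        if (arr.length == 1) return arr[0];
        int newlen = arr.length / 2;
        int[] left = new int[newlen];
        int[] right = new int[arr.length - newlen];
        for (int i = 0; i < newlen; i++) { left[i] = arr[i]; copies++; }
        for (int i = newlen; i < arr.length; i++) { right[i - newlen] = arr[i]; copies++; }
        return countedSum(left) + countedSum(right);
    }

    public static void main(String[] args) {
        for (int n = 1024; n <= 65536; n *= 4) {
            copies = 0;
            countedSum(new int[n]);
            int lg = 31 - Integer.numberOfLeadingZeros(n); // floor(lg n)
            System.out.println(n + " -> " + copies + " copies, n * lg n = " + (long) n * lg);
        }
    }
}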
I'm just going over some basic sorting algorithms. I implemented the insertion sort below.
public static int[] insertionSort(int[] arr) {
    int I = 0; // counts how many times the inner loop body runs
    for (int i = 0; i < arr.length; i++) {
        for (int j = 0; j < i; j++) {
            if (arr[i] < arr[j]) {
                int temp = arr[i];
                arr[i] = arr[j];
                arr[j] = temp;
            }
            I++;
        }
    }
    System.out.println(I);
    return arr;
}
I prints out 4950 for an array of size 100 filled with 100 randomly generated integers.
I know the algorithm is considered O(n^2), but what would be the arithmetically exact count? If it were literally n^2, I assume it would print out 10,000 and not 4950.
Big-Oh notation tells us how much work an algorithm must do as the input size grows. A single test input doesn't give enough information to verify the theoretical Big-Oh. You should run the algorithm on arrays of different sizes, from 100 up to a million, and graph the output, with the size of the array as the x-variable and the number of steps your code outputs as the y-variable. When you do this, you will see that the graph is a parabola.
You can use algebra to get a function of the form y = a*x^2 + b*x + c that fits this data as closely as possible. But with Big-Oh notation we don't care about the smaller terms, because they become insignificant compared to the x^2 term. For example, when x = 10^3, then x^2 = 10^6, which is much larger than b*x + c. If x = 10^6, then x^2 = 10^12, which again is so much larger than b*x + c that we can ignore the smaller terms.
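For example, a small driver along these lines makes the quadratic growth visible (a sketch: countedSort is your method, refactored by me to return its counter instead of printing it):
import java.util.Random;

public class GrowthCheck {
    static long countedSort(int[] arr) {
        long count = 0;
        for (int i = 0; i < arr.length; i++) {
            for (int j = 0; j < i; j++) {
                if (arr[i] < arr[j]) {
                    int temp = arr[i];
                    arr[i] = arr[j];
                    arr[j] = temp;
                }
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        Random rnd = new Random();
        // n(n-1)/2 predicts 4950, 499500, 49995000, ... : a 100x jump per 10x in n.
        for (int n = 100; n <= 100000; n *= 10) {
            int[] arr = new int[n];
            for (int i = 0; i < n; i++) arr[i] = rnd.nextInt();
            System.out.println(n + " -> " + countedSort(arr));
        }
    }
}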
You can make the following observations: On the ith iteration of the outer loop, the inner loop runs i times, for i from 0 to n-1 where n is the length of the array.
In total over the entire algorithm the inner loop runs T(n) times where
T(n) = 0 + 1 + 2 + ... + (n-1)
This is an arithmetic series, and it's easy to prove the sum is equal to a second-degree polynomial in n:
T(n) = n*(n-1)/2 = 0.5*n^2 - 0.5*n
For n = 100, the formula predicts the inner loop will run T(100) = 100*99/2 = 4950 times which matches what you calculated.
I have the below code to return the indices of two numbers that add up to the given target. What is the time complexity of the code? Explanations are appreciated. Thanks.
int[] result = new int[2];
int i = 0, k = i + 1;
while (i < nums.length) {
    if (nums[i] + nums[k] == target && i != k) {
        result[0] = i;
        result[1] = k;
        break;
    } else if (k < nums.length - 1) {
        k++;
    } else {
        i++;
        k = i;
    }
}
return result;
Premise
It is hard to analyze this without additional information about how nums and target relate to each other. Since you do not provide any, I have to assume that all inputs are possible, in which case the worst case is that no pair formed from nums adds up to target at all.
A simple example of what I am referring to would be target = 2 with nums = [4, 5, 10, 3, 9]. You cannot build target by adding up any pair from nums.
Iterations
In that case you never hit your break statement, and the algorithm runs through its full execution.
That is, k runs through its full range, then i is incremented once and k runs through its full range again, starting from i. And so on, until i reaches the end as well.
In total, you will thus have the following amount of iterations (where n denotes nums.length):
n, n - 1, n - 2, n - 3, ..., 2, 1
Summed up, those are exactly
(n^2 + n) / 2
iterations.
Complexity
Since all you do inside the iterations is in constant time O(1), the Big-O complexity of your algorithm is given by
(n^2 + n) / 2 <= n^2 + n <= n^2 + n^2 <= 2n^2
Which by definition is in O(n^2).
Alternative code
Your code is quite hard to read and a rather unusual way of expressing what you are doing here (namely forming all pairs, leaving out duplicates). I would suggest rewriting it like this:
for (int i = 0; i < nums.length; i++) {
    for (int j = i + 1; j < nums.length; j++) {
        int first = nums[i];
        int second = nums[j];
        if (first + second == target) {
            return new int[] { i, j };
        }
    }
}
return null;
Also, do yourself a favor and do not return a result filled with 0s in case you did not find any hit. Either return null as shown, or use Optional, for example:
return Optional.of(...);
...
return Optional.empty();
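A complete sketch of that Optional variant might look like this (twoSum is my name for it, not from the question):
import java.util.Optional;

static Optional<int[]> twoSum(int[] nums, int target) {
    for (int i = 0; i < nums.length; i++) {
        for (int j = i + 1; j < nums.length; j++) {
            if (nums[i] + nums[j] == target) {
                return Optional.of(new int[] { i, j });
            }
        }
    }
    return Optional.empty(); // explicitly signals "no pair found"
}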
Time Complexity
The worst-case time complexity of the given code is O(N^2), where N is nums.length.
This is because you are checking each distinct pair in the array until you find two numbers that add up to the target. In the worst case, you end up checking all the pairs. An array of length N contains about N^2 / 2 distinct pairs, and since constant factors and lower-order terms are neglected, the upper bound is O(N^2).
Flow of Code
In the code sample, here's how the flow occurs:
i starts from 0 and k from i + 1, which is 1. Suppose you never find a pair that adds up to the target.
In that case, for each value of i (from i = 0 to i = nums.length - 1), only the else if (k < nums.length - 1) branch runs, incrementing k.
Once k reaches nums.length - 1, i is incremented and k is reset to i (the i != k check skips that first pair, so the scan effectively restarts from i + 1).
This continues until i becomes nums.length - 1, at which point the last pair is checked and the loop ends. So the worst-case time complexity comes out to O(N^2).
Time Complexity Analysis
You check N pairs in the first scan, N - 1 pairs in the next one, N - 2 in the next, and so on. So the total number of checked pairs is:
N + (N - 1) + (N - 2) + (N - 3) + ... + 2 + 1
= N * (N + 1) / 2
= (N^2 + N) / 2
This has an upper bound of O(N^2), which is your worst-case Big-O time complexity.
The average-case time complexity is also O(N^2).
The best-case time complexity is O(1), where only the first pair needs to be checked.
Hope this helps!
The question is:
Given an array of 2n integers, your task is to group these integers into n pairs of integers, say (a1, b1), (a2, b2), ..., (an, bn), which makes the sum of min(ai, bi) for all i from 1 to n as large as possible.
The solution provided is:
public class Solution {
    public int arrayPairSum(int[] nums) {
        // Counting sort: the problem guarantees values in [-10000, 10000].
        int[] arr = new int[20001];
        int lim = 10000;
        for (int num : nums)
            arr[num + lim]++;
        int d = 0, sum = 0;
        // Walk the values in ascending order: an element that will be the
        // smaller half of a pair contributes its value i to the sum, and
        // d tracks whether an element is left over, waiting to be paired.
        for (int i = -10000; i <= 10000; i++) {
            sum += (arr[i + lim] + 1 - d) / 2 * i;
            d = (2 + arr[i + lim] - d) % 2;
        }
        return sum;
    }
}
I think it is unfair to say that the time complexity is O(n). Granted, in O(n + K), K = 20001 is a constant and it seems it could be omitted, but n here is also less than K. If so, why can't I say the time complexity is O(1)?
The asymptotic complexity is measured as a function of n, for ALL n. We are concerned with what happens when n gets large. Really, really large.
Maybe in practice n will always be tiny. Fine.
But when you give a complexity measure for an algorithm, you are by definition saying what happens as n grows. And grows and grows. And when it does, it will dwarf K.
So O(n) it is.
Clarification:
It is true that the problem specification says:
n is a positive integer, which is in the range of [1, 10000].
All the integers in the array will be in the range of [-10000, 10000].
But remember, that is just for this problem! The solution given hard-codes the value of K. The algorithm used here should really be written as O(n + K), as you noticed. That K is not a constant factor and probably should not be dropped.
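To make that O(n + K) shape explicit, here is a hypothetical refactor of the given solution in which the range bound is a parameter instead of a hard-coded 10000 (so K = 2 * lim + 1 is no longer fixed):
static int arrayPairSum(int[] nums, int lim) {
    int[] arr = new int[2 * lim + 1]; // K = 2 * lim + 1 buckets
    for (int num : nums)              // O(n): count the occurrences
        arr[num + lim]++;
    int d = 0, sum = 0;
    for (int i = -lim; i <= lim; i++) { // O(K): sweep every bucket
        sum += (arr[i + lim] + 1 - d) / 2 * i;
        d = (2 + arr[i + lim] - d) % 2;
    }
    return sum;
}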
However with asymptotic complexity (Big-O, Big-Theta, etc.) even with an arbitrary but finite K, you can still find constants k and N such that for all n>N, kn > the number of operations needed in this algorithm, which is the Big-O definition. This is why you will see a lot of people say O(n).
Hope that helps.
Find the time complexity and big O of the following code.
I am confused about what the time complexity of the if/else statements and of the bar(a, n) and foo(a, n) functions will be. Some friends say its time complexity is O(n^2) and some say it will be O(n). I also think the time complexity of the following code is O(n), because the return statement in each for loop makes both foo and bar O(1), and the main for loop runs n times, so the time complexity is O(n).
// Sample Code
public static void main(String args[]) {
    Scanner in = new Scanner(System.in);
    int n = in.nextInt();
    int sum = 0;
    for (int i = 0; i < n; ++i) {
        if (i % 2 != 0) {
            sum += foo(i, n) + foo(1 + i, n);
        } else {
            sum += foo(i, n) + bar(i, n);
        }
    }
}
static int bar(int a, int n) {
    for (int i = 0; i < n; ++i) {
        for (int j = 0; j < i; ++j) {
            return a * (i + j);
        }
    }
    return 0; // fallback so the method compiles if the loops never return
}
static int foo(int a, int n) {
    for (int i = 0; i < n; ++i) {
        return a * i;
    }
    return 0; // fallback so the method compiles if the loop never runs
}
Since you expressed your algorithm using an iterative paradigm, all you have to do is count the number of operations. Ignoring the early returns for a moment:
The complexity of bar is O(n^2), because you have n * (n - 1) / 2 operations of the type "add i and j, then multiply by a".
The complexity of foo is O(n) (n operations of the type "multiply by a").
In your main loop, half of the iterations call foo twice, which yields (n / 2) * 2n = n^2 operations, which is O(n^2).
The other half of the iterations call foo then bar, which yields (n / 2) * [n + n * (n - 1) / 2] = 0.25n^3 + 0.25n^2, which is O(n^3).
Therefore, the overall complexity of your loop is O(n^3). This is generally referred to as the time complexity of your algorithm, even though we counted the number of operations; the intuition behind it is to fix a set of unit operations and acknowledge that each of them takes a constant amount of time (in our case, we chose the arithmetic operations + and *).
We can also analyze your algorithm from the perspective of memory consumption. In your case, the memory consumed is constant whatever the value of n is. This is why we would say that the space complexity is O(1) with respect to n.
Edit: I thought the "premature" return in foo and bar was a typo. If you don't actually loop, then the complexity of foo and bar is O(1), and your main loop is O(n).
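For reference, here is what the analysis above assumes foo and bar were meant to do: run their loops to completion and accumulate instead of returning early (hypothetical versions of mine, not from the question):
static int bar(int a, int n) {
    int s = 0;
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < i; ++j)
            s += a * (i + j); // body runs 0 + 1 + ... + (n - 1) = n(n - 1)/2 times: O(n^2)
    return s;
}

static int foo(int a, int n) {
    int s = 0;
    for (int i = 0; i < n; ++i)
        s += a * i; // body runs n times: O(n)
    return s;
}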
foo() -> O(n), because i goes from 0 to n.
bar() -> O(n^2), because the inner loop body runs 1 + 2 + 3 + ... + (n - 1) times in total, which is O(n^2).
For main(), you are going from i = 0 to n, and you are calling foo(n) twice on every odd value of i. That is O(n) work per iteration, which is not the worst case, so we do not care. On every even value of i you call foo(n) and bar(n).
Considering the worst case, bar(n) is called n/2 times (every other i from 0 to n), which gives (n/2) * O(n^2) = O(n^3).
I'm new to computer science, so I've written this bubble sort code and I'm trying to calculate its time complexity and performance. The code for the sort is here:
for (int i = 0; i < size; i++)
{
    Node current = head.next;
    while (current.next != tail)
    {
        if (current.element.depth < current.next.element.depth)
        {
            swap(current, current.next);
        }
        else
        {
            current = current.next;
        }
    }
}
The swap method code is here:
void swap(Node nodeA, Node nodeB)
{
    Node before = nodeA.prev;
    Node after = nodeB.next;
    before.next = nodeB;
    after.prev = nodeA;
    nodeB.next = nodeA;
    nodeB.prev = before;
    nodeA.prev = nodeB;
    nodeA.next = after;
}
Now, I know bubble sort's time complexity is O(n^2) in the worst case, but I'm trying to work out the exact operation-count function f(n) for my loops here. I have a basic understanding of time complexity; I know a standard for loop is f(n) = 2n + 2, and we consider the worst-case scenario for time complexity. So far, this is my thought process for finding the f(n) of my code:
int i = 0; This will be executed only once.
i < size; This will be executed N+1 times.
i ++; This will be executed N times.
current = head.next; This will be executed N times.
current.next != tail; This will be executed N times.
And since the while loop is within the for loop, its body runs n * n times; there are 4 operations inside it, so that's 4n^2.
In the worst-case scenario, I'll have to call the swap method every time, and since the swap method is just 8 executions (I think, it's only 8, right?), the worst case for swap(current, current.next) is 8n?
If we add them up:
f(n) = 1 + (n + 1) + n + n + n + 4n^2 + 8n
f(n) = 4n^2 + 12n + 2
f(n) ~ O(n^2)
Is my time complexity f(n) correct?
If not, can you point me to the right answer, please? Also, do you have any suggestions to improve my code's performance?
As you have a while loop inside a for loop, yes, you have O(n^2) complexity, which is considered bad.
The rule of thumb here is as follows (for N elements input, from better to worse):
No loops, just some executions regardless of the input size = O(1)
Loop(N/M) (you divide the input on each iteration) = O(log N)
Just one loop over N elements = O(N)
Loop2(N) inside Loop1(N) = O(N^2)
See this answer for a better explanation: What does O(log n) mean exactly?
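Tiny illustrations of those rules of thumb (hypothetical examples of mine, one per complexity class):
int constantTime(int[] a) {
    return a[0]; // no loop over the input: O(1)
}

int halving(int n) {
    int steps = 0;
    while (n > 1) { n /= 2; steps++; } // input halves each iteration: O(log N)
    return steps;
}

long oneLoop(int[] a) {
    long sum = 0;
    for (int x : a) sum += x; // one pass over N elements: O(N)
    return sum;
}

long nestedLoops(int[] a) {
    long pairs = 0;
    for (int i = 0; i < a.length; i++)
        for (int j = 0; j < a.length; j++)
            pairs++; // a loop inside a loop: O(N^2)
    return pairs;
}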