Complexity of finding all substrings of a string - java

Here is a solution for finding all substrings of a string.
for (int i = 0; i < str.length(); i++) {
    String subStr = "";
    for (int j = i; j < str.length(); j++) {
        subStr = subStr + str.charAt(j);
        System.out.println(subStr);
    }
}
All over the internet I read that the complexity of this code is O(n^2).
However, the + operation is an O(n) operation.
Thus, in my opinion, the complexity should be O(n^3).
If I am wrong, please correct my understanding.

Appending a single character is an O(1) operation (when done with a StringBuilder rather than repeated String concatenation with +). You get O(n^3) if you also take into account the time needed to print the output with println.
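To make the distinction concrete, here is a minimal sketch (the method names are mine, not from the question) contrasting the two ways of building the substrings: the first re-copies the growing substring on every +, while the second only appends one character at a time.

// Builds each substring with String concatenation: + allocates a new
// String and copies subStr on every append, so each append costs
// O(current substring length) and the loops alone become O(n^3).
static void printSubstringsWithConcat(String str) {
    for (int i = 0; i < str.length(); i++) {
        String subStr = "";
        for (int j = i; j < str.length(); j++) {
            subStr = subStr + str.charAt(j); // O(j - i) copy
            System.out.println(subStr);
        }
    }
}

// Builds each substring with a StringBuilder: append is amortized O(1),
// so only the printing itself is non-constant per iteration.
static void printSubstringsWithBuilder(String str) {
    for (int i = 0; i < str.length(); i++) {
        StringBuilder subStr = new StringBuilder();
        for (int j = i; j < str.length(); j++) {
            subStr.append(str.charAt(j)); // amortized O(1)
            System.out.println(subStr);
        }
    }
}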

Finding all substrings of a string is O(n^2) (by finding a substring I mean determining its begin and end indexes); it's easy to see because the total number of substrings is O(n^2).
But printing all of them out is O(n^3), simply because the total number of characters to be printed is O(n^3). In your code, println contributes the extra O(n) factor (the append step itself should have O(1) complexity if implemented properly, e.g. with a StringBuilder rather than String's + operator, which copies the whole string).
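To illustrate the O(n^2) part in isolation, here is a small sketch (the method name and the list of index pairs are my own choice) that only determines the begin and end indexes of every substring, doing constant work per pair:

import java.util.ArrayList;
import java.util.List;

// Enumerates every substring of str by its half-open index range
// [begin, end). There are n(n+1)/2 such ranges and each costs O(1)
// to record, so the whole enumeration is O(n^2).
static List<int[]> allSubstringRanges(String str) {
    List<int[]> ranges = new ArrayList<>();
    for (int begin = 0; begin < str.length(); begin++) {
        for (int end = begin + 1; end <= str.length(); end++) {
            ranges.add(new int[] { begin, end }); // O(1) per pair
        }
    }
    return ranges;
}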

Finding all substrings of a string the naive way is indeed O(n^2), but the code in the question probably doesn't do that. Here is a corrected version.
for (int i = 0; i < str.length(); ++i) {
    // Initialize with the max length of the substring
    StringBuilder prefix = new StringBuilder(str.length() - i);
    for (int j = i; j < str.length(); ++j) {
        prefix.append(str.charAt(j)); // this step is O(1)
        System.out.println(prefix);   // this step is supposed to be O(1)
    }
}
The number of inner-loop iterations per outer-loop pass is
Outer loop pass : Inner loop iterations
First time : n
Second time : n - 1
Third time : n - 2
...
(n - 1)-th time : 2
n-th time : 1
So the total number of iterations is the sum of the outer-loop iterations plus the sum of the inner-loop iterations:
n + (1 + 2 + 3 + ... + (n - 2) + (n - 1) + n) = n + n(n + 1)/2 = O(n^2)
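If you want to double-check that count empirically, a tiny counting sketch like the following (not part of the original answer; the method name is mine) compares the actual number of loop executions against the closed form:

// Runs the same loop structure as the substring code above, but only
// counts iterations, then prints the closed form for comparison.
static void checkIterationCount(int n) {
    long outer = 0, inner = 0;
    for (int i = 0; i < n; i++) {
        outer++;
        for (int j = i; j < n; j++) {
            inner++; // stands in for the append + println
        }
    }
    System.out.println("outer = " + outer
            + ", inner = " + inner
            + ", n(n+1)/2 = " + (long) n * (n + 1) / 2);
}

For n = 10 it prints outer = 10, inner = 55, n(n+1)/2 = 55, so the total n + n(n + 1)/2 = 65 grows as O(n^2).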

Related

What's the big O Notation for the two for loops

I am having a hard time analyzing this piece of code:
public int example(int[] arr) {
    int n = arr.length, total = 0;
    for (int i = 0; i < n; i++)
        for (int j = 0; j <= i; j++)
            total += arr[j];
    return total;
}
These two loops don't have curly braces, and I could not work out their time complexity.
I need help counting the number of operations for both of them.
It is O(n^2), since there are two nested loops. The fact that there aren't curly braces does not matter.
The outer loop is executed O(n) times, and the inner loop is executed first once, then twice, and so on up to n times. This is an arithmetic sequence from 1 to n with common difference 1. Its sum is therefore
(1 + n) * (n) / 2
= (n^2 + n) / 2
= O(n^2)
That code could be rewritten with curly braces as follows:
for (int i = 0; i < n; i++) {
    for (int j = 0; j <= i; j++) {
        total += arr[j];
    }
}
The first loop is executed O(n) times.
The nested loop is executed up to O(n) times for each outer iteration.
So overall it is O(n^2).
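To see that sum emerge directly from the code, you can add a step counter to the method (a sketch of mine, not part of the original answer):

// example() from the question, instrumented with a counter on the
// innermost statement; the counter ends up at 1 + 2 + ... + n = n(n+1)/2.
public static int exampleCounted(int[] arr) {
    int n = arr.length, total = 0, steps = 0;
    for (int i = 0; i < n; i++) {
        for (int j = 0; j <= i; j++) {
            total += arr[j];
            steps++;
        }
    }
    System.out.println("inner steps = " + steps + " for n = " + n);
    return total;
}

For an array of length 6 it reports inner steps = 21, which matches 6 * 7 / 2.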

What will be the time complexity of the below code? [duplicate]

I have the code below to return the indexes of 2 numbers that add up to the given target. What is the time complexity of the code? Explanations are appreciated. Thanks.
int[] result = new int[2];
int i = 0, k = i + 1;
while (i < nums.length) {
    if (nums[i] + nums[k] == target && i != k) {
        result[0] = i;
        result[1] = k;
        break;
    } else if (k < nums.length - 1) {
        k++;
    } else {
        i++;
        k = i;
    }
}
return result;
Premise
It is hard to analyze this without any additional information about how nums and target correspond to each other.
Since you do not provide any such information, I have to assume that all inputs are possible. In that case the worst case is that no pair that can be built from nums sums to target at all.
A simple example of what I am referring to would be target = 2 with nums = [4, 5, 10, 3, 9]. You cannot build target by adding up any pair from nums.
Iterations
So you would never hit your break statement and would go through the full execution of the algorithm.
That is, k runs through its full range up to nums.length - 1, then i is incremented and k starts over from i, and so on until i reaches the end as well.
In total, you will thus have the following numbers of iterations (where n denotes nums.length):
n, n - 1, n - 2, n - 3, ..., 2, 1
Summed up, those are exactly
(n^2 + n) / 2
iterations.
Complexity
Since all you do inside the iterations is in constant time O(1), the Big-O complexity of your algorithm is given by
(n^2 + n) / 2 <= n^2 + n <= n^2 + n^2 <= 2n^2
Which by definition is in O(n^2).
Alternative code
Your code is very hard to read and a rather unusual way to express what you are doing here (namely forming all pairs, leaving out duplicates). I would suggest rewriting it like this:
for (int i = 0; i < nums.length; i++) {
    for (int j = i + 1; j < nums.length; j++) { // start at i + 1 so an element is not paired with itself
        int first = nums[i];
        int second = nums[j];
        if (first + second == target) {
            return new int[] { i, j };
        }
    }
}
return null;
Also, do yourself a favor and do not return a result filled with 0 when you did not find any hit. Either return null as shown, or use Optional, for example:
return Optional.of(...);
...
return Optional.empty();
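For completeness, a full Optional-based version of the search might look like the following sketch (the method name findPair is mine, and it pairs each element only with later ones, mirroring the i != k check in your code):

import java.util.Optional;

// Returns the indexes of the first pair that sums to target, or an
// empty Optional if no such pair exists, instead of a zero-filled array.
static Optional<int[]> findPair(int[] nums, int target) {
    for (int i = 0; i < nums.length; i++) {
        for (int j = i + 1; j < nums.length; j++) {
            if (nums[i] + nums[j] == target) {
                return Optional.of(new int[] { i, j });
            }
        }
    }
    return Optional.empty();
}

A caller can then write findPair(nums, target).ifPresent(pair -> ...) and never has to guess whether {0, 0} means "found at indexes 0 and 0" or "not found".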
Time Complexity
The worst-case time complexity of the given code would be O(N^2), where N is nums.length.
This is because you are checking each distinct pair in the array until you find two numbers that add up to the target. In the worst case, you end up checking all the pairs. The number of pairs in an array of length N is on the order of N^2; your algorithm checks roughly N * (N - 1) of them, which comes out to N^2 - N. The upper bound for this is O(N^2), since lower-order terms are neglected.
Flow of Code
In the code sample, here's how the flow goes -
i will start from 0 and k will be i + 1, which is 1. Now suppose that you never find a pair that adds up to the target.
In that case, for each value of i (from i = 0 to i = nums.length - 1), only the else if (k < nums.length - 1) branch will run.
Once k reaches nums.length - 1, i is incremented and k starts again from i (the i != k check then skips the pair where both indexes are equal).
This continues until i becomes nums.length - 1. In that iteration the last pair is checked, and only then does the loop end. So the worst-case time complexity comes out to be O(N^2).
Time Complexity Analysis -
So you are checking N pairs in the first pass, N - 1 pairs in the next one, N - 2 in the next, and so on... So the total number of checked pairs will be -
N + ( N-1 ) + ( N-2 ) + ( N-3 ) + ... + 2 + 1
= N * ( N + 1 ) / 2
= ( N^2 + N ) / 2
The above has an upper bound of O(N^2), which is your worst-case Big-O time complexity.
The average-case time complexity is also O(N^2).
The best-case time complexity is O(1), where only the first pair needs to be checked.
Hope this helps!

What's the Big O complexity of an ever-decreasing range in a for loop?

I've got 2 nested for loops. I calculate a new value for nrResults before every run of the inner for loop (which will then loop nrResults - 2 times).
The time complexity of that inner loop should be of order O(n), since nrResults depends on the value of n.
But nrResults is decreased on every pass of the outer for loop (i.e. firstNext * j grows with every iteration).
Is the time complexity of the inner for loop still O(n) even though nrResults keeps decreasing throughout execution?
for (int j = 1; j < B; j++) // General case
{
    nrResults = n - (firstNext * j);
    result = codedInput[0] + results[minI = minIndex(results, 0 + firstNext, nrResults + firstNext)];
    for (int i = 1; i < nrResults; i++)
    {
        if ((i + firstNext) > minI)
            results[i] = codedInput[i] + results[minI = minIndex(results, i + firstNext, nrResults + firstNext)];
        else
            results[i] = codedInput[i] + results[minI];
        if (results[i] < result)
            result = results[i];
    }
}
In this case you have an outer loop i := 0..n and an inner loop j := 0..i-2. Is that so?
The complexity of the inner loop is O((n - 1) / 2) on average, and the complexity of the outer for is O(n).
All you have to do is multiply them: O(n * (n / 2)), i.e. O(n²/2), which is O(n²).
If the number of inner iterations is different, you have to recalculate accordingly.
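If it helps to see this numerically, here is a small counting sketch (n, B and firstNext are placeholder parameters standing in for your variables, and the loop body is replaced by a counter) showing that the shrinking inner bound halves the constant but not the order of growth:

// Counts the total number of inner iterations when the inner bound
// nrResults shrinks by firstNext on every pass of the outer loop.
static long countShrinkingLoop(int n, int B, int firstNext) {
    long total = 0;
    for (int j = 1; j < B; j++) {
        int nrResults = n - (firstNext * j);
        for (int i = 1; i < nrResults; i++) {
            total++; // stands in for the constant work per iteration
        }
    }
    return total;
}

With firstNext = 1 and B = n, countShrinkingLoop(1000, 1000, 1) returns 498501, i.e. roughly n²/2, which matches the O(n * (n / 2)) = O(n²) estimate above.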

Algorithmic complexity of naive code for processing all consecutive subsequences of a list: n^2 or n^3?

I'm studying for a test and found this question:
I can't really determine the complexity. I figured it's either O(n^2) or O(n^3), and I'm leaning towards O(n^3).
Can someone tell me what it is and why?
My reasoning for O(n^2) is that in the j loop, j starts at i, which gives a triangle shape, and then the k loop goes from i + 1 to j, which I think is the other half of the triangle.
public static int what(int[] arr)
{
    int m = arr[0];
    for (int i = 0; i < arr.length; i++)
    {
        for (int j = i; j < arr.length; j++)
        {
            int s = arr[i];
            for (int k = i + 1; k <= j; k++)
                s += arr[k];
            if (s > m)
                m = s;
        }
    }
    return m;
}
Also, can you tell me what it does?
I figured it returns the sum of the positive integers or the biggest integer in the array.
But for arrays like {99, -3, 0, 1} it returns 99, which I think is because it's buggy. If not, then I have no idea what it does:
{99, 1} => returns 100
{-1, -2, -3} => return -1
{-1, 5, -2} => returns 5
{99, -3, 0, 1} => returns 99 ???
You can proceed methodically, using sigma notation, to obtain the order of growth.
You have 3 for statements. For large n it is quite obvious that this is O(n^3): the i and j loops each contribute O(n), and the k loop is a little shorter, but still O(n).
The algorithm returns the biggest sum of consecutive terms. That's why it returns 99 for the last example: even though you have 0 and 1, you also have -3, which would drop that sum to at most 97.
PS: The triangle shape means 1 + 2 + ... + n = n(n + 1)/2 = O(n^2).
Code:
for (int i = 0; i < arr.length; i++) // Loop A
{
    for (int j = i; j < arr.length; j++) // Loop B
    {
        for (int k = i + 1; k <= j; k++) // Loop C
        {
            // ..
        }
    }
}
Asymptotic analysis in Big-O:
Loop A: time = 1 + 1 + 1 + ... + 1 (n times) = n
Loops B+C: time = 1 + 2 + 3 + ... + m = m(m + 1)/2
Time = SUM { m(m + 1)/2 | m = n, n - 1, ..., 1 }
Time < n * (n(n + 1)/2) = 1/2 n^2 * (n + 1) = 1/2 n^3 + 1/2 n^2
Time ~ O(n^3)
Triangle shape or not, it is still O(N^3), though of course with a lower constant than three full nested loops.
You can model the running time of the function as
sum(sum(sum(Theta(1), k=i+1..j),j=i..n),i=1..n)
As
sum(sum(sum(1, k=i+1..j),j=i..n),i=1..n) = 1/6 n^3 - 1/6 n,
the running time is Theta(n^3).
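If you want to verify that closed form without doing the algebra, a brute-force check (my own quick sketch, not part of the original answer) evaluates the triple sum and compares it with (n^3 - n)/6:

// Evaluates sum(sum(sum(1, k=i+1..j), j=i..n), i=1..n) by brute force
// and compares it with the closed form (n^3 - n) / 6.
static void checkTripleSum(int n) {
    long brute = 0;
    for (int i = 1; i <= n; i++) {
        for (int j = i; j <= n; j++) {
            for (int k = i + 1; k <= j; k++) {
                brute++;
            }
        }
    }
    long closedForm = ((long) n * n * n - n) / 6;
    System.out.println("n = " + n + ": brute force = " + brute
            + ", (n^3 - n)/6 = " + closedForm);
}

For n = 3 both values are 4; for n = 100 both are 166650.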
If you do not feel well-versed enough in the underlying theory to directly apply #MohamedEnnahdiElIdri's analysis, why not simply start by testing the code?
Note first that the loop boundaries only depend on the array's length, not its content, so regarding the time complexity, it does not matter what the algorithm does. You might as well analyse the time complexity of
public static long countwhat(int length) {
    long count = 0;
    for (int i = 0; i < length; i++) {
        for (int j = i; j < length; j++) {
            for (int k = i + 1; k <= j; k++) {
                count++;
            }
        }
    }
    return count;
}
Looking at this, is it easier to derive a hypothesis? If not, simply test whether the return value is proportional to length squared or length cubed...
public static void main(String[] args) {
    for (int l = 1; l <= 10000; l *= 2) {
        long count = countwhat(l);
        System.out.println("i=" + l + ", #iterations:" + count +
                ", #it/n²:" + (double) count / l / l +
                ", #it/n³:" + (double) count / l / l / l);
    }
}
... and notice how one value does not approach any constant with rising l while the other one does (not incidentally, the very same constant associated with the highest power of n in the methodical analysis).
This requires O(n^3) time because in the three loops, three distinct variables are incremented independently. That is, when one inner loop finishes, it does not affect the outer loop: the outer loop still runs as many times as it would have before the inner loop was entered.
And this is the maximum contiguous subarray sum problem. Self-explanatory when you see the example:
{99, 1} => returns 100
{-1, -2, -3} => return -1
{-1, 5, -2} => returns 5
{99, -3, 0, 1} => returns 99
There is an excellent algorithm known as Kadane's algorithm (Google it) which solves this in O(n) time.
Here it goes:
Initialize:
    max_so_far = 0
    max_ending_here = 0
Loop over each element a[i] of the array:
    (a) max_ending_here = max_ending_here + a[i]
    (b) if (max_ending_here < 0)
            max_ending_here = 0
    (c) if (max_so_far < max_ending_here)
            max_so_far = max_ending_here
return max_so_far
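A minimal Java rendering of that pseudocode could look like this (a sketch; note that, as written, it returns 0 for an array whose elements are all negative, so matching the {-1, -2, -3} => -1 example above needs a small extra tweak, such as tracking the largest element separately):

// Kadane's algorithm: maximum sum over all contiguous runs, in O(n)
// time and O(1) extra space. An all-negative input yields 0 here,
// which corresponds to taking the empty run.
static int maxSubarraySum(int[] a) {
    int maxSoFar = 0;
    int maxEndingHere = 0;
    for (int i = 0; i < a.length; i++) {
        maxEndingHere = maxEndingHere + a[i];   // (a) extend the current run
        if (maxEndingHere < 0) {
            maxEndingHere = 0;                  // (b) drop a run that only hurts
        }
        if (maxSoFar < maxEndingHere) {
            maxSoFar = maxEndingHere;           // (c) remember the best run so far
        }
    }
    return maxSoFar;
}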
O(n^3).
The i and j loops pick every pair of positions between arr[0] and arr[arr.length - 1], which gives roughly n * (n + 1) / 2 combinations.
For each such pair, the k loop takes (0 + arr.length) / 2 steps on average, so the total number of calculations is about n * (n + 1) / 2 * n / 2 = n^2 * (n + 1) / 4, which is O(n^3).
The complete reasoning is as follows:
Let n be the length of the array.
1) There are three nested loops.
2) The innermost loop performs exactly j-i iterations (k running from i+1 to j inclusive). There is no premature exit from this loop.
3) The middle loop performs exactly n-i iterations (j running from i to n-1 inclusive), each involving j-i innermost iterations, in total (i-i) + (i+1-i) + (i+2-i) + ... + (n-1-i) = 0 + 1 + 2 + ... + (n-1-i). There is no premature exit from this loop.
4) The outermost loop performs exactly n iterations (i running from 0 to n-1 inclusive), each involving 0 + 1 + 2 + ... + (n-1-i) innermost iterations. In total, (0 + 1 + 2 + ... + (n-1)) + (0 + 1 + 2 + ... + (n-2)) + (0 + 1 + 2 + ... + (n-3)) + ... + (0). There is no premature exit from this loop.
Now how do we handle this mess? You need to know a little about Faulhaber's formula (http://en.wikipedia.org/wiki/Faulhaber%27s_formula). In a nutshell, it says that the sum of integers up to n is O(n^2), the sum of those sums is O(n^3), and so on.
If you recall from calculus, the antiderivative of x is x^2/2, and the antiderivative of x^2 is x^3/3; the degree goes up by one each time. This is not a coincidence.
Your code runs in O(n^3).

I want to calculate the total number of iterations my code will take

This is my code for a simple selection sort. Usually the (time) complexity of a sort is the number of iterations it takes; O(n^2) in the case of selection sort.
When I dry-ran this code against the sample input 98765, it gave me 25 iterations.
Just to cross-check my dry-run output, I put 2 variables, noi and noj, in my code.
Q: Will the total number of iterations be noi * noj or noi + noj?
int index = 0;
int noi = 0, noj = 0;
for (j = 0; j < 5; j++)
{
    noj++;
    index = j;
    for (i = j; i < 5; i++)
    {
        if (a[index] > a[i])
        {
            a[index] = a[index] + a[i];
            a[i] = a[index] - a[i];
            a[index] = a[index] - a[i];
            noi++;
        }
    }
}
The number of iterations is always 15 (5 + 4 + 3 + 2 + 1) because your loops are bounded by j < 5 and i < 5. So your code's complexity is O(n^0), i.e. constant, because in your case n is fixed at 5.
The complexity doesn't depend on n because there is no n: it is always exactly 15 iterations (1 + 2 + 3 + 4 + 5, as shift66 said).
It is: noj [for the outer loop] + ((noj * (noj + 1)) / 2) [for the inner loop],
since the outer loop runs from 1 to noj and the inner loop runs from j to noj (where j depends on the outer loop).
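One way to settle it is to count the iterations explicitly rather than the swaps; the sketch below (a rewrite of mine, with a plain swap instead of the add/subtract trick) puts one counter on the outer loop and one in the inner loop body:

// Same loop structure as the sort in the question, but counting every
// outer pass and every inner-loop body execution. For an array of
// length 5 it always prints outer = 5, inner = 15 (5 + 4 + 3 + 2 + 1),
// regardless of the input, because noi in the question only counts swaps.
static void countSortIterations(int[] a) {
    int outer = 0, inner = 0;
    for (int j = 0; j < a.length; j++) {
        outer++;
        for (int i = j; i < a.length; i++) {
            inner++;
            if (a[j] > a[i]) {
                int tmp = a[j];
                a[j] = a[i];
                a[i] = tmp;
            }
        }
    }
    System.out.println("outer = " + outer + ", inner = " + inner);
}

Calling countSortIterations(new int[]{9, 8, 7, 6, 5}) prints outer = 5, inner = 15; whether you report 15 or 5 + 15 = 20 depends on whether you count the outer passes separately, but either way it is neither noi * noj nor noi + noj from the original code, because noi there only increments on swaps.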
