Showing the order of growth of a function - Java

I have the following code:
int sum = 0;
for (int i = 1; i <= n; i = i * 2)
{
    for (int j = 1; j <= n; ++j)
    {
        ++sum;
    }
}
And I have the following analysis, which according to some people is wrong even though the final answer is correct, but I don't understand why.
So, I reason as follows:
n = 1; i = 1; j is executed 1 time
n = 2; i = 2; j is executed 2 * 1 times
n = 3; i = 4; j is executed 4 * 3 times
n = 4; i = 8; j is executed 8 * 4 times
n = 5; i = 16; j is executed 16 * 5 times
......
n = k; i = 2^k; j is executed n * 2^k times
And since 2^k is log(n), the order of growth of this function is n log(n), which is a linearithmic growth rate.
Am I doing something wrong in my analysis? Please let me know if I have any mistakes because it is confusing me a lot. Thank you! I want to apply this analysis to more complicated nested loops, but before I do that I need to know what I'm doing wrong.
Update: I spent a lot of time by myself trying to figure out why my analysis is wrong. Thanks to @YvesDaoust, I think I'm starting to understand it a little bit more.

The body of the inner loop is executed n times.
The body of the outer loop is executed for all powers of 2 from 1 to n, and there are about log2(n) of them; hence the global complexity is
O(n log n)
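To sanity-check that reasoning, here is a minimal Java sketch (the class and method names are my own, not from the question) that simply counts how many times the inner body runs:

```java
public class GrowthCheck {
    // Count executions of the inner body for the loop from the question.
    static long bodyCount(int n) {
        long sum = 0;
        for (int i = 1; i <= n; i = i * 2) {
            for (int j = 1; j <= n; ++j) {
                ++sum;
            }
        }
        return sum;
    }

    public static void main(String[] args) {
        // For n = 8 the outer loop runs for i = 1, 2, 4, 8 (log2(8) + 1 = 4 values),
        // and each outer pass runs the inner body n = 8 times: 4 * 8 = 32.
        System.out.println(bodyCount(8));  // 32
        System.out.println(bodyCount(16)); // 5 * 16 = 80
    }
}
```

The counts grow as (floor(log2(n)) + 1) * n, i.e. proportionally to n log n.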

Related

I thought the answer for this time complexity was O(n), but apparently it is wrong. Can someone explain?

What is the big-O notation of the given code snippet?
void snippet() {
    for (int i = 0; i < n; i = i + 5)
        a[i] = i;
}
The loop body runs n/5 times, so the cost is O(n/5). (Note that since big-O notation drops constant factors, O(n/5) is the same class as O(n).)
Let's take the simplified example (in javascript) below, where n is 10:
var n = 10;
for (var i = 0; i < n; i += 5)
    console.log(i);
The loop body will run twice:
first i is 0,
then i is 5,
then i is 10, and 10 < 10 == false, so we stop.
Effectively we are moving i towards n in steps of 5.
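Mirroring that count in Java (a sketch; the method name is my own) makes the n/5 behaviour easy to verify for any n:

```java
public class StepCount {
    // Count iterations of: for (i = 0; i < n; i += 5)
    static int iterations(int n) {
        int count = 0;
        for (int i = 0; i < n; i += 5) {
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(iterations(10)); // 2  (i = 0, 5)
        System.out.println(iterations(11)); // 3  (i = 0, 5, 10)
    }
}
```

The iteration count is ceil(n/5), which grows linearly in n.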

Big O of functionF

static void functionF(int n) {
    for (int i = 0; i < n; i++) {
        functionF(n - 1);
    }
}
I've calculated that:
if n = 2, the function gets called 4 more times. If n = 3, it gets called 15 more times. If n = 4, it gets called 64 more times. I am trying to work out a big O time complexity.
From a complexity point of view, it takes factorial time to complete -> O(n!),
because at every level, the loop runs the number of times specified at the previous level.
And note the difference between your function and:
function alpha(n){
    for(int i = 0; i < n; i++){
        //Do some stuff
        alpha(i)
    }
}
or
function alpha(n){
for(int i = 0; i < n; i++){
for(int j = 0; j < n; j++){
// Do some stuff
}
}
}
The first one I can't really estimate, but its running time is lower than O(n!).
The second one is simply O(n²), which in any real situation runs faster than O(n!).
I tested this function by incrementing a public static variable every time the function is called. You could do this in the future whenever you want to check how many times something runs.
The pattern beginning with (1, 4, 15, 64, 325, ...) is:
a(n) = n(a(n-1) + 1)
or, equivalently,
a(n) = n(a(n-1)) + n
That's an odd-looking function because of its recursive definition, but it's accurate. You could also have looked this pattern up from the numbers you provided.
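The counter technique described above can be sketched like this (the `calls` field and the `countCalls` helper are my additions, not part of the original assignment):

```java
public class CallCounter {
    static long calls = 0;

    // The function from the question, instrumented with a call counter.
    static void functionF(int n) {
        for (int i = 0; i < n; i++) {
            calls++;           // count each recursive invocation
            functionF(n - 1);
        }
    }

    // Reset the counter, run functionF(n), and report how many extra calls were made.
    static long countCalls(int n) {
        calls = 0;
        functionF(n);
        return calls;
    }

    public static void main(String[] args) {
        System.out.println(countCalls(2)); // 4
        System.out.println(countCalls(3)); // 15
        System.out.println(countCalls(4)); // 64
    }
}
```

The outputs reproduce the sequence 4, 15, 64 reported in the question, matching a(n) = n(a(n-1) + 1).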

What are the Big O Notation for these for loops?

I am finishing up an assignment for my class, this particular section involves giving an "analysis" of the running time for several for loops. The instructor specified that he wants a Big Oh Notation answer.
I am building on the foundation that total running time is based on:
1) The cost of executing each statement.
2) The frequency of execution of each statement.
3) The idea of working from the "inside out", specifically starting from the inner loops.
I know that:
total = 0;
for (int i = 0; i < n; i++) {
    total++;
}
Answer: O(N); it's running in linear time.
for (int i = 0; i < n; ++i)
    for (int j = 0; j < n; ++j)
        total++;
Answer: O(N^2), we only care about how large N grows.
I am confused about
for (int i = 0; i < n; ++i)
    for (int j = 0; j < n * n; ++j)
        ++total;
and
for (int i = 0; i < n; ++i)
    for (int j = 0; j < i; ++j)
        ++total;
And last but not least: am I right in assuming from my textbook that all triple nested loops run in N^3 time?
You can analyze your algorithms using Sigma notation, to count/expand the number of iterations run by the inner for loop of your algorithms:
T_a(n) = sum_{i=0}^{n-1} sum_{j=0}^{n^2 - 1} 1 = sum_{i=0}^{n-1} n^2 = n^3
where T_a covers
for (i = 0; i < n; ++i)
    for (j = 0; j < n * n; ++j)
        ++total;
and
T_b(n) = sum_{i=0}^{n-1} sum_{j=0}^{i-1} 1 = sum_{i=0}^{n-1} i = n(n-1)/2
where T_b covers
for (i = 0; i < n; ++i)
    for (j = 0; j < i; ++j)
        ++total;
So T_a is O(n^3) and T_b is O(n^2).
Finally, a note on your question:
"And last but not least, I am assuming from my textbook that all
Triple nested loops are running at N^3 time?"
This is not true: it depends on how the loop variable is increased, as well as how it is bounded, in the signature of each loop. Compare e.g. with the inner loop in T_a above (bounded by n^2, but increased by only 1 in each iteration), or e.g. the algorithm analyzed in this answer, or, for a slightly trickier case, the single loop analyzed in this answer.
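For small n, the two closed forms above can be confirmed empirically; a rough Java sketch (method names are mine):

```java
public class SigmaCheck {
    // T_a: for (i = 0; i < n; ++i) for (j = 0; j < n * n; ++j) ++total;
    static long ta(int n) {
        long total = 0;
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < n * n; ++j)
                ++total;
        return total;
    }

    // T_b: for (i = 0; i < n; ++i) for (j = 0; j < i; ++j) ++total;
    static long tb(int n) {
        long total = 0;
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < i; ++j)
                ++total;
        return total;
    }

    public static void main(String[] args) {
        System.out.println(ta(4)); // 4^3 = 64
        System.out.println(tb(5)); // 5 * 4 / 2 = 10
    }
}
```

The measured totals match n^3 and n(n-1)/2 exactly, confirming O(n^3) and O(n^2).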

Mathematical equivalent of multiple nested loops

When you have a loop inside a loop, i.e. a nested loop, let's say a for-loop, regardless of the programming language (which of course must be imperative):
for(int j = 1; j <= 100; j++){
    for(int k = 1; k <= 200; k++){
        // body or terms
    }
}
is the mathematical equivalent sum_{j=1}^{100} sum_{k=1}^{200} term(j, k), when I want to sum it for j = 1 with all k = {1, 200}, and j = 2 with again all k = {1, 200}, and so on.
And the red-circled condition (the extra k < j constraint on the inner sum) is unnecessary, right?
And the same applies for multiple nested loops?
The code you provided will run as you explained:
sum it for j = 1 with all k = {1, 200}, and j = 2 with again all k = {1, 200}, and so on.
However, the mathematical equivalent is the version without the condition marked in red.
The sigmas with the condition are equivalent to this code:
for(int j = 1; j <= 100; j++){
    for(int k = 1; k < j; k++){
        // body or terms
    }
}
Hope I helped.
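To make the difference between the two variants concrete, here is a small Java sketch (using the limits 100 and 200 from the question) that counts the iterations of each:

```java
public class NestedCount {
    // Unconditioned double loop: j = 1..100, k = 1..200.
    static int full() {
        int count = 0;
        for (int j = 1; j <= 100; j++)
            for (int k = 1; k <= 200; k++)
                count++;
        return count;
    }

    // Variant with the extra k < j condition on the inner loop.
    static int conditioned() {
        int count = 0;
        for (int j = 1; j <= 100; j++)
            for (int k = 1; k < j; k++)
                count++;
        return count;
    }

    public static void main(String[] args) {
        System.out.println(full());        // 100 * 200 = 20000
        System.out.println(conditioned()); // 0 + 1 + ... + 99 = 4950
    }
}
```

The unconditioned loop runs a fixed 100 * 200 times, while the conditioned one runs the triangular sum 0 + 1 + ... + 99 times, just as the respective sigmas predict.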
Sigma stands for summation. If you sum a constant x over a range i = 1, ..., n, the result is x * n (x + x + x + ... + x, n times). Transcribed to pseudocode, it would be like this:
result = 0
for i = 1, n:
    result = result + x
So it doesn't really translate to a general for loop, which is more about doing something a certain number of times or until a condition is met.
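In Java, that summation-of-a-constant reading looks like this (a sketch; `x` and `n` are arbitrary illustrative parameters):

```java
public class ConstantSum {
    // Sum the constant x over i = 1..n; the closed form is x * n.
    static int sumConstant(int x, int n) {
        int result = 0;
        for (int i = 1; i <= n; i++) {
            result = result + x;
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(sumConstant(5, 3)); // 5 * 3 = 15
    }
}
```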
Often when you see mathematicians researching algorithms that relate directly to software fields, they use the more flexible functional notation and recursion a lot more than summation, since functional notation translates a bit more directly to general loop computations than summation does.

worst case running time of the following code

I am attending a course online and stuck at the following question. What is the order of growth of the worst case running time of the following code fragment as a function of N?
int sum = 0;
for (int i = 1; i <= N*N; i++)
    for (int j = 1; j <= i; j++)
        sum++;
Can anyone explain this?
The number of iterations depends on the size of N.
for (int i = 1; i <= N*N; i++)
The value of i varies from 1 to N*N, and the inner loop variable j depends on i. So for every i, the inner loop is executed i times. Therefore sum++ will be executed 1 + 2 + ... + N^2 = N^2(N^2 + 1)/2 times, and O(N^4) is the time complexity of this loop.
In other words, sum++ runs C(N^2 + 1, 2) times, i.e. (N*N + 1 choose 2).
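A small Java check of that closed form for small N (the helper name is my own):

```java
public class QuarticCheck {
    // Count executions of sum++ for a given N.
    static long count(int N) {
        long sum = 0;
        for (int i = 1; i <= N * N; i++)
            for (int j = 1; j <= i; j++)
                sum++;
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(count(2)); // 4 * 5 / 2 = 10
        System.out.println(count(3)); // 9 * 10 / 2 = 45
    }
}
```

Both values match N^2(N^2 + 1)/2, which grows as N^4.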
