I have the following code. I have to derive expressions for the runtime using summations, then solve them to obtain an expression for T(n) or T(n, m) that is not written as a summation, and then count how many times the println statement executes as a function of the variables n and/or m. O() is used to check the answers.
T(n) ∈ O(n)
for (int i = 0; i < n; ++i)
{
System.out.println("hello");
}
I've got the summation part of this:
T(n) = Σ_{i=0}^{n-1} c, where i = 0 is the lower bound, n-1 is the upper bound, and c is the constant cost of the println.
From here, how do I solve this to get a closed-form expression for T(n)?
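For reference, here is how such a constant-summand sum collapses (assuming each println costs some constant c):

Σ_{i=0}^{n-1} c = c + c + ... + c (n times) = c·n ∈ O(n)

so the println executes exactly n times.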
Here is another example of code where I found the summation, but I do not quite understand the second part of the question.
T(n) ∈ O(n^2)
for (int i = 1; i <= n; ++i)
{
for (int j = 1; j <= n; ++j)
{
System.out.println("Hello");
}
}
I would like some help on the steps to go along with an answer.
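For reference, the same approach applied to the nested version (again assuming a constant cost c per println):

T(n) = Σ_{i=1}^{n} Σ_{j=1}^{n} c = Σ_{i=1}^{n} c·n = c·n² ∈ O(n²)

so the println executes n² times.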
I am trying to figure out a tight bound in terms of big-O for this snippet:
for(int i = 1 ; i <= n ; i++) {
for(int j = 1; j <= i*i ; j++) {
if (j% i == 0){
for(int k = 0 ; k<j ; k++ )
sum++;
}
}
}
If we start with the innermost loop, in the worst case it runs n^2 times (it runs j times, and j can be as large as i^2 = n^2), which accounts for O(N^2).
The if-statement will be true every time j = m*i, where m is a positive integer. Since j runs from 1 to i^2, this happens for m = {1, 2, ..., i}, which means it is true i times; i can at most be n, so the worst case is m = {1, 2, ..., n}, i.e. n times.
The second loop should have a worst case of O(N^2) if i = n.
The outer loop has a worst case complexity of O(N).
I argue that these combine in the following way: O(N^2) for the inner loop * O(N^2) for the second loop * O(N) for the outer loop gives a worst-case time complexity of O(N^5).
However, the if-statement guarantees that we will only reach the inner loop n times, not n^2. But regardless of this, we still need to go through the outer loops n * n^2 times. Does the if-test influence the worst case time complexity for the snippet?
Edit: corrected the bound on j to i^2, not i.
You can simplify the analysis by rewriting your loops without an if, as follows:
for(int i = 1 ; i <= n ; i++) {
for(int j = 1; j <= i ; j++) {
for(int k = 0 ; k<j*i ; k++ ) {
sum++;
}
}
}
This eliminates steps in which the conditional skips over the "payload" of the loop. The complexity of this equivalent system of loops is O(n⁴).
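For reference, a sketch of the summation behind that claim, counting how often sum++ executes in the rewritten loops:

Σ_{i=1}^{n} Σ_{j=1}^{i} j·i = Σ_{i=1}^{n} i·(i(i+1)/2) = Σ_{i=1}^{n} (i³ + i²)/2 = Θ(n⁴)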
I analyse your question in a more straightforward way.
First fix i as a constant,
for example, assume it to be k.
Then j runs from 1 to k², and the if-test succeeds exactly when j = c·k for c = 1, 2, ..., k.
For such a j, the innermost loop executes j = c·k times,
so the complexity for a fixed i = k can be expressed as
k + 2k + 3k + ... + k·k = k·(1 + 2 + ... + k) = k·k(k+1)/2
= O(k³)
so now we let k range over 1..n, and the total complexity is O(n⁴).
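If you want to check the Θ(n⁴) result empirically, here is a minimal harness of my own (the class name LoopCount is just for illustration) that counts the sum++ executions of the original snippet and compares them with n⁴:

public class LoopCount {
    public static void main(String[] args) {
        for (int n = 10; n <= 160; n *= 2) {
            long sum = 0;
            for (int i = 1; i <= n; i++)
                for (int j = 1; j <= i * i; j++)
                    if (j % i == 0)
                        for (int k = 0; k < j; k++)
                            sum++;
            // The leading term of the sum is n^4/8, so this ratio approaches 0.125.
            System.out.println(n + ": sum=" + sum + " sum/n^4=" + (double) sum / Math.pow(n, 4));
        }
    }
}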
I am finishing up an assignment for my class; this particular section involves giving an "analysis" of the running time for several for loops. The instructor specified that he wants an answer in Big-Oh notation.
I am building on the foundation that total running time is based on:
1) The cost of executing each statement
2) The frequency of execution of each statement
3) The idea of working from the "inside out", specifically starting from inner loops.
I know that:
total = 0;
for (int i = 0; i < n;i++){
total++;
}
Answer: O(N); it runs in linear time.
for (int i = 0; i < n;++i)
for (int j = 0; j < n;++j)
total++;
Answer: O(N^2); we only care about how the running time grows as N grows.
I am confused on
for (int i = 0;i < n;++i)
for (j=0;j < n * n;++j)
++total;
and
for ( i = 0;i < n;++i)
for (j = 0; j < i;++j)
++total;
And last but not least, I am assuming from my textbook that all triple nested loops run in N^3 time?
You can analyze your algorithms using Sigma notation, to count/expand the number of iterations run by the inner for loop:
Where T_a covers
for (i = 0; i < n; ++i)
for (j = 0; j < n * n; ++j)
++total;
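The derivation in the original answer was posted as an image; reconstructed in the same sigma notation, T_a counts as:

T_a(n) = Σ_{i=0}^{n-1} Σ_{j=0}^{n²-1} 1 = Σ_{i=0}^{n-1} n² = n³ ∈ O(n³)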
and T_b:
for (i = 0; i < n; ++i)
for (j = 0; j < i; ++j)
++total;
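Reconstructed the same way, T_b counts as:

T_b(n) = Σ_{i=0}^{n-1} Σ_{j=0}^{i-1} 1 = Σ_{i=0}^{n-1} i = n(n-1)/2 ∈ O(n²)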
Finally, a note on your question:
"And last but not least, I am assuming from my textbook that all
Triple nested loops are running at N^3 time?"
This is not true: it depends on how the loop variable is increased, as well as how it is bounded, in the signature of each loop. Compare e.g. with the inner loop in T_a above (bounded by n^2, but simply increased by 1 in each iteration), or e.g. the algorithm analyzed in this answer, or, for a slightly trickier case, the single loop analyzed in this answer.
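As a concrete counterexample of my own (not from the linked answers), this triple nested loop runs in only O(n·log n), because the middle loop doubles its variable and the innermost loop is bounded by a constant:

for (int i = 0; i < n; ++i)          // n iterations
    for (int j = 1; j < n; j *= 2)   // about log2(n) iterations
        for (int k = 0; k < 2; ++k)  // always exactly 2 iterations
            ++total;                 // executes about 2·n·log2(n) times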
When you have a loop inside a loop, i.e. a nested loop, let's say a for-loop, regardless of the programming language (which of course must be imperative),
for(int j = 1; j <= 100; j++){
for(int k = 1; k <= 200; k++){
// body or terms
}
}
is the mathematical equivalent, where I want to sum the body for j = 1 with all k = 1..200, then for j = 2 with again all k = 1..200, and so on: Σ_{j=1}^{100} Σ_{k=1}^{200} (body). (The sigmas in the original question were shown as an image, with one extra condition circled in red; judging from the answer below, that condition was k < j.)
And the red-circled condition is unnecessary, right?
And the same applies for multiple nested loops?
The code you provided will run as you explained:
sum the body for j = 1 with all k = 1..200, then j = 2 with again all k = 1..200, and so on.
However, the mathematical equivalent is the version without the condition marked in red.
The sigmas with the condition are equivalent to this code:
for(int j = 1; j <= 100; j++){
for(int k = 1; k < j; k++){
// body or terms
}
}
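To make the difference concrete (my own arithmetic, not part of the original answer): without the condition the body runs Σ_{j=1}^{100} Σ_{k=1}^{200} 1 = 100·200 = 20,000 times, whereas with the condition k < j it runs Σ_{j=1}^{100} (j-1) = 0+1+...+99 = 4,950 times.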
Hope I helped.
Sigma stands for summation. If you're summing over a range i = 1..n and each term is a constant x (independent of i), then the result is x * n (x + x + x + ... + x, n times). Transcribed to pseudocode, it would be like this:
result = 0
for i=1,n:
result = result + x
So it doesn't really translate to a general for loop, which is more about doing something a certain number of times or until a condition is met.
Often when you see mathematicians researching algorithms that relate directly to software fields, they use the more flexible functional notation and recursion a lot more than summation, since such functional notation translates a bit more directly to general loop computations than summation does.
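To illustrate that point with a sketch of my own (the helper names are hypothetical): a condition-driven loop corresponds more naturally to the recurrence T(1) = 0, T(n) = 1 + T(n/2) than to a fixed summation:

// Iterative form: loop until a condition is met.
static int halvings(int n) {
    int steps = 0;
    while (n > 1) {   // condition-driven, not a fixed iteration count
        n = n / 2;
        steps++;
    }
    return steps;
}

// Recursive form mirroring it: T(1) = 0, T(n) = 1 + T(n/2), which is Θ(log n).
static int halvingsRec(int n) {
    if (n <= 1) return 0;
    return 1 + halvingsRec(n / 2);
}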
I'm pretty new to the Big-O field, so bear with me here. I have been reading about it as much as I could, but I still need a lot of work to fully understand it.
I came across these nested for loops in a practice exercise; there weren't any solutions, and they seem complicated. So, any help would be appreciated.
1)
int sum=0;
for(int i=0; i < n^2; i++) { // n+1
for(int j = n-1; j >= n-1-i; j--) { // n(n+1)/2 ?
sum = i+j; // n(n+1)/2 ?
System.out.println(sum); // n(n+1)/2 ?
}
}
Big-O = ?
2)
int sum=0;
for(int i=1; i <= 2^n; i=i*2) { // log(n)
for(int j=0; j <= log(i); j++) { // log(n(n+1)/2) ?
sum = i+j; // log(n(n+1)/2) ?
System.out.println(sum); // log(n(n+1)/2) ?
}
}
Big-O = ?
3)
int sum = 0; int k = 23;
for(int i=k; i <= 2^(n-k); i=i*2) { // log(n)
for(int j=2^(i-k); j < 2^(i+k); j=j*2) { // log(log(n)) ?
sum = i+j; // log(log(n)) ?
System.out.println(sum); // log(log(n)) ?
}
}
Big-O = ?
4)
int sum=0;
for(int i=2*n; i>=1; i=i/2) {
for(int j=i; j>=1; j=j/2) {
sum = i+j;
System.out.println(sum);
}
}
Big-O = ?
EDIT:
- Corrected #4. Sorry for the mess up.
- Base of the log is 2.
- The ^ here means "to the power", not xor.
There are plenty of questions like "Big-O of nested loops" here on Stack Overflow (and answers).
However, you will get an answer from me. But first there is a notation problem:
You tagged this question as java, and in the code I see things like 2^n or n^2. In Java, ^ means xor, but I think you meant Math.pow(2, n) instead, so for this answer I will treat ^ as a power operator.
int sum=0;
for(int i=0; i < n^2; i++) { // outer loop
for(int j = n-1; j >= n-1-i; j--) { // inner loop
sum = i+j; // inner operations
System.out.println(sum);
}
}
The inner operations run in O(1), so I am just counting how often they are called.
The outer loop runs n² times.
For each i (from the outer loop) the inner loop runs i+1 times, since j goes from n-1 down to n-1-i inclusive.
In total you get 1+2+...+n² = n²(n²+1)/2. This is in Θ(n⁴).
int sum=0;
for(int i=1; i <= 2^n; i=i*2) { // outer loop
for(int j=0; j <= log(i); j++) { // inner loop
sum = i+j; // inner operations
System.out.println(sum);
}
}
The outer loop runs n+1 times (i = 2^0, 2^1, ..., 2^n), since multiplying 1 by 2 n times gives 2^n.
The inner loop runs k+1 times for each i = 2^k (0 ≤ k ≤ n), because j goes from 0 up to and including log(i) = k, assuming the base of the logarithm is 2.
In total you get 1+2+...+(n+1) = (n+1)(n+2)/2. This is in Θ(n²).
int sum = 0; int k = 23;
for(int i=k; i <= 2^(n-k); i=i*2) { // outer loop
for(int j=2^(i-k); j < 2^(i+k); j=j*2) { // inner loop
sum = i+j; // inner operations
System.out.println(sum);
}
}
The outer loop runs m times, with m minimal such that k⋅2^m > 2^(n-k) holds. This can be written as k⋅2^k⋅2^m > 2^n. k has to be positive (otherwise the outer loop will run forever). Assuming k is bounded by O(n) (constants are also in O(n)), m is also bounded by O(n).
The inner loop always runs 2⋅k times, no matter what i or n is: j doubles from 2^(i-k) up to 2^(i+k), which takes (i+k)-(i-k) = 2k doublings. This is in O(1) for a constant k and in O(n) for a k bounded by O(n).
In total you get O(n) for a constant k and O(n²) for a k in O(n).
int sum=0;
for(int i=2*n; i>=1; i=i/2) { // outer loop
for(int j=i; j>=1; j=j/2) { // inner loop
sum = i+j; // inner operations
System.out.println(sum);
}
}
The outer loop runs about log(n) times, just like in case 2 (the other way around).
The inner loop runs log(i)+1 times for (basically) each power of 2 i between 1 and 2n, since j is halved rather than decremented.
Assuming n = 2^k (which means log(n) = k), you get in total
(k+2) + (k+1) + ... + 2 + 1 = (k+2)(k+3)/2, which is in O(log²(n)); this also holds for n not a power of 2.
(Had the inner loop decremented j by 1 instead of halving it, it would run i times per outer iteration, for a total of 2^(k+1)+2^k+...+2^1+2^0 = 2^(k+2)-1 = 4n-1, which is in O(n).)
Methodically finding a solution for your iterative algorithms using Sigma notation:
Using base 2 for the log below:
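The derivations in this answer were originally posted as images; as a reconstruction of the kind of manipulation they showed, here is case 2 in sigma notation under that base-2 assumption:

Σ_{k=0}^{n} Σ_{j=0}^{log(2^k)} 1 = Σ_{k=0}^{n} (k+1) = (n+1)(n+2)/2 ∈ Θ(n²)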
I promise this is the last Big O question
Big O Notation for following loops...
for (int i = n; i > 0; i = i / 2){
for (int j = 0; j < n; j++){
count++;
}
}
for (int k = 0; k < n; k++){
for (int m = 0; m < n; m++){
count++;
}
}
Here is what I think I'm sure of:
The first set of nested loops is O(n·log2(n)) and the second set of nested loops is O(n^2). When adding these, is it correct to drop the first term and say that the overall Big O is O(n^2)?
Second question: when adding Big O notation for loops in series, is it always correct to drop the less significant terms?
The answer to both your questions is yes. You always drop smaller terms, as they are dominated by the larger terms for sufficiently large n, and you only care about large n when doing Big O analysis.
The reason is that n*log2(n) is dominated by n^2 asymptotically: for sufficiently large n, |n * log2(n)| < |n^2|.
If you don't see why this means you can drop the n*log2(n) term, try adding n^2 to both sides:
n^2 + n*log2(n) < n^2 + n^2
n^2 + n*log2(n) < 2 * n^2
Thus, since we know that we can ignore a constant factor (the 2 above), we know we can ignore the less significant term.