I am trying to find the Big-O of the Summing function below. I know that two nested loops usually mean an N^2 running time, but here there is only a single n, and j does not simply run N times itself.
int Summing( int n )
{
    int i;
    int j;
    int s = 0;
    for (i = 0; i < n; i++) {
        for (j = 0; j < i; j++) {
            s += i * i;
        }
    }
    return s;
}
You can calculate the exact time the inner loop takes as a function of i and then sum it up over all values that i takes.
Here the number of times the innermost statement runs is 0 + 1 + 2 + ... + (n-1) = (n-1)*n/2 = (n^2)/2 - n/2, which is O(n^2).
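A quick way to sanity-check that closed form (a small sketch I added, not part of the original question) is to count the inner-loop executions directly and compare against (n-1)*n/2:

```java
public class TriangularCount {
    // Counts how many times the innermost statement of Summing runs.
    static long innerCount(int n) {
        long count = 0;
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < i; j++) {
                count++; // stands in for s += i * i
            }
        }
        return count;
    }

    public static void main(String[] args) {
        int n = 1000;
        // The count matches the closed form (n - 1) * n / 2.
        System.out.println(innerCount(n) == (long) (n - 1) * n / 2); // prints true
    }
}
```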
Related
I know that two nested for loops add up to O(N^2). Is it the same in the case of a for loop containing a while loop?
Here is a code snippet
for(int num : nums)
{
if(!set.contains(num-1))
{
int currNum = num;
int currStreak = 1;
while(set.contains(currNum+1))
{
currNum += 1;
currStreak += 1;
}
longestStreak = Math.max(longestStreak, currStreak);
}
}
As deHaar wrote, the complexity of two nested for loops isn't always O(n^2).
For example, for the following code:
for (int i = 0; i < n; i++) {
for (int j = 0; j < n * n; j++) {
//do something
}
}
the complexity is O(n^3), because the code executes 'do something' n^3 times.
Note that while and for loops are technically equivalent.
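As a quick check (a counting sketch I added), the 'do something' line in the example above really does execute n^3 times:

```java
public class CubicCount {
    // Counts executions of the loop body when the inner bound is n * n.
    static long bodyExecutions(int n) {
        long count = 0;
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n * n; j++) {
                count++; // 'do something'
            }
        }
        return count;
    }

    public static void main(String[] args) {
        int n = 20;
        System.out.println(bodyExecutions(n) == (long) n * n * n); // prints true
    }
}
```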
the code:
for (int i = 0; i < 100; i++) {
//do something
}
could be translated to:
int i = 0;
while(i < 100) {
//do something
i++;
}
So in your example:
Let's say m is the longest streak; the code inside the while loop runs at most m times. Now let's say n is the amount of numbers, so the body of the for loop runs n times.
Since the code inside the while loop can run up to m times on each of the n iterations of the for loop, that is at most m*n steps in total, so the complexity is O(n*m).
If the contents of set are exactly nums, the longest possible streak has length n, and then you can say the complexity is O(n^2).
public int Loop(int[] array1) {
int result = 0;
for (int i = 0; i < array1.length; i++) {
for (int j = 0; j < array1.length; j++) {
for (int k = 1; k < array1.length; k = k * 2) {
result += j * j * array1[k] + array1[i] + array1[j];
}
}
}
return result;
}
I'm trying to find the complexity function that counts the number of arithmetic operations here. I know the complexity class would be O(n^3), but I'm having a bit of trouble counting the steps.
My reasoning so far is that I count the number of arithmetic operations which is 8, so would the complexity function just be 8n^3?
Any guidance in the right direction would be very much appreciated, thank you!
The first loop runs n times and the second loop also runs n times; however, the third loop runs log(n) times (base 2), because k is multiplied by two on each iteration and the inverse of repeated doubling is the logarithm. Multiplying these together, we have O(n^2 log(n)).
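The log(n) count for the doubling loop can be checked empirically (a small sketch I added): the number of iterations of for (k = 1; k < n; k = k * 2) is the number of times 1 can be doubled while staying below n:

```java
public class DoublingSteps {
    // Counts iterations of the k = k * 2 loop for a bound of n.
    static int steps(int n) {
        int count = 0;
        for (int k = 1; k < n; k = k * 2) {
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(steps(1024)); // prints 10, i.e. log2(1024)
    }
}
```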
If we can agree that the following is one big step:
result += j * j * array1[k] + array1[i] + array1[j]
then let's call that incrementResult.
How many times is incrementResult called here? (log n)
for (int k = 1; k < array1.length; k = k * 2) {
// incrementResult
}
Let's call that loop3. Then how many times is loop3 called here? (n)
for (int j = 0; j < array1.length; j++) {
// loop 3
}
Let's call that loop2. Then, how many times is loop2 called here? (n)
for (int i = 0; i < array1.length; i++) {
// loop 2
}
Multiply all of those and you'll get your answer :)
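Multiplying them out can also be verified by instrumenting the loops (a sketch I added; the counter stands in for incrementResult):

```java
public class NestedCount {
    // Total number of times the innermost statement runs for length n.
    static long totalCalls(int n) {
        long calls = 0;
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                for (int k = 1; k < n; k = k * 2) {
                    calls++; // incrementResult
                }
            }
        }
        return calls;
    }

    public static void main(String[] args) {
        int n = 1024;
        // n * n * log2(n) = 1024 * 1024 * 10
        System.out.println(totalCalls(n) == 1024L * 1024L * 10L); // prints true
    }
}
```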
That depends on the loops. For instance:
for (int i = 0; i < 10; i++) {
for (int j = 0; j < 10; j++) {
for (int k = 0; k < 10; k++) {
sum += i * j * k;
}
}
}
has complexity O(1), because the number of iterations does not depend on the input at all.
Or this:
for (int i = 0; i < n*n*n*n*n*n; i++) {
sum += i;
}
is O(n^6), even though there is a single loop.
What really matters is how many iterations each loop makes.
In your case, it is easy to see that each iteration of the innermost loop is O(1). How many iterations are there? How many times do you need to double a number until you reach n? If x is the number of iterations, we'd exit the loop at the first x such that k = 2^x > n. Can you solve this for x?
Each iteration of the second loop will do this, so the cost of the second loop is the number of iterations (which are easier to count this time) times the cost of the inner loop.
And each iteration of the first loop will do this, so the cost of the first loop is the number of iterations (which is also easy to count) times the cost of the second loop.
Overall, the runtime is the product of 3 numbers. Can you find them?
for (int i = 0,len=size-2; i < len; i++) {
for (int j = 1,leng = size-1; j < leng; j++) {
for (int k = 2; k < size; k++) {
if (i < j && j < k) {
sum = sum + Math.floor((a[i] + a[j] + a[k]) / (a[i] * a[j] * a[k]));
}
}
}
}
I need this piece of code to run in at most half of its current running time. Here, the array a is of type double, and I am reading the inputs via the Reader class. How can I achieve a faster run time?
Your nested loops do nothing but iterate unless the condition i < j && j < k is satisfied. But the middle and inner loops start their iterations at the same initial value every time, regardless of the values of the outer loop indexes. For example, when i is 5, the middle loop still starts at 1 and the inner one still starts at 2, even though they can know that they will not perform any useful work for those values.
Start each loop iterating from a more useful point. You will save much useless index arithmetic and many vain index comparisons. In fact, if you do it properly then you shouldn't need to perform any index comparisons at all.
Details, such as they are, are left as an exercise. I've probably offered too much help already.
You can rewrite 1/(x*y*z) as (1/x)*(1/y)*(1/z). Multiplication is faster than division, so you can precalculate an array of reciprocal values as ra[i] = 1/a[i]. Further optimizations are possible, but they depend on what values can occur, which you did not specify.
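A sketch of that idea (my own illustration, using ordered loop bounds x < y < z so the guard isn't needed; note that floating-point reciprocals are not always exact, so Math.floor can land differently near integer boundaries — worth validating on your actual data):

```java
public class ReciprocalSum {
    // Precompute reciprocals once, then replace division by multiplication.
    static double withReciprocals(double[] a) {
        int size = a.length;
        double[] ra = new double[size];
        for (int i = 0; i < size; i++) {
            ra[i] = 1 / a[i];
        }
        double sum = 0;
        for (int x = 0; x < size - 2; x++)
            for (int y = x + 1; y < size - 1; y++)
                for (int z = y + 1; z < size; z++)
                    sum += Math.floor((a[x] + a[y] + a[z]) * (ra[x] * ra[y] * ra[z]));
        return sum;
    }

    public static void main(String[] args) {
        // Powers of two keep the reciprocals exact for this demonstration.
        double[] a = {0.5, 0.5, 0.5, 2.0};
        System.out.println(withReciprocals(a)); // prints 30.0
    }
}
```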
With the following code there is no need for an if statement, because x is always less than y, which is always less than z:
for (int x = 0; x < size - 2; x++)
for(int y = x + 1; y < size - 1; y++)
for(int z = y + 1; z < size; z++)
sum += Math.floor((a[x] + a[y] + a[z]) / (a[x] * a[y] * a[z]));
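A quick equivalence check (a sketch I added; the method names are mine) confirms the rewritten loops visit exactly the triples i < j < k that the original if statement selected:

```java
public class TripleSumCheck {
    // Original version: full loop ranges plus the i < j && j < k guard.
    static double original(double[] a) {
        int size = a.length;
        double sum = 0;
        for (int i = 0; i < size - 2; i++)
            for (int j = 1; j < size - 1; j++)
                for (int k = 2; k < size; k++)
                    if (i < j && j < k)
                        sum += Math.floor((a[i] + a[j] + a[k]) / (a[i] * a[j] * a[k]));
        return sum;
    }

    // Rewritten version: the loop bounds enforce x < y < z, no if needed.
    static double rewritten(double[] a) {
        int size = a.length;
        double sum = 0;
        for (int x = 0; x < size - 2; x++)
            for (int y = x + 1; y < size - 1; y++)
                for (int z = y + 1; z < size; z++)
                    sum += Math.floor((a[x] + a[y] + a[z]) / (a[x] * a[y] * a[z]));
        return sum;
    }

    public static void main(String[] args) {
        double[] a = {0.5, 0.5, 0.5, 2.0};
        System.out.println(original(a) == rewritten(a)); // prints true
    }
}
```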
For the following program fragment you will (a) write down the total work done by each program statement (beside each statement), and (b) compute an expression for the total time complexity T(n) and derive the big-Oh complexity, showing all steps to the final answer. I am having a lot of trouble getting started.
for ( i = 0; i < n; i++) {
for ( j = 0; j < 1000; j++) {
a[ i ] = random(n) // random() takes constant time
}
}
int sortedArray [];
for ( i = 0; i < n; i++) {
for ( j = 0; j < i; j++) {
readArray(a) // reads in an array of n values
sortedArray = sort(a) // sort() takes n log n operations
}
}
I also had this problem. For the first fragment, line 2 runs n times and line 3 runs a constant 1000 times per pass, so that fragment is O(n). For the second fragment I have n on line 2, n^2 on line 3, and then n for readArray and n log n for sort on lines 4 and 5; since those per-iteration costs are paid on each of the n^2 iterations, the total time complexity is O(n^2 * n log n) = O(n^3 log n), not O(n^2).
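Summing the second fragment's work explicitly (a sketch; c1, c2, c3 stand for the constant factors of the loop overhead, readArray, and sort):

T(n) = 1000*c1*n + Sum_{i=0..n-1} Sum_{j=0..i-1} (c2*n + c3*n log n)
     = 1000*c1*n + (n(n-1)/2) * (c2*n + c3*n log n)
     = O(n^3 log n)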
sum = 0;
for (i = 1; i <= n; i++) { //#1
for (j = 1; j <= i * i; j++) { //#2
if (j % i == 0) { //#3
for (k = 1; k <= j; k++) { //#4
sum++;
}
}
}
}
The above has me confused.
Suppose #1 runs N times,
#2 runs N^2 times,
#3 is true N/c times, since for N inputs about N/c of the conditions could hold, and
#4 runs N times.
So roughly I could be looking at O(N^5), but I am not sure. Please help clarify.
EDIT I was wondering about the runtime of the if (j % i == 0) check. Since it receives N^2 values of j from its parent loop, it could be doing (N^2)/c executions instead of N/c.
I would say it's O(N^4), as it is the same as:
for (int i = 1; i <= n; i++) //#1 O(n ...
for (int j = i; j <= i * i; j+=i) //#2 ... * n ...
for (int k = 1; k <= j; k++) //#4 ... * n^2) as j ~= i^2
sum++;
or
public static void main(String... args) {
int n = 9000;
System.out.println((double) f(n * 10) / f(n));
}
private static long f(long n) {
long sum = 0;
for (long i = 1; i <= n; i++) //#1
for (long j = 1; j <= i; j++) //#2
sum += i * j; // # 4
return sum;
}
prints
9996.667534360826
which is pretty close to 10^4
@PeterLawrey did the math; here are benchmarks plotted on a chart (my data set: n vs. execution time in microseconds).
Basically, I ran the code in question several times with different n inputs (X-axis), then divided the average execution time by the functions n^5, n^4 and n^3 and plotted the results:
Note that this is a logarithmic scale and that all functions were scaled to more-or-less be in the same range.
Guess what: the average execution time t(n) divided by n^5 keeps decreasing, while t(n)/n^3 keeps growing. Only t(n)/n^4 stabilizes as n grows, which supports the conclusion that the average execution time is in fact O(n^4).
I think the answer, using Sigma notation, would be like the following:
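The post breaks off here; a sketch of how that Sigma-notation computation could go, consistent with the O(N^4) answer above. The k loop (#4) only runs when i divides j, i.e. for j = i*m with m = 1, ..., i, and then it performs j = i*m increments:

Sum_{i=1..n} Sum_{m=1..i} i*m = Sum_{i=1..n} i * i(i+1)/2
                              = Sum_{i=1..n} (i^3 + i^2)/2
                              = Theta(n^4)

The j loop's own condition checks (i^2 per value of i) add only Theta(n^3), so the total remains Theta(n^4).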