I have a quick question regarding these nested loops:
for(int i = 0; i < N; i++) {
for(int j = 0; j < i; j++)
for(j = i; j < N; j = j/2)
}
The way I see it: the first loop runs O(N) times, the second O(N) times, and the last O(log N) times. Is it wrong to say this adds up to O(N)?
This is an endless loop; instead of j/2 I think it should be j*2.
As suggested in the comments, as written this is O(infinity), since the third loop never stops (once i >= 1, j keeps halving toward zero and always stays below N), so I'll assume you meant to write this code:
for (int i = 0; i < N; i++) {
    for (int j = 0; j < i; j++)
        for (int k = i; k < N; k = k * 2)
            ;
}
In this case the first and second loops are each O(N) and the third loop is O(log(N)), but since the loops are nested, the total complexity of your code is
O(N * N * log(N)) = O(N^2 * log(N))
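To sanity-check the bound, here is a small sketch (my own helper, not from the question) that counts how many times the innermost body actually runs:

```java
public class LoopCount {
    // Counts executions of the innermost body of the corrected loops.
    // Note the k-loop never starts from k = 0: when i == 0 the j-loop
    // runs zero times, so the doubling loop is skipped entirely.
    static long count(int n) {
        long ops = 0;
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < i; j++) {
                for (int k = i; k < n; k = k * 2) {
                    ops++;
                }
            }
        }
        return ops;
    }

    public static void main(String[] args) {
        // Roughly N^2 * log N growth: doubling N shows the trend.
        for (int n = 64; n <= 1024; n *= 2) {
            System.out.println(n + " -> " + count(n));
        }
    }
}
```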
If you have any questions about my answer, feel free to ask in the comments, and if it helped you, please consider accepting or upvoting it. :)
This question already has answers here: How can I find the time complexity of an algorithm? (10 answers). Closed 4 years ago.
I am just learning the Big O notation and wanted to ask how it works for nested loops.
Is it true that in the case of
for (int i = 0; i < N; i++){
for (int j = 0; j < N; j++){
do something;
}
}
It would be O(N squared), while
for (int i = 0; i < 1000; i++){
for (int j = 0; j < N; j++){
do something;
}
}
It would be O(N) because the first loop has a constant? Or would it still be O(N squared)? Thank you
Your first statement is correct.
N can be very large, and Big-O takes that into account:
the first code is O(N^2),
while the second is O(1000 * N) => still O(N).
Big-O notation does not include constant factors.
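A quick way to convince yourself is to count iterations directly. This is just a sketch with hypothetical helper names:

```java
public class IterationCount {
    // Iterations of the N-by-N nested loop: N * N.
    static long squareCount(int n) {
        long c = 0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                c++;
        return c;
    }

    // Iterations of the 1000-by-N nested loop: 1000 * N, linear in N.
    static long constantOuterCount(int n) {
        long c = 0;
        for (int i = 0; i < 1000; i++)
            for (int j = 0; j < n; j++)
                c++;
        return c;
    }

    public static void main(String[] args) {
        // Doubling N quadruples the first count but only doubles the second.
        System.out.println(squareCount(100) + " vs " + squareCount(200));
        System.out.println(constantOuterCount(100) + " vs " + constantOuterCount(200));
    }
}
```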
public int Loop(int[] array1) {
int result = 0;
for (int i = 0; i < array1.length; i++) {
for (int j = 0; j < array1.length; j++) {
for (int k = 1; k < array1.length; k = k * 2) {
result += j * j * array1[k] + array1[i] + array1[j];
}
}
}
return result;
}
I'm trying to find the complexity function that counts the number of arithmetic operations here. I know the complexity class would be O(n^3), but I'm having a bit of trouble counting the steps.
My reasoning so far is that I count the number of arithmetic operations which is 8, so would the complexity function just be 8n^3?
Any guidance in the right direction would be very much appreciated, thank you!
The first loop runs n times and the second loop runs n times; the third loop, however, runs log(n) times (base 2): since k is multiplied by two each time, the inverse operation is taking the log. Multiplying, we have O(n^2 log(n)).
If we can agree that the following is one big step:
result += j * j * array1[k] + array1[i] + array1[j]
then let's call that incrementResult.
How many times is incrementResult called here? (log n)
for (int k = 1; k < array1.length; k = k * 2) {
// incrementResult
}
Lets call that loop3. Then how many times is loop3 called here? (n)
for (int j = 0; j < array1.length; j++) {
// loop 3
}
Let's call that loop2. Then, how many times is loop2 called here? (n)
for (int i = 0; i < array1.length; i++) {
// loop 2
}
Multiply all of those and you'll get your answer :)
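If it helps to check that multiplication, here is a sketch (a hypothetical calls helper, not part of the original code) that counts the incrementResult executions directly:

```java
public class CallCounter {
    // Counts how many times incrementResult would run for an array of length n.
    // For n >= 2 this equals n * n * (floor(log2(n - 1)) + 1).
    static long calls(int n) {
        long c = 0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                for (int k = 1; k < n; k = k * 2)
                    c++; // one incrementResult
        return c;
    }

    public static void main(String[] args) {
        System.out.println(calls(8)); // 8 * 8 * 3 = 192
    }
}
```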
That depends on the loops. For instance:
for (int i = 0; i < 10; i++) {
for (int j = 0; j < 10; j++) {
for (int k = 0; k < 10; k++) {
sum += i * j * k;
}
}
}
has complexity O(1), because the number of iterations does not depend on the input at all.
Or this:
for (int i = 0; i < n*n*n*n*n*n; i++) {
sum += i;
}
is O(n^6), even though there is a single loop.
What really matters is how many iterations each loop makes.
In your case, it is easy to see that each iteration of the innermost loop is O(1). How many iterations are there? How many times do you need to double a number until you reach n? If x is the number of iterations, we'd exit the loop at the first x such that k = 2^x > n. Can you solve this for x?
Each iteration of the second loop will do this, so the cost of the second loop is the number of iterations (which are easier to count this time) times the cost of the inner loop.
And each iteration of the first loop will do this, so the cost of the first loop is the number of iterations (which is also easy to count) times the cost of the second loop.
Overall, the runtime is the product of 3 numbers. Can you find them?
For the following program fragments you will (a) write down the total work done by each program statement (beside each statement), and (b) compute an expression for the total time complexity T(n) and derive the big-Oh complexity, showing all steps to the final answer. I am having a lot of trouble starting off.
for ( i = 0; i < n; i++) {
for ( j = 0; j < 1000; j++) {
a[ i ] = random(n) // random() takes constant time
}
}
int sortedArray [];
for ( i = 0; i < n; i++) {
for ( j = 0; j < i; j++) {
readArray(a) // reads in an array of n values
sortedArray = sort(a) // sort() takes n log n operations
}
}
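One way to start on the second fragment is to write the total as a double sum, using the per-call costs stated in the comments (readArray is n, sort is n log n). This is just a sketch of the bookkeeping:

```latex
T(n) = \sum_{i=0}^{n-1} \sum_{j=0}^{i-1} \left( n + n\log n \right)
     = (n + n\log n) \sum_{i=0}^{n-1} i
     = (n + n\log n)\,\frac{n(n-1)}{2}
     = O(n^3 \log n)
```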
I also had this problem. For the second fragment: on the 2nd line I have n, on the 3rd I have n^2 in total, the 4th costs n per call, and the 5th costs n log n per call. Multiplying the n^2 iterations by the per-iteration cost, my time complexity is O(n^2 * (n + n log n)) = O(n^3 log n).
What would the Big O notation be for the following nested loops?
for (int i = n; i > 0; i = i / 2){
for (int j = n; j > 0; j = j / 2){
for (int k = n; k > 0; k = k / 2){
count++;
}
}
}
My thoughts are:
each loop is O(log2(n)) so is it as simple as multiply
O(log2(n)) * O(log2(n)) * O(log2(n)) = O(log2(n)^3)
Yes, that is correct.
One way to figure out the big-O complexity of nested loops whose bounds do not immediately depend on one another is to work from the inside out. The innermost loop does O(log n) work. The second loop runs O(log n) times and does O(log n) work each time, so it does O(log^2 n) work. Finally, the outermost loop runs O(log n) times and does O(log^2 n) work on each iteration, so the total work done is O(log^3 n).
Hope this helps!
Yes, you are right.
An easy way to calculate it:
for (int i = 0; i < n; i++) {     // n times
    for (int j = 0; j < n; j++) { // n times
    }
}
This is a simple nested loop: the Big-O of each loop is O(n), and since they are nested the total is O(n * n), which is O(n^2). And in your case:
for (int i = n; i > 0; i = i / 2){ // log(n)
for (int j = n; j > 0; j = j / 2){ // log(n)
for (int k = n; k > 0; k = k / 2){ // log(n)
count++;
}
}
}
These are nested loops where each loop's Big-O is O(log(n)), so altogether the complexity is O(log(n)^3).
Indeed, your assumption is correct. You can show it methodically: each loop halves its counter from n down to 1, so each one runs floor(log2 n) + 1 times, and multiplying the three gives O(log^3 n).
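Written as a sum, with each of the three loops executing floor(log2 n) + 1 iterations:

```latex
T(n) = \sum_{i}\sum_{j}\sum_{k} 1
     = \left(\lfloor \log_2 n \rfloor + 1\right)^3
     = O(\log^3 n)
```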
In the code and results below, we can see that Traverse2 is much faster than Traverse1, even though they traverse the same number of elements.
1. Why does this difference happen?
2. Does putting the longer iteration inside the shorter iteration give better performance?
public class TraverseTest {
public static void main(String[] args)
{
int a[][] = new int[100][10];
System.out.println(System.currentTimeMillis());
//Traverse1
for(int i = 0; i < 100; i++)
{
for(int j = 0; j < 10; j++)
a[i][j] = 1;
}
System.out.println(System.currentTimeMillis());
//Traverse2
for(int i = 0; i < 10; i++)
{
for(int j = 0; j < 100; j++)
a[j][i] = 2;
}
System.out.println(System.currentTimeMillis());
}
}
Result:
1347116569345
1347116569360
1347116569360
If i change it to
System.out.println(System.nanoTime());
The result will be:
4888285195629
4888285846760
4888285914219
This suggests that putting the longer iteration inside gives better performance, which seems to conflict with the theory of cache hits.
I suspect that any strangeness in the results you are seeing in this micro-benchmark are due to flaws in the benchmark itself.
For example:
Your benchmark does not take account of "JVM warmup" effects, such as the fact that the JIT compiler does not compile to native code immediately. (This only happens after the code has executed for a bit and the JVM has measured some usage numbers to aid optimization.) The correct way to deal with this is to put the whole lot inside a loop that runs a few times, and discard any initial sets of times that look "odd" due to warmup effects.
The loops in your benchmark could in theory be optimized away. The JIT compiler might be able to deduce that they don't do any work that affects the program's output.
Finally, I'd just like to remind you that hand-optimizing like this is usually a bad idea ... unless you've got convincing evidence that it is worth your while hand-optimizing AND that this code is really where the application is spending significant time.
First, always run micro-benchmark tests several times in a loop. Then you'll see both times are 0, because the array sizes are too small; to get non-zero times, increase the array sizes by a factor of 100. My times are roughly 32 ms for Traverse1 and 250 ms for Traverse2.
The difference is because the processor uses cache memory, and access to sequential memory addresses is much faster.
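A minimal sketch of a fairer measurement, assuming we only care about relative times: warm the JIT up first, time with nanoTime, and use arrays big enough to measure. The class and method names here are my own:

```java
public class TraverseBench {
    static final int N = 1000;
    static int[][] a = new int[N][N];

    // Row-major: the right-most index varies fastest, so accesses are contiguous.
    static void traverse1() {
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                a[i][j] = 1;
    }

    // Column-major: each access jumps to a different inner array.
    static void traverse2() {
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                a[j][i] = 2;
    }

    static long time(Runnable r) {
        long t0 = System.nanoTime();
        r.run();
        return System.nanoTime() - t0;
    }

    public static void main(String[] args) {
        // Warm-up: give the JIT a chance to compile both loops before timing.
        for (int w = 0; w < 20; w++) { traverse1(); traverse2(); }
        System.out.println("traverse1: " + time(TraverseBench::traverse1) + " ns");
        System.out.println("traverse2: " + time(TraverseBench::traverse2) + " ns");
    }
}
```

On typical hardware the row-major version should come out faster once warm-up noise is removed.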
My output (with your original code, 100i/10j vs 10i/100j):
1347118083906
1347118083906
1347118083906
You are using a very coarse time resolution for a very quick calculation. I changed both the i and j limits to 1000:
int a[][] = new int[1000][1000];
System.out.println(System.currentTimeMillis());
//Traverse1
for(int i = 0; i < 1000; i++)
{
for(int j = 0; j < 1000; j++)
a[i][j] = 1;
}
System.out.println(System.currentTimeMillis());
//Traverse2
for(int i = 0; i < 1000; i++)
{
for(int j = 0; j < 1000; j++)
a[j][i] = 2;
}
System.out.println(System.currentTimeMillis());
output:
1347118210671
1347118210687 //difference is 16 ms
1347118210703 //difference is 16 ms again -_-
Two possibilities:
HotSpot rewrites the second loop into the first form, or optimizes it by exchanging i and j.
The time resolution is still not enough.
So I changed the output to System.nanoTime():
int a[][] = new int[1000][1000];
System.out.println(System.nanoTime());
//Traverse1
for(int i = 0; i < 1000; i++)
{
for(int j = 0; j < 1000; j++)
a[i][j] = 1;
}
System.out.println(System.nanoTime());
//Traverse2
for(int i = 0; i < 1000; i++)
{
for(int j = 0; j < 1000; j++)
a[j][i] = 2;
}
System.out.println(System.nanoTime());
Output:
16151040043078
16151047859993 //difference is 7800000 nanoseconds
16151061346623 //difference is 13500000 nanoseconds --->this is half speed
1. Why does this difference happen?
Note that even leaving aside the wrong time resolution, you are comparing unequal cases: the first traversal is contiguous-access while the second is not.
Moreover, the first nested loop effectively warms the JIT up for the second, which gives the second an advantage and makes the assumption "the second is much faster" even more wrong.
Don't forget that a 2D array is an "array of arrays" in Java, so the right-most index addresses a contiguous area. That is faster for the first version.
2. Does putting the longer iteration inside the shorter iteration give better performance?
for(int i = 0; i < 10; i++)
{
for(int j = 0; j < 100; j++)
a[j][i] = 2;
}
Increasing the first (left-most) index each iteration is slower, because the next access lands kilobytes away in memory, so you cannot reuse your cache line.
Absolutely not!
In my view, the size of the array also affects the result. For example:
public class TraverseTest {
public static void main(String[] args)
{
int a[][] = new int[10000][2];
System.out.println(System.currentTimeMillis());
//Traverse1
for(int i = 0; i < 10000; i++)
{
for(int j = 0; j < 2; j++)
a[i][j] = 1;
}
System.out.println(System.currentTimeMillis());
//Traverse2
for(int i = 0; i < 2; i++)
{
for(int j = 0; j < 10000; j++)
a[j][i] = 2;
}
System.out.println(System.currentTimeMillis());
}
}
Traverse1 needs 10000*3+1 = 30001 comparisons to decide whether to exit the iteration,
however Traverse2 only needs 2*10001+1 = 20003 comparisons.
Traverse1 needs 1.5 times the number of comparisons of Traverse2.
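The exact totals depend on how you count, but a quick sketch that tallies every loop-condition evaluation (including each final failing check) shows the same asymmetry; the helper below is my own illustration, not the original code:

```java
public class ComparisonCount {
    // Counts every loop-condition evaluation of an outer-by-inner nested loop,
    // including the final failing check of each loop.
    static long count(int outer, int inner) {
        long c = 0;
        for (int i = 0; true; i++) {
            c++;                       // outer condition check
            if (!(i < outer)) break;
            for (int j = 0; true; j++) {
                c++;                   // inner condition check
                if (!(j < inner)) break;
            }
        }
        return c;
    }

    public static void main(String[] args) {
        System.out.println("Traverse1 shape: " + count(10000, 2));
        System.out.println("Traverse2 shape: " + count(2, 10000));
    }
}
```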