I created this algorithm to find the best trade, returned as three numbers: the best day to buy, the best day to sell, and the resulting profit. It goes through an array of prices and finds those values. I need to explain the algorithm used and why its time complexity is O(n log n), but I have a lot of trouble determining that. I was hoping someone could explain O(n log n) and relate it to the method I have.
Here's my method:
public static Trade bestTrade(int[] a)
{
    int lowest = a[0];
    int lowestIndex = 0;
    int highest = a[a.length - 1];
    int highestIndex = a.length - 1;
    int profit = 0;
    // First pass: find the lowest price that occurs before the current high
    for (int i = 1; i < a.length; i++)
    {
        if (a[i] < lowest && i < highestIndex)
        {
            lowest = a[i];
            lowestIndex = i;
        }
    }
    // Second pass: find the highest price that occurs after the chosen low
    for (int i = a.length - 2; i >= 0; i--)
    {
        if (a[i] > highest && i > lowestIndex)
        {
            highest = a[i];
            highestIndex = i;
        }
    }
    // Third pass: re-check for an even lower price before the final high
    for (int i = 1; i < a.length; i++)
    {
        if (a[i] < lowest && i < highestIndex)
        {
            lowest = a[i];
            lowestIndex = i;
        }
    }
    if (highestIndex > lowestIndex)
    {
        profit = highest - lowest;
        return new Trade(lowestIndex, highestIndex, profit);
    }
    return new Trade(lowestIndex, highestIndex, profit);
}
This function is O(n), which is better than O(n log n).
In general you just look at the loops: since there are no nested loops and each loop goes through all elements of a once, the function is considered O(n).
The complexity is O(n), where n is the length of the array a.
You loop over a 3 times, so the running time is roughly 3n, which is of the order of n: O(n).
Try finding the answer to this by yourself; it will help a lot in the future. Also, this looks like O(N). I am not sure why you are convinced that it is O(N log N).
This link might be useful:
http://pages.cs.wisc.edu/~vernon/cs367/notes/3.COMPLEXITY.html
O(n)
The running time is directly proportional to a.length. Each time a for loop runs, it goes through every day of data once. If the number of operations grew faster than the input size (for example, with nested for loops), it could be O(n log n) or O(n^2). But in this case, it's pretty clearly just O(n).
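For comparison, a single pass is already enough. Here is a minimal sketch (my own, not the original poster's code; it assumes the same Trade(lowestIndex, highestIndex, profit) constructor used in the question) that tracks the lowest price seen so far and the best profit achievable by selling on the current day:

public static Trade bestTradeOnePass(int[] a)
{
    int lowest = a[0];                        // lowest price seen so far
    int lowestIndex = 0;
    int bestBuy = 0, bestSell = 0, bestProfit = 0;
    for (int i = 1; i < a.length; i++)
    {
        if (a[i] - lowest > bestProfit)       // selling today beats the best trade so far
        {
            bestProfit = a[i] - lowest;
            bestBuy = lowestIndex;
            bestSell = i;
        }
        if (a[i] < lowest)                    // new candidate buy day
        {
            lowest = a[i];
            lowestIndex = i;
        }
    }
    return new Trade(bestBuy, bestSell, bestProfit);
}

One loop over n elements does a constant amount of work per element, which is exactly what O(n) captures; whether you make one pass or three, the growth rate stays linear.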
Related
What would the big-O run time be? I'm mostly confused about the while loop run time. I know the run times for both for loops are O(n).
cin >> n >> min >> max;
for(int i = min; i < n; i++)
    for(int j = 1; j < max; j++)
        total = 1;
        while(total < n)
            total = total * 2;
The progression of total in the while loop is:
1, 2, 4, 8, ..., 2^P
You need about log2(n) steps -- i.e. the log of n in base 2 -- before 2^P reaches n. That loop is O(log n).
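A quick empirical check (my own sketch, written in Java even though the question is C++):

int n = 1_000_000;
int total = 1, steps = 0;
while (total < n)
{
    total *= 2;                // total doubles each iteration: 1, 2, 4, 8, ...
    steps++;
}
System.out.println(steps);     // prints 20, and log2(1,000,000) is about 19.9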
First of all, it looks like you forgot to put braces. In your code as it is, the while loop is not inside the nested for loops. As written, we have a pointless pair of nested for loops that just sets total to 1, followed by an independent while loop. The complexity of the first part is O((n - min) * max), and of the second O(log n). The total time complexity is the sum of these.
Probably what you really meant is this:
for (int i = min; i < n; i++) {
    for (int j = 1; j < max; j++) {
        total = 1;
        while (total < n) {
            total = total * 2;
        }
    }
}
Here, the whole while loop is inside the nested for loops. The time complexity is the product of what we calculated earlier, so O((n - min) * max * log(n)). If min and max are constants, this reduces to O(n log n).
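To make the multiplication concrete, here is a small counting sketch (my own, in Java, with made-up values for n, min and max):

int n = 1024, min = 0, max = 8;
long ops = 0;
for (int i = min; i < n; i++) {
    for (int j = 1; j < max; j++) {
        int total = 1;
        while (total < n) {
            total *= 2;
            ops++;             // one unit of innermost work
        }
    }
}
// (n - min) * (max - 1) * log2(n) = 1024 * 7 * 10
System.out.println(ops);       // prints 71680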
I'm trying to achieve time complexity O(n log n) for the algorithm I have been using. What it does is look at a list of numbers and find the highest number, the lowest number, and the profit obtained from the two. How do I modify my code to get the time complexity I am looking for?
Code:
while (i < a.length)
{
    for (int j = i; j < a.length; j++)
    {
        if (a[j] - a[i] > profit)
        {
            lowestIndex = i;
            highestIndex = j;
            profit = a[j] - a[i];
        }
    }
    i++;
}
Arrays.sort() is a sorting function that is documented to run in O(n log n).
You can check the Arrays API for more info. https://docs.oracle.com/javase/7/docs/api/java/util/Arrays.html#sort(int[])
For code, I'd implement it this way. I don't know if it's the most efficient implementation out there.
Arrays.sort(a); // sort in ascending order
profit = a[a.length-1] - a[0];
I am trying to find the most efficient answer (without using a HashMap) to the problem: Find the most frequent integer in an array.
I got answers like:
public int findPopular(int[] a) {
    if (a == null || a.length == 0)
        return 0;
    Arrays.sort(a);
    int previous = a[0];
    int popular = a[0];
    int count = 1;
    int maxCount = 1;
    for (int i = 1; i < a.length; i++) {
        if (a[i] == previous)
            count++;
        else {
            if (count > maxCount) {
                popular = a[i-1];
                maxCount = count;
            }
            previous = a[i];
            count = 1;
        }
    }
    return count > maxCount ? a[a.length-1] : popular;
}
and
public class Mode {
    public static int mode(final int[] n) {
        int maxKey = 0;
        int maxCounts = 0;
        int[] counts = new int[n.length];
        for (int i = 0; i < n.length; i++) {
            counts[n[i]]++;
            if (maxCounts < counts[n[i]]) {
                maxCounts = counts[n[i]];
                maxKey = n[i];
            }
        }
        return maxKey;
    }

    public static void main(String[] args) {
        int[] n = new int[] { 3, 7, 4, 1, 3, 8, 9, 3, 7, 1 };
        System.out.println(mode(n));
    }
}
The first code snippet claims to be O(n log n). However, the Arrays.sort() function alone is O(n log n) [3]. If you add the for loop, wouldn't the findPopular() function be O(n^2 * log n)? Which would simplify to O(n^2)?
The second code [2] snippet claims to be O(n). However, why do we not consider the initialization of the arrays into our calculation? The initialization of the array would take O(n) time [4], and the for loop would take O(n). So wouldn't the mode() function be O(n^2)?
If I am correct, that would mean I have yet to see an answer that is more efficient than O(n^2).
As always, thank you for the help!
Sources:
[1] Find the most popular element in int[] array
[2] Write a mode method in Java to find the most frequently occurring element in an array
[3] The Running Time For Arrays.Sort Method in Java
[4] Java: what's the big-O time of declaring an array of size n?
Edit: Well, I feel like an idiot. I'll leave this here in case someone made the same mistake I did.
When you perform two tasks one after the other, you add complexities:
Arrays.sort(a); // O(n log n)
for (int i = 0; i < n; i++) { // O(n)
    System.out.println(a[i]);
}
// O(n log n + n) = O(n (log n + 1)) = O(n log n)
Only when you repeat an algorithm do you multiply:
for (int i = 0; i < n; i++) { // O(n)
    Arrays.sort(a); // O(n log n), will be executed n times
}
// O((n log n) * n) = O(n² log n)
Code 1: after the sort you have only one for loop, so effectively your time complexity is O(n log n) + O(n), which is approximately O(n log n).
Code 2: the initialization also takes O(n), so effectively O(n) + O(n) (the loop) is still O(n).
Note: when calculating time complexities with big O, you only need to keep the biggest term(s).
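As a side note (a sketch of my own, not from the answers above): the counting approach in the second snippet only works because every value happens to be smaller than n.length. Sizing the counts array by the largest value m instead makes it safe for any non-negative ints and keeps the cost at O(n + m):

public static int mode(int[] n) {
    int max = 0;
    for (int v : n) max = Math.max(max, v);   // O(n): find the largest value m
    int[] counts = new int[max + 1];          // O(m): allocate and zero the counts
    int maxKey = n[0], maxCount = 0;
    for (int v : n) {                         // O(n): count and track the most frequent value
        if (++counts[v] > maxCount) {
            maxCount = counts[v];
            maxKey = v;
        }
    }
    return maxKey;
}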
I am trying to teach myself order statistics by solving the problem
find the kth largest element in an array in O(n) time.
My Java implementation is as follows below.
Qn: I am unsure of how to determine the complexity of my code. From what I understand, it does not exceed n. Is this correct? Or how should I get this?
I have adapted the algorithm from p. 215 of Introduction to Algorithms (MIT Press).
package intp;

public class IntP {

    public static void main(String[] args) {
        int[] qn = {10, 22, 33, 4, 5, 6, 1};
        int[] res = {0, 0};
        int q;
        int k = 3;
        q = k;
        while (k >= 1) {
            res = findMax(qn, k);   // find the current maximum and its position
            qn[res[1]] = 0;         // "remove" it so the next pass finds the next largest
            k = k - 1;
        }
        System.out.println("Largest element number " + q + " is: " + res[0]);
    }

    public static int[] findMax(int[] a, int k) {
        int pos = 0;
        int max = a[0];
        int[] ans = {0, 0};
        for (int i = 1; i < a.length; i += 2) {   // examine the remaining elements in pairs
            if (i + 1 == a.length) {              // odd tail: only one element left
                if (a[i] > max) {
                    max = a[i];
                    pos = i;
                }
                break;
            }
            if (a[i] > a[i + 1] && a[i] > max) {
                max = a[i];
                pos = i;
            }
            else if (a[i + 1] > max) {
                max = a[i + 1];
                pos = i + 1;
            }
        }
        ans[0] = max;
        ans[1] = pos;
        return ans;
    }
}
First, the time complexity of findMax:
Time(findMax) = 3 + 1/2 n * (4 + 2) + 1 + 3
Time(findMax) = 3 n + 7
Time(findMax) ~ n
Time(findMax) ~ O(n)
Then, the time complexity of main:
Time(main) = 5 + k * (3 + Time(findMax))
Time(main) = k * (3 n + 10) + 5
Time(main) = 3 k n + 10 k + 5
Time(main) ~ k * Time(findMax)
Time(main) ~ O(kn)
Note: I counted each elementary instruction as 1 operation.
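If you want to sanity-check the linear growth empirically, a tiny counter over the same pair-stepping loop will do (my own sketch, not part of the derivation above):

static long pairsExamined(int n) {
    long c = 0;
    for (int i = 1; i < n; i += 2) {   // same stepping as findMax
        c++;                           // one pair (or the odd tail element) examined
    }
    return c;
}
// pairsExamined(10) = 5, pairsExamined(100) = 50, pairsExamined(1000) = 500: linear in n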
Repeated selection of the maximum is a poor way to implement selection of the Kth element (except maybe for very small K). It takes worst-case time O(KN).
A better approach is to sort the array, for example using Quicksort, which runs in expected O(N log N) time; its worst-case time, however, is quadratic, O(N²).
A bit better is Quickselect, a stripped-down version of Quicksort. It gets closer to linear time O(N), but it keeps the worst case of O(N²).
The truly optimal approach (in the asymptotic sense) is the Median of Medians, with guaranteed O(N) behavior.
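For illustration, here is a minimal Quickselect sketch in Java (my own code, not the answerer's; the class and method names are made up) showing the expected-O(N) idea:

import java.util.concurrent.ThreadLocalRandom;

public class QuickSelectSketch {

    // Returns the element with 0-based rank k (k = 0 is the smallest) in expected O(n) time.
    static int select(int[] a, int k) {
        int lo = 0, hi = a.length - 1;
        while (lo < hi) {
            int p = partition(a, lo, hi);
            if (p == k) return a[p];
            if (p < k) lo = p + 1;
            else hi = p - 1;
        }
        return a[lo];
    }

    // Lomuto partition around a random pivot; returns the pivot's final index.
    static int partition(int[] a, int lo, int hi) {
        swap(a, ThreadLocalRandom.current().nextInt(lo, hi + 1), hi);
        int pivot = a[hi], i = lo;
        for (int j = lo; j < hi; j++) {
            if (a[j] < pivot) swap(a, i++, j);
        }
        swap(a, i, hi);
        return i;
    }

    static void swap(int[] a, int i, int j) { int t = a[i]; a[i] = a[j]; a[j] = t; }

    public static void main(String[] args) {
        int[] qn = {10, 22, 33, 4, 5, 6, 1};
        // 3rd largest element = 0-based rank (length - 3) counted from the smallest
        System.out.println(select(qn.clone(), qn.length - 3));   // prints 10
    }
}

The random pivot keeps the expected cost linear; the Median of Medians pivot rule is what upgrades the worst case to O(N) as well.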
My preferred implementation of the naïve approach:
for (int i = 0; i < n; i++)                           // Try every element a[i]
{
    int r = 0;
    for (int j = 0; j < n; j++)                       // Evaluate the rank by counting inferior elements
    {
        r += a[j] < a[i] || (a[j] == a[i] && j < i);  // Mind the equal elements
    }
    if (r == k)                                       // Desired rank
        return i;
}
I'm trying to learn Big O analysis, and I was wondering if someone could let me know if I'm doing it right with these two examples (and if I'm not, where did I go wrong?). I got the first one to be O(N^2) and the second to be O(N). My breakdown of how I got them is in the code below.
First example
public void sort(Integer[] v) {
    //O(1)
    if (v.length == 0)
        return;
    //O(N)*O(N)*O(1)
    for (int i = 0; i < v.length; ++i)
    {
        for (int j = i + 1; j < v.length; ++j)
        {
            if (v[j].compareTo(v[i]) < 0)
            {
                Integer temp = v[i];
                v[i] = v[j];
                v[j] = temp;
            }
        }
    }
}
Second example
public void sort(Integer[] v) {
    TreeSet<Integer> t = new TreeSet<>();
    //O(N)
    for (int i = 0; i < v.length; ++i)
    {
        t.add(v[i]);
    }
    int i = 0;
    //O(N)
    for (Integer value : t)
    {
        v[i++] = value;
    }
}
Thanks for the help!
You are right - the first is O(N^2) because you have one loop nested inside another, and the length of each depends on the input v. The inner comparison runs n(n-1)/2 times: just once if v has length 2, but already 28 times if v has length 8, so the work grows on the order of N^2.
The second is O(N) because you iterate over your input once and your loops don't contain any nested iteration or other expensive operations. Edit: the second is actually O(n log n), since each TreeSet insertion costs O(log n) - see the comments on the original post.
Your first example is O(N^2), you are right.
Your second example is not O(N), so you are not right.
It is O(N) * O(log N) + O(N):
O(N) for the first loop
O(log N) for each insert into the set
O(N) for the second loop
Finally you have O(N * log N + N); keep the dominant term, so the answer is O(N log N).
Edit:
By the way, Big O notation does not depend on the programming language.
I hope this helps.