I wrote a function to find the position where the target value should be inserted in the given array. We assume the array has distinct values and is sorted in ascending order. My solution must be in O(log N) time complexity
public static int FindPosition(int[] A, int target) {
    int a = A.length / 2;
    System.out.println(a);
    int count = a;
    for (int i = a; i < A.length && A[i] < target; i++) {
        count++;
    }
    for (int i = a; i > A.length && A[i] > target; i--) {
        count++;
    }
    return count;
}
Does this code have complexity of O(log N)?
Short answer
No.
Longer answer
With increments of 1 in your indices, you cannot expect to have a better solution than O(n).
If your algorithm works at all (I don't think it does), it looks like it would need O(n) steps.
Also, you say you assume that the array is sorted, but you sort it anyway, so your code is O(n log n).
What's more, sorting an already-sorted array is the worst case for some sorting algorithms: it can even be O(n²).
You're looking for a binary search.
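For reference, here is a minimal binary-search sketch for the insertion position; the class and method names are mine, not from the question, and this is one of several equivalent ways to write it:

```java
public class InsertPosition {
    // O(log N) insertion-point search on a sorted array of distinct values:
    // returns the index where target should be inserted to keep order.
    public static int findInsertPosition(int[] a, int target) {
        int lo = 0, hi = a.length;          // invariant: answer is in [lo, hi]
        while (lo < hi) {
            int mid = lo + (hi - lo) / 2;   // written this way to avoid int overflow
            if (a[mid] < target) {
                lo = mid + 1;               // target belongs to the right of mid
            } else {
                hi = mid;                   // target belongs at mid or to its left
            }
        }
        return lo;                          // first index with a[lo] >= target
    }
}
```

Each iteration halves the interval [lo, hi], which is what gives the O(log N) bound the question asks for.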
No, it isn't O(log N).
public static int FindPosition(int[] A, int target) {
    int a = A.length / 2;   // (declared here; missing from the original snippet)
    int count = a;
    /*
      Time taken to sort these elements depends on how the sort
      function is implemented in Java. Arrays.sort has an average
      time complexity of O(n log n) and a worst case of O(n^2).
    */
    Arrays.sort(A);
    /* Time taken to run this loop = O(length of A) = O(N),
       where N = the array length */
    for (int i = a; i < A.length && A[i] < target; i++) {
        count++;
    }
    /* Time taken to run this loop = O(length of A) = O(N),
       where N = the array length */
    for (int i = a; i > A.length && A[i] > target; i--) {
        count++;
    }
    return count;
}
Now the overall time complexity is the largest of the three parts above, since assignments and the like are done in constant time.
That makes your complexity O(n^2) in the worst case.
Related
The objective is to create a function that accepts two arguments: an array of integers and an integer that is our target. The function should return the two indexes of the array elements that add up to the target. We cannot sum an element with itself, and we should assume that the given array always contains an answer.
I solved this code kata exercise using a for loop and a while loop. The time complexity of a for loop, when N is the total number of elements in the array, is linear, O(N), but for each element there is a while loop that also grows linearly.
Does this mean that the total time complexity of this code is O(N²)?
public int[] twoSum(int[] nums, int target) {
    int[] answer = new int[2];
    for (int i = 0; i <= nums.length - 1; i++) {
        int finder = 0;
        int index = 0;
        while (index <= nums.length - 1) {
            if (nums[i] != nums[index]) {
                finder = nums[i] + nums[index];
                if (finder == target) {
                    answer[0] = index;
                    answer[1] = i;
                }
            }
            index++;
        }
    }
    return answer;
}
How would you optimize this for time and space complexity?
Does this mean that the total time complexity of this code is O(N²)?
Yes, your reasoning is correct and your code is indeed O(N²) time complexity.
How would you optimize this for time and space complexity?
You can use an auxiliary data structure, or sort the array and perform lookups on it.
One simple solution, which is O(n) average case, is to use a hash table, inserting elements as you traverse the array. Then you look up target - x in the hash table, where x is the element currently being traversed.
I am leaving the implementation to you, I am sure you can do it and learn a lot in the process!
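For later reference, once you have your own version working, a minimal sketch of the hash-table approach described above (class and variable names are mine) looks roughly like this:

```java
import java.util.HashMap;
import java.util.Map;

public class TwoSum {
    // O(n) average case: one pass over nums, storing value -> index as we go.
    public static int[] twoSum(int[] nums, int target) {
        Map<Integer, Integer> seen = new HashMap<>();
        for (int i = 0; i < nums.length; i++) {
            int complement = target - nums[i];
            Integer j = seen.get(complement);   // have we already seen target - x?
            if (j != null) {
                return new int[]{j, i};
            }
            seen.put(nums[i], i);               // remember this element's index
        }
        return new int[0];   // unreachable if the input is guaranteed to contain an answer
    }
}
```

Because each element is inserted and looked up at most once, this also avoids the same-index pitfall: an element is never compared against itself.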
Is this O(n log n) time complexity? If not, how do I fix it?
Goal: using heap sort, find whether there is a sum pair, taking one number from each array, such that a + b = p (given).
heapSort is a basic heap sort function (implementation not shown):
boolean SumPairs(int[] Arr1, int[] Arr2, int p) {
    heapSort(Arr2, p);
    int target = 0;
    for (int i = 0; i < Arr1.length; i++) {
        target = p - Arr1[i];
        if (BinarySearch(Arr2, target) != -1)
            return true;
    }
    return false;
}
The average time complexity of a correctly implemented heap sort is O(NlogN); see https://en.wikipedia.org/wiki/Heapsort.
The average time complexity of a correctly implemented binary search is O(logN); see https://en.wikipedia.org/wiki/Binary_search_algorithm
So, assuming that the methods you are calling are correctly implemented, the average time complexity of your methods is O(NlogN) + (O(N) * O(logN)) per call. That reduces to O(NlogN).
Note that this is actually a method with two array parameters, so strictly speaking the complexity class is O(NlogN + MlogN), where M is the length of the first array and N is the length of the second one.
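To make that analysis concrete, here is a sketch of the same idea using standard-library equivalents of the question's helpers: Arrays.sort in place of heapSort (not a heap sort, but the same O(N log N) bound) and Arrays.binarySearch in place of BinarySearch. The class and method names are mine:

```java
import java.util.Arrays;

public class SumPairs {
    // Sort Arr2 once in O(N log N), then do M binary-search lookups
    // of O(log N) each: O(N log N + M log N) overall.
    static boolean sumPairs(int[] arr1, int[] arr2, int p) {
        Arrays.sort(arr2);                                // O(N log N)
        for (int x : arr1) {                              // M iterations
            if (Arrays.binarySearch(arr2, p - x) >= 0) {  // O(log N) per lookup
                return true;                              // found a + b = p
            }
        }
        return false;
    }
}
```

Arrays.binarySearch returns a non-negative index exactly when the key is present, which plays the role of the `!= -1` check in the original.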
package com.sort;

public class ArraySel {
    private Long[] a;
    private int nElems;

    public ArraySel(int max)
    {
        a = new Long[max];
        nElems = 0;
    }

    public void insert(long max)
    {
        a[nElems] = max;
        nElems++;
    }

    public void display()
    {
        for (int j = 0; j < nElems; j++)
        {
            System.out.print(a[j] + " ");
        }
        System.out.println();
    }

    public void insertionSort()
    {
        int in, out, flag = 0;
        long temp;
        for (out = 1; out < nElems; out++)
        {
            temp = a[out];
            in = out;
            while (in > 0 && a[in - 1] >= temp)
            {
                // Note: comparing Long objects with == checks references, not
                // values, so equals() is needed for values outside the cache range.
                if (a[in].equals(a[in - 1]))
                {
                    flag++;
                    in--;
                }
                else
                {
                    a[in] = a[in - 1];
                    in--;
                }
            }
            a[in] = temp;
        }
    }
}
This code takes an unsorted array and sorts it using insertion sort.
When duplicates are grouped together in the unsorted array, the repeated shifting raises the complexity to O(N^2); I tried to bring that down to O(N) by making sure no item is moved more than once when duplicates are adjacent.
But when duplicates are not grouped together, the complexity remains O(N^2).
Can we make the complexity O(N) in that case too?
Complexity isn't given by the number of moves alone but by the total number of operations, which in this case includes comparisons as well.
Insertion sort is O(n^2) in the average case; you can't make it faster than that. It runs in O(n) only in the best case, when the input array is already sorted (http://en.wikipedia.org/wiki/Insertion_sort).
Without further information about the underlying data, the best time complexity you can achieve with sorting algorithms is O(n log n) (n being the number of elements).
Sorting algorithms like insertion sort, bubble sort, selection sort, etc. all have a time complexity of O(n²) due to their double loops. Some of them actually behave better when given an already sorted list of elements: insertion sort, for example, takes O(n) on a completely sorted list.
There is nothing you can do to change the inherent time complexity of those algorithms. The only thing you can do is short-cutting the algorithm when finding pre-sorted regions in the incoming list.
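To see that best-case/worst-case gap concretely, here is a small sketch (class, method, and counter names are mine) that instruments insertion sort to count element comparisons:

```java
public class InsertionCount {
    // Insertion sort instrumented to count element comparisons.
    // Sorted input of n elements costs n-1 comparisons (O(n));
    // reverse-sorted input costs n(n-1)/2 (O(n^2)).
    static long sortAndCount(long[] a) {
        long comparisons = 0;
        for (int out = 1; out < a.length; out++) {
            long temp = a[out];
            int in = out;
            while (in > 0) {
                comparisons++;
                if (a[in - 1] <= temp) break;   // temp is already in place
                a[in] = a[in - 1];              // shift larger element right
                in--;
            }
            a[in] = temp;
        }
        return comparisons;
    }
}
```

On a sorted array each outer iteration does exactly one comparison and breaks immediately; on a reverse-sorted array every element is compared against the whole sorted prefix, which is where the quadratic term comes from.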
In this implementation of the Quick Find algorithm, the constructor takes N steps and so does union().
The instructor said that union is too expensive, as it takes N^2 steps to process a sequence of N union commands on N objects. How can union be quadratic when it accesses array elements one at a time?
public class QuickFind
{
    private int[] id;

    public QuickFind(int N) {
        id = new int[N];
        for (int i = 0; i < N; i++) {
            id[i] = i;
        }
    }

    public boolean connected(int p, int q) {
        return id[p] == id[q];
    }

    public void union(int p, int q) {
        int pid = id[p];
        int qid = id[q];
        for (int i = 0; i < id.length; i++)
            if (id[i] == pid)
                id[i] = qid;
    }
}
Each invocation of union method requires you iterate over the id array, which takes O(n) time. If you invoke union method n times, then the time required is n*O(n) = O(n^2).
You can make union cheaper by accepting a more expensive connected method (around O(log n) with the weighted variants); that is the trade-off the Quick Union family of algorithms makes. I believe your textbook explains this in detail.
The union operation for Quick Find is quadratic, O(n^2), for n operations, because each operation takes O(n) time, as is easy to see from the for loop inside union(int p, int q):
for (int i = 0; i < id.length; i++)
Notice that the algorithm is called Quick Find because each find (connected(int p, int q)) operation takes constant time. With this algorithm, however, you end up paying in the union operation, as mentioned in your question.
There is another algorithm, Quick Union, which improves the time of the union operation. But then find no longer remains O(1) (though it is still better than linear time).
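A minimal Quick Union sketch (without the weighting or path-compression refinements, and with names of my choosing) makes the trade-off concrete:

```java
public class QuickUnion {
    private final int[] parent;

    public QuickUnion(int n) {
        parent = new int[n];
        for (int i = 0; i < n; i++) parent[i] = i;  // each node starts as its own root
    }

    // Climb parent links to the root: O(tree height), not O(1).
    private int root(int i) {
        while (parent[i] != i) i = parent[i];
        return i;
    }

    public boolean connected(int p, int q) {
        return root(p) == root(q);
    }

    // Relink one root under the other: no full-array scan, unlike Quick Find.
    public void union(int p, int q) {
        parent[root(p)] = root(q);
    }
}
```

Note that unbalanced unions can still build tall trees, so the worst case per find is linear; the weighted variant (always attaching the smaller tree under the larger) is what brings the height down to O(log n).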
I'm trying to learn Big O analysis, and I was wondering if someone could let me know if I'm doing it right with these two examples (and if I'm not, where did I go wrong?). I got the first one to be O(N^2) and the second to be O(N). My breakdown of how I got them is in the code below.
First example
public void sort(Integer[] v) {
    // O(1)
    if (v.length == 0)
        return;
    // O(N) * O(N) * O(1)
    for (int i = 0; i < v.length; ++i)
    {
        for (int j = i + 1; j < v.length; ++j)
        {
            if (v[j].compareTo(v[i]) < 0)
            {
                Integer temp = v[i];
                v[i] = v[j];
                v[j] = temp;   // (fixed typo: was v[j] = v[i], which loses a value)
            }
        }
    }
}
Second example
public void sort(Integer[] v) {
    TreeSet<Integer> t = new TreeSet<>();
    // O(N)
    for (int i = 0; i < v.length; ++i)
    {
        t.add(v[i]);
    }
    int i = 0;
    // O(N)
    for (Integer value : t)
    {
        v[i++] = value;
    }
}
Thanks for the help!
You are right - the first is O(N^2) because you have one loop nested inside another, and the length of each depends on the input v. If v has length 2, you'll run those swaps 4 times. If v has length 8, they will execute 64 times.
The second is O(N) because you iterate over your input once, and your loop doesn't contain any iterative or expensive operations. Edit: the second is actually O(n log(n)); see the comments on the original post.
Your first example is O(N^2), you are right.
Your second example is not O(N), so you are not right.
It is O(N) * O(log N) + O(N)
O(N) first loop
O(log N) insert into set
O(N) second loop
Finally, you have O(N * log N + N); keeping the dominant term, the answer is O(N * log N).
Edited
By the way, Big O notation does not depend on the programming language.
Hope it helps.