package com.sort;

public class ArraySel {
    private Long[] a;
    private int nElems;

    public ArraySel(int max) {
        a = new Long[max];
        nElems = 0;
    }

    public void insert(long value) {
        a[nElems] = value;
        nElems++;
    }

    public void display() {
        for (int j = 0; j < nElems; j++) {
            System.out.print(a[j] + " ");
        }
        System.out.println();
    }

    public void insertionSort() {
        int in, out, flag = 0;
        long temp;
        for (out = 1; out < nElems; out++) {
            temp = a[out];
            in = out;
            while (in > 0 && a[in - 1] >= temp) {
                if (a[in].equals(a[in - 1])) { // .equals, not ==: these are boxed Long values
                    flag++;
                    in--;
                } else {
                    a[in] = a[in - 1];
                    in--;
                }
            }
            a[in] = temp;
        }
    }
}
This code takes an unsorted array and sorts it using insertion sort.
When duplicates are grouped together in the unsorted array, the repeated shifting normally raises the complexity to O(N^2); I tried to bring that case down to O(N) by making sure no item is moved more than once when equal items are adjacent.
But when duplicates are not grouped together, the complexity remains O(N^2).
Can we make the complexity O(N) in this case too?
Complexity isn't measured by the number of moves alone but by the total number of operations, which in this case includes comparisons as well.
Insertion sort has O(n^2) average complexity, and you can't make it faster than that. It runs in O(n) only in the best case, when the input array is already sorted (http://en.wikipedia.org/wiki/Insertion_sort).
Without further information about the underlying data, the best time complexity you can achieve with comparison-based sorting algorithms is O(n log n) (n being the number of elements).
Sorting algorithms like insertion sort, bubble sort, and selection sort all have a time complexity of O(n²) due to their double loops. They do, however, tend to perform better when given an already (partially) sorted list of elements: insertion sort, for example, runs in O(n) on a completely sorted list.
There is nothing you can do to change the inherent time complexity of those algorithms. The only thing you can do is short-cut the algorithm when it encounters pre-sorted regions in the incoming list.
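To make the best-case behaviour concrete, here is a small sketch (not from the original thread; class and method names are illustrative) that counts the comparisons a textbook insertion sort performs on sorted versus reverse-sorted input:

```java
// Counts comparisons made by a standard insertion sort. On a pre-sorted
// array it performs ~n-1 comparisons (O(n)); on a reverse-sorted array
// it performs ~n*(n-1)/2 (O(n^2)).
public class InsertionSortComparisons {
    static long sortAndCount(long[] a) {
        long comparisons = 0;
        for (int out = 1; out < a.length; out++) {
            long temp = a[out];
            int in = out;
            while (in > 0) {
                comparisons++;                 // each loop pass inspects a[in - 1] once
                if (a[in - 1] <= temp) break;  // slot found: short-cut out of the loop
                a[in] = a[in - 1];             // shift larger element right
                in--;
            }
            a[in] = temp;
        }
        return comparisons;
    }

    public static void main(String[] args) {
        int n = 1000;
        long[] sorted = new long[n], reversed = new long[n];
        for (int i = 0; i < n; i++) {
            sorted[i] = i;
            reversed[i] = n - i;
        }
        System.out.println(sortAndCount(sorted));   // n - 1 comparisons: best case
        System.out.println(sortAndCount(reversed)); // n*(n-1)/2 comparisons: worst case
    }
}
```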
Related
The objective is to create a function that accepts two arguments: an array of integers and an integer that is our target. The function should return the two indexes of the array elements that add up to the target. We cannot sum an element with itself, and we should assume that the given array always contains an answer.
I solved this code kata exercise using a for loop and a while loop. The time complexity of the for loop, where N is the total number of elements in the array, is linear, O(N), but for each element there is a while loop that also grows linearly.
Does this mean that the total time complexity of this code is O(N²)?
public int[] twoSum(int[] nums, int target) {
    int[] answer = new int[2];
    for (int i = 0; i <= nums.length - 1; i++) {
        int finder = 0;
        int index = 0;
        while (index <= nums.length - 1) {
            if (nums[i] != nums[index]) {
                finder = nums[i] + nums[index];
                if (finder == target) {
                    answer[0] = index;
                    answer[1] = i;
                }
            }
            index++;
        }
    }
    return answer;
}
How would you optimize this for time and space complexity?
Does this mean that the total time complexity of this code is O(N²)?
Yes, your reasoning is correct: your code does indeed have O(N²) time complexity.
How would you optimize this for time and space complexity?
You can use an auxiliary data structure, or sort the array and perform lookups on it.
One simple solution, which is O(n) average case, is to use a hash table and insert elements as you traverse the list. For each element x you traverse, look up target - x in your hash table.
I am leaving the implementation to you; I am sure you can do it and learn a lot in the process!
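For readers who want to check their own attempt afterwards, one possible sketch of the hash-table approach described above (class and variable names are illustrative, not from the original post):

```java
import java.util.HashMap;
import java.util.Map;

// One-pass hash-table solution to two-sum: O(n) average time, O(n) space.
public class TwoSumSketch {
    public static int[] twoSum(int[] nums, int target) {
        Map<Integer, Integer> seen = new HashMap<>(); // value -> index
        for (int i = 0; i < nums.length; i++) {
            Integer j = seen.get(target - nums[i]);   // look for the complement of nums[i]
            if (j != null) {
                return new int[] { j, i };
            }
            seen.put(nums[i], i);
        }
        return new int[] { -1, -1 }; // unreachable if an answer is guaranteed to exist
    }
}
```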
Is this implementation of insertion sort considered correct? It is a little different from some other examples I have seen.
public static int[] insertionSort(int[] numbers) {
    for (int i = 1; i < numbers.length; i++) {
        int index = i;
        for (int j = i - 1; j >= 0; j--) {
            if (numbers[index] < numbers[j]) {
                int temp = numbers[index];
                numbers[index] = numbers[j];
                numbers[j] = temp;
                index--;
            }
        }
    }
    return numbers;
}
Your logic is right: it should work, and it is indeed insertion sort. However, your algorithm makes unnecessary iterations with the current implementation.
The inner loop goes through all the values to the left of i even when the place to insert has already been found. As a result, your index value will point to the correct place, but the loop keeps running, and the if condition won't be satisfied again until the next value of i.
To fix this, you can simply add the following:
else {
    break;
}
This finishes the inner loop and moves on to the next value of the outer loop. However, it would be even better to replace the inner loop with a while loop to make the code more readable.
As for the complexity, your current code runs in O(n^2) even for a sorted array. With this enhancement it will still be O(n^2) on average, but the best case improves to O(n).
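As an illustration of that suggestion (a sketch, not the poster's code), the while-loop version could look like this:

```java
// Insertion sort with an inner while loop: shifts larger elements right
// and stops as soon as the insertion point is found, giving O(n) best case.
public class InsertionSortFixed {
    public static int[] insertionSort(int[] numbers) {
        for (int i = 1; i < numbers.length; i++) {
            int current = numbers[i];
            int j = i - 1;
            while (j >= 0 && numbers[j] > current) {
                numbers[j + 1] = numbers[j]; // shift right
                j--;
            }
            numbers[j + 1] = current;        // insert into the found slot
        }
        return numbers;
    }
}
```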
int firstDuplicate(int[] a) {
    Set<Integer> result = new HashSet<>();
    for (int i = 0; i < a.length; i++) {
        if (result.contains(a[i])) {
            return a[i];
        } else {
            result.add(a[i]);
        }
    }
    return -1;
}
The complexity of the above code is O(n²) because of the contains check inside the for loop.
How can I achieve this with a complexity of O(n) in Java?
Implementation of .contains for ArrayList:
public int indexOf(Object o) {
    if (o == null) {
        for (int i = 0; i < size; i++)
            if (elementData[i] == null)
                return i;
    } else {
        for (int i = 0; i < size; i++)
            if (o.equals(elementData[i]))
                return i;
    }
    return -1;
}
The implementation you're quoting corresponds to the contains(Object) method of the ArrayList class. The method you're actually using is HashSet.contains(Object).
The complexity of HashSet.contains(Object) is O(1) on average. To achieve this, it uses a hash of the stored value to find the element being searched for.
In the unlikely event that two objects share the same hash, both elements are stored under that hash in a list. This might be what is misleading you into believing that the cost of HashSet.contains(Object) is O(n). Although there is a list, elements are nearly evenly distributed, so the list size tends to 1, and the lookup is O(1) in practice.
As already explained in this answer, your algorithm already has O(n) time complexity, as HashSet's lookup methods (both contains and add) have O(1) time complexity.
Still, you can improve your method, as there is no reason to perform two lookups for each element:
static int firstDuplicate(int[] a) {
    Set<Integer> result = new HashSet<>();
    for (int i : a)
        if (!result.add(i))
            return i;
    return -1;
}
The contract of Set.add is to add a value (and return true) only if the value is not already contained in the set, and to return false if it is already contained.
Using this, you may end up with half the execution time (it's still the same time complexity) and simpler code.
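A quick illustration of the Set.add contract that this version relies on (a standalone demo, not part of the original answer):

```java
import java.util.HashSet;
import java.util.Set;

// Demonstrates the Set.add contract: returns true for a new element,
// false when the element is already present.
public class SetAddDemo {
    public static void main(String[] args) {
        Set<Integer> s = new HashSet<>();
        System.out.println(s.add(42)); // true  - 42 was not in the set
        System.out.println(s.add(42)); // false - 42 is already there
    }
}
```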
// Alternative O(n) approach without extra space: flip the sign at index a[i]
// to mark a value as seen. This assumes all elements are in the range [0, asize).
void printRepeating(int a[], int asize) {
    System.out.println("The repeating elements are : ");
    for (int i = 0; i < asize; i++) {
        if (a[Math.abs(a[i])] >= 0) {
            // first time we see this value: mark it by negating a[a[i]]
            a[Math.abs(a[i])] = -a[Math.abs(a[i])];
        } else {
            // already marked: a[i] is the first repeating element
            System.out.print(Math.abs(a[i]) + " ");
            break;
        }
    }
}
I wrote a function to find the position where the target value should be inserted in a given array. We assume the array has distinct values and is sorted in ascending order. My solution must run in O(log N) time.
public static int FindPosition(int[] A, int target) {
    int a = A.length / 2;
    System.out.println(a);
    int count = a;
    for (int i = a; i < A.length && A[i] < target; i++) {
        count++;
    }
    for (int i = a; i > A.length && A[i] > target; i--) {
        count++;
    }
    return count;
}
Does this code have complexity of O(log N)?
Short answer
No.
Longer answer
With increments of 1 in your indices, you cannot expect a better solution than O(n).
If your algorithm works at all (I don't think it does: the condition i > A.length in the second loop is never true), it would need O(n) steps.
Also, you say you assume that the array is sorted, but if you sort it anyway your code becomes O(n log n).
What's more, sorting an already sorted array is the worst case for some sorting algorithms: it can even be O(n^2).
You're looking for a binary search.
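For reference, a binary-search sketch that finds the insertion position in O(log N); the class and method names are illustrative:

```java
// Binary search for the position where target should be inserted into a
// sorted array of distinct values: O(log N) comparisons.
public class FindPositionBinary {
    public static int findPosition(int[] a, int target) {
        int lo = 0, hi = a.length;     // search the half-open range [lo, hi)
        while (lo < hi) {
            int mid = (lo + hi) >>> 1; // unsigned shift avoids overflow for large indices
            if (a[mid] < target) {
                lo = mid + 1;          // target belongs to the right of mid
            } else {
                hi = mid;              // target is at mid or to its left
            }
        }
        return lo;                     // first index whose value is >= target
    }
}
```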
No, it isn't O(n log n).
public static int FindPosition(int[] A, int target) {
    /*
     * Time taken to sort these elements depends on how the sort
     * function is implemented in Java. Arrays.sort has an average
     * time complexity of O(n log n) and a worst case of O(n^2).
     */
    Arrays.sort(A);
    int a = A.length / 2;
    int count = a;
    /*
     * Time taken to run this loop = O(length of A), or O(N),
     * where N = array length.
     */
    for (int i = a; i < A.length && A[i] < target; i++) {
        count++;
    }
    /*
     * Time taken to run this loop = O(length of A), or O(N),
     * where N = array length.
     */
    for (int i = a; i > A.length && A[i] > target; i--) {
        count++;
    }
    return count;
}
The overall time complexity is determined by the longest-running of the three parts above, since assignments and the like are done in constant time.
That makes your complexity O(n^2) in the worst case.
I'm trying to learn Big O analysis, and I was wondering if someone could let me know if I'm doing it right with these two examples (and if I'm not, where did I go wrong?). I got the first one to be O(N^2) and the second to be O(N). My breakdown of how I got them is in the code below.
First example
public void sort(Integer[] v) {
    //O(1)
    if (v.length == 0)
        return;
    //O(N)*O(N)*O(1)
    for (int i = 0; i < v.length; ++i) {
        for (int j = i + 1; j < v.length; ++j) {
            if (v[j].compareTo(v[i]) < 0) {
                Integer temp = v[i];
                v[i] = v[j];
                v[j] = temp;
            }
        }
    }
}
Second example
public void sort(Integer[] v) {
    TreeSet<Integer> t = new TreeSet<>();
    //O(N)
    for (int i = 0; i < v.length; ++i) {
        t.add(v[i]);
    }
    int i = 0;
    //O(N)
    for (Integer value : t) {
        v[i++] = value;
    }
}
Thanks for the help!
You are right about the first: it is O(N^2), because you have one loop nested inside another and the length of each depends on the input v. If v has length 2, the swap check runs 4 times; if v has length 8, it executes 64 times.
The second looks like O(N) because you iterate over your input once and the loop body appears to contain no other iteration or expensive operations, but TreeSet.add is O(log N), so the second is actually O(N log N) - see the comments on the original post.
Your first example is O(N^2); you are right.
Your second example is not O(N), so you are not right.
It is O(N) * O(log N) + O(N):
O(N) for the first loop
O(log N) per insert into the set
O(N) for the second loop
Finally you have O(N * log N + N); take the higher-order term, so the answer is O(N log N).
Edited
By the way, Big O notation does not depend on the programming language.
Hope it helps.