I was working on this sorting algorithm in Java and was wondering whether anybody has seen anything like it before. Specifically, this algorithm is meant to sort very large lists of numbers drawn from a very small range. If you have seen it before, or have any suggestions for enhancing it, please let me know. Here is the code:
public static int[] sort(int[] nums)
{
    int lowest = Integer.MAX_VALUE;
    for (int n : nums)
    {
        if (n < lowest)
            lowest = n;
    }
    int index = 0;
    int down = 0;
    while (index < nums.length)
    {
        for (int i = index; i < nums.length; i++)
        {
            if (nums[i] == lowest)
            {
                int temp = nums[i] + down;
                nums[i] = nums[index];
                nums[index] = temp;
                index++;
            }
            else
                nums[i]--;
        }
        down++;
    }
    return nums;
}
If I'm not mistaken, this is essentially a selection sort in disguise (not a bubble sort: there is no adjacent-element swapping going on). It's simple to implement but has poor performance: O(n^2). Notice the two nested loops: as the size of the array increases, the runtime grows quadratically, not linearly.
It works because the smallest remaining value is moved to the front of the array, one position per pass. You can read more about selection sort on Wikipedia.
So the algorithm seems to work, but it does a lot of unnecessary work in the process.
Essentially you force yourself to subtract 1 from a number x times before adding x back and swapping it into place, where x is the difference between that number and the lowest number in the array. Take [99, 1] for example. Your algorithm updates the array to [98, 1] on the first iteration of the inner loop, then makes the swap to [1, 98] on the next. After that it takes 97 more passes to bring your down variable up to 98 and the array to the [1, 1] state, at which point it adds down (now 98) back and swaps the element with itself. It's an interesting technique for sure, but it's not very efficient.
The best algorithm for any given job really depends on what you know about your data. Look into other sorting algorithms to get a feel for what they do and why. Make sure you walk through any algorithm you write and try to eliminate unnecessary steps.
To enhance this algorithm, I would first get rid of the pass that finds the lowest value and remove the repeated subtraction and addition steps. If you know that your numbers are all integers in a small range, look into counting/bucket sort; otherwise try merge sort or quicksort.
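Since the question is about very large lists over a very small range, a counting sort is the natural fit here. Below is a minimal sketch (the class name and range handling are my own; it assumes the range max - min is small enough that a count array of that size can be allocated):

```java
import java.util.Arrays;

public class CountingSortSketch {
    // Counting sort: O(n + k) time, where k = max - min is the range of values.
    // A sketch only; it assumes the value range is small enough for a count array.
    public static int[] sort(int[] nums) {
        if (nums.length == 0) return nums;
        int min = nums[0], max = nums[0];
        for (int n : nums) {                  // one pass to find the range
            if (n < min) min = n;
            if (n > max) max = n;
        }
        int[] counts = new int[max - min + 1];
        for (int n : nums)                    // count how many times each value occurs
            counts[n - min]++;
        int index = 0;
        for (int v = 0; v < counts.length; v++)   // write values back in sorted order
            for (int c = counts[v]; c > 0; c--)
                nums[index++] = v + min;
        return nums;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(sort(new int[]{99, 1, 3, 1, 99, 2})));
        // prints [1, 1, 2, 3, 99, 99]
    }
}
```

Note that this trades memory for speed: a huge range would make the count array impractical, which is why the approach only pays off under the "small range" assumption from the question.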
The objective is to create a function that accepts two arguments: an array of integers and an integer that is our target. The function should return the two indexes of the array elements that add up to the target. We cannot sum an element with itself, and we can assume that the given array always contains an answer.
I solved this code kata exercise using a for loop and a while loop. The time complexity of the for loop is linear, O(N), where N is the number of elements in the array, but for each element there is a while loop that also grows linearly.
Does this mean that the total time complexity of this code is O(N²)?
public int[] twoSum(int[] nums, int target) {
    int[] answer = new int[2];
    for (int i = 0; i <= nums.length - 1; i++) {
        int finder = 0;
        int index = 0;
        while (index <= nums.length - 1) {
            if (i != index) {   // compare positions, not values, so duplicate values still work
                finder = nums[i] + nums[index];
                if (finder == target) {
                    answer[0] = index;
                    answer[1] = i;
                }
            }
            index++;
        }
    }
    return answer;
}
How would you optimize this for time and space complexity?
Does this mean that the total time complexity of this code is O(N²)?
Yes, your reasoning is correct, and your code is indeed O(N²) in time complexity.
How would you optimize this for time and space complexity?
You can use an auxiliary data structure, or sort the array and perform lookups on it.
One simple solution, which is O(n) in the average case, is to use a hash table and insert elements while you traverse the list. For each element x you encounter, look up target - x in your hash table.
I am leaving the implementation to you; I am sure you can do it and will learn a lot in the process!
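For readers who want to check their attempt afterwards, here is one minimal sketch of the hash-table idea (the class and method names are mine):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

public class TwoSumSketch {
    // O(n) average case: one pass, storing value -> index in a hash map.
    // Before inserting x we look up target - x among the elements already seen,
    // so the same array element is never used twice.
    public static int[] twoSum(int[] nums, int target) {
        Map<Integer, Integer> seen = new HashMap<>();
        for (int i = 0; i < nums.length; i++) {
            Integer j = seen.get(target - nums[i]);  // index of a matching earlier element
            if (j != null)
                return new int[]{j, i};
            seen.put(nums[i], i);
        }
        return new int[0];  // the problem statement guarantees an answer exists
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(twoSum(new int[]{2, 7, 11, 15}, 9)));
        // prints [0, 1]
    }
}
```

Because lookups happen against elements already seen, duplicates such as [3, 3] with target 6 are handled correctly without any extra bookkeeping.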
Our teacher didn't teach us how to analyze the running time of an algorithm before she asked us to write a report on Shell sort.
I just want to know if there is a simple way to find the average/best/worst-case performance of an algorithm like Shell sort.
// for reference
class ShellSort {
    void shellSort(int array[], int n) {
        // n = array.length
        for (int gap = n / 2; gap > 0; gap /= 2) {
            for (int i = gap; i < n; i += 1) {
                int temp = array[i];
                int j;
                for (j = i; j >= gap && array[j - gap] > temp; j -= gap) {
                    array[j] = array[j - gap];
                }
                array[j] = temp;
            }
        }
    }
}
Welcome to Stack Overflow! Usually, the time complexity of a sorting algorithm is measured by the number of key comparisons performed. Begin by considering which input requires the fewest key comparisons to sort completely (the best case), then find the input that requires the most (the worst case). Often the best case is an already-sorted list and the worst case is a list sorted in reverse order, though this does not hold for some divide-and-conquer algorithms.
As for the average case: once you have derived the best and worst cases, you know the average is bounded between the two. If both have the same complexity class (often the 'Big Oh' class), then the average is the same. Otherwise you can derive it mathematically through a probabilistic analysis.
Every loop over the array adds a factor of n to the time complexity, and nested loops multiply their factors: two nested loops give a complexity of n^2 and three nested loops give n^3.
Repeatedly partitioning an array in half typically contributes a factor of log(n).
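One practical way to get a feel for the best and worst cases is to instrument the algorithm and count the key comparisons yourself. A sketch (the instrumentation and class name are mine; the sort is the gap-halving version above, with the inner for loop unrolled into a while so each key comparison can be counted):

```java
public class ShellSortComparisons {
    static long comparisons;  // number of key comparisons performed by the last sort

    // The same gap-halving shell sort as above, instrumented to count comparisons.
    static void shellSort(int[] array) {
        int n = array.length;
        for (int gap = n / 2; gap > 0; gap /= 2) {
            for (int i = gap; i < n; i++) {
                int temp = array[i];
                int j = i;
                while (j >= gap) {
                    comparisons++;                  // one key comparison
                    if (array[j - gap] > temp) {
                        array[j] = array[j - gap];
                        j -= gap;
                    } else {
                        break;
                    }
                }
                array[j] = temp;
            }
        }
    }

    static long count(int[] array) {
        comparisons = 0;
        shellSort(array);
        return comparisons;
    }

    public static void main(String[] args) {
        int n = 1024;
        int[] sorted = new int[n], reversed = new int[n];
        for (int i = 0; i < n; i++) {
            sorted[i] = i;
            reversed[i] = n - i;
        }
        System.out.println("sorted input:   " + count(sorted) + " comparisons");
        System.out.println("reversed input: " + count(reversed) + " comparisons");
    }
}
```

Running this for a few sizes and plotting the counts against n gives an empirical feel for the growth rate before you attempt the formal derivation.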
As I am pretty new to Java, I'm struggling to optimize the time complexity of my programs. I have written a simple program which takes an array and counts how many pairs of numbers there are in which the element with the lower index is greater than the element with the higher index.
For example, for the array [9, 8, 12, 14, 10, 54, 41] there are 4 such pairs: (9, 8), (12, 10), (14, 10) and (54, 41).
I tried to optimize the code beyond simply comparing every element with every other one, aiming for a time complexity of O(n log n), but I have not yet found a more efficient approach. I hope my question is clear.
The code (I have omitted the heapsort code, as it's not related to my question):
import java.util.Scanner;

class Main4 {
    static int n;
    static int[] A;
    // "A" is the input vector.
    // The number of elements of A can be accessed using A.length

    static int solve(int[] A) {
        int counter = 0;
        int[] B = A.clone();
        heapSort(B);
        for (int i = 0; i < A.length; i++) {
            for (int j = 0; j < A.length; j++) {
                while (B[j] == Integer.MIN_VALUE && j + 1 < n) {
                    j = j + 1;
                }
                if (A[i] != B[j]) {
                    counter++;
                } else {
                    B[j] = Integer.MIN_VALUE;
                    break;
                }
            }
        }
        return counter;
    }

    public static void main(String[] args) {
        Scanner scanner = new Scanner(System.in);
        int ntestcases = scanner.nextInt();
        for (int testno = 0; testno < ntestcases; testno++) {
            n = scanner.nextInt();
            A = new int[n];
            for (int i = 0; i < n; i++)
                A[i] = scanner.nextInt();
            System.out.println(solve(A));
        }
        scanner.close();
    }
}
Divide and conquer 1 (merge-sort like)
Split the whole list W into two parts L and R of nearly equal length. The count for W is the sum of
the counts for L and R, and
the number of pairs (l, r) with l > r, where l and r belong to L and R respectively.
The first bullet is the recursion. The second bullet does not depend on the internal ordering of L and R, so you can sort both and determine the result in a single pass through the two lists (count all smaller r in sorted R for the first element of sorted L; the count for the second element can then be computed incrementally, and so on).
The time complexity is given by
T(n) = 2 T(n/2) + O(n log n)
which solves to O(n log² n). Either way, it's much smaller than O(n²).
You can improve this to a true O(n log n) by folding the sorting into the recursion, merge-sort style: the sorted L you need can be obtained by merging sorted LL and sorted LR (the two halves of L from the recursive step), so each level of the recursion only costs O(n).
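The merge-sort refinement described above might be sketched like this (class and method names are mine; each merge step counts the cross pairs between the two already-sorted halves):

```java
import java.util.Arrays;

public class PairCount {
    // Counts pairs (i, j) with i < j and a[i] > a[j] in O(n log n),
    // merging sorted halves and counting cross pairs as we go.
    static long countPairs(int[] a) {
        return sortAndCount(Arrays.copyOf(a, a.length), 0, a.length);
    }

    // Sorts a[from, to) in place and returns the number of such pairs inside it.
    static long sortAndCount(int[] a, int from, int to) {
        if (to - from <= 1) return 0;
        int mid = (from + to) / 2;
        long count = sortAndCount(a, from, mid) + sortAndCount(a, mid, to);
        int[] merged = new int[to - from];
        int l = from, r = mid, k = 0;
        while (l < mid || r < to) {
            if (r == to || (l < mid && a[l] <= a[r])) {
                merged[k++] = a[l++];
            } else {
                count += mid - l;   // a[r] is smaller than every remaining left element
                merged[k++] = a[r++];
            }
        }
        System.arraycopy(merged, 0, a, from, merged.length);
        return count;
    }

    public static void main(String[] args) {
        // The example array from the question has exactly 4 such pairs.
        System.out.println(countPairs(new int[]{9, 8, 12, 14, 10, 54, 41}));
        // prints 4
    }
}
```

The key observation is in the else branch: when an element from the right half is emitted before the remaining left-half elements, it is smaller than all of them, and all of them sit at lower original indexes.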
Divide and conquer 2 (quick-sort like)
Select an element m such that the numbers of bigger and smaller elements are about the same (the median would be perfect, but a randomly chosen element works too).
Do a single pass through the array and count how many elements are smaller than m. Do a second pass and count the pairs (x, y) with x placed to the left of y, x >= m and m > y.
Split the list into two parts, the elements e >= m and the remaining ones, and recurse on each part.
You are looking for all possible pairs.
You can check every element against every later element to find all the matches; that's the O(n²) solution. As suggested by Arkadiy in the comments, this solution is fine for the worst case of the input.
I came up with the idea that you might want to store the elements in sorted order AND keep the original unsorted array.
You keep the original array and build a binary search tree. You can find an element in O(lg n) time and remove it in O(lg n), which is great. You can also determine the number of values smaller than a given element at tiny additional cost.
To be able to count the elements smaller than a given value, each node has to store the size of its subtree (the number of nodes below it, plus 1 for itself). When you remove a node, you decrement the sizes in each node on your way down; when you insert, you increment them. To count how many elements are smaller than the node you are searching for, start a counter at the size of the root's subtree, and then:
do nothing when you descend to a right child,
subtract the size of the right child plus one (for the node you are leaving) when you descend to a left child.
Once you stop (you found the node), subtract the size of its right child (0 if there is none) and decrement the counter once more; it now holds the number of elements smaller than the found value.
You iterate over the original array from left to right. At each step you find the element in your tree and calculate how many smaller elements are currently in it. Every element still in the tree has a greater index than the current one, so that count is exactly the number of pairs the current element forms! You then remove the element from the tree and repeat, n times in total. Each lookup and removal is O(lg n), so the total time is O(n lg n).
Chapter 12 of Introduction to Algorithms (3rd edition) explains in depth how to implement a BST. You may also find many resources on the Internet that explain it with pictures.
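As an alternative to hand-rolling a BST with subtree sizes, a Fenwick (binary indexed) tree over the ranks of the values gives the same O(n lg n) bound. Note this is a different data structure than the one described above, offered only as a sketch (names are mine): scan right to left and, for each element, ask how many strictly smaller values have already been seen to its right.

```java
import java.util.Arrays;

public class PairCountBIT {
    // Counts pairs (i, j) with i < j and a[i] > a[j] in O(n lg n) using a
    // Fenwick tree indexed by value rank.
    static long countPairs(int[] a) {
        int n = a.length;
        int[] sorted = Arrays.copyOf(a, n);
        Arrays.sort(sorted);
        int[] tree = new int[n + 1];             // 1-based Fenwick tree
        long count = 0;
        for (int i = n - 1; i >= 0; i--) {
            int rank = lowerBound(sorted, a[i]); // # of values strictly smaller than a[i]
            count += query(tree, rank);          // smaller values already seen to the right
            update(tree, rank + 1);              // record a[i] at its 1-based rank
        }
        return count;
    }

    // Number of elements in sorted[] strictly smaller than key.
    static int lowerBound(int[] sorted, int key) {
        int lo = 0, hi = sorted.length;
        while (lo < hi) {
            int mid = (lo + hi) >>> 1;
            if (sorted[mid] < key) lo = mid + 1; else hi = mid;
        }
        return lo;
    }

    static void update(int[] tree, int i) {
        for (; i < tree.length; i += i & -i) tree[i]++;
    }

    static long query(int[] tree, int i) {
        long sum = 0;
        for (; i > 0; i -= i & -i) sum += tree[i];
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(countPairs(new int[]{9, 8, 12, 14, 10, 54, 41}));
        // prints 4
    }
}
```

Using the lower bound as the rank makes duplicates behave correctly: equal values do not count as a pair.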
I want to find the number of ways to divide an array into 3 contiguous parts such that the sums of the three parts are equal.
-10^9 <= A[i] <= 10^9
My approach:
Taking the input and checking the base case:
for (int i = 0; i < n; i++) {
    a[i] = in.nextLong();
    sum += a[i];
}
if (sum % 3 != 0) System.out.println("0");
If that base case doesn't apply, I form the prefix and suffix sums:
for (int i = 1; i <= n - 2; i++) {
    xx += a[i - 1];
    if (xx == sum / 3) {
        dp[i] = 1;
    }
}
The suffix sum, updating the binary indexed tree:
for (int i = n; i >= 3; i--) {
    xx += a[i - 1];
    if (xx == sum / 3) {
        update(i, 1, suffix);
    }
}
And now simply looping over the array to find the total number of ways:
int ans = 0;
for (int i = 1; i <= n - 2; i++) {
    if (dp[i] == 1) {
        ans += (query(n, suffix) - query(i + 1, suffix));
        // checking for sum/3 in the array at indexes > i + 1
    }
}
I am getting the wrong answer with the above approach and I don't know where I have made a mistake. Please help me correct it.
The update and query functions:
public static void update(int i, int value, int[] arr) {
    while (i < arr.length) {
        arr[i] += value;
        i += i & -i;
    }
}

public static int query(int i, int[] arr) {
    int ans = 0;
    while (i > 0) {
        ans += arr[i];
        i -= i & -i;
    }
    return ans;
}
As far as your approach is concerned, it's correct. But there are some points that might still give you WA:
It's very likely that sum overflows an int, since each element can have a magnitude of 10^9, so use long.
Make sure that the suffix and dp arrays are initialized to 0.
Having said that, using a BIT here is overkill, because the problem can be done in O(n) compared to your O(n log n) solution (not that it matters much if you are submitting to an online judge).
For the O(n) approach, just take your suffix[] array. As you have done, mark suffix[i] = 1 if the sum from i to n is sum/3; traversing the array backwards, this takes O(n).
Then traverse backwards again doing suffix[i] += suffix[i + 1] (apart from the base case i = n). Now suffix[i] stores the number of indexes i <= j <= n such that the sum from index j to n is sum/3, which is what you are trying to achieve with the BIT.
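The O(n) idea above might be sketched like this (0-based indexing, and the class and variable names are mine; the parts are required to be non-empty):

```java
public class ThreeWaySplit {
    // O(n) count of ways to split a[] into three contiguous non-empty parts
    // with equal sums, using the suffix-count idea described above.
    static long countWays(long[] a) {
        int n = a.length;
        long total = 0;
        for (long v : a) total += v;
        if (n < 3 || total % 3 != 0) return 0;
        long third = total / 3;

        // suffixWays[i] = number of j >= i such that a[j] + ... + a[n-1] == third
        long[] suffixWays = new long[n + 1];
        long suffix = 0;
        for (int i = n - 1; i >= 0; i--) {
            suffix += a[i];
            suffixWays[i] = suffixWays[i + 1] + (suffix == third ? 1 : 0);
        }

        long ways = 0, prefix = 0;
        for (int i = 0; i <= n - 3; i++) {     // first part is a[0..i]
            prefix += a[i];
            if (prefix == third)
                ways += suffixWays[i + 2];     // third part must start at j >= i + 2
        }
        return ways;
    }

    public static void main(String[] args) {
        System.out.println(countWays(new long[]{1, 2, 3, 0, 3}));
        // prints 2: [1,2][3][0,3] and [1,2][3,0][3]
    }
}
```

This makes a handy reference implementation to test a BIT-based solution against.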
So what I suggest is: write either a brute force or this simple O(n) version and check your code against it, because your overall approach is correct, and line-by-line debugging is not well suited to Stack Overflow.
First, we calculate an array dp, where dp[i] = the sum of a[0..i]; this can be done in O(n):
long[] dp = new long[n];
for (int i = 0; i < n; i++) {
    dp[i] = a[i];
    if (i > 0)
        dp[i] += dp[i - 1];
}
Second, let's say the total sum of the array is x. We need to find the positions i where dp[i] == x/3.
For each position i with dp[i] == 2*x/3, we add to the final result the number of indexes j < i with dp[j] == x/3.
int count = 0;
int result = 0;
for (int i = 0; i < n - 1; i++) {
    if (dp[i] == x / 3)
        count++;
    else if (dp[i] == x * 2 / 3)
        result += count;
}
The answer is in result.
What is wrong with your approach is this part:
if (dp[i] == 1) {
    ans += (query(n, suffix) - query(i + 1, suffix));
    // checking for sum/3 in the array at indexes > i + 1
}
This is wrong; it should be
(query(n, suffix) - query(i, suffix));
because we only need to exclude the indexes from 1 to i, not 1 to i + 1.
Not only that, this part:
for (int i = 1; i <= n - 2; i++) {
    //....
}
should be i <= n - 1.
Similarly, this part, for (int i = n; i >= 3; i--), should be i >= 1.
And the first part:
for (int i = 0; i < n; i++) {
    a[i] = in.nextLong();
    sum += a[i];
}
should be
for (int i = 1; i <= n; i++) {
    a[i] = in.nextLong();
    sum += a[i];
}
There are a lot of small errors in your code; you need to put a lot of effort into debugging them yourself first. Jumping straight to asking here is not a good idea.
In the question asked we need to find three contiguous parts in an array whose sum is the same.
I will mention the steps along with the code snippet that will solve the problem for you.
Get the sum of the array with a linear scan, O(n), and compute sum/3.
Start scanning the given array from the end. At each index we store the number of ways we can get a sum equal to sum/3 from that index onwards; i.e., if end[i] is 3, then there are 3 suffixes starting at index i or later whose sum is sum/3.
The third and final step is to scan from the start and find each index where the prefix sum is sum/3. On finding such an index i, add end[i+2] to the solution variable (initialized to zero).
What we are doing here is traversing the array from the start up to len(array)-3. On finding a prefix with sum equal to sum/3, ending at say index i, we have the first part that we require.
Now we don't need to examine the second part explicitly: we just add end[i+2] to the solution variable. end[i+2] tells us the total number of ways, starting from i+2 through the end, to get a sum equal to sum/3 for the third part.
Having fixed the first and third parts, the second part is forced to sum to sum/3 by default. The solution variable then holds the final answer to the problem.
Given below are code snippets for a better understanding of the above algorithm.
Here we do the backward scan, storing for each index the number of ways to get sum/3 from the end:
long long int *end = (long long int *)calloc(numbers, sizeof(long long int));
long long int temp = array[numbers-1];
if(temp==sum/3){
    end[numbers-1] = 1;
}
for(i=numbers-2;i>=0;i--){
    end[i] = end[i+1];
    temp += array[i];
    if(temp==sum/3){
        end[i]++;
    }
}
Once we have the end array, we do the forward loop and get our final solution:
long long int solution = 0;
temp = 0;
for(i=0;i<numbers-2;i++){
    temp += array[i];
    if(temp==sum/3){
        solution += end[i+2];
    }
}
solution stores the final answer, i.e. the number of ways to split the array into three contiguous parts having equal sums.
A pretty simple question:
Given an array, find all subsets which sum to a value k.
I am trying to do this in Java and seem to have found a solution which runs in O(n^2) time. Is this a correct O(n^2) implementation?
@Test
public void testFindAllSubsets() {
    int[] array = {4, 6, 1, 6, 2, 1, 7};
    int k = 7;
    // here the algorithm starts
    for (int i = 0; i < array.length; i++) {
        // now work backwards
        int sum = array[i];
        List<Integer> subset = new ArrayList<Integer>();
        subset.add(array[i]);
        for (int j = array.length - 1; j > i && sum < k; j--) {
            int newSum = sum + array[j];
            // if the sum is greater, then ditch this subset
            if (newSum <= k) {
                subset.add(array[j]);
                sum = newSum;
            }
        }
        // we won't always find a subset, but if we do, print it out
        if (sum == k) {
            System.out.print("{");
            System.out.print(subset.get(0));
            for (int l = 1; l < subset.size(); l++) {
                System.out.print("," + subset.get(l));
            }
            System.out.print("}");
            System.out.println();
        }
    }
}
I have tried it with various examples and have not found any that break it. However, when I searched online, this does not appear to be the common solution to the problem, and many solutions claim this problem is O(2^n).
P.S.
This is not a homework question, I'm a brogrammer with a job trying to work on my Comp Sci fundamentals in Java. Thanks!
No, this is not correct. Take this simple example:
your array is [4, 6, 1, 2, 3, 1] and the target sum is 7. Your logic finds subsets such as (6, 1) and (4, 1, 2), but it misses others such as (4, 3).
I would suggest reading through the Wikipedia article on the subset sum problem.
An elegant solution is to simply think of a subset as each member answering the question "Am I in or not?" Each element can answer yes/no, so you have 2^N subsets (including the empty subset). The most natural way to code this up is to recurse through the elements and, for each one, do one of the following:
Pick it
Skip it
Thus the time complexity is O(2^N), simply because you have that many possible answers in the worst case.
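The pick-or-skip recursion might be sketched like this (class and method names are mine; each leaf of the recursion corresponds to one distinct pick/skip pattern, so every qualifying subset is recorded exactly once):

```java
import java.util.ArrayList;
import java.util.List;

public class SubsetSum {
    // For each element, either pick it or skip it: 2^N leaves in the worst case.
    static List<List<Integer>> findSubsets(int[] array, int k) {
        List<List<Integer>> results = new ArrayList<>();
        search(array, 0, k, new ArrayList<>(), results);
        return results;
    }

    static void search(int[] array, int i, int remaining, List<Integer> current,
                       List<List<Integer>> results) {
        if (i == array.length) {
            if (remaining == 0 && !current.isEmpty())   // a full pattern that hit the target
                results.add(new ArrayList<>(current));
            return;
        }
        current.add(array[i]);                              // pick array[i]
        search(array, i + 1, remaining - array[i], current, results);
        current.remove(current.size() - 1);                 // skip array[i]
        search(array, i + 1, remaining, current, results);
    }

    public static void main(String[] args) {
        // For the array from the question there are 7 qualifying subsets
        // (counting equal-valued subsets at different positions separately).
        System.out.println(findSubsets(new int[]{4, 6, 1, 6, 2, 1, 7}, 7));
    }
}
```

For all-positive inputs the recursion can additionally prune any branch where remaining goes negative, but the worst case remains O(2^N).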