Complexity in Java

I have a general question about the complexity of my code. Since I am solving problems from Codeforces, complexity matters to me for the first time.
Problemset: https://codeforces.com/problemset/problem/1554/A
Code:
import java.util.Arrays;
import java.util.List;
import java.util.Scanner;
import java.util.Collections;
public class main {
    public static void main(String[] args) {
        doCalculation();
    }

    public static void doCalculation() {
        Scanner sc = new Scanner(System.in);
        long n = sc.nextLong();
        for (int i = 0; i < n; i++) {
            int numbersInRow = sc.nextInt();
            long[] numbersToRead = new long[numbersInRow];
            for (int j = 0; j < numbersInRow; j++) {
                numbersToRead[j] = sc.nextLong();
            }
            long maxMultiplication = 0;
            for (int k = 0; k < numbersInRow; k++) {
                for (int m = k + 1; m < numbersInRow; m++) {
                    // From now on: from 1 to 2; from 1 to 3; from 2 to 3
                    long[] subarray = subArray(numbersToRead, k, m);
                    //System.out.println(Arrays.toString(subarray));
                    long min = Arrays.stream(subarray).min().getAsLong();
                    //System.out.println(min);
                    long max = Arrays.stream(subarray).max().getAsLong();
                    //System.out.println(max);
                    long multiplicationEveryArray = min * max;
                    if (multiplicationEveryArray > maxMultiplication) {
                        maxMultiplication = multiplicationEveryArray;
                    }
                }
            }
            System.out.println(maxMultiplication);
        }
    }

    public static long[] subArray(long[] array, int beg, int end) {
        return Arrays.copyOfRange(array, beg, end + 1);
    }
}
I think Arrays.copyOfRange has complexity O(n), and my subArray helper has the same complexity.
What are some ways to improve my complexity without taking a different approach?

Usually, these problems are set up in a way that the brute-force method would take a million years to finish.
Looking at the examples, my first intuition was that you only need to look at neighboring numbers, making this an O(n) problem.
If you have f(1,2) and f(2,3), then f(1,3) will definitely be at most the max of f(1,2) and f(2,3). The product of the first and last elements cannot be the max: for that product to count, at least one of the two has to be smaller than or equal to the element in the middle, which makes the product smaller than or equal to the product of "just neighbors".
You can continue by induction to reach the conclusion that f(1,n+1) will be max(f(1,2), f(2,3), ..., f(n,n+1)).
In order to get rid of the O(n^2) loop, try:
for (int k = 0; k < numbersInRow - 1; k++) {
    final long product = numbersToRead[k] * numbersToRead[k + 1];
    if (product > maxMultiplication) {
        maxMultiplication = product;
    }
}
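Putting it all together, here is a sketch of a complete O(n) solution based on that observation (I'm using BufferedReader, which is usually faster than Scanner for Codeforces-sized input; the class and variable names are my own):

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.StringTokenizer;

public class Main {
    public static void main(String[] args) throws IOException {
        BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
        int t = Integer.parseInt(br.readLine().trim());
        StringBuilder out = new StringBuilder();
        for (int i = 0; i < t; i++) {
            int n = Integer.parseInt(br.readLine().trim());
            long[] a = new long[n];
            StringTokenizer st = new StringTokenizer(br.readLine());
            for (int j = 0; j < n; j++) {
                a[j] = Long.parseLong(st.nextToken());
            }
            // only adjacent pairs can yield the maximal min*max product
            long best = Long.MIN_VALUE;
            for (int j = 0; j + 1 < n; j++) {
                best = Math.max(best, a[j] * a[j + 1]);
            }
            out.append(best).append('\n');
        }
        System.out.print(out);
    }
}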

There are multiple problems with this code:
I think you are using too much space for this question. Why are you creating a new array every time before finding the max and min?
And you do that inside a loop, so it will certainly cause memory pressure later.
Roughly speaking, the total space allocated will go above O(n^2).
long[] subarray = subArray(numbersToRead, k, m);
long min = Arrays.stream(subarray).min().getAsLong();
long max = Arrays.stream(subarray).max().getAsLong();
First of all, you are doing all of this in roughly 3n time for every pair (O(n) for copying, O(n) for finding the minimum, O(n) for finding the maximum).
I suggest, if possible, finding a better algorithm which can find the maximum and minimum in O(n) time overall.
If that's not possible, try finding the maximum and minimum in a single loop, and try avoiding this function of yours:
public static long[] subArray(long[] array, int beg, int end) {
return Arrays.copyOfRange(array, beg, end + 1);
}
Instead of creating a new array every time, try iterating over the same array.
Avoid streams if you don't have any function that can iterate over the original array without creating a new one. Though I doubt that would be the case: there is an overload of Arrays.stream which limits the search for the minimum and maximum to a range of the array (start inclusive, end exclusive).
Something like:
Arrays.stream(numbersToRead, k, m + 1)
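Even simpler, here is a sketch that avoids streams entirely: keep the running minimum and maximum updated as the window [k..m] grows by one element, so each (k, m) pair costs O(1) and the double loop is O(n²) overall instead of O(n³):

long maxMultiplication = Long.MIN_VALUE;
for (int k = 0; k < numbersInRow; k++) {
    long min = numbersToRead[k];
    long max = numbersToRead[k];
    for (int m = k + 1; m < numbersInRow; m++) {
        // extend the window by numbersToRead[m] and update min/max in O(1)
        min = Math.min(min, numbersToRead[m]);
        max = Math.max(max, numbersToRead[m]);
        maxMultiplication = Math.max(maxMultiplication, min * max);
    }
}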

Reducing the time of execution of the following code [closed]

I am working on a problem. Out of 17 test cases, 10 work fine and give the result in less than a second, but 7 cases take 2 seconds, which is beyond the time limit. Following is the code:
import java.util.*;
import java.io.*;

class TestClass
{
    static PrintWriter wr = new PrintWriter(System.out);

    public static void func1(int arr[], int n)
    {
        int temp = arr[0];
        for (int jj = 0; jj < n; jj++)
        {
            if (jj == (n - 1))
                arr[jj] = temp;
            else
                arr[jj] = arr[jj + 1];
        }
    }

    public static void func2(int arr[], int n, int rt)
    {
        int count = 0;
        for (int a = 0; a < n; a++)
        {
            for (int b = a; b < n; b++)
            {
                if (arr[a] > arr[b])
                    count++;
            }
        }
        if (rt == (n - 1))
            wr.print(count);
        else
            wr.print(count + " ");
    }

    public static void main(String args[]) throws Exception
    {
        BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
        String str = br.readLine().trim();
        StringTokenizer st = new StringTokenizer(str);
        int t = Integer.parseInt(st.nextToken());
        for (int i = 0; i < t; i++) // for test cases
        {
            str = br.readLine().trim();
            st = new StringTokenizer(str);
            int n = Integer.parseInt(st.nextToken());
            int arr[] = new int[n];
            str = br.readLine().trim();
            st = new StringTokenizer(str);
            for (int j = 0; j < n; j++) // to take input of array for each test case
            {
                arr[j] = Integer.parseInt(st.nextToken());
            }
            for (int rt = 0; rt < n; rt++) // for number of times circular shifting of array is done
            {
                func1(arr, n); // circularly shifts the array by one position
                func2(arr, n, rt); // prints the number of inversion counts
            }
            if (i != (t - 1))
                wr.println();
        }
        wr.close();
        br.close();
    }
}
Can someone suggest how to optimize the code so that it takes less time to execute?
I know BufferedReader and PrintWriter take less time compared to Scanner and System.out.print. I was using Scanner and System.out.print earlier but changed them later in the hope of a faster run; it didn't help. Also, I earlier did it without func1 and func2, doing all the operations in main only. The time in both cases remains the same.
I am getting the correct output in all the cases, so the code is correct. I just need help optimizing it.
The website you are using acquires questions from past programming competitions. I recognize this as a familiar problem.
Like most optimization questions, the preferred steps are:
Do less.
Do the same in fewer instructions.
Don't use functions.
Use faster instructions.
In your case, you have an array, and you wish to rotate it a number of times, and then to process it from the rotated position.
Rotating an array is an incredibly expensive operation, because you typically need to copy every element in the array into a new location. What is worse for you is that you are doing it the simplest way: you are rotating the array one step for every step of rotation needed.
So, if you have a 100-element array that needs to be rotated 45 steps, you would then have (at 3 copies per element swap) 100 * 45 * 3 copies to perform your rotation.
In the above example, a better approach would be to write a routine that rotates the array 45 elements at a time. There are a number of ways to do this. The easiest is to double the RAM requirement and just use two arrays:
b[x] = a[mod(x + 45, a.length)]
An even faster "do less" would be to never rotate the array, but to perform the calculation in reverse. This is, conceptually, a function from the desired index in the rotated array to the actual index in the pre-rotated array. This avoids all copying, and the index numbers (by virtue of being heavily manipulated in the math processing unit) will already be stored in the CPU registers, which is the fastest RAM a computer has.
Note that once you have the starting index in the original array, you can then calculate the next index without going through the calculation again.
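For instance, here is a minimal sketch of that index mapping for this code (the helper name is mine; rt is how many single-step left rotations func1 would have applied so far):

// index i of the logically rotated array maps back to the original
// array without moving a single element
static int original(int i, int rt, int n) {
    return (i + rt) % n;
}
// e.g. inside func2, read arr[original(a, rt, n)] instead of arr[a]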
I might have read this problem a bit wrong; because, it is not written to highlight the problem being solved. However, the core principles above apply, and it will be up to you to apply them to the exact specifics of your programming challenge.
An example of a faster rotate that does less:
public static void func1(int arr[], int shift) {
    int offset = shift % arr.length;
    int[] rotated = new int[arr.length];
    // copy everything whose source lies before the end of the array
    for (int index = 0; index < arr.length - offset; index++) {
        rotated[index] = arr[index + offset];
    }
    // copy the head of the original array into the tail of the result
    for (int index = arr.length - offset; index < arr.length; index++) {
        rotated[index] = arr[index + offset - arr.length];
    }
    System.arraycopy(rotated, 0, arr, 0, arr.length);
}
This copies the array into its rotated position in one pass, instead of doing a pass per index to be rotated.
The first rule of optimisation (having decided it is necessary) is to use a profiler. This counts how many times methods are invoked, and measures the accumulated time within each method, and gives you a report.
It doesn't matter if a method is slow if you only run it a few times. If you run it hundreds of thousands of times, you need to either make it faster, or run it fewer times.
If you're using a mainstream IDE, you already have a profiler. Read its documentation and use it.
The other first rule of optimisation is, if there's already literature about the problem you're trying to solve, read it. Most of us might have invented bubble-sort independently. Fewer of us would have come up with QuickSort, but it's a better solution.
It looks as if you're counting inversions in the array. Your implementation is about as efficient as you can get, given that naive approach.
for (int i = 0; i < array.length; i++) {
    int n1 = array[i];
    for (int j = i + 1; j < array.length; j++) {
        int n2 = array[j];
        if (n1 > n2) {
            count++;
        }
    }
}
For an array of length l this will take (l - 1) + (l - 2) + ... + 1 comparisons -- that's a triangular number, which grows proportionally to the square of l.
So for l=1000 you're doing ~500,000 comparisons. Then since you're repeating the count for all 1000 rotations of the array, that would be 500,000,000 comparisons, which is definitely the sort of number where things start taking a noticeable amount of time.
Googling for inversion count reveals a more sophisticated approach, which is to perform a merge sort, counting inversions as they are encountered.
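For reference, here is a sketch of that approach -- the standard merge sort that counts inversions as it merges, O(n log n) per count instead of O(n²). Note it reorders the array, so pass a copy if you still need the original order:

// counts inversions in arr[lo..hi]; tmp is scratch space of the same length
static long countInversions(int[] arr, int[] tmp, int lo, int hi) {
    if (hi <= lo) return 0;
    int mid = (lo + hi) >>> 1;
    long count = countInversions(arr, tmp, lo, mid)
               + countInversions(arr, tmp, mid + 1, hi);
    // merge the sorted halves; whenever an element from the right half
    // goes first, it is inverted with everything left in the left half
    int i = lo, j = mid + 1, k = lo;
    while (i <= mid && j <= hi) {
        if (arr[i] <= arr[j]) {
            tmp[k++] = arr[i++];
        } else {
            count += mid - i + 1;
            tmp[k++] = arr[j++];
        }
    }
    while (i <= mid) tmp[k++] = arr[i++];
    while (j <= hi) tmp[k++] = arr[j++];
    System.arraycopy(tmp, lo, arr, lo, hi - lo + 1);
    return count;
}

Called as countInversions(arr.clone(), new int[arr.length], 0, arr.length - 1), it replaces each O(n²) call to func2.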
Otherwise, we need to look for opportunities for huge numbers of loop iterations. A loop inside a loop makes for big numbers. A loop inside a loop inside another loop makes for even bigger numbers.
You have:
for (int i = 0; i < t; i++) {
    // stuff removed
    for (int rt = 0; rt < n; rt++) {
        // snip
        func2(arr, n, rt); // prints the number of inversion counts
    }
    // snip
}

public static void func2(int arr[], int n, int rt) {
    // snip
    for (int a = 0; a < n; a++) {
        for (int b = a; b < n; b++) {
            // stuff
        }
    }
    // snip
}
That's four levels of looping. Look at the input values for your slow tests, and work out what t * n * n * n is -- that's an indicator of how many times it'll do the work in the inner block.
We don't know what your algorithm is supposed to achieve. But think about whether you're doing the same thing twice in any of these loops.
It looks as if func1() is supposed to rotate an array. Have a look at System.arraycopy() for moving whole chunks of an array at a time. Most CPUs will do this in a single operation.
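For example, here is a sketch of a bulk rotation built on System.arraycopy() -- it rotates by k positions in two block copies instead of k single-step passes:

// rotate arr left by k positions using bulk copies
static void rotateLeft(int[] arr, int k) {
    int n = arr.length;
    k %= n;
    if (k == 0) return;
    int[] head = new int[k];
    System.arraycopy(arr, 0, head, 0, k);     // save the first k elements
    System.arraycopy(arr, k, arr, 0, n - k);  // shift the remainder to the front
    System.arraycopy(head, 0, arr, n - k, k); // append the saved head
}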

Improving the algorithm for removal of element

Problem
Given a string s and m queries: for each query, delete the K-th occurrence of a character x.
For example:
abcdbcaab
5
2 a
1 c
1 d
3 b
2 a
Answer: abbc
My approach
I am using a BIT (binary indexed tree) for the update operation.
Code:
for (int i = 0; i < ss.length(); i++) {
    char cc = ss.charAt(i);
    freq[cc - 97] += 1;
    if (max < freq[cc - 97]) max = freq[cc - 97];
    dp[cc - 97][freq[cc - 97]] = i; // counting the frequency
}
BIT = new int[27][ss.length() + 1];
int[] ans = new int[ss.length()];
int q = in.nextInt();
for (int i = 0; i < q; i++) {
    int rmv = in.nextInt();
    char c = in.next().charAt(0);
    int rr = rmv + value(rmv, BIT[c - 97]); // calculating the original index value
    ans[dp[c - 97][rr]] = Integer.MAX_VALUE;
    update(rmv, 1, BIT[c - 97], max); // updating it
}
for (int i = 0; i < ss.length(); i++) {
    if (ans[i] != Integer.MAX_VALUE) System.out.print(ss.charAt(i));
}
The time complexity is O(M log N), where N is the length of the string ss.
Question
My solution gives me a Time Limit Exceeded error. How can I improve it?
public static void update(int i, int value, int[] arr, int xx) {
    while (i <= xx) {
        arr[i] += value;
        i += (i & -i);
    }
}

public static int value(int i, int[] arr) {
    int ans = 0;
    while (i > 0) {
        ans += arr[i];
        i -= (i & -i);
    }
    return ans;
}
There are key operations not shown, and odds are that one of them (quite likely the update method) has a different cost than you think. Furthermore, your stated complexity is guaranteed to be wrong, because at some point you have to scan the string, which is at minimum O(N).
But anyway, the obviously right strategy here is to go through the queries, separate them by character, and then go through the queries in reverse order to figure out the initial positions of the characters to be suppressed. Then run through the string once, emitting characters only where no deletion applies. This solution, if implemented well, should be doable in O(N + M log(M)).
The challenge is how to represent the deletions efficiently. I'm thinking of some sort of tree of relative offsets, so that if you find that the first deletion was "3 a" you can efficiently insert it into your tree and shift every later deletion after that one. This is where the log(M) part comes in.
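As a concrete alternative to that offset tree (a different but commonly used technique, not the one described above): keep a Fenwick tree of "still alive" flags over each character's occurrence list, and find the K-th remaining occurrence by binary lifting. A sketch of the two core operations (names are mine):

// Fenwick tree over 1-based positions; bit[i] holds partial counts of alive flags
static void add(int[] bit, int i, int delta) {
    for (; i < bit.length; i += i & -i) bit[i] += delta;
}

// smallest 1-based position whose prefix count of alive flags reaches k,
// found by binary lifting in O(log n), no separate binary search needed
static int kthAlive(int[] bit, int k) {
    int pos = 0;
    for (int pw = Integer.highestOneBit(bit.length); pw > 0; pw >>= 1) {
        if (pos + pw < bit.length && bit[pos + pw] < k) {
            pos += pw;
            k -= bit[pos];
        }
    }
    return pos + 1;
}

Per query (K, x): p = kthAlive(bitForX, K) is the index of the K-th surviving occurrence of x in its original occurrence list; mark that string position deleted and call add(bitForX, p, -1). With each tree initialized to all ones, the whole run is O((N + M) log N).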

Why is sorting/initializing an array not counted in Big O?

I am trying to find the most efficient answer (without using a HashMap) to the problem: Find the most frequent integer in an array.
I got answers like:
public int findPopular(int[] a) {
    if (a == null || a.length == 0)
        return 0;

    Arrays.sort(a);

    int previous = a[0];
    int popular = a[0];
    int count = 1;
    int maxCount = 1;

    for (int i = 1; i < a.length; i++) {
        if (a[i] == previous)
            count++;
        else {
            if (count > maxCount) {
                popular = a[i - 1];
                maxCount = count;
            }
            previous = a[i];
            count = 1;
        }
    }

    return count > maxCount ? a[a.length - 1] : popular;
}
and
public class Mode {
    public static int mode(final int[] n) {
        int maxKey = 0;
        int maxCounts = 0;

        int[] counts = new int[n.length];
        for (int i = 0; i < n.length; i++) {
            counts[n[i]]++;
            if (maxCounts < counts[n[i]]) {
                maxCounts = counts[n[i]];
                maxKey = n[i];
            }
        }
        return maxKey;
    }

    public static void main(String[] args) {
        int[] n = new int[] { 3, 7, 4, 1, 3, 8, 9, 3, 7, 1 };
        System.out.println(mode(n));
    }
}
The first code snippet claims to be O(n log n). However, the Arrays.sort() call alone is O(n log n) [3]. If you add the for loop, wouldn't the findPopular() function be O(n^2 * log n), which would simplify to O(n^2)?
The second code snippet [2] claims to be O(n). However, why do we not include the initialization of the array in our calculation? The initialization of the array takes O(n) time [4], and the for loop takes O(n). So wouldn't the mode() function be O(n^2)?
If I am correct, that would mean I have yet to see an answer that is more efficient than O(n^2).
As always, thank you for the help!
Sources:
Find the most popular element in int[] array
Write a mode method in Java to find the most frequently occurring element in an array
The Running Time For Arrays.Sort Method in Java
Java: what's the big-O time of declaring an array of size n?
Edit: Well, I feel like an idiot. I'll leave this here in case someone made the same mistake I did.
When you perform two tasks one after the other, you add complexities:
Arrays.sort(a);               // O(n log n)
for (int i = 0; i < n; i++) { // O(n)
    System.out.println(a[i]);
}
// O(n log n + n) = O(n (log n + 1)) = O(n log n)
Only when you repeat an algorithm do you multiply:
for (int i = 0; i < n; i++) { // O(n)
    Arrays.sort(a);           // O(n log n), will be executed n times
}
// O((n log n) * n) = O(n² log n)
Code 1: you have only one for loop after the sort, so effectively your time complexity is O(n log n) + O(n), which is approximately O(n log n).
Code 2: initialization also takes O(n), so effectively O(n) + O(n) (the loop) is still O(n).
Note: while calculating time complexities with big-O, you just need the biggest term(s).

Sum of all prime numbers below 2 million

Problem 10 from Project Euler:
The program runs for smaller numbers but slows to a crawl in the hundred-thousands.
At 2 million, no answer shows up even though the program seems to still be running.
I'm trying to implement the Sieve of Eratosthenes, which is supposed to be very fast. What's wrong with my approach?
import java.util.ArrayList;

public class p010
{
    /**
     * The sum of the primes below 10 is 2 + 3 + 5 + 7 = 17
     * Find the sum of all the primes below two million.
     * @param args
     */
    public static void main(String[] args)
    {
        ArrayList<Integer> primes = new ArrayList<Integer>();
        int upper = 2000000;
        for (int i = 2; i < upper; i++)
        {
            primes.add(i);
        }
        int sum = 0;
        for (int i = 0; i < primes.size(); i++)
        {
            if (isPrime(primes.get(i)))
            {
                for (int k = 2; k * primes.get(i) < upper; k++)
                {
                    if (primes.contains(k * primes.get(i)))
                    {
                        primes.remove(primes.indexOf(k * primes.get(i)));
                    }
                }
            }
        }
        for (int i = 0; i < primes.size(); i++)
        {
            sum += primes.get(i);
        }
        System.out.println(sum);
    }

    public static boolean isPrime(int number)
    {
        boolean returnVal = true;
        for (int i = 2; i <= Math.sqrt(number); i++)
        {
            if (number % i == 0)
            {
                returnVal = false;
            }
        }
        return returnVal;
    }
}
You appear to be trying to implement the Sieve of Eratosthenes, which should perform better than O(N^2) (in fact, Wikipedia says it is O(N log(log N)) ...).
The fundamental problem is your choice of data structure. You've chosen to represent the set of remaining prime candidates as an ArrayList of primes. This means that your test to see if a number is still in the set takes O(N) comparisons ... where N is the number of remaining primes. Then you are using ArrayList.remove(int) to remove the non-primes ... which is O(N) also.
That all adds up to making your Sieve implementation worse than O(N^2).
The solution is to replace the ArrayList<Integer> with a boolean[], where the positions (indexes) in the boolean array represent the numbers, and the value of the boolean says whether the number is prime / possibly prime, or not prime.
(There were other problems too that I didn't notice ... see the other answers.)
There are a few issues here. First, lets talk about the algorithm. Your isPrime method is actually the very thing that the sieve is designed to avoid. When you get to a number in the sieve, you already know it's prime, you don't need to test it. If it weren't prime, it would already have been eliminated as a factor of a lower number.
So, point 1:
You can eliminate the isPrime method altogether. It should never return false.
Then, there are implementation issues. primes.contains and primes.remove are problems. They run in linear time on an ArrayList, because they require checking each element or rewriting a large portion of the backing array.
Point 2:
Either mark values in place (use boolean[]), or use some other, more appropriate data structure.
I typically use something like boolean[] primes = new boolean[upper + 1], and define n to be included if !(primes[n]). (I just ignore elements 0 and 1 so I don't have to subtract indices.) To "remove" an element, I set it to true. You could also use something like TreeSet<Integer>, I suppose. Using boolean[], the method is near-instantaneous.
Point 3:
sum needs to be a long. The answer (roughly 1.429e11) is larger than the maximum value of an int (2^31 - 1).
I can post working code if you like, but here's a test output, without spoilers:
public static void main(String[] args) {
    long value;
    long start;
    long finish;

    start = System.nanoTime();
    value = arrayMethod(2000000);
    finish = System.nanoTime();
    System.out.printf("Value: %.3e, time: %4d ms\n", (double) value, (finish - start) / 1000000);

    start = System.nanoTime();
    value = treeMethod(2000000);
    finish = System.nanoTime();
    System.out.printf("Value: %.3e, time: %4d ms\n", (double) value, (finish - start) / 1000000);
}
output:
Using boolean[]
Value: 1.429e+11, time: 17 ms
Using TreeSet<Integer>
Value: 1.429e+11, time: 4869 ms
Edit:
Since spoilers are posted, here's my code:
public static long arrayMethod(int upper) {
    boolean[] primes = new boolean[upper + 1];
    long sum = 0;
    for (int i = 2; i <= upper; i++) {
        if (!primes[i]) {
            sum += i;
            for (int k = 2 * i; k <= upper; k += i) {
                primes[k] = true;
            }
        }
    }
    return sum;
}

public static long treeMethod(int upper) {
    TreeSet<Integer> primes = new TreeSet<Integer>();
    for (int i = 2; i <= upper; i++) {
        primes.add(i);
    }
    long sum = 0;
    for (Integer i = 2; i != null; i = primes.higher(i)) {
        sum += i;
        for (int k = 2 * i; k <= upper; k += i) {
            primes.remove(k);
        }
    }
    return sum;
}
Two things:
Your code is hard to follow. You have a list called "primes" that contains non-prime numbers!
Also, you should strongly consider whether an ArrayList is appropriate. In this case, a LinkedList would be much more efficient.
Why is this? An ArrayList must constantly resize its backing array by asking for new memory and then copying the old contents over into the newly created array. A LinkedList would just resize by changing a pointer. This is a lot quicker! However, I do not think that this change alone can salvage your algorithm.
You should use an ArrayList if you need to access the items non-sequentially; here (with a suitable algorithm) you only need to access the items sequentially.
Also, your algorithm is slow. Take the advice of SJuan76 (or gyrogearless), thanks sjuan76.
The key to the efficiency of classic implementation of the sieve of Eratosthenes on modern CPUs is the direct (i.e. non-sequential) memory access. Fortunately, ArrayList<E> does implement RandomAccess.
Another key to the sieve's efficiency is its conflation of index and value, just like in integer sorting. Actually removing a number from the sequence destroys this ability to address directly, without any computations. We must mark, not remove, composites as we find them, so that the numbers greater than them remain in their places in the sequence.
ArrayList<Integer> can be used for that (it takes more memory than is strictly necessary, but for 2 million that is inconsequential).
So your code with a minimal edit fix (also changing sum to be long as others point out too), becomes
import java.util.ArrayList;

public class Main
{
    /**
     * The sum of the primes below 10 is 2 + 3 + 5 + 7 = 17
     * Find the sum of all the primes below two million.
     * @param args
     */
    public static void main(String[] args)
    {
        ArrayList<Integer> primes = new ArrayList<Integer>();
        int upper = 5000;
        primes.ensureCapacity(upper);
        for (int i = 0; i < upper; i++) {
            primes.add(i);
        }
        long sum = 0;
        for (int i = 2; i <= upper / i; i++) {
            if (primes.get(i) > 0) {
                for (int k = i * i; k < upper; k += i) {
                    primes.set(k, 0);
                }
            }
        }
        for (int i = 2; i < upper; i++) {
            sum += primes.get(i);
        }
        System.out.println(sum);
    }
}
Finds the result for 2000000 in half a second on Ideone. The projected run time for your original code there: between 10 and 400 hours (!).
To find rough estimates of the run time when faced with slow code, you should always try to find its empirical orders of growth: run it for some small size n1, then a bigger size n2, and record the run times t1 and t2. If t ~ n^a, then a = log(t2/t1) / log(n2/n1). For example, if doubling n quadruples the run time, then a = log 4 / log 2 = 2, i.e. quadratic growth.
For your original code the empirical orders of growth measured on 10k .. 20k .. 40k range of upper limit value N, are ~ N^1.7 .. N^1.9 .. N^2.1. For the fixed code it's faster than ~ N (in fact, it's ~ N^0.9 in the tested range 0.5 mln .. 1 mln .. 2 mln). The theoretical complexity is O(N log (log N)).
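A minimal sketch of such a measurement harness (the IntConsumer is whatever entry point takes the upper limit; the example call at the bottom is hypothetical):

import java.util.function.IntConsumer;

// estimates a in t ~ n^a from two timed runs of the same solver
static double growthExponent(IntConsumer solve, int n1, int n2) {
    long s1 = System.nanoTime();
    solve.accept(n1);
    long t1 = System.nanoTime() - s1;
    long s2 = System.nanoTime();
    solve.accept(n2);
    long t2 = System.nanoTime() - s2;
    return Math.log((double) t2 / t1) / Math.log((double) n2 / n1);
}
// e.g. growthExponent(upper -> sumPrimesBelow(upper), 500_000, 1_000_000)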
Your program is not the Sieve of Eratosthenes; the modulo operator gives it away. Your program will be O(n^2), where a proper Sieve of Eratosthenes is O(n log log n), which is essentially n. Here's my version; I'll leave it to you to translate to Java with appropriate numeric datatypes:
function sumPrimes(n)
    sum := 0
    sieve := makeArray(2..n, True)
    for p from 2 to n step 1
        if sieve[p]
            sum := sum + p
            for i from p * p to n step p
                sieve[i] := False
    return sum
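A direct Java translation might look like this (my sketch; note the long sum, since the answer does not fit in an int):

static long sumPrimes(int n) {
    // sieve[i] == true means "crossed out" (composite); this inverts the
    // pseudocode's True/False so Java's default false means "maybe prime"
    boolean[] sieve = new boolean[n + 1];
    long sum = 0;
    for (int p = 2; p <= n; p++) {
        if (!sieve[p]) {
            sum += p;
            // cross out multiples starting at p*p; the long index avoids
            // int overflow of p*p for large n
            for (long i = (long) p * p; i <= n; i += p) {
                sieve[(int) i] = true;
            }
        }
    }
    return sum;
}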
If you're interested in programming with prime numbers, I modestly recommend this essay at my blog.

Finding the distance between the two closest elements in an array of numbers

So I'm teaching myself algorithms from this book I purchased, and I have pseudo-code for finding the distance between the two closest elements in an array of numbers:
MinDistance(A[0...n-1])
// Input: Array A of numbers
// Output: Minimum distance between two of its elements
dMin <- maximum integer
for i = 0 to n-1 do
    for j = 0 to n-1 do
        if i != j and |A[i] - A[j]| < dMin
            dMin <- |A[i] - A[j]|
return dMin
However, I wanted to make improvements to this algorithmic solution, either by changing what's already there or rewriting it altogether. Can someone help?
I wrote the function and class in Java to test the pseudo-code. Is that correct? And once again, how can I make it better from an efficiency standpoint?
import java.lang.Math.*;

public class ArrayTester {
    // algorithm for finding the distance between the two closest elements in an array of numbers
    public int MinDistance(int[] ar) {
        int[] a = ar;
        int aSize = a.length;
        int dMin = 0; // MaxInt
        for (int i = 0; i < aSize; i++) {
            for (int j = i + 1; j < aSize; j++) {
                dMin = Math.min(dMin, Math.abs(a[i] - a[j]));
            }
        }
        return dMin;
    }

    // MAIN
    public static void main(String[] args) {
        ArrayTester at = new ArrayTester();
        int[] someArray = {9, 1, 2, 3, 16};
        System.out.println("NOT-OPTIMIZED METHOD");
        System.out.println("Array length = " + someArray.length);
        System.out.println("The distance between the two closest elements: " + at.MinDistance(someArray));
    } // end MAIN
} // END CLASS
So I updated the function to avoid calling Math.abs twice. What else can I do to improve it? If I were to rewrite it with sort, would it change my for loops at all, or would it stay the same and just run faster in theory?
public int MinDistance(int[] ar) {
    int[] a = ar;
    int aSize = a.length;
    int dMin = 0; // MaxInt
    for (int i = 0; i < aSize; i++) {
        for (int j = i + 1; j < aSize; j++) {
            dMin = Math.min(dMin, Math.abs(a[i] - a[j]));
        }
    }
    return dMin;
}
One obvious efficiency improvement: sort the integers first; then you only need to look at adjacent ones, because any number is going to be closest to its neighbour either up or down.
That changes the complexity from O(n^2) to O(n log n). Admittedly, for the small value of n shown it's not going to make a significant difference, but in terms of theoretical complexity it's important.
One micro-optimization you may want to make: store the result of Math.abs in a local variable, so you don't need to compute it a second time when it turns out to be less than the minimum. Alternatively, you might use dMin = Math.min(dMin, Math.abs(a[i] - a[j])).
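A minimal sketch of the local-variable version:

int d = Math.abs(a[i] - a[j]); // computed once, reused below
if (d < dMin) {
    dMin = d;
}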
Note that you need to be careful of border conditions: if you're permitting negative numbers, the subtraction might overflow (for example, Integer.MAX_VALUE - (-1) wraps around to a negative value).
That's a naive O(n^2) solution.
A better way:
Sort the array, then go over it once more and check the distance between adjacent items.
This works because the array is then in ascending order, so the number with the nearest value is adjacent.
That solution is O(n log n).
First of all, before making it fast, make it correct. Why is dMin initialized to 0 when the comment says MaxInt? With that initialization, Math.min(dMin, ...) always returns 0: if the array is [1, 1000], the result of your algorithm will be 0 instead of 999.
Then, why does the pseudo-code make j go from 0 to the length of the array? You compare each pair of elements twice. You should make j go from i + 1 to the length of the array (which also avoids the i != j comparison).
Finally, you could gain a few nanoseconds by avoiding calling Math.abs() twice.
And then, you could completely change your algorithm by sorting the array first, as noted in other answers.
You can theoretically get an O(n) solution by:
sorting with radix sort (edited from shell sort, thanks to j_random_hacker for pointing it out)
doing one pass to find the differences between adjacent numbers
Here's a question:
How long would it take to find the min distance if the array was sorted?
You should be able to work out the rest from here.
Sorting the array first exempts us from needing a second, nested for loop:
public static int distclosest(int[] numbers) {
    Arrays.sort(numbers);
    int aSize = numbers.length;
    int dMin = numbers[aSize - 1];
    for (int i = 0; i < aSize - 1; i++) {
        dMin = Math.min(dMin, numbers[i + 1] - numbers[i]);
    }
    return dMin;
}
static void MinNumber(int[] nums) {
    Arrays.sort(nums);
    int min = nums[1] - nums[0];
    int indexOne = 0, indexTwo = 1;
    for (int i = 1; i < nums.length - 1; i++) {
        if (min > (nums[i + 1] - nums[i])) {
            min = nums[i + 1] - nums[i];
            indexOne = i;
            indexTwo = i + 1;
        }
    }
    System.out.println("Minimum distance between two values is: " + min
            + " and the values are " + nums[indexOne] + ", " + nums[indexTwo]);
}
NB: sorting the array is a must before executing this algorithm.
static int minDist(int[] arr) {
    int firstPointer, nextPointer;
    int minDistance = arr[1] - arr[0];
    int tempDistance;
    for (firstPointer = 0; firstPointer < arr.length; firstPointer++) {
        for (nextPointer = firstPointer + 1; nextPointer < arr.length; nextPointer++) {
            if (arr[nextPointer] == arr[firstPointer]) {
                return 0;
            } else {
                tempDistance = arr[nextPointer] - arr[firstPointer];
                if (minDistance > tempDistance) {
                    minDistance = tempDistance;
                }
            }
        }
    }
    return minDistance;
}

public static void main(String[] args) {
    int[] testArray = {1000, 1007, 3, 9, 21};
    Arrays.sort(testArray);
    int result = minDist(testArray);
    System.out.println(result);
}
