I am trying to create a fast prime generator in Java. It is (more or less) accepted that the fastest approach for this is the segmented sieve of Eratosthenes: https://en.wikipedia.org/wiki/Sieve_of_Eratosthenes. Many further optimizations can be layered on top to make it faster. As of now, my implementation generates the 50847534 primes below 10^9 in about 1.6 seconds, but I am looking to make it faster and at least break the 1 second barrier. To increase the chance of getting good replies, I will include a walkthrough of the algorithm as well as the code.
Still, as a TL;DR: I am looking to add multithreading to the code.
For the purposes of this question, I want to distinguish between the 'segmented' and the 'traditional' sieve of Eratosthenes. The traditional sieve requires O(n) space and is therefore very limited in the range of inputs (the upper limit) it can handle. The segmented sieve, however, only requires O(n^0.5) space and can operate on much larger limits. (A major speed-up comes from cache-friendly segmentation, taking the L1 & L2 cache sizes of the specific computer into account.) Finally, the main difference that concerns my question is that the traditional sieve is sequential, meaning it can only continue once the previous steps are completed. The segmented sieve is not. Each segment is independent and is 'processed' individually against the sieving primes (the primes not larger than n^0.5). This means that, in theory, once I have the sieving primes, I could divide the work between multiple computers, each processing a different segment, and the work of each would be independent of the others. Assuming (wrongly) that each segment requires the same amount of time t to complete, and that there are k segments, one computer would require a total time of T = k * t, whereas k computers, each working on a different segment, would require a total time of T = t to complete the entire process. (Practically this is wrong, but it keeps the example simple.)
This brought me to reading about multithreading - dividing the work among several threads, each processing a smaller amount of work, for better usage of the CPU. To my understanding, the traditional sieve cannot be multithreaded precisely because it is sequential: each step would depend on the previous one, rendering the entire idea infeasible. But a segmented sieve may indeed (I think) be multithreaded.
Instead of jumping straight into my question, I think it is important to introduce my code first, so here is my current fastest implementation of the segmented sieve. I have worked on it for quite some time, slowly tweaking and adding optimizations. The code is not simple; it is rather complex, I would say. I therefore assume the reader is familiar with the concepts I am using, such as wheel factorization, prime numbers, segmentation and more. I have included comments to make it easier to follow.
import java.math.BigInteger;
import java.util.ArrayList;
import java.util.Arrays;

public class primeGen {

    public static long x = (long) Math.pow(10, 9); //limit
    public static int sqrtx;
    public static boolean[] sievingPrimes; //the sieving primes, <= sqrtx
    public static int[] wheels = new int[] {2, 3, 5, 7, 11, 13, 17, 19}; // base wheel primes
    public static int[] gaps; //the gaps, according to the wheel. will enable skipping multiples of the wheel primes
    public static int nextp; // the first prime > wheel primes
    public static int l; // the amount of gaps in the wheel

    public static void main(String[] args)
    {
        long startTime = System.currentTimeMillis();
        preCalc(); // creating the sieving primes and calculating the list of gaps
        int segSize = Math.max(sqrtx, 32768 * 8); //size of each segment
        long u = nextp; // 'u' is the running index of the program. will continue from one segment to the next
        int wh = 0; // this will be the gap index, indicating by how much we increment 'u' each time, skipping the multiples of the wheel primes
        long pi = pisqrtx(); // the primes count. initialize with the number of primes <= sqrtx
        for (long low = 0 ; low < x ; low += segSize) //the heart of the code. enumerating the primes through segmentation. enumeration will begin at p > sqrtx
        {
            long high = Math.min(x, low + segSize);
            boolean[] segment = new boolean[(int) (high - low + 1)];
            int g = -1;
            for (int i = nextp ; i <= sqrtx ; i += gaps[g])
            {
                if (sievingPrimes[(i + 1) / 2])
                {
                    long firstMultiple = (long) (low / i * i);
                    if (firstMultiple < low)
                        firstMultiple += i;
                    if (firstMultiple % 2 == 0) //start with the first odd multiple of the current prime in the segment
                        firstMultiple += i;
                    for (long j = firstMultiple ; j < high ; j += i * 2)
                        segment[(int) (j - low)] = true;
                }
                g++;
                //if (g == l) //due to segment size, the full list of gaps is never used **within just one segment**, and therefore this check is redundant.
                //should be used with bigger segment sizes or smaller lists of gaps
                //g = 0;
            }
            while (u <= high)
            {
                if (!segment[(int) (u - low)])
                    pi++;
                u += gaps[wh];
                wh++;
                if (wh == l)
                    wh = 0;
            }
        }
        System.out.println(pi);
        long endTime = System.currentTimeMillis();
        System.out.println("Solution took " + (endTime - startTime) + " ms");
    }

    public static boolean[] simpleSieve(int l)
    {
        long sqrtl = (long) Math.sqrt(l);
        boolean[] primes = new boolean[l / 2 + 2];
        Arrays.fill(primes, true);
        int g = -1;
        for (int i = nextp ; i <= sqrtl ; i += gaps[g])
        {
            if (primes[(i + 1) / 2])
                for (int j = i * i ; j <= l ; j += i * 2)
                    primes[(j + 1) / 2] = false;
            g++;
            if (g == l)
                g = 0;
        }
        return primes;
    }

    public static long pisqrtx()
    {
        int pi = wheels.length;
        if (x < wheels[wheels.length - 1])
        {
            if (x < 2)
                return 0;
            int k = 0;
            while (wheels[k] <= x)
                k++;
            return k;
        }
        int g = -1;
        for (int i = nextp ; i <= sqrtx ; i += gaps[g])
        {
            if (sievingPrimes[(i + 1) / 2])
                pi++;
            g++;
            if (g == l)
                g = 0;
        }
        return pi;
    }

    public static void preCalc()
    {
        sqrtx = (int) Math.sqrt(x);
        int prod = 1;
        for (long p : wheels)
            prod *= p; // primorial
        nextp = BigInteger.valueOf(wheels[wheels.length - 1]).nextProbablePrime().intValue(); //the first prime that comes after the wheel
        int lim = prod + nextp; // circumference of the wheel
        boolean[] marks = new boolean[lim + 1];
        Arrays.fill(marks, true);
        for (int j = 2 * 2 ; j <= lim ; j += 2)
            marks[j] = false;
        for (int i = 1 ; i < wheels.length ; i++)
        {
            int p = wheels[i];
            for (int j = p * p ; j <= lim ; j += 2 * p)
                marks[j] = false; // removing all integers that are NOT coprime with the base wheel primes
        }
        ArrayList<Integer> gs = new ArrayList<Integer>(); //list of the gaps between the integers that are coprime with the base wheel primes
        int d = nextp;
        for (int p = d + 2 ; p < marks.length ; p += 2)
        {
            if (marks[p]) //d is coprime with the wheel. if p is also coprime, then a gap is identified, and is noted.
            {
                gs.add(p - d);
                d = p;
            }
        }
        gaps = new int[gs.size()];
        for (int i = 0 ; i < gs.size() ; i++)
            gaps[i] = gs.get(i); // arrays are faster than lists, so moving the list of gaps to an array
        l = gaps.length;
        sievingPrimes = simpleSieve(sqrtx); //initializing the sieving primes
    }
}
Currently, it produces 50847534 primes below 10^9 in about 1.6 seconds. This is very impressive, at least by my standards, but I am looking to make it faster, possibly break the 1 second barrier. Even then, I believe it can be made much faster still.
The whole program is based on wheel factorization: https://en.wikipedia.org/wiki/Wheel_factorization. I have noticed I am getting the fastest results using a wheel of all primes up to 19.
public static int [] wheels = new int [] {2,3,5,7,11,13,17,19}; // base wheel primes
This means that the multiples of those primes are skipped, resulting in a much smaller search range. The gaps between the numbers we do need to examine are then calculated in the preCalc method; by making those jumps between the numbers in the search range we skip the multiples of the base primes.
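To make this concrete with a smaller wheel: for the base primes {2, 3, 5} the primorial is 30, the numbers coprime to 30 starting at 7 are 7, 11, 13, 17, 19, 23, 29, 31, 37, ..., so the gap cycle is 4, 2, 4, 2, 4, 6, 2, 6. Stepping by these gaps from 7 visits only the 8/30 ≈ 27% of integers that are not multiples of 2, 3 or 5; the {2, ..., 19} wheel used here cuts the candidates down to roughly 17% of all integers.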
public static void preCalc()
{
    sqrtx = (int) Math.sqrt(x);
    int prod = 1;
    for (long p : wheels)
        prod *= p; // primorial
    nextp = BigInteger.valueOf(wheels[wheels.length - 1]).nextProbablePrime().intValue(); //the first prime that comes after the wheel
    int lim = prod + nextp; // circumference of the wheel
    boolean[] marks = new boolean[lim + 1];
    Arrays.fill(marks, true);
    for (int j = 2 * 2 ; j <= lim ; j += 2)
        marks[j] = false;
    for (int i = 1 ; i < wheels.length ; i++)
    {
        int p = wheels[i];
        for (int j = p * p ; j <= lim ; j += 2 * p)
            marks[j] = false; // removing all integers that are NOT coprime with the base wheel primes
    }
    ArrayList<Integer> gs = new ArrayList<Integer>(); //list of the gaps between the integers that are coprime with the base wheel primes
    int d = nextp;
    for (int p = d + 2 ; p < marks.length ; p += 2)
    {
        if (marks[p]) //d is coprime with the wheel. if p is also coprime, then a gap is identified, and is noted.
        {
            gs.add(p - d);
            d = p;
        }
    }
    gaps = new int[gs.size()];
    for (int i = 0 ; i < gs.size() ; i++)
        gaps[i] = gs.get(i); // arrays are faster than lists, so moving the list of gaps to an array
    l = gaps.length;
    sievingPrimes = simpleSieve(sqrtx); //initializing the sieving primes
}
At the end of the preCalc method, the simpleSieve method is called, efficiently sieving all the sieving primes mentioned before, the primes <= sqrtx. This is a simple (non-segmented) sieve of Eratosthenes, but it is still based on the wheel factorization computed previously.
public static boolean[] simpleSieve(int l)
{
    long sqrtl = (long) Math.sqrt(l);
    boolean[] primes = new boolean[l / 2 + 2];
    Arrays.fill(primes, true);
    int g = -1;
    for (int i = nextp ; i <= sqrtl ; i += gaps[g])
    {
        if (primes[(i + 1) / 2])
            for (int j = i * i ; j <= l ; j += i * 2)
                primes[(j + 1) / 2] = false;
        g++;
        if (g == l)
            g = 0;
    }
    return primes;
}
Finally, we reach the heart of the algorithm. We start by counting all primes <= sqrtx, with the following call:
long pi = pisqrtx();
which uses the following method:
public static long pisqrtx()
{
    int pi = wheels.length;
    if (x < wheels[wheels.length - 1])
    {
        if (x < 2)
            return 0;
        int k = 0;
        while (wheels[k] <= x)
            k++;
        return k;
    }
    int g = -1;
    for (int i = nextp ; i <= sqrtx ; i += gaps[g])
    {
        if (sievingPrimes[(i + 1) / 2])
            pi++;
        g++;
        if (g == l)
            g = 0;
    }
    return pi;
}
Then, after initializing the pi variable which keeps track of the enumeration of primes, we perform the mentioned segmentation, starting the enumeration from the first prime > sqrtx:
int segSize = Math.max(sqrtx, 32768 * 8); //size of each segment
long u = nextp; // 'u' is the running index of the program. will continue from one segment to the next
int wh = 0; // this will be the gap index, indicating by how much we increment 'u' each time, skipping the multiples of the wheel primes
long pi = pisqrtx(); // the primes count. initialize with the number of primes <= sqrtx
for (long low = 0 ; low < x ; low += segSize) //the heart of the code. enumerating the primes through segmentation. enumeration will begin at p > sqrtx
{
    long high = Math.min(x, low + segSize);
    boolean[] segment = new boolean[(int) (high - low + 1)];
    int g = -1;
    for (int i = nextp ; i <= sqrtx ; i += gaps[g])
    {
        if (sievingPrimes[(i + 1) / 2])
        {
            long firstMultiple = (long) (low / i * i);
            if (firstMultiple < low)
                firstMultiple += i;
            if (firstMultiple % 2 == 0) //start with the first odd multiple of the current prime in the segment
                firstMultiple += i;
            for (long j = firstMultiple ; j < high ; j += i * 2)
                segment[(int) (j - low)] = true;
        }
        g++;
        //if (g == l) //due to segment size, the full list of gaps is never used **within just one segment**, and therefore this check is redundant.
        //should be used with bigger segment sizes or smaller lists of gaps
        //g = 0;
    }
    while (u <= high)
    {
        if (!segment[(int) (u - low)])
            pi++;
        u += gaps[wh];
        wh++;
        if (wh == l)
            wh = 0;
    }
}
I have also included this as a comment in the code, but will explain it here as well. Because the segment size is relatively small, we never run through the entire list of gaps within just one segment, so checking for wrap-around there is redundant (assuming we use a 19-wheel). But over the full run of the program we do use the entire array of gaps, so the variable u has to follow it and must not accidentally run past its end:
while (u <= high)
{
    if (!segment[(int) (u - low)])
        pi++;
    u += gaps[wh];
    wh++;
    if (wh == l)
        wh = 0;
}
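For scale (with the 19-wheel): the gap table holds about 1.66 million entries, while one pass of the sieving loop advances g only once per wheel-coprime integer up to sqrtx ≈ 31,623 (roughly 5,400 steps), and the counting loop advances wh only about segSize * 0.17 ≈ 45,000 times per segment. So neither index can wrap within a single segment; only u and wh, which persist across segments, ever wrap around.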
Using higher limits will eventually produce bigger segments, which might make it necessary to check that we don't run past the gaps list even within a single segment. Tweaking the set of base wheel primes might have the same effect. Switching to bit-sieving could raise the practical segment-size limit considerably, though.
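For what it's worth, a minimal sketch of the bit-packing idea (the class and method names are illustrative, not part of the program above):

// A minimal sketch of a bit-packed segment, one bit per odd candidate.
// BitSegment and its method names are illustrative, not taken from the code above.
public class BitSegment {
    private final long[] bits;

    public BitSegment(int oddCandidates) {
        bits = new long[(oddCandidates + 63) / 64]; // 64 flags per long
    }

    public void mark(int i) {                       // mark odd candidate i as composite
        bits[i >>> 6] |= 1L << (i & 63);
    }

    public boolean isMarked(int i) {                // true if candidate i was crossed off
        return (bits[i >>> 6] & (1L << (i & 63))) != 0;
    }
}

Packed this way, a segment covering 262,144 numbers needs only 131,072 bits = 16,384 bytes for its odd candidates instead of 262,144 bytes of boolean[], so the same memory budget allows a segment roughly sixteen times larger.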
As an important side note, I am aware that efficient segmentation is one that takes the L1 & L2 cache sizes into account. I get the fastest results using a segment size of 32,768 * 8 = 262,144 = 2^18. I am not sure what the cache sizes of my computer are, and 262,144 bytes is larger than a typical 32,768-byte L1 data cache (though it is in line with common L2 sizes). Still, this produces the fastest run time on my computer, so this is the chosen segment size.
As I mentioned, I am still looking to improve this by a lot. I believe, following my introduction, that multithreading can yield a speed-up factor of about 4 when using 4 threads (corresponding to 4 cores). The idea is that each thread will still use the segmented sieve, but work on a different portion: divide n into 4 equal portions, one per thread, each in turn performing the segmentation on the n/4 elements it is responsible for, using the above program. My question is how do I do that? Reading about multithreading and looking at examples unfortunately gave me no insight into how to implement it efficiently in the case above. Contrary to the logic behind it, it seemed to me that the threads were running sequentially rather than simultaneously, which is why I excluded that attempt from the code to keep it readable. I will really appreciate a code sample showing how to do it for this specific code, but a good explanation and reference might do the trick too. Additionally, I would like to hear about more ways of speeding up this program even further; any ideas you have, I would love to hear! I really want to make it very fast and efficient. Thank you!
An example like this should help you get started.
An outline of a solution:
1. Define a data structure ("Task") that encompasses a specific segment; you can put all the immutable shared data into it for extra neatness, too. If you're careful enough, you can pass a common mutable array to all tasks, along with the segment limits, and only update the part of the array within these limits. This is more error-prone, but can simplify the step of joining the results (AFAICT; YMMV).
2. Define a data structure ("Result") that stores the result of a Task computation. Even if you just update a shared resulting structure, you may need to signal which part of that structure has been updated so far.
3. Create a Runnable that accepts a Task, runs the computation, and puts the results into a given result queue.
4. Create a blocking input queue for Tasks, and a queue for Results.
5. Create a ThreadPoolExecutor with the number of threads close to the number of machine cores.
6. Submit all your Tasks to the thread pool executor. They will be scheduled to run on the threads from the pool, and will put their results into the output queue, not necessarily in order.
7. Wait for all the tasks in the thread pool to finish.
8. Drain the output queue and join the partial results into the final result.
Extra speedup may (or may not) be achieved by joining the results in a separate task that reads the output queue, or even by updating a mutable shared output structure under synchronized, depending on how much work the joining step involves.
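To tie the outline to your code, here is a minimal sketch of the thread-pool plumbing only. ParallelSieveSketch and countPrimesInRange are placeholder names; the body of countPrimesInRange is assumed to be the segment loop from your question, which can read the precomputed sievingPrimes and gaps tables safely because they are never written during sieving. The Future returned by submit plays the role of the Result queue here.

// A minimal sketch of the plumbing only. countPrimesInRange is a placeholder for
// the per-chunk segment loop from the question; sievingPrimes and gaps are read-only
// there, so all workers can share them.
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelSieveSketch {

    // Placeholder: count the primes p with low <= p < high and p > sqrt(x),
    // by running the segmented sieve over [low, high) exactly as in the question.
    static long countPrimesInRange(long low, long high) {
        return 0; // replace with the per-chunk segment loop
    }

    public static void main(String[] args) throws Exception {
        long x = 1_000_000_000L;
        int threads = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        List<Future<Long>> pending = new ArrayList<>();

        long chunk = (x + threads - 1) / threads;               // split [0, x) into equal chunks
        for (int t = 0; t < threads; t++) {
            long low = t * chunk;
            long high = Math.min(x, low + chunk);
            Callable<Long> task = () -> countPrimesInRange(low, high); // one chunk = one task
            pending.add(pool.submit(task));
        }

        long pi = 0; // add the count of primes <= sqrt(x) here exactly once, as pisqrtx() does
        for (Future<Long> f : pending)
            pi += f.get();                                      // blocks until that chunk is done
        pool.shutdown();

        System.out.println(pi);
    }
}

Two details matter when filling in countPrimesInRange: each chunk has to compute its own starting wheel position (its own u and wh) for its low bound rather than carrying them over from a previous segment, and the primes <= sqrt(x) must be added to the total exactly once, not once per chunk.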
Hope this helps.
Are you familiar with the work of Tomas Oliveira e Silva? He has a very fast implementation of the Sieve of Eratosthenes.
How interested in speed are you? Would you consider using C++?
$ time ../c_code/segmented_bit_sieve 1000000000
50847534 primes found.
real 0m0.875s
user 0m0.813s
sys 0m0.016s
$ time ../c_code/segmented_bit_isprime 1000000000
50847534 primes found.
real 0m0.816s
user 0m0.797s
sys 0m0.000s
(on my newish laptop with an i5)
The first is from Kim Walisch, using a bit array of odd prime candidates.
https://github.com/kimwalisch/primesieve/wiki/Segmented-sieve-of-Eratosthenes
The second is my tweak to Kim's with IsPrime[] also implemented as a bit array, which is slightly less clear to read, although a little faster for big N due to the reduced memory footprint.
I will read your post carefully as I am interested in primes and performance no matter what language is used. I hope this isn't too far off topic or premature. But I noticed I was already beyond your performance goal.
I know that this question has been asked before, but it has no definitive answer.
So, what I've found is an example here: FFT spectrum analysis
where I can transform my array of doubles with the FFT class:
RealDoubleFFT transformer;
int blockSize = 2048;
short[] buffer = new short[blockSize];
double[] toTransform = new double[blockSize];

int bufferReadResult = audioRecord.read(buffer, 0, blockSize);
for (int i = 0; i < blockSize && i < bufferReadResult; i++) {
    toTransform[i] = (double) buffer[i] / 32768.0; // signed 16 bit
}
transformer.ft(toTransform);
so now I don't know how to get the frequency out of it.
I wrote this method:
public static int calculateFFTFrequency(double[] audioData){
float sampleRate = 44100;
int numSamples = audioData.length;
double max = Double.MIN_VALUE;
int index = 0;
for (int i = 0; i< numSamples -1; i++){
if (audioData[i] > max) {
max = audioData[i];
index = i;
}
}
float freq = (sampleRate / (float) numSamples * (float) index) * 2F;
return (int)freq;
}
I tried to implement a formula, but it doesn't return anything good - just some wild numbers.
I tried zero-crossing counting as well:
public static int calculateFrequency(short [] audioData){
int sampleRate = 44100;
int numSamples = audioData.length;
int numCrossing = 0;
for (int p = 0; p < numSamples-1; p++)
{
if ((audioData[p] > 0 && audioData[p + 1] <= 0) ||
(audioData[p] < 0 && audioData[p + 1] >= 0))
{
numCrossing++;
}
}
float numSecondsRecorded = (float)numSamples/(float)sampleRate;
float numCycles = numCrossing/2;
float frequency = numCycles/numSecondsRecorded;
return (int)frequency;
}
But with the zero-crossing method, if I play an "A" note on the piano it shows me 430 for a moment (which is close to A), and then starts to show some wild numbers as the sound fades - 800+, 1000+, etc.
Can somebody help me get a more or less accurate frequency from the mic?
You should test your solution using a generated stream rather than a mic, checking whether the frequency detected is what you expect. Then you can do real-life tests with the mic, and analyze the data collected by the mic yourself in case of any issues. There could be inaudible sounds in your environment that cause strange results, and when the sound fades there can be harmonics that become louder than the fundamental. There are a lot of things to consider when processing sounds from a real environment.
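For example, a throwaway test tone like this (440 Hz at 44.1 kHz; the names and values are only for illustration) can be fed to the detection methods instead of microphone data:

// A throwaway 440 Hz test tone, 16-bit samples at 44.1 kHz, to feed the detection
// methods instead of the microphone (names and values are illustrative only).
int sampleRate = 44100;
int numSamples = 2048;
short[] testTone = new short[numSamples];
for (int i = 0; i < numSamples; i++) {
    double t = (double) i / sampleRate;
    testTone[i] = (short) (Math.sin(2 * Math.PI * 440 * t) * 16384); // half of full scale
}
// calculateFrequency(testTone) should now report something close to 440.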
What you hear from a piano is a pitch, not just a spectral frequency. They are not the same thing. Pitch is a psycho-acoustic phenomenon, depending more on periodicity, not just the spectral peak. A bare FFT reports spectral frequency magnitudes, which can be composed of overtones, harmonics, and other artifacts, and may or may not include the fundamental pitch frequency.
So what you may want to use instead of an FFT is a pitch detection/estimation algorithm, which is a bit more complicated than just picking a peak magnitude out of an FFT.
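To illustrate the difference, here is a naive autocorrelation-based estimator, sketched for illustration only; real pitch detectors (YIN, MPM and friends) add normalization, thresholds and peak interpolation on top of this idea:

// A naive autocorrelation pitch estimator, for illustration only.
// It picks the lag whose correlation with the signal is strongest and
// converts that period (in samples) to a frequency in Hz.
public static int estimatePitch(short[] audioData, int sampleRate) {
    int minLag = sampleRate / 1000; // ~1000 Hz upper bound
    int maxLag = sampleRate / 50;   // ~50 Hz lower bound
    double best = 0;
    int bestLag = -1;
    for (int lag = minLag; lag <= maxLag && lag < audioData.length; lag++) {
        double sum = 0;
        for (int i = 0; i + lag < audioData.length; i++)
            sum += (double) audioData[i] * audioData[i + lag]; // correlation at this lag
        if (sum > best) {
            best = sum;
            bestLag = lag;
        }
    }
    return bestLag > 0 ? sampleRate / bestLag : 0;
}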
I'm a university student currently working on an assignment. I need to program various methods for sending and receiving messages, for example the Hamming code and the Cyclic Redundancy Check (CRC).
I'm trying to program the CRC method on the transmitter end, but I can't manage to implement the polynomial division required. I've tried several solutions posted here, like using BitSet for the division, to no avail.
Since I'm working with a graphical interface, designed in NetBeans 8.0.1, my question is: how can I manipulate Strings coming from several jTextFields to generate a binary message with the CRC algorithm?
Note: this is the first time I'm using Stack Overflow, so if I'm missing anything, please point it out for me. Thanks in advance.
EDIT: As requested, here is my sample code using BitSet: (note: some variable names are in Spanish since I'm a native Spanish-speaker)
public static String CRC(String m, String G) {
    BitSet dividendo, divisor, divid1, divid2, resto, blanco;
    dividendo = new BitSet(m.length());
    divisor = new BitSet(G.length());
    blanco = new BitSet(G.length());
    blanco.clear();
    for (int i = 0; i < m.length(); i++) {
        if (Integer.parseInt(m.substring(i, i + 1)) == 1) {
            dividendo.set(i);
        } else {
            dividendo.clear(i);
        }
    }
    for (int i = 0; i < G.length(); i++) {
        if (Integer.parseInt(G.substring(i, i + 1)) == 1) {
            divisor.set(i);
        } else {
            divisor.clear(i);
        }
    }
    divid1 = dividendo.get(0, divisor.length());
    int largo1, largo2, largo3, largo4;
    largo1 = dividendo.length();
    largo2 = divisor.length();
    largo3 = blanco.length();
    largo4 = divid1.length();
    for (int i = divisor.length(); i < dividendo.length(); i++) {
        if (divid1.get(0) == divisor.get(0)) {
            divid1.xor(divisor);
            divid2 = new BitSet(divid1.length());
            for (int j = 1; j < divid1.length(); j++) {
                if (divid1.get(j))
                    divid2.set(j - 1);
                else
                    divid2.clear(j - 1);
            }
            boolean valor = dividendo.get(i);
            int largo5 = divid2.length();
            divid2.set(divid2.length(), valor);
            divid1 = divid2;
        } else {
            divid1.xor(blanco);
            divid2 = new BitSet(divid1.length());
            for (int j = 1; j < divid1.length(); j++) {
                if (divid1.get(j))
                    divid2.set(j);
                else
                    divid2.clear(j);
            }
            boolean valor = dividendo.get(i);
            divid2.set(divid2.length(), valor);
            divid1 = divid2;
        }
    }
    resto = new BitSet(divid1.length());
    for (int j = 1; j < divid1.length(); j++) {
        if (divid1.get(j))
            resto.set(j);
        else
            resto.clear(j);
    }
    String mFinal = dividendo.toString() + resto.toString();
    return mFinal;
}
With inputs
String m = "10010101"
String G = "1011"
my expected output vs. actual output was
expected = 10010101010
actual = {0, 3, 5, 7}{1, 2, 3} (first array is the original message, second is the appended remainder)
with the code mentioned above.
I'm at a loss on what I'm doing wrong, so any kind of help will be appreciated.
CRCs are not calculated by doing polynomial divisions term-by-term. Polynomials over GF(2) and their division are how CRCs are defined and analyzed mathematically. The actual calculation is done with binary representations of the polynomials and states stored in machine integers, using rotations and exclusive-ors to operate on all the terms of the polynomials in parallel.
I recommend Ross Williams' tutorial on CRCs for a description of how you go from polynomials over GF(2) to exclusive-ors and rotates of binary strings.
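For illustration, here is a minimal sketch of that shift-register idea applied to binary strings like the ones in the question (crcAppend is just an illustrative name, not something the assignment requires):

// A minimal sketch of the shift-register approach for binary-string inputs.
public static String crcAppend(String message, String generator) {
    int genBits = generator.length();                 // e.g. "1011" -> 3-bit remainder
    int gen = Integer.parseInt(generator, 2);         // generator polynomial as an int
    int topBit = 1 << (genBits - 1);
    int reg = 0;                                      // running remainder (shift register)
    StringBuilder padded = new StringBuilder(message);
    for (int i = 0; i < genBits - 1; i++)
        padded.append('0');                           // append degree-many zero bits
    for (int i = 0; i < padded.length(); i++) {
        reg = (reg << 1) | (padded.charAt(i) - '0');  // shift in the next message bit
        if ((reg & topBit) != 0)
            reg ^= gen;                               // "subtract" the generator (XOR)
    }
    StringBuilder remainder = new StringBuilder(Integer.toBinaryString(reg));
    while (remainder.length() < genBits - 1)
        remainder.insert(0, '0');                     // left-pad to degree bits
    return message + remainder;                       // codeword = message + CRC bits
}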
My assignment deals with hashing and using Horner's polynomial rule to create a hash function. I have to compute the theoretical probe length using (1 + 1/(1-L)^2)/2 (unsuccessful) or (1 + 1/(1-L))/2 (successful) for linear probing, and then the same for the corresponding equations for quadratic probing. I then have to compare the theoretical values with experimental values for load factors 0.1 through 0.9. I am using the find method and searching for 100 random ints to acquire the experimental data. The problem I am having is that I am not obtaining the correct probeLength value once the find either succeeds or fails.
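For the theoretical side, those formulas can be tabulated directly (a small sketch; the class name is just for illustration):

// A small sketch tabulating the expected probe lengths for linear probing
// (successful and unsuccessful search) at load factors 0.1 .. 0.9.
public class TheoreticalProbes {
    public static void main(String[] args) {
        for (double L = 0.1; L <= 0.9 + 1e-9; L += 0.1) {
            double successful   = (1 + 1 / (1 - L)) / 2;
            double unsuccessful = (1 + 1 / ((1 - L) * (1 - L))) / 2;
            System.out.printf("L=%.1f  successful=%.3f  unsuccessful=%.3f%n",
                              L, successful, unsuccessful);
        }
    }
}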
I create 10000 random ints to fill with and then 100 random ints that I will search for.
for(i = 0; i<10000; i++)
{
int x = (int)(java.lang.Math.random() * size);
randomints.add(x);
}
//Make arraylist of 10000 random ints to fill
for(p = 0; p<100; p++)
{
int x = (int)(java.lang.Math.random() * size);
randomintsfind.add(x);
}
Later on I have a loop that does the finding and keeps track of how many times the find succeeds or fails. That part of it is working. It is also supposed to keep track of the probeLength for each find and then add them all together so that it can be divided by the number of successes or failures respectively to find out what the average is. That is where I am having a problem. The probeLength isn't being retrieved correctly and I am not sure why.
This is the section of code that calls the find method and keeps track of those variables as well as the creation and filling.
HashTableLinear theHashTable = new HashTableLinear(primesize);
for(int j=0; j<randomintscopy.length; j++) // insert data
{
//aKey = (int)(java.lang.Math.random() * size);
aDataItem = new DataItem(randomintscopy[j]);
theHashTable.insert(aDataItem);
}
for(int f = 0; f < randomintsfindcopy.length;f++)
{
aDataItem = theHashTable.find(randomintsfindcopy[f]);
if(aDataItem != null)
{
linearsuccess += 1;
experimentallinearsuccess += theHashTable.probeLength;
theHashTable.probeLength = 0;
}
else
{
linearfailure += 1;
experimentallinearfailure += theHashTable.probeLength;
theHashTable.probeLength = 0;
}
}
And then the find method in the HashTableLinear class
public DataItem find(int key) // find item with key
{
int hashVal = hashFunc(key); // hash the key
probeLength = 1;
while(hashArray[hashVal] != null) // until empty cell,
{ // found the key?
if(hashArray[hashVal].getKey() == key)
return hashArray[hashVal]; // yes, return item
++hashVal; // go to next cell
++probeLength;
//System.out.println("Find Test: " + probeLength);
hashVal %= arraySize; // wraparound if necessary
}
return null; // can't find item
}
When I print the probeLength value inside the find method for testing, it differs from the values retrieved in the loops that call find.
I realized that I was thinking too hard about this. I resolved it by adding a getter and a setter, setting the value once the item is either found or not found, and then retrieving the value with the getter.
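In code, that fix amounts to something like the following sketch (HashTableLinear and the measuring loop are the ones from the question; getProbeLength is the new accessor):

// In HashTableLinear: expose the probe count through a getter,
// so the caller reads exactly the value the last find() produced.
public int getProbeLength() {
    return probeLength;
}

// In the measuring loop: read it right after each find() instead of touching the field.
aDataItem = theHashTable.find(randomintsfindcopy[f]);
if (aDataItem != null) {
    linearsuccess += 1;
    experimentallinearsuccess += theHashTable.getProbeLength();
} else {
    linearfailure += 1;
    experimentallinearfailure += theHashTable.getProbeLength();
}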
I have a program that checks distance and whether or not the player has collided with a barrier. I now am trying to calculate which barrier in the array of barriers is the closest to the moving player, then returning the index of that barrier.
Here is what I have so far:
public static int closestBarrier(GameObject object, GameObject[] barriers)
// TODO stub
{
int closest = 0;
for (int i = 0; i < barriers.length - 1; i++) {
if (Math.sqrt((object.getX() - barriers[i].getX())
* (object.getX() - barriers[i].getX()))
+ ((object.getY() - barriers[i].getY()) * (object.getY() - barriers[i]
.getY())) <= Math
.sqrt((object.getX() - barriers[i + 1].getX())
* (object.getX() - barriers[i + 1].getX()))
+ ((object.getY() - barriers[i + 1].getY()) * (object
.getY() - barriers[i + 1].getY()))) {
closest = i;
} else
closest = i + 1;
}
return closest;
}
I am still new to Java, so I understand that what I already have probably isn't very efficient or the best way of doing it (or even right at all!?).
I'd refactor it a wee bit simpler like so:
public static int closestBarrier(GameObject object, GameObject[] barriers)
{
int closest = -1;
float minDistSq = Float.MAX_VALUE;//ridiculously large value to start
for (int i = 0; i < barriers.length; i++) { // check every barrier (not length - 1)
GameObject curr = barriers[i];//current
float dx = (object.getX()-curr.getX());
float dy = (object.getY()-curr.getY());
float distSq = dx*dx+dy*dy;//use the squared distance
if(distSq < minDistSq) {//find the smallest and remember the id
minDistSq = distSq;
closest = i;
}
}
return closest;
}
This way you're doing fewer distance checks (your version does two distance checks per iteration), and since you only need the index, not the actual distance, you can gain a bit of speed by not using Math.sqrt() and simply comparing squared distances instead.
Another idea I can think of depends on the layout. Say you have a top-down vertical scroller: you would start by checking the y property of your obstacles. If you keep them in a hash or a sorted list, then for an object at the bottom of the screen you would loop from the largest-y barrier to the smallest. Once you have found the closest barriers on the y axis, if there is more than one you can check for the closest on the x axis. You wouldn't need squares or square roots, as you're basically splitting the check from one 2D check per barrier into two 1D checks, narrowing down the candidates and discarding far-away barriers instead of checking against every single object all the time.
An even more advanced version would be using space partitioning but hopefully you won't need it for a simple game while learning.