I am working on an Android project where I am using an FFT to process accelerometer data, and I have trouble understanding how these things actually work.
I am using the JTransforms library by Piotr Wendykier in the following way:
int length = vectors.length;
float[] input = new float[length * 2];
for (int i = 0; i < length; i++) {
    input[i] = vectors[i];
}
FloatFFT_1D fftlib = new FloatFFT_1D(length);
fftlib.complexForward(input);
float[] outputData = new float[(input.length + 1) / 2];
if (input.length % 2 == 0) {
    for (int i = 0; i < length / 2; i++) {
        outputData[i] = (float) Math.sqrt(Math.pow(input[2 * i], 2) + Math.pow(input[2 * i + 1], 2));
    }
} else {
    for (int i = 0; i < length / 2 + 1; i++) {
        outputData[i] = (float) Math.sqrt(Math.pow(input[2 * i], 2) + Math.pow(input[2 * i + 1], 2));
    }
}
List<Float> output = new ArrayList<Float>();
for (float f : outputData) {
    output.add(f);
}
The result is an array with the following data.
I have a problem with interpreting the output data. The data come from a 10-second interval, and the sampling frequency is 50 Hz. While capturing, I was moving the phone up and down in my hand roughly every 3/4 second, so is it possible that the extreme at about x value 16 corresponds to the period of the strongest component of the signal?
I need to obtain the frequency of the strongest component in the signal.
The frequency represented by each FFT result bin is the bin number times the sample rate divided by the length of the FFT (convolved with a sinc function giving it non-zero width, to get a bit technical). If your sample rate is 50 Hz and your FFT length is 512, then bin 16 of the FFT result would represent about 1.6 Hz, which is close to a period of 0.7 seconds.
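In code, that relationship is simply (a generic helper, not tied to any particular library):

static double binFrequency(int bin, double sampleRate, int fftLength) {
    // frequency in Hz represented by the given FFT result bin
    return bin * sampleRate / fftLength;
}
// binFrequency(16, 50.0, 512) = 1.5625 Hz, i.e. a period of about 0.64 s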
The spike at bin 0 (DC) might represent the non-zero force of gravity on the accelerometer.
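If that DC spike gets in the way, one common remedy (a suggestion on my part, not something in the posted code) is to subtract the mean before transforming:

// remove the DC offset (e.g. gravity) by subtracting the mean sample value
float mean = 0f;
for (float v : vectors) mean += v;
mean /= vectors.length;
for (int i = 0; i < vectors.length; i++) vectors[i] -= mean;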
Since you have real data, you should pass these values to the realForward function (not complexForward), as stated here.
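Here is a minimal sketch of the realForward route, assuming an even-length input and the packed output layout described in the JTransforms javadoc (the package is edu.emory.mathcs.jtransforms.fft in older releases, org.jtransforms.fft in newer ones):

int n = vectors.length;               // assumed even here
float[] a = new float[n];             // realForward transforms n real floats in place
System.arraycopy(vectors, 0, a, 0, n);

FloatFFT_1D fft = new FloatFFT_1D(n);
fft.realForward(a);                   // packed: a[0]=Re[0], a[1]=Re[n/2],
                                      // a[2k]=Re[k], a[2k+1]=Im[k] for 0 < k < n/2
float[] magnitude = new float[n / 2];
magnitude[0] = Math.abs(a[0]);        // DC bin has no imaginary part
for (int k = 1; k < n / 2; k++) {
    magnitude[k] = (float) Math.hypot(a[2 * k], a[2 * k + 1]);
}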
Related
I'm currently doing a practicum, and my boss wants me to have an FFT programmed in Java ready by the end of the week.
I already got code for the FFT from Princeton University: http://introcs.cs.princeton.edu/java/97data/FFT.java
I implemented and extended this code in my project, so I am now able to read the binary input of a signal, run the FFT on the sample values, and then provide the magnitudes.
Now I come to my problem.
I inputted the following values, which I generated with final double d = Math.sin(i); looped 8 times (this is just for testing purposes; next week I'll have to input real values).
0.0
0.8414709848078965
0.9092974268256817
0.1411200080598672
-0.7568024953079282
-0.9589242746631385
-0.27941549819892586
0.6569865987187891
so those values come from a pure sine (I don't know the right English word, but by pure sine I mean a sine with exactly one frequency, for example 50 Hz).
The output is now
0.553732750242242
2.3946469565385193 - 2.0970118573701813i
-1.386684423934684 + 0.9155598966338983i
-0.8810419659226628 + 0.28041399267903344i
-0.8075738836045867
-0.8810419659226628 - 0.28041399267903366i
-1.386684423934684 - 0.9155598966338983i
2.394646956538519 + 2.0970118573701817i
And the magnitudes of the output
0.553732750242242
3.183047718211326
1.6616689248786416
0.9245901540720989
0.8075738836045867
0.924590154072099
1.6616689248786416
3.183047718211326
Now I actually expected the output values to be 0 at each frequency sample point up until I reach the frequency that dominates the pure sine, where the output should be >0 (for example at 50 Hz). At least that's what my boss expected when he gave me this task.
Summary:
So this is what I'm struggling with. I've read another thread asking about a similar issue, but there are still unanswered questions for me. What am I supposed to do with the given output data? How do I find the dominant frequency?
I could really use some help, or an explanation of where my thinking goes wrong.
Thanks for listening...
Computing a 512-point Fourier transform after applying a simple window function:
w(i)= ((float)i-(float)(n-1f)/2f)
it gives a peak at i=25 (the maximum magnitude in the result array).
The input was also given more info, such as the frequency of the sine wave generator (50 Hz) and the sampling rate (1 kHz, or 0.001 seconds per sample), and the 2*PI constant was added.
The initialization now looks like this (in sin(2 x PI x F x i) notation):
for (int i = 0; i < n; i++)
{
    v1[i] = ((float)i - (float)(n - 1f) / 2f) *
            (float)Math.Sin(0.001f * 50f * 2f * Math.PI * (float)i);
    //                        ^      ^    ^                 ^
    //                        |      |    |                 |
    //                 sampling      F    2 x PI       sample bin
    //                 rate (simulation)  constant
}
and the result peak looks like this:
v2[22] 2145,21852033773
v2[23] 3283,36245333956
v2[24] 6368,06249969329
v2[25] 28160,6579468591 <-- peak
v2[26] 23231,0481898687
v2[27] 1503,8455705291
v2[28] 1708,68502071037
so we have
25
Now, these steps are in frequency space, and the input was at a 1 kHz rate, so you can perceive at most a 500 Hz harmonic signal (half the sampling rate).
The result range is 0 to N/2, with the other half mirrored, so dividing the perceivable frequency range (500 Hz) by the result range (256 for N=512) and multiplying by the peak bin gives
25 * 500 / 256 ≈ 48.83 Hz
A big portion of the error must come from the window function used at the beginning, but v2[26] has a higher value than v2[24], so the actual peak is somewhere closer to v2[26], and a smoothed graph of these points should show 50 Hz.
Ignore the first element of the result array, as it represents the constant signal level (zero frequency, or infinite wavelength).
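In other words, the peak bin maps to a frequency by the same bin-times-sample-rate-over-length relation as in the answer above:

// f = peakBin * sampleRate / N, for the numbers in this example:
double peakHz = 25 * 1000.0 / 512;  // about 48.83 Hz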
Here is brute-force DFT code (OpenCL), just to verify that the FFT is returning the right results:
// a ---> b  Fourier transformation, brute force
__kernel void dft(__global float *aRe,
                  __global float *aIm,
                  __global float *bRe,
                  __global float *bIm)
{
    int id = get_global_id(0);   // thread id = output bin index
    int s = get_global_size(0);  // total threads = 512 = transform length
    double cRe = 0.0;
    double cIm = 0.0;
    double fid = (double)id;
    double fmpi2n = (-2.0 * M_PI) * fid / (double)s;
    for (int i = 0; i < s; i++)
    {
        double fi = (double)i;
        double re = cos(fmpi2n * fi);
        double im = sin(fmpi2n * fi);
        cRe += aRe[i] * re - aIm[i] * im;  // accumulate real part
        cIm += aRe[i] * im + aIm[i] * re;  // accumulate imaginary part
    }
    bRe[id] = (float)cRe;
    bIm[id] = (float)cIm;
}
and, to be sure, testing the result against the inverse transformation to check that the original input signal is recovered:
// a ---> b  inverse Fourier transformation, brute force
__kernel void idft(__global float *aRe,
                   __global float *aIm,
                   __global float *bRe,
                   __global float *bIm)
{
    int id = get_global_id(0);   // thread id = output sample index
    int s = get_global_size(0);  // total threads = 512 = transform length
    double cRe = 0.0;
    double cIm = 0.0;
    for (int i = 0; i < s; i++)
    {
        double re = cos(2.0 * M_PI * ((double)id) * ((double)i) / (double)s);
        double im = sin(2.0 * M_PI * ((double)id) * ((double)i) / (double)s);
        cRe += aRe[i] * re - aIm[i] * im;
        cIm += aRe[i] * im + aIm[i] * re;
    }
    cRe /= (double)s;   // 1/N normalization on the inverse
    cIm /= (double)s;
    bRe[id] = (float)cRe;
    bIm[id] = (float)cIm;
}
I know it's bad to run slow code on fast machines, but this looks much simpler to try and scales to many cores (2.4 ms on a 320-core GPU, including array copies).
I am reading wave files in my Java program. The right channel audio has half the samples, which happens to be 445440 samples (double amplitude values). Everything works fine except for some significant differences from the values I read in Matlab. What's bugging me is that most of the values are identical (in my program and in Matlab), but when I averaged all the elements, the results were quite far apart:
In Matlab I got 1.4581e-05, and my program gave -44567.3253.
So I started checking values until I found a different one at the 166th element!
Matlab has -6.10351562500000e-05 and I have 2.0! (The values before and after this one are identical.)
This is quite frustrating, as only a few elements in the first 300 differ! As you can imagine, I cannot physically go through all 445440 elements to understand the pattern.
I don't even know where to start looking for the issue, so I'm taking a chance by asking all the brilliant minds out there. Here's my code, if it helps:
public double[] getAmplitudes(Boolean asArrayOfDouble) {
    // bytesInASample is 2 (16-bit little endian)
    int numOfSamples = data.length / bytesInASample;
    double[] amplitudes = new double[numOfSamples];
    int pointer = 0;
    for (int i = 0; i < numOfSamples; i++) {
        double ampValue = 0;
        for (int byteNumber = 0; byteNumber < bytesInASample; byteNumber++) {
            ampValue += (double) ((data[pointer++] & 0xFF) << (byteNumber * 8)) / 32767.0;
        }
        amplitudes[i] = ampValue;
    }
    return amplitudes;
}
After this, I simply read the right channel data using the following code:
double[] rightChannelData = new double[data.length / 2];
for (int i = 0; i < data.length / 2; i++) {
    rightChannelData[i] = data[2 * i + 1];
}
I know this might be a hard question to answer without seeing the actual program and its output in contrast to the Matlab output, so do let me know if any additional information is needed.
You are masking all bytes with the term data[pointer++] & 0xFF, creating all-unsigned values. For values consisting of two bytes, you are creating int values between 0 and 65535 which, after dividing by 32767.0, yield values between 0.0 and 2.0, whereas Matlab's signed interpretation produces values in the range -1.0 to 1.0.
To illustrate this:
The short value 0xFFFE, interpreted as a signed value, is -2, and the division -2/32768.0 produces -6.10351562500000e-05, while interpreted as unsigned it is 65534, and 65534/32767.0 produces 2.0.
(Note that the negative value was divided by the absolute value of Short.MIN_VALUE rather than Short.MAX_VALUE…)
It’s not clear how you could calculate an average of -44567.3253 from that. Even for your unsigned values (between 0.0 and 2.0) that is way off.
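A hypothetical fix for the question's inner loop (keeping its variable names): assemble the two little-endian bytes, cast to short so the value is sign-extended, then normalize:

short s = (short) ((data[pointer] & 0xFF) | (data[pointer + 1] << 8));
double ampValue = s / 32768.0;  // signed result in [-1.0, 1.0)
pointer += 2;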
After all, you are better off not doing everything manually:
ShortBuffer buf = ByteBuffer.wrap(data).order(ByteOrder.LITTLE_ENDIAN)
                            .asShortBuffer();
int numOfSamples = buf.remaining();
double[] amplitudes = new double[numOfSamples];
for (int i = 0; i < numOfSamples; i++) {
    amplitudes[i] = buf.get() * (1.0 / 32768.0);
}
return amplitudes;
Since I don't know how Matlab does the normalization, I cannot guarantee that the values are the same. It's possible that the loop body has to look like this instead:
final short s = buf.get();
amplitudes[i] = s * (s < 0 ? (1.0 / 32768.0) : (1.0 / 32767.0));
I'm trying to write a GPS tracking app (akin to a jogging app) on Android, and the issue of GPS location jitter has reared its ugly head. When accuracy is FINE and within 5 meters, the position jitters 1-n meters per second. How do you distinguish or filter out this jitter from legitimate movement?
Sporypal and similar apps clearly have some way of filtering out this noise.
Any thoughts?
Could you just run the positions through a low pass filter?
Something on the order of
x(n) = (1-K)*x(n-1) + K*S(n)
where
S is your noisy samples and x the low-pass filtered samples. K is a constant between 0 and 1 which you would probably have to experiment with for best performance.
Per TK's suggestion:
My pseudocode will look awfully C-like:
float noisy_lat[128], noisy_lon[128];
float smoothed_lat[128], smoothed_lon[128];
float lat_delay = 0., lon_delay = 0.;

float smooth(float in[], float out[], int n, float K, float delay)
{
    int i;
    for (i = 0; i < n; i++) {
        *out = *in++ * K + delay * (1 - K);  // one-pole low-pass step
        delay = *out++;                      // one-sample feedback delay
    }
    return delay;  // carry the filter state into the next block
}
loop:
    Get new samples of position into noisy_lat and noisy_lon
    // LPF the noisy samples to produce smoother position data
    lat_delay = smooth(noisy_lat, smoothed_lat, 128, K, lat_delay);
    lon_delay = smooth(noisy_lon, smoothed_lon, 128, K, lon_delay);
    // Rinse. Repeat.
    go to loop
In a nutshell, this is simply a feedback integrator with a one-sample delay. If your input has low-frequency, white-ish noise on top of the desired signal, this integrator will average the input signal over time, causing the noise components to average out to near zero and leaving you with the desired signal.
How well it works will depend on how much noise your signal has and on the filter feedback factor K. As I said before, you'll have to play around a bit with the value to see which one produces the cleanest, most desirable result.
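For reference, a minimal Java version of the same one-pole filter (the names are illustrative):

/** One-pole low-pass filter: x[n] = (1 - k) * x[n-1] + k * s[n]. */
static double[] smooth(double[] noisy, double k, double initial) {
    double[] out = new double[noisy.length];
    double prev = initial;
    for (int i = 0; i < noisy.length; i++) {
        prev = (1 - k) * prev + k * noisy[i];  // feedback integrator step
        out[i] = prev;
    }
    return out;
}
// e.g. double[] smoothedLat = smooth(noisyLat, 0.2, noisyLat[0]);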
My professor gave us an assignment to measure the difference in runtimes and search sizes using linear and binary search algorithms, and the data is to be graphed.
I have the search methods store the runtime and array size as Points in an ArrayList, which is then sent to the GraphResults class for plotting. I need to convert those data points into xy coordinates first; the search size is the x-axis and the runtime is the y-axis.
As the search sizes are fixed multiples of 128 and there are only 8 sizes, I used a switch for calculating the x value, but I am looking for a more efficient way to convert the runtimes into coordinates.
Right now, I'm using nested conditionals, like this:
if (y <= 1000) {
    if (y <= 500) {
        if (y <= 250) {
            newy = yaxis - 32;        // 250ms category
        } else {
            newy = yaxis - (32 * 2);  // 500ms category
        }
    } else if (y <= 750) {
        newy = yaxis - (32 * 3);      // 750ms category
    } else {
        newy = yaxis - (32 * 4);      // 1000ms category
    }
} // end of the 1000ms tests
Right now, numbers over 5000 ms require 7 tests. Is there a more efficient way to assign a category based on the size of a number?
As you are trying to determine the range a measurement falls into, you can divide the value by the range size and then calculate the number you want to show in the graph.
By the way, there is a logic error in your code: if the value satisfies y <= 1000, the first condition evaluates to true, and the second one, for y <= 750, will never be evaluated.
Also, it seems that the higher the value range, the lower your graph point. Is that intended? (1000 -> ymax - 128 while 1 -> ymax - 32)
As an aside, if you want to compare values against uneven ranges, you can also do something like an array lookup (pseudocode):
int[] ranges = new int[] { 50, 500, 5000, 50000 };
int n;
for (n = 0; n < ranges.length && value > ranges[n]; n++) {
    // empty body: n advances until value fits into ranges[n]
}
int range = n;
int newy = yaxis - range * 32;
Note that the out-of-range index acts as the range found for a value that is bigger than the biggest value in your array.
How about newy = yaxis - 32 * ((y/250)% 8);?
I would reformat your code to something more like this:
newy = yaxis - 32 * ((y-1)/250 + 1);
This way, you're calculating the multiplier rather than choosing it manually.
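A quick worked example of that formula with integer division:

// y = 250 ms  ->  ((250 - 1) / 250 + 1) = 1  ->  newy = yaxis - 32
// y = 251 ms  ->  ((251 - 1) / 250 + 1) = 2  ->  newy = yaxis - 64
int newy = yaxis - 32 * ((y - 1) / 250 + 1);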
How do I track sections without sound in a wav file?
A small piece of software I want to develop divides a wav file, and it considers a no-volume area a dividing point.
How can a program tell that the volume of a wav file is low?
I'll use Java or MFC.
I've had success with silence detection by calculating the RMS of the signal. This is done in the following manner (assuming you have an array of audio samples):
long sumOfSquares = 0;
for (int i = startindex; i <= endindex; i++) {
    sumOfSquares += samples[i] * samples[i];
}
int numberOfSamples = endindex - startindex + 1;
double rms = Math.sqrt((double) sumOfSquares / numberOfSamples);
If the RMS is below a certain threshold, you can consider the section silent.
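A sketch of applying this across a whole file, assuming 16-bit samples in a short[] at 44.1 kHz; the chunk length and threshold are placeholders to tune:

int chunk = 4410;          // ~100 ms at 44.1 kHz
double threshold = 500.0;  // depends on recording quality; experiment
for (int start = 0; start + chunk <= samples.length; start += chunk) {
    long sumOfSquares = 0;
    for (int i = start; i < start + chunk; i++) {
        sumOfSquares += (long) samples[i] * samples[i];
    }
    double rms = Math.sqrt((double) sumOfSquares / chunk);
    if (rms < threshold) {
        // candidate dividing point for the wav splitter
    }
}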
Well, a wave file is basically a list of values representing a sound wave, discretely sampled at some rate (usually 44100 Hz). Silence is basically when the values are near 0. Just set some threshold value and look for continuous regions (say, 100 ms long) where the value stays below that threshold.
Simple silence detection is done by sequentially comparing sound chunks with some value (chosen depending on the recording quality).
Something like:
abs(track[position]) < 0.1
or
(track[position] < 0.1) && (track[position] > -0.1)
if we assume that the samples are floats in [-1, 1].
It would work better if the sound is normalized.