I'm trying to write a GPS tracking app (akin to a jogging app) on Android, and the issue of GPS location jitter has reared its ugly head. When accuracy is FINE and within 5 meters, the position still jitters by 1-n meters per second. How do you distinguish or filter out this jitter from legitimate movement?
Sporypal and similar apps clearly have some way of filtering out this noise.
Any thoughts?
Could you just run the positions through a low pass filter?
Something of the order of
x(n) = (1-K)*x(n-1) + K*S(n)
where
S is your noisy samples and x the low-pass-filtered samples. K is a constant between 0 and 1 which you will probably have to tune experimentally for the best performance.
Per TK's suggestion:
My pseudocode will look awfully C-like:

float noisy_lat[128], noisy_lon[128];
float smoothed_lat[128], smoothed_lon[128];
float lat_delay = 0.f, lon_delay = 0.f;

float smooth(float in[], float out[], int n, float K, float delay)
{
    int i;
    for (i = 0; i < n; i++) {
        *out = *in++ * K + delay * (1 - K);
        delay = *out++;
    }
    return delay;
}

loop:
    // Get new samples of position into noisy_lat and noisy_lon
    // LPF the noisy samples to produce smoother position data
    lat_delay = smooth(noisy_lat, smoothed_lat, 128, K, lat_delay);
    lon_delay = smooth(noisy_lon, smoothed_lon, 128, K, lon_delay);
    // Rinse. Repeat.
    goto loop;
In a nutshell, this is simply a feedback integrator with a one-sample delay (a one-pole IIR low-pass filter). If your input has low-frequency white-ish noise on top of the desired signal, the integrator averages the input over time, causing the noise components to average out to near zero and leaving you with the desired signal.
How well it works will depend on how much noise your signal has and on the filter feedback factor K. As I said before, you'll have to play around a bit with the value to see which one produces the cleanest, most desirable result.
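To make the idea concrete, here is a minimal, self-contained Java sketch of the same one-pole filter (the class and method names are made up for illustration, and the jittery input is synthetic):

```java
public class LowPassDemo {
    // One-pole IIR low-pass: out[n] = K*in[n] + (1-K)*out[n-1]
    static double[] smooth(double[] in, double k, double init) {
        double[] out = new double[in.length];
        double delay = init;
        for (int i = 0; i < in.length; i++) {
            delay = k * in[i] + (1 - k) * delay;
            out[i] = delay;
        }
        return out;
    }

    public static void main(String[] args) {
        // True position 10.0 plus alternating +/-0.5 metres of "jitter"
        double[] noisy = new double[50];
        for (int i = 0; i < noisy.length; i++)
            noisy[i] = 10.0 + ((i % 2 == 0) ? 0.5 : -0.5);
        double[] clean = smooth(noisy, 0.2, noisy[0]);
        // After settling, the filtered value stays near the true position
        System.out.println(clean[49]);
    }
}
```

With K = 0.2 the alternating jitter is attenuated by roughly an order of magnitude, at the cost of the filter lagging behind genuine movement; a larger K means less smoothing but a faster response.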
Related
I'm currently doing a practicum, and my boss wants me to have an FFT, programmed in Java, ready by the end of the week.
I already have FFT code from Princeton University: http://introcs.cs.princeton.edu/java/97data/FFT.java
I implemented and extended this code in my project, so that I can now read the binary input of a signal, run the FFT on the sample values, and then compute the magnitudes.
Now I come to my problem.
I input the following values, which I generated with final double d = Math.sin(i); in a loop running 8 times (this is just for testing purposes; next week I'll have to input real values).
0.0
0.8414709848078965
0.9092974268256817
0.1411200080598672
-0.7568024953079282
-0.9589242746631385
-0.27941549819892586
0.6569865987187891
so those values come from a pure sine wave (by "pure sine" I mean a sine with exactly one frequency, for example 50 Hz).
The output is now
0.553732750242242
2.3946469565385193 - 2.0970118573701813i
-1.386684423934684 + 0.9155598966338983i
-0.8810419659226628 + 0.28041399267903344i
-0.8075738836045867
-0.8810419659226628 - 0.28041399267903366i
-1.386684423934684 - 0.9155598966338983i
2.394646956538519 + 2.0970118573701817i
And the magnitudes of the output
0.553732750242242
3.183047718211326
1.6616689248786416
0.9245901540720989
0.8075738836045867
0.924590154072099
1.6616689248786416
3.183047718211326
Now I actually expected the output values to be 0 at every frequency bin up until I reach the frequency that dominates the pure sine, where the output should be > 0 (for example at 50 Hz). At least that's what my boss expected when he gave me this task.
Summary:
So this is what I'm struggling with. I've read another thread asking about a similar issue, but there are still unanswered questions for me. What am I supposed to do with the given output data? How do I find the most occurring frequency?
I could really use some help, or an explanation of where my thinking goes wrong.
Thanks for listening...
Computing a 512-point Fourier transform after applying a simple (linear-ramp) weighting function
w(i) = ((float)i - (float)(n-1f)/2f)
gives a peak at i = 25 (the maximum magnitude in the result array).
The input was also refined with more information: the frequency of the sine-wave generator (50 Hz), the sampling rate (1 kHz, i.e. 0.001 seconds per sample), and the 2*PI constant:
The initialization now looks like this (in sin(2*PI*F*i) notation):
for (int i = 0; i < n; i++)
{
    // 0.001f = sampling period (simulation), 50f = F,
    // 2f*Math.PI = the 2*PI constant, i = sample bin
    v1[i] = ((float)i-(float)(n-1f)/2f)*
            (float)Math.Sin(0.001f*50f*2f*Math.PI*(float)i);
}
and result peak looks like:
v2[22]  2145.21852033773
v2[23]  3283.36245333956
v2[24]  6368.06249969329
v2[25] 28160.6579468591 <-- peak
v2[26] 23231.0481898687
v2[27]  1503.8455705291
v2[28]  1708.68502071037
so the peak is at bin
25
These bins are in frequency space, and the input was sampled at 1 kHz, so you can observe at most a 500 Hz harmonic signal (half the sampling rate, the Nyquist limit). The result range is 0 to N/2, with the other half mirrored, so dividing the observable frequency range (500 Hz) by the usable result range (256 bins for N = 512) and multiplying by the bin index gives
25 * (500 / 256) ≈ 48.83 Hz
A big portion of the error must come from the weighting function applied at the beginning, but v2[26] has a higher value than v2[24], so the actual peak is somewhere closer to v2[26], and a smoothed graph of these points should show 50 Hz.
Ignore the first element of the result array: it represents the constant signal level (infinite wavelength, zero frequency).
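The bin-to-frequency arithmetic above can be captured in a couple of lines of Java (a hypothetical helper, shown here only to illustrate the formula bin * sampleRate / fftLength):

```java
public class BinFrequency {
    // Frequency (Hz) represented by FFT bin k: k * sampleRate / fftLength
    static double binToHz(int bin, double sampleRateHz, int fftLength) {
        return bin * sampleRateHz / fftLength;
    }

    public static void main(String[] args) {
        // 512-point FFT of a 1 kHz-sampled signal: bin 25 -> about 48.83 Hz
        System.out.println(binToHz(25, 1000.0, 512));
    }
}
```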
Here is brute-force DFT code (OpenCL), just to be sure the FFT is returning the right results:
//a---->b Fourier Transformation brute-force
__kernel void dft(__global float *aRe,
                  __global float *aIm,
                  __global float *bRe,
                  __global float *bIm)
{
    int id = get_global_id(0);   // thread id
    int s = get_global_size(0);  // total threads = 512
    double cRe = 0.0;
    double cIm = 0.0;
    double fid = (double)id;
    double fmpi2n = (-2.0 * M_PI) * fid / (double)s;
    for (int i = 0; i < s; i++)
    {
        double fi = (double)i;
        double re = cos(fmpi2n * fi);
        double im = sin(fmpi2n * fi);
        cRe += aRe[i] * re - aIm[i] * im;
        cIm += aRe[i] * im + aIm[i] * re;
    }
    bRe[id] = cRe;
    bIm[id] = cIm;
}
and to be sure, testing result against inverse-transformation to check if original input signal is achieved again:
// a--->b inverse Fourier Transformation brute force
__kernel void idft(__global float *aRe,
                   __global float *aIm,
                   __global float *bRe,
                   __global float *bIm)
{
    int id = get_global_id(0);   // thread id
    int s = get_global_size(0);  // total threads = 512
    double cRe = 0.0;
    double cIm = 0.0;
    for (int i = 0; i < s; i++)
    {
        double re = cos(2.0 * M_PI * ((double)id) * ((double)i) / (double)s);
        double im = sin(2.0 * M_PI * ((double)id) * ((double)i) / (double)s);
        cRe += aRe[i] * re - aIm[i] * im;
        cIm += aRe[i] * im + aIm[i] * re;
    }
    cRe /= (double)s;
    cIm /= (double)s;
    bRe[id] = cRe;
    bIm[id] = cIm;
}
I know it's bad to run slow code on fast machines, but this was much simpler to try, and it scales to many cores (2.4 ms on a 320-core GPU, including array copies).
I'd like to use the accelerometer in my car and, from the accelerometer values, draw a trajectory in Excel or any other platform, with the origin at the first position value, that is, the beginning of the path.
How can I achieve this? Please give me details; I don't have any physics background.
Please help, thanks in advance.
PS: I already programmed the SensorListener...
I have this for instance:
@Override
public void onSensorChanged(SensorEvent event){
    if(last_values != null){
        float dt = (event.timestamp - last_timestamp) * NS2S;
        for(int index = 0; index < 3; ++index){
            acceleration[index] = event.values[index];
            velocity[index] += (acceleration[index] + last_values[index])/2 * dt;
            position[index] += velocity[index] * dt;
        }
        vxarr.add(velocity[0]);
        vyarr.add(velocity[1]);
        vzarr.add(velocity[2]);
        axarr.add(acceleration[0]);
        ayarr.add(acceleration[1]);
        azarr.add(acceleration[2]);
    }
    else{
        last_values = new float[3];
        acceleration = new float[3];
        velocity = new float[3];
        position = new float[3];
        velocity[0] = velocity[1] = velocity[2] = 0f;
        position[0] = position[1] = position[2] = 0f;
    }
    xarr.add(position[0]);
    yarr.add(position[1]);
    zarr.add(position[2]);
    tvX.setText(String.valueOf(acceleration[0]));
    tvY.setText(String.valueOf(acceleration[1]));
    tvZ.setText(String.valueOf(acceleration[2]));
    last_timestamp = event.timestamp;
}
but when I draw a circle with my phone, the trajectory I get is wrong:
Sometimes I have only negative values and sometimes only positive values; I never get both negative AND positive values, which I would need to trace a circle.
Acceleration is the derivative of velocity with respect to time (in other words, the rate of change of velocity); velocity is the derivative of position with respect to time. Therefore, acceleration is the second derivative of position. Conversely, position is the second antiderivative of acceleration. You could take the accelerometer measurements and do the double integration over time to obtain the positions for your trajectory, except for two problems:
1) It's an indefinite integral, i.e. there are infinitely many solutions (see e.g. https://en.wikipedia.org/wiki/Antiderivative). In this context, it means that your measurements tell you nothing about the initial speed. But you can get it from GPS (with limited accuracy) or from user input in some form (e.g. assume the speed is zero when the user hits some button to start calculating the trajectory).
2) Error accumulation. Suppose the accelerometer error in any given direction a = 0.01 m/s^2 (a rough guess based on my phone). Over t = 5 minutes, this gives you an error of a*t^2/2 = 450 meters.
So you can't get a very accurate trajectory, especially over a long period of time. If that doesn't matter to you, you may be able to use code from the other answer, or write your own etc. but first you need to realize the very serious limitations of this approach.
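The error estimate in (2) is just the constant-acceleration formula a*t^2/2 applied to the sensor bias; a tiny Java check (the class and method are hypothetical, the numbers come from the answer above):

```java
public class DriftEstimate {
    // Position error from double-integrating a constant bias a for t seconds
    static double drift(double biasMs2, double seconds) {
        return biasMs2 * seconds * seconds / 2;
    }

    public static void main(String[] args) {
        // 0.01 m/s^2 of accelerometer error over 5 minutes -> about 450 m
        System.out.println(drift(0.01, 5 * 60) + " m");
    }
}
```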
How to calculate the position of the device using the accelerometer values?
Physicists like to think of the position in space of an object at a given time as a mathematical function p(t) with values ( x(t), y(t), z(t) ). The velocity v(t) of that object turns out to be the first derivative of p(t), and the acceleration a(t) nicely fits in as the first derivative of v(t).
From now on, we will just look at one dimension, the other two can be treated in the same way.
In order to get the velocity from the acceleration, we have to "reverse" the operation using our known initial values (without them we would not obtain a unique solution).
Another problem we are facing is that we don't have the acceleration as a function. We just have sample values handed to us more or less frequently by the accelerometer sensor.
So, with a prayer to Einstein, Newton and Riemann, we take these values and think of the acceleration function as a lot of small lines glued together. If the sensor fires often, this will be a very good approximation.
Our problem now has become much simpler: the (indefinite) integral (= antiderivative) of a linear function
f(t) = m*t + b is F(t) = m/2 * t^2 + b*t + c, where c can be chosen to satisfy the initial condition (zero velocity in our case).
Let's use the point-slope form to model our approximation (with time values t0 and t1 and corresponding acceleration values a0 and a1):
a(t) = a0 + (a1 - a0)/(t1 - t0) * (t - t0)
Then we get (first calculate v(t0) + "integral-between-t0-and-t-of-a", then substitute t1 for t)
v(t1) = v(t0) + (a1 + a0) * (t1 - t0) / 2
Using the same logic, we also get a formula for the position:
p(t1) = p(t0) + (v(t1) + v(t0)) * (t1 - t0) / 2
Translated into code, where last_values is used to store the old acceleration values:
float dt = (event.timestamp - last_timestamp) * NS2S;
for(int index = 0; index < 3; ++index){
    acceleration[index] = event.values[index];
    float last_velocity = velocity[index];
    velocity[index] += (acceleration[index] + last_values[index])/2 * dt;
    position[index] += (velocity[index] + last_velocity)/2 * dt;
    last_values[index] = acceleration[index];
}
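As a quick, self-contained sanity check of the trapezoid formulas (synthetic data, not the Android sensor API): integrating a constant acceleration should reproduce p = a*t^2/2 essentially exactly, because the trapezoid rule is exact for linear functions.

```java
public class TrapezoidCheck {
    // Trapezoidal double integration of a constant acceleration over n steps of dt
    static double integrate(double a, double dt, int n) {
        double lastA = a, v = 0.0, p = 0.0;
        for (int i = 0; i < n; i++) {
            double lastV = v;
            v += (a + lastA) / 2 * dt;   // velocity update, as in the answer
            p += (v + lastV) / 2 * dt;   // position update, as in the answer
            lastA = a;
        }
        return p;
    }

    public static void main(String[] args) {
        // 2 m/s^2 for 1 second (100 steps of 10 ms): p = 2*1^2/2 = 1 m
        System.out.println(integrate(2.0, 0.01, 100));
    }
}
```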
**EDIT:**
All of this is only useful for us as long as our device is aligned with the world's coordinate system. Which will almost never be the case. So before calculating our values like above, we first have to transform them to world coordinates by using something like the rotation matrix from SensorManager.getRotationMatrix().
There is a code snippet in this answer by Csaba Szugyiczki which shows how to get the rotation matrix.
But as the documentation on getRotationMatrix() states
If the device is accelerating, or placed into a strong magnetic field, the returned matrices may be inaccurate.
...so I'm a bit pessimistic about using it while driving a car.
My particular case of summing digits deals with colors represented as integers. The Java method BufferedImage.getRGB returns the image in 0x00RRGGBB format. I'm writing a function that gives you a grayscale (color-independent) sum of the colors in the image. Currently, my operation looks very naive:
//Just pseudocode
int sum = 0;
for(x->width) {
    for(y->height) {
        int pixel = image.getRGB(x,y);
        sum += (pixel&0x00FF0000)+(pixel&0x0000FF00)+(pixel&0x000000FF);
    }
}
//The average value for any color then equals:
float avg = sum/(width*height*3);
I was wondering if I could do it even faster with some bit-shifting logic. And I am mostly asking this question to learn more about bit-shifting as I doubt any answer will speed up the program really significantly.
R, G and B do not contribute equally to the perceived intensity. A better way to sum things up than this:
sum+=(pixel&0x00FF0000)+(pixel&0x0000FF00)+(pixel&0x000000FF);
would be, with the necessary bit-shifting and weighting (assuming 00RRGGBB):
sum += ((pixel&0x00FF0000)>>16) * .30 / 255
     + ((pixel&0x0000FF00)>> 8) * .59 / 255
     +  (pixel&0x000000FF)      * .11 / 255;
You might want to leave out the /255 part here and replace the floating-point weights with scaled-up integers (such as 30, 59 and 11), bearing in mind that you'll then need a long sum to keep overflow at bay.
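A sketch of that integer-weight variant (the class and method names are hypothetical; 30/59/11 are the usual luma coefficients scaled by 100, so the final average must be divided by 100 as well):

```java
public class LumaSum {
    // Weighted luma contribution of one 0x00RRGGBB pixel, scaled up by 100
    static long weighted(int pixel) {
        int r = (pixel >> 16) & 0xFF;
        int g = (pixel >> 8) & 0xFF;
        int b = pixel & 0xFF;
        return 30L * r + 59L * g + 11L * b;   // long math avoids int overflow
    }

    public static void main(String[] args) {
        // Pure white: (30 + 59 + 11) * 255 = 25500, i.e. luma 255 after /100
        System.out.println(weighted(0x00FFFFFF) / 100);
    }
}
```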
I'm getting a perplexing result doing math with floats. I have code that should never produce a negative number producing a negative number, which causes NaNs when I try to take the square root.
This code appears to work very well in tests. However, when operating on real-world numbers (i.e. potentially very small values, with exponents of minus seven or eight), sum eventually becomes negative, leading to the NaNs. In theory, the subtraction step only ever removes a number that has already been added to the sum; is this a floating-point error problem? Is there any way to fix it?
The code:
public static float[] getRmsFast(float[] data, int halfWindow) {
    int n = data.length;
    float[] result = new float[n];
    float sum = 0.000000000f;
    for (int i = 0; i < 2*halfWindow; i++) {
        float d = data[i];
        sum += d * d;
    }
    result[halfWindow] = calcRms(halfWindow, sum);
    for (int i = halfWindow+1; i < n-halfWindow; i++) {
        float oldValue = data[i-halfWindow-1];
        float newValue = data[i+halfWindow-1];
        sum -= (oldValue*oldValue);
        sum += (newValue*newValue);
        float rms = calcRms(halfWindow, sum);
        result[i] = rms;
    }
    return result;
}

private static float calcRms(int halfWindow, float sum) {
    return (float) Math.sqrt(sum / (2*halfWindow));
}
For some background:
I am trying to optimize a function that calculates a rolling root mean square (RMS) over signal data. The optimization is pretty important; it's a hot spot in our processing. The basic equation is simple - http://en.wikipedia.org/wiki/Root_mean_square - sum the squares of the data over the window, divide the sum by the size of the window, then take the square root.
The original code:
public static float[] getRms(float[] data, int halfWindow) {
    int n = data.length;
    float[] result = new float[n];
    for (int i = halfWindow; i < n - halfWindow; i++) {
        float sum = 0;
        for (int j = -halfWindow; j < halfWindow; j++) {
            sum += (data[i + j] * data[i + j]);
        }
        result[i] = calcRms(halfWindow, sum);
    }
    return result;
}
This code is slow because it reads the entire window from the array at each step, instead of taking advantage of the overlap in the windows. The intended optimization was to use that overlap, by removing the oldest value and adding the newest.
I've checked the array indices in the new version pretty carefully. It seems to be working as intended, but I could certainly be wrong in that area!
Update:
With our data, it was enough to change the type of sum to double. I don't know why that didn't occur to me. But I left the negative check in. And FWIW, I was also able to implement a solution where recomputing the sum every 400 samples gave great run time and enough accuracy. Thanks.
is this a floating-point error problem?
Yes it is. Due to rounding, you could well get negative values after subtracting a previous summand.
For example:
float sum = 0f;
sum += 1e10;
sum += 1e-10;
sum -= 1e10;
sum -= 1e-10;
System.out.println(sum);
On my machine, this prints
-1.0E-10
even though mathematically, the result is exactly zero.
This is the nature of floating point: 1e10f + 1e-10f gives exactly the same value as 1e10f.
As far as mitigation strategies go:
1) You could use double instead of float for enhanced precision.
2) From time to time, you could fully recompute the sum of squares to reduce the effect of rounding errors.
3) When the sum goes negative, you could either do a full recalculation as in (2) above, or simply set the sum to zero. The latter is safe, since you know that you'll be pushing the sum towards its true value, and never away from it.
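A minimal sketch of option (3), clamping the sum before the square root (SafeRms is a hypothetical stand-in for the original calcRms):

```java
public class SafeRms {
    // Guard against a slightly negative sum caused by rounding drift
    static float calcRms(int halfWindow, float sum) {
        float clamped = Math.max(sum, 0f);
        return (float) Math.sqrt(clamped / (2 * halfWindow));
    }

    public static void main(String[] args) {
        // A tiny negative sum from cancellation no longer produces NaN
        System.out.println(calcRms(200, -1e-10f));
    }
}
```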
Try checking your indices in the second loop. The last value of i will be n-halfWindow-1 and n-halfWindow-1+halfWindow-1 is n-2.
You may need to change the loop to for (int i=halfWindow+1; i<n-halfWindow+1; i++).
You are running into issues with floating-point numbers because you believe that they are just like mathematical real numbers. They are not: they are approximations of real numbers, mapped onto discrete values, with a few special rules added into the mix.
Take the time to read up on what every programmer should know about floating-point numbers if you intend to use them often. Without some care, the differences between floating-point numbers and real numbers can come back and bite you in the worst ways.
Or just take my word for it and know that every floating-point number is "pretty close" to the requested value, with some being dead-on accurate but most being only "mostly" accurate. This means you need to account for that representation error and keep it in mind after the calculations, or risk believing you have an exact result at the end of the computation (which you don't).
I am working on an Android project where I am using an FFT to process accelerometer data, and I have problems understanding how these things actually work.
I am using the JTransforms library by Piotr Wendykier in the following way:
int length = vectors.length;
float[] input = new float[length*2];
for(int i = 0; i < length; i++){
    input[i] = vectors[i];
}
FloatFFT_1D fftlib = new FloatFFT_1D(length);
fftlib.complexForward(input);
float outputData[] = new float[(input.length+1)/2];
if(input.length%2==0){
    for(int i = 0; i < length/2; i++){
        outputData[i] = (float) Math.sqrt(Math.pow(input[2*i], 2) + Math.pow(input[2*i+1], 2));
    }
}else{
    for(int i = 0; i < length/2+1; i++){
        outputData[i] = (float) Math.sqrt(Math.pow(input[2*i], 2) + Math.pow(input[2*i+1], 2));
    }
}
List<Float> output = new ArrayList<Float>();
for (float f : outputData) {
    output.add(f);
}
The result is an array with the following data.
I have a problem interpreting the output data. The data come from a 10-second interval, and the sampling frequency is 50 Hz. While capturing, I was moving the phone up and down in my hand roughly every 3/4 of a second, so is it possible that the extreme at about x value 16 corresponds to the strongest component of the signal?
I need to obtain the frequency of the strongest component in the signal.
The frequency represented by each FFT result bin is the bin number times the sample rate divided by the length of the FFT (convolved with a sinc function that gives it non-zero width, to get a bit technical). If your sample rate is 50 Hz and your FFT length is 512, then bin 16 of the FFT result represents about 1.56 Hz, which is close to a period of 0.7 seconds.
The spike at bin 0 (DC) might represent the non-zero force of gravity on the accelerometer.
Since you have real data, you should pass these values to the realForward function (not complexForward), as stated here.
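Once you have the magnitude array (from either transform), finding the strongest component is just a peak search over bins 1..N/2, converting the winning bin back to hertz with bin * sampleRate / fftLength. A hedged Java sketch (the names are made up, and bin 0 is skipped because it holds the DC component):

```java
public class PeakFinder {
    // Frequency (Hz) of the largest magnitude bin, ignoring bin 0 (DC)
    static double dominantHz(float[] magnitudes, double sampleRateHz, int fftLength) {
        int best = 1;
        for (int i = 2; i < magnitudes.length; i++)
            if (magnitudes[i] > magnitudes[best]) best = i;
        return best * sampleRateHz / fftLength;
    }

    public static void main(String[] args) {
        // 512-point FFT at 50 Hz sampling; suppose bin 16 dominates
        float[] mags = new float[256];
        mags[16] = 100f;
        System.out.println(dominantHz(mags, 50.0, 512));
    }
}
```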