Velocity Verlet algorithm not conserving energy - Java

I was under the impression that the algorithm should conserve energy if the system being modelled does. I'm modelling the solar system, which should conserve energy. The program conserves angular momentum and does produce stable orbits, but the total energy (kinetic + gravitational potential) oscillates around some baseline. The oscillations are significant. Are there common reasons why this might happen?
The model assumes the planets are point masses on circular orbits (I've also tried elliptical orbits and the energy still oscillates) and uses Newtonian mechanics. I can't think what other features of the program might be affecting the outcome.
If it is just expected that the energy oscillates, what causes that?

Look up the Verlet-Störmer paper by Hairer et al. (Geometric numerical integration illustrated by the Störmer/Verlet method). There should be several sources online.
In short, a symplectic integrator preserves a Hamiltonian and thus energy, but it is a modified Hamiltonian. If the method is correctly initialized, the modification is a perturbation of order O(h²), where h is the step size. Incorrect initialization gives a perturbation of O(h), while the observed oscillation should still have an amplitude of O(h²).
Thus the observed pattern of an oscillating energy as computed by the physical formulas is completely normal and expected. An error would be observed if the energy were to (rapidly) deviate from this relatively stable pattern.
An easy, but slightly inefficient, method to get an order-4 symplectic integrator from the order-2 Verlet method is to replace
Verlet(h)
by
Verlet4(h) {
Verlet(b0*h);
Verlet(-b1*h);
Verlet(b0*h);
}
where b0=1/(2-2^(1/3))=1.35120719196… and b1=2*b0-1=1.70241438392…. This is called a "composition method".
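For concreteness, here is a minimal Java sketch of one velocity-Verlet step together with that composition. The flat state arrays x, v, a and the computeAccelerations() routine are assumptions for illustration, not code from the question:

void verlet(double h) {
    // assumes a[] matches x[] on entry (call computeAccelerations() once at startup)
    for (int i = 0; i < n; i++) {
        v[i] += 0.5 * h * a[i];  // half kick
        x[i] += h * v[i];        // drift
    }
    computeAccelerations();      // refresh a[] from the new positions
    for (int i = 0; i < n; i++) {
        v[i] += 0.5 * h * a[i];  // second half kick
    }
}

void verlet4(double h) {
    double b0 = 1.0 / (2.0 - Math.cbrt(2.0));  // 1.35120719196...
    double b1 = 2.0 * b0 - 1.0;                // 1.70241438392...
    verlet(b0 * h);
    verlet(-b1 * h);   // note the negative inner step
    verlet(b0 * h);
}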

Merged from the comments:
For a full gravitational N-body problem, I don't think any numerical integrator will be symplectic. Velocity Verlet isn't symplectic even for a single point orbiting a center (easy to check, since that case has a trivial analytical solution with g = v^2/R). So I suggest trying a higher-order integrator (such as Runge-Kutta), and if the energy deviations almost go away (meaning the calculations are generally correct), you can rescale the combined kinetic energy to keep the total energy conserved explicitly. Specifically, you compute the updated Ekin_actual and Ekin_desired = Etotal_initial - Epotential, and scale all velocities by sqrt(Ekin_desired / Ekin_actual).
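A minimal sketch of that rescaling step, assuming the three energies have already been computed and vx, vy hold the velocities (names are illustrative):

double ekinDesired = etotalInitial - epotential;
double s = Math.sqrt(ekinDesired / ekinActual);
for (int i = 0; i < n; i++) {
    vx[i] *= s;  // one common factor preserves every body's direction
    vy[i] *= s;
}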

Related

GPS data comparison after smoothing

I'm trying to compare multiple algorithms that are used to smooth GPS data. I'm wondering what the standard way is to compare the results to see which one provides better smoothing.
I was thinking of a machine learning approach: create a car model based on a classifier and check which track provides better behaviour.
For those with more experience in this area, is this a good approach? Are there other ways to do this?
Generally, there is no universally valid way of comparing two datasets, since it completely depends on the applied/required quality criterion.
For your approach
I was thinking of a machine learning approach: create a car model based on a classifier and check which track provides better behaviour.
this means that you will need to define your term "better behaviour" mathematically.
One possible quality criterion for your application is as follows (it consists of two parts that express opposing quality aspects):
First part (deviation from raw data): Compute the RMSE (root mean squared error) between the smoothed data and the raw data. This gives you a measure of the deviation of your smoothed track from the given raw coordinates: the error (RMSE) increases if you smooth more and decreases if you smooth less.
Second part (track smoothness): Compute the mean absolute lateral acceleration that the car would experience along the track (the second derivative). This decreases if you smooth more and increases if you smooth less, i.e., it behaves contrary to the RMSE.
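A sketch of both criteria in Java, assuming the raw and smoothed tracks are resampled to the same length and projected to a local metric frame (meters), and using the discrete second difference as a stand-in for the lateral acceleration:

static double rmse(double[][] raw, double[][] smooth) {
    double sum = 0;
    for (int i = 0; i < raw.length; i++) {
        double dx = raw[i][0] - smooth[i][0];
        double dy = raw[i][1] - smooth[i][1];
        sum += dx * dx + dy * dy;  // squared deviation per point
    }
    return Math.sqrt(sum / raw.length);
}

static double meanAbsAccel(double[][] p) {
    double sum = 0;
    for (int i = 1; i < p.length - 1; i++) {
        double ax = p[i - 1][0] - 2 * p[i][0] + p[i + 1][0];  // second difference, x
        double ay = p[i - 1][1] - 2 * p[i][1] + p[i + 1][1];  // second difference, y
        sum += Math.hypot(ax, ay);
    }
    return sum / (p.length - 2);
}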
Result evaluation:
(1) Find a sequence of your data where you know that the underlying GPS track is a straight line or where the tracked object is not moving. Note that for those tracks the (lateral) acceleration is zero by definition(!).
For these, compute the RMSE and mean absolute lateral acceleration.
For approaches that yield (almost) zero acceleration there, the remaining RMSE results purely from measurement inaccuracies!
(2) Plot the results in a coordinate system with the RMSE on the x axis and the mean acceleration on the y axis.
(3) Pick all approaches that have an RMSE similar to what you found in step (1).
(4) From those approaches, pick the one(s) with the smallest acceleration. Those give you the smoothest track, with an error explained by measurement inaccuracies!
(5) You're done :)
I have no experience in this topic, but I have a few things in mind that may help you.
You know it is a car. Since the data is generated by a car, you can define a set of properties of a car. For example, if a car is moving at a speed above 50 km/h, then the angle of a corner should be at least 110 degrees. I am absolutely guessing at the values, but if you do a little research I am sure you will be able to define such properties. The next thing you can do is test how well each approximation fits the car properties and choose the best one.
Raw data. I assume you are testing all methods on part of a given road. You can generate a "raw GPS track" - a track that best fits the movement of a car. Google Maps may help you generate such a track, or some GPS device with higher accuracy. Then you measure the distance between each approximation and your generated track - the one with the minimum distance wins.
I think you can easily match the coordinates after address conversion, because an address has a street, area, and city, so you can easily match at different radii.
Take a look at this paper that discusses comparing machine learning algorithms:
"Choosing between two learning algorithms
based on calibrated tests" available at:
http://www.cs.waikato.ac.nz/ml/publications/2003/bouckaert-calibrated-tests.pdf
Also check out this paper:
"Bayesian Comparison of Machine Learning Algorithms on Single and
Multiple Datasets" available at:
http://www.jmlr.org/proceedings/papers/v22/lacoste12/lacoste12.pdf
Note: From the question, it appears you are looking for the best way to compare the results of machine learning algorithms, not for additional machine learning algorithms that may implement this feature.
Machine learning is not a well-suited approach for this task; you would have to define what good smoothing is...
In principle, your task cannot be solved by an algorithm that gives a general answer, because every smoothing destroys the original data by some amount and adds invented positions, and different systems/humans that use the smoothed data react differently to those changes.
The question is: What do you want to achieve with smoothing?
Why do you need smoothing? (Have you forgotten to implement or enable a stand-still filter that eliminates movement while the vehicle is standing still? Without one, GPS introduces jumping locations during stand-still.)
The GPS chip already has built-in (best possible?) real-time smoothing using a Kalman filter, which on the one hand has more information than a post-processing smoothing algorithm, and on the other hand has less.
So next you have to ask yourself: are you comparing post-processing smoothing algorithms or real-time algorithms? (Probably post-processing.) Comparing a real-time smoothing algorithm with a post-processing smoothing algorithm is not fair.
Again: what do you expect from smoothed data - that it looks somewhat fine, but unrealistic, like photoshopped models in TV advertisements?
What is good smoothing? Near to the real vehicle position, which nobody ever knows, or a curve with low acceleration?
I would prefer a smoothing algorithm that produces the curve nearest to the real (usually unknown) vehicle trajectory.
Or you might just think it should somehow look beautiful: in that case, overlay the curves in different colors, display them on a satellite image map, and let a team of humans (experts at least owning and driving their own car) decide what looks good and realistic.
We humans have the best multi-purpose pattern-matching algorithm built in.
Again, why smooth? For display on a map, to please the humans that look at it?
Or to feed the smoothed tracks to other algorithms that have problems with the original data?
For pleasing humans I have given an answer above.
For pleasing other algorithms:
What do they need? Nearer positions? Or better course/direction values between points?
What attributes do you want to smooth: only the latitude/longitude coordinates, or also the speed and course values?
I have much professional experience with GPS tracks and recommend just removing every location under 7 km/h and keeping the rest as it is. In most cases there is no need for further smoothing.
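A sketch of that filter; the Fix record with a receiver-reported speed in m/s is a hypothetical type for illustration:

import java.util.ArrayList;
import java.util.List;

class Fix { double lat, lon, speed; }  // hypothetical minimal record

static List<Fix> dropStandStill(List<Fix> track) {
    double minSpeed = 7.0 / 3.6;  // 7 km/h expressed in m/s
    List<Fix> kept = new ArrayList<>();
    for (Fix f : track) {
        if (f.speed >= minSpeed) kept.add(f);  // drop near-standstill jitter
    }
    return kept;
}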
Otherwise it gets expensive:
A possible solution:
1) You arrange a 2000€ reference GPS receiver delivered with a magnetic vehicle roof antenna (e.g., a Hemisphere GPS receiver) and use that as the reference.
2) You use a consumer GPS of the kind usually used for your task (smartphone, etc.).
With both mounted inside the car, drive some test tracks: some in good conditions (highways), but more in very bad ones - strong curves combined with big houses left and right, and through a tunnel, a straight one and a curved one, if you have them.
3) Apply the smoothing algorithms to the consumer GPS tracks.
4) Compare the smoothed tracks to the reference track by matching pairs of positions and finally calculating the RMSE (root mean squared error).
Difficulties
Matching two positions: hopefully the timestamps can be matched exactly, which is usually not the case (a 0.5 s offset is possible).
Think about what you do when there is a GPS outage.
Consider first displaying a raw track and identifying what kind of unsmoothed data is not suitable or nice looking. (You could later post the pictures here.)
What about using the good old Kalman filter!

Fast multi-body gravity algorithm?

I am writing a program to simulate an n-body gravity system, whose precision is arbitrarily good depending on how small a step of "time" I take between each step. Right now, it runs very quickly for up to 500 bodies, but after that it gets very slow, since it has to run through an algorithm determining the force applied between each pair of bodies on every iteration. That is n(n-1)/2 pairs, i.e. O(n^2), so it's not surprising that it gets very bad very quickly. I guess the most costly operation is that I determine the distance between each pair by taking a square root. So, in pseudocode, this is how my algorithm currently runs:
for (i = 1 to number of bodies - 1) {
    for (j = i + 1 to number of bodies) {
        (determine the force between objects i and j,
         whose most costly operation is a square root)
    }
}
So, is there any way I can optimize this? Any fancy algorithms to reuse the distances from past iterations with fast modification? Are there any lossy ways to reduce this problem? Perhaps by ignoring pairs of objects whose x or y separation (it's in 2 dimensions) exceeds a certain amount, as determined by the product of their masses? Sorry if it sounds like I'm rambling, but is there anything I could do to make this faster? I would prefer to keep it arbitrarily precise, but if there are solutions that can reduce the complexity of this problem at the cost of a bit of precision, I'd be interested to hear about them.
Thanks.
Take a look at this question. You can divide your objects into a grid, and use the fact that many faraway objects can be treated as a single object for a good approximation. The mass of a cell is equal to the sum of the masses of the objects it contains. The centre of mass of a cell can be treated as the centre of the cell itself, or more accurately the barycenter of the objects it contains. In the average case, I think this gives you O(n log n) performance, rather than O(n^2), because you still need to calculate the force of gravity on each of n objects, but each object only interacts individually with those nearby.
Assuming you're calculating the distance with r^2 = x^2 + y^2, and then calculating the force with F = G*m1*m2 / r^2, you don't need to perform a square root at all. If you do need the actual distance, you can use a fast inverse square root. You could also use fixed-point arithmetic.
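A sketch of that idea; the bodies a and b with mass/x/y fields and the constant G are assumptions for illustration. The magnitude needs only r^2, and the vector components can fold the normalization into a single 1/r^3 factor:

double dx = b.x - a.x, dy = b.y - a.y;
double r2 = dx * dx + dy * dy;
double fMag = G * a.mass * b.mass / r2;  // magnitude: no square root needed
// The x/y components still need a direction; 1/r^3 combines the
// division by r^2 with the normalization by r:
double invR3 = 1.0 / (r2 * Math.sqrt(r2));  // candidate for a fast inverse sqrt
double fx = G * a.mass * b.mass * dx * invR3;
double fy = G * a.mass * b.mass * dy * invR3;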
One good lossy approach would be to run a clustering algorithm to cluster the bodies together.
There are some clustering algorithms that are fairly fast, and the trick will be to not run the clustering algorithm every tick. Instead run it every C ticks (C>1).
Then for each cluster, calculate the forces between all bodies in the cluster, and then for each cluster calculate the forces between the clusters.
This will be lossy but I think it is a good approach.
You'll have to fiddle with:
which clustering algorithm to use: Some are faster, some are more accurate. Some are deterministic, some are not.
how often to run the clustering algorithm: running it less will be faster, running it more will be more accurate.
how small/large to make the clusters: most clustering algorithms allow you some input on the size of the clusters. The larger you allow the clusters to be, the faster but less accurate the output will be.
So it's going to be a game of speed vs. accuracy, but at least this way you will be able to sacrifice a bit of accuracy for some speed gains - with your current approach there's nothing you can really tweak at all.
You may want to try a less precise version of the square root. You probably don't need full double precision. Especially if the order of magnitude of your coordinate system is normally the same, you can use a truncated Taylor series to estimate the square root pretty quickly without giving up too much accuracy.
There is a very good approximation to the n-body problem that is much faster (O(n log n) vs. O(n^2) for the naive algorithm) called Barnes-Hut. Space is subdivided into a hierarchical grid, and when computing the force contribution of distant masses, several masses can be treated as one. There is an accuracy parameter that can be tweaked depending on how much accuracy you're willing to sacrifice for computation speed.
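The heart of Barnes-Hut is the opening test. A sketch, assuming a quadtree Node that stores its cell side length and the aggregated center of mass of its contents (names are illustrative):

boolean farEnough(Node cell, Body b, double theta) {
    double dx = cell.comX - b.x, dy = cell.comY - b.y;
    double dist = Math.sqrt(dx * dx + dy * dy);
    // If the cell looks small from here, treat all its bodies as one point
    // mass at the center of mass; otherwise descend into its children.
    return cell.size / dist < theta;
}

Here theta is the accuracy parameter: theta = 0 degenerates to the exact O(n^2) sum, while values around 0.5-1.0 are common speed/accuracy trade-offs.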

Can you programmatically detect white noise?

The Dell Streak has been discovered to have an FM radio with very crude controls. 'Scanning' is unavailable by default, so my question is: does anyone know how, using Java on Android, one might 'listen' to the FM radio as we iterate up through the frequency range, detecting white noise (or a good signal), so as to act much like a normal radio's seek function?
I have done some practical work in this specific area. I would recommend (if you have a little time for it) trying just a little experimentation before resorting to FFTs. The PCM stream can be interpreted in very complex and subtle ways (as with high-quality filtering and resampling) but can also, for many practical purposes, be treated as the path of a wiggly line.
White noise is unpredictable shaking of the line, which is nevertheless quite continuous in intensity (RMS, absolute mean...). Acoustic content is recurrent wiggling and occasional surprises (jumps, leaps) :]
Non-noise-like content of a signal may be estimated by performing quick calculations on a running window of the PCM stream.
For example, noise will strongly tend to have a higher value for the absolute integral of its derivative than non-noise. I think that is the academic way of saying this:

double sumd0 = 0, sumd1 = 0;
for (int n = 1; n < pcm.length; n++) {
    sumd0 += Math.abs(pcm[n]);               // total absolute level
    sumd1 += Math.abs(pcm[n] - pcm[n - 1]);  // total absolute slope
}
double wNoiseRatio = 0.8;  // quite easily discovered, a bit tricky to calculate
if ((sumd1 / sumd0) < wNoiseRatio) {
    // not like noise
}
Also, the running absolute average over ~16 to ~30 samples will tend to vary less over white noise than over acoustic signal:

double runAbsAve1 = 0, runAbsAve2 = 0;
for (int n = 24; n < pcm.length - 16; n++)
    runAbsAve1 += Math.abs(pcm[n]) - Math.abs(pcm[n - 24]);
for (int n = 24 + 16; n < pcm.length; n++)
    runAbsAve2 += Math.abs(pcm[n]) - Math.abs(pcm[n - 24]);
double unusualDif = 5;  // a factor; tighter values for longer measures
if (Math.abs(runAbsAve1 - runAbsAve2)
        > (runAbsAve1 + runAbsAve2) / (2 * unusualDif)) {
    // not like noise
}
This relies on how white noise tends to be non-sporadic over a large enough span to average out its entropy. Acoustic content is sporadic (localised power) and recurrent (repetitive power).
The simple test reacts to acoustic content with lower frequencies and could be drowned out by high-frequency content. There are simple-to-apply lowpass filters which could help (and no doubt other adaptations).
Also, the root mean square can be divided by the mean absolute sum, providing another ratio which should be particular to white noise, though I can't figure out what it is right now. The ratio will also differ for the signal's derivatives.
I think of these as simple formulaic signatures of noise. I'm sure there are more.
Sorry not to be more specific; it is fuzzy and imprecise advice, but so is performing simple tests on the output of an FFT. For a better explanation and more ideas, perhaps check out statistical and stochastic(?) measures of entropy and randomness on Wikipedia etc.
Use a Fast Fourier Transform. It analyzes the signal and determines the strength of the signal at various frequencies. If there's a spike anywhere in the FFT curve, it should indicate that the signal is not simply white noise.
Here is a library which supports FFTs. Also, here is a blog with source code in case you want to learn about what the FFT does.
If you don't have FFT tools available, just a wild suggestion:
Try to compress a few milliseconds of audio.
A typical feature of noise is that it compresses much less than a clean signal.
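A sketch of that test using java.util.zip.Deflater from the JDK; a compressed size close to the input size hints at noise:

import java.util.zip.Deflater;

static double compressionRatio(byte[] pcmWindow) {
    Deflater deflater = new Deflater();
    deflater.setInput(pcmWindow);
    deflater.finish();
    byte[] out = new byte[pcmWindow.length + 64];  // headroom for incompressible input
    int compressedSize = deflater.deflate(out);
    deflater.end();
    return (double) compressedSize / pcmWindow.length;  // near 1.0 => noise-like
}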
As far as I know there is no API or even drivers for the FM radio in the Android SDK, and unless Dell releases one you will have to roll your own. It's actually even worse than that: all(?) new chipsets have FM radio, but not all phones have an FM radio application.
The old Windows Mobile had the same problem.
For white noise detection you need to do an FFT and check that the spectrum is more or less continuous. But recording from FM might be a problem.
Just high-pass filtering will give a good idea, and it has sometimes been used for squelch on FM radios.
Note that this is comparable to what the derivative suggestion was getting at: taking the derivative is a simple form of high-pass filter, and taking the absolute value of that is a crude way of measuring power.
Do you have a subscription to the IEEE Xplore library? There are countless papers (one picked at random) on this very topic.
A very simplistic method would be to observe the "flatness" of the power spectral density. You could obtain this by taking a Fast Fourier Transform of the signal in the time domain and finding the standard deviation of the spectral density. If it is below some threshold, you have your white noise.
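A sketch of that check, assuming mags[] already holds the FFT magnitudes of one analysis window (any FFT library can produce these) and the threshold is tuned empirically:

static boolean looksLikeWhiteNoise(double[] mags, double threshold) {
    double mean = 0;
    for (double m : mags) mean += m;
    mean /= mags.length;
    double var = 0;
    for (double m : mags) var += (m - mean) * (m - mean);
    double sd = Math.sqrt(var / mags.length);
    return sd / mean < threshold;  // flat spectrum => white-noise-like
}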
The main question here is: what type of signal do you have access to?
I bet you don't have direct access to the analog EM signal, so no FFT on that signal is possible. You also can't try to build a phase-locked loop, which is the way your standard old radio tuner works ("scanning" in your case).
Your only option is indeed to pick one frequency and listen to it (and try to detect noise with an FFT on the sound). You might even only have access to the FFT'd signal.
Problem here: if you want to detect a potential frequency using white noise, you will pick up signals too easily.
Anyway, here is what I would try with this strategy:
Double-integrate the autocorrelation of the spectral density over a fraction of a second of audio, and do this for each frequency.
Then look for an FM frequency where this number is maxed.
A little explanation:
The spectral density gives you a signal in which the most-used frequencies are maxed.
If, a bit of time later, the same frequencies are still maxed, then you have some supposedly clear audio. You measure this by integrating the autocorrelation of the spectral density for one audio frequency over a fraction of a second (using some function that grows faster than linearly might also work).
You then just have to integrate this over all audio frequencies.
Also be careful to normalize the integrals: a loud white-noise signal should not get a higher score than a clear but quiet audio signal.
Several people have mentioned the FFT, which you'll want to do, but to detect white noise you then need to make sure that the magnitude is relatively constant over the range of audio frequencies. You'll want to look at the magnitudes only; you can throw away the phases. You can compute the average and standard deviation of the magnitudes in O(N) time. For white noise, you should find the standard deviation to be a relatively small fraction of the average. If I remember my statistics right, it should be about 1/sqrt(N) of the average.

Java: Calculate distance between a large number of locations and performance

I'm creating an application that will tell a user how far away a large number of points are from their current position.
Each point has a longitude and latitude.
I've read over this article
http://www.movable-type.co.uk/scripts/latlong.html
and seen this post
Calculate distance in meters when you know longitude and latitude in java
There are a number of calculations (50-200) that need to be carried out.
If speed is more important than the accuracy of these calculations, which one is best?
This is O(n).
Don't worry about performance unless every single calculation takes too long (which it doesn't).
As Imre said, this is O(n), or linear, meaning that no matter how the values differ or how many times you run it, each iteration of the algorithm takes about the same amount of time. However, I disagree in the sense that the Spherical Law of Cosines involves fewer variables and operations, meaning that fewer resources are used per call. Hence, I would choose that one, because the only thing that will differ in speed is the computer resources available. (Note: the difference will be barely noticeable unless you are on a really old/slow machine.)
Verdict based on opinion: Spherical Law of Cosines
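For reference, a sketch of the Spherical Law of Cosines (the formula on the movable-type page linked in the question); inputs in degrees, result in meters:

static double distanceMeters(double lat1, double lon1, double lat2, double lon2) {
    double R = 6371000;  // mean Earth radius in meters
    double f1 = Math.toRadians(lat1);
    double f2 = Math.toRadians(lat2);
    double dl = Math.toRadians(lon2 - lon1);
    double c = Math.sin(f1) * Math.sin(f2)
             + Math.cos(f1) * Math.cos(f2) * Math.cos(dl);
    return R * Math.acos(Math.min(1.0, Math.max(-1.0, c)));  // clamp guards rounding
}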
The two links that you posted use the same spherical geometry formula to calculate the distances, so I would not expect a significant difference in their running speed. Also, they are not really computationally expensive, so I would not expect them to be a problem, even on the scale of a few hundred iterations, if you are running on modern hardware.

How to do Gesture Recognition using Accelerometers

My goal is to recognize simple gestures from accelerometers mounted on a Sun SPOT. A gesture could be as simple as rotating the device or moving the device in several different motions. The device currently only has accelerometers, but we are considering adding gyroscopes if it would make this easier/more accurate.
Does anyone have recommendations for how to do this? Any available libraries in Java? Sample projects you recommend I check out? Papers you recommend?
The Sun SPOT is a Java platform to help you make quick prototypes of systems. It is programmed using Java and can relay commands back to a base station attached to a computer. If I need to explain more about how the hardware works, leave a comment.
The accelerometers will be registering a constant acceleration due to gravity, plus any acceleration the device is subjected to by the user, plus noise.
You will need to low-pass filter the samples to get rid of as much irrelevant noise as you can. The worst of the noise will generally be at higher frequencies than any possible human-induced acceleration.
Realise that when the device is not being accelerated by the user, the only force is due to gravity, and therefore you can deduce its attitude in space. Moreover, when the total acceleration varies greatly from 1g, it must be due to the user accelerating the device; by subtracting the last known estimate of gravity, you can roughly estimate in what direction and by how much the user is accelerating the device, and so obtain data you can begin to match against a list of known gestures.
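A common sketch of both steps, using a simple exponential moving average as the low-pass filter; the alpha value is an assumption to be tuned to the sample rate:

float alpha = 0.8f;               // smoothing factor (assumed; tune per sample rate)
float[] gravity = new float[3];   // slowly varying component ~ gravity estimate
float[] linear = new float[3];    // what the user is doing to the device

void onSample(float[] accel) {
    for (int i = 0; i < 3; i++) {
        gravity[i] = alpha * gravity[i] + (1 - alpha) * accel[i];  // low-pass
        linear[i] = accel[i] - gravity[i];  // subtract the gravity estimate
    }
}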
With a single three-axis accelerometer you can detect the current pitch and roll, and also acceleration of the device in a straight line. Integrating acceleration minus gravity will give you an estimate of current velocity, but the estimate will rapidly drift away from reality due to noise; you will have to make assumptions about the user's behaviour before / between / during gestures, and guide them through your UI, to provide points where the device is not being accelerated and you can reset your estimates and reliably estimate the direction of gravity. Integrating again to find position is unlikely to provide usable results over any useful length of time at all.
If you have two three-axis accelerometers some distance apart, or one plus some gyros, you can also detect rotation of the device (by comparing the acceleration vectors, or from the gyros directly); integrating the angular velocity over a couple of seconds will give you an estimate of the current yaw relative to where you started integrating, but again this will drift out of true rapidly.
Since no one seems to have mentioned existing libraries, as requested by OP, here goes:
http://www.wiigee.org/
Meant for use with the Wiimote, wiigee is an open-source Java-based implementation of pattern matching based on accelerometer readings. It accomplishes this using Hidden Markov Models[1].
It was apparently used to great effect by a company, Thorn Technologies, and they've mentioned their experience here: http://www.thorntech.com/2013/07/mobile-device-3d-accelerometer-based-gesture-recognition/
Alternatively, you could consider FastDTW (https://code.google.com/p/fastdtw/). It's less accurate than regular DTW[2], but also computationally less expensive, which is a big deal when it comes to embedded systems or mobile devices.
[1] https://en.wikipedia.org/wiki/Hidden_Markov_model
[2] https://en.wikipedia.org/wiki/Dynamic_time_warping
EDIT: The OP has mentioned in one of the comments that he completed his project, with 90% accuracy in the field and a sub-millisecond compute time, using a variant of $1 Recognizer. He also mentions that rotation was not a criteria in his project.
What hasn't been mentioned yet is the actual gesture recognition. This is the hard part. After you have cleaned up your data (low-pass filtered, normalized, etc.) you still have most of the work to do.
Have a look at Hidden Markov Models. This seems to be the most popular approach, but using them isn't trivial. There is usually a preprocessing step: first doing an STFT and clustering the resulting vectors into a dictionary, then feeding that into an HMM. Have a look at jahmm on Google Code for a Java library.
Adding to moonshadow's point about having to reset your baseline for gravity and rotation...
Unless the device is expected to have stable moments of rest (where the only force acting on it is gravity) to reset its measurement baseline, your system will eventually develop an equivalent of vertigo.
