Use Android compass, accelerometer, and gyroscope to get current position - java

I'm new to Android programming. I know how to get a user's position in an outdoor environment using GPS.
But I'm facing the problem of finding a person's location inside a building, where GPS is not available.
I can handle the coding part, but I need an idea of how to use compass, accelerometer, and gyroscope data (or any other sensor data except GPS) to get the user's current position inside the building after that sensor data has been passed to the server. Here, consider that the inside of the building is already mapped.
Sorry if this is a stupid question.

It sounds like you intend to perform inertial navigation using accelerometer data based on the last good GPS fix. I think you'll find this is not feasible on a mobile phone. Accelerometers used in inertial navigation -- for example, in aircraft -- have to be extremely accurate and highly calibrated to minimize errors. Even then, all inertial systems drift over time. With the relatively low accuracy of a phone, these errors will accumulate quite rapidly and render your position solution unusable very quickly.
Without GPS, most phones can still give you a rough position estimate using cell-site multilateration. This is nowhere near as accurate as a GPS fix, but it's better than nothing.
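For example, on Android you can ask for that network-based estimate directly through LocationManager. A minimal sketch (assuming ACCESS_COARSE_LOCATION is declared and granted; error handling omitted):

```java
import android.content.Context;
import android.location.Location;
import android.location.LocationListener;
import android.location.LocationManager;
import android.os.Bundle;

public class NetworkLocationSample {
    public static void requestCoarseFix(Context context) {
        LocationManager lm =
                (LocationManager) context.getSystemService(Context.LOCATION_SERVICE);

        LocationListener listener = new LocationListener() {
            @Override
            public void onLocationChanged(Location location) {
                // Cell/Wi-Fi based fix; accuracy is typically hundreds of metres.
                double lat = location.getLatitude();
                double lon = location.getLongitude();
                float accuracy = location.getAccuracy(); // estimated accuracy in metres
            }

            @Override public void onStatusChanged(String provider, int status, Bundle extras) { }
            @Override public void onProviderEnabled(String provider) { }
            @Override public void onProviderDisabled(String provider) { }
        };

        // NETWORK_PROVIDER uses cell towers and Wi-Fi, no GPS required.
        lm.requestLocationUpdates(LocationManager.NETWORK_PROVIDER,
                60000L,  // minimum time between updates, ms
                50f,     // minimum distance between updates, metres
                listener);
    }
}
```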
See also this excellent discussion of indoor locationing (inertial navigation is mentioned there too):
Android accelerometer accuracy (Inertial navigation)

Indoor locationing is quite difficult, if not impossible, today.
What works is mounting Bluetooth Low Energy "beacons" at multiple places in the building
and using those as reference points (see iOS iBeacons). But you have to manage the relation between beacon ID and location inside the building yourself.
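As a rough sketch (assuming API 21+ and that the Bluetooth permissions are already in place), scanning for nearby BLE beacons on Android could look like this; the mapping from beacon address to indoor position is something you maintain yourself:

```java
import android.bluetooth.BluetoothAdapter;
import android.bluetooth.le.BluetoothLeScanner;
import android.bluetooth.le.ScanCallback;
import android.bluetooth.le.ScanResult;

public class BeaconScanSample {
    public void startBeaconScan() {
        BluetoothAdapter adapter = BluetoothAdapter.getDefaultAdapter();
        if (adapter == null || !adapter.isEnabled()) {
            return; // Bluetooth not available or switched off
        }
        BluetoothLeScanner scanner = adapter.getBluetoothLeScanner();

        scanner.startScan(new ScanCallback() {
            @Override
            public void onScanResult(int callbackType, ScanResult result) {
                String address = result.getDevice().getAddress();
                int rssi = result.getRssi(); // rough proximity indicator
                // Look up 'address' in your own beacon-to-room table
                // (e.g. stored on your server) and report the match.
            }
        });
    }
}
```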
With the compass, accelerometer, and gyroscope you will not have (much) success:
In this Google Tech Talk video it is explained in detail why gyro/accelerometer (relative) navigation cannot work: the slight inaccuracies accumulate so strongly within 2 or 3 seconds that you cannot use the result. This is caused by the double integration; see the video at 23:30.
Private WLAN triangulation (inside your building) can work, but not on a (limited) phone that does not give you the info about which WLANs are visible and at what strength.
What remains for indoor usage is the GSM-cell or WLAN location already built into your smartphone.
There is one further approach:
evaluating the magnetic fields of a building.
This can work, and at least one company is working on it, but you have to calibrate it for each building.
But this is more a research topic than a well-known technique.
Further info
My Algorithm to Calculate Position of Smartphone - GPS and Sensors

Related

fused location API sometimes returns freak values, how to identify and discard

I'm developing an app that tracks walks, bicycle rides, car rides, etc. I need precise info, so I basically would like to use only GPS. Still, most sources I found recommend using Google's fused API, e.g. for power-saving reasons, so I went for the fused API.
Now once in a while (once or twice a month) I get one freak value among thousands of good ones. A few of them I got near railway stations, where the freak value is at another railway station several kilometers away, so I assume it is a wrongly interpreted WiFi-based position.
Here's one example, where I ride my bicycle from the river towards the main railway station located east of the river. Once I arrive at the main station, I get no position for 126 s (I asked for one every 10 s, so I probably lost the GPS signal), and then suddenly I get a freak GPS value at another railway station 3450 m away on the other side of the river. The reported accuracy for the freak value is 20 m.
The problem is that I cannot easily identify and filter these freak values.
Calling currentLocation.getProvider() always returns "fused", which is not very helpful.
Also, Location.getAccuracy() typically returns values below 100 m.
So today I filter based on evaluating speed combined with unrealistic changes in bearing, but I'm afraid I might also discard good samples in the process.
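For illustration, such a speed-based check could look roughly like this (the 50 m/s threshold and the class name are just examples):

```java
import android.location.Location;

public class LocationPlausibilityFilter {
    private Location lastAccepted;
    // Illustrative threshold: anything implying > 50 m/s (180 km/h) between
    // fixes is treated as a freak value for walking/cycling/driving tracks.
    private static final float MAX_PLAUSIBLE_SPEED_MPS = 50f;

    /** Returns true if the fix looks plausible and should be kept. */
    public boolean accept(Location candidate) {
        if (lastAccepted == null) {
            lastAccepted = candidate;
            return true;
        }
        float distanceM = lastAccepted.distanceTo(candidate);
        long dtMs = candidate.getTime() - lastAccepted.getTime();
        if (dtMs <= 0) {
            return false; // out-of-order or duplicate timestamp
        }
        float impliedSpeed = distanceM / (dtMs / 1000f);
        if (impliedSpeed > MAX_PLAUSIBLE_SPEED_MPS) {
            return false; // jump too large for the elapsed time: discard
        }
        lastAccepted = candidate;
        return true;
    }
}
```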
I scanned a lot of Stack Overflow, but strangely enough I haven't found any relevant answers yet.
I now feel like moving to the old framework location API and using GPS-based data only. But is that really necessary, or does anybody have an idea how to avoid getting the freak values, or alternatively how to easily identify and discard all WiFi-based positions?
And will using the framework location API result in worse battery life?

Android accelerometer detect jumping?

Hi!
I'm just starting out with Android development, and I was wondering whether there is any built-in machinery in Android phones that can detect if a person is squatting (if a person holds his phone in his hand and keeps his hand steady, the phone changes its altitude by approximately 1 m) or jumping. The user is holding his phone in his hand in both cases. If there is, what kind of packages should one look into to use those built-in detectors?
Thank you very much for your help!
In both cases you'll want to make use of the built-in accelerometer. Squatting will be very difficult to detect reliably, though. Jumping could be quite easy. For example, when jumping, there will be an upward acceleration significantly greater than gravity, followed by free fall, followed by another sharp upward acceleration on landing.
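A rough sketch of that idea using a SensorEventListener (the thresholds are illustrative assumptions and would need tuning on a real device):

```java
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

public class JumpDetector implements SensorEventListener {
    // Illustrative thresholds in m/s^2; tune against real recordings.
    private static final float TAKEOFF_THRESHOLD = 15f;  // well above 1 g (~9.81)
    private static final float FREEFALL_THRESHOLD = 3f;  // close to weightless

    private boolean sawTakeoff = false;

    public void register(SensorManager sensorManager) {
        Sensor accel = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
        sensorManager.registerListener(this, accel, SensorManager.SENSOR_DELAY_GAME);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        float x = event.values[0], y = event.values[1], z = event.values[2];
        float magnitude = (float) Math.sqrt(x * x + y * y + z * z);

        if (magnitude > TAKEOFF_THRESHOLD) {
            sawTakeoff = true;               // strong upward push
        } else if (sawTakeoff && magnitude < FREEFALL_THRESHOLD) {
            onJumpDetected();                // push followed by near free-fall
            sawTakeoff = false;
        }
    }

    private void onJumpDetected() {
        // React to the jump here (callback, counter, etc.).
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}
```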

Android: Simple way to make a geofence?

I'm new to Android development and am creating an app that allows a user to create a geofence around a specific location for reminders. For example: making a geofence around a grocery store to remind the user, as he enters, to pick up orange juice.
Does anyone know of a tutorial that could help in developing something like this?
AFAIK there are no tutorials for geofencing, but it's pretty simple (assuming you want a circular fence).
This tutorial will tell you how to get the user's location
Any of these links will show you how to calculate the distance between their current location and the centerpoint for their fence
Calculate their distance from the center at regular intervals. I'd let them set the interval through a settings screen and start with a default around five minutes, because checking much more often than that is a hefty battery drain. Store that distance every time you get it.
If their last distance compared to their new distance crosses the boundary, perform your action. For instance, I worked on an app that would alert a parent if the child left a friend's house or arrived at school.
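As a minimal sketch of that distance/boundary-crossing check (class and method names here are just illustrative), using Android's Location.distanceBetween:

```java
import android.location.Location;

public class CircularGeofence {
    private final double centerLat;
    private final double centerLon;
    private final float radiusMeters;
    private Float lastDistance; // null until the first fix arrives

    public CircularGeofence(double centerLat, double centerLon, float radiusMeters) {
        this.centerLat = centerLat;
        this.centerLon = centerLon;
        this.radiusMeters = radiusMeters;
    }

    /** Call with each new fix; returns true when the fence boundary is crossed. */
    public boolean update(double lat, double lon) {
        float[] results = new float[1];
        Location.distanceBetween(lat, lon, centerLat, centerLon, results);
        float distance = results[0];

        boolean crossed = lastDistance != null
                && (lastDistance <= radiusMeters) != (distance <= radiusMeters);
        lastDistance = distance;
        return crossed;
    }
}
```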
A new API called Proxim.io was recently developed that serves your purpose. The website is www.proxim.io. Here is a quote from the developer documentation:
Proxim.io is a platform for delivering relevant messages to your application users. Messages that are delivered based on real-time location, topics of interest, and geo-triggers. Messages can be delivered to a device via Push, or can be routed to a software system through an API (Web Events).

Will J2ME Location Based API work without GPS

Will the J2ME Location API work without GPS? Can you give me an example?
Depends on the device implementation!
On many devices you can specify the behaviour when accessing the Location API (Nokia S40, for example, lets you search for Bluetooth GPS devices).
Whether the Location API is able to use data like cell ID, LAC, etc. (if you specify low-accuracy criteria) depends on the device implementation and even on whether your operator lets you use those values. ( http://www.easywms.com/easywms/?q=en/node/3589 )
Many devices have the API on board even if they are not equipped with GPS functionality.
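As a hedged sketch using the JSR-179 classes: if you relax the criteria (low accuracy, low power), the implementation is free to fall back to non-GPS methods such as cell ID where the device and operator support it.

```java
import javax.microedition.location.Criteria;
import javax.microedition.location.Location;
import javax.microedition.location.LocationException;
import javax.microedition.location.LocationProvider;
import javax.microedition.location.QualifiedCoordinates;

public class CoarseLocationSample {
    public static QualifiedCoordinates getCoarseFix()
            throws LocationException, InterruptedException {
        // Relaxed criteria: the implementation may choose a non-GPS method
        // (cell ID, etc.) if the device and operator support it.
        Criteria criteria = new Criteria();
        criteria.setHorizontalAccuracy(5000); // metres: accept very coarse fixes
        criteria.setCostAllowed(true);
        criteria.setPreferredPowerConsumption(Criteria.POWER_USAGE_LOW);

        LocationProvider provider = LocationProvider.getInstance(criteria);
        if (provider == null) {
            return null; // no provider matches even these loose criteria
        }
        Location location = provider.getLocation(60); // timeout in seconds
        return location.getQualifiedCoordinates();
    }
}
```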
I believe it will work, but it will be less accurate because it has to use the cell towers to determine location, not GPS.
Well, the call will work, but how will it know where you are?
See the discussion at Oracle, which says:
"To discover the location of the
device, LBS must use real-time
positioning methods. Accuracy depends
on the method used."
In the United States, to provide semi-accurate tracking data for the 911 emergency system, all phones must be able to be located within a few hundred feet. It's done by triangulating distance from the nearest three or so towers.
GPS simply makes that (existing) data more accurate.

How to do Gesture Recognition using Accelerometers

My goal is to recognize simple gestures from accelerometers mounted on a Sun SPOT. A gesture could be as simple as rotating the device or moving the device in several different motions. The device currently only has accelerometers, but we are considering adding gyroscopes if that would make it easier/more accurate.
Does anyone have recommendations for how to do this? Any available libraries in Java? Sample projects you recommend I check out? Papers you recommend?
The Sun SPOT is a Java platform to help you make quick prototypes of systems. It is programmed using Java and can relay commands back to a base station attached to a computer. If I need to explain more about how the hardware works, leave a comment.
The accelerometers will be registering a constant acceleration due to gravity, plus any acceleration the device is subjected to by the user, plus noise.
You will need to low pass filter the samples to get rid of as much irrelevant noise as you can. The worst of the noise will generally be higher frequency than any possible human-induced acceleration.
Realise that when the device is not being accelerated by the user, the only force is due to gravity, and therefore you can deduce its attitude in space. Moreover, when the total acceleration varies greatly from 1g, it must be due to the user accelerating the device; by subtracting last known estimate of gravity, you can roughly estimate in what direction and by how much the user is accelerating the device, and so obtain data you can begin to match against a list of known gestures.
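As a rough sketch of that filtering/subtraction step (the smoothing factor is an assumption to be tuned for your sample rate), in plain Java so it runs on the SPOT as well:

```java
public class GravityFilter {
    // Smoothing factor for the exponential low-pass filter; an assumption,
    // tune it for your sample rate (closer to 1.0 = slower-moving gravity estimate).
    private static final float ALPHA = 0.8f;

    private final float[] gravity = new float[3];
    private final float[] linearAccel = new float[3];

    /** Feed raw accelerometer samples (m/s^2); returns acceleration minus gravity. */
    public float[] update(float ax, float ay, float az) {
        // Low-pass filter: isolate the slowly changing gravity component.
        gravity[0] = ALPHA * gravity[0] + (1 - ALPHA) * ax;
        gravity[1] = ALPHA * gravity[1] + (1 - ALPHA) * ay;
        gravity[2] = ALPHA * gravity[2] + (1 - ALPHA) * az;

        // High-pass result: roughly what the user is doing to the device.
        linearAccel[0] = ax - gravity[0];
        linearAccel[1] = ay - gravity[1];
        linearAccel[2] = az - gravity[2];
        return linearAccel;
    }
}
```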
With a single three-axis accelerometer you can detect the current pitch and roll, and also acceleration of the device in a straight line. Integrating acceleration minus gravity will give you an estimate of current velocity, but the estimate will rapidly drift away from reality due to noise; you will have to make assumptions about the user's behaviour before / between / during gestures, and guide them through your UI, to provide points where the device is not being accelerated and you can reset your estimates and reliably estimate the direction of gravity. Integrating again to find position is unlikely to provide usable results over any useful length of time at all.
If you have two three-axis accelerometers some distance apart, or one and some gyros, you can also detect rotation of the device (by comparing the acceleration vectors, or from the gyros directly); integrating angular velocity over a couple of seconds will give you an estimate of the current yaw relative to when you started integrating, but again this will drift out of true rapidly.
Since no one seems to have mentioned existing libraries, as requested by OP, here goes:
http://www.wiigee.org/
Meant for use with the Wiimote, wiigee is an open-source, Java-based implementation of pattern matching on accelerometer readings. It accomplishes this using Hidden Markov Models[1].
It was apparently used to great effect by a company, Thorn Technologies, and they've mentioned their experience here: http://www.thorntech.com/2013/07/mobile-device-3d-accelerometer-based-gesture-recognition/
Alternatively, you could consider FastDTW (https://code.google.com/p/fastdtw/). It's less accurate than regular DTW[2], but also computationally less expensive, which is a big deal when it comes to embedded systems or mobile devices.
[1] https://en.wikipedia.org/wiki/Hidden_Markov_model
[2] https://en.wikipedia.org/wiki/Dynamic_time_warping
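For orientation, classic DTW between two one-dimensional signals can be sketched as below; this is purely illustrative and is not the FastDTW API (FastDTW approximates the same idea with a multi-resolution search):

```java
import java.util.Arrays;

public class SimpleDtw {
    /** Classic O(n*m) DTW distance between two 1-D sequences. */
    public static double distance(double[] a, double[] b) {
        int n = a.length, m = b.length;
        double[][] cost = new double[n + 1][m + 1];

        for (int i = 0; i <= n; i++) {
            Arrays.fill(cost[i], Double.POSITIVE_INFINITY);
        }
        cost[0][0] = 0;

        for (int i = 1; i <= n; i++) {
            for (int j = 1; j <= m; j++) {
                double d = Math.abs(a[i - 1] - b[j - 1]);
                // Best of: insertion, deletion, match.
                cost[i][j] = d + Math.min(cost[i - 1][j],
                             Math.min(cost[i][j - 1], cost[i - 1][j - 1]));
            }
        }
        return cost[n][m];
    }
}
```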
EDIT: The OP has mentioned in one of the comments that he completed his project, with 90% accuracy in the field and a sub-millisecond compute time, using a variant of the $1 Recognizer. He also mentions that rotation was not a criterion in his project.
What hasn't been mentioned yet is the actual gesture recognition. This is the hard part. After you have cleaned up your data (low pass filtered, normalized, etc) you still have most of the work to do.
Have a look at Hidden Markov Models. This seems to be the most popular approach, but using them isn't trivial. There is usually a preprocessing step: first do an STFT and cluster the resulting vectors into a dictionary, then feed that into an HMM. Have a look at jahmm on Google Code for a Java library.
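As a rough illustration of the quantisation step only (this is not the jahmm API; the codebook is assumed to come from whatever clustering you use), turning feature vectors into the discrete symbols an HMM consumes could look like this:

```java
public class VectorQuantizer {
    private final double[][] codebook; // cluster centroids from your training step

    public VectorQuantizer(double[][] codebook) {
        this.codebook = codebook;
    }

    /** Maps a feature vector to the index of the nearest codebook entry (the HMM symbol). */
    public int quantize(double[] feature) {
        int best = 0;
        double bestDist = Double.POSITIVE_INFINITY;
        for (int i = 0; i < codebook.length; i++) {
            double dist = 0;
            for (int d = 0; d < feature.length; d++) {
                double diff = feature[d] - codebook[i][d];
                dist += diff * diff; // squared Euclidean distance
            }
            if (dist < bestDist) {
                bestDist = dist;
                best = i;
            }
        }
        return best;
    }
}
```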
Adding to moonshadow's point about having to reset your baseline for gravity and rotation...
Unless the device is expected to have stable moments of rest (where the only force acting on it is gravity) to reset its measurement baseline, your system will eventually develop an equivalent of vertigo.
