I need to validate whether a pedestrian crossed an intersection, using GPS readings and findNearestIntersectionOSM calls to get the nearest intersections.
For each response from GeoNames, I check whether the distance between the two points is below a certain threshold, and, using the sine function, I also check whether the bearing from the pedestrian's current location to the intersection (GeoPoint.bearingTo) flips its sign:
sin(bearing at previous location) * sin(bearing at current location) < 0
Unfortunately, this is insufficient, and I sometimes receive false positives.
Is there a better approach, or anything I'm missing?
Just to make it clear: I'm not planning to dive into the image-processing field, but simply to use some of OSM's functionality (if possible).
private void OnClosestIntersectionPoint(GeoPoint gPtIntersection) {
    int iDistance = mGeoLastKnownPosition.distanceTo(gPtIntersection);
    double dbCurrentBearing = mGeoLastKnownPosition.bearingTo(gPtIntersection);

    if (mDbLastKnownBearing == null) {
        mDbLastKnownBearing = dbCurrentBearing;
        return;
    }

    boolean bFlippedSignByCrossing = Math.sin(mDbLastKnownBearing) * Math.sin(dbCurrentBearing) < 0;
    mDbLastKnownBearing = dbCurrentBearing; // update the bearing regardless of what happens next

    if (bFlippedSignByCrossing && iDistance <= 10 && !HasntMarkIntersectionAsCrossed(gPtIntersection))
        MarkAsIntersectionCrossed(mGeoLastKnownIntersection);
}
I am trying to detect whether a player is inside a specific region. I currently store a Region object that contains two variables that I'll be calling cornerOne and cornerTwo; the corners are basically vector variables that contain X, Y, Z. I save all the regions in a MutableSet.
I want to make sure that the new vector I am passing to it is inside the region.
Currently what I tried was:
fun isInRegion(location: Location): Boolean {
    return regions.none { inside(location, it.cornerOne, it.cornerTwo) }
}

private fun inside(location: Location, cornerOne: Location, cornerTwo: Location): Boolean {
    return (location.x >= cornerOne.x && location.x <= cornerTwo.x) &&
            (location.z >= cornerOne.z && location.z <= cornerTwo.z)
}
I am ignoring Y because the region is only horizontal, so I'll be ignoring height.
The way I currently have it works for the first 3 regions, but as soon as I make a 4th one it stops working: it detects the first ones but not the others.
Is there a better way to do this? I was told a quadtree could be better, but I don't understand how it would work in this situation.
PS: I am tagging Java too, because if someone sees it in the Java section I won't mind Java help either.
Edit:
In the region code I have if (!isValidRegion()) return, which prevents the region from being too small:
fun isValidRegion(): Boolean {
    return !(getXSelected() < 5 || getZSelected() < 5)
}
This makes sure that cornerOne.x <= cornerTwo.x and cornerOne.z <= cornerTwo.z.
This is the method to get the selected X; it takes the X of the final block and subtracts the X of the first block.
private fun getXSelected(): Int {
    return abs(finalBlock.x - originBlock.x) + 1
}
Edit 2:
So I changed the inside function to be:
private fun inside(location: Location, cornerOne: Location, cornerTwo: Location): Boolean {
    return inBetween(location.x, cornerOne.x, cornerTwo.x) &&
            inBetween(location.z, cornerOne.z, cornerTwo.z)
}

private fun inBetween(a: Int, b: Int, c: Int): Boolean {
    return (a in b..c) || (a in c..b)
}
And it worked. However, I don't know whether this is a good solution, or whether it would be bad for performance if it is called too often.
Change your code in one of two ways:
1) change the definition of inside to be (less preferred):
private fun inside(location: Location, cornerOne: Location, cornerTwo: Location): Boolean {
    return (location.x >= Math.min(cornerOne.x, cornerTwo.x) && location.x <= Math.max(cornerOne.x, cornerTwo.x)) &&
            (location.z >= Math.min(cornerOne.z, cornerTwo.z) && location.z <= Math.max(cornerOne.z, cornerTwo.z))
}
Or change the way you generate cornerOne and cornerTwo:
2.1) Do without the abs in your generation (you will need more iterations of generation).
2.2) After you generate the initial corner candidates, swap the x values of cornerOne and cornerTwo if their order is not as expected, and do the same on the z axis (separately!). A sketch follows below.
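A rough Java sketch of option 2.2 (the question says Java help is welcome); Corner here is a hypothetical stand-in for whatever vector type holds your x and z values:

// Sketch of option 2.2. Corner is a placeholder type, not the real class from the question.
final class Corner {
    int x, z;
}

final class RegionCorners {
    // Normalize each axis separately so cornerOne always holds the smaller value.
    static void normalize(Corner cornerOne, Corner cornerTwo) {
        if (cornerOne.x > cornerTwo.x) {   // swap the x values only
            int tmp = cornerOne.x;
            cornerOne.x = cornerTwo.x;
            cornerTwo.x = tmp;
        }
        if (cornerOne.z > cornerTwo.z) {   // swap the z values only
            int tmp = cornerOne.z;
            cornerOne.z = cornerTwo.z;
            cornerTwo.z = tmp;
        }
    }
}

After this normalization, the original inside check with plain >=/<= comparisons works no matter in which order the corners were selected.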
I have an Android application which is getting gesture coordinates (3 axes: x, y, z). I need to compare them with coordinates which I have in my DB and determine whether they are the same or not.
I also need to add some tolerance, since the accelerometer (the device which captures gestures) is very sensitive. That would be easy, but I also want to treat, for example, a "big circle" drawn in the air the same as a "small circle" drawn in the air, meaning the values would be different but the structure of the graph would be the same, right?
I have heard about translating graph values into bits and then comparing them. Is that the right approach? Is there any library for such comparison?
So far I have just hard-coded it, covering all my requirements except the last one (big circle vs. small circle).
My code now:
private int checkWhetherGestureMatches(byte[] values, String[] refValues) throws IOException {
    int valuesSize = 32;
    int ignorePositions = 4;

    byte[] valuesX = new byte[valuesSize];
    byte[] valuesY = new byte[valuesSize];
    byte[] valuesZ = new byte[valuesSize];
    for (int i = 0; i < valuesSize; i++) {
        int position = i * 3 + ignorePositions;
        valuesX[i] = values[position];
        valuesY[i] = values[position + 1];
        valuesZ[i] = values[position + 2];
    }

    Double[] valuesXprevious = new Double[valuesSize];
    Double[] valuesYprevious = new Double[valuesSize];
    Double[] valuesZprevious = new Double[valuesSize];
    for (int i = 0; i < valuesSize; i++) {
        int position = i * 3 + ignorePositions;
        valuesXprevious[i] = Double.parseDouble(refValues[position]);
        valuesYprevious[i] = Double.parseDouble(refValues[position + 1]);
        valuesZprevious[i] = Double.parseDouble(refValues[position + 2]);
    }

    int incorrectPoints = 0;
    for (int j = 0; j < valuesSize; j++) {
        if (valuesX[j] < valuesXprevious[j] + 20 && valuesX[j] > valuesXprevious[j] - 20
                && valuesY[j] < valuesYprevious[j] + 20 && valuesY[j] > valuesYprevious[j] - 20
                && valuesZ[j] < valuesZprevious[j] + 20 && valuesZ[j] > valuesZprevious[j] - 20) {
        } else {
            incorrectPoints++;
        }
    }
    return incorrectPoints;
}
EDIT:
I found JGraphT, it might work. If you know anything about that already, let me know.
EDIT2:
See these images; they are the same gesture, but one is done in a slower motion than the other.
Faster one:
Slower one:
I haven't captured images of the same gesture where one is smaller than the other; I might add that later.
If your list of gestures is complex, I would suggest training a neural network which can classify the gestures based on the graph value bits you mentioned. The task is very similar to classification of handwritten numerical digits, for which lots of resources are there on the net.
The other approach would be to mathematically guess the shape of the gesture, but I doubt it will be useful considering the tolerance of the accelerometer and the fact that users won't draw accurate shapes.
(a) convert your 3D coordinates into a 2D plane figure. Use matrix transformations.
(b) normalize your gesture scale - again with matrix transformations
(c) normalize the number of points or use interpolation on the next step.
(d) calculate the difference between your stored (s) gesture and current (c) gesture as
Sum((Xs[i] - Xc[i])^2 + (Ys[i] - Yc[i])^2) where i = 0 .. num of points
If the difference is below your predefined precision, the gestures are equal (a sketch follows below).
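A minimal Java sketch of step (d), assuming both gestures have already been projected to 2D, scale-normalized, and resampled to the same number of points (the array and method names are illustrative):

final class GestureCompare {

    // Sum of squared per-point differences between the stored gesture (xs, ys)
    // and the current gesture (xc, yc); all arrays must have the same length.
    static double difference(double[] xs, double[] ys, double[] xc, double[] yc) {
        double sum = 0.0;
        for (int i = 0; i < xs.length; i++) {
            double dx = xs[i] - xc[i];
            double dy = ys[i] - yc[i];
            sum += dx * dx + dy * dy;
        }
        return sum;
    }

    // Gestures are treated as equal when the difference is under a chosen precision.
    static boolean matches(double[] xs, double[] ys, double[] xc, double[] yc, double precision) {
        return difference(xs, ys, xc, yc) < precision;
    }
}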
I have used a Java implementation of the Dynamic Time Warping algorithm. The library is called fastDTW.
Unfortunately, from what I understood, they don't support it anymore, though I found a use for it.
https://code.google.com/p/fastdtw/
I can't recall now, but I think I used this one and compiled it myself:
https://github.com/cscotta/fastdtw/tree/master/src/main/java/com/fastdtw/dtw
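For illustration only, here is a plain O(n·m) dynamic time warping distance between two 1-D sequences; this is not the fastDTW API, just the basic algorithm that FastDTW approximates. For 3-axis gestures you would replace the absolute difference with a per-sample Euclidean distance over (x, y, z):

import java.util.Arrays;

final class Dtw {
    static double distance(double[] a, double[] b) {
        int n = a.length, m = b.length;
        double[][] cost = new double[n + 1][m + 1];
        for (double[] row : cost) Arrays.fill(row, Double.POSITIVE_INFINITY);
        cost[0][0] = 0.0;
        for (int i = 1; i <= n; i++) {
            for (int j = 1; j <= m; j++) {
                double d = Math.abs(a[i - 1] - b[j - 1]);
                // Extend the cheapest of the three allowed warping steps.
                cost[i][j] = d + Math.min(cost[i - 1][j],
                        Math.min(cost[i][j - 1], cost[i - 1][j - 1]));
            }
        }
        return cost[n][m];
    }
}

Because DTW aligns sequences of different lengths, the same gesture performed faster or slower still produces a small distance, which covers the slow-vs-fast case shown in the question's images.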
I have written an implementation of the A* algorithm, taken mainly from this wiki page. However, I have a major problem: I believe I am visiting way too many nodes while calculating a route, therefore ruining my performance. I've been trying to figure out the issue for a few days and I can't see what's wrong. Please note, all my data structures are self-implemented; however, I've tested them and believe they're not the issue.
I've included my Priority Queue implementation just in case.
closedVertices is a HashMap of Vertices.
private Vertex routeCalculation(Vertex startLocation, Vertex endLocation, int routetype)
{
    Vertex vertexNeighbour;
    pqOpen.AddItem(startLocation);
    while (!(pqOpen.IsEmpty()))
    {
        tempVertex = pqOpen.GetNextItem();
        for (int i = 0; i < tempVertex.neighbors.GetNoOfItems(); i++) //for each neighbor of tempVertex
        {
            currentRoad = tempVertex.neighbors.GetItem(i);
            currentRoad.visited = true;
            vertexNeighbour = allVertices.GetNewValue(currentRoad.toid);
            //if the neighbor is in closed set, move to next neighbor
            checkClosed();
            nodesVisited++;
            setG_Score();
            //checks if neighbor is in open set
            findNeighbour();
            //if neighbour is not in open set, or this path to it is shorter
            if (!foundNeighbor || temp_g_score < vertexNeighbour.getTentativeDistance())
            {
                vertexNeighbour.setTentativeDistance(temp_g_score);
                //calculate H once, store it and then do an if statement to see if it's been used before - if true, grab from memory, else calculate.
                if (vertexNeighbour.visited == false)
                    vertexNeighbour.setH(heuristic(endLocation, vertexNeighbour));
                vertexNeighbour.setF(vertexNeighbour.getH() + vertexNeighbour.getTentativeDistance());
                // if neighbor isn't in open set, add it to open set
                if (!(foundNeighbor))
                {
                    pqOpen.AddItem(vertexNeighbour);
                }
                else
                {
                    pqOpen.siftUp(foundNeighbourIndex);
                }
            }
        }
    }
    return null;
}
Can anyone see where I may be exploring too many nodes?
Also, I've attempted to implement a way to calculate the quickest (timed) route by modifying F by the speed of the road. Am I right in saying this is the correct way to do it?
(I divided the speed of the road by 100 because it was taking a long time to execute otherwise).
I found my own error: I had implemented the heuristic calculation for each node incorrectly. I had an IF statement to check whether H had already been calculated, but I had got this wrong, so H was never actually calculated for some nodes, resulting in excessive node exploration. I simply removed the line if (vertexNeighbour.visited == false), and now my calculations are correct.
However I am still trying to figure out how to calculate the fastest route in terms of time.
I researched online and saw the LocationManager.requestLocationUpdates method, and saw that it takes an argument called minDistance. The definition that the site (http://blog.doityourselfandroid.com/2010/12/25/understanding-locationlistener-android/) gave me for that argument was "minimum distance interval for notifications", with an example of 10 meters. Can anyone clarify what that means? Every time I move 10 meters with a phone, do I get a GPS update?
Yes, essentially this means that if the platform observes your current position as being more than minDistance from the last location that was saved by the platform, your listener will get notified with the updated position. Note that these two positions don't necessarily need to be sequential (i.e., there could be a number of small displacements that eventually add up to minDistance, and the location that finally exceeds the threshold will be the one reported to the app).
The actual platform code can be seen on GitHub, which I've also pasted below:
private static boolean shouldBroadcastSafe(
        Location loc, Location lastLoc, UpdateRecord record, long now) {
    // Always broadcast the first update
    if (lastLoc == null) {
        return true;
    }

    // Check whether sufficient time has passed
    long minTime = record.mRequest.getFastestInterval();
    long delta = (loc.getElapsedRealtimeNanos() - lastLoc.getElapsedRealtimeNanos())
            / NANOS_PER_MILLI;
    if (delta < minTime - MAX_PROVIDER_SCHEDULING_JITTER_MS) {
        return false;
    }

    // Check whether sufficient distance has been traveled
    double minDistance = record.mRequest.getSmallestDisplacement();
    if (minDistance > 0.0) {
        if (loc.distanceTo(lastLoc) <= minDistance) {
            return false;
        }
    }

    ...
Note that the minDistance parameter only affects when your app gets notified, and only if the value is greater than 0.
Also, please be aware that with all positioning systems there is a significant level of error when calculating locations, so with small minDistance values you may get notified frequently, but these notifications may be due to errors in the positioning calculations, not true user movement.
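For reference, a minimal subscription sketch showing where minDistance goes (assumptions: the GPS provider, an available Context, and ACCESS_FINE_LOCATION already granted; the helper class is illustrative):

import android.content.Context;
import android.location.Location;
import android.location.LocationListener;
import android.location.LocationManager;
import android.os.Bundle;

// Hypothetical helper; assumes the location permission has already been granted.
class GpsUpdates {
    static void start(Context context) {
        LocationManager lm =
                (LocationManager) context.getSystemService(Context.LOCATION_SERVICE);
        LocationListener listener = new LocationListener() {
            @Override public void onLocationChanged(Location location) {
                // Fired only when the new fix is far enough from the last
                // reported one (and enough time has passed), per the checks above.
            }
            @Override public void onStatusChanged(String provider, int status, Bundle extras) { }
            @Override public void onProviderEnabled(String provider) { }
            @Override public void onProviderDisabled(String provider) { }
        };
        // minTime = 5000 ms, minDistance = 10 m
        lm.requestLocationUpdates(LocationManager.GPS_PROVIDER, 5000L, 10f, listener);
    }
}

With these values, and per the shouldBroadcastSafe code above, onLocationChanged fires only once enough time has passed and the new fix is more than 10 meters from the last reported one.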
I'm writing code to simulate the actions of both Theseus and the Minotaur as shown in this logic game: http://www.logicmazes.com/theseus.html
For each maze I provide it with the positions of the maze and which positions are available, e.g. from position 0 the next states are 1, 2, or stay on 0. I run a Q-learning instantiation which calculates the best path for Theseus to escape the maze assuming no Minotaur. Then the Minotaur is introduced. Theseus makes his first move towards the exit and is inevitably caught, resulting in reweighting of the best path. Using maze 3 in the game as a test, this approach led to Theseus moving up and down on the middle line indefinitely, as these were the only moves that didn't get him killed.
As per a suggestion received here within the last few days, I adjusted my code to consider the state to be both the position of Theseus and the Minotaur at a given time. When Theseus moves, the state is added to a list of "visited states". By comparing the state resulting from the suggested move to the list of visited states, I am able to ensure that Theseus does not make a move that would result in a previous state.
The problem is that I need to be able to revisit states in some cases, e.g. using maze 3 as an example, with the Minotaur moving twice for every Theseus move.
Theseus moves 4 -> 5, state (t5, m1) added. The Minotaur moves 1 -> 5. Theseus is caught, reset. 4 -> 5 is a bad move, so Theseus moves 4 -> 3, and the Minotaur catches him on his turn. Now both (t5, m1) and (t3, m1) are on the visited list.
What happens is that all possible states reachable from the initial state get added to the don't-visit list, meaning that my code loops indefinitely and cannot provide a solution.
public void move()
{
    int randomness = 10;
    State tempState = new State();
    boolean rejectMove = true;
    int keepCurrent = currentPosition;
    int keepMinotaur = minotaurPosition;
    previousPosition = currentPosition;
    do
    {
        minotaurPosition = keepMinotaur;
        currentPosition = keepCurrent;
        rejectMove = false;
        if (states.size() > 10)
        {
            states.clear();
        }
        if (this.policy(currentPosition) == this.minotaurPosition)
        {
            randomness = 100;
        }
        if (Math.random() * 100 <= randomness)
        {
            System.out.println("Random move");
            int[] actionsFromState = actions[currentPosition];
            int max = actionsFromState.length;
            Random r = new Random();
            int s = r.nextInt(max);
            previousPosition = currentPosition;
            currentPosition = actions[currentPosition][s];
        }
        else
        {
            previousPosition = currentPosition;
            currentPosition = policy(currentPosition);
        }
        tempState.setAttributes(minotaurPosition, currentPosition);
        randomness = 10;
        for (int i = 0; i < states.size(); i++)
        {
            if (states.get(i).getMinotaurPosition() == tempState.getMinotaurPosition()
                    && states.get(i).theseusPosition == tempState.getTheseusPosition())
            {
                rejectMove = true;
                changeReward(100);
            }
        }
    }
    while (rejectMove == true);
    states.add(tempState);
}
Above is the move method of Theseus, showing that it occasionally suggests a random move.
The problem here is a discrepancy between the "never visit a state you've previously been in" approach and your "reinforcement learning" approach. When I recommended the "never visit a state you've previously been in" approach, I was making the assumption that you were using backtracking: once Theseus got caught, you would unwind the stack to the last place where he made an unforced choice, and then try a different option. (That is, I assumed you were using a simple depth-first-search of the state-space.) In that sort of approach, there's never any reason to visit a state you've previously visited.
For your "reinforcement learning" approach, where you're completely resetting the maze every time Theseus gets caught, you'll need to change that. I suppose you can change the "never visit a state you've previously been in" rule to a two-pronged rule:
never visit a state you've been in during this run of the maze. (This is to prevent infinite loops.)
disprefer visiting a state you've been in during a run of the maze where Theseus got caught. (This is the "learning" part: if a choice has previously worked out poorly, it should be made less often.)
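A rough Java sketch of that two-pronged rule; MazeState and VisitTracker are hypothetical names, not taken from the question's code. States visited in the current run are forbidden, while states from runs where Theseus got caught only accumulate a penalty that the move selection can weigh against:

import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Hypothetical (Theseus position, Minotaur position) pair with value semantics.
final class MazeState {
    final int theseus;
    final int minotaur;

    MazeState(int theseus, int minotaur) {
        this.theseus = theseus;
        this.minotaur = minotaur;
    }

    @Override public boolean equals(Object o) {
        return o instanceof MazeState
                && ((MazeState) o).theseus == theseus
                && ((MazeState) o).minotaur == minotaur;
    }

    @Override public int hashCode() {
        return 31 * theseus + minotaur;
    }
}

final class VisitTracker {
    // Prong 1: states seen during the current run; never revisit (prevents loops).
    private final Set<MazeState> visitedThisRun = new HashSet<>();
    // Prong 2: states seen in runs that ended in capture; disprefer, don't forbid.
    private final Map<MazeState, Integer> capturePenalty = new HashMap<>();

    boolean allowed(MazeState candidate) {
        return !visitedThisRun.contains(candidate);
    }

    int penalty(MazeState candidate) {
        return capturePenalty.getOrDefault(candidate, 0);
    }

    void markVisited(MazeState state) {
        visitedThisRun.add(state);
    }

    // Call when Theseus is caught: penalize the states of the failed run and
    // clear the per-run history before the maze is reset.
    void onCaught(List<MazeState> statesOfFailedRun) {
        for (MazeState s : statesOfFailedRun) {
            capturePenalty.merge(s, 1, Integer::sum);
        }
        visitedThisRun.clear();
    }
}

In the move loop, a candidate move would be rejected outright only when allowed returns false, while penalty could feed into the reward adjustment so that previously fatal states are chosen less often rather than never.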
For what it's worth, the simplest way to solve this problem optimally is to use ALPHA-BETA, which is a search algorithm for deterministic two-player games (like tic-tac-toe, checkers, chess). Here's a summary of how to implement it for your case:
Create a class that represents the current state of the game, which should include: Theseus's position, the Minotaur's position, and whose turn it is. Say you call this class GameState.
Create a heuristic function that takes an instance of GameState as a parameter and returns a double that's calculated as follows (a sketch follows this list):
Let Dt be the Manhattan distance (number of squares) that Theseus is from the exit.
Let Dm be the Manhattan distance (number of squares) that the Minotaur is from Theseus.
Let T be 1 if it's Theseus's turn and -1 if it's the Minotaur's.
If Dm is not zero and Dt is not zero, return Dm + (Dt/2) * T
If Dm is zero, return -Infinity * T
If Dt is zero, return Infinity * T
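A minimal Java sketch of that heuristic; the GameState fields and the Manhattan-distance helper are placeholders for whatever board representation you use:

final class GameState {
    // Illustrative fields: grid coordinates plus whose turn it is.
    int theseusRow, theseusCol;
    int minotaurRow, minotaurCol;
    int exitRow, exitCol;
    boolean theseusTurn;
}

final class Evaluator {

    static int manhattan(int r1, int c1, int r2, int c2) {
        return Math.abs(r1 - r2) + Math.abs(c1 - c2);
    }

    // Returns the "heuristic value of node" used by alpha-beta for this state.
    static double evaluate(GameState s) {
        int dt = manhattan(s.theseusRow, s.theseusCol, s.exitRow, s.exitCol);          // Theseus to exit
        int dm = manhattan(s.minotaurRow, s.minotaurCol, s.theseusRow, s.theseusCol);  // Minotaur to Theseus
        int t = s.theseusTurn ? 1 : -1;

        if (dm == 0) return Double.NEGATIVE_INFINITY * t;  // Theseus is caught
        if (dt == 0) return Double.POSITIVE_INFINITY * t;  // Theseus has escaped
        return dm + (dt / 2.0) * t;
    }
}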
The heuristic function above returns the value that Wikipedia refers to as "the heuristic value of node" for a given GameState (node) in the pseudocode of the algorithm.
You now have all the elements to code it in Java.