Closest pair of points using sweep line algorithm in Java

First off: I'm doing this as an assignment for school, which is why I'm using the sweep line algorithm. I'm basing it on the pseudocode given by my teacher.
I've written an implementation of my own using a TreeMap instead of a balanced binary search tree, which I was told would provide the same functionality. (I don't know if this is true, though?)
However, I don't get the proper end result, and I really have no idea why. I've been staring at it for ages.
Below is the part of my code that performs the actual computation. I've omitted the creation of the points-list, and other unimportant stuff.
count = 0;
TreeMap<Double, Point> tree = new TreeMap<Double, Point>();
double dist = Double.POSITIVE_INFINITY;
// Sorts points on x-axis
Collections.sort(points);
// Gets left-most point
Point q = points.get(count++);
for (Point p : points) {
    while (q.getX() < p.getX() - dist) {
        tree.remove(q.getY());
        q = points.get(count++);
    }
    NavigableSet<Double> keys = tree.navigableKeySet();
    // Look at the 4 points above 'p'
    int i = 1;
    Iterator<Double> iterHi = keys.tailSet(p.getY()).iterator();
    while (i <= 4 && iterHi.hasNext()) {
        double tmp = p.distanceTo(tree.get(iterHi.next()));
        if (tmp < dist) {
            dist = tmp;
            pClosest = p;
            qClosest = q;
        }
        i++;
    }
    // Look at the 4 points below 'p'
    i = 1;
    Iterator<Double> iterLo = keys.headSet(p.getY()).iterator();
    while (i <= 4 && iterLo.hasNext()) {
        double tmp = q.distanceTo(tree.get(iterLo.next()));
        if (tmp < dist) {
            dist = tmp;
            pClosest = p;
            qClosest = q;
        }
        i++;
    }
    tree.put(p.getY(), p);
}
double finalDist = pClosest.distanceTo(qClosest);
Edit: The pseudocode can be found here: http://pastebin.com/i0XbPp1a . It's based on notes taken from what my teacher wrote on the whiteboard.
Regarding results:
Using the following points (X, Y):
(0, 2) - (6, 67) - (43, 71) - (39, 107) - (189, 140)
I should get ~36, but I'm getting ~65.

I have already found several bugs in your code (I'm not sure there aren't others):
What if several points have the same y coordinate? A TreeMap can hold only one point per y value. Is that what you want?
When you look at the points above and below the current one, you compute the distance to iterHi.next(): double tmp = p.distanceTo(tree.get(iterHi.next()));, but then assign qClosest to q. That is not correct (obviously, iterHi.next() and q are not the same point).
In the second inner loop, you compute the distance from q to the element of the set: double tmp = q.distanceTo(tree.get(iterLo.next()));. It should be p instead.
I would also recommend maintaining a TreeSet of Point instead of using a TreeMap (compared by their y coordinate, of course).
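Putting those fixes together: below is a minimal sketch of the corrected sweep line, using a TreeSet<Point> ordered by y as suggested. The record-based Point and its distanceTo are assumptions standing in for the asker's class; ties on y are broken by x so points sharing a y coordinate are not silently dropped, and each strip candidate itself is the distance target.

```java
import java.util.*;

public class ClosestPair {
    record Point(double x, double y) {
        double distanceTo(Point o) {
            double dx = x - o.x, dy = y - o.y;
            return Math.sqrt(dx * dx + dy * dy);
        }
    }

    static double closest(List<Point> points) {
        points.sort(Comparator.comparingDouble(Point::x));
        // Strip of candidates ordered by y, with ties broken by x so
        // duplicate y values survive (the TreeMap could not hold them)
        TreeSet<Point> strip = new TreeSet<>(
                Comparator.comparingDouble(Point::y).thenComparingDouble(Point::x));
        double best = Double.POSITIVE_INFINITY;
        int left = 0;
        for (Point p : points) {
            // Evict points more than 'best' behind the sweep line
            while (points.get(left).x() < p.x() - best) {
                strip.remove(points.get(left++));
            }
            // Only neighbours with |y - p.y| <= best can improve the answer;
            // note the candidate q itself is the distance target
            for (Point q : strip.subSet(
                    new Point(Double.NEGATIVE_INFINITY, p.y() - best), true,
                    new Point(Double.POSITIVE_INFINITY, p.y() + best), true)) {
                best = Math.min(best, p.distanceTo(q));
            }
            strip.add(p);
        }
        return best;
    }

    public static void main(String[] args) {
        List<Point> pts = new ArrayList<>(List.of(
                new Point(0, 2), new Point(6, 67), new Point(43, 71),
                new Point(39, 107), new Point(189, 140)));
        System.out.printf("%.2f%n", closest(pts)); // prints 36.22
    }
}
```

On the sample points from the question this yields ≈36.22, the (43, 71)-(39, 107) pair, matching the expected ~36.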

Related

Java - Quick way to find the four "extreme corners" in a set of points?

Assuming we're given an x and y bound (let's call the bounds A and B) for a set of random coordinates, for example x <= 10, y <= 10, what is the fastest way to find the four points closest to (0,0), (A,0), (A,B), (0,B)? The points may be ordered from least to greatest if that is faster. This is what I have currently, but I feel like it can be sped up:
private void quadrilateral(){
    NW = null;
    NE = null;
    SE = null;
    SW = null;
    Point NWbound = new Point(0, B);
    Point NEbound = new Point(A, B);
    Point SEbound = new Point(A, 0);
    Point SWbound = new Point(0, 0);
    for (Point p : points){
        if (NW == null || p.distance(NWbound) < NW.distance(NWbound)){
            NW = p;
        }
        if (NE == null || p.distance(NEbound) < NE.distance(NEbound)){
            NE = p;
        }
        if (SE == null || p.distance(SEbound) < SE.distance(SEbound)){
            SE = p;
        }
        if (SW == null || p.distance(SWbound) < SW.distance(SWbound)){
            SW = p;
        }
    }
}
I haven't yet been able to utilize an ordered list, and I'm not even sure if ordering the list would help at all.
To make it faster, use p.distanceSq() instead of p.distance(). Also, save each of the four closest squared distances so you don't keep recomputing them on every iteration.
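As a sketch of that suggestion, here is a version built on java.awt.Point, whose inherited distanceSq() avoids the square root; the corners() signature and the index order are assumptions, not part of the original code:

```java
import java.awt.Point;
import java.util.Arrays;

public class Corners {
    // Indices: 0 = NW (0,B), 1 = NE (A,B), 2 = SE (A,0), 3 = SW (0,0)
    static Point[] corners(Point[] points, int A, int B) {
        Point[] targets = { new Point(0, B), new Point(A, B), new Point(A, 0), new Point(0, 0) };
        Point[] closest = new Point[4];
        double[] bestSq = new double[4];
        Arrays.fill(bestSq, Double.MAX_VALUE);
        for (Point p : points) {
            for (int i = 0; i < 4; i++) {
                double dSq = p.distanceSq(targets[i]); // squared distance: no sqrt needed to compare
                if (dSq < bestSq[i]) {                 // cached best avoids recomputing per iteration
                    bestSq[i] = dSq;
                    closest[i] = p;
                }
            }
        }
        return closest;
    }

    public static void main(String[] args) {
        Point[] pts = { new Point(1, 1), new Point(9, 9), new Point(9, 1), new Point(1, 9) };
        for (Point c : corners(pts, 10, 10))
            System.out.println(c);
    }
}
```

Since squared distance is monotonic in distance, comparing squares finds the same winners as comparing distances.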

Douglas-Peucker point count tolerance

I am trying to implement the Douglas-Peucker algorithm with a point-count tolerance; that is, I specify that I want 50% compression. I found this algorithm on http://psimpl.sourceforge.net/douglas-peucker.html under "Douglas-Peucker N", but I am not sure how it works. Is there any implementation of this in Java, or a good specification of this version of the algorithm?
What I don't understand from the psimpl explanation is what happens after we choose the first point into the simplification. Do we break the edge into two new edges, rank all points again, and choose the best point from both edges?
DP searches the polyline for the farthest vertex from the baseline. If this vertex is farther than the tolerance, the polyline is split there and the procedure applied recursively.
Unfortunately, there is no relation between this distance and the number of points to keep. Usually the "compression" is better than 50%, so you may try to continue the recursion deeper. But achieving a good balance of point density looks challenging.
Combine the Douglas-Peucker algorithm with iteration, and use the number of remaining points as the stopping criterion.
Here is my algorithm. The array 'points' stores the points of the trajectory, and the double 'd' is the threshold.
public static Point[] divi(Point[] points, double d)
{
    System.out.println("threshold " + d);
    System.out.println("the nth divi iteration");
    Point[] p1 = new Point[points.length];
    for (int i = 0; i < points.length; i++)
        p1[i] = points[i];
    compress(p1, 0, p1.length - 1, d); // first compression
    int size = 0;
    for (Point p : p1) // trajectory after compression
        if (p != null)
            size++;
    System.out.println("size of points " + size);
    if (size <= 200 && size >= 100)
        return p1;
    else if (size > 200)
        return divi(p1, d + d / 2.0);
    else
        return divi(points, d / 2.0);
}

public static void compress(Point[] points, int m, int n, double D)
{
    System.out.println("threshold " + D);
    System.out.println("startIndex " + m);
    System.out.println("endIndex " + n);
    while (points[m] == null)
        m++;
    Point from = points[m];
    while (points[n] == null)
        n--;
    Point to = points[n];
    // Coefficients of the general-form equation of the line
    // defined by the start and end points
    double A = (from.x() - to.x()) / (from.y() - to.y());
    double B = -1;
    double C = from.x() - A * from.y();
    double d = 0;
    double dmax = 0;
    if (n == m + 1)
        return;
    List<Double> distance = new ArrayList<Double>();
    for (int i = m + 1; i < n; i++) {
        if (points[i] == null) {
            distance.add(0.0);
            continue;
        } else {
            Point p = points[i];
            d = Math.abs(A * (p.y()) + B * (p.x()) + C) / Math.sqrt(Math.pow(A, 2) + Math.pow(B, 2));
            distance.add(d);
        }
    }
    dmax = Collections.max(distance);
    if (dmax < D)
        for (int i = n - 1; i > m; i--)
            points[i] = null;
    else {
        int middle = distance.indexOf(dmax) + m + 1;
        compress(points, m, middle, D);
        compress(points, middle, n, D);
    }
}

Java functional streams: generate a set of random points between distance A and B from each other

While working on a toy project I was faced with the problem of generating a set of N 2d points where every point was between distance A and B from every other point in the set (and also within certain absolute bounds).
I prefer working with Java streams and lambdas for practice, because of their elegance and the possibility of easy parallelization, so I'm not asking how to solve this problem in an imperative manner!
The solution that first came to mind was:
seed the set (or list) with a random vector
until the set reaches size N:
create a random vector with length between A and B and add it to a random "parent" vector
if it's outside the bounds or closer than A to any vector in the set, discard it, otherwise add it to the set
repeat
This would be trivial for me with imperative programming (loops), but I was stumped when doing this the functional way because the newly generated elements in the stream depend on previously generated elements in the same stream.
Here's what I came up with - notice the icky loop at the beginning.
while (pointList.size() < size) {
    // find a suitable position, not too close and not too far from another one
    Vec point =
            // generate a stream of random vectors
            Stream.generate(vecGen::generate)
                  // elongate the vector and add it to the position of one randomly existing vector
                  .map(v -> listSelector.getRandom(pointList).add(v.mul(random.nextDouble() * (maxDistance - minDistance) + minDistance)))
                  // remove those that are outside the borders
                  .filter(v -> v.length < diameter)
                  // remove those that are too close to another one
                  .filter(v -> pointList.stream().allMatch(p -> Vec.distance(p, v) > minDistance))
                  // take the first one
                  .findAny().get();
    pointList.add(point);
}
I know that this loop might never terminate, depending on the parameters - the real code has additional checks.
One working functional solution that comes to mind is to generate completely random sets of N vectors until one of the sets satisfies the condition, but the performance would be abysmal. Also, this would circumvent the problem I'm facing: is it possible to work with the already generated elements in a stream while adding new elements to the stream? (Pretty sure that would violate some fundamental principle, so I guess the answer is NO.)
Is there a way to do this in a functional - and not too wasteful - way?
A simple solution is shown below. The Pair class can be found in the Apache commons lang3.
public List<Pair<Double, Double>> generate(int N, double A, double B) {
    Random ySrc = new Random();
    return new Random()
            .doubles(N, A, B)
            .boxed()
            .map(x -> Pair.of(x, (ySrc.nextDouble() * (B - A)) + A))
            .collect(Collectors.toList());
}
My original solution (above) missed the point that A and B represent the minimum and maximum distance between any two points. So I would instead propose a different (way more complicated) solution that relies on generating points on a unit circle. I scale (multiply) the unit vector representing the point by a random distance between -1/2 B and 1/2 B. This approach distributes points in an area bounded by a circle of radius 1/2 B, which addresses the maximum-distance constraint. Given a sufficient difference between A and B, where A < B, and an N that is not too large, the minimum-distance constraint will probably also be satisfied. Satisfying the maximum-distance constraint can be accomplished with purely functional code (i.e., no side effects).
Ensuring that the minimum constraint is satisfied requires some imperative code (i.e., side effects). For this purpose, I use a predicate with side effects: it accumulates points that meet the minimum-distance criterion and returns true once N points have been accumulated.
Note that the running time is unknown because points are randomly generated. With N = 100, A = 1.0, and B = 30.0, the test code runs quickly. I tried values of 10 and 20 for B and didn't wait for it to end. If you want a tighter cluster of points you will probably need to speed up this code or start looking at linear solvers.
public class RandomPoints {
    /**
     * The stop rule is a predicate implementation with side effects. Not sure
     * about the wisdom of this approach. The class does not support concurrent
     * modification.
     *
     * @author jgmorris
     */
    private class StopRule implements Predicate<Pair<Double, Double>> {
        private final int N;
        private final List<Pair<Double, Double>> points;

        public StopRule(int N, List<Pair<Double, Double>> points) {
            this.N = N;
            this.points = points;
        }

        @Override
        public boolean test(Pair<Double, Double> t) {
            // Brute force test. A hash based test would work a lot better.
            for (int i = 0; i < points.size(); ++i) {
                if (distance(t, points.get(i)) < dL) {
                    // List size unchanged, continue
                    return false;
                }
            }
            points.add(t);
            return points.size() >= N;
        }
    }

    private final double dL;
    private final double dH;
    private final double maxRadius;
    private final Random r;

    public RandomPoints(double dL, double dH) {
        this.dL = dL;
        this.dH = dH;
        this.maxRadius = dH / 2;
        r = new Random();
    }

    public List<Pair<Double, Double>> generate(int N) {
        List<Pair<Double, Double>> points = new ArrayList<>();
        StopRule pred = new StopRule(N, points);
        new Random()
                // Generate a uniform distribution of doubles between 0.0 and 1.0
                .doubles()
                // Transform primitive double into a Double
                .boxed()
                // Transform to a number between 0.0 and 2π
                .map(u -> u * 2 * Math.PI)
                // Generate a random point
                .map(theta -> randomPoint(theta))
                // Add point to points if it meets minimum distance criteria.
                // Stop when enough points are gathered.
                .anyMatch(p -> pred.test(p));
        return points;
    }

    private final Pair<Double, Double> randomPoint(double theta) {
        double x = Math.cos(theta);
        double y = Math.sin(theta);
        double radius = randRadius();
        return Pair.of(radius * x, radius * y);
    }

    private double randRadius() {
        return maxRadius * (r.nextDouble() - 0.5);
    }

    public static void main(String[] args) {
        RandomPoints rp = new RandomPoints(1.0, 30.0);
        List<Pair<Double, Double>> points = rp.generate(100);
        for (int i = 0; i < points.size(); ++i) {
            for (int j = 0; j < points.size(); ++j) {
                if (i == j) {
                    continue;
                }
                double distance = distance(points.get(i), points.get(j));
                if (distance < 1.0 || distance > 30.0) {
                    System.out.println("oops");
                }
            }
        }
    }

    private static double distance(Pair<Double, Double> p1, Pair<Double, Double> p2) {
        return Math.sqrt(Math.pow(p1.getLeft() - p2.getLeft(), 2.0) + Math.pow(p1.getRight() - p2.getRight(), 2.0));
    }
}

Strange bug when incrementing a value while iterating through a collection in Java

I recently wrote a program that does Ant Colony Optimization on a graph.
The following code has a bug in it I can't understand.
Map<Node, Edge> nodesLinkedToCurrentNode = a.getCurrentNode().getLinkedNodes();
TreeMap<Double, Node> probabilitiesForNodes = new TreeMap<>();
double totalProb = 0d;
for (Node n : graph.values()) {
    if (!a.getVisited().contains(n)) {
        // For each node that has not yet been visited,
        // calculate its weighted probability
        double weightedProbability
                = (Math.pow(nodesLinkedToCurrentNode.get(n).getPheremoneLevel(), RPI))
                * (Math.pow((double) 1 / nodesLinkedToCurrentNode.get(n).getDistance(), RHI));
        totalProb += weightedProbability;
        // Map the node to its probability
        probabilitiesForNodes.put(weightedProbability, n);
    }
}
double testTotalProb = 0d;
for (Double d : probabilitiesForNodes.keySet()) {
    testTotalProb += d;
}
if (testTotalProb != totalProb) { // <---------- How can this happen??
    System.out.println("Why?");
    totalProb = testTotalProb;
}
That if statement executes all the time and I don't understand why.
I'm just incrementing a value, but for some reason, it's not being incremented properly.
I made the project open source, if you want to check it out
The java file with the code in it
I replicated the bug with the following code:
TreeMap<Double, String> probabilitiesForNodes = new TreeMap<>();
double totalProb = 0d;
for (int i = 1; i < 10; i++) {
    // For each node that has not yet been visited,
    // calculate its weighted probability
    double weightedProbability
            = (Math.pow(0.7 + 1 / i, 2))
            * (Math.pow((double) 1 / 30, i));
    totalProb += weightedProbability;
    String sudoNode = "node" + i;
    // Map the node to its probability
    probabilitiesForNodes.put(weightedProbability, sudoNode);
}
double testTotalProb = 0d;
for (Double d : probabilitiesForNodes.keySet()) {
    testTotalProb += d;
}
if (testTotalProb != totalProb) {
    System.out.println("Why?");
    totalProb = testTotalProb;
}
You are working with double numbers, so you should expect this. Specifically, you obtain totalProb and testTotalProb by iteratively adding the same double numbers, but in a different order: insertion order versus the TreeMap's sorted key order. Since adding doubles is not exactly associative, enough discrepancy accumulates to make your equality test fail.
Another thing that can happen is a collision on the same Double key: nothing stops two nodes from having exactly the same weighted probability, in which case the later put overwrites the earlier one. So for starters you can just check the sizes of the two collections.
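A minimal demonstration of the order-dependence (the three values are arbitrary; any values whose rounded sums differ by grouping would do):

```java
public class FloatOrder {
    public static void main(String[] args) {
        double a = 0.1, b = 0.2, c = 0.3;
        // Same three values, different grouping, different rounding,
        // hence different sums
        System.out.println((a + b) + c == a + (b + c)); // prints false
    }
}
```

This is why summing the TreeMap's keys (sorted order) need not reproduce the sum accumulated in insertion order; comparing the totals with a small tolerance instead of == sidesteps the issue.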

Finding the closest object (barrier) to the player

I have a program that checks distance and whether or not the player has collided with a barrier. I now am trying to calculate which barrier in the array of barriers is the closest to the moving player, then returning the index of that barrier.
Here is what I have so far:
public static int closestBarrier(GameObject object, GameObject[] barriers)
// TODO stub
{
    int closest = 0;
    for (int i = 0; i < barriers.length - 1; i++) {
        if (Math.sqrt((object.getX() - barriers[i].getX())
                * (object.getX() - barriers[i].getX()))
                + ((object.getY() - barriers[i].getY()) * (object.getY() - barriers[i].getY()))
                <= Math.sqrt((object.getX() - barriers[i + 1].getX())
                * (object.getX() - barriers[i + 1].getX()))
                + ((object.getY() - barriers[i + 1].getY()) * (object.getY() - barriers[i + 1].getY()))) {
            closest = i;
        } else
            closest = i + 1;
    }
    return closest;
}
I am still new to java so I understand what I already have probably isn't very efficient or the best method of doing it (or even right at all!?).
I'd refactor it a wee bit simpler, like so:
public static int closestBarrier(GameObject object, GameObject[] barriers)
{
    int closest = -1;
    float minDistSq = Float.MAX_VALUE; // ridiculously large value to start
    for (int i = 0; i < barriers.length; i++) { // check every barrier, including the last
        GameObject curr = barriers[i]; // current
        float dx = (object.getX() - curr.getX());
        float dy = (object.getY() - curr.getY());
        float distSq = dx * dx + dy * dy; // use the squared distance
        if (distSq < minDistSq) { // find the smallest and remember the id
            minDistSq = distSq;
            closest = i;
        }
    }
    return closest;
}
This way you're doing less distance checks (your version does two distance checks per iteration) and also you only need the id, not the actual distance, so you can gain a bit of speed by not using Math.sqrt() and simply using the squared distance instead.
Another idea I can think of depends on the layout. Say you have a top down vertical scroller, you would start by checking the y property of your obstacle. If you have a hash of them or a sorted list, for an object at the bottom of the screen you would start loop from the largest y barrier to the smallest. Once you found the closest barriers on the Y axis, if there are more than 1 you can check for the closest on the x axis. You wouldn't need to use square or square root as you're basically splitting the checks from 1 in 2D per barrier to 2 checks in 1D, narrowing down your barrier and discarding far away barriers instead of checking against every single object all the time.
An even more advanced version would be using space partitioning but hopefully you won't need it for a simple game while learning.
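For the curious, here is a minimal sketch of one such space-partitioning scheme, a uniform grid. The cell size, the double[] point representation, and the nearest() behaviour are all assumptions; a complete version would widen the search to outer rings when the 3x3 block around the query is empty or when a closer point could lie just outside it.

```java
import java.util.*;

public class BarrierGrid {
    static final double CELL = 64.0; // assumed cell size, tuned to barrier density

    // Maps a packed (cellX, cellY) key to the barriers inside that cell
    final Map<Long, List<double[]>> cells = new HashMap<>();

    static long key(double x, double y) {
        long cx = (long) Math.floor(x / CELL);
        long cy = (long) Math.floor(y / CELL);
        return (cx << 32) ^ (cy & 0xffffffffL);
    }

    void add(double x, double y) {
        cells.computeIfAbsent(key(x, y), k -> new ArrayList<>()).add(new double[] { x, y });
    }

    // Nearest barrier among the 3x3 block of cells around (x, y);
    // returns null if that block is empty (a full version would search wider)
    double[] nearest(double x, double y) {
        double bestSq = Double.MAX_VALUE;
        double[] best = null;
        for (int dx = -1; dx <= 1; dx++)
            for (int dy = -1; dy <= 1; dy++)
                for (double[] b : cells.getOrDefault(key(x + dx * CELL, y + dy * CELL), List.of())) {
                    double sq = (b[0] - x) * (b[0] - x) + (b[1] - y) * (b[1] - y);
                    if (sq < bestSq) { bestSq = sq; best = b; }
                }
        return best;
    }

    public static void main(String[] args) {
        BarrierGrid g = new BarrierGrid();
        g.add(10, 10);
        g.add(100, 100);
        double[] hit = g.nearest(12, 12);
        System.out.println(hit[0] + ", " + hit[1]); // prints 10.0, 10.0
    }
}
```

Instead of scanning every barrier, a query touches only the handful of cells near the player, which is the same "discard far-away barriers early" idea as the sorted-axis approach above, generalized to both axes.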
