I'm using Particle Swarm Optimization (PSO) in Java, but I have little knowledge of how it works. I am applying it to multiple sequence alignment in bioinformatics, where we need to find a position and velocity for aligning the sequences. I need a detailed explanation of (and references for) PSO, and in particular why we calculate a velocity and position in PSO. If possible, I'd also like a simple example of PSO in Java. Essentially, I need to understand how it optimizes a problem.
public class Position {

    private double x;
    private double y;

    public Position(double x, double y) {
        this.x = x;
        this.y = y;
    }

    public double getX() {
        return x;
    }

    public void setX(double x) {
        this.x = x;
    }

    public double getY() {
        return y;
    }

    public void setY(double y) {
        this.y = y;
    }
}
Here is the class representing the position of a particle, with getters and setters.
Likewise, the other classes are available here.
Particle Swarm Optimization:

1. Randomly initialize a set of particles at random positions in the search space.
2. Evaluate all positions and update the global best position and the personal best positions.
3. Update each velocity based on the relative position of the global best position, the current velocity of the particle, the personal best position of the particle, and some random vector; then update each position using its new velocity.
4. Go to step 2.
General Overview
Particle Swarm Optimization (PSO) is a population-based, stochastic search method. The position of each individual or particle in the population represents a possible solution to the optimization problem.
The goodness/score of a given position in the search space is measured by the objective function, which is the function being optimized.
The particles move through the search space in search of the optimal solution.
The way in which the particles move through the search space is where the magic happens. Assuming that you're using the inertia weight model, this is governed by three different factors: the inertia component, the social component and the cognitive component.
The inertia component essentially introduces a form of momentum so that the particle's movement isn't too erratic from one iteration to the next. The cognitive component allows a particle's memory of where it has previously found good solutions to influence its movement direction. The social component allows the knowledge of other swarm members (i.e. where other members of the swarm have found good solutions) to influence a particle's movement.
The Update Equations
At iteration t a given particle's position is denoted by x(t). Its new position for the next iteration is calculated according to:
x(t+1) = x(t) + v(t+1)
where v(t+1) denotes the particle's velocity for the next iteration. Note that each of the values above is a vector. The length of these vectors equals the problem dimensionality, i.e. the number of input variables to the objective function. (Apologies for my terrible notation; I don't have enough reputation to post pretty equations.) The particle's velocity is calculated according to:
v(t+1) = w*v(t) + c1*r1*(pBest(t) - x(t)) + c2*r2*(gBest(t) - x(t))
The three different components described previously (inertia, cognitive and social) are each represented by one of the three terms in the equation above.
w is called the inertia weight and regulates the effect of the momentum component. c1 and c2 are the cognitive and social acceleration coefficients, and they regulate the importance of the cognitive and social components (respectively). r1 and r2 are vectors of random numbers (sampled from a uniform distribution between 0 and 1) that are used to scale each component of the difference vectors.
The first difference vector (pBest(t) - x(t)) allows the particle to move towards its personal best/pBest - the best position that the particle has encountered thus far. (In order to implement the algorithm, it is thus necessary for a particle to examine every position it encounters and save it if it is the best so far).
The second difference vector (gBest(t) - x(t)) allows the particle to use information from other particles in the swarm. In this expression, gBest(t) denotes the best position found by the swarm thus far. (So for implementation, after every iteration, the scores of all the particles should be examined so that the very best one can be saved for future use).
The particles move around the search space based on these equations for a number of iterations until hopefully, they all converge to the same point. The global best can then be taken as the final solution produced by the algorithm.
Hopefully this makes the inner workings of PSO more clear. With every iteration, every particle moves in a direction that is determined by its personal best and the global best. Assuming that the objective function's optimum is somewhere near these two points, it is likely that the particle will eventually encounter better and better solutions.
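To make this concrete, here is a minimal, self-contained Java sketch of the whole algorithm (not the asker's MSA code): it minimizes the toy objective f(x, y) = x^2 + y^2 using the inertia weight model. All class names, constants, and the search range are illustrative.

```java
import java.util.Random;

// Minimal PSO minimizing f(x, y) = x^2 + y^2 (optimum at the origin).
public class SimplePso {
    static final int SWARM_SIZE = 30, ITERATIONS = 200, DIM = 2;
    static final double W = 0.729844, C1 = 1.49618, C2 = 1.49618;

    static double objective(double[] x) {
        double sum = 0;
        for (double xi : x) sum += xi * xi;
        return sum;
    }

    public static double[] optimize(long seed) {
        Random rnd = new Random(seed);
        double[][] pos = new double[SWARM_SIZE][DIM];
        double[][] vel = new double[SWARM_SIZE][DIM];   // velocities start as 0-vectors
        double[][] pBest = new double[SWARM_SIZE][DIM];
        double[] pBestScore = new double[SWARM_SIZE];
        double[] gBest = new double[DIM];
        double gBestScore = Double.POSITIVE_INFINITY;

        // Randomly initialize positions in [-10, 10]^2.
        for (int i = 0; i < SWARM_SIZE; i++) {
            for (int d = 0; d < DIM; d++) pos[i][d] = -10 + 20 * rnd.nextDouble();
            pBest[i] = pos[i].clone();
            pBestScore[i] = objective(pos[i]);
            if (pBestScore[i] < gBestScore) {
                gBestScore = pBestScore[i];
                gBest = pos[i].clone();
            }
        }

        for (int t = 0; t < ITERATIONS; t++) {
            for (int i = 0; i < SWARM_SIZE; i++) {
                // Velocity update: inertia + cognitive + social components.
                for (int d = 0; d < DIM; d++) {
                    double r1 = rnd.nextDouble(), r2 = rnd.nextDouble();
                    vel[i][d] = W * vel[i][d]
                              + C1 * r1 * (pBest[i][d] - pos[i][d])
                              + C2 * r2 * (gBest[d] - pos[i][d]);
                    pos[i][d] += vel[i][d];   // x(t+1) = x(t) + v(t+1)
                }
                // Evaluate, then update the personal and global bests.
                double score = objective(pos[i]);
                if (score < pBestScore[i]) {
                    pBestScore[i] = score;
                    pBest[i] = pos[i].clone();
                    if (score < gBestScore) {
                        gBestScore = score;
                        gBest = pos[i].clone();
                    }
                }
            }
        }
        return gBest;
    }

    public static void main(String[] args) {
        double[] best = optimize(42);
        System.out.printf("best = (%.6f, %.6f)%n", best[0], best[1]);
    }
}
```

With the convergent parameter values, the returned gBest lands very close to the origin after a couple of hundred iterations.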
Other Considerations
Note that there are many different variations on the process described above. For example, it is possible to only allow knowledge sharing amongst a subset of the swarm's particles. The subset of particles with which a given particle can communicate is known as its neighbourhood. The particle will then move in the direction of the best solution found in its neighbourhood, the "local best" instead of the global best.
Also, there are a number of other possible pitfalls such as the values for w, c1, and c2. Although it is possible to do fancy things here, the general rule of thumb is to set:
w = 0.729844
c1 = c2 = 1.49618
as suggested by http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=870279&tag=1 to lead to convergent behaviour (i.e. so that all the particles will eventually converge to roughly the same point).
Usually, the particles are randomly initialized throughout the search space. Although it is possible to initialize their velocities randomly as well, this isn't necessary and may lead to divergent behaviour, so it's fine to just start the velocities as 0-vectors.
Some parties also advise making use of velocity clamping (where every velocity component is bounded above and below by some maximum and minimum value; if a particle's velocity exceeds that, it is clamped to the maximum/minimum). This is usually not necessary if w, c1 and c2 are chosen correctly and the gBest and pBest are only updated if they are within the bounds of the search space.
Related
I have a collection of java.awt.Shape objects covering a two-dimensional plane with no overlap. These are from a data set of U.S. counties, at a fairly low resolution. For an (x, y) latitude/longitude point, I want a quick way to identify the county shape that contains that point. What's an optimal way to index this?
Brute force would look like:
for (Shape eachShape : countyShapes) {
    if (eachShape.contains(x, y)) {
        return eachShape;
    }
}
To optimize this, I can store the min/max bounds of the (possibly complex) shapes, and only call contains(x, y) on shapes whose rectangular bounds encompass the given (x, y) coordinate. What's the best way to build this index? A SortedMultiset would work for indexing on the x minima and maxima, but how do I also include the y coordinates in the index?
For this specific implementation, doing a few seconds of up-front work to index the shapes is not a problem.
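As a baseline, here is a hedged sketch of the bounding-box prefilter described above: cache getBounds2D() once per shape up front, then run the cheap rectangle test before the potentially expensive contains(). The class name and the toy ellipses are illustrative; a real index would sort or partition the rectangles rather than scan them linearly.

```java
import java.awt.Shape;
import java.awt.geom.Ellipse2D;
import java.awt.geom.Rectangle2D;
import java.util.ArrayList;
import java.util.List;

// Bounding-box prefilter: only call contains() on shapes whose cached
// rectangular bounds enclose the query point.
public class BoundsIndex {
    private final List<Shape> shapes = new ArrayList<>();
    private final List<Rectangle2D> bounds = new ArrayList<>();

    public void add(Shape s) {
        shapes.add(s);
        bounds.add(s.getBounds2D());   // computed once, up front
    }

    public Shape find(double x, double y) {
        for (int i = 0; i < shapes.size(); i++) {
            if (bounds.get(i).contains(x, y)        // cheap rectangle test first
                    && shapes.get(i).contains(x, y)) {
                return shapes.get(i);
            }
        }
        return null;
    }

    public static void main(String[] args) {
        BoundsIndex index = new BoundsIndex();
        index.add(new Ellipse2D.Double(0, 0, 10, 10));
        index.add(new Ellipse2D.Double(20, 20, 10, 10));
        System.out.println(index.find(25, 25));   // matches the second ellipse
    }
}
```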
If possible, you could try rendering each shape to a bitmap in a different color. Then simply query the color at the point and look up the shape.
This question is outside the scope of Stackoverflow but the answer is probably Binary Space Partitioning.
Roughly:
Divide the space in two either on the x coordinate or y coordinate using the mid-point of the range.
Create a list of counties on the two sides of that line (and divided by that line).
On each side of that line divide the collections again by the other dimension.
Continue recursively building a tree dividing alternately by x and y until you reach a satisfactory set of objects to examine by brute force.
The conventional algorithm actually divides the shapes lying across the boundary but that might not be necessary here.
A smart implementation might look for the most efficient dividing line: the one for which the longer of the two resulting lists is smallest. That involves more up-front calculation, but gives a more efficient and more consistently performing partition.
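A rough Java sketch of this recursive alternating partition, built over shape bounding boxes. Rectangles that straddle the dividing line are kept in both children rather than split, as suggested above. Names and the leaf-size/depth limits are illustrative; a real implementation would store the shapes themselves.

```java
import java.awt.geom.Rectangle2D;
import java.util.ArrayList;
import java.util.List;

// Alternating x/y binary partition over bounding rectangles.
public class BspNode {
    private static final int LEAF_SIZE = 4;
    private List<Rectangle2D> items;   // non-null only at leaves
    private double split;
    private boolean splitOnX;
    private BspNode low, high;

    public BspNode(List<Rectangle2D> rects, boolean splitOnX, int depth) {
        if (rects.size() <= LEAF_SIZE || depth > 16) {
            items = rects;             // small enough: brute force at this leaf
            return;
        }
        this.splitOnX = splitOnX;
        double min = Double.POSITIVE_INFINITY, max = Double.NEGATIVE_INFINITY;
        for (Rectangle2D r : rects) {
            min = Math.min(min, splitOnX ? r.getMinX() : r.getMinY());
            max = Math.max(max, splitOnX ? r.getMaxX() : r.getMaxY());
        }
        split = (min + max) / 2;       // mid-point of the range, as described
        List<Rectangle2D> lo = new ArrayList<>(), hi = new ArrayList<>();
        for (Rectangle2D r : rects) {
            if ((splitOnX ? r.getMinX() : r.getMinY()) < split) lo.add(r);
            if ((splitOnX ? r.getMaxX() : r.getMaxY()) >= split) hi.add(r);
        }
        low = new BspNode(lo, !splitOnX, depth + 1);   // alternate dimensions
        high = new BspNode(hi, !splitOnX, depth + 1);
    }

    // Descend to the leaf whose region contains the query point and
    // return its (small) candidate list for a final brute-force check.
    public List<Rectangle2D> candidates(double x, double y) {
        if (items != null) return items;
        return ((splitOnX ? x : y) < split ? low : high).candidates(x, y);
    }

    public static void main(String[] args) {
        List<Rectangle2D> rects = new ArrayList<>();
        for (int i = 0; i < 10; i++)
            rects.add(new Rectangle2D.Double(i * 10, i * 10, 5, 5));
        BspNode root = new BspNode(rects, true, 0);
        System.out.println(root.candidates(42, 42).size());
    }
}
```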
You could use an existing GIS library like GeoTools and then all the hard work is already done.
Simply load your shapefile of counties and execute queries like
"the_geom" contains 'POINT(x y)
The quickstart tutorial will show you how to load the shapes, and the query tutorial will show you how to query them.
Having the min and max values of the coordinates of the bounds does not guarantee that you can determine whether a point is inside or outside in every situation. If you want to achieve this yourself, you should implement a proper point-in-polygon algorithm. A good one is the "radial algorithm"; I recommend using it. It isn't complicated to implement, and there is plenty of literature and there are plenty of examples.
Hope this helps.
Since in the digital world a real collision almost never happens, we will always have a situation where the "colliding" balls overlap.
How to put back balls in situation where they collide perfectly without overlap?
I would solve this problem with an a posteriori approach (in two dimensions).
In short, I have to solve this equation for t:

|(c1 + v1*t) - (c2 + v2*t)| = r1 + r2

Where:

t is a number that answers the question: how many frames ago did the collision happen perfectly?
c1 is the center of the first ball
c2 is the center of the second ball
v1 and v2 are their velocities
r1 and r2 are their radii

but the solution from WolframAlpha is too complicated (I changed the names of the velocities, but that essentially changes nothing).
It looks complicated because it's the full solution, not just the simplified polynomial form of it. Multiply everything out and gather the constant, t, and t^2 terms, and you'll find that it becomes just at^2 + bt + c = 0. From there you can just use the quadratic formula.
Also, if you want to keep things simple, do them with vector math. There's no reason here to separate out the x and y coordinates; vector addition and dot products are all you need.
Finally, all that matters is the relative position and relative velocity. Pretend one circle is at the origin and stationary, and apply the difference to the other ball. That doesn't change the answer, but it does reduce the number of variables you're wrangling.
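Putting those three points together, here is a sketch in Java: work in relative coordinates dp = c1 - c2 and dv = v1 - v2, expand |dp + dv*t|^2 = (r1 + r2)^2 into a*t^2 + b*t + c = 0 via dot products, and apply the quadratic formula. The method name and the example numbers are illustrative.

```java
// Rollback of two overlapping circles to their exact moment of contact.
// a = dv.dv, b = 2*dp.dv, c = dp.dp - (r1 + r2)^2; solve a*t^2 + b*t + c = 0.
public class CollisionRollback {
    // Returns the earlier root (negative t = frames in the past), or NaN
    // if there is no relative motion or no real solution.
    public static double contactTime(double dpx, double dpy,
                                     double dvx, double dvy, double radiusSum) {
        double a = dvx * dvx + dvy * dvy;
        double b = 2 * (dpx * dvx + dpy * dvy);
        double c = dpx * dpx + dpy * dpy - radiusSum * radiusSum;
        double disc = b * b - 4 * a * c;
        if (a == 0 || disc < 0) return Double.NaN;
        return (-b - Math.sqrt(disc)) / (2 * a);   // earlier of the two roots
    }

    public static void main(String[] args) {
        // Two unit circles (radius sum 2) whose centers are 1.5 apart,
        // closing at relative speed 1: they touched exactly 0.5 frames ago.
        System.out.println(contactTime(1.5, 0, -1, 0, 2.0));
    }
}
```

Note that only the relative values enter the computation, which is exactly the "pretend one circle is at the origin" simplification.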
I need to make a wheel stop on one of five angles, and I want it to teeter when it gets to the angle. After the user spins the wheel, I slow it down by multiplying the rotation velocity by 0.98 each tick. I sort of have it working by finding the closest of the angles and adding a small value in its direction to the velocity. However, this looks unrealistic and can be glitchy.
I was thinking of implementing a damped sine wave, but I'm not sure how I would do this.
Current Pseudocode:
var rotation, rotationVelocity, stoppingPoints[5];

update(deltaT) {
    rotationVelocity -= rotationVelocity * 0.5 * deltaT;
    closestAngle = findClosestAngle().angle;
    rotationVelocity += (closestAngle - rotation) / 36 * deltaT;
    rotation += rotationVelocity * deltaT;
}
Edit:
Teeter: move or balance unsteadily; sway back and forth:
Subtract a constant amount from its velocity every iteration until it reaches zero. Not only does this represent how friction actually works in real life, it's easier too.
If you want it to move as though it were connected to a spring: Hooke's law for springs is F = -kx, where k is a constant and x is the distance from the origin. Keep track of the wheel's rotation from an origin and, each iteration, add -kx to its velocity, where x is its rotational distance (angle) from the origin.
Now, if you apply both friction and Hooke's law to the wheel, it should look realistic.
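A small Java sketch of that combination, using simple viscous damping in place of the constant friction mentioned above (viscous damping is easier to keep stable); the spring and damping constants are illustrative, not tuned values:

```java
// Damped-spring "teeter": the wheel oscillates about the nearest stopping
// angle and settles there, like a damped sine wave.
public class TeeterWheel {
    static final double SPRING_K = 8.0;    // Hooke's law constant k
    static final double DAMPING = 1.0;     // stands in for friction

    double rotation;            // current angle (radians)
    double rotationVelocity;    // radians per second
    final double restAngle;     // nearest stopping point

    TeeterWheel(double rotation, double rotationVelocity, double restAngle) {
        this.rotation = rotation;
        this.rotationVelocity = rotationVelocity;
        this.restAngle = restAngle;
    }

    void update(double dt) {
        double x = rotation - restAngle;               // displacement from origin
        double accel = -SPRING_K * x                   // F = -kx
                     - DAMPING * rotationVelocity;     // damping term
        rotationVelocity += accel * dt;                // semi-implicit Euler step
        rotation += rotationVelocity * dt;
    }

    public static void main(String[] args) {
        TeeterWheel w = new TeeterWheel(1.0, 0.0, 0.0);
        for (int i = 0; i < 2000; i++) w.update(0.01);
        System.out.println(w.rotation);   // has settled near the rest angle
    }
}
```

The under-damped case (small DAMPING relative to SPRING_K) gives exactly the back-and-forth teeter before the wheel comes to rest.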
I think the closest angle you want is the one closest to where the wheel will stop. You can simulate where it will end up and how long that takes, and use that to determine how much extra (or less) velocity you'll need.
Not sure what you mean by teetering exactly.
It sounds like you want to model a wheel with weights attached at the stoppingPoints. I mean, from a physics viewpoint. There is the existing rotational velocity, then deceleration from friction, and an unknown acceleration/deceleration caused by the effects of gravity on those points (as translated to a rotational velocity based on the position of the weights). Anyway, that's my interpretation and would probably be my approach (to model the gravity).
I think the teetering you speak of will be achieved when the acceleration caused by the weights exceeds the existing rotational velocity.
I will probably have to implement a center-of-gravity class, but before I do I will ask for help in finding an existing Java class. I suspect this has been implemented by others as part of a math library.
In a space of n-dimensions, suppose each dimension is discrete. So for example in 3 dimensions, you can have an X dimension with a range of [0..a]. You also have a Y dimension with a range of [0..b] and a Z dimension with a range of [0..c]. The implementation should be general so that the number of dimensions can be greater than 3 and also generally a not equal to b where a and b are the maximum coordinates of their respective dimensions.
Each point in the space is a double precision float (non-negative).
Find the coordinate of the center-of-gravity.
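Under those assumptions, here is one hedged Java sketch (names are illustrative): store the n-dimensional grid of masses as a flat array plus a dims vector, and compute each coordinate of the center of gravity as sum(mass * coordinate) / sum(mass) along that axis.

```java
// Weighted centroid of an n-dimensional discrete grid of non-negative masses.
public class CenterOfGravity {
    // mass is stored row-major; dims[d] is the size of dimension d.
    public static double[] compute(double[] mass, int[] dims) {
        double[] weighted = new double[dims.length];
        double total = 0;
        for (int i = 0; i < mass.length; i++) {
            total += mass[i];
            int rem = i;
            // Unravel the flat index into per-dimension coordinates.
            for (int d = dims.length - 1; d >= 0; d--) {
                weighted[d] += mass[i] * (rem % dims[d]);
                rem /= dims[d];
            }
        }
        for (int d = 0; d < dims.length; d++) weighted[d] /= total;
        return weighted;
    }

    public static void main(String[] args) {
        // 2x3 grid with all mass in the cell at (row 1, col 2):
        // the centre of gravity is exactly that cell's coordinate.
        double[] mass = {0, 0, 0, 0, 0, 5};
        System.out.println(java.util.Arrays.toString(compute(mass, new int[]{2, 3})));
    }
}
```

The generalization to any number of dimensions with unequal extents falls out of the index-unravelling loop; nothing in the computation assumes a = b = c.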
If you use a physics engine, you can easily get the centre of gravity -- try JBullet :) The centre of mass which you can get with the API is essentially the same thing, but with a slight difference:
The term center of mass is often used interchangeably with center of
gravity, but they are physically different concepts. They happen to
coincide in a uniform gravitational field, but where gravity is not
uniform, center of gravity refers to the mean location of the
gravitational force acting on a body. This results in small but
measurable gravitational torque that must be accounted for in the
operation of artificial satellites.
http://www.continuousphysics.com/Bullet/BulletFull/classbtRigidBody.html
I'm calculating the shortest path of a robot on a plane with polygonal obstacles. Everything works well and fast, no problems there. But how do I smooth the path so that it becomes curvy?
Below is a picture of a path connecting vertices with straight lines.
P.S. The robot is just a circle.
This paper might be useful. It looks like it's a non-trivial problem. Abstract:
Automatic graph drawers need to compute paths among vertices of a simple polygon which besides remaining in the interior need to exhibit certain aesthetic properties. Some of these require the incorporation of some information about the polygonal shape without being too far from the actual shortest path. We present an algorithm to compute a locally convex region that “contains” the shortest Euclidean path among two vertices of a simple polygon. The region has a boundary shape that “follows” the shortest path shape. A cubic Bezier spline in the region interior provides a “short and smooth” collision free curve between the two given vertices. The obtained results appear to be aesthetically pleasant and the methods used may be of independent interest. They are elementary and implementable. Figure 7 is a sample output produced by our current implementation.
I used to play with path calculation techniques a lot when trying to make realistic flying sequences to be rendered in Teragen. I initially tried using Bézier curves, but discovered that (for flying at least) they aren't that useful. The reason is that the curvature between curves is discontinuous, and so cannot be used to calculate a continuous, correct banking angle for a fly-by. Also, it's hard to be sure that the curve doesn't intersect a mountain.
I digress. The approach I eventually settled on was a simple mass-spring based path, which I relax into submission.
Subdivide the path into lots of little segments, the more the merrier. For each point, move it a little bit in a direction that reduces the angle between it and its neighbours, and away from the obstacles. Repeat many times until the path has settled down.
k = 0.01 // Adjust the values of k and j to your liking.
j = 0.01 // Small values take longer to settle. Larger values are unstable.

For each point P:
    normal_vector = vector_to_previous_point + vector_to_next_point
    obstacle_vector = vector_to_nearest_obstacle
    obstacle_distance = magnitude(obstacle_vector)
    obstacle_vector /= obstacle_distance^2   // repulsion falls off with distance
    P += (normal_vector * k) - (obstacle_vector * j)
The benefit of these kind of finite element relaxation techniques is that you can program all kinds of constraints into it, and the path will settle on some compromise between them, depending on the weights (j and k in this case).
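For illustration, here is one possible Java translation of that relaxation loop. The single circular obstacle, its position, and the constants are all hypothetical; endpoints are pinned so that only the interior of the path relaxes.

```java
import java.util.Arrays;

// Mass-spring path relaxation: each interior point is pulled toward its
// neighbours (smoothing) and pushed away from an obstacle (repulsion
// weakening with distance squared).
public class PathRelaxer {
    static final double K = 0.01, J = 0.01;          // untuned weights
    static final double OBS_X = 5, OBS_Y = 0.5;      // hypothetical obstacle centre

    public static double[][] relax(double[][] path, int iterations) {
        for (int it = 0; it < iterations; it++) {
            for (int i = 1; i < path.length - 1; i++) {   // endpoints stay fixed
                double[] p = path[i];
                // Pull toward the neighbours (straightens kinks).
                double nx = (path[i - 1][0] - p[0]) + (path[i + 1][0] - p[0]);
                double ny = (path[i - 1][1] - p[1]) + (path[i + 1][1] - p[1]);
                // Push away from the obstacle, weaker with distance squared.
                double ox = OBS_X - p[0], oy = OBS_Y - p[1];
                double d2 = ox * ox + oy * oy;
                p[0] += nx * K - (ox / d2) * J;
                p[1] += ny * K - (oy / d2) * J;
            }
        }
        return path;
    }

    public static void main(String[] args) {
        // A straight path subdivided into 20 short segments, passing just
        // under the obstacle; relaxation bows it away.
        double[][] path = new double[21][2];
        for (int i = 0; i <= 20; i++) { path[i][0] = i * 0.5; path[i][1] = 0; }
        relax(path, 500);
        System.out.println(Arrays.toString(path[10]));
    }
}
```

Additional constraints (maximum curvature, preferred altitude, and so on) would be added as further terms inside the inner loop, each with its own weight.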
If you're into robotics, why not come and join the Robotics Proposal?
Can't you just make the path curvy in the actual execution of the path following algorithm? If you leave the path as is (i.e. connected straight lines), implementing a look ahead distance of ~1 meter (this value will depend on the speed of your robot and the amount you've padded the configuration space to avoid obstacles) in the control algorithm that calculates the velocity of each wheel will automatically smooth out the path without any need for preprocessing.
Here's a quick image of what I mean... the red dotted line is the path actually executed by the robot when you control to a point based on a lookahead distance. A lookahead distance just computes a point further down the path by some arbitrary distance.
Again, the only thing you have to worry about is how much you're padding obstacles to make sure you avoid hitting them. Normally I believe the area of an obstacle is padded by half the robot's radius, but if you're controlling to a lookahead distance you may have to make this slightly larger.
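A sketch of how the lookahead point itself might be computed on a piecewise-straight path (the names, the example path, and the interpolation scheme are illustrative): walk along the segments, accumulating distance, and interpolate within the segment where the lookahead distance runs out.

```java
// Compute a point a fixed arc distance further along a polyline path.
public class Lookahead {
    public static double[] lookaheadPoint(double[][] path, int startIndex,
                                          double lookahead) {
        double remaining = lookahead;
        for (int i = startIndex; i < path.length - 1; i++) {
            double dx = path[i + 1][0] - path[i][0];
            double dy = path[i + 1][1] - path[i][1];
            double seg = Math.hypot(dx, dy);
            if (seg >= remaining) {
                double f = remaining / seg;   // interpolate within this segment
                return new double[]{path[i][0] + f * dx, path[i][1] + f * dy};
            }
            remaining -= seg;
        }
        return path[path.length - 1];         // past the end: aim at the goal
    }

    public static void main(String[] args) {
        // Two straight segments with a corner; a 3-unit lookahead from the
        // start lands one unit up the second segment, cutting the corner.
        double[][] path = {{0, 0}, {2, 0}, {2, 2}};
        double[] p = lookaheadPoint(path, 0, 3.0);
        System.out.println(p[0] + ", " + p[1]);
    }
}
```

Steering toward this point each control cycle is what rounds off the corners of the piecewise-straight path without any preprocessing.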
In the case of a robot we can't know the future. We have to draw each point knowing only the location of the robot and the obstacles. The usual method for making curved paths of minimum length is to model the robot with a circle and move the circle so it remains in contact with the obstacles. Just keep one radius away and the turns will be curves.
If you want curves and NOT minimum distance try making the above radius proportional to the distance you are from a polygon vertex.
The Bézier curves idea only works to make the path curved in retrospect: it changes where the robot has been. With robots, changing the past is usually called "cheating". One way to avoid having to change the path you've already walked is to look ahead. But can the robot see around corners? You have to specify the rules better.