I have a line segment that represents a direction and magnitude (length). When I draw the segment it works as it should. The value from getAimArmsRotation() is pulled from another class that holds a touchpad value.
if (player.getFacingRight()) {
    lOriginX = player.getPosition().x + Player.SIZEw / 2;
    lOriginY = player.getPosition().y + Player.SIZEh / 1.5f;
    // dividing by 57 is a rough degrees-to-radians conversion (57.2958 is exact)
    lEndX = lOriginX + (float) Math.cos(player.getAimArmsRotation() / 57) * 15f;
    lEndY = lOriginY + (float) Math.sin(player.getAimArmsRotation() / 57) * 15f;
    laserO = new Vector2(lOriginX, lOriginY);
    laserE = new Vector2(lEndX, lEndY);
}
However, if I take the vectors or floats from this calculation and apply them to a model's velocity vector, the model does not move along the line segment as I would expect.
EDIT: Sorry, I meant to attach this picture when I created the question. Fig. 1 is how my line segment looks; when I apply the velocity values that make up the line segment to my object, it moves in the direction that Fig. 2 shows.
getAimArmsRotation() is just a method that sets a sprite's rotation with a value from the touchpad in another class. I don't think the values should matter, since these floats are what I used to give the line segment its length and direction. I would think that giving an object a velocity of those x and y floats would give it the same direction as the line.
Thanks for the DV, jerks.
I wasn't taking into account the origin position of the object when trying to send it along the desired path. I was only using the LineEnd values; I needed to take the origin point into account to correctly calculate the trajectory or path.
for (GameObject go : gObjects) {
    if (go.getType() == PROJECTILE_ID) {
        // velocity = segment end minus segment origin
        go.getVelocity().x = player.getLineEndX() - player.getLineOrgX();
        go.getVelocity().y = player.getLineEndY() - player.getLineOrgY();
        System.out.println(go.getVelocity().x);
        System.out.println(go.getVelocity().y);
    }
}
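One refinement worth noting, as a sketch rather than part of the fix above: the raw end-minus-origin difference ties the projectile's speed to the segment's length (15 units here). Normalizing the direction first decouples speed from length; PROJECTILE_SPEED below is a hypothetical constant, not from the original code:
// Sketch: normalize the segment direction so projectile speed is
// independent of the segment's length. PROJECTILE_SPEED is hypothetical.
Vector2 dir = new Vector2(player.getLineEndX() - player.getLineOrgX(),
                          player.getLineEndY() - player.getLineOrgY()).nor();
go.getVelocity().set(dir.x * PROJECTILE_SPEED, dir.y * PROJECTILE_SPEED);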
I'm trying to implement linear interpolation and a fixed time step for my game loop. I'm using the libGDX engine and Box2D. I'm attempting to find how far the simulation moves my character's body during a world step, like this:
old_pos = guyBody.getPosition();
world.step(STEP_TIME, VELOCITY_ITERATIONS, POSITION_ITERATIONS);
new_pos = guyBody.getPosition();
printLog(new_pos.x-old_pos.x);
This returns 0 each time. The simulation works fine, and the body definitely moves each step.
Additional code:
@Override
public void render(float delta) {
    accumulator += delta;
    while (accumulator >= STEP_TIME) {
        accumulator -= STEP_TIME;
        stepWorld();
    }
    alpha = accumulator / STEP_TIME;
    update(delta);
    //RENDER
}
private void stepWorld() {
    old_pos = guyBody.getPosition();
    old_angle = guyBody.getAngle() * MathUtils.radiansToDegrees;
    world.step(STEP_TIME, VELOCITY_ITERATIONS, POSITION_ITERATIONS);
    new_angle = guyBody.getAngle() * MathUtils.radiansToDegrees;
    new_pos = guyBody.getPosition();
}
I'm attempting to use alpha to check how far I am in between physics steps so I can interpolate a Sprite's position.
Thanks!
Body's getPosition() method returns a reference to a Vector; that means you are not copying it by value but only assigning a "pointer" to the position object to old_pos/new_pos. You assign it once before the step and once after, but in the end both variables hold the same object, whose state is already the post-step one.
What you need to do is copy the position vector by value; you can use Vector's cpy() method for this.
Your code should look like:
old_pos = guyBody.getPosition().cpy();
world.step(STEP_TIME, VELOCITY_ITERATIONS, POSITION_ITERATIONS);
new_pos = guyBody.getPosition().cpy();
printLog(new_pos.x-old_pos.x);
If you don't use the y coordinate, you could also consider keeping only x in a float variable so you don't copy the whole object (though it should not really impact your performance).
While the accepted answer does answer my question, I wanted to add some information I figured out while getting this to work that I wish I had known at the beginning.
If you're going to use a fixed timestep for your physics calculations (which you should), you should also interpolate (or extrapolate) a Sprite's position between physics steps. In my code, the screen is rendered more often than the world is stepped:
@Override
public void render(float delta) {
    accumulator += delta;
    while (accumulator >= STEP_TIME) {
        accumulator -= STEP_TIME;
        stepWorld();
    }
    alpha = accumulator / STEP_TIME;
    update(delta);
    //RENDER using alpha
}
To avoid a jittery rendering of moving objects, render Sprites or Textures at their positions, modified by alpha. Since alpha is the ratio of your accumulator to the step time, it will always be between 0 and 1.
You then need to find how much your body is moving during one step. This can be done with the accepted answer or using the body velocity:
newPos = oldPos + body.getLinearVelocity()*STEP_TIME*alpha
Then just render Sprite at the new position and you should see smooth movement with your fixed timestep at most frame rates.
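To make that concrete, here is a minimal sketch of an interpolated render, assuming a Sprite field guySprite positioned in world units (the sprite name and its setup are assumptions, not from the original post):
// Sketch: draw the sprite `alpha` of the way into the next physics step,
// using the body's current velocity to estimate where it is heading.
Vector2 bodyPos = guyBody.getPosition();
float drawX = bodyPos.x + guyBody.getLinearVelocity().x * STEP_TIME * alpha;
float drawY = bodyPos.y + guyBody.getLinearVelocity().y * STEP_TIME * alpha;
guySprite.setPosition(drawX, drawY);
guySprite.draw(batch);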
EDIT: I found out that all the pixels were upside down because of the difference between screen and world coordinates, so that is no longer a problem.
EDIT: After following a suggestion from #TheVee (using absolute values), my image got much better, but I'm still seeing issues with color.
I'm having a little trouble with ray-tracing triangles. This is a follow-up to my previous question about the same topic. The answers to that question made me realize that I needed to take a different approach, and the new approach worked much better, but I'm seeing a couple of issues with my raytracer now:
There is one triangle that never renders in color (it is always black, even though its color is supposed to be yellow).
Here is what I am expecting to see:
But here is what I am actually seeing:
Addressing the first problem: even if I remove all other objects (including the blue triangle), the yellow triangle always renders black, so I don't believe it is an issue with the shadow rays I am sending out. I suspect it has to do with the angle of the triangle/plane relative to the camera.
Here is my process for ray-tracing triangles, based on the process described on this website:
Determine if the ray intersects the plane.
If it does, determine if the ray intersects inside of the triangle (using parametric coordinates).
Here is the code for determining if the ray hits the plane:
private Vector getPlaneIntersectionVector(Ray ray)
{
    double epsilon = 0.00000001;
    Vector w0 = ray.getOrigin().subtract(getB());
    double numerator = -(getPlaneNormal().dotProduct(w0));
    double denominator = getPlaneNormal().dotProduct(ray.getDirection());

    //ray is parallel to the triangle plane: whether it lies in the plane
    //(numerator == 0) or is disjoint from it, there is no unique intersection
    if (Math.abs(denominator) < epsilon)
    {
        return null;
    }

    double intersectionDistance = numerator / denominator;

    //intersectionDistance < 0 means the "intersection" is behind the ray
    //(pointing away from the plane), so not a real intersection
    return (intersectionDistance >= 0) ? ray.getLocationWithMagnitude(intersectionDistance) : null;
}
And once I have determined that the ray intersects the plane, here is the code to determine if the ray is inside the triangle:
private boolean isIntersectionVectorInsideTriangle(Vector planeIntersectionVector)
{
    //Get edges of triangle
    Vector u = getU();
    Vector v = getV();

    //Pre-compute the five unique dot products
    double uu = u.dotProduct(u);
    double uv = u.dotProduct(v);
    double vv = v.dotProduct(v);
    Vector w = planeIntersectionVector.subtract(getB());
    double wu = w.dotProduct(u);
    double wv = w.dotProduct(v);

    double denominator = (uv * uv) - (uu * vv);

    //get and test parametric coordinates
    double s = ((uv * wv) - (vv * wu)) / denominator;
    if (s < 0 || s > 1)
    {
        return false;
    }
    double t = ((uv * wu) - (uu * wv)) / denominator;
    if (t < 0 || (s + t) > 1)
    {
        return false;
    }
    return true;
}
I think I am having some issue with my coloring, probably to do with the normals of the various triangles. Here is the equation I am using to build my lighting model for spheres and triangles:
Now, here is the code that does this:
public Color calculateIlluminationModel(Vector normal, boolean isInShadow, Scene scene, Ray ray, Vector intersectionPoint)
{
    //c = cr*ca + cr*cl*max(0, n·l) + cl*cp*max(0, e·r)^p
    Vector lightSourceColor = getColorVector(scene.getLightColor()); //cl
    Vector diffuseReflectanceColor = getColorVector(getMaterialColor()); //cr
    Vector ambientColor = getColorVector(scene.getAmbientLightColor()); //ca
    Vector specularHighlightColor = getColorVector(getSpecularHighlight()); //cp
    Vector directionToLight = scene.getDirectionToLight().normalize(); //l

    double angleBetweenLightAndNormal = directionToLight.dotProduct(normal);

    Vector reflectionVector = normal.multiply(2).multiply(angleBetweenLightAndNormal).subtract(directionToLight).normalize(); //r

    double visibilityTerm = isInShadow ? 0 : 1;
    Vector ambientTerm = diffuseReflectanceColor.multiply(ambientColor);

    double lambertianComponent = Math.max(0, angleBetweenLightAndNormal);
    Vector diffuseTerm = diffuseReflectanceColor.multiply(lightSourceColor).multiply(lambertianComponent).multiply(visibilityTerm);

    double angleBetweenEyeAndReflection = scene.getLookFrom().dotProduct(reflectionVector);
    angleBetweenEyeAndReflection = Math.max(0, angleBetweenEyeAndReflection);
    double phongComponent = Math.pow(angleBetweenEyeAndReflection, getPhongConstant());
    Vector phongTerm = lightSourceColor.multiply(specularHighlightColor).multiply(phongComponent).multiply(visibilityTerm);

    return getVectorColor(ambientTerm.add(diffuseTerm).add(phongTerm));
}
I am seeing that the dot product between the normal and the light source is -1 for the yellow triangle and about -0.707 for the blue triangle, so I'm not sure whether the normal facing the wrong way is the problem. Regardless, when I forced the angle between the light and the normal to be positive (Math.abs(directionToLight.dotProduct(normal));), it caused the opposite problem:
I suspect that it will be a small typo/bug, but I need another pair of eyes to spot what I couldn't.
Note: My triangles have vertices (a, b, c), and the edges (u, v) are computed using a-b and c-b respectively (those are also used for calculating the plane/triangle normal). A Vector is made up of an (x, y, z) point, and a Ray is made up of an origin Vector and a normalized direction Vector.
Here is how I am calculating normals for all triangles:
private Vector getPlaneNormal()
{
    Vector v1 = getU();
    Vector v2 = getV();
    return v1.crossProduct(v2).normalize();
}
Please let me know if I left out anything that you think is important for solving these issues.
EDIT: After help from @TheVee, this is what I have at the end:
There are still problems with z-buffering and with Phong highlights on the triangles, but the problem I was trying to solve here is fixed.
It is a common problem in ray tracing of scenes that include planar objects that we hit them from the wrong side. Formulas containing the dot product come with an inherent assumption that light is incident on the object from the direction the outward-facing normal points to. This can be true for only half the possible orientations of your triangle, and you've been unlucky enough to orient it with its normal facing away from the light.
Technically speaking, in the physical world your triangle would not have zero volume: it's composed of some layer of material that is merely thin. On either side it has a proper normal that points outwards. Assigning a single normal is a simplification that's fair to take because the two only differ in sign.
However, if we make a simplification, we need to account for it. Having what is technically an inward-facing normal in our formulas gives negative dot products, a case they are not made for. It's as if light were coming from the inside of the object, or as if it hit a surface that could not possibly be in its way. That's why they give an erroneous result. The negative value will subtract light from other sources and, depending on the magnitude and implementation, may result in darkening, full black, or numerical underflow.
But because we know the correct normal is either what we're using or its negative, we can fix both cases at once by taking a preventive absolute value where a positive dot product is implicitly assumed (in your code, that's angleBetweenLightAndNormal). Some libraries like OpenGL do that for you, and on top use the additional information (the sign) to choose between two different materials (front and back) that you may provide if desired. Alternatively, they can be set not to draw the back faces at all, because in solid objects they will be overdrawn by front faces anyway (known as face culling), saving about half of the numerical work.
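In the code from the question, that amounts to a one-line change (a sketch of this answer's suggestion, not a verified patch):
// Take the absolute value so a back-facing normal cannot subtract light.
double angleBetweenLightAndNormal = Math.abs(directionToLight.dotProduct(normal));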
I'm making a 2D game in libGDX and I would like to know the standard way of moving (translating) a sprite between two known points on the screen.
On a button press, I am trying to animate a diagonal movement of a sprite between two points. I know the x and y coordinates of the start and finish points, but I can't figure out the maths that determines where the texture should be on each call to render. At the moment my algorithm is roughly:
textureProperty = new TextureProperty();
firstPtX = textureProperty.currentLocationX
firstPtY = textureProperty.currentLocationY
nextPtX = textureProperty.getNextLocationX()
nextPtY = textureProperty.getNextLocationY()

diffX = nextPtX - firstPtX
diffY = nextPtY - firstPtY

deltaX = diffX/speedFactor // Arbitrary, controls speed of the translation
deltaY = diffY/speedFactor

renderLocX = textureProperty.renderLocX()
renderLocY = textureProperty.renderLocY()

if (textureProperty.getFirstPoint() != textureProperty.getNextPoint()) {
    animating = true
}

if (animating) {
    newLocationX = renderLocX + deltaX
    newLocationY = renderLocY + deltaY
    textureProperty.setRenderPoint(renderLocX, renderLocY)
}

if (textureProperty.getRenderPoint() == textureProperty.getNextPoint()) {
    animating = false
    textureProperty.setFirstPoint(textureProperty.getNextPoint())
}

batch.draw(texture, textureProperty.renderLocX(), textureProperty.renderLocY())
However, I can foresee a few issues with this code.
1) Since pixels are integers, if I divide the difference by something that doesn't divide evenly, it will round. 2) As a result of 1), the movement will miss the target.
Also, when I test the animation, objects moving from point 1 miss by a long shot, which suggests something may be wrong with my maths.
Here is what I mean graphically:
Desired outcome:
Actual outcome:
Surely this is a standard problem. I welcome any suggestions.
Let's say you have start coordinates X1,Y1 and end coordinates X2,Y2, and some variable p which holds the percentage of the path passed. So if p == 0 you are at X1,Y1; if p == 100 you are at X2,Y2; and if 0 < p < 100 you are somewhere in between. In that case you can calculate the current coordinates from p like:
X = X1 + ((X2 - X1)*p)/100;
Y = Y1 + ((Y2 - Y1)*p)/100;
So you do not base the current coordinates on the previous ones; you always calculate from the start point, the end point, and the percentage of the path passed.
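If you are on libGDX anyway, the same idea is available directly as Vector2.lerp(), which moves between two points by a fraction in [0, 1]. A small sketch, where start, end, elapsed, and duration are assumed names, not from this answer:
// Sketch: p is the fraction of the path covered so far, clamped to 1.
float p = Math.min(elapsed / duration, 1f);
Vector2 current = start.cpy().lerp(end, p); // cpy() keeps `start` intact
batch.draw(texture, current.x, current.y);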
First of all you need a Vector2 direction giving the direction between the 2 points.
This Vector should be normalized, so that its length is 1:
Vector2 dir = new Vector2(x2-x1,y2-y1).nor();
Then in the render method you need to move the object, which means changing its position. You have the speed (given in distance/second), a normalized Vector giving the direction, and the time since the last update.
So the new position can be calculated like this:
position.x += speed * delta * dir.x;
position.y += speed * delta * dir.y;
Now you only need to limit the position to the target position, so that you don't go too far:
boolean stop = false;
// note: these checks assume the object moves in the positive x/y direction
if (position.x >= target.x) {
    position.x = target.x;
    stop = true;
}
if (position.y >= target.y) {
    position.y = target.y;
    stop = true;
}
Now to the pixel problem:
Do not use pixels! Using pixels will make your game resolution-dependent.
Use a libGDX Viewport and Camera instead.
This allows you to calculate everything in your own world units (for example metres) and libGDX will convert it for you.
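A minimal sketch of such a setup (the 16x9 world size is an arbitrary example, not from this answer):
import com.badlogic.gdx.graphics.OrthographicCamera;
import com.badlogic.gdx.utils.viewport.FitViewport;
import com.badlogic.gdx.utils.viewport.Viewport;

// The world is 16x9 world units regardless of screen resolution.
OrthographicCamera camera = new OrthographicCamera();
Viewport viewport = new FitViewport(16, 9, camera);

// in resize(int width, int height):
viewport.update(width, height, true); // true re-centres the camera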
I didn't see any big errors, though I saw a few small ones. You are comparing two objects using == and !=, but I suggest you use a.equals(b) and !a.equals(b) instead. Secondly, I found that your render coordinates are always set to the same values in textureProperty.setRenderPoint(renderLocX, renderLocY); you are assigning the old values back. Maybe you were supposed to use the newLocation coords.
BTW, thanks for your code; I was searching for something like this and found it here. <3
Hello, I am fairly new to programming. I am trying, in Java, to create a function that recursively draws triangles from a larger triangle's midpoints between corners, where the new triangles' points are deviated from their normal position in the y-value. See the pictures below for a visualization.
The first picture shows the progression of the recursive algorithm without any deviation (orders 0, 1, 2) and the second picture shows it with deviation (orders 0, 1).
I have managed to produce a working piece of code that creates just what I want for the first couple of orders, but when we reach order 2 and above I run into the problem that the smaller triangles don't use the same midpoints, so the result looks like the picture below.
So I need help with a way to store and look up the correct midpoints for each of the triangles. I have been thinking of implementing a new class that calculates and stores the midpoints, but as I said, I need help with this.
Below is my current code.
The Point class stores an x and y value for a point.
lineBetween draws a line between the two selected points.
void fractalLine(TurtleGraphics turtle, int order, Point ett, Point tva, Point tre, int dev) {
    if (order == 0) {
        lineBetween(ett, tva, turtle);
        lineBetween(tva, tre, turtle);
        lineBetween(tre, ett, turtle);
    } else {
        double deltaX = tva.getX() - ett.getX();
        double deltaY = tva.getY() - ett.getY();

        double deltaXtre = tre.getX() - ett.getX();
        double deltaYtre = tre.getY() - ett.getY();

        double deltaXtva = tva.getX() - tre.getX();
        double deltaYtva = tva.getY() - tre.getY();

        //midpoint of edge ett-tva, displaced in y
        double xt = (deltaX / 2) + ett.getX();
        double yt = (deltaY / 2) + ett.getY() + RandomUtilities.randFunc(dev);
        Point one = new Point(xt, yt);

        //midpoint of edge ett-tre, displaced in y
        xt = (deltaXtre / 2) + ett.getX();
        yt = (deltaYtre / 2) + ett.getY() + RandomUtilities.randFunc(dev);
        Point two = new Point(xt, yt);

        //midpoint of edge tre-tva, displaced in y
        xt = (deltaXtva / 2) + tre.getX();
        yt = (deltaYtva / 2) + tre.getY() + RandomUtilities.randFunc(dev);
        Point three = new Point(xt, yt);

        fractalLine(turtle, order - 1, one, tva, three, dev / 2);
        fractalLine(turtle, order - 1, ett, one, two, dev / 2);
        fractalLine(turtle, order - 1, two, three, tre, dev / 2);
        fractalLine(turtle, order - 1, one, two, three, dev / 2);
    }
}
Thanks in Advance
Victor
You can define a triangle by 3 points (vertices). So the vertices a, b, and c form a triangle, and the combinations ab, ac, and bc are the edges. The algorithm goes:
First, start with the three vertices a, b, and c.
Get the midpoints p1, p2, and p3 of the 3 edges, and form the 4 sets of vertices for the 4 smaller triangles, i.e. (a,p1,p2), (b,p1,p3), (c,p2,p3), and (p1,p2,p3).
Recursively find the sub-triangles of the 4 triangles until the desired depth is reached.
As a rough guide, the code goes:
void findTriangles(Vertexes[] triangle, int currentDepth) {
    //Depth is reached.
    if (currentDepth == depth) {
        store(triangle);
        return;
    }

    Vertexes[] first = getFirstTriangle(triangle);
    Vertexes[] second = getSecondTriangle(triangle);
    Vertexes[] third = getThirdTriangle(triangle);
    Vertexes[] fourth = getFourthTriangle(triangle);

    findTriangles(first, currentDepth + 1);
    findTriangles(second, currentDepth + 1);
    findTriangles(third, currentDepth + 1);
    findTriangles(fourth, currentDepth + 1);
}
You have to store the relevant triangles in a Data structure.
You compute the midpoint of each edge again and again in the different paths of your recursion. As long as you do not displace them randomly, you get the same midpoint for every path, so there's no problem.
But of course, if you modify the midpoints randomly, you'll end up with two different midpoints in two different paths of the recursion.
You could modify your algorithm so that you not only pass the 3 corners of the triangle along, but also the modified midpoint of each edge. Or you keep the midpoints in a separate list or map, compute each one only once, and look it up otherwise, as in the sketch below.
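A minimal sketch of the map idea, assuming the Point class implements equals() and hashCode() on its coordinates (the original Point class may not; the midpoint helper is hypothetical). The key is the unordered pair of endpoints, so both triangles sharing an edge look up the same displaced midpoint:
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

Map<Set<Point>, Point> midpointCache = new HashMap<>();

Point midpoint(Point a, Point b, int dev) {
    // unordered pair: midpoint(a, b) and midpoint(b, a) share one key
    Set<Point> key = new HashSet<>(Arrays.asList(a, b));
    Point mid = midpointCache.get(key);
    if (mid == null) {
        double x = (a.getX() + b.getX()) / 2;
        double y = (a.getY() + b.getY()) / 2 + RandomUtilities.randFunc(dev);
        mid = new Point(x, y);
        midpointCache.put(key, mid);
    }
    return mid;
}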
I have two GPS locations. For each I am creating a bounding box in a different range.
Each bounding box has min/max latitude and min/max longitude.
I need to implement a method that detects whether those two boxes overlap (I don't care about the size of the overlap, only true/false). This method will also run inside a long loop, so I am looking for the most efficient way to do it.
Note: by overlap I mean "there is at least one point on the map that is contained in both bounding boxes".
Any ideas?
I'm facing the same issue and the previous solution is not sufficient.
This image shows the cases that are and are not covered:
I've found a web page that gives a correct method for this problem: https://rbrundritt.wordpress.com/2009/10/03/determining-if-two-bounding-boxes-overlap/
Here is the implementation of this solution:
function DoBoundingBoxesIntersect(bb1, bb2) {
    //First bounding box, top left corner, bottom right corner
    var ATLx = bb1.TopLeftLatLong.Longitude;
    var ATLy = bb1.TopLeftLatLong.Latitude;
    var ABRx = bb1.BottomRightLatLong.Longitude;
    var ABRy = bb1.BottomRightLatLong.Latitude;

    //Second bounding box, top left corner, bottom right corner
    var BTLx = bb2.TopLeftLatLong.Longitude;
    var BTLy = bb2.TopLeftLatLong.Latitude;
    var BBRx = bb2.BottomRightLatLong.Longitude;
    var BBRy = bb2.BottomRightLatLong.Latitude;

    var rabx = Math.abs(ATLx + ABRx - BTLx - BBRx);
    var raby = Math.abs(ATLy + ABRy - BTLy - BBRy);

    //rAx + rBx
    var raxPrbx = ABRx - ATLx + BBRx - BTLx;
    //rAy + rBy
    var rayPrby = ATLy - ABRy + BTLy - BBRy;

    if (rabx <= raxPrbx && raby <= rayPrby) {
        return true;
    }
    return false;
}
We can adapt the solution like this:
Step 1: check whether the 2 bounding boxes overlap in longitude: the left longitude of bounding box 1 is between longMin and longMax of bounding box 2, OR the right longitude of bounding box 1 is between longMin and longMax of bounding box 2.
Step 2: check whether the 2 bounding boxes overlap in latitude: the top latitude of bounding box 1 is between latMin and latMax of bounding box 2, OR the bottom latitude of bounding box 1 is between latMin and latMax of bounding box 2.
If both step 1 and step 2 hold, then the 2 bounding boxes overlap.
You can see the corresponding sketch here :
It's enough to check whether one of the corners of one rectangle is inside the other rectangle. This is true if both of these hold:
rect1.minX or rect1.maxX is between rect2.minX and rect2.maxX
and
rect1.minY or rect1.maxY is between rect2.minY and rect2.maxY
This check should take no time at all to do, so efficiency isn't a problem. Also, the order of the arguments is irrelevant.
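For comparison, here is a minimal Java sketch of the standard interval-overlap test (the BoundingBox type and its field names are hypothetical, and wrap-around at the ±180° antimeridian is not handled). Two boxes overlap exactly when their projections overlap on both axes, which also covers the containment cases a pure corner check misses:
// Standard axis-interval test: overlap on both axes means overlap overall.
// BoundingBox and its fields are hypothetical; antimeridian wrap is ignored.
boolean boundingBoxesOverlap(BoundingBox a, BoundingBox b) {
    return a.minLon <= b.maxLon && b.minLon <= a.maxLon
        && a.minLat <= b.maxLat && b.minLat <= a.maxLat;
}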