Libgdx calculate PerspectiveCamera view in 2d - java

I need to calculate the camera's view bounds in 2D: x, y, width, and height.
In this screenshot the grid is 1 unit per cell. I need to calculate the bounding box of the 3D view; for the example image above, the results should be:
float x = -3;
float y = 0;
float width = 14;
float height = 6;

Assuming your camera is always looking down at your world so the horizon is not visible, and always looking parallel to the Y axis of the map so the horizontal lines are never crooked, I think this is a matter of calculating the width of the farthest-away line that is within view, because that is where perspective shows you the most tiles horizontally. This gives you a rectangular area of the tiled map that fully covers the frustum, although you'll be drawing some extra tiles at the near corners.
private final Ray tmpRay = new Ray();
private final Vector3 tmpVec = new Vector3();
private final Rectangle visibleTilesRegion = new Rectangle();

private void updateVisibleTilesRegion () {
    // Define a ray that is a projection of the direction the camera is looking onto the
    // tile plane (assuming it is a Z=0 plane).
    tmpRay.origin.set(camera.position.x, camera.position.y, 0f);
    tmpRay.direction.set(0f, 1f, 0f);

    // Find top and bottom. In libGDX's Frustum the planes array is ordered
    // near, far, left, right, top, bottom, so planes[4] is the top plane
    // and planes[5] is the bottom plane.
    Intersector.intersectRayPlane(tmpRay, camera.frustum.planes[4], tmpVec);
    float yTop = tmpVec.y;
    Intersector.intersectRayPlane(tmpRay, camera.frustum.planes[5], tmpVec);
    float yBottom = tmpVec.y;

    // Find left and right at the top of the screen by intersecting that line
    // with the left (planes[2]) and right (planes[3]) planes.
    tmpRay.origin.set(camera.position.x, yTop, 0f);
    tmpRay.direction.set(-1f, 0f, 0f);
    Intersector.intersectRayPlane(tmpRay, camera.frustum.planes[2], tmpVec);
    float xLeft = tmpVec.x;
    tmpRay.direction.set(1f, 0f, 0f);
    Intersector.intersectRayPlane(tmpRay, camera.frustum.planes[3], tmpVec);
    float xRight = tmpVec.x;

    visibleTilesRegion.set(xLeft, yBottom, xRight - xLeft, yTop - yBottom);
}
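A minimal usage sketch, assuming the camera is moved elsewhere and that tile indices line up with world units (drawTile is a hypothetical placeholder for your own tile rendering):

// Per frame: update the camera, recompute the visible region,
// then draw only the tiles that fall inside it.
camera.update();
updateVisibleTilesRegion();

int startX = (int) Math.floor(visibleTilesRegion.x);
int startY = (int) Math.floor(visibleTilesRegion.y);
int endX   = (int) Math.ceil(visibleTilesRegion.x + visibleTilesRegion.width);
int endY   = (int) Math.ceil(visibleTilesRegion.y + visibleTilesRegion.height);

for (int ty = startY; ty <= endY; ty++) {
    for (int tx = startX; tx <= endX; tx++) {
        drawTile(tx, ty); // placeholder for your own tile drawing
    }
}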

Related

Java OpenGL - Mouse position from window to world space

I'm trying to transform the window mouse coordinates (0/0 is the upper left corner) into world space coordinates. I tried to solve it following this description. Here is my code:
public void showMousePosition(float mx, float my){
    Matrix4f projectionMatrix = camera.getProjectionMatrix();
    Matrix4f viewMatrix = camera.getViewMatrix();
    Matrix4f projMulView = projectionMatrix.mul(viewMatrix);
    projMulView.invert();

    float px = ((2*mx)/650)-1;
    float py = ((2*my)/650)-1;

    Vector4f vec4 = new Vector4f(px, py*(-1), 0.0f, 1.0f);
    vec4.mul(projMulView);
    vec4.w = 1.0f / vec4.w;
    vec4.x *= vec4.w;
    vec4.y *= vec4.w;
    vec4.z *= vec4.w;

    System.out.println(vec4.x + ", " + vec4.y);
}
But that's not 100% correct. I have an object at 0/-11 in world space, and when I move my mouse to that point my function says 0/9.8. And when I go to the left side of my window the x value is 5.6, but it should be something like 28.
Does anyone know what is wrong with my code?
First of all, your code hard-codes the window size to width=650, height=650.
Then you are reading the position at z=0. But this z is in screen space, and therefore it changes as you move and orient the camera. Normally you get this value from the depth buffer, using glReadPixels, and you should do that in this case.
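For illustration, a minimal sketch of that depth-buffer read, assuming LWJGL 2 (GL11 and BufferUtils); the window height parameter is my assumption, since the question hard-codes 650:

// Sketch: read the depth value under the mouse and convert it to NDC z.
// OpenGL's window origin is bottom-left, so flip the y coordinate first.
private float depthUnderMouse(int mouseX, int mouseY, int windowHeight) {
    java.nio.FloatBuffer depth = org.lwjgl.BufferUtils.createFloatBuffer(1);
    org.lwjgl.opengl.GL11.glReadPixels(mouseX, windowHeight - mouseY, 1, 1,
            org.lwjgl.opengl.GL11.GL_DEPTH_COMPONENT,
            org.lwjgl.opengl.GL11.GL_FLOAT, depth);
    float winZ = depth.get(0); // in [0, 1]
    return winZ * 2.0f - 1.0f; // map to NDC [-1, 1], then unproject as above
}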
However, there is another way to do this. In the code I will share, I am looking for the intersection between a ray (generated from the mouse position) and the plane through (0,0,0) with normal (0,1,0). I hope this helps.
/* Given the inverse PV (projection*view) matrix, the position of the mouse on screen
   and the size of the screen, transforms the screen coordinates to world coordinates. */
glm::vec3 Picking::OnWorld(glm::mat4 const& m_inv, glm::vec2 const& spos, size_t width, size_t height) {
    float x = spos.x;
    float y = spos.y;
    y = height - y;

    // inputOrigin: start of the ray for the intersection with the plane.
    // Transforms the screen position to the unit-cube range.
    glm::vec4 inputO = glm::vec4(x / width * 2.0f - 1.0f, y / height * 2.0f - 1.0f, -1.0f, 1.0f);
    glm::vec4 resO = m_inv * inputO; // transform to world space
    if (resO.w == 0.0f)
        return glm::vec3(-1); // invalid value: normally means the m_inv matrix was incorrect
    resO /= resO.w; // homogeneous division

    glm::vec4 inputE = inputO; // inputEnd: the end of the ray
    inputE.z = 1.0;
    // End of ray to world space.
    glm::vec4 resE = m_inv * inputE;
    // Check that the coordinates are correct.
    if (resE.w == 0.0f)
        return glm::vec3(-1); // invalid value: normally means the m_inv matrix was incorrect
    resE /= resE.w;

    // Ray for the intersection: the vector between z=-1 and z=1.
    glm::vec3 ray = glm::vec3(resE - resO);
    glm::vec3 normalRay = glm::normalize(ray);
    glm::vec3 normalPlane = glm::vec3(0, 1, 0); // plane through the origin with normal (0,1,0)

    float denominator = glm::dot(normalRay, normalPlane);
    if (denominator == 0)
        return glm::vec3(-1); // invalid value: the ray is parallel to the plane
    float numerator = glm::dot(glm::vec3(resO), normalPlane);

    // Intersection between ray and plane.
    glm::vec3 result = glm::vec3(resO) - normalRay * (numerator / denominator);
    return result;
}
The math for the intersection can be read from this link:
https://www.cs.princeton.edu/courses/archive/fall00/cs426/lectures/raycast/sld017.htm
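Since the rest of this page is about libGDX, it's worth noting that libGDX wraps this whole procedure. A minimal sketch, assuming a Camera field and a ground plane at y=0:

import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.math.Intersector;
import com.badlogic.gdx.math.Plane;
import com.badlogic.gdx.math.Vector3;
import com.badlogic.gdx.math.collision.Ray;

// camera.getPickRay unprojects the mouse position into a world-space ray;
// Intersector.intersectRayPlane performs the ray/plane test.
Ray pickRay = camera.getPickRay(Gdx.input.getX(), Gdx.input.getY());
Plane groundPlane = new Plane(new Vector3(0f, 1f, 0f), 0f);
Vector3 hit = new Vector3();
if (Intersector.intersectRayPlane(pickRay, groundPlane, hit)) {
    System.out.println(hit); // world-space point under the mouse
}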

simple Circle on circle Collision libgdx

I made two circles, one of radius 8 (image 16x16)
and one of radius 20 (image 40x40).
I am calling the circle overlap method and the collision is just off. It is colliding with a circle that sits around the 0,0 point of wherever my image of the ball is: the bullet can go inside the ball on the bottom and right sides.
public class MyGame extends ApplicationAdapter {
    SpriteBatch batch;
    Texture ballImage, bulletImage;
    OrthographicCamera cam;
    Circle ball;
    Array<Circle> bullets;
    long lastShot;

    @Override
    public void create () {
        System.out.println("game created");
        ballImage = new Texture(Gdx.files.internal("ball.png"));
        bulletImage = new Texture(Gdx.files.internal("bullet.png"));
        cam = new OrthographicCamera();
        cam.setToOrtho(true, 320, 480); //true starts top right false starts top left
        batch = new SpriteBatch();
        ball = new Circle();
        ball.radius = 20;
        ball.x = 320/2 - ball.radius; // half screen size - half image
        ball.y = 480/2 - ball.radius;
        bullets = new Array<Circle>();
        spawnBullet();
        /*
        batch.draw(bulletImage, bullet.x, bullet.y);
        bullet.x++;
        bullet.y++; */
    }

    public void spawnBullet() {
        Circle bullet = new Circle();
        bullet.radius = 8;
        bullet.x = 0;
        bullet.y = 0;
        bullets.add(bullet);
        lastShot = TimeUtils.nanoTime();
    }

    @Override
    public void render () {
        Gdx.gl.glClearColor(0, 0, 0, 1);
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
        cam.update();
        batch.setProjectionMatrix(cam.combined);
        batch.begin();
        batch.draw(ballImage, ball.x, ball.y);
        for (Circle bullet : bullets) {
            batch.draw(bulletImage, bullet.x, bullet.y);
        }
        batch.end();
        if (Gdx.input.isTouched()) {
            Vector3 pos = new Vector3();
            pos.set(Gdx.input.getX(), Gdx.input.getY(), 0);
            cam.unproject(pos);
            ball.y = pos.y - ball.radius;
            ball.x = pos.x - ball.radius;
        }
        //if(TimeUtils.nanoTime()-lastShot > 1000000000) one second
        //spawnBullet();
        Iterator<Circle> i = bullets.iterator();
        while (i.hasNext()) {
            Circle bullet = i.next();
            bullet.x++;
            bullet.y++;
            if (bullet.overlaps(ball)) {
                System.out.println("overlap");
                i.remove();
            }
        }
    }
}
If your bullet and the ball are two circles, like you said, you don't need an overlap method.
It is simple: two circles collide if their distance is smaller than the sum of their radii.
Calculating the distance requires a square root, which is a relatively expensive operation, so it is better to compare the squared distance against the squared sum of the radii:
float xD = ball.x - bullet.x; // delta x
float yD = ball.y - bullet.y; // delta y
float sqDist = xD * xD + yD * yD; // square distance
boolean collision = sqDist <= (ball.radius+bullet.radius) * (ball.radius+bullet.radius);
That's it.
Also, in your cam.setToOrtho you wrote a comment:
//true starts top right false starts top left
That's wrong; it is top left or bottom left. By default it is bottom left, because that is how a coordinate system normally works. The top-left variant exists because the monitor addresses its pixels starting from the top left (pixel 1).
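In other words, a small illustration of the two conventions (libGDX's signature is setToOrtho(boolean yDown, float viewportWidth, float viewportHeight)):

cam.setToOrtho(false, 320, 480); // y-up: origin at the bottom left (the usual convention)
cam.setToOrtho(true, 320, 480);  // y-down: origin at the top left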
EDIT: this should be the problem: the coordinates you give the batch.draw method are the lower-left corner of the texture by default; if you are using the "y = Down" system it should be the top-left corner (you have to try, I am not sure).
The circle's position, by contrast, is its center.
To solve the problem you need to adjust the position like this (for the "y = Up" system):
batch.draw(bulletImage, bullet.x - bullet.radius, bullet.y - bullet.radius);
It is possible that the same formula also works for the "y = Down" system, but I am not sure.
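Applied to the question's render() this would mean offsetting both textures, so that each image's center coincides with its circle's center (a sketch under that assumption, using the fields from the question's code):

// Draw each texture shifted by its circle's radius so centers line up.
batch.draw(ballImage, ball.x - ball.radius, ball.y - ball.radius);
for (Circle bullet : bullets) {
    batch.draw(bulletImage, bullet.x - bullet.radius, bullet.y - bullet.radius);
}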

My custom scaling method not working

I've got the following code to display an image/texture in OpenGL. The method is supposed to display the image in its correct aspect ratio and zoom in/out.
The image does not seem to maintain its aspect ratio on the horizontal axis. Why?
(NB: The OpenGL viewing width is from -1 to 0 and height from 1 to -1).
private void renderImage(Rectangle dst, float magnification) {
    float width, height;
    float horizontalOffset, verticalOffset;

    // Default: fill screen horizontally
    width = 1f;
    height = dst.getHeight()/(float) dst.getWidth();

    // Magnification
    width *= magnification;
    height *= magnification;

    // Offsets
    horizontalOffset = width/2f;
    verticalOffset = height/2f;

    // Do the actual OpenGL rendering
    glBegin(GL_QUADS);
    // Right top
    glTexCoord2f(0.0f, 0.0f);
    glVertex2f(-0.5f + horizontalOffset, verticalOffset);
    // Right bottom
    glTexCoord2f(0.0f, 1.0f);
    glVertex2f(-0.5f + horizontalOffset, -verticalOffset);
    // Left bottom
    glTexCoord2f(1.0f, 1.0f);
    glVertex2f(-0.5f - horizontalOffset, -verticalOffset);
    // Left top
    glTexCoord2f(1.0f, 0.0f);
    glVertex2f(-0.5f - horizontalOffset, verticalOffset);
    glEnd();
}
I don't have any experience with OpenGL, but from looking at your code it seems there is something fishy about your default fill.
// Default: Fill screen horizontally
width = 1f;
height = dst.getHeight()/(float) dst.getWidth();
This sets your "width" variable to a constant value of 1, whilst the "height" variable depends on the height and width of the rectangle you are passing in; both are then used to calculate the offsets:
// Offsets
horizontalOffset = width/2f;
verticalOffset = height/2f;
In my experience this can cause the problems you have described. First, try stepping through this function in the debugger to inspect the values that the width and height variables hold, or try changing
// Default: Fill screen horizontally
width = 1f;
height = dst.getHeight()/(float) dst.getWidth();
to
// Default: Fill screen horizontally
width = 1f;
height = 1f;
and rerun to see whether it has an effect on your output.
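For what it's worth, a common cause of this symptom is that normalized device coordinates are stretched to fit the window, so a quad that is square in NDC only appears square on screen if the window itself is. A hedged sketch of how that could be folded into the default fill (windowWidth and windowHeight are assumed names, not from the original code):

// Compensate for the window's aspect ratio: NDC is stretched to the window,
// so scale the NDC height by (windowWidth / windowHeight) to preserve the
// image's on-screen proportions.
float imageAspect = dst.getHeight() / (float) dst.getWidth();
float windowAspect = windowWidth / (float) windowHeight;
width = 1f;
height = imageAspect * windowAspect;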

Drawing rectangles at an angle

What is a method in Java that draws a rectangle given the following:
The coordinates of the center of the square
The angle of the rectangle from vertical, in degrees
To draw a rectangle in the way you suggest you need to use the class AffineTransform. The class can be used to transform a shape in all manner of ways. To perform a rotation use:
int x = 200;
int y = 100;
int width = 50;
int height = 30;
double theta = Math.toRadians(45);

// Create the rect centred on the origin, the point we want to rotate it about.
Rectangle2D rect = new Rectangle2D.Double(-width/2., -height/2., width, height);

// AffineTransform applies the most recently concatenated operation to points
// first, so translate then rotate: the rect is rotated about its own centre
// and then moved to (x, y).
AffineTransform transform = new AffineTransform();
transform.translate(x, y);
transform.rotate(theta);

Shape rotatedRect = transform.createTransformedShape(rect);
Graphics2D graphics = ...; // get it from whatever you're drawing to
graphics.draw(rotatedRect);
For the first point, you can figure out the coordinates of the center of the square by using the distance formula, (int) Math.sqrt((x1 - x2)*(x1 - x2) + (y1 - y2)*(y1 - y2)), then dividing by 2; you can do the same for the width and height. I don't know enough about Java drawing to give you better answers based on what was in your question, but I hope that helps.
For the second, you would just need to create a polygon, right?

OpenGL Mouse Input

I have this mouse function in my OpenGL program:
public void mouseInput(){
    int mouseX = Mouse.getX();
    int mouseY = 600 - Mouse.getY();
    int mouseDX = 0, mouseDY = 0;
    int lastX = 0, lastY = 0;
    mouseDX = mouseX - lastX;
    mouseDY = mouseY - lastY;
    lastX = mouseX;
    lastY = mouseY;
    xrot += (float) mouseDX;
    yrot += (float) mouseDY;
}
I rotate the "camera" using this code:
glRotatef(xrot, 1.0f, 0.0f, 0.0f);
glRotatef(yrot, 0.f, 1.0f, 0.0f);
And I call the mouseInput() function in the !Display.isCloseRequested() loop. Currently this causes my game to freak out: the camera rotates all over the place even without me touching the mouse, and the cubes I have rendered also move around the screen randomly. I am using LWJGL, so I can't use any GLUT functions like glutPassiveMotionFunc(). Can anyone offer help? In summary, my camera is very jerky and rotates in random patterns very fast.
If the camera is rotating even when you are not touching the mouse, you are probably applying the rotation over and over again. You could reset the camera's view matrix first (glLoadIdentity() in OpenGL 2 fixed functionality) every frame, and then apply the rotation. That way you always rotate from a fixed reference point each frame, instead of from the result of the previous frame's rotation.
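A minimal sketch of that suggestion, assuming LWJGL's static GL11 imports and the xrot/yrot fields from the question:

// Each frame: start from the identity matrix, then apply the current
// absolute rotation once, so earlier frames' rotations don't accumulate.
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glRotatef(xrot, 1.0f, 0.0f, 0.0f);
glRotatef(yrot, 0.0f, 1.0f, 0.0f);
// ... draw the scene here ...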
