Why doesn't gluUnProject() seem to behave? - java

I am trying to find the coordinates of my mouse on a flat 3D surface. After some googling, I found that gluUnProject() is the way to do this, so I implemented it. Here is my code (with the uninteresting parts stripped out):
public class Input {
    private FloatBuffer modelView = BufferUtils.createFloatBuffer(16);
    private FloatBuffer projection = BufferUtils.createFloatBuffer(16);
    private IntBuffer viewport = BufferUtils.createIntBuffer(16);
    private FloatBuffer location = BufferUtils.createFloatBuffer(3);
    private FloatBuffer winZ = BufferUtils.createFloatBuffer(1);

    public float[] getMapCoords(int x, int y)
    {
        modelView.clear().rewind();
        projection.clear().rewind();
        viewport.clear().rewind();
        location.clear().rewind();
        winZ.clear().rewind();

        // Grab the current matrices and viewport
        glGetFloat(GL_MODELVIEW_MATRIX, modelView);
        glGetFloat(GL_PROJECTION_MATRIX, projection);
        glGetInteger(GL_VIEWPORT, viewport);

        // Flip Y: mouse coordinates are top-left based, OpenGL's are bottom-left
        float winX = (float) x;
        float winY = (float) viewport.get(3) - (float) y;

        // Read the depth under the cursor, then unproject
        glReadPixels(x, (int) winY, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, winZ);
        gluUnProject(winX, winY, winZ.get(0), modelView, projection, viewport, location);

        return new float[] { location.get(0), location.get(1), location.get(2) };
    }
}
When I call this function, passing in the X and Y coordinates of the mouse, I get numbers that increase by about 100 for every pixel I move the mouse (they should change by around 1). After printing various variables, I found that the winZ buffer contains the value 1.0. My gluPerspective is set up with its near clipping plane at 0.1 and its far plane at 10000, which would explain why the number increases that rapidly: a depth value of 1.0 means the far plane. Yet I don't know how to make OpenGL use my flat plane instead when finding this distance.
So now I am wondering: if this is the correct/best/easiest method for finding the mouse coordinates on a surface in the 3D world, what could I be doing wrong? If it is not, what is a better way of doing it?

Yes, this is the correct method to use. You are probably just calling it at a point where the matrices do not contain "good" values (this might be caused by glPushMatrix()/glPopMatrix() calls elsewhere in your code).
To confirm you are getting good coordinates, draw a single GL_POINT at the coordinates you get, right after you get them (disable the depth test and increase the point size to, say, 10 pixels). If the point moves under your mouse, the calculated coordinates are correct but the matrices are not.
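A minimal sketch of that debug draw, assuming the Input class from the question (mouseX and mouseY stand for whatever window coordinates you already pass to getMapCoords()):
float[] coords = input.getMapCoords(mouseX, mouseY);

glDisable(GL_DEPTH_TEST); // so the scene cannot hide the point
glPointSize(10f);
glBegin(GL_POINTS);
glVertex3f(coords[0], coords[1], coords[2]);
glEnd();
glEnable(GL_DEPTH_TEST);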

Related

What could be causing meshes to generate rotated? LWJGL

I am loading chunk values from a height map to create infinite terrain, but I'm having issues with the rendering of each mesh chunk. When I load a single 500x500-unit mesh, it is smooth. When I load a 5x5 grid of 100x100 meshes, it is a jumbled-up mess.
I made both types of meshes save an image of their height values, and both produce a smooth height map with gradual value changes.
...But when I render them, this is what I see:
(screenshots: the single 500x500 mesh, and the 5x5 grid of 100x100 meshes)
As you can see, the 5x5 grid isn't aligned at all. The first three chunks (0:0, 0:1, and 1:0) seem to look correct, but the others are all different. All rotations in their transformation matrices are vec3(0,0,0), yet they generate like this. Here is the code used to render them:
public void render() {
    shader.start();
    shader.loadViewMatrix(Maths.createViewMatrix(engine.getPlayer()));
    shader.loadLight(engine.getManagers().getLightManager().getLights().get(0));
    for (Chunk chunk : engine.getManagers().getTerrainManager().getLoadedChunks().values()) {
        RawModel model = chunk.getModel();
        bindChunk(chunk);
        loadShaderUniforms(chunk);
        GL11.glDrawElements(GL11.GL_TRIANGLES, model.getVertexCount(), GL11.GL_UNSIGNED_INT, 0);
        unbindChunk();
    }
    shader.stop();
}

private void bindChunk(Chunk chunk) {
    RawModel rawModel = chunk.getModel();
    GL30.glBindVertexArray(rawModel.getVaoID());
    GL20.glEnableVertexAttribArray(0); // position
    GL20.glEnableVertexAttribArray(1); // textureCoordinates
    GL20.glEnableVertexAttribArray(2); // normal
    ModelTexture texture = chunk.getTexture();
    GL13.glActiveTexture(GL13.GL_TEXTURE0);
    GL11.glBindTexture(GL11.GL_TEXTURE_2D, texture.getTextureID());
}

private void unbindChunk() {
    GL20.glDisableVertexAttribArray(0);
    GL20.glDisableVertexAttribArray(1);
    GL20.glDisableVertexAttribArray(2);
}

private void loadShaderUniforms(Chunk chunk) {
    Matrix4f transformation = Maths.createTransformationMatrix(chunk.getLocation().getPosition(),
            chunk.getLocation().getRotation(), 1, true);
    System.out.println(chunk.getLocation().getRotation());
    shader.loadTransformationMatrix(transformation);
    shader.setShowHeightMap(chunk.shouldShowHeightMap());
}
I'm not familiar with the details of how LWJGL uses VAOs, so it might be a simple issue I'm forgetting. Any help would be appreciated, as I've spent several days just narrowing this down to a rendering problem rather than a problem with the Perlin noise/mesh generation.
I discovered the issue! There was a flaw in how I created the mesh which caused the triangles to render flipped over. I discovered this by turning off face culling, observing the mesh from both sides, and moving the three vertex coordinates around. Fixing the vertex winding (e.g. swapping it from clockwise to counter-clockwise and then reversing the X and Z coordinates) fixed it.
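For illustration, a minimal sketch of the kind of grid-mesh index generation involved (buildIndices(), width, and height are hypothetical names, not the poster's actual code); reversing the order of a triangle's three indices flips its winding, and with it which side is the front face:
int[] buildIndices(int width, int height) {
    int[] indices = new int[(width - 1) * (height - 1) * 6];
    int i = 0;
    for (int z = 0; z < height - 1; z++) {
        for (int x = 0; x < width - 1; x++) {
            int topLeft     = z * width + x;
            int topRight    = topLeft + 1;
            int bottomLeft  = (z + 1) * width + x;
            int bottomRight = bottomLeft + 1;
            // One consistent winding per triangle; listing the indices in the
            // opposite order would flip which side culling treats as the front.
            indices[i++] = topLeft;
            indices[i++] = bottomLeft;
            indices[i++] = topRight;
            indices[i++] = topRight;
            indices[i++] = bottomLeft;
            indices[i++] = bottomRight;
        }
    }
    return indices;
}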

Local rotation in a specific implementation (OpenGL and LWJGL)

I am also fiddling with the global/local rotation problem and I cannot put my finger on it. I used an LWJGL book for the implementation of my game, using OpenGL and LWJGL, and I am using the JOML library for vectors and matrices.
The modelview matrix construction is below. In the book this was originally without local rotations; I added those myself. The idea is that each object has a global and a local rotation. The two rotations are calculated individually and then multiplied onto the left/right side of the modelview matrix.
public Matrix4f getModelViewMatrix(Object obj, Matrix4f viewMatrix) {
    Vector3f rotation = obj.getRot();
    Vector3f localRot = obj.getLocalRot();
    Matrix4f localRotMat = new Matrix4f().identity();
    Matrix4f worldRotMat = new Matrix4f().identity();
    localRotMat.rotateLocalX((float) Math.toRadians(localRot.x))
               .rotateLocalY((float) Math.toRadians(localRot.y))
               .rotateLocalZ((float) Math.toRadians(localRot.z));
    worldRotMat.rotateX((float) Math.toRadians(-rotation.x))
               .rotateY((float) Math.toRadians(-rotation.y))
               .rotateZ((float) Math.toRadians(-rotation.z));
    modelViewMatrix.identity().translate(obj.getPos());
    modelViewMatrix.mulLocal(localRotMat);
    modelViewMatrix.mul(worldRotMat);
    modelViewMatrix.scale(obj.getScale());
    Matrix4f viewCurr = new Matrix4f(viewMatrix);
    return viewCurr.mul(modelViewMatrix);
}
This still results in local rotations around the 'wrong' axes. I've seen implementations using quaternions and have read about gimbal lock and the like, but the answers are either very specific or too general for me. Furthermore, it would be great if I didn't need a quaternion implementation, as that would possibly mean refactoring a lot of code.
Relevant code for the object class:
// Object class
private final Vector3f rot;
private final Vector3f localRot;

public Object() {
    pos = new Vector3f(0, 0, 0);
    scale = 1;
    rot = new Vector3f(0, 0, 0);
    localRot = new Vector3f(0, 0, 0);
}
// getters and setters for the above
Can somebody explain what is wrong with the calculation of the rotations for the modelview matrix?
EDIT:
I can rewrite the code as below, which is a bit more in line with the hints from @GeestWagen. However, the 'local rotation' of my object is still displayed as global, so it indeed seems like the same rotation is applied twice. Now I am stuck, though, because I can't find more documentation on these functions (rotateLocal*/rotate*; see the note below the snippet).
modelViewMatrix.identity().translate(obj.getPos())
        .rotateLocalX((float) Math.toRadians(-localRot.x))
        .rotateLocalY((float) Math.toRadians(-localRot.y))
        .rotateLocalZ((float) Math.toRadians(-localRot.z))
        .rotateX((float) Math.toRadians(-rotation.x))
        .rotateY((float) Math.toRadians(-rotation.y))
        .rotateZ((float) Math.toRadians(-rotation.z))
        .scale(obj.getScale());
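(For reference: the key difference between these two families of JOML calls is the side of the multiplication. A minimal sketch of the semantics, with a as a placeholder angle:)
Matrix4f m = new Matrix4f().translate(0, 0, -5);

// rotate*() post-multiplies: m = m * Rx, so the rotation is applied
// in the space the matrix maps from (the object's own local space).
m.rotateX(a);

// rotateLocal*() pre-multiplies: m = Rx * m, so the rotation is applied
// in the space the matrix maps to (here: after the translate).
m.rotateLocalX(a);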
Okay, I finally fixed it. It resulted in me doing a bunch more research. What I came up with was the following:
Vector3f rotation = obj.getRot();
Vector3f localRot = obj.getLocalRot();
Quaternionf rotationQ = new Quaternionf()
        .rotateAxis((float) Math.toRadians(-localRot.z), new Vector3f(0f, 0f, 1f))
        .rotateAxis((float) Math.toRadians(-localRot.y), new Vector3f(0f, 1f, 0f))
        .rotateAxis((float) Math.toRadians(-localRot.x), new Vector3f(1f, 0f, 0f))
        .premul(new Quaternionf()
                .rotateX((float) Math.toRadians(-rotation.x))
                .rotateY((float) Math.toRadians(-rotation.y))
                .rotateZ((float) Math.toRadians(-rotation.z)));
modelViewMatrix.identity()
        .translate(obj.getPos())
        .rotate(rotationQ)
        .scale(obj.getScale());
This is inspired by, among others, this and this. What confused me a lot was the lack of hits on doing local and global rotations combined; most of what I was able to find covered only one or the other. The code creates a quaternion from the local rotation of the object, then pre-multiplies that by a quaternion built from the global rotation. The resulting quaternion is then used for the modelView matrix.
Thus, for combining local and global rotations, a quaternion seems to be necessary. I thought quaternions were only there to keep the axes from changing within a single local or global rotation, but they should also be used when combining both.
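As a quick sanity check of the pre-multiplication order, one can push a test vector through the combined quaternion; a minimal, self-contained sketch (the angles are arbitrary):
import org.joml.Quaternionf;
import org.joml.Vector3f;

public class RotationOrderCheck {
    public static void main(String[] args) {
        // Local spin: 90 degrees about the object's own Y axis
        Quaternionf local = new Quaternionf().rotateY((float) Math.toRadians(90));
        // Global orientation: 90 degrees about the world Z axis
        Quaternionf global = new Quaternionf().rotateZ((float) Math.toRadians(90));

        // premul() applies the local rotation first, then the global one: q = global * local
        Quaternionf combined = new Quaternionf(local).premul(global);

        Vector3f v = new Vector3f(1, 0, 0);
        System.out.println(combined.transform(v));
    }
}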

Move rectangle by touching the screen

I want to move blocks that have different x-positions without changing their shape, by adjusting the x-position.
I have tried the following code, but the blocks seem to jump between two positions far too fast (the correct position and another one I can't locate).
downBlocks = new ArrayList<Rectangle>();

for (DownBlocks downBlocks : getBlocks()) {
    if (Gdx.input.isTouched()) {
        Vector3 touchPos = new Vector3();
        touchPos.set(Gdx.input.getX(), Gdx.input.getY(), 0);
        camera.unproject(touchPos);
        downBlocks.x = (int) touchPos.x - downBlocks.x;
    }
}
To do a drag, you need to remember the point where the finger last touched the screen so you can compute a finger delta. As a side note, avoid putting code inside your loop iteration when it only needs to run once: it's wasteful to unproject the screen's touch point over and over, once for every one of your DownBlocks.
static final Vector3 VEC = new Vector3(); // reusable static member to avoid GC churn
private float lastX; // member variable for tracking finger movement

// In your game logic:
if (Gdx.input.isTouching()) {
    VEC.set(Gdx.input.getX(), Gdx.input.getY(), 0);
    camera.unproject(VEC);
}
if (Gdx.input.justTouched()) {
    lastX = VEC.x; // starting point of drag
} else if (Gdx.input.isTouching()) { // dragging
    float deltaX = VEC.x - lastX; // how much the finger has moved this frame
    lastX = VEC.x; // for next frame
    // Since you're working with integer units, round the delta
    int blockDelta = Math.round(deltaX);
    for (DownBlocks downBlock : getBlocks()) {
        downBlock.x += blockDelta;
    }
}
I don't recommend using integer units for your coordinates, though. If you are doing pixel art, I recommend storing coordinates as floats and rounding them off only when drawing; that will reduce jerky-looking movement. If you are not doing pixel art, I would use float coordinates all the way through. Here's a good article to help understand units.
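For example, a minimal sketch assuming a libGDX SpriteBatch (batch, texture, and block are placeholder names):
// Store and update positions as floats...
block.x += deltaX;

// ...and round only at draw time (for the pixel-art case)
batch.draw(texture, Math.round(block.x), Math.round(block.y));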

gluProject converting 3D coordinates to 2D does not convert the 2D Y coordinate correctly

After two hours of googling (here, here, here, here, and here, plus a ton of others I can't be bothered to find), I thought I had finally learnt the theory of turning 3D coordinates into 2D coordinates. But it isn't working. The idea is to translate the 3D coordinates of a ship into 2D coordinates on the screen, in order to render the username of the player controlling that ship.
However, the text is rendering in the wrong location:
The text is "Test || 2DXCoordinate || 2DZCoordinate".
Here is my getScreenCoords() method, which converts the 3D coordinates to 2D:
public static int[] getScreenCoords(double x, double y, double z) {
    FloatBuffer screenCoords = BufferUtils.createFloatBuffer(4);
    IntBuffer viewport = BufferUtils.createIntBuffer(16);
    FloatBuffer modelView = BufferUtils.createFloatBuffer(16);
    FloatBuffer projection = BufferUtils.createFloatBuffer(16);

    GL11.glGetFloat(GL11.GL_MODELVIEW_MATRIX, modelView);
    GL11.glGetFloat(GL11.GL_PROJECTION_MATRIX, projection);
    GL11.glGetInteger(GL11.GL_VIEWPORT, viewport);

    boolean result = GLU.gluProject((float) x, (float) y, (float) z, modelView, projection, viewport, screenCoords);
    if (result) {
        return new int[] { (int) screenCoords.get(0), (int) screenCoords.get(1) };
    }
    return null;
}
screenCoords.get(0) is returning a perfect X coordinate. However, screenCoords.get(1) is going higher or lower depending on how far away I am from the ship. After many hours of debugging, I have narrowed it down to this line being incorrect:
GLU.gluProject((float) x, (float) y, (float) z, modelView, projection, viewport, screenCoords);
However, I have no idea what is wrong. The X coordinate of the ship is fine... why not the Y?
According to BDL's answer, I am supplying the "wrong matrix" to gluProject(). But I don't see how that is possible, since I call the method right after I render my ship (which is obviously done with whatever matrix draws the ship).
I just can't fathom what is wrong.
Note: BDL's answer is perfectly adequate except that it does not explain why the Y coordinates are incorrect.
Note: This question used to be much longer and much more vague. I have posted my narrowed-down question above after hours of debugging.
You have to use the same projection matrix in gluProject that you use for rendering your ship. In your case the ship is rendered using a perspective projection, but when you call gluProject an orthographic projection is in place.
General theory about coordinate systems in OpenGL
In most cases the geometry of a model in your scene (e.g. the ship) is given in a model coordinate system; this is the space your vertex coordinates live in. When placing the model in the scene, we apply the model matrix to each vertex to get the coordinates the ship has in the scene; this coordinate system is called world space. When viewing the scene from a given viewpoint with a given viewing direction, another transformation is needed, one that moves the viewpoint to the origin (0,0,0) and aligns the view direction with the negative z-axis; this is the view coordinate system. The last step transforms view coordinates into NDC (normalized device coordinates), which is done via a projection matrix.
In total we get the transformation of a vertex to the screen as:
v_screen = Projection * View * Model * v_model
In ancient OpenGL (as you use it) View and Model are stored together in the ModelView matrix.
(I skipped some details here, such as the perspective divide, but this should be sufficient to understand the problem.)
Your problem
You already have the position (x,y,z) of your ship in world space, so the transformation with Model has already happened. What is left is
v_screen = Projection * View * v_worldspace
From this we see that, in our case, the ModelView matrix passed to gluProject has to be exactly the View matrix.
I can't tell you where to get the view matrix in your code, since I don't know that part of your code.
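A minimal sketch of one way to capture the View matrix with the legacy pipeline (camera.applyView(), shipX/shipY/shipZ, and the viewOnly buffer are placeholder names, not from the question's code):
// At the start of the frame only the camera transform is on the stack,
// so at this point ModelView == View.
GL11.glLoadIdentity();
camera.applyView(); // hypothetical: applies only the camera/view transform

viewOnly.clear();
GL11.glGetFloat(GL11.GL_MODELVIEW_MATRIX, viewOnly); // capture the View matrix

// ... render the ship with its own model transforms ...

// Project the ship's world-space position using the captured View matrix:
GLU.gluProject((float) shipX, (float) shipY, (float) shipZ,
        viewOnly, projection, viewport, screenCoords);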
I found an answer to my issue! I used
font.drawString(drawx - offset, drawy, (sh.username + " || " + drawx + " | " + drawy), Color.orange);
when it should have been
font.drawString(drawx - offset, Display.getHeight() - drawy, (sh.username + " || " + drawx + " | " + drawy), Color.orange);
gluProject returns window coordinates with the origin in the bottom-left corner, while the font is drawn with the origin in the top-left, so the Y coordinate has to be flipped.

Getting object coordinates from camera

I've implemented a camera in Java using a position vector and three direction vectors so I can use gluLookAt(); moving around in 'ghost mode' works well enough, but I want to add collision detection. I can't seem to figure out how to transform my position vector into the coordinates in which OpenGL draws my objects.
A rough sketch of my drawing loop is this:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glLoadIdentity();
camera.setView();
drawer.drawTheScene();
I'm at a loss as to how to proceed; looking at the ModelView matrix between calls and at my position vector, I haven't found any kind of correlation.
Finally figured it out by reviewing http://fly.cc.fer.hr/~unreal/theredbook/chapter03.html again. To get from eye space (the camera) to object space, you have to multiply the vector with the inverse of the ModelView matrix, or in code:
Vector4f vpos = new Vector4f(0, 0, 0, 1); // (0,0,0,1) because it's relative to the camera

// Fetch the current ModelView matrix
FloatBuffer mv = ByteBuffer.allocateDirect(64)
        .order(ByteOrder.nativeOrder())
        .asFloatBuffer();
GL11.glGetFloat(GL11.GL_MODELVIEW_MATRIX, mv);

// Invert it and transform the eye-space origin back into world space
Matrix4f m4 = new Matrix4f();
m4.load(mv);
m4.invert();
vpos = Matrix4f.transform(m4, vpos, vpos); // vpos now holds the camera position for collision tests
