Alright, so I got this code for gluLookAt:
lookAt = new Vector3f(-player.pos.x, -player.pos.y, -player.pos.z);
lookAt.x += (float)Math.cos(Math.toRadians(player.yaw)) * Math.cos(Math.toRadians(player.pitch));
lookAt.y += (float)Math.sin(Math.toRadians(player.pitch));
lookAt.z += (float)Math.sin(Math.toRadians(player.yaw)) * Math.cos(Math.toRadians(player.pitch));
GLU.gluLookAt(-player.pos.x, -player.pos.y, -player.pos.z,
              lookAt.x, lookAt.y, lookAt.z,
              0, 1, 0);
And when I try to draw a rotated cube it does not rotate properly.
GL11.glPushMatrix();
GL11.glLoadIdentity();
GL11.glTranslatef(-cube.pos.x, -cube.pos.y, -cube.pos.z);
GL11.glRotatef(cube.yaw, 0, 1, 0);
GL11.glTranslatef(cube.pos.x, cube.pos.y, cube.pos.z);
/*draw the cube normally*/
GL11.glPopMatrix();
So my question is: am I handling the alterations gluLookAt makes to the matrix properly, or am I doing something wrong? The result I am looking for is to move the cube back to 0,0,0, rotate it, and then put it back where it was.
The problem is here:
GL11.glTranslatef(-cube.pos.x, -cube.pos.y, -cube.pos.z);
GL11.glRotatef(cube.yaw, 0, 1, 0);
GL11.glTranslatef(cube.pos.x, cube.pos.y, cube.pos.z);
The cube is already 'at' 0,0,0, because the model-view matrix is the identity (you called glLoadIdentity()).
So you should do:
GL11.glRotatef(cube.yaw, 0, 1, 0);
GL11.glTranslatef(cube.pos.x, cube.pos.y, cube.pos.z);
which should have the desired effect. If not, try it with a fixed camera to see whether the code you added before gluLookAt() is putting the look-at target too far away from 0,0,0 (where your cube is).
Calling glLoadIdentity() before drawing your cube will wipe out whatever gluLookAt() has set up in your view matrix; I don't think it should be there.
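For reference, a minimal sketch of the draw block with both suggestions applied (the glLoadIdentity() call dropped and only the rotate/translate pair kept), assuming the rest of the setup from the question is unchanged:
GL11.glPushMatrix();                                    // save the view transform set up by gluLookAt
GL11.glRotatef(cube.yaw, 0, 1, 0);                      // rotate around the Y axis by cube.yaw degrees
GL11.glTranslatef(cube.pos.x, cube.pos.y, cube.pos.z);  // translate by the cube's position
/*draw the cube normally*/
GL11.glPopMatrix();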
I'm using the camera anchor in ARCore to create a static object in the scene.
float scaleFactor = 1.0f;
camera.getPose().toMatrix(cameraAnchorMatrix, 0);
// Update and draw the model and its shadow.
Matrix.rotateM(cameraAnchorMatrix, 0, 110, 0f, 1f, 0f);
virtualObject.updateModelMatrix(cameraAnchorMatrix, scaleFactor / 10);
virtualObject.draw(viewmtx, projmtx, colorCorrectionRgba);
However, rotating the object sometimes makes it invisible, and translating it doesn't seem to work either. I'm also more or less guessing the values for the rotation. Also, the object is viewed from the top; how can I make it look more natural? (It's an arrow that's supposed to show a direction.)
How can I move the object to the bottom left corner of the screen and rotate it from left to right?
This is how it looks at the moment. I want to move the arrow down and to the left, and also tilt it forward. Then it should be able to rotate left and right. Thank you for your help.
Solved it with the following code:
camera.getPose().compose(Pose.makeTranslation(0.37f, -0.17f, -1f)).extractTranslation().toMatrix(cameraAnchorMatrix, 0);
This makes the object appear 'behind' the camera and moves it to the bottom-left. Then you can rotate the object with the angle value:
Matrix.rotateM(cameraAnchorMatrix, 0, 230 - directionChange, 0f, 1f, 0f);
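Putting this together with the draw calls from the question, the per-frame update would look roughly like this (a sketch only; directionChange is assumed to be whatever heading value you track for the arrow):
float scaleFactor = 1.0f;
// Anchor the arrow to the camera pose, offset towards the bottom-left of the view
camera.getPose()
      .compose(Pose.makeTranslation(0.37f, -0.17f, -1f))
      .extractTranslation()
      .toMatrix(cameraAnchorMatrix, 0);
// Turn the arrow around the Y axis to point in the current direction
Matrix.rotateM(cameraAnchorMatrix, 0, 230 - directionChange, 0f, 1f, 0f);
virtualObject.updateModelMatrix(cameraAnchorMatrix, scaleFactor / 10);
virtualObject.draw(viewmtx, projmtx, colorCorrectionRgba);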
Hello, I'm trying to make a never-ending background, so I try to wrap my texture:
texture.setWrap(Texture.TextureWrap.Repeat, Texture.TextureWrap.Repeat);
And draw it:
bach.begin();
bach.draw(texture, 0, 0);
bach.end();
I see no change to the texture when I use setWrap.
If I draw the texture this way:
bach.begin();
bach.draw(texture, 0, 0, texture.getWidth(), texture.getHeight(), 0, 0, 1, 1);
bach.end();
It repeats the texture but flipped...
If I try to flip the y and x in the bach.draw call, I get an error.
I can only flip the camera, but then the y position flips too (translating up gives negative values / translating down gives positive values).
To repeat a texture, you need it to use UVs bigger than one, which you can do with a TextureRegion that is bigger than the Texture it references.
TextureRegion backgroundTextureRegion = new TextureRegion(texture, bgWidth, bgHeight);
//...
batch.begin();
batch.draw(backgroundTextureRegion, 0, 0, cameraWidth, cameraHeight);
batch.end();
Where bgWidth and bgHeight are how many texels wide and high you want to draw the background. For instance, if your camera's viewportWidth is 1920 and you want your texture to be drawn at 1:1 scale (texture pixels : camera units), then bgWidth would be 1920.
If you need to flip it vertically, you can use -bgHeight instead without messing with the camera.
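A minimal sketch under those assumptions (a 1920x1080 camera drawn at 1:1 scale; texture and batch as in the snippets above):
// The texture itself must be set to repeat so UVs outside 0..1 tile it
texture.setWrap(Texture.TextureWrap.Repeat, Texture.TextureWrap.Repeat);
// A region 1920x1080 texels wide/high, i.e. larger than the texture, so it repeats
TextureRegion backgroundTextureRegion = new TextureRegion(texture, 1920, 1080);

batch.begin();
batch.draw(backgroundTextureRegion, 0, 0, 1920, 1080);  // fill the whole viewport
batch.end();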
I have a triangle like this:
shapeRenderer.begin(ShapeType.Line);
shapeRenderer.setColor(1, 1, 0, 1);
shapeRenderer.polygon(new float[] { -10, 0, 10, 0, 0, 200 });
shapeRenderer.rotate(0, 0, 1, 1);
shapeRenderer.end();
and I rotate it 1 degree in each render. But I want to fix the rotation at a specific angle (e.g. 45). How can I do this?
Thanks.
To have a fixed rotation you have to rotate the ShapeRenderer only once.
There are two possible ways I can think of:
Call shapeRenderer.rotate(0, 0, 1, 45); in the constructor or in the create() / show() method.
This call rotates your ShapeRenderer by 45° (the last parameter) around the Z axis (the third parameter).
Call shapeRenderer.rotate(0, 0, 1, 45); in the render method, but only if you have not rotated yet. So you have to keep a boolean rotated, and only call rotate() and set it to true if it is false.
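A minimal sketch of that second option (the boolean guard; the rotated field and render method shown here are just for illustration):
private boolean rotated = false;

public void render() {
    if (!rotated) {
        shapeRenderer.rotate(0, 0, 1, 45);  // rotate once, 45 degrees around the Z axis
        rotated = true;
    }
    shapeRenderer.begin(ShapeType.Line);
    shapeRenderer.setColor(1, 1, 0, 1);
    shapeRenderer.polygon(new float[] { -10, 0, 10, 0, 0, 200 });
    shapeRenderer.end();
}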
To answer the question in your comment: you cannot directly set the rotation, you can only rotate (relative to the current rotation). So I would suggest storing a float rotation, and every time you rotate your ShapeRenderer, updating that value. To set an absolute rotation in degrees you have to rotate like this:
shapeRenderer.rotate(0, 0, 1, newRotation - rotation);
rotation = newRotation;
This only works if you always rotate around the same axis, in your case the Z axis. Otherwise you would have to store three rotations (x, y, z). If you rotate around a custom axis, defined by for example (0.1, 0.3, 0.6), you would need to calculate the rotation for all axes; I don't really know how to do that off-hand (some vector math would do it), but I don't think you need that.
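A small sketch of that bookkeeping, wrapped in a helper (the field and method names are just for illustration):
private float rotation = 0f;  // current rotation of the ShapeRenderer around the Z axis, in degrees

private void setRotation(ShapeRenderer shapeRenderer, float newRotation) {
    // Rotate by the difference so the total rotation ends up at newRotation
    shapeRenderer.rotate(0, 0, 1, newRotation - rotation);
    rotation = newRotation;
}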
I want to create a camera moving above a tiled plane. The camera is supposed to move in the XY plane only and to look straight down all the time. With an orthographic projection, I expect a pseudo-2D renderer.
My problem is, that I don't know how to translate the camera. After some research it seems to me, that there is nothing like a "camera" in OpenGL and I have to translate the whole world. Changing the eye-position and view center coordinates in the Matrix.setLookAtM-function just leads to distorted results.
Translating the whole MVP-Matrix does not work either.
I'm running out of ideas now; do I have to translate every single vertex every frame directly in the vertex buffer? That does not seem plausible to me.
I derived GLSurfaceView and implemented the following functions to set up and update the scene:
public void onSurfaceChanged(GL10 unused, int width, int height) {
    GLES20.glViewport(0, 0, width, height);
    float ratio = (float) width / height;
    // Set up the projection matrix for an orthographic view
    Matrix.orthoM(mProjMatrix, 0, -ratio, ratio, -1, 1, 3, 7);
}

public void onDrawFrame(GL10 unused) {
    // Draw background color
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);

    // Set up the camera
    float[] camPos = { 0.0f, 0.0f, -3.0f }; // no matter what else I put in here, the camera seems to point
    float[] lookAt = { 0.0f, 0.0f, 0.0f };  // to the coordinate center and distorts the square

    // Set the camera position (view matrix)
    Matrix.setLookAtM(vMatrix, 0, camPos[0], camPos[1], camPos[2], lookAt[0], lookAt[1], lookAt[2], 0f, 1f, 0f);

    // Calculate the projection and view transformation
    Matrix.multiplyMM(mMVPMatrix, 0, mProjMatrix, 0, vMatrix, 0);

    // Rotate the viewport
    Matrix.setRotateM(mRotationMatrix, 0, getRotationAngle(), 0, 0, -1.0f);
    Matrix.multiplyMM(mMVPMatrix, 0, mRotationMatrix, 0, mMVPMatrix, 0);

    // I also tried to translate the viewport here
    // (and several other places), but I could not find any solution

    // Draw the plane (actually a simple square right now)
    mPlane.draw(mMVPMatrix);
}
Changing the eye-position and view center coordinates in the "LookAt"-function just leads to distorted results.
If you got this from the Android tutorial, I think they have a bug in their code (I made a comment about it here).
Try the following fixes:
Use setLookAtM to point to where you want the camera to be.
In the shader, change the gl_Position line
from: " gl_Position = vPosition * uMVPMatrix;"
to: " gl_Position = uMVPMatrix * vPosition;"
I'd think the //rotate the viewport section should be removed as well, as this is not rotating the camera properly. You can change the camera's orientation in the setLookAtM function.
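For illustration, a sketch of onDrawFrame with those fixes applied: the rotation section removed and the camera translated in X/Y through setLookAtM (camX and camY are hypothetical fields holding the camera position; the other names are taken from the question):
public void onDrawFrame(GL10 unused) {
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);

    // Move the eye and the look-at center together in the XY plane,
    // so the camera translates but keeps looking straight down the Z axis
    Matrix.setLookAtM(vMatrix, 0,
            camX, camY, -3.0f,   // eye
            camX, camY,  0.0f,   // center
            0f, 1f, 0f);         // up

    Matrix.multiplyMM(mMVPMatrix, 0, mProjMatrix, 0, vMatrix, 0);
    mPlane.draw(mMVPMatrix);
}
// and in the vertex shader: gl_Position = uMVPMatrix * vPosition;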
I've implemented a camera in Java using a position vector and three direction vectors so I can use gluLookAt(); moving around in 'ghost mode' works well enough, but I want to add collision detection. I can't seem to figure out how to transform my position vector into the coordinates in which OpenGL draws my objects.
A rough sketch of my drawing loop is this:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glLoadIdentity();
camera.setView();
drawer.drawTheScene();
I'm at a loss as to how to proceed; looking at the ModelView matrix between calls and at my position vector, I haven't found any kind of correlation.
Finally figured it out by reviewing http://fly.cc.fer.hr/~unreal/theredbook/chapter03.html again. To get from eye space (camera) to object space, you have to multiply that vector by the inverse of the ModelView matrix, or in code:
// The camera position in eye space is the origin, hence (0, 0, 0, 1)
Vector4f vpos = new Vector4f(0, 0, 0, 1);
// Read the current ModelView matrix from OpenGL
FloatBuffer modelView = ByteBuffer.allocateDirect(16 * 4)
        .order(ByteOrder.nativeOrder())
        .asFloatBuffer();
GL11.glGetFloat(GL11.GL_MODELVIEW_MATRIX, modelView);
// Invert it and transform the eye-space origin into object (world) space
Matrix4f m4 = new Matrix4f();
m4.load(modelView);
m4.invert();
vpos = Matrix4f.transform(m4, vpos, vpos);