I'm getting screen coordinates using this:
@Override
public boolean onTouchEvent(MotionEvent ev) {
    x = ev.getX(0);
    y = ev.getY(0);
    return true;
}
And these are the vertices of my OpenGL 1.0 square:
private float vertices[] = {
-1.0f, -1.0f, 0.0f, // V1 - bottom left
-1.0f, 1.0f, 0.0f, // V2 - top left
1.0f, -1.0f, 0.0f, // V3 - bottom right
1.0f, 1.0f, 0.0f // V4 - top right
};
Anyone who has worked with OpenGL knows that if I pasted the x and y variables in place of the vertices, I would get absolute nonsense. My question is: what formula should I use to convert the screen coordinates x and y to world coordinates, so that I can position my square at the touched point?
EDIT:
Oops, I forgot to say that it's a 2D game...
Actually, I found a way myself; glUnProject is not the best option on the Android platform...
http://magicscrollsofcode.blogspot.com/2010/10/3d-picking-in-android.html
There is a function called gluUnProject that can do this for you; the link is below.
http://www.opengl.org/sdk/docs/man/xhtml/gluUnProject.xml
By the way, the screen coordinates correspond to a 3D line passing from the center of the camera through the touched point on the image plane.
The modelview, projection, and viewport inputs can be obtained by querying OpenGL for the current matrices; the function calls are specified at the same link.
Besides the x and y screen parameters, you also need a depth (z) parameter. You can use the depth range to place the square in a particular z plane, or give it a default value; just make sure it lies inside the visible region.
Once you receive the object coordinates, treat them as the center of the square and draw a square of the required size.
Satish
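Since the edit above mentions it is a plain 2D game, the mapping can also be done with simple proportions instead of gluUnProject, assuming an orthographic projection. A minimal sketch; the ortho bounds left/right/bottom/top are assumptions (substitute whatever you passed to glOrthof), and remember that Android's screen y axis points down while OpenGL's world y axis points up:

```java
public final class TouchMapper {

    /** Maps a touch position to world coordinates for an orthographic 2D view. */
    public static float[] screenToWorld(float touchX, float touchY,
                                        float screenW, float screenH,
                                        float left, float right,
                                        float bottom, float top) {
        float worldX = left + (touchX / screenW) * (right - left);
        // Screen y grows downward, world y grows upward, so flip the axis.
        float worldY = top - (touchY / screenH) * (top - bottom);
        return new float[] { worldX, worldY };
    }
}
```

In onTouchEvent you would convert (ev.getX(0), ev.getY(0)) this way and then glTranslatef the square to the result before drawing.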
Related
I am currently trying to convert the drawing methods of my 2D Java game to OpenGL using JOGL, because native Java seems rather slow for drawing high-res images in rapid succession. Now I want to use a 16:9 aspect ratio, but the problem is that my image is stretched to the sides. Currently I am only drawing a white rotating quad to test this:
public void resize(GLAutoDrawable d, int width, int height) {
    GL2 gl = d.getGL().getGL2(); // get the OpenGL 2 graphics context
    gl.glViewport(0, 0, width, height);
    gl.glMatrixMode(GL2.GL_PROJECTION);
    gl.glOrtho(-1.0f, 1.0f, -1.0f, 1.0f, -1.0f, 1.0f);
}
public void display(GLAutoDrawable d) {
    GL2 gl = d.getGL().getGL2(); // get the OpenGL 2 graphics context
    gl.glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    gl.glClear(GL.GL_COLOR_BUFFER_BIT);
    gl.glMatrixMode(GL2.GL_MODELVIEW);
    gl.glLoadIdentity();
    gl.glColor3f(1.0f, 1.0f, 1.0f);
    degree += 0.1f;
    gl.glRotatef(degree, 0.0f, 0.0f, 1.0f);
    gl.glBegin(GL2.GL_QUADS);
    gl.glVertex2f(-0.25f, 0.25f);
    gl.glVertex2f(0.25f, 0.25f);
    gl.glVertex2f(0.25f, -0.25f);
    gl.glVertex2f(-0.25f, -0.25f);
    gl.glEnd();
    gl.glRotatef(-degree, 0.0f, 0.0f, 1.0f);
    gl.glFlush();
}
I know that you can somehow address this problem with glOrtho(), and I have tried many different values, but none of them produced an unstretched image. How am I supposed to use it? Or is there another simple solution?
The projection matrix transforms all vertex data from eye coordinates to clip coordinates.
These clip coordinates are then transformed to normalized device coordinates (NDC) by dividing by the w component of the clip coordinates.
The normalized device coordinates lie in the range (-1, -1, -1) to (1, 1, 1).
With an orthographic projection, the eye-space coordinates are mapped linearly to NDC,
so if the viewport is not square, this has to be accounted for when mapping the coordinates:
float aspect = (float)width/height;
gl.glOrtho(-aspect, aspect, -1.0f, 1.0f, -1.0f, 1.0f);
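As a sketch of the arithmetic (the helper name is made up; also note that glOrtho multiplies onto the current projection matrix, so resize usually calls glLoadIdentity() first, otherwise repeated resizes accumulate):

```java
public final class OrthoBounds {

    /** Returns {left, right, bottom, top}, keeping a unit half-height for a width x height viewport. */
    public static float[] extents(int width, int height) {
        float aspect = (float) width / height;
        return new float[] { -aspect, aspect, -1.0f, 1.0f };
    }
}
```

In resize that would become gl.glLoadIdentity(); followed by gl.glOrtho(e[0], e[1], e[2], e[3], -1.0f, 1.0f), so a quad with equal extents stays square at any window size.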
I want to put more than one texture on a cube as easily as possible. There should be a different texture (an image like the one loaded in the loadGLTexture method, android.png) on each side of the cube. Part of my code:
public class Square2 {
private FloatBuffer vertexBuffer; // buffer holding the vertices
private FloatBuffer vertexBuffer2; // buffer holding the vertices
[and so on for all 6 sides]
private float vertices[] = {
-1.0f, -1.0f, -1.0f, // V1 - bottom left
-1.0f, 1.0f, -1.0f, // V2 - top left
1.0f, -1.0f, -1.0f, // V3 - bottom right
1.0f, 1.0f, 1.0f // V4 - top right
};
private float vertices2[] = {
1.0f, -1.0f, -1.0f, // V1 - bottom left
1.0f, 1.0f, -1.0f, // V2 - top left
1.0f, -1.0f, 1.0f, // V3 - bottom right
1.0f, 1.0f, 1.0f // V4 - top right
};
[for all 6 sides too]
private FloatBuffer textureBuffer; // buffer holding the texture coordinates
private FloatBuffer textureBuffer2; // buffer holding the texture coordinates
[and so on for all 6 sides]
private float texture[] = {
// Mapping coordinates for the vertices
0.0f, 1.0f, // top left (V2)
0.0f, 0.0f, // bottom left (V1)
1.0f, 1.0f, // top right (V4)
1.0f, 0.0f // bottom right (V3)
};
private float texture2[] = {
// Mapping coordinates for the vertices
0.0f, 1.0f, // top left (V2)
0.0f, 0.0f, // bottom left (V1)
1.0f, 1.0f, // top right (V4)
1.0f, 0.0f // bottom right (V3)
};
[I don't really understand the texture array; is one enough for all sides, even if I want different textures?]
/** The texture pointer */
private int[] textures = new int[6];
public Square2() {
// a float has 4 bytes so we allocate for each coordinate 4 bytes
ByteBuffer byteBuffer = ByteBuffer.allocateDirect(vertices.length * 4);
byteBuffer.order(ByteOrder.nativeOrder());
// allocates the memory from the byte buffer
vertexBuffer = byteBuffer.asFloatBuffer();
// fill the vertexBuffer with the vertices
vertexBuffer.put(vertices);
// set the cursor position to the beginning of the buffer
vertexBuffer.position(0);
ByteBuffer byteBuffer2 = ByteBuffer.allocateDirect(vertices2.length * 4);
byteBuffer2.order(ByteOrder.nativeOrder());
// allocates the memory from the byte buffer
vertexBuffer2 = byteBuffer2.asFloatBuffer();
// fill the vertexBuffer with the vertices
vertexBuffer2.put(vertices2);
// set the cursor position to the beginning of the buffer
vertexBuffer2.position(0);
[and so on for all 6 sides]
byteBuffer = ByteBuffer.allocateDirect(texture.length * 4);
byteBuffer.order(ByteOrder.nativeOrder());
textureBuffer = byteBuffer.asFloatBuffer();
textureBuffer.put(texture);
textureBuffer.position(0);
[and so on]
}
/**
 * Load the texture for the square
 * @param gl
 * @param context
 */
public void loadGLTexture(GL10 gl, Context context) {
// loading texture
Bitmap bitmap = BitmapFactory.decodeResource(context.getResources(),
R.drawable.android);
// generate one texture pointer
gl.glGenTextures(1, textures, 0);
// ...and bind it to our array
gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[0]);
// create nearest filtered texture
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_NEAREST);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
//Different possible texture parameters, e.g. GL10.GL_CLAMP_TO_EDGE
//gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_REPEAT);
//gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_REPEAT);
// Use Android GLUtils to specify a two-dimensional texture image from our bitmap
GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);
// Clean up
bitmap.recycle();
}
/** The draw method for the square with the GL context */
public void draw(GL10 gl) {
// bind the previously generated texture
gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[0]);
gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[1]);
// Point to our buffers
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
// Set the face rotation
gl.glFrontFace(GL10.GL_CW);
// Point to our vertex buffer1 vorn
gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer);
gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer);
// Draw the vertices as triangle strip
gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, vertices.length / 3);
//2 rechts
gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer2);
gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer2);
// Draw the vertices as triangle strip
gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, vertices2.length / 3);
[and so on]
//Disable the client state before leaving
gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
}
}
Unfortunately this just gives me the same texture on all sides. How can I put a different texture on each side? Do I have to call the loadGLTexture method more than once, or edit it? I thought about not recycling the bitmap; would that help?
There are many ways to achieve this. The most straightforward is to load all the textures, then bind a different one before each side is drawn (so all you are missing is a glBindTexture call before each glDrawArrays). In that case you do not need multiple texture coordinate buffers, and you would not even need multiple vertex buffers: you can create one square buffer and position (rotate) it with matrices for each side, binding the correct texture each time.
Another approach is to use a single texture into which you load all six images via glTexSubImage2D (each at a different, non-overlapping part of the texture) and then use different texture coordinates for each side of the cube. That way you can create a single vertex/texture buffer for the whole cube and draw it with a single glDrawArrays call, though in your specific case that would still be no fewer than 4x6 = 24 vertices.
You said you do not understand texture coordinate arrays. Texture coordinates describe a relative position on the texture: (0,0) is the top-left part of the texture (image) and (1,1) is the bottom-right part. If the texture contained four images in a 2x2 grid and you wanted to use, for example, the bottom-left one, your texture coordinates would look like this:
private float texture[] = {
    0.0f, 1.0f,
    0.0f, 0.5f,
    0.5f, 1.0f,
    0.5f, 0.5f
};
As for coordinate values larger than 1.0 or smaller than 0.0, they can be used either to clamp part of the object to the texture's edge or to repeat the texture (both are possible texture wrap parameters, usually set when the texture is loaded). The number of texture coordinates should be at least the vertex count passed to glDrawArrays, or the largest index + 1 when using glDrawElements. So in your specific case you can keep just one texture coordinate buffer, unless you later change your mind and want one of the sides to show a rotated image, which you would solve by changing its texture coordinates.
I've added a light source to my JOGL project, which seems to work quite well while the object is stationary. When I move the camera, the scene gradually gets darker as it rotates, which is what I'd expect, but as soon as it rotates 90 degrees the screen goes completely black. Does anyone know why? Do I need another light source for the other side? I was hoping it would act like the sun, i.e. light up the whole scene but be slightly darker when the camera is on the other side of the object.
Lighting
float light_ambient[] = { 0.0f, 0.0f, 0.0f, 1.0f };
float light_diffuse[] = { 1.0f, 1.0f, 1.0f, 1.0f };
float light_specular[] = { 1.0f, 1.0f, 1.0f, 1.0f };
float light_position[] = { 1.0f, 1.0f, 1.0f, 0.0f };
gl.glLightfv(GL2.GL_LIGHT0, GL2.GL_AMBIENT, light_ambient, 0);
gl.glLightfv(GL2.GL_LIGHT0, GL2.GL_DIFFUSE, light_diffuse, 0);
gl.glLightfv(GL2.GL_LIGHT0, GL2.GL_SPECULAR, light_specular, 0);
gl.glLightfv(GL2.GL_LIGHT0, GL2.GL_POSITION, light_position, 0);
gl.glEnable(GL2.GL_LIGHTING);
gl.glEnable(GL2.GL_LIGHT0);
gl.glDepthFunc(GL.GL_LESS);
gl.glEnable(GL.GL_DEPTH_TEST);
Secondly, when the camera rotates, some of the shapes seem to deform and look like completely different shapes, e.g. cubes turning pincushion-like, with sides stretched an incredible amount, making my whole object look deformed. Is there an easy way to fix this? I've tried adjusting gluPerspective and that doesn't seem to change what I want either. Is there any way around this?
You have added diffuse and specular light to your scene, but these will not reach surfaces that are facing away from the light source. You could add some ambient light (currently set to 0, 0, 0 in your code snippet) so that all surfaces receive some illumination.
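The sudden blackout is the expected behaviour of the Lambert diffuse term that fixed-function lighting evaluates per vertex: the diffuse contribution scales with max(0, N . L), so it clamps to zero the moment the surface normal faces away from the light, leaving only the ambient term (zero in the snippet above). A sketch of the arithmetic, assuming unit-length vectors:

```java
public final class Lambert {

    /** Diffuse factor for a unit surface normal n and unit light direction l. */
    public static float diffuse(float[] n, float[] l) {
        float nDotL = n[0] * l[0] + n[1] * l[1] + n[2] * l[2];
        return Math.max(0.0f, nDotL); // clamped: back-facing surfaces get zero
    }

    /** Total intensity with an ambient floor, as suggested above. */
    public static float shade(float ambient, float[] n, float[] l) {
        return Math.min(1.0f, ambient + diffuse(n, l));
    }
}
```

With a nonzero ambient value (e.g. { 0.2f, 0.2f, 0.2f, 1.0f } for light_ambient), surfaces facing away from the light stay dimly visible instead of going fully black.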
As for the deformed shapes, that is really a separate question, and there is not enough detail given to know why this is happening.
I have a model matrix that I am keeping track of for position of the mesh in my world. With each call to glRotate() and glTranslate() I have a corresponding call to modelMatrix.rotate() and modelMatrix.translate() which appears to be working correctly.
Now I need to update the bounding box associated with each of my models. I'm working in the libGDX framework and in the BoundingBox class found here, there is a method mul() that should allow me to apply a matrix to the bounding box but the values are not being updated correctly and I think it may be the way I am trying to apply it. Any ideas?
Here is my relevant code:
gl.glPushMatrix();
// Set the model matrix to the identity matrix
modelMatrix.idt();
// Update the orbit value of this model
orbit = (orbit + ORBIT_SPEED * delta) % 360;
gl.glRotatef(orbit, 1.0f, 1.0f, 0);
// Update the model matrix rotation
modelMatrix.rotate(1.0f, 1.0f, 0, orbit);
// Move the model to its specified radius
gl.glTranslatef(0, 0, -ORBIT_DISTANCE);
// Update the model matrix translation
modelMatrix.translate(0, 0, -ORBIT_DISTANCE);
// Update the bounding box
boundingBox.mul(modelMatrix);
if (GameState.DEBUG)
{
renderBoundingBox(gl, delta);
}
// Bind the texture and draw
texture.bind();
mesh.render(GL10.GL_TRIANGLES);
gl.glPopMatrix();
The order in which the matrix multiplication is computed matters. Can you try ModelMatrix * Box instead? I think that's the issue.
I'm trying to implement a blur mechanic in a Java game. How do I create a blur effect at runtime?
Google "Gaussian Blur", try this: http://www.jhlabs.com/ip/blurring.html
Read about/Google "Convolution Filters"; it's a method of changing a pixel's value based on the values of the pixels around it. So apart from blurring, you can also do image sharpening and line finding.
If you are doing java game development, I'm willing to bet you are using java2d.
You want to create a convolution filter like so:
// Create the kernel. Note that this particular kernel (center 5, -1 neighbours)
// actually sharpens; a 3x3 box blur would use nine 1/9 entries instead.
float[] kernelData = {
     0.0F, -1.0F,  0.0F,
    -1.0F,  5.0F, -1.0F,
     0.0F, -1.0F,  0.0F
};
KernelJAI kernel = new KernelJAI(3, 3, kernelData);
// Create the convolve operation.
blurredImage = JAI.create("convolve", originalImage, kernel);
You can find more information at: http://java.sun.com/products/java-media/jai/forDevelopers/jai1_0_1guide-unc/Image-enhance.doc.html#51172 (which is where the code is from too)