Different textures in Android - Java

I want to put more than one texture on a cube, as simply as possible. There should be a different texture (a picture like the one loaded in the loadGLTexture method, android.png) on each side of the cube. Part of my code:
public class Square2 {
private FloatBuffer vertexBuffer; // buffer holding the vertices
private FloatBuffer vertexBuffer2; // buffer holding the vertices
[and so on for all 6 sides]
private float vertices[] = {
-1.0f, -1.0f, -1.0f, // V1 - bottom left
-1.0f, 1.0f, -1.0f, // V2 - top left
1.0f, -1.0f, -1.0f, // V3 - bottom right
1.0f, 1.0f, -1.0f // V4 - top right
};
private float vertices2[] = {
1.0f, -1.0f, -1.0f, // V1 - bottom left
1.0f, 1.0f, -1.0f, // V2 - top left
1.0f, -1.0f, 1.0f, // V3 - bottom right
1.0f, 1.0f, 1.0f // V4 - top right
};
[for all 6 sides too]
private FloatBuffer textureBuffer; // buffer holding the texture coordinates
private FloatBuffer textureBuffer2; // buffer holding the texture coordinates
[and so on for all 6 sides]
private float texture[] = {
// Mapping coordinates for the vertices
0.0f, 1.0f, // top left (V2)
0.0f, 0.0f, // bottom left (V1)
1.0f, 1.0f, // top right (V4)
1.0f, 0.0f // bottom right (V3)
};
private float texture2[] = {
// Mapping coordinates for the vertices
0.0f, 1.0f, // top left (V2)
0.0f, 0.0f, // bottom left (V1)
1.0f, 1.0f, // top right (V4)
1.0f, 0.0f // bottom right (V3)
};
[I don't really understand the texture array. Is one enough for all sides, even if I want different textures?]
/** The texture pointer */
private int[] textures = new int[6];
public Square2() {
// a float has 4 bytes so we allocate for each coordinate 4 bytes
ByteBuffer byteBuffer = ByteBuffer.allocateDirect(vertices.length * 4);
byteBuffer.order(ByteOrder.nativeOrder());
// allocates the memory from the byte buffer
vertexBuffer = byteBuffer.asFloatBuffer();
// fill the vertexBuffer with the vertices
vertexBuffer.put(vertices);
// set the cursor position to the beginning of the buffer
vertexBuffer.position(0);
ByteBuffer byteBuffer2 = ByteBuffer.allocateDirect(vertices2.length * 4);
byteBuffer2.order(ByteOrder.nativeOrder());
// allocates the memory from the byte buffer
vertexBuffer2 = byteBuffer2.asFloatBuffer();
// fill the vertexBuffer with the vertices
vertexBuffer2.put(vertices2);
// set the cursor position to the beginning of the buffer
vertexBuffer2.position(0);
[and so on for all 6 sides]
byteBuffer = ByteBuffer.allocateDirect(texture.length * 4);
byteBuffer.order(ByteOrder.nativeOrder());
textureBuffer = byteBuffer.asFloatBuffer();
textureBuffer.put(texture);
textureBuffer.position(0);
[and so on]
}
/**
* Load the texture for the square
* @param gl
* @param context
*/
public void loadGLTexture(GL10 gl, Context context) {
// loading texture
Bitmap bitmap = BitmapFactory.decodeResource(context.getResources(),
R.drawable.android);
// generate one texture pointer
gl.glGenTextures(1, textures, 0);
// ...and bind it to our array
gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[0]);
// create nearest filtered texture
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_NEAREST);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
//Different possible texture parameters, e.g. GL10.GL_CLAMP_TO_EDGE
//gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_REPEAT);
//gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_REPEAT);
// Use Android GLUtils to specify a two-dimensional texture image from our bitmap
GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);
// Clean up
bitmap.recycle();
}
/** The draw method for the square with the GL context */
public void draw(GL10 gl) {
// bind the previously generated texture
gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[0]);
gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[1]);
// Point to our buffers
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
// Set the face rotation
gl.glFrontFace(GL10.GL_CW);
// Point to our vertex buffer (side 1: front)
gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer);
gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer);
// Draw the vertices as triangle strip
gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, vertices.length / 3);
// side 2: right
gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer2);
gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer2);
// Draw the vertices as triangle strip
gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, vertices2.length / 3);
[and so on]
//Disable the client state before leaving
gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
}
}
Unfortunately this just gives me the same texture on all sides. How can I get different textures on each side? Do I have to call the loadGLTexture method more than once, or edit it? I thought about not recycling the bitmap; would that help?

There are many ways to achieve this. The most straightforward is to load all the textures, then bind a different one before each side is drawn (so all you are missing is a glBindTexture before each glDrawArrays). In that case you do not need multiple texture coordinate buffers, and you would not even need multiple vertex buffers: you can create one square buffer and position (rotate) it with matrices for each side, binding the correct texture each time.
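A minimal sketch of that first approach, keeping the structure of your class (the drawables android2 through android6 are made-up resource ids here, one image per side):

public void loadGLTexture(GL10 gl, Context context) {
    // Generate six texture names, one per side, instead of just one
    gl.glGenTextures(6, textures, 0);
    // Hypothetical resource ids; replace with your six images
    int[] ids = { R.drawable.android, R.drawable.android2, R.drawable.android3,
                  R.drawable.android4, R.drawable.android5, R.drawable.android6 };
    for (int i = 0; i < ids.length; i++) {
        Bitmap bitmap = BitmapFactory.decodeResource(context.getResources(), ids[i]);
        gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[i]);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_NEAREST);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
        GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);
        // Recycling is fine here: the pixels are already uploaded to the GPU
        bitmap.recycle();
    }
}

Then in draw(), bind the side's texture right before its glDrawArrays:

gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[0]);
gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer);
gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer);
gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, vertices.length / 3);

gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[1]); // next side, next texture
gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer2);
gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer); // one coord buffer is enough
gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, vertices2.length / 3);
// ... and so on for the remaining sides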
Another approach is to use a single texture onto which you load all six images via glTexSubImage2D (each at a different, non-overlapping part of the texture) and then use different texture coordinates for each side of the cube. This way you can create a single vertex/texture buffer for the whole cube and draw it with a single glDrawArrays call, though in your specific case that would be no fewer than 4×6 = 24 vertices.
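A sketch of the atlas idea, assuming six 256x256 side images packed into a 512x1024 texture (the sizes and offsets are illustrative; on Android the copy is done with GLUtils.texSubImage2D, which writes a bitmap into a region of the currently bound texture):

gl.glGenTextures(1, textures, 0);
gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[0]);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_NEAREST);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
// Reserve empty storage for the whole atlas (passing null just allocates)
gl.glTexImage2D(GL10.GL_TEXTURE_2D, 0, GL10.GL_RGBA, 512, 1024, 0,
        GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, null);
// Copy each side's bitmap into its own non-overlapping cell
GLUtils.texSubImage2D(GL10.GL_TEXTURE_2D, 0, 0, 0, bitmap1);
GLUtils.texSubImage2D(GL10.GL_TEXTURE_2D, 0, 256, 0, bitmap2);
GLUtils.texSubImage2D(GL10.GL_TEXTURE_2D, 0, 0, 256, bitmap3);
// ... three more cells; each side's texture coordinates then select its cell,
// e.g. the first cell spans (0, 0) to (0.5, 0.25) in texture coordinates.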
As you said you do not understand texture coordinate arrays: texture coordinates give the relative position on the texture, so (0,0) is the top left of the texture (image) and (1,1) is the bottom right. If the texture contained 4 images in a 2x2 grid and you wanted to use, for example, the bottom left one, your texture coordinates would look like this:
private float texture[] = {
0.0f, 1.0f,
0.0f, 0.5f,
0.5f, 1.0f,
0.5f, 0.5f
};
As for coordinate values larger than 1.0 or smaller than 0.0, I believe they can be used either to clamp part of the object or to repeat the texture (both are possible texture parameters, usually set when loading the texture). As for the number of texture coordinates, there should be at least as many as the last parameter of glDrawArrays, or as many as the largest index + 1 when using glDrawElements. So in your specific case you can keep just one texture coordinate buffer, unless you change your mind at some point and want one of the sides to show a rotated image, solving that by changing its texture coordinates.

Related

OpenGL 2D projection stretched by aspect ratio

I am currently trying to convert the drawing methods of my 2D Java game to OpenGL using JOGL, because native Java seems rather slow for drawing high-res images in rapid succession. I want to use a 16:9 aspect ratio, but the problem is that my image is stretched to the sides. Currently I am only drawing a white rotating quad to test this:
public void resize(GLAutoDrawable d, int width, int height) {
GL2 gl = d.getGL().getGL2(); // get the OpenGL 2 graphics context
gl.glViewport(0, 0, width, height);
gl.glMatrixMode(GL2.GL_PROJECTION);
gl.glOrtho(-1.0f, 1.0f, -1.0f, 1.0f, -1.0f, 1.0f);
}
public void display(GLAutoDrawable d) {
GL2 gl = d.getGL().getGL2(); // get the OpenGL 2 graphics context
gl.glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
gl.glClear(GL.GL_COLOR_BUFFER_BIT);
gl.glMatrixMode(GL2.GL_MODELVIEW);
gl.glLoadIdentity();
gl.glColor3f(1.0f,1.0f,1.0f);
degree += 0.1f;
gl.glRotatef(degree, 0.0f, 0.0f, 1.0f);
gl.glBegin(GL2.GL_QUADS);
gl.glVertex2f(-0.25f, 0.25f);
gl.glVertex2f(0.25f, 0.25f);
gl.glVertex2f(0.25f, -0.25f);
gl.glVertex2f(-0.25f, -0.25f);
gl.glEnd();
gl.glRotatef(-degree, 0.0f, 0.0f, 1.0f);
gl.glFlush();
}
I know that you can somehow address this problem with glOrtho(), and I have tried many different values, but none of them produced an unstretched image. How do I have to use it? Or is there another simple solution?
The projection matrix transforms all vertex data from eye coordinates to clip coordinates.
These clip coordinates are then transformed to normalized device coordinates (NDC) by dividing by the w component of the clip coordinates.
Normalized device coordinates are in the range (-1, -1, -1) to (1, 1, 1).
With an orthographic projection, the eye space coordinates are mapped linearly to NDC, so if the viewport is not square, its aspect ratio has to be accounted for in the mapping:
float aspect = (float)width/height;
gl.glOrtho(-aspect, aspect, -1.0f, 1.0f, -1.0f, 1.0f);
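Putting it together, a sketch of the corrected resize method (note that glOrtho multiplies onto the current matrix, so the projection matrix should be reset first):

public void resize(GLAutoDrawable d, int width, int height) {
    GL2 gl = d.getGL().getGL2(); // get the OpenGL 2 graphics context
    gl.glViewport(0, 0, width, height);
    gl.glMatrixMode(GL2.GL_PROJECTION);
    gl.glLoadIdentity(); // reset, otherwise successive resizes stack ortho matrices
    float aspect = (float) width / height;
    // Widen the horizontal extent by the aspect ratio so world units stay square
    gl.glOrtho(-aspect, aspect, -1.0f, 1.0f, -1.0f, 1.0f);
    gl.glMatrixMode(GL2.GL_MODELVIEW);
}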

How to properly draw a texture with opacity in Android OpenGL?

I'm trying to draw 5 different textures on the screen, but I can't seem to make the alpha work. The textures render fine when there is no alpha, but when there is, there's a really weird "effect".
First off, I call draw on all 5 of my textures with these opacities (in the given order): 0.5, 1.0, 0.5, 1.0, 0.5. But when I start the app I actually get 1.0, 0.5, 1.0, 0.5, 1.0, as if there were an offset of one. That's not all that's weird: about half a second after my app launches, the very first texture gets an opacity of 0.5. Even weirder, this does not happen in the second draw call but somewhere between the first and second calls to draw. How is this even possible?
This is the final result (it should be 0.5, 1.0, 0.5, 1.0, 0.5, but as you can see it's not even close):
Now for some code (I've skipped some irrelevant parts).
Creating the surface:
public void onSurfaceCreated(GL10 gl, EGLConfig config)
{
// Textures are being loaded in here, which is just fine (code skipped)
gl.glEnable(GL10.GL_TEXTURE_2D);
gl.glShadeModel(GL10.GL_SMOOTH);
gl.glClearColor(1.0f, 0.0f, 0.0f, 1.0f);
gl.glClearDepthf(1.0f);
gl.glEnable(GL10.GL_DEPTH_TEST);
gl.glDepthFunc(GL10.GL_LEQUAL);
gl.glEnable(GL10.GL_BLEND);
gl.glBlendFunc(GL10.GL_SRC_ALPHA, GL10.GL_ONE_MINUS_SRC_ALPHA);
}
Updating the surface:
public void onSurfaceChanged(GL10 gl, int width, int height)
{
// Here I pass width & height to my texture objects for future reference (code skipped)
gl.glViewport(0, 0, width, height);
gl.glOrthof(0, width, height, 0, 0, 1);
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();
}
Drawing:
public void onDrawFrame(GL10 gl)
{
gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
gl.glLoadIdentity();
// Here I call draw of each texture object
}
Draw method of the texture object:
public void draw(GL10 gl)
{
gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[0]);
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
gl.glFrontFace(GL10.GL_CW);
gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer);
gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer);
gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, vertices.length / 3);
gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
gl.glTexEnvf(GL10.GL_TEXTURE_ENV, GL10.GL_TEXTURE_ENV_MODE, GL10.GL_MODULATE);
gl.glColor4f(1.0f, 1.0f, 1.0f, this.alpha); // This is where the magic doesn't happen (:
}
I've skipped the part where I create the vertex/texture buffers and load the textures; that part follows this tutorial and seems to work just fine.
So... what do you think I am missing? What could be causing this weird issue?
If I had to guess, I'd say it's some obscure OpenGL issue or bug, or more likely a flag I forgot to set. It can't be the order of drawing or anything like that; I've double-checked it all. I've also tried setting, say, 0.5f as the opacity of every texture, and that works perfectly; the problem only appears when the opacities of the textures differ from each other. I also don't think the weird between-draw flicker can be caused by user code; it's got to be some OpenGL weirdness.
I should point out that I am using a 3rd-party library to pack all of this GL magic into a live wallpaper, this awesome lib: GLWallpaperService
Why is the place where the magic happens not before the draw call? The default colour value is (1, 1, 1, 1), so at first you start with an alpha of 1.0; then you draw the first texture and set the alpha to 0.5, and the 2nd texture is drawn with that value still set... hence the strange offset effect.
When the last texture is drawn, you still have the alpha of 1.0 left over from the one before it, and only then set it to 0.5, which is used to draw the first texture on the next refresh. Since the alpha of the topmost element changes from 1.0 to 0.5, it flickers. In the end you get exactly the result seen in the image you posted.
So this truly is where the magic happens:
gl.glColor4f(1.0f, 1.0f, 1.0f, this.alpha); // This is where the magic doesn't happen
Try moving that line before the draw call, like this:
gl.glColor4f(1.0f, 1.0f, 1.0f, this.alpha); // This is where the magic does happen
gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, vertices.length / 3);
gl.glColor4f(1.0f, 1.0f, 1.0f, 1.0f); // Low mana, stop the magic
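For reference, a sketch of the whole draw method reordered this way, using the same fields as in the question:

public void draw(GL10 gl) {
    gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[0]);
    gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
    gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
    gl.glFrontFace(GL10.GL_CW);
    gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer);
    gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer);
    gl.glTexEnvf(GL10.GL_TEXTURE_ENV, GL10.GL_TEXTURE_ENV_MODE, GL10.GL_MODULATE);
    // Set the colour BEFORE the draw call so this texture gets this alpha
    gl.glColor4f(1.0f, 1.0f, 1.0f, this.alpha);
    gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, vertices.length / 3);
    // Restore the default so the next object starts from a known state
    gl.glColor4f(1.0f, 1.0f, 1.0f, 1.0f);
    gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
    gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
}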

JOGL - lighting/camera

I've added a light source to my JOGL project, which seems to work quite well while the object is stationary. When I move the camera it gradually gets darker as it rotates, which is what I'd expect, but as soon as it rotates 90 degrees the screen goes completely black. Does anyone know why this is? Do I need another light source for the other side? I was hoping it would act like the sun, i.e. light up the whole scene but be slightly darker when the camera is on the other side of the object.
Lighting
float light_ambient[] = { 0.0f, 0.0f, 0.0f, 1.0f };
float light_diffuse[] = { 1.0f, 1.0f, 1.0f, 1.0f };
float light_specular[] = { 1.0f, 1.0f, 1.0f, 1.0f };
float light_position[] = { 1.0f, 1.0f, 1.0f, 0.0f };
gl.glLightfv(GL2.GL_LIGHT0, GL2.GL_AMBIENT, light_ambient, 0);
gl.glLightfv(GL2.GL_LIGHT0, GL2.GL_DIFFUSE, light_diffuse, 0);
gl.glLightfv(GL2.GL_LIGHT0, GL2.GL_SPECULAR, light_specular, 0);
gl.glLightfv(GL2.GL_LIGHT0, GL2.GL_POSITION, light_position, 0);
gl.glEnable(GL2.GL_LIGHTING);
gl.glEnable(GL2.GL_LIGHT0);
gl.glDepthFunc(GL.GL_LESS);
gl.glEnable(GL.GL_DEPTH_TEST);
Secondly, when the camera rotates, some of the shapes seem to deform and look like completely different shapes, i.e. cubes turning pincushion-like and sides being stretched an incredible amount, making my whole object look slightly deformed. Is there an easy way to change this? I've tried messing with gluPerspective and that doesn't seem to change what I want either. Is there any way around this?
You have added diffuse and specular light to your scene, but these will not reach surfaces that are facing away from the light source. You could add some ambient light (currently set to 0, 0, 0 in your code snippet) so that all surfaces receive some illumination.
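For example, a minimal tweak to the snippet above (the exact values are illustrative):

float light_ambient[] = { 0.3f, 0.3f, 0.3f, 1.0f }; // non-zero ambient reaches every surface
gl.glLightfv(GL2.GL_LIGHT0, GL2.GL_AMBIENT, light_ambient, 0);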
As for the deformed shapes, that is really a separate question, and there is not enough detail given to know why this is happening.

How to translate the camera in GLES2.0?

I want to create a camera moving above a tiled plane. The camera is supposed to move in the XY-plane only and to look straight down all the time. With an orthogonal projection I expect a pseudo-2D renderer.
My problem is that I don't know how to translate the camera. After some research, it seems to me that there is nothing like a "camera" in OpenGL and I have to translate the whole world. Changing the eye position and view-center coordinates in the Matrix.setLookAtM function just leads to distorted results.
Translating the whole MVP-Matrix does not work either.
I'm running out of ideas now; do I have to translate every single vertex every frame directly in the vertex buffer? That does not seem plausible to me.
I derived GLSurfaceView and implemented the following functions to setup and update the scene:
public void onSurfaceChanged(GL10 unused, int width, int height) {
GLES20.glViewport(0, 0, width, height);
float ratio = (float) width / height;
// Setup the projection Matrix for an orthogonal view
Matrix.orthoM(mProjMatrix, 0, -ratio, ratio, -1, 1, 3, 7);
}
public void onDrawFrame(GL10 unused) {
// Draw background color
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
//Setup the camera
float[] camPos = { 0.0f, 0.0f, -3.0f }; //no matter what else I put in here the camera seems to point
float[] lookAt = { 0.0f, 0.0f, 0.0f }; // to the coordinate center and distorts the square
// Set the camera position (View matrix)
Matrix.setLookAtM( vMatrix, 0, camPos[0], camPos[1], camPos[2], lookAt[0], lookAt[1], lookAt[2], 0f, 1f, 0f);
// Calculate the projection and view transformation
Matrix.multiplyMM(mMVPMatrix, 0, mProjMatrix, 0, vMatrix, 0);
//rotate the viewport
Matrix.setRotateM(mRotationMatrix, 0, getRotationAngle(), 0, 0, -1.0f);
Matrix.multiplyMM(mMVPMatrix, 0, mRotationMatrix, 0, mMVPMatrix, 0);
//I also tried to translate the viewport here
// (and several other places), but I could not find any solution
//draw the plane (actually a simple square right now)
mPlane.draw(mMVPMatrix);
}
Changing the eye position and view-center coordinates in the "LookAt" function just leads to distorted results.
If you got this from the Android tutorial, I think they have a bug in their code (I made a comment about it here).
Try the following fixes:
Use setLookAtM to point to where you want the camera to be.
In the shader, change the gl_Position line
from: " gl_Position = vPosition * uMVPMatrix;"
to: " gl_Position = uMVPMatrix * vPosition;"
I'd think the //rotate the viewport section should be removed as well, as it is not rotating the camera properly. You can change the camera's orientation in the setLookAtM function.
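A sketch of panning the camera that way, assuming the field names from the question (camX and camY are hypothetical pan offsets updated elsewhere, e.g. from touch input):

// Move eye and look-at target together in the XY plane so the camera keeps
// looking straight down; the eye z stays inside the ortho near/far range (3..7).
Matrix.setLookAtM(vMatrix, 0,
        camX, camY, 5.0f,   // eye above the plane
        camX, camY, 0.0f,   // target: same XY, straight down
        0f, 1f, 0f);        // up vector
Matrix.multiplyMM(mMVPMatrix, 0, mProjMatrix, 0, vMatrix, 0);
// and in the vertex shader: gl_Position = uMVPMatrix * vPosition;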

Converting screen coordinates to world coordinates

I'm getting screen coordinates using this:
@Override
public boolean onTouchEvent(MotionEvent ev) {
x = ev.getX(0);
y = ev.getY(0);
return true;
}
And these are the vertices of my OpenGL 1.0 square:
private float vertices[] = {
-1.0f, -1.0f, 0.0f, // V1 - bottom left
-1.0f, 1.0f, 0.0f, // V2 - top left
1.0f, -1.0f, 0.0f, // V3 - bottom right
1.0f, 1.0f, 0.0f // V4 - top right
};
Anyone who has worked with OpenGL knows that if I pasted the x and y variables in place of the vertices, I would get absolute nonsense. My question is: what formula should I use to convert the screen coordinates x and y to world coordinates, so I can use them to position my square at the touched point?
EDIT:
Oops, I forgot to say that it's a 2D game...
Actually, I found a way myself; glUnProject is not the best way on the Android platform:
http://magicscrollsofcode.blogspot.com/2010/10/3d-picking-in-android.html
There is a function called gluUnProject that can do this for you. The link is here:
http://www.opengl.org/sdk/docs/man/xhtml/gluUnProject.xml
By the way, the screen coordinates correspond to a 3D line passing from the center of the camera through the screen coordinates (the image plane).
The modelview, projection and viewport inputs can be obtained by querying OpenGL for the current matrices; refer to the same link (the function calls are specified there).
Besides the x and y screen parameters, you need the depth (z) parameter. You can use the depth range to place the square in a particular z plane, or give it a default value, but make sure it is inside the visible region.
Once you receive the object coordinates, treat them as the center of the square and draw a square of the required size.
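Since the edit says this is a 2D game with an orthographic projection, a plain linear mapping is enough. A sketch, where screenW/screenH are the viewport size and left/right/bottom/top are the values passed to the ortho call (all names here are illustrative):

// Map a touch position to world coordinates under an orthographic projection.
float worldX = left + (x / screenW) * (right - left);
float worldY = top - (y / screenH) * (top - bottom); // screen y grows downward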
Satish
