I am looking to use Vuforia for an AR project.
I understand that it utilises OpenGL ES and therefore typically uses arrays of vertex data to construct the object.
I want to be able to use a standard OBJ file instead.
I already looked at using this but have no clue how to use a *.h file in the Android Java SDK.
So, I started looking at loading in an OBJ. I want to try and use something like this but just use the obj reader. But I'm stuck on how to get the vertex array.
Does anyone have knowledge on how to load an OBJ and pass it into the Vuforia Sample application?
(it needs to work in the Java SDK, not the NDK)
.obj and OpenGL aren't directly compatible: OpenGL allows only one index per vertex (all attributes share that index), while .obj allows a different index for each attribute.
What that script does is pre-process the .obj file so you can include it directly in the code without needing to (slowly) parse it at runtime.
The equivalent Java code for the .h file is:
package resource.models;

public class ModelName {
    public int NumVerts = /* exactly the number that is <name>NumVerts in the header */;

    public float[] Verts = {
        // copy-paste the numbers and append an f after each number, like so:
        // f 1//2 7//2 5//2
        -0.5f, -0.5f, -0.5f,
         0.5f,  0.5f, -0.5f,
         0.5f, -0.5f, -0.5f,
        // f 1//2 3//2 7//2
        -0.5f, -0.5f, -0.5f,
        -0.5f,  0.5f, -0.5f,
         0.5f,  0.5f, -0.5f,
        // and so on
    };

    // repeat for Normals and texture coordinates:
    public float[] Normals = {
        // f 1//2 7//2 5//2
        0f, 0f, -1f,
        0f, 0f, -1f,
        0f, 0f, -1f,
        // f 1//2 3//2 7//2
        0f, 0f, -1f,
        0f, 0f, -1f,
        0f, 0f, -1f,
        // and so on
    };
}
To fill the OpenGL buffers, you wrap the float arrays in FloatBuffers and then pass them to the glBufferData calls.
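The wrapping step can be sketched like this (BufferUtil is just an illustrative name; the important part is that the buffer is direct and in native byte order, which OpenGL ES requires):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

public class BufferUtil {
    // Wraps a float array in a direct, native-order FloatBuffer,
    // which is the form the gl*Pointer / glBufferData calls expect.
    public static FloatBuffer toFloatBuffer(float[] data) {
        ByteBuffer bb = ByteBuffer.allocateDirect(data.length * 4); // 4 bytes per float
        bb.order(ByteOrder.nativeOrder());
        FloatBuffer fb = bb.asFloatBuffer();
        fb.put(data);
        fb.position(0); // rewind so GL reads from the start
        return fb;
    }
}
```

You would then do something like `FloatBuffer verts = BufferUtil.toFloatBuffer(ModelName.Verts);` and hand that buffer to the draw or buffer-upload call.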
Making the script from that article emit Java instead requires changing the code from line 492 on so that it emits the package and class declarations, appends an f after each number, and adds the trailing closing brace.
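If you decide you do want to parse at runtime after all, the same index expansion can be done in plain Java. This is only a sketch under narrow assumptions: it handles just `v`, `vn` and triangulated `f v//n` faces (no texture coordinates, no quads), and the class name ObjLoader is made up:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.Reader;
import java.util.ArrayList;
import java.util.List;

public class ObjLoader {
    public float[] verts;   // 3 floats per emitted vertex
    public float[] normals; // 3 floats per emitted vertex

    // Expands "f v//n v//n v//n" faces so every attribute shares one index,
    // which is what glVertexPointer/glNormalPointer expect.
    public static ObjLoader load(Reader in) throws IOException {
        List<float[]> v = new ArrayList<>(), vn = new ArrayList<>();
        List<Float> outV = new ArrayList<>(), outN = new ArrayList<>();
        BufferedReader r = new BufferedReader(in);
        String line;
        while ((line = r.readLine()) != null) {
            String[] t = line.trim().split("\\s+");
            if (t[0].equals("v")) {
                v.add(new float[]{Float.parseFloat(t[1]), Float.parseFloat(t[2]), Float.parseFloat(t[3])});
            } else if (t[0].equals("vn")) {
                vn.add(new float[]{Float.parseFloat(t[1]), Float.parseFloat(t[2]), Float.parseFloat(t[3])});
            } else if (t[0].equals("f")) {
                for (int i = 1; i <= 3; i++) {            // triangulated faces only
                    String[] idx = t[i].split("/");
                    float[] p = v.get(Integer.parseInt(idx[0]) - 1);  // OBJ indices are 1-based
                    float[] n = vn.get(Integer.parseInt(idx[2]) - 1);
                    for (float f : p) outV.add(f);
                    for (float f : n) outN.add(f);
                }
            }
        }
        ObjLoader m = new ObjLoader();
        m.verts = toArray(outV);
        m.normals = toArray(outN);
        return m;
    }

    private static float[] toArray(List<Float> l) {
        float[] a = new float[l.size()];
        for (int i = 0; i < a.length; i++) a[i] = l.get(i);
        return a;
    }
}
```

The resulting flat arrays can then be wrapped in FloatBuffers exactly like the generated-header arrays.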
I am currently trying to convert the drawing methods of my 2D Java game to OpenGL using JOGL, because native Java seems rather slow for drawing high-res images in rapid succession. Now I want to use a 16:9 aspect ratio, but the problem is that my image is stretched to the sides. Currently I am only drawing a white rotating quad to test this:
public void resize(GLAutoDrawable d, int width, int height) {
    GL2 gl = d.getGL().getGL2(); // get the OpenGL 2 graphics context
    gl.glViewport(0, 0, width, height);
    gl.glMatrixMode(GL2.GL_PROJECTION);
    gl.glOrtho(-1.0f, 1.0f, -1.0f, 1.0f, -1.0f, 1.0f);
}
public void display(GLAutoDrawable d) {
    GL2 gl = d.getGL().getGL2(); // get the OpenGL 2 graphics context
    gl.glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    gl.glClear(GL.GL_COLOR_BUFFER_BIT);
    gl.glMatrixMode(GL2.GL_MODELVIEW);
    gl.glLoadIdentity();
    gl.glColor3f(1.0f, 1.0f, 1.0f);
    degree += 0.1f;
    gl.glRotatef(degree, 0.0f, 0.0f, 1.0f);
    gl.glBegin(GL2.GL_QUADS);
    gl.glVertex2f(-0.25f, 0.25f);
    gl.glVertex2f(0.25f, 0.25f);
    gl.glVertex2f(0.25f, -0.25f);
    gl.glVertex2f(-0.25f, -0.25f);
    gl.glEnd();
    gl.glRotatef(-degree, 0.0f, 0.0f, 1.0f);
    gl.glFlush();
}
I know that you can somehow address this problem by using glOrtho(), and I have tried many different values, but none of them produced an unstretched image. How do I have to use it? Or is there another simple solution?
The projection matrix transforms all vertex data from the eye coordinates to the clip coordinates.
Then, these clip coordinates are also transformed to the normalized device coordinates (NDC) by dividing with w component of the clip coordinates.
The normalized device coordinates are in the range (-1, -1, -1) to (1, 1, 1).
With the orthographic projection, the eye space coordinates are linearly mapped to the NDC.
If the viewport is rectangular rather than square, this has to be accounted for when mapping the coordinates:
float aspect = (float)width/height;
gl.glOrtho(-aspect, aspect, -1.0f, 1.0f, -1.0f, 1.0f);
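To see what those two lines compute, the arithmetic can be isolated into a plain helper (orthoBounds is an illustrative name; the returned values are exactly what you would pass as the first four arguments of glOrtho). For a 16:9 viewport the horizontal extent becomes roughly ±1.78 while the vertical stays ±1, so a unit quad keeps its shape:

```java
public class OrthoAspect {
    // Computes {left, right, bottom, top} for glOrtho so that one
    // vertical unit covers the same number of pixels as one horizontal unit.
    public static float[] orthoBounds(int width, int height) {
        float aspect = (float) width / height;
        return new float[]{-aspect, aspect, -1.0f, 1.0f};
    }
}
```

In the resize callback you would call `float[] b = OrthoAspect.orthoBounds(width, height);` and then `gl.glOrtho(b[0], b[1], b[2], b[3], -1.0f, 1.0f);`.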
I am trying to create a simple JOGL program, and have run into a problem. I just imported all of the necessary packages into my class file, however now, every time I use the glTranslate function, it is being flagged red as an error, for example at the end of this block of code.
public void lab2a(GLAutoDrawable drawable) {
    GL gl = drawable.getGL();
    gl.glTranslatef(-1.0f, -1.0f, -6f);
    // Drawing first rectangle (blue)
    gl.glBegin(GL.GL_QUADS);
    gl.glColor3f(0f, 0f, 1f); // sets color to blue
    gl.glVertex3f(-0.5f, 0.5f, 0.0f);  // Top left vertex
    gl.glVertex3f(0.5f, 0.5f, 0.0f);   // Top right vertex
    gl.glVertex3f(-0.5f, -0.5f, 0.0f); // Bottom left vertex
    gl.glVertex3f(0.5f, -0.5f, 0.0f);  // Bottom right vertex
    gl.glEnd();
    gl.glTranslate(1.1f, 0f, 0f);
The flag reads: "cannot find symbol", and is present for each use of glTranslate I use. Does anybody have any idea how to fix this?
Your source code is obsolete: it uses JOGL 1, whose maintenance stopped in 2010.
Please switch to JOGL 2; go to jogamp.org. Replace GL.GL_QUADS with GL2.GL_QUADS, replace GL gl = drawable.getGL() with GL2 gl = drawable.getGL().getGL2(), and so on. Look at the API documentation there.
I have a small issue with IntelliJ: its auto-formatting makes my code less readable rather than more.
Hence the following code excerpt:
private static float _triangleCoords[] = {
     0.0f,  0.62f, 0f,
    -0.5f, -0.31f, 0f,
     0.5f, -0.31f, 0f
};
If I run Java auto-formatting, IntelliJ formats it like this:
private static float _triangleCoords[] = {
    0.0f, 0.62f, 0f,
    -0.5f, -0.31f, 0f,
    0.5f, -0.31f, 0f
};
I don't want it to strip away the spaces since they align the floats nicely. I also don't want to write + signs in front of positive floats just to keep it that way.
How can I turn this formatting off?
You can disable reformatting for blocks of code with comments, e.g.:
private static float _triangleCoords[] = {
    // @formatter:off
     0.0f,  0.62f, 0f,
    -0.5f, -0.31f, 0f,
     0.5f, -0.31f, 0f
    // @formatter:on
};
Go to Preferences -> Code Style -> General -> Formatter Control to enable format control with comments. You can also define a macro/template to surround a block of code with formatter comments to make life a bit easier.
I want to put more than one texture on a cube, as easily as possible. There should be a different texture (a picture like the one loaded in the loadTexture method, android.png) on each side of the cube. Part of my code:
public class Square2 {
    private FloatBuffer vertexBuffer;  // buffer holding the vertices
    private FloatBuffer vertexBuffer2; // buffer holding the vertices
    // [and so on for all 6 sides]

    private float vertices[] = {
        -1.0f, -1.0f, -1.0f, // V1 - bottom left
        -1.0f,  1.0f, -1.0f, // V2 - top left
         1.0f, -1.0f, -1.0f, // V3 - bottom right
         1.0f,  1.0f,  1.0f  // V4 - top right
    };

    private float vertices2[] = {
         1.0f, -1.0f, -1.0f, // V1 - bottom left
         1.0f,  1.0f, -1.0f, // V2 - top left
         1.0f, -1.0f,  1.0f, // V3 - bottom right
         1.0f,  1.0f,  1.0f  // V4 - top right
    };
    // [for all 6 sides too]

    private FloatBuffer textureBuffer;  // buffer holding the texture coordinates
    private FloatBuffer textureBuffer2; // buffer holding the texture coordinates
    // [and so on for all 6 sides]
    private float texture[] = {
        // Mapping coordinates for the vertices
        0.0f, 1.0f, // top left     (V2)
        0.0f, 0.0f, // bottom left  (V1)
        1.0f, 1.0f, // top right    (V4)
        1.0f, 0.0f  // bottom right (V3)
    };

    private float texture2[] = {
        // Mapping coordinates for the vertices
        0.0f, 1.0f, // top left     (V2)
        0.0f, 0.0f, // bottom left  (V1)
        1.0f, 1.0f, // top right    (V4)
        1.0f, 0.0f  // bottom right (V3)
    };
    // [I don't really understand the texture array: is one enough for all sides, even if I want different textures?]

    /** The texture pointer */
    private int[] textures = new int[6];
    public Square2() {
        // a float has 4 bytes, so we allocate 4 bytes per coordinate
        ByteBuffer byteBuffer = ByteBuffer.allocateDirect(vertices.length * 4);
        byteBuffer.order(ByteOrder.nativeOrder());
        // allocate the memory from the byte buffer
        vertexBuffer = byteBuffer.asFloatBuffer();
        // fill the vertexBuffer with the vertices
        vertexBuffer.put(vertices);
        // set the cursor position to the beginning of the buffer
        vertexBuffer.position(0);

        ByteBuffer byteBuffer2 = ByteBuffer.allocateDirect(vertices2.length * 4);
        byteBuffer2.order(ByteOrder.nativeOrder());
        // allocate the memory from the byte buffer
        vertexBuffer2 = byteBuffer2.asFloatBuffer();
        // fill the vertexBuffer with the vertices
        vertexBuffer2.put(vertices2);
        // set the cursor position to the beginning of the buffer
        vertexBuffer2.position(0);
        // [and so on for all 6 sides]

        byteBuffer = ByteBuffer.allocateDirect(texture.length * 4);
        byteBuffer.order(ByteOrder.nativeOrder());
        textureBuffer = byteBuffer.asFloatBuffer();
        textureBuffer.put(texture);
        textureBuffer.position(0);
        // [and so on]
    }
    /**
     * Load the texture for the square
     *
     * @param gl
     * @param context
     */
    public void loadGLTexture(GL10 gl, Context context) {
        // loading texture
        Bitmap bitmap = BitmapFactory.decodeResource(context.getResources(),
                R.drawable.android);

        // generate one texture pointer
        gl.glGenTextures(1, textures, 0);
        // ...and bind it to our array
        gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[0]);

        // create nearest filtered texture
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_NEAREST);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);

        // Different possible texture parameters, e.g. GL10.GL_CLAMP_TO_EDGE
        //gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_REPEAT);
        //gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_REPEAT);

        // Use Android GLUtils to specify a two-dimensional texture image from our bitmap
        GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);

        // Clean up
        bitmap.recycle();
    }
    /** The draw method for the square with the GL context */
    public void draw(GL10 gl) {
        // bind the previously generated texture
        gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[0]);
        gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[1]);

        // Point to our buffers
        gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
        gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);

        // Set the face rotation
        gl.glFrontFace(GL10.GL_CW);

        // Point to our vertex buffer (1, front)
        gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer);
        gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer);
        // Draw the vertices as triangle strip
        gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, vertices.length / 3);

        // 2, right
        gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer2);
        gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer2);
        // Draw the vertices as triangle strip
        gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, vertices2.length / 3);
        // [and so on]

        // Disable the client state before leaving
        gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
        gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
    }
}
Unfortunately this just gives me the same texture on all sides. How can I get a different texture on each side? Do I have to call the loadTexture method more than once, or edit it? I thought about not recycling the bitmap; would that help?
There are many ways to achieve this. The most straightforward would be to load all the textures, then bind a different one before each side is drawn (so all you are missing is a glBindTexture before each glDrawArrays). For this case you do not need multiple texture coordinate buffers, and you wouldn't even need multiple vertex buffers: you can create one square buffer and position (rotate) it with matrices for each side, plus bind the correct texture.
Another approach is to use a single texture onto which you load all 6 images via glTexSubImage2D (each at a different, non-overlapping part of the texture) and then use different texture coordinates for each side of the cube. That way you can create a single vertex/texture buffer for the whole cube and draw it with a single glDrawArrays call, though in your specific case that would still be no fewer than 4x6 = 24 vertices.
As you said, you do not understand texture coordinate arrays. Texture coordinates are coordinates corresponding to a relative position on the texture: (0, 0) is the top-left part of the texture (image) and (1, 1) is the bottom-right part. If the texture contained 4 images in a 2x2 grid and you wanted to use, for example, the bottom-left one, your texture coordinates would look like this:
private float texture[] = {
    0.0f, 1.0f,
    0.0f, 0.5f,
    0.5f, 1.0f,
    0.5f, 0.5f
};
As for coordinate values larger than 1.0 or smaller than 0.0: they can be used either to clamp to the edge of the texture or to repeat it (both are possible texture parameters, usually set when loading the texture). As for the number of texture coordinates, there should be at least as many as the last parameter of glDrawArrays, or as many as the largest index + 1 when using glDrawElements. So in your specific case you can have only one texture coordinate buffer, unless you change your mind at some point and want one of the sides to show a rotated image, solving that by changing the texture coordinates.
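To generalize that 2x2 example: for an atlas of cols x rows equally sized tiles, the coordinate block can be computed instead of hard-coded. A minimal sketch (tileCoords is an invented name; it emits the corners in the same (u0,v1), (u0,v0), (u1,v1), (u1,v0) order as the array above, so tileCoords(0, 1, 2, 2) reproduces exactly those eight values):

```java
public class AtlasCoords {
    // Texture coordinates for tile (col, row) in a cols x rows atlas,
    // with (0, 0) at the top-left of the image as described above.
    public static float[] tileCoords(int col, int row, int cols, int rows) {
        float u0 = (float) col / cols, u1 = (float) (col + 1) / cols;
        float v0 = (float) row / rows, v1 = (float) (row + 1) / rows;
        return new float[]{u0, v1, u0, v0, u1, v1, u1, v0};
    }
}
```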
I'm trying to implement a blurring mechanic on a java game. How do I create a blur effect on runtime?
Google "Gaussian Blur", try this: http://www.jhlabs.com/ip/blurring.html
Read about/Google "Convolution Filters". It's a method of changing a pixel's value based on the values of the pixels around it, so apart from blurring you can also do image sharpening and line-finding.
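If you are on plain Java 2D, the standard library already ships a convolution filter: java.awt.image.ConvolveOp. This sketch applies a 3x3 box blur, where each output pixel becomes the average of its 3x3 neighbourhood:

```java
import java.awt.image.BufferedImage;
import java.awt.image.ConvolveOp;
import java.awt.image.Kernel;

public class Blur {
    // Applies a 3x3 box blur: every kernel weight is 1/9, so each
    // output pixel is the average of the surrounding 3x3 pixels.
    public static BufferedImage boxBlur(BufferedImage src) {
        float ninth = 1.0f / 9.0f;
        float[] kernel = {
            ninth, ninth, ninth,
            ninth, ninth, ninth,
            ninth, ninth, ninth,
        };
        ConvolveOp op = new ConvolveOp(new Kernel(3, 3, kernel), ConvolveOp.EDGE_NO_OP, null);
        return op.filter(src, null);
    }
}
```

For a stronger or smoother blur you can use a larger kernel with Gaussian weights, or apply the box blur several times.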
If you are doing java game development, I'm willing to bet you are using java2d.
You want to create a convolution filter like so:
// Create the kernel.
float[] kernelData = { 0.0F, -1.0F,  0.0F,
                      -1.0F,  5.0F, -1.0F,
                       0.0F, -1.0F,  0.0F };
KernelJAI kernel = new KernelJAI(3, 3, kernelData);

// Create the convolve operation.
RenderedOp blurredImage = JAI.create("convolve", originalImage, kernel);
You can find more information at: http://java.sun.com/products/java-media/jai/forDevelopers/jai1_0_1guide-unc/Image-enhance.doc.html#51172 (which is where the code is from too)