JOGL - lighting/camera - java

I've added a light source to my JOGL project, which seems to work quite well when the object is stationary. When I move the camera it gradually gets darker as it rotates, which is what I'd expect, but as soon as it rotates 90 degrees the screen goes completely black. Does anyone know why this is? Do I need to add another light source for the other side? I was hoping it would act like the sun, i.e. light up the whole scene but be slightly darker when the camera is on the other side of the object.
Lighting
float light_ambient[] = { 0.0f, 0.0f, 0.0f, 1.0f };
float light_diffuse[] = { 1.0f, 1.0f, 1.0f, 1.0f };
float light_specular[] = { 1.0f, 1.0f, 1.0f, 1.0f };
float light_position[] = { 1.0f, 1.0f, 1.0f, 0.0f }; // w = 0 makes this a directional light, like the sun
gl.glLightfv(GL2.GL_LIGHT0, GL2.GL_AMBIENT, light_ambient, 0);
gl.glLightfv(GL2.GL_LIGHT0, GL2.GL_DIFFUSE, light_diffuse, 0);
gl.glLightfv(GL2.GL_LIGHT0, GL2.GL_SPECULAR, light_specular, 0);
gl.glLightfv(GL2.GL_LIGHT0, GL2.GL_POSITION, light_position, 0);
gl.glEnable(GL2.GL_LIGHTING);
gl.glEnable(GL2.GL_LIGHT0);
gl.glDepthFunc(GL.GL_LESS);
gl.glEnable(GL.GL_DEPTH_TEST);
Secondly, when the camera rotates, some of the shapes seem to deform and look like completely different shapes, i.e. cubes turning pincushion-like, sides being stretched an incredible amount, and it's making my whole object look slightly deformed. Is there an easy way to change this? I've tried messing with gluPerspective and that doesn't seem to change what I want either. Is there any way around this?

You have added diffuse and specular light to your scene, but these will not reach surfaces that are facing away from the light source. You could add some ambient light (currently set to 0, 0, 0 in your code snippet) so that all surfaces receive some illumination.
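For example, a minimal tweak along the lines of the snippet above would do it (the ambient values here are arbitrary; anything above zero will lift the dark side):
float light_ambient[] = { 0.3f, 0.3f, 0.3f, 1.0f }; // was 0, 0, 0
gl.glLightfv(GL2.GL_LIGHT0, GL2.GL_AMBIENT, light_ambient, 0);
// Or raise the global ambient term of the lighting model instead:
float global_ambient[] = { 0.2f, 0.2f, 0.2f, 1.0f };
gl.glLightModelfv(GL2.GL_LIGHT_MODEL_AMBIENT, global_ambient, 0);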
As for the deformed shapes, that is really a separate question, and there is not enough detail given to know why this is happening.

Related

OpenGL 2D projection stretched by aspect ratio

I am currently trying to convert the drawing methods for my 2D Java game to OpenGL using JOGL, because native Java seems rather slow for drawing high-res images in rapid succession. Now I want to use a 16:9 aspect ratio, but the problem is that my image is stretched to the sides. Currently I am only drawing a white rotating quad for testing this:
public void resize(GLAutoDrawable d, int width, int height) {
    GL2 gl = d.getGL().getGL2(); // get the OpenGL 2 graphics context
    gl.glViewport(0, 0, width, height);
    gl.glMatrixMode(GL2.GL_PROJECTION);
    gl.glOrtho(-1.0f, 1.0f, -1.0f, 1.0f, -1.0f, 1.0f);
}
public void display(GLAutoDrawable d) {
    GL2 gl = d.getGL().getGL2(); // get the OpenGL 2 graphics context
    gl.glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    gl.glClear(GL.GL_COLOR_BUFFER_BIT);
    gl.glMatrixMode(GL2.GL_MODELVIEW);
    gl.glLoadIdentity();
    gl.glColor3f(1.0f, 1.0f, 1.0f);
    degree += 0.1f;
    gl.glRotatef(degree, 0.0f, 0.0f, 1.0f);
    gl.glBegin(GL2.GL_QUADS);
    gl.glVertex2f(-0.25f, 0.25f);
    gl.glVertex2f(0.25f, 0.25f);
    gl.glVertex2f(0.25f, -0.25f);
    gl.glVertex2f(-0.25f, -0.25f);
    gl.glEnd();
    gl.glRotatef(-degree, 0.0f, 0.0f, 1.0f);
    gl.glFlush();
}
I know that you can somehow address this problem by using glOrtho(), and I have tried many different values for it now, but none of them produced an unstretched image. How do I have to use it? Or is there another simple solution for this?
The projection matrix transforms all vertex data from eye coordinates to clip coordinates. These clip coordinates are then transformed to normalized device coordinates (NDC) by dividing by the w component of the clip coordinates. The normalized device coordinates are in the range (-1, -1, -1) to (1, 1, 1). With an orthographic projection, the eye space coordinates are mapped linearly to the NDC. If the viewport is rectangular rather than square, that aspect ratio has to be taken into account when setting up the mapping:
float aspect = (float)width/height;
gl.glOrtho(-aspect, aspect, -1.0f, 1.0f, -1.0f, 1.0f);
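Putting that into the resize callback from the question, a minimal sketch could look like this (same method signature as the question; the glLoadIdentity() call keeps repeated resizes from accumulating onto the projection matrix):
public void resize(GLAutoDrawable d, int width, int height) {
    GL2 gl = d.getGL().getGL2(); // get the OpenGL 2 graphics context
    float aspect = (float) width / height;
    gl.glViewport(0, 0, width, height);
    gl.glMatrixMode(GL2.GL_PROJECTION);
    gl.glLoadIdentity(); // start from a clean projection each time
    gl.glOrtho(-aspect, aspect, -1.0f, 1.0f, -1.0f, 1.0f);
    gl.glMatrixMode(GL2.GL_MODELVIEW); // leave the modelview matrix active for display()
}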

How to properly draw a texture with opacity in Android OpenGL?

I'm trying to draw 5 different textures on the screen, but I can't seem to make the alpha work. Textures are rendered fine if there is no alpha, but when there is, there's a really weird "effect".
Ok, first off I call draw on all 5 of my textures with opacity (in the given order): 0.5, 1.0, 0.5, 1.0, 0.5. But when I start the app I actually get 1.0, 0.5, 1.0, 0.5, 1.0, like there's an offset of 1. That's not all that's weird. About half a second after my app launches, the very first texture gets an opacity of 0.5. Even weirder is that this does not happen on the second draw, but somewhere between the first and second call to draw. How is this even possible?
This is the final result (it should be 0.5,1.0,0.5,1.0,0.5, but as you can see it's not even close):
Now for some code (I've skipped some irrelevant parts).
Creating the surface:
public void onSurfaceCreated(GL10 gl, EGLConfig config)
{
    // Textures are being loaded in here, which is just fine (code skipped)
    gl.glEnable(GL10.GL_TEXTURE_2D);
    gl.glShadeModel(GL10.GL_SMOOTH);
    gl.glClearColor(1.0f, 0.0f, 0.0f, 1.0f);
    gl.glClearDepthf(1.0f);
    gl.glEnable(GL10.GL_DEPTH_TEST);
    gl.glDepthFunc(GL10.GL_LEQUAL);
    gl.glEnable(GL10.GL_BLEND);
    gl.glBlendFunc(GL10.GL_SRC_ALPHA, GL10.GL_ONE_MINUS_SRC_ALPHA);
}
Updating the surface:
public void onSurfaceChanged(GL10 gl, int width, int height)
{
    // Here I pass width & height to my texture objects for future reference (code skipped)
    gl.glViewport(0, 0, width, height);
    gl.glOrthof(0, width, height, 0, 0, 1);
    gl.glMatrixMode(GL10.GL_MODELVIEW);
    gl.glLoadIdentity();
}
Drawing:
public void onDrawFrame(GL10 gl)
{
    gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
    gl.glLoadIdentity();
    // Here I call draw of each texture object
}
Draw method of the texture object:
public void draw(GL10 gl)
{
    gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[0]);
    gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
    gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
    gl.glFrontFace(GL10.GL_CW);
    gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer);
    gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer);
    gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, vertices.length / 3);
    gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
    gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
    gl.glTexEnvf(GL10.GL_TEXTURE_ENV, GL10.GL_TEXTURE_ENV_MODE, GL10.GL_MODULATE);
    gl.glColor4f(1.0f, 1.0f, 1.0f, this.alpha); // This is where the magic doesn't happen (:
}
I've skipped the part where I create the vertex/texture buffers and load the textures; that part follows this tutorial and seems to work just fine.
So... what do you think I am missing? What could be the cause of this weird issue?
If I had to guess, I'd say it's some weird hardcore OpenGL issue/bug, or more likely a flag I forgot to raise or something like that. It can't be the order of drawing or anything like that; I've double-checked it all. I've also tried setting, say, 0.5f as the opacity of every texture and it works perfectly; the problem only happens when the opacity of the textures differs from one to another. I also don't think the weird between-draw flicker can be caused by user code, it's got to be some OpenGL weirdness.
I must point out that I am using a 3rd party library to pack all of this GL magic into a live wallpaper, it's this awesome lib: GLWallpaperService
Why is the place where the magic happens not before the draw call? The default colour value is (1, 1, 1, 1), so at first you start with an alpha of 1.0, then you draw the texture and set the alpha to 0.5, and the 2nd texture is drawn with this value still set... Ergo the strange offset effect.
When the last texture is drawn you still have an alpha of 1.0 (from the one before) and only then set it to 0.5, which is what is used to draw the first texture on the next refresh. Since the alpha changes from 1.0 to 0.5 on the topmost element, it flickers. In the end you get exactly the result seen in the image you posted.
So this truly is where the magic happens:
gl.glColor4f(1.0f, 1.0f, 1.0f, this.alpha); // This is where the magic doesn't happen
So try removing that line from its current position and arranging the calls like this:
gl.glColor4f(1.0f, 1.0f, 1.0f, this.alpha); // This is where the magic does happen
gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, vertices.length / 3);
gl.glColor4f(1.0f, 1.0f, 1.0f, 1.0f); // Low mana, stop the magic
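Applied to the draw method from the question, the reordered calls could look roughly like this (a sketch using the same buffers, texture handle and alpha field as before):
public void draw(GL10 gl)
{
    gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[0]);
    gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
    gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
    gl.glFrontFace(GL10.GL_CW);
    gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer);
    gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer);
    gl.glTexEnvf(GL10.GL_TEXTURE_ENV, GL10.GL_TEXTURE_ENV_MODE, GL10.GL_MODULATE);
    gl.glColor4f(1.0f, 1.0f, 1.0f, this.alpha);   // set the alpha before drawing
    gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, vertices.length / 3);
    gl.glColor4f(1.0f, 1.0f, 1.0f, 1.0f);         // reset so the next texture starts from a known state
    gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
    gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
}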

Java openGL - drawing a 3D object

I'm looking to draw a 3D object in OpenGL, but what's the best way to approach this? I was thinking of drawing the side profile of it in 2D and then maybe fleshing it out to become 3D, but is that possible? I think it would be easier to do it that way than to go straight into 3D, but if you can't, then I'd just be wasting my time.
I also can't figure out how to add the sky, and maybe even the sea with a reflection. Is this easily done?
gl.glClear(GL.GL_COLOR_BUFFER_BIT);
gl.glColor3f(1.0f, 1.0f, 1.0f);
gl.glPushMatrix();
gl.glTranslatef(-1.0f, 0.0f, 0.0f);
gl.glRotatef((float) shoulder, 0.0f, 0.0f, 1.0f);
gl.glTranslatef(1.0f, 0.0f, 0.0f);
// gl.glPushMatrix();
gl.glScalef(2.0f, 0.4f, 1.0f);
glut.glutWireCube(1.0f);
// gl.glPopMatrix();
gl.glTranslatef(1.0f, 0.0f, 0.0f);
gl.glRotatef((float) elbow, 0.0f, 0.0f, 1.0f);
gl.glTranslatef(1.0f, 0.0f, 0.0f);
// gl.glPushMatrix();
gl.glScalef(2.0f, 0.4f, 1.0f);
glut.glutWireCube(1.0f);
// gl.glPopMatrix();
gl.glTranslatef(1.0f, 1.0f, 1.0f);
gl.glRotatef((float) hand, 0.0f, 0.0f, 1.0f);
gl.glTranslatef(1.0f, 0.0f, 0.0f);
// gl.glPushMatrix();
gl.glScalef(2.0f, 0.4f, 1.0f);
glut.glutWireCube(1.0f);
// gl.glPopMatrix();
I've just been trying random numbers to try and get it to work, but no such luck so far!
Ok, first: reflections can be hard depending on how you want to do them. You will definitely need to learn more OpenGL before attempting something like that. Second, 3D objects require some more matrix setup; this is an example from the init method in my 3D game:
private void initGl() {
    glViewport(0, 0, Display.getWidth(), Display.getHeight());
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    // cast to float, otherwise integer division gives a wrong aspect ratio
    GLU.gluPerspective(45.0f, (float) Display.getWidth() / Display.getHeight(), 1.0f, 100.0f);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    glClearDepth(1.0f);
    glDepthFunc(GL_LEQUAL);
    glEnable(GL_DEPTH_TEST);
    glShadeModel(GL_SMOOTH);
    glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
    glEnable(GL_FOG);
    glFogi(GL_FOG_MODE, GL_EXP2);
    glFogf(GL_FOG_DENSITY, density);
    glHint(GL_FOG_HINT, GL_FASTEST); // GL_FOG_HINT is the valid hint target (not GL_FOG_DENSITY)
    glEnable(GL_CULL_FACE);
    glCullFace(GL_BACK);
}
You will also need to clear the buffer like this before drawing:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
You are almost correct when you say you would just flesh out a 2D object. If we were still using immediate mode (glBegin()/glEnd()), your approach would be correct. However, immediate mode is deprecated now and we usually use VBOs. I would suggest going on YouTube and searching for theCodingUniverse if you are using LWJGL; he has a video on VBOs and advanced rendering, and it's how I learned them!
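For reference, a bare-bones LWJGL 2 style sketch of what using a VBO looks like with the fixed-function pipeline (the quad data and buffer names here are made up for illustration):
import static org.lwjgl.opengl.GL11.*;
import static org.lwjgl.opengl.GL15.*;

import java.nio.FloatBuffer;
import org.lwjgl.BufferUtils;

// Setup: upload four vertices of a quad into a VBO.
FloatBuffer vertexData = BufferUtils.createFloatBuffer(12);
vertexData.put(new float[] {
        -0.5f, -0.5f, 0.0f,
         0.5f, -0.5f, 0.0f,
         0.5f,  0.5f, 0.0f,
        -0.5f,  0.5f, 0.0f });
vertexData.flip();

int vboId = glGenBuffers();
glBindBuffer(GL_ARRAY_BUFFER, vboId);
glBufferData(GL_ARRAY_BUFFER, vertexData, GL_STATIC_DRAW);

// Each frame: draw straight from the bound VBO instead of glBegin()/glEnd().
glBindBuffer(GL_ARRAY_BUFFER, vboId);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, 0L); // offset 0 into the bound buffer
glDrawArrays(GL_QUADS, 0, 4);
glDisableClientState(GL_VERTEX_ARRAY);
glBindBuffer(GL_ARRAY_BUFFER, 0);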
Good luck in the world of 3D. It's not easy at all, but it's (in my opinion) much more satisfying than 2D when you get something working.
Also, consider investing in the Red Book (the OpenGL Programming Guide); it covers all of this in depth.

Converting screen coordinates to world coordinates

I'm getting screen coordinates using this:
@Override
public boolean onTouchEvent(MotionEvent ev) {
    x = ev.getX(0);
    y = ev.getY(0);
    return true;
}
And these are the vertices of my OpenGL ES 1.0 square:
private float vertices[] = {
        -1.0f, -1.0f, 0.0f,  // V1 - bottom left
        -1.0f,  1.0f, 0.0f,  // V2 - top left
         1.0f, -1.0f, 0.0f,  // V3 - bottom right
         1.0f,  1.0f, 0.0f   // V4 - top right
};
Everybody who has worked with OpenGL knows that if I just pasted the x and y variables in place of the vertex values, I would get absolute nonsense. My question is: what formula should I use to convert the screen coordinates x and y to world coordinates, so I can use them to position my square at the touched point?
EDIT:
Oops, I forgot to say that it's a 2D game...
Actually, I found a way myself, and glUnProject is not the best way on the Android platform...
http://magicscrollsofcode.blogspot.com/2010/10/3d-picking-in-android.html
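For a purely 2D orthographic setup, the conversion can also be done with a simple linear remap of the touch position into the glOrthof() bounds; a rough sketch (the parameter names here are assumptions, pass in whatever your projection actually uses):
// Maps a touch position (pixels, origin at the top-left) into world coordinates
// for a projection set up with glOrthof(left, right, bottom, top, near, far).
float[] screenToWorld(float touchX, float touchY, int screenWidth, int screenHeight,
                      float left, float right, float bottom, float top) {
    float worldX = left + (touchX / screenWidth) * (right - left);
    float worldY = top - (touchY / screenHeight) * (top - bottom); // screen y grows downwards
    return new float[] { worldX, worldY };
}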
There is a function called gluUnProject that can do this for you. The link is here:
http://www.opengl.org/sdk/docs/man/xhtml/gluUnProject.xml
By the way, the screen coordinates correspond to a 3D line passing from the centre of the camera through the screen point (on the image plane).
The modelview, projection and viewport inputs can be obtained by querying OpenGL for the current matrices. Refer to the same link (the function calls are specified there).
Other than the x and y screen parameters, you need a depth (z) parameter. You can use the depth range to place the square in a particular z plane, or give it a default value, but make sure it is inside the visible region.
Once you receive the object coordinates, treat them as the centre of the square and draw a square of the required size.
Satish
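Putting those steps together on Android, a rough sketch might look like this (it assumes a GL11 context so the current matrices can be read back with glGetFloatv; on plain ES 1.0 you would have to track the matrices yourself):
import android.opengl.GLU;
import javax.microedition.khronos.opengles.GL11;

// screenX/screenY come from onTouchEvent; winZ = 0 unprojects onto the near plane.
float[] touchToWorld(GL11 gl, float screenX, float screenY) {
    float[] modelview  = new float[16];
    float[] projection = new float[16];
    int[]   viewport   = new int[4];

    gl.glGetFloatv(GL11.GL_MODELVIEW_MATRIX, modelview, 0);
    gl.glGetFloatv(GL11.GL_PROJECTION_MATRIX, projection, 0);
    gl.glGetIntegerv(GL11.GL_VIEWPORT, viewport, 0);

    // OpenGL's window origin is bottom-left, Android's touch origin is top-left.
    float winY = viewport[3] - screenY;

    float[] obj = new float[4];
    GLU.gluUnProject(screenX, winY, 0.0f,
            modelview, 0, projection, 0, viewport, 0, obj, 0);

    // Divide by w in case the implementation does not do the perspective divide itself.
    return new float[] { obj[0] / obj[3], obj[1] / obj[3], obj[2] / obj[3] };
}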

How do I blur an image?

I'm trying to implement a blur mechanic in a Java game. How do I create a blur effect at runtime?
Google "Gaussian Blur", try this: http://www.jhlabs.com/ip/blurring.html
Read about/Google "convolution filters"; it's a method of changing a pixel's value based on the values of the pixels around it. So apart from blurring, you can also do image sharpening and line finding.
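If you are in plain Java2D, a small box blur can be done with java.awt.image.ConvolveOp; a minimal sketch (originalImage is just a placeholder for your BufferedImage):
import java.awt.image.BufferedImage;
import java.awt.image.ConvolveOp;
import java.awt.image.Kernel;

// 3x3 box blur: each output pixel is the average of itself and its 8 neighbours.
float ninth = 1.0f / 9.0f;
float[] blurKernel = {
        ninth, ninth, ninth,
        ninth, ninth, ninth,
        ninth, ninth, ninth
};
ConvolveOp blur = new ConvolveOp(new Kernel(3, 3, blurKernel), ConvolveOp.EDGE_NO_OP, null);
BufferedImage blurred = blur.filter(originalImage, null);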
If you are doing Java game development, I'm willing to bet you are using Java2D.
You want to create a convolution filter, like so (this snippet uses JAI):
// Create a 3x3 box-blur kernel (all weights 1/9 so overall brightness is preserved).
float[] kernelData = {
        1/9f, 1/9f, 1/9f,
        1/9f, 1/9f, 1/9f,
        1/9f, 1/9f, 1/9f
};
KernelJAI kernel = new KernelJAI(3, 3, kernelData);

// Create the convolve operation.
RenderedOp blurredImage = JAI.create("convolve", originalImage, kernel);
You can find more information at: http://java.sun.com/products/java-media/jai/forDevelopers/jai1_0_1guide-unc/Image-enhance.doc.html#51172 (which is where the code is from too)
