How do I rotate a triangle, given that glRotated is deprecated in modern OpenGL? Before deprecation:
gl.glRotated(i, 0, 0, 1);
gl.glBegin(GL2.GL_TRIANGLES);
gl.glVertex3f(0.0f, 1.0f, 0.0f );
gl.glVertex3f(-1.0f, -1.0f, 0.0f );
gl.glVertex3f(1.0f, -1.0f, 0.0f );
gl.glEnd();
I tried the following, but it only produces a translation:
double rotCos = Math.cos(i);
double rotSine = Math.sin(i);
gl.glBegin(GL2.GL_TRIANGLES);
gl.glVertex3d(0.0f + rotSine, 1.0f + rotCos, 0.0f );
gl.glVertex3d(-1.0f + rotSine, -1.0f + rotCos, 0.0f );
gl.glVertex3d(1.0f + rotSine, -1.0f + rotCos, 0.0f );
gl.glEnd();
How can I reproduce the math behind glRotated?
What you did is not the idea behind the deprecation of those functions. The deprecation also covers glBegin, glVertex and glEnd, so if you're still using those, you're missing the point.
What you should do is implement a vertex shader in which you perform the usual vertex transformation steps, i.e. multiply the vertex first by a modelview matrix, then by a projection matrix. You can also contract modelview and projection into a single matrix, but that makes things a bit trickier when it comes to illumination.
The matrices are passed to OpenGL through so-called uniforms. To create the matrices, use a vector math library like GLM or Eigen (with the unofficial OpenGL module accompanying Eigen).
How can I reproduce the math behind glRotated?
The matrix glRotate() constructs is right there in the documentation.
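For illustration, here is the math the translated attempt was missing: rotating about the Z axis multiplies each vertex by sine/cosine terms instead of adding them. Also note that glRotated takes degrees, while Math.cos/Math.sin take radians. A minimal plain-Java sketch (no GL calls; the class and method names are just for illustration):

```java
public class RotateZ {
    // Rotate a point about the origin by 'angle' radians; this is the
    // upper-left 2x2 block of the matrix glRotated(deg, 0, 0, 1) builds.
    static double[] rotateZ(double x, double y, double angle) {
        double c = Math.cos(angle);
        double s = Math.sin(angle);
        // x' = x*cos(a) - y*sin(a),  y' = x*sin(a) + y*cos(a)
        return new double[] { x * c - y * s, x * s + y * c };
    }

    public static void main(String[] args) {
        // Rotating the triangle's top vertex (0, 1) by 90 degrees
        // lands it at (-1, 0), up to rounding.
        double[] p = rotateZ(0.0, 1.0, Math.toRadians(90));
        System.out.printf("%.3f %.3f%n", p[0], p[1]);
    }
}
```

Feeding each glVertex3d through a function like this (with the angle converted via Math.toRadians(i)) reproduces what glRotated(i, 0, 0, 1) did on the matrix stack.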
It is still OK to use deprecated functions, but if you want to use "modern" OpenGL (OpenGL 3 and higher) you are going to have to do things rather differently.
For one, you no longer use glBegin/glEnd; instead you draw everything using vertex buffer objects. Secondly, the fixed-function pipeline has been removed, so vertex and fragment shaders are required to draw anything. There are also a number of other changes (including the addition of vertex array objects and geometry shaders).
The way to do rotation in OpenGL 3 is to pass modelView and projection matrices in uniforms, and use them to compute vertex positions in the vertex shader.
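As a sketch of that route (the uniform names here are assumptions, not a fixed API): a vertex shader would declare something like uniform mat4 modelView; uniform mat4 projection; and output gl_Position = projection * modelView * vec4(position, 1.0);. On the Java side you build the matrix yourself (or with a math library) and upload it with glUniformMatrix4fv, which expects column-major order when its transpose argument is false. A minimal Z-rotation matrix builder, mirroring glRotated's degree input:

```java
public class RotationMatrix {
    // Column-major 4x4 rotation about the Z axis, in the layout
    // glUniformMatrix4fv expects when its transpose argument is false.
    static float[] rotationZ(float degrees) {
        float r = (float) Math.toRadians(degrees);
        float c = (float) Math.cos(r);
        float s = (float) Math.sin(r);
        return new float[] {
             c,  s, 0f, 0f,  // column 0
            -s,  c, 0f, 0f,  // column 1
             0f, 0f, 1f, 0f, // column 2
             0f, 0f, 0f, 1f  // column 3
        };
    }

    public static void main(String[] args) {
        // At 0 degrees the result is the identity matrix.
        System.out.println(java.util.Arrays.toString(rotationZ(0f)));
    }
}
```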
Ultimately, if you want to learn "modern" OpenGL, you are probably best off just looking online for tutorials on OpenGL 3.0 (or higher).
As you can see in the pictures (one unrotated, one rotated), the string rotates around its own origin.
Changing the raster position or translating does not change this at all. I tried both glutStrokeString and glutBitmapString. The code for the example:
gl.glColor4f(1.0f, 0.0f, 0.0f, 1.0f);
gl.glScalef(0.0015f, 0.0015f, 0.0015f);
gl.glRotatef(-angleHorizontal, 0, 1, 0);
glut.glutStrokeString(GLUT.STROKE_ROMAN, "ABCDEF");
glutBitmapCharacter:
You can't. It makes use of the (outdated, deprecated, legacy) OpenGL bitmap operations, which are always aligned to the pixel grid.
glutStrokeCharacter:
These are just regular line segments that are transformed through the fixed-function pipeline (or, in a compatibility profile, through an early-GLSL shader program that uses the built-in variables to access fixed-function state). In one of my codesamples programs (which I wrote to explain how the projection frustum works) I have a helper function that draws annotated arrows. You can find the full code here: https://github.com/datenwolf/codesamples/blob/master/samples/OpenGL/frustum/frustum.c (the relevant function starts at line 114).
I am trying to draw a trigonometric graph with OpenGL. This is a part of my code:
double x = -1;
gl.glColor3d(0, 0, 0);
gl.glBegin(gl.GL_LINES);
gl.glVertex2d(x, Math.sin(Math.toRadians(x))*0.01);
gl.glVertex2d(x+0.01, Math.sin(Math.toRadians(x+0.01))*0.01);
gl.glEnd();
x += 0.01;
This part is repeated in my full code. When this is executed, I see nothing. Can anybody tell me why this might be happening?
I do not see any loop in your code, and the scales are suspicious too (hard to tell without seeing your matrices). Try this instead:
double x;
gl.glColor3d(0.0, 0.0, 0.0);
gl.glBegin(gl.GL_LINE_STRIP);
for (x=-180.0;x<=180.0;x+=0.01)
gl.glVertex2d(x/180.0, Math.sin(Math.toRadians(x)));
gl.glEnd();
It uses GL_LINE_STRIP instead of GL_LINES, and the graph is scaled to <-1,+1>, which is most likely your viewing area...
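The reason the rescaling matters: with no matrices set, OpenGL clips everything outside the [-1, +1] clip volume, and raw x values in degrees span [-180, +180]. A tiny helper showing the mapping (pure math, no GL calls; names are illustrative):

```java
public class GraphScale {
    // Map an angle in degrees to a point in normalized device
    // coordinates: x in [-1, +1], y = sin(angle) also in [-1, +1].
    static double[] graphPoint(double degrees) {
        return new double[] { degrees / 180.0, Math.sin(Math.toRadians(degrees)) };
    }

    public static void main(String[] args) {
        double[] p = graphPoint(90.0);
        System.out.println(p[0] + " " + p[1]); // peak of the sine wave
    }
}
```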
wrong GL settings
There might also be other problems, as the comments suggest. On top of that, check the obvious:
gl.glDisable(gl.GL_DEPTH_TEST);
gl.glDisable(gl.GL_TEXTURE_2D);
in case they have been set from some other part of your code...
wrong camera settings
I do not see any matrix code, so I hope your camera is facing the correct direction and your graph is inside the frustum or whatever you use...
GL rendering
In case your GL code is missing or has the wrong structure, it should look like this (putting all the bullets together):
// clear the screen
gl.glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
gl.glClear(gl.GL_COLOR_BUFFER_BIT | gl.GL_DEPTH_BUFFER_BIT); // no need for the depth bit here, though
// here set matrices in case they are not persistent
gl.glMatrixMode(gl.GL_PROJECTION);
gl.glLoadIdentity();
gl.glMatrixMode(gl.GL_MODELVIEW);
gl.glLoadIdentity();
// here set/reset config the pipeline
gl.glDisable(gl.GL_DEPTH_TEST);
gl.glDisable(gl.GL_TEXTURE_2D);
gl.glLineWidth(1.0f);
// here render
double x;
gl.glColor3d(0.0, 0.0, 0.0);
gl.glBegin(gl.GL_LINE_STRIP);
for (x=-180.0;x<=180.0;x+=0.01)
gl.glVertex2d(x/180.0, Math.sin(Math.toRadians(x)));
gl.glEnd();
// (double) buffering
gl.glFlush();
SwapBuffers(hdc);
I do not code in Java, so instead of SwapBuffers(hdc); use whatever Java gives you for the same purpose. I hope I did not make any mistakes while translating to Java.
For more info see:
simple complete GL+VAO/VBO+GLSL+shaders example in C++
I have two vertex buffers, one for XY co-ordinate data and one for UV data, passed to a shader as attributes.
XY_Data (Two Triangles) : { 0f, 0f, 10f, 0f, 10f, 10f,
50f, 50f, 60f, 50f, 60f, 60f }
UV_Data (Single Triangle) : { 0f, 0f, .5f, 0f, 1f, 1f }
Is it possible to reuse the UV data for a single triangle when drawing two triangles, without having to extend the size of the buffer to match the XY Data?
In older versions (assuming your OpenGL-to-Java wrapper is a thin one) you got C-style undefined behavior: in other words, anything could happen, up to and including demons coming out of your nose. In practice it usually just means a crash or garbage on the screen.
In newer versions, if one of the robustness extensions is available, you won't get a crash, but the values passed to the shader may still be garbage, or just set to 0.
There is no way to reuse the UV data like that without using a sampler and slowing down the rendering. It's easier to just duplicate the data and be done with it.
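Following the duplicate-and-be-done advice, a small helper that tiles one triangle's UV pairs until the array covers every vertex (the names here are illustrative, not part of any API):

```java
public class UvRepeat {
    // Repeat a single triangle's UVs (6 floats) so there is one
    // (u, v) pair per vertex, matching the XY buffer's vertex count.
    static float[] repeatUv(float[] triUv, int vertexCount) {
        float[] out = new float[vertexCount * 2];
        for (int i = 0; i < out.length; i++) {
            out[i] = triUv[i % triUv.length];
        }
        return out;
    }

    public static void main(String[] args) {
        float[] uv = repeatUv(new float[] { 0f, 0f, 0.5f, 0f, 1f, 1f }, 6);
        System.out.println(uv.length); // 12 floats for 6 vertices
    }
}
```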
I'm using LWJGL and Slick framework to load Textures to my OpenGL-application.
I'm using this image:
And this code to import and utilize the texture:
flagTexture = TextureLoader.getTexture("PNG", ResourceLoader.getResourceAsStream("japan.png"));
......
flagTexture.bind();
GL11.glColor3f(1.0f, 1.0f, 1.0f);
GL11.glPushMatrix();
GL11.glTranslatef(0.0f, 0.0f, -10.0f);
GL11.glBegin(GL11.GL_QUADS);
GL11.glTexCoord2f(0.0f, 0.0f);
GL11.glVertex2f(0.0f, 0.0f);
GL11.glTexCoord2f(1.0f, 0.0f);
GL11.glVertex2f(2.5f, 0.0f);
GL11.glTexCoord2f(1.0f, 1.0f);
GL11.glVertex2f(2.5f, 2.5f);
GL11.glTexCoord2f(0.0f, 1.0f);
GL11.glVertex2f(0.0f, 2.5f);
GL11.glEnd();
GL11.glPopMatrix();
But the end-result becomes this:
I'm not using any special settings like GL_REPEAT or anything like that. What's going on? How can I make the texture fill the given vertices?
It looks like the texture is getting padded out to the nearest power of two. There are two solutions here:
Stretch the texture out to the nearest power of two.
Calculate the difference between your texture's size and the nearest power of two and change the texture coordinates from 1.0f to textureWidth/nearestPowerOfTwoWidth and textureHeight/nearestPowerOfTwoHeight.
There might also be some specific LWJGL method to allow for non-power-of-two textures, look into that.
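For the second option, the adjusted maximum coordinate is easy to compute. A sketch, assuming the loader pads to the next power of two (check what your loader actually allocates):

```java
public class NpotTexCoords {
    // Smallest power of two >= n, as texture loaders typically pad to.
    static int nextPowerOfTwo(int n) {
        int p = 1;
        while (p < n) p <<= 1;
        return p;
    }

    // The texture coordinate that maps to the edge of the real image
    // inside the padded power-of-two texture (use instead of 1.0f).
    static float maxTexCoord(int imageSize) {
        return (float) imageSize / nextPowerOfTwo(imageSize);
    }

    public static void main(String[] args) {
        // e.g. a 300-pixel-wide image padded to 512 texels
        System.out.println(nextPowerOfTwo(300) + " " + maxTexCoord(300));
    }
}
```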
If you need to support non-power-of-two textures, you can modify the loading method. If you use Slick2D, there's no way to do it other than to implement your "own" texture class (you can find examples in Texture.java and TextureLoader.java).
The TextureLoader class contains a method get2Fold, which calculates the next power of two bigger than the texture width/height. So if you want to use textures with a non-power-of-two size, just change this method to simply return fold; (i.e. the input), so that the program "thinks" the next power of two is the size of the image. That isn't true in many cases, but if the hardware supports it (most does), this shouldn't be a problem. A more "abstract" way is to change this line:
GL11.glTexImage2D(target, 0, dstPixelFormat, get2Fold(bufferedImage.getWidth()), get2Fold(bufferedImage.getHeight()), 0, srcPixelFormat, GL11.GL_UNSIGNED_BYTE, textureBuffer);
Here, the 4th argument is the width of the texture and the 5th is the height. If you set these to the image's width/height, it will work. Since this method is basically the same as the previous one, both share the same caveats: as said before, this can slow down rendering, and it might not be supported on all hardware.
Hopefully this link will be of some help:
http://www.lwjgl.org/wiki/index.php?title=Slick-Util_Library_-_Part_1_-_Loading_Images_for_LWJGL
It looks very similar to what you're doing here.
GL11.glBegin(GL11.GL_QUADS);
GL11.glTexCoord2f(0,0);
GL11.glVertex2f(100,100);
GL11.glTexCoord2f(1,0);
GL11.glVertex2f(100+texture.getTextureWidth(),100);
GL11.glTexCoord2f(1,1);
GL11.glVertex2f(100+texture.getTextureWidth(),100+texture.getTextureHeight());
GL11.glTexCoord2f(0,1);
GL11.glVertex2f(100,100+texture.getTextureHeight());
GL11.glEnd();
I'm trying to render a colored cube after rendering other cubes that have textures. I have multiple "Drawer" objects that conform to the Drawer interface, and I pass a reference to the GL object to the draw(final GL gl) method of each implementing class. However, no matter what I do, I seem unable to render a colored cube.
Code sample:
gl.glDisable(GL.GL_TEXTURE_2D);
gl.glColor3f( 1f, 0f, 0f );
gl.glBegin(GL.GL_QUADS);
// Front Face
Point3f point = player.getPosition();
gl.glNormal3f(0.0f, 0.0f, 1.0f);
//gl.glTexCoord2f(0.0f, 0.0f);
gl.glVertex3f(-point.x - 1.0f, -1.0f, -point.z + 1.0f);
//gl.glTexCoord2f(1.0f, 0.0f);
gl.glVertex3f(-point.x + 1.0f, -1.0f, -point.z + 1.0f);
//continue rendering rest of cube. ...
gl.glEnd();
gl.glEnable(GL.GL_TEXTURE_2D);
I've also tried putting glColor3f calls before each vertex call, but that still gives me a white cube. What's up?
There are a few things you need to make sure you do.
First off:
gl.glEnable(gl.GL_COLOR_MATERIAL);
This will let you apply colors to your vertices. (Do this before your calls to glColor3f.)
If this still does not resolve the problem, ensure that you are using blending properly (if you're using blending at all.)
For most applications, you'll probably want to use
gl.glEnable(gl.GL_BLEND);
gl.glBlendFunc(gl.GL_SRC_ALPHA,gl.GL_ONE_MINUS_SRC_ALPHA);
If neither of these things solve your problem, you might have to give us some more information about what you're doing/setting up prior to this section of your code.
If lighting is enabled, the color comes from the material, not from the glColor vertex colors. If the draw function you mentioned sets a material for the textured objects (and a white material under a texture is common), then the rest of the cubes will be white. Enabling GL_COLOR_MATERIAL makes OpenGL take the glColor commands and update the material instead of just the vertex color, so that should work.
So, simply put, if you have lighting enabled, try GL_COLOR_MATERIAL.
One thing you might want to try: glBindTexture(GL_TEXTURE_2D, 0); to unbind the texture.
Some things to check:
Is there a shader active?
Any gl-errors?
What other states did you change? For example GL_COLOR_MATERIAL, blending or lighting will change the appearance of your geometry.
Does it work if you draw the non-textured cube first? If it does, try to figure out at which point it turns white. It's also possible that the cube only shows up in the correct color in the first frame; then there's definitely a GL state involved.
Placing glPushAttrib/glPopAttrib at the beginning/end of your drawing methods might help, but it's better to figure out what caused the problem in the first place.