I have two vertex buffers, one for XY co-ordinate data and one for UV data, passed to a shader as attributes.
XY_Data (Two Triangles) : { 0f, 0f, 10f, 0f, 10f, 10f,
50f, 50f, 60f, 50f, 60f, 60f }
UV_Data (Single Triangle) : { 0f, 0f, .5f, 0f, 1f, 1f }
Is it possible to reuse the UV data for a single triangle when drawing two triangles, without having to extend the size of the buffer to match the XY Data?
In older versions (assuming your OpenGL-to-Java wrapper was a thin one) you got C-style undefined behavior. In other words, anything could happen, up to and including making demons come out of your nose. In practice it usually just means a crash or garbage on the screen.
In newer versions, if one of the robustness extensions is available, you won't get a crash, but the values passed to the shader may still be garbage or simply set to 0.
There is no way to reuse the UV data like that without involving a sampler and slowing down the rendering. It's easier to just duplicate the data and be done with it.
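If you do duplicate it, the change is mechanical; a minimal sketch, assuming the XY data lives in a plain float array (xyData is a stand-in name):
// Repeat the single-triangle UV pattern once per triangle in the XY data.
float[] uvPattern = { 0f, 0f, .5f, 0f, 1f, 1f };
int triangleCount = xyData.length / 6; // 6 floats (3 vertices x 2 components) per triangle
float[] uvData = new float[uvPattern.length * triangleCount];
for (int i = 0; i < triangleCount; i++) {
    System.arraycopy(uvPattern, 0, uvData, i * uvPattern.length, uvPattern.length);
}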
How do I rotate a triangle, given that glRotated is deprecated in modern OpenGL? Before deprecation:
gl.glRotated(i, 0, 0, 1);
gl.glBegin(GL2.GL_TRIANGLES);
gl.glVertex3f(0.0f, 1.0f, 0.0f );
gl.glVertex3f(-1.0f, -1.0f, 0.0f );
gl.glVertex3f(1.0f, -1.0f, 0.0f );
gl.glEnd();
I tried doing this, but it's just a translation:
double rotCos = Math.cos(i);
double rotSine = Math.sin(i);
gl.glBegin(GL2.GL_TRIANGLES);
gl.glVertex3d(0.0f + rotSine, 1.0f + rotCos, 0.0f );
gl.glVertex3d(-1.0f + rotSine, -1.0f + rotCos, 0.0f );
gl.glVertex3d(1.0f + rotSine, -1.0f + rotCos, 0.0f );
gl.glEnd();
How to achieve the math behind glRotated?
What you did is not the idea behind the deprecation of those functions; the deprecation also covers glBegin, glVertex and glEnd, so if you're still using those, you're missing the point.
What you should do is implement a vertex shader in which you perform the usual steps of vertex transformation, i.e. multiply the vertex first by a modelview matrix, then by a projection matrix. You can also combine modelview and projection into one matrix, but that makes things a bit trickier with regard to illumination.
The matrices are passed to OpenGL through so-called uniforms. To create the matrices, use a vector math library like GLM or Eigen (with the unofficial OpenGL module accompanying Eigen).
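As a rough sketch of what such a vertex shader looks like (not the poster's code; the uniform and attribute names are made up for the example), with the GLSL source held in a Java string:
String vertexShaderSource =
    "uniform mat4 modelview;\n" +
    "uniform mat4 projection;\n" +
    "attribute vec4 position;\n" +
    "void main() {\n" +
    // First the modelview, then the projection matrix.
    "    gl_Position = projection * modelview * position;\n" +
    "}\n";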
How to achieve the math behind glRotated?
The matrix glRotate() constructs is right there in the documentation.
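For the axis (0, 0, 1) used in the question, that matrix reduces to the standard 2D rotation x' = x·cosθ − y·sinθ, y' = x·sinθ + y·cosθ; the snippet in the question adds sine and cosine to each vertex, which is why it only translates. A hedged sketch of the fix (note that glRotated takes degrees while Math.cos/Math.sin take radians):
double c = Math.cos(Math.toRadians(i));
double s = Math.sin(Math.toRadians(i));
gl.glBegin(GL2.GL_TRIANGLES);
// Each vertex (x, y) becomes (x*c - y*s, x*s + y*c).
gl.glVertex3d( 0.0 * c - 1.0 * s,  0.0 * s + 1.0 * c, 0.0);
gl.glVertex3d(-1.0 * c + 1.0 * s, -1.0 * s - 1.0 * c, 0.0);
gl.glVertex3d( 1.0 * c + 1.0 * s,  1.0 * s - 1.0 * c, 0.0);
gl.glEnd();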
It is still OK to use deprecated functions, but if you want to use "modern" OpenGL (OpenGL 3 and higher) you are going to have to do things rather differently.
For one, you no longer use glBegin/glEnd and instead draw everything using vertex buffer objects. Secondly, the fixed-function pipeline has been removed, so vertex and fragment shaders are required to draw anything. There are also a number of other changes (including the addition of vertex array objects and geometry shaders).
The way to do rotation in OpenGL 3 is to pass modelView and projection matrices in uniforms, and use them to compute vertex positions in the vertex shader.
Ultimately, if you want to learn "modern" OpenGL, you are probably best off just looking online for tutorials on OpenGL 3.0 (or higher).
I'm using LWJGL and Slick framework to load Textures to my OpenGL-application.
I'm using this image:
And this code to import and utilize the texture:
flagTexture = TextureLoader.getTexture("PNG", ResourceLoader.getResourceAsStream("japan.png"));
......
flagTexture.bind();
GL11.glColor3f(1.0f, 1.0f, 1.0f);
GL11.glPushMatrix();
GL11.glTranslatef(0.0f, 0.0f, -10.0f);
GL11.glBegin(GL11.GL_QUADS);
GL11.glTexCoord2f(0.0f, 0.0f);
GL11.glVertex2f(0.0f, 0.0f);
GL11.glTexCoord2f(1.0f, 0.0f);
GL11.glVertex2f(2.5f, 0.0f);
GL11.glTexCoord2f(1.0f, 1.0f);
GL11.glVertex2f(2.5f, 2.5f);
GL11.glTexCoord2f(0.0f, 1.0f);
GL11.glVertex2f(0.0f, 2.5f);
GL11.glEnd();
GL11.glPopMatrix();
But the end-result becomes this:
I'm not using any special settings like GL_REPEAT or anything like that. What's going on? How can I make the texture fill the given vertices?
It looks like the texture is getting padded out to the nearest power of two. There are two solutions here:
Stretch the texture out to the nearest power of two.
Calculate the difference between your texture's size and the nearest power of two and change the texture coordinates from 1.0f to textureWidth/nearestPowerOfTwoWidth and textureHeight/nearestPowerOfTwoHeight.
There might also be some specific LWJGL method to allow for non-power-of-two textures; look into that.
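A minimal sketch of the second option above (the helper is hypothetical; imageWidth/imageHeight stand in for your image's real dimensions):
// Hypothetical helper: smallest power of two >= n.
static int nextPowerOfTwo(int n) {
    int p = 1;
    while (p < n) p <<= 1;
    return p;
}

// Then use these ratios instead of 1.0f when issuing texture coordinates:
float maxU = (float) imageWidth / nextPowerOfTwo(imageWidth);
float maxV = (float) imageHeight / nextPowerOfTwo(imageHeight);
// e.g. GL11.glTexCoord2f(maxU, maxV); where the quad previously used (1.0f, 1.0f)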
If you need to support non-power-of-two textures, you can modify the loading method. If you use Slick2D, there's no way to do it other than to implement your "own" texture class. (You can get some examples of those here: Texture.java and TextureLoader.java.)
The TextureLoader class contains a method get2Fold, which is used to calculate the next power of two bigger than the texture width/height. If you want to use textures with non-power-of-two sizes, change this method to simply return fold (i.e. the input), so that the program "thinks" the next power of two is the size of the image. In many cases it isn't, but if the hardware supports non-power-of-two textures (most modern hardware does), this shouldn't be a problem. A more "abstract" way would be to change this line:
GL11.glTexImage2D(target, 0, dstPixelFormat, get2Fold(bufferedImage.getWidth()), get2Fold(bufferedImage.getHeight()), 0, srcPixelFormat, GL11.GL_UNSIGNED_BYTE, textureBuffer);
Here, the 4th argument is the width of the texture and the 5th is its height. If you set these to the image's width/height, it will work. Since this method is basically the same as the one before, the same caveats apply to both: as said before, this may slow down your image processing, and it might not be supported on all hardware.
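In other words, assuming the hardware supports non-power-of-two textures, the call becomes:
GL11.glTexImage2D(target, 0, dstPixelFormat, bufferedImage.getWidth(), bufferedImage.getHeight(), 0, srcPixelFormat, GL11.GL_UNSIGNED_BYTE, textureBuffer);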
Hopefully this link will be of some help
http://www.lwjgl.org/wiki/index.php?title=Slick-Util_Library_-_Part_1_-_Loading_Images_for_LWJGL
It looks like it's very similar to what you're doing here.
GL11.glBegin(GL11.GL_QUADS);
GL11.glTexCoord2f(0,0);
GL11.glVertex2f(100,100);
GL11.glTexCoord2f(1,0);
GL11.glVertex2f(100+texture.getTextureWidth(),100);
GL11.glTexCoord2f(1,1);
GL11.glVertex2f(100+texture.getTextureWidth(),100+texture.getTextureHeight());
GL11.glTexCoord2f(0,1);
GL11.glVertex2f(100,100+texture.getTextureHeight());
GL11.glEnd();
I'm trying to create a spinning airplane propeller in Java 3D. At the moment, it is rotating around the origin. However, I need it to rotate around itself. I haven't done much 3D graphics in Java, so I'm pretty much stumped.
TransformGroup rotateTheBlades = new TransformGroup();
rotateTheBlades.setCapability(TransformGroup.ALLOW_TRANSFORM_WRITE);
Alpha rotationAlpha = new Alpha(-1,5000);
RotationInterpolator rotator = new RotationInterpolator(
rotationAlpha,rotateTheBlades);
Transform3D abc = new Transform3D();
abc.rotZ(Math.PI/2);
rotator.setTransformAxis(abc);
rotator.setSchedulingBounds(new BoundingSphere());
rotateTheBlades.addChild(rotator);
rotateTheBlades.addChild(propeller);
sceneBG.addChild(rotateTheBlades);
Any help would be greatly appreciated. PS: I tried to translate it to the origin before and after but it doesn't seem to do anything whatsoever.
Without knowing enough Java to give a complete answer, the correct solution in maths terms is as you suspect: translate the centre of the object to the origin, rotate, then translate it back to its original position.
However, it's likely (i.e., I'm guessing) that when you combine transformations they're post-multiplied, giving the effect that transformations happen in model space. What you probably want is pre-multiplication, giving the effect that transformations occur in world space.
Consider the sails of a windmill. In code you'd want to be able to translate to the top of the windmill, then call the routine that draws the sails. That routine might apply a rotation and then draw. But in terms of transformations, you actually want to rotate the sails while they are at the origin, so that they rotate around their centre, and then move them out. So the transformations are applied in the opposite order to the order in which you request them.
What that means is, you want to apply the transformations as:
move away from the origin
rotate
move to the origin
For example, if you were in OpenGL (which is also a post-multiplier, and easy enough to follow in this example even if you don't actually know it) you might do:
glTranslatef(centreOfBody.x, centreOfBody.y, 0.0f);
glRotatef(angle, 0.0f, 0.0f, 1.0f);
glTranslatef(-centreOfBody.x, -centreOfBody.y, 0.0f);
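In Java 3D terms, the same composition can be built by multiplying Transform3D objects; a sketch under the assumption that centre holds the propeller's centre (a made-up variable) and that you are setting the transform directly rather than through an interpolator:
// Applied right to left: move to origin, rotate, move back.
Transform3D toOrigin = new Transform3D();
toOrigin.setTranslation(new Vector3d(-centre.x, -centre.y, -centre.z));
Transform3D rotation = new Transform3D();
rotation.rotZ(angle);
Transform3D back = new Transform3D();
back.setTranslation(new Vector3d(centre.x, centre.y, centre.z));

Transform3D combined = new Transform3D(back);
combined.mul(rotation); // combined = back * rotation
combined.mul(toOrigin); // combined = back * rotation * toOrigin
rotateTheBlades.setTransform(combined);
With a RotationInterpolator, the same idea should apply to the transform passed to setTransformAxis: it positions the rotation axis as well as orienting it, so including the translation to the propeller's centre there should move the axis off the origin.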
The approach suggested by @Tommy is correct. Like GL, transformations of the Java graphics context are concatenated "in the most commonly useful way," which I think of as last-in, first-out. A typical paintComponent() method is shown below and in this complete example.
@Override
public void paintComponent(Graphics g) {
    super.paintComponent(g);
    Graphics2D g2d = (Graphics2D) g;
    g2d.translate(this.getWidth() / 2, this.getHeight() / 2);
    g2d.rotate(theta);
    g2d.translate(-image.getWidth(null) / 2, -image.getHeight(null) / 2);
    g2d.drawImage(image, 0, 0, null);
}
I'm currently playing with the JME-Jbullet physics engine, and having issues with my terrain.
I have 2 flat boxes, one for the floor, and one to act as a ramp. The issue is as follows:
With the following code:
Box slope = new Box("Slope", new Vector3f(0, -1, 0), 10f, 0f, 15f);
PhysicsNode pSlope = new PhysicsNode(slope, CollisionShape.ShapeTypes.MESH);
pSlope.setMass(0);
pSlope.getLocalRotation().fromAngleNormalAxis( 0.5f, new Vector3f( 0, 0, -1 ) );
Before the rotation is applied, the box acts as normal: if another object is dropped on top, they collide correctly. After the rotation, however, the box is rotated but its physics doesn't change, so when an object is dropped on top of what appears to be the ramp, it acts as though the rotation never happened.
Is there some way to update the ramp so that when an object is dropped on to it, it slides down?
Thanks.
Are you remembering to update the physics world in your update method?
public void update(float tpf) {
    super.update(tpf);
    pSpace.update(tpf);
}
where pSpace comes from PhysicsSpace pSpace = PhysicsSpace.getPhysicsSpace();
The problem is in the collision shape. A mesh is an extremely expensive shape to calculate collisions for and, as far as I am aware, does not work properly (yet) in JME. Replacing it with a box collision shape will solve your problem.
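Against the question's code that would be a one-line change (assuming the ShapeTypes enum offers a BOX constant alongside MESH):
PhysicsNode pSlope = new PhysicsNode(slope, CollisionShape.ShapeTypes.BOX);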
As indicated in the javadocs:
getLocalTranslation().set() does not set the physics object location, use setLocalTranslation(), same applies for getLocalRotation()
From that, I would guess that you need to call pSlope.setLocalRotation(...) instead of getting the rotation and modifying it in place.
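Something along these lines (a sketch based on the question's own values, untested):
Quaternion rotation = new Quaternion();
rotation.fromAngleNormalAxis(0.5f, new Vector3f(0, 0, -1));
pSlope.setLocalRotation(rotation);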
I'm trying to render a colored cube after rendering other cubes that have textures. I have multiple "Drawer" objects that conform to the Drawer interface, and I pass each a reference to the GL object to the draw( final GL gl ) method of each individual implementing class. However, no matter what I do, I seem unable to render a colored cube.
Code sample:
gl.glDisable(GL.GL_TEXTURE_2D);
gl.glColor3f( 1f, 0f, 0f );
gl.glBegin(GL.GL_QUADS);
// Front Face
Point3f point = player.getPosition();
gl.glNormal3f(0.0f, 0.0f, 1.0f);
//gl.glTexCoord2f(0.0f, 0.0f);
gl.glVertex3f(-point.x - 1.0f, -1.0f, -point.z + 1.0f);
//gl.glTexCoord2f(1.0f, 0.0f);
gl.glVertex3f(-point.x + 1.0f, -1.0f, -point.z + 1.0f);
//continue rendering rest of cube. ...
gl.glEnd();
gl.glEnable(GL.GL_TEXTURE_2D);
I've also tried throwing the glColor3f calls before each vertex call, but that still gives me a white cube. What's up?
There are a few things you need to make sure you do.
First off:
gl.glEnable(gl.GL_COLOR_MATERIAL);
This will let you apply colors to your vertices. (Do this before your calls to glColor3f.)
If this still does not resolve the problem, ensure that you are using blending properly (if you're using blending at all.)
For most applications, you'll probably want to use
gl.glEnable(gl.GL_BLEND);
gl.glBlendFunc(gl.GL_SRC_ALPHA,gl.GL_ONE_MINUS_SRC_ALPHA);
If neither of these things solve your problem, you might have to give us some more information about what you're doing/setting up prior to this section of your code.
If lighting is enabled, color comes from the material, not the glColor vertex colors. If your draw function that you mentioned is setting a material for the textured objects (and a white material under the texture would be common) then the rest of the cubes would be white. Using GL_COLOR_MATERIAL sets up OpenGL to take the glColor commands and update the material instead of just the vertex color, so that should work.
So, simply put, if you have lighting enabled, try GL_COLOR_MATERIAL.
One thing you might want to try is glBindTexture(GL_TEXTURE_2D, 0); to unbind the current texture.
Some things to check:
Is there a shader active?
Any gl-errors?
What other states did you change? For example GL_COLOR_MATERIAL, blending or lighting will change the appearance of your geometry.
Does it work if you draw the non-textured cube first? If it does, try to figure out at which point it turns white. It's also possible that the cube only shows up in the correct color in the first frame; then there's definitely a GL state involved.
Placing glPushAttrib/glPopAttrib at the beginning/end of your drawing methods might help, but it's better to figure out what caused the problem in the first place.
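For completeness, the push/pop variant would bracket each drawer's calls like this (a sketch; the bit mask should name whatever state groups your drawers actually touch):
gl.glPushAttrib(GL.GL_ENABLE_BIT | GL.GL_CURRENT_BIT | GL.GL_LIGHTING_BIT);
// ... the drawer's GL calls ...
gl.glPopAttrib();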