I wrote a small particle system in Java with LWJGL, so I am using OpenGL.
I have read that switching textures is very slow, so I make sure to bind a texture only once for a whole batch of particles and then draw all of them.
Still, my FPS drops from about 8000 to 4000, even though only 60 particles are drawn at a time.
I know the problem is in the rendering, because when I only update the particles but do not draw them, there is no drop in FPS.
It would be great if someone could tell me what is wrong with my OpenGL code.
This is the render method of the particle emitter, which manages the particles:
public void render() {
    // Initialize....
    GL11.glMatrixMode(GL11.GL_MODELVIEW);
    BloodParticle.PARTICLE_TEXTURE.bind();
    for (Integer index : this.getBlockedParticles()) {
        this.getParticles()[index].render();
    }
    GL11.glDisable(GL11.GL_TEXTURE_2D);
}
This is the render method of a single particle:
public void render() {
    if (!this.isAlive()) {
        return;
    }
    // Store matrix
    GL11.glPushMatrix();
    // Set to particle's position and scale
    GL11.glTranslatef(this.getPosition().getX(), this.getPosition().getY(), 0);
    GL11.glScalef(3.5f, 3.5f, 1.0f);
    // Set the current alpha.
    GL11.glColor4f(1.0f, 1.0f, 1.0f, this.getAlphaFade());
    // Draw the texture on a quad.
    GL11.glBegin(GL11.GL_QUADS);
    {
        GL11.glTexCoord2f(0, 0);
        GL11.glVertex2f(-PARTICLE_TEXTURE.getImageWidth() / 2.0f,
                        -PARTICLE_TEXTURE.getImageHeight() / 2.0f);
        GL11.glTexCoord2f(0, 1);
        GL11.glVertex2f(-PARTICLE_TEXTURE.getImageWidth() / 2.0f,
                         PARTICLE_TEXTURE.getImageHeight() / 2.0f);
        GL11.glTexCoord2f(1, 1);
        GL11.glVertex2f(PARTICLE_TEXTURE.getImageWidth() / 2.0f,
                        PARTICLE_TEXTURE.getImageHeight() / 2.0f);
        GL11.glTexCoord2f(1, 0);
        GL11.glVertex2f(PARTICLE_TEXTURE.getImageWidth() / 2.0f,
                        -PARTICLE_TEXTURE.getImageHeight() / 2.0f);
    }
    GL11.glEnd();
    // Restore the matrix.
    GL11.glPopMatrix();
}
I know that 4000 FPS is still much more than enough, but I am worried about the FPS dropping by half with only one emitter in the game so far.
This will become a problem at a fixed 60 FPS with more than three emitters.
State changes have nothing to do with your performance problem.
This does:
// Draw the texture on a quad.
GL11.glBegin(GL11.GL_QUADS);
{
    GL11.glTexCoord2f(0, 0);
    GL11.glVertex2f(-PARTICLE_TEXTURE.getImageWidth() / 2.0f,
                    -PARTICLE_TEXTURE.getImageHeight() / 2.0f);
    GL11.glTexCoord2f(0, 1);
    GL11.glVertex2f(-PARTICLE_TEXTURE.getImageWidth() / 2.0f,
                     PARTICLE_TEXTURE.getImageHeight() / 2.0f);
    GL11.glTexCoord2f(1, 1);
    GL11.glVertex2f(PARTICLE_TEXTURE.getImageWidth() / 2.0f,
                    PARTICLE_TEXTURE.getImageHeight() / 2.0f);
    GL11.glTexCoord2f(1, 0);
    GL11.glVertex2f(PARTICLE_TEXTURE.getImageWidth() / 2.0f,
                    -PARTICLE_TEXTURE.getImageHeight() / 2.0f);
}
GL11.glEnd();
Think about it: every single particle causes more than a dozen calls into OpenGL (matrix push/pop, translate, scale, color, begin/end, plus eight texcoord/vertex calls). At several thousand particles this is a huge overhead. Even worse, you're not even batching several quads into a single glBegin/glEnd block.
Here's how to improve performance: don't use glBegin/glVertex/glEnd, a.k.a. immediate mode.
Use vertex arrays!
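For illustration, here is a minimal sketch of what that could look like with LWJGL 2's client-side vertex arrays (matching the GL11-style bindings used above), building one batch for all particles and issuing a single draw call. The accessors mirror the code above, but the exact names (particles, particleCount) are assumptions, and per-particle alpha would additionally need a GL_COLOR_ARRAY, which is omitted for brevity:

// Assumes: import java.nio.FloatBuffer; import org.lwjgl.BufferUtils;
// Build the geometry on the CPU (positions already transformed), then draw it in one call.
FloatBuffer vertices  = BufferUtils.createFloatBuffer(particleCount * 4 * 2);
FloatBuffer texCoords = BufferUtils.createFloatBuffer(particleCount * 4 * 2);

for (BloodParticle p : particles) {
    float x  = p.getPosition().getX();
    float y  = p.getPosition().getY();
    float hw = 3.5f * BloodParticle.PARTICLE_TEXTURE.getImageWidth()  / 2.0f;  // half width, including the 3.5 scale
    float hh = 3.5f * BloodParticle.PARTICLE_TEXTURE.getImageHeight() / 2.0f;  // half height
    vertices.put(x - hw).put(y - hh);   texCoords.put(0).put(0);
    vertices.put(x - hw).put(y + hh);   texCoords.put(0).put(1);
    vertices.put(x + hw).put(y + hh);   texCoords.put(1).put(1);
    vertices.put(x + hw).put(y - hh);   texCoords.put(1).put(0);
}
vertices.flip();
texCoords.flip();

GL11.glEnableClientState(GL11.GL_VERTEX_ARRAY);
GL11.glEnableClientState(GL11.GL_TEXTURE_COORD_ARRAY);
GL11.glVertexPointer(2, 0, vertices);       // 2 floats per vertex, tightly packed
GL11.glTexCoordPointer(2, 0, texCoords);
GL11.glDrawArrays(GL11.GL_QUADS, 0, particleCount * 4);  // one call instead of a dozen-plus per particle
GL11.glDisableClientState(GL11.GL_TEXTURE_COORD_ARRAY);
GL11.glDisableClientState(GL11.GL_VERTEX_ARRAY);

Reusing the two FloatBuffers between frames, instead of allocating them in every render, also avoids garbage-collection pressure.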
I keep failing to get it to scroll infinitely; has anyone else had the same problem?
The game is ThrustCopter, and it should scroll TextureBelow and TextureAbove.
I am only developing for Android. I'm sure the problem lies somewhere in this snippet of code.
public void updateScene() {
    float graphicsDeltaTime = Gdx.graphics.getDeltaTime();
    terrainOffset -= 100 * graphicsDeltaTime;
}

public void resetScene() {
    terrainOffset = 0;
}

public void drawScene() {
    camera.update();
    batch.setProjectionMatrix(camera.combined);
    batch.begin();
    batch.disableBlending();
    batch.draw(background, 0, 0);
    batch.enableBlending();
    batch.draw(terrainBelow, terrainOffset, 0);
    batch.draw(terrainBelow, terrainOffset + terrainBelow.getRegionWidth(), 0);
    batch.draw(terrainAbove, terrainOffset, 480 - terrainAbove.getRegionHeight());
    batch.draw(terrainAbove, terrainOffset + terrainAbove.getRegionWidth(), 480 - terrainAbove.getRegionHeight());
    batch.end();
}
You're moving the two copies of the texture left forever. You need to prevent terrainOffset from moving farther left than the width of the texture. By taking the remainder after dividing by the texture width, the pair will seamlessly snap forward by exactly one texture width.
I'm assuming these textures are at least as wide as the camera's viewportWidth, and x = 0 aligns with the left edge of the view.
Replace
terrainOffset -= 100*graphicsDeltaTime;
with
terrainOffset = (terrainOffset - 100 * graphicsDeltaTime) % terrainBelow.getRegionWidth();
If your two terrain textures have different widths, then you need two terrainOffset variables, one for each texture.
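For illustration, a sketch of that two-offset variant (the offset names below are mine, not from the ThrustCopter code) could look like:

// Each offset wraps at its own texture's width.
float dx = 100 * Gdx.graphics.getDeltaTime();
terrainBelowOffset = (terrainBelowOffset - dx) % terrainBelow.getRegionWidth();
terrainAboveOffset = (terrainAboveOffset - dx) % terrainAbove.getRegionWidth();

// drawScene() then uses the matching offset for each layer:
batch.draw(terrainBelow, terrainBelowOffset, 0);
batch.draw(terrainBelow, terrainBelowOffset + terrainBelow.getRegionWidth(), 0);
batch.draw(terrainAbove, terrainAboveOffset, 480 - terrainAbove.getRegionHeight());
batch.draw(terrainAbove, terrainAboveOffset + terrainAbove.getRegionWidth(), 480 - terrainAbove.getRegionHeight());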
I am programming a GUI framework in LWJGL (OpenGL for Java). I recently implemented rounded rectangles by rendering a couple of normal rectangles surrounded by circles, and to render the circles I used GL11.GL_POINTS. I have now reached the point where I am implementing animations, and for a window-open animation I decided to GL11.glScaled() the window from small to normal. That works fine, but unfortunately my circles don't get resized.
I tried swapping my GL_POINTS circle render method for one that uses TRIANGLE_FANs, and that scaled fine. The problem there was that the circles didn't look smooth and round at all, and if I increase the number of triangles it starts to lag very quickly, even though my computer isn't bad at all.
This is the code I've used to render circles with GL_POINTS.
GL11.glEnable(GL11.GL_POINT_SMOOTH);
GL11.glHint(GL11.GL_POINT_SMOOTH_HINT, GL11.GL_NICEST);
GL11.glPointSize(radius);
GL11.glBegin(GL11.GL_POINTS);
GL11.glVertex2d(x, y);
GL11.glEnd();
GL11.glDisable(GL11.GL_POINT_SMOOTH);
This is the code I've used to scale the circles:
GL11.glPushMatrix();
GL11.glTranslated(x, y, 0);
GL11.glScaled(2.0f, 2.0f, 1);
GL11.glTranslated(-x, -y, 0);
// render the circles here
GL11.glPopMatrix();
I expect the circles to scale according to the factor I pass to glScaled().
Currently they aren't rescaling at all; they are just rendered at their normal size.
Here's a demonstration of how to properly render a circle using triangle fans:
public void render() {
    glClearColor(0, 0, 0, 0);
    glClear(GL_COLOR_BUFFER_BIT);
    glLoadIdentity();
    // Coordinate system starts out as screen space coordinates
    glOrtho(0, 400, 300, 0, 1, -1);

    glColor3d(1, 0.5, 0.5);
    renderCircle(120, 120, 100);
    glColor3d(0.5, 1, 0.5);
    renderCircle(300, 200, 50);
    glColor3d(0.5, 0.5, 1);
    renderCircle(200, 250, 30);
}

private void renderCircle(double centerX, double centerY, double radius) {
    glPushMatrix();
    glTranslated(centerX, centerY, 0);
    glScaled(radius, radius, 1);
    // Another translation here would be wrong
    renderUnitCircle();
    glPopMatrix();
}

private void renderUnitCircle() {
    glBegin(GL_TRIANGLE_FAN);
    int numVertices = 100;
    double angle = 2 * Math.PI / numVertices;
    for (int i = 0; i < numVertices; ++i) {
        glVertex2d(Math.cos(i * angle), Math.sin(i * angle));
    }
    glEnd();
}
Output image:
The GL_POINT_SIZE value is actually the size of the point in pixels on screen, not in current coordinate units. For that reason your circles were unaffected by the glScaled() call. That's one reason not to use GL_POINTS to render circles. The other (arguably more important) reason is that this kind of fixed-function point rendering (glPointSize plus GL_POINT_SMOOTH) is deprecated and unsupported in newer OpenGL profiles.
I can't figure out why this:
glPushMatrix();
GL11.glBindTexture(this.target, this.textureID);
glColor3f(1, 1, 1);
glTranslated(posX, posY, 0);
glBegin(GL_QUADS);
{
    glTexCoord2d(posXLeft, posYTop);
    glVertex2d(0, 0);
    glTexCoord2d(posXLeft, posYBottom);
    glVertex2d(0, verH);
    glTexCoord2d(posXRight, posYBottom);
    glVertex2d(verW, verH);
    glTexCoord2d(posXRight, posYTop);
    glVertex2d(verW, 0);
}
glEnd();
glPopMatrix();
is working perfectly, where posX and posY are the position in pixels, and posXLeft etc. are the texture coordinates (the ratio of the texture to show).
But this:
glPushMatrix();
GL11.glBindTexture(this.target, this.textureID);
glColor3f(1, 1, 1);
glTranslated(posX, posY, 0);
glBegin(GL_LINES);
{
    glVertex2d(10, 10);
    glVertex2d(800, 600);
}
glEnd();
glPopMatrix();
isn't, even though drawing lines should be even easier than drawing a piece of a texture.
What I want to achieve is to add some zig-zag lines on top of a texture to simulate cracks as it gets damaged or broken, but I can't even draw a single line, so I am stuck here.
Any advice?
You still have texturing enabled in your line drawing code, but you don't specify texture coordinates, so the line is drawn with a solid color taken from the texture at the currently set texture coordinate.
My suggestion: disable texturing while drawing that line.
As datenwolf said, you have to disable texturing; the thing is that you then have to re-enable it, otherwise you will have problems in the next drawing cycle if that state is not set correctly.
The solution is:
glPushMatrix();
GL11.glBindTexture(this.target, this.textureID);
glColor3f(1, 1, 1);
glDisable(GL_TEXTURE_2D);   // turn texturing off before drawing the untextured line
glTranslated(posX, posY, 0);
glBegin(GL_LINES);
{
    glVertex2d(10, 10);
    glVertex2d(800, 600);
}
glEnd();
glEnable(GL_TEXTURE_2D);    // re-enable texturing for the next textured draw
glPopMatrix();
and that should solve your problem.
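If you then want the zig-zag cracks mentioned in the question, the same pattern extends to GL_LINE_STRIP. The specific crack points below are only illustrative, reusing the verW/verH quad size from the question, and would be drawn after the textured quad within the same translated matrix:

glDisable(GL_TEXTURE_2D);        // lines should not be textured
glColor3f(0.1f, 0.1f, 0.1f);     // dark crack colour (example value)
glBegin(GL_LINE_STRIP);
{
    // a jagged path running down the quad; replace with your own crack shape
    glVertex2d(verW * 0.20, 0);
    glVertex2d(verW * 0.40, verH * 0.30);
    glVertex2d(verW * 0.30, verH * 0.55);
    glVertex2d(verW * 0.60, verH * 0.75);
    glVertex2d(verW * 0.50, verH);
}
glEnd();
glEnable(GL_TEXTURE_2D);         // restore texturing afterwards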
I'm currently working on my Java project, which uses LWJGL.
I have come far enough with my "game" that I'm now drawing an image to the screen.
The problem I'm getting is that the image is drawn with black boxes in it, and it's not the correct size.
Here is an image of how it looks visually.
Here is how the actual red square image should look:
Here is the code I use for rendering with OpenGL; I cannot figure out what I'm doing wrong.
public class Renderer {

    // Integers used for player coordinates, taken from the Player class via static variables
    int playerX;
    int playerY;

    SpriteSheetLoader spriteLoader;
    Texture player;

    public Renderer() {
    }

    public void initRenderer() {
        // Initialize OpenGL
        GL11.glMatrixMode(GL11.GL_PROJECTION);
        GL11.glLoadIdentity(); // Resets any previous projection matrices
        GL11.glOrtho(0, 800, 600, 0, 1, -1);
        GL11.glMatrixMode(GL11.GL_MODELVIEW);

        try {
            player = TextureLoader.getTexture("PNG", ResourceLoader.getResourceAsStream("res/PaddleTemp.png"));
        } catch (IOException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
    }

    public void update() {
        GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT);

        playerX = Player.playerX; // Gets player x and y from the Player class using static variables
        playerY = Player.playerY;

        player.bind();
        GL11.glBegin(GL11.GL_QUADS);
        GL11.glTexCoord2f(0, 0); // top left
        GL11.glVertex2f(playerX, playerY);
        GL11.glTexCoord2f(1, 0); // top right
        GL11.glVertex2f(playerX + 50, playerY);
        GL11.glTexCoord2f(1, 1); // bottom right
        GL11.glVertex2f(playerX + 50, playerY + 150);
        GL11.glTexCoord2f(0, 1); // bottom left
        GL11.glVertex2f(playerX, playerY + 150);
        GL11.glEnd();
    }
}
Sorry, I'm not absolutely sure what the problem is, and I can't seem to reproduce it, unfortunately. Still, it should be solved by one of the following:
Unsupported Image Size
OpenGL relies heavily on images whose width and height are powers of two (i.e. 2^n: 64, 128, 256, and so on). If the image files aren't up to that specification, LWJGL might act oddly. Also, don't worry about the image files being squished; that depends on the vertex input, not on the texture input.
Unsupported Image Extension
Try saving your image files as non-interlaced PNG files. Slick-Util, or whatever you use for loading the image files, might not fully support the images you're giving it.
Correction Hints
As a last resort, you could add the following lines of code to your initialization code:
glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
glHint(GL_LINE_SMOOTH_HINT, GL_NICEST);
Hopefully one of these tips helps you.
Source: LWJGL not rendering textures correctly?
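If you want to verify the first point in code, a power-of-two test in plain Java is short (the helper name is mine); Slick-Util's Texture exposes the original image size via getImageWidth()/getImageHeight():

// A positive integer is a power of two when exactly one bit is set.
private static boolean isPowerOfTwo(int n) {
    return n > 0 && (n & (n - 1)) == 0;
}

// Example check after loading the texture:
if (!isPowerOfTwo(player.getImageWidth()) || !isPowerOfTwo(player.getImageHeight())) {
    System.out.println("Warning: PaddleTemp.png is not power-of-two sized");
}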
I have a really easy fix for you. Your main problem is that you are using an image whose dimensions are not powers of two, but the texture loader handles that by padding the texture up to a supported size. It also records, as float values, the ratio of the original image's width and height to the padded texture size (which is what player.getWidth() and player.getHeight() return), so you can in fact use images of any size you want. Just replace your current rendering code with this:
GL11.glBegin(GL11.GL_QUADS);
GL11.glTexCoord2f(0, 0); //top left
GL11.glVertex2f(playerX, playerY);
GL11.glTexCoord2f(player.getWidth(),0); //Top right
GL11.glVertex2f(playerX + 50, playerY);
GL11.glTexCoord2f(player.getWidth(), player.getHeight()); //Bottom right
GL11.glVertex2f(playerX + 50, playerY + 150);
GL11.glTexCoord2f(0, player.getHeight()); //bottom left
GL11.glVertex2f(playerX, playerY + 150);
GL11.glEnd();
Also, you can disregard the other answer as this one should be your solution.
This is a big problem I have been running into.
I am trying to render multiple tiles using glTranslate, but when I call my draw function with the x, y coordinates, the tiles are spaced weirdly (I don't want spaces).
Here is what happens.
Here is my code:
Draw:
public void draw(float Xa, float Ya) {
    GL11.glTranslatef(Xa, Ya, 0);
    if (hasTexture) {
        Texture.bind();
        GL11.glBegin(GL11.GL_QUADS);
        GL11.glColor3f(0.5f, 0.5f, 1);
        GL11.glTexCoord2f(0, 0);
        GL11.glVertex2f(0, 0);
        GL11.glTexCoord2f(0, 1);
        GL11.glVertex2f(0, S);
        GL11.glTexCoord2f(1, 1);
        GL11.glVertex2f(S, S);
        GL11.glTexCoord2f(1, 0);
        GL11.glVertex2f(S, 0);
        GL11.glEnd();
    }
}
and my render code:
public void a() throws IOException {
    GL11.glTranslatef(0, 0, -10);
    int x = 0;
    while (x < World.BLOCKS_WIDTH - 1) {
        int y = 0;
        while (y < World.BLOCKS_HEIGHT - 1) {
            blocks.b[data.blocks[x][y]].draw(x, y);
            y++;
        }
        x++;
    }
}
There are no errors (except the visible ones).
You do not appear to be initialising or pushing/popping the current transform, so the translations accumulate, producing the effect you see: the tiles get further and further apart as you translate by ever larger values.
Let's say your blocks are 10 units apart. The first is drawn with a translation of (0, 0), the next with (0, 10), then (0, 20), (0, 30), etc.
However, because the translations accumulate in the modelview matrix, what you actually get are translations of (0, 0), (0, 10), (0, 30), (0, 60), etc.
This accumulation is by design: it allows you to build a complex transform from a series of simple discrete steps. However, when you want to render multiple objects, each with its own transform, you need some form of reset in between objects.
You could reinitialise the whole matrix, but that's a bit untidy and involves knowing what other transforms (such as the camera, etc.) have been done previously.
Instead, you can "push" the current matrix onto a stack, perform whatever local transformations you want to do, render stuff, and then "pop" the matrix back off so that you're back where you started, ready to render the next object.
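Applied to the draw method from the question, the push/pop version could look roughly like this (whether you translate by Xa, Ya directly or by Xa * S, Ya * S depends on whether those parameters are world coordinates or block indices, which isn't clear from the snippet):

public void draw(float Xa, float Ya) {
    GL11.glPushMatrix();              // save the current modelview matrix
    GL11.glTranslatef(Xa, Ya, 0);     // position only this tile
    if (hasTexture) {
        Texture.bind();
        GL11.glBegin(GL11.GL_QUADS);
        GL11.glColor3f(0.5f, 0.5f, 1);
        GL11.glTexCoord2f(0, 0);  GL11.glVertex2f(0, 0);
        GL11.glTexCoord2f(0, 1);  GL11.glVertex2f(0, S);
        GL11.glTexCoord2f(1, 1);  GL11.glVertex2f(S, S);
        GL11.glTexCoord2f(1, 0);  GL11.glVertex2f(S, 0);
        GL11.glEnd();
    }
    GL11.glPopMatrix();               // restore it so the next tile starts from the same base transform
}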
I should point out that all of this matrix-stack functionality is deprecated in later versions of OpenGL. With the more modern API you use shaders and can supply whatever transforms you care to calculate.