This is a big problem I have been running into. I am trying to render multiple tiles using glTranslate, but when I call my draw function with the x, y coordinates, the tiles are spaced weirdly (I don't want spaces between them).
Here is my code:
Draw:
public void draw(float Xa, float Ya) {
    GL11.glTranslatef(Xa, Ya, 0);
    if (hasTexture) {
        Texture.bind();
        GL11.glBegin(GL11.GL_QUADS);
        GL11.glColor3f(0.5f, 0.5f, 1);
        GL11.glTexCoord2f(0, 0);
        GL11.glVertex2f(0, 0);
        GL11.glTexCoord2f(0, 1);
        GL11.glVertex2f(0, S);
        GL11.glTexCoord2f(1, 1);
        GL11.glVertex2f(S, S);
        GL11.glTexCoord2f(1, 0);
        GL11.glVertex2f(S, 0);
        GL11.glEnd();
    }
}
and my render code:
public void a() throws IOException {
    GL11.glTranslatef(0, 0, -10);
    int x = 0;
    while (x < World.BLOCKS_WIDTH - 1) {
        int y = 0;
        while (y < World.BLOCKS_HEIGHT - 1) {
            blocks.b[data.blocks[x][y]].draw(x, y);
            y++;
        }
        x++;
    }
}
There are no errors (except the visible ones).
You do not appear to be initialising or pushing/popping the current transform, so the translations accumulate, producing the effect you see: the tiles get further and further apart as you translate by ever larger values.
Let's say your blocks are 10 units apart. The first is drawn with a translation of (0, 0), the next with (0, 10), then (0, 20), (0, 30), etc.
However, as the translations accumulate in the modelview matrix, what you actually get are translations of (0, 0), (0, 10), (0, 30), (0, 60), etc.
This accumulation is by design, as it allows you to build a complex transform from a series of simple discrete steps. However, when you want to render multiple objects, each with its own transform, you need some form of reset in between each object.
You could reinitialise the whole matrix, but that's a bit untidy and involves knowing what other transforms (such as the camera, etc.) have been done previously.
Instead, you can "push" the current matrix onto a stack, perform whatever local transformations you want to do, render stuff, and then "pop" the matrix back off so that you're back where you started, ready to render the next object.
I should point out that all this functionality is deprecated in later versions of OpenGL. With the more modern API you use shaders, and can supply whatever transforms you care to calculate.
I keep failing at getting it to scroll infinitely; has anyone else had the same problem?
The game is ThrustCopter, and it should scroll the terrain textures below and above the play area.
I am only developing for Android. I'm sure the problem lies somewhere in this snippet of code:
public void updateScene() {
    float graphicsDeltaTime = Gdx.graphics.getDeltaTime();
    terrainOffset -= 100 * graphicsDeltaTime;
}

public void resetScene() {
    terrainOffset = 0;
}
public void drawScene() {
    camera.update();
    batch.setProjectionMatrix(camera.combined);
    batch.begin();
    batch.disableBlending();
    batch.draw(background, 0, 0);
    batch.enableBlending();
    batch.draw(terrainBelow, terrainOffset, 0);
    batch.draw(terrainBelow, terrainOffset + terrainBelow.getRegionWidth(), 0);
    batch.draw(terrainAbove, terrainOffset, 480 - terrainAbove.getRegionHeight());
    batch.draw(terrainAbove, terrainOffset + terrainAbove.getRegionWidth(), 480 - terrainAbove.getRegionHeight());
    batch.end();
}
You're moving two copies of the texture left forever. You should prevent terrainOffset from going farther left off the screen than the width of the texture. By using the remainder of dividing out the texture width, you will cause it to seamlessly pop forward by the width of the texture.
I'm assuming these textures are at least as wide as the camera's viewportWidth, and x = 0 aligns with the left edge of the view.
Replace
terrainOffset -= 100*graphicsDeltaTime;
with
terrainOffset = (terrainOffset - 100 * graphicsDeltaTime) % terrainBelow.getRegionWidth();
If your two terrain textures have different widths, then you need two terrainOffset variables, one for each texture.
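For example (a sketch, assuming you split the field into belowOffset and aboveOffset and adjust the draw calls in drawScene() to match):

belowOffset = (belowOffset - 100 * graphicsDeltaTime) % terrainBelow.getRegionWidth();
aboveOffset = (aboveOffset - 100 * graphicsDeltaTime) % terrainAbove.getRegionWidth();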
I am trying to make a simple 2D game, and I store the world in a 2D array of Block (an enum, with each value having its texture).
Since these are all simple opaque tiles, when rendering I sort them by texture and then render each one by translating to its coordinates. However, I also need to specify the texture coordinates and the vertices for each tile that I draw, even though these are all the same.
Here's what I currently have:
public static void render() {
    // Sorting...
    for (SolidBlock block : xValues.keySet()) {
        block.getTexture().bind();
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        for (int coordinateIndex = 0; coordinateIndex < xValues.get(block).size(); coordinateIndex++) {
            int x = xValues.get(block).get(coordinateIndex);
            int y = yValues.get(block).get(coordinateIndex);
            glTranslatef(x, y, Integer.MIN_VALUE);
            // Here I use MIN_VALUE because I'll later have to do z sorting with other tiles
            glBegin(GL_QUADS);
            loadModel();
            glEnd();
            glLoadIdentity();
        }
        xValues.get(block).clear();
        yValues.get(block).clear();
    }
}
private static void loadModel() {
    glTexCoord2f(0, 0);
    glVertex2f(0, 0);
    glTexCoord2f(1, 0);
    glVertex2f(1, 0);
    glTexCoord2f(1, 1);
    glVertex2f(1, 1);
    glTexCoord2f(0, 1);
    glVertex2f(0, 1);
}
I'd like to know if it is possible to put loadModel() before the main loop, to avoid having to load the model thousands of times with the same data, and also what else could be moved around to make this as fast as possible.
Some quick optimizations:
glTexParameteri only needs to be called once per parameter per texture. You should put it in the part of your code where you load the textures.
You can draw multiple quads in one glBegin/glEnd pair simply by adding more vertices. However, you cannot make any coordinate changes between glBegin and glEnd (such as glTranslatef, glLoadIdentity or glPushMatrix), so you'll have to pass x and y to your loadModel function (which really should be called addQuad for accuracy). It's also not allowed to rebind textures between glBegin and glEnd, so you'll have to use one glBegin/glEnd pair per texture.
Minor, but instead of calling xValues.get(block) a whole bunch of times, just say List<Integer> blockXValues = xValues.get(block) at the beginning of your outer loop and then use blockXValues from there on.
Some more involved optimizations:
Legacy OpenGL has display lists, which are basically macros for OpenGL. You can make OpenGL record all the OpenGL calls you make between glNewList and glEndList (with some exceptions) and store them. The next time you want to run those exact OpenGL calls, glCallList will replay them for you. Some optimizations are performed on the display list in order to speed up subsequent draws.
Texture switching is relatively expensive, which you're probably already aware of since you sorted your quads by texture, but there is a better solution than sorting textures: Put all your textures into a single texture atlas. You'll want to store the subtexture coordinates of each block inside your SolidBlocks, and then pass block to addQuad as well so you can pass the appropriate subtexture coordinates to glTexCoord2f. Once you've done that, you don't need to sort by texture anymore and can just iterate over x and y coordinates.
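With an atlas, addQuad might look something like this (a sketch; the u0/v0/u1/v1 accessors are hypothetical and stand in for wherever you store each block's subtexture rectangle):

private static void addQuad(SolidBlock block, float x, float y) {
    // u0/v0/u1/v1 are assumed accessors returning the block's corner
    // coordinates within the atlas, in [0..1] texture space.
    glTexCoord2f(block.u0(), block.v0());
    glVertex2f(x, y);
    glTexCoord2f(block.u1(), block.v0());
    glVertex2f(x + 1, y);
    glTexCoord2f(block.u1(), block.v1());
    glVertex2f(x + 1, y + 1);
    glTexCoord2f(block.u0(), block.v1());
    glVertex2f(x, y + 1);
}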
Good practices:
Only use glLoadIdentity once per frame, at the beginning of your draw process. Then use glPushMatrix paired with glPopMatrix to save and restore the state of matrices. That way the inner parts of your code don't need to know about the matrix transformations the outer parts may or may not have done beforehand.
Don't use Integer.MIN_VALUE as a Z coordinate. Use a constant of your own choosing, preferably one that won't make your depth range huge (the depth range is set by the last two arguments to glOrtho, which I assume you're using). Depth buffer precision is limited; you'll run into Z-fighting issues if you try to use Z coordinates of 1 or 2 after setting your Z range to span Integer.MIN_VALUE to Integer.MAX_VALUE. Also, these are float coordinates, so int constants don't make sense here anyway.
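For instance, a projection with a tight depth range might look like this (a sketch; the viewport size variables and the [-8, 8] range are placeholders to adapt):

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
// A near/far of -8/8 keeps the whole Z range [-8, 8] usable without
// squandering depth buffer precision on values you never use.
glOrtho(0, viewportWidth, viewportHeight, 0, -8, 8);
glMatrixMode(GL_MODELVIEW);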
Here's the code after a quick pass (without the texture atlas changes):
private static final float BLOCK_Z_DEPTH = -1; // change to whatever works for you

private static int blockCallList;
private static boolean regenerateBlockCallList; // set to true whenever you need to update some blocks

public static void init() {
    blockCallList = glGenLists(1);
    regenerateBlockCallList = true;
}

public static void render() {
    if (regenerateBlockCallList) {
        glNewList(blockCallList, GL_COMPILE_AND_EXECUTE);
        drawBlocks();
        glEndList();
        regenerateBlockCallList = false;
    } else {
        glCallList(blockCallList);
    }
}
private static void drawBlocks() {
    // Sorting...
    glPushMatrix();
    glTranslatef(0, 0, BLOCK_Z_DEPTH);
    for (SolidBlock block : xValues.keySet()) {
        List<Integer> blockXValues = xValues.get(block);
        List<Integer> blockYValues = yValues.get(block);
        block.getTexture().bind();
        glBegin(GL_QUADS);
        for (int coordinateIndex = 0; coordinateIndex < blockXValues.size(); coordinateIndex++) {
            int x = blockXValues.get(coordinateIndex);
            int y = blockYValues.get(coordinateIndex);
            addQuad(x, y);
        }
        glEnd();
        blockXValues.clear();
        blockYValues.clear();
    }
    glPopMatrix();
}

private static void addQuad(float x, float y) {
    glTexCoord2f(0, 0);
    glVertex2f(x, y);
    glTexCoord2f(1, 0);
    glVertex2f(x + 1, y);
    glTexCoord2f(1, 1);
    glVertex2f(x + 1, y + 1);
    glTexCoord2f(0, 1);
    glVertex2f(x, y + 1);
}
With modern OpenGL (vertex buffers, shaders and instancing instead of display lists, matrix transformations and passing vertices one by one) you'd approach this problem very differently, but I'll keep that beyond the scope of my answer.
I am programming a GUI framework in LWJGL (OpenGL for Java). I've recently implemented rounded rectangles by rendering a couple of normal rectangles surrounded by circles, using GL11.GL_POINTS to render the circles. I have now reached the point where I am trying to implement animations, and for a window-open animation I decided to scale the window from small to normal with GL11.glScaled(). That works fine, but unfortunately my circles don't get resized.
I tried swapping my GL_POINTS circle render method for one that uses TRIANGLE_FANs, and the scaling then worked fine. My problem there was that the circles didn't look smooth and round at all, and if I increase the number of rendered triangles it starts to lag very quickly, even though my computer isn't bad at all.
This is the code I've used to render circles with GL_POINTS.
GL11.glEnable(GL11.GL_POINT_SMOOTH);
GL11.glHint(GL11.GL_POINT_SMOOTH_HINT, GL11.GL_NICEST);
GL11.glPointSize(radius);
GL11.glBegin(GL11.GL_POINTS);
GL11.glVertex2d(x, y);
GL11.glEnd();
GL11.glDisable(GL11.GL_POINT_SMOOTH);
This is the code I've used to scale the circles
GL11.glPushMatrix();
GL11.glTranslated(x, y, 0);
GL11.glScaled(2.0f, 2.0f, 1);
GL11.glTranslated(-x, -y, 0);
// render circles
GL11.glPopMatrix();
I expected the circles to scale according to the factor I passed to glScaled(), but currently they aren't rescaling at all; they are just rendered at their normal size.
Here's a demonstration of how to properly render a circle using triangle fans:
public void render() {
    glClearColor(0, 0, 0, 0);
    glClear(GL_COLOR_BUFFER_BIT);
    glLoadIdentity();
    // Coordinate system starts out as screen space coordinates
    glOrtho(0, 400, 300, 0, 1, -1);
    glColor3d(1, 0.5, 0.5);
    renderCircle(120, 120, 100);
    glColor3d(0.5, 1, 0.5);
    renderCircle(300, 200, 50);
    glColor3d(0.5, 0.5, 1);
    renderCircle(200, 250, 30);
}

private void renderCircle(double centerX, double centerY, double radius) {
    glPushMatrix();
    glTranslated(centerX, centerY, 0);
    glScaled(radius, radius, 1);
    // Another translation here would be wrong
    renderUnitCircle();
    glPopMatrix();
}

private void renderUnitCircle() {
    glBegin(GL_TRIANGLE_FAN);
    int numVertices = 100;
    double angle = 2 * Math.PI / numVertices;
    for (int i = 0; i < numVertices; ++i) {
        glVertex2d(Math.cos(i * angle), Math.sin(i * angle));
    }
    glEnd();
}
The point size set with glPointSize is the size of the point in pixels on screen, not in current coordinate units. That is why your circles were unaffected by glScaled. It's one reason not to use GL_POINTS to render circles; the other (arguably more important) reason is that this functionality is deprecated and unsupported in newer OpenGL profiles.
I'm building a 2D physics engine in Java, using OpenGL (from LWJGL) to display the objects. The problem I am having is that the transformation matrices seem to be applied to the frame in a different order than the one I write them in.
/**
 * Render the current frame
 */
private void render() {
    GL11.glMatrixMode(GL11.GL_PROJECTION);
    GL11.glLoadIdentity();
    GL11.glOrtho(0, Display.getDisplayMode().getWidth(), 0,
            Display.getDisplayMode().getHeight(), -1, 1);
    GL11.glMatrixMode(GL11.GL_MODELVIEW);
    GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_STENCIL_BUFFER_BIT);
    GL11.glPushMatrix();
    GL11.glTranslatef(Display.getDisplayMode().getWidth() / 2,
            Display.getDisplayMode().getHeight() / 2, 0.0f);
    renderFrameObjects();
    GL11.glPopMatrix();
}
public void renderFrameObjects() {
    Vector<ReactiveObject> objects = frame.getObjects();
    for (int i = 0; i < objects.size(); i++) {
        ReactiveObject currentObject = objects.get(i);
        Mesh2D mesh = currentObject.getMesh();
        GL11.glRotatef((float) (currentObject.getR() / Math.PI * 180), 0, 0, 1.0f);
        GL11.glTranslated(currentObject.getX(), currentObject.getY(), 0);
        GL11.glBegin(GL11.GL_POLYGON);
        for (int j = 0; j < mesh.size(); j++) {
            GL11.glVertex2d(mesh.get(j).x, mesh.get(j).y);
        }
        GL11.glEnd();
        GL11.glTranslated(-currentObject.getX(), -currentObject.getY(), 0);
        GL11.glRotatef((float) (-currentObject.getR() / Math.PI * 180), 0, 0, 1.0f);
    }
}
In renderFrameObjects() I apply a rotation, a translation, draw the object (mesh coordinates are relative to the object's x, y), reverse the translation, and reverse the rotation. Yet the effect I see when an object rotates (on collision) is what I would expect from applying a translation and then a rotation (i.e. the object rotates around a point at a radius). I can't seem to figure this one out, having tried various combinations of transformations.
Any help would be appreciated.
That is because the transformations are applied to the local coordinate system of the object, not to the object itself.
So the rotation rotates the coordinate system, and the translation is then applied within that rotated coordinate system.
BTW: don't undo your matrix changes by applying negative transformations. Round-off errors will accumulate, and it is probably also less efficient than using glPushMatrix and glPopMatrix.
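A sketch of the loop body using push/pop, based on the question's code; note that translating first and then rotating makes each object rotate about its own centre:

GL11.glPushMatrix();
GL11.glTranslated(currentObject.getX(), currentObject.getY(), 0);
GL11.glRotatef((float) (currentObject.getR() / Math.PI * 180), 0, 0, 1.0f);
GL11.glBegin(GL11.GL_POLYGON);
for (int j = 0; j < mesh.size(); j++) {
    GL11.glVertex2d(mesh.get(j).x, mesh.get(j).y);
}
GL11.glEnd();
GL11.glPopMatrix(); // no need to undo the transforms by hand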
How can I support functionality in my game whereby players can change their hairstyle, look, style of clothes, etc., so that whenever they wear a different item of clothing their avatar is updated with it?
Should I:

1. Have my designer create all possible combinations of armor, hairstyles, and faces as sprites (this could be a lot of work)?
2. Have my code automatically create the sprite when the player chooses what they should look like during their introduction to the game, along with all possible combinations of headgear/armor for that sprite? Then each time they select some different armor, the sprite for that armor/look combination is loaded.
3. Divide the character's sprite into components, like face, shirt, jeans, and shoes, with known pixel dimensions for each? Then whenever the player changes his helmet, for example, we use the pixel dimensions to put the helmet image in place of where the face image would normally be. (I'm using Java to build this game.)
4. Conclude this is not possible in 2D and use 3D for this instead?
5. Use some other method?

Please advise.
One major factor to consider is animation. If a character has armour with shoulder pads, those shoulder pads may need to move with his torso. Likewise, if he's wearing boots, those have to follow the same cycles as his bare feet would.
Essentially, what you need for your designers is a sprite sheet that lets your artists see all possible frames of animation for your base character. You then have them create custom hairstyles, boots, armour, etc. based on those sheets. Yes, it's a lot of work, but in most cases the elements will require a minimal amount of redrawing; boots are about the only thing I could see really taking a lot of work to re-create, since they change over multiple frames of animation. Be ruthless with your sprites; try to cut the required number down as much as possible.
After you've amassed a library of elements you can start cheating. Recycle the same hair style and adjust its colour either in Photoshop or directly in the game with sliders in your character creator.
The last step, to ensure good performance in-game, would be to flatten all the different elements' sprite sheets into a single sprite sheet that is then split up and stored in sprite buffers.
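A minimal sketch of that flattening step using Java 2D, assuming the layers are same-sized ARGB images ordered back to front:

import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.util.List;

public final class SpriteFlattener {
    // Composites body, clothes, hair, etc. into one image; normal
    // alpha blending draws each layer over the previous ones.
    public static BufferedImage flatten(List<BufferedImage> layers, int width, int height) {
        BufferedImage result = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = result.createGraphics();
        for (BufferedImage layer : layers) {
            g.drawImage(layer, 0, 0, null);
        }
        g.dispose();
        return result;
    }
}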
3D will not be necessary for this, but the painter's algorithm that is common in the 3D world might, IMHO, save you some work:
The painter's algorithm works by drawing the most distant objects first, then overdrawing with objects closer to the camera. In your case, it would boil down to generating the buffer for your sprite, drawing the base onto the buffer, finding the next dependent sprite part (i.e. armour or whatnot), drawing that, finding the next dependent sprite part (i.e. a special sign that's on the armour), and so on. When there are no more dependent parts, you paint the fully generated sprite onto the display the user sees.
The combined parts should have an alpha channel (RGBA instead of RGB) so that you only combine parts whose alpha value is set to a value of your choice. If you cannot do that for whatever reason, just pick one RGB combination that you will treat as transparent.
Using 3D might make combining the parts easier for you, and you would not even have to use an offscreen buffer or write the pixel-combining code. The flip side is that you need to learn a little 3D if you don't know it already. :-)
Edit to answer comment:
The combination part would work somewhat like this (in C++; Java will be pretty similar - please note that I did not run the code below through a compiler):
//
// @param dependant_textures is a vector of textures where
//        texture n+1 depends on texture n.
// @param combined_tex is the output of all textures combined
void Sprite::combineTextures (vector<Texture> const& dependant_textures,
                              Texture& combined_tex) {
    vector<Texture>::const_iterator iter = dependant_textures.begin();
    combined_tex = *iter;
    if (dependant_textures.size() > 1)
        for (iter++; iter != dependant_textures.end(); iter++) {
            Texture const& current_tex = *iter;
            // Go through each pixel, painting over the combined texture
            // wherever the current texture is opaque:
            for (unsigned int pixel_index = 0;
                 pixel_index < current_tex.numPixels(); pixel_index++) {
                // Assuming that Texture has a method to export the raw pixel
                // data as an array of chars - check the alpha value:
                int const BYTESPERPIXEL = 4; // RGBA
                if (current_tex.getRawData()[pixel_index * BYTESPERPIXEL + 3])
                    for (int copied_bytes = 0; copied_bytes < 3; copied_bytes++)
                    {
                        int index = pixel_index * BYTESPERPIXEL + copied_bytes;
                        combined_tex.getRawData()[index] =
                            current_tex.getRawData()[index];
                    }
            }
        }
}
To answer your question regarding a 3D solution: you would simply draw rectangles with their respective textures (each with an alpha channel) over each other, with the system set up to display in an orthogonal mode (for OpenGL: gluOrtho2D()).
I'd go with the procedural generation solution (#2), as long as there isn't such a large number of sprites to generate that the generation takes too long. Maybe do the generation when each item is acquired, to lower the load.
Since I was asked in the comments to supply a 3D way as well, here is an excerpt of some code I wrote quite some time ago. It's OpenGL and C++.
Each sprite would be asked to draw itself. Using the Adapter pattern, I would combine sprites: there would be composite sprites holding two or more sub-sprites at a (0, 0) relative position, with one sprite carrying the real position for all those sub-sprites.
void Sprite::display (void) const
{
    glBindTexture(GL_TEXTURE_2D, tex_id_);
    Display::drawTranspRect(model_->getPosition().x + draw_dimensions_[0] / 2.0f,
                            model_->getPosition().y + draw_dimensions_[1] / 2.0f,
                            draw_dimensions_[0] / 2.0f, draw_dimensions_[1] / 2.0f);
}
void Display::drawTranspRect (float x, float y, float x_len, float y_len)
{
    glPushMatrix();
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glColor4f(1.0, 1.0, 1.0, 1.0);
    glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex3f(x - x_len, y - y_len, Z);
    glTexCoord2f(1.0f, 0.0f); glVertex3f(x + x_len, y - y_len, Z);
    glTexCoord2f(1.0f, 1.0f); glVertex3f(x + x_len, y + y_len, Z);
    glTexCoord2f(0.0f, 1.0f); glVertex3f(x - x_len, y + y_len, Z);
    glEnd();
    glDisable(GL_BLEND);
    glPopMatrix();
}
The tex_id_ is an integral value that identifies the texture to OpenGL. The relevant parts of the texture manager are below. The texture manager actually emulates an alpha channel by checking whether the color read is pure white (RGB (ff, ff, ff)); the loadImage code operates on 24-bits-per-pixel BMP files:
TextureManager::texture_id
TextureManager::createNewTexture (Texture const& tex) {
    texture_id id;
    glGenTextures(1, &id);
    glBindTexture(GL_TEXTURE_2D, id);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, 4, tex.width_, tex.height_, 0,
                 GL_BGRA_EXT, GL_UNSIGNED_BYTE, tex.texture_);
    return id;
}
void TextureManager::loadImage (FILE* f, Texture& dest) const {
    fseek(f, 18, SEEK_SET);
    signed int compression_method;
    unsigned int const HEADER_SIZE = 54;
    fread(&dest.width_, sizeof(unsigned int), 1, f);
    fread(&dest.height_, sizeof(unsigned int), 1, f);
    fseek(f, 28, SEEK_SET);
    fread(&dest.bpp_, sizeof(unsigned short), 1, f);
    fseek(f, 30, SEEK_SET);
    fread(&compression_method, sizeof(unsigned int), 1, f);
    // We store 4 channels (RGBA), because we will manually set an alpha
    // channel for the color white; the source BMP has 3 channels.
    dest.size_ = dest.width_ * dest.height_ * 4;
    dest.texture_ = new unsigned char[dest.size_];
    unsigned char* buffer = new unsigned char[3 * dest.size_ / 4];
    // Slurp in the whole image and replace all white pixels with green
    // values and an alpha value of 0:
    fseek(f, HEADER_SIZE, SEEK_SET);
    fread(buffer, sizeof(unsigned char), 3 * dest.size_ / 4, f);
    for (unsigned int count = 0; count < dest.width_ * dest.height_; count++) {
        dest.texture_[0 + count * 4] = buffer[0 + count * 3];
        dest.texture_[1 + count * 4] = buffer[1 + count * 3];
        dest.texture_[2 + count * 4] = buffer[2 + count * 3];
        dest.texture_[3 + count * 4] = 0xff;
        if (dest.texture_[0 + count * 4] == 0xff &&
            dest.texture_[1 + count * 4] == 0xff &&
            dest.texture_[2 + count * 4] == 0xff) {
            dest.texture_[0 + count * 4] = 0x00;
            dest.texture_[1 + count * 4] = 0xff;
            dest.texture_[2 + count * 4] = 0x00;
            dest.texture_[3 + count * 4] = 0x00;
            dest.uses_alpha_ = true;
        }
    }
    delete[] buffer;
}
This was actually from a small Jump'n'Run that I developed occasionally in my spare time. It used gluOrtho2D() mode as well, by the way. If you leave a means to contact you, I will send you the source if you want.
Older 2D games such as Diablo and Ultima Online use a sprite-compositing technique to do this. You could look for art from those kinds of older 2D isometric games to see how they did it.