Combined JavaFX 3D shapes seem transparent - java

I'm working on a 3D project in JavaFX 8. I have built a car 3D model from several TriangleMesh objects, and I'm also using several other JavaFX Shape3D objects to create the wheels and axles.
The problem is that the MeshView elements seem transparent: I can see the other Shape3D objects through them.
Two Cylinders are visible even though the MeshView is in front of them.
Here is an example of one of the TriangleMeshes I made:
// ============================= ROOF ============================= //
TriangleMesh roofMesh = new TriangleMesh(VertexFormat.POINT_TEXCOORD);
roofMesh.getPoints().addAll(
/* X */ -roofWidth/2.f, /* Y */ roofHeight + wheelDiameter / 2 + wheelGap + doorHeight, /* Z */ - roofLength/2, //PT0
/* X */ roofWidth/2.f, /* Y */ roofHeight + wheelDiameter / 2 + wheelGap + doorHeight, /* Z */ - roofLength/2, //PT1
/* X */ -roofWidth/2.f, /* Y */ roofHeight + wheelDiameter / 2 + wheelGap + doorHeight, /* Z */ roofLength/2, //PT2
/* X */ roofWidth/2.f, /* Y */ roofHeight + wheelDiameter / 2 + wheelGap + doorHeight, /* Z */ roofLength/2 //PT3
);
roofMesh.getTexCoords().addAll(
0, 0, // t0
1, 0, // t1
0, 1, // t2
1, 1 // t3
);
roofMesh.getFaces().addAll(
1,1, 0,0, 2,2,
3,3, 1,2, 2,1
);
After creating the mesh, I create a new MeshView object:
meshViewMap.put("roof", new MeshView(roofMesh));
I have also applied a Material to the MeshViews:
private void setTexColor(Shape3D shape, Color c, String imagePath)
{
    PhongMaterial pm = new PhongMaterial();
    pm.setDiffuseColor(c);
    pm.setSpecularColor(c);
    shape.setMaterial(pm);
}
These are the Cylinders that you can see in the image:
//Create Axles
Cylinder frontCylinder = new Cylinder(0.5, bodyWidth);
Cylinder rearCylinder = new Cylinder(0.5, bodyWidth);
PhongMaterial cylinderMat = new PhongMaterial();
cylinderMat.setDiffuseColor(Color.BLACK);
cylinderMat.setSpecularColor(Color.BLACK);
frontCylinder.setMaterial(cylinderMat);
rearCylinder.setMaterial(cylinderMat);
frontCylinder.setRotate(90);
rearCylinder.setRotate(90);
frontCylinder.setTranslateZ( 0.7f * (bodyLength/2 + hoodLength/2));
rearCylinder.setTranslateZ( -0.4f * (bodyLength/2 + hoodLength/2));
frontCylinder.setTranslateY(wheelDiameter/2);
rearCylinder.setTranslateY(wheelDiameter/2);
this.getChildren().add(frontCylinder);
this.getChildren().add(rearCylinder);
I have tried to set the opacity to 1 even though it is the default value.
Java Version 8.0.121-b13

By default, a JavaFX Scene doesn't include a depth buffer. When used for 3D, this may result in weird Escherian artifacts where objects or surfaces farther from the camera are drawn on top of those closer to the camera.
An application may request depth buffer support or scene anti-aliasing support at the creation of a Scene. A scene with only 2D shapes and without any 3D transforms does not need a depth buffer nor scene anti-aliasing support.
To enable the depth buffer, use one of the constructors that takes a boolean depthBuffer argument.
For a SubScene, the corresponding constructor also requires a SceneAntialiasing argument. (The default value would be SceneAntialiasing.DISABLED.)
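For example, a minimal sketch (assuming the car model hangs under a Group named root and an 800 x 600 window; adjust to your own setup):

Scene scene = new Scene(root, 800, 600, true, SceneAntialiasing.BALANCED);
// or, if the 3D content lives in a SubScene embedded in a 2D UI:
SubScene subScene = new SubScene(root, 800, 600, true, SceneAntialiasing.BALANCED);

The boolean argument requests the depth buffer; the SceneAntialiasing argument controls scene anti-aliasing.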
(Based on Fabian's comment, for those who don't look closely at comments.)

Related

3D Object selection in OpenGL

I am currently making a 3D chess game in OpenGL. I still struggle with the selection of the different figures. I followed the tutorials by thinmatrix and came this far: https://imgur.com/gallery/oLv5ReI.
Now I want the user to be able to select the figures by clicking on them. I have the camera position, the ray direction in which the mouse is pointing, and the positions of the figures. How can I detect whether the ray hits a figure (probably using a rectangular hitbox) when it starts at the position of the camera?
My code so far:
public void update(Vector3f mouseRay, Camera camera, Figure figure) {
    Vector3f start = camera.getPosition();
    Vector3f figurePos = figure.getPosition();
    if (intersect()) {
        selectFigure();
    }
}
EDIT:
I tried this:
Ray-Sphere intersection
but it somehow didn't work. A sphere intersection also seemed very inefficient compared to a ray-box intersection.
You'll have to follow these steps (I'm assuming you are familiar with the rendering pipeline and with OpenGL/WebGL):
Get the list of all the objects you have.
Assign every object a unique color. The following is an easy way to assign a unique color based on the index of the object in the list.
int i = // Index of the object
// We add 1 to the index because index 0 would be encoded as black,
// and our background is also rendered as black, so we skip that color.
int r = (i + 1 & 0x000000FF) >> 0;
int g = (i + 1 & 0x0000FF00) >> 8;
int b = (i + 1 & 0x00FF0000) >> 16;
glm::vec4 unique_color = glm::vec4(r / 255.0f, g / 255.0f, b / 255.0f, 1.0);
Create a frame-buffer and render all the objects with their uniquely assigned solid colors.
When the rendering is complete, read the pixel color at the click position from the rendered framebuffer texture.
Decode the color back into the object index as shown below. (This is exactly the reverse of what we did in step 2.)
int triangle_index =
color.r +
color.g * 256 +
color.b * 256 * 256;
With this index you have the selected object from the initial list of all objects.
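For the read-back and decode steps, here is a rough sketch in Java with LWJGL-style bindings (the binding choice and the mouseX/mouseY/windowHeight variables are assumptions; in C++ the glReadPixels call takes the same arguments):

// Assumes the picking framebuffer (with the solid-color pass) is bound for reading
ByteBuffer pixel = BufferUtils.createByteBuffer(4);
// OpenGL's window origin is bottom-left, so flip the mouse y coordinate
GL11.glReadPixels(mouseX, windowHeight - mouseY, 1, 1,
        GL11.GL_RGBA, GL11.GL_UNSIGNED_BYTE, pixel);
int r = pixel.get(0) & 0xFF;
int g = pixel.get(1) & 0xFF;
int b = pixel.get(2) & 0xFF;
int objectIndex = r + g * 256 + b * 256 * 256 - 1; // undo the +1 offset; -1 means background
if (objectIndex >= 0) {
    // objectIndex points into the initial list of objects
}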
You can read more about this technique here, http://www.opengl-tutorial.org/miscellaneous/clicking-on-objects/picking-with-an-opengl-hack/

Transformations from pixels to NDC

Let's say that my screen is 800 × 600 and I have a 2D quad drawn with the following vertex positions (in NDC), using a triangle strip:
float[] vertices = {-0.2f,0.2f,-0.2f,-0.2f,0.2f,0.2f,0.2f,-0.2f};
And I set up my transformation matrix this way:
Vector2f position = new Vector2f(0,0);
Vector2f size = new Vector2f(1.0f,1.0f);
Matrix4f tranMatrix = new Matrix4f();
tranMatrix.setIdentity();
Matrix4f.translate(position, tranMatrix, tranMatrix);
Matrix4f.scale(new Vector3f(size.x, size.y, 1f), tranMatrix, tranMatrix);
And my vertex shader:
#version 150 core
in vec2 in_Position;
uniform mat4 transMatrix;
void main(void) {
gl_Position = transMatrix * vec4(in_Position,0,1.0);
}
My question is: which formula should I use to specify the transformations of my quad in pixel coordinates?
For example :
set Scale : (50px, 50px) => Vector2f(width,height)
set Position : (100px, 100px) => Vector2f(x,y)
To make this concrete, I would like a function that converts my pixel data to NDC before sending it to the vertex shader. I was advised to use an orthographic projection, but I don't know how to create one correctly, and as you can see in my vertex shader I don't use any projection matrix.
Here is a topic similar to mine but not very clear - Transform to NDC, calculate and transform back to worldspace
EDIT:
I created my orthographic projection matrix by following the formula, but nothing seems to appear. Here is how I proceeded:
public static Matrix4f glOrtho(float left, float right, float bottom, float top, float near, float far) {
    final Matrix4f matrix = new Matrix4f();
    matrix.setIdentity();
    matrix.m00 = 2.0f / (right - left);
    matrix.m01 = 0;
    matrix.m02 = 0;
    matrix.m03 = 0;
    matrix.m10 = 0;
    matrix.m11 = 2.0f / (top - bottom);
    matrix.m12 = 0;
    matrix.m13 = 0;
    matrix.m20 = 0;
    matrix.m21 = 0;
    matrix.m22 = -2.0f / (far - near);
    matrix.m23 = 0;
    matrix.m30 = -(right + left) / (right - left);
    matrix.m31 = -(top + bottom) / (top - bottom);
    matrix.m32 = -(far + near) / (far - near);
    matrix.m33 = 1;
    return matrix;
}
I then included my matrix in the vertex shader
#version 140
in vec2 position;
uniform mat4 projMatrix;
void main(void){
gl_Position = projMatrix * vec4(position,0.0,1.0);
}
What did I miss?
New Answer
After clarifications in the comments, the question being asked can be summed up as:
How do I effectively transform a quad in terms of pixels for use in a GUI?
As mentioned in the original question, the simplest approach to this will be using an Orthographic Projection. What is an Orthographic Projection?
a method of projection in which an object is depicted or a surface mapped using parallel lines to project its shape onto a plane.
In practice, you may think of this as a 2D projection. Distance plays no role, and the OpenGL coordinates map to pixel coordinates. See this answer for a bit more information.
By using an Orthographic Projection instead of a Perspective Projection you can start thinking of all of your transformations in terms of pixels.
Instead of defining a quad as (25 x 25) world units in dimension, it is (25 x 25) pixels in dimension.
Or instead of translating by 50 world units along the world x-axis, you translate by 50 pixels along the screen x-axis (to the right).
So how do you create an Orthographic Projection?
First, they are usually defined using the following parameters:
left - X coordinate of the left vertical clipping plane
right - X coordinate of the right vertical clipping plane
bottom - Y coordinate of the bottom horizontal clipping plane
top - Y Coordinate of the top horizontal clipping plane
near - Near depth clipping plane
far - Far depth clipping plane
Remember, all units are in pixels. A typical Orthographic Projection would be defined as:
glOrtho(0.0, windowWidth, windowHeight, 0.0f, 0.0f, 1.0f);
Assuming you do not (or can not) make use of glOrtho (you have your own Matrix class or another reason), then you must calculate the Orthographic Projection matrix yourself.
The Orthographic Matrix is defined as:
2/(r-l)    0          0           -(r+l)/(r-l)
0          2/(t-b)    0           -(t+b)/(t-b)
0          0          -2/(f-n)    -(f+n)/(f-n)
0          0          0           1
Source A, Source B
At this point I recommend using a pre-made mathematics library unless you are determined to use your own. One of the most common sources of bugs I see in practice is matrix-related, and the less time you spend debugging matrices, the more time you have to focus on other, more fun endeavors.
GLM is a widely-used and respected library that is built to model GLSL functionality. The GLM implementation of glOrtho can be seen here at line 100.
How to use an Orthographic Projection?
Orthographic projections are commonly used to render a GUI on top of your 3D scene. This can be done easily enough by using the following pattern:
Clear Buffers
Apply your Perspective Projection Matrix
Render your 3D objects
Apply your Orthographic Projection Matrix
Render your 2D/GUI objects
Swap Buffers
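Tying this back to the code in the question, here is a rough sketch of feeding such a matrix to the projMatrix uniform (assuming LWJGL 2, the glOrtho helper from the question's edit, a linked shader program programId, and window dimensions windowWidth/windowHeight; vertex positions would then be specified in pixels):

Matrix4f proj = glOrtho(0, windowWidth, windowHeight, 0, -1, 1); // top-left origin, pixel units
FloatBuffer buf = BufferUtils.createFloatBuffer(16);
proj.store(buf); // LWJGL's Matrix4f stores column-major, which is what GLSL expects
buf.flip();
GL20.glUseProgram(programId);
GL20.glUniformMatrix4(GL20.glGetUniformLocation(programId, "projMatrix"), false, buf);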
Old Answer
Note that this answered the wrong question. It assumed the question boiled down to "How do I convert from Screen Space to NDC Space?". It is left in case someone searching comes upon this question looking for that answer.
The goal is to convert from Screen Space to NDC Space. So let's first define what those spaces are, and then we can create a conversion.
Normalized Device Coordinates
NDC space is simply the result of performing perspective division on our vertices in clip space.
clip.xyz /= clip.w
Where clip is the coordinate in clip space.
What this does is place all of our un-clipped vertices into a unit cube (in the range [-1, 1] on all axes), with the screen center at (0, 0, 0). Any vertices that are clipped (lie outside the view frustum) are not within this unit cube and are tossed away by the GPU.
In OpenGL this step is done automatically as part of Primitive Assembly (D3D11 does this in the Rasterizer Stage).
Screen Coordinates
Screen coordinates are simply calculated by expanding the normalized coordinates to the confines of your viewport.
screen.x = ((view.w * 0.5) * ndc.x) + ((view.w * 0.5) + view.x)
screen.y = ((view.h * 0.5) * ndc.y) + ((view.h * 0.5) + view.y)
screen.z = (((view.f - view.n) * 0.5) * ndc.z) + ((view.f + view.n) * 0.5)
Where,
screen is the coordinate in screen-space
ndc is the coordinate in normalized-space
view.x is the viewport x origin
view.y is the viewport y origin
view.w is the viewport width
view.h is the viewport height
view.f is the viewport far
view.n is the viewport near
Converting from Screen to NDC
As we have the conversion from NDC to Screen above, it is easy to calculate the reverse.
ndc.x = (((2.0 * screen.x) - (2.0 * view.x)) / view.w) - 1.0
ndc.y = (((2.0 * screen.y) - (2.0 * view.y)) / view.h) - 1.0
ndc.z = ((2.0 * screen.z) - view.f - view.n) / (view.f - view.n)
Example:
viewport (w, h, n, f) = (800, 600, 1, 1000)
screen.xyz = (400, 300, 200)
ndc.xyz = (0.0, 0.0, -0.602)
screen.xyz = (575, 100, 1)
ndc.xyz = (0.4375, -0.666, -1.0)
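As a small illustrative helper (plain Java; the method and parameter names are just for illustration), the inverse mapping could be written as:

// Screen/window coordinates back to NDC for a viewport with origin (vx, vy),
// size (vw, vh) and depth range (n, f)
static float[] screenToNdc(float sx, float sy, float sz,
                           float vx, float vy, float vw, float vh,
                           float n, float f) {
    float ndcX = (((2.0f * sx) - (2.0f * vx)) / vw) - 1.0f;
    float ndcY = (((2.0f * sy) - (2.0f * vy)) / vh) - 1.0f;
    float ndcZ = ((2.0f * sz) - f - n) / (f - n);
    return new float[] { ndcX, ndcY, ndcZ };
}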
Further Reading
For more information on all of the transform spaces, read OpenGL Transformation.
Edit for Comment
In the comment on the original question, Bo specifies screen-space origin as top-left.
For OpenGL, the viewport origin (and thus screen-space origin) lies at the bottom-left. See glViewport.
If your pixel coordinates are truly top-left origin then that needs to be taken into account when transforming screen.y to ndc.y.
ndc.y = 1.0 - (((2.0 * screen.y) - (2.0 * view.y)) / view.h)
This is needed if you are transforming, say, a coordinate of a mouse-click on screen/gui into NDC space (as part of a full transform to world space).
NDC coordinates are transformed to screen (i.e. window) coordinates using glViewport. This function (which you must call in your app) defines a portion of the window by an origin and a size.
The formulas used can be seen at https://www.khronos.org/registry/OpenGL-Refpages/gl2.1/xhtml/glViewport.xml
(x,y) are the origin, normally (0,0) the bottom left corner of the window.
While you can derive the inverse formulas on your own, here they are: https://www.khronos.org/opengl/wiki/Compute_eye_space_from_window_space#From_window_to_ndc
If I understand the question, you're trying to convert screen-space coordinates (the ones bounded by the size of your screen) to the -1 to 1 range. If so, it's quite simple. The equation is:
((screen_coordinate / screen_width_or_height) * 2) - 1
This works because, for example, on an 800 × 600 screen:
  800 / 800 = 1
  1 * 2 = 2
  2 - 1 = 1
and to check for a coordinate from half the screen on the height:
  300 / 600 = 0.5
  0.5 * 2 = 1
  1 - 1 = 0 (NDC is from -1 to 1 so 0 is middle)

Libgdx culling between ModelInstances and Decals

I can't figure out how to tell libGDX to draw my green spheres behind my transparent decal.
Here is an example picture of my problem:
The decal creation: the first two params are width and height, and the last flag is whether it is transparent or not.
Decal.newDecal(count * (GUTTER + BUTTONWIDTH) + GUTTER, 2 * GUTTER + BUTTONHEIGHT,
new TextureRegion(new Texture(Gdx.files.internal("icons/uibg.png"))), true);
The sphere creation:
builder.createSphere(
FINGERTIPRADIUS * 2, FINGERTIPRADIUS * 2, FINGERTIPRADIUS * 2,
6, 6,
new Material(ColorAttribute.createDiffuse(Color.GREEN)),
VertexAttributes.Usage.Position | VertexAttributes.Usage.Normal);
And the render method:
this.models = new ModelBatch();
this.decals = new DecalBatch(new CameraGroupStrategy(camera));
...
// adding decals and models to render queue
...
public void update(float deltaTime) {
    super.update(deltaTime);
    Gdx.gl.glClearColor(0, 0, 0, 1);
    Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT);
    models.begin(camera);
    for (Entity entity : queue) {
        ModelInstance model = Mappers.Object.get(entity).instance;
        models.render(model, environment);
    }
    decals.flush();
    models.end();
    queue.clear();
}
I appreciate any advice.
//EDIT
Added a BlendingAttribute to the spheres and an opacity of 0.7. This works. But I guess the problem lies somewhere between the decal and model rendering, because the grid in the background is a decal and it can be seen through the black transparent decal, but the spheres can't.
The new material code:
Material mat = new Material();
mat.set(ColorAttribute.createDiffuse(Color.GREEN));
mat.set(new BlendingAttribute(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA, 0.7f));
fingerTip = builder.createSphere(
FINGERTIPRADIUS * 2, FINGERTIPRADIUS * 2, FINGERTIPRADIUS * 2,
6, 6,
mat,
VertexAttributes.Usage.Position | VertexAttributes.Usage.Normal);
Here is another picture: the two middle spheres are not rendered behind the transparent decal as they should be.
Call models.end() before decals.flush(). Transparent stuff must be drawn after opaque stuff. Right now you are drawing the decal first, so it is writing its depth to the buffer before the spheres are drawn.
If your models are also transparent, this gets more complicated. You would need to sort your decals with your models somehow, and flush the rear models before you flush the decal, and finally flush the near models.
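In terms of the update() method from the question, a minimal sketch of that reordering (same fields as in the question) would be:

models.begin(camera);
for (Entity entity : queue) {
    ModelInstance model = Mappers.Object.get(entity).instance;
    models.render(model, environment);
}
models.end();   // opaque models are flushed first and write their depth
decals.flush(); // the transparent decal is drawn last and tests against that depth
queue.clear();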

AndEngine - Getting Center Coordinates of a Sprite

I have a sprite in my game and I want to get its center coordinates. The sprite can be rotated, though, so I can't just get its x/y coordinates and then add half of the sprite's width/height, because the getWidth and getHeight methods return the dimensions of the original, unrotated sprite.
I tried getSceneCenterCoordinates() but that for some reason returns the same coordinates for all sprites even if they are nowhere near each other.
Here's a graphic to describe what I want: the red dot is the coordinate I want, and the width/height labels on the right-side figure represent what I WANT the getWidth/Height methods to return (but they don't):
You can use the coordinate transformer:
final float[] spriteCoordinates = sprite.convertLocalToSceneCoordinates(x,y);
final float canonX = spriteCoordinates[VERTEX_INDEX_X];
final float canonY = spriteCoordinates[VERTEX_INDEX_Y];
Sprite sprite_in_fixedpoint_of_the_first_sprite = new Sprite(canonX, canonY, textureregion);
Get the minimum and maximum values of all the corners (for both the x and y components).
Then take the average of the max and min to get the middle:
middleX = (maxX + minX) / 2
Repeat the same for the y component.
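A rough sketch of that idea using AndEngine's Entity API (VERTEX_INDEX_X/Y are the same constants used in the snippet above; note that convertLocalToSceneCoordinates may reuse an internal array, so copy the values out before the next call):

float w = sprite.getWidth(), h = sprite.getHeight();
float minX = Float.MAX_VALUE, minY = Float.MAX_VALUE;
float maxX = -Float.MAX_VALUE, maxY = -Float.MAX_VALUE;
float[][] localCorners = { {0, 0}, {w, 0}, {0, h}, {w, h} };
for (float[] corner : localCorners) {
    float[] scene = sprite.convertLocalToSceneCoordinates(corner[0], corner[1]);
    float sx = scene[VERTEX_INDEX_X];
    float sy = scene[VERTEX_INDEX_Y];
    minX = Math.min(minX, sx);
    maxX = Math.max(maxX, sx);
    minY = Math.min(minY, sy);
    maxY = Math.max(maxY, sy);
}
float middleX = (maxX + minX) / 2;
float middleY = (maxY + minY) / 2;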
Consider something like this
Rectangle test = new Rectangle(100, 100, 50, 100, vboManager);
mainScene.attachChild(test);
System.out.println("Before RotCtrX = " + test.getRotationCenterX());
System.out.println("Before RotCtrY = " + test.getRotationCenterY());
test.setRotation(45);
System.out.println("After RotCtrX = " + test.getRotationCenterX());
System.out.println("After RotCtrY = " + test.getRotationCenterY());
and the result are
System.out(4526): Before RotCtrX = 25.0
System.out(4526): Before RotCtrY = 50.0
System.out(4526): After RotCtrX = 25.0
System.out(4526): After RotCtrY = 50.0
AndEngine Entities rotate around their center (unless you change the RotationCenter values), so applying setRotation() will not affect the "center" point location.
Those "center" points are relative to the Entity, so if you need the actual scene coordinates, you will need to add them to the getX() and getY() values - which, by the way, also won't change based solely on applying a setRotation().
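For the Rectangle from the example above, that would be roughly:

// Entity position plus entity-relative rotation center (illustration only)
float centerSceneX = test.getX() + test.getRotationCenterX(); // 100 + 25 = 125
float centerSceneY = test.getY() + test.getRotationCenterY(); // 100 + 50 = 150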
I figured out how to use getSceneCenterCoordinates() properly, which is to hand it a float[]. Here is my solution:
float[] objCenterPos = new float[2];
obj.sprite.getSceneCenterCoordinates(objCenterPos);
Log.d(this.toString(), "Coordinates: ("+objCenterPos[0]+","+objCenterPos[1]+")");

Customizable player avatar in a 2D Game

How can I add functionality to my game that lets players change their hairstyle, look, style of clothes, etc., so that whenever they wear a different item of clothing their avatar is updated with it?
Should I:
Have my designer create all possible combinations of armor, hairstyles, and faces as sprites (this could be a lot of work).
When the player chooses what they should look like during their introduction to the game, my code would automatically create this sprite, and all possible combinations of headgear/armor with that sprite. Then each time they select some different armor, the sprite for that armor/look combination is loaded.
Is it possible to have a character's sprite divided into components, like face, shirt, jeans, shoes, and have the pixel dimensions of each of these. Then whenever the player changes his helmet, for example, we use the pixel dimensions to put the helmet image in place of where its face image would normally be. (I'm using Java to build this game)
Is this not possible in 2D and I should use 3D for this?
Any other method?
Please advise.
One major factor to consider is animation. If a character has armour with shoulder pads, those shoulder pads may need to move with his torso. Likewise, if he's wearing boots, those have to follow the same cycles as his bare feet would.
Essentially what you need for your designers is a Sprite Sheet that lets your artists see all possible frames of animation for your base character. You then have them create custom hairstyles, boots, armour, etc. based on those sheets. Yes, it's a lot of work, but in most cases the elements will require a minimal amount of redrawing; boots are about the only thing I could see really taking a lot of work to re-create, since they change over multiple frames of animation. Be ruthless with your sprites and try to cut down the required number as much as possible.
After you've amassed a library of elements you can start cheating. Recycle the same hair style and adjust its colour either in Photoshop or directly in the game with sliders in your character creator.
The last step, to ensure good performance in-game, would be to flatten all the different elements' sprite sheets into a single sprite sheet that is then split up and stored in sprite buffers.
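If you do that flattening at runtime in Java, a minimal sketch with plain Java2D (BufferedImage and Graphics2D from java.awt; the method name and the assumption that all layers share the base frame's dimensions and have an alpha channel are mine) could look like:

static BufferedImage composeFrame(BufferedImage base, BufferedImage... layers) {
    BufferedImage out = new BufferedImage(base.getWidth(), base.getHeight(),
            BufferedImage.TYPE_INT_ARGB);
    Graphics2D g = out.createGraphics();
    g.drawImage(base, 0, 0, null);      // most distant layer first
    for (BufferedImage layer : layers) {
        g.drawImage(layer, 0, 0, null); // overdraw with closer layers (clothes, armour, hair)
    }
    g.dispose();
    return out;
}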
3D will not be necessary for this, but the painter algorithm that is common in the 3D world might IMHO save you some work:
The painter algorithm works by drawing the most distant objects first, then overdrawing with objects closer to the camera. In your case, it would boil down to generating the buffer for your sprite, drawing it onto the buffer, finding the next dependent sprite part (i.e. armour or whatnot), drawing that, finding the next dependent sprite part (i.e. a special sign that's on the armour), and so on. When there are no more dependent parts, you paint the full generated sprite onto the display the user sees.
The combined parts should have an alpha channel (RGBA instead of RGB) so that you only combine parts that have an alpha value set to a value of your choice. If you cannot do that for whatever reason, just stick with one RGB combination that you will treat as transparent.
Using 3D might make combining the parts easier for you, and you would not even have to use an offscreen buffer or write the pixel-combining code. The flip side is that you need to learn a little 3D if you don't know it already. :-)
Edit to answer comment:
The combination part would work somewhat like this (in C++, Java will be pretty similar - please note that I did not run the code below through a compiler):
//
// #param dependant_textures is a vector of textures where
//        texture n+1 depends on texture n.
// #param combined_tex is the output of all textures combined
void Sprite::combineTextures (vector<Texture> const& dependant_textures,
                              Texture& combined_tex) {
    vector<Texture>::const_iterator iter = dependant_textures.begin();
    combined_tex = *iter;
    if (dependant_textures.size() > 1)
        for (iter++; iter != dependant_textures.end(); iter++) {
            Texture const& current_tex = *iter;
            // Go through each pixel, painting:
            for (unsigned int pixel_index = 0;
                 pixel_index < current_tex.numPixels(); pixel_index++) {
                // Assuming that Texture has a method to export the raw pixel data
                // as an array of chars - check the alpha value and copy the pixel
                // only where the closer (overlaid) texture is opaque:
                int const BYTESPERPIXEL = 4; // RGBA
                if (current_tex.getRawData()[pixel_index * BYTESPERPIXEL + 3])
                    for (int copied_bytes = 0; copied_bytes < 3; copied_bytes++)
                    {
                        int index = pixel_index * BYTESPERPIXEL + copied_bytes;
                        combined_tex.getRawData()[index] =
                            current_tex.getRawData()[index];
                    }
            }
        }
}
To answer your question for a 3D solution, you would simply draw rectangles with their respective textures (which would have an alpha channel) over each other. You would set the system up to display in an orthographic mode (for OpenGL: gluOrtho2D()).
I'd go with the procedural generation solution (#2), as long as there isn't a prohibitive number of sprites to generate such that the generation takes too long. Maybe do the generation when each item is acquired, to lower the load.
Since I was asked in the comments to supply a 3D way as well, here is an excerpt of some code I wrote quite some time ago. It's OpenGL and C++.
Each sprite would be asked to draw itself. Using the Adapter pattern, I would combine sprites - i.e. there would be sprites that would hold two or more sprites that had a (0,0) relative position and one sprite with a real position having all those "sub-"sprites.
void Sprite::display (void) const
{
    glBindTexture(GL_TEXTURE_2D, tex_id_);
    Display::drawTranspRect(model_->getPosition().x + draw_dimensions_[0] / 2.0f,
                            model_->getPosition().y + draw_dimensions_[1] / 2.0f,
                            draw_dimensions_[0] / 2.0f, draw_dimensions_[1] / 2.0f);
}

void Display::drawTranspRect (float x, float y, float x_len, float y_len)
{
    glPushMatrix();
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glColor4f(1.0, 1.0, 1.0, 1.0);
    glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex3f(x - x_len, y - y_len, Z);
    glTexCoord2f(1.0f, 0.0f); glVertex3f(x + x_len, y - y_len, Z);
    glTexCoord2f(1.0f, 1.0f); glVertex3f(x + x_len, y + y_len, Z);
    glTexCoord2f(0.0f, 1.0f); glVertex3f(x - x_len, y + y_len, Z);
    glEnd();
    glDisable(GL_BLEND);
    glPopMatrix();
}
tex_id_ is an integral value that identifies the texture to OpenGL. The relevant parts of the texture manager are these. The texture manager actually emulates an alpha channel by checking whether the color read is pure white (RGB (ff,ff,ff)) - the loadImage code operates on 24-bits-per-pixel BMP files:
TextureManager::texture_id
TextureManager::createNewTexture (Texture const& tex) {
    texture_id id;
    glGenTextures(1, &id);
    glBindTexture(GL_TEXTURE_2D, id);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, 4, tex.width_, tex.height_, 0,
                 GL_BGRA_EXT, GL_UNSIGNED_BYTE, tex.texture_);
    return id;
}
void TextureManager::loadImage (FILE* f, Texture& dest) const {
    fseek(f, 18, SEEK_SET);
    signed int compression_method;
    unsigned int const HEADER_SIZE = 54;
    fread(&dest.width_, sizeof(unsigned int), 1, f);
    fread(&dest.height_, sizeof(unsigned int), 1, f);
    fseek(f, 28, SEEK_SET);
    fread(&dest.bpp_, sizeof (unsigned short), 1, f);
    fseek(f, 30, SEEK_SET);
    fread(&compression_method, sizeof(unsigned int), 1, f);
    // We add 4 channels, because we will manually set an alpha channel
    // for the color white.
    dest.size_ = dest.width_ * dest.height_ * dest.bpp_/8 * 4;
    dest.texture_ = new unsigned char[dest.size_];
    unsigned char* buffer = new unsigned char[3 * dest.size_ / 4];
    // Slurp in whole file and replace all white colors with green
    // values and an alpha value of 0:
    fseek(f, HEADER_SIZE, SEEK_SET);
    fread (buffer, sizeof(unsigned char), 3 * dest.size_ / 4, f);
    for (unsigned int count = 0; count < dest.width_ * dest.height_; count++) {
        dest.texture_[0+count*4] = buffer[0+count*3];
        dest.texture_[1+count*4] = buffer[1+count*3];
        dest.texture_[2+count*4] = buffer[2+count*3];
        dest.texture_[3+count*4] = 0xff;
        if (dest.texture_[0+count*4] == 0xff &&
            dest.texture_[1+count*4] == 0xff &&
            dest.texture_[2+count*4] == 0xff) {
            dest.texture_[0+count*4] = 0x00;
            dest.texture_[1+count*4] = 0xff;
            dest.texture_[2+count*4] = 0x00;
            dest.texture_[3+count*4] = 0x00;
            dest.uses_alpha_ = true;
        }
    }
    delete[] buffer;
}
This was actually a small Jump'n'Run that I developed occasionally in my spare time. It used gluOrtho2D() mode as well, by the way. If you leave me a means to contact you, I will send you the source if you want.
Older 2D games such as Diablo and Ultima Online use a sprite compositing technique to do this. You could search for art from those kinds of older 2D isometric games to see how they did it.
