I am currently working on a program to help photographers create timelapses.
It calculates the decline or rise in brightness over a series of images, so that changes in exposure and ISO, for example, don't affect the overall brightness ramp.
For this I use a simple Swing-based interface which displays the first and last image. Under them are sliders to adjust the brightness of each image.
The adjustment is applied by directly manipulating the BufferedImage's underlying DataBuffer.
Mostly this works, but I encountered some images which seem to have some kind of problem.
Do you have an idea why this is happening?
public BufferedImage getImage(float mult) {
    BufferedImage retim = new BufferedImage(img.getWidth(), img.getHeight(), img.getType());
    Graphics g = retim.getGraphics();
    g.drawImage(img, 0, 0, null);
    g.dispose();
    DataBufferByte db = (DataBufferByte) retim.getRaster().getDataBuffer();
    byte[] bts = db.getData();
    for (int i = 0; i < bts.length; i++) {
        float n = bts[i] * mult;
        if (n > 255) {
            bts[i] = (byte) 255;
        } else {
            bts[i] = (byte) n;
        }
    }
    return retim;
}
This is the method that takes a float and multiplies every pixel in the image by it (plus some code to prevent the byte values from overflowing).
This is the unwanted behaviour (on the left) and the expected behaviour (on the right).
Your problem is this line, and it occurs because Java bytes are signed (in the range [-128...127]):
float n = bts[i] * mult;
Any channel value above 127 is read back as a negative byte, so after the multiplication your n variable may be negative, producing the corrupted pixels you see.
To fix it, use a bit mask to get the value as an unsigned integer (in the range [0...255]), before multiplying with the constant:
float n = (bts[i] & 0xff) * mult;
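Applied to the whole loop from the question, only that one line changes:

for (int i = 0; i < bts.length; i++) {
    // Mask first, so the byte is treated as unsigned [0..255] before scaling.
    float n = (bts[i] & 0xff) * mult;
    if (n > 255) {
        bts[i] = (byte) 255;
    } else {
        bts[i] = (byte) n;
    }
}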
A better fix yet is probably to use RescaleOp, which is built to do brightness adjustments on BufferedImages.
Something like:
public BufferedImage getImage(float mult) {
return new RescaleOp(mult, 0, null).filter(img, null);
}
This is due to the capping of the value in individual channels of the image.
For example (assuming RGB simple colour space):
The pixel starts at (125,255,0); if you multiply by a factor of 2.0, the result is clamped to (250,255,0). Because one channel was capped while the other doubled, the channel ratios change, and this is a different hue than the original.
This is also why the strange results only occur on pixels that already have a high brightness to start with.
This link may help with a better algorithm for adjusting brightness.
You could also refer to this related question.
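As an illustration of one such hue-preserving approach (a hedged sketch, not necessarily the linked algorithm): cap the effective multiplier per pixel so no channel clips, trading some highlight brightness for correct hue.

// Scales an RGB pixel, limiting the multiplier so the largest channel
// just reaches 255; the channel ratios (and thus the hue) are preserved.
static int scalePreservingHue(int rgb, float mult) {
    int r = (rgb >> 16) & 0xff;
    int g = (rgb >> 8) & 0xff;
    int b = rgb & 0xff;
    int max = Math.max(r, Math.max(g, b));
    float m = (max > 0) ? Math.min(mult, 255f / max) : mult;
    return ((int) (r * m) << 16) | ((int) (g * m) << 8) | (int) (b * m);
}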
I want to achieve interpolation between red and blue, something like this, but in a single line.
My Java code:
private PixelData InterpolateColour(float totalLength, float curLength) {
    float startColourV[] = new float[3];
    Color.RGBtoHSB(m_start.getColour().getR() & 0xFF,
                   m_start.getColour().getG() & 0xFF,
                   m_start.getColour().getB() & 0xFF, startColourV);
    float endColourV[] = new float[3];
    Color.RGBtoHSB(m_end.getColour().getR() & 0xFF,
                   m_end.getColour().getG() & 0xFF,
                   m_end.getColour().getB() & 0xFF, endColourV);
    float endPercent = curLength / totalLength;
    float startPercent = 1 - curLength / totalLength;
    float h = endColourV[0] * endPercent + startColourV[0] * startPercent;
    float s = endColourV[1] * endPercent + startColourV[1] * startPercent;
    float b = endColourV[2] * endPercent + startColourV[2] * startPercent;
    int colourRGB = Color.HSBtoRGB(h, s, b);
    byte[] ByteArray = ByteBuffer.allocate(4).putInt(colourRGB).array();
    return new PixelData(ByteArray[0], ByteArray[3], ByteArray[2], ByteArray[1]);
}
and the result I am getting is this.
I don't understand where all that green is coming from. Can somebody please help me?
Why not just use RGB with simple linear interpolation for this:
color(t) = (color0 * t) + (color1 * (1.0 - t))
where t ∈ [0.0, 1.0] is the parameter. So just loop it over the full range with as many steps as you need. (The green in your output comes from interpolating the hue channel: moving linearly from red's hue to blue's hue passes right through green.)
Integer C++/VCL example (sorry, not a Java coder):
// Borland GDI clear screen
Canvas->Brush->Color = clBlack;
Canvas->FillRect(ClientRect);
// easy access to RGB channels
union _color
{
    DWORD dd;
    BYTE db[4];
} c0, c1, c;
// 0x00BBGGRR
c0.dd = 0x000000FF; // Red
c1.dd = 0x00FF0000; // Blue
int x, y, t0, t1;
for (x = 0, y = ClientHeight/2; x < ClientWidth; x++)
{
    t0 = x;
    t1 = ClientWidth - 1 - x;
    c.db[0] = ((DWORD(c0.db[0]) * t0) + (DWORD(c1.db[0]) * t1)) / (ClientWidth - 1);
    c.db[1] = ((DWORD(c0.db[1]) * t0) + (DWORD(c1.db[1]) * t1)) / (ClientWidth - 1);
    c.db[2] = ((DWORD(c0.db[2]) * t0) + (DWORD(c1.db[2]) * t1)) / (ClientWidth - 1);
    c.db[3] = ((DWORD(c0.db[3]) * t0) + (DWORD(c1.db[3]) * t1)) / (ClientWidth - 1);
    Canvas->Pixels[x][y] = c.dd;
}
where ClientWidth, ClientHeight are my app form resolution, Canvas is access to the form's GDI interface, and Canvas->Pixels[x][y] is single-pixel access (slow, but enough for this example). The only important stuff is the for loop. Here is the resulting image:
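Since the question is about Java, here is a hedged translation of the same loop using BufferedImage (the method name and drawing into an image are just for illustration):

import java.awt.image.BufferedImage;

// Draws one horizontal line interpolating red (left) to blue (right).
static void drawGradientLine(BufferedImage img, int y) {
    int w = img.getWidth();
    for (int x = 0; x < w; x++) {
        float t = x / (float) (w - 1);    // 0.0 at left, 1.0 at right
        int r = (int) (255 * (1.0f - t)); // red fades out
        int b = (int) (255 * t);          // blue fades in
        img.setRGB(x, y, 0xff000000 | (r << 16) | b);
    }
}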
Color interpolation is actually a fairly complex topic, due to the way human vision works.
Physical intensity and wavelengths don't map directly to perceived luminance and hues. After all, human eyes are not photon spectrometers; they just measure intensity through three primaries, each with a different sensitivity.
To have a metric, linear space that represents human color perception instead of physical attributes, we have CIELab. Since it is a metric space, interpolating between points should generally give you a linear transition in both hue and luminance.
But CIELab may not be sufficient, since it only models perceptual sensitivity. If you need to match real lighting, you also have to take into account that natural light sources do not illuminate all colors evenly.
If you need to match photorealistic material, then additionally correcting for the intensity spectrum of natural light may also be necessary. I.e., something illuminated by a candle will not have intense blue components, simply because the candle emits very little blue light that could be reflected.
I have copy-pasted some code I found on Stack Overflow to convert the default camera preview from YUV into RGB format, and then uploaded it to OpenGL for processing.
That worked fine; the issue is that most of the CPU was busy converting the YUV images into RGB, and it became the bottleneck.
I want to upload the YUV image to the GPU and then convert it to RGB in a fragment shader.
I took the same Java YUV-to-RGB function that worked on the CPU and tried to make it work on the GPU.
It turned out to be quite a little nightmare, since there are several differences between doing calculations in Java and on the GPU.
First, the preview image comes as a byte[] in Java, but bytes are signed, so there may be negative values.
In addition, the fragment shader normally deals with [0..1] floating-point values instead of bytes.
I am sure this is solvable, and I almost solved it, but I spent a few hours trying to figure out what I was doing wrong and couldn't make it work.
Bottom line, I'm asking for someone to just write this shader function, and preferably test it. For me it would be tedious monkey work, since I don't really understand why this conversion works the way it does, and I would just be trying to mimic the same function on the GPU.
This is a function very similar to the one I used on the CPU:
Displaying YUV Image in Android
I did some of the job on the CPU, such as turning the 1.5*w*h bytes of the YUV format into a w*h array of packed YUV values, as follows:
static public void decodeYUV420SP(int[] rgba, byte[] yuv420sp, int width, int height) {
    final int frameSize = width * height;
    for (int j = 0, yp = 0; j < height; j++) {
        int uvp = frameSize + (j >> 1) * width, u = 0, v = 0;
        for (int i = 0; i < width; i++, yp++) {
            int y = (int) yuv420sp[yp] + 127;
            if ((i & 1) == 0) {
                v = (int) yuv420sp[uvp++] + 127;
                u = (int) yuv420sp[uvp++] + 127;
            }
            rgba[yp] = 0xFF000000 + (y << 16) | (u << 8) | v;
        }
    }
}
I added 127 because byte is signed.
I then loaded the rgba array into an OpenGL texture and tried to do the rest of the calculation on the GPU.
Any help would be appreciated...
I used this code (based on the formulas from Wikipedia) to calculate the conversion from YUV to RGB on the GPU:
private static int convertYUVtoRGB(int y, int u, int v) {
    int r, g, b;
    // Parenthesize before the int cast, otherwise the cast truncates
    // the coefficient to 1 before the multiplication:
    r = y + (int) (1.402f * v);
    g = y - (int) (0.344f * u + 0.714f * v);
    b = y + (int) (1.772f * u);
    r = r > 255 ? 255 : r < 0 ? 0 : r;
    g = g > 255 ? 255 : g < 0 ? 0 : g;
    b = b > 255 ? 255 : b < 0 ? 0 : b;
    return 0xff000000 | (b << 16) | (g << 8) | r; // R in the low byte
}
I converted the floats to the 0.0..255.0 range and then used the above code.
The part on the CPU was to rearrange the original YUV pixels into a YUV matrix (also shown on Wikipedia).
Basically I used the Wikipedia code and did the simplest float<->byte conversions to make it work out.
Small mistakes, like adding 16 to Y or not adding 128 to U and V, would give undesirable results, so you need to take care of that.
But it wasn't a lot of work once I used the Wikipedia code as the base.
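Putting the two pieces together, a hedged sketch of a corrected CPU-side decoder (assuming the NV21 layout from the question, where V comes before U, and masking instead of adding 127):

static void decodeYUV420SPtoARGB(int[] argb, byte[] yuv, int width, int height) {
    final int frameSize = width * height;
    for (int j = 0, yp = 0; j < height; j++) {
        int uvp = frameSize + (j >> 1) * width, u = 0, v = 0;
        for (int i = 0; i < width; i++, yp++) {
            int y = yuv[yp] & 0xff;              // unsigned luma
            if ((i & 1) == 0) {                  // one V/U pair per two pixels
                v = (yuv[uvp++] & 0xff) - 128;   // center chroma on 0
                u = (yuv[uvp++] & 0xff) - 128;
            }
            int r = clamp(y + (int) (1.402f * v));
            int g = clamp(y - (int) (0.344f * u + 0.714f * v));
            int b = clamp(y + (int) (1.772f * u));
            argb[yp] = 0xff000000 | (r << 16) | (g << 8) | b;
        }
    }
}

private static int clamp(int c) {
    return c > 255 ? 255 : (c < 0 ? 0 : c);
}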
Converting on the CPU sounds easy, but I believe the question is how to do it on the GPU?
I did it recently in a project where I needed very fast QR code detection, even when the camera angle is 45 degrees to the surface where the code is printed, and it worked with great performance:
(the following code is trimmed to just the key lines; a solid understanding of both Java and OpenGL ES is assumed)
Create a GL texture that will hold the camera image:
int[] txt = new int[1];
GLES20.glGenTextures(1, txt, 0);
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, txt[0]);
GLES20.glTexParameterf(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
        GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameterf(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
        GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
        GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
        GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
Pay attention that the texture type is not GL_TEXTURE_2D. This is important, since only the GL_TEXTURE_EXTERNAL_OES type is supported by the SurfaceTexture object, which will be used in the next step.
Set up the SurfaceTexture:
SurfaceTexture surfTex = new SurfaceTexture(txt[0]);
surfTex.setOnFrameAvailableListener(this);
The above assumes that 'this' is an object that implements the 'onFrameAvailable' function.
public void onFrameAvailable(SurfaceTexture st) {
    surfTexNeedUpdate = true; // this flag will be read in the GL render pipeline
}
Set up the camera:
Camera cam = Camera.open();
cam.setPreviewTexture(surfTex);
This Camera API is deprecated as of Android 5.0, so if you target that, you have to use the new CameraDevice API.
In your render pipeline, have the following block to check whether the camera has a frame available, and update the surface texture with it. When the surface texture is updated, it will fill in the GL texture that is linked with it.
if (surfTexNeedUpdate) {
    surfTex.updateTexImage();
    surfTexNeedUpdate = false;
}
To bind the GL texture which is linked to the camera via the SurfaceTexture, just do this in the rendering pipe:
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, txt[0]);
It goes without saying that you need to set the current active texture.
In the GL shader program which will use the above texture in its fragment part, you must have this first line:
#extension GL_OES_EGL_image_external : require
The above is a must-have.
The texture uniform must be of type samplerExternalOES:
uniform samplerExternalOES u_Texture0;
Reading a pixel from it is just like from the GL_TEXTURE_2D type, and UV coordinates are in the same range (from 0.0 to 1.0):
vec4 px = texture2D(u_Texture0, v_UV);
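Putting the above pieces together, a complete minimal fragment shader might look like this, shown here as a Java string constant (the varying name v_UV is just whatever your vertex shader passes through):

// The driver converts YUV to RGB when the external texture is sampled,
// so the shader only needs to pass the sampled color through.
private static final String FRAGMENT_SHADER =
        "#extension GL_OES_EGL_image_external : require\n" +
        "precision mediump float;\n" +
        "uniform samplerExternalOES u_Texture0;\n" +
        "varying vec2 v_UV;\n" +
        "void main() {\n" +
        "    gl_FragColor = texture2D(u_Texture0, v_UV);\n" +
        "}\n";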
Once your render pipeline is ready to render a quad with the above texture and shader, just start the camera:
cam.startPreview();
You should see a quad on your GL screen with the live camera feed. Now you just need to grab the image with glReadPixels:
ByteBuffer buf = ByteBuffer.allocateDirect(width * height * 4);
GLES20.glReadPixels(0, 0, width, height, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, buf);
The above assumes that your FBO is RGBA, that the buffer is allocated to the proper size, and that width and height are the size of your FBO. (Note that glReadPixels takes a java.nio.Buffer, not a raw byte[].)
And voila! You have captured RGBA pixels from the camera instead of converting the YUV bytes received in the onPreviewFrame callback...
You can also use an RGB framebuffer object and avoid alpha if you don't need it.
It is important to note that the camera will call onFrameAvailable in its own thread, which is not your GL render pipeline thread, so you must not perform any GL calls in that function.
Renderscript was first introduced in February 2011. Since Android 3.0 Honeycomb (API 11), and definitely since Android 4.2 Jelly Bean (API 17), when ScriptIntrinsicYuvToRGB was added, the easiest and most efficient solution has been to use RenderScript for YUV-to-RGB conversion. I have recently generalized this solution to handle device rotation.
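A hedged sketch of that intrinsic in use (the context, width, height, and NV21 byte array are assumed to come from your camera callback):

import android.content.Context;
import android.graphics.Bitmap;
import android.renderscript.Allocation;
import android.renderscript.Element;
import android.renderscript.RenderScript;
import android.renderscript.ScriptIntrinsicYuvToRGB;
import android.renderscript.Type;

// Converts one NV21 frame to an RGBA bitmap with the YUV intrinsic.
static Bitmap yuvToRgb(Context context, byte[] nv21, int width, int height) {
    RenderScript rs = RenderScript.create(context);
    ScriptIntrinsicYuvToRGB script =
            ScriptIntrinsicYuvToRGB.create(rs, Element.U8_4(rs));
    Allocation in = Allocation.createSized(rs, Element.U8(rs), nv21.length);
    Type rgbaType = new Type.Builder(rs, Element.RGBA_8888(rs))
            .setX(width).setY(height).create();
    Allocation out = Allocation.createTyped(rs, rgbaType);
    in.copyFrom(nv21);
    script.setInput(in);
    script.forEach(out);
    Bitmap bmp = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    out.copyTo(bmp);
    return bmp;
}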
I am trying to make a color change on a canvas in Java to represent how strong or weak the values represented by the color are relative to each other.
The RGB colors have to be the same color, just different shades: like white to grey to black and every shade of grey in between. How can I change the RGB values, considering that the values I am representing vary a lot, from -9999999 to positive 9999999?
I think you should take a look at HSL/HSV instead of RGB.
While RGB is elementary in nature in that it expresses colors in terms of the primaries, it does not allow you to make "understandable" changes to the R, G or B values to arrive at "similar" colors. With a HSL/HSV model, you will be able to make changes to Brightness/Lightness/Value (L/V) to arrive at colors with varying amounts of gray, or make changes to Hue (H) to obtain similar colors across the spectrum. You can start at full brightness (White) and create darker tones of gray by decreasing the value of L/V and eventually reach the color of no brightness (Black).
A very mild introduction to color theory for developers is available here.
As to your question, you should express your colors in terms of HSL, with decreasing values of Lightness to get a range running from white down to black. Of course, if you want grey tones between white and black without any other color creeping in, keep the saturation at zero.
A short example of how to get a range of colors follows. For brevity, I've populated the colors into an array, but that is not required, since you might want to use each color right away (besides considering memory requirements).
private Color[] produceColorRange(int steps)
{
float value = 1.0f; //Starting with full brightness
Color[] colors = new Color[steps];
for(int ctr = 0; ctr < steps; ctr++)
{
value = value - (1.0f/steps); //tend to darkness
int rgb = Color.HSBtoRGB(0.7f, 0.0f, value); //create a darker color
//Hue is Blue, not noticeable
//because Saturation is 0
Color color = new Color(rgb);
colors[ctr] = color;
}
return colors;
}
If you use the above method and paint a JFrame, you will be able to get a result similar to the one below (except that I've modified the hue and saturation to get my color range).
Note that if you want a simpler way of getting a color range, initialize a Color object with Color.WHITE and invoke color.darker(). Of course, you will not be able to control the increment.
Yes, scale your values to fit your domain. How depends on how your RGB values are stored; usually, 8 bits are used for each. Since grey has R = G = B, you want to scale values in the range (-9999999, 9999999) to (0, 255).
Consider x in the first interval. Since the first range also covers negative numbers, first do a shift:
x = x + 9999999
Now x is in the interval (0, 19999998), and the next step is to scale it down to (0, 255). Since the colour values grow linearly in that interval, all you have to do is this:
x = x * 255 / 19999998
Now x is in the interval (0, 255), just like you want.
Generally, if your initial values are in an interval (a, b) and you want to transform it into (0, c), apply this formula (note that a can be negative):
x = (x - a) * c / (b - a)
So if your R, G, B values are 16 bits long, c will be 2^16 - 1 = 65535 and the formula:
x = (x + 9999999) * 65535 / 19999998
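As a hedged one-method Java sketch of that formula (long arithmetic so the multiplication cannot overflow; the names are illustrative):

// Maps x linearly from the interval [a, b] to [0, c].
static int scaleToRange(long x, long a, long b, long c) {
    return (int) ((x - a) * c / (b - a));
}

// For the grey example: scaleToRange(value, -9999999L, 9999999L, 255L)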
Hope that helps.
I'm not completely sure I understand your question, but if I do:
Why not just map the values in your range (from -9999999 to positive 9999999) to the RGB scale? Moreover, set R, G, and B all to the same value, so that you're using shades of gray to represent the value.
Like this:
private final int MIN = -9999999;
private final int MAX = 9999999;
public Color getScaledColor(int val) {
int gray = (int) Math.round((double) (val - MIN) / (double) (MAX - MIN)
* 255.0);
Color color = new Color(gray, gray, gray);
return color;
}
Note that this solution will not give unique colors for all the values in the range you specified. But also keep in mind that the human eye can only distinguish so many shades (and 2 * 9999999 + 1 is probably far more than the number of shades it can distinguish).
The HSL Color class implements the formulas provided in the Wikipedia link on HSL/HSV provided above.
Maybe I've had too much coffee, maybe I've been working too long; regardless, I'm at a loss as to what this method does, or rather, why and how it does it. Could anyone shed some light on it? What is the nextColor?
public Color nextColor() {
int max = 0, min = 1000000000, cr = 0, cg = 0, cb = 0;
for (int r = 0; r < 256; r += 4) {
for (int g = 0; g < 256; g += 4) {
for (int b = 0; b < 256; b += 4) {
if (r + g + b < 256 || r + g + b > 512) {
continue;
}
min = 1000000000;
for (Color c : colorTable) {
int dred = r - c.getRed();
int dgreen = g - c.getGreen();
int dblue = b - c.getBlue();
int dif = dred * dred + dgreen * dgreen + dblue * dblue;
if (min > dif) {
min = dif;
}
}
if (max < min) {
max = min;
cr = r;
cg = g;
cb = b;
}
}
}
}
return new Color(cr, cg, cb, 0x90);
}
UPDATE
Thanks for the responses, everyone. Looking at the context of the method within the program, it is clear that the intent was indeed to return a new Color that is "furthest away" from the set of existing Colors.
Thanks Sparr for posing the follow-up to this question; I will definitely rewrite the above with your advice in mind.
I am not very well versed in the RGB color scale. Knowing the intention of the above method is to retrieve a "complementary?" color to the existing set of colors, will the solution provided in 1 actually be complementary in the sense of how we perceive the color? Is there a simpler way to choose a color that will complement the set, or does the numerical analysis of the RGB components actually yield the appropriate color?
It seems like you have colorTable, which stores a list of colors.
Then you have this strangely hardcoded colorspace of
colors whose components are a multiple of 4 and are "not too bright" but not "too dark" either.
This function seems to give you the color in the latter which "contrasts" best with your color table.
When I say contrast, this is defined by choosing the color that is as far as possible from the color table, using the 2-norm.
Given a global array of Color objects named colorTable, this function considers every color in the following colorspace, computes each candidate's distance to its nearest* neighbor in that array, and returns the candidate whose nearest neighbor is farthest away:
Red, Green, Blue components a multiple of 4
Red+Green+Blue between 256 and 512
*: "nearest" is defined as the lowest sum of squares of the differences of each color component.
As Paul determined, this seems like a plausible, if insanely inefficiently implemented, naive approach to finding a single color that provides a high contrast with the contents of colorTable. The same result could be found with a single pass through colorTable and a bit more math, instead of some 5 million passes through colorTable, and there are much better ways to find a different color that provides a much higher average contrast.
Consider the case where the pseudo-solid defined by the points in the colorTable has a large "hollow" in its interior, such that nextColor selects the point in the center of that hollow as the nextColor. Depending on what you know about the colorTable, this case could be exceedingly rare. If it is predicted to be rare enough, and you are willing to accept a less than optimal (assuming we take nextColor's output to be optimal) solution in those cases, then a significant optimization presents itself.
In all cases except the above-described one, the color selected by nextColor will be somewhere on the surface of the minimal convex hull enclosing all of the points in the 1/64-dense colorspace defined by your loops. Generating the list of points on that surface is slightly more computationally complex than the simple loops that generate the list of all the points, but it would reduce your search space by about a factor of 25.
In the vast majority of cases, the result of that simplified search will be a point on one of the corners of that convex hull. Considering only those reduces your search space to a trivial list (24 candidates, if my mental geometry serves me well) that could simply be stored ahead of time.
If the nextColor selected from those is "too close" to your colorTable, then you could fall back on running the original type of search in hopes of finding the sort of "hollow" mentioned above. The density of that search could be adapted based on how close the first pass got, and narrowed down from there. That is, if the super fast search finds a nextColor 8 units away from its nearest neighbor in colorTable, then to do better than that you would have to find a hollow at least 16 units across within the colorTable. Run the original search with a step of 8 and store any candidates more than 4 units distant (the hollow is not likely to be aligned with your search grid), then center a radius-12 search of higher density on each of those candidates.
It occurs to me that the 1/64-dense nature (all the multiples of 4) of your search space was probably instituted by the original author to speed up the search in the first place. Given these improvements, you could do away with that compromise.
All of this presumes that you want to stick with improvements on this naive method of finding a contrasting color. There are certainly better ways, given equal or more (which colors in colorTable are the most prevalent in your usage? what colors appear more contrast-y to the human eye?) information.
It's trying to get you another color for
a) false-color coding a data set.
b) drawing another line on the graph.
How can I have functionality in my game through which the players can change their hairstyle, look, style of clothes, etc., so that whenever they wear a different item of clothing their avatar is updated with it?
Should I:
Have my designer create all possible combinations of armor, hairstyles, and faces as sprites (this could be a lot of work).
When the player chooses what they should look like during their introduction to the game, my code would automatically create this sprite, and all possible combinations of headgear/armor with that sprite. Then each time they select some different armor, the sprite for that armor/look combination is loaded.
Is it possible to have a character's sprite divided into components, like face, shirt, jeans, shoes, and have the pixel dimensions of each of these? Then whenever the player changes his helmet, for example, we use the pixel dimensions to put the helmet image in place of where the face image would normally be. (I'm using Java to build this game.)
Is this not possible in 2D, and should I use 3D for this?
Any other method?
Please advise.
One major factor to consider is animation. If a character has armour with shoulder pads, those shoulder pads may need to move with his torso. Likewise, if he's wearing boots, those have to follow the same cycles as his bare feet would.
Essentially, what you need for your designers is a Sprite Sheet that lets your artists see all possible frames of animation for your base character. You then have them create custom hairstyles, boots, armour, etc. based on those sheets. Yes, it's a lot of work, but in most cases the elements will require a minimal amount of redrawing; boots are about the only thing I could see really taking a lot of work to re-create, since they change over multiple frames of animation. Be ruthless with your sprites; try to cut down the required number as much as possible.
After you've amassed a library of elements you can start cheating. Recycle the same hair style and adjust its colour either in Photoshop or directly in the game with sliders in your character creator.
The last step, to ensure good performance in-game, would be to flatten all the different elements' sprite sheets into a single sprite sheet that is then split up and stored in sprite buffers.
3D will not be necessary for this, but the painter's algorithm that is common in the 3D world might IMHO save you some work:
The painter's algorithm works by drawing the most distant objects first, then overdrawing with objects closer to the camera. In your case, it boils down to generating the buffer for your sprite, drawing the base onto the buffer, finding the next dependent sprite part (i.e. armour or whatnot), drawing that, finding the next dependent sprite part (e.g. a special sign on the armour), and so on. When there are no more dependent parts, you paint the full generated sprite onto the display the user sees.
The combined parts should have an alpha channel (RGBA instead of RGB) so that you only combine pixels whose alpha value is set to a value of your choice. If you cannot do that for whatever reason, just stick with one RGB combination that you treat as transparent.
Using 3D might make combining the parts easier for you, and you would not even have to use an offscreen buffer or write the pixel-combining code. The flip side is that you need to learn a little 3D if you don't know it already. :-)
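In Java, a minimal sketch of this painter-style compositing with BufferedImage might look like the following (the method name and back-to-front list are illustrative assumptions):

import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.util.List;

// Paints dependent sprite parts back-to-front into one combined sprite.
static BufferedImage combineParts(List<BufferedImage> partsBackToFront,
                                  int width, int height) {
    BufferedImage combined =
            new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
    Graphics2D g = combined.createGraphics();
    for (BufferedImage part : partsBackToFront) {
        // drawImage honors each part's alpha channel, so transparent
        // pixels leave the layers beneath visible.
        g.drawImage(part, 0, 0, null);
    }
    g.dispose();
    return combined;
}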
Edit to answer comment:
The combination part would work somewhat like this (in C++; Java will be pretty similar - please note that I did not run the code below through a compiler):
//
// @param dependant_textures is a vector of textures where
//        texture n+1 depends on texture n.
// @param combined_tex is the output of all textures combined
void Sprite::combineTextures (vector<Texture> const& dependant_textures,
                              Texture& combined_tex) {
    vector<Texture>::const_iterator iter = dependant_textures.begin();
    combined_tex = *iter;
    if (dependant_textures.size() > 1)
        for (++iter; iter != dependant_textures.end(); ++iter) {
            Texture const& current_tex = *iter;
            // Go through each pixel, painting (unsigned int, not unsigned
            // char, which would overflow on any texture over 256 pixels):
            for (unsigned int pixel_index = 0;
                 pixel_index < current_tex.numPixels(); pixel_index++) {
                // Assuming that Texture has a method to export the raw pixel
                // data as an array of chars - check the alpha value and only
                // paint pixels that are not transparent:
                int const BYTESPERPIXEL = 4; // RGBA
                if (current_tex.getRawData()[pixel_index * BYTESPERPIXEL + 3])
                    for (int copied_bytes = 0; copied_bytes < 3; copied_bytes++)
                    {
                        int index = pixel_index * BYTESPERPIXEL + copied_bytes;
                        combined_tex.getRawData()[index] =
                            current_tex.getRawData()[index];
                    }
            }
        }
}
To answer your question with a 3D solution: you would simply draw rectangles with their respective textures (which would have an alpha channel) over each other. You would set the system up to display in orthogonal mode (for OpenGL: gluOrtho2D()).
I'd go with the procedural generation solution (#2), as long as there isn't such a large number of sprites to generate that the generation takes too long. Maybe do the generation when each item is acquired, to lower the load.
Since I was asked in the comments to supply a 3D way as well, here is an excerpt of some code I wrote quite some time ago. It's OpenGL and C++.
Each sprite would be asked to draw itself. Using the Adapter pattern, I would combine sprites - i.e. there would be sprites that hold two or more sprites at a (0,0) relative position, and one sprite with a real position holding all those "sub-"sprites.
void Sprite::display (void) const
{
glBindTexture(GL_TEXTURE_2D, tex_id_);
Display::drawTranspRect(model_->getPosition().x + draw_dimensions_[0] / 2.0f,
model_->getPosition().y + draw_dimensions_[1] / 2.0f,
draw_dimensions_[0] / 2.0f, draw_dimensions_[1] / 2.0f);
}
void Display::drawTranspRect (float x, float y, float x_len, float y_len)
{
glPushMatrix();
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glColor4f(1.0, 1.0, 1.0, 1.0);
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f); glVertex3f(x - x_len, y - y_len, Z);
glTexCoord2f(1.0f, 0.0f); glVertex3f(x + x_len, y - y_len, Z);
glTexCoord2f(1.0f, 1.0f); glVertex3f(x + x_len, y + y_len, Z);
glTexCoord2f(0.0f, 1.0f); glVertex3f(x - x_len, y + y_len, Z);
glEnd();
glDisable(GL_BLEND);
glPopMatrix();
}
The tex_id_ is an integral value that identifies which texture is used to OpenGL. The relevant parts of the texture manager are these. The texture manager actually emulates an alpha channel by checking whether the color read is pure white (an RGB of (ff,ff,ff)) - the loadImage code operates on 24-bits-per-pixel BMP files:
TextureManager::texture_id
TextureManager::createNewTexture (Texture const& tex) {
texture_id id;
glGenTextures(1, &id);
glBindTexture(GL_TEXTURE_2D, id);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, 4, tex.width_, tex.height_, 0,
GL_BGRA_EXT, GL_UNSIGNED_BYTE, tex.texture_);
return id;
}
void TextureManager::loadImage (FILE* f, Texture& dest) const {
fseek(f, 18, SEEK_SET);
signed int compression_method;
unsigned int const HEADER_SIZE = 54;
fread(&dest.width_, sizeof(unsigned int), 1, f);
fread(&dest.height_, sizeof(unsigned int), 1, f);
fseek(f, 28, SEEK_SET);
fread(&dest.bpp_, sizeof (unsigned short), 1, f);
fseek(f, 30, SEEK_SET);
fread(&compression_method, sizeof(unsigned int), 1, f);
    // The output has 4 channels, because we will manually set an alpha
    // channel for the color white; the file itself holds 3 bytes per pixel.
    dest.size_ = dest.width_ * dest.height_ * 4;
    dest.texture_ = new unsigned char[dest.size_];
    unsigned char* buffer = new unsigned char[3 * dest.size_ / 4];
// Slurp in whole file and replace all white colors with green
// values and an alpha value of 0:
fseek(f, HEADER_SIZE, SEEK_SET);
fread (buffer, sizeof(unsigned char), 3 * dest.size_ / 4, f);
for (unsigned int count = 0; count < dest.width_ * dest.height_; count++) {
dest.texture_[0+count*4] = buffer[0+count*3];
dest.texture_[1+count*4] = buffer[1+count*3];
dest.texture_[2+count*4] = buffer[2+count*3];
dest.texture_[3+count*4] = 0xff;
if (dest.texture_[0+count*4] == 0xff &&
dest.texture_[1+count*4] == 0xff &&
dest.texture_[2+count*4] == 0xff) {
dest.texture_[0+count*4] = 0x00;
dest.texture_[1+count*4] = 0xff;
dest.texture_[2+count*4] = 0x00;
dest.texture_[3+count*4] = 0x00;
dest.uses_alpha_ = true;
}
}
delete[] buffer;
}
This was actually a small Jump'n'Run that I developed occasionally in my spare time. It used gluOrtho2D() mode as well, btw. If you leave a means to contact you, I will send you the source if you want.
Older 2D games such as Diablo and Ultima Online use a sprite compositing technique to do this. You could search for art from those kinds of older 2D isometric games to see how they did it.