Customizable player avatar in a 2D game - Java

How can I add functionality to my game that lets players change their hairstyle, look, and style of clothes, so that whenever they equip a different item of clothing their avatar is updated to show it?
Should I:
1. Have my designer create all possible combinations of armor, hairstyles, and faces as sprites (this could be a lot of work)?
2. When the player chooses what they should look like during their introduction to the game, have my code automatically create that sprite, plus all possible combinations of headgear/armor with it? Then each time they select some different armor, the sprite for that armor/look combination is loaded.
3. Divide a character's sprite into components, like face, shirt, jeans, and shoes, and store the pixel dimensions of each of these? Then whenever the player changes his helmet, for example, we use the pixel dimensions to draw the helmet image where the face image would normally be. (I'm using Java to build this game.)
4. Is this not possible in 2D, meaning I should use 3D for this?
5. Any other method?
Please advise.

One major factor to consider is animation. If a character has armour with shoulder pads, those shoulder pads may need to move with his torso. Likewise, if he's wearing boots, those have to follow the same cycles as his bare feet would.
Essentially, what you need to give your artists is a sprite sheet that lets them see all possible frames of animation for your base character. You then have them create custom hairstyles, boots, armour, etc. based on those sheets. Yes, it's a lot of work, but in most cases the elements will require a minimal amount of redrawing; boots are about the only thing I could see really taking a lot of work to re-create, since they change over multiple frames of animation. Be ruthless with your sprites and try to cut down the required number as much as possible.
After you've amassed a library of elements you can start cheating. Recycle the same hair style and adjust its colour either in Photoshop or directly in the game with sliders in your character creator.
The last step, to ensure good performance in-game, would be to flatten all the different elements' sprite sheets into a single sprite sheet that is then split up and stored in sprite buffers.
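As a rough illustration of that flattening step in Java (the file names and layer order here are made up for the example, not taken from the answer), Java2D can composite the per-element sheets into one sheet that you then cut into frames:
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

public static BufferedImage flattenSheets(File... layerFiles) throws IOException {
    BufferedImage base = ImageIO.read(layerFiles[0]);
    BufferedImage flattened = new BufferedImage(
            base.getWidth(), base.getHeight(), BufferedImage.TYPE_INT_ARGB);
    Graphics2D g = flattened.createGraphics();
    for (File layerFile : layerFiles) {
        // Draw the farthest layer first (base body, then clothes, then hair, ...)
        g.drawImage(ImageIO.read(layerFile), 0, 0, null);
    }
    g.dispose();
    return flattened; // split this into frames / sprite buffers as before
}

// e.g. flattenSheets(new File("body.png"), new File("armour_leather.png"), new File("hair_red.png"));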

3D will not be necessary for this, but the painter's algorithm that is common in the 3D world might, IMHO, save you some work:
The painter's algorithm works by drawing the most distant objects first, then overdrawing with objects closer to the camera. In your case, it would boil down to generating the buffer for your sprite, drawing it onto the buffer, finding the next dependent sprite part (i.e. armour or whatnot), drawing that, finding the next dependent sprite part (i.e. a special sign that's on the armour), and so on. When there are no more dependent parts, you paint the full generated sprite onto the display the user sees.
The combined parts should have an alpha channel (RGBA instead of RGB) so that you only combine parts whose alpha value is set to a value of your choice. If you cannot do that for whatever reason, just stick with one RGB combination that you will treat as transparent.
Using 3D might make combining the parts easier for you, and you would not even have to use an offscreen buffer or write the pixel-combining code. The flip side is that you need to learn a little 3D if you don't know it already. :-)
Edit to answer comment:
The combination part would work somewhat like this (in C++, Java will be pretty similar - please note that I did not run the code below through a compiler):
//
// @param dependent_textures is a vector of textures where
//        texture n+1 depends on texture n.
// @param combined_tex is the output of all textures combined
void Sprite::combineTextures (vector<Texture> const& dependent_textures,
                              Texture& combined_tex) {
    vector<Texture>::const_iterator iter = dependent_textures.begin();
    combined_tex = *iter;
    if (dependent_textures.size() > 1)
        for (++iter; iter != dependent_textures.end(); ++iter) {
            Texture const& current_tex = *iter;
            // Go through each pixel, painting:
            for (unsigned int pixel_index = 0;
                 pixel_index < current_tex.numPixels(); pixel_index++) {
                // Assuming that Texture has a method to export the raw pixel data
                // as an array of chars - to illustrate, check the alpha value and
                // only copy over pixels that are not transparent:
                int const BYTESPERPIXEL = 4; // RGBA
                if (current_tex.getRawData()[pixel_index * BYTESPERPIXEL + 3])
                    for (int copied_bytes = 0; copied_bytes < 3; copied_bytes++)
                    {
                        int index = pixel_index * BYTESPERPIXEL + copied_bytes;
                        combined_tex.getRawData()[index] =
                            current_tex.getRawData()[index];
                    }
            }
        }
}
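Since the question is about Java: a per-pixel version of the same idea might look like the sketch below. It assumes every layer is a TYPE_INT_ARGB BufferedImage of identical size, and it has not been compiled against your project.
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.util.List;

static BufferedImage combineTextures(List<BufferedImage> dependentLayers) {
    BufferedImage base = dependentLayers.get(0);
    BufferedImage combined = new BufferedImage(
            base.getWidth(), base.getHeight(), BufferedImage.TYPE_INT_ARGB);
    Graphics2D g = combined.createGraphics();
    g.drawImage(base, 0, 0, null); // start from the base layer
    g.dispose();
    for (int i = 1; i < dependentLayers.size(); i++) {
        BufferedImage layer = dependentLayers.get(i);
        for (int y = 0; y < layer.getHeight(); y++) {
            for (int x = 0; x < layer.getWidth(); x++) {
                int argb = layer.getRGB(x, y);
                if ((argb >>> 24) != 0) {   // only paint pixels that are not fully transparent
                    combined.setRGB(x, y, argb);
                }
            }
        }
    }
    return combined;
}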
To answer your question regarding a 3D solution: you would simply draw rectangles with their respective textures (which would have an alpha channel) over each other. You would set the system up to display in an orthographic mode (for OpenGL: gluOrtho2D()).

I'd go with the procedural generation solution (#2), as long as there isn't such a large number of sprites to generate that the generation takes too long. Maybe do the generation when each item is acquired, to lower the load.
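A small sketch of that caching idea (class and key names are illustrative, not from the question): build the composited sprite only the first time a combination is requested - for example when the item is acquired - and reuse it afterwards.
import java.awt.image.BufferedImage;
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

public class AvatarSpriteCache {
    private final Map<String, BufferedImage> cache = new HashMap<>();

    // combinationKey identifies the equipment combination, e.g. "hair:red|armor:leather"
    // (the key format is an assumption). The generator is only invoked on a cache miss.
    public BufferedImage getSprite(String combinationKey, Supplier<BufferedImage> generator) {
        return cache.computeIfAbsent(combinationKey, key -> generator.get());
    }
}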

Since I was asked in the comments to supply a 3D way as well, here is an excerpt of some code I wrote quite some time ago. It's OpenGL and C++.
Each sprite would be asked to draw itself. Using the Adapter pattern, I would combine sprites - i.e. there would be composite sprites holding two or more sub-sprites at a (0,0) relative position, and one sprite with the real position holding all those sub-sprites.
void Sprite::display (void) const
{
    glBindTexture(GL_TEXTURE_2D, tex_id_);
    Display::drawTranspRect(model_->getPosition().x + draw_dimensions_[0] / 2.0f,
                            model_->getPosition().y + draw_dimensions_[1] / 2.0f,
                            draw_dimensions_[0] / 2.0f, draw_dimensions_[1] / 2.0f);
}

void Display::drawTranspRect (float x, float y, float x_len, float y_len)
{
    glPushMatrix();
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glColor4f(1.0, 1.0, 1.0, 1.0);
    glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex3f(x - x_len, y - y_len, Z);
    glTexCoord2f(1.0f, 0.0f); glVertex3f(x + x_len, y - y_len, Z);
    glTexCoord2f(1.0f, 1.0f); glVertex3f(x + x_len, y + y_len, Z);
    glTexCoord2f(0.0f, 1.0f); glVertex3f(x - x_len, y + y_len, Z);
    glEnd();
    glDisable(GL_BLEND);
    glPopMatrix();
}
tex_id_ is an integral value that identifies the texture to OpenGL. The relevant parts of the texture manager are these. The texture manager actually emulates an alpha channel by checking whether the color read is pure white (RGB (ff,ff,ff)) - the loadImage code operates on 24-bit-per-pixel BMP files:
TextureManager::texture_id
TextureManager::createNewTexture (Texture const& tex) {
    texture_id id;
    glGenTextures(1, &id);
    glBindTexture(GL_TEXTURE_2D, id);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, 4, tex.width_, tex.height_, 0,
                 GL_BGRA_EXT, GL_UNSIGNED_BYTE, tex.texture_);
    return id;
}
void TextureManager::loadImage (FILE* f, Texture& dest) const {
    signed int compression_method;
    unsigned int const HEADER_SIZE = 54;
    fseek(f, 18, SEEK_SET);
    fread(&dest.width_, sizeof(unsigned int), 1, f);
    fread(&dest.height_, sizeof(unsigned int), 1, f);
    fseek(f, 28, SEEK_SET);
    fread(&dest.bpp_, sizeof(unsigned short), 1, f);
    fseek(f, 30, SEEK_SET);
    fread(&compression_method, sizeof(unsigned int), 1, f);
    // We use 4 channels, because we will manually set an alpha channel
    // for the color white.
    dest.size_ = dest.width_ * dest.height_ * 4;
    dest.texture_ = new unsigned char[dest.size_];
    unsigned char* buffer = new unsigned char[3 * dest.size_ / 4];
    // Slurp in the whole pixel data and replace all white colors with green
    // values and an alpha value of 0:
    fseek(f, HEADER_SIZE, SEEK_SET);
    fread(buffer, sizeof(unsigned char), 3 * dest.size_ / 4, f);
    for (unsigned int count = 0; count < dest.width_ * dest.height_; count++) {
        dest.texture_[0 + count * 4] = buffer[0 + count * 3];
        dest.texture_[1 + count * 4] = buffer[1 + count * 3];
        dest.texture_[2 + count * 4] = buffer[2 + count * 3];
        dest.texture_[3 + count * 4] = 0xff;
        if (dest.texture_[0 + count * 4] == 0xff &&
            dest.texture_[1 + count * 4] == 0xff &&
            dest.texture_[2 + count * 4] == 0xff) {
            dest.texture_[0 + count * 4] = 0x00;
            dest.texture_[1 + count * 4] = 0xff;
            dest.texture_[2 + count * 4] = 0x00;
            dest.texture_[3 + count * 4] = 0x00;
            dest.uses_alpha_ = true;
        }
    }
    delete[] buffer;
}
This was actually from a small Jump'n'Run that I developed now and then in my spare time. It used gluOrtho2D() mode as well, by the way. If you leave a means to contact you, I will send you the source if you want.
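Coming back to the Adapter-style composition described at the start of this answer, a bare-bones Java sketch of it could look like the following; the Sprite interface here is an assumption for the sketch, not the question's actual class.
import java.util.ArrayList;
import java.util.List;

interface Sprite {
    void display();
}

class CompositeSprite implements Sprite {
    private final List<Sprite> parts = new ArrayList<>(); // base body, armour, hair, ...

    void addPart(Sprite part) {
        parts.add(part);
    }

    @Override
    public void display() {
        for (Sprite part : parts) {
            part.display(); // parts added first are drawn first (painter's order)
        }
    }
}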

Older 2D games such as Diablo and Ultima Online use a sprite compositing technique to do this. You could search for art from those kinds of older 2D isometric games to see how they did it.

Related

3D Object selection in OpenGL

I am currently making a 3D chess game in OpenGL. I am still struggling with the selection of the different figures. I followed the tutorials by ThinMatrix and got this far: https://imgur.com/gallery/oLv5ReI.
Now I want the user to be able to select the figures by clicking on them. I have the camera position, the ray in which direction the mouse is pointing and the position of the figures. How can I detect if the ray hits the figure (probably using a rectangle hitbox) when it starts at the position of the camera?
My code so far:
public void update(Vector3f mouseRay, Camera camera, Figure figure) {
    Vector3f start = camera.getPosition();
    Vector3f figurePos = figure.getPosition();
    if (intersect()) {
        selectFigure();
    }
}
EDIT:
I tried this:
Ray-Sphere intersection
but it somehow didn't work. A sphere intersection also seemed very inefficient compared with a ray-box intersection.
You'll have to follow these steps (I'm assuming you are familiar with the rendering pipeline and with OpenGL/WebGL):
Get the list of all the objects you have.
Assign every object a unique color. The following would be an easy way to assign a unique color based on the index of the object in the list.
int i = ...; // index of the object in the list
// We've added 1 to the index because index 0 would be encoded as black,
// and our background is also rendered black, so we will skip that color.
int r = ((i + 1) & 0x000000FF) >> 0;
int g = ((i + 1) & 0x0000FF00) >> 8;
int b = ((i + 1) & 0x00FF0000) >> 16;
glm::vec4 unique_color = glm::vec4(r / 255.0f, g / 255.0f, b / 255.0f, 1.0f);
Create a frame-buffer and render all the objects with their uniquely assigned solid colors.
When the rendering is complete, read the pixel color at the click position from the rendered framebuffer texture.
Decode the color back into the object's index as shown below. (This is exactly the reverse of what we did in step 2.)
int triangle_index =
color.r +
color.g * 256 +
color.b * 256 * 256;
With this index you have the selected object from the initial list of all objects.
You can read more about this technique here, http://www.opengl-tutorial.org/miscellaneous/clicking-on-objects/picking-with-an-opengl-hack/
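If you are using a Java binding such as LWJGL (an assumption on my part), the read-back and decode step might look roughly like the sketch below; it assumes the color-coded pass is currently bound, and the method name is illustrative.
import java.nio.ByteBuffer;
import org.lwjgl.BufferUtils;
import static org.lwjgl.opengl.GL11.*;

public static int pickObjectAt(int mouseX, int mouseY, int windowHeight) {
    ByteBuffer pixel = BufferUtils.createByteBuffer(4);
    // OpenGL's origin is bottom-left, while window coordinates are usually top-left.
    glReadPixels(mouseX, windowHeight - mouseY - 1, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixel);
    int r = pixel.get(0) & 0xFF;
    int g = pixel.get(1) & 0xFF;
    int b = pixel.get(2) & 0xFF;
    int index = r + g * 256 + b * 256 * 256;
    return index - 1; // -1 means background (black); otherwise it is the index into the object list
}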

Reuse texture and vertices in OpenGL

I am trying to make a simple 2D game, and I store the world in a 2D array of Block (an enum, with each value having its texture).
Since these are all simple opaque tiles, when rendering I sort them by texture and then render them by translating to their coordinates. However, I also have to specify the texture coordinates and the vertices for each tile that I draw, even though these are always the same.
Here's what I currently have:
public static void render() {
    // Sorting...
    for (SolidBlock block : xValues.keySet()) {
        block.getTexture().bind();
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        for (int coordinateIndex = 0; coordinateIndex < xValues.get(block).size(); coordinateIndex++) {
            int x = xValues.get(block).get(coordinateIndex);
            int y = yValues.get(block).get(coordinateIndex);
            glTranslatef(x, y, Integer.MIN_VALUE);
            // Here I use MIN_VALUE because I'll later have to do z sorting with other tiles
            glBegin(GL_QUADS);
            loadModel();
            glEnd();
            glLoadIdentity();
        }
        xValues.get(block).clear();
        yValues.get(block).clear();
    }
}

private static void loadModel() {
    glTexCoord2f(0, 0);
    glVertex2f(0, 0);
    glTexCoord2f(1, 0);
    glVertex2f(1, 0);
    glTexCoord2f(1, 1);
    glVertex2f(1, 1);
    glTexCoord2f(0, 1);
    glVertex2f(0, 1);
}
I'd like to know if it is possible to put loadModel() before the main loop, to avoid having to load the model thousands of times with the same data, and also what else could be moved around to make it as fast as possible!
Some quick optimizations:
glTexParameteri only needs to be called once per parameter per texture. You should put it in the part of your code where you load the textures.
You can draw multiple quads in one glBegin/glEnd pair simply by adding more vertices. However, you cannot do any coordinate changes between glBegin and glEnd (such as glTranslatef or glLoadIdentity or glPushMatrix) so you'll have to pass x and y to your loadModel function (which really should be called addQuad for accuracy). It's also not allowed to rebind textures between glBegin/glEnd, so you'll have to use one set of glBegin/glEnd per texture.
Minor, but instead of calling xValues.get(block) a whole bunch of times, just say List<Integer> blockXValues = xValues.get(block) at the beginning of your outer loop and then use blockXValues from there on.
Some more involved optimizations:
Legacy OpenGL has display lists, which are basically macros for OpenGL. You can make OpenGL record all the OpenGL calls you're doing between glNewList and glEndList (with some exceptions), and store them somehow. The next time you want to run those exact OpenGL calls, you can use glCallList to make OpenGL do just that for you. Some optimizations will be done on the display list in order to speed up subsequent draws.
Texture switching is relatively expensive, which you're probably already aware of since you sorted your quads by texture, but there is a better solution than sorting textures: Put all your textures into a single texture atlas. You'll want to store the subtexture coordinates of each block inside your SolidBlocks, and then pass block to addQuad as well so you can pass the appropriate subtexture coordinates to glTexCoord2f. Once you've done that, you don't need to sort by texture anymore and can just iterate over x and y coordinates.
Good practices:
Only use glLoadIdentity once per frame, at the beginning of your draw process. Then use glPushMatrix paired with glPopMatrix to save and restore the state of matrices. That way the inner parts of your code don't need to know about the matrix transformations the outer parts may or may not have done beforehand.
Don't use Integer.MIN_VALUE as a vertex coordinate. Use a constant of your own choosing, preferably one that won't make your depth range huge (the last two arguments to glOrtho which I assume you're using). Depth buffer precision is limited, you'll run into Z-fighting issues if you try to use Z coordinates of 1 or 2 or so after setting your Z range from Integer.MIN_VALUE to Integer.MAX_VALUE. Also, you're using float coordinates, so int constants don't make sense here anyway.
Here's the code after a quick pass (without the texture atlas changes):
private static final float BLOCK_Z_DEPTH = -1; // change to whatever works for you

private static int blockCallList;
private static boolean regenerateBlockCallList; // set to true whenever you need to update some blocks

public static void init() {
    blockCallList = glGenLists(1);
    regenerateBlockCallList = true;
}

public static void render() {
    if (regenerateBlockCallList) {
        glNewList(blockCallList, GL_COMPILE_AND_EXECUTE);
        drawBlocks();
        glEndList();
        regenerateBlockCallList = false;
    } else {
        glCallList(blockCallList);
    }
}

private static void drawBlocks() {
    // Sorting...
    glPushMatrix();
    glTranslatef(0, 0, BLOCK_Z_DEPTH);
    for (SolidBlock block : xValues.keySet()) {
        List<Integer> blockXValues = xValues.get(block);
        List<Integer> blockYValues = yValues.get(block);
        block.getTexture().bind();
        glBegin(GL_QUADS);
        for (int coordinateIndex = 0; coordinateIndex < blockXValues.size(); coordinateIndex++) {
            int x = blockXValues.get(coordinateIndex);
            int y = blockYValues.get(coordinateIndex);
            addQuad(x, y);
        }
        glEnd();
        blockXValues.clear();
        blockYValues.clear();
    }
    glPopMatrix();
}

private static void addQuad(float x, float y) {
    glTexCoord2f(0, 0);
    glVertex2f(x, y);
    glTexCoord2f(1, 0);
    glVertex2f(x + 1, y);
    glTexCoord2f(1, 1);
    glVertex2f(x + 1, y + 1);
    glTexCoord2f(0, 1);
    glVertex2f(x, y + 1);
}
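For the texture atlas approach mentioned above, addQuad would also need to know which block it is drawing so it can use that block's sub-rectangle of the shared atlas texture. A rough sketch, where getAtlasU()/getAtlasV() and the tile size are assumptions rather than existing methods:
private static final float TILE_UV = 1.0f / 16.0f; // e.g. a 16x16 tile atlas (assumed layout)

private static void addQuad(float x, float y, SolidBlock block) {
    float u = block.getAtlasU(); // hypothetical accessors for the tile's atlas origin
    float v = block.getAtlasV();
    glTexCoord2f(u, v);                     glVertex2f(x, y);
    glTexCoord2f(u + TILE_UV, v);           glVertex2f(x + 1, y);
    glTexCoord2f(u + TILE_UV, v + TILE_UV); glVertex2f(x + 1, y + 1);
    glTexCoord2f(u, v + TILE_UV);           glVertex2f(x, y + 1);
}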
With modern OpenGL (vertex buffers, shaders and instancing instead of display lists, matrix transformations and passing vertices one by one) you'd approach this problem very differently, but I'll keep that beyond the scope of my answer.

Weird interpolation between colors in HSV?

I want to achieve an interpolation between red and blue, something like this, but in a single line.
My java code:
private PixelData InterpolateColour(float totalLength, float curLength) {
    float startColourV[] = new float[3];
    Color.RGBtoHSB(m_start.getColour().getR() & 0xFF, m_start.getColour().getG() & 0xFF,
                   m_start.getColour().getB() & 0xFF, startColourV);
    float endColourV[] = new float[3];
    Color.RGBtoHSB(m_end.getColour().getR() & 0xFF, m_end.getColour().getG() & 0xFF,
                   m_end.getColour().getB() & 0xFF, endColourV);
    float endPercent = curLength / totalLength;
    float startPercent = 1 - curLength / totalLength;
    float h = endColourV[0] * endPercent + startColourV[0] * startPercent;
    float s = endColourV[1] * endPercent + startColourV[1] * startPercent;
    float b = endColourV[2] * endPercent + startColourV[2] * startPercent;
    int colourRGB = Color.HSBtoRGB(h, s, b);
    byte[] ByteArray = ByteBuffer.allocate(4).putInt(colourRGB).array();
    return new PixelData(ByteArray[0], ByteArray[3], ByteArray[2], ByteArray[1]);
}
and the result I am getting is this.
I don't understand where all that green is coming from. Can somebody please help me?
The green comes from interpolating the hue channel: red and blue are far apart on the hue circle, so a linear blend of the hue passes through yellow, green and cyan on the way. Why not just use RGB with simple linear interpolation for this:
color(t)=(color0*t)+(color1*(1.0-t))
where t=<0.0,1.0> is the parameter. So just loop it in the full range with as many steps as you need.
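In Java, that linear blend might look roughly like the sketch below (not from the answer); it reuses the question's own getters, and the PixelData argument order is an assumption based on the question's constructor call.
private PixelData interpolateColourRGB(float totalLength, float curLength) {
    float t = curLength / totalLength;   // 0.0 at the start colour, 1.0 at the end colour
    int r0 = m_start.getColour().getR() & 0xFF, r1 = m_end.getColour().getR() & 0xFF;
    int g0 = m_start.getColour().getG() & 0xFF, g1 = m_end.getColour().getG() & 0xFF;
    int b0 = m_start.getColour().getB() & 0xFF, b1 = m_end.getColour().getB() & 0xFF;
    int r = Math.round(r0 * (1 - t) + r1 * t);
    int g = Math.round(g0 * (1 - t) + g1 * t);
    int b = Math.round(b0 * (1 - t) + b1 * t);
    // Alpha, blue, green, red - mirroring the order the question's constructor call appears to use.
    return new PixelData((byte) 0xFF, (byte) b, (byte) g, (byte) r);
}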
Integer C++/VCL example (sorry, I'm not a Java coder):
// borland GDI clear screen
Canvas->Brush->Color=clBlack;
Canvas->FillRect(ClientRect);
// easy access to RGB channels
union _color
{
    DWORD dd;
    BYTE db[4];
} c0, c1, c;
// 0x00BBGGRR
c0.dd=0x000000FF; // Red
c1.dd=0x00FF0000; // Blue
int x, y, t0, t1;
for (x=0, y=ClientHeight/2; x<ClientWidth; x++)
{
    t0=x;
    t1=ClientWidth-1-x;
    c.db[0]=((DWORD(c0.db[0])*t0)+(DWORD(c1.db[0])*t1))/(ClientWidth-1);
    c.db[1]=((DWORD(c0.db[1])*t0)+(DWORD(c1.db[1])*t1))/(ClientWidth-1);
    c.db[2]=((DWORD(c0.db[2])*t0)+(DWORD(c1.db[2])*t1))/(ClientWidth-1);
    c.db[3]=((DWORD(c0.db[3])*t0)+(DWORD(c1.db[3])*t1))/(ClientWidth-1);
    Canvas->Pixels[x][y]=c.dd;
}
where ClientWidth, ClientHeight are my app form's resolution, Canvas is access to the form's GDI interface, and Canvas->Pixels[x][y] is single-pixel access (slow, but enough for this example). The only important part is the for loop. Here is the resulting image:
Color interpolation is actually a fairly complex topic due to the way human vision works.
Physical intensity and wavelengths don't map directly to perceived luminance and hues. After all, human eyes are not photon spectrometers; they just measure overall intensity plus three primaries, each with a different sensitivity.
To have a metric, linear space that represents human color perception instead of physical attributes, we have CIELab. Since it's a linear metric, interpolating between points should generally give you a linear transition in hue as well as luminance.
But CIELab may not be sufficient, since it only models perceptual sensitivity. If you need to match real lighting, you also have to take into account that natural light sources do not illuminate all colors evenly.
If you need to match photorealistic materials, then additionally correcting for the intensity spectrum of natural light may also be necessary. For example, something illuminated by a candle will not have intense blue components, simply because the candle emits very little blue light that could be reflected.

LWJGL glTranslate doing weird things

This is a big problem I have been running into.
I am trying to render multiple tiles using glTranslate, but when I call my draw function with the x, y coordinates the tiles are spaced weirdly (I don't want spaces).
Here is what happens.
Here is my code:
Draw:
public void draw(float Xa, float Ya) {
    GL11.glTranslatef(Xa, Ya, 0);
    if (hasTexture) {
        Texture.bind();
        GL11.glBegin(GL11.GL_QUADS);
        GL11.glColor3f(0.5f, 0.5f, 1);
        GL11.glTexCoord2f(0, 0);
        GL11.glVertex2f(0, 0);
        GL11.glTexCoord2f(0, 1);
        GL11.glVertex2f(0, S);
        GL11.glTexCoord2f(1, 1);
        GL11.glVertex2f(S, S);
        GL11.glTexCoord2f(1, 0);
        GL11.glVertex2f(S, 0);
        GL11.glEnd();
    }
}
and my render code:
public void a() throws IOException {
    GL11.glTranslatef(0, 0, -10);
    int x = 0;
    while (x < World.BLOCKS_WIDTH - 1) {
        int y = 0;
        while (y < World.BLOCKS_HEIGHT - 1) {
            blocks.b[data.blocks[x][y]].draw(x, y);
            y++;
        }
        x++;
    }
}
there are no errors (except the visible ones)
You do not appear to be initialising or pushing / popping the current transform. So the translations will accumulate, producing the effect you see, getting further and further apart as you translate by ever larger values.
Let's say your blocks are 10 units apart. The first is drawn with a translation of (0, 0), the next with (0, 10), then (0, 20), (0, 30), etc.
However, as the translations accumulate in the view matrix, what you actually get are translations of (0, 0), (0, 10), (0, 30), (0, 60), etc.
This is important, as it allows you to build a complex transform from a series of simple discrete steps. However when you want to render multiple objects, each with their own transform, you need to have some form of reset in between each object.
You could reinitialise the whole matrix, but that's a bit untidy and involves knowing what other transforms (such as the camera, etc.) have been done previously.
Instead, you can "push" the current matrix onto a stack, perform whatever local transformations you want to do, render stuff, and then "pop" the matrix back off so that you're back where you started, ready to render the next object.
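As a minimal sketch against the question's own draw method (assuming LWJGL's GL11 bindings, as in the question), the fix is to wrap each tile's translation in a push/pop pair:
public void draw(float xa, float ya) {
    GL11.glPushMatrix();           // save the current transform
    GL11.glTranslatef(xa, ya, 0);  // move only this tile
    // ... the glBegin/glEnd quad drawing from the question goes here ...
    GL11.glPopMatrix();            // restore, so translations don't accumulate between tiles
}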
I should point out that all this functionality is deprecated in the later versions of GL. With the more modern API you use shaders, and can supply whatever transforms you care to calculate.

Android Camera Preview YUV format into RGB on the GPU

I have copy-pasted some code I found on Stack Overflow to convert the default camera preview YUV into RGB format and then uploaded it to OpenGL for processing.
That worked fine; the issue is that most of the CPU was busy converting the YUV images into RGB format, and that became the bottleneck.
I want to upload the YUV image into the GPU and then convert it into RGB in a fragment shader.
I took the same Java YUV to RGB function I found which worked on the CPU and tried to make it work on the GPU.
It turned out to be quite a nightmare, since there are several differences between doing the calculations in Java and on the GPU.
First, the preview image comes as a byte[] in Java, but bytes are signed, so there might be negative values.
In addition, the fragment shader normally deals with [0..1] floating-point values instead of bytes.
I am sure this is solvable and I almost solved it. But I spent a few hours trying to figure out what I was doing wrong and couldn't make it work.
Bottom line, I ask for someone to just write this shader function and preferably test it. For me it would be a tedious monkey job since I don't really understand why this conversion works the way it is, and I just try to mimic the same function on the GPU.
This is a very similar function to what I used on Java:
Displaying YUV Image in Android
I did some of the work on the CPU, such as turning the 1.5*w*h-byte YUV format into a w*h array of packed YUV values, as follows:
static public void decodeYUV420SP(int[] rgba, byte[] yuv420sp, int width,
        int height) {
    final int frameSize = width * height;
    for (int j = 0, yp = 0; j < height; j++) {
        int uvp = frameSize + (j >> 1) * width, u = 0, v = 0;
        for (int i = 0; i < width; i++, yp++) {
            int y = (int) yuv420sp[yp] + 127;
            if ((i & 1) == 0) {
                v = (int) yuv420sp[uvp++] + 127;
                u = (int) yuv420sp[uvp++] + 127;
            }
            rgba[yp] = 0xFF000000 + (y << 16) | (u << 8) | v;
        }
    }
}
I added 127 because byte is signed.
I then loaded the rgba into an OpenGL texture and tried to do the rest of the calculation on the GPU.
Any help would be appreciated...
I used this code from Wikipedia to calculate the conversion from YUV to RGB on the GPU:
private static int convertYUVtoRGB(int y, int u, int v) {
    int r, g, b;
    r = y + (int) (1.402f * v);
    g = y - (int) (0.344f * u + 0.714f * v);
    b = y + (int) (1.772f * u);
    r = r > 255 ? 255 : r < 0 ? 0 : r;
    g = g > 255 ? 255 : g < 0 ? 0 : g;
    b = b > 255 ? 255 : b < 0 ? 0 : b;
    return 0xff000000 | (b << 16) | (g << 8) | r;
}
I converted the floats to 0.0..255.0 and then used the above code.
The part on the CPU was to rearrange the original YUV pixels into a YUV matrix (also shown on Wikipedia).
Basically I used the Wikipedia code and did the simplest float<->byte conversions to make it work out.
Small mistakes like adding 16 to Y or not adding 128 to U and V would give undesirable results. So you need to take care of that.
But it wasn't a lot of work once I used the Wikipedia code as the base.
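For completeness, here is a minimal sketch of what such a fragment shader might look like, written as a Java string constant. It assumes (these are my assumptions, not tested code from this answer) that the Y plane is uploaded as a GL_LUMINANCE texture and the interleaved VU plane of the NV21 preview as a half-resolution GL_LUMINANCE_ALPHA texture, and it uses the same coefficients as above:
// Hypothetical shader source; u_TextureY, u_TextureVU and v_UV are assumed names.
private static final String YUV_TO_RGB_FRAGMENT_SHADER =
        "precision mediump float;\n" +
        "uniform sampler2D u_TextureY;\n" +
        "uniform sampler2D u_TextureVU;\n" +
        "varying vec2 v_UV;\n" +
        "void main() {\n" +
        "    float y = texture2D(u_TextureY, v_UV).r;\n" +
        "    vec2 vu = texture2D(u_TextureVU, v_UV).ra;  // V in luminance, U in alpha\n" +
        "    float v = vu.x - 0.5;\n" +
        "    float u = vu.y - 0.5;\n" +
        "    gl_FragColor = vec4(y + 1.402 * v,\n" +
        "                        y - 0.344 * u - 0.714 * v,\n" +
        "                        y + 1.772 * u,\n" +
        "                        1.0);\n" +
        "}\n";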
Converting on the CPU sounds easy, but I believe the question is how to do it on the GPU.
I did this recently in a project where I needed very fast QR code detection even when the camera angle was 45 degrees to the surface the code is printed on, and it worked with great performance:
(The following code is trimmed to just the key lines; it assumes you have a solid understanding of both Java and OpenGL ES.)
Create a GL texture that will receive the camera image:
int[] txt = new int[1];
GLES20.glGenTextures(1, txt, 0);
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, txt[0]);
GLES20.glTexParameterf(... set min filter to GL_LINEAR );
GLES20.glTexParameterf(... set mag filter to GL_LINEAR );
GLES20.glTexParameteri(... set wrap_s to GL_CLAMP_TO_EDGE );
GLES20.glTexParameteri(... set wrap_t to GL_CLAMP_TO_EDGE );
Note that the texture type is not GL_TEXTURE_2D. This is important, since only the GL_TEXTURE_EXTERNAL_OES type is supported by the SurfaceTexture object, which will be used in the next step.
Setup SurfaceTexture:
SurfaceTexture surfTex = new SurfaceTexture(txt[0]);
surfTex.setOnFrameAvailableListener(this);
The above assumes that 'this' is an object that implements the 'onFrameAvailable' function.
public void onFrameAvailable(SurfaceTexture st)
{
    surfTexNeedUpdate = true;
    // this flag will be read in GL render pipeline
}
Setup camera:
Camera cam = Camera.open();
cam.setPreviewTexture(surfTex);
This Camera API is deprecated as of Android 5.0, so if you target that, you have to use the new CameraDevice API.
In your render pipeline, use the following block to check whether the camera has a frame available, and update the surface texture with it. When the surface texture is updated, it will fill in the GL texture that is linked to it.
if (surfTexNeedUpdate)
{
    surfTex.updateTexImage();
    surfTexNeedUpdate = false;
}
To bind the GL texture which is linked to the Camera -> SurfaceTexture, just do this in the rendering pipe:
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, txt[0]);
It goes without saying that you need to set the current active texture.
In the GL shader program which will use the above texture in its fragment part, you must have as the first line:
#extension GL_OES_EGL_image_external : require
The above is a must-have.
The texture uniform must be of the samplerExternalOES type:
uniform samplerExternalOES u_Texture0;
Reading a pixel from it is just like reading from a GL_TEXTURE_2D texture, and the UV coordinates are in the same range (from 0.0 to 1.0):
vec4 px = texture2D(u_Texture0, v_UV);
Once you have your render pipeline ready to render a quad with above texture and shader, just start the camera:
cam.startPreview();
You should see a quad on your GL screen with the live camera feed. Now you just need to grab the image with glReadPixels:
GLES20.glReadPixels(0, 0, width, height, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, bytes);
The above line assumes that your FBO is RGBA, that bytes is a buffer already allocated to the proper size, and that width and height are the size of your FBO.
And voila! You have captured RGBA pixels from the camera instead of converting the YUV bytes received in the onPreviewFrame callback...
You can also use an RGB framebuffer object and avoid the alpha channel if you don't need it.
It is important to note that the camera will call onFrameAvailable on its own thread, which is not your GL render pipeline thread, so you should not perform any GL calls in that function.
RenderScript was first introduced in February 2011, with Android 3.0 Honeycomb (API 11). Since Android 4.2 Jelly Bean (API 17), when ScriptIntrinsicYuvToRGB was added, the easiest and most efficient solution has been to use RenderScript for the YUV to RGB conversion. I have recently generalized this solution to handle device rotation.
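A hedged sketch of that intrinsic in use (NV21 preview bytes in, Bitmap out); the method name and the way the input allocation is created are assumptions, not code from this answer:
import android.content.Context;
import android.graphics.Bitmap;
import android.renderscript.Allocation;
import android.renderscript.Element;
import android.renderscript.RenderScript;
import android.renderscript.ScriptIntrinsicYuvToRGB;

public static Bitmap yuvToRgb(Context context, byte[] nv21Bytes, int width, int height) {
    RenderScript rs = RenderScript.create(context);
    ScriptIntrinsicYuvToRGB script = ScriptIntrinsicYuvToRGB.create(rs, Element.U8_4(rs));

    Allocation in = Allocation.createSized(rs, Element.U8(rs), nv21Bytes.length);
    Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    Allocation out = Allocation.createFromBitmap(rs, bitmap);

    in.copyFrom(nv21Bytes);
    script.setInput(in);
    script.forEach(out);  // the YUV -> RGBA conversion runs on the RenderScript side
    out.copyTo(bitmap);
    return bitmap;
}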
