I'm currently having an issue with alpha channels when reading PNG files with ImageIO.read(...)
FileInputStream fileInputStream = new FileInputStream(path);
BufferedImage image = ImageIO.read(fileInputStream);
// Just copying the pixel data into an integer array
int width = image.getWidth();
int height = image.getHeight();
int[] pixels = new int[width * height];
image.getRGB(0, 0, width, height, pixels, 0, width);
However, when I try to read values from the pixel array by bit shifting, as seen below, the alpha channel always comes back as -1:
int a = (pixels[i] & 0xff000000) >> 24;
int r = (pixels[i] & 0xff0000) >> 16;
int g = (pixels[i] & 0xff00) >> 8;
int b = (pixels[i] & 0xff);
//a = -1, the other channels are fine
From Googling the problem, I understand that the BufferedImage needs to be of the type below for the alpha channel to work:
BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
But ImageIO.read(...) returns a BufferedImage without giving the option to specify the image type. So how can I do this?
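(I have also seen it suggested that you can force the type by repainting the decoded image into a new TYPE_INT_ARGB image, something like the untested sketch below, but I don't know whether that is the right fix:)
// Untested sketch: copy the decoded image into a fresh TYPE_INT_ARGB image.
BufferedImage argb = new BufferedImage(image.getWidth(), image.getHeight(), BufferedImage.TYPE_INT_ARGB);
Graphics2D g2d = argb.createGraphics();
g2d.drawImage(image, 0, 0, null);
g2d.dispose();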
Any help is much appreciated.
Thanks in advance
I think your "int unpacking" code might be wrong.
I used (pixel >> 24) & 0xff (where pixel is the rgba value of a specific pixel) and it worked fine.
I compared this with the results of java.awt.Color and they worked fine.
I "stole" the "extraction" code directly from java.awt.Color. This is yet another reason I tend not to perform these operations by hand: it's too easy to screw them up.
And my awesome test code...
BufferedImage image = ImageIO.read(new File("BYO image"));
int width = image.getWidth();
int height = image.getHeight();
for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
        int pixel = image.getRGB(x, y);
        //value = 0xff000000 | rgba;
        int a = (pixel >> 24) & 0xff;
        Color color = new Color(pixel, true);
        System.out.println(x + "x" + y + " = " + color.getAlpha() + "; " + a);
    }
}
NB: Before someone points out that this is inefficient: I wasn't going for efficiency, I was going for quick to write.
You may also want to have a look at How to convert get.rgb(x,y) integer pixel to Color(r,g,b,a) in Java?, which I also used to validate my results.
I think the problem is that you're using arithmetic shift (>>) instead of logical shift (>>>). Thus 0xff000000 >> 24 becomes 0xffffffff (i.e. -1)
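In other words, a minimal fix for the extraction in the question is either of these:
int a = (pixels[i] >>> 24);        // logical shift: zero-fills instead of sign-extending
int a2 = (pixels[i] >> 24) & 0xff; // or mask after shifting, as java.awt.Color does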
First of all, thanks for your time. I have a JAR library that is included as a library in my Android application.
This JAR, among other things, can get the RGB values from a JPG image. This works perfectly in my desktop Java application, but it does not work in my Android application because ImageIO.read(File) (which returns a BufferedImage) is not implemented on Android.
I have read something about using the Bitmap class instead, but I haven't found anything concrete about it.
Could you help me adapt the method below?
public static int[][][] getImageRgb(BufferedImage image) {
    int width = image.getWidth();
    int height = image.getHeight();
    int[][][] rgb = new int[height][width][3];
    for (int i = 0; i < height; i++) {
        for (int j = 0; j < width; j++) {
            int pixel = image.getRGB(j, i);
            rgb[i][j] = getPixelRgb(pixel);
        }
    }
    return rgb;
}
where getPixelRgb is defined as:
public static int[] getPixelRgb(int pixel) {
    // int alpha = (pixel >> 24) & 0xff;
    int red = (pixel >> 16) & 0xff;
    int green = (pixel >> 8) & 0xff;
    int blue = (pixel) & 0xff;
    return new int[]{red, green, blue};
}
I really do not know how to port these methods to Android.
I look forward to hearing from you.
Thanks a lot.
What you need is in the official docs:
int getPixel (int x, int y)
Returns the Color at the specified location.
You can create a Bitmap from a resource in the res/drawable folder, or, if you're downloading the image, you need to first save it to the device storage.
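As a rough, untested sketch of how the method from the question could be ported (BitmapFactory replaces ImageIO.read, and pathToJpg is a placeholder):
// Bitmap.getPixel() returns the same 0xAARRGGBB packing as BufferedImage.getRGB(),
// so getPixelRgb() from the question can be reused unchanged.
public static int[][][] getImageRgb(Bitmap image) {
    int width = image.getWidth();
    int height = image.getHeight();
    int[][][] rgb = new int[height][width][3];
    for (int i = 0; i < height; i++) {
        for (int j = 0; j < width; j++) {
            rgb[i][j] = getPixelRgb(image.getPixel(j, i));
        }
    }
    return rgb;
}
// Decoding the file, e.g.:
Bitmap bitmap = BitmapFactory.decodeFile(pathToJpg);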
I do a lot of game programming in my free time, and am currently working on a game engine library. Prior to this point I have made customized per-game engines built straight into the application; however, to challenge my logic skills even further, I decided I wanted to make an engine that I could use with literally any game that I write, kind of like a plugin.
Before this point, I had been pulling textures in using a BufferedImage, using getRGB to pull out the pixel int[] and hand-writing the texture int[] over the background int[] at the (X,Y) position where the Renderable resided. Then, when everything was written to the master int[], I would make a new BufferedImage, use setRGB with the master int[], and use a BufferStrategy and its Graphics to drawImage the BufferedImage. I liked this method because I felt like I had complete control over the way things were rendered, but I don't think it was very efficient.
Here is a look at the way I used to write to the master int[]:
public void draw(Render render, int x, int y, float alphaMul){
    for(int i = 0; i < render.Width; i++){
        int xPix = i + x;
        if(Width - 1 < xPix || xPix < 0) continue; // clip horizontally

        for(int j = 0; j < render.Height; j++){
            int yPix = j + y;
            if(Height - 1 < yPix || yPix < 0) continue; // clip vertically

            int srcARGB = render.Pixels[i + j * render.Width];
            int dstARGB = Pixels[xPix + yPix * Width];

            // Unpack the channels; the source alpha is scaled by alphaMul
            int srcAlpha = (int)((0xFF & (srcARGB >> 24)) * alphaMul);
            int srcRed   = 0xFF & (srcARGB >> 16);
            int srcGreen = 0xFF & (srcARGB >> 8);
            int srcBlue  = 0xFF & (srcARGB);

            int dstAlpha = 0xFF & (dstARGB >> 24);
            int dstRed   = 0xFF & (dstARGB >> 16);
            int dstGreen = 0xFF & (dstARGB >> 8);
            int dstBlue  = 0xFF & (dstARGB);

            float srcAlphaF = srcAlpha / 255.0f;
            float dstAlphaF = dstAlpha / 255.0f;

            // Standard source-over compositing
            int outAlpha = (int)((srcAlphaF + dstAlphaF * (1 - srcAlphaF)) * 255);
            int outRed   = (int)(srcRed   * srcAlphaF) + (int)(dstRed   * (1 - srcAlphaF));
            int outGreen = (int)(srcGreen * srcAlphaF) + (int)(dstGreen * (1 - srcAlphaF));
            int outBlue  = (int)(srcBlue  * srcAlphaF) + (int)(dstBlue  * (1 - srcAlphaF));

            Pixels[xPix + yPix * Width] = (outAlpha << 24) | (outRed << 16) | (outGreen << 8) | outBlue;
        }
    }
}
I have recently found out it may be many times faster to use drawImage instead: I could loop through all of the Renderables and draw them as BufferedImages at their respective (X,Y) positions. But I do not know how to do alpha blending that way. So my questions are: how would I go about getting the results I want, and would it be beneficial in time and resources compared to my previous method?
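To be concrete, this is roughly what I imagine the drawImage version would look like (an untested sketch; renderables, getImage(), and getAlpha() are placeholders for whatever my engine ends up exposing):
// Untested sketch: let Graphics2D do the compositing instead of blending by hand.
// AlphaComposite.SRC_OVER with an extra alpha plays the role of alphaMul above.
Graphics2D g = (Graphics2D) bufferStrategy.getDrawGraphics();
for (Renderable r : renderables) {
    g.setComposite(AlphaComposite.getInstance(AlphaComposite.SRC_OVER, r.getAlpha()));
    g.drawImage(r.getImage(), r.getX(), r.getY(), null);
}
g.dispose();
bufferStrategy.show();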
Thanks
-Craig
I'm currently working on a platformer game in Java, and I'm having trouble figuring out why this method decreases performance so much. Only the textures in view of the camera are rendered, and I've even tried clearing all objects outside the camera's view so that the array was almost empty, but I still couldn't get a good framerate. When I comment out the call to this method, the game runs at 300 FPS; when I run it, even when I remove everything afterwards, I only get 40 FPS. This is not an issue with rendering, as I have tested that thoroughly. Any feedback would be much appreciated. Here is the code:
public void buildTerrain(BufferedImage bi) {
    // this method will take an image and build a level based on it.
    int width = bi.getWidth();
    int height = bi.getHeight();
    for(int x = 0; x < width; x++){
        for(int y = 0; y < height; y++){
            int pixel = bi.getRGB(x, y);
            int r = (pixel >> 16) & 0xff;
            int g = (pixel >> 8) & 0xff;
            int b = (pixel) & 0xff;
            if(r == 255 && g == 255 && b == 255){
                h.addObject(new Block(x*32, y*32, ID.blockStone, GameState.level1, tex));
            }
            if(r == 0 && g == 0 && b == 255){
                p.setX(x*32);
                p.setY(y*32);
                p.setHeight(64);
            }
        }
    }
}
references:
h is a Handler object, which contains a method addObject(GameObject)
Block extends GameObject
p is a Player, which also extends GameObject.
EDIT: this code is not called in a loop; it is run once at the beginning of each level to load the terrain. All the addObject() method does is add the Blocks to an array, where they are then iterated over in the tick() and render() methods. Only objects in the scope of the camera are rendered, and the tick() method of blocks is empty.
Could you try:
if((pixel & 0x00ffffff) == 0x00ffffff){
    h.addObject(new Block(x*32, y*32, ID.blockStone, GameState.level1, tex));
}
if((pixel & 0x00ffffff) == 0x000000ff){
    p.setX(x*32);
    p.setY(y*32);
    p.setHeight(64);
}
Because I don't see the need to decompose (r, g, b) for each pixel when you can compare the whole RGB part at once with a single binary & (getRGB returns ARGB, so the mask 0x00ffffff keeps exactly the RGB bits: white is 0x00ffffff and pure blue is 0x000000ff).
In your code, you do width*height*(3 shifts + 3 ands + 3 assignments + 3 comparisons) operations.
In my code, you do width*height*2*(1 and + 1 comparison) operations.
What I need to do is quite simple: I want to manually display the preview from the camera using the camera callback, and I want to get at least 15 fps on a real device. I don't even need colors, I just need to preview a grayscale image.
Images from the camera are in YUV format, and you have to process them somehow, which is the main performance problem. I'm using API 8.
In all cases I'm using camera.setPreviewCallbackWithBuffer(), which is faster than camera.setPreviewCallback(). It seems that I can get about 24 fps here if I'm not displaying the preview, so the callback itself is not the problem.
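For reference, my buffered-callback setup looks roughly like this (the buffer size comes from the preview size and format):
// Sketch of the setPreviewCallbackWithBuffer() setup.
Camera.Parameters params = camera.getParameters();
Camera.Size size = params.getPreviewSize();
int bitsPerPixel = ImageFormat.getBitsPerPixel(params.getPreviewFormat()); // NV21 -> 12
byte[] buffer = new byte[size.width * size.height * bitsPerPixel / 8];
camera.addCallbackBuffer(buffer);
camera.setPreviewCallbackWithBuffer(new Camera.PreviewCallback() {
    public void onPreviewFrame(byte[] data, Camera camera) {
        // ... process the frame ...
        camera.addCallbackBuffer(data); // hand the buffer back for reuse
    }
});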
I have tried these solutions:
1. Display the camera preview on a SurfaceView as a Bitmap. It works, but the performance is about 6 fps.
baos = new ByteArrayOutputStream();
yuvimage = new YuvImage(cameraFrame, ImageFormat.NV21, prevX, prevY, null);
yuvimage.compressToJpeg(new Rect(0, 0, prevX, prevY), 80, baos);
jdata = baos.toByteArray();
bmp = BitmapFactory.decodeByteArray(jdata, 0, jdata.length); // Converting to Bitmap is the main issue, it takes a lot of time
canvas.drawBitmap(bmp, 0, 0, paint);
2. Display the camera preview on a GLSurfaceView as a texture. Here I was displaying only the luminance data (a greyscale image), which is quite easy: it requires only one arraycopy() per frame. I can get about 12 fps, but I need to apply some filters to the preview, and it seems that can't be done fast in OpenGL ES 1.0. So I can't use this solution. Some details of this are in another question.
3. Display the camera preview on a (GL)SurfaceView using the NDK to process the YUV data. I found a solution here that uses some C functions and the NDK. But I didn't manage to use it (here are some more details). In any case, that solution is written to return a ByteBuffer for display as an OpenGL texture, and it won't be faster than the previous attempt. So I would have to modify it to return an int[] array that can be drawn with canvas.drawBitmap(), but I don't understand C well enough to do that.
So, is there any other way that I'm missing or some improvement to the attempts I tried?
I'm working on exactly the same issue, but haven't got quite as far as you have.
Have you considered drawing the pixels directly to the canvas without encoding them to JPEG first? Inside the OpenCV kit http://sourceforge.net/projects/opencvlibrary/files/opencv-android/2.3.1/OpenCV-2.3.1-android-bin.tar.bz2/download (which doesn't actually use OpenCV; don't worry), there's a project called tutorial-0-androidcamera that demonstrates converting the YUV pixels to RGB and then writing them directly to a bitmap.
The relevant code is essentially:
public void onPreviewFrame(byte[] data, Camera camera, int width, int height) {
    int frameSize = width * height;
    int[] rgba = new int[frameSize];

    // Convert YUV (NV21) to ARGB
    for (int i = 0; i < height; i++) {
        for (int j = 0; j < width; j++) {
            int y = (0xff & ((int) data[i * width + j]));
            // In NV21 the chroma plane is interleaved V,U (one pair per 2x2 block)
            int v = (0xff & ((int) data[frameSize + (i >> 1) * width + (j & ~1) + 0]));
            int u = (0xff & ((int) data[frameSize + (i >> 1) * width + (j & ~1) + 1]));
            y = y < 16 ? 16 : y;

            int r = Math.round(1.164f * (y - 16) + 1.596f * (v - 128));
            int g = Math.round(1.164f * (y - 16) - 0.813f * (v - 128) - 0.391f * (u - 128));
            int b = Math.round(1.164f * (y - 16) + 2.018f * (u - 128));

            // clamp to [0, 255]
            r = r < 0 ? 0 : (r > 255 ? 255 : r);
            g = g < 0 ? 0 : (g > 255 ? 255 : g);
            b = b < 0 ? 0 : (b > 255 ? 255 : b);

            // pack as ARGB for Bitmap.setPixels()
            rgba[i * width + j] = 0xff000000 | (r << 16) | (g << 8) | b;
        }
    }

    Bitmap bmp = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    bmp.setPixels(rgba, 0 /* offset */, width /* stride */, 0, 0, width, height);
    Canvas canvas = mHolder.lockCanvas();
    if (canvas != null) {
        canvas.drawBitmap(bmp, (canvas.getWidth() - width) / 2, (canvas.getHeight() - height) / 2, null);
        mHolder.unlockCanvasAndPost(canvas);
    } else {
        Log.w(TAG, "Canvas is null!");
    }
    bmp.recycle();
}
Of course you'd have to adapt it to meet your needs (e.g. not allocating rgba each frame), but it might be a start. I'd love to see if it works for you or not -- I'm still fighting problems orthogonal to yours at the moment.
I think Michael's on the right track. First, you can try this method to convert from YUV to grayscale. Clearly it's doing almost the same thing as his, but a little more succinctly for what you want.
//YUV Space to Greyscale
static public void YUVtoGrayScale(int[] rgb, byte[] yuv420sp, int width, int height){
    final int frameSize = width * height;
    for (int pix = 0; pix < frameSize; pix++){
        int pixVal = (0xff & ((int) yuv420sp[pix])) - 16;
        if (pixVal < 0) pixVal = 0;
        if (pixVal > 255) pixVal = 255;
        rgb[pix] = 0xff000000 | (pixVal << 16) | (pixVal << 8) | pixVal;
    }
}
Second, don't create a ton of work for the garbage collector. Your bitmaps and arrays are going to be a fixed size, so create them once, not in onPreviewFrame().
Doing that you'll end up with something that looks like this:
public PreviewCallback callback = new PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        if ((mSelectView == null) || !inPreview)
            return;
        if (mSelectView.mBitmap == null) {
            //initialize SelectView bitmaps, arrays, etc
            //mSelectView.mBitmap = Bitmap.createBitmap(mSelectView.mImageWidth, mSelectView.mImageHeight, Bitmap.Config.RGB_565);
            //etc
        }
        //Pass image data to SelectView
        System.arraycopy(data, 0, mSelectView.mYUVData, 0, data.length);
        mSelectView.invalidate();
    }
};
And then the canvas where you want to put it looks like this:
class SelectView extends View {
    Bitmap mBitmap;
    Bitmap croppedView;
    byte[] mYUVData;
    int[] mRGBData;
    int mImageHeight;
    int mImageWidth;

    public SelectView(Context context){
        super(context);
        mBitmap = null;
        croppedView = null;
    }

    @Override
    protected void onDraw(Canvas canvas){
        if (mBitmap != null) {
            int canvasWidth = canvas.getWidth();
            int canvasHeight = canvas.getHeight();
            // Convert from YUV to greyscale
            YUVtoGrayScale(mRGBData, mYUVData, mImageWidth, mImageHeight);
            mBitmap.setPixels(mRGBData, 0, mImageWidth, 0, 0, mImageWidth, mImageHeight);
            Rect crop = new Rect(180, 220, 290, 400);
            Rect dst = new Rect(0, 0, canvasWidth, (int)(canvasHeight/2));
            canvas.drawBitmap(mBitmap, crop, dst, null);
        }
        super.onDraw(canvas);
    }
}
This example shows a cropped and distorted selection of the camera preview in real time, but you get the idea. It runs at high FPS on a Nexus S in greyscale and should work for your needs as well.
Is this not what you want? Just use a SurfaceView in your layout, then somewhere in your init like onResume():
SurfaceView surfaceView = ...
SurfaceHolder holder = surfaceView.getHolder();
...
Camera camera = ...;
camera.setPreviewDisplay(holder);
It just sends the frames straight to the view as fast as they arrive.
If you want grayscale, modify the camera parameters with setColorEffect("mono").
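For example (Camera.Parameters.EFFECT_MONO is the constant behind the "mono" string):
// Switch the live preview to the built-in mono (grayscale) effect.
Camera.Parameters params = camera.getParameters();
params.setColorEffect(Camera.Parameters.EFFECT_MONO); // i.e. "mono"
camera.setParameters(params);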
For very basic and simple effects, there is
Camera.Parameters parameters = mCamera.getParameters();
parameters.setColorEffect(Parameters.EFFECT_AQUA);
I figured out that these effects behave differently depending on the device.
For instance, on my phone (Galaxy S II) it looks kind of like a comic effect, whereas on the Galaxy S it is 'just' a blue shade.
Its pro: it works as a live preview.
I looked around some other camera apps, and they obviously also faced this problem.
So what did they do?
They capture the default camera image, apply a filter to the bitmap data, and show the image in a simple ImageView (see the sketch below). It's certainly not as cool as a live preview, but you won't ever face performance problems.
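A sketch of that capture-then-filter approach, using a ColorMatrix with zero saturation as the grayscale filter (captured and imageView are assumed to exist):
// Grayscale a captured Bitmap via ColorMatrix, then show it in an ImageView.
Bitmap gray = Bitmap.createBitmap(captured.getWidth(), captured.getHeight(), Bitmap.Config.ARGB_8888);
Canvas canvas = new Canvas(gray);
ColorMatrix cm = new ColorMatrix();
cm.setSaturation(0f); // zero saturation = grayscale
Paint paint = new Paint();
paint.setColorFilter(new ColorMatrixColorFilter(cm));
canvas.drawBitmap(captured, 0, 0, paint);
imageView.setImageBitmap(gray);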
I believe I read in a blog that the grayscale data is in the first width*height bytes. YUV's Y plane is luminance, so the data is there, although it isn't a perfect grayscale. It's great for relative brightness, but not true grayscale, since each color isn't equally bright in RGB; green is usually given a stronger weight in luminosity conversions. Hope this helps!
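So grabbing a rough grayscale frame can be as cheap as a single copy (a sketch, assuming an NV21 preview buffer called data):
// The Y (luminance) plane is simply the first width*height bytes of an NV21 frame.
byte[] yPlane = new byte[width * height];
System.arraycopy(data, 0, yPlane, 0, width * height);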
Is there any special reason that you are forced to use GLES 1.0?
Because if not, see the accepted answer here:
Android SDK: Get raw preview camera image without displaying it
Generally it mentions using Camera.setPreviewTexture() in combination with GLES 2.0.
In GLES 2.0 you can render a full-screen-quad all over the screen, and create whatever effect you want.
It's most likely the fastest way possible.