Convert raw pixel data (as byte array) to BufferedImage - java

I have a requirement to use a native library which reads some proprietary image file formats (something about not reinventing our own wheel). The library works fine; it's just that sometimes the images can get pretty big (the record I've seen being 13k x 15k pixels). The problem is my poor JVM keeps dying a painful death and/or throwing OutOfMemoryErrors any time the images start getting huge.
Here's what I have running:
//The bands, width, and height fields are set in the native code,
//and the rawBytes array is also populated in the native code.
public BufferedImage getImage() {
    int type = bands == 1 ? BufferedImage.TYPE_BYTE_GRAY : BufferedImage.TYPE_INT_BGR;
    BufferedImage bi = new BufferedImage(width, height, type);
    ImageFilter filter = new RGBImageFilter() {
        @Override
        public int filterRGB(int x, int y, int rgb) {
            int r, g, b;
            if (bands == 3) {
                r = (((int) rawBytes[y * (width / bands) * 3 + x * 3 + 2]) & 0xFF) << 16;
                g = (((int) rawBytes[y * (width / bands) * 3 + x * 3 + 1]) & 0xFF) << 8;
                b = (((int) rawBytes[y * (width / bands) * 3 + x * 3 + 0]) & 0xFF);
            } else {
                b = (((int) rawBytes[y * width + x]) & 0xFF);
                g = b << 8;
                r = b << 16;
            }
            return 0xFF000000 | r | g | b;
        }
    };
    //This is the problematic block,
    ImageProducer ip = new FilteredImageSource(bi.getSource(), filter);
    Image i = Toolkit.getDefaultToolkit().createImage(ip);
    Graphics g = bi.createGraphics();
    //with this next line being where the error tends to occur.
    g.drawImage(i, 0, 0, null);
    return bi;
}
This snippet works great for most images, as long as they're not obscenely large. Its speed is also just fine. The problem is that the step of drawing the Image onto the BufferedImage swallows way too much memory.
Is there a way I could skip that step and go directly from raw bytes to buffered image?

Use the RawImageInputStream from JAI. This does require knowing information about the SampleModel, which you appear to have from the native code.
Another option would be to put your rawBytes into a DataBuffer, then create a WritableRaster, and finally create a BufferedImage.
This is essentially what the RawImageInputStream would do.
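Here is a minimal sketch of that second approach, assuming the interleaved B,G,R byte layout and a scanline stride of width * bands implied by the question's code. It wraps rawBytes without copying, so no intermediate Image is ever created:
import java.awt.Transparency;
import java.awt.color.ColorSpace;
import java.awt.image.*;

public BufferedImage getImageDirect() {
    // Wrap the existing array; no pixel data is copied.
    DataBufferByte buffer = new DataBufferByte(rawBytes, rawBytes.length);
    // Band offsets match the B,G,R byte order used in the question's filter.
    int[] bandOffsets = (bands == 3) ? new int[] {2, 1, 0} : new int[] {0};
    WritableRaster raster = Raster.createInterleavedRaster(
            buffer, width, height,
            width * bands /* scanline stride */, bands /* pixel stride */,
            bandOffsets, null);
    ColorSpace cs = ColorSpace.getInstance(
            bands == 3 ? ColorSpace.CS_sRGB : ColorSpace.CS_GRAY);
    ColorModel cm = new ComponentColorModel(cs, false /* no alpha */, false,
            Transparency.OPAQUE, DataBuffer.TYPE_BYTE);
    return new BufferedImage(cm, raster, cm.isAlphaPremultiplied(), null);
}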

Related

Handling borders when applying a filter to an image

I'm trying to tidy up some of my GitHub projects for a portfolio and was hoping for some help.
I have a basic image convolution kernel program in Java that applies a filter to an input image.
The kernels are 3x3 and 5x5 2D arrays and are applied across each pixel in the image. Obviously this causes issues around the edges, where 3-5 cells of the kernel will be multiplied against empty values (causing the outer pixels in the image to be black or white).
Below are some examples of what I mean (the white pixel bar, not the grey part).
The following is the crux of the code involving the image processing.
for (int xCoord = 0; xCoord < width; xCoord++) {
    for (int yCoord = 0; yCoord < height; yCoord++) {
        // Output red, green, blue and alpha values
        int outR, outG, outB, outA;
        // Running totals that will contain the r/g/b/a values after
        // kernel multiplication
        double red = 0, green = 0, blue = 0, alpha = 0;
        int outRGB = 0;
        /*
         * Loop over the kernel (for each cell in the kernel).
         * The offset is added to xCoord and yCoord to follow the footprint
         * of the kernel. The logic is that all kernels have odd dimensions
         * (so they can have a centre pixel: 3x3, 5x5, etc.), so the offset
         * runs from negative half their length (rounded down) to positive
         * half their length (rounded down). This allows kernels of various
         * sizes (5x5, 9x9, etc.) without throwing off the calculation.
         */
        try {
            for (int xOffset = Math.negateExact(k.getKernels().length / 2);
                    xOffset <= k.getKernels().length / 2; xOffset++) {
                for (int yOffset = Math.negateExact(k.getKernels().length / 2);
                        yOffset <= k.getKernels().length / 2; yOffset++) {
The first two loops iterate over each pixel. The second two loops iterate over the kernel (a 2D-array enum class); the offset is used to walk across the 3x3 or 5x5 kernel at each pixel in the image.
What follows is my very basic attempt to wrap the sampled pixels, so edge pixels use values from the opposite side of the image, but my main question is how best to handle these edge cases (see the sketch after the code below). If anyone has any pointers on where to start solving this, it would be much appreciated.
                    // TODO: very basic wrapping logic
                    int realX = (xCoord - k.getKernels().length / 2 + xOffset + width) % width;
                    int realY = (yCoord - k.getKernels().length / 2 + yOffset + height) % height;
                    int RGB = image.getRGB(realX, realY); // The RGB value for the pixel, split out below
                    int A = (RGB >> 24) & 0xFF; // Bit-shift 24 to get the alpha value
                    int R = (RGB >> 16) & 0xFF; // Bit-shift 16 to get the red value
                    int G = (RGB >> 8) & 0xFF;  // Bit-shift 8 to get the green value
                    int B = RGB & 0xFF;
                    // Actual rgb * kernel logic
                    red += R * k.getKernels()[yOffset + k.getKernels().length / 2][xOffset + k.getKernels().length / 2] * multiFactor;
                    green += G * k.getKernels()[yOffset + k.getKernels().length / 2][xOffset + k.getKernels().length / 2] * multiFactor;
                    blue += B * k.getKernels()[yOffset + k.getKernels().length / 2][xOffset + k.getKernels().length / 2] * multiFactor;
                    alpha += A * k.getKernels()[yOffset + k.getKernels().length / 2][xOffset + k.getKernels().length / 2] * multiFactor;
                }
            }
        } catch (ArrayIndexOutOfBoundsException e) {
            System.out.println("Error");
        }
        // Clamp each channel so it can't go over 255 or under 0:
        // Math.max with 0 enforces the lower bound, then Math.min with 255
        // (the max value for a pixel channel) enforces the upper bound.
        outR = (int) Math.min(Math.max(red + bias, 0), 255);
        outG = (int) Math.min(Math.max(green + bias, 0), 255);
        outB = (int) Math.min(Math.max(blue + bias, 0), 255);
        outA = (int) Math.min(Math.max(alpha + bias, 0), 255);
        // Reassemble the separate color channels into one variable again.
        outRGB = outRGB | (outA << 24);
        outRGB = outRGB | (outR << 16);
        outRGB = outRGB | (outG << 8);
        outRGB = outRGB | outB;
        // Set the output pixel with the reassembled RGB value
        // output.setRGB(xCoord, yCoord, outRGB);
I'm a bit stumped on how to proceed, but if anyone can suggest efficient ways to handle these edge cases, point me in the right direction, or offer some constructive criticism on how to improve any of the code, it would be much appreciated. If anyone's interested, the full project is here
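For reference, a minimal sketch of the three most common border strategies (the helper names are illustrative, not from the project). Clamp repeats the edge pixel, wrap tiles the image as the modulo logic above does, and mirror reflects it at the border; applying one of these to the sampled coordinates is usually all that's needed:
// Clamp: repeat the edge pixel (a.k.a. edge extension).
static int clampIndex(int i, int max) {
    return Math.max(0, Math.min(i, max - 1));
}

// Wrap: tile the image, as the question's modulo logic does.
static int wrapIndex(int i, int max) {
    return ((i % max) + max) % max;
}

// Mirror: reflect the image at the border.
static int mirrorIndex(int i, int max) {
    i = ((i % (2 * max)) + 2 * max) % (2 * max);
    return (i < max) ? i : 2 * max - 1 - i;
}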

Alpha channel ignored when using ImageIO.read()

I'm currently having an issue with alpha channels when reading PNG files with ImageIO.read(...)
fileInputStream = new FileInputStream(path);
BufferedImage image = ImageIO.read(fileInputStream);
//Just copying data into an integer array
int[] pixels = new int[image.getWidth() * image.getHeight()];
image.getRGB(0, 0, width, height, pixels, 0, width);
However, when trying to read values from the pixel array by bit shifting as seen below, the alpha channel is always returning -1
int a = (pixels[i] & 0xff000000) >> 24;
int r = (pixels[i] & 0xff0000) >> 16;
int g = (pixels[i] & 0xff00) >> 8;
int b = (pixels[i] & 0xff);
//a = -1, the other channels are fine
By Googling the problem I understand that the BufferedImage type needs to be defined as below to allow for the alpha channel to work:
BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
But ImageIO.read(...) returns a BufferedImage without giving the option to specify the image type. So how can I do this?
Any help is much appreciated.
Thanks in advance
I think your "int unpacking" code might be wrong.
I used (pixel >> 24) & 0xff (where pixel is the rgba value of a specific pixel) and it worked fine.
I compared this with the results of java.awt.Color and they worked fine.
I "stole" the "extraction" code directly from java.awt.Color, this is, yet another reason, I tend not to perform these operations this way, it's to easy to screw them up
And my awesome test code...
BufferedImage image = ImageIO.read(new File("BYO image"));
int width = image.getWidth();
int height = image.getHeight();
for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
        int pixel = image.getRGB(x, y);
        //value = 0xff000000 | rgba;
        int a = (pixel >> 24) & 0xff;
        Color color = new Color(pixel, true);
        System.out.println(x + "x" + y + " = " + color.getAlpha() + "; " + a);
    }
}
NB: Before someone points out that this is inefficient: I wasn't going for efficiency, I was going for quick to write.
You may also want to have a look at How to convert get.rgb(x,y) integer pixel to Color(r,g,b,a) in Java?, which I also used to validate my results
I think the problem is that you're using arithmetic shift (>>) instead of logical shift (>>>). Thus 0xff000000 >> 24 sign-extends and becomes 0xffffffff (i.e. -1), while 0xff000000 >>> 24 becomes 0x000000ff (i.e. 255).
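Applied to the question's code, the fix is a one-character change (a minimal sketch; masking after the shift, as in the other answer, works equally well):
int a = (pixels[i] >>> 24) & 0xff; // logical shift: the sign bit no longer smears
// or, equivalently: int a = (pixels[i] & 0xff000000) >>> 24;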

Convert byte array to Image

I have a byte array whose size is 640*480*3 and whose byte order is r,g,b. I'm trying to convert it into an Image. The following code doesn't work:
BufferedImage img = ImageIO.read(new ByteArrayInputStream(data));
with the exception
Exception in thread "main" java.lang.IllegalArgumentException: image == null!
at javax.imageio.ImageTypeSpecifier.createFromRenderedImage(ImageTypeSpecifier.java:925)
at javax.imageio.ImageIO.getWriter(ImageIO.java:1591)
at javax.imageio.ImageIO.write(ImageIO.java:1520)
I also tried this code:
ImageIcon imageIcon = new ImageIcon(data);
Image img = imageIcon.getImage();
BufferedImage bi = new BufferedImage(img.getWidth(null),img.getHeight(null),BufferedImage.TYPE_3BYTE_BGR); //Exception
but unsuccessfully:
Exception in thread "main" java.lang.IllegalArgumentException: Width (-1) and height (-1) must be > 0
How can I receive the Image from this array?
A plain byte array is not a generally recognized image format, so ImageIO cannot decode it. You have to code the conversion yourself. Luckily it's not very hard to do:
int w = 640;
int h = 480;
BufferedImage i = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
for (int y = 0; y < h; ++y) {
    for (int x = 0; x < w; ++x) {
        // Calculate the index of the pixel; this depends on the exact
        // organization of the image. The sample assumes linear storage
        // with r, g, b pixel order.
        int index = (y * w * 3) + (x * 3);
        // Combine to RGB format
        int rgb = ((data[index++] & 0xFF) << 16) |
                  ((data[index++] & 0xFF) << 8)  |
                  ((data[index++] & 0xFF))       |
                  0xFF000000;
        i.setRGB(x, y, rgb);
    }
}
The exact formula for the pixel index depends on how you organized the data in the array, which you didn't really specify precisely. The principle is always the same, though: combine the R, G, B values into an RGB (ARGB, to be precise) value and put it into the BufferedImage using the setRGB() method.
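If the per-pixel setRGB() calls turn out to be slow for larger images, a sketch of a bulk variant (same assumptions about the byte layout) is to fill an int[] first and hand it over in a single call:
int[] rgb = new int[w * h];
for (int p = 0, q = 0; p < rgb.length; p++) {
    rgb[p] = 0xFF000000
           | ((data[q++] & 0xFF) << 16)  // r
           | ((data[q++] & 0xFF) << 8)   // g
           |  (data[q++] & 0xFF);        // b
}
// One call instead of w*h calls; the last argument is the scanline stride.
i.setRGB(0, 0, w, h, rgb, 0, w);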

Java byte Image Manipulation

I need to create a simple demo of image manipulation in Java. My code is Swing-based. I don't have to do anything complex, just show that the image has changed in some way. I have the image read as a byte[]. Is there any way I can manipulate this byte array, without corrupting the bytes, to show some very simple manipulation? I don't wish to use paint() etc. Is there anything I can do directly to the byte[] array to show some change?
edit:
I am reading the jpg image as a ByteArrayInputStream using the Apache IO library. The bytes are read OK, and I can confirm it by writing them back out as a jpeg.
You can try converting your RGB image to grayscale. If the image has 3 bytes per pixel, represented as red-green-blue, you can use the following formula: y = 0.299*r + 0.587*g + 0.114*b.
To be clear, iterate over the byte array and replace the colors. Here is an example:
byte[] newImage = new byte[rgbImage.length];
for (int i = 0; i < rgbImage.length; i += 3) {
    // Mask each byte with 0xFF: Java bytes are signed, so channel values
    // above 127 would otherwise be treated as negative and corrupt the result.
    newImage[i] = (byte) ((rgbImage[i] & 0xFF) * 0.299
                        + (rgbImage[i + 1] & 0xFF) * 0.587
                        + (rgbImage[i + 2] & 0xFF) * 0.114);
    newImage[i + 1] = newImage[i];
    newImage[i + 2] = newImage[i];
}
UPDATE:
The above code assumes a raw RGB image; if you need to process a JPEG file you can do this:
try {
    BufferedImage inputImage = ImageIO.read(new File("input.jpg"));
    BufferedImage outputImage = new BufferedImage(
            inputImage.getWidth(), inputImage.getHeight(),
            BufferedImage.TYPE_INT_RGB);
    for (int x = 0; x < inputImage.getWidth(); x++) {
        for (int y = 0; y < inputImage.getHeight(); y++) {
            int rgb = inputImage.getRGB(x, y);
            int blue = 0x0000ff & rgb;
            int green = 0x0000ff & (rgb >> 8);
            int red = 0x0000ff & (rgb >> 16);
            int lum = (int) (red * 0.299 + green * 0.587 + blue * 0.114);
            outputImage.setRGB(x, y, lum | (lum << 8) | (lum << 16));
        }
    }
    ImageIO.write(outputImage, "jpg", new File("output.jpg"));
} catch (IOException e) {
    e.printStackTrace();
}

Android: how to display camera preview with callback?

What I need to do is quite simple: I want to manually display the preview from the camera using the camera callback, and I want to get at least 15 fps on a real device. I don't even need the colors, I just need to preview a grayscale image.
Images from the camera are in YUV format, and you have to process them somehow, which is the main performance problem. I'm using API 8.
In all cases I'm using camera.setPreviewCallbackWithBuffer(), which is faster than camera.setPreviewCallback(). It seems that I can get about 24 fps here if I'm not displaying the preview, so the callback itself is not the problem.
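For anyone unfamiliar with the buffered variant, a minimal sketch of the setup (assuming the NV21 preview format; the per-frame processing is elided):
Camera.Size size = camera.getParameters().getPreviewSize();
// NV21 uses 12 bits per pixel; pre-allocate one reusable buffer of that size.
int bufSize = size.width * size.height * ImageFormat.getBitsPerPixel(ImageFormat.NV21) / 8;
camera.addCallbackBuffer(new byte[bufSize]);
camera.setPreviewCallbackWithBuffer(new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        // ... process the frame ...
        camera.addCallbackBuffer(data); // hand the buffer back for the next frame
    }
});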
I have tried these solutions:
1. Display the camera preview on a SurfaceView as a Bitmap. It works, but the performance is about 6 fps.
baos = new ByteArrayOutputStream();
yuvimage = new YuvImage(cameraFrame, ImageFormat.NV21, prevX, prevY, null);
yuvimage.compressToJpeg(new Rect(0, 0, prevX, prevY), 80, baos);
jdata = baos.toByteArray();
// Convert to Bitmap; this is the main issue, it takes a lot of time
bmp = BitmapFactory.decodeByteArray(jdata, 0, jdata.length);
canvas.drawBitmap(bmp, 0, 0, paint);
2. Display the camera preview on a GLSurfaceView as a texture. Here I was displaying only the luminance data (a greyscale image), which is quite easy; it requires only one System.arraycopy() per frame. I can get about 12 fps, but I need to apply some filters to the preview, and it seems that this can't be done fast in OpenGL ES 1.0, so I can't use this solution. There are some details of this in another question.
3. Display the camera preview on a (GL)SurfaceView using the NDK to process the YUV data. I found a solution here that uses a C function and the NDK, but I didn't manage to use it (here are some more details). In any case, this solution is designed to return a ByteBuffer to display as a texture in OpenGL, and it won't be faster than the previous attempt, so I would have to modify it to return an int[] array that can be drawn with canvas.drawBitmap(), but I don't understand C well enough to do this.
So, is there any other way that I'm missing or some improvement to the attempts I tried?
I'm working on exactly the same issue, but haven't got quite as far as you have.
Have you considered drawing the pixels directly to the canvas without encoding them to JPEG first? Inside the OpenCV kit http://sourceforge.net/projects/opencvlibrary/files/opencv-android/2.3.1/OpenCV-2.3.1-android-bin.tar.bz2/download (which doesn't actually use opencv; don't worry), there's a project called tutorial-0-androidcamera that demonstrates converting the YUV pixels to RGB and then writing them directly to a bitmap.
The relevant code is essentially:
public void onPreviewFrame(byte[] data, Camera camera, int width, int height) {
    int frameSize = width * height;
    int[] rgba = new int[frameSize + 1];
    // Convert YUV to RGB
    for (int i = 0; i < height; i++)
        for (int j = 0; j < width; j++) {
            int y = (0xff & ((int) data[i * width + j]));
            int u = (0xff & ((int) data[frameSize + (i >> 1) * width + (j & ~1) + 0]));
            int v = (0xff & ((int) data[frameSize + (i >> 1) * width + (j & ~1) + 1]));
            y = y < 16 ? 16 : y;
            int r = Math.round(1.164f * (y - 16) + 1.596f * (v - 128));
            int g = Math.round(1.164f * (y - 16) - 0.813f * (v - 128) - 0.391f * (u - 128));
            int b = Math.round(1.164f * (y - 16) + 2.018f * (u - 128));
            r = r < 0 ? 0 : (r > 255 ? 255 : r);
            g = g < 0 ? 0 : (g > 255 ? 255 : g);
            b = b < 0 ? 0 : (b > 255 ? 255 : b);
            rgba[i * width + j] = 0xff000000 + (b << 16) + (g << 8) + r;
        }

    Bitmap bmp = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    bmp.setPixels(rgba, 0 /* offset */, width /* stride */, 0, 0, width, height);
    Canvas canvas = mHolder.lockCanvas();
    if (canvas != null) {
        canvas.drawBitmap(bmp, (canvas.getWidth() - width) / 2, (canvas.getHeight() - height) / 2, null);
        mHolder.unlockCanvasAndPost(canvas);
    } else {
        Log.w(TAG, "Canvas is null!");
    }
    bmp.recycle();
}
Of course you'd have to adapt it to meet your needs (e.g. not allocating rgba each frame), but it might be a start. I'd love to see if it works for you or not; I'm still fighting problems orthogonal to yours at the moment.
I think Michael's on the right track. First, you can try this method to convert from YUV to grayscale. Clearly it's doing almost the same thing as his, but a little more succinctly for what you want.
//YUV space to greyscale
static public void YUVtoGrayScale(int[] rgb, byte[] yuv420sp, int width, int height) {
    final int frameSize = width * height;
    for (int pix = 0; pix < frameSize; pix++) {
        int pixVal = (0xff & ((int) yuv420sp[pix])) - 16;
        if (pixVal < 0) pixVal = 0;
        if (pixVal > 255) pixVal = 255;
        rgb[pix] = 0xff000000 | (pixVal << 16) | (pixVal << 8) | pixVal;
    }
}
Second, don't create a ton of work for the garbage collector. Your bitmaps and arrays are going to be a fixed size; create them once, not in onPreviewFrame().
Doing that, you'll end up with something that looks like this:
public PreviewCallback callback = new PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        if ((mSelectView == null) || !inPreview)
            return;
        if (mSelectView.mBitmap == null) {
            // Initialize SelectView bitmaps, arrays, etc.
            //mSelectView.mBitmap = Bitmap.createBitmap(mSelectView.mImageWidth, mSelectView.mImageHeight, Bitmap.Config.RGB_565);
            //etc
        }
        // Pass image data to SelectView
        System.arraycopy(data, 0, mSelectView.mYUVData, 0, data.length);
        mSelectView.invalidate();
    }
};
And then the canvas where you want to put it looks like this:
class SelectView extends View {
    Bitmap mBitmap;
    Bitmap croppedView;
    byte[] mYUVData;
    int[] mRGBData;
    int mImageHeight;
    int mImageWidth;

    public SelectView(Context context) {
        super(context);
        mBitmap = null;
        croppedView = null;
    }

    @Override
    protected void onDraw(Canvas canvas) {
        if (mBitmap != null) {
            int canvasWidth = canvas.getWidth();
            int canvasHeight = canvas.getHeight();
            // Convert from YUV to greyscale
            YUVtoGrayScale(mRGBData, mYUVData, mImageWidth, mImageHeight);
            mBitmap.setPixels(mRGBData, 0, mImageWidth, 0, 0, mImageWidth, mImageHeight);
            Rect crop = new Rect(180, 220, 290, 400);
            Rect dst = new Rect(0, 0, canvasWidth, canvasHeight / 2);
            canvas.drawBitmap(mBitmap, crop, dst, null);
        }
        super.onDraw(canvas);
    }
}
This example shows a cropped and distorted selection of the camera preview in real time, but you get the idea. It runs at high FPS on a Nexus S in greyscale and should work for your needs as well.
Is this not what you want? Just use a SurfaceView in your layout, then somewhere in your init like onResume():
SurfaceView surfaceView = ...
SurfaceHolder holder = surfaceView.getHolder();
...
Camera camera = ...;
camera.setPreviewDisplay(holder);
It just sends the frames straight to the view as fast as they arrive.
If you want grayscale, modify the camera parameters with setColorEffect("mono").
For very basic and simple effects, there is
Camera.Parameters parameters = mCamera.getParameters();
parameters.setColorEffect(Parameters.EFFECT_AQUA);
I figured out that these effects behave DIFFERENTLY depending on the device.
For instance, on my phone (Galaxy S II) it looks kind of like a comic effect, whereas on the Galaxy S 1 it is 'just' a blue shade.
Its pro: it works as a live preview.
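Since support varies by device, it may be worth querying the supported effects before setting one; a minimal sketch (EFFECT_MONO gives a greyscale preview where available):
Camera.Parameters parameters = mCamera.getParameters();
List<String> effects = parameters.getSupportedColorEffects();
if (effects != null && effects.contains(Camera.Parameters.EFFECT_MONO)) {
    parameters.setColorEffect(Camera.Parameters.EFFECT_MONO);
    mCamera.setParameters(parameters);
}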
I looked around some other camera apps and they obviously also faced this problem.
So what did they do?
They capture the default camera image, apply a filter to the bitmap data, and show the image in a simple ImageView. It's for sure not as cool as a live preview, but you won't ever face performance problems.
I believe I read in a blog that the grayscale data is in the first width*height bytes. The Y in YUV is luminance, so the data is there, although it isn't a perfect grayscale. It's great for relative brightness, but not for true grayscale, as each color isn't equally bright in RGB; green is usually given a stronger weight in luminosity conversions. Hope this helps!
Is there any special reason that you are forced to use GLES 1.0 ?
Because if not, see the accepted answer here:
Android SDK: Get raw preview camera image without displaying it
Generally it mentions using Camera.setPreviewTexture() in combination with GLES 2.0.
In GLES 2.0 you can render a full-screen-quad all over the screen, and create whatever effect you want.
It's most likely the fastest way possible.
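A minimal sketch of that route (note it assumes API 11+ for setPreviewTexture(), unlike the API 8 constraint in the question, and elides the GL context setup):
// Create an external OES texture inside a current GLES 2.0 context.
int[] tex = new int[1];
GLES20.glGenTextures(1, tex, 0);
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, tex[0]);

// Feed camera frames into the texture; a fragment shader can then apply
// any per-pixel effect while drawing a full-screen quad.
SurfaceTexture surfaceTexture = new SurfaceTexture(tex[0]);
camera.setPreviewTexture(surfaceTexture);
camera.startPreview();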
