Can anybody explain how to get an array of rgb value from a BufferedImage?
I have a grey scale image in a BufferedImage and need to extract an array of 0 to 255 values that describe the image.
I know the BufferedImage is correct because I can save it to PNG. However, if I use int[] dataBuffInt = ((DataBufferInt) heightMap.getDataBuffer()).getData(); I get a bunch of huge negative numbers.
I have searched for a while and seen some references to shifting some values (post) but don't really understand what they are saying.
Basically I want to go from a BufferedImage to an array of 0 to 255 RGB values.
I'm not sure I explained myself properly; please ask for more details if needed.
Edit:
@Garbage Thanks for the tip. I tried int[] dataBuffInt = heightMap.getRGB(0, 0, heightMap.getWidth(), heightMap.getHeight(), null, 0, heightMap.getWidth()); but I get the same result.
@Greg Kopff The result is 2, and the image was set to TYPE_INT_ARGB.
You get negative numbers because the int value you get for each pixel is composed of red, green, blue and alpha packed together. You need to split the value apart to get each color component.
The simplest way to do this is to create a Color object and use the getRed, getGreen and getBlue (as well as getAlpha) methods to get the components:
public static void main(String... args) throws Exception {
    BufferedImage image = ImageIO.read(
            new URL("http://upload.wikimedia.org/wikipedia/en/2/24/Lenna.png"));
    int w = image.getWidth();
    int h = image.getHeight();
    int[] dataBuffInt = image.getRGB(0, 0, w, h, null, 0, w);

    // pass hasalpha = true, otherwise getAlpha() always returns 255
    Color c = new Color(dataBuffInt[100], true);

    System.out.println(c.getRed());   // = (dataBuffInt[100] >> 16) & 0xFF
    System.out.println(c.getGreen()); // = (dataBuffInt[100] >> 8)  & 0xFF
    System.out.println(c.getBlue());  // = (dataBuffInt[100])       & 0xFF
    System.out.println(c.getAlpha()); // = (dataBuffInt[100] >> 24) & 0xFF
}
Outputs:
173
73
82
255
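For the grayscale case in the question, where the red, green and blue components are equal, a plain 0 to 255 array can be built by masking a single channel. A minimal sketch along those lines (the class and method names are mine, for illustration):

```java
import java.awt.image.BufferedImage;

public class GrayValues {
    // Extract 0-255 gray values from an ARGB image; for a grayscale
    // image R == G == B, so reading the blue channel is enough.
    public static int[] grayValues(BufferedImage image) {
        int w = image.getWidth();
        int h = image.getHeight();
        int[] argb = image.getRGB(0, 0, w, h, null, 0, w);
        int[] gray = new int[w * h];
        for (int i = 0; i < gray.length; i++) {
            gray[i] = argb[i] & 0xFF; // blue component, 0-255
        }
        return gray;
    }
}
```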
Related
Now I am learning about images. I want to copy an image. I try:
private BufferedImage mImage, mNewImage;
private int mWidth, mHeight;
private int[] mPixelData;
public void generate() {
    try {
        mImage = ImageIO.read(new File("D:\\Documents\\Pictures\\image.png"));
        mWidth = mImage.getWidth();
        mHeight = mImage.getHeight();
        mPixelData = new int[mWidth * mHeight];
        // get pixel data from image
        for (int i = 0; i < mHeight; i++) {
            for (int j = 0; j < mWidth; j++) {
                int rgb = mImage.getRGB(j, i);
                int a = rgb >>> 24;
                int r = (rgb >> 16) & 0xff;
                int g = (rgb >> 8) & 0xff;
                int b = rgb & 0xff;
                int newRgb = (a << 24 | r << 16 | g << 8 | b);
                mPixelData[i * mWidth + j] = newRgb;
            }
        }
        mNewImage = new BufferedImage(mWidth, mHeight, mImage.getType());
        WritableRaster raster = (WritableRaster) mNewImage.getData();
        raster.setPixels(0, 0, mWidth, mHeight, mPixelData);
        File file = new File("D:\\Documents\\Pictures\\image2.png");
        ImageIO.write(mNewImage, "png", file);
    } catch (IOException e) {
        e.printStackTrace();
    }
}
But I got an exception:
Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 222748 at sun.awt.image.ByteInterleavedRaster.setPixels(ByteInterleavedRaster.java:1108)
The logic in your code is sane, but there are multiple minor issues with the code above, so I'll try to point them out one by one. :-)
Your mPixelData is in packed ARGB layout, which is the same layout used by BufferedImage.TYPE_INT_ARGB. So you want to use this type, rather than the type of the original image. As you can see from your stack trace, the type of your raster is ByteInterleavedRaster, which is not compatible with your int[] pixels (another issue that may arise from using the original type is that it may be TYPE_CUSTOM, which can't be created using this constructor). So, first change:
mNewImage = new BufferedImage(mWidth, mHeight, BufferedImage.TYPE_INT_ARGB);
(Note: You will still get an IndexOutOfBoundsException after this change, I'll return to that later).
BufferedImage.getData() gives you a copy of the pixel data, rather than a reference to the live data. So, setting pixels on this copy has no effect on the data later written to disk. Instead, use the getRaster() method, which does exactly what you want:
WritableRaster raster = mNewImage.getRaster();
The Raster.setPixels(x, y, w, h, pixels) method expects an array containing one sample per array element (A, R, G and B as separate samples). This means that the length of your array is only one fourth of what the method expects, and this is finally the cause of the exception you see. Instead, as your array is in int-packed ARGB layout (which is the native layout of the type you now use), you should use the setDataElements method:
raster.setDataElements(0, 0, mWidth, mHeight, mPixelData);
Finally, I'd just like to point out that all the bit shifting in your loop simply unpacks each pixel into separate components (A, R, G and B) and then packs them back together again... So newRgb == rgb in this case. But perhaps you are planning to add color manipulation there later, in which case it makes sense. :-)
PS: If all you want to do is create an exact copy of the original image, the fastest way is probably:
ColorModel cm = mImage.getColorModel();
WritableRaster raster = (WritableRaster) mImage.getData(); // Here we want a copy of the original image
mNewImage = new BufferedImage(cm, raster, cm.isAlphaPremultiplied(), null);
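Wrapped up as a method, that copy idiom looks like this (a sketch; the class and method names are mine):

```java
import java.awt.image.BufferedImage;
import java.awt.image.ColorModel;
import java.awt.image.WritableRaster;

public class ImageCopy {
    // Deep-copy a BufferedImage: getData() returns a copy of the pixels,
    // so the new image shares no storage with the original.
    public static BufferedImage deepCopy(BufferedImage src) {
        ColorModel cm = src.getColorModel();
        WritableRaster raster = (WritableRaster) src.getData();
        return new BufferedImage(cm, raster, cm.isAlphaPremultiplied(), null);
    }
}
```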
I'm working on an Android app and would like to locate the edges of an image taken by the phone camera. I have figured that my best bet at locating these edges is to look for the pixel between two pixels of significantly different colors, for instance a shade of green and a shade of black.
How would I find this pixel? Is there a range of numbers that correlates with the various colors and their shades? I.e. 100-200 is red, 200-300 is blue, etc.?
First, you can use android.graphics.Bitmap.
If your image is from the device camera or device media, you can do this:
Bitmap bitmap = MediaStore.Images.Media.getBitmap(activity.getContentResolver(), uri);
You may also want a scaled-down version of the image so there are fewer total pixels in the Bitmap; this will help a lot with performance:
bitmap = createScaledBitmap(bitmap, bitmap.getWidth()/scaleFactor, bitmap.getHeight()/scaleFactor, false);
Second, to get the pixels you can do this:
int [] pixels = new int[bitmap.getWidth() * bitmap.getHeight()];
bitmap.getPixels(pixels, 0, bitmap.getWidth(), 0, 0, bitmap.getWidth(), bitmap.getHeight());
Third, to evaluate red, blue, green, black, white, etc, use the following convention:
black = 0x000000
red = 0xff0000
green = 0x00ff00
blue = 0x0000ff
white = 0xffffff
What range you use to qualify a particular color is a matter of your exact needs. I would suggest that a significant change could be detected as follows:
final static int SIGNIFICANT_CHANGE_AMOUNT = 0xff;

int pixelA = pixels[i];
int pixelB = pixels[i + 1];
boolean sigChange = false;
int changeInRed   = Math.abs(((pixelA & 0xff0000) >> 16) - ((pixelB & 0xff0000) >> 16));
int changeInGreen = Math.abs(((pixelA & 0x00ff00) >> 8) - ((pixelB & 0x00ff00) >> 8));
int changeInBlue  = Math.abs((pixelA & 0x0000ff) - (pixelB & 0x0000ff));
int overallChange = changeInRed + changeInGreen + changeInBlue;
if (overallChange > SIGNIFICANT_CHANGE_AMOUNT) {
    sigChange = true;
}
You'll still have to write an algorithm to detect an area, but I think following the Flood Fill Algorithm wiki will help a lot. I have used the queue-based implementation, since the recursive one is not really feasible.
Also note that when you call getPixels on a bitmap that has been used as an Android view, you will want to mask out the transparency byte... you can see this post.
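The per-pixel comparison above is plain Java, so it can be sketched independently of the Android APIs. A minimal version (the class and method names are mine; the threshold is the one suggested above):

```java
public class EdgeScan {
    static final int SIGNIFICANT_CHANGE_AMOUNT = 0xff;

    // Sum of absolute per-channel differences between two packed RGB pixels.
    static int colorDistance(int pixelA, int pixelB) {
        int dr = Math.abs(((pixelA >> 16) & 0xff) - ((pixelB >> 16) & 0xff));
        int dg = Math.abs(((pixelA >> 8) & 0xff) - ((pixelB >> 8) & 0xff));
        int db = Math.abs((pixelA & 0xff) - (pixelB & 0xff));
        return dr + dg + db;
    }

    // Index of the first adjacent pair with a significant change, or -1.
    static int findEdge(int[] pixels) {
        for (int i = 0; i + 1 < pixels.length; i++) {
            if (colorDistance(pixels[i], pixels[i + 1]) > SIGNIFICANT_CHANGE_AMOUNT) {
                return i;
            }
        }
        return -1;
    }
}
```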
I have a 1D array of pixel values, and I can get red, green and blue this way:
int rgb[] = new int[] {
    (argb >> 16) & 0xff, // red
    (argb >> 8) & 0xff,  // green
    argb & 0xff          // blue
};
I also know the width and height of the image I want to create. So, in total, I have the following data:
1) width of the new image
2) height of the new image
3) a one-dimensional array of pixel values
My supervisor has advised me to use the createRaster method, but its arguments are hard for me to understand.
Can you suggest some simple code?
Thanks.
As stated in this previous SO post, you can create a TYPE_INT_ARGB image and write the packed pixels into its raster. Note that you need getRaster() rather than getData() (getData() returns a copy), and setDataElements rather than setPixels (setPixels expects one sample per array element, not packed pixels):
public static Image getImageFromArray(int[] pixels, int width, int height) {
    BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
    WritableRaster raster = image.getRaster();
    raster.setDataElements(0, 0, width, height, pixels);
    return image;
}
If you are having trouble understanding what the parameters are, you should take a look at the Java Documentation.
You could:
InputStream is = new ByteArrayInputStream(rgb);
Image img = ImageIO.read(is);
Here rgb should be a byte array, and it must contain the image in an encoded format that ImageIO understands (PNG, JPEG, etc.), not raw pixel values.
I would like to extract the alpha channel from a BufferedImage and paint it into a separate image as greyscale (like it is displayed in Photoshop).
Not tested, but contains the main points:
public Image alpha2gray(BufferedImage src) {
    if (src.getType() != BufferedImage.TYPE_INT_ARGB)
        throw new RuntimeException("Wrong image type.");

    int w = src.getWidth();
    int h = src.getHeight();
    int[] srcBuffer = src.getRGB(0, 0, w, h, null, 0, w); // packed ARGB
    int[] dstBuffer = new int[w * h];
    for (int i = 0; i < w * h; i++) {
        int a = (srcBuffer[i] >> 24) & 0xff;
        // replicate alpha into R, G and B; keep the result fully opaque
        dstBuffer[i] = 0xff000000 | a << 16 | a << 8 | a;
    }
    return Toolkit.getDefaultToolkit().createImage(
            new MemoryImageSource(w, h, dstBuffer, 0, w));
}
I don't believe there's a single method call to do this; you would have to get all the image data and mask off the alpha byte for each pixel. So for example, use getRGB() to get the ARGB pixels for the image. The most significant byte is the alpha value. So for each pixel in the array you get from getRGB(),
int alpha = (pixel >> 24) & 0xFF;
You could grab the Raster from the BufferedImage, and then create a child Raster of it containing only the band you're interested in (the bandList parameter). From this child raster you can create a new BufferedImage with a suitable ColorModel, which would contain only the grayscale alpha mask.
The benefit of doing it this way instead of manually iterating over the pixels is that the runtime has chance to get an idea of what you are doing, and thus this might get accelerated by exploiting the hardware capabilities. Honestly I doubt it will be accelerated with current JVMs, but who knows what the future brings?
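A simpler variation on the raster idea (not the band-list child raster itself, but the same principle) is BufferedImage.getAlphaRaster(), which hands you the alpha band directly so it can be copied into a TYPE_BYTE_GRAY image. A sketch, assuming a TYPE_INT_ARGB source (the class and method names are mine):

```java
import java.awt.image.BufferedImage;

public class AlphaToGray {
    // Copy the alpha band of an ARGB image into a TYPE_BYTE_GRAY image.
    public static BufferedImage alphaToGray(BufferedImage src) {
        int w = src.getWidth();
        int h = src.getHeight();
        // getAlphaRaster() is non-null for image types that store alpha,
        // such as TYPE_INT_ARGB; band 0 of it is the alpha samples
        int[] alpha = src.getAlphaRaster().getSamples(0, 0, w, h, 0, (int[]) null);
        BufferedImage gray = new BufferedImage(w, h, BufferedImage.TYPE_BYTE_GRAY);
        gray.getRaster().setSamples(0, 0, w, h, 0, alpha);
        return gray;
    }
}
```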
I know it's possible to convert an image to CS_GRAY using:
public static BufferedImage getGrayBufferedImage(BufferedImage image) {
    BufferedImageOp op = new ColorConvertOp(
            ColorSpace.getInstance(ColorSpace.CS_GRAY), null);
    BufferedImage sourceImgGray = op.filter(image, null);
    return sourceImgGray;
}
However, this is a chokepoint of my entire program. I need to do this often, on 800x600 pixel images, and the operation takes about 200-300ms on average. I know I could do this a lot faster with a single for loop over the image data, setting each pixel directly. The code above, on the other hand, constructs a brand new 800x600 grayscale BufferedImage; I would rather just transform the image I pass in.
Does anyone know how to do this with a for loop, given that the image is in an RGB color space?
ColorConvertOp.filter takes two parameters. The second parameter is also a BufferedImage, which becomes the destination. If you pass a suitable BufferedImage to the filter method, it saves you the hassle of creating a fresh BufferedImage.
private static int grayscale(int rgb) {
    int r = rgb >> 16 & 0xff;
    int g = rgb >> 8 & 0xff;
    int b = rgb & 0xff;
    int cmax = Math.max(Math.max(r, g), b);
    return (rgb & 0xFF000000) | (cmax << 16) | (cmax << 8) | cmax;
}

public static BufferedImage grayscale(BufferedImage bi) {
    BufferedImage bout = new BufferedImage(bi.getWidth(), bi.getHeight(), BufferedImage.TYPE_INT_ARGB);
    int[] rgbArray = new int[bi.getWidth() * bi.getHeight()];
    rgbArray = bi.getRGB(0, 0, bi.getWidth(), bi.getHeight(), rgbArray, 0, bi.getWidth());
    for (int i = 0, q = rgbArray.length; i < q; i++) {
        rgbArray[i] = grayscale(rgbArray[i]);
    }
    bout.setRGB(0, 0, bout.getWidth(), bout.getHeight(), rgbArray, 0, bout.getWidth());
    return bout;
}
Whatever you're doing, you are likely doing something wrong if you are regenerating a buffered image over and over again. Instead, work out a scheme to simply update the existing buffered image, or take the original pixels and replace each with its grayscale value, which here is the max of the R, G and B components.
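Since the question asks to transform the image that was passed in rather than allocate a new one, the same loop can write the converted array back into the source image. A sketch reusing the max-component grayscale from the answer above (the class and method names are mine):

```java
import java.awt.image.BufferedImage;

public class InPlaceGray {
    // Convert an image to grayscale in place, using the max of R, G, B.
    public static void grayscaleInPlace(BufferedImage bi) {
        int w = bi.getWidth();
        int h = bi.getHeight();
        int[] rgb = bi.getRGB(0, 0, w, h, null, 0, w);
        for (int i = 0; i < rgb.length; i++) {
            int r = (rgb[i] >> 16) & 0xff;
            int g = (rgb[i] >> 8) & 0xff;
            int b = rgb[i] & 0xff;
            int max = Math.max(r, Math.max(g, b));
            // keep the original alpha bits, replicate max into R, G and B
            rgb[i] = (rgb[i] & 0xFF000000) | (max << 16) | (max << 8) | max;
        }
        bi.setRGB(0, 0, w, h, rgb, 0, w);
    }
}
```

This avoids allocating a second 800x600 image on every call, though note that setRGB still goes through the color model on each pixel, so it is not free either.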