Now I am learning about images, and I want to copy an image. I tried:
private BufferedImage mImage, mNewImage;
private int mWidth, mHeight;
private int[] mPixelData;

public void generate() {
    try {
        mImage = ImageIO.read(new File("D:\\Documents\\Pictures\\image.png"));
        mWidth = mImage.getWidth();
        mHeight = mImage.getHeight();
        mPixelData = new int[mWidth * mHeight];

        // get pixel data from image
        for (int i = 0; i < mHeight; i++) {
            for (int j = 0; j < mWidth; j++) {
                int rgb = mImage.getRGB(j, i);
                int a = rgb >>> 24;
                int r = (rgb >> 16) & 0xff;
                int g = (rgb >> 8) & 0xff;
                int b = rgb & 0xff;
                int newRgb = (a << 24 | r << 16 | g << 8 | b);
                mPixelData[i * mWidth + j] = newRgb;
            }
        }

        mNewImage = new BufferedImage(mWidth, mHeight, mImage.getType());
        WritableRaster raster = (WritableRaster) mNewImage.getData();
        raster.setPixels(0, 0, mWidth, mHeight, mPixelData);

        File file = new File("D:\\Documents\\Pictures\\image2.png");
        ImageIO.write(mNewImage, "png", file);
    } catch (IOException e) {
        e.printStackTrace();
    }
}
But I got an exception:
Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 222748 at sun.awt.image.ByteInterleavedRaster.setPixels(ByteInterleavedRaster.java:1108)
The logic in your code is sane, but there are multiple minor issues with the code above, so I'll try to point them out one by one. :-)
Your mPixelData is in packed ARGB layout, which is the same layout used by BufferedImage.TYPE_INT_ARGB. So you want to use this type, rather than the type of the original image. As you can see from your stack trace, the type of your raster is ByteInterleavedRaster, and this is not compatible with your int[] pixels. (Another issue that may arise from using the original type is that it may be TYPE_CUSTOM, which can't be created using this constructor.) So, first change:
mNewImage = new BufferedImage(mWidth, mHeight, BufferedImage.TYPE_INT_ARGB);
(Note: You will still get an IndexOutOfBoundsException after this change, I'll return to that later).
BufferedImage.getData() will give you a copy of the pixel data, rather than a reference to the current data. So, setting the pixels on this copy will have no effect on the data being written to disk later. Instead, use the getRaster() method, which does exactly what you want:
WritableRaster raster = mNewImage.getRaster();
The Raster.setPixels(x, y, w, h, pixels) method expects an array containing one sample per array element (A, R, G and B as separate samples). This means that the length of your array is only one fourth of what the method expects, and this is, finally, the cause of the exception you see. Instead, as your array is in int-packed ARGB layout (which is the native layout of the type you now use), you should use the setDataElements method:
raster.setDataElements(0, 0, mWidth, mHeight, mPixelData);
Finally, I'd just like to point out that all the bit shifting in your loop simply unpacks each pixel into its components (A, R, G and B) and then packs them back together again... So newRgb == rgb in this case. But maybe you are planning to add color manipulation here later, in which case it makes sense. :-)
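Putting the three changes together, a corrected version of your generate() method could look roughly like this (an untested sketch, keeping your original file paths):

public void generate() {
    try {
        mImage = ImageIO.read(new File("D:\\Documents\\Pictures\\image.png"));
        mWidth = mImage.getWidth();
        mHeight = mImage.getHeight();
        mPixelData = new int[mWidth * mHeight];

        for (int i = 0; i < mHeight; i++) {
            for (int j = 0; j < mWidth; j++) {
                // getRGB always returns packed ARGB, regardless of the image's internal type
                mPixelData[i * mWidth + j] = mImage.getRGB(j, i);
            }
        }

        // TYPE_INT_ARGB matches the packed ARGB layout of mPixelData
        mNewImage = new BufferedImage(mWidth, mHeight, BufferedImage.TYPE_INT_ARGB);

        // getRaster() returns the live raster (getData() would only return a copy)
        WritableRaster raster = mNewImage.getRaster();

        // setDataElements accepts int-packed ARGB pixels for TYPE_INT_ARGB
        raster.setDataElements(0, 0, mWidth, mHeight, mPixelData);

        ImageIO.write(mNewImage, "png", new File("D:\\Documents\\Pictures\\image2.png"));
    } catch (IOException e) {
        e.printStackTrace();
    }
}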
PS: If all you want to do is create an exact copy of the original image, the fastest way to do it is probably:
ColorModel cm = mImage.getColorModel();
WritableRaster raster = (WritableRaster) mImage.getData(); // Here we want a copy of the original image
mNewImage = new BufferedImage(cm, raster, cm.isAlphaPremultiplied(), null);
I have an int array where each value stores a bit-packed RGB value (8 bits per channel), alpha is always 255 (opaque), and I want to display that in JavaFX.
My current approach is using a canvas like this:
GraphicsContext graphics = canvas.getGraphicsContext2D();
PixelWriter pw = graphics.getPixelWriter();
pw.setPixels(0, 0, width, height, PixelFormat.getIntArgbInstance(), pixels, 0, width);
However, before that I actually have to set the alpha component of each pixel by iterating over each pixel and OR'ing it with a mask that turns the pixel from RGB to ARGB, like this:
for (int i = 0; i < pixels.length; i++) {
    pixels[i] = 0xFF000000 | pixels[i];
}
Is there a more efficient way to do this (as the pixels array is updated many times every second)?
I was hoping there was an IntRgbInstance, but unfortunately there isn't (only a ByteRgbInstance).
Other approaches I've tested:
Approach 1: Creating an IntBuffer that is filled up like this:
IntBuffer buffer = IntBuffer.allocate(pixels.length * 4);
for (int pixel : pixels) {
    buffer.put(0xFF000000 | pixel);
}
And then generating a PixelBuffer that uses this buffer; the PixelBuffer is then used as the input to this WritableImage constructor: https://openjfx.io/javadoc/17/javafx.graphics/javafx/scene/image/WritableImage.html#%3Cinit%3E(javafx.scene.image.PixelBuffer)
I then display that WritableImage using an ImageView.
This, however, still didn't speed anything up (it actually made it a bit slower), and I'm guessing that's because I have to construct a new WritableImage instance each time the pixels int array is updated.
Approach 2 (which didn't work for some reason, i.e. it displayed nothing on the screen): Creating a buffer the same way as above and using that in one of the setPixels() methods that takes a buffer:
IntBuffer buffer = IntBuffer.allocate(pixels.length * 4);
for (int pixel : pixels) {
    buffer.put(0xFF000000 | pixel);
}
pw.setPixels(0, 0, width, height, PixelFormat.getIntArgbInstance(), buffer, width);
After a bit more research I found out that I don't need to create a new WritableImage instance each time the pixels array is updated; I can just use the updateBuffer method here: https://openjfx.io/javadoc/17/javafx.graphics/javafx/scene/image/PixelBuffer.html#updateBuffer(javafx.util.Callback)
So the code currently looks like this:
pb.updateBuffer(callback -> {
    buffer.clear();
    for (int pixel : pixels) {
        buffer.put(0xFF000000 | pixel);
    }
    return null;
});
where pb and buffer are only created once, like this:
IntBuffer buffer = IntBuffer.allocate(pixels.length * 4);
PixelBuffer<IntBuffer> pb = new PixelBuffer<>(width, height, buffer, PixelFormat.getIntArgbPreInstance());
view.setImage(new WritableImage(pb));
and this did indeed result in a nice speedup (close to 2x compared to my initial approach).
Maybe this https://openjfx.io/javadoc/17/javafx.graphics/javafx/scene/image/WritableImage.html#%3Cinit%3E(javafx.scene.image.PixelBuffer) is what you are looking for. You could create a PixelBuffer from an IntBuffer of your data.
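For reference, a minimal sketch of that idea, assuming the width, height and pixels int array from your question (variable names here are just placeholders, and since alpha is always 255, the premultiplied format is equivalent to plain ARGB):

// Allocate the buffer and wrap it in a PixelBuffer once.
IntBuffer buffer = IntBuffer.allocate(width * height);
PixelBuffer<IntBuffer> pixelBuffer =
        new PixelBuffer<>(width, height, buffer, PixelFormat.getIntArgbPreInstance());
ImageView view = new ImageView(new WritableImage(pixelBuffer));

// Each time the pixels array changes, update the shared buffer on the JavaFX Application Thread.
pixelBuffer.updateBuffer(pb -> {
    int[] data = buffer.array();          // write straight into the backing array
    for (int i = 0; i < pixels.length; i++) {
        data[i] = 0xFF000000 | pixels[i]; // force alpha to 255 (opaque)
    }
    return null;                          // null means the entire buffer is dirty
});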
Can anybody explain how to get an array of RGB values from a BufferedImage?
I have a grey scale image in a BufferedImage and need to extract an array of 0 to 255 values that describe the image.
I know the BufferedImage is correct because I can save it to PNG. However, if I use int[] dataBuffInt = ((DataBufferInt) heightMap.getDataBuffer()).getData(); I get a bunch of huge negative numbers.
I have searched for a while and seen some references to shifting some values (post) but don't really understand what they are saying.
Basically I want to go from a BufferedImage to an array of 0 to 255 RGB values.
I'm not sure I explained myself properly; please ask for more details if needed.
Edit:
@Garbage Thanks for the tip. I tried int[] dataBuffInt = heightMap.getRGB(0, 0, heightMap.getWidth(), heightMap.getHeight(), null, 0, heightMap.getWidth()); but I get the same result.
@Greg Kopff The result is 2, and it was set to TYPE_INT_ARGB.
You get negative numbers since the int value you get for each pixel is composed of red, green, blue and alpha components packed together. You need to split them up to get a value for each color component.
The simplest way to do this is to create a Color object and use the getRed, getGreen and getBlue (as well as getAlpha) methods to get the components:
public static void main(String... args) throws Exception {
    BufferedImage image = ImageIO.read(
            new URL("http://upload.wikimedia.org/wikipedia/en/2/24/Lenna.png"));

    int w = image.getWidth();
    int h = image.getHeight();
    int[] dataBuffInt = image.getRGB(0, 0, w, h, null, 0, w);

    Color c = new Color(dataBuffInt[100]);

    System.out.println(c.getRed());   // = (dataBuffInt[100] >> 16) & 0xFF
    System.out.println(c.getGreen()); // = (dataBuffInt[100] >> 8)  & 0xFF
    System.out.println(c.getBlue());  // = (dataBuffInt[100] >> 0)  & 0xFF
    System.out.println(c.getAlpha()); // = (dataBuffInt[100] >> 24) & 0xFF
}
Outputs:
173
73
82
255
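Since your image is grayscale, you probably just want a single 0 to 255 value per pixel. A minimal sketch of that, assuming the red channel is representative (for a grey image, R, G and B are equal):

// heightMap is your grayscale BufferedImage
int w = heightMap.getWidth();
int h = heightMap.getHeight();
int[] argb = heightMap.getRGB(0, 0, w, h, null, 0, w);

int[] gray = new int[w * h];
for (int i = 0; i < argb.length; i++) {
    gray[i] = (argb[i] >> 16) & 0xFF; // red component, 0-255; same as green and blue here
}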
I have a 1D array of pixel values, and I can get red, green and blue this way:
int rgb[] = new int[] {
    (argb >> 16) & 0xff, // red
    (argb >> 8) & 0xff,  // green
    (argb) & 0xff        // blue
};
I also know the width and height of the image I want to create.
So, in total, I have the following data:
1) width of the new image
2) height of the new image
3) a one-dimensional array of pixel values
My supervisor has advised me to use a createRaster method, but its arguments are hard for me to understand.
Can you suggest some simple code?
Thanks.
As stated in this previous SO post:
public static Image getImageFromArray(int[] pixels, int width, int height) {
    BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
    // Use getRaster() (the live raster, not the copy returned by getData()),
    // and setDataElements, which accepts int-packed ARGB pixels for TYPE_INT_ARGB.
    WritableRaster raster = image.getRaster();
    raster.setDataElements(0, 0, width, height, pixels);
    return image;
}
If you are having trouble understanding what the parameters are, you should take a look at the Java Documentation.
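For example, if you still have separate red, green and blue values, you could pack them into a single array before calling the helper. A sketch, where red, green and blue are hypothetical per-component arrays with values 0-255:

int[] pixels = new int[width * height];
for (int i = 0; i < pixels.length; i++) {
    int r = red[i];   // hypothetical per-component arrays
    int g = green[i];
    int b = blue[i];
    pixels[i] = 0xFF000000 | (r << 16) | (g << 8) | b; // opaque ARGB
}

Image image = getImageFromArray(pixels, width, height);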
You could:
InputStream is = new ByteArrayInputStream(rgb);
Image img = ImageIO.read(is);
Here rgb should be a byte array; note that this only works if the array contains image data encoded in a format ImageIO understands (such as PNG or JPEG), not raw pixel values.
I would like to extract the alpha channel from a BufferedImage and paint it into a separate image as grayscale (like it is displayed in Photoshop).
Not tested, but this contains the main points:
public Image alpha2gray(BufferedImage src) {
    if (src.getType() != BufferedImage.TYPE_INT_ARGB)
        throw new RuntimeException("Wrong image type.");

    int w = src.getWidth();
    int h = src.getHeight();
    // getRGB returns int-packed ARGB values, so the alpha lives in the top 8 bits
    int[] srcBuffer = src.getRGB(0, 0, w, h, null, 0, w);
    int[] dstBuffer = new int[w * h];

    for (int i = 0; i < w * h; i++) {
        int a = (srcBuffer[i] >> 24) & 0xff;
        // opaque grey pixel with R = G = B = alpha of the source pixel
        dstBuffer[i] = 0xff000000 | a << 16 | a << 8 | a;
    }

    return Toolkit.getDefaultToolkit().createImage(new MemoryImageSource(w, h, dstBuffer, 0, w));
}
I don't believe there's a single method call to do this; you would have to get all the image data and mask off the alpha byte for each pixel. So for example, use getRGB() to get the ARGB pixels for the image. The most significant byte is the alpha value. So for each pixel in the array you get from getRGB(),
int alpha = (pixel >> 24) & 0xFF;
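A small sketch of this approach, assuming src is your ARGB source image and a TYPE_BYTE_GRAY output is acceptable:

int w = src.getWidth();
int h = src.getHeight();
int[] argb = src.getRGB(0, 0, w, h, null, 0, w);

// Collect one 0-255 alpha sample per pixel.
int[] alphaSamples = new int[w * h];
for (int i = 0; i < argb.length; i++) {
    alphaSamples[i] = (argb[i] >> 24) & 0xFF;
}

// TYPE_BYTE_GRAY has a single band, so setPixels expects exactly one sample per pixel.
BufferedImage gray = new BufferedImage(w, h, BufferedImage.TYPE_BYTE_GRAY);
gray.getRaster().setPixels(0, 0, w, h, alphaSamples);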
You could grab the Raster from the BufferedImage and then create a child raster of it which contains only the band you're interested in (the bandList parameter). From this child raster you can create a new BufferedImage with a suitable ColorModel, which would then contain only the grayscale alpha mask.
The benefit of doing it this way instead of manually iterating over the pixels is that the runtime gets a chance to see what you are doing, and thus this might get accelerated by exploiting the hardware capabilities. Honestly, I doubt it will be accelerated with current JVMs, but who knows what the future brings?
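A rough sketch of the band-extraction part, assuming a TYPE_INT_ARGB source (where the alpha band has index 3); for simplicity this copies the band into a TYPE_BYTE_GRAY image rather than wrapping it in a new ColorModel:

public static BufferedImage alphaAsGray(BufferedImage src) {
    int w = src.getWidth();
    int h = src.getHeight();

    // Child raster that exposes only band 3 (the alpha band of TYPE_INT_ARGB).
    Raster alphaBand = src.getRaster().createChild(0, 0, w, h, 0, 0, new int[] { 3 });

    // Copy that single band into a one-band grayscale image.
    BufferedImage gray = new BufferedImage(w, h, BufferedImage.TYPE_BYTE_GRAY);
    gray.getRaster().setRect(alphaBand);
    return gray;
}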
I know it's possible to convert an image to CS_GRAY using
public static BufferedImage getGrayBufferedImage(BufferedImage image) {
    BufferedImageOp op = new ColorConvertOp(ColorSpace.getInstance(ColorSpace.CS_GRAY), null);
    BufferedImage sourceImgGray = op.filter(image, null);
    return sourceImgGray;
}
However, this is a chokepoint of my entire program. I need to do this often, on 800x600 pixel images, and the operation takes about 200-300 ms on average. I know I can do this a lot faster by using a single for loop to walk through the image data and set it right away. The code above, on the other hand, constructs a brand-new 800x600 grayscale BufferedImage; I would rather just transform the image I pass in.
Does anyone know how to do this with a for loop, given that the image is in an RGB color space?
ColorConvertOp.filter takes two parameters. The second parameter is also a BufferedImage, which will be the destination. If you pass a suitable BufferedImage to the filter method, it saves you the hassle of creating a fresh BufferedImage each time.
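A sketch of that idea, assuming the frames keep the 800x600 size mentioned in the question so the op and the destination image can be allocated once and reused (sourceImage stands in for your RGB input):

// Created once and reused for every frame.
ColorConvertOp toGray = new ColorConvertOp(ColorSpace.getInstance(ColorSpace.CS_GRAY), null);
BufferedImage grayDest = new BufferedImage(800, 600, BufferedImage.TYPE_BYTE_GRAY);

// Per frame: convert into the existing destination instead of allocating a new image.
toGray.filter(sourceImage, grayDest);

This keeps the conversion itself unchanged while avoiding a per-frame allocation of the destination image.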
private static int grayscale(int rgb) {
    int r = rgb >> 16 & 0xff;
    int g = rgb >> 8 & 0xff;
    int b = rgb & 0xff;
    int cmax = Math.max(Math.max(r, g), b);
    return (rgb & 0xFF000000) | (cmax << 16) | (cmax << 8) | cmax;
}

public static BufferedImage grayscale(BufferedImage bi) {
    BufferedImage bout = new BufferedImage(bi.getWidth(), bi.getHeight(), BufferedImage.TYPE_INT_ARGB);
    int[] rgbArray = new int[bi.getWidth() * bi.getHeight()];
    rgbArray = bi.getRGB(0, 0, bi.getWidth(), bi.getHeight(), rgbArray, 0, bi.getWidth());
    for (int i = 0, q = rgbArray.length; i < q; i++) {
        rgbArray[i] = grayscale(rgbArray[i]);
    }
    bout.setRGB(0, 0, bout.getWidth(), bout.getHeight(), rgbArray, 0, bout.getWidth());
    return bout;
}
Whatever you're doing, you are likely doing something wrong. You shouldn't be regenerating a BufferedImage over and over again. Rather, figure out a scheme to simply update the existing BufferedImage, or take the pixels from the original and just use their grayscale value, which here is the max of the RGB components, in each of the sections.
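For instance, a sketch of updating the image in place instead of allocating a new one each time; it reuses the grayscale(int) helper above and assumes bi is an int-based RGB/ARGB image:

public static void grayscaleInPlace(BufferedImage bi) {
    int w = bi.getWidth();
    int h = bi.getHeight();
    int[] rgbArray = bi.getRGB(0, 0, w, h, null, 0, w);
    for (int i = 0; i < rgbArray.length; i++) {
        rgbArray[i] = grayscale(rgbArray[i]); // max-of-RGB grayscale, as above
    }
    // Write the converted pixels back into the same image; no new BufferedImage needed.
    bi.setRGB(0, 0, w, h, rgbArray, 0, w);
}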