I'm doing 2D filtering and want to do element-by-element addition on grayscale BufferedImages. Is there an existing function that will do this for me, or do I need to make one from scratch?
Is there some sort of matrix class that converts a Raster to a matrix to simplify this problem?
Edit: here is the general gist of it
BufferedImageOp opX = new ConvolveOp(new Kernel(3,3, kernelX));
BufferedImageOp opY = new ConvolveOp(new Kernel(3,3, kernelY));
BufferedImage filtImageX = opX.filter(sourceImage, null);
BufferedImage filtImageY = opY.filter(sourceImage, null);
BufferedImage outputImage = addBufferedImages(filtImageX, filtImageY);
Grayscale Conversion:
public void toGrayscale() {
BufferedImageOp op = new ColorConvertOp(ColorSpace.getInstance(ColorSpace.CS_GRAY), null);
sourceImage = op.filter(sourceImage, null);
}
I am not familiar with any Java libraries that do that for you.
You can get the pixel at [i, j] with image.getRGB(i, j):
BufferedImage image = ...;
BufferedImage resultImage = ...
int rgb= image.getRGB(i, j);
resultImage.setRGB(i, j, rgb);
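Combining those calls, here is a minimal sketch of the addBufferedImages helper from the question (the name and signature are just taken from the question; it assumes both inputs are the same size and grayscale, so R = G = B):
public static BufferedImage addBufferedImages(BufferedImage a, BufferedImage b) {
    int w = a.getWidth(), h = a.getHeight();
    BufferedImage result = new BufferedImage(w, h, BufferedImage.TYPE_BYTE_GRAY);
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            int ga = a.getRGB(x, y) & 0xFF;   // any channel works for grayscale
            int gb = b.getRGB(x, y) & 0xFF;
            int sum = Math.min(255, ga + gb); // clamp to 8 bits
            int rgb = (0xFF << 24) | (sum << 16) | (sum << 8) | sum;
            result.setRGB(x, y, rgb);
        }
    }
    return result;
}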
You can also convert a BufferedImage to a byte array [https://stackoverflow.com/a/7388025/1007845]. See this thread: how to convert image to byte array in java? — it also shows how to get at the underlying WritableRaster.
EDIT:
It seems that WritableRaster might be useful in this case: http://docs.oracle.com/javase/1.4.2/docs/api/java/awt/image/WritableRaster.html
WritableRaster raster = image.getRaster();
for (int h = 0; h < height; h++) {
    for (int w = 0; w < width; w++) {
        int colour = 127;
        raster.setSample(w, h, 0, colour);
    }
}
I don't know of a direct way to do this.
But I can suggest a slightly underhanded approach. First, take your two images and combine them into a single image with two bands. I'm hazy on the details of how to do this. I suspect you will want to create a Raster with a BandedSampleModel, and then blit the contents of the other two images into its DataBuffer. Although it looks like you should be able to create a two-bank DataBuffer which uses the arrays of the source images' (one-banked) DataBuffers as banks, which would avoid copying.
Once you have a two-band image, simply apply a BandCombineOp which sums the bands. You will need to express the summation as a matrix, but that shouldn't be hard. I think it would be [1.0, 1.0], or [0.5, 0.5] if you want to rescale the result.
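A minimal sketch of that approach, assuming both inputs are TYPE_BYTE_GRAY and the same size (the names are illustrative, and it copies the bands rather than sharing banks):
public static BufferedImage addGrayscale(BufferedImage a, BufferedImage b) {
    int w = a.getWidth(), h = a.getHeight();
    // Build a two-band raster and copy each source image into one band
    WritableRaster twoBand = Raster.createBandedRaster(DataBuffer.TYPE_BYTE, w, h, 2, null);
    twoBand.setSamples(0, 0, w, h, 0, a.getRaster().getSamples(0, 0, w, h, 0, (int[]) null));
    twoBand.setSamples(0, 0, w, h, 1, b.getRaster().getSamples(0, 0, w, h, 0, (int[]) null));
    // One output band = 1.0 * band0 + 1.0 * band1;
    // use {0.5f, 0.5f} instead to rescale and avoid wrapping when the sum exceeds 255
    BandCombineOp op = new BandCombineOp(new float[][] { { 1.0f, 1.0f } }, null);
    BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_BYTE_GRAY);
    op.filter(twoBand, out.getRaster());
    return out;
}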
Related
I am trying to retrieve a JPEG image as a BufferedImage, decompose it into a 3D array [RED][GREEN][BLUE], then turn it back into a BufferedImage and store it under a different file name. All looks fine to me, BUT when I try to reload the 3D array from the newly created file, I get different values for RGB, although the new image looks fine to the naked eye. I did the following:
BufferedImage bi = ImageIO.read(new File("old.jpg"));
int[][][] one = getArray(bi);
save("kite.jpg", one);
BufferedImage bi2 = ImageIO.read(new File("new.jpg"));
int[][][] two = getArray(bi2);
private void save(String destination, int[][][] in) {
    try {
        BufferedImage out = new BufferedImage(in.length, in[0].length, BufferedImage.TYPE_3BYTE_BGR);
        for (int x = 0; x < out.getWidth(); x++) {
            for (int y = 0; y < out.getHeight(); y++) {
                out.setRGB(x, y, new Color(in[x][y][0], in[x][y][1], in[x][y][2]).getRGB());
            }
        }
        File f = new File(destination);
        ImageIO.write(out, "JPEG", f);
    } catch (IOException e) {
        System.out.println(e.getMessage());
    }
}
So in the example above, the values held by arrays one and two are different.
I am guessing it has something to do with the different ways of retrieving and restoring the images? I have been trying to figure out what is going on all day, but with no luck. Any help appreciated.
Pretty simple:
JPEG is a commonly used method of lossy compression for digital images
(from wikipedia).
Each time the image is compressed, it is altered to reduce the file size. In fact, repeating the decompression/compression cycle several hundred times alters the image to the point where, in most cases, the entire image turns into a plain gray area. There are a few compression modes that are lossless, but most operation modes will alter the image.
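If you need the pixel values to survive the save/reload round trip exactly, write a lossless format instead; for example, a one-line change to the save method above (PNG is just one choice of lossless format):
ImageIO.write(out, "png", f); // PNG is lossless, so the reloaded array matches the original values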
I have an int[image.height][image.width] array of the image. Now I'm trying to turn it back into an image file, but I have failed.
This is what I have done:
private void intToImg(int[][] pxls, String path) {
    BufferedImage image = new BufferedImage(pxls[0].length, pxls.length, BufferedImage.TYPE_INT_ARGB);
    WritableRaster raster = (WritableRaster) image.getData();
    int[] pxlsr = new int[pxls[0].length * pxls.length];
    int k = 0;
    for (int i = 0; i < pxls.length; i++)
        for (int j = 0; j < pxls[0].length; j++)
            pxlsr[k++] = pxls[i][j];
    raster.setPixels(0, 0, pxls[0].length - 10, pxls.length - 10, pxlsr); // index out of bounds error here...
    File f = new File(path);
    try {
        ImageIO.write(image, "JPG", f);
    } catch (IOException x) {
        x.printStackTrace();
    }
}
However, I always get the same error: java.lang.ArrayIndexOutOfBoundsException. What did I do wrong, and what is the right way to transform an array of pixels into a real image?
I've just increased the array length by a factor of 4 and the java.lang.ArrayIndexOutOfBoundsException has finally disappeared. But I still can't create a real image. Instead I get an image filled with #000009 no matter what the array values are.
This is what I was doing:
for (int i = 0; i < pxls.length; i++)
    for (int j = 0; j < pxls[0].length; j++) {
        pxlsr[k++] = pxls[i][j];
        pxlsr[k++] = pxls[i][j];
        pxlsr[k++] = pxls[i][j];
        pxlsr[k++] = pxls[i][j];
    }
raster.setPixels(0, 0, pxls[0].length, pxls.length, pxlsr);
and
for (int i = 0; i < pxls.length; i++)
    for (int j = 0; j < pxls[0].length; j++) {
        pxlsr[k++] = 111;
        pxlsr[k++] = 111;
        pxlsr[k++] = 111;
        pxlsr[k++] = 111;
    }
raster.setPixels(0, 0, pxls[0].length, pxls.length, pxlsr);
and many other things, but the result is always the same: an image filled with black!
Finally I did it! I found the error!
Instead of:
WritableRaster raster = (WritableRaster)image.getData();
should be:
WritableRaster raster = image.getRaster();
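For completeness, here is a minimal corrected sketch of the method above. It assumes pxls holds grayscale values in the 0-255 range; for TYPE_INT_ARGB the raster expects four samples (red, green, blue, alpha) per pixel, and alpha must be 255 or the pixels come out transparent:
private void intToImg(int[][] pxls, String path) {
    int height = pxls.length;
    int width = pxls[0].length;
    BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
    WritableRaster raster = image.getRaster(); // the live raster, not a copy
    int[] samples = new int[width * height * 4];
    int k = 0;
    for (int i = 0; i < height; i++) {
        for (int j = 0; j < width; j++) {
            samples[k++] = pxls[i][j]; // red
            samples[k++] = pxls[i][j]; // green
            samples[k++] = pxls[i][j]; // blue
            samples[k++] = 255;        // alpha: fully opaque
        }
    }
    raster.setPixels(0, 0, width, height, samples);
    try {
        ImageIO.write(image, "png", new File(path)); // PNG keeps the values exact; use "JPG" if you really need JPEG
    } catch (IOException x) {
        x.printStackTrace();
    }
}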
You can write your pixels directly by calling
image.setRGB(0, 0, pxls[0].length, pxls.length, pxlsr, 0, pxls[0].length);
That works for me.
A classic error with images is confusion about what width and height represent in terms of rows and columns in the array. A BufferedImage must be of size width x height, which might correspond to pxls[0].length x pxls.length and not the contrary.
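In other words, for a row-major int[rows][cols] array (rows = height, cols = width), the constructor call would look like this (a sketch reusing the pxls name from above):
BufferedImage out = new BufferedImage(pxls[0].length, pxls.length, BufferedImage.TYPE_INT_ARGB); // width from the inner dimension, height from the outer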
I have a 2D array that keeps the color component values:
p[pixel_value][red]
p[pixel_value][green]
p[pixel_value][blue]
I just don't know how to use them to make an image.
I read about setRGB; what I understand is that I should combine them into an RGB array, but how do I do that?
Is there any better way to make an image without setRGB? I need some explanation.
The setRGB() method can be used to set the color of a pixel in an already existing image. You can open an existing image and set all of its pixels using the values stored in your 2D array.
You can do it like this:
BufferedImage img = ImageIO.read(new File("image which you want to change the pixels"));
for (int width = 0; width < img.getWidth(); width++) {
    for (int height = 0; height < img.getHeight(); height++) {
        Color temp = new Color(p[pixel_value][red], p[pixel_value][green], p[pixel_value][blue]);
        img.setRGB(width, height, temp.getRGB());
    }
}
ImageIO.write(img, "jpg", new File("where you want to store this new image"));
Like this, you can iterate over all the pixels and set their color accordingly.
NOTE: By doing this, you will lose your original image. This is just one way that I know of.
What you need is a BufferedImage. Create a BufferedImage of type TYPE_3BYTE_BGR, if RGB is what you want, with a specified width and height.
QuickFact:
The BufferedImage subclass describes an Image with an accessible
buffer of image data.
Then, call the getRaster() method to get a WritableRaster
QuickFact:
This class extends Raster to provide pixel writing capabilities.
Then, use the setPixel(int x, int y, double[] dArray) method to set the pixels.
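Putting those steps together, a minimal sketch (the size and color are just placeholders):
int width = 200, height = 100;
BufferedImage img = new BufferedImage(width, height, BufferedImage.TYPE_3BYTE_BGR);
WritableRaster raster = img.getRaster();
for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
        // Samples are given in band order (red, green, blue),
        // even though TYPE_3BYTE_BGR stores the bytes as B, G, R
        raster.setPixel(x, y, new int[] { 255, 128, 0 });
    }
}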
Update:
If all you want is to read an image, use the ImageIO.read(File f) method. It will allow you to read an image file in just one method call.
Somewhat SSCCE:
BufferedImage img = null;
try {
    img = ImageIO.read(new File("strawberry.jpg"));
} catch (IOException e) {
    e.printStackTrace(); // at minimum, report the failure
}
You want to manually set RGB values?
You need to know that, since an int is 32 bits, it contains all 4 channel values (1 of them for the transparency).
xxxxxxxx xxxxxxxx xxxxxxxx xxxxxxxx
^Alpha ^red ^green ^blue
You can combine the four int values using bitwise arithmetic:
int rgb = (red << 16) | (green << 8) | blue;
bufferedImage.setRGB(x, y, rgb);
In the above you can add alpha as well if needed; you just shift each value into the right bit positions.
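For example, with alpha included (each component in the 0-255 range):
int argb = (alpha << 24) | (red << 16) | (green << 8) | blue;
bufferedImage.setRGB(x, y, argb);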
I am trying to use the underlying DataBufferByte of a BufferedImage of type TYPE_3BYTE_BGR to set pixel values as quick as possible.
Perhaps I am not understanding, but when I do the following...
byte[] imgBytes = ((DataBufferByte) img.getData().getDataBuffer()).getData();
... it seems as though I am getting a copy of the byte[] and not a reference. For example, if I run...
System.out.println(System.identityHashCode(imgBytes));
System.out.println(System.identityHashCode(((DataBufferByte) img.getData().getDataBuffer()).getData()));
... I get two clearly different object hashes. If I'm not mistaken, this indicates that I am not getting a reference to the underlying byte[] but rather a copy. If this is the case, how am I supposed to edit the DataBufferByte directly?
Or perhaps I am just setting the pixels wrong... When I set pixels in the imgBytes it doesn't seem to do anything to the BufferedImage. Once I get the byte[], I set each pixel value like so:
imgBytes[intOffset] = byteBlue;
imgBytes[intOffset+1] = byteGreen;
imgBytes[intOffset+2] = byteRed;
To me, this all seems fine. I can read pixels just fine this way so it seems I should be able to write them the same way!
I had the same problem. You must not use getData(); use getRaster() instead. Its data buffer is backed by the image, so writing to the array actually updates the image.
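A minimal sketch, assuming img is TYPE_3BYTE_BGR with the default layout (no scanline padding); x, y and the byte values are placeholders from the question:
// getRaster() exposes the live backing buffer, so writes show up in the image
byte[] imgBytes = ((DataBufferByte) img.getRaster().getDataBuffer()).getData();
int offset = (y * img.getWidth() + x) * 3; // 3 bytes per pixel, stored B, G, R
imgBytes[offset]     = byteBlue;
imgBytes[offset + 1] = byteGreen;
imgBytes[offset + 2] = byteRed;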
I once played around with pixel manipulations for Images in Java. Instead of directly answering your question I will offer an alternative solution to your problem. You can do the following to create an array of pixels to manipulate:
final int width = 800;
final int height = 600;
final int[] pixels = new int[width * height]; // 0xAARRGGBB
MemoryImageSource source = new MemoryImageSource(width, height, pixels, 0, width);
source.setAnimated(true);
source.setFullBufferUpdates(true);
Image image = Toolkit.getDefaultToolkit().createImage(source);
image.setAccelerationPriority(1f);
Then to draw the image, you can simply call the drawImage method from the Graphics class.
There are a few other ways to achieve what you are looking for, but this method was the simplest to me.
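For instance, from inside a Swing component (a sketch; image is the Image created above, and since the source is animated, calling source.newPixels() after changing the pixels array pushes the update):
@Override
protected void paintComponent(Graphics g) {
    super.paintComponent(g);
    g.drawImage(image, 0, 0, null);
}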
Here is how DataBufferByte.getData() is implemented in JDK 7; it returns the internal array directly, so if this doesn't work for you, the error is somewhere else (the copy comes from BufferedImage.getData(), not from the data buffer).
public byte[] getData() {
    theTrackable.setUntrackable();
    return data;
}
I'm currently turning an array of pixel values (originally created with a java.awt.image.PixelGrabber object) into an Image object using the following code:
public Image getImageFromArray(int[] pixels, int width, int height) {
MemoryImageSource mis = new MemoryImageSource(width, height, pixels, 0, width);
Toolkit tk = Toolkit.getDefaultToolkit();
return tk.createImage(mis);
}
Is it possible to achieve the same result using classes from the ImageIO package(s) so I don't have to use the AWT Toolkit?
Toolkit.getDefaultToolkit() does not seem to be 100% reliable and will sometimes throw an AWTError, whereas the ImageIO classes should always be available, which is why I'm interested in changing my method.
You can create the image without using ImageIO. Just create a BufferedImage using an image type matching the contents of the pixel array.
public static Image getImageFromArray(int[] pixels, int width, int height) {
    BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
    WritableRaster raster = (WritableRaster) image.getData();
    raster.setPixels(0, 0, width, height, pixels);
    return image;
}
When working with the PixelGrabber, don't forget to extract the RGBA info from the pixel array before calling getImageFromArray. There's an example of this in the handlepixel method in the PixelGrabber javadoc. Once you do that, make sure the image type in the BufferedImage constructor is BufferedImage.TYPE_INT_ARGB.
Using the raster I got an ArrayIndexOutOfBoundsException even when I created the BufferedImage with TYPE_INT_ARGB. However, using the setRGB(...) method of BufferedImage worked for me.
JavaDoc on BufferedImage.getData() says: "a Raster that is a copy of the image data."
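Putting those two comments together, a sketch that sidesteps the copied raster entirely (assuming pixels holds packed ARGB values, which is what PixelGrabber produces):
public static Image getImageFromArray(int[] pixels, int width, int height) {
    BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
    // setRGB writes straight into the image's own raster, so no copy is involved
    image.setRGB(0, 0, width, height, pixels, 0, width);
    return image;
}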
This code works for me, but I doubt its efficiency:
// Build the image from the array.
int[] pixels = new int[width*height];
// Draw a diagonal.
for (int j = 0; j < height; j++) {
for (int i = 0; i < width; i++) {
if (i == j) {
pixels[j*width + i] = Color.RED.getRGB();
}
else {
pixels[j*width + i] = Color.BLUE.getRGB();
//pixels[j*width + i] = 0x00000000;
}
}
}
BufferedImage pixelImage = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
pixelImage.setRGB(0, 0, width, height, pixels, 0, width);
I've had good success using java.awt.Robot to grab a screen shot (or a segment of the screen), but to work with ImageIO, you'll need to store it in a BufferedImage instead of the memory image source. Then you can call one static method of ImageIO and save the file. Try something like:
// Capture whole screen
Rectangle region = new Rectangle(Toolkit.getDefaultToolkit().getScreenSize());
BufferedImage capturedImage = new Robot().createScreenCapture(region);
// Save as PNG
File imageFile = new File("capturedImage.png");
ImageIO.write(capturedImage, "png", imageFile);
As this is one of the highest voted question tagged with ImageIO on SO, I think there's still room for a better solution, even if the question is old. :-)
Have a look at the BufferedImageFactory.java class from my open source imageio project at GitHub.
With it, you can simply write:
BufferedImage bufferedImage = new BufferedImageFactory(image).getBufferedImage();
The other good thing is that this approach, as a worst case, has about the same performance (time) as the PixelGrabber-based examples already in this thread. For most of the common cases (typically JPEG), it's about twice as fast. In any case, it uses less memory.
As a side bonus, the color model and pixel layout of the original image are kept, instead of being translated to int ARGB with the default color model. This might save additional memory.
(PS: The factory also supports subsampling, region-of-interest and progress listeners if anyone's interested. :-)
I had the same problem as everyone else trying to apply the accepted answer to this question: my int array threw an ArrayIndexOutOfBoundsException. I fixed that by enlarging the array, because its length has to be width*height*3. After this I still could not see the image, so I fixed that by setting the raster back into the image:
public static Image getImageFromArray(int[] pixels, int width, int height) {
    BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
    WritableRaster raster = (WritableRaster) image.getData();
    raster.setPixels(0, 0, width, height, pixels);
    image.setData(raster);
    return image;
}
And you can see the image if you show it on a JLabel in a JFrame like this:
JFrame frame = new JFrame();
frame.getContentPane().setLayout(new FlowLayout());
frame.getContentPane().add(new JLabel(new ImageIcon(image)));
frame.pack();
frame.setVisible(true);
by setting the image on the ImageIcon.
One last piece of advice: you can try changing BufferedImage.TYPE_INT_ARGB to something else that matches the image you got the array from; this type is very important. I had an array of 0 and -1 values, so I used BufferedImage.TYPE_3BYTE_BGR.