I'm having trouble getting pixel data.
My program takes screenshots; on every loop it stores the previous screenshot.
My goal is to do a comparison at the pixel level between the current screenshot and the old one.
I've run this code, which tells me what format the screenshots are in:
System.out.println(image.getType());
The output of this (for my program) is 1, meaning it's BufferedImage.TYPE_INT_RGB.
From what I've read, the type determines what order the pixel values are in within the byte array.
I'm using this code to convert my BufferedImage to a byte array (the BufferedImage is created using the awt.Robot class):
public byte[] convertToByteArray(BufferedImage img) {
    byte[] imageInByte = null;
    try {
        // convert BufferedImage to byte array
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        ImageIO.write(img, "png", baos);
        baos.flush();
        imageInByte = baos.toByteArray();
        baos.close();
    } catch (IOException ex) {
        Logger.getLogger(OverlayWindow.class.getName()).log(Level.SEVERE, null, ex);
    }
    return imageInByte;
}
Finally, I use a comparison method to check the byte array. For now this only prints the color values of the array:
final byte[] pixels = convertToByteArray(image);
final int pixelLength = 3;
for (int pixel = 0, row = 0, col = 0; pixel < 1; pixel += pixelLength) {
    int argb = 0;
    argb += -16777216; // 255 alpha
    argb += ((int) pixels[pixel] & 0xff); // blue
    argb += (((int) pixels[pixel + 1] & 0xff) << 8); // green
    argb += (((int) pixels[pixel + 2] & 0xff) << 16); // red
    int r = (argb >> 16) & 0xff, g = (argb >> 8) & 0xff, b = argb & 0xff;
    System.out.println("R = " + r);
    System.out.println("G = " + g);
    System.out.println("B = " + b);
    col++;
    if (col == width) {
        col = 0;
        row++;
    }
}
My issue with this code is that even though I take a screenshot of a solid color, the pixel values are all over the place. I'm expecting each pixel to have the same color values.
Edit:
I'm avoiding getRGB for performance reasons. Iterating through two large images and calling getRGB for each pixel is very costly in my application.
The easiest way to access the pixel values in a BufferedImage is to use the Raster:
BufferedImage image = ...
for (int y = 0; y < image.getHeight(); y++) {
    for (int x = 0; x < image.getWidth(); x++) {
        for (int c = 0; c < image.getRaster().getNumBands(); c++) {
            final int value = image.getRaster().getSample(x, y, c); // Returns the value of channel c of the pixel (x, y)
        }
    }
}
The Raster will take care of the encoding for you, making it the easiest way to access the pixel values. However, the fastest way is to use the DataBuffer directly, but then you have to manage all the encodings yourself.
/* This method takes a BufferedImage encoded with TYPE_INT_ARGB and copies the pixel values into an image encoded with TYPE_4BYTE_ABGR. */
public static void IntToByte(BufferedImage source, BufferedImage result)
{
    final byte[] bb = ((DataBufferByte) result.getRaster().getDataBuffer()).getData();
    final int[] ib = ((DataBufferInt) source.getRaster().getDataBuffer()).getData();
    switch (source.getType())
    {
        case BufferedImage.TYPE_INT_ARGB :
            for (int i = 0, b = 0; i < ib.length; i++, b += 4)
            {
                int p = ib[i];
                bb[b]     = (byte) ((p & 0xFF000000) >> 24); // alpha
                bb[b + 3] = (byte) ((p & 0xFF0000) >> 16);   // red
                bb[b + 2] = (byte) ((p & 0xFF00) >> 8);      // green
                bb[b + 1] = (byte) ( p & 0xFF);              // blue
            }
            break;
        // Many other cases to manage...
    }
}
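Applied back to the original question (comparing two screenshots), the same DataBuffer idea avoids both the PNG encoding step in convertToByteArray (whose output is compressed PNG data, not raw pixels) and per-pixel getRGB calls. A minimal sketch, assuming both images are the same size and of type TYPE_INT_RGB (as reported by getType() above), so the backing buffer is a DataBufferInt; diffMask is a hypothetical helper name:

import java.awt.image.BufferedImage;
import java.awt.image.DataBufferInt;

public static boolean[] diffMask(BufferedImage current, BufferedImage previous) {
    // Grab the raw pixels once; each int packs one pixel as 0x00RRGGBB.
    final int[] a = ((DataBufferInt) current.getRaster().getDataBuffer()).getData();
    final int[] b = ((DataBufferInt) previous.getRaster().getDataBuffer()).getData();
    final boolean[] changed = new boolean[a.length];
    for (int i = 0; i < a.length; i++) {
        changed[i] = a[i] != b[i];
    }
    return changed;
}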
I'm currently having an issue with alpha channels when reading PNG files with ImageIO.read(...)
fileInputStream = new FileInputStream(path);
BufferedImage image = ImageIO.read(fileInputStream);
//Just copying data into an integer array
int[] pixels = new int[image.getWidth() * image.getHeight()];
image.getRGB(0, 0, width, height, pixels, 0, width);
However, when trying to read values from the pixel array by bit shifting, as seen below, the alpha channel always returns -1:
int a = (pixels[i] & 0xff000000) >> 24;
int r = (pixels[i] & 0xff0000) >> 16;
int g = (pixels[i] & 0xff00) >> 8;
int b = (pixels[i] & 0xff);
//a = -1, the other channels are fine
By Googling the problem I understand that the BufferedImage type needs to be defined as below to allow for the alpha channel to work:
BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
But ImageIO.read(...) returns a BufferedImage without giving the option to specify the image type. So how can I do this?
Any help is much appreciated.
Thanks in advance
I think your "int unpacking" code might be wrong.
I used (pixel >> 24) & 0xff (where pixel is the rgba value of a specific pixel) and it worked fine.
I compared this with the results of java.awt.Color and they worked fine.
I "stole" the "extraction" code directly from java.awt.Color, this is, yet another reason, I tend not to perform these operations this way, it's to easy to screw them up
And my awesome test code...
BufferedImage image = ImageIO.read(new File("BYO image"));
int width = image.getWidth();
int height = image.getHeight();
for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
        int pixel = image.getRGB(x, y);
        //value = 0xff000000 | rgba;
        int a = (pixel >> 24) & 0xff;
        Color color = new Color(pixel, true);
        System.out.println(x + "x" + y + " = " + color.getAlpha() + "; " + a);
    }
}
NB: Before someone points out that this is inefficient: I wasn't going for efficiency, I was going for quick to write.
You may also want to have a look at How to convert get.rgb(x,y) integer pixel to Color(r,g,b,a) in Java?, which I also used to validate my results
I think the problem is that you're using an arithmetic shift (>>) instead of a logical shift (>>>). Thus 0xff000000 >> 24 becomes 0xffffffff (i.e. -1).
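To make it concrete, a small illustration (sample value chosen arbitrarily):

int pixel = 0xFF336699;                   // fully opaque sample ARGB value
int wrong = (pixel & 0xff000000) >> 24;   // arithmetic shift keeps the sign bit: -1
int right1 = pixel >>> 24;                // logical shift: 255
int right2 = (pixel >> 24) & 0xff;        // or mask after shifting: 255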
I am trying to use a tflite model in my Android app. The problem arises when I have to create a ByteBuffer out of the Bitmap and use it as input to the model.
Problem: the Bitmap is ARGB_8888 (32-bit), whereas I need an (8-bit) grayscale image.
Method to convert Bitmap to ByteBuffer:
mImgData = ByteBuffer
        .allocateDirect(4 * 28 * 28 * 1);

private void convertBitmapToByteBuffer(Bitmap bitmap) throws NullPointerException {
    if (mImgData == null) {
        throw new NullPointerException("Error: ByteBuffer not initialized.");
    }
    mImgData.rewind();
    for (int i = 0; i < DIM_IMG_SIZE_WIDTH; i++) {
        for (int j = 0; j < DIM_IMG_SIZE_HEIGHT; j++) {
            int pixelIntensity = bitmap.getPixel(i, j);
            unpackPixel(pixelIntensity, i, j);
            Log.d(TAG, String.format("convertBitmapToByteBuffer: %d -> %f",
                    pixelIntensity, convertToGrayScale(pixelIntensity)));
            mImgData.putFloat(convertToGrayScale(pixelIntensity));
        }
    }
}

private float convertToGrayScale(int color) {
    return (((color >> 16) & 0xFF) + ((color >> 8) & 0xFF) + (color & 0xFF)) / 3.0f / 255.0f;
}
However, all the pixel values are either -1 or -16777216. Note that the unpackPixel method mentioned here doesn't help, since all values have the same int value anyway. (Posted with changes below for reference.)
private void unpackPixel(int pixel, int row, int col) {
    short red, green, blue;
    red = (short) ((pixel >> 16) & 0xFF);
    green = (short) ((pixel >> 8) & 0xFF);
    blue = (short) ((pixel >> 0) & 0xFF);
}
You can call Color.red() (or green/blue) on the pixel value and it will return the gray intensity. Then just put it in the byte buffer using putFloat(). Also, getting all pixel values in a single array using bitmap.getPixels() is comparatively faster than calling bitmap.getPixel(i, j) per pixel. Here's how I am doing it to load grayscale images in my tflite model:
private ByteBuffer getByteBuffer(Bitmap bitmap) {
    int width = bitmap.getWidth();
    int height = bitmap.getHeight();
    ByteBuffer mImgData = ByteBuffer
            .allocateDirect(4 * width * height);
    mImgData.order(ByteOrder.nativeOrder());
    int[] pixels = new int[width * height];
    bitmap.getPixels(pixels, 0, width, 0, 0, width, height);
    for (int pixel : pixels) {
        mImgData.putFloat((float) Color.red(pixel));
    }
    return mImgData;
}
If you need normalized values just divide by 255:
float value = (float) Color.red(pixel)/255.0f;
mImgData.putFloat(value);
You can then use this in your interpreter as:
ByteBuffer input = getByteBuffer(bitmap);
tflite.run(input, outputValue);
Hope this helps people looking for this in the future!
I first tried to convert a JPG into an array of RGB values, and then tried to convert the same array back into a JPG.
picw = selectedImage.getWidth();
pich = selectedImage.getHeight();
int[] pix = new int[picw * pich];
selectedImage.getPixels(pix, 0, picw, 0, 0, picw, pich);
int R, G, B;
for (int y = 0; y < pich; y++) {
    for (int x = 0; x < picw; x++) {
        int index = y * picw + x;
        R = (pix[index] >> 16) & 0xff;
        G = (pix[index] >> 8) & 0xff;
        B = pix[index] & 0xff;
        pix[index] = (R << 16) | (G << 8) | B;
    }
}
Up to this point everything is fine (I checked by logging the array), but when I create a bitmap to compress it into a JPG, the output is a black image.
Bitmap bmp = Bitmap.createBitmap(pix, picw, pich, Bitmap.Config.ARGB_8888);
File folder = Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_DOWNLOADS);
File file = new File(folder, "Wonder.jpg");
FileOutputStream fileOutputStream = null;
try {
    fileOutputStream = new FileOutputStream(file);
    bmp.compress(Bitmap.CompressFormat.JPEG, 100, fileOutputStream);
} catch (FileNotFoundException e) {
    e.printStackTrace();
} finally {
    if (fileOutputStream != null) {
        try {
            fileOutputStream.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
Please help me move further. Thanks.
First let me explain how this data is stored per pixel:
Each pixel has 32 bits of data to store a value for: Alpha, Red, Green and Blue. Each of these values is just 8 bits (or a byte). (There are a lot of other formats for storing color information, but the one you specified is ARGB_8888.)
In this format, white is 0xffffffff and black is 0xff000000.
So, like I said in the comments, the alpha seems to be missing. A red pixel without any alpha, like 0x00ff0000, is not going to be visible.
Alpha can be added by first storing it:
A = (pix[index] >> 24) & 0xff;
Although the value is probably going to be 255 (because JPEG doesn't have alpha), I think it would be wise to use it like this in case you decide to use another format that does have alpha.
Then you should put the alpha back in:
pix[index] = (A << 24) | (R << 16) | (G << 8) | B;
This should write the exact same value to pix[index] that it already contains, changing nothing, but it will leave you with the original image instead of just black.
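Assembled from the question's loop and the two lines above, the fixed loop body would look like this (a sketch using the same variables as the question's code):

int A, R, G, B;
for (int y = 0; y < pich; y++) {
    for (int x = 0; x < picw; x++) {
        int index = y * picw + x;
        A = (pix[index] >> 24) & 0xff; // keep the alpha
        R = (pix[index] >> 16) & 0xff;
        G = (pix[index] >> 8) & 0xff;
        B = pix[index] & 0xff;
        pix[index] = (A << 24) | (R << 16) | (G << 8) | B;
    }
}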
There is this image comparison code I am supposed to modify to highlight/point out the differences between two images. Is there a way to modify this code so that it highlights the differences? If not, any suggestion on how to go about it would be greatly appreciated.
int width1 = img1.getWidth(null);
int width2 = img2.getWidth(null);
int height1 = img1.getHeight(null);
int height2 = img2.getHeight(null);
if ((width1 != width2) || (height1 != height2)) {
    System.err.println("Error: Images dimensions mismatch");
    System.exit(1);
}
long diff = 0;
for (int i = 0; i < height1; i++) {
    for (int j = 0; j < width1; j++) {
        int rgb1 = img1.getRGB(j, i);
        int rgb2 = img2.getRGB(j, i);
        int r1 = (rgb1 >> 16) & 0xff;
        int g1 = (rgb1 >> 8) & 0xff;
        int b1 = (rgb1) & 0xff;
        int r2 = (rgb2 >> 16) & 0xff;
        int g2 = (rgb2 >> 8) & 0xff;
        int b2 = (rgb2) & 0xff;
        diff += Math.abs(r1 - r2);
        diff += Math.abs(g1 - g2);
        diff += Math.abs(b1 - b2);
    }
}
double n = width1 * height1 * 3;
double p = diff / n / 255.0;
return (p * 100.0);
This solution did the trick for me. It highlights differences, and has the best performance out of the methods I've tried. (Assumptions: images are the same size. This method hasn't been tested with transparencies.)
Average time to compare a 1600x860 PNG image 50 times (on same machine):
JDK7 ~178 milliseconds
JDK8 ~139 milliseconds
Does anyone have a better/faster solution?
public static BufferedImage getDifferenceImage(BufferedImage img1, BufferedImage img2) {
    // convert images to pixel arrays...
    final int w = img1.getWidth(),
            h = img1.getHeight(),
            highlight = Color.MAGENTA.getRGB();
    final int[] p1 = img1.getRGB(0, 0, w, h, null, 0, w);
    final int[] p2 = img2.getRGB(0, 0, w, h, null, 0, w);
    // compare img1 to img2, pixel by pixel. If different, highlight img1's pixel...
    for (int i = 0; i < p1.length; i++) {
        if (p1[i] != p2[i]) {
            p1[i] = highlight;
        }
    }
    // save img1's pixels to a new BufferedImage, and return it...
    // (May require TYPE_INT_ARGB)
    final BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
    out.setRGB(0, 0, w, h, p1, 0, w);
    return out;
}
Usage:
import javax.imageio.ImageIO;
import java.io.File;

ImageIO.write(
        getDifferenceImage(
                ImageIO.read(new File("a.png")),
                ImageIO.read(new File("b.png"))),
        "png",
        new File("output.png"));
Some inspiration...
What I would do is set each pixel in the output to be the difference between the corresponding pixels of the two input images. The difference being calculated in your original code is based on the L1 norm; this is also called the sum of absolute differences. In any case, write a method that takes in your two images and returns an image of the same size, where each location is set to the difference between the pair of pixels at that same location in the inputs. Basically, this gives you an indication of which pixels are different: the whiter the pixel, the more difference there is between the two corresponding locations.
I'm also going to assume you're using a BufferedImage class, as getRGB() methods are used and you are bit-shifting to access individual channels. In other words, make a method that looks like this:
public static BufferedImage getDifferenceImage(BufferedImage img1, BufferedImage img2) {
    int width1 = img1.getWidth(); // Change - getWidth() and getHeight() for BufferedImage
    int width2 = img2.getWidth(); // take no arguments
    int height1 = img1.getHeight();
    int height2 = img2.getHeight();
    if ((width1 != width2) || (height1 != height2)) {
        System.err.println("Error: Images dimensions mismatch");
        System.exit(1);
    }
    // NEW - Create output Buffered image of type RGB
    BufferedImage outImg = new BufferedImage(width1, height1, BufferedImage.TYPE_INT_RGB);
    // Modified - Changed to int as pixels are ints
    int diff;
    int result; // Stores output pixel
    for (int i = 0; i < height1; i++) {
        for (int j = 0; j < width1; j++) {
            int rgb1 = img1.getRGB(j, i);
            int rgb2 = img2.getRGB(j, i);
            int r1 = (rgb1 >> 16) & 0xff;
            int g1 = (rgb1 >> 8) & 0xff;
            int b1 = (rgb1) & 0xff;
            int r2 = (rgb2 >> 16) & 0xff;
            int g2 = (rgb2 >> 8) & 0xff;
            int b2 = (rgb2) & 0xff;
            diff = Math.abs(r1 - r2); // Change
            diff += Math.abs(g1 - g2);
            diff += Math.abs(b1 - b2);
            diff /= 3; // Change - Ensure result is between 0 - 255
            // Make the difference image gray scale
            // The RGB components are all the same
            result = (diff << 16) | (diff << 8) | diff;
            outImg.setRGB(j, i, result); // Set result
        }
    }
    // Now return
    return outImg;
}
To call this method, simply do:
outImg = getDifferenceImage(img1, img2);
This is assuming that you are calling this within a method of your class. Have fun and good luck!
Just to note that the answer from #NickGrealy can be made 10 times faster if you don't need to keep the first image and can modify it in place.
Example:
// img1 will be updated with the changes from img2
public static BufferedImage getDifferenceImage(BufferedImage img1, BufferedImage img2) {
    byte[] magenta = {-1, 0, -1};
    byte[] buff1 = ((DataBufferByte) img1.getRaster().getDataBuffer()).getData();
    byte[] buff2 = ((DataBufferByte) img2.getRaster().getDataBuffer()).getData();
    // step through one byte per pixel (4-byte ABGR layout assumed)
    for (int i = 1; i < buff1.length; i += 4) {
        if (buff1[i] != buff2[i]) {
            System.arraycopy(magenta, 0, buff1, i, 3);
        }
    }
    return img1;
}
I needed a fast approach to use on a potentially large number of images for visual regression checking.
It runs in < 2 ms on my machine. In my case img1 is already saved on disk, so I don't need to preserve it; I'm just interested in having the differences written into the buffered image so I can write it to a new location for further inspection.
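One caveat: the DataBufferByte cast above only works when both images actually use a byte-backed ABGR layout such as TYPE_4BYTE_ABGR; ImageIO.read(...) may return other types. A hypothetical helper (toAbgr is my name, not from the answer) to redraw an image into that layout first:

import java.awt.image.BufferedImage;

// Redraws src into a TYPE_4BYTE_ABGR image, so its DataBuffer is a
// DataBufferByte laid out as A, B, G, R per pixel.
static BufferedImage toAbgr(BufferedImage src) {
    BufferedImage out = new BufferedImage(
            src.getWidth(), src.getHeight(), BufferedImage.TYPE_4BYTE_ABGR);
    out.getGraphics().drawImage(src, 0, 0, null);
    return out;
}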
I need to create a simple demo for image manipulation in Java. My code is Swing based. I don't have to do anything complex, just show that the image has changed in some way. I have the image read as a byte[]. Is there any way that I can manipulate this byte array, without corrupting the bytes, to show some very simple manipulation? I don't wish to use paint() etc. Is there anything that I can do directly to the byte[] array to show some change?
Edit:
I am reading the JPG image as a ByteArrayInputStream using the Apache IO library. The bytes are read OK, and I can confirm it by writing them back as a JPEG.
You can try to convert your RGB image to grayscale. If the image has 3 bytes per pixel represented as RedGreenBlue, you can use the following formula: y = 0.299*r + 0.587*g + 0.114*b.
To be clear, iterate over the byte array and replace the colors. Here's an example:
byte[] newImage = new byte[rgbImage.length];
for (int i = 0; i < rgbImage.length; i += 3) {
    // Mask with 0xFF so signed bytes are treated as 0-255 intensities
    newImage[i] = (byte) ((rgbImage[i] & 0xFF) * 0.299
            + (rgbImage[i + 1] & 0xFF) * 0.587
            + (rgbImage[i + 2] & 0xFF) * 0.114);
    newImage[i + 1] = newImage[i];
    newImage[i + 2] = newImage[i];
}
UPDATE:
The above code assumes you're working on a raw RGB image; if you need to process a JPEG file you can do this:
try {
    BufferedImage inputImage = ImageIO.read(new File("input.jpg"));
    BufferedImage outputImage = new BufferedImage(
            inputImage.getWidth(), inputImage.getHeight(),
            BufferedImage.TYPE_INT_RGB);
    for (int x = 0; x < inputImage.getWidth(); x++) {
        for (int y = 0; y < inputImage.getHeight(); y++) {
            int rgb = inputImage.getRGB(x, y);
            int blue = 0x0000ff & rgb;
            int green = 0x0000ff & (rgb >> 8);
            int red = 0x0000ff & (rgb >> 16);
            int lum = (int) (red * 0.299 + green * 0.587 + blue * 0.114);
            outputImage.setRGB(x, y, lum | (lum << 8) | (lum << 16));
        }
    }
    ImageIO.write(outputImage, "jpg", new File("output.jpg"));
} catch (IOException e) {
    e.printStackTrace();
}
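As an aside, the JDK can also do the grayscale conversion for you via ColorConvertOp; a short sketch (note it uses the ICC gray color space, so the result can differ slightly from the 0.299/0.587/0.114 weights above):

import java.awt.color.ColorSpace;
import java.awt.image.BufferedImage;
import java.awt.image.ColorConvertOp;

BufferedImage gray = new ColorConvertOp(
        ColorSpace.getInstance(ColorSpace.CS_GRAY), null)
        .filter(inputImage, null); // creates a compatible grayscale destination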