I'm trying to analyze an image-based 3-digit number captcha from an online resource. The digits never move. I use BufferedImage's getSubimage(...) method to extract each digit from the captcha, and I have saved reference images for 0-9 in each of the ones, tens and hundreds places (so 30 images in total).
I read the bytes of the online image into a byte[] and then create a BufferedImage object like this:
BufferedImage captcha = ImageIO.read(new ByteArrayInputStream(captchaBytes));
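For reference, the digit extraction mentioned above is just three getSubimage calls; the offsets and sizes below are placeholders, since the real values depend on the (fixed) captcha layout:
// Hypothetical offsets/sizes -- the actual values depend on the captcha layout.
int digitWidth = 20, digitHeight = 30, startX = 5, startY = 5;
BufferedImage firstNumberImage = captcha.getSubimage(startX, startY, digitWidth, digitHeight);
BufferedImage secondNumberImage = captcha.getSubimage(startX + digitWidth, startY, digitWidth, digitHeight);
BufferedImage thirdNumberImage = captcha.getSubimage(startX + 2 * digitWidth, startY, digitWidth, digitHeight);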
Then I compare this image to a list of images on my drive:
BufferedImage[] nums = new BufferedImage[10];
// Load images into the array here... The code is removed.
for (int i = 0; i < nums.length; i++) {
    double x;
    System.out.println(x = bufferedImagesEqualConfidence(nums[i], firstNumberImage));
    if (x > 0.98) {
        System.out.println("equal to image " + i + ".jpeg");
        isNewEntry = false;
        break;
    }
}
This is how I compare two images:
static double bufferedImagesEqualConfidence(BufferedImage img1, BufferedImage img2) {
    double difference = 0;
    int pixels = img1.getWidth() * img1.getHeight();
    if (img1.getWidth() == img2.getWidth() && img1.getHeight() == img2.getHeight()) {
        for (int x = 0; x < img1.getWidth(); x++) {
            for (int y = 0; y < img1.getHeight(); y++) {
                int rgbA = img1.getRGB(x, y);
                int rgbB = img2.getRGB(x, y);
                int redA = (rgbA >> 16) & 0xff;
                int greenA = (rgbA >> 8) & 0xff;
                int blueA = (rgbA) & 0xff;
                int redB = (rgbB >> 16) & 0xff;
                int greenB = (rgbB >> 8) & 0xff;
                int blueB = (rgbB) & 0xff;
                difference += Math.abs(redA - redB);
                difference += Math.abs(greenA - greenB);
                difference += Math.abs(blueA - blueB);
            }
        }
    } else {
        return 0.0;
    }
    return 1 - ((difference / (double) pixels) / 255.0);
}
The image is downloaded in full through an HttpURLConnection wrapped in my own HttpGet class, so I do byte[] captchaBytes = hg.readAndGetBytes();. I know this works because when I save the BufferedImage created by ImageIO.read(new ByteArrayInputStream(captchaBytes)), it comes out as a valid image on my drive.
However, even though the two images are actually the same, the result says they are not similar at all. But when I first save the downloaded image to disk, re-read it, and then compare, it reports them as equal. This is what I mean by saving and re-reading:
File temp = new File("temp.jpeg");
ImageIO.write(secondNumberImage, "jpeg", temp);
secondNumberImage = ImageIO.read(temp);
Image format: JPEG
I know this may have something to do with compression from ImageIO.write(...), but how can I make it so that I don't have to save the image?
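(Side note: if the goal is just to reproduce that JPEG round trip without touching the disk, ImageIO can also write to memory. A minimal sketch using a ByteArrayOutputStream, not necessarily the actual fix:)
// Re-encode and re-decode the sub-image in memory instead of via temp.jpeg
// (needs java.io.ByteArrayOutputStream in addition to ByteArrayInputStream).
ByteArrayOutputStream baos = new ByteArrayOutputStream();
ImageIO.write(secondNumberImage, "jpeg", baos);
secondNumberImage = ImageIO.read(new ByteArrayInputStream(baos.toByteArray()));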
The problem was within my bufferedImagesEqualConfidence method. Simply comparing RGB was not enough. I had to compare individual R/G/B values.
My initial bufferedImagesEqualConfidence that didn't work was:
static double bufferedImagesEqualConfidence(BufferedImage img1, BufferedImage img2) {
    int similarity = 0;
    int pixels = img1.getWidth() * img1.getHeight();
    if (img1.getWidth() == img2.getWidth() && img1.getHeight() == img2.getHeight()) {
        for (int x = 0; x < img1.getWidth(); x++) {
            for (int y = 0; y < img1.getHeight(); y++) {
                if (img1.getRGB(x, y) == img2.getRGB(x, y)) {
                    similarity++;
                }
            }
        }
    } else {
        return 0.0;
    }
    return similarity / (double) pixels;
}
(Source: Java Compare one BufferedImage to Another)
The bufferedImagesEqualConfidence that worked is:
static double bufferedImagesEqualConfidence(BufferedImage img1, BufferedImage img2) {
    double difference = 0;
    int pixels = img1.getWidth() * img1.getHeight();
    if (img1.getWidth() == img2.getWidth() && img1.getHeight() == img2.getHeight()) {
        for (int x = 0; x < img1.getWidth(); x++) {
            for (int y = 0; y < img1.getHeight(); y++) {
                int rgbA = img1.getRGB(x, y);
                int rgbB = img2.getRGB(x, y);
                int redA = (rgbA >> 16) & 0xff;
                int greenA = (rgbA >> 8) & 0xff;
                int blueA = (rgbA) & 0xff;
                int redB = (rgbB >> 16) & 0xff;
                int greenB = (rgbB >> 8) & 0xff;
                int blueB = (rgbB) & 0xff;
                difference += Math.abs(redA - redB);
                difference += Math.abs(greenA - greenB);
                difference += Math.abs(blueA - blueB);
            }
        }
    } else {
        return 0.0;
    }
    return 1 - ((difference / (double) pixels) / 255.0);
}
(Source: Image Processing in Java)
I guess that to measure similarity between two images you have to compare the individual R/G/B channel values for each pixel, rather than testing the packed RGB int for exact equality.
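One refinement worth noting: the summed difference covers three channel values per pixel, so dividing by pixels and then by 255 can leave the score outside the 0..1 range for very different images. If you want the confidence strictly between 0 and 1, normalize by pixels * 3 instead; the 0.98 threshold from the question still works either way:
// variation of the last line of bufferedImagesEqualConfidence: average over all channel samples
return 1 - ((difference / (pixels * 3.0)) / 255.0);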
I am new to Java and want to count the red pixels in a given image. I have the code below so far, but I'm not sure what condition to add to check whether a pixel is red. Thanks in advance.
public static int countRedPixels(Picture v) {
    BufferedImage image = v.getBufferedImage();
    int width = image.getWidth();
    int height = image.getHeight();
    int redCount = 0;
    int pixelCount = 0;
    for (int x = 0; x < width; x++) {
        for (int y = 0; y < height; y++) {
            int rgb = image.getRGB(x, y);
            // get rgbs
            //int alpha = (rgb >>> 24) & 0xFF;
            int red = (rgb >>> 16) & 0xFF;
            int green = (rgb >>> 8) & 0xFF;
            int blue = (rgb >>> 0) & 0xFF;
            if (red == 255 && green == 0 && blue == 0 || image.getRGB(x, y) == 0xFFFF0000) {
                redCount++;
            }
            pixelCount++;
        }
    }
    System.out.println("Red Pixel Count:" + redCount);
    System.out.println("Pixel Count:" + pixelCount);
    return redCount;
}
Not sure if I'm missing something or you're really not seeing the forest for the trees. Anyway, comment turned answer:
Given that red means (255, 0, 0), you could do:
if (image.getRGB(x, y) == 0xFFFF0000) {
++redCount;
}
Or alternatively, if you don't care about alpha:
if (red == 255 && green == 0 && blue == 0) {
++redCount;
}
I found this Java method online and have been using it to compare images. However, I want to add a tolerance (an offset) to it, so that if the two images are, say, 98% or 99% similar it still returns true.
public boolean compareImage(File fileA, File fileB) {
    try {
        // take buffer data from both image files
        BufferedImage biA = ImageIO.read(fileA);
        DataBuffer dbA = biA.getData().getDataBuffer();
        int sizeA = dbA.getSize();
        BufferedImage biB = ImageIO.read(fileB);
        DataBuffer dbB = biB.getData().getDataBuffer();
        int sizeB = dbB.getSize();
        // compare data-buffer objects
        if (sizeA == sizeB) {
            for (int i = 0; i < sizeA; i++) {
                if (dbA.getElem(i) != dbB.getElem(i)) {
                    return false;
                }
            }
            return true;
        } else {
            return false;
        }
    } catch (Exception e) {
        System.out.println("Failed to compare image files ...");
        return false;
    }
}
What is the best way to do this?
To see whether they are 98% or 99% similar, you have to compare all the pixels instead of returning false at the first instance of dbA.getElem(i) != dbB.getElem(i).
Try a counter:
int total = 0;
int is_similar = 0;
for (int i = 0; i < sizeA; i++) {
    total++;
    if (dbA.getElem(i) == dbB.getElem(i)) { // change it to ==
        is_similar++;
    }
}
// don't return anything yet
Then you can return true when is_similar / (double) total is at least 0.98 or 0.99 (cast to double so the division isn't truncated to an int).
Edit: change sizeA to min(sizeA, sizeB) if you want to test cases when the image sizes are different as well.
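Putting it together, the whole method could look roughly like this (a sketch; the threshold parameter is hypothetical and up to you, and it uses min(sizeA, sizeB) as suggested in the edit):
public boolean compareImage(File fileA, File fileB, double threshold) {
    try {
        DataBuffer dbA = ImageIO.read(fileA).getData().getDataBuffer();
        DataBuffer dbB = ImageIO.read(fileB).getData().getDataBuffer();
        int size = Math.min(dbA.getSize(), dbB.getSize());
        int matching = 0;
        for (int i = 0; i < size; i++) {
            if (dbA.getElem(i) == dbB.getElem(i)) {
                matching++;
            }
        }
        // e.g. threshold = 0.98 for "at least 98% of the samples are identical"
        return matching / (double) size >= threshold;
    } catch (Exception e) {
        System.out.println("Failed to compare image files ...");
        return false;
    }
}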
There are many approaches to image similarity, some of them requiring advanced AI algorithms. This Rosetta Code task offers a pixel-by-pixel colour-distance comparison (but it works for equal-sized images only). I am just copy-pasting the Java implementation from that link.
Note that if you want absolute pixel equality (rather than per-pixel colour distance), you can simply count the differing pixels; no distance computation is required.
import java.awt.image.BufferedImage;
import javax.imageio.ImageIO;
import java.io.IOException;
import java.net.URL;

public class ImgDiffPercent {
    public static void main(String[] args) {
        BufferedImage img1 = null;
        BufferedImage img2 = null;
        try {
            URL url1 = new URL("http://rosettacode.org/mw/images/3/3c/Lenna50.jpg");
            URL url2 = new URL("http://rosettacode.org/mw/images/b/b6/Lenna100.jpg");
            img1 = ImageIO.read(url1);
            img2 = ImageIO.read(url2);
        } catch (IOException e) {
            e.printStackTrace();
        }

        int width1 = img1.getWidth(null);
        int width2 = img2.getWidth(null);
        int height1 = img1.getHeight(null);
        int height2 = img2.getHeight(null);
        if ((width1 != width2) || (height1 != height2)) {
            System.err.println("Error: Images dimensions mismatch");
            System.exit(1);
        }

        long diff = 0;
        for (int y = 0; y < height1; y++) {
            for (int x = 0; x < width1; x++) {
                int rgb1 = img1.getRGB(x, y);
                int rgb2 = img2.getRGB(x, y);
                int r1 = (rgb1 >> 16) & 0xff;
                int g1 = (rgb1 >> 8) & 0xff;
                int b1 = rgb1 & 0xff;
                int r2 = (rgb2 >> 16) & 0xff;
                int g2 = (rgb2 >> 8) & 0xff;
                int b2 = rgb2 & 0xff;
                diff += Math.abs(r1 - r2);
                diff += Math.abs(g1 - g2);
                diff += Math.abs(b1 - b2);
            }
        }
        double n = width1 * height1 * 3;
        double p = diff / n / 255.0;
        System.out.println("diff percent: " + (p * 100.0));
    }
}
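As the note above says, if plain pixel equality is enough, the inner loop of this program can simply count differing pixels instead of accumulating colour distance, for example:
// Drop-in replacement for the diff loop above: count pixels that are not bit-for-bit identical.
long differing = 0;
for (int y = 0; y < height1; y++) {
    for (int x = 0; x < width1; x++) {
        if (img1.getRGB(x, y) != img2.getRGB(x, y)) {
            differing++;
        }
    }
}
double differingPercent = 100.0 * differing / (width1 * height1);
System.out.println("differing pixels: " + differingPercent + "%");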
I am trying to get the difference between two images of the same size, and I found this code:
BufferedImage img1 = null;
BufferedImage img2 = null;
try {
    URL url1 = new URL("http://rosettacode.org/mw/images/3/3c/Lenna50.jpg");
    URL url2 = new URL("http://rosettacode.org/mw/images/b/b6/Lenna100.jpg");
    img1 = ImageIO.read(url1);
    img2 = ImageIO.read(url2);
} catch (IOException e) {
    e.printStackTrace();
}

int width1 = img1.getWidth(null);
int width2 = img2.getWidth(null);
int height1 = img1.getHeight(null);
int height2 = img2.getHeight(null);
if ((width1 != width2) || (height1 != height2)) {
    System.err.println("Error: Images dimensions mismatch");
    System.exit(1);
}

long diff = 0;
for (int i = 0; i < height1; i++) {
    for (int j = 0; j < width1; j++) {
        int rgb1 = img1.getRGB(i, j);
        int rgb2 = img2.getRGB(i, j);
        int r1 = (rgb1 >> 16) & 0xff;
        int g1 = (rgb1 >> 8) & 0xff;
        int b1 = rgb1 & 0xff;
        int r2 = (rgb2 >> 16) & 0xff;
        int g2 = (rgb2 >> 8) & 0xff;
        int b2 = rgb2 & 0xff;
        diff += Math.abs(r1 - r2);
        diff += Math.abs(g1 - g2);
        diff += Math.abs(b1 - b2);
    }
}
double n = width1 * height1 * 3;
double p = diff / n / 255.0;
System.out.println("diff percent: " + (p * 100.0));
It works fine for the two images given in the URLs, but when I changed the images I get this exception:
Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: Coordinate out of bounds!
at sun.awt.image.ByteInterleavedRaster.getDataElements(ByteInterleavedRaster.java:299)
at java.awt.image.BufferedImage.getRGB(BufferedImage.java:871)
at Main.main(Main.java:77)
I changed the code to:
File sourceimage1 = new File("C:\\lo.jpg");
File sourceimage2 = new File("C:\\lo1.jpg");
img1 = ImageIO.read(sourceimage1);
img2 = ImageIO.read(sourceimage2);
The two images are black and white, and their dimensions are smaller than those of the two previous images (Lenna50 and Lenna100).
lo.jpg and lo1.jpg are the same image (also black and white), used to test the algorithm.
How can I change the code to make it work for any image dimensions?
Swap the i and j in the following calls, as I have done below:
int rgb1 = img1.getRGB(j, i);
int rgb2 = img2.getRGB(j, i);
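getRGB expects (x, y), i.e. (column, row); since i runs over the height and j over the width, the corrected inner loop looks like this:
for (int i = 0; i < height1; i++) {        // i = row (y)
    for (int j = 0; j < width1; j++) {     // j = column (x)
        int rgb1 = img1.getRGB(j, i);      // getRGB takes (x, y)
        int rgb2 = img2.getRGB(j, i);
        // ... channel extraction and diff accumulation as before ...
    }
}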
Your error clearly says that the line img1.getRGB(i, j); is reading a coordinate outside the image's bounds. Check the values of i and j inside your inner for loop and see whether something is wrong. As Hirak already pointed out, it may be that you are not using the loop variables correctly, which is why the coordinate goes beyond the height or width.
I am using an Android phone running Ice Cream Sandwich, and I am testing its face-detection behaviour, referring to the sample code available from Google.
While debugging on my phone, getMaxNumDetectedFaces() returns 3, so face detection is supported on this device.
However, the following code is not working:
public void onFaceDetection(Face[] faces, Camera face_camera1) {
    // TODO Auto-generated method stub
    if (faces.length > 0) {
        Log.d("FaceDetection", "face detected:" + faces.length + "Face 1 location X:" + faces[0].rect.centerX() + "Y:" + faces[0].rect.centerY());
    }
}
Here faces.length is always returning zero. Please suggest how to solve this.
I worked with face detection some time ago. At the time, onFaceDetection didn't work for me, so I found another way to do it.
I used PreviewCallback: it gives you each preview frame, and you can use the frame to recognize faces. The only problem is the format; the default is NV21, and although you can change it with setPreviewFormat(int), that didn't work for me either, so I had to do the conversion myself to get the Bitmap type that FaceDetector accepts. Here is my code:
public PreviewCallback mPreviewCallback = new PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        Camera.Size size = camera.getParameters().getPreviewSize();
        // here I get the Bitmap from getBitmapFromNV21, which is the conversion method below
        Bitmap mfoto_imm = this.getBitmapFromNV21(data, size.width, size.height, true);
        Bitmap mfoto = mfoto_imm.copy(Bitmap.Config.RGB_565, true);
        imagen.setImageBitmap(mfoto);
        int alto = mfoto.getHeight();
        int ancho = mfoto.getWidth();
        int count;
        canvas = new Canvas(mfoto);
        dibujo.setColor(Color.GREEN);
        dibujo.setAntiAlias(true);
        dibujo.setStrokeWidth(8);
        canvas.drawBitmap(mfoto, matrix, dibujo);
        FaceDetector mface = new FaceDetector(ancho, alto, 1);
        FaceDetector.Face[] face = new FaceDetector.Face[1];
        count = mface.findFaces(mfoto, face);
        PointF midpoint = new PointF();
        int fpx = 0;
        int fpy = 0;
        if (count > 0) {
            face[count - 1].getMidPoint(midpoint); // you have to take the last one, minus 1
            fpx = (int) midpoint.x; // middle point of the face in x
            fpy = (int) midpoint.y; // middle point of the face in y
        }
        canvas.drawCircle(fpx, fpy, 10, dibujo); // here I draw a circle in the middle of the face
        imagen.invalidate();
    }
And here are the conversion methods (still inside the same callback object, which is closed after them):
    public Bitmap getBitmapFromNV21(byte[] data, int width, int height, boolean rotated) {
        Bitmap bitmap = null;
        int[] pixels = new int[width * height];
        // Convert the NV21 byte array into ARGB pixels
        this.yuv2rgb(pixels, data, width, height, rotated);
        if (rotated) {
            bitmap = Bitmap.createBitmap(pixels, height, width, Bitmap.Config.RGB_565);
        } else {
            bitmap = Bitmap.createBitmap(pixels, width, height, Bitmap.Config.RGB_565);
        }
        return bitmap;
    }

    public void yuv2rgb(int[] out, byte[] in, int width, int height, boolean rotated)
            throws NullPointerException, IllegalArgumentException {
        final int size = width * height;
        if (out == null) throw new NullPointerException("buffer 'out' == null");
        if (out.length < size) throw new IllegalArgumentException("buffer 'out' length < " + size);
        if (in == null) throw new NullPointerException("buffer 'in' == null");
        if (in.length < (size * 3 / 2)) throw new IllegalArgumentException("buffer 'in' length != " + in.length + " < " + (size * 3 / 2));
        // YCrCb
        int Y, Cr = 0, Cb = 0;
        int Rn = 0, Gn = 0, Bn = 0;
        for (int j = 0, pixPtr = 0, cOff0 = size - width; j < height; j++) {
            if ((j & 0x1) == 0)
                cOff0 += width;
            int pixPos = height - 1 - j;
            for (int i = 0, cOff = cOff0; i < width; i++, cOff++, pixPtr++, pixPos += height) {
                // Get Y
                Y = 0xff & in[pixPtr]; // 0xff mask because the byte is signed
                // Get Cr and Cb
                if ((pixPtr & 0x1) == 0) {
                    Cr = in[cOff];
                    if (Cr < 0) Cr += 127; else Cr -= 128;
                    Cb = in[cOff + 1];
                    if (Cb < 0) Cb += 127; else Cb -= 128;
                    Bn = Cb + (Cb >> 1) + (Cb >> 2) + (Cb >> 6);
                    Gn = -(Cb >> 2) + (Cb >> 4) + (Cb >> 5) - (Cr >> 1) + (Cr >> 3) + (Cr >> 4) + (Cr >> 5);
                    Rn = Cr + (Cr >> 2) + (Cr >> 3) + (Cr >> 5);
                }
                int R = Y + Rn;
                if (R < 0) R = 0; else if (R > 255) R = 255;
                int B = Y + Bn;
                if (B < 0) B = 0; else if (B > 255) B = 255;
                int G = Y + Gn;
                if (G < 0) G = 0; else if (G > 255) G = 255;
                // At this point you could apply a filter to the separate components of the image,
                // for example swap two components or remove one.
                int rgb = 0xff000000 | (R << 16) | (G << 8) | B;
                // Depending on the 'rotated' option, the output buffer is filled with or without the rotation applied.
                if (rotated)
                    out[pixPos] = rgb;
                else
                    out[pixPtr] = rgb;
            }
        }
    }
};
setPreviewFormat(int) doesn't work on some devices, but maybe you can try it and create the Bitmap without using the conversion.
I hope this helps.
I want to read in a JPEG image with a uniform gray background and several colored balls of the same size on it. I want a program that can take this image and record the coordinates of each ball. What's the best way to do this?
I agree with James. I used the following program once to find red boxes in an image (before most of the red boxes were recolored by the community):
import java.awt.image.BufferedImage;
import java.io.File;
import java.util.LinkedList;
import java.util.List;
import javax.imageio.ImageIO;

/**
 * @author karnokd, 2008.11.07.
 * @version $Revision 1.0$
 */
public class RedLocator {
    public static class Rect {
        int x;
        int y;
        int x2;
        int y2;
    }

    static List<Rect> rects = new LinkedList<Rect>();

    static boolean checkRect(int x, int y) {
        for (Rect r : rects) {
            if (x >= r.x && x <= r.x2 && y >= r.y && y <= r.y2) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) throws Exception {
        BufferedImage image = ImageIO.read(new File("fallout3worldmapfull.png"));
        for (int y = 0; y < image.getHeight(); y++) {
            for (int x = 0; x < image.getWidth(); x++) {
                int c = image.getRGB(x, y);
                int red = (c & 0x00ff0000) >> 16;
                int green = (c & 0x0000ff00) >> 8;
                int blue = c & 0x000000ff;
                // check red-ness
                if (red > 180 && green < 30 && blue < 30) {
                    if (!checkRect(x, y)) {
                        // walk right until the pixel is no longer red
                        int tmpx = x;
                        int tmpy = y;
                        while (red > 180 && green < 30 && blue < 30) {
                            c = image.getRGB(tmpx++, tmpy);
                            red = (c & 0x00ff0000) >> 16;
                            green = (c & 0x0000ff00) >> 8;
                            blue = c & 0x000000ff;
                        }
                        tmpx -= 2;
                        // walk down until the pixel is no longer red
                        c = image.getRGB(tmpx, tmpy);
                        red = (c & 0x00ff0000) >> 16;
                        green = (c & 0x0000ff00) >> 8;
                        blue = c & 0x000000ff;
                        while (red > 180 && green < 30 && blue < 30) {
                            c = image.getRGB(tmpx, tmpy++);
                            red = (c & 0x00ff0000) >> 16;
                            green = (c & 0x0000ff00) >> 8;
                            blue = c & 0x000000ff;
                        }
                        Rect r = new Rect();
                        r.x = x;
                        r.y = y;
                        r.x2 = tmpx;
                        r.y2 = tmpy - 2;
                        rects.add(r);
                    }
                }
            }
        }
    }
}
Might give you some hints. The image originates from here.
You can use the ImageIO library to read in an image by using one of the read() methods. This produces a BufferedImage which you can analyze for the separate colors. getRGB() is probably the best way to do this. You can compare this to the getRGB() of a Color object if you need to. That should be enough to get you started.
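To make that concrete, here is a rough sketch along those lines: it treats any pixel that is clearly more colourful than gray as part of a ball and records an approximate centre for each such region. The file name, the saturation threshold and the minimum-distance grouping are placeholders and would need tuning for a real image:
import java.awt.Point;
import java.awt.image.BufferedImage;
import java.io.File;
import java.util.ArrayList;
import java.util.List;
import javax.imageio.ImageIO;

public class BallLocator {
    public static void main(String[] args) throws Exception {
        BufferedImage image = ImageIO.read(new File("balls.jpg")); // placeholder file name
        List<Point> centers = new ArrayList<Point>();
        int ballRadius = 10; // rough ball radius in pixels; adjust for your image
        for (int y = 0; y < image.getHeight(); y++) {
            for (int x = 0; x < image.getWidth(); x++) {
                int rgb = image.getRGB(x, y);
                int r = (rgb >> 16) & 0xff;
                int g = (rgb >> 8) & 0xff;
                int b = rgb & 0xff;
                // Gray pixels have roughly equal channels; colored pixels don't.
                int spread = Math.max(r, Math.max(g, b)) - Math.min(r, Math.min(g, b));
                if (spread > 40 && !nearExisting(centers, x, y, 2 * ballRadius)) {
                    // First colored pixel of a new ball is near the top of the ball,
                    // so shift down by one radius as a crude estimate of its centre.
                    centers.add(new Point(x, y + ballRadius));
                }
            }
        }
        for (Point p : centers) {
            System.out.println("ball near (" + p.x + ", " + p.y + ")");
        }
    }

    static boolean nearExisting(List<Point> centers, int x, int y, int minDist) {
        for (Point p : centers) {
            if (Math.abs(p.x - x) < minDist && Math.abs(p.y - y) < minDist) {
                return true;
            }
        }
        return false;
    }
}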