Java image comparing class - add offset [closed]

I found this Java class online and have been using it to compare images. However, I want to add an offset (tolerance) to it: for example, if the two images are 98% or 99% similar, it should return true.
public boolean compareImage(File fileA, File fileB) {
    try {
        // take buffer data from both image files //
        BufferedImage biA = ImageIO.read(fileA);
        DataBuffer dbA = biA.getData().getDataBuffer();
        int sizeA = dbA.getSize();
        BufferedImage biB = ImageIO.read(fileB);
        DataBuffer dbB = biB.getData().getDataBuffer();
        int sizeB = dbB.getSize();
        // compare data-buffer objects //
        if (sizeA == sizeB) {
            for (int i = 0; i < sizeA; i++) {
                if (dbA.getElem(i) != dbB.getElem(i)) {
                    return false;
                }
            }
            return true;
        } else {
            return false;
        }
    } catch (Exception e) {
        System.out.println("Failed to compare image files ...");
        return false;
    }
}
What is the best way to do this?

To see if they are 98% or 99% similar, you have to compare all the elements instead of returning false at the first element where dbA.getElem(i) != dbB.getElem(i).
Try a counter:
int total = 0;
int is_similar = 0;
for (int i = 0; i < sizeA; i++) {
    total++;
    if (dbA.getElem(i) == dbB.getElem(i)) { // change it to ==
        is_similar++;
    }
}
//don't return anything yet
Then you can return true when is_similar / total is at least 98% or 99% (cast to double so you don't do integer division).
Edit: change sizeA to Math.min(sizeA, sizeB) if you also want to handle the case where the image sizes differ.
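Putting the pieces together, here is a minimal sketch of the modified method (the boolean return type, the 0.98 threshold and the Math.min handling of differing sizes are my additions; it reuses the same imports as the method above):
public boolean compareImage(File fileA, File fileB) {
    final double threshold = 0.98; // assumed tolerance: 98% of elements must match
    try {
        DataBuffer dbA = ImageIO.read(fileA).getData().getDataBuffer();
        DataBuffer dbB = ImageIO.read(fileB).getData().getDataBuffer();
        int size = Math.min(dbA.getSize(), dbB.getSize());
        if (size == 0) {
            return false;
        }
        int similar = 0;
        for (int i = 0; i < size; i++) {
            if (dbA.getElem(i) == dbB.getElem(i)) {
                similar++;
            }
        }
        return similar / (double) size >= threshold;
    } catch (Exception e) {
        System.out.println("Failed to compare image files ...");
        return false;
    }
}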

There are many approaches to image similarity, some of them requiring advanced AI algorithms. This Rosetta Code task offers a pixel-by-pixel colour-distance comparison (but it works for equal-sized images only). I am just copy-pasting the Java implementation from that link.
Note that if you wanted absolute pixel equality (rather than per-pixel colour distance), you could simply count the differing pixels and no distance computation would be required.
import java.awt.image.BufferedImage;
import javax.imageio.ImageIO;
import java.io.IOException;
import java.net.URL;

public class ImgDiffPercent {

    public static void main(String[] args) {
        BufferedImage img1 = null;
        BufferedImage img2 = null;
        try {
            URL url1 = new URL("http://rosettacode.org/mw/images/3/3c/Lenna50.jpg");
            URL url2 = new URL("http://rosettacode.org/mw/images/b/b6/Lenna100.jpg");
            img1 = ImageIO.read(url1);
            img2 = ImageIO.read(url2);
        } catch (IOException e) {
            e.printStackTrace();
        }

        int width1 = img1.getWidth(null);
        int width2 = img2.getWidth(null);
        int height1 = img1.getHeight(null);
        int height2 = img2.getHeight(null);
        if ((width1 != width2) || (height1 != height2)) {
            System.err.println("Error: Images dimensions mismatch");
            System.exit(1);
        }

        long diff = 0;
        for (int y = 0; y < height1; y++) {
            for (int x = 0; x < width1; x++) {
                int rgb1 = img1.getRGB(x, y);
                int rgb2 = img2.getRGB(x, y);
                int r1 = (rgb1 >> 16) & 0xff;
                int g1 = (rgb1 >> 8) & 0xff;
                int b1 = (rgb1) & 0xff;
                int r2 = (rgb2 >> 16) & 0xff;
                int g2 = (rgb2 >> 8) & 0xff;
                int b2 = (rgb2) & 0xff;
                diff += Math.abs(r1 - r2);
                diff += Math.abs(g1 - g2);
                diff += Math.abs(b1 - b2);
            }
        }
        double n = width1 * height1 * 3;
        double p = diff / n / 255.0;
        System.out.println("diff percent: " + (p * 100.0));
    }
}
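For the original question, the printed percentage maps directly onto a tolerance check; for example (the 2% cut-off is an assumption):
// p is the normalised difference computed above (0.0 = identical, 1.0 = completely different),
// so the images are at least 98% similar when the diff percent is at most 2.
boolean similarEnough = (p * 100.0) <= 2.0;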

Related

Why do I have to save my BufferedImage before comparison?

I'm trying to analyze an image-based 3-digit number captcha from an online resource. The numbers do not move at all. I use BufferedImage's getSubimage(...) method to extract each digit from the captcha. I have saved reference images (0-9) for each of the ones, tens and hundreds places (so 30 images in total).
I read the bytes of the online image into a byte[] and then create a BufferedImage object like this:
BufferedImage captcha = ImageIO.read(new ByteArrayInputStream(captchaBytes));
Then I compare this image to a list of images on my drive:
BufferedImage[] nums = new BufferedImage[10];
// Load images into the array here... The code is removed.
for (int i = 0; i < nums.length; i++) {
    double x;
    System.out.println(x = bufferedImagesEqualConfidence(nums[i], firstNumberImage));
    if (x > 0.98) {
        System.out.println("equal to image " + i + ".jpeg");
        isNewEntry = false;
        break;
    }
}
This is how I compare two images:
static double bufferedImagesEqualConfidence(BufferedImage img1, BufferedImage img2) {
    double difference = 0;
    int pixels = img1.getWidth() * img1.getHeight();
    if (img1.getWidth() == img2.getWidth() && img1.getHeight() == img2.getHeight()) {
        for (int x = 0; x < img1.getWidth(); x++) {
            for (int y = 0; y < img1.getHeight(); y++) {
                int rgbA = img1.getRGB(x, y);
                int rgbB = img2.getRGB(x, y);
                int redA = (rgbA >> 16) & 0xff;
                int greenA = (rgbA >> 8) & 0xff;
                int blueA = (rgbA) & 0xff;
                int redB = (rgbB >> 16) & 0xff;
                int greenB = (rgbB >> 8) & 0xff;
                int blueB = (rgbB) & 0xff;
                difference += Math.abs(redA - redB);
                difference += Math.abs(greenA - greenB);
                difference += Math.abs(blueA - blueB);
            }
        }
    } else {
        return 0.0;
    }
    return 1 - ((difference / (double) pixels) / 255.0);
}
The image is loaded completely from an HttpURLConnection object wrapped in my own HttpGet object, so I do byte[] captchaBytes = hg.readAndGetBytes();, which I know works because when I save the resulting BufferedImage captcha = ImageIO.read(new ByteArrayInputStream(captchaBytes));, it is written to my drive as a valid image.
However, even though the two images are actually the same, the result shows they are not similar at all. But when I first save the image I downloaded from the online resource, re-read it, and then compare, it shows they are equal. This is what I mean by saving and re-reading:
File temp = new File("temp.jpeg");
ImageIO.write(secondNumberImage, "jpeg", temp);
secondNumberImage = ImageIO.read(temp);
Image format: JPEG
I know this may have something to do with compression from ImageIO.write(...), but how can I make it so that I don't have to save the image?
The problem was within my bufferedImagesEqualConfidence method. Simply comparing RGB was not enough. I had to compare individual R/G/B values.
My initial bufferedImagesEqualConfidence that didn't work was:
static double bufferedImagesEqualConfidence(BufferedImage img1, BufferedImage img2) {
    int similarity = 0;
    int pixels = img1.getWidth() * img1.getHeight();
    if (img1.getWidth() == img2.getWidth() && img1.getHeight() == img2.getHeight()) {
        for (int x = 0; x < img1.getWidth(); x++) {
            for (int y = 0; y < img1.getHeight(); y++) {
                if (img1.getRGB(x, y) == img2.getRGB(x, y)) {
                    similarity++;
                }
            }
        }
    } else {
        return 0.0;
    }
    return similarity / (double) pixels;
}
(Source: Java Compare one BufferedImage to Another)
The bufferedImagesEqualConfidence that worked is:
static double bufferedImagesEqualConfidence(BufferedImage img1, BufferedImage img2) {
    double difference = 0;
    int pixels = img1.getWidth() * img1.getHeight();
    if (img1.getWidth() == img2.getWidth() && img1.getHeight() == img2.getHeight()) {
        for (int x = 0; x < img1.getWidth(); x++) {
            for (int y = 0; y < img1.getHeight(); y++) {
                int rgbA = img1.getRGB(x, y);
                int rgbB = img2.getRGB(x, y);
                int redA = (rgbA >> 16) & 0xff;
                int greenA = (rgbA >> 8) & 0xff;
                int blueA = (rgbA) & 0xff;
                int redB = (rgbB >> 16) & 0xff;
                int greenB = (rgbB >> 8) & 0xff;
                int blueB = (rgbB) & 0xff;
                difference += Math.abs(redA - redB);
                difference += Math.abs(greenA - greenB);
                difference += Math.abs(blueA - blueB);
            }
        }
    } else {
        return 0.0;
    }
    return 1 - ((difference / (double) pixels) / 255.0);
}
(Source: Image Processing in Java)
It seems that to measure similarity between two images you have to compare the individual R/G/B values of each pixel rather than test the packed RGB value for exact equality.
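As a side note, here is a variant of the return statement from the working method that normalises the sum per channel, so the confidence stays strictly between 0.0 and 1.0 (dividing by pixels * 3 is my adjustment, not part of the code above):
// difference accumulates |dR| + |dG| + |dB| over all pixels,
// so normalise by (number of pixels) * (3 channels) * (255 max per channel).
return 1.0 - (difference / (pixels * 3.0 * 255.0));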

org.eclipse.swt.SWTException: Unsupported color depth

I have created a sample SWT application into which I upload a few images. I have to resize all images larger than 16x16 (width x height) and save them to a separate location.
For this I scale each image and save the scaled image to the destination location. Below is the piece of code I am using to do that.
I use getImageData() to get the image data, and the ImageLoader save() method to save it.
final Image mySampleImage = ImageResizer.scaleImage(img, 16, 16);
final ImageLoader imageLoader = new ImageLoader();
imageLoader.data = new ImageData[] { mySampleImage.getImageData() };
final String fileExtension = inputImagePath.substring(inputImagePath.lastIndexOf(".") + 1);
if ("GIF".equalsIgnoreCase(fileExtension)) {
    imageLoader.save(outputImagePath, SWT.IMAGE_GIF);
} else if ("PNG".equalsIgnoreCase(fileExtension)) {
    imageLoader.save(outputImagePath, SWT.IMAGE_PNG);
}
imageLoader.save(outputImagePath, SWT.IMAGE_GIF); throws the exception below when I try to save a few specific images (GIF or PNG format).
org.eclipse.swt.SWTException: Unsupported color depth
at org.eclipse.swt.SWT.error(SWT.java:4533)
at org.eclipse.swt.SWT.error(SWT.java:4448)
at org.eclipse.swt.SWT.error(SWT.java:4419)
at org.eclipse.swt.internal.image.GIFFileFormat.unloadIntoByteStream(GIFFileFormat.java:427)
at org.eclipse.swt.internal.image.FileFormat.unloadIntoStream(FileFormat.java:124)
at org.eclipse.swt.internal.image.FileFormat.save(FileFormat.java:112)
at org.eclipse.swt.graphics.ImageLoader.save(ImageLoader.java:218)
at org.eclipse.swt.graphics.ImageLoader.save(ImageLoader.java:259)
at mainpackage.ImageResizer.resize(ImageResizer.java:55)
at mainpackage.ImageResizer.main(ImageResizer.java:110)
Let me know if there is any other way to do the same, or any way to resolve this issue.
I finally got a solution by referring to this existing Eclipse bug: Unsupported color depth.
In the code below I create a PaletteData with RGB values and update the ImageData.
My updateImagedata() method takes the scaled image and returns the properly updated ImageData when the image depth is 32 or more.
private static ImageData updateImagedata(Image image) {
    ImageData data = image.getImageData();
    if (!data.palette.isDirect && data.depth <= 8)
        return data;

    // compute a histogram of color frequencies
    HashMap<RGB, ColorCounter> freq = new HashMap<>();
    int width = data.width;
    int[] pixels = new int[width];
    int[] maskPixels = new int[width];
    for (int y = 0, height = data.height; y < height; ++y) {
        data.getPixels(0, y, width, pixels, 0);
        for (int x = 0; x < width; ++x) {
            RGB rgb = data.palette.getRGB(pixels[x]);
            ColorCounter counter = (ColorCounter) freq.get(rgb);
            if (counter == null) {
                counter = new ColorCounter();
                counter.rgb = rgb;
                freq.put(rgb, counter);
            }
            counter.count++;
        }
    }

    // sort colors by most frequently used
    ColorCounter[] counters = new ColorCounter[freq.size()];
    freq.values().toArray(counters);
    Arrays.sort(counters);

    // pick the most frequently used 256 (or fewer), and make a palette
    ImageData mask = null;
    if (data.transparentPixel != -1 || data.maskData != null) {
        mask = data.getTransparencyMask();
    }
    int n = Math.min(256, freq.size());
    RGB[] rgbs = new RGB[n + (mask != null ? 1 : 0)];
    for (int i = 0; i < n; ++i)
        rgbs[i] = counters[i].rgb;
    if (mask != null) {
        rgbs[rgbs.length - 1] = data.transparentPixel != -1
                ? data.palette.getRGB(data.transparentPixel)
                : new RGB(255, 255, 255);
    }
    PaletteData palette = new PaletteData(rgbs);

    ImageData newData = new ImageData(width, data.height, 8, palette);
    if (mask != null)
        newData.transparentPixel = rgbs.length - 1;
    for (int y = 0, height = data.height; y < height; ++y) {
        data.getPixels(0, y, width, pixels, 0);
        if (mask != null)
            mask.getPixels(0, y, width, maskPixels, 0);
        for (int x = 0; x < width; ++x) {
            if (mask != null && maskPixels[x] == 0) {
                pixels[x] = rgbs.length - 1;
            } else {
                RGB rgb = data.palette.getRGB(pixels[x]);
                pixels[x] = closest(rgbs, n, rgb);
            }
        }
        newData.setPixels(0, y, width, pixels, 0);
    }
    return newData;
}
To find the index of the closest palette colour:
static int closest(RGB[] rgbs, int n, RGB rgb) {
    int minDist = 256 * 256 * 3;
    int minIndex = 0;
    for (int i = 0; i < n; ++i) {
        RGB rgb2 = rgbs[i];
        int da = rgb2.red - rgb.red;
        int dg = rgb2.green - rgb.green;
        int db = rgb2.blue - rgb.blue;
        int dist = da * da + dg * dg + db * db;
        if (dist < minDist) {
            minDist = dist;
            minIndex = i;
        }
    }
    return minIndex;
}
The ColorCounter class:
class ColorCounter implements Comparable<ColorCounter> {
    RGB rgb;
    int count;

    public int compareTo(ColorCounter o) {
        return o.count - count;
    }
}
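For completeness, here is a minimal sketch of how the quantised ImageData could be plugged back into the save path from the question (the wiring is my assumption; ImageResizer and the path variables come from the snippets above):
final Image mySampleImage = ImageResizer.scaleImage(img, 16, 16);
final ImageLoader imageLoader = new ImageLoader();
// Reduce to an 8-bit indexed palette first so the GIF writer does not throw "Unsupported color depth".
imageLoader.data = new ImageData[] { updateImagedata(mySampleImage) };
imageLoader.save(outputImagePath, SWT.IMAGE_GIF);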

How to extract pictogram using boofcv?

I have problems extracting a pictogram into some further processable format; this is what I have got so far:
Part of the current solution is taken from the BoofCV image thresholding example. My code for this solution is the following:
import georegression.metric.UtilAngle;
import java.awt.Color;
import java.awt.event.MouseAdapter;
import java.awt.event.MouseEvent;
import java.awt.image.BufferedImage;
import java.awt.image.WritableRaster;
import java.io.IOException;
import boofcv.alg.color.ColorHsv;
import boofcv.alg.filter.binary.BinaryImageOps;
import boofcv.alg.filter.binary.GThresholdImageOps;
import boofcv.alg.filter.binary.ThresholdImageOps;
import boofcv.gui.ListDisplayPanel;
import boofcv.gui.binary.VisualizeBinaryData;
import boofcv.gui.image.ImagePanel;
import boofcv.gui.image.ShowImages;
import boofcv.io.image.ConvertBufferedImage;
import boofcv.io.image.UtilImageIO;
import boofcv.struct.image.ImageFloat32;
import boofcv.struct.image.ImageUInt8;
import boofcv.struct.image.MultiSpectral;
public class Binaryzation {
static double splitFraction = 0.05;
static double minimumSideFraction = 0.1;
static ListDisplayPanel gui = new ListDisplayPanel();
public static void printClickedColor(final BufferedImage image) {
ImagePanel gui = new ImagePanel(image);
gui.addMouseListener(new MouseAdapter() {
@Override
public void mouseClicked(MouseEvent e) {
float[] color = new float[3];
int rgb = image.getRGB(e.getX(), e.getY());
ColorHsv.rgbToHsv((rgb >> 16) & 0xFF, (rgb >> 8) & 0xFF,
rgb & 0xFF, color);
System.out.println("H = " + color[0] + " S = " + color[1]
+ " V = " + color[2]);
try {
showSelectedColor("Selected", image, color[0], color[1]);
} catch (IOException e1) {
// TODO Auto-generated catch block
e1.printStackTrace();
}
}
});
ShowImages.showWindow(gui, "Color Selector");
}
public static void showSelectedColor(String name, BufferedImage image,
float hue, float saturation) throws IOException {
ImageUInt8 binary = binaryTreshold(name, image, hue, saturation);
// MAGIC HAPPENS - removing small objects
ImageUInt8 filtered = BinaryImageOps.erode4(binary, 1, null);
filtered = BinaryImageOps.dilate8(filtered, 1, null);
filtered = BinaryImageOps.removePointNoise(filtered, filtered);
ShowImages.showWindow(filtered, "Binary " + name);
BufferedImage visualFiltered1 = VisualizeBinaryData.renderBinary(
filtered, true, null);
ShowImages.showWindow(visualFiltered1, "Mask");
BufferedImage visualFiltered12 = Fill.fill(visualFiltered1);
ShowImages.showWindow(visualFiltered12, "Filled Mask");
ImageUInt8 mask = ConvertBufferedImage.convertFromSingle(
visualFiltered12, null, ImageUInt8.class);
ImageUInt8 wynik = new ImageUInt8(mask.width, mask.height);
//subtraction of images: wynik=mask-filtered;
int min = 0;
int max = 1;
for (int i = 0; i < mask.height; i++) {
// System.out.println("i=" + i);
for (int j = 0; j < mask.width; j++) {
// System.out.println("j=" + j);
if (filtered.get(j, i) < min)
min = filtered.get(j, i);
if (filtered.get(j, i) > max)
max = filtered.get(j, i);
int filtInt = filtered.get(j, i);
if (filtInt >= 1)
filtInt = 1;
else if (filtInt < 1)
filtInt = 0;
int maskInt = mask.get(j, i);
if (maskInt >= 1)
maskInt = 0;
else if (maskInt < 1)
maskInt = 1;
int diff = maskInt - filtInt;
if (diff == 1) {
diff = 255;
wynik.set(j, i, diff);
} else if (diff == 0) {
diff = 0;
wynik.set(j, i, diff);
} else {
diff = 255;
wynik.set(j, i, diff);
}
}
}
ShowImages.showWindow(wynik, "Wynik=Mask-Filtered");
wynik = BinaryImageOps.erode4(wynik, 1, null);
wynik = BinaryImageOps.dilate8(wynik, 1, null);
wynik = BinaryImageOps.removePointNoise(wynik, wynik);
UtilImageIO.saveImage(wynik, "C:/dev/zdjeciaTestowe/wynik.jpg");
ShowImages.showWindow(wynik, "Wynik=Mask-Filtered After noise remove");
}
private static ImageUInt8 binaryTreshold(String name, BufferedImage image,
float hue, float saturation) throws IOException {
MultiSpectral<ImageFloat32> input = ConvertBufferedImage
.convertFromMulti(image, null, true, ImageFloat32.class);
MultiSpectral<ImageFloat32> hsv = input.createSameShape();
// Convert into HSV
ColorHsv.rgbToHsv_F32(input, hsv);
// Euclidean distance squared threshold for deciding which pixels are
// members of the selected set
float maxDist2 = 0.4f * 0.4f;
// Extract hue and saturation bands which are independent of intensity
ImageFloat32 H = hsv.getBand(0);
ImageFloat32 S = hsv.getBand(1);
// Adjust the relative importance of Hue and Saturation.
// Hue has a range of 0 to 2*PI and Saturation from 0 to 1.
float adjustUnits = (float) (Math.PI / 2.0);
// step through each pixel and mark how close it is to the selected
// color
BufferedImage output = new BufferedImage(input.width, input.height,
BufferedImage.TYPE_INT_RGB);
for (int y = 0; y < hsv.height; y++) {
for (int x = 0; x < hsv.width; x++) {
// Hue is an angle in radians, so simple subtraction doesn't
// work
float dh = UtilAngle.dist(H.unsafe_get(x, y), hue);
float ds = (S.unsafe_get(x, y) - saturation) * adjustUnits;
// this distance measure is a bit naive, but good enough to
// demonstrate the concept
float dist2 = dh * dh + ds * ds;
if (dist2 <= maxDist2) {
System.out.println(image.getRGB(x, y));
output.setRGB(x, y, image.getRGB(x, y));
}
}
}
ImageFloat32 output1 = ConvertBufferedImage.convertFromSingle(output,
null, ImageFloat32.class);
ImageUInt8 binary = new ImageUInt8(input.width, input.height);
double threshold = GThresholdImageOps.computeOtsu(output1, 0, 255);
// Apply the threshold to create a binary image
ThresholdImageOps.threshold(output1, binary, (float) threshold, true);
return binary;
}
public static void main(String args[]) throws IOException {
BufferedImage image = UtilImageIO
.loadImage("C:/dev/zdjeciaTestowe/images.jpg");
// Let the user select a color
printClickedColor(image);
// Display pre-selected colors
showSelectedColor("Yellow", image, 1f, 1f);
}
}
import java.awt.image.BufferedImage;
import boofcv.struct.image.ImageUInt8;
public class Fill {
private static final int BLACK = -16777216;
private static final int WHITE = -1;
/**
* @param input Buffered image
* @return image with filled holes
*/
public static BufferedImage fill(BufferedImage input) {
int width = input.getWidth();
int height = input.getHeight();
BufferedImage output=new BufferedImage(width, height, input.getType());
for (int i = 0; i < height; i++) {
// System.out.println("i=" + i);
for (int j = 0; j < width; j++) {
// System.out.println("j=" + j);
if (input.getRGB(j, i) == WHITE) {
output.setRGB(j, i, WHITE);
} else if (isPreviusWhite(j, i, input)
&& isAnotherWhiteInLine(j, i, input)) {
output.setRGB(j, i, WHITE);
}
}
}
return output;
}
private static boolean isPreviusWhite(int i, int i2, BufferedImage input) {
boolean condition = false;
while (1 < i2) {
if (input.getRGB(i, i2) == WHITE)
return true;
i2--;
}
return condition;
}
private static boolean isAnotherWhiteInLine(int j, int i,
BufferedImage input) {
boolean condition = false;
while (j < input.getWidth()) {
if (input.getRGB(j, i) == WHITE)
return true;
j++;
}
return condition;
}
}
I know how to extract a pictogram from a sign, and I have done it by subtracting the mask from the filled mask, but I have a problem obtaining a processable result, because in the end I get a grayscale image rather than a binary image (or, as BoofCV calls it, ImageUInt8).
How do I properly subtract two images in ImageUInt8 format so that the result is also ImageUInt8?
Today I wrote a further part of the algorithm, and the problem I want to ask about is now clearer. The added code starts at // subtraction of images: wynik=mask-filtered;, and two additional pictures show the intermediate results.
The problem is that the last image, after noise removal, is solid black and contains no information. How do I correctly convert the image to obtain processable content? What am I doing wrong?
I have found a solution to my problem: in the last picture, "Wynik=Mask-Filtered After noise remove", there is a pictogram, but the difference between grayscale pixel values is so small that it is hard to see. The fix is adding:
GrayImageOps.stretch(wynik, 125, 125, 255, wynik);
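To answer the explicit ImageUInt8 question, here is a minimal sketch of a pixel-wise subtraction that stays in ImageUInt8 and yields a clean 0/255 binary result. It is my own illustration rather than part of the original code, and whether the mask needs inverting first depends on how it was rendered:
// wynik = mask - filtered, treating any non-zero pixel as "set".
static ImageUInt8 subtractBinary(ImageUInt8 mask, ImageUInt8 filtered) {
    ImageUInt8 out = new ImageUInt8(mask.width, mask.height);
    for (int y = 0; y < mask.height; y++) {
        for (int x = 0; x < mask.width; x++) {
            boolean m = mask.get(x, y) != 0;
            boolean f = filtered.get(x, y) != 0;
            out.set(x, y, (m && !f) ? 255 : 0); // result stays an ImageUInt8
        }
    }
    return out;
}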

java : get differences between two images

I am trying to get the difference between two images (same size). I found this code:
BufferedImage img1 = null;
BufferedImage img2 = null;
try {
    URL url1 = new URL("http://rosettacode.org/mw/images/3/3c/Lenna50.jpg");
    URL url2 = new URL("http://rosettacode.org/mw/images/b/b6/Lenna100.jpg");
    img1 = ImageIO.read(url1);
    img2 = ImageIO.read(url2);
} catch (IOException e) {
    e.printStackTrace();
}
int width1 = img1.getWidth(null);
int width2 = img2.getWidth(null);
int height1 = img1.getHeight(null);
int height2 = img2.getHeight(null);
if ((width1 != width2) || (height1 != height2)) {
    System.err.println("Error: Images dimensions mismatch");
    System.exit(1);
}
long diff = 0;
for (int i = 0; i < height1; i++) {
    for (int j = 0; j < width1; j++) {
        int rgb1 = img1.getRGB(i, j);
        int rgb2 = img2.getRGB(i, j);
        int r1 = (rgb1 >> 16) & 0xff;
        int g1 = (rgb1 >> 8) & 0xff;
        int b1 = (rgb1) & 0xff;
        int r2 = (rgb2 >> 16) & 0xff;
        int g2 = (rgb2 >> 8) & 0xff;
        int b2 = (rgb2) & 0xff;
        diff += Math.abs(r1 - r2);
        diff += Math.abs(g1 - g2);
        diff += Math.abs(b1 - b2);
    }
}
double n = width1 * height1 * 3;
double p = diff / n / 255.0;
System.out.println("diff percent: " + (p * 100.0));
It works fine for the two images given in the URLs, but when I changed the images I get this exception:
Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: Coordinate out of bounds!
at sun.awt.image.ByteInterleavedRaster.getDataElements(ByteInterleavedRaster.java:299)
at java.awt.image.BufferedImage.getRGB(BufferedImage.java:871)
at Main.main(Main.java:77)
I changed the code to:
File sourceimage1 = new File("C:\\lo.jpg");
File sourceimage2 = new File("C:\\lo1.jpg");
img1 = ImageIO.read(sourceimage1);
img2 = ImageIO.read(sourceimage2);
The two images are black and white, and their dimensions are smaller than the two previous images (Lenna50 and Lenna100). lo.jpg and lo1.jpg are the same image, used to test the algorithm, and they are also black and white.
How can I change the code to make it work for any image dimensions?
Swap the i and j arguments in the getRGB calls, as shown below:
int rgb1 = img1.getRGB(j, i);
int rgb2 = img2.getRGB(j, i);
Your error clearly says that the call img1.getRGB(i, j); is going out of bounds of the image raster. Check the values of i and j inside your inner for loop and verify whether you are doing something wrong. As Hirak already pointed out, you are probably not passing the loop variables in the right order, which is why the index runs past the image's height or width.
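To make the fix concrete, here is a sketch of the corrected inner loop: getRGB expects (x, y), i.e. column first, so pass the width index j before the height index i (variable names taken from the question's code):
for (int i = 0; i < height1; i++) {      // i walks rows (y)
    for (int j = 0; j < width1; j++) {   // j walks columns (x)
        int rgb1 = img1.getRGB(j, i);    // getRGB(x, y): column first, then row
        int rgb2 = img2.getRGB(j, i);
        // ... accumulate the per-channel differences as before ...
    }
}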

Camera.FaceDetectionListener is not working

I am using an Android phone running Ice Cream Sandwich, and I am checking the face-detection behaviour, referring to the sample code available from Google.
While debugging on my phone, getMaxNumDetectedFaces() returns 3, so face detection is supported on this device.
However, the following code is not working.
public void onFaceDetection(Face[] faces, Camera face_camera1) {
    // TODO Auto-generated method stub
    if (faces.length > 0) {
        Log.d("FaceDetection", "face detected: " + faces.length
                + " Face 1 location X: " + faces[0].rect.centerX()
                + " Y: " + faces[0].rect.centerY());
    }
}
Here faces.length keeps returning zero. Please give some suggestions to solve this.
I worked with face detection some time ago. Back then onFaceDetection didn't work for me either, so I found another way to do it.
I worked with PreviewCallback: this callback receives each preview frame, and you can use it to recognize faces. The only problem is the format; the default is NV21, and you can change it with setPreviewFormat(int), but that didn't work for me either, so I had to do the conversion myself to get a Bitmap that the FaceDetector accepts. Here is my code:
public PreviewCallback mPreviewCallback = new PreviewCallback(){
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
Camera.Size size = camera.getParameters().getPreviewSize();
Bitmap mfoto_imm = this.getBitmapFromNV21(data, size.width, size.height, true); //here I get the Bitmap from getBitmapFromNV21 that is the conversion method
Bitmap mfoto= mfoto_imm.copy(Bitmap.Config.RGB_565, true);
imagen.setImageBitmap(mfoto);
int alto= mfoto.getHeight();
int ancho= mfoto.getWidth();
int count;
canvas= new Canvas(mfoto);
dibujo.setColor(Color.GREEN);
dibujo.setAntiAlias(true);
dibujo.setStrokeWidth(8);
canvas.drawBitmap(mfoto, matrix, dibujo);
FaceDetector mface= new FaceDetector(ancho,alto,1);
FaceDetector.Face [] face= new FaceDetector.Face[1];
count = mface.findFaces(mfoto, face);
PointF midpoint = new PointF();
int fpx = 0;
int fpy = 0;
if (count > 0) {
face[count-1].getMidPoint(midpoint); // you have to take the last one, i.e. index count - 1
fpx= (int)midpoint.x; // middle pint of the face in x.
fpy= (int)midpoint.y; // middle point of the face in y.
}
canvas.drawCircle(fpx, fpy, 10, dibujo); // here I draw a circle on the middle of the face
imagen.invalidate();
}
}
and here are the conversion methods.
public Bitmap getBitmapFromNV21(byte[] data, int width, int height, boolean rotated) {
Bitmap bitmap = null;
int[] pixels = new int[width * height];
// Convert the array
this.yuv2rgb(pixels, data, width, height, rotated);
if(rotated)
{
bitmap = Bitmap.createBitmap(pixels, height, width, Bitmap.Config.RGB_565);
}
else
{
bitmap = Bitmap.createBitmap(pixels, width, height, Bitmap.Config.RGB_565);
}
return bitmap;
}
public void yuv2rgb(int[] out, byte[] in, int width, int height, boolean rotated)
throws NullPointerException, IllegalArgumentException
{
final int size = width * height;
if(out == null) throw new NullPointerException("buffer 'out' == null");
if(out.length < size) throw new IllegalArgumentException("buffer 'out' length < " + size);
if(in == null) throw new NullPointerException("buffer 'in' == null");
if(in.length < (size * 3 / 2)) throw new IllegalArgumentException("buffer 'in' length != " + in.length + " < " + (size * 3/ 2));
// YCrCb
int Y, Cr = 0, Cb = 0;
int Rn = 0, Gn = 0, Bn = 0;
for(int j = 0, pixPtr = 0, cOff0 = size - width; j < height; j++) {
if((j & 0x1) == 0)
cOff0 += width;
int pixPos = height - 1 - j;
for(int i = 0, cOff = cOff0; i < width; i++, cOff++, pixPtr++, pixPos += height) {
// Get Y
Y = 0xff & in[pixPtr]; // 0xff to mask off the sign
// Get Cr and Cb
if((pixPtr & 0x1) == 0) {
Cr = in[cOff];
if(Cr < 0) Cr += 127; else Cr -= 128;
Cb = in[cOff + 1];
if(Cb < 0) Cb += 127; else Cb -= 128;
Bn = Cb + (Cb >> 1) + (Cb >> 2) + (Cb >> 6);
Gn = - (Cb >> 2) + (Cb >> 4) + (Cb >> 5) - (Cr >> 1) + (Cr >> 3) + (Cr >> 4) + (Cr >> 5);
Rn = Cr + (Cr >> 2) + (Cr >> 3) + (Cr >> 5);
}
int R = Y + Rn;
if(R < 0) R = 0; else if(R > 255) R = 255;
int B = Y + Bn;
if(B < 0) B = 0; else if(B > 255) B = 255;
int G = Y + Gn;
if(G < 0) G = 0; else if(G > 255) G = 255; // At this point the code could apply some filter to the separate components of the image, for example swap two components or remove one
int rgb = 0xff000000 | (R << 16) | (G << 8) | B; // Depending on the 'rotated' flag the output buffer is filled with or without the rotation applied
if(rotated)
out[pixPos] = rgb;
else
out[pixPtr] = rgb;
}
}
}
};
setPreviewFormat(int) doesn't work on some devices, but maybe you can try to create the Bitmap without using the conversion.
I hope this helps you.
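As a minimal sketch of how that callback might be wired up (the answer does not show this part, so the field name mCamera and the setup order are assumptions):
// Register the preview callback so onPreviewFrame receives NV21 frames.
Camera mCamera = Camera.open();
Camera.Parameters params = mCamera.getParameters();
params.setPreviewFormat(ImageFormat.NV21); // NV21 is the default preview format on most devices
mCamera.setParameters(params);
mCamera.setPreviewCallback(mPreviewCallback);
// A preview surface or texture must also be set before starting the preview.
mCamera.startPreview();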
