OpenCV/JavaCV Android Face Detection Initialization - java

I am working on a face detection problem. I have working code that uses Android's FaceDetector to find faces, but I need to figure out a way to implement OpenCV/JavaCV functions to detect faces instead. This does not use a live camera; it uses an image from the gallery, and I am able to retrieve that image's path. However, I can't seem to get the CvHaarClassifierCascade classifier and the CvMemStorage storage initialized. Could anyone point me in the right direction or provide some source code that initializes these variables correctly in Java?
Thanks

You could do it like this: just provide a BufferedImage.
Alternatively, load the IplImage directly from the image path using cvLoadImage(..); a short sketch of that follows the code below.
// Assumes JavaCV 0.x (code.google.com/p/javacv): com.googlecode.javacpp.Loader
// plus static imports of opencv_core, opencv_imgproc and opencv_objdetect.
// provide a BufferedImage
BufferedImage image;
// Preload the opencv_objdetect module to work around a known bug.
Loader.load(opencv_objdetect.class);
// Path to the cascade file provided by OpenCV
String cascade = "../haarcascade_frontalface_alt2.xml";
CvHaarClassifierCascade cvCascade = new CvHaarClassifierCascade(cvLoad(cascade));
// create storage for face detection
CvMemStorage tempStorage = CvMemStorage.create();
// create IplImage from BufferedImage
IplImage original = IplImage.createFrom(image);
IplImage grayImage = null;
if (original.nChannels() >= 3) {
    // We need a grayscale image in order to do the detection, so we
    // create a new image of the same size as the original one.
    grayImage = IplImage.create(image.getWidth(), image.getHeight(),
            IPL_DEPTH_8U, 1);
    // We convert the original image to grayscale.
    cvCvtColor(original, grayImage, CV_BGR2GRAY);
} else {
    grayImage = original.clone();
}
// We detect the faces with some default params
CvSeq faces = cvHaarDetectObjects(grayImage, cvCascade,
        tempStorage, 1.1, 3,
        0);
// Get face rectangles
CvRect[] fArray = new CvRect[faces.total()];
for (int i = 0; i < faces.total(); i++) {
    fArray[i] = new CvRect(cvGetSeqElem(faces, i));
}
// print them out
for (CvRect f : fArray) {
    System.out.println("x: " + f.x() + ", y: " + f.y()
            + ", width: " + f.width() + ", height: " + f.height());
}
tempStorage.release();
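If you load from a path instead of a BufferedImage, a minimal sketch (the path is a placeholder; cvLoadImage comes from opencv_highgui in JavaCV 0.x):
// Load the image straight from disk instead of converting a BufferedImage.
IplImage original = cvLoadImage("/path/to/gallery/image.jpg");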

The class definitions are basically ports to Java of the original header files in C, plus the missing functionality exposed only by the C++ API of OpenCV. You can refer to these links:
http://code.google.com/p/javacv/
http://geekoverdose.wordpress.com/tag/opencv-javacv-android-haarcascade-face-detection/

Related

How to check whether human is straight looking or not using Java with opencv

I'm using Java with OpenCV/JavaCV for an image processing project. The image is not taken from the camera; I'm loading it as below.
Mat image = Imgcodecs.imread("E:/resources/PPHOTOO/test.jpg");
Using Haar cascades, I'm cropping the face image too.
CascadeClassifier faceDetector = new CascadeClassifier("C:\\opencv\\haarcascades\\haarcascade_frontalface_alt.xml");
String faces;
MatOfRect faceDetections = new MatOfRect();
// detectMultiScale must run before iterating the detections,
// otherwise faceDetections stays empty
faceDetector.detectMultiScale(image, faceDetections);
Mat face;
Mat crop = null;
for (int i = 0; i < faceDetections.toArray().length; i++) {
    faces = "Face" + i + ".png";
    face = image.submat(faceDetections.toArray()[i]);
    crop = face.submat(4, (2 * face.width()) / 3, 0, face.height());
    Imgcodecs.imwrite(faces, face);
}
I want to find whether the person is looking straight or not, i.e. not a side face. I need to know how to do this part.
straight looking image
side face image
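The thread does not include an answer; purely as a sketch of one common heuristic (my assumption, not from the original post), you could run OpenCV's bundled profile-face cascade alongside the frontal one and treat a frontal hit without a profile hit as a straight-looking face:
// Hypothetical check: haarcascade_profileface.xml ships in OpenCV's data folder.
CascadeClassifier profileDetector =
        new CascadeClassifier("C:\\opencv\\haarcascades\\haarcascade_profileface.xml");
MatOfRect profileDetections = new MatOfRect();
profileDetector.detectMultiScale(image, profileDetections);
// Frontal cascade fired but profile cascade found nothing:
// the face is more likely a straight-on view.
boolean straightLooking = faceDetections.toArray().length > 0
        && profileDetections.toArray().length == 0;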

Java and haarcascade face and mouth detection - mouth as the nose

Today I began to test a project which detects a smile in Java and OpenCV. For face and mouth recognition the project uses haarcascade_frontalface_alt and haarcascade_mcs_mouth, but I don't understand why in some cases the project detects the nose as a mouth.
I have two methods:
private ArrayList<Mat> detectMouth(String filename) {
    int i = 0;
    ArrayList<Mat> mouths = new ArrayList<Mat>();
    // reading image in grayscale from the given path
    image = Highgui.imread(filename, Highgui.CV_LOAD_IMAGE_GRAYSCALE);
    MatOfRect faceDetections = new MatOfRect();
    // detecting face(s) on given image and saving them to MatOfRect object
    faceDetector.detectMultiScale(image, faceDetections);
    System.out.println(String.format("Detected %s faces", faceDetections.toArray().length));
    MatOfRect mouthDetections = new MatOfRect();
    // detecting mouth(s) on given image and saving them to MatOfRect object
    mouthDetector.detectMultiScale(image, mouthDetections);
    System.out.println(String.format("Detected %s mouths", mouthDetections.toArray().length));
    for (Rect face : faceDetections.toArray()) {
        Mat outFace = image.submat(face);
        // saving cropped face to picture
        Highgui.imwrite("face" + i + ".png", outFace);
        for (Rect mouth : mouthDetections.toArray()) {
            // trying to find the right mouth:
            // the mouth is in the lower 2/5 of the face,
            // the lower edge of the mouth is above the bottom of the face,
            // and the horizontal center of the mouth is near the center of the face
            // (note: Math.abs must wrap the whole difference, not just the mouth center)
            if (mouth.y > face.y + face.height * 3 / 5 && mouth.y + mouth.height < face.y + face.height
                    && Math.abs((mouth.x + mouth.width / 2) - (face.x + face.width / 2)) < face.width / 10) {
                Mat outMouth = image.submat(mouth);
                // resizing mouth to the unified size of trainSize
                Imgproc.resize(outMouth, outMouth, trainSize);
                mouths.add(outMouth);
                // saving mouth to picture
                Highgui.imwrite("mouth" + i + ".png", outMouth);
                i++;
            }
        }
    }
    return mouths;
}
and detect smile
private void detectSmile(ArrayList<Mat> mouths) {
    trainSVM();
    CvSVMParams params = new CvSVMParams();
    // set linear kernel (no mapping, regression is done in the original feature space)
    params.set_kernel_type(CvSVM.LINEAR);
    // train SVM with images in trainingImages, labels in trainingLabels, given params with empty samples
    clasificador = new CvSVM(trainingImages, trainingLabels, new Mat(), new Mat(), params);
    // save generated SVM to file, so we can see what it generated
    clasificador.save("svm.xml");
    // loading previously saved file
    clasificador.load("svm.xml");
    // returning, if there aren't any samples
    if (mouths.isEmpty()) {
        System.out.println("No mouth detected");
        return;
    }
    for (Mat mouth : mouths) {
        Mat out = new Mat();
        // converting to 32 bit floating point in gray scale
        mouth.convertTo(out, CvType.CV_32FC1);
        if (clasificador.predict(out.reshape(1, 1)) == 1.0) {
            System.out.println("Detected happy face");
        } else {
            System.out.println("Detected not a happy face");
        }
    }
}
Examples:
For this picture it correctly detects the mouth:
but in another picture the nose is detected:
What's the problem, in your opinion?
Most likely it detects it wrong in your picture because of the proportions of the face (the distance from eyes to mouth is too long compared to the distance between the eyes). Detection of the mouth and nose using a Haar detector isn't very stable, so algorithms usually use a geometric model of the face to choose the best combination of feature candidates for each facial feature. Some implementations can even try to predict the mouth position based on the eyes if no mouth candidate was found.
The Haar detector isn't the newest or best-known option for feature detection at this time. Try deformable part model implementations instead. For example, this one has MATLAB code with efficient C++-optimized functions:
https://www.ics.uci.edu/~xzhu/face/
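As a minimal sketch of the geometric-model idea described above (my own illustration, assuming org.opencv.core.Rect and java.util.List from the question's code): among the candidates that pass the position checks, prefer the lowest one, since the nose sits above the mouth.
// Pick the lowest centered candidate in the lower part of the face;
// the nose, when misdetected as a mouth, tends to sit higher up.
private Rect pickBestMouth(Rect face, List<Rect> candidates) {
    Rect best = null;
    for (Rect mouth : candidates) {
        boolean inLowerFace = mouth.y > face.y + face.height * 3 / 5
                && mouth.y + mouth.height < face.y + face.height;
        boolean centered = Math.abs((mouth.x + mouth.width / 2)
                - (face.x + face.width / 2)) < face.width / 10;
        if (inLowerFace && centered && (best == null || mouth.y > best.y)) {
            best = mouth;
        }
    }
    return best; // null if nothing passed the geometric checks
}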

Exposure Fusion returns blue image with OpenCV on Android

I'm trying to implement exposure fusion with OpenCV 3.0.0 on Android using the MergeMertens class.
The problem is that the image is returned in blue. Here is a screenshot of how it looks: http://take.ms/agYSD
I suppose the problem is with the RGB/BGR representation of the files. I tried to convert from RGB to BGR and vice versa before and after applying the merger; either way I got the color problem.
If I use a grayscale image then everything is all right.
Here is the code that I'm using:
public void process(String[] InFiles, float[] InTimes, String OutImage) {
    List<Mat> images = new ArrayList<Mat>();
    String path = Environment.getExternalStorageDirectory().toString() + "/" + _App.getPackageName() + "/";
    Mat hdrImage = new Mat();
    Mat ldrImage = new Mat();
    Mat times = new MatOfFloat(InTimes);
    for (int i = 0; i < InFiles.length; i++) {
        Mat m = Imgcodecs.imread(path + InFiles[i]);
        images.add(m);
    }
    Photo.createMergeMertens().process(images, hdrImage);
    Core.multiply(hdrImage, new Scalar(255.0), ldrImage);
    Imgcodecs.imwrite(path + OutImage, ldrImage);
}
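A likely culprit, though this is my assumption rather than something stated in the original thread: Core.multiply with a Scalar applies the scalar per channel, and channels you don't specify default to 0. With new Scalar(255.0) only the first channel (blue, in OpenCV's BGR order) survives the multiplication, which would produce exactly this all-blue output. Scaling all three channels avoids it:
// Scale every BGR channel; a single-value Scalar would zero green and red.
Core.multiply(hdrImage, new Scalar(255.0, 255.0, 255.0), ldrImage);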

Converting IplImage to BufferedImage to integrate

I'm making my own image processing application that operates entirely on BufferedImage.
Now I have stumbled upon face detection code in a blog on OpenShift.com.
I want to integrate that code into my own GUI application, but I'm facing problems: in the face detector code the image is an instance of IplImage, so I first need to convert the BufferedImage to an IplImage so that the method accepts the converted image.
Please help.
I am leaving the face detector code below.
public class FaceDetection {
    // Assumes JavaCV 0.x static imports of opencv_core, opencv_highgui
    // and opencv_objdetect.

    // Load Haar classifier XML file
    public static final String XML_FILE =
            "C:\\opencv\\sources\\data\\haarcascades\\haarcascade_frontalface_alt2.xml";

    public static void main(String[] args) {
        // Load image
        IplImage img = cvLoadImage("C:\\Users\\The Blue Light\\Desktop\\13.jpg");
        detect(img);
    }

    // Detect faces using the classifier XML file
    public static void detect(IplImage src) {
        // Define classifier
        CvHaarClassifierCascade cascade = new CvHaarClassifierCascade(cvLoad(XML_FILE));
        CvMemStorage storage = CvMemStorage.create();
        // Detect objects
        CvSeq sign = cvHaarDetectObjects(
                src,
                cascade,
                storage,
                1.5,
                3,
                CV_HAAR_DO_CANNY_PRUNING);
        int total_Faces = sign.total();
        // Draw rectangles around detected objects
        for (int i = 0; i < total_Faces; i++) {
            CvRect r = new CvRect(cvGetSeqElem(sign, i));
            cvRectangle(
                    src,
                    cvPoint(r.x(), r.y()),
                    cvPoint(r.width() + r.x(), r.height() + r.y()),
                    CvScalar.CYAN,
                    2,
                    CV_AA,
                    0);
        }
        // Clear the storage only after the results have been read;
        // the CvSeq above lives inside this storage.
        cvClearMemStorage(storage);
        // Display result
        cvShowImage("Result", src);
        cvWaitKey(0);
    }
}
IplImage image = IplImage.createFrom(yourBufferedImage);
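A minimal usage sketch tying it together (my illustration; the file name is a placeholder, and it assumes javax.imageio.ImageIO and java.io.File are imported):
// Read the image the GUI works with, convert it, and run the detector above.
BufferedImage buffered = ImageIO.read(new File("input.jpg"));
IplImage converted = IplImage.createFrom(buffered);
FaceDetection.detect(converted);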
Thanks @Marco13,
exactly what I needed.

Java: saving image as JPEG skew problem

I am trying to save an image as JPEG. The code below works fine when the image width is a multiple of 4, but the image is skewed otherwise. It has something to do with padding. When I was debugging I was able to save the image correctly as a bitmap by padding each row with 0s; however, this did not work out with the JPEG.
The main point to remember is that my image is represented as a BGR (blue, green, red, 1 byte each) byte array which I receive from a native call.
byte[] data = captureImage(OpenGLCanvas.getLastFocused().getViewId(), x, y);
if (data.length != 3 * x * y) {
    // 3 bytes per pixel
    return false;
}
// create buffered image from raw data
DataBufferByte buffer = new DataBufferByte(data, 3 * x * y);
ComponentSampleModel csm = new ComponentSampleModel(DataBuffer.TYPE_BYTE, x, y, 3, 3 * x, new int[]{0, 1, 2});
WritableRaster raster = Raster.createWritableRaster(csm, buffer, new Point(0, 0));
BufferedImage buff_image = new BufferedImage(x, y, BufferedImage.TYPE_INT_BGR); // because windows goes the wrong way...
buff_image.setData(raster);
// save the BufferedImage as a jpeg
try {
    File file = new File(file_name);
    FileOutputStream out = new FileOutputStream(file);
    JPEGImageEncoder encoder = JPEGCodec.createJPEGEncoder(out);
    JPEGEncodeParam param = encoder.getDefaultJPEGEncodeParam(buff_image);
    param.setQuality(1.0f, false);
    encoder.setJPEGEncodeParam(param);
    encoder.encode(buff_image);
    out.close();
    // or JDK 1.4
    // ImageIO.write(image, "JPEG", out);
} catch (Exception ex) {
    // Write permissions on "file_name"
    return false;
}
I also looked into creating the JPEG in C++, but there was even less material on that; it is still an option, though.
Any help greatly appreciated.
Leon
Thanks for your suggestions, but I have managed to work it out.
To capture the image I was using WINGDIAPI HBITMAP WINAPI CreateDIBSection in C++, and OpenGL would draw to that bitmap. Unbeknown to me, padding was added to the bitmap automatically when the width was not a multiple of 4.
Therefore Java was incorrectly interpreting the byte array.
The correct way to interpret the bytes is:
byte[] data = captureImage(OpenGLCanvas.getLastFocused().getViewId(), x, y);
int x_padding = x % 4;
BufferedImage buff_image = new BufferedImage(x, y, BufferedImage.TYPE_INT_RGB);
int val;
for (int j = 0; j < y; j++) {
    for (int i = 0; i < x; i++) {
        val = ( data[(i + j * x) * 3 + j * x_padding + 2] & 0xff) +
              ((data[(i + j * x) * 3 + j * x_padding + 1] & 0xff) << 8) +
              ((data[(i + j * x) * 3 + j * x_padding + 0] & 0xff) << 16);
        buff_image.setRGB(i, j, val);
    }
}
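To see why x % 4 gives the per-row padding here, a short worked check (my addition, assuming the 24-bit DIB layout described above):
// Each row holds 3*x payload bytes, and a DIB pads rows to a multiple of 4.
// Since -3*x is congruent to x modulo 4, the pad is simply x % 4 bytes, e.g.
// x = 5 -> 15 payload bytes, padded to 16 -> 1 pad byte, and 5 % 4 == 1.
int rowStride = 3 * x + x % 4;              // bytes per padded row
int offset = (i + j * x) * 3 + j * (x % 4); // equals j * rowStride + 3 * i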
// save the BufferedImage as a jpeg
try {
    File file = new File(file_name);
    FileOutputStream out = new FileOutputStream(file);
    JPEGImageEncoder encoder = JPEGCodec.createJPEGEncoder(out);
    JPEGEncodeParam param = encoder.getDefaultJPEGEncodeParam(buff_image);
    param.setQuality(1.0f, false);
    encoder.setJPEGEncodeParam(param);
    encoder.encode(buff_image);
    out.close();
} catch (Exception ex) {
    return false;
}
The JPEG standard is extremely complex. I am thinking it may be an issue with padding the output of the DCT somehow. The DCT is done to transform the content from YCbCr 4:2:2 to signal space, with one DCT for each channel: Y, Cb, and Cr. The DCT is done on a "macroblock" or "minimum coded unit", depending on your context. JPEG usually has 8x8 macroblocks. When it is at the edge and there are not enough pixels, it clamps the edge value, "drags it across", and does the DCT on that.
I am not sure if this helps, but it sounds like a non-standard-conforming file. I suggest you use JPEGSnoop to find out more. There are also several explanations of how JPEG compression works.
One possibility is that the sample rate may be encoded incorrectly. It might be something exotic such as 4:2:1, so you might be pulling twice as many X samples as there really are, thus distorting the image.
It is an image I capture from the screen.
Maybe the Screen Image class will be easier to use.
