Want to store detected cropped images in a file - java

I have successfully detected human faces using a Haar cascade in OpenCV. I run a loop that is supposed to crop each detected portion of the image and store it on the system, but it crops only one detected face; the rest of the faces are never cropped and saved. I am attaching my code, please help me solve this. E.g. if the image contains 4 people, the Haar cascade detects 4 faces, but the loop crops only one of them and the other 3 are not saved.
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfRect;
import org.opencv.core.Point;
import org.opencv.core.Rect;
import org.opencv.core.Scalar;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;
import org.opencv.objdetect.CascadeClassifier;
public class FaceDetection
{
public static void main(String[] args)
{
System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
CascadeClassifier faceDetector = new CascadeClassifier();
faceDetector.load("G:\\haarcascade_frontalface_alt.xml");
//System.out.println ( "Working" );
// Input image
Mat image = Imgcodecs.imread("G:\\sample.jpg");
// Detecting faces
MatOfRect faceDetections = new MatOfRect();
faceDetector.detectMultiScale(image, faceDetections);
System.out.println(String.format("Detected %s faces",
faceDetections.toArray().length));
// Creating a rectangular box showing faces detected
Rect rectCrop=null;
for (Rect rect : faceDetections.toArray())
{
Imgproc.rectangle(image, new Point(rect.x, rect.y),
new Point(rect.x + rect.width, rect.y + rect.height),
new Scalar(0, 255, 0),2);
rectCrop = new Rect(rect.x, rect.y, rect.width, rect.height);
}
Rect[] count = faceDetections.toArray();
System.out.println(""+count.length);
// Saving the output image
String filename = "Ouput.jpg";
Imgcodecs.imwrite("G:\\"+filename, image);
Mat markedImage = new Mat(image,rectCrop);
Imgcodecs.imwrite("G:\\crop1.jpg",markedImage );
}
}

Your code is looping over all of the detected faces, keeping only the last rectangle, and then saving just that one crop. You need it to either a) crop the faces separately and add metadata that ties them together, or b) crop them all together.
For option a, you need to save each file separately inside your loop. Simply move your file-saving code into the loop and adjust it so that each iteration writes its own file (see the sketch below).
For option b, you might need some additional software to crop all of the faces together. This could be done a number of ways: you could concatenate either a) the pixel arrays or b) the individual bytes. Then, when you need to find connections between people, you simply run the face-recognition software on the files again.
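A minimal sketch of option (a), reusing the variable names from the question's code (image, faceDetections) and its G:\ output folder, which is only an example path. Each detected rectangle is cropped with submat() and written to its own numbered file, so every face gets saved instead of only the last one:
int faceIndex = 0;
for (Rect rect : faceDetections.toArray()) {
    // crop this face first, so the green rectangle is not drawn into the crop
    Mat face = image.submat(rect);
    Imgcodecs.imwrite("G:\\crop" + faceIndex + ".jpg", face);
    faceIndex++;
    // then draw the rectangle on the full image, as in the original code
    Imgproc.rectangle(image, new Point(rect.x, rect.y),
            new Point(rect.x + rect.width, rect.y + rect.height),
            new Scalar(0, 255, 0), 2);
}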

Related

OpenCV library in Tomcat(8.5.32) Server unable to execute

I am facing an issue while setting up image-processing code. Despite making all the code changes and trying different approaches, the issue persists.
Libraries used – OpenCV – 3.4.2
Jars Used – opencv-3.4.2-0
tess4j-3.4.8
Lines added in pom.xml
<!-- https://mvnrepository.com/artifact/org.openpnp/opencv -->
<dependency>
<groupId>org.openpnp</groupId>
<artifactId>opencv</artifactId>
<version>3.4.2-0</version>
</dependency>
<!-- https://mvnrepository.com/artifact/net.sourceforge.tess4j/tess4j -->
<dependency>
<groupId>net.sourceforge.tess4j</groupId>
<artifactId>tess4j</artifactId>
<version>3.4.8</version>
</dependency>
Steps for OpenCV installation:
Download opencv.exe from the official site
Run opencv.exe; it will create an opencv folder
We now have the OpenCV library available, which we can use from Eclipse.
Steps for Tesseract installation:
Download the tess4j.zip file from the official link
Extract the zip folder after download
Provide the path of the tess4j folder
Following are the steps we performed for the setup in Eclipse:
We added the native library by providing the path to the OpenCV library in the build-path settings
We downloaded Tesseract for image reading
We provided the path to Tesseract in the code
We used System.loadLibrary(Core.NATIVE_LIBRARY_NAME) and OpenCV.loadLocally() for loading the library
Then we exported a WAR for deployment
There have been no changes or setup in Apache Tomcat
For loading the libraries in Tomcat we have to provide some setup here:
For the new code we used a static library-loading class (as suggested in solutions on Stack Overflow).
Here, System.loadLibrary is not working.
We had to use System.load with a hardcoded path, which results in an internal error.
We call System.load twice in the static class; the first call gives a std::bad_alloc error.
There are 2 possible paths in the OpenCV build.
This is the 1st one:
System.load("C:\\Users\\Downloads\\opencv\\build\\bin\\opencv_java342.dll");
and the 2nd one gives an assertion error, depending on which call is kept above.
This is the 2nd one:
System.load("C:\\User\\Downloads\\opencv\\build\\java\\x64\\opencv_java3412.dll");
The code executes until about midway and then exits; so far it has never reached the Tesseract step.
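For reference, a hedged sketch of such a static loader class: it loads the native library at most once per classloader, tries System.loadLibrary first (which requires java.library.path to be set for the Tomcat JVM), and falls back to System.load with an absolute path. The path below is the one quoted above, is environment-specific, and shows the escaped backslashes that Java string literals require:
import org.opencv.core.Core;

public final class OpenCvLoader {

    private static boolean loaded = false;

    private OpenCvLoader() {
    }

    public static synchronized void load() {
        if (loaded) {
            return;
        }
        try {
            // works when -Djava.library.path points at the folder containing the DLL
            System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        } catch (UnsatisfiedLinkError e) {
            // fallback: absolute path quoted in the question (environment-specific)
            System.load("C:\\Users\\Downloads\\opencv\\build\\bin\\opencv_java342.dll");
        }
        loaded = true;
    }
}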
Here is the full code for the image-reading class:
import java.awt.Image;
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;
import java.io.BufferedWriter;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileWriter;
import java.io.PrintWriter;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.concurrent.TimeUnit;
import javax.swing.ImageIcon;
import javax.swing.JFrame;
import javax.swing.JLabel;
import org.apache.commons.logging.impl.Log4JLogger;
import org.apache.log4j.Logger;
import org.apache.poi.ss.usermodel.Cell;
import org.apache.poi.ss.usermodel.Row;
import org.apache.poi.xssf.usermodel.XSSFSheet;
import org.apache.poi.xssf.usermodel.XSSFWorkbook;
import org.opencv.core.Core;
import org.opencv.core.CvException;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Rect;
import org.opencv.core.Size;
import org.opencv.highgui.HighGui;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;
import net.sourceforge.tess4j.Tesseract;
import nu.pattern.OpenCV;
public class ReadImageBox {
// Log4j logger used for tracing the OCR pipeline (referenced throughout this class)
private static final Logger logger = Logger.getLogger(ReadImageBox.class);
public String readDataFromImage(String imageToReadPath,String tesseractPath)
{
String result = "";
try {
String i = Core.NATIVE_LIBRARY_NAME;
System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
logger.info("Img to read "+imageToReadPath);
String imagePath =imageToReadPath; // bufferNameOfImagePath = "";
logger.info(imagePath);
/*
* The class Mat represents an n-dimensional dense numerical single-channel or
* multi-channel array. It can be used to store real or complex-valued vectors
* and matrices, grayscale or color images, voxel volumes, vector fields, point
* clouds, tensors, histograms (though, very high-dimensional histograms may be
* better stored in a SparseMat ).
*/
logger.info("imagepath::"+imagePath);
OpenCV.loadLocally();
logger.info("imagepath::"+imagePath);
//logger.info("Library Information"+Core.getBuildInformation());
logger.info("imagepath::"+imagePath);
Mat source = Imgcodecs.imread(imagePath);
logger.info("Source image "+source);
String directoryPath = imagePath.substring(0,imagePath.lastIndexOf('/'));
logger.info("Going for Image Processing :" + directoryPath);
// calling image processing here to process the data from it
result = updateImage(100,20,10,3,3,2,source, directoryPath,tesseractPath);
logger.info("Data read "+result);
return result;
}
catch (UnsatisfiedLinkError error) {
// Output expected UnsatisfiedLinkErrors.
logger.error(error);
}
catch (Exception exception)
{
logger.error(exception);
}
return result;
}
public static String updateImage(int boxSize, int horizontalRemoval, int verticalRemoval, int gaussianBlur,
int denoisingClosing, int denoisingOpening, Mat source, String tempDirectoryPath,String tesseractPath) throws Exception{
// Tesseract Object
logger.info("Tesseract Path :"+tesseractPath);
Tesseract tesseract = new Tesseract();
tesseract.setDatapath(tesseractPath);
// Creating the empty destination matrix for further processing
Mat grayScaleImage = new Mat();
Mat gaussianBlurImage = new Mat();
Mat thresholdImage = new Mat();
Mat morph = new Mat();
Mat morphAfterOpreation = new Mat();
Mat dilate = new Mat();
Mat hierarchy = new Mat();
logger.info("Image type"+source.type());
// Converting the image to gray scale and saving it in the grayScaleImage matrix
Imgproc.cvtColor(source, grayScaleImage, Imgproc.COLOR_RGB2GRAY);
//Imgproc.cvtColor(source, grayScaleImage, 0);
// Applying Gaussian Blur
logger.info("source image "+source);
Imgproc.GaussianBlur(grayScaleImage, gaussianBlurImage, new org.opencv.core.Size(gaussianBlur, gaussianBlur),
0);
// OTSU threshold
Imgproc.threshold(gaussianBlurImage, thresholdImage, 0, 255, Imgproc.THRESH_OTSU | Imgproc.THRESH_BINARY_INV);
logger.info("Threshold image "+gaussianBlur);
// remove the lines of any table inside the invoice
Mat horizontal = thresholdImage.clone();
Mat vertical = thresholdImage.clone();
int horizontal_size = horizontal.cols() / 30;
if(horizontal_size%2==0)
horizontal_size+=1;
// showWaitDestroy("Horizontal Lines Detected", horizontal);
Mat horizontalStructure = Imgproc.getStructuringElement(Imgproc.MORPH_RECT,
new org.opencv.core.Size(horizontal_size, 1));
Imgproc.erode(horizontal, horizontal, horizontalStructure);
Imgproc.dilate(horizontal, horizontal, horizontalStructure);
int vertical_size = vertical.rows() / 30;
if(vertical_size%2==0)
vertical_size+=1;
// Create structure element for extracting vertical lines through morphology
// operations
Mat verticalStructure = Imgproc.getStructuringElement(Imgproc.MORPH_RECT,
new org.opencv.core.Size(1, vertical_size));
// Apply morphology operations
Imgproc.erode(vertical, vertical, verticalStructure);
Imgproc.dilate(vertical, vertical, verticalStructure);
Core.absdiff(thresholdImage, horizontal, thresholdImage);
Core.absdiff(thresholdImage, vertical, thresholdImage);
logger.info("Vertical Structure "+verticalStructure);
Mat newImageFortest = thresholdImage;
logger.info("Threshold image "+thresholdImage);
// applying Closing operation
Imgproc.morphologyEx(thresholdImage, morph, Imgproc.MORPH_CLOSE, Imgproc.getStructuringElement(
Imgproc.MORPH_RECT, new Size(denoisingClosing, denoisingClosing)));
logger.info("Morph image "+morph);
// applying Opening operation
Imgproc.morphologyEx(morph, morphAfterOpreation, Imgproc.MORPH_OPEN, Imgproc.getStructuringElement(
Imgproc.MORPH_RECT, new Size(denoisingOpening, denoisingOpening)));
logger.info("Morph After operation image "+morphAfterOpreation);
// Applying dilation on the threshold image to create bounding box edges
Imgproc.dilate(morphAfterOpreation, dilate,
Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(boxSize, boxSize)));
logger.info("Dilate image "+dilate);
// creating string buffer object
String text = "";
try
{
// finding contours
List<MatOfPoint> contourList = new ArrayList<MatOfPoint>(); // A list to store all the contours
// finding contours
Imgproc.findContours(dilate, contourList, hierarchy, Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_NONE);
logger.info("Contour List "+contourList);
// Creating a copy of the image
//Mat copyOfImage = source;
Mat copyOfImage = newImageFortest;
logger.info("Copy of Image "+copyOfImage);
// Rectangle for cropping
Rect rectCrop = new Rect();
logger.info("Rectangle Crop New Object "+rectCrop);
// loop through the identified contours and crop them from the image to feed
// into Tesseract-OCR
for (int i = 0; i < contourList.size(); i++) {
// getting bound rectangle
rectCrop = Imgproc.boundingRect(contourList.get(i));
logger.info("Rectangle cropped"+rectCrop);
// cropping Image
Mat croppedImage = copyOfImage.submat(rectCrop.y, rectCrop.y + rectCrop.height, rectCrop.x,
rectCrop.x + rectCrop.width);
// writing cropped image to disk
logger.info("Path to write cropped image "+ tempDirectoryPath);
String writePath = tempDirectoryPath + "/croppedImg.png";
logger.info("writepath"+writePath);
// imagepath = imagepath.
Imgcodecs.imwrite(writePath, croppedImage);
try {
// extracting text from cropped image, goes to the image, extracts text and adds
// them to stringBuffer
logger.info("Exact Path where Image was written with Name "+ writePath);
String textExtracted = (tesseract
.doOCR(new File(writePath)));
//Adding Separator
textExtracted = textExtracted + "_SEPERATOR_";
logger.info("Text Extracted "+textExtracted);
textExtracted = textExtracted + "\n";
text = textExtracted + text;
logger.info("Text Extracted Completely"+text);
// System.out.println("Andar Ka Text => " + text.toString());
} catch (Exception exception) {
logger.error(exception);
}
writePath = "";
logger.info("Making write Path empty for next Image "+ writePath);
}
}
catch(CvException ae)
{
logger.error("cv",ae);
}
catch(UnsatisfiedLinkError ae)
{
logger.error("unsatdif",ae);
}
catch(Exception ae)
{
logger.error("general",ae);
}
// converting into string
return text.toUpperCase();
}
// convert Mat to Image for GUI output
public static Image toBufferedImage(Mat m) {
// getting BYTE_GRAY formed image
int type = BufferedImage.TYPE_BYTE_GRAY;
if (m.channels() > 1) {
type = BufferedImage.TYPE_3BYTE_BGR;
}
int bufferSize = m.channels() * m.cols() * m.rows();
byte[] b = new byte[bufferSize];
m.get(0, 0, b); // get all the pixels
// creating buffered Image
BufferedImage image = new BufferedImage(m.cols(), m.rows(), type);
final byte[] targetPixels = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
System.arraycopy(b, 0, targetPixels, 0, b.length);
// returning Image
return image;
}
// method to display Mat format images using the GUI
private static void showWaitDestroy(String winname, Mat img) {
HighGui.imshow(winname, img);
HighGui.moveWindow(winname, 500, 0);
HighGui.waitKey(0);
HighGui.destroyWindow(winname);
}
}

OpenCV 3 (Java Binding) : Apply CLAHE to image

I am trying to use the Java bindings of OpenCV to apply a non-global contrast (histogram) optimization to a (color) PNG image, but I cannot get it to work.
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;
import java.io.File;
import javax.imageio.ImageIO;
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.CLAHE;
import org.opencv.imgproc.Imgproc;
public class Main {
public static void main( String[] args ) {
try {
System.loadLibrary( Core.NATIVE_LIBRARY_NAME );
// fetch the png
File input = new File("test.png");
BufferedImage buffImage = ImageIO.read(input);
byte[] data = ((DataBufferByte) buffImage.getRaster().getDataBuffer()).getData();
// build MAT for original image
Mat orgImage = new Mat(buffImage.getHeight(),buffImage.getWidth(), CvType.CV_8UC3);
orgImage.put(0, 0, data);
// transform from to LAB
Mat labImage = new Mat(buffImage.getHeight(), buffImage.getWidth(), CvType.CV_8UC4);
Imgproc.cvtColor(orgImage, labImage, Imgproc.COLOR_BGR2Lab);
// apply CLAHE
CLAHE clahe = Imgproc.createCLAHE();
Mat destImage = new Mat(buffImage.getHeight(),buffImage.getWidth(), CvType.CV_8UC4);
clahe.apply(labImage, destImage);
Imgcodecs.imwrite("test_clahe.png", destImage);
} catch (Exception e) {
System.out.println("Error: " + e.getMessage());
}
}
I get the exception:
Error: cv::Exception: C:\builds\master_PackSlaveAddon-win64-vc12-static\opencv\modules\imgproc\src\clahe.cpp:354: error: (-215) _src.type() == CV_8UC1 || _src.type() == CV_16UC1 in function `anonymous
-namespace'::CLAHE_Impl::apply
I guess I need to work with the individual channels, but I cannot figure out how. The code is inspired by this C++ example, but somehow I fail to extract the corresponding layers (I guess I need only the L channel for clahe.apply()).
That example just splits the Lab image and applies CLAHE to the L channel, which is the lightness (intensity) channel. So just use this code in Java:
List<Mat> channels = new LinkedList<Mat>();
Core.split(labImage, channels);
CLAHE clahe = Imgproc.createCLAHE();
Mat destImage = new Mat();
clahe.apply(channels.get(0), destImage);
destImage.copyTo(channels.get(0));
Core.merge(channels, labImage);
and finally merge the equalized intensity channel back with the other channels. I haven't changed any parameters since I don't know what your image looks like, but I guess that isn't the problem. Hope it helps!
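For completeness, a minimal end-to-end sketch under the same assumptions as the question (an input file named test.png and the OpenCV 3 Java bindings on the library path). The clip limit of 2.0 and the 8x8 tile grid are commonly used illustrative values, not something from the original post:
import java.util.ArrayList;
import java.util.List;

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Size;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.CLAHE;
import org.opencv.imgproc.Imgproc;

public class ClaheExample {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        // read the image directly as a BGR Mat, avoiding the BufferedImage round trip
        Mat bgr = Imgcodecs.imread("test.png");

        // convert BGR -> Lab so contrast is equalized on lightness only
        Mat lab = new Mat();
        Imgproc.cvtColor(bgr, lab, Imgproc.COLOR_BGR2Lab);

        // split into L, a, b channels and apply CLAHE to the L channel only
        List<Mat> channels = new ArrayList<Mat>();
        Core.split(lab, channels);
        CLAHE clahe = Imgproc.createCLAHE(2.0, new Size(8, 8));
        Mat equalizedL = new Mat();
        clahe.apply(channels.get(0), equalizedL);
        equalizedL.copyTo(channels.get(0));

        // merge back and convert to BGR before writing the result
        Core.merge(channels, lab);
        Mat result = new Mat();
        Imgproc.cvtColor(lab, result, Imgproc.COLOR_Lab2BGR);
        Imgcodecs.imwrite("test_clahe.png", result);
    }
}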

Is there any way that I can use SURF and SIFT ... in OpenCV 3 Java?

I am using OpenCV 3, which has a wrapper for Java, but I am not able to use SURF, SIFT, and some other algorithms in Java. I have tried many ways and googled for a long time, but I cannot find any solution to this problem; I have even seen some people say there is no way. Not only that: in the Java wrapper I also cannot find the VideoWriter class, the BOWTrainer class, and so on. Now my question is why OpenCV publishes a wrapper that is incomplete for Java; if there are a lot of problems such as the ones I mentioned, there is no point in publishing an incomplete wrapper for another language that its users cannot actually use. Before OpenCV 3 I could write video, but now I cannot. I waited a long time hoping OpenCV 3 would be a good version and would take care of all the problems of previous versions, but now it has more problems than the previous versions (no good documentation for Eclipse, etc.). If anyone understands my question and knows a way to solve this, please tell me what to do. Thank you!
I fixed OpenCV wrappers manually and it works for me. See my answer at SURF and SIFT algorithms doesn't work in OpenCV 3.0 Java
You can implement SIFT in OpenCV 3.
Here is the code...
package com.SR.view;
import static java.awt.Color.gray;
import java.awt.List;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import javax.imageio.ImageIO;
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.MatOfKeyPoint;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Scalar;
import org.opencv.core.Size;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;
import org.opencv.features2d.FeatureDetector;
import org.opencv.features2d.Features2d;
public class sift_opencv {
public static void main(String[] args) throws IOException {
System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
Mat blurredImage = new Mat();
Mat hsvImage = new Mat();
Mat mask = new Mat();
Mat morphOutput = new Mat();
Mat img;
Mat maskedImage;
//BufferedImage img;
//img=ImageIO.read(new File("C:\\Users\\softroniics\\Documents\\NetBeansProjects\\SceneRecogintion\\src\\com\\SR\\view\\Penguins.png"));
//File f= new File("C:\\Users\\softroniics\\Documents\\NetBeansProjects\\SceneRecogintion\\src\\com\\SR\\view\\Penguins.png");
img=Imgcodecs.imread("C:\\Users\\softroniics\\Documents\\NetBeansProjects\\SceneRecogintion\\src\\com\\SR\\view\\burj.png");
System.out.println(img);
// remove some noise
Imgcodecs.imwrite("out.png", img);
Imgproc.blur(img, blurredImage, new Size(7, 7));
// convert the frame to HSV
Imgproc.cvtColor(blurredImage, hsvImage, Imgproc.COLOR_BGR2HSV);
//convert to gray
//Mat mat = new Mat(img.width(), img.height(), CvType.CV_8U, new Scalar(4));
Mat gray = new Mat(img.width(), img.height(), CvType.CV_8U, new Scalar(4));
Imgproc.cvtColor(img, gray, Imgproc.COLOR_BGR2GRAY);
FeatureDetector fd = FeatureDetector.create(FeatureDetector.FAST);
MatOfKeyPoint regions = new MatOfKeyPoint();
fd.detect(gray, regions);
Mat output=new Mat();
//int r=regions.rows();
//System.out.println("REGIONS ARE: " + regions);
Features2d.drawKeypoints(gray, regions,output );
Imgcodecs.imwrite("out.png", output);
}
}

OpenCV Java Smile Detection

I've tried to create a smile detector with source code that I found on the Internet. It detects faces and works pretty well. It uses Haar classifiers; I found a Haar classifier for smile recognition and tried it, but it doesn't work. I tried to use it in the same way that was used to recognize the face. I tried the same with the eye classifier, and that worked. All classifiers were found in the opencv/data folder. Could somebody give me a tip on what more I could do with the given code?
import java.io.File;
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfRect;
import org.opencv.core.Point;
import org.opencv.core.Rect;
import org.opencv.core.Scalar;
import org.opencv.highgui.Highgui;
import org.opencv.objdetect.CascadeClassifier;
public class SmileDetector {
public void detectSmile(String filename) {
System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
System.out.println("\nRunning SmileDetector");
CascadeClassifier faceDetector = new CascadeClassifier(new File(
"src/main/resources/haarcascade_frontalface_alt.xml").getAbsolutePath());
CascadeClassifier smileDetector = new CascadeClassifier(
new File("src/main/resources/haarcascade_smile.xml").getAbsolutePath());
Mat image = Highgui.imread(filename);
MatOfRect faceDetections = new MatOfRect();
MatOfRect smileDetections = new MatOfRect();
faceDetector.detectMultiScale(image, faceDetections);
System.out.println(String.format("Detected %s faces", faceDetections.toArray().length));
for (Rect rect : faceDetections.toArray()) {
Core.rectangle(image, new Point(rect.x, rect.y), new Point(rect.x + rect.width, rect.y + rect.height),
new Scalar(0, 255, 0));
}
Mat face = image.submat(faceDetections.toArray()[0]);
smileDetector.detectMultiScale(face, smileDetections);
for (Rect rect : smileDetections.toArray()) {
Core.rectangle(face, new Point(rect.x, rect.y), new Point(rect.x + rect.width, rect.y + rect.height),
new Scalar(0, 255, 0));
}
String outputFilename = "ouput.png";
System.out.println(String.format("Writing %s", outputFilename));
Highgui.imwrite(outputFilename, image);
Highgui.imwrite("ee.png", face);
}
}
To answer Vi Matviichuk's comment:
Yes, I was partially able to fix the problem. I used a mouth classifier instead of a smile classifier; the name of the mouth classifier from the OpenCV samples is haarcascade_mcs_mouth.xml. You then look for faces, crop them, and look for mouths on the faces. However, this will give you a lot of mouths, so you have to filter them:
/**
* Detects face(s) and then for each detects and crops the mouth
*
* @param filename path to the file on which smile(s) will be detected
* @return List of Mat objects with cropped mouth pictures.
*/
private ArrayList<Mat> detectMouth(String filename) {
int i = 0;
ArrayList<Mat> mouths = new ArrayList<Mat>();
// reading image in grayscale from the given path
image = Highgui.imread(filename, Highgui.CV_LOAD_IMAGE_GRAYSCALE);
MatOfRect faceDetections = new MatOfRect();
// detecting face(s) on given image and saving them to MatofRect object
faceDetector.detectMultiScale(image, faceDetections);
System.out.println(String.format("Detected %s faces", faceDetections.toArray().length));
MatOfRect mouthDetections = new MatOfRect();
// detecting mouth(s) on given image and saving them to MatOfRect object
mouthDetector.detectMultiScale(image, mouthDetections);
System.out.println(String.format("Detected %s mouths", mouthDetections.toArray().length));
for (Rect face : faceDetections.toArray()) {
Mat outFace = image.submat(face);
// saving cropped face to picture
Highgui.imwrite("face" + i + ".png", outFace);
for (Rect mouth : mouthDetections.toArray()) {
// trying to find the right mouth
// if the mouth is in the lower 2/5 of the face
// and the lower edge of the mouth is above the bottom edge of the face
// and the horizontal center of the mouth is near the center of the face
if (mouth.y > face.y + face.height * 3 / 5 && mouth.y + mouth.height < face.y + face.height
&& Math.abs((mouth.x + mouth.width / 2) - (face.x + face.width / 2)) < face.width / 10) {
Mat outMouth = image.submat(mouth);
// resizing mouth to the unified size of trainSize
Imgproc.resize(outMouth, outMouth, trainSize);
mouths.add(outMouth);
// saving mouth to picture
Highgui.imwrite("mouth" + i + ".png", outMouth);
i++;
}
}
}
return mouths;
}
Then you have to detect the smile itself. I tried to do this by training an SVM, but I didn't have enough samples, so it wasn't perfect. However, the whole code I have can be found here: https://bitbucket.org/cybuch/smile-detector/src/ac8a309454c3467ffd8bc1c34ad95879cb059328/src/main/java/org/cybuch/smiledetector/SmileDetector.java?at=master

Incorrect number of Contours on javacv?

I have written simple code to retrieve the number of contours in an image, but it always gives an incorrect answer. Can someone please explain this?
import com.googlecode.javacpp.Loader;
import com.googlecode.javacv.CanvasFrame;
import static com.googlecode.javacpp.Loader.*;
import static com.googlecode.javacv.cpp.opencv_core.*;
import static com.googlecode.javacv.cpp.opencv_imgproc.*;
import static com.googlecode.javacv.cpp.opencv_highgui.*;
import java.io.File;
import javax.swing.JFileChooser;
public class TestBeam {
public static void main(String[] args) {
CvMemStorage storage=CvMemStorage.create();
CvSeq squares = new CvContour();
squares = cvCreateSeq(0, sizeof(CvContour.class), sizeof(CvSeq.class), storage);
JFileChooser f=new JFileChooser();
int result=f.showOpenDialog(f);//show dialog box to choose files
File myfile=null;
String path="";
if(result==0){
myfile=f.getSelectedFile();//selected file taken to myfile
path=myfile.getAbsolutePath();//get the path of the file
}
IplImage src = cvLoadImage(path); // here path is the actual path to the image
IplImage grayImage = IplImage.create(src.width(), src.height(), IPL_DEPTH_8U, 1);
cvCvtColor(src, grayImage, CV_RGB2GRAY);
cvThreshold(grayImage, grayImage, 127, 255, CV_THRESH_BINARY);
CvSeq cvSeq=new CvSeq();
CvMemStorage memory=CvMemStorage.create();
cvFindContours(grayImage, memory, cvSeq, Loader.sizeof(CvContour.class), CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
System.out.println(cvSeq.elem_size());
CanvasFrame cnvs=new CanvasFrame("Beam");
cnvs.setDefaultCloseOperation(javax.swing.JFrame.EXIT_ON_CLOSE);
cnvs.showImage(src);
//cvShowImage("Final ", src);
}
}
This is the sample image that I used.
But the code always returns 8 as the output. Can someone please explain this?
cvSeq.elem_size() returns the size of a sequence element in bytes, not the number of contours. That is why the output is 8 every time.
Please refer to the following link for more information:
http://opencv.willowgarage.com/documentation/dynamic_structures.html#cvseq
To find the number of contours you can use the following snippet:
int i = 0;
while(cvSeq != null){
i = i + 1;
cvSeq = cvSeq.h_next();
}
System.out.println(i);
With the parameters you have provided, CV_RETR_EXTERNAL will only return the external contour, which in your image is the image boundary (provided you are not inverting the image). You can use CV_RETR_LIST to get all the contours. Visit the following link for more information on the parameters.
http://opencv.willowgarage.com/documentation/structural_analysis_and_shape_descriptors.html#findcontours
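For example, building on the variables already present in the question's code (grayImage and memory), inverting the threshold so the shapes rather than the background become foreground and retrieving every contour with CV_RETR_LIST might look roughly like this; counting then follows the h_next chain exactly as in the snippet above:
cvThreshold(grayImage, grayImage, 127, 255, CV_THRESH_BINARY_INV);
CvSeq contours = new CvSeq();
cvFindContours(grayImage, memory, contours, Loader.sizeof(CvContour.class),
        CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);
int count = 0;
for (CvSeq c = contours; c != null; c = c.h_next()) {
    count = count + 1;
}
System.out.println("Number of contours: " + count);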
