I have an image:
I want to crop it so the book is by itself.
I am using OpenCV to try to get the contours of the image. Once I draw them, it looks like this. How can I ignore the extra contours to the right of the image? I have already tried removing outliers using the standard deviation. Right now the code takes every point inside each bounding rectangle and adds it to an ArrayList for later processing. I have an overall ArrayList of points, plus two more so the points can be sorted from least to greatest for statistical analysis.
This is what it looks like now:
import java.awt.Point;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.MatOfPoint2f;
import org.opencv.core.Rect;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;
public class imtest {
    public static void main(String args[]) throws IOException {
        String filename = "C:/image.png";
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        // Load as grayscale and run Canny edge detection
        Mat torect = new Mat();
        Mat torect1 = Imgcodecs.imread(filename, 0);
        Imgproc.Canny(torect1, torect, 10, 100);

        List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
        Imgproc.findContours(torect.clone(), contours, new Mat(), Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);

        ArrayList<Point> outlie = new ArrayList<Point>();
        ArrayList<Integer> ylist = new ArrayList<Integer>();
        ArrayList<Integer> xlist = new ArrayList<Integer>();
        MatOfPoint2f approxCurve = new MatOfPoint2f();

        // For each contour found
        for (int i = 0; i < contours.size(); i++) {
            // Convert contours.get(i) from MatOfPoint to MatOfPoint2f
            MatOfPoint2f contour2f = new MatOfPoint2f(contours.get(i).toArray());
            // Approximate the contour as a polygon
            double approxDistance = Imgproc.arcLength(contour2f, true) * 0.02;
            Imgproc.approxPolyDP(contour2f, approxCurve, approxDistance, true);
            // Convert back to MatOfPoint
            MatOfPoint points = new MatOfPoint(approxCurve.toArray());
            // Get bounding rect of contour
            Rect rect = Imgproc.boundingRect(points);
            int xoffset = rect.x;
            int yoffset = rect.y;
            // The offset check is per-rect, so it can be hoisted out of the
            // pixel loops; note && rather than the non-short-circuit &
            if (yoffset > 1 && xoffset > 1) {
                for (int y = 0; y < rect.height; y++) {
                    for (int x = 0; x < rect.width; x++) {
                        outlie.add(new Point(xoffset + x, yoffset + y));
                        ylist.add(yoffset + y);
                        xlist.add(xoffset + x);
                    }
                }
            }
        }
    }
}
Adjusting the Canny thresholds controlled the number of contours in the resulting image.
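Since the question mentions trying outlier removal with the standard deviation: an interquartile-range (IQR) filter is usually more robust here, because the stray points themselves inflate the standard deviation. A minimal, hypothetical sketch on a list of coordinates (plain Java, independent of OpenCV; the quartile estimate is deliberately crude):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class IqrFilter {
    // Keep only values within [Q1 - 1.5*IQR, Q3 + 1.5*IQR].
    // Quartiles are approximated by index; fine for a filtering sketch.
    static List<Integer> filterOutliers(List<Integer> values) {
        List<Integer> sorted = new ArrayList<>(values);
        Collections.sort(sorted);
        double q1 = sorted.get(sorted.size() / 4);
        double q3 = sorted.get((3 * sorted.size()) / 4);
        double iqr = q3 - q1;
        double lo = q1 - 1.5 * iqr, hi = q3 + 1.5 * iqr;
        List<Integer> kept = new ArrayList<>();
        for (int v : values) {
            if (v >= lo && v <= hi) kept.add(v);
        }
        return kept;
    }

    public static void main(String[] args) {
        // x-coordinates clustered around the book, plus two stray points far right
        List<Integer> xs = new ArrayList<>(List.of(100, 105, 110, 115, 120, 125, 130, 135, 900, 950));
        List<Integer> kept = filterOutliers(xs);
        System.out.println(kept); // [100, 105, 110, 115, 120, 125, 130, 135]
    }
}
```

Applied to `xlist` and `ylist`, the crop rectangle would then span min..max of the kept coordinates, ignoring the contours on the right.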
Related
I am importing the glTF format into JavaFX. I'm experiencing a weird effect where I can see through a triangle mesh and see other parts of it, or other meshes, through it.
A few Pics
It would be fantastic if someone knows a solution to this. I have already tried CullFace.FRONT and CullFace.BACK.
Here is the code that generates the triangle meshes:
package fx;
import gltf.GLTFParseException;
import gltf.mesh.GLTFMesh;
import gltf.mesh.GLTFMeshPrimitive;
import javafx.scene.Group;
import javafx.scene.shape.CullFace;
import javafx.scene.shape.MeshView;
import javafx.scene.shape.TriangleMesh;
import java.util.Arrays;
public class FXglTFMesh extends Group {

    public static FXglTFMesh fromGLTFMesh(GLTFMesh mesh) throws GLTFParseException {
        FXglTFMesh fxMesh = new FXglTFMesh();
        GLTFMeshPrimitive[] primitives = mesh.getPrimitives();
        MeshView[] meshViews = new MeshView[primitives.length];
        for (int i = 0; i < meshViews.length; i++) {
            meshViews[i] = fromPrimitive(primitives[i]);
        }
        fxMesh.getChildren().addAll(meshViews);
        return fxMesh;
    }

    private static MeshView fromPrimitive(GLTFMeshPrimitive primitive) throws GLTFParseException {
        TriangleMesh mesh = new TriangleMesh();
        MeshView view = new MeshView(mesh);
        view.setCullFace(CullFace.BACK);

        // Reading texture coords (the nested copy is currently unused)
        float[][] texCoords = convertArrayToNested(2,
                primitive.getAttribute().getTexCoord_0().readDataAsFloats());
        mesh.getTexCoords().addAll(primitive.getAttribute().getTexCoord_0().readDataAsFloats());

        if (primitive.getIndices() != null) {
            // Mesh is indexed
            mesh.getPoints().addAll(primitive.getAttribute().getPosition().readDataAsFloats());
            int[] indices = primitive.getIndices().readDataAsInts();
            for (int i = 0; i < indices.length; i += 3) {
                mesh.getFaces().addAll(indices[i], 0, indices[i + 1], 0, indices[i + 2], 0);
            }
        } else {
            // Mesh is not indexed: parse the vertices and faces
            float[][] vertices = convertArrayToNested(3,
                    primitive.getAttribute().getPosition().readDataAsFloats());
            for (int i = 0; i < vertices.length; i += 3) {
                // Add three points...
                mesh.getPoints().addAll(vertices[i]);
                mesh.getPoints().addAll(vertices[i + 1]);
                mesh.getPoints().addAll(vertices[i + 2]);
                // ...and add them as one face
                mesh.getFaces().addAll(i, i, i + 1, i + 1, i + 2, i + 2);
            }
        }

        // Material
        FXglTFMaterial material = FXglTFMaterial.fromGLTFMaterial(primitive.getMaterial());
        view.setMaterial(material);
        return view;
    }

    private static float[][] convertArrayToNested(int factor, float[] array) {
        float[][] floats = new float[array.length / factor][];
        for (int i = 0; i < floats.length; i++) {
            int dataOffset = i * factor;
            floats[i] = Arrays.copyOfRange(array, dataOffset, dataOffset + factor);
        }
        return floats;
    }
}
Full code: https://github.com/NekoLvds/GLTFImporter (Development branch)
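As an aside, the `convertArrayToNested` helper in the code above is pure Java and easy to sanity-check in isolation. A standalone copy (same split-by-`factor` semantics as in the question) makes it clear that each row of the result holds `factor` consecutive components of the flat accessor data:

```java
import java.util.Arrays;

public class NestedArrayCheck {
    // Split a flat array into rows of `factor` consecutive elements,
    // mirroring convertArrayToNested from the question.
    static float[][] convertArrayToNested(int factor, float[] array) {
        float[][] floats = new float[array.length / factor][];
        for (int i = 0; i < floats.length; i++) {
            int dataOffset = i * factor;
            floats[i] = Arrays.copyOfRange(array, dataOffset, dataOffset + factor);
        }
        return floats;
    }

    public static void main(String[] args) {
        // Two vertices of three position components each
        float[] flat = {0f, 1f, 2f, 3f, 4f, 5f};
        float[][] triples = convertArrayToNested(3, flat);
        System.out.println(Arrays.deepToString(triples)); // [[0.0, 1.0, 2.0], [3.0, 4.0, 5.0]]
    }
}
```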
I'm working on a program that detects the pupil area of the eye using OpenCV (2.4). Below is the code snippet. It throws a CvException and I don't know what to do. How can I make it work?
import java.util.ArrayList;
import java.util.List;
import java.lang.Math;
import org.opencv.core.Scalar;
import org.opencv.core.Point;
import org.opencv.core.Rect;
import org.opencv.core.Core;
import org.opencv.core.CvException;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.highgui.Highgui;
import org.opencv.imgproc.Imgproc;
public class Detect {
    public static void main(String[] args) throws CvException {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        // Load image
        Mat src = Highgui.imread("tt.jpg");

        // Convert to grayscale
        Mat gray = new Mat();
        Imgproc.cvtColor(src, gray, Imgproc.COLOR_BGR2GRAY);
        Highgui.imwrite("gray.jpg", gray);

        // Convert to a binary image by inverse thresholding
        Imgproc.threshold(gray, gray, 30, 255, Imgproc.THRESH_BINARY_INV);
        Highgui.imwrite("binary.jpg", gray);

        // Find all contours
        List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
        Imgproc.findContours(gray.clone(), contours, new Mat(), Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_NONE);

        // Fill holes in each contour
        Imgproc.drawContours(gray, contours, -1, new Scalar(255, 255, 255), -1);

        for (int i = 0; i < contours.size(); i++) {
            double area = Imgproc.contourArea(contours.get(i));
            Rect rect = Imgproc.boundingRect(contours.get(i));
            int radius = rect.width / 2;
            System.out.println("Area: " + area);

            // If the contour is big enough and roughly round, it is the pupil
            if (area >= 30 &&
                    Math.abs(1 - ((double) rect.width / (double) rect.height)) <= 0.2 &&
                    Math.abs(1 - (area / (Math.PI * Math.pow(radius, 2)))) <= 0.2) {
                Core.circle(src, new Point(rect.x + radius, rect.y + radius), radius, new Scalar(255, 0, 0), 2);
                System.out.println("pupil");
            }
        }
        Highgui.imwrite("processed.jpg", src);
    }
}
It shows the following error:
OpenCV Error: Assertion failed (scn == 3 || scn == 4) in cv::cvtColor, file ..\..\..\..\opencv\modules\imgproc\src\color.cpp, line 3739
Exception in thread "main" CvException [org.opencv.core.CvException: cv::Exception: ..\..\..\..\opencv\modules\imgproc\src\color.cpp:3739: error: (-215) scn == 3 || scn == 4 in function cv::cvtColor
]
at org.opencv.imgproc.Imgproc.cvtColor_1(Native Method)
at org.opencv.imgproc.Imgproc.cvtColor(Imgproc.java:4598)
at Detect.main(Detect.java:24)
I think that OpenCV thinks that "tt.jpg" is already single-channel.
According to the documentation:
The function determines the type of an image by the content, not by the file extension.
To ensure the format, you can use a flag:
Mat src = Highgui.imread("tt.jpg"); // OpenCV decides the type based on the content
Mat src = Highgui.imread("tt.jpg", Highgui.IMREAD_GRAYSCALE); // single-channel image will be loaded, even if it is a 3-channel image
Mat src = Highgui.imread("tt.jpg", Highgui.IMREAD_COLOR); // 3-channel image will be loaded, even if it is a single-channel image
If you need only the grayscale image:
Mat src = Highgui.imread("tt.jpg", Highgui.IMREAD_GRAYSCALE);
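Separately from the channel issue, the roundness test in the question's loop is a self-contained formula and can be checked in isolation. A hypothetical standalone sketch of the same measure, |1 − area / (π·r²)| with r = width/2:

```java
public class RoundnessCheck {
    // Returns |1 - area / (pi * r^2)| where r is half the bounding-box width.
    // Values near 0 mean the blob fills its bounding box like a circle does.
    static double roundness(double area, double boundingWidth) {
        double radius = boundingWidth / 2.0;
        return Math.abs(1.0 - area / (Math.PI * radius * radius));
    }

    public static void main(String[] args) {
        double r = 20.0;
        double circleArea = Math.PI * r * r; // ideal circle in a 40x40 box
        double squareArea = 40.0 * 40.0;     // square filling the same box
        System.out.println(roundness(circleArea, 40.0)); // ~0.0  -> accepted (< 0.2)
        System.out.println(roundness(squareArea, 40.0)); // ~0.27 -> rejected
    }
}
```

So the 0.2 threshold in the question admits near-circular blobs like a pupil while rejecting box-like blobs such as eyelid shadows.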
I am fairly new to OpenCV on Android and I am using the ColorBlobDetector class from the OpenCV samples to detect traffic-light blobs such as red, green, and amber.
I can't seem to understand the use of mColorRadius.
I also cannot figure out where to compare colors to find the appropriate blob I am looking for.
Here's my code
PS: I even tried entering values for mLowerBound and mUpperBound directly, but it kept highlighting black blobs.
package edu.csueb.ilab.blindbike.lightdetection;
import android.os.Environment;
import android.util.Log;
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.MatOfPoint2f;
import org.opencv.core.Point;
import org.opencv.core.Rect;
import org.opencv.core.Scalar;
import org.opencv.highgui.Highgui;
import org.opencv.imgproc.Imgproc;
import java.io.File;
import java.text.SimpleDateFormat;
import java.util.ArrayList;
import java.util.Date;
import java.util.Iterator;
import java.util.List;

public class ColorBlobDetector {
    // Lower and upper bounds for range checking in HSV color space
    private Scalar mLowerBound = new Scalar(0); // for blue 120,100,100; current 176,255,244; working green 70,20,100
                                                // for fluorescent green light 57,255,20
    private Scalar mUpperBound = new Scalar(0); // for blue 179,255,255; blue cap 28,28,37; current 177,255,252; working green 85,35,125
                                                // for fluorescent green light 57,255,200
                                                // for gray signs 76,55,28 and 89,62,33; blue cap 80,109,149
    // Minimum contour area in percent for contour filtering
    private static double mMinContourArea = 0.01; // tried 0.4
    // Color radius for range checking in HSV color space
    private Scalar mColorRadius = new Scalar(25, 50, 50, 0); // initial value 25,50,50,0; 214,55,52,0 for the blue cap
    private Mat mSpectrum = new Mat();
    private List<MatOfPoint> mContours = new ArrayList<MatOfPoint>();

    // Cache
    Mat mPyrDownMat = new Mat();
    Mat mHsvMat = new Mat();
    Mat mMask = new Mat();
    Mat mDilatedMask = new Mat();
    Mat mHierarchy = new Mat();
    SimpleDateFormat df = new SimpleDateFormat("yyyy_MM_dd_HH_mm_yyyy");

    public void setColorRadius(Scalar radius) {
        mColorRadius = radius;
    }

    public void setHsvColor(Scalar hsvColor) {
        double minH = (hsvColor.val[0] >= mColorRadius.val[0]) ? hsvColor.val[0] - mColorRadius.val[0] : 0;
        double maxH = (hsvColor.val[0] + mColorRadius.val[0] <= 255) ? hsvColor.val[0] + mColorRadius.val[0] : 255;

        mLowerBound.val[0] = minH;
        mUpperBound.val[0] = maxH;
        mLowerBound.val[1] = hsvColor.val[1] - mColorRadius.val[1];
        mUpperBound.val[1] = hsvColor.val[1] + mColorRadius.val[1];
        mLowerBound.val[2] = hsvColor.val[2] - mColorRadius.val[2];
        mUpperBound.val[2] = hsvColor.val[2] + mColorRadius.val[2];
        mLowerBound.val[3] = 0;
        mUpperBound.val[3] = 255;

        Mat spectrumHsv = new Mat(1, (int) (maxH - minH), CvType.CV_8UC3);
        for (int j = 0; j < maxH - minH; j++) {
            byte[] tmp = {(byte) (minH + j), (byte) 255, (byte) 255};
            spectrumHsv.put(0, j, tmp);
        }
        Imgproc.cvtColor(spectrumHsv, mSpectrum, Imgproc.COLOR_HSV2BGR_FULL, 4); // COLOR_HSV2RGB_FULL
    }

    public Mat getSpectrum() {
        return mSpectrum;
    }

    public void setMinContourArea(double area) {
        mMinContourArea = area;
    }

    public void process(Mat rgbaImage) {
        Scalar colorGreen = new Scalar(0, 128, 0);
        Imgproc.pyrDown(rgbaImage, mPyrDownMat);
        Imgproc.pyrDown(mPyrDownMat, mPyrDownMat);
        Imgproc.cvtColor(mPyrDownMat, mHsvMat, Imgproc.COLOR_BGR2HSV_FULL);
        Core.inRange(mHsvMat, mLowerBound, mUpperBound, mMask);
        Imgproc.dilate(mMask, mDilatedMask, new Mat());

        List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
        Imgproc.findContours(mDilatedMask, contours, mHierarchy, Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);

        // Find max contour area
        double maxArea = 0;
        Iterator<MatOfPoint> each = contours.iterator();
        while (each.hasNext()) {
            MatOfPoint wrapper = each.next();
            double area = Imgproc.contourArea(wrapper);
            if (area > maxArea)
                maxArea = area;
        }

        // Filter contours by area and resize to fit the original image size
        mContours.clear();
        each = contours.iterator();
        while (each.hasNext()) {
            MatOfPoint contour = each.next();
            // && rather than || so the area is actually constrained to a range
            // (tried: mMinContourArea*maxArea; red 300-440; green 510-1600, 880-1800)
            if (Imgproc.contourArea(contour) >= 49656 && Imgproc.contourArea(contour) < 53177) {
                Core.multiply(contour, new Scalar(4, 4), contour);
                mContours.add(contour);
            }
        }

        File path = Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_PICTURES);
        String filename = "christ" + df.format(new Date()).toString() + ".png";
        File file = new File(path, filename);
        filename = file.toString();
        Boolean save;
        MatOfPoint2f approxCurve = new MatOfPoint2f();

        for (int i = 0; i < contours.size(); i++) {
            MatOfPoint2f countour2f = new MatOfPoint2f(contours.get(i).toArray());
            double approxDistance = Imgproc.arcLength(countour2f, true) * 0.02;
            Imgproc.approxPolyDP(countour2f, approxCurve, approxDistance, true);
            // Convert back to contour
            MatOfPoint points = new MatOfPoint(approxCurve.toArray());
            // Get bounding rect of contour
            Rect rect = Imgproc.boundingRect(points);
            // Extract (and optionally save) the region of interest
            Mat ROI = rgbaImage.submat(rect.y, rect.y + rect.height, rect.x, rect.x + rect.width);
            // save = Highgui.imwrite(filename, ROI);
            // if (save)
            //     Log.i("Save Status", "SUCCESS writing image to external storage");
            // else
            //     Log.i("Save Status", "Fail writing image to external storage");
            // Draw the enclosing rectangle
            Core.rectangle(rgbaImage, new Point(rect.x, rect.y), new Point(rect.x + rect.width, rect.y + rect.height), new Scalar(255, 225, 0, 0), 3);
        }
    }

    public List<MatOfPoint> getContours() {
        return mContours;
    }
}
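To illustrate what mColorRadius does: setHsvColor turns a single HSV sample (the touched pixel) into a [lower, upper] range by expanding each channel by the corresponding radius component, clamping hue to [0, 255] (the `_FULL` conversions map hue onto the full byte range). A standalone sketch of that arithmetic, with hypothetical helper names and plain arrays instead of Scalar:

```java
public class HsvRange {
    // Expand an HSV center color into lower/upper bounds using per-channel
    // radii, mirroring ColorBlobDetector.setHsvColor. Hue is clamped to [0, 255].
    static double[][] boundsFor(double[] hsv, double[] radius) {
        double minH = Math.max(hsv[0] - radius[0], 0);
        double maxH = Math.min(hsv[0] + radius[0], 255);
        double[] lower = {minH, hsv[1] - radius[1], hsv[2] - radius[2], 0};
        double[] upper = {maxH, hsv[1] + radius[1], hsv[2] + radius[2], 255};
        return new double[][]{lower, upper};
    }

    public static void main(String[] args) {
        // A sampled pixel of roughly amber hue, with the sample's default radius
        double[][] b = boundsFor(new double[]{30, 200, 200}, new double[]{25, 50, 50});
        System.out.println(java.util.Arrays.toString(b[0])); // [5.0, 150.0, 150.0, 0.0]
        System.out.println(java.util.Arrays.toString(b[1])); // [55.0, 250.0, 250.0, 255.0]
    }
}
```

Every pixel whose H, S, and V all fall inside these bounds survives Core.inRange; a larger radius therefore accepts a wider band of similar colors, while a radius of zero matches only the exact sampled color, which is why hand-entered tight bounds tend to match almost nothing or only near-black pixels.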
I am currently working on a project in Processing and need to do image processing in it. Initially I thought of using OpenCV, but unfortunately I found that the OpenCV library for Processing is not the complete version of the original.
How can I do image processing in Processing? Since Processing is a wrapper around Java, Java code is accepted. Can I use JavaCV inside Processing? If so, how?
Here is the sample code:
import gab.opencv.*;
import org.opencv.imgproc.Imgproc;
import org.opencv.core.Core;
import org.opencv.highgui.Highgui;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.MatOfPoint2f;
import org.opencv.core.CvType;
import org.opencv.core.Point;
import org.opencv.core.Size;
import org.opencv.core.Core.MinMaxLocResult;

PImage imgBack, rightSection, leftSection;
PImage img;

void setup() {
  imgBack = loadImage("tk100backback.jpg");
  leftSection = imgBack.get(0, 0, 14, 200);
  rightSection = imgBack.get(438, 0, 32, 200);
  img = createImage(46, 200, RGB);
  img.set(0, 0, rightSection);
  img.set(32, 0, leftSection);
  size(46, 200);

  // Note: imread expects a file path; img.toString() is not a path,
  // so this Mat will load empty as written
  Mat src = Highgui.imread(img.toString());
  Mat tmp = Highgui.imread("templateStarMatching.jpg");
  int result_cols = src.cols() - tmp.cols() + 1;
  int result_rows = src.rows() - tmp.rows() + 1;
  Mat result = new Mat(result_rows, result_cols, CvType.CV_32FC1);
  Imgproc.matchTemplate(src, tmp, result, Imgproc.TM_CCOEFF_NORMED);

  MinMaxLocResult mrec = Core.minMaxLoc(result);
  System.out.println(mrec.minVal);
  System.out.println(mrec.maxVal);
  Point point = new Point(mrec.maxLoc.x + tmp.width(), mrec.maxLoc.y + tmp.height());
  // cvRectangle(src, mrec.maxLoc, point, CvScalar.WHITE, 2, 8, 0); // draw a rectangle on the matched region
}

void draw() {
  image(img, 0, 0);
}
It continuously gives me errors that Core doesn't exist and Highgui is not properly installed, even though they are properly installed.
It looks like you're trying to do template matching with OpenCV. It sounds like your errors stem from the OpenCV install, not your code.
OpenCV Issues
1. Did you install OpenCV previously? A previous install can conflict with the Processing version.
2. If not, try reducing your code down until you reach the line that first causes an error. If it's on the import statements, you know it's an OpenCV install issue.
3. You don't list your OS, but if you're on a Mac you can follow my detailed instructions for installing it.
Can I Do It With Another Library?
Yes, see some of the comments to your question. But the specific process I think you are trying to do will be difficult. I think you're better off getting OpenCV working.
The code below reads an image in Processing and then applies a filter (in this case it pixelates it).
Note that img.pixels is a one-dimensional array, while pictures are 2D. A trick to access the 2D location is img.pixels[r*img.width + c], where r and c are the pixel's row and column, respectively.
// Declaring a variable of type PImage
PImage img;

void setup() {
  // Make a new instance of a PImage by loading an image file
  img = loadImage("http://www.washingtonpost.com/wp-srv/special/lifestyle/the-age-of-obama/img/obama-v2/obama09.jpg");
  //img = loadImage("background.jpg");
  size(img.width, img.height);

  int uberPixel = 25;
  for (int row = 0; row < img.height; row += uberPixel) {
    for (int col = 0; col < img.width; col += uberPixel) {
      int colo[] = new int[3];
      int cnt = 0;
      // Sum the color channels over this uberPixel block
      // (< rather than <= so neighboring blocks do not overlap)
      for (int r = row; r < row + uberPixel; r++) {
        for (int c = col; c < col + uberPixel; c++) {
          if (r * img.width + c < img.pixels.length) {
            colo[0] += red(img.pixels[r * img.width + c]);
            colo[1] += green(img.pixels[r * img.width + c]);
            colo[2] += blue(img.pixels[r * img.width + c]);
            cnt++;
          }
        }
      }
      // Average color
      for (int i = 0; i < 3; i++) {
        colo[i] /= cnt;
      }
      // Change pixels
      for (int r = row; r < row + uberPixel; r++) {
        for (int c = col; c < col + uberPixel; c++) {
          if (r * img.width + c < img.pixels.length) {
            img.pixels[r * img.width + c] = color(colo[0], colo[1], colo[2]);
          }
        }
      }
    }
  }
  image(img, 0, 0);
}

void draw() {
  image(img, 0, 0);
}
Result:
Here is an example of using JavaCV in Processing:
import static com.googlecode.javacv.cpp.opencv_core.*;
import static com.googlecode.javacv.cpp.opencv_imgproc.*;
import static com.googlecode.javacv.cpp.opencv_highgui.*;

void setup() {
  size(256, 256);
  String fn = sketchPath("data/lena.jpg");
  IplImage ip = cvLoadImage(fn);
  if (ip != null) {
    // Gaussian-blur in place, convert to a PImage and draw it
    cvSmooth(ip, ip, CV_GAUSSIAN, 3);
    PImage im = ipToPImage(ip);
    image(im, 0, 0);
    cvReleaseImage(ip);
  }
}

PImage ipToPImage(IplImage ip) {
  java.awt.image.BufferedImage bImg = ip.getBufferedImage();
  PImage im = new PImage(bImg.getWidth(), bImg.getHeight(), ARGB);
  bImg.getRGB(0, 0, im.width, im.height, im.pixels, 0, im.width);
  im.updatePixels();
  return im;
}
I am working on a project where we are trying to detect whether an eye in a picture is open or closed. So far we detect the face, then the eyes, and then apply a Hough transform, hoping the iris would be the only circle when the eye is open. The problem is that a circle is produced even when the eye is closed:
Here is the code:
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfRect;
import org.opencv.core.Point;
import org.opencv.core.Rect;
import org.opencv.core.Scalar;
import org.opencv.core.Size;
import org.opencv.highgui.Highgui;
import org.opencv.objdetect.CascadeClassifier;
import org.opencv.imgproc.Imgproc;
public class FaceDetector {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        System.out.println("\nRunning FaceDetector");

        CascadeClassifier faceDetector = new CascadeClassifier("D:\\CS\\opencv\\sources\\data\\haarcascades\\haarcascade_frontalface_alt.xml");
        CascadeClassifier eyeDetector = new CascadeClassifier("D:\\CS\\opencv\\sources\\data\\haarcascades\\haarcascade_eye.xml");
        Mat image = Highgui.imread("C:\\Users\\Yousra\\Desktop\\images.jpg");
        Mat gray = Highgui.imread("C:\\Users\\Yousra\\Desktop\\eyes\\E7.png");

        String faces;
        String eyes;
        MatOfRect faceDetections = new MatOfRect();
        MatOfRect eyeDetections = new MatOfRect();
        Mat face;
        Mat crop = null;
        Mat circles = new Mat();

        faceDetector.detectMultiScale(image, faceDetections);
        for (int i = 0; i < faceDetections.toArray().length; i++) {
            faces = "Face" + i + ".png";
            face = image.submat(faceDetections.toArray()[i]);
            crop = face.submat(4, (2 * face.width()) / 3, 0, face.height());
            Highgui.imwrite(faces, face);
            eyeDetector.detectMultiScale(crop, eyeDetections, 1.1, 2, 0, new Size(30, 30), new Size());
            if (eyeDetections.toArray().length == 0) {
                System.out.println("Not a face " + i);
            } else {
                System.out.println("Face with " + eyeDetections.toArray().length + " eyes");
                for (int j = 0; j < eyeDetections.toArray().length; j++) {
                    System.out.println("Eye");
                    Mat eye = crop.submat(eyeDetections.toArray()[j]);
                    eyes = "Eye" + j + ".png";
                    Highgui.imwrite(eyes, eye);
                }
            }
        }

        Imgproc.cvtColor(gray, gray, Imgproc.COLOR_BGR2GRAY);
        System.out.println("1 Hough: " + circles.size()); // circles is still empty at this point
        float circle[] = new float[3];
        for (int i = 0; i < circles.cols(); i++) {
            circles.get(0, i, circle);
            org.opencv.core.Point center = new org.opencv.core.Point();
            center.x = circle[0];
            center.y = circle[1];
            Core.circle(gray, center, (int) circle[2], new Scalar(255, 255, 100, 1), 4);
        }

        Imgproc.Canny(gray, gray, 200, 10, 3, false);
        Imgproc.HoughCircles(gray, circles, Imgproc.CV_HOUGH_GRADIENT, 1, 100, 80, 10, 10, 50);
        System.out.println("2 Hough: " + circles.size());
        for (int i = 0; i < circles.cols(); i++) {
            circles.get(0, i, circle);
            org.opencv.core.Point center = new org.opencv.core.Point();
            center.x = circle[0];
            center.y = circle[1];
            Core.circle(gray, center, (int) circle[2], new Scalar(255, 255, 100, 1), 4);
        }

        Imgproc.Canny(gray, gray, 200, 10, 3, false);
        Imgproc.HoughCircles(gray, circles, Imgproc.CV_HOUGH_GRADIENT, 1, 100, 80, 10, 10, 50);
        System.out.println("3 Hough: " + circles.size());
        for (int i = 0; i < circles.cols(); i++) {
            circles.get(0, i, circle);
            org.opencv.core.Point center = new org.opencv.core.Point();
            center.x = circle[0];
            center.y = circle[1];
            Core.circle(gray, center, (int) circle[2], new Scalar(255, 255, 100, 1), 4);
        }

        String hough = "afterhough.png";
        Highgui.imwrite(hough, gray);
    }
}
How to make it more accurate?
A circular Hough transform is unlikely to work well in the majority of cases, i.e. where the eye is partially open or closed. You'd be better off isolating rectangular regions (bounding boxes) around the eyes and computing a measure based on pixel intensities (grey levels). For example, the variance of pixel intensities within the region is a good discriminator between open and closed eyes. A bounding box around the eyes can be obtained quite reliably from its position relative to the face bounding box detected with the OpenCV Haar cascades. Figure 3 in this paper gives some idea of the localization process:
http://personal.ee.surrey.ac.uk/Personal/J.Collomosse/pubs/Malleson-IJCV-2012.pdf
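The variance measure suggested above is straightforward to compute from the grey levels inside the eye's bounding box. A minimal sketch on a flat array of intensities (plain Java, OpenCV-independent; any decision threshold would need tuning on real data):

```java
public class EyeVariance {
    // Variance of pixel intensities: an open eye (dark pupil plus bright
    // sclera) typically has a much higher variance than a closed eyelid
    // of fairly uniform skin tone.
    static double variance(int[] pixels) {
        double mean = 0;
        for (int p : pixels) mean += p;
        mean /= pixels.length;
        double var = 0;
        for (int p : pixels) var += (p - mean) * (p - mean);
        return var / pixels.length;
    }

    public static void main(String[] args) {
        int[] openEye   = {20, 30, 240, 250, 25, 245, 30, 240};     // dark pupil + bright sclera
        int[] closedEye = {120, 125, 130, 122, 128, 126, 124, 127}; // uniform eyelid skin
        System.out.println(variance(openEye) > variance(closedEye)); // true
    }
}
```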
You can check the circles.cols() value: if it is 2, the eyes are open, and if it is 0, the eyes are closed. You can also detect a blink when the value of circles.cols() changes from 2 to 0. The Hough transform will not detect a circle if the eyes are closed.
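The blink idea in this answer amounts to watching for a transition in the per-frame circle count. A minimal, hypothetical sketch of that logic over a sequence of circles.cols() values:

```java
public class BlinkDetector {
    private int lastCount = -1;
    private int blinks = 0;

    // Feed the circle count for each frame; a drop from 2 (both irises
    // visible) to 0 (no circles found) is counted as one blink.
    void onFrame(int circleCount) {
        if (lastCount == 2 && circleCount == 0) blinks++;
        lastCount = circleCount;
    }

    int blinkCount() {
        return blinks;
    }

    public static void main(String[] args) {
        BlinkDetector d = new BlinkDetector();
        int[] counts = {2, 2, 0, 0, 2, 2, 0, 2}; // two open->closed transitions
        for (int c : counts) d.onFrame(c);
        System.out.println(d.blinkCount()); // 2
    }
}
```

In practice the raw Hough output is noisy frame to frame, so the count would usually be smoothed (e.g. requiring the new state to persist for a few frames) before counting a transition as a blink.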