Basically, I want to achieve this, and so far, I've written the following Java code...
// Display the camera frame
public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
    // The object's width and height are set to 0
    objectWidth = objectHeight = 0;
    // The frame is captured as a coloured image
    frame = inputFrame.rgba();
    /** Since the Canny algorithm only works on greyscale images and the captured image is
     * coloured, we transform the captured camera image into a greyscale one
     */
    Imgproc.cvtColor(frame, grey, Imgproc.COLOR_RGB2GRAY);
    // Calculate the borders of the image using the Canny algorithm
    Imgproc.Canny(grey, canny, 180, 210);
    /** To avoid background noise (introduced by the camera) that makes the system too
     * sensitive to small variations, the image is blurred to a small extent. Blurring is
     * one of the required steps before any image transformation because it eliminates
     * small details that are of no use. Blur is a low-pass filter.
     */
    Imgproc.GaussianBlur(canny, canny, new Size(5, 5), 5);
    // Calculate the contours
    Imgproc.findContours(canny, contours, new Mat(), Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
    /** The contours come in different sequences:
     * 1 sequence for each connected component.
     * On the assumption that only 1 object is in view, if we have more than 1 connected
     * component, it will be considered part of the details of the object.
     *
     * For this, we put all contours together in a single sequence.
     * If there is at least 1 contour, I can continue processing.
     */
    for (MatOfPoint mat : contours) {
        // Retrieve and store all contours in one giant Mat
        mat.copyTo(allContours);
    }
    MatOfPoint2f allCon = new MatOfPoint2f(allContours.toArray());
    // Calculate the minimal rectangle that contains the contours
    RotatedRect box = Imgproc.minAreaRect(allCon);
    // Get the vertices of the rectangle
    Point[] vertices = initialiseWithDefaultPointInstances(4);
    box.points(vertices);
    // Now that the vertices are in hand, temporal smoothing can be performed.
    for (int i = 0; i < 4; i++) {
        // Smooth the x coordinate of the vertex
        vertices[i].x = alpha * lastVertices[i].x + (1.0 - alpha) * vertices[i].x;
        // Smooth the y coordinate of the vertex
        vertices[i].y = alpha * lastVertices[i].y + (1.0 - alpha) * vertices[i].y;
        // Store the smoothed values as lastVertices for the next smoothing pass
        lastVertices[i] = vertices[i];
    }
    /** With the vertices, the object size is calculated.
     * The object size is calculated through the Pythagorean theorem, which gives
     * the distance between 2 points in a two-dimensional space.
     *
     * For a rectangle, considering any vertex V, its two sizes (width and height) can
     * be calculated as the distance of V from the previous vertex and the distance of
     * V from the next vertex. This is why I calculate the distance between
     * vertices[0]/vertices[3] and vertices[0]/vertices[1].
     */
    objectWidth = (int) (conversionFactor * Math.sqrt((vertices[0].x - vertices[3].x) * (vertices[0].x - vertices[3].x) + (vertices[0].y - vertices[3].y) * (vertices[0].y - vertices[3].y)));
    objectHeight = (int) (conversionFactor * Math.sqrt((vertices[0].x - vertices[1].x) * (vertices[0].x - vertices[1].x) + (vertices[0].y - vertices[1].y) * (vertices[0].y - vertices[1].y)));
    /** Draw the rectangle containing the contours. The line method draws a line from 1
     * point to the next, and accepts only integer coordinates; this is why 2
     * temporary Points are created and why the Math.round method is used.
     */
    Point pt1 = new Point();
    Point pt2 = new Point();
    for (int i = 0; i < 4; i++) {
        pt1.x = Math.round(vertices[i].x);
        pt1.y = Math.round(vertices[i].y);
        pt2.x = Math.round(vertices[(i + 1) % 4].x);
        pt2.y = Math.round(vertices[(i + 1) % 4].y);
        Imgproc.line(frame, pt1, pt2, red, 3);
    }
    // If the width and height are non-zero, print the object size on-screen
    if (objectWidth != 0 && objectHeight != 0) {
        String text = String.format("%d x %d", objectWidth, objectHeight);
        widthValue.setText(text);
    }
    // This function must return the frame to display
    return frame;
}

// Initialise an array of points
public static Point[] initialiseWithDefaultPointInstances(int length) {
    Point[] array = new Point[length];
    for (int i = 0; i < length; i++) {
        array[i] = new Point();
    }
    return array;
}
What I want to achieve is drawing a rectangle on-screen that contains the object's contours (edges). If anyone knows the answer to my question, please feel free to comment below; I have been stuck on this for a couple of hours.
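One detail worth checking (my observation, not a confirmed fix): Mat.copyTo overwrites its destination, so the merge loop above keeps only the last contour in allContours. A minimal sketch of one way to merge every contour's points before calling minAreaRect instead:
// Sketch only: collect all contour points into one set, then fit one box
List<Point> allPoints = new ArrayList<>();
for (MatOfPoint contour : contours) {
    allPoints.addAll(contour.toList());
}
MatOfPoint2f allCon = new MatOfPoint2f(allPoints.toArray(new Point[0]));
RotatedRect box = Imgproc.minAreaRect(allCon);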
Here's the code referenced in the comment "How to draw a rectangle containing an object in Android (Java, OpenCV)":
public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
    // The objects' widths and heights start out empty
    List<Integer> objectWidth = new ArrayList<>();
    List<Integer> objectHeight = new ArrayList<>();
    // The frame is captured as a coloured image
    Mat frame = inputFrame.rgba();
    Mat gray = new Mat();
    Mat canny = new Mat();
    List<MatOfPoint> contours = new ArrayList<>();
    /** Since the Canny algorithm only works on greyscale images and the captured image is
     * coloured, we transform the captured camera image into a greyscale one
     */
    Imgproc.cvtColor(frame, gray, Imgproc.COLOR_RGB2GRAY);
    // Calculate the borders of the image using the Canny algorithm
    Imgproc.Canny(gray, canny, 180, 210);
    /** To avoid background noise (introduced by the camera) that makes the system too
     * sensitive to small variations, the image is blurred to a small extent. Blurring is
     * one of the required steps before any image transformation because it eliminates
     * small details that are of no use. Blur is a low-pass filter.
     */
    Imgproc.GaussianBlur(canny, canny, new Size(5, 5), 5);
    // Calculate the contours
    Imgproc.findContours(canny, contours, new Mat(), Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
    /** The contours come in different sequences:
     * 1 sequence for each connected component.
     * On the assumption that only 1 object is in view, if we have more than 1 connected
     * component, it will be considered part of the details of the object.
     *
     * If there is at least 1 contour, I can continue processing.
     */
    if (contours.size() > 0) {
        // Calculate the minimal rectangle that contains each contour
        List<RotatedRect> boxes = new ArrayList<>();
        for (MatOfPoint contour : contours) {
            RotatedRect box = Imgproc.minAreaRect(new MatOfPoint2f(contour.toArray()));
            boxes.add(box);
        }
        // Get the vertices of each rectangle
        List<Point[]> vertices = initialiseWithDefaultPointInstances(boxes.size(), 4);
        for (int i = 0; i < boxes.size(); i++) {
            boxes.get(i).points(vertices.get(i));
        }
        /*
        double alpha = 0.5;
        // Now that the vertices are in hand, temporal smoothing can be performed.
        for (int i = 0; i < vertices.size(); i++) {
            for (int j = 0; j < 4; j++) {
                // Smooth the x coordinate of the vertex
                vertices.get(i)[j].x = alpha * lastVertices.get(i)[j].x + (1.0 - alpha) * vertices.get(i)[j].x;
                // Smooth the y coordinate of the vertex
                vertices.get(i)[j].y = alpha * lastVertices.get(i)[j].y + (1.0 - alpha) * vertices.get(i)[j].y;
                // Store the smoothed values as lastVertices for the next smoothing pass
                lastVertices.get(i)[j] = vertices.get(i)[j];
            }
        }
        */
        /** With the vertices, the object size is calculated.
         * The object size is calculated through the Pythagorean theorem, which gives
         * the distance between 2 points in a two-dimensional space.
         *
         * For a rectangle, considering any vertex V, its two sizes (width and height) can
         * be calculated as the distance of V from the previous vertex and the distance of
         * V from the next vertex. This is why I calculate the distance between
         * vertices[0]/vertices[3] and vertices[0]/vertices[1].
         */
        double conversionFactor = 1.0;
        for (Point[] points : vertices) {
            int width = (int) (conversionFactor * Math.sqrt((points[0].x - points[3].x) * (points[0].x - points[3].x) + (points[0].y - points[3].y) * (points[0].y - points[3].y)));
            int height = (int) (conversionFactor * Math.sqrt((points[0].x - points[1].x) * (points[0].x - points[1].x) + (points[0].y - points[1].y) * (points[0].y - points[1].y)));
            objectWidth.add(width);
            objectHeight.add(height);
        }
        /** Draw the rectangles containing the contours. The line method draws a line from 1
         * point to the next, and accepts only integer coordinates; this is why 2
         * temporary Points are created and why the Math.round method is used.
         */
        Scalar red = new Scalar(255, 0, 0, 255);
        for (int i = 0; i < vertices.size(); i++) {
            Point pt1 = new Point();
            Point pt2 = new Point();
            for (int j = 0; j < 4; j++) {
                pt1.x = Math.round(vertices.get(i)[j].x);
                pt1.y = Math.round(vertices.get(i)[j].y);
                pt2.x = Math.round(vertices.get(i)[(j + 1) % 4].x);
                pt2.y = Math.round(vertices.get(i)[(j + 1) % 4].y);
                Imgproc.line(frame, pt1, pt2, red, 3);
            }
            if (objectWidth.get(i) != 0 && objectHeight.get(i) != 0) {
                // Label each box with its own size (objectWidth.get(i), not the whole list)
                Imgproc.putText(frame, "width: " + objectWidth.get(i) + ", height: " + objectHeight.get(i),
                        new Point(Math.round(vertices.get(i)[1].x), Math.round(vertices.get(i)[1].y)), 1, 1, red);
            }
        }
    }
    // This function must return the frame to display
    return frame;
}

// Initialise a list of point arrays
public static List<Point[]> initialiseWithDefaultPointInstances(int n_Contours, int n_Points) {
    List<Point[]> pointsList = new ArrayList<>();
    for (int i = 0; i < n_Contours; i++) {
        Point[] array = new Point[n_Points];
        for (int j = 0; j < n_Points; j++) {
            array[j] = new Point();
        }
        pointsList.add(array);
    }
    return pointsList;
}
I'm currently trying to close the contours on the right of this picture:
Sample.
The reason for the open contour lies in kabeja, a library for converting DXF files into images. It seems that on some images it doesn't convert the last pixel column (or row), and that's why the sample picture is open.
I had the idea to use Core.copyMakeBorder() in OpenCV to add some space to the picture. After that I tried to use Imgproc.approxPolyDP() to close the contour, but this doesn't work. I tried it with different epsilon values: Pics (EDIT: I can't post more than 2 links).
The reason is maybe that the contour surrounds the line; it never closes the contour where I want it to.
I tried another method using Imgproc.convexHull(), which delivers this one: ConvexHull.
This could be useful for me, but I have no idea how to take out the part of the convex hull I need and merge it together with the contour to close it.
I hope that someone has an idea.
Here is my method for Imgproc.approxPolyDP():
public static ArrayList<MatOfPoint> makeComplete(Mat mat) {
    System.out.println("makeComplete: START");
    Mat dst = new Mat();
    Core.copyMakeBorder(mat, dst, 10, 10, 10, 10, Core.BORDER_CONSTANT);
    ArrayList<MatOfPoint> cnts = Tools.getContours(dst);
    ArrayList<MatOfPoint2f> opened = new ArrayList<>();
    // convert to MatOfPoint2f to use approxPolyDP
    for (MatOfPoint m : cnts) {
        MatOfPoint2f temp = new MatOfPoint2f(m.toArray());
        opened.add(temp);
        System.out.println("First loop runs");
    }
    ArrayList<MatOfPoint> closed = new ArrayList<>();
    for (MatOfPoint2f conts : opened) {
        MatOfPoint2f temp = new MatOfPoint2f();
        Imgproc.approxPolyDP(conts, temp, 3, true);
        MatOfPoint closedTemp = new MatOfPoint(temp.toArray());
        closed.add(closedTemp);
        System.out.println("Second loop runs");
    }
    System.out.println("makeComplete: END");
    return closed;
}
And here is the code for Imgproc.convexHull():
public static ArrayList<MatOfPoint> getConvexHull(Mat mat) {
    Mat dst = new Mat();
    Core.copyMakeBorder(mat, dst, 10, 10, 10, 10, Core.BORDER_CONSTANT);
    ArrayList<MatOfPoint> cnts = Tools.getContours(dst);
    ArrayList<MatOfPoint> out = new ArrayList<MatOfPoint>();
    MatOfPoint mopIn = cnts.get(0);
    MatOfInt hull = new MatOfInt();
    Imgproc.convexHull(mopIn, hull, false);
    MatOfPoint mopOut = new MatOfPoint();
    mopOut.create((int) hull.size().height, 1, CvType.CV_32SC2);
    for (int i = 0; i < hull.size().height; i++) {
        int index = (int) hull.get(i, 0)[0];
        double[] point = new double[]{
                mopIn.get(index, 0)[0], mopIn.get(index, 0)[1]
        };
        mopOut.put(i, 0, point);
    }
    out.add(mopOut);
    return out;
}
Best regards,
Brk
Assuming the premise is correct that the last row isn't converted, i.e. it is missing (the column case is similar), then try the following. Assume x goes from left to right and y from top to bottom. We add one row of empty (white?) pixels at the bottom of the image and then go from left to right. Below is pseudo code:
// EMPTY - the value of the background, e.g. white for the sample image
PixelType curPixel = EMPTY;
int y = height - 1; // last row, the one we added
for (int x = 0; x < width; ++x)
{
    // img(y, x)   - the current pixel, which is "empty"
    // img(y-1, x) - the pixel above the current one
    if (img(y-1, x) != img(y, x))
    {
        // the pixel above isn't empty, so we make the current pixel non-empty
        img(y, x) = img(y-1, x);
        // if we were drawing, then stop; otherwise - start
        if (curPixel == EMPTY)
            curPixel = img(y-1, x);
        else
            curPixel = EMPTY;
    }
    else
    {
        img(y, x) = curPixel;
    }
}
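If a concrete version helps, here is a minimal Java/OpenCV transliteration of that pseudo code, a sketch only: it assumes an 8-bit single-channel Mat whose background (EMPTY) value is 255 and whose empty bottom row was already added with Core.copyMakeBorder().
// Sketch: close gaps along the (added) bottom row of a binary image
static void closeAlongBottomRow(Mat img) {
    final double EMPTY = 255; // background value (assumption: white)
    double curPixel = EMPTY;
    int y = img.rows() - 1; // last row, the one we added
    for (int x = 0; x < img.cols(); x++) {
        double above = img.get(y - 1, x)[0]; // pixel above the current one
        if (above != img.get(y, x)[0]) {
            // the pixel above isn't empty, so make the current pixel non-empty
            img.put(y, x, above);
            // if we were drawing, stop; otherwise start
            curPixel = (curPixel == EMPTY) ? above : EMPTY;
        } else {
            img.put(y, x, curPixel);
        }
    }
}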
I am using Processing in Java to perpetually draw a line graph. This requires a clearing rectangle that is drawn over the old lines to make room for the new part of the graph. Everything works fine, but when I call a certain method, the clearing stops working as it did before. Basically, the clearing works by drawing a rectangle in front of the line's current position.
Below are the two main methods involved. The drawGraph method works fine until I call redrawGraph, which redraws the graph based on the zoom. I think the center variable is the cause of the problem, but I cannot figure out why.
public void drawGraph()
{
    checkZoom();
    int currentValue = seismograph.getCurrentValue();
    int lastValue = seismograph.getLastValue();
    step = step + zoom;
    if (step > offset) {
        if (restartDraw == true)
        {
            drawOnGraphics(step - zoom, lastY2, step, currentValue);
            image(graphGraphics, 0, 0);
            restartDraw = false;
        } else {
            drawOnGraphics(step - zoom, lastValue, step, currentValue);
            image(graphGraphics, 0, 0);
        }
    } // draw graph (connect last point to current point); increase step along the x axis
    float xClear = step + 10; // begin clearing area in front of the current graph
    if (xClear > width - 231) {
        xClear = offset - 10; // adjust for far side of the screen
    }
    graphGraphics.beginDraw();
    if (step > graphSizeX + offset) { // draw two clearing rectangles when graph isn't split
        // left = bg.get(0, 0, Math.round(step - graphSizeX), height - 200); // capture clearing rectangle from the left side of the background image
        // graphGraphics.image(left, 0, 0); // print left clearing rectangle
        // if (step + 10 < width) {
        //     right = bg.get(Math.round(step + 10), 0, width, height - 200); // capture clearing rectangle from the right side of the background image
        //     // print right clearing rectangle
        // }
    } else { // draw one clearing rectangle when graph is split
        center = bg.get(Math.round(xClear), lastY2, offset, height - 200); // capture clearing rectangle from the center of the background image
        graphGraphics.image(center, xClear - 5, 0); // print center clearing rectangle
    }
    if (step > graphSizeX + offset) { // reset step when the graph reaches the end
        step = 0 + offset;
    }
    graphGraphics.endDraw();
    image(graphGraphics, 0, 0);
    System.out.println("step: " + step + " zoom: " + zoom + " currentValue: " + currentValue + " lastValue: " + lastValue);
}
private void redrawGraph() // resizes graph when zooming
{
    checkZoom();
    Object[] data = seismograph.theData.toArray();
    clearGraph();
    step = offset;
    int y2, y1 = 0;
    int zoomSize = (int) ((width - offset) / zoom);
    int tempCount = 0;
    graphGraphics.beginDraw();
    graphGraphics.strokeWeight(2); // line thickness
    graphGraphics.stroke(242, 100, 66);
    graphGraphics.smooth();
    while (tempCount < data.length)
    {
        try
        {
            y2 = (int) data[tempCount];
            step = step + zoom;
            if (step > offset && y1 > 0 && step < graphSizeX + offset) {
                graphGraphics.line(step - zoom, y1, step, y2);
            }
            y1 = y2;
            tempCount++;
            lastY2 = y2;
        }
        catch (Exception e)
        {
            System.out.println(e.toString());
        }
    }
    graphGraphics.endDraw();
    image(graphGraphics, 0, 0);
    restartDraw = true;
}
Any help and criticisms are welcome. Thank you for your valuable time.
I'm not sure if that approach is the best. You can try something as simple as this:
// Learning Processing
// Daniel Shiffman
// http://www.learningprocessing.com
// Example: a graph of random numbers
float[] vals;
void setup() {
    size(400, 200);
    smooth();
    // An array of random values
    vals = new float[width];
    for (int i = 0; i < vals.length; i++) {
        vals[i] = random(height);
    }
}

void draw() {
    background(255);
    // Draw lines connecting all points
    for (int i = 0; i < vals.length - 1; i++) {
        stroke(0);
        strokeWeight(2);
        line(i, vals[i], i + 1, vals[i + 1]);
    }
    // Slide everything down in the array
    for (int i = 0; i < vals.length - 1; i++) {
        vals[i] = vals[i + 1];
    }
    // Add a new random value
    vals[vals.length - 1] = random(height); // use seismograph.getCurrentValue() here instead
}
You can easily do the same using a PGraphics buffer as your code suggests:
// Learning Processing
// Daniel Shiffman
// http://www.learningprocessing.com
// Example: a graph of random numbers
float[] vals;
PGraphics graph;
void setup() {
    size(400, 200);
    graph = createGraphics(width, height);
    // An array of random values
    vals = new float[width];
    for (int i = 0; i < vals.length; i++) {
        vals[i] = random(height);
    }
}

void draw() {
    graph.beginDraw();
    graph.background(255);
    // Draw lines connecting all points
    for (int i = 0; i < vals.length - 1; i++) {
        graph.stroke(0);
        graph.strokeWeight(2);
        graph.line(i, vals[i], i + 1, vals[i + 1]);
    }
    graph.endDraw();
    image(graph, 0, 0);
    // Slide everything down in the array
    for (int i = 0; i < vals.length - 1; i++) {
        vals[i] = vals[i + 1];
    }
    // Add a new random value
    vals[vals.length - 1] = random(height); // use seismograph.getCurrentValue() here instead
}
The main idea is to cycle the newest data in an array and simply draw the values from this shifting array. As long as you clear the previous frame (background()) the graph should look ok.
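As a side note, the element-by-element shift can also be done with a single System.arraycopy call, which some find easier to read; a small equivalent sketch of just that part, using the same vals array as above:
// Slide everything down by one position in a single call
System.arraycopy(vals, 1, vals, 0, vals.length - 1);
// Add a new value at the end (use seismograph.getCurrentValue() here instead)
vals[vals.length - 1] = random(height);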
I went through many questions on Stack Overflow and was able to develop a small program that detects squares and rectangles correctly. This is my sample code:
public static CvSeq findSquares(final IplImage src, CvMemStorage storage) {
    CvSeq squares = new CvContour();
    squares = cvCreateSeq(0, sizeof(CvContour.class), sizeof(CvSeq.class), storage);
    IplImage pyr = null, timg = null, gray = null, tgray;
    timg = cvCloneImage(src);
    CvSize sz = cvSize(src.width(), src.height());
    tgray = cvCreateImage(sz, src.depth(), 1);
    gray = cvCreateImage(sz, src.depth(), 1);
    // cvCvtColor(gray, src, 1);
    pyr = cvCreateImage(cvSize(sz.width() / 2, sz.height() / 2), src.depth(), src.nChannels());
    // down-scale and upscale the image to filter out the noise
    // cvPyrDown(timg, pyr, CV_GAUSSIAN_5x5);
    // cvPyrUp(pyr, timg, CV_GAUSSIAN_5x5);
    // cvSaveImage("ha.jpg", timg);
    CvSeq contours = new CvContour();
    // request closing of the application when the image window is closed
    // show image on window
    // find squares in every color plane of the image
    for (int c = 0; c < 3; c++) {
        IplImage channels[] = { cvCreateImage(sz, 8, 1), cvCreateImage(sz, 8, 1), cvCreateImage(sz, 8, 1) };
        channels[c] = cvCreateImage(sz, 8, 1);
        if (src.nChannels() > 1) {
            cvSplit(timg, channels[0], channels[1], channels[2], null);
        } else {
            tgray = cvCloneImage(timg);
        }
        tgray = channels[c];
        // try several threshold levels
        for (int l = 0; l < N; l++) {
            // hack: use Canny instead of zero threshold level.
            // Canny helps to catch squares with gradient shading
            if (l == 0) {
                // apply Canny. Take the upper threshold from slider
                // and set the lower to 0 (which forces edges merging)
                cvCanny(tgray, gray, 0, thresh, 5);
                // dilate canny output to remove potential
                // holes between edge segments
                cvDilate(gray, gray, null, 1);
            } else {
                // apply threshold if l!=0:
                cvThreshold(tgray, gray, (l + 1) * 255 / N, 255, CV_THRESH_BINARY);
            }
            // find contours and store them all as a list
            cvFindContours(gray, storage, contours, sizeof(CvContour.class), CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);
            CvSeq approx;
            // test each contour
            while (contours != null && !contours.isNull()) {
                if (contours.elem_size() > 0) {
                    approx = cvApproxPoly(contours, Loader.sizeof(CvContour.class), storage, CV_POLY_APPROX_DP, cvContourPerimeter(contours) * 0.02, 0);
                    if (approx.total() == 4 && Math.abs(cvContourArea(approx, CV_WHOLE_SEQ, 0)) > 1000 && cvCheckContourConvexity(approx) != 0) {
                        double maxCosine = 0;
                        for (int j = 2; j < 5; j++) {
                            // find the maximum cosine of the angle between joint edges
                            double cosine = Math.abs(angle(
                                    new CvPoint(cvGetSeqElem(approx, j % 4)),
                                    new CvPoint(cvGetSeqElem(approx, j - 2)),
                                    new CvPoint(cvGetSeqElem(approx, j - 1))));
                            maxCosine = Math.max(maxCosine, cosine);
                        }
                        if (maxCosine < 0.2) {
                            CvRect x = cvBoundingRect(approx, l);
                            if ((x.width() * x.height()) < 50000) {
                                System.out.println("Width : " + x.width() + " Height : " + x.height());
                                cvSeqPush(squares, approx);
                            }
                        }
                    }
                }
                contours = contours.h_next();
            }
            contours = new CvContour();
        }
    }
    return squares;
}
I use this image to detect rectangles and squares
I need to identify the following output
and
But when I run the above code, it detects only the following rectangles, and I don't know the reason for that.
This is the output that I got.
Please be kind enough to explain the problem in the above code and give some suggestions for detecting these squares and rectangles.
Given a mask image (a binary image, like your second figure), cvFindContours() gives you the contours (several lists of points).
Look at this link: http://dasl.mem.drexel.edu/~noahKuntz/openCVTut7.html
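To sketch that idea in JavaCV's legacy API (the same one your code uses), you can walk the returned list and read each contour's bounding box; the binary image mask and the CvMemStorage storage are assumed to already exist:
// Sketch only: iterate the contours found in a binary image `mask`
CvSeq contours = new CvContour();
cvFindContours(mask, storage, contours, Loader.sizeof(CvContour.class), CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);
while (contours != null && !contours.isNull()) {
    if (contours.elem_size() > 0) {
        // cvBoundingRect gives the upright bounding box of one contour
        CvRect box = cvBoundingRect(contours, 0);
        System.out.println("x=" + box.x() + " y=" + box.y() + " width=" + box.width() + " height=" + box.height());
    }
    contours = contours.h_next();
}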
I am developing a project on component identification using the JavaCV package (OpenCV). I used a method that returns a set of rectangles on the image as a CvSeq. What I need to know is how to do the following:
How can I get each rectangle from the method's output (the CvSeq)?
How can I access the length and width of the rectangle?
This is the method that returns the rectangles:
public static CvSeq findSquares(final IplImage src, CvMemStorage storage)
{
    CvSeq squares = new CvContour();
    squares = cvCreateSeq(0, sizeof(CvContour.class), sizeof(CvSeq.class), storage);
    IplImage pyr = null, timg = null, gray = null, tgray;
    timg = cvCloneImage(src);
    CvSize sz = cvSize(src.width() & -2, src.height() & -2);
    tgray = cvCreateImage(sz, src.depth(), 1);
    gray = cvCreateImage(sz, src.depth(), 1);
    pyr = cvCreateImage(cvSize(sz.width() / 2, sz.height() / 2), src.depth(), src.nChannels());
    // down-scale and upscale the image to filter out the noise
    cvPyrDown(timg, pyr, CV_GAUSSIAN_5x5);
    cvPyrUp(pyr, timg, CV_GAUSSIAN_5x5);
    cvSaveImage("ha.jpg", timg);
    CvSeq contours = new CvContour();
    // request closing of the application when the image window is closed
    // show image on window
    // find squares in every color plane of the image
    for (int c = 0; c < 3; c++)
    {
        IplImage channels[] = { cvCreateImage(sz, 8, 1), cvCreateImage(sz, 8, 1), cvCreateImage(sz, 8, 1) };
        channels[c] = cvCreateImage(sz, 8, 1);
        if (src.nChannels() > 1) {
            cvSplit(timg, channels[0], channels[1], channels[2], null);
        } else {
            tgray = cvCloneImage(timg);
        }
        tgray = channels[c]; // try several threshold levels
        for (int l = 0; l < N; l++)
        {
            // hack: use Canny instead of zero threshold level.
            // Canny helps to catch squares with gradient shading
            if (l == 0)
            {
                // apply Canny. Take the upper threshold from slider
                // and set the lower to 0 (which forces edges merging)
                cvCanny(tgray, gray, 0, thresh, 5);
                // dilate canny output to remove potential
                // holes between edge segments
                cvDilate(gray, gray, null, 1);
            }
            else
            {
                // apply threshold if l!=0:
                cvThreshold(tgray, gray, (l + 1) * 255 / N, 255, CV_THRESH_BINARY);
            }
            // find contours and store them all as a list
            cvFindContours(gray, storage, contours, sizeof(CvContour.class), CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);
            CvSeq approx;
            // test each contour
            while (contours != null && !contours.isNull()) {
                if (contours.elem_size() > 0) {
                    approx = cvApproxPoly(contours, Loader.sizeof(CvContour.class), storage, CV_POLY_APPROX_DP, cvContourPerimeter(contours) * 0.02, 0);
                    if (approx.total() == 4
                            && Math.abs(cvContourArea(approx, CV_WHOLE_SEQ, 0)) > 1000
                            && cvCheckContourConvexity(approx) != 0)
                    {
                        double maxCosine = 0;
                        for (int j = 2; j < 5; j++)
                        {
                            // find the maximum cosine of the angle between joint edges
                            double cosine = Math.abs(angle(new CvPoint(cvGetSeqElem(approx, j % 4)), new CvPoint(cvGetSeqElem(approx, j - 2)), new CvPoint(cvGetSeqElem(approx, j - 1))));
                            maxCosine = Math.max(maxCosine, cosine);
                        }
                        if (maxCosine < 0.2) {
                            cvSeqPush(squares, approx);
                        }
                    }
                }
                contours = contours.h_next();
            }
            contours = new CvContour();
        }
    }
    return squares;
}
This is the sample original image that I used
And this is the image that I got after drawing lines around the matching rectangles
Actually, in the above images I'm trying to remove those large rectangles and just need to identify the other rectangles, so I need some code example to understand how to achieve the above goals. Please be kind enough to share your experience with me. Thanks!
OpenCV finds contours of white objects on a black background. In your case it is the reverse: the objects are black. And that way, even the image border is also an object. So to avoid that, just invert the image so that the background is black.
Below I have demonstrated it (using OpenCV-Python):
import numpy as np
import cv2
im = cv2.imread('sofsqr.png')
img = cv2.cvtColor(im,cv2.COLOR_BGR2GRAY)
ret,thresh = cv2.threshold(img,127,255,1)
Remember, instead of using a separate function for inverting, I did it in the threshold call: just set the threshold type to BINARY_INV (i.e. the '1' argument).
Now you have an image as below:
Now we find the contours. Then for each contour, we approximate it and check whether it is a rectangle by looking at the length of the approximated contour, which should be four for a rectangle.
If drawn, you get something like this:
And at the same time, we also find the bounding rect of each contour. The bounding rect has a shape like this: [initial point x, initial point y, width of rect, height of rect]
So you get the width and height.
Below is the code:
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
for cnt in contours:
    approx = cv2.approxPolyDP(cnt, cv2.arcLength(cnt, True) * 0.02, True)
    if len(approx) == 4:
        cv2.drawContours(im, [approx], 0, (0, 0, 255), 2)
        x, y, w, h = cv2.boundingRect(cnt)
EDIT:
After some comments, I understood that the real aim of the question is to avoid large rectangles and select only the smaller ones.
It can be done using the bounding rect values we obtained: select only those rectangles whose length, breadth, or area is below a threshold value. As an example, in this image I required the area to be less than 10000 (a rough estimate). If it is less than 10000, the rectangle is selected and drawn in red; otherwise it is a false candidate, drawn in blue (just for visualization).
for cnt in contours:
    approx = cv2.approxPolyDP(cnt, cv2.arcLength(cnt, True) * 0.02, True)
    if len(approx) == 4:
        x, y, w, h = cv2.boundingRect(approx)
        if w * h < 10000:
            cv2.drawContours(im, [approx], 0, (0, 0, 255), -1)
        else:
            cv2.drawContours(im, [approx], 0, (255, 0, 0), -1)
Below is the output I got:
How do you get that threshold value?
It completely depends on you and your application, or you can find it by trial and error (I did so).
Hope that solves your problem. All the functions used are standard OpenCV functions, so I think you won't have any problem converting this to JavaCV.
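If it helps as a starting point, here is a rough, untested JavaCV transliteration of the filtering loop above, reusing only calls that already appear in the question's code (contours, storage, and squares as defined there):
// Sketch: keep only small quadrilaterals, mirroring the Python loop above
while (contours != null && !contours.isNull()) {
    if (contours.elem_size() > 0) {
        CvSeq approx = cvApproxPoly(contours, Loader.sizeof(CvContour.class), storage, CV_POLY_APPROX_DP, cvContourPerimeter(contours) * 0.02, 0);
        if (approx.total() == 4) {
            CvRect r = cvBoundingRect(approx, 0);
            if (r.width() * r.height() < 10000) {
                cvSeqPush(squares, approx); // small rectangle: keep it (red in the Python output)
            }
            // else: large rectangle, a false candidate (blue in the Python output)
        }
    }
    contours = contours.h_next();
}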
Just noticed that there is a bug in the code provided in the question:
IplImage channels[] = { cvCreateImage(sz, 8, 1), cvCreateImage(sz, 8, 1), cvCreateImage(sz, 8, 1) };
channels[c] = cvCreateImage(sz, 8, 1);
if (src.nChannels() > 1) {
    cvSplit(timg, channels[0], channels[1], channels[2], null);
} else {
    tgray = cvCloneImage(timg);
}
tgray = channels[c];
This means that if there is a single channel, tgray will be an empty image: the unconditional tgray = channels[c] on the last line overwrites the cloned image.
It should read:
IplImage channels[] = { cvCreateImage(sz, 8, 1), cvCreateImage(sz, 8, 1), cvCreateImage(sz, 8, 1) };
channels[c] = cvCreateImage(sz, 8, 1);
if (src.nChannels() > 1) {
    cvSplit(timg, channels[0], channels[1], channels[2], null);
    tgray = channels[c];
} else {
    tgray = cvCloneImage(timg);
}