Here is what I have now:
// Thresholding
Mat mat2 = mat.clone(); // mat is the original image
Imgproc.cvtColor(mat2, mat2, Imgproc.COLOR_BGR2GRAY);
Imgproc.threshold(mat2, mat2, -1, 255, Imgproc.THRESH_BINARY + Imgproc.THRESH_OTSU);
// Erosion and dilation
Mat element = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(2 * dilation_size + 1, 2 * dilation_size + 1));
Imgproc.dilate(mat2, mat2, element);
element = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(2 * erosion_size + 1, 2 * erosion_size + 1));
Imgproc.erode(mat2, mat2, element);
I wrote a program to detect rectangular blocks. After thresholding, the rectangle blocks become distorted.
Is there any way to get a better thresholding result, or to fix this?
Update
Crop from the original image (the DPI is low).
After thresholding
I would first try to apply some blur to get rid of the noise and then test the threshold again; if it still does not work, there is an OpenCV page with examples of how to denoise.
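For example, a minimal Java sketch of that idea applied to the code from the question, just adding a Gaussian blur before the Otsu threshold (the 5x5 kernel size is an assumption to tune):
Mat mat2 = mat.clone();
Imgproc.cvtColor(mat2, mat2, Imgproc.COLOR_BGR2GRAY);
// Smooth the noise first so Otsu picks a cleaner global threshold
Imgproc.GaussianBlur(mat2, mat2, new Size(5, 5), 0);
Imgproc.threshold(mat2, mat2, 0, 255, Imgproc.THRESH_BINARY + Imgproc.THRESH_OTSU);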
You should try to sharpen your image. There are many methods for this, but I am not sure which ones are implemented in OpenCV.
For example, using GIMP's unsharp mask tool (parameters 300, 5, 0), I get a result which may be easier to threshold:
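If you would rather stay inside OpenCV than switch to GIMP, a common approximation of unsharp masking is to subtract a blurred copy from the image. A rough Java sketch of that idea on the question's mat; the sigma of 3 and the 1.5/-0.5 weights are assumptions you would need to tune:
Mat gray = new Mat();
Imgproc.cvtColor(mat, gray, Imgproc.COLOR_BGR2GRAY);
Mat blurred = new Mat();
Imgproc.GaussianBlur(gray, blurred, new Size(0, 0), 3);
// sharpened = 1.5 * gray - 0.5 * blurred, an unsharp-mask style sharpening
Mat sharpened = new Mat();
Core.addWeighted(gray, 1.5, blurred, -0.5, 0, sharpened);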
This is the original image:
And here it is after pre-processing the image:
to gray
do Canny edge detection
dilate
erode
bitwise_not
The result becomes as below:
Now I want to detect all the filled circles in the above image. The result I want:
I have tried something like this:
MatOfPoint2f matOfPoint2f = new MatOfPoint2f();
MatOfPoint2f approxCurve = new MatOfPoint2f();
matOfPoint2f.fromList(contour.toList());
Imgproc.approxPolyDP(matOfPoint2f, approxCurve, Imgproc.arcLength(matOfPoint2f, true) * 0.02, true);
long total = approxCurve.total();
// now check the total: if it is greater than 6, then it can be a circle
And the result is like this, which is not what I want:
Update: (more sample images included)
UPDATE: I am updating my solution to use contours. You can find the solution
using Hough circles below it.
Contours method
I tried finding contours to mark the pipes again today, and here are the results I got with contours. I filtered the results based on contour length and area, but you can apply more constraints depending on the images you have. It looks like I have overfitted the solution to this one image, but it is the only image I have access to. You can also try a Laplacian or Canny in place of the adaptive threshold. Hope this helps :)
import cv2

img_color = cv2.imread('yNxlz.jpg')
img_gray = cv2.cvtColor(img_color, cv2.COLOR_BGR2GRAY)
image = cv2.GaussianBlur(img_gray, (5, 5), 0)

thresh = cv2.adaptiveThreshold(image, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                               cv2.THRESH_BINARY_INV, 11, 2)

contours, hierarchy = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

contour_list = []
for contour in contours:
    approx = cv2.approxPolyDP(contour, 0.01 * cv2.arcLength(contour, True), True)
    area = cv2.contourArea(contour)
    # Filter based on the number of approximated vertices and the area
    if (7 < len(approx) < 18) and (200 < area < 900):
        contour_list.append(contour)

cv2.drawContours(img_color, contour_list, -1, (255, 0, 0), 2)
cv2.imshow('Objects Detected', img_color)
cv2.waitKey(5000)
Hough Circles method
I took your image and applied Hough circles (OpenCV). I do not have a Java setup, hence I used Python. Here is the code and the corresponding results I got.
Before that, some tips to fine-tune this:
Preprocessing is important: a simple Gaussian blur gave me a very good improvement, so play with the Gaussian filter size.
Since you already know the pipe radius/diameter, exploit that information. That is, play with the minRadius and maxRadius params in HoughCircles.
You can also play with the minDist param if you know the minimum distance between the pipes.
If you know the region where pipes could be present, you can ignore false positives detected outside that region.
Hope this helps :)
Code I used
import cv2

img_color = cv2.imread('yNxlz.jpg')
img_gray = cv2.cvtColor(img_color, cv2.COLOR_BGR2GRAY)
img_gray = cv2.GaussianBlur(img_gray, (7, 7), 0)

# Hough circle transform (OpenCV 2.4 API)
circles = cv2.HoughCircles(img_gray, cv2.cv.CV_HOUGH_GRADIENT, 1, minDist=15,
                           param1=50, param2=18, minRadius=12, maxRadius=22)

if circles is not None:
    for i in circles[0, :]:
        # draw the outer circle
        cv2.circle(img_color, (i[0], i[1]), i[2], (0, 255, 0), 2)
        # draw the center of the circle
        cv2.circle(img_color, (i[0], i[1]), 2, (0, 0, 255), 3)

cv2.imwrite('with_circles.png', img_color)
cv2.imshow('circles', img_color)
cv2.waitKey(5000)
And here is the result I got.
I am using OpenCV in an Android application. I want the mobile application to automatically take a photo when a rectangle (something in the shape of a receipt, for example) is in view. I am using Canny edge detection, but when I look for contours the array size is greater than 1500. Obviously it is not optimal to loop through all the contours and find the largest one, so I was wondering: is it possible to filter out the largest contour automatically through an API?
My code so far:
private List<MatOfPoint> contours = new ArrayList<>();
private Mat hierarchy = new Mat();

@Override
public Mat onCameraFrame(final CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
    // Clear the contours list on each frame
    contours.clear();
    // Get the grayscale image
    final Mat gray = inputFrame.gray();
    // Canny edge detection
    Imgproc.Canny(gray, gray, 300, 1000, 5, true);
    // New empty black matrix to store the edges captured
    Mat dest = new Mat();
    Core.add(dest, Scalar.all(0), dest);
    // Copy the edge data over to the empty black matrix
    gray.copyTo(dest);
    // Is there a way to filter the size of contours so that not everything is returned?
    // Right now this call is returning a lot of contours (1500+).
    Imgproc.findContours(gray, contours, hierarchy, Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);
    return dest;
}
EDIT
The user will be holding the phone and I want the application to automatically take a photo when the receipt is in view.
Example receipt
I have covered the basic techniques you may use in the following Python code; it won't be hard to translate the code into the language of your choice, Java in this case. The technique involves:
Estimate the color of the object you want to segment, which is white in your case, so safe lower and upper bounds can be approximated as:
RECEIPT_LOWER_BOUND = np.array([200, 200, 200])
RECEIPT_UPPER_BOUND = np.array([255, 255, 255])
Apply some blur to the input image to smooth the color distribution, which reduces the number of small contours later on.
img_blurred = cv2.blur(img, (5, 5))
Apply dilation to the binary image to remove the neighbouring smaller contours that surround your target (largest) contour:
kernel = np.ones((10, 10), dtype=np.uint8)
mask = cv2.dilate(mask, kernel)
Now find contours in the mask after applying the above operations, and filter them on the basis of contourArea.
im, contours, hierarchy = cv2.findContours(receipt_mask.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
largest_contour = max(contours, key=lambda x: cv2.contourArea(x))
Finally, you may apply a threshold on the area to verify whether the input really contained a receipt or not (a Java sketch of this check is shown after the code below).
Code:
import cv2
import numpy as np

# You may change the following ranges to define your own lower and upper BGR bounds.
RECEIPT_LOWER_BOUND = np.array([200, 200, 200])
RECEIPT_UPPER_BOUND = np.array([255, 255, 255])


def segment_receipt(img):
    # Blur the input image to reduce the noise, which in turn reduces the number of contours
    img_blurred = cv2.blur(img, (5, 5))
    mask = cv2.inRange(img_blurred, RECEIPT_LOWER_BOUND, RECEIPT_UPPER_BOUND)

    # Also dilate the binary mask, which further reduces the salt-and-pepper noise
    kernel = np.ones((10, 10), dtype=np.uint8)
    mask = cv2.dilate(mask, kernel)
    return mask


def get_largest_contour_rect(image):
    receipt_mask = segment_receipt(image)
    im, contours, hierarchy = cv2.findContours(receipt_mask.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    print("Number of contours found:", len(contours))

    # Keep the contour with the largest area
    largest_contour = max(contours, key=lambda x: cv2.contourArea(x))
    return cv2.boundingRect(largest_contour)


image = cv2.imread("path/to/your/image.jpg")
rect = get_largest_contour_rect(image)
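The last verification step mentioned above (checking the area before accepting the detection) is not part of the snippet. Since the question targets Java, here is a minimal sketch of such a check; MIN_RECEIPT_AREA is a hypothetical value you would calibrate for your camera resolution and typical receipt size:
import org.opencv.core.MatOfPoint;
import org.opencv.imgproc.Imgproc;

public class ReceiptCheck {
    // Hypothetical minimum area in pixels; calibrate it for your camera resolution.
    static final double MIN_RECEIPT_AREA = 20000;

    // Accept the detection only if the largest contour is big enough to be a receipt.
    static boolean isReceipt(MatOfPoint largestContour) {
        return Imgproc.contourArea(largestContour) > MIN_RECEIPT_AREA;
    }
}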
Output:
@J.Doe I am currently working on such a project and I have successfully been able to isolate the largest contour in the image after a lot of processing. The only part remaining is recognizing a rectangular contour and taking the picture (see the sketch after the code below).
mRgba = inputFrame.rgba();
Imgproc.Canny(mRgba, mCanny, 50, 200);
Imgproc.cvtColor(mRgba, mGray, Imgproc.COLOR_RGB2GRAY);
Imgproc.GaussianBlur(mGray, mGray1, new Size(3, 3), 1);
Mat kernel = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(9, 9));
Imgproc.dilate(mGray1, mGray2, kernel);
Imgproc.Canny(mGray2, mCanny, 50, 200);
Imgproc.findContours(mCanny, contours, hierarchy, Imgproc.RETR_TREE, Imgproc.CHAIN_APPROX_SIMPLE);

// Find the contour with the largest area
double maxVal = 0;
int maxValIdx = 0;
for (int contourIdx = 0; contourIdx < contours.size(); contourIdx++) {
    double contourArea = Imgproc.contourArea(contours.get(contourIdx));
    if (maxVal < contourArea) {
        maxVal = contourArea;
        maxValIdx = contourIdx;
    }
}

Imgproc.drawContours(mRgba, contours, maxValIdx, new Scalar(0, 255, 255), -1);
return mRgba;
Be wary of the image (Mat) names; I changed them across the different processing steps.
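For the remaining part mentioned above (deciding whether that largest contour is actually rectangular before triggering the capture), here is a minimal Java sketch using approxPolyDP. The 4-vertex test and the 0.02 epsilon factor are assumptions you would tune:
import org.opencv.core.MatOfPoint;
import org.opencv.core.MatOfPoint2f;
import org.opencv.imgproc.Imgproc;

public class RectangleCheck {
    // Rough test: a contour is treated as rectangular if its polygonal approximation
    // has exactly 4 vertices and is convex.
    static boolean isRoughlyRectangular(MatOfPoint contour) {
        MatOfPoint2f curve = new MatOfPoint2f(contour.toArray());
        MatOfPoint2f approx = new MatOfPoint2f();
        double epsilon = 0.02 * Imgproc.arcLength(curve, true); // assumed factor
        Imgproc.approxPolyDP(curve, approx, epsilon, true);
        return approx.total() == 4 && Imgproc.isContourConvex(new MatOfPoint(approx.toArray()));
    }
}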
I'm using OpenCV in Java to try to detect circles (iris and pupil) in images of eyes, but I am not getting the expected results.
Here is my code:
// convert the source image to gray
org.opencv.imgproc.Imgproc.cvtColor(mRgba, imgCny, Imgproc.COLOR_BGR2GRAY);
// filter
org.opencv.imgproc.Imgproc.blur(imgCny, imgCny, new Size(3, 3));
// apply Canny
org.opencv.imgproc.Imgproc.Canny(imgCny, imgCny, 10, 30);
// apply Hough circle transform
Mat circles = new Mat();
Point pt;
org.opencv.imgproc.Imgproc.HoughCircles(imgCny, circles, Imgproc.CV_HOUGH_GRADIENT, imgCny.rows() / 4, 2, 200, 100, 0, 0);
// draw the found circles
for (int i = 0; i < circles.cols(); i++) {
    double vCircle[] = circles.get(0, i);
    pt = new Point((int) Math.round(vCircle[0]), (int) Math.round(vCircle[1]));
    int radius = (int) Math.round(vCircle[2]);
    Core.circle(mRgba, pt, radius, new Scalar(0, 0, 255), 3);
}
The original image:
The Canny result:
I don't know what the problem is, whether it is in the parameters of the HoughCircles call or something else.
Has anyone faced such a problem or know how to fix it?
There is no way that the Hough transform will detect THE circle you want in this Canny result! There are too many edges. You must clean the image first.
Start with black (the pupil, the inner part of the iris) and white detection. These two zones will delimit the ROI.
Moreover, I would also try to perform skin detection (a simple threshold in HSV color space). It will eliminate 90% of the search area.
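A minimal Java sketch of that HSV skin-mask idea; the bounds below are rough assumed values that you would tune for your lighting and skin tones:
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;

public class SkinMask {
    // Returns a binary mask where roughly skin-colored pixels are white.
    static Mat skinMask(Mat bgr) {
        Mat hsv = new Mat();
        Imgproc.cvtColor(bgr, hsv, Imgproc.COLOR_BGR2HSV);
        Mat mask = new Mat();
        // Assumed HSV bounds for skin; adjust for your images.
        Core.inRange(hsv, new Scalar(0, 30, 60), new Scalar(25, 180, 255), mask);
        return mask;
    }
}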
I tried FAST corner detection in Android using OpenCV4Android 2.4.6.
The keypoints are detected, but the view is not showing the drawn keypoints; either that or
Features2d.drawKeypoints
is not working, I don't know.
public Mat onCameraFrame(Mat inputFrame) {
    MatOfKeyPoint points = new MatOfKeyPoint();
    Mat mat = inputFrame;
    FeatureDetector fast = FeatureDetector.create(FeatureDetector.FAST);
    fast.detect(mat, points);
    Scalar redcolor = new Scalar(255, 0, 0);
    Mat mRgba = mat.clone();
    Imgproc.cvtColor(mat, mRgba, Imgproc.COLOR_RGBA2BGRA, 4);
    Core.line(mRgba, new Point(100, 100), new Point(300, 300), new Scalar(0, 0, 255));
    Features2d.drawKeypoints(mRgba, points, mRgba, redcolor, 3);
    return mRgba;
}
By logging, I can see that many keypoints are detected, but they are not drawn on the screen.
The line I tried to draw is displayed in the view, but the keypoints are not.
Please help.
Thank you.
I think the issue is with the DrawMatchesFlags value, which is the last input of the drawKeypoints function. Referring to the function description, you can see all the flags that can be used. I would suggest that you use DrawMatchesFlags::DEFAULT if you don't want to go into the details.
Hope this helps.
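In the Java bindings the flag is just an int (the default corresponds to 0), so the suggested call, reusing the same Mats and color from the question, would look like this sketch:
// 0 is the equivalent of DrawMatchesFlags::DEFAULT
Features2d.drawKeypoints(mRgba, points, mRgba, redcolor, 0);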
I found the answer from this link.
The problem was with Imgproc.cvtColor.
The problem is that unfortunately drawKeypoints() can't work with RGBA Mats; it accepts 8UC3 and 8UC1 only.
So if you'd like to call drawKeypoints(), you need to convert the picture to RGB and then back to RGBA to display it.
So I changed the code to Imgproc.cvtColor(mat, mRgba, Imgproc.COLOR_RGBA2RGB, 4);
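For completeness, a short sketch of the full round trip described above, reusing the mat, points and redcolor variables from the question:
Mat rgb = new Mat();
Imgproc.cvtColor(mat, rgb, Imgproc.COLOR_RGBA2RGB);   // drawKeypoints needs 8UC3 (or 8UC1)
Features2d.drawKeypoints(rgb, points, rgb, redcolor, 0);
Imgproc.cvtColor(rgb, mRgba, Imgproc.COLOR_RGB2RGBA); // back to RGBA for display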
Now it is working fine, and the bluish-color problem is gone as well.
Thanks for the answers.
I've looked at the JavaCV wrapper for the OpenCV library and saw that it is possible to use it in Java for face detection on an image, but I was wondering: is it possible to use that library for detecting traffic warning signs in an image, and how?
I have pictures taken from the road that look like this: http://www.4shared.com/photo/5QxoVDwd/041.html
and the result of detection should look something like this or similar: http://www.4shared.com/photo/z_pL0lSK/overlay-0.html
EDIT:
After I detect the red color I get this image:
And I have a problem detecting just the warning sign's triangle shape and ignoring all other shapes. I tried changing the cvApproxPoly parameters but with no result. This is my code:
public void myFindContour(IplImage image)
{
    IplImage grayImage = cvCreateImage(cvGetSize(image), IPL_DEPTH_8U, 1);
    cvCvtColor(image, grayImage, CV_BGR2GRAY);

    CvMemStorage mem;
    CvSeq contours = new CvSeq();
    CvSeq ptr = new CvSeq();

    cvThreshold(grayImage, grayImage, 150, 255, CV_THRESH_BINARY);

    mem = cvCreateMemStorage(0);
    cvFindContours(grayImage, mem, contours, Loader.sizeof(CvContour.class), CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE, cvPoint(0, 0));

    Random rand = new Random();
    while (contours != null && !contours.isNull()) {
        if (contours.elem_size() > 0) {
            CvSeq points = cvApproxPoly(contours, Loader.sizeof(CvContour.class),
                    mem, CV_POLY_APPROX_DP, cvContourPerimeter(contours) * 0.02, 0);
            Color randomColor = new Color(rand.nextFloat(), rand.nextFloat(), rand.nextFloat());
            CvScalar color = CV_RGB(randomColor.getRed(), randomColor.getGreen(), randomColor.getBlue());
            cvDrawContours(image, points, color, CV_RGB(0, 0, 0), -1, CV_FILLED, 8, cvPoint(0, 0));
        }
        contours = contours.h_next();
    }

    cvSaveImage("myfindcontour.png", image);
}
This is the output that I get (I used a different color for every shape, but in the final output I will use only white for the detected warning sign and leave everything else black):
You have to do the following:
Detect the red color in the image; you will get a 1-bit image where 0 = non-red and 1 = red.
Detect triangles in the image created in the previous step. You can do that using the approxPoly function (see the sketch below).
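The question uses JavaCV, but the idea is the same with the plain OpenCV Java bindings. A minimal sketch of the triangle test on the red mask; the 0.03 epsilon factor is an assumption to tune:
import java.util.ArrayList;
import java.util.List;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.MatOfPoint2f;
import org.opencv.imgproc.Imgproc;

public class TriangleDetector {
    // Returns the contours of the red mask whose polygonal approximation has 3 vertices.
    static List<MatOfPoint> findTriangles(Mat redMask) {
        List<MatOfPoint> contours = new ArrayList<>();
        Imgproc.findContours(redMask.clone(), contours, new Mat(),
                Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
        List<MatOfPoint> triangles = new ArrayList<>();
        for (MatOfPoint contour : contours) {
            MatOfPoint2f curve = new MatOfPoint2f(contour.toArray());
            MatOfPoint2f approx = new MatOfPoint2f();
            double epsilon = 0.03 * Imgproc.arcLength(curve, true); // assumed factor, tune it
            Imgproc.approxPolyDP(curve, approx, epsilon, true);
            if (approx.total() == 3) {
                triangles.add(contour);
            }
        }
        return triangles;
    }
}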
See, first find the contour area.
Compare it with a precalculated value and keep it within a range,
like
if (area > minContourArea && area < maxContourArea)
{
    // that's it, now we have the signboard; do something with it
}
The value, once calculated, would not be bigger than the car contour.
To get the contour area,
to my knowledge you need the
moments operator.
Code for the moments operator:
Moments moment = moments((cv::Mat)contours[index]);
area = moment.m00; // m00 gives the area of the detected contour
Put the above code before the if block discussed above.
If you want the x and y coordinates, post again.
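For reference, a rough Java equivalent of that area filter using Imgproc.moments; the min/max bounds are assumptions to calibrate against the expected sign size:
import org.opencv.core.MatOfPoint;
import org.opencv.imgproc.Imgproc;
import org.opencv.imgproc.Moments;

public class AreaFilter {
    // Assumed bounds; calibrate them against the expected sign size in your images.
    static final double MIN_CONTOUR_AREA = 500;
    static final double MAX_CONTOUR_AREA = 5000;

    static boolean isCandidateSign(MatOfPoint contour) {
        Moments moment = Imgproc.moments(contour);
        double area = moment.get_m00(); // m00 is the area of the contour
        return area > MIN_CONTOUR_AREA && area < MAX_CONTOUR_AREA;
    }
}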
Take a look at my answer; it is in C++, but using OpenCV it is able to detect road signs, so you can take it as a good example.
https://stackoverflow.com/a/52651521/8035924