I am new to OpenCV Java, and I have an Android app that matches two images using the ORB FeatureDetector and DescriptorExtractor. I use a DescriptorMatcher with BRUTEFORCE_HAMMING. Most of the time the matcher works, but sometimes it returns duplicate keypoints: when the scene image is too bright or too dark, it shows duplicate keypoints, which is not what I want.
The image that matches accurately:
The image with bad matches:
try {
bmpObjToRecognize = bmpObjToRecognize.copy(Bitmap.Config.ARGB_8888, true);
bmpScene = bmpScene.copy(Bitmap.Config.ARGB_8888, true);
img1 = new Mat();
img2 = new Mat();
Utils.bitmapToMat(bmpObjToRecognize, img1);
Utils.bitmapToMat(bmpScene, img2);
Imgproc.cvtColor(img1, img1, Imgproc.COLOR_RGBA2GRAY);
Imgproc.cvtColor(img2, img2, Imgproc.COLOR_RGBA2GRAY);
Imgproc.equalizeHist(img1, img1);
Imgproc.equalizeHist(img2, img2);
detector = FeatureDetector.create(FeatureDetector.ORB);
descExtractor = DescriptorExtractor.create(DescriptorExtractor.ORB);
matcher = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);
keypoints1 = new MatOfKeyPoint();
keypoints2 = new MatOfKeyPoint();
descriptors = new Mat();
dupDescriptors = new Mat();
detector.detect(img1, keypoints1);
Log.d("LOG!", "number of query Keypoints= " + keypoints1.size());
detector.detect(img2, keypoints2);
Log.d("LOG!", "number of dup Keypoints= " + keypoints2.size());
// Compute descriptors for both keypoint sets
descExtractor.compute(img1, keypoints1, descriptors);
descExtractor.compute(img2, keypoints2, dupDescriptors);
// matching descriptors
List<MatOfDMatch> knnMatches = new ArrayList<>();
// The fourth argument is k; the ratio test below needs the 2 nearest
// neighbours (passing DescriptorMatcher.BRUTEFORCE here only worked by
// accident, because that constant happens to equal 2)
matcher.knnMatch(descriptors, dupDescriptors, knnMatches, 2);
goodMatches = new ArrayList<>();
knnMatchesValue = knnMatches.size();
Log.i("xxx", "xxx match count knnMatches = " + knnMatches.size());
for (int i = 0; i < knnMatches.size(); i++) {
if (knnMatches.get(i).rows() > 1) {
DMatch[] matches = knnMatches.get(i).toArray();
if (matches[0].distance < 0.89f * matches[1].distance) {
goodMatches.add(matches[0]);
}
}
}
// get keypoint coordinates of good matches to find homography and remove outliers using ransac
List<Point> pts1 = new ArrayList<>();
List<Point> pts2 = new ArrayList<>();
for (int i = 0; i < goodMatches.size(); i++) {
Point destinationPoint = keypoints2.toList().get(goodMatches.get(i).trainIdx).pt;
pts1.add(keypoints1.toList().get(goodMatches.get(i).queryIdx).pt);
pts2.add(destinationPoint);
}
// conversion of data types - there may be a more elegant way
Mat outputMask = new Mat();
MatOfPoint2f pts1Mat = new MatOfPoint2f();
pts1Mat.fromList(pts1);
MatOfPoint2f pts2Mat = new MatOfPoint2f();
pts2Mat.fromList(pts2);
// Find homography - here just used to perform match filtering with RANSAC, but could be used to e.g. stitch images
// the smaller the allowed reprojection error (here 15), the more matches are filtered
Mat homography = Calib3d.findHomography(pts1Mat, pts2Mat, Calib3d.RANSAC, 15, outputMask, 2000, 0.995);
// outputMask contains zeros and ones indicating which matches are filtered
better_matches = new LinkedList<>();
for (int i = 0; i < goodMatches.size(); i++) {
if (outputMask.get(i, 0)[0] != 0.0) {
better_matches.add(goodMatches.get(i));
}
}
matches_final_mat = new MatOfDMatch();
matches_final_mat.fromList(better_matches);
imgOutputMat = new Mat();
MatOfByte drawnMatches = new MatOfByte();
Features2d.drawMatches(img1, keypoints1, img2, keypoints2, matches_final_mat,
imgOutputMat, GREEN, RED, drawnMatches, Features2d.NOT_DRAW_SINGLE_POINTS);
bmp = Bitmap.createBitmap(imgOutputMat.cols(), imgOutputMat.rows(), Bitmap.Config.ARGB_8888);
Imgproc.cvtColor(imgOutputMat, imgOutputMat, Imgproc.COLOR_BGR2RGB);
Utils.matToBitmap(imgOutputMat, bmp);
List<DMatch> betterMatchesList = matches_final_mat.toList();
final int matchesFound = betterMatchesList.size();
} catch (Exception e) {
e.printStackTrace();
}
Is there a part of the code that I am missing?
TL;DR: Use the class BFMatcher and its create method explicitly; then you are able to set the crossCheck flag to true. This enables the "vice versa check" you want.
To cite the OpenCV documentation of knnMatch and its header:
Finds the k best matches for each descriptor from a query set.
knnMatch(InputArray queryDescriptors, InputArray trainDescriptors, ...)
This means that more than one of the "query descriptors" can match the same descriptor in the "training set". It just gives you the k best matches, and if there are more query descriptors than training descriptors, you will inevitably get duplicates. This is especially true when the training image yields almost no features, and therefore almost no descriptors, due to a lack of texture (e.g. your dark input).
If you want to get rid of the duplicates, set the crossCheck flag of the BFMatcher to true. With any other matcher, you would instead need to go through your matches, group them by their training descriptor, and remove all but the one with the smallest distance.
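A minimal sketch of both options, reusing the descriptors, dupDescriptors and goodMatches variables from your code (assumed imports: org.opencv.core.Core, org.opencv.core.MatOfDMatch, org.opencv.core.DMatch, org.opencv.features2d.BFMatcher, java.util.*):
// Option 1: brute-force matcher with cross-checking. A pair (i, j) survives
// only if training descriptor j's best match back into the query set is i.
// Note that crossCheck only takes effect with match() or knnMatch with k = 1,
// so it replaces the ratio test rather than combining with it.
BFMatcher bfMatcher = BFMatcher.create(Core.NORM_HAMMING, true);
MatOfDMatch crossChecked = new MatOfDMatch();
bfMatcher.match(descriptors, dupDescriptors, crossChecked);
// Option 2: manual de-duplication for any matcher - keep only the
// smallest-distance match per training descriptor.
Map<Integer, DMatch> bestPerTrain = new HashMap<>();
for (DMatch m : goodMatches) {
    DMatch current = bestPerTrain.get(m.trainIdx);
    if (current == null || m.distance < current.distance) {
        bestPerTrain.put(m.trainIdx, m);
    }
}
List<DMatch> dedupedMatches = new ArrayList<>(bestPerTrain.values());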
Related
I'm trying to merge two images, but ORB does not find keypoints. This is just an example using the same image: the dark points are acquisition artifacts and are not good reference points, so I apply a mask to avoid them. The problem is that no keypoints are detected when the black points are masked, and I wonder what the problem is.
final ORB orb = ORB.create(100);
MatOfKeyPoint keypoints = new MatOfKeyPoint();
Mat descriptors = new Mat();
final Mat ones = Mat.ones(426, 195, CvType.CV_8U); // note: this all-ones Mat is never used; zeroMask is passed below
orb.detectAndCompute(draw, zeroMask, keypoints, descriptors);
MatOfKeyPoint keypoints2 = new MatOfKeyPoint();
Mat descriptors2 = new Mat();
final List<KeyPoint> keyPoints = keypoints.toList();
orb.detectAndCompute(draw, zeroMask, keypoints2, descriptors2);
final DescriptorMatcher matcher = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);
final MatOfDMatch matches = new MatOfDMatch();
matcher.match(descriptors, descriptors2, matches);
final Mat links = new Mat();
final MatOfByte ones1 = new MatOfByte();
org.opencv.features2d.Features2d.drawMatches(draw, keypoints, draw, keypoints2, matches, links, new Scalar(0), new Scalar(0), ones1);
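For reference, detectAndCompute only searches where the mask is nonzero, so a mask that is zero everywhere suppresses all keypoints. A minimal sketch of a mask that excludes only the dark points, assuming draw is a BGR image and treating the cutoff of 10 as a value to tune:
// Build a mask that is 255 wherever the pixel is brighter than the cutoff,
// so dark acquisition artifacts are excluded but the rest stays searchable.
Mat grayForMask = new Mat();
Imgproc.cvtColor(draw, grayForMask, Imgproc.COLOR_BGR2GRAY);
Mat brightMask = new Mat();
Imgproc.threshold(grayForMask, brightMask, 10, 255, Imgproc.THRESH_BINARY);
orb.detectAndCompute(draw, brightMask, keypoints, descriptors);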
I am detecting a rectangle and comparing the color to a urine test strip.
How can I detect all of the squares? I want to detect the remaining squares in the picture below. I have tried changing the brightness and contrast.
Here is my code:
MainActivity.java
...
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
...
Bitmap img = BitmapFactory.decodeStream(in);
in.close();
Bitmap changeImg = changeBitmapContrastBrightness(img, (float)1, 10);
Mat cMap = new Mat();
Utils.bitmapToMat(changeImg, cMap);
List<MatOfPoint> squares = processImage(cMap);
for (int i = 0; i < squares.size(); i++) {
setLabel(cMap, String.valueOf(i), squares.get(i));
}
Bitmap resultBitmap = Bitmap.createBitmap(cMap.cols(), cMap.rows(), Bitmap.Config.ARGB_8888);
Utils.matToBitmap(cMap, resultBitmap);
imgView.setImageBitmap(resultBitmap);
...
}
...
private static List<MatOfPoint> processImage(Mat img){
ArrayList<MatOfPoint> squares = new ArrayList<>();
Mat matGray = new Mat();
Mat matCny = new Mat();
Mat matBlur = new Mat();
Mat matThresh = new Mat();
Mat close = new Mat();
// Downscale then upscale to remove noise (disabled)
// Imgproc.pyrDown(matInit, matBase, matBase.size());
// Imgproc.pyrUp(matBase, matInit, matInit.size());
// GrayScale
Imgproc.cvtColor(img, matGray, Imgproc.COLOR_BGR2GRAY);
// Blur
Imgproc.medianBlur(matGray, matBlur, 5);
// Canny edge detection (disabled)
// Imgproc.Canny(matBlur, matCny, 0, 255);
// Binary threshold
Imgproc.threshold(matBlur, matThresh, 160, 255, Imgproc.THRESH_BINARY_INV);
Imgproc.morphologyEx(matThresh, close, Imgproc.MORPH_CLOSE, Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(3,3)));
// Noise removal (disabled)
// Imgproc.erode(matCny, matCny, Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new org.opencv.core.Size(6, 6)));
// Imgproc.dilate(matCny, matCny, Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new org.opencv.core.Size(12, 12)));
List<MatOfPoint> contours = new ArrayList<>();
Mat hierarchy = new Mat();
Imgproc.findContours(close, contours, hierarchy, Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
double min_area = 0;
double max_area = 10000;
for(MatOfPoint cnt : contours){
double contourArea = Imgproc.contourArea(cnt);
if(contourArea > min_area && contourArea < max_area){
squares.add(cnt);
}
}
return squares;
}
App result Image
Original Image
Please help me.
Your code is correctly identifying the smaller boxes and ignoring the very large box which is the strip, so the basics are all in place.
It is not recognising the smaller boxes on the strip. Given that your contour finding is clearly working, this suggests that the threshold value in your threshold function (160 in the code above) may need to be adjusted so that it includes the colour boxes on the strip, which do not have a black outline; the boxes that do have a black outline are clearly being detected.
Whatever the root cause, you'll probably find the easiest way to debug it is to write out and look at the intermediate images generated; this lets you quickly check the results of your blurring and thresholding visually.
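For example, a couple of imwrite calls at the intermediate stages of processImage make that inspection straightforward (the file paths here are only illustrative):
// Dump each processing stage to a file so it can be inspected visually.
Imgcodecs.imwrite("/sdcard/Download/debug_gray.png", matGray);
Imgcodecs.imwrite("/sdcard/Download/debug_blur.png", matBlur);
Imgcodecs.imwrite("/sdcard/Download/debug_thresh.png", matThresh);
Imgcodecs.imwrite("/sdcard/Download/debug_close.png", close);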
You could also take a look at using adaptive thresholding if you are working with multiple images and find the threshold is not something you can reliably determine in advance. The documentation is here: https://docs.opencv.org/2.4/modules/imgproc/doc/miscellaneous_transformations.html?highlight=adaptivethreshold and there is a very nice example in this answer here: https://stackoverflow.com/a/31290735/334402
The adaptiveThreshold parameters let you fine-tune its behaviour, and it is worth experimenting with them to see what works best for a given type of image.
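A minimal sketch, swapping adaptiveThreshold in for the fixed threshold inside processImage; the blockSize of 15 and constant C of 5 are illustrative starting points to tune, not known-good values:
// Each pixel is compared against a Gaussian-weighted mean of its 15x15
// neighbourhood minus C, instead of a single global cutoff of 160.
Imgproc.adaptiveThreshold(matBlur, matThresh, 255,
        Imgproc.ADAPTIVE_THRESH_GAUSSIAN_C,
        Imgproc.THRESH_BINARY_INV, 15, 5);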
I'm working on a project where I have to detect drilled holes on a surface (the top two holes are there for orientation purposes only).
After detecting the holes, the pattern will judge the placement of the holes and give results. I have created an overlay grid layout and placed it over the Camera2 API preview so the user can align the holes and scan. (The real testing will not be on a picture of an LCD, as shown in the screenshot.)
Currently, I'm cropping the image based on the grid and resizing it to 1920x2560 to have a consistent frame for pattern judgement, which makes a single grid cell roughly 300px. I am unable to detect the blobs. Can someone suggest what sort of filtering I should use for this, and whether there is a better approach than the grid layout, given that the placement of the holes relative to the orientation holes matters for the final results (on both the x and y axes)?
Here is my code:
Mat srcMat = resizeAndCropMatToGrid(mats[0]);
if (srcMat == null) {
exception = new Exception("Cropping Failed");
errorMessage = "Unable to crop image based on grid";
return null;
}
matProgressTask = srcMat;
Mat processedMat = new Mat();
Imgproc.cvtColor(srcMat, processedMat, Imgproc.COLOR_BGR2GRAY);
Imgproc.GaussianBlur(processedMat, processedMat, new org.opencv.core.Size(5, 5), 5);
Imgproc.threshold(processedMat, processedMat, 115, 255, Imgproc.THRESH_BINARY);
matProgressTask = processedMat;
FeatureDetector featureDetector = FeatureDetector.create(FeatureDetector.SIMPLEBLOB);
featureDetector.read(Environment.getExternalStorageDirectory() + "/Android/blob.xml");
MatOfKeyPoint matOfKeyPoint = new MatOfKeyPoint();
featureDetector.detect(processedMat, matOfKeyPoint);
KeyPoint[] keyPointsArray = matOfKeyPoint.toArray();
Log.e("keypoints", "" + Arrays.toString(keyPointsArray));
if (keyPointsArray.length < 1) {
exception = new Exception("Blobs Missing");
errorMessage = "Error: Unable to filter blobs";
} else {
try {
MatOfKeyPoint matOfKeyPointFilteredBlobs = new MatOfKeyPoint(keyPointsArray);
Features2d.drawKeypoints(srcMat, matOfKeyPointFilteredBlobs, srcMat, new Scalar(255, 0, 0), Features2d.DRAW_OVER_OUTIMG);
} catch (Exception e) {
e.printStackTrace();
exception = e;
errorMessage = "Error: Unable to draw Blobs";
return null;
}
matProgressTask = srcMat;
onProgressUpdate();
patterData = pinpointBlobsToGetData(keyPointsArray);
if (patterData == null) {
exception = new Exception("Unable to establish pattern");
errorMessage = "Error: Key points array is null";
}
}
And here is the blob.xml configuration file that I'm using:
<?xml version="1.0"?>
<opencv_storage>
<format>3</format>
<thresholdStep>10.</thresholdStep>
<minThreshold>50.</minThreshold>
<maxThreshold>120.</maxThreshold>
<minRepeatability>2</minRepeatability>
<minDistBetweenBlobs>20.</minDistBetweenBlobs>
<filterByColor>1</filterByColor>
<blobColor>0</blobColor>
<filterByArea>1</filterByArea>
<minArea>2300.</minArea>
<maxArea>4500.</maxArea>
<filterByCircularity>1</filterByCircularity>
<minCircularity>0.2</minCircularity>
<maxCircularity>1.0</maxCircularity>
<filterByInertia>1</filterByInertia>
<minInertiaRatio>0.2</minInertiaRatio>
<maxInertiaRatio>1.0</maxInertiaRatio>
<filterByConvexity>1</filterByConvexity>
<minConvexity>0.2</minConvexity>
<maxConvexity>1.0</maxConvexity>
</opencv_storage>
I am using Python.
For the second image you provided I successfully detected the holes...
...using this code...
import cv2
import numpy as np
img = cv2.imread("C:\\Users\\Link\\Desktop\\2.jpg")
# cv2.imshow("original", img)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# cv2.imshow("gray", gray)
blur = cv2.medianBlur(gray, 31)
# cv2.imshow("blur", blur)
ret, thresh = cv2.threshold(blur, 127, 255, cv2.THRESH_OTSU)
# cv2.imshow("thresh", thresh)
canny = cv2.Canny(thresh, 75, 200)
# cv2.imshow('canny', canny)
# OpenCV 3.x returns three values here; in OpenCV 4.x drop the first (im2)
im2, contours, hierarchy = cv2.findContours(canny, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
contour_list = []
for contour in contours:
    approx = cv2.approxPolyDP(contour, 0.01 * cv2.arcLength(contour, True), True)
    area = cv2.contourArea(contour)
    if 5000 < area < 15000:
        contour_list.append(contour)
# Canny typically yields an inner and an outer contour per hole, so halving
# the accepted-contour count approximates the number of holes
msg = "Total holes: {}".format(len(contour_list) // 2)
cv2.putText(img, msg, (20, 40), cv2.FONT_HERSHEY_PLAIN, 2, (0, 0, 255), 2, cv2.LINE_AA)
cv2.drawContours(img, contour_list, -1, (0, 255, 0), 2)
cv2.imshow('Objects Detected', img)
cv2.imwrite("detected_holes.png", img)
cv2.waitKey(0)
Now, the first image is a bit different. The same code does not detect the right number of holes: the program also detects things that are clearly not holes (the crack in the bottom-left corner) while missing some of the main holes.
Here is an example of what I am talking about:
Not only is the counter wrong in that case; the main problem is that the hole at the bottom right cannot be detected.
So, I managed to solve it by passing the Mat directly to the FeatureDetector class without any prior processing...
Mat srcMat = mats[0];
if (srcMat == null) {
exception = new Exception("Cropping Failed");
errorMessage = "Unable to crop image based on grid";
return null;
}
matProgressTask = srcMat;
FeatureDetector featureDetector = FeatureDetector.create(FeatureDetector.SIMPLEBLOB);
featureDetector.read(Environment.getExternalStorageDirectory() + "/Android/blob.xml");
Log.e("LoadingBlob", "wqfqfwq");
MatOfKeyPoint matOfKeyPoint = new MatOfKeyPoint();
featureDetector.detect(srcMat, matOfKeyPoint);
KeyPoint[] keyPointsArray = matOfKeyPoint.toArray();
Log.e("keypoints", "" + Arrays.toString(keyPointsArray));
if (keyPointsArray.length < 1) {
exception = new Exception("Blobs Missing");
errorMessage = "Error: Unable to filter blobs";
} else {
try {
MatOfKeyPoint matOfKeyPointFilteredBlobs = new MatOfKeyPoint(keyPointsArray);
Features2d.drawKeypoints(srcMat, matOfKeyPointFilteredBlobs, srcMat, new Scalar(0, 255, 0), Features2d.DRAW_OVER_OUTIMG);
} catch (Exception e) {
e.printStackTrace();
exception = e;
errorMessage = "Error: Unable to draw Blobs";
return null;
}
matProgressTask = srcMat;
onProgressUpdate();
patterData = pinpointBlobsToGetData(keyPointsArray);
if (patterData == null) {
exception = new Exception("Unable to establish pattern");
errorMessage = "Error: Key points array is null";
}
}
And my feature detector parameters file is:
<?xml version="1.0"?>
<opencv_storage>
<format>3</format>
<thresholdStep>10.</thresholdStep>
<minThreshold>50.</minThreshold>
<maxThreshold>120.</maxThreshold>
<minRepeatability>2</minRepeatability>
<minDistBetweenBlobs>20.</minDistBetweenBlobs>
<filterByColor>0</filterByColor>
<blobColor>0</blobColor>
<filterByArea>1</filterByArea>
<minArea>3000.</minArea>
<maxArea>10000.</maxArea>
<filterByCircularity>1</filterByCircularity>
<minCircularity>0.3</minCircularity>
<maxCircularity>1.0</maxCircularity>
<filterByInertia>1</filterByInertia>
<minInertiaRatio>0.3</minInertiaRatio>
<maxInertiaRatio>1.0</maxInertiaRatio>
<filterByConvexity>1</filterByConvexity>
<minConvexity>0.3</minConvexity>
<maxConvexity>1.0</maxConvexity>
</opencv_storage>
The result images:
I'm currently working on an application that will split a scanned image (that contains multiple receipts) into individual receipt images.
Below is the sample image:
sample image
I was able to detect the edges of each receipt in the scanned image using the Canny function of OpenCV.
Below is the sample image with detected edges:
sample image with detected edges
... and the sample code is
// OpenCV 2.4 API: Highgui.imread and Core.rectangle moved to Imgcodecs/Imgproc in 3.x
Mat src = Highgui.imread(filename);
Mat gray = new Mat();
int threshold = 12;
Imgproc.cvtColor(src, gray, Imgproc.COLOR_BGR2GRAY);
Imgproc.blur(gray, gray, new Size(3, 3));
Imgproc.Canny(gray, gray, threshold, threshold * 3, 3, true);
List<MatOfPoint> contours = new ArrayList<>();
Mat hierarchy = new Mat();
Imgproc.findContours(gray, contours, hierarchy,
Imgproc.RETR_CCOMP,
Imgproc.CHAIN_APPROX_SIMPLE);
if (hierarchy.size().height > 0 && hierarchy.size().width > 0) {
for (int idx = 0; idx >= 0; idx = (int) hierarchy.get(0, idx)[0]) {
Rect rect = Imgproc.boundingRect(contours.get(idx));
Core.rectangle(src, new Point(rect.x, rect.y),
new Point(rect.x + rect.width, rect.y + rect.height),
new Scalar(255, 0, 0));
}
}
Now my problem is that I don't know how to identify the third receipt: unlike the first two, it is not enclosed in a single rectangular shape that I can use as the basis for splitting the image.
I've heard that to extract the third receipt I should use a clustering algorithm like DBSCAN, but unfortunately I can't find one.
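For reference, a simpler grouping idea than DBSCAN, as a minimal sketch assuming gray still holds the Canny edge image from the code above (the 25x25 kernel size is a guess to tune): dilating the edges merges nearby fragments, so each receipt becomes one blob before contours are found.
// Dilate the edge image so the broken outline of the third receipt fuses
// into a single connected blob, then keep only the outer contours.
Mat merged = new Mat();
Mat kernel = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(25, 25));
Imgproc.dilate(gray, merged, kernel);
List<MatOfPoint> grouped = new ArrayList<>();
Imgproc.findContours(merged, grouped, new Mat(),
        Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);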
Does anyone know how I can identify the third receipt?
Thank you in advance!
I want to compare two images in percent in Android. Here is my code:
Mat img1 = Highgui.imread("storage/external_SD/a.png");
Mat img2 = Highgui.imread("storage/external_SD/b.png");
MatOfKeyPoint keypoints1 = new MatOfKeyPoint();
MatOfKeyPoint keypoints2 = new MatOfKeyPoint();
Mat descriptors1 = new Mat();
Mat descriptors2 = new Mat();
//Definition of ORB keypoint detector and descriptor extractors
FeatureDetector detector = FeatureDetector.create(FeatureDetector.ORB);
DescriptorExtractor extractor = DescriptorExtractor.create(DescriptorExtractor.ORB);
//Detect keypoints
detector.detect(img1, keypoints1);
detector.detect(img2, keypoints2);
//Extract descriptors
extractor.compute(img1, keypoints1, descriptors1);
extractor.compute(img2, keypoints2, descriptors2);
//Definition of descriptor matcher
DescriptorMatcher matcher = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);
//Match points of two images
MatOfDMatch matches = new MatOfDMatch();
matcher.match(descriptors1, descriptors2, matches);
For example, my method's output should be:
Images are 90% the same
But I don't know what I should do after the matcher.match(descriptors1, descriptors2, matches) call. Can you please advise me?
Note that DescriptorMatcher.match() does not return a value in the Java API; it fills the MatOfDMatch you pass in. I did the same kind of thing with face comparison: after matching, look at each DMatch's distance, count the matches below a distance threshold as "good", and derive the percentage from the share of good matches.
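A minimal sketch of that idea, following on from your match() call; the Hamming-distance cutoff of 50 and the normalisation by the smaller keypoint count are assumptions to tune, not a standard similarity metric:
// Count matches with a small Hamming distance as "good" and express them
// as a fraction of the smaller keypoint set.
List<DMatch> matchList = matches.toList();
int good = 0;
for (DMatch m : matchList) {
    if (m.distance < 50) {
        good++;
    }
}
int minKeypoints = Math.min(keypoints1.rows(), keypoints2.rows());
double percentSame = minKeypoints > 0 ? 100.0 * good / minKeypoints : 0.0;
Log.d("LOG!", "Images are " + Math.round(percentSame) + "% the same");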
There are two similar questions around this; check them out, they should be useful.
OpenCV filtering ORB matches
OpenCV - Java - No match with 2 opposite images using DescriptorMatcher