I wrote a program using OpenCV in C/C++. Now I would like to port it to the Android platform. I have a problem with this piece of code:
Mat picture;
vector<Rect> limitsRectangle;
vector<Rect> tableRectangle;
vector<pair<float, float> > x;

void search()
{
    vector<vector<Point> > contours;
    vector<Vec4i> hierarchy;
    findContours(picture, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, Point(0, 0));

    limitsRectangle.clear();
    limitsRectangle.resize(contours.size());
    vector<vector<Point> > contours_poly(contours.size());
    for (unsigned int i = 0; i < contours.size(); i++)
    {
        approxPolyDP(Mat(contours[i]), contours_poly[i], 100, true);
        limitsRectangle[i] = boundingRect(Mat(contours_poly[i]));
    }

    float lb = 3.84;
    float ub = 6.87;
    tableRectangle.clear();
    for (unsigned int i = 0; i < limitsRectangle.size(); i++)
    {
        float proportions = (float)limitsRectangle[i].width / (float)limitsRectangle[i].height;
        if ((proportions > lb) && (proportions < ub))
        {
            limitsRectangle[i].x += 8;
            limitsRectangle[i].y += 0;
            limitsRectangle[i].width *= 0.95;
            limitsRectangle[i].height *= 0.9;
            tableRectangle.push_back(limitsRectangle[i]);
        }
    }
}
Below are the pieces of code I have managed to convert so far. I do not know how well I am doing, so I am asking for support and help:
Mat picture;
List<MatOfRect> limitsRectangle = new ArrayList<MatOfRect>();
List<MatOfRect> tableRectangle = new ArrayList<MatOfRect>();
// vector<pair<float, float> > x; ???

void search()
{
    List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
    Mat hierarchy = new Mat();
    Imgproc.findContours(picture, contours, hierarchy, Imgproc.RETR_TREE, Imgproc.CHAIN_APPROX_SIMPLE, new Point(0, 0));

    limitsRectangle.clear();
    // limitsRectangle.resize(...) ??? there is no resize in Java
    List<MatOfPoint> contours_poly = new ArrayList<MatOfPoint>();
    // contours_poly(contours.size()); ??? doesn't work
    for (int i = 0; i < contours.size(); i++)
    {
        // Imgproc.approxPolyDP(contours[i], contours_poly[i], 100, true); ??? doesn't work
        // limitsRectangle[i] = Imgproc.boundingRect(Mat(contours_poly[i])); ??? doesn't work
    }

    double lb = 3.84;
    double ub = 6.87;
    tableRectangle.clear();
@phoenix37, have you been able to get your Java code working? I have been trying to adapt some C++ code into my Android project with some success. I believe that in Java you need to convert your ArrayList into an array to be able to access each element; I know this is true for integer array lists. Here are some Java constructors for working with OpenCV, specific to MatOfPoint. I am still trying to figure these out myself, as I am fairly new to Java and OpenCV. I know this doesn't answer your question, but hopefully it leads you down the right path.
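On the `resize` question in the code above: Java lists have no direct analogue of C++'s `vector::resize`, but a common workaround is to pre-populate the list so that `get(i)`/`set(i)` are valid for every index. A minimal plain-Java sketch (the helper name `presized` is made up for illustration):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

public class ResizeSketch {
    // Pre-populate a list with n fresh entries, so list.get(i)/list.set(i)
    // work for indices 0..n-1, much like C++'s vector::resize(n).
    static <T> List<T> presized(int n, Supplier<T> factory) {
        List<T> list = new ArrayList<>(n);
        for (int i = 0; i < n; i++) {
            list.add(factory.get());
        }
        return list;
    }

    public static void main(String[] args) {
        // e.g. one fresh MatOfPoint per contour; StringBuilder stands in here
        List<StringBuilder> polys = presized(3, StringBuilder::new);
        System.out.println(polys.size()); // 3
    }
}
```

With a list prepared this way, the `contours_poly[i]` and `limitsRectangle[i]` assignments from the C++ version translate to `contours_poly.get(i)` and `limitsRectangle.set(i, ...)`.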
Related
I am kind of hopeless in my quest to write a screenshot reader for a game I am addicted to.
We take a screenshot, regardless of size or coloring (so custom in-game settings), and I have a library of images I want to check it against. I am using OpenCV.
Example screenshot:
Now I have a library of all in-game materials, for example this one:
I already know how to rescale and such, but I know too little for it to find a decent match. I am quite new to image filtering/matching, so if you have any ideas or tips, please let me know. My code so far:
public void scan2( String image, String template ) {
    Mat iterateImg = Imgcodecs.imread(image, Imgcodecs.IMREAD_COLOR);
    Mat templ2 = Imgcodecs.imread(template, Imgcodecs.IMREAD_COLOR);
    Mat templ2resized = new Mat();
    List<MatchResultWrapper> vals = new ArrayList<>(); // one result per tested scale
    double templx = templ2.size().width;
    double temply = templ2.size().height;
    System.out.println(templx + "-" + temply);
    for (double scale = 1; scale < 2; scale = scale + 0.01) {
        Imgproc.resize(templ2, templ2resized, new Size(templx / scale, temply / scale));
        MatchResultWrapper match = match(iterateImg, templ2resized);
        match.setScaledx((int) (templx / scale));
        match.setScaledy((int) (temply / scale));
        vals.add(match);
    }
double[]results= new double[vals.size()];
for(int i = 0; i < vals.size();i++){
results[i]=vals.get(i).getMatch();
}
double diff = Double.MAX_VALUE;
int closestIndex = 0;
for (int i = 0; i < results.length; ++i) {
double abs = Math.abs(results[i]);
if (abs < diff) {
closestIndex = i;
diff = abs;
} else if (abs == diff && results[i] > 0 && results[closestIndex] < 0) {
//same distance to zero but positive
closestIndex =i;
}
}
System.out.println(vals.get(closestIndex));
}
private MatchResultWrapper match( Mat source, Mat template ) {
Mat result = new Mat();
Mat img_display = new Mat();
source.copyTo(img_display);
int result_cols = source.cols() - template.cols() + 1;
int result_rows = source.rows() - template.rows() + 1;
result.create(result_rows, result_cols, CvType.CV_32FC1);
Imgproc.matchTemplate(source, template, result, Imgproc.TM_SQDIFF);
Core.normalize(result, result, 0, 1, Core.NORM_MINMAX, - 1, new Mat());
Core.MinMaxLocResult mmr = Core.minMaxLoc(result);
MatchResultWrapper wrapper = new MatchResultWrapper();
wrapper.setMatch(mmr.minVal);
wrapper.setX((int)mmr.minLoc.x);
wrapper.setY((int)mmr.minLoc.y);
return wrapper;
}
Thanks to @christoph-rackwitz we now have the following result, which sadly does not work either :(
Probably you need a mask for your template. The template-matching mode documentation is not 100% clear, but TM_SQDIFF, TM_SQDIFF_NORMED, TM_CCORR, and TM_CCORR_NORMED appear to support masks. I generally prefer the normed match modes; try both of these modes to see which gives you the best results. If you do try TM_CCORR_NORMED, be sure to use the max instead of the min.
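The min-vs-max point matters because the SQDIFF modes measure error (lower is better) while the CCORR modes measure correlation (higher is better). A plain-Java sketch of the selection logic, with a score array standing in for the values `matchTemplate` writes into its result Mat (in OpenCV Java you would read `minVal`/`maxVal` from `Core.minMaxLoc` instead):

```java
public class BestMatch {
    // Pick the index of the best score. For TM_SQDIFF(_NORMED) the best
    // score is the smallest; for TM_CCORR_NORMED it is the largest.
    static int bestIndex(double[] scores, boolean squaredDifferenceMode) {
        int best = 0;
        for (int i = 1; i < scores.length; i++) {
            boolean better = squaredDifferenceMode
                    ? scores[i] < scores[best]
                    : scores[i] > scores[best];
            if (better) best = i;
        }
        return best;
    }

    public static void main(String[] args) {
        double[] scores = {0.4, 0.1, 0.9};
        System.out.println(bestIndex(scores, true));  // SQDIFF mode -> 1
        System.out.println(bestIndex(scores, false)); // CCORR_NORMED mode -> 2
    }
}
```

In the `match()` method above this means swapping `mmr.minVal`/`mmr.minLoc` for `mmr.maxVal`/`mmr.maxLoc` when switching to TM_CCORR_NORMED.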
My aim is to detect the largest rectangle in an image, whether it's skewed or not. After some research and googling I came up with code that theoretically should work; however, in half of the cases I see puzzling results.
I used OpenCV for Android, here is the Code:
private void find_parallels() {
Utils.bitmapToMat(selectedPicture,img);
Mat temp = new Mat();
Imgproc.resize(img,temp,new Size(640,480));
img = temp.clone();
Mat imgGray = new Mat();
Imgproc.cvtColor(img,imgGray,Imgproc.COLOR_BGR2GRAY);
Imgproc.GaussianBlur(imgGray,imgGray,new Size(5,5),0);
Mat threshedImg = new Mat();
Imgproc.adaptiveThreshold(imgGray,threshedImg,255,Imgproc.ADAPTIVE_THRESH_GAUSSIAN_C,Imgproc.THRESH_BINARY,11,2);
List<MatOfPoint> contours = new ArrayList<>();
Mat hierarchy = new Mat();
Mat imageContours = imgGray.clone();
Imgproc.cvtColor(imageContours,imageContours,Imgproc.COLOR_GRAY2BGR);
Imgproc.findContours(threshedImg,contours,hierarchy,Imgproc.RETR_TREE,Imgproc.CHAIN_APPROX_SIMPLE);
max_area = 0;
int num = 0;
for (int i = 0; i < contours.size(); i++) {
area = Imgproc.contourArea(contours.get(i));
if (area > 100) {
MatOfPoint2f mop = new MatOfPoint2f(contours.get(i).toArray());
peri = Imgproc.arcLength(mop, true);
Imgproc.approxPolyDP(mop, approx, 0.02 * peri, true);
if(area > max_area && approx.toArray().length == 4) {
biggest = approx;
num = i;
max_area = area;
}
}
}
    selectedPicture = Bitmap.createBitmap(640, 480, Bitmap.Config.ARGB_8888);
    Imgproc.drawContours(img, contours, num, new Scalar(0, 0, 255));
    Utils.matToBitmap(img, selectedPicture);
    imageView1.setImageBitmap(selectedPicture);
}
In some cases it works excellently, as can be seen in this image (see the white line between the monitor bezel and screen; sorry for the color):
Example that works:
However, in this image, and in most images where the screen is greyish, it gives a crazy result.
Example that doesn't work:
Try using morphology: dilating and then eroding with the same kernel (a morphological close) should make it better.
Or use pyrDown + pyrUp, or just blur it.
In short, use the low-pass-filter class of methods, because your object of interest is much larger than the noise.
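To make the low-pass idea concrete, here is a minimal pure-Java sketch of the simplest such filter, a 3x3 box blur on a grayscale image stored as `int[rows][cols]`. In OpenCV this corresponds to `Imgproc.blur`/`Imgproc.GaussianBlur`, or to `Imgproc.dilate` followed by `Imgproc.erode` for the morphological variant:

```java
public class BoxBlur {
    // Replace each pixel with the average of its valid 3x3 neighbourhood.
    // Small, noisy details are smoothed out; large structures survive.
    static int[][] blur3x3(int[][] img) {
        int h = img.length, w = img[0].length;
        int[][] out = new int[h][w];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int sum = 0, count = 0;
                for (int dy = -1; dy <= 1; dy++) {
                    for (int dx = -1; dx <= 1; dx++) {
                        int yy = y + dy, xx = x + dx;
                        if (yy >= 0 && yy < h && xx >= 0 && xx < w) {
                            sum += img[yy][xx];
                            count++;
                        }
                    }
                }
                out[y][x] = sum / count;
            }
        }
        return out;
    }
}
```

Running this (or any low-pass step) before `adaptiveThreshold` suppresses the fine texture that produces the noisy contours on greyish screens.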
I want to use MATLAB's imgradient() function in my Android application using OpenCV. How can I do so, and which OpenCV function is equivalent to MATLAB's imgradient()?
I am using the function below; is it right?
public Mat imgradient(Mat grayScaleImage)
{
Mat grad_x=new Mat();
Mat grad_y = new Mat();
Mat abs_grad_x=new Mat();
Mat abs_grad_y=new Mat();
Mat gradientImag = new Mat(grayScaleImage.rows(),grayScaleImage.cols(),CvType.CV_8UC1);
Imgproc.Sobel(grayScaleImage, grad_x, CvType.CV_16S, 1, 0,3,1,0,Imgproc.BORDER_DEFAULT );
Core.convertScaleAbs( grad_x, abs_grad_x );
Imgproc.Sobel( grayScaleImage, grad_y, CvType.CV_16S, 0, 1, 3, 1,0,Imgproc.BORDER_DEFAULT );
Core.convertScaleAbs( grad_y, abs_grad_y );
double[] buff_grad = new double[1];
for(int i = 0; i < abs_grad_y.cols(); i++)
{
for(int j =0 ; j<abs_grad_y.rows() ; j++)
{
double[] buff_x = abs_grad_x.get(j, i);
double[] buff_y = abs_grad_y.get(j, i);
double x = buff_x[0];
double y = buff_y[0];
double ans=0;
try
{
ans = Math.sqrt(Math.pow(x,2)+Math.pow(y,2));
}catch(NullPointerException e)
{
ans = 0;
}
buff_grad[0] = ans;
gradientImag.put(j, i, buff_grad);
}
}
return gradientImag;
}
Have you tried using something like the Sobel or Canny operators?
Since MATLAB's imgradient() returns the gradient "magnitude" (i.e. sqrt(dx(x,y)² + dy(x,y)²) for each pixel with coordinates x,y), you may want to do something like:
// 1) Get the horizontal gradient
Mat kH = (cv::Mat_<double>(1,3) << -1, 0, 1); // differential kernel in x
Mat Dx;
filter2D(image, Dx, CV_64F, kH, cv::Point(-1,-1), 0);
// 2) Get the vertical gradient
Mat kV = (cv::Mat_<double>(3,1) << -1, 0, 1); // differential kernel in y
Mat Dy;
filter2D(image, Dy, CV_64F, kV, cv::Point(-1,-1), 0);
// 3) Get sqrt(dx² + dy²) at each point
Mat Dmag(Dx.size(), CV_64F); // Dx/Dy must be CV_64F for at<double> to be valid
for (int i = 0; i < Dx.rows; i++)
    for (int j = 0; j < Dx.cols; j++)
        Dmag.at<double>(i,j) = sqrt(pow(Dx.at<double>(i,j), 2) + pow(Dy.at<double>(i,j), 2));
That should get you what you want. You can achieve better performance by accessing the gradient data directly instead of using .at(i,j) for each pixel.
Hope it helps!
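On the Java side, the per-pixel `abs_grad_x.get(j, i)` calls in the question's loop are the slow part. A pure-Java sketch of the same magnitude step over flat primitive arrays shows the pattern: pull the pixel data out once, then loop over primitives (in OpenCV Java the bulk read would be `Mat.get(0, 0, buffer)`, or you could let `Core.magnitude` do the whole step):

```java
public class GradientMagnitude {
    // Per-element sqrt(dx² + dy²) over flat gradient arrays.
    // Math.hypot computes the same quantity without intermediate overflow.
    static double[] magnitude(double[] dx, double[] dy) {
        double[] mag = new double[dx.length];
        for (int i = 0; i < dx.length; i++) {
            mag[i] = Math.hypot(dx[i], dy[i]);
        }
        return mag;
    }

    public static void main(String[] args) {
        double[] mag = magnitude(new double[]{3.0, 0.0}, new double[]{4.0, 2.0});
        System.out.println(mag[0]); // 5.0
    }
}
```

One bulk write with `Mat.put(0, 0, mag)` then puts the result back into the output Mat.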
I'm trying to automate a process where someone manually converts a code to a digital one.
Then I started reading about OCR, so I installed Tesseract OCR and tried it on some images. It doesn't detect anything even close to the code.
After reading some questions on Stack Overflow, I figured that the images need some preprocessing, like deskewing the image to a horizontal orientation, which can be done with OpenCV, for example.
Now my questions are:
What kind of preprocessing or other methods should be used in a case like the above image?
Secondly, can I rely on the output? Will it always work in cases like the above image?
I hope someone can help me!
I have decided to capture the whole card instead of only the code. By capturing the whole card it is possible to transform it to a plain perspective, and then I can easily extract the "code" region.
I also learned a lot of things, especially regarding speed. This function is slow on high-resolution images; it can take up to 10 seconds at a size of 3264 x 1836.
What I did to speed things up is resize the input matrix by a factor of 1/4, which makes it about 4² = 16 times faster with minimal loss of precision. The next step is scaling the quadrangle we found back to the original size, so that we can transform the quadrangle to a plain perspective using the original source.
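The scale-back step amounts to multiplying each corner of the detected quadrangle by the inverse of the downscale factor before warping the full-resolution source. A small sketch (method and variable names are illustrative):

```java
public class ScaleQuad {
    // The quadrangle was detected on an image downscaled by `factor`
    // (e.g. 0.25 for the 1/4 resize above); map each corner back to
    // full-resolution coordinates before the perspective transform.
    static double[][] scaleBack(double[][] corners, double factor) {
        double[][] out = new double[corners.length][2];
        for (int i = 0; i < corners.length; i++) {
            out[i][0] = corners[i][0] / factor;
            out[i][1] = corners[i][1] / factor;
        }
        return out;
    }

    public static void main(String[] args) {
        double[][] quad = {{10, 20}, {100, 20}, {100, 80}, {10, 80}};
        double[][] full = scaleBack(quad, 0.25); // detection ran at 1/4 size
        System.out.println(full[1][0]); // 400.0
    }
}
```

In OpenCV Java the scaled-back corners would then feed `Imgproc.getPerspectiveTransform` and `Imgproc.warpPerspective` on the original Mat.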
The code I created for detecting the largest area is heavily based on code I found on Stack Overflow. Unfortunately, the snippets didn't work as expected for me, so I combined several of them and modified a lot.
This is what I got:
private static double angle(Point p1, Point p2, Point p0 ) {
double dx1 = p1.x - p0.x;
double dy1 = p1.y - p0.y;
double dx2 = p2.x - p0.x;
double dy2 = p2.y - p0.y;
return (dx1 * dx2 + dy1 * dy2) / Math.sqrt((dx1 * dx1 + dy1 * dy1) * (dx2 * dx2 + dy2 * dy2) + 1e-10);
}
private static MatOfPoint find(Mat src) throws Exception {
Mat blurred = src.clone();
Imgproc.medianBlur(src, blurred, 9);
Mat gray0 = new Mat(blurred.size(), CvType.CV_8U), gray = new Mat();
List<MatOfPoint> contours = new ArrayList<>();
List<Mat> blurredChannel = new ArrayList<>();
blurredChannel.add(blurred);
List<Mat> gray0Channel = new ArrayList<>();
gray0Channel.add(gray0);
MatOfPoint2f approxCurve;
double maxArea = 0;
int maxId = -1;
for (int c = 0; c < 3; c++) {
int ch[] = {c, 0};
Core.mixChannels(blurredChannel, gray0Channel, new MatOfInt(ch));
int thresholdLevel = 1;
for (int t = 0; t < thresholdLevel; t++) {
if (t == 0) {
Imgproc.Canny(gray0, gray, 10, 20, 3, true); // true ?
Imgproc.dilate(gray, gray, new Mat(), new Point(-1, -1), 1); // 1 ?
} else {
Imgproc.adaptiveThreshold(gray0, gray, thresholdLevel, Imgproc.ADAPTIVE_THRESH_GAUSSIAN_C, Imgproc.THRESH_BINARY, (src.width() + src.height()) / 200, t);
}
Imgproc.findContours(gray, contours, new Mat(), Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);
for (MatOfPoint contour : contours) {
MatOfPoint2f temp = new MatOfPoint2f(contour.toArray());
double area = Imgproc.contourArea(contour);
approxCurve = new MatOfPoint2f();
Imgproc.approxPolyDP(temp, approxCurve, Imgproc.arcLength(temp, true) * 0.02, true);
if (approxCurve.total() == 4 && area >= maxArea) {
double maxCosine = 0;
List<Point> curves = approxCurve.toList();
for (int j = 2; j < 5; j++)
{
double cosine = Math.abs(angle(curves.get(j % 4), curves.get(j - 2), curves.get(j - 1)));
maxCosine = Math.max(maxCosine, cosine);
}
if (maxCosine < 0.3) {
maxArea = area;
maxId = contours.indexOf(contour);
//contours.set(maxId, getHull(contour));
}
}
}
}
}
if (maxId >= 0) {
return contours.get(maxId);
//Imgproc.drawContours(src, contours, maxId, new Scalar(255, 0, 0, .8), 8);
}
return null;
}
You can call it like so:
MatOfPoint contour = find(src);
See this answer for quadrangle detection from a contour and transforming it to a plain perspective:
Java OpenCV deskewing a contour
Please help me:
I have a problem with convex hull on Android. I use Java and OpenCV 2.3.
Before implementing it in Java, I implemented it in C++ with Visual Studio 2008.
That code runs successfully in C++.
Now I want to convert it from C++ to Java on Android, and I get a "force close" error when I run it in the Android SDK emulator.
This is my C++ code:
vector<vector<Point> > contours;
vector<Vec4i> hierarchy;
findContours( canny_output, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, Point(0, 0) );
drawing = Mat::zeros( canny_output.size(), CV_64F );
/// Find the convex hull object for each contour
vector<vector<Point> > hull ( contours.size() );
for( size_t i = 0; i < contours.size(); i++ )
{
    convexHull( Mat(contours[i]), hull[i], false );
}
for( size_t i = 0; i < contours.size(); i++ )
{
    drawContours( drawing, hull, i, Scalar(255, 255, 255), CV_FILLED ); // fill with white
}
And this is my code on Android:
Mat hierarchy = new Mat(img_canny.rows(),img_canny.cols(),CvType.CV_8UC1,new Scalar(0));
List<Mat> contours =new ArrayList<Mat>();
List<Mat> hull = new ArrayList<Mat>(contours.size());
drawing = Mat.zeros(img_canny.size(), im_gray);
Imgproc.findContours(img_dilasi, contours, hierarchy,Imgproc.RETR_TREE, Imgproc.CHAIN_APPROX_SIMPLE, new Point(0, 0));
for(int i=0; i<contours.size(); i++){
Imgproc.convexHull(contours.get(i), hull.get(i), false);
}
for(int i=0; i<contours.size(); i++){
Imgproc.drawContours(drawing, hull, i, new Scalar(255.0, 255.0, 255.0), 5);
}
For your info, I made a small modification to the convex hull in my code: I fill a color inside the contour.
Can anyone help me solve my problem?
I'm very grateful for your help.
I don't have the rep to add a comment; I just wanted to say that the two answers above helped me get Imgproc.convexHull() working for my use case with something like this (OpenCV 2.4.8):
MatOfPoint mopIn = ...
MatOfInt hull = new MatOfInt();
Imgproc.convexHull(mopIn, hull, false);
MatOfPoint mopOut = new MatOfPoint();
mopOut.create((int)hull.size().height,1,CvType.CV_32SC2);
for(int i = 0; i < hull.size().height ; i++)
{
int index = (int)hull.get(i, 0)[0];
double[] point = new double[] {
mopIn.get(index, 0)[0], mopIn.get(index, 0)[1]
};
mopOut.put(i, 0, point);
}
// do something interesting with mopOut
This code works well in my application. In my case, I had multiple contours to work with, so you will notice a lot of Lists, but if you only have one contour, just adjust it to work without the .get(i) iterations.
This thread explains the process more simply: "android java opencv 2.4 convexhull convexdefect".
// Find the convex hull
List<MatOfInt> hull = new ArrayList<MatOfInt>();
for(int i=0; i < contours.size(); i++){
hull.add(new MatOfInt());
}
for(int i=0; i < contours.size(); i++){
Imgproc.convexHull(contours.get(i), hull.get(i));
}
// Convert MatOfInt to MatOfPoint for drawing convex hull
// Loop over all contours
List<Point[]> hullpoints = new ArrayList<Point[]>();
for(int i=0; i < hull.size(); i++){
Point[] points = new Point[hull.get(i).rows()];
// Loop over all points that need to be hulled in current contour
for(int j=0; j < hull.get(i).rows(); j++){
int index = (int)hull.get(i).get(j, 0)[0];
points[j] = new Point(contours.get(i).get(index, 0)[0], contours.get(i).get(index, 0)[1]);
}
hullpoints.add(points);
}
// Convert Point arrays into MatOfPoint
List<MatOfPoint> hullmop = new ArrayList<MatOfPoint>();
for(int i=0; i < hullpoints.size(); i++){
MatOfPoint mop = new MatOfPoint();
mop.fromArray(hullpoints.get(i));
hullmop.add(mop);
}
// Draw contours + hull results
Mat overlay = new Mat(binaryImage.size(), CvType.CV_8UC3);
Scalar color = new Scalar(0, 255, 0); // Green
for(int i=0; i < contours.size(); i++){
Imgproc.drawContours(overlay, contours, i, color);
Imgproc.drawContours(overlay, hullmop, i, color);
}
Example in Java (OpenCV 2.4.11)
hullMat contains the sub-Mat of gray, as identified by the convexHull method. You may want to filter for the contours you really need, for example based on their area.
List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
MatOfInt4 hierarchy = new MatOfInt4();
MatOfInt hull = new MatOfInt();
void foo(Mat gray) {
Imgproc.findContours(gray, contours, hierarchy, Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
for (int i = 0; i < contours.size(); i++) {
Imgproc.convexHull(contours.get(i), hull);
MatOfPoint hullContour = hull2Points(hull, contours.get(i));
Rect box = Imgproc.boundingRect(hullContour);
Mat hullMat = new Mat(gray, box);
...
}
}
MatOfPoint hull2Points(MatOfInt hull, MatOfPoint contour) {
List<Integer> indexes = hull.toList();
List<Point> points = new ArrayList<>();
MatOfPoint point= new MatOfPoint();
for(Integer index:indexes) {
points.add(contour.toList().get(index));
}
point.fromList(points);
return point;
}
Looking at the documentation of findContours() and convexHull(), it appears that you have declared the variables contours and hull incorrectly.
Try changing the declarations to:
List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
List<MatOfInt> hull = new ArrayList<MatOfInt>();
Then, after you call convexHull(), hull contains the indices of the points in contours which comprise the convex hull. In order to draw the points with drawContours(), you will need to populate a new MatOfPoint containing only the points on the convex hull, and pass that to drawContours(). I leave this as an exercise for you.
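In plain-Java terms, the lookup described above is just indexing: each entry the hull computation returns is an index into the contour's point array, so the hull points are selected from the original contour. A minimal sketch with plain arrays standing in for MatOfPoint/MatOfInt:

```java
public class HullLookup {
    // contour: the original points, one {x, y} pair per row.
    // hullIndices: what convexHull returns in Java, i.e. indices into contour.
    // Result: the actual hull points, ready to be drawn.
    static double[][] hullPoints(double[][] contour, int[] hullIndices) {
        double[][] pts = new double[hullIndices.length][];
        for (int i = 0; i < hullIndices.length; i++) {
            pts[i] = contour[hullIndices[i]];
        }
        return pts;
    }
}
```

With MatOfPoint this is the same pattern as `contour.get(index, 0)` for each `index` read out of the MatOfInt hull, and the resulting points populate the new MatOfPoint passed to drawContours().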
To add on to what Aurelius said: in your C++ implementation you used a vector of points, so the hull matrix contains the actual convex points:
"In the first case [integer vector of indices], the hull elements are 0-based indices of the convex hull points in the original array (since the set of convex hull points is a subset of the original point set). In the second case [vector of points], hull elements are the convex hull points themselves." - convexHull
This is why you were able to call
drawContours( drawing, hull, i, Scalar(255, 255, 255), CV_FILLED );
In your android version, the hull output is simply an array of indices which correspond to the points in the original contours.get(i) Matrix. Therefore you need to look up the convex points in the original matrix. Here is a very rough idea:
MatOfInt hull = new MatOfInt();
MatOfPoint tempContour = contours.get(i);
Imgproc.convexHull(tempContour, hull, false); // O(N*Log(N))
//System.out.println("hull size: " + hull.size() + " x" + hull.get(0,0).length);
//System.out.println("Contour matrix size: " + tempContour.size() + " x" + tempContour.get(0,0).length);
int index = (int) hull.get(((int) hull.size().height)-1, 0)[0];
Point pt, pt0 = new Point(tempContour.get(index, 0)[0], tempContour.get(index, 0)[1]);
for(int j = 0; j < hull.size().height -1 ; j++){
index = (int) hull.get(j, 0)[0];
pt = new Point(tempContour.get(index, 0)[0], tempContour.get(index, 0)[1]);
Core.line(frame, pt0, pt, new Scalar(255, 0, 100), 8);
pt0 = pt;
}
Use fillConvexPoly:
for( int i = 0; i < contours.size(); i++ ){
    Imgproc.fillConvexPoly(image_2, point, new Scalar(255, 255, 255));
}