I am creating a project in Android where I need to detect the edges of an object at a very early stage, but the edges I get are broken and incomplete. I have tried the Otsu thresholding method for edge detection.
Now I am trying to get the intensity from the histogram. Can anyone help me figure out the mean from the histogram of an image using the calcHist method in OpenCV? I am also considering dividing the image into 3×3 blocks and then processing each block, but I am unable to figure out how to do that.
I am using the OpenCV library.
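For the first part of the question: once calcHist has produced a 256-bin grayscale histogram, the mean intensity is just the bin-weighted average Σ i·h[i] / Σ h[i]. Below is a minimal plain-Java sketch (no OpenCV types, so the arithmetic stays visible) that also shows one way to compute per-block means for the block-division idea; the array-based image and all names are assumptions for illustration:

```java
// Sketch: mean intensity from a 256-bin grayscale histogram.
// With OpenCV you would fill `hist` from Imgproc.calcHist; here it is a plain array.
public class HistogramMean {
    public static double mean(int[] hist) {
        long weighted = 0, total = 0;
        for (int i = 0; i < hist.length; i++) {
            weighted += (long) i * hist[i];  // bin index = intensity value
            total += hist[i];                // number of pixels in the bin
        }
        return total == 0 ? 0.0 : (double) weighted / total;
    }

    // Mean intensity of each b-by-b block of a grayscale image (2D array).
    public static double[][] blockMeans(int[][] img, int b) {
        int rows = img.length / b, cols = img[0].length / b;
        double[][] out = new double[rows][cols];
        for (int r = 0; r < rows; r++)
            for (int c = 0; c < cols; c++) {
                long sum = 0;
                for (int y = 0; y < b; y++)
                    for (int x = 0; x < b; x++)
                        sum += img[r * b + y][c * b + x];
                out[r][c] = (double) sum / (b * b);
            }
        return out;
    }

    public static void main(String[] args) {
        int[] hist = new int[256];
        hist[100] = 30;  // 30 pixels with intensity 100
        hist[200] = 10;  // 10 pixels with intensity 200
        System.out.println(mean(hist));  // (100*30 + 200*10) / 40 = 125.0
    }
}
```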
Thanks, please do respond.
Canny, Sobel, Prewitt and many other algorithms perform edge detection.
But, as you said, they can fail to extract all of the edges.
You can use some edge linking algorithms to link those broken edges.
I used an ant colony optimization algorithm (a swarm-intelligence metaheuristic) to make the image understandable.
But there are dozens of edge-linking algorithms you can use.
Cheers.
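As one concrete (and much simpler) alternative to ant colony linking, small one-pixel gaps in a binary edge map can often be bridged with a morphological closing (dilate, then erode). A plain-Java sketch on a boolean edge map, purely illustrative; OpenCV's dilate/erode would do the same thing on a Mat:

```java
// Sketch: closing one-pixel gaps in a binary edge map with morphological
// closing (dilate then erode, 3x3 structuring element). This is a simple
// stand-in for a real edge-linking algorithm.
public class EdgeLink {
    static boolean[][] dilate(boolean[][] img) {
        int h = img.length, w = img[0].length;
        boolean[][] out = new boolean[h][w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                for (int dy = -1; dy <= 1 && !out[y][x]; dy++)
                    for (int dx = -1; dx <= 1; dx++) {
                        int ny = y + dy, nx = x + dx;
                        if (ny >= 0 && ny < h && nx >= 0 && nx < w && img[ny][nx]) {
                            out[y][x] = true;  // any set neighbor sets this pixel
                            break;
                        }
                    }
        return out;
    }

    static boolean[][] erode(boolean[][] img) {
        int h = img.length, w = img[0].length;
        boolean[][] out = new boolean[h][w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) {
                boolean all = true;  // pixels outside the image count as unset
                for (int dy = -1; dy <= 1 && all; dy++)
                    for (int dx = -1; dx <= 1; dx++) {
                        int ny = y + dy, nx = x + dx;
                        if (ny < 0 || ny >= h || nx < 0 || nx >= w || !img[ny][nx]) {
                            all = false;
                            break;
                        }
                    }
                out[y][x] = all;
            }
        return out;
    }

    public static boolean[][] close(boolean[][] img) {
        return erode(dilate(img));
    }
}
```

A horizontal edge with a single missing pixel comes out of `close` with the gap bridged.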
I found out that there is chamfer matching available in OpenCV: there is a chamferMatching() function for C++ and there seems to be a way to use it from Python, too. However, I was unable to find how to use this feature from Java. I am using OpenCV 3.1. Where can I find it in the Java interface? Is it even available there?
If not, what can I use instead? I am trying to recognize fruits, for now apples in particular. I want to match a precomputed apple contour to the contours found in the image (chamfer matching). After finding a possible apple I plan to use a classifier to make sure the color and texture are correct.
Template matching seems like a bad choice because it doesn't handle rotated objects. I am wondering if I can use feature descriptors instead. Note that I am not trying to recognize one particular apple, and I don't know if feature descriptors are suitable for that.
Any thoughts?
EDIT: OK, I decided to use the findContours() function to get all of the contours in the image, then filter them by area and compare each of the filtered contours with others, designated as templates from training, using matchShapes(). I implemented this and it is not working right (because findContours() is not detecting the apple contours), but I'll post another question about that specific problem. Here I want to ask whether this approach sounds OK and whether there is a better way to detect and compare contours.
OK, I figured it out. There seems to be no chamfer matching in OpenCV itself: it is implemented in JavaCV, but there is no sign of it in the native code. Since I'm using OpenCV for Java, that is not a good option for me.
This answer helped me a lot. It is in C++ but it can easily be rewritten in Java.
Initially, I am training the program using a database of 100 images of green apples. The training is actually just storing the largest contour of every photo in a file.
The key to my problem was splitting the image into its 3 color channels, giving 3 different grayscale images. I transform each with Canny and dilate, then check every one of them for contours; it is very likely the apple's contours will show up in at least one of them. Once I have all the contours from the 3 images, I filter them by size and then compare each one with every contour from the training data. If a contour is close enough to one of them, I assume it is the contour of an apple.
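That filter-then-match loop can be sketched roughly as follows. The aspect-ratio "shape score" here is only a crude stand-in for OpenCV's Imgproc.matchShapes (which compares Hu moments), and all names and thresholds are illustrative assumptions:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a filter-then-match loop over contours. Contours are point
// lists; area comes from the shoelace formula. The similarity measure
// (difference of bounding-box aspect ratios) is a crude stand-in for
// OpenCV's Imgproc.matchShapes, just to show the control flow.
public class ContourMatch {
    public static double area(int[][] pts) {  // shoelace formula
        double s = 0;
        for (int i = 0; i < pts.length; i++) {
            int[] a = pts[i], b = pts[(i + 1) % pts.length];
            s += a[0] * b[1] - b[0] * a[1];
        }
        return Math.abs(s) / 2.0;
    }

    static double aspect(int[][] pts) {  // width / height of the bounding box
        int minX = Integer.MAX_VALUE, maxX = Integer.MIN_VALUE;
        int minY = Integer.MAX_VALUE, maxY = Integer.MIN_VALUE;
        for (int[] p : pts) {
            minX = Math.min(minX, p[0]); maxX = Math.max(maxX, p[0]);
            minY = Math.min(minY, p[1]); maxY = Math.max(maxY, p[1]);
        }
        return (double) (maxX - minX) / Math.max(1, maxY - minY);
    }

    /** Keep contours above minArea whose shape score vs. any template is below tol. */
    public static List<int[][]> match(List<int[][]> found, List<int[][]> templates,
                                      double minArea, double tol) {
        List<int[][]> apples = new ArrayList<>();
        for (int[][] c : found) {
            if (area(c) < minArea) continue;                  // size filter
            for (int[][] t : templates) {
                if (Math.abs(aspect(c) - aspect(t)) < tol) {  // crude shape score
                    apples.add(c);
                    break;
                }
            }
        }
        return apples;
    }
}
```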
There seem to be quite a lot of false positives, but they will be filtered out when my colleague implements the module that checks the color and texture of the selected contours (their content).
Here's our project's repository in case it is of help to anyone.
I have been trying to find a reliable procedure for extracting directional information from an image. I am using DICOM images and doing the whole procedure in Java.
Are there any techniques in Java that can help me?
I want to get a graph of angles from the image based on its intensity distribution. A general angular plot (radius and theta from a polar conversion) is not helpful in this case. I believe the Fourier transform is a good technique for finding the major direction of an image.
But I want to evaluate the result quantitatively; how can I obtain that?
Can someone show me how to evaluate/calculate the structure tensor in this case?
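For reference, a 2-D structure tensor can be built directly from image gradients: sum Gx², GxGy and Gy² over the region; the dominant orientation is then ½·atan2(2Jxy, Jxx−Jyy), and the coherence (λ1−λ2)/(λ1+λ2) gives exactly the kind of quantitative directionality measure asked about. A minimal plain-Java sketch (no DICOM handling, purely illustrative):

```java
// Sketch: a global structure tensor from central-difference gradients.
// Returns {orientation, coherence}: orientation is the angle of the
// dominant gradient direction (0 = gradients along x, i.e. vertical
// structures); coherence in [0, 1] says how strongly directional the
// image is (1 = perfectly directional, 0 = isotropic).
public class StructureTensor {
    public static double[] analyze(double[][] img) {
        double jxx = 0, jxy = 0, jyy = 0;
        int h = img.length, w = img[0].length;
        for (int y = 1; y < h - 1; y++)
            for (int x = 1; x < w - 1; x++) {
                double gx = (img[y][x + 1] - img[y][x - 1]) / 2.0;  // central differences
                double gy = (img[y + 1][x] - img[y - 1][x]) / 2.0;
                jxx += gx * gx;
                jxy += gx * gy;
                jyy += gy * gy;
            }
        double orientation = 0.5 * Math.atan2(2 * jxy, jxx - jyy);
        double lambdaDiff = Math.sqrt((jxx - jyy) * (jxx - jyy) + 4 * jxy * jxy);
        double coherence = (jxx + jyy) > 1e-12 ? lambdaDiff / (jxx + jyy) : 0;
        return new double[] { orientation, coherence };
    }
}
```

A horizontal intensity ramp (gradient purely along x) gives orientation 0 and coherence 1.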
I'm working with Tesseract on Android using the tess-two wrapper. I've read the library's documentation, but I'm having trouble recognizing a square in my image. I'd like to recognize the outermost square of a Sudoku board, for instance.
There is an example for OpenCV, but I cannot find anything similar for Tesseract.
Tesseract is an OCR framework. It is useful for recognising characters and words in an image. For a sudoku board, you have two main problems:
Recognise the outline of the game grid and the 9 rows and columns.
Recognise the digits which have already been filled in.
Locating the Sudoku grid can be done by finding the corners, or possibly the edges, in the image using line-detection or corner-detection algorithms; try searching for "Hough lines" or "corner detection".
The grid may not actually be square in your image if you are holding the camera at an angle, so you will also need to transform the shape into a square before processing; search for "homography".
Assuming that you locate the grid and are able to transform it to a square, you can now find each of the rows and columns. At this point you can examine each cell, one at a time, to see whether it's empty or contains a digit. If it contains a digit, you need to work out which one.
You could use Tesseract for this final stage, but it's massive overkill; a simple template-matching approach, which you could build yourself, would be sufficient.
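A naive version of that template-matching stage might look like this: binarize each cell to the template size and score each digit template by pixel agreement. The binary-grid representation and all names are assumptions for illustration:

```java
// Sketch: naive template matching for a single Sudoku cell. Both the cell
// and the digit templates are binary grids of the same size; the score is
// the fraction of pixels that agree, and the best-scoring template wins.
public class DigitMatch {
    public static double score(boolean[][] cell, boolean[][] tpl) {
        int same = 0, total = 0;
        for (int y = 0; y < cell.length; y++)
            for (int x = 0; x < cell[0].length; x++) {
                if (cell[y][x] == tpl[y][x]) same++;
                total++;
            }
        return (double) same / total;  // 1.0 = perfect match
    }

    /** Returns the index of the best-matching template. */
    public static int classify(boolean[][] cell, boolean[][][] templates) {
        int best = 0;
        double bestScore = -1;
        for (int i = 0; i < templates.length; i++) {
            double s = score(cell, templates[i]);
            if (s > bestScore) {
                bestScore = s;
                best = i;
            }
        }
        return best;
    }
}
```

In a real app, each template would come from rendering or cropping known digits at the cell size; an empty cell can be detected first by counting dark pixels.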
Once you have done the background research above, you will be able to pick a framework or library which supports the operations you need. OpenCV is a very strong contender in this space, and there is a lot of support for it here and on the web, but you really need to understand the problem a lot better before picking a tool to solve it.
I'm working on an image-processing project, and I'm stuck.
I can't find a proper algorithm to detect rectangles in an image.
Are there any methods in Java that would help me, or implementations of the Hough transform for rectangles?
Have you had a look at OpenCV? Its object-detection support (for example, a trained cascade classifier) is what you want. You'll have to train the classifier on your own; however, generating rectangles to train it on shouldn't be too hard :-)
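On the Hough-transform part of the question: a minimal line accumulator is easy to write yourself, and a rectangle then shows up as two pairs of parallel peaks 90° apart. A plain-Java sketch of the accumulator (illustrative only; OpenCV's HoughLines does the same with proper peak finding):

```java
// Sketch: a minimal Hough line accumulator over a binary image. Each edge
// pixel votes for every (theta, rho) line passing through it; lines appear
// as peaks, and a rectangle is two pairs of parallel peaks 90 degrees apart.
public class HoughLines {
    /** accumulator[thetaIdx][rhoIdx]; thetaSteps bins cover [0, PI). */
    public static int[][] accumulate(boolean[][] img, int thetaSteps) {
        int h = img.length, w = img[0].length;
        int maxRho = (int) Math.ceil(Math.hypot(w, h));
        int[][] acc = new int[thetaSteps][2 * maxRho + 1];  // rho can be negative
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) {
                if (!img[y][x]) continue;
                for (int t = 0; t < thetaSteps; t++) {
                    double theta = Math.PI * t / thetaSteps;
                    int rho = (int) Math.round(x * Math.cos(theta) + y * Math.sin(theta));
                    acc[t][rho + maxRho]++;  // shift so the index is non-negative
                }
            }
        return acc;
    }
}
```

A vertical line at x = 5 in a 10×10 image, for example, puts all of its 10 votes in the bin for theta = 0, rho = 5.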
I have an image where I want to find a specific location (coordinate) based on its color. For example, I want to find the coordinates of the edges of this black box.
How can I detect the black color in this image in Java?
Note: my goal is to develop a program that detects eyes in a face.
I would suggest applying a threshold filter and then converting the image to 1-bit format. That should do the trick.
However, locating eyes in an image is much harder. You might be interested in the open-source OpenCV library. Here is a port dedicated to Java, javacv, and a C++ example of face detection using OpenCV.
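The thresholding suggestion above can be sketched in plain Java like this; the bounding box of the dark pixels then gives the coordinates of the black box's edges. The array-based grayscale image and the cutoff value are assumptions for illustration:

```java
// Sketch: thresholding a grayscale image to a 1-bit mask of "black" pixels,
// then finding the bounding box of the masked region.
public class BlackMask {
    public static boolean[][] threshold(int[][] gray, int limit) {
        boolean[][] mask = new boolean[gray.length][gray[0].length];
        for (int y = 0; y < gray.length; y++)
            for (int x = 0; x < gray[0].length; x++)
                mask[y][x] = gray[y][x] < limit;  // true = dark enough to count as black
        return mask;
    }

    /** Bounding box {minX, minY, maxX, maxY} of the masked pixels, or null if empty. */
    public static int[] boundingBox(boolean[][] mask) {
        int minX = Integer.MAX_VALUE, minY = Integer.MAX_VALUE, maxX = -1, maxY = -1;
        for (int y = 0; y < mask.length; y++)
            for (int x = 0; x < mask[0].length; x++)
                if (mask[y][x]) {
                    minX = Math.min(minX, x); minY = Math.min(minY, y);
                    maxX = Math.max(maxX, x); maxY = Math.max(maxY, y);
                }
        return maxX < 0 ? null : new int[] { minX, minY, maxX, maxY };
    }
}
```

With a real image you would first convert the `BufferedImage` to grayscale and copy the pixels into the array.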
As far as I know, the Fourier transform is used in image processing: it takes your picture into the frequency domain, where it represents a signal (in the case of an image, a two-dimensional one). You can use the Fast Fourier Transform algorithm (FFT in Java, Fun with Java, Understanding FFT). There are a lot of papers about the eye-detection problem that you can read and take inspiration from:
http://www.jprr.org/index.php/jprr/article/viewFile/15/7
http://people.ee.ethz.ch/~bfasel/papers/avbpa_face.pdf
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.117.2226&rep=rep1&type=pdf
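As a concrete illustration of the frequency-domain idea from the answer above, here is a direct 1-D DFT in plain Java (O(n²), so only for illustration; real code would use an FFT library):

```java
// Sketch: a direct 1-D DFT, just to illustrate what "frequency domain"
// means: a pure cosine at frequency k shows up as peaks in bins k and n-k.
public class Dft {
    /** Magnitudes |X[k]| of the DFT of a real signal x. */
    public static double[] magnitudes(double[] x) {
        int n = x.length;
        double[] mag = new double[n];
        for (int k = 0; k < n; k++) {
            double re = 0, im = 0;
            for (int t = 0; t < n; t++) {
                double ang = -2 * Math.PI * k * t / n;
                re += x[t] * Math.cos(ang);
                im += x[t] * Math.sin(ang);
            }
            mag[k] = Math.hypot(re, im);
        }
        return mag;
    }
}
```

For a cosine of frequency 2 over 8 samples, the energy lands entirely in bins 2 and 6 (magnitude n/2 = 4 each), with nothing in the DC bin.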