I'm new to OpenCV and I want to detect actions in a video. Say the video is a cricket match; I want to detect who is the batsman and who is the bowler from their motions. Can anyone guide me on how to do this, with examples or some related videos? All your comments are highly appreciated.
With my limited understanding of image processing, I feel the approach in the accepted answer (especially Haar classifiers) might not be the best option you have.
Read up on the following:
- Optical flow based processing - that will help you identify motion.
- Background subtraction - the ground is usually green, and the players' clothing also has distinct colors.
- Contours - they let you identify shapes.
- Hough line transform - there will be lines in your video stream.
- Edge detection
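To make the motion idea from the list above concrete, here is a minimal background-subtraction sketch by frame differencing, written in plain Python over grayscale arrays purely for illustration (real code would use OpenCV's absdiff or a BackgroundSubtractor, and the threshold value here is an arbitrary choice):

```python
def motion_mask(prev_frame, frame, threshold=25):
    """Naive background subtraction: mark pixels whose grayscale value
    changed by more than `threshold` between two frames (1 = motion)."""
    return [[1 if abs(a - b) > threshold else 0
             for a, b in zip(prev_row, row)]
            for prev_row, row in zip(prev_frame, frame)]
```

Clusters of 1s in the mask indicate moving regions (players); contour extraction would then give their shapes.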
I assume one of the intentions of this project is to learn image processing, not just to get the result. Training a Haar XML to identify a batsman/bowler with positive/negative image samples is more of a repetitive job than a real learning process. Moreover, you will need to spend a lot of time collecting samples, retraining the XMLs on failure, etc. Also, Haar classifiers are for object detection, not for the motion detection mentioned in the question.
The Aishack website has some reference projects with image processing ideas. Wait for more responses to this question from experts.
Look into object recognition and Haar classifiers, specifically the train_cascade methods of the OpenCV library. You'll need a LOT of still samples of each player type and their typical moves; then train a classifier, and then analyze video frames to pick them out. You have a long but awesome road ahead of you.
I am working on a project where I want to detect the pores in a given skin image.
I have tried various methods (HoughCircles, blob detection, and contours) from OpenCV using Java; however, I am unable to make progress.
HoughCircles shows me all false circles, and the same is the case with contours.
My current code uses a blob detection technique, which also does not show what is required. Sample code is below:
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfKeyPoint;
import org.opencv.core.Scalar;
import org.opencv.features2d.FeatureDetector;
import org.opencv.features2d.Features2d;
import org.opencv.highgui.Highgui;

public class BlobDetection {

    public void detectBlob() {
        // Load the source image as grayscale (OpenCV 2.4 Java API)
        Mat orig = Highgui.imread("skin_pore.jpg", Highgui.IMREAD_GRAYSCALE);
        Mat matOut = new Mat();

        // Detect keypoints with the SIFT detector
        FeatureDetector blobDetector = FeatureDetector.create(FeatureDetector.SIFT);
        MatOfKeyPoint keypoints = new MatOfKeyPoint();
        blobDetector.detect(orig, keypoints);

        // Draw the keypoints in red and write the result to disk
        Scalar red = new Scalar(0, 0, 255);
        Features2d.drawKeypoints(orig, keypoints, matOut, red,
                Features2d.NOT_DRAW_SINGLE_POINTS);
        Highgui.imwrite("PhotoOut.jpg", matOut);
    }

    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        new BlobDetection().detectBlob();
    }
}
When I tried the same code with FeatureDetector.SIMPLEBLOB instead of FeatureDetector.SIFT, it detected almost no blobs.
The source and output images for the above code are attached.
Source image
Output image using SIFT
Is there any other algorithm that could help achieve this, or what would be an appropriate approach?
As you did not ask a concrete question, I won't give you a concrete answer. Just some general advice.
That you tried to solve this problem using the named algorithms clearly shows that you have absolutely no clue what you are doing. You lack the very basics of image processing.
It's like trying to win against a decent chess player when you don't even know how the pieces move.
I highly recommend that you get yourself a beginner's book, read it, and make sure you understand its contents. Then do some more research on the algorithms you want to use before you use them.
You cannot take some arbitrary image, feed it into some random feature-detection algorithm you found on the internet, and expect to succeed.
The Hough transform for circles, for example, is good for finding circle-shaped contours of a roughly known radius. If you know how it works internally, you will know why it is not a good idea to use it on your image.
https://en.wikipedia.org/wiki/Circle_Hough_Transform
Blob detection and contour-based algorithms might work, but only after a lot of pre-processing. Your image is not very "segmentation-friendly".
https://en.wikipedia.org/wiki/Image_segmentation
https://en.wikipedia.org/wiki/Blob_detection
A SIFT detector is usually used with reference images and reference keypoints to match against. I don't see this in your code either.
https://en.wikipedia.org/wiki/Scale-invariant_feature_transform
Please note that reading those Wikipedia articles will only give you a first idea of what's going on. You have to read a lot more.
Always start at the beginning of your processing chain: can you get better images? (Better means more suitable for what you want to detect.) This is like 10% camera and 90% illumination. Detecting skin pores is not a classical task for low-quality cellphone pictures, so why not put a bit of effort into your imaging setup?
First rule of image processing: crap in = crap out. You should at least change the angle of illumination, or use an even better approach like shape from shading.
An image optimized for the detection you have to do is crucial. It will make the image processing so much easier.
Then pre-processing: How can you transform the image you have into something you can easily extract features from?
And so on...
I have been working on face detection, and I am able to detect frontal faces like everyone else using the Haar cascade XML files. My next task is to detect a side (non-frontal) face. I am working in OpenCV. The profileface XML is not able to detect side faces with accuracy, so I feel the only option left is to build my own XML file that can detect side faces. Can anyone help me out?
Thanks
Did you try combining the frontal and profile face recognition?
I was working with this as well, and the result was actually pretty good.
You also need to specify the minimum and maximum detection window size as accurately as possible.
Unfortunately I did not find a side-face Haar cascade, so it looks like you need to train your own.
If you just want to test this out, you don't actually need that many pictures of faces.
You do need a lot of negatives, because OpenCV provides a function to generate positive samples from a single image of the face and a bunch of negative pictures.
To find the negative pictures, you could simply take a video of the backgrounds where you want to detect the faces and then extract all the frames from the video file. From only 3 minutes you get over 2,000 images.
For the training, I would recommend keeping the size of all pictures very small, because otherwise it will take forever to train the cascade file.
Maybe you can look at the OpenCV Cascade Classifier Training guide. I haven't tried it myself, but I offer it as a reference.
Website: http://docs.opencv.org/2.4/doc/user_guide/ug_traincascade.html
There is also some Q&A about training.
Website: http://www.computer-vision-software.com/blog/2009/11/faq-opencv-haartraining/
I've not found anything here or on Google. I'm looking for a way to identify shapes (circle, square, triangle, and various other shapes) from an image file. Some examples:
You get the general idea. I'm not sure BoofCV is the best choice here, but it looks like it should be straightforward enough to use; then again, I know nothing about it. I've looked at some of the examples, and before I get in over my head (which is not hard to do some days), I thought I would ask if there is any info out there.
I'm taking a class on knowledge-based AI, solving Raven's Progressive Matrices problems, and the final assignment will use strictly visual images instead of the text files with attributes. We are not being graded on the visual part, since we only have a few weeks to work on this section of the project, and we are encouraged to share this information. SOF has always been my go-to source for information, and I'm hoping someone out there might have some ideas on where to start with this...
Essentially what I want to do is detect the shapes (convert them into 2D geometry?), then make some assumptions about attributes such as size, fill, placement, etc., create a text file with those attributes, and then send that through the existing code base I wrote for my other projects to solve the problems.
Any suggestions?
There are a lot of ways you can do it. One way is to find the contour of the shape and then fit a polygon or an ellipse to it. If you fit a polygon and there are 4 sides of almost equal length, then it's a square. The contour can be found with binary blobs (my recommendation for the above images) or Canny edges.
http://boofcv.org/index.php?title=Example_Fit_Polygon
http://boofcv.org/index.php?title=Example_Fit_Ellipse
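The "four sides of almost equal length" test can be sketched like this (in Python, for illustration). The vertex list would come from the polygon-fitting step; here it is simply passed in, and the tolerance is an arbitrary illustrative choice:

```python
import math

def classify_polygon(vertices, tol=0.1):
    """Classify a closed polygon by vertex count and side-length ratio."""
    n = len(vertices)
    sides = [math.dist(vertices[i], vertices[(i + 1) % n]) for i in range(n)]
    if n == 3:
        return "triangle"
    if n == 4:
        # A square has four (nearly) equal sides
        if max(sides) - min(sides) <= tol * max(sides):
            return "square"
        return "rectangle/quadrilateral"
    return "polygon with %d sides" % n
```

A full classifier would also check the corner angles (to tell a square from a rhombus), but this shows the basic shape-attribute idea.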
I have an image where I want to find a specific location (coordinate) based on its color. For example:
I want to find the coordinates of the edges of this black box.
How can I detect the black color in this image in Java?
Note: my goal is to develop a program that detects the eyes in a face.
I would suggest using a threshold filter and then converting the image to 1-bit format. That should do the trick.
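As an illustration of the thresholding idea, here is a minimal Python sketch that finds the bounding box of the dark pixels in a grayscale image (the threshold value is an arbitrary choice):

```python
def black_box_bounds(pixels, threshold=50):
    """Return (min_x, min_y, max_x, max_y) of pixels darker than
    `threshold`. `pixels` is a 2-D list of grayscale values
    (0 = black, 255 = white); returns None if nothing is dark."""
    coords = [(x, y) for y, row in enumerate(pixels)
                     for x, v in enumerate(row) if v < threshold]
    if not coords:
        return None
    xs = [c[0] for c in coords]
    ys = [c[1] for c in coords]
    return (min(xs), min(ys), max(xs), max(ys))
```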
However, locating eyes in an image is much harder. You might be interested in the open-source OpenCV library. Here is a port dedicated to Java - JavaCV. And a C++ example of face detection using OpenCV.
As far as I know, the Fourier transform is used in image processing. With it you get your picture in the frequency domain, which represents the signal (in the case of an image, the signal is two-dimensional). You can use the Fast Fourier Transform algorithm (FFT in Java, Fun with Java, Understanding FFT). There are a lot of papers about the eye-detection problem you can read and take inspiration from:
http://www.jprr.org/index.php/jprr/article/viewFile/15/7
http://people.ee.ethz.ch/~bfasel/papers/avbpa_face.pdf
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.117.2226&rep=rep1&type=pdf
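As a tiny illustration of moving an image into the frequency domain, here is a NumPy sketch in which a toy 8x8 stripe pattern stands in for a real image:

```python
import numpy as np

# Toy 2-D "image": vertical stripes with a period of 2 pixels
img = np.zeros((8, 8))
img[:, ::2] = 1.0

# The 2-D FFT takes the image into the frequency domain;
# fftshift moves the zero-frequency (DC) term to the center
spectrum = np.fft.fft2(img)
magnitude = np.abs(np.fft.fftshift(spectrum))
```

The magnitude spectrum concentrates its energy at the DC term and at the horizontal frequency of the stripes, which is the kind of structure frequency-domain methods exploit.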
Here's my task, which I want to solve with as little effort as possible (preferably with Qt & C++ or Java): I want to use webcam video input to detect whether there is a crate (or more than one) in front of the camera lens or not. The scene can change from "clear" to "there is a crate in front of the lens" and back while the cam feeds its video signal to my application. For prototype testing/learning I have 2-3 images of the "empty" scene and 2-3 images with one or more crates.
Do you know a straightforward way to tackle this task? I found OpenCV, but isn't this framework too bulky for such a simple task? I'm new to the field of computer vision. Is this generally a hard task, or is it simple and robust to detect whether there's an obstacle in front of the cam in live feeds? Your expert opinion is deeply appreciated!
Here's an approach I've heard of, which may yield some success:
Perform edge detection on your image to translate it into a black-and-white image, where edges are shown as black pixels.
Now create a histogram to record the frequency of black pixels in each vertical column of the image. The theory here is that a high value in or around one bucket of the histogram is indicative of a vertical edge, which could be the edge of a crate.
You could also build a second histogram over the rows of the image.
Obviously this is a fairly simple approach and is highly dependent on "simple" input, i.e. plain boxes with "hard" edges against a blank background (preferably a background that contrasts heavily with the box).
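The column-histogram step described above can be sketched like this (plain Python over a binary edge image in which 0 marks an edge pixel; purely illustrative):

```python
def column_edge_histogram(binary, edge=0):
    """Count edge pixels (value == `edge`) in each column of the image.

    `binary` is a 2-D list of pixel values; a tall spike in the result
    suggests a vertical edge, e.g. the side of a crate."""
    if not binary:
        return []
    width = len(binary[0])
    return [sum(1 for row in binary if row[x] == edge)
            for x in range(width)]
```

Comparing the tallest spike against a threshold would then give the crate / no-crate decision.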
You don't need a full-blown computer-vision library to detect whether there is a crate in front of the camera. You can just take a snapshot and compute a color histogram (simple). To capture the snapshot, take a look here:
http://msdn.microsoft.com/en-us/library/dd742882%28VS.85%29.aspx
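A minimal sketch of the histogram comparison (grayscale here for brevity; the bin count and the L1 distance are arbitrary illustrative choices):

```python
def gray_histogram(pixels, bins=16):
    """Histogram of grayscale values (0-255) over a 2-D pixel array."""
    hist = [0] * bins
    for row in pixels:
        for v in row:
            hist[min(v * bins // 256, bins - 1)] += 1
    return hist

def histogram_distance(h1, h2):
    """L1 distance between two histograms; large = scene has changed."""
    return sum(abs(a - b) for a, b in zip(h1, h2))
```

Comparing the snapshot's histogram against a stored "empty scene" histogram and checking the distance against a threshold would then decide crate / no crate.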
There are lots of variables here, including possible changes in ambient lighting and any other activity in the field of view. Look at implementing a Canny edge detector (which OpenCV has, and the Intel Performance Primitives have as well) to look for the outline of the shape of interest. If you roughly know where the box will be, you can simply sum pixels in the region of interest. If the box can appear anywhere in the field of view, this is more challenging.
This is not something you should start in Java. When I had this kind of problem, I would start with Matlab (with an OpenCV binding) or something similar, see if the solution works there, and then port it to Java.
To answer your question: I did something similar by XOR-ing the "reference" image (no crate, in your case) with the current image, then either working on the histogram (clustered pixels at the right mean a large difference) or just summing the visible pixels and comparing them against a threshold. XOR is not really precise, but it is fast.
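The XOR/difference trick described above can be sketched like this (an absolute difference with a threshold stands in for the bitwise XOR; all values are illustrative):

```python
def changed_pixels(reference, current, threshold=30):
    """Count pixels whose grayscale value differs from the reference
    image by more than `threshold`; a large count suggests something
    (e.g. a crate) has entered the scene."""
    return sum(1 for ref_row, cur_row in zip(reference, current)
                 for r, c in zip(ref_row, cur_row)
                 if abs(r - c) > threshold)
```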
My point is, it took me 2 hours to install Scilab and the toolkits and write a proof of concept. It would have taken me two days in Java, and if the first solution didn't work, each additional algorithm (already available in Mat-/Scilab) would have taken another few hours. IMHO you are approaching the problem from the wrong angle.
If Java/C++ really are just tools that don't matter, then drop them and use Scilab or some other Matlab clone - prototyping and fine-tuning will be much faster.
There are two parts involved in object detection. One is feature extraction; the other is similarity calculation. Some obvious features of the crate are its geometry, edges, texture, etc.
So you can find algorithms to extract these features from your crate image, then compare those features against your training sample images.
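For the similarity-calculation part, here is a minimal sketch comparing two feature vectors (the features themselves - geometry, edge, texture measurements - are assumed to have been extracted already by some earlier step):

```python
import math

def feature_distance(f1, f2):
    """Euclidean distance between two feature vectors; a smaller value
    means the detected object is more similar to the training sample."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(f1, f2)))
```

Classifying against several training samples then amounts to picking the sample with the smallest distance.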