Here's my task, which I want to solve with as little effort as possible (preferably with Qt & C++ or Java): I want to use webcam video input to detect whether there is one or more crates in front of the camera lens or not. The scene can change from "clear" to "there is a crate in front of the lens" and back while the cam feeds its video signal to my application. For prototype testing/learning I have 2-3 images of the "empty" scene, and 2-3 images with one or more crates.
Do you have a straightforward idea for how to tackle this task? I found OpenCV, but isn't that framework too bulky for this simple task? I'm new to the field of computer vision. Is detecting whether there's an obstacle in front of the cam in a live feed generally a hard task, or is it simple and robust? Your expert opinion is deeply appreciated!
Here's an approach I've heard of, which may yield some success:
Perform edge detection on your image to translate it into a black and white image, whereby edges are shown as black pixels.
Now create a histogram to record the frequency of black pixels in each vertical column of pixels in the image. The theory here is that a high frequency value in the histogram in or around one bucket is indicative of a vertical edge, which could be the edge of a crate.
You could also consider a second histogram to measure pixels on each row of the image.
Obviously this is a fairly simple approach and is highly dependent on "simple" input; i.e. plain boxes with "hard" edges against a blank background (preferably a background that contrasts heavily with the box).
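For illustration, here is a minimal Java sketch of the column-histogram step, assuming the edge detection has already produced a black-on-white binary image (the class name and threshold are made up):

```java
import java.awt.image.BufferedImage;

// Minimal sketch of the column-histogram idea described above.
// Assumes "edges" is a binary image where edge pixels are black on white.
public class ColumnHistogram {

    // Count black (edge) pixels per column of the image.
    static int[] columnHistogram(BufferedImage edges) {
        int[] hist = new int[edges.getWidth()];
        for (int x = 0; x < edges.getWidth(); x++) {
            for (int y = 0; y < edges.getHeight(); y++) {
                // A pixel counts as "black" if its RGB channels are all zero.
                if ((edges.getRGB(x, y) & 0x00FFFFFF) == 0) {
                    hist[x]++;
                }
            }
        }
        return hist;
    }

    // Heuristic: a column with many edge pixels suggests a vertical crate edge.
    static boolean hasVerticalEdge(int[] hist, int threshold) {
        for (int count : hist) {
            if (count >= threshold) {
                return true;
            }
        }
        return false;
    }
}
```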
You don't need a full-blown computer-vision library to detect whether there is a crate in front of the camera or not. You can just take a snapshot and make a color histogram (simple). To capture the snapshot take a look here:
http://msdn.microsoft.com/en-us/library/dd742882%28VS.85%29.aspx
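A minimal Java sketch of that color-histogram idea, assuming the snapshot is already available as a BufferedImage (bin layout and comparison are purely illustrative):

```java
import java.awt.image.BufferedImage;

// Rough sketch: bucket each pixel into a coarse RGB histogram and compare
// it against a histogram taken from the empty reference scene.
public class HistogramCheck {

    static int[] histogram(BufferedImage img) {
        int[] hist = new int[64]; // 4 x 4 x 4 coarse RGB bins
        for (int y = 0; y < img.getHeight(); y++) {
            for (int x = 0; x < img.getWidth(); x++) {
                int rgb = img.getRGB(x, y);
                int r = (rgb >> 16) & 0xFF, g = (rgb >> 8) & 0xFF, b = rgb & 0xFF;
                hist[(r >> 6) * 16 + (g >> 6) * 4 + (b >> 6)]++;
            }
        }
        return hist;
    }

    // Sum of absolute bin differences; a large value means the scene changed.
    static long difference(int[] reference, int[] current) {
        long diff = 0;
        for (int i = 0; i < reference.length; i++) {
            diff += Math.abs(reference[i] - current[i]);
        }
        return diff;
    }
}
```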
Lots of variables here, including possible changes in ambient lighting and any other activity in the field of view. Look at implementing a Canny edge detector (which both OpenCV and the Intel Performance Primitives provide) to look for the outline of the shape of interest. If you then roughly know where the box will be, you can sum pixels in the region of interest. If the box can appear anywhere in the field of view, this is more challenging.
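If you do go with OpenCV, a rough sketch of the Canny-plus-region-of-interest idea with the Java bindings might look like this (the ROI and thresholds are placeholders you would have to tune):

```java
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Rect;
import org.opencv.imgproc.Imgproc;

// Sketch of "Canny, then sum edge pixels in a region of interest".
public class CannyRoiCheck {
    static { System.loadLibrary(Core.NATIVE_LIBRARY_NAME); }

    static boolean crateLikelyPresent(Mat frameBgr) {
        Mat gray = new Mat();
        Imgproc.cvtColor(frameBgr, gray, Imgproc.COLOR_BGR2GRAY);

        Mat edges = new Mat();
        Imgproc.Canny(gray, edges, 50.0, 150.0); // hysteresis thresholds: tune for your scene

        // If you roughly know where the box will appear, restrict to that region.
        Rect roi = new Rect(100, 100, 200, 200); // placeholder region of interest
        int edgePixels = Core.countNonZero(edges.submat(roi));

        return edgePixels > 500; // placeholder threshold
    }
}
```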
This is not something you should start in Java. When I had this kind of problem I would start with Matlab (OpenCV library) or something similar, see if the solution works there, and then port it to Java.
To answer your question: I did something similar by XOR-ing the 'reference' image (no crate, in your case) with the current image, then either working on the histogram (clustered pixels at the right means a large difference) or just summing the visible pixels and comparing them against a threshold. XOR is not really precise, but it is fast; a rough Java sketch follows at the end of this answer.
My point is, it took me 2hrs to install Scilab and the toolkits and write a proof of concept. It would have taken me two days in Java, and if the first solution hadn't worked, each additional algorithm (already available in Mat-/Scilab) would have cost another few hours. IMHO you are approaching the problem from the wrong angle.
If Java/C++ really are just tools that don't matter to you, then drop them and use Scilab or some other Matlab clone; prototyping and fine-tuning will be much faster.
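For reference, a rough sketch of the XOR/differencing idea from above in plain Java (the threshold and class name are purely illustrative):

```java
import java.awt.image.BufferedImage;

// Sketch of the differencing idea: compare the reference "empty scene" frame
// against the current frame and count how many pixels changed.
public class FrameDifference {

    static boolean crateDetected(BufferedImage reference, BufferedImage current) {
        int changed = 0;
        for (int y = 0; y < reference.getHeight(); y++) {
            for (int x = 0; x < reference.getWidth(); x++) {
                int a = reference.getRGB(x, y) & 0x00FFFFFF;
                int b = current.getRGB(x, y) & 0x00FFFFFF;
                // XOR is fast but crude; any non-zero result counts as "different".
                if ((a ^ b) != 0) {
                    changed++;
                }
            }
        }
        // Scene is considered "occupied" if enough pixels differ from the reference.
        int total = reference.getWidth() * reference.getHeight();
        return changed > total / 10; // illustrative: 10% of pixels changed
    }
}
```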
There are two parts involved in object detection. One is feature extraction, the other is similarity calculation. Some obvious features of the crate are its geometry, edges, texture, etc.
So you can find some algorithms to extract these features from your crate image, and then compare those features against your training sample images.
I'm looking for some way to set a background image with a barrel distortion effect (fisheye/FOV) for a node using JavaFX. I found an algorithm based on pixel manipulation, but I want to find another way (some hack) to achieve it. The effect will be used to create a node-background animation over a high-definition image, where the animation changes the factor (power/value/degree?) of the effect.
I'd like to offer an alternative approach which is much more efficient (real-time capable). Any solution based on direct pixel manipulation is doomed to be very inefficient, especially for a "high definition image".
Instead I'd propose to use a TriangleMesh for this and use the image as its texture. You can then apply any kind of distortion you like by just manipulating the texture coordinates. This approach can be easily integrated into any 2D graphics via the JavaFX scene graph.
I am actively using this concept for on-the-fly reprojection of raster map tiles, so I know it works.
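To make the idea concrete, here is a minimal sketch of a single textured quad built with TriangleMesh; for an actual fisheye you would subdivide into a finer grid and offset each vertex's texture coordinates according to your distortion formula (the class and method names are made up):

```java
import javafx.scene.image.Image;
import javafx.scene.paint.PhongMaterial;
import javafx.scene.shape.MeshView;
import javafx.scene.shape.TriangleMesh;

// Minimal sketch: one flat quad with the image as its texture.
public class TexturedQuad {

    static MeshView createQuad(Image image, float width, float height) {
        TriangleMesh mesh = new TriangleMesh();

        // Four corner vertices of a flat quad (x, y, z).
        mesh.getPoints().addAll(
                0,     0,      0,
                width, 0,      0,
                width, height, 0,
                0,     height, 0);

        // Texture coordinates (u, v) for each corner; distorting these
        // per-vertex is what produces the barrel/fisheye effect.
        mesh.getTexCoords().addAll(
                0, 0,
                1, 0,
                1, 1,
                0, 1);

        // Two triangles; each entry is a (point index, texCoord index) pair.
        mesh.getFaces().addAll(
                0, 0, 1, 1, 2, 2,
                0, 0, 2, 2, 3, 3);

        MeshView view = new MeshView(mesh);
        PhongMaterial material = new PhongMaterial();
        material.setDiffuseMap(image); // the background image becomes the texture
        view.setMaterial(material);
        return view;
    }
}
```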
I will answer this question in the spirit that it was asked, i.e. no code.
JavaFX has an effect framework.
There is no in-built fisheye effect.
You could create your own custom fisheye effect implementation and plug it into the effect framework if you are a skilled developer.
Easier would be to apply your algorithm using a WritableImage with a PixelWriter or Canvas. Perhaps that could even plug into the effect framework (if you actually needed to do that, which you probably don't) using an ImageInput.
For an example of applying an algorithm to the pixels in an input image see:
Reduce number of colors and get color of a single pixel
Of course, you would use a fisheye algorithm (coded for JavaFX instead of the linked implementations) for a fisheye transform.
To animate use an AnimationTimer or, again for skilled developers, create a custom transition that plugs into the JavaFX animation framework.
You can add properties to your custom effect and manipulate them using additional properties defined on the custom transition you create.
Providing a complete solution is out of scope for a StackOverflow answer. To get help with individual tasks, split the problem up into different pieces, e.g. creating a custom effect, manipulating pixels to create a fisheye, animating an effect on an image or timeline, etc. Write the code and ask questions about the actual code with a minimal example for the problem portion you are trying to solve when you get stuck.
I am using Java with the OpenCV library to detect the face, eyes, and mouth using a laptop camera.
What I have done so far:
Capture Video Frames using VideoCapture object.
Detect Face using Haar-Cascades.
Divide the Face region into Top Region and Bottom Region.
Search for Eyes inside Top region.
Search for Mouth inside Bottom region.
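Roughly, the pipeline looks like this (a simplified, illustrative sketch; the cascade file names are placeholders):

```java
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfRect;
import org.opencv.core.Rect;
import org.opencv.imgproc.Imgproc;
import org.opencv.objdetect.CascadeClassifier;
import org.opencv.videoio.VideoCapture;

// Sketch of the capture -> face -> region -> eyes/mouth pipeline.
public class FacePipeline {
    static { System.loadLibrary(Core.NATIVE_LIBRARY_NAME); }

    public static void main(String[] args) {
        VideoCapture camera = new VideoCapture(0);
        CascadeClassifier faceCascade = new CascadeClassifier("haarcascade_frontalface_alt.xml");
        CascadeClassifier eyeCascade = new CascadeClassifier("haarcascade_eye.xml");

        Mat frame = new Mat();
        Mat gray = new Mat();
        while (camera.read(frame)) {
            Imgproc.cvtColor(frame, gray, Imgproc.COLOR_BGR2GRAY);

            MatOfRect faces = new MatOfRect();
            faceCascade.detectMultiScale(gray, faces);

            for (Rect face : faces.toArray()) {
                // Top half of the face: search for eyes there.
                Mat topRegion = gray.submat(new Rect(face.x, face.y, face.width, face.height / 2));
                MatOfRect eyes = new MatOfRect();
                eyeCascade.detectMultiScale(topRegion, eyes);
                // ... search the bottom half for the mouth in the same way.
            }
        }
        camera.release();
    }
}
```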
Problem I am facing:
At first the video runs normally, but then it suddenly becomes slower.
Main Questions:
Do higher camera resolutions work better for Haar cascades?
Do I have to capture video frames at a certain scale? For example, 100px x 100px?
Do Haar cascades work better on grayscale images?
Do different lighting conditions make a difference?
What does the method detectMultiScale(params) exactly do?
If I want to go for further analysis of eye blinking, eye closure duration, mouth yawning, head nodding and head orientation to detect fatigue (drowsiness) using a support vector machine, any advice?
Your help is appreciated!
The following article would give you an overview of what is going on under the hood; I would highly recommend reading it.
Do higher camera resolutions work better for Haar cascades?
Not necessarily. cascade.detectMultiScale has parameters to adjust for various input width/height scenarios, like minSize and maxSize. These are optional, but you can tweak them to get robust predictions if you have control over the input image size. If you set minSize to a small value and ignore maxSize, it will work for small and high-res images alike, but performance will suffer. Also, if you are wondering why there is little difference between high-res and low-res images, consider that cascade.detectMultiScale internally scales the image to lower resolutions for a performance boost; that is why defining maxSize and minSize is important to avoid unnecessary iterations.
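For reference, a sketch of where minSize and maxSize fit into the call (the concrete sizes here are arbitrary and depend on your camera resolution and how large faces appear):

```java
import org.opencv.core.Mat;
import org.opencv.core.MatOfRect;
import org.opencv.core.Rect;
import org.opencv.core.Size;
import org.opencv.objdetect.CascadeClassifier;

// Illustrative detectMultiScale call with explicit size limits.
public class DetectionParams {
    static Rect[] detectFaces(CascadeClassifier cascade, Mat grayFrame) {
        MatOfRect faces = new MatOfRect();
        cascade.detectMultiScale(
                grayFrame,           // input image (grayscale Mat)
                faces,               // output rectangles
                1.1,                 // scaleFactor: step between pyramid scales
                3,                   // minNeighbors: higher = fewer false positives
                0,                   // flags (legacy, usually 0)
                new Size(30, 30),    // minSize: ignore detections smaller than this
                new Size(300, 300)); // maxSize: ignore detections larger than this
        return faces.toArray();
    }
}
```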
Do I have to capture video frames at a certain scale? For example, 100px x 100px?
This mainly depends upon the parameters you pass to cascade.detectMultiScale. Personally I guess that 100 x 100 would be too small to detect smaller faces in the frame, as some features would be completely lost while resizing the frame to smaller dimensions, and cascade.detectMultiScale is highly dependent upon the gradients or features in the input image.
But if the face is the major part of the input frame, and there are no other smaller faces in the background, then you may use 100 x 100. I have tested some sample faces of size 100 x 100 and it worked pretty well. If this is not the case, then a width of 300-400 px should work well. However, you would need to tune the parameters in order to achieve accuracy.
Do Haar cascades work better on grayscale images?
They work only on grayscale images.
If you read the first part of the article, you will see that face detection amounts to detecting many binary patterns in the image. This basically comes from the Viola-Jones paper, which is the basis of this algorithm.
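A typical preprocessing step before running the cascade, sketched with the OpenCV Java bindings:

```java
import org.opencv.core.Mat;
import org.opencv.imgproc.Imgproc;

// Convert the BGR frame to grayscale and (optionally) equalize the histogram
// to even out lighting before calling detectMultiScale.
public class Preprocess {
    static Mat toGray(Mat frameBgr) {
        Mat gray = new Mat();
        Imgproc.cvtColor(frameBgr, gray, Imgproc.COLOR_BGR2GRAY);
        Imgproc.equalizeHist(gray, gray);
        return gray;
    }
}
```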
Do different lighting conditions make a difference?
Maybe in some cases, but Haar features are largely lighting-invariant.
If by different lighting conditions you mean taking images under green or red light, then it may not affect the detection. The Haar features (since they operate on grayscale) are independent of the RGB color of the input image. The detection mainly depends upon the gradients/features in the input image, so as long as there are enough gradient differences (e.g. the eyebrow has lower intensity than the forehead), it will work fine.
But consider a case where the input image is back-lit or has very low ambient light. In that case some prominent features may not be found, which may result in the face not being detected.
What does the method detectMultiScale(params) exactly do?
If you have read the article by this point, you should know this well.
If I want to go for further analysis of eye blinking, eye closure duration, mouth yawning, head nodding and head orientation to detect fatigue (drowsiness) using a support vector machine, any advice?
No, I won't suggest performing this type of gesture detection with an SVM, as it would be extremely slow to run 10 different cascades to work out the current facial state. Instead I would recommend using a facial landmark detection framework such as Dlib. You may search for other frameworks as well, because the model size of Dlib is nearly 100MB, which may not suit your needs if you want to port it to a mobile device. So the key is facial landmark detection: once you get the full face labelled, you can draw conclusions such as whether the mouth is open or the eyes are blinking, and it works in real time, so your video processing won't suffer much.
I've got a ridiculously insane Linear Algebra professor at uni who last Friday asked us to develop a program in Java that loads a monochrome picture and then applies an edge-detecting filter to it.
The problem is nobody in my class has got the slightest clue how to do it and I have only a week to get it done.
As I'm still trying to get my head round it and start it from scratch, does anybody have anything ready to send me so I can study it and save my semester?
Any efforts will be much appreciated.
Here's a very basic approach you might go with:
1) What is an edge in a monochrome image? One could say that it is a steep intensity gradient. If you go from black to white that is an edge, and vice versa.
2) A very simple filter operation that builds on this idea is the Sobel operator. Read up on it here: Wikipedia.
3) You'll stumble across two terms that may be unfamiliar to you: kernel and convolution. A kernel is basically a window moved over each pixel, performing an operation on the pixel's environment. In the case of the Sobel 3x3 kernel, you assign a new value to the filtered image based on each pixel's direct neighbours. The convolution operation can be thought of as, among other things, an operation that moves the kernel across every pixel in the image. (Note: this is a gross oversimplification to get you started and technically incorrect. It should, however, give you the right idea.)
4) Now the simplest way of applying a Sobel kernel to a BufferedImage is by using the ConvolveOp class. It is a prebuilt Java class that takes a kernel, applies it to a given image and returns the filtered image. However, if this is for class, you might want to implement the convolution yourself.
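As a hedged starting point (the input/output file names are placeholders, and only the horizontal direction is shown; the full Sobel operator also applies the vertical kernel and combines the two results, e.g. by gradient magnitude), it could look like this:

```java
import java.awt.image.BufferedImage;
import java.awt.image.ConvolveOp;
import java.awt.image.Kernel;
import java.io.File;
import javax.imageio.ImageIO;

// Minimal sketch: apply the horizontal Sobel kernel with ConvolveOp.
public class SobelDemo {
    public static void main(String[] args) throws Exception {
        BufferedImage input = ImageIO.read(new File("input.png")); // placeholder path

        // Horizontal Sobel kernel: responds to vertical edges.
        float[] sobelX = {
                -1, 0, 1,
                -2, 0, 2,
                -1, 0, 1
        };
        ConvolveOp op = new ConvolveOp(new Kernel(3, 3, sobelX), ConvolveOp.EDGE_NO_OP, null);
        BufferedImage edges = op.filter(input, null);

        ImageIO.write(edges, "png", new File("edges.png"));
    }
}
```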
I want to algorithmically specify every pixel on the screen (full screen) or window to paint in a Java application. I want to do an animation this way.
So, for each pixel, I'll run some type of calculation to determine what color it should be. I'll do this every frame for every pixel.
What is the highest performance (capable of highest frames per second) way to do that?
I understand graphics cards are programmable, but I'd like to stick with just coding in Java for this. If there is a straightforward way to code the algorithms in Java such that they run on the graphics card, that would be great, but I want a solution that does not involve another programming language (which I think OpenCL and the like would require).
I've done this type of animation before using the PixelGrabber and MemoryImageSource combination. Here you have some documentation and samples.
That's the best-performing technique I know. You usually work on the pixel array (doing the frame animation transformations) and then render the pixels into the resulting image (there's no need to invoke getPixel/setPixel methods to set individual pixels, which, in the old days, was a great optimization).
Don't have any code sample of my own right now, but I can provide one later if you're interested in using this.
As a side note, old editions of the book Java: The Complete Reference make plenty of use of this technique for image-manipulation examples.
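In the meantime, a minimal illustrative sketch of the MemoryImageSource technique (the per-pixel "algorithm" here is just a placeholder):

```java
import java.awt.Component;
import java.awt.Graphics;
import java.awt.Image;
import java.awt.Toolkit;
import java.awt.image.MemoryImageSource;

// Keep an int[] of ARGB pixels, rewrite it each frame, and push the changes
// to the Image via newPixels().
public class PixelCanvas extends Component {
    private final int width = 320, height = 240;
    private final int[] pixels = new int[width * height];
    private final MemoryImageSource source =
            new MemoryImageSource(width, height, pixels, 0, width);
    private final Image image;

    public PixelCanvas() {
        source.setAnimated(true);                        // allow incremental updates
        image = Toolkit.getDefaultToolkit().createImage(source);
    }

    // Compute one frame: each pixel is just some function of position and time.
    public void renderFrame(long t) {
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                int v = (int) (((x ^ y) + t) & 0xFF);    // placeholder "algorithm"
                pixels[y * width + x] = 0xFF000000 | (v << 16) | (v << 8) | v;
            }
        }
        source.newPixels();                              // push the new pixels to the Image
        repaint();
    }

    @Override
    public void paint(Graphics g) {
        g.drawImage(image, 0, 0, null);
    }
}
```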
I need a suggestion/idea for how to create a 3D tag cloud in Java (Swing), exactly like the one shown here: http://www.adesblog.com/2008/08/27/wp-cumulus-plugin/ . Could you help, please?
I'd go either with Swing and Java2D or OpenGL (JOGL).
I used OpenGL a few times, and drawing text is easy using JOGL's extensions (TextRenderer).
If you choose Swing, then the hard part will be implementing the 3D transformation. You'd have to write some sort of particle system, where the particles reside on a 3D sphere. You would be responsible for doing the 3D transformations yourself, but with an orthogonal projection that is trivial. So it's a nice exercise; what you need is here: Wiki's spherical coordinate system, and here: 3D-to-2D projection.
After you have done all the transformations, only the drawing is left, and Java2D and Swing have a very convenient API for this. It boils down to picking a font size and drawing text at the given coordinates. A custom JPanel with an overridden paintComponent method is enough to start and finish.
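A small sketch of that math, with illustrative names (spherical-to-cartesian conversion, orthogonal projection, and depth-based font size and shade):

```java
import java.awt.Color;
import java.awt.Graphics2D;

// Place each tag on a sphere via spherical coordinates, convert to cartesian,
// and project orthogonally (drop z) onto the panel; z is reused as a depth cue.
public class TagSphere {

    static void drawTag(Graphics2D g, String text, double radius,
                        double theta, double phi, int centerX, int centerY) {
        // Spherical -> cartesian.
        double x = radius * Math.sin(theta) * Math.cos(phi);
        double y = radius * Math.sin(theta) * Math.sin(phi);
        double z = radius * Math.cos(theta);

        // Orthogonal projection: ignore z for position, but use it for depth cues,
        // so "nearer" tags appear larger and darker.
        float depth = (float) ((z / radius + 1.0) / 2.0); // 0 (far) .. 1 (near)
        g.setFont(g.getFont().deriveFont(10f + 14f * depth));
        int gray = (int) (200 - 160 * depth);
        g.setColor(new Color(gray, gray, gray));

        g.drawString(text, centerX + (int) x, centerY + (int) y);
    }
}
```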
As for the second choice, the hardest part is the OpenGL API itself. It's procedural, so if you're mostly familiar with Java you may have a hard time with the non-OO style. You can get used to it and, to be honest, it can be quite rewarding, since you can do a lot with it. If you pick OpenGL, you get all the 3D transformations for free, but you still have to convert from the spherical coordinate system to cartesian yourself (the first wiki article is still helpful). After that it's just a matter of using some text-drawing class, such as the TextRenderer that comes with the JOGL distribution.
So OpenGL helps you with the view-projection calculations and is hardware accelerated. The Java2D route requires more math on your part, but in my opinion this approach seems a bit easier. Oh, and by the way: Java2D tries to use any graphics acceleration there is (OpenGL or DirectDraw) internally, so you are shielded from certain low-level problems.
For both options you also need to bind the mouse coordinates to the rotational speed of the sphere. Whether it's Java2D or OpenGL, the code will look very similar: just map the mouse coordinates relative to the center of the panel to some speed vector. At drawing time you can use that vector to rotate the sphere accordingly.
And one more thing: if you want to try OpenGL, I'd recommend the Processing language, created at MIT especially for rich graphics applets. Their 3D API, not so coincidentally, is almost the same as OpenGL, but without much of the cruft. So if you want the quickest prototype, that's the best bet. Consult this discussion thread for an actual example. Note: Processing is written in Java.
That's not really 3D. There is no perspective transformation or mapping of the text onto some 3D shape (such as, say, a sphere). What you have is a bunch of strings where each string has an associated depth (or Z order). Strings "closer" to you are painted with a stronger shade of gray and a larger font size.
The motion of each string as you move the mouse is indeed a 3D shape that looks like a slanted circle around a fixed center, with the slant depending on where the mouse cursor is. That's simple math: if you figure it out for one string, you figure it out for all. The last piece is to scatter the strings so that they don't overlap too much, and to give each one an initial weight based on its frequency.
That's what most of the code is doing. So you need to either do the math, or translate the ActionScript to Java2D blindly. And no, there is no need for JOGL.
Why don't you just download the source code, and have a look? Even if you can't write PHP, it should still be possible to read it and figure out how the algorithm works.