I have been trying to find a reliable procedure for extracting directional information from an image. I am using DICOM images and doing the whole procedure in Java. Are there any techniques in Java that can help me?
I want to get a graph of angles from the image, based on the intensity distribution. A general angular plot (radius and theta from a polar conversion) is not helpful in this case. I believe the Fourier transform is a great technique for finding the major direction of an image, but I want to evaluate the result quantitatively. How can I do that?
Can someone show me how to compute and evaluate the structure tensor in this case?
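The structure tensor is straightforward to build from image gradients. Below is a minimal sketch using OpenCV's Java bindings; the smoothing window and sigma are assumptions you would tune, and `gray` is assumed to be a single-channel CV_32F Mat.

    import org.opencv.core.*;
    import org.opencv.imgproc.Imgproc;

    public class StructureTensor {
        // Returns a CV_32F Mat of per-pixel dominant orientations in radians.
        static Mat orientation(Mat gray) {
            Mat ix = new Mat(), iy = new Mat();
            Imgproc.Sobel(gray, ix, CvType.CV_32F, 1, 0);
            Imgproc.Sobel(gray, iy, CvType.CV_32F, 0, 1);

            // Tensor components: Jxx = Ix*Ix, Jyy = Iy*Iy, Jxy = Ix*Iy
            Mat jxx = new Mat(), jyy = new Mat(), jxy = new Mat();
            Core.multiply(ix, ix, jxx);
            Core.multiply(iy, iy, jyy);
            Core.multiply(ix, iy, jxy);

            // Local averaging of the tensor; window size and sigma are tuning choices.
            Size win = new Size(9, 9);
            Imgproc.GaussianBlur(jxx, jxx, win, 2.0);
            Imgproc.GaussianBlur(jyy, jyy, win, 2.0);
            Imgproc.GaussianBlur(jxy, jxy, win, 2.0);

            // Dominant orientation: 0.5 * atan2(2*Jxy, Jxx - Jyy)
            Mat diff = new Mat(), twoJxy = new Mat(), angle = new Mat();
            Core.subtract(jxx, jyy, diff);
            Core.multiply(jxy, new Scalar(2.0), twoJxy);
            Core.phase(diff, twoJxy, angle, false); // atan2(twoJxy, diff), radians
            Core.multiply(angle, new Scalar(0.5), angle);
            return angle;
        }
    }

A histogram of the returned angles gives the angle-versus-intensity graph you describe, and the tensor also yields a coherence measure, sqrt((Jxx-Jyy)^2 + 4*Jxy^2) / (Jxx + Jyy), that you can use to weight the histogram so flat regions do not dominate the quantitative result.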
I am creating an Android project where I need to detect the edges of an object at a very early stage, but the edges come out broken. I have tried Otsu thresholding for the edge detection.
Now I am trying to get the intensity from the histogram. Can anyone help me figure out how to compute the mean from the histogram of an image using the calcHist method in OpenCV? I am also considering dividing the image into 3×3 blocks and then processing each block, but I am unable to work out how to do that.
I am using the OpenCV library.
Thanks.
Canny, Sobel, Prewitt and many other algorithms perform edge detection, but, as you said, they can fail to extract all edges. You can use an edge-linking algorithm to connect the broken edges. I used ant colony optimization (a swarm-intelligence metaheuristic, not a genetic algorithm) to make the image understandable, but there are dozens of linking algorithms you can use; a simple one is sketched below.
Cheers.
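If ant colony optimization is more than you need, a much cruder but very common way to link small gaps is morphological closing of the edge map. A minimal sketch with OpenCV's Java bindings, assuming `edges` is the binary output of Canny and a 5×5 kernel (a tuning assumption):

    import org.opencv.core.Mat;
    import org.opencv.core.Size;
    import org.opencv.imgproc.Imgproc;

    // Dilate then erode so that nearby edge fragments merge into one contour.
    Mat kernel = Imgproc.getStructuringElement(Imgproc.MORPH_ELLIPSE, new Size(5, 5));
    Mat linked = new Mat();
    Imgproc.morphologyEx(edges, linked, Imgproc.MORPH_CLOSE, kernel);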
I'm not totally sure this is in the right place, so let me know...
Just so I'm being totally transparent, this is part of a coursework for my University course. To that end, please don't post an answer but please DO give me hints & nudges in the right direction. I've been working on it for a few days without much success.
I've been tasked with converting a grayscale image into an RGB image. It's been suggested that we segment the image and add colours to each segment with an algorithm. It's also noted that we could develop algorithms for both the RGB and HSI colour spaces to improve the visualisation.
My first thought was that we could segment the image using some thresholding technique (on the grayscale intensity values) and then add colours to the segments, but I'm not sure if this is right.
I'm programming in Java and have use of the OpenCV library.
Any thought / ideas / hints / suggestions appreciated :)
A very nice presentation describing different colorization algorithms
http://www.cs.unc.edu/~lazebnik/research/fall08/lec06_colorization.pdf
The basic idea is to match texture/luminance between the source and target images and then transfer the colour information; the better the match, the better your result. However, matching intensity values in Lab space can be misleading, as many pixels in the source image may have similar luminance values around them. To overcome this, segmenting the source image using texture information can help: you can then transfer the colour values of matching textures instead of relying on luminance alone.
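Since you asked for hints only: for the threshold-and-colour idea in your question, note that OpenCV already provides Imgproc.applyColorMap, which maps grayscale intensities straight to RGB. It is pseudocolouring rather than true colorization, but it makes a useful baseline to compare a segmentation-based approach against. A minimal sketch (file names are placeholders; Imgcodecs assumes OpenCV 3+):

    import org.opencv.core.Mat;
    import org.opencv.imgcodecs.Imgcodecs;
    import org.opencv.imgproc.Imgproc;

    // Baseline pseudocolouring: each grayscale intensity maps to a fixed RGB value.
    Mat gray = Imgcodecs.imread("input.png", Imgcodecs.IMREAD_GRAYSCALE);
    Mat colour = new Mat();
    Imgproc.applyColorMap(gray, colour, Imgproc.COLORMAP_JET);
    Imgcodecs.imwrite("colourised.png", colour);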
I have asked this question elsewhere too, but since that topic was a different one, maybe it was not noticed. I got the eigenface algorithm for face recognition working using OpenCV in Java. I want to increase the accuracy of the code, as it is well known that eigenfaces rely greatly on light intensity.
What I have right now
I get perfect results if I test with an image taken at the same place where the pictures in my database were taken, but the results get worse when I feed in images taken in different places.
I figured out that the reason is that my images differ in light intensity.
Hence, my question is:
Is there any way to standardise the images saved in the database, or the fresh ones coming into the system for a recognition check, so that I can improve the accuracy of my current face-recognition system?
Any kind of working solution to the problem would be really helpful.
Identifying lighting intensity and pose is an important factor in face recognition. Try a histogram comparison between the training and testing images (http://docs.opencv.org/doc/tutorials/imgproc/histograms/histogram_comparison/histogram_comparison.html); this measure helps you detect and avoid the worst lighting situations. Preprocessing is also one of the key success factors in face recognition: gamma correction and DoG (difference-of-Gaussians) filtering may reduce the lighting problems.
You can also apply an elliptical mask to keep only the face, removing the noise created by hair, neck, etc.
The OpenCV cookbook provides an excellent and simple tutorial on this.
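A minimal sketch of the histogram comparison in OpenCV's Java bindings (the bin count and the correlation metric are assumptions; both inputs are expected to be grayscale):

    import java.util.Arrays;
    import org.opencv.core.*;
    import org.opencv.imgproc.Imgproc;

    // Compare the intensity distributions of a training face and a test face.
    static double histogramSimilarity(Mat trainGray, Mat testGray) {
        MatOfInt channels = new MatOfInt(0);
        MatOfInt histSize = new MatOfInt(64);         // bin count: tuning choice
        MatOfFloat ranges = new MatOfFloat(0f, 256f);
        Mat h1 = new Mat(), h2 = new Mat();
        Imgproc.calcHist(Arrays.asList(trainGray), channels, new Mat(), h1, histSize, ranges);
        Imgproc.calcHist(Arrays.asList(testGray), channels, new Mat(), h2, histSize, ranges);
        Core.normalize(h1, h1, 0, 1, Core.NORM_MINMAX);
        Core.normalize(h2, h2, 0, 1, Core.NORM_MINMAX);
        // 1.0 means identical distributions; low values flag a lighting mismatch.
        return Imgproc.compareHist(h1, h2, Imgproc.HISTCMP_CORREL);
    }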
Below are some options which may help you boost your accuracy (a sketch of steps 1 and 4 follows the list):
1] Image normalization:
Scale your image pixel values to the range 0 to 1 to reduce the effect of lighting conditions.
2] Image alignment (a very important step for good performance):
Align all the training and test images so that the eyes, nose and mouth of all the faces have almost the same coordinates in every image.
Check this post on face alignment (highly recommended): https://www.pyimagesearch.com/2017/05/22/face-alignment-with-opencv-and-python/
3] Data augmentation trick:
You can apply filters to your face images that simulate the same face under different lighting conditions, so from one face you can generate several images in different lighting conditions.
4] Removing noise:
Before performing step 3, apply a Gaussian blur to all the images.
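A minimal sketch of steps 1 and 4 with OpenCV's Java bindings (`face` is an assumed 8-bit grayscale Mat; the blur kernel size is a tuning assumption):

    import org.opencv.core.CvType;
    import org.opencv.core.Mat;
    import org.opencv.core.Size;
    import org.opencv.imgproc.Imgproc;

    // Step 4: suppress sensor noise before any augmentation or training.
    Mat blurred = new Mat();
    Imgproc.GaussianBlur(face, blurred, new Size(3, 3), 0);

    // Step 1: rescale 8-bit pixel values into [0, 1] to soften lighting differences.
    Mat normalized = new Mat();
    blurred.convertTo(normalized, CvType.CV_32F, 1.0 / 255.0);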
I have an image in which I want to find a specific location (coordinate) based on its colour. For example, I want to find the coordinates of the edges of the black box in this image.
How can I detect the colour black in this image in Java?
Note: my target is to develop a program to detect eyes in a face.
I would suggest using a threshold filter and then converting the image to a 1-bit format; that should do the trick (see the sketch below).
However, locating eyes in an image is much harder. You might be interested in the open-source OpenCV library. Here is a port dedicated to Java: javacv. And a C++ example of face detection using OpenCV.
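A minimal sketch of that thresholding idea with OpenCV's Java bindings. The threshold value 50 and the file name are placeholders; THRESH_BINARY_INV makes the dark box white so its pixels can be collected:

    import org.opencv.core.*;
    import org.opencv.imgcodecs.Imgcodecs;
    import org.opencv.imgproc.Imgproc;

    Mat gray = Imgcodecs.imread("image.png", Imgcodecs.IMREAD_GRAYSCALE);

    // Mark every sufficiently dark pixel (intensity < 50) as foreground.
    Mat dark = new Mat();
    Imgproc.threshold(gray, dark, 50, 255, Imgproc.THRESH_BINARY_INV);

    // The bounding rectangle of the dark pixels gives the box's edge coordinates.
    MatOfPoint points = new MatOfPoint();
    Core.findNonZero(dark, points);
    Rect box = Imgproc.boundingRect(points);
    System.out.println("Box corners: " + box.tl() + " to " + box.br());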
As far as I know, the Fourier transform is used in image processing to move a picture into the frequency domain, where it is treated as a signal (in the case of an image, a two-dimensional one). You can use the Fast Fourier Transform algorithm (FFT in Java, Fun with Java, Understanding FFT). There are also a lot of papers about the eye detection problem that you can read and take inspiration from (a frequency-domain sketch follows the links):
http://www.jprr.org/index.php/jprr/article/viewFile/15/7
http://people.ee.ethz.ch/~bfasel/papers/avbpa_face.pdf
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.117.2226&rep=rep1&type=pdf
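A minimal sketch of moving an image into the frequency domain, using OpenCV's Core.dft rather than a standalone Java FFT library (`gray` is assumed to be a single-channel CV_32F Mat; the log scaling at the end is only for inspecting the spectrum):

    import java.util.ArrayList;
    import java.util.List;
    import org.opencv.core.*;

    // Pad to an efficient DFT size.
    int rows = Core.getOptimalDFTSize(gray.rows());
    int cols = Core.getOptimalDFTSize(gray.cols());
    Mat padded = new Mat();
    Core.copyMakeBorder(gray, padded, 0, rows - gray.rows(), 0, cols - gray.cols(),
            Core.BORDER_CONSTANT, Scalar.all(0));

    // Pack real and imaginary planes, transform, then take the log-magnitude.
    List<Mat> planes = new ArrayList<>();
    planes.add(padded);
    planes.add(Mat.zeros(padded.size(), CvType.CV_32F));
    Mat complex = new Mat();
    Core.merge(planes, complex);
    Core.dft(complex, complex);
    Core.split(complex, planes);
    Mat magnitude = new Mat();
    Core.magnitude(planes.get(0), planes.get(1), magnitude);
    Core.add(magnitude, Scalar.all(1), magnitude);
    Core.log(magnitude, magnitude); // log(1 + |F|) for display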
Given:
two images of the same subject matter;
the images have the same resolution, colour depth, and file format;
the images differ in size and rotation; and
two lists of (x, y) co-ordinates that correlate the images.
I would like to know:
How do you transform the larger image so that it visually aligns to the second image?
(Optional.) What is the minimum number of points needed to get an accurate transformation?
(Optional.) How far apart do the points need to be to get an accurate transformation?
The transformation would need to rotate, scale, and possibly shear the larger image. Essentially, I want to create (or find) a program that does the following:
Input two images (e.g., TIFFs).
Click several anchor points on the small image.
Click the several corresponding anchor points on the large image.
Transform the large image such that it maps to the small image by aligning the anchor points.
This would help align pictures of the same stellar object. (For example, a hand-drawn picture from 1855 mapped to a photograph taken by Hubble in 2000.)
Many thanks in advance for any algorithms (preferably Java or similar pseudo-code), ideas or links to related open-source software packages.
This is called Image Registration.
Mathworks discusses this, Matlab has this ability, and more information is in the Elastix Manual.
Consider:
Open source Matlab equivalents
IRTK
IRAF
Hugin
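A minimal sketch of the point-based alignment with OpenCV's Java bindings. An affine transform covers rotation, scale and shear and is fully determined by exactly three non-collinear point pairs, which also answers the minimum-points question for this model; more, well-spread pairs fitted by least squares give a more accurate result. The point values and the `largeImage`/`smallImage` Mats are placeholders:

    import org.opencv.core.*;
    import org.opencv.imgproc.Imgproc;

    // Three corresponding anchor points clicked on each image (placeholder values).
    MatOfPoint2f largePts = new MatOfPoint2f(
            new Point(120, 80), new Point(940, 110), new Point(500, 860));
    MatOfPoint2f smallPts = new MatOfPoint2f(
            new Point(30, 20), new Point(235, 28), new Point(125, 215));

    // Solve for the 2x3 affine matrix and resample the large image onto the small one.
    Mat affine = Imgproc.getAffineTransform(largePts, smallPts);
    Mat aligned = new Mat();
    Imgproc.warpAffine(largeImage, aligned, affine, smallImage.size());

With more than three pairs, Calib3d.estimateAffine2D (OpenCV 3.2+) does a robust least-squares fit instead of an exact solve.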
You can use the javax.imageio or Java Advanced Imaging APIs for rotating, shearing and scaling the images once you have worked out what you want to do with them.
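For that pure-Java route, java.awt.geom.AffineTransform plus AffineTransformOp does the resampling. A sketch with placeholder file names and transform values (note that reading TIFF through ImageIO needs Java 9+ or a plugin):

    import java.awt.geom.AffineTransform;
    import java.awt.image.AffineTransformOp;
    import java.awt.image.BufferedImage;
    import java.io.File;
    import javax.imageio.ImageIO;

    public class AlignDemo {
        public static void main(String[] args) throws Exception {
            BufferedImage src = ImageIO.read(new File("large.tif"));

            // Rotate about the image centre, then scale down; the angle and
            // scale are placeholders you would derive from your anchor points.
            AffineTransform t = new AffineTransform();
            t.rotate(Math.toRadians(12), src.getWidth() / 2.0, src.getHeight() / 2.0);
            t.scale(0.5, 0.5);

            AffineTransformOp op = new AffineTransformOp(t, AffineTransformOp.TYPE_BILINEAR);
            BufferedImage dst = op.filter(src, null);
            ImageIO.write(dst, "png", new File("aligned.png"));
        }
    }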
For a C++ implementation (without GUI), try the old KLT (Kanade-Lucas-Tomasi) tracker.
http://www.ces.clemson.edu/~stb/klt/