Ok I'm writing a small Java app that accepts two images as inputs, compares them, then gives a quantitative output as a measure of similarity (eg. 50% similar).
To my understanding, an FFT is a good way to measure the similarity of two images, but I can't for the love of god figure out how to code/implement it.
So far I've implemented another function which basically gives me two histograms (one for each image). All I need now is to write a method that will FFT an image and give me a quantifiable outcome.
Can anyone help me out with this? I'd really like to see some sample code, or at least a pointer in the right direction. Many thanks in advance.
Similarity is not an exact term. For example: if you have a circle and an ellipse, are they similar? They are both round objects, so in this sense they are - but if we want to filter out circles only, they are not. You will have to define a measure (or measures - for example roundness, intensity distribution, size, orientation, number of objects, Euler number, etc.), then calculate it for each image. The similarity of the two images will be (some kind of) distance between the two calculated values. This could be Euclidean distance (for two real measures), or some kind of error function (RMS for intensity distributions).
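For example, since you already have the two histograms, one such error function is the RMS difference between them; a minimal sketch in Java (assuming both histograms have the same number of bins and are normalized the same way):

```java
// Root-mean-square difference between two normalized histograms; 0 means identical.
static double rmsDistance(double[] hist1, double[] hist2) {
    double sumSq = 0.0;
    for (int i = 0; i < hist1.length; i++) {
        double diff = hist1[i] - hist2[i];
        sumSq += diff * diff;
    }
    return Math.sqrt(sumSq / hist1.length);
}
```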
You will also have to decide which transforms your measure should stay invariant under (is a rotated image similar to the original? If yes, a plain Fourier transform is not appropriate).
Measuring the similarity of images is hard; if you have to do that, I would read about image stitching. If you just need to distinguish blobs, first try to calculate some simple measures (I recommend calculating moments - area, orientation; also read about k-means clustering), or a 1D Fourier transform of the distance of the contour from the center of mass (which is a little bit more difficult).
Before you attempt to code up a 2DFT, you should fully understand the math behind it. flolo is correct that you can compute it by first doing a 1D FFT on the rows and columns and then combining the results, but I have no reason to believe the L_inf norm is the best way to convert them to a metric, since it completely skips the usual combining step to create the full 2DFT. Take a look at http://fourier.eng.hmc.edu/e101/lectures/Image_Processing/node6.html at the very bottom of the page.
That said, there may be better ways to compare images that don't require comparing 2D arrays of information. For instance, PCA (Principal Component Analysis, which is just a matter of running SVD (Singular Value Decomposition) on your images after mean-centering them, though I'd take a look at the Wikipedia article on it first) will give you a 1D vector which you can then compare directly with some L_p norm, although in this case I would use something like sum(min(a_i/b_i, b_i/a_i))/length(a), where a and b are the 1D vectors you got from the transform.
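A minimal sketch of that ratio-based score for two equal-length vectors (plain Java; note the formula only really makes sense for positive entries, so zero entries are simply skipped here):

```java
// Similarity score sum(min(a_i/b_i, b_i/a_i)) / length(a); 1.0 means identical vectors.
static double ratioSimilarity(double[] a, double[] b) {
    if (a.length != b.length) {
        throw new IllegalArgumentException("vectors must have the same length");
    }
    double sum = 0.0;
    for (int i = 0; i < a.length; i++) {
        // Guard against division by zero; treat a zero entry as "no match".
        if (a[i] == 0.0 || b[i] == 0.0) {
            continue;
        }
        sum += Math.min(a[i] / b[i], b[i] / a[i]);
    }
    return sum / a.length;
}
```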
There are many good sites with code for an FFT on a 1-D array of values. You just apply this FFT row by row to your image, and afterwards you apply it column-wise to the result.
Now you need a metric on the resulting transformed images; my suggestion would be to try the max-norm (L_inf), that is max_{x,y} |fft2d(imag1)[x,y] - fft2d(imag2)[x,y]|.
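A sketch of that row-then-column idea plus the max-norm metric, with a minimal textbook radix-2 FFT included so it is self-contained (the image is assumed to be grayscale with power-of-two side lengths):

```java
// Minimal textbook radix-2 FFT, recursive, in place (lengths must be powers of two).
// Any 1-D FFT implementation you find online can be dropped in instead.
static void fft1d(double[] re, double[] im) {
    int n = re.length;
    if (n == 1) return;
    double[] evenRe = new double[n / 2], evenIm = new double[n / 2];
    double[] oddRe  = new double[n / 2], oddIm  = new double[n / 2];
    for (int i = 0; i < n / 2; i++) {
        evenRe[i] = re[2 * i];     evenIm[i] = im[2 * i];
        oddRe[i]  = re[2 * i + 1]; oddIm[i]  = im[2 * i + 1];
    }
    fft1d(evenRe, evenIm);
    fft1d(oddRe, oddIm);
    for (int k = 0; k < n / 2; k++) {
        double ang = -2 * Math.PI * k / n;
        double wr = Math.cos(ang), wi = Math.sin(ang);
        double tr = wr * oddRe[k] - wi * oddIm[k];
        double ti = wr * oddIm[k] + wi * oddRe[k];
        re[k] = evenRe[k] + tr;           im[k] = evenIm[k] + ti;
        re[k + n / 2] = evenRe[k] - tr;   im[k + n / 2] = evenIm[k] - ti;
    }
}

// 2-D transform: 1-D FFT on every row, then on every column of the result.
// Returns the magnitude spectrum of a grayscale image (gray[y][x], values 0..255).
static double[][] fftMagnitude(double[][] gray) {
    int h = gray.length, w = gray[0].length;
    double[][] re = new double[h][w], im = new double[h][w];
    for (int y = 0; y < h; y++) {
        re[y] = gray[y].clone();
        fft1d(re[y], im[y]);                              // rows
    }
    for (int x = 0; x < w; x++) {
        double[] colRe = new double[h], colIm = new double[h];
        for (int y = 0; y < h; y++) { colRe[y] = re[y][x]; colIm[y] = im[y][x]; }
        fft1d(colRe, colIm);                              // columns
        for (int y = 0; y < h; y++) { re[y][x] = colRe[y]; im[y][x] = colIm[y]; }
    }
    double[][] mag = new double[h][w];
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
            mag[y][x] = Math.hypot(re[y][x], im[y][x]);
    return mag;
}

// L_inf (max-norm) distance between the two magnitude spectra, one way to read
// the suggestion above; 0 means the spectra are identical.
static double maxNormDistance(double[][] a, double[][] b) {
    double max = 0.0;
    for (int y = 0; y < a.length; y++)
        for (int x = 0; x < a[0].length; x++)
            max = Math.max(max, Math.abs(a[y][x] - b[y][x]));
    return max;
}
```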
If you just want to check whether it is likely that one image is a quick edit of another - for something like DRM of stock photography - then check the percentages of a normalized color palette within probable regions. If they match within a THRESHOLD for a NUMBER_OF_TEST_COLORS in any one of a number of TEST_REGIONS within the image, then you have a "suspect"... you still need a human to check the suspects. But this is a quick and dirty way to find many of the image re-sizers, horizontal/vertical flippers, background color changers, file format changers, and other subtle variations. Of course, "normalizing the colors" to a quantized palette is an art unto itself; I would recommend quantizing images into the nearest "web safe" colors for practicality.
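A quick sketch of the "nearest web safe color" quantization in plain Java (the region and threshold bookkeeping is left out):

```java
// Snap each 8-bit channel to the nearest multiple of 51 (0x33), giving the
// 216-color "web safe" palette. Input and output are packed ARGB ints.
static int toWebSafe(int argb) {
    int a = (argb >>> 24) & 0xFF;
    int r = Math.round(((argb >> 16) & 0xFF) / 51f) * 51;
    int g = Math.round(((argb >> 8) & 0xFF) / 51f) * 51;
    int b = Math.round((argb & 0xFF) / 51f) * 51;
    return (a << 24) | (r << 16) | (g << 8) | b;
}
```

Counting the quantized colors per TEST_REGION then gives you the percentage table to compare between the two images.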
I'm a blue collar garbage man in comparison to a mathematician, but garbage men are quite practical! I have had good success with this kind of approach in grouping similar images and search by color applications.
I've been working on an image-processing project for a while, which consists of a way to measure and classify some types of sugar in the production line by their color. Until now, my biggest concern was searching for and implementing the appropriate mathematical techniques to calculate the distance between two colors (a reference color and the color being analysed), and then turn this value into something more meaningful, such as an industry-standard measure.
From this, I'm trying to figure out how I should reliably extract the average color value from an image, since the frame captured by the video camera may contain noise or dirt in the sugar (most likely almost-black dots).
Language: Java with OpenCV library.
Current solution: Before taking the average image value, I'm applying the fastNlMeansDenoisingColored function provided by OpenCV. It removes some white dots, at the cost of losing some finer detail. I couldn't remove the black dots with it (not shown in the following images).
From there, I'm using the org.opencv.core.Core.mean function to compute the mean value of the array elements independently for each channel, so that I have a scalar value to use in my calculations.
I also tried some image thresholding filters to get rid of the black and white dots and then calculate the mean with a mask; that kind of works too. I also tried to find a weighted-average function that could return scalar values, but without success.
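Roughly what I mean by "mean with a mask", using the OpenCV Java bindings (the inRange bounds below are placeholders, not tuned values):

```java
import org.opencv.core.*;
import org.opencv.imgproc.Imgproc;
import org.opencv.photo.Photo;

// Denoise, mask out near-black and near-white pixels, then average the rest.
static Scalar maskedMeanColor(Mat bgr) {
    Mat denoised = new Mat();
    Photo.fastNlMeansDenoisingColored(bgr, denoised);

    Mat hsv = new Mat();
    Imgproc.cvtColor(denoised, hsv, Imgproc.COLOR_BGR2HSV);

    // Keep pixels whose brightness is in a "plausible sugar" range;
    // the bounds are placeholders that would still need tuning.
    Mat mask = new Mat();
    Core.inRange(hsv, new Scalar(0, 0, 40), new Scalar(180, 255, 230), mask);

    // Per-channel mean (B, G, R) computed only over the masked pixels.
    return Core.mean(denoised, mask);
}
```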
I don't know if these pre-processing techniques are robust enough for such an application; the mean values can vary easily. Am I on the right track? Would you suggest a better way to get a reliable value that represents my sugar's color?
I am new to image processing and to OpenCV in particular.
I am working on an OCR project in which I need to identify numbers.
This is my image to process:
Let's say I have already optimized the image; my questions are:
In the image the numbers always appear several times. Let's say I have found the contours; how can I know which one is the best one to process?
How can I know at what angle I need to rotate each contour to make it straight?
In the image the numbers always appear several times. Let's say I have found the contours; how can I know which one is the best one to process?
You always want the biggest numbers, because they are the least warped by perspective. So you always want the numbers in the middle of the image, because they are also in the middle of the ball.
How can I know at what angle I need to rotate each contour to make it straight?
Have a look at RotatedRect. I explained how to find the angle in this thread.
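With the OpenCV Java bindings that boils down to something like this (a sketch; contour is one of the contours you already found):

```java
import org.opencv.core.MatOfPoint;
import org.opencv.core.MatOfPoint2f;
import org.opencv.core.RotatedRect;
import org.opencv.imgproc.Imgproc;

// Fit a rotated rectangle around one contour and read off its angle.
static double contourAngle(MatOfPoint contour) {
    RotatedRect box = Imgproc.minAreaRect(new MatOfPoint2f(contour.toArray()));
    // OpenCV reports the angle in degrees; depending on the box orientation
    // you may need to add 90 degrees before de-rotating the patch.
    return box.angle;
}
```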
Since you always have a perfectly centered ball, you should think about using mapping to "unwarp" your ball (so do a projection like from the globe onto a map). It should be pretty straightforward afterwards to find the numbers on the flat image.
Edit: Since you only have 10 numbers you might also "brute force" the solution with a big enough training set. So just throw all numbers you detect into a classifier and keep the most likely solution.
1) I agree with @Sebastian on the first part. Exploit the fact that in your scenario the numbers are placed on the surface of a ball, so first select the blobs inside a centered region of interest.
2) The contours shown in the image are not rotated (the numbers are). Instead of "rotating" these bounding boxes, which seems to be quite a headache, I'd rather use them combined with rotation invariant keypoints. I'll clarify this:
a) You know where your numbers are, so you don't have to search in the entire image. OK, keep these already selected regions in mind.
b) You can take "straight" samples of the numbers 0-9 and use them as ground truth.
c) You can perform a matching search between each "ground truth" image and each candidate region. Now, forget the scale/rotation: use scale/rotation invariant keypoints! Something like this:
Again, notice that you have already selected the region of interest, so in your case the search will consist of checking the number of matches (number of blue lines) between each of the registered numbers and your candidate. I think it's worth a try! :)
You can find more info on the different keypoints available in OpenCV here.
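As a rough illustration of step c) with the OpenCV Java bindings (ORB is used here only because it ships with recent OpenCV versions; any scale/rotation-invariant detector will do):

```java
import org.opencv.core.Mat;
import org.opencv.core.MatOfDMatch;
import org.opencv.core.MatOfKeyPoint;
import org.opencv.features2d.DescriptorMatcher;
import org.opencv.features2d.ORB;

// Count keypoint matches between a "ground truth" digit image and a candidate region.
static int countMatches(Mat truthGray, Mat candidateGray) {
    ORB orb = ORB.create();
    MatOfKeyPoint kp1 = new MatOfKeyPoint(), kp2 = new MatOfKeyPoint();
    Mat desc1 = new Mat(), desc2 = new Mat();
    orb.detectAndCompute(truthGray, new Mat(), kp1, desc1);
    orb.detectAndCompute(candidateGray, new Mat(), kp2, desc2);

    DescriptorMatcher matcher =
            DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);
    MatOfDMatch matches = new MatOfDMatch();
    matcher.match(desc1, desc2, matches);
    // In practice you would filter by match distance before counting.
    return matches.toArray().length;
}
```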
Hope that it helps!
I need to create a heatmap for Android Google Maps. I have geolocated data points with negative and positive weights attributed to them that I would like to represent visually. Unlike the majority of heatmaps, I want these positive and negative weights to destructively interfere; that is, when two nearby points have opposite signs, their overlap cancels out, effectively not rendering areas that cancel out completely.
I plan on using the Android Google Maps TileOverlay/TileProvider classes, which have the job of creating/rendering tiles based on a given location and zoom. (I don't have an issue with this part.)
How should I go about rendering these tiles? I plan on using Java's Graphics class, but the best that I can think of is going through each pixel, calculating what color it should be based on the surrounding data points, and rendering that pixel. This seems very inefficient, however, and I was looking for suggestions on a better approach.
Edit: I've considered everything from using a non-android Google Map inside of a WebView to using a TileOverlay to using a GroundOverlay. What I am now considering doing is having a large 2 dimensional array of "squares." Each square would have a long, lat, and total +/- weights. When a new data point is added, instead of rendering it exactly where it is, it will be added to the "square" that it is in. The weight of this data point will be added to the square and then I will use the GoogleMap Polygon object to render the square on the map. The ratio of +points to -points will determine the color that is rendered, with a ratio closer to 1:1 being clear, >1 being blue (cold point), and <1 being red (hot point).
Edit: a.k.a. clustering the data into small regional groups
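Roughly what I have in mind for that binning, in plain Java (the class names and the grid size are made up for illustration):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical grid of "squares"; CELL_SIZE_DEG is the side of one square in degrees.
class WeightGrid {
    static final double CELL_SIZE_DEG = 0.01;
    final Map<String, double[]> cells = new HashMap<>(); // key -> {sumPositive, sumNegative}

    void add(double lat, double lng, double weight) {
        String key = (int) Math.floor(lat / CELL_SIZE_DEG) + ":"
                   + (int) Math.floor(lng / CELL_SIZE_DEG);
        double[] sums = cells.computeIfAbsent(key, k -> new double[2]);
        if (weight >= 0) sums[0] += weight; else sums[1] += -weight;
    }

    // Ratio of + to - weight in a cell: ~1 cancels out, >1 leans one way, <1 the other.
    double ratio(String key) {
        double[] sums = cells.get(key);
        if (sums == null || sums[1] == 0) return Double.POSITIVE_INFINITY;
        return sums[0] / sums[1];
    }
}
```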
I suggest trying
going through each pixel, calculating what color it should be based on the surrounding data points, and rendering that pixel.
Even if it is slow, it will work. There are not too many Tiles on the screen, there are not too many pixels in each Tile, and all of this is done on a background thread.
All of this is still followed by translating the Bitmap into a byte[]. The byte[] is a representation of a PNG or JPG file, so it's not a simple pixel mapping from the Bitmap. That last operation takes some time too, and may well require more processing power than your whole algorithm.
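A sketch of what the per-pixel tile plus the Bitmap-to-byte[] step could look like (Android; colorAt() is a placeholder for your own weighting of the surrounding data points, not a real API):

```java
import android.graphics.Bitmap;
import com.google.android.gms.maps.model.Tile;
import java.io.ByteArrayOutputStream;

// Render one 256x256 tile pixel by pixel, then encode it as PNG bytes.
static Tile renderTile(int x, int y, int zoom) {
    final int size = 256;
    Bitmap bitmap = Bitmap.createBitmap(size, size, Bitmap.Config.ARGB_8888);
    for (int py = 0; py < size; py++) {
        for (int px = 0; px < size; px++) {
            bitmap.setPixel(px, py, colorAt(x, y, zoom, px, py));
        }
    }
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    bitmap.compress(Bitmap.CompressFormat.PNG, 100, out);   // the "expensive" encode step
    return new Tile(size, size, out.toByteArray());
}

// Placeholder: compute an ARGB color for one pixel from the surrounding data points.
static int colorAt(int tileX, int tileY, int zoom, int px, int py) {
    return 0x00000000; // fully transparent until you plug in your own weighting
}
```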
Edit (moved from comment):
What you describe in the edit sounds like a simple clustering on LatLng. I can't say it's a better or worse idea, but it's something worth a try.
I'm looking for an algorithm that can find the perceptual similarity between two images. Actually, I want to input one picture into the system and have it search my whole database, which contains a huge number of pictures, and then retrieve the images that are most perceptually similar to the source image. Could anybody please help me?
I mean I want to find similar pictures. I have heard that some algorithms can find similar pictures based on the source picture's shape, color, etc. (pixel by pixel). I want a system where I input the source image and it retrieves similar images based on perceptual features like shape, color, size, etc.
Thanks
You need to define carefully what 'perceptually similar' means to you, before trying to find a measurable entity that captures that. Imagine a picture of a grass field under a blue sky with a horse. Should your application retrieve all horse pictures? Or all pictures with green grass and a blue sky? In the latter case, the above-mentioned color histograms are a good start. Alternatively you could look at Gaussian mixture models (GMM); they are used quite a bit in retrieval. This code could be a starting point, and so could the article "Image retrieval using color histograms generated by Gauss mixture vector quantization".
More complicated is the so called "bag of words" or "visual words" approach. It is increasingly used for image categorization and identification. This algorithm usually starts by detecting robust points in an image, meaning that these points will survive certain image distortions. Example popular algorithms are SIFT and SURF. The region around these found points is captured with a descriptor, which could for example be a smart histogram.
In the most simple form, one can collect all data from all descriptors from all images and cluster them, for example using k-means. Every original image then has descriptors that contribute to a number of clusters. The centroids of these clusters, i.e. the visual words, can be used as a new descriptor for the image. The VLFeat website contains a nice demo of this approach, classifying the Caltech 101 dataset. Also noteworthy are results and software from Caltech itself.
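A very condensed sketch of the histogram-building step in Java; the local descriptors and the k-means centroids are assumed to come from a detector such as SIFT/SURF and a separate clustering step, which are not shown:

```java
import java.util.List;

// Turn one image's local descriptors into a "visual word" histogram,
// given centroids that were learned once over descriptors from all images.
static double[] bagOfWords(List<double[]> imageDescriptors, double[][] centroids) {
    double[] histogram = new double[centroids.length];
    for (double[] d : imageDescriptors) {
        histogram[nearestCentroid(d, centroids)]++;      // assign to closest visual word
    }
    // Normalize so images with different numbers of keypoints stay comparable.
    double total = Math.max(imageDescriptors.size(), 1);
    for (int i = 0; i < histogram.length; i++) histogram[i] /= total;
    return histogram;
}

// Index of the centroid closest (in squared Euclidean distance) to descriptor d.
static int nearestCentroid(double[] d, double[][] centroids) {
    int best = 0;
    double bestDist = Double.MAX_VALUE;
    for (int c = 0; c < centroids.length; c++) {
        double dist = 0;
        for (int i = 0; i < d.length; i++) {
            double diff = d[i] - centroids[c][i];
            dist += diff * diff;
        }
        if (dist < bestDist) { bestDist = dist; best = c; }
    }
    return best;
}
```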
One simple way to start is comparing the Color Histogram.
But the following article proposes the use of Joint Histogram instead. You may also take a look.
http://www.cs.cornell.edu/rdz/joint-histograms.html
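If you go the plain color-histogram route, a common starting score is histogram intersection; a minimal sketch (assuming both histograms are normalized to sum to 1):

```java
// Histogram intersection: 1.0 for identical normalized histograms, 0.0 for disjoint ones.
static double histogramIntersection(double[] h1, double[] h2) {
    double score = 0.0;
    for (int i = 0; i < h1.length; i++) {
        score += Math.min(h1[i], h2[i]);
    }
    return score;
}
```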
In Weka I load an ARFF file. I can view the relationship between attributes using the Visualize tab.
However I can't understand the meaning of the jitter slider. What is its purpose?
You can find the answer in the mailing list archives:
The jitter function in the Visualize panel just adds artificial random noise to the coordinates of the plotted points in order to spread the data out a bit (so that you can see points that might have been obscured by others).
I don't know Weka, but generally jitter is a term for the variation of a periodic signal with respect to some reference interval. I'm guessing the slider allows you to set some range or threshold below which data points are treated as being regular, or to modify the output to introduce some variation. The Wikipedia entry can give you some background.
Update: from this pdf, the jitter slider is for this purpose:
“Jitter” option to deal with nominal attributes (and to detect “hidden” data points)
Based on the accompanying slide it looks like it introduces some variation in the visualisation, perhaps to show when two data points overlap.
Update 2: This Google Books extract (from Data Mining by Ian H. Witten and Eibe Frank) seems to confirm my guess:
[jitter] is a random displacement applied to X and Y values to separate points that lie on top of one another. Without jitter, 1000 instances at the same data point would look just the same as 1 instance
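In code terms the idea is nothing more than a small random offset (a sketch, not Weka's actual implementation):

```java
import java.util.Random;

// Add a small random displacement so points that lie on top of one another become visible.
static double jitter(double value, double amount, Random rng) {
    return value + (rng.nextDouble() - 0.5) * amount;   // amount = maximum total spread
}
```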
I don't know the products you mention, but jittering generally means randomising the sample positions. E.g., in ray tracing you would normally render a ray through each pixel on the screen. Jittering adds a random offset to each ray to reduce issues caused by regular aliasing.