Given:
two images of the same subject matter;
the images have the same resolution, colour depth, and file format;
the images differ in size and rotation; and
two lists of (x, y) co-ordinates that correlate the images.
I would like to know:
How do you transform the larger image so that it visually aligns to the second image?
(Optional.) What is the minimum number of points needed to get an accurate transformation?
(Optional.) How far apart do the points need to be to get an accurate transformation?
The transformation would need to rotate, scale, and possibly shear the larger image. Essentially, I want to create (or find) a program that does the following:
Input two images (e.g., TIFFs).
Click several anchor points on the small image.
Click the corresponding anchor points on the large image.
Transform the large image such that it maps to the small image by aligning the anchor points.
This would help align pictures of the same stellar object. (For example, a hand-drawn picture from 1855 mapped to a photograph taken by Hubble in 2000.)
Many thanks in advance for any algorithms (preferably Java or similar pseudo-code), ideas or links to related open-source software packages.
This is called Image Registration.
MathWorks discusses this, MATLAB has this capability, and more information is in the Elastix manual.
Consider:
Open-source MATLAB equivalents
IRTK
IRAF
Hugin
You can use the javax.imageio or Java Advanced Imaging APIs for rotating, shearing, and scaling the images once you have worked out what you want to do with them.
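On the optional questions: an affine transform (rotation, scale, shear, translation) is fully determined by three non-collinear point pairs, so three is the minimum, and the further apart (and less collinear) the points are, the better conditioned the fit. Here is a minimal Java sketch, assuming exactly three anchor pairs and using only standard java.awt.geom and java.awt.image classes (the class and method names are mine, for illustration):

import java.awt.geom.AffineTransform;
import java.awt.geom.Point2D;
import java.awt.image.AffineTransformOp;
import java.awt.image.BufferedImage;

public class AnchorAlign {

    // Solves for the affine transform mapping the three anchor points of the
    // large image (src) onto the three anchor points of the small image (dst):
    // u = a*x + b*y + c and v = d*x + e*y + f (six unknowns, six equations).
    static AffineTransform fromAnchors(Point2D[] src, Point2D[] dst) {
        double x1 = src[0].getX(), y1 = src[0].getY();
        double x2 = src[1].getX(), y2 = src[1].getY();
        double x3 = src[2].getX(), y3 = src[2].getY();
        // Determinant of the system matrix [[x1,y1,1],[x2,y2,1],[x3,y3,1]];
        // zero means the anchor points are collinear and no unique fit exists.
        double det = x1 * (y2 - y3) - y1 * (x2 - x3) + (x2 * y3 - x3 * y2);
        double[] m = new double[6]; // {a, b, c, d, e, f}
        for (int i = 0; i < 2; i++) { // i == 0 solves for u, i == 1 for v
            double u1 = (i == 0) ? dst[0].getX() : dst[0].getY();
            double u2 = (i == 0) ? dst[1].getX() : dst[1].getY();
            double u3 = (i == 0) ? dst[2].getX() : dst[2].getY();
            // Cramer's rule for a, b, c (or d, e, f)
            m[3 * i]     = (u1 * (y2 - y3) - y1 * (u2 - u3) + (u2 * y3 - u3 * y2)) / det;
            m[3 * i + 1] = (x1 * (u2 - u3) - u1 * (x2 - x3) + (x2 * u3 - x3 * u2)) / det;
            m[3 * i + 2] = (x1 * (y2 * u3 - y3 * u2) - y1 * (x2 * u3 - x3 * u2)
                            + u1 * (x2 * y3 - x3 * y2)) / det;
        }
        // AffineTransform's constructor order is m00, m10, m01, m11, m02, m12
        return new AffineTransform(m[0], m[3], m[1], m[4], m[2], m[5]);
    }

    static BufferedImage warp(BufferedImage large, AffineTransform t) {
        AffineTransformOp op = new AffineTransformOp(t, AffineTransformOp.TYPE_BILINEAR);
        return op.filter(large, null); // note: output may be clipped if the
                                       // transform maps pixels to negative coordinates
    }
}

With more than three pairs, the same equations become an over-determined system you can solve by least squares, which averages out click error and is generally more robust.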
For a C++ implementation (without GUI), try the old KLT (Kanade-Lucas-Tomasi) tracker.
http://www.ces.clemson.edu/~stb/klt/
We are trying to crop the relevant area of an image (photo) with a square aspect ratio (1:1), similar to what Facebook does when creating thumbnails.
In our case, it doesn't really matter whether the crop keeps the original height (or width, when the image orientation is portrait, h > w) of the image being processed, or whether the auto-crop resizes it as well.
I am thinking of algorithms that compare objects against the background, or use focus, or something like a heat map combining colors and/or areas to find the most relevant part. There could be several ideas/methods for finding the main part of the image to be used, similar to face detection.
We are looking for a Java (Android)-based solution or anything that can be adopted for Java / Android. Any help or idea would be greatly appreciated! Thank you!
I would do this in two steps, where the initial step is more robust and the second could be based on, for example, entropy. For the first step, you can use SURF, which is relatively common nowadays, and I would expect to find Java implementations of it. SURF gives a set of key points that it considers important for describing your image. Given where these key points are, you have a set of (x, y) coordinates which you can use to reduce the area of your initial image to the region that encloses them.

Now, since these key points might be anywhere in your image, you will probably want to discard some of them (i.e., those that are too far from the others -- outliers). A very simple way to do this is to consider the convex hull of the initial set of key points; from there, you can peel this hull multiple times. Each time you "peel" it, you effectively discard the points on the current convex hull.
Here is a sample of this first step (in Mathematica):
f = Import["http://fohn.net/duck-pictures-facts/mallard-duck.jpg"];
kp = ImageKeypoints[f, MaxFeatures -> 200];
Show[f, Graphics[{PointSize[Medium], Red, Point[kp]}]]
After peeling the convex hull formed by the key points once and trimming the image to the bounding rectangle of the remaining points, you are left with a much tighter crop around the subject.
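A rough Java sketch of that peeling step, assuming the key points have already been extracted by whatever SURF implementation you use and rounded to integer pixel coordinates; the hull code is the standard monotone-chain algorithm, and the class name is mine:

import java.awt.Point;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class HullPeeling {

    // Andrew's monotone chain: returns the convex hull in counter-clockwise order.
    static List<Point> convexHull(List<Point> pts) {
        List<Point> p = new ArrayList<>(pts);
        p.sort(Comparator.<Point>comparingInt(a -> a.x).thenComparingInt(a -> a.y));
        int n = p.size();
        if (n < 3) return p;
        Point[] hull = new Point[2 * n];
        int k = 0;
        for (int i = 0; i < n; i++) {                 // lower hull
            while (k >= 2 && cross(hull[k - 2], hull[k - 1], p.get(i)) <= 0) k--;
            hull[k++] = p.get(i);
        }
        for (int i = n - 2, t = k + 1; i >= 0; i--) { // upper hull
            while (k >= t && cross(hull[k - 2], hull[k - 1], p.get(i)) <= 0) k--;
            hull[k++] = p.get(i);
        }
        List<Point> out = new ArrayList<>();
        for (int i = 0; i < k - 1; i++) out.add(hull[i]);
        return out;
    }

    static long cross(Point o, Point a, Point b) {
        return (long) (a.x - o.x) * (b.y - o.y) - (long) (a.y - o.y) * (b.x - o.x);
    }

    // One "peel": drop every key point that lies on the current convex hull.
    // Repeat as often as needed, then trim the image to the bounding
    // rectangle of the surviving points.
    static List<Point> peelOnce(List<Point> pts) {
        List<Point> hull = convexHull(pts);
        List<Point> inner = new ArrayList<>(pts);
        inner.removeAll(hull);
        return inner;
    }
}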
From the trimmed image, you can decide which sub-region to pick using some other method. One that is apparently common is the approach used by Reddit, which successively removes slices of lower entropy from the image. Quickly searching for it, I found one such implementation at https://github.com/christopherhan/pycrop/blob/master/pycrop.py#L33; it is very simple.
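For illustration, here is a rough Java sketch of that entropy-slicing idea. It uses a plain grey-level histogram entropy and fixed-thickness strips; the strip size is an arbitrary choice of mine, not taken from the Reddit code:

import java.awt.image.BufferedImage;

public class EntropyCrop {

    // Shannon entropy of the grey-level histogram of a sub-region.
    static double entropy(BufferedImage img, int x, int y, int w, int h) {
        int[] hist = new int[256];
        for (int j = y; j < y + h; j++)
            for (int i = x; i < x + w; i++) {
                int rgb = img.getRGB(i, j);
                int grey = ((rgb >> 16 & 0xFF) + (rgb >> 8 & 0xFF) + (rgb & 0xFF)) / 3;
                hist[grey]++;
            }
        double total = (double) w * h, e = 0;
        for (int c : hist)
            if (c > 0) { double p = c / total; e -= p * Math.log(p); }
        return e;
    }

    // Repeatedly shave the lower-entropy edge strip until the crop is square.
    static BufferedImage squareCrop(BufferedImage img) {
        int x = 0, y = 0, w = img.getWidth(), h = img.getHeight();
        int step = Math.max(1, Math.min(w, h) / 25); // strip thickness (arbitrary)
        while (w > h) { // landscape: shave the duller of the left/right strips
            if (entropy(img, x, y, step, h) < entropy(img, x + w - step, y, step, h))
                x += step;
            w -= step;
        }
        while (h > w) { // portrait: shave the duller of the top/bottom strips
            if (entropy(img, x, y, w, step) < entropy(img, x, y + h - step, w, step))
                y += step;
            h -= step;
        }
        return img.getSubimage(x, y, w, h);
    }
}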
Another different kind of method that you might want to try is called seam carving. Also note that, depending on how large the initial image is, cropping a small piece of it may not give anything relevant. In those cases, it is more interesting to first resize the image and then apply the relevant methods.
If I have an image of a table of boxes, with some coloured in, is there an image processing library that can help me turn this into an array?
Thanks
You can use a thresholding function to binarize the image into dark/light pixels so dark pixels are 0 and light ones are 1.
Then you would want to clean up artifacts using dilation and erosion functions to remove noise (all of these are well defined on Wikipedia).
Finally, if you know where the boxes are, you can just read the value at the center of each box to determine the array value, or use an area near the center and take the prevailing value (i.e., more 0's means a filled-in square, more 1's means an empty square).
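A rough Java sketch of the thresholding and centre-sampling steps (the morphological clean-up is omitted here), assuming an axis-aligned grid of evenly spaced boxes; the class name and parameters are mine:

import java.awt.image.BufferedImage;

public class GridReader {

    // Reads an R x C grid of boxes from an image whose cells are evenly
    // spaced (each cell assumed at least ~6 px across). Returns 1 for a
    // filled (dark) box, 0 for an empty one.
    static int[][] readGrid(BufferedImage img, int rows, int cols, int threshold) {
        int[][] grid = new int[rows][cols];
        int cellW = img.getWidth() / cols, cellH = img.getHeight() / rows;
        for (int r = 0; r < rows; r++)
            for (int c = 0; c < cols; c++) {
                // Sample a small window around the centre of each cell and
                // count dark pixels, so single-pixel noise cannot flip the result.
                int cx = c * cellW + cellW / 2, cy = r * cellH + cellH / 2;
                int dark = 0, total = 0;
                for (int dy = -2; dy <= 2; dy++)
                    for (int dx = -2; dx <= 2; dx++) {
                        int rgb = img.getRGB(cx + dx, cy + dy);
                        int grey = ((rgb >> 16 & 0xFF) + (rgb >> 8 & 0xFF) + (rgb & 0xFF)) / 3;
                        if (grey < threshold) dark++;
                        total++;
                    }
                grid[r][c] = (dark * 2 > total) ? 1 : 0; // majority vote
            }
        return grid;
    }
}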
If you are scanning these boxes and there is a lot of variation in the position of the boxes, you will have to perform some level of image registration using known points, or fiducials.
As far as what tools to use, I'd recommend first trying this manually with a tool like ImageJ, which has a UI and can also be used programmatically since it is written entirely in Java.
Other good libraries for this include OpenCV and the Java Advanced Imaging API.
Your results will definitely vary depending on the input images and how consistently lit and positioned they are.
The best way to see how it will do for your data is to apply these processing steps manually and find what your threshold value should be and how much dilating/eroding you need to get consistent results.
I'm looking for an algorithm that can find the perceptual similarity between two images. Actually, I want to input one picture into the system and have it search my whole database, which contains a huge number of pictures, and then retrieve the images that have the most perceptual similarity with the source image. Could anybody please help me?
I mean I want to find similar pictures. I have heard some algorithms can find similar pictures based on the source picture's shape, color, and so on (pixel by pixel). I want a system where I input the source image and it retrieves similar images based on perceptual features like shape, color, size, and so on.
Thanks
You need to define carefully what 'perceptually similar' means to you before trying to find a measurable quantity that captures it. Imagine a picture of a grass field under a blue sky with a horse. Should your application retrieve all horse pictures? Or all pictures with green grass and a blue sky? In the latter case, the above-mentioned color histograms are a good start. Alternatively, you could look at Gaussian mixture models (GMMs); they are used quite a bit in retrieval. This code could be a starting point, as could the article "Image retrieval using color histograms generated by Gauss mixture vector quantization".
More complicated is the so-called "bag of words" or "visual words" approach, which is increasingly used for image categorization and identification. This algorithm usually starts by detecting robust points in an image, meaning points that will survive certain image distortions; popular examples are SIFT and SURF. The region around each found point is captured with a descriptor, which could for example be a smart histogram.
In the simplest form, one can collect all the data from all descriptors from all images and cluster them, for example using k-means. Every original image then has descriptors that contribute to a number of clusters. The centroids of these clusters, i.e. the visual words, can be used as a new descriptor for the image. The VLFeat website contains a nice demo of this approach, classifying the Caltech 101 dataset. Also noteworthy are the results and software from Caltech itself.
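Once the visual words (cluster centroids) exist, turning an image's descriptors into its new descriptor is just nearest-centroid assignment plus a normalised count. A small Java sketch, assuming the descriptors and centroids are already available as plain double arrays (the data layout is a hypothetical one I chose for illustration):

public class BagOfWords {

    // Builds a bag-of-visual-words histogram for one image from its local
    // descriptors (e.g. from SIFT/SURF, one double[dim] per key point) and
    // the k-means centroids (the "visual words").
    static double[] toHistogram(double[][] descriptors, double[][] centroids) {
        double[] hist = new double[centroids.length];
        for (double[] d : descriptors) {
            // Assign each descriptor to its nearest visual word.
            int best = 0;
            double bestDist = Double.MAX_VALUE;
            for (int k = 0; k < centroids.length; k++) {
                double dist = 0;
                for (int i = 0; i < d.length; i++) {
                    double diff = d[i] - centroids[k][i];
                    dist += diff * diff;
                }
                if (dist < bestDist) { bestDist = dist; best = k; }
            }
            hist[best]++;
        }
        // Normalise so images with different numbers of key points compare fairly.
        for (int k = 0; k < hist.length; k++) hist[k] /= Math.max(1, descriptors.length);
        return hist;
    }
}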
One simple way to start is comparing color histograms.
But the following article proposes using joint histograms instead. You may also want to take a look:
http://www.cs.cornell.edu/rdz/joint-histograms.html
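As a starting point, here is a small Java sketch of a plain color-histogram comparison; the 16 bins per channel and the histogram-intersection measure are common but arbitrary choices of mine:

import java.awt.image.BufferedImage;

public class ColorHistogram {

    // 4 bits per channel -> 16 x 16 x 16 = 4096 bins, normalised to sum to 1.
    static double[] histogram(BufferedImage img) {
        double[] h = new double[16 * 16 * 16];
        for (int y = 0; y < img.getHeight(); y++)
            for (int x = 0; x < img.getWidth(); x++) {
                int rgb = img.getRGB(x, y);
                int r = (rgb >> 20) & 0xF, g = (rgb >> 12) & 0xF, b = (rgb >> 4) & 0xF;
                h[(r << 8) | (g << 4) | b]++;
            }
        double n = (double) img.getWidth() * img.getHeight();
        for (int i = 0; i < h.length; i++) h[i] /= n;
        return h;
    }

    // Histogram intersection: 1.0 = identical distributions, 0.0 = disjoint.
    static double similarity(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) s += Math.min(a[i], b[i]);
        return s;
    }
}

To search a database, you would precompute a histogram per stored image and rank them by similarity to the query image's histogram.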
I need to clip variable-sized images into puzzle-shaped pieces like this (not squares): http://www.fernando.com.ar/jquery-puzzle/
I have considered the possibility of doing this with a PHP library like Cairo or GD, but have little to no experience with these libraries, and see no immediate solution for creating a clipping mask that scales dynamically for different-sized images.
I'm looking for guidance/tips on which server-side programming language to use to accomplish this task, and preferably an approach to this problem.
You can create an image using GD with the size of the puzzle piece, and then copy the full image onto it with the right offset to get the right part of the image.
Then you can dynamically color in every part of the piece you want to remove with a distinct color (e.g., #0f0) and use imagecolorallocatealpha to make that color transparent. Do this for each piece and you have your server-side image pieces.
However, if I were you, I would create the clipping mask for each puzzle piece in advance in the distinct color. That would make two images per connection (one with the "circle" connector sticking out and one where this circle connector fits in). That way you can just copy these masks onto the image to create nice edges quickly.
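To illustrate the pre-drawn-mask idea in code (shown here in Java2D since the GD calls differ, but the compositing logic is the same), assuming a mask image that is opaque where the piece is and fully transparent elsewhere; the class and method names are mine:

import java.awt.AlphaComposite;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class PuzzleMask {

    // Cuts one puzzle piece out of the source image using a pre-drawn mask,
    // rescaling the mask to the requested piece size.
    static BufferedImage cutPiece(BufferedImage src, BufferedImage mask,
                                  int x, int y, int w, int h) {
        BufferedImage piece = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = piece.createGraphics();
        // 1. Draw the region of the source image this piece covers.
        g.drawImage(src, 0, 0, w, h, x, y, x + w, y + h, null);
        // 2. Keep only the pixels where the (rescaled) mask is opaque.
        g.setComposite(AlphaComposite.DstIn);
        g.drawImage(mask, 0, 0, w, h, null);
        g.dispose();
        return piece;
    }
}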
GD is quite complicated. I've heard very good things about ImageMagick, for which there is a PHP extension (Imagick) and lots of documentation on php.net. However, not all web servers would have it installed by default.
http://www.php.net/manual/en/book.imagick.php
If you choose to do it using PHP with GD then the code here may help:
http://php.amnuts.com/index.php?do=view&id=15&file=class.imagemask.php
Essentially what you need to do with GD is to start with a mask at a particular size and then use the imagecopyresampled function to copy the mask image resource at a larger or smaller size. To see what I mean, check out the _getMaskImage method in the class shown at the URL above. A working example of the output can be seen at:
http://php.amnuts.com/demos/image-mask/
The problem with doing it via GD, as far as I can tell, is that you need to do it a pixel at a time if you want to achieve varying opacity levels, so processing a large image could take a few seconds. With ImageMagick this may not be the case.
I have an image where I want to find a specific location (coordinate) based on its color. For example:
I want to find coordinates of edges of this Black box.
How can I detect the black color in this image in Java?
Note: My goal is to develop a program that detects eyes in a face.
I would suggest using a threshold filter and then converting the image to 1-bit format. That should do the trick.
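For the first part (finding the black box), here is a simple Java sketch that thresholds on brightness and returns the bounding rectangle of the dark pixels, from which the edge coordinates follow directly; the class name and the suggested threshold are mine:

import java.awt.Rectangle;
import java.awt.image.BufferedImage;

public class BlackBoxFinder {

    // Returns the bounding rectangle of all "black" pixels, i.e. those whose
    // brightness is below the given threshold (e.g. 64), or null if none exist.
    static Rectangle findBlackRegion(BufferedImage img, int threshold) {
        int minX = Integer.MAX_VALUE, minY = Integer.MAX_VALUE, maxX = -1, maxY = -1;
        for (int y = 0; y < img.getHeight(); y++)
            for (int x = 0; x < img.getWidth(); x++) {
                int rgb = img.getRGB(x, y);
                int grey = ((rgb >> 16 & 0xFF) + (rgb >> 8 & 0xFF) + (rgb & 0xFF)) / 3;
                if (grey < threshold) { // dark pixel
                    if (x < minX) minX = x;
                    if (x > maxX) maxX = x;
                    if (y < minY) minY = y;
                    if (y > maxY) maxY = y;
                }
            }
        if (maxX < 0) return null; // no dark pixels found
        return new Rectangle(minX, minY, maxX - minX + 1, maxY - minY + 1);
    }
}

The corners of the returned rectangle are the edge coordinates of the black box.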
However, locating eyes in an image is much harder. You might be interested in the open-source OpenCV library. Here is a port dedicated to Java - javacv. And a C++ example of face detection using OpenCV.
As far as I know, the Fourier transform is used in image processing. With it you get your picture in the frequency domain, which represents the signal (in the case of an image, the signal is two-dimensional). You can use a Fast Fourier Transform algorithm (FFT in java, Fun with Java, Understanding FFT). There are a lot of papers about the eye detection problem that you can read and take inspiration from:
http://www.jprr.org/index.php/jprr/article/viewFile/15/7
http://people.ee.ethz.ch/~bfasel/papers/avbpa_face.pdf
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.117.2226&rep=rep1&type=pdf