How can I extract each white region of the image into a separate section, as shown below?
From the picture above, there would be nine separate sections after the extraction.
I have tried several algorithms, such as fillgrid, but none gave exactly what I expected.
The goal is to store each section in a PostGIS database as a polygon geometry.
What algorithm can I use? Or is there perhaps a function in a Java or PostGIS library?
This thread may help you but I think you could do this by finding the differences in colour in your image.
You need to "polygonise" your raster data into vectors.
One such tool comes from GDAL; the method was originally written in C++, but it also has Java bindings.
Java bindings (search for "Polygonize")
C API description of parameters for GDALPolygonize
A further hint on using the Polygonize method: you will need to classify some values of your raster as "NO DATA" values, then polygonise the remaining data portion. The resulting polygons can be written to most geospatial vector formats, including a PostGIS database.
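As a rough sketch of that workflow with the GDAL/OGR Java bindings (the exact overloads vary between GDAL versions; "input.tif" and "sections.shp" are placeholder paths, and a Shapefile is used for brevity where the "PostgreSQL" OGR driver would write straight into PostGIS):

```java
import org.gdal.gdal.Band;
import org.gdal.gdal.Dataset;
import org.gdal.gdal.gdal;
import org.gdal.gdalconst.gdalconstConstants;
import org.gdal.ogr.DataSource;
import org.gdal.ogr.FieldDefn;
import org.gdal.ogr.Layer;
import org.gdal.ogr.ogr;
import org.gdal.ogr.ogrConstants;

public class PolygonizeDemo {
    public static void main(String[] args) {
        gdal.AllRegister();
        ogr.RegisterAll();

        Dataset raster = gdal.Open("input.tif", gdalconstConstants.GA_ReadOnly);
        Band band = raster.GetRasterBand(1);
        band.SetNoDataValue(0); // e.g. treat black (0) as NO DATA, keep the white regions

        DataSource out = ogr.GetDriverByName("ESRI Shapefile").CreateDataSource("sections.shp");
        Layer layer = out.CreateLayer("sections", null, ogrConstants.wkbPolygon);
        layer.CreateField(new FieldDefn("value", ogrConstants.OFTInteger));

        // Pixels masked out by the nodata value are skipped; each remaining
        // connected region becomes one polygon, its pixel value in field 0.
        gdal.Polygonize(band, band.GetMaskBand(), layer, 0);

        out.delete();   // flush and close
        raster.delete();
    }
}
```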
Another solution is to load the image into PostGIS raster, then use either ST_Polygon or ST_DumpAsPolygons to do essentially the same task as GDALPolygonize, but with fewer options.
I wanted to use Tango's RGB camera along with its depth data to create a point cloud involving only one color, but I'm not sure how to approach this.
What I want to do is ultimately reconstruct an object in Blender based on its XYZ values, and the way I'm trying to extract this object from its background is based on color, because it doesn't have any depth of its own; it's like a drawing on a 3D object.
I recommend checking the examples in the Tango C API. It should be possible to do it all in Java, but the C example called cpp_rgb_depth_sync_example should give you several ideas.
Check the code in https://github.com/googlesamples/tango-examples-c
This example puts the information of the pointcloud in the color image... you just want to do the inverse!
For each point cloud:
- Gather the previous color image
- Using the camera intrinsics (see the example above), you can link each point of the point cloud with a pixel in the color image.
- Once you have the color for each point you can remove the points you are not interested in.
One thing to keep in mind is that the color image is in a YUV format (you might want to convert it to RGB).
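A loose sketch of those two steps, not Tango's actual API: fx, fy, cx, cy stand for the pinhole intrinsics that the intrinsics call reports, and the YUV conversion is the common integer approximation for NV21 frames.

```java
public class PointColorLookup {

    // Project a 3D point (in the color camera frame) onto pixel coordinates.
    static int[] project(double x, double y, double z,
                         double fx, double fy, double cx, double cy) {
        int u = (int) Math.round(fx * x / z + cx);
        int v = (int) Math.round(fy * y / z + cy);
        return new int[] { u, v };
    }

    // Convert one YUV sample to a packed 0xRRGGBB int.
    static int yuvToRgb(int yVal, int uVal, int vVal) {
        int c = yVal - 16, d = uVal - 128, e = vVal - 128;
        int r = clamp((298 * c + 409 * e + 128) >> 8);
        int g = clamp((298 * c - 100 * d - 208 * e + 128) >> 8);
        int b = clamp((298 * c + 516 * d + 128) >> 8);
        return (r << 16) | (g << 8) | b;
    }

    private static int clamp(int v) {
        return Math.max(0, Math.min(255, v));
    }
}
```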
I hope this will help.
I want an efficient way of finding sub-images within an image. For example, we have an image of a country map, and it contains the states as sub-images.
I need a way to find the sub-images of the states within the country map.
If you have only pixels, you'll need image processing algorithms to find sub-images.
Your solution will be specific to what the image you are processing looks like. For example, if states are outlined in a certain color, you could try edge detection. If states are each different colors, you could run a flood-fill-like algorithm to create boundaries for each state.
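For the flood-fill route, here is a minimal Java sketch that labels connected regions of identical color; a real map would need a color tolerance rather than the exact equality used here.

```java
import java.awt.image.BufferedImage;
import java.util.ArrayDeque;

public class RegionLabeler {
    public static int[][] label(BufferedImage img) {
        int w = img.getWidth(), h = img.getHeight();
        int[][] labels = new int[h][w];   // 0 = unvisited
        int next = 0;
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                if (labels[y][x] != 0) continue;
                int color = img.getRGB(x, y);
                next++;                     // start a new region
                ArrayDeque<int[]> queue = new ArrayDeque<>();
                queue.add(new int[] { x, y });
                labels[y][x] = next;
                while (!queue.isEmpty()) {
                    int[] p = queue.poll();
                    // Spread to 4-connected neighbours of the same color.
                    for (int[] d : new int[][] { {1,0}, {-1,0}, {0,1}, {0,-1} }) {
                        int nx = p[0] + d[0], ny = p[1] + d[1];
                        if (nx >= 0 && nx < w && ny >= 0 && ny < h
                                && labels[ny][nx] == 0
                                && img.getRGB(nx, ny) == color) {
                            labels[ny][nx] = next;
                            queue.add(new int[] { nx, ny });
                        }
                    }
                }
            }
        }
        return labels; // each state's pixels share one label
    }
}
```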
This is a difficult problem. Try computer vision or object recognition for keywords in your research.
However, I suggest instead you build up vector files which define boundaries of subimages by hand. If you've only a few to do, this isn't a big deal.
If I have an image of a table of boxes, with some coloured in, is there an image processing library that can help me turn this into an array?
Thanks
You can use a thresholding function to binarize the image into dark/light pixels so dark pixels are 0 and light ones are 1.
Then you would want to clean up image artifacts using dilation and erosion functions to remove noise (all of these are well described on Wikipedia).
Finally, if you know where the boxes are, you can just read the value at the center of each box to determine the array value, or use a small area near the center and take the prevailing value (i.e. more 0s means a filled-in square, more 1s an empty square).
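A minimal Java sketch of that centre-sampling idea, assuming a regular grid whose row and column counts are known up front; the threshold of 128 is an arbitrary starting point you would tune by hand.

```java
import java.awt.image.BufferedImage;

public class BoxGridReader {
    public static boolean[][] read(BufferedImage img, int rows, int cols) {
        int cellW = img.getWidth() / cols, cellH = img.getHeight() / rows;
        boolean[][] filled = new boolean[rows][cols];
        for (int r = 0; r < rows; r++) {
            for (int c = 0; c < cols; c++) {
                // Sample the centre pixel of each cell and threshold its luminance.
                int rgb = img.getRGB(c * cellW + cellW / 2, r * cellH + cellH / 2);
                int red = (rgb >> 16) & 0xFF, grn = (rgb >> 8) & 0xFF, blu = rgb & 0xFF;
                int luma = (red + grn + blu) / 3;
                filled[r][c] = luma < 128;   // dark centre => filled box
            }
        }
        return filled;
    }
}
```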
If you are scanning these boxes and there is a lot of variation in the position of the boxes, you will have to perform some level of image registration using known points, or fiducials.
As far as what tools to use, I'd recommend first trying this manually with a tool like ImageJ, which has a UI and can also be used programmatically since it is written entirely in Java.
Other good libraries for this include OpenCV and the Java Advanced Imaging API.
Your results will definitely vary depending on the input images and how consistently lit and positioned they are.
The best way to see how it will do for your data is to apply these processing steps manually and see what your threshold value should be and how much dilating/eroding you need to get consistent results.
I'm looking for an algorithm that can measure the perceptual similarity between two images. I want to input one picture into the system, have it search my database, which contains a huge number of pictures, and retrieve the images that are most perceptually similar to the source image. Could anybody please help me?
In other words, I want to find similar pictures. I have heard that some algorithms can find similar pictures based on the source picture's shape, color, and so on (pixel by pixel). I want a system where I input a source image and it retrieves similar images based on perceptual features such as shape, color, and size.
Thanks
You need to define carefully what 'perceptually similar' means to you before trying to find a measurable quantity that captures it. Imagine a picture of a grass field under a blue sky with a horse. Should your application retrieve all horse pictures? Or all pictures with green grass and a blue sky? In the latter case, the above-mentioned color histograms are a good start. Alternatively, you could look at Gaussian mixture models (GMMs); they are used quite a bit in retrieval. This code could be a starting point, as could the article "Image retrieval using color histograms generated by Gauss mixture vector quantization".
More sophisticated is the so-called "bag of words" or "visual words" approach, which is increasingly used for image categorization and identification. This algorithm usually starts by detecting robust points in an image, meaning points that survive certain image distortions; popular examples are SIFT and SURF. The region around each found point is captured with a descriptor, which could for example be a smart histogram.
In its simplest form, one can collect the data from all descriptors of all images and cluster them, for example using k-means. Every original image then has descriptors that contribute to a number of clusters. The centroids of these clusters, i.e. the visual words, can be used as a new descriptor for the image. The VLFeat website contains a nice demo of this approach, classifying the Caltech 101 dataset. Also noteworthy are the results and software from Caltech itself.
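To make the histogram-building stage concrete, here is a compressed Java sketch, assuming the descriptors (e.g. SIFT vectors) and the k-means centroids have already been computed by some external library:

```java
public class VisualWords {
    // Index of the nearest centroid (visual word) for one descriptor.
    static int nearestWord(double[] desc, double[][] centroids) {
        int best = 0;
        double bestDist = Double.MAX_VALUE;
        for (int k = 0; k < centroids.length; k++) {
            double dist = 0;
            for (int i = 0; i < desc.length; i++) {
                double diff = desc[i] - centroids[k][i];
                dist += diff * diff;
            }
            if (dist < bestDist) { bestDist = dist; best = k; }
        }
        return best;
    }

    // The image's new descriptor: a normalized histogram of visual-word counts.
    static double[] bagOfWords(double[][] imageDescriptors, double[][] centroids) {
        double[] hist = new double[centroids.length];
        for (double[] d : imageDescriptors) hist[nearestWord(d, centroids)]++;
        for (int k = 0; k < hist.length; k++) hist[k] /= imageDescriptors.length;
        return hist;
    }
}
```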
One simple way to start is comparing color histograms.
The following article proposes the use of joint histograms instead; you may want to take a look as well.
http://www.cs.cornell.edu/rdz/joint-histograms.html
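As a rough illustration of the color-histogram baseline, here is a small Java sketch using histogram intersection as the similarity score; the 4-bits-per-channel binning (4096 bins) is an arbitrary choice for illustration.

```java
import java.awt.image.BufferedImage;

public class ColorHistogram {
    public static double[] histogram(BufferedImage img) {
        double[] bins = new double[16 * 16 * 16];
        for (int y = 0; y < img.getHeight(); y++) {
            for (int x = 0; x < img.getWidth(); x++) {
                // Keep the top 4 bits of each channel as the bin index.
                int rgb = img.getRGB(x, y);
                int r = (rgb >> 20) & 0xF, g = (rgb >> 12) & 0xF, b = (rgb >> 4) & 0xF;
                bins[(r << 8) | (g << 4) | b]++;
            }
        }
        double total = img.getWidth() * (double) img.getHeight();
        for (int i = 0; i < bins.length; i++) bins[i] /= total; // normalize
        return bins;
    }

    // Histogram intersection: 1.0 = identical distributions, 0.0 = disjoint.
    public static double similarity(double[] a, double[] b) {
        double sum = 0;
        for (int i = 0; i < a.length; i++) sum += Math.min(a[i], b[i]);
        return sum;
    }
}
```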
Given:
two images of the same subject matter;
the images have the same resolution, colour depth, and file format;
the images differ in size and rotation; and
two lists of (x, y) co-ordinates that correlate the images.
I would like to know:
How do you transform the larger image so that it visually aligns to the second image?
(Optional.) What is the minimum number of points needed to get an accurate transformation?
(Optional.) How far apart do the points need to be to get an accurate transformation?
The transformation would need to rotate, scale, and possibly shear the larger image. Essentially, I want to create (or find) a program that does the following:
Input two images (e.g., TIFFs).
Click several anchor points on the small image.
Click the corresponding anchor points on the large image.
Transform the large image such that it maps to the small image by aligning the anchor points.
This would help align pictures of the same stellar object. (For example, a hand-drawn picture from 1855 mapped to a photograph taken by Hubble in 2000.)
Many thanks in advance for any algorithms (preferably Java or similar pseudo-code), ideas or links to related open-source software packages.
This is called Image Registration.
Mathworks discusses this, Matlab has this ability, and more information is in the Elastix Manual.
Consider:
Open source Matlab equivalents
IRTK
IRAF
Hugin
You can use the javax.imageio or Java Advanced Imaging APIs for rotating, shearing, and scaling the images once you have worked out what you want to do with them.
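As for computing the transform from the clicked anchor points: three non-collinear point pairs determine an affine transform (rotation, scale, shear, translation) exactly, which also answers the minimum-points question; with more pairs you would do a least-squares fit instead. A minimal sketch using only java.awt.geom, solving the two 3x3 systems by Cramer's rule:

```java
import java.awt.geom.AffineTransform;

public class AffineFromAnchors {
    private static double det3(double a, double b, double c,
                               double d, double e, double f,
                               double g, double h, double i) {
        return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g);
    }

    /** src and dst each hold three anchor points as {x1, y1, x2, y2, x3, y3}. */
    public static AffineTransform solve(double[] src, double[] dst) {
        double x1 = src[0], y1 = src[1], x2 = src[2], y2 = src[3], x3 = src[4], y3 = src[5];
        double denom = det3(x1, y1, 1, x2, y2, 1, x3, y3, 1); // zero if points are collinear

        // Solve [x y 1] . [m00 m01 m02] = u for the x-row of the transform...
        double m00 = det3(dst[0], y1, 1, dst[2], y2, 1, dst[4], y3, 1) / denom;
        double m01 = det3(x1, dst[0], 1, x2, dst[2], 1, x3, dst[4], 1) / denom;
        double m02 = det3(x1, y1, dst[0], x2, y2, dst[2], x3, y3, dst[4]) / denom;
        // ...and the same for the y-row.
        double m10 = det3(dst[1], y1, 1, dst[3], y2, 1, dst[5], y3, 1) / denom;
        double m11 = det3(x1, dst[1], 1, x2, dst[3], 1, x3, dst[5], 1) / denom;
        double m12 = det3(x1, y1, dst[1], x2, y2, dst[3], x3, y3, dst[5]) / denom;

        // This transform maps the src anchor points exactly onto the dst ones;
        // apply it to the image with java.awt.image.AffineTransformOp.
        return new AffineTransform(m00, m10, m01, m11, m02, m12);
    }
}
```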
For a C++ implementation (without GUI), try the old KLT (Kanade-Lucas-Tomasi) tracker.
http://www.ces.clemson.edu/~stb/klt/