Converting floor plans into hierarchical graphs - java

I'm developing a navigation application for Android, and I need to convert the floor map into a hierarchical graph. As far as I understand, converting the map into a graph produces an XML file that represents the rooms as nodes. I have searched for information on hierarchical graphs, but all I found were theoretical explanations.
While searching I also read something about GoogleOpenStreet, and that it can be used to convert floor plans into graphs, but I didn't find anything about how to do that either.
Can anyone suggest methods to do this using Java?

Since I found a solution and no one has answered, I will answer my own question to help other people who have the same problem.
I didn't find a direct method to convert floor plans into hierarchical graphs. Instead, I used a graph editor to draw a graph that represents the floor plan, then took the resulting file (which uses an XML-like format) and parsed it.
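For anyone following the same route: many graph editors (yEd, for example) export GraphML, which is plain XML and easy to parse with the JDK alone. Below is a minimal sketch assuming a GraphML export; the file name and the flat adjacency map are illustrative, and a real hierarchy would come from GraphML's nested graph elements:
import java.util.*;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class FloorPlanGraphLoader {
    public static void main(String[] args) throws Exception {
        // "floorplan.graphml" is a placeholder for the editor's export.
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse("floorplan.graphml");

        Map<String, List<String>> adjacency = new HashMap<>();

        // GraphML represents rooms as <node id="..."/> elements...
        NodeList nodes = doc.getElementsByTagName("node");
        for (int i = 0; i < nodes.getLength(); i++) {
            String id = ((Element) nodes.item(i)).getAttribute("id");
            adjacency.put(id, new ArrayList<String>());
        }

        // ...and connections (doors, corridors) as <edge source=".." target=".."/>.
        NodeList edges = doc.getElementsByTagName("edge");
        for (int i = 0; i < edges.getLength(); i++) {
            Element e = (Element) edges.item(i);
            adjacency.get(e.getAttribute("source")).add(e.getAttribute("target"));
            adjacency.get(e.getAttribute("target")).add(e.getAttribute("source"));
        }

        System.out.println(adjacency);
    }
}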

Software pattern for matching object with handles

I have been thinking about an approach to this problem, but I haven't found a solution that convinces me. I am programming a crawler, and I have a download task for every URL from a URL list. The downloaded HTML documents are parsed differently depending on the site URL and the information I want to extract. So my problem is how to link every task with its appropriate parser.
The ideas are:
Creating a huge 'if' that checks the download type and associates a parser with it.
(Avoided, because the 'if' grows with every new site added to the crawler.)
Using polymorphism to create a different download task for every site and type of information I want to extract, and then using a post-action that links it to its parser.
(This again increases the complexity with every new parser.)
So I am looking for some kind of software pattern or idea that says:
Hey, I am a download task with this information.
Really? Then you need this parser to extract it. Here is the parser you need.
Additional information:
The architecture is very simple. A list of URLs which are seeds for the crawler. A producer which downloads the pages. Another list of downloaded HTML documents. And a consumer which should apply the right parser to each page.
Depending on the downloaded page, we sometimes need to use parser A, sometimes parser B, etc.
EDIT
An example:
We have three websites: site1.com, site2.com and site3.com.
There are three URL types which we want to parse: site1.com/A, site1.com/B, site1.com/C, site2.com/A, site2.com/B, site2.com/C, ... site3.com/C.
Every URL is parsed differently, and usually the same kind of information is shared between site1.com/A, site2.com/A and site3.com/A; ...; and between site1.com/C, site2.com/C and site3.com/C.
It looks like a genetic-algorithm-based approach fits your description of the problem; what you need to find first are the basic (atomic) solutions.
Here's a short description from Wikipedia:
In a genetic algorithm, a population of candidate solutions (called individuals, creatures, or phenotypes) to an optimization problem is evolved toward better solutions. Each candidate solution has a set of properties (its chromosomes or genotype) which can be mutated and altered; traditionally, solutions are represented in binary as strings of 0s and 1s, but other encodings are also possible.[2]
The evolution usually starts from a population of randomly generated individuals, and is an iterative process, with the population in each iteration called a generation. In each generation, the fitness of every individual in the population is evaluated; the fitness is usually the value of the objective function in the optimization problem being solved. The more fit individuals are stochastically selected from the current population, and each individual's genome is modified (recombined and possibly randomly mutated) to form a new generation. The new generation of candidate solutions is then used in the next iteration of the algorithm. Commonly, the algorithm terminates when either a maximum number of generations has been produced, or a satisfactory fitness level has been reached for the population.
A typical genetic algorithm requires:
a genetic representation of the solution domain,
a fitness function to evaluate the solution domain.
A standard representation of each candidate solution is as an array of bits.[2] Arrays of other types and structures can be used in essentially the same way. The main property that makes these genetic representations convenient is that their parts are easily aligned due to their fixed size, which facilitates simple crossover operations. Variable length representations may also be used, but crossover implementation is more complex in this case. Tree-like representations are explored in genetic programming and graph-form representations are explored in evolutionary programming; a mix of both linear chromosomes and trees is explored in gene expression programming.
Once the genetic representation and the fitness function are defined, a GA proceeds to initialize a population of solutions and then to improve it through repetitive application of the mutation, crossover, inversion and selection operators.
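As a rough illustration of the loop described above, here is a minimal generational GA in Java. The bit-counting ("OneMax") fitness, the population size, and the mutation rate are all placeholder choices; mapping the parser-selection problem onto a genetic representation is the hard part and is not shown:
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class SimpleGA {
    static final Random RND = new Random();
    static final int LEN = 16, POP = 20, GENERATIONS = 100;

    // Toy fitness: count of 1-bits ("OneMax").
    static int fitness(boolean[] ind) {
        int f = 0;
        for (boolean b : ind) if (b) f++;
        return f;
    }

    // Tournament selection: the fitter of two random individuals.
    static boolean[] select(List<boolean[]> pop) {
        boolean[] a = pop.get(RND.nextInt(pop.size()));
        boolean[] b = pop.get(RND.nextInt(pop.size()));
        return fitness(a) >= fitness(b) ? a : b;
    }

    public static void main(String[] args) {
        // Random initial population.
        List<boolean[]> pop = new ArrayList<>();
        for (int i = 0; i < POP; i++) {
            boolean[] ind = new boolean[LEN];
            for (int j = 0; j < LEN; j++) ind[j] = RND.nextBoolean();
            pop.add(ind);
        }
        for (int g = 0; g < GENERATIONS; g++) {
            List<boolean[]> next = new ArrayList<>();
            while (next.size() < POP) {
                boolean[] p1 = select(pop), p2 = select(pop);
                boolean[] child = new boolean[LEN];
                int cut = RND.nextInt(LEN);            // single-point crossover
                for (int j = 0; j < LEN; j++) child[j] = j < cut ? p1[j] : p2[j];
                if (RND.nextInt(10) == 0)              // occasional mutation
                    child[RND.nextInt(LEN)] ^= true;
                next.add(child);
            }
            pop = next;                                // next generation
        }
        int best = 0;
        for (boolean[] ind : pop) best = Math.max(best, fitness(ind));
        System.out.println("Best fitness: " + best);
    }
}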
I would externalize the parsing patterns/structure in some form (like XML) and use them dynamically.
For example, say I have to download site1.com and site2.com, which have two different layouts. I would create two XML files which hold the layout patterns, and one master XML which records which URL should use which XML.
At startup, load this master XML and use it as a dictionary. When you have to download a page, download it, find the right XML in the dictionary, and pass the XML and the stream to the parser (a single generic parser), which can read the stream based on the XML structure and information.
In this way, we can define common patterns in XML and use them to read similar sites. Use regular expressions in the XML patterns to cover most sites with a single XML.
If a layout is completely different, just create one more XML and modify the master XML; that's it.
The secret/success of this design is in how you create such generic XMLs, and that depends purely on what you need and what you do after parsing.
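The runtime side of this design can be quite small. Here is a hedged sketch of the lookup: the Parser interface and ParserRegistry class are invented names for illustration, and in the design above the register(...) calls would be driven by the master XML rather than hard-coded:
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Pattern;

interface Parser {
    void parse(String html);
}

class ParserRegistry {
    // Insertion order matters: the first matching pattern wins.
    private final Map<Pattern, Parser> rules = new LinkedHashMap<>();

    // In the design above, these entries come from the master XML.
    void register(String urlRegex, Parser parser) {
        rules.put(Pattern.compile(urlRegex), parser);
    }

    Parser resolve(String url) {
        for (Map.Entry<Pattern, Parser> e : rules.entrySet()) {
            if (e.getKey().matcher(url).find()) return e.getValue();
        }
        throw new IllegalArgumentException("No parser registered for " + url);
    }
}

public class RegistryDemo {
    public static void main(String[] args) {
        ParserRegistry registry = new ParserRegistry();
        registry.register("site\\d+\\.com/A", html -> System.out.println("type-A parse"));
        registry.register("site\\d+\\.com/B", html -> System.out.println("type-B parse"));

        // The consumer resolves the parser from the URL, then applies it.
        registry.resolve("site1.com/A").parse("<html>...</html>");
    }
}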
This seems to be a connectivity problem, so I'd suggest considering the quick-find (union-find) algorithm.
See here for more details:
http://jaysonlagare.blogspot.com.au/2011/01/union-find-algorithms.html
And here's a simple Java sample:
https://gist.github.com/gtkesh/3604922
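For reference, a minimal quick-find implementation along the lines of the linked material (the gist itself may differ in detail):
public class QuickFind {
    private final int[] id;

    public QuickFind(int n) {
        id = new int[n];
        for (int i = 0; i < n; i++) id[i] = i; // each element starts in its own set
    }

    public boolean connected(int p, int q) {
        return id[p] == id[q];
    }

    public void union(int p, int q) {
        int pid = id[p], qid = id[q];
        if (pid == qid) return;
        // Quick-find trades fast lookups for O(n) unions: relabel p's whole set.
        for (int i = 0; i < id.length; i++) {
            if (id[i] == pid) id[i] = qid;
        }
    }
}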

Neo4j-spatial Finding Nodes in OSM and finding shortest way to POI

Hi, I'm new to Neo4j and trying to figure out how everything works at the moment.
I imported an OSM file and am now working on a function that takes a point in WGS84 format and a POI, and then finds the shortest path to that POI.
So to find the nearest geometries to my WGS84 point I use:
Coordinate co = new Coordinate(12.9639158, 56.070904);
List<SpatialDatabaseRecord> results2 = GeoPipeline
        .startNearestNeighborLatLonSearch(layer, co, 1)
        .toSpatialDatabaseRecordList();
but then my problems start, because I don't really understand how the OSM file is structured.
Is there a function so that I can get my POI node by name?
I get an index from the OSM file:
SpatialDatabaseService spatialService = new SpatialDatabaseService(database);
Layer layer = spatialService.getLayer(osm);
LayerIndexReader spatialIndex = layer.getIndex();
Can I use it to search nodes by properties?
And for finding the shortest way between two points I found a Dijkstra algorithm:
PathFinder<WeightedPath> finder = GraphAlgoFactory.dijkstra(
Traversal.expanderForTypes( ExampleTypes.MY_TYPE, Direction.BOTH ), "cost" );
WeightedPath path = finder.findSinglePath( nodeA, nodeB );
The question now is: what are my RelationshipTypes? I think it should be NEXT, but how do I include this in the code? Do I have to create an enum with NEXT?
Can someone give me some feedback on whether I'm on the right track, and some help please?
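On the enum question specifically: yes, a custom relationship type is usually just an enum implementing RelationshipType. Here is a minimal sketch wiring one into the snippet above; whether NEXT with a numeric "cost" property is actually meaningful for the imported OSM graph is a separate issue (see the answer below):
import org.neo4j.graphalgo.GraphAlgoFactory;
import org.neo4j.graphalgo.PathFinder;
import org.neo4j.graphalgo.WeightedPath;
import org.neo4j.graphdb.Direction;
import org.neo4j.graphdb.Node;
import org.neo4j.graphdb.RelationshipType;
import org.neo4j.kernel.Traversal;

public class RoutingSketch {

    // A relationship type is just an enum implementing RelationshipType.
    // NEXT mirrors the question; the OSM model may not be routable this way.
    enum OsmTypes implements RelationshipType {
        NEXT
    }

    public static WeightedPath shortestPath(Node nodeA, Node nodeB) {
        PathFinder<WeightedPath> finder = GraphAlgoFactory.dijkstra(
                Traversal.expanderForTypes(OsmTypes.NEXT, Direction.BOTH),
                "cost"); // assumes relationships carry a numeric "cost" property
        return finder.findSinglePath(nodeA, nodeB);
    }
}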
Okay, I finally found out how to find nodes by ID :D Not too difficult, but I searched for a long time :D
Thank you
I can make a few comments:
We currently do not add names or tags to the Lucene indexes in the OSM importer, but it is a good idea.
What is done is adding all geometries (poi, streets, polygons, etc.) to the RTree index. This is the index you got back when you called layer.getIndex(). This index can be used to find things within regions, and can also perform a filter by properties of the geometry (name or tags) while searching the index. This is probably your best bet for finding something by name. See below for two options for this.
Route finding in the OSM model is not trivial, because the current model is designed for OSM completeness with all osm-nodes represented as real nodes in one huge connected network including all geometries (not just roads). The graph is complex, and a traverser would need to know how to traverse it to achieve route finding. It is not as simple as following NEXT relationships. See for example slides 13 and 15 at http://www.slideshare.net/craigtaverner/neo4j-spatial-backing-a-gis-with-a-true-graph-database. Each line segment does have a length, but what you really want is a simpler graph that has nodes only for points of intersection, and single relationships for the total drive distance between these points. This graph is not there, but could be added by the OSM importer.
Finally, two suggestions for finding a POI by name. Either:
- dynamic layers
- or GeoPipes
For Dynamic layers you can use either CQL syntax or key,value pairs (for tags). For examples of each see lines 81 and 84 of https://github.com/neo4j/spatial/blob/master/src/test/java/org/neo4j/gis/spatial/TestDynamicLayers.java. This approach allows the test for name to be done during the traversal in a callback from the RTree.
For GeoPipes, you define a stream, and each object returned by the RTree will be passed to the next filter. This should have the same performance as the dynamic layers, and also be a bit more intuitive to use. See for example the 'filter_by_osm_attribute' and 'filter_by_property' tests on lines 99 and 112 of https://github.com/neo4j/spatial/blob/master/src/test/java/org/neo4j/gis/spatial/pipes/GeoPipesTest.java. These both search for streets by name.
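For orientation, the second option might look roughly like this, reusing the layer from the question. The method names are taken from the linked GeoPipesTest, so treat this as an assumption to verify against your version of Neo4j Spatial:
// "Central Station" is a placeholder name; this filters the RTree results
// by the geometry's "name" property.
List<SpatialDatabaseRecord> byName = GeoPipeline
        .start(layer)
        .copyDatabaseRecordProperties()
        .propertyFilter("name", "Central Station")
        .toSpatialDatabaseRecordList();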

How to determine the position of a car inside an image?

Is it possible to analyse an image and determine the position of a car inside it?
If so, how would you approach this problem?
I'm working with a relatively small data-set (50-100) and most images will look similar to the following examples:
I'm mostly interested in only detecting vertical coordinates, not the actual shape of the car. For example, this is the area I want to highlight as my final output:
You could try OpenCV, which has an object detection API. But you would need to "train" it by supplying it with a large set of images that contain cars.
http://docs.opencv.org/modules/objdetect/doc/objdetect.html
http://robocv.blogspot.co.uk/2012/02/real-time-object-detection-in-opencv.html
http://blog.davidjbarnes.com/2010/04/opencv-haartraining-object-detection.html
Look at the second link above: it shows an example of detecting an object and creating a bounding box around it. You could use that as a basis for what you want to do.
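To make the bounding-box idea concrete, here is a minimal sketch using the OpenCV 2.4 Java bindings; "cars.xml" is a placeholder for a cascade trained on car images (the training links above cover that part), and the image file names are illustrative:
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfRect;
import org.opencv.core.Point;
import org.opencv.core.Rect;
import org.opencv.core.Scalar;
import org.opencv.highgui.Highgui;
import org.opencv.objdetect.CascadeClassifier;

public class CarDetector {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        // "cars.xml" is a placeholder for a cascade trained on car images.
        CascadeClassifier detector = new CascadeClassifier("cars.xml");
        Mat image = Highgui.imread("street.jpg");

        MatOfRect detections = new MatOfRect();
        detector.detectMultiScale(image, detections);

        for (Rect box : detections.toArray()) {
            // For the vertical coordinates only, box.y and box.y + box.height suffice.
            Core.rectangle(image, new Point(box.x, box.y),
                    new Point(box.x + box.width, box.y + box.height),
                    new Scalar(0, 255, 0));
        }
        Highgui.imwrite("detected.jpg", image);
    }
}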
http://www.behance.net/gallery/Vehicle-Detection-Tracking-and-Counting/4057777
Various papers:
http://cbcl.mit.edu/publications/theses/thesis-masters-leung.pdf
http://cseweb.ucsd.edu/classes/wi08/cse190-a/reports/scheung.pdf
Various image databases:
http://cogcomp.cs.illinois.edu/Data/Car/
http://homepages.inf.ed.ac.uk/rbf/CVonline/Imagedbase.htm
http://cbcl.mit.edu/software-datasets/CarData.html
1) Your first and second images have two cars in them.
2) If you only have 50-100 images, I can almost guarantee that classifying them all by hand will be faster than writing or adapting an algorithm to recognize cars and deliver coordinates.
3) If you're determined to do this with computer vision, I'd recommend OpenCV. Tutorial here: http://docs.opencv.org/doc/tutorials/tutorials.html
You can use OpenCV's LatentSVM detector to detect the car and plot a bounding box around it:
http://docs.opencv.org/modules/objdetect/doc/latent_svm.html
There is no need to train a new model with Haar cascade training, as there is already a trained model for cars:
https://github.com/Itseez/opencv_extra/tree/master/testdata/cv/latentsvmdetector/models_VOC2007
This is a supervised machine learning problem. You will need to use an API that features learning algorithms, as colinsmith suggested, or do some research and write one of your own. Python is pretty good for machine learning (it's what I use personally) and has some nice tools like scikit-learn: http://scikit-learn.org/stable/
I'd suggest you look into Haar classifiers. Since you mentioned you have a set of 50-100 images, you can use them to build a training dataset for the classifier and then use it to classify your images.
You can also look into the SURF and SIFT algorithms for this problem.

Java Graph Visualisation Library: Nodes with multiple connect points

Can anyone recommend a Java graph visualisation library in which graph nodes can be rendered with multiple connection points?
For example, suppose a graph node represents a processor that takes input from two sources and produces output; it would be connected to 3 other vertices. However, each connection clearly has a defined role in the workflow, so ideally my node would appear with 3 distinct connection points that the user could attach edges to.
I've taken a look at JUNG but I don't think it will suit my needs.
Any recommendations welcome; either on specific libraries or alternative approaches I can take.
You could try JGraph's Java library (http://www.jgraph.com/). It has a good amount of functionality, and I have used it with success before. The only thing is that the documentation is a bit lacking, but if you read through some examples and code, it's pretty good once you get the hang of it.
Take a look at JGraph (http://www.jgraph.com/). I used jgraph-5.14.0.0 for a similar project before. Here are the graphs that I made for another project: https://github.com/eamocanu/spellcheck.graph/tree/master/graph%20photos
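In the JGraph family, distinct connection points are usually modelled as "ports": small child vertices with relative geometry that edges attach to. The following is a minimal sketch using JGraphX (the mxGraph-based successor to the JGraph 5 releases mentioned above); all names and coordinates are illustrative:
import javax.swing.JFrame;

import com.mxgraph.swing.mxGraphComponent;
import com.mxgraph.view.mxGraph;

public class PortsDemo {
    // A port is a tiny child vertex; relative = true makes (x, y) fractions
    // of the parent's bounds, e.g. (0, 0.25) sits on the left edge.
    static Object addPort(mxGraph graph, Object node, double x, double y) {
        return graph.insertVertex(node, null, "", x, y, 10, 10, null, true);
    }

    public static void main(String[] args) {
        mxGraph graph = new mxGraph();
        Object parent = graph.getDefaultParent();

        graph.getModel().beginUpdate();
        try {
            Object processor = graph.insertVertex(parent, null, "Processor", 160, 40, 120, 60);

            Object in1 = addPort(graph, processor, 0.0, 0.25); // first input
            Object in2 = addPort(graph, processor, 0.0, 0.75); // second input
            Object out = addPort(graph, processor, 1.0, 0.5);  // output

            Object src1 = graph.insertVertex(parent, null, "Source 1", 20, 20, 80, 30);
            Object src2 = graph.insertVertex(parent, null, "Source 2", 20, 100, 80, 30);
            Object sink = graph.insertVertex(parent, null, "Output", 340, 55, 80, 30);

            // Edges attach to the ports, not to the node itself.
            graph.insertEdge(parent, null, "", src1, in1);
            graph.insertEdge(parent, null, "", src2, in2);
            graph.insertEdge(parent, null, "", out, sink);
        } finally {
            graph.getModel().endUpdate();
        }

        JFrame frame = new JFrame("Ports demo");
        frame.getContentPane().add(new mxGraphComponent(graph));
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setSize(480, 240);
        frame.setVisible(true);
    }
}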

Image Comparison Techniques with Java

I'm looking for several methods to compare two images to see how similar they are. Currently I plan to have percentages as the 'similarity index' end result. My program outline is something like this:
User selects 2 images to compare.
With a button, the images are compared using several different methods.
At the end, each method will have a percentage next to it indicating how similar the images are based on that method.
I've done a lot of reading lately and some of the stuff I've read seems to be incredibly complex and advanced and not for someone like me with only about a year's worth of Java experience. So far I've read about:
The Fourier transform - I'm finding this rather confusing to implement in Java, but apparently the Java Advanced Imaging API has a class for it, though I'm not sure how to convert the output into an actual result.
SIFT algorithm - seems incredibly complex.
Histograms - probably the easiest of all those mentioned so far.
Pixel grabbing - seems viable, but if there's a considerable amount of variation between the two images it doesn't look like it will produce any sort of accurate result. I might be wrong?
I also have the idea of pre-processing the images with a Sobel filter first and then comparing them, but the problem is the actual comparing part.
So I'm looking to see if anyone has ideas for comparing images in Java, and hoping there are people here who have done similar projects before. I just want some input on viable comparison techniques that aren't too hard to implement in Java.
Thanks in advance
Fourier transform - This can be used to efficiently compute the cross-correlation, which will tell you how to align the two images and how similar they are when optimally aligned.
SIFT descriptors - These can be used to compare local features. They are often used for correspondence analysis and object recognition. (See also SURF.)
Histograms - The normalized cross-correlation often yields good results for comparing images on a global level. But since you are just comparing color distributions, you could end up declaring an outdoor scene with lots of snow similar to an indoor scene with lots of white wallpaper... (see the sketch after this list).
Pixel grabbing - No idea what this is...
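Since histograms are the poster's "easiest" candidate, here is a minimal self-contained sketch using only the JDK: grayscale histograms compared by histogram intersection (one of several reasonable choices; normalized cross-correlation would work similarly). File names are placeholders:
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class HistogramCompare {
    static double[] histogram(BufferedImage img) {
        double[] hist = new double[256];
        for (int y = 0; y < img.getHeight(); y++) {
            for (int x = 0; x < img.getWidth(); x++) {
                int rgb = img.getRGB(x, y);
                int r = (rgb >> 16) & 0xFF, g = (rgb >> 8) & 0xFF, b = rgb & 0xFF;
                hist[(r + g + b) / 3]++; // bucket by average intensity
            }
        }
        double total = img.getWidth() * (double) img.getHeight();
        for (int i = 0; i < 256; i++) hist[i] /= total; // normalize
        return hist;
    }

    // Histogram intersection: 1.0 for identical distributions, 0.0 for disjoint.
    static double similarity(double[] h1, double[] h2) {
        double s = 0;
        for (int i = 0; i < 256; i++) s += Math.min(h1[i], h2[i]);
        return s;
    }

    public static void main(String[] args) throws Exception {
        BufferedImage a = ImageIO.read(new File("a.png"));
        BufferedImage b = ImageIO.read(new File("b.png"));
        System.out.printf("Similarity: %.1f%%%n",
                100 * similarity(histogram(a), histogram(b)));
    }
}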
You can get a good overview from this paper. Another field you might want to look into is content-based image retrieval (CBIR).
Sorry for not being Java-specific. HTH.
As a better alternative to simple pixel grabbing, try SSIM. It does require that your images are essentially of the same object from the same angle, however. It's useful if you're comparing images that have been compressed with different algorithms, for example (e.g. JPEG vs JPEG2000). Also, it's a fairly simple approach that you should be able to implement reasonably quickly to see some results.
I don't know of a Java implementation, but there's a C++ implementation using OpenCV. You could try to reuse that (through something like JavaCV) or just write it from scratch. The algorithm itself isn't that complicated anyway, so you should be able to implement it directly.
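To get a feel for the formula before porting the full windowed version, here is a rough sketch that evaluates SSIM over a whole image as a single window (real SSIM averages it over small sliding windows). It assumes two equally-sized 8-bit grayscale images flattened into double arrays; the constants are the usual defaults from the SSIM paper (K1 = 0.01, K2 = 0.03, L = 255):
public class SsimSketch {

    static double globalSsim(double[] x, double[] y) {
        double c1 = Math.pow(0.01 * 255, 2), c2 = Math.pow(0.03 * 255, 2);
        int n = x.length;
        double mx = 0, my = 0;
        for (int i = 0; i < n; i++) { mx += x[i]; my += y[i]; }
        mx /= n; my /= n;
        double vx = 0, vy = 0, cov = 0;
        for (int i = 0; i < n; i++) {
            vx  += (x[i] - mx) * (x[i] - mx);
            vy  += (y[i] - my) * (y[i] - my);
            cov += (x[i] - mx) * (y[i] - my);
        }
        vx /= n - 1; vy /= n - 1; cov /= n - 1;
        // Luminance, contrast and structure terms combined, as in the paper.
        return ((2 * mx * my + c1) * (2 * cov + c2))
             / ((mx * mx + my * my + c1) * (vx + vy + c2));
    }

    public static void main(String[] args) {
        double[] a = {10, 20, 30, 40};
        double[] b = {12, 18, 33, 41};
        System.out.println("SSIM ~ " + globalSsim(a, b)); // near 1.0 for similar inputs
    }
}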
