Image region/tree detection from google map - java

I am planning to write a Java application to identify certain types of trees using Google Maps. I have knowledge of the Google Maps API; my problem is how to recognize trees from the map. I found some libs like jjil, but I don't know whether it is useful or not. Can anyone give some input for this project?

The general idea would probably be something like this:
1. Collect a large number of positive and negative samples (i.e. images from Google Maps that you want to recognize and ones that you don't). It's hard to say how many you'll really need; depending on how similar they are and how many features you need, 100 might be enough or 10000 might be too few.
2. Find a set of texture features. Nobody can tell you which features are optimal without seeing the sample images from step 1 first.
3. Train a machine learning algorithm (e.g. SVM, neural network). Split the sample set into a training set and a test set to judge the discrimination quality.
Step 1 is just work, and I doubt anyone on Stack Overflow will do it for you. If you have the samples and post them, we might be able to help with steps 2 and 3, though.
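As a rough illustration of step 3 only, here is a minimal Weka sketch, assuming the texture features from step 2 have already been exported to an ARFF file. The file name and the 80/20 split are placeholders, and SMO is simply Weka's SVM implementation picked as an example:

import java.util.Random;

import weka.classifiers.Evaluation;
import weka.classifiers.functions.SMO;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class TreeClassifierSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical ARFF file: one row per sample, last column = tree / no-tree.
        Instances data = new DataSource("tree_samples.arff").getDataSet();
        data.setClassIndex(data.numAttributes() - 1);
        data.randomize(new Random(42));

        // 80/20 split into training and test sets.
        int trainSize = (int) Math.round(data.numInstances() * 0.8);
        Instances train = new Instances(data, 0, trainSize);
        Instances test = new Instances(data, trainSize, data.numInstances() - trainSize);

        SMO svm = new SMO();              // Weka's SVM implementation
        svm.buildClassifier(train);

        Evaluation eval = new Evaluation(train);
        eval.evaluateModel(svm, test);
        System.out.println(eval.toSummaryString());
    }
}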

Related

How to create & sample routes using OSM

I wish to use OSM (or an open, OSM-based service) in order to "create" routes between arbitrary points and then sample those routes an unlimited number of times. Sample as in get lat+lon and various other info if available (e.g. elevation, points of interest, etc.).
I've been struggling to find similar projects, or documentation that might help me. Right now I am quite unsure even about how to download part of the OSM schema locally, so that I don't have to use the API over the web and spam the OSM servers.
Most resources that I've been able to find online are, sadly, sparsely documented and often unmaintained :/
If I were to split what I need to learn into 3 parts, those would be:
a) Get the OSM schema for a certain "region" of the world downloaded and running locally, and be able to connect to and control it.
b) Figure out how to create an entity along the lines of a "route" between two points (say, addresses in a city).
c) Figure out how to query said entities for various samples along the route.
This needn't be done with OSM if there is a product better suited for it, but I have to use something open, and OSM seems to be by far the biggest, most well-maintained project.
(I should note I am building the app in Scala, but I'm fine with documentation for other languages, or language-agnostic documentation, as long as it actually explains things and goes into detail instead of just throwing some incomplete lines of code at you.)
Re 1: Grab the OSM data from http://planet.osm.org/ and extract your area via osmcut, osmosis, osmconvert, etc. Or use preprocessed extracts, e.g. from Geofabrik.
Re 2: Street names are simple; everything else is complex. You'll need another preprocessing step which gives you the boundaries (or some guesses) of every city/village, and then feed the routing engine with this.
Re 3: Once you have this stored in the routing engine, it should be rather simple to query and return.
"OSM seems to be by far the biggest, most well-maintained project."
Maybe there is a misunderstanding here: OpenStreetMap is a database, so it is just data. It happens to have an ecosystem of various tools around it, such as tools for routing; see the tagged GraphHopper routing engine here, and others here.
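To make the three points concrete, here is a minimal Java sketch using the GraphHopper routing engine mentioned above. It assumes a 0.x-era GraphHopper API (class and method names shift between versions, so treat this as a sketch rather than a drop-in solution), and the file paths and coordinates are placeholders:

import com.graphhopper.GHRequest;
import com.graphhopper.GHResponse;
import com.graphhopper.GraphHopper;
import com.graphhopper.reader.osm.GraphHopperOSM;
import com.graphhopper.routing.util.EncodingManager;
import com.graphhopper.util.PointList;

public class RouteSampler {
    public static void main(String[] args) {
        // Build (or load) a routing graph from a local OSM extract,
        // e.g. one downloaded from Geofabrik.
        GraphHopper hopper = new GraphHopperOSM().forServer();
        hopper.setDataReaderFile("region.osm.pbf");
        hopper.setGraphHopperLocation("graph-cache");
        hopper.setEncodingManager(new EncodingManager("car"));
        hopper.importOrLoad();

        // Route between two arbitrary coordinates (here: somewhere in Berlin).
        GHRequest req = new GHRequest(52.5170, 13.3889, 52.5323, 13.4060)
                .setVehicle("car");
        GHResponse rsp = hopper.route(req);
        if (rsp.hasErrors()) {
            System.err.println(rsp.getErrors());
            return;
        }

        // The returned geometry is effectively the sampled route:
        // a list of lat/lon points you can iterate over and enrich later.
        PointList points = rsp.getBest().getPoints();
        for (int i = 0; i < points.size(); i++) {
            System.out.printf("%.6f,%.6f%n",
                    points.getLatitude(i), points.getLongitude(i));
        }
    }
}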

Which JUNG algorithms are appropriate here?

So I have been set the task of making a program that takes in a wide set of text files and calculates strange communication patterns in that set. Basically, I think we need to work out extremes of communication frequency, along with clustering together groups of people that communicate very often.
As well as that, I'd like to create a 'social network style' graph that shows who communicates with whom, and graphs of the form 'number of emails over time'.
Additionally, any other ideas of types of graphs to include would be great, though my main predicament is which algorithms to use for this.
Thanks.
Eventually, I decided on using k-means clustering, alongside Barycenter.
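For reference, a minimal JUNG 2.x sketch of building such a communication graph and scoring its vertices might look like the following. The people and emails are made up, and BarycenterScorer is JUNG's distance-based barycenter scorer, which may or may not be the "Barycenter" meant above:

import edu.uci.ics.jung.algorithms.scoring.BarycenterScorer;
import edu.uci.ics.jung.graph.Graph;
import edu.uci.ics.jung.graph.SparseMultigraph;

public class CommGraphSketch {
    public static void main(String[] args) {
        // One vertex per person, one edge per email (a multigraph keeps duplicates).
        Graph<String, Integer> g = new SparseMultigraph<>();
        int edgeId = 0;
        String[][] emails = {{"alice", "bob"}, {"alice", "bob"}, {"bob", "carol"}};
        for (String[] mail : emails) {
            g.addVertex(mail[0]);
            g.addVertex(mail[1]);
            g.addEdge(edgeId++, mail[0], mail[1]);
        }

        // Degree is a crude "frequency of communication" measure.
        for (String person : g.getVertices()) {
            System.out.println(person + " degree=" + g.degree(person));
        }

        // BarycenterScorer ranks vertices by total distance to all others.
        BarycenterScorer<String, Integer> scorer = new BarycenterScorer<>(g);
        for (String person : g.getVertices()) {
            System.out.println(person + " barycenter=" + scorer.getVertexScore(person));
        }
    }
}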

Find sub-image in a large image using Java and the FAST algorithm

I am very new to image processing, and my requirement is to detect whether a given large image contains a given sub-image (small image), using Java and the FAST algorithm. I have found a Java lib (JFeatureLib) which provides APIs for the FAST algorithm, other descriptors, etc.
The following link is from JFeatureLib, explaining how to get the features of an image using descriptors; I could run this code and get the features of both images.
https://github.com/locked-fg/JFeatureLib-Demo/tree/master/src/main/java/de/lmu/dbs/ifi/jfeaturelib/examples
But as the JFeatureLib documentation didn't provide sufficient information on usage of the API, I feel helpless.
It would be a great help if someone could guide me on how to use this set of APIs to achieve my requirement.
If someone could at least tell me the major steps required to deal with this task (that is, getting the image features, etc.), that would help.
Thanks.
Anybody have the answer for this?
Probably already out of date, but either you asked already on the mailing list or I never got your request. JFeatureLib is for feature extraction only.
Matching and finding sub-images is usually the next step in the image processing pipeline. It is also totally different from extraction, as it usually involves distance measures, clustering, hashing and modeling. A common method is applying a RANSAC algorithm to find and match similar feature vectors. As you can see, simple matching / finding of sub-images is yet another problem domain. This is why JFeatureLib simply doesn't cover this topic.
Edit in 2018:
By the way, you could also use the masking feature of JFeatureLib: https://github.com/locked-fg/JFeatureLib/wiki/HowTo#using-masks
Yet not all descriptors support masking. Please check https://github.com/locked-fg/JFeatureLib/wiki/FeaturesOverview for the supported capabilities (look for 'Masking' in the capabilities).
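For completeness: if the sub-image appears in the large image without rotation or scaling, plain template matching is a much simpler route than the feature/RANSAC pipeline described above. A minimal sketch with OpenCV's Java bindings (not JFeatureLib; the file names and the 0.9 threshold are placeholders):

import org.opencv.core.Core;
import org.opencv.core.Core.MinMaxLocResult;
import org.opencv.core.Mat;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

public class SubImageSearch {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        Mat scene = Imgcodecs.imread("large.png");
        Mat template = Imgcodecs.imread("small.png");

        // Slide the template over the scene and score every position.
        Mat result = new Mat();
        Imgproc.matchTemplate(scene, template, result, Imgproc.TM_CCOEFF_NORMED);

        // Best score and its location; with TM_CCOEFF_NORMED, 1.0 is a perfect match.
        MinMaxLocResult mm = Core.minMaxLoc(result);
        if (mm.maxVal > 0.9) {
            System.out.println("Sub-image found at " + mm.maxLoc
                    + " (score " + mm.maxVal + ")");
        } else {
            System.out.println("No confident match (best score " + mm.maxVal + ")");
        }
    }
}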

How page ranks are calculated in real time

I have read the explanation at http://en.wikipedia.org/wiki/PageRank and I understand that the page rank is calculated from incoming links and outgoing links.
I have a crawler which crawls web pages and stores them in a DB, and I need a PageRank algorithm.
I have a DB with the following values:
Title
url
content_html
outgoing_links (external domain)
internal_links (the links with the same domain as the url)
Can you please explain whether I need any other values to compute the page rank, and how to compute it using Java?
PageRank is, at its heart, a linear algebra eigenvalue problem:
http://www.rose-hulman.edu/~bryan/googleFinalVersionFixed.pdf
If you don't know linear algebra or eigenvalue problems, or aren't willing to read this paper, it's unlikely that you'll be able to tackle this problem. As Einstein said, "Make the problem as simple as possible, but no simpler..."
The paper's title is old; it refers to Google's market cap circa 2004. It's up to $211B this morning.
The technology hasn't stood still in all that time. Google continues to tweak the algorithm in proprietary ways. But this paper explains the heart of it.
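If you want to skip the linear algebra and just get numbers, the standard shortcut is power iteration over your link table. A minimal, self-contained sketch (the damping factor 0.85 and 50 iterations are conventional choices, not tuned values):

import java.util.Arrays;

public class PageRankSketch {
    /**
     * outLinks[u] holds the indices of pages that page u links to
     * (built from your outgoing_links / internal_links columns).
     */
    static double[] pageRank(int[][] outLinks, double damping, int iterations) {
        int n = outLinks.length;
        double[] rank = new double[n];
        Arrays.fill(rank, 1.0 / n);

        for (int it = 0; it < iterations; it++) {
            double[] next = new double[n];
            Arrays.fill(next, (1 - damping) / n);   // random-jump term
            for (int u = 0; u < n; u++) {
                if (outLinks[u].length == 0) {
                    // Dangling page: spread its rank over everyone.
                    for (int v = 0; v < n; v++) next[v] += damping * rank[u] / n;
                } else {
                    for (int v : outLinks[u]) {
                        next[v] += damping * rank[u] / outLinks[u].length;
                    }
                }
            }
            rank = next;
        }
        return rank;
    }

    public static void main(String[] args) {
        // Toy web: page 0 -> 1, page 1 -> 0 and 2, page 2 -> 0.
        int[][] links = {{1}, {0, 2}, {0}};
        System.out.println(Arrays.toString(pageRank(links, 0.85, 50)));
    }
}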
You have a few options. If you want to do it all yourself then duffymo's solution is perfect, but if you want to use existing libraries I would suggest something like JUNG for graphs.
I'm not sure if you're familiar with graphs, but they can be used to store the structure of the links, and PageRank is included in most graph libraries. Depending on how you want to do it, a good in-memory solution is JUNG, but if you need persistent database storage then loading your data into Neo4j would work (you would have to install Gremlin to do the PageRank).
The above are Java solutions, but if you want to do it yourself (and, like me, don't like dry research papers) then I would highly suggest the book Programming Collective Intelligence. It goes through (chapter 4, I think) creating a search engine from scratch that includes PageRank and neural networks to monitor clicks. The only problem, given your requirements above, is that the book is written in Python, but you can easily apply the logic to Java. If you know a bit of Python already then you can download the book's source code for free and check out the software (but there is no explanation of the math behind the code in the source code).
Hope that helps

Machine learning challenge: diagnosing program in Java/Groovy (data mining, machine learning)

I'm planning to develop a program in Java which will provide a diagnosis. The data set is divided into two parts: one for training and the other for testing. My program should learn to classify from the training data (which, BTW, contains the answers to 30 questions, each in a new column, with each record on a new line; the last column is the diagnosis, 0 or 1; in the testing part of the data the diagnosis column is empty; the data set contains about 1000 records) and then make predictions on the testing part of the data.
I've never done anything similar, so I'll appreciate any advice or information about solutions to similar problems.
I was thinking about the Java Machine Learning Library or the Java Data Mining Package, but I'm not sure if that's the right direction... I'm still not sure how to tackle this challenge...
Please advise.
All the best!
I strongly recommend you use Weka for your task.
It's a collection of machine learning algorithms with a user-friendly front-end which facilitates a lot of different kinds of feature and model selection strategies.
You can do a lot of really complicated stuff using this without really having to do any coding or math.
The makers have also published a pretty good textbook that explains the practical aspects of data mining.
Once you get the hang of it, you could use its API to integrate any of its classifiers into your own Java programs.
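For example, a minimal sketch of that API usage, assuming the 30-question data has been converted to ARFF files (the file names are placeholders, and J48, Weka's C4.5-style decision tree, is picked arbitrarily):

import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class DiagnosisSketch {
    public static void main(String[] args) throws Exception {
        // Training data: 30 answer columns plus a final 0/1 diagnosis column.
        Instances train = new DataSource("training.arff").getDataSet();
        train.setClassIndex(train.numAttributes() - 1);

        J48 tree = new J48();            // C4.5-style decision tree
        tree.buildClassifier(train);

        // Test data has the same columns; the diagnosis column is simply unfilled.
        Instances test = new DataSource("testing.arff").getDataSet();
        test.setClassIndex(test.numAttributes() - 1);
        for (int i = 0; i < test.numInstances(); i++) {
            double label = tree.classifyInstance(test.instance(i));
            System.out.println("record " + i + " -> "
                    + test.classAttribute().value((int) label));
        }
    }
}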
As Gann Bierner said, this is a classification problem. The best classification algorithm for your needs that I know of is Ross Quinlan's decision tree algorithm. It's conceptually very easy to understand.
For off-the-shelf implementations of the classification algorithms, the best bet is Weka. http://www.cs.waikato.ac.nz/ml/weka/. I have studied Weka but not used it, as I discovered it a little too late.
I used a much simpler implementation called jaDTi. It works pretty well for smaller data sets such as yours. I have used it quite a bit, so I can say so confidently. jaDTi can be found at:
http://www.run.montefiore.ulg.ac.be/~francois/software/jaDTi/
Having said all that, your challenge will be building a usable interface over the web. For that, the batch-style dataset approach will be of limited use: it basically works on the premise that you already have the training set, you feed the new test dataset in one step, and you get the answer(s) immediately.
But my application, and probably yours too, was a step-by-step user discovery, with features to go back and forth over the decision tree nodes.
To build such an application, I created a PMML document from my training set and built a Java engine that traverses each node of the tree, asking the user to give an input (text/radio/list) and using the values as inputs to the next possible node predicate.
The PMML standard can be found here: http://www.dmg.org/ (you only need the TreeModel). The NetBeans XML plugin is a good schema-aware editor for PMML authoring. Altova XML can do a better job, but costs $$.
It is also possible to use an RDBMS to store your dataset and create the PMML automagically! I have not tried that.
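As a toy illustration of that step-by-step traversal idea (plain Java with made-up class names, not the PMML API; a real engine would parse the nodes and predicates from the PMML TreeModel instead of hard-coding them):

import java.util.Scanner;

public class TreeWalkSketch {
    /** One node of the decision tree: either a question or a leaf diagnosis. */
    static class Node {
        String question;   // non-null for internal nodes
        String diagnosis;  // non-null for leaves
        Node yes, no;
    }

    public static void main(String[] args) {
        // Tiny hard-coded tree; a real one would be built from PMML.
        Node leafA = new Node(); leafA.diagnosis = "diagnosis 1";
        Node leafB = new Node(); leafB.diagnosis = "diagnosis 0";
        Node root = new Node();
        root.question = "Question 1: does the symptom occur daily?";
        root.yes = leafA;
        root.no = leafB;

        // Walk the tree one question at a time, driven by user input.
        Scanner in = new Scanner(System.in);
        Node node = root;
        while (node.diagnosis == null) {
            System.out.println(node.question + " (y/n)");
            node = in.nextLine().trim().toLowerCase().startsWith("y") ? node.yes : node.no;
        }
        System.out.println("Result: " + node.diagnosis);
    }
}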
Good luck with your project, please feel free to let me know if you need further inputs.
There are various algorithms that fall into the category of "machine learning", and which is right for your situation depends on the type of data you're dealing with.
If your data essentially consists of mappings of a set of questions to a set of diagnoses each of which can be yes/no, then I think methods that could potentially work include neural networks and methods for automatically building a decision tree based on the test data.
I'd have a look at some of the standard texts, such as Russell & Norvig ("Artificial Intelligence: A Modern Approach") and other introductions to AI/machine learning, and see if you can easily adapt the algorithms they mention to your particular data. See also O'Reilly's "Programming Collective Intelligence" for some sample Python code of one or two algorithms that might be adaptable to your case.
If you can read Spanish, the Mexican publishing house Alfaomega have also published various good AI-related introductions in recent years.
This is a classification problem, not really data mining. The general approach is to extract features from each data instance and let the classification algorithm learn a model from the features and the outcome (which for you is 0 or 1). Presumably each of your 30 questions would be its own feature.
There are many classification techniques you can use. Support vector machines are popular, as is maximum entropy. I haven't used the Java Machine Learning Library, but at a glance I don't see either of these. The OpenNLP project has a maximum entropy implementation, and LibSVM has a support vector machine implementation. You'll almost certainly have to convert your data to something the library can understand.
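For illustration, a minimal LibSVM sketch with toy data; the feature values and parameters are placeholders, and each of your records would become one svm_node array with 30 entries:

import libsvm.svm;
import libsvm.svm_model;
import libsvm.svm_node;
import libsvm.svm_parameter;
import libsvm.svm_problem;

public class LibSvmSketch {
    // LibSVM represents a feature as an (index, value) pair; indices are 1-based.
    static svm_node node(int index, double value) {
        svm_node n = new svm_node();
        n.index = index;
        n.value = value;
        return n;
    }

    public static void main(String[] args) {
        // Toy training set: 4 records, 2 features each, labels 0/1.
        double[][] x = {{0, 0}, {0, 1}, {1, 0}, {1, 1}};
        double[] y = {0, 0, 1, 1};

        svm_problem prob = new svm_problem();
        prob.l = x.length;
        prob.y = y;
        prob.x = new svm_node[x.length][];
        for (int i = 0; i < x.length; i++) {
            prob.x[i] = new svm_node[]{node(1, x[i][0]), node(2, x[i][1])};
        }

        svm_parameter param = new svm_parameter();
        param.svm_type = svm_parameter.C_SVC;
        param.kernel_type = svm_parameter.RBF;
        param.C = 1;          // untuned placeholder values
        param.gamma = 0.5;
        param.cache_size = 100;  // MB
        param.eps = 1e-3;

        svm_model model = svm.svm_train(prob, param);
        double pred = svm.svm_predict(model, new svm_node[]{node(1, 1), node(2, 0)});
        System.out.println("predicted label: " + pred);
    }
}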
Good luck!
Update: I agree with the other commenter that Russell and Norvig is a great AI book which discusses some of this. Bishop's "Pattern Recognition and Machine Learning" discusses classification issues in depth if you're interested in the down-and-dirty details.
Your task is a classical one for neural networks, which are intended first of all to solve exactly such classification tasks. Neural networks have a rather simple implementation in any language, and they are the "mainstream" of machine learning, closer to AI than anything else.
You just implement (or get an existing implementation of) a standard neural network, for example a multilayer network with learning by error back-propagation, and feed it learning examples in a cycle. After some time of such learning you will get it working on real examples.
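As a self-contained toy of what "multilayer network with error back-propagation, fed examples in a cycle" means in practice, here is a small network learning XOR; the layer size, learning rate and epoch count are arbitrary choices:

import java.util.Random;

public class TinyBackprop {
    static double sigmoid(double x) { return 1.0 / (1.0 + Math.exp(-x)); }

    public static void main(String[] args) {
        // XOR: the classic example a single-layer network cannot learn.
        double[][] in = {{0, 0}, {0, 1}, {1, 0}, {1, 1}};
        double[] target = {0, 1, 1, 0};

        int hidden = 4;
        Random rnd = new Random(1);
        double[][] w1 = new double[hidden][3];  // input weights + bias per hidden unit
        double[] w2 = new double[hidden + 1];   // hidden weights + bias for the output
        for (double[] w : w1)
            for (int j = 0; j < 3; j++) w[j] = rnd.nextGaussian() * 0.5;
        for (int j = 0; j <= hidden; j++) w2[j] = rnd.nextGaussian() * 0.5;

        double lr = 0.5;  // learning rate
        for (int epoch = 0; epoch < 20000; epoch++) {
            for (int s = 0; s < in.length; s++) {
                // Forward pass.
                double[] h = new double[hidden];
                for (int i = 0; i < hidden; i++)
                    h[i] = sigmoid(w1[i][0] * in[s][0] + w1[i][1] * in[s][1] + w1[i][2]);
                double z = w2[hidden];
                for (int i = 0; i < hidden; i++) z += w2[i] * h[i];
                double y = sigmoid(z);

                // Backward pass: propagate the error and nudge every weight.
                double dOut = (y - target[s]) * y * (1 - y);
                for (int i = 0; i < hidden; i++) {
                    double dHid = dOut * w2[i] * h[i] * (1 - h[i]);
                    w2[i] -= lr * dOut * h[i];
                    w1[i][0] -= lr * dHid * in[s][0];
                    w1[i][1] -= lr * dHid * in[s][1];
                    w1[i][2] -= lr * dHid;
                }
                w2[hidden] -= lr * dOut;
            }
        }

        // After training, the outputs should be close to 0, 1, 1, 0.
        for (int s = 0; s < in.length; s++) {
            double[] h = new double[hidden];
            for (int i = 0; i < hidden; i++)
                h[i] = sigmoid(w1[i][0] * in[s][0] + w1[i][1] * in[s][1] + w1[i][2]);
            double z = w2[hidden];
            for (int i = 0; i < hidden; i++) z += w2[i] * h[i];
            System.out.printf("%s -> %.3f%n",
                    java.util.Arrays.toString(in[s]), sigmoid(z));
        }
    }
}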
You can read more about neural networks starting from here:
http://en.wikipedia.org/wiki/Neural_network
http://en.wikipedia.org/wiki/Artificial_neural_network
You can also get links to many ready-made implementations here:
http://en.wikipedia.org/wiki/Neural_network_software
