Implementing Markov Chain Example - Java

There are plenty of Markov chain examples for text simulation, but I couldn't find any examples for a state change (for example, weather changing based on probability over time). For example, let's say
Sunny --> Sunny = probability is 0.8
Sunny --> Rainy = probability is 0.2
What I am looking for is a way to write an algorithm that will display the current weather for n steps.
For example: f(3) => S,S,R
What I am really finding difficult is how to add the randomness to the algorithm.
The text-generation algorithm I found generates a sentence based on the probability of the given words in a phrase, but I am unable to map it onto my requirement (I am not good at maths).
Please also let me know how I can extend the algorithm; for example, if the probability of a sunny day with high humidity is 0.3, the function should produce something like
f(4) -> [S, Low Hu], [S, Low Hu], [R, High Hu], etc.
Please let me know whether this approach is good for my requirement.
Pseudocode would be enough.

You can use the mockNeat.probabilities() method from the library of the same name if you don't want to implement the functionality yourself, or you can take a look at how it's implemented.
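If you do want to implement it yourself, here is a minimal Java sketch of the idea (the Rainy row of the transition matrix is an assumed value, since the question only gives the Sunny row). The randomness comes from drawing a uniform number in [0,1) and comparing it against the cumulative transition probabilities:

import java.util.Random;

// Minimal sketch of the weather Markov chain described in the question.
// The Sunny row (0.8, 0.2) is from the question; the Rainy row is assumed
// just to make the example runnable.
public class WeatherChain {

    private static final String[] STATES = {"S", "R"};

    // transition[i][j] = probability of moving from state i to state j
    private static final double[][] TRANSITION = {
            {0.8, 0.2},   // from Sunny
            {0.4, 0.6}    // from Rainy (assumed values)
    };

    private static final Random RANDOM = new Random();

    // Pick the next state: draw a uniform number in [0,1) and walk through
    // the row of transition probabilities until the cumulative sum exceeds it.
    static int nextState(int current) {
        double r = RANDOM.nextDouble();
        double cumulative = 0.0;
        for (int j = 0; j < TRANSITION[current].length; j++) {
            cumulative += TRANSITION[current][j];
            if (r < cumulative) {
                return j;
            }
        }
        return TRANSITION[current].length - 1; // guard against rounding error
    }

    // f(n): print the weather for n steps, starting from Sunny.
    static void simulate(int n) {
        int state = 0;
        StringBuilder out = new StringBuilder();
        for (int i = 0; i < n; i++) {
            out.append(STATES[state]);
            if (i < n - 1) out.append(",");
            state = nextState(state);
        }
        System.out.println(out);   // e.g. S,S,R
    }

    public static void main(String[] args) {
        simulate(3);
    }
}

To extend this to weather plus humidity, make each state a pair such as [S, Low Hu] and enlarge the state list and transition matrix accordingly.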

Related

How to convert number words like two-third, one-fifth, two-hundredth into numeric form using Java

I want to take input in word form, like two-third or one-fifth, and I want my system to convert it into numerical form and give the answer.
Question: two-third of thirty is?
The system should output 20
How can I program it?
As a general problem, natural language processing (NLP) - which is what you're talking about - is a difficult, open-ended problem.
There are lots of libraries for this stuff. If you want background look here:
Is there a good natural language processing library
Or look up Natural Language Processing in Wikipedia.
However you said you want to do this and you're new to programming.
The first thing you need to do is break the problem down. That's how we solve programming problems.
So first try writing a program that can read a string containing a single word and map it to a number.
For example "One" outputs 1, "Two" outputs 2, "Thirty" outputs 30.
Next try and write a program that cuts a string into its constituent words.
You probably want to use an array here.
That's a process called tokenizing and Java has a built in StringTokenizer to do that.
You might want to code that yourself, but you're learning and it might be the moment to start learning using library code.
When you've got those try combining them so your program can convert "Thirty Seven" into 37 (i.e. numbers under 100).
That new program should combine the ideas of your program that can convert "Thirty" and "Seven" and the one that can split words up.
This is the other thing we do in programming - combining things.
We break it down into smaller problems, solve them, and then build them back up to solve the bigger problem.
(I apologize if I'm patronizing you but I have no idea of your experience).
After that you might add logic that handles "Five Hundred And Thirty Seven".
Again, notice how spotting Five followed by Hundred is like converting Five and then finding a token that tells you to multiply what you just saw by 100.
You could go on to handle Thousands, Hundred Thousand etc.
Or you could branch off into the fractions.
That's similar but you just have a different vocabulary.
Seven Forty-Seconds = 7/42.
As a learning challenge I would suggest you'll have come a long way if your program handles things like "forty two ninety-thirds of eight hundred and eighty nine".
The easy solution outputs 401.48387... - the floating point answer to (42/93)*889.
The extra credit solution outputs 12446/31 - (42/93)*889 can be simplified as a rational number to 12446/31.
To be honest, you'll be doing well if you can handle "nine-ninths of ninety nine".
Notice that the first word is the numerator (n). The second is the denominator (d). The third is always 'of'. The fourth word is either the tens (t) or the units (u). If the fourth word was the units you're done; otherwise, if there is a fifth word, it's the units.
The answer in that case is n/d*(t*10+u). If the tens or units are missing they're zero - obviously.
PS: You might need special handling for zero if you object to someone typing in "ninety zero". It obviously means ninety, but we don't say it that way in English!
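To make the combination and fraction steps concrete, here is a minimal Java sketch of the approach described above. The vocabulary maps are deliberately tiny and purely illustrative; a real program would fill them out and validate its input:

import java.util.HashMap;
import java.util.Map;

// Sketch of the "break it down" approach: map single words to numbers,
// split the phrase into tokens, then evaluate the pattern
// numerator denominator "of" tens [units]  ->  n/d*(t*10+u).
public class WordFractions {

    private static final Map<String, Integer> UNITS = new HashMap<>();
    private static final Map<String, Integer> DENOMINATORS = new HashMap<>();

    static {
        UNITS.put("one", 1);   UNITS.put("two", 2);   UNITS.put("three", 3);
        UNITS.put("seven", 7); UNITS.put("nine", 9);
        UNITS.put("twenty", 20); UNITS.put("thirty", 30); UNITS.put("ninety", 90);
        DENOMINATORS.put("third", 3); DENOMINATORS.put("fifth", 5);
        DENOMINATORS.put("ninths", 9);
    }

    // "two-third of thirty" -> (2/3) * 30 = 20
    static double evaluate(String phrase) {
        String[] tokens = phrase.toLowerCase().replace("-", " ").split("\\s+");
        int numerator = UNITS.get(tokens[0]);
        int denominator = DENOMINATORS.get(tokens[1]);
        // tokens[2] is expected to be "of"
        int tens = 0, units = 0;
        int value = UNITS.get(tokens[3]);
        if (value >= 20) { tens = value / 10; } else { units = value; }
        if (tokens.length > 4) { units = UNITS.get(tokens[4]); }
        return (double) numerator / denominator * (tens * 10 + units);
    }

    public static void main(String[] args) {
        System.out.println(evaluate("two-third of thirty"));        // 20.0
        System.out.println(evaluate("nine-ninths of ninety nine")); // 99.0
    }
}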
You could try a mapping from
one ->1
two ->2
three ->3
four ->4
and so on
and on the other hand:
half ->2
third ->3
fourth ->4
then create a double by dividing the first value by the second,
and finally multiply this value by the third (you can use the first mapping for this value) and you have the result.
In the end it is not hard, but you do have to build the mapping between string and int manually.

How can I build a model to distinguish tweets about Apple (Inc.) from tweets about apple (fruit)?

See below for 50 tweets about "apple." I have hand labeled the positive matches about Apple Inc. They are marked as 1 below.
Here are a couple of lines:
1|“#chrisgilmer: Apple targets big business with new iOS 7 features http://bit.ly/15F9JeF ”. Finally.. A corp iTunes account!
0|“#Zach_Paull: When did green skittles change from lime to green apple? #notafan” #Skittles
1|#dtfcdvEric: #MaroneyFan11 apple inc is searching for people to help and tryout all their upcoming tablet within our own net page No.
0|#STFUTimothy have you tried apple pie shine?
1|#SuryaRay #India Microsoft to bring Xbox and PC games to Apple, Android phones: Report: Microsoft Corp... http://dlvr.it/3YvbQx #SuryaRay
Here is the total data set: http://pastebin.com/eJuEb4eB
I need to build a model that classifies "Apple" (Inc.) from the rest.
I'm not looking for a general overview of machine learning; rather, I'm looking for an actual model in code (Python preferred).
What you are looking for is called Named Entity Recognition. It is a statistical technique that (most commonly) uses Conditional Random Fields to find named entities, based on having been trained to learn things about named entities.
Essentially, it looks at the content and context of the word, (looking back and forward a few words), to estimate the probability that the word is a named entity.
Good software can look at other features of words, such as their length or shape (like "Vcv" if it starts with vowel-consonant-vowel).
A very good library (GPL) is Stanford's NER.
Here's the demo: http://nlp.stanford.edu:8080/ner/
Some sample text to try:
I was eating an apple over at Apple headquarters and I thought about
Apple Martin, the daughter of the Coldplay guy
(the 3class and 4class classifiers get it right)
I would do it as follows:
Split the sentence into words, normalise them, build a dictionary
With each word, store how many times they occurred in tweets about the company, and how many times they appeared in tweets about the fruit - these tweets must be confirmed by a human
When a new tweet comes in, find every word in the tweet in the dictionary, calculate a weighted score - words that are used frequently in relation to the company would get a high company score, and vice versa; words used rarely, or used with both the company and the fruit, would not have much of a score.
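A minimal sketch of that scoring idea (written in Java here, though the question prefers Python; the same few lines translate directly). The training tweets below are made up for illustration:

import java.util.HashMap;
import java.util.Map;

// Counts how often each word appears in human-labelled company vs. fruit
// tweets, then scores a new tweet by summing per-word log-ratios.
public class AppleScorer {

    private final Map<String, int[]> counts = new HashMap<>(); // word -> {company, fruit}

    void train(String tweet, boolean aboutCompany) {
        for (String word : tweet.toLowerCase().split("\\s+")) {
            int[] c = counts.computeIfAbsent(word, k -> new int[2]);
            c[aboutCompany ? 0 : 1]++;
        }
    }

    // Positive score -> leans company, negative -> leans fruit.
    double score(String tweet) {
        double total = 0.0;
        for (String word : tweet.toLowerCase().split("\\s+")) {
            int[] c = counts.getOrDefault(word, new int[2]);
            // +1 smoothing so unseen words contribute (close to) nothing
            total += Math.log((c[0] + 1.0) / (c[1] + 1.0));
        }
        return total;
    }

    public static void main(String[] args) {
        AppleScorer scorer = new AppleScorer();
        scorer.train("apple targets big business with new ios features", true);
        scorer.train("green apple skittles are not lime", false);
        System.out.println(scorer.score("new ios update from apple"));  // > 0
        System.out.println(scorer.score("apple pie skittles lime"));    // < 0
    }
}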
I have a semi-working system that solves this problem, open sourced using scikit-learn, with a series of blog posts describing what I'm doing. The problem I'm tackling is word-sense disambiguation (choosing one of multiple word sense options), which is not the same as Named Entity Recognition. My basic approach is somewhat-competitive with existing solutions and (crucially) is customisable.
There are some existing commercial NER tools (OpenCalais, DBPedia Spotlight, and AlchemyAPI) that might give you a good enough commercial result - do try these first!
I used some of these for a client project (I consult using NLP/ML in London), but I wasn't happy with their recall (precision and recall). Basically they can be precise (when they say "This is Apple Inc" they're typically correct), but with low recall (they rarely say "This is Apple Inc" even though to a human the tweet is obviously about Apple Inc). I figured it'd be an intellectually interesting exercise to build an open source version tailored to tweets. Here's the current code:
https://github.com/ianozsvald/social_media_brand_disambiguator
I'll note - I'm not trying to solve the generalised word-sense disambiguation problem with this approach, just brand disambiguation (companies, people, etc.) when you already have their name. That's why I believe that this straightforward approach will work.
I started this six weeks ago, and it is written in Python 2.7 using scikit-learn. It uses a very basic approach. I vectorize using a binary count vectorizer (I only count whether a word appears, not how many times) with 1-3 n-grams. I don't scale with TF-IDF (TF-IDF is good when you have a variable document length; for me the tweets are only one or two sentences, and my testing results didn't show improvement with TF-IDF).
I use the basic tokenizer, which is very simple but surprisingly useful. It ignores @ and # (so you lose some context) and of course doesn't expand a URL. I then train using logistic regression, and it seems that this problem is somewhat linearly separable (lots of terms for one class don't exist for the other). Currently I'm avoiding any stemming/cleaning (I'm trying The Simplest Possible Thing That Might Work).
The code has a full README, and you should be able to ingest your tweets relatively easily and then follow my suggestions for testing.
This works for Apple as people don't eat or drink Apple computers, nor do we type or play with fruit, so the words are easily split to one category or the other. This condition may not hold when considering something like #definance for the TV show (where people also use #definance in relation to the Arab Spring, cricket matches, exam revision and a music band). Cleverer approaches may well be required here.
I have a series of blog posts describing this project including a one-hour presentation I gave at the BrightonPython usergroup (which turned into a shorter presentation for 140 people at DataScienceLondon).
If you use something like LogisticRegression (where you get a probability for each classification) you can pick only the confident classifications, and that way you can force high precision by trading against recall (so you get correct results, but fewer of them). You'll have to tune this to your system.
Here's a possible algorithmic approach using scikit-learn:
Use a Binary CountVectorizer (I don't think term-counts in short messages add much information as most words occur only once)
Start with a Decision Tree classifier. It'll have explainable performance (see Overfitting with a Decision Tree for an example).
Move to logistic regression
Investigate the errors generated by the classifiers (read the DecisionTree's exported output or look at the coefficients in LogisticRegression, work the mis-classified tweets back through the Vectorizer to see what the underlying Bag of Words representation looks like - there will be fewer tokens there than you started with in the raw tweet - are there enough for a classification?)
Look at my example code in https://github.com/ianozsvald/social_media_brand_disambiguator/blob/master/learn1.py for a worked version of this approach
Things to consider:
You need a larger dataset. I'm using 2000 labelled tweets (it took me five hours), and as a minimum you want a balanced set with >100 per class (see the overfitting note below)
Improve the tokeniser (very easy with scikit-learn) to keep @ and # in tokens, and maybe add a capitalised-brand detector (as user @user2425429 notes)
Consider a non-linear classifier (like @oiez's suggestion above) when things get harder. Personally I found LinearSVC to do worse than logistic regression (but that may be due to the high-dimensional feature space that I've yet to reduce).
A tweet-specific part-of-speech tagger (in my humble opinion not Stanford's, as @Neil suggests - it performs poorly on poor Twitter grammar in my experience)
Once you have lots of tokens you'll probably want to do some dimensionality reduction (I've not tried this yet - see my blog post on LogisticRegression l1 l2 penalisation)
Re. overfitting. In my dataset with 2000 items I have a 10 minute snapshot from Twitter of 'apple' tweets. About 2/3 of the tweets are for Apple Inc, 1/3 for other-apple-uses. I pull out a balanced subset (about 584 rows I think) of each class and do five-fold cross validation for training.
Since I only have a 10 minute time window, I have many tweets about the same topic, and this is probably why my classifier does so well relative to existing tools - it will have overfit to the training features without generalising well (whereas the existing commercial tools perform worse on this snapshot, but more reliably across a wider set of data). I'll be expanding my time window to test this as a subsequent piece of work.
You can do the following:
Make a dict of words containing their count of occurrence in fruit and company related tweets. This can be achieved by feeding it some sample tweets whose inclination we know.
Using enough previous data, we can find out the probability of a word occurring in tweet about apple inc.
Multiply individual probabilities of words to get the probability of the whole tweet.
A simplified example:
p_f = Probability of fruit tweets.
p_w_f = Probability of a word occurring in a fruit tweet.
p_t_f = Combined probability of all words in the tweet occurring in a fruit tweet
= p_w1_f * p_w2_f * ...
p_f_t = Probability of fruit given a particular tweet.
p_c, p_w_c, p_t_c, p_c_t are respective values for company.
A Laplacian smoother of value 1 is added to eliminate the problem of zero frequency for new words that are not in our database.
old_tweets = {'apple pie sweet potatoe cake baby https://vine.co/v/hzBaWVA3IE3': '0', ...}

known_words = {}
total_company_tweets = total_fruit_tweets = total_company_words = total_fruit_words = 0

for tweet in old_tweets:
    company = old_tweets[tweet]
    for word in tweet.lower().split(" "):
        if not word in known_words:
            known_words[word] = {"company": 0, "fruit": 0}
        if company == "1":
            known_words[word]["company"] += 1
            total_company_words += 1
        else:
            known_words[word]["fruit"] += 1
            total_fruit_words += 1
    if company == "1":
        total_company_tweets += 1
    else:
        total_fruit_tweets += 1

total_tweets = len(old_tweets)

def predict_tweet(new_tweet, K=1):
    p_f = (total_fruit_tweets + K) / (total_tweets + K * 2)
    p_c = (total_company_tweets + K) / (total_tweets + K * 2)
    new_words = new_tweet.lower().split(" ")

    p_t_f = p_t_c = 1
    for word in new_words:
        try:
            wordFound = known_words[word]
        except KeyError:
            wordFound = {'fruit': 0, 'company': 0}
        p_w_f = (wordFound['fruit'] + K) / (total_fruit_words + K * len(known_words))
        p_w_c = (wordFound['company'] + K) / (total_company_words + K * len(known_words))
        p_t_f *= p_w_f
        p_t_c *= p_w_c

    # Applying Bayes' rule
    p_f_t = p_f * p_t_f / (p_t_f * p_f + p_t_c * p_c)
    p_c_t = p_c * p_t_c / (p_t_f * p_f + p_t_c * p_c)

    if p_c_t > p_f_t:
        return "Company"
    return "Fruit"
If you don't have an issue using an outside library, I'd recommend scikit-learn since it can probably do this better & faster than anything you could code by yourself. I'd just do something like this:
Build your corpus. I did the list comprehensions for clarity, but depending on how your data is stored you might need to do different things:
def corpus_builder(apple_inc_tweets, apple_fruit_tweets):
    corpus = [tweet for tweet in apple_inc_tweets] + [tweet for tweet in apple_fruit_tweets]
    labels = [1 for x in xrange(len(apple_inc_tweets))] + [0 for x in xrange(len(apple_fruit_tweets))]
    return (corpus, labels)
The important thing is you end up with two lists that look like this:
([['apple inc tweet i love ios and iphones'], ['apple iphones are great'], ['apple fruit tweet i love pie'], ['apple pie is great']], [1, 1, 0, 0])
The [1, 1, 0, 0] represent the positive and negative labels.
Then, you create a Pipeline! Pipeline is a scikit-learn class that makes it easy to chain text processing steps together so you only have to call one object when training/predicting:
# imports assumed for this snippet:
#   from sklearn.pipeline import Pipeline
#   from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
#   from sklearn.svm import LinearSVC
def train(corpus, labels):
    pipe = Pipeline([('vect', CountVectorizer(ngram_range=(1, 3), stop_words='english')),
                     ('tfidf', TfidfTransformer(norm='l2')),
                     ('clf', LinearSVC()),])
    pipe.fit_transform(corpus, labels)
    return pipe
Inside the Pipeline there are three processing steps. The CountVectorizer tokenizes the words, splits them, counts them, and transforms the data into a sparse matrix. The TfidfTransformer is optional, and you might want to remove it depending on the accuracy rating (doing cross validation tests and a grid search for the best parameters is a bit involved, so I won't get into it here). The LinearSVC is a standard text classification algorithm.
Finally, you predict the category of tweets:
def predict(pipe, tweet):
    prediction = pipe.predict([tweet])
    return prediction
Again, the tweet needs to be in a list, so I assumed it was entering the function as a string.
Put all those into a class or whatever, and you're done. At least, with this very basic example.
I didn't test this code so it might not work if you just copy-paste, but if you want to use scikit-learn it should give you an idea of where to start.
EDIT: tried to explain the steps in more detail.
Using a decision tree seems to work quite well for this problem. At least it produces a higher accuracy than a naive bayes classifier with my chosen features.
If you want to play around with some possibilities, you can use the following code, which requires nltk to be installed. The nltk book is also freely available online, so you might want to read a bit about how all of this actually works: http://nltk.googlecode.com/svn/trunk/doc/book/ch06.html
#coding: utf-8
import nltk
import random
import re
def get_split_sets():
    structured_dataset = get_dataset()
    train_set = set(random.sample(structured_dataset, int(len(structured_dataset) * 0.7)))
    test_set = [x for x in structured_dataset if x not in train_set]

    train_set = [(tweet_features(x[1]), x[0]) for x in train_set]
    test_set = [(tweet_features(x[1]), x[0]) for x in test_set]
    return (train_set, test_set)

def check_accurracy(times=5):
    s = 0
    for _ in xrange(times):
        train_set, test_set = get_split_sets()
        c = nltk.classify.DecisionTreeClassifier.train(train_set)
        # Uncomment to use a naive bayes classifier instead
        # c = nltk.classify.NaiveBayesClassifier.train(train_set)
        s += nltk.classify.accuracy(c, test_set)
    return s / times

def remove_urls(tweet):
    tweet = re.sub(r'http:\/\/[^ ]+', "", tweet)
    tweet = re.sub(r'pic.twitter.com/[^ ]+', "", tweet)
    return tweet

def tweet_features(tweet):
    words = [x for x in nltk.tokenize.wordpunct_tokenize(remove_urls(tweet.lower())) if x.isalpha()]
    features = dict()
    for bigram in nltk.bigrams(words):
        features["hasBigram(%s)" % ",".join(bigram)] = True
    for trigram in nltk.trigrams(words):
        features["hasTrigram(%s)" % ",".join(trigram)] = True
    return features

def get_dataset():
    dataset = """copy dataset in here
    """
    structured_dataset = [('fruit' if x[0] == '0' else 'company', x[2:]) for x in dataset.splitlines()]
    return structured_dataset

if __name__ == '__main__':
    print check_accurracy()
Thank you for the comments thus far. Here is a working solution I put together with PHP. I'd still be interested in hearing from others about a more algorithmic approach to the same problem.
<?php
// Confusion Matrix Init
$tp = 0;
$fp = 0;
$fn = 0;
$tn = 0;
$arrFP = array();
$arrFN = array();
// Load All Tweets to string
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, 'http://pastebin.com/raw.php?i=m6pP8ctM');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$strCorpus = curl_exec($ch);
curl_close($ch);
// Load Tweets as Array
$arrCorpus = explode("\n", $strCorpus);
foreach ($arrCorpus as $k => $v) {
    // init
    $blnActualClass = substr($v,0,1);
    $strTweet = trim(substr($v,2));

    // Score Tweet
    $intScore = score($strTweet);

    // Build Confusion Matrix and Log False Positives & Negatives for Review
    if ($intScore > 0) {
        if ($blnActualClass == 1) {
            // True Positive
            $tp++;
        } else {
            // False Positive
            $fp++;
            $arrFP[] = $strTweet;
        }
    } else {
        if ($blnActualClass == 1) {
            // False Negative
            $fn++;
            $arrFN[] = $strTweet;
        } else {
            // True Negative
            $tn++;
        }
    }
}

// Confusion Matrix and Logging (rows = actual class, columns = predicted class)
echo "
          Predicted
            1     0
Actual 1   $tp     $fn
Actual 0    $fp    $tn
";

if (count($arrFP) > 0) {
    echo "\n\nFalse Positives\n";
    foreach ($arrFP as $strTweet) {
        echo "$strTweet\n";
    }
}

if (count($arrFN) > 0) {
    echo "\n\nFalse Negatives\n";
    foreach ($arrFN as $strTweet) {
        echo "$strTweet\n";
    }
}
function LoadDictionaryArray() {
$strDictionary = <<<EOD
10|iTunes
10|ios 7
10|ios7
10|iPhone
10|apple inc
10|apple corp
10|apple.com
10|MacBook
10|desk top
10|desktop
1|config
1|facebook
1|snapchat
1|intel
1|investor
1|news
1|labs
1|gadget
1|apple store
1|microsoft
1|android
1|bonds
1|Corp.tax
1|macs
-1|pie
-1|clientes
-1|green apple
-1|banana
-10|apple pie
EOD;
    $arrDictionary = explode("\n", $strDictionary);
    foreach ($arrDictionary as $k => $v) {
        $arr = explode('|', $v);
        $arrDictionary[$k] = array('value' => $arr[0], 'term' => strtolower(trim($arr[1])));
    }
    return $arrDictionary;
}
function score($str) {
    $str = strtolower($str);
    $intScore = 0;
    foreach (LoadDictionaryArray() as $arrDictionaryItem) {
        if (strpos($str, $arrDictionaryItem['term']) !== false) {
            $intScore += $arrDictionaryItem['value'];
        }
    }
    return $intScore;
}
?>
The above outputs:
Predicted
1 0
Actual 1 31 1
Actual 0 1 17
False Positives
1|Royals apple #ASGame #mlb # News Corp Building http://instagram.com/p/bBzzgMrrIV/
False Negatives
-1|RT #MaxFreixenet: Apple no tiene clientes. Tiene FANS// error.... PAGAS por productos y apps, ergo: ERES CLIENTE.
In all the examples that you gave, Apple (Inc.) was either referred to as "Apple" or "apple inc", so a possible way could be to search for:
a capital "A" in Apple
an "inc" after apple
words/phrases like "OS", "operating system", "Mac", "iPhone", ...
or a combination of them
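A minimal sketch of those heuristics (in Java; the keyword list is illustrative, and the capitalised-"Apple" check will of course also fire on fruit tweets that start a sentence with the word):

import java.util.regex.Pattern;

// The capitalised-"Apple" check is case sensitive; the product-word check is not.
public class AppleHeuristic {

    private static final Pattern CAPITAL_APPLE = Pattern.compile("\\bApple\\b");
    private static final Pattern COMPANY_WORDS = Pattern.compile(
            "apple\\s+inc|\\bios\\b|\\biphone\\b|\\bmacbook\\b|operating system",
            Pattern.CASE_INSENSITIVE);

    static boolean looksLikeCompany(String tweet) {
        return CAPITAL_APPLE.matcher(tweet).find()
                || COMPANY_WORDS.matcher(tweet).find();
    }

    public static void main(String[] args) {
        System.out.println(looksLikeCompany("Apple targets big business with new iOS 7 features")); // true
        System.out.println(looksLikeCompany("have you tried apple pie?"));                           // false
    }
}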
To simplify the answers based on Conditional Random Fields a bit: context is huge here. You will want to pick out the features in those tweets that clearly show Apple the company vs. apple the fruit. Let me outline a list of features here that might be useful for you to start with. For more information look up noun phrase chunking and something called BIO labels. See http://www.cis.upenn.edu/~pereira/papers/crf.pdf
Surrounding words: Build a feature vector for the previous word and the next word, or if you want more features perhaps the previous 2 and next 2 words. You don't want too many words in the model or it won't match the data very well.
In Natural Language Processing, you are going to want to keep this as general as possible.
Other features to get from surrounding words include the following:
Whether the first character is a capital
Whether the last character in the word is a period
The part of speech of the word (Look up part of speech tagging)
The text itself of the word
I don't advise this, but to give more examples of features specifically for Apple:
WordIs(Apple)
NextWordIs(Inc.)
You get the point. Think of Named Entity Recognition as describing a sequence, and then using some math to tell a computer how to calculate that.
Keep in mind that natural language processing is a pipeline based system. Typically, you break things in to sentences, move to tokenization, then do part of speech tagging or even dependency parsing.
This is all to get you a list of features you can use in your model to identify what you're looking for.
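As a concrete illustration of the per-token features listed above, here is a small sketch (in Java) that builds a feature map for one token from its surrounding words, capitalisation and trailing period; a real system would feed such maps into a CRF or similar sequence model:

import java.util.LinkedHashMap;
import java.util.Map;

// Builds the feature map for the token at position i in a tokenised tweet.
public class TokenFeatures {

    static Map<String, String> featuresFor(String[] tokens, int i) {
        Map<String, String> features = new LinkedHashMap<>();
        features.put("word", tokens[i]);
        features.put("prevWord", i > 0 ? tokens[i - 1] : "<START>");
        features.put("nextWord", i < tokens.length - 1 ? tokens[i + 1] : "<END>");
        features.put("startsWithCapital",
                String.valueOf(Character.isUpperCase(tokens[i].charAt(0))));
        features.put("endsWithPeriod", String.valueOf(tokens[i].endsWith(".")));
        return features;
    }

    public static void main(String[] args) {
        String[] tokens = "Microsoft to bring Xbox games to Apple".split(" ");
        System.out.println(featuresFor(tokens, 6)); // features for "Apple"
    }
}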
There's a really good library for processing natural language text in Python called nltk. You should take a look at it.
One strategy you could try is to look at n-grams (groups of words) with the word "apple" in them. Some words are more likely to be used next to "apple" when talking about the fruit, others when talking about the company, and you can use those to classify tweets.
Use LibShortText. This Python utility has already been tuned to work for short text categorization tasks, and it works well. The most you'll have to do is write a loop to pick the best combination of flags. I used it to do supervised speech-act classification in emails and the results were up to 95-97% accurate (during 5-fold cross validation!).
And it comes from the makers of LIBSVM and LIBLINEAR, whose support vector machine (SVM) implementation is used in sklearn and CRAN, so you can be reasonably assured that their implementation is not buggy.
Make an AI filter to distinguish Apple Inc (the company) from apple (the fruit). Since these are tweets, define your training set with a vector of 140 fields, each field being the character written in the tweet at position X (0 to 139). If the tweet is shorter, just give a value for being blank.
Then build a training set big enough to get good accuracy (subjective to your taste). Assign a result value to each tweet: an Apple Inc tweet gets 1 (true) and an apple (fruit) tweet gets 0. It would be a case of supervised learning with logistic regression.
That is machine learning: it is generally easier to code and performs better. It has to learn from the set you give it, and it's not hardcoded.
I don't know Python, so I can not write the code for it, but if you were to take more time for machine learning's logic and theory you might want to look at the class I'm following.
Try the Coursera course Machine Learning by Andrew Ng. You will learn machine learning on MATLAB or Octave, but once you get the basics you will be able to write machine learning in about any language if you understand the simple math (simple in logistic regression).
That is, getting the code from someone won't make you able to understand what is going on in the machine learning code. You might want to invest a couple of hours on the subject to see what is really going on.
I would recommend avoiding answers suggesting entity recognition, because this task is text classification first and entity recognition second (you can do it without the entity recognition at all).
I think the fastest path to results will be spaCy + Prodigy.
spaCy has a well-thought-through model for the English language, so you don't have to build your own, while Prodigy allows you to quickly create training datasets and fine-tune the spaCy model for your needs.
If you have enough samples, you can have a decent model in one day.

Random geographic coordinates (on land, avoid ocean)

Any clever ideas on how to generate random coordinates (latitude/longitude) of places on Earth? Precision to 5 decimal places, and avoid bodies of water.
double minLat = -90.00;
double maxLat = 90.00;
// no "+ 1" here - adding 1 would let the value exceed the maximum
double latitude = minLat + Math.random() * (maxLat - minLat);

double minLon = -180.00;   // cover the full -180..180 range, not just 0..180
double maxLon = 180.00;
double longitude = minLon + Math.random() * (maxLon - minLon);

DecimalFormat df = new DecimalFormat("#.#####");
log.info("latitude:longitude --> " + df.format(latitude) + "," + df.format(longitude));
Maybe I'm living in a dream world and the water topic is unavoidable... but hopefully there's a nicer, cleaner and more efficient way to do this?
EDIT
Some fantastic answers/ideas -- however, at scale, let's say I need to generate 25,000 coordinates. Going to an external service provider may not be the best option due to latency, cost and a few other factors.
To deal with the body of water problem is going to be largely a data issue, e.g. do you just want to miss the oceans or do you need to also miss small streams. Either you need to use a service with the quality of data that you need, or, you need to obtain the data yourself and run it locally. From your edit, it sounds like you want to go the local data route, so I'll focus on a way to do that.
One method is to obtain a shapefile for either land areas or water areas. You can then generate a random point and determine if it intersects a land area (or alternatively, does not intersect a water area).
To get started, you might get some low resolution data here and then get higher resolution data here for when you want to get better answers on coast lines or with lakes/rivers/etc. You mentioned that you want precision in your points to 5 decimal places, which is a little over 1m. Do be aware that if you get data to match that precision, you will have one giant data set. And, if you want really good data, be prepared to pay for it.
Once you have your shape data, you need some tools to help you determine the intersection of your random points. Geotools is a great place to start and probably will work for your needs. You will also end up looking at opengis code (docs under geotools site - not sure if they consumed them or what) and JTS for the geometry handling. Using this you can quickly open the shapefile and start doing some intersection queries.
File f = new File("world.shp");
ShapefileDataStore dataStore = new ShapefileDataStore(f.toURI().toURL());
FeatureSource<SimpleFeatureType, SimpleFeature> featureSource =
        dataStore.getFeatureSource();
String geomAttrName = featureSource.getSchema()
        .getGeometryDescriptor().getLocalName();

ResourceInfo resourceInfo = featureSource.getInfo();
CoordinateReferenceSystem crs = resourceInfo.getCRS();

Hints hints = GeoTools.getDefaultHints();
hints.put(Hints.JTS_SRID, 4326);
hints.put(Hints.CRS, crs);

FilterFactory2 ff = CommonFactoryFinder.getFilterFactory2(hints);
GeometryFactory gf = JTSFactoryFinder.getGeometryFactory(hints);

Coordinate land = new Coordinate(-122.0087, 47.54650);
Point pointLand = gf.createPoint(land);
Coordinate water = new Coordinate(0, 0);
Point pointWater = gf.createPoint(water);

Intersects filter = ff.intersects(ff.property(geomAttrName),
        ff.literal(pointLand));
FeatureCollection<SimpleFeatureType, SimpleFeature> features = featureSource
        .getFeatures(filter);

filter = ff.intersects(ff.property(geomAttrName),
        ff.literal(pointWater));
features = featureSource.getFeatures(filter);
Quick explanations:
This assumes the shapefile you got is polygon data. Intersection on lines or points isn't going to give you what you want.
First section opens the shapefile - nothing interesting
you have to fetch the geometry property name for the given file
coordinate system stuff - you specified lat/long in your post, but GIS can be quite a bit more complicated. In general, the data I pointed you at is geographic (WGS84), and that is what I set up here. However, if this is not the case for you, then you need to be sure you are dealing with your data in the correct coordinate system. If that all sounds like gibberish, google around for a tutorial on GIS/coordinate systems/datums/ellipsoids.
generating the coordinate geometries and the filters are pretty self-explanatory. The resulting set of features will either be empty, meaning the coordinate is in the water if your data is land cover, or not empty, meaning the opposite.
Note: if you do this with a really random set of points, you are going to hit water pretty often and it could take you a while to get to 25k points. You may want to try to scope your point generation better than truly random (like remove big chunks of the Atlantic/Pacific/Indian oceans).
Also, you may find that your intersection queries are too slow. If so, you may want to look into creating a quadtree index (qix) with a tool like GDAL. I don't recall which index types are supported by geotools, though.
This was asked a long time ago, but I now have a similar need. There are two possibilities I am looking into:
1. Define the surface ranges for the random generator.
Here it's important to identify the level of precision you are going for. The easiest way would be to have a very relaxed and approximate approach. In this case you can divide the world map into "boxes":
Each box has its own range of lat/lon. Then you first randomise to get a random box, then you randomise to get a random lat and random lon within the boundaries of that box (see the sketch at the end of this answer).
Precision is of course not the best here, though it depends: if you do your homework well and define a lot of boxes covering the more complex surface shapes, you might be quite OK with the precision.
2. Use an external API to validate each point.
Use some API that returns a continent name, address, country or district from coordinates = something that WATER doesn't have. The Google Maps APIs can help here. I didn't research this one deeper, but I think it's possible; you will have to run the check on each generated pair of coordinates and rerun IF it's wrong, so you can get a bit stuck if the random generator keeps throwing you into the ocean.
Also, some water does belong to countries and districts... so yeah, not very precise.
For my needs I am going with "boxes", because I also want to control the exact areas from which the random coordinates are taken, and I don't mind if a point lands on a lake or river, just not the open ocean :)
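A minimal sketch of the "boxes" idea, with a box chosen in proportion to its rough area and a uniform point drawn inside it. The two boxes below are made-up stand-ins for a real hand-drawn list:

import java.util.Random;

// Pick a box weighted by its (rough) area, then a uniform point inside it.
public class BoxSampler {

    static class Box {
        final double minLat, maxLat, minLon, maxLon;
        Box(double minLat, double maxLat, double minLon, double maxLon) {
            this.minLat = minLat; this.maxLat = maxLat;
            this.minLon = minLon; this.maxLon = maxLon;
        }
        double area() { return (maxLat - minLat) * (maxLon - minLon); } // rough weight
    }

    private static final Box[] BOXES = {
            new Box(40.0, 50.0, 2.0, 20.0),     // roughly central Europe (assumed land-only)
            new Box(-25.0, -15.0, 120.0, 140.0) // roughly inland Australia (assumed land-only)
    };

    private static final Random RANDOM = new Random();

    static double[] randomPoint() {
        // pick a box, weighted by area
        double totalArea = 0;
        for (Box b : BOXES) totalArea += b.area();
        double r = RANDOM.nextDouble() * totalArea;
        Box chosen = BOXES[BOXES.length - 1];
        for (Box b : BOXES) {
            r -= b.area();
            if (r <= 0) { chosen = b; break; }
        }
        // pick a uniform point inside the chosen box
        double lat = chosen.minLat + RANDOM.nextDouble() * (chosen.maxLat - chosen.minLat);
        double lon = chosen.minLon + RANDOM.nextDouble() * (chosen.maxLon - chosen.minLon);
        return new double[] { lat, lon };
    }

    public static void main(String[] args) {
        double[] p = randomPoint();
        System.out.printf("%.5f,%.5f%n", p[0], p[1]);
    }
}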
Download a truckload of KML files containing land-only locations.
Extract all coordinates from them (this might help here).
Pick them at random.
Definitely you should have a map as a resource. You can get one here: http://www.naturalearthdata.com/
Then I would prepare a 1-bit black and white bitmap resource, with 1s marking land and 0s marking water.
The size of the bitmap depends on your required precision. If you need 5 degrees then your bitmap will be 360/5 x 180/5 = 72x36 pixels = 2592 bits.
Then I would load this bitmap in Java, generate a random integer within the range above, read the bit, and regenerate if it was zero.
P.S. Also you can dig here http://geotools.org/ for some ready made solutions.
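A minimal sketch of that bitmap lookup, with a couple of hand-set land cells standing in for a real rasterised land mask:

import java.util.BitSet;
import java.util.Random;

// Coarse land/water mask: bit (row, col) is 1 for land. The two marked cells
// in main() are illustrative only; in practice you would rasterise a real
// land map (e.g. from naturalearthdata.com) into the BitSet.
public class LandBitmap {

    private static final int CELL_DEGREES = 5;
    private static final int COLS = 360 / CELL_DEGREES; // 72
    private static final int ROWS = 180 / CELL_DEGREES; // 36

    private final BitSet land = new BitSet(ROWS * COLS);
    private final Random random = new Random();

    void markLand(double lat, double lon) { land.set(indexOf(lat, lon)); }

    boolean isLand(double lat, double lon) { return land.get(indexOf(lat, lon)); }

    private int indexOf(double lat, double lon) {
        int row = (int) ((lat + 90) / CELL_DEGREES);
        int col = (int) ((lon + 180) / CELL_DEGREES);
        row = Math.min(row, ROWS - 1);
        col = Math.min(col, COLS - 1);
        return row * COLS + col;
    }

    // keep drawing random coordinates until one falls on a land cell
    double[] randomLandPoint() {
        while (true) {
            double lat = random.nextDouble() * 180 - 90;
            double lon = random.nextDouble() * 360 - 180;
            if (isLand(lat, lon)) {
                return new double[] { lat, lon };
            }
        }
    }

    public static void main(String[] args) {
        LandBitmap bitmap = new LandBitmap();
        bitmap.markLand(48.8, 2.3);    // a cell covering part of France (assumed land)
        bitmap.markLand(-25.0, 134.0); // a cell in central Australia (assumed land)
        double[] p = bitmap.randomLandPoint();
        System.out.printf("%.5f,%.5f%n", p[0], p[1]);
    }
}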
To get a nice even distribution over the surface of the sphere you should do something like this to get the right angles:
double longitude = Math.random() * Math.PI * 2;      // uniform in [0, 2*pi), in radians
double latitude = Math.acos(Math.random() * 2 - 1);  // polar angle in [0, pi]; subtract pi/2 for latitude in [-pi/2, pi/2]
As for avoiding bodies of water, do you have the data for where water is already? Well, just resample until you get a hit! If you don't have this data already then it seems some other people have some better suggestions than I would for that...
Hope this helps, cheers.
There is another way to approach this using the Google Earth Api. I know it is javascript, but I thought it was a novel way to solve the problem.
Anyhow, I have put together a full working solution here - notice it works for rivers too: http://www.msa.mmu.ac.uk/~fraser/ge/coord/
The basic idea I have used is to implement the hitTest method of the GEView object in the Google Earth API.
Take a look at the following example of the hitTest from Google:
http://earth-api-samples.googlecode.com/svn/trunk/examples/hittest.html
The hitTest method is supplied a random point on the screen (in pixel coordinates), for which it returns a GEHitTestResult object that contains information about the geographic location corresponding to the point. If one uses the GEPlugin.HIT_TEST_TERRAIN mode with the method, one can limit results to land (terrain), as long as we screen the results to points with an altitude > 1m.
This is the function I use that implements the hitTest:
var hitTestTerrain = function()
{
    var x = getRandomInt(0, 200); // same pixel size as the map3d div height
    var y = getRandomInt(0, 200); // ditto for width
    var result = ge.getView().hitTest(x, ge.UNITS_PIXELS, y, ge.UNITS_PIXELS, ge.HIT_TEST_TERRAIN);
    var success = result && (result.getAltitude() > 1);
    return { success: success, result: result };
};
Obviously you also want to have random results from anywhere on the globe (not just random points visible from a single viewpoint). To do this I move the earth view after each successful hitTestTerrain call. This is achieved using a small helper function.
var flyTo = function(lat, lng, rng)
{
    lookAt.setLatitude(lat);
    lookAt.setLongitude(lng);
    lookAt.setRange(rng);
    ge.getView().setAbstractView(lookAt);
};
Finally here is a stripped down version of the main code block that calls these two methods.
var getRandomLandCoordinates = function()
{
    var test = hitTestTerrain();
    if (test.success)
    {
        coords[coords.length] = { lat: test.result.getLatitude(), lng: test.result.getLongitude() };
    }
    if (coords.length <= number)
    {
        getRandomLandCoordinates();
    }
    else
    {
        displayResults();
    }
};
So the earth view moves randomly to a new position after each successful hit, and the next random screen point is tested from there.
The other functions in there are just helpers to generate the random x,y and random lat,lng numbers, to output the results, and to toggle the controls, etc.
I have tested the code quite a bit and the results are not 100% perfect; tweaking the altitude to something higher, like 50m, solves this, but obviously it diminishes the area of possible selected coordinates.
Obviously you could adapt the idea to suit your needs, maybe running the code multiple times to populate a database or something.
As a plan B, maybe you can pick a random country and then pick a random coordinate inside of this country. To be fair when picking a country, you can use its area as weight.
There is a library here and you can use its .random() method to get a random coordinate. Then you can use GeoNames WebServices to determine whether it is on land or not. They have a list of webservices and you'll just have to use the right one. GeoNames is free and reliable.
Go to http://wiki.openstreetmap.org/
and try to use the API: http://wiki.openstreetmap.org/wiki/Databases_and_data_access_APIs
I guess you could use a world map, define a few points on it to delimit most of the water bodies as you say, and use a polygon.contains method to validate the coordinates.
A faster algorithm would be to use this map, take some random point and check the colour beneath: if it's blue, then it's water. Once you have the pixel coordinates, you convert them to lat/long.
You might also do the blue/green thing and then store all the green points for later lookup. This has the benefit of being refinable step by step: as you figure out a better way to make your list of points, you can just point your random grabber at a more and more accurate group of points.
Maybe a service provider already has an answer to your question, e.g. https://www.google.com/enterprise/marketplace/viewListing?productListingId=3030+17310026046429031496&pli=1
Or the Elevation API? http://code.google.com/apis/maps/documentation/elevation/ - above sea level or below? (No Dutch points for you!)
Generating is easy; the problem is that the points should not be on water. I would import OpenStreetMap data, for example from http://ftp.ecki-netz.de/osm/, into a database (it has a very simple data structure). I would suggest PostgreSQL, which comes with some geometric functions (http://www.postgresql.org/docs/8.2/static/functions-geometry.html). For that you have to save the shapes in a polygon column; then you can check with the && operator whether a point falls within a water polygon. For the attributes of an OpenStreetMap way entry you should have a look at http://wiki.openstreetmap.org/wiki/Category:En:Keys
Supplementary to what bsimic said about digging into GeoNames' web services, here is a shortcut:
they have a dedicated web service for requesting an ocean name.
(I am aware of the OP's constraint of not using public web services due to the number of requests. Nevertheless, I stumbled upon this with the same basic question and consider it helpful.)
Go to http://www.geonames.org/export/web-services.html#astergdem and have a look at "Ocean / reverse geocoding". It is available as XML and JSON. Create a free user account to prevent daily limits on the demo account.
Request example on ocean area (Baltic Sea, JSON-URL):
http://api.geonames.org/oceanJSON?lat=54.049889&lng=10.851388&username=demo
results in
{
  "ocean": {
    "distance": "0",
    "name": "Baltic Sea"
  }
}
while some coordinates on land result in
{
  "status": {
    "message": "we are afraid we could not find an ocean for latitude and longitude :53.0,9.0",
    "value": 15
  }
}
Do the random points have to be uniformly distributed all over the world? If you could settle for a seemingly uniform distribution, you can do this:
Open your favorite map service, draw a rectangle inside the United States, Russia, China, Western Europe and definitely the northern part of Africa - making sure there are no big lakes or Caspian seas inside the rectangles. Take the corner coordinates of each rectangle, and then select coordinates at random inside those rectangles.
You are guaranteed none of these points will be on any sea or lake. You might find an occasional river, but I'm not sure how many geoservices are going to be accurate enough for that anyway.
This is an extremely interesting question, from both a theoretical and practical perspective. The most suitable solution will largely depend on your exact requirements. Do you need to account for every body of water, or just the major seas and oceans? How critical are accuracy and correctness; Will identifying sea as land or vice-versa be a catastrophic failure?
I think machine learning techniques would be an excellent solution to this problem, provided that you don't mind the (hopefully small) probability that a point of water is incorrectly classified as land. If that's not an issue, then this approach should have a number of advantages against other techniques.
Using a bitmap is a nice solution, simple and elegant. It can be produced to a specified accuracy and the classification is guaranteed to be correct (Or a least as correct as you made the bitmap). But its practicality is dependent on how accurate you need the solution to be. You mention that you want the coordinate accuracy to 5 decimal places (which would be equivalent to mapping the whole surface of the planet to about the nearest metre). Using 1 bit per element, the bitmap would weigh in at ~73.6 terabytes!
We don't need to store all of this data, though; we only need to know where the coastlines are. Just by knowing where a point is in relation to the coast, we can determine whether it is on land or sea. As a rough estimate, the CIA World Factbook reports that there are 22,498 km of coastline on Earth. If we were to store coordinates for every metre of coastline, using a 32-bit word for each latitude and longitude, this would take less than 1.35GB to store. It's still a lot if this is for a trivial application, but a few orders of magnitude less than using a bitmap. If having such a high degree of accuracy isn't necessary, though, these numbers would drop considerably. Reducing the mapping to only the nearest kilometre would make the bitmap just ~75MB, and the coordinates for the world's coastline could fit on a floppy disk.
What I propose is to use a clustering algorithm to decide whether a point is on land or not. We would first need a suitably large number of coordinates that we already know to be on either land or sea. Existing GIS databases would be suitable for this. Then we can analyse the points to determine clusters of land and sea. The decision boundary between the clusters should fall on the coastlines, and all points not determining the decision boundary can be removed. This process can be iterated to give a progressively more accurate boundary.
Only the points determining the decision boundary/the coastline need to be stored, and by using a simple distance metric we can quickly and easily decide if a set of coordinates are on land or sea. A large amount of resources would be required to train the system, but once complete the classifier would require very little space or time.
Assuming Atlantis isn't in the database, you could randomly select cities. This also provides a more realistic distribution of points if you intend to mimic human activity:
https://simplemaps.com/data/world-cities
There's only 7,300 cities in the free version.

Schedule notation (time ranges)

I have some code which needs to do things based on a schedule: e.g. during business hours do X, after hours do Y. The schedule will be defined by our customers, so I need a notation which can be written by people and parsed by my program. I'm thinking of something like:
12/25:0730-1730 Do Y
[Mo-Fr]:0730-1730 Do X
[Mo-Tu]:1730-0730 Do Y
Fr:1730-Mo:0730 Do Y
There will definitely be weekly variation. Yearly variation (holidays) seems likely. I would like a notation that is efficient and flexible.
I also need java code which will parse the time ranges and tell me which range a given date time is in.
I've searched the web and found nothing. Closest is CRON notation, which is not quite what I need.
Anyone know of an existing notation definition and implementation?
Thanks,
For Java, Joda-Time (with the Scala wrapper scala-time) is a powerful library for time calculations. You could also look at google-rfc-2445, which does something like what you are asking for (?).
If you are looking for a Java scheduler http://www.quartz-scheduler.org/ is a good option.
I don't think you will find something out of the box. In such cases it's better to do the implementation yourself and have full control of the code. You can use ANTLR to create a parser.
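As a rough illustration of the hand-rolled route, here is a minimal sketch that parses just one rule form from the question, [Mo-Fr]:0730-1730 <action>, and checks whether a given date-time falls inside it. It is nowhere near a full grammar (no dates, no overnight or week-wrapping ranges), which is exactly where an ANTLR grammar would pay off:

import java.time.DayOfWeek;
import java.time.LocalDateTime;
import java.time.LocalTime;

// Parses rules like "[Mo-Fr]:0730-1730 Do X" and tests a date-time against them.
public class ScheduleRule {

    private final DayOfWeek from, to;
    private final LocalTime start, end;
    final String action;

    private ScheduleRule(DayOfWeek from, DayOfWeek to, LocalTime start, LocalTime end, String action) {
        this.from = from; this.to = to; this.start = start; this.end = end; this.action = action;
    }

    // e.g. "[Mo-Fr]:0730-1730 Do X"
    static ScheduleRule parse(String line) {
        String days = line.substring(line.indexOf('[') + 1, line.indexOf(']'));
        String rest = line.substring(line.indexOf(']') + 2); // skip "]:"
        String times = rest.substring(0, 9);                 // "0730-1730"
        String action = rest.substring(10);
        return new ScheduleRule(
                day(days.split("-")[0]), day(days.split("-")[1]),
                time(times.split("-")[0]), time(times.split("-")[1]),
                action);
    }

    private static DayOfWeek day(String abbrev) {
        switch (abbrev) {
            case "Mo": return DayOfWeek.MONDAY;
            case "Tu": return DayOfWeek.TUESDAY;
            case "We": return DayOfWeek.WEDNESDAY;
            case "Th": return DayOfWeek.THURSDAY;
            case "Fr": return DayOfWeek.FRIDAY;
            case "Sa": return DayOfWeek.SATURDAY;
            default:   return DayOfWeek.SUNDAY;
        }
    }

    private static LocalTime time(String hhmm) {
        return LocalTime.of(Integer.parseInt(hhmm.substring(0, 2)), Integer.parseInt(hhmm.substring(2)));
    }

    boolean matches(LocalDateTime t) {
        boolean dayOk = t.getDayOfWeek().getValue() >= from.getValue()
                && t.getDayOfWeek().getValue() <= to.getValue();
        boolean timeOk = !t.toLocalTime().isBefore(start) && !t.toLocalTime().isAfter(end);
        return dayOk && timeOk;
    }

    public static void main(String[] args) {
        ScheduleRule rule = ScheduleRule.parse("[Mo-Fr]:0730-1730 Do X");
        System.out.println(rule.action + " applies now? " + rule.matches(LocalDateTime.now()));
    }
}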
Add a notion of priority to your syntax. Then it will be easier to schedule something:
01.01.2011-31.01.2011 prio 1 do-idle-stuff
[Mo-Fr] prio 2 do-work
[Sa-Su] prio 2 weekend
10.02.2011-17.02.2011 prio 3 go-on-holidays

Estimating a probability given other probabilities from a prior

I have a bunch of data coming in (calls to an automated callcenter) about whether or not a person buys a particular product, 1 for buy, 0 for not buy.
I want to use this data to create an estimated probability that a person will buy a particular product, but the problem is that I may need to do it with relatively little historical data about how many people bought/didn't buy that product.
A friend recommended that with Bayesian probability you can "help" your probability estimate by coming up with a "prior probability distribution", essentially this is information about what you expect to see, prior to taking into account the actual data.
So what I'd like to do is create a method that has something like this signature (Java):
double estimateProbability(double[] priorProbabilities, int buyCount, int noBuyCount);
priorProbabilities is an array of probabilities I've seen for previous products, which this method would use to create a prior distribution for this probability. buyCount and noBuyCount are the actual data specific to this product, from which I want to estimate the probability of the user buying, given the data and the prior. This is returned from the method as a double.
I don't need a mathematically perfect solution, just something that will do better than a uniform or flat prior (ie. probability = buyCount / (buyCount+noBuyCount)). Since I'm far more familiar with source code than mathematical notation, I'd appreciate it if people could use code in their explanation.
Here's the Bayesian computation and one example/test:
def estimateProbability(priorProbs, buyCount, noBuyCount):
    # first, estimate the prob that the actual buy/nobuy counts would be observed
    # given each of the priors (times a constant that's the same in each case and
    # not worth the effort of computing;-)
    condProbs = [p**buyCount * (1.0-p)**noBuyCount for p in priorProbs]
    # the normalization factor for the above-mentioned neglected constant
    # can most easily be computed just once
    normalize = 1.0 / sum(condProbs)
    # so here's the probability for each of the priors (starting from a uniform
    # metaprior)
    priorMeta = [normalize * cp for cp in condProbs]
    # so the result is the sum of prior probs weighed by prior metaprobs
    return sum(pm * pp for pm, pp in zip(priorMeta, priorProbs))

def example(numProspects=4):
    # the a priori prob of buying was either 0.3 or 0.7, how does it change
    # depending on how 4 prospects bought or didn't?
    for bought in range(0, numProspects+1):
        result = estimateProbability([0.3, 0.7], bought, numProspects-bought)
        print 'b=%d, p=%.2f' % (bought, result)
example()
output is:
b=0, p=0.31
b=1, p=0.36
b=2, p=0.50
b=3, p=0.64
b=4, p=0.69
which agrees with my by-hand computation for this simple case. Note that the probability of buying, by definition, will always be between the lowest and the highest among the set of a priori probabilities; if that's not what you want, you might want to introduce a little fudge by introducing two "pseudo-products", one that nobody will ever buy (p=0.0) and one that anybody will always buy (p=1.0) -- this gives more weight to actual observations, scarce as they may be, and less to statistics about past products. If we do that here, we get:
b=0, p=0.06
b=1, p=0.36
b=2, p=0.50
b=3, p=0.64
b=4, p=0.94
Intermediate levels of fudging (to account for the unlikely but not impossible chance that this new product may be worse than any one ever previously sold, or better than any of them) can easily be envisioned (give lower weight to the artificial 0.0 and 1.0 probabilities, by adding a vector priorWeights to estimateProbability's arguments).
This kind of thing is a substantial part of what I do all day, now that I work developing applications in Business Intelligence, but I just can't get enough of it...!-)
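Since the question asked for a Java signature, here is a direct Java port of the computation above (same discrete-prior idea, no extra libraries). For the prior [0.3, 0.7] and four prospects it reproduces the b=0..4 numbers shown above:

// Treats each previous product's probability as one hypothesis, starts from a
// uniform meta-prior over those hypotheses, and weighs them by how well they
// explain the observed buy/no-buy counts.
public class ProbabilityEstimator {

    static double estimateProbability(double[] priorProbabilities, int buyCount, int noBuyCount) {
        double[] condProbs = new double[priorProbabilities.length];
        double sum = 0.0;
        for (int i = 0; i < priorProbabilities.length; i++) {
            double p = priorProbabilities[i];
            // likelihood of the observed counts under hypothesis p (up to a constant)
            condProbs[i] = Math.pow(p, buyCount) * Math.pow(1.0 - p, noBuyCount);
            sum += condProbs[i];
        }
        double result = 0.0;
        for (int i = 0; i < priorProbabilities.length; i++) {
            // posterior weight of hypothesis i times its buy probability
            result += (condProbs[i] / sum) * priorProbabilities[i];
        }
        return result;
    }

    public static void main(String[] args) {
        for (int bought = 0; bought <= 4; bought++) {
            System.out.printf("b=%d, p=%.2f%n", bought,
                    estimateProbability(new double[] {0.3, 0.7}, bought, 4 - bought));
        }
    }
}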
A really simple way of doing this without any difficult math is to increase buyCount and noBuyCount artificially by adding virtual customers that either bought or didn't buy the product. You can tune how much you believe in each particular prior probability in terms of how many virtual customers you think it is worth.
In pseudocode:
def estimateProbability(priorProbs, buyCount, noBuyCount, faithInPrior=None):
    if faithInPrior is None: faithInPrior = [10 for x in buyCount]
    adjustedBuyCount = [b + p*f for b, p, f in
                        zip(buyCount, priorProbs, faithInPrior)]
    adjustedNoBuyCount = [n + (1-p)*f for n, p, f in
                          zip(noBuyCount, priorProbs, faithInPrior)]
    return [b/(b+n) for b, n in zip(adjustedBuyCount, adjustedNoBuyCount)]
Sounds like what you're trying to do is Association Rule Learning. I don't have time right now to provide you with any code, but I will point you in the direction of WEKA which is a fantastic open source data mining toolkit for Java. You should find plenty of interesting things there that will help you solve your problem.
As I see it, the best you could do is use the uniform distribution, unless you have some clue regarding the distribution. Or are you talking about making a relationship between this product and products previously bought by the same person, in the Amazon fashion of "people who buy this product also buy..."?
