Enumerate Possible Matches of Regular Expression in Java

I want to enumerate all the possible values of a finite regular expression in Java for testing purposes.
For some context, I have a regular expression that I'm using to match allowable color values in words. Here's a shortened version of it as an example:
(white|black)|((light|dark) )?(red|green|blue|gray)
I wanted to create a unit test that would enumerate all these values and pass each of them to my utility class, which produces a Color object from them. That way, if I change the regular expression, my unit tests will fail if an error occurs (i.e. a new color value is unsupported).
I know enumeration is possible, of course (see this question), but is there an existing library for Java which will enumerate all the possible matches for a regex?
Edit: I've implemented a library that does this. See my answer below for links.

You are right, I didn't find such a tool online either, but you can try Xeger from Google.
It can create a random matching string from a regexp, and with some code tweaking it might do what you want.
Generating a random match:
String regex = "[ab]{4,6}c";
Xeger generator = new Xeger(regex);
String result = generator.generate();
assert result.matches(regex);
The Xeger code is very simple; it consists of 2 files which contain 5 methods between them.
It uses dk.brics.automaton to convert the regex to an automaton, then goes over the automaton's transitions, making a random choice at every node.
The main function is generate:
private void generate(StringBuilder builder, State state) {
    List<Transition> transitions = state.getSortedTransitions(true);
    if (transitions.size() == 0) {
        assert state.isAccept();
        return;
    }
    int nroptions = state.isAccept() ? transitions.size() : transitions.size() - 1;
    int option = XegerUtils.getRandomInt(0, nroptions, random);
    if (state.isAccept() && option == 0) { // 0 is considered stop
        return;
    }
    // Moving on to next transition
    Transition transition = transitions.get(option - (state.isAccept() ? 1 : 0));
    appendChoice(builder, transition);
    generate(builder, transition.getDest());
}
You can see that in order to change it so you get all possible matches, you need to iterate over all possible combinations at every node (like incrementing a multi-digit counter).
You will need a hash to prevent loops, but that shouldn't take more than 5 seconds to code.
I would also suggest first checking that the regex is actually finite, by checking that it doesn't have *, + and other symbols that make this enumeration impossible (just to make this a complete tool for reuse).
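A minimal sketch of that idea (not Xeger's actual code), assuming dk.brics.automaton is on the classpath; it walks every transition recursively instead of choosing one at random, and uses Automaton.isFinite() as the finiteness check:
import dk.brics.automaton.Automaton;
import dk.brics.automaton.RegExp;
import dk.brics.automaton.State;
import dk.brics.automaton.Transition;
import java.util.ArrayList;
import java.util.List;

public class FiniteRegexEnumerator {

    public static List<String> enumerate(String regex) {
        Automaton automaton = new RegExp(regex).toAutomaton();
        if (!automaton.isFinite()) {
            throw new IllegalArgumentException("Regex has an infinite language: " + regex);
        }
        automaton.determinize(); // one accepted string per path
        List<String> results = new ArrayList<>();
        collect(automaton.getInitialState(), new StringBuilder(), results);
        return results;
    }

    private static void collect(State state, StringBuilder prefix, List<String> results) {
        if (state.isAccept()) {
            results.add(prefix.toString()); // every accepting prefix is a complete match
        }
        for (Transition t : state.getTransitions()) {
            for (char c = t.getMin(); c <= t.getMax(); c++) {
                prefix.append(c);
                collect(t.getDest(), prefix, results);
                prefix.deleteCharAt(prefix.length() - 1);
                if (c == Character.MAX_VALUE) break; // avoid char overflow
            }
        }
    }
}
For the color regex in the question this yields the same 14 strings reported in the answer below.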

For future visitors coming to this question: I wrote a library that uses dk.brics.automaton with a similar approach to Xeger from the accepted answer, and published it. You can find it:
On GitHub
On the project site
In Maven Central
To add it as a dependency:
Maven
<dependency>
    <groupId>com.navigamez</groupId>
    <artifactId>greex</artifactId>
    <version>1.0</version>
</dependency>
Gradle
compile 'com.navigamez:greex:1.0'
Sample Code
Using this question as an example:
GreexGenerator generator = new GreexGenerator("(white|black)|((light|dark) )?(red|green|blue|gray)");
List<String> matches = generator.generateAll();
System.out.println(matches.size()); // "14"


Build code by using Concatenation?

Is there a way I can create/build code by using concatenation in Android Studio/Eclipse?
In other words, I have 2 sets of strings, one for each country I am dealing with: ZA and KE. They have 2 different EULAs.
So I would like to pull the string related to the respective country.
String message = mContext.getString(R.string.eula_string_za);
Above is an example of the output code. Is there some way I can go about "creating" that based on if statements?
String str = "mContext.getString(R.string.eula_string_";
if (something = "ZA") {
str += "za);";
} else {
str += "ke);";
}
So if the country selected is ZA, then the output code should be
mContext.getString(R.string.eula_string_za);
and if it's KE, it should be
mContext.getString(R.string.eula_string_ke);
and then the result will pull the correct string from strings.xml?
Java is a compiled language, not an interpreted one; you can't build up code as a string and execute it like you would in an interpreted language.
The best way to manage different languages in Android is to use a strings.xml file for each language.
Take a look at this tutorial, it will help you a lot:
Supporting different languages in Android
If you want to go this route, you could use reflection; have a look at Class.getField(…).
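A hedged sketch of that reflection route (mContext and the resource names come from the question; the lookup-by-name and the fallback are illustrative):
String suffix = "ZA".equals(something) ? "za" : "ke";
int resId;
try {
    // R.string fields are public static int, so we can read one by name
    java.lang.reflect.Field field = R.string.class.getField("eula_string_" + suffix);
    resId = field.getInt(null);
} catch (NoSuchFieldException | IllegalAccessException e) {
    resId = R.string.eula_string_za; // fall back to a known resource
}
String message = mContext.getString(resId);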
Instead of first building a code string with an if statement, you can use the same if statement to find the correct string directly:
String str;
if (something.equals("ZA")) {
    str = mContext.getString(R.string.eula_string_za);
} else {
    str = mContext.getString(R.string.eula_string_ke);
}
Note that your condition something = "ZA" does not do what you think it does: it assigns the string "ZA" to something and the whole expression evaluates to "ZA", which is not a boolean, so this would not even compile. The syntactically correct comparison would be something == "ZA", but even this does not work in the general case; you need to use String.equals(…). Some even argue you should use it the other way around (i.e. "ZA".equals(something)) to avoid a NullPointerException…
Another possibility would be to first build a Map from country to the corresponding string ID for all the EULAs you have, and then ask the Map for the correct one.
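A quick sketch of that Map approach (using java.util.Map; the map wiring and the countryCode variable are illustrative, the resource names come from the question):
Map<String, Integer> eulaIds = new HashMap<>();
eulaIds.put("ZA", R.string.eula_string_za);
eulaIds.put("KE", R.string.eula_string_ke);

Integer resId = eulaIds.get(countryCode); // countryCode is e.g. "ZA"
String message = mContext.getString(resId != null ? resId : R.string.eula_string_za);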
But probably the cleanest solution would be to use Android's built-in mechanism, as hkN suggests.

Apache Cayenne - I cannot find code defining the constants for Token.kind field

I'm using Cayenne to parse SQL conditions, through org.apache.cayenne.exp.parser.ExpressionParser, which produces a series of org.apache.cayenne.exp.parser.Tokens, and I want to determine the type of each Token (like identifier, equal sign, number, string etc.).
The token type is definitely identified by the ExpressionParser, and it seems to me that it is stored in the int field Token.kind. The values that this field shows in my parsing tests are definitely consistent (for ex. = is always 5, literal strings are always 42, and operators are always 2 etc.).
My problem is just that I cannot find the Java class containing the constants to compare Token.kind values with.
The Javadoc for field Token.kind says:
An integer that describes the kind of this token. This numbering
system is determined by JavaCCParser, and a table of these numbers is
stored in the file ...Constants.java.
It does not specify the full name of the file, so I downloaded JavaCCParser and checked several *Constants.* files found in javacc-5.0src.zip, javacc-6.0.zip, the two javacc.jar files contained in those two zips, and cayenne-3.0.2-src.tar.gz.
None of the classes I found there seems to have constants that consistently match the values I see in my tests.
The closest I was able to get was the class org.apache.cayenne.exp.parser.ExpressionParserConstants, which for example contains int PROPERTY_PATH = 34 and int SINGLE_QUOTED_STRING = 42, which definitely match the actual tokens of my test expressions; but other tokens have no corresponding constant in that class, for example the = sign (kind = 5) and the and operator (kind = 2).
So my question is whether anyone knows in which Java class those constants are defined.
First I should mention that ExpressionParser is designed to parse a very specific format of Cayenne expressions. It certainly cannot be used to parse SQL, so you might be looking in the wrong direction.
The parser itself is generated by JavaCC based on this grammar file. Tokens for the parser are formally defined at the bottom of that file, and are very specific to the task at hand.
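If you do compare kinds in code, the JavaCC convention is that the named values live in the generated *Constants interface next to the parser. A hedged sketch using only the two constants the question already confirmed (whether the other observed kind values, such as 5 for =, have named constants there is exactly what is in doubt):
import org.apache.cayenne.exp.parser.ExpressionParserConstants;
import org.apache.cayenne.exp.parser.Token;

class TokenKinds {
    // kind is the public int field on the JavaCC-generated Token class
    static String describe(Token token) {
        if (token.kind == ExpressionParserConstants.SINGLE_QUOTED_STRING) { // 42 in the question's tests
            return "string literal";
        }
        if (token.kind == ExpressionParserConstants.PROPERTY_PATH) { // 34 in the question's tests
            return "property path";
        }
        return "other (kind=" + token.kind + ")";
    }
}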

How can I build a model to distinguish tweets about Apple (Inc.) from tweets about apple (fruit)?

See below for 50 tweets about "apple." I have hand labeled the positive matches about Apple Inc. They are marked as 1 below.
Here are a couple of lines:
1|“#chrisgilmer: Apple targets big business with new iOS 7 features http://bit.ly/15F9JeF ”. Finally.. A corp iTunes account!
0|“#Zach_Paull: When did green skittles change from lime to green apple? #notafan” #Skittles
1|#dtfcdvEric: #MaroneyFan11 apple inc is searching for people to help and tryout all their upcoming tablet within our own net page No.
0|#STFUTimothy have you tried apple pie shine?
1|#SuryaRay #India Microsoft to bring Xbox and PC games to Apple, Android phones: Report: Microsoft Corp... http://dlvr.it/3YvbQx #SuryaRay
Here is the total data set: http://pastebin.com/eJuEb4eB
I need to build a model that classifies "Apple" (Inc.) from the rest.
I'm not looking for a general overview of machine learning, rather I'm looking for actual model in code (Python preferred).
What you are looking for is called Named Entity Recognition. It is a statistical technique that (most commonly) uses Conditional Random Fields to find named entities, based on having been trained to learn things about named entities.
Essentially, it looks at the content and context of the word (looking back and forward a few words) to estimate the probability that the word is a named entity.
Good software can look at other features of words, such as their length or shape (like "Vcv" if it starts with "vowel-consonant-vowel").
A very good library (GPL) is Stanford's NER.
Here's the demo: http://nlp.stanford.edu:8080/ner/
Some sample text to try:
I was eating an apple over at Apple headquarters and I thought about
Apple Martin, the daughter of the Coldplay guy
(the 3class and 4class classifiers get it right)
I would do it as follows:
Split the sentence into words, normalise them, and build a dictionary.
For each word, store how many times it occurred in tweets about the company and how many times it appeared in tweets about the fruit - these tweets must be confirmed by a human.
When a new tweet comes in, find every word of the tweet in the dictionary and calculate a weighted score - words that are used frequently in relation to the company would get a high company score, and vice versa; words used rarely, or used with both the company and the fruit, would not have much of a score (see the sketch after this list).
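A rough sketch of that dictionary-and-score idea (in Java for illustration; the class, the method names and the +1 smoothing are all made up here, not part of the answer above):
import java.util.HashMap;
import java.util.Map;

public class AppleScorer {
    private final Map<String, int[]> counts = new HashMap<>(); // word -> {companyCount, fruitCount}

    // Feed in human-labelled tweets to build the dictionary
    public void train(String tweet, boolean aboutCompany) {
        for (String word : tweet.toLowerCase().split("\\s+")) {
            int[] c = counts.computeIfAbsent(word, k -> new int[2]);
            c[aboutCompany ? 0 : 1]++;
        }
    }

    // Positive score leans "company", negative leans "fruit"; unseen words carry no weight.
    public double score(String tweet) {
        double score = 0;
        for (String word : tweet.toLowerCase().split("\\s+")) {
            int[] c = counts.get(word);
            if (c == null) continue;
            score += Math.log((c[0] + 1.0) / (c[1] + 1.0)); // smoothed log-odds
        }
        return score;
    }
}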
I have a semi-working system that solves this problem, open sourced using scikit-learn, with a series of blog posts describing what I'm doing. The problem I'm tackling is word-sense disambiguation (choosing one of multiple word sense options), which is not the same as Named Entity Recognition. My basic approach is somewhat-competitive with existing solutions and (crucially) is customisable.
There are some existing commercial NER tools (OpenCalais, DBPedia Spotlight, and AlchemyAPI) that might give you a good enough commercial result - do try these first!
I used some of these for a client project (I consult using NLP/ML in London), but I wasn't happy with their recall (precision and recall). Basically they can be precise (when they say "This is Apple Inc" they're typically correct), but with low recall (they rarely say "This is Apple Inc" even though to a human the tweet is obviously about Apple Inc). I figured it'd be an intellectually interesting exercise to build an open source version tailored to tweets. Here's the current code:
https://github.com/ianozsvald/social_media_brand_disambiguator
I'll note - I'm not trying to solve the generalised word-sense disambiguation problem with this approach, just brand disambiguation (companies, people, etc.) when you already have their name. That's why I believe that this straightforward approach will work.
I started this six weeks ago, and it is written in Python 2.7 using scikit-learn. It uses a very basic approach. I vectorize using a binary count vectorizer (I only count whether a word appears, not how many times) with 1-3 n-grams. I don't scale with TF-IDF (TF-IDF is good when you have a variable document length; for me the tweets are only one or two sentences, and my testing results didn't show improvement with TF-IDF).
I use the basic tokenizer, which is very basic but surprisingly useful. It ignores @ and # (so you lose some context) and of course doesn't expand a URL. I then train using logistic regression, and it seems that this problem is somewhat linearly separable (lots of terms for one class don't exist for the other). Currently I'm avoiding any stemming/cleaning (I'm trying The Simplest Possible Thing That Might Work).
The code has a full README, and you should be able to ingest your tweets relatively easily and then follow my suggestions for testing.
This works for Apple as people don't eat or drink Apple computers, nor do we type or play with fruit, so the words are easily split to one category or the other. This condition may not hold when considering something like #definance for the TV show (where people also use #definance in relation to the Arab Spring, cricket matches, exam revision and a music band). Cleverer approaches may well be required here.
I have a series of blog posts describing this project including a one-hour presentation I gave at the BrightonPython usergroup (which turned into a shorter presentation for 140 people at DataScienceLondon).
If you use something like LogisticRegression (where you get a probability for each classification) you can pick only the confident classifications, and that way you can force high precision by trading against recall (so you get correct results, but fewer of them). You'll have to tune this to your system.
Here's a possible algorithmic approach using scikit-learn:
Use a Binary CountVectorizer (I don't think term-counts in short messages add much information as most words occur only once)
Start with a Decision Tree classifier. It'll have explainable performance (see Overfitting with a Decision Tree for an example).
Move to logistic regression
Investigate the errors generated by the classifiers (read the DecisionTree's exported output or look at the coefficients in LogisticRegression, work the mis-classified tweets back through the Vectorizer to see what the underlying Bag of Words representation looks like - there will be fewer tokens there than you started with in the raw tweet - are there enough for a classification?)
Look at my example code in https://github.com/ianozsvald/social_media_brand_disambiguator/blob/master/learn1.py for a worked version of this approach
Things to consider:
You need a larger dataset. I'm using 2000 labelled tweets (it took me five hours), and as a minimum you want a balanced set with >100 per class (see the overfitting note below)
Improve the tokeniser (very easy with scikit-learn) to keep @ and # in tokens, and maybe add a capitalised-brand detector (as user @user2425429 notes)
Consider a non-linear classifier (like @oiez's suggestion above) when things get harder. Personally I found LinearSVC to do worse than logistic regression (but that may be due to the high-dimensional feature space that I've yet to reduce).
A tweet-specific part-of-speech tagger (in my humble opinion not Stanford's, as @Neil suggests - it performs poorly on poor Twitter grammar in my experience)
Once you have lots of tokens you'll probably want to do some dimensionality reduction (I've not tried this yet - see my blog post on LogisticRegression l1 l2 penalisation)
Re. overfitting. In my dataset with 2000 items I have a 10 minute snapshot from Twitter of 'apple' tweets. About 2/3 of the tweets are for Apple Inc, 1/3 for other-apple-uses. I pull out a balanced subset (about 584 rows I think) of each class and do five-fold cross validation for training.
Since I only have a 10 minute time window I have many tweets about the same topic, and this is probably why my classifier does so well relative to existing tools - it will have overfit to the training features without generalising well (whereas the existing commercial tools perform worse on this snapshot, but more reliably across a wider set of data). I'll be expanding my time window to test this as a subsequent piece of work.
You can do the following:
Make a dict of words containing their count of occurrence in fruit and company related tweets. This can be achieved by feeding it some sample tweets whose inclination we know.
Using enough previous data, we can find out the probability of a word occurring in tweet about apple inc.
Multiply individual probabilities of words to get the probability of the whole tweet.
A simplified example:
p_f = Probability of fruit tweets.
p_w_f = Probability of a word occurring in a fruit tweet.
p_t_f = Combined probability of all words in the tweet occurring in a fruit tweet
= p_w1_f * p_w2_f * ...
p_f_t = Probability of fruit given a particular tweet.
p_c, p_w_c, p_t_c, p_c_t are respective values for company.
A Laplacian smoother of value 1 is added to eliminate the problem of zero frequency for new words which are not in our database.
old_tweets = {'apple pie sweet potatoe cake baby https://vine.co/v/hzBaWVA3IE3': '0', ...}
known_words = {}
total_company_tweets = total_fruit_tweets = total_company_words = total_fruit_words = 0

for tweet in old_tweets:
    company = old_tweets[tweet]
    for word in tweet.lower().split(" "):
        if word not in known_words:
            known_words[word] = {"company": 0, "fruit": 0}
        if company == "1":
            known_words[word]["company"] += 1
            total_company_words += 1
        else:
            known_words[word]["fruit"] += 1
            total_fruit_words += 1
    if company == "1":
        total_company_tweets += 1
    else:
        total_fruit_tweets += 1

total_tweets = len(old_tweets)

def predict_tweet(new_tweet, K=1):
    p_f = (total_fruit_tweets + K) / (total_tweets + K * 2)
    p_c = (total_company_tweets + K) / (total_tweets + K * 2)
    new_words = new_tweet.lower().split(" ")
    p_t_f = p_t_c = 1
    for word in new_words:
        try:
            wordFound = known_words[word]
        except KeyError:
            wordFound = {'fruit': 0, 'company': 0}
        p_w_f = (wordFound['fruit'] + K) / (total_fruit_words + K * len(known_words))
        p_w_c = (wordFound['company'] + K) / (total_company_words + K * len(known_words))
        p_t_f *= p_w_f
        p_t_c *= p_w_c
    # Applying Bayes' rule
    p_f_t = p_f * p_t_f / (p_t_f * p_f + p_t_c * p_c)
    p_c_t = p_c * p_t_c / (p_t_f * p_f + p_t_c * p_c)
    if p_c_t > p_f_t:
        return "Company"
    return "Fruit"
If you don't have an issue using an outside library, I'd recommend scikit-learn since it can probably do this better & faster than anything you could code by yourself. I'd just do something like this:
Build your corpus. I did the list comprehensions for clarity, but depending on how your data is stored you might need to do different things:
def corpus_builder(apple_inc_tweets, apple_fruit_tweets):
    corpus = [tweet for tweet in apple_inc_tweets] + [tweet for tweet in apple_fruit_tweets]
    labels = [1 for x in xrange(len(apple_inc_tweets))] + [0 for x in xrange(len(apple_fruit_tweets))]
    return (corpus, labels)
The important thing is you end up with two lists that look like this:
([['apple inc tweet i love ios and iphones'], ['apple iphones are great'], ['apple fruit tweet i love pie'], ['apple pie is great']], [1, 1, 0, 0])
The [1, 1, 0, 0] represent the positive and negative labels.
Then, you create a Pipeline! Pipeline is a scikit-learn class that makes it easy to chain text processing steps together so you only have to call one object when training/predicting:
def train(corpus, labels):
    pipe = Pipeline([('vect', CountVectorizer(ngram_range=(1, 3), stop_words='english')),
                     ('tfidf', TfidfTransformer(norm='l2')),
                     ('clf', LinearSVC()),])
    pipe.fit_transform(corpus, labels)
    return pipe
Inside the Pipeline there are three processing steps. The CountVectorizer tokenizes the words, splits them, counts them, and transforms the data into a sparse matrix. The TfidfTransformer is optional, and you might want to remove it depending on the accuracy rating (doing cross validation tests and a grid search for the best parameters is a bit involved, so I won't get into it here). The LinearSVC is a standard text classification algorithm.
Finally, you predict the category of tweets:
def predict(pipe, tweet):
    prediction = pipe.predict([tweet])
    return prediction
Again, the tweet needs to be in a list, so I assumed it was entering the function as a string.
Put all those into a class or whatever, and you're done. At least, with this very basic example.
I didn't test this code so it might not work if you just copy-paste, but if you want to use scikit-learn it should give you an idea of where to start.
EDIT: tried to explain the steps in more detail.
Using a decision tree seems to work quite well for this problem. At least it produces a higher accuracy than a naive bayes classifier with my chosen features.
If you want to play around with some possibilities, you can use the following code, which requires nltk to be installed. The nltk book is also freely available online, so you might want to read a bit about how all of this actually works: http://nltk.googlecode.com/svn/trunk/doc/book/ch06.html
#coding: utf-8
import nltk
import random
import re

def get_split_sets():
    structured_dataset = get_dataset()
    train_set = set(random.sample(structured_dataset, int(len(structured_dataset) * 0.7)))
    test_set = [x for x in structured_dataset if x not in train_set]
    train_set = [(tweet_features(x[1]), x[0]) for x in train_set]
    test_set = [(tweet_features(x[1]), x[0]) for x in test_set]
    return (train_set, test_set)

def check_accurracy(times=5):
    s = 0
    for _ in xrange(times):
        train_set, test_set = get_split_sets()
        c = nltk.classify.DecisionTreeClassifier.train(train_set)
        # Uncomment to use a naive bayes classifier instead
        # c = nltk.classify.NaiveBayesClassifier.train(train_set)
        s += nltk.classify.accuracy(c, test_set)
    return s / times

def remove_urls(tweet):
    tweet = re.sub(r'http:\/\/[^ ]+', "", tweet)
    tweet = re.sub(r'pic.twitter.com/[^ ]+', "", tweet)
    return tweet

def tweet_features(tweet):
    words = [x for x in nltk.tokenize.wordpunct_tokenize(remove_urls(tweet.lower())) if x.isalpha()]
    features = dict()
    for bigram in nltk.bigrams(words):
        features["hasBigram(%s)" % ",".join(bigram)] = True
    for trigram in nltk.trigrams(words):
        features["hasTrigram(%s)" % ",".join(trigram)] = True
    return features

def get_dataset():
    dataset = """copy dataset in here
"""
    structured_dataset = [('fruit' if x[0] == '0' else 'company', x[2:]) for x in dataset.splitlines()]
    return structured_dataset

if __name__ == '__main__':
    print check_accurracy()
Thank you for the comments thus far. Here is a working solution I prepared with PHP. I'd still be interested in hearing from others about a more algorithmic approach to this same problem.
<?php
// Confusion Matrix Init
$tp = 0;
$fp = 0;
$fn = 0;
$tn = 0;
$arrFP = array();
$arrFN = array();
// Load All Tweets to string
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, 'http://pastebin.com/raw.php?i=m6pP8ctM');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$strCorpus = curl_exec($ch);
curl_close($ch);
// Load Tweets as Array
$arrCorpus = explode("\n", $strCorpus);
foreach ($arrCorpus as $k => $v) {
// init
$blnActualClass = substr($v,0,1);
$strTweet = trim(substr($v,2));
// Score Tweet
$intScore = score($strTweet);
// Build Confusion Matrix and Log False Positives & Negatives for Review
if ($intScore > 0) {
if ($blnActualClass == 1) {
// True Positive
$tp++;
} else {
// False Positive
$fp++;
$arrFP[] = $strTweet;
}
} else {
if ($blnActualClass == 1) {
// False Negative
$fn++;
$arrFN[] = $strTweet;
} else {
// True Negative
$tn++;
}
}
}
// Confusion Matrix and Logging
echo "
Predicted
1 0
Actual 1 $tp $fn
Actual 0 $fp $tn
";
if (count($arrFP) > 0) {
echo "\n\nFalse Positives\n";
foreach ($arrFP as $strTweet) {
echo "$strTweet\n";
}
}
if (count($arrFN) > 0) {
echo "\n\nFalse Negatives\n";
foreach ($arrFN as $strTweet) {
echo "$strTweet\n";
}
}
function LoadDictionaryArray() {
$strDictionary = <<<EOD
10|iTunes
10|ios 7
10|ios7
10|iPhone
10|apple inc
10|apple corp
10|apple.com
10|MacBook
10|desk top
10|desktop
1|config
1|facebook
1|snapchat
1|intel
1|investor
1|news
1|labs
1|gadget
1|apple store
1|microsoft
1|android
1|bonds
1|Corp.tax
1|macs
-1|pie
-1|clientes
-1|green apple
-1|banana
-10|apple pie
EOD;
$arrDictionary = explode("\n", $strDictionary);
foreach ($arrDictionary as $k => $v) {
$arr = explode('|', $v);
$arrDictionary[$k] = array('value' => $arr[0], 'term' => strtolower(trim($arr[1])));
}
return $arrDictionary;
}
function score($str) {
$str = strtolower($str);
$intScore = 0;
foreach (LoadDictionaryArray() as $arrDictionaryItem) {
if (strpos($str,$arrDictionaryItem['term']) !== false) {
$intScore += $arrDictionaryItem['value'];
}
}
return $intScore;
}
?>
The above outputs:
Predicted
1 0
Actual 1 31 1
Actual 0 1 17
False Positives
1|Royals apple #ASGame #mlb # News Corp Building http://instagram.com/p/bBzzgMrrIV/
False Negatives
-1|RT #MaxFreixenet: Apple no tiene clientes. Tiene FANS// error.... PAGAS por productos y apps, ergo: ERES CLIENTE.
In all the examples that you gave, Apple (Inc.) was referred to either as Apple or as apple inc, so a possible way could be to search for:
a capital "A" in Apple
an "inc" after apple
words/phrases like "OS", "operating system", "Mac", "iPhone", ...
or a combination of them
To simplify answers based on Conditional Random Fields a bit... context is huge here. You will want to pick out the features in those tweets that clearly distinguish Apple the company from apple the fruit. Let me outline a list of features here that might be useful for you to start with. For more information look up noun phrase chunking, and something called BIO labels. See (http://www.cis.upenn.edu/~pereira/papers/crf.pdf)
Surrounding words: Build a feature vector for the previous word and the next word, or if you want more features perhaps the previous 2 and next 2 words. You don't want too many words in the model or it won't match the data very well.
In Natural Language Processing, you are going to want to keep this as general as possible.
Other features to get from surrounding words include the following:
Whether the first character is a capital
Whether the last character in the word is a period
The part of speech of the word (Look up part of speech tagging)
The text itself of the word
I don't advise this, but to give more examples of features specifically for Apple:
WordIs(Apple)
NextWordIs(Inc.)
You get the point. Think of Named Entity Recognition as describing a sequence, and then using some math to tell a computer how to calculate that.
Keep in mind that natural language processing is a pipeline-based system. Typically, you break things into sentences, move to tokenization, then do part-of-speech tagging or even dependency parsing.
This is all to get you a list of features you can use in your model to identify what you're looking for.
There's a really good library for processing natural language text in Python called nltk. You should take a look at it.
One strategy you could try is to look at n-grams (groups of words) with the word "apple" in them. Some words are more likely to be used next to "apple" when talking about the fruit, others when talking about the company, and you can use those to classify tweets.
Use LibShortText. This Python utility has already been tuned to work for short text categorization tasks, and it works well. The most you'll have to do is write a loop to pick the best combination of flags. I used it to do supervised speech-act classification in emails and the results were up to 95-97% accurate (during 5-fold cross validation!).
And it comes from the makers of LIBSVM and LIBLINEAR, whose support vector machine (SVM) implementation is used in sklearn and CRAN, so you can be reasonably assured that their implementation is not buggy.
Make an AI filter to distinguish Apple Inc (the company) from apple (the fruit). Since these are tweets, define your training set with a vector of 140 fields, each field being the character written in the tweet at position X (0 to 139). If the tweet is shorter, just give the remaining fields a blank value.
Then build a training set big enough to get a good accuracy (subjective to your taste). Assign a result value to each tweet: an Apple Inc tweet gets 1 (true) and an apple (fruit) tweet gets 0. It would be a case of supervised learning with logistic regression.
That is machine learning: it is generally easier to code and performs better. It has to learn from the set you give it, and it's not hardcoded.
I don't know Python, so I cannot write the code for it, but if you were to take more time for machine learning's logic and theory you might want to look at the class I'm following.
Try the Coursera course Machine Learning by Andrew Ng. You will learn machine learning on MATLAB or Octave, but once you get the basics you will be able to write machine learning in about any language if you understand the simple math (simple in logistic regression).
That is, getting the code from someone won't make you able to understand what is going on in the machine learning code. You might want to invest a couple of hours on the subject to see what is really going on.
I would recommend avoiding answers suggesting entity recognition, because this task is text classification first and entity recognition second (you could do it without entity recognition at all).
I think the fastest path to results will be spaCy + Prodigy.
spaCy has a well-thought-through model for the English language, so you don't have to build your own, while Prodigy lets you quickly create training datasets and fine-tune a spaCy model for your needs.
If you have enough samples, you can have a decent model in 1 day.

Code testing for beginners - how to do?

Let's say that I have a lot of nested loops (3-4 levels) in a Java method and each of these loops can have some if-else blocks. How can I check if all these things are working properly? I am looking for a logical way to test this instead of using a brute-force approach like substituting illegal values.
EDIT:
Can you also suggest some good testing books for beginners ?
The way I've always been taught basic testing is to work the edge cases as much as possible.
For example, if you are checking the condition that variable i is between 0 and 10, if (i > 0 && i < 10), what I would naturally test is a few values that make the condition true, preferably near the edges, then the edge values themselves (a combination of true and false), and finally cases that are way out of bounds. With the aforementioned condition, I'd test 1, 5, 9, 0, 10, -1, 11, and finally an extremely large integer, both positive and negative.
This somewhat goes against "not substituting illegal values", but I feel that you have to do that in order to ensure that your conditions fail properly.
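A minimal JUnit-style sketch of that boundary-value idea, assuming a hypothetical isInRange method that implements the i > 0 && i < 10 check:
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class RangeCheckTest {

    // hypothetical method under test
    private boolean isInRange(int i) {
        return i > 0 && i < 10;
    }

    @Test
    public void valuesInsideTheRangePass() {
        assertTrue(isInRange(1));  // just above the lower edge
        assertTrue(isInRange(5));  // middle
        assertTrue(isInRange(9));  // just below the upper edge
    }

    @Test
    public void edgeAndOutOfRangeValuesFail() {
        assertFalse(isInRange(0));                 // lower edge
        assertFalse(isInRange(10));                // upper edge
        assertFalse(isInRange(-1));
        assertFalse(isInRange(11));
        assertFalse(isInRange(Integer.MAX_VALUE)); // extreme positive
        assertFalse(isInRange(Integer.MIN_VALUE)); // extreme negative
    }
}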
EMMA is a code coverage tool. You run your unit tests under EMMA and it will produce an HTML report with colorized source code showing which lines were reached and which were not. Based on that you can add tests to make sure you're testing all the various branches.
Each if/then in your code contains a boolean sub-expression, as does the sub-expression used in a loop to decide whether to enter/rerun the loop. Predicate coverage should give you a good idea of how thorough your tests are.
Wikipedia explains predicate coverage:
Condition coverage (or predicate coverage) - has each boolean sub-expression evaluated both to true and to false? This does not necessarily imply decision coverage.
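For instance (a sketch with a made-up method), condition coverage for the check below means your tests must make i > 0 both true and false at least once, and likewise s != null, e.g. with the inputs (5, "x"), (-1, "x") and (5, null):
// Condition (predicate) coverage: each boolean sub-expression must evaluate
// to both true and false somewhere in the test suite.
static boolean accept(int i, String s) {
    return i > 0 && s != null;
}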
I believe that using the debugger is the easiest way to find the mistake. You can find a full explanation of debugging at this link: http://www.ibm.com/developerworks/library/os-ecbug/.
You can also use this link for practising: http://webster.cs.washington.edu:8080/practiceit/
For example, find input which will go through each of those loops with some values. Then find input which will go through each branch of the ifs. Then find input which will go through the loops with large, small, or illegal values.
Set some input and output data. Make the calculations yourself.
Create a class to check if the output values match the ones you separately calculated.
Example:
input: Array(3,4,5,6);
output (sum of odd numbers) : 8
class TestClass {
    // test case
    // here you keep changing the array (extreme values, null etc..)
    public void test1() {
        int[] anArray = new int[4];
        anArray[0] = 3;
        anArray[1] = 4;
        anArray[2] = 5;
        anArray[3] = 6;
        int s = Calculator.oddSum(anArray);
        if (s == 8)
            System.out.println("Passed");
        else
            System.out.println("Failed");
    }

    public static void main(String[] args) {
        TestClass t = new TestClass();
        t.test1();
    }
}

Java Simulating the keyboard

How would I send text to the computer (like a keyboard) via a Java class?
I have considered using the Robot class to press and release each key, but that would be tedious and there is no way to get the KeyCode from a char.
No, there is also a soft way (well, on Windows it works at least ;-)):
private static void outputString(Robot robot,String str)
{
Toolkit toolkit = Toolkit.getDefaultToolkit();
boolean numlockOn = toolkit.getLockingKeyState(KeyEvent.VK_NUM_LOCK);
int[] keyz=
{
KeyEvent.VK_NUMPAD0,
KeyEvent.VK_NUMPAD1,
KeyEvent.VK_NUMPAD2,
KeyEvent.VK_NUMPAD3,
KeyEvent.VK_NUMPAD4,
KeyEvent.VK_NUMPAD5,
KeyEvent.VK_NUMPAD6,
KeyEvent.VK_NUMPAD7,
KeyEvent.VK_NUMPAD8,
KeyEvent.VK_NUMPAD9
};
if(!numlockOn)
{
robot.keyPress(KeyEvent.VK_NUM_LOCK);
}
for(int i=0;i<str.length();i++)
{
int ch=(int)str.charAt(i);
String chStr=""+ch;
if(ch <= 999)
{
chStr="0"+chStr;
}
robot.keyPress(KeyEvent.VK_ALT);
for(int c=0;c<chStr.length();c++)
{
int iKey=(int)(chStr.charAt(c)-'0');
robot.keyPress(keyz[iKey]);
robot.keyRelease(keyz[iKey]);
}
robot.keyRelease(KeyEvent.VK_ALT);
}
if(!numlockOn)
{
robot.keyPress(KeyEvent.VK_NUM_LOCK);
}
}
Try using this:
http://javaprogrammingforums.com/java-se-api-tutorials/59-how-sendkeys-application-java-using-robot-class.html
Use a GUI testing framework (even if you do not use it for testing). I recommend FEST. In FEST you can search for GUI elements and automate all kinds of user interactions including entering text.
For example once you have a text field fixture (the FEST term for a wrapper that lets you control the component), you can do
JTextComponentFixture fixture = ...;
fixture.enterText("Some text");
@JavaCoder-1337 Not exactly...
Although some switch-case (the hard way?) is still needed to handle some (special) characters, most of the characters can be handled fairly easily.
How much you need depends on your target audience, but whatever the case, you can handle it through a combination of:
AWTKeyStroke.getAWTKeyStroke(char yourChar).getKeyCode(); - which handles the most basic ones; a-zA-Z are translated to their base (a-z) key events, and a few other chars are handled similarly (base key only, no modifiers, thus no casing is applied).
As you can imagine, this method is particularly effective for simplifying English handling, since the language makes little use of accented letters compared to many others.
Normalizer.normalize(String textToNormalize, Form.NFD); - which decomposes most composed (accented) characters, like áàãâä, éèêë, íìîï, etc., and their uppercase equivalents, into their base elements. Example: á (225) becomes a (97) followed by ´ [769].
If your send(String text) method is able to send accents, a simple swap of the accent (in the example it's VK_DEAD_ACUTE) and its letter, so that they arrive in the proper send order, will give you the desired á output, eliminating the need for a dedicated á case.
Combined with the first simplification, for this example, that means 1/3 [´] instead of 3/3 [a, á, ´] switch cases are needed!
These are only a few of many simplifications that can be done to shorten the dreadfully long switch-case method that is (unwisely) suggested by many fellow programmers.
For example, you can easily handle casing by detecting if the character to be sent is uppercase, and then detecting the current capslock state to invert the casing operation, if needed:
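// Note: keyPress, keyRelease and sendChar below are assumed to be methods of (a subclass of) java.awt.Robot.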
boolean useShift = Character.isUpperCase(aChar);
useShift = Toolkit.getDefaultToolkit().getLockingKeyState(KeyEvent.VK_CAPS_LOCK) ? !useShift : useShift;
if (useShift) {
    keyPress(KeyEvent.VK_SHIFT);
    sendChar(aChar);
    keyRelease(KeyEvent.VK_SHIFT);
} else {
    sendChar(aChar);
}
Another option (the one that I use), which is even simpler, is to code a macro in a tool/language that is (far) more suited for this kind of operation (I use and recommend AutoHotKey), and simply call its execution from Java:
Runtime rt = Runtime.getRuntime();
// "Hello World!" is a command-line param, forwarded to the ahk script as its text-to-send.
rt.exec(".../MyJavaBot/sendString.ahk \"Hello World!\"");
