Calculating Mutual Information For Selecting a Training Set in Java

Scenario
I am attempting to implement supervised learning over a data set within a Java GUI application. The user will be given a list of items or 'reports' to inspect and will label them based on a set of available labels. Once the supervised learning is complete, the labelled instances will then be given to a learning algorithm. This will attempt to order the rest of the items on how likely it is the user will want to view them.
To get the most from the user's time I want to pre-select the reports that will provide the most information about the entire collection of reports, and have the user label them. As I understand it, to calculate this, it would be necessary to find the sum of all the mutual information values for each report, and order them by that value. The labelled reports from supervised learning will then be used to form a Bayesian network to find the probability of a binary value for each remaining report.
Example
Here, an artificial example may help to explain, and may clear up confusion when I've undoubtedly used the wrong terminology :-) Consider an example where the application displays news stories to the user. It chooses which news stories to display first based on the preferences the user has shown. Features of a news story that correlate with preference are country of origin, category and date. So if a user labels a single news story as interesting when it came from Scotland, it tells the machine learner that there's an increased chance other news stories from Scotland will be interesting to the user. Similarly for a category such as Sport, or a date such as December 12th 2004.
This preference could be calculated by choosing any order for all news stories (e.g. by category, by date) or randomly ordering them, then calculating the preference as the user goes along. What I would like to do is to get a kind of "head start" on that ordering by having the user look at a small number of specific news stories and say if they're interested in them (the supervised learning part). To choose which stories to show the user, I have to consider the entire collection of stories. This is where Mutual Information comes in. For each story I want to know how much it can tell me about all the other stories when it is classified by the user. For example, if there is a large number of stories originating from Scotland, I want to get the user to classify (at least) one of them. Similarly for other correlating features such as category or date. The goal is to find examples of reports which, when classified, provide the most information about the other reports.
Problem
Because my math is a bit rusty and I'm new to machine learning, I'm having some trouble converting the definition of Mutual Information into an implementation in Java. Wikipedia gives the equation for Mutual Information as:
I(X; Y) = Σ_x Σ_y p(x, y) log( p(x, y) / (p(x) p(y)) )
However, I'm unsure if this can actually be used when nothing has been classified, and the learning algorithm has not calculated anything yet.
As in my example, say I had a large number of new, unlabelled instances of this class:
public class NewsStory {
    private String countryOfOrigin;
    private String category;
    private Date date;
    // constructor, etc.
}
In my specific scenario, the correlation between fields/features is based on an exact match so, for instance, one day and 10 years difference in date are equivalent in their inequality.
The factors for correlation (e.g. is date more correlating than category?) are not necessarily equal, but they can be predefined and constant. Does this mean that the result of the function p(x,y) is the predefined value, or am I mixing up terms?
The Question (finally)
How can I go about implementing the mutual information calculation given this (fake) example of news stories? Libraries, javadoc, code examples etc. are all welcome information. Also, if this approach is fundamentally flawed, explaining why that is the case would be just as valuable an answer.
PS. I am aware of libraries such as Weka and Apache Mahout, so just mentioning them is not really useful for me. I'm still searching through documentation and examples for both these libraries looking for stuff on Mutual Information specifically. What would really help me is pointing to resources (code examples, javadoc) where these libraries help with mutual information.

I am guessing that your problem is something like...
"Given a list of unlabeled examples, sort the list by how much the predictive accuracy of the model would improve if the user labelled the example and added it to the training set."
If this is the case, I don't think mutual information is the right thing to use because you can't calculate MI between two instances. The definition of MI is in terms of random variables and an individual instance isn't a random variable, it's just a value.
The features and the class label can be thought of as random variables. That is, they have a distribution of values over the whole data set. You can calculate the mutual information between two features, to see how 'redundant' one feature is given the other one, or between a feature and the class label, to get an idea of how much that feature might help prediction. This is how people usually use mutual information in a supervised learning problem.
I think ferdystschenko's suggestion that you look at active learning methods is a good one.
In response to Grundlefleck's comment, I'll go a bit deeper into terminology by using his idea of a Java object analogy...
Collectively, we have used the terms 'instance', 'thing', 'report' and 'example' to refer to the object being classified. Let's think of these things as instances of a Java class (I've left out the boilerplate constructor):
class Example {
    String f1;
    String f2;
}

Example e1 = new Example("foo", "bar");
Example e2 = new Example("foo", "baz");
The usual terminology in machine learning is that e1 is an example, that all examples have two features f1 and f2 and that for e1, f1 takes the value 'foo' and f2 takes the value 'bar'. A collection of examples is called a data set.
Take all the values of f1 for all examples in the data set. This is a list of strings, and it can also be thought of as a distribution: we can treat the feature as a random variable and each value in the list as a sample taken from that random variable. So we can, for example, calculate the MI between f1 and f2. The pseudocode would be something like:
mi = 0
N = total number of examples
for each value x taken by f1:
    for each value y taken by f2:
        p_xy = (number of examples where f1 = x and f2 = y) / N
        p_x  = (number of examples where f1 = x) / N
        p_y  = (number of examples where f2 = y) / N
        if p_xy > 0:
            mi += p_xy * log(p_xy / (p_x * p_y))
However you can't calculate MI between e1 and e2, it's just not defined that way.
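To make that concrete in Java, here is a rough sketch of the same calculation for the two-field Example class above. This is my own code, not from any library; it uses the natural log, so the result is in nats.

import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class MutualInformation {

    /** Mutual information between features f1 and f2 over a data set. */
    static double mi(List<Example> examples) {
        int n = examples.size();
        Map<String, Integer> countX = new HashMap<String, Integer>();
        Map<String, Integer> countY = new HashMap<String, Integer>();
        Map<String, Integer> countXY = new HashMap<String, Integer>();

        // Count marginal and joint occurrences of the feature values.
        for (Example e : examples) {
            increment(countX, e.f1);
            increment(countY, e.f2);
            increment(countXY, e.f1 + "\u0000" + e.f2);
        }

        double mi = 0.0;
        for (Map.Entry<String, Integer> xy : countXY.entrySet()) {
            String[] parts = xy.getKey().split("\u0000");
            double pXY = xy.getValue() / (double) n;
            double pX = countX.get(parts[0]) / (double) n;
            double pY = countY.get(parts[1]) / (double) n;
            mi += pXY * Math.log(pXY / (pX * pY));
        }
        return mi;
    }

    private static void increment(Map<String, Integer> counts, String key) {
        Integer current = counts.get(key);
        counts.put(key, current == null ? 1 : current + 1);
    }
}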

I know information gain only in connection with decision trees (DTs), where in the construction of a DT, the split to make on each node is the one which maximizes information gain. DTs are implemented in Weka, so you could probably use that directly, although I don't know if Weka lets you calculate information gain for any particular split underneath a DT node.
Apart from that, if I understand you correctly, I think what you're trying to do is generally referred to as active learning. There, you first need some initial labeled training data which is fed to your machine learning algorithm. Then you have your classifier label a set of unlabeled instances and return confidence values for each of them. Instances with the lowest confidence values are usually the ones which are most informative, so you show these to a human annotator and have him/her label these manually, add them to your training set, retrain your classifier, and do the whole thing over and over again until your classifier has a high enough accuracy or until some other stopping criterion is met. So if this works for you, you could in principle use any ML-algorithm implemented in Weka or any other ML-framework as long as the algorithm you choose is able to return confidence values (in case of Bayesian approaches this would be just probabilities).
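In code, one round of that loop might look roughly like this with Weka. This is only a sketch under the assumption that distributionForInstance gives you the class probabilities; everything else around it is made up by me:

import weka.classifiers.Classifier;
import weka.classifiers.bayes.NaiveBayes;
import weka.core.Instances;

public class UncertaintySampling {

    /** Index of the unlabelled instance the classifier is least confident about. */
    static int leastConfidentIndex(Classifier clf, Instances unlabelled) throws Exception {
        int bestIndex = -1;
        double lowestConfidence = Double.MAX_VALUE;
        for (int i = 0; i < unlabelled.numInstances(); i++) {
            double[] dist = clf.distributionForInstance(unlabelled.instance(i));
            double confidence = 0.0;                    // probability of the predicted class
            for (double p : dist) confidence = Math.max(confidence, p);
            if (confidence < lowestConfidence) {
                lowestConfidence = confidence;
                bestIndex = i;
            }
        }
        return bestIndex;
    }

    /** One iteration: train, pick the least certain instance, ask the user, repeat. */
    static void oneRound(Instances labelled, Instances unlabelled) throws Exception {
        Classifier clf = new NaiveBayes();
        clf.buildClassifier(labelled);
        int ask = leastConfidentIndex(clf, unlabelled);
        // ... show unlabelled.instance(ask) to the user, move it (with its new label)
        // into 'labelled', and retrain.
    }
}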
With your edited question I think I'm coming to understand what you're aiming at. If what you want is to calculate MI, then StompChicken's answer and pseudocode couldn't be much clearer in my view. I also think that MI is not what you want and that you're trying to re-invent the wheel.
Let's recapitulate: you would like to train a classifier which can be updated by the user. This is a classic case for active learning. But for that, you need an initial classifier (you could basically just give the user random data to label but I take it this is not an option) and in order to train your initial classifier, you need at least some small amount of labeled training data for supervised learning. However, all you have are unlabeled data. What can you do with these?
Well, you could cluster them into groups of related instances, using one of the standard clustering algorithms provided by Weka or some specific clustering tool like Cluto. If you now take the x most central instances of each cluster (x depending on the number of clusters and the patience of the user) and have the user label them as interesting or not interesting, you can adopt this label for the other instances of that cluster as well (or at least for the central ones). Voila, now you have training data which you can use to train your initial classifier and kick off the active learning process by updating the classifier each time the user marks a new instance as interesting or not. I think what you're trying to achieve by calculating MI is essentially similar, but may just be the wrong tool for the job.
Not knowing the details of your scenario, I should think that you may not even need any labeled data at all, except if you're interested in the labels themselves. Just cluster your data once, let the user pick an item interesting to him/her from the central members of all clusters, and suggest other items from the selected clusters as perhaps being interesting as well. Also suggest some random instances from other clusters here and there, so that if the user selects one of these, you may assume that the corresponding cluster might generally be interesting, too. If there is a contradiction and a user likes some members of a cluster but not others of the same one, then you try to re-cluster the data into finer-grained groups which discriminate the good from the bad ones. The re-training step could even be avoided by using hierarchical clustering from the start and travelling down the cluster hierarchy whenever the user's input causes a contradiction.
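As a rough sketch of that clustering idea with Weka (I'm assuming the standard SimpleKMeans API here; the file name and the number of clusters are placeholders):

import weka.clusterers.SimpleKMeans;
import weka.core.EuclideanDistance;
import weka.core.Instance;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class ClusterSeedSelection {
    public static void main(String[] args) throws Exception {
        // The unlabelled reports, e.g. exported as an ARFF file of nominal features.
        Instances data = DataSource.read("reports.arff");

        SimpleKMeans kmeans = new SimpleKMeans();
        kmeans.setNumClusters(10);                 // tune to the user's patience
        kmeans.buildClusterer(data);

        Instances centroids = kmeans.getClusterCentroids();
        EuclideanDistance dist = new EuclideanDistance(data);

        // For each cluster, keep the instance closest to its centroid; these are
        // the candidates to show to the user for labelling.
        Instance[] seeds = new Instance[kmeans.getNumClusters()];
        double[] best = new double[kmeans.getNumClusters()];
        java.util.Arrays.fill(best, Double.MAX_VALUE);

        for (int i = 0; i < data.numInstances(); i++) {
            Instance inst = data.instance(i);
            int c = kmeans.clusterInstance(inst);
            double d = dist.distance(inst, centroids.instance(c));
            if (d < best[c]) {
                best[c] = d;
                seeds[c] = inst;
            }
        }
        // 'seeds' now holds one representative report per cluster.
    }
}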

Related

How to decide what is the relevant group in a precision and recall computation?

One of the most famous measurements for an information retrieval system is its precision and recall. For both, we need to know the total set of relevant documents and compare it with the documents that the system has returned. My question is: how can we find this superset of relevant documents in the following scenario?
Consider an academic search engine whose job is to accept the full name of an academic paper and, based on some algorithms, return a list of relevant papers. To judge whether or not the system has good accuracy, we wish to calculate its precision and recall. But we do not know how to produce the set of relevant papers (the ones the search engine should return for each user query) and, accordingly, how to compute the precision and recall.
Most relevant-document sets used for system evaluation are built by showing documents to a user (a real person).
Non-human evaluation:
You could probably come up with a "fake" evaluation in your particular instance. I would expect that the highest ranked paper for the paper "Variations in relevance judgments and the measurement of retrieval effectiveness" [1] would be that paper itself. So you could take your data and create an automatic evaluation. It wouldn't tell you if your system was really finding new things (which you care about), but it would tell you if your system was terrible or not.
e.g. If you were at a McDonald's, and you asked a mapping system where the nearest McDonald's was, and it didn't find the one you were at, you would know this to be some kind of system failure.
For a real evaluation:
You come up with a set of queries, and you judge the top K results from your systems for each of them. In practice, you can't look at all millions of papers for each query, so you approximate the recall set by the recall set you currently know about. This is why it's important to have some diversity in the systems you're pooling. Relevance is tricky; people disagree a lot on what documents are relevant to a query.
In your case: people will disagree about what papers are relevant to another paper. But that's largely okay, since they will mostly agree on the obvious ones.
Disagreement is okay if you're comparing systems:
This paradigm only makes sense if you're comparing different information retrieval systems. It doesn't help you understand how good a single system is, but it helps you determine if one system is better than the other reliably [1].
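Once you have pooled judgments for a query, the precision and recall of a system's top K are mechanical to compute; here's a small sketch (the document-ID representation is my own):

import java.util.List;
import java.util.Set;

public class PrecisionRecall {

    /** Precision@K: fraction of the top-K retrieved documents that are judged relevant
     *  (assumes the system returns at least K documents). */
    static double precisionAtK(List<String> ranked, Set<String> relevant, int k) {
        int hits = 0;
        for (int i = 0; i < Math.min(k, ranked.size()); i++) {
            if (relevant.contains(ranked.get(i))) hits++;
        }
        return hits / (double) k;
    }

    /** Recall@K: fraction of all judged-relevant documents found in the top K. */
    static double recallAtK(List<String> ranked, Set<String> relevant, int k) {
        int hits = 0;
        for (int i = 0; i < Math.min(k, ranked.size()); i++) {
            if (relevant.contains(ranked.get(i))) hits++;
        }
        return relevant.isEmpty() ? 0.0 : hits / (double) relevant.size();
    }
}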
[1] Voorhees, Ellen M. "Variations in relevance judgments and the measurement of retrieval effectiveness." Information processing & management 36.5 (2000): 697-716.

What does the TopScoreDocCollector in Lucene by default use for Scoring?

I want to use Lucene to process several millions of news data. I'm quite new to Lucene, so I'm trying to learn more and more about how it is working.
From several tutorials around the web I found the class TopScoreDocCollector to be of high relevance for querying the Lucene index.
You create it like this
int hitsPerPage = 10000;
TopScoreDocCollector collector = TopScoreDocCollector.create(hitsPerPage, true);
and it will later collect the results of your query (only the amount you defined in hitsPerPage). I initially thought the results taken in would be just randomly distributed or something (like you have 100,000 documents that match your query and just get a random 10,000). I now think that I was wrong.
After reading more and more about Lucene I came to the javadoc of the class (please see here). Here it says
A Collector implementation that collects the top-scoring hits,
returning them as a TopDocs
So it now seems to me that Lucene is using some very smart technology to somehow return the top-scored documents for my input query. But how does that Scorer work? What does it take into account? I've extended my research on this topic but could not find an answer I completely understood so far.
Can you explain to me how the Scorer in TopScoreDocCollector scores my news documents, and whether this can be of use for me?
Lucene uses an inverted index to produce an iterator over the list of doc ids that matches your query.
It then goes through each one of them and computes a score. By default that score is based on so-called tf-idf. In a nutshell, it will take into account how many times the terms of your query appear in the document, and also the number of documents that contain the term.
The idea is that if you look for (warehouse work), having the word work many times is not as significant as having the word warehouse many times.
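As a rough illustration, the classic tf-idf weight for a single query term in one document looks something like this. This is a simplified form of what Lucene's default similarity computes; it leaves out length norms, coord and boosts:

public class TfIdf {
    /** Weight of one term in one document: dampened term frequency times inverse document frequency. */
    static double tfIdf(int termFreqInDoc, int docsContainingTerm, int totalDocs) {
        double tf = Math.sqrt(termFreqInDoc);
        double idf = 1.0 + Math.log((double) totalDocs / (docsContainingTerm + 1));
        return tf * idf;
    }
}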
Then, rather than sorting the whole set of matching documents, Lucene takes into account the fact that you really only need the top K documents or so. Using a heap (or priority queue), one can compute these top K with a complexity of O(N log K) instead of O(N log N).
That's the role of the TopScoreDocCollector.
You can implement your own logic for a scorer (assign a score to a document), or a collector (aggregates results).
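To make the heap idea concrete, here's a generic sketch of collecting the top K scored documents with a bounded priority queue. This is just the principle, not Lucene's actual TopScoreDocCollector code:

import java.util.ArrayList;
import java.util.List;
import java.util.PriorityQueue;

public class TopKCollector {
    private final int k;
    // Min-heap ordered by score: the root is always the weakest of the current top K.
    private final PriorityQueue<float[]> heap;   // each entry: {docId, score}

    TopKCollector(int k) {
        this.k = k;
        this.heap = new PriorityQueue<float[]>(k, (a, b) -> Float.compare(a[1], b[1]));
    }

    /** Offer one scored document; keeps only the best K seen so far (O(log K) per doc). */
    void collect(int docId, float score) {
        if (heap.size() < k) {
            heap.offer(new float[] { docId, score });
        } else if (score > heap.peek()[1]) {
            heap.poll();
            heap.offer(new float[] { docId, score });
        }
    }

    /** Returns the collected documents, best score first. */
    List<float[]> topDocs() {
        List<float[]> result = new ArrayList<float[]>(heap);
        result.sort((a, b) -> Float.compare(b[1], a[1]));
        return result;
    }
}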
This might not be the best answer, since sooner or later someone will be along to explain the internal behaviour of Lucene, but based on my days as a student there are two sides to "information retrieval": one is taking advantage of existing solutions such as Lucene, the other is the whole theory behind it.
If you are interested in the latter, I recommend taking http://en.wikipedia.org/wiki/Information_retrieval as a starting point to get an overview and dig into the whole subject.
I personally think it is one of the most interesting fields with huge potential, yet I never had the "academic hard skills" to really get into it.
To parametrize the available solutions it is crucial to at least have an overview of the theory. There are, for example, "challenges" where information has been manually indexed/valued as a reference, so that the quality of a programmatic solution can be compared against it.
Based on such a challenge we managed to achieve slightly higher quality than "Lucene out of the box" after we fed Lucene with 4 different information bases (sorry, it was a few years back and I can barely remember, hence the missing keywords..), all of which were the result of Lucene itself but with different parameters.
To come back to your question, I can't answer it directly, but I hope to give you some basis for deciding whether you really need/want to know what's behind Lucene, or whether you would rather just use it as a black box (and/or make it a grey box by parametrization).
Sorry if I got you totally wrong.

Extract important attributes in Weka

That's rather newbie question, so please take it with a grain of salt.
I'm new to the field of data mining and trying to get my head wrapped around this topic. Right now I'm trying to polish my existing model so that it classifies instances better. The problem is that my model has around 480 attributes. I know for sure that not all of them are relevant, but it's hard for me to point out which are indeed important.
The question is: having valid training and test sets, can one use some sort of data mining algorithm which would throw away attributes that seem not to have any impact on the quality of classification?
I'm using Weka.
You should test using some of the Classifier algorithms that Weka has.
The basic idea is to use the Cross-validation option, so you can see which algorithm gives you the best Correctly Classified Instances value.
I can give you an example of one of my training set, using the Cross-validation option and choosing Folds 10.
As you can see, using the J48 classifier I will have:
Correctly Classified Instances 4310 83.2207 %
Incorrectly Classified Instances 869 16.7793 %
and if I will use for example the NaiveBayes Algorithm I will have:
Correctly Classified Instances 1996 38.5403 %
Incorrectly Classified Instances 3183 61.4597 %
and so on, the values differ depending on the algorithm.
So, test as many algorithms as possible and see which one gives you the best Correctly Classified Instances / Time consumed.
Comment converted to answer as OP suggested:
If you use Weka 3.6.6: select the Explorer module, then go to the "Select attributes" tab and choose an "Attribute evaluator" and a "Search method"; you can also choose between using the full data set or CV sets. For more details see e.g. http://forums.pentaho.com/showthread.php?68687-Selecting-Attributes-with-Weka or http://weka.wikispaces.com/Performing+attribute+selection
Read up on the topic of clustering algorithms (only on your training set though!)
Look into the InfoGainAttributeEval class.
The buildEvaluator() and the evaluateAttribute(int index) functions should help.
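A small sketch of how those classes are typically wired together through Weka's AttributeSelection (the file name, class index and the number of attributes to keep are assumptions you would adjust):

import weka.attributeSelection.AttributeSelection;
import weka.attributeSelection.InfoGainAttributeEval;
import weka.attributeSelection.Ranker;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class InfoGainRanking {
    public static void main(String[] args) throws Exception {
        Instances data = DataSource.read("training.arff");
        data.setClassIndex(data.numAttributes() - 1);   // assume the class is the last attribute

        AttributeSelection selector = new AttributeSelection();
        selector.setEvaluator(new InfoGainAttributeEval());
        Ranker ranker = new Ranker();
        ranker.setNumToSelect(50);                      // keep the 50 most informative attributes
        selector.setSearch(ranker);
        selector.SelectAttributes(data);

        // Indices of the selected attributes, ranked by information gain,
        // and a copy of the data set reduced to just those attributes.
        int[] selected = selector.selectedAttributes();
        Instances reduced = selector.reduceDimensionality(data);
    }
}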

How can I build an algorithm to classify an HTML page based on keywords?

I'm trying to create an algorithm that set some relevance to a webpage based on keywords that it finds on the page.
I'm doing this at the moment:
I set some words and a value for each: "movie" (10), "cinema" (6), "actor" (5) and "hollywood" (4), and I search some parts of the page, giving a weight to each part and multiplying it by the word's weight.
Example: the word "movie" was found in the URL (1.5) * 10 and in the title (2.5) * 10 = 40
This is trash! It's my first attempt, and it returns some relevant results, but I don't think that a relevance expressed by values like 244, 66, 30, 15 is useful.
I want something that falls inside a range, from 0 to 1 or 1 to 100.
What type of weighting for words can I use?
Besides that, are there ready-made algorithms to score the relevance of an HTML page based on things like URL, keywords, title, etc., excluding the main content?
EDIT 1: All of this can be rebuilt; the weights are arbitrary. I want to use weights that are principled, not random numbers like 10, 5 and 3.
Something like: low importance = 1, medium importance = 2, high importance = 4, decisive importance = 8.
Title > Link Part of URL > Domain > Keywords
movie > cinema> actor > hollywood
EDIT 2: At the moment, I want to analyze the page's relevance for words excluding the body content of the page. I will include in the analysis the domain, the link part of the URL, the title, keywords (and other meta information I judge useful).
The reason for this is that the HTML content is dirty. I can find many words like 'movie' in menus and advertisements, while the main content of the page contains nothing relevant to the theme.
Another reason is that some pages have meta information indicating that the page contains info about a movie, but the main content does not. For example: a page that contains the plot of the film, telling the story, the characters, etc., without anything in that text indicating that this is about a movie; only the page's meta information does.
Later, after running the relevance analysis on the HTML page, I will do a relevance analysis on the (filtered) content separately.
Are you able to index these documents in a search engine? If you are then maybe you should consider using this latent semantic library.
You can get the actual project from here: https://github.com/algoriffic/lsa4solr
What you are trying to do is determine the meaning of a text corpus and classify it based on its meaning. However, words are not individually unique and shouldn't be considered in the abstract, apart from the overall article.
For example, suppose that you have an article which talks a lot about "Windows". This word is used 7 times in a 300 word article. So you know that it is important. However, what you don't know, is if it is talking about the Operating System "Windows" or the things that you look through.
Suppose then that you also see words such as "Installation", well, that doesn't help you at all either. Because people install windows into their houses much like they install windows operating system. However, if the very same article talks about defragmentation, operating systems, command line and Windows 7, then you can guess that the meaning of this document is actual about the Windows operating system.
However, how can you determine this?
This is where Latent Semantic Indexing comes in. What you want to do, is extract the entire documents text and then apply some clever analysis to that document.
The matrices you build (see here) are way above my head, and although I have looked at some libraries and used them, I have never been able to fully understand the complex math behind building the space-aware matrix that is used by Latent Semantic Analysis... so my advice would be to just use an already existing library to do this for you.
Happy to remove this answer if you aren't looking for external libraries and want to do this yourself
A simple way to convert anything into a 0-100 range (for any positive value X):
(1-1/(1+X))*100
A higher X gives you a value closer to 100.
But this won't promise you a fair or correct distribution. That's up to your algorithm of deciding the actual X value.
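In Java that squashing function is a one-liner; for example (using the raw score of 40 from the question):

/** Map any non-negative raw score into the 0-100 range; larger scores approach 100. */
static double normalize(double x) {
    return (1.0 - 1.0 / (1.0 + x)) * 100.0;
}

// normalize(0)   ->  0.0
// normalize(40)  -> ~97.6
// normalize(244) -> ~99.6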
your_sum / (max_score_per_word * num_words) * 100
Should work. But you'll get very small scores most of the time since few of the words will match those that have a non-zero score. Nonetheless I don't see an alternative. And it's not a bad thing that you get small scores: you will be comparing scores between webpages. You try many different webpages and you can figure out what a "high score" is with your system.
Check out this blog post on classifying webpages by topic, it talks about how to implement something that relates closely to your requirements. How do you define relevance in your scenario? No matter what weights you apply to the different inputs you will still be choosing a somewhat arbitrary value, once you've cleaned the raw data you would be better served to apply machine learning to generate a classifier for you. This is difficult if relevance is a scalar value, but it's trivial if it's a boolean value (ie. a page is or isn't relevant to a particular movie, for example).

Design for a Debate club assignment application

For my university's debate club, I was asked to create an application to assign debate sessions and I'm having some difficulties as to come up with a good design for it. I will do it in Java. Here's what's needed:
What you need to know about BP debates: There are four teams of 2 debaters each and a judge. The four groups are assigned a specific position: gov1, gov2, op1, op2. There is no significance to the order within a team.
The goal of the application is to get as input the debaters who are present (for example, if there are 20 people, we will hold 2 debates) and assign them to teams and roles with regards to the history of each debater so that:
Each debater should debate with (be on the same team) as many people as possible.
Each debater should uniformly debate in different positions.
The debate should be fair - debaters have different levels of experience and this should be as even as possible - i.e., there shouldn't be a team of two very experienced debaters and a team of junior debaters.
There should be an option for the user to restrict the assignment in various ways, such as:
Specifying that two people should debate together, in a specific position or not.
Specifying that a single debater should be in a specific position, regardless of the partner.
If anyone can try to give me some pointers for a design for this application, I'll be so thankful!
Also, I've never implemented a GUI before, so I'd appreciate some pointers on that as well, but it's not the major issue right now.
Also, there is the issue of keeping Debater information in file, which I also never implemented in Java, and would like some tips on that as well.
This seems like a textbook constraint problem. GUI notwithstanding, it'd be perfect for a technology like Prolog (ECLiPSe prolog has a couple of different Java integration libraries that ship with it).
But, since you want this in Java why not store the debaters' history in a sql database, and use the SQL language to structure the constraints. You can then wrap those SQL queries as Java methods.
There are two parts (three if you count entering and/or saving the data): the underlying algorithm and the UI.
For the UI, I'm weird. I use this technique (there is a link to my sourceforge project). A Java version would have to be done, which would not be too hard. It's weird because very few people have ever used it, but it saves an order of magnitude coding effort.
For the algorithm, the problem looks small enough that I would approach it with a simple tree search. I would have a scoring algorithm and just report the schedule with the best score.
That's a bird's-eye overview of how I would approach it.
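If it helps, a skeleton of that best-score tree search might look something like this. Everything here, including the scoring, is a placeholder I made up; the real version would assign debaters to team/position slots and score a schedule against the criteria in the question:

import java.util.ArrayList;
import java.util.List;

public class DebateAssignmentSearch {

    private List<String> bestSchedule = null;
    private double bestScore = Double.NEGATIVE_INFINITY;

    /** Depth-first search over partial assignments, remembering the best complete one. */
    void search(List<String> partial, List<String> remaining) {
        if (remaining.isEmpty()) {
            double s = score(partial);          // history, position balance, experience spread
            if (s > bestScore) {
                bestScore = s;
                bestSchedule = new ArrayList<String>(partial);
            }
            return;
        }
        for (String debater : new ArrayList<String>(remaining)) {
            partial.add(debater);               // assign to the next open slot
            remaining.remove(debater);
            search(partial, remaining);
            remaining.add(debater);             // backtrack
            partial.remove(partial.size() - 1);
        }
    }

    /** Placeholder: combine the criteria from the question into a single number. */
    double score(List<String> schedule) {
        return 0.0;
    }
}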
