I want to cluster the documents I get from a Google Scholar search using the bag-of-words model. I'm thinking of using Java as the language.
The documents should be clustered based on a set of words present in them. For example, say I have a predefined set of 10 words. I want to rank the Google search results according to the presence of the defined keywords in them.
Do I have to use an algorithm like k-means? And do I need to perform NLP tasks? Could anyone please tell me the steps to perform this?
NLP is used to preprocess the text before you classify or cluster your data.
Typical preprocessing steps are:
POS (part-of-speech) tagging and NE (named entity) feature extraction
Sentence parsing
Text tokenization
Stop-word removal
Once you have done this preprocessing, your data is ready for classification or clustering.
Now you can apply the k-means algorithm to that data.
In your case you can also apply k-means directly if you don't want to bother with the preprocessing.
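As a minimal sketch of the bag-of-words plus k-means idea (assuming plain Java with Apache Commons Math on the classpath for the clustering step; the KEYWORDS array is just a placeholder for your predefined set of 10 words):

import java.util.*;
import org.apache.commons.math3.ml.clustering.CentroidCluster;
import org.apache.commons.math3.ml.clustering.DoublePoint;
import org.apache.commons.math3.ml.clustering.KMeansPlusPlusClusterer;

public class KeywordClustering {
    // Predefined keyword set (placeholders; substitute your own 10 words).
    static final String[] KEYWORDS = {"cluster", "search", "ranking", "index"};

    // Build a bag-of-words count vector over KEYWORDS for one document.
    static double[] toVector(String text) {
        double[] v = new double[KEYWORDS.length];
        for (String token : text.toLowerCase().split("\\W+")) {
            for (int i = 0; i < KEYWORDS.length; i++) {
                if (KEYWORDS[i].equals(token)) {
                    v[i]++;
                }
            }
        }
        return v;
    }

    public static void main(String[] args) {
        List<String> docs = Arrays.asList(
                "search and ranking of documents",
                "cluster documents with a search index",
                "ranking ranking ranking");
        List<DoublePoint> points = new ArrayList<>();
        for (String doc : docs) {
            points.add(new DoublePoint(toVector(doc)));
        }
        // k-means with k = 2 clusters over the keyword-count vectors.
        KMeansPlusPlusClusterer<DoublePoint> clusterer = new KMeansPlusPlusClusterer<>(2);
        List<CentroidCluster<DoublePoint>> clusters = clusterer.cluster(points);
        for (int c = 0; c < clusters.size(); c++) {
            System.out.println("Cluster " + c + ": " + clusters.get(c).getPoints());
        }
    }
}

Each document becomes a count vector over the keyword set, and k-means groups documents whose vectors are close. Ranking by keyword presence can reuse the same vectors, e.g. by sorting documents on the vector sum.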
I have a list of documents and I am indexing them based on a user's query in Apache Solr. I want to extract some news articles by using keywords from the relevant indexed documents and display them along with the indexed documents to the user. Is there any algorithm or procedure by which we can extract the relevant keywords from the documents and use them to extract the news?
You should research TF-IDF keyword extraction. I did a similar process about 2 years ago using the English Wikipedia and a simple Python script. You need to answer a few questions, though, before proceeding with this. You can find a neat little writeup on using TF-IDF keyword extraction here.
Do you only care about single keywords or will you evaluate phrases as well and up to what length?
Will you do any natural language processing on incoming data such as tagging and stemming?
Will you restrict the keywords to certain article types? Certain categories of article can have their own TF-IDF scores so you might want to experiment with what you need.
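To make the TF-IDF part concrete, here is a rough plain-Java sketch (documents assumed to be already tokenized into lowercase words; no library required):

import java.util.*;

public class TfIdf {
    // Score each term of each document by tf * idf, where
    // tf  = count of the term in the document, and
    // idf = log(N / number of documents containing the term).
    static List<Map<String, Double>> score(List<List<String>> docs) {
        Map<String, Integer> docFreq = new HashMap<>();
        for (List<String> doc : docs) {
            for (String term : new HashSet<>(doc)) {
                docFreq.merge(term, 1, Integer::sum);
            }
        }
        List<Map<String, Double>> scores = new ArrayList<>();
        for (List<String> doc : docs) {
            Map<String, Integer> tf = new HashMap<>();
            for (String term : doc) {
                tf.merge(term, 1, Integer::sum);
            }
            Map<String, Double> tfidf = new HashMap<>();
            for (Map.Entry<String, Integer> e : tf.entrySet()) {
                double idf = Math.log((double) docs.size() / docFreq.get(e.getKey()));
                tfidf.put(e.getKey(), e.getValue() * idf);
            }
            scores.add(tfidf);
        }
        return scores;
    }
}

For each document, sort its map by score in descending order and take the top few entries as that document's keywords for the news lookup.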
I'm creating a mini search engine in Java which basically grabs all of the RSS feeds that a user specifies and then allows him or her to choose a single word to search for. Since the RSS feed documents are fairly limited in number, I'm thinking about processing the documents first before the user enters his or her search term. I want to process them by creating hashmaps linking certain keywords to a collection of records which contain the articles themselves and the number of times the word appears in the article. But, how would I determine the keywords? How can I tell which words are meaningless and which aren't?
The concept of "what words should I ignore?" is generally named stopwords. The best search engines do not use stopwords. If I am a fan of the band "The The", I would be bummed if your search engine couldn't find them. Also, searching for exact phrases can be screwed up by a naive stopwords implementation.
By the way, the hashmap you're talking about is called an inverted index. I recommend reading this (free, online) book to get an introduction to how search engines are built: http://nlp.stanford.edu/IR-book/information-retrieval-book.html
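Roughly speaking, an inverted index in plain Java is just a map from each term to the documents containing it, together with a per-document count:

import java.util.*;

public class InvertedIndex {
    // term -> (document id -> number of occurrences in that document)
    private final Map<String, Map<String, Integer>> index = new HashMap<>();

    public void add(String docId, String text) {
        for (String token : text.toLowerCase().split("\\W+")) {
            if (token.isEmpty()) continue;
            index.computeIfAbsent(token, t -> new HashMap<>())
                 .merge(docId, 1, Integer::sum);
        }
    }

    // Returns docId -> count for the given word, or an empty map if unknown.
    public Map<String, Integer> search(String word) {
        return index.getOrDefault(word.toLowerCase(), Collections.emptyMap());
    }
}

A real engine layers term positions and compression on top of this, which the book above covers.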
In Solr, I believe these are called 'stopwords'.
I believe they just use a text file to define all the words that they will not search on.
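If you roll your own in Java, the same text-file approach boils down to a set lookup. A minimal sketch, assuming a stopwords.txt with one word per line:

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.*;
import java.util.*;
import java.util.stream.Collectors;

public class StopwordFilter {
    private final Set<String> stopwords;

    public StopwordFilter(Path stopwordFile) throws IOException {
        // Load the stopword list once, lowercased, into a set.
        stopwords = Files.readAllLines(stopwordFile, StandardCharsets.UTF_8).stream()
                .map(String::trim)
                .map(String::toLowerCase)
                .collect(Collectors.toSet());
    }

    // Drop any token that appears in the stopword list.
    public List<String> filter(List<String> tokens) {
        List<String> kept = new ArrayList<>();
        for (String token : tokens) {
            if (!stopwords.contains(token.toLowerCase())) {
                kept.add(token);
            }
        }
        return kept;
    }
}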
A small extract regarding stopwords from the NLTK book, Chapter 2:
There is also a corpus of stopwords, that is, high-frequency words
like the, to and also that we sometimes want to filter out of a
document before further processing. Stopwords usually have little
lexical content, and their presence in a text fails to distinguish it
from other texts.
>>> from nltk.corpus import stopwords
>>> stopwords.words('english')
['a', "a's", 'able', 'about', 'above', 'according', 'accordingly', 'across',
'actually', 'after', 'afterwards', 'again', 'against', "ain't", 'all', 'allow',
'allows', 'almost', 'alone', 'along', 'already', 'also', 'although', 'always', ...]
Stopwords are one thing you should use. Lots of stopword lists are available on the web.
However, I'm writing an answer because the previous ones didn't mention TF-IDF, which is a metric for how important a word is in the context of your corpus of documents.
A word is more likely to be a keyword for a document if it appears a lot in it (term frequency) and doesn't appear frequently in other documents (inverse document frequency). This way words like "a", "the" and "where" are naturally ignored, because they appear in every document.
P.S. On a related topic, you'll probably be interested in other lists too, e.g. swearwords :)
P.P.S. Hashmaps are a good thing, but you should also check suffix trees for your task.
I have a large set of documents stored inside a Lucene index and I am using a customAnalyzer which basically does tokenization and stemming for the documents content.
Now, if I search inside the documents for the word "love", I get results where love is being used either as a noun or a verb, while I want only those documents which use love only as a verb.
How can such a feature be implemented, where I could also specify the part of speech of the word along with the word, so that the results contain only documents where love is used as a verb and not as a noun?
I can think of a way to initially part-of-speech tag each word of the document and store it by appending the POS with the word with a '_' or something and then to search accordingly, but wanted to know if there is a smarter way to do this in Lucene.
I can think of the following approaches.
Approach 1
Just like you mentioned: Recognize and append the part-of-speech tag to the actual term while indexing. Do the same while querying.
Let me discuss the cons associated with it.
Cons:
1) Future requirements might demand results irrespective of part of speech; an index that contains the modified terms won't support that.
2) You might want to execute a BooleanQuery like "term: noun or adjective". You'll have to write the query expander yourself.
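For what it's worth, here is a rough sketch of Approach 1 as a Lucene TokenFilter. The exact API depends on your Lucene version, and PosTagger is a hypothetical wrapper around whatever tagger you use (e.g. OpenNLP); note that a real tagger needs sentence context, so in practice you would tag the full text before it reaches the analyzer.

import java.io.IOException;
import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

// Appends a POS tag to each term, e.g. "love" -> "love_VB", so the tag
// becomes part of the indexed term. Do the same rewriting on the query side.
public final class PosAppendingFilter extends TokenFilter {

    // Hypothetical tagger abstraction; back it with OpenNLP, Stanford NLP, etc.
    public interface PosTagger {
        String tag(String term);
    }

    private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);
    private final PosTagger tagger;

    public PosAppendingFilter(TokenStream input, PosTagger tagger) {
        super(input);
        this.tagger = tagger;
    }

    @Override
    public boolean incrementToken() throws IOException {
        if (!input.incrementToken()) {
            return false;
        }
        String term = termAtt.toString();
        termAtt.setEmpty().append(term).append('_').append(tagger.tag(term));
        return true;
    }
}

At query time you append the same suffix to the query term, e.g. search for love_VB.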
Approach 2
Try using the Payloads feature of Lucene.
Here is a brief tutorial on Lucene Payloads.
Steps to address your use case:
1) Store the part-of-speech tag in the form of a Payload.
2) Have a custom Similarity class for each part-of-speech tag.
3) Based on the query, assign the corresponding custom Similarity to the IndexSearcher. For example, assign a NounBoostingSimilarity for a noun query.
4) Boost or reduce the score of a document based on the payload. An example is given in the tutorial above.
5) Write a custom collector to filter out documents whose scores don't conform to the above score-boosting logic.
The pro of this approach is that the index stays compatible with any other normal search.
Cons:
1) Maintenance overhead: you have to maintain a separate IndexSearcher for each similarity.
2) The solution is somewhat complicated to code.
To be frank, I'm not satisfied with my own solution, but just wanted to let you know that there exists another way. It all depends on your scenario, whether the project is an academic one-time project or a commercial one, etc.
So we have many street names. They come in a file. I'd probably cache them when booting the server up in production. The search should be autocomplete-like, e.g. you type 'lang' and you get maybe 8 hits: langstr, langestr, etc.
What you are looking for is some sort of compressed trie representation. You might want to look into succinct tries or DAWGs as a starting point, as they give excellent efficiency and very good space usage.
Hope this helps!
Autocomplete is usually implemented using one of the following:
Trees. By indexing the searchable text in a tree structure (prefix tree, suffix tree, DAWG, etc.) one can execute very fast searches at the expense of memory. The tree traversal can be adapted for approximate matching.
Pattern Partitioning. By partitioning the text into tokens (ngrams) one can execute searches for pattern occurrences using a simple hashing scheme.
Filtering. Find a set of potential matches and then apply a sequential algorithm to check each candidate.
Take a look at completely, a Java autocomplete library that implements some of the latter concepts.
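To get a feel for the tree-based option, here is a minimal (uncompressed) prefix-trie sketch in Java that returns up to a fixed number of completions:

import java.util.*;

public class AutocompleteTrie {
    private static final class Node {
        Map<Character, Node> children = new HashMap<>();
        boolean isWord;
    }

    private final Node root = new Node();

    public void insert(String word) {
        Node node = root;
        for (char c : word.toCharArray()) {
            node = node.children.computeIfAbsent(c, k -> new Node());
        }
        node.isWord = true;
    }

    // Collect up to `limit` words that start with the given prefix.
    public List<String> complete(String prefix, int limit) {
        Node node = root;
        for (char c : prefix.toCharArray()) {
            node = node.children.get(c);
            if (node == null) return Collections.emptyList();
        }
        List<String> results = new ArrayList<>();
        collect(node, new StringBuilder(prefix), results, limit);
        return results;
    }

    private void collect(Node node, StringBuilder current, List<String> results, int limit) {
        if (results.size() >= limit) return;
        if (node.isWord) results.add(current.toString());
        for (Map.Entry<Character, Node> e : node.children.entrySet()) {
            current.append(e.getKey());
            collect(e.getValue(), current, results, limit);
            current.deleteCharAt(current.length() - 1);
        }
    }
}

For example, after insert("langstr") and insert("langestr"), complete("lang", 8) returns both. A compressed trie or DAWG stores the same data with shared edges, which is what makes it practical for a large street-name list.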
I am new to Lucene and my project is to provide specialized search for a set
of booklets. I am using Lucene Java 3.1.
The basic idea is to help people know where to look for information in the (rather
large and dry) booklets by consulting the index to find out what booklet and page numbers match their query. Each Document in my index represents a particular page in one of the booklets.
So far I have been able to successfully scrape the raw text from the booklets,
insert it into an index, and query it just fine using StandardAnalyzer on both
ends.
So here's my general question:
Many queries on the index will involve searching for place names mentioned in the
booklets. Some place names use notational variants. For instance, in the body text
it will be called "Ship Creek" on one page, but in a map diagram elsewhere it might be listed as "Ship Cr." or even "Ship Ck.". What I need to know is how to approach treating the two consecutive words as a single term and add the notational variants as synonyms.
My goal is of course to search with any of the variants and catch all occurrences. If I search for (Ship AND (Cr Ck Creek)) this does not give me what I want because other words may appear between [ship] and [cr]/[ck]/[creek] leading to false positives.
So, in a nutshell I probably still need the basic stuff provided by StandardAnalyzer, but with specific term grouping to emit place names as complete terms and possibly insert synonyms to cover the variants.
For instance, the text "...allowed from the mouth of Ship Creek upstream to ..." would
result in tokens [allowed],[mouth],[ship creek],[upstream]. Perhaps via a TokenFilter along
the way, the [ship creek] term would expand into [ship creek][ship ck][ship cr].
As a bonus it would be nice to treat the trickier text "..except in Ship, Bird, and
Campbell creeks where the limit is..." as [except],[ship creek],[bird creek],
[campbell creek],[where],[limit].
This seems like a pretty basic use case, but it's not clear to me how I might be able to use existing components from Lucene contrib or SOLR to accomplish this. Should the detection and merging be done in some kind of TokenFilter? Do I need a custom Analyzer implementation?
Some of the term grouping can probably be done heuristically ([...],[creek] becomes a single [... creek] term), but I also have an exhaustive list of places mentioned in the text if that helps.
Thanks for any help you can provide.
You can use Solr's Synonym Filter. Just set up "creek" to have synonyms "ck", "cr" etc.
I'm not aware of any existing functionality to solve your "bonus" problem.
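For reference, here is a sketch of what that setup could look like (factory names are from the Solr 3.x era, so adjust to your version; the field type name is just an example). In synonyms.txt, one rule per line; this form makes the variants interchangeable:

creek, ck, cr

Then reference it from the analyzer chain of the relevant field type in schema.xml:

<fieldType name="text_places" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
  </analyzer>
</fieldType>

With expand="true", any of the variants in a document or query is expanded to all of them, so a search for "creek" should also hit pages that only say "Ck." once the tokenizer drops the trailing period. Note this only covers the single-word variants; the multi-word grouping ([ship creek] as one term) still needs custom analysis, as mentioned for the bonus problem.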