How to tell if a word is meaningless in text? - java

I'm creating a mini search engine in Java which basically grabs all of the RSS feeds that a user specifies and then allows him or her to choose a single word to search for. Since the RSS feed documents are fairly limited in number, I'm thinking about processing the documents first, before the user enters his or her search term. I want to process them by creating hashmaps linking certain keywords to a collection of records that contain the articles themselves and the number of times the word appears in each article. But how would I determine the keywords? How can I tell which words are meaningless and which aren't?

The concept of "what words should I ignore?" is generally called stopwords. The best search engines do not use stopwords. If I am a fan of the band "The The", I would be bummed if your search engine couldn't find them. Also, searching for exact phrases can be screwed up by a naive stopwords implementation.
By the way, the hashmap you're talking about is called an inverted index. I recommend reading this (free, online) book to get an introduction to how search engines are built: http://nlp.stanford.edu/IR-book/information-retrieval-book.html
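To make the hashmap idea concrete, here is a minimal sketch of an in-memory inverted index in plain Java (class and field names are just illustrative, not from any library):

import java.util.*;

// One posting per (word, article): which article and how many occurrences.
class Posting {
    final String articleId;
    int count;
    Posting(String articleId) { this.articleId = articleId; }
}

class InvertedIndex {
    private final Map<String, Map<String, Posting>> index = new HashMap<>();
    private final Set<String> stopwords;

    InvertedIndex(Set<String> stopwords) { this.stopwords = stopwords; }

    void addArticle(String articleId, String text) {
        for (String token : text.toLowerCase().split("\\W+")) {
            if (token.isEmpty() || stopwords.contains(token)) continue; // skip stopwords
            index.computeIfAbsent(token, t -> new HashMap<>())
                 .computeIfAbsent(articleId, Posting::new)
                 .count++;
        }
    }

    Collection<Posting> search(String word) {
        return index.getOrDefault(word.toLowerCase(), Collections.emptyMap()).values();
    }
}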

In Solr, I believe these are called 'stopwords'.
I believe they just use a text file to define all the words that they will not search on.

A small extract about stopwords from the NLTK book, Ch. 2:
There is also a corpus of stopwords, that is, high-frequency words
like the, to and also that we sometimes want to filter out of a
document before further processing. Stopwords usually have little
lexical content, and their presence in a text fails to distinguish it
from other texts.
>>> from nltk.corpus import stopwords
>>> stopwords.words('english')
['a', "a's", 'able', 'about', 'above', 'according', 'accordingly', 'across',
'actually', 'after', 'afterwards', 'again', 'against', "ain't", 'all', 'allow',
'allows', 'almost', 'alone', 'along', 'already', 'also', 'although', 'always', ...]

Stopwords are one thing you should use. Lots of stopword lists are available on the web.
However, I'm writing an answer because the previous ones didn't mention TF-IDF, which is a metric for how important a word is in the context of your corpus of documents.
A word is more likely to be a keyword for a document if it appears a lot in it (term frequency) and doesn't appear frequently in other documents (inverse document frequency). This way, words like 'a', 'the', and 'where' are naturally ignored, because they appear in nearly every document.
P.S. On a related topic, you'll probably be interested in other lists, e.g. swearwords :)
P.P.S. Hashmaps are a good thing, but you should also check out suffix trees for your task.
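To illustrate the TF-IDF idea, here is a rough sketch in Java (names and the exact weighting are illustrative; real engines use smoothed variants of the formula):

import java.util.*;

// docs maps a document id to its list of (lower-cased, tokenized) terms.
class TfIdf {
    static Map<String, Double> scoresFor(String term, Map<String, List<String>> docs) {
        long docsWithTerm = docs.values().stream().filter(d -> d.contains(term)).count();
        if (docsWithTerm == 0) return Collections.emptyMap();
        double idf = Math.log((double) docs.size() / docsWithTerm); // inverse document frequency

        Map<String, Double> scores = new HashMap<>();
        for (Map.Entry<String, List<String>> e : docs.entrySet()) {
            long tf = e.getValue().stream().filter(term::equals).count(); // term frequency
            if (tf > 0) scores.put(e.getKey(), tf * idf);
        }
        return scores; // words like 'the' get an idf close to 0, so they score near zero everywhere
    }
}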

Related

Is it possible to search for words inside a Lucene index by part of speech

I have a large set of documents stored inside a Lucene index and I am using a custom Analyzer which basically does tokenization and stemming for the documents' content.
Now, if I search inside the documents for the word "love", I get results where love is being used either as a noun or a verb, while I want only those documents which use love only as a verb.
How can such a feature be implemented, where I could also mention the part of speech of the word along with the word itself, so that the results have only love used as a verb and not as a noun?
I can think of a way to initially part-of-speech tag each word of the document and store it by appending the POS tag to the word with a '_' or something, and then to search accordingly, but wanted to know if there is a smarter way to do this in Lucene.
I can think of the following approaches.
Approach 1
Just like you mentioned: Recognize and append the part-of-speech tag to the actual term while indexing. Do the same while querying.
I would like to discuss the associated cons.
Cons:
1) Future requirements might demand that you get results irrespective of part of speech. An index that contains modified terms won't support that.
2) You might want to execute a BooleanQuery like "term: noun or adjective". You'd have to write the query expander yourself.
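For what it's worth, a rough sketch of Approach 1 as a Lucene TokenFilter (PosTagger is a hypothetical helper you would back with a real tagger such as OpenNLP; note that proper POS tagging needs sentence context, so tagging term-by-term is a simplification):

import java.io.IOException;
import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

// Sketch only: rewrites every term as "term_POS", e.g. "love" -> "love_VB".
final class PosSuffixFilter extends TokenFilter {
    interface PosTagger { String tagOf(String term); } // hypothetical

    private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);
    private final PosTagger tagger;

    PosSuffixFilter(TokenStream input, PosTagger tagger) {
        super(input);
        this.tagger = tagger;
    }

    @Override
    public boolean incrementToken() throws IOException {
        if (!input.incrementToken()) {
            return false;
        }
        String term = termAtt.toString();
        termAtt.setEmpty().append(term).append('_').append(tagger.tagOf(term));
        return true;
    }
}

Use the same filter at query time so that "love_VB" is what actually gets searched.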
Approach 2
Try using the Payloads feature of Lucene.
Here is a brief tutorial on Lucene Payloads.
Steps to address your use case:
1) Store the part-of-speech tag in the form of a Payload.
2) Have custom Similarity classes for each part-of-speech tag.
3) Based on the query, assign the corresponding CustomSimilarity to the IndexSearcher. For example, assign NounBoostingSimilarity for a noun query.
4) Boost or reduce the score of a document based on the Payload. An example is given in the above tutorial.
5) Write a custom Collector to filter out the documents whose scores do not conform to the above score-boosting logic.
The pro of this approach is that the index remains compatible with any other normal search.
Cons:
1) Maintenance overhead: you have to maintain multiple IndexSearchers, one for each similarity.
2) The solution is somewhat complicated to code.
To be frank, I'm not satisfied with my own solution, but just wanted to let you know that there exists another way. It all depends on your scenario, whether the project is an academic one-time project or a commercial one, etc.
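For step 1 of Approach 2, here is a hedged sketch of a filter that attaches the tag as a payload (again with a hypothetical PosTagger; the PayloadAttribute API shown is the Lucene 4+ one, older versions use the Payload class instead):

import java.io.IOException;
import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.tokenattributes.PayloadAttribute;
import org.apache.lucene.util.BytesRef;

// Sketch only: stores a one-byte POS code alongside each term instead of rewriting the term.
final class PosPayloadFilter extends TokenFilter {
    interface PosTagger { byte codeOf(String term); } // hypothetical, e.g. 1 = noun, 2 = verb

    private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);
    private final PayloadAttribute payloadAtt = addAttribute(PayloadAttribute.class);
    private final PosTagger tagger;

    PosPayloadFilter(TokenStream input, PosTagger tagger) {
        super(input);
        this.tagger = tagger;
    }

    @Override
    public boolean incrementToken() throws IOException {
        if (!input.incrementToken()) {
            return false;
        }
        payloadAtt.setPayload(new BytesRef(new byte[] { tagger.codeOf(termAtt.toString()) }));
        return true;
    }
}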

How to approach phrase queries and term grouping

I am new to Lucene and my project is to provide specialized search for a set
of booklets. I am using Lucene Java 3.1.
The basic idea is to help people know where to look for information in the (rather
large and dry) booklets by consulting the index to find out what booklet and page numbers match their query. Each Document in my index represents a particular page in one of the booklets.
So far I have been able to successfully scrape the raw text from the booklets,
insert it into an index, and query it just fine using StandardAnalyzer on both
ends.
So here's my general question:
Many queries on the index will involve searching for place names mentioned in the
booklets. Some place names use notational variants. For instance, in the body text
it will be called "Ship Creek" on one page, but in a map diagram elsewhere it might be listed as "Ship Cr." or even "Ship Ck.". What I need to know is how to treat the two consecutive words as a single term and add the notational variants as synonyms.
My goal is of course to search with any of the variants and catch all occurrences. If I search for (Ship AND (Cr Ck Creek)) this does not give me what I want because other words may appear between [ship] and [cr]/[ck]/[creek] leading to false positives.
So, in a nutshell I probably still need the basic stuff provided by StandardAnalyzer, but with specific term grouping to emit place names as complete terms and possibly insert synonyms to cover the variants.
For instance, the text "...allowed from the mouth of Ship Creek upstream to ..." would
result in tokens [allowed],[mouth],[ship creek],[upstream]. Perhaps via a TokenFilter along
the way, the [ship creek] term would expand into [ship creek][ship ck][ship cr].
As a bonus it would be nice to treat the trickier text "..except in Ship, Bird, and
Campbell creeks where the limit is..." as [except],[ship creek],[bird creek],
[campbell creek],[where],[limit].
This seems like a pretty basic use case, but it's not clear to me how I might be able to use existing components from Lucene contrib or SOLR to accomplish this. Should the detection and merging be done in some kind of TokenFilter? Do I need a custom Analyzer implementation?
Some of the term grouping can probably be done heuristically (a token followed by [creek] becomes [<token> creek]), but I also have an exhaustive list of places mentioned in the text if that helps.
Thanks for any help you can provide.
You can use Solr's Synonym Filter. Just set up "creek" to have synonyms "ck", "cr" etc.
I'm not aware of any existing functionality to solve your "bonus" problem.
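If you stay at the Lucene level rather than Solr, here is a hedged sketch of the synonym setup (this uses the SynonymMap builder from the later analyzers-common module, not the Lucene 3.1 contrib layout; with Solr you would instead put a line like creek,cr,ck in synonyms.txt and enable expansion):

import java.io.IOException;
import org.apache.lucene.analysis.synonym.SynonymMap;
import org.apache.lucene.util.CharsRef;

// Sketch only: builds a synonym map so that "cr" and "ck" also index and match as "creek".
class CreekSynonyms {
    static SynonymMap build() throws IOException {
        SynonymMap.Builder builder = new SynonymMap.Builder(true); // true = dedup entries
        builder.add(new CharsRef("cr"), new CharsRef("creek"), true); // keep the original token too
        builder.add(new CharsRef("ck"), new CharsRef("creek"), true);
        return builder.build(); // wrap your TokenStream with a SynonymFilter using this map
    }
}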

Spelling correction for data normalization in Java

I am looking for a Java library to do some initial spell checking / data normalization on user generated text content, imagine the interests entered in a Facebook profile.
This text will be tokenized at some point (before or after spell correction, whatever works better) and some of it used as keys to search for (exact match). It would be nice to cut down misspellings and the like to produce more matches. It would be even better if the correction would perform well on tokens longer than just one word, e.g. "trinking coffee" would become "drinking coffee" and not "thinking coffee".
I found the following Java libraries for doing spelling correction:
JAZZY does not seem to be under active development. Also, the dictionary-distance based approach seems inadequate because of the use of non-standard language in social network profiles and multi-word tokens.
APACHE LUCENE seems to have a statistical spell checker that should be much better suited. The question here would be how to create a good dictionary? (We are not using Lucene otherwise, so there is no existing index.)
Any suggestions are welcome!
What you want to implement is not a spelling corrector but a fuzzy search. Peter Norvig's essay is a good starting point to build a fuzzy search from candidates checked against a dictionary.
Alternatively have a look at BK-Trees.
An n-gram index (used by Lucene) produces better results for longer words. The approach of producing candidates up to a given edit distance will probably work well enough for words found in normal text, but will not work well enough for names, addresses and scientific texts. It will increase your index size, though.
If you have the texts indexed, you have your text corpus (your dictionary). Only what is in your data can be found anyway. You need not use an external dictionary.
A good resource is Introduction to Information Retrieval - Dictionaries and tolerant retrieval. There is a short description of context-sensitive spelling correction.
With regards to populating a Lucene index as the basis of a spell checker, this is a good way to solve the problem. Lucene has an out-of-the-box SpellChecker you can use.
There are plenty of word dictionaries available on the net that you can download and use as the basis for your Lucene index. I would suggest supplementing these with a number of domain-specific texts as well, e.g. if your users are medics then maybe supplement the dictionary with source texts from medical theses and publications.
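To illustrate the out-of-the-box SpellChecker mentioned above, a hedged sketch (exact method signatures differ a bit between Lucene versions; words.txt is an assumed plain-text word list, one word per line):

import java.io.File;
import java.io.IOException;
import org.apache.lucene.search.spell.PlainTextDictionary;
import org.apache.lucene.search.spell.SpellChecker;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

// Sketch only: builds the spell checker's n-gram index from a word list, then asks for suggestions.
class SpellDemo {
    public static void main(String[] args) throws IOException {
        Directory dir = FSDirectory.open(new File("spell-index"));
        SpellChecker spell = new SpellChecker(dir);
        // If you already have a Lucene index, a LuceneDictionary over one of its fields
        // can replace the plain-text dictionary here.
        spell.indexDictionary(new PlainTextDictionary(new File("words.txt")));
        for (String s : spell.suggestSimilar("trinking", 5)) {
            System.out.println(s); // hopefully includes "drinking"
        }
        spell.close();
    }
}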
Try Peter Norvig's spell checker.
You can hit the Gutenberg project or the Internet Archive for lots and lots of corpus text.
Also, I think that the Wiktionary could help you. You can even make a direct download.
http://code.google.com/p/google-api-spelling-java is a good Java spell checking library, but I agree with Thomas Jung, that may not be the answer to your problem.

Full Text Search like Google

I would like to implement full-text-search in my off-line (android) application to search the user generated list of notes.
I would like it to behave just like Google (since most people are already used to querying Google)
My initial requirements are:
Fast: like Google or as fast as possible, having 100,000 documents with 200 words each.
Searching for two words should only return documents that contain both words (not just one word) (unless the OR operator is used)
Case insensitive (aka: normalization): If I have the word 'Hello' and I search for 'hello' it should match.
Diacritical mark insensitive: If I have the word 'así', a search for 'asi' should match. In Spanish, many people either omit diacritical marks or place them incorrectly.
Stop word elimination: To keep the index from becoming huge, meaningless words like 'and', 'the' or 'for' should not be indexed at all.
Dictionary substitution (aka: stem words): Similar words should be indexed as one. For example, instances of 'hungrily' and 'hungry' should be replaced with 'hunger'.
Phrase search: If I have the text 'Hello world!' a search of '"world hello"' should not match it but a search of '"hello world"' should match.
Search all fields (in multifield documents) if no field specified (not just a default field)
Auto-completion in search results while typing to give popular searches. (just like Google Suggest)
How may I configure a full-text-search engine to behave as much as possible as Google?
(I am mostly interested in Open Source, Java and in particular Lucene)
I think Lucene can address your requirements. You should also consider using Solr, which has similar functionality and is much easier to set up.
I will discuss each requirement separately, using Lucene. I believe Solr has similar mechanisms.
Fast: like Google or as fast as possible, having 100,000 documents with 200 words each.
This is a reasonable index size both for Lucene and Solr, enabling retrieval at several tens of milliseconds per query.
Searching for two words should only return documents that contain both words (not just one word) (unless the OR operator is used)
You can do that using a BooleanQuery with MUST as default in Lucene.
The next four requirements can be handled by customizing a Lucene Analyzer:
Case insensitive (aka: normalization): If I have the word 'Hello' and I search for 'hello' it should match.
A LowerCaseFilter can be used for this.
Diacritical mark insensitive: If I have the word 'así', a search for 'asi' should match. In Spanish, many people either omit diacritical marks or place them incorrectly.
This requires Unicode normalization followed by diacritic removal. You can build a custom Analyzer for this.
Stop word elimination: To keep the index from becoming huge, meaningless words like 'and', 'the' or 'for' should not be indexed at all.
A StopFilter removes stop words in Lucene.
Dictionary substitution (aka: stem words): Similar words should be indexed as one. For example, instances of 'hungrily' and 'hungry' should be replaced with 'hunger'.
Lucene has many Snowball Stemmers. One of them may be appropriate.
Phrase search: If I have the text 'Hello world!' a search of '"world hello"' should not match it but a search of '"hello world"' should match.
This is covered by Lucene's specialized PhraseQuery.
As you can see, Lucene covers all of the required functionality. To get a more general picture, I suggest the book Lucene in Action, The Apache Lucene Wiki or The Lucid Imagination Site.
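Putting the analyzer-level requirements together, here is a hedged sketch of such a custom Analyzer (Lucene 5.x-style API; several of these classes live in different packages in other versions, and using ASCIIFoldingFilter for the diacritics step is my assumption, the answer above only says "custom Analyzer"):

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.core.LowerCaseFilter;
import org.apache.lucene.analysis.core.StopFilter;
import org.apache.lucene.analysis.miscellaneous.ASCIIFoldingFilter;
import org.apache.lucene.analysis.snowball.SnowballFilter;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.analysis.standard.StandardTokenizer;

// Sketch only: lower-case, fold diacritics, drop stop words, stem.
// Use the same Analyzer for both indexing and querying.
class NormalizingAnalyzer extends Analyzer {
    @Override
    protected TokenStreamComponents createComponents(String fieldName) {
        Tokenizer source = new StandardTokenizer();
        TokenStream stream = new LowerCaseFilter(source);                 // 'Hello' -> 'hello'
        stream = new ASCIIFoldingFilter(stream);                          // 'así'   -> 'asi'
        stream = new StopFilter(stream, StandardAnalyzer.STOP_WORDS_SET); // drop 'and', 'the', 'for', ...
        stream = new SnowballFilter(stream, "English");                   // crude stemming
        return new TokenStreamComponents(source, stream);
    }
}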
A lot of these behaviors are default for Lucene. The first (including all terms) is not, but you can force this behavior by setting the default operator:
MultiFieldQueryParser parser = new MultiFieldQueryParser(fields, new StandardAnalyzer());
parser.setDefaultOperator(QueryParser.AND_OPERATOR);
I know that items 2, 4, and 6 are possible, and IIRC, they happen by default. I'm not sure about items 3 and 5, but Lucene offers a ton of customization options, so I'd suggest implementing a proof-of-concept with your data to see if it meets these requirements as well.
Buy a Google Search Appliance. Or, as the comments say, use Lucene like you already mentioned.
HyperSQL is a pure-Java SQL implementation that can be run quite easily, as can SQLite. You could use their full-text capabilities and querying to re-create the wheel, but as the other commenters have pointed out, an existing implementation is probably best.
Unless you buy a search engine, you have Lucene, Nutch, Apache Solr and a few others.

Can Lucene return several search results from a single indexed file?

I am using Lucene to index and search a small number of large documents. Using the demo from the Lucene site I have indexed the documents and am able to search them. However, the search results are not particularly useful, as they point to the file of the document. With very large documents this doesn't narrow things down much.
I am wondering if Lucene can index these very large documents and create an abstraction over them which provides much more fine-grained results.
An example might better explain what I mean. Consider a very large book, such as the Bible. One file contains the entire text of the Bible, so with the demo, the result of searching for say, 'Damascus' would point to the file. What I would like to do is to retain the large document, but searches would return results pointing to a Book, Chapter or even as precise as a Verse. So a search for 'Damascus' could return (among others) Book 23, Chapter 7, Verse 8.
Is this possible (and best-practice in Lucene usage), or should I instead attempt to section the large document into many small files to index?
If it makes any difference, I am using Java Lucene 2.9.0 and am indexing HTML files approximately 1MB - 4MB in size. That is not large in terms of file size, but it is large relative to a person reading it.
I don't think I've explained this as well as I could. Here goes for another example.
Say I take my large HTML file, and (for argument's sake) the search term 'Damascus' appears 3 times: once on line 100 within a <div> tag, once on line 2000 within a <p> tag, and once on line 5000 within an <h1> tag. Is it possible to index with Lucene, such that there will be 3 results, and they can point to the specific element the term was within?
I don't think I want to provide a different document result for the term. So if the term 'Damascus' appeared twice within a specific <div>, there would only be one match.
It appears from a comment from Kragen that what I would want to do is parse the HTML when Lucene is going through the indexing phase. Then I can decide the chunk I want to consider as one document from what is read in by the parser. So if I see a div with a certain class I can begin a new Lucene document and it will be returned as a separate hit when a word within the div content is searched on.
Does this sound like what I want to do, and is it possible?
Yes - Lucene records the offset of matching terms in a file, so that can be used to figure out where in the indexed content you need to look for matches.
There is a Lucene.Highlight add-on that does this exact task for you - try this article. There are also a couple of questions on Stack Overflow concerning hit highlighting (many of these are tailored to use with web apps and so also do things like surrounding matching words with <b> tags).
UPDATE: Depending on how you search your index, you might also find that it's a good idea to split your large documents into smaller sections (for example chapters) as well - however, this is more a question of how you want to organise, prioritise and present your results to the end user.
For example, suppose a user does a search for "foo" and there are 2 books containing that term. The first book (book A) might contain 2 chapters, each of which has many references to "foo", while the term is barely mentioned in the rest of the book; the second book (book B) contains many references to "foo", but they are scattered around the whole book. If you index by book, then you will probably find that book B is the first hit; indexing by chapter, you are likely to find that the 2 chapters from book A are the first 2 hits, followed by the chapters from book B.
Finally, the user will be presented with 1 hit per matching document in your index - if you want to present your users with a list of matching books then index by book, but you might find it more appropriate to present the user with a list of matching chapters, in which case index by chapter.
One way to do this is to create several documents out of a single book. The documents could represent books, chapters or verses. As the text need not be unique, this is what I would do.
This way, the first verse in the first chapter in the book of Genesis will be indexed four times: in the whole Bible, in the book of Genesis, in the first chapter, and as the verse itself.
A subtlety here is the exact goal of retrieval:
Do you want just to display the search keywords in context to a user? In that case, consider using a Lucene highlighter. If you need the retrieval to be further used (i.e. take the retrieved pointer to a chapter or verse and do some processing on that place in the text), I would go with the finer-grained documents as described before.
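If you go the finer-grained route, here is a hedged sketch of what each verse-level Document could carry (Lucene 4.x+ field classes; with the question's Lucene 2.9 you would use new Field(...) with Store/Index flags instead):

import java.io.IOException;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.IndexWriter;

// Sketch only: one Lucene Document per verse, so every hit already carries its location.
class VerseIndexer {
    static void indexVerse(IndexWriter writer, String book, int chapter, int verse, String text)
            throws IOException {
        Document doc = new Document();
        doc.add(new StringField("book", book, Field.Store.YES));                         // exact-match field
        doc.add(new StringField("chapter", Integer.toString(chapter), Field.Store.YES));
        doc.add(new StringField("verse", Integer.toString(verse), Field.Store.YES));
        doc.add(new TextField("text", text, Field.Store.YES));                           // analyzed, searchable content
        writer.addDocument(doc);
    }
}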
