I am new to Lucene and my project is to provide specialized search for a set
of booklets. I am using Lucene Java 3.1.
The basic idea is to help people know where to look for information in the (rather
large and dry) booklets by consulting the index to find out what booklet and page numbers match their query. Each Document in my index represents a particular page in one of the booklets.
So far I have been able to successfully scrape the raw text from the booklets,
insert it into an index, and query it just fine using StandardAnalyzer on both
ends.
So here's my general question:
Many queries on the index will involve searching for place names mentioned in the
booklets. Some place names use notational variants. For instance, in the body text
it will be called "Ship Creek" on one page, but in a map diagram elsewhere it might be listed as "Ship Cr." or even "Ship Ck.". What I need to know is how to approach treating the two consecutive words as a single term and add the notational variants as synonyms.
My goal is of course to search with any of the variants and catch all occurrences. If I search for (Ship AND (Cr Ck Creek)) this does not give me what I want because other words may appear between [ship] and [cr]/[ck]/[creek] leading to false positives.
So, in a nutshell I probably still need the basic stuff provided by StandardAnalyzer, but with specific term grouping to emit place names as complete terms and possibly insert synonyms to cover the variants.
For instance, the text "...allowed from the mouth of Ship Creek upstream to ..." would
result in tokens [allowed],[mouth],[ship creek],[upstream]. Perhaps via a TokenFilter along
the way, the [ship creek] term would expand into [ship creek][ship ck][ship cr].
As a bonus it would be nice to treat the trickier text "..except in Ship, Bird, and
Campbell creeks where the limit is..." as [except],[ship creek],[bird creek],
[campbell creek],[where],[limit].
This seems like a pretty basic use case, but it's not clear to me how I might be able to use existing components from Lucene contrib or SOLR to accomplish this. Should the detection and merging be done in some kind of TokenFilter? Do I need a custom Analyzer implementation?
Some of the term grouping can probably be done heuristically (e.g., any word followed by [creek] becomes a single [x creek] term),
but I also have an exhaustive list of places mentioned in the text if that helps.
Thanks for any help you can provide.
You can use Solr's Synonym Filter. Just set up "creek" to have synonyms "ck", "cr" etc.
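In Solr's schema.xml the wiring typically looks something like the sketch below (the field type name here is made up, and your analyzer chain may differ); multi-word synonyms are usually expanded at index time:

    <fieldType name="text_places" class="solr.TextField" positionIncrementGap="100">
      <analyzer type="index">
        <tokenizer class="solr.StandardTokenizerFactory"/>
        <filter class="solr.LowerCaseFilterFactory"/>
        <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt"
                ignoreCase="true" expand="true"/>
      </analyzer>
      <analyzer type="query">
        <tokenizer class="solr.StandardTokenizerFactory"/>
        <filter class="solr.LowerCaseFilterFactory"/>
      </analyzer>
    </fieldType>

The synonyms.txt file takes comma-separated groups and supports multi-word entries, so a line like "ship creek, ship cr, ship ck" (or simply "creek, cr, ck") covers the place-name variants.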
I'm not aware of any existing functionality to solve your "bonus" problem.
I'm looking for a search engine in which I can use my own similarity measure and tokenization approach. Lucene is often recommended as a good choice for this purpose, but I don't know much about it. I searched the internet for tutorials on recent versions of Lucene, but most of the pages are from a few years ago. Some of my questions are as follows:
Is it possible to change the similarity measure, tokenization, and stemming approaches and use self-built classes in Lucene? If yes, how do I do that?
Is there any difference in how the text should be indexed for keyword search versus phrase search? Should I build two different indexes, one for keyword search and one for phrase search? (I think that if I remove stop words it will affect the results of phrase search, and if I don't remove them it will affect the results of keyword search, won't it?)
Any information about this topic is appreciated.
This is possible, yes, and we do it in a couple of solutions at my workplace. Here is a reasonable tutorial on how to do this. The tutorial uses Solr, which is a good Lucene-based implementation. To answer your questions directly:
Yes, there is a way to do this by overriding the relevant interfaces and providing your own implementations (see the tutorial). Tokenization can often be handled within Solr's default configuration without overriding any classes, depending on how unusual your tokenization needs to be.
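For the similarity part, a minimal sketch against the Lucene 3.x API might look like the following (the class name and the formula tweaks are placeholders for whatever scoring you actually want):

    import org.apache.lucene.search.DefaultSimilarity;

    // Hypothetical custom similarity: flattens the influence of term frequency.
    public class MySimilarity extends DefaultSimilarity {

        @Override
        public float tf(float freq) {
            // Any occurrence of a term counts the same, no matter how often it appears.
            return freq > 0 ? 1.0f : 0.0f;
        }

        @Override
        public float idf(int docFreq, int numDocs) {
            // Keep the default IDF, or plug your own formula in here.
            return super.idf(docFreq, numDocs);
        }
    }

It is then typically set on both sides, e.g. via IndexWriterConfig.setSimilarity(new MySimilarity()) at index time and IndexSearcher.setSimilarity(new MySimilarity()) at query time (in Solr the similarity class can be declared in schema.xml).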
Yes. Making an index that returns accurate results is largely a matter of understanding how your users will search it. That said, a large part of the complexity in how queries behave comes from people wanting the best matches to float to the top of the result list, which is done via scoring. Since it sounds like you're looking to override the scoring, this may not matter for you. Note, though, that by default Lucene scores a document with hits across multiple fields higher than one that matches exactly on a single field, so if you store data across many fields (and search across many fields by default) your search will feel less and less "accurate".
Full-text search against a single field tends to be pretty accurate for both phrase and keyword queries, but you'll end up with a pretty large index.
I'm not really sure whether my question is quite appropriate to post here, but I thought I'd give it a go.
I'm working on a project where I take text data from a public knowledge base and want to use this text to automatically expand tag based search queries with additional terms that are supposed to be relevant to the original query. The public knowledge base is basically a collection of data from Wikipedia; in my case the abstracts of 3.74 million articles.
In the beginning I simply performed a search based on the original query, fetched the words used in the matching articles, and did a simple term-frequency calculation to get the N most-used terms.
It seemed like a simple idea that worked to begin with, but as I tested more queries I started running into problems. It's clear that I need some kind of semantic analysis of my custom text collection, but I have no idea where to even begin with something like this. Any tools I find online that are supposed to do semantic analysis like this only work on a predefined collection of texts. As stated, I need something that can process a custom collection and later use the resulting index to perform searches.
Any ideas or suggestions?
I'm creating a mini search engine in Java which basically grabs all of the RSS feeds that a user specifies and then allows him or her to choose a single word to search for. Since the RSS feed documents are fairly limited in number, I'm thinking about processing the documents first before the user enters his or her search term. I want to process them by creating hashmaps linking certain keywords to a collection of records which contain the articles themselves and the number of times the word appears in the article. But, how would I determine the keywords? How can I tell which words are meaningless and which aren't?
The concept of "what words should I ignore?" is generally named stopwords. The best search engines do not use stopwords. If I am a fan of the band "The The", I would be bummed if your search engine couldn't find them. Also, searching for exact phrases can be screwed up by a naive stopwords implementation.
By the way, the hashmap you're talking about is called an inverted index. I recommend reading this (free, online) book to get an introduction to how search engines are built: http://nlp.stanford.edu/IR-book/information-retrieval-book.html
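To make the idea concrete, a toy version of that structure might look like the sketch below (names are made up, and the tokenization is deliberately naive):

    import java.util.HashMap;
    import java.util.Map;

    // Toy inverted index: term -> (article id -> number of occurrences in that article).
    public class InvertedIndex {
        private final Map<String, Map<String, Integer>> postings =
                new HashMap<String, Map<String, Integer>>();

        public void addDocument(String docId, String text) {
            // Extremely naive tokenization; a real engine would use a proper analyzer.
            for (String token : text.toLowerCase().split("\\W+")) {
                if (token.isEmpty()) continue;
                Map<String, Integer> docCounts = postings.get(token);
                if (docCounts == null) {
                    docCounts = new HashMap<String, Integer>();
                    postings.put(token, docCounts);
                }
                Integer count = docCounts.get(docId);
                docCounts.put(docId, count == null ? 1 : count + 1);
            }
        }

        // Returns article id -> occurrence count for a single search word.
        public Map<String, Integer> lookup(String word) {
            Map<String, Integer> hits = postings.get(word.toLowerCase());
            return hits == null ? new HashMap<String, Integer>() : hits;
        }
    }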
In Solr, I believe these are called 'stopwords'.
I believe they just use a text file to define all the words that they will not search on.
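If it helps, the relevant piece of a Solr schema.xml typically looks something like this (the field type name is made up), with stopwords.txt containing one word per line:

    <fieldType name="text_general" class="solr.TextField">
      <analyzer>
        <tokenizer class="solr.StandardTokenizerFactory"/>
        <filter class="solr.StopFilterFactory" words="stopwords.txt" ignoreCase="true"/>
        <filter class="solr.LowerCaseFilterFactory"/>
      </analyzer>
    </fieldType>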
A small extract about stopwords from Ch. 2 of the NLTK book:
There is also a corpus of stopwords, that is, high-frequency words
like the, to and also that we sometimes want to filter out of a
document before further processing. Stopwords usually have little
lexical content, and their presence in a text fails to distinguish it
from other texts.
>>> from nltk.corpus import stopwords
>>> stopwords.words('english')
['a', "a's", 'able', 'about', 'above', 'according', 'accordingly', 'across',
'actually', 'after', 'afterwards', 'again', 'against', "ain't", 'all', 'allow',
'allows', 'almost', 'alone', 'along', 'already', 'also', 'although', 'always', ...]
Stopwords are one thing you should use. Lots of stopword lists are available on the web.
However, I'm writing an answer because the previous ones didn't mention TF-IDF, which is a metric for how important a word is in the context of your corpus of documents.
A word is more likely to be a keyword for a document if it appears a lot in it (term frequency) and doesn't appear frequently in other documents (inverse document frequency). This way words like a, the, and where are naturally ignored, because they appear in every document.
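As a rough illustration (plain Java, everything here is made up for the example), the two pieces combine like this:

    import java.util.List;

    // Toy TF-IDF over an in-memory collection of tokenized documents.
    public class TfIdf {

        // How often the term occurs in this document.
        static int termFrequency(String term, List<String> docTokens) {
            int tf = 0;
            for (String t : docTokens) {
                if (t.equals(term)) tf++;
            }
            return tf;
        }

        // log(N / df): rare terms get a high weight, terms in every document get ~0.
        static double inverseDocumentFrequency(String term, List<List<String>> allDocs) {
            int df = 0;
            for (List<String> doc : allDocs) {
                if (doc.contains(term)) df++;
            }
            return df == 0 ? 0.0 : Math.log((double) allDocs.size() / df);
        }

        static double tfIdf(String term, List<String> doc, List<List<String>> allDocs) {
            return termFrequency(term, doc) * inverseDocumentFrequency(term, allDocs);
        }
    }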
P.S. On a related topic, you'll probably be interested in other lists too, e.g. swearwords :)
P.P.S. Hashmaps are a good thing, but you should also check out suffix trees for your task.
I have a large set of documents stored inside a Lucene index, and I am using a custom Analyzer which basically does tokenization and stemming on the documents' content.
Now, if I search inside the documents for the word "love", I get results where love is being used either as a noun or a verb, while I want only those documents which use love only as a verb.
How can such a feature be implemented where I could also mention the part of speech of the word along with the word itself, so that the results contain only documents using love as a verb and not as a noun?
I can think of one way: part-of-speech tag each word of the document first, store the POS by appending it to the word with a '_' or something, and then search accordingly. But I wanted to know whether there is a smarter way to do this in Lucene.
I can think of the following approaches.
Approach 1
Just as you mentioned: recognize the part-of-speech tag and append it to the actual term while indexing. Do the same while querying.
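A bare-bones sketch of such a filter against the 3.x TokenStream API might look like this (posOf() stands in for whatever POS tagger you actually plug in):

    import java.io.IOException;
    import org.apache.lucene.analysis.TokenFilter;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

    // Appends a part-of-speech suffix to every term, e.g. "love" -> "love_verb".
    public final class PosSuffixFilter extends TokenFilter {
        private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);

        public PosSuffixFilter(TokenStream input) {
            super(input);
        }

        @Override
        public boolean incrementToken() throws IOException {
            if (!input.incrementToken()) {
                return false;
            }
            String term = termAtt.toString();
            termAtt.setEmpty().append(term).append('_').append(posOf(term));
            return true;
        }

        // Hypothetical hook: in reality the tag comes from a POS tagger run over the sentence.
        private String posOf(String term) {
            return "noun"; // placeholder
        }
    }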
I would like to discuss the cons associated.
Cons:
1) Future requirements might demand results irrespective of part of speech, and an index that contains only the modified terms won't support that.
2) You might want to execute a BooleanQuery like "term: noun OR adjective". You'd have to write the query expander yourself.
Approach 2
Try using the Payloads feature of Lucene.
Here is a brief tutorial on Lucene Payloads.
Steps to address your use case:
1) Store the part-of-speech tag in the form of a Payload.
2) Have custom Similarity classes for each part-of-speech tag.
3) Based on the query, assign the corresponding CustomSimilarity to the IndexSearcher. For example, assign NounBoostingSimilarity for a noun query.
4) Boost or reduce the score of a document based on its Payload (an example is given in the tutorial above).
5) Write a custom collector to filter out the documents with scores not conforming to above score-boosting logic.
The pro of this approach is that the index remains usable for any other normal search.
Cons:
1) Maintenance overhead: you have to maintain a separate IndexSearcher for each similarity.
2) Somewhat complicated-to-code solution.
To be frank, I'm not satisfied with my own solution, but just wanted to let you know that there exists another way. It all depends on your scenario, whether the project is an academic one-time project or a commercial one, etc.
I am looking for a Java library to do some initial spell checking / data normalization on user generated text content, imagine the interests entered in a Facebook profile.
This text will be tokenized at some point (before or after spell correction, whatever works better) and some of it used as keys to search for (exact match). It would be nice to cut down misspellings and the like to produce more matches. It would be even better if the correction would perform well on tokens longer than just one word, e.g. "trinking coffee" would become "drinking coffee" and not "thinking coffee".
I found the following Java libraries for doing spelling correction:
JAZZY does not seem to be under active development. Also, the dictionary-distance based approach seems inadequate because of the use of non-standard language in social network profiles and multi-word tokens.
APACHE LUCENE seems to have a statistical spell checker that should be much better suited. The question here would be how to create a good dictionary. (We are not using Lucene otherwise, so there is no existing index.)
Any suggestions are welcome!
What you want to implement is not a spelling corrector but a fuzzy search. Peter Norvig's essay is a good starting point for building a fuzzy search from candidates checked against a dictionary.
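The core of that approach is just generating every string within a small edit distance of the input and keeping the ones that occur in your dictionary; a rough sketch of the distance-1 step (class and method names are made up):

    import java.util.HashSet;
    import java.util.Set;

    // Norvig-style candidate generation: all strings within edit distance 1 of the input.
    // The candidates would then be filtered against a dictionary and ranked by frequency.
    public class Edits {
        private static final String ALPHABET = "abcdefghijklmnopqrstuvwxyz";

        public static Set<String> edits1(String word) {
            Set<String> edits = new HashSet<String>();
            for (int i = 0; i <= word.length(); i++) {
                String left = word.substring(0, i);
                String right = word.substring(i);
                if (!right.isEmpty()) {
                    edits.add(left + right.substring(1));                    // deletion
                    if (right.length() > 1) {                                // transposition
                        edits.add(left + right.charAt(1) + right.charAt(0) + right.substring(2));
                    }
                }
                for (char c : ALPHABET.toCharArray()) {
                    if (!right.isEmpty()) {
                        edits.add(left + c + right.substring(1));            // replacement
                    }
                    edits.add(left + c + right);                             // insertion
                }
            }
            return edits;
        }
    }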
Alternatively have a look at BK-Trees.
An n-gram index (used by Lucene) produces better results for longer words. The approach of producing candidates up to a given edit distance will probably work well enough for words found in normal text, but not well enough for names, addresses, and scientific texts. It will increase your index size, though.
If you have the texts indexed, you already have your text corpus (your dictionary). Only what is in your data can be found anyway, so you need not use an external dictionary.
A good resource is Introduction to Information Retrieval - Dictionaries and tolerant retrieval. It also has a short description of context-sensitive spelling correction.
With regard to populating a Lucene index as the basis of a spell checker, this is a good way to solve the problem. Lucene has an out-of-the-box SpellChecker you can use.
There are plenty of word dictionaries available on the net that you can download and use as the basis for your Lucene index. I would suggest supplementing these with a number of domain-specific texts as well; e.g., if your users are medics, then supplement the dictionary with source texts from medical theses and publications.
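A minimal sketch of wiring that up with the contrib SpellChecker (the paths and the word-list file are placeholders, and exact method signatures vary a little across Lucene versions):

    import java.io.File;
    import org.apache.lucene.search.spell.PlainTextDictionary;
    import org.apache.lucene.search.spell.SpellChecker;
    import org.apache.lucene.store.Directory;
    import org.apache.lucene.store.FSDirectory;

    public class SpellingSuggester {
        public static void main(String[] args) throws Exception {
            // Directory that will hold the spell-check index (path is an assumption).
            Directory spellIndexDir = FSDirectory.open(new File("spell-index"));
            SpellChecker spellChecker = new SpellChecker(spellIndexDir);

            // Build the spell-check index from a plain word list, one word per line.
            spellChecker.indexDictionary(new PlainTextDictionary(new File("dictionary.txt")));

            // Ask for the 5 closest suggestions for a misspelled token.
            for (String suggestion : spellChecker.suggestSimilar("trinking", 5)) {
                System.out.println(suggestion);
            }
        }
    }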
Try Peter Norvig's spell checker.
You can hit the Gutenberg project or the Internet Archive for lots and lots of corpus text.
Also, I think that the Wiktionary could help you. You can even make a direct download.
http://code.google.com/p/google-api-spelling-java is a good Java spell-checking library, but I agree with Thomas Jung that it may not be the answer to your problem.