Lucene: indexing documents based on dictionary terms/ implementing custom Analyzer - java

I have a dump of University webpages (documents), and my goal is to use Wikipedia's term dictionary for finding those terms in the given documents. Eventually, I'm supposed to calculate the document frequency of each Wikipedia term. (Term frequency for each document is not required)
Wikipedia (multi-word) dictionary entries look like the following -
<t id="34780065">Years of the 20th century in Mauritania</t>
<t id="34780066">1960 International Gold Cup</t>
<t id="34780067">Roman Lob songs</t>
I'm trying to use Lucene to achieve this.
Approach 1 : Use ShingleAnalyzer to index n-gram tokens from the documents. n-grams because the dictionary contains multi-word terms. Then loop through each of the dictionary terms to find their document frequency from the index.
Approach 2 : Using the technique suggested here, implement an Analyzer that looks up the Wikipedia dictionary for indexing. And then index token streams in the documents using this analyzer.
Question : Which of the 2 approaches is more efficient?
If I go with the 2nd approach, how do I implement this custom Analyzer? I haven't found any good resource that explains such an implementation.

I think you want to use Approach 1. With Approach 2 you would have to look up the Wikipedia dictionary for each word, then each pair of words, then each triple of words, and so on (or in reverse order) for every n-gram. N-gram indexing as in Approach 1, followed by throwing out the n-grams not in the Wikipedia dictionary, would, I think, get you there faster, because you look at each n-gram only once (O(n) * Wikipedia-dictionary-lookup cost, if I understand the problem correctly).
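For illustration, Approach 1 might look roughly like the sketch below, assuming a recent Lucene version (5+). The field name "body", the index path and the example term are made up, and dictionary terms are assumed to be lower-cased to match StandardAnalyzer's output.

import java.nio.file.Paths;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.shingle.ShingleAnalyzerWrapper;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.Term;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class ShingleIndexingSketch {
    public static void main(String[] args) throws Exception {
        // Index 2- and 3-word shingles alongside single words, so multi-word
        // dictionary titles exist as single terms in the index.
        // Note: StandardAnalyzer drops English stop words by default, which
        // affects shingles that cross them.
        Analyzer analyzer = new ShingleAnalyzerWrapper(new StandardAnalyzer(), 2, 3);
        Directory dir = FSDirectory.open(Paths.get("webpage-index"));

        try (IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(analyzer))) {
            Document doc = new Document();
            doc.add(new TextField("body", "full text of one university webpage", Field.Store.NO));
            writer.addDocument(doc);
        }

        // Document frequency of one (lower-cased) dictionary term.
        try (IndexReader reader = DirectoryReader.open(dir)) {
            int df = reader.docFreq(new Term("body", "roman lob songs"));
            System.out.println("df = " + df);
        }
    }
}

The loop over the dictionary then reduces to one docFreq call per dictionary term.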

Related

Fuzzy String matching of Strings in Java

I have a very large list of Strings stored in a NoSQL DB. The incoming query is a string, and I want to check whether this string is in the list or not. An exact match is simple: the NoSQL DB can use the String as the primary key, and I just check whether a record with that key exists. But I also need to check for fuzzy matches.
One approach is to traverse every String in the list and compute the Levenshtein distance between the input String and each entry, but that is O(n), the list is very large (10 million entries) and may grow further, and this approach would add too much latency to my solution.
Is there a better way to solve this problem?
Fuzzy matching is complicated for the reasons you have discovered. Calculating a distance metric for every combination of search term against database term is impractical for performance reasons.
The solution to this is usually to use an n-gram index. This can either be used standalone to give a result, or as a filter to cut down the size of possible results so that you have fewer distance scores to calculate.
So basically, if you have a word "stack" you break it into n-grams (commonly trigrams) such as "s", "st", "sta", "tac", "ack", "ck", "k" (the shorter grams at the edges come from padding at the word boundaries). You index those in your database against the database row. You then do the same for the input and look for the database rows that share matching n-grams.
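For illustration only, a self-contained trigram index over a word list might look like the sketch below (plain Java, made-up class and method names). The candidates it returns are the only strings you would still need to score with Levenshtein distance.

import java.util.*;

public class TrigramIndex {
    private final Map<String, Set<String>> index = new HashMap<>();

    // Pad the word so boundary characters also contribute full trigrams.
    private static List<String> trigrams(String word) {
        String padded = "  " + word.toLowerCase() + "  ";
        List<String> grams = new ArrayList<>();
        for (int i = 0; i + 3 <= padded.length(); i++) {
            grams.add(padded.substring(i, i + 3));
        }
        return grams;
    }

    public void add(String word) {
        for (String g : trigrams(word)) {
            index.computeIfAbsent(g, k -> new HashSet<>()).add(word);
        }
    }

    // Candidate words sharing at least minShared trigrams with the query;
    // only these need an exact distance check.
    public Set<String> candidates(String query, int minShared) {
        Map<String, Integer> counts = new HashMap<>();
        for (String g : trigrams(query)) {
            for (String w : index.getOrDefault(g, Collections.emptySet())) {
                counts.merge(w, 1, Integer::sum);
            }
        }
        Set<String> result = new HashSet<>();
        counts.forEach((w, c) -> { if (c >= minShared) result.add(w); });
        return result;
    }
}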
This is all complicated, and your best option is to use an existing implementation such as Lucene/Solr which will do the n-gram stuff for you. I haven't used it myself as I work with proprietary solutions, but there is a stackoverflow question that might be related:
Return only results that match enough NGrams with Solr
Some databases seem to implement n-gram matching. Here is a link to a Sybase page that provides some discussion of that:
Sybase n-gram text index
Unfortunately, a proper discussion of n-grams would be a long post and I don't have time. It is probably discussed elsewhere on Stack Overflow and other sites; I suggest googling the term and reading up on it.
First of all, if searching is what you're doing, then you should use a search engine (Elasticsearch is pretty much the default). They are good at this, and you avoid re-inventing the wheel.
Second, the technique you are looking for is called stemming. Along with the original String, save a normalized string in your DB. Normalize the search query with the same mechanism. That way you will get much better search results. Obviously, this is one of the techniques a search engine uses under the hood.
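As a rough sketch of that normalization step, using Lucene's analysis module (an assumption on my part; EnglishAnalyzer lower-cases, removes stop words and applies Porter stemming), you could store normalize(original) next to the original string and normalize incoming queries the same way:

import java.io.IOException;
import java.util.StringJoiner;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.en.EnglishAnalyzer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class NormalizerSketch {
    // Returns a lower-cased, stop-word-free, stemmed version of the text.
    public static String normalize(String text) throws IOException {
        StringJoiner out = new StringJoiner(" ");
        try (Analyzer analyzer = new EnglishAnalyzer();
             TokenStream ts = analyzer.tokenStream("f", text)) {
            CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
            ts.reset();
            while (ts.incrementToken()) {
                out.add(term.toString());
            }
            ts.end();
        }
        return out.toString();
    }
}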
Could Solr (or Lucene) be a suitable solution for you?
Lucene supports fuzzy searches based on the Levenshtein distance (edit distance) algorithm. To do a fuzzy search, use the tilde ("~") symbol at the end of a single-word term. For example, to search for a term similar in spelling to "roam", use the fuzzy search:
roam~
This search will find terms like foam and roams.
Starting with Lucene 1.9, an additional (optional) parameter can specify the required similarity. The value is between 0 and 1; with a value closer to 1, only terms with a higher similarity will be matched. For example:
roam~0.8
https://lucene.apache.org/core/2_9_4/queryparsersyntax.html
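If you build queries in code rather than through the query parser syntax, a rough equivalent is sketched below (assuming Lucene 5+ and an illustrative field name; note that recent Lucene versions express fuzziness as a maximum edit distance of 0-2 rather than a 0-1 similarity):

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.Term;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.FuzzyQuery;
import org.apache.lucene.search.Query;

public class FuzzySketch {
    // Parse the fuzzy syntax through the classic query parser.
    static Query parsed(String field, String word) throws Exception {
        QueryParser parser = new QueryParser(field, new StandardAnalyzer());
        return parser.parse(word + "~");   // e.g. "roam~"
    }

    // Or construct the query directly; 2 is the maximum edit distance.
    static Query direct(String field, String word) {
        return new FuzzyQuery(new Term(field, word), 2);
    }
}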

How to improve the performance when working with wikipedia data and huge no. of webpages?

I am supposed to extract representative terms from an organisation's website using wikipedia's article-link data dump.
To achieve this I've -
Crawled & downloaded the organisation's webpages (~110,000).
Created a dictionary of Wikipedia IDs and terms/titles (~40 million records).
Now, I'm supposed to process each of the webpages using the dictionary to recognise terms and track their term IDs & frequencies.
For the dictionary to fit in memory, I've split it into smaller files. Based on my experiment with a small data set, the processing time for the above will be around 75 days.
And this is just for 1 organisation. I have to do the same for more than 40 of them.
Implementation -
A HashMap for storing the dictionary in memory.
Looping through each map entry and searching for the term in a webpage, using a Boyer-Moore search implementation.
Repeating the above for each webpage, and storing the results in a HashMap.
I've tried optimizing the code and tuning the JVM for better performance.
Can someone please advise on a more efficient way to implement the above, reducing the processing time to a few days?
Is Hadoop an option to consider?
Based on your question:
Number of documents = 110,000
Dictionary => list of [TermID, Title Terms] = 40 million entries
Size of documents = 110,000 * 256KB per document on average ≈ 26.9GB
Size of dictionary = 40 million * 256 bytes per entry on average ≈ 9.5GB of raw data
How did you arrive at the 75 days estimate?
There are a number of performance considerations:
How are you storing the Documents?
How are you storing/retrieving the Dictionary? ( assuming not all of it in memory unless you can afford to)
How many machines are you running it on?
Are you performing the dictionary lookups in parallel? (Of course, assuming the dictionary is immutable once you have processed the whole of Wikipedia.)
Here is an outline of what I believe you are doing:
dictionary = read the Wikipedia dictionary
documents = a sequence of documents

documents.map { doc =>
  // count how often each dictionary term occurs in this document
  var docTermFreq = Map[String, Int]()
  for (term <- doc.terms if dictionary.contains(term)) {
    docTermFreq = docTermFreq + (term -> (docTermFreq.getOrElse(term, 0) + 1))
  }
  // store the docTermFreq map
}
What this essentially does is break each document up into tokens and then look each token up in the Wikipedia dictionary to check whether it exists there.
This is exactly what a Lucene Analyzer does.
A Lucene Tokenizer converts a document into tokens, and this happens before the terms are indexed into Lucene. So all you have to do is implement an Analyzer that looks each token up in the Wikipedia dictionary and keeps only the tokens that are in it.
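Here is a minimal sketch of what such an Analyzer could look like, assuming roughly Lucene 5/6-era APIs (constructors and package locations shift a bit between versions). It lower-cases tokens, forms 1-3 word shingles so that multi-word dictionary titles become single terms, and drops every shingle that is not in the dictionary; position increments are not adjusted, which is fine if you only need document frequencies.

import java.io.IOException;
import java.util.Set;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.core.LowerCaseFilter;
import org.apache.lucene.analysis.shingle.ShingleFilter;
import org.apache.lucene.analysis.standard.StandardTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public final class WikipediaDictionaryAnalyzer extends Analyzer {

    private final Set<String> dictionary; // lower-cased Wikipedia titles

    public WikipediaDictionaryAnalyzer(Set<String> dictionary) {
        this.dictionary = dictionary;
    }

    @Override
    protected TokenStreamComponents createComponents(String fieldName) {
        Tokenizer source = new StandardTokenizer();
        TokenStream sink = new LowerCaseFilter(source);
        sink = new ShingleFilter(sink, 2, 3);             // unigrams plus 2-3 word shingles
        sink = new DictionaryFilter(sink, dictionary);    // keep only dictionary entries
        return new TokenStreamComponents(source, sink);
    }

    /** Drops every token that is not a dictionary entry. */
    private static final class DictionaryFilter extends TokenFilter {
        private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);
        private final Set<String> dictionary;

        DictionaryFilter(TokenStream in, Set<String> dictionary) {
            super(in);
            this.dictionary = dictionary;
        }

        @Override
        public boolean incrementToken() throws IOException {
            while (input.incrementToken()) {
                if (dictionary.contains(termAtt.toString())) {
                    return true;
                }
            }
            return false;
        }
    }
}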
I would do it like this:
Take every document and prepare a token stream ( using an Analyzer described above )
Index the document terms.
At this point you will have wikipedia terms only, in the Lucene Index.
When you do this, you will have ready-made statistics from the Lucene Index such as:
Document Frequency of a Term
TermFrequencyVector (exactly what you need)
and a ready-to-use inverted index! (for a quick introduction to Inverted Index and Retrieval)
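For instance, reading those statistics back out of the index could look roughly like this (a sketch assuming Lucene 5+, an illustrative field name "body", and that the field was indexed with term vectors enabled):

import java.io.IOException;
import java.nio.file.Path;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.index.Terms;
import org.apache.lucene.index.TermsEnum;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.BytesRef;

public class IndexStatsSketch {
    static void printStats(Path indexPath, int docId) throws IOException {
        try (IndexReader reader = DirectoryReader.open(FSDirectory.open(indexPath))) {
            // Document frequency of one dictionary term across the corpus.
            System.out.println("df = " + reader.docFreq(new Term("body", "roman lob songs")));

            // Per-document term frequencies, i.e. the term frequency vector.
            Terms vector = reader.getTermVector(docId, "body");
            if (vector == null) return; // term vectors were not stored for this field
            TermsEnum terms = vector.iterator();
            BytesRef term;
            while ((term = terms.next()) != null) {
                System.out.println(term.utf8ToString() + " -> " + terms.totalTermFreq());
            }
        }
    }
}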
There are a lot of things you can do to improve the performance. For example:
Parallelize the document stream processing.
You can store the dictionary in a key-value database such as BerkeleyDB or Kyoto Cabinet, or even in an in-memory key-value store such as Redis or Memcached.
I hope that helps.
One way to do this with MapReduce (MR) only:
Assuming you already have N dictionaries of smaller size that fit in memory, you can:
Launch N "map only" jobs that scan all your data (each one with only one dictionary) and output something like {pageId, termId, occurrence, etc.} to the folder /your_tmp_folder/N/
As a result you will have N*M files, where M is the number of mappers in each stage (it should be the same).
A second job will then simply analyze your {pageId, termId, occurrence, etc.} records and build per-page statistics.
Map-only jobs should be very fast in your case. If not, please paste your code.
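A sketch of such a map-only pass, using the org.apache.hadoop.mapreduce API (class names, the input layout and the single-word tokenization are all assumptions; multi-word dictionary titles would need shingling as discussed in the other answer):

import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class TermScanMapper extends Mapper<LongWritable, Text, Text, LongWritable> {

    private final Map<String, Long> dictionary = new HashMap<>(); // term -> termId
    private final LongWritable one = new LongWritable(1);

    @Override
    protected void setup(Context context) throws IOException {
        // Load this job's dictionary shard into memory, e.g. from a side file
        // shipped with the job; omitted here.
    }

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // value is assumed to hold "pageId <TAB> pageText"
        String[] parts = value.toString().split("\t", 2);
        if (parts.length < 2) return;
        String pageId = parts[0];
        for (String token : parts[1].toLowerCase().split("\\W+")) {
            Long termId = dictionary.get(token);
            if (termId != null) {
                context.write(new Text(pageId + "\t" + termId), one);
            }
        }
    }
}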

Search for terms in the index which are a prefix of the search term or vice versa (!)

I would like for Lucene to find a document containing a term "bahnhofstr" if I search for "bahnhofstrasse", i.e., I don't only want to find documents containing terms of which my search term is a prefix but also documents that contain terms that are themselves a prefix of my search term...
How would I go about this?
If I understand you correctly, and your search string is an exact string, you can set queryParser.setAllowLeadingWildcard(true); in Lucene to allow for leading-wildcard searches (which may or may not be slow -- I have seen them reasonably fast but in a case where there were only 60,000+ Lucene documents).
Your example query syntax could look something like:
*bahnhofstr bahnhofstr*
or possibly (have not tested this) just:
*bahnhofstr*
I think a fuzzy query might be most helpful for you. This will score terms based on the Levenshtein distance from your query. Without a minimum similarity specified, it will effectively match every term available. This can make it less than performant, but does accomplish what you are looking for.
A fuzzy query is signalled by the ~ character, such as:
firstname:bahnhofstr~
Or with a minimum similarity (a number between 0 and 1, 0 being the loosest with no minimum)
firstname:bahnhofstr~0.4
Or if you are constructing your own queries, use the FuzzyQuery
This isn't exactly what you specified, but it is the easiest way to get close.
As far as exactly what you are looking for, I don't know of a simple Lucene call to accomplish it. I would probably just split the term into a series of term queries, which you could represent in a query string something like:
firstname:b
firstname:ba
firstname:bah
firstname:bahn
firstname:bahnh
firstname:bahnho
firstname:bahnhof
firstname:bahnhofs
firstname:bahnhofst
firstname:bahnhofstr*
I wouldn't actually generate a query string for it, by the way; I'd just construct the TermQuery and PrefixQuery objects myself.
Scoring would be a bit warped, and I'd probably boost longer queries more highly to get better ordering out of it, but that's the method that comes to mind to accomplish exactly what you're looking for fairly easily. A DisjunctionMaxQuery would help you combine something like this with other terms and get more reasonable scoring.
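In code, that might look roughly like the following sketch (assuming Lucene 5.3+ where BooleanQuery.Builder and BoostQuery are available; the field name follows the example above, and the boost-by-length scheme is just one way to prefer longer matches):

import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.BoostQuery;
import org.apache.lucene.search.PrefixQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

public class PrefixOrTruncationSketch {
    static Query build(String field, String term) {
        BooleanQuery.Builder builder = new BooleanQuery.Builder();
        // Indexed terms that are a prefix of the search term ("bahnhofstr" for "bahnhofstrasse").
        for (int len = 1; len < term.length(); len++) {
            Query q = new TermQuery(new Term(field, term.substring(0, len)));
            builder.add(new BoostQuery(q, len), BooleanClause.Occur.SHOULD);
        }
        // Indexed terms of which the search term itself is a prefix.
        builder.add(new BoostQuery(new PrefixQuery(new Term(field, term)), term.length()),
                    BooleanClause.Occur.SHOULD);
        return builder.build();
    }
}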
Hopefully a fuzzy query works well for you though. Seems a much nicer solution.
Another option, if you have a lot of need for queries of this nature, might be to tokenize fields into n-grams when indexing (see NGramTokenizer), which would allow you to use an NGramPhraseQuery effectively to achieve the results you want.

Lucene Scoring Function - bias towards shorter documents

I want the Lucene scoring function to have no bias based on the length of the document. This is really a follow-up question to Calculate the score only based on the documents have more occurance of term in lucene
I was wondering how Field.setOmitNorms(true) works? I see that there are two factors that make short documents get a high score:
"boost" that shorter length posts - using doc.getBoost()
"lengthNorm" in the definition of norm(t,d)
Here is the documentation
I was wondering - if I wanted no bias towards shorter documents, is Field.setOmitNorms(true) enough?
Using BM25Similarity you could reduce either parameter to 0f:
@param b Controls to what degree document length normalizes tf values
or
@param k1 Controls non-linear term frequency normalization (saturation).
Both params will affect the SimWeight. The constructor takes (k1, b), so the call below disables length normalization by setting b to 0:
indexSearcher.setSimilarity(new BM25Similarity(1.2f, 0f));
More explanation can be found here: http://opensourceconnections.com/blog/2015/10/16/bm25-the-next-generation-of-lucene-relevation/
Shorter docs are meant to be more relevant when you use TF-IDF scoring.
You can use your own custom scoring function in Lucene. It's easy to customize the scoring algorithm: subclass DefaultSimilarity and override the method you want to customize.
There's a code sample here that will help you implement it.
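For illustration, one such subclass might look like the sketch below (assuming Lucene 4.x/5.x, where the class is called DefaultSimilarity; later versions renamed it ClassicSimilarity). Note that lengthNorm is baked into the index-time norm, so documents must be re-indexed after changing it.

import org.apache.lucene.index.FieldInvertState;
import org.apache.lucene.search.similarities.DefaultSimilarity;

public class NoLengthBiasSimilarity extends DefaultSimilarity {
    @Override
    public float lengthNorm(FieldInvertState state) {
        // Ignore document length entirely (and any index-time field boost).
        return 1.0f;
    }
}

// Usage (hypothetical variable names):
// indexWriterConfig.setSimilarity(new NoLengthBiasSimilarity());
// indexSearcher.setSimilarity(new NoLengthBiasSimilarity());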

In java - Grouping similar values

First of all, thanks for reading my question.
I used TF/IDF, and then on those values I calculated cosine similarity to see which documents are more similar. You can see the resulting matrix below. The column names are doc1, doc2, doc3, ..., and the row names are the same (doc1, doc2, doc3, etc.). With the help of the matrix, I can see that doc1 and doc4 have 72% similarity (0.722711142). That is correct: when I look at both documents, they are indeed similar. I have 1000 documents, and I can look at each value in the matrix to see which documents are similar.
I used different clustering algorithms like k-means and AGNES (hierarchical) to combine them, and they produced clusters. For example, Cluster1 contains (doc4, doc5, doc3) because their values (0.722711142, 0.602301766, 0.69912109) are close. But when I check manually whether these 3 documents are really similar, they are NOT. :( What am I doing wrong, or should I use something other than clustering?
1 0.067305859 -0.027552299 0.602301766 0.722711142
0.067305859 1 0.048492904 0.029151952 -0.034714695
-0.027552299 0.748492904 1 0.610617214 0.010912109
0.602301766 0.029151952 -0.061617214 1 0.034410392
0.722711142 -0.034714695 0.69912109 0.034410392 1
P.S.: The values may be wrong; they are just to give you an idea.
If you have any question please do ask.
Thanks
I'm not familiar with TF/IDF, but the process can go wrong in many stages generally:
1. Did you remove stopwords?
2. Did you apply stemming? Porter stemmer for example.
3. Did you normalize frequencies for document length? (Maybe the TFIDF thing has a solution for that, I don't know)
4. Clustering is a discovery method but not a holy grail. The documents it retrieves as a group may be related more or less, but that depends on the data, tuning, clustering algorithm, etc.
What do you want to achieve? What is your setup?
Good luck!
My approach would be not to use pre-calculated similarity values at all, because the similarity between docs should be found by the clustering algorithm itself. I would simply set up a feature space with one column per term in the corpus, so that the number of columns equals the size of the vocabulary (minus stop words, if you want). Each feature value contains the relative frequency of the respective term in that document. I guess you could use tf*idf values as well, although I wouldn't expect that to help too much. Depending on the clustering algorithm you use, the discriminating power of a particular term should be found automatically, i.e. if a term appears in all documents with a similar relative frequency, then that term does not discriminate well between the classes and the algorithm should detect that.
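As a sketch of that setup (plain Java, made-up names; tokens are lower-cased and split on non-word characters, and each feature value is the term's relative frequency within its document):

import java.util.ArrayList;
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class FeatureSpaceSketch {
    // docs: document id -> raw text; returns: document id -> feature vector.
    static Map<String, double[]> featureVectors(Map<String, String> docs) {
        Map<String, List<String>> tokens = new HashMap<>();
        Map<String, Integer> column = new LinkedHashMap<>(); // term -> column index

        for (Map.Entry<String, String> e : docs.entrySet()) {
            List<String> t = new ArrayList<>();
            for (String w : e.getValue().toLowerCase().split("\\W+")) {
                if (!w.isEmpty()) t.add(w);
            }
            tokens.put(e.getKey(), t);
            for (String w : t) column.putIfAbsent(w, column.size());
        }

        Map<String, double[]> rows = new HashMap<>();
        for (Map.Entry<String, List<String>> e : tokens.entrySet()) {
            double[] row = new double[column.size()];
            for (String w : e.getValue()) row[column.get(w)] += 1.0 / e.getValue().size();
            rows.put(e.getKey(), row);
        }
        return rows;
    }
}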
