I have indexed a set of text files with Lucene, and I have also stored TermVectors. Now I want to know the frequency of a given term in a given document in O(1). Is that possible?
I mean, is there a function(Term term, Integer docNum) that returns the frequency of term in document docNum?
There is no ready-made function; you'll have to write some code. First use IndexReader.termDocs(Term). That will give you a TermDocs instance which is, typically of Lucene, a cursor-like object. Now call TermDocs.skipTo(docNum), check via TermDocs.doc() that the cursor actually landed on your document, then read TermDocs.freq(). If you are sure at the outset that your document contains your term, this is it; otherwise check after each step whether you can proceed. The Javadocs are well written for each step involved.
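For illustration, a rough sketch against the Lucene 2.x/3.x TermDocs API (the field name is a placeholder, not something from your index):

import java.io.IOException;

import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.index.TermDocs;

public class TermFreq {
    // Returns how often `text` occurs in the document with number docNum, or 0.
    public static int freqInDoc(IndexReader reader, String field, String text, int docNum)
            throws IOException {
        TermDocs termDocs = reader.termDocs(new Term(field, text));
        try {
            // Position the cursor at the first document >= docNum and check it is ours.
            if (termDocs.skipTo(docNum) && termDocs.doc() == docNum) {
                return termDocs.freq();
            }
            return 0; // this document does not contain the term
        } finally {
            termDocs.close();
        }
    }
}

Note this is not strictly O(1) - it is a term lookup plus a skip - but it is about as direct as it gets without loading the stored term vector for the document.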
I have a very large list of Strings stored in a NoSQL DB. The incoming query is a string, and I want to check whether this string is in the list or not. For an exact match this is simple: the NoSQL DB can use the string as the primary key, and I just check whether any record exists with that key. But I also need to check for fuzzy matches.
One approach is to traverse every String in the list and compute the Levenshtein distance between the input string and each entry, but that is O(n) per query, the list is very large (10 million entries) and may grow, so this approach would give my solution high latency.
Is there a better way to solve this problem?
Fuzzy matching is complicated for the reasons you have discovered. Calculating a distance metric for every combination of search term against database term is impractical for performance reasons.
The solution to this is usually to use an n-gram index. This can either be used standalone to give a result, or as a filter to cut down the size of possible results so that you have fewer distance scores to calculate.
So basically, if you have the word "stack" you break it into n-grams (commonly trigrams): "sta", "tac", "ack". You index those in your database against the database row. You then do the same for the input and look for the database rows that share the most n-grams with it.
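As a rough sketch of the idea (plain Java, no database; the dictionary and query strings are made-up examples), an inverted index from trigram to candidate strings might look like this:

import java.util.*;

public class TrigramIndex {
    private final Map<String, Set<String>> index = new HashMap<>();

    private static Set<String> trigrams(String s) {
        Set<String> grams = new HashSet<>();
        for (int i = 0; i + 3 <= s.length(); i++) {
            grams.add(s.substring(i, i + 3));
        }
        return grams;
    }

    public void add(String value) {
        // Index each trigram against the value it came from.
        for (String gram : trigrams(value)) {
            index.computeIfAbsent(gram, k -> new HashSet<>()).add(value);
        }
    }

    // Candidates sharing at least one trigram, ranked by shared-trigram count.
    public List<String> candidates(String query) {
        Map<String, Integer> hits = new HashMap<>();
        for (String gram : trigrams(query)) {
            for (String candidate : index.getOrDefault(gram, Collections.emptySet())) {
                hits.merge(candidate, 1, Integer::sum);
            }
        }
        List<String> ranked = new ArrayList<>(hits.keySet());
        ranked.sort((a, b) -> hits.get(b) - hits.get(a));
        return ranked; // run the expensive Levenshtein comparison only on the top few
    }

    public static void main(String[] args) {
        TrigramIndex idx = new TrigramIndex();
        idx.add("stack"); idx.add("stock"); idx.add("haystack");
        System.out.println(idx.candidates("stak")); // stack and haystack share "sta"
    }
}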
This is all complicated, and your best option is to use an existing implementation such as Lucene/Solr which will do the n-gram stuff for you. I haven't used it myself as I work with proprietary solutions, but there is a stackoverflow question that might be related:
Return only results that match enough NGrams with Solr
Some databases seem to implement n-gram matching. Here is a link to a Sybase page that provides some discussion of that:
Sybase n-gram text index
Unfortunately, a full discussion of n-grams would be a long post and I don't have time. It is probably discussed elsewhere on Stack Overflow and other sites; I suggest googling the term and reading up on it.
First of all, if searching is what you're doing, then you should use a search engine (Elasticsearch is pretty much the default). They are good at this, and you won't be reinventing the wheel.
Second, the technique you are looking for is called stemming. Along with the original String, save a normalized string in your DB. Normalize the search query with the same mechanism. That way you will get much better search results. Obviously, this is one of the techniques a search engine uses under the hood.
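As a minimal sketch of what "normalize" could mean here (this is an assumption about the scheme, not the answerer's exact mechanism; a real stemmer such as Porter's would go further and reduce words to their stems):

import java.text.Normalizer;
import java.util.Locale;

public class Normalize {
    public static String normalize(String raw) {
        String lower = raw.toLowerCase(Locale.ROOT).trim();
        // Decompose accented characters and remove the combining marks.
        String decomposed = Normalizer.normalize(lower, Normalizer.Form.NFD);
        String noAccents = decomposed.replaceAll("\\p{M}", "");
        // Drop anything that is not a letter, digit, or space.
        return noAccents.replaceAll("[^a-z0-9 ]", "");
    }

    public static void main(String[] args) {
        System.out.println(normalize("  Crème Brûlée! ")); // "creme brulee"
    }
}

Store normalize(original) next to the original record, and run the incoming query through the same method before looking it up.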
Solr (or Lucene) could be a suitable solution for you.
Lucene supports fuzzy searches based on the Levenshtein distance, or edit distance, algorithm. To do a fuzzy search, use the tilde ("~") symbol at the end of a single-word term. For example, to search for a term similar in spelling to "roam", use the fuzzy search:
roam~
This search will find terms like foam and roams.
Starting with Lucene 1.9, an additional (optional) parameter can specify the required similarity. The value is between 0 and 1; with a value closer to 1, only terms with a higher similarity will be matched. For example:
roam~0.8
https://lucene.apache.org/core/2_9_4/queryparsersyntax.html
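For illustration, roughly how that query could be issued programmatically (Lucene 2.9-era API; "contents" is just a placeholder field name):

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.Query;
import org.apache.lucene.util.Version;

public class FuzzySyntaxExample {
    public static void main(String[] args) throws Exception {
        QueryParser parser = new QueryParser(Version.LUCENE_29, "contents",
                new StandardAnalyzer(Version.LUCENE_29));
        Query query = parser.parse("roam~0.8"); // fuzzy match with similarity >= 0.8
        System.out.println(query);
    }
}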
I have a large document containing strings like this, basically non-delimited text -
mynameisjohnsmith
I also have a collection of names; this could be really large, assume a million records. What I intend to do is check whether the document contains a name that is available in the collection. One way to do it is to index the document, iterate over the collection, and for each entry search the index for the name. This could be really inefficient when the names are not actually in the document (1 million iterations).
I am wondering if there are better ways of doing it. Something like indexing both the document and the names and finding an intersection.
Thanks.
The Aho-Corasick string searching algorithm uses a finite state machine to search for a large number of strings simultaneously in a document. The complexity of the algorithm is linear in the length of the strings plus the length of the searched text plus the number of output matches. It's how virus scanning software is able to efficiently search for a large number of virus signatures in files in reasonable time.
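A minimal, self-contained sketch of the algorithm (a toy implementation; for a million names you would likely reach for an existing library such as org.ahocorasick instead):

import java.util.*;

class AhoCorasick {
    private static class Node {
        Map<Character, Node> next = new HashMap<>();
        Node fail;
        List<String> out = new ArrayList<>();
    }

    private final Node root = new Node();

    AhoCorasick(Collection<String> patterns) {
        // Build the trie of all patterns.
        for (String p : patterns) {
            Node cur = root;
            for (char c : p.toCharArray()) {
                cur = cur.next.computeIfAbsent(c, k -> new Node());
            }
            cur.out.add(p);
        }
        // Build failure links breadth-first.
        Deque<Node> queue = new ArrayDeque<>();
        for (Node child : root.next.values()) {
            child.fail = root;
            queue.add(child);
        }
        while (!queue.isEmpty()) {
            Node node = queue.remove();
            for (Map.Entry<Character, Node> e : node.next.entrySet()) {
                char c = e.getKey();
                Node child = e.getValue();
                Node f = node.fail;
                while (f != null && !f.next.containsKey(c)) f = f.fail;
                child.fail = (f == null) ? root : f.next.get(c);
                child.out.addAll(child.fail.out); // inherit matches ending here
                queue.add(child);
            }
        }
    }

    // Returns every pattern that occurs anywhere in the text, in one pass.
    Set<String> search(String text) {
        Set<String> found = new HashSet<>();
        Node cur = root;
        for (char c : text.toCharArray()) {
            while (cur != root && !cur.next.containsKey(c)) cur = cur.fail;
            cur = cur.next.getOrDefault(c, root);
            found.addAll(cur.out);
        }
        return found;
    }

    public static void main(String[] args) {
        AhoCorasick ac = new AhoCorasick(Arrays.asList("john", "smith", "alice"));
        System.out.println(ac.search("mynameisjohnsmith")); // john and smith (order may vary)
    }
}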
I searched many posts but I didn't find an answer. I'd like to search for and access values in a collection by value. My object type is DictionaryWord with two fields: String word and int wordUsage (the number of times the word was used). I was wondering which collection would be the fastest. If I type "wa", I'd like it to give me e.g. 5 strings that start with those letters. Any list or set would probably be way too slow, as I have 100,000 objects.
I thought about using a HashMap with the String word as the key and the int wordUsage as the value. I could even write my own hash() function that just maps every key to itself - key: "writing", hash value: "writing". Considering there are no duplicates, would that be a good idea, or should I look for something else?
My point is: how and what do I use to search for values that start with the text given in the search condition? For example, typing "tea" I should find values like "tea", "teacher", "tear", "teaching", etc.
The fastest I can think of is a binary search tree. I found this to be very helpful and it should make it clear why a tree is the best option.
You probably need a prefix tree. Take a look at the Trie wiki page for further information.
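A minimal sketch of such a prefix tree (a toy implementation, not production code), storing the wordUsage count alongside each word and returning everything that starts with a given prefix:

import java.util.*;

class PrefixTree {
    private static class Node {
        Map<Character, Node> children = new HashMap<>();
        int wordUsage = -1; // -1 means "no word ends here"
    }

    private final Node root = new Node();

    void add(String word, int wordUsage) {
        Node cur = root;
        for (char c : word.toCharArray()) {
            cur = cur.children.computeIfAbsent(c, k -> new Node());
        }
        cur.wordUsage = wordUsage;
    }

    // All stored words starting with the prefix, e.g. "tea" -> tea, teacher, tear...
    List<String> startingWith(String prefix) {
        Node cur = root;
        for (char c : prefix.toCharArray()) {
            cur = cur.children.get(c);
            if (cur == null) return Collections.emptyList();
        }
        List<String> result = new ArrayList<>();
        collect(cur, new StringBuilder(prefix), result);
        return result;
    }

    private void collect(Node node, StringBuilder path, List<String> result) {
        if (node.wordUsage >= 0) result.add(path.toString());
        for (Map.Entry<Character, Node> e : node.children.entrySet()) {
            path.append(e.getKey());
            collect(e.getValue(), path, result);
            path.setLength(path.length() - 1);
        }
    }

    public static void main(String[] args) {
        PrefixTree tree = new PrefixTree();
        tree.add("tea", 5); tree.add("teacher", 2); tree.add("tear", 1); tree.add("stack", 7);
        System.out.println(tree.startingWith("tea")); // tea, teacher, tear (order may vary)
    }
}

Lookup cost is proportional to the length of the prefix plus the number of matches, independent of the 100,000 objects stored.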
I would like Lucene to find a document containing the term "bahnhofstr" if I search for "bahnhofstrasse", i.e., I don't only want to find documents containing terms of which my search term is a prefix, but also documents that contain terms that are themselves a prefix of my search term...
How would I go about this?
If I understand you correctly, and your search string is an exact string, you can set queryParser.setAllowLeadingWildcard(true); in Lucene to allow for leading-wildcard searches (which may or may not be slow -- I have seen them reasonably fast but in a case where there were only 60,000+ Lucene documents).
Your example query syntax could look something like:
*bahnhofstr bahnhofstr*
or possibly (have not tested this) just:
*bahnhofstr*
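For illustration, roughly what that looks like in code (Lucene 3.x-era query parser; "firstname" is just an example field name):

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.Query;
import org.apache.lucene.util.Version;

public class LeadingWildcardExample {
    public static void main(String[] args) throws Exception {
        QueryParser parser = new QueryParser(Version.LUCENE_29, "firstname",
                new StandardAnalyzer(Version.LUCENE_29));
        // Without this, the parser rejects queries that start with * or ?.
        parser.setAllowLeadingWildcard(true);
        Query query = parser.parse("*bahnhofstr bahnhofstr*");
        System.out.println(query);
    }
}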
I think a fuzzy query might be most helpful for you. This will score terms based on the Levenshtein distance from your query. Without a minimum similarity specified, it will effectively match every term available. This can make it less than performant, but does accomplish what you are looking for.
A fuzzy query is signalled by the ~ character, such as:
firstname:bahnhofstr~
Or with a minimum similarity (a number between 0 and 1, 0 being the loosest with no minimum)
firstname:bahnhofstr~0.4
Or if you are constructing your own queries, use the FuzzyQuery
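Constructed directly, that might look something like this (the field name and similarity value just mirror the examples above):

import org.apache.lucene.index.Term;
import org.apache.lucene.search.FuzzyQuery;
import org.apache.lucene.search.Query;

public class FuzzyQueryExample {
    public static void main(String[] args) {
        // Programmatic equivalent of firstname:bahnhofstr~0.4
        Query query = new FuzzyQuery(new Term("firstname", "bahnhofstr"), 0.4f);
        System.out.println(query);
    }
}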
This isn't exactly what you specified, but it is the easiest way to get close.
As far as exactly what you are looking for, I don't know of a simple Lucene call to accomplish it. I would probably just split the term into a series of TermQueries, which you could represent in a query string something like:
firstname:b
firstname:ba
firstname:bah
firstname:bahn
firstname:bahnh
firstname:bahnho
firstname:bahnhof
firstname:bahnhofs
firstname:bahnhofst
firstname:bahnhofstr*
I wouldn't actually generate a query string for it myself, by the way. I'd just construct the TermQuery and PrefixQuery objects myself.
Scoring would be a bit warped, and I'd probably boost longer queries more highly to get better ordering out of it, but that's the method that comes to mind to accomplish exactly what you're looking for fairly easily. A DisjunctionMaxQuery would help you combine something like this with other terms and get more reasonable scoring.
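A rough sketch of building that programmatically (Lucene 3.x-era API, with the boosting idea above; "firstname" is just the example field):

import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.PrefixQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

public class PrefixSeriesQuery {
    public static Query build(String field, String term) {
        BooleanQuery query = new BooleanQuery();
        // One TermQuery per prefix of the search term, longer prefixes boosted higher.
        for (int len = 1; len < term.length(); len++) {
            TermQuery tq = new TermQuery(new Term(field, term.substring(0, len)));
            tq.setBoost(len);
            query.add(tq, BooleanClause.Occur.SHOULD);
        }
        // Plus a PrefixQuery for the full term, e.g. firstname:bahnhofstr*
        PrefixQuery pq = new PrefixQuery(new Term(field, term));
        pq.setBoost(term.length() + 1);
        query.add(pq, BooleanClause.Occur.SHOULD);
        return query;
    }
}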
Hopefully a fuzzy query works well for you though. Seems a much nicer solution.
Another option, if you have a lot of need for queries of this nature, might be to tokenize fields into n-grams when indexing (see NGramTokenizer), which would allow you to effectively use an NGramPhraseQuery to achieve the results you want.
I am looking for a simple Java class that can compute a TF-IDF calculation. I want to do a similarity test on 2 documents. I found so many big APIs that use a TF-IDF class. I do not want to use a big jar file just to do my simple test. Please help!
Or at least, if someone can tell me how to find TF and IDF, I will calculate the results :)
OR
If you can tell me some good Java tutorial for this.
Please do not tell me to look at Google, I already did for 3 days and couldn't find anything :(
Please also do not refer me to Lucene :(
Term Frequency is the square root of the number of times a term occurs in a particular document.
Inverse Document Frequency is the log of (the total number of documents divided by the number of documents containing the term), with one added to the denominator so that a term occurring in zero documents doesn't cause a division by zero.
If it isn't clear from that answer, there is a TF per term per document, and an IDF per term.
And then TF-IDF(term, document) = TF(term, document) * IDF(term)
Finally, you use the vector space model to compare documents, where each term is a new dimension and the "length" of the part of the vector pointing in that dimension is the TF-IDF calculation. Each document is a vector, so compute the two vectors and then compute the distance between them.
So to do this in Java, read the file in one line at a time with a FileReader or something, and split on spaces or whatever other delimiters you want to use - each word is a term. Count the number of times each term appears in each file, and the number of files each term appears in. Then you have everything you need to do the above calculations.
And since I have nothing else to do, I looked up the vector distance formula. Here you go:
D=sqrt((x2-x1)^2+(y2-y1)^2+...+(n2-n1)^2)
For this purpose, x1 is the TF-IDF for term x in document 1.
Edit: in response to your question about how to count the words in a document:
Read the file in line by line with a reader, like new BufferedReader(new FileReader(filename)) - you can call BufferedReader.readLine() in a while loop, checking for null each time.
For each line, call line.split("\\s+") - that will split your line on whitespace and give you an array of all of the words.
For each word, add 1 to the word's count for the current document. This could be done using a HashMap.
Now, computing D for every pair of documents gives you on the order of X^2 comparisons, where X is the number of documents - this shouldn't take particularly long for 10,000. Remember that two documents are MORE similar if the distance D between their vectors is smaller. So you could compute D for every pair of documents and store the pairs in a priority queue or some other sorted structure such that the most similar documents bubble up to the top. Make sense?
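Putting the pieces together, a small self-contained sketch of the scheme described above (no Lucene; the two "documents" are just hard-coded strings):

import java.util.*;

public class TfIdfSketch {

    static Map<String, Integer> termCounts(String doc) {
        Map<String, Integer> counts = new HashMap<>();
        for (String term : doc.toLowerCase().split("\\s+")) {
            if (!term.isEmpty()) counts.merge(term, 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        List<String> docs = Arrays.asList(
                "the quick brown fox", "the lazy brown dog");

        // Per-document term counts and per-term document frequencies.
        List<Map<String, Integer>> counts = new ArrayList<>();
        Map<String, Integer> docFreq = new HashMap<>();
        for (String doc : docs) {
            Map<String, Integer> c = termCounts(doc);
            counts.add(c);
            for (String term : c.keySet()) docFreq.merge(term, 1, Integer::sum);
        }

        // TF-IDF vectors over the union of all terms:
        // TF = sqrt(count in doc), IDF = log(numDocs / docFreq) + 1.
        int numDocs = docs.size();
        List<String> vocabulary = new ArrayList<>(docFreq.keySet());
        double[][] vectors = new double[numDocs][vocabulary.size()];
        for (int d = 0; d < numDocs; d++) {
            for (int t = 0; t < vocabulary.size(); t++) {
                String term = vocabulary.get(t);
                int count = counts.get(d).getOrDefault(term, 0);
                double tf = Math.sqrt(count);
                double idf = Math.log((double) numDocs / docFreq.get(term)) + 1;
                vectors[d][t] = tf * idf;
            }
        }

        // Euclidean distance between the two document vectors; smaller = more similar.
        double sum = 0;
        for (int t = 0; t < vocabulary.size(); t++) {
            double diff = vectors[0][t] - vectors[1][t];
            sum += diff * diff;
        }
        System.out.println("distance = " + Math.sqrt(sum));
    }
}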
agazerboy, Sujit Pal's blog post gives a thorough description of calculating TF and IDF.
WRT verifying results, I suggest you start with a small corpus (say 100 documents) so that you can see easily whether you are correct. For 10000 documents, using Lucene begins to look like a really rational choice.
While you specifically asked not to be referred to Lucene, allow me to point you to the exact class. The class you are looking for is DefaultSimilarity. It has an extremely simple API for calculating TF and IDF. See the Java code here, or you could just implement it yourself as specified in the DefaultSimilarity documentation:
TF = sqrt(freq)
and
IDF = log(numDocs/(docFreq+1)) + 1.
The log and sqrt functions are used to damp the actual values. Using the raw values can skew results dramatically.
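For instance, a tiny illustration of those two formulas via DefaultSimilarity itself (Lucene 2.x/3.x-era API; the numbers are made up):

import org.apache.lucene.search.DefaultSimilarity;

public class DefaultSimilarityExample {
    public static void main(String[] args) {
        DefaultSimilarity similarity = new DefaultSimilarity();
        float tf = similarity.tf(4);        // sqrt(4) = 2.0
        float idf = similarity.idf(3, 100); // log(100 / (3 + 1)) + 1, roughly 4.22
        System.out.println("tf=" + tf + " idf=" + idf + " tf*idf=" + tf * idf);
    }
}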