How to search for multiple strings in a text file - Java

I am working with text files and want to implement a search algorithm in Java. I have text files I need to search.
If I want to find one word, I can do it by just putting all the text into a HashMap and storing each word's occurrence count. But is there an algorithm for when I want to search for two strings (or maybe more)? Should I hash the strings in pairs of two?

It depends a lot on the size of the text file. There are usually several cases you should consider:
Lots of queries on very short documents (web pages, essay-length texts, etc.), with a natural-language-like text distribution. A simple naive algorithm - quadratic in the worst case, near-linear in practice - is fine: for a query of length m, just take a window of length m and slide it over the text, comparing and moving the window until you find a match (a sketch follows after this list of cases). This algorithm does not care about words, so you just treat the whole search as one big string, including spaces. This is probably what most browsers do. KMP or Boyer-Moore is not worth the effort, since the quadratic case is very rare.
Lots of queries on one large document. Preprocess your document and store it in preprocessed form. Common storage options are suffix trees and inverted lists. If you have multiple documents, you can build one document by concatenating them and storing the document boundaries separately. This is the way to go for document databases where the collection is almost constant.
If you have several documents with high redundancy and your collection changes often, use KMP or Boyer-Moore. For example, if you want to find certain sequences in DNA data, and you often get new sequences to search for as well as new DNA from experiments, the quadratic worst case of the naive algorithm would kill your running time.
There are probably lots more possibilities that need different algorithms and data structures, so you should figure out which one is the best in your case.
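A minimal sketch of the naive window search from the first case (assuming the whole text fits in memory as a single String; the method name is illustrative):

```java
// Naive window search: quadratic worst case, near-linear on natural-language text.
// Returns the offset of the first match, or -1 if there is none.
static int naiveSearch(String text, String query) {
    int n = text.length(), m = query.length();
    for (int i = 0; i + m <= n; i++) {
        int j = 0;
        while (j < m && text.charAt(i + j) == query.charAt(j)) j++;
        if (j == m) return i; // full window matched
    }
    return -1;
}
```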

Some more detail is required before suggesting an approach:
Are you searching for whole words only, or for any substring?
Are you going to search for many different words in the same unchanged file?
Do you know all of the words you want to search for at once?
There are many efficient (linear-time) search algorithms for strings. If possible, I'd suggest using one that's already been written for you.
http://en.wikipedia.org/wiki/String_searching_algorithm
One simple idea is to use a sliding-window hash, with the window the same size as the search string. Then, in a single pass, you can quickly check where the window hash matches the hash of your search string. Wherever it matches, you double-check to see whether you've got a real match.
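A minimal sketch of that idea (a Rabin-Karp-style rolling hash; the base and modulus here are arbitrary choices):

```java
// Sliding-window hash search: the window hash is updated in O(1) per shift.
static int rollingHashSearch(String text, String pattern) {
    final int BASE = 256;
    final long MOD = 1_000_000_007L;
    int n = text.length(), m = pattern.length();
    if (m == 0 || m > n) return -1;

    long patHash = 0, winHash = 0, pow = 1;
    for (int i = 0; i < m - 1; i++) pow = pow * BASE % MOD; // BASE^(m-1) mod MOD
    for (int i = 0; i < m; i++) {
        patHash = (patHash * BASE + pattern.charAt(i)) % MOD;
        winHash = (winHash * BASE + text.charAt(i)) % MOD;
    }
    for (int i = 0; ; i++) {
        // Hashes match: double-check the characters to rule out a collision.
        if (winHash == patHash && text.regionMatches(i, pattern, 0, m)) return i;
        if (i + m >= n) return -1;
        winHash = (winHash - text.charAt(i) * pow % MOD + MOD) % MOD; // drop left char
        winHash = (winHash * BASE + text.charAt(i + m)) % MOD;        // add right char
    }
}
```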

Related

Text classification: how to convert text strings to a vector representation

I am working on a text classification program. My training data is 700+ text categories, and each category contains 1-350 text phrases - 16k+ phrases in total. The data that needs to be classified consists of text phrases. I want to classify each input so that it gives me the 5 most similar categories. The training data shares a lot of common words.
My first attempt used the Naive Bayes theorem, using this library on GitHub, because the library was very easy to use and allowed me to load my training data as strings. But other users reported issues, and when I tried to classify my data, my input was either classified wrongly or not classified at all.
https://github.com/ptnplanet/Java-Naive-Bayes-Classifier
So I think the library was the issue, so I'm going to try different libraries and look into k-means clustering, since my data has high variance.
When looking at other libraries, they all require the input and training data as a vector matrix. I looked at word2vec and tf-idf to convert text to vectors. I understand tf-idf, and that I can get the weight of a word relative to the rest of the documents. But how can I use it to classify my input data into categories? Would each category be a document? Or would all categories together be a single document?
edit:data sample
SEE_BILL-see bill
SEE_BILL-bill balance
SEE_BILL-wheres my bill
SEE_BILL-cant find bill
PAY_BILL-pay bill
PAY_BILL-make payment
PAY_BILL-lower balance
PAY_BILL-remove balance
PAST_BILL-last bill
PAST_BILL-previous bill
PAST_BILL-historical bill
PAST_BILL-bill last year
First of all, the end of your question doesn't quite make sense, because you didn't say what classes you want to classify the text phrases into. Now, I can help you with the vectorization of the text phrases.
Tf-idf is pretty good, but you need good preprocessing for it to work. You would also have to create the vectors yourself. The problem is that the vector's length equals the number of distinct words in your dataset, counting the same word in all the different forms in which it occurs. So if you have the word go in your dataset, it's likely that several forms of it will appear, including going, Go, gone, went, and so on. That's why you need good preprocessing to reduce all of those forms of go to its root form. You also have to lowercase the whole dataset, because go and Go would otherwise count as different words. But even if you do all of that and build a perfect preprocessing pipeline, you will get a vector of length 20k+. You would then have to manually select the features (words) you want to keep and delete the others. That means that if you want a vector of size 300, you have to delete 19,700 words from the vector, keeping only the 300 most distinctive ones. If you want to dive into it deeper and see exactly how it works, you can check it out here.
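To make the mechanics concrete, here is a minimal tf-idf sketch over a list of phrases, treating each phrase as a "document" (lowercasing and whitespace tokenization only, no stemming; the smoothing in the idf term is one common choice among several):

```java
import java.util.*;

static Map<String, Double> tfidf(String phrase, List<String> allPhrases) {
    String[] terms = phrase.toLowerCase().split("\\s+");
    Map<String, Double> weights = new HashMap<>();
    for (String t : terms) {
        // Term frequency: how often t occurs in this phrase.
        long tf = Arrays.stream(terms).filter(t::equals).count();
        // Document frequency: how many phrases contain t.
        long df = allPhrases.stream()
                .filter(p -> Arrays.asList(p.toLowerCase().split("\\s+")).contains(t))
                .count();
        double idf = Math.log((double) allPhrases.size() / (1 + df)); // smoothed
        weights.put(t, tf * idf);
    }
    return weights;
}
```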
On the other hand, word2vec maps any word to a 300-dimensional vector. As with tf-idf, you have to do some preprocessing, but this method is much less sensitive to it. You can find out how word2vec works here.
In conclusion, I would recommend going with word2vec, because it's much easier to start with. There is a pretrained model from Google which you can download here.
The two most popular approaches would be to:
represent each phrase/sentence as a bag of words, where you basically one-hot encode each word of the phrase and the dimension of the encoding is the dimension of your vocabulary (the total number of words); a sketch of this follows after the list
use embeddings based on popular models like word2vec, which put each word into an X-dimensional vector space (e.g. 300-dimensional), so each of your phrases/sentences becomes a sequence of X-dimensional vectors
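A minimal sketch of the first approach, with cosine similarity for ranking categories (the vocabulary map and vector layout are illustrative assumptions):

```java
import java.util.*;

// Turn a phrase into a term-count vector over a fixed vocabulary.
static int[] vectorize(String phrase, Map<String, Integer> vocab) {
    int[] v = new int[vocab.size()];
    for (String t : phrase.toLowerCase().split("\\s+")) {
        Integer idx = vocab.get(t);
        if (idx != null) v[idx]++; // out-of-vocabulary words are dropped
    }
    return v;
}

// Cosine similarity between two vectors; rank the categories by this
// score and keep the top 5 to answer the original question.
static double cosine(int[] a, int[] b) {
    double dot = 0, na = 0, nb = 0;
    for (int i = 0; i < a.length; i++) {
        dot += (double) a[i] * b[i];
        na  += (double) a[i] * a[i];
        nb  += (double) b[i] * b[i];
    }
    return dot == 0 ? 0 : dot / (Math.sqrt(na) * Math.sqrt(nb));
}
```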
An even more extreme approach is to embed whole sentences using models like the Universal Sentence Encoder. In short: it's similar to word2vec, but instead of words it converts whole sentences into a (512-dimensional) vector space. It's then easier to find "similar" sentences.

Efficient way of storing and matching names against large data sets

For a data-loss-prevention-style tool, I have a requirement to look up different types of data, such as driver's license numbers, social security numbers, names, etc. While most of these are pattern-based, and hence could be found using pattern matching with regular expressions, names are a very broad category: virtually any set of characters could form a name. However, to make the lookup meaningful, I think I should only match against a defined dictionary of names. Here is what I am thinking.
Provide a dictionary of names as a configuration item. This seems sensible, as the names might vary across geographic regions for each use case. I am looking for best practices for doing this in Java. Basically, these are the questions:
What is a good data structure for storing the names? A Set comes to mind as the first option; are there better options, like in-memory databases?
How should I go about searching for these names in the large data sets? These data sets are really large, and I can only read them row by row.
Any other option?
Take a look at the concurrent-trees and CQEngine projects.
You can do it with full-text indexing or online search.
I would prefer full-text indexing, e.g. with Lucene. You will have to define how the indexer finds tokens in the text (by defining the token patterns and the don't-care patterns).
Known patterns (e.g. license numbers) should be annotated at indexing time with their type. Querying the index for an annotated type (e.g. license number) will then return all contained license numbers.
Flexible patterns (like names) should be indexed as tokens. You can then iterate over the collection of legal names and query the index for each one.
This approach is not the most flexible, but it is very robust to changes in the set of data files (simply add the new file to the index) or in the set of names (simply query the new name against the index).
With this approach it is not really performance-relevant how you store the set of names.
The other approach would be to search for multiple strings (names) directly. Note that there are special search algorithms for multiple strings, and that most algorithms have a preferred range of parameters (pattern size, alphabet size, number of patterns to search). You can get some impressions at StringBench.
This approach allows you more flexible string patterns.
However, it is not robust to modifications of the set of names (the complete search then has to be repeated).
Multi-string search algorithms usually accept a set of strings to search for, but store this set in an algorithm-specific way (most use a trie); a sketch of that idea follows.
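For illustration, a compact sketch of the classic trie-based multi-string algorithm (Aho-Corasick): all patterns go into one trie, failure links are added with a BFS, and the text is scanned once regardless of how many names you search for.

```java
import java.util.*;

class AhoCorasick {
    private static class Node {
        Map<Character, Node> next = new HashMap<>();
        Node fail;
        List<String> out = new ArrayList<>(); // patterns ending at this node
    }

    private final Node root = new Node();

    void add(String pattern) {
        Node n = root;
        for (char c : pattern.toCharArray())
            n = n.next.computeIfAbsent(c, k -> new Node());
        n.out.add(pattern);
    }

    // Build failure links; call once after all add() calls.
    void build() {
        Deque<Node> queue = new ArrayDeque<>();
        for (Node child : root.next.values()) {
            child.fail = root;
            queue.add(child);
        }
        while (!queue.isEmpty()) {
            Node n = queue.remove();
            for (Map.Entry<Character, Node> e : n.next.entrySet()) {
                Node child = e.getValue();
                Node f = n.fail;
                while (f != null && !f.next.containsKey(e.getKey())) f = f.fail;
                child.fail = (f == null) ? root : f.next.get(e.getKey());
                child.out.addAll(child.fail.out); // inherit shorter matches
                queue.add(child);
            }
        }
    }

    // Single pass over the text; reports "pattern@startIndex" for each hit.
    List<String> search(String text) {
        List<String> hits = new ArrayList<>();
        Node n = root;
        for (int i = 0; i < text.length(); i++) {
            char c = text.charAt(i);
            while (n != root && !n.next.containsKey(c)) n = n.fail;
            n = n.next.getOrDefault(c, root);
            for (String p : n.out) hits.add(p + "@" + (i - p.length() + 1));
        }
        return hits;
    }
}
```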
edit:
Efficient search for multiple patterns/strings can be done with DFA-based automata.
The first time I wanted to search text efficiently, I chose dk.brics.automaton. Its automaton is very efficient, yet it is optimized for matching, not for searching (search is done in a naive way).
I then shifted to my own implementation, rexlex. It is DFA-based, but slightly slower than brics. Its search algorithm is not as naive as the one in brics, but it adds some overhead.
You can find a link to a benchmark comparing both. The benchmark visualizes the problem of DFA-based regexes: the time to compile such a DFA can get very expensive if the regex is large.
I currently favor the stringandchars implementation of multi-string/pattern search. It is focused on search performance, yet I do not know how it compares to the solutions above. The most common case, searching for multiple regex patterns in a text, will be much more performant than in the above solutions.

Most efficient way to exclude words from hashing

I am working on a small project that will essentially search for a user-given word in multiple text files. I plan to accomplish this by hashing each file into a large hash table prior to the search, then hashing the user's chosen word and comparing it against the hash table.
My issue is that I would like to exclude certain common words, like "the", from my hashing. The two ways I have thought of to do this are as follows:
Create a regex that is essentially "\bword1\b|\bword2\b|" and so on, and then do a String.replaceAll(regex, "") to remove those words from the text before I start hashing.
As I process each word, do a String.matches(regex) to check whether that word falls into my regex of excluded words. If it does, simply skip to the next word.
I feel like these two solutions are very similar, and I am wondering if there might be a more efficient way of doing this.
I would suggest maintaining a HashSet of stopwords (that's the official term in the field of Information Retrieval). You just check stopwords.contains(word).
Let me also suggest a technique used to quickly search for words in documents: the inverted index. Don't maintain a hash map per file; maintain a single hash map whose keys are words and whose values are sets of IDs of the documents containing each word.
Then, if you want to find all documents containing two given words, you can serve that request by just fetching the two sets and computing their intersection.
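A minimal sketch of both ideas together - a stopword set plus an inverted index, with set intersection for two-word queries (the file handling and tokenization here are simplistic assumptions):

```java
import java.nio.file.*;
import java.util.*;

// Build word -> set of file names, skipping stopwords.
static Map<String, Set<String>> buildIndex(List<Path> files, Set<String> stopwords)
        throws java.io.IOException {
    Map<String, Set<String>> index = new HashMap<>();
    for (Path file : files)
        for (String line : Files.readAllLines(file))
            for (String word : line.toLowerCase().split("\\W+")) {
                if (word.isEmpty() || stopwords.contains(word)) continue;
                index.computeIfAbsent(word, k -> new HashSet<>())
                     .add(file.getFileName().toString());
            }
    return index;
}

// Documents containing both words: intersect the two posting sets.
static Set<String> containingBoth(Map<String, Set<String>> index, String a, String b) {
    Set<String> result = new HashSet<>(index.getOrDefault(a, Set.of()));
    result.retainAll(index.getOrDefault(b, Set.of()));
    return result;
}
```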

How to search for a given word in a huge database?

What's the most efficient method to search for a word in a dictionary database? I searched for an answer, and people suggested using a trie data structure. But the strategy of building the tree for a huge number of words would overload primary memory. I am trying to make an Android app that involves this implementation, for my data structures project. So could anyone tell me how such dictionaries work?
Even when I use the T9 dictionary on my phone, word suggestions appear very quickly on the screen. I'm curious to know the algorithm and the design behind it.
You can use a trie, which is very useful for searching big dictionaries. Because many words share common prefixes, a trie keeps the search within roughly a constant factor per character; it can also be used in place, with a limited number of accesses to physical memory. You can find lots of implementations on the web.
If someone is not familiar with tries, I think this site is good, and I'm just quoting their sample here:
A trie (from retrieval) is a multi-way tree structure useful for storing strings over an alphabet. It has been used to store large dictionaries of English (say) words in spelling-checking programs and in natural-language "understanding" programs. Given the data:
an, ant, all, allot, alloy, aloe, are, ate, be
the corresponding trie would be: [trie diagram from the original site, not reproduced here]
This is a good practical trie implementation in Java:
http://code.google.com/p/google-collections/issues/detail?id=5
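For reference, a minimal trie sketch in Java (insertion and exact-word lookup only; each of the sample words above would be added with insert()):

```java
import java.util.*;

class Trie {
    private final Map<Character, Trie> children = new HashMap<>();
    private boolean isWord; // true if a dictionary word ends at this node

    void insert(String word) {
        Trie node = this;
        for (char c : word.toCharArray())
            node = node.children.computeIfAbsent(c, k -> new Trie());
        node.isWord = true;
    }

    boolean contains(String word) {
        Trie node = this;
        for (char c : word.toCharArray()) {
            node = node.children.get(c);
            if (node == null) return false;
        }
        return node.isWord;
    }
}
```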
There are lots of ways to do that. The one that I used some time ago (which is especially good if you don't make changes to your dictionary) is to create a prefix index.
That is, you sort your entries lexicographically. Then you save the (end) positions of the ranges for different first letters. For example, if your entries have indexes from 1 to 1000, and the words "aardvark" through "azerbaijan" occupy the range from 1 to 200, you make an entry in a separate table, "a | 200"; then you do the same for first-and-second-letter pairs. When you need to find a particular word, this greatly reduces the search scope. In my case, an index on the first two letters was quite sufficient.
Again, this method requires you to use a DB, like SQLite, which I think is present on Android.
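An in-memory sketch of the same idea (two-letter prefix ranges over a sorted word list; the table layout is an illustrative assumption):

```java
import java.util.*;

// For each two-letter prefix, record the [start, end] index range it
// occupies in the lexicographically sorted word list.
static Map<String, int[]> buildPrefixRanges(List<String> sortedWords) {
    Map<String, int[]> ranges = new LinkedHashMap<>();
    for (int i = 0; i < sortedWords.size(); i++) {
        String w = sortedWords.get(i);
        String prefix = w.substring(0, Math.min(2, w.length()));
        int[] r = ranges.get(prefix);
        if (r == null) ranges.put(prefix, new int[]{i, i});
        else r[1] = i; // extend this prefix's range to the current index
    }
    return ranges;
}

// Binary search restricted to the slice belonging to the word's prefix.
static boolean lookup(List<String> sortedWords, Map<String, int[]> ranges, String word) {
    int[] r = ranges.get(word.substring(0, Math.min(2, word.length())));
    if (r == null) return false;
    return Collections.binarySearch(sortedWords.subList(r[0], r[1] + 1), word) >= 0;
}
```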
Using a trie is indeed space-consuming: I realized this when I checked my RAM usage after loading 150,000 words into a trie - the usage was 150 MB (the trie was implemented in C++). The memory consumption was largely due to pointers. I ended up with ternary tries, which waste much less memory, around 30 MB (compared to 150 MB), but the time complexity increases a bit. Another option is the "left child, right sibling" representation, in which there is even less memory wastage, but whose time complexity is higher than that of the ternary trie.

Is there a fast Java library to search for a string and its position in file?

I need to search a big number of files (e.g. 600 files, 0.5 MB each) for a specific string.
I'm using Java, so I'd prefer the answer to be a Java library or in the worst case a library in a different language which I could call from Java.
I need the search to return the exact position of the found string in a file (so it seems Lucene for example is out of the question).
I need the search to be as fast as possible.
EDIT START:
The files might have different formats (e.g. EDI, XML, CSV) and sometimes contain pretty random data (e.g. numerical IDs). This is why I preliminarily ruled out an index-based search engine.
The files will be searched multiple times for similar but different strings (e.g. IDs, which might have similar length and format but will usually differ).
EDIT END
Any ideas?
600 files of 0.5 MB each is about 300 MB - that can hardly be considered big nowadays, let alone large. A simple string search on any modern computer should actually be more I/O-bound than CPU-bound: a single thread on my system can search 300 MB for a relatively simple regular expression in under 1.5 seconds, which goes down to 0.2 seconds if the files are already in the OS cache.
With that in mind, if your purpose is to perform such a search infrequently, then using some sort of index may result in an over-engineered solution. Start by iterating over all the files, reading each block by block or line by line, and searching - this is simple enough that it barely merits its own library. A sketch of this baseline follows.
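A sketch of that baseline (Java 11+ for Files.readString; it reports character offsets, which equal byte offsets for single-byte encodings):

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.*;

// Scan each file in memory and report every occurrence as "file:offset".
// At 0.5 MB per file, each file fits comfortably in memory.
static List<String> findAll(List<Path> files, String needle) throws IOException {
    List<String> hits = new ArrayList<>();
    for (Path file : files) {
        String content = Files.readString(file);
        for (int pos = content.indexOf(needle); pos >= 0;
                 pos = content.indexOf(needle, pos + 1)) {
            hits.add(file + ":" + pos);
        }
    }
    return hits;
}
```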
Set down your performance requirements, profile your code, verify that the actual string search is the bottleneck and then decide whether a more complex solution is warranted. If you do need something faster, you should first consider the following solutions, in order of complexity:
Use an existing indexing engine, such as Lucene, to filter out the bulk of the files for each query and then explicitly search in the (hopefully few) remaining files for your string.
If your files are not really text, so that word-based indexing would not work, preprocess the files to extract a term list for each file and use a DB to create your own indexing system - I doubt you will find an FTS engine that uses anything other than words for its indexing.
If you really want to reduce the search time to the minimum, extract term/position pairs from your files and enter those into your DB. You may still have to verify matches by looking at the actual file, but it would be significantly faster.
PS: You do not mention at all what kind of strings we are discussing. Do they contain delimited terms, e.g. words, or do your files contain random characters? Can the search string be broken into substrings in a meaningful manner, or is it a bunch of letters? Is your search string fixed, or could it also be a regular expression? The answer to each of these questions could significantly limit what is and is not actually feasible - for example, indexing random strings may not be possible at all.
EDIT:
From the question update, it seems that the concept of a term/token is generally applicable, as opposed to e.g. searching for totally random sequences in a binary file. That means that you can index those terms. By searching the index for any tokens that exist in your search string, you can significantly reduce the cases where a look at the actual file is needed.
You could keep a term->file index. If most terms are unique to each file, this approach might offer a good complexity/performance trade-off. Essentially you would narrow down your search to one or two files and then perform a full search on those files only.
You could keep a term->file:position index. For example, if your search string is "Alan Turing", you would first search the index for the tokens "Alan" and "Turing". You would get two lists of files and positions that you could cross-reference. By requiring, e.g., that the positions of the token "Alan" precede those of the token "Turing" by at most, say, 30 characters, you would get a list of candidate positions in your files that you could verify explicitly. A sketch of this proximity check follows.
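A sketch of that cross-referencing step for a single file, assuming you already have a term -> positions map from the index (the 30-character gap is the example's choice):

```java
import java.util.*;

// Candidate start positions where `first` is followed by `second`
// within maxGap characters. Position lists come from the index.
static List<Integer> nearby(Map<String, List<Integer>> positions,
                            String first, String second, int maxGap) {
    List<Integer> candidates = new ArrayList<>();
    for (int a : positions.getOrDefault(first, List.of()))
        for (int b : positions.getOrDefault(second, List.of()))
            if (b > a && b - a <= maxGap) candidates.add(a);
    return candidates; // verify each candidate against the actual file
}
```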
I am not sure to what degree existing indexing libraries would help. Most are targeted towards text indexing and may mishandle other types of tokens, such as numbers or dates. On the other hand, your case is not fundamentally different either, so you might be able to use them - if necessary, by preprocessing the files you feed them to make them more palatable. Building an indexing system of your own, tailored to your needs, does not seem too difficult either.
You still haven't mentioned whether there is any kind of flexibility in your search string. Do you expect to be able to search for regular expressions? Is the search string expected to be found verbatim, or do you need to find just the terms in it? Does whitespace matter? Does the order of the terms matter?
And more importantly, you haven't mentioned if there is any kind of structure in your files that should be considered while searching. For example, do you want to be able to limit the search to specific elements of an XML file?
Unless you have an SSD, your main bottleneck will be all the file accesses. It's going to take about 10 seconds to read the files, regardless of what you do in Java.
If you have an SSD, reading the files won't be a problem, and the CPU speed in Java will matter more.
If you can create an index for the files this will help enormously.
