Handling special characters with Lucene - Java

I haven't found an answer to my problem, so I decided to write up my question to get some help.
I use Lucene to index objects in memory (they exist only in my Java code). While processing the code, I index (using WhitespaceAnalyzer) a field with the value objA/4.
My problem starts when I want to find it after indexing (also using WhitespaceAnalyzer).
When I create the query obj*, I find all objects that start with obj; if I create the query objA/4, I can also find this object.
However, I don't know how to find all objects starting with objA/. When I create the query objA/*, Lucene changes it to obja/* and finds nothing.
I've checked, and "/" is not a special character, so I don't need to precede it with "\".
So my question is: how do I ask for all objects that start with objA/ (for example objA/0, objA/1, objA/2, objA/3)?

Are you using QueryParser.escape(String) to escape everything correctly?

The code I'm using:
String node = "objA/*";
Query node_query = MultiFieldQueryParser.parse(node, "nodeName", new WhitespaceAnalyzer());
BooleanQuery bq = new BooleanQuery();
bq.add(node_query, BooleanClause.Occur.MUST);
System.out.println("We're asking for - " + bq);
IndexSearcher looker = new IndexSearcher(rep_index);
Hits hits = looker.search(bq);
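If escaping isn't the issue, note that the lowercasing described in the question typically comes from the query parser, not the analyzer: older versions of the classic QueryParser lowercase wildcard and prefix terms before matching, while WhitespaceAnalyzer preserves case at index time, so objA/* is searched as obja/* and matches nothing. A minimal sketch of turning that off, using setLowercaseExpandedTerms (which exists on the classic QueryParser in older Lucene releases; field name and analyzer taken from the question):
// Sketch: keep the original case of wildcard terms so objA/* is not
// rewritten to obja/* before matching.
QueryParser parser = new QueryParser("nodeName", new WhitespaceAnalyzer());
parser.setLowercaseExpandedTerms(false);
Query nodeQuery = parser.parse("objA/*");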

Related

Redisearch query with "begin with" instead of "contains"

I am trying to understand how to perform queries in RediSearch strictly as "begins with", but I keep getting "contains" behavior.
For example, if I have fields with values like 'football', 'myfootball', and 'greenfootball' and provide a search term like this:
> FT.SEARCH myIdx #myfield:foot*
I want to get just 'football', but I keep getting other fields that contain the word rather than begin with it.
Is there a way to avoid this?
I was trying to use VERBATIM and things like #myfield:^foot*, but nothing worked.
I am using JRedisearch as a client, but eventually I had to go into the DB and run these queries manually to figure out what was happening. That said, is this possible to do with this client at the moment?
Thanks
EDIT
A sample of my index setup:
Client client = new Client(INDEX_NAME, url, PORT);
Schema sc = new Schema().addSortableTextField("url", 1.0); // using this field for query
client.dropIndex(true);
client.createIndex(sc, Client.IndexOptions.Default());
return client;
Sample document:
id: // random uuid
urlPath: myfootbal
application: web
market: Europe
After checking the RDB you provided, I see that when searching for foot* you are not getting myfootbal. The replies look like this: /dot-com/plp/football/x/index.html. You are getting those replies because the url is tokenized, and '/' is one of the tokenization characters. If you do not want those urls to be tokenized, you need to declare the field as TAG and not as TEXT. That way the entire url is indexed as-is, and it will not appear in the results when you search for foot*.
For more information about TAG fields, see the FT.CREATE documentation: https://oss.redislabs.com/redisearch/Commands.html
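If declaring the field as TAG via JRedisearch, the setup from the question would change along these lines (a sketch; addTagField is the JRedisearch call for declaring TAG fields, and prefix matching against a TAG field uses the {...} query syntax, assuming your RediSearch version supports it):
// Sketch: index the url field as TAG so it is stored as a single token
// instead of being split on '/' and other separator characters.
Client client = new Client(INDEX_NAME, url, PORT);
Schema sc = new Schema().addTagField("url");
client.dropIndex(true);
client.createIndex(sc, Client.IndexOptions.Default());
// Query side, e.g. in redis-cli: FT.SEARCH myIdx "@url:{foot*}"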

Mimic Elasticsearch MatchQuery

I'm writing a program that currently uses Elasticsearch as a back-end database/search index. I'd like to mimic the functionality of the /_search endpoint, which currently uses a match query:
{
  "query": {
    "match" : {
      "message" : "Neural Disruptor"
    }
  }
}
Doing some sample queries yielded the following results against a massive World of Warcraft database:
Search Term Search Result
------------------ -----------------------
Neural Disruptor Neural Needler
Lovly bracelet Ruby Bracelet
Lovely bracelet Lovely Charm Bracelet
After looking through Elasticsearch's documentation, I found that the match query is fairly complex. What's the easiest way to simulate a match query with just Lucene in Java? (It appears to do some fuzzy matching, as well as looking for terms.)
Importing the Elasticsearch code for MatchQuery (I believe org.elasticsearch.index.search.MatchQuery) doesn't seem to be that easy. It's heavily embedded in Elasticsearch and doesn't look like something that can be easily pulled out.
I don't need a foolproof "must match exactly what Elasticsearch matches"; I just need something close, or something that can fuzzy match and find the best match.
Whatever is sent to the q= parameter of the _search endpoint is used as-is by the query_string query (not org.elasticsearch.index.search.MatchQuery), which understands the Lucene expression syntax.
The query parser syntax is defined in the Lucene project using JavaCC, and the grammar can be found here if you wish to have a look. The end product is a class called QueryParser (see below).
The class inside the ES source code that is responsible for parsing the query string is QueryStringQueryParser, which delegates to Lucene's QueryParser class (generated by JavaCC).
So basically, if you get an equivalent query string to what gets passed to _search?q=..., you can use that query string with QueryParser.parse("query-string-goes-here") and run the resulting Query using just Lucene.
It's been a while since I've worked directly with Lucene, but what you want should, initially, be fairly straightforward. The base behavior of a Lucene query is very similar to the match query (query_string is exactly equivalent to Lucene, but match is very close). I put together a small example that works just with Lucene (7.2.1) if you want to try it out. The main code is as follows:
import java.io.IOException;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.queryparser.classic.ParseException;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.RAMDirectory;

public class LuceneMatchExample {

    public static void main(String[] args) throws Exception {
        // Create the in-memory Lucene index
        RAMDirectory ramDir = new RAMDirectory();
        // Create the analyzer (has default stop words)
        Analyzer analyzer = new StandardAnalyzer();
        // Create a set of documents to work with
        createDocs(ramDir, analyzer);
        // Query the set of documents
        queryDocs(ramDir, analyzer);
    }

    private static void createDocs(RAMDirectory ramDir, Analyzer analyzer)
            throws IOException {
        // Set up the configuration for the index
        IndexWriterConfig config = new IndexWriterConfig(analyzer);
        config.setOpenMode(IndexWriterConfig.OpenMode.CREATE);
        // IndexWriter creates and maintains the index
        IndexWriter writer = new IndexWriter(ramDir, config);
        // Create the documents
        indexDoc(writer, "document-1", "hello planet mercury");
        indexDoc(writer, "document-2", "hi PLANET venus");
        indexDoc(writer, "document-3", "howdy Planet Earth");
        indexDoc(writer, "document-4", "hey planet MARS");
        indexDoc(writer, "document-5", "ayee Planet jupiter");
        // Close down the writer
        writer.close();
    }

    private static void indexDoc(IndexWriter writer, String name, String content)
            throws IOException {
        Document document = new Document();
        document.add(new TextField("name", name, Field.Store.YES));
        document.add(new TextField("body", content, Field.Store.YES));
        writer.addDocument(document);
    }

    private static void queryDocs(RAMDirectory ramDir, Analyzer analyzer)
            throws IOException, ParseException {
        // IndexReader maintains access to the index
        IndexReader reader = DirectoryReader.open(ramDir);
        // IndexSearcher handles searching of an IndexReader
        IndexSearcher searcher = new IndexSearcher(reader);
        // Set up a query
        QueryParser parser = new QueryParser("body", analyzer);
        Query query = parser.parse("hey earth");
        // Search the index
        TopDocs foundDocs = searcher.search(query, 10);
        System.out.println("Total Hits: " + foundDocs.totalHits);
        for (ScoreDoc scoreDoc : foundDocs.scoreDocs) {
            // Get the doc from the index by id
            Document document = searcher.doc(scoreDoc.doc);
            System.out.println("Name: " + document.get("name")
                    + " - Body: " + document.get("body")
                    + " - Score: " + scoreDoc.score);
        }
        // Close down the reader
        reader.close();
    }
}
The important parts to extending this are the analyzer and understanding the Lucene query parser syntax.
The Analyzer is used by both indexing and querying so that both sides parse text the same way. It sets up how to tokenize (what to split on, whether to lowercase, etc.). The StandardAnalyzer splits on spaces and a few other characters (I don't have the full list handy) and also applies lowercasing.
The QueryParser does some of the work for you. As you can see above in my example, I do two things: I tell the parser what the default field is, and I pass it the string hey earth. The parser turns this into a query that looks like body:hey body:earth. This looks for documents that have either hey or earth in the body. Two documents will be found.
If we were to pass hey AND earth, the query is parsed into +body:hey +body:earth, which requires docs to have both terms. Zero documents will be found.
To apply fuzzy options, you add a ~ to the terms you want to be fuzzy. So if the query is hey~ earth, fuzziness is applied to hey and the query looks like body:hey~2 body:earth. Three documents will be found.
You can also write queries more directly and the parser still handles things. If you pass it hey name:\"document-1\" (it splits tokens on -), it creates a query like body:hey name:"document 1". Two documents will be returned, as it looks for the phrase document 1 (since it still tokenizes on the -). Whereas if I pass hey name:document-1, it writes body:hey (name:document name:1), which returns all documents, since they all have document as a term. There is some nuance to understanding this.
I'll try to cover a bit more of how they are similar, referencing the match query documentation. Elastic says the main difference is: "It does not support field name prefixes, wildcard characters, or other "advanced" features." These would probably stand out more going the other direction.
Both the match query and the Lucene query, when working with an analyzed field, take the query string and apply the analyzer to it (tokenize it, lowercase it, etc.). So both turn HEY Earth into a query that looks for the terms hey or earth.
A match query can set the operator by providing "operator" : "and". This changes our query to look for both hey and earth. The analog in Lucene is something like parser.setDefaultOperator(QueryParser.Operator.AND);
The next thing is fuzziness. Both work with the same settings. I believe Elastic's "fuzziness": "AUTO" is equivalent to Lucene's auto when applying ~ to a query term (though I think you have to add it to each term yourself, which is a little cumbersome).
Zero terms query appears to be an Elastic construct. If you wanted the ALL setting, you would have to replicate the match-all behavior yourself when the query parser removes all tokens from the query, as sketched below.
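A minimal sketch of that fallback (userInput is a placeholder; BooleanQuery and MatchAllDocsQuery come from org.apache.lucene.search, and depending on the Lucene version, parsing an all-stopwords string may yield null or an empty BooleanQuery, so both are checked):
// Sketch: mimic zero_terms_query "all" by falling back to MatchAllDocsQuery
// when the analyzer has stripped every token from the input.
Query query = parser.parse(userInput);
if (query == null
        || (query instanceof BooleanQuery && ((BooleanQuery) query).clauses().isEmpty())) {
    query = new MatchAllDocsQuery();
}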
Cutoff frequency looks to be related to the CommonTermsQuery. I've not used it, so you may have some digging to do if you want it.
Lucene has a synonym filter that can be applied to an analyzer, but you may need to build the synonym map yourself; a sketch follows.
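A minimal sketch of such an analyzer (Lucene 7.x classes from org.apache.lucene.analysis.synonym, plus CharsRef from org.apache.lucene.util; the synonym pair is made up for illustration):
// Sketch: build a small synonym map by hand and wire SynonymGraphFilter
// into a custom analyzer.
static Analyzer synonymAnalyzer() throws IOException {
    SynonymMap.Builder builder = new SynonymMap.Builder(true);
    builder.add(new CharsRef("hey"), new CharsRef("hi"), true); // made-up pair
    final SynonymMap synonyms = builder.build();
    return new Analyzer() {
        @Override
        protected TokenStreamComponents createComponents(String fieldName) {
            Tokenizer source = new StandardTokenizer();
            TokenStream stream = new LowerCaseFilter(source);
            stream = new SynonymGraphFilter(stream, synonyms, true);
            return new TokenStreamComponents(source, stream);
        }
    };
}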
The differences you may find will probably be in scoring. When I run the query hey earth against Lucene, document-3 and document-4 are both returned with a score of 1.3862944. When I run the query in the form of:
curl -XPOST http://localhost:9200/index/_search?pretty -d '{
  "query" : {
    "match" : {
      "body" : "hey earth"
    }
  }
}'
I get the same documents, but with a score of 1.219939. You can run an explain on both of them. In Lucene, by printing each document's explanation with:
System.out.println(searcher.explain(query, scoreDoc.doc));
And in Elastic, by querying each document like:
curl -XPOST http://localhost:9200/index/docs/3/_explain?pretty -d '{
  "query" : {
    "match" : {
      "body" : "hey earth"
    }
  }
}'
I get some differences, but I cannot exactly explain them. I do actually get a value for the doc of 1.3862944, but the fieldLength is different, and that affects the weight.

Java Selenium - Find string text from multiple divs with same class name using xpath

I'm hoping you can help me.
I've been going through all sorts of forums and questions on here about how to cycle through multiple divs with the same class name using XPath queries. I'm fairly new to WebDriver and Java, so I'm probably not asking the question properly.
I have a table where I'm trying to identify the values within and ensure they're correct. Each field has the same class identifier, and I'm able to successfully pull back the first result and confirm it via report logging using the following:
String className1 = driver.findElement(By.xpath("(//div[@class='table_class'])")).getText();
Reporter.log("=====Class Identified as "+className1+"=====", true);
However, when I then try to cycle through (I've seen multiple answers saying to add a [2] suffix to the XPath query), I'm getting a compile error:
String className2 = driver.findElement(By.xpath("(//div[@class='table_class'])")[2]).getText();
Reporter.log("=====Class Identified as "+className2+"=====", true);
The above gives an error saying "The type of the expression must be an array type but it resolved to By".
I'm not 100% sure how to structure this in order to set up an array and then cycle through it.
Whilst this is just verifying field labels for now, ultimately I need to use this approach to verify the subsequent data that is pulled through, and I'll have the same problem then.
You are getting the error for:
String className2 = driver.findElement(By.xpath("(//div[@class='table_class'])")[2]).getText();
because you are using the index in the wrong way. Modify it to:
String className2 = driver.findElement(By.xpath("(//div[@class='table_class'])[2]")).getText();
or
String className2 = driver.findElement(By.xpath("(//div[@class='table_class'][2])")).getText();
And the better way to do this is:
int index = 1;
List<WebElement> allElement = driver.findElements(By.xpath("(//div[@class='table_class'])"));
for (WebElement element : allElement) {
    String className = element.getText();
    Reporter.log("=====Class Identified as " + className + " " + index + "=====", true);
    index++;
}
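And if you only need one specific element rather than the whole cycle, you can index into that same list (a sketch; note that List indexes are zero-based, unlike the XPath [2] position):
// Sketch: fetch the second matching div directly from the list.
List<WebElement> cells = driver.findElements(By.xpath("//div[@class='table_class']"));
String secondCell = cells.get(1).getText(); // zero-based: index 1 is the 2nd match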

Apache Lucene: How to use TokenStream to manually accept or reject a token when indexing

I am looking for a way to write a custom index with Apache Lucene (PyLucene, to be precise, but a Java answer is fine).
What I would like to do is the following: when adding a document to the index, Lucene will tokenize it, remove stop words, etc. This is usually done with the Analyzer, if I am not mistaken.
What I would like to implement is the following: before Lucene stores a given term, I would like to perform a lookup (say, in a dictionary) to check whether to keep the term or discard it (if the term is present in my dictionary, I keep it; otherwise I discard it).
How should I proceed?
Here is (in Python) my custom implementation of the Analyzer:
class CustomAnalyzer(PythonAnalyzer):

    def createComponents(self, fieldName, reader):
        source = StandardTokenizer(Version.LUCENE_4_10_1, reader)
        filter = StandardFilter(Version.LUCENE_4_10_1, source)
        filter = LowerCaseFilter(Version.LUCENE_4_10_1, filter)
        filter = StopFilter(Version.LUCENE_4_10_1, filter,
                            StopAnalyzer.ENGLISH_STOP_WORDS_SET)

        ts = tokenStream.getTokenStream()
        token = ts.addAttribute(CharTermAttribute.class_)
        offset = ts.addAttribute(OffsetAttribute.class_)
        ts.reset()

        while ts.incrementToken():
            startOffset = offset.startOffset()
            endOffset = offset.endOffset()
            term = token.toString()
            # accept or reject term

        ts.end()
        ts.close()

        # How to store the terms in the index now ?
        return ????
Thank you in advance for your guidance!
EDIT 1: After digging into Lucene's documentation, I figured it had something to do with TokenStreamComponents. It returns a TokenStream with which you can iterate through the token list of the field you are indexing.
Now there is something to do with the Attributes that I do not understand. Or, more precisely, I can read the tokens but have no idea how I should proceed afterward.
EDIT 2: I found this post where they mention the use of CharTermAttribute. However (in Python, at least), I cannot access or get a CharTermAttribute. Any thoughts?
EDIT 3: I can now access each term; see the updated code snippet above. What is left to be done is actually storing the desired terms...
The way I was trying to solve the problem was wrong. This post and femtoRgon's answer were the solution.
By defining a filter extending PythonFilteringTokenFilter, I can make use of the accept() function (as used in the StopFilter, for instance).
Here is the corresponding code snippet:
class MyFilter(PythonFilteringTokenFilter):

    def __init__(self, version, tokenStream):
        super(MyFilter, self).__init__(version, tokenStream)
        self.termAtt = self.addAttribute(CharTermAttribute.class_)

    def accept(self):
        term = self.termAtt.toString()
        accepted = False
        # Do whatever is needed with the term
        # accepted = ... (True/False)
        return accepted
Then just append the filter to the other filters (as in the code snippet in the question):
filter = MyFilter(Version.LUCENE_4_10_1, filter)
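For reference, since the question says a Java answer is fine: the same idea expressed against Lucene's FilteringTokenFilter directly (a sketch using the 4.10-era API from the question; DictionaryFilter and the dictionary set are made-up names):
// Sketch: a token filter that keeps only terms present in a dictionary.
public final class DictionaryFilter extends FilteringTokenFilter {
    private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);
    private final Set<String> dictionary;

    public DictionaryFilter(TokenStream in, Set<String> dictionary) {
        super(in);
        this.dictionary = dictionary;
    }

    @Override
    protected boolean accept() {
        return dictionary.contains(termAtt.toString());
    }
}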

Lucene Highlighter Isn't Matching Prefixes

I'm using Lucene's Highlighter to highlight parts of a string. The code below seems to work fine for finding stemmed words, but not for prefix matching.
EnglishAnalyzer analyzer = new EnglishAnalyzer(Version.LUCENE_34);
QueryParser parser = new QueryParser(Version.LUCENE_30, "", analyzer);
Query query = parser.parse(pQuery);
QueryScorer scorer = new QueryScorer(query);
Fragmenter fragmenter = new SimpleSpanFragmenter(scorer, 40);
Highlighter highlighter = new Highlighter(scorer);
highlighter.setTextFragmenter(fragmenter);
String[] frags = highlighter.getBestFragments(analyzer, "", pText, 4);
I've read in a few different places that I need to call Query.rewrite to get prefix matching to work. That method takes an IndexReader argument, though, and I'm not sure how to get one. All of the examples I've found that call Query.rewrite don't show where the IndexReader came from. I'll add that this is the only Lucene code I'm using; I'm not using Lucene to do the searching itself, just for the highlighting.
How do I create an IndexReader, and is it possible to create one if I'm using Lucene the way that I am? Or perhaps there's a different way to get it to highlight the prefix matches? I'm very new to Lucene, and I'm not sure what all of these pieces do or whether they're all necessary. I've just copied them from various examples I've found online. So if I'm doing anything else wrong, please let me know. Thanks.
Suppose you have the query field:abc*. What query.rewrite basically does is read the index (this is why you need an IndexReader), find all terms that start with abc, and change your query to, for example, field:abc1 field:abc2 field:abc3. If you know the location of the index, you can use IndexReader.open to get an IndexReader. If you don't have an index at all, you should search your pText yourself, find all words that start with abc, and update your query accordingly.
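A sketch of the first option, matching the Lucene 3.x-era API in the question (the index path is a placeholder):
// Sketch: open a reader over the existing on-disk index and rewrite the
// query so prefix queries expand into concrete terms the highlighter sees.
IndexReader reader = IndexReader.open(FSDirectory.open(new File("/path/to/index")));
Query rewritten = query.rewrite(reader);
QueryScorer scorer = new QueryScorer(rewritten);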
