Hibernate Search Result Ranking - java

I am using Hibernate Search along with Lucene to implement full-text search on my database. I want to know whether a Hibernate Search query (or Lucene query) returns the top-ranked, most relevant results. The documentation says:
Apache Lucene provides a very flexible and powerful way to sort
results. While the default sorting (by relevance) is appropriate most
of the time
Link: http://docs.jboss.org/hibernate/search/4.2/reference/en-US/html_single/#search-query
Section: 5.1.3.3. Sorting
But I am very confused by the results, as they are always ordered by the IDs of the objects. I just need the top 100 most relevant records.

See Customizing Lucene's scoring formula

Sorting by relevance is affected by your Analyzer choices. If you are getting results in the order of primary keys, it is likely that they all have the same score. That is normally very unlikely, so my guess is that you're not enabling tokenization on any searched field.
Make sure you're tokenizing the fields used in the query and that they use an appropriate Analyzer. To pick an appropriate one you'll have to experiment a bit, as it depends on the language (if it's natural language) or on what kind of data you're indexing.
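For example, with Hibernate Search 4.x mapping annotations, tokenization is controlled per field (the Book entity and its fields here are just an illustration):

import javax.persistence.Entity;
import javax.persistence.Id;

import org.hibernate.search.annotations.Analyze;
import org.hibernate.search.annotations.Field;
import org.hibernate.search.annotations.Indexed;

@Entity
@Indexed
public class Book {

    @Id
    private Long id;

    // Analyze.YES (the default) runs the value through the Analyzer,
    // so individual terms are indexed and scored separately.
    @Field(analyze = Analyze.YES)
    private String title;

    // Analyze.NO indexes the whole value as a single token: exact
    // matches only, and typically identical scores across hits.
    @Field(analyze = Analyze.NO)
    private String isbn;
}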
To actually debug the sort order applied by the relevance sort, see the usage of projections in the Hibernate Search documentation: both FullTextQuery.SCORE and FullTextQuery.EXPLANATION can be very useful to understand what's going on.
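A sketch against the Hibernate Search 4.x API (the Book entity and the field name "title" are invented):

import java.util.List;

import org.apache.lucene.search.Query;
import org.hibernate.Session;
import org.hibernate.search.FullTextQuery;
import org.hibernate.search.FullTextSession;
import org.hibernate.search.Search;
import org.hibernate.search.query.dsl.QueryBuilder;

public static void printTop100(Session session) {
    FullTextSession fts = Search.getFullTextSession(session);
    QueryBuilder qb = fts.getSearchFactory().buildQueryBuilder()
            .forEntity(Book.class).get();
    Query luceneQuery = qb.keyword().onField("title")
            .matching("lucene").createQuery();

    FullTextQuery query = fts.createFullTextQuery(luceneQuery, Book.class);
    // project the score and Lucene's explanation next to each entity
    query.setProjection(FullTextQuery.SCORE, FullTextQuery.EXPLANATION, FullTextQuery.THIS);
    query.setMaxResults(100); // only the 100 most relevant hits

    @SuppressWarnings("unchecked")
    List<Object[]> rows = query.list();
    for (Object[] row : rows) {
        System.out.println("score=" + row[0]);
        System.out.println(row[1]); // how Lucene computed that score
    }
}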
A handy utility to quickly experiment with the effect of different Analyzers is org.hibernate.search.util.AnalyzerUtils. You can either write unit tests creating the Analyzer instance yourself, or retrieve analyzers by name using org.hibernate.search.engine.SearchFactory.getAnalyzer(String), or get the base one used for a specific indexed entity via org.hibernate.search.engine.SearchFactory.getAnalyzer(Class).
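If AnalyzerUtils is not on your classpath (it ships with the testing utilities), plain Lucene works just as well to see what an Analyzer produces; Book and the field name "title" below are placeholders:

import java.io.IOException;
import java.io.StringReader;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.hibernate.search.FullTextSession;

public static void printTokens(FullTextSession fts, String text) throws IOException {
    // the analyzer actually used for the indexed entity
    Analyzer analyzer = fts.getSearchFactory().getAnalyzer(Book.class);
    TokenStream stream = analyzer.tokenStream("title", new StringReader(text));
    CharTermAttribute term = stream.addAttribute(CharTermAttribute.class);
    stream.reset();
    while (stream.incrementToken()) {
        System.out.println(term.toString()); // one line per produced token
    }
    stream.end();
    stream.close();
}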

Related

Data structure for fast searching of custom object using its attributes (fields) in Java

I have an abstract superclass and some subclasses. My question is: what is the best way to keep objects of those classes so I can easily find them using all their different parameters?
For example, if I want to look up by resourceCode (every object has a unique resource code) I can use a HashMap keyed by resourceCode. But what happens if I want to look up by genre? There are many games with the same genre, so I will get all those games. My first idea was an ArrayList of those objects, but isn't scanning it too slow if we have 1 000 000 games (about 1 000 000 operations)?
My other idea is to have a HashTable keyed by product code, so that lookup is constant-time. In addition I would create as many HashSets as there are fields in the classes; each field value (for example a game promoter) maps to the productCode(s) of the objects holding that value. With those unique codes I can get everything I want from the HashTable. Is this a good idea? It seems a lot of space will be needed to store the data, but it will be fast.
So my question is: what data structure should I use to implement fast lookup of custom objects, searching by their attributes (fields)?
Please see the attachment: Classes Example
Thank you in advance.
Stefan Stefanov
You can use sorted or ordered data structures to optimize search complexity, or introduce your own search index for custom data.
But it is better to use a database or a search engine.
Have a look at Elasticsearch, Apache Solr, or PostgreSQL.
It sounds like most of your fields can be mapped to a string (name, genre, promoter, description, year of release, ...). You could put all these strings in a single large index that maps each keyword to all objects that contain the word in any of their fields. A search for certain keywords then returns a list of all entries that contain them. For example, searching for 'mine' should return 'Minecraft' (because of the title), as well as all Minecraft clones (having 'minecraft-like' as genre) and all games that use the word 'mine' in the 'info text' field.
You can code this yourself, but I suppose some full-text indexer such as Lucene may be useful. I haven't used Lucene myself, but I suppose it would also allow you to search for multiple keywords at once, even if they occur in different fields.
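A minimal sketch of such a hand-rolled keyword index (the resource codes follow the question's model; everything else is invented):

import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

class KeywordIndex {
    // maps each lower-cased word, from any field, to the resource
    // codes of all objects containing that word
    private final Map<String, Set<String>> index = new HashMap<>();

    void add(String resourceCode, String... fieldValues) {
        for (String value : fieldValues) {
            for (String word : value.toLowerCase().split("\\W+")) {
                Set<String> codes = index.get(word);
                if (codes == null) {
                    codes = new HashSet<>();
                    index.put(word, codes);
                }
                codes.add(resourceCode);
            }
        }
    }

    Set<String> search(String keyword) {
        Set<String> codes = index.get(keyword.toLowerCase());
        return codes != null ? codes : Collections.<String>emptySet();
    }
}

After index.add("G1", "Minecraft", "minecraft-like", "mine all day"), a call to index.search("mine") returns {"G1"}: lookup by any keyword is one constant-time map access.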
This is not a very appealing answer, but: start with a database, maybe an embedded database (like h2database).
- An easy set of fixed develop/test data that can be changed easily (the database dump).
- Too many indices (hash maps) do harm.
- Developing and optimizing queries is easier (declarative) than with hand-written data structures.
- Database tables are less coupled than data structures with helper structures (maps).
- The resulting system is far less complex and scales better.
After development has stabilized the set of queries, you can think about doing away with the DB part. Use at least a two-tier separation between the database and the classes.
Then you might find a stable and best-fitting data model.
Should you still intend to do it all with pure objects, work them out in detail as design documentation before you start programming: example stories, and how one solves them.

ElasticSearch: Secondary indices on field values using the Java API

I'm considering using Elasticsearch as a search engine for large objects. There are about 500 million objects on a single machine. So far Elasticsearch is a good solution for executing advanced queries. But I have the problem that I didn't find any technique to create secondary indexes on the document fields. Is there a possibility in Elasticsearch to create secondary indexes like MySQL does on columns? Or are there any other technologies implemented to accelerate searches on field values? I'm using a single-server environment and I have to store about 300 fields per row/object. At the moment there are about 500 million objects in my database.
I apologize in advance if I don't understand the question. Elasticsearch is itself an index-based technology (it's built on top of Lucene, which is built for index-based search). You put documents into Elasticsearch and the individual fields on those documents are indexed and searchable. You should not have to worry about creating secondary indexes; the fields will be indexed by default (in most cases).
One of the differences between Elasticsearch and Solr is that in Solr you have to specify a schema defining what the fields are on the documents and whether each field will be indexed (available to search against), stored (available as the result of a search), or both. Elasticsearch does not require an upfront schema; in lieu of specific mappings for fields, reasonable defaults are used instead. I believe that the core field types (string, number, etc.) are indexed by default, meaning they are available to search against.
Now in your case, you have a document with a lot of fields on it. You will probably need to tweak the mappings a bit to index only the fields that you know you might search against. If you index too much, the size of the index itself will balloon, and it will not be as fast as a trimmed index containing only the fields you know you will search against. Also, Lucene loads parts of the index into memory to really enable fast searches. With a bloated index you won't be able to keep as much of it in memory, and your searches will suffer as a result. Look at the Mappings API and the Core Types section for more info on how to do this.
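For example, an explicit mapping that leaves rarely-searched fields unindexed might look like this with the pre-2.x Java transport client (index, type, and field names are invented; check the Mappings API docs for your version):

import java.io.IOException;

import org.elasticsearch.client.Client;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentFactory;

public static void putMapping(Client client) throws IOException {
    XContentBuilder mapping = XContentFactory.jsonBuilder()
        .startObject()
          .startObject("product")
            .startObject("properties")
              .startObject("name")
                .field("type", "string")      // analyzed, searchable
              .endObject()
              .startObject("internalNote")
                .field("type", "string")
                .field("index", "no")         // kept in _source, not searchable
              .endObject()
            .endObject()
          .endObject()
        .endObject();

    client.admin().indices().preparePutMapping("products")
          .setType("product")
          .setSource(mapping)
          .execute().actionGet();
}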

Performance of search in java list vs on database records using hibernate

Now I have a situation where I need to make some comparisons and result filtering that are not very simple to do. What I want is something like Lucene's search, except that I will develop it myself; that is not my decision though, I would have gone with Lucene.
What I will do is:
Find the element according to a full-word match on a certain field; if that fails, check whether the field starts with the term, and then whether it merely contains it.
Every field has its weight according to the matching case (full -> begins -> contains) and its priority to me.
After one field has matched I will also check the weights of the other fields to compute a final total row weight.
Then I will return a Map of the rows and their weights (a rough sketch of this follows below).
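Roughly, the weighting I have in mind looks like this (Row and the per-field priorities are just placeholders):

import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

class Row {
    String name;
    String description;
}

class Ranker {
    // matching tier: full word match > begins with > contains
    static int tier(String field, String term) {
        if (field == null) return 0;
        String f = field.toLowerCase(), t = term.toLowerCase();
        if (f.equals(t))     return 3;
        if (f.startsWith(t)) return 2;
        if (f.contains(t))   return 1;
        return 0;
    }

    static Map<Row, Integer> rank(List<Row> rows, String term) {
        Map<Row, Integer> weights = new LinkedHashMap<>();
        for (Row row : rows) {
            // each field's tier multiplied by that field's priority
            int total = 5 * tier(row.name, term) + 2 * tier(row.description, term);
            if (total > 0) {
                weights.put(row, total);
            }
        }
        return weights;
    }
}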
Now I realize that this is not easily done with Hibernate's HQL, meaning I would have to run multiple queries to do it.
So my question is: should I do it in Java, i.e. retrieve all records and do my calculations to find my target, or should I do it in Hibernate by executing multiple queries? Which is better in terms of performance and speed?
Unfortunately, I think the right answer is "it depends": how many words, what data structure, whether the data fits in memory, how often you have to do the search, etc.
I am inclined to think that a database is a better solution, even if Hibernate is not part of it. You might need to learn how to write better SQL. Perhaps the dynamic SQL that Hibernate generates for you isn't sufficient. Proper JOINs and indexing might make this perform nicely.
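For example, a single native query with a CASE expression can compute the tiered weight inside the database instead of in multiple HQL round trips. A sketch, assuming an open Session and the search term in scope (table and column names are invented):

String sql =
    "SELECT id, " +
    "  CASE WHEN name = :term        THEN 3 " +   // full match
    "       WHEN name LIKE :prefix   THEN 2 " +   // begins with
    "       WHEN name LIKE :anywhere THEN 1 " +   // contains
    "       ELSE 0 END AS weight " +
    "FROM products " +
    "ORDER BY weight DESC";

List<Object[]> rows = session.createSQLQuery(sql)
    .setString("term", term)
    .setString("prefix", term + "%")
    .setString("anywhere", "%" + term + "%")
    .list();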
There might be a third way to consider: Lucene and indexing. I'd need to know more about your problem to decide.

Lucene indexing strategy for documents that change often

I'm integrating search functionality into a desktop application and I'm using vanilla Lucene to do so. The application handles (potentially) thousands of POJOs, each with its own set of key/value(s) properties. When mapping models between my application and Lucene, I originally thought of assigning each POJO a Document and adding the properties as Fields. This approach works great as far as indexing and searching go, but the main downside is that whenever a POJO changes its properties I have to reindex ALL the properties again, even the ones that didn't change, in order to update the index.
I have been thinking of changing my approach and instead creating a Document per property, assigning the same id to all the Documents from the same POJO. This way, when a POJO property changes I only update its corresponding Document without reindexing all the other unchanged properties. I think the graph DB Neo4j follows a similar approach when it comes to indexing, but I'm not completely sure. Could anyone comment on the possible impact on performance, querying, etc.?
It depends fundamentally on what you want to return as a Document in a search result.
But indexing is pretty cheap. Does a changed POJO really have so many properties that reindexing them all is a major problem?
If you only search one field in every search request, splitting one POJO into several documents will speed up reindexing. But it will cause another problem if you search on multiple fields: a POJO may appear many times in the result list.
Actually, I agree with EJP: building the index is very fast on a small dataset.
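For reference, replacing a changed POJO in vanilla Lucene is one atomic call: IndexWriter.updateDocument deletes the old document matching the id term and adds the fresh one. A sketch (MyPojo and its accessors are invented):

import java.io.IOException;
import java.util.Map;

import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.Term;

public static void reindex(IndexWriter writer, MyPojo pojo) throws IOException {
    Document doc = new Document();
    // exact-match key used to find and replace the previous version
    doc.add(new StringField("id", pojo.getId(), Field.Store.YES));
    for (Map.Entry<String, String> p : pojo.getProperties().entrySet()) {
        doc.add(new TextField(p.getKey(), p.getValue(), Field.Store.NO));
    }
    writer.updateDocument(new Term("id", pojo.getId()), doc);
}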

Keeping query statistics using lucene

I am developing a search component of a web application using Lucene. I would like to save the user queries to an index and use them to suggest alternate queries to users, and to keep query statistics (most often used queries, top scoring queries, ...).
To use this data for alternate query suggestions, I would analyze the queries to see which terms are most often used with one another and use that to create a suggestion to the user.
But I can't figure out in which form to index the data. I was thinking of simply adding the queries to the index, but that way there could be a lot of redundant data, since many documents in the index would have the same content. Does anyone have any ideas about the way this could be accomplished?
Thanks for the help.
"I was thinking of simply adding the queries into the index, but in that way there could be a lot of redundant data since many documents in the index would have the same content"
You can tell Lucene not to store document content, which means that the principal overhead will be the unique terms and the index itself. So it might not be a large overhead to store each query as a unique Document; this way you will not be throwing away any information.
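For example, with Lucene 4.x field types, assuming a writer, the query string, and a timestamp in scope (field names are just examples):

import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.document.TextField;

Document doc = new Document();
// the query text is analyzed and searchable, but the original string is
// not stored, so repeated queries add little beyond their unique terms
doc.add(new TextField("queryText", userQuery, Field.Store.NO));
// stored, un-analyzed metadata for building statistics later
doc.add(new StringField("timestamp", timestamp, Field.Store.YES));
writer.addDocument(doc);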
First, I believe that you should store the queries separately from the existing index. The problem is not redundant data but rather "watering down" your index - storing the queries in the same index may harm the relevance of your searches. Some options for this are:
Use a separate Lucene index.
Use Solr, with two separate cores, one for the documents and the other for the queries.
Use a query log. Store scores with the queries, and build query statistics by post-processing. As this is a web application, you can probably use the servlet container's logs (such as Tomcat's) for this.
Second, Auto-Suggest From Popular Queries Using EdgeNGrams suggests an alternative implementation of query suggestion using Solr.
