I'm using Spring Data / QueryDSL with web support, via the @EnableSpringDataWebSupport annotation.
This works well, and automatically maps a GET query to a Predicate. I can search my DTP objects using a query such as:
http://localhost/search/dtp?name=foo
I now need to add more complex queries, such as AND or OR clauses.
I found this library which seems to achieve what I want: spring-data-querydsl-value-operators
My understanding is that I need to add the following code to my Repository interface to leverage this library:
@Override
default void customize(QuerydslBindings bindings, QDtp root) {
    bindings.bind(root.name).all(ExpressionProviderFactory::getPredicate);
    bindings.bind(root.description).all(ExpressionProviderFactory::getPredicate);
    ...
}
However I didn't need a customize() method before, and now it seems I need these new bindings for all the fields and sub-fields of my object. This could lead to maintenance issues: if a new field is added but the developer forgets to add the binding, then searching on that field won't work like it does for the other fields.
This wasn't the case previously.
How could I make these bindings generic so that they apply to all fields and sub-fields of my object?
I think that you may just use this library: https://github.com/turkraft/spring-filter
It will let you run search queries such as:
/search?filter= average(ratings) > 4.5 and brand.name in ('audi', 'land rover') and (year > 2018 or km < 50000) and color : 'white' and accidents is empty
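For completeness, here is a rough sketch of how the library is typically wired into a controller (the Car entity, repository and package are assumptions on my part, and the exact @Filter import can vary between spring-filter versions):

import java.util.List;

import org.springframework.data.jpa.domain.Specification;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.JpaSpecificationExecutor;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

import com.turkraft.springfilter.boot.Filter; // package may differ depending on the version

@RestController
public class CarSearchController {

    private final CarRepository carRepository;

    public CarSearchController(CarRepository carRepository) {
        this.carRepository = carRepository;
    }

    // The expression in the "filter" query parameter is parsed into a JPA Specification for you
    @GetMapping("/search")
    public List<Car> search(@Filter Specification<Car> spec) {
        return carRepository.findAll(spec);
    }
}

// The repository only needs to be able to execute Specifications
interface CarRepository extends JpaRepository<Car, Long>, JpaSpecificationExecutor<Car> {
}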
We're in the process of converting our java application from Hibernate Search 5 to 6 with an Elasticsearch backend.
For some good background info, see How to do highlighting within HibernateSearch over Elasticsearch for a question we had when upgrading our highlighting code from a Lucene to Elasticsearch backend and how it was resolved.
Hibernate Search 6 seems to support using 2 backends at the same time, Lucene and Elasticsearch, so we'd like to use Elasticsearch for all our queries and Lucene for the highlighting, if that's possible.
Here is basically what we're trying to do:
public boolean matchPhoneNumbers() {
    String phoneNumber1 = "603-436-1234";
    String phoneNumber2 = "603-436-1234";
    LuceneBackend luceneBackend =
            Search.mapping(entityManager.getEntityManagerFactory())
                    .backend().unwrap(LuceneBackend.class);
    Analyzer analyzer = luceneBackend.analyzer("phoneNumberKeywordAnalyzer").get();
    //... builds a Lucene Query using the analyzer and phoneNumber1 term
    Query phoneNumberQuery = buildQuery(analyzer, phoneNumber1, ...);
    return isMatch("phoneNumberField", phoneNumber2, phoneNumberQuery, analyzer);
}

private boolean isMatch(String field, String target, Query sourceQ, Analyzer analyzer) {
    Highlighter highlighter = new Highlighter(new QueryScorer(sourceQ, field));
    highlighter.setTextFragmenter(new NullFragmenter());
    try {
        String result = highlighter.getBestFragment(analyzer, field, target);
        return StringUtils.hasText(result);
    } catch (IOException e) {
        ...
    }
}
What I've attempted so far is to configure two separate backends in the configuration properties, per the documentation, like this:
properties.setProperty("hibernate.search.backends.elasticsearch.analysis.configurer", "com.bt.demo.search.AnalysisConfigurer");
properties.setProperty("hibernate.search.backends.lucene.analysis.configurer", "com.bt.demo.search.CustomLuceneAnalysisConfigurer");
properties.setProperty("hibernate.search.backends.elasticsearch.type", "elasticsearch");
properties.setProperty("hibernate.search.backends.lucene.type", "lucene");
properties.setProperty("hibernate.search.backends.elasticsearch.uris", "http://127.0.0.1:9200");
The AnalysisConfigurer class implements ElasticsearchAnalysisConfigurer, and CustomLuceneAnalysisConfigurer implements LuceneAnalysisConfigurer.
Analyzers are defined twice, once in the Elasticsearch configurer and again in the Lucene configurer.
I don't know why both hibernate.search.backends.elasticsearch.type and hibernate.search.backends.lucene.type are necessary but if I don't include the lucene.type, I get Ambiguous backend type: configuration property 'hibernate.search.backends.lucene.type' is not set.
But if I do have both backend type properties set, I get
HSEARCH000575: No default backend. Check that at least one entity is configured to target the default backend, when attempting to retrieve the Lucene backend, like:
Search.mapping(entityManager.getEntityManagerFactory())
.backend().unwrap(LuceneBackend.class);
And the same error when trying to retrieve the Elasticsearch backend.
I've also added @Indexed(..., backend = "elasticsearch") to my entities since I wish to have them saved into Elasticsearch and don't need them in Lucene. I also tried adding a fake entity with @Indexed(..., backend = "lucene") but it made no difference.
What have I got configured wrong?
I don't know why both hibernate.search.backends.elasticsearch.type and hibernate.search.backends.lucene.type are necessary but if I don't include the lucene.type, I get Ambiguous backend type: configuration property 'hibernate.search.backends.lucene.type' is not set.
That's because the backend name is just that: a name. Hibernate Search doesn't infer particular information from it, even if you name your backend "lucene" or "elasticsearch". You could have multiple Elasticsearch backends for all it knows :)
But if I do have both backend type properties set, I get HSEARCH000575: No default backend. Check that at least one entity is configured to target the default backend, when attempting to retrieve the Lucene backend, like:
Search.mapping(entityManager.getEntityManagerFactory())
.backend().unwrap(LuceneBackend.class);
You called .backend(), which retrieves the default backend, i.e. the backend that doesn't have a name and is configured through hibernate.search.backend.* instead of hibernate.search.backends.<somename>.* (see https://docs.jboss.org/hibernate/stable/search/reference/en-US/html_single/#configuration-structure ).
But you are apparently mapping all your entities to named backends, one named elasticsearch and one named lucene. So the default backend just doesn't exist.
You should call this:
Search.mapping(entityManager.getEntityManagerFactory())
.backend("lucene").unwrap(LuceneBackend.class);
I've also added @Indexed(..., backend = "elasticsearch") to my entities since I wish to have them saved into Elasticsearch
Since you obviously only want to use one backend for indexing, I would recommend reverting that change (keeping @Indexed without setting @Indexed.backend) and simply using the default backend.
In short, remove the @Indexed.backend and replace this:
properties.setProperty("hibernate.search.backends.elasticsearch.analysis.configurer", "com.bt.demo.search.AnalysisConfigurer");
properties.setProperty("hibernate.search.backends.lucene.analysis.configurer", "com.bt.demo.search.CustomLuceneAnalysisConfigurer");
properties.setProperty("hibernate.search.backends.elasticsearch.type", "elasticsearch");
properties.setProperty("hibernate.search.backends.lucene.type", "lucene");
properties.setProperty("hibernate.search.backends.elasticsearch.uris", "http://127.0.0.1:9200");
with this:
properties.setProperty("hibernate.search.backend.analysis.configurer", "com.bt.demo.search.AnalysisConfigurer");
properties.setProperty("hibernate.search.backends.lucene.analysis.configurer", "com.bt.demo.search.CustomLuceneAnalysisConfigurer");
properties.setProperty("hibernate.search.backend.type", "elasticsearch");
properties.setProperty("hibernate.search.backends.lucene.type", "lucene");
properties.setProperty("hibernate.search.backend.uris", "http://127.0.0.1:9200");
You don't technically have to do that, but I think it will be simpler in the long term. It keeps the Lucene backend as a separate hack that doesn't affect your whole application.
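With that layout, Elasticsearch becomes the default (unnamed) backend and Lucene stays a named one, so retrieving each backend looks roughly like this (a sketch, assuming the configuration above and entities using the plain @Indexed annotation):

// Default backend -> Elasticsearch
ElasticsearchBackend esBackend = Search.mapping(entityManager.getEntityManagerFactory())
        .backend().unwrap(ElasticsearchBackend.class);

// Named "lucene" backend -> Lucene
LuceneBackend luceneBackend = Search.mapping(entityManager.getEntityManagerFactory())
        .backend("lucene").unwrap(LuceneBackend.class);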
I also tried adding a fake entity with @Indexed(..., backend = "lucene")
I confirm you will need that fake entity mapped to the "lucene" backend, otherwise Hibernate Search will not create the "lucene" backend.
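Something minimal like this should do (a sketch; the class name is made up and the entity carries no real data):

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

import org.hibernate.search.mapper.pojo.mapping.definition.annotation.GenericField;
import org.hibernate.search.mapper.pojo.mapping.definition.annotation.Indexed;

@Entity
@Indexed(backend = "lucene")
public class LuceneBackendPlaceholder {

    @Id
    @GeneratedValue
    private Long id;

    // Any indexed field will do; this entity only exists so that Hibernate Search
    // actually starts the "lucene" backend.
    @GenericField
    private String dummy;
}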
We are trying to implement a query with cosmosTemplate from spring-data-cosmosdb project.
The query has the following syntax:
"select * from movie where ARRAY_CONTAINS(movie.countries, #country)".
CosmosTemplate accepts a DocumentQuery object that is built on a Criteria object. Criteria supports a small subset of basic predicates such as IN, OR and IS_EQUAL, but it doesn't have an ARRAY_CONTAINS predicate.
At the moment the query is performed using the Cosmos client (from the SDK) instead of CosmosTemplate.
This brings us two issues:
We have to mix CosmosTemplate and the Cosmos client in the same code.
Since we have complex parameterized queries that use system functions, we have to concatenate the SQL query string and gather the SQL parameters ourselves.
How should queries like this be handled with CosmosTemplate, and is that even possible?
P.S. We are using the com.microsoft.azure:azure-cosmosdb-spring-boot-starter:2.2.5 library.
In the current GA release you will have to use the Client and template together like you mentioned.
The latest beta release includes support for the @Query annotation, allowing annotated queries in repositories. The following is an example:
@Query(value = "select * from c where c.firstName = @firstName and c.lastName = @lastName")
List<User> getUsersByTitleAndValue(@Param("firstName") String firstName, @Param("lastName") String lastName);
Ravi's answer is correct. To create custom queries directly from the Spring Data connector, there are currently two options.
First, you can follow the instructions in the "Custom Query" section of this document
https://learn.microsoft.com/en-us/azure/developer/java/spring-framework/how-to-guides-spring-data-cosmosdb
which guides you through using the Java SDK CosmosClient directly. The current GA release of the Spring Data connector does not have the @Query annotation that would enable custom annotated queries, which is why you need to use the Java SDK directly.
Second, upgrade to the latest beta, which enables the @Query annotation:
https://mvnrepository.com/artifact/com.azure/azure-spring-data-cosmos
Sample code for this will be released in the next few days, and the GA release is scheduled for 9/30 so it is not a long wait.
After some research in the Azure SDK, I've found that at the moment not all system functions are supported by CosmosTemplate.
The list of supported system functions can be found here:
https://github.com/Azure/azure-sdk-for-java/blob/master/sdk/cosmos/azure-spring-data-cosmos/src/main/java/com/azure/spring/data/cosmos/core/query/CriteriaType.java
If you want to write a query that uses a system function that isn't in the list linked above, you can either use the @Query annotation or use the Cosmos client directly instead of CosmosTemplate.
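For the ARRAY_CONTAINS query from the original question, an annotated repository method might look roughly like this (a sketch against the beta azure-spring-data-cosmos API; the Movie entity and the method name are assumptions):

import java.util.List;

import org.springframework.data.repository.query.Param;
import org.springframework.stereotype.Repository;

import com.azure.spring.data.cosmos.repository.CosmosRepository;
import com.azure.spring.data.cosmos.repository.Query;

@Repository
public interface MovieRepository extends CosmosRepository<Movie, String> {

    // @country is bound from the @Param value at runtime
    @Query(value = "select * from m where ARRAY_CONTAINS(m.countries, @country)")
    List<Movie> findByCountry(@Param("country") String country);
}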
Using Hibernate Search 5.11.3 with programmatic API (no annotations), is there a way to facet on dynamic fields added in a class or field bridge? I don't see any 'facet' config available in FieldMetadataBuilder when using MetadataProvidingFieldBridge.
I have tried various combinations of luceneOptions.addSortedDocValuesFieldToDocument() and luceneOptions.addFieldToDocument() in the set() method. This successfully updates the index, but I cannot perform facet queries.
I am trying to do a basic attribute facet/filter where I have a generic table of attributes with id/name and attribute values associated with products. For various reasons I am using the programmatic API and especially for attributes I can't make use of the @Facet annotation. So for a product, I added this class bridge to Product.class:
public class ProductClassTagValuesBridge implements FieldBridge
{
    @Override
    public void set(String name, Object value, Document document, LuceneOptions luceneOptions)
    {
        Product product = (Product) value;
        for (TagValue v : product.getTagValues())
        {
            Tag tag = v.getTag();
            String tagName = "tag-" + tag.getId();
            String tagValue = v.getId().toString();
            // not sure if this line is required? Have tried with and without
            luceneOptions.addFieldToDocument(tagName, tagValue, document);
            luceneOptions.addSortedDocValuesFieldToDocument(tagName, tagValue, document);
        }
    }
}
Then I build my (test) faceting request to search tag-56 (which I confirmed is in the index using Luke):
FacetParameterContext context = queryBuilder.facet()
        .name("tag-56")
        .onField("tag-56")
        .discrete();
FacetingRequest facetingRequest = context.createFacetingRequest();
Which when used in the search/FacetManager gives me the error:
org.hibernate.search.exception.SearchException: HSEARCH000268: Facet request 'TAG_56' tries to facet on field 'tag-56' which either does not exist or is not configured for faceting (via @Facet). Check your configuration.
I have also tried the custom config solution from the solution in this post: Hibernate Search: configure Facet for custom FieldBridge
For the custom field I added a field bridge to tagValues on my product. The same error occurs.
mapping.entity(Product.class).indexed()
    .property("tagValues", ElementType.FIELD).field()
        .analyze(Analyze.NO).store(Store.YES)
        .bridge(ProductTagValuesFieldBridge.class)
Short answer: Hibernate Search does not allow that... yet.
Long answer:
Hibernate Search 5 allows dynamic fields, but does not allow faceting on fields declared in custom bridges.
That is to say, you can add arbitrary values to your index that don't fit a pre-defined schema, but you cannot use faceting on those fields.
Hibernate Search 6 allows faceting (now called "aggregations") on fields declared in custom bridges (just declare them as .aggregable(Aggregable.YES)), but does not allow dynamic fields yet.
EDIT: Starting with 6.0.0.Beta7, dynamic fields are supported thanks to field templates. So the rest of my message is not useful anymore.
See this section of the documentation for more information about field templates. It's totally possible to declare an aggregable, dynamic field in your bridge.
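For illustration, a rough sketch of what such a binder could look like in Hibernate Search 6, adapted from the bridge in the question (the dependency paths and class names are assumptions based on your model, and the exact API may differ slightly between 6.x versions; check the documentation):

import org.hibernate.search.engine.backend.document.DocumentElement;
import org.hibernate.search.engine.backend.types.Aggregable;
import org.hibernate.search.mapper.pojo.bridge.TypeBridge;
import org.hibernate.search.mapper.pojo.bridge.binding.TypeBindingContext;
import org.hibernate.search.mapper.pojo.bridge.mapping.programmatic.TypeBinder;
import org.hibernate.search.mapper.pojo.bridge.runtime.TypeBridgeWriteContext;

public class ProductTagValuesBinder implements TypeBinder {

    @Override
    public void bind(TypeBindingContext context) {
        // Reindex the product whenever the tag values it reads change (paths assumed from your model)
        context.dependencies()
                .use("tagValues.id")
                .use("tagValues.tag.id");

        // Field template: any field whose path matches "tag-*" becomes an aggregable keyword field
        context.indexSchemaElement()
                .fieldTemplate("tagTemplate", f -> f.asString().aggregable(Aggregable.YES))
                .matchingPathGlob("tag-*");

        context.bridge(new ProductTagValuesBridge());
    }

    private static class ProductTagValuesBridge implements TypeBridge {
        @Override
        public void write(DocumentElement target, Object bridgedElement, TypeBridgeWriteContext context) {
            Product product = (Product) bridgedElement;
            for (TagValue v : product.getTagValues()) {
                // Creates dynamic fields such as "tag-56", holding the tag value id
                target.addValue("tag-" + v.getTag().getId(), v.getId().toString());
            }
        }
    }
}

With the programmatic mapping API the binder would then be attached to the entity with something along the lines of mapping.type(Product.class).binder(new ProductTagValuesBinder()) (again, check the exact programmatic API for your version).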
Original message about ways to work without dynamic fields (obsolete):
That is to say, if you know the list of tags upon startup, are able to list them all, and are certain they won't change while your application is up, you could declare the fields upfront and use faceting on them. But if you don't know the list of tags upon startup, none of this is possible (yet).
Until dynamic fields are added to Hibernate Search 6, the only solution is to use Hibernate Search 5 and to re-implement faceting yourself. As you can expect, this will be complex and you will have to get your hands dirty with Lucene. You will have to:
Add fields of type SortedSetDocValuesFacetField to your document in your custom bridge.
Ensure Hibernate Search calls FacetsConfig.build on your documents after they are populated. One way to do that (through a hack) would be to declare a dummy @Facet field on your entity, even if you don't use it.
Completely ignore Hibernate Search's query feature and perform faceting yourself from an IndexReader. You can get an IndexReader from Hibernate Search as explained here. There's an example of how to perform faceting in org.hibernate.search.query.engine.impl.QueryHits#updateStringFacets.
I'm building a REST API connected to an Oracle 11g DB. The API sends data to an Android client using JSON. To get data I'm using JpaRepository and @Query annotations.
I want to provide data for charts: number of contracts in years.
I have native SQL query:
select aa.ROK, count(aa.NUMER_UMOWY)
from (select distinct NUMER_UMOWY, ROK from AGR_EFEKTY) aa
group by aa.ROK order by aa.ROK
The result of the query in SQL Developer looks like this:
I tried to get the result using a native query:
But the result is always like this:
or an error, depending on what I try.
Is it possible to obtain a list of count() results using @Query?
If not, what should I use?
Thanks in advance :-)
I think what you are trying to use here is a Spring Data projection.
As mentioned in the reference doc:
Spring Data query methods usually return one or multiple instances of the aggregate root managed by the repository. However, it might sometimes be desirable to create projections based on certain attributes of those types. Spring Data allows modeling dedicated return types, to more selectively retrieve partial views of the managed aggregates.
and particularly a closed projection, where all accessor methods match target attributes. In your case the count is not an attribute of your aggregate.
To achieve what you want, you can use a constructor expression, as follows:
class ContractsDto {

    private String rok;
    private int count;

    public ContractsDto(String rok, int count) {
        this.rok = rok;
        this.count = count;
    }

    // getters
}
The query will be:
#Query(value = "select new ContractsDto(aa.rok , /*count */) from fromClause")
List<ContractsDto> getContractsPerYear();
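Filled in against the original native query, it might look roughly like this (the entity and property names are guesses based on the SQL, the DTO package is assumed because constructor expressions need the fully qualified name, and note that count() returns a Long in JPQL, so the constructor parameter may need to be long rather than int):

@Query("select new com.example.dto.ContractsDto(a.rok, count(distinct a.numerUmowy)) " +
       "from AgrEfekty a group by a.rok order by a.rok")
List<ContractsDto> getContractsPerYear();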
I am using Hibernate reverse engineering and trying to get my timestamps to map to a JodaTime type.
I have set up my hibernate.reveng.xml file properly:
<sql-type jdbc-type="TIMESTAMP" hibernate-type="org.joda.time.contrib.hibernate.PersistentDateTime" not-null="true"></sql-type>
The issue is that when I run the rev-eng process, my Java classes also get the members created as PersistentDateTime objects, but I don't want that because they are not usable. I need the Java objects to be org.joda.time.DateTime.
So I tried creating a custom reverse engineering strategy:
public class C3CustomRevEngStrategy extends DelegatingReverseEngineeringStrategy {

    public C3CustomRevEngStrategy(ReverseEngineeringStrategy res) {
        super(res);
    }

    public String columnToHibernateTypeName(TableIdentifier table, String columnName, int sqlType, int length, int precision, int scale, boolean nullable, boolean generatedIdentifier) {
        if (sqlType == Types.TIMESTAMP) {
            return "org.joda.time.DateTime";
        } else {
            return super.columnToHibernateTypeName(table, columnName, sqlType, length, precision, scale, nullable, generatedIdentifier);
        }
    }
}
My thought was that the Hibernate mapping files would get the hibernate.reveng.xml settings and the Java objects would get the settings from the custom strategy file, but that was not the case. Both the mapping file and the object are of type "org.joda.time.DateTime", which is not what I want.
How can I achieve my goal? Also, I am NOT using annotations.
Hibernate 3.6
JodaTime 2.3
JodaTime-Hibernate 1.3
Thanks
EDIT: To clarify exactly what the issue is
After reverse engineering this is what I get in my mapping file and POJO class
<property name="timestamp" type="org.joda.time.contrib.hibernate.PersistentDateTime">
private PersistentDateTime timestamp;
As a POJO property, PersistentDateTime is useless to me, as I cannot do anything with it such as time manipulation. So this is what I want after my reverse engineering:
<property name="timestamp" type="org.joda.time.contrib.hibernate.PersistentDateTime">
private org.joda.time.DateTime timestamp;
Using the Jadira library as suggested below gives me the same result, a POJO that I cannot use.
The JodaTime-Hibernate library is deprecated, and is probably the source of your problem. Don't despair, however, as there is a (better) alternative.
You will need to use the Jadira Usertype library to create the correct JodaTime objects from Hibernate. Add the library, which can be found here, to your project classpath and then change your type to org.jadira.usertype.dateandtime.joda.PersistentDateTime. All of the JodaTime objects have a corresponding mapping in that package, so if you decide to change to another object then just update your type.
This should ensure that your objects get created correctly.
I should add a caveat to my answer, which is that I have never used the Jadira Usertype library with Hibernate 3. If it only supports Hibernate 4 (I don't see why it would, but...) let me know and I'll delete my answer.
The Hibernate Tools don't seem to separate Hibernate types from the Java types. If you were using annotations, this would be clearer, as in that case you'd need an @Type annotation, which Hibernate will not generate at all. So using the exposed APIs won't help here.
Fortunately, Hibernate lets you plug into the actual code (or XML) generation after it does its processing. You can do that by replacing the Freemarker templates it uses to generate both XML and Java code. You'll need to use Ant for reverse engineering, however.
To start using Ant for this purpose (if you're not doing so already), you can either pull Hibernate Tools in as a build-time dependency using your dependency manager or download a JAR. Put the jar in Ant's classpath, but also extract the template files from it: since you're using XML configuration, you'll be interested in the /hbm directory. Then add the task using <taskdef> and, assuming that you extracted the templates to TPL_PATH/hbm/*.ftl, call it using <hibernatetool templatepath="TPL_PATH" ...>. (For more info, see below for a link to the docs). Using Ant also helps with automation, for example on CI servers where you won't have Eclipse.
You'll want to keep hibernate-type="org.joda.time.DateTime" in your hibernate.reveng.xml so that the Java files get generated correctly, but replace it with org.joda.time.contrib.hibernate.PersistentDateTime in the generated XML. To do that, edit TPL_PATH/hbm/property.hbm.ftl, replace ${property.value.typeName} with ${javaType}, and assign javaType to the right value before it's used:
<#assign javaType=property.value.typeName>
<#if javaType.equals("DateTime")>
    <#assign javaType="org.jadira.usertype.dateandtime.joda.PersistentDateTime">
</#if>
<property
    name="${property.name}"
    type="${javaType}"
    ...
You might want to remove newlines to keep the generated XML clean.
This is all described in the Hibernate Tools documentation at http://docs.jboss.org/tools/archive/3.2.1.GA/en/hibernatetools/html/codegen.html#d0e6349, except that the documentation doesn't tell you exactly which templates you need to modify; you need to figure that out by reading the templates.