I'm trying to build a faceted classification using Spring Data Mongo, and I'm confused about how to use the Aggregation.facet method.
While trying to figure out how it works, I'm using the same FacetOperation twice and I'm getting a java.lang.IllegalArgumentException: Invalid reference 'producer.fundings'!. This FacetOperation alone in the Aggregation works fine!
FacetOperation fo1 = facet(
        unwind("producer.fundings"),
        project().and("producer.fundings.type").as("type").and("producer.fundings.acronym").as("name"),
        group("name", "type").count().as("count"),
        project("count").and("_id.name").as("name").and("_id.type").as("type").andExclude("_id")
).as("fundingAcronymFacet");

FacetOperation fo2 = facet(
        unwind("producer.fundings"),
        project().and("producer.fundings.type").as("type").and("producer.fundings.acronym").as("name"),
        group("name", "type").count().as("count"),
        project("count").and("_id.name").as("name").and("_id.type").as("type").andExclude("_id")
).as("fundingNameFacet");

Aggregation agg = Aggregation.newAggregation(fo1, fo2);
AggregationResults<FacetClassification> groupResults = mongoTemplate.aggregate(agg, "observations", FacetClassification.class);
List<FacetClassification> facet = groupResults.getMappedResults();
So either I'm not using the facet method correctly and only one call is needed to create the different facets, which would match how it is implemented in the MongoDB API: $facet (aggregation).
Or I need to chain facet calls to create the different facets of my classification, in which case I need to understand what happens after the first call and why the exact same reference is not found.
The documentation only provides examples that create one facet, and I couldn't find any example elsewhere: Spring Data Mongo Faceted Classification.
Related to: Using multiple facets in MongoDB Spring Data
Any help would be appreciated!
You can chain multiple facet operations using the and() and as() methods. The example should look like this to create two different facets in the same aggregation operation:
FacetOperation fo1 = facet(
        unwind("producer.fundings"),
        project().and("producer.fundings.type").as("type").and("producer.fundings.acronym").as("name"),
        group("name", "type").count().as("count"),
        project("count").and("_id.name").as("name").and("_id.type").as("type").andExclude("_id")
).as("fundingAcronymFacet")
.and(
        unwind("producer.fundings"),
        project().and("producer.fundings.type").as("type").and("producer.fundings.acronym").as("name"),
        group("name", "type").count().as("count"),
        project("count").and("_id.name").as("name").and("_id.type").as("type").andExclude("_id")
).as("fundingNamesFacet");
We're in the process of converting our java application from Hibernate Search 5 to 6 with an Elasticsearch backend.
For some good background, see How to do highlighting within HibernateSearch over Elasticsearch, a question we had when upgrading our highlighting code from a Lucene to an Elasticsearch backend, and how it was resolved.
Hibernate Search 6 seems to support using 2 backends at the same time, Lucene and Elasticsearch, so we'd like to use Elasticsearch for all our queries and Lucene for the highlighting, if that's possible.
Here is basically what we're trying to do:
public boolean matchPhoneNumbers() {
    String phoneNumber1 = "603-436-1234";
    String phoneNumber2 = "603-436-1234";

    LuceneBackend luceneBackend =
            Search.mapping(entityManager.getEntityManagerFactory())
                    .backend().unwrap(LuceneBackend.class);
    Analyzer analyzer = luceneBackend.analyzer("phoneNumberKeywordAnalyzer").get();

    //... builds a Lucene Query using the analyzer and phoneNumber1 term
    Query phoneNumberQuery = buildQuery(analyzer, phoneNumber1, ...);

    return isMatch("phoneNumberField", phoneNumber2, phoneNumberQuery, analyzer);
}

private boolean isMatch(String field, String target, Query sourceQ, Analyzer analyzer) {
    Highlighter highlighter = new Highlighter(new QueryScorer(sourceQ, field));
    highlighter.setTextFragmenter(new NullFragmenter());
    try {
        String result = highlighter.getBestFragment(analyzer, field, target);
        return StringUtils.hasText(result);
    } catch (IOException e) {
        ...
    }
}
What I've attempted so far is to configure two separate backends in the configuration properties, per the documentation, like this:
properties.setProperty("hibernate.search.backends.elasticsearch.analysis.configurer", "com.bt.demo.search.AnalysisConfigurer");
properties.setProperty("hibernate.search.backends.lucene.analysis.configurer", "com.bt.demo.search.CustomLuceneAnalysisConfigurer");
properties.setProperty("hibernate.search.backends.elasticsearch.type", "elasticsearch");
properties.setProperty("hibernate.search.backends.lucene.type", "lucene");
properties.setProperty("hibernate.search.backends.elasticsearch.uris", "http://127.0.0.1:9200");
The AnalysisConfigurer class implements ElasticsearchAnalysisConfigurer and CustomLuceneAnalysisConfigurer implements LuceneAnalysisConfigurer.
Analyzers are defined twice, once in the Elasticsearch configurer and again in the Lucene configurer.
I don't know why both hibernate.search.backends.elasticsearch.type and hibernate.search.backends.lucene.type are necessary but if I don't include the lucene.type, I get Ambiguous backend type: configuration property 'hibernate.search.backends.lucene.type' is not set.
But if I do have both backend types set, I get
HSEARCH000575: No default backend. Check that at least one entity is configured to target the default backend, when attempting to retrieve the Lucene backend, like:
Search.mapping(entityManager.getEntityManagerFactory())
.backend().unwrap(LuceneBackend.class);
And the same error when trying to retrieve the Elasticsearch backend.
I've also added @Indexed(..., backend = "elasticsearch") to my entities since I wish to have them saved into Elasticsearch and don't need them in Lucene. I also tried adding a fake entity with @Indexed(..., backend = "lucene") but it made no difference.
What have I got configured wrong?
I don't know why both hibernate.search.backends.elasticsearch.type and hibernate.search.backends.lucene.type are necessary but if I don't include the lucene.type, I get Ambiguous backend type: configuration property 'hibernate.search.backends.lucene.type' is not set.
That's because the backend name is just that: a name. Hibernate Search doesn't infer particular information from it, even if you name your backend "lucene" or "elasticsearch". You could have multiple Elasticsearch backends for all it knows :)
But if I do have both backend types set, I get HSEARCH000575: No default backend. Check that at least one entity is configured to target the default backend, when attempting to retrieve the Lucene backend, like:
Search.mapping(entityManager.getEntityManagerFactory())
.backend().unwrap(LuceneBackend.class);
You called .backend(), which retrieves the default backend, i.e. the backend that doesn't have a name and is configured through hibernate.search.backend.* instead of hibernate.search.backends.<somename>.* (see https://docs.jboss.org/hibernate/stable/search/reference/en-US/html_single/#configuration-structure ).
But you are apparently mapping all your entities to named backends, one named elasticsearch and one named lucene. So the default backend simply doesn't exist.
You should call this:
Search.mapping(entityManager.getEntityManagerFactory())
.backend("lucene").unwrap(LuceneBackend.class);
I've also added @Indexed(..., backend = "elasticsearch") to my entities since I wish to have them saved into Elasticsearch
Since you obviously only want to use one backend for indexing, I would recommend reverting that change (keeping @Indexed without setting @Indexed.backend) and simply using the default backend.
In short, remove the @Indexed.backend and replace this:
properties.setProperty("hibernate.search.backends.elasticsearch.analysis.configurer", "com.bt.demo.search.AnalysisConfigurer");
properties.setProperty("hibernate.search.backends.lucene.analysis.configurer", "com.bt.demo.search.CustomLuceneAnalysisConfigurer");
properties.setProperty("hibernate.search.backends.elasticsearch.type", "elasticsearch");
properties.setProperty("hibernate.search.backends.lucene.type", "lucene");
properties.setProperty("hibernate.search.backends.elasticsearch.uris", "http://127.0.0.1:9200");
With this:
properties.setProperty("hibernate.search.backend.analysis.configurer", "com.bt.demo.search.AnalysisConfigurer");
properties.setProperty("hibernate.search.backends.lucene.analysis.configurer", "com.bt.demo.search.CustomLuceneAnalysisConfigurer");
properties.setProperty("hibernate.search.backend.type", "elasticsearch");
properties.setProperty("hibernate.search.backends.lucene.type", "lucene");
properties.setProperty("hibernate.search.backend.uris", "http://127.0.0.1:9200");
You don't technically have to do that, but I think it will be simpler in the long term. It keeps the Lucene backend as a separate hack that doesn't affect your whole application.
I also tried adding a fake entity with @Indexed(..., backend = "lucene")
I confirm you will need that fake entity mapped to the "lucene" backend, otherwise Hibernate Search will not create the "lucene" backend.
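For reference, a minimal sketch of such a placeholder entity; the class and field names are made up, the point is only that something targets the named "lucene" backend so Hibernate Search creates it:
@Entity
@Indexed(backend = "lucene") // targets the named "lucene" backend
public class LuceneBackendPlaceholder {

    @Id
    @GeneratedValue
    private Long id;

    // Indexed with the analyzer defined in CustomLuceneAnalysisConfigurer
    @FullTextField(analyzer = "phoneNumberKeywordAnalyzer")
    private String text;
}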
I'm using Spring Data / QueryDSL with web support, using the annotation @EnableSpringDataWebSupport.
This works well, and automatically maps a GET query to a Predicate. I can search my DTP objects using a query such as:
http://localhost/search/dtp?name=foo
I now need to add more complex queries, such as AND or OR clauses.
I found this library which seems to achieve what I want: spring-data-querydsl-value-operators
My understanding is that I need to add the following code to my Repository interface to leverage this library:
@Override
default void customize(QuerydslBindings bindings, QDtp root) {
    bindings.bind(root.name).all(ExpressionProviderFactory::getPredicate);
    bindings.bind(root.description).all(ExpressionProviderFactory::getPredicate);
    ...
}
However, I didn't need a customize() method before, and now it seems I need these new bindings for all the fields and sub-fields of my object. This could lead to maintenance issues: if a new field is added but the developer forgets to add its binding, then the search on that field won't work the way it does for the other fields.
This wasn't the case previously.
How could I make these bindings generic and applied to all fields and subfields of my object?
I think that you may just use this library: https://github.com/turkraft/spring-filter
It will let you run search queries such as:
/search?filter= average(ratings) > 4.5 and brand.name in ('audi', 'land rover') and (year > 2018 or km < 50000) and color : 'white' and accidents is empty
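For context, a minimal controller sketch of how it is typically wired, assuming the @Filter argument resolver and Specification support shown in the project's README; Dtp and dtpRepository are your own types, the repository would need to extend JpaSpecificationExecutor<Dtp>, and note that this builds JPA Specifications rather than QueryDSL Predicates:
@GetMapping("/search/dtp")
public List<Dtp> search(@Filter Specification<Dtp> spec) {
    // spring-filter parses the "filter" request parameter into a JPA Specification
    return dtpRepository.findAll(spec);
}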
We are trying to implement a query with cosmosTemplate from the spring-data-cosmosdb project.
The query has the following syntax:
"select * from movie where ARRAY_CONTAINS(movie.countries, #country)".
CosmosTemplate accepts a DocumentQuery object, which is built upon a Criteria object. Criteria supports a small subset of basic predicates such as IN, OR, IS EQUAL, etc., but doesn't have an ARRAY_CONTAINS predicate.
At the moment the query is performed by using the Cosmos client (from the SDK) instead of cosmosTemplate.
This brings us two issues:
We have to mix the code, using cosmosTemplate and the Cosmos client together.
Since we have complex parameterized queries that use system functions, we have to concatenate the SQL query string and gather the SQL parameters ourselves.
How should queries like this be handled with cosmosTemplate, and is that even possible?
P.S. We are using the com.microsoft.azure:azure-cosmosdb-spring-boot-starter:2.2.5 library.
In the current GA release you will have to use the Client and template together like you mentioned.
The latest beta release includes support for the @Query annotation: Using annotated queries in repositories. The following is an example:
@Query(value = "select * from c where c.firstName = @firstName and c.lastName = @lastName")
List<User> getUsersByTitleAndValue(@Param("firstName") String firstName, @Param("lastName") String lastName);
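Applied to the ARRAY_CONTAINS query from the question, a repository method could look roughly like this; the Movie entity and method name are hypothetical, and this assumes the beta @Query support passes the Cosmos SQL through unchanged:
@Query(value = "select * from movie where ARRAY_CONTAINS(movie.countries, @country)")
List<Movie> getMoviesByCountry(@Param("country") String country);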
Ravi's answer is correct. To create custom queries directly from the Spring Data connector, there are currently two options.
First, you can follow the instructions in the "Custom Query" section of this document:
https://learn.microsoft.com/en-us/azure/developer/java/spring-framework/how-to-guides-spring-data-cosmosdb
which guides you through using the Java SDK CosmosClient directly. The current GA release of the Spring Data connector does not have the @Query annotation, which would enable custom annotated queries; that is why you need to use the Java SDK directly.
Second, upgrade to the latest beta, which enables the @Query annotation:
https://mvnrepository.com/artifact/com.azure/azure-spring-data-cosmos
Sample code for this will be released in the next few days, and the GA release is scheduled for 9/30 so it is not a long wait.
After researching the Azure SDK, I've found that at the moment not all system functions are supported by cosmosTemplate.
The list of supported system functions can be found here:
https://github.com/Azure/azure-sdk-for-java/blob/master/sdk/cosmos/azure-spring-data-cosmos/src/main/java/com/azure/spring/data/cosmos/core/query/CriteriaType.java
If you want to write a query that uses a system function that isn't in the list above, you can either use the @Query annotation or use the Cosmos client directly instead of cosmosTemplate.
My Spring Boot app is using Couchbase 5.1 community.
My app needs both a primary index & several secondary indexes.
Currently, in order to create the needed indexes, I access the UI's query page and manually create the indexes that the app needs, as described here.
I was looking for a way to do it automatically via code, so when the app is starting, it will check if the indexes are missing and will create them if needed.
Is there a way to do it via Spring Data or via the Couchbase client?
You can create them using the DSL from the Index class. There's an example of using it in the documentation under "Indexing the Data: N1QL & GSI".
From that example:
You can also create secondary indexes on specific fields of the JSON,
for better performance:
Index.createIndex("index_name").on(bucket.name(), "field_to_index")
In this case, give a name to your index, specify the target bucket AND
the field(s) in the JSON to index.
If the index already exists, there will be an IndexAlreadyExistsException (see documentation), so you'll need to check for that.
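Alternatively, if I remember the SDK 2.x BucketManager API correctly, it has createN1qlPrimaryIndex / createN1qlIndex overloads with an ignoreIfExist flag, so you can skip the exception handling entirely. A sketch, with an assumed index name and field:
// bucket is a com.couchbase.client.java.Bucket
// ignoreIfExist = true: do nothing if the index already exists; defer = false: build immediately
bucket.bucketManager().createN1qlPrimaryIndex(true, false);
bucket.bucketManager().createN1qlIndex("idx_userId", true, false, "userId");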
So this is how I solved it:
import com.couchbase.client.java.Bucket;
import com.couchbase.client.java.query.N1qlQuery;

public class MyCouchBaseRepository {

    private Bucket bucket;

    public MyCouchBaseRepository(<My Repository that extends CouchbasePagingAndSortingRepository> myRepository) {
        bucket = myRepository.getCouchbaseOperations().getCouchbaseBucket();
        createIndices();
    }

    private void createIndices() {
        // Primary index: ignoreIfExist = true, defer = false
        bucket.bucketManager().createN1qlPrimaryIndex(true, false);
        // Secondary indexes via raw N1QL
        bucket.query(N1qlQuery.simple("CREATE INDEX xyz ON `myBucket`(userId) WHERE _class = 'com.example.User'"));
        ...
    }
}
What is the best way to change a database table schema at run time in the Play! Framework? I get an unspecified number of columns from a client application and cannot have domain objects.
Another option may be to use a schema-less datastore such as MongoDB. It seems there is a mongo module for Play: http://www.playframework.org/modules/mongo.
Why not use Apache DdlUtils?
It's not plugged into Play! by default, but I use it in other projects and it's quite useful.
http://db.apache.org/ddlutils/
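A rough sketch of the kind of thing DdlUtils can do, assuming the Platform/Database model API of DdlUtils 1.0; the table name, column type, and the clientColumns list are made up for the example:
// org.apache.ddlutils.Platform / PlatformFactory and org.apache.ddlutils.model.*
Platform platform = PlatformFactory.createNewPlatformInstance(dataSource);
Database model = platform.readModelFromDatabase("model");

// Build a table model from the column names the client sent at runtime
Table table = new Table();
table.setName("client_data"); // hypothetical table name
for (String columnName : clientColumns) {
    Column column = new Column();
    column.setName(columnName);
    column.setTypeCode(Types.VARCHAR);
    column.setSize("255");
    table.addColumn(column);
}
model.addTable(table);

// Generate and execute the CREATE/ALTER statements needed to match the model
platform.alterTables(model, false);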
I think this is exactly what you were looking for.
Have a look at the Play snippets:
http://www.playframework.org/community/snippets/13
public class Application extends Controller {

    @Before
    public static void setDBSchema() {
        String schema = "someSchema"; // This can come from a Cache, for example
        Play.configuration.setProperty("hibernate.default_schema", schema);
        // Reset the entity manager factory and restart the JPA plugin so the new schema is picked up
        JPA.entityManagerFactory = null;
        JPAPlugin plugin = new JPAPlugin();
        plugin.onApplicationStart();
    }
...
You just change the configured Hibernate schema and then force the JPAPlugin to restart.