I'm new to Azure Cosmos DB. I'm trying to read items from a container in my Spring Boot application, using CosmosTemplate with Criteria. Let's say I have a document like this:
{
    "stop_id": 70021,
    "stop_name": "CALTRAIN - 22ND ST STATION",
    "stop_lat": 37.757692,
    "stop_lon": -122.392318,
    "zone_id": 3329,
    "trip": [{
        "trip_id": "RTD8997283",
        "arrival_time": "05:40:00",
        "departure_time": "05:40:00",
        "stop_id": 70021,
        "stop_sequence": 1
    }, {
        "trip_id": "RTD8997283",
        "arrival_time": "05:52:00",
        "departure_time": "05:52:00",
        "stop_id": 70021,
        "stop_sequence": 2
    }]
}
If I want to fetch based on stop_id, I can add a criteria for stop_id like this:
Criteria criteria = Criteria.getInstance(CriteriaType.IS_EQUAL, "stop_id", Collections.singletonList("70021"), Part.IgnoreCaseType.ALWAYS);
CosmosQuery cosmosQuery = new CosmosQuery(criteria).with(Sort.unsorted());
Iterable<StopInfo> items = cosmosTemplate.find(cosmosQuery, StopInfo.class, "myContainer");
But if I want to add a criteria for trip_id, how can I do it?
This is how you can search within an array in Cosmos DB using Criteria:
Map<String, Object> map = new HashMap<>();
map.put("trip_id","RTD8997283");
Criteria criteria = Criteria.getInstance(CriteriaType.ARRAY_CONTAINS, "trip", Collections.singletonList(map), Part.IgnoreCaseType.ALWAYS);
CosmosQuery cosmosQuery = new CosmosQuery(criteria).with(Sort.unsorted());
Iterable<StopInfo> items = cosmosTemplate.find(cosmosQuery, StopInfo.class, "myContainer");
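If you also need to combine this with the stop_id filter from your first query, the two criteria can, if I read the Criteria API right, be joined with CriteriaType.AND. A rough sketch, reusing the same classes as above:
Map<String, Object> trip = new HashMap<>();
trip.put("trip_id", "RTD8997283");

// Sketch: AND the stop_id equality with the trip ARRAY_CONTAINS criteria.
Criteria stopCriteria = Criteria.getInstance(CriteriaType.IS_EQUAL, "stop_id",
        Collections.singletonList("70021"), Part.IgnoreCaseType.ALWAYS);
Criteria tripCriteria = Criteria.getInstance(CriteriaType.ARRAY_CONTAINS, "trip",
        Collections.singletonList(trip), Part.IgnoreCaseType.ALWAYS);
Criteria combined = Criteria.getInstance(CriteriaType.AND, stopCriteria, tripCriteria);

CosmosQuery cosmosQuery = new CosmosQuery(combined).with(Sort.unsorted());
Iterable<StopInfo> items = cosmosTemplate.find(cosmosQuery, StopInfo.class, "myContainer");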
Given we have the following documents in the database:
{id:0, SID:0, STATUS:"UNDONE", UID:1, AT:122}
{id:1, SID:1, STATUS:"DONE", UID:1, AT:123}
{id:2, SID:1, STATUS:"DONE", UID:2, AT:124}
{id:3, SID:2, STATUS:"PEN", UID:3, AT:125}
{id:4, SID:2, STATUS:"PEN", UID:4, AT:126}
{id:5, SID:3, STATUS:"DONE", UID:5, AT:127}
{id:6, SID:4, STATUS:"DONE", UID:6, AT:128}
I'm trying to write a query using Spring Data MongoDB that skips 'm' distinct SIDs, takes the next 'n' distinct SIDs (sorted by the AT field) matching the filter, and returns the documents corresponding to those SIDs.
For this example, m = 1 and n = 2 would return these documents:
{id:3, SID:2, STATUS:"PEN", UID:3, AT:125}
{id:4, SID:2, STATUS:"PEN", UID:4, AT:126}
{id:5, SID:3, STATUS:"DONE", UID:5, AT:127}
So far I've managed to write this.
Aggregation aggregation = Aggregation.newAggregation(
Aggregation.project("STATUS", "AT", "SID"),
Aggregation.match(Criteria.where("STATUS").in("DONE", "PEN")),
Aggregation.sort(Sort.Direction.DESC, "AT"),
Aggregation.group("SID"),
Aggregation.skip(skip),
Aggregation.limit(limit));
AggregationResults<Map> results = mongoOperations.aggregate(aggregation, MyClass.class, Map.class);
List<String> SIDs = results.getMappedResults().stream().map(it -> it.get("_id").toString()).collect(Collectors.toList());
Query query = new Query();
query.addCriteria(Criteria.where("SID").in(SIDs));
return mongoOperations.find(query, MyClass.class);
This is returning unpredictable results, i.e. every call returns different results, so the output is not sorted as intended.
What am I missing here?
I have a feeling the sort stage in the pipeline is incorrectly placed.
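If that is the case, this is a minimal sketch of the variant I am considering, where the sort runs after the group on an accumulated AT value (assuming min(AT) is the right representative per SID to produce the ordering shown above); the second query on the collected SIDs would stay the same:
Aggregation aggregation = Aggregation.newAggregation(
        Aggregation.match(Criteria.where("STATUS").in("DONE", "PEN")),
        // keep a representative AT per SID so the group output can be sorted deterministically
        Aggregation.group("SID").min("AT").as("AT"),
        Aggregation.sort(Sort.Direction.ASC, "AT"),
        Aggregation.skip(skip),
        Aggregation.limit(limit));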
I am trying to make the Elasticsearch query below work with Spring Data. The intent is to return unique results for the field "serviceName", just like a SELECT DISTINCT serviceName FROM table would do in a SQL database.
{
    "aggregations": {
        "serviceNames": {
            "terms": {
                "field": "serviceName"
            }
        }
    },
    "size": 0
}
I configured the field as a keyword, which made the query work perfectly against the index_name/_search API, as per the response snippet below:
"aggregations": {
"serviceNames": {
"doc_count_error_upper_bound": 0,
"sum_other_doc_count": 0,
"buckets": [
{
"key": "service1",
"doc_count": 20
},
{
"key": "service2",
"doc_count": 8
},
{
"key": "service3",
"doc_count": 8
}
]
}
}
My problem is that the same query doesn't work in Spring Data: when I try to run it with a StringQuery I get the error below. I am guessing it uses a different API to run queries.
Cannot execute jest action , response code : 400 , error : {"root_cause":[{"type":"parsing_exception","reason":"no [query] registered for [aggregations]","line":2,"col":19}],"type":"parsing_exception","reason":"no [query] registered for [aggregations]","line":2,"col":19} , message : null
I have tried using the SearchQuery type to achieve the same result, no duplicates and no object loading, but I had no luck. The snippet below shows how I tried doing it.
final TermsAggregationBuilder aggregation = AggregationBuilders
.terms("serviceName")
.field("serviceName")
.size(1);
SearchQuery searchQuery = new NativeSearchQueryBuilder()
.withIndices("index_name")
.withQuery(matchAllQuery())
.addAggregation(aggregation)
.withSearchType(SearchType.DFS_QUERY_THEN_FETCH)
.withSourceFilter(new FetchSourceFilter(new String[] {"serviceName"}, new String[] {""}))
.withPageable(PageRequest.of(0, 10000))
.build();
Does anyone know how to achieve a distinct aggregation on an object property, without loading the objects, in Spring Data?
I tried many things to print the queries Spring Data generates, but without success, maybe because I am using the com.github.vanroy.springdata.jest.JestElasticsearchTemplate implementation.
I got the query parts with the following:
logger.info("query:" + searchQuery.getQuery());
logger.info("agregations:" + searchQuery.getAggregations());
logger.info("filter:" + searchQuery.getFilter());
logger.info("search type:" + searchQuery.getSearchType());
It prints:
query:{"match_all":{"boost":1.0}}
agregations:[{"serviceName":{"terms":{"field":"serviceName","size":1,"min_doc_count":1,"shard_min_doc_count":0,"show_term_doc_count_error":false,"order":[{"_count":"desc"},{"_key":"asc"}]}}}]
filter:null
search type:DFS_QUERY_THEN_FETCH
I figured it out; maybe this can help someone. The aggregation doesn't come back with the query results, but in a result of its own, and it is not mapped to any object. The object results that do come back apparently are just sample hits from the query Elasticsearch executed to build your aggregation (not sure, maybe).
I ended up creating a method that simulates what SELECT DISTINCT your_column FROM your_table would do in SQL, but I think this will only work on keyword fields; they have a limit of 256 characters if I am not wrong. I explained some lines in the comments.
Thanks @Val, since I was only able to figure it out when I debugged into the Jest code and checked the generated request and raw response.
public List<String> getDistinctField(String fieldName) {
    List<String> result = new ArrayList<>();
    try {
        final String distinctAggregationName = "distinct_field"; // name the aggregation
        final TermsAggregationBuilder aggregation = AggregationBuilders
                .terms(distinctAggregationName)
                .field(fieldName)
                .size(10000); // limits the size of the aggregation list; mine can be huge, adjust yours
        SearchQuery searchQuery = new NativeSearchQueryBuilder()
                .withIndices("your_index") // maybe can be omitted
                .addAggregation(aggregation)
                .withSourceFilter(new FetchSourceFilter(new String[] { fieldName }, new String[] { "" })) // retrieve only the field we are interested in; we can probably take this out
                .withPageable(PageRequest.of(0, 1)) // can't be zero, and I don't want to load 10 results every time it runs; it will always return one object since I found no "size":0 in the query builder
                .build();
        // had to use JestResultsExtractor because com.github.vanroy.springdata.jest.JestElasticsearchTemplate
        // doesn't have an implementation for ResultsExtractor; if you use the Spring defaults, you can probably use it
        final JestResultsExtractor<SearchResult> extractor = new JestResultsExtractor<SearchResult>() {
            @Override
            public SearchResult extract(SearchResult searchResult) {
                return searchResult;
            }
        };
        final SearchResult searchResult = ((JestElasticsearchTemplate) elasticsearchOperations).query(searchQuery,
                extractor);
        final MetricAggregation aggregations = searchResult.getAggregations();
        final TermsAggregation termsAggregation = aggregations.getTermsAggregation(distinctAggregationName); // this is where your aggregation results are, in "buckets"
        result = termsAggregation.getBuckets().parallelStream().map(TermsAggregation.Entry::getKey)
                .collect(Collectors.toList());
    } catch (Exception e) {
        // handle your error here
        e.printStackTrace();
    }
    return result;
}
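Usage is then just, for the field from the question:
// Returns the distinct values of the "serviceName" keyword field.
List<String> serviceNames = getDistinctField("serviceName");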
Elasticsearch has a GET API with which we can query a single index for a particular document by its document ID.
From Elasticsearch 5.1, the GET API also supports querying documents through an alias that can point to multiple indexes, like this:
GET /my_alias_name/_search/
{
    "query": {
        "bool": {
            "filter": {
                "term": {
                    "_id": "AUwNrOZsm6BwwrmnodbW"
                }
            }
        }
    }
}
What is the corresponding Java API to achieve this (using JestClient...)?
1) Client creation:
JestClientFactory factory = new JestClientFactory();
factory.setHttpClientConfig(new HttpClientConfig.Builder("http://localhost:9200")
.multiThreaded(true)
.build());
JestClient jestClient = factory.getObject();
2) Prepare the Search request:
SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
searchSourceBuilder.query(QueryBuilders.boolQuery().filter(QueryBuilders.termQuery("_id", "AUwNrOZsm6BwwrmnodbW")));
Search search = new Search.Builder(searchSourceBuilder.toString())
        .addIndex("my_alias_name") // add an index name or an alias
        .addType("my_type")        // add the index type here
        .build();
3) Execute the search:
SearchResult result = jestClient.execute(search);
Note: We can use an alias name in place of the index name and it works the same way.
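If you then need the mapped document out of the SearchResult, something like this should work (a sketch; MyDocument is a hypothetical POJO for your index mapping, not from the question):
// Sketch: extract the first hit's source from the Jest SearchResult.
SearchResult.Hit<MyDocument, Void> hit = result.getFirstHit(MyDocument.class);
if (hit != null) {
    MyDocument document = hit.source;
    System.out.println(document);
}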
I wish to obtain a list of documents from a MongoDB collection via a geospatial index. I have indexed the collection with a 2dsphere index:
db.getCollection("Info").ensureIndex(new BasicDBObject("location", "2dsphere"), "geospatial");
A document in the Info collection looks like this:
{ "_id" : ObjectId("52631572fe38203a7388ebb5"), "location" : { "type" : "Point", "coordinates" : [ 144.6682361, -37.8978304 ] }
When I query the Info collection by the coordinates [ 144.6682361, -37.8978304 ], I get zero documents returned.
I am using the Java API to perform the query. My Java code is below:
DBCollection coll1 = db.getCollection("Info");
BasicDBObject locQuery = new BasicDBObject();
locQuery.put("near", loc);
locQuery.append("maxDistance", 3000);
DBCursor locCursor = coll1.find(locQuery);
System.out.println("LOCCURSOR " + locCursor.size());
locCursor.size() always returns 0. Not sure what I am missing. At the same time, I am not getting any errors; it just gives me 0 documents back. Any ideas, Mongo users? Thanks for your time and help.
You can directly pass the value of your co-ordinates into your query:
double[] coordinates = {144.6682361, -37.8978304}; // GeoJSON order: [longitude, latitude]
BasicDBObject geo = new BasicDBObject("$geometry", new BasicDBObject("type", "Point").append("coordinates", coordinates));
BasicDBObject filter = new BasicDBObject("$near", geo);
filter.put("$maxDistance", 3000);
BasicDBObject locQuery = new BasicDBObject("location", filter);
System.out.println(locQuery);
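The query can then be run against the same collection as in the question (a minimal sketch reusing coll1 from above; it assumes the 2dsphere index on "location" is in place):
// Sketch: execute the $near query built above against the Info collection.
DBCursor locCursor = coll1.find(locQuery);
while (locCursor.hasNext()) {
    System.out.println(locCursor.next());
}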
I have a JSON object:
{ "_id" : "1", "_class" : "com.model.Test", "projectList" : [ {
"projectID" : "Spring", "resourceIDList" : [ "Mark","David" ] },
{ "projectID" : "MongoDB", "resourceIDList" : [ "Nosa ] }
I need to be able to remove the resourceIDList for project "Spring" and assign a new resourceIDList.
The resourceIDList is just a List<String>.
Whenever I try the following, nothing is updated in the DB:
Query query = new Query(where("_id").is("1").and("projectID").is("Spring"));
mongoOperations.updateMulti( query,new Update().set("ressourceIDList", populateResources()), Test.class);
Replacing the resourceIDList in the embedded document matching {"projectList.projectID":"Spring"} may be accomplished in the JavaScript shell like so:
(I like to start with the JS shell, because it is less verbose than Java and the syntax is relatively straightforward. Examples with JS can then be applied to any of the language drivers.)
> db.collection.update({_id:"1", "projectList.projectID":"Spring"}, {$set:{"projectList.$.resourceIDList":["Something", "new"]}})
The documentation on using the "$" operator to modify embedded documents may be found in the "The $ positional operator" section of the "Updating" documentation:
http://www.mongodb.org/display/DOCS/Updating#Updating-The%24positionaloperator
There is more information on embedded documents in the "Dot Notation (Reaching into Objects)" Documentation:
http://www.mongodb.org/display/DOCS/Dot+Notation+%28Reaching+into+Objects%29
The above may be done in the Java Driver like so:
Mongo m = new Mongo("localhost", 27017);
DB db = m.getDB("test");
DBCollection myColl = db.getCollection("collection");
ArrayList<String> newResourceIDList = new ArrayList<String>();
newResourceIDList.add("Something");
newResourceIDList.add("new");
BasicDBObject myQuery = new BasicDBObject("_id", "1");
myQuery.put("projectList.projectID", "Spring");
BasicDBObject myUpdate = new BasicDBObject("$set", new BasicDBObject("projectList.$.resourceIDList", newResourceIDList));
myColl.update(myQuery, myUpdate);
System.out.println(myColl.findOne().toString());
If you have multiple documents that match {"projectList.projectID" : "Spring"} you can update them at once using the multi = true option. With the Java driver it would look like this:
myColl.update(myQuery, myUpdate, false, true);
In the above, "false" represents "upsert = false", and "true" represents "multi = true". This is explained in the documentation on the "Update" command:
http://www.mongodb.org/display/DOCS/Updating#Updating-update%28%29
Unfortunately, I am not familiar with the Spring framework, so I am unable to tell you how to do this with the "mongoOperations" class. Hopefully, the above will improve your understanding of how embedded documents may be updated in Mongo, and you will be able to accomplish what you need to do with Spring.
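For reference, the same positional $set expressed with Spring Data MongoDB would presumably look something like the following (a sketch only, not verified against your mapping; it reuses the populateResources() method and Test class from your question):
// Sketch: the shell/Java positional update above, expressed via MongoOperations.
Query query = new Query(Criteria.where("_id").is("1").and("projectList.projectID").is("Spring"));
Update update = new Update().set("projectList.$.resourceIDList", populateResources());
mongoOperations.updateFirst(query, update, Test.class);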