Two documents can have the same IMAGE_CONTENT_INSTANCE_HANDLE, and the state can be BOOKED or RELEASED,
but I want all image instance handles that exist only in the RELEASED state.
Currently I am doing this by firing two queries, which has introduced performance issues.
{
"state" : "RELEASED"
}
with projection { "imageContentInstance.handle" : 1}
I iterate through the result of this query and, for each handle, fire another query as below, excluding from the list any handle that also exists in the BOOKED state. That way I get only the handles that are in the RELEASED state alone.
while (cursor.hasNext()) {
    String result = JSON.serialize(cursor.next());
    ICI ici = objectMapper.readValue(result, ICI_COLLECTION_TYPE_REF);
    // check whether the same handle also exists in the BOOKED state
    Document queryDocument = new Document("imageContentInstance.handle", ici.getImageContentInstance().getHandle())
            .append("state", "BOOKED");
    Document bookedDoc = iciDAO.findOne(queryDocument);
    if (null != bookedDoc) {
        LOGGER.debug("Skipping handle that is also in BOOKED state");
        continue;
    }
    iciHandles.add(ici.getImageContentInstance().getHandle().toString());
    LOGGER.debug("ImageInstanceHandle is added to the list and the list size is " + iciHandles.size());
}
I want to achieve this in a single MongoDB query, equivalent to the SQL below, to improve performance. I really appreciate your comments.
SELECT *
FROM ici i
WHERE i.state = 'RELEASED'
AND NOT EXISTS
(SELECT * FROM ici ic WHERE ic.handle = i.handle AND ic.state = 'BOOKED'
);
Example: suppose the documents are as below.
{
    "_id" : ObjectId("58c9f524fa8cd6a517cf5ddf"),
    "imageContentInstance" : {
        "handle" : "ICI:1234",
        "key" : null
    },
    "instanceHandle" : "LCI:RNBM12",
    "state" : "BOOKED"
}
{
    "_id" : ObjectId("58c9f524fa8cd6a517cf5de0"),
    "imageContentInstance" : {
        "handle" : "ICI:1234",
        "key" : null
    },
    "instanceHandle" : "LCI:RNBM13",
    "state" : "RELEASED"
}
{
    "_id" : ObjectId("58c9f524fa8cd6a517cf5de1"),
    "imageContentInstance" : {
        "handle" : "ICI:456",
        "key" : null
    },
    "instanceHandle" : "LCI:RNBM14",
    "state" : "RELEASED"
}
My query should return the handle of the last document alone, i.e., the document whose handle appears only with the RELEASED state. I am stuck, and I really appreciate your ideas to improve this.
From your question, I understand that you want everything with state = 'RELEASED' and not state = 'BOOKED', which I think you have written slightly incorrectly.
MongoDB query:
db.inventory.find({'state' : 'RELEASED'})
Also go through the MongoDB docs.
I hope it will help. I am also new to MongoDB, so if there is an error please correct it.
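If you also need to drop the handles that appear in a BOOKED document, one option is a single aggregation that groups by handle and keeps only the groups that never contain BOOKED. The following is only a sketch (not tested against your data; the database and collection names are assumptions), using the MongoDB Java driver 4.x aggregation helpers:
import static com.mongodb.client.model.Accumulators.addToSet;
import static com.mongodb.client.model.Aggregates.group;
import static com.mongodb.client.model.Aggregates.match;
import static com.mongodb.client.model.Filters.and;
import static com.mongodb.client.model.Filters.eq;
import static com.mongodb.client.model.Filters.ne;

import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ReleasedOnlyHandles {
    public static void main(String[] args) {
        // "test" database and "ici" collection are assumptions
        MongoCollection<Document> coll = MongoClients.create()
                .getDatabase("test")
                .getCollection("ici");

        List<String> iciHandles = new ArrayList<>();
        coll.aggregate(Arrays.asList(
                // collect the distinct states seen for each handle
                group("$imageContentInstance.handle", addToSet("states", "$state")),
                // keep handles whose states contain RELEASED but never BOOKED
                match(and(eq("states", "RELEASED"), ne("states", "BOOKED")))
        )).forEach(doc -> iciHandles.add(doc.getString("_id")));

        System.out.println(iciHandles); // for the sample documents this prints [ICI:456]
    }
}
This replaces the per-handle BOOKED lookup with one round trip; the group stage emits one document per handle with the set of states it was seen in.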
I have a JSON object something like this. I want to exclude the field "placeOfBirth" from the response, and I am using a projection for that. Somehow it works only for top-level fields and not for subfields, so "placeOfBirth" is never excluded, while "status" is removed from the response.
Here is my code
Projection projectionExclude = Projection.of().exclude("subObject.placeOfBirth").exclude("status");
MorphiaCursor<T> cursor = datastore.aggregate(T.class)
        .match(Filters.eq("about", id))
        .project(projectionExclude)
        .execute(T.class);
if (cursor != null && cursor.hasNext()) {
    result = cursor.toList().get(0);
}
Json data
{
"about: " "testing/123",
"subObject" : [
{
"about" : "subobject/123",
"placeOfBirth": {
"birth": ["Lisbon"]
}
}
],
"status" : "approved"
}
How can I make this work? Is there some other way to achieve this?
This actually works for me on 2.2. Here's the test I'm running:
MongoCollection<Document> collection = getDocumentCollection(User.class);
collection.insertOne(parse("{'about': 'testing/123', 'subObject' : [ {'about' : 'subobject/123', 'placeOfBirth': {'birth': " +
"['Lisbon']}}],'status' : 'approved'}"));
Document next = getDs().aggregate(User.class)
.match(eq("about", "testing/123"))
.project(project()
.exclude("subObject.placeOfBirth")
.exclude("status"))
.execute(Document.class)
.next();
assertFalse(next.toJson().contains("placeOfBirth"));
assertFalse(next.toJson().contains("status"));
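If it still misbehaves in your setup, it can help to compare against the raw pipeline the projection should translate to. Here is a minimal sketch with the plain MongoDB Java driver (the database and collection names are assumptions) that excludes the nested field directly:
import static com.mongodb.client.model.Aggregates.match;
import static com.mongodb.client.model.Aggregates.project;
import static com.mongodb.client.model.Filters.eq;
import static com.mongodb.client.model.Projections.exclude;

import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

import java.util.Arrays;

public class ExcludeNestedField {
    public static void main(String[] args) {
        // "test" database and "users" collection are assumptions
        MongoCollection<Document> coll = MongoClients.create()
                .getDatabase("test")
                .getCollection("users");

        Document first = coll.aggregate(Arrays.asList(
                match(eq("about", "testing/123")),
                // equivalent to { $project: { "subObject.placeOfBirth": 0, "status": 0 } }
                project(exclude("subObject.placeOfBirth", "status"))
        )).first();

        System.out.println(first == null ? null : first.toJson());
    }
}
If this pipeline excludes the subfield while the Morphia version does not, the difference is likely in how your Morphia version maps the projection, so logging the generated pipeline would be the next step.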
I have been looking for a way to create an alert when new documents are added to ES via Logstash. I have seen some threads on here, such as stackoverflow.com/a/51980618/4604579, but they do not really serve my purpose, since the plug-ins mentioned do not work with the newest version of ELK and there is no Changes API yet.
So I have resorted to trying 2 different approaches:
Create a scroll over all the documents in a given index using the Search API, retain the last document's ID, and use it after a given timeout period to get all documents that were added after it.
Create a Watcher that checks at a given interval (for example 5 minutes) whether new documents have been added to an index.
I have made progress on approach 1: I can scroll through the roughly 50k documents currently in ES and retrieve the last document's ID (I sort the query by timestamp in ascending order, so I know the last document is the latest one inserted), as sketched below. But I don't know how efficient this approach is, and I know that a scroll may time out after a given delay, so if no new documents are inserted the scroll context will be removed.
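For reference, here is roughly what approach 1 looks like with the high-level REST client (only a sketch; the index name, timestamp field, and page size are assumptions, and newer client versions may move some of these classes around):
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.action.search.SearchScrollRequest;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.SearchHit;
import org.elasticsearch.search.builder.SearchSourceBuilder;
import org.elasticsearch.search.sort.SortOrder;

import java.io.IOException;

public class ScrollLatestId {

    // scroll over the whole index, sorted by timestamp, and return the ID of the last hit
    public static String scrollToLatestId(RestHighLevelClient client) throws IOException {
        SearchRequest request = new SearchRequest("logstash");        // index name is an assumption
        request.scroll(TimeValue.timeValueMinutes(1L));
        request.source(new SearchSourceBuilder()
                .query(QueryBuilders.matchAllQuery())
                .sort("@timestamp", SortOrder.ASC)                    // oldest first, newest last
                .size(1000));

        SearchResponse response = client.search(request, RequestOptions.DEFAULT);
        String scrollId = response.getScrollId();
        String lastId = null;

        SearchHit[] hits = response.getHits().getHits();
        while (hits != null && hits.length > 0) {
            lastId = hits[hits.length - 1].getId();                   // remember the newest ID seen so far
            SearchScrollRequest scrollRequest = new SearchScrollRequest(scrollId);
            scrollRequest.scroll(TimeValue.timeValueMinutes(1L));
            response = client.scroll(scrollRequest, RequestOptions.DEFAULT);
            scrollId = response.getScrollId();
            hits = response.getHits().getHits();
        }
        return lastId;
    }
}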
I was looking also into using a Watcher, but I don't really understand how I can set up the condition to check if a new document was inserted in a given index.
I imagine I can do something of the genre:
PUT _watcher/watch/new_docs
{
  "trigger" : {
    "schedule" : {
      "interval" : "5s"
    }
  },
  "input" : {
    "search" : {
      "request" : {
        "indices" : "logstash",
        "body" : {
          "size" : 0,
          "query" : { "match" : { "#timestamp" : "now-5s" } }
        }
      }
    }
  },
  "condition" : {
    "compare" : { ?? }
  },
  "actions" : {
    "my_webhook" : {
      "webhook" : {
        "method" : "POST",
        "host" : "mylisteninghost",
        "port" : 9200,
        "path" : "/{{watch_id}}",
        "body" : "New document {{document ID}} errors"
      }
    }
  }
}
I am not exactly sure how to define or use the Watcher, or whether it would even work.
Can anyone let me know what the best course of action would be?
Thank you
EDIT:
For those interested, I found a way to poll the ES REST API using Search After. The difference is that with Scroll, a snapshot is taken of the documents in the ES DB, so any new documents added won't be in that snapshot. Search After, by contrast, is stateless: it uses unique sorting parameters (in my case timestamp/ID), holds on to the last values fetched, and then queries for all documents that come after the held parameters. This way, if any new documents are added, they will come after the held timestamp and will be fetched by the query.
Code:
public static void searchAfterElasticData()
        throws FileNotFoundException, IOException, InterruptedException {
    // create a search request for a given index, sorted by timestamp and _id
    SearchRequest search_request = new SearchRequest(elastic_index);
    SearchSourceBuilder source_builder =
            getSearchSourceBuilder("#timestamp", "_id", 100);
    search_request.source(source_builder);
    SearchResponse search_response = null;
    try {
        search_response = client.search(search_request, RequestOptions.DEFAULT);
    } catch (ElasticsearchException | ConnectException ex) {
        log.info("Error while querying Elastic API: {}", ex.toString());
    }
    if (search_response != null) {
        SearchHit[] search_hits = search_response.getHits().getHits();
        Object[] sort_values = null;
        while (search_hits != null) {
            if (search_hits.length > 0) {
                // if there are records retrieved, parse them
                for (SearchHit hit : search_hits) {
                    Map<String, Object> source_map = hit.getSourceAsMap();
                    try {
                        parse((String) source_map.get("message"));
                    } catch (Exception ex) {
                        log.error("Error while parsing: {}",
                                (String) source_map.get("message"));
                    }
                }
                // get the sort values of the last record for the next search_after request
                log.info("Getting sorting values");
                sort_values = search_response.getHits()
                        .getAt(search_hits.length - 1).getSortValues();
            } else {
                log.info("Waiting 1 minute for new entries");
                Thread.sleep(60000);
            }
            // only set search_after once sort values from at least one hit are available
            if (sort_values != null) {
                source_builder.searchAfter(sort_values);
            }
            search_request.source(source_builder);
            search_response =
                    client.search(search_request, RequestOptions.DEFAULT);
            search_hits = search_response.getHits().getHits();
            log.info("Fetched hits: {}", search_hits.length);
            log.info("Searching after for new hits");
        }
    }
}
I would still like to know whether it is possible to do the same using a Watcher; also, if anyone has suggestions to make the code more elegant, please share.
Thank you
I'm using firebase4j (a Firebase library for Java; I know it is much better to go with Node, I just wanted to try doing it with Java). In my database I need to persist the URL of images along with a bunch of the picture's information. The thing is that the picture URL itself is very deep in the JSON:
"users" : {
"aCategory" : {
"aUser" : {
"photos" : {
"photoUid1" : [ {
"value1" : false,
"value2" : "qwerty",
"score" : 40,
"url" : "http://someurl.com"
}
That is why I am trying to create an index of the pictures, ordered by score, containing the URL pointing to the location of the photo object in the Firebase database. Here is where the issue begins: firebase4j does not let you push to a list, for example, so the index ends up with this format:
{
"-UID1": {
"firebaseImgUrl": "users/aCategory/aUser/photos/photoUid1",
"score": 31
},
"-UID2": {
"firebaseImgUrl": "users/aCategory/aUser/photos/photoUid2",
"score": 30
}
}
I already added the ".indexOn" rule so that Firebase answers with the right photos when asked for http://firebaseurl.com/users/...?orderBy="score"&limitToFirst=10, which is what I'm doing. I would like to know how I should iterate over a JSON object of objects as shown in the example above. I'm receiving the data in an Angular 4 client. I've tried a number of methods that haven't worked for me:
result: Photo[] = [];
for (let key in json) {
    console.log(key); // prints the UIDs
    console.log(key.url); // url is not a property of string
    // thus
    result.push(new Photo(key.url, key.score)); // not working
}
The key is only a string, indicating the keys in your JSON. You should use it to access your object, like this:
result: Photo[] = [];
for (let key in json) {
    result.push(new Photo(json[key].firebaseImgUrl, json[key].score));
}
I have a collection of users:
> db.users.find().pretty()
{
"_id" : ObjectId("544ab933e4b099c3cfb62e12"),
"token" : "8c9f8cf4-1689-48ab-bf53-ee071a377f60",
"categories" : [
DBRef("cue_categories", ObjectId("544ab933e4b099c3cfb62e10")),
DBRef("cue_categories", ObjectId("544ab933e4b099c3cfb62e11"))
]
}
I want to find all users who have (let's say) ObjectId("544ab933e4b099c3cfb62e10") category and remove it (because this category was deleted and I don't want users to refer to it anymore).
The valid query to do it in JSON format would be:
db.users.update({
categories:{
$in:[
DBRef("cue_categories", ObjectId("544ab933e4b099c3cfb62e10"))
]
}
},
{
$unset:{
"categories.$":true
}
})
Here's a Spring mongodb query:
Query query = new Query();
query.addCriteria(Criteria.where("categories.$id").in(categoryIds));
Update update = new Update();
update.unset("categories.$");
operations.updateMulti(query, update, User.class);
In order to make an appropriate DB reference I have to provide a list of category IDs, each category ID (in categoryIds) is an instance of org.bson.types.ObjectId.
The problem is that the result query turns out to be without a positional operator:
DEBUG o.s.data.mongodb.core.MongoTemplate - Calling update using
query: { "categories.$id" : { "$in" : [ { "$oid" :
"544ab933e4b099c3cfb62e10"}]}} and update: { "$unset" : { "categories"
: 1}} in collection: users
So the update part must be { "$unset" : { "categories.$" : 1}}
P.S.
I managed to get around it by falling back to the plain Java driver:
DBObject query = new BasicDBObject("categories.$id", new BasicDBObject("$in", categoryIds));
DBObject update = new BasicDBObject("$unset", new BasicDBObject("categories.$", true));
operations.getCollection("users").updateMulti(query, update);
But my question still remains open!
P.P.S.
My case is very similar to the "Update Array Field Using Positional Operator ($) Does Not Work" bug, which looks like it was fixed in versions 1.4.1 and 1.5. That being said, I use spring-data-mongodb version 1.5.1, and I'm confused. Does anybody have a clue?
You cannot use the positional $ operator with $unset, as per the MongoDB documentation; it will set the matched array element to null rather than removing it. https://docs.mongodb.com/manual/reference/operator/update/positional/
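If the goal is to remove the deleted category from the array entirely, $pull is the usual alternative. The following is only a sketch (not tested; it assumes a recent driver with the two-argument DBRef constructor, and reuses the User class and collection names from the question):
import com.mongodb.DBRef;
import org.bson.types.ObjectId;
import org.springframework.data.mongodb.core.MongoOperations;
import org.springframework.data.mongodb.core.query.Criteria;
import org.springframework.data.mongodb.core.query.Query;
import org.springframework.data.mongodb.core.query.Update;

public class RemoveDeletedCategory {
    // removes the DBRef to the deleted category from every user's categories array
    // (User is the document class from the question)
    public static void removeCategory(MongoOperations operations, ObjectId deletedCategoryId) {
        Query query = new Query(Criteria.where("categories.$id").is(deletedCategoryId));
        // $pull deletes the matching array element instead of setting it to null
        Update update = new Update().pull("categories",
                new DBRef("cue_categories", deletedCategoryId));
        operations.updateMulti(query, update, User.class);
    }
}
Unlike the positional $unset, this leaves no null placeholder behind in the categories array.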
I have the below document structure in a MongoDB database:
{
"_id" : ObjectId("52ec7b43e4b048cd48499b35"),
"eidlist" : [
{
"eid" : "64286",
"dst" : NumberLong(21044),
"score" : 0
},
{
"eid" : "65077",
"dst" : NumberLong(21044),
"score" : 0
}
],
"src" : NumberLong(21047)
}
I would like to update the score field of the first object using the Java MongoDB driver.
I tried the following code but it is not working :( :
DBObject update_query=new BasicDBObject("src", key).append("eidlist.eid", e.getEdgeid());
DBObject data=new BasicDBObject("$set",new BasicDBObject("eidlist.score",100));
coll.update(update_query, data);
Please help me solve this problem. I have checked all the parameters passed to the update function; I think something is wrong with the update logic :(
You were close. You omitted the positional operator from the update. Edit your code as shown:
DBObject data=new BasicDBObject("$set",new BasicDBObject("eidlist.$.score",100));
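For context, here is the corrected update wrapped into a complete call with the legacy driver API (the method and parameter names are just for illustration):
import com.mongodb.BasicDBObject;
import com.mongodb.DBCollection;
import com.mongodb.DBObject;

public class UpdateScore {
    // sets score = 100 on the eidlist entry whose eid matches, for the document with the given src
    public static void updateScore(DBCollection coll, long src, String eid) {
        DBObject update_query = new BasicDBObject("src", src).append("eidlist.eid", eid);
        // the positional $ targets the array element matched by "eidlist.eid" above
        DBObject data = new BasicDBObject("$set", new BasicDBObject("eidlist.$.score", 100));
        coll.update(update_query, data);
    }
}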
The solution for this problem is:
DBObject data=new BasicDBObject("$set",new BasicDBObject("eidlist.$.score",""+100));
Ensure the data type of every field used in the update query; it should be compatible with what is stored in MongoDB :)