I am a novice Couchbase user, and I am trying to insert documents into the default bucket as shown below.
I found the following two ways to insert JSON documents into the bucket:
1) Inserting by preparing a JsonDocument and upserting it into the bucket
StringBuilder strBuilder = new StringBuilder();
strBuilder.append("{'phone':{'y':{'phonePropertyList':{'dskFlag':'false','serialId':1000,'inputTray':{'LIST': {'e':[{'inTray':{'id':'1','name':'BypassTray','amount': {'unit':'sheets','state':'empty','typical':'0','capacity':'100'}");
String LDATA = strBuilder.toString();
Cluster cluster = CouchbaseCluster.create("localhost");
Bucket bucket = cluster.openBucket("default");
JsonObject deviceinfoObj = JsonObject.create().put("phoneinfo", LDATA);
bucket.upsert(JsonDocument.create("phone", deviceinfoObj));
2) Or by using a direct query, similar to SQL:
String query = "upsert into default(KEY, VALUE) values(LDATA)"
I am not able to find out how to execute the above query like a normal SQL statement.
Example:
Statement st = connection.createStatement();
ResultSet rs = st.executeQuery(query);
How do I use a N1qlQuery to insert a JSON document into a Couchbase bucket?
I found two ways of fetching documents.
i) Getting a document directly by its document ID:
bucket.get("phone").content().get("phoneinfo")
ii) Getting documents by using a N1qlQueryResult:
N1qlQueryResult result = bucket.query(N1qlQuery.simple("select * from default;"));
for (N1qlQueryRow row : result) {
    System.out.println(row);
}
I am confused by the different approaches for inserting documents into and fetching documents from a bucket in Couchbase.
If I insert documents using the first approach, I need to prepare a JsonObject with some key and value for the entire JSON document.
So I think it is better to insert documents using the second approach, so that I can also fetch them with a N1qlQueryResult (the second fetch approach). With the first approach I would need to know the number of documents in the bucket before I could loop through all of them.
Queries:
1) How do I get selected nested nodes from a document?
2) In a JSON document, do I need to split the key-value pairs for each node and then put them into a JsonObject to prepare the JsonDocument?
3) How do I create a view on the bucket for fast retrieval?
Why don't you create a simple Java POJO and annotate it as a @Document? Then use a CrudRepository to save and retrieve documents from Couchbase.
This sample might help you:
https://github.com/maverickmicky/spring-couchbase-cache
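A rough sketch of what that could look like; the entity and repository here are made up for illustration, and this assumes Spring Data Couchbase is on the classpath and configured:

@Document
public class Phone {

    @Id
    private String id;          // document key, e.g. "phone"

    @Field
    private String phoneInfo;   // payload; could also be a nested object

    // getters and setters omitted
}

public interface PhoneRepository extends CrudRepository<Phone, String> {
}

// Usage (with the repository injected):
// phoneRepository.save(phone);                        // insert or update
// Phone p = phoneRepository.findById("phone").get();  // fetch by document id
//                                                     // (findOne in older Spring Data versions)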
I've been using Azure's Cosmos DB for a while. Recently I did a bulk import using a stored procedure on a collection in my database, and that used to work fine. Now I have to do the same in another collection, which uses partitioning. I searched the Azure code samples and modified my previous bulk insert function like this:
public void createMany(JSONArray aDocumentList, PartitionKey aPartitionKey) throws DocumentClientException {
    List<String> aList = new ArrayList<String>();
    for (int aIndex = 0; aIndex < aDocumentList.length(); aIndex++) {
        JSONObject aJsonObj = aDocumentList.getJSONObject(aIndex);
        aList.add(aJsonObj.toString());
    }
    String aSproc = getCollectionLink() + BULK_INSERTION_PROCEDURE;
    RequestOptions requestOptions = new RequestOptions();
    requestOptions.setPartitionKey(aPartitionKey);
    String result = documentClient.executeStoredProcedure(aSproc, requestOptions,
            new Object[] { aList }).getResponseAsString();
}
But this code gives me the following error:
com.microsoft.azure.documentdb.DocumentClientException: Message: {"Errors":["Encountered exception while executing function. Exception = Error: {\"Errors\":[\"Requests originating from scripts cannot reference partition keys other than the one for which client request was submitted.\"]}\r\nStack trace: Error: {\"Errors\":[\"Requests originating from scripts cannot reference partition keys other than the one for which client request was submitted.\"]}\n at callback (bulkInsertionStoredProcedure.js:1:1749)\n at Anonymous function (bulkInsertionStoredProcedure.js:689:29)"]}
I'm not quite certain what that error actually means. Since the partition key is just a JSON key in the document, why would it be needed in the other documents as well? Do I need to add it to each document (under the partition key field)? Could anyone please tell me what I'm missing here? I've searched the internet and haven't found anything useful that would make it work.
I've already answered this question here. The gist of it is that the documents you're inserting with your sproc must have a partition key that matches the one you pass with the request options:
// ALL documents inserted must have a partitionKey value that matches
// the "aPartitionKey" value
requestOptions.setPartitionKey(aPartitionKey);
So if aPartitionKey == 123456, then all the documents you insert with the sproc must belong to that partition. If you have documents spanning multiple partitions that you want to bulk insert, you will have to group them by partition key and run the sproc separately for each group, as in the sketch below.
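A rough sketch of that grouping, reusing the names from your method; the "partitionKey" field name is an assumption, so substitute whatever your collection's actual partition key path is:

// Group the documents by their partition key value and call the bulk-insert
// sproc once per group, so every document in a call matches the PartitionKey
// set on the RequestOptions.
Map<String, List<String>> docsByPartition = new HashMap<>();
for (int i = 0; i < aDocumentList.length(); i++) {
    JSONObject doc = aDocumentList.getJSONObject(i);
    String pk = doc.getString("partitionKey"); // assumed partition key field
    docsByPartition.computeIfAbsent(pk, k -> new ArrayList<>()).add(doc.toString());
}

String sproc = getCollectionLink() + BULK_INSERTION_PROCEDURE;
for (Map.Entry<String, List<String>> group : docsByPartition.entrySet()) {
    RequestOptions options = new RequestOptions();
    options.setPartitionKey(new PartitionKey(group.getKey()));
    documentClient.executeStoredProcedure(sproc, options,
            new Object[] { group.getValue() });
}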
I'm new to Couchbase, and I'm using Java for this. I'm trying to remove a document from a bucket by looking up its ID with query parameters (assuming the ID is unknown).
Let's say I have a bucket called test-data. In that bucket I have a document with an ID of 555 and content of {"name":"bob","num":"10"}.
I want to be able to remove that document by querying on 'name' and 'num'.
So far I have this (hardcoded):
String statement = "SELECT META(`test-data`).id from `test-data` WHERE name = \"bob\" and num = \"10\"";
N1qlQuery query = N1qlQuery.simple(statement);
N1qlQueryResult result = bucket.query(query);
List<N1qlQueryRow> row = result.allRows();
N1qlQueryRow res1 = row.get(0);
System.out.println(res1);
//output: {"id":"555"}
So I'm getting a JSON object that has the document's ID in it. What would be the best way to extract that ID so that I can then remove the queried document from the bucket using its ID? Am I doing too many steps? Is there a better way to extract the document's ID?
bucket.remove(docID)
Ideally I'd like to use something like a N1qlQueryResult to get this going, but I'm not sure how to set that up.
N1qlQueryResult result = bucket.query(select("META.id").fromCurrentBucket().where((x("num").eq("\""+num+"\"")).and(x("name").eq("\""+name+"\""))));
But that isn't working at the moment.
Any help or direction would be appreciated. Thanks.
There might be a better way, which is running this kind of query:
delete from `test-data` use keys '00000874a09e749ab6f199c0622c5cb0' returning raw META(`test-data`).id
or, if your fields have an index:
delete from `test-data` where name='bob' and num='10' returning raw META(`test-data`).id
The first query deletes the document with the given document key (which is META().id) and returns the document ID of the deleted document if it deletes one; it returns an empty result if no document was deleted.
You can implement the second query with the Couchbase Java SDK as follows:
Statement statement = deleteFrom("test-data")
.where(x("name").eq(s("bob")).and(x("num").eq(s("10"))))
.returningRaw(meta(i("test-data")).get("id"));
You can make this statement parameterized or just execute it as is; a sketch of both follows.
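A quick sketch of both options with the 2.x SDK (the placeholder names $name and $num are arbitrary):

// Execute the fluent Statement built above:
N1qlQueryResult result = bucket.query(N1qlQuery.simple(statement));

// Or run the same delete as a parameterized query:
String q = "DELETE FROM `test-data` WHERE name = $name AND num = $num "
        + "RETURNING RAW META(`test-data`).id";
N1qlQueryResult deleted = bucket.query(N1qlQuery.parameterized(q,
        JsonObject.create().put("name", "bob").put("num", "10")));
for (N1qlQueryRow row : deleted) {
    System.out.println(row); // id of each deleted document
}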
I'm querying a DynamoDB table using the hash key. Each record in the table is uniquely identified by a hash key and a range key.
DynamoDBMapper mapper;
....
MyClass myClass = new MyClass();
myClass.setHashKey(hashKey);
DynamoDBQueryExpression<MyClass> queryExpression = new DynamoDBQueryExpression<MyClass>()
.withHashKeyValues(myClass);
PaginatedQueryList<MyClass> entries = mapper.query(MyClass.class, queryExpression);
//Work with the elements of entries
When the result set is more than 1 MB, how can I retrieve the rest?
I cannot find any method to get the LastEvaluatedKey mentioned in the docs.
The AWS SDK's DynamoDBMapper handles pagination for you. It queries the database internally, and when you need more than 1 MB of data it issues further queries and fetches the rest for you.
If you want the complete list in one go, you can use operations like size(), or copy the paginated result into a list, which forces the mapper to fetch the complete result.
In short, you need not worry about LastEvaluatedKey; its handling is done for you.
Examples:
PaginatedQueryList<T> resultPaginatedList = dynamoDBMapper.query(getModelClass(), queryExpression);
List<T> queryList = new LinkedList<>(resultPaginatedList); //--- Line1
logger.info("Total elements found: " + queryList.size()); //--- Line2
Both line 1 and line 2 show operations for which the mapper will fetch the complete result (by querying multiple times, handled by the SDK), not just the first 1 MB.
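If you do not need everything in memory at once, you can also simply iterate the list; with the default lazy-loading strategy the mapper fetches additional pages on demand. A minimal sketch, where process is a hypothetical per-item handler:

// Pages are fetched from DynamoDB as the iterator advances, so the full
// result set is never forced into memory in one go.
for (MyClass item : entries) {
    process(item); // hypothetical handler for each record
}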
I'm having a problem with MongoDB using Java when I try to add documents with a customized _id field. When I insert a new document into the collection, I want to ignore the document if its _id already exists.
In the Mongo shell, collection.save() can be used in this case, but I cannot find the equivalent method in the MongoDB Java driver.
Just to add an example:
I have a collection of documents containing websites' information, with the URL as the _id field (which is unique).
I want to add some more documents. Some of the new documents might already exist in the current collection, so I want to keep adding all the new documents except the duplicate ones.
This can be achieved with collection.save() in the Mongo shell, but I can't find the equivalent method in the MongoDB Java driver.
Hopefully someone can share the solution. Thanks in advance!
In the MongoDB Java driver, you could try using a BulkWriteOperation obtained from the initializeUnorderedBulkOperation() method of the DBCollection object (the one that represents your collection); an unordered bulk operation keeps going when individual inserts fail. This is used as follows:
MongoClient mongo = new MongoClient("localhost", port_number);
DB db = mongo.getDB("db_name");
DBCollection col = db.getCollection("collection_name");

ArrayList<DBObject> objectList; // Fill this list with your objects to insert

// Unordered bulk write: every insert is attempted even if some fail,
// so a duplicate _id does not stop the remaining inserts.
BulkWriteOperation operation = col.initializeUnorderedBulkOperation();
for (int i = 0; i < objectList.size(); i++) {
    operation.insert(objectList.get(i));
}
BulkWriteResult result = operation.execute();
With this approach each insert gets its own error handling, so a document with a duplicated _id will fail as usual, but because the bulk operation is unordered the remaining documents are still inserted. In the end, you can use the getInsertedCount() method of the BulkWriteResult object to know how many documents were really inserted.
This can prove to be a bit inefficient if lots of data is inserted this way, though. This is just sample code (found on journaldev.com and edited to fit your situation); you may need to adapt it to your configuration, and it is untested.
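One detail on reading that count: if any insert in the batch fails (for example on a duplicate _id), execute() throws a BulkWriteException, while the successful inserts are still applied; the count can then be read from the exception's write result. A rough, untested sketch:

int inserted;
try {
    BulkWriteResult bulkResult = operation.execute();
    inserted = bulkResult.getInsertedCount();             // nothing failed
} catch (BulkWriteException e) {
    inserted = e.getWriteResult().getInsertedCount();     // duplicates were skipped
}
System.out.println("Documents actually inserted: " + inserted);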
I guess save is doing something like this:
fun save(doc: Document, col: MongoCollection<Document>) {
    if (doc.getObjectId("_id") == null) {
        doc.put("_id", ObjectId()) // generate a new id when none is present
    }
    // upsert: replace the document with this _id, or insert it if it does not exist yet
    col.replaceOne(Document("_id", doc.getObjectId("_id")), doc, ReplaceOptions().upsert(true))
}
Maybe they removed save so that you decide for yourself how to generate the new id.
I'm building a logging application that does the following:
- gets JSON strings from many loggers continuously and saves them to a db
- serves the collected data as a per-logger bulk
My intention is to use a document-based NoSQL store so that I have the bulk structure right away. After some research I decided to go with MongoDB because of the following features:
- comprehensive functions to insert data into existing structures ($push, (capped) collections)
- automatic sharding with a key I choose (so I can shard on a per-logger basis and therefore serve bulk data quickly, with all of a logger's data already on the same db server)
The JSON I get from the loggers looks like this:
[
  {
    "bdy": {
      "cat": {"id": "36494h89h", "toc": 55, "boc": 99},
      "dataT": "2013-08-12T13:44:03Z",
      "had": 0,
      "rng": 23,
      "Iss": [{"id": 10, "par": "dim, 10, dak"}]
    },
    "hdr": {"v": "0.2.7", "N": 2, "Id": "KBZD348940"}
  }
]
The logger can send more than one element in the same array; in this example it is just one.
I started coding in Java with the Mongo driver, and the first problem I ran into was that I have to parse my (without doubt valid) JSON to be able to save it in MongoDB. I learned that this is because BSON is the native format of MongoDB. I would have liked to forward the JSON string to the db directly to save that extra processing time.
So what I do in a first Java test to save just this JSON string is:
String loggerMessage = "...the above JSON string...";
DBCollection coll = db.getCollection("logData");
DBObject message = (DBObject) JSON.parse(loggerMessage);
coll.insert(message);
The last line of this code causes the following exception:
Exception in thread "main" java.lang.IllegalArgumentException: BasicBSONList can only work with numeric keys, not: [_id]
at org.bson.types.BasicBSONList._getInt(BasicBSONList.java:161)
at org.bson.types.BasicBSONList._getInt(BasicBSONList.java:152)
at org.bson.types.BasicBSONList.get(BasicBSONList.java:104)
at com.mongodb.DBCollection.apply(DBCollection.java:767)
at com.mongodb.DBCollection.apply(DBCollection.java:756)
at com.mongodb.DBApiLayer$MyCollection.insert(DBApiLayer.java:220)
at com.mongodb.DBApiLayer$MyCollection.insert(DBApiLayer.java:204)
at com.mongodb.DBCollection.insert(DBCollection.java:76)
at com.mongodb.DBCollection.insert(DBCollection.java:60)
at com.mongodb.DBCollection.insert(DBCollection.java:105)
at mongomockup.MongoMockup.main(MongoMockup.java:65)
I tried to save this JSON via the mongo shell and it works perfectly.
How can I get this done in Java?
How could I maybe save the extra parsing?
What structure would you choose to save the data? Array of messages in the same document, collection of messages in single documents, ....
It didn't work because of the array. You need a BasicDBList to be able to save multiple messages. Here is my new solution that works perfectly:
BasicDBList data = (BasicDBList) JSON.parse(loggerMessage);
for (int i = 0; i < data.size(); i++) {
    coll.insert((DBObject) data.get(i));
}
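If you want to avoid the loop, the legacy driver also accepts a batch in a single call; something like this should work as well (untested sketch):

// Parse the array once; each element of the BasicDBList is a DBObject,
// so the whole batch can be handed to insert() in one call.
BasicDBList data = (BasicDBList) JSON.parse(loggerMessage);
coll.insert(data.toArray(new DBObject[0]));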