I'm trying to do an upsert using the MongoDB Java driver; here is the code:
BulkWriteOperation builder = coll.initializeUnorderedBulkOperation();
DBObject toDBObject;
for (T entity : entities) {
    toDBObject = morphia.toDBObject(entity);
    builder.find(toDBObject).upsert().replaceOne(toDBObject);
}
BulkWriteResult result = builder.execute();
where "entity" is a Morphia object. When I run the code the first time (there are no entities in the DB, so all of the queries should be inserts) it works fine and I see the entities in the database with a generated _id field. On the second run I change some fields and try to save the changed entities, and then I receive the following error from Mongo:
E11000 duplicate key error collection: statistics.counters index: _id_ dup key: { : ObjectId('56adfbf43d801b870e63be29') }
What did I forget to configure in my example?
I don't know the structure of dbObject, but a bulk upsert needs a valid query in order to work.
Let's say, for example, that you have a unique (_id) property called "id". A valid query would look like:
builder.find({id: toDBObject.id}).upsert().replaceOne(toDBObject);
This way, the engine can (a) find an object to update and then (b) update it (or insert if the object wasn't found). Of course, you need the Java syntax for find, but the same rule applies: make sure your .find will find something, then do an update.
I believe (just a guess) that the way it's written now will find "all" docs and try to update the first one ... but the behavior you are describing suggests it's finding "no doc" and attempting an insert.
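In Java driver terms, the fix described above might look like the following. This is an untested sketch: it assumes Morphia has already populated an "_id" on the converted document, and filters on that field alone instead of on the whole document, so changed entities are still matched.

```java
// Sketch, untested: match on _id only so a changed entity still finds its
// existing document, then replace the whole document (or insert it if absent).
for (T entity : entities) {
    DBObject toDBObject = morphia.toDBObject(entity);
    builder.find(new BasicDBObject("_id", toDBObject.get("_id")))
           .upsert()
           .replaceOne(toDBObject);
}
BulkWriteResult result = builder.execute();
```

Filtering on the full document is what caused the original error: once any field changed, the filter matched nothing, the upsert inserted a new document, and the replacement's existing _id collided with the document already in the collection.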
Feeling a bit stupid, but I have a simple architecture where the repositories are the only ones accessing ~Record classes and the services work on POJOs.
So the basic flow is:
repository fetches into POJO
service modifies POJO
repository receives POJO to update DB
repository matches updated POJO to record
repository stores (insert or update) the record
repository maps updated record (may have received generated values from insert) back to POJO
service receives updated POJO
i.e. something like
fun save(set: MySet): MySet {
    set.description = set.description ?: ""
    val record = ctx.newRecord(MY_SET, set).apply {
        store()
    }
    // "When store() performs an INSERT statement, jOOQ attempts to load any generated keys from the database back into the record."
    // cf. https://www.jooq.org/doc/latest/manual/sql-execution/crud-with-updatablerecords/simple-crud/
    return record.into(MySet::class.java)
}
This fails because, to quote the documentation for newRecord:
Create a new pre-filled Record that can be inserted into the corresponding table.
This performs roughly the inverse operation of Record.into(Class)
The resulting record will have its internal "changed" flags set to true for all values. This means that UpdatableRecord.store() will perform an INSERT statement. If you wish to store the record using an UPDATE statement, use executeUpdate(UpdatableRecord) instead.
I CAN, of course, check if I have an id, and then either fetch the record from the database or create a new one
fun save(set: MySet): MySet {
    set.description = set.description ?: ""
    val record = when (val setId = set.id) {
        null -> ctx.newRecord(MY_SET, set)
        else -> ctx.selectFrom(MY_SET).where(MY_SET.ID.eq(setId)).fetchSingle()
    }
    // TODO: update record manually from `set`
    record.store()
    // "When store() performs an INSERT statement, jOOQ attempts to load any generated keys from the database back into the record."
    // cf. https://www.jooq.org/doc/latest/manual/sql-execution/crud-with-updatablerecords/simple-crud/
    return record.into(MySet::class.java)
}
But that is kind of a lot of boilerplate code.
I DO have access to the MySetDao but that one just has insert and update, there's no store or upsert, as far as I can see.
Is there a way to turn a POJO into an UpdatableRecord directly or is this fetch-and-manual-update the way to go?
(Worth noting: the MySet POJO used here was generated by jOOQ.)
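One way to sidestep the fetch, sketched here in Java jOOQ (untested; assumes a generated `MySetRecord` type and a single primary key column `ID`): create the record from the POJO either way, then branch on whether an id is present and call insert() or update() explicitly instead of letting store() decide via the "changed" flags.

```java
// Sketch, untested. newRecord(MY_SET, pojo) marks all fields as changed,
// so store() would always INSERT; calling insert()/update() explicitly
// sidesteps that flag-based decision entirely.
MySetRecord record = ctx.newRecord(MY_SET, set);
if (set.getId() == null) {
    record.insert();   // jOOQ loads generated keys back into the record
} else {
    record.update();   // UPDATE ... WHERE ID = :id
}
return record.into(MySet.class);
```

The names `MySetRecord` and `getId()` are assumptions based on the generated classes mentioned in the question; adjust to whatever jOOQ's codegen actually produced.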
I wanted to do a direct update in MongoDB, setting someflag to either true or false for my use case. To be efficient I do not want to query all documents, set the someflag, and save them back to the DB. I just want to update it directly on the DB, like when doing an update in the MongoDB shell.
Here is the sample document. NOTE: there can be anywhere from 1 to N of these documents, so I need to handle large data sets efficiently.
{
    _id: 60db378d0abb980372f06fc1,
    someid: 23cad24fc5d0290f7d5274f5,
    somedata: "some data of mine",
    flag: false
}
Currently I'm using a @Query method on my repository:
@Query(value = "{someid: ?0}, {$set: {flag: false}}")
void updateFlag(String someid);
Using the above syntax it doesn't work; I always get the exception message below:
Failed to instantiate void using constructor NO_CONSTRUCTOR with
arguments
How do I perform a direct update efficiently, without querying all those documents and writing them back to the DB?
Use the BulkOperations class (https://docs.spring.io/spring-data/mongodb/docs/current/api/org/springframework/data/mongodb/core/BulkOperations.html)
Sample code:
Spring Data Mongodb Bulk Operation Example
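A hedged sketch of what that could look like (untested; `mongoTemplate`, the `MyDoc` mapped class, and the `someid` parameter are assumptions standing in for your actual setup). The update is executed entirely on the server, with no documents pulled into the application:

```java
// Sketch, untested; assumes an injected MongoTemplate and a MyDoc entity
// mapped to the collection. One unordered bulk round-trip updates every
// matching document directly in the database.
Query query = new Query(Criteria.where("someid").is(someid));
Update update = new Update().set("flag", false);

BulkOperations bulkOps = mongoTemplate.bulkOps(BulkOperations.BulkMode.UNORDERED, MyDoc.class);
bulkOps.updateMulti(query, update);
BulkWriteResult result = bulkOps.execute();
```

For a single filter/update pair like this one, `mongoTemplate.updateMulti(query, update, MyDoc.class)` would do the same thing without the bulk builder; BulkOperations pays off once you batch several different updates into one round-trip.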
I'm having a problem with MongoDB in Java when I try adding documents with a customized _id field. When I insert a new document into the collection, I want to ignore the document if its _id already exists.
In the Mongo shell, collection.save() can be used in this case, but I cannot find the equivalent method in the MongoDB Java driver.
Just to add an example:
I have a collection of documents containing websites' information
with the URLs as _id field (which is unique)
I want to add some more documents. In those new documents, some might be existing in the current collection. So I want to keep adding all the new documents except for the duplicate ones.
This can be achieved with collection.save() in the Mongo shell, but using the MongoDB Java driver I can't find the equivalent method.
Hopefully someone can share the solution. Thanks in advance!
In the MongoDB Java driver, you could try using the BulkWriteOperation object with the initializeUnorderedBulkOperation() method of the DBCollection object (the one that contains your collection). With an unordered bulk operation, a failed insert does not stop the remaining inserts. This is used as follows:
MongoClient mongo = new MongoClient("localhost", port_number);
DB db = mongo.getDB("db_name");
DBCollection col = db.getCollection("collection_name");
ArrayList<DBObject> objectList; // Fill this list with your objects to insert
BulkWriteOperation operation = col.initializeUnorderedBulkOperation();
for (int i = 0; i < objectList.size(); i++) {
    operation.insert(objectList.get(i));
}
BulkWriteResult result;
try {
    result = operation.execute();
} catch (BulkWriteException e) {
    result = e.getWriteResult(); // duplicates are reported here; the rest were inserted
}
With an unordered operation, each insert is attempted independently, so documents with a duplicated _id will raise a write error as usual, but the operation still continues with the rest of the documents. In the end, you can use the getInsertedCount() method of the BulkWriteResult object to know how many documents were really inserted.
This can prove to be a bit inefficient if lots of data is inserted this way, though. This is just sample code (found on journaldev.com and edited to fit your situation); you may need to adapt it to your current configuration, and it is untested.
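With the newer MongoCollection API, the same continue-past-duplicates behavior can be sketched (untested; the collection name "websites" and the `documents` list are placeholders) with an unordered insertMany:

```java
// Sketch, untested; modern-driver equivalent of the legacy bulk insert.
// ordered(false) means a duplicate-key error on one document does not
// prevent the remaining documents from being inserted.
MongoCollection<Document> collection =
        mongo.getDatabase("db_name").getCollection("websites");
try {
    collection.insertMany(documents, new InsertManyOptions().ordered(false));
} catch (MongoBulkWriteException e) {
    // Duplicates end up here as write errors; the non-duplicates were inserted.
    System.out.println("Inserted: " + e.getWriteResult().getInsertedCount());
}
```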
I guess save is doing something like this.
fun save(doc: Document, col: MongoCollection<Document>) {
    if (doc.getObjectId("_id") == null) {
        doc.put("_id", ObjectId()) // generate a new id if there isn't one yet
    }
    col.replaceOne(
        Document("_id", doc.getObjectId("_id")),
        doc,
        ReplaceOptions().upsert(true) // insert when no document matches
    )
}
Maybe they removed save so you can decide how to generate the new id.
I am using spring, hibernate and postgreSQL.
Let's say I have a table looking like this:
CREATE TABLE test
(
    id integer NOT NULL,
    name character(10),
    CONSTRAINT test_unique UNIQUE (id)
)
So whenever I insert a record, the id attribute must be unique.
I would like to know which is the better way to insert a new record (in my Spring Java app):
1) Check if a record with the given id exists and insert only if it doesn't, something like this:
if (testDao.find(id) == null) {
    Test test = new Test(id, name);
    testDao.create(test);
}
2) Call the create method directly and catch the DataAccessException if it is thrown:
Test test = new Test(id, name);
try {
    testDao.create(test);
} catch (DataAccessException e) {
    System.out.println("Error inserting record");
}
I consider the 1st way more appropriate, but it means more processing for the DB. What is your opinion?
Thank you in advance for any advice.
Option (1) is subject to a race condition: a concurrent session could create the record between your check and your insert. This window is longer than you might expect, because the record might already have been inserted by another transaction that has not yet committed.
Option (2) is better, but will result in a lot of noise in the PostgreSQL error logs.
The best way is to use PostgreSQL 9.5's INSERT ... ON CONFLICT ... support to do a reliable, race-condition-free insert-if-not-exists operation.
On older versions you can use a loop in plpgsql.
Both those options require use of native queries, of course.
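Such a native query might look like this sketch (using the test table defined above; the concrete values are placeholders):

```sql
-- Insert-if-not-exists without a race condition (PostgreSQL 9.5+):
-- if a row with this id already exists, the statement is a silent no-op.
INSERT INTO test (id, name)
VALUES (1, 'example')
ON CONFLICT (id) DO NOTHING;
```

You can also write `ON CONFLICT ON CONSTRAINT test_unique DO NOTHING` to name the constraint explicitly instead of the column.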
It depends on the source of your ID. If you generate it yourself you can be reasonably sure of uniqueness and rely on catching an exception in the rare collision case, e.g. http://docs.oracle.com/javase/1.5.0/docs/api/java/util/UUID.html
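For instance, a client-generated random UUID is unique for practical purposes, so the duplicate-key path becomes vanishingly rare and the exception-catching approach stays cheap:

```java
import java.util.UUID;

public class UuidId {
    public static void main(String[] args) {
        // A random (version 4) UUID; collisions are astronomically unlikely,
        // so inserts can simply proceed and exceptions stay truly exceptional.
        String id = UUID.randomUUID().toString();
        System.out.println(id.length()); // 36 characters: 32 hex digits + 4 hyphens
    }
}
```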
Another way would be to let Postgres generate the ID using the SERIAL data type:
http://www.postgresql.org/docs/8.1/interactive/datatype.html#DATATYPE-SERIAL
If you have to take the ID over from an untrusted source, do the prior check.
I am new to MongoDB and having trouble, as it behaves differently in different environments (Dev, QA and Production).
I am using findAndModify to update the records in my MongoDB.
There is a job that runs daily which updates/inserts data into MongoDB, and I am using findAndModify to update the record.
But what I observed is that the first record returned by findAndModify is different in the Dev, QA and Production environments, although the three environments have the same data.
As per the MongoDB documentation, findAndModify modifies the first matching document, but without a sort the order in which documents are matched is unspecified.
Currently this is my code :
BasicDBObject update = new BasicDBObject();
update.append("$set", new BasicDBObject(dataformed));
coll.findAndModify(query, update);
Please let me know how I can make sure that findAndModify updates the last updated record, rather than depending on unpredictable behaviour.
Edited Part
I am trying to use sort in my code but it gives me compilation errors:
coll.findAndModify(query, sort: { rating: 1 }, update);
I have a field called lastUpdated which is set using System.currentTimeMillis().
So can I use this lastUpdated field, as shown here, to get the last updated record?
coll.findAndModify(query,
    new BasicDBObject("lastUpdated", -1),
    update);
It appears you are using Java, so you have to construct the sort parameter as a DBObject, just like the other parameters. With DBCollection's three-argument overload findAndModify(query, sort, update), the sort document itself is passed directly:
coll.findAndModify(
    query,
    new BasicDBObject("rating", 1),
    update);
As we already explained in your other question, you have to add a field to the document containing the date it was changed and then sort by that field, or you have to use a capped collection, because capped collections guarantee that insertion order is preserved.