I am using the OrientDB document model. My code to save a document:
private ODocument saveDocument(ODocument document) {
    ODatabaseRecordThreadLocal.INSTANCE.set(database);
    return document.save();
}
We create classes for some types, and some document classes are created at runtime and are therefore schemaless.
The save code works fine when the ODocument is of a class that has been defined in the schema. For example, we have a Status class:
schema.createClass("Status");
So if I do
document = new ODocument("Status");
saveDocument(document);
then the above code works fine.
But if I do
document = new ODocument("RawData");
saveDocument(document);
then I get an OSchemaException:
Record saved into cluster collectionfile should be saved with class CollectionFile but saved with class RawData
CollectionFile is some other class that I have in my database. My question is: why is OrientDB trying to save the RawData document into some other cluster?
P.S.: This code was working fine a day ago, when I had a single DB in my application. Then I changed to a multi-DB approach where I have two DB instances in my application.
Thanks for the help.
In case of multiple DBs you should set the current one you want to use, with:
ODatabaseRecordThreadLocal.INSTANCE.set( database2 );
Look at: http://www.orientechnologies.com/docs/last/Java-Multi-Threading.html
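For example, the save helper from the question could take the target database explicitly, so the right instance is bound to the thread before every save. This is only a minimal sketch; the two already-opened ODatabaseDocumentTx instances (database1, database2) are assumed names, not from the original post:
private ODocument saveDocument(ODocument document, ODatabaseDocumentTx targetDb) {
    // Bind the intended database to the current thread before any record operation,
    // so the document is saved into that database's clusters.
    ODatabaseRecordThreadLocal.INSTANCE.set(targetDb);
    return document.save();
}
// e.g. saveDocument(new ODocument("Status"), database1);
//      saveDocument(new ODocument("RawData"), database2);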
I have a Spring application that runs a cron job. Every few minutes the cron fetches new data from an external API. The data should be stored in a database (MySQL) in place of the old data (the old data should be overwritten by the new data). The data needs to be overwritten rather than updated. The application itself provides a REST API so the client is able to get the data from the database. There should never be a situation where the client sees empty or partial data because an update is in progress.
Currently I've tried deleting all the old data and then inserting the new data, but there is a window in which a client gets only part of the data. I've tried it via the Spring Data deleteAll and saveAll methods.
@Override
@Transactional
public List<Country> overrideAll(@NonNull Iterable<Country> countries) {
    removeAllAndFlush();
    List<CountryEntity> countriesToCreate = stream(countries.spliterator(), false)
            .map(CountryEntity::from)
            .collect(toList());
    List<CountryEntity> createdCountries = repository.saveAll(countriesToCreate);
    return createdCountries.stream()
            .map(CountryEntity::toCountry)
            .collect(toList());
}

private void removeAllAndFlush() {
    repository.deleteAll();
    repository.flush();
}
I also thought about having a temporary table that receives the new data and, once the data is complete, just replacing the main table with the temporary one. Is that a good idea? Any other ideas?
It's a good idea. You can minimize the downtime by loading into another table until it's ready and then switching tables quickly by renaming. This also improves perceived performance for users, because no records need to be locked as happens when using UPDATE/DELETE.
In MySQL, you can use RENAME TABLE if you don't have triggers on the table. It allows renaming multiple tables at once and it works atomically (i.e. like a transaction - if any error happens, no change is made). You can use the following, for example:
RENAME TABLE countries TO countries_old, countries_new TO countries;
DROP TABLE countries_old;
Refer here for more details
https://dev.mysql.com/doc/refman/5.7/en/rename-table.html
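On the Spring side, the swap could be wired up roughly like this. This is only a sketch: the injected JdbcTemplate, the countries_new staging table, and the column names are assumptions, not part of the original code:
public void overrideAllViaStagingTable(List<CountryEntity> countries) {
    jdbcTemplate.update("DELETE FROM countries_new"); // start from an empty staging table
    for (CountryEntity c : countries) {
        jdbcTemplate.update("INSERT INTO countries_new (code, name) VALUES (?, ?)",
                c.getCode(), c.getName());
    }
    // Atomic swap: readers keep seeing the old table until the rename completes.
    jdbcTemplate.execute("RENAME TABLE countries TO countries_old, countries_new TO countries");
    jdbcTemplate.execute("DROP TABLE countries_old");
    jdbcTemplate.execute("CREATE TABLE countries_new LIKE countries"); // ready for the next refresh
}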
I'm trying to save a Dataset to a Cassandra DB using Spark with Java.
I'm able to read data into a Dataset successfully using the code below:
Dataset<Row> readdf = sparkSession.read().format("org.apache.spark.sql.cassandra")
        .option("keyspace", "dbname")
        .option("table", "tablename")
        .load();
But when I try to write the Dataset, I get an IOException: Could not load or find table, found similar tables in keyspace
readdf.write().format("org.apache.spark.sql.cassandra")
        .option("keyspace", "dbname")
        .option("table", "tablename")
        .save();
I'm setting the host and port in the SparkSession.
The thing is, I'm able to write in overwrite and append modes, but not able to create a table.
The versions I'm using are:
spark java 2.0
spark cassandra connector 2.3
I tried different jar versions but nothing worked.
I have also gone through various Stack Overflow and GitHub links.
Any help is greatly appreciated.
The write operation in Spark doesn't have a mode that automatically creates a table for you - there are multiple reasons for that. One of them is that you need to define a primary key for your table; otherwise you may simply overwrite data if you set an incorrect primary key. Because of this, the Spark Cassandra Connector provides a separate method to create a table based on your DataFrame structure, but you need to provide the lists of partition and clustering key columns. In Java it will look as follows (full code is here):
DataFrameFunctions dfFunctions = new DataFrameFunctions(dataset);
Option<Seq<String>> partitionSeqlist = new Some<>(JavaConversions.asScalaBuffer(
Arrays.asList("part")).seq());
Option<Seq<String>> clusteringSeqlist = new Some<>(JavaConversions.asScalaBuffer(
Arrays.asList("clust", "col2")).seq());
CassandraConnector connector = new CassandraConnector(
CassandraConnectorConf.apply(spark.sparkContext().getConf()));
dfFunctions.createCassandraTable("test", "widerows6",
partitionSeqlist, clusteringSeqlist, connector);
and then you can write data as usual:
dataset.write()
.format("org.apache.spark.sql.cassandra")
.options(ImmutableMap.of("table", "widerows6", "keyspace", "test"))
.save();
I'm having a problem with MongoDB in Java when adding documents with a customized _id field. When I insert a new document into that collection, I want the document to be ignored if its _id already exists.
In the Mongo shell, collection.save() can be used in this case, but I cannot find the equivalent method in the MongoDB Java driver.
Just to add an example:
I have a collection of documents containing websites' information, with the URL as the _id field (which is unique).
I want to add some more documents. Some of those new documents might already exist in the current collection, so I want to keep adding all the new documents except for the duplicate ones.
This can be achieved with collection.save() in the Mongo shell, but with the MongoDB Java driver I can't find the equivalent method.
Hopefully someone can share the solution. Thanks in advance!
In the MongoDB Java driver, you could try using a BulkWriteOperation obtained from the initializeUnorderedBulkOperation() method of the DBCollection object (the one for your collection); an unordered bulk write continues past individual errors, which is what lets the duplicates be skipped while the rest are inserted. It is used as follows:
MongoClient mongo = new MongoClient("localhost", port_number);
DB db = mongo.getDB("db_name");
DBCollection col = db.getCollection("collection_name");
List<DBObject> objectList = new ArrayList<>(); // fill this list with the objects to insert
BulkWriteOperation operation = col.initializeUnorderedBulkOperation();
for (int i = 0; i < objectList.size(); i++) {
    operation.insert(objectList.get(i));
}
BulkWriteResult result = operation.execute();
With an unordered bulk write, each insert is handled individually, so a document with a duplicated _id produces a write error as usual, but the operation still continues with the rest of the documents. In the end, you can use the getInsertedCount() method of the BulkWriteResult object to know how many documents were really inserted.
This can prove to be a bit inefficient if lots of data is inserted this way, though. This is just sample code (found on journaldev.com and edited to fit your situation); you may need to edit it so it fits your configuration, and it is untested.
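Note that when some documents are duplicates, execute() throws a BulkWriteException instead of returning normally, so the counts have to be read from the exception. A small sketch of that, replacing the last line of the snippet above:
try {
    BulkWriteResult result = operation.execute();
    System.out.println("Inserted: " + result.getInsertedCount());
} catch (BulkWriteException e) {
    // The duplicate-key failures are listed here; the remaining documents were still inserted.
    System.out.println("Inserted: " + e.getWriteResult().getInsertedCount());
    System.out.println("Duplicates skipped: " + e.getWriteErrors().size());
}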
I guess save does something like this (in Kotlin):
fun save(doc: Document, col: MongoCollection<Document>) {
    if (doc.getObjectId("_id") == null) {
        doc.put("_id", ObjectId()) // generate a new id only when the document has none
    }
    // upsert: insert when no document with this _id exists, replace it otherwise
    col.replaceOne(Document("_id", doc.getObjectId("_id")), doc, ReplaceOptions().upsert(true))
}
Maybe they removed save so that you decide yourself how to generate the new id.
I'm building a logging application that does the following:
- gets JSON strings from many loggers continuously and saves them to a db
- serves the collected data as a per-logger bulk
My intention is to use a document-based NoSQL store to have the bulk structure right away. After some research I decided to go with MongoDB because of the following features:
- comprehensive functions to insert data into existing structures ($push, (capped) collections)
- automatic sharding with a key I choose (so I can shard on a per-logger basis and therefore serve bulk data in no time - all data already being on the same db server)
The JSON I get from the loggers looks like this:
[
{"bdy":{
"cat":{"id":"36494h89h","toc":55,"boc":99},
"dataT":"2013-08-12T13:44:03Z","had":0,
"rng":23,"Iss":[{"id":10,"par":"dim, 10, dak"}]
},"hdr":{
"v":"0.2.7","N":2,"Id":"KBZD348940"}
}
]
The logger can send more than one element in the same array. In this example it is just one.
I started coding in Java with the Mongo driver, and the first problem I discovered was that I have to parse my (without doubt valid) JSON to be able to save it in MongoDB. I learned that this is because BSON is the native format of MongoDB. I would have liked to forward the JSON string to the db directly to save that extra execution time.
So what I do in a first Java test to save just this JSON string is:
String loggerMessage = "...the above JSON string...";
DBCollection coll = db.getCollection("logData");
DBObject message = (DBObject) JSON.parse(loggerMessage);
coll.insert(message);
The last line of this code causes the following exception:
Exception in thread "main" java.lang.IllegalArgumentException: BasicBSONList can only work with numeric keys, not: [_id]
at org.bson.types.BasicBSONList._getInt(BasicBSONList.java:161)
at org.bson.types.BasicBSONList._getInt(BasicBSONList.java:152)
at org.bson.types.BasicBSONList.get(BasicBSONList.java:104)
at com.mongodb.DBCollection.apply(DBCollection.java:767)
at com.mongodb.DBCollection.apply(DBCollection.java:756)
at com.mongodb.DBApiLayer$MyCollection.insert(DBApiLayer.java:220)
at com.mongodb.DBApiLayer$MyCollection.insert(DBApiLayer.java:204)
at com.mongodb.DBCollection.insert(DBCollection.java:76)
at com.mongodb.DBCollection.insert(DBCollection.java:60)
at com.mongodb.DBCollection.insert(DBCollection.java:105)
at mongomockup.MongoMockup.main(MongoMockup.java:65)
I tried to save this JSON via the mongo shell and it works perfectly.
How can I get this done in Java?
How could I maybe avoid the extra parsing?
What structure would you choose to save the data? An array of messages in the same document, a collection of messages in single documents, ...?
It didn't work because of the array. You need a BasicDBList to be able to save multiple messages. Here is my new solution that works perfectly:
BasicDBList data = (BasicDBList) JSON.parse(loggerMessage);
for (int i = 0; i < data.size(); i++) {
    coll.insert((DBObject) data.get(i));
}
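If the arrays get large, the parsed elements could also be collected and inserted in a single call, something along these lines (a sketch, not part of the original answer; it assumes the same coll and data variables as above):
List<DBObject> docs = new ArrayList<>();
for (Object element : data) {
    docs.add((DBObject) element);
}
coll.insert(docs); // one round trip instead of one insert per element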
I am new to Lotus. I need to get some info from a Lotus database using Java. I have the database:
Session session = NotesFactory.createSession(host, user, pwd);
Database database = session.getDatabase(server, dbPath); // dbPath = path to the .nsf file
I have that info:
field - fldContractorCode;
form - form="formAgreement";
For example, the field value is "abcde".
So how can I get info from that database? Do I need to use a search formula? Or what methods do I need to use? Thanks for the help.
UPD
Now I am using this approach:
DocumentCollection collection = DATABASE.search("form=\"formAgreement\"");
Document doc = collection.getFirstDocument();
while(doc != null) {
doc.getItemValueString("fldContractorCode");
doc = collection.getNextDocument();
}
It works fine for me, but I think this approach is not very convenient, because to find some document with, for example, field="abcd" I need to iterate over the collection every time...
That is why I am asking for some way to find a document by a field value. I also don't understand what a VIEW is in the database and where to get this VIEW name.
In your existing code, you can just change one line:
DocumentCollection collection = DATABASE.search("form=\"formAgreement\ & "fldContractorCode=\"abcd\"");
However, this will be slow if the database contains many documents. For best performance, you should consider using Domino Designer to add a new view to your database and using the getDocumentByKey() method suggested in the other answers. If that is not an option, Simon's suggestion of using the FTSearch() method is faster than the Search() method, but only if a full text index exists for the database. It also has a slightly different syntax for the search string.
There are a number of ways to get the document.
1. Search for the document from a view, where the first column of the view contains a sorted value of the fldContractorCode.
For example:
String key = "abide";
View view = db.getView("viewName");
Document doc = view.getDocumentByKey(key, true);
2. You can use the Database FTSearch method to do a full text search to find the document. You will need the database to have a full text index created (a short sketch follows this list).
3. If you know the UNID or notes ID of the document you can use getDocumentByUNID() or getDocumentByID().
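For option 2, a rough sketch might look like the following (the full-text query string and the field value are illustrative assumptions; the database must be full-text indexed for FTSearch to work):
DocumentCollection results = database.FTSearch("[fldContractorCode] = \"abcde\"");
Document doc = results.getFirstDocument();
while (doc != null) {
    System.out.println(doc.getItemValueString("fldContractorCode"));
    doc = results.getNextDocument(doc);
}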
Your question is quite broad, so I recommend reading the Infocenter as it details sample code for each method.
http://publib.boulder.ibm.com/infocenter/domhelp/v8r0/topic/com.ibm.designer.domino.main.doc/H_NOTESDATABASE_CLASS_JAVA.html
You will have to drill down to the DOCUMENT (not Form) you want to retrieve the field from.
Lotus Notes has a very easy-to-understand hierarchical way to get where you want. You will need to instantiate objects in this sequence:
Session
Database
View
Document
Let's say you have a view called $(sysAgreements) that lists all "formAgreement" documents.
Its selection formula would be something like this:
SELECT Form="formAgreement"
To get to the document or documents you want you will do something like this:
Session session = NotesFactory.createSession(host, user, pwd);
Database database = session.getDatabase(server, dbPath); // dbPath = path to the .nsf file
View view = database.getView("$(sysAgreements)");
Document doc = view.getDocumentByKey(VIEW_KEY);
String fieldContent = doc.getItemValueString("fldContractorCode");
There are several ways to retrieve info from a Notes database; this is one of them. Bear in mind that the key used by Notes to search a view with getDocumentByKey is the first sorted column.
If you want to get multiple documents you can use:
DocumentCollection docCol = view.getAllDocumentsByKey(VIEW_KEY);
and then iterate over it.
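For example, the iteration could look like this (a small sketch; the view and field names are taken from the snippets above):
DocumentCollection docCol = view.getAllDocumentsByKey(VIEW_KEY);
Document d = docCol.getFirstDocument();
while (d != null) {
    System.out.println(d.getItemValueString("fldContractorCode"));
    d = docCol.getNextDocument(d);
}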
Avoid doing an FTSearch because it's slow and a bit painful for Notes. Prefer looking documents up in views.
Another powerful source of help is the Notes help. Get the help database from a computer that has the Notes development client installed, but pay attention to which help you pick: there are three help databases in Notes (client, development, and administration). Development is the one you want.