MongoDB - merge collection and map - can performance be improved - java

The function below merges a map's content into the word MongoDB collection, like this:
Collection:
cat 3,
dog 5
Map:
dog 2,
zebra 1
Collection after merge:
cat 3,
dog 7,
zebra 1
We have an empty collection and a map with about 14,000 elements.
An Oracle PL/SQL procedure using a single MERGE SQL statement, running on a 15k RPM HDD, does this in less than a second.
MongoDB on an SSD needs about 53 seconds.
It looks like Oracle prepares an in-memory image of the operation and saves the result in a single I/O operation.
MongoDB probably performs 14,000 I/Os, about 4 ms per upsert, which matches the performance of the SSD.
If I do just 14,000 plain inserts, without checking for document existence as the merge does, everything is also fast: less than a second.
My questions:
Can the code be improved?
Or is it perhaps necessary to change something in the MongoDB configuration?
Function code:
public void addBookInfo(String bookTitle, HashMap<String, Integer> bookInfo)
{
    // insert information into the book collection
    Document d = new Document();
    d.append("book_title", bookTitle);
    book.insertOne(d);

    // insert information into the word collection
    // prepare collections of word info and book_word info documents
    List<Document> wordInfoToInsert = new ArrayList<Document>();
    List<Document> book_wordInfoToInsert = new ArrayList<Document>();
    for (String key : bookInfo.keySet())
    {
        Document d1 = new Document();
        Document d2 = new Document();
        d1.append("word", key);
        d1.append("count", bookInfo.get(key));
        wordInfoToInsert.add(d1);
        d2.append("book_title", bookTitle);
        d2.append("word", key);
        d2.append("count", bookInfo.get(key));
        book_wordInfoToInsert.add(d2);
    }

    // this is the collection of insert/update DB operations
    List<WriteModel<Document>> updates = new ArrayList<WriteModel<Document>>();
    // iterator for the collection of words
    ListIterator<Document> listIterator = wordInfoToInsert.listIterator();
    // generate the list of insert/update operations
    while (listIterator.hasNext())
    {
        d = listIterator.next();
        String wordToUpdate = d.getString("word");
        int countToAdd = d.getInteger("count").intValue();
        updates.add(
            new UpdateOneModel<Document>(
                new Document("word", wordToUpdate),
                new Document("$inc", new Document("count", countToAdd)),
                new UpdateOptions().upsert(true)
            )
        );
    }

    // perform the bulk operation
    // this is slow
    BulkWriteResult bulkWriteResult = word.bulkWrite(updates);

    boolean acknowledge = bulkWriteResult.wasAcknowledged();
    if (acknowledge)
        System.out.println("Write acknowledged.");
    else
        System.out.println("Write was not acknowledged.");

    boolean countInfo = bulkWriteResult.isModifiedCountAvailable();
    if (countInfo)
        System.out.println("Change counters available.");
    else
        System.out.println("Change counters not available.");

    int inserted = bulkWriteResult.getInsertedCount();
    int modified = bulkWriteResult.getModifiedCount();
    System.out.println("inserted: " + inserted);
    System.out.println("modified: " + modified);

    // insert information into the book_word collection
    // this is very fast
    book_word.insertMany(book_wordInfoToInsert);
}
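Not part of the original code, but one server-side factor worth checking: each UpdateOneModel above filters on the word field, so without an index on word every one of the 14,000 upserts has to scan the collection. A minimal sketch of the one-time index creation, assuming word is the same MongoCollection<Document> used in the function above:

// one-time setup, e.g. at application start; `word` is the collection from addBookInfo()
// a unique index on "word" turns each upsert filter into an index lookup instead of a scan
word.createIndex(new Document("word", 1), new IndexOptions().unique(true));

(IndexOptions comes from com.mongodb.client.model; drop unique(true) if duplicate word values are possible.)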

Related

Improve performance of loading 100,000 records from database

We created a program to make using the database easier from other programs, so the code I'm showing is used by multiple other programs.
One of those programs receives about 10,000 records from one of our clients and has to check whether these are already in our database. If not, we insert them (they can also change and then have to be updated).
To make this easy, we load all the entries from the whole table (currently 120,000), create an object for every entry we get, and put all of them into a HashMap.
Loading the whole table this way takes around 5 minutes. We also sometimes have to restart the program because we run into a GC overhead error, since we work on limited hardware. Do you have any ideas on how we can improve the performance?
Here is the code that loads all entries (we have a global limit of 10,000 entries per query, so we use a loop):
public Map<String, IMasterDataSet> getAllInformationObjects(ISession session) throws MasterDataException {
    IQueryExpression qe;
    IQueryParameter qp;
    // our main SDP class
    Constructor<?> constructorForSDPbaseClass = getStandardConstructor();
    SimpleDateFormat itaTimestampFormat = new SimpleDateFormat("yyyyMMddHHmmssSSS");

    // search in standard time range (modification date!)
    Calendar cal = Calendar.getInstance();
    cal.set(2010, Calendar.JANUARY, 1);
    Date startDate = cal.getTime();
    Date endDate = new Date();
    Long startDateL = Long.parseLong(itaTimestampFormat.format(startDate));
    Long endDateL = Long.parseLong(itaTimestampFormat.format(endDate));
    IDescriptor modDesc = IBVRIDescriptor.ModificationDate.getDescriptor(session);

    // count once before to determine initial capacities for hash map/set
    IBVRIArchiveClass SDP_ARCHIVECLASS = getMasterDataPropertyBag().getSDP_ARCHIVECLASS();
    qe = SDP_ARCHIVECLASS.getQueryExpression(session);
    qp = session.getDocumentServer().getClassFactory()
            .getQueryParameterInstance(session, new String[] {SDP_ARCHIVECLASS.getDatabaseName(session)}, null, null);
    qp.setExpression(qe);
    qp.setHitLimitThreshold(0);
    qp.setHitLimit(0);
    int nrOfHitsTotal = session.getDocumentServer().queryCount(session, qp, "*");
    int initialCapacity = (int) (nrOfHitsTotal / 0.75 + 1);

    // MD sets; and objects already done (here: document ID)
    HashSet<String> objDone = new HashSet<>(initialCapacity);
    HashMap<String, IMasterDataSet> objRes = new HashMap<>(initialCapacity);
    qp.close();

    // do queries until hit count is smaller than 10,000
    // use modification date
    boolean keepGoing = true;
    while (keepGoing) {
        // construct query expression
        // - basic part: modification date & class type
        // a. doc. class type
        qe = SDP_ARCHIVECLASS.getQueryExpression(session);
        // b. ID
        qe = SearchUtil.appendQueryExpressionWithANDoperator(session, qe,
                new PlainExpression(modDesc.getQueryLiteral() + " BETWEEN " + startDateL + " AND " + endDateL));

        // 2. query parameter: set database; set expression
        qp = session.getDocumentServer().getClassFactory()
                .getQueryParameterInstance(session, new String[] {SDP_ARCHIVECLASS.getDatabaseName(session)}, null, null);
        qp.setExpression(qe);

        // order by modification date; hitlimit = 0 -> no hit limit, but the usual 10,000 max
        qp.setOrderByExpression(session.getDocumentServer().getClassFactory().getOrderByExpressionInstance(modDesc, true));
        qp.setHitLimitThreshold(0);
        qp.setHitLimit(0);
        // do not sort by modification date
        qp.setHints("+NoDefaultOrderBy");

        keepGoing = false;
        IInformationObject[] hits = null;
        IDocumentHitList hitList = null;
        hitList = session.getDocumentServer().query(qp, session);
        IDocument doc;
        if (hitList.getTotalHitCount() > 0) {
            hits = hitList.getInformationObjects();
            for (IInformationObject hit : hits) {
                String objID = hit.getID();
                if (!objDone.contains(objID)) {
                    // do something with this object and the class
                    // here: construct a new SDP sub class object and give it back via interface
                    doc = (IDocument) hit;
                    IMasterDataSet mdSet;
                    try {
                        mdSet = (IMasterDataSet) constructorForSDPbaseClass.newInstance(session, doc);
                    } catch (Exception e) {
                        // cause for this
                        String cause = (e.getCause() != null) ? e.getCause().toString() : MasterDataException.ERRMSG_PART_UNKNOWN;
                        throw new MasterDataException(MasterDataException.ERRMSG_NOINSTANCE_POSSIBLE, this.getClass().getSimpleName(), e.toString(), cause);
                    }
                    objRes.put(mdSet.getID(), mdSet);
                    objDone.add(objID);
                }
            }
            doc = (IDocument) hits[hits.length - 1];
            Date lastModDate = ((IDateValue) doc.getDescriptor(modDesc).getValues()[0]).getValue();
            startDateL = Long.parseLong(itaTimestampFormat.format(lastModDate));
            keepGoing = (hits.length >= 10000 || hitList.isResultSetTruncated());
        }
        qp.close();
    }
    return objRes;
}
Loading 120,000 rows (and more) each time will not scale very well, and your solution may stop working as the table grows. Instead, let the database server handle the problem.
Your table needs to have a primary key or unique key based on the columns of the records. Iterate through the 10,000 incoming records, performing a JDBC SQL UPDATE for each one that modifies all field values, with a WHERE clause that exactly matches the primary/unique key.
update BLAH set COL1 = ?, COL2 = ? where PKCOL = ?; // ... AND PKCOL2 =? ...
This modifies an existing row or does nothing at all, and JDBC executeUpdate() will return 0 or 1, indicating the number of rows changed. If the number of rows changed is zero, you have detected a new record that does not exist yet, so perform an INSERT for that record only.
insert into BLAH (COL1, COL2, ... PKCOL) values (?,?, ..., ?);
You can decide whether to run 10,000 updates followed by however many inserts are needed, or to do an update plus optional insert per record; and remember that JDBC batch statements and turning auto-commit off may help speed things up, as sketched below.
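A rough sketch of that update-then-optional-insert pattern with batching and auto-commit off; the table and column names (BLAH, COL1, COL2, PKCOL) are the placeholders from above, and the record representation is only illustrative:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

public class UpsertSketch {
    // each record here is just {col1, col2, pk}; adapt to your real record class
    public static void upsertAll(Connection con, List<String[]> records) throws SQLException {
        con.setAutoCommit(false);
        try (PreparedStatement update = con.prepareStatement(
                 "update BLAH set COL1 = ?, COL2 = ? where PKCOL = ?");
             PreparedStatement insert = con.prepareStatement(
                 "insert into BLAH (COL1, COL2, PKCOL) values (?, ?, ?)")) {
            for (String[] r : records) {
                update.setString(1, r[0]);
                update.setString(2, r[1]);
                update.setString(3, r[2]);
                // 0 rows changed means the key does not exist yet, so queue an insert
                if (update.executeUpdate() == 0) {
                    insert.setString(1, r[0]);
                    insert.setString(2, r[1]);
                    insert.setString(3, r[2]);
                    insert.addBatch();
                }
            }
            insert.executeBatch();
            con.commit();
        }
    }
}

This way only the 10,000 incoming records are touched, and the 120,000-row table never has to be loaded into memory.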

Deeplearning4j - how to iterate multiple DataSets for large data?

I'm studying Deeplearning4j (ver. 1.0.0-M1.1) for building neural networks.
I use IrisClassifier from Deeplearning4j as an example, it works fine:
//First: get the dataset using the record reader. CSVRecordReader handles loading/parsing
int numLinesToSkip = 0;
char delimiter = ',';
RecordReader recordReader = new CSVRecordReader(numLinesToSkip,delimiter);
recordReader.initialize(new FileSplit(new File(DownloaderUtility.IRISDATA.Download(),"iris.txt")));
//Second: the RecordReaderDataSetIterator handles conversion to DataSet objects, ready for use in neural network
int labelIndex = 4; //5 values in each row of the iris.txt CSV: 4 input features followed by an integer label (class) index. Labels are the 5th value (index 4) in each row
int numClasses = 3; //3 classes (types of iris flowers) in the iris data set. Classes have integer values 0, 1 or 2
int batchSize = 150; //Iris data set: 150 examples total. We are loading all of them into one DataSet (not recommended for large data sets)
DataSetIterator iterator = new RecordReaderDataSetIterator(recordReader,batchSize,labelIndex,numClasses);
DataSet allData = iterator.next();
allData.shuffle();
SplitTestAndTrain testAndTrain = allData.splitTestAndTrain(0.65); //Use 65% of data for training
DataSet trainingData = testAndTrain.getTrain();
DataSet testData = testAndTrain.getTest();
//We need to normalize our data. We'll use NormalizeStandardize (which gives us mean 0, unit variance):
DataNormalization normalizer = new NormalizerStandardize();
normalizer.fit(trainingData); //Collect the statistics (mean/stdev) from the training data. This does not modify the input data
normalizer.transform(trainingData); //Apply normalization to the training data
normalizer.transform(testData); //Apply normalization to the test data. This is using statistics calculated from the *training* set
final int numInputs = 4;
int outputNum = 3;
long seed = 6;
log.info("Build model....");
MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
    .seed(seed)
    .activation(Activation.TANH)
    .weightInit(WeightInit.XAVIER)
    .updater(new Sgd(0.1))
    .l2(1e-4)
    .list()
    .layer(new DenseLayer.Builder().nIn(numInputs).nOut(3)
        .build())
    .layer(new DenseLayer.Builder().nIn(3).nOut(3)
        .build())
    .layer(new OutputLayer.Builder(LossFunctions.LossFunction.NEGATIVELOGLIKELIHOOD)
        .activation(Activation.SOFTMAX) //Override the global TANH activation with softmax for this layer
        .nIn(3).nOut(outputNum).build())
    .build();
//run the model
MultiLayerNetwork model = new MultiLayerNetwork(conf);
model.init();
//record score once every 100 iterations
model.setListeners(new ScoreIterationListener(100));
for (int i = 0; i < 1000; i++) {
    model.fit(trainingData);
}
//evaluate the model on the test set
Evaluation eval = new Evaluation(3);
INDArray output = model.output(testData.getFeatures());
eval.eval(testData.getLabels(), output);
log.info(eval.stats());
For my project, I have about 30,000 input records (the iris example has 150).
Each record is a vector of size ~7,000 (in the iris example: 4).
Obviously, I can't process the whole data in one DataSet: it would produce an OOM in the JVM.
How can I process the data in multiple DataSets?
I assume it should be something like this (store the DataSets in a List and iterate):
...
DataSetIterator iterator = new RecordReaderDataSetIterator(recordReader,batchSize,labelIndex,numClasses);

List<DataSet> trainingData = new ArrayList<>();
List<DataSet> testData = new ArrayList<>();

while (iterator.hasNext()) {
    DataSet allData = iterator.next();
    allData.shuffle();
    SplitTestAndTrain testAndTrain = allData.splitTestAndTrain(0.65); //Use 65% of data for training
    trainingData.add(testAndTrain.getTrain());
    testData.add(testAndTrain.getTest());
}

//We need to normalize our data. We'll use NormalizeStandardize (which gives us mean 0, unit variance):
DataNormalization normalizer = new NormalizerStandardize();
for (DataSet dataSetTraining : trainingData) {
    normalizer.fit(dataSetTraining); //Collect the statistics (mean/stdev) from the training data. This does not modify the input data
    normalizer.transform(dataSetTraining); //Apply normalization to the training data
}
for (DataSet dataSetTest : testData) {
    normalizer.transform(dataSetTest); //Apply normalization to the test data. This is using statistics calculated from the *training* set
}
...
for (int i = 0; i < 1000; i++) {
    for (DataSet dataSetTraining : trainingData) {
        model.fit(dataSetTraining);
    }
}
But when I start the evaluation, I get this error:
Exception in thread "main" java.lang.NullPointerException: Cannot read field "javaShapeInformation" because "this.jvmShapeInfo" is null
at org.nd4j.linalg.api.ndarray.BaseNDArray.dataType(BaseNDArray.java:5507)
at org.nd4j.linalg.api.ndarray.BaseNDArray.validateNumericalArray(BaseNDArray.java:5575)
at org.nd4j.linalg.api.ndarray.BaseNDArray.add(BaseNDArray.java:3087)
at com.aarcapital.aarmlclassifier.classification.FAClassifierLearning.main(FAClassifierLearning.java:117)
...
Evaluation eval = new Evaluation(26);

INDArray output = new NDArray();
for (DataSet dataSetTest : testData) {
    output.add(model.output(dataSetTest.getFeatures())); // ERROR HERE
}
System.out.println("--- Output ---");
System.out.println(output);

INDArray labels = new NDArray();
for (DataSet dataSetTest : testData) {
    labels.add(dataSetTest.getLabels());
}
System.out.println("--- Labels ---");
System.out.println(labels);

eval.eval(labels, output);
log.info(eval.stats());
What is the correct way to iterate over multiple DataSets when training the network?
Thanks!
Firstly, always use Nd4j.create(..) for ndarrays; never instantiate an implementation class directly. That lets you safely create ndarrays that work whether you use CPUs or GPUs.
Second: always use the RecordReaderDataSetIterator's builder rather than the constructor. The constructor is very long and error-prone; that is why we made the builder in the first place.
Your NullPointerException actually isn't coming from where you think it is. It's due to how you're creating the ndarray: there's no data type or anything set, so it can't know what to expect. Nd4j.create(..) will properly set up the ndarray for you.
Beyond that, you are doing things the right way. The record reader handles the batching for you.
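To make those two points concrete, here is a hedged sketch (method names as found in recent DL4J versions; verify against 1.0.0-M1.1). The iterator is built via its builder, and the evaluation is fed one test batch at a time, which avoids constructing the combined NDArray that caused the NullPointerException, since Evaluation accumulates statistics across eval(...) calls:

// build the iterator with the builder instead of the long constructor
DataSetIterator iterator = new RecordReaderDataSetIterator.Builder(recordReader, batchSize)
        .classification(labelIndex, numClasses)
        .build();

// evaluate batch by batch; no need to concatenate outputs or labels by hand
Evaluation eval = new Evaluation(26);
for (DataSet dataSetTest : testData) {
    INDArray output = model.output(dataSetTest.getFeatures());
    eval.eval(dataSetTest.getLabels(), output);
}
log.info(eval.stats());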

Add weights to documents in Lucene 8

I am currently working on a small search engine for college using Lucene 8. I already built it before, but without applying any weights to documents.
I am now required to add the PageRanks of documents as a weight for each document, and I already computed the PageRank values. How can I add a weight to a Document object (not query terms) in Lucene 8? I looked up many solutions online, but they only work for older versions of Lucene. Example source
Here is my (updated) code that generates a Document object from a File object:
public static Document getDocument(File f) throws FileNotFoundException, IOException {
    Document d = new Document();
    //adding a field
    FieldType contentType = new FieldType();
    contentType.setStored(true);
    contentType.setTokenized(true);
    contentType.setIndexOptions(IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS);
    contentType.setStoreTermVectors(true);
    String fileContents = String.join(" ", Files.readAllLines(f.toPath(), StandardCharsets.UTF_8));
    d.add(new Field("content", fileContents, contentType));
    //adding other fields, then...
    //the boost coefficient (updated):
    double coef = 1.0 + ranks.get(path);
    d.add(new DoubleDocValuesField("boost", coef));
    return d;
}
The issue with my current approach is that I would need a CustomScoreQuery object to search the documents, but that class is no longer available in Lucene 8. Also, I don't want to downgrade to Lucene 7 now, after all the code I've written for Lucene 8.
Edit:
After some (lengthy) research, I added a DoubleDocValuesField to each document holding the boost (see updated code above), and used a FunctionScoreQuery for searching as advised by #EricLavault. However, now all my documents have a score of exactly their boost, regardless of the query! How do I fix that? Here is my searching function:
public static TopDocs search(String query, IndexSearcher searcher, String outputFile) {
    try {
        Query q_temp = buildQuery(query); //the original query, was working fine alone
        Query q = new FunctionScoreQuery(q_temp, DoubleValuesSource.fromDoubleField("boost")); //the new query
        q = q.rewrite(DirectoryReader.open(bm25IndexDir));
        TopDocs results = searcher.search(q, 10);
        ScoreDoc[] filterScoreDosArray = results.scoreDocs;
        for (int i = 0; i < filterScoreDosArray.length; ++i) {
            int docId = filterScoreDosArray[i].doc;
            Document d = searcher.doc(docId);
            //here, when printing, I see that the document's score is the same as its "boost" value. WHY??
            System.out.println((i + 1) + ". " + d.get("path") + " Score: " + filterScoreDosArray[i].score);
        }
        return results;
    }
    catch (Exception e) {
        e.printStackTrace();
        return null;
    }
}

//function that builds the query, working fine
public static Query buildQuery(String query) {
    try {
        PhraseQuery.Builder builder = new PhraseQuery.Builder();
        TokenStream tokenStream = new EnglishAnalyzer().tokenStream("content", query);
        tokenStream.reset();
        while (tokenStream.incrementToken()) {
            CharTermAttribute charTermAttribute = tokenStream.getAttribute(CharTermAttribute.class);
            builder.add(new Term("content", charTermAttribute.toString()));
        }
        tokenStream.end();
        tokenStream.close();
        builder.setSlop(1000);
        PhraseQuery q = builder.build();
        return q;
    }
    catch (Exception e) {
        e.printStackTrace();
        return null;
    }
}
Starting from Lucene 6.5.0:
Index-time boosts are deprecated. As a replacement, index-time scoring factors should be indexed into a doc value field and combined at query time using e.g. FunctionScoreQuery. (Adrien Grand)
The recommendation, rather than using index-time boosts, is to encode scoring factors (e.g. length normalization factors) into doc values fields. (cf. LUCENE-6819)
Regarding my edited problem (boost value completely replacing search score instead of boosting it), here is what the documentation says about FunctionScoreQuery (emphasis mine):
A query that wraps another query, and uses a DoubleValuesSource to replace or modify the wrapped query's score.
So, when does it replace, and when does it modify?
It turns out the code I was using replaces the score entirely with the boost value:
Query q = new FunctionScoreQuery(q_temp, DoubleValuesSource.fromDoubleField("boost")); //the new query
What I needed to do instead was to use the function boostByValue, which modifies the search score (by multiplying it by the boost value):
Query q = FunctionScoreQuery.boostByValue(q_temp, DoubleValuesSource.fromDoubleField("boost"));
And now it works! Thanks #EricLavault for the help!

Why do the MongoDB Java 3.0 driver, which uses a bulk upsert write, and the C++ 2.6 driver, which does individual upserts, have the same performance?

The code below shows a bulk write using the Java 3.0 driver (14,000 upsert operations submitted in one bulk operation) and the C++ 2.6 driver doing 14,000 individual upserts. The execution time of the Java and C++ code is the same: about 53 seconds. Why doesn't the bulk write have a performance benefit?
// Java driver with bulk write operation (14,000 upsert operations prepared and executed in one bulk operation):
public void addBookInfo(String bookTitle, HashMap<String, Integer> bookInfo)
{
    // insert information into the book collection
    Document d = new Document();
    d.append("book_title", bookTitle);
    book.insertOne(d);

    // insert information into the word collection
    // prepare collections of word info and book_word info documents
    List<Document> wordInfoToInsert = new ArrayList<Document>();
    List<Document> book_wordInfoToInsert = new ArrayList<Document>();
    for (String key : bookInfo.keySet())
    {
        Document d1 = new Document();
        Document d2 = new Document();
        d1.append("word", key);
        d1.append("count", bookInfo.get(key));
        wordInfoToInsert.add(d1);
        d2.append("book_title", bookTitle);
        d2.append("word", key);
        d2.append("count", bookInfo.get(key));
        book_wordInfoToInsert.add(d2);
    }

    // this is the collection of insert/update DB operations
    List<WriteModel<Document>> updates = new ArrayList<WriteModel<Document>>();
    // iterator for the collection of words
    ListIterator<Document> listIterator = wordInfoToInsert.listIterator();
    // generate the list of insert/update operations
    while (listIterator.hasNext())
    {
        d = listIterator.next();
        String wordToUpdate = d.getString("word");
        int countToAdd = d.getInteger("count").intValue();
        updates.add(
            new UpdateOneModel<Document>(
                new Document("word", wordToUpdate),
                new Document("$inc", new Document("count", countToAdd)),
                new UpdateOptions().upsert(true)
            )
        );
    }

    // perform the bulk operation
    // this is slow
    BulkWriteResult bulkWriteResult = word.bulkWrite(updates);
// C++ 2.6 driver - 14,000 individual upserts executed one by one.
void DAO::addBookInfo(std::string bookTitle, std::map<std::string, int> bookInfo)
{
    // insert information into the book collection
    BSONObjBuilder b;
    b.append("book_title", bookTitle);
    BSONObj d = b.obj();
    connection.insert(book_collection, d);

    // insert information into the word collection
    // prepare collections of word info and book_word info documents
    vector<BSONObj> wordInfoToInsert;
    wordInfoToInsert.clear();
    vector<BSONObj> book_wordInfoToInsert;
    book_wordInfoToInsert.clear();

    std::map<std::string, int>::iterator itr;
    itr = bookInfo.begin();
    while (itr != bookInfo.end())
    {
        std::pair<string, int> wordInfo = *itr;
        BSONObjBuilder b1;
        b1.append("word", wordInfo.first);
        b1.append("count", wordInfo.second);
        BSONObj d1 = b1.obj();
        wordInfoToInsert.push_back(d1);
        BSONObjBuilder b2;
        b2.append("book_title", bookTitle);
        b2.append("word", wordInfo.first);
        b2.append("count", wordInfo.second);
        BSONObj d2 = b2.obj();
        book_wordInfoToInsert.push_back(d2);
        itr++;
    }

    cout << "wordInfoToInsert " << wordInfoToInsert.size() << endl;
    cout << "book_wordInfoToInsert " << book_wordInfoToInsert.size() << endl;

    // prepare Query document;
    vector<BSONObj>::iterator witr;
    // upsert info to the word collection
    witr = wordInfoToInsert.begin();
    while (witr != wordInfoToInsert.end())
    {
        BSONObj widoc = *witr;
        BSONObjBuilder bqwi;
        bqwi.append("word", widoc.getStringField("word"));
        BSONObj qwi = bqwi.obj();
        Query q(qwi);
        BSONObjBuilder buwidoc;
        buwidoc.append("$inc", BSON("count" << widoc.getIntField("count")));
        BSONObj uwidoc = buwidoc.obj();
        connection.update(word_collection, q, uwidoc, true);
        witr++;
    }
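Two things worth ruling out here (not from the original post, just a hedged note): whether the word filter field is indexed - without an index, both drivers spend their time on a collection scan per upsert, which would mask any bulk-write benefit - and whether the bulk is ordered. The Java driver's bulkWrite(updates) defaults to an ordered bulk, which the server applies serially; an unordered bulk may be applied in parallel. The unordered variant would look roughly like this:

// same `updates` list as in the Java code above;
// BulkWriteOptions is in com.mongodb.client.model
BulkWriteResult bulkWriteResult =
        word.bulkWrite(updates, new BulkWriteOptions().ordered(false));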

Tracing operations in MongoDB bulk operations

I am using MongoDB 2.6.1. My question is: is it possible to keep track of _ids in bulk operations?
Suppose I have created one BulkWriteOperation object, for example to insert 50 documents from collection 'A' into collection 'B'. I need to keep a list of the successful write operations and also of the failed ones.
Bulk inserts and deletes are working fine. But the task is this:
find the documents in collection A and insert them into collection B, keeping a list of _ids for both the successful and the failed operations; then delete from collection A only the documents whose operations succeeded, and keep the failed ones as they are.
Please help me out.
Thank you :) :)
First, you'll need to use an unordered bulk operation so that the entire batch gets executed. You will also need a try/catch around your BulkWriteOperation.execute(), catching BulkWriteException, which gives you access to the list of BulkWriteError objects as well as the BulkWriteResult.
Here's a quick and dirty example:
MongoClient m = new MongoClient("localhost");
DB db = m.getDB("test");
DBCollection coll = db.getCollection("bulk");
coll.drop();
coll.createIndex(new BasicDBObject("i", 1), new BasicDBObject("unique", true));

BulkWriteOperation bulkWrite = coll.initializeUnorderedBulkOperation();
for (int i = 0; i < 100; i++) {
    bulkWrite.insert(new BasicDBObject("i", i));
}
// Now add 10 documents to the batch that will generate a unique index error
for (int i = 0; i < 10; i++) {
    bulkWrite.insert(new BasicDBObject("i", i));
}

BulkWriteResult result = null;
List<BulkWriteError> errors = null;
try {
    result = bulkWrite.execute();
} catch (BulkWriteException bwe) {
    bwe.printStackTrace();
    errors = bwe.getWriteErrors();
    result = bwe.getWriteResult();
}
// errors is only populated if execute() threw
if (errors != null) {
    for (BulkWriteError e : errors) {
        System.out.println(e.getIndex() + " failed");
    }
}
System.out.println(result);
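To get from failed operations back to _ids, which is what the question actually asks for, one approach (a sketch in the same 2.6-era API, not taken from the answer above; collA, collB and docsFromA are placeholders) is to record each document's _id in insertion order before adding it to the bulk. BulkWriteError.getIndex() then maps a failure back to its _id, and every _id not in the failed set can be removed from collection A:

// docsFromA: the documents read from collection A (they already carry an _id)
List<Object> ids = new ArrayList<Object>();
BulkWriteOperation bulk = collB.initializeUnorderedBulkOperation();
for (DBObject src : docsFromA) {
    ids.add(src.get("_id"));   // position in this list == operation index in the bulk
    bulk.insert(src);
}

Set<Object> failed = new HashSet<Object>();
try {
    bulk.execute();
} catch (BulkWriteException bwe) {
    for (BulkWriteError e : bwe.getWriteErrors()) {
        failed.add(ids.get(e.getIndex()));   // map the failed operation back to its _id
    }
}

// delete from A only the documents that made it into B; failed ones stay in A
for (Object id : ids) {
    if (!failed.contains(id)) {
        collA.remove(new BasicDBObject("_id", id));
    }
}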
