I have Mongo set up and running, but I have a problem: I cannot see any of the data my program creates when I use the mongo client in the command prompt.
I am using the Mongo 3.2.1 Java driver and Android Studio.
The connection (address and port) works, and the code below creates the database and collection. But when I save the data it is not saved (sometimes it works and sometimes it doesn't), and I don't know why.
MongoClient mongoClient = new MongoClient("xxx.xxx.x.x", xxxxx);
MongoDatabase database = mongoClient.getDatabase("testdb");
try {
    database.createCollection("cars");
} catch (MongoCommandException e) {
    // the collection already exists, so drop it and start fresh
    database.getCollection("cars").drop();
}
List<Document> writes = new ArrayList<>();
MongoCollection<Document> carsCol = database.getCollection("cars");
Document d1 = new Document("_id", 1);
d1.append("name", "Audi");
d1.append("price", 52642);
writes.add(d1);
Document d2 = new Document("_id", 2);
d2.append("name", "Mercedes");
d2.append("price", 57127);
writes.add(d2);
Document d3 = new Document("_id", 3);
d3.append("name", "Skoda");
d3.append("price", 9000);
writes.add(d3);
Document d4 = new Document("_id", 4);
d4.append("name", "Volvo");
d4.append("price", 29000);
writes.add(d4);
Document d5 = new Document("_id", 5);
d5.append("name", "Bentley");
d5.append("price", 350000);
writes.add(d5);
Document d6 = new Document("_id", 6);
d6.append("name", "Citroen");
d6.append("price", 21000);
writes.add(d6);
Document d7 = new Document("_id", 7);
d7.append("name", "Hummer");
d7.append("price", 41400);
writes.add(d7);
Document d8 = new Document("_id", 8);
d8.append("name", "Volkswagen");
d8.append("price", 21600);
writes.add(d8);
carsCol.insertMany(writes);
MongoIterable<String> dbs = mongoClient.listDatabaseNames();
System.out.println("The following are your databases!");
for (String checkDBS : dbs) {
    System.out.println(checkDBS);
}
mongoClient.close();
After running the above I check the data by querying from the mongo shell, but it doesn't seem to do anything. Sometimes the data shows up, and at other times the commands return nothing.
UPDATED ===============================================================
It does actually store the data, but I still can't query it from cmd. I tried querying with Java and it worked:
// 'iterable' is the result of a find on the collection, e.g.:
// FindIterable<Document> iterable = carsCol.find();
iterable.forEach(new Block<Document>() {
    @Override
    public void apply(final Document document) {
        System.out.println(document);
    }
});
I don't know why querying from cmd does not work.
In your Mongo shell you are using a different DB than your program does.
You are trying to list all records in the cars collection, but that collection lives in the testdb database.
The proper sequence is:
use testdb
db.cars.find().pretty()
and you'll see your records.
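If you want to double-check from Java which collections ended up in testdb, here is a small sketch using the same 3.2 API as the question (assuming the database handle from the code above):
// list the collections inside testdb to confirm where 'cars' lives
for (String name : database.listCollectionNames()) {
    System.out.println(name);
}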
Related
I want to generate test data for MongoDB. The size should be 200 MB. I tried this code:
@Test
public void testMongoDBTestDataGenerate()
{
    MongoClient mongoClient = new MongoClient("localhost", 27017);
    DB db = mongoClient.getDB("development");
    DBCollection collection = db.getCollection("ssv");
    for (int i = 0; i < 100; i++)
    {
        BasicDBObject document = new BasicDBObject();
        document.put("database", "test");
        document.put("table", "hosting");
        BasicDBObject documentDetail = new BasicDBObject();
        documentDetail.put("records", 99);
        documentDetail.put("index", "vps_index1");
        documentDetail.put("active", "true");
        document.put("detail", documentDetail);
        collection.insert(document);
    }
    mongoClient.close();
}
How can I generate data of exactly this size?
I am not sure what you are trying to achieve by targeting a size of exactly 200 MB, but you can add logical checks:
db.testCollection.stats() - check the size of the collection before every insert.
Object.bsonsize(..) - check the size of each document before inserting it, to get as close to 200 MB as possible.
You can also create a capped collection, where you specify a maximum size or number of documents for the collection.
Hope this helps.
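A rough sketch of those two checks from Java, using the same legacy DB API as the question (BasicBSONEncoder ships with the driver's bson library; the other names are the question's own):
import org.bson.BasicBSONEncoder;

// measure a document's BSON size before inserting it,
// roughly what Object.bsonsize(..) reports in the shell
BasicDBObject document = new BasicDBObject("database", "test")
        .append("table", "hosting");
int bsonBytes = new BasicBSONEncoder().encode(document).length;
System.out.println("document size: " + bsonBytes + " bytes");

// create a capped collection bounded at 200 MB (209715200 bytes)
DBObject options = new BasicDBObject("capped", true).append("size", 209715200);
db.createCollection("ssv", options);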
What I would probably do is create a capped collection with size 200MB (209715200 bytes):
db.createCollection( "ssv", { capped: true, size: 209715200 } )
Then insert records as you are doing, and at intervals inside the for loop check whether the collection is full (or almost full).
So in your code, maybe something like this (expressing the pseudo code with the legacy driver's getStats(), which wraps the collStats command):
if (i % 10 == 0) {
    // "size" is the total uncompressed size of the documents in bytes
    if (collection.getStats().getLong("size") >= 209715100L) { // or an arbitrary value closer to 200 MB
        break;
    }
}
Why would you copy the same data 100 times to make 200 MB worth of test data? Instead:
1. Append a counter to the values so that you generate data sequentially, or
2. Use a random function to generate random data.
@Test
public void testMongoDBTestDataGenerate()
{
    MongoClient mongoClient = new MongoClient("localhost", 27017);
    DB db = mongoClient.getDB("development");
    DBCollection collection = db.getCollection("ssv");
    int counter = 0;
    for (int i = 0; i < 873813; i++)
    {
        BasicDBObject document = new BasicDBObject();
        document.put("database", "test");
        document.put("table", "hosting");
        BasicDBObject documentDetail = new BasicDBObject();
        documentDetail.put("counter0", counter++);
        documentDetail.put("counter1", counter++);
        documentDetail.put("counter2", counter++);
        documentDetail.put("counter3", counter++);
        documentDetail.put("counter4", counter++);
        documentDetail.put("counter5", counter++);
        documentDetail.put("counter6", counter++);
        documentDetail.put("counter7", counter++);
        documentDetail.put("counter8", counter++);
        documentDetail.put("counter9", counter++);
        document.put("detail", documentDetail);
        collection.insert(document);
    }
    mongoClient.close();
}
Ten eight-character strings at two bytes per character (160 B) plus ten eight-byte numbers (80 B) is roughly 240 B per document, and 240 B × 873,813 ≈ 200 MB.
A quick and dirty solution based on Robert Udah's and Somnath Muluk's suggestions: generate a .txt file (on your desktop, for example) containing 200 MB of data, read it into a string, and store that string in a single text field. That's it.
We are using MongoDB for saving and fetching data.
All calls that put data into collections go through a common method and are working fine.
All calls that fetch data from collections also go through a common method, but they work fine only some of the time.
Sometimes, for just one of the collections, my calls get stuck forever and consume CPU. I have to kill the threads manually, otherwise they eat up the whole CPU.
Mongo Connection
MongoClient mongo = new MongoClient(hostName, Integer.valueOf(port));
DB mongoDb = mongo.getDB(dbName);
Code to fetch
DBCollection collection = mongoDb.getCollection(collectionName);
DBObject dbObject = new BasicDBObject("_id", key);
DBCursor cursor = collection.find(dbObject);
I have figured out which collection is causing the issue, but how can I improve on this, given that it happens only for this particular collection, and only sometimes?
EDIT
Code to save
DBCollection collection = mongoDb.getCollection(collectionName);
DBObject query = new BasicDBObject("_id", key);
DBObject update = new BasicDBObject();
update.put("$set", JSON.parse(value));
collection.update(query, update, true, false); // upsert = true, multi = false
Bulk Write / collection
DB mongoDb = controllerFactory.getMongoDB();
DBCollection collection = mongoDb.getCollection(collectionName);
BulkWriteOperation bulkWriteOperation = collection.initializeUnorderedBulkOperation();
Map<String, Object> dataMap = (Map<String, Object>) JSON.parse(value);
for (Entry<String, Object> entrySet : dataMap.entrySet()) {
    BulkWriteRequestBuilder bulkWriteRequestBuilder =
            bulkWriteOperation.find(new BasicDBObject("_id", entrySet.getKey()));
    DBObject update = new BasicDBObject();
    update.put("$set", entrySet.getValue());
    bulkWriteRequestBuilder.upsert().update(update);
}
// the batched writes are not sent to the server until execute() is called
bulkWriteOperation.execute();
How can I set a timeout for the fetch calls?
A different approach is to use the methods proposed for the MongoDB 3.2 driver. Keep in mind that you have to update your .jar libraries (if you haven't already) to the latest version.
public static MongoClient connectToClient(String hostName, String port) {
    try {
        return new MongoClient(hostName, Integer.valueOf(port));
    } catch (MongoClientException e) {
        System.err.println("Cannot connect to client.");
        return null;
    }
}

public static MongoDatabase connectToDB(MongoClient client, String databaseName) {
    try {
        return client.getDatabase(databaseName);
    } catch (Exception e) {
        System.err.println("Error in connecting to database " + databaseName);
        return null;
    }
}

public static void closeConnection(MongoClient client) {
    client.close();
}

public static void findDoc(MongoDatabase db, String collectionName, long key) {
    MongoCollection<Document> collection = db.getCollection(collectionName);
    try {
        FindIterable<Document> iterable = collection
                .find(new Document("_id", key));
        Document doc = iterable.first();
        // for an Int64 field named 'special_id'
        long specialId = doc.getLong("special_id");
        System.out.println("special_id: " + specialId);
    } catch (MongoException e) {
        System.err.println("Error in retrieving document.");
    } catch (NullPointerException e) {
        System.err.println("Document with _id " + key + " does not exist.");
    }
}

public static void insertToDB(MongoDatabase db, String collectionName) {
    try {
        db.getCollection(collectionName).insertOne(new Document()
                .append("special_id", 5)
                // append anything else here
        );
    } catch (MongoException e) {
        System.err.println("Error in inserting new document.");
    }
}

public static void updateDoc(MongoDatabase db, String collectionName, long id) {
    MongoCollection<Document> collection = db.getCollection(collectionName);
    try {
        collection.updateOne(new Document("_id", id),
                new Document("$set", new Document("special_id", 7)));
    } catch (MongoException e) {
        System.err.println("Error in updating new document.");
    }
}

public static void main(String[] args) {
    String hostName = "myHost";
    String port = "myPort";
    String databaseName = "myDB";
    String collectionName = "myCollection";
    MongoClient client = connectToClient(hostName, port);
    if (client != null) {
        MongoDatabase db = connectToDB(client, databaseName);
        if (db != null) {
            findDoc(db, collectionName, 1L); // 1L is an example _id
        }
        closeConnection(client);
    }
}
EDIT: As the others suggested, check from the command line whether finding the document by its ID is slow there too. If so, maybe it's a problem with your hard drive. The _id field is always indexed, but for better or worse you can rebuild the collection's indexes (db.collection.reIndex() in the shell rebuilds them all, including _id).
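A sketch of triggering that rebuild from Java, assuming the 3.x MongoDatabase handle used above (reIndex is a server command, so it goes through runCommand):
// rebuild every index on the collection, including the _id index
Document result = db.runCommand(new Document("reIndex", collectionName));
System.out.println(result.toJson());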
The answers posted by others are great, but did not solve my problem.
The issue was actually in my own code: my cursor was waiting in a while loop for an infinite time.
I was missing a few checks, which has now been resolved.
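For anyone hitting the same thing, a minimal sketch of the shape of the fix (not the original code): bound the loop with hasNext() and close the cursor in a finally block so it can neither spin forever nor leak:
DBCursor cursor = collection.find(new BasicDBObject("_id", key));
try {
    while (cursor.hasNext()) {
        DBObject next = cursor.next();
        // process next
    }
} finally {
    cursor.close();
}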
Just some possible explanations/thoughts.
In general a query by id has to be fast, since _id is always supposed to be indexed. The code snippet looks correct, so the reason is probably in mongo itself. This leads me to a couple of suggestions:
Try to connect to mongo directly from the command line and run the find from there. Chances are that you'll still observe occasional slowness.
In this case:
Maybe it's the disks (maybe this particular server is deployed on a slow disk, or at least there is a correlation with slow disk access).
Maybe you have a sharded configuration and one shard is slower than the others.
Maybe it's a network issue that occurs sporadically. If you run mongo locally or on a staging environment with the same collection, does it reproduce?
Maybe (although I hardly believe it) the query runs in a sub-optimal way. In that case you can use explain(), as someone has already suggested here.
If you happen to have a replica set, please figure out what the Read Preference is. Who knows, maybe you prefer to read this id from a sub-optimal server.
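On the timeout question itself, a minimal sketch, assuming the legacy 2.12+/3.x driver API: maxTime bounds a single query on the server, and MongoClientOptions bounds socket operations on the client.
import java.util.concurrent.TimeUnit;

// server-side bound: abort this query after 5 seconds
DBCursor cursor = collection.find(new BasicDBObject("_id", key))
        .maxTime(5, TimeUnit.SECONDS);

// client-side bound: fail a connect after 10s and any socket read after 60s
MongoClientOptions options = MongoClientOptions.builder()
        .connectTimeout(10000)  // ms to establish a connection
        .socketTimeout(60000)   // ms for each socket read/write
        .build();
MongoClient mongo = new MongoClient(
        new ServerAddress(hostName, Integer.valueOf(port)), options);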
The function below merges the word MongoDB collection with a map's content, like this:
Collection:
cat 3,
dog 5
Map:
dog 2,
zebra 1
Collection after merge:
cat 3,
dog 7,
zebra 1
We have an empty collection and a map with about 14000 elements.
An Oracle PL/SQL procedure using a single MERGE statement, running on a 15k RPM HD, does it in less than a second.
MongoDB on an SSD disk needs about 53 seconds.
It looks like Oracle prepares an in-memory image of the file operation and saves the result in one I/O operation.
MongoDB probably performs 14000 I/Os - about 4 ms per upsert, which corresponds to the performance of the SSD.
If I just do 14000 inserts, without checking for document existence as the merge does, everything is also fast - less than a second.
My questions:
Can the code be improved?
Maybe it is necessary to do something with the MongoDB configuration?
Function code:
public void addBookInfo(String bookTitle, HashMap<String, Integer> bookInfo)
{
    // insert information into the book collection
    Document d = new Document();
    d.append("book_title", bookTitle);
    book.insertOne(d);

    // insert information into the word collection:
    // prepare lists of word info and book_word info documents
    List<Document> wordInfoToInsert = new ArrayList<Document>();
    List<Document> book_wordInfoToInsert = new ArrayList<Document>();
    for (String key : bookInfo.keySet())
    {
        Document d1 = new Document();
        Document d2 = new Document();
        d1.append("word", key);
        d1.append("count", bookInfo.get(key));
        wordInfoToInsert.add(d1);
        d2.append("book_title", bookTitle);
        d2.append("word", key);
        d2.append("count", bookInfo.get(key));
        book_wordInfoToInsert.add(d2);
    }

    // this is the list of insert/update DB operations
    List<WriteModel<Document>> updates = new ArrayList<WriteModel<Document>>();
    // iterator for the list of words
    ListIterator<Document> listIterator = wordInfoToInsert.listIterator();
    // generate the list of insert/update operations
    while (listIterator.hasNext())
    {
        d = listIterator.next();
        String wordToUpdate = d.getString("word");
        int countToAdd = d.getInteger("count").intValue();
        updates.add(
            new UpdateOneModel<Document>(
                new Document("word", wordToUpdate),
                new Document("$inc", new Document("count", countToAdd)),
                new UpdateOptions().upsert(true)
            )
        );
    }

    // perform the bulk operation
    // this is the slow part
    BulkWriteResult bulkWriteResult = word.bulkWrite(updates);

    boolean acknowledge = bulkWriteResult.wasAcknowledged();
    if (acknowledge)
        System.out.println("Write acknowledged.");
    else
        System.out.println("Write was not acknowledged.");

    boolean countInfo = bulkWriteResult.isModifiedCountAvailable();
    if (countInfo)
        System.out.println("Change counters available.");
    else
        System.out.println("Change counters not available.");

    int inserted = bulkWriteResult.getInsertedCount();
    int modified = bulkWriteResult.getModifiedCount();
    System.out.println("inserted: " + inserted);
    System.out.println("modified: " + modified);

    // insert information into the book_word collection
    // this is very fast
    book_word.insertMany(book_wordInfoToInsert);
}
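One tweak worth trying (an assumption about this workload, not a verified fix): bulkWrite is ordered by default, which forces the server to apply the upserts serially. Since the per-word upserts here are independent, an unordered bulk write lets the server batch them more freely:
// same list of UpdateOneModel upserts as above, but unordered
BulkWriteResult bulkWriteResult =
        word.bulkWrite(updates, new BulkWriteOptions().ordered(false));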
I am new to Java.
I am doing a search in a WindowBuilder UI using Java and MongoDB.
When I execute the code below I get a runtime exception.
try {
    // connect to the mongodb server
    MongoClient mongoClient = new MongoClient("localhost", 27017);
    // now connect to your database
    DB db = mongoClient.getDB("Ticket");
    System.out.println("Connect to database successfully");
    DBCollection coll = db.getCollection("OnlineT");
    System.out.println("Collection created successfully");
    F_stn = (String) fm.getSelectedItem();
    T_stn = (String) to.getSelectedItem();
    BasicDBObject doc = new BasicDBObject("From", F_stn);
    BasicDBObject doc1 = new BasicDBObject("To", T_stn);
    DBCursor ser = coll.find(doc);
    DBCursor ser2 = coll.find(doc1);
    while (ser.hasNext())
    {
        String data = ser.next().get("To").toString();
        System.out.println(data);
        if (data.equals(T_stn))
        {
            System.out.println("i m in");
            String dis = ser.next().toString();
            System.out.println(dis);
            break;
        }
        else
            System.out.println("No data found");
    }
}
It works fine until it enters the if block, where it does not print the DBObject.
Please suggest a way to do this. Thanks in advance.
In the "if" loop, you have:
String dis=ser.next().toString();
This makes your cursor move to the next postion and it didn't check hasNext(). I think that is the problem
Instead, you may do something like:
while (ser.hasNext()) {
    DBObject dbObject = ser.next();
    String data = dbObject.get("To").toString();
    System.out.println(data);
    if (data.equals(T_stn))
    {
        System.out.println("i m in");
        System.out.println(dbObject);
        break;
    }
    else
        System.out.println("No data found");
}
In addition, you don't need toString() for printing; println() will automatically call the object's toString() method.
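As a side note, a single compound query would avoid the client-side filtering altogether; a sketch, assuming both fields live in the same document:
// match documents where both From and To equal the selected stations
DBObject query = new BasicDBObject("From", F_stn).append("To", T_stn);
DBCursor matches = coll.find(query);
while (matches.hasNext()) {
    System.out.println(matches.next());
}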
I'm using Oracle Berkeley DB Java Edition with tables in key/value format. I'm trying to insert duplicate keys but keep getting SecondaryIntegrityException. According to Oracle, if setSortedDuplicates() is set to true, duplicates are allowed, but that does not work in my case. Below is some code with key=bob, value=smith. The first time I run it, it runs as expected. If I run it a second time, changing only value=johnson, I get SecondaryIntegrityException. Is there something I'm doing wrong? Thanks.
String key = "bob";
String value = "smith";
EnvironmentConfig envConfig = new EnvironmentConfig();
envConfig.setAllowCreate(true);
envConfig.setTransactional(false);
Environment myDBenvironment = new Environment(new File(filePath), envConfig);
DatabaseConfig dbConfig = new DatabaseConfig();
dbConfig.setAllowCreate(true);
dbConfig.setTransactional(false);
Database myDatabase = myDBenvironment.openDatabase(null, dbname,
dbConfig);
// create secondary database
SecondaryConfig mySecConfig = new SecondaryConfig();
mySecConfig.setAllowCreate(true);
mySecConfig.setSortedDuplicates(true);
mySecConfig.setTransactional(false);
mySecConfig.setKeyCreator(new SecondKeyCreator());
SecondaryDatabase mySecondaryDatabase = myDBenvironment
.openSecondaryDatabase(null, secdbname, myDatabase,
mySecConfig);
DatabaseEntry myKey = new DatabaseEntry(key.getBytes("UTF-8"));
Record mydata = new Record();
mydata.setobjectVal(value);
DatabaseEntry myrecord = new DatabaseEntry();
new RecordTupleBinding().objectToEntry(mydata, myrecord);
myDatabase.put(null, myKey, myrecord);
mySecondaryDatabase.close();
myDatabase.close();
myDBenvironment.close();
public class SecondKeyCreator implements SecondaryKeyCreator {

    @Override
    public boolean createSecondaryKey(SecondaryDatabase arg0,
            DatabaseEntry key, DatabaseEntry data, DatabaseEntry secondKey) {
        RecordTupleBinding binding = new RecordTupleBinding();
        Record record = (Record) binding.entryToObject(data);
        try {
            secondKey.setData(data.getData());
        } catch (Exception e) {
            e.printStackTrace();
        }
        return true;
    }
}
Although I am not an expert on the topic, let me try to help you.
According to the Oracle documentation, "If a primary database is to be associated with one or more secondary databases, it may not be configured for duplicates". Do you have an association on this database? If so, this may be the reason.
I hope it helps.
It is the secondary database that needs, and is required to have, sorted-duplicate support. The above works if
secondKey.setData(data.getData());
is changed to
secondKey.setData(((String)record.getobjectVal()).getBytes());
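Putting that fix in context, the corrected key creator would look roughly like this (Record and RecordTupleBinding are the question's own classes):
public class SecondKeyCreator implements SecondaryKeyCreator {

    @Override
    public boolean createSecondaryKey(SecondaryDatabase secondary,
            DatabaseEntry key, DatabaseEntry data, DatabaseEntry secondKey) {
        RecordTupleBinding binding = new RecordTupleBinding();
        Record record = (Record) binding.entryToObject(data);
        // derive the secondary key from the record's value rather than
        // echoing the primary data, so each value gets its own entry
        secondKey.setData(((String) record.getobjectVal()).getBytes());
        return true;
    }
}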