In the Java API, no exception is thrown even though the transaction is erroneous:
try {
    …………………………………
    logger.info("Delete Document " + uri);
    docMgr.delete("rocky-mountains");
    System.out.println("Deleted");
} catch (Exception e) {
    logger.error("Exception : " + e.toString());
}
The document rocky-mountains doesn't exist; however, the API happily declares Deleted:
Jul 05, 2020 9:35:04 PM com.fc.allegro.DeleteDocument deleteDocument
INFO: Delete Document rocky-mountains
Jul 05, 2020 9:35:04 PM com.marklogic.client.impl.DocumentManagerImpl delete
INFO: Deleting rocky-mountains
Deleted
In Query Console, eval detects the problem and throws an error:
[1.0-ml] XDMP-DOCNOTFOUND: xdmp:document-delete("rocky-mountains") -- Document not found
As the lesser of two evils, DMSDK indicates that no document was deleted, but it still doesn't throw an exception:
QueryBatcher batcher = dmManager.newQueryBatcher(
        new StructuredQueryBuilder().document("rocky-mountains"));
batcher.onUrisReady(new DeleteListener())
       .onQueryFailure(exception -> exception.printStackTrace());
Result:
Jul 05, 2020 9:52:07 PM com.marklogic.client.datamovement.impl.QueryBatcherImpl withForestConfig
INFO: (withForestConfig) Using forests on [localhost] hosts for "allegro"
Batch Deleted
INFO: Job complete, jobBatchNumber=1, jobResultsSoFar=0
I tried checked and unchecked exceptions, but to no avail.
Which MarkLogic class and method enforces throwing exceptions and mitigates this risk?
A query transaction via the Java API (failure and success screenshots not shown).
There is an important difference between running xdmp:document-delete and using the Java API to delete a document. The Java API is a wrapper around the MarkLogic REST API, which follows the conventions of a RESTful API. One important convention is that calls such as DELETE are expected to be idempotent: running the same call twice should leave the server in the same state and return the same kind of reply. That is why calls to insert, update, and delete don't throw errors based on whether the document exists or not.
See also for instance: https://restfulapi.net/http-methods/#delete
I'd recommend using Data Services or custom REST extensions if you want your app to be more strict.
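If you need to stay on the out-of-the-box Java Client API, one workaround is to check for the document yourself and raise the error in application code. Below is a minimal sketch, assuming a DatabaseClient named client; note that the exists/delete pair is not atomic, it only tightens error reporting:

import com.marklogic.client.DatabaseClient;
import com.marklogic.client.document.GenericDocumentManager;

public class StrictDelete {
    // Sketch: fail loudly when the URI is absent, since the REST-backed
    // delete itself stays silent for missing documents.
    public static void deleteOrFail(DatabaseClient client, String uri) {
        GenericDocumentManager docMgr = client.newDocumentManager();
        if (docMgr.exists(uri) == null) { // exists() returns null when the URI is absent
            throw new IllegalStateException("Document not found: " + uri);
        }
        docMgr.delete(uri);
    }
}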
HTH!
Related
In ATG I get the exception below when the this.getSmtpEmailSender().sendEmailMessage(msg) method is called, but the same code works fine in a different environment. It may be a configuration issue; what do I need to check?
/com/ncr/base/common/services/EmailService
java.lang.Exception: The final format argument for a vlog call is a throwable, but is not referenced. The throwable will be logged, but please use an explicit Throwable argument before the format string to eliminate ambiguity.
**** Warning Fri Dec 21 02:49:02 -05:00 2018 1545378542129 /com/ncr/base/common/services/EmailService at atg.nucleus.logging.VariableArgumentApplicationLoggingUtil.getUnreferencedThrowable(VariableArgumentApplicationLoggingUtil.java:744)
**** Warning Fri Dec 21 02:49:02 -05:00 2018 1545378542129 /com/ncr/base/common/services/EmailService at atg.nucleus.logging.VariableArgumentApplicationLoggingUtil.vlogError(VariableArgumentApplicationLoggingUtil.java:344)
/com/ncr/base/common/services/EmailService nested exception is:
com.sun.mail.smtp.SMTPSenderFailedException: 501 5.1.7 Invalid address
My guess is that one of your email addresses is not valid in either the To or From field.
ATG allows you to configure defaults for the /atg/dynamo/service/SMTPEmail/ components, and I think you are missing a configuration for the environment in question. I would recommend opening dyn/admin and comparing the SMTP email configuration for environments that work against those that don't.
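If the invalid-address guess is the culprit, you can also make it fail inside the application before the SMTP call is made. A small sketch using JavaMail's strict parsing (the helper class and method names are my own):

import javax.mail.internet.AddressException;
import javax.mail.internet.InternetAddress;

public class AddressCheck {
    // Sketch: strictly parse an address so a bad To/From value fails locally
    // instead of coming back from the SMTP server as a 501 5.1.7.
    public static void requireValidAddress(String address) {
        try {
            new InternetAddress(address, true); // strict RFC 822 parsing
        } catch (AddressException e) {
            throw new IllegalArgumentException("Invalid email address: " + address, e);
        }
    }
}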
I have a problem with the URL I am using to connect to my DB.
Here is the code:
TransactionalGraph graph = new OrientGraph("/home/danicroque/Escritorio/demoCroque", "admin", "admin");
Here is the error:
run:
may 20, 2016 2:49:46 AM com.orientechnologies.common.log.OLogManager log
INFORMACIÓN: OrientDB auto-config DISKCACHE=907MB (heap=846MB os=3.802MB disk=447.700MB)
Exception in thread "main" com.orientechnologies.orient.core.exception.ODatabaseException: Error on opening database '/home/danicroque/Escritorio/demoCroque'
at com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx.<init>(ODatabaseDocumentTx.java:204)
at com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx.<init>(ODatabaseDocumentTx.java:168)
at com.tinkerpop.blueprints.impls.orient.OrientBaseGraph.openOrCreate(OrientBaseGraph.java:1818)
at com.tinkerpop.blueprints.impls.orient.OrientBaseGraph.<init>(OrientBaseGraph.java:161)
at com.tinkerpop.blueprints.impls.orient.OrientTransactionalGraph.<init>(OrientTransactionalGraph.java:102)
at com.tinkerpop.blueprints.impls.orient.OrientTransactionalGraph.<init>(OrientTransactionalGraph.java:98)
at com.tinkerpop.blueprints.impls.orient.OrientGraph.<init>(OrientGraph.java:103)
at pruebatodook.PruebaTodoOk.run(PruebaTodoOk.java:23)
at pruebatodook.PruebaTodoOk.main(PruebaTodoOk.java:16)
Caused by: com.orientechnologies.orient.core.exception.OConfigurationException: Error in database URL: the engine was not specified. Syntax is: <engine>:<db-type>:<db-name>[?<db-param>=<db-value>[&]]*. URL was: /home/danicroque/Escritorio/demoCroque
at com.orientechnologies.orient.core.Orient.loadStorage(Orient.java:441)
at com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx.<init>(ODatabaseDocumentTx.java:187)
... 8 more
may 20, 2016 2:49:46 AM com.orientechnologies.common.log.OLogManager log
INFORMACIÓN: OrientDB Engine shutdown complete
/home/danicroque/.cache/netbeans/8.1/executor-snippets/run.xml:53: Java returned: 1
BUILD FAILED (total time: 1 second)
I am trying to run this code on Ubuntu with NetBeans. The name of the DB is demoCroque and the URL is /home/Escritorio/demoCroque.
How can I solve this problem?
It looks like your URL is formed incorrectly.
See here: http://orientdb.com/docs/2.2/Console-Command-Connect.html
<database-url> Defines the URL of the database you want to connect to.
It uses the format <mode>:<path>
<mode> Defines the mode you want to use in connecting to the database.
It can be PLOCAL or REMOTE.
<path> Defines the path to the database.
Try something like this as your URL:
REMOTE:192.168.1.1/demoCroque
Or
PLOCAL:../home/Escritorio/demoCroque
Edit: for the purpose of testing use a full file path until you know it works correctly, for example:
PLOCAL:C:/projects/myproject/Escritorio/demoCroque
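Applied to the code in the question, the constructor call would look roughly like this (a sketch; the plocal path is copied from the question and should be adjusted to your machine):

import com.tinkerpop.blueprints.TransactionalGraph;
import com.tinkerpop.blueprints.impls.orient.OrientGraph;

public class OpenGraph {
    public static void main(String[] args) {
        // Sketch: prefix the path with the "plocal" engine so OrientDB can parse the URL.
        TransactionalGraph graph = new OrientGraph(
                "plocal:/home/danicroque/Escritorio/demoCroque", "admin", "admin");
        try {
            // ... use the graph ...
        } finally {
            graph.shutdown();
        }
    }
}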
I have set up a replica set using three machines (192.168.122.21, 192.168.122.147 and 192.168.122.148) and I am interacting with the MongoDB Cluster using the Java SDK:
ArrayList<ServerAddress> addrs = new ArrayList<ServerAddress>();
addrs.add(new ServerAddress("192.168.122.21", 27017));
addrs.add(new ServerAddress("192.168.122.147", 27017));
addrs.add(new ServerAddress("192.168.122.148", 27017));
this.mongoClient = new MongoClient(addrs);
this.db = this.mongoClient.getDB(this.db_name);
this.collection = this.db.getCollection(this.collection_name);
After the connection is established I do multiple inserts of a simple test document:
for (int i = 0; i < this.inserts; i++) {
    try {
        this.collection.insert(new BasicDBObject(String.valueOf(i), "test"));
    } catch (Exception e) {
        System.out.println("Error on inserting element: " + i);
        e.printStackTrace();
    }
}
When simulating a node crash of the master server (power-off), the MongoDB cluster does a successful failover:
19:08:03.907+0100 [rsHealthPoll] replSet info 192.168.122.21:27017 is down (or slow to respond):
19:08:03.907+0100 [rsHealthPoll] replSet member 192.168.122.21:27017 is now in state DOWN
19:08:04.153+0100 [rsMgr] replSet info electSelf 1
19:08:04.154+0100 [rsMgr] replSet couldn't elect self, only received -9999 votes
19:08:05.648+0100 [conn15] replSet info voting yea for 192.168.122.148:27017 (2)
19:08:10.681+0100 [rsMgr] replSet not trying to elect self as responded yea to someone else recently
19:08:10.910+0100 [rsHealthPoll] replset info 192.168.122.21:27017 heartbeat failed, retrying
19:08:16.394+0100 [rsMgr] replSet not trying to elect self as responded yea to someone else recently
19:08:22.876+.
19:08:22.912+0100 [rsHealthPoll] replset info 192.168.122.21:27017 heartbeat failed, retrying
19:08:23.623+0100 [SyncSourceFeedbackThread] replset setting syncSourceFeedback to 192.168.122.148:27017
19:08:23.917+0100 [rsHealthPoll] replSet member 192.168.122.148:27017 is now in state PRIMARY
This is also recognized by the MongoDB driver on the client side:
Dec 01, 2014 7:08:16 PM com.mongodb.ConnectionStatus$UpdatableNode update
WARNING: Server seen down: /192.168.122.21:27017 - java.io.IOException - message: Read timed out
WARNING: Server seen down: /192.168.122.21:27017 - java.io.IOException - message: couldn't connect to [/192.168.122.21:27017] bc:java.net.SocketTimeoutException: connect timed out
Dec 01, 2014 7:08:36 PM com.mongodb.DBTCPConnector setMasterAddress
WARNING: Primary switching from /192.168.122.21:27017 to /192.168.122.148:27017
But it still keeps trying to connect to the old node (forever):
Dec 01, 2014 7:08:50 PM com.mongodb.ConnectionStatus$UpdatableNode update
WARNING: Server seen down: /192.168.122.21:27017 - java.io.IOException - message: couldn't connect to [/192.168.122.21:27017] bc:java.net.NoRouteToHostException: No route to host
.....
Dec 01, 2014 7:10:43 PM com.mongodb.ConnectionStatus$UpdatableNode update
WARNING: Server seen down: /192.168.122.21:27017 - java.io.IOException -message: couldn't connect to [/192.168.122.21:27017] bc:java.net.NoRouteToHostException: No route to host
The document count on the database stays the same from the moment the primary fails and a secondary becomes primary. Here is the output from the same node during the process:
"rs0":SECONDARY> db.test_collection.find().count() 12260161
"rs0":PRIMARY> db.test_collection.find().count() 12260161
Update:
Using WriteConcern Unacknowledged it works as designed: insert operations are also performed on the new master, and all operations during the election process are lost.
With WriteConcern Acknowledged it seems that the operation waits indefinitely for an ACK from the crashed master. This could explain why the program continues after the crashed server boots up again and joins the cluster as a secondary. But in my case I don't want the driver to wait forever; it should raise an error after a certain time.
Update:
WriteConcern Acknowledged also works as expected when killing the mongod process on the primary. In this case the failover only takes ~3 seconds. During this time no inserts are done, and after the new primary is elected the insert operations continue.
So I only get the problem when simulating a node failure (power off/network down). In this case the operation hangs until the failed node starts up again.
Does your app still work? Since that server is still in your seed list, the driver will try to connect to it as far as I know. Your app should still work so long as any of the other servers in your seed list can gain primary status.
Explicitly specifying a connection timeout value solved the error. See also: http://api.mongodb.org/java/2.7.0/com/mongodb/MongoOptions.html
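For reference, here is roughly what that looks like; the exact options class depends on your driver version, so this sketch uses MongoClientOptions (available in the 2.10+ and 3.x drivers), and the timeout values are only illustrative:

import com.mongodb.MongoClient;
import com.mongodb.MongoClientOptions;
import com.mongodb.ServerAddress;
import java.util.Arrays;

public class TimeoutClient {
    public static MongoClient create() {
        // Sketch: bound connect and socket waits so a powered-off node fails fast
        // instead of blocking the insert loop indefinitely.
        MongoClientOptions options = MongoClientOptions.builder()
                .connectTimeout(5000)  // ms to establish a TCP connection
                .socketTimeout(5000)   // ms to wait for a response on an open socket
                .build();
        return new MongoClient(
                Arrays.asList(
                        new ServerAddress("192.168.122.21", 27017),
                        new ServerAddress("192.168.122.147", 27017),
                        new ServerAddress("192.168.122.148", 27017)),
                options);
    }
}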
I'm currently working on a project using the MongoDB Java API. I have been working on this project for a while, but have recently come across an issue that I cannot resolve. I am trying to make a database system that is fault-tolerant. To simulate a database crash, I have my program connect to a MongoDB server that I have set up, execute a simple read or write, and then shut down the database server. I had originally thought that this would cause certain methods that I am calling to throw a MongoException that I could catch and then recover from the database crash. However, I am getting a strange stack trace that says an EOFException is being thrown, among other things. Below is the stack trace itself.
Mar 04, 2013 8:06:15 PM com.mongodb.DBPortPool gotError
WARNING: emptying DBPortPool to polaris.cs.wcu.edu/152.30.5.5:12345 b/c of error
java.io.EOFException
at org.bson.io.Bits.readFully(Bits.java:48)
at org.bson.io.Bits.readFully(Bits.java:33)
at org.bson.io.Bits.readFully(Bits.java:28)
at com.mongodb.Response.<init>(Response.java:40)
at com.mongodb.DBPort.go(DBPort.java:124)
at com.mongodb.DBPort.call(DBPort.java:74)
at com.mongodb.DBTCPConnector.innerCall(DBTCPConnector.java:282)
at com.mongodb.DBTCPConnector.call(DBTCPConnector.java:256)
at com.mongodb.DBApiLayer$MyCollection.__find(DBApiLayer.java:289)
at com.mongodb.DBApiLayer$MyCollection.__find(DBApiLayer.java:274)
at com.mongodb.DBCursor._check(DBCursor.java:368)
at com.mongodb.DBCursor._hasNext(DBCursor.java:459)
at com.mongodb.DBCursor.hasNext(DBCursor.java:484)
at edu.wcu.cs.capstone.view.AbstractViewEngine.getView(AbstractViewEngine.java:57)
at edu.wcu.cs.capstone.transaction.ServerTransactionManager.getView(ServerTransactionManager.java:52)
at edu.wcu.cs.capstone.transaction.ServerTransactionManager.run(ServerTransactionManager.java:183)
at java.lang.Thread.run(Thread.java:722)
Caught exception
Mar 04, 2013 8:06:15 PM com.mongodb.DBPortPool gotError
WARNING: emptying DBPortPool to polaris.cs.wcu.edu/152.30.5.5:12345 b/c of error
java.io.IOException: couldn't connect to [polaris.cs.wcu.edu/152.30.5.5:12345] bc:java.net.ConnectException: Connec
at com.mongodb.DBPort._open(DBPort.java:214)
at com.mongodb.DBPort.go(DBPort.java:107)
at com.mongodb.DBPort.call(DBPort.java:74)
at com.mongodb.DBTCPConnector.innerCall(DBTCPConnector.java:282)
at com.mongodb.DBTCPConnector.call(DBTCPConnector.java:256)
at com.mongodb.DBApiLayer$MyCollection.__find(DBApiLayer.java:289)
at com.mongodb.DBApiLayer$MyCollection.__find(DBApiLayer.java:274)
at com.mongodb.DBCursor._check(DBCursor.java:368)
at com.mongodb.DBCursor._hasNext(DBCursor.java:459)
at com.mongodb.DBCursor.hasNext(DBCursor.java:484)
at edu.wcu.cs.capstone.view.AbstractViewEngine.getView(AbstractViewEngine.java:61)
at edu.wcu.cs.capstone.transaction.ServerTransactionManager.getView(ServerTransactionManager.java:52)
at edu.wcu.cs.capstone.transaction.ServerTransactionManager.run(ServerTransactionManager.java:183)
at java.lang.Thread.run(Thread.java:722)
DB is down.
Exception in thread "Thread-3" java.lang.NullPointerException
at edu.wcu.cs.capstone.transaction.ServerTransactionManager.run(ServerTransactionManager.java:184)
at java.lang.Thread.run(Thread.java:722)
The "Caught exception" and "DB is down." lines are print statements I am using to verify that I am catching certain exceptions. Here is the relevant code:
public View getView(Mongo mongo, Query query) throws MongoException,
        EOFException {
    String connected = "";
    try {
        connected = mongo.getConnectPoint();
    } catch (Exception e) {
        throw new MongoException("Error.");
    }
    System.out.println("Connected: " + connected);
    DB db = mongo.getDB(query.getServer());
    List<DBObject> viewList = new ArrayList<DBObject>();
    DBCollection collection = db.getCollection(query.getCollection());
    DBCursor cursor = collection.find(query.getQuery(), excludeID);
    try {
        cursor.hasNext();
    } catch (Exception e) {
        System.out.println("Caught exception");
    }
    while (cursor.hasNext()) {
        viewList.add(cursor.next());
    }
    return new View(viewList);
}
As you can see, the error occurs when I call cursor.hasNext(). I am also actually still catching the exception that is thrown, as the Caught exception output shows. However, I still get a stack trace as if it were not being caught. I suspect this has something to do with the DBPortPool.gotError() method, but I have looked at the code for this method and cannot determine what it actually does or even how it is called. (GrepCode link)
As stated above, I thought the behavior for this type of code would have been to throw a MongoException when a call on that specific Mongo object failed because the database was no longer active. Any help that anyone could provide would be greatly appreciated!
This happens because the driver is losing its connection. Here is an issue on the MongoDB bug tracker referring to it: https://jira.mongodb.org/browse/JAVA-481
I had the same issue. It was because I restarted mongod without restarting my Java server (Tomcat in my case). Restarting Tomcat solved the issue because the Mongo driver had lost its connection.
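If you also want the failure to abort getView() cleanly rather than continue on into the NullPointerException, one option is to keep the whole cursor iteration inside a single try block and rethrow. A minimal sketch against the legacy DBCursor API (the class and method names here are my own):

import com.mongodb.DBCollection;
import com.mongodb.DBCursor;
import com.mongodb.DBObject;
import com.mongodb.MongoException;
import java.util.ArrayList;
import java.util.List;

public class ViewFetcher {
    // Sketch: a dropped connection aborts the fetch instead of leaving a
    // half-built result for the caller to trip over.
    public static List<DBObject> fetchAll(DBCollection collection, DBObject query) {
        List<DBObject> results = new ArrayList<DBObject>();
        DBCursor cursor = collection.find(query);
        try {
            while (cursor.hasNext()) {
                results.add(cursor.next());
            }
        } catch (MongoException e) {
            // The connection to mongod was lost mid-iteration (see JAVA-481 above).
            throw new MongoException("Database unavailable while reading results: " + e.getMessage());
        } finally {
            cursor.close();
        }
        return results;
    }
}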
I tried using the following query:
Query q = getPersistenceManager().newQuery(
        getPersistenceManager().getExtent(ICommentItem.class, false));
but got:
org.datanucleus.exceptions.NoPersistenceInformationException: The class
"com.sampleapp.data.dataobjects.ICommentItem" is required to be persistable yet no Meta -Data/Annotations can be found for this class. Please check that the Meta-Data/annotations is defined in a valid file location.
I saw in the DataNucleus forum that somebody suggested (a few years ago) using:
<interface name=IComment/>
I tried that, but it didn't create any table when I ran schema-update. Is that tag still relevant? I couldn't see anything in the docs about it.
I also tried:
<class name=IComment/>
But that gave this error when running schema-create:
SEVERE: Error thrown enhancing with ASMClassEnhancer
java.lang.NullPointerException
at org.datanucleus.enhancer.asm.method.DefaultConstructor.execute(DefaultConstructor.java:63)
at org.datanucleus.enhancer.asm.JdoClassAdapter.visitEnd(JdoClassAdapter.java:317)
at org.objectweb.asm.ClassReader.accept(Unknown Source)
at org.objectweb.asm.ClassReader.accept(Unknown Source)
at org.datanucleus.enhancer.asm.ASMClassEnhancer.enhance(ASMClassEnhancer.java:388)
at org.datanucleus.enhancer.DataNucleusEnhancer.enhanceClass(DataNucleusEnhancer.java:1035)
at org.datanucleus.enhancer.DataNucleusEnhancer.enhance(DataNucleusEnhancer.java:609)
at org.datanucleus.enhancer.DataNucleusEnhancer.main(DataNucleusEnhancer.java:1316)
Oct 23, 2010 6:46:33 PM org.datanucleus.enhancer.DataNucleusEnhancer addMessage
INFO: ERROR (PersistenceCapable) : com.sampleapp.data.dataobjects.ICommentItem
Oct 23, 2010 6:46:33 PM org.datanucleus.enhancer.asm.ASMClassEnhancer enhance
INFO: Class "com.sampleapp.data.dataobjects.Article" is already enhanced.
Oct 23, 2010 6:46:33 PM org.datanucleus.enhancer.DataNucleusEnhancer addMessage
SEVERE: DataNucleus Enhancer completed with an error. Please review the enhancer log for full details. Some classes may have been enhanced but some caused errors
Failure during enhancement of classes - see the log for details
org.datanucleus.exceptions.NucleusException: Failure during enhancement of classes - see the log for details
at org.datanucleus.enhancer.DataNucleusEnhancer.enhance(DataNucleusEnhancer.java:620)
at org.datanucleus.enhancer.DataNucleusEnhancer.main(DataNucleusEnhancer.java:1316)
It turns out this is not supported at this time, but it is planned to be added in version 2.2.0M3.
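In the meantime, a common workaround is to query a concrete persistence-capable implementation rather than the interface. A sketch under that assumption, where the concrete class (for example a hypothetical CommentItem annotated with @PersistenceCapable and implementing ICommentItem) is passed in:

import java.util.List;
import javax.jdo.PersistenceManager;
import javax.jdo.Query;

public class ExtentQueries {
    // Sketch: query the concrete class instead of the interface until
    // interface extents are supported.
    @SuppressWarnings("unchecked")
    public static <T> List<T> findAll(PersistenceManager pm, Class<T> concreteClass) {
        Query q = pm.newQuery(concreteClass);
        return (List<T>) q.execute();
    }
}

Usage would then be something like findAll(pm, CommentItem.class), with CommentItem standing in for whatever persistable implementation you have.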