what do these neo4j 2.0.1 exceptions mean? - java

We embed Neo4j 2.0.1 Community Edition on 64-bit Linux with JRE 1.7.0_51 in a multi-threaded application.
Intermittently (rarely) we get the following exceptions when we delete a Relationship and when we execute a Cypher query within the same transaction, respectively.
Should we be using pessimistic locks for these deletes? What do these exceptions mean?
ResourceAcquisitionFailedException
org.neo4j.kernel.impl.persistence.ResourceAcquisitionFailedException: The transaction is marked for rollback only.
at org.neo4j.kernel.impl.persistence.PersistenceManager$ResourceHolder.enlist(PersistenceManager.java:408)
at org.neo4j.kernel.impl.persistence.PersistenceManager$ResourceHolder.forWriting(PersistenceManager.java:384)
at org.neo4j.kernel.impl.api.KernelTransactionImplementation.ensureWriteTransaction(KernelTransactionImplementation.java:181)
at org.neo4j.kernel.impl.api.KernelTransactionImplementation.upgradeToDataTransaction(KernelTransactionImplementation.java:211)
at org.neo4j.kernel.impl.api.KernelStatement.dataWriteOperations(KernelStatement.java:84)
at org.neo4j.kernel.impl.core.RelationshipProxy.delete(RelationshipProxy.java:84)
CypherExecutionException
org.neo4j.cypher.CypherExecutionException
at org.neo4j.cypher.internal.compiler.v2_0.spi.ExceptionTranslatingQueryContext.org$neo4j$cypher$internal$compiler$v2_0$spi$ExceptionTranslatingQueryContext$$translateException(ExceptionTranslatingQueryContext.scala:151)
at org.neo4j.cypher.internal.compiler.v2_0.spi.ExceptionTranslatingQueryContext.getLabelsForNode(ExceptionTranslatingQueryContext.scala:44)
at org.neo4j.cypher.internal.compiler.v2_0.spi.DelegatingQueryContext.getLabelsForNode(DelegatingQueryContext.scala:39)
at org.neo4j.cypher.internal.compiler.v2_0.spi.QueryContext$class.isLabelSetOnNode(QueryContext.scala:54)
at org.neo4j.cypher.internal.compiler.v2_0.spi.DelegatingQueryContext.isLabelSetOnNode(DelegatingQueryContext.scala:26)
at org.neo4j.cypher.internal.compiler.v2_0.commands.HasLabel.isMatch(Predicate.scala:252)
at org.neo4j.cypher.internal.compiler.v2_0.commands.And.isMatch(Predicate.scala:62)
at org.neo4j.cypher.internal.compiler.v2_0.commands.And.isMatch(Predicate.scala:62)
at org.neo4j.cypher.internal.compiler.v2_0.commands.Predicate.isTrue(Predicate.scala:33)
at org.neo4j.cypher.internal.compiler.v2_0.pipes.matching.SingleStep$FilteringIterator.isValidNext(SingleStep.scala:135)
at org.neo4j.cypher.internal.compiler.v2_0.pipes.matching.SingleStep$FilteringIterator.computeNext(SingleStep.scala:118)
at org.neo4j.cypher.internal.compiler.v2_0.pipes.matching.SingleStep$FilteringIterator.next(SingleStep.scala:108)
at org.neo4j.cypher.internal.compiler.v2_0.pipes.matching.SingleStep$FilteringIterator.next(SingleStep.scala:98)
at scala.collection.convert.Wrappers$IteratorWrapper.next(Wrappers.scala:30)
at org.neo4j.kernel.impl.traversal.TraversalBranchImpl.next(TraversalBranchImpl.java:138)
at org.neo4j.kernel.impl.traversal.TraversalBranchWithState.next(TraversalBranchWithState.java:32)
at org.neo4j.kernel.impl.traversal.StartNodeTraversalBranch.next(StartNodeTraversalBranch.java:50)
at org.neo4j.graphdb.traversal.PreorderDepthFirstSelector.next(PreorderDepthFirstSelector.java:49)
at org.neo4j.kernel.impl.traversal.MonoDirectionalTraverserIterator.fetchNextOrNull(MonoDirectionalTraverserIterator.java:68)
at org.neo4j.kernel.impl.traversal.MonoDirectionalTraverserIterator.fetchNextOrNull(MonoDirectionalTraverserIterator.java:35)
at org.neo4j.helpers.collection.PrefetchingIterator.hasNext(PrefetchingIterator.java:55)
at scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:41)
at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
at scala.collection.Iterator$$anon$14.hasNext(Iterator.scala:388)
at scala.collection.Iterator$class.isEmpty(Iterator.scala:256)
at scala.collection.AbstractIterator.isEmpty(Iterator.scala:1157)
at org.neo4j.cypher.internal.compiler.v2_0.pipes.SlicePipe.internalCreateResults(SlicePipe.scala:36)
at org.neo4j.cypher.internal.compiler.v2_0.pipes.PipeWithSource.createResults(Pipe.scala:71)
at org.neo4j.cypher.internal.compiler.v2_0.pipes.PipeWithSource.createResults(Pipe.scala:68)
at org.neo4j.cypher.internal.compiler.v2_0.pipes.PipeWithSource.createResults(Pipe.scala:71)
at org.neo4j.cypher.internal.compiler.v2_0.pipes.PipeWithSource.createResults(Pipe.scala:68)
at org.neo4j.cypher.internal.compiler.v2_0.executionplan.ExecutionPlanBuilder.org$neo4j$cypher$internal$compiler$v2_0$executionplan$ExecutionPlanBuilder$$prepareStateAndResult(ExecutionPlanBuilder.scala:149)
at org.neo4j.cypher.internal.compiler.v2_0.executionplan.ExecutionPlanBuilder$$anonfun$2.apply(ExecutionPlanBuilder.scala:126)
at org.neo4j.cypher.internal.compiler.v2_0.executionplan.ExecutionPlanBuilder$$anonfun$2.apply(ExecutionPlanBuilder.scala:125)
at org.neo4j.cypher.internal.compiler.v2_0.executionplan.ExecutionPlanBuilder$$anon$6.execute(ExecutionPlanBuilder.scala:50)
at org.neo4j.cypher.internal.ExecutionPlanWrapperForV2_0.execute(CypherCompiler.scala:93)
at org.neo4j.cypher.ExecutionEngine.execute(ExecutionEngine.scala:61)
at org.neo4j.cypher.ExecutionEngine.execute(ExecutionEngine.scala:65)
at org.neo4j.cypher.javacompat.ExecutionEngine.execute(ExecutionEngine.java:78)
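For reference, a minimal sketch of the pattern described in the question: deleting a relationship and running a Cypher query inside one transaction, with a pessimistic write lock taken on the relationship first so concurrent writers serialize instead of racing (Neo4j 2.0 embedded API; the class name, method, and query are placeholders):

import org.neo4j.cypher.javacompat.ExecutionEngine;
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Relationship;
import org.neo4j.graphdb.Transaction;

public class DeleteWithLockSketch {
    // Delete a relationship and run a Cypher query in the same transaction.
    // acquireWriteLock() takes a pessimistic lock that is held until the
    // transaction closes.
    public static void deleteAndQuery(GraphDatabaseService db,
                                      ExecutionEngine engine,
                                      Relationship rel) {
        try (Transaction tx = db.beginTx()) {
            tx.acquireWriteLock(rel);
            rel.delete();
            engine.execute("MATCH (n) RETURN count(n)"); // placeholder query
            tx.success(); // mark for commit; close() finishes the transaction
        }
    }
}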

Related

beanshell - Deadlock issue

Does anyone have experience with deadlocks in BeanShell? This is something we have been encountering recently in our production system, where script execution blocks other threads due to its lock on class loading via Tomcat. The following is the stack trace for the lock owner in the thread dump:
"Thread-64" : 150 : BLOCKED : cpu=37812500000 : cpuLoad= 0.0
BlockedCount:93354 BlockedTime:-1 LockName:java.lang.Object#219d66b6 LockOwnerID:151 LockOwnerName:Thread-65
WaitedCount:13 WaitedTime:-1 InNative:false IsSuspended:false
at org.apache.catalina.webresources.AbstractSingleArchiveResourceSet.getArchiveEntries(AbstractSingleArchiveResourceSet.java:66)
at org.apache.catalina.webresources.AbstractArchiveResourceSet.getResource(AbstractArchiveResourceSet.java:262)
at org.apache.catalina.webresources.StandardRoot.getResourceInternal(StandardRoot.java:281)
at org.apache.catalina.webresources.Cache.getResource(Cache.java:62)
at org.apache.catalina.webresources.StandardRoot.getResource(StandardRoot.java:216)
at org.apache.catalina.webresources.StandardRoot.getClassLoaderResource(StandardRoot.java:225)
at org.apache.catalina.loader.WebappClassLoaderBase.findClassInternal(WebappClassLoaderBase.java:2173)
at org.apache.catalina.loader.WebappClassLoaderBase.findClass(WebappClassLoaderBase.java:811)
at org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1260)
at org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1119)
at java.lang.Class.forName0(Class.java:-2)
at java.lang.Class.forName(Class.java:348)
at bsh.classpath.ClassManagerImpl.classForName(null:-1)
at bsh.NameSpace.classForName(null:-1)
at bsh.NameSpace.getImportedClassImpl(null:-1)
at bsh.NameSpace.getClassImpl(null:-1)
at bsh.NameSpace.getClass(null:-1)
at bsh.Name.consumeNextObjectField(null:-1)
at bsh.Name.toObject(null:-1)
at bsh.BSHAmbiguousName.toObject(null:-1)
at bsh.BSHAmbiguousName.toObject(null:-1)
at bsh.BSHPrimaryExpression.eval(null:-1)
at bsh.BSHPrimaryExpression.eval(null:-1)
at bsh.BSHVariableDeclarator.eval(null:-1)
at bsh.BSHTypedVariableDeclaration.eval(null:-1)
at bsh.Interpreter.eval(null:-1)
at bsh.Interpreter.eval(null:-1)
at bsh.Interpreter.eval(null:-1)
at my.package.MyClassFile(MyClassFile:2332)
I see that Groovy is a more popular choice for Java scripting, but I haven't seen many posts saying that bsh can cause deadlocks.
It would be good to get some ideas from SO users.
There's a fix for one deadlock ("GUI does not start in Java 8") in the almost-latest BeanShell version, 2.0b5.
You can open a new issue in the BeanShell project.
It may be connected to ClassManagerImpl:
Bsh has a multi-tiered class loading architecture. No class loader is
created unless/until a class is generated, the classpath is modified,
or a class is reloaded.
Note: we may need some synchronization in here
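If the deadlock really is in BeanShell's class loading, one possible workaround (a sketch under that assumption, not a verified fix) is to funnel all script evaluation through a single lock, so only one thread triggers class resolution at a time:

import bsh.EvalError;
import bsh.Interpreter;

public class SafeScriptRunner {
    // One shared lock serializes all BeanShell evaluation, so class loading
    // triggered by scripts can no longer interleave between threads.
    private static final Object BSH_LOCK = new Object();
    private final Interpreter interpreter = new Interpreter();

    public Object run(String script) throws EvalError {
        synchronized (BSH_LOCK) {
            return interpreter.eval(script);
        }
    }
}

This trades throughput for safety, so it only makes sense if script execution is a small part of each request.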

Hazelcast : Portable Serialization : Incompatible class-definitions with same class-id

I recently upgraded a cluster from 3.7.2 to 3.9.2; shutting down all the boxes.
I'm using Portable Serialization, and the current version number in the config file is 6. Yet, after a cold start, the cluster reports an incompatible class definition.
I've restarted the cluster several times since, yet the same error remains.
How is it possible for the system to remain out of sync for some fields in some classes when the version for the class was upgraded?
Log:
Caused by:
com.hazelcast.nio.serialization.HazelcastSerializationException:
Incompatible class-definitions with same class-id:
ClassDefinition{factoryId=1, classId=8, version=6, fieldDefinitions=[
FieldDefinitionImpl{index=0, fieldName='feature', type=UTF, classId=0, factoryId=0, version=6},
FieldDefinitionImpl{index=1, fieldName='value', type=BOOLEAN, classId=0, factoryId=0, version=6}]}
VS
ClassDefinition{factoryId=1, classId=8, version=6, fieldDefinitions=[
FieldDefinitionImpl{index=0, fieldName='feature', type=UTF, classId=0, factoryId=0, version=0},
FieldDefinitionImpl{index=1, fieldName='value', type=BOOLEAN, classId=0, factoryId=0, version=0}]}
Config:
<serialization>
<portable-version>6</portable-version>
<portable-factories>
<portable-factory factory-id="1">
com.MyPortableFactory
</portable-factory>
</portable-factories>
</serialization>
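For context, the class behind the logged ClassDefinition would look roughly like this; the factory id, class id, and field names are taken from the log above, while the class name itself is hypothetical:

import java.io.IOException;
import com.hazelcast.nio.serialization.Portable;
import com.hazelcast.nio.serialization.PortableReader;
import com.hazelcast.nio.serialization.PortableWriter;

public class FeatureFlag implements Portable {
    private String feature;
    private boolean value;

    @Override public int getFactoryId() { return 1; } // matches factory-id="1"
    @Override public int getClassId()   { return 8; } // classId=8 in the log

    @Override
    public void writePortable(PortableWriter writer) throws IOException {
        writer.writeUTF("feature", feature);  // type=UTF in the log
        writer.writeBoolean("value", value);  // type=BOOLEAN in the log
    }

    @Override
    public void readPortable(PortableReader reader) throws IOException {
        feature = reader.readUTF("feature");
        value = reader.readBoolean("value");
    }
}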
It is a bug in the Hazelcast library that was fixed in 3.9.4 and 3.10+.
Issue:
https://github.com/hazelcast/hazelcast/issues/12733
Fix for 3.10:
https://github.com/hazelcast/hazelcast/pull/12734
Fix for 3.9.4:
https://github.com/hazelcast/hazelcast/pull/12735

JDBC oracle connection error: ORA-12519, TNS:no appropriate service handler found

In my project I am using JDBC to connect to an Oracle 12c instance in a multi-threaded environment. Earlier we had an Oracle 9i instance with ojdbc6 and it worked perfectly, but we recently got this Oracle 12c instance, which gives the following error at the JDBC connection point.
java.sql.SQLException: Listener refused the connection with the following error:
ORA-12519, TNS:no appropriate service handler found
at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:774)
at oracle.jdbc.driver.PhysicalConnection.connect(PhysicalConnection.java:688)
at oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:39)
at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:691)
at java.sql.DriverManager.getConnection(Unknown Source)
at java.sql.DriverManager.getConnection(Unknown Source)
So I thought it might be because of the older driver version we had, and I switched to ojdbc8, which is reported to be compatible with 12c, but the above error is still there. My JDK version is 1.8.
I'd appreciate any input on resolving this issue. Thanks in advance.
Sid, I think the error occurred due to an Oracle initialization parameter setting problem.
Please check with the following command:
SQL> SHOW PARAMETER SESSION
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
java_max_sessionspace_size integer 0
java_soft_sessionspace_limit integer 0
license_max_sessions integer 0
license_sessions_warning integer 0
session_cached_cursors integer 50
session_max_open_files integer 10
sessions integer 600
shared_server_sessions integer
And the other command:
SQL> SHOW PARAMETER PROCESS
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
aq_tm_processes integer 0
db_writer_processes integer 2
gcs_server_processes integer 2
global_txn_processes integer 1
job_queue_processes integer 1000
log_archive_max_processes integer 4
processes integer 150
According to the Oracle documentation, the SESSIONS and TRANSACTIONS initialization parameters should be derived from PROCESSES; by default, SESSIONS = (1.1 * PROCESSES) + 5.
SESSIONS is currently set to 600, but PROCESSES is unchanged at 150, which by the default formula would only support about (1.1 * 150) + 5 = 170 sessions. Too many user sessions try to connect, and Oracle does not have enough background processes to support them.
The direct solution is to set PROCESSES to an appropriate value.
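On the client side it can also help to cap concurrent connections well below the server's PROCESSES limit instead of opening one connection per thread. A minimal sketch using a connection pool (HikariCP is my choice here, not something from the question; the URL and credentials are placeholders):

import java.sql.Connection;
import java.sql.SQLException;
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class OraclePool {
    private static final HikariDataSource DATA_SOURCE;

    static {
        HikariConfig cfg = new HikariConfig();
        cfg.setJdbcUrl("jdbc:oracle:thin:@//dbhost:1521/orcl"); // placeholder URL
        cfg.setUsername("app_user");                            // placeholder
        cfg.setPassword("app_password");                        // placeholder
        // Keep the pool well below the database PROCESSES limit so many
        // application threads cannot exhaust the listener's service handlers.
        cfg.setMaximumPoolSize(20);
        DATA_SOURCE = new HikariDataSource(cfg);
    }

    public static Connection getConnection() throws SQLException {
        return DATA_SOURCE.getConnection(); // threads share the bounded pool
    }
}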

Cassandra NoHostAvailableException: All host(s) tried for query failed in Production

We have 10 Cassandra nodes in production running Cassandra 2.1.8. We recently upgraded to 2.1.8 from 2.1.2, which we had been running on only 3 nodes. First we upgraded the initial 3 nodes from 2.1.2 to 2.1.8 (following the procedure described in Upgrading Cassandra), then we added 7 more nodes running Cassandra 2.1.8 to the cluster and started our client programs. For the first few hours everything worked fine, but then we saw errors like the following in the client program logs:
Thread-0 [29/07/15 17:41:23.356] ERROR com.cleartrail.entityprofiling.engine.InterpretationWriter - Error:com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: [/172.50.33.161:9041, /172.50.33.162:9041, /172.50.33.95:9041, /172.50.33.96:9041, /172.50.33.165:9041, /172.50.33.166:9041, /172.50.33.163:9041, /172.50.33.164:9041, /172.50.33.42:9041, /172.50.33.167:9041] - use getErrors() for details)
at com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:65)
at com.datastax.driver.core.DefaultResultSetFuture.extractCauseFromExecutionException(DefaultResultSetFuture.java:259)
at com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:175)
at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:52)
at com.cleartrail.entityprofiling.engine.InterpretationWriter.WriteInterpretation(InterpretationWriter.java:430)
at com.cleartrail.entityprofiling.engine.Profiler.buildProfile(Profiler.java:1042)
at com.cleartrail.messageconsumer.consumer.KafkaConsumer.run(KafkaConsumer.java:336)
Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: [/172.50.33.161:9041, /172.50.33.162:9041, /172.50.33.95:9041, /172.50.33.96:9041, /172.50.33.165:9041, /172.50.33.166:9041, /172.50.33.163:9041, /172.50.33.164:9041, /172.50.33.42:9041, /172.50.33.167:9041] - use getErrors() for details)
at com.datastax.driver.core.RequestHandler.sendRequest(RequestHandler.java:102)
at com.datastax.driver.core.RequestHandler$1.run(RequestHandler.java:176)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Now, I have double-checked the firewall (as suggested in a few posts), the ports, and the timeouts on both the client and the nodes, and they are all correct.
I am also not closing the connection anywhere in between. I am using batch queries with a batch size of 1000; the queries are updates that increment counters in a table with three columns,
entity , twfwv , cvalue
where entity and twfwv are text columns forming the primary key and cvalue is a counter column.
I even restarted all my nodes (this trick helped in my dev environment when I faced the same exception), but it's not helping. Please suggest what the probable problem could be.
My issue was resolved by checking the errors collection of NoHostAvailableException, as advised by Olivier Michallat in the comments. For me it was the protocol version in the cluster configuration: mine was null, and setting it to 3 fixed the problem.
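For anyone else debugging this, a small sketch of dumping the per-host errors that the exception carries (recent 2.x/3.x versions of the DataStax Java driver; the session and statement are assumed to exist):

import java.net.InetSocketAddress;
import java.util.Map;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.Statement;
import com.datastax.driver.core.exceptions.NoHostAvailableException;

public class QueryDebug {
    // Print the per-host failure that made the driver give up on each node.
    static void executeWithDiagnostics(Session session, Statement statement) {
        try {
            session.execute(statement);
        } catch (NoHostAvailableException e) {
            for (Map.Entry<InetSocketAddress, Throwable> err : e.getErrors().entrySet()) {
                System.err.println(err.getKey() + " -> " + err.getValue());
            }
            throw e;
        }
    }
}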
My issue was resolved by adding a property to set or unset the custom TokenAwarePolicy load-balancing policy my connection was using, and relying on the default instead.
Specifically, I was trying to get a local Spring Boot app talking to a single dockerized Cassandra instance.
Cluster.Builder builder = Cluster.builder()
        .addContactPoints(cassandraProperties.getHosts())
        .withPort(cassandraProperties.getPort())
        .withProtocolVersion(ProtocolVersion.V4)
        .withRetryPolicy(new LoggingRetryPolicy(DefaultRetryPolicy.INSTANCE))
        .withCredentials(cassandraProperties.getUsername(), cassandraProperties.getPassword())
        .withCodecRegistry(codecRegistry);
if (loadBalanced) {
    // Token-aware, DC-aware policy only when explicitly enabled; otherwise
    // the driver falls back to its default load-balancing policy.
    builder.withLoadBalancingPolicy(
            new TokenAwarePolicy(DCAwareRoundRobinPolicy.builder().withLocalDc(localDc).build()));
}
Cluster cluster = builder.build();

can't serialize session for resin request

[13-04-27 11:27:30.890] {resin-port-10.156.76.24:8084-48} SessionImpl[aaaJ3Dcydow_igIqaIj5t,]: can't serialize session
java.lang.IllegalStateException: block Block[Table[mnode:2],72002] is not an index code=0
at com.caucho.db.block.Block.validateIsIndex(Block.java:152)
at com.caucho.db.index.BTree.validateIndex(BTree.java:1727)
at com.caucho.db.index.BTree.lookup(BTree.java:197)
at com.caucho.db.index.BTree.lookup(BTree.java:212)
at com.caucho.db.index.BTree.lookup(BTree.java:168)
at com.caucho.db.sql.IndexExpr.evalIndex(IndexExpr.java:152)
at com.caucho.db.sql.IndexExpr.initRow(IndexExpr.java:104)
at com.caucho.db.sql.Query$TailInitRow.initBlockRow(Query.java:952)
at com.caucho.db.sql.Query.start(Query.java:727)
at com.caucho.db.sql.SelectQuery.execute(SelectQuery.java:209)
at com.caucho.db.sql.SelectQuery.execute(SelectQuery.java:171)
at com.caucho.db.jdbc.PreparedStatementImpl.execute(PreparedStatementImpl.java:357)
at com.caucho.db.jdbc.PreparedStatementImpl.executeQuery(PreparedStatementImpl.java:325)
at com.caucho.server.distcache.MnodeStore.load(MnodeStore.java:535)
at com.caucho.server.distcache.CacheDataBackingImpl.loadLocalEntryValue(CacheDataBackingImpl.java:108)
at com.caucho.server.distcache.DistCacheEntry.loadLocalMnodeValue(DistCacheEntry.java:1189)
at com.caucho.server.distcache.CacheEntryManager.createCacheEntry(CacheEntryManager.java:83)
at com.caucho.server.distcache.CacheStoreManager.getCacheEntry(CacheStoreManager.java:143)
at com.caucho.server.distcache.CacheImpl.getDistCacheEntry(CacheImpl.java:663)
at com.caucho.server.distcache.CacheImpl.put(CacheImpl.java:459)
at com.caucho.server.session.SessionImpl.save(SessionImpl.java:906)
at com.caucho.server.session.SessionImpl.saveAfterRequest(SessionImpl.java:869)
at com.caucho.server.session.SessionImpl.finishRequest(SessionImpl.java:645)
at com.caucho.server.http.AbstractCauchoRequest.finishRequest(AbstractCauchoRequest.java:1047)
at com.caucho.server.http.HttpServletRequestImpl.finishRequest(HttpServletRequestImpl.java:1692)
at com.caucho.server.http.AbstractHttpRequest.finishRequest(AbstractHttpRequest.java:1848)
at com.caucho.server.http.HttpRequest.finishRequest(HttpRequest.java:1487)
at com.caucho.server.http.HttpRequest.handleRequest(HttpRequest.java:870)
at com.caucho.network.listen.TcpSocketLink.dispatchRequest(TcpSocketLink.java:1342)
at com.caucho.network.listen.TcpSocketLink.handleRequest(TcpSocketLink.java:1298)
at com.caucho.network.listen.TcpSocketLink.handleRequestsImpl(TcpSocketLink.java:1282)
at com.caucho.network.listen.TcpSocketLink.handleRequests(TcpSocketLink.java:1190)
at com.caucho.network.listen.TcpSocketLink.handleAcceptTaskImpl(TcpSocketLink.java:989)
at com.caucho.network.listen.ConnectionTask.runThread(ConnectionTask.java:117)
at com.caucho.network.listen.ConnectionTask.run(ConnectionTask.java:93)
at com.caucho.network.listen.SocketLinkThreadLauncher.handleTasks(SocketLinkThreadLauncher.java:169)
at com.caucho.network.listen.TcpSocketAcceptThread.run(TcpSocketAcceptThread.java:61)
at com.caucho.env.thread2.ResinThread2.runTasks(ResinThread2.java:173)
at com.caucho.env.thread2.ResinThread2.run(ResinThread2.java:118)
This kind of error happens on my Resin web server. I have several Resin servers, and only one of them suddenly started throwing this error.
Remove the resin-data directory, then restart Resin.
It looks like you might have database corruption there ...
Possibly related Resin bug report: http://bugs.caucho.com/view.php?id=4956
The resolution for the bug report says "Fixed", so you should probably check that your Resin installation is up to date.
