How to tweak Lucene maxClauseCount from Neo4j? - java

Following is our Cypher query for a find-by-ID kind of service:
START n=node:PATIENTS('MEMBER_PLAN_ID:(1 2)') return n
where 1 and 2 are the IDs passed. When we pass around 2000 IDs, the following error occurs:
java.lang.RuntimeException: org.apache.lucene.queryParser.ParseException: Cannot parse 'MEMBER_PLAN_ID:(1 2)': too many boolean clauses
at org.neo4j.index.impl.lucene.IndexType.query(IndexType.java:304)
at org.neo4j.index.impl.lucene.LuceneIndex.query(LuceneIndex.java:227)
at org.neo4j.index.impl.lucene.LuceneIndex.query(LuceneIndex.java:238)
at org.neo4j.cypher.internal.spi.gdsimpl.GDSBackedQueryContext$$anon$1.indexQuery(GDSBackedQueryContext.scala:87)
at org.neo4j.cypher.internal.executionplan.builders.IndexQueryBuilder$$anonfun$getNodeGetter$2.apply(IndexQueryBuilder.scala:83)
at org.neo4j.cypher.internal.executionplan.builders.IndexQueryBuilder$$anonfun$getNodeGetter$2.apply(IndexQueryBuilder.scala:81)
at org.neo4j.cypher.internal.pipes.StartPipe$$anonfun$internalCreateResults$1.apply(StartPipe.scala:36)
at org.neo4j.cypher.internal.pipes.StartPipe$$anonfun$internalCreateResults$1.apply(StartPipe.scala:35)
at scala.collection.Iterator$$anon$13.__AW_hasNext(Iterator.scala:371)
at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala)
at org.neo4j.cypher.internal.ClosingIterator$$anonfun$hasNext$1.apply$mcZ$sp(ClosingIterator.scala:36)
at org.neo4j.cypher.internal.ClosingIterator$$anonfun$hasNext$1.apply(ClosingIterator.scala:35)
at org.neo4j.cypher.internal.ClosingIterator$$anonfun$hasNext$1.apply(ClosingIterator.scala:35)
at org.neo4j.cypher.internal.ClosingIterator.failIfThrows(ClosingIterator.scala:86)
at org.neo4j.cypher.internal.ClosingIterator.hasNext(ClosingIterator.scala:35)
at org.neo4j.cypher.PipeExecutionResult.hasNext(PipeExecutionResult.scala:157)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
at scala.collection.convert.Wrappers$IteratorWrapper.hasNext(Wrappers.scala:29)
at org.neo4j.cypher.PipeExecutionResult$$anon$1.hasNext(PipeExecutionResult.scala:73)
at net.ahm.graph.dao.PatientDAO.__AW_findPatients(PatientDAO.java:376)
Caused by: org.apache.lucene.queryParser.ParseException: Cannot parse 'MEMBER_PLAN_ID:(1 2)': too many boolean clauses
at org.apache.lucene.queryParser.QueryParser.parse(QueryParser.java:221)
at org.neo4j.index.impl.lucene.IndexType.query(IndexType.java:300)
... 38 more
Caused by: org.apache.lucene.search.BooleanQuery$TooManyClauses: maxClauseCount is set to 1024
at org.apache.lucene.search.BooleanQuery.add(BooleanQuery.java:136)
at org.apache.lucene.queryParser.QueryParser.getBooleanQuery(QueryParser.java:958)
at org.apache.lucene.queryParser.QueryParser.getBooleanQuery(QueryParser.java:933)
at org.apache.lucene.queryParser.QueryParser.Query(QueryParser.java:1281)
at org.apache.lucene.queryParser.QueryParser.Clause(QueryParser.java:1323)
at org.apache.lucene.queryParser.QueryParser.Query(QueryParser.java:1245)
at org.apache.lucene.queryParser.QueryParser.TopLevelQuery(QueryParser.java:1234)
at org.apache.lucene.queryParser.QueryParser.parse(QueryParser.java:206)
... 39 more
We see this in the stack trace: maxClauseCount is set to 1024
Is there a way to configure this limit from Neo4j while using Cypher?

However, if I add this line to our service, things work fine:
BooleanQuery.setMaxClauseCount(20000);
We did some load tests where 1000 concurrent users hit the service with 10,000 IDs each. We are not seeing the service crash or any unexpected performance problems afterwards.
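A minimal sketch of where that call could live (the wrapper class here is hypothetical; BooleanQuery.setMaxClauseCount is Lucene's static setter, so it has to run once before the first index-backed Cypher query is executed):

import org.apache.lucene.search.BooleanQuery;

public class LuceneLimitBootstrap {
    // Hypothetical startup hook: raise the global Lucene clause limit once,
    // before any index-backed Cypher query runs. 20000 is the value from our
    // load tests; choose something that covers your largest ID list.
    public static void raiseMaxClauseCount() {
        BooleanQuery.setMaxClauseCount(20000);
    }
}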

Try this (untested):
System.setProperty("org.apache.lucene.maxClauseCount", "3000");

Related

How to deal with H2 database's inability to deal with interrupts

I was occasionally seeing that my embedded H2 database appeared to get corrupted. In particular, I had amended an ExecutorService so that if a task took too long it would cancel the task. The task would be cancelled okay, but subsequent database access then failed with exceptions such as:
23/07/2019 14.23.31:BST:DeleteDuplicatesController:start:SEVERE: commit failed
org.hibernate.TransactionException: commit failed
at org.hibernate.engine.transaction.spi.AbstractTransactionImpl.commit(AbstractTransactionImpl.java:187)
at com.jthink.songkong.db.ReportCache.save(ReportCache.java:46)
at com.jthink.songkong.reports.AbstractReport.setReportDatabaseObject(AbstractReport.java:365)
at com.jthink.songkong.reports.DeleteDuplicatesReport.setReportDatabaseObject(DeleteDuplicatesReport.java:333)
at com.jthink.songkong.reports.DeleteDuplicatesReport.closeReport(DeleteDuplicatesReport.java:377)
at com.jthink.songkong.analyse.toplevelanalyzer.DeleteDuplicatesController.deleteAnyDups(DeleteDuplicatesController.java:606)
at com.jthink.songkong.analyse.toplevelanalyzer.DeleteDuplicatesController.start(DeleteDuplicatesController.java:665)
at com.jthink.songkong.ui.swingworker.DeleteDuplicates.doInBackground(DeleteDuplicates.java:43)
at com.jthink.songkong.ui.swingworker.DeleteDuplicates.doInBackground(DeleteDuplicates.java:20)
at javax.swing.SwingWorker$1.call(SwingWorker.java:295)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at javax.swing.SwingWorker.run(SwingWorker.java:334)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.hibernate.TransactionException: unable to commit against JDBC connection
at org.hibernate.engine.transaction.internal.jdbc.JdbcTransaction.doCommit(JdbcTransaction.java:116)
at org.hibernate.engine.transaction.spi.AbstractTransactionImpl.commit(AbstractTransactionImpl.java:180)
... 14 more
Caused by: org.h2.jdbc.JdbcSQLNonTransientException: General error: "java.lang.IllegalStateException: Reading from nio:C:/Users/Paul/AppData/Roaming/SongKong/Database/Database.mv.db failed; file length -1 read length 4096 at 1541494 [1.4.199/1]"; SQL statement:
COMMIT [50000-199]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:502)
at org.h2.message.DbException.getJdbcSQLException(DbException.java:427)
at org.h2.message.DbException.get(DbException.java:194)
at org.h2.message.DbException.convert(DbException.java:347)
at org.h2.command.Command.executeUpdate(Command.java:280)
at org.h2.jdbc.JdbcConnection.commit(JdbcConnection.java:542)
at com.mchange.v2.c3p0.impl.NewProxyConnection.commit(NewProxyConnection.java:1284)
at org.hibernate.engine.transaction.internal.jdbc.JdbcTransaction.doCommit(JdbcTransaction.java:112)
... 15 more
Caused by: java.lang.IllegalStateException: Reading from nio:C:/Users/Paul/AppData/Roaming/SongKong/Database/Database.mv.db failed; file length -1 read length 4096 at 1541494 [1.4.199/1]
at org.h2.mvstore.DataUtils.newIllegalStateException(DataUtils.java:883)
at org.h2.mvstore.DataUtils.readFully(DataUtils.java:420)
at org.h2.mvstore.FileStore.readFully(FileStore.java:98)
at org.h2.mvstore.MVStore.readBufferForPage(MVStore.java:1048)
at org.h2.mvstore.MVStore.readPage(MVStore.java:2186)
at org.h2.mvstore.MVMap.readPage(MVMap.java:554)
at org.h2.mvstore.Page$NonLeaf.getChildPage(Page.java:1086)
at org.h2.mvstore.Page.get(Page.java:221)
at org.h2.mvstore.MVMap.get(MVMap.java:402)
at org.h2.mvstore.MVMap.get(MVMap.java:389)
at org.h2.mvstore.MVStore.getMapName(MVStore.java:2737)
at org.h2.mvstore.MVStore.renameMap(MVStore.java:2650)
at org.h2.mvstore.tx.TransactionStore.commit(TransactionStore.java:453)
at org.h2.mvstore.tx.Transaction.commit(Transaction.java:389)
at org.h2.engine.Session.commit(Session.java:691)
at org.h2.command.dml.TransactionCommand.update(TransactionCommand.java:46)
at org.h2.command.CommandContainer.update(CommandContainer.java:133)
at org.h2.command.Command.executeUpdate(Command.java:267)
... 18 more
Caused by: java.nio.channels.ClosedChannelException
at sun.nio.ch.FileChannelImpl.ensureOpen(FileChannelImpl.java:110)
at sun.nio.ch.FileChannelImpl.read(FileChannelImpl.java:721)
at org.h2.store.fs.FileNio.read(FilePathNio.java:74)
at org.h2.mvstore.DataUtils.readFully(DataUtils.java:406)
... 34 more
23/07/2019 14.23.31:BST:Errors:addError:SEVERE: Adding Error:commit failed
I have since found this issue.
Basically, if H2 is used in embedded mode and it receives an interrupt, then all subsequent access fails until the connection pool is closed and reopened. In the example I give of a process having to be cancelled because it appears to be stuck, there is no solution except interrupting it.
I also have another case where the controller thread does not usually do any database work directly, so I was struggling to see why an interrupt of that thread would cause database errors. I have now worked out that the issue is that I'm using an ExecutorService with a fixed-size BlockingQueue (so that we don't build up a big queue in memory), but if the queue gets full then the new task is actually executed by the controller thread (because of CallerRunsPolicy), so the controller thread can be making calls to the database after all, as the sketch below illustrates.
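A minimal sketch of that executor setup (pool size, queue capacity and the task body are placeholders): with a bounded queue and CallerRunsPolicy, a full queue makes the submitting controller thread run the task itself.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ControllerRunsTasksDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor executor = new ThreadPoolExecutor(
                4, 4,                                       // fixed pool size (placeholder)
                0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(100),              // bounded queue, so it cannot grow unbounded in memory
                new ThreadPoolExecutor.CallerRunsPolicy()); // when the queue is full, the caller runs the task

        // If the queue is full, this call runs the task on the submitting (controller)
        // thread itself, so the controller thread can end up doing database work after all.
        executor.execute(() -> doDatabaseWork());
        executor.shutdown();
    }

    private static void doDatabaseWork() {
        // placeholder for the Hibernate/H2 work the real tasks perform
    }
}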
I'm using H2 with Hibernate, and in both cases calling the following immediately after the interrupt
HibernateUtil.closeFactory();
seems to solve the issue. However, I guess this means that any other threads with Hibernate sessions will be broken, but at least newly opened sessions will be okay. So I'm not particularly happy with this workaround; any other ideas?
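For reference, a simplified sketch of how a closeFactory() helper like this could be implemented (HibernateUtil is my own helper, not a Hibernate API, and the lazy rebuild on next use is an assumption):

import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public final class HibernateUtil {
    private static SessionFactory factory;

    // Close the SessionFactory (and its pooled H2 connections) after an interrupt.
    public static synchronized void closeFactory() {
        if (factory != null) {
            factory.close();
            factory = null;
        }
    }

    // Newly opened sessions trigger a rebuild, so they get fresh connections.
    public static synchronized SessionFactory getFactory() {
        if (factory == null) {
            factory = new Configuration().configure().buildSessionFactory();
        }
        return factory;
    }
}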
Using H2 as a server is not a solution, since the whole point of using H2 was to have an embedded database self-contained within the application.
Although not properly documented, using the async protocol allows a connection to be interrupted without breaking all other connections.
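If the "async protocol" refers to H2's async: file-system prefix (that is an assumption on my part), enabling it is only a change to the JDBC URL; a minimal sketch, reusing the database path from the stack trace above:

import java.sql.Connection;
import java.sql.DriverManager;

public class H2AsyncConnect {
    public static void main(String[] args) throws Exception {
        // Same database file as in the stack trace, but opened through H2's
        // async file system instead of the default nio one.
        // "sa" / "" are H2's defaults; substitute the application's real credentials.
        String url = "jdbc:h2:async:C:/Users/Paul/AppData/Roaming/SongKong/Database/Database";
        try (Connection conn = DriverManager.getConnection(url, "sa", "")) {
            conn.createStatement().execute("SELECT 1");
        }
    }
}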

Phoenix hbase java.util.concurrent.CancellationException error

I have used a batch process to execute some operations on an HBase table using Phoenix every 2 seconds, and that is reflected on the web admin dashboard. But when I try to check the dashboard, I sometimes get this error:
[response] => error
[exceptions] => Array
(
[0] => java.lang.RuntimeException: org.apache.phoenix.exception.PhoenixIOException
at org.apache.calcite.avatica.jdbc.JdbcResultSet.create(JdbcResultSet.java:98)
at org.apache.calcite.avatica.jdbc.JdbcMeta.execute(JdbcMeta.java:867)
at org.apache.calcite.avatica.remote.LocalService.apply(LocalService.java:268)
at org.apache.calcite.avatica.remote.Service$ExecuteRequest.accept(Service.java:1024)
at org.apache.calcite.avatica.remote.Service$ExecuteRequest.accept(Service.java:1000)
at org.apache.calcite.avatica.remote.AbstractHandler.apply(AbstractHandler.java:95)
at org.apache.calcite.avatica.remote.JsonHandler.apply(JsonHandler.java:52)
at org.apache.calcite.avatica.server.AvaticaJsonHandler.handle(AvaticaJsonHandler.java:129)
at org.apache.phoenix.shaded.org.eclipse.jetty.server.handler.HandlerList.handle(HandlerList.java:52)
at org.apache.phoenix.shaded.org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
at org.apache.phoenix.shaded.org.eclipse.jetty.server.Server.handle(Server.java:499)
at org.apache.phoenix.shaded.org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:311)
at org.apache.phoenix.shaded.org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
at org.apache.phoenix.shaded.org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:544)
at org.apache.phoenix.shaded.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
at org.apache.phoenix.shaded.org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.phoenix.exception.PhoenixIOException
at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:111)
at org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:808)
at org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:714)
at org.apache.phoenix.iterate.MergeSortResultIterator.getMinHeap(MergeSortResultIterator.java:72)
at org.apache.phoenix.iterate.MergeSortResultIterator.minIterator(MergeSortResultIterator.java:93)
at org.apache.phoenix.iterate.MergeSortResultIterator.next(MergeSortResultIterator.java:58)
at org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
at org.apache.phoenix.iterate.DelegateResultIterator.next(DelegateResultIterator.java:44)
at org.apache.phoenix.iterate.OffsetResultIterator.next(OffsetResultIterator.java:45)
at org.apache.phoenix.iterate.DelegateResultIterator.next(DelegateResultIterator.java:44)
at org.apache.phoenix.iterate.LimitingResultIterator.next(LimitingResultIterator.java:47)
at org.apache.phoenix.jdbc.PhoenixResultSet.next(PhoenixResultSet.java:778)
at org.apache.calcite.avatica.jdbc.JdbcResultSet.frame(JdbcResultSet.java:133)
at org.apache.calcite.avatica.jdbc.JdbcResultSet.create(JdbcResultSet.java:91)
... 16 more
Caused by: java.util.concurrent.CancellationException
at java.util.concurrent.FutureTask.report(FutureTask.java:121)
at java.util.concurrent.FutureTask.get(FutureTask.java:206)
at org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:766)
... 28 more
)
Each batch process takes 30 records from the table and updates some information in the table. I have used a PHP JSON API to fetch records from HBase using Phoenix. Please help me.
Thanks in advance.

How to determine maximum amount of data that can be handled by 1 run of MR2 job?

I am running a YARN job on a CDH 5.3 cluster. I have the default configurations:
No of nodes=3
yarn.nodemanager.resource.cpu-vcores=8
yarn.nodemanager.resource.memory-mb=10GB
mapreduce.[map/reduce].cpu.vcores=1
mapreduce.[map/reduce].memory.mb=1GB
mapreduce.[map/reduce].java.opts.max.heap=756MB
While doing a run on 4.5 GB of CSV data spread over 11 files, I get the following error:
2015-10-12 05:21:04,507 FATAL [IPC Server handler 18 on 50388] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Task: attempt_1444634391081_0005_r_000000_0 - exited : org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in shuffle in fetcher#9
at org.apache.hadoop.mapreduce.task.reduce.Shuffle.run(Shuffle.java:134)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:376)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
Caused by: java.lang.OutOfMemoryError: Java heap space
at org.apache.hadoop.io.BoundedByteArrayOutputStream.<init>(BoundedByteArrayOutputStream.java:56)
at org.apache.hadoop.io.BoundedByteArrayOutputStream.<init>(BoundedByteArrayOutputStream.java:46)
at org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput.<init>(InMemoryMapOutput.java:63)
at org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl.unconditionalReserve(MergeManagerImpl.java:303)
at org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl.reserve(MergeManagerImpl.java:293)
at org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyMapOutput(Fetcher.java:511)
at org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyFromHost(Fetcher.java:329)
at org.apache.hadoop.mapreduce.task.reduce.Fetcher.run(Fetcher.java:193)
Then I changed mapreduce.reduce.memory.mb from 1GB to 3GB and the job ran fine.
So how do I decide how much data at most can be handled by one reducer, assuming that all the input to the mappers has to be processed by one reducer only?
Generally there is no limit on the amount of data that can be processed by a single reducer. The memory allocation can slow down the process, but it should not restrict or fail to process the data. I believe that after allocating the minimum memory to the reducer, data processing should not be an issue. Can you please share a code snippet so we can check for any memory-leak issues?
We used to process 6+ GB files in a single reducer without any issues. I believe you might be having memory-leak issues.
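For reference, the memory change described in the question can also be applied per job from the driver rather than cluster-wide; a minimal sketch (the property names are the standard MR2 ones, and the heap value is an assumed ~80% of the container size):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class TunedReduceJob {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Reducer container raised from the 1 GB default to 3 GB, as in the question.
        conf.set("mapreduce.reduce.memory.mb", "3072");
        // The JVM heap should stay below the container size; ~80% of 3 GB is assumed here.
        conf.set("mapreduce.reduce.java.opts", "-Xmx2458m");
        Job job = Job.getInstance(conf, "tuned-reduce-job");
        // ... set mapper/reducer classes and input/output paths as usual, then job.waitForCompletion(true)
    }
}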

Cassandra NoHostAvailableException: All host(s) tried for query failed in Production

We have 10 Cassandra nodes in production running Cassandra 2.1.8. We recently upgraded to version 2.1.8. Previously we were using only 3 nodes running Cassandra 2.1.2. First we upgraded the initial 3 nodes from 2.1.2 to 2.1.8 (following the procedure described in Upgrading Cassandra). Then we added 7 more nodes running Cassandra 2.1.8 to the cluster. Then we started our client programs. For the first few hours everything worked fine, but after a few hours we saw some errors in the client program logs like:
Thread-0 [29/07/15 17:41:23.356] ERROR com.cleartrail.entityprofiling.engine.InterpretationWriter - Error:com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: [/172.50.33.161:9041, /172.50.33.162:9041, /172.50.33.95:9041, /172.50.33.96:9041, /172.50.33.165:9041, /172.50.33.166:9041, /172.50.33.163:9041, /172.50.33.164:9041, /172.50.33.42:9041, /172.50.33.167:9041] - use getErrors() for details)
at com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:65)
at com.datastax.driver.core.DefaultResultSetFuture.extractCauseFromExecutionException(DefaultResultSetFuture.java:259)
at com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:175)
at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:52)
at com.cleartrail.entityprofiling.engine.InterpretationWriter.WriteInterpretation(InterpretationWriter.java:430)
at com.cleartrail.entityprofiling.engine.Profiler.buildProfile(Profiler.java:1042)
at com.cleartrail.messageconsumer.consumer.KafkaConsumer.run(KafkaConsumer.java:336)
Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: [/172.50.33.161:9041, /172.50.33.162:9041, /172.50.33.95:9041, /172.50.33.96:9041, /172.50.33.165:9041, /172.50.33.166:9041, /172.50.33.163:9041, /172.50.33.164:9041, /172.50.33.42:9041, /172.50.33.167:9041] - use getErrors() for details)
at com.datastax.driver.core.RequestHandler.sendRequest(RequestHandler.java:102)
at com.datastax.driver.core.RequestHandler$1.run(RequestHandler.java:176)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Now, I have double-checked the firewall (as suggested in a few posts), ports, and timeouts on the client as well as on the nodes, and they are all correct.
I am also not closing the connection anywhere in between. I am using batch queries with a batch size of 1000, and the queries are update queries updating counters in my table with three columns
entity, twfwv, cvalue
where the entity and twfwv columns are text and form the primary key, and cvalue is a counter column.
I even restarted all my nodes (because this trick helped me in my dev environment when I faced the same exception), but it is not helping. Please suggest what the probable problem here could be.
My issue was resolved by checking the errors collection of NoHostAvailableException as advised by Olivier Michallat in the comments. For me it was the protocol version on the cluster configuration. Mine was null, setting it to 3 fixed the problem.
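A minimal sketch of both steps with a 2.1-era Java driver (the contact point is a placeholder taken from the log above, and the wrapper class is hypothetical): inspect getErrors() for the per-host cause and pin the protocol version explicitly.

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ProtocolVersion;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.exceptions.NoHostAvailableException;

public class ProtocolVersionCheck {
    public static void main(String[] args) {
        // Pin the protocol version explicitly instead of leaving it unset,
        // as described in the answer above.
        Cluster cluster = Cluster.builder()
                .addContactPoint("172.50.33.161")
                .withProtocolVersion(ProtocolVersion.V3)
                .build();
        try (Session session = cluster.connect()) {
            session.execute("SELECT release_version FROM system.local");
        } catch (NoHostAvailableException e) {
            // The per-host causes are what point to a protocol-version mismatch.
            System.err.println(e.getErrors());
        } finally {
            cluster.close();
        }
    }
}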
My issue was resolved by removing the custom load-balancing TokenAwarePolicy my connection was using (made switchable via a property) and relying on the default policy.
Specifically, I was trying to get a local spring boot app talking to a single dockerized Cassandra instance.
// Connection setup; cassandraProperties, codecRegistry, loadBalanced and localDc
// come from the application's own configuration.
Cluster.Builder builder = Cluster.builder()
        .addContactPoints(cassandraProperties.getHosts())
        .withPort(cassandraProperties.getPort())
        .withProtocolVersion(ProtocolVersion.V4)
        .withRetryPolicy(new LoggingRetryPolicy(DefaultRetryPolicy.INSTANCE))
        .withCredentials(cassandraProperties.getUsername(), cassandraProperties.getPassword())
        .withCodecRegistry(codecRegistry);
if (loadBalanced) {
    // Only apply the token-aware, DC-aware policy when explicitly enabled;
    // otherwise the driver's default load-balancing policy is used.
    builder.withLoadBalancingPolicy(
            new TokenAwarePolicy(DCAwareRoundRobinPolicy.builder().withLocalDc(localDc).build()));
}
Cluster cluster = builder.build();

Cassandra AssertionError

I received an OOM exception at one point in Cassandra. Mine is a single instance running on a modestly powered server, and I was doing some load testing, so no surprise there.
But I have subsequently been unable to use the instance. When I list the keyspaces, only "system" is shown. But when I try to recreate the keyspace I was testing with, Hector responds with the dreaded "All host pools marked down. Retry burden pushed out to client." message, and the Cassandra log has the following stack trace:
ERROR [MigrationStage:1] 2012-04-27 20:47:00,863 AbstractCassandraDaemon.java (line 134) Exception in thread Thread[MigrationStage:1,5,main]
java.lang.AssertionError
at org.apache.cassandra.db.DefsTable.updateKeyspace(DefsTable.java:441)
at org.apache.cassandra.db.DefsTable.mergeKeyspaces(DefsTable.java:339)
at org.apache.cassandra.db.DefsTable.mergeSchema(DefsTable.java:269)
at org.apache.cassandra.service.MigrationManager$1.call(MigrationManager.java:214)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
ERROR [Thrift:9] 2012-04-27 20:47:00,864 CustomTThreadPoolServer.java (line 204) Error occurred during processing of message.
java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.AssertionError
at org.apache.cassandra.utils.FBUtilities.waitOnFuture(FBUtilities.java:372)
at org.apache.cassandra.service.MigrationManager.announce(MigrationManager.java:191)
at org.apache.cassandra.service.MigrationManager.announceNewKeyspace(MigrationManager.java:129)
at org.apache.cassandra.thrift.CassandraServer.system_add_keyspace(CassandraServer.java:987)
at org.apache.cassandra.thrift.Cassandra$Processor$system_add_keyspace.getResult(Cassandra.java:3370)
at org.apache.cassandra.thrift.Cassandra$Processor$system_add_keyspace.getResult(Cassandra.java:3358)
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:32)
at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:34)
at org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:186)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
Caused by: java.util.concurrent.ExecutionException: java.lang.AssertionError
at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
at java.util.concurrent.FutureTask.get(FutureTask.java:83)
at org.apache.cassandra.utils.FBUtilities.waitOnFuture(FBUtilities.java:368)
... 11 more
Caused by: java.lang.AssertionError
at org.apache.cassandra.db.DefsTable.updateKeyspace(DefsTable.java:441)
at org.apache.cassandra.db.DefsTable.mergeKeyspaces(DefsTable.java:339)
at org.apache.cassandra.db.DefsTable.mergeSchema(DefsTable.java:269)
at org.apache.cassandra.service.MigrationManager$1.call(MigrationManager.java:214)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
... 3 more
The old keyspace was still in the data directory, so I moved it, but that didn't help. It seems that the system data still has an invalid reference somewhere. Does anyone know how to fix this?
Edit: from the CLI, a "describe cluster;" only describes the "system" keyspace. But when I "use system;" and then "list schema_keyspaces;", the following is displayed:
Using default limit of 100
-------------------
RowKey: mango
=> (column=durable_writes, value=true, timestamp=29127788177516974)
=> (column=name, value=mango, timestamp=29127788177516974)
=> (column=strategy_class, value=org.apache.cassandra.locator.SimpleStrategy, timestamp=29127788177516974)
=> (column=strategy_options, value={"replication_factor":"1"}, timestamp=29127788177516974)
1 Row Returned.
Elapsed time: 1107 msec(s).
"mango" is the keyspace that i can no longer access, but it is still in there to some degree. Is there any way to fix it?
The problem almost certainly is that the recreated keyspace is inconsistent with the commit log or data stored with the original definition. Shut down the Cassandra server and clear out the commitlog, saved_caches, and data directory corresponding to the keyspace. The locations of these directories are in cassandra.yaml - look for data_file_directories, saved_caches_directory, and commitlog_directory.
This problem is due to inconsistency, and you can follow these steps:
1) In your case it is OK to clear the "data", "saved_caches" and "commitlog" directories, as you don't have any critical data or other keyspaces.
2) In scenarios where you have some critical data and cannot delete the above-mentioned directories, do the following:
Use nodetool drain to empty the commitlog on all the nodes of the cluster.
Then delete all the "LocationInfo*" files from the "/data/system" directories and restart the cluster.
