cassandra java client throws string index out of range exception - java

I am getting the following exception when running a query from the Cassandra CQL Java client.
java.lang.StringIndexOutOfBoundsException: String index out of range: -1
at java.lang.String.substring(String.java:1949)
at org.apache.cassandra.cql.jdbc.CassandraResultSet.createColumn(CassandraResultSet.java:1159)
at org.apache.cassandra.cql.jdbc.CassandraResultSet.populateMetaData(CassandraResultSet.java:220)
at org.apache.cassandra.cql.jdbc.CassandraResultSet.<init>(CassandraResultSet.java:190)
at org.apache.cassandra.cql.jdbc.CassandraStatement.doExecute(CassandraStatement.java:169)
at org.apache.cassandra.cql.jdbc.CassandraStatement.executeQuery(CassandraStatement.java:229)
at com.jolbox.bonecp.StatementHandle.executeQuery(StatementHandle.java:503)
at com.wyh.cluster.node.connect.JDBCConnect.query(JDBCConnect.java:77)
at com.wyh.service.TripWorker.runRouteQuery(TripWorker.java:3557)
at com.wyh.service.TripWorker.getRouteInfo(TripWorker.java:3495)
at com.wyh.service.TripWorker.run(TripWorker.java:607)
at org.jppf.server.node.NodeTaskWrapper.run(NodeTaskWrapper.java:136)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:701)
When I look at CassandraResultSet.java:1159 here:
http://code.google.com/a/apache-extras.org/p/cassandra-jdbc/source/browse/src/main/java/org/apache/cassandra/cql/jdbc/CassandraResultSet.java?spec=svn3be43d719cf12d92d067c20b3d1c23682c25a304&r=61cefd982fd281f80fbb6485754c8f7831193114
it looks more like a bug to me: "String index out of range: -1" typically means substring() was called with the result of an indexOf() that found no match. The CQL query the code generates works fine in the cqlsh client. Is there any workaround for this?
UPDATE
I have now moved to the DataStax driver, where I am using the JAR files listed here:
http://www.datastax.com/documentation/developer/java-driver/2.0/java-driver/reference/settingUpJavaProgEnv_r.html
Now, whenever I try to obtain the session, I get the exception below. Surprisingly, the method does exist in the class.
Caused by: java.lang.NoSuchMethodError: com.google.common.util.concurrent.ListenableFutureTask.create(Ljava/util/concurrent/Callable;)Lcom/google/common/util/concurrent/ListenableFutureTask;
at com.google.common.util.concurrent.AbstractListeningExecutorService.newTaskFor(AbstractListeningExecutorService.java:46)
at com.google.common.util.concurrent.AbstractListeningExecutorService.newTaskFor(AbstractListeningExecutorService.java:37)
at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:131)
at com.google.common.util.concurrent.AbstractListeningExecutorService.submit(AbstractListeningExecutorService.java:58)
at com.datastax.driver.core.SessionManager.addOrRenewPool(SessionManager.java:247)
at com.datastax.driver.core.SessionManager.init(SessionManager.java:66)
at com.datastax.driver.core.Cluster.connect(Cluster.java:199)
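For context, the failure happens inside Cluster.connect(). Below is a minimal sketch (not the original code) of how a session is typically obtained with the 2.0 driver; the contact point is a placeholder. The diagnostic print shows which JAR Guava's ListenableFutureTask is actually loaded from at runtime, since a NoSuchMethodError like this usually means an older Guava on the runtime classpath is shadowing the version whose class file you inspected:
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class ConnectSketch {
    public static void main(String[] args) {
        // Diagnostic: which jar is Guava's ListenableFutureTask loaded from at runtime?
        // An old Guava here (one without the static create(Callable) factory) would
        // produce exactly the NoSuchMethodError above.
        System.out.println(com.google.common.util.concurrent.ListenableFutureTask.class
                .getProtectionDomain().getCodeSource().getLocation());

        // Placeholder contact point; Cluster.connect() is the frame in the stack trace.
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect();
        System.out.println("Connected to " + cluster.getMetadata().getClusterName());
        session.close();
        cluster.close();
    }
}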

Related

HBase client API not connecting to HBase, throwing SocketTimeoutException

I'm trying to connect to HBase using the HBase client API in a Kerberized Cloudera cluster.
Sample code:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableInputFormat;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.spark.api.java.JavaPairRDD;

Configuration hbaseConf = HBaseConfiguration.create();
/*hbaseConf.set("hbase.master", "somenode.net:2181");
hbaseConf.set("hbase.client.scanner.timeout.period", "1200000");
hbaseConf.set("hbase.zookeeper.quorum", "somenode.net,somenode2.net");
hbaseConf.set("zookeeper.znode.parent", "/hbase");*/
hbaseConf.setInt("timeout", 120000);
hbaseConf.set(TableInputFormat.INPUT_TABLE, tableName);
//hbaseConf.addResource("src/main/resources/hbase-site.xml");
UserGroupInformation.setConfiguration(hbaseConf);
UserGroupInformation.loginUserFromKeytab("principal", "keytab");
// ctx is the JavaSparkContext; tableName is the HBase table being scanned
JavaPairRDD<ImmutableBytesWritable, Result> javaPairRdd =
        ctx.newAPIHadoopRDD(hbaseConf, TableInputFormat.class,
                ImmutableBytesWritable.class, Result.class);
I tried placing hbase-site.xml in the Maven project's resources, and also passed it to the spark-submit command using --jars, but nothing works.
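One hedged suggestion, not something the question confirms: Configuration.addResource(String) looks the resource up on the classpath and quietly skips it if it is not found, so loading the cluster's client config from an absolute path on the node can rule that out. The path below is a placeholder:
// addResource(Path) reads the file directly from disk instead of the classpath
hbaseConf.addResource(new org.apache.hadoop.fs.Path("/etc/hbase/conf/hbase-site.xml"));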
Error log:
Caused by: java.net.SocketTimeoutException: callTimeout=60000, callDuration=68545: row '¨namespace:test,,00000000000000' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=hostname.net,60020,1511970022474, seqNum=0
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:159)
at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:65)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Broken pipe
at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
at sun.nio.ch.IOUtil.write(IOUtil.java:65)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:487)
at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63)
at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159)
at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117)
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
at java.io.DataOutputStream.flush(DataOutputStream.java:123)
at org.apache.hadoop.hbase.ipc.IPCUtil.write(IPCUtil.java:278)
at org.apache.hadoop.hbase.ipc.IPCUtil.write(IPCUtil.java:266)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:920)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:873)
at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1242)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:34094)
at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:394)
at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:203)
at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:64)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:360)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:334)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
... 4 more
18/02/26 16:25:42 INFO spark.SparkContext: Invoking stop() from shutdown hook
The problem you are facing is that your environment is not properly set up.
I have answered my own question here

Cassandra query in java program running infinite times even though called only once

I run the following code as a Java program. The query runs infinitely even though it is called only once, and I cannot figure out why. It drops the keyspace on the first run, and when it runs again (which it should not), it repeatedly throws an exception saying the keyspace does not exist.
What should I do to make the query run only once?
Session session = cassandraSessionFactory.getSession();
String query = "drop keyspace " + keyspace_name;
session.execute(query);
session.close();
Here is the exception stack trace, which keeps getting printed numerous times and stops only when the program is manually halted.
23:24:55,654 INFO BackendOperation:75 - Temporary exception during backend operation [messageReading#0:0]. Attempting backoff retry.
com.thinkaurelius.titan.diskstorage.TemporaryBackendException: Temporary failure in storage backend
at com.thinkaurelius.titan.diskstorage.cassandra.astyanax.AstyanaxKeyColumnValueStore.getNamesSlice(AstyanaxKeyColumnValueStore.java:114)
at com.thinkaurelius.titan.diskstorage.cassandra.astyanax.AstyanaxKeyColumnValueStore.getNamesSlice(AstyanaxKeyColumnValueStore.java:78)
at com.thinkaurelius.titan.diskstorage.cassandra.astyanax.AstyanaxKeyColumnValueStore.getSlice(AstyanaxKeyColumnValueStore.java:67)
at com.thinkaurelius.titan.diskstorage.log.kcvs.KCVSLog$MessagePuller$1.call(KCVSLog.java:769)
at com.thinkaurelius.titan.diskstorage.log.kcvs.KCVSLog$MessagePuller$1.call(KCVSLog.java:766)
at com.thinkaurelius.titan.diskstorage.util.BackendOperation.execute(BackendOperation.java:133)
at com.thinkaurelius.titan.diskstorage.util.BackendOperation$1.call(BackendOperation.java:147)
at com.thinkaurelius.titan.diskstorage.util.BackendOperation.executeDirect(BackendOperation.java:56)
at com.thinkaurelius.titan.diskstorage.util.BackendOperation.execute(BackendOperation.java:42)
at com.thinkaurelius.titan.diskstorage.util.BackendOperation.execute(BackendOperation.java:144)
at com.thinkaurelius.titan.diskstorage.log.kcvs.KCVSLog$MessagePuller.run(KCVSLog.java:703)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: com.netflix.astyanax.connectionpool.exceptions.BadRequestException: BadRequestException: [host=127.0.0.1(127.0.0.1):9160, latency=0(0), attempts=1]InvalidRequestException(why:Keyspace keyspace_name does not exist)
at com.netflix.astyanax.thrift.ThriftConverter.ToConnectionPoolException(ThriftConverter.java:159)
at com.netflix.astyanax.thrift.AbstractOperationImpl.execute(AbstractOperationImpl.java:65)
at com.netflix.astyanax.thrift.AbstractOperationImpl.execute(AbstractOperationImpl.java:28)
at com.netflix.astyanax.thrift.ThriftSyncConnectionFactoryImpl$ThriftConnection.execute(ThriftSyncConnectionFactoryImpl.java:153)
at com.netflix.astyanax.connectionpool.impl.AbstractExecuteWithFailoverImpl.tryOperation(AbstractExecuteWithFailoverImpl.java:119)
at com.netflix.astyanax.connectionpool.impl.AbstractHostPartitionConnectionPool.executeWithFailover(AbstractHostPartitionConnectionPool.java:352)
at com.netflix.astyanax.thrift.ThriftColumnFamilyQueryImpl$4.execute(ThriftColumnFamilyQueryImpl.java:538)
at com.thinkaurelius.titan.diskstorage.cassandra.astyanax.AstyanaxKeyColumnValueStore.getNamesSlice(AstyanaxKeyColumnValueStore.java:112)
... 17 more
Caused by: InvalidRequestException(why:Keyspace keyspace_name does not exist)
at org.apache.cassandra.thrift.Cassandra$multiget_slice_result$multiget_slice_resultStandardScheme.read(Cassandra.java:14678)
at org.apache.cassandra.thrift.Cassandra$multiget_slice_result$multiget_slice_resultStandardScheme.read(Cassandra.java:14633)
at org.apache.cassandra.thrift.Cassandra$multiget_slice_result.read(Cassandra.java:14559)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
at org.apache.cassandra.thrift.Cassandra$Client.recv_multiget_slice(Cassandra.java:741)
at org.apache.cassandra.thrift.Cassandra$Client.multiget_slice(Cassandra.java:725)
at com.netflix.astyanax.thrift.ThriftColumnFamilyQueryImpl$4$1.internalExecute(ThriftColumnFamilyQueryImpl.java:544)
at com.netflix.astyanax.thrift.ThriftColumnFamilyQueryImpl$4$1.internalExecute(ThriftColumnFamilyQueryImpl.java:541)
at com.netflix.astyanax.thrift.AbstractOperationImpl.execute(AbstractOperationImpl.java:60)
... 23 more
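A side note rather than a diagnosis of the repeated runs: on Cassandra 2.0+ the drop itself can be made idempotent, which at least silences the "keyspace does not exist" errors while you track down what keeps re-running the task:
// IF EXISTS makes the drop a no-op instead of an error on later runs
session.execute("DROP KEYSPACE IF EXISTS " + keyspace_name);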

How to split a Solr shard in SolrCloud

I am using Solr 4.10.3 in SolrCloud mode. I have one shard and 3 replicas, and an external ZooKeeper ensemble is being used. The number of documents in one index has grown too large, so now I want to create more shards. I tried to use
http://localhost:8983/solr/admin/collections?action=SPLITSHARD&collection=collection1&shard=shard1
But it gives the following error:
Error executing split operation for collection: collection1 parent shard: shard1
java.lang.NullPointerException
at org.apache.solr.cloud.OverseerCollectionProcessor.splitShard(OverseerCollectionProcessor.java:1288)
at org.apache.solr.cloud.OverseerCollectionProcessor.processMessage(OverseerCollectionProcessor.java:563)
at org.apache.solr.cloud.OverseerCollectionProcessor$Runner.run(OverseerCollectionProcessor.java:2629)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Collection: collection1 operation: splitshard failed:org.apache.solr.common.SolrException
at org.apache.solr.cloud.OverseerCollectionProcessor.splitShard(OverseerCollectionProcessor.java:1569)
at org.apache.solr.cloud.OverseerCollectionProcessor.processMessage(OverseerCollectionProcessor.java:563)
at org.apache.solr.cloud.OverseerCollectionProcessor$Runner.run(OverseerCollectionProcessor.java:2629)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException
at org.apache.solr.cloud.OverseerCollectionProcessor.splitShard(OverseerCollectionProcessor.java:1288)
null:org.apache.solr.common.SolrException
null:org.apache.solr.common.SolrException
at org.apache.solr.handler.admin.CollectionsHandler.handleResponse(CollectionsHandler.java:364)
at org.apache.solr.handler.admin.CollectionsHandler.handleSplitShardAction(CollectionsHandler.java:606)
at org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:172)
at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
Where is the problem and what is its solution?
The SPLITSHARD action can only be used when you defined -DnumShards=(some value) the first time you started your cluster. Without numShards, the collection is created with the implicit router and its shards have no hash ranges, which is most likely why splitShard fails with a NullPointerException; splitting requires the compositeId router.
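For example, a typical SolrCloud 4.x startup that sets the property on the first start (a sketch; the ZooKeeper hosts and shard count are placeholders):
java -DzkHost=zk1:2181,zk2:2181,zk3:2181 -DnumShards=1 -jar start.jar
Once the collection has hash ranges, the SPLITSHARD URL from the question should be able to find a range to split.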

adal4j: testAcquireToken fails

I've downloaded adal4j from GitHub and built it with Maven. Compilation passed successfully, but one of the tests, testAcquireToken(), always fails with the same error:
AADSTS70002: Error validating credentials. AADSTS50012: Invalid client secret is provided.
What am I doing wrong? I am interested specifically in the scenario of this test.
My environment: Windows 7 64 bit. Java version "1.7.0_02" 32 bit.
testAcquireToken
java.util.concurrent.ExecutionException: java.lang.AssertionError:
Unexpected method call AuthenticationCallback.onFailure(com.microsoft.aad.adal4j.AuthenticationException: {"error":"invalid_client","error_description":"AADSTS70002: Error validating credentials. AADSTS50012: Invalid client secret is provided.\r\nTrace ID: 285b162f-2655-429a-bde7-3c5042aed74e\r\nCorrelation ID: b5eae0c9-2574-4ffb-ade8-8a25f02df980\r\nTimestamp: 2014-09-09 13:48:26Z"}):
AuthenticationCallback.onSuccess(<any>): expected: 1, actual: 0
at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:252)
at java.util.concurrent.FutureTask.get(FutureTask.java:111)
at com.microsoft.aad.adal4j.AuthenticationContextTest.testAcquireToken(AuthenticationContextTest.java:102)
Caused by: java.lang.AssertionError:
Unexpected method call AuthenticationCallback.onFailure(com.microsoft.aad.adal4j.AuthenticationException: {"error":"invalid_client","error_description":"AADSTS70002: Error validating credentials. AADSTS50012: Invalid client secret is provided.\r\nTrace ID: 285b162f-2655-429a-bde7-3c5042aed74e\r\nCorrelation ID: b5eae0c9-2574-4ffb-ade8-8a25f02df980\r\nTimestamp: 2014-09-09 13:48:26Z"}):
AuthenticationCallback.onSuccess(<any>): expected: 1, actual: 0
at org.easymock.internal.MockInvocationHandler.invoke(MockInvocationHandler.java:44)
at org.easymock.internal.ObjectMethodsFilter.invoke(ObjectMethodsFilter.java:94)
at $Proxy15.onFailure(Unknown Source)
at com.microsoft.aad.adal4j.AuthenticationContext$1.call(AuthenticationContext.java:133)
at com.microsoft.aad.adal4j.AuthenticationContext$1.call(AuthenticationContext.java:112)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:722)
... Removed 30 stack frames
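For what it's worth, the error (AADSTS50012: invalid client secret) is returned by Azure AD itself, which suggests the test reaches the service with credentials that do not match a real app registration. Below is a minimal sketch of the same client-credential scenario called directly; the authority, client id, secret, and resource are all placeholders for your own tenant and registration:
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import com.microsoft.aad.adal4j.AuthenticationContext;
import com.microsoft.aad.adal4j.AuthenticationResult;
import com.microsoft.aad.adal4j.ClientCredential;

public class AcquireTokenSketch {
    public static void main(String[] args) throws Exception {
        ExecutorService service = Executors.newFixedThreadPool(1);
        try {
            // All values below are placeholders for a real tenant and app registration.
            AuthenticationContext context = new AuthenticationContext(
                    "https://login.windows.net/your-tenant.onmicrosoft.com",
                    true, service);
            ClientCredential credential =
                    new ClientCredential("your-client-id", "your-client-secret");
            Future<AuthenticationResult> future = context.acquireToken(
                    "https://graph.windows.net", credential, null);
            // AADSTS50012 surfaces here if the secret does not match the registration
            System.out.println("Token acquired: " + (future.get() != null));
        } finally {
            service.shutdown();
        }
    }
}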

Exception when closing the index

I created a river definition with a scheduler that connects to a database every 5s. I need to add settings (analyzer, filter) for that index, which cannot be done while the index is open. So, as suggested in many threads, I closed the index. Once I close it, I get the following exception.
[2014-08-27 17:43:05,236][ERROR][BulkNodeClient ] after bulk [3] error
org.elasticsearch.indices.IndexMissingException: [db2] missing
at org.elasticsearch.cluster.routing.operation.plain.PlainOperationRouting.indexRoutingTable(PlainOperationRouting.java:245)
at org.elasticsearch.cluster.routing.operation.plain.PlainOperationRouting.shards(PlainOperationRouting.java:259)
at org.elasticsearch.cluster.routing.operation.plain.PlainOperationRouting.shards(PlainOperationRouting.java:255)
at org.elasticsearch.cluster.routing.operation.plain.PlainOperationRouting.indexShards(PlainOperationRouting.java:70)
at org.elasticsearch.action.bulk.TransportBulkAction.executeBulk(TransportBulkAction.java:242)
at org.elasticsearch.action.bulk.TransportBulkAction.doExecute(TransportBulkAction.java:153)
at org.elasticsearch.action.bulk.TransportBulkAction.doExecute(TransportBulkAction.java:65)
at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:65)
at org.elasticsearch.client.node.NodeClient.execute(NodeClient.java:92)
at org.elasticsearch.client.support.AbstractClient.bulk(AbstractClient.java:159)
at org.elasticsearch.action.bulk.BulkProcessor.execute(BulkProcessor.java:294)
at sun.reflect.GeneratedMethodAccessor14.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.xbib.elasticsearch.support.client.bulk.BulkProcessorHelper.flush(BulkProcessorHelper.java:28)
at org.xbib.elasticsearch.support.client.node.BulkNodeClient.flushIngest(BulkNodeClient.java:306)
at org.xbib.elasticsearch.support.client.node.BulkNodeClient.flushIngest(BulkNodeClient.java:37)
at org.xbib.elasticsearch.river.jdbc.strategy.simple.SimpleRiverMouth.flush(SimpleRiverMouth.java:179)
at org.xbib.elasticsearch.plugin.feeder.jdbc.JDBCFeeder.executeTask(JDBCFeeder.java:181)
at org.xbib.elasticsearch.plugin.feeder.AbstractFeeder.newRequest(AbstractFeeder.java:363)
at org.xbib.elasticsearch.plugin.feeder.AbstractFeeder.newRequest(AbstractFeeder.java:53)
at org.xbib.pipeline.AbstractPipeline.call(AbstractPipeline.java:87)
at org.xbib.pipeline.AbstractPipeline.call(AbstractPipeline.java:14)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
[2014-08-27 17:43:05,241][ERROR][Feeder ] error while getting next input: client is closed
org.elasticsearch.ElasticsearchIllegalStateException: client is closed
at org.xbib.elasticsearch.support.client.node.BulkNodeClient.waitForResponses(BulkNodeClient.java:313)
at org.xbib.elasticsearch.support.client.node.BulkNodeClient.waitForResponses(BulkNodeClient.java:37)
at org.xbib.elasticsearch.river.jdbc.strategy.simple.SimpleRiverMouth.flush(SimpleRiverMouth.java:182)
at org.xbib.elasticsearch.plugin.feeder.jdbc.JDBCFeeder.executeTask(JDBCFeeder.java:181)
at org.xbib.elasticsearch.plugin.feeder.AbstractFeeder.newRequest(AbstractFeeder.java:363)
at org.xbib.elasticsearch.plugin.feeder.AbstractFeeder.newRequest(AbstractFeeder.java:53)
at org.xbib.pipeline.AbstractPipeline.call(AbstractPipeline.java:87)
at org.xbib.pipeline.AbstractPipeline.call(AbstractPipeline.java:14)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
How do I solve this? What could be the issue?
It looks like your river is trying to push to the index while it is closed. Change your index settings, reopen the index, and restart the river.
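A minimal sketch of that sequence with the Elasticsearch 1.x Java client (assuming "db2" from the log is the index in question; the analyzer name and settings are placeholders, and client is an org.elasticsearch.client.Client):
// 1) close the index, 2) update the analysis settings, 3) reopen, then restart the river
client.admin().indices().prepareClose("db2").execute().actionGet();
client.admin().indices().prepareUpdateSettings("db2")
        .setSettings(org.elasticsearch.common.settings.ImmutableSettings.settingsBuilder()
                .put("index.analysis.analyzer.my_analyzer.type", "custom")
                .put("index.analysis.analyzer.my_analyzer.tokenizer", "standard")
                .putArray("index.analysis.analyzer.my_analyzer.filter", "lowercase"))
        .execute().actionGet();
client.admin().indices().prepareOpen("db2").execute().actionGet();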
