How to split a Solr shard in SolrCloud - java

I am using Solr 4.10.3 in SolrCloud mode. I have one shard and 3 replicas, and an external ZooKeeper ensemble is being used. The number of documents in my index has grown too large, so now I want to create more shards. I tried to use
http://localhost:8983/solr/admin/collections?action=SPLITSHARD&collection=collection1&shard=shard1
But it gives the following error:
Error executing split operation for collection: collection1 parent shard: shard1
java.lang.NullPointerException
at org.apache.solr.cloud.OverseerCollectionProcessor.splitShard(OverseerCollectionProcessor.java:1288)
at org.apache.solr.cloud.OverseerCollectionProcessor.processMessage(OverseerCollectionProcessor.java:563)
at org.apache.solr.cloud.OverseerCollectionProcessor$Runner.run(OverseerCollectionProcessor.java:2629)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Collection: collection1 operation: splitshard failed:org.apache.solr.common.SolrException
at org.apache.solr.cloud.OverseerCollectionProcessor.splitShard(OverseerCollectionProcessor.java:1569)
at org.apache.solr.cloud.OverseerCollectionProcessor.processMessage(OverseerCollectionProcessor.java:563)
at org.apache.solr.cloud.OverseerCollectionProcessor$Runner.run(OverseerCollectionProcessor.java:2629)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException
at org.apache.solr.cloud.OverseerCollectionProcessor.splitShard(OverseerCollectionProcessor.java:1288)
null:org.apache.solr.common.SolrException
null:org.apache.solr.common.SolrException
at org.apache.solr.handler.admin.CollectionsHandler.handleResponse(CollectionsHandler.java:364)
at org.apache.solr.handler.admin.CollectionsHandler.handleSplitShardAction(CollectionsHandler.java:606)
at org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:172)
at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
Where is the problem and what is its solution?

The SPLITSHARD action can only be used when you have defined -DnumShards=(some value) the first time you start your cluster.
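If numShards was set at startup, the same split can also be issued from Java through SolrJ instead of hitting the URL directly. A minimal sketch against the SolrJ 4.x API; the ZooKeeper hosts below are placeholders for your ensemble:

import org.apache.solr.client.solrj.impl.CloudSolrServer;
import org.apache.solr.client.solrj.request.QueryRequest;
import org.apache.solr.common.params.CollectionParams.CollectionAction;
import org.apache.solr.common.params.ModifiableSolrParams;

public class ShardSplit {
    public static void main(String[] args) throws Exception {
        // Connect through the external ZooKeeper ensemble (replace with your hosts).
        CloudSolrServer server = new CloudSolrServer("zk1:2181,zk2:2181,zk3:2181");
        try {
            ModifiableSolrParams params = new ModifiableSolrParams();
            params.set("action", CollectionAction.SPLITSHARD.toString());
            params.set("collection", "collection1");
            params.set("shard", "shard1");
            QueryRequest request = new QueryRequest(params);
            request.setPath("/admin/collections"); // route the request to the Collections API
            server.request(request); // splitting is long-running; watch the overseer logs
        } finally {
            server.shutdown();
        }
    }
}

On success the split produces two sub-shards, shard1_0 and shard1_1, and the parent shard1 goes inactive.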

Related

Nifi GenerateTableFetch gives error when partition size is set to zero

While setting partitionSize=0 so as to fetch all the rows in the given table for the GenerateTableFetch processor in NiFi, I am getting the following error:
ERROR [Timer-Driven Process Thread-4] o.a.n.p.standard.GenerateTableFetch
GenerateTableFetch[id=d0932834-015d-1000-8224-c230630b6fa6] GenerateTableFetch[id=d0932834-015d-1000-8224-c230630b6fa6] failed to process session due to java.lang.NullPointerException: {}
java.lang.NullPointerException: null
at org.apache.nifi.processors.standard.GenerateTableFetch.onTrigger(GenerateTableFetch.java:300)
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1120)
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
When I set partitionSize > 0, it works. How can I resolve this error?
Unfortunately this is a bug in the processor code for the case where Partition Size is 0.
I have created this JIRA for it:
https://issues.apache.org/jira/browse/NIFI-4286

Cassandra query in java program running infinite times even though called only once

I run the following code as a Java program. The query runs an infinite number of times even though it is called only once, and I am not able to figure out why. It drops the keyspace on the first execution, and when it runs again (which it should not), it keeps throwing an exception, numerous times, that the keyspace does not exist.
What should I do to make the query run only once?
Session session = cassandraSessionFactory.getSession();
String query = "drop keyspace " + keyspace_name;
session.execute(query);
session.close();
Here is the exception stack trace, which keeps getting printed numerous times and stops only when manually halted.
23:24:55,654 INFO BackendOperation:75 - Temporary exception during backend operation [messageReading#0:0]. Attempting backoff retry.
com.thinkaurelius.titan.diskstorage.TemporaryBackendException: Temporary failure in storage backend
at com.thinkaurelius.titan.diskstorage.cassandra.astyanax.AstyanaxKeyColumnValueStore.getNamesSlice(AstyanaxKeyColumnValueStore.java:114)
at com.thinkaurelius.titan.diskstorage.cassandra.astyanax.AstyanaxKeyColumnValueStore.getNamesSlice(AstyanaxKeyColumnValueStore.java:78)
at com.thinkaurelius.titan.diskstorage.cassandra.astyanax.AstyanaxKeyColumnValueStore.getSlice(AstyanaxKeyColumnValueStore.java:67)
at com.thinkaurelius.titan.diskstorage.log.kcvs.KCVSLog$MessagePuller$1.call(KCVSLog.java:769)
at com.thinkaurelius.titan.diskstorage.log.kcvs.KCVSLog$MessagePuller$1.call(KCVSLog.java:766)
at com.thinkaurelius.titan.diskstorage.util.BackendOperation.execute(BackendOperation.java:133)
at com.thinkaurelius.titan.diskstorage.util.BackendOperation$1.call(BackendOperation.java:147)
at com.thinkaurelius.titan.diskstorage.util.BackendOperation.executeDirect(BackendOperation.java:56)
at com.thinkaurelius.titan.diskstorage.util.BackendOperation.execute(BackendOperation.java:42)
at com.thinkaurelius.titan.diskstorage.util.BackendOperation.execute(BackendOperation.java:144)
at com.thinkaurelius.titan.diskstorage.log.kcvs.KCVSLog$MessagePuller.run(KCVSLog.java:703)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: com.netflix.astyanax.connectionpool.exceptions.BadRequestException: BadRequestException: [host=127.0.0.1(127.0.0.1):9160, latency=0(0), attempts=1]InvalidRequestException(why:Keyspace keyspace_name does not exist)
at com.netflix.astyanax.thrift.ThriftConverter.ToConnectionPoolException(ThriftConverter.java:159)
at com.netflix.astyanax.thrift.AbstractOperationImpl.execute(AbstractOperationImpl.java:65)
at com.netflix.astyanax.thrift.AbstractOperationImpl.execute(AbstractOperationImpl.java:28)
at com.netflix.astyanax.thrift.ThriftSyncConnectionFactoryImpl$ThriftConnection.execute(ThriftSyncConnectionFactoryImpl.java:153)
at com.netflix.astyanax.connectionpool.impl.AbstractExecuteWithFailoverImpl.tryOperation(AbstractExecuteWithFailoverImpl.java:119)
at com.netflix.astyanax.connectionpool.impl.AbstractHostPartitionConnectionPool.executeWithFailover(AbstractHostPartitionConnectionPool.java:352)
at com.netflix.astyanax.thrift.ThriftColumnFamilyQueryImpl$4.execute(ThriftColumnFamilyQueryImpl.java:538)
at com.thinkaurelius.titan.diskstorage.cassandra.astyanax.AstyanaxKeyColumnValueStore.getNamesSlice(AstyanaxKeyColumnValueStore.java:112)
... 17 more
Caused by: InvalidRequestException(why:Keyspace keyspace_name does not exist)
at org.apache.cassandra.thrift.Cassandra$multiget_slice_result$multiget_slice_resultStandardScheme.read(Cassandra.java:14678)
at org.apache.cassandra.thrift.Cassandra$multiget_slice_result$multiget_slice_resultStandardScheme.read(Cassandra.java:14633)
at org.apache.cassandra.thrift.Cassandra$multiget_slice_result.read(Cassandra.java:14559)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
at org.apache.cassandra.thrift.Cassandra$Client.recv_multiget_slice(Cassandra.java:741)
at org.apache.cassandra.thrift.Cassandra$Client.multiget_slice(Cassandra.java:725)
at com.netflix.astyanax.thrift.ThriftColumnFamilyQueryImpl$4$1.internalExecute(ThriftColumnFamilyQueryImpl.java:544)
at com.netflix.astyanax.thrift.ThriftColumnFamilyQueryImpl$4$1.internalExecute(ThriftColumnFamilyQueryImpl.java:541)
at com.netflix.astyanax.thrift.AbstractOperationImpl.execute(AbstractOperationImpl.java:60)
... 23 more

Apache phoenix concurrent queries failing with exception

We are trying a bunch of operations like: SELECT -> store row keys into a collection -> then split the collection across worker threads -> each thread creates its own connection using Phoenix JDBC -> performs a SELECT, then, depending on the result, UPSERTs into a different Phoenix table.
I am using an ExecutorService with a fixed thread pool of 4, and I am seeing exceptions as below.
org.apache.phoenix.exception.PhoenixIOException: org.apache.phoenix.exception.PhoenixIOException: The system cannot find the path specified
at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:108)
at org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:538)
at org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
at org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
at org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
at org.apache.phoenix.jdbc.PhoenixResultSet.next(PhoenixResultSet.java:764)
at com.vonage.test.PopulateStagingGWCDRWorker.run(MyCode.java:74)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.concurrent.ExecutionException: org.apache.phoenix.exception.PhoenixIOException: The system cannot find the path specified
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:206)
at org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:534)
... 8 more
Caused by: org.apache.phoenix.exception.PhoenixIOException: The system cannot find the path specified
at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:108)
at org.apache.phoenix.iterate.SpoolingResultIterator.<init>(SpoolingResultIterator.java:122)
at org.apache.phoenix.iterate.SpoolingResultIterator.<init>(SpoolingResultIterator.java:73)
at org.apache.phoenix.iterate.SpoolingResultIterator$SpoolingResultIteratorFactory.newIterator(SpoolingResultIterator.java:67)
at org.apache.phoenix.iterate.ChunkedResultIterator.<init>(ChunkedResultIterator.java:92)
at org.apache.phoenix.iterate.ChunkedResultIterator$ChunkedResultIteratorFactory.newIterator(ChunkedResultIterator.java:72)
at org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:92)
at org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:83)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
... 3 more
Caused by: java.io.IOException: The system cannot find the path specified
at java.io.WinNTFileSystem.createFileExclusively(Native Method)
at java.io.File.createTempFile(File.java:2024)
at org.apache.commons.io.output.DeferredFileOutputStream.thresholdReached(DeferredFileOutputStream.java:176)
at org.apache.phoenix.iterate.SpoolingResultIterator$1.thresholdReached(SpoolingResultIterator.java:98)
at org.apache.commons.io.output.ThresholdingOutputStream.checkThreshold(ThresholdingOutputStream.java:224)
at org.apache.commons.io.output.ThresholdingOutputStream.write(ThresholdingOutputStream.java:92)
at java.io.DataOutputStream.writeByte(DataOutputStream.java:153)
at org.apache.hadoop.io.WritableUtils.writeVLong(WritableUtils.java:273)
at org.apache.hadoop.io.WritableUtils.writeVInt(WritableUtils.java:253)
at org.apache.phoenix.util.TupleUtil.write(TupleUtil.java:146)
at org.apache.phoenix.iterate.SpoolingResultIterator.<init>(SpoolingResultIterator.java:107)
... 10 more
But if I use a pool size of 2 or less, it works fine. I was wondering if there is a property on the client side that can be changed?
In my case I solved this by using the below dependency in pom.xml:
<dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-protocol</artifactId>
    <version>1.1.11</version>
</dependency>
Just to update you, I am running HBase 1.1 and Phoenix 4.7.
Setting phoenix.spool.directory in hbase-site.xml fixes this. Thanks
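The root cause in the trace above is the java.io.IOException thrown from WinNTFileSystem.createFileExclusively: Phoenix spools result sets beyond a size threshold to a temp directory that does not exist on this Windows client. Besides hbase-site.xml, the same client-side property can, to my understanding, also be passed through the JDBC connection Properties; a minimal sketch with a placeholder URL and path:

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class PhoenixSpoolDirExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Must point to an existing, writable directory, otherwise
        // SpoolingResultIterator fails exactly as in the stack trace above.
        props.setProperty("phoenix.spool.directory", "C:/tmp/phoenix-spool");
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181", props)) {
            // run the SELECT / UPSERT workload as usual ...
        }
    }
}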

Jenkins getDescriptor() returns NullPointerException

My descriptor populates two lists. I am trying to call the descriptor with the following:
ZAPDriverDescriptorImpl zapDriver = getDescriptor();
Then I call zapDriver.getAllFormats() and zapDriver.getAllExportFormats() to obtain the two lists. I concatenate them into a list with only the unique elements.
The full class can be found on GitHub.
The problem is that this works when I'm running Jenkins locally (only on the master), but when I do a master-slave setup, this code executes on the slave and runs into a NullPointerException:
ERROR: java.lang.NullPointerException
at hudson.model.AbstractDescribableImpl.getDescriptor(AbstractDescribableImpl.java:41)
at com.github.jenkinsci.zaproxyplugin.ZAPDriver.getDescriptor(ZAPDriver.java:2435)
at com.github.jenkinsci.zaproxyplugin.ZAPDriver.deleteReports(ZAPDriver.java:815)
at com.github.jenkinsci.zaproxyplugin.ZAPDriver.executeZAP(ZAPDriver.java:1141)
at com.github.jenkinsci.zaproxyplugin.ZAPBuilder$ZAPDriverCallable.invoke(ZAPBuilder.java:362)
at com.github.jenkinsci.zaproxyplugin.ZAPBuilder$ZAPDriverCallable.invoke(ZAPBuilder.java:1)
at hudson.FilePath$FileCallableWrapper.call(FilePath.java:2720)
at hudson.remoting.UserRequest.perform(UserRequest.java:121)
at hudson.remoting.UserRequest.perform(UserRequest.java:49)
at hudson.remoting.Request$2.run(Request.java:326)
at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:68)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at hudson.remoting.Engine$1$1.run(Engine.java:69)
at java.lang.Thread.run(Thread.java:745)
You're calling Jenkins.getInstance() from a slave executor. It's not available there because slave machines don't run a Jenkins instance.
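The usual fix is to resolve everything that needs the Jenkins singleton (here, the two descriptor lists) on the master, then hand plain serializable data to the callable that runs on the slave. A minimal sketch; the class and field names are illustrative, not the plugin's actual API:

import java.io.File;
import java.io.IOException;
import java.util.List;
import hudson.remoting.VirtualChannel;
import jenkins.MasterToSlaveFileCallable;

class DeleteReportsCallable extends MasterToSlaveFileCallable<Void> {
    private static final long serialVersionUID = 1L;
    // Computed on the master, where getDescriptor() works, then serialized to the slave.
    private final List<String> formats;

    DeleteReportsCallable(List<String> formats) {
        this.formats = formats;
    }

    @Override
    public Void invoke(File dir, VirtualChannel channel) throws IOException {
        // Safe on the slave: uses only the serialized data, never Jenkins.getInstance().
        for (String format : formats) {
            new File(dir, "report." + format).delete();
        }
        return null;
    }
}

On the master side you would build the list once from the descriptor and then call workspace.act(new DeleteReportsCallable(formats)).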

Nifi "PutSQL" Out of bounds exception

I'm trying to use the "PutSQL" processor to do exactly that.
I modify the flowfile using "ReplaceText" to create an INSERT statement. I have tested that statement in the MySQL database and it works.
Here is the statement:
INSERT INTO monitor.security_nifi (RemoteIPAddress, Timestamp, RequestUrl, Status, Instance)
VALUES ('10.129.2.35', '2016-09-20 16:44:16,347', '/secure/Dashboard.jspa', 'PASSED', '35');
When it goes through the processor, I keep getting this error:
failed to process session due to java.lang.IndexOutOfBoundsException:
Index:1, Size:1: java.lang.IndexOutOfBoundsException: Index:1, Size:1
Here is the stack trace:
2016-09-21 10:41:24,658 WARN [Timer-Driven Process Thread-1] o.a.n.c.t.ContinuallyRunProcessorTask
java.lang.IndexOutOfBoundsException: Index: 1, Size: 1
at java.util.ArrayList.rangeCheck(ArrayList.java:653) ~[na:1.8.0_101]
at java.util.ArrayList.get(ArrayList.java:429) ~[na:1.8.0_101]
at org.apache.nifi.processors.standard.PutSQL.onTrigger(PutSQL.java:304) ~[na:na]
at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27) ~[nifi-api-1.0.0.jar:1.0.0]
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1064) ~[nifi-framework-core-1.0.0.jar:1.0.0]
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136) [nifi-framework-core-1.0.0.jar:1.0.0]
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47) [nifi-framework-core-1.0.0.jar:1.0.0]
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132) [nifi-framework-core-1.0.0.jar:1.0.0]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_101]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [na:1.8.0_101]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [na:1.8.0_101]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [na:1.8.0_101]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_101]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_101]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_101]
This looks like a bug that is hiding an actual issue. Try setting the "Support Fragmented Transactions" property of the PutSQL processor to "false". That should prevent the IndexOutOfBoundsException, but may also bring to light a real issue that can be corrected. If that's true (that there was another issue and it has been corrected), you may be able to restore the property value to "true" and run without error.
