My descriptor populates two lists. I am trying to call the descriptor with the following:
ZAPDriverDescriptorImpl zapDriver = getDescriptor();
Then I call zapDriver.getAllFormats() and zapDriver.getAllExportFormats() to obtain the two lists, and I concatenate them into a single list containing only the unique elements.
The full class can be found on GitHub.
The problem is that this works when I run Jenkins locally (only on the master), but in a master-slave setup this code executes on the slave and runs into a NullPointerException:
ERROR: java.lang.NullPointerException
at hudson.model.AbstractDescribableImpl.getDescriptor(AbstractDescribableImpl.java:41)
at com.github.jenkinsci.zaproxyplugin.ZAPDriver.getDescriptor(ZAPDriver.java:2435)
at com.github.jenkinsci.zaproxyplugin.ZAPDriver.deleteReports(ZAPDriver.java:815)
at com.github.jenkinsci.zaproxyplugin.ZAPDriver.executeZAP(ZAPDriver.java:1141)
at com.github.jenkinsci.zaproxyplugin.ZAPBuilder$ZAPDriverCallable.invoke(ZAPBuilder.java:362)
at com.github.jenkinsci.zaproxyplugin.ZAPBuilder$ZAPDriverCallable.invoke(ZAPBuilder.java:1)
at hudson.FilePath$FileCallableWrapper.call(FilePath.java:2720)
at hudson.remoting.UserRequest.perform(UserRequest.java:121)
at hudson.remoting.UserRequest.perform(UserRequest.java:49)
at hudson.remoting.Request$2.run(Request.java:326)
at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:68)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at hudson.remoting.Engine$1$1.run(Engine.java:69)
at java.lang.Thread.run(Thread.java:745)
You're calling Jenkins.getInstance() from a slave executor. It's not available there because slave machines don't run a Jenkins instance.
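For illustration, a minimal sketch of one way to avoid this, assuming the two lists are lists of Strings (the variable names here are made up): resolve everything that needs the Jenkins instance on the master, then pass plain serializable data into the slave-side Callable.

import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Runs on the master, where Jenkins.getInstance() is available.
ZAPDriverDescriptorImpl zapDriver = getDescriptor();
Set<String> unique = new LinkedHashSet<>();   // preserves order, drops duplicates
unique.addAll(zapDriver.getAllFormats());
unique.addAll(zapDriver.getAllExportFormats());
List<String> allFormats = new ArrayList<>(unique);  // plain Serializable payload
// Store 'allFormats' in a field of the FileCallable (ZAPDriverCallable) before
// invoking it, so the code that executes on the slave never needs to touch
// Jenkins.getInstance().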
I am trying to iterate through a DataFrame to create another one. In this example I am not using data from the first one; it is just to show what I am trying to do. The real goal, however, is to use the first DataFrame to generate a new, much bigger one based on its data.
Whatever I try in the void function, I always get the error in the foreach.
Sample DataFrame to iterate over:
Dataset<Row> obtencionRents = spark.createDataFrame(Arrays.asList(
        new testRentabilidades("0000A0", "PORTAL", "4-ANUAL", "asdasd", "asdasd"),
        new testRentabilidades("00A00", "PORTAL", "", "asdasd", "sdasd"),
        new testRentabilidades("00A", "PORTAL", "4-ANUAL", "asdasd", "asdasd")
), testRentabilidades.class);
The foreach function used to iterate over the sample DataFrame:
obtencionRents.toJavaRDD().foreach(new VoidFunction<Row>() {
    @Override
    public void call(Row r) throws Exception {
        // add registers to new collection/arraylist/etc.
    }
});
The error I get:
2021-11-03 17:34:41 INFO DAGScheduler:54 - Job 0 failed: foreach at CargarRentabilidades.java:154, took 0,812094 s
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost, executor driver): java.lang.NullPointerException
at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:139)
at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:137)
at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:73)
at org.apache.spark.sql.SparkSession.createDataFrame(SparkSession.scala:419)
at batchload.proceso.builder.CargarRentabilidades$1.call(CargarRentabilidades.java:157)
at batchload.proceso.builder.CargarRentabilidades$1.call(CargarRentabilidades.java:154)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$foreach$1.apply(JavaRDDLike.scala:351)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$foreach$1.apply(JavaRDDLike.scala:351)
at scala.collection.Iterator$class.foreach(Iterator.scala:893)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
at org.apache.spark.rdd.RDD$$anonfun$foreach$1$$anonfun$apply$28.apply(RDD.scala:921)
at org.apache.spark.rdd.RDD$$anonfun$foreach$1$$anonfun$apply$28.apply(RDD.scala:921)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2067)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2067)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:109)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1599)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1587)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1586)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1586)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:831)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1820)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1769)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1758)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:642)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2027)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2048)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2067)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2092)
at org.apache.spark.rdd.RDD$$anonfun$foreach$1.apply(RDD.scala:921)
at org.apache.spark.rdd.RDD$$anonfun$foreach$1.apply(RDD.scala:919)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
at org.apache.spark.rdd.RDD.foreach(RDD.scala:919)
at org.apache.spark.api.java.JavaRDDLike$class.foreach(JavaRDDLike.scala:351)
at org.apache.spark.api.java.AbstractJavaRDDLike.foreach(JavaRDDLike.scala:45)
at batchload.proceso.builder.CargarRentabilidades.transformacionRentabilidades(CargarRentabilidades.java:154)
at batchload.proceso.builder.CargarRentabilidades.coleccionRentabilidades(CargarRentabilidades.java:78)
at batchload.proceso.builder.CargarRentabilidades.coleccionCargaRentabilidades(CargarRentabilidades.java:52)
at batchload.proceso.MainBatch.init(MainBatch.java:59)
at batchload.BatchloadRentabilidades.main(BatchloadRentabilidades.java:24)
Caused by: java.lang.NullPointerException
at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:139)
at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:137)
at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:73)
at org.apache.spark.sql.SparkSession.createDataFrame(SparkSession.scala:419)
at batchload.proceso.builder.CargarRentabilidades$1.call(CargarRentabilidades.java:157)
at batchload.proceso.builder.CargarRentabilidades$1.call(CargarRentabilidades.java:154)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$foreach$1.apply(JavaRDDLike.scala:351)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$foreach$1.apply(JavaRDDLike.scala:351)
at scala.collection.Iterator$class.foreach(Iterator.scala:893)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
at org.apache.spark.rdd.RDD$$anonfun$foreach$1$$anonfun$apply$28.apply(RDD.scala:921)
at org.apache.spark.rdd.RDD$$anonfun$foreach$1$$anonfun$apply$28.apply(RDD.scala:921)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2067)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2067)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:109)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Versions:
mongo-spark-connector_2.11-2.3.0
Java 1.8
IntelliJ IDEA 2021.1.2 Community
Spark libraries built for Scala 2.11
Other dependency versions I am using: Hadoop 2.7, Spark 2.3.0, Java driver 2.7, spark-catalyst/core/hive/sql (all 2.11:2.3.0), scala-library 2.11.12
I am stuck on this; any help is more than welcome. Thanks!
This might be due to a serialization issue.
Can you try converting your anonymous function into a static method of the class?
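For illustration, a minimal sketch of that refactoring under the question's class names (the handler class itself is hypothetical). Note also that, per the stack trace, createDataFrame is being called inside call(): that cannot work, because the SparkSession is not available on the executors.

import org.apache.spark.api.java.function.VoidFunction;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;

public class CargarRentabilidades {

    // A static nested class instead of an anonymous inner class, so Spark
    // serializes only this small object and not the enclosing instance.
    private static class RowHandler implements VoidFunction<Row> {
        @Override
        public void call(Row r) throws Exception {
            // Process the row here. Do NOT call spark.createDataFrame (or
            // anything else on the SparkSession) in this method: it runs on
            // the executors, where the session is null, which is exactly the
            // NullPointerException in the stack trace above.
        }
    }

    static void transformacionRentabilidades(Dataset<Row> obtencionRents) {
        obtencionRents.toJavaRDD().foreach(new RowHandler());
    }
}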
I am trying to deploy a WAR file into WildFly 10.0.0.Final; the deployment fails with the following error:
"{\"WFLYCTL0080: Failed services\" => {\"jboss.deployment.unit.\\\"management-app.war\\\".WeldStartService\" => \"org.jboss.msc.service.StartException in service jboss.deployment.unit.\\\"management-app.war\\\".WeldStartService: Failed to start service
Caused by: org.jboss.weld.exceptions.DefinitionException: Exception List with 1 exceptions:
Exception 0 :
org.jboss.weld.exceptions.IllegalStateException: WELD-001332: BeanManager method getReference() is not available during application initialization
at org.jboss.weld.bean.builtin.BeanManagerProxy.checkContainerState(BeanManagerProxy.java:242)
at org.jboss.weld.bean.builtin.BeanManagerProxy.getReference(BeanManagerProxy.java:84)
at morpho.mcp.adapter.cdi.AdapterExtension.getAdapterDefinition(AdapterExtension.java:159)
at morpho.mcp.adapter.cdi.AdapterExtension.afterBeanDiscovery(AdapterExtension.java:109)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.jboss.weld.injection.StaticMethodInjectionPoint.invoke(StaticMethodInjectionPoint.java:88)
at org.jboss.weld.injection.MethodInvocationStrategy$SpecialParamPlusBeanManagerStrategy.invoke(MethodInvocationStrategy.java:144)
at org.jboss.weld.event.ObserverMethodImpl.sendEvent(ObserverMethodImpl.java:309)
at org.jboss.weld.event.ExtensionObserverMethodImpl.sendEvent(ExtensionObserverMethodImpl.java:124)
at org.jboss.weld.event.ObserverMethodImpl.sendEvent(ObserverMethodImpl.java:287)
at org.jboss.weld.event.ObserverMethodImpl.notify(ObserverMethodImpl.java:265)
at org.jboss.weld.event.ObserverNotifier.notifySyncObservers(ObserverNotifier.java:271)
at org.jboss.weld.event.ObserverNotifier.notify(ObserverNotifier.java:260)
at org.jboss.weld.event.ObserverNotifier.fireEvent(ObserverNotifier.java:154)
at org.jboss.weld.event.ObserverNotifier.fireEvent(ObserverNotifier.java:148)
at org.jboss.weld.bootstrap.events.AbstractContainerEvent.fire(AbstractContainerEvent.java:53)
at org.jboss.weld.bootstrap.events.AbstractDefinitionContainerEvent.fire(AbstractDefinitionContainerEvent.java:42)
at org.jboss.weld.bootstrap.events.AfterBeanDiscoveryImpl.fire(AfterBeanDiscoveryImpl.java:61)
at org.jboss.weld.bootstrap.WeldStartup.deployBeans(WeldStartup.java:423)
at org.jboss.weld.bootstrap.WeldBootstrap.deployBeans(WeldBootstrap.java:83)
at org.jboss.as.weld.WeldStartService.start(WeldStartService.java:95)
at org.jboss.msc.service.ServiceControllerImpl$StartTask.startService(ServiceControllerImpl.java:1948)
at org.jboss.msc.service.ServiceControllerImpl$StartTask.run(ServiceControllerImpl.java:1881)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
\"}}"
The WAR was working fine on JBoss AS7, so what could be the possible root cause of this behavior? It seems that a WAR which is not a bean archive is missing the org.jboss.as.weld.WeldStartService dependency. But how can this issue be resolved?
Do not use BeanManager.getReference before AfterDeploymentValidation; refactor your code accordingly. The reason is that getReference might give you incomplete results: at that point the bootstrap is not yet done, and getReference might fail to retrieve a reference because there might not be one at that given time (although it would be there later on).
As for how to work around this: you can use Weld's non-portable mode, although that might have more undesirable side effects. You should first try to refactor the code before falling back to this configuration.
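For illustration, a minimal sketch of such a refactoring (the extension body and the MyAdapter type are hypothetical stand-ins for the classes in the stack trace): collect what you need in afterBeanDiscovery, but defer the getReference() call to afterDeploymentValidation, when the container state allows it.

import javax.enterprise.event.Observes;
import javax.enterprise.inject.spi.AfterBeanDiscovery;
import javax.enterprise.inject.spi.AfterDeploymentValidation;
import javax.enterprise.inject.spi.Bean;
import javax.enterprise.inject.spi.BeanManager;
import javax.enterprise.inject.spi.Extension;

public class AdapterExtension implements Extension {

    void afterBeanDiscovery(@Observes AfterBeanDiscovery abd, BeanManager bm) {
        // Collect bean metadata here, but do NOT call bm.getReference() yet:
        // the container is still initializing at this point (WELD-001332).
    }

    void afterDeploymentValidation(@Observes AfterDeploymentValidation adv,
                                   BeanManager bm) {
        // Now bootstrap is complete and getReference() is legal.
        Bean<?> bean = bm.resolve(bm.getBeans(MyAdapter.class));
        Object ref = bm.getReference(bean, MyAdapter.class,
                bm.createCreationalContext(bean));
        // ... use the reference
    }
}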
I run the following code as a Java program. The query runs infinitely many times even though it is called only once, and I cannot figure out why. It drops the keyspace during the first run, and when it runs again (which it should not), it keeps throwing exceptions, over and over, saying that the keyspace does not exist.
What should I do to make the query run only once?
Session session = cassandraSessionFactory.getSession();
String query = "drop keyspace " + keyspace_name;
session.execute(query);
session.close();
Here is the exception stack trace, which keeps getting printed numerous times and stops only when the program is halted manually.
23:24:55,654 INFO BackendOperation:75 - Temporary exception during backend operation [messageReading#0:0]. Attempting backoff retry.
com.thinkaurelius.titan.diskstorage.TemporaryBackendException: Temporary failure in storage backend
at com.thinkaurelius.titan.diskstorage.cassandra.astyanax.AstyanaxKeyColumnValueStore.getNamesSlice(AstyanaxKeyColumnValueStore.java:114)
at com.thinkaurelius.titan.diskstorage.cassandra.astyanax.AstyanaxKeyColumnValueStore.getNamesSlice(AstyanaxKeyColumnValueStore.java:78)
at com.thinkaurelius.titan.diskstorage.cassandra.astyanax.AstyanaxKeyColumnValueStore.getSlice(AstyanaxKeyColumnValueStore.java:67)
at com.thinkaurelius.titan.diskstorage.log.kcvs.KCVSLog$MessagePuller$1.call(KCVSLog.java:769)
at com.thinkaurelius.titan.diskstorage.log.kcvs.KCVSLog$MessagePuller$1.call(KCVSLog.java:766)
at com.thinkaurelius.titan.diskstorage.util.BackendOperation.execute(BackendOperation.java:133)
at com.thinkaurelius.titan.diskstorage.util.BackendOperation$1.call(BackendOperation.java:147)
at com.thinkaurelius.titan.diskstorage.util.BackendOperation.executeDirect(BackendOperation.java:56)
at com.thinkaurelius.titan.diskstorage.util.BackendOperation.execute(BackendOperation.java:42)
at com.thinkaurelius.titan.diskstorage.util.BackendOperation.execute(BackendOperation.java:144)
at com.thinkaurelius.titan.diskstorage.log.kcvs.KCVSLog$MessagePuller.run(KCVSLog.java:703)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: com.netflix.astyanax.connectionpool.exceptions.BadRequestException: BadRequestException: [host=127.0.0.1(127.0.0.1):9160, latency=0(0), attempts=1]InvalidRequestException(why:Keyspace keyspace_name does not exist)
at com.netflix.astyanax.thrift.ThriftConverter.ToConnectionPoolException(ThriftConverter.java:159)
at com.netflix.astyanax.thrift.AbstractOperationImpl.execute(AbstractOperationImpl.java:65)
at com.netflix.astyanax.thrift.AbstractOperationImpl.execute(AbstractOperationImpl.java:28)
at com.netflix.astyanax.thrift.ThriftSyncConnectionFactoryImpl$ThriftConnection.execute(ThriftSyncConnectionFactoryImpl.java:153)
at com.netflix.astyanax.connectionpool.impl.AbstractExecuteWithFailoverImpl.tryOperation(AbstractExecuteWithFailoverImpl.java:119)
at com.netflix.astyanax.connectionpool.impl.AbstractHostPartitionConnectionPool.executeWithFailover(AbstractHostPartitionConnectionPool.java:352)
at com.netflix.astyanax.thrift.ThriftColumnFamilyQueryImpl$4.execute(ThriftColumnFamilyQueryImpl.java:538)
at com.thinkaurelius.titan.diskstorage.cassandra.astyanax.AstyanaxKeyColumnValueStore.getNamesSlice(AstyanaxKeyColumnValueStore.java:112)
... 17 more
Caused by: InvalidRequestException(why:Keyspace keyspace_name does not exist)
at org.apache.cassandra.thrift.Cassandra$multiget_slice_result$multiget_slice_resultStandardScheme.read(Cassandra.java:14678)
at org.apache.cassandra.thrift.Cassandra$multiget_slice_result$multiget_slice_resultStandardScheme.read(Cassandra.java:14633)
at org.apache.cassandra.thrift.Cassandra$multiget_slice_result.read(Cassandra.java:14559)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
at org.apache.cassandra.thrift.Cassandra$Client.recv_multiget_slice(Cassandra.java:741)
at org.apache.cassandra.thrift.Cassandra$Client.multiget_slice(Cassandra.java:725)
at com.netflix.astyanax.thrift.ThriftColumnFamilyQueryImpl$4$1.internalExecute(ThriftColumnFamilyQueryImpl.java:544)
at com.netflix.astyanax.thrift.ThriftColumnFamilyQueryImpl$4$1.internalExecute(ThriftColumnFamilyQueryImpl.java:541)
at com.netflix.astyanax.thrift.AbstractOperationImpl.execute(AbstractOperationImpl.java:60)
... 23 more
We are running a chain of operations: SELECT -> store the row keys into a collection -> split the collection across worker threads -> each thread creates its own connection using Phoenix JDBC -> performs a SELECT and then, depending on the result, UPSERTs into a different Phoenix table.
I am using an ExecutorService with a fixed thread pool of 4, and I am seeing exceptions like the one below.
org.apache.phoenix.exception.PhoenixIOException: org.apache.phoenix.exception.PhoenixIOException: The system cannot find the path specified
at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:108)
at org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:538)
at org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
at org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
at org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
at org.apache.phoenix.jdbc.PhoenixResultSet.next(PhoenixResultSet.java:764)
at com.vonage.test.PopulateStagingGWCDRWorker.run(MyCode.java:74)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.concurrent.ExecutionException: org.apache.phoenix.exception.PhoenixIOException: The system cannot find the path specified
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:206)
at org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:534)
... 8 more
Caused by: org.apache.phoenix.exception.PhoenixIOException: The system cannot find the path specified
at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:108)
at org.apache.phoenix.iterate.SpoolingResultIterator.<init>(SpoolingResultIterator.java:122)
at org.apache.phoenix.iterate.SpoolingResultIterator.<init>(SpoolingResultIterator.java:73)
at org.apache.phoenix.iterate.SpoolingResultIterator$SpoolingResultIteratorFactory.newIterator(SpoolingResultIterator.java:67)
at org.apache.phoenix.iterate.ChunkedResultIterator.<init>(ChunkedResultIterator.java:92)
at org.apache.phoenix.iterate.ChunkedResultIterator$ChunkedResultIteratorFactory.newIterator(ChunkedResultIterator.java:72)
at org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:92)
at org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:83)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
... 3 more
Caused by: java.io.IOException: The system cannot find the path specified
at java.io.WinNTFileSystem.createFileExclusively(Native Method)
at java.io.File.createTempFile(File.java:2024)
at org.apache.commons.io.output.DeferredFileOutputStream.thresholdReached(DeferredFileOutputStream.java:176)
at org.apache.phoenix.iterate.SpoolingResultIterator$1.thresholdReached(SpoolingResultIterator.java:98)
at org.apache.commons.io.output.ThresholdingOutputStream.checkThreshold(ThresholdingOutputStream.java:224)
at org.apache.commons.io.output.ThresholdingOutputStream.write(ThresholdingOutputStream.java:92)
at java.io.DataOutputStream.writeByte(DataOutputStream.java:153)
at org.apache.hadoop.io.WritableUtils.writeVLong(WritableUtils.java:273)
at org.apache.hadoop.io.WritableUtils.writeVInt(WritableUtils.java:253)
at org.apache.phoenix.util.TupleUtil.write(TupleUtil.java:146)
at org.apache.phoenix.iterate.SpoolingResultIterator.<init>(SpoolingResultIterator.java:107)
... 10 more
But if I use a pool size of 2 or less, it works fine. I was wondering if there is a client-side property that can be changed?
In my case I solved this by adding the dependency below to my pom.xml:
<dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-protocol</artifactId>
    <version>1.1.11</version>
</dependency>
Just to update you: I am on HBase 1.1 and Phoenix 4.7.
Setting phoenix.spool.directory in hbase-site.xml fixes this. Thanks!
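For example, a sketch of the client-side hbase-site.xml entry (the directory is an example only and must already exist and be writable by the client; given the WinNTFileSystem frame in the stack trace, a Windows path is shown):

<property>
    <!-- Directory Phoenix spools oversized result sets to; defaults to the
         JVM temp dir, which was not resolvable here -->
    <name>phoenix.spool.directory</name>
    <value>C:\phoenix-spool</value>
</property>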
I am using Solr 4.10.3 in SolrCloud mode, with one shard, 3 replicas, and an external ZooKeeper ensemble. The number of documents in one index has grown too large, so now I want to create more shards. I tried to use
http://localhost:8983/solr/admin/collections?action=SPLITSHARD&collection=collection1&shard=shard1
but it gives the following error:
Error executing split operation for collection: collection1 parent shard: shard1
java.lang.NullPointerException
at org.apache.solr.cloud.OverseerCollectionProcessor.splitShard(OverseerCollectionProcessor.java:1288)
at org.apache.solr.cloud.OverseerCollectionProcessor.processMessage(OverseerCollectionProcessor.java:563)
at org.apache.solr.cloud.OverseerCollectionProcessor$Runner.run(OverseerCollectionProcessor.java:2629)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Collection: collection1 operation: splitshard failed:org.apache.solr.common.SolrException
at org.apache.solr.cloud.OverseerCollectionProcessor.splitShard(OverseerCollectionProcessor.java:1569)
at org.apache.solr.cloud.OverseerCollectionProcessor.processMessage(OverseerCollectionProcessor.java:563)
at org.apache.solr.cloud.OverseerCollectionProcessor$Runner.run(OverseerCollectionProcessor.java:2629)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException
at org.apache.solr.cloud.OverseerCollectionProcessor.splitShard(OverseerCollectionProcessor.java:1288)
null:org.apache.solr.common.SolrException
null:org.apache.solr.common.SolrException
at org.apache.solr.handler.admin.CollectionsHandler.handleResponse(CollectionsHandler.java:364)
at org.apache.solr.handler.admin.CollectionsHandler.handleSplitShardAction(CollectionsHandler.java:606)
at org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:172)
at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
Where is the problem, and what is the solution?
The SPLITSHARD action can only be used when you defined -DnumShards=(some value) the first time you started your cluster.
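For illustration, a sketch of what such a first start typically looked like on Solr 4.x (the ZooKeeper hosts, config directory, and config name are examples only):

java -DzkHost=zk1:2181,zk2:2181,zk3:2181 \
     -DnumShards=1 \
     -Dbootstrap_confdir=./solr/collection1/conf \
     -Dcollection.configName=myconf \
     -jar start.jar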