I am trying to use Drools in a Spark job submitted to a cluster. The job starts by fetching the Drools jar from a Drools server and then initializes the KieBase and sessions.
My code works when executed locally in Spark, but when I submit it to the Spark cluster an NPE is thrown.
This is what I am doing:
String url = "{my server address}/drools-wb/maven2/com/myspace/Project1/1.0.0/Project1-1.0.0.jar";
KieServices ks = KieServices.Factory.get();
//ERROR is in the below line
ReleaseId releaseId = ks.newReleaseId("com.myspace", "Project1", "1.0.0");
KieRepository kr = ks.getRepository();
UrlResource urlResource = (UrlResource) ks.getResources().newUrlResource(url);
The error shown after submitting the code:
Exception in thread "main" java.lang.NullPointerException
at org.opencell.spark.jobs.TestWithDrools.main(TestWithDrools.java:47)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:879)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:197)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:227)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:136)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
2018-08-29 10:08:09 INFO ShutdownHookManager:54 - Shutdown hook called
Do you have any idea how to solve this issue?
The issue is caused by ks being null.
To resolve it, please refer to this post:
Drools 7.4.1 kieservices.factory.get() returns null
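As a hedged illustration of that answer (not from the original post): KieServices.Factory.get() returns null when the kie-api service lookup cannot find a KieServices implementation on the classpath, which typically happens when the Drools runtime jars are not bundled into the fat jar handed to spark-submit. A minimal guard, assuming the same GAV as the question:
import org.kie.api.KieServices;
import org.kie.api.builder.ReleaseId;

KieServices ks = KieServices.Factory.get();
if (ks == null) {
    // Null here means no KieServices implementation was found on the classpath
    // (assumption based on the linked post): package drools-core/drools-compiler
    // into the uber jar submitted to the cluster, or pass them via --jars.
    throw new IllegalStateException(
            "KieServices unavailable: Drools runtime jars missing from the driver classpath");
}
ReleaseId releaseId = ks.newReleaseId("com.myspace", "Project1", "1.0.0");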
Related
I have JSON input in an HDFS location.
The JSON needs to be parsed and the results aggregated.
To do this I am using a Pig UDF that relies on the json-path library.
In the Hadoop 2.7 environment the jars json-smart 1.2 and json-path 1.2 are hard-bound on the classpath.
Whenever I execute the Pig MapReduce job it throws the exception below:
java.lang.NoSuchFieldError: defaultReader
at com.jayway.jsonpath.spi.json.JsonSmartJsonProvider.<init>(JsonSmartJsonProvider.java:39)
at com.jayway.jsonpath.internal.DefaultsImpl.jsonProvider(DefaultsImpl.java:21)
at com.jayway.jsonpath.Configuration.defaultConfiguration(Configuration.java:174)
at com.jayway.jsonpath.internal.JsonContext.<init>(JsonContext.java:52)
at com.jayway.jsonpath.JsonPath.parse(JsonPath.java:596)
In order to solve the problem I tried the options below.
Option 1:
Tried registering json-smart-2.3.jar and json-path-2.3.0.jar.
No promising results (the jar actually being referenced was still json-path-1.2.jar).
Option 2:
Downgraded my module dependencies to json-path-1.2.jar.
No results.
Option 3:
Used a custom class loader to load json-path-2.3.0.jar. The class loaded,
but I then ran into org.slf4j binding issues:
multiple binding paths were identified, and the sun.misc class loader failed as shown below.
Failed to instantiate SLF4J LoggerFactory
Reported exception:
java.lang.NullPointerException
at sun.net.util.URLUtil.urlNoFragString(URLUtil.java:50)
at sun.misc.URLClassPath.getLoader(URLClassPath.java:485)
at sun.misc.URLClassPath.getNextLoader(URLClassPath.java:457)
at sun.misc.URLClassPath.access$100(URLClassPath.java:64)
at sun.misc.URLClassPath$1.next(URLClassPath.java:239)
at sun.misc.URLClassPath$1.hasMoreElements(URLClassPath.java:250)
at java.net.URLClassLoader$3$1.run(URLClassLoader.java:601)
at java.net.URLClassLoader$3$1.run(URLClassLoader.java:599)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader$3.next(URLClassLoader.java:598)
at java.net.URLClassLoader$3.hasMoreElements(URLClassLoader.java:623)
at sun.misc.CompoundEnumeration.next(CompoundEnumeration.java:45)
at sun.misc.CompoundEnumeration.hasMoreElements(CompoundEnumeration.java:54)
at org.slf4j.LoggerFactory.findPossibleStaticLoggerBinderPathSet(LoggerFactory.java:238)
at org.slf4j.LoggerFactory.bind(LoggerFactory.java:138)
at org.slf4j.LoggerFactory.performInitialization(LoggerFactory.java:120)
at org.slf4j.LoggerFactory.getILoggerFactory(LoggerFactory.java:331)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:283)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:304)
at com.jayway.jsonpath.internal.JsonContext.<clinit>(JsonContext.java:41)
at com.jayway.jsonpath.internal.ParseContextImpl.parse(ParseContextImpl.java:38)
at com.jayway.jsonpath.JsonPath.parse(JsonPath.java:599)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.optum.pdm.ReferenceDataUpdate.addURL(ReferenceDataUpdate.java:112)
at com.optum.pdm.ReferenceDataUpdate.main(ReferenceDataUpdate.java:124)
Exception in thread "main" java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.optum.pdm.ReferenceDataUpdate.addURL(ReferenceDataUpdate.java:112)
at com.optum.pdm.ReferenceDataUpdate.main(ReferenceDataUpdate.java:124)
Caused by: java.lang.ExceptionInInitializerError
at com.jayway.jsonpath.internal.ParseContextImpl.parse(ParseContextImpl.java:38)
at com.jayway.jsonpath.JsonPath.parse(JsonPath.java:599)
... 6 more
Caused by: java.lang.IllegalStateException: Unexpected initialization failure
at org.slf4j.LoggerFactory.bind(LoggerFactory.java:167)
at org.slf4j.LoggerFactory.performInitialization(LoggerFactory.java:120)
at org.slf4j.LoggerFactory.getILoggerFactory(LoggerFactory.java:331)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:283)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:304)
at com.jayway.jsonpath.internal.JsonContext.<clinit>(JsonContext.java:41)
... 8 more
Caused by: java.lang.NullPointerException
at sun.net.util.URLUtil.urlNoFragString(URLUtil.java:50)
at sun.misc.URLClassPath.getLoader(URLClassPath.java:485)
at sun.misc.URLClassPath.getNextLoader(URLClassPath.java:457)
at sun.misc.URLClassPath.access$100(URLClassPath.java:64)
at sun.misc.URLClassPath$1.next(URLClassPath.java:239)
at sun.misc.URLClassPath$1.hasMoreElements(URLClassPath.java:250)
at java.net.URLClassLoader$3$1.run(URLClassLoader.java:601)
at java.net.URLClassLoader$3$1.run(URLClassLoader.java:599)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader$3.next(URLClassLoader.java:598)
at java.net.URLClassLoader$3.hasMoreElements(URLClassLoader.java:623)
at sun.misc.CompoundEnumeration.next(CompoundEnumeration.java:45)
at sun.misc.CompoundEnumeration.hasMoreElements(CompoundEnumeration.java:54)
at org.slf4j.LoggerFactory.findPossibleStaticLoggerBinderPathSet(LoggerFactory.java:238)
at org.slf4j.LoggerFactory.bind(LoggerFactory.java:138)
... 13 more
Can someone suggest how to solve this problem? The one Stack Overflow link I could find is about WebLogic and not a generalized solution that can also be applied on Hadoop 2.7 (JSON Parser - java.lang.NoSuchFieldError: defaultReader).
Since the Hadoop environment (Pig, HDFS, Hive, etc.) is using json-path-2.3.0, it is better for the user Mapper logic to use another version such as jsonpath-1.0.jar, which solves the problem.
Alternatively, provide a custom configuration for loading/parsing the JSON so that the json-smart 2.x/1.x jars in Hadoop/lib are avoided:
import java.util.EnumSet;
import java.util.Set;

import com.google.gson.GsonBuilder;
import com.jayway.jsonpath.Configuration;
import com.jayway.jsonpath.Option;
import com.jayway.jsonpath.spi.json.GsonJsonProvider;
import com.jayway.jsonpath.spi.json.JsonProvider;
import com.jayway.jsonpath.spi.mapper.GsonMappingProvider;
import com.jayway.jsonpath.spi.mapper.MappingProvider;

// Guard so the defaults are only installed once per JVM.
private static volatile boolean configChanged = false;

public static void changeJsonPathConfig() {
    if (!configChanged) {
        // Install Gson-backed providers so json-path no longer touches the
        // json-smart classes bundled with the Hadoop distribution.
        Configuration.setDefaults(new Configuration.Defaults() {
            private final JsonProvider jsonProvider = new GsonJsonProvider(
                    new GsonBuilder().serializeNulls().create());
            private final MappingProvider mappingProvider = new GsonMappingProvider();

            @Override
            public JsonProvider jsonProvider() {
                return jsonProvider;
            }

            @Override
            public MappingProvider mappingProvider() {
                return mappingProvider;
            }

            @Override
            public Set<Option> options() {
                return EnumSet.noneOf(Option.class);
            }
        });
        configChanged = true;
    }
}
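A hedged usage sketch (the json variable and the path expression are placeholders, not from the original answer): call the method once before the first parse so the Gson-backed defaults are installed before json-path builds its default configuration.
import com.jayway.jsonpath.JsonPath;

// Hypothetical call site inside the UDF/mapper.
changeJsonPathConfig();
Object title = JsonPath.parse(json).read("$.store.book[0].title");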
We are trying to integrate Spark and Spring Boot; unfortunately we keep facing a lot of issues. After resolving most of them, we are now stuck on the exception below:
Job aborted due to stage failure: Task 0 in stage 11.0 failed 4 times, most recent failure: Lost task 0.3 in stage 11.0 (TID 14, xxxxx.ax.internal.cloudapp.net, executor 1): java.lang.ClassCastException: cannot assign instance of scala.collection.immutable.List$SerializationProxy to field org.apache.spark.rdd.RDD.org$apache$spark$rdd$RDD$$dependencies_ of type scala.collection.Seq in instance of org.apache.spark.rdd.MapPartitionsRDD
at java.io.ObjectStreamClass$FieldReflector.setObjFieldValues(ObjectStreamClass.java:2233)
at java.io.ObjectStreamClass.setObjFieldValues(ObjectStreamClass.java:1405)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2284)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2202)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2060)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1567)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2278)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2202)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2060)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1567)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:427)
at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:114)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:80)
at org.apache.spark.scheduler.Task.run(Task.scala:99)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1435)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1423)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1422)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1422)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:802)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1650)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1605)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1594)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:628)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1928)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1941)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1954)
at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:336)
at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:38)
at org.apache.spark.sql.Dataset$$anonfun$org$apache$spark$sql$Dataset$$execute$1$1.apply(Dataset.scala:2386)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:57)
at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$withNewExecutionId(Dataset.scala:2788)
at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$execute$1(Dataset.scala:2385)
at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collect(Dataset.scala:2392)
at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2128)
at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2127)
at org.apache.spark.sql.Dataset.withTypedCallback(Dataset.scala:2818)
at org.apache.spark.sql.Dataset.head(Dataset.scala:2127)
at org.apache.spark.sql.Dataset.take(Dataset.scala:2342)
at org.apache.spark.sql.Dataset.showString(Dataset.scala:248)
at org.apache.spark.sql.Dataset.show(Dataset.scala:638)
at org.apache.spark.sql.Dataset.show(Dataset.scala:597)
at org.apache.spark.sql.Dataset.show(Dataset.scala:606)
at com.xxx.xxx.spark.Execute.run(Execute.java:46)
at com.xxx.xxx.spark.Loader.process(Loader.java:505)
at com.xxx.xxx.spark.Loader.run(Loader.java:122)
at org.springframework.boot.SpringApplication.callRunner(SpringApplication.java:732)
at org.springframework.boot.SpringApplication.callRunners(SpringApplication.java:716)
at org.springframework.boot.SpringApplication.afterRefresh(SpringApplication.java:703)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:304)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1118)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1107)
at com.xxx.xxx.spark.AnalyseFec.main(AnalyseFec.java:11)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
This exception is raised when trying to manipulate a transformed dataset (created with a map). The count and collect methods work fine.
The sample below throws the exception on dataf22.show():
StructType schemata = DataTypes.createStructType(
new StructField[]{
DataTypes.createStructField("Column1", DataTypes.StringType, false),
DataTypes.createStructField("Column2", DataTypes.DoubleType, false),
DataTypes.createStructField("Column3", DataTypes.DoubleType, false),
});
ExpressionEncoder<Row> encoder = RowEncoder.apply(schemata);
Dataset<Row> dataf2 = session.read()
.option("header", "true")
.option("delimiter",separateur)
// .schema(schemata)
.csv(csvPath);
dataf2.write().mode(SaveMode.Overwrite).parquet("xxx.parquet");
Dataset<Row> parquetFileDF = session.read().parquet("xxx.parquet");
Dataset<Row> dataf22 = parquetFileDF.map(row -> {
return RowFactory.create(row.getAs("Column1"),
Double.parseDouble(row.getAs("Column2").toString().replace(",", ".")),
Double.parseDouble(row.getAs("Column3").toString().replace(",", ".")));
}, encoder);
dataf22.printSchema();
dataf22.show();
dataf22.groupBy("Column1");
Dataset<Row> ds1 = dataf22.groupBy("Column1").sum("Column2");
ds1.show();
Dataset<Row> ds2 = dataf22.groupBy("Column1").sum("Column3");
ds2.show();
Initially we were packaging with the spring-boot-maven-plugin, and spark-submit was calling org.springframework.boot.loader.JarLauncher, which launched our starter class.
When we moved to the maven-shade-plugin, with some modifications to support Spring Boot, the exception above disappeared and we were able to execute our program, but only in client mode. In cluster mode the application never actually runs in YARN: after multiple attempts the application fails without any error that could help fix the issue.
I suspect that once the program is executed on the executors, the problem reappears, related to classpath or classloader issues.
Did you manage to make this integration work? If so, which Maven plugin did you use, and which extra spark-submit parameters (extraClassPath, ...)?
Thank you
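For reference, a hedged sketch (not from the original thread) of where the classpath knobs mentioned above live when set programmatically; the library path is hypothetical, and the driver-side equivalent normally has to go on the spark-submit command line, since the driver JVM is already running by the time a SparkConf is built:
import org.apache.spark.SparkConf;

public class SubmitConfSketch {
    public static SparkConf build() {
        return new SparkConf()
                .setAppName("spring-boot-on-spark")
                // Hypothetical location of the shaded jar / extra libs on the worker nodes.
                // Equivalent to --conf spark.executor.extraClassPath=... on spark-submit.
                .set("spark.executor.extraClassPath", "/opt/app/libs/*");
    }
}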
I'm trying to start a netty-socketio server, and I can't trace the origin of this exception. I have marked the place in the stack trace that may lead to the answer, so if anyone could provide an explanation it would be much appreciated.
public class SocketIoServer {
private Configuration cnf = new Configuration();
private SocketIOServer server;
public SocketIoServer() {
Configuration config = new Configuration();
config.setHostname("localhost");
config.setPort(8081);
server = new SocketIOServer(config);
server.start();
}
}
When I initialize the socket, an exception gets thrown. Here's the context:
Exception in thread "main" java.lang.IllegalArgumentException:
java.lang.reflect.InvocationTargetException
at com.corundumstudio.socketio.Configuration.<init>(Configuration.java:112)
at com.corundumstudio.socketio.SocketIOServer.<init>(SocketIOServer.java:66)
at SocketIoServer.<init>(SocketIoServer.java:17)
at Server.main(Server.java:19)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at com.corundumstudio.socketio.Configuration.<init>(Configuration.java:109)
... 8 more
This line in particular:
Caused by: java.lang.NoSuchMethodError:
com.fasterxml.jackson.databind.module.SimpleModule.setSerializerModifier(Lcom/fasterxml/jackson/databind/ser/BeanSerializerModifier;)Lcom/fasterxml/jackson/databind/module/SimpleModule;
at com.corundumstudio.socketio.protocol.JacksonJsonSupport.init(JacksonJsonSupport.java:316)
at com.corundumstudio.socketio.protocol.JacksonJsonSupport.<init>(JacksonJsonSupport.java:311)
at com.corundumstudio.socketio.protocol.JacksonJsonSupport.<init>(JacksonJsonSupport.java:304)
... 13 more
You obviously have a version conflict of jackson-databind in your project: the class com.corundumstudio.socketio.protocol.JacksonJsonSupport relies on the method SimpleModule#setSerializerModifier(BeanSerializerModifier mod), which was added in version 2.2. Since that method cannot be found, you must have a jackson-databind version older than 2.2 on your classpath.
Check all the jars on your classpath and make sure you have only one version of jackson-databind, matching the version expected by netty-socketio. For example, assuming you use netty-socketio 1.7.12, the expected jackson-databind version is 2.7.4, as you can see here.
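As a hedged diagnostic sketch (not part of the original answer), you can print the jackson-databind version that actually wins on the runtime classpath, along with the jar it was loaded from:
import com.fasterxml.jackson.core.Version;
import com.fasterxml.jackson.databind.ObjectMapper;

public class JacksonVersionCheck {
    public static void main(String[] args) {
        // Version reported by the jackson-databind classes that were actually loaded.
        Version v = new ObjectMapper().version();
        System.out.println("jackson-databind version: " + v);
        // Location of the jar the class came from, useful for spotting a stale duplicate.
        System.out.println("loaded from: " + ObjectMapper.class
                .getProtectionDomain().getCodeSource().getLocation());
    }
}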
I am running a MapReduce job that takes data from a table in Accumulo as the input and stores the result in another table in Accumulo. To do this, I am using the AccumuloInputFormat and AccumuloOutputFormat classes. Here is the code:
public int run(String[] args) throws Exception {
Opts opts = new Opts();
opts.parseArgs(PivotTable.class.getName(), args);
Configuration conf = getConf();
conf.set("formula", opts.formula);
Job job = Job.getInstance(conf);
job.setJobName("Pivot Table Generation");
job.setJarByClass(PivotTable.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(Text.class);
job.setMapperClass(PivotTableMapper.class);
job.setCombinerClass(PivotTableCombiber.class);
job.setReducerClass(PivotTableReducer.class);
job.setInputFormatClass(AccumuloInputFormat.class);
ClientConfiguration zkConfig = new ClientConfiguration().withInstance(opts.getInstance().getInstanceName()).withZkHosts(opts.getInstance().getZooKeepers());
AccumuloInputFormat.setInputTableName(job, opts.dataTable);
AccumuloInputFormat.setZooKeeperInstance(job, zkConfig);
AccumuloInputFormat.setConnectorInfo(job, opts.getPrincipal(), new PasswordToken(opts.getPassword().value));
job.setOutputFormatClass(AccumuloOutputFormat.class);
BatchWriterConfig bwConfig = new BatchWriterConfig();
AccumuloOutputFormat.setBatchWriterOptions(job, bwConfig);
AccumuloOutputFormat.setZooKeeperInstance(job, zkConfig);
AccumuloOutputFormat.setConnectorInfo(job, opts.getPrincipal(), new PasswordToken(opts.getPassword().value));
AccumuloOutputFormat.setDefaultTableName(job, opts.pivotTable);
AccumuloOutputFormat.setCreateTables(job, true);
return job.waitForCompletion(true) ? 0 : 1;
}
PivotTable is the name of the class that contains the main method (and this one too). I have written the mapper, combiner, and reducer classes as well. But when I try to execute this job, I get an error:
Exception in thread "main" java.io.IOException: org.apache.accumulo.core.client.AccumuloException: org.apache.thrift.TApplicationException: Internal error processing hasTablePermission
at org.apache.accumulo.core.client.mapreduce.lib.impl.InputConfigurator.validatePermissions(InputConfigurator.java:707)
at org.apache.accumulo.core.client.mapreduce.AbstractInputFormat.validateOptions(AbstractInputFormat.java:397)
at org.apache.accumulo.core.client.mapreduce.AbstractInputFormat.getSplits(AbstractInputFormat.java:668)
at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:301)
at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:318)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:196)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
at com.latize.ulysses.accumulo.postprocess.PivotTable.run(PivotTable.java:247)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at com.latize.ulysses.accumulo.postprocess.PivotTable.main(PivotTable.java:251)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: org.apache.accumulo.core.client.AccumuloException: org.apache.thrift.TApplicationException: Internal error processing hasTablePermission
at org.apache.accumulo.core.client.impl.SecurityOperationsImpl.execute(SecurityOperationsImpl.java:87)
at org.apache.accumulo.core.client.impl.SecurityOperationsImpl.hasTablePermission(SecurityOperationsImpl.java:220)
at org.apache.accumulo.core.client.mapreduce.lib.impl.InputConfigurator.validatePermissions(InputConfigurator.java:692)
... 21 more
Caused by: org.apache.thrift.TApplicationException: Internal error processing hasTablePermission
at org.apache.thrift.TApplicationException.read(TApplicationException.java:111)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:71)
at org.apache.accumulo.core.client.impl.thrift.ClientService$Client.recv_hasTablePermission(ClientService.java:641)
at org.apache.accumulo.core.client.impl.thrift.ClientService$Client.hasTablePermission(ClientService.java:624)
at org.apache.accumulo.core.client.impl.SecurityOperationsImpl$8.execute(SecurityOperationsImpl.java:223)
at org.apache.accumulo.core.client.impl.SecurityOperationsImpl$8.execute(SecurityOperationsImpl.java:220)
at org.apache.accumulo.core.client.impl.ServerClient.executeRaw(ServerClient.java:79)
at org.apache.accumulo.core.client.impl.SecurityOperationsImpl.execute(SecurityOperationsImpl.java:73)
Can someone tell me what I am doing wrong here? Any help would be appreciated.
EDIT: I am running Accumulo 1.7.0.
A TApplicationException indicates the error occurred on the Accumulo tablet server, rather than in your client (MapReduce) code. You'll need to examine your tablet server logs to get more information about the particular error wherever you see TApplicationException.
Table permissions are usually retrieved from ZooKeeper, so it may indicate a problem with the tserver connecting to ZooKeeper.
Unfortunately, I don't see the hostname or the IP in the stack trace, so you may have to check all the tserver logs to find it.
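As a hedged way to narrow this down from the client side (a sketch reusing the Opts values from the question, exception handling omitted, not part of the original answer), you can invoke the failing permission check directly before submitting the job; if the same TApplicationException appears outside MapReduce, the problem is clearly on the tserver/ZooKeeper side:
import org.apache.accumulo.core.client.Connector;
import org.apache.accumulo.core.client.security.tokens.PasswordToken;
import org.apache.accumulo.core.security.TablePermission;

// Same credentials and table name as in run() above.
Connector connector = opts.getInstance().getConnector(
        opts.getPrincipal(), new PasswordToken(opts.getPassword().value));
boolean canRead = connector.securityOperations().hasTablePermission(
        opts.getPrincipal(), opts.dataTable, TablePermission.READ);
System.out.println("READ permission on " + opts.dataTable + ": " + canRead);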
I'm trying to create a JavaRDD from an S3 file, but I'm not able to create the RDD. Can someone help me solve this problem?
Code :
SparkConf conf = new SparkConf().setAppName(appName).setMaster("local");
JavaSparkContext javaSparkContext = new JavaSparkContext(conf);
javaSparkContext.hadoopConfiguration().set("fs.s3.awsAccessKeyId", accessKey);
javaSparkContext.hadoopConfiguration().set("fs.s3.awsSecretAccessKey", secretKey);
javaSparkContext.hadoopConfiguration().set("fs.s3.impl",
        "org.apache.hadoop.fs.s3native.NativeS3FileSystem");
JavaRDD<String> rawData = javaSparkContext.textFile("s3://mybucket/sample.txt");
This code throws the following exception:
2015-05-06 18:58:57 WARN LoadSnappy:46 - Snappy native library not loaded
java.lang.IllegalArgumentException: java.net.URISyntaxException: Expected scheme-specific part at index 3: s3:
at org.apache.hadoop.fs.Path.initialize(Path.java:148)
at org.apache.hadoop.fs.Path.<init>(Path.java:126)
at org.apache.hadoop.fs.Path.<init>(Path.java:50)
at org.apache.hadoop.fs.FileSystem.globPathsLevel(FileSystem.java:1084)
at org.apache.hadoop.fs.FileSystem.globPathsLevel(FileSystem.java:1087)
at org.apache.hadoop.fs.FileSystem.globPathsLevel(FileSystem.java:1087)
at org.apache.hadoop.fs.FileSystem.globPathsLevel(FileSystem.java:1087)
at org.apache.hadoop.fs.FileSystem.globPathsLevel(FileSystem.java:1087)
at org.apache.hadoop.fs.FileSystem.globStatusInternal(FileSystem.java:1023)
at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:987)
at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:177)
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:208)
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:203)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:32)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
at org.apache.spark.rdd.RDD.take(RDD.scala:1156)
at org.apache.spark.rdd.RDD.first(RDD.scala:1189)
at org.apache.spark.api.java.JavaRDDLike$class.first(JavaRDDLike.scala:477)
at org.apache.spark.api.java.JavaRDD.first(JavaRDD.scala:32)
at com.cignifi.DataExplorationValidation.processFile(DataExplorationValidation.java:148)
at com.cignifi.DataExplorationValidation.main(DataExplorationValidation.java:104)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:569)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:166)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:189)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:110)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.net.URISyntaxException: Expected scheme-specific part at index 3: s3:
at java.net.URI$Parser.fail(URI.java:2829)
at java.net.URI$Parser.failExpecting(URI.java:2835)
at java.net.URI$Parser.parse(URI.java:3038)
at java.net.URI.<init>(URI.java:753)
at org.apache.hadoop.fs.Path.initialize(Path.java:145)
... 36 more
Some more details:
Spark version 1.3.0.
Running in local mode using spark-submit.
I tried this on both a local machine and an EC2 instance; in both cases I get the same error.
It should be s3n:// instead of s3://
See External Datasets in Spark Programming Guide
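A hedged sketch of the corrected read, assuming the same variables as the question; note that NativeS3FileSystem reads its credentials from the fs.s3n.* keys rather than fs.s3.*:
// Same setup as in the question, but using the s3n:// scheme end-to-end.
SparkConf conf = new SparkConf().setAppName(appName).setMaster("local");
JavaSparkContext javaSparkContext = new JavaSparkContext(conf);
javaSparkContext.hadoopConfiguration().set("fs.s3n.awsAccessKeyId", accessKey);
javaSparkContext.hadoopConfiguration().set("fs.s3n.awsSecretAccessKey", secretKey);
javaSparkContext.hadoopConfiguration().set("fs.s3n.impl",
        "org.apache.hadoop.fs.s3native.NativeS3FileSystem");
JavaRDD<String> rawData = javaSparkContext.textFile("s3n://mybucket/sample.txt");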