I have installed Hadoop 2.3.0 and am able to execute MR jobs successfully. But when I try to execute MR jobs with normal privileges (without admin privileges), the job fails with the following exception.
I tried the "WordCount.jar" sample.
14/10/28 09:16:12 INFO mapreduce.Job: Task Id : attempt_1414467725299_0002_r_000000_1, Status : FAILED
Error: java.lang.NullPointerException
at org.apache.hadoop.mapred.Task.getFsStatistics(Task.java:347)
at org.apache.hadoop.mapred.ReduceTask$OldTrackingRecordWriter.<init>(ReduceTask.java:478)
at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:414)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:392)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
By debugging the source, I drilled down to where the problem occurs, in the class YarnChild.java:
childUGI.doAs(new PrivilegedExceptionAction<Object>() {
  @Override
  public Object run() throws Exception {
    // use job-specified working directory
    FileSystem.get(job).setWorkingDirectory(job.getWorkingDirectory());
    taskFinal.run(job, umbilical); // run the task
    return null;
  }
});
But if I start the NodeManager with admin privileges, the above exception does not occur. I don't know why the MR job fails when I start the NodeManager with normal privileges.
If anyone knows the reason for and a solution to this problem, please help me out as soon as possible.
I am developing a simple Kafka Streams application which extracts messages from a topic and puts them into another topic after a transformation. I am using IntelliJ for my development.
When I debug/run this application, it works perfectly if my IDE and the Kafka server sit on the SAME machine
(i.e. with BOOTSTRAP_SERVERS_CONFIG = localhost:9092 and
SCHEMA_REGISTRY_URL_CONFIG = localhost:8081).
However, when I try to use another machine to do the development
(i.e. with BOOTSTRAP_SERVERS_CONFIG = XXX.XXX.XXX:9092 and
SCHEMA_REGISTRY_URL_CONFIG = XXX.XXX.XXX:8081, where XXX.XXX.XXX is the
IP address of my Kafka machine),
the debug process runs without problems the first time. However, when I run it a second time after resetting the offsets, I receive the following error:
ERROR stream-thread [main] Failed to delete the state directory. (org.apache.kafka.streams.processor.internals.StateDirectory:297)
java.nio.file.DirectoryNotEmptyException: \tmp\kafka-streams\my_application_id\0_0
Exception in thread "main" org.apache.kafka.streams.errors.StreamsException: java.nio.file.DirectoryNotEmptyException:
If I change my_application_id to my_application_id2 and run it, it works again the first time, but I receive the error again when I run it a second time.
I have the following code as the last statement in my application:
Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
Any advice on how to solve this problem?
UPDATE:
I have reviewed the state directory created on my development machine (Windows platform), and if I delete these directories manually before running a second time, no error occurs. I tried running my IDE as Administrator because I thought this could be something about permissions on the folder; however, this doesn't help.
Full stack trace for reference:
INFO Kafka version : 1.1.0 (org.apache.kafka.common.utils.AppInfoParser:109)
INFO Kafka commitId : fdcf75ea326b8e07 (org.apache.kafka.common.utils.AppInfoParser:110)
INFO stream-thread [main] Deleting state directory 0_0 for task 0_0 as user calling cleanup. (org.apache.kafka.streams.processor.internals.StateDirectory:281)
Disconnected from the target VM, address: '127.0.0.1:16552', transport: 'socket'
Exception in thread "main" org.apache.kafka.streams.errors.StreamsException: java.nio.file.DirectoryNotEmptyException: C:\workspace\bennychan\kafka-streams\my_application_001\0_0
at org.apache.kafka.streams.processor.internals.StateDirectory.clean(StateDirectory.java:231)
at org.apache.kafka.streams.KafkaStreams.cleanUp(KafkaStreams.java:931)
at com.macroviewhk.financialreport.simpleStream.start(simpleStream.java:60)
at com.macroviewhk.financialreport.simpleStream.main(simpleStream.java:45)
Caused by: java.nio.file.DirectoryNotEmptyException: C:\workspace\bennychan\kafka-streams\my_application_001\0_0
at sun.nio.fs.WindowsFileSystemProvider.implDelete(WindowsFileSystemProvider.java:266)
at sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:103)
at java.nio.file.Files.delete(Files.java:1126)
at org.apache.kafka.common.utils.Utils$1.postVisitDirectory(Utils.java:651)
at org.apache.kafka.common.utils.Utils$1.postVisitDirectory(Utils.java:634)
at java.nio.file.Files.walkFileTree(Files.java:2688)
at java.nio.file.Files.walkFileTree(Files.java:2742)
at org.apache.kafka.common.utils.Utils.delete(Utils.java:634)
at org.apache.kafka.streams.processor.internals.StateDirectory.cleanRemovedTasks(StateDirectory.java:287)
at org.apache.kafka.streams.processor.internals.StateDirectory.clean(StateDirectory.java:228)
... 3 more
ERROR stream-thread [main] Failed to delete the state directory. (org.apache.kafka.streams.processor.internals.StateDirectory:297)
java.nio.file.DirectoryNotEmptyException: C:\workspace\bennychan\kafka-streams\my_application_001\0_0
at sun.nio.fs.WindowsFileSystemProvider.implDelete(WindowsFileSystemProvider.java:266)
at sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:103)
at java.nio.file.Files.delete(Files.java:1126)
at org.apache.kafka.common.utils.Utils$1.postVisitDirectory(Utils.java:651)
at org.apache.kafka.common.utils.Utils$1.postVisitDirectory(Utils.java:634)
at java.nio.file.Files.walkFileTree(Files.java:2688)
at java.nio.file.Files.walkFileTree(Files.java:2742)
at org.apache.kafka.common.utils.Utils.delete(Utils.java:634)
at org.apache.kafka.streams.processor.internals.StateDirectory.cleanRemovedTasks(StateDirectory.java:287)
at org.apache.kafka.streams.processor.internals.StateDirectory.clean(StateDirectory.java:228)
at org.apache.kafka.streams.KafkaStreams.cleanUp(KafkaStreams.java:931)
at com.macroviewhk.financialreport.simpleStream.start(simpleStream.java:60)
at com.macroviewhk.financialreport.simpleStream.main(simpleStream.java:45)
UPDATE 2:
After another detailed check, the line below throws an IOException:
Files.walkFileTree(file.toPath(), new SimpleFileVisitor<Path>() {
This line is located in kafka-clients-1.1.0.jar, in org.apache.kafka.common.utils.Utils.class.
Maybe this is a problem with the Windows system (sorry, I am not an experienced Java programmer).
For googlers:
I'm currently using this Scala code to help Windows users handle deletion of the state store.
if (System.getProperty("os.name").toLowerCase.contains("windows")) {
  logger.info("WINDOWS OS MODE - Cleanup state store.")
  try {
    FileUtils.deleteDirectory(new File("/tmp/kafka-streams/" + config.getProperty("application.id")))
    FileUtils.forceMkdir(new File("/tmp/kafka-streams/" + config.getProperty("application.id")))
  } catch {
    case e: Exception => logger.error(e.toString)
  }
} else {
  streams.cleanUp()
}
I agree with @ideano1 that it seems to be related to https://issues.apache.org/jira/browse/KAFKA-6647 -- what you can try is to explicitly call KafkaStreams#cleanUp() between tests. It's unclear why there are issues on Windows; at the moment, all testing happens on Linux.
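A minimal Java sketch of what that suggestion can look like outside of tests, assuming a trivial pass-through topology; the topic names and broker address are placeholders, not taken from the question:

import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;

public class CleanUpBeforeStart {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my_application_id");   // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");   // placeholder
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("input-topic").to("output-topic");                      // placeholder topology

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        // cleanUp() is only allowed while the instance is not running,
        // i.e. before start() or after close().
        streams.cleanUp();
        streams.start();

        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}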
This is what we've implemented that works on Windows. It is written in Kotlin.
Version used: kafka-streams-test-utils:2.3.0.
The key is to catch the exception. The tests will pass as long as you catch the exception raised by testDriver.close(), even if you don't delete the directory. However, cleaning up the directory makes your unit tests independent and repeatable.
val directory = "test"

@BeforeEach
fun setup() {
    // other code omitted for setting the props
    props.setProperty(StreamsConfig.STATE_DIR_CONFIG, directory)
}

@AfterEach
fun tearDown() {
    try {
        testDriver.close()
    } catch (exception: Exception) {
        // There is a bug on Windows that prevents the state directory from being deleted
        // properly. For the test to pass, the directory must be deleted manually.
        FileUtils.deleteDirectory(File(directory))
    }
}
For tests (and not only for tests, if you can afford it), one could use an IN_MEMORY("in-memory") store for each KTable created (directly or indirectly, e.g. by aggregations); this avoids the creation of any state directory, so the error no longer occurs.
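For illustration, a minimal Java sketch of that idea, assuming a simple count aggregation; the topic and store names are placeholders:

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.Stores;

public class InMemoryStateExample {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        // Back the KTable produced by this aggregation with an in-memory store
        // instead of the default RocksDB store, so the table's contents are kept
        // in memory rather than in RocksDB files under the state directory.
        builder.stream("input-topic", Consumed.with(Serdes.String(), Serdes.String()))
               .groupByKey()
               .count(Materialized.<String, Long>as(Stores.inMemoryKeyValueStore("counts-store"))
                                  .withKeySerde(Serdes.String())
                                  .withValueSerde(Serdes.Long()));

        // Build the topology and start KafkaStreams as usual.
    }
}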
I have a Spark job that processes several folders on S3 per run and stores its state in DynamoDB. In other words, we run the job once per day; it looks for new folders added by another job, transforms them one by one, and writes state to DynamoDB. Here's some rough pseudocode:
object App {
  val allFolders = S3Folders.list()
  val foldersToProcess = DynamoDBState.getFoldersToProcess(allFolders)
  Transformer.run(foldersToProcess)
}

object Transformer {
  def run(folders: List[String]): Unit = {
    val sc = new SparkContext()
    folders.foreach(process(sc, _))
  }

  def process(sc: SparkContext, folder: String): Unit = ??? // transform and write to S3
}
This approach works well if S3Folders.list() returns a relatively small number of folders (up to a few thousand). If it returns more (4-8K), we very often see the following error (which at first glance has nothing to do with Spark):
17/10/31 08:38:20 ERROR ApplicationMaster: User class threw exception: shadeaws.SdkClientException: Failed to sanitize XML document destined for handler class shadeaws.services.s3.model.transform.XmlResponsesSaxParser$ListObjectsV2Handler
shadeaws.SdkClientException: Failed to sanitize XML document destined for handler class shadeaws.services.s3.model.transform.XmlResponsesSaxParser$ListObjectsV2Handler
at shadeaws.services.s3.model.transform.XmlResponsesSaxParser.sanitizeXmlDocument(XmlResponsesSaxParser.java:214)
at shadeaws.services.s3.model.transform.XmlResponsesSaxParser.parseListObjectsV2Response(XmlResponsesSaxParser.java:315)
at shadeaws.services.s3.model.transform.Unmarshallers$ListObjectsV2Unmarshaller.unmarshall(Unmarshallers.java:88)
at shadeaws.services.s3.model.transform.Unmarshallers$ListObjectsV2Unmarshaller.unmarshall(Unmarshallers.java:77)
at shadeaws.services.s3.internal.S3XmlResponseHandler.handle(S3XmlResponseHandler.java:62)
at shadeaws.services.s3.internal.S3XmlResponseHandler.handle(S3XmlResponseHandler.java:31)
at shadeaws.http.response.AwsResponseHandlerAdapter.handle(AwsResponseHandlerAdapter.java:70)
at shadeaws.http.AmazonHttpClient$RequestExecutor.handleResponse(AmazonHttpClient.java:1553)
at shadeaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1271)
at shadeaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1055)
at shadeaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:743)
at shadeaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717)
at shadeaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
at shadeaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
at shadeaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
at shadeaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513)
at shadeaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4247)
at shadeaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4194)
at shadeaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4188)
at shadeaws.services.s3.AmazonS3Client.listObjectsV2(AmazonS3Client.java:865)
at me.chuwy.transform.S3Folders$.com$chuwy$transform$S3Folders$$isGlacierified(S3Folders.scala:136)
at scala.collection.TraversableLike$$anonfun$filterImpl$1.apply(TraversableLike.scala:248)
at scala.collection.immutable.List.foreach(List.scala:381)
at scala.collection.TraversableLike$class.filterImpl(TraversableLike.scala:247)
at scala.collection.TraversableLike$class.filterNot(TraversableLike.scala:267)
at scala.collection.AbstractTraversable.filterNot(Traversable.scala:104)
at me.chuwy.transform.S3Folders$.list(S3Folders.scala:112)
at me.chuwy.transform.Main$.main(Main.scala:22)
at me.chuwy.transform.Main.main(Main.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:637)
Caused by: shadeaws.AbortedException:
at shadeaws.internal.SdkFilterInputStream.abortIfNeeded(SdkFilterInputStream.java:53)
at shadeaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:81)
at shadeaws.event.ProgressInputStream.read(ProgressInputStream.java:180)
at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:284)
at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:326)
at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:178)
at java.io.InputStreamReader.read(InputStreamReader.java:184)
at java.io.BufferedReader.read1(BufferedReader.java:210)
at java.io.BufferedReader.read(BufferedReader.java:286)
at java.io.Reader.read(Reader.java:140)
at shadeaws.services.s3.model.transform.XmlResponsesSaxParser.sanitizeXmlDocument(XmlResponsesSaxParser.java:186)
... 36 more
For a large number of folders (~20K) this happens all the time and the job cannot start.
Previously we had a very similar, but much more frequent, error when getFoldersToProcess did a GetItem for every folder from allFolders and therefore took much longer:
17/09/30 14:46:07 ERROR ApplicationMaster: User class threw exception: shadeaws.AbortedException:
shadeaws.AbortedException:
at shadeaws.internal.SdkFilterInputStream.abortIfNeeded(SdkFilterInputStream.java:51)
at shadeaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:71)
at shadeaws.event.ProgressInputStream.read(ProgressInputStream.java:180)
at com.fasterxml.jackson.core.json.ByteSourceJsonBootstrapper.ensureLoaded(ByteSourceJsonBootstrapper.java:489)
at com.fasterxml.jackson.core.json.ByteSourceJsonBootstrapper.detectEncoding(ByteSourceJsonBootstrapper.java:126)
at com.fasterxml.jackson.core.json.ByteSourceJsonBootstrapper.constructParser(ByteSourceJsonBootstrapper.java:215)
at com.fasterxml.jackson.core.JsonFactory._createParser(JsonFactory.java:1240)
at com.fasterxml.jackson.core.JsonFactory.createParser(JsonFactory.java:802)
at shadeaws.http.JsonResponseHandler.handle(JsonResponseHandler.java:109)
at shadeaws.http.JsonResponseHandler.handle(JsonResponseHandler.java:43)
at shadeaws.http.response.AwsResponseHandlerAdapter.handle(AwsResponseHandlerAdapter.java:70)
at shadeaws.http.AmazonHttpClient$RequestExecutor.handleResponse(AmazonHttpClient.java:1503)
at shadeaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1226)
at shadeaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1030)
at shadeaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:742)
at shadeaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:716)
at shadeaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
at shadeaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
at shadeaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
at shadeaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513)
at shadeaws.services.dynamodbv2.AmazonDynamoDBClient.doInvoke(AmazonDynamoDBClient.java:2089)
at shadeaws.services.dynamodbv2.AmazonDynamoDBClient.invoke(AmazonDynamoDBClient.java:2065)
at shadeaws.services.dynamodbv2.AmazonDynamoDBClient.executeGetItem(AmazonDynamoDBClient.java:1173)
at shadeaws.services.dynamodbv2.AmazonDynamoDBClient.getItem(AmazonDynamoDBClient.java:1149)
at me.chuwy.tranform.sdk.Manifest$.contains(Manifest.scala:179)
at me.chuwy.tranform.DynamoDBState$$anonfun$getUnprocessed$1.apply(ProcessManifest.scala:44)
at scala.collection.TraversableLike$$anonfun$filterImpl$1.apply(TraversableLike.scala:248)
at scala.collection.immutable.List.foreach(List.scala:381)
at scala.collection.TraversableLike$class.filterImpl(TraversableLike.scala:247)
at scala.collection.TraversableLike$class.filterNot(TraversableLike.scala:267)
at scala.collection.AbstractTraversable.filterNot(Traversable.scala:104)
at me.chuwy.transform.DynamoDBState$.getFoldersToProcess(DynamoDBState.scala:44)
at me.chuwy.transform.Main$.main(Main.scala:19)
at me.chuwy.transform.Main.main(Main.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:637)
I believe that the current error has nothing to do with XML parsing or an invalid response, but originates from some race condition inside Spark, because:
There's a clear connection between the amount of time the "state-fetching" takes and the chance of failure.
The tracebacks have an underlying AbortedException, which AFAIK is caused by a swallowed InterruptedException, which can mean that something inside the JVM (spark-submit or even YARN) calls Thread.sleep for the main thread.
Right now I'm using EMR AMI 5.5.0, Spark 2.1.0 and shaded AWS SDK 1.11.208, but had similar error with AWS SDK 1.10.75.
I'm deploying this job on EMR via command-runner.jar spark-submit --deploy-mode cluster --class ....
Does anyone have any idea where this exception originates from and how to fix it?
foreach does not guarantee ordered computation; it applies the operation(s) to each element of an RDD, meaning that it will be invoked for every element, which, in turn, may overwhelm the executor.
The problem was that getFoldersToProcess is a blocking (and very long) operation, which prevents the SparkContext from being instantiated. The SparkContext should signal its own instantiation to YARN, and if that doesn't happen within a certain amount of time, YARN assumes the driver node has fallen off and kills the whole cluster.
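A rough Java sketch of the reordering that follows from this, i.e. creating (and thereby registering) the SparkContext before the long state fetch; the listing and filtering methods are stubs standing in for the question's own placeholder objects:

import java.util.Collections;
import java.util.List;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class App {
    public static void main(String[] args) {
        // Create the SparkContext FIRST, so the driver registers with YARN
        // before any long, blocking state-fetching work begins.
        JavaSparkContext sc = new JavaSparkContext(new SparkConf().setAppName("transform"));

        List<String> allFolders = listS3Folders();                        // stub for S3Folders.list()
        List<String> foldersToProcess = getFoldersToProcess(allFolders);  // stub for DynamoDBState
        for (String folder : foldersToProcess) {
            process(sc, folder);                                          // stub for Transformer.process
        }

        sc.stop();
    }

    // Stubs standing in for the question's own helper objects.
    private static List<String> listS3Folders() { return Collections.emptyList(); }
    private static List<String> getFoldersToProcess(List<String> all) { return all; }
    private static void process(JavaSparkContext sc, String folder) { /* transform and write to S3 */ }
}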
I am running a MapReduce job that takes data from a table in Accumulo as the input and stores the result in another table in Accumulo. To do this, I am using the AccumuloInputFormat and AccumuloOutputFormat classes. Here is the code:
public int run(String[] args) throws Exception {
    Opts opts = new Opts();
    opts.parseArgs(PivotTable.class.getName(), args);

    Configuration conf = getConf();
    conf.set("formula", opts.formula);

    Job job = Job.getInstance(conf);
    job.setJobName("Pivot Table Generation");
    job.setJarByClass(PivotTable.class);

    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(Text.class);

    job.setMapperClass(PivotTableMapper.class);
    job.setCombinerClass(PivotTableCombiber.class);
    job.setReducerClass(PivotTableReducer.class);

    job.setInputFormatClass(AccumuloInputFormat.class);

    ClientConfiguration zkConfig = new ClientConfiguration()
        .withInstance(opts.getInstance().getInstanceName())
        .withZkHosts(opts.getInstance().getZooKeepers());

    AccumuloInputFormat.setInputTableName(job, opts.dataTable);
    AccumuloInputFormat.setZooKeeperInstance(job, zkConfig);
    AccumuloInputFormat.setConnectorInfo(job, opts.getPrincipal(), new PasswordToken(opts.getPassword().value));

    job.setOutputFormatClass(AccumuloOutputFormat.class);

    BatchWriterConfig bwConfig = new BatchWriterConfig();

    AccumuloOutputFormat.setBatchWriterOptions(job, bwConfig);
    AccumuloOutputFormat.setZooKeeperInstance(job, zkConfig);
    AccumuloOutputFormat.setConnectorInfo(job, opts.getPrincipal(), new PasswordToken(opts.getPassword().value));
    AccumuloOutputFormat.setDefaultTableName(job, opts.pivotTable);
    AccumuloOutputFormat.setCreateTables(job, true);

    return job.waitForCompletion(true) ? 0 : 1;
}
PivotTable is the name of the class that contains the main method (and this one too). I have written the mapper, combiner, and reducer classes as well. But when I try to execute this job, I get an error:
Exception in thread "main" java.io.IOException: org.apache.accumulo.core.client.AccumuloException: org.apache.thrift.TApplicationException: Internal error processing hasTablePermission
at org.apache.accumulo.core.client.mapreduce.lib.impl.InputConfigurator.validatePermissions(InputConfigurator.java:707)
at org.apache.accumulo.core.client.mapreduce.AbstractInputFormat.validateOptions(AbstractInputFormat.java:397)
at org.apache.accumulo.core.client.mapreduce.AbstractInputFormat.getSplits(AbstractInputFormat.java:668)
at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:301)
at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:318)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:196)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
at com.latize.ulysses.accumulo.postprocess.PivotTable.run(PivotTable.java:247)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at com.latize.ulysses.accumulo.postprocess.PivotTable.main(PivotTable.java:251)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: org.apache.accumulo.core.client.AccumuloException: org.apache.thrift.TApplicationException: Internal error processing hasTablePermission
at org.apache.accumulo.core.client.impl.SecurityOperationsImpl.execute(SecurityOperationsImpl.java:87)
at org.apache.accumulo.core.client.impl.SecurityOperationsImpl.hasTablePermission(SecurityOperationsImpl.java:220)
at org.apache.accumulo.core.client.mapreduce.lib.impl.InputConfigurator.validatePermissions(InputConfigurator.java:692)
... 21 more
Caused by: org.apache.thrift.TApplicationException: Internal error processing hasTablePermission
at org.apache.thrift.TApplicationException.read(TApplicationException.java:111)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:71)
at org.apache.accumulo.core.client.impl.thrift.ClientService$Client.recv_hasTablePermission(ClientService.java:641)
at org.apache.accumulo.core.client.impl.thrift.ClientService$Client.hasTablePermission(ClientService.java:624)
at org.apache.accumulo.core.client.impl.SecurityOperationsImpl$8.execute(SecurityOperationsImpl.java:223)
at org.apache.accumulo.core.client.impl.SecurityOperationsImpl$8.execute(SecurityOperationsImpl.java:220)
at org.apache.accumulo.core.client.impl.ServerClient.executeRaw(ServerClient.java:79)
at org.apache.accumulo.core.client.impl.SecurityOperationsImpl.execute(SecurityOperationsImpl.java:73)
Can someone tell me what I am doing wrong here? Any help would be appreciated.
EDIT : I am running Accumulo 1.7.0
A TApplicationException indicates the error occurred on the Accumulo tablet server, rather than in your client (MapReduce) code. You'll need to examine your tablet server logs to get more information about the particular error wherever you see TApplicationException.
Table permissions are usually retrieved from ZooKeeper, so it may indicate a problem with the tserver connecting to ZooKeeper.
Unfortunately, I don't see the hostname or the IP in the stack trace, so you may have to check all the tserver logs to find it.
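To help isolate the problem from MapReduce, here is a minimal sketch of checking the same permission directly from a plain Accumulo client; the instance name, ZooKeeper hosts, credentials, and table name are placeholders:

import org.apache.accumulo.core.client.Connector;
import org.apache.accumulo.core.client.ZooKeeperInstance;
import org.apache.accumulo.core.client.security.tokens.PasswordToken;
import org.apache.accumulo.core.security.TablePermission;

public class PermissionCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; substitute your own instance, ZooKeepers, and credentials.
        ZooKeeperInstance instance = new ZooKeeperInstance("myInstance", "zkhost1:2181");
        Connector connector = instance.getConnector("myUser", new PasswordToken("myPassword"));

        // This issues the same hasTablePermission call that validatePermissions() makes,
        // so if the tablet-server-side problem is still present it should fail here too,
        // without MapReduce involved, and the matching tserver log entry is easier to find.
        boolean canRead = connector.securityOperations()
                                   .hasTablePermission("myUser", "myDataTable", TablePermission.READ);
        System.out.println("READ on myDataTable: " + canRead);
    }
}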
Each time we launch a FitNesse test, it ends by saying
Testing was interrupted and results are incomplete
and we get the following exception:
Could not detect death of command line test runner.
java.lang.IllegalThreadStateException: process has not exited
at fitnesse.components.CommandRunner.join(CommandRunner.java:79)
at fitnesse.responders.run.slimResponder.SlimTestSystem.bye(SlimTestSystem.java:208)
at fitnesse.responders.run.MultipleTestsRunner.startTestSystemAndExecutePages(MultipleTestsRunner.java:118)
at fitnesse.responders.run.MultipleTestsRunner.internalExecuteTestPages(MultipleTestsRunner.java:88)
at fitnesse.responders.run.MultipleTestsRunner.executeTestPages(MultipleTestsRunner.java:60)
at fitnesse.responders.run.TestResponder.performExecution(TestResponder.java:190)
at fitnesse.responders.run.TestResponder.doExecuteTests(TestResponder.java:72)
at fitnesse.responders.run.TestResponder$TestExecutor.execute(TestResponder.java:106)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:48)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:600)
at org.apache.velocity.util.introspection.UberspectImpl$VelMethodImpl.doInvoke(UberspectImpl.java:395)
at org.apache.velocity.util.introspection.UberspectImpl$VelMethodImpl.invoke(UberspectImpl.java:384)
at org.apache.velocity.runtime.parser.node.ASTMethod.execute(ASTMethod.java:173)
at org.apache.velocity.runtime.parser.node.ASTReference.execute(ASTReference.java:280)
at org.apache.velocity.runtime.parser.node.ASTReference.render(ASTReference.java:369)
at org.apache.velocity.runtime.parser.node.SimpleNode.render(SimpleNode.java:342)
at org.apache.velocity.runtime.directive.Parse.render(Parse.java:260)
at org.apache.velocity.runtime.parser.node.ASTDirective.render(ASTDirective.java:207)
at org.apache.velocity.runtime.parser.node.SimpleNode.render(SimpleNode.java:342)
at org.apache.velocity.Template.merge(Template.java:356)
at org.apache.velocity.Template.merge(Template.java:260)
at fitnesse.responders.templateUtilities.HtmlPage.render(HtmlPage.java:80)
at fitnesse.responders.run.TestResponder.doSending(TestResponder.java:61)
at fitnesse.responders.ChunkingResponder.startSending(ChunkingResponder.java:66)
at fitnesse.http.ChunkedResponse.sendTo(ChunkedResponse.java:26)
at fitnesse.FitNesseExpediter.sendResponse(FitNesseExpediter.java:96)
at fitnesse.FitNesseExpediter.start(FitNesseExpediter.java:48)
at fitnesse.FitNesseServer.serve(FitNesseServer.java:24)
at fitnesse.FitNesseServer.serve(FitNesseServer.java:17)
at fitnesse.socketservice.SocketService$ServerRunner.run(SocketService.java:109)
at java.lang.Thread.run(Thread.java:736)
Have you encountered the same problem? How did you solve it?
FitNesse version: 20121220
It's quite possible that you have a class reference that is hanging out there and that FitNesse cannot close. Or something is calling System.exit().
An example of the first is Selenium WebDriver. You must close() the browserDriver to get a clean end to a test.
The second is something that we unintentionally allowed in the past and has been changed in more recent releases. It's possible that updating to the latest EDGE might solve it.
However, more context is really required, as this is quite likely an issue in your configuration or your fixture.
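For the Selenium case, a minimal illustrative fixture sketch (class and method names are made up for the example) that shuts the driver down explicitly so no browser process keeps the test runner alive:

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class BrowserFixture {
    private WebDriver driver;

    public void openBrowser() {
        driver = new FirefoxDriver();
    }

    public void visit(String url) {
        driver.get(url);
    }

    // Call this from the test's (or suite's) teardown so the driver process exits
    // and the command line test runner can terminate cleanly.
    public void closeBrowser() {
        if (driver != null) {
            driver.quit();   // quit() closes all windows and ends the driver session
            driver = null;
        }
    }
}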
When I try to lease tasks from a pull queue in my application, I get the error below. This didn't happen previously with my code, so something has changed. I suspect that it's a threading issue - these calls are made in a separate thread (created using ThreadManager.createThreadForCurrentRequest) - has that recently been disallowed?
uk.org.jaggard.myapp.BlahBlahBlahRunnable run: Exception leasing tasks. Already tried 0 times.
com.google.apphosting.api.ApiProxy$CancelledException: The API call taskqueue.QueryAndOwnTasks() was explicitly cancelled.
at com.google.apphosting.runtime.ApiProxyImpl.doSyncCall(ApiProxyImpl.java:218)
at com.google.apphosting.runtime.ApiProxyImpl.access$000(ApiProxyImpl.java:68)
at com.google.apphosting.runtime.ApiProxyImpl$1.run(ApiProxyImpl.java:182)
at com.google.apphosting.runtime.ApiProxyImpl$1.run(ApiProxyImpl.java:180)
at java.security.AccessController.doPrivileged(Native Method)
at com.google.apphosting.runtime.ApiProxyImpl.makeSyncCall(ApiProxyImpl.java:180)
at com.google.apphosting.runtime.ApiProxyImpl.makeSyncCall(ApiProxyImpl.java:68)
at com.google.appengine.tools.appstats.Recorder.makeSyncCall(Recorder.java:323)
at com.googlecode.objectify.cache.TriggerFutureHook.makeSyncCall(TriggerFutureHook.java:154)
at com.google.apphosting.api.ApiProxy.makeSyncCall(ApiProxy.java:105)
at com.google.appengine.api.taskqueue.QueueApiHelper.makeSyncCall(QueueApiHelper.java:44)
at com.google.appengine.api.taskqueue.QueueImpl.leaseTasksInternal(QueueImpl.java:709)
at com.google.appengine.api.taskqueue.QueueImpl.leaseTasks(QueueImpl.java:731)
at uk.org.jaggard.myapp.BlahBlahBlahRunnable.run(BlahBlahBlahRunnable.java:53)
at com.google.apphosting.runtime.ApiProxyImpl$CurrentRequestThreadFactory$1$1.run(ApiProxyImpl.java:997)
at java.security.AccessController.doPrivileged(Native Method)
at com.google.apphosting.runtime.ApiProxyImpl$CurrentRequestThreadFactory$1.run(ApiProxyImpl.java:994)
at java.lang.Thread.run(Thread.java:679)
at com.google.apphosting.runtime.ApiProxyImpl$CurrentRequestThreadFactory$2$1.run(ApiProxyImpl.java:1031)
I suspect the warnings in your application's logs will contain
"Thread was interrupted, throwing CancelledException."
Java's InterruptedException is discussed in
http://www.ibm.com/developerworks/java/library/j-jtp05236/index.html
and the book "Java Concurrency in Practice".
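As a generic (non-App-Engine-specific) illustration of the point those references make: when a blocking call is interrupted, restore the interrupt flag or stop, instead of swallowing the exception, so the surrounding runtime can see the cancellation.

public class InterruptAwareWorker implements Runnable {
    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            try {
                doOneUnitOfWork();     // placeholder for e.g. one lease-and-process cycle
                Thread.sleep(1000L);   // a blocking call that can be interrupted
            } catch (InterruptedException e) {
                // Do NOT swallow the interruption: restore the flag and stop, otherwise
                // the code that interrupted us cannot tell that the thread noticed.
                Thread.currentThread().interrupt();
                return;
            }
        }
    }

    private void doOneUnitOfWork() {
        // application-specific work goes here
    }
}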