I am facing an issue when I try to submit my Spark application on YARN from Eclipse. I try to submit a simple SVM program, but it gives the error below. I have a MacBook, and I will be thankful if somebody gives me a detailed answer.
16/09/17 10:04:19 ERROR SparkContext: Error initializing SparkContext.
java.lang.IllegalStateException: Library directory '.../MyProject/assembly/target/scala-2.11/jars' does not exist; make sure Spark is built.
at org.apache.spark.launcher.CommandBuilderUtils.checkState(CommandBuilderUtils.java:248)
at org.apache.spark.launcher.CommandBuilderUtils.findJarsDir(CommandBuilderUtils.java:368)
at org.apache.spark.launcher.YarnCommandBuilderUtils$.findJarsDir(YarnCommandBuilderUtils.scala:38)
at org.apache.spark.deploy.yarn.Client.prepareLocalResources(Client.scala:500)
at org.apache.spark.deploy.yarn.Client.createContainerLaunchContext(Client.scala:834)
at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:167)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:56)
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:149)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:500)
at SVM.main(SVM.java:21)
Go to
Run Configurations --> Environment
in Eclipse and add the environment variable SPARK_HOME.
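If you run from a terminal instead of Eclipse, the equivalent is to export the variable before launching; a minimal sketch, assuming a prebuilt Spark distribution is unpacked under /usr/local/spark (adjust the path to your install):
# SPARK_HOME must point at a real Spark distribution so the YARN client
# can find its jars directory.
$ export SPARK_HOME=/usr/local/spark
# Sanity check: this is the directory the error says is missing.
$ ls $SPARK_HOME/jars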
Trying to run my embedded-MySQL-based unit tests, I get an exception with this part:
Failed to instantiate [com.wix.mysql.EmbeddedMysql]: Factory method 'getEmbeddedMysql' threw exception; nested exception is com.wix.mysql.exceptions.CommandFailedException: Command 'CREATE USER 'sa'@'%' IDENTIFIED BY '';' on schema 'information_schema' failed with message 'Stream closed'
The same unit test and environment setup work on my MacBook.
The machine with the error runs Ubuntu 20.04.
The Wix version is 4.6.2, with Java 8 and mysql-connector 8.0.24.
I tried changing the dependency versions and also tried Java 11.
I ran from within IntelliJ and on the command line. Same result.
Let me paste the full comment I found on GitHub that helped me fix this; I'm pretty sure most of the people seeing this on Ubuntu will find it is the solution:
I had this same issue with MySQL 5.7, while working on another open source project. I cloned the wix-embedded-mysql repository and ran the tests using the master branch, which also failed in the exact same way, except that I received a longer, more thorough message in the catch.
The issue was that I was on Ubuntu and did not have the ncurses 5 shared library installed. On Ubuntu, I installed libncurses5 (apt install libncurses5) and everything started working (all tests on wix, and on my project).
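If you want to confirm that this shared library is really what is missing before installing it, a quick check with standard Ubuntu tooling (nothing here is project-specific):
# Look for the ncurses 5 shared library in the loader cache.
$ ldconfig -p | grep libncurses.so.5
# If nothing is printed, install it and rerun the tests:
$ sudo apt install libncurses5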
I hope this helps resolve the issue.
Thanks to https://github.com/codesplode
I am trying to start the Apache Livy 0.8.0 server on my Windows 10 machine for Spark 3.1.2 and Hadoop 3.2.1. I am taking help from here. I have successfully built Apache Livy using Maven (I have attached a screenshot of it), but I am not able to run the Livy server. When I run it I get the following error:
> starting C:/AmazonJDK/jdk1.8.0_332/bin/java -cp /d/ApacheLivy/incubator-livy-master/incubator-livy-master/server/target/jars/*:/d/ApacheLivy/incubator-livy-master/incubator-livy-master/conf:D:/Program_files/spark/conf:D:/ApacheHadoop/hadoop-3.2.1/etc/hadoop: org.apache.livy.server.LivyServer, logging to D:/ApacheLivy/incubator-livy-master/incubator-livy-master/logs/livy--server.out
ps: unknown option -- o
Try `ps --help' for more information.
failed to launch C:/AmazonJDK/jdk1.8.0_332/bin/java -cp /d/ApacheLivy/incubator-livy-master/incubator-livy-master/server/target/jars/*:/d/ApacheLivy/incubator-livy-master/incubator-livy-master/conf:D:/Program_files/spark/conf:D:/ApacheHadoop/hadoop-3.2.1/etc/hadoop: org.apache.livy.server.LivyServer:
Error: Could not find or load main class org.apache.livy.server.LivyServer
full log in D:/ApacheLivy/incubator-livy-master/incubator-livy-master/logs/livy--server.out
I am using Git Bash. If you need more information, I will provide it.
The error got resolved when I used Windows Subsystem for Linux (WSL).
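For reference, this is roughly what the working launch looked like inside the WSL shell; the install paths below are placeholders for my layout, so adjust them to yours:
# Point Livy at Spark and Hadoop, then start the server.
$ export SPARK_HOME=/opt/spark
$ export HADOOP_CONF_DIR=/opt/hadoop/etc/hadoop
$ cd /opt/incubator-livy
$ ./bin/livy-server start
# If startup fails again, the full log lands under ./logs/.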
I am just trying to execute the Apache Beam example code in a local Spark setup. I generated the source and built the package as mentioned on this page. Then I submitted the jar using spark-submit as below:
$ ~/spark/bin/spark-submit --class org.apache.beam.examples.WordCount --master local target/word-count-beam-0.1.jar --runner=SparkRunner --inputFile=pom.xml --output=counts
The code gets submitted and starts to execute, but it gets stuck at the step Evaluating ParMultiDo(ExtractWords). Below is the log after submitting the job.
I am not able to find any error message. Can someone please help me find what's wrong?
Edit: I also tried the command below:
~/spark/bin/spark-submit --class org.apache.beam.examples.WordCount --master spark://Quartics-MacBook-Pro.local:7077 target/word-count-beam-0.1.jar --runner=SparkRunner --inputFile=pom.xml --output=counts
The job is now stuck at INFO BlockManagerMasterEndpoint: Registering block manager 192.168.0.2:59049 with 366.3 MB RAM, BlockManagerId(0, 192.168.0.2, 59049, None). I have attached screenshots of the Spark history and dashboard below. The dashboard shows the job as running, but with no progress at all.
This is just a version issue. I was able to run the job on Spark 1.6.3. Thanks to all the people who just downvoted this question without explanation.
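For anyone hitting the same wall, the fix was simply to submit against a Spark 1.6.3 distribution instead; a sketch of the invocation, assuming Spark 1.6.3 is unpacked at /opt/spark-1.6.3 (the path is a placeholder):
# Same WordCount submit, but against a Spark 1.6.3 distribution.
$ /opt/spark-1.6.3/bin/spark-submit --class org.apache.beam.examples.WordCount --master local target/word-count-beam-0.1.jar --runner=SparkRunner --inputFile=pom.xml --output=counts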
I am new to Jenkins. I upgraded a couple of plugins (I don't remember which), and after that, when I try java -jar jenkins.war, I end up getting the following error.
jenkins.InitReactorRunner$1 onTaskFailed
SEVERE: Failed Loading global config
java.io.IOException: Unable to read /home/.jenkins/config.xml
I went through several links that address this issue, but no luck yet. In this link that I found, https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=764711, it says some plugins are missing, and surprisingly, my /home/.jenkins/plugins/ directory is empty!
How do I restore the necessary plugins from my command line?
I am using CentOS release 6.8 (Final)
Thank you :)
I had encountered this issue some days back; restarting the Jenkins service solved it.
The best you can do now is rename your config file. This way Jenkins will load with the default startup configuration.
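Concretely, something like the following, assuming the Jenkins home from the error message (/home/.jenkins); the plugin id git in the download command is only an example of restoring a missing plugin from the update site:
# Move the unreadable global config aside; Jenkins regenerates a default one.
$ mv /home/.jenkins/config.xml /home/.jenkins/config.xml.bak
# Optionally re-download a plugin .hpi straight into the plugins directory.
$ wget -P /home/.jenkins/plugins https://updates.jenkins.io/download/plugins/git/latest/git.hpi
$ java -jar jenkins.war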
I am using Ubuntu 12.04 and trying to connect Hadoop in Eclipse. I successfully installed the plugin for 1.0.4. I am using Java 1.7 for this.
My configuration data are:
username: hduser, location name: test, Map/Reduce host and port: localhost:9101, and M/R master host: localhost:9100.
My temp directory is /app/hduser/temp.
As per this location I set the advanced parameters, but I was not able to set fs.s3.buffer.dir, as no such directory like /app/hadoop/tmp//s3 had been created. I was also unable to set the map/reduce master directory; I only found the local directory. I did not find mapred.jobtracker.persist.job.dir, nor the map/reduce temp dir.
When I ran Hadoop in pseudo-distributed mode, I also did not find any DataNode running with jps.
I am not sure what the problem is here. In Eclipse I got the error while setting the DFS server. I got a message like:
An internal error occurred during: "Connecting to DFS test".
org/apache/commons/configuration/Configuration
Thanks all
I was facing the same issue. Later I found this:
Hadoop eclipse mapreduce is not working?
The main blog post is this. Hope this helps someone who is looking for a solution.
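In short, the error names org/apache/commons/configuration/Configuration, which means the Eclipse plugin jar cannot see the commons-configuration classes. A hedged sketch of the kind of repair those links describe, repacking the plugin with the missing jar; every path, jar name, and version below is a placeholder for my layout:
# Unpack the plugin, drop in the missing jar from Hadoop's lib directory,
# then repack and replace the original before restarting Eclipse.
$ cd /tmp
$ unzip ~/eclipse/plugins/hadoop-eclipse-plugin-1.0.4.jar -d plugin
$ cp /usr/local/hadoop/lib/commons-configuration-1.6.jar plugin/lib/
# Also add lib/commons-configuration-1.6.jar to the Bundle-ClassPath entry
# in plugin/META-INF/MANIFEST.MF before rezipping.
$ cd plugin && zip -r ../hadoop-eclipse-plugin-1.0.4.jar . && cd /tmp
$ cp hadoop-eclipse-plugin-1.0.4.jar ~/eclipse/plugins/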