I was trying to run the basic WordCount example from the Apache Hadoop 2.7 MapReduce tutorial here:
https://hadoop.apache.org/docs/r2.7.0/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html#Example:_WordCount_v1.0
I put the input files at: /user/hadoopLearning/WordCount/input/
Output path: /user/hadoopLearning/WordCount/output/
Then I ran the following command:
hadoop jar wc.jar WordCount /user/hadoopLearning/WordCount/input/file01 /user/hadoopLearning/WordCount/output
However, on running it I get the following error:
Exception in thread "main" org.apache.hadoop.mapred.FileAlreadyExistsException: **Output directory** hdfs://sandbox.hortonworks.com:8020/user/hadoopLearning/WordCount/**input**/file01 already exists
I haven't written a single line of code myself; I copied everything from the Apache tutorial linked above.
I understand the error, but if we look closely at it, it says the output directory already exists while the stack trace gives the path of the input directory.
Can anyone please help me? I am a beginner in Hadoop. Thanks in advance.
You're trying to create a file which already exists, and HDFS doesn't allow that.
Replace your output path ('/user/hadoopLearning/WordCount/output') with something else.
Try this command:
hadoop jar wc.jar WordCount /user/hadoopLearning/WordCount/input/file01 /user/hadoopLearning/WordCount/new_output_path
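Alternatively, if you want to keep reusing the same output path while testing, a minimal option (assuming nothing else still needs the previous results) is to delete the old output directory before re-running; on Hadoop 2.x that would be:
hadoop fs -rm -r /user/hadoopLearning/WordCount/output
MapReduce deliberately refuses to write into an existing output directory, so every run needs either a fresh output path or a cleaned-up old one.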
I am running maxent from R, via the biomod2 package, and the following error appeared. I do not come from a technical background and am not sure why this error is happening. Is it a memory problem, or, as someone suggested, is the Java path not set? I followed the instructions for setting up maxent to run in R, and I also downloaded the Java SE Development Kit and set a path for it as explained in this PDF: http://modata.ceoe.udel.edu/dev/dhaulsee/class_rcode/r_pkgmanuals/MAXENT4R_directions.pdf
I would be really grateful if you could help me understand this problem and find a solution to it. Thanks a lot.
Error in file(file, "rt") : cannot open the connection
In addition: Warning messages:
1: running command 'java' had status 1
2: running command 'java -mx512m -jar E:\bioclim_2.5min\model/maxent.jar environmentallayers
="rainfed/models/1432733200/m_47203134/Back_swd.csv"
samplesfile="rainfed/models/1432733200/m_47203134/Sp_swd.csv"
projectionlayers="rainfed/models/1432733200/m_47203134/Predictions/Pred_swd.csv"
outputdirectory="rainfed/models/1432733200/rainfed_PA1_Full_MAXENT_outputs"
outputformat=logistic redoifexists visible=FALSE linear=TRUE quadratic=TRUE
product=TRUE threshold=TRUE hinge=TRUE lq2lqptthreshold=80 l2lqthreshold=10
hingethreshold=15 beta_threshold=-1 beta_categorical=-1 beta_lqp=-1
beta_hinge=-1 defaultprevalence=0.5 autorun nowarnings notooltips
noaddsamplestobackground' had status 1
3: In file(file, "rt") :
cannot open file 'rainfed/models/1432733200/rainfed_PA1_Full_MAXENT_outputs/rainfed_PA1_
Full_Pred_swd.csv': No such file or directory
I've just managed to solve this problem: it is a problem with the file path specified. In my case, I had a space in one of the folder names, which was not accepted in the path to the maxent.jar file. From your error, it looks like the issue might be the two backslashes.
E:\bioclim_2.5min\model/maxent.jar
should probably read
E:/bioclim_2.5min/model/maxent.jar
I'm trying to run a MapReduce job. My files are in Parquet format.
I'm getting the following error:
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/thrift/TException
at parquet.format.converter.ParquetMetadataConverter.readParquetMetadata(ParquetMetadataConverter.java:268)
at parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:271)
at parquet.hadoop.ParquetFileReader.readSummaryFile(ParquetFileReader.java:200)
at parquet.hadoop.ParquetFileReader.readAllFootersInParallelUsingSummaryFiles(ParquetFileReader.java:99)
at parquet.hadoop.ParquetInputFormat.getFooters(ParquetInputFormat.java:354)
at parquet.hadoop.ParquetInputFormat.getFooters(ParquetInputFormat.java:339)
at parquet.hadoop.ParquetInputFormat.getSplits(ParquetInputFormat.java:246)
...
I tried adding the jar that contains TException with --libjars my_path/libthrift-0.9.0.jar, but I still get the same error.
Please try setting the HADOOP_CLASSPATH environment variable to point to a libthrift jar that matches the version you need.
For example:
export HADOOP_CLASSPATH=/var/lib/hdfs/libthrift-0.9.jar
Hope this helps!
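As a rough sketch, assuming the job is submitted the way shown in the question (my_job.jar and MyJobClass below are placeholders for your own jar and driver class), setting HADOOP_CLASSPATH covers the client JVM that reads the Parquet footers in main, while -libjars still ships the jar to the map and reduce tasks:
export HADOOP_CLASSPATH=my_path/libthrift-0.9.0.jar
hadoop jar my_job.jar MyJobClass -libjars my_path/libthrift-0.9.0.jar <input_path> <output_path>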
I am continuing to have trouble with the import.bat file for the Neo4j batch importer. I started a new thread as the original problem was resolved.
From the command prompt I run:
import.bat test.db sample\nodes.csv sample\rels.csv
I have tried some variations on the paths for the files, including absolute paths, but I continue to get the following error message:
The system cannot find the path specified.
Error: Could not find or load main class org.neo4j.batchimport.Importer
I also tried running import.sh from Cygwin and in my Debian VM, but I keep getting the error:
Error: Could not find or load main class org.neo4j.batchimport.Importer
What am I doing wrong?
Please download the zip file, not the GitHub clone.
It is a pre-built binary, as outlined in the README, so it doesn't require you to have Maven installed to build it.
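A minimal sketch of the intended workflow, assuming a Windows command prompt (the folder name below is a placeholder for wherever you extract the release zip):
rem run import.bat from inside the extracted release folder so its bundled jars end up on the classpath
cd C:\path\to\batch-importer
import.bat test.db sample\nodes.csv sample\rels.csv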
I am trying to run a MapReduce job to scan an HBase table. Currently I am using HBase 0.94.6, the version that comes with Cloudera 4.4. At some point in my program I use Scan(), and I properly import it with:
import org.apache.hadoop.hbase.client.Scan;
It compiles fine and I am able to create a jar file too; I do it by passing the HBase classpath as the value for the -cp option. When running the program, I get the following message:
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/client/Scan
I run the code using:
hadoop jar my_program.jar MyJobClass -libjars <list_of_jars>
where list_of_jars contains /opt/cloudera/parcels/CDH/lib/hbase/hbase.jar. Just to double-check, I confirmed that hbase.jar contains Scan. I did that with:
jar tf /opt/cloudera/parcels/CDH/lib/hbase/hbase.jar
And I can see the line:
org/apache/hadoop/hbase/client/Scan.class
in the output. All looks OK to me. I don't understand why it says that Scan is not defined; I pass the correct jar, and it contains the class.
Any help is appreciated.
Setting the HADOOP_CLASSPATH variable fixed the issue:
export HADOOP_CLASSPATH=`/usr/bin/hbase classpath`
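For context, the NoClassDefFoundError is thrown in thread "main", i.e. in the driver, so it is the client JVM that is missing the HBase classes; HADOOP_CLASSPATH fixes exactly that, while -libjars is mainly about making jars available to the tasks. Assuming the same submission command as in the question, a run would look something like:
export HADOOP_CLASSPATH=`/usr/bin/hbase classpath`
hadoop jar my_program.jar MyJobClass -libjars /opt/cloudera/parcels/CDH/lib/hbase/hbase.jar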
Hey guys, so I am trying to run the WordCount.java example provided by Cloudera. I ran the command below and am getting the exception that I have put below the command. Do you have any suggestions on how to proceed? I have gone through all the steps provided by Cloudera.
Thanks in advance.
hadoop jar ~/Desktop/wordcount.jar org.myorg.WordCount ~/Desktop/input
~/Desktop/output
Error:
ERROR security.UserGroupInformation: PriviledgedActionException
as:root (auth:SIMPLE)
cause:org.apache.hadoop.mapred.InvalidInputException: Input path does
not exist: hdfs://localhost/home/rushabh/Desktop/input
Exception in thread "main"
org.apache.hadoop.mapred.InvalidInputException: Input path does not
exist: hdfs://localhost/home/rushabh/Desktop/input
at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:194)
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:205)
at org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:977)
at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:969)
at org.apache.hadoop.mapred.JobClient.access$500(JobClient.java:170)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:880)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:833)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:416)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1177)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:833)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:807)
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1248)
at org.myorg.WordCount.main(WordCount.java:55)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at org.apache.hadoop.util.RunJar.main(RunJar.java:197)
Your input and output paths should be on HDFS; at the very least, the input should be on HDFS.
Use the following command:
hadoop jar ~/Desktop/wordcount.jar org.myorg.WordCount hdfs:/input
hdfs:/output
To copy a file from your Linux filesystem to HDFS, use the following command:
hadoop dfs -copyFromLocal ~/Desktop/input hdfs:/
and check your file using:
hadoop dfs -ls hdfs:/
Hope this will help.
The error message says that this file does not exist: "hdfs://localhost/home/rushabh/Desktop/input".
Check that the file does exist at the location you've told it to use.
Check that the hostname is correct. You are using "localhost", which most likely resolves to a loopback IP address, e.g. 127.0.0.1. That always means "this host" in the context of the machine you are running the code on.
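As a quick check (the paths here are simply taken from the error message), you can list the location to see whether it exists on HDFS at all:
hadoop fs -ls /home/rushabh/Desktop/input
hadoop fs -ls /home/rushabh/Desktop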
When I tried to run the wordcount MapReduce code, I was getting an error like:
ERROR security.UserGroupInformation: PriviledgedActionException as:hduser cause:org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/user/hduser/wordcount
I was trying to execute the wordcount MapReduce Java code with /user/hduser/wordcount and /user/hduser/wordcount-output as the input and output paths. I just prefixed these paths with the 'fs.default.name' value from core-site.xml and it ran perfectly.
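For illustration only (the host and port in the fs.default.name value below are assumptions, so check your own core-site.xml, and the jar and class names are placeholders), the fully qualified paths would look something like:
hadoop jar wordcount.jar org.myorg.WordCount hdfs://localhost:54310/user/hduser/wordcount hdfs://localhost:54310/user/hduser/wordcount-output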
The error clearly states that your input path is local. Please point the input path at something on HDFS rather than on the local machine. My guess is that
hadoop jar ~/Desktop/wordcount.jar org.myorg.WordCount ~/Desktop/input
~/Desktop/output
needs to be changed to
hadoop jar ~/Desktop/wordcount.jar org.myorg.WordCount <hdfs-input-dir>
<hdfs-output-dir>
NOTE: To run a MapReduce job, the input directory should be in HDFS, not local.
Hope this helps.
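As a concrete sketch of that (the /user/rushabh directories below are assumptions, so adjust the user and directory names to your own setup):
hadoop fs -put ~/Desktop/input /user/rushabh/input
hadoop jar ~/Desktop/wordcount.jar org.myorg.WordCount /user/rushabh/input /user/rushabh/output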
So I added the input folder to HDFS using the following command:
hadoop dfs -put /usr/lib/hadoop/conf input/
Check the ownership of the files in HDFS to ensure that the owner of the job (root) has read privileges on the input files. Cloudera provides an HDFS viewer that you can use to browse the filespace; open a web browser to either localhost:50075 or {fqdn}:50075 and click "Browse the filesystem" to view the input directory and input files. Check the ownership flags, just like on a *nix filesystem.
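The command-line equivalent would be along these lines (the /user/root/input path is an assumption based on the put command above, and chown normally has to be run as the HDFS superuser):
hadoop fs -ls /user/root/input
hadoop fs -chown -R root /user/root/input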