Hadoop Java Class cannot be found

Exception in thread "main" java.lang.ClassNotFoundException: WordCount -> so many answers relate to this issue, and it seems I am again missing a small point, which took me hours to figure out.
I will try to be as clear as possible about the paths, the code itself, and the other possible solutions I tried that did not work.
I am fairly sure I configured Hadoop correctly, as everything was working up until the last stage.
But I am still posting the details:
Environment variables and paths
#HADOOP VARIABLES START
export HADOOP_HOME=/usr/local/hadoop
export HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop
export HADOOP_INSTALL=/usr/local/hadoop
export HADOOP_CLASSPATH=/usr/lib/jvm/java-8-oracle/lib/tools.jar
export PATH=$PATH:$HADOOP_INSTALL/bin
export PATH=$PATH:$HADOOP_INSTALL/sbin
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
#export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_INSTALL/lib/native
#export HADOOP_OPTS="-Djava.library.path=$HADOOP_INSTALL/lib"
export JAVA_HOME=/usr/lib/jvm/java-8-oracle
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$JAVA_HOME/bin
#HADOOP VARIABLES END
The Class itself:
package com.cloud.hw03;

/**
 * Hello world!
 *
 */
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    public static class TokenizerMapper
            extends Mapper<Object, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {

        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "wordcount");
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        job.setJarByClass(WordCount.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
What I did for compiling and running:
Created the jar file in the same folder as my WordCount Maven project (eclipse-workspace):
$ hadoop com.sun.tools.javac.Main WordCount.java
$ jar cf WordCount.jar WordCount*.class
Running the program (I already created a directory and copied the input files into HDFS):
hadoop jar WordCount.jar WordCount /input/inputfile01 /input/outputfile01
The result is: Exception in thread "main" java.lang.ClassNotFoundException: WordCount
Since I am in the same directory as WordCount.class and created my jar file in that same directory, I am not specifying the full path to WordCount; I am running the second command above from that directory.
I already added job.setJarByClass(WordCount.class); to the code, so that was no help. I would appreciate your spending time on answering!
I am sure I am doing something unexpected again and could not figure it out for four hours.

The WordCount example code on the Hadoop site does not use a package.
Since you do have one, you would run the fully qualified class, the exact same way as a regular Java application:
hadoop jar WordCount.jar com.cloud.hw03.WordCount
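With the input and output arguments from the question, the full invocation would look like this:
hadoop jar WordCount.jar com.cloud.hw03.WordCount /input/inputfile01 /input/outputfile01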
Also, if you actually have a Maven project, then hadoop com.sun.tools.javac.Main is not correct. You would use Maven to compile and create the JAR with all the classes, not only the WordCount* files.
For example, from the folder with the pom.xml:
mvn package
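For reference, a minimal pom.xml for a project like this might look like the sketch below; the groupId/artifactId are assumptions taken from the package name, and the hadoop-client version is an assumption that should match the installed Hadoop:

<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.cloud</groupId>
  <artifactId>hw03</artifactId>
  <version>1.0</version>
  <dependencies>
    <!-- provided scope: the Hadoop jars are already on the cluster classpath -->
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-client</artifactId>
      <version>2.7.1</version>
      <scope>provided</scope>
    </dependency>
  </dependencies>
</project>

mvn package then produces target/hw03-1.0.jar, which you run the same way with the fully qualified class name.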
Otherwise, you need to be in the parent directory:
hadoop com.sun.tools.javac.Main ./com/cloud/hw03/WordCount.java
And run the jar cf command from that directory as well.
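Put together, a sketch of the manual (non-Maven) sequence, run from the parent directory that contains the com/ folder:

hadoop com.sun.tools.javac.Main ./com/cloud/hw03/WordCount.java
jar cf WordCount.jar com/cloud/hw03/*.class
hadoop jar WordCount.jar com.cloud.hw03.WordCount /input/inputfile01 /input/outputfile01

javac without -d writes the .class files next to the source, so the jar cf invocation picks them up with the package directory structure intact.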

Related

Hadoop Mapreduce word count Program

I am new to Hadoop. Below is my code. I am getting the following error message when I run the jar.
Input file (wordcount.txt) => this file is stored at the path "/home/cloudera/SK_JAR/jsonFile/wordcount.txt":
Hello Hadoop, Goodbye Hadoop.
package com.main;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class WordCount {

    public static class Map extends Mapper {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String line = value.toString();
            StringTokenizer tokenizer = new StringTokenizer(line);
            while (tokenizer.hasMoreTokens()) {
                word.set(tokenizer.nextToken());
                context.write(word, one);
            }
        }
    }

    public static class Reduce extends Reducer {
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws IllegalArgumentException, IOException, ClassNotFoundException, InterruptedException {
        // TODO Auto-generated method stub
        Configuration conf = new Configuration();
        Job job = new Job(conf, "wordcount");
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        job.setMapperClass(Map.class);
        job.setReducerClass(Reduce.class);
        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        // job.waitForCompletion(true);
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
Following is the error message. Can someone please help me with this?
Let me know if you need more details.
hadoop jar Wordcount.jar WordCount '/home/cloudera/SK_JAR/jsonFile/wordcount.txt' output
Error in laoding args file.java.io.FileNotFoundException: WordCount (No such file or directory)
at java.io.FileInputStream.open(Native Method)
at java.io.FileInputStream.<init>(FileInputStream.java:146)
at java.io.FileInputStream.<init>(FileInputStream.java:101)
at com.main.mainClass.main(mainClass.java:28)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
hadoop jar Wordcount.jar WordCount
Your main class is part of the package com.main, therefore com.main.WordCount is needed to start your class.
You can open your JAR file as a ZIP file to verify whether com/main/WordCount$Map.class is inside it. If it is not there, then Eclipse is building your JAR wrong.
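For example, listing the jar from the command line serves the same purpose as opening it as a ZIP:

jar tf Wordcount.jar | grep WordCount

The output should include com/main/WordCount.class, com/main/WordCount$Map.class, and com/main/WordCount$Reduce.class.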
I suggest you learn Maven, Gradle, or SBT to build a JAR rather than using your IDE. In production, these are the tools commonly used to bundle Hadoop JAR files.
It seems the main class was not specified for your runnable jar file.
If you are using Eclipse to create the jar file, then follow the steps below to create Wordcount.jar:
Right click on the project -> Export JAR file -> Click Next -> Uncheck all other resources. Then provide the path for exporting the .jar file -> Click Next -> Keep the options selected -> Click Next and Finish.

Parse Flat Json file using MAP Reduce JAVA

My task is to parse JSON objects from HDFS and write them to a separate file in HDFS. Below is my code.
package com.main;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.json.JSONException;
import org.json.JSONObject;

public class JsonMain {

    public static class Mapperclass extends Mapper<LongWritable, Text, Text, Text> {
        public void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String regId;
            String time;
            String line = value.toString();
            String[] tuple = line.split("\\n");
            try {
                for (int i = 0; i < tuple.length; i++) {
                    JSONObject obj = new JSONObject(tuple[i]);
                    regId = obj.getString("regId");
                    time = obj.getString("time");
                    context.write(new Text(regId), new Text(time));
                }
            } catch (JSONException e) {
                e.printStackTrace();
            }
        }
    }

    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        // TODO Auto-generated method stub
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(JsonMain.class);
        job.setMapperClass(Mapperclass.class);
        //job.setCombinerClass(IntSumReducer.class);
        //job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
Flatjson.txt
{"regId":"TbEtvRH""time":1509073895112}
{"regId":"lWJ2u0j""time":1509073905112}
{"regId":"uB9WG5K""time":1509073915112}
{"regId":"9sO7aqg""time":1509073925113}
{"regId":"hguOaKh""time":1509073935113}
{"regId":"p1CAzYt""time":1509073945113}
{"regId":"quDVMkD""time":1509073955113}
Note: I have included all the dependency jars in my project.
The following command was executed:
hadoop jar JsonMapper.jar com.main.JsonMain /user/cloudera/FlatJson/Flatjson.txt output007
Following is the error message I am getting:
17/11/01 08:11:12 INFO mapreduce.Job: The url to track the job: http://quickstart.cloudera:8088/proxy/application_1509542757670_0003/
17/11/01 08:11:12 INFO mapreduce.Job: Running job: job_1509542757670_0003
17/11/01 08:13:33 INFO mapreduce.Job: Job job_1509542757670_0003 running in uber mode : false
17/11/01 08:13:33 INFO mapreduce.Job: map 0% reduce 0%
17/11/01 08:15:32 INFO mapreduce.Job: Task Id : attempt_1509542757670_0003_m_000000_0, Status : FAILED
Error: java.lang.ClassNotFoundException: org.json.JSONException
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:270)
at org.apache.hadoop.conf.Configuration.getClassByNameOrNull(Configuration.java:2138)
"java.lang.ClassNotFoundException: org.json.JSONException" ==> I have imported this jar in my project. let me know what's wrong in this.
Let's start to debug your problem in steps.
Please do a jar -tvf JsonMapper.jar | grep JSONException and you will see that this class does not exist in your jar.
Please understand that including a dependency in your project through a dependency-management system like Maven does not guarantee its availability in the jar.
Please use the shade plugin to include all the dependency jars in your shaded fat jar.
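As a sketch, the standard shade-plugin configuration in the pom.xml looks like this (the plugin version is an assumption):

<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-shade-plugin</artifactId>
      <version>2.4.3</version>
      <executions>
        <!-- bind the shade goal to the package phase so mvn package builds the fat jar -->
        <execution>
          <phase>package</phase>
          <goals>
            <goal>shade</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>

With this in place, mvn package produces a jar that bundles org.json along with your own classes.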
"Error: java.lang.ClassNotFoundException: org.json.JSONException" --> This has been solved.
Previously I had the jar in /home/jar/java-json.jar path.
I have moved this jar to "/usr/lib/hadoop-mapreduce/" this path and included this jar and added this jar to the project it worked.
cp java-json.jar /usr/lib/hadoop-mapreduce

running hadoop wordCount example with groovy

I was trying to run the wordCount example with Groovy using this, but encountered an error:
Found interface org.apache.hadoop.mapreduce.JobContext, but class was expected
I found this for the above error, but could not locate a pom.xml file in my setup.
Then I came across this. How do we run this in Hadoop? Is it by making a jar file and running it similarly to the Java example (which ran fine)?
What is the difference between running a Groovy example using groovy-hadoop, using this file (not sure how to run this), and using hadoop-streaming? Why would we use one method over the others?
I've installed Hadoop 2.7.1 on Mac OS X 10.10.3.
I was able to run this Groovy file with Hadoop 2.7.1.
The procedure I followed is:
1. Install Gradle.
2. Generate the jar file using Gradle (a minimal build.gradle sketch follows the run command below). I asked this question, which helped me build the dependencies in Gradle.
3. Run it with Hadoop, as usual for a Java jar file, using this command from the folder where the jar is located:
hadoop jar buildSrc-1.0.jar in1 out4
where in1 is the input file and out4 is the output folder in HDFS.
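For step 2, a minimal build.gradle sketch; the Hadoop version is an assumption matching the 2.7.1 install mentioned above, and the Groovy runtime still has to be available to Hadoop at run time (e.g. bundled into the jar or added to the classpath):

apply plugin: 'groovy'

repositories {
    mavenCentral()
}

dependencies {
    // the Groovy distribution bundled with Gradle, for compiling the .groovy sources
    compile localGroovy()
    // Hadoop client APIs; assumed version, match it to the installed Hadoop
    compile 'org.apache.hadoop:hadoop-client:2.7.1'
}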
EDIT: As the above link is broken, I am pasting the Groovy file here.
import StartsWithCountMapper
import StartsWithCountReducer
import org.apache.hadoop.conf.Configured
import org.apache.hadoop.fs.Path
import org.apache.hadoop.io.IntWritable
import org.apache.hadoop.io.LongWritable
import org.apache.hadoop.io.Text
import org.apache.hadoop.mapreduce.Job
import org.apache.hadoop.mapreduce.Mapper
import org.apache.hadoop.mapreduce.Reducer
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat
import org.apache.hadoop.util.Tool
import org.apache.hadoop.util.ToolRunner

class CountGroovyJob extends Configured implements Tool {
    @Override
    int run(String[] args) throws Exception {
        Job job = Job.getInstance(getConf(), "StartsWithCount")
        job.setJarByClass(getClass())

        // configure output and input source
        TextInputFormat.addInputPath(job, new Path(args[0]))
        job.setInputFormatClass(TextInputFormat)

        // configure mapper and reducer
        job.setMapperClass(StartsWithCountMapper)
        job.setCombinerClass(StartsWithCountReducer)
        job.setReducerClass(StartsWithCountReducer)

        // configure output
        TextOutputFormat.setOutputPath(job, new Path(args[1]))
        job.setOutputFormatClass(TextOutputFormat)
        job.setOutputKeyClass(Text)
        job.setOutputValueClass(IntWritable)

        return job.waitForCompletion(true) ? 0 : 1
    }

    static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new CountGroovyJob(), args))
    }

    class GroovyMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private final static IntWritable countOne = new IntWritable(1);
        private final Text reusableText = new Text();

        @Override
        protected void map(LongWritable key, Text value, Mapper.Context context) {
            value.toString().tokenize().each {
                reusableText.set(it)
                context.write(reusableText, countOne)
            }
        }
    }

    class GroovyReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private IntWritable outValue = new IntWritable();

        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Reducer.Context context) {
            outValue.set(values.collect({ it.value }).sum())
            context.write(key, outValue);
        }
    }
}
The library you are using, groovy-hadoop, says it supports Hadoop 0.20.2. It is really old.
But the CountGroovyJob.groovy code you are trying to run looks like it is supposed to run on versions 2.x.x of Hadoop.
I can see this because the imports use packages such as org.apache.hadoop.mapreduce.Mapper, whereas before version 2 it was called org.apache.hadoop.mapred.Mapper.
The most-voted answer in the SO question you linked is probably the answer you need. You have an incompatibility problem: the groovy-hadoop library cannot work with your Hadoop 2.7.1.

Hadoop Java Error : Exception in thread "main" java.lang.NoClassDefFoundError: WordCount (wrong name: org/myorg/WordCount)

I am new to Hadoop. I followed the Michael Noll tutorial to set up Hadoop on a single node. I tried running the WordCount program. This is the code I used:
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    public static class TokenizerMapper
            extends Mapper<Object, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {

        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "WordCount");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
This is what I get when I try running it:
hduser#aswin-HP-Pavilion-15-Notebook-PC:/usr/local/hadoop$ bin/hadoop jar wc.jar WordCount /home/hduser/gutenberg /home/hduser/gutenberg-output/sample.txt
Exception in thread "main" java.lang.NoClassDefFoundError: WordCount (wrong name: org/myorg/WordCount)
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:788)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:447)
at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:411)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:270)
at org.apache.hadoop.util.RunJar.main(RunJar.java:205)
Can anyone please help me?
My classpath:
hduser#aswin-HP-Pavilion-15-Notebook-PC:/usr/local/hadoop$ hadoop classpath
/usr/local/hadoop/etc/hadoop:/usr/local/hadoop/share/hadoop/common/lib/*:/usr/local/hadoop/share/hadoop/common/*:/usr/local/hadoop/share/hadoop/hdfs:/usr/local/hadoop/share/hadoop/hdfs/lib/*:/usr/local/hadoop/share/hadoop/hdfs/*:/usr/local/hadoop/share/hadoop/yarn/lib/*:/usr/local/hadoop/share/hadoop/yarn/*:/usr/local/hadoop/share/hadoop/mapreduce/lib/*:/usr/local/hadoop/share/hadoop/mapreduce/*:/usr/lib/jvm/java-7-openjdk-i386/lib/tools.jar:/usr/local/hadoop/contrib/capacity-scheduler/*.jar
Try this:
import java.io.IOException;
import java.util.Iterator;
import java.util.StringTokenizer;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.TextInputFormat;
import org.apache.hadoop.mapred.TextOutputFormat;

public class WordCount {

    public static class Map extends MapReduceBase implements
            Mapper<LongWritable, Text, Text, IntWritable> {

        @Override
        public void map(LongWritable key, Text value, OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException {
            String line = value.toString();
            StringTokenizer tokenizer = new StringTokenizer(line);
            while (tokenizer.hasMoreTokens()) {
                value.set(tokenizer.nextToken());
                output.collect(value, new IntWritable(1));
            }
        }
    }

    public static class Reduce extends MapReduceBase implements
            Reducer<Text, IntWritable, Text, IntWritable> {

        @Override
        public void reduce(Text key, Iterator<IntWritable> values,
                OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException {
            int sum = 0;
            while (values.hasNext()) {
                sum += values.next().get();
            }
            output.collect(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(WordCount.class);
        conf.setJobName("wordcount");
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);
        conf.setMapperClass(Map.class);
        conf.setReducerClass(Reduce.class);
        conf.setInputFormat(TextInputFormat.class);
        conf.setOutputFormat(TextOutputFormat.class);
        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));
        JobClient.runJob(conf);
    }
}
Then run the command:
bin/hadoop jar WordCount.jar WordCount /hdfs_Input_filename /output_filename
If your code is in a particular package, then you have to mention the package name with the class name:
bin/hadoop jar WordCount.jar PakageName.WordCount /hdfs_Input_filename /output_filename
This may sound crazy. I added package org.myorg; to my code and compiled it again. I placed the class files in an org/myorg folder and created the jar file from them. Then I ran it using the jar wc.jar org.myorg.WordCount command and it executed successfully. It would be nice if someone could explain to me how it actually ran :D. Anyway, thanks a lot for helping me, guys.
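For reference, a sketch of the sequence that comment describes, run from the directory containing the compiled classes:

mkdir -p org/myorg
mv WordCount*.class org/myorg/
jar cf wc.jar org/
bin/hadoop jar wc.jar org.myorg.WordCount /home/hduser/gutenberg /home/hduser/gutenberg-output/sample.txt

The jar entries now carry the org/myorg/ prefix, which is exactly what the class loader expects for a class declared in package org.myorg.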
Try explicitly including the nested classes (i.e. TokenizerMapper and IntSumReducer) in your jar file. Here is how I did it:
jar cvf WordCount.jar WordCount.class WordCount\$TokenizerMapper.class WordCount\$IntSumReducer.class
There is something wrong with the packaging.
You should try this:
jar cf wc.jar WordCount*.class
Notice there is a '*' symbol.
You are using a package in your class. So your command should be:
bin/hadoop jar wc.jar org.myorg.WordCount /home/hduser/gutenberg /home/hduser/gutenberg-output/sample.txt
I think you made a mistake here:
/usr/local/hadoop$ bin/hadoop jar wc.jar WordCount /home/hduser/gutenberg /home/hduser/gutenberg-output/sample.txt
Please change it to:
/usr/local/hadoop$ bin/hadoop jar wc.jar org.myorg.WordCount /home/hduser/gutenberg /home/hduser/gutenberg-output/sample.txt
That should work.
@Aswin Alagappan: The reason is that a jar file contains your path inside it. The JVM cannot find your class in the jar file because it sits at the "jar\org\myorg" path. Understand?
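For illustration, listing a jar built this way shows why the fully qualified name is required:

$ jar tf wc.jar
org/myorg/WordCount.class
org/myorg/WordCount$TokenizerMapper.class
org/myorg/WordCount$IntSumReducer.class

The class sits at the path org/myorg/ inside the archive, so it must be requested as org.myorg.WordCount, not as plain WordCount.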
Kishore's answer allowed me to go in the right direction; if possible, I want to confirm this by reporting what I did in an experiment with Java code on multiplication of sparse matrices:
1) Source code (downloaded from https://github.com/marufaytekin/MatrixMultiply/tree/master/src/main/java/com/lendap/hadoop) and saved in /home/hduser/playground/src/matrixMult
2) Downloaded datasets (matrices M and N from https://github.com/marufaytekin/MatrixMultiply/tree/master/input) and saved them in HDFS, under the path /user/hduser/inMatrix
3) Compilation with the Hadoop classes, creating the Java classes in playground/classes5:
javac -classpath $HADOOP_HOME/share/hadoop/common/lib/activation-1.1.jar:$HADOOP_HOME/share/hadoop/common/hadoop-common-2.7.1.jar:/usr/hadoop/hadoop-2.7.1/share/hadoop/mapreduce/* -d playground/classes5 playground/src/matrixMult/*
4) Creation of the jar file MatrixMultiply.jar with the following command:
jar -cvf playground/MatrixMultiply.jar -C playground/classes5/ .
5) Hadoop MapReduce command (from the $HADOOP_HOME path, which in my case is /usr/hadoop/hadoop-2.7.1):
hadoop jar /home/hduser/playground/MatrixMultiply.jar com.lendap.hadoop.MatrixMultiply /user/hduser/inMatrix/ outputMatrix
6) Correct execution of the MapReduce job on my 4-node cluster. Here is part of the final output:
0,375,890.0
0,376,1005.0
0,377,1377.0
0,378,604.0
0,379,924.0
0,38,476.0
0,380,621.0
0,381,730.0
990,225,542.0
990,226,639.0
990,227,466.0
990,228,406.0
990,229,343.0
990,23,397.0
990,230,794.0

ClassNotFoundException when running WordCount example in Eclipse

I'm trying to run the exemplary code for the WordCount map/reduce job. I'm running it on Hadoop 1.2.1, and I'm running it from Eclipse. Here is the code I am trying to run:
package mypackage;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.Reducer.Context;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class WordCount {

    public static class Map extends
            Mapper<LongWritable, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String line = value.toString();
            StringTokenizer tokenizer = new StringTokenizer(line);
            while (tokenizer.hasMoreTokens()) {
                word.set(tokenizer.nextToken());
                context.write(word, one);
            }
        }
    }

    public static class Reduce extends
            Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterable<IntWritable> values,
                Context context) throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("mapred.job.tracker", "maprfs://,y_address");
        conf.set("fs.default.name", "hdfs://my_address");

        Job job = new Job(conf, "wordcount");
        job.setJarByClass(WordCount.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        job.setMapperClass(Map.class);
        job.setReducerClass(Reduce.class);
        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        job.waitForCompletion(true);
    }
}
Unfortunately, running this code ends with the following error:
13/11/04 13:27:53 INFO mapred.JobClient: Task Id : attempt_201310311611_0005_m_000000_0, Status : FAILED
java.lang.RuntimeException: java.lang.ClassNotFoundException: com.rf.hadoopspikes.WordCount$Map
    at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:857)
    at org.apache.hadoop.mapreduce.JobContext.getMapperClass(JobContext.java:199)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:718)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:364)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
    at org.apache.hadoop.mapred.Child.main(Child.java:249)
I understand that the WordCount class cannot be found, but I have no idea how to make this work.
Any ideas?
When running this directly from Eclipse, you need to make sure the classes have been bundled into a jar file (which Hadoop then copies up to HDFS). Your error most probably relates to the fact that your jar hasn't been built, or that at runtime the classes are being run from the output directory and not from the bundled jar.
Try to export the classes into a jar file, and then run your WordCount class from that jar file. You could also look into using the Eclipse Hadoop plugin, which I think handles all of this for you. A final option would be to bundle the jar and then launch it from the command line (as outlined in the various Hadoop tutorials).
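As a sketch, assuming Eclipse compiled the classes into the project's default bin directory (both bin and the jar name are assumptions), the export-and-run sequence from the command line would be:

jar cf wordcount.jar -C bin .
hadoop jar wordcount.jar mypackage.WordCount <input_path> <output_path>

The -C bin . form puts the package directories (here mypackage/) at the root of the jar, which is what hadoop jar needs to resolve the class.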
