Parse a flat JSON file using MapReduce in Java

My task is to parse JSON objects from HDFS and write them to a separate file in HDFS. Below is my code.
package com.main;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.json.JSONException;
import org.json.JSONObject;

public class JsonMain {

    public static class Mapperclass extends Mapper<LongWritable, Text, Text, Text> {

        public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
            String regId;
            String time;
            String line = value.toString();
            String[] tuple = line.split("\\n");
            try {
                for (int i = 0; i < tuple.length; i++) {
                    JSONObject obj = new JSONObject(tuple[i]);
                    regId = obj.getString("regId");
                    time = obj.getString("time");
                    context.write(new Text(regId), new Text(time));
                }
            } catch (JSONException e) {
                e.printStackTrace();
            }
        }
    }

    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        // TODO Auto-generated method stub
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(JsonMain.class);
        job.setMapperClass(Mapperclass.class);
        //job.setCombinerClass(IntSumReducer.class);
        //job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
Flatjson.txt
{"regId":"TbEtvRH""time":1509073895112}
{"regId":"lWJ2u0j""time":1509073905112}
{"regId":"uB9WG5K""time":1509073915112}
{"regId":"9sO7aqg""time":1509073925113}
{"regId":"hguOaKh""time":1509073935113}
{"regId":"p1CAzYt""time":1509073945113}
{"regId":"quDVMkD""time":1509073955113}
Note: I have included all the dependency JARs in my project.
The following command was executed:
hadoop jar JsonMapper.jar com.main.JsonMain /user/cloudera/FlatJson/Flatjson.txt output007
The following is the error message I am getting:
17/11/01 08:11:12 INFO mapreduce.Job: The url to track the job: http://quickstart.cloudera:8088/proxy/application_1509542757670_0003/
17/11/01 08:11:12 INFO mapreduce.Job: Running job: job_1509542757670_0003
17/11/01 08:13:33 INFO mapreduce.Job: Job job_1509542757670_0003 running in uber mode : false
17/11/01 08:13:33 INFO mapreduce.Job: map 0% reduce 0%
17/11/01 08:15:32 INFO mapreduce.Job: Task Id : attempt_1509542757670_0003_m_000000_0, Status : FAILED
Error: java.lang.ClassNotFoundException: org.json.JSONException
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:270)
at org.apache.hadoop.conf.Configuration.getClassByNameOrNull(Configuration.java:2138)
"java.lang.ClassNotFoundException: org.json.JSONException" ==> I have imported this jar in my project. let me know what's wrong in this.

Let's debug your problem in steps.
Please run jar -tvf JsonMapper.jar | grep JSONException and you will see that this class does not exist in your JAR.
Please understand that including a dependency in your project through a dependency management system like Maven doesn't guarantee its availability in your JAR.
Please use the Maven Shade plugin to include all the dependency JARs in a shaded fat JAR, as sketched below.
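For illustration, a minimal Shade plugin configuration might look like this in the pom.xml (a sketch: the version number is an assumption, and any relocation/filter tuning is omitted):

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <version>3.2.4</version>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>shade</goal>
            </goals>
        </execution>
    </executions>
</plugin>

After mvn package, the JAR produced under target/ carries the org.json classes inside it and can be passed to hadoop jar directly.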

"Error: java.lang.ClassNotFoundException: org.json.JSONException" --> This has been solved.
Previously I had the jar in /home/jar/java-json.jar path.
I have moved this jar to "/usr/lib/hadoop-mapreduce/" this path and included this jar and added this jar to the project it worked.
cp java-json.jar /usr/lib/hadoop-mapreduce
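Separately, note that the driver declares job.setOutputValueClass(IntWritable.class) while the mapper emits Text values; since no reducer is configured, the job runs with the default single reducer and the map-side collector will reject the Text values with a type-mismatch error once the missing JAR is fixed. A minimal sketch of the corrected driver lines (assuming a map-only job is what's intended here):

job.setOutputKeyClass(Text.class);
job.setOutputValueClass(Text.class); // must match the Text values the mapper writes
job.setNumReduceTasks(0);            // map-only: mapper output goes straight to the output files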

Related

Hadoop Java Class cannot be found

Exception in thread "main" java.lang.ClassNotFoundException: WordCount -> so many answers relate to this issue, and it seems I am again missing a small point that has taken me hours to figure out.
I will try to be as clear as possible about the paths, the code itself, and the other possible solutions I tried that did not work.
I am fairly sure Hadoop is configured correctly, as everything was working up until the last stage.
But I am still posting the details:
Environment variables and paths
#HADOOP VARIABLES START
export HADOOP_HOME=/usr/local/hadoop
export HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop
export HADOOP_INSTALL=/usr/local/hadoop
export HADOOP_CLASSPATH=/usr/lib/jvm/java-8-oracle/lib/tools.jar
export PATH=$PATH:$HADOOP_INSTALL/bin
export PATH=$PATH:$HADOOP_INSTALL/sbin
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
#export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_INSTALL/lib/native
#export HADOOP_OPTS="-Djava.library.path=$HADOOP_INSTALL/lib"
export JAVA_HOME=/usr/lib/jvm/java-8-oracle
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$JAVA_HOME/bin
#HADOOP VARIABLES END
The Class itself:
package com.cloud.hw03;

/**
 * Hello world!
 *
 */
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    public static class TokenizerMapper
            extends Mapper<Object, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context
                ) throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {

        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values,
                Context context
                ) throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "wordcount");
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        job.setJarByClass(WordCount.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
What I did for compiling and running:
Created the JAR file in the same folder as my WordCount Maven project (eclipse-workspace):
$ hadoop com.sun.tools.javac.Main WordCount.java
$ jar cf WordCount.jar WordCount*.class
Running the program (I already created a directory and copied the input and output files into HDFS):
hadoop jar WordCount.jar WordCount /input/inputfile01 /input/outputfile01
The result is: Exception in thread "main" java.lang.ClassNotFoundException: WordCount
Since I am in the same directory as WordCount.class and created my JAR file in that same directory, I am not specifying the full path to WordCount; I am running the above 2nd command in this directory.
I already added job.setJarByClass(WordCount.class); to the code, so no help. I would appreciate your spending time on answering!
I am sure I am doing something unexpected again and cannot figure it out after 4 hours.
The WordCount example code on the Hadoop site does not use a package.
Since you do have one, you would run the fully qualified class, the exact same way as a regular Java application:
hadoop jar WordCount.jar com.cloud.hw03.WordCount
Also, if you actually have a Maven project, then hadoop com.sun.tools.javac.Main is not correct. You would actually use Maven to compile and create the JAR with all the classes, not only the WordCount* files.
For example, from the folder with the pom.xml:
mvn package
Otherwise, you need to be in the parent directory:
hadoop com.sun.tools.javac.Main ./com/cloud/hw03/WordCount.java
And run the jar cf command also from that directory, as sketched below.
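For illustration, the full sequence from that parent directory might look like this (a sketch; the jar command must include the package directories so the class lands at com/cloud/hw03/WordCount.class inside the JAR, and the HDFS paths are the ones from the question):

hadoop com.sun.tools.javac.Main ./com/cloud/hw03/WordCount.java
jar cf WordCount.jar com/cloud/hw03/*.class
hadoop jar WordCount.jar com.cloud.hw03.WordCount /input/inputfile01 /input/outputfile01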

Hadoop Mapreduce word count Program

I am new to Hadoop. Below is my code, and I am getting the following error message when I run the JAR.
Input file (wordcount.txt) => this file is stored at the path "/home/cloudera/SK_JAR/jsonFile/wordcount.txt"
Hello Hadoop, Goodbye Hadoop.
package com.main;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class WordCount {

    public static class Map extends Mapper<LongWritable, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
            String line = value.toString();
            StringTokenizer tokenizer = new StringTokenizer(line);
            while (tokenizer.hasMoreTokens()) {
                word.set(tokenizer.nextToken());
                context.write(word, one);
            }
        }
    }

    public static class Reduce extends Reducer<Text, IntWritable, Text, IntWritable> {

        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws IllegalArgumentException, IOException, ClassNotFoundException, InterruptedException {
        // TODO Auto-generated method stub
        Configuration conf = new Configuration();
        Job job = new Job(conf, "wordcount");
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        job.setMapperClass(Map.class);
        job.setReducerClass(Reduce.class);
        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        // job.waitForCompletion(true);
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
The following is the error message. Can someone please help me on this?
Let me know if you guys need more details.
hadoop jar Wordcount.jar WordCount '/home/cloudera/SK_JAR/jsonFile/wordcount.txt' output
Error in laoding args file.java.io.FileNotFoundException: WordCount (No such file or directory)
at java.io.FileInputStream.open(Native Method)
at java.io.FileInputStream.<init>(FileInputStream.java:146)
at java.io.FileInputStream.<init>(FileInputStream.java:101)
at com.main.mainClass.main(mainClass.java:28)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
hadoop jar Wordcount.jar WordCount
Your main class is part of the package com.main, therefore com.main.WordCount is needed to start your class.
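For example (reusing the JAR and paths from the question):
hadoop jar Wordcount.jar com.main.WordCount /home/cloudera/SK_JAR/jsonFile/wordcount.txt output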
You can open your JAR file as a ZIP file to verify whether you can find com/main/WordCount$Map.class within it. If it is not there, then Eclipse is building your JAR wrong.
I suggest you learn Maven, Gradle, or SBT to build a JAR rather than using your IDE. In production, these are the tools commonly used to bundle Hadoop JAR files.
It seems the main class was not specified for your runnable JAR file.
If you are using Eclipse to create the JAR file, then follow the steps below to create Wordcount.jar:
Right click on the project -> Export JAR file -> Click Next -> Uncheck all other resources. Then provide the path for exporting the .jar file -> Click Next -> Keep the options selected -> Click Next and Finish.

Search for a given word in a text file using MapReduce in Java on Ubuntu 16.04

I have to make a project that finds a given word (string). The string is input by the user. The program must then find the occurrences of that word in a particular text file stored in HDFS. The output should report the presence of the word string.
package stringSearchJob;

import java.io.IOException;
import java.util.Scanner;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class StringSearch {

    public static void main(String argv[]) throws Exception {
        try {
            if (argv.length < 3) {
                System.err.println("Give the input/ output/ keyword!");
                return;
            }
            JobConf conf = new JobConf(StringSearch.class);
            Job job = new Job(conf, "StringSearch");
            FileInputFormat.addInputPath(job, new Path(argv[0]));
            FileOutputFormat.setOutputPath(job, new Path(argv[1]));
            conf.set("search", argv[2]);
            job.setJarByClass(StringSearch.class);
            job.setMapperClass(WordMapper.class);
            job.setNumReduceTasks(0);
            job.setMapOutputKeyClass(Text.class);
            job.setMapOutputValueClass(IntWritable.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            JobClient.runJob(conf);
            job.waitForCompletion(true);
        }
        catch (Exception e) {
            e.printStackTrace();
        }
    }

    public static class WordMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        @Override
        public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
            try {
                Configuration conf = context.getConfiguration();
                String search = conf.get("search");
                String line = value.toString();
                Scanner scanner = new Scanner(line);
                while (scanner.hasNext()) {
                    if (line.contains(search)) {
                        String line1 = scanner.next();
                        context.write(new Text(line1), new IntWritable(1));
                    }
                }
                scanner.close();
            }
            catch (IOException e) {
                e.printStackTrace();
            }
            catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}
Is my code wrong? The output I get on the Ubuntu 16.04 terminal is not correct. The steps I followed are as follows:
After writing the above code, I exported it into a runnable JAR file named StringSearch.jar. The class name was StringSearch.
Now, on the terminal, I ran the following commands:
hadoop fs -mkdir /user
hadoop fs -mkdir /user/hduser
hadoop fs -mkdir /user/hduser/StringSearch
hadoop fs -mkdir Stringsearch/input
hadoop fs -copyFromLocal sample.txt StringSearch/input
hadoop jar StringSearchNew.jar StringSearch /user/hduser/StringSearch/input user/hduser/StringSearch/output 'Lord'
And I am getting the errors as follows.
17/08/20 19:17:35 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/08/20 19:17:41 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
17/08/20 19:17:41 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
17/08/20 19:17:41 INFO jvm.JvmMetrics: Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
Exception in thread "main" org.apache.hadoop.mapred.InvalidJobConfException: Output directory not set in JobConf.
at org.apache.hadoop.mapred.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:117)
at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:268)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:139)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:575)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:570)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:570)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:561)
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:870)
at stringSearchJob.StringSearch.main(StringSearch.java:43)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
I basically learned how to use Hadoop MapReduce from the Internet only. When I tried to write the program in Java after going through all the other similar answers, it didn't give the right output. I am a complete newbie to Hadoop and would appreciate your help in resolving the issue. I don't get what's wrong here!
After reading the answer, I edited the code and got the following errors:
17/08/24 05:01:30 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Exception in thread "main" java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.eclipse.jdt.internal.jarinjarloader.JarRsrcLoader.main(JarRsrcLoader.java:58)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.io.IOException: No FileSystem for scheme: hdfs
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2660)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2667)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:172)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:357)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.addInputPath(FileInputFormat.java:520)
at stringSearchJob.StringSearch.main(StringSearch.java:28)
... 11 more
Set your input and output directories on the JobConf object, not the Job object.
You must change it as below:
FileInputFormat.setInputPaths(conf /*from job to conf*/, new Path(args[0]));
FileOutputFormat.setOutputPath(conf /*from job to conf*/, new Path(args[1]));
So the modified code should look as below:

if (argv.length < 3) {
    System.err.println("Give the input/ output/ keyword!");
    return;
}
JobConf conf = new JobConf(StringSearch.class);
Job job = new Job(conf, "StringSearch");
FileInputFormat.setInputPaths(conf, new Path(argv[0]));
FileOutputFormat.setOutputPath(conf, new Path(argv[1]));
conf.set("search", argv[2]);
job.setJarByClass(StringSearch.class);
job.setMapperClass(WordMapper.class);
job.setNumReduceTasks(0);
job.setMapOutputKeyClass(Text.class);
job.setMapOutputValueClass(IntWritable.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(IntWritable.class);
JobClient.runJob(conf);
job.waitForCompletion(true);
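The underlying problem (visible in the stack trace, where org.apache.hadoop.mapred.JobClient.runJob performs the output check) is that the code mixes the old mapred API (JobConf, JobClient) with the new mapreduce API (Job): the paths were set on the Job, but JobClient.runJob(conf) only sees the JobConf. A minimal sketch of a driver written against the new API alone (an alternative reading of the fix, not the answerer's code; note "search" is set on the Configuration before the Job copies it):

Configuration conf = new Configuration();
conf.set("search", argv[2]);                     // set before Job.getInstance(), which copies the conf
Job job = Job.getInstance(conf, "StringSearch");
job.setJarByClass(StringSearch.class);
job.setMapperClass(WordMapper.class);
job.setNumReduceTasks(0);                        // map-only job
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(IntWritable.class);
FileInputFormat.addInputPath(job, new Path(argv[0]));
FileOutputFormat.setOutputPath(job, new Path(argv[1]));
System.exit(job.waitForCompletion(true) ? 0 : 1);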

Hadoop Java Error : Exception in thread "main" java.lang.NoClassDefFoundError: WordCount (wrong name: org/myorg/WordCount)

I am new to Hadoop. I followed the Michael Noll tutorial to set up Hadoop on a single node. I tried running the WordCount program. This is the code I used:
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    public static class TokenizerMapper
            extends Mapper<Object, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context
                ) throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {

        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values,
                Context context
                ) throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "WordCount");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
This is what I get when I try running it.
hduser#aswin-HP-Pavilion-15-Notebook-PC:/usr/local/hadoop$ bin/hadoop jar wc.jar WordCount /home/hduser/gutenberg /home/hduser/gutenberg-output/sample.txt
Exception in thread "main" java.lang.NoClassDefFoundError: WordCount (wrong name: org/myorg/WordCount)
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:788)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:447)
at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:411)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:270)
at org.apache.hadoop.util.RunJar.main(RunJar.java:205)
Can anyone please help me?
My classpath:
hduser#aswin-HP-Pavilion-15-Notebook-PC:/usr/local/hadoop$ hadoop classpath
/usr/local/hadoop/etc/hadoop:/usr/local/hadoop/share/hadoop/common/lib/*:/usr/local/hadoop/share/hadoop/common/*:/usr/local/hadoop/share/hadoop/hdfs:/usr/local/hadoop/share/hadoop/hdfs/lib/*:/usr/local/hadoop/share/hadoop/hdfs/*:/usr/local/hadoop/share/hadoop/yarn/lib/*:/usr/local/hadoop/share/hadoop/yarn/*:/usr/local/hadoop/share/hadoop/mapreduce/lib/*:/usr/local/hadoop/share/hadoop/mapreduce/*:/usr/lib/jvm/java-7-openjdk-i386/lib/tools.jar:/usr/local/hadoop/contrib/capacity-scheduler/*.jar
Try this:
import java.io.IOException;
import java.util.Iterator;
import java.util.StringTokenizer;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.TextInputFormat;
import org.apache.hadoop.mapred.TextOutputFormat;

public class WordCount {

    public static class Map extends MapReduceBase implements
            Mapper<LongWritable, Text, Text, IntWritable> {

        @Override
        public void map(LongWritable key, Text value, OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException {
            String line = value.toString();
            StringTokenizer tokenizer = new StringTokenizer(line);
            while (tokenizer.hasMoreTokens()) {
                value.set(tokenizer.nextToken());
                output.collect(value, new IntWritable(1));
            }
        }
    }

    public static class Reduce extends MapReduceBase implements
            Reducer<Text, IntWritable, Text, IntWritable> {

        @Override
        public void reduce(Text key, Iterator<IntWritable> values,
                OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException {
            int sum = 0;
            while (values.hasNext()) {
                sum += values.next().get();
            }
            output.collect(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(WordCount.class);
        conf.setJobName("wordcount");
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);
        conf.setMapperClass(Map.class);
        conf.setReducerClass(Reduce.class);
        conf.setInputFormat(TextInputFormat.class);
        conf.setOutputFormat(TextOutputFormat.class);
        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));
        JobClient.runJob(conf);
    }
}
Then run the command:
bin/hadoop jar WordCount.jar WordCount /hdfs_Input_filename /output_filename
If your code is in a particular package, then you have to mention the package name with the class name:
bin/hadoop jar WordCount.jar PackageName.WordCount /hdfs_Input_filename /output_filename
This may sound crazy. I added package org.myorg; to my code and compiled it again. I placed the class files in an org/myorg folder and created the jar file using them. Then I ran it using the jar wc.jar org.myorg.WordCount command and it executed successfully. It would be nice if someone could explain to me how it actually ran :D. Anyway, thanks a lot for helping me, guys.
Try explicitly including the nested classes (i.e. TokenizerMapper and IntSumReducer) in your jar file. Here is how I did it:
jar cvf WordCount.jar WordCount.class WordCount\$TokenizerMapper.class WordCount\$IntSumReducer.class
There is something wrong with the packaging.
You should try this:
jar cf wc.jar WordCount*.class
Notice the '*' symbol.
You are using a package in your class, so your command should be:
bin/hadoop jar wc.jar org.myorg.WordCount /home/hduser/gutenberg /home/hduser/gutenberg-output/sample.txt
I think you made a mistake here:
/usr/local/hadoop$ bin/hadoop jar wc.jar WordCount /home/hduser/gutenberg /home/hduser/gutenberg-output/sample.txt
Please change it to:
/usr/local/hadoop$ bin/hadoop jar wc.jar org.myorg.WordCount /home/hduser/gutenberg /home/hduser/gutenberg-output/sample.txt
That should work.
@Aswin Alagappan: The reason is that a jar file contains your path inside it. The JVM cannot find your class in the jar file because it sits at the "jar\org\myorg" path.
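To see this concretely, listing the jar's contents shows the classes stored under their package path, which is why the fully qualified name is required (illustrative output, assuming the jar was built from the org/myorg folder as described above):

jar tf wc.jar
org/myorg/WordCount.class
org/myorg/WordCount$TokenizerMapper.class
org/myorg/WordCount$IntSumReducer.class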
Kishore's answer let me go in the right direction. If possible, I want to confirm this by reporting what I did in an experiment with Java code on multiplication of sparse matrices:
1) Source code (downloaded from https://github.com/marufaytekin/MatrixMultiply/tree/master/src/main/java/com/lendap/hadoop) and saved in /home/hduser/playground/src/matrixMult
2) Downloaded datasets (matrices M and N from https://github.com/marufaytekin/MatrixMultiply/tree/master/input), then saved in HDFS at the following path: /user/hduser/inMatrix
3) Compilation against the Hadoop classes, creating the Java classes in playground/classes5:
javac -classpath $HADOOP_HOME/share/hadoop/common/lib/activation-1.1.jar:$HADOOP_HOME/share/hadoop/common/hadoop-common-2.7.1.jar:/usr/hadoop/hadoop-2.7.1/share/hadoop/mapreduce/* -d playground/classes5 playground/src/matrixMult/*
4) Creation of the jar file MatrixMultiply.jar with the following command:
jar -cvf playground/MatrixMultiply.jar -C playground/classes5/ .
5) The hadoop MapReduce command (from the $HADOOP_HOME path, which in my case is /usr/hadoop/hadoop-2.7.1):
hadoop jar /home/hduser/playground/MatrixMultiply.jar com.lendap.hadoop.MatrixMultiply /user/hduser/inMatrix/ outputMatrix
6) Correct execution of the MapReduce job on my 4-node cluster. Here is part of the final output:
0,375,890.0
0,376,1005.0
0,377,1377.0
0,378,604.0
0,379,924.0
0,38,476.0
0,380,621.0
0,381,730.0
990,225,542.0
990,226,639.0
990,227,466.0
990,228,406.0
990,229,343.0
990,23,397.0
990,230,794.0

ClassNotFoundException when running WordCount example in Eclipse

I'm trying to run the example code for the WordCount map/reduce job. I'm running it on Hadoop 1.2.1, from Eclipse. Here is the code I try to run:
package mypackage;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.Reducer.Context;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class WordCount {

    public static class Map extends
            Mapper<LongWritable, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String line = value.toString();
            StringTokenizer tokenizer = new StringTokenizer(line);
            while (tokenizer.hasMoreTokens()) {
                word.set(tokenizer.nextToken());
                context.write(word, one);
            }
        }
    }

    public static class Reduce extends
            Reducer<Text, IntWritable, Text, IntWritable> {

        public void reduce(Text key, Iterable<IntWritable> values,
                Context context) throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("mapred.job.tracker", "maprfs://my_address");
        conf.set("fs.default.name", "hdfs://my_address");

        Job job = new Job(conf, "wordcount");
        job.setJarByClass(WordCount.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        job.setMapperClass(Map.class);
        job.setReducerClass(Reduce.class);
        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        job.waitForCompletion(true);
    }
}
Unfortunately, running this code ends with the following error:
13/11/04 13:27:53 INFO mapred.JobClient: Task Id : attempt_201310311611_0005_m_000000_0, Status : FAILED
java.lang.RuntimeException: java.lang.ClassNotFoundException: com.rf.hadoopspikes.WordCount$Map
    at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:857)
    at org.apache.hadoop.mapreduce.JobContext.getMapperClass(JobContext.java:199)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:718)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:364)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
    at org.apache.hadoop.mapred.Child.main(Child.java:249)
I understand that the WordCount class cannot be found, but I have no idea how to make this work.
Any ideas?
When running this directly from Eclipse, you need to make sure the classes have been bundled into a jar file (which Hadoop then copies up to HDFS). Your error most probably relates to the fact that your jar hasn't been built, or that at runtime the classes are being run from the output directory and not the bundled jar.
Try to export the classes into a jar file, and then run your WordCount class from that jar file. You could also look into using the Eclipse Hadoop plugin, which I think handles all this for you. The final option would be to bundle the jar and then launch from the command line (as outlined in the various Hadoop tutorials), for example as sketched below.
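For instance (a sketch; the jar name wordcount.jar and the HDFS paths are assumptions, while the package mypackage comes from the code above):

hadoop jar wordcount.jar mypackage.WordCount /path/to/input /path/to/output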
