Hadoop wordcount Pseudodistributed mode error Exit code:127 - java

I have installed the Hadoop 2.7.1 stable release and followed Tom White's book to set it up in pseudo-distributed mode. I set all the environment variables such as JAVA_HOME, HADOOP_HOME and PATH, and configured yarn-site.xml, hdfs-site.xml, core-site.xml and mapred-site.xml.
I copied the sample file file.txt using the following command:
$hadoop fs -copyFromLocal textFiles/file.txt file.txt
Listing my HDFS home directory afterwards shows:
Found 2 items
-rw-r--r-- 1 RAMA supergroup 3737 2015-12-27 21:52 file.txt
drwxr-xr-x - RAMA supergroup 0 2015-12-27 22:17 input
When I execute the wordcount program in hadoop-mapreduce-examples-2.7.1.jar using the command below
RAMAs-MacBook-Pro:hadoop-2.7.1 RAMA$ hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar wordcount file.txt output
it throws the following exception, for which I have not been able to find a workable solution.
15/12/27 22:41:52 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/12/27 22:41:53 INFO client.RMProxy: Connecting to ResourceManager at localhost/127.0.0.1:8032
15/12/27 22:41:53 INFO input.FileInputFormat: Total input paths to process : 1
15/12/27 22:41:53 INFO mapreduce.JobSubmitter: number of splits:1
15/12/27 22:41:53 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1451216397139_0020
15/12/27 22:41:54 INFO impl.YarnClientImpl: Submitted application application_1451216397139_0020
15/12/27 22:41:54 INFO mapreduce.Job: The url to track the job: http://192.168.1.6:8088/proxy/application_1451216397139_0020/
15/12/27 22:41:54 INFO mapreduce.Job: Running job: job_1451216397139_0020
15/12/27 22:41:57 INFO mapreduce.Job: Job job_1451216397139_0020 running in uber mode : false
15/12/27 22:41:57 INFO mapreduce.Job: map 0% reduce 0%
15/12/27 22:41:57 INFO mapreduce.Job: Job job_1451216397139_0020 failed with state FAILED due to: Application application_1451216397139_0020 failed 2 times due to AM Container for appattempt_1451216397139_0020_000002 exited with exitCode: 127
For more detailed output, check application tracking page:http://192.168.1.6:8088/cluster/app/application_1451216397139_0020Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1451216397139_0020_02_000001
Exit code: 127
Stack trace: ExitCodeException exitCode=127:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:545)
at org.apache.hadoop.util.Shell.run(Shell.java:456)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 127
Failing this attempt. Failing the application.
15/12/27 22:41:57 INFO mapreduce.Job: Counters: 0
Any suggestions/comments would be of great help.

In hadoop-env.sh, explicitly set the JAVA_HOME variable to the path of your Java installation.
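For example (a sketch; the paths are assumptions, not taken from the original question), on a Mac the JDK location can be resolved with /usr/libexec/java_home:

# etc/hadoop/hadoop-env.sh -- example only; adjust to your own JDK
export JAVA_HOME=$(/usr/libexec/java_home)
# or hard-code the path, e.g.
# export JAVA_HOME=/Library/Java/JavaVirtualMachines/<your JDK>/Contents/Home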

Related

Could not run jar file in Hadoop 3.1.3

I tried this command in the command prompt (run as administrator):
hadoop jar C:\Users\tejashri\Desktop\Hadoopproject\WordCount.jar WordcountDemo.WordCount /work /out
but I got the following error message and my application was stopped:
2020-04-04 23:53:27,918 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
2020-04-04 23:53:28,881 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
2020-04-04 23:53:28,951 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /tmp/hadoop-yarn/staging/tejashri/.staging/job_1586024027199_0006
2020-04-04 23:53:29,162 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2020-04-04 23:53:29,396 INFO input.FileInputFormat: Total input files to process : 1
2020-04-04 23:53:29,570 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2020-04-04 23:53:29,762 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2020-04-04 23:53:29,802 INFO mapreduce.JobSubmitter: number of splits:1
2020-04-04 23:53:30,059 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2020-04-04 23:53:30,156 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1586024027199_0006
2020-04-04 23:53:30,156 INFO mapreduce.JobSubmitter: Executing with tokens: []
2020-04-04 23:53:30,504 INFO conf.Configuration: resource-types.xml not found
2020-04-04 23:53:30,507 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
2020-04-04 23:53:30,586 INFO impl.YarnClientImpl: Submitted application application_1586024027199_0006
2020-04-04 23:53:30,638 INFO mapreduce.Job: The url to track the job: http://LAPTOP-2UBC7TG1:8088/proxy/application_1586024027199_0006/
2020-04-04 23:53:30,640 INFO mapreduce.Job: Running job: job_1586024027199_0006
2020-04-04 23:53:35,676 INFO mapreduce.Job: Job job_1586024027199_0006 running in uber mode : false
2020-04-04 23:53:35,679 INFO mapreduce.Job: map 0% reduce 0%
2020-04-04 23:53:35,698 INFO mapreduce.Job: Job job_1586024027199_0006 failed with state FAILED due to: Application application_1586024027199_0006 failed 2 times due to AM Container for appattempt_1586024027199_0006_000002 exited with exitCode: 1
Failing this attempt.Diagnostics: [2020-04-04 23:53:34.955]Exception from container-launch.
Container id: container_1586024027199_0006_02_000001
Exit code: 1
Shell output: 1 file(s) moved.
"Setting up env variables"
"Setting up job resources"
"Copying debugging information"
C:\hadoop\hdfstmp\nm-local-dir\usercache\tejashri\appcache\application_1586024027199_0006\container_1586024027199_0006_02_000001>rem Creating copy of launch script
C:\hadoop\hdfstmp\nm-local-dir\usercache\tejashri\appcache\application_1586024027199_0006\container_1586024027199_0006_02_000001>copy "launch_container.cmd" "C:/hadoop/logs/userlogs/application_1586024027199_0006/container_1586024027199_0006_02_000001/launch_container.cmd"
1 file(s) copied.
C:\hadoop\hdfstmp\nm-local-dir\usercache\tejashri\appcache\application_1586024027199_0006\container_1586024027199_0006_02_000001>rem Determining directory contents
C:\hadoop\hdfstmp\nm-local-dir\usercache\tejashri\appcache\application_1586024027199_0006\container_1586024027199_0006_02_000001>dir 1>>"C:/hadoop/logs/userlogs/application_1586024027199_0006/container_1586024027199_0006_02_000001/directory.info"
"Launching container"
[2020-04-04 23:53:34.959]Container exited with a non-zero exit code 1. Last 4096 bytes of stderr :
'"C:\Program Files\Java\jdk1.8.0_171"' is not recognized as an internal or external command,
operable program or batch file.
[2020-04-04 23:53:34.960]Container exited with a non-zero exit code 1. Last 4096 bytes of stderr :
'"C:\Program Files\Java\jdk1.8.0_171"' is not recognized as an internal or external command,
operable program or batch file.
For more detailed output, check the application tracking page: http://LAPTOP-2UBC7TG1:8088/cluster/app/application_1586024027199_0006 Then click on links to logs of each attempt.
. Failing the application.
2020-04-04 23:53:35,743 INFO mapreduce.Job: Counters: 0
'"C:\Program Files\Java\jdk1.8.0_171"' is not recognized as an internal or external command,
operable program or batch file.
The JAVA_HOME variable is not set correctly in hadoop-env.cmd.
Also, move the JDK installation to a folder without whitespace in its path (say, C:\Java\jdk1.8.0_171).
Update the JAVA_HOME and PATH environment variables accordingly.
Then add this line in hadoop-env.cmd:
set JAVA_HOME=C:\Java\jdk1.8.0_171
Restart the Hadoop daemons and run the job again.
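One way to restart the daemons from the Hadoop sbin directory (a sketch; C:\hadoop is assumed to be the install location):

C:\hadoop\sbin>stop-yarn.cmd
C:\hadoop\sbin>stop-dfs.cmd
C:\hadoop\sbin>start-dfs.cmd
C:\hadoop\sbin>start-yarn.cmd

If moving the JDK is not possible, the 8.3 short path form (for example, set JAVA_HOME=C:\PROGRA~1\Java\jdk1.8.0_171) is a commonly used workaround for the space in "Program Files".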

When I run the wordcount application it never finishes

I need help running the wordcount application in Hadoop. I work on a multi-node cluster, and I want to run the job on my main (master) node.
I made a folder for the input and copied a .txt file into it, which works fine (roughly the commands sketched below).
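A rough sketch of those preparation commands (the local file name is an example; /input matches the path used in the job command below):

hdfs dfs -mkdir -p /input
hdfs dfs -put sample.txt /input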
Now I need to run the MapReduce job, which I built in Eclipse.
Run with:
hadoop jar WordCount.jar WordCount /input /output
When I run it I get:
/input
/output
18/12/30 19:50:13 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
18/12/30 19:50:13 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
18/12/30 19:50:14 INFO input.FileInputFormat: Total input files to process : 1
18/12/30 19:50:14 INFO mapreduce.JobSubmitter: number of splits:1
18/12/30 19:50:14 INFO Configuration.deprecation: yarn.resourcemanager.system-metrics-publisher.enabled is deprecated. Instead, use yarn.system-metrics-publisher.enabled
18/12/30 19:50:14 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1546180801267_1648
18/12/30 19:50:15 INFO impl.YarnClientImpl: Submitted application application_1546180801267_1648
18/12/30 19:50:15 INFO mapreduce.Job: The url to track the job: http://ec2-3-82-16-179.compute-1.amazonaws.com:8088/proxy/application_1546180801267_1648/
18/12/30 19:50:15 INFO mapreduce.Job: Running job: job_1546180801267_1648
I have watched a lot of tutorials; in every tutorial they do the same steps and get the same INFO output, but after that the map and reduce phases run. In my case they never start.
Here are my .profile, yarn-site.xml, mapred-site.xml, hdfs-site.xml and core-site.xml (attached in the original post).
Please help me with this; I have been trying to solve it for two days without any success.
Thanks!

Error when running a MapReduce job from a Windows client

I am new to the Hadoop world. I have set up a Hadoop cluster on Linux and I am trying to run a MapReduce job from Windows.
The wordcount example works fine, but when I try to run another job that I wrote myself, I get this error:
16/07/05 16:04:36 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/07/05 16:04:42 INFO client.RMProxy: Connecting to ResourceManager at /myIP:8050
16/07/05 16:04:42 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
16/07/05 16:04:42 WARN mapreduce.JobResourceUploader: No job jar file set. User classes may not be found. See Job or Job#setJar(String).
16/07/05 16:04:42 INFO input.FileInputFormat: Total input paths to process : 3
16/07/05 16:04:42 INFO mapreduce.JobSubmitter: number of splits:3
16/07/05 16:04:43 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1467727275250_0002
16/07/05 16:04:43 INFO mapred.YARNRunner: Job jar is not present. Not adding any jar to the list of resources.
16/07/05 16:04:43 INFO impl.YarnClientImpl: Submitted application application_1467727275250_0002
16/07/05 16:04:43 INFO mapreduce.Job: The url to track the job: http://hadoopMaster:8088/proxy/application_1467727275250_0002/
16/07/05 16:04:43 INFO mapreduce.Job: Running job: job_1467727275250_0002
16/07/05 16:04:46 INFO mapreduce.Job: Job job_1467727275250_0002 running in uber mode : false
16/07/05 16:04:46 INFO mapreduce.Job: map 0% reduce 0%
16/07/05 16:04:46 INFO mapreduce.Job: Job job_1467727275250_0002 failed with state FAILED due to: Application application_1467727275250_0002 failed 2 times due to AM Container for appattempt_1467727275250_0002_000002 exited with exitCode: 1
For more detailed output, check application tracking page:http://hadoopMaster:8088/cluster/app/application_1467727275250_0002Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1467727275250_0002_02_000001
Exit code: 1
Exception message: /bin/bash: Zeile 0: fg: Keine Job-Steuerung in dieser Shell. [in English: /bin/bash: line 0: fg: no job control in this shell]
Stack trace: ExitCodeException exitCode=1: /bin/bash: line 0: fg: no Job control in this Shell.
at org.apache.hadoop.util.Shell.runCommand(Shell.java:545)
at org.apache.hadoop.util.Shell.run(Shell.java:456)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 1
Failing this attempt. Failing the application.
16/07/05 16:04:46 INFO mapreduce.Job: Counters: 0
Can someone help me please?
In mapred-site.xml, add these properties:
<property>
  <name>mapred.remote.os</name>
  <value>Linux</value>
  <description>Remote MapReduce framework's OS, can be either Linux or Windows</description>
</property>
<property>
  <name>mapreduce.app-submission.cross-platform</name>
  <value>true</value>
</property>
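These settings can also be applied from the client code before submitting the job; the following is only a sketch (MyDriver is a placeholder for your own driver class, not code from the question):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

// Inside your driver's main(), before submitting the job:
Configuration conf = new Configuration();
conf.set("mapreduce.app-submission.cross-platform", "true"); // allow submitting from Windows to a Linux cluster
conf.set("mapred.remote.os", "Linux");
Job job = Job.getInstance(conf, "wordcount");
job.setJarByClass(MyDriver.class); // also avoids the "No job jar file set" warning shown in the log
// ... configure mapper, reducer, input and output paths as usual ...
System.exit(job.waitForCompletion(true) ? 0 : 1);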

Error: Could not create the Java Virtual Machine. - Apache Hadoop

I am trying to run the wordcount example that ships with Hadoop in the following environment (pseudo-distributed mode):
Windows 7
Hadoop 2.7.1
JDK 1.7.x
RAM 4 GB
The jps command returns
C:\deploy\hadoop-2.7.1>jps
2336 ResourceManager
7500 NameNode
4984 Jps
6900 NodeManager
4940 DataNode
The command I use to set the Hadoop heap size is
set HADOOP_HEAPSIZE=512
The command I run from the Hadoop installation directory is
bin\yarn jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar wordcount /input /output
I see the following stack trace
C:\deploy\hadoop-2.7.1>bin\yarn jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar wordcount /input /output
15/08/14 22:36:26 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
15/08/14 22:36:27 INFO input.FileInputFormat: Total input paths to process : 1
15/08/14 22:36:28 INFO mapreduce.JobSubmitter: number of splits:1
15/08/14 22:36:28 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1439571873038_0001
15/08/14 22:36:28 INFO impl.YarnClientImpl: Submitted application application_1439571873038_0001
15/08/14 22:36:28 INFO mapreduce.Job: The url to track the job: http://XXX-PC:8088/proxy/application_1439571873038_0001/
15/08/14 22:36:28 INFO mapreduce.Job: Running job: job_1439571873038_0001
15/08/14 22:36:37 INFO mapreduce.Job: Job job_1439571873038_0001 running in uber mode : false
15/08/14 22:36:37 INFO mapreduce.Job: map 0% reduce 0%
15/08/14 22:36:37 INFO mapreduce.Job: Job job_1439571873038_0001 failed with state FAILED due to: Application application_1439571873038_0001 failed 2 times due to AM Container for appattempt_1439571873038_0001_000002 exited with exitCode: 1
For more detailed output, check application tracking page:http://XXX-PC:8088/cluster/app/application_1439571873038_0001Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1439571873038_0001_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:545)
at org.apache.hadoop.util.Shell.run(Shell.java:456)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Shell output: 1 file(s) moved.
Container exited with a non-zero exit code 1
Failing this attempt. Failing the application.
15/08/14 22:36:37 INFO mapreduce.Job: Counters: 0
When I went to the stderr logs mentioned in the stack trace above, the actual error turned out to be
Error: Could not create the Java Virtual Machine.
When I increase HADOOP_HEAPSIZE to 1024, the namenode, datanode and yarn daemons do not start at all and fail with the same "Could not create the Java Virtual Machine" error.
Has anyone run into the same problem? How can I solve this issue?
You can solve this problem by editing the file /etc/hadoop/hadoop-env.sh as shown below:
export HADOOP_HEAPSIZE=3072
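Since this question is about a Windows setup, the equivalent change would go into etc\hadoop\hadoop-env.cmd (a sketch; the value mirrors the answer above and should be adjusted to the RAM actually available on the machine):

rem etc\hadoop\hadoop-env.cmd -- heap size in MB
set HADOOP_HEAPSIZE=3072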

Snappy compression error in Hadoop 2.x

I've setup a Hadoop cluster using the newly 2.x version. And I installed snappy and hadoop snappy according to this guide, to enable snappy compression in map output.
When running the example wordcount, The error occurred:
[dm#node1 ~]$ hadoop jar /opt/hadoop-2.0.5-alpha/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.0.5-alpha.jar wordcount /in /out
13/09/06 05:09:52 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
13/09/06 05:09:53 INFO service.AbstractService: Service:org.apache.hadoop.yarn.client.YarnClientImpl is inited.
13/09/06 05:09:53 INFO service.AbstractService: Service:org.apache.hadoop.yarn.client.YarnClientImpl is started.
13/09/06 05:10:04 INFO input.FileInputFormat: Total input paths to process : 1
13/09/06 05:10:04 INFO snappy.LoadSnappy: Snappy native library loaded
13/09/06 05:10:04 INFO lzo.GPLNativeCodeLoader: Loaded native gpl library
13/09/06 05:10:04 INFO lzo.LzoCodec: Successfully loaded & initialized native-lzo library [hadoop-lzo rev d0f5a10f99f1b2af4f6610447052c5a67b8b1cc7]
13/09/06 05:10:04 INFO mapreduce.JobSubmitter: number of splits:1
13/09/06 05:10:04 WARN conf.Configuration: mapred.jar is deprecated. Instead, use mapreduce.job.jar
13/09/06 05:10:04 WARN conf.Configuration: mapred.output.value.class is deprecated. Instead, use mapreduce.job.output.value.class
13/09/06 05:10:04 WARN conf.Configuration: mapreduce.combine.class is deprecated. Instead, use mapreduce.job.combine.class
13/09/06 05:10:04 WARN conf.Configuration: mapreduce.map.class is deprecated. Instead, use mapreduce.job.map.class
13/09/06 05:10:04 WARN conf.Configuration: mapred.job.name is deprecated. Instead, use mapreduce.job.name
13/09/06 05:10:04 WARN conf.Configuration: mapreduce.reduce.class is deprecated. Instead, use mapreduce.job.reduce.class
13/09/06 05:10:04 WARN conf.Configuration: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
13/09/06 05:10:04 WARN conf.Configuration: mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
13/09/06 05:10:04 WARN conf.Configuration: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
13/09/06 05:10:04 WARN conf.Configuration: mapred.output.key.class is deprecated. Instead, use mapreduce.job.output.key.class
13/09/06 05:10:04 WARN conf.Configuration: mapred.working.dir is deprecated. Instead, use mapreduce.job.working.dir
13/09/06 05:10:05 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1378415309099_0001
13/09/06 05:10:06 INFO client.YarnClientImpl: Submitted application application_1378415309099_0001 to ResourceManager at node1/192.168.56.101:60832
13/09/06 05:10:06 INFO mapreduce.Job: The url to track the job: http://node1:60888/proxy/application_1378415309099_0001/
13/09/06 05:10:06 INFO mapreduce.Job: Running job: job_1378415309099_0001
13/09/06 05:10:32 INFO mapreduce.Job: Job job_1378415309099_0001 running in uber mode : false
13/09/06 05:10:32 INFO mapreduce.Job: map 0% reduce 0%
13/09/06 05:10:53 INFO mapreduce.Job: map 100% reduce 0%
13/09/06 05:11:02 INFO mapreduce.Job: Task Id : attempt_1378415309099_0001_m_000000_0, Status : FAILED
Error: org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy()Z
Container killed by the ApplicationMaster.
13/09/06 05:11:03 INFO mapreduce.Job: map 0% reduce 0%
13/09/06 05:11:07 INFO mapreduce.Job: Task Id : attempt_1378415309099_0001_m_000000_1, Status : FAILED
Error: org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy()Z
13/09/06 05:11:13 INFO mapreduce.Job: Task Id : attempt_1378415309099_0001_m_000000_2, Status : FAILED
Error: org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy()Z
13/09/06 05:11:19 INFO mapreduce.Job: map 100% reduce 0%
13/09/06 05:11:19 INFO mapreduce.Job: Job job_1378415309099_0001 failed with state FAILED due to: Task failed task_1378415309099_0001_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0
13/09/06 05:11:19 INFO mapreduce.Job: Counters: 6
Job Counters
Failed map tasks=4
Launched map tasks=4
Other local map tasks=3
Data-local map tasks=1
Total time spent by all maps in occupied slots (ms)=42989
Total time spent by all reduces in occupied slots (ms)=0
I searched Google for the error message "Error: org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy()Z" but haven't found a solution to this problem. So how can I enable snappy compression in Hadoop 2.x? Thanks.
On a working server:
$ hadoop checknative -a | grep snappy
15/06/18 14:51:05 INFO bzip2.Bzip2Factory: Successfully loaded & initialized native-bzip2 library system-native
15/06/18 14:51:05 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
snappy: true /opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/lib/native/libsnappy.so.1
On a broken server:
$ hadoop checknative -a | grep snappy
15/06/18 14:50:31 INFO bzip2.Bzip2Factory: Successfully loaded & initialized native-bzip2 library system-native
15/06/18 14:50:31 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
snappy: true /usr/lib64/libsnappy.so.1
Notice that in one case the "hadoop checknative" command reports the system's libsnappy, and in the other case the one from the Hadoop distribution. If Hadoop on some hosts reports that it uses the system's libsnappy, errors like "...hadoop.mapred.YarnChild: Error running child : java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy()Z at org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy(Native Method)" are quite possible.
From the output you've presented (WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform...) it looks like your native library isn't loading. I think it would be worth building and installing the native Hadoop library and then placing the Snappy library alongside it in <HADOOP_HOME>/lib/native/. That might resolve the issue. Hope that helps!
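A minimal sketch of that suggestion, assuming libsnappy was built or installed under /usr/local/lib (the paths are illustrative):

cp /usr/local/lib/libsnappy.so* $HADOOP_HOME/lib/native/
hadoop checknative -a | grep snappy   # verify that the Snappy library is now found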
It looks like your code is not able to load the snappy native library. There could be two possible explanations for this:
either you are using a version of snappy that is not compatible with the version of Hadoop you have installed,
or you have not added the library to the path where Hadoop looks for native libraries (see the sketch below).
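One way to make the native library visible to Hadoop (a sketch; $HADOOP_HOME/lib/native is the assumed location of the library):

export JAVA_LIBRARY_PATH=$HADOOP_HOME/lib/native
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HADOOP_HOME/lib/native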
The second parameter in that guide is wrong.
Use mapreduce.map.output.compress.codec instead.
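For reference, a sketch of the corresponding mapred-site.xml entries (standard Hadoop 2.x property names, using the built-in Snappy codec class):

<property>
  <name>mapreduce.map.output.compress</name>
  <value>true</value>
</property>
<property>
  <name>mapreduce.map.output.compress.codec</name>
  <value>org.apache.hadoop.io.compress.SnappyCodec</value>
</property>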
