I use YARN on Hadoop 2.6.0. When I ran a MapReduce job, I got an error like this:
15/03/12 22:22:59 INFO mapreduce.Job: Task Id : attempt_1426132548565_0003_m_000002_1, Status : FAILED
Error: Java heap space
15/03/12 22:22:59 INFO mapreduce.Job: Task Id : attempt_1426132548565_0003_m_000000_1, Status : FAILED
Error: Java heap space
15/03/12 22:23:20 INFO mapreduce.Job: Task Id : attempt_1426132548565_0003_m_000002_2, Status : FAILED
Error: Java heap space
Container killed by the ApplicationMaster.
Did I misconfigure the java.opts property? Is the error caused by that configuration? Is there any connection between the memory settings in yarn-site.xml and mapred-site.xml?
I am very confused. Any suggestions are welcome. Thanks.
When a container exceeds its memory/CPU allocation, it is killed by the ApplicationMaster. In your case, the mappers might be using more memory than allotted. Try adding the following configuration:
In mapred-site.xml:
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>1024</value>
  <description>The amount of memory to request from the scheduler for each map task.</description>
</property>
The default value is 1024; try increasing it to 2048.
I would suggest restarting the cluster after changing the configuration.
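If raising the container size alone does not help, the mapper JVM heap usually needs to grow with it: mapreduce.map.memory.mb only sets the container limit, while mapreduce.map.java.opts sets the heap inside that container. A minimal sketch with illustrative values (keeping -Xmx at roughly 80% of the container size is a common rule of thumb):
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>2048</value>
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx1638m</value>
</property>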
I followed the step-by-step guide below:
https://phoenixnap.com/kb/install-hadoop-ubuntu
Then I tried to run the MapReduce wordcount example on a text file.
The problem is that the program does not run: I get "AM Container for app" and "Exception from container-launch" errors.
Is there any solution to this?
All nodes are running:
6544 Jps
3041 NameNode
3842 NodeManager
3219 DataNode
3494 SecondaryNameNode
3706 ResourceManager
Below is the yarn status output for my application.
hdoop@contactkarim-VirtualBox:~/hadoop-3.3.1$ yarn app -status application_1667981786519_0006
2022-11-09 11:35:22,184 INFO client.DefaultNoHARMFailoverProxyProvider: Connecting to ResourceManager at /127.0.0.1:8032
2022-11-09 11:35:22,522 INFO conf.Configuration: resource-types.xml not found
2022-11-09 11:35:22,522 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
Application Report :
Application-Id : application_1667981786519_0006
Application-Name : word count
Application-Type : MAPREDUCE
User : hdoop
Queue : default
Application Priority : 0
Start-Time : 1667982679380
Finish-Time : 1667982691120
Progress : 0%
State : FAILED
Final-State : FAILED
Tracking-URL : http://contactkm-VirtualBox:8088/cluster/app/application_1667981786519_0006
RPC Port : -1
AM Host : N/A
Aggregate Resource Allocation : 20250 MB-seconds, 8 vcore-seconds
Aggregate Resource Preempted : 0 MB-seconds, 0 vcore-seconds
Log Aggregation Status : DISABLED
Diagnostics : Application application_1667981786519_0006 failed 2 times due to AM Container for appattempt_1667981786519_0006_000002 exited with exitCode: 1
Failing this attempt.Diagnostics: [2022-11-09 11:31:31.113]Exception from container-launch.
Container id: container_1667981786519_0006_02_000001
Exit code: 1
[2022-11-09 11:31:31.116]Container exited with a non-zero exit code 1. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
log4j:WARN No appenders could be found for logger (org.apache.hadoop.mapreduce.v2.app.MRAppMaster).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
[2022-11-09 11:31:31.116]Container exited with a non-zero exit code 1. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
log4j:WARN No appenders could be found for logger (org.apache.hadoop.mapreduce.v2.app.MRAppMaster).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
For more detailed output, check the application tracking page: http://contactkarim-VirtualBox:8088/cluster/app/application_1667981786519_0006 Then click on links to logs of each attempt.
. Failing the application.
Unmanaged Application : false
Application Node Label Expression : <Not set>
AM container Node Label Expression : <DEFAULT_PARTITION>
TimeoutType : LIFETIME ExpiryTime : UNLIMITED RemainingTime : -1seconds
Thanks.
I have done a lot of troubleshooting: I checked the site settings, checked resources, and went through the configurations multiple times; permissions etc. are granted.
I suspect the Java version is the issue here.
The linked blog says nothing about MapReduce, only cluster setup (for which I always recommend following the official Apache Hadoop site, not 3rd-party blogs).
"No appenders could be found" means you're missing a log4j.properties file submitted with your job - see http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
You won't be able to see the real runtime error/log output until you add one, e.g. if you've submitted your own jar built by Maven/Gradle, put the file in src/main/resources.
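For illustration, a minimal log4j.properties that logs to the console (the pattern layout is only an example; place the file in src/main/resources so it ends up on the job's classpath):
log4j.rootLogger=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n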
I have been trying to import data from a MySQL database into Hive using the Sqoop utility. The table gets created, and I have set the fetch size as low as 10. Every time I run the command, I get a Java heap space error and the job is killed after 4 attempts. How can I fix this?
My sqoop command is as follows:
sqoop import --connect jdbc:mysql://my_local_ip/mydatabase --fetch-size 10 --username root -P --table table_name --hive-import --compression-codec=snappy --as-parquetfile -m 1
and I am getting :
16/08/29 07:06:24 INFO mapreduce.Job: The url to track the job: http://quickstart.cloudera:8088/proxy/application_1472465929944_0013/
16/08/29 07:06:24 INFO mapreduce.Job: Running job: job_1472465929944_0013
16/08/29 07:06:47 INFO mapreduce.Job: Job job_1472465929944_0013 running in uber mode : false
16/08/29 07:06:47 INFO mapreduce.Job: map 0% reduce 0%
16/08/29 07:07:16 INFO mapreduce.Job: Task Id : attempt_1472465929944_0013_m_000000_0, Status : FAILED
Error: Java heap space
16/08/29 07:07:37 INFO mapreduce.Job: Task Id : attempt_1472465929944_0013_m_000000_1, Status : FAILED
Error: Java heap space
16/08/29 07:07:59 INFO mapreduce.Job: Task Id : attempt_1472465929944_0013_m_000000_2, Status : FAILED
Error: Java heap space
16/08/29 07:08:21 INFO mapreduce.Job: map 100% reduce 0%
16/08/29 07:08:23 INFO mapreduce.Job: Job job_1472465929944_0013 failed with state FAILED due to: Task failed task_1472465929944_0013_m_000000
Try with:
sqoop import -Dmapreduce.map.memory.mb=8192 -Dmapreduce.map.java.opts=-Xmx7200m -Dmapreduce.task.io.sort.mb=2400 --connect jdbc:mysql://local.ip/database_name --username root -P --hive-import --table table_name --as-parquetfile --warehouse-dir=/home/cloudera/hadoop --split-by 'id' -m 100
(mapreduce.map.memory.mb is the YARN container size and must be at least as large as the -Xmx heap it hosts, hence 8192 rather than a value below 7200.)
Initially, I was using 10 mappers to process 10 million records, so each chunk was about 1 million records; that was causing the error. When I ran 100 map tasks instead (roughly 100,000 records each), the data was processed successfully. The only thing I noticed is the time taken to complete the job: it took almost 1 hour to run all 100 mappers.
I am new to the Hadoop world. I have set up a Hadoop cluster on Linux and I am trying to run a MapReduce job from Windows.
The wordcount example works fine, but when I try to run another job that I wrote myself, I get this error:
16/07/05 16:04:36 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/07/05 16:04:42 INFO client.RMProxy: Connecting to ResourceManager at /myIP:8050
16/07/05 16:04:42 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
16/07/05 16:04:42 WARN mapreduce.JobResourceUploader: No job jar file set. User classes may not be found. See Job or Job#setJar(String).
16/07/05 16:04:42 INFO input.FileInputFormat: Total input paths to process : 3
16/07/05 16:04:42 INFO mapreduce.JobSubmitter: number of splits:3
16/07/05 16:04:43 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1467727275250_0002
16/07/05 16:04:43 INFO mapred.YARNRunner: Job jar is not present. Not adding any jar to the list of resources.
16/07/05 16:04:43 INFO impl.YarnClientImpl: Submitted application application_1467727275250_0002
16/07/05 16:04:43 INFO mapreduce.Job: The url to track the job: http://hadoopMaster:8088/proxy/application_1467727275250_0002/
16/07/05 16:04:43 INFO mapreduce.Job: Running job: job_1467727275250_0002
16/07/05 16:04:46 INFO mapreduce.Job: Job job_1467727275250_0002 running in uber mode : false
16/07/05 16:04:46 INFO mapreduce.Job: map 0% reduce 0%
16/07/05 16:04:46 INFO mapreduce.Job: Job job_1467727275250_0002 failed with state FAILED due to: Application application_1467727275250_0002 failed 2 times due to AM Container for appattempt_1467727275250_0002_000002 exited with exitCode: 1
For more detailed output, check application tracking page: http://hadoopMaster:8088/cluster/app/application_1467727275250_0002 Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1467727275250_0002_02_000001
Exit code: 1
Exception message: /bin/bash: line 0: fg: no job control in this shell.
Stack trace: ExitCodeException exitCode=1: /bin/bash: line 0: fg: no Job control in this Shell.
at org.apache.hadoop.util.Shell.runCommand(Shell.java:545)
at org.apache.hadoop.util.Shell.run(Shell.java:456)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 1
Failing this attempt. Failing the application.
16/07/05 16:04:46 INFO mapreduce.Job: Counters: 0
Can someone help me please?
Add the following properties to mapred-site.xml:
<property>
  <name>mapred.remote.os</name>
  <value>Linux</value>
  <description>Remote MapReduce framework's OS, can be either Linux or Windows</description>
</property>
<property>
  <name>mapreduce.app-submission.cross-platform</name>
  <value>true</value>
</property>
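These can also be set programmatically in the Windows client that submits the job. A sketch, assuming you build and ship your own jar (the jar path and job name are placeholders):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

Configuration conf = new Configuration();
// Submit from a Windows client to a Linux cluster (cross-platform submission)
conf.setBoolean("mapreduce.app-submission.cross-platform", true);

Job job = Job.getInstance(conf, "my job");
// Ship the job jar explicitly; this also addresses the
// "No job jar file set" warning visible in the log above
job.setJar("C:/path/to/my-job.jar");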
I am trying to run the wordcount example that ships with Hadoop in the following environment (pseudo-distributed mode):
Windows 7
Hadoop 2.7.1
JDK 1.7.x
RAM 4 GB
The jps command returns
C:\deploy\hadoop-2.7.1>jps
2336 ResourceManager
7500 NameNode
4984 Jps
6900 NodeManager
4940 DataNode
The command I use for setting the hadoop heap size
set HADOOP_HEAPSIZE=512
The command I use from the hadoop home installation directory is
bin\yarn jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar wordcount /input /output
I see the following stack trace
C:\deploy\hadoop-2.7.1>bin\yarn jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar wordcount /input /output
15/08/14 22:36:26 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
15/08/14 22:36:27 INFO input.FileInputFormat: Total input paths to process : 1
15/08/14 22:36:28 INFO mapreduce.JobSubmitter: number of splits:1
15/08/14 22:36:28 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1439571873038_0001
15/08/14 22:36:28 INFO impl.YarnClientImpl: Submitted application application_1439571873038_0001
15/08/14 22:36:28 INFO mapreduce.Job: The url to track the job: http://XXX-PC:8088/proxy/application_1439571873038_0001/
15/08/14 22:36:28 INFO mapreduce.Job: Running job: job_1439571873038_0001
15/08/14 22:36:37 INFO mapreduce.Job: Job job_1439571873038_0001 running in uber mode : false
15/08/14 22:36:37 INFO mapreduce.Job: map 0% reduce 0%
15/08/14 22:36:37 INFO mapreduce.Job: Job job_1439571873038_0001 failed with state FAILED due to: Application application_1439571873038_0001 failed 2 times due to AM Container for appattempt_1439571873038_0001_000002 exited with exitCode: 1
For more detailed output, check application tracking page: http://XXX-PC:8088/cluster/app/application_1439571873038_0001 Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1439571873038_0001_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:545)
    at org.apache.hadoop.util.Shell.run(Shell.java:456)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:744)
Shell output: 1 file(s) moved.
Container exited with a non-zero exit code 1
Failing this attempt. Failing the application.
15/08/14 22:36:37 INFO mapreduce.Job: Counters: 0
When I went to the stderr logs mentioned in the stack trace above, the actual error turned out to be:
Error: Could not create the Java Virtual Machine.
When I try to increase HADOOP_HEAPSIZE to 1024, the namenode, datanode and yarn daemons do not start at all and give me the same "could not create the Java Virtual Machine" error.
Has anyone had the same problem? How can I solve this issue?
You can solve this problem by editing the file /etc/hadoop/hadoop-env.sh like this:
export HADOOP_HEAPSIZE=3072
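Since this question is about a Windows setup, the equivalent change would go in etc\hadoop\hadoop-env.cmd under the installation directory (assuming the stock 2.7.1 layout). Pick a value that actually fits your RAM: with 4 GB and five daemons running, a per-daemon heap of 512-1024 MB is more realistic than 3072.

:: etc\hadoop\hadoop-env.cmd -- heap size in MB for each Hadoop daemon
set HADOOP_HEAPSIZE=1024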
I am trying to upgrade Cassandra 1 to Cassandra 2. To do that I upgraded Java (to Java 7), but whenever I execute cassandra, it launches like this:
INFO 17:32:41,413 Logging initialized
INFO 17:32:41,437 Loading settings from file:/etc/cassandra/cassandra.yaml
INFO 17:32:41,642 Data files directories: [/var/lib/cassandra/data]
INFO 17:32:41,643 Commit log directory: /var/lib/cassandra/commitlog
INFO 17:32:41,643 DiskAccessMode 'auto' determined to be mmap, indexAccessMode is mmap
INFO 17:32:41,643 disk_failure_policy is stop
INFO 17:32:41,643 commit_failure_policy is stop
INFO 17:32:41,647 Global memtable threshold is enabled at 986MB
INFO 17:32:41,727 Not using multi-threaded compaction
INFO 17:32:41,869 JVM vendor/version: OpenJDK 64-Bit Server VM/1.7.0_55
WARN 17:32:41,869 OpenJDK is not recommended. Please upgrade to the newest Oracle Java release
INFO 17:32:41,869 Heap size: 4137680896/4137680896
INFO 17:32:41,870 Code Cache Non-heap memory: init = 2555904(2496K) used = 657664(642K) committed = 2555904(2496K) max = 50331648(49152K)
INFO 17:32:41,870 Par Eden Space Heap memory: init = 335544320(327680K) used = 80545080(78657K) committed = 335544320(327680K) max = 335544320(327680K)
INFO 17:32:41,870 Par Survivor Space Heap memory: init = 41943040(40960K) used = 0(0K) committed = 41943040(40960K) max = 41943040(40960K)
INFO 17:32:41,870 CMS Old Gen Heap memory: init = 3760193536(3672064K) used = 0(0K) committed = 3760193536(3672064K) max = 3760193536(3672064K)
INFO 17:32:41,872 CMS Perm Gen Non-heap memory: init = 21757952(21248K) used = 14994304(14642K) committed = 21757952(21248K) max = 174063616(169984K)
INFO 17:32:41,872 Classpath: /etc/cassandra:/usr/share/cassandra/lib/antlr-3.2.jar:/usr/share/cassandra/lib/commons-cli-1.1.jar:/usr/share/cassandra/lib/commons-codec-1.2.jar:/usr/share/cassandra/lib/commons-lang3-3.1.jar:/usr/share/cassandra/lib/compress-lzf-0.8.4.jar:/usr/share/cassandra/lib/concurrentlinkedhashmap-lru-1.3.jar:/usr/share/cassandra/lib/disruptor-3.0.1.jar:/usr/share/cassandra/lib/guava-15.0.jar:/usr/share/cassandra/lib/high-scale-lib-1.1.2.jar:/usr/share/cassandra/lib/jackson-core-asl-1.9.2.jar:/usr/share/cassandra/lib/jackson-mapper-asl-1.9.2.jar:/usr/share/cassandra/lib/jamm-0.2.5.jar:/usr/share/cassandra/lib/jbcrypt-0.3m.jar:/usr/share/cassandra/lib/jline-1.0.jar:/usr/share/cassandra/lib/json-simple-1.1.jar:/usr/share/cassandra/lib/libthrift-0.9.1.jar:/usr/share/cassandra/lib/log4j-1.2.16.jar:/usr/share/cassandra/lib/lz4-1.2.0.jar:/usr/share/cassandra/lib/metrics-core-2.2.0.jar:/usr/share/cassandra/lib/netty-3.6.6.Final.jar:/usr/share/cassandra/lib/reporter-config-2.1.0.jar:/usr/share/cassandra/lib/servlet-api-2.5-20081211.jar:/usr/share/cassandra/lib/slf4j-api-1.7.2.jar:/usr/share/cassandra/lib/slf4j-log4j12-1.7.2.jar:/usr/share/cassandra/lib/snakeyaml-1.11.jar:/usr/share/cassandra/lib/snappy-java-1.0.5.jar:/usr/share/cassandra/lib/snaptree-0.1.jar:/usr/share/cassandra/lib/super-csv-2.1.0.jar:/usr/share/cassandra/lib/thrift-server-internal-only-0.3.3.jar:/usr/share/cassandra/apache-cassandra-2.0.8.jar:/usr/share/cassandra/apache-cassandra.jar:/usr/share/cassandra/apache-cassandra-thrift-2.0.8.jar:/usr/share/cassandra/stress.jar::/usr/share/cassandra/lib/jamm-0.2.5.jar
INFO 17:32:41,873 JNA not found. Native methods will be disabled.
INFO 17:32:41,884 Initializing key cache with capacity of 100 MBs.
INFO 17:32:41,890 Scheduling key cache save to each 14400 seconds (going to save all keys).
INFO 17:32:41,890 Initializing row cache with capacity of 0 MBs
INFO 17:32:41,895 Scheduling row cache save to each 0 seconds (going to save all keys).
INFO 17:32:41,968 Initializing system.schema_triggers
INFO 17:32:41,985 Initializing system.compaction_history
INFO 17:32:41,988 Initializing system.batchlog
INFO 17:32:41,991 Initializing system.sstable_activity
INFO 17:32:41,994 Initializing system.peer_events
INFO 17:32:41,997 Initializing system.compactions_in_progress
INFO 17:32:42,000 Initializing system.hints
ERROR 17:32:42,001 Exception encountered during startup
java.lang.RuntimeException: Incompatible SSTable found. Current version jb is unable to read file: /var/lib/cassandra/data/system/schema_keyspaces/system-schema_keyspaces-hf-2. Please run upgradesstables.
    at org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:415)
    at org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:392)
    at org.apache.cassandra.db.Keyspace.initCf(Keyspace.java:309)
    at org.apache.cassandra.db.Keyspace.<init>(Keyspace.java:266)
    at org.apache.cassandra.db.Keyspace.open(Keyspace.java:110)
    at org.apache.cassandra.db.Keyspace.open(Keyspace.java:88)
    at org.apache.cassandra.db.SystemKeyspace.checkHealth(SystemKeyspace.java:536)
    at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:261)
    at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:496)
    at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:585)
Exception encountered during startup: Incompatible SSTable found. Current version jb is unable to read file: /var/lib/cassandra/data/system/schema_keyspaces/system-schema_keyspaces-hf-2. Please run upgradesstables.
When I try to execute upgradesstables (nodetool upgradesstables -h 127.0.0.1 -u root ...), I get this:
Failed to connect to '127.0.0.1:7000': Connection refused
Can anyone help me, please?
Thanks.
The Cassandra error has nothing to do with OpenJDK, although I do recommend using Oracle's.
You need to make sure you're on a valid upgrade path: http://docs.datastax.com/en/upgrade/doc/upgrade/cassandra/upgradeC_c.html
You can't trivially upgrade from a Cassandra older than 1.2.9 to 2.0, and you can't upgrade from 1.x to 2.1 without first going to 2.0.7 or later.
Suggested upgrade path per documentation: 1.x > 1.2.9 > 2.0.7 > 2.1.x
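In practice, each hop on that path looks roughly like this on every node, one node at a time (a sketch; the service commands depend on how Cassandra was installed):

nodetool drain                # flush memtables and stop accepting writes
sudo service cassandra stop
# install the next Cassandra version on the upgrade path (1.2.9, then 2.0.7+, ...)
sudo service cassandra start
nodetool upgradesstables      # rewrite SSTables into the new on-disk format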
Your logs show you're not using the correct JDK (JVM vendor/version: OpenJDK 64-Bit Server VM/1.7.0_55). The output of java -version should look something like this:
java version "1.8.0_65"
Java(TM) SE Runtime Environment (build 1.8.0_65-b17)
Java HotSpot(TM) 64-Bit Server VM (build 25.65-b01, mixed mode)
To fix this, first tell the system that there's a new Java version available:
$ sudo update-alternatives --install "/usr/bin/java" "java" "/usr/lib/jvm/jdk1.8.0_version/bin/java" 1
Set the new JDK as the default using the following command:
$ sudo update-alternatives --config java
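Afterwards you can verify that the switch took effect; the output should match the Oracle JDK banner shown above:

$ java -version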
You can refer to https://devopsmanual.in/2018/03/07/how-to-upgrade-datastax-cassandra/ for more details.