I am trying to upgrade Cassandra 1 to Cassandra 2. To do that I upgraded Java (to Java 7), but whenever I execute cassandra, it launches like this:
INFO 17:32:41,413 Logging initialized
INFO 17:32:41,437 Loading settings from file:/etc/cassandra/cassandra.yaml
INFO 17:32:41,642 Data files directories: [/var/lib/cassandra/data]
INFO 17:32:41,643 Commit log directory: /var/lib/cassandra/commitlog
INFO 17:32:41,643 DiskAccessMode 'auto' determined to be mmap, indexAccessMode is mmap
INFO 17:32:41,643 disk_failure_policy is stop
INFO 17:32:41,643 commit_failure_policy is stop
INFO 17:32:41,647 Global memtable threshold is enabled at 986MB
INFO 17:32:41,727 Not using multi-threaded compaction
INFO 17:32:41,869 JVM vendor/version: OpenJDK 64-Bit Server VM/1.7.0_55
WARN 17:32:41,869 OpenJDK is not recommended. Please upgrade to the newest Oracle Java release
INFO 17:32:41,869 Heap size: 4137680896/4137680896
INFO 17:32:41,870 Code Cache Non-heap memory: init = 2555904(2496K) used = 657664(642K) committed = 2555904(2496K) max = 50331648(49152K)
INFO 17:32:41,870 Par Eden Space Heap memory: init = 335544320(327680K) used = 80545080(78657K) committed = 335544320(327680K) max = 335544320(327680K)
INFO 17:32:41,870 Par Survivor Space Heap memory: init = 41943040(40960K) used = 0(0K) committed = 41943040(40960K) max = 41943040(40960K)
INFO 17:32:41,870 CMS Old Gen Heap memory: init = 3760193536(3672064K) used = 0(0K) committed = 3760193536(3672064K) max = 3760193536(3672064K)
INFO 17:32:41,872 CMS Perm Gen Non-heap memory: init = 21757952(21248K) used = 14994304(14642K) committed = 21757952(21248K) max = 174063616(169984K)
INFO 17:32:41,872 Classpath: /etc/cassandra:/usr/share/cassandra/lib/antlr-3.2.jar:/usr/share/cassandra/lib/commons-cli-1.1.jar:/usr/share/cassandra/lib/commons-codec-1.2.jar:/usr/share/cassandra/lib/commons-lang3-3.1.jar:/usr/share/cassandra/lib/compress-lzf-0.8.4.jar:/usr/share/cassandra/lib/concurrentlinkedhashmap-lru-1.3.jar:/usr/share/cassandra/lib/disruptor-3.0.1.jar:/usr/share/cassandra/lib/guava-15.0.jar:/usr/share/cassandra/lib/high-scale-lib-1.1.2.jar:/usr/share/cassandra/lib/jackson-core-asl-1.9.2.jar:/usr/share/cassandra/lib/jackson-mapper-asl-1.9.2.jar:/usr/share/cassandra/lib/jamm-0.2.5.jar:/usr/share/cassandra/lib/jbcrypt-0.3m.jar:/usr/share/cassandra/lib/jline-1.0.jar:/usr/share/cassandra/lib/json-simple-1.1.jar:/usr/share/cassandra/lib/libthrift-0.9.1.jar:/usr/share/cassandra/lib/log4j-1.2.16.jar:/usr/share/cassandra/lib/lz4-1.2.0.jar:/usr/share/cassandra/lib/metrics-core-2.2.0.jar:/usr/share/cassandra/lib/netty-3.6.6.Final.jar:/usr/share/cassandra/lib/reporter-config-2.1.0.jar:/usr/share/cassandra/lib/servlet-api-2.5-20081211.jar:/usr/share/cassandra/lib/slf4j-api-1.7.2.jar:/usr/share/cassandra/lib/slf4j-log4j12-1.7.2.jar:/usr/share/cassandra/lib/snakeyaml-1.11.jar:/usr/share/cassandra/lib/snappy-java-1.0.5.jar:/usr/share/cassandra/lib/snaptree-0.1.jar:/usr/share/cassandra/lib/super-csv-2.1.0.jar:/usr/share/cassandra/lib/thrift-server-internal-only-0.3.3.jar:/usr/share/cassandra/apache-cassandra-2.0.8.jar:/usr/share/cassandra/apache-cassandra.jar:/usr/share/cassandra/apache-cassandra-thrift-2.0.8.jar:/usr/share/cassandra/stress.jar::/usr/share/cassandra/lib/jamm-0.2.5.jar
INFO 17:32:41,873 JNA not found. Native methods will be disabled.
INFO 17:32:41,884 Initializing key cache with capacity of 100 MBs.
INFO 17:32:41,890 Scheduling key cache save to each 14400 seconds (going to save all keys).
INFO 17:32:41,890 Initializing row cache with capacity of 0 MBs
INFO 17:32:41,895 Scheduling row cache save to each 0 seconds (going to save all keys).
INFO 17:32:41,968 Initializing system.schema_triggers
INFO 17:32:41,985 Initializing system.compaction_history
INFO 17:32:41,988 Initializing system.batchlog
INFO 17:32:41,991 Initializing system.sstable_activity
INFO 17:32:41,994 Initializing system.peer_events
INFO 17:32:41,997 Initializing system.compactions_in_progress
INFO 17:32:42,000 Initializing system.hints
ERROR 17:32:42,001 Exception encountered during startup
java.lang.RuntimeException: Incompatible SSTable found. Current version jb is unable to read file: /var/lib/cassandra/data/system/schema_keyspaces/system-schema_keyspaces-hf-2. Please run upgradesstables.
    at org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:415)
    at org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:392)
    at org.apache.cassandra.db.Keyspace.initCf(Keyspace.java:309)
    at org.apache.cassandra.db.Keyspace.<init>(Keyspace.java:266)
    at org.apache.cassandra.db.Keyspace.open(Keyspace.java:110)
    at org.apache.cassandra.db.Keyspace.open(Keyspace.java:88)
    at org.apache.cassandra.db.SystemKeyspace.checkHealth(SystemKeyspace.java:536)
    at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:261)
    at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:496)
    at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:585)
java.lang.RuntimeException: Incompatible SSTable found. Current version jb is unable to read file: /var/lib/cassandra/data/system/schema_keyspaces/system-schema_keyspaces-hf-2. Please run upgradesstables.
    at org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:415)
    at org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:392)
    at org.apache.cassandra.db.Keyspace.initCf(Keyspace.java:309)
    at org.apache.cassandra.db.Keyspace.<init>(Keyspace.java:266)
    at org.apache.cassandra.db.Keyspace.open(Keyspace.java:110)
    at org.apache.cassandra.db.Keyspace.open(Keyspace.java:88)
    at org.apache.cassandra.db.SystemKeyspace.checkHealth(SystemKeyspace.java:536)
    at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:261)
    at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:496)
    at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:585)
Exception encountered during startup: Incompatible SSTable found. Current version jb is unable to read file: /var/lib/cassandra/data/system/schema_keyspaces/system-schema_keyspaces-hf-2. Please run upgradesstables.
When I try to execute upgradesstables (nodetool upgradesstables -h 127.0.0.1 -u root ...) I get this:
Failed to connect to '127.0.0.1:7000': Connection refused
Can anyone help me, please?
Thanks.
The Cassandra error has nothing to do with OpenJDK, although I do recommend using Oracle's.
You need to make sure you're on a valid upgrade path: http://docs.datastax.com/en/upgrade/doc/upgrade/cassandra/upgradeC_c.html
You can't trivially upgrade from a Cassandra older than 1.2.9 to 2.0, and you can't upgrade from 1.x to 2.1 without first going to 2.0.7 or later.
Suggested upgrade path per documentation: 1.x > 1.2.9 > 2.0.7 > 2.1.x
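On each node, a hop typically looks something like this (a sketch; service handling and package installation depend on your setup):
$ nodetool drain                    # flush memtables and stop accepting writes
$ sudo service cassandra stop
# install the target version for this hop (1.2.9, then 2.0.7, ...)
$ sudo service cassandra start
$ nodetool upgradesstables          # rewrite the data files into the new SSTable format
Note that upgradesstables needs a running node, so it has to be run on a version that can still read the old files. Also, nodetool talks to Cassandra over JMX, which listens on port 7199 by default; 7000 is the inter-node storage port, which would explain the "Connection refused" on 127.0.0.1:7000 above.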
Your java -version output shows you're not using the correct JDK. It should be something like this:
java version "1.8.0_65"
Java(TM) SE Runtime Environment (build 1.8.0_65-b17)
Java HotSpot(TM) 64-Bit Server VM (build 25.65-b01, mixed mode)
To do this, tell the system that there's a new Java version available:
$ sudo update-alternatives --install "/usr/bin/java" "java" "/usr/lib/jvm/jdk1.8.0_version/bin/java" 1
Set the new JDK as the default using the following command:
$ sudo update-alternatives --config java
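Afterwards, verify that the switch took effect:
$ java -version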
You can refer to https://devopsmanual.in/2018/03/07/how-to-upgrade-datastax-cassandra/ for more details.
Related
I'm trying to start an embedded OrientDB server. I set ORIENTDB_HOME to a folder that contains the config/ and plugins/ folders. I also included my config file in the classpath, because the server didn't seem to load it even though it has the default name.
Now it does start and applies my config; however, the application directory is apparently used as ORIENTDB_HOME, because my plugins aren't loaded and the database is created there instead of where I want it.
This is my code:
public void startServer() {
    try {
        System.setProperty("ORIENTDB_HOME", "C:\\my\\orientdb_home\\path");
        server = OServerMain.create(true);
        // server.startup(); // this doesn't load the correct config
        server.startup(getClass().getResourceAsStream("/config/orientdb-server-config.xml")); // workaround
        server.activate();
        // Read back the ports the server actually bound to.
        OServerNetworkListener listener = server.getListenerByProtocol(server.getNetworkProtocols().get("binary"));
        binaryPort = listener.getInboundAddr().getPort();
        listener = server.getListenerByProtocol(server.getNetworkProtocols().get("http"));
        httpPort = listener.getInboundAddr().getPort();
        System.out.println("Started OrientDB Server.\nBinary Port is " + binaryPort + "\nHTTP Port is " + httpPort);
    } catch (Exception e) {
        e.printStackTrace();
    }
}
The fascinating thing is that the log output clearly says it's using the correct directory for the databases, but it doesn't do so.
2019-06-14 09:32:23:556 INFO Loading configuration from input stream [OServerConfigurationLoaderXml]
2019-06-14 09:32:23:731 INFO OrientDB Server v2.2.37 (build a7541e7ceeabf592dd9a7b2928b6c023cbc73193, branch 2.2.x) is starting up... [OServer]
2019-06-14 09:32:23:741 INFO Databases directory: C:\my\orientdb_home\path\databases [OServer]
2019-06-14 09:32:23:830 INFO Configuration of usage of soft references inside of containers of results of SQL execution [OMemoryAndLocalPaginatedEnginesInitializer]
2019-06-14 09:32:23:831 INFO Initial or maximum values of heap memory usage are NOT set, containers of results of SQL executors will NOT use soft references by default [OMemoryAndLocalPaginatedEnginesInitializer]
2019-06-14 09:32:23:832 INFO Auto configuration of disk cache size. [OMemoryAndLocalPaginatedEnginesInitializer]
2019-06-14 09:32:23:919 INFO 17066577920 B/16275 MB/15 GB of physical memory were detected on machine [ONative]
2019-06-14 09:32:23:919 INFO Detected memory limit for current process is 17066577920 B/16275 MB/15 GB [ONative]
2019-06-14 09:32:23:921 INFO OrientDB auto-config DISKCACHE=3,618MB (heap=3,618MB direct=3,618MB os=16,275MB) [OMemoryAndLocalPaginatedEnginesInitializer]
2019-06-14 09:32:23:922 INFO Lowering disk cache size from 3,618MB to 3,616MB. [OGlobalConfiguration]
2019-06-14 09:32:24:117 INFO Listening binary connections on 127.0.0.1:2424 (protocol v.36, socket=default) [OServerNetworkListener]
2019-06-14 09:32:24:120 INFO Listening http connections on 127.0.0.1:2480 (protocol v.10, socket=default) [OServerNetworkListener]
2019-06-14 09:32:25:081 INFO Storage 'plocal:databases/pvRelations' is created under OrientDB distribution : 2.2.37 (build a7541e7ceeabf592dd9a7b2928b6c023cbc73193, branch 2.2.x) [OLocalPaginatedStorage]
2019-06-14 09:32:27:607 INFO {db=pvRelations} -> Loaded plocal database 'pvRelations' [OServer]
2019-06-14 09:32:27:609 INFO Found ORIENTDB_ROOT_PASSWORD variable, using this value as root's password [OServer]
2019-06-14 09:32:27:621 INFO ODefaultPasswordAuthenticator is active [ODefaultPasswordAuthenticator]
2019-06-14 09:32:27:623 INFO OServerConfigAuthenticator is active [OServerConfigAuthenticator]
2019-06-14 09:32:27:625 INFO OSystemUserAuthenticator is active [OSystemUserAuthenticator]
2019-06-14 09:32:27:634 INFO Installed GREMLIN language v.2.6.0 - graph.pool.max=50 [OGraphServerHandler]
2019-06-14 09:32:27:638 WARNI Authenticated clients can execute any kind of code into the server by using the following allowed languages: [sql] [OServerSideScriptInterpreter]
2019-06-14 09:32:27:638 INFO OrientDB Studio available at http://127.0.0.1:2480/studio/index.html [OServer]
2019-06-14 09:32:27:638 INFO OrientDB Server is active v2.2.37 (build a7541e7ceeabf592dd9a7b2928b6c023cbc73193, branch 2.2.x). [OServer]
To clarify again: the directory C:\my\orientdb_home\path\databases isn't used; instead it's path\to\my\application\databases.
What am I doing wrong? How do I tell the server to use the directory of my choice to search for config and plugins as well as store the databases?
EDIT:
I just noticed that in fact the databases directory is used, but only for the OSystem database. My own database is stored at the wrong location. I defined it in my config file:
...
<storages>
<storage name="myDB" path="plocal:databases/myDB" userName="admin" userPassword="admin" loaded-at-startup="true" />
</storages>
...
EDIT2:
So I noticed that the wrong database location is due to the manually configured storage path in the config file: the relative plocal path resolves against the process working directory, not against ORIENTDB_HOME. However, this still doesn't explain why I need to provide my config file directly, or why my plugins (OrientDB Studio) aren't loaded.
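Using an absolute path in the storage element sidesteps that (a sketch, using my placeholder home directory):
<storages>
<storage name="myDB" path="plocal:C:/my/orientdb_home/path/databases/myDB" userName="admin" userPassword="admin" loaded-at-startup="true" />
</storages>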
Turns out I should have read server.bat more carefully. While setting ORIENTDB_HOME does apparently set the default database directory, the default config file isn't looked up under %ORIENTDB_HOME%\config\orientdb-server-config.xml. I had to set orientdb.config.file, which server.bat passes to the JVM as a system property.
My plugin wasn't loaded because it didn't reside in the plugins folder; it was only included in my classpath, which apparently isn't enough.
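Put together, the startup now looks roughly like this (a sketch; the paths are placeholders for your own ORIENTDB_HOME):
System.setProperty("ORIENTDB_HOME", "C:\\my\\orientdb_home\\path");
// server.bat supplies the config location as a system property, so do the same:
System.setProperty("orientdb.config.file", "C:\\my\\orientdb_home\\path\\config\\orientdb-server-config.xml");
server = OServerMain.create(true);
server.startup();   // now finds the config without the InputStream workaround
server.activate();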
I have a misbehaving application running under Tomcat on MSWindows. To get better insight into what is failing, I am trying to add GC logging - but so far my attempts have failed.
Initially I had set CATALINA_OPTS in setenv.bat - but these were ignored on restarting the service.
I then tried adding the options using Tomcat8w.exe:
-Xloggc:"C:\PerfLogs\gc-tomcat.log"
-XX:+UseGCLogFileRotation
-XX:NumberOfGCLogFiles=10
-XX:GCLogFileSize=5M
-XX:+PrintGCDetails
-verbose:gc
-XX:+PrintGCDateStamps
-XX:+HeapDumpOnCtrlBreak
The service fails to start with "error 4". Event viewer shows:
The Apache Tomcat 8.0 Tomcat8 service terminated with the following service-specific error:
The system cannot open the file.
I have checked the path, and the SYSTEM user has full control. There are no error reports in the Tomcat stderr log - only a single entry:
Commons Daemon procrun stdout initialized
I see nothing being added to the other log files.
Removing the options above allows the service to start. Using the above config with the double quotes on the path has no impact. Creating the initial log file before starting the service has no impact.
How do I enable GC logging? How can I find out why this is currently failing?
(sadly, migrating to a more user-friendly operating system is not an option).
Update
I found some more log entries - this time in common-daemon-YYYY-MM-DD.log:
[2018-08-29 11:04:52] [info] [ 4068] Running 'Tomcat8' Service...
[2018-08-29 11:04:52] [info] [ 2560] Starting service...
[2018-08-29 11:04:52] [error] [ 4200] CreateJavaVM Failed
[2018-08-29 11:04:52] [error] [ 4200] The system could not find the environment option that was entered.
[2018-08-29 11:04:52] [error] [ 2560] Failed to start Java
[2018-08-29 11:04:52] [error] [ 2560] ServiceStart returned 4
[2018-08-29 11:04:52] [info] [ 4068] Run service finished.
[2018-08-29 11:04:52] [info] [ 4068] Commons Daemon procrun finished
and, in case it is relevant:
java version "1.8.0_74"
Java(TM) SE Runtime Environment (build 1.8.0_74-b02)
Java HotSpot(TM) 64-Bit Server VM (build 25.74-b02, mixed mode)
After a good deal of digging around, I found that the MSWindows JVM is apparently a second-class citizen in the Java world. According to the Oracle documentation, the MSWindows Java engine does not support log rotation. Removing the following options from the config allowed the JVM to start with GC logging:
-XX:+UseGCLogFileRotation
-XX:NumberOfGCLogFiles=10
-XX:GCLogFileSize=5M
Why this is reported as "The system cannot open the file", I have no idea.
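For reference, the GC-logging options that remained and started cleanly:
-Xloggc:"C:\PerfLogs\gc-tomcat.log"
-verbose:gc
-XX:+PrintGCDetails
-XX:+PrintGCDateStamps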
Now I just need to work out what happens when the log files fill up / how I prevent this.
I am working on a Spring-MVC application in which I compute statistics every night. The problem is, yesterday's computation failed, and I have this error and an hs_err_something.log file. The file basically says out-of-memory error, but our servers have 32GB RAM and quite a lot of disk space too. Also, the server is fairly idle at night. Why am I getting this error? I will post the relevant code.
StatisticsServiceImpl:
@Override
@Scheduled(cron = "0 2 2 * * ?")
public void computeStatisticsForAllUsers() {
    // One of the counts computed as part of the statistics;
    // `person` is the user being processed (surrounding loop elided in this snippet).
    int groupNotesCount = this.groupNotesService.getNoteCountForUser(person.getUsername());
}
GroupNotesDAOImpl:
@Override
public int getNoteCountForUser(String noteCreatorEmail) {
    Session session = this.sessionFactory.getCurrentSession();
    Query query = session.createQuery("select count(*) from GroupNotes as gn where gn.noteCreatorEmail=:noteCreatorEmail");
    query.setParameter("noteCreatorEmail", noteCreatorEmail);
    return new Integer(String.valueOf(query.uniqueResult()));
}
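(Side note: count(*) in HQL comes back as a Long, so the string round-trip in the last line isn't needed; it could simply be the following, though this is unrelated to the crash.)
return ((Number) query.uniqueResult()).intValue();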
Error log:
Aug 05, 2015 2:02:02 AM org.apache.catalina.loader.WebappClassLoader loadClass
INFO: Illegal access: this web application instance has been stopped already. Could not load gn. The eventual following stack trace is caused by an error thrown for debugging purposes as well as to attempt to terminate the thread which caused the illegal access, and has no functional impact.
java.lang.IllegalStateException
at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1612)
at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1571)
at com.journaldev.spring.dao.GroupNotesDAOImpl.getNoteCountForUser(GroupNotesDAOImpl.java:359)
hs_err.log file:
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 741867520 bytes for committing reserved memory.
# Possible reasons:
# The system is out of physical RAM or swap space
# In 32 bit mode, the process size limit was hit
# Possible solutions:
# Reduce memory load on the system
# Increase physical memory or swap space
# Check if swap backing store is full
# Use 64 bit Java on a 64 bit OS
# Decrease Java heap size (-Xmx/-Xms)
# Decrease number of Java threads
# Decrease Java thread stack sizes (-Xss)
# Set larger code cache with -XX:ReservedCodeCacheSize=
# This output file may be truncated or incomplete.
#
# Out of Memory Error (os_linux.cpp:2673), pid=20080, tid=140319513569024
#
# JRE version: Java(TM) SE Runtime Environment (8.0_45-b14) (build 1.8.0_45-b14)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.45-b02 mixed mode linux-amd64 compressed oops)
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
What should I do? Any help would be nice. Thanks a lot.
I'm trying to get Oracle instantclient up and running on OS X with Java 1.6.0_65. I followed all the steps described in the Oracle documentation: https://docs.oracle.com/cd/E11882_01/install.112/e38228/toc.htm
but the problem stays the same:
Invalid memory access of location 0x0 rip=0x106369f87
The stack trace is:
Process: java [6234]
Path: /Library/Java/JavaVirtualMachines/jdk1.6.0_65.jdk/Contents/Home/bin/java
Identifier: com.apple.javajdk16.cmd
Version: 1.0 (1.0)
Code Type: X86-64 (Native)
Parent Process: sh [6222]
Responsible: Terminal [1179]
User ID: 33291
PlugIn Path: /Library/Java/JavaVirtualMachines/jdk1.6.0_65.jdk/Contents/Home/bundle/Libraries/libclient64.dylib
PlugIn Identifier: libclient64.dylib
PlugIn Version: ??? (1)
Date/Time: 2015-07-28 11:22:49.211 +0200
OS Version: Mac OS X 10.10.4 (14E46)
Report Version: 11
Anonymous UUID: 54BA4C92-323A-644A-55CF-CDBEDA054F4E
Time Awake Since Boot: 5500 seconds
Crashed Thread: 27 Java: main
Exception Type: EXC_BAD_ACCESS (SIGSEGV)
Exception Codes: KERN_INVALID_ADDRESS at 0x0000000000000000
VM Regions Near 0:
-->
__TEXT 0000000106233000-000000010623b000 [ 32K] r-x/rwx SM=COW /Library/Java/JavaVirtualMachines/jdk1.6.0_65.jdk/Contents/Home/bin/java
Application Specific Information:
Java information:
Exception type: Bus Error (0xa) at pc=106369f87
Java VM: Java HotSpot(TM) 64-Bit Server VM (20.65-b04-462 mixed mode macosx-amd64)
Current thread (7fddcf86d800): JavaThread "main" [_thread_in_vm, id=309616640, stack(112646000,112746000)]
Stack: [112646000,112746000]
Java frames: (J=compiled Java code, j=interpreted, Vv=VM code)
j oracle.jdbc.driver.T2CConnection.t2cSetSessionTimeZone(JLjava/lang/String;)I+0
j oracle.jdbc.driver.T2CConnection.logon()V+825
j oracle.jdbc.driver.PhysicalConnection.<init>(Ljava/lang/String;Ljava/util/Properties;Loracle/jdbc/driver/OracleDriverExtension;)V+323
....
I tried both the 32-bit and 64-bit versions, using the appropriate client version together with either activating or deactivating the -d32 flag at application start.
Both seem to have the same problem.
Does anyone have an idea what could be wrong here?
Thanks
UPDATE:
I replaced OCI with THIN, which moved the crash further along:
jdbc:oracle:thin:@${dbserver:our.domain.de}:${dbport:1234}:${dbsid:OURSID}
jdbc:oracle:oci:@(description=(ADDRESS=(COMMUNITY=tcp.world)(PROTOCOL=TCP)(Host=our.domain.de)(Port=1521))(connect_data=(sid=OURSID)))
It now crashes after reading the data sources:
12:53:52,490 INFO [ConnectionProviderFactory] Initializing connection provider: org.hibernate.ejb.connection.InjectedDataSourceConnectionProvider
12:53:52,490 INFO [InjectedDataSourceConnectionProvider] Using provided datasource
Invalid memory access of location 0x0 rip=0x10ff50f87
AFAIK the OCI client libs were not well maintained by Oracle, and for years they simply did not support the newest OS X version; they crashed on Mac.
Try the 12c drivers. But if even the thin driver crashes your JVM, then there must be something wrong with your Java installation. Maybe you have multiple JDBC drivers in your classpath?
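One quick way to check that last possibility (a sketch; the class name is just for illustration - it prints where the driver class was actually loaded from):
public class WhichDriver {
    public static void main(String[] args) throws Exception {
        // Load the driver class and print the jar it came from.
        Class<?> driver = Class.forName("oracle.jdbc.OracleDriver");
        System.out.println(driver.getProtectionDomain().getCodeSource().getLocation());
    }
}
If different ojdbc jars show up in different contexts, the classpath is the problem.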
I have a Hadoop-Yarn cluster. When I try to run the Hadoop examples, I get a strange error message in the container log:
Error: Could not find or load main class 1638
My Java version is:
java version "1.7.0_51"
Java(TM) SE Runtime Environment (build 1.7.0_51-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.51-b03, mixed mode)
Running services on master:
593 NodeManager
373 SecondaryNameNode
745 JobHistoryServer
507 ResourceManager
129 NameNode
240 DataNode
Running services on slave:
51 DataNode
136 NodeManager
351 Jps
I execute the following command:
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar grep input/hadoop output 'dfs[a-z.]+'
And get this exception:
15/05/13 13:35:31 INFO mapreduce.Job: Running job: job_1431538391289_0005
15/05/13 13:35:37 INFO mapreduce.Job: Job job_1431538391289_0005 running in uber mode : false
15/05/13 13:35:37 INFO mapreduce.Job: map 0% reduce 0%
15/05/13 13:35:37 INFO mapreduce.Job: Job job_1431538391289_0005 failed with state FAILED due to: Application application_1431538391289_0005 failed 2 times due to AM Container for appattempt_1431538391289_0005_000002 exited with exitCode: 1
For more detailed output, check application tracking page:http://namenode:8088/proxy/application_1431538391289_0005/Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1431538391289_0005_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
at org.apache.hadoop.util.Shell.run(Shell.java:455)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Could you please help me solve this problem?
Instead of
<property>
<name>yarn.app.mapreduce.am.command-opts</name>
<value>819</value>
</property>
It requires
<property>
<name>yarn.app.mapreduce.am.command-opts</name>
<value>-Xmx819m</value>
</property>
Without the -Xmx prefix and the m unit, the bare number is passed to the java launcher as a standalone argument, which java interprets as the name of the main class; that is exactly what produces "Could not find or load main class".
Check the RAM size configured and increase it as necessary. I had the same issue with my VM, only to find it had too little RAM configured (1G).
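If container memory is the culprit, the relevant setting lives in yarn-site.xml; a sketch (the value is only an example, size it to your machine):
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>4096</value>
</property>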