I have a Hadoop-YARN cluster. When I try to run the Hadoop examples, I get a strange error message in the container log:
Error: Could not find or load main class 1638
My Java version is:
java version "1.7.0_51"
Java(TM) SE Runtime Environment (build 1.7.0_51-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.51-b03, mixed mode)
Running services on master:
593 NodeManager
373 SecondaryNameNode
745 JobHistoryServer
507 ResourceManager
129 NameNode
240 DataNode
Running services on slave:
51 DataNode
136 NodeManager
351 Jps
I execute the following command:
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar grep input/hadoop output 'dfs[a-z.]+'
And get this exception:
15/05/13 13:35:31 INFO mapreduce.Job: Running job: job_1431538391289_0005
15/05/13 13:35:37 INFO mapreduce.Job: Job job_1431538391289_0005 running in uber mode : false
15/05/13 13:35:37 INFO mapreduce.Job: map 0% reduce 0%
15/05/13 13:35:37 INFO mapreduce.Job: Job job_1431538391289_0005 failed with state FAILED due to: Application application_1431538391289_0005 failed 2 times due to AM Container for appattempt_1431538391289_0005_000002 exited with exitCode: 1
For more detailed output, check the application tracking page: http://namenode:8088/proxy/application_1431538391289_0005/ Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1431538391289_0005_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
at org.apache.hadoop.util.Shell.run(Shell.java:455)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Could you please help me to solve this problem?
The value of yarn.app.mapreduce.am.command-opts is appended verbatim to the java command line that launches the ApplicationMaster, so a bare number is interpreted as a class name, which is exactly what the "Could not find or load main class" message reports. Instead of
<property>
<name>yarn.app.mapreduce.am.command-opts</name>
<value>819</value>
</property>
it requires
<property>
<name>yarn.app.mapreduce.am.command-opts</name>
<value>-Xmx819m</value>
</property>
Check the RAM size configured for the machine and increase it as necessary. I had the same issue with my VM, only to find it had too little RAM configured (1 GB).
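If the host has memory to spare but YARN is capped too low, you can also raise the container limits. A minimal sketch for yarn-site.xml, assuming a small VM with roughly 2 GB to give to YARN (the values are illustrative, not recommendations):
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>2048</value>
</property>
<property>
<name>yarn.scheduler.minimum-allocation-mb</name>
<value>256</value>
</property>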
When I try to start Jenkins, I'm facing this error:
[root@ip-10-0-1-245 ~]# systemctl start jenkins
Job for jenkins.service failed because the control process exited with error code. See "systemctl status jenkins.service" and "journalctl -xe" for details.
My Java version is:
openjdk version "11.0.13" 2021-10-19 LTS
systemctl status shows only the activating state:
[root@ip-10-0-1-245 ~]# systemctl status jenkins
● jenkins.service - Jenkins Continuous Integration Server
Loaded: loaded (/usr/lib/systemd/system/jenkins.service; enabled; vendor preset: disabled)
Active: activating (start) since Sun 2022-05-08 14:46:00 UTC; 169ms ago
Main PID: 20507 (java)
CGroup: /system.slice/jenkins.service
└─20507 /usr/bin/java -Djava.awt.headless=true -jar /usr/share/java/jenkins.war --webroot=%C/jenkins/war --httpPort=...
May 08 14:46:00 ip-10-0-1-245.ap-south-1.compute.internal systemd[1]: Starting Jenkins Continuous Integration Server...
Kindly help me to resolve this issue.
I have installed the Hadoop 2.7.1 stable version. I followed Tom White's book for installation in pseudo-distributed mode. I set all the environment variables such as JAVA_HOME, HADOOP_HOME, and PATH, and configured yarn-site.xml, hdfs-site.xml, core-site.xml, and mapred-site.xml.
I copied the sample file file.txt using following command.
$ hadoop fs -copyFromLocal textFiles/file.txt file.txt
Listing the directory afterwards shows:
Found 2 items
-rw-r--r-- 1 RAMA supergroup 3737 2015-12-27 21:52 file.txt
drwxr-xr-x - RAMA supergroup 0 2015-12-27 22:17 input
When I execute the wordcount program in hadoop-mapreduce-examples-2.7.1.jar using the command below:
RAMAs-MacBook-Pro:hadoop-2.7.1 RAMA$ hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar wordcount file.txt output
it throws the following exception, for which I am not able to find any feasible solution:
15/12/27 22:41:52 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/12/27 22:41:53 INFO client.RMProxy: Connecting to ResourceManager at localhost/127.0.0.1:8032
15/12/27 22:41:53 INFO input.FileInputFormat: Total input paths to process : 1
15/12/27 22:41:53 INFO mapreduce.JobSubmitter: number of splits:1
15/12/27 22:41:53 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1451216397139_0020
15/12/27 22:41:54 INFO impl.YarnClientImpl: Submitted application application_1451216397139_0020
15/12/27 22:41:54 INFO mapreduce.Job: The url to track the job: http://192.168.1.6:8088/proxy/application_1451216397139_0020/
15/12/27 22:41:54 INFO mapreduce.Job: Running job: job_1451216397139_0020
15/12/27 22:41:57 INFO mapreduce.Job: Job job_1451216397139_0020 running in uber mode : false
15/12/27 22:41:57 INFO mapreduce.Job: map 0% reduce 0%
15/12/27 22:41:57 INFO mapreduce.Job: Job job_1451216397139_0020 failed with state FAILED due to: Application application_1451216397139_0020 failed 2 times due to AM Container for appattempt_1451216397139_0020_000002 exited with exitCode: 127
For more detailed output, check the application tracking page: http://192.168.1.6:8088/cluster/app/application_1451216397139_0020 Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1451216397139_0020_02_000001
Exit code: 127
Stack trace: ExitCodeException exitCode=127:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:545)
at org.apache.hadoop.util.Shell.run(Shell.java:456)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 127
Failing this attempt. Failing the application.
15/12/27 22:41:57 INFO mapreduce.Job: Counters: 0
Any suggestions/comments are of great help.
In hadoop-env.sh, explicitly set the Java home path in the JAVA_HOME variable.
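For example (the paths below are assumptions; on a Mac, /usr/libexec/java_home prints the correct one for your install):
export JAVA_HOME=$(/usr/libexec/java_home)
or, on Linux, something like:
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64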
Now I want to deploy Tachyon 0.8.2 on my Ubuntu 14.04 machine. I already have Hadoop and Spark:
on the master
bd#master$ jps
11871 Jps
3388 Master
2919 NameNode
3266 ResourceManager
3123 SecondaryNameNode
on the slave
bd#slave$ jps
4350 Jps
2778 NodeManager
2647 DataNode
2879 Worker
And I edited tachyon-env.sh:
export TACHYON_MASTER_ADDRESS=${TACHYON_MASTER_ADDRESS:-master}
export TACHYON_UNDERFS_ADDRESS=${TACHYON_UNDERFS_ADDRESS:-hdfs://master:9000}
Then I run bin/tachyon format and bin/tachyon-start.sh local.
I cannot see the TachyonMaster in jps:
/usr/local/bigdata/tachyon-0.8.2 [06:06:32]
bd$ bin/tachyon-start.sh local
Killed 0 processes on master
Killed 0 processes on master
Connecting to master as bd...
Killed 0 processes on master
Connection to master closed.
[sudo] password for bd:
Formatting RamFS: /mnt/ramdisk (512mb)
Starting master @ master
Starting worker @ master
/usr/local/bigdata/tachyon-0.8.2 [06:06:54]
bd$ jps
12183 TachyonWorker
3388 Master
2919 NameNode
3266 ResourceManager
3123 SecondaryNameNode
12203 Jps
And master.log says:
2015-12-27 18:06:50,635 ERROR MASTER_LOGGER (MetricsConfig.java:loadConfigFile) - Error loading metrics configuration file.
2015-12-27 18:06:51,735 ERROR MASTER_LOGGER (HdfsUnderFileSystem.java:<init>) - Exception thrown when trying to get FileSystem for hdfs://master:9000
org.apache.hadoop.ipc.RemoteException: Server IPC version 9 cannot communicate with client version 4
at org.apache.hadoop.ipc.Client.call(Client.java:1070)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
at com.sun.proxy.$Proxy1.getProtocolVersion(Unknown Source)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:119)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:238)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:203)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1386)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1404)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:254)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:187)
at tachyon.underfs.hdfs.HdfsUnderFileSystem.<init>(HdfsUnderFileSystem.java:74)
at tachyon.underfs.hdfs.HdfsUnderFileSystemFactory.create(HdfsUnderFileSystemFactory.java:30)
at tachyon.underfs.UnderFileSystemRegistry.create(UnderFileSystemRegistry.java:116)
at tachyon.underfs.UnderFileSystem.get(UnderFileSystem.java:100)
at tachyon.underfs.UnderFileSystem.get(UnderFileSystem.java:83)
at tachyon.master.TachyonMaster.connectToUFS(TachyonMaster.java:412)
at tachyon.master.TachyonMaster.startMasters(TachyonMaster.java:280)
at tachyon.master.TachyonMaster.start(TachyonMaster.java:261)
at tachyon.master.TachyonMaster.main(TachyonMaster.java:64)
2015-12-27 18:06:51,742 ERROR MASTER_LOGGER (TachyonMaster.java:main) - Uncaught exception terminating Master
java.lang.IllegalArgumentException: All eligible Under File Systems were unable to create an instance for the given path: hdfs://master:9000
java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException: Server IPC version 9 cannot communicate with client version 4
at tachyon.underfs.UnderFileSystemRegistry.create(UnderFileSystemRegistry.java:132)
at tachyon.underfs.UnderFileSystem.get(UnderFileSystem.java:100)
at tachyon.underfs.UnderFileSystem.get(UnderFileSystem.java:83)
at tachyon.master.TachyonMaster.connectToUFS(TachyonMaster.java:412)
at tachyon.master.TachyonMaster.startMasters(TachyonMaster.java:280)
at tachyon.master.TachyonMaster.start(TachyonMaster.java:261)
at tachyon.master.TachyonMaster.main(TachyonMaster.java:64)
What should I do about this problem?
This exception arises due to a version mismatch between the Hadoop client and server sides. Check your Hadoop version, and then recompile Tachyon against that version using this command:
mvn -Dhadoop.version=your_hadoop_version clean install
Example: mvn -Dhadoop.version=2.4.0 clean install
Now configure your compiled Tachyon and it should work fine.
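If you are unsure which version the cluster is running, the Hadoop CLI reports it, so you can take the -Dhadoop.version value from the first line of its output:
$ hadoop version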
I am trying to run the wordcount example already provided in Hadoop on the following environment (pseudo-distributed mode):
Windows 7
Hadoop 2.7.1
JDK 1.7.x
RAM 4 GB
The jps command returns
C:\deploy\hadoop-2.7.1>jps
2336 ResourceManager
7500 NameNode
4984 Jps
6900 NodeManager
4940 DataNode
The command I use for setting the Hadoop heap size is:
set HADOOP_HEAPSIZE=512
The command I run from the Hadoop installation directory is:
bin\yarn jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar wordcount /input /output
I see the following stack trace
C:\deploy\hadoop-2.7.1>bin\yarn jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar wordcount /input /output
15/08/14 22:36:26 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
15/08/14 22:36:27 INFO input.FileInputFormat: Total input paths to process : 1
15/08/14 22:36:28 INFO mapreduce.JobSubmitter: number of splits:1
15/08/14 22:36:28 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1439571873038_0001
15/08/14 22:36:28 INFO impl.YarnClientImpl: Submitted application application_1439571873038_0001
15/08/14 22:36:28 INFO mapreduce.Job: The url to track the job: http://XXX-PC:8088/proxy/application_1439571873038_0001/
15/08/14 22:36:28 INFO mapreduce.Job: Running job: job_1439571873038_0001
15/08/14 22:36:37 INFO mapreduce.Job: Job job_1439571873038_0001 running in uber mode : false
15/08/14 22:36:37 INFO mapreduce.Job: map 0% reduce 0%
15/08/14 22:36:37 INFO mapreduce.Job: Job job_1439571873038_0001 failed with state FAILED due to: Application application_1439571873038_0001 failed 2 times due to AM Container for appattempt_1439571873038_0001_000002 exited with exitCode: 1
For more detailed output, check the application tracking page: http://XXX-PC:8088/cluster/app/application_1439571873038_0001 Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1439571873038_0001_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:545)
at org.apache.hadoop.util.Shell.run(Shell.java:456)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Shell output: 1 file(s) moved.
Container exited with a non-zero exit code 1
Failing this attempt. Failing the application.
15/08/14 22:36:37 INFO mapreduce.Job: Counters: 0
When I went to the stderr logs mentioned in the stack trace above, the actual error turned out to be:
Error: Could not create the Java Virtual Machine.
When I try to increase HADOOP_HEAPSIZE to 1024, the NameNode, DataNode, and YARN daemons do not start at all and give me the same "Could not create the Java Virtual Machine" error.
Has someone had the same problem? How can I solve this issue?
You can solve this problem by editing the file /etc/hadoop/hadoop-env.sh as shown below:
export HADOOP_HEAPSIZE=3072
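HADOOP_HEAPSIZE is in MB and must fit inside physical RAM alongside the other daemons. Since this question is about a Windows install, the equivalent edit would go in etc\hadoop\hadoop-env.cmd (assuming the standard file shipped with Hadoop 2.7.x); a sketch:
@rem heap size in MB; keep well under the machine's physical RAM
set HADOOP_HEAPSIZE=1024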
I am trying to upgrade Cassandra 1 to Cassandra 2. To do that I upgraded Java (to Java 7), but whenever I execute cassandra, it launches like this:
INFO 17:32:41,413 Logging initialized
INFO 17:32:41,437 Loading settings from file:/etc/cassandra/cassandra.yaml
INFO 17:32:41,642 Data files directories: [/var/lib/cassandra/data]
INFO 17:32:41,643 Commit log directory: /var/lib/cassandra/commitlog
INFO 17:32:41,643 DiskAccessMode 'auto' determined to be mmap, indexAccessMode is mmap
INFO 17:32:41,643 disk_failure_policy is stop
INFO 17:32:41,643 commit_failure_policy is stop
INFO 17:32:41,647 Global memtable threshold is enabled at 986MB
INFO 17:32:41,727 Not using multi-threaded compaction
INFO 17:32:41,869 JVM vendor/version: OpenJDK 64-Bit Server VM/1.7.0_55
WARN 17:32:41,869 OpenJDK is not recommended. Please upgrade to the newest Oracle Java release
INFO 17:32:41,869 Heap size: 4137680896/4137680896
INFO 17:32:41,870 Code Cache Non-heap memory: init = 2555904(2496K) used = 657664(642K) committed = 2555904(2496K) max = 50331648(49152K)
INFO 17:32:41,870 Par Eden Space Heap memory: init = 335544320(327680K) used = 80545080(78657K) committed = 335544320(327680K) max = 335544320(327680K)
INFO 17:32:41,870 Par Survivor Space Heap memory: init = 41943040(40960K) used = 0(0K) committed = 41943040(40960K) max = 41943040(40960K)
INFO 17:32:41,870 CMS Old Gen Heap memory: init = 3760193536(3672064K) used = 0(0K) committed = 3760193536(3672064K) max = 3760193536(3672064K)
INFO 17:32:41,872 CMS Perm Gen Non-heap memory: init = 21757952(21248K) used = 14994304(14642K) committed = 21757952(21248K) max = 174063616(169984K)
INFO 17:32:41,872 Classpath: /etc/cassandra:/usr/share/cassandra/lib/antlr-3.2.jar:/usr/share/cassandra/lib/commons-cli-1.1.jar:/usr/share/cassandra/lib/commons-codec-1.2.jar:/usr/share/cassandra/lib/commons-lang3-3.1.jar:/usr/share/cassandra/lib/compress-lzf-0.8.4.jar:/usr/share/cassandra/lib/concurrentlinkedhashmap-lru-1.3.jar:/usr/share/cassandra/lib/disruptor-3.0.1.jar:/usr/share/cassandra/lib/guava-15.0.jar:/usr/share/cassandra/lib/high-scale-lib-1.1.2.jar:/usr/share/cassandra/lib/jackson-core-asl-1.9.2.jar:/usr/share/cassandra/lib/jackson-mapper-asl-1.9.2.jar:/usr/share/cassandra/lib/jamm-0.2.5.jar:/usr/share/cassandra/lib/jbcrypt-0.3m.jar:/usr/share/cassandra/lib/jline-1.0.jar:/usr/share/cassandra/lib/json-simple-1.1.jar:/usr/share/cassandra/lib/libthrift-0.9.1.jar:/usr/share/cassandra/lib/log4j-1.2.16.jar:/usr/share/cassandra/lib/lz4-1.2.0.jar:/usr/share/cassandra/lib/metrics-core-2.2.0.jar:/usr/share/cassandra/lib/netty-3.6.6.Final.jar:/usr/share/cassandra/lib/reporter-config-2.1.0.jar:/usr/share/cassandra/lib/servlet-api-2.5-20081211.jar:/usr/share/cassandra/lib/slf4j-api-1.7.2.jar:/usr/share/cassandra/lib/slf4j-log4j12-1.7.2.jar:/usr/share/cassandra/lib/snakeyaml-1.11.jar:/usr/share/cassandra/lib/snappy-java-1.0.5.jar:/usr/share/cassandra/lib/snaptree-0.1.jar:/usr/share/cassandra/lib/super-csv-2.1.0.jar:/usr/share/cassandra/lib/thrift-server-internal-only-0.3.3.jar:/usr/share/cassandra/apache-cassandra-2.0.8.jar:/usr/share/cassandra/apache-cassandra.jar:/usr/share/cassandra/apache-cassandra-thrift-2.0.8.jar:/usr/share/cassandra/stress.jar::/usr/share/cassandra/lib/jamm-0.2.5.jar
INFO 17:32:41,873 JNA not found. Native methods will be disabled.
INFO 17:32:41,884 Initializing key cache with capacity of 100 MBs.
INFO 17:32:41,890 Scheduling key cache save to each 14400 seconds (going to save all keys).
INFO 17:32:41,890 Initializing row cache with capacity of 0 MBs
INFO 17:32:41,895 Scheduling row cache save to each 0 seconds (going to save all keys).
INFO 17:32:41,968 Initializing system.schema_triggers
INFO 17:32:41,985 Initializing system.compaction_history
INFO 17:32:41,988 Initializing system.batchlog
INFO 17:32:41,991 Initializing system.sstable_activity
INFO 17:32:41,994 Initializing system.peer_events
INFO 17:32:41,997 Initializing system.compactions_in_progress
INFO 17:32:42,000 Initializing system.hints
ERROR 17:32:42,001 Exception encountered during startup
java.lang.RuntimeException: Incompatible SSTable found. Current version jb is unable to read file: /var/lib/cassandra/data/system/schema_keyspaces/system-schema_keyspaces-hf-2. Please run upgradesstables.
at org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:415)
at org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:392)
at org.apache.cassandra.db.Keyspace.initCf(Keyspace.java:309)
at org.apache.cassandra.db.Keyspace.<init>(Keyspace.java:266)
at org.apache.cassandra.db.Keyspace.open(Keyspace.java:110)
at org.apache.cassandra.db.Keyspace.open(Keyspace.java:88)
at org.apache.cassandra.db.SystemKeyspace.checkHealth(SystemKeyspace.java:536)
at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:261)
at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:496)
at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:585)
Exception encountered during startup: Incompatible SSTable found. Current version jb is unable to read file: /var/lib/cassandra/data/system/schema_keyspaces/system-schema_keyspaces-hf-2. Please run upgradesstables.
When I try to execute upgradesstables (nodetool upgradesstables -h 127.0.0.1 -u root ...), I get this:
Failed to connect to '127.0.0.1:7000': Connection refused
Can anyone help me, please?
Thanks.
The Cassandra error has nothing to do with OpenJDK, although I do recommend using Oracle's.
You need to make sure you're on a valid upgrade path: http://docs.datastax.com/en/upgrade/doc/upgrade/cassandra/upgradeC_c.html
You can't trivially upgrade from a Cassandra older than 1.2.9 to 2.0, and you can't upgrade from 1.x to 2.1 without first going to 2.0.7 or later.
Suggested upgrade path per documentation: 1.x > 1.2.9 > 2.0.7 > 2.1.x
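On the nodetool error itself: nodetool talks to Cassandra over JMX, which listens on port 7199 by default, while 7000 is the inter-node storage port, so a refusal on 7000 is expected. A sketch of the usual invocation (the credentials flags are only needed if JMX authentication is enabled):
nodetool -h 127.0.0.1 -p 7199 upgradesstables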
Your java -version output shows you're not using the correct JDK. It should be something like the below:
java version "1.8.0_65"
Java(TM) SE Runtime Environment (build 1.8.0_65-b17)
Java HotSpot(TM) 64-Bit Server VM (build 25.65-b01, mixed mode)
For this, tell the system that there's a new Java version available:
$ sudo update-alternatives --install "/usr/bin/java" "java" "/usr/lib/jvm/jdk1.8.0_version/bin/java" 1
Set the new JDK as the default using the following command:
$ sudo update-alternatives --config java
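You can then confirm that the switch took effect:
$ java -version
The output should now report the 1.8.0 build you installed.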
You can refer to https://devopsmanual.in/2018/03/07/how-to-upgrade-datastax-cassandra/ for more details.