Unable to set up hadoop on my local machine - java

I am trying to install Hadoop as a pseudo-distributed, single-node application on my MacBook, and I have been seeing errors.
When I try to execute sbin/start-dfs.sh, I get the following error:
$ sbin/start-dfs.sh
Starting namenodes on [localhost]
localhost: Connection closed by ::1 port 22
Starting datanodes
localhost: Connection closed by ::1 port 22
Starting secondary namenodes [Kartiks-MacBook-Pro.local]
Kartiks-MacBook-Pro.local: Connection closed by 100.110.189.236 port 22
2018-01-22 00:20:19,441 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
I used the following pages as references:
1) http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
2) https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SingleCluster.html#Pseudo-Distributed_Operation
3) http://zhongyaonan.com/hadoop-tutorial/setting-up-hadoop-2-6-on-mac-osx-yosemite.html
Neither of the URLs to view the namenode loads for me:
1) http://localhost:9870/ (from the Hadoop's website)
2) http://localhost:50070/ (from http://zhongyaonan.com/hadoop-tutorial/setting-up-hadoop-2-6-on-mac-osx-yosemite.html)
I am using my personal login ID on my MacBook.
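For anyone hitting this: the repeated Connection closed by ::1 port 22 lines mean sshd on the Mac is refusing the login before Hadoop ever starts, since the start scripts ssh to localhost. A minimal sketch of the usual fix, assuming stock macOS and no existing key pair:

```shell
# Enable Remote Login (sshd); System Preferences > Sharing > Remote Login also works
sudo systemsetup -setremotelogin on

# Create a passphrase-less key if one doesn't exist yet
mkdir -p "$HOME/.ssh"
[ -f "$HOME/.ssh/id_rsa" ] || ssh-keygen -t rsa -P '' -f "$HOME/.ssh/id_rsa"

# Authorize the key for logins to this machine
cat "$HOME/.ssh/id_rsa.pub" >> "$HOME/.ssh/authorized_keys"
chmod 600 "$HOME/.ssh/authorized_keys"

# This should now succeed without a password prompt; then retry sbin/start-dfs.sh
ssh localhost exit
```

If ssh localhost still fails, check that the login account is allowed under Remote Login's user list.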

Database connection in PhpStorm results in java.rmi.ConnectException: Connection refused to host: 127.0.0.1

I'm trying to connect my database to my project (in PhpStorm) so that I have autocomplete.
Steps I take to get the error:
Open the database panel, and add a MySQL DataSource
Fill every field
Click TEST CONNECTION button
I've correctly filled in every field (host, database, user, password) in the Database tool:
Host: s00vl9944624.fr.net.intra
Database: animationqrc
User: animationqrc
URL (built by PhpStorm): jdbc:mysql://s00vl9944624.fr.net.intra:3306/animationqrc
The error is :
java.rmi.ConnectException: Connection refused to host: 127.0.0.1; nested exception is:
java.net.ConnectException: Connection timed out: connect
The problem is that when I myself run a Java class that only tries to connect to the server and print rows from a table, it works.
import java.sql.*;

class MysqlCon {
    public static void main(String[] args) {
        try {
            Class.forName("com.mysql.cj.jdbc.Driver");
            // pilconquete is the database name; the user and password are redacted
            Connection con = DriverManager.getConnection("jdbc:mysql://s00vl9944624.fr.net.intra:3306/pilconquete?useUnicode=true&useJDBCCompliantTimezoneShift=true&useLegacyDatetimeCode=false&serverTimezone=UTC", "*user*", "*pass*");
            Statement stmt = con.createStatement();
            ResultSet rs = stmt.executeQuery("select * from Admin_list");
            while (rs.next())
                System.out.println(rs.getInt(1) + " " + rs.getString(2) + " " + rs.getString(3));
            con.close();
        } catch (Exception e) {
            System.out.println(e);
        }
    }
}
"C:\Users\b96297\AppData\Local\JetBrains\PhpStorm 2018.2.3\jre64\bin\java" MysqlCon
Thu Sep 20 16:14:02 CEST 2018 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
3 497764 Xavier *******
It also works when I add PhpStorm's default parameters (which you can see in the log below), except for the classpath.
"C:\Users\b96297\AppData\Local\JetBrains\PhpStorm 2018.2.3\jre64\bin\java" -Djava.net.preferIPv4Stack=true -Djava.rmi.server.hostname=127.0.0.1 -Duser.timezone=UTC -Dfile.encoding=UTF-8 MysqlCon
When I add the classpath parameter, Java doesn't find my class.
I'm using Win7 x64 and PhpStorm 2018.2.3 (it was also failing in 2017.3.3). I'm at work, so network restrictions and a firewall might apply, and I don't have admin rights on my laptop.
Thanks for your help
EDIT:
From PhpStorm log :
2018-09-20 15:54:25,481 [ 81612] INFO - ution.rmi.RemoteProcessSupport - "C:\Users\b96297\AppData\Local\JetBrains\PhpStorm 2018.2.3\jre64\bin\java" -Djava.net.preferIPv4Stack=true -Djava.rmi.server.hostname=127.0.0.1 -Duser.timezone=UTC -Dfile.encoding=UTF-8 -classpath "C:\Users\b96297\AppData\Local\JetBrains\PhpStorm 2018.2.3\lib\util.jar;C:\Users\b96297\AppData\Local\JetBrains\PhpStorm 2018.2.3\lib\trove4j.jar;C:\Users\b96297\AppData\Local\JetBrains\PhpStorm 2018.2.3\lib\groovy-all-2.4.15.jar;C:\Users\b96297\AppData\Local\JetBrains\PhpStorm 2018.2.3\plugins\DatabaseTools\lib\jdbc-console.jar;C:\Users\b96297\AppData\Local\JetBrains\PhpStorm 2018.2.3\plugins\DatabaseTools\lib\dekaf-single-2.0.0.372.jar;C:\Users\b96297\.PhpStorm\config\jdbc-drivers\MySQL Connector\J\5.1.46\mysql-connector-java-5.1.46.jar;C:\Users\b96297\Downloads\mysql-connector-java-8.0.12.jar" com.intellij.database.remote.RemoteJdbcServer com.mysql.cj.jdbc.Driver
2018-09-20 15:54:25,701 [ 81832] WARN - ution.rmi.RemoteProcessSupport - Loading class `com.mysql.jdbc.Driver'. This is deprecated. The new driver class is `com.mysql.cj.jdbc.Driver'. The driver is automatically registered via the SPI and manual loading of the driver class is generally unnecessary.
2018-09-20 15:54:26,310 [ 82441] INFO - ution.rmi.RemoteProcessSupport - Port/ID: 30227/RemoteDriverImpl3260ec8e
2018-09-20 15:54:46,310 [ 102441] WARN - ution.rmi.RemoteProcessSupport - java.rmi.NotBoundException: _DEAD_HAND_
2018-09-20 15:54:46,310 [ 102441] WARN - ution.rmi.RemoteProcessSupport - at sun.rmi.registry.RegistryImpl.lookup(RegistryImpl.java:209)
2018-09-20 15:54:46,310 [ 102441] WARN - ution.rmi.RemoteProcessSupport - at com.intellij.execution.rmi.RemoteServer.start(RemoteServer.java:96)
2018-09-20 15:54:46,310 [ 102441] WARN - ution.rmi.RemoteProcessSupport - at com.intellij.database.remote.RemoteJdbcServerBase.setupAndStart(RemoteJdbcServerBase.java:20)
2018-09-20 15:54:46,310 [ 102441] WARN - ution.rmi.RemoteProcessSupport - at com.intellij.database.remote.RemoteJdbcServer.main(RemoteJdbcServer.java:14)
2018-09-20 15:54:47,334 [ 103465] WARN - ution.rmi.RemoteProcessSupport - The cook failed to start due to java.net.ConnectException: Connection timed out: connect
2018-09-20 15:54:47,335 [ 103466] INFO - ution.rmi.RemoteProcessSupport - Process finished with exit code 1
2018-09-20 15:54:47,339 [ 103470] WARN - lij.database.util.ErrorHandler - java.rmi.ConnectException: Connection refused to host: 127.0.0.1; nested exception is:
java.net.ConnectException: Connection timed out: connect
java.lang.RuntimeException: java.rmi.ConnectException: Connection refused to host: 127.0.0.1; nested exception is:
java.net.ConnectException: Connection timed out: connect
I've tried setting java.rmi.server.hostname to different IPs (my own, the server's hostname, the server's IP)
I've tried the bundled Java that comes with PhpStorm
I can log in to the server with the mysql command line
To summarize my comments: PhpStorm uses a separate Java process to isolate database access from the rest of the application, and it looks like communication between these two processes (which uses RMI, Remote Method Invocation) is not possible.
This is possibly a firewall issue. Since you're on Windows, check the Windows Firewall allowed-apps settings for the OpenJDK Platform Binary of your PhpStorm install (see its details; the path should be C:\Users\b96297\AppData\Local\JetBrains\PhpStorm 2018.2.3\jre64\bin\java in your case) and enable Private access (in some cases, you may need to try Public as well). This will allow the Java processes to communicate using RMI.
If you can't find the OpenJDK Platform Binary in the firewall configuration, add the java.exe from the jre64\bin folder of the PhpStorm install and configure it.
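To make the failure mode concrete, here is a minimal, self-contained sketch of the same pattern (the class, names, and port are hypothetical, chosen to mirror the Port/ID line in the log): one JVM exports a remote object bound to 127.0.0.1 and a client looks it up over RMI. When a firewall blocks the Java binary, it is this getRegistry/lookup/call step that times out:

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

public class RmiLoopback {
    // A minimal remote interface, analogous in spirit to PhpStorm's RemoteJdbcServer
    public interface Ping extends Remote {
        String ping() throws RemoteException;
    }

    static String roundTrip() throws Exception {
        // Advertise the loopback address in stubs, as -Djava.rmi.server.hostname=127.0.0.1 does
        System.setProperty("java.rmi.server.hostname", "127.0.0.1");
        Registry reg = LocateRegistry.createRegistry(30227); // arbitrary free port
        Ping impl = () -> "pong";
        reg.rebind("ping", UnicastRemoteObject.exportObject(impl, 0));
        // What the second process does; a blocked firewall makes this step time out
        Ping stub = (Ping) LocateRegistry.getRegistry("127.0.0.1", 30227).lookup("ping");
        String reply = stub.ping();
        UnicastRemoteObject.unexportObject(impl, true); // let the JVM exit cleanly
        UnicastRemoteObject.unexportObject(reg, true);
        return reply;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip());
    }
}
```

With the firewall open, the round trip succeeds over loopback; with java.exe blocked, the lookup fails the same way the log above shows.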
After discussing with @MarkRotteveel and Dmitry Tronin (Support Engineer at JetBrains),
I've learned a few things:
Adding OpenJDK (PhpStorm bundled JDK) to the list of Allowed Programs in Windows Firewall works.
There's an ongoing ticket that support linked me to: Ticket

Hadoop Docker Setup - WordCount Tutorial

I was following the tutorial to run WordCount.java mentioned here, and when I run the following line from the tutorial
hadoop jar wordcount.jar org.myorg.WordCount /user/cloudera/wordcount/input /user/cloudera/wordcount/output
I get the following error:
17/09/04 01:57:29 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
17/09/04 01:57:30 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
The docker image that I used was docker pull cloudera/quickstart
There were no setup tutorials for Hadoop with Docker, so it would be helpful if you could tell me which configuration changes are needed to overcome this issue.
That tutorial assumes you are inside the cluster, with the hadoop client command available and the Hadoop services started and properly configured.
0.0.0.0:8032 is the default YARN ResourceManager address, so you need to configure your HADOOP_CONF_DIR XML files (specifically yarn-site.xml for this error) to point at the Docker container for the correct YARN addresses. core-site.xml and hdfs-site.xml will need to be configured to point at HDFS as well.
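As a sketch of what that looks like (the hostname is an assumption; substitute your container's actual hostname or IP — the cloudera/quickstart image typically uses quickstart.cloudera), yarn-site.xml under HADOOP_CONF_DIR would contain something like:

```xml
<configuration>
  <!-- quickstart.cloudera is a placeholder for the container's hostname/IP -->
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>quickstart.cloudera</value>
  </property>
</configuration>
```

fs.defaultFS in core-site.xml would similarly point at the container's HDFS, using an hdfs:// URI with the same host.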

GlassFish 4 error starting domain domain1 listening for transport dt_socket at address: 9009 Error: could not find or load main class files

I am trying to run GlassFish 4 on Windows 7. In glassfish4\bin I run asadmin start-domain -d and I get the following error message:
Waiting for domain 1 to start. Error starting domain domain1. The
server exited prematurely with exit code 1.
Before it died, it produced the following output:
Listening for transport dt_socket at address: 9009 Error: could not
find or load main class files.
Command start-domain failed
I checked the PATH and CLASSPATH and things appear to be ok but obviously something is wrong here.
Can you try starting the domain without the -d flag? I think that flag tries to start it in debug mode. Port 9009 is not a standard GlassFish port but is used for JPDA debugging, which also uses dt_socket as a transport.

error on /127.0.0.1 connection (com.datastax.driver.core.TransportException: [/127.0.0.1] Unexpected exception triggered), no more host to try

I am trying to connect to Cassandra from Java in a Windows environment. Following are the application/OS/library versions:
-Windows 7
-Java 7
-Cassandra 2.1.12
Code:
Cluster clst;
Session ses;
clst= Cluster.builder().addContactPoint("127.0.0.1").withPort(9042).build();
Cassandra and nodetool are running. Below is the nodetool status output:
C:\Program Files\DataStax Community\apache-cassandra\bin>nodetool -h localhost status
Starting NodeTool
Datacenter: datacenter1
========================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns Host ID Rack
UN 127.0.0.1 245.99 KB 256 ? 61c6b0e5-2f83-4bc9-9b86-6507e2f06dfc rack1
Note: Non-system keyspaces don't have the same replication settings, effective ownership information is meaningless
C:\Program Files\DataStax Community\apache-cassandra\bin>
When I try to connect to Cassandra via localhost/127.0.0.1, I get the error below in the stack trace.
19:19:05.996 [main] DEBUG c.d.driver.core.ControlConnection - [Control connection] Refreshing node list and token map
19:19:06.465 [main] DEBUG c.d.driver.core.ControlConnection - [Control connection] error on /127.0.0.1 connection (com.datastax.driver.core.TransportException: [/127.0.0.1] Unexpected exception triggered), no more host to try
19:19:06.469 [main] DEBUG com.datastax.driver.core.Cluster - Shutting down
Exception in thread "main" com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: [/127.0.0.1])
at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:162)
at com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:83)
at com.datastax.driver.core.Cluster$Manager.<init>(Cluster.java:516)
at com.datastax.driver.core.Cluster$Manager.<init>(Cluster.java:473)
at com.datastax.driver.core.Cluster.<init>(Cluster.java:65)
at com.datastax.driver.core.Cluster.buildFrom(Cluster.java:93)
at com.datastax.driver.core.Cluster$Builder.build(Cluster.java:458)
at cass.Cass.main(Cass.java:16)
Java Result: 1
I have also tried to find a solution on Stack Overflow and other sites, but failed to solve my issue.
Does anybody have a solution for this?
Please check your rpc_address in cassandra.yaml. I suggest setting rpc_address: 0.0.0.0,
and setting broadcast_rpc_address= to a real, reachable IP (it is required when rpc_address is 0.0.0.0).
listen_address can be blank or the machine's IP.
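As a cassandra.yaml sketch (192.168.1.10 is a placeholder for the machine's actual IP):

```yaml
# Listen for client (CQL) connections on all interfaces
rpc_address: 0.0.0.0
# Required when rpc_address is 0.0.0.0; placeholder IP, use your machine's
broadcast_rpc_address: 192.168.1.10
# Blank means the node resolves its own hostname
listen_address:
```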
From your stack trace, I suspect that you are using an extremely old version of the Java driver, probably some version from the 1.x series (1.0.1?). Try with the latest 3.0.0 version and see if the error is still there.
I found the problem: I was running the code from a project that uses a lot of third-party libraries, and one of them was causing the error (I don't know which one yet). I replaced all of the jar files and the problem was solved. Go to the following URL and download the Java driver jar files matching the Cassandra version you are using:
http://docs.datastax.com/en/developer/driver-matrix/doc/javaDrivers.html#java-drivers
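If the project uses Maven, pulling in a current driver looks like the following (3.0.0 matches the version suggested above; pick whatever version the compatibility matrix recommends for your Cassandra release):

```xml
<dependency>
  <groupId>com.datastax.cassandra</groupId>
  <artifactId>cassandra-driver-core</artifactId>
  <version>3.0.0</version>
</dependency>
```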

Tachyon 0.8.2 deployed with Hadoop 2.6.0, but the IPC versions do not match

I want to deploy Tachyon 0.8.2 on my Ubuntu 14.04 machine, where I already have Hadoop and Spark:
on the master
bd#master$ jps
11871 Jps
3388 Master
2919 NameNode
3266 ResourceManager
3123 SecondaryNameNode
on the slave
bd#slave$ jps
4350 Jps
2778 NodeManager
2647 DataNode
2879 Worker
And I edited tachyon-env.sh:
export TACHYON_MASTER_ADDRESS=${TACHYON_MASTER_ADDRESS:-master}
export TACHYON_UNDERFS_ADDRESS=${TACHYON_UNDERFS_ADDRESS:-hdfs://master:9000}
Then I ran bin/tachyon format and bin/tachyon-start.sh local.
I cannot see TachyonMaster in jps:
/usr/local/bigdata/tachyon-0.8.2 [06:06:32]
bd$ bin/tachyon-start.sh local
Killed 0 processes on master
Killed 0 processes on master
Connecting to master as bd...
Killed 0 processes on master
Connection to master closed.
[sudo] password for bd:
Formatting RamFS: /mnt/ramdisk (512mb)
Starting master @ master
Starting worker @ master
/usr/local/bigdata/tachyon-0.8.2 [06:06:54]
bd$ jps
12183 TachyonWorker
3388 Master
2919 NameNode
3266 ResourceManager
3123 SecondaryNameNode
12203 Jps
And in master.log I see:
2015-12-27 18:06:50,635 ERROR MASTER_LOGGER (MetricsConfig.java:loadConfigFile) - Error loading metrics configuration file.
2015-12-27 18:06:51,735 ERROR MASTER_LOGGER (HdfsUnderFileSystem.java:<init>) - Exception thrown when trying to get FileSystem for hdfs://master:9000
org.apache.hadoop.ipc.RemoteException: Server IPC version 9 cannot communicate with client version 4
at org.apache.hadoop.ipc.Client.call(Client.java:1070)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
at com.sun.proxy.$Proxy1.getProtocolVersion(Unknown Source)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:119)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:238)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:203)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1386)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1404)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:254)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:187)
at tachyon.underfs.hdfs.HdfsUnderFileSystem.<init>(HdfsUnderFileSystem.java:74)
at tachyon.underfs.hdfs.HdfsUnderFileSystemFactory.create(HdfsUnderFileSystemFactory.java:30)
at tachyon.underfs.UnderFileSystemRegistry.create(UnderFileSystemRegistry.java:116)
at tachyon.underfs.UnderFileSystem.get(UnderFileSystem.java:100)
at tachyon.underfs.UnderFileSystem.get(UnderFileSystem.java:83)
at tachyon.master.TachyonMaster.connectToUFS(TachyonMaster.java:412)
at tachyon.master.TachyonMaster.startMasters(TachyonMaster.java:280)
at tachyon.master.TachyonMaster.start(TachyonMaster.java:261)
at tachyon.master.TachyonMaster.main(TachyonMaster.java:64)
2015-12-27 18:06:51,742 ERROR MASTER_LOGGER (TachyonMaster.java:main) - Uncaught exception terminating Master
java.lang.IllegalArgumentException: All eligible Under File Systems were unable to create an instance for the given path: hdfs://master:9000
java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException: Server IPC version 9 cannot communicate with client version 4
at tachyon.underfs.UnderFileSystemRegistry.create(UnderFileSystemRegistry.java:132)
at tachyon.underfs.UnderFileSystem.get(UnderFileSystem.java:100)
at tachyon.underfs.UnderFileSystem.get(UnderFileSystem.java:83)
at tachyon.master.TachyonMaster.connectToUFS(TachyonMaster.java:412)
at tachyon.master.TachyonMaster.startMasters(TachyonMaster.java:280)
at tachyon.master.TachyonMaster.start(TachyonMaster.java:261)
at tachyon.master.TachyonMaster.main(TachyonMaster.java:64)
What should I do about this problem?
This exception arises from a version mismatch between the Hadoop client and server. Check your Hadoop version, and then recompile Tachyon against that version using this command:
mvn -Dhadoop.version=your_hadoop_version clean install
Example: mvn -Dhadoop.version=2.4.0 clean install
Now configure your compiled Tachyon and it should work fine. Reference link.
