I want to connect to an HBase container on a remote server that I connect to with SSH through a VPN. Let's say it's 10.0.0.10. In /etc/hosts I placed:
10.0.0.10 hbaseaddr
In my Java code I use hbase-client:
config.set("hbase.zookeeper.quorum", "hbaseaddr");
config.set("hbase.zookeeper.property.clientPort", "2181");
When I run it, I get the following error:
Can not resolve 791995b8a2df, please check your network
What is 791995b8a2df? Also, surprisingly, when the VPN is turned off the client just stays idle and does nothing, so it really is connecting to 10.0.0.10. Why, then, do I get this error?
I read that it might be an issue with /etc/hosts. But there are three /etc/hosts files involved: one on my local machine, one on the machine at 10.0.0.10, and one inside the container running HBase.
What can I do to make it work?
Thanks in advance
I'm trying to connect JVisualVM, running on my local machine, to a remote machine which is running a WildFly server (version 8.1.0, to be specific).
I didn't configure the WildFly server myself, and I don't know who did, but I do know that I can log in as an administrative user from my local machine by pointing my browser at:
https://[ip address of the remote machine]:9443/console
Note that it's https, not ordinary http, and that the port has been set to 9443 (I think the default is 8080 or 9990 or something; I saw a lot of port numbers online). I have been explicitly told that plain http is disabled for this WildFly server.
I can SSH into the remote machine. I can navigate to the bin directory for WildFly and run jboss-cli.sh. I have to connect on port 9999 (I think the default for that is 9990?).
I copied the jboss-client.jar (under bin/client) to my local machine and ran JVisualVM from the command line like this:
.\jvisualvm.exe -cp:a C:\[path to]\jboss-client.jar
It launches fine. File > Add Remote Host, then I entered the IP and clicked OK. I right-clicked on it under Remote in the tree and picked Add JMX Connection. I entered:
service:jmx:http-remoting-jmx://[ip]:9999
I checked the box to use the security credentials and entered the username and password, checked the box to save the credentials, and left "Do not require SSL Connection" unchecked. I hit OK, and it immediately spat out the message:
Cannot connect to admin@service:jmx:http-remoting-jmx://[ip]:9999 using service:jmx:http-remoting-jmx://[ip]:9999
I also tried ports 9443, 9990, and 8080 instead; none of those worked. I tried https instead of http in the protocol name; that also didn't work.
What am I missing? How is it that I can access the console and connect with jboss-cli.sh, but I can't use JVisualVM? Is there some log I can use somewhere to see what's wrong? Maybe someone can point out a configuration I've missed somewhere?
Not sure if it's important or not, but my local machine is running Windows 10 with JDK8 installed. The WildFly server is using Java 6 on CentOS 6.3.
You need to add the jboss-client.jar (or jboss-cli-client.jar) to the class path for JVisualVM. The library can be found in the bin/client directory of the WildFly install.
I used the following command to add the library to the class path.
jvisualvm --cp:a ~/servers/wildfly-10.0.0.Final/bin/client/jboss-client.jar
Then I used service:jmx:remote+http://[ip]:[port] and was able to connect.
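For a stock WildFly 10 install, the HTTP management interface listens on port 9990, so the connection string would look like this (the IP is a placeholder):
service:jmx:remote+http://192.168.1.100:9990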
I don't know if someone else is also (still) having the same issue (WildFly 10 on a remote machine where the management console is available on 9443 with HTTPS). The following worked for me.
For SSH connections:
Start jvisualvm with jboss-client.jar on the class path:
jvisualvm --cp:a $JBOSS_HOME/bin/client/jboss-client.jar
Using the following connection string:
service:jmx:remote+https://remote-server:9443
NOTE: I used remote+https here.
Provide username and password
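If the management port is only reachable through the SSH session, a local port forward should work as well (a sketch; user and host are placeholders):
ssh -L 9443:localhost:9443 user@remote-server
Then connect with service:jmx:remote+https://localhost:9443.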
Hope this helps.
You missed running the jstatd command on the remote host.
This little program is an RMI server that makes connections from a client to the remote host possible; even though you are using a JMX connection, it uses the jmxrmi protocol underneath.
So first, on the remote host, create a file named security.policy with this content:
grant codebase "file:${java.home}/../lib/tools.jar" {
permission java.security.AllPermission;
};
Of course, on Linux you must put an explicit path in the file: clause; after creating this file, put it in the bin directory of the JDK home.
Then run this command on the remote host:
$JAVA_HOME/bin/jstatd -J-Djava.security.policy=/path/to/security.policy -J-Djava.rmi.server.hostname=<remote-ip> -J-Djava.net.preferIPv4Stack=true
Then, with the correct settings, you can connect to the server.
Include jboss-cli-client.jar and jboss-client.jar under \lib\visualvm\platform\lib and restart JVisualVM to pick up the new jars.
I have a new DB2 server (v10.5.0.3), and I can connect to the database locally just fine.
When trying to connect from a remote server using JDBC I get the "Connection refused. ERRORCODE=-4499, SQLSTATE=08001" error. Based on information found here https://www-304.ibm.com/support/docview.wss?uid=swg21403644 I have confirmed that:
[db2inst1@db2 ~]$ db2set -all
[i] DB2COMM=TCPIP
[i] DB2AUTOSTART=YES
[g] DB2SYSTEM=db2.xxxx.com
[g] DB2INSTDEF=db2inst1
[g] DB2ADMINSERVER=xxxxxx
and
[db2inst1@db2 ~]$ db2 get database manager configuration | grep -i svce
TCP/IP Service name (SVCENAME) = 50001
SSL service name (SSL_SVCENAME) =
with these JDBC connection values
driver=com.ibm.db2.jcc.DB2Driver
url=jdbc:db2://db2.xxxxx.com:50001/TESTGEN
username=XXXXXXXX
password=XXXXXXX
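For reference, the app does nothing unusual; a minimal equivalent of the failing connect (URL and credentials as above, class name hypothetical):
import java.sql.Connection;
import java.sql.DriverManager;

public class Db2ConnectTest {
    public static void main(String[] args) throws Exception {
        Class.forName("com.ibm.db2.jcc.DB2Driver"); // driver from the properties above
        String url = "jdbc:db2://db2.xxxxx.com:50001/TESTGEN";
        try (Connection conn = DriverManager.getConnection(url, "XXXXXXXX", "XXXXXXX")) {
            System.out.println("Connected to " + conn.getMetaData().getDatabaseProductVersion());
        }
    }
}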
I have verified that the firewalls on both servers have ports 50000 and 50001 open. I've run out of ideas; any help is greatly appreciated.
I had the same trouble... It was caused by IPv6.
The connection URL pointed to localhost, which resolved to ::1 (the IPv6 address of localhost), and the DB2 server wasn't listening on IPv6.
I resolved it by modifying the c:\windows\system32\drivers\etc\hosts file: I uncommented the line 127.0.0.1 localhost to force IPv4 name resolution of localhost, and it works.
I hope that helps.
I had the same problem when I couldn't connect to my remote database with the Data Studio client or with the DB2 CLP console. Make sure that you can ping your server; that you have checked dbm cfg and know the SVCENAME and TCP/IP port number; and that the ..System32\drivers\etc\services file contains a line of the form "svcename tcpip_port_number/tcp". If your db2diag.log contains the message ""TCPIP" protocol support was successfully started.", it isn't a network problem on the DB2 side. I opened the ports on my server machine, the DB2 server TCP/IP port (SVCENAME) and the DB2 DAS TCP/IP port, through the firewall settings. I found help in this reference: https://learn.microsoft.com/en-us/sql/reporting-services/report-server/configure-a-firewall-for-report-server-access?view=sql-server-ver16
Be careful and consult with your system admin about security.
It was indeed a network error. I'm not fully sure which fix was the most important, but I made sure telnet was enabled and whitelisted the DB2 process in the RHEL firewall configuration.
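On RHEL 6 the whitelisting boils down to something like this (a sketch; adjust the port to your SVCENAME):
iptables -I INPUT -p tcp --dport 50001 -j ACCEPT
service iptables save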
When I print the IP address of the system using InetAddress.getLocalHost(), I get user-VAIO/192.168.1.3. Now, when I connect to Derby using jdbc:derby://localhost:1527/mydatabase;create=true, it connects without any errors, but when I connect using jdbc:derby://192.168.1.3:1527/mydatabase;create=true, it fails with the following exception:
java.net.ConnectException : Error connecting to server 192.168.1.3 on port 1527 with message Connection refused: connect.
Any help will be appreciated.
When you start your Derby Network Server, you provide a value for the '-h' argument. You might not realize you are doing this, if you are using the packaged StartNetworkServer.bat file, but look inside the batch file, and you will see the -h argument there.
The batch file comes provided with the syntax '-h default' when you download Derby from the Apache website.
But you can change that to, say, '-h 192.168.1.3', and then your Derby Network Server will accept connections that specify 'jdbc:derby://192.168.1.3/my/database'.
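For example (a sketch; derbyrun.jar ships in the lib directory of the Derby distribution), you can start the server bound to that address directly:
java -jar %DERBY_HOME%\lib\derbyrun.jar server start -h 192.168.1.3 -p 1527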
Note that if you want to accept such connections from other computers on the network, you will also have to adjust your Windows Firewall rules, as by default it will prevent such connections.
There is a Linux VM with Hadoop installed and running.
And there is a Java app running in Eclipse that retrieves data from HDFS.
If I copy file(s) to or from HDFS inside the VM, everything works fine.
But when I run the app from my physical Windows machine, I get the following exception:
WARN hdfs.DFSClient: Failed to connect to /127.0.0.1:50010 for block, add to
deadNodes and continue. java.net.ConnectException: Connection refused: no further
information. Could not obtain BP-*** from any node: java.io.IOException:
No live nodes contain current block. Will get new block locations from namenode and retry
I can only retrieve the list of files from HDFS.
It seems that when retrieving data from a data node, the client connects to my Windows localhost,
because when I made a tunnel in PuTTY from my localhost to the VM, everything worked.
Here is my Java code:
Configuration config = new Configuration();
config.set("fs.defaultFS", "hdfs://ip:port/");
config.set("mapred.job.tracker", "hdfs://ip:port");
FileSystem dfs = FileSystem.get(new URI("hdfs://ip:port/"), config, "user");
dfs.copyToLocalFile(false, new Path("/tmp/sample.txt"), new Path("D://sample.txt"), true);
How can it be fixed?
Thanks.
P.S. This error occurs when I am using the QuickStart VM from Cloudera.
Your DataNode is advertising its address to the NameNode as 127.0.0.1. You need to re-configure your Pseudo distributed cluster such that the nodes use externally available addresses (hostnames or IP addresses) when opening socket services.
I imagine if you run a netstat -atn on your VM, you'll see the Hadoop ports bound to 127.0.0.1 rather than 0.0.0.0 - this means they will only accept internal connections.
You need to look at your VM's /etc/hosts configuration file and ensure the machine's hostname doesn't have an entry resolving it to 127.0.0.1.
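For example (hostname and addresses here are illustrative), an /etc/hosts entry like:
127.0.0.1 quickstart.cloudera
makes the DataNode advertise the loopback address; it should instead map the hostname to the VM's externally reachable address:
192.168.56.101 quickstart.cloudera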
Whenever you start a VM, it gets its own IP, something like 192.x.x.x or 172.x.x.x.
Using 127.0.0.1 for HDFS won't help when you are executing from your Windows box, because that address is mapped to the local machine. So if you use 127.0.0.1 from your Windows machine, it will look for HDFS on the Windows machine itself. This is why your connection is failing.
Find the IP associated with your VM. Here is a link explaining how to get it if you are using Hyper-V: http://windowsitpro.com/hyper-v/quickly-view-all-ip-addresses-hyper-v-vms
Once you have the VM's IP, use it in the application.
You need to change the IP. First, go to the Linux VM and find its IP address in a terminal.
The command to see the IP address in a Linux VM is:
ifconfig
Then, in your code, change the IP address to the one shown in your Linux VM.
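For example (illustrative address, and assuming the typical NameNode port 8020), if ifconfig reports inet addr:192.168.56.101, the config line becomes:
config.set("fs.defaultFS", "hdfs://192.168.56.101:8020/");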
I have written the following HBase client class for a remote server:
System.out.println("Hbase Demo Application ");
// CONFIGURATION
// ENSURE RUNNING
try {
HBaseConfiguration config = new HBaseConfiguration();
config.clear();
config.set("hbase.zookeeper.quorum", "192.168.15.20");
config.set("hbase.zookeeper.property.clientPort","2181");
config.set("hbase.master", "192.168.15.20:60000");
//HBaseConfiguration config = HBaseConfiguration.create();
//config.set("hbase.zookeeper.quorum", "localhost"); // Here we are running zookeeper locally
HBaseAdmin.checkHBaseAvailable(config);
System.out.println("HBase is running!");
// createTable(config);
//creating a new table
HTable table = new HTable(config, "mytable");
System.out.println("Table mytable obtained ");
addData(table);
} catch (MasterNotRunningException e) {
System.out.println("HBase is not running!");
System.exit(1);
} catch (Exception ce) {
    ce.printStackTrace();
}
It throws this exception:
Oct 17, 2011 1:43:54 PM org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation getMaster
INFO: getMaster attempt 0 of 1 failed; no more retrying.
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:404)
at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupIOstreams(HBaseClient.java:328)
at org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:883)
at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:750)
at org.apache.hadoop.hbase.ipc.HBaseRPC$Invoker.invoke(HBaseRPC.java:257)
at $Proxy4.getProtocolVersion(Unknown Source)
at org.apache.hadoop.hbase.ipc.HBaseRPC.getProxy(HBaseRPC.java:419)
at org.apache.hadoop.hbase.ipc.HBaseRPC.getProxy(HBaseRPC.java:393)
at org.apache.hadoop.hbase.ipc.HBaseRPC.getProxy(HBaseRPC.java:444)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getMaster(HConnectionManager.java:359)
at org.apache.hadoop.hbase.client.HBaseAdmin.<init>(HBaseAdmin.java:89)
at org.apache.hadoop.hbase.client.HBaseAdmin.checkHBaseAvailable(HBaseAdmin.java:1215)
at com.ifkaar.hbase.HBaseDemo.main(HBaseDemo.java:31)
HBase is not running!
Can you tell me why it throws this exception, what is wrong with the code, and how to solve it?
This problem occurs because of your HBase server's hosts file.
You just need to edit your HBase server's /etc/hosts file.
Remove the standalone localhost entry from that file and instead add localhost as an alias on the line with the HBase server's IP.
For example, suppose your HBase server's /etc/hosts file looks like this:
127.0.0.1 localhost
192.166.66.66 xyz.hbase.com hbase
You have to change it like this, removing the standalone localhost line:
# 127.0.0.1 localhost # line commented out
192.166.66.66 xyz.hbase.com hbase localhost # note: localhost added here
This is because when the remote machine asks the HBase server machine where HMaster is running, the server answers that it is running on localhost.
So if the entry is 127.0.0.1, the HBase server returns this address and the remote machine starts looking for HMaster on its own machine (locally).
When we change that entry to the HBase server's IP, everything works fine :)
I agree; HBase is very sensitive to /etc/hosts configuration. I had to set the ZooKeeper binding properties in hbase-site.xml correctly in order for the above-mentioned Java code to work. For example, I had to set them as follows:
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>www.remoterg12.net</value> <!-- this is the externally accessible domain -->
</property>
<property>
  <name>hbase.zookeeper.property.clientPort</name>
  <value>2181</value> <!-- everything needs to be externally accessible -->
</property>
<property>
  <name>hbase.master.info.port</name> <!-- http://www.remoterg12.net:60010/ -->
  <value>60010</value>
</property>
<property>
  <name>hbase.master.info.bindAddress</name>
  <value>www.remoterg12.net</value> <!-- use this to access the GUI console -->
</property>
The remote GUI will give you a clear picture of the binding domains. For example, the [HBase Master] property in the GUI web console should be something like www.remoterg12.net:60010 (it should NOT be localhost:60010)... And yes, I did have to get the /etc/hosts just right, as I didn't want to mess up the existing Apache configs :-)
The same problem can be solved by editing the conf/regionservers file in the HBase directory to add the remote HBase server to it. Then there is no need to change the /etc/hosts file.
After editing, conf/regionservers will look like:
localhost
ip address of the remote hbase server
e.g.
localhost
10.132.258.366
Exact same problem here with HBase 1.1.3.
Two virtual machines (Ubuntu) on the same network. The logs show that the client can reach ZooKeeper but not the HBase server.
TL;DR: remove the following line in /etc/hosts on the server (server_hostname):
127.0.1.1 server_hostname server_hostname
And add this one, where 192.x.y.z is the IP of your server on the (local) network:
192.x.y.z server_hostname
I tried a lot of combinations on the client and server sides. In standalone mode I don't think there is a better approach.
I'm not really proud of that. It is a shame to have to mess with the network configuration, and to not even ship an HBase shell client able to connect remotely to a server (welcome to the Java world of illusions...)
On the server side, leave the file conf/hbase-site.xml empty. You don't need to put a ZooKeeper configuration in here; the defaults are fine.
Same for conf/regionservers: leave it with the default entry (localhost), because I don't think standalone mode really cares (I tried putting server_hostname in it, and of course that does not work).
On the client side, the machine must know the server by hostname if you want to resolve with it, so again add an entry for the server in the client's /etc/hosts file.
As a bonus, here are my sbt configuration and some complete working client code, since the HBase team seems to have spent the documentation budget in Vegas for the last 4 years (again, welcome to the «business ready» world of Java/Scala).
build.sbt:
libraryDependencies ++= Seq(
...
"org.apache.hadoop" % "hadoop-core" % "1.2.1",
"org.apache.hbase" % "hbase" % "1.1.2",
"org.apache.hbase" % "hbase-client" % "1.1.2",
)
some_client_code.scala:
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.{HBaseAdmin, HTable, Put}
import org.apache.hadoop.hbase.util.Bytes

object HBaseClientDemo extends App {
  val hbaseConf = HBaseConfiguration.create()
  hbaseConf.set("hbase.zookeeper.quorum", "server_hostname")
  HBaseAdmin.checkHBaseAvailable(hbaseConf) // throws if the master is unreachable
  val table = new HTable(hbaseConf, "my_hbase_table")
  val put = new Put(Bytes.toBytes("row_key"))
  put.add(Bytes.toBytes("cf"), Bytes.toBytes("colId1"), Bytes.toBytes("foo"))
  table.put(put) // actually write the row
  table.close()
}
I know it is too late to answer this question, but I want to share how I resolved a similar issue.
I had the same issue; I tried to set the ZooKeeper quorum from the Java program and also via the CLI, but neither worked.
I am using CDH 5.7.7 with HBase version 1.1.0.
Finally, I had to export a few configs onto the Hadoop classpath to fix the issue. Here is the config that I exported:
export HADOOP_CLASSPATH=/etc/hadoop/conf:/usr/share/cmf/lib/cdh5/hbase-protocol-0.98.1-cdh5.5.0.jar:/etc/hbase/conf:/driven/conf
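Alternatively (a sketch, assuming the hbase wrapper script is on your PATH), you can let HBase compute the client classpath for you:
export HADOOP_CLASSPATH=$(hbase classpath)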
Hope this helps.