HBase and Hadoop run in CDH standalone mode in Docker. ZooKeeper, HBase, and Phoenix can all be operated from their shells, and operating ZooKeeper from Java works too. But Java cannot operate HBase or Phoenix, even though the code is OK.
Can anyone help me? Thank you!
public class HbaseTest {
    public static Configuration conf;

    static {
        conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "master:2181");
    }

    public static void main(String[] args) throws Exception {
        Connection connection = ConnectionFactory.createConnection(conf);
        HBaseAdmin admin = new HBaseAdmin(conf);
        boolean exists = admin.tableExists("stu");
        System.out.println(exists);
        admin.close();
    }
}
The log after running the code:
java.net.SocketTimeoutException: callTimeout=60000, callDuration=64255: Connection refused: no further information row 'stu,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=master,60020,1620539340415, seqNum=0
at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.throwEnrichedException(RpcRetryingCallerWithReadReplicas.java:276)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:210)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:212)
at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:314)
at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:289)
at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:164)
at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:159)
at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:796)
at org.apache.hadoop.hbase.MetaTableAccessor.fullScan(MetaTableAccessor.java:602)
at org.apache.hadoop.hbase.MetaTableAccessor.tableExists(MetaTableAccessor.java:366)
at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:408)
at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:418)
at com.max.hbase.HbaseTest.main(HbaseTest.java:24)
Caused by: java.net.SocketTimeoutException: callTimeout=60000, callDuration=64255: Connection refused: no further information row 'stu,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=master,60020,1620539340415, seqNum=0
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:171)
at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:65)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.ConnectException: Connection refused: no further information
You should try to separate the quorum's address and port:
conf.set("hbase.zookeeper.quorum", "master");
conf.set("hbase.zookeeper.property.clientPort", "2181");
Also, are you sure that your ZooKeeper is on the master node? The first property should not be the master address but the ZooKeeper quorum (comma-separated if there is more than one address). I suppose you specified "master" because you are co-locating the master and the single ZooKeeper node?
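Separately, note that the trace above fails with "Connection refused" against master:60020 (the region server), not against ZooKeeper. In a Docker standalone setup that often means the port is not published from the container, or "master" does not resolve from the client. As a quick check, here is a minimal, hypothetical sketch (hostname and ports taken from your trace) that probes both ports from the client machine:

import java.net.InetSocketAddress;
import java.net.Socket;

public class PortProbe {
    public static void main(String[] args) {
        // Ports from the question: 2181 (ZooKeeper) and 60020 (region server).
        for (int port : new int[] {2181, 60020}) {
            try (Socket s = new Socket()) {
                // 3-second timeout so a blocked port fails fast.
                s.connect(new InetSocketAddress("master", port), 3000);
                System.out.println("master:" + port + " is reachable");
            } catch (Exception e) {
                System.out.println("master:" + port + " is NOT reachable: " + e);
            }
        }
    }
}

If 2181 is reachable but 60020 is not, publish the region server port from the container and make sure "master" resolves to the Docker host on the client (e.g. via its hosts file).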
Related
I am pulling data from Oracle using a Dataflow job in Java. While running my Dataflow job with a Direct runner, I am facing the error below:
jsonPayload: {
exception: "java.lang.RuntimeException: org.apache.beam.sdk.util.UserCodeException: java.sql.SQLException: Cannot create PoolableConnectionFactory (IO Error: The Network Adapter could not establish the connection)
at com.google.cloud.dataflow.worker.IntrinsicMapTaskExecutorFactory$1.typedApply(IntrinsicMapTaskExecutorFactory.java:192)
at com.google.cloud.dataflow.worker.IntrinsicMapTaskExecutorFactory$1.typedApply(IntrinsicMapTaskExecutorFactory.java:163)
at com.google.cloud.dataflow.worker.graph.Networks$TypeSafeNodeFunction.apply(Networks.java:63)
at com.google.cloud.dataflow.worker.graph.Networks$TypeSafeNodeFunction.apply(Networks.java:50)
at com.google.cloud.dataflow.worker.graph.Networks.replaceDirectedNetworkNodes(Networks.java:87)
at com.google.cloud.dataflow.worker.IntrinsicMapTaskExecutorFactory.create(IntrinsicMapTaskExecutorFactory.java:123)
at com.google.cloud.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:334)
at com.google.cloud.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:288)
at com.google.cloud.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:134)
at com.google.cloud.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:114)
at com.google.cloud.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:101)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.beam.sdk.util.UserCodeException: java.sql.SQLException: Cannot create PoolableConnectionFactory (IO Error: The Network Adapter could not establish the connection)
at org.apache.beam.sdk.util.UserCodeException.wrap(UserCodeException.java:36)
at org.apache.beam.sdk.io.jdbc.JdbcIO$ReadFn$DoFnInvoker.invokeSetup(Unknown Source)
at com.google.cloud.dataflow.worker.DoFnInstanceManagers$ConcurrentQueueInstanceManager.deserializeCopy(DoFnInstanceManagers.java:63)
at com.google.cloud.dataflow.worker.DoFnInstanceManagers$ConcurrentQueueInstanceManager.peek(DoFnInstanceManagers.java:45)
at com.google.cloud.dataflow.worker.UserParDoFnFactory.create(UserParDoFnFactory.java:94)
at com.google.cloud.dataflow.worker.DefaultParDoFnFactory.create(DefaultParDoFnFactory.java:74)
at com.google.cloud.dataflow.worker.IntrinsicMapTaskExecutorFactory.createParDoOperation(IntrinsicMapTaskExecutorFactory.java:262)
at com.google.cloud.dataflow.worker.IntrinsicMapTaskExecutorFactory.access$000(IntrinsicMapTaskExecutorFactory.java:84)
at com.google.cloud.dataflow.worker.IntrinsicMapTaskExecutorFactory$1.typedApply(IntrinsicMapTaskExecutorFactory.java:181)
... 14 more
For more clarity, please refer to my code below:
PipelineOptions options = PipelineOptionsFactory.fromArgs(args).withValidation().create();
Pipeline pipeline = Pipeline.create(options);
pipeline.apply(JdbcIO.<KV<String, String>>read()
        .withDataSourceConfiguration(JdbcIO.DataSourceConfiguration
                .create("oracle.jdbc.driver.OracleDriver", "jdbc:oracle:thin:@//:1521/orcl")
                .withUsername("user_name")
                .withPassword("pass_word"))
        .withQuery("select SUPPLIER_ID,SUPPLIER_NAME from suppliers")
        .withRowMapper(new JdbcIO.RowMapper<KV<String, String>>() {
            @Override
            public KV<String, String> mapRow(ResultSet resultSet) throws Exception {
                System.out.println("=====================inside resultset====================");
                KV<String, String> kv = KV.of(resultSet.getString("label"), resultSet.getString("name"));
                return kv;
            }
        })
        .withCoder(KvCoder.of(StringUtf8Coder.of(), StringUtf8Coder.of())))
    .apply(GroupByKey.<String, String>create());
Also, I have Oracle 11g, so I am using ojdbc6.jar here.
Please help me.
I'm trying to connect to HBase using the HBase client API in a kerberized Cloudera cluster.
Sample code:
Configuration hbaseConf = HBaseConfiguration.create();
/*hbaseConf.set("hbase.master", "somenode.net:2181");
hbaseConf.set("hbase.client.scanner.timeout.period", "1200000");
hbaseConf.set("hbase.zookeeper.quorum",
"somenode.net,somenode2.net");
hbaseConf.set("zookeeper.znode.parent", "/hbase");*/
hbaseConf.setInt("timeout", 120000);
hbaseConf.set(TableInputFormat.INPUT_TABLE, tableName);
//hbaseConf.addResource("src/main/resources/hbase-site.xml");
UserGroupInformation.setConfiguration(hbaseConf);
UserGroupInformation.loginUserFromKeytab("principal", "keytab");
JavaPairRDD<ImmutableBytesWritable, Result> javaPairRdd = ctx
.newAPIHadoopRDD(hbaseConf, TableInputFormat.class,
ImmutableBytesWritable.class, Result.class);
I tried putting hbase-site.xml in the Maven project resources, and I also passed it as a jar file in the spark-submit command using --jars, but nothing works.
Error log:
Caused by: java.net.SocketTimeoutException: callTimeout=60000, callDuration=68545: row '¨namespace:test,,00000000000000' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=hostname.net,60020,1511970022474, seqNum=0
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:159)
at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:65)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Broken pipe
at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
at sun.nio.ch.IOUtil.write(IOUtil.java:65)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:487)
at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63)
at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159)
at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117)
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
at java.io.DataOutputStream.flush(DataOutputStream.java:123)
at org.apache.hadoop.hbase.ipc.IPCUtil.write(IPCUtil.java:278)
at org.apache.hadoop.hbase.ipc.IPCUtil.write(IPCUtil.java:266)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:920)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:873)
at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1242)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:34094)
at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:394)
at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:203)
at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:64)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:360)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:334)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
... 4 more
18/02/26 16:25:42 INFO spark.SparkContext: Invoking stop() from shutdown hook
The problem you are facing is because your environment is not properly set up.
I have answered it in my own question here.
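For reference, a minimal sketch of the pattern that usually works, assuming the cluster's hbase-site.xml (and core-site.xml) are on the classpath so the client picks up the quorum and Kerberos settings itself; the principal and keytab path below are placeholders:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.mapreduce.TableInputFormat;
import org.apache.hadoop.security.UserGroupInformation;

public class KerberizedHBaseConf {
    public static Configuration buildConf(String tableName) throws Exception {
        // HBaseConfiguration.create() reads hbase-site.xml / core-site.xml from
        // the classpath, including the quorum and Kerberos settings.
        Configuration conf = HBaseConfiguration.create();
        conf.set(TableInputFormat.INPUT_TABLE, tableName);
        // Log in from the keytab before creating the RDD; both values are placeholders.
        UserGroupInformation.setConfiguration(conf);
        UserGroupInformation.loginUserFromKeytab("user@EXAMPLE.COM", "/path/to/user.keytab");
        return conf;
    }
}

The newAPIHadoopRDD call from the question stays the same; what often matters is that the same hbase-site.xml is also visible to the executors (e.g. shipped with --files), since connections are opened there as well.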
I have Oracle 11g running on 192.168.1.217 and I am trying to connect to it using the JDBC driver from Java, and it gives me the following error:
IO Error: The Network Adapter could not establish the connection
The library I am using is ojdbc6.jar.
Here is my code:
public void makeOracleConnection() {
    try {
        Class.forName("oracle.jdbc.OracleDriver");
        oraCon = DriverManager.getConnection("jdbc:oracle:thin:@192.168.1.217:1521:orcl", "hr", "hr");
        oraStmt = oraCon.createStatement();
        oraRsStmt = oraCon.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY);
    } catch (Exception e) {
        System.out.println("Error while making connection with Database : " + e.getMessage());
    }
}
I have also tried to ping 192.168.1.217, and the ping is successful.
The TNS listener is also running on that machine.
Please help.
Please find the printed stack trace here:
run:
java.sql.SQLRecoverableException: IO Error: The Network Adapter could not establish the connection
at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:743)
at oracle.jdbc.driver.PhysicalConnection.connect(PhysicalConnection.java:657)
at oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:32)
at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:560)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:247)
at test.oracle.makeOracleConnection(oracle.java:30)
at test.oracle.<init>(oracle.java:21)
at test.oracle.main(oracle.java:69)
Caused by: oracle.net.ns.NetException: The Network Adapter could not establish the connection
at oracle.net.nt.ConnStrategy.execute(ConnStrategy.java:470)
at oracle.net.resolver.AddrResolution.resolveAndExecute(AddrResolution.java:506)
at oracle.net.ns.NSProtocol.establishConnection(NSProtocol.java:595)
at oracle.net.ns.NSProtocol.connect(NSProtocol.java:230)
at oracle.jdbc.driver.T4CConnection.connect(T4CConnection.java:1452)
at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:496)
... 8 more
Caused by: java.net.ConnectException: Connection refused: connect
at java.net.DualStackPlainSocketImpl.waitForConnect(Native Method)
at java.net.DualStackPlainSocketImpl.socketConnect(DualStackPlainSocketImpl.java:85)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:172)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at oracle.net.nt.TcpNTAdapter.connect(TcpNTAdapter.java:163)
at oracle.net.nt.ConnOption.connect(ConnOption.java:159)
at oracle.net.nt.ConnStrategy.execute(ConnStrategy.java:428)
... 13 more
BUILD SUCCESSFUL (total time: 1 second)
You get the error
java.net.ConnectException: Connection refused: connect
This means that there is nothing listening on the machine and port you are trying to connect to. Your Java code looks correct, so I would continue by investigating whether Oracle is actually listening on port 1521 on 192.168.1.217.
If you run netstat -an on the server, you should find a line that looks like
TCP [::]:1521 [::]:0 LISTENING
if something really is listening on that port. If you do not find that line, check your Oracle configuration.
Try to connect with some other tool, e.g. sqlplus, to verify that the issue is not with Oracle. If you cannot connect with sqlplus or SQL Developer, make sure that your Oracle is configured to allow remote connections, and also listens on the given addresses/ports.
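For example, with sqlplus and EZConnect syntax (assuming orcl is also the service name, which can differ from the SID):

sqlplus hr/hr@//192.168.1.217:1521/orcl

If that fails with the same network error, the problem is in the listener or network configuration rather than in your Java code.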
public void makeOracleConnection() {
    try {
        Class.forName("oracle.jdbc.OracleDriver");
        Connection oraCon = DriverManager.getConnection("jdbc:oracle:thin:@192.168.1.217:1521:orcl", "hr", "hr");
        Statement oraStmt = oraCon.createStatement();
        //oraRsStmt = oraCon.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY);
        ResultSet rs = oraStmt.executeQuery("select 'hello' as result from dual");
        while (rs.next()) {
            System.out.println(rs.getString("result"));
        }
    } catch (Exception e) {
        System.out.println("Error while making connection with Database : " + e.getMessage());
    }
}
Try this out; hope it'll help. I also don't like your connection string. Is it right? I think it should be something like this:
DriverManager.getConnection("jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(LOAD_BALANCE=YES)(ADDRESS=(PROTOCOL=tcp)(HOST=ip_address)(PORT=port)))(CONNECT_DATA=(SERVICE_NAME=orcl)))", "username", "password")
I have tried a snippet from Basho's docs, which is given below:
public class TasteOfRiak {
    public static void main(String[] args) throws UnknownHostException, ExecutionException, InterruptedException {
        RiakClient client = RiakClient.newClient(port, "IP");
        Location location = new Location(new Namespace("Bucket"), "bucketType");
        FetchValue fv = new FetchValue.Builder(location).build();
        FetchValue.Response response = client.execute(fv);
        String value = response.getValue(String.class);
        System.out.println(value);
        client.shutdown();
    }
}
But this throws an exception:
[main] ERROR com.basho.riak.client.core.RiakNode - Connection attempt failed: java.net.ConnectException:
Connection timed out: no further information:
Exception in thread "main" [pool-1-thread-2] INFO com.basho.riak.client.core.DefaultNodeManager - NodeManager moved node to unhealthy list; 3.34.211.202:8098
java.util.concurrent.ExecutionException: com.basho.riak.client.core.NoNodesAvailableException
at com.basho.riak.client.core.FutureOperation.get(FutureOperation.java:260)
at com.basho.riak.client.api.commands.CoreFutureAdapter.get(CoreFutureAdapter.java:52)
at com.basho.riak.client.api.RiakCommand.execute(RiakCommand.java:89)
at com.basho.riak.client.api.RiakClient.execute(RiakClient.java:293)
at TasteOfRiak.main(TasteOfRiak.java:20)
Caused by: com.basho.riak.client.core.NoNodesAvailableException
at com.basho.riak.client.core.DefaultNodeManager.executeOnNode(DefaultNodeManager.java:95)
at com.basho.riak.client.core.RiakCluster.execute(RiakCluster.java:197)
at com.basho.riak.client.core.RiakCluster.retryOperation(RiakCluster.java:328)
at com.basho.riak.client.core.RiakCluster.access$800(RiakCluster.java:44)
at com.basho.riak.client.core.RiakCluster$RetryTask.run(RiakCluster.java:340)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Check that the Riak node is running (as Craig said); you can see the node name in etc/vm.args.
Make sure the Erlang node cookie is correct. The Erlang VM checks whether the cookie from another VM matches, and if it doesn't, it refuses the ping or connection request. The local cookie lives in ~/.erlang.cookie; I suggest setting the VM cookie to your local cookie before connecting.
Another way:
$ bin/riak console
%% set cookie
erlang:set_cookie(your_riak_node_name, your_cookie).
The Riak Java client doesn't support HTTP. Make sure you are connecting using the Protocol Buffers port; the default is 8087.
Also make sure your Riak server is listening on the IP address you specify. You can edit the listeners in etc/riak/riak.conf.
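A minimal sketch of connecting on the Protocol Buffers port, reusing the node address from the log above (adjust to whatever riak.conf actually listens on):

import com.basho.riak.client.api.RiakClient;

public class RiakPbConnect {
    public static void main(String[] args) throws Exception {
        // 8087 is the default Protocol Buffers port; 8098 (seen in the log) is the HTTP port,
        // which this client cannot use.
        RiakClient client = RiakClient.newClient(8087, "3.34.211.202");
        // ... issue FetchValue etc. as in the question ...
        client.shutdown();
    }
}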
I have a Hadoop cluster on an internal network (IP range 192.168.0.0/24), and I want to connect to HBase using the Java client library (org.apache.hadoop.hbase.client) from a development computer running Windows 7 on a different, external network (IP 203.252.x.x). But I couldn't connect to HBase.
I have a few questions:
Is my code wrong?
Is it possible using the Java library (org.apache.hadoop.hbase.client), or should I use the Thrift protocol? (I don't want to use Thrift.)
Do you have any ideas or comments?
Thank you.
This is my code for connecting to HBase:
public class TestBase {
    public static void main(String[] args) throws MasterNotRunningException, ZooKeeperConnectionException, ServiceException, IOException {
        Configuration configuration = HBaseConfiguration.create();
        configuration.set("hbase.master", "203.252.x.x"); // master info
        configuration.set("hbase.master.port", "6000");
        configuration.set("hbase.zookeeper.quorum", "203.252.x.x");
        configuration.set("hbase.zookeeper.property.clientPort", "2181");
        configuration.set("zookeeper.znode.parent", "/hbase-unsecure");

        HBaseAdmin.checkHBaseAvailable(configuration);

        HTable table = new HTable(configuration, "weatherData");
        Scan scan = new Scan();
        scan.setTimeRange(1L, 1435633313526L);
        ResultScanner scanner = table.getScanner(scan);
        for (Result rr = scanner.next(); rr != null; rr = scanner.next()) {
            System.out.println(Bytes.toString(rr.getRow())
                    + " => "
                    + Bytes.toString(rr.getValue(Bytes.toBytes("temp"), Bytes.toBytes("max"))));
        }
        scanner.close();
        table.close();
    }
}
And this is the error output in Eclipse:
log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Exception in thread "main" org.apache.hadoop.hbase.MasterNotRunningException: com.google.protobuf.ServiceException: java.net.UnknownHostException: unknown host: datanode2
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1661)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1687)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1904)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isMasterRunning(ConnectionManager.java:932)
at org.apache.hadoop.hbase.client.HBaseAdmin.checkHBaseAvailable(HBaseAdmin.java:2409)
at TestBase.main(TestBase.java:28)
Caused by: com.google.protobuf.ServiceException: java.net.UnknownHostException: unknown host: datanode2
at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1739)
at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1777)
at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:42561)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.isMasterRunning(ConnectionManager.java:1698)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(ConnectionManager.java:1607)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1633)
... 5 more
Caused by: java.net.UnknownHostException: unknown host: datanode2
at org.apache.hadoop.hbase.ipc.RpcClient$Connection.<init>(RpcClient.java:501)
at org.apache.hadoop.hbase.ipc.RpcClient.createConnection(RpcClient.java:325)
at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1614)
at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1494)
at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1724)
... 10 more
There might be a problem with the HBase master's DNS name mapping to the IP address in hbase.master. Make sure that you either have a DNS server set up, or try something similar to what worked on my GNU/Linux machine, such as configuring /etc/hostname (to set the name of the HBase master node) and /etc/hosts on the machine that tries to connect to the master node.
Hopefully you can set this up on your Windows machine somehow.
Here is a helpful link for the GNU/Linux way:
http://sujee.net/2012/03/08/getting-dns-right-for-hadoop-hbase-clusters/#.XULnEZNKhTZ
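On Windows, the equivalent of /etc/hosts is C:\Windows\System32\drivers\etc\hosts. A hypothetical entry for the unresolvable host from the stack trace could look like this (the IP is a placeholder; use whatever address for datanode2 is actually reachable from your machine):

203.252.x.x    datanode2

Keep in mind the client must be able to both resolve datanode2 and route to it, which is the hard part when the cluster sits on a private 192.168.0.0/24 network.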
You are unable to reach the nodes of the cluster. Check the firewall and network settings, and make sure the required ports are open for connections.
This is the error in your stack trace:
Exception in thread "main" org.apache.hadoop.hbase.MasterNotRunningException: com.google.protobuf.ServiceException: java.net.UnknownHostException: unknown host: datanode2
Also, you don't need to specify HBase cluster properties in your code. Put hbase-site.xml on the classpath of your Java application and just instantiate the connection.
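A minimal sketch of that approach, assuming the cluster's hbase-site.xml is on the classpath (e.g. under src/main/resources) and using the table name from the question:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class ClasspathConfigConnect {
    public static void main(String[] args) throws Exception {
        // HBaseConfiguration.create() reads hbase-site.xml from the classpath,
        // so no quorum or master properties need to be hard-coded here.
        Configuration conf = HBaseConfiguration.create();
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Admin admin = connection.getAdmin()) {
            System.out.println(admin.tableExists(TableName.valueOf("weatherData")));
        }
    }
}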