HBase standalone quickstart failure is repeatable via a Maven project - Java

I need to launch an HBase process against a private Hadoop cluster and simply demonstrate minimum functionality; installing Hadoop is unnecessary for the demonstration. Following the Apache HBase Quick Start docs failed via the shell, and I have been able to duplicate the failures using a self-contained Maven project. /etc/hosts is not an issue.
I located a Maven project that appears to have been built to demonstrate this same functionality and updated the dependencies.
That project is here: hbase-demo
To reproduce my results requires only a shell with Maven installed and these commands:
git clone https://github.com/jnorthrup/hbase-demo
cd hbase-demo
mvn clean package exec:java
[INFO] Scanning for projects...
[INFO]
[...]
Tests run: 0, Failures: 0, Errors: 0, Skipped: 0
[INFO]
[INFO] --- maven-jar-plugin:2.3.2:jar (default-jar) @ demo ---
[INFO] Building jar: /vol/big240/snap/jim/work/hbase-demo/target/demo-0.0.1-SNAPSHOT.jar
[INFO]
[INFO] >>> exec-maven-plugin:1.2.1:java (default-cli) @ demo >>>
[INFO]
[INFO] <<< exec-maven-plugin:1.2.1:java (default-cli) @ demo <<<
[INFO]
[INFO] --- exec-maven-plugin:1.2.1:java (default-cli) @ demo ---
13/10/30 14:30:49 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.5-1392090, built on 09/30/2012 17:52 GMT
13/10/30 14:30:49 INFO zookeeper.ZooKeeper: Client environment:host.name=localhost
13/10/30 14:30:49 INFO zookeeper.ZooKeeper: Client environment:java.version=1.7.0_40
13/10/30 14:30:49 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation
13/10/30 14:30:49 INFO zookeeper.ZooKeeper: Client environment:java.home=/opt/jdk1.7.0_40/jre
13/10/30 14:30:49 INFO zookeeper.ZooKeeper: Client environment:java.class.path=/opt/maven/boot/plexus-classworlds-2.4.jar
13/10/30 14:30:49 INFO zookeeper.ZooKeeper: Client environment:java.library.path=:/opt/AMDAPP/lib/x86_64/:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
13/10/30 14:30:49 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
13/10/30 14:30:49 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
13/10/30 14:30:49 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
13/10/30 14:30:49 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
13/10/30 14:30:49 INFO zookeeper.ZooKeeper: Client environment:os.version=3.8.0-32-generic
13/10/30 14:30:49 INFO zookeeper.ZooKeeper: Client environment:user.name=jim
13/10/30 14:30:49 INFO zookeeper.ZooKeeper: Client environment:user.home=/home/jim
13/10/30 14:30:49 INFO zookeeper.ZooKeeper: Client environment:user.dir=/vol/big240/snap/jim/work/hbase-demo
13/10/30 14:30:49 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=180000 watcher=hconnection
13/10/30 14:30:49 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 13494@keyframe
13/10/30 14:30:49 INFO zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
13/10/30 14:30:49 WARN zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:735)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1068)
13/10/30 14:30:49 WARN zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper exception: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/master
13/10/30 14:30:49 INFO util.RetryCounter: Sleeping 2000ms before retry #1...
13/10/30 14:30:50 INFO zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
13/10/30 14:30:50 WARN zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
[repeats]
Any help reviving this demo code would be greatly appreciated. Twiddling hbase-site.xml makes no appreciable difference to the ZooKeeper failures in standalone mode, either via the shell or Maven.
Thanks

Change hbase-site.xml to point to your HBase server:
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://<hbase machine name>:8020/hbase</value>
</property>
<property>
  <name>hbase.zookeeper.quorum</name>
  <value><zookeeper machine name></value>
</property>
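Before touching hbase-site.xml, it's worth confirming that anything is listening on the ZooKeeper port at all: "Connection refused" on localhost:2181 usually means the standalone HBase master (which runs an embedded ZooKeeper) never started. A minimal port probe, as a sketch (the class and method names here are mine, not part of HBase):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class ZkPortCheck {

    // Returns true if something accepts a TCP connection at host:port
    // within timeoutMs; false on refusal or timeout.
    static boolean isReachable(String host, int port, int timeoutMs) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        boolean up = isReachable("localhost", 2181, 1000);
        System.out.println(up
                ? "something is listening on 2181"
                : "nothing listening on 2181 -- start HBase standalone first");
    }
}
```

If this reports that nothing is listening, no amount of client-side configuration will help; check the HBase master log for why startup failed.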

Related

How to disable the MongoDB Java driver's DefaultServerMonitor thread

I added Apache MetaModel to my project:
<dependency>
  <groupId>org.apache.metamodel</groupId>
  <artifactId>MetaModel-full</artifactId>
  <version>5.0.0</version>
</dependency>
mongo-java-driver came as a dependency.
[INFO] | +- org.apache.metamodel:MetaModel-mongodb-mongo3:jar:5.0.0:compile
[INFO] | | +- org.apache.metamodel:MetaModel-mongodb-common:jar:5.0.0:compile
[INFO] | | \- org.mongodb:mongo-java-driver:jar:3.4.3:compile
[INFO] | +- org.apache.metamodel:MetaModel-mongodb-mongo2:jar:5.0.0:compile
Then I see logs like:
2018-01-16 02:33:09.467 INFO 15417 --- [ restartedMain] org.mongodb.driver.cluster : Cluster created with settings {hosts=[localhost:27017], mode=SINGLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='30000 ms', maxWaitQueueSize=500}
2018-01-16 02:33:09.512 DEBUG 15417 --- [ restartedMain] org.mongodb.driver.cluster : Updating cluster description to {type=UNKNOWN, servers=[{address=localhost:27017, type=UNKNOWN, state=CONNECTING}]
2018-01-16 02:33:09.521 DEBUG 15417 --- [localhost:27017] org.mongodb.driver.connection : Closing connection connectionId{localValue:1}
2018-01-16 02:33:09.525 INFO 15417 --- [localhost:27017] org.mongodb.driver.cluster : Exception in monitor thread while connecting to server localhost:27017
com.mongodb.MongoSocketOpenException: Exception opening socket
at com.mongodb.connection.SocketStream.open(SocketStream.java:63)
at com.mongodb.connection.InternalStreamConnection.open(InternalStreamConnection.java:115)
at com.mongodb.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:113)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at com.mongodb.connection.SocketStreamHelper.initialize(SocketStreamHelper.java:57)
at com.mongodb.connection.SocketStream.open(SocketStream.java:58)
... 3 common frames omitted
2018-01-16 02:33:09.527 DEBUG 15417 --- [localhost:27017] org.mongodb.driver.cluster : Updating cluster description to {type=UNKNOWN, servers=[{address=localhost:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.ConnectException: Connection refused (Connection refused)}}]
When I dig into the source code of mongo-java-driver, I see that it starts a thread to monitor the default MongoDB server. I don't want to exclude the MongoDB driver, but how can I disable the com.mongodb.connection.DefaultServerMonitor thread?
After further searching I realized that it is a Spring Boot auto-configuration issue. My application is a Spring Boot app. Spring Boot auto-configuration finds the mongo-java-driver jar on the classpath and starts up the Mongo driver.
I added @EnableAutoConfiguration(exclude = {MongoAutoConfiguration.class}) and mongo-java-driver doesn't start anymore.
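If annotating the main class is inconvenient, the same exclusion can be expressed in application.properties via Spring Boot's standard spring.autoconfigure.exclude property (a configuration fragment shown as an alternative, not taken from the original post):

```
spring.autoconfigure.exclude=org.springframework.boot.autoconfigure.mongo.MongoAutoConfiguration
```

If Spring Data MongoDB is also on the classpath, org.springframework.boot.autoconfigure.data.mongo.MongoDataAutoConfiguration may need to be excluded as well.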

Hbase as sink for Mapreduce: Exception in thread "main" org.apache.hadoop.hbase.client.RetriesExhaustedException

$ hadoop jar target/projeto5-1.0-SNAPSHOT-fatjar.jar br.edu.ufam.anibrata.HBaseWordCount -input shakespeare.txt -output wcount -numReducers 1
17/07/15 20:23:29 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6022bb5c connecting to ZooKeeper ensemble=localhost:2181
17/07/15 20:23:29 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.5-cdh5.11.1--1, built on 06/01/2017 17:37 GMT
17/07/15 20:23:29 INFO zookeeper.ZooKeeper: Client environment:host.name=quickstart.cloudera
17/07/15 20:23:29 INFO zookeeper.ZooKeeper: Client environment:java.version=1.7.0_67
17/07/15 20:23:29 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation
17/07/15 20:23:29 INFO zookeeper.ZooKeeper: Client environment:java.home=/usr/java/jdk1.7.0_67-cloudera/jre
17/07/15 20:23:29 INFO zookeeper.ZooKeeper: Client environment:java.class.path=/etc/hadoop/conf:/usr/lib/hadoop/lib/jsch-0.1.42.jar:[... long classpath truncated ...]
17/07/15 20:23:29 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/usr/lib/hadoop/lib/native
17/07/15 20:23:29 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
17/07/15 20:23:29 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
17/07/15 20:23:29 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
17/07/15 20:23:29 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
17/07/15 20:23:29 INFO zookeeper.ZooKeeper: Client environment:os.version=2.6.32-696.3.2.el6.x86_64
17/07/15 20:23:29 INFO zookeeper.ZooKeeper: Client environment:user.name=cloudera
17/07/15 20:23:29 INFO zookeeper.ZooKeeper: Client environment:user.home=/home/cloudera
17/07/15 20:23:29 INFO zookeeper.ZooKeeper: Client environment:user.dir=/home/cloudera/topicosBD-pis/topicosBD-pis/projeto5
17/07/15 20:23:29 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=90000 watcher=hconnection-0x6022bb5c0x0, quorum=localhost:2181, baseZNode=/hbase
17/07/15 20:23:29 INFO zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
17/07/15 20:23:29 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /127.0.0.1:52300, server: localhost/127.0.0.1:2181
17/07/15 20:23:29 INFO zookeeper.ClientCnxn: Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x15d42b07712000e, negotiated timeout = 40000
Exception in thread "main" org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36, exceptions:
Sat Jul 15 20:24:23 PDT 2017, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=69886: row 'wcount,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=quickstart.cloudera,60020,1500062548162, seqNum=0
at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.throwEnrichedException(RpcRetryingCallerWithReadReplicas.java:270)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:219)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:57)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:293)
at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:268)
at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:140)
at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:135)
at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:886)
at org.apache.hadoop.hbase.MetaTableAccessor.fullScan(MetaTableAccessor.java:601)
at org.apache.hadoop.hbase.MetaTableAccessor.tableExists(MetaTableAccessor.java:365)
at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:310)
at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:320)
at br.edu.ufam.anibrata.HBaseWordCount.run(HBaseWordCount.java:175)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at br.edu.ufam.anibrata.HBaseWordCount.main(HBaseWordCount.java:224)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.net.SocketTimeoutException: callTimeout=60000, callDuration=69886: row 'wcount,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=quickstart.cloudera,60020,1500062548162, seqNum=0
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:159)
at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:64)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:494)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupConnection(RpcClientImpl.java:404)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:710)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:881)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:850)
at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1174)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:216)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:300)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:31889)
at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:349)
at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:193)
at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:62)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:332)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:306)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
... 4 more
I am trying to run the wordCount program in MapReduce with HBase as the sink. I haven't made any configuration changes in HBase; I am using it 'as is' from the Cloudera VirtualBox VM distribution. I am not familiar with the HBase configuration files or how to wire them up to MapReduce and HDFS. Any help or assistance is very much appreciated.
The error is a sign that your HBase client is trying to initiate insecure RPCs while HBase is expecting Kerberos authentication.
Also note that hbase.cells.scanned.per.heartbeat.check sets the minimum number of cells that must be scanned before a timeout check occurs.
Its default value is 10000; a smaller value causes timeout checks to occur more often. From your post I can say a timeout has happened.
Edit hbase-site.xml and add the following properties, modifying the values as needed.
<property>
  <name>hbase.rpc.timeout</name>
  <value>60000</value>
</property>
<property>
  <name>hbase.client.scanner.timeout.period</name>
  <value>60000</value>
</property>
<property>
  <name>hbase.cells.scanned.per.heartbeat.check</name>
  <value>10000</value>
</property>
Copy the updated hbase-site.xml to all of your cluster hosts and restart the HBase Master and RegionServer processes for the change to take effect.
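As a quick sanity check after editing, a small pure-JDK sketch like the one below can read the Hadoop-style property entries back out of the file and confirm the values took. The file path here is a placeholder; point it at your own hbase-site.xml.

```java
import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class HbaseSiteCheck {
    /** Returns the value of the named property in a Hadoop-style XML config, or null if absent. */
    static String lookup(Document doc, String name) throws Exception {
        NodeList props = doc.getElementsByTagName("property");
        for (int i = 0; i < props.getLength(); i++) {
            Element p = (Element) props.item(i);
            String n = p.getElementsByTagName("name").item(0).getTextContent().trim();
            if (n.equals(name)) {
                return p.getElementsByTagName("value").item(0).getTextContent().trim();
            }
        }
        return null;
    }

    public static void main(String[] args) throws Exception {
        // Path is a placeholder; pass the real location of your edited hbase-site.xml.
        File f = new File(args.length > 0 ? args[0] : "hbase-site.xml");
        if (!f.exists()) {
            System.out.println("No such file: " + f);
            return;
        }
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(f);
        System.out.println("hbase.rpc.timeout = " + lookup(doc, "hbase.rpc.timeout"));
        System.out.println("hbase.client.scanner.timeout.period = "
                + lookup(doc, "hbase.client.scanner.timeout.period"));
    }
}
```

Running this against each cluster host's copy is a cheap way to catch a host that never received the updated file.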

When I run a Java program to submit a Spark app on YARN, I get "Retrying connect to server: 0.0.0.0/0.0.0.0:8032" messages

This is my Java code:
import org.apache.spark.deploy.SparkSubmit;

public class Test {
    public static void main(String[] args) {
        String[] arg0 = new String[]{
                "--master", "yarn",
                "--deploy-mode", "client",
                "--class", "org.apache.spark.examples.SparkPi",
                "/opt/spark-1.6.2-bin-hadoop2.6/lib/spark-examples-1.6.2-hadoop2.6.0.jar"
        };
        SparkSubmit.main(arg0);
    }
}
I compile the code like this:
javac -cp ".:/opt/spark-1.6.2-bin-hadoop2.6/lib/spark-assembly-1.6.2-hadoop2.6.0.jar" Test.java
Then I set the environment variable like this:
export HADOOP_CONF_DIR=/opt/yarn_conf
/opt/yarn_conf is a copy of /opt/hadoop-2.6.5/etc/hadoop/ from the YARN cluster.
/opt/yarn_conf/yarn-site.xml has been configured with the following options:
<property>
  <name>yarn.resourcemanager.address</name>
  <value>10.110.16.60:8032</value>
</property>
<property>
  <name>yarn.resourcemanager.scheduler.address</name>
  <value>10.110.16.60:8030</value>
</property>
<property>
  <name>yarn.resourcemanager.resource-tracker.address</name>
  <value>10.110.16.60:8031</value>
</property>
....
Finally I run the following command
java -cp ".:/opt/spark-1.6.2-bin-hadoop2.6/lib/spark-assembly-1.6.2-hadoop2.6.0.jar" Test
Then I got the info like this:
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
17/02/20 16:22:31 INFO SparkContext: Running Spark version 1.6.2
17/02/20 16:22:31 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/02/20 16:22:31 INFO SecurityManager: Changing view acls to: root
17/02/20 16:22:31 INFO SecurityManager: Changing modify acls to: root
17/02/20 16:22:31 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
17/02/20 16:22:32 INFO Utils: Successfully started service 'sparkDriver' on port 55141.
17/02/20 16:22:32 INFO Slf4jLogger: Slf4jLogger started
17/02/20 16:22:32 INFO Remoting: Starting remoting
17/02/20 16:22:32 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem#10.110.16.61:37984]
17/02/20 16:22:32 INFO Utils: Successfully started service 'sparkDriverActorSystem' on port 37984.
17/02/20 16:22:32 INFO SparkEnv: Registering MapOutputTracker
17/02/20 16:22:32 INFO SparkEnv: Registering BlockManagerMaster
17/02/20 16:22:33 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-109e88d2-ce55-498c-bc0d-78448688fcac
17/02/20 16:22:33 INFO MemoryStore: MemoryStore started with capacity 1088.3 MB
17/02/20 16:22:33 INFO SparkEnv: Registering OutputCommitCoordinator
17/02/20 16:22:33 INFO Utils: Successfully started service 'SparkUI' on port 4040.
17/02/20 16:22:33 INFO SparkUI: Started SparkUI at http://10.110.16.61:4040
17/02/20 16:22:33 INFO HttpFileServer: HTTP File server directory is /tmp/spark-2d7ccc96-215f-402a-b425-bf4308bf6a94/httpd-d6478226-76dc-4904-873d-bc9d85e1fe18
17/02/20 16:22:33 INFO HttpServer: Starting HTTP Server
17/02/20 16:22:33 INFO Utils: Successfully started service 'HTTP file server' on port 58022.
17/02/20 16:22:33 INFO SparkContext: Added JAR file:/opt/spark-1.6.2-bin-hadoop2.6/lib/spark-examples-1.6.2-hadoop2.6.0.jar at http://10.110.16.61:58022/jars/spark-examples-1.6.2-hadoop2.6.0.jar with timestamp 1487578953758
17/02/20 16:22:33 INFO RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
17/02/20 16:22:35 INFO Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
17/02/20 16:22:36 INFO Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
17/02/20 16:22:37 INFO Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
17/02/20 16:22:38 INFO Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
How can I fix this? Thanks.
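One likely cause, offered as a sketch: the HADOOP_CONF_DIR environment variable is honored by the spark-submit launcher scripts, not by a bare java invocation, so YarnConfiguration never sees /opt/yarn_conf and falls back to the default yarn.resourcemanager.address of 0.0.0.0:8032. Putting the conf directory on the classpath should let the client pick up your yarn-site.xml (paths taken from the question):

```shell
# Place the YARN conf dir ahead of the jars so yarn-site.xml is found on the classpath
java -cp "/opt/yarn_conf:.:/opt/spark-1.6.2-bin-hadoop2.6/lib/spark-assembly-1.6.2-hadoop2.6.0.jar" Test
```

If the client then logs "Connecting to ResourceManager at /10.110.16.60:8032" instead of 0.0.0.0:8032, the configuration was picked up.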

Exception: org.apache.hadoop.hbase.MasterNotRunningException

I am running HBase from Java. HBase used to start without problems, but now when I issue the list command in the shell:
hbase(main):001:0> list
it prints "TABLE", then many lines of Java stack trace on the terminal, and ends with:
ERROR: org.apache.hadoop.hbase.MasterNotRunningException: Retried 7 times
I stopped HBase and restarted it, but that didn't help.
This is the log trace:
14/02/18 07:16:17 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=180000 watcher=hconnection
14/02/18 07:16:17 INFO zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
14/02/18 07:16:17 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 5461#ubuntu
14/02/18 07:16:17 WARN zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:597)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1068)
14/02/18 07:16:17 WARN zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper exception: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/hbaseid
14/02/18 07:16:17 INFO util.RetryCounter: Sleeping 2000ms before retry #1...
14/02/18 07:16:18 INFO zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
14/02/18 07:16:18 WARN zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:597)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1068)
14/02/18 07:16:19 INFO zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
14/02/18 07:16:19 WARN zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:597)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1068)
14/02/18 07:16:19 WARN zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper exception: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/hbaseid
14/02/18 07:16:19 INFO util.RetryCounter: Sleeping 4000ms before retry #2...
my hbase-site.xml:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Apache 2 License omitted to keep the output short -->
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>file:///home/hduser/HBASE/hbase</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/home/hduser/HBASE/zookeeper</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2222</value>
    <description>Property from ZooKeeper's config zoo.cfg.
      The port at which the clients will connect.
    </description>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>localhost</value>
  </property>
</configuration>
Please make sure that your ZooKeeper is running fine and that you don't have any name-resolution issues. Also, make sure that you have set the hbase.zookeeper.quorum property in your hbase-site.xml. If the problem still persists, please show us your hbase-site.xml file.
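As a quick connectivity check, a pure-JDK sketch like this can tell you whether anything is listening on the ZooKeeper port at all. Note that the hbase-site.xml above sets hbase.zookeeper.property.clientPort to 2222 while the client log shows it dialing localhost:2181, so it is worth probing both; the hosts and ports below are just the values from the question.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortProbe {
    /** Returns true if a TCP connection to host:port succeeds within timeoutMs. */
    static boolean isOpen(String host, int port, int timeoutMs) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Ports taken from the question: 2181 (seen in the client log) vs 2222 (hbase-site.xml).
        System.out.println("localhost:2181 open: " + isOpen("localhost", 2181, 1000));
        System.out.println("localhost:2222 open: " + isOpen("localhost", 2222, 1000));
    }
}
```

If neither port accepts a connection, ZooKeeper is not running; if only 2222 does, the client and server disagree on the port and the hbase-site.xml the client reads needs to match.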

Failed to connect to hbase from java

I've set up everything as the HBase documentation says. Here is my hbase-site.xml:
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>file:///users/urijvoskresenskij/hbase-0.94.3/hbase</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/users/urijvoskresenskij/hbase-0.94.3/zookeeper</value>
  </property>
</configuration>
Java Code:
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class MainHBase {

    /**
     * @param args
     */
    public static void main(String[] args) throws Exception {
        Configuration config = HBaseConfiguration.create();
        config.set("hbase.zookeeper.quorum", "localhost");
        HBaseAdmin admin = new HBaseAdmin(config);
        try {
            HTable table = new HTable(config, "test-table");
            Put put = new Put(Bytes.toBytes("test-key"));
            put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("value"));
            table.put(put);
        } finally {
            admin.close();
        }
    }
}
and here is the exception:
2012-12-26 11:08:07.980 java[10204:1c03] Unable to load realm info from SCDynamicStore
12/12/26 10:08:08 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48 GMT
12/12/26 10:08:08 INFO zookeeper.ZooKeeper: Client environment:host.name=192.168.1.101
12/12/26 10:08:08 INFO zookeeper.ZooKeeper: Client environment:java.version=1.7.0-ea
12/12/26 10:08:08 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation
12/12/26 10:08:08 INFO zookeeper.ZooKeeper: Client environment:java.home=/Library/Java/JavaVirtualMachines/JDK 1.7.0 Developer Preview.jdk/Contents/Home/jre
12/12/26 10:08:08 INFO zookeeper.ZooKeeper: Client environment:java.class.path=/Users/urijvoskresenskij/Documents/workspace/Hbase/bin:/Users/urijvoskresenskij/hbase-0.94.3/lib/activation-1.1.jar:/Users/urijvoskresenskij/hbase-0.94.3/lib/asm-3.1.jar:/Users/urijvoskresenskij/hbase-0.94.3/lib/avro-1.5.3.jar:/Users/urijvoskresenskij/hbase-0.94.3/lib/avro-ipc-1.5.3.jar:/Users/urijvoskresenskij/hbase-0.94.3/lib/commons-beanutils-1.7.0.jar:/Users/urijvoskresenskij/hbase-0.94.3/lib/commons-beanutils-core-1.8.0.jar:/Users/urijvoskresenskij/hbase-0.94.3/lib/commons-cli-1.2.jar:/Users/urijvoskresenskij/hbase-0.94.3/lib/commons-codec-1.4.jar:/Users/urijvoskresenskij/hbase-0.94.3/lib/commons-collections-3.2.1.jar:/Users/urijvoskresenskij/hbase-0.94.3/lib/commons-configuration-1.6.jar:/Users/urijvoskresenskij/hbase-0.94.3/lib/commons-digester-1.8.jar:/Users/urijvoskresenskij/hbase-0.94.3/lib/commons-el-1.0.jar:/Users/urijvoskresenskij/hbase-0.94.3/lib/commons-httpclient-3.1.jar:/Users/urijvoskresenskij/hbase-0.94.3/lib/commons-io-2.1.jar:/Users/urijvoskresenskij/hbase-0.94.3/lib/commons-lang-2.5.jar:/Users/urijvoskresenskij/hbase-0.94.3/lib/commons-logging-1.1.1.jar:/Users/urijvoskresenskij/hbase-0.94.3/lib/commons-math-2.1.jar:/Users/urijvoskresenskij/hbase-0.94.3/lib/commons-net-1.4.1.jar:/Users/urijvoskresenskij/hbase-0.94.3/lib/core-3.1.1.jar:/Users/urijvoskresenskij/hbase-0.94.3/lib/guava-11.0.2.jar:/Users/urijvoskresenskij/hbase-0.94.3/lib/hadoop-core-1.0.4.jar:/Users/urijvoskresenskij/hbase-0.94.3/lib/high-scale-lib-1.1.1.jar:/Users/urijvoskresenskij/hbase-0.94.3/lib/httpclient-4.1.2.jar:/Users/urijvoskresenskij/hbase-0.94.3/lib/httpcore-4.1.3.jar:/Users/urijvoskresenskij/hbase-0.94.3/lib/jackson-core-asl-1.8.8.jar:/Users/urijvoskresenskij/hbase-0.94.3/lib/jackson-jaxrs-1.8.8.jar:/Users/urijvoskresenskij/hbase-0.94.3/lib/jackson-mapper-asl-1.8.8.jar:/Users/urijvoskresenskij/hbase-0.94.3/lib/jackson-xc-1.8.8.jar:/Users/urijvoskresenskij/hbase-0.94.3/lib/jamon-runtime-2.3.1.
jar:/Users/urijvoskresenskij/hbase-0.94.3/lib/jasper-compiler-5.5.23.jar:/Users/urijvoskresenskij/hbase-0.94.3/lib/jasper-runtime-5.5.23.jar:/Users/urijvoskresenskij/hbase-0.94.3/lib/jaxb-api-2.1.jar:/Users/urijvoskresenskij/hbase-0.94.3/lib/jaxb-impl-2.2.3-1.jar:/Users/urijvoskresenskij/hbase-0.94.3/lib/jersey-core-1.8.jar:/Users/urijvoskresenskij/hbase-0.94.3/lib/jersey-json-1.8.jar:/Users/urijvoskresenskij/hbase-0.94.3/lib/jersey-server-1.8.jar:/Users/urijvoskresenskij/hbase-0.94.3/lib/jettison-1.1.jar:/Users/urijvoskresenskij/hbase-0.94.3/lib/jetty-6.1.26.jar:/Users/urijvoskresenskij/hbase-0.94.3/lib/jetty-util-6.1.26.jar:/Users/urijvoskresenskij/hbase-0.94.3/lib/jruby-complete-1.6.5.jar:/Users/urijvoskresenskij/hbase-0.94.3/lib/jsp-2.1-6.1.14.jar:/Users/urijvoskresenskij/hbase-0.94.3/lib/jsp-api-2.1-6.1.14.jar:/Users/urijvoskresenskij/hbase-0.94.3/lib/jsr305-1.3.9.jar:/Users/urijvoskresenskij/hbase-0.94.3/lib/junit-4.10-HBASE-1.jar:/Users/urijvoskresenskij/hbase-0.94.3/lib/libthrift-0.8.0.jar:/Users/urijvoskresenskij/hbase-0.94.3/lib/log4j-1.2.16.jar:/Users/urijvoskresenskij/hbase-0.94.3/lib/metrics-core-2.1.2.jar:/Users/urijvoskresenskij/hbase-0.94.3/lib/netty-3.2.4.Final.jar:/Users/urijvoskresenskij/hbase-0.94.3/lib/protobuf-java-2.4.0a.jar:/Users/urijvoskresenskij/hbase-0.94.3/lib/servlet-api-2.5-6.1.14.jar:/Users/urijvoskresenskij/hbase-0.94.3/lib/slf4j-api-1.4.3.jar:/Users/urijvoskresenskij/hbase-0.94.3/lib/slf4j-log4j12-1.4.3.jar:/Users/urijvoskresenskij/hbase-0.94.3/lib/snappy-java-1.0.3.2.jar:/Users/urijvoskresenskij/hbase-0.94.3/lib/stax-api-1.0.1.jar:/Users/urijvoskresenskij/hbase-0.94.3/lib/velocity-1.7.jar:/Users/urijvoskresenskij/hbase-0.94.3/lib/xmlenc-0.52.jar:/Users/urijvoskresenskij/hbase-0.94.3/lib/zookeeper-3.4.3.jar:/Users/urijvoskresenskij/hbase-0.94.3/hbase-0.94.3-tests.jar:/Users/urijvoskresenskij/hbase-0.94.3/hbase-0.94.3.jar:/Users/urijvoskresenskij/Desktop/spring-framework-3.1.3.RELEASE/dist/org.springframework.aop-3.1.3.RELEASE.jar:/U
sers/urijvoskresenskij/Desktop/spring-framework-3.1.3.RELEASE/dist/org.springframework.asm-3.1.3.RELEASE.jar:/Users/urijvoskresenskij/Desktop/spring-framework-3.1.3.RELEASE/dist/org.springframework.aspects-3.1.3.RELEASE.jar:/Users/urijvoskresenskij/Desktop/spring-framework-3.1.3.RELEASE/dist/org.springframework.beans-3.1.3.RELEASE.jar:/Users/urijvoskresenskij/Desktop/spring-framework-3.1.3.RELEASE/dist/org.springframework.context-3.1.3.RELEASE.jar:/Users/urijvoskresenskij/Desktop/spring-framework-3.1.3.RELEASE/dist/org.springframework.context.support-3.1.3.RELEASE.jar:/Users/urijvoskresenskij/Desktop/spring-framework-3.1.3.RELEASE/dist/org.springframework.core-3.1.3.RELEASE.jar:/Users/urijvoskresenskij/Desktop/spring-framework-3.1.3.RELEASE/dist/org.springframework.expression-3.1.3.RELEASE.jar:/Users/urijvoskresenskij/Desktop/spring-framework-3.1.3.RELEASE/dist/org.springframework.instrument-3.1.3.RELEASE.jar:/Users/urijvoskresenskij/Desktop/spring-framework-3.1.3.RELEASE/dist/org.springframework.instrument.tomcat-3.1.3.RELEASE.jar:/Users/urijvoskresenskij/Desktop/spring-framework-3.1.3.RELEASE/dist/org.springframework.jdbc-3.1.3.RELEASE.jar:/Users/urijvoskresenskij/Desktop/spring-framework-3.1.3.RELEASE/dist/org.springframework.jms-3.1.3.RELEASE.jar:/Users/urijvoskresenskij/Desktop/spring-framework-3.1.3.RELEASE/dist/org.springframework.orm-3.1.3.RELEASE.jar:/Users/urijvoskresenskij/Desktop/spring-framework-3.1.3.RELEASE/dist/org.springframework.oxm-3.1.3.RELEASE.jar:/Users/urijvoskresenskij/Desktop/spring-framework-3.1.3.RELEASE/dist/org.springframework.test-3.1.3.RELEASE.jar:/Users/urijvoskresenskij/Desktop/spring-framework-3.1.3.RELEASE/dist/org.springframework.transaction-3.1.3.RELEASE.jar:/Users/urijvoskresenskij/Desktop/spring-framework-3.1.3.RELEASE/dist/org.springframework.web-3.1.3.RELEASE.jar:/Users/urijvoskresenskij/Desktop/spring-framework-3.1.3.RELEASE/dist/org.springframework.web.portlet-3.1.3.RELEASE.jar:/Users/urijvoskresenskij/Desktop/spring-framew
ork-3.1.3.RELEASE/dist/org.springframework.web.servlet-3.1.3.RELEASE.jar:/Users/urijvoskresenskij/Desktop/spring-framework-3.1.3.RELEASE/dist/org.springframework.web.struts-3.1.3.RELEASE.jar:/Users/urijvoskresenskij/Downloads/commons-logging-1.1.1/commons-logging-1.1.1.jar:/Users/urijvoskresenskij/Desktop/spring-framework-3.1.3.RELEASE/src/org.springframework.web-sources-3.1.3.RELEASE.jar:/Users/urijvoskresenskij/Desktop/spring-framework-3.1.3.RELEASE/src/org.springframework.aop-sources-3.1.3.RELEASE.jar:/Users/urijvoskresenskij/Desktop/spring-framework-3.1.3.RELEASE/src/org.springframework.asm-sources-3.1.3.RELEASE.jar:/Users/urijvoskresenskij/Desktop/spring-framework-3.1.3.RELEASE/src/org.springframework.aspects-sources-3.1.3.RELEASE.jar:/Users/urijvoskresenskij/Desktop/spring-framework-3.1.3.RELEASE/src/org.springframework.beans-sources-3.1.3.RELEASE.jar:/Users/urijvoskresenskij/Desktop/spring-framework-3.1.3.RELEASE/src/org.springframework.context-sources-3.1.3.RELEASE.jar:/Users/urijvoskresenskij/Desktop/spring-framework-3.1.3.RELEASE/src/org.springframework.context.support-sources-3.1.3.RELEASE.jar:/Users/urijvoskresenskij/Desktop/spring-framework-3.1.3.RELEASE/src/org.springframework.core-sources-3.1.3.RELEASE.jar:/Users/urijvoskresenskij/Desktop/spring-framework-3.1.3.RELEASE/src/org.springframework.expression-sources-3.1.3.RELEASE.jar:/Users/urijvoskresenskij/Desktop/spring-framework-3.1.3.RELEASE/src/org.springframework.instrument-sources-3.1.3.RELEASE.jar:/Users/urijvoskresenskij/Desktop/spring-framework-3.1.3.RELEASE/src/org.springframework.instrument.tomcat-sources-3.1.3.RELEASE.jar:/Users/urijvoskresenskij/Desktop/spring-framework-3.1.3.RELEASE/src/org.springframework.jdbc-sources-3.1.3.RELEASE.jar:/Users/urijvoskresenskij/Desktop/spring-framework-3.1.3.RELEASE/src/org.springframework.jms-sources-3.1.3.RELEASE.jar:/Users/urijvoskresenskij/Desktop/spring-framework-3.1.3.RELEASE/src/org.springframework.orm-sources-3.1.3.RELEASE.jar:/Users/urijvoskresensk
ij/Desktop/spring-framework-3.1.3.RELEASE/src/org.springframework.oxm-sources-3.1.3.RELEASE.jar:/Users/urijvoskresenskij/Desktop/spring-framework-3.1.3.RELEASE/src/org.springframework.test-sources-3.1.3.RELEASE.jar:/Users/urijvoskresenskij/Desktop/spring-framework-3.1.3.RELEASE/src/org.springframework.transaction-sources-3.1.3.RELEASE.jar:/Users/urijvoskresenskij/Desktop/spring-framework-3.1.3.RELEASE/src/org.springframework.web.portlet-sources-3.1.3.RELEASE.jar:/Users/urijvoskresenskij/Desktop/spring-framework-3.1.3.RELEASE/src/org.springframework.web.servlet-sources-3.1.3.RELEASE.jar:/Users/urijvoskresenskij/Desktop/spring-framework-3.1.3.RELEASE/src/org.springframework.web.struts-sources-3.1.3.RELEASE.jar:/Users/urijvoskresenskij/Downloads/commons-logging-1.1.1/commons-logging-1.1.1-javadoc.jar:/Users/urijvoskresenskij/Downloads/commons-logging-1.1.1/commons-logging-1.1.1-sources.jar:/Users/urijvoskresenskij/Downloads/commons-logging-1.1.1/commons-logging-adapters-1.1.1.jar:/Users/urijvoskresenskij/Downloads/commons-logging-1.1.1/commons-logging-api-1.1.1.jar:/Users/urijvoskresenskij/Downloads/commons-logging-1.1.1/commons-logging-tests.jar
12/12/26 10:08:08 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/Users/urijvoskresenskij/Library/Java/Extensions:/Library/Java/Extensions:/Network/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java
12/12/26 10:08:08 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/var/folders/_q/z5blmlxs39d4pmmv1fj9mh500000gn/T/
12/12/26 10:08:08 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
12/12/26 10:08:08 INFO zookeeper.ZooKeeper: Client environment:os.name=Mac OS X
12/12/26 10:08:08 INFO zookeeper.ZooKeeper: Client environment:os.arch=x86_64
12/12/26 10:08:08 INFO zookeeper.ZooKeeper: Client environment:os.version=10.7.5
12/12/26 10:08:08 INFO zookeeper.ZooKeeper: Client environment:user.name=urijvoskresenskij
12/12/26 10:08:08 INFO zookeeper.ZooKeeper: Client environment:user.home=/Users/urijvoskresenskij
12/12/26 10:08:08 INFO zookeeper.ZooKeeper: Client environment:user.dir=/Users/urijvoskresenskij/Documents/workspace/Hbase
12/12/26 10:08:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=180000 watcher=hconnection
12/12/26 10:08:08 INFO zookeeper.ClientCnxn: Opening socket connection to server /127.0.0.1:2181
12/12/26 10:08:08 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 10204#MacBook-Air-Urij.local
12/12/26 10:08:08 INFO client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
12/12/26 10:08:08 INFO zookeeper.ClientCnxn: Socket connection established to localhost/127.0.0.1:2181, initiating session
12/12/26 10:08:08 INFO zookeeper.ClientCnxn: Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x13bd4823ba20007, negotiated timeout = 40000
12/12/26 10:08:08 WARN client.HConnectionManager$HConnectionImplementation: Encountered problems when prefetch META table:
org.apache.hadoop.hbase.TableNotFoundException: Cannot find row in .META. for table: test-table, row=test-table,,99999999999999
at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:164)
at org.apache.hadoop.hbase.client.MetaScanner.access$000(MetaScanner.java:54)
at org.apache.hadoop.hbase.client.MetaScanner$1.connect(MetaScanner.java:133)
at org.apache.hadoop.hbase.client.MetaScanner$1.connect(MetaScanner.java:130)
at org.apache.hadoop.hbase.client.HConnectionManager.execute(HConnectionManager.java:360)
at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:130)
at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:105)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:922)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:977)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:864)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:821)
at org.apache.hadoop.hbase.client.HTable.finishSetup(HTable.java:234)
at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:174)
at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:133)
at MainHBase.main(MainHBase.java:20)
Exception in thread "main" org.apache.hadoop.hbase.TableNotFoundException: test-table
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:999)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:864)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:821)
at org.apache.hadoop.hbase.client.HTable.finishSetup(HTable.java:234)
at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:174)
at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:133)
at MainHBase.main(MainHBase.java:20)
I've already tried to Google it but wasn't able to find a solution. My operating system is OS X. The table I've tried to create cannot be scanned from the hbase shell either. Please help :)
Have you created "test-table"? The table must exist before you can put data into it.
Configuration conf = HBaseConfiguration.create();
HBaseAdmin hbase = new HBaseAdmin(conf);
// The name must match the table your Put code opens ("test-table").
HTableDescriptor desc = new HTableDescriptor("test-table");
HColumnDescriptor cf = new HColumnDescriptor("cf".getBytes());
desc.addFamily(cf);
hbase.createTable(desc);
