My code in Java:
public static void main(String[] args) {
    Cluster cluster;
    Session session;

    cluster = Cluster.builder()
            .addContactPoint("127.0.0.1")
            .withPort(9042)
            .build();
    session = cluster.connect("Bundesliga");

    session.execute("INSERT INTO test(c1,c2,c3,c4,c5) VALUES(0,0,0,0,0)");
}
Error Message:
Exception in thread "main" com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /127.0.0.1:9042 (null))
    at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:196)
    at com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:80)
    at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1145)
    at com.datastax.driver.core.Cluster.init(Cluster.java:149)
    at com.datastax.driver.core.Cluster.connect(Cluster.java:225)
    at com.datastax.driver.core.Cluster.connect(Cluster.java:258)
    at cassandra.cassandra_main.main(cassandra_main.java:19)
I have already looked in cassandra.yaml:
start_native_transport: true
native_transport_port: 9042
I fixed it.
The problem was that the version of cassandra-driver-core was not compatible with the Cassandra version.
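For anyone who hits the same thing: once the driver can actually connect, a quick sanity check is to print the negotiated protocol version and the Cassandra release reported by each node, so a driver/server mismatch shows up immediately. This is only a sketch against the 3.x driver API, reusing the contact point from the question:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Host;

public class VersionCheck {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder()
                .addContactPoint("127.0.0.1")
                .withPort(9042)
                .build();
        try {
            // init() opens the control connection without creating a session
            cluster.init();
            System.out.println("Negotiated protocol: "
                    + cluster.getConfiguration().getProtocolOptions().getProtocolVersion());
            for (Host host : cluster.getMetadata().getAllHosts()) {
                System.out.println(host + " runs Cassandra " + host.getCassandraVersion());
            }
        } finally {
            cluster.close();
        }
    }
}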
Related
HBase and Hadoop run as a CDH standalone setup in Docker. ZooKeeper, HBase, and Phoenix can all be operated from the shell, and Java can talk to ZooKeeper, but Java cannot operate HBase or Phoenix, even though the code looks fine.
Can anyone help me? Thank you!
public class HbaseTest {
    public static Configuration conf;
    static {
        conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "master:2181");
    }
    public static void main(String[] args) throws Exception {
        Connection connection = ConnectionFactory.createConnection(conf);
        HBaseAdmin admin = new HBaseAdmin(conf);
        boolean exists = admin.tableExists("stu");
        System.out.println(exists);
        admin.close();
    }
}
The log after running the code:
java.net.SocketTimeoutException: callTimeout=60000, callDuration=64255: Connection refused: no further information row 'stu,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=master,60020,1620539340415, seqNum=0
at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.throwEnrichedException(RpcRetryingCallerWithReadReplicas.java:276)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:210)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:212)
at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:314)
at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:289)
at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:164)
at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:159)
at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:796)
at org.apache.hadoop.hbase.MetaTableAccessor.fullScan(MetaTableAccessor.java:602)
at org.apache.hadoop.hbase.MetaTableAccessor.tableExists(MetaTableAccessor.java:366)
at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:408)
at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:418)
at com.max.hbase.HbaseTest.main(HbaseTest.java:24)
Caused by: java.net.SocketTimeoutException: callTimeout=60000, callDuration=64255: Connection refused: no further information row 'stu,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=master,60020,1620539340415, seqNum=0
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:171)
at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:65)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.ConnectException: Connection refused: no further information
You should try to separate the quorum's address and port:
conf.set("hbase.zookeeper.quorum", "master");
conf.set("hbase.zookeeper.property.clientPort", "2181");
Also, are you sure that your ZooKeeper is on the master node? The first property should not be the master address but the ZooKeeper quorum (comma-separated, if there is more than one address). I suppose you specified "master" because you are co-locating the master and the single ZooKeeper node?
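Putting that together, a minimal sketch of the corrected client setup, assuming a single ZooKeeper node co-located with the master on the host named "master" (the try-with-resources and Connection.getAdmin() parts are just the more idiomatic modern API, not something the fix requires):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class HbaseTest {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "master");            // host(s) only, comma-separated
        conf.set("hbase.zookeeper.property.clientPort", "2181"); // port set separately
        // Admin comes from the Connection rather than the deprecated new HBaseAdmin(conf)
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Admin admin = connection.getAdmin()) {
            System.out.println(admin.tableExists(TableName.valueOf("stu")));
        }
    }
}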
I'm trying to run a Spark SQL test against a Hive table using the Spark Java API. The problem I am having is with Kerberos. Whenever I attempt to run the program I get this error message:
Caused by: org.apache.spark.sql.AnalysisException: java.lang.RuntimeException: org.apache.hadoop.security.AccessControlException: SIMPLE authentication is not enabled. Available:[TOKEN, KERBEROS];
at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:106)
at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:194)
at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:114)
at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:102)
at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
at org.apache.spark.sql.hive.HiveSessionStateBuilder$$anon$1.<init>(HiveSessionStateBuilder.scala:69)
at org.apache.spark.sql.hive.HiveSessionStateBuilder.analyzer(HiveSessionStateBuilder.scala:69)
at org.apache.spark.sql.internal.BaseSessionStateBuilder$$anonfun$build$2.apply(BaseSessionStateBuilder.scala:293)
at org.apache.spark.sql.internal.BaseSessionStateBuilder$$anonfun$build$2.apply(BaseSessionStateBuilder.scala:293)
at org.apache.spark.sql.internal.SessionState.analyzer$lzycompute(SessionState.scala:79)
at org.apache.spark.sql.internal.SessionState.analyzer(SessionState.scala:79)
at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:57)
at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:55)
at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:47)
at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:74)
at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:638)
at tester.SparkSample.lambda$0(SparkSample.java:62)
... 5 more
on this line of code:
ss.sql("select count(*) from entps_pma.baraccount").show();
Now when I run the code, I log into Kerberos just fine and get this message:
18/05/01 11:21:03 INFO security.UserGroupInformation: Login successful for user <kerberos user> using keytab file /root/hdfs.keytab
I even connect to the Hive Metastore:
18/05/01 11:21:06 INFO hive.metastore: Trying to connect to metastore with URI thrift://<hiveserver>:9083
18/05/01 11:21:06 INFO hive.metastore: Connected to metastore.
But right after that I get the error. Appreciate any direction here. Here is my code:
public static void runSample(String fullPrincipal) throws IOException {
    System.setProperty("hive.metastore.sasl.enabled", "true");
    System.setProperty("hive.security.authorization.enabled", "true");
    System.setProperty("hive.metastore.kerberos.principal", fullPrincipal);
    System.setProperty("hive.metastore.execute.setugi", "true");
    System.setProperty("hadoop.security.authentication", "kerberos");

    Configuration conf = setSecurity(fullPrincipal);
    loginUser = UserGroupInformation.getLoginUser();

    loginUser.doAs((PrivilegedAction<Void>) () -> {
        SparkConf sparkConf = new SparkConf().setMaster("local");
        sparkConf.set("spark.sql.warehouse.dir", "hdfs:///user/hive/warehouse");
        sparkConf.set("hive.metastore.uris", "thrift://<hive server>:9083");
        sparkConf.set("hadoop.security.authentication", "kerberos");
        sparkConf.set("hadoop.rpc.protection", "privacy");
        sparkConf.set("spark.driver.extraClassPath",
                "/opt/cloudera/parcels/CDH/jars/*.jar:/opt/cloudera/parcels/CDH/lib/hive/conf:/opt/cloudera/parcels/CDH/lib/hive/lib/*.jar");
        sparkConf.set("spark.executor.extraClassPath",
                "/opt/cloudera/parcels/CDH/jars/*.jar:/opt/cloudera/parcels/CDH/lib/hive/conf:/opt/cloudera/parcels/CDH/lib/hive/lib/*.jar");
        sparkConf.set("spark.eventLog.enabled", "false");

        SparkSession ss = SparkSession
                .builder()
                .enableHiveSupport()
                .config(sparkConf)
                .appName("Jim Test Spark App")
                .getOrCreate();

        ss.sparkContext()
                .hadoopConfiguration()
                .addResource(conf);

        ss.sql("select count(*) from entps_pma.baraccount").show();
        return null;
    });
}
I guess you are running Spark on YARN. You need to specify the spark.yarn.principal and spark.yarn.keytab parameters. Please check the Running Spark on YARN documentation.
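For reference, a hedged sketch of what that could look like in the question's code; the keytab path is taken from the login message above, and the values are assumptions to adapt:

SparkConf sparkConf = new SparkConf()
        .setMaster("yarn")                              // instead of "local", since this targets YARN
        .set("spark.yarn.principal", fullPrincipal)     // Kerberos principal used for token renewal
        .set("spark.yarn.keytab", "/root/hdfs.keytab"); // keytab path from the log above

The same two values can also be passed to spark-submit as --principal and --keytab.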
I'm trying to access a remote Cassandra cluster using Spark in Java. However, when I try to execute an aggregation function (count), the following error is thrown:
Exception in thread "main" com.datastax.driver.core.exceptions.TransportException: [/192.168.1.103:9042] Connection has been closed
at com.datastax.driver.core.exceptions.TransportException.copy(TransportException.java:38)
at com.datastax.driver.core.exceptions.TransportException.copy(TransportException.java:24)
at com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
at com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:245)
I have already set the timeout in cassandra.yaml to a large value.
Here is my code:
SparkConf conf = new SparkConf();
conf.setAppName("Test");
conf.setMaster("local[*]");
conf.set("spark.cassandra.connection.host", "host");

Spark app = new Spark(conf);
app.run();

...

CassandraConnector connector = CassandraConnector.apply(sc.getConf());

// Prepare the schema
try (Session session = connector.openSession()) {
    session.execute("USE keyspace0");
    ResultSet results = session.execute("SELECT count(*) FROM table0");
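One detail worth noting (an assumption on my part, not something confirmed in this thread): the timeouts in cassandra.yaml are server-side, while the Spark Cassandra Connector also applies its own client-side read timeout. A hypothetical sketch of raising it on the SparkConf from the code above:

SparkConf conf = new SparkConf();
conf.setAppName("Test");
conf.setMaster("local[*]");
conf.set("spark.cassandra.connection.host", "host");
// Hypothetical value: raise the connector's client-side read timeout
// (milliseconds) so a long-running count(*) is not cut off by the driver.
conf.set("spark.cassandra.read.timeout_ms", "120000");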
While connecting to a Cassandra database and creating a keyspace, I am getting the following error.
Exception in thread "main" com.datastax.driver.core.exceptions.NoHostAvailableException:
All host(s) tried for query failed (tried: /127.0.0.1:9042
(com.datastax.driver.core.ConnectionException: [/127.0.0.1:9042] Unexpected error during transport initialization
(com.datastax.driver.core.TransportException: [/127.0.0.1:9042] Unexpected exception triggered (java.lang.IndexOutOfBoundsException: Not enough readable bytes - Need 4, maximum is 0))))
This is my code:
package com.hadoop.reloaded.cassandra;

import com.datastax.driver.core.*;

public class CassandraOne {
    public static void main(String[] args) {
        Cluster cluster1 = Cluster.builder()
                .withPort(9042)
                .addContactPoint("127.0.0.1")
                .build();
        Session session = cluster1.connect();

        String query = "CREATE KEYSPACE test WITH replication "
                + "= {'class':'SimpleStrategy', 'replication_factor':1}; ";
        session.execute(query);
        session.execute("USE test");
    }
}
and this exception is thrown when I run it:
Exception in thread "main" com.datastax.driver.core.exceptions.NoHostAvailableException:
All host(s) tried for query failed (tried: /127.0.0.1:9042 (
com.datastax.driver.core.ConnectionException: [/127.0.0.1:9042]
Unexpected error during transport initialization (
com.datastax.driver.core.TransportException: [/127.0.0.1:9042]
Unexpected exception triggered (java.lang.IndexOutOfBoundsException: Not enough readable bytes - Need 4, maximum is 0))))
at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:196)
at com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:79)
at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1104)
at com.datastax.driver.core.Cluster.init(Cluster.java:121)
at com.datastax.driver.core.Cluster.connect(Cluster.java:198)
at com.hadoop.reloaded.cassandra.CassandraOne.main(CassandraOne.java:13)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)
My guess is that you are using a version of the DataStax Java Driver that is too old. Check that you are running version 3.0.0 or later, as this is required to connect to a Cassandra 3.x cluster.
I faced the same issue. After upgrading the driver to cassandra-driver-core-3.0.3.jar and the matching Netty version, the issue was resolved.
Maybe an encoding issue? More details at this link.
I have a Hadoop cluster on an internal network (IP range 192.168.0.0/24), and I want to connect to HBase using the Java client library (org.apache.hadoop.hbase.client)
from a development computer running Windows 7 on a different, external network (203.252.x.x). But I couldn't connect to HBase.
I have a few questions:
Is my code wrong?
Is it possible with the Java library (org.apache.hadoop.hbase.client), or should I use the Thrift protocol? (I don't want to use Thrift.)
Do you have any ideas or comments?
Thank you.
This is my code for connecting to HBase:
public class TestBase {
    public static void main(String[] args) throws MasterNotRunningException,
            ZooKeeperConnectionException, ServiceException, IOException {
        Configuration configuration = HBaseConfiguration.create();
        configuration.set("hbase.master", "203.252.x.x"); // master info
        configuration.set("hbase.master.port", "6000");
        configuration.set("hbase.zookeeper.quorum", "203.252.x.x");
        configuration.set("hbase.zookeeper.property.clientPort", "2181");
        configuration.set("zookeeper.znode.parent", "/hbase-unsecure");

        HBaseAdmin.checkHBaseAvailable(configuration);

        HTable table = new HTable(configuration, "weatherData");
        Scan scan = new Scan();
        scan.setTimeRange(1L, 1435633313526L);
        ResultScanner scanner = table.getScanner(scan);
        for (Result rr = scanner.next(); rr != null; rr = scanner.next()) {
            System.out.println(Bytes.toString(rr.getRow())
                    + " => "
                    + Bytes.toString(rr.getValue(Bytes.toBytes("temp"),
                            Bytes.toBytes("max"))));
        }
        scanner.close(); // close the scanner before the table, not after
        table.close();
    }
}
And this is the error output in Eclipse:
log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Exception in thread "main" org.apache.hadoop.hbase.MasterNotRunningException: com.google.protobuf.ServiceException: java.net.UnknownHostException: unknown host: datanode2
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1661)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1687)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1904)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isMasterRunning(ConnectionManager.java:932)
at org.apache.hadoop.hbase.client.HBaseAdmin.checkHBaseAvailable(HBaseAdmin.java:2409)
at TestBase.main(TestBase.java:28)
Caused by: com.google.protobuf.ServiceException: java.net.UnknownHostException: unknown host: datanode2
at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1739)
at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1777)
at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:42561)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.isMasterRunning(ConnectionManager.java:1698)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(ConnectionManager.java:1607)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1633)
... 5 more
Caused by: java.net.UnknownHostException: unknown host: datanode2
at org.apache.hadoop.hbase.ipc.RpcClient$Connection.<init>(RpcClient.java:501)
at org.apache.hadoop.hbase.ipc.RpcClient.createConnection(RpcClient.java:325)
at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1614)
at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1494)
at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1724)
... 10 more
There might be a problem with the HBase master's DNS name mapping to the IP address in hbase.master. Make sure you either have a DNS server set up, or try something similar to what worked on my GNU/Linux machine: configuring "/etc/hostname" (to set the name of the HBase master node) and "/etc/hosts" on the machine that tries to connect to the master node.
Hopefully you can set this up on your Windows machine somehow.
Here is a helpful link for the GNU/Linux way:
http://sujee.net/2012/03/08/getting-dns-right-for-hadoop-hbase-clusters/#.XULnEZNKhTZ
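If running a full DNS server is overkill, hosts-file entries are often enough. A hypothetical example: the hostnames come from the stack trace, the addresses are placeholders for whatever each node's externally reachable IP actually is, and on Windows the file lives at C:\Windows\System32\drivers\etc\hosts rather than /etc/hosts:

# map every cluster hostname the client will see to an externally reachable IP
203.252.x.x    master
203.252.x.y    datanode2    # placeholder: each node needs its own reachable address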
You are unable to reach the nodes of the cluster. Check the firewall and network settings, and make sure the required ports are also open.
This is the error in your stack trace:
Exception in thread "main" org.apache.hadoop.hbase.MasterNotRunningException: com.google.protobuf.ServiceException: java.net.UnknownHostException: unknown host: datanode2
Also, you don't need to specify HBase cluster properties in your code. Put hbase-site.xml on the classpath of your Java application and just instantiate the connection.
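A minimal sketch of that approach, assuming the cluster's hbase-site.xml is on the application classpath (the isClosed() print is just a hypothetical smoke test):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class TestBase {
    public static void main(String[] args) throws Exception {
        // HBaseConfiguration.create() picks up hbase-site.xml from the
        // classpath, so no quorum or master properties are hard-coded here.
        Configuration configuration = HBaseConfiguration.create();
        try (Connection connection = ConnectionFactory.createConnection(configuration)) {
            System.out.println("Connected: " + !connection.isClosed());
        }
    }
}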