Testing a Java HBase connection

I am trying to use the HBase Java APIs to write data into HBase. I installed Hadoop/HBase through Ambari.
Here is how the configuration is currently set up:
final Configuration CONFIGURATION = HBaseConfiguration.create();
final HBaseAdmin HBASE_ADMIN;
HBASE_ADMIN = new HBaseAdmin(CONFIGURATION);
When I try to write to HBase, I first check whether the table exists:
!HBASE_ADMIN.tableExists(tableName)
If not, I create a new one. However, exceptions are thrown when I attempt to check whether the table exists.
I'm wondering whether I'm correctly connected to HBase at all. Is there a good way to verify that the configuration is correct and that I am connecting to HBase? The exception I'm getting is below.
Thanks.
java.lang.RuntimeException: java.lang.NullPointerException
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:209)
at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:288)
at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:268)
at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:140)
at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:135)
at org.apache.hadoop.hbase.catalog.MetaReader.fullScan(MetaReader.java:597)
at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:802)
at org.apache.hadoop.hbase.catalog.MetaReader.tableExists(MetaReader.java:359)
at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:287)
at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:301)
at com.business.project.hbase.HBaseMessageWriter.getTable(HBaseMessageWriter.java:40)
at com.business.project.hbase.HBaseMessageWriter.write(HBaseMessageWriter.java:59)
at com.business.project.hbase.HBaseMessageWriter.write(HBaseMessageWriter.java:54)
at com.business.project.storm.bolt.package.exampleBolt.execute(exampleBolt.java:19)
at backtype.storm.daemon.executor$fn__5697$tuple_action_fn__5699.invoke(executor.clj:659)
at backtype.storm.daemon.executor$mk_task_receiver$fn__5620.invoke(executor.clj:415)
at backtype.storm.disruptor$clojure_handler$reify__1741.onEvent(disruptor.clj:58)
at backtype.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:125)
at backtype.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:99)
at backtype.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:80)
at backtype.storm.daemon.executor$fn__5697$fn__5710$fn__5761.invoke(executor.clj:794)
at backtype.storm.util$async_loop$fn__452.invoke(util.clj:465)
at clojure.lang.AFn.run(AFn.java:24)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException
at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.getMetaReplicaNodes(ZooKeeperWatcher.java:269)
at org.apache.hadoop.hbase.zookeeper.MetaRegionTracker.blockUntilAvailable(MetaRegionTracker.java:241)
at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getMetaRegionLocation(ZooKeeperRegistry.java:62)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateMeta(ConnectionManager.java:1203)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1164)
at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:294)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:130)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:55)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:201)

In addition to the configuration parameters suggested by Yosr, specifying
conf.set("zookeeper.znode.parent", "VALUE")
would help resolve the issue, where VALUE matches the zookeeper.znode.parent configured in the cluster's hbase-site.xml.

The property below resolved my issue.
For Hortonworks:
hconfig.set("zookeeper.znode.parent", "/hbase-unsecure")
For Cloudera:
hconfig.set("zookeeper.znode.parent", "/hbase")

You can use HBaseAdmin.checkHBaseAvailable(conf);

Configuration conf = HBaseConfiguration.create();
conf.set("hbase.master", "ip_address:60000");
conf.set("hbase.zookeeper.quorum","ip_address");
conf.set("hbase.zookeeper.property.clientPort", "2181");
HBaseAdmin admin = new HBaseAdmin(conf);
boolean bool = admin.tableExists("table_name");
System.out.println(bool);
ip_address: the IP address of your HBase cluster. Change the HBase ZooKeeper port (2181) if it differs in your configuration files.
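On newer HBase clients (1.0 and later), constructing HBaseAdmin directly is deprecated in favor of the Connection API. A minimal sketch of the same check against a newer client, assuming the quorum settings above (ip_address and table_name remain placeholders):
Configuration conf = HBaseConfiguration.create();
conf.set("hbase.zookeeper.quorum", "ip_address");
conf.set("hbase.zookeeper.property.clientPort", "2181");
// tableExists contacts the cluster, so it throws here if the connection is misconfigured
try (Connection connection = ConnectionFactory.createConnection(conf);
     Admin admin = connection.getAdmin()) {
    System.out.println(admin.tableExists(TableName.valueOf("table_name")));
}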

Related

Issue connecting redis cluster using redisson

I am trying to connect to a Redis cluster using Redisson 2.3.0 and Redis 5.0.7.
Java code for connection:
Config config = new Config();
config.useClusterServers()
.addNodeAddress("redis://127.0.0.1:8000");
RedissonClient redisson = Redisson.create(config);
It gives the following error:
java.lang.IllegalArgumentException: port out of range:-1
at java.net.InetSocketAddress.checkPort(InetSocketAddress.java:143)
at java.net.InetSocketAddress.&lt;init&gt;(InetSocketAddress.java:224)
at org.redisson.client.RedisClient.&lt;init&gt;(RedisClient.java:93)
at org.redisson.connection.MasterSlaveConnectionManager.createClient(MasterSlaveConnectionManager.java:310)
at org.redisson.cluster.ClusterConnectionManager.connect(ClusterConnectionManager.java:150)
at org.redisson.cluster.ClusterConnectionManager.&lt;init&gt;(ClusterConnectionManager.java:81)
at org.redisson.config.ConfigSupport.createConnectionManager(ConfigSupport.java:172)
at org.redisson.Redisson.&lt;init&gt;(Redisson.java:103)
at org.redisson.Redisson.create(Redisson.java:133)
Cluster config from CLUSTER_NODES in Redisson's ClusterConnectionManager:
[ClusterNodeInfo [nodeId=4317f285b359ddc3ac08bb85239924509146e475, address=//127.0.0.1:8003#18003, flags=[SLAVE], slaveOf=4118a348827e6107d7e35522a251fd39c5a8f82b, slotRanges=[]],
ClusterNodeInfo [nodeId=2f7b93c80d3721b3fb26fe87bc28ed04a63fe0ec, address=//127.0.0.1:8005#18005, flags=[SLAVE], slaveOf=8b81c3e1acb4e1959a83267540058d1a6bffa12f, slotRanges=[]],
ClusterNodeInfo [nodeId=4118a348827e6107d7e35522a251fd39c5a8f82b, address=//127.0.0.1:8001#18001, flags=[MASTER], slaveOf=null, slotRanges=[[5461-10922]]],
ClusterNodeInfo [nodeId=a0770863d893a5b8106a83e247cea2544f99ef36, address=//127.0.0.1:8004#18004, flags=[SLAVE], slaveOf=6b9da1bbe38b978a3017406e5c1e310f4706cfc8, slotRanges=[]],
ClusterNodeInfo [nodeId=8b81c3e1acb4e1959a83267540058d1a6bffa12f, address=//127.0.0.1:8000#18000, flags=[MYSELF, MASTER], slaveOf=null, slotRanges=[[0-5460]]],
ClusterNodeInfo [nodeId=6b9da1bbe38b978a3017406e5c1e310f4706cfc8, address=//127.0.0.1:8002#18002, flags=[MASTER], slaveOf=null, slotRanges=[[10923-16383]]]]
The reason I found after debugging is that ClientPartition.getMasterAddress() is not generated properly.
The address is //127.0.0.1:8001#18001 as recorded by CLUSTER_NODES, but it is parsed as host 18001 and port -1, i.e. the cluster bus port is read as the host.
Please suggest where my configuration is wrong.
Thank you in advance.

Issue with Spark Java API, Kerberos, and Hive

I'm trying to run a Spark SQL test against a Hive table using the Spark Java API. The problem I am having is with Kerberos. Whenever I attempt to run the program I get this error message:
Caused by: org.apache.spark.sql.AnalysisException: java.lang.RuntimeException: org.apache.hadoop.security.AccessControlException: SIMPLE authentication is not enabled. Available:[TOKEN, KERBEROS];
at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:106)
at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:194)
at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:114)
at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:102)
at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
at org.apache.spark.sql.hive.HiveSessionStateBuilder$$anon$1.<init>(HiveSessionStateBuilder.scala:69)
at org.apache.spark.sql.hive.HiveSessionStateBuilder.analyzer(HiveSessionStateBuilder.scala:69)
at org.apache.spark.sql.internal.BaseSessionStateBuilder$$anonfun$build$2.apply(BaseSessionStateBuilder.scala:293)
at org.apache.spark.sql.internal.BaseSessionStateBuilder$$anonfun$build$2.apply(BaseSessionStateBuilder.scala:293)
at org.apache.spark.sql.internal.SessionState.analyzer$lzycompute(SessionState.scala:79)
at org.apache.spark.sql.internal.SessionState.analyzer(SessionState.scala:79)
at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:57)
at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:55)
at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:47)
at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:74)
at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:638)
at tester.SparkSample.lambda$0(SparkSample.java:62)
... 5 more
on this line of code:
ss.sql("select count(*) from entps_pma.baraccount").show();
Now when I run the code, I log into Kerberos just fine and get this message:
18/05/01 11:21:03 INFO security.UserGroupInformation: Login successful for user <kerberos user> using keytab file /root/hdfs.keytab
I even connect to the Hive Metastore:
18/05/01 11:21:06 INFO hive.metastore: Trying to connect to metastore with URI thrift://<hiveserver>:9083
18/05/01 11:21:06 INFO hive.metastore: Connected to metastore.
But right after that I get the error. I'd appreciate any direction here. Here is my code:
public static void runSample(String fullPrincipal) throws IOException {
    System.setProperty("hive.metastore.sasl.enabled", "true");
    System.setProperty("hive.security.authorization.enabled", "true");
    System.setProperty("hive.metastore.kerberos.principal", fullPrincipal);
    System.setProperty("hive.metastore.execute.setugi", "true");
    System.setProperty("hadoop.security.authentication", "kerberos");

    Configuration conf = setSecurity(fullPrincipal);
    loginUser = UserGroupInformation.getLoginUser();

    loginUser.doAs((PrivilegedAction<Void>) () -> {
        SparkConf sparkConf = new SparkConf().setMaster("local");
        sparkConf.set("spark.sql.warehouse.dir", "hdfs:///user/hive/warehouse");
        sparkConf.set("hive.metastore.uris", "thrift://<hive server>:9083");
        sparkConf.set("hadoop.security.authentication", "kerberos");
        sparkConf.set("hadoop.rpc.protection", "privacy");
        sparkConf.set("spark.driver.extraClassPath",
                "/opt/cloudera/parcels/CDH/jars/*.jar:/opt/cloudera/parcels/CDH/lib/hive/conf:/opt/cloudera/parcels/CDH/lib/hive/lib/*.jar");
        sparkConf.set("spark.executor.extraClassPath",
                "/opt/cloudera/parcels/CDH/jars/*.jar:/opt/cloudera/parcels/CDH/lib/hive/conf:/opt/cloudera/parcels/CDH/lib/hive/lib/*.jar");
        sparkConf.set("spark.eventLog.enabled", "false");

        SparkSession ss = SparkSession
                .builder()
                .enableHiveSupport()
                .config(sparkConf)
                .appName("Jim Test Spark App")
                .getOrCreate();

        ss.sparkContext()
                .hadoopConfiguration()
                .addResource(conf);

        ss.sql("select count(*) from entps_pma.baraccount").show();
        return null;
    });
}
I guess you are running Spark on YARN. You need to specify the spark.yarn.principal and spark.yarn.keytab parameters. Please check the Running Spark on YARN documentation.
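For illustration, a minimal sketch of that suggestion applied to the code above, reusing fullPrincipal and the keytab path from the login message (adjust both to your environment):
// let Spark re-login from the keytab itself instead of relying on the ambient doAs context
sparkConf.set("spark.yarn.principal", fullPrincipal);
sparkConf.set("spark.yarn.keytab", "/root/hdfs.keytab");
The same values can also be passed to spark-submit via --principal and --keytab.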

Apache CXF Policy: Security configuration could not be detected (external policies)

For a few days I have been trying to resolve the following issue:
Exception in thread "main" javax.xml.ws.soap.SOAPFaultException: Security configuration could not be detected. Potential cause: Make sure jaxws:client element with name attribute value matching endpoint port is defined as well as a ws-security.signature.properties element within it.
at org.apache.cxf.jaxws.JaxWsClientProxy.invoke(JaxWsClientProxy.java:158)
at com.sun.proxy.$Proxy38.getSignedDocument(Unknown Source)
at pl.mycompany.epuap.TPSigning_TPSigning_Client.main(TPSigning_TPSigning_Client.java:55)
Caused by: org.apache.cxf.ws.policy.PolicyException: Security configuration could not be detected. Potential cause: Make sure jaxws:client element with name attribute value matching endpoint port is defined as well as a ws-security.signature.properties element within it.
at org.apache.cxf.ws.security.wss4j.policyhandlers.AbstractBindingBuilder.policyNotAsserted(AbstractBindingBuilder.java:315)
at org.apache.cxf.ws.security.wss4j.policyhandlers.AbstractBindingBuilder.getSignatureBuilder(AbstractBindingBuilder.java:1851)
at org.apache.cxf.ws.security.wss4j.policyhandlers.AsymmetricBindingHandler.doSignature(AsymmetricBindingHandler.java:570)
at org.apache.cxf.ws.security.wss4j.policyhandlers.AsymmetricBindingHandler.doSignBeforeEncrypt(AsymmetricBindingHandler.java:149)
at org.apache.cxf.ws.security.wss4j.policyhandlers.AsymmetricBindingHandler.handleBinding(AsymmetricBindingHandler.java:98)
at org.apache.cxf.ws.security.wss4j.PolicyBasedWSS4JOutInterceptor$PolicyBasedWSS4JOutInterceptorInternal.handleMessage(PolicyBasedWSS4JOutInterceptor.java:176)
at org.apache.cxf.ws.security.wss4j.PolicyBasedWSS4JOutInterceptor$PolicyBasedWSS4JOutInterceptorInternal.handleMessage(PolicyBasedWSS4JOutInterceptor.java:90)
at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:272)
at org.apache.cxf.endpoint.ClientImpl.doInvoke(ClientImpl.java:572)
at org.apache.cxf.endpoint.ClientImpl.invoke(ClientImpl.java:481)
at org.apache.cxf.endpoint.ClientImpl.invoke(ClientImpl.java:382)
at org.apache.cxf.endpoint.ClientImpl.invoke(ClientImpl.java:335)
at org.apache.cxf.frontend.ClientProxy.invokeSync(ClientProxy.java:96)
at org.apache.cxf.jaxws.JaxWsClientProxy.invoke(JaxWsClientProxy.java:136)
... 2 more
This happens when I try to call the webservice https://pz.gov.pl/pz-services/tpSigning?wsdl with an external (referenced) policy: https://pz.gov.pl/pz-services/tpSigning?wsdl=wssec-policies.wsdl.
Here is my code:
Map<String, Object> outProps = new HashMap<>(); // not declared in the original snippet
Properties properties = new Properties();
properties.put("org.apache.ws.security.crypto.provider", "org.apache.ws.security.components.crypto.Merlin");
properties.put("org.apache.ws.security.crypto.merlin.keystore.type", config.getKeystoreType());
properties.put("org.apache.ws.security.crypto.merlin.keystore.password", config.getKeystorePass());
properties.put("org.apache.ws.security.crypto.merlin.keystore.alias", config.getKeystoreAlias());
properties.put("org.apache.ws.security.crypto.merlin.file", config.getKeystoreFile());
outProps.put("cryptoProperties", properties);
outProps.put(WSHandlerConstants.ACTION, WSHandlerConstants.TIMESTAMP + " " + WSHandlerConstants.SIGNATURE);
//outProps.put(WSHandlerConstants.ACTION, WSHandlerConstants.SIGNATURE);
outProps.put(WSHandlerConstants.USER, config.getKeystoreAlias());
outProps.put(WSHandlerConstants.PW_CALLBACK_CLASS, PasswordCallbackHandler.class.getName());
outProps.put(WSHandlerConstants.SIG_PROP_REF_ID, "cryptoProperties");
outProps.put(WSHandlerConstants.SIG_KEY_ID, "DirectReference");
outProps.put(WSHandlerConstants.SIGNATURE_PARTS, "{}{http://schemas.xmlsoap.org/soap/envelope/}Body");
WSS4JOutInterceptor wssOut = new WSS4JOutInterceptor(outProps);
cxfEndpoint.getOutInterceptors().add(wssOut);
try {
    String s = signing.addDocumentToSigning(doc, succesUrl, failureUrl, additionalInfo);
    return s;
}
As I noticed, the policies aren't loaded by the CXF engine at all. I tried to load the policies via interceptors, but the effect is the same.
The problem occurs in both the 2.7.18 and 3.x versions.
Any help will be highly appreciated.
Regards,
Mariusz
The problem is that you are mixing the "action" based approach to WS-Security, and the WS-SecurityPolicy driven approach. The WSDL you reference contains a security policy, and the CXF PolicyBasedWSS4JOutInterceptor will automatically take care of configuring security based on this. You just need to specify a few security configuration options, e.g. keystores. See here for more information: https://cxf.apache.org/docs/ws-securitypolicy.html
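For illustration, a minimal sketch of the policy-driven setup, reusing the signing proxy, config, and PasswordCallbackHandler from the question; the file name client-sign.properties is a placeholder for a Merlin crypto configuration like the one built above:
BindingProvider bp = (BindingProvider) signing;
Map<String, Object> ctx = bp.getRequestContext();
// crypto config (keystore type/password/alias/file) used by the policy-based signature
ctx.put("ws-security.signature.properties", "client-sign.properties");
// key alias to sign with, and the handler that supplies its password
ctx.put("ws-security.signature.username", config.getKeystoreAlias());
ctx.put("ws-security.callback-handler", PasswordCallbackHandler.class.getName());
// no WSS4JOutInterceptor is added; PolicyBasedWSS4JOutInterceptor is engaged by the WSDL's policy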

How can I connect Hbase using Java Library(org.apache.hadoop.hbase.client) from different network?

I have a Hadoop cluster on an internal network (IP range 192.168.0.0/24), and I want to connect to HBase using the Java library (org.apache.hadoop.hbase.client)
from a Windows 7 development computer on a different network (external IP 203.252.x.x). But I couldn't connect to HBase.
I have a few questions.
Is my code wrong?
Is it possible using the Java library (org.apache.hadoop.hbase.client), or should I use the Thrift protocol? (I don't want to use Thrift.)
Do you have any ideas or comments?
Thank you.
This is my code for connecting to HBase.
public class TestBase {
    public static void main(String[] args) throws MasterNotRunningException, ZooKeeperConnectionException, ServiceException, IOException {
        Configuration configuration = HBaseConfiguration.create();
        configuration.set("hbase.master", "203.252.x.x"); // master info
        configuration.set("hbase.master.port", "6000");
        configuration.set("hbase.zookeeper.quorum", "203.252.x.x");
        configuration.set("hbase.zookeeper.property.clientPort", "2181");
        configuration.set("zookeeper.znode.parent", "/hbase-unsecure");

        HBaseAdmin.checkHBaseAvailable(configuration);

        HTable table = new HTable(configuration, "weatherData");
        Scan scan = new Scan();
        scan.setTimeRange(1L, 1435633313526L);
        ResultScanner scanner = table.getScanner(scan);
        for (Result rr = scanner.next(); rr != null; rr = scanner.next()) {
            System.out.println(Bytes.toString(rr.getRow())
                    + " => "
                    + Bytes.toString(rr.getValue(Bytes.toBytes("temp"),
                            Bytes.toBytes("max"))));
        }
        scanner.close();
        table.close();
    }
}
And this is the error output in Eclipse:
log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Exception in thread "main" org.apache.hadoop.hbase.MasterNotRunningException: com.google.protobuf.ServiceException: java.net.UnknownHostException: unknown host: datanode2
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1661)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1687)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1904)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isMasterRunning(ConnectionManager.java:932)
at org.apache.hadoop.hbase.client.HBaseAdmin.checkHBaseAvailable(HBaseAdmin.java:2409)
at TestBase.main(TestBase.java:28)
Caused by: com.google.protobuf.ServiceException: java.net.UnknownHostException: unknown host: datanode2
at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1739)
at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1777)
at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:42561)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.isMasterRunning(ConnectionManager.java:1698)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(ConnectionManager.java:1607)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1633)
... 5 more
Caused by: java.net.UnknownHostException: unknown host: datanode2
at org.apache.hadoop.hbase.ipc.RpcClient$Connection.<init>(RpcClient.java:501)
at org.apache.hadoop.hbase.ipc.RpcClient.createConnection(RpcClient.java:325)
at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1614)
at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1494)
at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1724)
... 10 more
There might be a problem with the HBase Master DNS name mapping to the IP address of hbase.master. Make sure you have a DNS server set up, or else try something similar to what worked on my GNU/Linux machine: configure "/etc/hostname" (the name of the HBase Master node) and "/etc/hosts" on the machine that tries to connect to the master node.
Hopefully you can set this up on your Windows machine somehow.
Here is a helpful link for the GNU/Linux way:
http://sujee.net/2012/03/08/getting-dns-right-for-hadoop-hbase-clusters/#.XULnEZNKhTZ
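For illustration, the client's hosts file would map the cluster's internal hostnames to addresses it can reach; datanode2 is taken from the error above, and the name hbasemaster and both mappings are placeholders:
203.252.x.x    hbasemaster
203.252.x.x    datanode2
On Windows 7 the equivalent file is C:\Windows\System32\drivers\etc\hosts.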
You are unable to reach the nodes of the cluster. Check the firewall and network settings, and make sure the required ports are open.
This is the error in your stack trace:
Exception in thread "main" org.apache.hadoop.hbase.MasterNotRunningException: com.google.protobuf.ServiceException: java.net.UnknownHostException: unknown host: datanode2
Also, you don't need to specify the HBase cluster properties in your code. Put hbase-site.xml on the classpath of your Java application and just instantiate the connection.
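A minimal sketch of that approach, assuming hbase-site.xml copied from the cluster is on the client's classpath:
Configuration configuration = HBaseConfiguration.create(); // reads hbase-site.xml from the classpath
HBaseAdmin.checkHBaseAvailable(configuration);             // no configuration.set(...) calls needed
HTable table = new HTable(configuration, "weatherData");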

Basics of Hector & Cassandra

I'm working with Cassandra 0.8.2.
I am working with the most recent version of Hector, and my Java version is 1.6.0_26.
I'm very new to Cassandra and Hector.
What I'm trying to do:
1. Connect to an up-and-running instance of Cassandra on a different server. I know it's running because I can ssh from my terminal into the server running this Cassandra instance and run the CLI with full functionality.
2. Then I want to connect to a keyspace, create a column family, and add a value to that column family through Hector.
I think my problem is that this running instance of Cassandra on this server might not be configured to accept commands that are not local. I think my next step will be to add a local instance of Cassandra on the machine I'm working on and try to do this locally. What do you think?
Here's my Java code:
import me.prettyprint.cassandra.serializers.StringSerializer;
import me.prettyprint.cassandra.service.CassandraHostConfigurator;
import me.prettyprint.hector.api.Cluster;
import me.prettyprint.hector.api.Keyspace;
import me.prettyprint.hector.api.ddl.ColumnFamilyDefinition;
import me.prettyprint.hector.api.ddl.ComparatorType;
import me.prettyprint.hector.api.factory.HFactory;
import me.prettyprint.hector.api.mutation.Mutator;
public class MySample {
    public static void main(String[] args) {
        Cluster cluster = HFactory.getOrCreateCluster("Test Cluster", "xxx.xxx.x.41:9160");
        Keyspace keyspace = HFactory.createKeyspace("apples", cluster);
        ColumnFamilyDefinition cf = HFactory.createColumnFamilyDefinition("apples", "ColumnFamily2", ComparatorType.UTF8TYPE);
        StringSerializer stringSerializer = StringSerializer.get();
        Mutator<String> mutator = HFactory.createMutator(keyspace, stringSerializer);
        mutator.insert("jsmith", "Standard1", HFactory.createStringColumn("first", "John"));
    }
}
My ERROR is:
16:22:19,852 INFO CassandraHostRetryService:37 - Downed Host Retry service started with queue size -1 and retry delay 10s
16:22:20,136 INFO JmxMonitor:54 - Registering JMX me.prettyprint.cassandra.service_Test Cluster:ServiceType=hector,MonitorType=hector
Exception in thread "main" me.prettyprint.hector.api.exceptions.HInvalidRequestException: InvalidRequestException(why:Keyspace apples does not exist)
at me.prettyprint.cassandra.connection.HThriftClient.getCassandra(HThriftClient.java:70)
at me.prettyprint.cassandra.connection.HConnectionManager.operateWithFailover(HConnectionManager.java:226)
at me.prettyprint.cassandra.service.KeyspaceServiceImpl.operateWithFailover(KeyspaceServiceImpl.java:131)
at me.prettyprint.cassandra.service.KeyspaceServiceImpl.batchMutate(KeyspaceServiceImpl.java:102)
at me.prettyprint.cassandra.service.KeyspaceServiceImpl.batchMutate(KeyspaceServiceImpl.java:108)
at me.prettyprint.cassandra.model.MutatorImpl$3.doInKeyspace(MutatorImpl.java:222)
at me.prettyprint.cassandra.model.MutatorImpl$3.doInKeyspace(MutatorImpl.java:219)
at me.prettyprint.cassandra.model.KeyspaceOperationCallback.doInKeyspaceAndMeasure(KeyspaceOperationCallback.java:20)
at me.prettyprint.cassandra.model.ExecutingKeyspace.doExecute(ExecutingKeyspace.java:85)
at me.prettyprint.cassandra.model.MutatorImpl.execute(MutatorImpl.java:219)
at me.prettyprint.cassandra.model.MutatorImpl.insert(MutatorImpl.java:59)
at org.cassandra.examples.MySample.main(MySample.java:25)
Caused by: InvalidRequestException(why:Keyspace apples does not exist)
at org.apache.cassandra.thrift.Cassandra$set_keyspace_result.read(Cassandra.java:5302)
at org.apache.cassandra.thrift.Cassandra$Client.recv_set_keyspace(Cassandra.java:481)
at org.apache.cassandra.thrift.Cassandra$Client.set_keyspace(Cassandra.java:456)
at me.prettyprint.cassandra.connection.HThriftClient.getCassandra(HThriftClient.java:68)
... 11 more
Thank you in advance for your help.
The exception you are getting is:
why:Keyspace apples does not exist
In your code, this line does not actually create the keyspace:
Keyspace keyspace = HFactory.createKeyspace("apples", cluster);
As described here, this is the code you need to define your keyspace:
ColumnFamilyDefinition cfDef = HFactory.createColumnFamilyDefinition("MyKeyspace", "ColumnFamilyName", ComparatorType.BYTESTYPE);
int replicationFactor = 1; // not defined in the original snippet; set to match your cluster
KeyspaceDefinition newKeyspace = HFactory.createKeyspaceDefinition("MyKeyspace", ThriftKsDef.DEF_STRATEGY_CLASS, replicationFactor, Arrays.asList(cfDef));
// Add the schema to the cluster.
// "true" as the second param means that Hector will block until all nodes see the change.
cluster.addKeyspace(newKeyspace, true);
We also have a getting-started guide up on the wiki which might be of some help.
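Applied to the question's code, a minimal sketch (assuming a single-node cluster, hence replication factor 1) that creates the keyspace and column family only when they are missing:
KeyspaceDefinition ksDef = cluster.describeKeyspace("apples");
if (ksDef == null) {
    ColumnFamilyDefinition cfDef = HFactory.createColumnFamilyDefinition("apples", "ColumnFamily2", ComparatorType.UTF8TYPE);
    // "true" blocks until all nodes have seen the schema change
    cluster.addKeyspace(HFactory.createKeyspaceDefinition("apples", ThriftKsDef.DEF_STRATEGY_CLASS, 1, Arrays.asList(cfDef)), true);
}
Keyspace keyspace = HFactory.createKeyspace("apples", cluster);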
