I am trying to understand Akka Clustering for parallel computation across nodes, so I wrote a factorial program and want to run it on a cluster of 3 nodes (including the master).
I am using a configuration file to provide the seed nodes and the cluster provider, and I read that file in my code.
cluster {
  akka {
    actor {
      provider = "cluster"
    }
    remote {
      log-remote-lifecycle-events = off
      netty.tcp {
        hostname = "127.0.0.1"
        port = 0
      }
    }
    cluster {
      seed-nodes = [
        "akka.tcp://ClusterSystem@127.0.0.1:9876",
        "akka.tcp://ClusterSystem@127.0.0.1:6789"]
      # auto downing is NOT safe for production deployments.
      # you may want to use it during development, read more about it in the docs.
      #
      # auto-down-unreachable-after = 10s
    }
  }
}
Following is the Scala code:
package test

import java.io.File

import akka.actor.{Actor, ActorSystem, Props}
import akka.stream.ActorMaterializer
import com.typesafe.config.ConfigFactory

import scala.concurrent.ExecutionContextExecutor

class Factorial extends Actor {
  override def receive = {
    case (n: Int) => fact(n)
  }

  def fact(n: Int): Int = {
    if (n <= 1) {
      return 1
    }
    else {
      return n * fact(n - 1)
    }
  }
}

object ClusterActor {
  def main(args: Array[String]): Unit = {
    val configFile = "E:/Scala/StatsRuleEngine/Resources/local_configuration.conf"
    val config = ConfigFactory.parseFile(new File(configFile))

    implicit val system: ActorSystem = ActorSystem("ClusterSystem", config.getConfig("cluster"))
    implicit val materializer: ActorMaterializer = ActorMaterializer()
    implicit val executionContext: ExecutionContextExecutor = system.dispatcher

    val FacActor = system.actorOf(Props[Factorial], "Factorial")
    FacActor ! (5)
  }
}
On running the program, I am getting the error below:
Remote connection to [null] failed with java.net.ConnectException: Connection refused: no further information: /127.0.0.1:6789
[WARN] [01/21/2019 16:31:15.979] [New I/O boss #3] [NettyTransport(akka://ClusterSystem)] Remote connection to [null] failed with java.net.ConnectException: Connection refused: no further information: /127.0.0.1:9876
I tried to search, but I don't know why this error is coming.
When you boot your nodes, you need to specify the exact ports they will listen on in the config:
netty.tcp {
  hostname = "127.0.0.1"
  port = 0 // THE EXACT PORT
}
So, if your seed nodes use ports 9876 and 6789, two of your nodes have to specify
netty.tcp {
  hostname = "127.0.0.1"
  port = 9876
}

and

netty.tcp {
  hostname = "127.0.0.1"
  port = 6789
}
Note that the node listed first in the seed-nodes list must start first.
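For completeness, here is a minimal sketch of how each seed node could pick its own port at startup while reusing the configuration file from the question. The "node.port" system property and the SeedNode object name are made up purely for illustration; the setting being overridden is akka.remote.netty.tcp.port.

import java.io.File
import akka.actor.ActorSystem
import com.typesafe.config.ConfigFactory

object SeedNode {
  def main(args: Array[String]): Unit = {
    val configFile = "E:/Scala/StatsRuleEngine/Resources/local_configuration.conf"
    // Start the first seed with -Dnode.port=9876 and the second with -Dnode.port=6789.
    // "node.port" is a hypothetical property name used only for this example.
    val portOverride = ConfigFactory.parseString(
      s"akka.remote.netty.tcp.port = ${System.getProperty("node.port", "9876")}")
    val config = portOverride.withFallback(
      ConfigFactory.parseFile(new File(configFile)).getConfig("cluster"))
    val system = ActorSystem("ClusterSystem", config)
  }
}

Start the process listed first in seed-nodes before the other one, as noted above.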
I'm trying to set up Logstash in Docker.
I'm using the logstash:8.0.0 image.
This is my logstash.yml
http.host: "0.0.0.0"
xpack.monitoring.enabled: false
This is my pipeline.conf
input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["http://10.135.95.164:9200"]
    index => "instameister"
    username => "elastic"
    password => ""
  }
  stdout { codec => rubydebug }
}
And this is the error I'm getting:
Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"Java::JavaLang::IllegalStateException", :message=>"Unable to configure plugins: (ConfigurationError) Something is wrong with your configuration.", :backtrace=>["org.logstash.config.ir.CompiledPipeline.<init>(CompiledPipeline.java:120)", "org.logstash.execution.JavaBasePipelineExt.initialize(JavaBasePipelineExt.java:85)", "org.logstash.execution.JavaBasePipelineExt$INVOKER$i$1$0$initialize.call(JavaBasePipelineExt$INVOKER$i$1$0$initialize.gen)", "org.jruby.internal.runtime.methods.JavaMethod$JavaMethodN.call(JavaMethod.java:837)", "org.jruby.ir.runtime.IRRuntimeHelpers.instanceSuper(IRRuntimeHelpers.java:1169)", "org.jruby.ir.runtime.IRRuntimeHelpers.instanceSuperSplatArgs(IRRuntimeHelpers.java:1156)", "org.jruby.ir.targets.InstanceSuperInvokeSite.invoke(InstanceSuperInvokeSite.java:39)", "usr.share.logstash.logstash_minus_core.lib.logstash.java_pipeline.RUBY$method$initialize$0(/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:47)", "org.jruby.internal.runtime.methods.CompiledIRMethod.call(CompiledIRMethod.java:80)", "org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:70)", "org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:333)", "org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:87)", "org.jruby.RubyClass.newInstance(RubyClass.java:939)", "org.jruby.RubyClass$INVOKER$i$newInstance.call(RubyClass$INVOKER$i$newInstance.gen)", "org.jruby.ir.targets.InvokeSite.invoke(InvokeSite.java:207)", "usr.share.logstash.logstash_minus_core.lib.logstash.pipeline_action.create.RUBY$method$execute$0(/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:50)", "usr.share.logstash.logstash_minus_core.lib.logstash.pipeline_action.create.RUBY$method$execute$0$__VARARGS__(/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:49)", "org.jruby.internal.runtime.methods.CompiledIRMethod.call(CompiledIRMethod.java:80)", "org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:70)", "org.jruby.ir.targets.InvokeSite.invoke(InvokeSite.java:207)", "usr.share.logstash.logstash_minus_core.lib.logstash.agent.RUBY$block$converge_state$2(/usr/share/logstash/logstash-core/lib/logstash/agent.rb:376)", "org.jruby.runtime.CompiledIRBlockBody.callDirect(CompiledIRBlockBody.java:138)", "org.jruby.runtime.IRBlockBody.call(IRBlockBody.java:58)", "org.jruby.runtime.IRBlockBody.call(IRBlockBody.java:52)", "org.jruby.runtime.Block.call(Block.java:139)", "org.jruby.RubyProc.call(RubyProc.java:318)", "org.jruby.internal.runtime.RubyRunnable.run(RubyRunnable.java:105)", "java.base/java.lang.Thread.run(Thread.java:829)"]}
All I see is the "Unable to configure plugins: (ConfigurationError) Something is wrong with your configuration." message, but I have no idea what's wrong.
Instead of modifying logstash.yml, you can override the variables in the environment. Your pipeline.conf seems to be OK; the rubydebug codec is enabled by default for stdout.
So, assuming that you have a docker compose file, the configuration would be something like this:
logstash:
  image: docker.elastic.co/logstash/logstash
  container_name: logstash
  restart: always
  user: root
  volumes:
    - ./logstash/pipeline:/usr/share/logstash/pipeline:ro
    - ./logstash/logs/:/logstash/logs/:rw
  environment:
    - xpack.monitoring.enabled=false
    - outputs.elasticsearch=http://elasticuser:elasticuserpassword@elasticsearch:9200
  depends_on:
    - elasticsearch
In the ./logstash/pipeline directory, create a logstash.conf file with:
input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => "${outputs.elasticsearch}"
  }
  stdout {
  }
}
Adapt to your needs.
The problem was that the key 'username' should be 'user'.
This is the working config:
input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["http://10.135.95.164:9200"]
    user => "elastic"
    password => ""
    index => "instameister"
    manage_template => false
  }
  stdout { codec => json_lines }
}
I'm using the Kafka Java client, version 0.10.2.1. I am able to produce simple messages to Kafka for a "heartbeat" test, but I cannot consume a message from that same topic using the SDK. I am able to consume that message when I go into the Kafka CLI, so I have confirmed the message is there. Here's the function I'm using to consume from my Kafka server, with the props. I pass in the message I produced to the topic only after I have confirmed the produce() was successful; I can post that function later if requested:
private def consumeFromKafka(topic: String, expectedMessage: String): Boolean = {
  val props: Properties = initProps("consumer")
  val consumer = new KafkaConsumer[String, String](props)
  consumer.subscribe(List(topic).asJava)
  var readExpectedRecord = false
  try {
    val records = {
      val firstPollRecs = consumer.poll(MAX_POLLTIME_MS)
      // increase timeout and try again if nothing comes back the first time in case system is busy
      if (firstPollRecs.count() == 0) firstPollRecs else {
        logger.info("KafkaHeartBeat: First poll had 0 records- trying again - doubling timeout to "
          + (MAX_POLLTIME_MS * 2) / 1000 + " sec.")
        consumer.poll(MAX_POLLTIME_MS * 2)
      }
    }
    records.forEach(rec => {
      if (rec.value() == expectedMessage) readExpectedRecord = true
    })
  } catch {
    case e: Throwable => // log error
  } finally {
    consumer.close()
  }
  readExpectedRecord
}

private def initProps(propsType: String): Properties = {
  val prop = new Properties()
  prop.put("bootstrap.servers", kafkaServer + ":" + kafkaPort)
  propsType match {
    case "producer" => {
      prop.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
      prop.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")
      prop.put("acks", "1")
      prop.put("producer.type", "sync")
      prop.put("retries", "3")
      prop.put("linger.ms", "5")
    }
    case "consumer" => {
      prop.put("group.id", groupId)
      prop.put("enable.auto.commit", "false")
      prop.put("auto.commit.interval.ms", "1000")
      prop.put("session.timeout.ms", "30000")
      prop.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
      prop.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
      prop.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest")
      // poll just once, should only be one record for the heartbeat
      prop.put("max.poll.records", "1")
    }
  }
  prop
}
Now when I run the code, here's what it outputs in the console:
13:04:21 - Discovered coordinator serverName:9092 (id: 2147483647 rack: null) for group 0b8947e1-eb68-4af3-ac7b-be3f7c02e76e.
13:04:23 INFO o.a.k.c.c.i.ConsumerCoordinator - Revoking previously assigned partitions [] for group 0b8947e1-eb68-4af3-ac7b-be3f7c02e76e
13:04:24 INFO o.a.k.c.c.i.AbstractCoordinator - (Re-)joining group 0b8947e1-eb68-4af3-ac7b-be3f7c02e76e
13:04:25 INFO o.a.k.c.c.i.AbstractCoordinator - Successfully joined group 0b8947e1-eb68-4af3-ac7b-be3f7c02e76e with generation 1
13:04:26 INFO o.a.k.c.c.i.ConsumerCoordinator - Setting newly assigned partitions [HeartBeat_Topic.Service_5.2018-08-03.13_04_10.377-0] for group 0b8947e1-eb68-4af3-ac7b-be3f7c02e76e
13:04:27 INFO c.p.p.l.util.KafkaHeartBeatUtil - KafkaHeartBeat: First poll had 0 records- trying again - doubling timeout to 60 sec.
And then nothing else, and no errors are thrown, so no records are polled. Does anyone have any idea what's preventing the consume from happening? The subscribe seems to be successful, as I'm able to successfully call listTopics and list partitions with no problem.
Your code has a bug. It seems your line:
if (firstPollRecs.count() == 0)
Should say this instead
if (firstPollRecs.count() > 0)
Otherwise, you end up returning the empty firstPollRecs and then iterating over that, which obviously yields nothing.
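With that one-character change, the poll block from the question would look like this (a sketch; everything else stays exactly as in your code):

val records = {
  val firstPollRecs = consumer.poll(MAX_POLLTIME_MS)
  // keep the records from the first poll if we got any; otherwise retry with a longer timeout
  if (firstPollRecs.count() > 0) firstPollRecs
  else {
    logger.info("KafkaHeartBeat: First poll had 0 records - trying again - doubling timeout to "
      + (MAX_POLLTIME_MS * 2) / 1000 + " sec.")
    consumer.poll(MAX_POLLTIME_MS * 2)
  }
}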
I am trying to create a Kafka topic using the Java API, but I'm getting LEADER_NOT_AVAILABLE.
Code:
int partition = 0;
ZkClient zkClient = null;
try {
    String zookeeperHosts = "localhost:2181"; // If multiple zookeepers then -> String zookeeperHosts = "192.168.20.1:2181,192.168.20.2:2181";
    int sessionTimeOutInMs = 15 * 1000; // 15 secs
    int connectionTimeOutInMs = 10 * 1000; // 10 secs
    zkClient = new ZkClient(zookeeperHosts, sessionTimeOutInMs, connectionTimeOutInMs, ZKStringSerializer$.MODULE$);
    String topicName = "mdmTopic5";
    int noOfPartitions = 2;
    int noOfReplication = 1;
    Properties topicConfiguration = new Properties();
    AdminUtils.createTopic(zkClient, topicName, noOfPartitions, noOfReplication, topicConfiguration);
} catch (Exception ex) {
    ex.printStackTrace();
} finally {
    if (zkClient != null) {
        zkClient.close();
    }
}
Error:
[2017-10-19 12:14:42,263] WARN Error while fetching metadata with correlation id 1 : {mdmTopic5=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
[2017-10-19 12:14:42,370] WARN Error while fetching metadata with correlation id 3 : {mdmTopic5=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
[2017-10-19 12:14:42,479] WARN Error while fetching metadata with correlation id 4 : {mdmTopic5=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
Does Kafka 0.11.0.1 support AdminUtils? Please let me know how to create a topic in this version.
Thanks in advance.
Since Kafka 0.11 there is a proper Admin API for creating (and deleting) topics, and I'd recommend using it instead of connecting directly to Zookeeper.
See AdminClient.createTopics(): http://kafka.apache.org/0110/javadoc/org/apache/kafka/clients/admin/AdminClient.html#createTopics(java.util.Collection)
Generally, LEADER_NOT_AVAILABLE points to network issues rather than issues with your code.
Try:
telnet host port to see if you can connect to all required hosts/ports from your machine.
However, the latest approach is to use the bootstrap servers (the AdminClient API) when creating topics.
A working version of the topic-creation code in Scala would be as follows.
Import the required kafka-clients dependency using sbt:
// https://mvnrepository.com/artifact/org.apache.kafka/kafka-clients
libraryDependencies += "org.apache.kafka" % "kafka-clients" % "2.1.1"
The code for topic creation in Scala:
import java.util.Arrays
import java.util.Properties

import org.apache.kafka.clients.admin.NewTopic
import org.apache.kafka.clients.admin.{AdminClient, AdminClientConfig}

class CreateKafkaTopic {
  def create(): Unit = {
    val config = new Properties()
    config.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "192.30.1.5:9092")

    val localKafkaAdmin = AdminClient.create(config)

    val partitions = 3
    val replication = 1.toShort
    val topic = new NewTopic("integration-02", partitions, replication)
    val topics = Arrays.asList(topic)

    val topicStatus = localKafkaAdmin.createTopics(topics).values()
    //topicStatus.values()
    println(topicStatus.keySet())
  }
}
Hope it helps.
I have the following configuration in application.conf:
bounded-mailbox {
  mailbox-type = "akka.dispatch.BoundedMailbox"
  mailbox-capacity = 100
  mailbox-push-timeout-time = 3s
}

akka {
  loggers = ["akka.event.slf4j.Slf4jLogger"]
  loglevel = INFO
  daemonic = on
}
This is how I configured my actor:
public class MyTestActor extends UntypedActor implements RequiresMessageQueue<BoundedMessageQueueSemantics> {

    @Override
    public void onReceive(Object message) throws Exception {
        if (message instanceof String) {
            Thread.sleep(500);
            System.out.println("message = " + message);
        }
        else {
            System.out.println("Unknown Message");
        }
    }
}
Now this is how I instantiate this actor:
myTestActor = myActorSystem.actorOf(Props.create(MyTestActor.class).withMailbox("bounded-mailbox"), "simple-actor");
After that, in my code I'm sending 3000 messages to this actor:
for (int i = 0; i < 3000; i++) {
    myTestActor.tell(guestName, null);
}
What I expect to see is an exception that my queues are full, but my messages are printed inside the onReceive method every half second, as if nothing happened. So I believe my mailbox configuration is not being applied.
What am I doing wrong?
Updated: I created an actor which subscribes to dead letter events:
deadLetterActor = myActorSystem.actorOf(Props.create(DeadLetterMonitor.class),"deadLetter-monitor");
and installed Kamon for queue monitoring.
After sending 3000 messages to the actor, Kamon shows me the following:
Actor: user/simple-actor
MailBox size:
Min: 100
Avg.: 100.0
Max: 101
Actor: system/deadLetterListener
MailBox size:
Min: 0
Avg.: 0.0
Max: 0
Actor: system/deadLetter-monitor
MailBox size:
Min: 0
Avg.: 0.0
Max: 0
By default Akka discards overflowing messages into DeadLetters and the actor doesn't stop processing:
https://github.com/akka/akka/blob/876b8045a1fdb9cdd880eeab8b1611aa976576f6/akka-actor/src/main/scala/akka/dispatch/Mailbox.scala#L411
But the sending thread will be blocked for the interval configured by mailbox-push-timeout-time before the message is discarded. Try decreasing it to 1ms and see that the following test passes:
import java.util.concurrent.atomic.AtomicInteger

import akka.actor._
import com.typesafe.config.Config
import com.typesafe.config.ConfigFactory._
import org.specs2.mutable.Specification

class BoundedActorSpec extends Specification {

  args(sequential = true)

  def config: Config = load(parseString(
    """
      bounded-mailbox {
        mailbox-type = "akka.dispatch.BoundedMailbox"
        mailbox-capacity = 100
        mailbox-push-timeout-time = 1ms
      }
    """))

  val system = ActorSystem("system", config)

  "some messages should go to dead letters" in {
    system.eventStream.subscribe(system.actorOf(Props(classOf[DeadLetterMetricsActor])), classOf[DeadLetter])
    val myTestActor = system.actorOf(Props(classOf[MyTestActor]).withMailbox("bounded-mailbox"))
    for (i <- 0 until 3000) {
      myTestActor.tell("guestName", null)
    }
    Thread.sleep(100)
    system.shutdown()
    system.awaitTermination()
    DeadLetterMetricsActor.deadLetterCount.get must be greaterThan(0)
  }
}

class MyTestActor extends Actor {
  def receive = {
    case message: String =>
      Thread.sleep(500)
      println("message = " + message)
    case _ => println("Unknown Message")
  }
}

object DeadLetterMetricsActor {
  val deadLetterCount = new AtomicInteger
}

class DeadLetterMetricsActor extends Actor {
  def receive = {
    case _: DeadLetter => DeadLetterMetricsActor.deadLetterCount.incrementAndGet()
  }
}
I'm trying to write a remote HBase client using Java. Here is the code for reference:
package ttumdt.app.connector;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.MasterNotRunningException;
import org.apache.hadoop.hbase.ZooKeeperConnectionException;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.BinaryComparator;
import org.apache.hadoop.hbase.filter.CompareFilter;
import org.apache.hadoop.hbase.filter.Filter;
import org.apache.hadoop.hbase.filter.FilterList;
import org.apache.hadoop.hbase.filter.RowFilter;
import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
import org.apache.hadoop.hbase.util.Bytes;

import java.io.IOException;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class HBaseClusterConnector {
    private final String MASTER_IP = "10.138.168.185";
    private final String ZOOKEEPER_PORT = "2181";

    final String TRAFFIC_INFO_TABLE_NAME = "TrafficLog";
    final String TRAFFIC_INFO_COLUMN_FAMILY = "TimeStampIMSI";

    final String KEY_TRAFFIC_INFO_TABLE_BTS_ID = "BTS_ID";
    final String KEY_TRAFFIC_INFO_TABLE_DATE = "DATE";
    final String COLUMN_IMSI = "IMSI";
    final String COLUMN_TIMESTAMP = "TIME_STAMP";

    private final byte[] columnFamily = Bytes.toBytes(TRAFFIC_INFO_COLUMN_FAMILY);
    private final byte[] qualifier = Bytes.toBytes(COLUMN_IMSI);

    private Configuration conf = null;

    public HBaseClusterConnector() throws MasterNotRunningException, ZooKeeperConnectionException {
        conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", MASTER_IP);
        conf.set("hbase.zookeeper.property.clientPort", ZOOKEEPER_PORT);
        HBaseAdmin.checkHBaseAvailable(conf);
    }

    /**
     * This filter will return the list of IMSIs for a given btsId and time interval
     * @param btsId : btsId for which the query has to run
     * @param startTime : start time for which the query has to run
     * @param endTime : end time for which the query has to run
     * @return returns IMSIs as a set of Strings
     * @throws IOException
     */
    public Set<String> getInfoPerBTSID(String btsId, String date,
                                       String startTime, String endTime)
            throws IOException {
        Set<String> imsis = new HashSet<String>();
        //ToDo : better exception handling
        HTable table = new HTable(conf, TRAFFIC_INFO_TABLE_NAME);
        Scan scan = new Scan();
        scan.addColumn(columnFamily, qualifier);
        scan.setFilter(prepFilter(btsId, date, startTime, endTime));
        // filter to build where timestamp
        Result result = null;
        ResultScanner resultScanner = table.getScanner(scan);
        while ((result = resultScanner.next()) != null) {
            byte[] obtainedColumn = result.getValue(columnFamily, qualifier);
            imsis.add(Bytes.toString(obtainedColumn));
        }
        resultScanner.close();
        return imsis;
    }

    //ToDo : Figure out how valid this filter code is - how the comparison happens
    // with equal or greater than equal etc.
    private Filter prepFilter(String btsId, String date,
                              String startTime, String endTime) {
        byte[] tableKey = Bytes.toBytes(KEY_TRAFFIC_INFO_TABLE_BTS_ID);
        byte[] timeStamp = Bytes.toBytes(COLUMN_TIMESTAMP);

        // filter to build -> where BTS_ID = <<btsId>> and Date = <<date>>
        RowFilter keyFilter = new RowFilter(CompareFilter.CompareOp.EQUAL,
                new BinaryComparator(Bytes.toBytes(btsId + date)));

        // filter to build -> where timeStamp >= startTime
        SingleColumnValueFilter singleColumnValueFilterStartTime =
                new SingleColumnValueFilter(columnFamily, timeStamp,
                        CompareFilter.CompareOp.GREATER_OR_EQUAL, Bytes.toBytes(startTime));

        // filter to build -> where timeStamp <= endTime
        SingleColumnValueFilter singleColumnValueFilterEndTime =
                new SingleColumnValueFilter(columnFamily, timeStamp,
                        CompareFilter.CompareOp.LESS_OR_EQUAL, Bytes.toBytes(endTime));

        FilterList filterList = new FilterList(FilterList.Operator.MUST_PASS_ALL, Arrays
                .asList((Filter) keyFilter,
                        singleColumnValueFilterStartTime, singleColumnValueFilterEndTime));

        return filterList;
    }

    public static void main(String[] args) throws IOException {
        HBaseClusterConnector flt = new HBaseClusterConnector();
        Set<String> imsis = flt.getInfoPerBTSID("AMCD000784", "26082013", "104092", "104095");
        System.out.println(imsis.toString());
    }
}
I'm currently using the Cloudera QuickStart VM to test this.
The problem is: if I run this very code on the VM, it works absolutely fine, but it fails with the error below when run from outside, and I suspect it has something to do with the VM settings rather than anything else. Please note that I've already checked that I can connect to the node manager / job tracker of the VM from the host machine, and that works absolutely fine. When I run the code from my host OS instead of running it on the VM, I get the error below:
2013-10-15 18:16:04.185 java[652:1903] Unable to load realm info from SCDynamicStore
Exception in thread "main" org.apache.hadoop.hbase.MasterNotRunningException: Retried 1 times
at org.apache.hadoop.hbase.client.HBaseAdmin.<init>(HBaseAdmin.java:138)
at org.apache.hadoop.hbase.client.HBaseAdmin.checkHBaseAvailable(HBaseAdmin.java:1774)
at ttumdt.app.connector.HBaseClusterConnector.<init>(HBaseClusterConnector.java:47)
at ttumdt.app.connector.HBaseClusterConnector.main(HBaseClusterConnector.java:117)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:120)
Process finished with exit code 1
Please note that the master node is actually running. The ZooKeeper log shows that it has established a connection with the host OS:
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
6:16:03.274 PM INFO org.apache.zookeeper.server.ZooKeeperServer
Client attempting to establish new session at /10.138.169.81:50567
6:16:03.314 PM INFO org.apache.zookeeper.server.ZooKeeperServer
Established session 0x141bc2487440004 with negotiated timeout 60000 for client /10.138.169.81:50567
6:16:03.964 PM INFO org.apache.zookeeper.server.PrepRequestProcessor
Processed session termination for sessionid: 0x141bc2487440004
6:16:03.996 PM INFO org.apache.zookeeper.server.NIOServerCnxn
Closed socket connection for client /10.138.169.81:50567 which had sessionid 0x141bc2487440004
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
But I see no trace of any activity in Master or RegionServer log.
Please note that my host OS is Mac OS X 10.7.5.
As per the resources available, this should work fine, though some suggest the simple HBase Java client never works. I'm confused and eagerly waiting for pointers! Please reply.
Start your HiveServer2 on a different port and then try connecting.
Command to start HiveServer2 on a different port (make sure hive is on the path):
hive --service hiveserver2 --hiveconf hive.server2.thrift.port=13000
The HBase Java client certainly does work!
The most likely explanation is that your client can't see the machine that the master is running on for some reason.
One possible explanation is that, although you are connecting to Zookeeper using an IP address, the HBase client is attempting to connect to the master using its hostname.
So, if you ensure that you have entries in your hosts file (on the client) that match the hostname of the machine running the master, this may resolve the problem.
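For example, an entry in /etc/hosts on the client might look like the line below. The hostname here is a placeholder; use whatever hostname the VM's master actually reports, together with the MASTER_IP from the question.

10.138.168.185   quickstart.cloudera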
Check that you can access the master Web UI at <hostname>:60010 from your client machine.