Storm BasicDRPC client execute - java

I am a beginner Storm user. I am trying out the DRPC server in remote mode. I got the DRPC server started and configured the DRPC server location in the storm.yaml file, but I do not understand what the DRPC client code should look like:
https://github.com/nathanmarz/storm-starter/blob/master/src/jvm/storm/starter/BasicDRPCTopology.java
Here is what I did:
Launched DRPC server(s) (storm drpc command)
Configure the locations of the DRPC servers - edited the yaml file and added the local host name
Submit DRPC topologies to Storm cluster - did this; the topology looks like it is up and running.
But how do I get a client to call/execute on this topology? Do I need something like this? https://github.com/mykidong/storm-finagle-drpc-client/blob/master/src/main/java/storm/finagle/drpc/StormDrpcClient.java I tried that, but I keep getting this error:
storm/starter/DRPCClient.java:[68,18] error: execute(String,String) in DRPCClient cannot implement execute(String,String) in Iface
[ERROR] overridden method does not throw TException
What am I missing here? Thanks.

Here is the Storm DRPC documentation. It may be useful for understanding the DRPC call :)
It works just like the following code:
DRPCClient client = new DRPCClient("drpc-host", 3772);
String result = client.execute("reach", "http://twitter.com");
This creates a client connection to the DRPC server host drpc-host on port 3772. The DRPCClient then calls the "reach" function with the argument "http://twitter.com" and returns the result as a string.
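For reference, a minimal standalone client might look like the sketch below, assuming the 0.9.x-era backtype.storm packages used by storm-starter (in Storm 1.x+ the same classes live under org.apache.storm). Note that execute() is declared to throw Thrift's TException; the compile error above typically indicates that the copied DRPCClient source and the Thrift-generated Iface on your classpath come from different Storm versions.

import backtype.storm.utils.DRPCClient;

public class ReachClient {
    public static void main(String[] args) throws Exception {
        // connect to the DRPC server listed under drpc.servers in storm.yaml
        DRPCClient client = new DRPCClient("drpc-host", 3772);
        try {
            // "reach" must match the function name the running topology serves;
            // execute() throws TException, covered here by "throws Exception"
            String result = client.execute("reach", "http://twitter.com");
            System.out.println("reach = " + result);
        } finally {
            client.close();
        }
    }
}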

Related

Kafka will connect in Java but not Python

I am attempting to connect to a Cloudera environment using Kafka and stream data from a topic. I have been able to do this successfully in Java but not in Python. Python appears to connect but is unable to receive the logs. I don't believe my paths or servers are incorrect, because I have connected via Java with the same information.
I have done this successfully before with another Cloudera environment, in Python, and I'm basically copying and pasting from that code. With that being said, is it possible that there are some settings in Cloudera for this environment that are preventing me from receiving the logs via Python?
Here is my Python code (the java.lang import means it runs under Jython):
from java.lang import System
from kafka import KafkaConsumer  # kafka-python client

System.setProperty('java.security.auth.login.config', '<path to jaas.conf>')
System.setProperty('java.security.krb5.conf', '<path to krb5.conf>')

broker = ['<broker1>:9092', '<broker2>:9092', '<broker3>:9092']
try:
    consumer = KafkaConsumer(bootstrap_servers=broker,
                             sasl_kerberos_service_name='kafka',
                             auto_offset_reset='earliest',
                             api_version=(1, 0, 1),
                             session_timeout_ms=30000,
                             enable_auto_commit=True,
                             sasl_mechanism='GSSAPI',
                             security_protocol='SASL_PLAINTEXT')
except Exception as e:
    message_consumer = "Error connecting to kafka" + e.message
    sendAlertEmail(message_consumer)
    logger1.error("Failed to connect to brokers" + e.message)
To test the program, I do:
for message in consumer:
    print(message)
When I attempt to access the environment, it never makes it into the loop. However, I know there are logs for the topic.
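For comparison, a minimal sketch of the Java side that the poster reports working, under the same assumptions (GSSAPI over SASL_PLAINTEXT); the broker list, group id, and topic are placeholders:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class KerberosConsumer {
    public static void main(String[] args) {
        // same JAAS/krb5 settings as the System.setProperty calls above
        System.setProperty("java.security.auth.login.config", "<path to jaas.conf>");
        System.setProperty("java.security.krb5.conf", "<path to krb5.conf>");

        Properties props = new Properties();
        props.put("bootstrap.servers", "<broker1>:9092,<broker2>:9092,<broker3>:9092");
        props.put("group.id", "test-group"); // hypothetical consumer group
        props.put("auto.offset.reset", "earliest");
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "GSSAPI");
        props.put("sasl.kerberos.service.name", "kafka");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("<topic>"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(1000);
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println(record.value());
                }
            }
        }
    }
}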

kafka + zookeeper remote = error

I am trying to install a Kafka & Zookeeper instance on a remote server. I only need one node of each, because I only want to provide remote Kafka for test purposes.
Kafka and Zookeeper are running from the Apache Kafka tarball you can find there (v0.0.9), inside a Docker image.
I am trying to consume/produce using the provided scripts, and trying to produce using my own Java application. Everything works fine if Kafka & ZK are installed on the local server.
Here is the error I get while trying to produce:
BrokerPartitionInfo:83 - Error while fetching metadata [{TopicMetadata for topic RSS ->
No partition metadata for topic RSS due to kafka.common.LeaderNotAvailableException}] for topic [RSS]: class kafka.common.LeaderNotAvailableException
Kafka properties tested
First:
broker.id=0
port=9092
host.name=<external-ip>
zookeeper.connect=localhost:<PORT>
Second:
broker.id=0
port=9092
host.name=<external-ip>
zookeeper.connect=<external-ip>:<PORT>
Third:
broker.id=0
port=9092
host.name=<external-ip>
zookeeper.connect=<external-ip>:<PORT>
advertised.host.name=<external-ip>
advertised.host.port=<external-ip>
Last:
broker.id=0
port=9092
host.name=</etc/host name>
zookeeper.connect=<external-ip>:<PORT>
advertised.host.name=<external-ip>
advertised.host.port=<external-ip>
Here is my "/etc/hosts"
127.0.0.1 kafka kafka
127.0.0.1 localhost
I followed the Getting Started guide, which, if I understood correctly, is a localhost / single-server configuration. I cannot understand what I have to do to get this working with remote calls...
Thanks for your help!
EDIT 1
host.name=localhost
advertised.host.name=politik.cm-cloud.fr
This seems to allow a local consumer and producer (on the server). But if we want to do the same from a remote server, we get:
[2015-12-09 12:44:10,826] WARN Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
java.net.NoRouteToHostException: No route to host
The error does not look like a connectivity problem with Zookeeper / Kafka.
Just follow the instructions in the "quickstart" from http://kafka.apache.org/
BrokerPartitionInfo:83 - Error while fetching metadata [{TopicMetadata for topic RSS ->
Additionally, the error indicates there is no partition info, i.e. the topic has not been created yet. Try creating the topic first and then produce/consume: when producing to a non-existent topic, Kafka will create it based on auto.create.topics.enable in server.properties, but when working remotely it is better to create topics explicitly rather than relying on auto-create.
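For example, using the scripts shipped with the Kafka tarball (topic name taken from the error above; point --zookeeper at whatever your zookeeper.connect uses):

bin/kafka-topics.sh --create --zookeeper <external-ip>:<PORT> --replication-factor 1 --partitions 1 --topic RSS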

How can I produce messages with Kafka 8.2 API in Java?

I'm trying to work with the Kafka API in Java. I'm using the following Maven dependency:
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>0.8.2.0</version>
</dependency>
I'm having trouble connecting to a remote kafka server.
I changed the kafka 'server.properties' file port attribute to be port 8080.
I can start both the zookeeper and the kafka server no problem.
I can also use the console producer and consumer applications that came with the kafka download. (Scala 2.10 version)
I'm using the following client code to create a remote KafkaProducer
Properties propsProducer = new Properties();
propsProducer.put("bootstrap.servers", "172.xx.xx.xxx:8080");
propsProducer.put("key.serializer", org.apache.kafka.common.serialization.ByteArraySerializer.class);
propsProducer.put("value.serializer", org.apache.kafka.common.serialization.ByteArraySerializer.class);
propsProducer.put("topic.metadata.refresh.interval.ms", "0");
KafkaProducer<byte[], byte[]> m_kafkaProducer = new KafkaProducer<byte[], byte[]>(propsProducer);
Once I've created the producer, I can run the following line and get valid topic info returned, granted strTopic is an existing topic name.
List<PartitionInfo> partitionInfo = m_kafkaProducer.partitionsFor(strTopic);
When I try to send a message, I do the following:
ProducerRecord<byte[], byte[]> prMessage = new ProducerRecord<byte[],byte[]>(strTopic, strMessage.getBytes());
RecordMetadata futureData = m_kafkaProducer.send(prMessage).get();
The call to send() blocks indefinitely, and when I manually terminate the process, I see an "ERROR Closing socket because of error" (IOException: Connection reset by peer) on the Kafka server.
Also, it's worth noting that the host.name, advertised.host.name, and advertised.port properties are all still commented out in the 'server.properties' file. Oh, and if I change the line:
propsProducer.put("bootstrap.servers", "172.xx.xx.xxx:8080");
to
propsProducer.put("bootstrap.servers", "127.0.0.1:8080");
and run it on the same server where the Kafka server is installed, it works, but I'm trying to work with it remotely.
Appreciate any help and if I can clarify at all let me know.
After lots of digging, I decided to implement the example found here: Kafka Producer Example. I shortened the code and didn't implement a partitioner class. I updated my pom with the dependency listed and I was still having the same issue. Ultimately, I made some configuration changes and everything worked.
The final piece of the puzzle was defining the Kafka server in /etc/hosts of both the server and the client machines. I added the following to both files.
172.xx.xx.xxx serverHost1
Again, the x's are just masks. Then, I set the advertised.host.name in the server.properties file to serverHost1. NOTE: I got that IP after running an ifconfig on the server machine.
I changed the line
propsProducer.put("metadata.broker.list", "172.xx.xx.xxx:8080");
to
propsProducer.put("metadata.broker.list", "serverHost1:8080");
The Kafka API didn't like the fact that I was defining an IP as a string. Instead, it was looking up the IP from the /etc/hosts file, although the documentation says:
"Hostname the broker will advertise to producers and consumers. If not set, it uses the value for "host.name" if configured. Otherwise, it will use the value returned from java.net.InetAddress.getCanonicalHostName()."
That call returns the IP in string form (what I was previously using) if the name is not defined in the /etc/hosts of the client machine; otherwise it returns the name paired with the IP (serverHost1 in my case). Also, I never set the value of host.name either.
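Putting the pieces together, here is a minimal sketch of the old-style producer from that example with the hostname-based broker list; the topic name and acks setting are illustrative:

import java.util.Properties;
import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class OldApiProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // serverHost1 must resolve on both client and server (see the /etc/hosts entry above)
        props.put("metadata.broker.list", "serverHost1:8080");
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        props.put("request.required.acks", "1");

        Producer<String, String> producer = new Producer<String, String>(new ProducerConfig(props));
        producer.send(new KeyedMessage<String, String>("myTopic", "hello"));
        producer.close();
    }
}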

How to develop (locally) and deploy Storm Topology (remotely)?

I currently work with Netbeans on a Windows machine to develop topologies. When I deploy in local mode:
LocalCluster cluster = new LocalCluster();
cluster.submitTopology("word-count", conf, builder.createTopology());
everything works just fine, but when I try to:
StormSubmitter.submitTopology("word", conf, builder.createTopology());
it obviously tries to deploy the topology in cluster mode and fails, since I don't have a Storm nimbus running on my local computer. I do have Storm deployed on a Digital Ocean droplet, but my current (and inconvenient) solution is to copy the JAR file and use the storm jar ... command to deploy.
My question is: is there a way to tell Netbeans what my nimbus IP address is, so it can deploy the topology remotely (and save me the time)? Thank you in advance!
Check this link
Now I can develop topologies in Netbeans, test them locally, and eventually deploy them to my Nimbus on the cluster. This solution works great for me!
Add to conf file:
conf.put(Config.NIMBUS_HOST, "123.456.789.101"); // your Nimbus's IP
conf.put(Config.NIMBUS_THRIFT_PORT, 6627); // an int is expected here
Also, add the following :
System.setProperty("storm.jar", <path-to-jar>); //link to exact file location (w/ dependencies)
to avoid the following error:
[main] INFO backtype.storm.StormSubmitter - Jar not uploaded to master yet. Submitting jar...
Exception in thread "main" java.lang.RuntimeException: Must submit topologies using the 'storm' client script so that StormSubmitter knows which jar to upload.
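Putting those pieces together, a minimal sketch of the whole submit path (assuming the 0.9.x-era backtype.storm packages; the IP and jar path are placeholders):

import backtype.storm.Config;
import backtype.storm.StormSubmitter;
import backtype.storm.topology.TopologyBuilder;

public class RemoteSubmit {
    public static void main(String[] args) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();
        // ... set spouts and bolts here ...

        Config conf = new Config();
        conf.put(Config.NIMBUS_HOST, "123.456.789.101"); // your Nimbus's IP
        conf.put(Config.NIMBUS_THRIFT_PORT, 6627);       // int is expected here

        // tell StormSubmitter which jar to upload (the jar-with-dependencies)
        System.setProperty("storm.jar", "C:\\path\\to\\topology-with-dependencies.jar");

        StormSubmitter.submitTopology("word", conf, builder.createTopology());
    }
}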
Cheers!
Yes, you can definitely tell your topology your Nimbus IP. The following is example code for submitting a topology to a remote cluster.
Map storm_conf = Utils.readStormConfig();
storm_conf.put("nimbus.host", "<Nimbus Machine IP>");
Client client = NimbusClient.getConfiguredClient(storm_conf).getClient();
String inputJar = "C:\\workspace\\TestStormRunner\\target\\TestStormRunner-0.0.1-SNAPSHOT-jar-with-dependencies.jar";
NimbusClient nimbus = new NimbusClient(storm_conf, "<Nimbus Machine IP>", <Nimbus Machine Port>);
// upload the topology jar to the cluster using StormSubmitter
String uploadedJarLocation = StormSubmitter.submitJar(storm_conf, inputJar);
String jsonConf = JSONValue.toJSONString(storm_conf);
nimbus.getClient().submitTopology("testtopology", uploadedJarLocation, jsonConf, builder.createTopology());
Here is the working example: Submitting a topology to Remote Storm Cluster
You can pass that information using the conf map parameter; you can pass key-value pairs as per your requirements. For a list of accepted parameters, check this page.

java.io.FileNotFoundException: http://[IP:8888]/oozie/versions

Hi, I am following the link below:
http://oozie.apache.org/docs/4.0.1/DG_JMSNotifications.html
snippet
// requires imports: org.apache.oozie.client.OozieClient,
// org.apache.oozie.client.JMSConnectionInfo, javax.naming.Context,
// javax.naming.InitialContext, java.util.Properties
OozieClient oc = new OozieClient("http://IP:8888/oozie");
JMSConnectionInfo jmsInfo = oc.getJMSConnectionInfo();
Properties jndiProperties = jmsInfo.getJNDIProperties();
Context jndiContext = new InitialContext(jndiProperties);
However, as per the sample code above, when I look at the debug information for getting the JMSConnectionInfo, it says:
java.io.FileNotFoundException: http://[ip:8888]/oozie/versions
Is this some configuration issue with oozie-4.0.0-cdh5.1.0 (which I am using)? One more piece of info: I am running the above code in a separate JVM in Eclipse, and Oozie is configured on another machine.
I found the link http://archive.cloudera.com/cdh4/cdh/4/oozie/WebServicesAPI.html, which says:
The Oozie Web Services API is an HTTP REST JSON API.
All responses are in UTF-8.
Assuming Oozie is running at OOZIE_URL, the following web services end points are supported:
/versions
/v1/admin
/v1/job
/v1/jobs
In my case /versions is not supported, so this is the reason. However, I am not sure how I can make my Oozie server support /versions. Please help.
The port I was using was wrong; it should be 11000 instead. Because of this, the OozieClient was not able to establish an HTTP connection to the Oozie server for the REST call. I am adding this in case it is useful to someone else.
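In code, the only change needed is the port in the client URL (11000 is Oozie's default HTTP port; 8888 is typically Hue's port in CDH):

OozieClient oc = new OozieClient("http://IP:11000/oozie");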
