Drill not finding drillbit from zookeeper? - java

I am trying out the Drill sample in my project using this example:
https://github.com/vicenteg/DrillJDBCExample/blob/master/src/main/java/com/mapr/drill/DrillJDBCExample.java
I have started drillbits on all my datanodes with the same "cluster-id", and I specify "zk.connect" to point to "zookeeper1,zookeeper2,zookeeper3" in my drill-override.conf (picked up by default, I believe).
I am getting the following error:
java.lang.IllegalStateException: No DrillbitEndpoint can be found
Am I supposed to start drillbits on my zookeeper nodes too, in addition to my datanodes? Or what is wrong?
My drill-override is as follows:
drill.exec: {
cluster-id: "testcluster",
zk.connect: "zookeeper1:2181,zookeeper2:2181,zookeeper3:2181"
}

If you are trying to connect to a different machine where Drill is installed, then when connecting from Windows, give the IP address of the machine where Drill is running.
Important
If in drill-override.conf (on the Linux machine where Drill is running) you have written "zookeeper1" as the node name, then you should modify the "c:\Windows\System32\Drivers\etc\hosts" file on your client machine and give that IP a DNS name.
Example:
192.168.32.84 zookeeper1
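For reference, here is a minimal JDBC sketch of that setup; the ZooKeeper hosts and the "testcluster" cluster-id are taken from the drill-override.conf above, and the sys.drillbits query is just a connectivity check:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class DrillZkConnect {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.drill.jdbc.Driver");
        // The JDBC URL embeds the ZooKeeper quorum and the cluster-id from
        // drill-override.conf: jdbc:drill:zk=<zk hosts>/drill/<cluster-id>
        String url = "jdbc:drill:zk=zookeeper1:2181,zookeeper2:2181,zookeeper3:2181/drill/testcluster";
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT * FROM sys.drillbits")) {
            while (rs.next()) {
                System.out.println(rs.getString(1)); // each registered drillbit
            }
        }
    }
}

If the same IllegalStateException appears here, the drillbits never registered under /drill/testcluster in ZooKeeper, which points back at the drill-override.conf on the drillbit nodes.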

Related

VirtualBox Java API - Get IP Address of VM in VirtualBox

I have a question about getting the IP address of a VM. I referred to a few online resources and some existing answers, but I was still not able to fix it.
https://forums.virtualbox.org/viewtopic.php?f=34&t=65910 - link that most matched with my question.
Configuration :
Host Machine : Windows 8
Guest : Slitaz 4.0
VirtualBox : 4.3.24
VirtualBox Java API : 4.3
Adapter in VM : Host Only Adapter
Slitaz Guest Additions are also installed.
I have created a Java program using the VirtualBox API, and I can basically connect, clone and start a VM in VirtualBox. But what I was not able to do is get the IP address of the machine. (Why I need the IP: I want to SSH into that machine using the JSch Java library and execute some processes.)
This is what I tried so far,
Based on the link mentioned above,
machine.getGuestPropertyValue("/VirtualBox/GuestInfo/Net/<nicid>/V4/IP") - returns empty result
machine.getGuestPropertyValue("/VirtualBox/GuestInfo/Net/Count") - returns empty result
machine.getGuestPropertyValue("/VirtualBox/GuestInfo/Net/<nicid>/MAC") - return empty result
machine.getGuestPropertyValue("/VirtualBox/GuestInfo/OS/Release") - this gives me release version of linux - which is ok
I tried one more thing, to see exactly how many properties are available:
// Holders that enumerateGuestProperties fills with four parallel lists
org.virtualbox_4_3.Holder<List<String>> tempList = new org.virtualbox_4_3.Holder<List<String>>();   // property names
org.virtualbox_4_3.Holder<List<String>> tempList1 = new org.virtualbox_4_3.Holder<List<String>>();  // property values
org.virtualbox_4_3.Holder<List<Long>> tempList2 = new org.virtualbox_4_3.Holder<List<Long>>();      // timestamps
org.virtualbox_4_3.Holder<List<String>> tempList3 = new org.virtualbox_4_3.Holder<List<String>>();  // flags
machine.enumerateGuestProperties("", tempList, tempList1, tempList2, tempList3);
Now in tempList I get the names of all available properties, and in tempList1 their values:
[/VirtualBox/GuestInfo/OS/Product, /VirtualBox/HostInfo/GUI/LanguageID, /VirtualBox/HostInfo/VBoxVerExt, /VirtualBox/GuestAdd/Vbgl/Video/SavedMode, /VirtualBox/GuestInfo/OS/Version, /VirtualBox/GuestAdd/VersionExt, /VirtualBox/GuestAdd/Revision, /VirtualBox/HostGuest/SysprepExec, /VirtualBox/GuestAdd/Vbgl/Video/0, /VirtualBox/HostGuest/SysprepArgs, /VirtualBox/GuestAdd/Version, /VirtualBox/HostInfo/VBoxRev, /VirtualBox/HostInfo/VBoxVer, /VirtualBox/GuestInfo/OS/Release, /VirtualBox/GuestAdd/HostVerLastChecked]
That is why I think this works:
machine.getGuestPropertyValue("/VirtualBox/GuestInfo/OS/Release") - this gives me release version of linux - which is ok
But there is no property related to the IP. Also, I am not sure what exactly nicid is here and how to get it using the API: is it the MAC address of the adapter in the VM, or something else?
machine.getGuestPropertyValue("/VirtualBox/GuestInfo/Net/<nicid>/V4/IP")
Can somebody please help me out here or guide me in the appropriate direction?
Thanks to the VirtualBox forum for solving the issue; here is the link:
https://forums.virtualbox.org/viewtopic.php?f=34&t=66788&sid=bc4d34a5ad65fe3ff5006f91c8690817
But to summarize the solution:
It was made sure that the Guest Additions on Slitaz were the correct version, matching the VirtualBox version.
The problem was that VBoxService was not started automatically during the boot process of Slitaz.
So /etc/rcS.conf was edited to add an extra program to LOAD_MODULES:
.............. /usr/sbin/VBoxService (use which VBoxService to get the path to the service)
Now when Slitaz was restarted, this process was running, and
vboxmanage guestproperty enumerate '<vmname>' listed all the configuration,
and the interesting one was:
Name: /VirtualBox/GuestInfo/Net/0/V4/IP, value: 192.168.56.101, timestamp: 1427467107101664501, flags:
Hope it helps somebody !!!
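For anyone landing here later: once VBoxService is running in the guest, the same property can be read from the Java API. A minimal sketch, assuming the 4.3 web-service bindings with vboxwebsrv on its default URL; the VM name, user, and password are placeholders:

import org.virtualbox_4_3.IMachine;
import org.virtualbox_4_3.IVirtualBox;
import org.virtualbox_4_3.VirtualBoxManager;

public class GuestIp {
    public static void main(String[] args) {
        VirtualBoxManager mgr = VirtualBoxManager.createInstance(null);
        // vboxwebsrv must be running; URL and credentials are placeholders
        mgr.connect("http://localhost:18083", "user", "password");
        try {
            IVirtualBox vbox = mgr.getVBox();
            IMachine machine = vbox.findMachine("Slitaz"); // hypothetical VM name
            // nicid is the adapter index, starting at 0 for the first NIC;
            // the property is only populated while VBoxService runs in the guest
            String ip = machine.getGuestPropertyValue("/VirtualBox/GuestInfo/Net/0/V4/IP");
            System.out.println("Guest IP: " + ip);
        } finally {
            mgr.disconnect();
            mgr.cleanup();
        }
    }
}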

Java client can't find master node: MasterNotDiscoveredException waited for [1m]

I'm using vagrant and I installed ES on it using the debian package:
elasticsearch-1.1.1.deb
In my web app, I am using the jar:
org.elasticsearch:elasticsearch:1.1.1
I am creating my client like:
val node = nodeBuilder.client(true).node
val client: Client = node.client
When I try to index:
val response = client.prepareIndex("articles", "article", article.id.toString).setSource(json).execute.actionGet
The error I get is:
[MasterNotDiscoveredException: waited for [1m]]
I can see my ES instance is working fine by going to:
http://localhost:9200
I ran some test queries from the README file and they worked fine, but now for some reason this isn't working either:
http://localhost:9200/twitter/user/kimchy?pretty=true
I get the error:
{
"error" : "ClusterBlockException[blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];[SERVICE_UNAVAILABLE/2/no master];]",
"status" : 503
}
My Vagrantfile has 2 ports open for Elasticsearch:
config.vm.network "forwarded_port", guest: 9200, host: 9200 # ES
config.vm.network "forwarded_port", guest: 9300, host: 9300 # ES
What seems to be the problem?
Note: my web application isn't using an elasticsearch.yml file, because it is just connecting to the default localhost:9200, from what I understand.
Normally you connect to ES from outside over HTTP (normally, though other protocols are also available) and then talk REST/JSON. So your webapp should use a Scala/Java ES client (see http://www.elasticsearch.org/guide/en/elasticsearch/client/community/current/clients.html) and connect via HTTP to the host which is running ES on port 9200. Port 9300 is only for internode communication (ES is a distributed, clustered system). But there is another way to talk remotely to ES: power up a node which joins the cluster and then query through that node's internal client. But:
In your question above, you try to connect to ES through the internal Java client (internal transport), which starts a node and then tries to join the cluster. That fails because the master node could not be found, maybe due to networking issues. Try to include elasticsearch.yml in the classpath, or use REST as described above. There is also a third option: TransportClient - see http://www.elasticsearch.org/guide/en/elasticsearch/client/java-api/current/client.html#transport-client
See also http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-transport.html and http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-http.html and http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-memcached.html
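A minimal sketch of that third option against a 1.1.x cluster; "elasticsearch" is the 1.x default cluster.name, so change it if your cluster was renamed:

import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;

public class EsTransportExample {
    public static void main(String[] args) {
        // cluster.name must match the server; "elasticsearch" is the 1.x default
        Settings settings = ImmutableSettings.settingsBuilder()
                .put("cluster.name", "elasticsearch")
                .build();
        TransportClient client = new TransportClient(settings)
                .addTransportAddress(new InetSocketTransportAddress("localhost", 9300));
        try {
            // mirrors the prepareIndex call from the question
            client.prepareIndex("articles", "article", "1")
                  .setSource("{\"title\":\"hello\"}")
                  .execute().actionGet();
        } finally {
            client.close();
        }
    }
}

Note the TransportClient talks to port 9300 (the transport protocol), not 9200, so the forwarded_port mapping for 9300 in the Vagrantfile matters here.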
Since you are creating your client node with .client(true), that disables both data storage and master eligibility on your node, if I understand the docs correctly (the source is not very helpful either).
Note that any ES cluster needs at least 1 master node.
First, to clarify the config situation, your main elasticsearch.yml (see reference config) configuration is under /etc/elasticsearch/. You can also configure a second elasticsearch.yml in your src/main/resources folder, which will apply to the nodes you create in your app. I'd recommend doing this as it's way clearer compared to using the mysterious nodeBuilder methods.
Can you show the response when you query http://localhost:9200/_nodes right after starting ES up?
Specifically, if you have
"attributes": {
"master": "true"
},
set on one of the nodes. If so, then it looks like a networking problem, as your client node is unable to contact the master node. I actually had a similar issue when I was setting up, and the solution was to set network.host: 127.0.0.1 in the app's elasticsearch.yml (I wish I knew why).
uncomment discovery.zen.ping.multicast.enabled: false in /etc/elasticsearch/elasticsearch.yml

Error while copying a file from HDFS to Windows machine

There is a Linux VM with Hadoop installed and running.
And there is a Java app running in Eclipse that retrieves data from HDFS.
If I copy file(s) to or from HDFS inside the VM, everything works fine.
But when I run the app from my physical Windows machine, I get the following exception:
WARN hdfs.DFSClient: Failed to connect to /127.0.0.1:50010 for block, add to
deadNodes and continue. java.net.ConnectException: Connection refused: no further
information. Could not obtain BP-*** from any node: java.io.IOException:
No live nodes contain current block. Will get new block locations from namenode and retry
I can only retrieve the list of files from HDFS.
It seems that when retrieving data from a datanode, the client connects to my Windows localhost,
because when I made a tunnel in PuTTY from my localhost to the VM, everything was fine.
Here is my Java code:
Configuration config = new Configuration();
config.set("fs.defaultFS", "hdfs://ip:port/");
config.set("mapred.job.tracker", "hdfs://ip:port");
FileSystem dfs = FileSystem.get(new URI("hdfs://ip:port/"), config, "user");
dfs.copyToLocalFile(false, new Path("/tmp/sample.txt"), new Path("D://sample.txt"), true);
How can it be fixed?
Thanks.
P.S. This error occurs when I am using QuickStart VM from Cloudera.
Your DataNode is advertising its address to the NameNode as 127.0.0.1. You need to re-configure your pseudo-distributed cluster so that the nodes use externally reachable addresses (hostnames or IP addresses) when opening socket services.
I imagine that if you run netstat -atn on your VM, you'll see the Hadoop ports bound to 127.0.0.1 rather than 0.0.0.0 - this means they will only accept internal connections.
You need to look at your VM's /etc/hosts configuration file and ensure the hostname doesn't have an entry resolving to 127.0.0.1.
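As an aside, on Hadoop versions that support it (2.x), there is also a client-side workaround that tells the DFS client to contact datanodes by hostname instead of the IP address the namenode hands back; the address and user below are placeholders:

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsByHostname {
    public static void main(String[] args) throws Exception {
        Configuration config = new Configuration();
        config.set("fs.defaultFS", "hdfs://vm-hostname:8020/");
        // Connect to datanodes by their hostnames rather than the
        // (possibly VM-internal) IPs the namenode reports
        config.set("dfs.client.use.datanode.hostname", "true");
        FileSystem dfs = FileSystem.get(new URI("hdfs://vm-hostname:8020/"), config, "user");
        dfs.copyToLocalFile(false, new Path("/tmp/sample.txt"), new Path("D://sample.txt"), true);
    }
}

The VM's hostname still has to resolve from Windows, e.g. via an entry in c:\Windows\System32\Drivers\etc\hosts.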
Whenever you start a VM, it gets its own IP address, something like 192.x.x.x or 172.x.x.x.
Using 127.0.0.1 for HDFS won't help when you are executing from your Windows box, because that address maps to the local machine. So if you use 127.0.0.1 from your Windows machine, it will think that your HDFS is running on the Windows machine. This is why your connection is failing.
Find the IP that is associated with your VM. Here is a link describing how to get it if you are using Hyper-V: http://windowsitpro.com/hyper-v/quickly-view-all-ip-addresses-hyper-v-vms
Once you get the VM's IP, use it in the application.
You need to change the IP. First go to the Linux VM and find its IP address in its terminal.
The command to see the IP address in the Linux VM is:
ifconfig
Then in your code, change the IP address to the one shown in your Linux VM.

Sun Directory Server can't connect ldap server

I'm installing ArcGIS Server for the Java platform on CentOS 5.5 x86_64. This is not a supported platform, but I have overcome almost every problem preventing the success of the installation. It makes exhaustive use of Sun Directory Server. The last error I received was:
ldap_simple_bind: Can't connect to the ldap server - No route to host
It happens in other applications which make use of it, so it seems to be a problem specific to Sun Directory Server on Linux and Solaris. There is no reported solution. Usually I research a problem as much as I can, but this time I have run out of patience and I need it working as soon as possible. I recognize this as an excellent forum because of its community and the quality of its answers. Can anybody help me with this?
The "No route to host" error suggests that the issue is one of network connectivity between your ArcGIS server (the Sun Directory Server component, as you mention) and the LDAP server. So, a few things to examine, in order:
Do you have an LDAP server set up and running?
Is your LDAP server reachable from your Centos machine outside of the ArcGis server?
Is your ArcGIS server configured with the correct address for the LDAP server (it should be in the web.config file)? Example below:
<connectionStrings>
<add name="ADConnectionString"
connectionString="LDAP://SERVER_LDAP:389/ou=Sigestredi,o=Sicondef,dc=aplicaciones,o=mdef,c= es" />
</connectionStrings>
Disclaimer: I don't know anything about the ArcGIS server per se - I'm just diagnosing the "no route to host error" with a few snippets I picked up from some quick searches of the ArcGis forums.
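To test the second point from plain Java without ArcGIS in the way, a small JNDI sketch can attempt an anonymous bind; the host and port are placeholders to adjust for your setup:

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingException;
import javax.naming.directory.InitialDirContext;

public class LdapPing {
    public static void main(String[] args) {
        Hashtable<String, String> env = new Hashtable<String, String>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://name.subdomain.domain:62000"); // placeholder host/port
        try {
            new InitialDirContext(env).close(); // anonymous bind
            System.out.println("LDAP reachable, anonymous bind OK");
        } catch (NamingException e) {
            // "No route to host" here confirms a network-level problem
            System.out.println("LDAP connection failed: " + e);
        }
    }
}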
ArcGIS includes a Sun Directory Server, so the ArcGIS server and LDAP are on the same machine. The port is set to 62000. When I run the diagnostic tool, check DG028 fails:
DG028 - check LDAP server: is listening
I probed the port using nmap:
nmap localhost -p62000
And it says the port is open. I don't know how to verify whether LDAP is up and running; the startup log doesn't show anything wrong. I have found a config file named ldap.conf:
url ldap://name.subdomain.domain:62000/dc=name,dc=arcgis
admnm agsadmin
And my /etc/hosts is:
127.0.0.1 localhost localhost
ip_direction name.subdomain.domain name.subdomain.domain
I don't have an alias for "name", so:
ping name.subdomain.domain
Works
But:
ping name
Doesn't work
I have never used LDAP, so I don't know what should be in "dc". Could my hosts file be malformed, or is it my ldap.conf?
Another piece of information: the computer I am using is part of a domain. When I installed CentOS, the /etc/hosts file had an alias for the loopback interface as localhost.localdomain, but I removed it.
I would appreciate any help.
I have solved my problem; it was in my /etc/hosts file. I added an alias for my IP address:
127.0.0.1 localhost localhost
ip_direction name.subdomain.domain name
Then I ran the ServerConfig script. This gives a successful installation of ArcGIS Server for the Java platform on Linux. Thanks Greg for your guidance.

Error when trying to connect to Jacorb naming service

I'm hoping to get some help with this weird problem. We're running the JacORB name server and I have a simple client that I'm using to try to connect and do awesome CORBA voodoo. The name server is running, but when I try to start my Java app, I get a "Connection failure" error (org.omg.CORBA.COMM_FAILURE, minor code 201, caused by java.net.ConnectException: Connection refused: connect).
Here's the weird part. The error reports that it's trying to connect using the default port 900, but I'm passing in an argument to try to override the port number of the name service to match the one being used by the name server. My java command is like this:
java -classpath . HelloClient -Djava.endorsed.dirs="bla bla bla"
-Dorg.omg.CORBA.ORBClass=org.jacorb.orb.ORB
-Dorg.omg.CORBA.ORBSingletonClass=org.jacorb.orb.ORBSingleton
-DORBInitRef.NameService=corbaloc::localhost:2809/StandardNS/NameServer-POA/_root
I also tried the parameters without the first capital D (I've seen it both ways and I don't know the difference).
Now, if I put in -ORBInitialPort 2809, then the client does appear to try to connect, but then I get a corba.OBJECT_NOT_EXIST error.
I could use any help or advice anyone has.
Connection refused - this sounds like a firewall issue or the program not running.
Try a telnet <machine> 2809. You should get a "Connected to "
and not a refusal, if everything is running/enabled correctly.
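The same check from Java, in case telnet isn't handy on the client box; the host and port are the question's values and may need adjusting:

import java.io.IOException;
import java.net.Socket;

public class PortCheck {
    public static void main(String[] args) {
        // equivalent to: telnet localhost 2809
        try (Socket s = new Socket("localhost", 2809)) {
            System.out.println("Connected to " + s.getRemoteSocketAddress());
        } catch (IOException e) {
            System.out.println("Refused: " + e.getMessage());
        }
    }
}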
I'm running on a UNIX client so the paths use UNIX style.
Is jacORB installed properly? E.g., get the nameservice entry from the
orb.properties file (in ${JAVA_HOME}/jre/lib/).
I use "ORBInitRef.NameService=corbaloc::localhost:2809/NameServer"
as "NameServer" is used on the production name server and not the other
string of "Standard...."
The other changes in the properties files are setting the paths to UNIX
style (i.e. e:\NS_Ref -> /tmp/NS_Ref)
jacorb.naming.ior_filename=/tmp/NS_Ref
1a. Setting the http:// in the properties file didn't seem to do anything
with regard to resolving on the client side.
1b. NOTE: start ns with:
ns -DOAPort=2809
Log will show:
2010-05-27 10:00:47.777 FINE Created socket listener on 0.0.0.0/0.0.0.0:2809
2010-05-27 10:00:47.777 FINE Using port 2809
Running:
$ lsof | grep 2809
java 27529 jbsymolo 15u IPv6 693300 TCP *:2809 (LISTEN)
$ lsof -Pnl +M -i6
COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME
Naming_Se 9678 1000 7u IPv6 45779 TCP *:51148 (LISTEN)
java 27959 1000 15u IPv6 696092 TCP *:2809 (LISTEN)
Not Running: (shows nothing)
When started, ns will log where it reads the properties from, and it shouldn't
throw any errors. If it does, your properties files have issues.
VM arguments. The -D is used to set system properties; any Java code can
then access any property so defined via System.getProperty(). Even though
I've also seen the "non-D" form used, I've been using the D. Note that -D
options must appear before the class name on the java command line -
anything after the class name is passed to main() as a program argument
rather than to the JVM.
-DORBInitRef.NameService=corbaloc::localhost:2809/NameService
-Dorg.omg.CORBA.ORBClass=org.jacorb.orb.ORB
-Dorg.omg.CORBA.ORBSingletonClass=org.jacorb.orb.ORBSingleton
When running the client in Eclipse, I see the following in the Console:
May 27, 2010 10:01:06 AM org.jacorb.config.JacORBConfiguration init
INFO: base configuration loaded from file /usr/lib/java/jdk1.6.0_18/jre/lib/orb.properties
...
2010-05-27 10:01:09.836 FINE Trying to connect to 127.0.0.1:2809 with timeout=90000.
2010-05-27 10:01:09.844 INFO Connected to 127.0.0.1:2809 from local port 45745
2010-05-27 10:01:09.846 FINE wrote 12 bytes to 127.0.0.1:2809
...
Skipping lots of other read/write traffic
I can't be sure without seeing the rest of the code, but I'm pretty sure you need to change the InitRef string to:
-DORBInitRef.NameService=corbaloc::localhost:2809
When your client connects, this should give you the root naming context for the naming service, and then you can traverse the NamingContext tree to get to your desired server object.
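For completeness, a minimal sketch of that client side once the corbaloc is right; "HelloServer" is a hypothetical name registered in the naming service:

import org.omg.CORBA.ORB;
import org.omg.CosNaming.NamingContextExt;
import org.omg.CosNaming.NamingContextExtHelper;

public class NsLookup {
    public static void main(String[] args) throws Exception {
        // Run with: java NsLookup -ORBInitRef NameService=corbaloc::localhost:2809/NameService
        ORB orb = ORB.init(args, null);
        org.omg.CORBA.Object ref = orb.resolve_initial_references("NameService");
        NamingContextExt root = NamingContextExtHelper.narrow(ref);
        // resolve_str walks the naming tree down from the root context
        org.omg.CORBA.Object obj = root.resolve_str("HelloServer"); // hypothetical binding
        System.out.println("Resolved: " + obj);
    }
}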
