I'm finding that some Apache Curator recipes don't work with older versions of ZooKeeper. This isn't an issue except that I keep having developers in my company try to use some code I wrote, only to have it fail without any errors or log messages because they're running an old ZooKeeper version on their local machine. So I want to retrieve the version of the ZooKeeper server to which I'm connected and die with a useful error message if the version is too old. However, I can't find any way to get the server's version number with either the Curator or ZooKeeper APIs. Anyone know how to do it?
The ZooKeeper "four letter words" can help. You can connect to the local ZK instance on port 2181 and execute the "srvr" four letter word. You'll get back several lines of info, one of which is the version.
Have a look at Exhibitor's FourLetterWord class for a sample of how to get this programmatically: https://github.com/Netflix/exhibitor/blob/40a02452dc3133fe37bf4ecf076bda99c29ab6ec/exhibitor-core/src/main/java/com/netflix/exhibitor/core/state/FourLetterWord.java
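A minimal sketch of doing this directly from Java with nothing but the JDK, assuming the standard `srvr` response format (a first line like `Zookeeper version: 3.4.6-1569965, built on ...`); the class and method names here are made up for illustration:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class ZkVersionCheck {

    // Send a four letter word (e.g. "srvr") to a ZooKeeper server and
    // return the raw multi-line response. Requires a running server.
    public static String fourLetterWord(String host, int port, String cmd) throws Exception {
        try (Socket socket = new Socket(host, port)) {
            OutputStream out = socket.getOutputStream();
            out.write(cmd.getBytes(StandardCharsets.US_ASCII));
            out.flush();
            StringBuilder response = new StringBuilder();
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(socket.getInputStream(), StandardCharsets.US_ASCII));
            String line;
            while ((line = in.readLine()) != null) {
                response.append(line).append('\n');
            }
            return response.toString();
        }
    }

    // Extract "3.4.6" from a line like
    // "Zookeeper version: 3.4.6-1569965, built on 02/20/2014 09:09 GMT".
    public static String parseVersion(String srvrResponse) {
        for (String line : srvrResponse.split("\n")) {
            if (line.startsWith("Zookeeper version:")) {
                String rest = line.substring("Zookeeper version:".length()).trim();
                int dash = rest.indexOf('-');
                return dash >= 0 ? rest.substring(0, dash) : rest;
            }
        }
        throw new IllegalStateException("No version line in srvr response");
    }

    public static void main(String[] args) throws Exception {
        // Against a live server you would do:
        //   String version = parseVersion(fourLetterWord("localhost", 2181, "srvr"));
        // Here we just demonstrate the parsing on a canned response.
        String sample = "Zookeeper version: 3.4.6-1569965, built on 02/20/2014 09:09 GMT\n"
                + "Latency min/avg/max: 0/0/17\n";
        System.out.println(parseVersion(sample)); // prints "3.4.6"
    }
}
```

From there you can split the version string on dots and fail fast with a useful message if it's below your minimum.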
To clarify the issue: I am running an MQ simulator jar given to us by a vendor (essentially to verify our MQ setup prior to them delivering MQ functionality).
The program works when I run it from my laptop.
I get "unable to connect to queue manager, check remote access to MQ installation" when I run it from the linux dev server (which is where we actually want to verify the connectivity from). Same exact MQ properties as when I run from my windows laptop. I ran telnet from the server to the host and port and confirmed that worked. So I imagine there shouldn't be any firewall rule preventing the connection.
The program doesn't actually print out an error code, just the message (which I am pretty sure is the first question that is going to get asked of me).
I've been talking to the folks that set up MQ, and they are really making it sound like it isn't an issue on their end despite the fact most everything I can find on this message is indicating it is probably a problem with configuration there.
Not super linux savvy so any tips on any way I can monitor and get more information on a root cause other than this error message I am seeing would be great.
Each time when I need to debug a Java application deployed in a cluster environment, I'm in big trouble.
Company environments (Test, Acceptance, etc.) are usually clusters with multiple servers, and in front of the cluster there is a proxy server that forwards the requests (HTTP) to one of the servers in the cluster. If you have no access to the individual servers and you are not allowed to launch the app from one particular server, then you must use the endpoint exposed by the proxy.
As far as I know, one IntelliJ instance can open only one remote debug connection. That means if the request goes to another server in the cluster (where my debugger is not attached), then I cannot see anything in my debug window. Maybe next time.
If you are lucky you can stop all servers in the cluster except the one you are debugging. But stopping servers is also not easy, especially in Acceptance environments.
According to my colleagues, I can debug multiple servers with one Eclipse instance, but I really do not want to use Eclipse.
Okay, I guess that I can copy the whole source code to a different folder, open the code with a new IntelliJ instance, and from there I can connect to the 2nd server in the cluster. But this is a painful hack.
Is there any normal way to debug a cluster environment with multiple servers in IntelliJ?
You can open as many debug sessions as there are remote JVMs running on the remote ends (each on its own IP address and TCP port) by creating multiple Run Configurations and launching them all:
For example, to connect with the above host and port, the remote JVM must be started with
-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005
if you use JDK version 8 or less, and the
-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5005
if you use JDK version 9 or newer.
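When juggling several Run Configurations, it helps to verify which debug port each server actually opened. A small sketch using only the JDK, assuming the JDWP agent flags shown above; the helper name `extractJdwpPort` is made up for illustration:

```java
import java.lang.management.ManagementFactory;
import java.util.List;

public class JdwpPortCheck {

    // Scan JVM arguments for the JDWP agent flag and return the configured
    // port, or -1 if the JVM was not started with remote debugging enabled.
    public static int extractJdwpPort(List<String> jvmArgs) {
        for (String arg : jvmArgs) {
            if (arg.startsWith("-agentlib:jdwp=")) {
                for (String option : arg.substring("-agentlib:jdwp=".length()).split(",")) {
                    if (option.startsWith("address=")) {
                        String address = option.substring("address=".length());
                        // Handles JDK 9+ style "*:5005" as well as the plain
                        // "5005" form accepted by JDK 8 and older.
                        int colon = address.lastIndexOf(':');
                        return Integer.parseInt(colon >= 0 ? address.substring(colon + 1) : address);
                    }
                }
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        // Inspect the arguments of the currently running JVM.
        List<String> current = ManagementFactory.getRuntimeMXBean().getInputArguments();
        int port = extractJdwpPort(current);
        System.out.println(port == -1 ? "debugging not enabled" : "JDWP listening on port " + port);
    }
}
```

Running something like this on each cluster node (or just checking the process arguments) confirms every server exposes a distinct port you can point a Run Configuration at.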
So I was wondering if it's possible to run an MQTT broker on the Google App Engine platform?
Couldn't find any information about it (or maybe I might be using the wrong keywords).
I've got my GAE app running on Java, so I'd like to go in the direction of running the MQTT broker on GAE using a backend.
EDIT:
Did some further research and it seems Moquette is running on Java. Does someone have experience running Moquette on the GAE?
EDIT2:
Ok, it seems the examples of Moquette run inside an OSGi container, which is unavailable in GAE. Looking for a script to start this server on GAE.
MQTT is a protocol on top of TCP. In order to run an MQTT server, one needs to be able to open a listening socket. Those are still not supported on normal AppEngine instances.
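To make the distinction concrete, this is the capability a broker needs and normal instances lack: binding a server socket and accepting inbound TCP connections. A minimal stdlib sketch (no MQTT framing, just the listening part; the demo client runs in-process so the example is self-contained):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class ListeningSocketDemo {

    // Bind a listening socket on an ephemeral port, accept one connection
    // from a loopback client, and return the first line the client sends.
    // accept() is exactly the capability a broker needs: it blocks until
    // an inbound TCP connection arrives.
    public static String roundTrip() throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {
            int port = server.getLocalPort();

            // Simulate a remote MQTT client connecting from another thread.
            Thread client = new Thread(() -> {
                try (Socket s = new Socket("localhost", port);
                     PrintWriter out = new PrintWriter(
                             s.getOutputStream(), true, StandardCharsets.UTF_8)) {
                    out.println("hello broker");
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            });
            client.start();

            try (Socket conn = server.accept();
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
                String line = in.readLine();
                client.join();
                return line;
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("received: " + roundTrip());
    }
}
```

A real broker would bind port 1883 and speak MQTT over the accepted connection, but the `ServerSocket`/`accept()` pair above is the part the sandbox forbids.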
Note: GAE backends have been replaced: now you just have automatically scaled (aka frontend) instances and manually scaled (aka backend) instances.
Back to your problem: Managed VMs have most of the benefits of GAE (access to services), but run a full JVM, which allows listening sockets.
An alternative to Moquette would also be the HiveMQ broker, it also runs on Java and can be easily installed. All the documentation is available here.
We haven't tested it on GAE yet, but if you have any problems running it, you could ask in the support forum.
Update: If Peter Knego is right, then HiveMQ or any other MQTT broker won't work on GAE.
Full disclosure: I work for the company that develops HiveMQ.
Cheers,
Christian
@Peter Knego is definitely right, and all I would add to his answer is that if you manage to configure your application to use a custom runtime on the Managed VMs of App Engine or Compute Engine, then you will be able to run your MQTT broker perfectly well, as long as you define a firewall rule to allow TCP connections on the port your broker is listening on. By default the ports are blocked for security reasons.
I have installed Hadoop and Hbase on a VirtualBox VM running Ubuntu; both Hadoop and Hbase are running successfully in pseudo-distributed mode. I have disabled IPv6 on Ubuntu and changed the localhost to 127.0.0.1 in the hosts file on the VM.
I am trying to write some basic Java code on a Windows machine in Eclipse to connect to the Hbase instance, create a table, insert and retrieve data, etc. The code fails with an error that it cannot connect to the master. However, it makes the Zookeeper connection to the VM just fine.
On the Windows machine, I am able to view the HBase instance info in a web browser using the same IP address and port that I specify in the Java code.
I have searched everywhere and tried everything that I could find, but it is still failing to connect to the master after it makes the zookeeper connection.
I have read that others have had this problem too, but no one has posted a solution.
Please help! Thanks!
The IP and port used to view information are not the ones used to read/write from/into HBase. To do so you need to use either the REST API (included in HBase) or Apache Thrift (two Thrift servers are included in HBase: thrift and thrift2).
I would recommend using Apache Thrift (thrift2).
To start REST use:
$HBASE_INSTALL_DIR/bin/hbase-daemon.sh start rest
To start Thrift use:
$HBASE_INSTALL_DIR/bin/hbase-daemon.sh start thrift
To start Thrift (v2) use:
$HBASE_INSTALL_DIR/bin/hbase-daemon.sh start thrift2
To use the Thrift client from Java, for example, you will need to install Thrift on the server and then generate the Java classes using the Thrift file included with HBase.
By default Thrift listens on port 9090 and REST on port 8080.
Useful links:
HBase Thrift
HBase REST
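For instance, fetching the cluster version over the REST gateway needs nothing beyond the JDK. A sketch, assuming REST is running on its default port 8080 and exposing the standard `/version/cluster` endpoint; here the request logic runs against an in-process stub server so the example is self-contained:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class HBaseRestDemo {

    // GET a path from the REST gateway and return the plain-text body.
    // Against a real install the base would be http://<hbase-host>:8080.
    public static String get(String base, String path) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(base + path).openConnection();
        conn.setRequestProperty("Accept", "text/plain");
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            StringBuilder body = new StringBuilder();
            String line;
            while ((line = in.readLine()) != null) {
                body.append(line);
            }
            return body.toString();
        } finally {
            conn.disconnect();
        }
    }

    public static void main(String[] args) throws Exception {
        // Stub standing in for the HBase REST gateway's /version/cluster endpoint.
        HttpServer stub = HttpServer.create(new InetSocketAddress(0), 0);
        stub.createContext("/version/cluster", exchange -> {
            byte[] body = "0.98.7-hadoop2".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        stub.start();
        try {
            String base = "http://localhost:" + stub.getAddress().getPort();
            System.out.println(get(base, "/version/cluster"));
        } finally {
            stub.stop(0);
        }
    }
}
```

Pointing `get` at the real gateway instead of the stub gives you a quick connectivity check before debugging the heavier client APIs.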
Ok, someone gave me some 1-on-1 help that fixed the problem and I wanted to pass it along. It turned out to be an IP addressing issue with the VM and with my Windows machine. First, in the /etc/hosts file on the VM, I had to take out '127.0.0.1 localhost' and instead insert ' localhost'. Second, on my Windows hosts file, I had to add ' '. Thankfully, that fixed the problem. Please let me know if this is unclear, since I have seen this problem posted quite a few times without a suitable resolution. Also, since I am writing Java code to access the HBase instance in the VM, there was no need to use Thrift or REST: the Java API was sufficient.
I am using the ZeroMQ PUB/SUB model, where PUB and SUB are two different applications deployed on the same machine on WebSphere 6.1. This model works fine on my local machine, but when I deploy it on a remote Unix box it isn't working: my SUB never receives a message from PUB. I tried all the options I could find on the web (localhost, 127.0.0.1) but no luck. Appreciate any help on this. I am using JeroMQ 3.2.2.
Thanks
Akash
If you're using multicast, then you need to enable loopback on the socket. As a result, the sender app will also receive the data if it's listening for it.
We also faced the same issue, and it was fixed by using the following settings:
On the publisher side, bind to a wildcard address: tcp://*:<port>
On the subscriber side, connect using the publisher's machine name: tcp://<publisher-host>:<port>