Connecting to couchbase running on alternate ports using java client - java

I'm using couchbase-client 2.3.2 for Java and Couchbase Server Community 4.0.
I'm experimenting with running Couchbase on non-standard ports, following the documentation on the Couchbase website.
I've managed to start Couchbase on these alternate ports, but I've only managed to change some of the ports in the Java client. Here's my code:
final CouchbaseEnvironment env = DefaultCouchbaseEnvironment.builder()
        .bootstrapCarrierDirectPort(21210) // key-value (carrier) port, default 11210
        .bootstrapHttpDirectPort(9091)     // cluster manager HTTP port, default 8091
        .build();
return CouchbaseCluster.create(env, "10.0.2.15");
My program is able to connect to Couchbase and do some things; however, I still need to change the view port (default 8092) and the query port (default 8093) in the client. As a result I'm met with these errors:
2016-09-30 14:03:49.696 [] WARN c.c.c.c.e.Endpoint - [null][QueryEndpoint]: Could not connect to endpoint, retrying with delay 32 MILLISECONDS: ! java.net.ConnectException: Connection refused: /10.0.2.15:8093
2016-09-30 14:03:52.077 [] WARN c.c.c.c.e.Endpoint - [null][ViewEndpoint]: Could not connect to endpoint, retrying with delay 2048 MILLISECONDS: ! java.net.ConnectException: Connection refused: /10.0.2.15:8092
So the client still tries to connect to 8092 and 8093, when in fact I've changed those to 9092 and 9093.

From the JavaDoc on 2.3.4 (http://docs.couchbase.com/sdk-api/couchbase-java-client-2.3.4/), I believe what you want is this:
DefaultCouchbaseEnvironment.Builder viewEndpoints(int viewServiceEndpoints)
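For context, that call slots into the same environment builder. A minimal sketch of adding it (note that, per the 2.3.x Javadoc, the parameter is the number of view service endpoints to open per node, not a port number):
final CouchbaseEnvironment env = DefaultCouchbaseEnvironment.builder()
        .bootstrapCarrierDirectPort(21210)
        .bootstrapHttpDirectPort(9091)
        .viewEndpoints(1) // number of view endpoints per node; there is no per-port setter here
        .build();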

Even though it's completely undocumented, you need to add those ports to static_config as well:
{capi_port, 9092}.
{query_port, 9093}.
and then it works. I hope someone at Couchbase sees this and updates their documentation :)
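Putting it together, here's a rough sketch of the client side once the server-side ports line up; the bucket name and document id are placeholders, and the static_config location (typically /opt/couchbase/etc/couchbase/static_config on Linux installs) may differ on your setup:
import com.couchbase.client.java.Bucket;
import com.couchbase.client.java.CouchbaseCluster;
import com.couchbase.client.java.env.CouchbaseEnvironment;
import com.couchbase.client.java.env.DefaultCouchbaseEnvironment;

// Client-side overrides for the non-default manager/key-value ports.
CouchbaseEnvironment env = DefaultCouchbaseEnvironment.builder()
        .bootstrapCarrierDirectPort(21210)
        .bootstrapHttpDirectPort(9091)
        .build();
CouchbaseCluster cluster = CouchbaseCluster.create(env, "10.0.2.15");

// Views and N1QL are reached on whatever ports the cluster advertises,
// i.e. 9092/9093 after the static_config change above.
Bucket bucket = cluster.openBucket("default");        // placeholder bucket name
System.out.println(bucket.get("some-document-id"));   // placeholder document id
cluster.disconnect();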

Related

Java RMI - Lookup Success , but method call fails - Onpremise Vs AWS

I have Java code which makes a connection to a Java RMI server - a lookup followed by a method invocation.
Both machines are under the same firewall on-premise, and everything works as expected.
When my Java client moved to AWS (the RMI server is still running on-premise), it fails with the error below.
The lookup succeeds, but the method call fails:
Lookup for Remote Object Successful.
ErrorMessage:startupFunction : RemoteException Caught.. Connection refused to host: XXXXX ; nested exception is:
java.net.ConnectException: Connection timed out
PS: I see a similar issue in this post, but nothing seems to work in my case.
You need to export your remote object on a fixed port, and open that port in your firewall.
Fixed this by opening the port used by the RMI method call. We ran the request from on-premise, ran netstat on the RMI host, and captured the port list. This way we could figure out the port and enable it in the Security Group in AWS. Thanks all for your help.
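For reference, a minimal sketch of exporting the remote object on a fixed port so the firewall / Security Group rule can be created up front (MyRemoteService, its implementation, and the port numbers are placeholders):
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

// Export on a fixed, known port instead of a random ephemeral one.
MyRemoteService impl = new MyRemoteServiceImpl();
MyRemoteService stub =
        (MyRemoteService) UnicastRemoteObject.exportObject(impl, 51099);

// The registry itself also listens on a fixed port (1099 by convention).
Registry registry = LocateRegistry.createRegistry(1099);
registry.rebind("MyRemoteService", stub);
Both 51099 and 1099 then need to be reachable from AWS to the on-premise host.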

How to connect to a running bigtable emulator from java

I am trying to use the bigtable emulator from gcloud beta emulators.
I launch the emulator and grab the hostname (localhost) and port (in this instance 8885):
gcloud beta emulators bigtable start
Executing: /usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/platform/bigtable-emulator/cbtemulator --host=localhost --port=8885
I am trying to connect to the emulator from a Java test client; here is what I provide:
Configuration conf = BigtableConfiguration.configure(projectId, instanceId);
if (!Strings.isNullOrEmpty(host)) {
    conf.set(BigtableOptionsFactory.BIGTABLE_HOST_KEY, host);
    conf.set(BigtableOptionsFactory.BIGTABLE_PORT_KEY, Integer.toString(port));
}
connection = BigtableConfiguration.connect(conf);
try (Table table = connection.getTable(TableName.valueOf("tName"))) {
    table.put(<Put instance>);
}
When I execute the test code I get:
16:36:37.369 [bigtable-batch-pool-1] INFO com.google.cloud.bigtable.grpc.async.AbstractRetryingRpcListener - Retrying failed call. Failure #1, got: Status{code=UNAVAILABLE, description=null, cause=java.net.ConnectException: Connection refused: localhost/0:0:0:0:0:0:0:1:8885}
java.net.ConnectException: Connection refused: localhost/0:0:0:0:0:0:0:1:8885
I am using the library: com.google.cloud.bigtable:bigtable-hbase-1.2:0.9.1
Any idea what I am doing wrong?
Thanks!
You need one additional config property to be set:
conf.set(BigtableOptionsFactory.BIGTABLE_USE_PLAINTEXT_NEGOTIATION, "true");
Also, from the log message it looks like it's trying to connect to an IPv6 address, which I don't think will work. Double-check that host is a valid IPv4 address.
The java client will make this easier to do in the near future.
Now you can set
configuration.set(BigtableOptionsFactory.BIGTABLE_EMULATOR_HOST_KEY, "<HOST:PORT>");
to connect to an emulator.
Also "https://github.com/googleapis/java-bigtable/tree/master/google-cloud-bigtable-emulator" can be used to start emulators programmatically for tests etc.

kafka + zookeeper remote = error

I am trying to install a Kafka & ZooKeeper instance on a remote server. I only need one node of each, because I only want to provide a remote Kafka for test purposes.
Kafka and ZooKeeper are running from the Apache Kafka tarball you can find there (v0.0.9), inside a Docker image.
I'm trying to consume/produce using the provided scripts, and also to produce using my own Java application. Everything works fine if Kafka & ZooKeeper are installed on the local server.
Here is the error I get while trying to produce:
BrokerPartitionInfo:83 - Error while fetching metadata [{TopicMetadata for topic RSS ->
No partition metadata for topic RSS due to kafka.common.LeaderNotAvailableException}] for topic [RSS]: class kafka.common.LeaderNotAvailableException
Kafka properties tested
First :
broker.id=0
port=9092
host.name=<external-ip>
zookeeper.connect=localhost:<PORT>
Second:
broker.id=0
port=9092
host.name=<external-ip>
zookeeper.connect=<external-ip>:<PORT>
Third:
broker.id=0
port=9092
host.name=<external-ip>
zookeeper.connect=<external-ip>:<PORT>
advertised.host.name=<external-ip>
advertised.host.port=<external-ip>
Last:
broker.id=0
port=9092
host.name=</etc/host name>
zookeeper.connect=<external-ip>:<PORT>
advertised.host.name=<external-ip>
advertised.host.port=<external-ip>
Here is my "/etc/hosts"
127.0.0.1 kafka kafka
127.0.0.1 localhost
I followed the Getting Started guide, which if I understood correctly is a localhost / single-server configuration. I cannot understand what I have to do to get this working with remote calls...
Thanks for your help !
EDIT 1
host.name=localhost
advertised.host.name=politik.cm-cloud.fr
This seems to allow a local consumer (on the server) and producer. But if we try to do the same from a remote server, we get:
[2015-12-09 12:44:10,826] WARN Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
java.net.NoRouteToHostException: No route to host
The error does not look like a connectivity problem with ZooKeeper / Kafka.
Just follow the instructions in the "quickstart" from http://kafka.apache.org/
BrokerPartitionInfo:83 - Error while fetching metadata [{TopicMetadata for topic RSS ->
Additionally, the error indicates there is no partition info, i.e. the topic has not yet been created. Try creating the topic first and then produce/consume. When producing to a non-existent topic, Kafka will create it based on auto.create.topics.enable in server.properties, but for a remote setup it is better to create topics explicitly rather than relying on auto-create.
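For the Java producer side, a minimal sketch (assuming the 0.9+ producer API; the topic name is taken from the question, and the broker address must be the advertised host/port that is reachable from the remote machine):
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

Properties props = new Properties();
// Must match advertised.host.name / port, since that is what the broker
// hands back in its metadata to remote clients.
props.put("bootstrap.servers", "politik.cm-cloud.fr:9092");
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

try (Producer<String, String> producer = new KafkaProducer<>(props)) {
    // The "RSS" topic should already exist (or auto.create.topics.enable=true).
    producer.send(new ProducerRecord<>("RSS", "key", "value"));
}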

how can we add a document using solr cloud server

While adding a document using the SolrCloud server I'm getting the following exception:
60 [main] INFO org.apache.solr.common.cloud.ConnectionManager - Waiting for client to connect to ZooKeeper
65 [main-SendThread(jmajeed.ibsorb.com:8982)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server jmajeed.ibsorb.com/192.168.70.91:8982. Will not attempt to authenticate using SASL (unknown error)
69 [main-SendThread(jmajeed.ibsorb.com:8982)] INFO org.apache.zookeeper.ClientCnxn - Socket connection established to jmajeed.ibsorb.com/192.168.70.91:8982, initiating session
Exception in thread "main" java.lang.RuntimeException: java.util.concurrent.TimeoutException: Could not connect to ZooKeeper 192.168.70.91:8982/#/hotelcontent within 10000 ms
Does anybody have any idea why this is happening?
Thanks.
Have you disturbed the default configuration of the Solr nodes? By default, if you do not specify a port, the first node in the cluster starts on port 8983, so check this first. If this is not the problem, check whether the cluster is up by accessing the admin UI of SolrCloud, then see whether all the shards in the cluster are alive by clicking on the Cloud tab.
If everything is fine and you are still facing the above problem, then you may be trying to access a remote SolrCloud server and it is a firewall issue.
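If it helps, a rough sketch of adding a document with SolrJ against SolrCloud once the cluster is reachable (assuming the SolrJ 4.x CloudSolrServer API; the ZooKeeper address and collection name are taken loosely from the log above and are placeholders):
import org.apache.solr.client.solrj.impl.CloudSolrServer;
import org.apache.solr.common.SolrInputDocument;

// Plain host:port of ZooKeeper, without any admin-UI "#/..." fragment.
CloudSolrServer server = new CloudSolrServer("192.168.70.91:8982");
server.setDefaultCollection("hotelcontent"); // placeholder collection name
server.connect();

SolrInputDocument doc = new SolrInputDocument();
doc.addField("id", "doc-1");
doc.addField("name", "example document");
server.add(doc);
server.commit();
server.shutdown();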

Cannot connect to embedded Cassandra

I'm trying to start Cassandra using code, and I can't connect to it. When I telnet to port 7000, it does connect, but when I try to connect to 9042 (the "native transport" port) I get a "connection refused". So, somehow, the native transport isn't happening.
My startup code:
File file = new File(home, "etc/cassandra.yaml");
System.setProperty("cassandra.config", "file:" + file.getPath());
CassandraDaemon cassandra = new CassandraDaemon();
cassandra.init(null);
My cassandra.yaml contains:
start_native_transport: true
native_transport_port: 9042
The logs indicate that Cassandra is starting. I see no reference in the logs to any native transport, even when the log level is set to DEBUG. No references to port 9042.
I'm on Windows. I don't think it's a firewall issue because I'm trying to connect from localhost.
Any ideas?
Have you tried calling the .start method?
I've implemented an embedded Cassandra server in Achilles; there's an example of working code here: https://github.com/doanduyhai/Achilles/blob/master/achilles-core/src/main/java/info/archinnov/achilles/embedded/CassandraEmbedded.java
CassandraDaemon cassandraDaemon = new CassandraDaemon();
cassandraDaemon.activate();
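In other words, init() on its own only prepares the daemon; the native transport is started by start(), which activate() takes care of. A minimal sketch based on the snippets above (the yaml path and 'home' variable come from the question):
import java.io.File;
import org.apache.cassandra.service.CassandraDaemon;

File file = new File(home, "etc/cassandra.yaml"); // 'home' as in the question
System.setProperty("cassandra.config", "file:" + file.getPath());

CassandraDaemon cassandra = new CassandraDaemon();
// activate() runs setup and then starts the services, which should bring up
// the native transport on native_transport_port (9042 here).
cassandra.activate();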
