Initialize RabbitMQ metrics for Java

I am unsuccessfully trying to use RabbitMQ metrics support for Java.
My objective is to get some messaging statistics into my Java program. When testing I use a RabbitMQ instance at localhost, and I have put some test data into a test queue on a test virtual host using the RabbitMQ web interface.
My non-working code is:
ConnectionFactory connectionFactory = new ConnectionFactory();
connectionFactory.setHost(host); // localhost
connectionFactory.setUsername(userName);
connectionFactory.setPassword(password);
connectionFactory.setPort(port); // 5672
connectionFactory.setVirtualHost(virtualHost);
StandardMetricsCollector metrics = new StandardMetricsCollector();
connectionFactory.setMetricsCollector(metrics);
It seems like the metrics object is not properly initialized:
Metrics have the default value 0
Within metrics there are properties called initialized that are set to false (for instance metrics.getPublishedMessages().m1Rate.initialized)
So, it seems I am missing something important here despite trying to follow the official documentation.
As a workaround, I'm currently using HTTP requests to the API to get some basic messaging statistics, but the API is very limited.
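Note that the Java client's MetricsCollector is client-side only: it records activity performed through connections created from the factory it is attached to, so messages placed on the broker through the web interface will not show up in it, and the counters stay at 0 until this client itself publishes or consumes. A minimal sketch of the expected flow, assuming a local broker and a hypothetical test-queue:
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.impl.StandardMetricsCollector;

ConnectionFactory factory = new ConnectionFactory();
factory.setHost("localhost");
StandardMetricsCollector metrics = new StandardMetricsCollector();
factory.setMetricsCollector(metrics); // attach before creating any connection

try (Connection connection = factory.newConnection();
     Channel channel = connection.createChannel()) {
    channel.queueDeclare("test-queue", false, false, false, null);
    channel.basicPublish("", "test-queue", null, "hello".getBytes());
}

// Non-zero only because this client itself published a message
System.out.println(metrics.getPublishedMessages().getCount());
For broker-wide statistics (queue depths, rates across all clients), the management HTTP API you are already using remains the intended interface.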

Failed to get driver instance for jdbcUrl=jdbc:postgresql:///<dbname> error for CloudSQL

I am trying to connect to my GCP project's PostgreSQL Cloud SQL instance from my local machine. The PostgreSQL instance doesn't have a public IP, only a private one.
Properties connProps = new Properties();
connProps.setProperty("user", "XXX-compute#developer.gserviceaccount.com");
connProps.setProperty("password", "password");
connProps.setProperty("sslmode", "disable");
connProps.setProperty("socketFactory", "com.google.cloud.sql.postgres.SocketFactory");
connProps.setProperty("cloudSqlInstance", "coral-XXX-XXXX:us-central1:mdm");
connProps.setProperty("enableIamAuth", "true");
HikariConfig config = new HikariConfig();
config.setJdbcUrl(jdbcURL);
config.setDataSourceProperties(connProps);
config.setConnectionTimeout(10000); // 10s
HikariDataSource connectionPool = new HikariDataSource(config);
I get the following error:
Failed to get driver instance for jdbcUrl=jdbc:postgresql:///mdm
java.sql.SQLException: No suitable driver
I have verified that my username, instance name, and IAM connectivity are all working fine. The IAM service account I am using is my Compute Engine's default service account.
Should I be able to connect to this PostgreSQL instance from my local machine?
First, make sure you're configuring your JDBC URL correctly.
The URL should look like this:
jdbc:postgresql:///<DATABASE_NAME>?cloudSqlInstance=<INSTANCE_CONNECTION_NAME>&socketFactory=com.google.cloud.sql.postgres.SocketFactory&user=<POSTGRESQL_USER_NAME>&password=<POSTGRESQL_USER_PASSWORD>
See the docs for details.
Second, if your Cloud SQL instance is Private IP only, your local machine won't have a network path to it, unless you've explicitly configured one (see this answer for options).
Generally, the simplest way to connect to a private IP instance is to run a VM in the same VPC as the instance, and connect from that VM.
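For illustration, a minimal sketch of a pool configured that way, reusing the placeholder values from the question (database mdm, instance coral-XXX-XXXX:us-central1:mdm). Explicitly naming the driver class is one way to rule out a missing PostgreSQL JDBC driver on the classpath, a common cause of "No suitable driver":
import java.util.Properties;
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

Properties connProps = new Properties();
connProps.setProperty("cloudSqlInstance", "coral-XXX-XXXX:us-central1:mdm");
connProps.setProperty("socketFactory", "com.google.cloud.sql.postgres.SocketFactory");
connProps.setProperty("enableIamAuth", "true");
connProps.setProperty("user", "XXX-compute#developer.gserviceaccount.com");

HikariConfig config = new HikariConfig();
config.setJdbcUrl("jdbc:postgresql:///mdm"); // database name only; the socket factory does the routing
config.setDriverClassName("org.postgresql.Driver"); // fails fast if the driver jar is missing
config.setDataSourceProperties(connProps);
HikariDataSource pool = new HikariDataSource(config);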
While it is good practice from a security point of view to enable only the private IP and remove the public IP from the Cloud SQL instance, there are some connectivity considerations to keep in mind.
With only the private IP enabled, there is no direct way to connect to the Cloud SQL instance from your local machine, neither by using the private IP nor by using the Cloud SQL proxy.
In your case, since you mentioned that only the private IP is enabled on the instance, that appears to be the reason you are getting the error.
To mitigate the error -
If possible, I would suggest you provision a public IP address for the Cloud SQL instance and then connect to it by correctly specifying the JDBC URL as mentioned here, which looks something like this -
jdbc:postgresql:///<DATABASE_NAME>?cloudSqlInstance=<INSTANCE_CONNECTION_NAME>&socketFactory=com.google.cloud.sql.postgres.SocketFactory&user=<POSTGRESQL_USER_NAME>&password=<POSTGRESQL_USER_PASSWORD>
Otherwise, if you don't want to provision a public IP address, you can establish the connection from an external resource by using a Cloud VPN tunnel, or a VLAN attachment if you have a Dedicated Interconnect or a Partner Interconnect, as mentioned here.
If you don't have a Dedicated Interconnect or a Partner Interconnect and you want to use only the private IP, you can connect to Cloud SQL by enabling port forwarding via a Compute Engine VM instance. This is done in two steps -
Connect the Compute Engine VM to the Cloud SQL instance via the private IP.
Forward the local machine's database connection request to the Compute Engine VM, which reaches the Cloud SQL instance through a Cloud SQL proxy tunnel. This YouTube video describes how to do this.
To get a detailed description of the above you can go through this article.

Upload files to AWS S3 using Spring Boot works fine without Proxy but fails with Proxy

Uploading files to AWS S3 using Spring Boot works great when executed without a proxy, but when I add a proxy in the VM args it fails with the following error:
Internal Server Error (Service: Amazon S3; Status Code: 500; Error Code: 500 Internal Server Error; Request ID: null; S3 Extended Request ID: null; Proxy: 192.168.1.171)
Below are the VM arguments that I have provided:
-Dhttp.proxyHost=192.168.1.171 -Dhttp.proxyPort=9999 -Dhttps.proxyHost=192.168.1.171 -Dhttps.proxyPort=9999
When I execute the package, the AWS SDK auto-initializes the proxy because it finds it in the args list, and it prints this in the console:
com.amazonaws.http.AmazonHttpClient - Configuring Proxy. Proxy Host: 192.168.1.171 Proxy Port: 9999
I cannot remove the proxy because I am using OAuth2 authentication in Spring Security.
Is there any way to disable auto-initializing the proxy in the AWS SDK?
To communicate over a network, endpoints are just addresses, but actual links require connections, so this may be more of a connection/connected-host conflict caused by configuration.
For example, you could have both a web server and a DB that take requests directly via HTTP, but whether they can reach each other depends on whether they are configured for each other and/or connected through the network to anything else.
I was able to resolve the issue after contacting AWS: I had to configure the setNonProxyHosts(String nonProxyHosts) method on the ClientConfiguration class. This sets the optional hosts the client will access without going through the proxy.
The nonProxyHosts parameter takes the hosts the client will access without going through the proxy. See this link for more information about the class and the method: https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/ClientConfiguration.html#setNonProxyHosts-java.lang.String-
Based on the instructions provided by AWS Support, I amended the config and added the following:
public ClientConfiguration clientConfiguration() {
    final ClientConfiguration clientConfiguration = new ClientConfiguration();
    // Requests to these hosts bypass the configured proxy entirely
    clientConfiguration.setNonProxyHosts("*.s3.<ZONE>.amazonaws.com|*.s3-<ZONE>.amazonaws.com");
    return clientConfiguration;
}
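For completeness, a sketch of how that configuration might be applied when building the client, assuming the v1 SDK's AmazonS3ClientBuilder; the region is a placeholder:
import com.amazonaws.ClientConfiguration;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

ClientConfiguration clientConfiguration = new ClientConfiguration();
clientConfiguration.setNonProxyHosts("*.s3.<ZONE>.amazonaws.com|*.s3-<ZONE>.amazonaws.com");

AmazonS3 s3 = AmazonS3ClientBuilder.standard()
        .withClientConfiguration(clientConfiguration) // the S3 hosts above now bypass the proxy
        .withRegion("<REGION>") // placeholder: your bucket's region
        .build();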

How can I get connection pool metrics in Cassandra?

Is there any way to get connection pool metrics in Cassandra using CqlSession? I need an answer specific to core Java.
I want to get each client's connection metrics in Cassandra (version 4.9.0).
Metrics like: opened connections, closed connections, active connections...
And
Is there any way to get notified every time a new connection is created or updated?
In 4.x versions of the Java driver, you need to explicitly enable every metric you need in the configuration file - something like this (taken from the docs; the full list of metrics is in the reference):
datastax-java-driver.advanced.metrics {
  session.enabled = [ connected-nodes, cql-requests ]
  node.enabled = [ pool.open-connections, pool.in-flight ]
}
Regarding a hook on opened/closed connections, I'm not sure there is an easy way to do that, other than recording the previous number of open connections and comparing. Such things may be easier to track via Prometheus or another monitoring system. Here is an example of how you can integrate driver metrics with Prometheus.
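As a rough sketch of reading those metrics back in plain Java, assuming driver 4.x with the configuration above and the default Dropwizard registry:
import com.codahale.metrics.MetricRegistry;
import com.datastax.oss.driver.api.core.CqlSession;

try (CqlSession session = CqlSession.builder().build()) {
    session.getMetrics().ifPresent(metrics -> {
        MetricRegistry registry = metrics.getRegistry(); // Dropwizard by default
        // Gauges such as pool.open-connections are registered per node
        registry.getGauges().forEach((name, gauge) ->
                System.out.println(name + " = " + gauge.getValue()));
    });
}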

Redis with Java / Jedis Library Cold Standby Server

I am looking for a solution using Java and Redis (currently via the Jedis library) for having a cold standby Redis server. I am looking for an intermediate solution between a single server and a cluster of servers. Specifically, I want two standalone servers set up, and my application should use the first Redis server only if it is available, failing over to the second server only if the first is not available - a standard cold standby scenario, with no replication.
The current connection factory is setup as
public JedisConnectionFactory redisConnectionFactory() {
    JedisConnectionFactory redisConnectionFactory = new JedisConnectionFactory();
    redisConnectionFactory.setHostName(redisUrl);
    redisConnectionFactory.setPort(redisPort);
    redisConnectionFactory.setDatabase(redisDbIndex);
    return redisConnectionFactory;
}
where redisUrl resolves to something like 'my-redis-server.some-domain.com'. I would like to be able to specify the Redis host name as something like 'my-redis-server-1.some-domain.com,my-redis-server-2.some-domain.com' and have the second server used as the cold standby.
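Purely as a sketch, and assuming a ping-based availability check is acceptable, one option is a small helper that probes each host in a comma-separated list and returns the first one that responds; the helper name and the list format are illustrative, not an established Jedis or Spring Data feature:
import redis.clients.jedis.Jedis;

// Hypothetical helper: return the first host that answers PING
private String firstAvailableHost(String commaSeparatedHosts, int port) {
    for (String host : commaSeparatedHosts.split(",")) {
        try (Jedis probe = new Jedis(host.trim(), port)) {
            if ("PONG".equalsIgnoreCase(probe.ping())) {
                return host.trim();
            }
        } catch (Exception e) {
            // unreachable or timed out; try the next host
        }
    }
    throw new IllegalStateException("No Redis server available");
}
The connection factory could then call setHostName(firstAvailableHost(redisUrl, redisPort)). The trade-off is that the choice is only re-evaluated when the factory is built, so a failure after startup still requires recreating the connection factory.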

Get Accumulo instance name

I want to use GeoMesa (a GIS extension for Accumulo) and have virtualized it using Docker, just like this repo. Now I want to connect to the Accumulo instance from Java using:
Instance i = new ZooKeeperInstance("docker_instance",zkIP:port);
Connector conn = i.getConnector(user, new PasswordToken(password));
The connection does not get established and hangs (just like in this question). I can connect to the ZooKeeper instance using
./zkCli.sh -server ip:port
So I guess the instance_name is wrong. I used the one noted in the repo linked above, but I don't know how to check which instance_name is needed.
To make my problem reproducible I set up a DigitalOcean server with all necessary dependencies and Accumulo. I verified that the connection to ZooKeeper is possible using zkCli and checked the credentials using the accumulo shell on the server.
Instance i = new ZooKeeperInstance("DIGITAL_OCEAN","46.101.199.216:2181");
// WARN org.apache.accumulo.core.client.ClientConfiguration - Found no client.conf in default paths. Using default client configuration values.
System.out.println("This is reached");
Connector conn = i.getConnector("root", new PasswordToken("mypassw"));
System.out.println("This is not reached");
As a troubleshooting step, you may be able to extract the instance name by using HdfsZooInstance.getInstance().getInstanceName(), or by connecting directly to ZooKeeper and listing the instance names with ls /accumulo/instances/.
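A small sketch of that second option from plain Java, assuming the ZooKeeper client library and the server address from the question:
import java.util.List;
import org.apache.zookeeper.ZooKeeper;

// List Accumulo instance names directly from ZooKeeper
ZooKeeper zk = new ZooKeeper("46.101.199.216:2181", 10000, event -> { });
List<String> instances = zk.getChildren("/accumulo/instances", false);
instances.forEach(System.out::println);
zk.close();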
There are multiple easy ways to get the instance_name: either look at the top of the Accumulo status page, as elserj noted in the comments, or use zkCli to connect to ZooKeeper and run ls /accumulo/instances, as Christopher answered.
However, I could not manage to connect to Accumulo using the ordinary Java connector. Nevertheless, I managed to connect to Accumulo using the proxy settings, which is a valid solution for me, even though I would still have liked to find the underlying problem.
