I am trying to connect to my GCP project's PostgreSQL Cloud SQL instance from my local machine. The PostgreSQL instance doesn't have a public IP, only a private one.
Properties connProps = new Properties();
connProps.setProperty("user", "XXX-compute#developer.gserviceaccount.com");
connProps.setProperty("password", "password");
connProps.setProperty("sslmode", "disable");
connProps.setProperty("socketFactory", "com.google.cloud.sql.postgres.SocketFactory");
connProps.setProperty("cloudSqlInstance", "coral-XXX-XXXX:us-central1:mdm");
connProps.setProperty("enableIamAuth", "true");
HikariConfig config = new HikariConfig();
config.setJdbcUrl(jdbcURL); // jdbcURL is currently "jdbc:postgresql:///mdm" (see the error below)
config.setDataSourceProperties(connProps);
config.setConnectionTimeout(10000); // 10s
HikariDataSource connectionPool = new HikariDataSource(config);
I get the error below:
Failed to get driver instance for jdbcUrl=jdbc:postgresql:///mdm
java.sql.SQLException: No suitable driver
I have verified that my username, instance name, and IAM connectivity are all working fine. The IAM service account I am using is my Compute Engine default service account.
Should I be able to connect to this PostgreSQL instance from my local machine?
First, make sure you're configuring your JDBC URL correctly.
The URL should look like this:
jdbc:postgresql:///<DATABASE_NAME>?cloudSqlInstance=<INSTANCE_CONNECTION_NAME>&socketFactory=com.google.cloud.sql.postgres.SocketFactory&user=<POSTGRESQL_USER_NAME>&password=<POSTGRESQL_USER_PASSWORD>
See the docs for details.
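As a rough sketch, the pool from the question could be built with everything embedded in that URL; the placeholders are to be replaced with your own values, and this assumes the PostgreSQL JDBC driver and the Cloud SQL socket factory artifact (postgres-socket-factory) are both on the classpath:
HikariConfig config = new HikariConfig();
// All Cloud SQL connection parameters are carried in the JDBC URL itself
config.setJdbcUrl("jdbc:postgresql:///<DATABASE_NAME>"
        + "?cloudSqlInstance=<INSTANCE_CONNECTION_NAME>"
        + "&socketFactory=com.google.cloud.sql.postgres.SocketFactory"
        + "&user=<POSTGRESQL_USER_NAME>"
        + "&password=<POSTGRESQL_USER_PASSWORD>");
config.setConnectionTimeout(10000); // 10s, as in the question
HikariDataSource connectionPool = new HikariDataSource(config);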
Second, if your Cloud SQL instance is Private IP only, your local machine won't have a network path to it, unless you've explicitly configured one (see this answer for options).
Generally, the simplest way to connect to a private IP instance is to run a VM in the same VPC as the instance, and connect from that VM.
While it is good security practice to enable only the private IP and remove the public IP from a Cloud SQL instance, there are some connectivity considerations to keep in mind.
With only the private IP enabled, there is no direct way to connect to the instance from your local machine, neither via the private IP nor via the Cloud SQL Auth Proxy.
Since you mentioned that only the private IP is enabled on your Cloud SQL instance, that appears to be why you are getting the error.
To mitigate the error:
If possible, I would suggest provisioning a public IP address for the Cloud SQL instance and then connecting to it with the JDBC URL specified correctly, as mentioned here, which looks something like this:
jdbc:postgresql:///<DATABASE_NAME>?cloudSqlInstance=<INSTANCE_CONNECTION_NAME>&socketFactory=com.google.cloud.sql.postgres.SocketFactory&user=<POSTGRESQL_USER_NAME>&password=<POSTGRESQL_USER_PASSWORD>
Otherwise, if you don't want to provision a public IP address, you can establish the connection from an external resource through a Cloud VPN tunnel, or through a VLAN attachment if you have a Dedicated Interconnect or a Partner Interconnect, as mentioned here.
If you don't have a Dedicated Interconnect or a Partner Interconnect and you want to keep the instance private-IP only, you can still reach Cloud SQL by enabling port forwarding through a Compute Engine VM instance. This is done in two steps:
1. Connect the Compute Engine VM to the Cloud SQL instance via the private IP.
2. Forward the local machine's database connection request to the Compute Engine VM, which reaches the Cloud SQL instance through a Cloud SQL Auth Proxy tunnel (a small sketch of the resulting local connection follows below). This YouTube video describes how to do this.
To get a detailed description of the above, you can go through this article.
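For illustration, once such a tunnel is forwarding a local port to the instance's private IP, the connection from the local machine is an ordinary JDBC connection to localhost; the local port 5432 and the placeholders below are assumptions to adapt to your setup:
// 5432 here is the locally forwarded port of the tunnel; adjust to your configuration
String url = "jdbc:postgresql://127.0.0.1:5432/<DATABASE_NAME>";
try (Connection conn = DriverManager.getConnection(url, "<POSTGRESQL_USER_NAME>", "<POSTGRESQL_USER_PASSWORD>")) {
    System.out.println("Connected through the tunnel: " + conn.getMetaData().getDatabaseProductName());
}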
I have a Java Spring application that I want to deploy on an Azure App Service, which will connect to an Azure Cache for Redis instance. I've already done this with the same application on a different subscription. I've created both the App Service and the Redis cache on Azure. I got the connection string from Redis and added it to my Java application, and I've also created a bean as follows:
@Bean
public JedisConnectionFactory connectionFactory() {
    JedisConnectionFactory jedisConnectionFactory = new JedisConnectionFactory();
    jedisConnectionFactory.setHostName(this.jedisHost);
    if (this.jedisPassword != null && !this.jedisPassword.isEmpty()) {
        jedisConnectionFactory.setPassword(this.jedisPassword);
    }
    jedisConnectionFactory.setPort(jedisPort);
    jedisConnectionFactory.getPoolConfig().setMaxIdle(30);
    jedisConnectionFactory.getPoolConfig().setMinIdle(10);
    // jedisConnectionFactory.setUsePool(true);
    return jedisConnectionFactory;
}
When I run the application in my local environment, everything works fine: the application starts and can connect to Redis.
But when I deploy it to Azure I get:
Caused by: java.net.SocketTimeoutException: connect timed out
JedisConnectionException: Failed connecting to host valuegoredis.redis.cache.windows.net:6380
JedisConnectionException: Could not get a resource from the pool
Error creating bean with name 'enableRedisKeyspaceNotificationsInitializer'
I also had the same problem when running locally, but I solved it by adding a firewall rule for my IP.
I've also added the IP of the App Service, found by running nslookup against .azurewebsite.net, but nothing changed.
I'm losing my mind trying to figure out how to make it work.
Edit: I've tried both Redis ports, 6379 and 6380.
The entries circled in blue are the IPs of my App Service and the ones circled in red are my local IP.
I'm allowing both SSL and non-SSL connections on 6380 and 6379, but nothing seems to work.
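For reference, here is roughly how the same factory could target the SSL endpoint on 6380 with Spring Data Redis 2.x's JedisClientConfiguration; the host name and access key are placeholders:
@Bean
public JedisConnectionFactory sslConnectionFactory() {
    // Standalone server details; replace with your cache's host name and access key
    RedisStandaloneConfiguration server =
            new RedisStandaloneConfiguration("<cache-name>.redis.cache.windows.net", 6380);
    server.setPassword("<access-key>");

    // useSsl() targets the TLS endpoint on 6380
    JedisClientConfiguration clientConfig = JedisClientConfiguration.builder()
            .useSsl()
            .build();

    return new JedisConnectionFactory(server, clientConfig);
}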
Using Lettuce, how do we configure Spring Data Redis with Redis running on host x at port 6379 and a slave running on the same or a different host at port 6380?
That's a feature which will be included in the upcoming Spring Data Redis 2.1 release.
You would configure LettuceConnectionFactory similar to:
LettuceClientConfiguration configuration = LettuceClientConfiguration.builder()
        .readFrom(ReadFrom.REPLICA)
        .build();

LettuceConnectionFactory factory = new LettuceConnectionFactory(
        new RedisStandaloneConfiguration("x", 6379), configuration);
Lettuce auto-discovers masters and replicas from a static (not managed with Redis Sentinel) setup.
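If the replica cannot be auto-discovered, or you want to pin the endpoints, the nodes can also be listed statically; note that RedisStaticMasterReplicaConfiguration is the class name used in more recent Spring Data Redis releases, so treat this as a sketch of the idea rather than the exact 2.1 API:
// List the master and replica endpoints explicitly instead of relying on auto-discovery
RedisStaticMasterReplicaConfiguration nodes = new RedisStaticMasterReplicaConfiguration("x", 6379); // master
nodes.addNode("x", 6380); // replica on the same host, different port

LettuceClientConfiguration clientConfig = LettuceClientConfiguration.builder()
        .readFrom(ReadFrom.REPLICA) // route reads to the replica
        .build();

LettuceConnectionFactory factory = new LettuceConnectionFactory(nodes, clientConfig);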
I am trying, without success, to use the RabbitMQ Java client's metrics support.
My objective is to get some messaging statistics into my Java program. When testing I use a RabbitMQ instance at localhost, and I have put some test data into a test queue on a test virtual host using the RabbitMQ web interface.
My non-working code is:
ConnectionFactory connectionFactory = new ConnectionFactory();
connectionFactory.setHost(host); // localhost
connectionFactory.setUsername(userName);
connectionFactory.setPassword(password);
connectionFactory.setPort(port); // 5672
connectionFactory.setVirtualHost(virtualHost);
StandardMetricsCollector metrics = new StandardMetricsCollector();
connectionFactory.setMetricsCollector(metrics);
It seems like metrics is not properly initialized:
The metrics all have their default value of 0.
Within metrics there are properties called initialized that are set to false (for instance, metrics.getPublishedMessages().m1Rate.initialized).
So, it seems I am missing something important here despite trying to follow the official documentation.
As a workaround, I'm currently using HTTP requests to the API to get some basic messaging statistics, but the API is very limited.
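For completeness, that workaround looks roughly like the sketch below, querying the management plugin's HTTP API (default port 15672) for a single queue's statistics; "test-vhost" and "test-queue" are placeholder names, the vhost must be URL-encoded, and java.net.http requires Java 11+:
// Ask the management API for one queue's statistics (JSON with message counts, rates, consumers, ...)
String credentials = Base64.getEncoder()
        .encodeToString((userName + ":" + password).getBytes(StandardCharsets.UTF_8));
HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create("http://" + host + ":15672/api/queues/test-vhost/test-queue"))
        .header("Authorization", "Basic " + credentials)
        .build();
HttpResponse<String> response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString());
System.out.println(response.body());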
For my tests I am using a test ZooKeeper server, but I would like to be able to wait until the server has fully started (since I start it as part of the test init process).
How can I cleanly check that a (test) ZooKeeper server has started correctly using Curator? Some form of ping, etc.?
I managed to find the answer and wanted to share.
Curator has a method, blockUntilConnected, which waits until a connection to ZooKeeper is established.
// Connect to the test ZooKeeper server and block until the connection is established
CuratorFramework curator = CuratorFrameworkFactory.newClient(
        "localhost:" + TestConstants.TEST_ZOOKEEPER_PORT, new RetryOneTime(100));
curator.start();
curator.blockUntilConnected();
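If the test should fail fast instead of waiting indefinitely, blockUntilConnected also has an overload that takes a timeout and returns whether the connection was established within it:
// Wait at most 10 seconds; fail the test setup if ZooKeeper is still not reachable
if (!curator.blockUntilConnected(10, TimeUnit.SECONDS)) {
    throw new IllegalStateException("Test ZooKeeper server did not start in time");
}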