Azure App Service is not able to connect to Azure Redis - java

I have a Java Spring application that I want to deploy to an Azure App Service and connect to Azure Cache for Redis. I've already done this with the same application on a different subscription. I created both the App Service and the Redis cache on Azure, took the connection strings from Redis, added them to my Java application, and created a bean as follows:
@Bean
public JedisConnectionFactory connectionFactory() {
    JedisConnectionFactory jedisConnectionFactory = new JedisConnectionFactory();
    jedisConnectionFactory.setHostName(this.jedisHost);
    if (this.jedisPassword != null && !this.jedisPassword.isEmpty()) {
        jedisConnectionFactory.setPassword(this.jedisPassword);
    }
    jedisConnectionFactory.setPort(jedisPort);
    jedisConnectionFactory.getPoolConfig().setMaxIdle(30);
    jedisConnectionFactory.getPoolConfig().setMinIdle(10);
    // jedisConnectionFactory.setUsePool(true);
    return jedisConnectionFactory;
}
When I run the application in my local environment everything works fine: the application starts and connects to Redis. But when I deploy it to the Azure cloud I get:
Caused by: java.net.SocketTimeoutException: connect timed out
JedisConnectionException: Failed connecting to host valuegoredis.redis.cache.windows.net:6380
JedisConnectionException: Could not get a resource from the pool
Error creating bean with name 'enableRedisKeyspaceNotificationsInitializer'
I had the same problem when running locally, but I solved it by adding a firewall rule for my IP.
I've also added the IP of the App Service (found by running nslookup against its .azurewebsites.net address), but nothing changed.
I'm losing my mind trying to figure out how to make it work.
Edit: I've tried both Redis ports, 6379 and 6380.
In the screenshot, the blue circles mark the IPs of my App Service and the red one marks my local IP.
I'm allowing both SSL and non-SSL connections on 6380 and 6379, but nothing seems to work.
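For what it's worth, Azure Cache for Redis accepts TLS-only traffic on port 6380, so one sanity check is to enable SSL on the factory when using that port. A minimal sketch, assuming a Spring Data Redis version that still exposes these setters (as the bean above does):

@Bean
public JedisConnectionFactory connectionFactory() {
    JedisConnectionFactory factory = new JedisConnectionFactory();
    factory.setHostName(this.jedisHost);
    factory.setPassword(this.jedisPassword);
    // Port 6380 is the TLS endpoint, so SSL must be enabled on the client side.
    factory.setPort(6380);
    factory.setUseSsl(true);
    return factory;
}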

Related

Failed to get driver instance for jdbcUrl=jdbc:postgresql:///<dbname> error for CloudSQL

I am trying to connect to my GCP project's Cloud SQL PostgreSQL instance from my local machine. The PostgreSQL instance doesn't have a public IP, only a private one.
Properties connProps = new Properties();
connProps.setProperty("user", "XXX-compute@developer.gserviceaccount.com");
connProps.setProperty("password", "password");
connProps.setProperty("sslmode", "disable");
connProps.setProperty("socketFactory", "com.google.cloud.sql.postgres.SocketFactory");
connProps.setProperty("cloudSqlInstance", "coral-XXX-XXXX:us-central1:mdm");
connProps.setProperty("enableIamAuth", "true");

HikariConfig config = new HikariConfig();
config.setJdbcUrl(jdbcURL);
config.setDataSourceProperties(connProps);
config.setConnectionTimeout(10000); // 10s

HikariDataSource connectionPool = new HikariDataSource(config);
I get the below error
Failed to get driver instance for jdbcUrl=jdbc:postgresql:///mdm
java.sql.SQLException: No suitable driver
I have verified that my username, instance name, and IAM connectivity are all working fine. The IAM service account I am using is my Compute Engine's default service account.
Should I be able to connect to this PostgreSQL instance from my local machine?
First, make sure you're configuring your JDBC URL correctly.
The URL should look like this:
jdbc:postgresql:///<DATABASE_NAME>?cloudSqlInstance=<INSTANCE_CONNECTION_NAME>&socketFactory=com.google.cloud.sql.postgres.SocketFactory&user=<POSTGRESQL_USER_NAME>&password=<POSTGRESQL_USER_PASSWORD>
See the docs for details.
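For example, a minimal sketch of wiring that URL into HikariCP (placeholders kept as above; the PostgreSQL JDBC driver and the Cloud SQL socket factory library must be on the classpath):

HikariConfig config = new HikariConfig();
config.setJdbcUrl("jdbc:postgresql:///<DATABASE_NAME>"
        + "?cloudSqlInstance=<INSTANCE_CONNECTION_NAME>"
        + "&socketFactory=com.google.cloud.sql.postgres.SocketFactory"
        + "&user=<POSTGRESQL_USER_NAME>"
        + "&password=<POSTGRESQL_USER_PASSWORD>");
HikariDataSource dataSource = new HikariDataSource(config);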
Second, if your Cloud SQL instance is Private IP only, your local machine won't have a network path to it, unless you've explicitly configured one (see this answer for options).
Generally, the simplest way to connect to a private IP instance is to run a VM in the same VPC as the instance, and connect from that VM.
While it is good security practice to enable only the private IP and remove the public IP from the Cloud SQL instance, there are some connectivity considerations to keep in mind.
With only the private IP enabled, there is no direct way to connect to the instance from your local machine, neither via the private IP nor via the Cloud SQL proxy.
In your case, since you mention that only the private IP is enabled on the Cloud SQL instance, that appears to be the reason you are getting the error.
To mitigate the error:
If possible, I would suggest provisioning a public IP address for the Cloud SQL instance and then connecting to it by specifying the JDBC URL correctly, as mentioned here, which looks something like this:
jdbc:postgresql:///<DATABASE_NAME>?cloudSqlInstance=<INSTANCE_CONNECTION_NAME>&socketFactory=com.google.cloud.sql.postgres.SocketFactory&user=<POSTGRESQL_USER_NAME>&password=<POSTGRESQL_USER_PASSWORD>
Otherwise, if you don't want to provision a public IP address, you can establish the connection from an external resource through a Cloud VPN tunnel, or through a VLAN attachment if you have a Dedicated Interconnect or a Partner Interconnect, as mentioned here.
If you don't have a Dedicated Interconnect or a Partner Interconnect and you want to use the private IP only, you can connect to Cloud SQL by enabling port forwarding via a Compute Engine VM instance. This is done in two steps:
1. Connect the Compute Engine VM to the Cloud SQL instance via the private IP.
2. Forward the local machine's database connection request to the Compute Engine VM, so that it reaches the Cloud SQL instance through the Cloud SQL Proxy tunnel. This YouTube video describes how to do this.
For a detailed description of the above, you can go through this article.

How to resolve host name of kubernetes pod while creating grpc client from other pod?

Problem:
How do I resolve the host name of a Kubernetes pod?
I have the following requirement: we are using gRPC with Java, where one app runs our gRPC server and another app creates a gRPC client that connects to the gRPC server (which runs in another pod).
We have three Kubernetes pods running our gRPC server, let's say:
my-service-0, my-service-1, my-service-2
my-service has a cluster IP of 10.44.5.11.
We have another three Kubernetes pods running our gRPC client, let's say:
my-client-0, my-client-1, my-client-2
Without security:
I try to connect a gRPC server pod from a gRPC client pod and it works fine.
grpc client (POD -> my-client) ----------------> grpc server (POD -> my-service)
So without security I give the host name as my-service and it works fine without any problem:
ManagedChannel channel = ManagedChannelBuilder.forAddress("my-service", 50052)
.usePlaintext()
.build();
With SSL security:
If I try to connect to the gRPC server, it throws a host name mismatch.
We have created a certificate with the wildcard *.default.pod.cluster.local.
It throws the error below:
java.security.cert.CertificateException: No name matching my-service found
at java.base/sun.security.util.HostnameChecker.matchDNS(HostnameChecker.java:225) ~[na:na]
at java.base/sun.security.util.HostnameChecker.match(HostnameChecker.java:98) ~[na:na]
at java.base/sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:455) ~[na:na]
Not Working Code:
ManagedChannel channel = NettyChannelBuilder.forAddress("my-service", 50052)
.sslContext(GrpcSslContexts.forClient().trustManager(new File(System.getenv("GRPC_CLIENT_CA_CERT_LOCATION"))).build())
.build();
But if I give the host name like this ==> 10-44-5-11.default.pod.cluster.local, it works correctly.
Working Code
ManagedChannel channel = NettyChannelBuilder.forAddress("10-44-5-11.default.pod.cluster.local", 50052)
.sslContext(GrpcSslContexts.forClient().trustManager(new File(System.getenv("GRPC_CLIENT_CA_CERT_LOCATION"))).build())
.build();
Now my problem is that the cluster IP of the pod is dynamic and changes every time the app is deployed. What is the right way to resolve this host name?
Is it possible to give a host name, get back the IP, and then append default.pod.cluster.local to it and connect to the gRPC server?
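For reference, the lookup described above could be written in Java roughly like this (a sketch of the idea only; the answer below explains why addressing pods this way is usually not the right approach):

// Resolve the service name to an IP, then build the dashed pod-style DNS name,
// e.g. 10-44-5-11.default.pod.cluster.local. (getByName throws UnknownHostException.)
String ip = java.net.InetAddress.getByName("my-service").getHostAddress();
String podStyleName = ip.replace('.', '-') + ".default.pod.cluster.local";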
Addressing your pod directly is not a good solution, since Kubernetes may need to move your pods around the cluster, for example because of a failing node.
To allow your clients/traffic to easily find the desired containers, you can place them behind a Service with a single static IP address. The Service IP can be looked up through DNS.
This is how you can connect to the service through its FQDN:
my-service.default.svc.cluster.local
Here my-service is your service name, default is your namespace, and svc.cluster.local is a configurable cluster domain suffix used by all cluster services.
It's worth knowing that you can skip the svc.cluster.local suffix, and even the namespace if the pods are in the same namespace, so you would just refer to the service as my-service.
For more, see the Kubernetes documentation on DNS.
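Applied to the snippet from the question, a minimal sketch is to point the secure channel at the Service FQDN instead of the pod address. This assumes the server certificate actually covers the service name (for example via a SAN for my-service.default.svc.cluster.local) rather than only the pod wildcard:

ManagedChannel channel = NettyChannelBuilder
        .forAddress("my-service.default.svc.cluster.local", 50052)
        .sslContext(GrpcSslContexts.forClient()
                .trustManager(new File(System.getenv("GRPC_CLIENT_CA_CERT_LOCATION")))
                .build())
        .build();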

Windows Server 2012, Apache Tomcat, Spring MVC: Websocket connection blocked for external IP

We've deployed our Spring MVC web application on Windows Server 2012. Our web app uses Spring WebSockets for updates, with stomp.js and SockJS.
Our websocket configuration:
@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig extends AbstractWebSocketMessageBrokerConfigurer {

    @Override
    public void configureMessageBroker(MessageBrokerRegistry config) {
        config.enableSimpleBroker("/topic");
        config.setApplicationDestinationPrefixes("/calcApp");
    }

    @Override
    public void registerStompEndpoints(StompEndpointRegistry registry) {
        registry.addEndpoint("/add").setAllowedOrigins("*").withSockJS();
    }
}
The websocket works on localhost, with the following logs:
Opening Web Socket...
Web Socket Opened...
>>> CONNECT
accept-version:1.1,1.0
heart-beat:10000,10000
<<< CONNECTED
version:1.1
heart-beat:0,0
user-name:admin
connected to server undefined
>>> SUBSCRIBE
id:sub-0
destination:/topic/resident
...
Strangely, it doesn't work when I enter the external IP, on the same machine and browser:
Opening Web Socket...
WebSocket connection to 'ws://192.168.5.50:8080/autopark/add/629/i148hb1c/websocket' failed: WebSocket is closed before the connection is established.
Whoops! Lost connection to undefined
We thought that some firewall was blocking external access and disabled it completely:
But it didn't solve our problem.
How can we solve this issue?
I'm not really sure, and I'm not a Spring expert, but it seems you need to call the server by a domain name rather than its IP address.
Since an IP can be used for more than one domain, calling the context by IP may leave the Spring context unsure which context/domain should be invoked (even if there is only one), so it refuses the connection.
Have a try: bind 192.168.5.50 to a domain name, then call the path using the domain (not the IP). Hope it works this way.
The first step in debugging this would be verifying that your application server is actually listening on the external interface.
You can verify what IP your container is bound to by looking for 8080 entries in the output of netstat.
netstat -a -n -o | find "8080"
If you don't see an entry bound to either 0.0.0.0 or the external IP, then we know it is a configuration issue with your application server.
Example for embedded tomcat - How to set a IP address to the tomcat?
Example for standalone tomcat - How do you configure tomcat to bind to a single ip address (localhost) instead of all addresses?
The next step is verifying that an external computer can "see" the port on the external IP. There are various ways to do this, but the telnet command will suffice:
telnet 192.168.5.50 8080
If this does not work, then we know there is something blocking communication between the two applications.
If we get to this point, then there is likely an issue with the configuration of the application itself.
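If telnet is not available on the client machine, the same reachability check can be done with a few lines of Java (a minimal sketch using the IP and port from the question):

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortCheck {
    public static void main(String[] args) {
        try (Socket socket = new Socket()) {
            // Plain TCP connect to the external interface, with a 5-second timeout.
            socket.connect(new InetSocketAddress("192.168.5.50", 8080), 5000);
            System.out.println("8080 is reachable");
        } catch (IOException e) {
            System.out.println("8080 is not reachable: " + e.getMessage());
        }
    }
}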

Connecting to Amazon ElastiCache using Java Redis client (Lettuce)

Is it possible to connect to Amazon ElastiCache from my local machine with a Java Redis client (Lettuce)?
I have defined inbound rules in the security group for TCP port 6379 and SSH port 22 from any IP address.
My connection code is:
RedisClient redisClient = new RedisClient("CacheCluster Endpoint", 6379);
RedisConnection<String, String> connection = redisClient.connect();
connection.set("key", "Hello, Redis!");
connection.close();
redisClient.shutdown();
When I run this Java code I get:
Exception in thread "main" com.lambdaworks.redis.RedisConnectionException: Unable to connect to mycachecluster.b4ujee.0001.usw2.cache.amazonaws.com/172.31.34.211:6379
at com.lambdaworks.redis.AbstractRedisClient.initializeChannel(AbstractRedisClient.java:214)
at com.lambdaworks.redis.RedisClient.connectAsync(RedisClient.java:322)
at com.lambdaworks.redis.RedisClient.connectAsync(RedisClient.java:303)
at com.lambdaworks.redis.RedisClient.connect(RedisClient.java:259)
at com.lambdaworks.redis.RedisClient.connect(RedisClient.java:238)
at com.lambdaworks.redis.RedisClient.connect(RedisClient.java:222)
at project1.JavaRedis.main(JavaRedis.java:17)
Caused by: java.net.ConnectException: Connection timed out: no further information: mycachecluster.b4ujee.0001.usw2.cache.amazonaws.com/172.31.34.211:6379
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:224)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:289)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:528)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
at java.lang.Thread.run(Thread.java:619)
So my question is: what am I doing wrong? Do I have to use the Redis cluster endpoint or the EC2 DNS to establish the connection?
Help please!
Thanks!
No, you can't connect to it, because it doesn't have a public IP. The DNS name resolves to a private IP, 172.31.34.211, which can only be accessed from within your AWS VPC.
Also, when connecting you need to use the DNS name, not the IP, because the IP of the node might change.
If you need to develop locally with Redis, you can easily install an instance on your local machine.
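For example, with the same (older) Lettuce API used in the question, only the endpoint changes for local development:

// Point the same client code at a locally installed Redis instead of the ElastiCache endpoint.
RedisClient redisClient = new RedisClient("localhost", 6379);
RedisConnection<String, String> connection = redisClient.connect();
connection.set("key", "Hello, Redis!");
System.out.println(connection.get("key"));
connection.close();
redisClient.shutdown();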
If you still want to connect from your local machine to AWS ElastiCache (Redis) without hosting your web service in AWS, the best way is through a VPN.
We are using https://pritunl.com for this; it is very easy to configure and use.

Communication between spring boot dockerized apps

I'm new to Spring Boot and Docker and I've run into a problem running the Docker containers.
In debug mode the applications boot without problems, but when I run them as containers something goes wrong.
For example, I have my config server with all the yml files, as well as the Eureka properties.
The config server boots perfectly, but the Eureka server does not; it has to look up its configuration from the config server because of this:
uri: ${vcap.services.config-service.credentials.uri:http://127.0.0.1:8888}
In the Eureka log I can find:
Could not locate PropertySource: I/O error on GET request for
"http://127.0.0.1:8888/server-eureka/default":Connection refused;
nested exception is java.net.ConnectException: Connection refused
So I see that Eureka can't connect to the config server, for a reason I can't understand.
Maybe I'm missing something in my Dockerfile.
If you are not using linked Docker containers, you'll have to use only public IP addresses. Docker assigns every running container its own IP address, which by default is not accessible from outside. Only when you start exposing ports is there an iptables entry linking the host's public IP address and a given port to the internal port and (dynamically assigned) IP address of the Docker container. This is also why 127.0.0.1 does not work: it would look into the container's own local context, but the service is not running there.
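For example, with Docker Compose the containers share a network and can reach each other by service name, so the config client URI can point at the config server's service name instead of 127.0.0.1. A minimal sketch, assuming the uri property shown above is spring.cloud.config.uri and using hypothetical image and service names:

services:
  config-service:
    image: my-config-service:latest      # hypothetical image name
    ports:
      - "8888:8888"
  eureka-service:
    image: my-eureka-service:latest      # hypothetical image name
    environment:
      # Override the 127.0.0.1 fallback with the config-service container's name
      - SPRING_CLOUD_CONFIG_URI=http://config-service:8888
    depends_on:
      - config-service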
