I have a Spring Boot application running on Cloud Run. So far I have only had to add the Spring Cloud GCP MySQL
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-gcp-starter-sql-mysql</artifactId>
    <version>1.2.8.RELEASE</version>
</dependency>
dependency to my POM and configure my application.yml file to set the database name, connection name, etc., and it runs fine both locally and on Cloud Run.
My application.yml:
spring:
  cloud:
    gcp:
      sql:
        enabled: true
        database-name: pos_database
        instance-connection-name: pos-sys:asia-southeast2:pos-server-database
  datasource:
    driver-class-name: com.mysql.cj.jdbc.Driver
    username: ***
    password: ***
    hikari:
      maximum-pool-size: 20
However, I realized that cold start performance has taken a hit, because on startup the socket factory connects to the database instance via an SSL socket:
2021-05-31 13:10:07.152 INFO 1539 --- [connection adder] c.g.cloud.sql.core.CoreSocketFactory :
Connecting to Cloud SQL instance [pos-sys:asia-southeast2:owl-server-database] via SSL socket.
and I get a bunch of lines just repeating:
2021-05-31 13:10:09.461 INFO 1539 --- [connection adder] c.g.cloud.sql.core.CoreSocketFactory :
Connecting to Cloud SQL instance [pos-sys:asia-southeast2:pos-server-database] via SSL socket.
I know there is a faster way to connect when the application is running in the cloud. I have been following this tutorial so far:
https://cloud.google.com/sql/docs/mysql/connect-run
But I'm very confused by the last part, where it says I have to connect with a Unix socket. Is this a Docker thing, or something within my application? Where does the ConnectionPoolContextListener.java
file have to go?
It also says, in a comment within the file itself, that Java users should not use this approach and should instead use the
Cloud SQL JDBC Socket Factory
But when I go to that link, it says to add a dependency for mysql-connector, but isn't that already included in spring-cloud-gcp-starter-sql-mysql? It also says to make a connection string in this format:
jdbc:mysql:///<DATABASE_NAME>?cloudSqlInstance=<INSTANCE_CONNECTION_NAME>&socketFactory=com.google.cloud.sql.mysql.SocketFactory&user=<MYSQL_USER_NAME>&password=<MYSQL_USER_PASSWORD>
But it doesn't mention where I am supposed to put this.
So to summarise:
I have a Cloud SQL (MySQL) instance with the Admin API enabled.
I enabled the Cloud SQL connection for my Cloud Run service by selecting my database instance.
I am very confused by the documentation about what the next step is.
Cloud Run provides a Unix domain socket when configured with a Cloud SQL instance - it's a file that can be used to connect to the database. However, you are using the Cloud SQL Java connector, which lets you bypass the Unix socket (this is usually preferred in Java, since Unix sockets aren't natively supported).
Instead, to improve your cold start time, I recommend doing two things:
Reduce the number of connections in your pool. While the optimal number varies greatly between applications, 20 is almost certainly far more than you need. As a rule of thumb, try 2 * the number of cores as your starting value, and increase or decrease as needed. Hikari uses maximumPoolSize to control this.
Adjust the number of starting connections in your pool. Hikari offers minimumIdle, which sets the minimum number of idle connections kept in the pool (up to maximumPoolSize). While Hikari recommends not setting this value (so that you have a fixed-size pool), setting it to 0 means your pool won't establish connections on startup. This means your application will start faster, but it will take longer on average to get a connection from the pool (see the sketch below).
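As a rough sketch (reusing the names from your application.yml above; the numbers are illustrative starting points, not tuned values), the relevant part could look like this. Note that as long as you keep the spring-cloud-gcp-starter-sql-mysql dependency, the starter builds the socket-factory JDBC URL for you from the spring.cloud.gcp.sql properties, so you don't paste the jdbc:mysql:///... string anywhere yourself; if you dropped the starter and used the plain Cloud SQL JDBC socket factory directly, that string would go in spring.datasource.url:
spring:
  cloud:
    gcp:
      sql:
        enabled: true
        database-name: pos_database
        instance-connection-name: pos-sys:asia-southeast2:pos-server-database
  datasource:
    driver-class-name: com.mysql.cj.jdbc.Driver
    username: ***
    password: ***
    hikari:
      maximum-pool-size: 4   # e.g. 2 x the number of cores, instead of 20
      minimum-idle: 0        # don't open connections eagerly at startup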
Related
Is there any way to get connection pool metrics in Cassandra using CqlSession? I need an answer specific to core Java.
I want to get per-client connection metrics in Cassandra (version 4.9.0).
Metrics like: opened connections, closed connections, active connections, etc.
And is there any way to be notified every time a new connection is created or updated?
In the 4.x versions of the driver, you need to explicitly enable every metric that you need in the configuration file - something like this (taken from the docs; the full list of metrics is in the reference):
datastax-java-driver.advanced.metrics {
  session.enabled = [ connected-nodes, cql-requests ]
  node.enabled = [ pool.open-connections, pool.in-flight ]
}
Regarding a hook on opened/closed connections, I'm not sure there is an easy way to do that, except to record the previous number of open connections and compare. Such things may be easier to track via Prometheus or another monitoring system. Here is an example of how you can integrate driver metrics with Prometheus.
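If you prefer to enable the same metrics from core Java rather than through the configuration file, a minimal sketch with the driver's programmatic config loader could look like the following (the contact point and datacenter name are placeholders, and it assumes the default Dropwizard registry that the 4.x driver exposes through session.getMetrics()):
import java.net.InetSocketAddress;
import java.util.Arrays;

import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.config.DefaultDriverOption;
import com.datastax.oss.driver.api.core.config.DriverConfigLoader;

public class PoolMetricsExample {
    public static void main(String[] args) {
        // Enable the same session- and node-level metrics as the config block above.
        DriverConfigLoader loader = DriverConfigLoader.programmaticBuilder()
                .withStringList(DefaultDriverOption.METRICS_SESSION_ENABLED,
                        Arrays.asList("connected-nodes", "cql-requests"))
                .withStringList(DefaultDriverOption.METRICS_NODE_ENABLED,
                        Arrays.asList("pool.open-connections", "pool.in-flight"))
                .build();

        try (CqlSession session = CqlSession.builder()
                .addContactPoint(new InetSocketAddress("127.0.0.1", 9042)) // placeholder contact point
                .withLocalDatacenter("datacenter1")                        // placeholder DC name
                .withConfigLoader(loader)
                .build()) {

            // The enabled metrics land in a Dropwizard MetricRegistry; pool.open-connections
            // is a per-node gauge, so polling it (or exporting the registry to Prometheus)
            // is one way to watch connections being opened and closed over time.
            session.getMetrics().ifPresent(metrics ->
                    metrics.getRegistry().getGauges().forEach((name, gauge) ->
                            System.out.println(name + " = " + gauge.getValue())));
        }
    }
}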
My Spring Boot application uses Spring Data JPA to interact with an AWS MySQL RDS instance. The application is deployed on 2 EC2 instances behind an ELB.
The MySQL DB has a maximum limit of 147 connections, so I have added the following entries to my application-prod.properties file (70 per instance, 140 in total, which stays under the limit):
spring.datasource.hikari.connection-timeout=30000
spring.datasource.hikari.maximum-pool-size=70
After this my application runs fine for a few days, but then I start getting "JDBC connection not found" errors again.
On further debugging, running the following command on my MySQL instance gives me only 73 active connections, whereas while the application was running properly it returned 143.
Could it be that one of the EC2 instances is stopped or terminated?
If so, what can I do to debug further?
SHOW STATUS WHERE `variable_name` = 'Threads_connected';
I have had a hard time debugging this issue and need some help.
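One thing worth checking (a sketch assuming the stock MySQL information_schema, nothing RDS-specific) is how those connections break down per client host; if one of the EC2 instances has been stopped or has lost its pool, its address simply disappears from the result:
SELECT SUBSTRING_INDEX(host, ':', 1) AS client_host,
       COUNT(*) AS connections
FROM information_schema.PROCESSLIST
GROUP BY client_host
ORDER BY connections DESC;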
I'm using a Hikari connection pool through the Play framework and the MariaDB client, and since I updated them (Play 2.6.5 -> 2.6.6 and mariadb-java-client 2.1.1 -> 2.1.2, though I'm not sure it's related) I regularly get the following error:
HikariPool-1 - Failed to validate connection org.mariadb.jdbc.MariaDbConnection@31124a47 (Connection.setNetworkTimeout cannot be called on a closed connection)
at com.zaxxer.hikari.pool.PoolBase.isConnectionAlive(PoolBase.java:184)
at com.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:172)
at com.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:146)
at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:85)
at play.api.db.DefaultDatabase.getConnection(Databases.scala:142)
at play.api.db.DefaultDatabase.withConnection(Databases.scala:152)
at play.api.db.DefaultDatabase.withConnection(Databases.scala:148)
I've found a related issue here and tried changing idleTimeout and maxLifetime to 2 and 5 minutes respectively, but the error still happens.
I'm using HikariCP 2.7.1, Play 2.6.6 and mariadb-java-client 2.1.2.
Although you write that you had no success solving this issue by changing the maxLifetime value, I want to note that it actually worked for me: setting it to 590000 removed the warnings from my log file.
The maxLifetime value (in milliseconds) of your client should be less than the wait_timeout value (in seconds) of your MySQL instance. That way the client always terminates the connection before the database tries to. The other way around, the client tries to act on a connection that has already been closed, and you get the above-mentioned warnings in your log file.
To see the wait_timeout value of your MySQL instance, you can use the following query:
SHOW VARIABLES like '%timeout%';
The default wait_timeout value for MariaDB should be 28800 seconds, but I noticed that 600 can be in effect because of MySQL config files being loaded.
I should note that I have no other explicit Hikari configuration in place except for a maximum-pool-size of 50.
I got the inspiration from: https://github.com/brettwooldridge/HikariCP/issues/856 by the way. Other very useful resources are: https://github.com/brettwooldridge/HikariCP#configuration-knobs-baby and https://mariadb.com/kb/en/library/server-system-variables/#wait_timeout
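For what it's worth, a minimal sketch of that maxLifetime setting when configuring HikariCP directly in Java (the JDBC URL and credentials are placeholders; with Play you would set the equivalent key in its database configuration instead):
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class PoolSetup {
    public static HikariDataSource createPool() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:mariadb://localhost:3306/mydb"); // placeholder URL
        config.setUsername("user");                              // placeholder credentials
        config.setPassword("secret");
        // Keep maxLifetime (milliseconds) comfortably below the server's wait_timeout (seconds):
        // with wait_timeout = 600 s, 590000 ms leaves a ~10 s safety margin.
        config.setMaxLifetime(590000);
        config.setMaximumPoolSize(50);
        return new HikariDataSource(config);
    }
}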
SOLVED
Ultimately the solution noted in the similar question linked below worked for us. When we tried it initially, our configuration parser was mangling the URL and removing &connectTimeout=15000&socketTimeout=60000 from it, which invalidated that test.
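In other words, the datasource URL that actually reaches the pool has to keep both driver parameters intact, e.g. (host and database name are placeholders, matching the pool setup further down):
jdbc:mysql://host:3306/dbname?connectTimeout=15000&socketTimeout=60000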
I'm having trouble getting a Tomcat JDBC connection pool to fail over to a new DB server using Amazon RDS' Multi-AZ feature. When failover occurs, the application server gets hung up trying to borrow a connection from the pool. It's similar to this question, however the solution to this user's problem did not help me, so I suspect it's not quite the same: Configure GlassFish JDBC connection pool to handle Amazon RDS Multi-AZ failover
The sequence goes a bit like this:
A successful request is made; log output is as expected.
Failover is initiated (via "reboot with failover" in RDS).
A request is made that times out; log messages from the request appear as expected up until a connection is borrowed from the pool.
Subsequent requests generate no log messages; they time out as well.
After some amount of time, the daemon eventually starts printing more log messages as if it had successfully connected to the database and is performing operations. This can take just over 16 minutes to occur, by which point the client has long since timed out.
If I wait about 50 minutes and try again, the system eventually accepts connections again.
Notes
If I tell Tomcat to shut down, there are exceptions in the logs about it being unable to clean up resources and about my servlet still processing a request. Most notable (in my opinion) is the following stack trace, indicating at least one thread is stuck waiting for communication over a socket from MySQL.
java.net.SocketInputStream.socketRead0(Native Method)
java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
java.net.SocketInputStream.read(SocketInputStream.java:171)
java.net.SocketInputStream.read(SocketInputStream.java:141)
com.mysql.jdbc.util.ReadAheadInputStream.fill(ReadAheadInputStream.java:114)
com.mysql.jdbc.util.ReadAheadInputStream.readFromUnderlyingStreamIfNecessary(ReadAheadInputStream.java:161)
com.mysql.jdbc.util.ReadAheadInputStream.read(ReadAheadInputStream.java:189)
com.mysql.jdbc.MysqlIO.readFully(MysqlIO.java:3116)
com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3573)
com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3562)
com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4113)
com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2570)
com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2731)
com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2812)
com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2761)
com.mysql.jdbc.StatementImpl.execute(StatementImpl.java:894)
com.mysql.jdbc.StatementImpl.execute(StatementImpl.java:732)
org.apache.tomcat.jdbc.pool.PooledConnection.validate(PooledConnection.java:441)
org.apache.tomcat.jdbc.pool.PooledConnection.validate(PooledConnection.java:384)
org.apache.tomcat.jdbc.pool.ConnectionPool.borrowConnection(ConnectionPool.java:716)
org.apache.tomcat.jdbc.pool.ConnectionPool.borrowConnection(ConnectionPool.java:579)
org.apache.tomcat.jdbc.pool.ConnectionPool.getConnection(ConnectionPool.java:174)
org.apache.tomcat.jdbc.pool.DataSourceProxy.getConnection(DataSourceProxy.java:111)
<...>
If I restart Tomcat, things return to normal as soon as it comes back. Obviously, long term, this is not a workable solution for maintaining uptime.
Environment Details
Database: MySQL (using mysql-connector-java 5.1.26) (server 5.5.53)
Tomcat 8.0.45
I've gone through several changes to my configuration while trying to solve this issue. At the time of this writing the following related settings are in place:
jre/lib/security/java.security -- I'm under the impression that the default value for Oracle Java 1.8 is 30s with no security manager. I set these to zero just to be positive this isn't the issue.
networkaddress.cache.ttl: 0
networkaddress.cache.negative.ttl: 0
connection pool settings
testOnBorrow: true
testOnConnect: true
testOnReturn: true
jdbc url parameters
connectTimeout:60000
socketTimeout:60000
autoReconnect:true
Update
Still no solution found.
Added logging to confirm that this was not a DNS caching issue: the IP address logged immediately before the timeout matches the IP address of the 'new' master RDS host.
For reference, the following block represents the properties I'm using to initialize my data source. I'm configuring the pool in code rather than via JNDI, with some elements pulled out of our app's config file. I've pasted the code below along with comments indicating what the config values are for the tests I've been running.
PoolProperties p = new PoolProperties();
p.setUrl(config.get("JDBC_URL")); // jdbc:mysql://host:3306/dbname
p.setDriverClassName("com.mysql.jdbc.Driver");
p.setUsername(config.get("JDBC_USER"));
p.setPassword(config.get("JDBC_PASSWORD"));
p.setJmxEnabled(true);
p.setTestWhileIdle(false);
p.setTestOnBorrow(true);
p.setValidationQuery("SELECT 1");
p.setValidationInterval(30000);
p.setTimeBetweenEvictionRunsMillis(30000);
p.setMaxActive(Integer.parseInt(config.get("MAX_ACTIVE"))); //45
p.setInitialSize(10);
p.setMaxWait(5);
p.setRemoveAbandonedTimeout(Integer.parseInt(config.get("REMOVE_ABANDONED_TIMEOUT"))); //600
p.setMinEvictableIdleTimeMillis(Integer.parseInt(config.get("DB_EVICTION_TIMEOUT"))); //60000
p.setMinIdle(Integer.parseInt(config.get("DB_MIN_IDLE"))); //50
p.setLogAbandoned(Boolean.parseBoolean(config.get("LOG_ABANDONED"))); //true
p.setRemoveAbandoned(true);
p.setJdbcInterceptors(
"org.apache.tomcat.jdbc.pool.interceptor.ConnectionState;"+
"org.apache.tomcat.jdbc.pool.interceptor.StatementFinalizer;"+
"org.apache.tomcat.jdbc.pool.interceptor.ResetAbandonedTimer");
// make sure new connections have auto commit true
p.setDefaultAutoCommit(true);
I am running a batch Java application. The application runs every 10-20 minutes in my Production and UAT environments, and I get database alerts like this:
Thu Feb 06 15:15:08 2014
opiodr aborting process unknown ospid (28246400) as a result of ORA-609
After researching a bit on the internet, the suggested fix for these alerts is to change INBOUND_CONNECT_TIMEOUT as follows:
Sqlnet.ora: SQLNET.INBOUND_CONNECT_TIMEOUT=180
Listener.ora: INBOUND_CONNECT_TIMEOUT_listener_name=120
We have changed the settings on the database server side, but we don't know what to change in the client application. We are using c3p0 to create a connection pool, and we are setting only these parameters:
dataSource.setAcquireRetryDelay(30000);
dataSource.setMaxPoolSize(50);
dataSource.setMinPoolSize(20);
dataSource.setInitialPoolSize(10);
We have other web services running on the same server as the batch application; they use Tomcat's DBCP pool and don't seem to generate any alerts. Also, strangely enough, our batch application doesn't generate the alerts in the lower test environments. They happen there once in a while, but the UAT and PROD environments get these alerts very frequently based on the schedule. Any suggestions on what configuration to set in the c3p0 pool, or should I try switching to another pooling library like DBCP?
Update: I have added a few more parameters to the datasource and the frequency of the alerts has been reduced. I added the following, and the number of alerts has gone down from 15 an hour to 4 an hour.
dataSource.setLoginTimeout(120);
dataSource.setAcquireRetryAttempts(10);
dataSource.setCheckoutTimeout(60000);
I moved to DBCP connection pooling and it seems to have fixed the issue. I tried changing a few more of the c3p0 settings mentioned above, but nothing changed; the alerts were reduced but never went away completely. So we decided to try DBCP. I am using all default values in DBCP except for the pool size, and I'm using the Tomcat version of DBCP available in Tomcat's lib folder (tomcat-dbcp.jar). A rough sketch of this kind of setup is shown below.
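Something like the following, written against plain Commons DBCP 2 (tomcat-dbcp.jar ships the same code repackaged under an org.apache.tomcat.dbcp package, and the older DBCP 1.x API names the maxTotal property maxActive); the driver class, URL, credentials and class name here are placeholders rather than our real configuration:
import org.apache.commons.dbcp2.BasicDataSource;

public class DbcpSetup {
    public static BasicDataSource createDataSource() {
        // Mostly default DBCP settings; only the pool sizing mirrors the old c3p0 pool.
        BasicDataSource ds = new BasicDataSource();
        ds.setDriverClassName("oracle.jdbc.OracleDriver");     // placeholder driver class
        ds.setUrl("jdbc:oracle:thin:@//db-host:1521/SERVICE"); // placeholder URL
        ds.setUsername("batch_user");                          // placeholder credentials
        ds.setPassword("secret");
        ds.setInitialSize(10);
        ds.setMaxTotal(50);
        ds.setMinIdle(20);
        return ds;
    }
}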