Hikari: Failed to validate connection because connection is closed - java

I'm using a Hikari connection pool through the Play framework and the MariaDB client, and since I updated them (Play 2.6.5 -> 2.6.6 and mariadb-java-client 2.1.1 -> 2.1.2, though I'm not sure it's related) I regularly get the following error:
HikariPool-1 - Failed to validate connection org.mariadb.jdbc.MariaDbConnection@31124a47 (Connection.setNetworkTimeout cannot be called on a closed connection)
at com.zaxxer.hikari.pool.PoolBase.isConnectionAlive(PoolBase.java:184)
at com.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:172)
at com.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:146)
at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:85)
at play.api.db.DefaultDatabase.getConnection(Databases.scala:142)
at play.api.db.DefaultDatabase.withConnection(Databases.scala:152)
at play.api.db.DefaultDatabase.withConnection(Databases.scala:148)
I've found a related issue here and tried changing idleTimeout and maxLifetime to 2 and 5 minutes respectively, but the error still happens.
I'm using HikariCP 2.7.1, play 2.6.6 and mariadb-java-client 2.1.2

Although you write that you had no success solving this issue by changing the maxLifetime value, I wanted to note that it actually worked for me. Setting it to 590000 removed the warnings from my log file.
The maxLifetime value (in milliseconds) of your client should be less than the wait_timeout value (in seconds) of your MySQL instance. This way the client always terminates the connection before the database tries to. The other way around, the client will try to act on a closed connection and you will get the above-mentioned warnings in your log file.
To see the wait_timeout value of your MySQL instance, you can use the following query:
SHOW VARIABLES like '%timeout%';
The default wait_timeout value for MariaDB should be 28800 (seconds), but I noticed that 600 can be in effect because of MySQL config files being loaded.
I should note that I have no other explicit hikari configuration in place except for a maximum-pool-size of 50.
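For illustration, here is a minimal standalone sketch of that configuration against HikariCP directly (the URL and credentials are placeholders; with Play you would set the equivalent keys in application.conf instead):

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class PoolSetup {
    public static HikariDataSource create() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:mariadb://localhost:3306/mydb"); // placeholder URL
        config.setUsername("user");                              // placeholder credentials
        config.setPassword("secret");
        // maxLifetime is in milliseconds, wait_timeout in seconds:
        // with wait_timeout = 600 s, 590000 ms retires connections ~10 s earlier.
        config.setMaxLifetime(590000);
        config.setMaximumPoolSize(50);
        return new HikariDataSource(config);
    }
}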
I got the inspiration from https://github.com/brettwooldridge/HikariCP/issues/856, by the way. Other very useful resources are https://github.com/brettwooldridge/HikariCP#configuration-knobs-baby and https://mariadb.com/kb/en/library/server-system-variables/#wait_timeout

Related

Oracle 19c JDBC (Instant client basiclite) driver Issue

I'm working on upgrading the Oracle JDBC driver from 11g (ojdbc6.jar) to 19c (ojdbc8.jar) in my Java application; the driver used is Instant Client (instantclient-basiclite-nt-19.11) with JRE 1.8.0_271. After the change to 19c, my application keeps hitting "ORA-02396: exceeded maximum idle time, please connect again" or "ORA-03113: end-of-file" errors.
In the Oracle database properties there are some limits set: Idle-Time = 2 minutes and Connection-Time = 10 minutes. But I will not make any change on the database side, because that might cause high CPU if many users are using the application at the same time.
In the Java application, connections are stored in a connection pool. I added logging and can see that the connection is closed and returned to the connection pool after execution finishes. But when I run the application again after 2 minutes, the Oracle error is raised.
If I switch back to 11g I don't get such errors, and the application works fine after 2 minutes. No change in code.
Is this a bug in the latest Oracle driver? I saw there is a ucp.jar (Universal Connection Pool) package available in 19c but not in 11g; do I have to implement this, and how?
Your problem has nothing to do with the driver but with a database profile setting that limits the maximum allowed IDLE_TIME. Normally this is done to get rid of forgotten sessions.
You can check this using
select a.username, b.profile, b.resource_name, b.limit
from dba_users a, dba_profiles b
where b.resource_name = 'IDLE_TIME'
  and a.profile = b.profile;
Find which profile is used for your user and see if your DBA is willing to change the limit.
If this happens to be the default profile, it can be changed to unlimited with
ALTER PROFILE DEFAULT LIMIT IDLE_TIME UNLIMITED;
but it might be better to create a custom profile for your user.
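For example (a sketch with hypothetical profile and user names):

CREATE PROFILE app_profile LIMIT IDLE_TIME UNLIMITED;
ALTER USER app_user PROFILE app_profile;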
If the IDLE_TIME limit cannot be changed, run a query every once in a while, like select 'keep me alive' from dual;
This also prevents closure by firewalls.
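Here is a minimal sketch of such a keep-alive, assuming a javax.sql.DataSource is available; note it only touches whichever pooled connection it happens to borrow, so pool-level idle validation is the more thorough option:

import java.sql.Connection;
import java.sql.Statement;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import javax.sql.DataSource;

public class KeepAlive {
    // Pings the database at a fixed interval shorter than the IDLE_TIME limit.
    public static ScheduledExecutorService start(DataSource ds, long periodSeconds) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(() -> {
            try (Connection conn = ds.getConnection();
                 Statement stmt = conn.createStatement()) {
                stmt.execute("select 'keep me alive' from dual");
            } catch (Exception e) {
                e.printStackTrace(); // log and let the next tick retry
            }
        }, periodSeconds, periodSeconds, TimeUnit.SECONDS);
        return scheduler;
    }
}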

Optimizing connection to GCP mysql from cloud run service?

I have a Spring Boot application running on Cloud Run. So far I only had to add the Spring Cloud GCP MySQL
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-gcp-starter-sql-mysql</artifactId>
    <version>1.2.8.RELEASE</version>
</dependency>
dependency to my POM and configure my application.yml file to set the database name, connection name, etc., and it runs fine both locally and on Cloud Run.
My application.yml:
spring:
  cloud:
    gcp:
      sql:
        enabled: true
        database-name: pos_database
        instance-connection-name: pos-sys:asia-southeast2:pos-server-database
  datasource:
    driver-class-name: com.mysql.cj.jdbc.Driver
    username: ***
    password: ***
    hikari:
      maximum-pool-size: 20
However, I realized cold start performance has taken a hit, because on startup the socket factory connects to the database instance via an SSL socket:
2021-05-31 13:10:07.152 INFO 1539 --- [onnection adder] c.g.cloud.sql.core.CoreSocketFactory :
Connecting to Cloud SQL instance [pos-sys:asia-southeast2:owl-server-database] via SSL socket.
and I get a bunch of lines just repeating:
2021-05-31 13:10:09.461 INFO 1539 --- [connection adder] c.g.cloud.sql.core.CoreSocketFactory :
Connecting to Cloud SQL instance [pos-sys:asia-southeast2:pos-server-database] via SSL socket.
I know there is a faster way to connect when the application is running on the cloud; I have been following this tutorial so far:
https://cloud.google.com/sql/docs/mysql/connect-run
But I'm very confused by the last part, where it says I have to connect with a Unix socket. Is this a Docker thing, or within my application? Where does the ConnectionPoolContextListener.java file have to go?
It also says in a comment within the file itself not to use this for Java, and to instead use the Cloud SQL JDBC Socket Factory.
But when I go to that link, it says to add a dependency for mysql-connector, but isn't that already included in spring-cloud-gcp-starter-sql-mysql? It also says to make a connection string in this format:
jdbc:mysql:///<DATABASE_NAME>?cloudSqlInstance=<INSTANCE_CONNECTION_NAME>&socketFactory=com.google.cloud.sql.mysql.SocketFactory&user=<MYSQL_USER_NAME>&password=<MYSQL_USER_PASSWORD>
But it doesn't mention where I should put this.
So to summarise:
I have a Cloud SQL MySQL instance with the Admin API enabled.
I enabled the Cloud SQL connection in my Cloud Run service by selecting my DB instance.
I am very confused by the documentation on what the next step is.
Cloud Run provides a Unix domain socket when configured with a Cloud SQL instance - it's a file that can be used to connect to a database. You are using the Cloud SQL Java connector, which allows you to bypass the Unix socket (bypassing it is usually preferred in Java, since Java doesn't natively support Unix sockets).
Instead, to improve your cold start time, I recommend doing two things:
Reduce the number of connections in your pool. While the optimal number varies greatly between applications, 20 is almost certainly more than you need. As a rule of thumb, try 2 x the number of cores as your starting value, and increase or decrease as needed. Hikari uses maximumPoolSize to control this.
Adjust the number of starting connections in your pool. Hikari offers minimumIdle, which sets the minimum number of idle connections the pool maintains, up to maximumPoolSize. While Hikari recommends not setting this value (so you get a fixed-size pool), setting it to 0 means your pool won't establish any connections on startup. Your application will start faster, but it will take longer on average to get a connection from the pool. Both settings are sketched below.
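As a sketch against the application.yml from the question (the exact numbers are assumptions you should tune for your workload):

spring:
  datasource:
    hikari:
      maximum-pool-size: 4   # roughly 2 x vCPUs as a starting point
      minimum-idle: 0        # establish no connections at startup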

Tomcat jdbc stalls during amazon multi-az failover

SOLVED
Ultimately the solution noted in the similar question linked below worked for us. When we tried it initially, our configuration parser was mangling the URL and removing &connectTimeout=15000&socketTimeout=60000 from it, which invalidated that test.
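For reference, the resulting URL looks roughly like this (host and database name are placeholders, matching the comment in the code below):

jdbc:mysql://host:3306/dbname?connectTimeout=15000&socketTimeout=60000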
I'm having trouble getting a Tomcat JDBC connection pool to fail over to a new DB server using Amazon RDS' Multi-AZ feature. When failover occurs, the application server gets hung up trying to borrow a connection from the pool. It's similar to this question, however the solution to that user's problem did not help me, so I suspect it's not quite the same: Configure GlassFish JDBC connection pool to handle Amazon RDS Multi-AZ failover
The sequence goes a bit like this:
Successful request is made, log output as expected.
Failover is initiated (via reboot with failover in RDS).
A request is made that times out; log messages from the request appear as expected up until a connection is borrowed from the pool.
Subsequent requests generate no log messages; they time out as well.
After some amount of time, the daemon eventually starts printing more log messages as if it had successfully connected to the database and performed operations. This can take just over 16 minutes to occur; the client has long since timed out.
If I wait about 50 minutes and try again, the system eventually accepts connections again.
Notes
If I tell Tomcat to shut down, there are exceptions in the logs about it being unable to clean up resources and about my servlet still processing a request. Most notable (in my opinion) is the following stack trace, indicating that at least one thread is stuck waiting for communication over a socket from MySQL.
java.net.SocketInputStream.socketRead0(Native Method)
java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
java.net.SocketInputStream.read(SocketInputStream.java:171)
java.net.SocketInputStream.read(SocketInputStream.java:141)
com.mysql.jdbc.util.ReadAheadInputStream.fill(ReadAheadInputStream.java:114)
com.mysql.jdbc.util.ReadAheadInputStream.readFromUnderlyingStreamIfNecessary(ReadAheadInputStream.java:161)
com.mysql.jdbc.util.ReadAheadInputStream.read(ReadAheadInputStream.java:189)
com.mysql.jdbc.MysqlIO.readFully(MysqlIO.java:3116)
com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3573)
com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3562)
com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4113)
com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2570)
com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2731)
com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2812)
com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2761)
com.mysql.jdbc.StatementImpl.execute(StatementImpl.java:894)
com.mysql.jdbc.StatementImpl.execute(StatementImpl.java:732)
org.apache.tomcat.jdbc.pool.PooledConnection.validate(PooledConnection.java:441)
org.apache.tomcat.jdbc.pool.PooledConnection.validate(PooledConnection.java:384)
org.apache.tomcat.jdbc.pool.ConnectionPool.borrowConnection(ConnectionPool.java:716)
org.apache.tomcat.jdbc.pool.ConnectionPool.borrowConnection(ConnectionPool.java:579)
org.apache.tomcat.jdbc.pool.ConnectionPool.getConnection(ConnectionPool.java:174)
org.apache.tomcat.jdbc.pool.DataSourceProxy.getConnection(DataSourceProxy.java:111)
<...>
If I restart Tomcat, things return to normal as soon as it comes back. Obviously, long term this is probably not a workable solution for maintaining uptime.
Environment Details
Database: MySQL (using mysql-connector-java 5.1.26) (server 5.5.53)
Tomcat 8.0.45
I've gone through several changes to my configuration while trying to solve this issue. At the time of this writing the following related settings are in place:
jre/lib/security/java.security -- I'm under the impression that the default DNS cache TTL for Oracle Java 1.8 is 30s with no security manager. I set these to zero just to be positive this isn't the issue.
networkaddress.cache.ttl: 0
networkaddress.cache.negative.ttl: 0
connection pool settings
testOnBorrow: true
testOnConnect: true
testOnReturn: true
jdbc url parameters
connectTimeout:60000
socketTimeout:60000
autoReconnect:true
Update
Still no solution found.
Added logging to confirm that this was not a DNS caching issue: the IP address logged immediately before the timeout matches the IP address of the 'new' master RDS host.
For reference the following block represents the properties I'm using to initialize my data source. I’m configuring the pool in code rather than JNDI, with some elements pulled out of our app's config file. I’ve pasted the code below along with comments indicating what the config values are for the tests I’ve been running.
PoolProperties p = new PoolProperties();
p.setUrl(config.get("JDBC_URL")); // jdbc:mysql://host:3306/dbname
p.setDriverClassName("com.mysql.jdbc.Driver");
p.setUsername(config.get("JDBC_USER"));
p.setPassword(config.get("JDBC_PASSWORD"));
p.setJmxEnabled(true);
p.setTestWhileIdle(false);
p.setTestOnBorrow(true);
p.setValidationQuery("SELECT 1");
p.setValidationInterval(30000);
p.setTimeBetweenEvictionRunsMillis(30000);
p.setMaxActive(Integer.parseInt(config.get("MAX_ACTIVE"))); //45
p.setInitialSize(10);
p.setMaxWait(5);
p.setRemoveAbandonedTimeout(Integer.parseInt(config.get("REMOVE_ABANDONED_TIMEOUT"))); //600
p.setMinEvictableIdleTimeMillis(Integer.parseInt(config.get("DB_EVICTION_TIMEOUT"))); //60000
p.setMinIdle(Integer.parseInt(config.get("DB_MIN_IDLE"))); //50
p.setLogAbandoned(Boolean.parseBoolean(config.get("LOG_ABANDONED"))); //true
p.setRemoveAbandoned(true);
p.setJdbcInterceptors(
"org.apache.tomcat.jdbc.pool.interceptor.ConnectionState;"+
"org.apache.tomcat.jdbc.pool.interceptor.StatementFinalizer;"+
"org.apache.tomcat.jdbc.pool.interceptor.ResetAbandonedTimer");
// make sure new connections have auto commit true
p.setDefaultAutoCommit(true);

JDBC Connection Pool: connections are not recycled after DB restart

I added setMaxActive(8) on org.apache.tomcat.jdbc.pool.PoolProperties. Every time the DB restarts, the application is unusable because the previously established connections remain in the pool. I get the following error:
org.postgresql.util.PSQLException: This connection has been closed
I've tried using some other settings on the pool, to no avail...
Thank you for your help!
Use the validationQuery property, which checks that a connection is valid before the pool returns it.
Ref: Tomcat 6 JDBC Connection Pool
This property is also available in later Tomcat versions.
Look at this link:
Postgres connection has been closed error in Spring Boot
A very valid question, and this problem is faced by many. The exception generally occurs when the network connection is lost between the pool and the database (most of the time due to a restart). Looking at the stack trace you have posted, it is quite clear that you are using jdbc-pool to get the connection. jdbc-pool has options to fine-tune various connection pool settings and to log details about what's going on inside the pool.
You can refer to the detailed Apache documentation on pool configuration to specify an abandon timeout: check the removeAbandoned, removeAbandonedTimeout, and logAbandoned parameters.
Additionally, you can make use of further properties to tighten the validation: use the testXXX and validationQuery properties for connection validity.
My own $0.02: use these two parameters:
validationQuery=<TEST SQL>
testOnBorrow=true
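Wired into the same org.apache.tomcat.jdbc.pool API shown in the previous question, that looks roughly like this (the URL and credentials are placeholders):

import org.apache.tomcat.jdbc.pool.DataSource;
import org.apache.tomcat.jdbc.pool.PoolProperties;

public class ValidatedPool {
    public static DataSource create() {
        PoolProperties p = new PoolProperties();
        p.setUrl("jdbc:postgresql://localhost:5432/mydb"); // placeholder URL
        p.setDriverClassName("org.postgresql.Driver");
        p.setUsername("user");                             // placeholder credentials
        p.setPassword("secret");
        p.setValidationQuery("SELECT 1"); // cheap test statement
        p.setTestOnBorrow(true);          // validate before handing a connection out
        p.setMaxActive(8);
        return new DataSource(p);
    }
}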

How to reestablish a JDBC connection after a timeout?

I have a long-running method which executes a large number of native SQL queries through the EntityManager (TopLink Essentials). Each query takes only milliseconds to run, but there are many thousands of them. This happens within a single EJB transaction. After 15 minutes, the database closes the connection, which results in the following error:
Exception [TOPLINK-4002] (Oracle TopLink Essentials - 2.1 (Build b02-p04 (04/12/2010))): oracle.toplink.essentials.exceptions.DatabaseException
Internal Exception: java.sql.SQLException: Closed Connection
Error Code: 17008
Call: select ...
Query: DataReadQuery()
at oracle.toplink.essentials.exceptions.DatabaseException.sqlException(DatabaseException.java:319)
.
.
.
RAR5031:System Exception.
javax.resource.ResourceException: This Managed Connection is not valid as the phyiscal connection is not usable
at com.sun.gjc.spi.ManagedConnection.checkIfValid(ManagedConnection.java:612)
In the JDBC connection pool I set is-connection-validation-required="true" and connection-validation-method="table", but this did not help.
I assumed that JDBC connection validation is there to deal with precisely this kind of error. I also looked at the TopLink extensions (http://www.oracle.com/technetwork/middleware/ias/toplink-jpa-extensions-094393.html) for some kind of timeout setting but found nothing. There is also the TopLink session configuration file (http://download.oracle.com/docs/cd/B14099_19/web.1012/b15901/sessions003.htm), but I don't think there is anything useful there either.
I don't have access to the Oracle DBA tables, but I think Oracle closes connections after 15 minutes according to the setting in the CONNECT_TIME profile variable.
Is there any other way to make TopLink or the JDBC pool to reestablish a closed connection?
The database is Oracle 10g, application server is Sun Glassfish 2.1.1.
All JPA implementations (running on a Java EE container) use a datasource with an associated connection pool to manage connectivity with the database.
The persistence context itself is associated with the datasource via an appropriate entry in persistence.xml. If you wish to change the connection timeout settings on the client-side, then the associated connection pool must be re-configured.
In Glassfish, the timeout settings associated with the connection pool can be reconfigured by editing the pool settings, as listed in the following links:
Changing timeout settings in GlassFish 3.1
Changing timeout settings in GlassFish 2.1
On the server side (whose settings, if lower than the client settings, are the more important ones), the Oracle database can be configured with database profiles associated with user accounts. The IDLE_TIME and CONNECT_TIME parameters of a profile constitute the timeout settings of importance in this aspect of the client-server interaction. If no profile has been set, then by default the timeout is unlimited.
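If you can query the data dictionary, a check along these lines (it requires access to the DBA views) shows the limits in effect:

select profile, resource_name, limit
from dba_profiles
where resource_name in ('IDLE_TIME', 'CONNECT_TIME');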
Unless you've got some sort of RAC failover, when the connection is terminated, it will end the session and transaction.
The admins may have set limits to prevent runaway transactions or a single job 'hogging' a connection in a pool. You generally don't want to lock a connection in a pool for an extended period.
If these queries aren't necessarily part of the same transaction, then you could try terminating and restarting a new connection.
Are you able to restructure your code so that it completes in under 15 minutes? A stored procedure running in the background may be able to do the job a lot quicker than dragging the results of thousands of operations over the network.
I see you set connection-validation-method="table" and is-connection-validation-required="true", but you do not mention that you specified the table to validate against. Did you set validation-table-name="any_table_you_know_exists" with an existing table name? validation-table-name="existing_table_name" is required.
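In domain.xml, the pool entry would then carry all three attributes; here is a sketch with a placeholder pool name (for Oracle, DUAL is a safe table to validate against):

<jdbc-connection-pool name="OraclePool"
                      is-connection-validation-required="true"
                      connection-validation-method="table"
                      validation-table-name="DUAL">
    <!-- other pool settings elided -->
</jdbc-connection-pool>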
See this article for more details on connection validation.
Related StackOverflow article with similar problem - he wants to flush the entire invalid connection pool.
