How to preinitialize DBCP connection pool on startup? - java

The setup of my project is -
Spring JDBC for persistence
Apache DBCP 1.4 for connection pooling
Mysql 5 on Linux
Here is the log of my application that captures the interactions with the database.
2013-01-29 15:52:21,549 DEBUG http-bio-8080-exec-3 org.springframework.jdbc.core.JdbcTemplate - Executing SQL query [SELECT id from emp]
2013-01-29 15:52:21,558 DEBUG http-bio-8080-exec-3 org.springframework.jdbc.datasource.DataSourceUtils - Fetching JDBC Connection from DataSource
2013-01-29 15:52:31,878 INFO http-bio-8080-exec-3 jdbc.connection - 1. Connection opened org.apache.commons.dbcp.DriverConnectionFactory.createConnection(DriverConnectionFactory.java:38)
2013-01-29 15:52:31,878 DEBUG http-bio-8080-exec-3 jdbc.connection - open connections: 1 (1)
2013-01-29 15:52:31,895 INFO http-bio-8080-exec-3 jdbc.connection - 1. Connection closed org.apache.commons.dbcp.DelegatingConnection.close(DelegatingConnection.java:247)
2013-01-29 15:52:31,895 DEBUG http-bio-8080-exec-3 jdbc.connection - open connections: none
2013-01-29 15:52:41,950 INFO http-bio-8080-exec-3 jdbc.connection - 2. Connection opened org.apache.commons.dbcp.DriverConnectionFactory.createConnection(DriverConnectionFactory.java:38)
2013-01-29 15:52:41,950 DEBUG http-bio-8080-exec-3 jdbc.connection - open connections: 2 (1)
2013-01-29 15:52:52,001 INFO http-bio-8080-exec-3 jdbc.connection - 3. Connection opened org.apache.commons.dbcp.DriverConnectionFactory.createConnection(DriverConnectionFactory.java:38)
2013-01-29 15:52:52,002 DEBUG http-bio-8080-exec-3 jdbc.connection - open connections: 2 3 (2)
2013-01-29 15:53:02,058 INFO http-bio-8080-exec-3 jdbc.connection - 4. Connection opened org.apache.commons.dbcp.DriverConnectionFactory.createConnection(DriverConnectionFactory.java:38)
2013-01-29 15:53:02,058 DEBUG http-bio-8080-exec-3 jdbc.connection - open connections: 2 3 4 (3)
2013-01-29 15:53:03,403 DEBUG http-bio-8080-exec-3 org.springframework.jdbc.core.BeanPropertyRowMapper - Mapping column 'id' to property 'id' of type int
2013-01-29 15:53:04,494 DEBUG http-bio-8080-exec-3 org.springframework.jdbc.datasource.DataSourceUtils - Returning JDBC Connection to DataSource
Two things are clear from the log -
The connection pool only starts creating connections when the first request to execute a query is received.
A pool of 4 connections takes nearly 30 seconds to initialize.
My questions are -
How should one configure DBCP to initialize on startup automatically?
Should it really take that long to create connections?
Note: Please don't suggest switching to C3P0 or Tomcat connection pool. I'm aware of those solutions. I'm more interested in understanding the problem at hand than just a quick fix.
Besides I'm sure something so basic should be possible with DBCP as well.
Contents of dbcontext -
<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
<property name="driverClassName" value="${db.driver}" />
<property name="url" value="${db.jdbc.url}" />
<property name="username" value="${db.user}" />
<property name="password" value="${db.password}" />
<property name="maxActive" value="20" />
<property name="initialSize" value="4" />
<property name="testOnBorrow" value="true" />
<property name="validationQuery" value="SELECT 1" />
</bean>

The initialSize doesn't take effect until you first request a connection. From the Javadoc for BasicDataSource#setInitialSize:
Sets the initial size of the connection pool.
Note: this method currently has no effect once the pool has been
initialized. The pool is initialized the first time one of the
following methods is invoked: getConnection, setLogwriter,
setLoginTimeout, getLoginTimeout, getLogWriter.
Try adding init-method="getLoginTimeout" to your bean to confirm this.
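As a rough illustration (a sketch only, not the poster's Spring wiring; driver, URL and credentials are placeholders), the same effect can be had by touching the pool once right after configuring it, for example by borrowing and immediately returning a connection:

import java.sql.Connection;
import org.apache.commons.dbcp.BasicDataSource;

public class PoolWarmup {
    public static BasicDataSource createDataSource() throws Exception {
        BasicDataSource ds = new BasicDataSource();
        ds.setDriverClassName("com.mysql.jdbc.Driver");   // placeholder values
        ds.setUrl("jdbc:mysql://localhost:3306/mydb");
        ds.setUsername("user");
        ds.setPassword("secret");
        ds.setInitialSize(4);
        ds.setMaxActive(20);
        ds.setTestOnBorrow(true);
        ds.setValidationQuery("SELECT 1");

        // Borrowing one connection forces the internal pool to be created,
        // which also opens the initialSize connections up front.
        Connection warmup = ds.getConnection();
        warmup.close(); // returns the connection to the pool; it is not physically closed
        return ds;
    }
}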

For a web application, you can implement the ServletContextListener.contextInitialized() method and fire a test query (e.g. Select ID From Emp Limit 1) using your data-access layer. This should initialize your connection pool and make it ready before your application starts serving real users, as sketched below.
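A minimal sketch of that idea (the class name PoolWarmupListener is illustrative; it assumes the Spring root context is loaded by ContextLoaderListener and that this listener is registered after it in web.xml):

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.sql.DataSource;

import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.web.context.WebApplicationContext;
import org.springframework.web.context.support.WebApplicationContextUtils;

public class PoolWarmupListener implements ServletContextListener {

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        WebApplicationContext ctx =
                WebApplicationContextUtils.getRequiredWebApplicationContext(sce.getServletContext());
        DataSource dataSource = ctx.getBean("dataSource", DataSource.class);
        // Any cheap statement will do; executing it forces DBCP to initialize the pool.
        new JdbcTemplate(dataSource).execute("SELECT 1");
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        // Nothing to do here; the pool is closed by the bean's destroy-method="close".
    }
}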

Have a look at the initialSize property, especially the part about when the pool is initialized. As sbridges points out, you can use the init-method attribute on the bean to call one of those methods and trigger pool creation.
Also, you should look into why it takes 7.5 seconds on average to create a connection...
Cheers,

Related

SQLite in-memory database encounters SQLITE_LOCKED_SHAREDCACHE intermittently

I am using mybatis 3.4.6 along with org.xerial:sqlite-jdbc 3.28.0. Below is my configuration to use an in-memory database with shared mode enabled
db.driver=org.sqlite.JDBC
db.url=jdbc:sqlite:file::memory:?cache=shared
The db.url is correct according to this test class
I managed to set up the correct transaction isolation level with the MyBatis configuration below, though there is a typo in the read_uncommitted property according to this issue (which I reported as well):
<environment id="${db.env}">
<transactionManager type="jdbc"/>
<dataSource type="POOLED">
<property name="driver" value="${db.driver}" />
<property name="url" value="${db.url}"/>
<property name="username" value="${db.username}" />
<property name="password" value="${db.password}" />
<property name="defaultTransactionIsolationLevel" value="1" />
<property name="driver.synchronous" value="OFF" />
<property name="driver.transaction_mode" value="IMMEDIATE"/>
<property name="driver.foreign_keys" value="ON"/>
</dataSource>
</environment>
This line of configuration
<property name="defaultTransactionIsolationLevel" value="1" />
does the trick to set the correct value of PRAGMA read_uncommitted
I am pretty sure of this, since I debugged the underlying code that initializes the connection and checked that the value has been set correctly.
However, with the above settings my program still encounters SQLITE_LOCKED_SHAREDCACHE intermittently while reading, which I think shouldn't happen according to the description highlighted in the screenshot below. I want to know the reason and how to resolve it, even though the probability of this error occurring is low.
Any ideas would be appreciated!!
The debug configurations is below
===CONFINGURATION==============================================
jdbcDriver org.sqlite.JDBC
jdbcUrl jdbc:sqlite:file::memory:?cache=shared
jdbcUsername
jdbcPassword ************
poolMaxActiveConnections 10
poolMaxIdleConnections 5
poolMaxCheckoutTime 20000
poolTimeToWait 20000
poolPingEnabled false
poolPingQuery NO PING QUERY SET
poolPingConnectionsNotUsedFor 0
---STATUS-----------------------------------------------------
activeConnections 5
idleConnections 5
requestCount 27
averageRequestTime 7941
averageCheckoutTime 4437
claimedOverdue 0
averageOverdueCheckoutTime 0
hadToWait 0
averageWaitTime 0
badConnectionCount 0
===============================================================
The exception is below
org.apache.ibatis.exceptions.PersistenceException:
### Error querying database. Cause: org.apache.ibatis.transaction.TransactionException: Error configuring AutoCommit. Your driver may not support getAutoCommit() or setAutoCommit(). Requested setting: false. Cause: org.sqlite.SQLiteException: [SQLITE_LOCKED_SHAREDCACHE] Contention with a different database connection that shares the cache (database table is locked)
### The error may exist in mapper/MsgRecordDO-sqlmap-mappering.xml
### The error may involve com.super.mock.platform.agent.dal.daointerface.MsgRecordDAO.getRecord
### The error occurred while executing a query
### Cause: org.apache.ibatis.transaction.TransactionException: Error configuring AutoCommit. Your driver may not support getAutoCommit() or setAutoCommit(). Requested setting: false. Cause: org.sqlite.SQLiteException: [SQLITE_LOCKED_SHAREDCACHE] Contention with a different database connection that shares the cache (database table is locked)
I finally resolved this issue myself and am sharing the workaround below in case someone else encounters a similar issue in the future.
First of all, we were able to get the complete call stack of the exception.
Going through the source code indicated by the call stack, we have the following findings:
SQLite has auto-commit enabled by default, which conflicts with MyBatis, which disables auto-commit by default since we're using SqlSessionManager.
MyBatis overrides the auto-commit property during connection initialization via the method setDesiredAutoCommit, which ultimately invokes SQLiteConnection#setAutoCommit.
SQLiteConnection#setAutoCommit issues a BEGIN IMMEDIATE against the database, which is effectively exclusive, because we configured our transaction mode to be IMMEDIATE (see the source code screenshots for the details):
<property name="driver.transaction_mode" value="IMMEDIATE"/>
So an apparent solution is to change the transaction mode to DEFERRED. Making the auto-commit setting the same between MyBatis and SQLite was considered as well, but it was not adopted: there is no way to set the auto-commit of SQLiteConnection during the initialization stage, so there will always be a switch (from true to false or vice versa), and that switch can cause the above error if the transaction mode is not set properly.
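For reference, a hedged sketch of what the DEFERRED setting amounts to at the driver level, using sqlite-jdbc's SQLiteConfig directly instead of MyBatis's driver.* properties (the URL is the one from the question):

import java.sql.Connection;
import java.sql.DriverManager;

import org.sqlite.SQLiteConfig;

public class SqliteDeferredExample {
    public static Connection open() throws Exception {
        SQLiteConfig config = new SQLiteConfig();
        // DEFERRED means the BEGIN issued when auto-commit is switched off does not
        // take a write lock immediately, so it no longer collides with readers of the
        // shared cache the way BEGIN IMMEDIATE does.
        config.setTransactionMode(SQLiteConfig.TransactionMode.DEFERRED);
        config.enforceForeignKeys(true);
        return DriverManager.getConnection(
                "jdbc:sqlite:file::memory:?cache=shared", config.toProperties());
    }
}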

Mybatis - error occurred during closing connection

I'm using MyBatis to connect to Oracle.
My MyBatis config is:
<settings>
<setting name="lazyLoadingEnabled" value="true" />
<setting name="aggressiveLazyLoading" value="false" />
<setting name="logImpl" value="${logImpl}" />
<setting name="defaultStatementTimeout" value="10" />
</settings>
<environments default="default">
<environment id="default">
<transactionManager type="JDBC" />
<dataSource type="POOLED">
<property name="driver" value="${jdbc.driver}" />
<property name="url" value="${jdbc.url}" />
<property name="username" value="${jdbc.username}" />
<property name="password" value="${jdbc.password}" />
<property name="poolPingConnectionsNotUsedFor" value="290000"/>
<property name="poolPingQuery" value="SELECT COUNT(*) FROM RESORT"/>
<property name="poolPingEnabled" value="true"/>
</dataSource>
</environment>
</environments>
My code for opening the session looks like this:
SqlSession sqlSession = factory.openSession();
Object result = null;
try {
    QueryInfoMapper mapper = sqlSession.getMapper(QueryInfoMapper.class);
    result = mapper.queryInfoFromOpera(mybatisMapping);
} finally {
    sqlSession.close();
}
Because the class is application scoped, and an SqlSession cannot be used at application scope, I have to manage the SqlSession myself.
The log is
2019-04-11 15:30:35,773 INFO [stdout] (default task-60) Opening JDBC Connection
2019-04-11 15:30:41,860 INFO [stdout] (default task-57) Bad connection. Could not roll back
2019-04-11 15:30:41,861 INFO [stdout] (default task-57) Claimed overdue connection 962608913.
2019-04-11 15:30:41,861 INFO [stdout] (default task-57) A bad connection (962608913) was returned from the pool, getting another connection.
2019-04-11 15:30:41,895 INFO [stdout] (default task-57) Created connection 1812494479.
2019-04-11 15:30:41,895 INFO [stdout] (default task-57) Setting autocommit to false on JDBC Connection [oracle.jdbc.driver.T4CConnection#6c08788f]
2019-04-11 15:30:41,895 INFO [stdout] (default task-57) ==> Preparing: SELECT TRAVEL_AGENT_NAME FROM( SELECT TRAVEL_AGENT_NAME FROM OPERA.NAME_RESERVATION WHERE RESV_NAME_ID = ? ) WHERE ROWNUM = 1
2019-04-11 15:30:41,896 INFO [stdout] (default task-57) ==> Parameters: 288541(String)
2019-04-11 15:30:41,900 INFO [stdout] (default task-57) <== Columns: TRAVEL_AGENT_NAME
2019-04-11 15:30:41,900 INFO [stdout] (default task-57) <== Row: null
2019-04-11 15:30:41,900 INFO [stdout] (default task-57) <== Total: 1
2019-04-11 15:30:41,900 INFO [stdout] (default task-57) Resetting autocommit to true on JDBC Connection [oracle.jdbc.driver.T4CConnection#6c08788f]
2019-04-11 15:30:41,900 INFO [stdout] (default task-57) Closing JDBC Connection [oracle.jdbc.driver.T4CConnection#6c08788f]
2019-04-11 15:31:00,788 INFO [stdout] (default task-60) Bad connection. Could not roll back
2019-04-11 15:31:00,788 INFO [stdout] (default task-60) Claimed overdue connection 1228464923.
2019-04-11 15:31:00,788 INFO [stdout] (default task-60) A bad connection (1228464923) was returned from the pool, getting another connection.
2019-04-11 15:31:00,820 INFO [stdout] (default task-60) Created connection 265625885.
2019-04-11 15:31:00,820 INFO [stdout] (default task-60) Setting autocommit to false on JDBC Connection [oracle.jdbc.driver.T4CConnection#fd5211d]
2019-04-11 15:31:00,820 INFO [stdout] (default task-57) Returned connection 1812494479 to pool.
Looking at the log timestamps, the problem seems to happen while closing the connection (which ends the transaction here).
But it takes 9 or 19 seconds to close it, and the next log line is "Bad connection. Could not roll back". I can't locate the real cause, or which method takes so much time. This issue doesn't happen every time, only randomly.
I thought of setting <property name="poolMaximumActiveConnections" value="40" /> to increase the number of connections, but I'm not sure whether it would help.
What could cause the connection/transaction close to fail? How can I avoid it?
===========================
Update: I hit this issue again and the log looks somewhat different:
2019-04-13 15:42:31,812 INFO [stdout] (default task-86) Opening JDBC Connection
2019-04-13 15:42:35,493 INFO [stdout] (default task-62) Execution of ping query 'SELECT COUNT(*) FROM RESORT' failed: IO Error: Socket read timed out
2019-04-13 15:42:35,493 INFO [stdout] (default task-62) Connection 1963609369 is BAD: IO Error: Socket read timed out
2019-04-13 15:42:35,493 INFO [stdout] (default task-62) A bad connection (1963609369) was returned from the pool, getting another connection.
2019-04-13 15:42:35,493 INFO [stdout] (default task-62) Checked out connection 195963529 from pool.
2019-04-13 15:42:35,493 INFO [stdout] (default task-62) Testing connection 195963529 ...
2019-04-13 15:42:54,448 INFO [stdout] (default task-62) Execution of ping query 'SELECT COUNT(*) FROM RESORT' failed: IO Error: Socket read timed out
2019-04-13 15:42:54,448 INFO [stdout] (default task-62) Connection 195963529 is BAD: IO Error: Socket read timed out
2019-04-13 15:42:54,448 INFO [stdout] (default task-62) A bad connection (195963529) was returned from the pool, getting another connection.
2019-04-13 15:42:54,479 INFO [stdout] (default task-62) Created connection 741137137.
Btw, I'll change the ping sql to SELECT 1 FROM DUAL.
What could cause this socket read timed out?
I can see several problems here:
a potentially heavy ping query (as pointed out by beny23)
long close connection operation
incorrect behaviour of the mybatis connection pool
You definitely need to use SELECT 1 FROM DUAL as the ping query. Otherwise you are doing a not-so-cheap operation on every connection ping.
The long close and the IO Error: Socket read timed out suggest that there is either a network connectivity issue, an Oracle server availability issue, or both.
It makes sense to check Oracle's health at the time the issue happens. Does it respond to other queries? What is the CPU/IO/memory/swap usage, etc.? If the server is under very high load, it may simply not respond in time.
Checking the issues with network connectivity is a very broad topic. The most reliable (and also complex) way I know is to capture network traffic (with tools like tcpdump or WireShark) on both ends and compare them.
Then there's an issue with mybatis connection pool.
First of all some background about how mybatis connection pool works.
One important and not obvious thing is that the MyBatis connection pool implementation forcefully returns connections to the pool if they are used for too long. Here's the quote from the documentation:
poolMaximumCheckoutTime – This is the amount of time that a Connection can be "checked out" of the pool before it will be forcefully returned. Default: 20000ms (i.e. 20 seconds)
This means that if the application tries to open a new connection and all connections are busy, MyBatis will forcefully reclaim the oldest connection if it has been in use for more than 20 seconds (by default).
This by itself may be very unexpected behaviour if you have long-running queries. Another, and probably bigger, problem is how this is implemented in MyBatis: in order to grab a connection, the rollback of the transaction is done from the thread that requested the new connection (in the log above, thread default task-57 is holding the connection and thread default task-60 tries to get a connection from the pool).
This is a problem because the Oracle JDBC driver requires proper synchronization when a connection is accessed from multiple threads, and MyBatis does not do that:
Controlled serial access to a connection, such as that provided by connection caching, is both necessary and encouraged. However, Oracle strongly discourages sharing a database connection among multiple threads. Avoid allowing multiple threads to access a connection simultaneously. If multiple threads must share a connection, use a disciplined begin-using/end-using protocol.
So this failure to synchronize access to the shared resource (the connection) from multiple threads may cause all kinds of consistency problems, and I do not exclude the possibility that the problem with closing the connection is caused by the connection having gotten into an inconsistent state earlier because of that missing synchronization.
Increasing the pool size removes this problem for the given load as the situation when the pool is exhausted does not happen (or happens less frequently).
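If you stay on the built-in pool for now, here is a rough, illustrative sketch (assuming MyBatis's org.apache.ibatis.datasource.pooled.PooledDataSource; driver, URL, credentials and values are placeholders, not the poster's real ones) of raising those limits:

import org.apache.ibatis.datasource.pooled.PooledDataSource;

public class PoolTuning {
    public static PooledDataSource createDataSource() {
        PooledDataSource ds = new PooledDataSource(
                "oracle.jdbc.OracleDriver", "jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "pass");
        ds.setPoolMaximumActiveConnections(40);    // more headroom before the pool is exhausted
        ds.setPoolMaximumCheckoutTime(60000);      // allow 60s before a connection is forcefully reclaimed
        ds.setPoolPingEnabled(true);
        ds.setPoolPingQuery("SELECT 1 FROM DUAL"); // cheap Oracle ping instead of a COUNT(*)
        ds.setPoolPingConnectionsNotUsedFor(290000);
        return ds;
    }
}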
Note that concurrency issues are very tricky to reproduce, and a passing synthetic test gives you virtually no guarantee. This is a broad topic, so I recommend Goetz's Java Concurrency in Practice for details.
I would change the connection pool implementation, namely use https://github.com/swaldman/c3p0 or https://commons.apache.org/proper/commons-dbcp/ or https://brettwooldridge.github.io/HikariCP/.

Many threads created using C3P0 with Hibernate/Spring

I'm working on a project combining Hibernate and Spring in a Java web application, running on Tomcat under Linux. Because of the MySQL 8-hour timeout problem, we want to use C3P0 to manage a connection pool for our MySQL database.
But when we use it, numerous threads are created. I noticed this because on each request I print all the threads along with a memory status, which shows increasing memory usage and threads like these:
name: C3P0PooledConnectionPoolManager[identityToken->1hged7o8r13kpj7n1h3ycia|39c446]-HelperThread-#0 daemon: true group! main groupParent: system alive: true interrupted: false
name: C3P0PooledConnectionPoolManager[identityToken->1hged7o8r13kpj7n1h3ycia|17ec0e8]-AdminTaskTimer daemon: true group! main groupParent: system alive: true interrupted: false
After enough time, it can produce more than 500 threads like these.
Here is my Hibernate.cfg.xml:
<property name="connection.provider_class">
org.hibernate.connection.C3P0ConnectionProvider</property>
<property name="hibernate.c3p0.acquire_increment">1</property>
<property name="hibernate.c3p0.idle_test_period">5</property>
<property name="hibernate.c3p0.max_size">100</property>
<property name="hibernate.c3p0.max_statements">100</property>
<property name="hibernate.c3p0.min_size">10</property>
<property name="hibernate.c3p0.timeout">5</property>
<property name="hibernate.dialect">org.hibernate.dialect.MySQLDialect</property>
<property name="hibernate.connection.driver_class">com.mysql.jdbc.Driver</property>
<property name="hibernate.connection.url">jdbc:mysql://localhost:3306/myBase</property>
<property name="hibernate.connection.username">root</property>
<property name="hibernate.connection.password"></property>
<property name="hibernate.hbm2ddl.auto">update</property>
<property name="hibernate.default_schema">myProject</property>
<property name="hibernate.query.factory_class">org.hibernate.hql.classic.ClassicQueryTranslatorFactory</property>
<property name="show_sql">false</property>
<property name="cache.provider_class">
org.hibernate.cache.NoCacheProvider
</property>
I also tried adding a c3p0.properties file, but apart from reducing the number of helper threads, it doesn't remove the unused threads:
c3p0.maxStatements=5
c3p0.maxIdleTime=10
c3p0.numHelperThreads=1
c3p0.testConnectionOnCheckout=true
c3p0.preferredTestQuery=SELECT 1
c3p0.initialPoolSize=1
c3p0.minPoolSize=1
c3p0.maxPoolSize=10
c3p0.acquireIncrement=1
c3p0.idleConnectionTestPeriod=1
Does anyone have an idea of why this happen and how to solve this problem?
Thanks a lot.
if you are seeing a multiplication of c3p0 helper and timer threads, you are somehow creating a multitude of c3p0 DataSources when you want there to be just one. sometimes this happens if you are hot-reloading your app but forgetting to close() your old c3p0 DataSource when you recycle.
effectively it looks like you are "leaking" DataSources. you need to figure out why/where this is happening. for some clues, check out your logs for c3p0 DataSource initialization messages at INFO level. Search for the string "Initializing c3p0 pool", for example.
good luck!
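As an illustration of "just one DataSource" (a sketch only; HibernateUtil is the usual naming convention, not code from the question), build the SessionFactory exactly once and close it on shutdown so the c3p0 helper/timer threads don't accumulate:

import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public final class HibernateUtil {
    // Built exactly once. Every additional buildSessionFactory() call would create
    // another C3P0ConnectionProvider, i.e. another pool with its own helper threads.
    private static final SessionFactory SESSION_FACTORY =
            new Configuration().configure().buildSessionFactory();

    private HibernateUtil() {}

    public static SessionFactory getSessionFactory() {
        return SESSION_FACTORY;
    }

    // Call on application shutdown (e.g. from ServletContextListener.contextDestroyed)
    // so the c3p0 pool and its threads are released.
    public static void shutdown() {
        SESSION_FACTORY.close();
    }
}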
OK, I found a combination of properties that solves my problem, keeping in mind that I don't need many connections at a time:
c3p0.maxStatements=5
c3p0.maxIdleTime=10
c3p0.numHelperThreads=3
c3p0.testConnectionOnCheckout=true
c3p0.preferredTestQuery=SELECT 1
c3p0.initialPoolSize=1
c3p0.minPoolSize=1
c3p0.maxPoolSize=1
c3p0.acquireIncrement=1
c3p0.idleConnectionTestPeriod=1
c3p0.maxAdministrativeTaskTime=1
Thanks to everyone
I would expect it to create a number of threads proportional to c3p0.minPoolSize
and c3p0.maxPoolSize and your maximum is 10.
http://www.mchange.com/projects/c3p0/#other_ds_configuration
"numHelperThreads and maxAdministrativeTaskTime help to configure the behavior of DataSource thread pools. By default, each DataSource has only three associated helper threads. If performance seems to drag under heavy load, or if you observe via JMX or direct inspection of a PooledDataSource, that the number of "pending tasks" is usually greater than zero, try increasing numHelperThreads. maxAdministrativeTaskTime may be useful for users experiencing tasks that hang indefinitely and "APPARENT DEADLOCK" messages. (See Appendix A for more.) "
numHelperThreads defines how many threads per DataSource are used, therefore indeed you will have 10 threads with numHelperThreads=1.
The only way to make sure C3P0 consumes only one Thread is to set c3p0.minPoolSize
and c3p0.maxPoolSize to 1 but this defeats the purpose of connection pooling.

PostgreSQL + Hibernate + C3P0 = FATAL: sorry, too many clients already

I have the following code
Configuration config = new Configuration().configure();
config.buildMappings();
serviceRegistry = new ServiceRegistryBuilder().applySettings(config.getProperties()).buildServiceRegistry();
SessionFactory factory = config.buildSessionFactory(serviceRegistry);
Session hibernateSession = factory.openSession();
Transaction tx = hibernateSession.beginTransaction();
ObjectType ot = (ObjectType)hibernateSession.merge(someObj);
tx.commit();
return ot;
hibernate.cfg.xml contains:
<session-factory>
<property name="connection.url">jdbc:postgresql://127.0.0.1:5432/dbase</property>
<property name="hibernate.dialect">org.hibernate.dialect.PostgreSQLDialect</property>
<property name="connection.driver_class">org.postgresql.Driver</property>
<property name="connection.username">username</property>
<property name="connection.password">password</property>
<property name="connection.provider_class">org.hibernate.connection.C3P0ConnectionProvider</property>
<property name="hibernate.c3p0.acquire_increment">1</property>
<property name="hibernate.c3p0.min_size">5</property>
<property name="hibernate.c3p0.max_size">20</property>
<property name="hibernate.c3p0.max_statements">50</property>
<property name="hibernate.c3p0.timeout">300</property>
<property name="hibernate.c3p0.idle_test_period">3000</property>
<property name="hibernate.c3p0.acquireRetryAttempts">1</property>
<property name="hibernate.c3p0.acquireRetryDelay">250</property>
<property name="hibernate.show_sql">true</property>
<property name="hibernate.use_sql_comments">true</property>
<property name="hibernate.transaction.factory_class">org.hibernate.transaction.JDBCTransactionFactory</property>
<property name="hibernate.current_session_context_class">thread</property>
<mapping class="...." />
</session-factory>
After a few seconds and some successful inserts, the following exception appears:
org.postgresql.util.PSQLException: FATAL: sorry, too many clients already
at org.postgresql.core.v3.ConnectionFactoryImpl.doAuthentication(ConnectionFactoryImpl.java:291)
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:108)
at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:66)
at org.postgresql.jdbc2.AbstractJdbc2Connection.<init>(AbstractJdbc2Connection.java:125)
at org.postgresql.jdbc3.AbstractJdbc3Connection.<init>(AbstractJdbc3Connection.java:30)
at org.postgresql.jdbc3g.AbstractJdbc3gConnection.<init>(AbstractJdbc3gConnection.java:22)
at org.postgresql.jdbc4.AbstractJdbc4Connection.<init>(AbstractJdbc4Connection.java:30)
at org.postgresql.jdbc4.Jdbc4Connection.<init>(Jdbc4Connection.java:24)
at org.postgresql.Driver.makeConnection(Driver.java:393)
at org.postgresql.Driver.connect(Driver.java:267)
at com.mchange.v2.c3p0.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:135)
at com.mchange.v2.c3p0.WrapperConnectionPoolDataSource.getPooledConnection(WrapperConnectionPoolDataSource.java:182)
at com.mchange.v2.c3p0.WrapperConnectionPoolDataSource.getPooledConnection(WrapperConnectionPoolDataSource.java:171)
at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool$1PooledConnectionResourcePoolManager.acquireResource(C3P0PooledConnectionPool.java:137)
at com.mchange.v2.resourcepool.BasicResourcePool.doAcquire(BasicResourcePool.java:1014)
at com.mchange.v2.resourcepool.BasicResourcePool.access$800(BasicResourcePool.java:32)
at com.mchange.v2.resourcepool.BasicResourcePool$AcquireTask.run(BasicResourcePool.java:1810)
at com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread.run(ThreadPoolAsynchronousRunner.java:547)
12:24:19.151 [ Thread-160] WARN internal.JdbcServicesImpl - HHH000342: Could not obtain connection to query metadata : Connections could not be acquired from the underlying database!
12:24:19.151 [ Thread-160] INFO dialect.Dialect - HHH000400: Using dialect: org.hibernate.dialect.PostgreSQLDialect
12:24:19.151 [ Thread-160] INFO internal.LobCreatorBuilder - HHH000422: Disabling contextual LOB creation as connection was null
12:24:19.151 [ Thread-160] INFO internal.TransactionFactoryInitiator - HHH000268: Transaction strategy: org.hibernate.engine.transaction.internal.jdbc.JdbcTransactionFactory
12:24:19.151 [ Thread-160] INFO ast.ASTQueryTranslatorFactory - HHH000397: Using ASTQueryTranslatorFactory
12:24:19.151 [ Thread-160] INFO hbm2ddl.SchemaUpdate - HHH000228: Running hbm2ddl schema update
12:24:19.151 [ Thread-160] INFO hbm2ddl.SchemaUpdate - HHH000102: Fetching database metadata
12:24:19.211 [Runner$PoolThread-#0] WARN resourcepool.BasicResourcePool - com.mchange.v2.resourcepool.BasicResourcePool$AcquireTask#ee4084 -- Acquisition Attempt Failed!!! Clearing pending acquires. While trying to acquire a needed new resource, we failed to succeed more than the maximum number of allowed acquisition attempts (1). Last acquisition attempt exception:
org.postgresql.util.PSQLException: FATAL: sorry, too many clients already
at org.postgresql.core.v3.ConnectionFactoryImpl.doAuthentication(ConnectionFactoryImpl.java:291)
It seems that Hibernate doesn't release the connection. But calling hibernateSession.close() causes a "Session is closed" exception, because tx.commit() has already been called.
I'm not quite sure what's going on here, but I'd recommend you not set hibernate.c3p0.acquireRetryAttempts to 1. First, that renders your next setting, hibernate.c3p0.acquireRetryDelay irrelevant -- that sets the length of time between retry attempts, but if there is only one attempt (ok, the param name is misleading, it sets the total number of tries), there are no retries. The effect of your settings is simply to have the pool try to fetch a Connection whenever a client comes in, then throw an Exception to clients immediately if that fails. It doesn't at all limit the number of Connections the pool will try to acquire (unless you set breakOnAcquireFailure to true, in which case, with your settings, any failure to acquire a Connection would invalidate the whole pool).
I share sola's concern about your lack of reliable resource cleanup. If, under your settings, commit() means close() (and you are not allowed to call close explicitly? that seems bad), then it is commit that should be in the finally block (but commit in a finally block also seems bad, sometimes you don't want to commit). Whatever the issue with close/commit, with the code you have, occasional Exceptions between openSession and commit will lead to Connection leaks.
But that should not be the cause of your too-many-open-Connections problem. If you leak Connections, you'll find that the Connection pool eventually freezes (as maxPoolSize Connections are checked out forever due to the leaks). You'd only have 25 open Connections. Something else is going on. Try reviewing your logs. Is more than one Connection pool somehow being initialized? (c3p0 dumps config information at INFO level on pool init, so if multiple pools are getting opened, you should see multiple messages. Alternatively, you can inspect running c3p0 pools via JMX, to see whether/why more than 25 Connections have been opened.)
Good luck!
I found the cause of why c3p0 behaved this way. The issue was quite trivial...
This part of code:
Configuration config = new Configuration().configure();
config.buildMappings();
serviceRegistry = new ServiceRegistryBuilder().applySettings(config.getProperties()).buildServiceRegistry();
SessionFactory factory = config.buildSessionFactory(serviceRegistry);
was executed multiple times. Thank you Steve for the tip.
I suggest using a try-catch-finally block and closing the session in the finally block, i.e.:
try {
    tx.commit();
} catch (HibernateException e) {
    handleException(e);
} finally {
    hibernateSession.close();
}
Also, the max_connections property in postgresql.conf is 100 by default. Increase it if you need to.
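Putting that together with the snippet from the question (same factory/someObj names; an illustrative sketch, not the poster's code), a fuller version of the transaction handling commits inside the try, rolls back on failure, and always closes the session:

Session hibernateSession = factory.openSession();
Transaction tx = null;
try {
    tx = hibernateSession.beginTransaction();
    ObjectType ot = (ObjectType) hibernateSession.merge(someObj);
    tx.commit();
    return ot;
} catch (HibernateException e) {
    if (tx != null) {
        tx.rollback();
    }
    throw e;
} finally {
    hibernateSession.close(); // always return the connection to the pool
}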

Spring JDBCTemplate other MySQL datasource than apache commons?

I am using Spring JdbcTemplate to perform SQL operations on an Apache Commons data source (org.apache.commons.dbcp.BasicDataSource), and when the service has been up and running too long, I end up getting this exception:
org.springframework.dao.RecoverableDataAccessException: StatementCallback; SQL [SELECT * FROM vendor ORDER BY name]; The last packet successfully received from the server was 64,206,061 milliseconds ago. The last packet sent successfully to the server was 64,206,062 milliseconds ago. is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem.; nested exception is com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: The last packet successfully received from the server was 64,206,061 milliseconds ago. The last packet sent successfully to the server was 64,206,062 milliseconds ago. is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem.
at org.springframework.jdbc.support.SQLExceptionSubclassTranslator.doTranslate(SQLExceptionSubclassTranslator.java:98)
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:72)
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:80)
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:406)
at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:455)
at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:463)
at com.cable.comcast.neto.nse.taac.dao.VendorDao.getAllVendor(VendorDao.java:25)
at com.cable.comcast.neto.nse.taac.controller.RemoteVendorAccessController.requestAccess(RemoteVendorAccessController.java:78)
I have tried adding 'autoReconnect=true' to the connection string, but the problem still occurs. Is there another data source I can use that will manage reconnecting for me?
BasicDataSource can manage keeping the connections alive for you. You need to set the following properties:
minEvictableIdleTimeMillis = 120000 // Two minutes
testOnBorrow = true
timeBetweenEvictionRunsMillis = 120000 // Two minutes
minIdle = (some acceptable number of idle connections for your server)
These configure the data source to continually test your connections, and to expire and remove them if they become stale. There are a number of other properties on BasicDataSource that you may want to look into as well to tweak your connection-pooling performance. I've run into some strange database-access problems in the past that all came down to how the connection pool was configured.
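A sketch of those settings in plain Java on BasicDataSource (not Spring XML; a validationQuery is added because testOnBorrow/testWhileIdle need one, and minIdle=2 is just an example value):

import org.apache.commons.dbcp.BasicDataSource;

public class KeepAliveDataSource {
    public static BasicDataSource configure(BasicDataSource ds) {
        ds.setValidationQuery("SELECT 1");            // required for the tests below
        ds.setTestOnBorrow(true);                     // validate before handing a connection out
        ds.setTestWhileIdle(true);                    // also validate idle connections
        ds.setTimeBetweenEvictionRunsMillis(120000);  // run the evictor every two minutes
        ds.setMinEvictableIdleTimeMillis(120000);     // evict connections idle for two minutes
        ds.setMinIdle(2);                             // but keep a couple warm
        return ds;
    }
}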
You can try C3P0:
http://sourceforge.net/projects/c3p0/
<bean id="dataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource" destroy-method="close">
<property name="user" value="${db.username}"/>
<property name="password" value="${db.password}"/>
<property name="driverClass" value="${db.driverClassName}"/>
<property name="jdbcUrl" value="${db.url}"/>
<property name="initialPoolSize" value="0"/>
<property name="maxPoolSize" value="1"/>
<property name="minPoolSize" value="1"/>
<property name="acquireIncrement" value="1"/>
<property name="acquireRetryAttempts" value="0"/>
<property name="idleConnectionTestPeriod" value="600"/> <!--in seconds-->
</bean>
greetings,
pacovr
