I'm developing a web app on Tomcat 8 with Maven. I'm using c3p0 to handle connections on the main thread and on two other concurrent threads; my connection manager class asks a DataSource singleton class I've implemented for synchronized connections, like so:
public synchronized Connection getConnection() {
    try {
        return cpds.getConnection();
    } catch (SQLException ex) {
        logger.error("Error while issuing a pooled connection", ex);
    }
    return null;
}
but when I try to use these connections, they either get interrupted:
09:47:17.164 [QuartzScheduler_Worker-4] ERROR com.myapp.providers.DataSource - Error while issuing a pooled connection
java.sql.SQLException: An SQLException was provoked by the following failure: java.lang.InterruptedException
at com.mchange.v2.sql.SqlUtils.toSQLException(SqlUtils.java:106) ~[c3p0-0.9.1.2.jar:0.9.1.2]
at com.mchange.v2.sql.SqlUtils.toSQLException(SqlUtils.java:65) ~[c3p0-0.9.1.2.jar:0.9.1.2]
at com.mchange.v2.sql.SqlUtils.toSQLException(SqlUtils.java:62) ~[c3p0-0.9.1.2.jar:0.9.1.2]
at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool.checkoutPooledConnection(C3P0PooledConnectionPool.java:531) ~[c3p0-0.9.1.2.jar:0.9.1.2]
at com.mchange.v2.c3p0.impl.AbstractPoolBackedDataSource.getConnection(AbstractPoolBackedDataSource.java:128) ~[c3p0-0.9.1.2.jar:0.9.1.2]
or close mid-transaction, breaking any statements and result sets that are in use at the time.
I'm configuring the DataSource object like so:
cpds = new ComboPooledDataSource();
cpds.setDriverClass("oracle.jdbc.driver.OracleDriver");
cpds.setJdbcUrl("jdbc:oracle:thin:@xx.xxx.xxx.xxx:1521:XE");
cpds.setUser("username");
cpds.setPassword("password");
// database connection properties
cpds.setInitialPoolSize(10);
cpds.setAcquireIncrement(3);
cpds.setMaxPoolSize(100);
cpds.setMinPoolSize(15);
cpds.setMaxStatements(75);
// connection pool preferences
cpds.setIdleConnectionTestPeriod(60);
cpds.setMaxIdleTime(30000);
cpds.setAutoCommitOnClose(false);
cpds.setPreferredTestQuery("SELECT 1 FROM DUAL");
cpds.setTestConnectionOnCheckin(false);
cpds.setTestConnectionOnCheckout(false);
cpds.setAcquireRetryAttempts(30);
cpds.setAcquireRetryDelay(1000);
cpds.setBreakAfterAcquireFailure(false);
I've also written a small test method that runs in a loop and queries the database n times, but that works fine.
c3p0-0.9.1.2 is very, very old; please consider upgrading to 0.9.5.1, the current production version.
The problem is both clear and not so clear. The clear part is that something is calling interrupt() on client Threads that are waiting to acquire Connections. The not-so-clear part is who is doing that and why.
A guess is that Tomcat itself is doing that because the client Threads are hung too long. If the Threads are hanging at getConnection(), that could be due to a Connection leak and pool exhaustion. We see above how you acquire Connections. Are you vigilant about ensuring that they are reliably close()ed in finally blocks?
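For example, on Java 7+ a try-with-resources block guarantees that the Connection goes back to the pool even when the query throws; a minimal sketch (the query is illustrative, logger is the same one used above):

try (Connection con = cpds.getConnection();
     PreparedStatement ps = con.prepareStatement("SELECT 1 FROM DUAL");
     ResultSet rs = ps.executeQuery()) {
    while (rs.next()) {
        // process results; all three resources are closed automatically,
        // in reverse order, when this block exits
    }
} catch (SQLException ex) {
    logger.error("Query failed", ex);
}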
A thing you might try is to set a checkoutTimeout, e.g.
cpds.setCheckoutTimeout( 5000 ); // 5 secs
This won't actually solve the problem if Connection checkouts are hanging. But rather than a problem provoked by mysterious interrupts, you'll see c3p0 TimeoutExceptions instead. That will verify that the issue is long hangs on checkout, though, which would most likely be due to pool exhaustion, either from a Connection leak (missing calls to close()), or simply from a maxPoolSize value too low for your load.
If there does seem to be a Connection leak, please see unreturnedConnectionTimeout and debugUnreturnedConnectionStackTraces for help tracking it down. See also "Configuring to Debug and Workaround Broken Client Applications"
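A minimal sketch of those two settings alongside the checkoutTimeout above (the values are illustrative, not recommendations):

// If a Connection stays checked out for more than 30 seconds, c3p0 assumes it
// leaked, destroys it, and logs the stack trace of the checkout that took it.
cpds.setUnreturnedConnectionTimeout(30);
cpds.setDebugUnreturnedConnectionStackTraces(true);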
Related
I have a WebSphere application server with an MSSQL database connection that has 10 connections in the pool. When my worker thread runs, I have a process that runs the following:
Pseudocode:
for (String item : items) {
    Connection connection = null;
    Statement statement = null;
    try {
        connection = getConnection();
        statement = connection.createStatement();
        String sql = "exec someStoredProcedure '" + item + "';";
        statement.execute(sql);
    } finally {
        // real code includes null checks and try/catch for errors
        statement.close();
        connection.close();
    }
}
My problem is that if my loop is larger than 10, it hangs until it hits the timeout and errors with "ConnectionWaitTimeoutException: J2CA1010E: Connection not available; timed out waiting for 180 seconds".
I would think that closing the statements and connections would let me reuse the connections from the pool. The connections do get closed/collected after the thread finishes. I've updated my code to reuse the same connection through the for loop, and I can run the process rapid-fire over and over without problems, since each thread seems to clean up after itself. Any idea what's going on or how to resolve this? I'm worried that in the future I'll have a process that needs more than 10 threads over the course of running.
The behavior that you describe sounds inconsistent with how the connection pool is supposed to work. However, there are a number of important details that impact connection reuse which are not apparent from your writeup.
You are either using sharable or unsharable connections. You can find that out by looking at the resource reference that you used for the DataSource lookup. For example, @Resource with shareable=false, or a deployment descriptor resource reference with <res-sharing-scope>Unshareable</res-sharing-scope>, is unsharable; if you set true for the former or Shareable for the latter, or omit the attribute, then the connection is sharable. If you didn't use a resource reference, connections will typically be sharable, unless you explicitly overrode that with advanced configuration. A sketch of the annotation form appears below.
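For illustration, an unsharable resource reference declared via annotation might look like the following sketch (the JNDI name and the enclosing class are hypothetical):

import javax.annotation.Resource;
import javax.sql.DataSource;

public class WorkerBean {
    // shareable = false requests unsharable connection handles;
    // omit the attribute (or set it to true) for sharable ones.
    @Resource(name = "jdbc/myDataSource", shareable = false)
    private DataSource dataSource;
}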
If you are using an unsharable connection and the entire code block is not encompassed within a global transaction, then every connection.close will return the connection to the pool to become immediately reusable. If you are in a global transaction, then each connection will remain unavailable after close because there is outstanding work on it that still needs to be committed or rolled back.
If you are using a sharable connection and each of the connection requests match (same user/password, or absence thereof, for all), then you should keep getting the same underlying connection back and should not run out of connections. If, however, the connection requests do not match, then connections are kept around for the duration of the scope (which could be a transaction, or a request scope like a servlet boundary), and you will keep getting a new connection with each request until you run out.
If the above isn't enough to figure out the issue, add more detail to your scenario indicating the sharability, how you request each connection, and the placement of transaction boundaries and request boundaries, so that a more detailed answer for your specific scenario can be given.
I use the Hikari connection pool with the following settings:
HikariDataSource dataSource = new HikariDataSource();
dataSource.setMinimumIdle(0);
dataSource.setMaximumPoolSize(Integer.MAX_VALUE);
dataSource.setJdbcUrl(jdbcConnectionString);
dataSource.setConnectionTestQuery("select 1");
dataSource.setIdleTimeout(TimeUnit.SECONDS.toMillis(60));
dataSource.getConnection();
After getConnection(), Hikari opens 2 connections to the instance but puts just one connection in the pool. How can I fix it? The Hikari version is 3.4.0.
I found the answer: Hikari creates the first connection in its checkFailFast method. Update: the checkFailFast logic is skipped if initializationFailTimeout < 0, which solved it for me, as in the sketch below.
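For reference, a minimal sketch of that setting applied to the configuration above:

// A negative value skips the fail-fast check entirely, so the pool starts
// without opening (and, with minimumIdle=0, immediately discarding) a
// validation connection.
dataSource.setInitializationFailTimeout(-1);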
After getConnection(), Hikari opens 2 connections to the instance but puts just one connection in the pool. How can I fix it?
There is nothing to fix in this behavior. It simply means that two connections were opened and one of them was closed.
The reason the second connection was closed is that you set setMinimumIdle(0), i.e. no idle connections are maintained in the pool and all idle connections are closed.
If you want to see both connections in the pool, simply set setMinimumIdle(1). After calling DataSource.getConnection() there will be two connections in the pool: one yours and one idle.
If you don't want to open the second connection at all, set
config.setMinimumIdle( 1 );
config.setMaximumPoolSize( 1 );
But think twice about why you would use a connection pool with only one connection.
You can still increase both parameters later, while the pool is running:
HikariConfigMXBean bn = DataSource.ds.getHikariConfigMXBean();
bn.setMaximumPoolSize(10);
bn.setMinimumIdle(10);
This will (not instantly) open 9 additional connections to the database.
Note that when maximumPoolSize == minimumIdle, the number of connections in the pool remains stable; no connections are opened or closed, which is probably the behavior you want to observe.
Tested with Hikari 3.4.0 and Oracle 12.2.
What is the best/optimal way to monitor a connection to the database?
I'm writing a Swing application. What I want it to do is check the connection to the database periodically. I've tried something like this:
org.hibernate.Session session = null;
try {
    System.out.println("Check session!");
    session = HibernateUtil.getSessionFactory().openSession();
} catch (HibernateException ex) {
    // swallowed
} finally {
    if (session != null) {
        session.close();
    }
}
But that doesn't work.
A second question that comes to mind is how this session closing will affect other queries.
Use a connection pool like c3p0 or DBCP. You can configure such a pool to monitor the connections in the pool: before passing a connection to Hibernate, after receiving it back, or periodically. If a connection is broken, the pool will transparently close it, discard it, and open a new one, without you noticing.
Database connection pools are better suited for multi-user, database-heavy applications where several connections are open at the same time, but I don't think a pool is overkill here. Pools work just fine bound to a maximum of 1 connection.
UPDATE: every time you try to access the database, Hibernate will ask the DataSource (connection pool). If no active connection is available (e.g. because the database is down), this will throw an exception. If you want to know in advance that the database is unavailable (even when the user is not doing anything), you unfortunately need a background thread checking the database once in a while.
However, merely opening a session might not be enough in some configurations. You'd better run some dummy, cheap query like SELECT 1 (in raw JDBC), as in the sketch below.
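A minimal sketch of such a background check, assuming a javax.sql.DataSource is available (the interval and query are illustrative; on Oracle you would use SELECT 1 FROM DUAL):

import java.sql.Connection;
import java.sql.Statement;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import javax.sql.DataSource;

public class DbHealthCheck {
    public static void start(DataSource dataSource) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(() -> {
            // Catch exceptions here: an uncaught one would silently
            // cancel all future runs of this periodic task.
            try (Connection con = dataSource.getConnection();
                 Statement st = con.createStatement()) {
                st.execute("SELECT 1"); // cheap dummy query
                System.out.println("Database reachable");
            } catch (Exception ex) {
                System.err.println("Database unreachable: " + ex.getMessage());
            }
        }, 0, 30, TimeUnit.SECONDS);
    }
}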
Using c3p0 for my connection pool, the thread calling c3p0 appears to be terminated or left in an undefined state after the connection pool exhausts its retry attempts.
The connection pool is defined as such:
private val pool = new ComboPooledDataSource
pool.setDriverClass(config("database.driverClass").as[String])
pool.setJdbcUrl(config("database.jdbcUrl").as[String])
pool.setUser(config("database.user").as[String])
pool.setPassword(config("database.password").as[String])
pool.setAcquireRetryAttempts(5)
The client code calls getConnection, which blocks briefly while c3p0 spins through the connection retry attempts. The strange thing is that it doesn't really seem to return from this call. From the documentation, I expected an Exception to be thrown:
If all attempts fail, any clients waiting for Connections from the
DataSource will see an Exception, indicating that a Connection
could not be acquired
It's very bizarre. It definitely does not return, and it does not throw an Exception. The client code calls getConnection from a task scheduled on an Executors.newSingleThreadScheduledExecutor that runs every 5 seconds. When the call to getConnection seems to evaporate, the scheduled executor thread also stops executing entirely.
In this case I'm intentionally leaving the database off so that I can figure out what's going on here. Any ideas?
UPDATE: Strangely enough, if I don't use a scheduled executor and instead run it in my own monitor thread, #getConnection performs as expected.
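One plausible explanation, offered as a guess rather than a confirmed diagnosis: ScheduledExecutorService suppresses all subsequent executions of a periodic task once one execution throws, and the exception goes into the task's Future rather than to any log, so a throwing getConnection looks exactly like a call that evaporates and a scheduler that stops. Wrapping the task body makes the exception visible; a minimal Java sketch (pool here stands in for the DataSource above):

scheduler.scheduleAtFixedRate(() -> {
    try (Connection con = pool.getConnection()) { // may throw while the DB is down
        // ... use the connection ...
    } catch (Throwable t) {
        // Without this catch, the first SQLException would silently cancel
        // every future run of this task.
        t.printStackTrace();
    }
}, 0, 5, TimeUnit.SECONDS);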
I have a scenario, and the question follows.
The application server has two connection pools to the DB, A and B:
A points to -> DatabaseA -> has 128 connections
A has Stored Procedures which access tables residing in DatabaseB over the DB link
B points to -> DatabaseB -> has 36 connections
Now let's say that Java code calls a stored proc in DatabaseA using connection pool A. This stored proc gets its data over the DB link from DatabaseB.
Question:
Based on this scenario, if we get connection-closed errors on the front end, is it viable to say that even though Java is calling the SP (in DatabaseA) from pool A (128 connections), the SP is bringing the data from DatabaseB, which has a smaller pool (36)?
Basically I want to know: when the data is brought over the DB link like this... does it take away from the 36 connections assigned to pool B pointing to DatabaseB?
Exact Exception
Exact exception I get is: --- Cause: java.sql.SQLException: Closed Connection
Partial stack trace:
Caused by: java.sql.SQLException: Closed Connection
    at com.ibatis.sqlmap.engine.mapping.statement.GeneralStatement.executeQueryWithCallback(GeneralStatement.java:185)
    at com.ibatis.sqlmap.engine.mapping.statement.GeneralStatement.executeQueryForList(GeneralStatement.java:123)
    at com.ibatis.sqlmap.engine.impl.SqlMapExecutorDelegate.queryForList(SqlMapExecutorDelegate.java:614)
    at com.ibatis.sqlmap.engine.impl.SqlMapExecutorDelegate.queryForList(SqlMapExecutorDelegate.java:588)
    at com.ibatis.sqlmap.engine.impl.SqlMapSessionImpl.queryForList(SqlMapSessionImpl.java:118)
    at org.springframework.orm.ibatis.SqlMapClientTemplate$3.doInSqlMapClient(SqlMapClientTemplate.java:268)
    at org.springframework.orm.ibatis.SqlMapClientTemplate.execute(SqlMapClientTemplate.java:193)
    at org.springframework.orm.ibatis.SqlMapClientTemplate.executeWithListResult(SqlMapClientTemplate.java:219)
    at org.springframework.orm.ibatis.SqlMapClientTemplate.queryForList(SqlMapClientTemplate.java:266)
Also, I am using iBatis... so I don't have try/catch/finally blocks.
The stored procedure is running in the database; when it makes the connection to the other database, it makes a direct connection and doesn't go through the app server's pool. In fact, it could make a connection to any database that is linked to A, regardless of whether there's a connection pool to that database maintained by the app server.
This exception indicates a resource leak, i.e. the JDBC code is not properly closing connections in the finally block (to ensure they are closed even in case of an exception), or the connection is being shared among multiple threads. If two threads share the same connection from the pool and one thread closes it, then this exception will occur when the other thread uses the connection.
The JDBC code should be written so that connections (and statements and result sets) are acquired and closed (in reverse order) in the very same method block, e.g.
Connection connection = null;
// ...
try {
    connection = database.getConnection();
    // ...
} finally {
    // ...
    if (connection != null) try { connection.close(); } catch (SQLException logOrIgnore) {}
}
Another possible cause is that the pool is holding connections idle for too long without testing/verifying them before releasing them. This is configurable in a decent connection pool; consult its documentation. A sketch follows below.
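For example, if the pool were c3p0 (whose settings appear earlier on this page; the values here are illustrative), the relevant knobs would look like:

// Test idle connections every 60 seconds, and validate each connection on
// checkout so a dead one is replaced instead of handed to the application.
cpds.setIdleConnectionTestPeriod(60);
cpds.setTestConnectionOnCheckout(true);
cpds.setPreferredTestQuery("SELECT 1 FROM DUAL");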
"Basically I want to know when the data is brought over the DB link like this...does it take away from 36 connections assigned to pool B pointint to DatabaseB?"
No. The database server will make a distinct connection to the other database server irrespective of any connection pool.
I have to suffer a firewall that cuts off connections after a period of inactivity, so I see this error quite a lot. Look into dbms_session.close_database_link, since the database link connection would generally remain for the duration of the session (and since you have a connection pool, that session probably sits around for a very long time). A sketch of invoking it from JDBC follows.
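A hedged sketch of calling it through JDBC (the link name is hypothetical, and the call fails if the link is still in use by an open transaction):

// Close the named DB link on this session so an idle link connection
// can't be silently severed by the firewall and break the next call.
try (CallableStatement cs = connection.prepareCall(
        "{ call dbms_session.close_database_link(?) }")) {
    cs.setString(1, "DATABASEB_LINK"); // hypothetical link name
    cs.execute();
}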