I am using jTDS to connect to MS SQL Server 2005, with c3p0 as the DB connection pool, configured through Spring.
I am intermittently getting an SQLException: "Invalid state, the ResultSet object is closed" in a Groovy script to which I have passed a reference to the connection pool. The script is executed by a timer every so often. By intermittently, I mean that the script works perfectly 99% of the time, but when it fails, it will fail a couple of times in a row, then go back to working correctly, picking up where it left off. All of the critical work is done in a transaction, pulling messages off a message queue.
Logic below:
// passed into the Groovy context
DataSource source = source;
Connection conn = source.getConnection();
...
// Have to omit proprietary DB stuff... sorry...
PreparedStatement fooStatement = conn.prepareStatement("INSERT INTO foo (x,y,z) VALUES (?,?,?) select SCOPE_IDENTITY();");
ResultSet identRes = fooStatement.executeQuery();
// This is where the exception is thrown.
identRes.next();
...
try {
    log.info("Returning SQL connection.");
    conn.close();
} catch (Exception ex) {}
There is a separate timer thread that runs a similar Groovy script, in which we have not seen this issue. That script uses similar calls to get the connection and close it.
Originally, we thought that the second script might have been grabbing the same connection off the pool, finishing first, and then closing the connection. But c3p0's documentation says that calling conn.close() should simply return the connection to the pool.
Has anyone else seen this, or am I missing something big here?
Thanks.
We solved this... c3p0 was configured to drop connections that were checked out for longer than 30 seconds; we did this to prevent deadlock in the database (we don't control the tuning). One of the transactions was taking horridly long to complete, c3p0 was dropping the connection, and the result was the "ResultSet is closed" error. Surprisingly, c3p0 was not logging the incident, so we didn't see it in the application's logs.
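For reference, the checkout-timeout behaviour described above corresponds to c3p0's unreturnedConnectionTimeout setting. A minimal sketch of the relevant configuration, shown programmatically rather than as the Spring XML that was actually used (the driver, URL, and credentials are placeholders, not the original setup):

import com.mchange.v2.c3p0.ComboPooledDataSource;
import javax.sql.DataSource;

public class PoolConfig {
    // Sketch only: URL and credentials are placeholders, not the original configuration.
    public static DataSource createPool() throws Exception {
        ComboPooledDataSource ds = new ComboPooledDataSource();
        ds.setDriverClass("net.sourceforge.jtds.jdbc.Driver");
        ds.setJdbcUrl("jdbc:jtds:sqlserver://dbhost:1433/mydb"); // placeholder
        ds.setUser("user");
        ds.setPassword("password");

        // Destroys connections checked out for longer than 30 seconds -- the
        // setting that was silently killing the long-running transaction above.
        ds.setUnreturnedConnectionTimeout(30);

        // Logs the stack trace of the code that checked the connection out,
        // so a dropped connection is at least visible in the logs.
        ds.setDebugUnreturnedConnectionStackTraces(true);
        return ds;
    }
}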
Related
I have a WebSphere application server with an MSSQL database connection and a connection pool of size 10. When my worker thread runs, I have a process that does the following:
Pseudo code
for (String item : items) {
    Connection connection = null;
    Statement statement = null;
    try {
        connection = getConnection();
        statement = connection.createStatement();
        String sql = "exec someStoredProcedure '" + item + "';";
        statement.execute(sql);
    } finally {
        // real code includes null checks and try/catch for errors
        statement.close();
        connection.close();
    }
}
My problem is that if my loop is larger than 10 iterations, it hangs until it hits the timeout and fails with "ConnectionWaitTimeoutException: J2CA1010E: Connection not available; timed out waiting for 180 seconds".
I would think that closing the statements and connections would allow me to reuse the connections from the pool. The connections do get closed/collected after the thread finishes. I've updated my code to reuse the same connection throughout the for loop (see the sketch below), and I can run the process rapid-fire over and over without problems, because each thread seems to clean up after itself. Any idea what's going on or how to resolve it? I'm worried that in the future I'll have a process that needs more than 10 threads over the course of a run.
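For reference, a minimal sketch of the single-connection variant described above, assuming the stored procedure name from the snippet; a CallableStatement replaces the string concatenation, which also avoids quoting problems:

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.SQLException;
import java.util.List;
import javax.sql.DataSource;

public class SingleConnectionLoop {
    // Sketch only: 'someStoredProcedure' is the procedure name from the snippet above;
    // the DataSource would come from the WebSphere JNDI lookup already in place.
    static void runAll(DataSource dataSource, List<String> items) throws SQLException {
        try (Connection connection = dataSource.getConnection();
             CallableStatement statement = connection.prepareCall("{call someStoredProcedure(?)}")) {
            for (String item : items) {
                statement.setString(1, item);
                statement.execute(); // one pooled connection shared across the whole loop
            }
        }
    }
}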
The behavior that you describe sounds inconsistent with how the connection pool is supposed to work. However, there are a number of important details that impact connection reuse which are not apparent from your writeup.
You are either using shareable or unshareable connections. You can find out which by looking at the resource reference that you used for the DataSource lookup. For example, @Resource with shareable=false, or a deployment descriptor resource reference with <res-sharing-scope>Unshareable</res-sharing-scope>, gives you unshareable connections, whereas setting the former to true, the latter to Shareable, or omitting the attribute entirely gives you shareable connections. If you didn't use a resource reference, connections will typically be shareable, unless you explicitly overrode that with advanced configuration.
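As an illustration of the two declaration styles described above (the JNDI names and the field are placeholders invented for the example, not taken from the question):

import javax.annotation.Resource;
import javax.sql.DataSource;

public class WorkerBean {
    // Unshareable resource reference via annotation; jdbc/myDS is a placeholder JNDI name.
    @Resource(name = "jdbc/myDSRef", lookup = "jdbc/myDS", shareable = false)
    private DataSource dataSource;

    /* The deployment-descriptor equivalent would be:
       <resource-ref>
           <res-ref-name>jdbc/myDSRef</res-ref-name>
           <res-type>javax.sql.DataSource</res-type>
           <res-sharing-scope>Unshareable</res-sharing-scope>
       </resource-ref>
    */
}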
If you are using an unshareable connection and the entire code block is not enclosed in a global transaction, then every connection.close() will return the connection to the pool, where it immediately becomes reusable. If you are in a global transaction, then each connection will remain unavailable after close() because there is outstanding work on it that still needs to be committed or rolled back.
If you are using a shareable connection and all of the connection requests match (the same user/password, or the absence thereof, for all of them), then you should keep getting the same underlying connection back and should not run out of connections. If, however, the connection requests do not match, then the connections are kept around for the duration of the scope (which could be a transaction, or a request scope such as a servlet boundary), and you will keep getting a new connection with each request until you run out.
If the above isn't enough to figure out the issue, add more detail to your scenario indicating the shareability, how you request each connection, and the placement of transaction and request boundaries, so that a more detailed answer for your specific scenario can be given.
/* A connection pool is created with 5 connections for each region.
   The code below gets a connection from the already-created pool for that region. */
Connection con = DatasourceClient.getDataSourceMap().get(region).getConnection();
OracleConnection oConn = con.unwrap(oracle.jdbc.OracleConnection.class);
Will the above code get two connections from the pool, and do I need to close both con and oConn?
I am getting pool-exhausted and connection-closed exceptions, and I have tried many things, including changing the pool properties.
So I just want to know what the above code is doing.
I tried closing both of the above connections, but it didn't make any difference.
I am using the Oracle JDBC template instead of Spring JDBC because my procedures take array values, which in some cases are only input, in some cases only output, and in others INOUT.
Can anyone help me with this please? Thank you.
No, it will get only a single connection out of the pool, which you then unwrap to its actual class.
However, you will need to call con.close() (and never oConn.close()) to return the connection to the pool. This is because the wrapper's close() doesn't actually close the connection; it returns it to the pool.
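A minimal sketch of that pattern; the DataSource parameter stands in for the per-region pool from the question, and only the wrapper connection is ever closed:

import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;
import oracle.jdbc.OracleConnection;

public class RegionalQuery {
    // Sketch only: close the wrapper (con), never the unwrapped OracleConnection.
    static void callProcedure(DataSource regionPool) throws SQLException {
        try (Connection con = regionPool.getConnection()) {
            OracleConnection oConn = con.unwrap(OracleConnection.class);
            // ... build Oracle ARRAY parameters with oConn and execute the procedure here ...
        } // con.close() runs here and returns the one pooled connection; oConn.close() is never called
    }
}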
I'm using the latest Derby, 10.11.1.1.
Doing something like this:
DriverManager.registerDriver(new org.apache.derby.jdbc.EmbeddedDriver());
java.sql.Connection connection = DriverManager.getConnection("jdbc:derby:filePath", ...);
Statement stmt = connection.createStatement();
stmt.setQueryTimeout(2); // should stop the query after 2 seconds, but it does nothing
stmt.executeQuery(strSql);
stmt.cancel(); // this would in fact run in another thread
I get exception "java.sql.SQLFeatureNotSupportedException: Caused by: ERROR 0A000: Feature not implemented: cancel"
Do you know if there is way how to make it work? Or is it really not implemented in Derby and I would need to use different embedded database? Any tip for some free DB, which I can use instead of derby and which would support SQL timeout?
As given in the Java docs:
void cancel() throws SQLException
Cancels this Statement object if both the DBMS and driver support aborting an SQL statement. This method can be used by one thread to cancel a statement that is being executed by another thread.
and it throws:
SQLFeatureNotSupportedException - if the JDBC driver does not support this method
You could go with MySQL.
There are also many other embedded databases available that you can look through:
embedded database
If you get "Feature not implemented: cancel" then that is definitive: cancel is not supported.
From this post by H2's author it looks like H2 supports two ways to timeout your queries, both through the JDBC API and through a setting on the JDBC URL.
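For what it's worth, a minimal sketch of the JDBC-API route with H2 (the in-memory URL, the query, and the 2-second limit are placeholders; the URL-setting variant mentioned in that post is not shown here):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class H2TimeoutDemo {
    public static void main(String[] args) throws Exception {
        // Placeholder in-memory database; a file-based URL works the same way.
        try (Connection connection = DriverManager.getConnection("jdbc:h2:mem:test");
             Statement stmt = connection.createStatement()) {
            stmt.setQueryTimeout(2); // H2 is reported to honour this, unlike embedded Derby
            try (ResultSet rs = stmt.executeQuery("SELECT 1")) {
                rs.next();
            }
        }
    }
}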
Actually, I found that there is a lock wait timeout in Derby as well; it is just set to 60 seconds by default, and I never had the patience to reach it :).
So the correct answer would be:
stmt.setQueryTimeout(2); truly seems not to work
stmt.cancel(); truly seems not to be implemented
But luckily the timeout in the database engine exists, and it is set to 60 seconds. See Derby deadlocks.
The timeout can be changed using this command:
statement.executeUpdate("CALL SYSCS_UTIL.SYSCS_SET_DATABASE_PROPERTY(" +
"'derby.locks.waitTimeout', '5')");
And it works :)
A console application executing under:
1). Multiple threads
2). Connection pooling (the number of database connections could range from 5 to 30) for Microsoft Access databases, using DBCP.
While executing this application at my end (I have not tested the database limit), it works fine. But whenever I try to run the same application on one of the other machines, it generates an error.
I'm wondering why this is happening, as the only difference here is the machine; it works perfectly at my end.
I don't know much about connection pooling, but it seems I have implemented whatever I have understood, as follows:
public class TestDatabases implements Runnable {
    public static Map<String, Connection> correctDatabases;

    @Override
    public void run() {
        // validating the databases using DBCP
        datasource.getConnection(); // obtaining the java.sql.Connection from the DataSource
        // if validated successfully, putting them in correctDatabases
    }
}
The above case is implemented using an ExecutorService whose size equals the number of databases.
Finally, I'm trying to put them into a static collection of type Map<String, Connection> and make use of it throughout the application. In other words, I'm trying to collect the connection string along with the Connection in a Map.
In other parts of my application I'm simply dealing with multiple threads coming in along with the connection URL. So, to perform any database operation I'm calling:
Connection con = TestDatabases.correctDatabases.get(connectUrl);
On that machine, this application works fine for around 5 databases, and the error is always generated when I try to fire a query using the above Connection (con) with stmt.executeQuery(query);
As I'm not able to reproduce this issue at my end, it seems something is going wrong with the connection pooling, or I have not configured my application to deal with connection pooling correctly.
Just for your information, I'm correctly closing the Connection in a finally block where my application terminates, and this application uses the Quartz Scheduler as well. For connection pooling, the following setUp method from the TestDatabases class is called:
public synchronized DataSource setUp() throws Exception {
    // load the JDBC driver class named in the properties file
    Class.forName(RestConnectionValidator.prop.getProperty("driverClass")).newInstance();
    log.debug("Class Loaded.");

    connectionPool = new GenericObjectPool();
    log.debug("Connection pool made.");
    connectionPool.setMaxActive(100);

    ConnectionFactory cf = new DriverManagerConnectionFactory(
            RestConnectionValidator.prop.getProperty("connectionUrl") + new String(get().toString().trim()),
            "", "");

    // registers itself with connectionPool as its factory; the reference itself is not used further
    PoolableConnectionFactory pcf =
            new PoolableConnectionFactory(cf, connectionPool, null, null, false, true);

    return new PoolingDataSource(connectionPool);
}
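For context, a sketch of how the returned DataSource would typically be consumed; every close() hands the connection back to the DBCP pool rather than closing it physically (the query is a placeholder):

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import javax.sql.DataSource;

public class PoolUsage {
    // Sketch only: the query is a placeholder, not one of the actual application queries.
    static void runQuery(DataSource pooledDataSource) throws SQLException {
        try (Connection con = pooledDataSource.getConnection();
             Statement stmt = con.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1")) { // placeholder query
            while (rs.next()) {
                // process the row
            }
        } // close() returns the connection to the DBCP pool instead of closing it
    }
}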
Following is the error I'm getting (on the other machine):
java.sql.SQLException: [Microsoft][ODBC Microsoft Access Driver] System resource exceeded.
Following is the database path:
jdbc:odbc:DRIVER= {Microsoft Access Driver (*.mdb, *.accdb)};DBQ=D:\\DataSources\\PR01.mdb
Each of those databases seems not to be very heavy (about 5 to 15 MB in total size).
So, I'm left with the following solutions:
1). Correct the connection pooling, or migrate to a newer pool such as c3p0, DBPool, or BoneCP.
2). Introduce a batching concept, in which I will schedule my application for each group of 4 databases. This could be very expensive to deal with, as at any time another schedule may also collapse.
I'm pretty sure that this is a Java-related error, but I can't fathom why.
I have just migrated to BoneCP, which solved my problem. I guess that due to the multi-threaded environment, DBCP was not providing the connection from the pool but was instead trying to hit the database again and again. Maybe I could have solved the DBCP issue, but migrating to BoneCP also provides a performance advantage.
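For completeness, a minimal sketch of the kind of BoneCP setup such a migration would involve; the driver class, URL handling, and pool sizes are placeholder assumptions, not the actual configuration:

import com.jolbox.bonecp.BoneCPConfig;
import com.jolbox.bonecp.BoneCPDataSource;
import javax.sql.DataSource;

public class BoneCPSetup {
    // Sketch only: driver class and pool sizes are placeholder values.
    public static DataSource createDataSource(String connectionUrl) throws ClassNotFoundException {
        // JDBC-ODBC bridge driver, assumed from the jdbc:odbc: URL shown above
        Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");

        BoneCPConfig config = new BoneCPConfig();
        config.setJdbcUrl(connectionUrl);
        config.setMinConnectionsPerPartition(2);
        config.setMaxConnectionsPerPartition(10);
        config.setPartitionCount(1);

        return new BoneCPDataSource(config);
    }
}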
What is the best/optimal way to monitor the connection to the database?
I'm writing a Swing application. What I want it to do is check the connection to the database every so often. I've tried something like this:
org.hibernate.Session session = null;
try {
    System.out.println("Check session!");
    session = HibernateUtil.getSessionFactory().openSession();
} catch (HibernateException ex) {
    // ignored
} finally {
    if (session != null) {
        session.close();
    }
}
But that doesn't work.
The second question that comes to my mind is how this session closing will affect other queries.
Use a connection pool like c3p0 or DBCP. You can configure such a pool to monitor the connections it holds: before passing a connection to Hibernate, after receiving it back, or periodically. If a connection is broken, the pool will transparently close it, discard it, and open a new one, without you noticing.
Database connection pools are better suited to multi-user, database-heavy applications where several connections are open at the same time, but I don't think a pool is overkill here. Pools should work just fine bound to a maximum of 1 connection.
UPDATE: every time you try to access the database, Hibernate will ask the DataSource (connection pool) for a connection. If no active connection is available (e.g. because the database is down), this will throw an exception. If you want to know in advance when the database is unavailable (even when the user is not doing anything), you unfortunately need a background thread checking the database once in a while.
However, barely opening a session might not be enough in some configurations. You'd be better off running some dummy, cheap query like SELECT 1 (in raw JDBC).
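A minimal c3p0 sketch of the periodic and on-checkout testing described above; the URL, credentials, and intervals are placeholder assumptions:

import com.mchange.v2.c3p0.ComboPooledDataSource;
import javax.sql.DataSource;

public class MonitoredPool {
    // Sketch only: URL, credentials, pool size, and test interval are placeholder values.
    public static DataSource create() throws Exception {
        ComboPooledDataSource ds = new ComboPooledDataSource();
        ds.setJdbcUrl("jdbc:postgresql://localhost/mydb"); // placeholder
        ds.setUser("user");
        ds.setPassword("password");
        ds.setInitialPoolSize(1);
        ds.setMinPoolSize(1);
        ds.setMaxPoolSize(1); // a single-connection pool is fine for a Swing app

        ds.setPreferredTestQuery("SELECT 1");  // the cheap dummy query mentioned above
        ds.setTestConnectionOnCheckout(true);  // validate right before handing it to Hibernate
        ds.setIdleConnectionTestPeriod(60);    // and periodically, every 60 seconds
        return ds;
    }
}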