I am using Hibernate 3.3.1 with jtds 1.2.2 as the JDBC driver and c3p0 0.9.1.2 for connection pooling, connecting to SQL Server.
My query takes about 12 seconds to run.
When I run the query, I get the following exception:
ERROR [main] (JDBCExceptionReporter.java:101) - Invalid state, the Connection object is closed.
org.hibernate.exception.GenericJDBCException: could not inspect JDBC autocommit mode
at org.hibernate.exception.SQLStateConverter.handledNonSpecificException(SQLStateConverter.java:126)
at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:114)
at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:66)
at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:52)
at org.hibernate.jdbc.JDBCContext.afterNontransactionalQuery(JDBCContext.java:275)
at org.hibernate.impl.SessionImpl.afterOperation(SessionImpl.java:444)
at org.hibernate.impl.SessionImpl.listCustomQuery(SessionImpl.java:1728)
at org.hibernate.impl.AbstractSessionImpl.list(AbstractSessionImpl.java:165)
at org.hibernate.impl.SQLQueryImpl.list(SQLQueryImpl.java:175)
If I modify the query to return a small set of data, I do not get the exception.
It seems there is some configuration problem.
In my hibernate.properties file, I have the following configuration values:
hibernate.format_sql=true
hibernate.dialect=org.hibernate.dialect.SQLServerDialect
hibernate.optimistic-lock=true
hibernate.connection.autocommit=true
hibernate.show_sql=false
hibernate.generate_statistics=false
c3p0.acquire_increment=1
c3p0.idle_test_period=1000
c3p0.max_size=10
c3p0.max_statements=0
c3p0.min_size=5
c3p0.timeout=800
Can you please suggest what parameters need to be set to run a long-running query?
Thank you
c3p0 requires no special setup for long-running queries, although there are some settings (in particular unreturnedConnectionTimeout) that could interfere with long-running queries if set.
I'd verify that unreturnedConnectionTimeout is not set. (There are lots of places c3p0 configuration might be set outside of your hibernate.properties file.) c3p0 dumps its config to the log at INFO level on pool initialization; look for the value of unreturnedConnectionTimeout. It should be 0.
If it is 0, there is nothing at the c3p0 level that should interfere with your long-duration query.
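If it does turn out to be set somewhere, it can be pinned off explicitly in a c3p0.properties file at the classpath root. (Note: this is a sketch; raw c3p0 keys are camelCase, unlike the underscore-style keys that Hibernate translates in hibernate.properties.)

```properties
# c3p0.properties (classpath root) -- raw c3p0 keys, camelCase
# 0 disables forcible reclamation of checked-out connections,
# so long-running queries are never interrupted by the pool
c3p0.unreturnedConnectionTimeout=0
c3p0.debugUnreturnedConnectionStackTraces=false
```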
Related
My Spring Boot application needs to connect to two different databases. The first database (main) is installed on the same server as the application (localhost); the other database (secondary) is on a remote server and is not always available (maintenance, backup, testing, etc.).
I use the following configuration (application.properties).
# main connection
spring.datasource.driverClassName=com.mysql.jdbc.Driver
spring.datasource.url=jdbc:mysql://localhost/?autoReconnect=true&verifyServerCertificate=false&useSSL=false&requireSSL=false
spring.datasource.username=emater
spring.datasource.password=emater
# Keep the connection alive if idle for a long time (needed in production)
spring.datasource.testWhileIdle = true
spring.datasource.validationQuery = SELECT 1
# secondary connection
planejamento.datasource.driverClassName=com.mysql.jdbc.Driver
planejamento.datasource.url=jdbc:mysql://10.22.1.4/?verifyServerCertificate=false&useSSL=false&requireSSL=false
planejamento.datasource.username=emater
planejamento.datasource.password=emater
planejamento.datasource.testWhileIdle = false
#config hibernate
spring.jpa.hibernate.ddl-auto=none
spring.jpa.properties.hibernate.dialect=org.hibernate.spatial.dialect.mysql.MySQLSpatial56Dialect
spring.jpa.properties.hibernate.current_session_context_class=org.springframework.orm.hibernate4.SpringSessionContext
spring.jpa.show-sql=true
spring.jpa.format-sql=true
spring.jpa.use-sql-comments=true
spring.jpa.hibernate.enable_lazy_load_no_trans=true
When the application initializes, Hibernate tries to connect to both databases. If the second database is not available at that time, an exception is thrown and application startup is aborted.
Is there any property I could use to prevent my application from aborting at the time of its startup?
What should I do?
Hibernate needs to connect to the database when the SessionFactory is built so that it can extract the DatabaseMetaData from a JDBC Connection.
From the DatabaseMetaData, it needs to find out:
the current catalog and schema
how to qualify identifiers
whether the DB supports temporary tables
whether DDL causes a transaction commit
whether the driver supports scrollable ResultSets
whether the driver supports batch updates
whether the driver returns generated keys for IDENTITY columns
This info is resolved when the SessionFactory is initialized, so you are better off starting a new microservice lazily, when the associated database is available as well.
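If the secondary persistence unit cannot be made lazy, one commonly cited workaround is to skip the JDBC metadata lookup at startup, which is only safe because the dialect is already set explicitly in application.properties. (This is an assumption to verify against your Spring Boot/Hibernate versions, not something guaranteed by the explanation above.)

```properties
# Assumed workaround: with the dialect fixed explicitly, Hibernate can be
# told not to probe DatabaseMetaData while building the SessionFactory
spring.jpa.properties.hibernate.temp.use_jdbc_metadata_defaults=false
```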
I have a pretty large transaction, annotated with @Transactional. It contains a few long-running queries, but usually runs fine. About 20% of the time, the Connection appears to be forcibly closed outside of the transaction, and when the transaction tries to continue doing work, it fails with the stack trace below. Even worse, the transaction does not roll back.
JDBC commits by default on connection close. However, Spring should set the Connection's auto-commit to false before opening the transaction (see Spring @Transactional and JDBC autoCommit), and I've confirmed that the version we are using still does this. We use SimpleJdbcTemplate to execute the queries, and the connections are obtained from an Apache Commons DBCP pool.
Is this an issue where DBCP thinks the connection is stale (since the transaction contains a few long-running queries) and the pool tries to reclaim the connection, committing it along the way? Shouldn't autocommit=false prevent this? Does anyone have any other suggestions?
Thanks
Using (All pretty old):
SpringJDBC 2.5
Oracle JDBC Drivers 11.1.0.7 (ojdbc6)
Apache Commons DBCP 1.2.1
Stack trace:
Activity threw exception:
org.springframework.transaction.TransactionSystemException: Could not roll back JDBC transaction; nested exception is java.sql.SQLException: Connection oracle.jdbc.driver.T4CConnection@56a2191a is closed.
at org.springframework.jdbc.datasource.DataSourceTransactionManager.doRollback(DataSourceTransactionManager.java:279)
at org.springframework.transaction.support.AbstractPlatformTransactionManager.processRollback(AbstractPlatformTransactionManager.java:800)
at org.springframework.transaction.support.AbstractPlatformTransactionManager.rollback(AbstractPlatformTransactionManager.java:777)
at org.springframework.transaction.interceptor.TransactionAspectSupport.completeTransactionAfterThrowing(TransactionAspectSupport.java:339)
at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:110)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:202)
[...]
Caused by: java.sql.SQLException: Connection oracle.jdbc.driver.T4CConnection@56a2191a is closed.
at org.apache.commons.dbcp.DelegatingConnection.checkOpen(DelegatingConnection.java:398)
at org.apache.commons.dbcp.DelegatingConnection.rollback(DelegatingConnection.java:368)
at org.apache.commons.dbcp.PoolingDataSource$PoolGuardConnectionWrapper.rollback(PoolingDataSource.java:323)
at org.springframework.jdbc.datasource.DataSourceTransactionManager.doRollback(DataSourceTransactionManager.java:276)
... 21 more
I added setMaxActive(8) on org.apache.tomcat.jdbc.pool.PoolProperties. Every time the DB restarts, the application becomes unusable because the established connections remain stale in the pool, and I get the following error:
org.postgresql.util.PSQLException: This connection has been closed
I've tried using some other settings on the pool to no avail...
Thank you for help!
Use the validationQuery property, which checks that a connection is valid before the pool returns it.
Ref: Tomcat 6 JDBC Connection Pool
This property is available in recent Tomcat versions.
Look at this link:
Postgres connection has been closed error in Spring Boot
Very valid question, and this problem is faced by many. The exception generally occurs when the network connection between the pool and the database is lost (most of the time due to a restart). Looking at the stack trace you specified, it is quite clear that you are using the Tomcat jdbc pool to get the connection. The jdbc pool has options to fine-tune various connection pool settings and to log details about what's going on inside the pool.
You can refer to the detailed Apache documentation on pool configuration to specify the abandon timeout.
Check the removeAbandoned, removeAbandonedTimeout, and logAbandoned parameters.
Additionally, you can use further properties to tighten the validation:
Use testXXX and validationQuery for connection validity.
My own $0.02: use these two parameters:
validationQuery=<TEST SQL>
testOnBorrow=true
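Putting the two answers together, a sketch of the relevant pool settings, shown here in Spring Boot 1.x property form for the Tomcat jdbc pool (adapt the keys if you configure PoolProperties directly in code, e.g. setTestOnBorrow(true)):

```properties
# Validate connections when they are borrowed from the pool
spring.datasource.test-on-borrow=true
spring.datasource.validation-query=SELECT 1
# Reclaim and log connections that were never returned (timeout in seconds)
spring.datasource.remove-abandoned=true
spring.datasource.remove-abandoned-timeout=60
spring.datasource.log-abandoned=true
```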
With Derby, you're specifically supposed to call:
DriverManager.getConnection("jdbc:derby:myDatabase;shutdown=true");
when you want to shut down the database. However, with BoneCP you do:
BoneCPConfig config = new BoneCPConfig();
config.setJdbcUrl("jdbc:derby:myDatabase");
config.setXXX(...);
...
BoneCP connectionPool = new BoneCP(config);
// shutdown connection pool
connectionPool.shutdown();
However, with Derby you need to issue the shutdown command, otherwise you can get errors.
So the question is: how do I issue that shutdown connection string within the BoneCP framework?
In another related newer question, the following appears to be the same cause: "Unless you are running v0.8.1-beta2 or greater, set "disableConnectionTracking" to true in your config."
In other words you need both the derby connection URL as well as the proper config for BoneCP, at least for now...
Please note that you should expect an exception when SUCCESSFULLY shutting down derby: "A successful shutdown always results in an SQLException to indicate that Derby has shut down and that there is no other exception."
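Assuming the pool is shut down first with connectionPool.shutdown(), the Derby side can be handled by a small helper that treats the documented shutdown SQLStates as success (08006 for a single database, XJ015 for the whole engine). The class and method names here are illustrative, not part of BoneCP or Derby:

```java
import java.sql.DriverManager;
import java.sql.SQLException;

public class DerbyShutdown {

    // Asks Derby to shut down one database. Derby never returns a
    // Connection for a shutdown URL; it always throws SQLException,
    // and a *clean* shutdown is signalled by SQLState 08006
    // (single database) or XJ015 (whole engine).
    static boolean shutdownDatabase(String dbName) {
        try {
            DriverManager.getConnection("jdbc:derby:" + dbName + ";shutdown=true");
            return false; // unreachable when a real Derby driver is loaded
        } catch (SQLException e) {
            String state = e.getSQLState();
            return "08006".equals(state) || "XJ015".equals(state);
        }
    }
}
```

Call connectionPool.shutdown() first so no pooled connections are still open, then shutdownDatabase("myDatabase"); any other SQLState (and hence a false return) means the shutdown did not complete cleanly.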
I have a long-running method which executes a large number of native SQL queries through the EntityManager (TopLink Essentials). Each query takes only milliseconds to run, but there are many thousands of them. This happens within a single EJB transaction. After 15 minutes, the database closes the connection, which results in the following error:
Exception [TOPLINK-4002] (Oracle TopLink Essentials - 2.1 (Build b02-p04 (04/12/2010))): oracle.toplink.essentials.exceptions.DatabaseException
Internal Exception: java.sql.SQLException: Closed Connection
Error Code: 17008
Call: select ...
Query: DataReadQuery()
at oracle.toplink.essentials.exceptions.DatabaseException.sqlException(DatabaseException.java:319)
.
.
.
RAR5031:System Exception.
javax.resource.ResourceException: This Managed Connection is not valid as the phyiscal connection is not usable
at com.sun.gjc.spi.ManagedConnection.checkIfValid(ManagedConnection.java:612)
In the JDBC connection pool I set is-connection-validation-required="true" and connection-validation-method="table", but this did not help.
I assumed that JDBC connection validation is there to deal with precisely this kind of error. I also looked at the TopLink extensions (http://www.oracle.com/technetwork/middleware/ias/toplink-jpa-extensions-094393.html) for some kind of timeout setting but found nothing. There is also the TopLink session configuration file (http://download.oracle.com/docs/cd/B14099_19/web.1012/b15901/sessions003.htm), but I don't think there is anything useful there either.
I don't have access to the Oracle DBA tables, but I think that Oracle closes connections after 15 minutes according to the setting in the CONNECT_TIME profile variable.
Is there any other way to make TopLink or the JDBC pool reestablish a closed connection?
The database is Oracle 10g, application server is Sun Glassfish 2.1.1.
All JPA implementations (running on a Java EE container) use a datasource with an associated connection pool to manage connectivity with the database.
The persistence context itself is associated with the datasource via an appropriate entry in persistence.xml. If you wish to change the connection timeout settings on the client-side, then the associated connection pool must be re-configured.
In Glassfish, the timeout settings associated with the connection pool can be reconfigured by editing the pool settings, as listed in the following links:
Changing timeout settings in GlassFish 3.1
Changing timeout settings in GlassFish 2.1
On the server side (whose settings, if lower than the client-side settings, take precedence), the Oracle database can be configured with database profiles associated with user accounts. The IDLE_TIME and CONNECT_TIME parameters of a profile constitute the timeout settings of importance in this aspect of the client-server interaction. If no profile has been set, the timeout is unlimited by default.
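To see whether such a profile limit is in play, a DBA (the asker notes they lack access to the DBA views themselves) would run something like the following; the profile name app_profile is a placeholder:

```sql
-- Check the CONNECT_TIME / IDLE_TIME limits of each profile
SELECT profile, resource_name, limit
  FROM dba_profiles
 WHERE resource_name IN ('CONNECT_TIME', 'IDLE_TIME');

-- A DBA could then relax the limit for the application's profile
ALTER PROFILE app_profile LIMIT CONNECT_TIME UNLIMITED;
```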
Unless you've got some sort of RAC failover, terminating the connection will end the session and transaction.
The admins may have set some limits to prevent runaway transactions or a single job 'hogging' a connection in the pool. You generally don't want to lock a connection in a pool for an extended period.
If these queries aren't necessarily part of the same transaction, you could try terminating the connection and starting a new one.
Are you able to restructure your code so that it completes in under 15 minutes? A stored procedure run in the background may be able to do the job a lot quicker than dragging the results of thousands of operations over the network.
I see you set connection-validation-method="table" and is-connection-validation-required="true", but you do not mention specifying the table to validate against; did you set validation-table-name? validation-table-name="existing_table_name" (any table you know exists) is required for table validation to work.
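In domain.xml, the pool entry would then look something like this (an illustrative fragment; the pool name is hypothetical, and DUAL stands in for any table the pool user can read):

```xml
<!-- Illustrative GlassFish jdbc-connection-pool entry with table validation -->
<jdbc-connection-pool name="OraclePool"
                      datasource-classname="oracle.jdbc.pool.OracleDataSource"
                      res-type="javax.sql.DataSource"
                      is-connection-validation-required="true"
                      connection-validation-method="table"
                      validation-table-name="DUAL"/>
```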
See this article for more details on connection validation.
Related Stack Overflow question with a similar problem: the asker there wants to flush the entire invalid connection pool.