How to reestablish a JDBC connection after a timeout?

I have a long-running method which executes a large number of native SQL queries through the EntityManager (TopLink Essentials). Each query takes only milliseconds to run, but there are many thousands of them. This all happens within a single EJB transaction. After 15 minutes, the database closes the connection, which results in the following error:
Exception [TOPLINK-4002] (Oracle TopLink Essentials - 2.1 (Build b02-p04 (04/12/2010))): oracle.toplink.essentials.exceptions.DatabaseException
Internal Exception: java.sql.SQLException: Closed Connection
Error Code: 17008
Call: select ...
Query: DataReadQuery()
at oracle.toplink.essentials.exceptions.DatabaseException.sqlException(DatabaseException.java:319)
...
RAR5031:System Exception.
javax.resource.ResourceException: This Managed Connection is not valid as the phyiscal connection is not usable
at com.sun.gjc.spi.ManagedConnection.checkIfValid(ManagedConnection.java:612)
In the JDBC connection pool I set is-connection-validation-required="true" and connection-validation-method="table", but this did not help.
I assumed that JDBC connection validation is there to deal with precisely this kind of error. I also looked at the TopLink extensions (http://www.oracle.com/technetwork/middleware/ias/toplink-jpa-extensions-094393.html) for some kind of timeout setting but found nothing. There is also the TopLink session configuration file (http://download.oracle.com/docs/cd/B14099_19/web.1012/b15901/sessions003.htm), but I don't think there is anything useful there either.
I don't have access to the Oracle DBA tables, but I think that Oracle closes connections after 15 minutes according to the CONNECT_TIME setting in the user's profile.
Is there any other way to make TopLink or the JDBC pool reestablish a closed connection?
The database is Oracle 10g; the application server is Sun GlassFish 2.1.1.

All JPA implementations (running on a Java EE container) use a datasource with an associated connection pool to manage connectivity with the database.
The persistence context itself is associated with the datasource via an appropriate entry in persistence.xml. If you wish to change the connection timeout settings on the client-side, then the associated connection pool must be re-configured.
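For example, a persistence unit bound to a container-managed datasource might be declared like this (the unit name and JNDI name here are placeholders, not taken from the question):
<persistence-unit name="myUnit" transaction-type="JTA">
    <jta-data-source>jdbc/myOracleDS</jta-data-source>
</persistence-unit>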
In Glassfish, the timeout settings associated with the connection pool can be reconfigured by editing the pool settings, as listed in the following links:
Changing timeout settings in GlassFish 3.1
Changing timeout settings in GlassFish 2.1
On the server side (whose settings, if lower than the client's, are the ones that matter in practice), the Oracle database can be configured with database profiles associated with user accounts. The session IDLE_TIME and CONNECT_TIME parameters of a profile constitute the timeout settings of importance in this aspect of the client-server interaction. If no profile has been set, the timeout is unlimited by default.
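As an illustration, a DBA could inspect and relax these limits along the following lines (the profile name app_profile is hypothetical):
-- See which limits are in force
select profile, resource_name, limit
from dba_profiles
where resource_name in ('IDLE_TIME', 'CONNECT_TIME');

-- Relax them on a hypothetical profile
alter profile app_profile limit idle_time unlimited connect_time unlimited;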

Unless you've got some sort of RAC failover, when the connection is terminated, it will end the session and transaction.
The admins may have put some limits in place to prevent runaway transactions or a single job 'hogging' a connection in a pool. You generally don't want to lock a connection in a pool for an extended period.
If these queries aren't necessarily part of the same transaction, then you could try closing the connection and starting a new one, as in the sketch below.
Are you able to restructure your code so that it completes in under 15 minutes? A stored procedure in the background may be able to do the job a lot quicker than dragging the results of thousands of operations over the network.
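A rough sketch of the first idea (purely illustrative: the JNDI name and the partition() helper are made up, and exception handling is omitted) is to process the queries in batches, each on a freshly borrowed connection, so no single connection is pinned for 15 minutes:
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.List;
import javax.naming.InitialContext;
import javax.sql.DataSource;

DataSource ds = (DataSource) new InitialContext().lookup("jdbc/myOracleDS");
for (List<String> batch : partition(allQueries, 500)) {  // partition() is hypothetical
    try (Connection con = ds.getConnection();
         Statement st = con.createStatement()) {
        for (String sql : batch) {
            try (ResultSet rs = st.executeQuery(sql)) {
                // consume the results here
            }
        }
    } // the connection goes back to the pool at the end of each batch
}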

I see you set connection-validation-method="table" and is-connection-validation-required="true", but you do not mention that you specified the table to validate against. Did you set validation-table-name="any_table_you_know_exists"? Providing an existing table name via validation-table-name is required.
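For reference, here is roughly how those attributes sit together on the pool element in domain.xml (the pool name is a placeholder, other attributes are omitted, and DUAL is used because it always exists in Oracle):
<jdbc-connection-pool name="myOraclePool"
                      is-connection-validation-required="true"
                      connection-validation-method="table"
                      validation-table-name="DUAL">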
See this article for more details on connection validation.
Related StackOverflow question with a similar problem: the asker wants to flush the entire invalid connection pool.

Related

Oracle 19c JDBC (Instant client basiclite) driver Issue

I am working on upgrading the Oracle JDBC driver from 11g (ojdbc6.jar) to 19c (ojdbc8.jar) in my Java application; the driver used is Instant Client (instantclient-basiclite-nt-19.11) with JRE 1.8.0_271. After the change to 19c, my application keeps hitting "ORA-02396: exceeded maximum idle time, please connect again" or "ORA-03113: end-of-file" errors.
In the Oracle database properties there are some limits set: Idle-Time = 2 minutes and Connection-Time = 10 minutes. But I will not change anything on the database side, because that may cause high CPU load if many users are using the application at the same time.
In the Java application, connections are stored in a connection pool. I added logging and can see that the connection is closed and returned to the connection pool after execution finishes. But when I run the application again after 2 minutes, the Oracle error is raised.
If I switch back to 11g, I don't get this error and the application works fine after 2 minutes, with no change in code.
Is this a bug in the latest Oracle driver? I saw there is a ucp.jar (Universal Connection Pool) package available in 19c but not in 11g; do I have to implement this, and how?
Your problem has nothing to do with the driver but with a database profile setting that limits the maximum allowed idle_time. Normally this is done to get rid of forgotten sessions.
You can check this using
select a.username, b.profile, b.resource_name, b.limit
from dba_users a, dba_profiles b
where b.resource_name = 'IDLE_TIME'
  and a.profile = b.profile;
Find which profile is used for your user and see if your DBA is willing to change the limit.
If this happens to be the DEFAULT profile, it can be changed to unlimited with
ALTER PROFILE DEFAULT LIMIT IDLE_TIME UNLIMITED;
but it might be better to create a custom profile for your user.
If the IDLE_TIME limit cannot be changed, run a query every once in a while, like select 'keep me alive' from dual;
This also prevents closure by firewalls.
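A minimal keep-alive sketch along those lines, assuming ds is your already-configured pooled DataSource (the one-minute interval is arbitrary):
import java.sql.Connection;
import java.sql.Statement;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Fire a trivial query periodically so sessions don't idle out.
ScheduledExecutorService keepAlive = Executors.newSingleThreadScheduledExecutor();
keepAlive.scheduleAtFixedRate(() -> {
    try (Connection con = ds.getConnection();
         Statement st = con.createStatement()) {
        st.execute("select 'keep me alive' from dual");
    } catch (Exception e) {
        // log and carry on; the next run may succeed
    }
}, 1, 1, TimeUnit.MINUTES);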

JDBC Connection Pool: connections are not recycled after DB restart

I added setMaxActive(8) on org.apache.tomcat.jdbc.pool.PoolProperties. Every time the DB restarts, the application is unusable because the previously established connections remain in the pool. I get the following error:
org.postgresql.util.PSQLException: This connection has been closed
I've tried using some other settings on the pool to no avail...
Thank you for your help!
Use the validationQuery property, which will check that a connection is valid before returning it.
Ref: Tomcat 6 JDBC Connection Pool
This property is available in recent Tomcat versions.
Look at this link:
Postgres connection has been closed error in Spring Boot
Very valid question, and this problem is faced by many. The exception generally occurs when the network connection is lost between the pool and the database (most of the time due to a restart). Looking at the stack trace you have specified, it is quite clear that you are using the jdbc pool to get the connection. The JDBC pool has options to fine-tune various connection pool settings and log details about what's going on inside the pool.
You can refer to the detailed Apache documentation on pool configuration to specify the abandon timeout.
Check the removeAbandoned, removeAbandonedTimeout, and logAbandoned parameters.
Additionally, you can make use of further properties to tighten the validation.
Use testXXX and validationQuery for connection validity.
My own $0.02: use these two parameters:
validationQuery=<TEST SQL>
testOnBorrow=true
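Put together with the PoolProperties the question already uses, a hedged sketch (URL and credentials are placeholders):
import org.apache.tomcat.jdbc.pool.DataSource;
import org.apache.tomcat.jdbc.pool.PoolProperties;

PoolProperties p = new PoolProperties();
p.setUrl("jdbc:postgresql://localhost:5432/mydb");  // placeholder
p.setDriverClassName("org.postgresql.Driver");
p.setUsername("user");                              // placeholder
p.setPassword("secret");                            // placeholder
p.setMaxActive(8);
p.setTestOnBorrow(true);            // validate before handing a connection out
p.setValidationQuery("SELECT 1");   // cheap check on PostgreSQL
p.setValidationInterval(30000);     // validate at most every 30 seconds
p.setRemoveAbandoned(true);         // reclaim leaked connections
p.setRemoveAbandonedTimeout(60);    // seconds before a connection counts as abandoned
p.setLogAbandoned(true);
DataSource ds = new DataSource(p);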

Unable to fill pool (no buffer space available)

I'm using WildFly 8.2 and fire a series of DB requests when a certain web page is opened. All queries are invoked through the JPA Criteria API and return results as expected, and none of them produces a warning, error, or exception. It all runs in Parallels Plesk.
Now, I noticed that within 2 to 3 days the "unable to fill pool (no buffer space available)" error from the title appears and the site becomes unresponsive. I restart, and I wait approximately another 3 days until it happens again (depending on the number of requests I have).
I checked tcpsndbuf on my Linux server and noticed it is constantly at its maximum unless I restart WildFly. Apparently it fails to release the connections.
The connections are managed by JPA/Hibernate and the WildFly container. I don't do any special or custom transaction handling (e.g. open, close, etc.); I leave it all to WildFly.
The MySQL Driver I'm using is 5.1.21 (mysql-connector-java-5.1.21-bin.jar)
In standalone.xml, I have defined the following datasource values (among others):
<transaction-isolation>TRANSACTION_READ_COMMITTED</transaction-isolation>
<pool>
<min-pool-size>3</min-pool-size>
<max-pool-size>10</max-pool-size>
</pool>
<statement>
<prepared-statement-cache-size>32</prepared-statement-cache-size>
<shared-prepared-statements>true</shared-prepared-statements>
</statement>
Has anyone experienced the same rise of tcpsndbuf values (or this error)? In case you require more config or log files, let me know. Thanks!
UPDATE
Despite additional timeout settings, it still runs into the hang, and it then uses 100% CPU time whenever the maximum tcpsndbuf is reached.
Try adding this Hibernate property:
<property name="hibernate.connection.release_mode">after_transaction</property>
By default, JTA mandates that the connection be released after each statement, which is undesirable for most use cases. Most drivers don't allow multiplexing a connection over multiple XA transactions anyway.
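If the provider is configured through persistence.xml, the property goes into the unit's properties block (the unit name here is a placeholder):
<persistence-unit name="myUnit" transaction-type="JTA">
    <properties>
        <property name="hibernate.connection.release_mode" value="after_transaction"/>
    </properties>
</persistence-unit>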
Do you use OpenVZ? I think this question should be asked on Server Fault, as it is related to Linux configuration. You can read: tcpsndbuf. You should count opened sockets and check the condition:

How to reconnect when the LDAP server is restarted?

I have a situation where through a Java program, I create a javax.naming.ldap.LdapContext and do a search() operation on it - which makes an underlying connection. Then I put the Java app thread to sleep, during which I restart the LDAP server (OpenLDAP, just to note). When the App thread wakes up and tries to do any operation on the LdapContext created earlier, it throws "CommunicationException: Connection is closed".
What I want is to be able to re-establish the connection.
I see that LdapContext has a reconnect() method, to which I pass null controls. However, this does not have any effect. What I saw in the Sun LDAP implementation is that while the LDAP server was being restarted, the connection pool maintained by the Sun implementation marked the underlying com.sun.jndi.ldap.LdapClient instance with "usable=false". A reconnect() call simply invokes ensureOpen(), which again checks the usable flag; if it is false, it throws CommunicationException - so back to square one.
My question is: how does a Java app survive an external LDAP server restart? Is creating a new LdapContext the only way out?
Appreciate any insights.
Here is the stacktrace of the exception:
javax.naming.CommunicationException: connection closed [Root exception is java.io.IOException: connection closed]; remaining name 'uid=foo,ou=People,dc=example,dc=com'
at com.sun.jndi.ldap.LdapCtx.doSearch(LdapCtx.java:1979)
at com.sun.jndi.ldap.LdapCtx.searchAux(LdapCtx.java:1824)
at com.sun.jndi.ldap.LdapCtx.c_search(LdapCtx.java:1749)
at com.sun.jndi.toolkit.ctx.ComponentDirContext.p_search(ComponentDirContext.java:368)
at com.sun.jndi.toolkit.ctx.PartialCompositeDirContext.search(PartialCompositeDirContext.java:338)
at com.sun.jndi.toolkit.ctx.PartialCompositeDirContext.search(PartialCompositeDirContext.java:321)
at javax.naming.directory.InitialDirContext.search(InitialDirContext.java:248)
Caused by: java.io.IOException: connection closed
at com.sun.jndi.ldap.LdapClient.ensureOpen(LdapClient.java:1558)
at com.sun.jndi.ldap.LdapClient.search(LdapClient.java:504)
at com.sun.jndi.ldap.LdapCtx.doSearch(LdapCtx.java:1962)
... 26 more
Just enable JNDI connection pooling and it will all be taken care of for you behind the scenes. See the JNDI Guide to Features and the LDAP Provider documentation. It's controlled by just a couple of properties.
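In code, pooling is enabled with an environment property when the context is created and tuned with system properties, e.g. (the URL is a placeholder):
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.ldap.InitialLdapContext;
import javax.naming.ldap.LdapContext;

Hashtable<String, Object> env = new Hashtable<>();
env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
env.put(Context.PROVIDER_URL, "ldap://localhost:389");  // placeholder
env.put("com.sun.jndi.ldap.connect.pool", "true");      // enable connection pooling

// Optional JVM-wide tuning; must be set before the first context is created.
System.setProperty("com.sun.jndi.ldap.connect.pool.maxsize", "10");
System.setProperty("com.sun.jndi.ldap.connect.pool.timeout", "300000"); // idle ms

LdapContext ctx = new InitialLdapContext(env, null);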
The UnboundID LDAP SDK provides a means of auto-reconnecting, wherein the auto-reconnect operation is invisible to the client.
We had this problem at work. The solution we came up with (which may not be the best answer) was to create a watchdog thread that checks the connection at some fixed rate. If the connection does not work, it re-initializes the connection to LDAP.
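A rough sketch of such a watchdog (the probe interval is arbitrary; ctx must be a field rather than a local so the task can replace it, and createContext() stands in for your own context factory method):
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import javax.naming.NamingException;

ScheduledExecutorService watchdog = Executors.newSingleThreadScheduledExecutor();
watchdog.scheduleAtFixedRate(() -> {
    try {
        ctx.getAttributes("");  // cheap probe against the root DSE
    } catch (NamingException dead) {
        try { ctx.close(); } catch (NamingException ignored) { }
        try { ctx = createContext(); }  // re-establish the connection
        catch (NamingException e) { /* log; retry on the next tick */ }
    }
}, 30, 30, TimeUnit.SECONDS);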
You should note that this is essentially related to LDAP connection pooling. As defined here:
A connection is retrieved from the pool, used, returned to the pool, and then retrieved again from the pool for another Context instance.
Thus, the reuse of a previous connection may cause such a problem.
You may test the behavior without using LDAP connection pooling by setting
com.sun.jndi.ldap.connect.pool=false
Also, another possible cause may be the read timeout of LDAP operations: the reading operation is not notified about the closure of the LDAP server until a specific timeout has passed. For more information, you may take a look at this link.

How to remove invalid database connection from pool

I am using the Tomcat connection pool with an Oracle database. It works fine, but when I use my application after a long idle period it gives a "connection reset" error. I get this error because the physical connection on the Oracle server is closed before the logical connection in the Tomcat datasource is closed. So before getting a connection from the datasource I check its validity with the isValid(0) method of the Connection object, which returns false if the physical connection was closed. But I don't know how to remove that invalid connection object from the pool.
This could be because the DB server has a timeout that does not allow connections to live beyond a set time, or kills them if it does not receive something saying they are still valid. One way to fix this is to turn on keepalives, which basically ping the DB server to signal that the connections are still valid.
This is a pretty good link on Tomcat's DBCP configuration. Take a look at the section titled "Preventing dB connection pool leaks". That looks like a good place to start.
I used a validationQuery while configuring the datasource in the server.xml file. It checks the validity of the connection by executing the query against the database before handing the connection to the application.
For Oracle:
validationQuery="select 1 from dual"
For MySQL (Connector/J treats a query starting with /* ping */ as a lightweight server ping):
validationQuery="/* ping */"
Try closing it and opening it again if it's invalid. I mean, you would reinitialize it this way, so you won't need to remove it from the pool and can reuse it.
If we want to dispose of an unhealthy java.sql.Connection from the Tomcat JDBC connection pool, we may do this explicitly in the program: unwrap it into an org.apache.tomcat.jdbc.pool.PooledConnection, call setDiscarded(true), and finally close the JDBC connection. The ConnectionPool will remove the underlying connection once it has been returned (ConnectionPool.returnConnection(...)).
e.g.
PooledConnection pconn = conn.unwrap(PooledConnection.class);
pconn.setDiscarded(true);   // mark the physical connection for disposal
conn.close();               // returning it now removes it from the pool
