I have a situation where, through a Java program, I create a javax.naming.ldap.LdapContext and do a search() operation on it, which establishes an underlying connection. Then I put the app thread to sleep, during which I restart the LDAP server (OpenLDAP, just to note). When the app thread wakes up and tries to do any operation on the LdapContext created earlier, it throws "CommunicationException: Connection is closed".
What I want is to be able to re-establish the connection.
I see that LdapContext has a reconnect() method, to which I pass null for the controls. However, this has no effect. What I saw in the Sun LDAP implementation is that while the LDAP server was being restarted, the connection pool maintained by the Sun implementation marked the underlying com.sun.jndi.ldap.LdapClient instance with usable=false. A reconnect() call simply invokes ensureOpen(), which again checks the usable flag; since it is false, it throws CommunicationException - so back to square one.
My question is: how does a Java app survive an external LDAP server restart? Is creating a new LdapContext the only way out?
Appreciate any insights.
Here is the stacktrace of the exception:
javax.naming.CommunicationException: connection closed [Root exception is java.io.IOException: connection closed]; remaining name 'uid=foo,ou=People,dc=example,dc=com'
at com.sun.jndi.ldap.LdapCtx.doSearch(LdapCtx.java:1979)
at com.sun.jndi.ldap.LdapCtx.searchAux(LdapCtx.java:1824)
at com.sun.jndi.ldap.LdapCtx.c_search(LdapCtx.java:1749)
at com.sun.jndi.toolkit.ctx.ComponentDirContext.p_search(ComponentDirContext.java:368)
at com.sun.jndi.toolkit.ctx.PartialCompositeDirContext.search(PartialCompositeDirContext.java:338)
at com.sun.jndi.toolkit.ctx.PartialCompositeDirContext.search(PartialCompositeDirContext.java:321)
at javax.naming.directory.InitialDirContext.search(InitialDirContext.java:248)
Caused by: java.io.IOException: connection closed
at com.sun.jndi.ldap.LdapClient.ensureOpen(LdapClient.java:1558)
at com.sun.jndi.ldap.LdapClient.search(LdapClient.java:504)
at com.sun.jndi.ldap.LdapCtx.doSearch(LdapCtx.java:1962)
... 26 more
Just enable JNDI connection pooling and it will all be taken care of for you behind the scenes. See the JNDI Guide to Features and the LDAP Provider documentation. It's controlled by just a couple of properties.
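For illustration, a minimal sketch of enabling the pool (host, port, and pool sizes here are placeholder values):

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;

Hashtable<String, Object> env = new Hashtable<>();
env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
env.put(Context.PROVIDER_URL, "ldap://localhost:389");
// Per-context switch: hand this context's connection handling to the shared pool.
env.put("com.sun.jndi.ldap.connect.pool", "true");

// JVM-wide pool tuning via system properties, e.g. on the command line:
//   -Dcom.sun.jndi.ldap.connect.pool.maxsize=20
//   -Dcom.sun.jndi.ldap.connect.pool.timeout=300000   (drop idle connections after 5 min)

DirContext ctx = new InitialDirContext(env);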
The UnboundID LDAP SDK provides an auto-reconnect capability in which the reconnect operation is invisible to the client.
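For example, something along these lines (a sketch; note that in recent SDK versions setAutoReconnect is deprecated in favor of using the SDK's own connection pool, which can retry operations itself):

import com.unboundid.ldap.sdk.LDAPConnection;
import com.unboundid.ldap.sdk.LDAPConnectionOptions;

// Ask the SDK to re-establish the underlying connection transparently
// if it is found to be closed. Host and port are placeholders.
LDAPConnectionOptions options = new LDAPConnectionOptions();
options.setAutoReconnect(true);
LDAPConnection conn = new LDAPConnection(options, "ldap.example.com", 389);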
We had this problem at work. The solution we came up with (which may not be the best answer) was to create a watchdog thread that would check the connection at some fixed rate; if the connection did not work, it would re-initialize the connection to LDAP.
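A minimal sketch of that idea, assuming a createContext() helper (hypothetical) that builds a fresh InitialLdapContext:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;
import javax.naming.NamingException;
import javax.naming.ldap.LdapContext;

// Holder so worker threads always see the most recently (re)created context.
AtomicReference<LdapContext> ctxRef = new AtomicReference<>(createContext());

ScheduledExecutorService watchdog = Executors.newSingleThreadScheduledExecutor();
watchdog.scheduleAtFixedRate(() -> {
    try {
        // Cheap liveness probe: reading the context's own attributes
        // forces a round trip to the server.
        ctxRef.get().getAttributes("");
    } catch (NamingException dead) {
        try {
            ctxRef.get().close();
        } catch (NamingException ignored) { }
        try {
            ctxRef.set(createContext());   // rebuild the connection from scratch
        } catch (NamingException stillDown) {
            // Server not back yet; the next tick will retry.
        }
    }
}, 30, 30, TimeUnit.SECONDS);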
You should note that this is essentially related to LDAP connection pooling. As defined here:
A connection is retrieved from the pool, used, returned to the pool, and then, retrieved again from the pool for another Context instance.
Thus, the reuse of a previous connection may cause such a problem.
You may test the behavior without using LDAP connection pooling by setting
com.sun.jndi.ldap.connect.pool=false
Also, another possible cause is the read timeout on LDAP operations: a read operation is not notified of the LDAP server's closure until that timeout elapses. For more information, you may take a look at this link.
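Both timeouts are controlled by standard environment properties on the context; for example (values are placeholders, env being the Hashtable passed to InitialDirContext):

// Bound connection establishment and reads so a dead server surfaces
// as an exception instead of an indefinite hang:
env.put("com.sun.jndi.ldap.connect.timeout", "5000");   // ms to establish the TCP connection
env.put("com.sun.jndi.ldap.read.timeout", "10000");     // ms to wait for an LDAP response
// And to rule pooling out while testing:
env.put("com.sun.jndi.ldap.connect.pool", "false");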
SOLVED
Ultimately the solution described in the similar ticket below worked for us. When we first tried it, our configuration parser was mangling the URL and stripping &connectTimeout=15000&socketTimeout=60000 from it, which invalidated that test.
I'm having trouble getting a Tomcat JDBC connection pool to fail over to a new DB server using Amazon RDS' Multi-AZ feature. When failover occurs, the application server gets hung up trying to borrow a connection from the pool. It's similar to this question; however, the solution to that user's problem did not help me, so I suspect it's not quite the same: Configure GlassFish JDBC connection pool to handle Amazon RDS Multi-AZ failover
The sequence goes a bit like this:
Successful request is made, log output as expected.
Failover initiated (via reboot-with-failover in RDS).
Request made that times out; log messages from the request appear as expected up until a connection is borrowed from the pool.
Subsequent requests generate no log messages; they time out as well.
After some amount of time, the daemon eventually starts printing more log messages as if it had successfully connected to the database and is performing operations. This can take just over 16 minutes, by which point the client has long since timed out.
If I wait about 50 minutes and try again, eventually the system will finally accept connections again.
Notes
If I tell Tomcat to shut down, there are exceptions in the logs about it being unable to clean up resources and about my servlet still processing a request. Most notably (in my opinion), the following stack trace indicates at least one thread is stuck waiting for communication over a socket from MySQL.
java.net.SocketInputStream.socketRead0(Native Method)
java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
java.net.SocketInputStream.read(SocketInputStream.java:171)
java.net.SocketInputStream.read(SocketInputStream.java:141)
com.mysql.jdbc.util.ReadAheadInputStream.fill(ReadAheadInputStream.java:114)
com.mysql.jdbc.util.ReadAheadInputStream.readFromUnderlyingStreamIfNecessary(ReadAheadInputStream.java:161)
com.mysql.jdbc.util.ReadAheadInputStream.read(ReadAheadInputStream.java:189)
com.mysql.jdbc.MysqlIO.readFully(MysqlIO.java:3116)
com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3573)
com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3562)
com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4113)
com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2570)
com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2731)
com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2812)
com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2761)
com.mysql.jdbc.StatementImpl.execute(StatementImpl.java:894)
com.mysql.jdbc.StatementImpl.execute(StatementImpl.java:732)
org.apache.tomcat.jdbc.pool.PooledConnection.validate(PooledConnection.java:441)
org.apache.tomcat.jdbc.pool.PooledConnection.validate(PooledConnection.java:384)
org.apache.tomcat.jdbc.pool.ConnectionPool.borrowConnection(ConnectionPool.java:716)
org.apache.tomcat.jdbc.pool.ConnectionPool.borrowConnection(ConnectionPool.java:579)
org.apache.tomcat.jdbc.pool.ConnectionPool.getConnection(ConnectionPool.java:174)
org.apache.tomcat.jdbc.pool.DataSourceProxy.getConnection(DataSourceProxy.java:111)
<...>
If I restart Tomcat, things return to normal as soon as it comes back up. Obviously, long term, this is probably not a workable solution for maintaining uptime.
Environment Details
Database: MySQL (using mysql-connector-java 5.1.26) (server 5.5.53)
Tomcat 8.0.45
I've gone through several changes to my configuration while trying to solve this issue. At the time of this writing the following related settings are in place:
jre/lib/security/java.security -- I'm under the impression that the default DNS cache TTL for Oracle Java 1.8 is 30s with no security manager. I set these to zero just to be positive this isn't the issue.
networkaddress.cache.ttl: 0
networkaddress.cache.negative.ttl: 0
connection pool settings
testOnBorrow: true
testOnConnect: true
testOnReturn: true
jdbc url parameters
connectTimeout:60000
socketTimeout:60000
autoReconnect:true
Update
Still no solution found.
Added logging to confirm that this was not a DNS caching issue: the IP address logged immediately before the timeout matches the IP address of the 'new' master RDS host.
For reference the following block represents the properties I'm using to initialize my data source. I’m configuring the pool in code rather than JNDI, with some elements pulled out of our app's config file. I’ve pasted the code below along with comments indicating what the config values are for the tests I’ve been running.
import org.apache.tomcat.jdbc.pool.PoolProperties;

PoolProperties p = new PoolProperties();
p.setUrl(config.get("JDBC_URL")); // jdbc:mysql://host:3306/dbname
p.setDriverClassName("com.mysql.jdbc.Driver");
p.setUsername(config.get("JDBC_USER"));
p.setPassword(config.get("JDBC_PASSWORD"));
p.setJmxEnabled(true);
p.setTestWhileIdle(false);
p.setTestOnBorrow(true);
p.setValidationQuery("SELECT 1");
p.setValidationInterval(30000);
p.setTimeBetweenEvictionRunsMillis(30000);
p.setMaxActive(Integer.parseInt(config.get("MAX_ACTIVE"))); //45
p.setInitialSize(10);
p.setMaxWait(5);
p.setRemoveAbandonedTimeout(Integer.parseInt(config.get("REMOVE_ABANDONED_TIMEOUT"))); //600
p.setMinEvictableIdleTimeMillis(Integer.parseInt(config.get("DB_EVICTION_TIMEOUT"))); //60000
p.setMinIdle(Integer.parseInt(config.get("DB_MIN_IDLE"))); //50
p.setLogAbandoned(Boolean.parseBoolean(config.get("LOG_ABANDONED"))); //true
p.setRemoveAbandoned(true);
p.setJdbcInterceptors(
"org.apache.tomcat.jdbc.pool.interceptor.ConnectionState;"+
"org.apache.tomcat.jdbc.pool.interceptor.StatementFinalizer;"+
"org.apache.tomcat.jdbc.pool.interceptor.ResetAbandonedTimer");
// make sure new connections have auto commit true
p.setDefaultAutoCommit(true);
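For reference, the pool itself is then built from these properties in the usual Tomcat jdbc-pool way, roughly:

import org.apache.tomcat.jdbc.pool.DataSource;

DataSource datasource = new DataSource();
datasource.setPoolProperties(p);   // p is the PoolProperties built above
// later: Connection conn = datasource.getConnection();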
I added setMaxActive(8) on org.apache.tomcat.jdbc.pool.PoolProperties. Every time the DB restarts, the application is unusable because the previously established connections remain. I get the following error:
org.postgresql.util.PSQLException: This connection has been closed
I've tried using some other settings on the pool to no avail...
Thank you for your help!
Use the validationQuery property, which checks that a connection is valid before it is returned from the pool.
Ref: Tomcat 6 JDBC Connection Pool
This property is available on recent Tomcat versions.
Look at this link:
Postgres connection has been closed error in Spring Boot
Very valid question, and this problem is faced by many. The exception generally occurs when the network connection between the pool and the database is lost (most of the time due to a restart). Looking at the stack trace you have posted, it is quite clear that you are using jdbc-pool to get the connection. jdbc-pool has options to fine-tune various connection pool settings and to log details about what's going on inside the pool.

You can refer to the detailed Apache documentation on pool configuration to specify the abandon timeout.

Check the removeAbandoned, removeAbandonedTimeout, and logAbandoned parameters.

Additionally, you can use further properties to tighten the validation.
Use testXXX and validationQuery for connection validity.
My own $0.02: use these two parameters:
validationQuery=<TEST SQL>
testOnBorrow=true
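In code, with Tomcat's jdbc-pool, these map onto PoolProperties setters along these lines (a sketch; the 30-second validation interval is an assumption):

import org.apache.tomcat.jdbc.pool.PoolProperties;

PoolProperties p = new PoolProperties();
p.setTestOnBorrow(true);              // validate each connection before handing it out
p.setValidationQuery("SELECT 1");     // use "select 1 from dual" on Oracle
p.setValidationInterval(30000);       // skip re-validation if checked within the last 30 s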
Vaadin 7 offers the SQLContainer implementation. The Book of Vaadin says to use either of its implementations of a JDBC connection pool. But I already am using the Tomcat JDBC Connection Pool implementation. Having one pool that draws from another pool seems like a bad thing.
To continue using the Tomcat pool, I implemented the com.vaadin.data.util.sqlcontainer.connection.JDBCConnectionPool interface. That interface requires three methods (a sketch of one possible implementation follows the list):
reserveConnection: I return a connection drawn from the Tomcat pool.
releaseConnection: I do nothing. I tried calling close on the connection, but that actually closed the connection rather than returning it to the Tomcat pool. Apparently SQLContainer had already called close once, and my second call actually shut the connection down; I was getting runtime errors saying the connection was not open.
destroy: I do nothing. Supposedly this method is just a workaround for some issue with Postgres (which I'm using), but it seems irrelevant given that I am actually using the Tomcat pool.
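For reference, a minimal sketch of such an adapter (a reconstruction, not my exact code; the method signatures are assumed from the Vaadin 7 API described above):

import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;
import com.vaadin.data.util.sqlcontainer.connection.JDBCConnectionPool;

// Adapter that delegates to an existing Tomcat-managed DataSource.
public class TomcatBackedPool implements JDBCConnectionPool {
    private final DataSource dataSource;  // the Tomcat jdbc-pool DataSource

    public TomcatBackedPool(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    @Override
    public Connection reserveConnection() throws SQLException {
        return dataSource.getConnection();  // borrow from the Tomcat pool
    }

    @Override
    public void releaseConnection(Connection conn) {
        // Tomcat's pooled connections return to the pool on close();
        // SQLContainer appears to close them itself, so nothing to do here.
    }

    @Override
    public void destroy() {
        // Lifecycle of the underlying pool is managed by Tomcat, not Vaadin.
    }
}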
➜ Is implementing that interface the correct approach?
➜ If implementing that interface is the way to go, did I do so properly? Any other issues I should address?
My Tomcat pool is available via JNDI, so I'm not sure if I should be using the Vaadin class J2EEConnectionPool.
You should in fact be able to use J2EEConnectionPool, exactly as you described in your own answer above. I have successfully used J2EEConnectionPool with FreeformQuery, so I know this works. Unfortunately, there is apparently a bug in Vaadin's TableQuery implementation which caused the "connection has been closed" error you saw. See: http://dev.vaadin.com/ticket/12370
The ticket proposes a code change, but in my case I simply replaced the offending TableQuery with a FreeformQuery.
J2EEConnectionPool Is Not A Pool (a misnomer)
Having looked at the source code, it appears the J2EEConnectionPool class is misnamed: it is not an implementation of a pool. It merely draws on a javax.sql.DataSource object (obtained via JNDI if not provided to the constructor) to get java.sql.Connection objects.
The "Pool" part of the name must come from an assumption that the DataSource is backed by a connection pool.
Yes, Use J2EEConnectionPool
So, if a real connection pool is in use, such as the Tomcat JDBC Connection Pool available via a DataSource or JNDI, pass that DataSource or the JNDI info to an instance of J2EEConnectionPool for use with the Vaadin SQLContainer.
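For illustration, under the assumption that J2EEConnectionPool exposes both a DataSource constructor and a JNDI-name constructor (as described above), usage might look like this (tomcatDataSource and the JNDI name are placeholders):

import com.vaadin.data.util.sqlcontainer.connection.J2EEConnectionPool;

// Backed directly by an existing Tomcat jdbc-pool DataSource:
J2EEConnectionPool pool = new J2EEConnectionPool(tomcatDataSource);

// ...or resolved through JNDI by resource name:
J2EEConnectionPool jndiPool = new J2EEConnectionPool("java:comp/env/jdbc/MyDB");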
Forum Thread
See this discussion, “Any chance of bug fix for TableQuery+SQLContainer and Connection pools?”, on the Vaadin Forums.
Bug Ticket
See Ticket # 12370, “SQLContainer does not work with tomcat/BoneCP connection pool”, in the Vaadin issue tracker.
I have a long-running method which executes a large number of native SQL queries through the EntityManager (TopLink Essentials). Each query takes only milliseconds to run, but there are many thousands of them. This happens within a single EJB transaction. After 15 minutes, the database closes the connection, which results in the following error:
Exception [TOPLINK-4002] (Oracle TopLink Essentials - 2.1 (Build b02-p04 (04/12/2010))): oracle.toplink.essentials.exceptions.DatabaseException
Internal Exception: java.sql.SQLException: Closed Connection
Error Code: 17008
Call: select ...
Query: DataReadQuery()
at oracle.toplink.essentials.exceptions.DatabaseException.sqlException(DatabaseException.java:319)
.
.
.
RAR5031:System Exception.
javax.resource.ResourceException: This Managed Connection is not valid as the phyiscal connection is not usable
at com.sun.gjc.spi.ManagedConnection.checkIfValid(ManagedConnection.java:612)
In the JDBC connection pool I set is-connection-validation-required="true" and connection-validation-method="table", but this did not help.
I assumed that JDBC connection validation is there to deal with precisely this kind of error. I also looked at the TopLink extensions (http://www.oracle.com/technetwork/middleware/ias/toplink-jpa-extensions-094393.html) for some kind of timeout setting but found nothing. There is also the TopLink session configuration file (http://download.oracle.com/docs/cd/B14099_19/web.1012/b15901/sessions003.htm), but I don't think there is anything useful there either.
I don't have access to the Oracle DBA tables, but I think that Oracle closes connections after 15 minutes according to the setting in CONNECT_TIME profile variable.
Is there any other way to make TopLink or the JDBC pool to reestablish a closed connection?
The database is Oracle 10g, application server is Sun Glassfish 2.1.1.
All JPA implementations (running on a Java EE container) use a datasource with an associated connection pool to manage connectivity with the database.
The persistence context itself is associated with the datasource via an appropriate entry in persistence.xml. If you wish to change the connection timeout settings on the client-side, then the associated connection pool must be re-configured.
In Glassfish, the timeout settings associated with the connection pool can be reconfigured by editing the pool settings, as listed in the following links:
Changing timeout settings in GlassFish 3.1
Changing timeout settings in GlassFish 2.1
On the server side (whose settings, if lower than the client settings, would take precedence), the Oracle database can be configured with database profiles associated with user accounts. The idle_time and connect_time parameters of a profile constitute the timeout settings of importance in this aspect of the client-server interaction. If no profile has been set, then by default the timeout is unlimited.
Unless you've got some sort of RAC failover, when the connection is terminated, it will end the session and transaction.
The admins may have set some limits to prevent runaway transactions or a single job 'hogging' a connection in a pool. You generally don't want to lock a connection in a pool for an extended period.
If these queries aren't necessarily part of the same transaction, then you could try terminating and restarting a new connection.
Are you able to restructure your code so that it completes in under 15 minutes? A stored procedure running in the background may be able to do the job a lot quicker than dragging the results of thousands of operations over the network.
I see you set connection-validation-method="table" and is-connection-validation-required="true", but you do not mention that you specified the table to validate against: did you set validation-table-name="any_table_you_know_exists", providing the name of an existing table? validation-table-name="existing_table_name" is required.
See this article for more details on connection validation.
Related StackOverflow article with similar problem - he wants to flush the entire invalid connection pool.
I am using Tomcat's connection pooling with an Oracle database. It works fine, but when I use my application after a long time it gives a "connection reset" error. I am getting this error because the physical connection at the Oracle server is closed before the logical connection is closed at the Tomcat data source. So before getting a connection from the data source, I check its validity with the isValid(0) method of the Connection object, which returns false if the physical connection was closed. But I don't know how to remove that invalid connection object from the pool.
This could be because the DB server has a timeout that does not allow connections to live beyond a set time, or that kills them if it does not receive something indicating they are still valid. One way to fix this is to turn on keepalives, which basically ping the DB server to say the connection is still valid.
This is a pretty good link on Tomcat's DBCP configuration. Take a look at the section titled "Preventing dB connection pool leaks"; that looks like a good place to start.
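If the pool in question is Tomcat's jdbc-pool, an application-side keepalive can be approximated by validating idle connections on a schedule; a sketch (intervals are assumptions):

import org.apache.tomcat.jdbc.pool.PoolProperties;

PoolProperties p = new PoolProperties();
p.setTestWhileIdle(true);                   // validate connections while they sit idle in the pool
p.setValidationQuery("SELECT 1");           // use "select 1 from dual" on Oracle
p.setTimeBetweenEvictionRunsMillis(30000);  // run the idle check every 30 s
p.setMinEvictableIdleTimeMillis(60000);     // retire connections idle for over a minute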
I used a validationQuery while configuring the datasource in the server.xml file. It checks the validity of a connection by executing the query against the database before handing the connection to the application.
for Oracle:
validationQuery="select 1 from dual"
for MySQL (Connector/J treats a statement beginning with /* ping */ as a lightweight ping):
validationQuery="/* ping */"
Try closing it and reopening it if it's invalid. You would reinitialize it that way, so you won't need to remove it from the pool in order to reuse it.
If we want to dispose of an ill java.sql.Connection from the Tomcat JDBC connection pool, we can do so explicitly in the program: unwrap it into an org.apache.tomcat.jdbc.pool.PooledConnection, call setDiscarded(true), and finally close the JDBC connection. The ConnectionPool will remove the underlying connection once it has been returned (ConnectionPool.returnConnection(...)).
e.g.
PooledConnection pconn = conn.unwrap(PooledConnection.class);
pconn.setDiscarded(true);  // mark the underlying physical connection as unusable
conn.close();              // on return, the pool drops it instead of recycling it