I am working with a Java web application which uses Hibernate 3 and Proxool for connection pooling. It mainly does file uploading, and the approximate load is 2000 file uploads by about 300 users per hour; sometimes the load gets higher. I am facing a problem of having a high number of active and inactive sessions on the Oracle side, and even after the system (WildFly server) is down, the sessions stay as they were.
I have checked the code and it always closes the Hibernate session in a finally block.
My actual problem is that the sessions on the Oracle side keep increasing, and after a while my application can't get database connections.
My Proxool file is as follows:
<proxool>
  <alias>piokms-conn</alias>
  <driver-url>jdbc:oracle:thin:@1.1.1.1:1521:orcl64</driver-url>
  <driver-class>oracle.jdbc.driver.OracleDriver</driver-class>
  <driver-properties>
    <property name="user" value="test" />
    <property name="password" value="test" />
  </driver-properties>
  <autocommit>false</autocommit>
  <simultaneous-build-throttle>150</simultaneous-build-throttle>
  <minimum-connection-count>200</minimum-connection-count>
  <maximum-connection-count>800</maximum-connection-count>
  <maximum-connection-lifetime>1200000</maximum-connection-lifetime>
  <maximum-active-time>600000</maximum-active-time>
  <house-keeping-test-sql>SELECT 1 FROM DUAL</house-keeping-test-sql>
  <statistics>5m,15m,1d</statistics>
  <statistics-log-level>ERROR</statistics-log-level>
  <fatal-sql-exception>Connection is closed,SQLSTATE=08003,Error opening socket. SQLSTATE=08S01,SQLSTATE=08S01</fatal-sql-exception>
  <fatal-sql-exception-wrapper-class>org.logicalcobwebs.proxool.FatalRuntimeException</fatal-sql-exception-wrapper-class>
  <verbose>false</verbose>
  <trace>true</trace>
</proxool>
Please help me to solve this.
Thanks
Also, I am not familiar with Proxool itself, but your description reminds me of behaviour observed with other frameworks where the connection pool was not properly terminated.
According to the documentation you have two options:
Using the ServletConfigurator
Or manually calling the shutdown method when your server terminates (see the sketch below).
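For the second option, here is a minimal sketch of a listener that shuts Proxool down when the web application stops. This is an assumption based on Proxool's ProxoolFacade API rather than your code, so check that the shutdown method matches your Proxool version:

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

import org.logicalcobwebs.proxool.ProxoolFacade;

public class ProxoolShutdownListener implements ServletContextListener {

    public void contextInitialized(ServletContextEvent sce) {
        // Nothing to do on startup; the pool is configured elsewhere (e.g. proxool.xml).
    }

    public void contextDestroyed(ServletContextEvent sce) {
        // Shut down all Proxool pools so the pooled Oracle sessions are
        // released when the application (or WildFly) stops.
        ProxoolFacade.shutdown(0);
    }
}

Register the class as a <listener> in web.xml, or use the ServletConfigurator described in the documentation, which does the equivalent cleanup for you.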
To verify this, just shut down your application and monitor Oracle to check whether the number of active sessions goes back down to the normal value.
I have checked the code and it always closes the Hibernate session in a finally block.
I assume you mean that the connection is returned to the pool, which only helps in the sense that connections are not leaking. They are normally not physically closed at that point.
SOLVED
Ultimately the solution from the similar question linked below worked for us. When we first tried it, our configuration parser was mangling the URL and stripping &connectTimeout=15000&socketTimeout=60000 from it, which invalidated that test.
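As a hedged illustration (host and database names are placeholders, not from the original post), those driver parameters live on the JDBC URL handed to the pool:

// Illustrative only: host and database name are placeholders.
// connectTimeout and socketTimeout are MySQL Connector/J URL properties, in milliseconds.
String jdbcUrl = "jdbc:mysql://mydb.example.com:3306/mydb"
        + "?connectTimeout=15000&socketTimeout=60000";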
I'm having trouble getting a Tomcat JDBC connection pool to fail over to a new DB server using Amazon RDS' Multi-AZ feature. When failover occurs, the application server gets hung up trying to borrow a connection from the pool. It's similar to this question, but the solution to that user's problem did not help me, so I suspect it's not quite the same: Configure GlassFish JDBC connection pool to handle Amazon RDS Multi-AZ failover
The sequence goes a bit like this:
A successful request is made; log output is as expected.
Failover is initiated (via "reboot with failover" in RDS).
A request is made that times out; log messages from the request appear as expected up until a connection is borrowed from the pool.
Subsequent requests generate no log messages; they time out as well.
After some amount of time, the daemon eventually starts printing log messages again as if it had successfully connected to the database and were performing operations. This can take just over 16 minutes, and the client has long since timed out.
If I wait about 50 minutes and try again, eventually the system finally accepts connections again.
Notes
If I tell Tomcat to shut down, there are exceptions in the logs about it being unable to clean up resources and about my servlet still processing a request. Most notable (in my opinion) is the following stack trace, indicating that at least one thread is stuck waiting on a socket read from MySQL.
java.net.SocketInputStream.socketRead0(Native Method)
java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
java.net.SocketInputStream.read(SocketInputStream.java:171)
java.net.SocketInputStream.read(SocketInputStream.java:141)
com.mysql.jdbc.util.ReadAheadInputStream.fill(ReadAheadInputStream.java:114)
com.mysql.jdbc.util.ReadAheadInputStream.readFromUnderlyingStreamIfNecessary(ReadAheadInputStream.java:161)
com.mysql.jdbc.util.ReadAheadInputStream.read(ReadAheadInputStream.java:189)
com.mysql.jdbc.MysqlIO.readFully(MysqlIO.java:3116)
com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3573)
com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3562)
com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4113)
com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2570)
com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2731)
com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2812)
com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2761)
com.mysql.jdbc.StatementImpl.execute(StatementImpl.java:894)
com.mysql.jdbc.StatementImpl.execute(StatementImpl.java:732)
org.apache.tomcat.jdbc.pool.PooledConnection.validate(PooledConnection.java:441)
org.apache.tomcat.jdbc.pool.PooledConnection.validate(PooledConnection.java:384)
org.apache.tomcat.jdbc.pool.ConnectionPool.borrowConnection(ConnectionPool.java:716)
org.apache.tomcat.jdbc.pool.ConnectionPool.borrowConnection(ConnectionPool.java:579)
org.apache.tomcat.jdbc.pool.ConnectionPool.getConnection(ConnectionPool.java:174)
org.apache.tomcat.jdbc.pool.DataSourceProxy.getConnection(DataSourceProxy.java:111)
<...>
If I restart Tomcat, things return to normal as soon as it comes back. Obviously, long term this is probably not a workable solution for maintaining uptime.
Environment Details
Database: MySQL (using mysql-connector-java 5.1.26) (server 5.5.53)
Tomcat 8.0.45
I've gone through several changes to my configuration while trying to solve this issue. At the time of this writing the following related settings are in place:
jre/lib/security/java.security -- I'm under the impression that the default networkaddress.cache.ttl for Oracle Java 1.8 is 30s when no security manager is installed. I set these to zero just to be positive this isn't the issue (a programmatic equivalent is sketched after these settings).
networkaddress.cache.ttl: 0
networkaddress.cache.negative.ttl: 0
connection pool settings
testOnBorrow: true
testOnConnect: true
testOnReturn: true
jdbc url parameters
connectTimeout:60000
socketTimeout:60000
autoReconnect:true
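For reference, a sketch of the programmatic equivalent of the java.security entries above; the class and method names here are made up for illustration:

import java.security.Security;

public class DnsCacheConfig {
    public static void disableDnsCaching() {
        // Same effect as the jre/lib/security/java.security entries above:
        // do not cache successful or negative DNS lookups.
        Security.setProperty("networkaddress.cache.ttl", "0");
        Security.setProperty("networkaddress.cache.negative.ttl", "0");
    }
}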
Update
Still no solution found.
Added logging to confirm that this was not a DNS caching issue. The IP address logged immediately before the timeout matches the IP address of the 'new' master RDS host.
For reference, the following block represents the properties I'm using to initialize my data source. I'm configuring the pool in code rather than via JNDI, with some values pulled from our app's config file. I've pasted the code below along with comments indicating what the config values are for the tests I've been running.
import org.apache.tomcat.jdbc.pool.PoolProperties;

PoolProperties p = new PoolProperties();
p.setUrl(config.get("JDBC_URL")); // jdbc:mysql://host:3306/dbname
p.setDriverClassName("com.mysql.jdbc.Driver");
p.setUsername(config.get("JDBC_USER"));
p.setPassword(config.get("JDBC_PASSWORD"));
p.setJmxEnabled(true);
p.setTestWhileIdle(false);
p.setTestOnBorrow(true);
p.setValidationQuery("SELECT 1");
p.setValidationInterval(30000);
p.setTimeBetweenEvictionRunsMillis(30000);
p.setMaxActive(Integer.parseInt(config.get("MAX_ACTIVE"))); //45
p.setInitialSize(10);
p.setMaxWait(5);
p.setRemoveAbandonedTimeout(Integer.parseInt(config.get("REMOVE_ABANDONED_TIMEOUT"))); //600
p.setMinEvictableIdleTimeMillis(Integer.parseInt(config.get("DB_EVICTION_TIMEOUT"))); //60000
p.setMinIdle(Integer.parseInt(config.get("DB_MIN_IDLE"))); //50
p.setLogAbandoned(Boolean.parseBoolean(config.get("LOG_ABANDONED"))); //true
p.setRemoveAbandoned(true);
p.setJdbcInterceptors(
"org.apache.tomcat.jdbc.pool.interceptor.ConnectionState;"+
"org.apache.tomcat.jdbc.pool.interceptor.StatementFinalizer;"+
"org.apache.tomcat.jdbc.pool.interceptor.ResetAbandonedTimer");
// make sure new connections have auto commit true
p.setDefaultAutoCommit(true);
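For context, a minimal sketch of how such a PoolProperties object is typically handed to the Tomcat JDBC pool; this part is an assumption and not copied from the application:

import org.apache.tomcat.jdbc.pool.DataSource;

// Wrap the properties in the Tomcat JDBC DataSource; connections are then
// borrowed through the standard javax.sql.DataSource contract.
DataSource datasource = new DataSource();
datasource.setPoolProperties(p);
// java.sql.Connection con = datasource.getConnection();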
Whenever I leave my application idle for 10 or 15 hours, I get a connection timeout error. But when I use the application frequently, I never see this error. Could anyone please tell me whether I am doing something wrong in the configuration below? This application is used by only two users, and not frequently at that.
<Context path="/****" reloadable="true">
<Resource
name="XXXX"
type="javax.sql.DataSource"
username="XXXX"
password="XXXX"
driverClassName="com.microsoft.sqlserver.jdbc.SQLServerDriver"
maxIdle="4"
maxWait="30000"
initialSize="2"
url="jdbc:sqlserver://localhost;database=XXXX"
maxActive="20"/>
</Context>
Unfortunately there is not much information given in your example. Assuming that this is a small piece of a Spring (Spring Framework) context configuration, I would prefer to configure connection pooling for the database. Such a pool holds a number of idle connections and can open new ones if no connection is available in the pool for the current request. I found this piece of XML in the MySQL documentation.
<bean id="dataSource" destroy-method="close"
class="org.apache.commons.dbcp.BasicDataSource">
<property name="driverClassName" value="${db.driver}"/>
<property name="url" value="${db.jdbcurl}"/>
<property name="username" value="${db.username}"/>
<property name="password" value="${db.password}"/>
<property name="initialSize" value="3"/>
</bean>
Please be aware that the datasource is of type org.apache.commons.dbcp.BasicDataSource. For further information please refer to the documentation of MySQL, Spring Framework, and DBCP.
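If you are not using Spring, the same pool can be configured programmatically. Here is a minimal sketch using commons-dbcp; the values are illustrative, and the validation settings are what help with connections the server drops after long idle periods:

import org.apache.commons.dbcp.BasicDataSource;

public class PooledDataSourceFactory {
    public static BasicDataSource create() {
        BasicDataSource ds = new BasicDataSource();
        ds.setDriverClassName("com.microsoft.sqlserver.jdbc.SQLServerDriver");
        ds.setUrl("jdbc:sqlserver://localhost;database=XXXX");
        ds.setUsername("XXXX");
        ds.setPassword("XXXX");
        ds.setInitialSize(2);
        ds.setMaxIdle(4);
        ds.setTestOnBorrow(true);          // validate a connection before handing it out
        ds.setValidationQuery("SELECT 1"); // cheap query used for validation
        return ds;
    }
}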
Based on the information provided in the question, I can see:
Most probably the application is not handling connections properly, i.e. it creates connections but then somehow fails to close them properly in the application code.
As per the question, if only two users are using this app, you probably won't catch the connection-exhausted error, which is usually thrown when the connection pool reaches its maximum limit.
But if you increase the number of users, you will see the pool reach its maximum limit very quickly.
Check the number of connections created after the application starts and observe the behaviour. Check whether inactive connections are getting closed or not.
Also, I don't see an inactive-connection timeout property in the configuration. Can you check whether you have set one?
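As a hedged illustration of what closing connections properly looks like in application code (the DataSource and the query are placeholders, not taken from the question):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

public class ConnectionUsageExample {

    // try-with-resources guarantees the connection is returned to the pool
    // even when the query throws; the DataSource and the SQL are placeholders.
    static void readSomething(DataSource dataSource) throws SQLException {
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement("SELECT 1");
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                // consume the result here
            }
        } // close() is called automatically, returning the connection to the pool
    }
}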
This is an issue with databases: connections get shut down/locked up after a long idle period. In MySQL the default is 8 hours (wait_timeout). You have to use a connection pooling library like c3p0, which has configuration options to talk to the database and re-establish the connection if it has been closed.
It is possible to increase the timeout in the database, but that is not recommended. Therefore, go for a library like the one mentioned above, which can re-establish the connection for you.
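A minimal c3p0 sketch along those lines; the values and the MySQL URL are illustrative, while the property names come from c3p0's documented configuration:

import java.beans.PropertyVetoException;

import com.mchange.v2.c3p0.ComboPooledDataSource;

public class C3p0Example {
    static ComboPooledDataSource newPool() throws PropertyVetoException {
        // Illustrative values; the idle test keeps pooled connections usable
        // past MySQL's default 8-hour wait_timeout by validating them periodically.
        ComboPooledDataSource cpds = new ComboPooledDataSource();
        cpds.setDriverClass("com.mysql.jdbc.Driver");
        cpds.setJdbcUrl("jdbc:mysql://localhost:3306/mydb");
        cpds.setUser("user");
        cpds.setPassword("password");
        cpds.setIdleConnectionTestPeriod(300);   // seconds between tests of idle connections
        cpds.setPreferredTestQuery("SELECT 1");  // cheap validation query
        cpds.setTestConnectionOnCheckin(true);   // also test when a connection is returned
        return cpds;
    }
}

With the idle test in place, connections killed by wait_timeout are detected and replaced instead of being handed to the application.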
I'm using WildFly 8.2 and fire a series of DB requests when a certain web page is opened. All queries are invoked through the JPA Criteria API, return results as expected, and none of them produces a warning, error, or exception. It all runs in Parallel Plesk.
Now, I have noticed that within 2 to 3 days the following error appears and the site becomes unresponsive. I restart and wait approximately another 3 days until it happens again (depending on the number of requests I get).
I checked tcpsndbuf on my Linux server and noticed it is constantly at its maximum unless I restart WildFly. Apparently it fails to release the connections.
The connections are managed by JPA/Hibernate and the WildFly container. I don't do any special or custom transaction handling (e.g. open, close, etc.); I leave it all to WildFly.
The MySQL driver I'm using is 5.1.21 (mysql-connector-java-5.1.21-bin.jar).
In standalone.xml I have defined the following datasource values (among others):
<transaction-isolation>TRANSACTION_READ_COMMITTED</transaction-isolation>
<pool>
  <min-pool-size>3</min-pool-size>
  <max-pool-size>10</max-pool-size>
</pool>
<statement>
  <prepared-statement-cache-size>32</prepared-statement-cache-size>
  <shared-prepared-statements>true</shared-prepared-statements>
</statement>
Has anyone experienced the same rise in tcpsndbuf values (or this error)? In case you require more config or log files, let me know. Thanks!
UPDATE
Despite additional timeout settings, it still runs into the hang, and it then uses 100% CPU time whenever the max tcpsndbuf is reached.
Try adding this Hibernate property:
<property name="hibernate.connection.release_mode">after_transaction</property>
By default, JTA mandates that connections be released after each statement, which is undesirable for most use cases. Most drivers don't allow multiplexing a connection over multiple XA transactions anyway.
Do you use OpenVZ? I think this question should be asked on Server Fault, as it is related to Linux configuration. You can read up on tcpsndbuf. You should count the open sockets and check the condition:
I have been trying to go through the c3p0 documentation but am not able to understand the 'testConnectionOnCheckin' property.
The docs say: "Connections are tested before they are included in pool".
Does this property mean that only new connections created by c3p0 are tested before they are included in the pool? What is the point of checking new connections? Wouldn't they generally be valid?
Also, a couple of days ago my application logs were showing the following:
[managed:2 unused:2 excluded:1]
And my application was throwing an exception for one particular connection, which I assume is the 'excluded' one. Is an 'excluded' connection counted in the pool, and can c3p0 hand it over to the application without checking its validity? If not, would setting 'testConnectionOnCheckin' test this excluded connection for validity before it is used by my application?
I apologize for asking so many questions; it's just that I am confused.
Thanks
Jitendra
testConnectionOnCheckin tests Connections after they are checked in by clients [i.e. via Connection.close()], but before they are reintegrated into the Connection pool. I'm not sure what documentation you are looking at, but see
http://www.mchange.com/projects/c3p0/#testConnectionOnCheckin
http://www.mchange.com/projects/c3p0/#configuring_connection_testing
I generally recommend testing Connections with a combination of idleConnectionTestPeriod and testConnectionsOnCheckIn (and a fast preferredTestQuery).
An "excluded" Connection is a Connection currently in use by a client, but which c3p0 has noticed is faulty. c3p0 marks these Connections to be destroyed rather than reintegrated into the pool when they are checked-in by the client.
I hope this helps!
I have a long-running method which executes a large number of native SQL queries through the EntityManager (TopLink Essentials). Each query takes only milliseconds to run, but there are many thousands of them. This happens within a single EJB transaction. After 15 minutes, the database closes the connection, which results in the following error:
Exception [TOPLINK-4002] (Oracle TopLink Essentials - 2.1 (Build b02-p04 (04/12/2010))): oracle.toplink.essentials.exceptions.DatabaseException
Internal Exception: java.sql.SQLException: Closed Connection
Error Code: 17008
Call: select ...
Query: DataReadQuery()
at oracle.toplink.essentials.exceptions.DatabaseException.sqlException(DatabaseException.java:319)
.
.
.
RAR5031:System Exception.
javax.resource.ResourceException: This Managed Connection is not valid as the phyiscal connection is not usable
at com.sun.gjc.spi.ManagedConnection.checkIfValid(ManagedConnection.java:612)
In the JDBC connection pool I set is-connection-validation-required="true" and connection-validation-method="table", but this did not help.
I assumed that JDBC connection validation is there to deal with precisely this kind of error. I also looked at the TopLink extensions (http://www.oracle.com/technetwork/middleware/ias/toplink-jpa-extensions-094393.html) for some kind of timeout setting but found nothing. There is also the TopLink session configuration file (http://download.oracle.com/docs/cd/B14099_19/web.1012/b15901/sessions003.htm), but I don't think there is anything useful there either.
I don't have access to the Oracle DBA tables, but I think Oracle closes connections after 15 minutes according to the CONNECT_TIME setting in the profile.
Is there any other way to make TopLink or the JDBC pool re-establish a closed connection?
The database is Oracle 10g, application server is Sun Glassfish 2.1.1.
All JPA implementations (running in a Java EE container) use a datasource with an associated connection pool to manage connectivity with the database.
The persistence context itself is associated with the datasource via an appropriate entry in persistence.xml. If you wish to change the connection timeout settings on the client-side, then the associated connection pool must be re-configured.
In Glassfish, the timeout settings associated with the connection pool can be reconfigured by editing the pool settings, as listed in the following links:
Changing timeout settings in GlassFish 3.1
Changing timeout settings in GlassFish 2.1
On the server side (whose settings, if lower than the client settings, are the more important ones), the Oracle database can be configured with database profiles associated with user accounts. The IDLE_TIME and CONNECT_TIME parameters of a profile constitute the timeout settings of importance in this aspect of the client-server interaction. If no profile has been set, then by default the timeout is unlimited.
Unless you've got some sort of RAC failover, when the connection is terminated, it will end the session and transaction.
The admins may have set some limits to prevent runaway transactions or a single job 'hogging' a connection in the pool. You generally don't want to lock a connection in a pool for an extended period.
If these queries aren't necessarily part of the same transaction, then you could try terminating and restarting a new connection.
Are you able to restructure your code so that it completes in under 15 minutes? A stored procedure running in the background may be able to do the job a lot quicker than dragging the results of thousands of operations over the network.
I see you set connection-validation-method="table" and is-connection-validation-required="true", but you do not mention that you specified the table to validate against. Did you set validation-table-name="any_table_you_know_exists" and provide an existing table name? validation-table-name="existing_table_name" is required.
See this article for more details on connection validation.
Related Stack Overflow question with a similar problem - that user wants to flush the entire invalid connection pool.