Unable to fill pool (no buffer space available) - java

I'm using WildFly 8.2 and fire a series of DB requests when a certain web page is opened. All queries are invoked through the JPA Criteria API and return results as expected, and none of them produces a warning, error, or exception. It all runs inside Parallel Plesk.
Now, I noticed that within 2 to 3 days the error from the title appears and the site becomes unresponsive. I restart, and it takes roughly another 3 days until it happens again (depending on the number of requests).
I checked tcpsndbuf on my Linux server and noticed it is constantly at its maximum unless I restart WildFly. Apparently the connections are not being released.
The connections are managed by JPA/Hibernate and the WildFly container. I don't do any special or custom transaction handling (e.g. open, close, etc.); I leave it all to WildFly.
The MySQL driver I'm using is 5.1.21 (mysql-connector-java-5.1.21-bin.jar).
In standalone.xml I have defined the following datasource values (among others):
<transaction-isolation>TRANSACTION_READ_COMMITTED</transaction-isolation>
<pool>
    <min-pool-size>3</min-pool-size>
    <max-pool-size>10</max-pool-size>
</pool>
<statement>
    <prepared-statement-cache-size>32</prepared-statement-cache-size>
    <share-prepared-statements>true</share-prepared-statements>
</statement>
Has anyone experienced the same rise in tcpsndbuf values (or this error)? If you need more config or log files, let me know. Thanks!
UPDATE
Despite additional timeout settings, it still runs into the hang, and it then uses 100% CPU time whenever the maximum tcpsndbuf is reached.

Try adding this Hibernate property:
<property name="hibernate.connection.release_mode">after_transaction</property>
By default, JTA mandates that the connection be released after each statement, which is undesirable for most use cases. Most drivers don't allow multiplexing a connection over multiple XA transactions anyway.
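In a container-managed WildFly deployment this property normally goes into persistence.xml; if you bootstrap the EntityManagerFactory yourself instead, a minimal sketch of the programmatic equivalent looks like this (the persistence-unit name "example-pu" is a placeholder):
import java.util.HashMap;
import java.util.Map;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class ReleaseModeBootstrap {
    public static EntityManagerFactory create() {
        // Same property as above, passed programmatically instead of via persistence.xml.
        Map<String, Object> props = new HashMap<String, Object>();
        props.put("hibernate.connection.release_mode", "after_transaction");
        return Persistence.createEntityManagerFactory("example-pu", props);
    }
}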

Do you use OpenVZ? I think this question should be asked on Server Fault, since it is related to Linux configuration. You can read up on tcpsndbuf. You should count the open sockets and check the condition:

Related

Oracle active and inactive sessions are there even after system down

I am working with a Java web application which uses Hibernate 3 and Proxool for connection pooling. It mainly does file uploading, and the approximate load is 2000 file uploads by about 300 users per hour; sometimes the load gets higher. I am facing the problem of a high number of active and inactive sessions on the Oracle side, and even after the system (WildFly server) goes down, the sessions stay as they were.
I have checked the code and it always closes the Hibernate session in a finally block.
My actual problem is that the sessions on the Oracle side keep increasing and after a while my application can't get database connections.
My proxool configuration file is as follows:
<proxool>
    <alias>piokms-conn</alias>
    <driver-url>jdbc:oracle:thin:@1.1.1.1:1521:orcl64</driver-url>
    <driver-class>oracle.jdbc.driver.OracleDriver</driver-class>
    <driver-properties>
        <property name="user" value="test" />
        <property name="password" value="test" />
    </driver-properties>
    <autocommit>false</autocommit>
    <simultaneous-build-throttle>150</simultaneous-build-throttle>
    <minimum-connection-count>200</minimum-connection-count>
    <maximum-connection-count>800</maximum-connection-count>
    <maximum-connection-lifetime>1200000</maximum-connection-lifetime>
    <maximum-active-time>600000</maximum-active-time>
    <house-keeping-test-sql>SELECT 1 FROM DUAL</house-keeping-test-sql>
    <statistics>5m,15m,1d</statistics>
    <statistics-log-level>ERROR</statistics-log-level>
    <fatal-sql-exception>Connection is closed,SQLSTATE=08003,Error opening socket. SQLSTATE=08S01,SQLSTATE=08S01</fatal-sql-exception>
    <fatal-sql-exception-wrapper-class>org.logicalcobwebs.proxool.FatalRuntimeException</fatal-sql-exception-wrapper-class>
    <verbose>false</verbose>
    <trace>true</trace>
</proxool>
Please help me to solve this.
Thanks
Although I am not familiar with Proxool itself, your description reminded me of behaviour observed with other frameworks where the connection pool was not properly terminated.
According to the documentation you have two options:
Using the ServletConfigurator
Or manually calling the shutdown method when your server terminates (a sketch of this follows below).
To verify this problem, just shut down your application and monitor Oracle to check whether the number of active sessions goes down to the normal value.
"I have checked the code and it always closes the Hibernate session in a finally block."
I assume you mean that the connection is returned to the pool; that only ensures the connections are not leaking. They are normally not physically closed at that point.
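For the second option, a minimal sketch of a shutdown hook, assuming ProxoolFacade.shutdown(int) is available in your Proxool version (check the exact signature in the Proxool javadoc):
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import org.logicalcobwebs.proxool.ProxoolFacade;

// Register in web.xml with a <listener> element so the pools are shut down
// when the webapp stops, instead of leaving Oracle sessions behind.
public class ProxoolShutdownListener implements ServletContextListener {

    public void contextInitialized(ServletContextEvent sce) {
        // Pools are registered elsewhere (e.g. by the Proxool XML configurator).
    }

    public void contextDestroyed(ServletContextEvent sce) {
        // The argument is the delay (in ms) before in-use connections are forcibly closed.
        ProxoolFacade.shutdown(0);
    }
}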

Tomcat jdbc stalls during amazon multi-az failover

SOLVED
Ultimately, the solution from the similar ticket linked below worked for us. When we tried it initially, our configuration parser was mangling the URL and removing &connectTimeout=15000&socketTimeout=60000 from it, which invalidated that test.
I'm having trouble getting a Tomcat JDBC connection pool to fail over to a new DB server using Amazon RDS' multi-AZ feature. When failover occurs, the application server gets hung up trying to borrow a connection from the pool. It's similar to this question, but the solution to that user's problem did not help me, so I suspect it's not quite the same: Configure GlassFish JDBC connection pool to handle Amazon RDS Multi-AZ failover
The sequence goes a bit like this:
Successful request is made; log output as expected.
Failover initiated (via reboot with failover in RDS).
A request is made that times out; log messages from the request appear as expected up until a connection is borrowed from the pool.
Subsequent requests generate no log messages; they time out as well.
After some amount of time, the daemon eventually starts printing log messages again, as if it had successfully connected to the database and were performing operations. This can take just over 16 minutes; the client has long since timed out.
If I wait about 50 minutes and try again, the system eventually accepts connections again.
Notes
If I tell Tomcat to shut down, there are exceptions in the logs about it being unable to clean up resources and about my servlet still processing a request. Most notable (in my opinion) is the following stack trace, indicating that at least one thread is stuck waiting on a socket read from MySQL.
java.net.SocketInputStream.socketRead0(Native Method)
java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
java.net.SocketInputStream.read(SocketInputStream.java:171)
java.net.SocketInputStream.read(SocketInputStream.java:141)
com.mysql.jdbc.util.ReadAheadInputStream.fill(ReadAheadInputStream.java:114)
com.mysql.jdbc.util.ReadAheadInputStream.readFromUnderlyingStreamIfNecessary(ReadAheadInputStream.java:161)
com.mysql.jdbc.util.ReadAheadInputStream.read(ReadAheadInputStream.java:189)
com.mysql.jdbc.MysqlIO.readFully(MysqlIO.java:3116)
com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3573)
com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3562)
com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4113)
com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2570)
com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2731)
com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2812)
com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2761)
com.mysql.jdbc.StatementImpl.execute(StatementImpl.java:894)
com.mysql.jdbc.StatementImpl.execute(StatementImpl.java:732)
org.apache.tomcat.jdbc.pool.PooledConnection.validate(PooledConnection.java:441)
org.apache.tomcat.jdbc.pool.PooledConnection.validate(PooledConnection.java:384)
org.apache.tomcat.jdbc.pool.ConnectionPool.borrowConnection(ConnectionPool.java:716)
org.apache.tomcat.jdbc.pool.ConnectionPool.borrowConnection(ConnectionPool.java:579)
org.apache.tomcat.jdbc.pool.ConnectionPool.getConnection(ConnectionPool.java:174)
org.apache.tomcat.jdbc.pool.DataSourceProxy.getConnection(DataSourceProxy.java:111)
<...>
If I restart Tomcat, things return to normal as soon as it comes back. Obviously, in the long term this is not a workable solution for maintaining uptime.
Environment Details
Database: MySQL (using mysql-connector-java 5.1.26) (server 5.5.53)
Tomcat 8.0.45
I've gone through several changes to my configuration while trying to solve this issue. At the time of this writing the following related settings are in place:
jre/lib/security/java.security -- I'm under the impression that the default DNS cache TTL for Oracle Java 1.8 is 30s when no security manager is installed. I set these to zero just to be positive this isn't the issue.
networkaddress.cache.ttl: 0
networkaddress.cache.negative.ttl: 0
connection pool settings
testOnBorrow: true
testOnConnect: true
testOnReturn: true
jdbc url parameters
connectTimeout:60000
socketTimeout:60000
autoReconnect:true
Update
Still no solution found.
Added logging to confirm that this was not a DNS caching issue. The IP address logged immediately before the timeout matches the IP address of the 'new' master RDS host.
For reference the following block represents the properties I'm using to initialize my data source. I’m configuring the pool in code rather than JNDI, with some elements pulled out of our app's config file. I’ve pasted the code below along with comments indicating what the config values are for the tests I’ve been running.
PoolProperties p = new PoolProperties();
p.setUrl(config.get("JDBC_URL")); // jdbc:mysql://host:3306/dbname
p.setDriverClassName("com.mysql.jdbc.Driver");
p.setUsername(config.get("JDBC_USER"));
p.setPassword(config.get("JDBC_PASSWORD"));
p.setJmxEnabled(true);
p.setTestWhileIdle(false);
p.setTestOnBorrow(true);
p.setValidationQuery("SELECT 1");
p.setValidationInterval(30000);
p.setTimeBetweenEvictionRunsMillis(30000);
p.setMaxActive(Integer.parseInt(config.get("MAX_ACTIVE"))); //45
p.setInitialSize(10);
p.setMaxWait(5);
p.setRemoveAbandonedTimeout(Integer.parseInt(config.get("REMOVE_ABANDONED_TIMEOUT"))); //600
p.setMinEvictableIdleTimeMillis(Integer.parseInt(config.get("DB_EVICTION_TIMEOUT"))); //60000
p.setMinIdle(Integer.parseInt(config.get("DB_MIN_IDLE"))); //50
p.setLogAbandoned(Boolean.parseBoolean(config.get("LOG_ABANDONED"))); //true
p.setRemoveAbandoned(true);
p.setJdbcInterceptors(
"org.apache.tomcat.jdbc.pool.interceptor.ConnectionState;"+
"org.apache.tomcat.jdbc.pool.interceptor.StatementFinalizer;"+
"org.apache.tomcat.jdbc.pool.interceptor.ResetAbandonedTimer");
// make sure new connections have auto commit true
p.setDefaultAutoCommit(true);
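Since the fix ultimately came down to those two URL parameters surviving configuration parsing (see the SOLVED note at the top), here is a minimal sketch of the URL shape that ended up working; the host and database name are placeholders and the timeout values are the ones quoted above:
import org.apache.tomcat.jdbc.pool.PoolProperties;

public class FailoverUrlExample {
    static PoolProperties withFailoverTimeouts() {
        PoolProperties p = new PoolProperties();
        // Connector/J timeouts appended as URL parameters; these are the two
        // parameters the configuration parser was stripping.
        p.setUrl("jdbc:mysql://host:3306/dbname"
                + "?connectTimeout=15000"   // give up on TCP connect to a dead endpoint after 15 s
                + "&socketTimeout=60000");  // abort reads stuck on a half-open socket after 60 s
        return p;
    }
}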

Oracle database alert opiodr aborting process ORA-609

I am running a batch Java application. The application runs every 10 to 20 minutes in my production and UAT environments, and I get database alerts like this:
Thu Feb 06 15:15:08 2014
opiodr aborting process unknown ospid (28246400) as a result of ORA-609
After researching a bit on the internet, I found that the suggested fix for these alerts is to change INBOUND_CONNECT_TIMEOUT as follows:
Sqlnet.ora: SQLNET.INBOUND_CONNECT_TIMEOUT=180
Listener.ora: INBOUND_CONNECT_TIMEOUT_listener_name=120
We have changed the setting on the database server side, but I don't know what to change in the client application. We are using c3p0 to create a connection pool and we are setting only these parameters:
dataSource.setAcquireRetryDelay(30000);
dataSource.setMaxPoolSize(50);
dataSource.setMinPoolSize(20);
dataSource.setInitialPoolSize(10);
We have other web services running on the same server as the batch application; they use Tomcat's DBCP pool and don't seem to trigger any alerts. Also, strangely enough, our batch application doesn't generate the alerts in the lower test environments. They happen there once in a while, but the UAT and PROD environments get these alerts very frequently, in line with the schedule. Any suggestions on what to configure in the c3p0 pool, or should I try changing to another pool API like DBCP?
Update: I have added a few more parameters to the datasource and the frequency of alerts has dropped. I added the following, and the number of alerts has gone down from 15 an hour to 4 an hour.
dataSource.setLoginTimeout(120);
dataSource.setAcquireRetryAttempts(10);
dataSource.setCheckoutTimeout(60000);
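Putting the original settings and the ones from the update together, the c3p0 setup looks roughly like the sketch below (connection URL and credentials omitted). One detail worth double-checking is the units: setLoginTimeout() takes seconds, while setCheckoutTimeout() and setAcquireRetryDelay() take milliseconds.
import java.sql.SQLException;
import com.mchange.v2.c3p0.ComboPooledDataSource;

public class BatchPoolConfig {
    static ComboPooledDataSource configure() throws SQLException {
        ComboPooledDataSource dataSource = new ComboPooledDataSource();
        dataSource.setAcquireRetryDelay(30000);   // ms between failed acquire attempts
        dataSource.setMaxPoolSize(50);
        dataSource.setMinPoolSize(20);
        dataSource.setInitialPoolSize(10);
        dataSource.setLoginTimeout(120);          // seconds
        dataSource.setAcquireRetryAttempts(10);
        dataSource.setCheckoutTimeout(60000);     // ms a caller waits for a free connection
        return dataSource;                        // JDBC URL, user, and password omitted here
    }
}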
I moved to DBCP connection pooling and it seems to have fixed the issue. I tried changing a few more of the c3p0 settings mentioned above, but nothing changed; the alerts were reduced but never went away completely, so we decided to try DBCP. I am using all default values in DBCP except for the pool size. I'm using the Tomcat version of DBCP available in Tomcat's lib folder (tomcat-dbcp.jar).
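"All default values except for the pool size" might look roughly like the following sketch with Commons DBCP 2 (which is what tomcat-dbcp.jar repackages on Tomcat 8; on older Tomcats the repackaged class is the DBCP 1 BasicDataSource, where the pool size setter is setMaxActive). The driver class, connect string, and credentials are placeholders:
import org.apache.commons.dbcp2.BasicDataSource;

public class DbcpPoolConfig {
    static BasicDataSource createPool() {
        BasicDataSource ds = new BasicDataSource();
        ds.setDriverClassName("oracle.jdbc.OracleDriver");
        ds.setUrl("jdbc:oracle:thin:@dbhost:1521:orcl"); // placeholder connect string
        ds.setUsername("user");                          // placeholder credentials
        ds.setPassword("password");
        // Defaults everywhere except the pool size, as described above.
        ds.setInitialSize(10);
        ds.setMaxTotal(50);  // DBCP 2 name for the maximum pool size
        ds.setMinIdle(20);
        return ds;
    }
}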

Enabling connection checker and statements tracker on JBoss 5 app server

I am using a JBoss datasource (<local-tx-datasource>) against an IBM DB2 9.7 database. The connectivity works fine.
However, I have tried enabling the <track-statements>, <valid-connection-checker>, and <check-valid-connection-sql> properties, but how do I know they are working? I get no errors or warnings on the server console when using the datasource, but I also don't get any enhanced logging.
Below is a slightly modified version of what I am using. I have tried various property combinations. Below I have commented out the check-valid-connection-sql property but enabled the valid-connection-checker, stale-connection-checker, and exception-sorter properties. I am using JBoss 5 and I wonder if that has an impact.
I have tried, for example, removing a Hibernate session close statement from a finally clause, but nothing is logged. It just seems that the additional properties below are not taking effect for some reason, or at least nothing shows up in the JBoss server.log file.
<datasources>
    <local-tx-datasource>
        <jndi-name>[jndiname]</jndi-name>
        <connection-url>jdbc:db2://[ip]:[port]/[dbname]</connection-url>
        <driver-class>com.ibm.db2.jcc.DB2Driver</driver-class>
        <user-name>[user]</user-name>
        <password>[password]</password>
        <min-pool-size>10</min-pool-size>
        <valid-connection-checker class-name="org.jboss.jca.adapters.jdbc.extensions.db2.DB2ValidConnectionChecker"></valid-connection-checker>
        <stale-connection-checker class-name="org.jboss.jca.adapters.jdbc.extensions.db2.DB2StaleConnectionChecker"></stale-connection-checker>
        <exception-sorter class-name="org.jboss.jca.adapters.jdbc.extensions.db2.DB2ExceptionSorter"/>
        <track-statements>true</track-statements>
        <metadata>
            <type-mapping>DB2</type-mapping>
        </metadata>
    </local-tx-datasource>
</datasources>
If you have valid-connection-checker-class-name, check-valid-connection-sql, and validate-on-match (true by default) properly configured, you can test that they work by manually causing an outage on your database server and checking whether JBoss reconnects and refreshes the datasource pool with valid connections.
I would not recommend keeping track-statements enabled in production. It is mainly a debug feature to track whether you are closing statements and result sets properly in your code. Turn it on on dev or test servers to validate that you close them correctly (e.g. in a finally block, as sketched below). You can verify it by looking for messages like "Closing a result set you left open! Please close it yourself." in the server log.
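To make that concrete, this is the kind of resource handling track-statements is meant to verify; a minimal JDBC sketch (the query is a placeholder, and on Java 7+ a try-with-resources block achieves the same thing more compactly):
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

public class QueryDao {
    // Every Statement and ResultSet is closed in the finally block, and the
    // Connection is returned to the JBoss pool; this is what track-statements checks.
    public int countRows(DataSource ds) throws SQLException {
        Connection con = null;
        PreparedStatement ps = null;
        ResultSet rs = null;
        try {
            con = ds.getConnection();
            ps = con.prepareStatement("SELECT COUNT(*) FROM SYSIBM.SYSDUMMY1"); // placeholder query
            rs = ps.executeQuery();
            return rs.next() ? rs.getInt(1) : 0;
        } finally {
            if (rs != null) { try { rs.close(); } catch (SQLException ignore) { } }
            if (ps != null) { try { ps.close(); } catch (SQLException ignore) { } }
            if (con != null) { con.close(); }
        }
    }
}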

How to reestablish a JDBC connection after a timeout?

I have a long-running method which executes a large number of native SQL queries through the EntityManager (TopLink Essentials). Each query takes only milliseconds to run, but there are many thousands of them. This happens within a single EJB transaction. After 15 minutes, the database closes the connection, which results in the following error:
Exception [TOPLINK-4002] (Oracle TopLink Essentials - 2.1 (Build b02-p04 (04/12/2010))): oracle.toplink.essentials.exceptions.DatabaseException
Internal Exception: java.sql.SQLException: Closed Connection
Error Code: 17008
Call: select ...
Query: DataReadQuery()
at oracle.toplink.essentials.exceptions.DatabaseException.sqlException(DatabaseException.java:319)
.
.
.
RAR5031:System Exception.
javax.resource.ResourceException: This Managed Connection is not valid as the phyiscal connection is not usable
at com.sun.gjc.spi.ManagedConnection.checkIfValid(ManagedConnection.java:612)
In the JDBC connection pool I set is-connection-validation-required="true" and connection-validation-method="table", but this did not help.
I assumed that JDBC connection validation is there to deal with precisely this kind of error. I also looked at the TopLink extensions (http://www.oracle.com/technetwork/middleware/ias/toplink-jpa-extensions-094393.html) for some kind of timeout setting but found nothing. There is also the TopLink session configuration file (http://download.oracle.com/docs/cd/B14099_19/web.1012/b15901/sessions003.htm), but I don't think there is anything useful there either.
I don't have access to the Oracle DBA tables, but I think that Oracle closes connections after 15 minutes according to the CONNECT_TIME setting in the profile.
Is there any other way to make TopLink or the JDBC pool to reestablish a closed connection?
The database is Oracle 10g, application server is Sun Glassfish 2.1.1.
All JPA implementations (running on a Java EE container) use a datasource with an associated connection pool to manage connectivity with the database.
The persistence context itself is associated with the datasource via an appropriate entry in persistence.xml. If you wish to change the connection timeout settings on the client-side, then the associated connection pool must be re-configured.
In Glassfish, the timeout settings associated with the connection pool can be reconfigured by editing the pool settings, as listed in the following links:
Changing timeout settings in GlassFish 3.1
Changing timeout settings in GlassFish 2.1
On the server side (whose settings, if lower than the client settings, take precedence), the Oracle database can be configured with database profiles associated with user accounts. The IDLE_TIME and CONNECT_TIME parameters of a profile constitute the timeout settings of importance in this aspect of the client-server interaction. If no profile has been set, the timeout is unlimited by default.
Unless you've got some sort of RAC failover, when the connection is terminated, it will end the session and transaction.
The admins may have set some limits to prevent runaway transactions or a single job 'hogging' a connection in the pool. You generally don't want to hold a pooled connection for an extended period.
If these queries aren't necessarily part of the same transaction, you could try terminating the connection and starting a new one.
Are you able to restructure your code so that it completes in under 15 minutes? A stored procedure in the background may be able to do the job a lot quicker than dragging the results of thousands of operations over the network.
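If the queries really are independent, one way to stay under the 15-minute limit is to let the container commit in smaller units so no single connection is held for the whole run. A rough sketch of that idea, assuming an EJB 3 session bean (the bean and method names are made up for illustration; runChunk must be called through the bean's business interface, e.g. from a separate driver bean, for REQUIRES_NEW to take effect):
import java.util.List;
import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
public class ChunkedQueryRunner {

    @PersistenceContext
    private EntityManager em;

    // Each invocation runs in its own JTA transaction, so the underlying JDBC
    // connection is returned to the pool between chunks.
    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public void runChunk(List<String> nativeQueries) {
        for (String sql : nativeQueries) {
            em.createNativeQuery(sql).getResultList();
        }
    }
}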
I see you set connection-validation-method="table" and is-connection-validation-required="true", but you do not mention the table you are validating against. Did you set validation-table-name to an existing table? With table validation, validation-table-name="existing_table_name" is required.
See this article for more details on connection validation.
There is also a related Stack Overflow question with a similar problem, where the asker wants to flush the entire invalid connection pool.
