I seem to have stumbled across a weird behavior with tomcat 7 and connection pooling...
In my app, I have the following 3 data sources, all connecting to the same database but to different services (<db_ip_address> and <db_port> are the same across all 3):
jdbc:sybase:Tds:<db_ip_address>:<db_port>/service1
jdbc:sybase:Tds:<db_ip_address>:<db_port>/service2
jdbc:sybase:Tds:<db_ip_address>:<db_port>/service3
In my context.xml, I have the 3 data sources listed as separate resources as usual, with all necessary options set, including:
<Resource
name="jdbc/dbDataSource1"
type="javax.sql.DataSource"
driverClassName="com.sybase.jdbc3.jdbc.SybDriver"
factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"
maxActive="20"
initialSize="1"
minIdle="5"
maxIdle="10"
... />
<Resource
name="jdbc/dbDataSource2"
<!-- Rest is same as above -->
... />
<Resource
name="jdbc/dbDataSource3"
<!-- Rest is same as above -->
... />
What I have noticed is that, because the 3 data sources connect to the same database, Tomcat only seems to be creating one connection pool and sharing it between all 3. This can be seen at startup: if I change initialSize to, say, 10, the first 2 data sources are created with no problem, but on the 3rd I get an exception saying
java.sql.SQLException: JZ00L: Login failed.
Examine the SQLWarnings chained to this exception for the reason(s).
Am I missing something obvious here in how I've set up the connection pools? I have looked at the Tomcat documentation and material on global connection pools, but from what I can gather that seems to be about sharing connections between multiple apps.
Any help is much appreciated!
This does indeed look like too many idle connections. Try increasing the idle connection properties, or check whether you are closing all of the connections you open.
Please refer to this link
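On the second point, here is a minimal sketch of making sure every borrowed connection goes back to the pool (the JNDI name jdbc/dbDataSource1 is taken from the configuration above; the query is only a placeholder):

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class PoolUsageExample {
    public void runQuery() throws Exception {
        // Look up the pooled data source that Tomcat bound into JNDI
        InitialContext ctx = new InitialContext();
        DataSource ds = (DataSource) ctx.lookup("java:comp/env/jdbc/dbDataSource1");

        Connection conn = ds.getConnection();
        try {
            Statement stmt = conn.createStatement();
            try {
                ResultSet rs = stmt.executeQuery("SELECT 1"); // placeholder query
                while (rs.next()) {
                    // process the result
                }
                rs.close();
            } finally {
                stmt.close();
            }
        } finally {
            // Returning the connection to the pool here, even on error,
            // is what keeps the idle/active counts from growing without bound.
            conn.close();
        }
    }
}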
I experienced an outage with my application the other day and I need to understand how to avoid this in the future.
We have a Java based web application running on Tomcat 7. The application connected to several different data sources including an Oracle database.
Here are the details: the Oracle database server went down and had to be rebooted. My understanding is that this would have severed the application's connections to the database, and in fact users reported errors in the application.
The Oracle data source is set up in Tomcat's server.xml as a GlobalNamingResources resource:
<Resource name="datasource"
auth="Container"
type="javax.sql.DataSource"
factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"
....
initialSize="4"
minIdle="2"
maxIdle="8"
maxActive="8"
maxAge="28800000"
maxWait="30000"
testOnBorrow="false"
testOnReturn="false"
testWhileIdle="false"
validationQuery="SELECT 1 FROM dual"
validationQueryTimeout="10"
validationInterval="600000"
timeBetweenEvictionRunsMillis="60000"
minEvictableIdleTimeMillis="900000"
removeAbandoned="true"
removeAbandonedTimeout="60"
logAbandoned="true"
jmxEnabled="true" />
So here is what I understand regarding connection validation.
Connections are not validated while idle (testWhileIdle = false), when borrowed (testOnBorrow = false), when returned (testOnReturn = false)
The PoolSweeper is enabled because timeBetweenEvictionRunsMillis > 0, removeAbandoned is true, and removeAbandonedTimeout > 0
What confuses me is the inclusion of the validation query and the validationInterval > 0. Since all of the tests are disabled, does the pool sweeper then use the validation query to check the connections? Or is the validation query irrelevant?
So when the database server went down, I believe the connection pool would not have tried to reestablish connections because no validation tests are enabled. In my opinion, had testOnBorrow been enabled, then when the database server came back up, valid connections would have been established and the web application (meaning Tomcat) would not have required a restart.
Do I have a correct understanding of how connection validation works?
Let us take a look at the parts of your configuration that are relevant to keeping invalid connections out of your pool.
maxAge="28800000"
Connections, regardless of whether they are valid or not, will be kept for up to 8 hours. After 8 hours the connection is closed, and a new connection will be established if one is requested and no free connection is available in the pool. [1]
testOnBorrow="false"
testOnReturn="false"
testWhileIdle="false"
A connection in the pool will not be tested for validity when it is borrowed, when it is returned, or while it is idle. [1]
validationInterval="600000"
This property has no effect here, since all connection tests are set to false. The interval defines how often a connection may be validated at most. In your example, a given connection would be validated at most once every 10 minutes if one of the test properties were set to true. [1]
With your current configuration, an invalid connection can stay open for up to 8 hours. To enable validation of open connections you have to set at least one of the test properties (testOnBorrow, testOnReturn, testWhileIdle) to true.
Please note that with validationInterval="600000" a connection is validated at most once every 10 minutes. So an invalid connection could remain available in the pool for up to 10 minutes, regardless of which test property is set.
For more information about the individual properties please take a look at [1]: Apache Tomcat 7: The Tomcat JDBC Connection Pool.
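As a rough sketch of the suggested fix (enable at least one test property), here is the same pool configured programmatically instead of in server.xml; the URL, driver and credentials are placeholders:

import org.apache.tomcat.jdbc.pool.DataSource;
import org.apache.tomcat.jdbc.pool.PoolProperties;

public class ValidatedPool {
    public static DataSource create() {
        PoolProperties p = new PoolProperties();
        p.setUrl("jdbc:oracle:thin:@//dbhost:1521/service"); // placeholder URL
        p.setDriverClassName("oracle.jdbc.OracleDriver");    // placeholder driver
        p.setUsername("user");                               // placeholder
        p.setPassword("secret");                             // placeholder

        // Validate a connection whenever it is handed out...
        p.setTestOnBorrow(true);
        p.setValidationQuery("SELECT 1 FROM dual");
        p.setValidationQueryTimeout(10);
        // ...but at most once every 10 minutes per connection.
        p.setValidationInterval(600000);

        DataSource ds = new DataSource();
        ds.setPoolProperties(p);
        return ds;
    }
}

The same attributes (testOnBorrow, validationQuery, validationQueryTimeout, validationInterval) can simply be set on your existing Resource element in server.xml.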
You should lower the maxAge="28800000" parameter. In addition, if your instance crashes due to application errors, you should use a JDBC interceptor.
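For example (the choice of interceptors here is mine, not the answer's; these three ship with the Tomcat JDBC pool):

import org.apache.tomcat.jdbc.pool.PoolProperties;

public class InterceptorConfig {
    // Apply these PoolProperties to the DataSource as in the sketch above.
    public static PoolProperties withInterceptors(PoolProperties p) {
        // ConnectionState caches connection attributes (autoCommit, readOnly, ...),
        // StatementFinalizer closes statements the application forgot to close,
        // ResetAbandonedTimer keeps long-running but active connections from
        // being reclaimed as abandoned.
        p.setJdbcInterceptors(
                "org.apache.tomcat.jdbc.pool.interceptor.ConnectionState;"
              + "org.apache.tomcat.jdbc.pool.interceptor.StatementFinalizer;"
              + "org.apache.tomcat.jdbc.pool.interceptor.ResetAbandonedTimer");
        return p;
    }
}

The same semicolon-separated class list can also be set as a jdbcInterceptors attribute on the Resource element.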
I have a mysql database configured with a max_connections value of 150. I also have a Java 6 web application running in Tomcat 5.5 configured with the following setup:
<Resource name="jdbc/myDB"
type="javax.sql.DataSource"
driver="com.mysql.jdbc.Driver"
username="username"
password="password"
maxActive="100"
maxIdle="100"
maxWait="-1"
removeAbandoned="true"
removeAbandonedTimeout="300"
logAbandoned="true"
url="jdbc:mysql://localhost:3306/myDB?autoreconnect=true"
validationQuery="SELECT 1" />
This application is not using any 3rd party framework, just basic Java servlets. I have a bug in some code in the Java app that is not properly releasing opened MySQL connections back to the pool. I am working on identifying and fixing these. But in the meantime I need to figure out why only 25 connections at most are being allowed to MySQL. After these 25 connections are used up, the application becomes unresponsive.
Can someone please help me figure out why both MySQL and Tomcat are configured for 100+ connections but only 25 are allowed at a time?
What connection pool do you use? Consider the Tomcat JDBC Connection Pool rather than the Apache Commons pool: it has properties to detect connection leaks and to abandon connections that have been open for longer than the configured timeout.
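If you do switch to it, here is a minimal sketch of those leak-detection properties, configured programmatically (URL and credentials are placeholders; the same attribute names also work on a Resource element that uses org.apache.tomcat.jdbc.pool.DataSourceFactory):

import org.apache.tomcat.jdbc.pool.DataSource;
import org.apache.tomcat.jdbc.pool.PoolProperties;

public class LeakDetectingPool {
    public static DataSource create() {
        PoolProperties p = new PoolProperties();
        p.setUrl("jdbc:mysql://localhost:3306/myDB");  // placeholder URL
        p.setDriverClassName("com.mysql.jdbc.Driver");
        p.setUsername("username");                     // placeholder
        p.setPassword("password");                     // placeholder

        // Reclaim connections that have been checked out for more than
        // 300 seconds and log a stack trace of the code that borrowed them,
        // which points you at the leak.
        p.setRemoveAbandoned(true);
        p.setRemoveAbandonedTimeout(300);
        p.setLogAbandoned(true);
        // suspectTimeout (seconds; arbitrary value here) only logs the
        // suspect connection instead of closing it.
        p.setSuspectTimeout(120);

        DataSource ds = new DataSource();
        ds.setPoolProperties(p);
        return ds;
    }
}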
MySQL's max_connections was set to 150 but the max_user_connections was set to 25 which was the limiting factor here. I removed this setting from my.cnf to restore it to the default value of unlimited.
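For anyone hitting the same wall, one quick way to check the effective limits from the application side is to query the server variables; a small sketch (the URL and credentials are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ConnectionLimitCheck {
    public static void main(String[] args) throws Exception {
        Class.forName("com.mysql.jdbc.Driver"); // older Connector/J versions need this
        Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/myDB", "username", "password"); // placeholders
        try {
            Statement stmt = conn.createStatement();
            // Shows max_connections and max_user_connections.
            ResultSet rs = stmt.executeQuery("SHOW VARIABLES LIKE 'max%connections'");
            while (rs.next()) {
                System.out.println(rs.getString(1) + " = " + rs.getString(2));
            }
            rs.close();
            stmt.close();
        } finally {
            conn.close();
        }
    }
}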
Here is my current config
<Resource
name="jdbc/data"
auth="Container"
type="javax.sql.DataSource"
factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"
driverClassName="com.mysql.jdbc.Driver"
url="jdbc:mysql://localhost:3306/TABLE_NAME"
username="USER_NAME"
password="PASSWORD"
initialSize="10"
maxActive="50"
suspectTimeout="120"
minIdle="10"
maxIdle="20"
maxWait="1000"
testOnBorrow="true"
timeBetweenEvictionRunsMillis="30000"
minEvictableIdleTimeMillis="60000"
validationQuery="SELECT 1 FROM DUAL"
validationInterval="40"
removeAbandoned="true"
removeAbandonedTimeout="100"
/>
This is in global context so multiple apps can use it.
I am a little confused about the parameters and need some details.
What I understand is:
initialSize: the number of connections created when the pool is started.
maxActive: a maximum of 50 connections can be active at a time.
minIdle: 10 connections remain idle when they are not in use; the rest are closed after maxWait.
maxIdle: up to 20 connections can be kept as idle.
But when I start the Tomcat server I can see 30 IDLE connections, which remain forever. Why does this happen? Am I missing something? According to my understanding of connection pools, only 10 connections should be created and stay in IDLE mode. Are there any specific changes I have to make to MySQL's my.cnf?
When you say...
This is in global context so multiple apps can use it.
What specifically do you mean? Is it in $CATALINA_BASE/conf/server.xml in the GlobalNamingResources block or in $CATALINA_BASE/conf/context.xml?
Defining a Resource tag in the GlobalNamingResources block of $CATALINA_BASE/conf/server.xml will cause only one resource to be created across the entire server. This can then be shared to applications deployed on your system by adding a ResourceLink tag to the Context configuration.
Defining a Resource in $CATALINA_BASE/conf/context.xml will define the resource once for each application deployed to your Tomcat instance. Thus if you have three applications deployed, you'll end up with three separate resources. This is a guess, but it is probably why you are seeing 30 connections to your database server: three copies of the pool, each created with initialSize="10".
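In either case, application code reaches the pool the same way, through a JNDI lookup. A minimal sketch (the name jdbc/data comes from your Resource definition; a ResourceLink would typically expose it under the same name):

import javax.naming.Context;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class LookupExample {
    public DataSource lookupPool() throws Exception {
        // "java:comp/env" is the per-application JNDI namespace;
        // "jdbc/data" matches the name attribute of the Resource
        // (or of the ResourceLink pointing at the global resource).
        Context initCtx = new InitialContext();
        Context envCtx = (Context) initCtx.lookup("java:comp/env");
        return (DataSource) envCtx.lookup("jdbc/data");
    }
}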
I need to hook up a multi-instance queue manager on a Tomcat server. I have found all kinds of "properties" that I have to set to do it, but where do they go? Tomcat's server XML has some settings, but most of the settings called for in the IBM documentation do not map to them. Currently we have hooked up a "single" instance queue like this:
<Resource
name="jms/TelematicsQCF"
CHAN="JAVA.Z1LC.CLIENT"
HOST="blah.blah.com"
PORT="1111"
QMGR="MQB3"
TRAN="1" auth="Container"
description="JMS Queue Connection Factory for sending messages"
factory="com.ibm.mq.jms.MQQueueConnectionFactoryFactory"
type="com.ibm.mq.jms.MQQueueConnectionFactory"
/>
How do I hook up a multi-instance one? AND, Can I still use the Spring DefaultMessageListenerContainer? AND (o man...) what settings do I need?
I don't have much Tomcat knowledge, but I come from a WebSphere MQ background. Looking at the Context you provided, I think the below would work for a multi-instance queue manager.
I am setting CRHOSTS to multiple connection names. I am assuming the active instance of the queue manager runs on the host blah.blah.com and listens on port 1414, and the standby instance runs on b2.b3.com and listens on port 1544.
CROPT is the reconnect option and is set to WMQ_CLIENT_RECONNECT_Q_MGR, whose value is 67108864. You can find the values of these constants in the cmqc.h file.
CRT is the reconnection timeout value, which says how long the client will keep trying to reconnect. After the timeout period, the client stops reconnecting if no connection attempt was successful. In this case I have set the value to 500 seconds.
<Resource
name="jms/TelematicsQCF"
CHAN="JAVA.Z1LC.CLIENT"
CRHOSTS="blah.blah.com(1414), b2.b3.com(1544)"
CROPT="67108864"
CRT="500"
QMGR="MQB3"
TRAN="1" auth="Container"
description="JMS Queue Connection Factory for sending messages"
factory="com.ibm.mq.jms.MQQueueConnectionFactoryFactory"
type="com.ibm.mq.jms.MQQueueConnectionFactory"
/>
Hope this helps.
So the answer is this:
<Resource name="jms/XXXQCF1"
CHAN="TMAX.CHANNEL"
CRSHOSTS="blah1.example.com(1420),blah2.example.com(1420)"
CROPT="67108864"
CRT="500"
QMGR="tmax.lrd.qmgr.a"
TRAN="1"
auth="Container"
description="JMS Queue Connection Factory for sending messages"
factory="com.ibm.mq.jms.MQQueueConnectionFactoryFactory"
type="com.ibm.mq.jms.MQQueueConnectionFactory" />
Notice that Shashi above has "CRHOSTS", and the IBM documentation has the same; however, when we tried that it did not work. We put in a ticket to IBM and they said the documentation on their website is incorrect (and, by the way, they wanted a ticket to fix their docs!).
I tried Shashi's "CRHOSTS" and it did not work, while CRSHOSTS did. Not sure why that is. We also had to upgrade our jars to 7.5.*. I am not sure about "CROPT" and "CRT", but these settings work.
I have a Tomcat instance set up, but the database connection I have configured in context.xml keeps dying after periods of inactivity.
When I check the logs I get the following error:
com.mysql.jdbc.exceptions.jdbc4.CommunicationsException:
The last packet successfully received from the server was 68051 seconds
ago. The last packet sent successfully to the server was 68051 seconds
ago, which is longer than the server configured value of
'wait_timeout'. You should consider either expiring and/or testing
connection validity before use in your application, increasing the
server configured values for client timeouts, or using the Connector/J
connection property 'autoReconnect=true' to avoid this problem.
Here is the configuration in context.xml:
<Resource name="dataSourceName"
auth="Container"
type="javax.sql.DataSource"
maxActive="100"
maxIdle="30"
maxWait="10000"
username="username"
password="********"
removeAbandoned = "true"
logAbandoned = "true"
driverClassName="com.mysql.jdbc.Driver"
url="jdbc:mysql://127.0.0.1:3306/databasename?autoReconnect=true&useEncoding=true&characterEncoding=UTF-8" />
I am using autoReconnect=true like the error says to do, but the connection keeps dying. I have never seen this happen before.
I have also verified that all database connections are being closed properly.
Tomcat Documentation
DBCP uses the Jakarta-Commons Database Connection Pool. It relies on a number of Jakarta-Commons components:
* Jakarta-Commons DBCP
* Jakarta-Commons Collections
* Jakarta-Commons Pool
This attribute may help you out.
removeAbandonedTimeout="60"
I'm using the same connection pooling stuff, and I'm setting these properties to prevent the same thing; it's just not configured through Tomcat.
But if the first thing doesn't work, try these:
testWhileIdle=true
timeBetweenEvictionRunsMillis=300000
Just to clarify what is actually causing this: MySQL by default terminates open connections after 8 hours of inactivity. However, the database connection pool will retain connections for longer than that.
So by setting timeBetweenEvictionRunsMillis=300000 you are instructing the connection pool to run through the connections every 5 minutes and evict and close any that are idle.
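For reference, here is a minimal sketch of what those two properties do, shown with Commons DBCP (which, as the documentation quote above says, is what Tomcat's default DBCP resource factory is built on) configured programmatically. In your case they would simply be added as attributes on the Resource element; the URL and credentials below are placeholders:

import org.apache.commons.dbcp.BasicDataSource;

public class EvictionExample {
    public static BasicDataSource create() {
        BasicDataSource ds = new BasicDataSource();
        ds.setDriverClassName("com.mysql.jdbc.Driver");
        ds.setUrl("jdbc:mysql://127.0.0.1:3306/databasename"); // placeholder URL
        ds.setUsername("username");                            // placeholder
        ds.setPassword("password");                            // placeholder

        // Run the idle-connection evictor every 5 minutes, and while it runs
        // validate idle connections so that connections already killed by
        // MySQL's wait_timeout are dropped from the pool.
        ds.setTestWhileIdle(true);
        ds.setTimeBetweenEvictionRunsMillis(300000);
        ds.setValidationQuery("SELECT 1");
        return ds;
    }
}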
The removeAbandoned option is deprecated as of DBCP 1.2 (though still present in the 1.3 branch). Here's a non-official explanation.
I do not know whether the above answer does basically the same thing, but some of our systems use the DB connection about once a week and I've seen that we provide a -Otimeout flag or something of that sort to mysql to set the connection timeout.