<Resource name="myConn" auth="Container"
type="javax.sql.DataSource" driverClassName="oracle.jdbc.driver.OracleDriver"
url="jdbc:oracle:thin:@10.10.10.10:1521:mydb"
username="username" password="password" maxActive="500" maxIdle="50"
maxWait="-1" removeAbandoned="true" removeAbandonedTimeout="60" logAbandoned="true" accessToUnderlyingConnectionAllowed="true"
/>
I am trying to find the areas of the application where connections are NOT being closed. I added the removeAbandoned and logAbandoned settings to my context file, but if I check v$session on Oracle it still shows the same number of active connections even after 60 seconds. Is there something wrong with the configuration above?
I would set maxActive to a smaller value, like 50, and then check whether the configuration is working correctly.
According to the docs, the connection pool must be running low before the check for abandoned connections is executed:
When available db connections run low
DBCP will recover and recycle any
abandoned dB connections it finds.
I would also change removeAbandonedTimeout to 20 so that you don't have to wait too long to check whether the detector is working.
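To see the detector in action, you can leak a connection on purpose and watch the Tomcat logs. Here is a minimal sketch, assuming the Resource above is exposed to the web app under the JNDI name myConn; the servlet name and query are only illustrative:

import java.sql.Connection;
import javax.naming.InitialContext;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.sql.DataSource;

public class LeakProbeServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        try {
            DataSource ds = (DataSource) new InitialContext()
                    .lookup("java:comp/env/myConn");

            // Borrow a connection and deliberately never close it.
            Connection leaked = ds.getConnection();
            leaked.createStatement().execute("SELECT 1 FROM DUAL");

            // Once the connection has been idle longer than removeAbandonedTimeout
            // (and the pool is running low enough to trigger the check), the pool
            // should reclaim it and, with logAbandoned="true", log the stack trace
            // of the getConnection() call above.
            resp.getWriter().println("leaked one connection");
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}

If nothing shows up in the logs, lower maxActive and call the servlet enough times to push the pool near its limit, since the check only runs when available connections are scarce.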
I have a MySQL master/slave replication question that Google couldn't seem to answer. When using com.mysql.jdbc.ReplicationDriver, how does the driver handle failures on read replicas? Does it blacklist them, or does it just continue to try them and throw an exception each time (after whatever timeouts are configured)? From my testing it seems that my application just hangs when I kill a read replica. I'm using Tomcat and here is my context.xml...
<Resource auth="Container"
driverClassName="com.mysql.jdbc.ReplicationDriver"
defaultAutoCommit="false"
initialSize="10"
minIdle="5"
logAbandoned="false"
maxIdle="10"
maxWait="10000"
name="jdbc/db"
removeAbandoned="true"
testOnBorrow="true"
removeAbandonedTimeout="86400"
testWhileIdle="true"
type="javax.sql.DataSource"
factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"
username="powerptc"
password="password"
url="jdbc:mysql:replication://localhost:3306,host1,host2:3306/db?allowSlavesDownConnections=true&readFromMasterWhenNoSlaves=true"
validationQuery="/* ping */ SELECT 1"
validationQueryTimeout="5" />
Is there a way to have the driver blacklist a failed read replica (for x minutes) instead of just retrying it over and over again?
In this case the MySQL driver uses the load-balanced driver for the slaves and switches to the master only if picking a connection from the load-balanced cluster of slaves fails.
The application hangs because the default value of retriesAllDown is 120.
If you set retriesAllDown=4, the load balancer will sleep 4 times for 250 milliseconds before switching to the master.
By default loadBalanceBlacklistTimeout = 0, which means the load balancer for slaves does not use a blacklist. Even setting loadBalanceBlacklistTimeout > 0 does not help, because of the odd blacklist implementation, which is treated as empty once all hosts have been added to it. But you can use the following trick: use the serverAffinity strategy and put the master hostname in the slaves list, but set only the slaves as affinity servers.
My working URL is:
jdbc:mysql:replication://master:3306,slave1,slave2:3306/db?allowSlaveDownConnections=true&readFromMasterWhenNoSlaves=true&loadBalanceBlacklistTimeout=30000&retriesAllDown=4&loadBalanceStrategy=serverAffinity&serverAffinityOrder=slave1,slave2
As a result, the master will be used only if no slave is available.
I am using Tomcat 7 (JDK 1.6) in Eclipse 4.3.2.
I configured my connection pool as below:
<Resource name="jdbc/myDS" auth="Container" type="javax.sql.DataSource"
driverClassName="com.p6spy.engine.spy.P6SpyDriver"
url="jdbc:p6spy:oracle:thin:@server:1521:XXX"
username="XXX" password="XXX" maxActive="2" maxIdle="2" maxWait="-1"
validationInterval="30000" validationQuery="SELECT 1 FROM DUAL"
/>
I am using Spring 3.2.14, Hibernate 3.2.6-GA, CXF 2.7.
Every time I receive a SOAP request, I see in the P6Spy logs that the validation query is run, regardless of validationInterval and its description at https://tomcat.apache.org/tomcat-7.0-doc/jdbc-pool.html.
I was expecting the connections to be validated at most once every 30 seconds.
Is there anything wrong with my configuration, or is this a known bug?
The explanation is pretty simple: I did not read the documentation correctly. I need to set the factory to org.apache.tomcat.jdbc.pool.DataSourceFactory in order to use the "Tomcat high-concurrency connection pool".
After that, all parameters work as expected:
<Resource
factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"
name="jdbc/myDS" auth="Container" type="javax.sql.DataSource"
driverClassName="com.p6spy.engine.spy.P6SpyDriver"
url="jdbc:p6spy:oracle:thin:@server:1521:XXX"
username="XXX" password="XXX" maxActive="2" maxIdle="2" maxWait="-1"
testOnBorrow="true"
testWhileIdle="true"
timeBetweenEvictionRunsMillis="10000"
validationInterval="30000"
validationQuery="SELECT 1 FROM DUAL"
/>
The connections are validated at most once every validationInterval. An eviction thread runs every timeBetweenEvictionRunsMillis and validates idle connections (I chose to do this in order to save time when borrowing a connection).
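If you want to verify the throttling outside the container, the same settings can be expressed through the Tomcat JDBC pool's programmatic API. This is only a sketch mirroring the XML above; the class name and the placeholder URL/credentials are mine, not part of the original configuration:

import org.apache.tomcat.jdbc.pool.DataSource;
import org.apache.tomcat.jdbc.pool.PoolProperties;

public class PoolConfigSketch {
    public static DataSource build() {
        PoolProperties p = new PoolProperties();
        p.setDriverClassName("com.p6spy.engine.spy.P6SpyDriver");
        p.setUrl("jdbc:p6spy:oracle:thin:@server:1521:XXX"); // placeholder host/SID
        p.setUsername("XXX");
        p.setPassword("XXX");
        p.setMaxActive(2);
        p.setMaxIdle(2);
        p.setMaxWait(-1);
        p.setTestOnBorrow(true);
        p.setTestWhileIdle(true);
        p.setTimeBetweenEvictionRunsMillis(10000); // eviction thread every 10 s
        p.setValidationInterval(30000);            // validate at most once per 30 s
        p.setValidationQuery("SELECT 1 FROM DUAL");
        return new DataSource(p);
    }
}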
Well, that might be because you set the testOnBorrow parameter to true. This is taken from the documentation link you gave:
The indication of whether objects will be validated before being borrowed from the pool
So, I think you might want to set it to false.
I have a MySQL database configured with a max_connections value of 150. I also have a Java 6 web application running in Tomcat 5.5 configured with the following setup:
<Resource name="jdbc/myDB"
type="javax.sql.DataSource"
driver="com.mysql.jdbc.Driver"
username="username"
password="password"
maxActive="100"
maxIdle="100"
maxWait="-1"
removeAbandoned="true"
removeAbandonedTimeout="300"
logAbandoned="true"
url="jdbc:mysql://localhost:3306/myDB?autoreconnect=true"
validationQuery="SELECT 1" />
This application is not using any third-party framework, just basic Java servlets. I have a bug in some code in the Java app that is not properly releasing opened MySQL connections back to the pool. I am working on identifying and fixing these leaks. But in the meantime I need to figure out why only 25 connections at most are being allowed to MySQL. After these 25 connections are used up, the application becomes unresponsive.
Can someone please help me figure out why both MySQL and Tomcat are configured for 100+ connections, yet only 25 are allowed at a time?
What connection pool do you use? Do you use the Tomcat JDBC Connection Pool rather than the Apache Commons DBCP pool? It has properties to detect connection leaks and to abandon connections that have been open longer than the configured timeout.
MySQL's max_connections was set to 150, but max_user_connections was set to 25, which was the limiting factor here. I removed this setting from my.cnf to restore it to the default value of unlimited.
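If anyone else hits this, the effective limits can be checked from the application side before and after editing my.cnf. This is just a sketch; the URL and credentials are placeholders, and try-with-resources requires Java 7+ (on Java 6, close the resources in a finally block):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ConnectionLimitCheck {
    public static void main(String[] args) throws Exception {
        try (Connection c = DriverManager.getConnection(
                     "jdbc:mysql://localhost:3306/myDB", "username", "password");
             Statement s = c.createStatement();
             ResultSet rs = s.executeQuery(
                     "SHOW VARIABLES WHERE Variable_name IN "
                     + "('max_connections','max_user_connections')")) {
            while (rs.next()) {
                // max_user_connections = 0 means no per-user limit.
                System.out.println(rs.getString(1) + " = " + rs.getString(2));
            }
        }
    }
}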
Here is my current config
<Resource
name="jdbc/data"
auth="Container"
type="javax.sql.DataSource"
factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"
driverClassName="com.mysql.jdbc.Driver"
url="jdbc:mysql://localhost:3306/TABLE_NAME"
username="USER_NAME"
password="PASSWORD"
initialSize="10"
maxActive="50"
suspectTimeout="120"
minIdle="10"
maxIdle="20"
maxWait="1000"
testOnBorrow="true"
timeBetweenEvictionRunsMillis="30000"
minEvictableIdleTimeMillis="60000"
validationQuery="SELECT 1 FROM DUAL"
validationInterval="40"
removeAbandoned="true"
removeAbandonedTimeout="100"
/>
This is in global context so multiple apps can use it.
I am a little confused about the parameters and need some details.
What I understand is:
initialSize: the number of connections created when the pool is started.
maxActive: a maximum of 50 connections can be active at a time.
minIdle: 10 connections remain idle when no connections are in use; the rest are closed after maxWait.
maxIdle: up to 20 connections can be kept idle.
But when I start the Tomcat server I can see 30 idle connections which remain forever. Why does this happen? Am I missing something? From my understanding of connection pools, only 10 connections should be created and stay in idle mode. Are there any specific changes that I have to make to MySQL's my.cnf?
When you say...
This is in global context so multiple apps can use it.
What specifically do you mean? Is it in $CATALINA_BASE/conf/server.xml in the GlobalNamingResources block or in $CATALINA_BASE/conf/context.xml?
Defining a Resource tag in the GlobalNamingResources block of $CATALINA_BASE/conf/server.xml will cause only one resource to be created across the entire server. This can then be shared with applications deployed on your system by adding a ResourceLink tag to the Context configuration.
Defining a Resource in $CATALINA_BASE/conf/context.xml will define the resource once for each application deployed to your Tomcat instance. Thus, if you have three applications deployed, you'll end up with three separate resources. This is a guess, but it is probably why you are seeing 30 connections to your database server.
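For completeness, here is a sketch of how each application would then obtain the single shared pool through its ResourceLink. It assumes the global Resource keeps the name jdbc/data from the question; the class and method names are only illustrative:

import javax.naming.Context;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class SharedPoolLookup {
    public static DataSource lookup() throws Exception {
        Context envCtx = (Context) new InitialContext().lookup("java:comp/env");
        // Resolves through the app's ResourceLink to the one pool defined in
        // GlobalNamingResources, so every application shares the same connections.
        return (DataSource) envCtx.lookup("jdbc/data");
    }
}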
I have developed a Java application using connection pooling (DBCP) with SQL Server 2005. In my configuration file I have maxActive="500", but in some cases it exceeds 500 connections. Why? And the database is slow at that time.
<Resource
name="jdbc/tm4u"
auth="Container"
type="javax.sql.DataSource"
driverClassName="com.microsoft.sqlserver.jdbc.SQLServerDriver"
url="jdbc:sqlserver://XXXX;databaseName=XX;User=abc;Password=son;selectMethod=cursor"
username="abc"
password="son"
autoReconnect="true"
maxActive="500"
removeAbandoned="true"
logAbandoned="true"
removeAbandonedTimeout="60"
maxIdle="10"
/>
In your code, do you close the connections that you open? Doing so returns them to the pool to be reused, and there should be no performance degradation in that case. However, if more than 500 connections are needed at once, some requests will have to wait.
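For reference, here is a minimal sketch of the close-in-finally pattern described above. It assumes the pool is looked up under jdbc/tm4u; the class, method, and table names are only illustrative, and on Java 7+ a try-with-resources block achieves the same thing more concisely:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class OrderCountDao {
    public static int countOrders() throws Exception {
        DataSource ds = (DataSource) new InitialContext()
                .lookup("java:comp/env/jdbc/tm4u");
        Connection con = null;
        PreparedStatement ps = null;
        ResultSet rs = null;
        try {
            con = ds.getConnection();
            ps = con.prepareStatement("SELECT COUNT(*) FROM orders");
            rs = ps.executeQuery();
            return rs.next() ? rs.getInt(1) : 0;
        } finally {
            // Closing in finally guarantees the connection is returned to the pool
            // even when the query throws, so maxActive is never exhausted by leaks.
            if (rs != null) try { rs.close(); } catch (Exception ignore) {}
            if (ps != null) try { ps.close(); } catch (Exception ignore) {}
            if (con != null) try { con.close(); } catch (Exception ignore) {}
        }
    }
}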
Also see other questions on SO related to pooling.