We have a web application that uses C3P0 to pool our connections. We inject C3P0 as a data source into JdbcTemplate. You can see how we do this here:
<bean id="dataSourceDev" class="com.mchange.v2.c3p0.ComboPooledDataSource">
<property name="driverClass" value="${databasedev.driver}" />
<property name="jdbcUrl" value="${databasedev.url}"/>
<property name="user" value="${databasedev.username}"/>
<property name="password" value="${databasedev.password}"/>
<property name="initialPoolSize" value="5" />
<property name="minPoolSize" value="5" />
<property name="maxPoolSize" value="1000" />
<property name="acquireIncrement" value="5" />
<property name="maxStatements" value="1000" />
<property name="maxStatementsPerConnection" value="1000"/>
<property name="maxIdleTime" value="10800"/> <!-- 3 hours -->
</bean>
<bean id="jdbcTemplate" class="org.springframework.jdbc.core.JdbcTemplate">
<constructor-arg>
<ref bean="dataSourceDev" />
</constructor-arg>
</bean>
<bean id="someDaoBean" class="com.gedi.platform.dao.SomeDaoClass">
<property name="jdbcTemplate" ref="jdbcTemplate" />
</bean>
<bean id="someResourceClass" class="com.gedi.platform.SomeResourceClass">
<property name="someDao" ref="someDaoBean" />
</bean>
You can see that it's a Java EE web application - it uses Jetty as its application server. My question is, how does Jetty instantiate our beans, and how will that affect connection pooling? If we have dozens of users using the web site at different times, will all of these users be placed in the same connection pool? Or is there only one connection pool per client, in which every HTTP client creates new instances of Resource, DAO, JdbcTemplate and C3P0?
Am I being clear? What I want to have is one connection pool for all HTTP requests, regardless of whether they come from web browsers originating in Boston or New Zealand. That way, the connection pool is exerting its maximum effects. However, if a new connection pool is instantiated for every HTTP client, then the pooling doesn't end up being much of an improvement.
Edit
An important tidbit - We use the Jersey reference implementation of JAX-RS to produce a RESTful interface. So our servlet dispatches requests through Jersey which finds a suitable Resource class/method to handle them. I wonder whether Jersey re-instantiates these classes on every request, or keeps one instance of them at all times.
Neither Jersey nor Jetty is relevant here; Spring is what matters. In Spring, every bean (like your dataSourceDev, jdbcTemplate and someDaoBean) is a singleton by default. That means that when the Spring application context starts, it will create exactly one instance of each of them.
That means that no matter what uses your DataSource (web request, background job, etc.), the same instance (and thus the same connection pool) is used. You are right that if a connection pool were created for each request, it would not be much of an improvement. Actually, it would be much, much slower.
But in your case (and this is how 99% of web applications work) all code requiring database access will compete for and reuse the same connections (or wait if none are available). By the way, make sure your database can actually handle 1000 concurrent connections.
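For reference, a more conservative version of the pool definition from the question might look like this; the sizing values are purely illustrative and should be tuned to what your database can actually sustain:
<bean id="dataSourceDev" class="com.mchange.v2.c3p0.ComboPooledDataSource">
<!-- driver, URL and credentials as in the original definition -->
<property name="initialPoolSize" value="5" />
<property name="minPoolSize" value="5" />
<property name="maxPoolSize" value="50" /> <!-- illustrative; keep within your database's connection limit -->
<property name="acquireIncrement" value="5" />
<property name="maxIdleTime" value="1800" /> <!-- 30 minutes -->
</bean>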
Spring creates the beans and caches them, so unless you have declared a bean as prototype scoped (which creates a new instance each time the bean is requested), all beans are singletons by default. Jetty doesn't interfere.
When a request comes in, the DispatcherServlet catches it and hands it off to the appropriate handler. The handler is the same bean instance unless it has been declared as a prototype bean.
You understood the connection pool correctly. This is exactly why the concept was created. It doesn't matter where the request came from; the maximum number of connections to the database at any point in time will be the one you have defined in the maxPoolSize property.
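For completeness, scope is controlled on the bean definition itself; singleton is the default, and prototype gives you a new instance each time the bean is requested from the context. The bean names and class below are hypothetical, just to show the attribute:
<!-- singleton is the default; the scope attribute is shown here only for illustration -->
<bean id="singletonExample" class="com.example.SomeService" scope="singleton" />
<!-- prototype: a new instance every time the bean is requested from the context -->
<bean id="prototypeExample" class="com.example.SomeService" scope="prototype" />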
Related
I have controller, service and DAO classes, all singletons.
Dao Class:
@Autowired
JdbcTemplate jdbcTemplate;
@Override
public String addUsers(UserDTO userDto) throws Exception {
    System.out.println("JDBC TEMPLATE::" + jdbcTemplate);
    String query = "Insert into users values('" + userDto.getUserName() + "')";
    System.out.println(query);
    jdbcTemplate.update(query);
    return "success";
}
applicationContext.xml
<bean id="jdbcTemplate" class="org.springframework.jdbc.core.JdbcTemplate">
<property name="dataSource" ref="dataSource" />
</bean>
<bean id="dataSource"
class="org.springframework.jdbc.datasource.DriverManagerDataSource" >
<property name="driverClassName" value="com.mysql.jdbc.Driver" />
<property name="url" value="jdbc:mysql://localhost:3306/demo" />
<property name="username" value="" />
<property name="password" value="" />
</bean>
In the DAO class I am using JdbcTemplate, which is defined as a singleton, and the dataSource bean is also a singleton.
Now I have the following doubts:
1) If my JdbcTemplate is a singleton and the dataSource bean is a singleton, will they cause any problems for concurrent requests?
2) Is this the ideal way to create the JdbcTemplate bean and inject it into the DAO?
3) Should request scope only be used when a class holds instance variables?
In order to be able to work concurrently against your DB, I would suggest using connection pooling.
When multiple requests arrive concurrently, the connection pool will assign each of them a dedicated connection to work against the DB.
Of course, it is your responsibility to make sure you are not accessing the same "area" of your DB.
In MySQL I know there is a locking mechanism for such scenarios, but I would recommend doing deeper research.
There are two well-known connection pools:
Apache DBCP http://commons.apache.org/proper/commons-dbcp/
c3p0 http://www.mchange.com/projects/c3p0/
More detailed explanation:
Connection Pooling
It's a technique to allow multiple clients to make use of a cached set of shared and reusable connection objects providing access to a database.
Opening and closing database connections is an expensive process, so a connection pool improves the performance of executing commands against a database by maintaining ready-to-use connection objects in the pool.
It facilitates reuse of the same connection object to serve a number of client requests.
Every time a client request is received, the pool is searched for an available connection object and it's highly likely that it gets a free connection object.
Otherwise, either the incoming requests are queued or a new connection object is created and added to the pool (depending on how many connections are already there in the pool and how many the particular implementation and configuration can support).
As soon as a request finishes using a connection object, the object is given back to the pool from where it's assigned to one of the queued requests (based on what scheduling algorithm the particular connection pool implementation follows for serving queued requests).
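As a sketch, the DriverManagerDataSource from your applicationContext.xml could be replaced with a pooled DataSource such as DBCP's BasicDataSource; the jdbcTemplate bean can keep referring to the same dataSource id, and the sizing values below are placeholders, not recommendations:
<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
<property name="driverClassName" value="com.mysql.jdbc.Driver" />
<property name="url" value="jdbc:mysql://localhost:3306/demo" />
<property name="username" value="" />
<property name="password" value="" />
<property name="maxActive" value="20" /> <!-- illustrative pool size -->
<property name="maxWait" value="30000" /> <!-- wait up to 30 s for a free connection -->
</bean>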
I am using Spring Integration (2.2.0) with WebSphere (8.0.0.x), in order to send messages via JMS (Tibco EMS).
Communication between components is working fine, but we have observed huge latencies between the messaging hops. These are in line with what we see in the EMS logs:
2014-09-30 06:04:19.940 [user#host]: Destroyed consumer (connid=19202559, sessid=28728543, consid=328585032) on queue 'test.queue3.request'
2014-09-30 06:04:19.969 [user#host]: Created consumer (connid=19202564, sessid=28728551, consid=328585054) on queue 'test.queue2.request'
2014-09-30 06:04:20.668 [user#host]: Destroyed consumer (connid=19202562, sessid=28728549, consid=328585048) on queue 'test.queue1.request'
2014-09-30 06:04:20.733 [user#host]: Created consumer (connid=19202567, sessid=28728555, consid=328585071) on queue 'test.queue5.request'
2014-09-30 06:04:20.850 [user#host]: Destroyed consumer (connid=19202563, sessid=28728550, consid=328585051) on queue 'test.queue4.request'
2014-09-30 06:04:21.001 [user#host]: Destroyed consumer (connid=19202564, sessid=28728551, consid=328585054) on queue 'test.queue2.request'
2014-09-30 06:04:21.701 [user#host]: Created consumer (connid=19202571, sessid=28728561, consid=328585093) on queue 'test.queue3.request'
2014-09-30 06:04:21.762 [user#host]: Destroyed consumer (connid=19202567, sessid=28728555, consid=328585071) on queue 'test.queue5.request'
Apparently, consumers are constantly being destroyed and re-created. This is not only bad for EMS, but also it's killing the latency, as messages are not delivered until the consumer is back online.
This is how the consumers are defined:
<jee:jndi-lookup id="rawConnectionFactory" jndi-name="jms/QueueCF"/>
<bean id="jmsDestinationResolver"
class="org.springframework.jms.support.destination.JndiDestinationResolver"/>
<bean id="connectionFactory"
class="org.springframework.jms.connection.UserCredentialsConnectionFactoryAdapter"
p:targetConnectionFactory-ref="rawConnectionFactory"
p:username="${jms.internal.username}"
p:password="${jms.internal.password}"/>
<bean id="taskExecutor"
class="org.springframework.scheduling.commonj.WorkManagerTaskExecutor"
p:workManagerName="wm/mc"
p:resourceRef="false"/>
<bean id="transactionManager"
class="org.springframework.transaction.jta.WebSphereUowTransactionManager"/>
<bean id="adp1Container"
class="org.springframework.jms.listener.DefaultMessageListenerContainer"
p:taskExecutor-ref="taskExecutor"
p:destinationName="requestQueue1" p:connectionFactory-ref="connectionFactory"
p:destinationResolver-ref="jmsDestinationResolver"
p:transactionManager-ref="transactionManager" />
<jms:inbound-gateway id="jmsInAdapter1"
request-channel="adapter1logic" container="adp1Container" />
<channel id="adapter1logic" />
Update:
This behaviour is related to the use of the transaction manager.
If we specify the connection to the EMS server directly in Spring (indicating there the host, port, user, password), consumers are still constantly recreated, but for some reason these recreations are not affecting the end-to-end latencies. Connections are apparently being managed better in Spring than in WAS.
How can WAS be configured so that consumers start up as quickly as they do in Spring?
If, along with the previous change, I also remove the reference to the transaction manager in the DefaultMessageListenerContainer, consumers stop destroying and constructing altogether.
What could be the issue with WebSphere's transaction manager? Why are consumers destroying and constructing when WAS' transaction manager is in use? Is there any configuration that can be adjusted?
You should not see consumers being recycled like that unless your listener is throwing an exception. Container consumers are long-lived by default. I suggest you turn on DEBUG (or even TRACE) logging for the container to figure out what's going on.
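If you are using log4j, for example, a fragment along these lines in log4j.xml (assuming a log4j 1.x XML configuration) should surface the container's consumer and connection lifecycle events:
<logger name="org.springframework.jms.listener">
<level value="DEBUG" />
</logger>
<logger name="org.springframework.jms.connection">
<level value="DEBUG" />
</logger>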
I suggest wrapping your connection factory with the CachingConnectionFactory decorator and configuring the session caching strategy:
<bean id="cacheConnFactory"
class="org.springframework.jms.connection.CachingConnectionFactory">
<property name="targetConnectionFactory" ref="connectionFactory" />
<property name="cacheProducers" value="true" />
<property name="cacheConsumers" value="true" />
<property name="sessionCacheSize" value="10" />
</bean>
Use the above connection factory in your DMLC along with cacheLevel settings as follows:
<bean id="adp1Container"
class="org.springframework.jms.listener.DefaultMessageListenerContainer"
p:taskExecutor-ref="taskExecutor"
p:destinationName="requestQueue1"
p:connectionFactory-ref="cacheConnFactory"
p:destinationResolver-ref="jmsDestinationResolver"
p:transactionManager-ref="transactionManager">
<property name="sessionTransacted" value="true" />
<property name="cacheLevel" value="3" /> <!-- Consumer Level -->
</bean>
I am using Spring MVC with Spring 3.1. I have a web application that uses many REST services. One of these REST services takes up to an hour to respond - which I can not change. I have my timeout for the RestTemplate set up like this with the timeout set to 60 minutes:
<bean id="restTemplate" class="org.springframework.web.client.RestTemplate ">
<constructor-arg>
<bean class="org.springframework.http.client.CommonsClientHttpRequestFactory">
<property name="readTimeout" value="3600000" />
<property name="connectTimeout" value="3600000" />
</bean>
</constructor-arg>
</bean>
I would like to be able to set all of my other REST calls to a different set of timeouts. Any ideas on how to do this?
Thanks,
Tim
You can't do this on a method call basis. In other words, all calls on the restTemplate bean will use the same underlying ClientHttpRequestFactory. If you want different requests to use different timeout values, declare multiple RestTemplate beans and inject the appropriate ones in your beans.
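For example, a second template for the faster calls could be declared next to the slow one; the bean name fastRestTemplate and the timeout values below are just illustrative:
<bean id="fastRestTemplate" class="org.springframework.web.client.RestTemplate">
<constructor-arg>
<bean class="org.springframework.http.client.CommonsClientHttpRequestFactory">
<property name="readTimeout" value="30000" /> <!-- 30 seconds -->
<property name="connectTimeout" value="5000" /> <!-- 5 seconds -->
</bean>
</constructor-arg>
</bean>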
I'm using spring integration to invoke a service on the other end of an active mq. My config looks like:
<bean id="jmsConnectionFactory" class="org.springframework.jms.connection.CachingConnectionFactory">
<constructor-arg>
<bean class="org.apache.activemq.ActiveMQConnectionFactory"
p:brokerURL="${risk.approval.queue.broker}"
p:userName="${risk.approval.queue.username}"
p:password="${risk.approval.queue.password}"
/>
</constructor-arg>
<property name="reconnectOnException" value="true"/>
<property name="sessionCacheSize" value="100"/>
</bean>
<!-- create and close a connection to prepopulate the pool -->
<bean factory-bean="jmsConnectionFactory" factory-method="createConnection" class="javax.jms.Connection"
init-method="close" />
<integration:channel id="riskApprovalRequestChannel"/>
<integration:channel id="riskApprovalResponseChannel"/>
<jms:outbound-gateway id="riskApprovalServiceGateway"
request-destination-name="${risk.approval.queue.request}"
reply-destination-name="${risk.approval.queue.response}"
request-channel="riskApprovalRequestChannel"
reply-channel="riskApprovalResponseChannel"
connection-factory="jmsConnectionFactory"
receive-timeout="5000"/>
<integration:gateway id="riskApprovalService" service-interface="com.my.super.ServiceInterface"
default-request-channel="riskApprovalRequestChannel"
default-reply-channel="riskApprovalResponseChannel"/>
What I've noticed is that with this config the consumers created to grab the matching request from active mq never close. Every request increments the consumer count.
I can stop this from happening by adding
<property name="cacheConsumers" value="false" />
To the CachingConnectionFactory.
However according to the java docs for CachingConnectionFactory :
Note that durable subscribers will only be cached until logical
closing of the Session handle.
Which suggests that the session is never being closed.
Is this a bad thing? Is there a better way to stop the consumers from piling up?
Cheers,
Peter
First, you don't need the init-method on your factory-bean; it does nothing, because the connection factory hands out a single shared connection and calling close() on it is a no-op (CachingConnectionFactory is a subclass of SingleConnectionFactory).
Second, caching consumers is the default; sessions are never physically closed unless the number of cached sessions exceeds the sessionCacheSize (which you have set to 100).
When close() is called on a cached session, it is cached for reuse; that's what the caching connection factory is for - avoiding the overhead of session creation for every request.
If you don't want the performance benefit of caching sessions, producers and consumers, use the SingleConnectionFactory instead. See the JavaDoc for CachingConnectionFactory.
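A rough sketch of that alternative, reusing the target factory from the question (SingleConnectionFactory shares one connection but does not cache sessions, producers or consumers):
<bean id="jmsConnectionFactory" class="org.springframework.jms.connection.SingleConnectionFactory">
<constructor-arg>
<bean class="org.apache.activemq.ActiveMQConnectionFactory"
p:brokerURL="${risk.approval.queue.broker}"
p:userName="${risk.approval.queue.username}"
p:password="${risk.approval.queue.password}" />
</constructor-arg>
<property name="reconnectOnException" value="true"/>
</bean>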
Does the following work when using CachingConnectionFactory?
In your Spring config file, add cacheConsumers="false" to the connection factory configuration.
The default behaviour is true, which was causing a connection leak on the queue.
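Applied to the configuration from the question, that would look roughly like this:
<bean id="jmsConnectionFactory" class="org.springframework.jms.connection.CachingConnectionFactory">
<constructor-arg>
<bean class="org.apache.activemq.ActiveMQConnectionFactory"
p:brokerURL="${risk.approval.queue.broker}"
p:userName="${risk.approval.queue.username}"
p:password="${risk.approval.queue.password}" />
</constructor-arg>
<property name="reconnectOnException" value="true"/>
<property name="sessionCacheSize" value="100"/>
<property name="cacheConsumers" value="false"/> <!-- default is true -->
</bean>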
I'm using the Spring MVC to build a thin layer on top of a SQL Server database. When I began testing, it seems that it doesn't handle stress very well :). I'm using Apache Commons DBCP to handle connection pooling and the data source.
When I first attempted ~10-15 simultaneous connections, it used to hang and I'd have to restart the server (for dev I'm using Tomcat, but I'm gonna have to deploy on Weblogic eventually).
These are my Spring bean definitions:
<bean id="dataSource" destroy-method="close"
class="org.apache.commons.dbcp.BasicDataSource">
<property name="driverClassName" value="com.microsoft.sqlserver.jdbc.SQLServerDriver"/>
<property name="url" value="[...]"/>
<property name="username" value="[...]" />
<property name="password" value="[...]" />
</bean>
<bean id="partnerDAO" class="com.hp.gpl.JdbcPartnerDAO">
<constructor-arg ref="dataSource"/>
</bean>
<!-- + other beans -->
And this is how I use them:
// in the DAO
public JdbcPartnerDAO(DataSource dataSource) {
jdbcTemplate = new JdbcTemplate(dataSource);
}
// in the controller
@Autowired
private PartnerDAO partnerDAO;
// in the controller method
Collection<Partner> partners = partnerDAO.getPartners(...);
After reading around a little bit, I found the maxWait, maxActive and maxIdle properties for the BasicDataSource (from GenericObjectPool). Here comes the problem. I'm not sure how I should set them, performance-wise. From what I know, Spring should be managing my connections so I shouldn't have to worry about releasing them.
<bean id="dataSource" destroy-method="close"
class="org.apache.commons.dbcp.BasicDataSource">
<property name="driverClassName" value="com.microsoft.sqlserver.jdbc.SQLServerDriver"/>
<property name="url" value="[...]"/>
<property name="username" value="[...]" />
<property name="password" value="[...]" />
<property name="maxWait" value="30" />
<property name="maxIdle" value="-1" />
<property name="maxActive" value="-1" />
</bean>
First, I set maxWait, so that it wouldn't hang and instead throw an exception when no connection was available from the pool. The exception message was:
Could not get JDBC Connection; nested exception is org.apache.commons.dbcp.SQLNestedException: Cannot get a connection, pool error Timeout waiting for idle object
There are some long-running queries, but the exception was thrown regardless of the query complexity.
Then, I set maxActive and maxIdle so that it wouldn't throw the exceptions in the first place. The default values are 8 for maxActive and maxIdle (I don't understand why); if I set them to -1 there are no more exceptions thrown and everything seems to work fine.
Considering that this app should support a large number of concurrent requests, is it OK to leave these settings at infinite? Will Spring actually manage my connections, considering the errors I was receiving? Should I switch to C3P0, considering DBCP seems kinda dead?
The DBCP maxWait parameter is defined in milliseconds. 30 ms is a very low value; consider increasing it to 30000 ms (30 seconds) and trying again.
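In the bean definition from the question that would be, for example:
<property name="maxWait" value="30000" /> <!-- 30 seconds, expressed in milliseconds -->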
As you already found out, the default DBCP pool size is 8 connections, so if you want to run 9 simultaneous queries, one of them will be blocked. I suggest you connect to your database and run exec sp_who2, which will show you what is connected and active, and whether any queries are being blocked. You can then confirm whether the issue is in the DB or in your code.
As long as you are using Spring's JdbcTemplate family of objects your connections will be managed as you expect, and if you want to use a raw DataSource make sure you use DataSourceUtils to obtain a Connection.
One other suggestion: prior to Spring 3, don't ever use JdbcTemplate directly; stick to SimpleJdbcTemplate. You can still access the same methods using SimpleJdbcTemplate.getJdbcOperations(), but you should find yourself writing much nicer code using generics, and you remove the need to ever create JdbcTemplate/NamedParameterJdbcTemplate instances.
Let's change the perspective.
but the exception was thrown
regardless of the query complexity
It could be that the table or the records you are querying against have been locked (by some other active transaction), and hence the query times out.
Try running the same query from a SQL Server client; if it takes a long time, you can be sure that a table or record lock is causing this.