I'm analyzing a heap dump in which the live set occupies about 50% of the heap. The space is retained largely by JDBC4Connection instances and their internal properties HashMaps. The dump comes from an application that had been running for a couple of days.
It looks like the heap is holding thousands of JDBC connection objects along with their configuration.
I found a similar question, but it was dismissed with the suggestion that the user was not closing connections: Too many instances of "com.mysql.jdbc.JDBC4Connection"
However, I'm using org.springframework.data.jpa.repository.JpaSpecificationExecutor.findAll rather than querying the database directly. Code:
Specification<AccountProfile> spec = getUserInfoListSurfacing(userInfo);
JPAImpl.findAll(spec, new PageRequest(0, 1, Sort.Direction.DESC, "reportedDate")).getContent()
Here is the bean definition for the connection pool:
<bean id="db" class="com.mchange.v2.c3p0.ComboPooledDataSource">
<property name="dataSourceName" value="users"/>
<property name="driverClass" value="${userdb.driver}"/>
<property name="forceUseNamedDriverClass" value="true"/>
<property name="jdbcUrl" value="${userdb.url}"/>
<property name="user" value="${userdb.username}"/>
<property name="password" value="${userdb.password}"/>
<property name="initialPoolSize" value="${userdb.hibernate.c3p0.initialPoolSize}"/>
<property name="maxPoolSize" value="${userdb.hibernate.c3p0.max_size}"/>
<property name="minPoolSize" value="${userdb.hibernate.c3p0.min_size}"/>
<property name="idleConnectionTestPeriod" value="${userdb.hibernate.c3p0.idle_test_period}"/>
<property name="maxStatements" value="${userdb.hibernate.c3p0.max_statements}"/>
<property name="maxIdleTime" value="${userdb.hibernate.c3p0.idle_test_period}"/>
<property name="preferredTestQuery" value="${userdb.hibernate.c3p0.validationQuery}"/>
<property name="testConnectionOnCheckout" value="${userdb.hibernate.c3p0.testOnBorrow}"/>
<property name="acquireIncrement" value="${userdb.hibernate.c3p0.acquireincrement}"/>
<property name="unreturnedConnectionTimeout"
value="${userdb.hibernate.c3p0.unreturnedConnectionTimeout}"/>
<property name="debugUnreturnedConnectionStackTraces" value="${userdb.hibernate.c3p0.debugUnreturnedConnectionStackTraces}"/>
<property name="maxConnectionAge" value="${userdb.hibernate.c3p0.maxconnectionage}"/>
<property name="numHelperThreads" value="${userdb.hibernate.c3p0.numHelperThreads}"/>
<property name="connectionCustomizerClassName" value="${userdb.hibernate.c3p0.connectionCustomizerClassName}"/>
</bean>
I have not been able to find confirmed reports of memory leaks in the Hibernate versions that I'm using.
I'm using spring-data-jpa version 2.0.6.Final
hibernate-core version 4.3.5.Final
hibernate-jpa-2.1-api version 1.0.2.Final
You show 224 instances of the object "managed". There is one instance of "managed" per Connection pool, so you have 224 Connection pools holding references to Connections, not just one. You have to understand why you are instantiating so many pools when a typical application just wants one.
The usual error that could cause this is to create a new Connection pool every time you mean to establish a new Connection. The less common cause is using the same DataSource to establish Connections under multiple authentications using dataSource.getConnection( user, password ). A new Connection pool is established for each distinct authentication.
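For illustration, the anti-pattern looks roughly like this (a hypothetical sketch; the class and method names are made up), with a brand-new ComboPooledDataSource built for every "connection":
import java.sql.Connection;
import com.mchange.v2.c3p0.ComboPooledDataSource;

public class LeakyDao {
    // BAD: every call builds (and never closes) a whole ComboPooledDataSource,
    // and each pool keeps its own set of JDBC4Connection objects alive.
    public Connection openConnection(String url, String user, String password) throws Exception {
        ComboPooledDataSource pool = new ComboPooledDataSource();
        pool.setJdbcUrl(url);
        pool.setUser(user);
        pool.setPassword(password);
        return pool.getConnection();
    }
}
The fix is to build one pool (here, your Spring-managed bean) and share it; every caller should only ever call getConnection() on that single DataSource.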
Since you are not directly instantiating your Connection pool but letting Spring do it, it's not so easy to debug why you are instantiating so many pools. One thing you should definitely add is a destroy-method attribute to your bean XML tag. That is...
<bean id="db" class="com.mchange.v2.c3p0.ComboPooledDataSource" destroy-method="close">
...
</bean>
You could be generating all these pools just by hot-redeploying your app a lot and failing to clean up the old pools, because Spring doesn't know it has to close your beans. See Spring's Destruction Callbacks.
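If you ever switch to Java configuration, the equivalent is to declare the destroy method on the @Bean so Spring closes the pool when the context shuts down. A sketch, with placeholder URL and credentials:
import com.mchange.v2.c3p0.ComboPooledDataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class DataSourceConfig {
    // Spring calls close() on context shutdown, releasing the pool's Connections.
    @Bean(destroyMethod = "close")
    public ComboPooledDataSource db() throws Exception {
        ComboPooledDataSource ds = new ComboPooledDataSource();
        ds.setDriverClass("com.mysql.jdbc.Driver");          // throws PropertyVetoException
        ds.setJdbcUrl("jdbc:mysql://localhost:3306/users");  // placeholder URL
        ds.setUser("user");                                  // placeholder credentials
        ds.setPassword("password");
        return ds;
    }
}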
Related
We use a Java SSH (Struts/Spring/Hibernate) stack. Recently we have hit the following problem frequently and are not sure what is happening; I have searched many times and found no similar Shiro-related scenario. We use Shiro as the authentication framework and have customized the SessionDAO, including session operations like doCreate, doUpdate, etc. We even have this configuration in applicationContext.xml:
<tx:method name="do*" propagation="REQUIRES_NEW" />
the trace:
2018-01-22 18:02:03.482 [http-nio-8080-exec-762] INFO org.apache.struts2.rest.RestActionInvocation - Executed action [//order/order!index!xhtml!200] took 574 ms (execution: 149 ms, result: 425 ms)
org.springframework.dao.TransientDataAccessResourceException: PreparedStatementCallback; SQL [update sessions set session=? where session_id=?]; Connection is read-only. Queries leading to data modification are not allowed; nested exception is java.sql.SQLException: Connection is read-only. Queries leading to data modification are not allowed
at org.springframework.jdbc.support.SQLStateSQLExceptionTranslator.doTranslate(SQLStateSQLExceptionTranslator.java:108)
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:73)
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:81)
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:81)
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:649)
at org.springframework.jdbc.core.JdbcTemplate.update(JdbcTemplate.java:870)
at org.springframework.jdbc.core.JdbcTemplate.update(JdbcTemplate.java:931)
at org.springframework.jdbc.core.JdbcTemplate.update(JdbcTemplate.java:941)
at com.shopping.web.authentication.dao.impl.ShiroSessionDao.doUpdate(ShiroSessionDao.java:48)
at org.apache.shiro.session.mgt.eis.CachingSessionDAO.update(CachingSessionDAO.java:277)
at org.apache.shiro.session.mgt.eis.CachingSessionDAO$$FastClassBySpringCGLIB$$2a5e5afd.invoke(<generated>)
Could anyone please help?
Thanks a lot.
db configuration in applicationContext.xml:
<!-- C3P0 -->
<bean id="dataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource">
<property name="user" value="${jdbc.user}"></property>
<property name="password" value="${jdbc.password}"></property>
<property name="driverClass" value="${jdbc.driverClass}"></property>
<property name="jdbcUrl" value="${jdbc.jdbcUrl}"></property>
<property name="initialPoolSize" value="${jdbc.initPoolSize}"></property>
<property name="maxPoolSize" value="${jdbc.maxPoolSize}"></property>
<property name="maxIdleTime" value="${cpool.maxIdleTime}"/>
</bean>
<!-- Spring jdbcTempate used for authentication -->
<bean id="jdbcTemplate" class="org.springframework.jdbc.core.JdbcTemplate">
<property name="dataSource" ref="dataSource"></property>
</bean>
<!--SessionFactory -->
<bean id="sessionFactory" class="org.springframework.orm.hibernate5.LocalSessionFactoryBean">
<property name="dataSource" ref="dataSource"></property>
<property name="configLocation" value="classpath:hibernate.cfg.xml"></property>
<property name="mappingLocations" value="classpath:com/shopping/web/entities/*.hbm.xml"></property>
<property name="packagesToScan">
<list>
<value>com.shopping</value>
</list>
</property>
</bean>
We use JdbcTemplate together with Hibernate 5 with the same session management.
In db.properties:
jdbc.user=shopping
jdbc.password=123456
jdbc.driverClass=com.mysql.jdbc.Driver
jdbc.jdbcUrl=jdbc:mysql://192.168.2.221:3306,192.168.2.222:3306,192.168.2.200:3306/shopping?useUnicode=true&characterEncoding=utf-8
jdbc.initPoolSize=5
jdbc.maxPoolSize=10
cpool.maxIdleTime=25200
After a lot of work on this problem, we found the root cause:
We use a MySQL cluster, and on the server side a c3p0 connection pool stores application data as well as the Shiro sessions. Sometimes the web site needs to reconnect to the MySQL cluster when no usable connection is left in the pool. After such a reconnect, the failed-over connection defaults to read-only mode, but the Shiro session needs to run update statements against the database, which causes the exception.
The solution is to change the MySQL connection URL as below:
jdbc.jdbcUrl=jdbc:mysql://server1:3306,server2:3306,server3:3306/database?autoReconnectForPools=true&autoReconnect=true&failOverReadOnly=false&useUnicode=true&characterEncoding=utf-8
autoReconnectForPools=true&autoReconnect=true&failOverReadOnly=false is the key to the solution.
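To double-check that the pool is no longer handing out read-only connections after a failover, a plain JDBC check is enough. This is a diagnostic sketch I'm adding for completeness, not part of the original fix:
import java.sql.Connection;
import javax.sql.DataSource;

public class ReadOnlyCheck {
    // Diagnostic sketch: report whether the pool hands out read-only connections.
    public static void check(DataSource dataSource) throws Exception {
        try (Connection conn = dataSource.getConnection()) {
            // With failOverReadOnly=false in the URL, failed-over connections stay writable,
            // so this should print false.
            System.out.println("read-only connection: " + conn.isReadOnly());
        }
    }
}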
I have an issue with a JDBC call in my Spring MVC application. If I make the call soon after starting the server, the JDBC connection is made within a second and the data is retrieved. Similarly, if the other DAOs are called in quick succession, their connections are made quickly too. But if I call a DAO after a gap of even a few minutes, the JDBC connection takes forever to be established. It gets stuck on
"DataSourceUtils:110 - Fetching JDBC Connection from DataSource"
I have never had the patience to really check how long it takes to retrieve the connection but I've waited for 10 minutes and there was no sign of the connection being made.
Next, I try to at least restart the server, but JDBC even blocks the server from stopping! The console is stuck on this line:
"DisposableBeanAdapter:327 - Invoking destroy method 'close' on bean with name 'dataSource'"
Eventually I restart Eclipse and it works alright until there is a time gap again.
This is my bean definition for the data source:
<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource"
destroy-method="close">
<property name="driverClassName" value="com.mysql.jdbc.Driver" />
<property name="url" value="url" />
<property name="username" value="abc" />
<property name="password" value="abc" />
<property name="validationQuery" value="SELECT 1" />
<property name="testWhileIdle" value="true" />
<property name="maxActive" value="100" />
<property name="minIdle" value="10" />
<property name="initialSize" value="10" />
<property name="maxIdle" value="20" />
<property name="maxWait" value="1000" />
</bean>
<bean id="jdbcTemplate" class="org.springframework.jdbc.core.JdbcTemplate">
<property name="dataSource" ref="dataSource" />
</bean>
<bean id="getDataDao" class="com.project.dao.GetDataDao">
<constructor-arg index="0" ref="jdbcTemplate" />
<constructor-arg index="1" value="STORED_PROC_NAME"></constructor-arg>
</bean>
In my DAO file, I extend Spring's StoredProcedure class and this is the constructor:
public GetDataDao(JdbcTemplate jdbcTemplate, String spName) {
super(jdbcTemplate, spName);
declareParameter(new SqlParameter("p_input", Types.VARCHAR));
declareParameter(new SqlOutParameter("o_result", Types.VARCHAR));
compile();
}
In another function, this is how I call the SP:
spOutput = super.execute(spInput);
where spOutput and spInput are HashMaps.
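Fleshed out, the call site looks roughly like this (a sketch that would sit inside GetDataDao; the method name and input value are placeholders, java.util.Map and java.util.HashMap are assumed to be imported, and the parameter names match the declareParameter calls above):
// Sketch of the call site inside GetDataDao; "getData" and its argument are placeholders.
public String getData(String input) {
    Map<String, Object> spInput = new HashMap<String, Object>();
    spInput.put("p_input", input);
    Map<String, Object> spOutput = super.execute(spInput); // this is the call that hangs
    return (String) spOutput.get("o_result");
}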
Am I doing something wrong in my configuration? TIA.
I had the exact same issue. I found that it was consistent for one particular query, checked that query, and found the problem was in the query itself: running it separately was also slow. The query converted a column to lowercase, and that column was not indexed; it was something like lower(trim(column_name)). I removed the lower and trim, and it worked perfectly fine after that.
The additional code helps, but I do not see anything in it that would cause the issue you are seeing. The most likely explanation is that connections are being pulled out of the pool but not returned, and the pool eventually becomes starved. The DBCP pool then later blocks your shutdown because those connections are still open, and probably hung.
To verify, you might try setting maxActive and similar settings to something much lower, perhaps even "1", and then verify that you get the same issue immediately.
Have you verified that your stored procedure is returning? i.e. You actually get spOutput for every call and the stored procedure itself is not hanging consistently or randomly?
If so, my only other suggestion is to post more code, especially from the call stack leading in to GetDataDao, and including whatever method in the DAO is making the sp.execute call. An assumption is that you are not using transactions, but if you are, then showing where you start/commit transaction in code would also be very important.
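A quick way to see the starvation pattern in isolation (a standalone sketch; driver, URL, and credentials are placeholders) is to shrink the pool to a single connection and deliberately not return it:
import java.sql.Connection;
import org.apache.commons.dbcp.BasicDataSource;

public class StarvationDemo {
    public static void main(String[] args) throws Exception {
        BasicDataSource ds = new BasicDataSource();
        ds.setDriverClassName("com.mysql.jdbc.Driver");
        ds.setUrl("jdbc:mysql://localhost:3306/test"); // placeholder URL
        ds.setUsername("abc");                         // placeholder credentials
        ds.setPassword("abc");
        ds.setMaxActive(1);   // pool holds exactly one connection
        ds.setMaxWait(5000);  // give up after 5 seconds instead of blocking forever

        Connection first = ds.getConnection(); // checked out and intentionally never closed
        System.out.println("got first connection: " + first);

        // The pool is now starved: this call waits maxWait milliseconds
        // and then fails because no connection is ever returned.
        Connection second = ds.getConnection();
        System.out.println("got second connection: " + second);
    }
}
If your real DAO calls show the same behaviour against a tiny pool, the fix is to make sure every checked-out connection is returned on every code path.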
The application connects to MS SQL server. It uses the c3p0 ComboPooledDataSource in Tomcat and Spring environment.
When the application loses database connection and gets it back few seconds later, the application recovers the connection and can continue querying the db quickly (as soon as the network is back).
But if the network outage is longer, the application needs more than 10 minutes to recover a db connection after the network comes back.
I see these logs when the db connection is back after 10 minutes:
[WARNING] Exception on close of inner statement.java.sql.SQLException: Invalid state, the Connection object is closed.
at net.sourceforge.jtds.jdbc.TdsCore.checkOpen(TdsCore.java:481)
[WARNING] [c3p0] A PooledConnection that has already signalled a Connection error is still in use!
[WARNING] [c3p0] Another error has occurred [ java.sql.SQLException: Invalid state, the Connection object is closed. ] which will not be reported to listeners!java.sql.SQLException: Invalid state, the Connection object is closed.
Here is the spring-config.xml configuration:
<bean id="CommonDataSource" abstract="true" class="com.mchange.v2.c3p0.ComboPooledDataSource" destroy-method="close">
<property name="driverClass" value="net.sourceforge.jtds.jdbc.Driver" />
<property name="minPoolSize" value="${db.minPoolSize}" />
<property name="maxPoolSize" value="${db.maxPoolSize}" />
<property name="acquireRetryAttempts" value="0" />
<property name="checkoutTimeout" value="0" />
<property name="testConnectionOnCheckout" value="true" />
<property name="testConnectionOnCheckin" value="false" />
<property name="idleConnectionTestPeriod" value="10" />
<property name="preferredTestQuery" value="select 1" />
</bean>
I tried other configurations, with a non-zero checkoutTimeout, testConnectionOnCheckout=false and testConnectionOnCheckin=true, but the recovery is still very slow.
What is wrong with my configuration? I would like to recover the db connection as soon as network issues are fixed.
Many thanks for your help.
EDIT with the HikariCP configuration suggested by M. Deinum:
I tried this HikariCP configuration:
<bean id="CommonDataSource" abstract="true" class="com.zaxxer.hikari.HikariDataSource" destroy-method="close">
<property name="maximumPoolSize" value="${db.maxPoolSize}" />
<property name="connectionTestQuery" value="select 1"/>
<property name="allowPoolSuspension" value="true"/>
</bean>
But the behaviour is similar: I have to wait for 10-15 minutes before getting the database connection back.
Would you have any suggestions, please?
The issue was not related to c3p0 or HikariCP. I had to modify the JDBC URL and add these properties:
loginTimeout=60;socketTimeout=60
Maybe only one of them would be enough, but it did the job with both of them set.
This link helped a lot: http://jtds.sourceforge.net/faq.html
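For reference, if the pool is ever configured in code rather than XML, the same fix would look something like this (a sketch; host, port, and database name are placeholders):
import com.mchange.v2.c3p0.ComboPooledDataSource;

public class JtdsPoolConfig {
    // Sketch: the loginTimeout/socketTimeout fix applied to the jTDS URL programmatically.
    public static ComboPooledDataSource build() throws Exception {
        ComboPooledDataSource ds = new ComboPooledDataSource();
        ds.setDriverClass("net.sourceforge.jtds.jdbc.Driver");
        ds.setJdbcUrl("jdbc:jtds:sqlserver://dbhost:1433/mydb;loginTimeout=60;socketTimeout=60");
        return ds;
    }
}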
I've been searching around the web for some time now and I've yet to fix this issue:
I have the following datasource configuration:
<bean id="cpms.prod.ds" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
<property name="driverClassName"><value>com.mysql.jdbc.Driver</value></property>
<property name="url"><value>jdbc:mysql://localhost/mysql</value></property>
<property name="username"><value>test</value></property>
<property name="password"><value>test</value></property>
<property name="initialSize" value="1" />
<property name="maxActive" value="2" />
<property name="maxIdle" value="1"></property>
</bean>
This should be enough to make sure there are only 2 active connections at one point and those are the ones used for the pool.
In my Java code I'm using SimpleJdbcTemplate to run three queries. As I understand it, this object should return each connection to the pool after its query ends, and it should block the third query until one of the others finishes.
When I look at my database administration console, I see 3 connections appear and then change to sleep state. If I run the queries again, another 3 connections pop up and the first 3 stay there.
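For reference, my usage is essentially this (a simplified sketch; the class name and queries are placeholders):
import javax.sql.DataSource;
import org.springframework.jdbc.core.simple.SimpleJdbcTemplate;

public class QueryRunner {
    // Simplified sketch of the usage in question; the queries are placeholders.
    public void runQueries(DataSource dataSource) {
        SimpleJdbcTemplate template = new SimpleJdbcTemplate(dataSource);
        // Each call should check a connection out of the pool and return it before the next runs.
        int a = template.queryForInt("SELECT COUNT(*) FROM table_a");
        int b = template.queryForInt("SELECT COUNT(*) FROM table_b");
        int c = template.queryForInt("SELECT COUNT(*) FROM table_c");
        System.out.println(a + ", " + b + ", " + c);
    }
}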
The only way I've found to get the connections closed is by setting:
<property name="removeAbandoned" value="true"/>
<property name="removeAbandonedTimeout" value="1"/>
<property name="minIdle" value="0"></property>
<property name="timeBetweenEvictionRunsMillis" value="1000"></property>
<property name="minEvictableIdleTimeMillis" value="1000"></property>
which forces the abandoned connections procedure to run and clean the old connections.
I shouldn't have to meddle with these parameters, and especially not set them so low, as that might cause performance issues.
I've also tried the solution shown here, to the same effect, until I change timeBetweenEvictionRunsMillis and minEvictableIdleTimeMillis to the lower values. And it still doesn't limit the connections to 2.
All connections in the JdbcTemplate are obtained via DataSourceUtils.doGetConnection. What you are seeing might be due to the 'intelligence' in BasicDataSource.
From the API:
Abandoned connections are identified and removed when getConnection() is invoked and the following conditions hold:
getRemoveAbandoned() = true
getNumActive() > getMaxActive() - 3
getNumIdle() < 2
The data source seems to allow 3 more active connections than the specified max-active.
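Plugging your numbers into those conditions (a hypothetical walkthrough, not DBCP source code): with maxActive=2 the second test becomes numActive > -1, so once removeAbandoned is on, the cleanup check passes as soon as anything is checked out:
public class AbandonedCheckDemo {
    // Hypothetical walkthrough of the quoted conditions with the values from the question.
    public static void main(String[] args) {
        boolean removeAbandoned = true; // getRemoveAbandoned()
        int maxActive = 2;              // from the bean definition above
        int numActive = 3;              // what the database console shows
        int numIdle = 0;

        boolean cleanupRuns = removeAbandoned
                && numActive > maxActive - 3   // 3 > -1  -> true
                && numIdle < 2;                // 0 < 2   -> true
        System.out.println("abandoned-connection cleanup would run: " + cleanupRuns);
    }
}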
I am working on a web project with Hibernate and Spring MVC.
My hibernate configuration is:
<property name="connection.driver_class">com.mysql.jdbc.Driver</property>
<property name="connection.url">jdbc:mysql://localhost/xxx</property>
<property name="connection.username">xxx</property>
<property name="connection.password">xxx</property>
<property name="hibernate.c3p0.min_size">5</property>
<property name="hibernate.c3p0.max_size">20</property>
<property name="hibernate.c3p0.timeout">1800</property>\<property name="hibernate.c3p0.max_statements">50</property>
<property name="dialect">org.hibernate.dialect.MySQL5Dialect</property>
<property name="connection.useUnicode">true</property>
<property name="connection.characterEncoding">UTF-8</property>
<property name="current_session_context_class">thread</property>
<property name="show_sql">true</property>
The Hibernate sessions are closed in the service classes' destructors (these service classes hold the DAO objects). But after publishing to the production server I got a "too many connections" exception from MySQL. On every server call a new MySQL connection was opened, and when the number of connections reached 101 the database failed. I think the destructors did not get enough time to execute, so the connections stayed open the whole time.
Then I restructured the code. Now the Spring controllers call a service-class function that releases the session manually. But it doesn't help: the connections are still left open, and now I cannot use LAZY collections in the views because the session is already closed.
How can I solve that problem? What is the usual approach here?
Thank you.