There is a service that connects to an Oracle DB for reading data; it uses Hibernate 3.6 and Spring Data JPA 1.10.x. The hosts frequently run out of memory, and heap dumps are generated each time.
After analyzing a few heap dumps with Eclipse MAT, I found that the majority of the memory is accumulated in one instance of org.hibernate.engine.StatefulPersistenceContext -> org.hibernate.util.IdentityMap -> java.util.LinkedHashMap.
And the leak suspect report says:
The thread java.lang.Thread # 0x84427e10 ... : 29 keeps local
variables with total size 1,582,637,976 (95.04%) bytes.
The memory is accumulated in one instance of "java.util.LinkedHashMap"
loaded by "".
Searching StackOverflow suggests that the SessionFactory should be a singleton and that session.flush() and session.clear() should be invoked before each call to clear the cache. But the SessionFactory is not explicitly initialized or used in the code.
What's causing the memory leak here (it looks like the result of each query is cached and never cleared), and how can it be fixed?
More info about the Spring Data configuration:
TransactionManager is initialized as:
<tx:annotation-driven mode='proxy' proxy-target-class='true' />
<bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager">
<property name="entityManagerFactory" ref="entityManagerFactory" />
</bean>
<bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
....
</bean>
<bean id="dataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource" depends-on="...">
....
</bean>
To interact with the table, an interface is declared extending Spring Data Repository and JpaSpecificationExecutor, both typed to the domain class that they handle.
The API activity method has the annotation @Transactional(propagation = Propagation.SUPPORTS, readOnly = true).
From what you describe this is what I expect to be going on:
Hibernate (actually JPA in general) keeps a reference to all entities it loads or saves for the lifetime of the session.
In a typical web application setup, this isn't a problem, because a new session starts with each request, gets closed once the request is finished, and doesn't involve that many entities.
But for your application, it looks like the session keeps growing and growing.
I can imagine the following reasons:
something runs in an open session all the time without it ever being closed - maybe a batch job or a scheduled job which runs at regular intervals.
Hibernate is configured in such a way that it reuses the same session without ever closing it.
In order to find the culprit, enable logging for the opening and closing of sessions. Judging from https://hibernate.atlassian.net/browse/HHH-2425, org.hibernate.impl.SessionImpl should be the right log category, and you will probably need trace-level logging.
Now test the various requests to your server and see if there are any sessions that get opened but not closed.
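With log4j as the backend (an assumption on my part; adjust the syntax for whatever logging framework you actually use), a minimal sketch of that configuration could be:

    # log4j.properties - trace session open/close events (category per HHH-2425)
    log4j.logger.org.hibernate.impl.SessionImpl=TRACE

Each open/close pair should then show up in the log, so a session that is opened but never closed becomes visible.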
The question contains information about the creation of some beans, but the problem doesn't lie there. The problem is in your code, where you use these beans.
Please check your code. You are probably loading items in a loop, and the loop is wrapped in a transaction.
Hibernate accumulates large intermediate objects, and it doesn't clean these up until the transaction completes (commit/rollback).
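One common mitigation, if the loop itself is required, is to flush and clear the persistence context periodically so the session cannot grow without bound. A minimal sketch, assuming a JPA EntityManager and illustrative names (Item, ids and process are not from the question):

    int batchSize = 100;
    for (int i = 0; i < ids.size(); i++) {
        Item item = entityManager.find(Item.class, ids.get(i));
        process(item); // the actual read-only work
        if (i % batchSize == 0) {
            entityManager.flush(); // push pending changes (a no-op for pure reads)
            entityManager.clear(); // detach everything loaded so far
        }
    }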
Related
I'm using a Spring Batch HibernateCursorItemReader; it's defined as follows:
<bean class="org.springframework.batch.item.database.HibernateCursorItemReader"
scope="step" id="priceListFctrItemReader">
<property name="queryName" value="FIND_ALL_PRICE_LIST_FCTR_ITEM_ID_BY_MONTRY_FCTR_VER"/>
<property name="sessionFactory" ref="sessionFactory"/>
<property name="parameterValues">
<map>
<entry key="factorVersion" value="#{jobParameters['current.factor.version']}"/>
<entry key="trueValue" value="#{true}"/>
</map>
</property>
</bean>
On small result sets it seems to be fine, but if processing takes long, it seems the session closes and I get:
org.hibernate.exception.GenericJDBCException: could not advance using next()
at org.hibernate.exception.internal.StandardSQLExceptionConverter.convert(StandardSQLExceptionConverter.java:54)
at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:126)
at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:112)
and further down
Caused by: java.sql.SQLException: Result set already closed
at weblogic.jdbc.wrapper.ResultSet.checkResultSet(ResultSet.java:144)
at weblogic.jdbc.wrapper.ResultSet.preInvocationHandler(ResultSet.java:93)
I don't experience this in Spring Boot, but on WebLogic I do. It could be that local Spring Boot is just faster.
Any ideas on how to avoid this?
The problem is that Spring Batch does a commit after every chunk, and committing closes the transaction and thus the result set.
When you are not in an application container, as when you are using Spring Boot, *CursorItemReaders use a separate connection to bypass the transaction and thereby avoid the commit which closes the cursor's result set.
On the other hand, if you are running on an application server, the connection you get from the server-managed data source will by default take part in a transaction. In order for a cursor item reader to work, you must set up a data source which does not take part in transactions [a].
Alternatively, you may be able to use a *PagingItemReader, which reads pageSize records per chunk, each in a separate transaction; see the sketch below. This completely avoids the problem of a closing result set. Beware: if the underlying table changes between chunks, the results may not be what you expect!
[a] : https://blog.codecentric.de/en/2012/03/transactions-in-spring-batch-part-2-restart-cursor-based-reading-and-listeners/
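As an illustration of the paging alternative, here is the same reader rewritten as a HibernatePagingItemReader sketch, reusing the query and parameters from the question (assuming your Spring Batch version ships this class; the pageSize value is arbitrary):

    <bean class="org.springframework.batch.item.database.HibernatePagingItemReader"
          scope="step" id="priceListFctrItemReader">
        <!-- each page of pageSize rows is read in its own transaction -->
        <property name="pageSize" value="100"/>
        <property name="queryName" value="FIND_ALL_PRICE_LIST_FCTR_ITEM_ID_BY_MONTRY_FCTR_VER"/>
        <property name="sessionFactory" ref="sessionFactory"/>
        <property name="parameterValues">
            <map>
                <entry key="factorVersion" value="#{jobParameters['current.factor.version']}"/>
                <entry key="trueValue" value="#{true}"/>
            </map>
        </property>
    </bean>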
I am trying to get multi-threading working in my Java web application, and it seems like no matter what I try, I run into some sort of issue with connection pooling.
My current process is that I loop through all my departments and process them, which finally generates a display. This takes time, so I want to spawn a thread for each department and have them process concurrently.
After a lot of time spent first figuring out how to keep my Hibernate session open in a thread to prevent the lazy initialization loading errors, I finally arrived at the solution of creating a Spring bean of my Thread class and creating a new instance of that bean for each thread. I have tried 2 different versions:
1) I directly inject the DAO classes into the bean. FAILED - after loading the page a few times I would get "Cannot get a connection, pool error Timeout waiting for idle object" for each thread and the app would crash.
2) OK, so then I tried injecting the Spring SessionFactory into my bean, then creating new instances of my DAO and setting the SessionFactory on them. My DAO objects all extend HibernateDaoSupport. FAILED - after loading the page a few times I would get "too many connections" error messages.
Now what I am confused about is that my SessionFactory bean is a singleton, which I understand to mean that it is a single object shared throughout the Spring container. If that is true, then why does it appear that each thread is creating a new connection when it should just be sharing that single instance? It appears that my connection pool is getting filled up, and I don't understand why. Once the threads are done, all the connections that were created should be released, but they're not. I even tried running a close() operation on the injected SessionFactory, but that has no effect. I tried limiting how many threads run concurrently to 5, hoping that fewer connections would be created at one time, but no luck.
I am obviously doing something wrong, but I am not sure what. Am I taking the wrong approach entirely in trying to get my Hibernate Session into my threads? Am I somehow not managing my connection pool properly? Any ideas would be greatly appreciated!
More info I thought of: my process creates about 25 threads total, which run 5 at a time. I am able to refresh my page about 3 times before I start getting the errors. So apparently each refresh creates and holds on to a bunch of connections.
Here is my spring config file:
<bean id="mainDataSource" class="org.apache.commons.dbcp.BasicDataSource"
destroy-method="close">
....
<!-- Purposely put big values in here trying to figure this issue out
since my standard smaller values didn't help. These big values Fail just the same -->
<property name="maxActive"><value>500</value></property>
<property name="maxIdle"><value>500</value></property>
<property name="minIdle"><value>500</value></property>
<property name="maxWait"><value>5000</value></property>
<property name="removeAbandoned"><value>true</value></property>
<property name="validationQuery"><value>select 0</value></property>
</bean>
<!--Failed attempt #1 -->
<bean id="threadBean" class="controller.manager.ManageApprovedFunds$TestThread" scope="prototype">
<property name="dao"><ref bean="dao" /></property>
</bean>
<!--Failed attempt #2 -->
<bean id="threadBean" class="manager.ManageApprovedFunds$TestThread" scope="prototype">
<property name="sessionFactory"><ref bean="sessionFactory" /></property>
</bean>
Java code:
ExecutorService taskExecutor = Executors.newFixedThreadPool(5);
for (Department dept : getDepartments()) {
    TestThread t = (TestThread) springContext.getBean("threadBean");
    t.init(dept, form, getSelectedYear());
    taskExecutor.submit(t);
}
taskExecutor.shutdown();
public static class TestThread extends HibernateDaoSupport implements Runnable, ApplicationContextAware {

    private ApplicationContext appContext;

    @Override
    public void setApplicationContext(ApplicationContext arg0) throws BeansException {
        this.appContext = arg0;
    }

    @Override
    public void run() {
        try {
            MyDAO dao = new MyDAO();
            dao.setSessionFactory(getSessionFactory());
            // SOME READ OPERATIONS
            getSessionFactory().close();
        } catch (Exception e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
    }
}
Basically, you shouldn't try to share a single Hibernate Session between threads. Neither Hibernate sessions nor the entity objects are thread-safe, and sharing them can lead to some surprising problems.
There is also another problem: the whole transaction management is based around ThreadLocal objects, and the same goes for obtaining the current session and the underlying JDBC connection. Spawning your own threads will therefore lead to surprising problems, one of them being connection pool starvation (note: don't open too many connections).
Besides not opening too many connections, you should also beware of starting too many threads. A thread runs on a CPU (or a core, if you have multiple cores); adding too many threads leads to heavy sharing of a single CPU/core among them, which can kill your performance instead of increasing it.
In short, IMHO your approach is wrong: each thread should simply read the entity it cares about, do its thing, and commit the transaction, as in the sketch below. I would suggest using something like Spring Batch for this instead of inventing your own mechanism (although if it is simple enough, I would probably go for it).
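A minimal sketch of that per-thread pattern, assuming a plain injected SessionFactory (DeptWorker and the processing step are illustrative, not taken from the question):

    // Each worker opens, uses and closes its OWN session and transaction.
    public class DeptWorker implements Runnable {
        private final SessionFactory sessionFactory;
        private final Long deptId;

        public DeptWorker(SessionFactory sessionFactory, Long deptId) {
            this.sessionFactory = sessionFactory;
            this.deptId = deptId;
        }

        @Override
        public void run() {
            Session session = sessionFactory.openSession(); // one connection per worker
            Transaction tx = session.beginTransaction();
            try {
                Department dept = (Department) session.get(Department.class, deptId);
                // ... process this department ...
                tx.commit();
            } catch (RuntimeException e) {
                tx.rollback();
                throw e;
            } finally {
                session.close(); // returns the connection to the pool
            }
        }
    }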
A database connection is not associated with the SessionFactory but with the Session. To get your handling correct you have to take care of the following:
Only create one instance of SessionFactory (per persistence context) - but you are doing this already
Don't close() the SessionFactory - its lifetime should end when the application dies, that is, at undeployment or server shutdown.
Make sure to always close() your Session - you use one database connection per open Session, and if these don't get closed you are leaking connections.
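The closing discipline, sketched with the plain Hibernate API (assuming code that manages the session by hand rather than through Spring's templates):

    Session session = sessionFactory.openSession();
    try {
        // ... read operations ...
    } finally {
        session.close(); // always give the connection back to the pool
    }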
I am working on an application built on the Spring and Hibernate frameworks. In one particular module, the application fetches data from the database (select queries). Along with the select queries, the application also issues an update statement. After further debugging, I found that the update query is fired from some TransactionInterceptor.
I think the transaction interceptor is not required here, as these are all select queries. Can anyone please suggest a way to disable/suppress this interceptor at runtime?
This problem might sound too abstract at first; however, I am new to this application and don't have much knowledge about its architecture. If you need any configuration details, please let me know.
Thanks in advance.
Can you post the transaction management declarations part of your application-context.xml, where the bean org.springframework.jdbc.datasource.DataSourceTransactionManager is defined?
If the annotation is not enabled, you should activate it like this:
<bean id="txManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
<property name="dataSource" ref="yourDataSource" />
</bean>
<tx:annotation-driven transaction-manager="txManager" proxy-target-class="true" />
@Transactional(propagation = Propagation.NOT_SUPPORTED)
on your method will disable any Spring transactions on this proxied method call. Note that by disabling the transaction you also lose its other benefits, like isolation.
However, the fact that an update query is fired is NOT because of a transaction. You are likely to encounter a different error if you simply remove the transaction (likely a stale object exception when Hibernate tries to update outside of a transaction, or a malfunction of some module). Hibernate does not fire spurious updates; you should look for modifications to the object in question during your transaction.
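For illustration, the annotation on a hypothetical read-only service method (the method and DAO call are not from the question):

    // Runs outside any Spring-managed transaction, so no
    // transactional commit-time flush will be triggered here.
    @Transactional(propagation = Propagation.NOT_SUPPORTED)
    public List<Report> fetchReports(Date from, Date to) {
        return reportDao.findBetween(from, to); // hypothetical DAO finder
    }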
Here is the JavaDoc of the org.hibernate.Session method clear():
Completely clear the session. Evict all loaded instances and cancel all pending saves, updates and deletions. Do not close open iterators or instances of ScrollableResults.
So when you use clear() you clear the whole Session. OK, you will ask me: do I have only one session per transaction? The answer depends on your application's HibernateTemplate configuration: a new session is used only if HibernateTemplate.alwaysUseNewSession == true, and the default value is false. One solution is to not intercept your DAO method with the transaction manager at all; it will then be executed by default in a non-transactional session.
Have you taken a look at the Spring Framework AOP proxy configuration? See section 10.5, Declarative transaction management.
I managed to suppress the update query by writing the following line in my DAO class (which extends HibernateDaoSupport):
super.getSessionFactory().getCurrentSession().clear();
I just cleared the session, as no update was required while fetching the data and the interceptor was updating the table.
Also, the original issue I was facing is that the application was encountering org.hibernate.StaleStateException: Batch update returned unexpected row count from update: 0 actual row count: 0 expected: 1 from this update statement when it got executed twice (God knows why!).
Hence, we figured out that the update was never required and wanted to suppress it.
Can this fix have any consequences for the application? Is this the right way to do it? I will appreciate any input.
So your PlatformTransactionManager instance is a HibernateTransactionManager, and the TransactionInterceptor delegates transaction handling to it. All that means: every call you make to a data access method annotated with @Transactional passes through a Spring AOP proxy (the Proxy design pattern).
If you don't use annotation-based configuration, then you have declared an AOP proxy explicitly (search for the aop:config tag in your ApplicationContext.xml).
In the AOP proxy configuration you will find the policy that your application uses for intercepting data access methods and handling transactions; a typical declarative sketch follows.
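For reference, the declarative form of such a policy usually looks like this (the advice id, pointcut expression and package are illustrative, not from your application):

    <tx:advice id="txAdvice" transaction-manager="transactionManager">
        <tx:attributes>
            <!-- read-only transactions for finder methods, defaults for the rest -->
            <tx:method name="find*" read-only="true"/>
            <tx:method name="*"/>
        </tx:attributes>
    </tx:advice>

    <aop:config>
        <aop:pointcut id="daoMethods" expression="execution(* com.example.dao.*.*(..))"/>
        <aop:advisor advice-ref="txAdvice" pointcut-ref="daoMethods"/>
    </aop:config>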
To find out whether you are using annotation-based configuration, check what the 'transactionAttributeSource' is: an AnnotationTransactionAttributeSource or an AttributesTransactionAttributeSource?
I'm trying to use a JDBC Job Store in Quartz with the following code:
DateTime dt = new DateTime().plusHours(2);
JobDetail jobDetail = new JobDetail(identifier, "group", TestJob.class);
SimpleTrigger trigger = new SimpleTrigger(identifier, dt.toDate());
trigger.setJobName(identifier);
trigger.setJobGroup("group");
quartzScheduler.addJob(jobDetail, true);
quartzScheduler.scheduleJob(trigger);
And I am configuring the scheduler as follows:
<bean id="scheduler" class="org.springframework.scheduling.quartz.SchedulerFactoryBean" lazy-init="false">
<property name="autoStartup" value="true" />
<property name="waitForJobsToCompleteOnShutdown" value="false" />
<property name="dataSource" ref="schedulerDataSource" />
<property name="nonTransactionalDataSource" ref="nonTXdataSource" />
<property name="quartzProperties">
<props>
<!--Job Store -->
<prop key="org.quartz.jobStore.driverDelegateClass">
org.quartz.impl.jdbcjobstore.StdJDBCDelegate
</prop>
<prop key="org.quartz.jobStore.class">
org.quartz.impl.jdbcjobstore.JobStoreCMT
</prop>
<prop key="org.quartz.jobStore.tablePrefix">QRTZ_</prop>
</props>
</property>
</bean>
The schedulerDataSource is a standard JNDI data source; the nonTXdataSource is configured via a simple org.springframework.jdbc.datasource.DriverManagerDataSource. I have specified the job store class to be org.quartz.impl.jdbcjobstore.JobStoreCMT and was hoping that the code:
quartzScheduler.addJob(jobDetail, true);
quartzScheduler.scheduleJob(trigger);
would not commit the job to the database when each method is called. But basically, when I call addJob the job is immediately saved to the database, and the scheduleJob method causes the trigger information to be immediately saved as well; this already happens over two separate transactions.
There is a fair bit of subsequent logic in the code that needs to be committed to the database together with the scheduled jobs in one transaction; however, no matter what I try, the jobs are committed by the scheduler to the database as soon as the methods are called. I have tried various environments (testing/Tomcat/Glassfish) and various configurations of data sources, but to no avail.
Can somebody point me in the direction of where I am going wrong?
Thank you.
Having thought this over a bit, I now believe you could achieve this by providing your own wrapping DataSource, but you should not do this. Quartz maintains some internal state in memory that must stay in sync with the database (or at least it can do so). If you roll back a transaction or otherwise modify database state without notifying Quartz, it may not work as expected.
On the other hand, you can use Quartz's job pausing to achieve a similar effect: you simply create the new job and pause it before adding any triggers. Then you resume it only after you commit your transaction; see the sketch below.
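A minimal sketch of that idea, in the same Quartz 1.x API style as the question (untested on my side; verify against your Quartz version that a trigger attached to a paused job starts paused):

    // 1. Store the job (with replace = true) while it has no triggers.
    quartzScheduler.addJob(jobDetail, true);

    // 2. Pause it before any trigger is attached.
    quartzScheduler.pauseJob(identifier, "group");

    // 3. Attach the trigger; the paused job should keep it from firing.
    quartzScheduler.scheduleJob(trigger);

    // 4. ... run the rest of the transactional logic and commit ...

    // 5. Only after a successful commit, release the job.
    quartzScheduler.resumeJob(identifier, "group");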
---------------------- my original answer ----------------------
I think, but I'm not sure (I have not tried this), that you can try the following:
You need a transaction around the code that uses DataSource.getConnection internally. To achieve that, you have to use a data source that is aware of global transaction state. I suppose that the JBoss application server gives you just that (even with a plain data source).
JBoss comes with a transaction manager (Arjuna) and data source wrappers (internal to the JBoss app server) that are at least aware of global transaction state.
Other options include Atomikos and an XA data source, but I have less experience there.
Edit: if Quartz uses explicit COMMIT or setAutoCommit(true) internally, neither of my suggestions will work.
When you set a dataSource on SchedulerFactoryBean, Spring uses the class below as the JobStore (an extension of Quartz's JobStoreCMT):
LocalDataSourceJobStore
It supports both transactional and non-transactional DataSource access.
Please try the following:
Remove the property org.quartz.jobStore.class. [Edit: it's ignored anyway.]
Make sure the method which does addJob/scheduleJob runs in a Spring-managed transaction, as in the sketch below.
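A sketch of what that could look like (scheduleWithBusinessData, RelatedEntity and businessDao are illustrative; the point is that the Quartz calls and your own writes share one Spring-managed transaction over the same data source):

    @Transactional
    public void scheduleWithBusinessData(JobDetail jobDetail, SimpleTrigger trigger,
                                         RelatedEntity entity) throws SchedulerException {
        // Quartz writes and the application's own writes now commit
        // (or roll back) together with the surrounding transaction.
        quartzScheduler.addJob(jobDetail, true);
        quartzScheduler.scheduleJob(trigger);
        businessDao.save(entity); // hypothetical related persistence work
    }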
After upgrading from Spring 3.0.0.M4 to 3.0.0.RC1 and Spring Security 3.0.0.M2 to 3.0.0.RC1, I've had to use a security:authentication-manager tag instead of defining an _authenticationManager bean like I used to in M4/M2. I've done my best at defining it and ended up with this:
<security:authentication-manager alias="authenticationManager">
<security:authentication-provider user-service-ref="userService">
<security:password-encoder hash="plaintext"/>
</security:authentication-provider>
</security:authentication-manager>
When I run my unit tests one at a time, this works great, and for most AJAX requests it works fine as well; but seemingly at random I get weird errors in my transactions, where my database session seems to get closed midway through the work. The way I can provoke these errors is to send a lot of different AJAX requests to my different controllers from the same client; then at least one of them will fail at random. The next time I try, that one will work and another will fail.
The error happens most frequently in my userDAO, but also quite frequently in other DAOs, and the exceptions include at least the following:
"java.sql.SQLException: Operation not allowed after ResultSet closed"
"org.hibernate.impl.AbstractSessionImpl:errorIfClosed(): Session is closed!"
"java.lang.NullPointerException at com.mysql.jdbc.PreparedStatement.fillSendPacket(PreparedStatement.java:2439)"
"java.util.ConcurrentModificationException at java.util.LinkedHashMap$LinkedHashIterator.nextEntry(Unknown Source)"
"org.hibernate.LazyInitializationException: illegal access to loading collection"
etc...
Before, I used to define an _authenticationManager bean, and the same requests worked like a charm. But with RC1, I'm no longer allowed to define it. It used to look like this:
<bean id="_authenticationManager" class="org.springframework.security.authentication.ProviderManager">
<property name="providers">
<list>
<bean class="org.springframework.security.authentication.dao.DaoAuthenticationProvider">
<property name="userDetailsService" ref="userService"/>
<property name="passwordEncoder">
<bean class="org.springframework.security.authentication.encoding.PlaintextPasswordEncoder" />
</property>
</bean>
</list>
</property>
</bean>
Have I defined my security:authentication-manager incorrectly, so that it shares transactions between multiple requests from the same client? Should I define it differently, or should I define some other security: beans?
Is there something I have misunderstood that makes my database sessions close? In my head, each request has its own database connection and transaction. All getters and setters are synchronized methods, so I really shouldn't have any concurrency issues. All the REST controllers that the UI makes requests against are GET requests and do read-only work. To my knowledge, not a single INSERT/UPDATE/DELETE is done during any of these requests, and I've inspected the database logs to verify this.
I look forward to hearing your suggestions on how to avoid these race conditions.
Cheers
Nik
PS: I've updated the question to be more specific: the problem is with the security:authentication-manager (or so it seems to me; if you have tips that it could be something else, that would be great) that I'm forced to use instead of my own _authenticationManager starting with 3.0.0.RC1.
PPS: here is the thread that made me understand I could no longer define an _authenticationManager: SpringSource Forum Post
It turned out that I had a big problem with database session handling in my DAO, so I've made a write-up of my problem and posted the solution in another thread here at StackOverflow, asking for people's opinions on the solution. I hope it doesn't cause more issues :-)
Cheers
Nik