We are migrating our LDAP code from the plain Java LDAP library to Spring LDAP, to take advantage of its transaction rollback support. It mostly works. Mostly.
One of our problems is that whenever an exception is thrown, instead of the exception bubbling out, we get this:
Caused by: java.lang.ClassCastException: org.springframework.ldap.transaction.compensating.manager.ContextSourceAndHibernateTransactionManager$ContextSourceAndHibernateTransactionObject incompatible with org.springframework.orm.hibernate3.HibernateTransactionManager$HibernateTransactionObject
at org.springframework.orm.hibernate3.HibernateTransactionManager.doSetRollbackOnly(HibernateTransactionManager.java:696) ~[spring-orm-3.2.0.RELEASE.jar:3.2.0.RELEASE]
at org.springframework.transaction.support.AbstractPlatformTransactionManager.processRollback(AbstractPlatformTransactionManager.java:852) ~[spring-tx-3.2.0.RELEASE.jar:3.2.0.RELEASE]
at org.springframework.transaction.support.AbstractPlatformTransactionManager.rollback(AbstractPlatformTransactionManager.java:822) ~[spring-tx-3.2.0.RELEASE.jar:3.2.0.RELEASE]
at org.springframework.transaction.interceptor.TransactionAspectSupport.completeTransactionAfterThrowing(TransactionAspectSupport.java:410) ~[spring-tx-3.2.0.RELEASE.jar:3.2.0.RELEASE]
at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:114) ~[spring-tx-3.2.0.RELEASE.jar:3.2.0.RELEASE]
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172) ~[spring-aop-3.2.0.RELEASE.jar:3.2.0.RELEASE]
at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:90) ~[spring-aop-3.2.0.RELEASE.jar:3.2.0.RELEASE]
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172) ~[spring-aop-3.2.0.RELEASE.jar:3.2.0.RELEASE]
at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:631) ~[spring-aop-3.2.0.RELEASE.jar:3.2.0.RELEASE]
This swallows the original exception. Yet it doesn't seem to prevent the rollback from actually happening.
Relevant parts of our spring config:
<ldap:context-source
url="${LDAP_URL_SSL}"
base="${SEARCHBASE_DC}"
referral="follow"
authentication-source-ref="afmsAuthenticationSource" />
<ldap:ldap-template id="ldapTemplate" />
<bean id="txManager" name="txManager transactionManager" class="my.package.hibernate.HibernateTransactionManager">
<property name="sessionFactory" ref="enoteSessionFactory" />
<property name="defaultUserName" value="umc.not.provided" />
<property name="defaultApplicationId" value="umc" />
</bean>
<ldap:transaction-manager id="ldapTransactionManager" session-factory-ref="sessionFactory" >
<ldap:default-renaming-strategy />
</ldap:transaction-manager>
What you'll notice is that we have both a Hibernate transaction manager (which we customized a long time ago and which has worked forever) and the new ContextSource transaction manager. Both are invoked via either AOP or @Transactional, depending on which part of the code one is in.
We have a little disagreement in house as to whether we are supposed to REPLACE the existing transaction manager with the LDAP one (which might do both jobs) or ADD TO it. The dev doing the work thinks ADD TO. I think REPLACE, and I think this exception is a symptom of having both in play.
Anyway, can someone give advice on what might be causing this exception, and on whether having one or both transaction managers is the right way to use this library?
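For what it's worth, if REPLACE turns out to be the right answer, the combined manager that appears in the stack trace is meant to be configured as the single transaction manager for both resources. A sketch (the `sessionFactory` and `contextSource` bean references here are assumptions; adjust them to the ids in your own config, e.g. `enoteSessionFactory` and the id of your `ldap:context-source`):

```xml
<!-- One manager coordinating both the Hibernate session and the LDAP
     compensating transaction; replaces txManager AND ldapTransactionManager. -->
<bean id="transactionManager"
      class="org.springframework.ldap.transaction.compensating.manager.ContextSourceAndHibernateTransactionManager">
    <property name="sessionFactory" ref="sessionFactory"/>
    <property name="contextSource" ref="contextSource"/>
</bean>
```

With two independent managers in play, whichever one opened the transaction may not be the one asked to roll it back, which is consistent with the ClassCastException above: the Hibernate manager received the LDAP manager's transaction object.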
There is a service that connects to an Oracle DB for reading data; it uses Hibernate 3.6 and Spring Data JPA 1.10.x. Heap dumps are being generated frequently, which results in out-of-memory errors on the hosts.
After analyzing a few heap dumps with Eclipse MAT, I found that the majority of the memory is accumulated in one instance of org.hibernate.engine.StatefulPersistenceContext -> org.hibernate.util.IdentityMap -> java.util.LinkedHashMap.
The leak suspect report says:
The thread java.lang.Thread # 0x84427e10 ... : 29 keeps local
variables with total size 1,582,637,976 (95.04%) bytes.
The memory is accumulated in one instance of "java.util.LinkedHashMap"
loaded by "".
Searching StackOverflow suggests that the SessionFactory should be a singleton and that session.flush() and session.clear() should be invoked before each call to clear the cache. But the SessionFactory is not explicitly initialized or used in the code.
What's causing the memory leak here (it looks like the result of each query is cached and never cleared), and how can it be fixed?
More info about the Spring Data configuration:
TransactionManager is initialized as:
<tx:annotation-driven mode='proxy' proxy-target-class='true' />
<bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager">
<property name="entityManagerFactory" ref="entityManagerFactory" />
</bean>
<bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
....
</bean>
<bean id="dataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource" depends-on="...">
....
</bean>
To interact with the table, an interface is declared extending the Spring Data Repository and JpaSpecificationExecutor interfaces, both typed to the domain class they handle.
The API activity method is annotated with @Transactional(propagation = Propagation.SUPPORTS, readOnly = true).
From what you describe this is what I expect to be going on:
Hibernate (actually JPA in general) keeps a reference to all entities it loads or saves for the lifetime of the session.
In a typical web application setup this isn't a problem, because a new session starts with each request, gets closed once the request is finished, and doesn't involve that many entities.
But for your application, it looks like the session keeps growing and growing.
I can imagine the following reasons:
Something runs in an open session all the time without ever closing it, maybe a batch job or a scheduled job which runs at regular intervals.
Hibernate is configured in such a way that it reuses the same session without ever closing it.
In order to find the culprit, enable logging for the opening and closing of sessions. Judging from https://hibernate.atlassian.net/browse/HHH-2425, org.hibernate.impl.SessionImpl should be the right log category, and you will probably need TRACE-level logging.
Now test the various requests to your server and see if there are any sessions that get opened but not closed.
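If you happen to use Log4j, a minimal fragment for this might look like the following (the category name is taken from the ticket above; adapt the syntax to whichever logging framework you actually use):

```properties
# Trace session open/close events to find sessions that never close
log4j.logger.org.hibernate.impl.SessionImpl=TRACE
```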
The question contains information about the creation of some beans, but the problem doesn't lie there. The problem is in the code where you use these beans.
Please check your code. You are probably loading items in a loop, and the loop is wrapped in a transaction.
Hibernate builds up large intermediate state, and it doesn't clean this up until the transaction completes (commit/rollback).
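The usual remedy inside such a loop is to call session.flush() followed by session.clear() every N items. The toy model below (plain Java, no Hibernate; all names are illustrative) only shows why the periodic clear keeps the footprint bounded: the persistence context behaves like a list that references every loaded entity until it is cleared or the session closes.

```java
import java.util.ArrayList;
import java.util.List;

public class SessionGrowthSketch {

    // Stand-in for Hibernate's first-level cache: every "loaded" entity
    // stays referenced here until the context is cleared.
    static List<byte[]> persistenceContext = new ArrayList<>();

    // Returns the peak number of entities held at once.
    static int processAll(int items, int clearEvery) {
        int peak = 0;
        for (int i = 0; i < items; i++) {
            persistenceContext.add(new byte[1024]); // "load" an entity
            peak = Math.max(peak, persistenceContext.size());
            // Equivalent of session.flush(); session.clear(); every N items
            if (clearEvery > 0 && (i + 1) % clearEvery == 0) {
                persistenceContext.clear();
            }
        }
        persistenceContext.clear();
        return peak;
    }

    public static void main(String[] args) {
        int unbounded = processAll(10_000, 0);   // never cleared: peak is 10_000
        int bounded   = processAll(10_000, 100); // cleared every 100: peak is 100
        System.out.println(unbounded + " " + bounded);
    }
}
```

With no periodic clear the peak equals the total number of items processed, which is exactly the one giant StatefulPersistenceContext seen in the heap dump.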
I have a problem with database transactions in a Mule flow. This is the flow I have defined:
<flow name="createPortinCaseServiceFlow">
<vm:inbound-endpoint path="createPortinCase" exchange-pattern="request-response">
<custom-transaction action="ALWAYS_BEGIN" factory-ref="muleTransactionFactory"/>
</vm:inbound-endpoint>
<component>
<spring-object bean="checkIfExists"/>
</component>
<component>
<spring-object bean="createNewOne"/>
</component>
</flow>
The idea is that checkIfExists verifies whether some data already exists in the database; if it does, we throw an exception. If it does not, we go on to createNewOne and create the new data.
The problem
is that if we run the flow concurrently, new objects are created multiple times in createNewOne when they should not be, since we invoke checkIfExists just before it. This means the transaction is not working properly.
More info:
Both createNewOne and checkIfExists carry the following annotation:
@Transactional(propagation = Propagation.MANDATORY)
The definition of muleTransactionFactory looks as follows:
<bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager">
<property name="dataSource" ref="teleportNpDataSource"/>
<property name="entityManagerFactory" ref="npEntityManagerFactory"/>
<property name="nestedTransactionAllowed" value="true"/>
<property name="defaultTimeout" value="${teleport.np.tm.transactionTimeout}"/>
</bean>
<bean id="muleTransactionFactory" class="org.mule.module.spring.transaction.SpringTransactionFactory">
<property name="manager" ref="transactionManager"/>
</bean>
I have set the TRACE log level (as @Shailendra suggested) and I have discovered that the transaction is reused across all of the Spring beans:
00:26:32.751 [pool-75-thread-1] DEBUG org.springframework.orm.jpa.JpaTransactionManager - Participating in existing transaction
In the logs the transactions are committed at the same time, which means the transactions are created properly but are executed concurrently, and that causes the issue.
The issue may be due to multi-threading. When you post multiple objects to the VM queue, they are dispatched to multiple receiving threads, and if multi-threading is not properly handled in your components then you might run into the issue you mentioned.
Try making this change: add a VM connector reference and turn off the dispatcher threading profile. That way, the VM transport will process messages one at a time, as there is just one dispatcher thread.
<vm:connector name="VM" validateConnections="true" doc:name="VM" >
<dispatcher-threading-profile doThreading="false"/>
</vm:connector>
<flow name="testFlow8">
<vm:inbound-endpoint exchange-pattern="one-way" doc:name="VM" connector-ref="VM">
<custom-transaction action="NONE"/>
</vm:inbound-endpoint>
</flow>
Be aware that if the number of incoming messages on the VM queue is very high and the time taken to process each message is long, you may run into SEDA queue errors because no threads are available.
If your flow behaves correctly without threading, then you may need to look at how your components should behave under multi-threading.
Hope that helps!
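The underlying bug in the components is a classic check-then-act race: between checkIfExists and createNewOne, another thread can slip in and create the same record, and per-thread transactions do not prevent that (only a database unique constraint, or serializing the critical section, does). A plain-Java sketch of the race, with purely illustrative names:

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class CheckThenActRace {

    static Set<String> db = Collections.synchronizedSet(new HashSet<>());
    static AtomicInteger created = new AtomicInteger();
    static final Object lock = new Object();

    // Unsafe: checkIfExists and createNewOne are two separate steps,
    // so two threads can both pass the check before either creates.
    static void createUnsafe(String key) {
        if (!db.contains(key)) {          // checkIfExists
            pause();                       // widen the race window
            db.add(key);                   // createNewOne
            created.incrementAndGet();
        }
    }

    // Safe: the check and the create happen atomically.
    static void createSafe(String key) {
        synchronized (lock) {
            if (!db.contains(key)) {
                pause();
                db.add(key);
                created.incrementAndGet();
            }
        }
    }

    static void pause() {
        try { Thread.sleep(10); } catch (InterruptedException e) { }
    }

    static int run(Runnable task, int threads) {
        db.clear();
        created.set(0);
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int i = 0; i < threads; i++) pool.submit(task);
        pool.shutdown();
        try {
            pool.awaitTermination(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return created.get();
    }

    public static void main(String[] args) {
        // The unsafe version typically "creates" the record several times.
        System.out.println("unsafe: " + run(() -> createUnsafe("case-1"), 8));
        // The safe version always creates it exactly once.
        System.out.println("safe:   " + run(() -> createSafe("case-1"), 8));
    }
}
```

Turning off dispatcher threading hides the race by serializing everything; a unique constraint on the key column fixes it regardless of threading.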
I'm using a Spring Batch HibernateCursorItemReader, defined as follows:
<bean class="org.springframework.batch.item.database.HibernateCursorItemReader"
scope="step" id="priceListFctrItemReader">
<property name="queryName" value="FIND_ALL_PRICE_LIST_FCTR_ITEM_ID_BY_MONTRY_FCTR_VER"/>
<property name="sessionFactory" ref="sessionFactory"/>
<property name="parameterValues">
<map>
<entry key="factorVersion" value="#{jobParameters['current.factor.version']}"/>
<entry key="trueValue" value="#{true}"/>
</map>
</property>
</bean>
On small result sets it seems to be fine. But if processing takes long, it seems the session closes and I get:
org.hibernate.exception.GenericJDBCException: could not advance using next()
at org.hibernate.exception.internal.StandardSQLExceptionConverter.convert(StandardSQLExceptionConverter.java:54)
at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:126)
at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:112)
and further down
Caused by: java.sql.SQLException: Result set already closed
at weblogic.jdbc.wrapper.ResultSet.checkResultSet(ResultSet.java:144)
at weblogic.jdbc.wrapper.ResultSet.preInvocationHandler(ResultSet.java:93)
I don't experience this with Spring Boot, but on WebLogic I do. It could be that local Spring Boot is just faster.
Any ideas on how to avoid this?
The problem is that spring-batch does a commit after every chunk, and committing closes the transaction and thus the result set.
When you are not in an application container, as when you are using Spring Boot, the *CursorItemReaders use a separate connection to bypass the transaction and thereby avoid the commit that closes the cursor's result set.
On the other hand, if you are running on an application server, the connection you get from the server-managed data source will by default take part in the transaction. In order for a cursor item reader to work, you must set up a data source which does not take part in transactions. [a]
Alternatively, you may be able to use a *PagingItemReader, which reads page-size records per chunk, each in a separate transaction. This completely avoids the problem of a closing result set. Beware: if the underlying table changes between chunks, the results may not be what you expect!
[a] : https://blog.codecentric.de/en/2012/03/transactions-in-spring-batch-part-2-restart-cursor-based-reading-and-listeners/
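For the paging alternative, a sketch mirroring the cursor reader above might look like this (property names are based on the HibernatePagingItemReader API; verify them against your Spring Batch version, and pick a pageSize suited to your data):

```xml
<bean class="org.springframework.batch.item.database.HibernatePagingItemReader"
      scope="step" id="priceListFctrItemReader">
    <property name="queryName" value="FIND_ALL_PRICE_LIST_FCTR_ITEM_ID_BY_MONTRY_FCTR_VER"/>
    <property name="sessionFactory" ref="sessionFactory"/>
    <!-- Each page is read in its own transaction, so no long-lived cursor -->
    <property name="pageSize" value="100"/>
    <property name="parameterValues">
        <map>
            <entry key="factorVersion" value="#{jobParameters['current.factor.version']}"/>
            <entry key="trueValue" value="#{true}"/>
        </map>
    </property>
</bean>
```

Because each page re-runs the query, the named query should have a stable sort order (as the answer warns, concurrent changes to the table between pages can skip or repeat rows).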
I am using c3p0 0.9.2.1, GWT 2.6.0, and SQL Server 2012.
I am getting an intermittent stack trace from c3p0:
[2015-Feb-03 16:30:12] - [INFO ] - A checked-out resource is overdue, and will be destroyed: com.mchange.v2.c3p0.impl.NewPooledConnection@14bc0e
[2015-Feb-03 16:30:12] - [INFO ] - Logging the stack trace by which the overdue resource was checked-out.
java.lang.Exception: DEBUG STACK TRACE: Overdue resource check-out stack trace.
at com.mchange.v2.resourcepool.BasicResourcePool.checkoutResource(BasicResourcePool.java:555)
at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool.checkoutAndMarkConnectionInUse(C3P0PooledConnectionPool.java:755)
at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool.checkoutPooledConnection(C3P0PooledConnectionPool.java:682)
at com.mchange.v2.c3p0.impl.AbstractPoolBackedDataSource.getConnection(AbstractPoolBackedDataSource.java:140)
at org.hibernate.c3p0.internal.C3P0ConnectionProvider.getConnection(C3P0ConnectionProvider.java:89)
at org.hibernate.internal.AbstractSessionImpl$NonContextualJdbcConnectionAccess.obtainConnection(AbstractSessionImpl.java:380)
at org.hibernate.engine.jdbc.internal.LogicalConnectionImpl.obtainConnection(LogicalConnectionImpl.java:228)
at org.hibernate.engine.jdbc.internal.LogicalConnectionImpl.getConnection(LogicalConnectionImpl.java:171)
at org.hibernate.engine.jdbc.internal.StatementPreparerImpl.connection(StatementPreparerImpl.java:63)
at org.hibernate.engine.jdbc.internal.StatementPreparerImpl$5.doPrepare(StatementPreparerImpl.java:162)
at org.hibernate.engine.jdbc.internal.StatementPreparerImpl$StatementPreparationTemplate.prepareStatement(StatementPreparerImpl.java:186)
at org.hibernate.engine.jdbc.internal.StatementPreparerImpl.prepareQueryStatement(StatementPreparerImpl.java:160)
at org.hibernate.loader.Loader.prepareQueryStatement(Loader.java:1884)
at org.hibernate.loader.Loader.executeQueryStatement(Loader.java:1861)
at org.hibernate.loader.Loader.executeQueryStatement(Loader.java:1838)
at org.hibernate.loader.Loader.doQuery(Loader.java:909)
at org.hibernate.loader.Loader.doQueryAndInitializeNonLazyCollections(Loader.java:354)
at org.hibernate.loader.Loader.doList(Loader.java:2551)
at org.hibernate.loader.Loader.doList(Loader.java:2537)
at org.hibernate.loader.Loader.listIgnoreQueryCache(Loader.java:2367)
at org.hibernate.loader.Loader.list(Loader.java:2362)
at org.hibernate.loader.criteria.CriteriaLoader.list(CriteriaLoader.java:126)
at org.hibernate.internal.SessionImpl.list(SessionImpl.java:1678)
at org.hibernate.internal.CriteriaImpl.list(CriteriaImpl.java:380)
at net.impro.portal.server.model.dao.CommsQueueDaoImpl.getOldestRecords(CommsQueueDaoImpl.java:54)
at net.impro.portal.server.engine.uploader.AbstractBiometricUploader.getOldestRecords(AbstractBiometricUploader.java:217)
at net.impro.portal.server.engine.uploader.UploaderMorpho$$EnhancerByGuice$$9212bdc2.getOldestRecords(<generated>)
at net.impro.portal.server.engine.uploader.AbstractUploader.getSomeQueueData(AbstractUploader.java:222)
at net.impro.portal.server.engine.uploader.UploaderMorpho$$EnhancerByGuice$$9212bdc2.getSomeQueueData(<generated>)
at net.impro.portal.server.engine.uploader.AbstractBiometricUploader.processLoop(AbstractBiometricUploader.java:92)
at net.impro.portal.server.engine.uploader.AbstractUploader.run(AbstractUploader.java:172)
at java.lang.Thread.run(Unknown Source)
So, it's telling me I haven't finished with the connection within the timeout I have specified:
<property name="hibernate.c3p0.debugUnreturnedConnectionStackTraces" value="true" />
<property name="hibernate.c3p0.unreturnedConnectionTimeout" value="10" />
I fail to understand how this is possible. My code is very simple...
@Transactional
protected void getSomeQueueData() {
logger.error("+Entered Thread " + Thread.currentThread());
//A little clumsy here but I don't see anything wrong
final CommsQueueDao commsQueueDao = InjectHelp.requestNewInstance(CommsQueueDao.class);
List<CommsQueue> qData = getOldestRecords(commsQueueDao, MAX_LOCAL_QUEUE_SIZE);
Iterator<CommsQueue> iterator = qData.iterator();
//Iterate the result, pack it into a local queue
//...
logger.error("-Exit Thread " + Thread.currentThread() + " " + stuffToSend.size());
}
@Transactional
protected List<CommsQueue> getOldestRecords(CommsQueueDao qDao, int fetchSize) {
return qDao.getOldestRecords(getBioAddress(), fetchSize);
}
//FROM MY DAO CLASS
public List<CommsQueue> getOldestRecords(BioAddress bioAddress, int maxRecords) {
Criteria cr = getSession().createCriteria(CommsQueue.class);
cr.add(Restrictions.eq("bioAddress", bioAddress));
cr.addOrder(Order.asc("id"));
cr.setMaxResults(maxRecords);
return cr.list();
}
On start-up, 25 threads start, each calling getSomeQueueData(). About 1 in 10 times it fails. I check the debug output and find that every thread has successfully entered and exited the method within a couple of seconds, as expected, so why is c3p0 telling me that the connection is still in use? Just to clarify, the @Transactional is Guice's, not Spring's.
---EDIT---
I have spent many hours investigating this issue and it only becomes more puzzling....
I have also updated c3p0 to 0.9.5.
The @Transactional behavior seems erratic for some reason. Once in a while it doesn't get applied. I inspect the stack trace of every method that passes through my DAO, and every so often it indicates that the interceptor processing has been skipped.
In my DAO I have a getSession() method, and when I add an arbitrary sleep of 100 ms to it, almost every stack trace indicates that the annotation has been skipped.
Additionally, these settings make the problem happen almost every time:
<property name="hibernate.c3p0.maxPoolSize" value="20" />
<property name="hibernate.c3p0.minPoolSize" value="1" />
<property name="hibernate.c3p0.numHelperThreads" value="1" />
...while these settings make the problem happen very rarely:
<property name="hibernate.c3p0.maxPoolSize" value="20" />
<property name="hibernate.c3p0.minPoolSize" value="20" />
<property name="hibernate.c3p0.numHelperThreads" value="20" />
And removing c3p0 altogether seems to fix the problem completely, as far as my testing reveals.
I have read that the use of finally is something to beware of in cases where reflection is used by Guice, and I have been careful to avoid that. The symptoms are too bizarre for me to narrow it down: sometimes it seems like Guice is not binding my class correctly, sometimes it seems like a c3p0 error.
Oh, and I have checked that I am using com.google.inject.persist.Transactional.
Having tried an alternative pooling library without resolving the problem, I turned my attention to dependency injection. I think the problem is related to code existing in the constructor of a Guice-managed class. If that code has the potential to call methods marked @Transactional, then this problem can occur where the transactional annotation is "skipped".
This makes sense to me, so I have moved the code out of the constructor. I will keep monitoring it for a while before I'm convinced this is the solution.
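This is consistent with how method interception generally works: the interception machinery is not in effect while the object is still being constructed, so a transactional method invoked from a constructor runs without its interceptor. A minimal JDK-proxy analogy (illustrative names only; Guice actually uses bytecode-generated subclasses rather than java.lang.reflect.Proxy) demonstrates the general principle that only calls going through the proxy are intercepted:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.concurrent.atomic.AtomicInteger;

public class ProxyBypassSketch {

    interface Dao {
        void fetch();
    }

    static class DaoImpl implements Dao {
        DaoImpl(boolean callFromConstructor) {
            // At construction time no proxy wraps this object yet,
            // so this call can never be intercepted.
            if (callFromConstructor) {
                fetch();
            }
        }
        @Override
        public void fetch() { /* would hit the database here */ }
    }

    static AtomicInteger transactionsOpened = new AtomicInteger();

    // Stand-in for the @Transactional interceptor: wraps every call
    // through the proxy in a "transaction".
    static Dao transactional(Dao target) {
        InvocationHandler handler = (proxy, method, args) -> {
            transactionsOpened.incrementAndGet(); // "begin transaction"
            try {
                return method.invoke(target, args);
            } finally {
                // "commit/rollback" would go here
            }
        };
        return (Dao) Proxy.newProxyInstance(
                Dao.class.getClassLoader(), new Class<?>[]{Dao.class}, handler);
    }

    public static void main(String[] args) {
        // fetch() runs inside the constructor, before any proxy exists:
        // no transaction is opened for it.
        Dao dao = transactional(new DaoImpl(true));
        System.out.println(transactionsOpened.get()); // still 0

        // Going through the proxy, the interceptor runs.
        dao.fetch();
        System.out.println(transactionsOpened.get()); // 1
    }
}
```

Moving the work out of the constructor into a method called on the fully constructed (and proxied) instance is exactly what makes the interceptor visible again.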
After upgrading from Spring 3.0.0.M4 to 3.0.0.RC1 and Spring Security 3.0.0.M2 to 3.0.0.RC1, I've had to use a security:authentication-manager tag instead of defining an _authenticationManager bean like I used to in M4/M2. I've done my best at defining it, and ended up with this:
<security:authentication-manager alias="authenticationManager">
<security:authentication-provider user-service-ref="userService">
<security:password-encoder hash="plaintext"/>
</security:authentication-provider>
</security:authentication-manager>
When I run my unit tests one at a time, this works great, and for most AJAX requests it works fine as well. But seemingly at random, I get weird errors in my transactions where my database session seems to get closed midway through the work. The way I can provoke these errors is to send a lot of different AJAX requests to my different controllers from the same client; then at least one of them will fail at random. The next time I try, that one will work and another will fail.
The error happens most frequently in my userDAO, but also quite frequently in other DAOs, and the exceptions include at least the following:
"java.sql.SQLException: Operation not allowed after ResultSet closed"
"org.hibernate.impl.AbstractSessionImpl:errorIfClosed(): Session is closed!"
"java.lang.NullPointerException at com.mysql.jdbc.PreparedStatement.fillSendPacket(PreparedStatement.java:2439)"
"java.util.ConcurrentModificationException at java.util.LinkedHashMap$LinkedHashIterator.nextEntry(Unknown Source)"
"org.hibernate.LazyInitializationException: illegal access to loading collection"
etc...
Before, I used to define an _authenticationManager bean, and the same requests worked like a charm. But with RC1, I'm no longer allowed to define it. It used to look like this:
<bean id="_authenticationManager" class="org.springframework.security.authentication.ProviderManager">
<property name="providers">
<list>
<bean class="org.springframework.security.authentication.dao.DaoAuthenticationProvider">
<property name="userDetailsService" ref="userService"/>
<property name="passwordEncoder">
<bean class="org.springframework.security.authentication.encoding.PlaintextPasswordEncoder" />
</property>
</bean>
</list>
</property>
</bean>
Have I defined my security:authentication-manager incorrectly, so that it shares transactions between multiple requests from the same client? Should I define it differently, or should I define some other security: beans?
Is there something I have misunderstood that makes my database sessions close? In my head, each request has its own database connection and transaction. All getters and setters are synchronized methods, so I really shouldn't have any concurrency issues. All the REST controllers that the UI makes requests against are GET requests and do read-only work. To my knowledge, not a single INSERT/UPDATE/DELETE is executed during any of these requests, and I've inspected the database logs to verify this.
I look forward to hearing your suggestions on how to avoid these race conditions.
Cheers
Nik
PS: I've updated the question to be more specific that the problem is with the security:authentication-manager (or so it seems to me; if you have tips that it could be something else, that would be great) that I'm forced to use instead of my own _authenticationManager starting with 3.0.0.RC1.
PPS, here is the thread that made me understand I could no longer define an _authenticationManager: SpringSource Forum Post
It seems that I had a big problem with database session handling in my DAO, so I've made a write-up of the problem and posted the solution in another thread here at StackOverflow, asking for people's opinion on the solution. I hope it doesn't cause more issues :-)
Cheers
Nik