I'm using a Spring Batch HibernateCursorItemReader, defined as follows:
<bean class="org.springframework.batch.item.database.HibernateCursorItemReader"
scope="step" id="priceListFctrItemReader">
<property name="queryName" value="FIND_ALL_PRICE_LIST_FCTR_ITEM_ID_BY_MONTRY_FCTR_VER"/>
<property name="sessionFactory" ref="sessionFactory"/>
<property name="parameterValues">
<map>
<entry key="factorVersion" value="#{jobParameters['current.factor.version']}"/>
<entry key="trueValue" value="#{true}"/>
</map>
</property>
</bean>
On small result sets it seems fine. But if processing takes long, the session seems to close and I get
org.hibernate.exception.GenericJDBCException: could not advance using next()
at org.hibernate.exception.internal.StandardSQLExceptionConverter.convert(StandardSQLExceptionConverter.java:54)
at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:126)
at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:112)
and further down
Caused by: java.sql.SQLException: Result set already closed
at weblogic.jdbc.wrapper.ResultSet.checkResultSet(ResultSet.java:144)
at weblogic.jdbc.wrapper.ResultSet.preInvocationHandler(ResultSet.java:93)
I don't experience this with Spring Boot, but on WebLogic I do. It could be that local Spring Boot is simply faster.
Any ideas on how to avoid this?
The problem is that Spring Batch commits after every chunk, and committing closes the transaction and thus the result set.
When you are not in an application container, as when you are using Spring Boot, the *CursorItemReaders use a separate connection to bypass the transaction, thereby avoiding the commit that closes the cursor's result set.
On the other hand, if you are running on an application server, the connection you get from the server-managed data source will by default take part in a transaction. For a cursor item reader to work, you must set up a data source which does not take part in transactions [a].
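For example, a minimal sketch of such a data source (driver class, URL, and credentials are placeholders; you would point a second SessionFactory, used only by the reader, at it):

import org.springframework.jdbc.datasource.DriverManagerDataSource;

// Created outside the container, so the server's transaction manager never
// enlists its connections and the chunk commit cannot close the cursor.
// DriverManagerDataSource is unpooled; use a pooling DataSource in production.
DriverManagerDataSource cursorDataSource = new DriverManagerDataSource();
cursorDataSource.setDriverClassName("oracle.jdbc.OracleDriver");
cursorDataSource.setUrl("jdbc:oracle:thin:@//dbhost:1521/APPDB");
cursorDataSource.setUsername("batch_user");
cursorDataSource.setPassword("secret");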
Alternatively, you may be able to use a *PagingItemReader, which reads pageSize records per chunk, each chunk in a separate transaction. This completely avoids the problem of a closing result set. Beware: if the underlying table changes between chunks, the results may not be what you expect!
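A minimal sketch of that paging alternative, assuming the same named query and session factory as in the question:

import java.util.HashMap;
import java.util.Map;
import org.springframework.batch.item.database.HibernatePagingItemReader;

// Each page is fetched by a fresh query execution, so no cursor has to
// survive the chunk commit.
HibernatePagingItemReader<Long> reader = new HibernatePagingItemReader<>();
reader.setSessionFactory(sessionFactory); // the sessionFactory bean from the question
reader.setQueryName("FIND_ALL_PRICE_LIST_FCTR_ITEM_ID_BY_MONTRY_FCTR_VER");
Map<String, Object> params = new HashMap<>();
params.put("factorVersion", factorVersion); // value of jobParameters['current.factor.version']
params.put("trueValue", Boolean.TRUE);
reader.setParameterValues(params);
reader.setPageSize(100); // rows fetched per page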
[a]: https://blog.codecentric.de/en/2012/03/transactions-in-spring-batch-part-2-restart-cursor-based-reading-and-listeners/
Related
There is a service that connects to an Oracle DB for reading data; it uses Hibernate 3.6 and Spring Data JPA 1.10.x. Heap dumps are generated frequently, which results in out-of-memory errors on the hosts.
After analyzing a few heap dumps using Eclipse MAT, I found that the majority of the memory is accumulated in one instance of org.hibernate.engine.StatefulPersistenceContext -> org.hibernate.util.IdentityMap -> java.util.LinkedHashMap.
And the leak suspect says
The thread java.lang.Thread # 0x84427e10 ... : 29 keeps local
variables with total size 1,582,637,976 (95.04%) bytes.
The memory is accumulated in one instance of "java.util.LinkedHashMap"
loaded by "".
Searching StackOverflow suggests that the SessionFactory should be a singleton, and that session.flush() and session.clear() should be invoked before each call to clear the cache. But the SessionFactory is not explicitly initialized or used in the code.
What's causing the memory leak here (it looks like the result of each query is cached and never cleared), and how can it be fixed?
More info about the Spring Data configuration:
The TransactionManager is initialized as:
<tx:annotation-driven mode='proxy' proxy-target-class='true' />

<bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager">
    <property name="entityManagerFactory" ref="entityManagerFactory" />
</bean>

<bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
    ....
</bean>

<bean id="dataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource" depends-on="...">
    ....
</bean>
To interact with the table, an interface is declared extending Spring Data's Repository and JpaSpecificationExecutor, both typed to the domain class it handles.
The API activity method is annotated with @Transactional(propagation = Propagation.SUPPORTS, readOnly = true).
From what you describe, this is what I expect to be going on:
Hibernate (actually JPA in general) keeps a reference to all entities it loads or saves for the lifetime of the session.
In a typical web application setup this isn't a problem: a new session starts with each request, gets closed once the request is finished, and doesn't involve that many entities.
But for your application, it looks like the session keeps growing and growing.
I can imagine the following reasons:
Something runs in an open session all the time without ever closing it, maybe a batch job or a scheduled job which runs at regular intervals.
Hibernate is configured in such a way that it reuses the same session without ever closing it.
In order to find the culprit, enable logging for the opening and closing of sessions. Judging from https://hibernate.atlassian.net/browse/HHH-2425, org.hibernate.impl.SessionImpl should be the right log category, and you probably need trace-level logging.
Now test the various requests to your server and see if there are any sessions that get opened but not closed.
The question shows how some beans are created, but the problem doesn't lie there. The problem is in the code where you use these beans.
Check your code: you are probably loading items in a loop, and the loop is wrapped in a transaction.
Hibernate creates huge intermediate objects, and it doesn't clean these up until the transaction completes (commit/rollback).
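A minimal sketch of the usual mitigation, assuming a plain JPA EntityManager (the class, entity, and batch size are illustrative): flush and clear the persistence context periodically so it cannot grow without bound.

import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

public class BatchReader {

    @PersistenceContext
    private EntityManager em;

    public void processAll(List<Long> ids) {
        int count = 0;
        for (Long id : ids) {
            MyEntity entity = em.find(MyEntity.class, id); // MyEntity is a placeholder
            // ... work with the entity ...
            if (++count % 50 == 0) {
                em.flush(); // push pending changes to the database
                em.clear(); // detach managed entities so the context stays small
            }
        }
    }
}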
I'm using Spring Integration to invoke a service on the other end of an ActiveMQ queue. My config looks like:
<bean id="jmsConnectionFactory" class="org.springframework.jms.connection.CachingConnectionFactory">
<constructor-arg>
<bean class="org.apache.activemq.ActiveMQConnectionFactory"
p:brokerURL="${risk.approval.queue.broker}"
p:userName="${risk.approval.queue.username}"
p:password="${risk.approval.queue.password}"
/>
</constructor-arg>
<property name="reconnectOnException" value="true"/>
<property name="sessionCacheSize" value="100"/>
</bean>
<!-- create and close a connection to prepopulate the pool -->
<bean factory-bean="jmsConnectionFactory" factory-method="createConnection" class="javax.jms.Connection"
init-method="close" />
<integration:channel id="riskApprovalRequestChannel"/>
<integration:channel id="riskApprovalResponseChannel"/>
<jms:outbound-gateway id="riskApprovalServiceGateway"
request-destination-name="${risk.approval.queue.request}"
reply-destination-name="${risk.approval.queue.response}"
request-channel="riskApprovalRequestChannel"
reply-channel="riskApprovalResponseChannel"
connection-factory="jmsConnectionFactory"
receive-timeout="5000"/>
<integration:gateway id="riskApprovalService" service-interface="com.my.super.ServiceInterface"
default-request-channel="riskApprovalRequestChannel"
default-reply-channel="riskApprovalResponseChannel"/>
What I've noticed is that with this config, the consumers created to grab the matching request from ActiveMQ never close; every request increments the consumer count.
I can stop this from happening by adding
<property name="cacheConsumers" value="false" />
to the CachingConnectionFactory.
However, according to the Javadoc for CachingConnectionFactory:
Note that durable subscribers will only be cached until logical
closing of the Session handle.
This suggests that the session is never being closed.
Is this a bad thing? Is there a better way to stop the consumers from piling up?
Cheers,
Peter
First, you don't need the init-method on your factory-bean; it does nothing. The connection factory holds only one shared connection, and calling close() on it is a no-op (CachingConnectionFactory is a subclass of SingleConnectionFactory).
Second, caching consumers is the default; sessions are never closed unless the number of sessions exceeds the sessionCacheSize (which you have set to 100).
When close() is called on a cached session, it is returned to the cache for reuse; that's what the caching connection factory is for: avoiding the overhead of session creation for every request.
If you don't want the performance benefit of caching sessions, producers, and consumers, use SingleConnectionFactory instead. See the Javadoc for CachingConnectionFactory.
Does the following work when using the CachingConnectionFactory?
In your Spring config file, add cacheConsumers="false" to the connection factory's configuration.
The default behaviour is true, which was causing a connection leak on the queue.
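For reference, a sketch of the same settings applied programmatically (the broker URL is a placeholder):

import org.apache.activemq.ActiveMQConnectionFactory;
import org.springframework.jms.connection.CachingConnectionFactory;

// With cacheConsumers=false, consumers are physically closed when their
// session is returned to the cache, so they no longer pile up on the broker.
ActiveMQConnectionFactory target = new ActiveMQConnectionFactory("tcp://localhost:61616");
CachingConnectionFactory connectionFactory = new CachingConnectionFactory(target);
connectionFactory.setReconnectOnException(true);
connectionFactory.setSessionCacheSize(100);
connectionFactory.setCacheConsumers(false);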
We are using Spring 2.6, with both JdbcTemplate and NamedParameterJdbcTemplate in our system; the configuration is as follows.
<bean id="jdbcTemplate" class="org.springframework.jdbc.core.JdbcTemplate">
<constructor-arg ref="dataSource"></constructor-arg>
<property name="fetchSize" value="500>
</bean>
<bean id="namedParameterJdbcTemplate" class="org.springframework.jdbc.core.namedparam.NamedParameterJdbcTemplate">
<constructor-arg ref="dataSource"></constructor-arg>
</bean>
While JdbcTemplate has a fetchSize property, NamedParameterJdbcTemplate does not. I want to set the fetch size for it too, so I used the NamedParameterJdbcTemplate constructor that accepts a JdbcTemplate, and configured my bean as follows to reuse the fetchSize of 500 already configured on jdbcTemplate:
<bean id="namedParameterJdbcTemplate" class="org.springframework.jdbc.core.namedparam.NamedParameterJdbcTemplate">
<constructor-arg ref="jdbcTemplate"></constructor-arg>
</bean>
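Equivalently, in code (a sketch; NamedParameterJdbcTemplate wraps the given template, so the preconfigured fetch size carries over):

import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.core.namedparam.NamedParameterJdbcTemplate;

JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource); // dataSource as configured above
jdbcTemplate.setFetchSize(500);
NamedParameterJdbcTemplate namedTemplate = new NamedParameterJdbcTemplate(jdbcTemplate);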
But after this, I am getting the following exception for a few queries:
com.sybase.jdbc3.jdbc.SybSQLException: Cursor 'jconnect_implicit_16' was declared with a FOR UPDATE clause. This cursor was found to be read only.
at org.springframework.jdbc.support.SQLStateSQLExceptionTranslator.translate(SQLStateSQLExceptionTranslator.java:121)
at org.springframework.jdbc.support.SQLErrorCodeSQLExceptionTranslator.translate(SQLErrorCodeSQLExceptionTranslator.java:322)
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:582)
at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:616)
at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:641)
at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:657)
at org.springframework.jdbc.core.namedparam.NamedParameterJdbcTemplate.query(NamedParameterJdbcTemplate.java:123)
at org.springframework.jdbc.core.namedparam.NamedParameterJdbcTemplate.query(NamedParameterJdbcTemplate.java:127)
Can someone please suggest a solution?
Unfortunately the LANGUAGE_CURSOR workaround described in another answer has strange side effects.
Use of JdbcTemplate.query(...) to perform selects that occasionally return 0 rows now throws the exception:
Caused by: java.sql.SQLException: JZ0R2: No result set for this query.
Another obscure problem with few results on Google (and no helpful ones).
I'm now using the solution of creating a StreamingStatementCreator, as described in an answer to the question "How to manage a large dataset using Spring MySQL and RowCallbackHandler".
This creates a Statement with ResultSet.CONCUR_READ_ONLY set, but you can still use JdbcTemplate rather than reverting to raw Connections, Statements, and ResultSets.
I'm also getting this exception. It's related to Sybase jConnect for JDBC.
The workaround I found is to set the jConnect connection property LANGUAGE_CURSOR to true.
To quote the Sybase Programmer's Reference for jConnect for JDBC 6.05:
There is no known advantage to setting LANGUAGE_CURSOR to true,
but the option is provided in case an application displays unexpected
behavior when LANGUAGE_CURSOR is false.
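A sketch of setting that property when opening a connection (host, port, and database in the URL are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

// Register the jConnect driver (needed before JDBC 4.0 auto-loading).
Class.forName("com.sybase.jdbc3.jdbc.SybDriver");

Properties props = new Properties();
props.setProperty("LANGUAGE_CURSOR", "true"); // the property quoted from the jConnect docs above
Connection conn = DriverManager.getConnection("jdbc:sybase:Tds:dbhost:5000/mydb", props);

With a pooled data source, the same property would instead go into its connection-properties configuration.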
Go through all your mappings and make sure they are OK; look for the proper types. This issue is normally caused by a lower-level exception that is thrown and then caught by the SQLStateSQLExceptionTranslator, and it usually has something to do with an improper mapped object type.
I'm trying to use a JDBC Job Store in Quartz with the following code:
// Fire two hours from now (DateTime is Joda-Time).
DateTime dt = new DateTime().plusHours(2);
JobDetail jobDetail = new JobDetail(identifier, "group", TestJob.class);
SimpleTrigger trigger = new SimpleTrigger(identifier, dt.toDate());
trigger.setJobName(identifier);
trigger.setJobGroup("group");
quartzScheduler.addJob(jobDetail, true); // store the job, replacing any existing one
quartzScheduler.scheduleJob(trigger);    // store the trigger
And I am configuring the scheduler as follows:
<bean id="scheduler" class="org.springframework.scheduling.quartz.SchedulerFactoryBean" lazy-init="false">
<property name="autoStartup" value="true" />
<property name="waitForJobsToCompleteOnShutdown" value="false" />
<property name="dataSource" ref="schedulerDataSource" />
<property name="nonTransactionalDataSource" ref="nonTXdataSource" />
<property name="quartzProperties">
<props>
<!--Job Store -->
<prop key="org.quartz.jobStore.driverDelegateClass">
org.quartz.impl.jdbcjobstore.StdJDBCDelegate
</prop>
<prop key="org.quartz.jobStore.class">
org.quartz.impl.jdbcjobstore.JobStoreCMT
</prop>
<prop key="org.quartz.jobStore.tablePrefix">QRTZ_</prop>
</props>
</property>
</bean>
The schedulerDataSource is a standard JNDI data source; the nonTXdataSource is configured via a simple org.springframework.jdbc.datasource.DriverManagerDataSource. I have specified the job store class as org.quartz.impl.jdbcjobstore.JobStoreCMT, and was hoping that the code:
quartzScheduler.addJob(jobDetail, true);
quartzScheduler.scheduleJob(trigger);
would not commit the job to the database as soon as each method is called. But when I call addJob, the job is immediately saved to the database, and scheduleJob causes the trigger information to be immediately saved as well; this already happens over two separate transactions.
There is a fair bit of subsequent logic that needs to be committed to the database together with the scheduled jobs in one transaction, but no matter what I try, the jobs are committed by the scheduler as soon as the methods are called. I tried various environments (testing/Tomcat/Glassfish) and various data source configurations, to no avail.
Can somebody point out where I am going wrong?
Thank you.
Having thought this over a bit, I now believe you could achieve this by providing your own wrapping DataSource, but you should not do it. Quartz maintains some internal state in memory that must stay in sync with the database (or at least it can do so). If you roll back a transaction or otherwise modify database state without notifying Quartz, it may not work as expected.
On the other hand, you can use Quartz's job pausing to achieve a similar effect: create the new job and pause it before adding any triggers, then resume it only after you commit your transaction; a sketch follows.
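A sketch of that sequence using the Quartz 1.x API from the question (worth verifying against your Quartz version that a trigger added while its job is paused does not fire early):

// Store the job but keep it from firing until we are ready.
quartzScheduler.addJob(jobDetail, true);
quartzScheduler.pauseJob(identifier, "group");
quartzScheduler.scheduleJob(trigger);

// ... perform the rest of the transactional work and commit it ...

// Only now allow the trigger to fire.
quartzScheduler.resumeJob(identifier, "group");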
---------------------- my original answer ----------------------
I think, though I'm not sure and haven't tried this, that you can try the following:
You need a transaction around the code that uses DataSource.getConnection internally. To achieve that, you have to use a data source that is aware of the global transaction state. I suppose the JBoss application server gives you just that (even with a plain data source).
JBoss comes with a transaction manager (Arjuna) and data source wrappers (JBoss app server internals) that are at least aware of the global transaction state.
Other options include Atomikos and an XA data source, but I have less experience there.
Edit: if Quartz uses an explicit COMMIT or setAutoCommit(true) internally, neither of my suggestions will work.
When you set a dataSource on SchedulerFactoryBean, Spring uses LocalDataSourceJobStore (an extension of Quartz's JobStoreCMT) as the JobStore. It supports both transactional and non-transactional DataSource access.
Please try the following:
Remove the org.quartz.jobStore.class property. [Edit: it's ignored anyway.]
Make sure the method that calls addJob/scheduleJob runs in a Spring-managed transaction, as sketched below.
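For example, a sketch of such a method (the class and wiring are illustrative):

import org.quartz.JobDetail;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.SimpleTrigger;
import org.springframework.transaction.annotation.Transactional;

public class SchedulingService {

    private final Scheduler quartzScheduler;

    public SchedulingService(Scheduler quartzScheduler) {
        this.quartzScheduler = quartzScheduler;
    }

    @Transactional
    public void scheduleWithBusinessData(JobDetail jobDetail, SimpleTrigger trigger) throws SchedulerException {
        // With LocalDataSourceJobStore, both Quartz writes join the surrounding
        // Spring-managed transaction, so they commit or roll back together with
        // any other database work done in this method.
        quartzScheduler.addJob(jobDetail, true);
        quartzScheduler.scheduleJob(trigger);
        // ... other DAO/repository writes in the same transaction ...
    }
}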
I'm using Spring MVC to build a thin layer on top of a SQL Server database. When I began testing, it turned out that it doesn't handle stress very well :). I'm using Apache Commons DBCP for connection pooling and as the data source.
When I first attempted ~10-15 simultaneous connections, it would hang and I'd have to restart the server (for dev I'm using Tomcat, but I'll have to deploy on WebLogic eventually).
These are my Spring bean definitions:
<bean id="dataSource" destroy-method="close"
class="org.apache.commons.dbcp.BasicDataSource">
<property name="driverClassName" value="com.microsoft.sqlserver.jdbc.SQLServerDriver"/>
<property name="url" value="[...]"/>
<property name="username" value="[...]" />
<property name="password" value="[...]" />
</bean>
<bean id="partnerDAO" class="com.hp.gpl.JdbcPartnerDAO">
<constructor-arg ref="dataSource"/>
</bean>
<!-- + other beans -->
And this is how I use them:
// in the DAO
private final JdbcTemplate jdbcTemplate;

public JdbcPartnerDAO(DataSource dataSource) {
    jdbcTemplate = new JdbcTemplate(dataSource);
}

// in the controller
@Autowired
private PartnerDAO partnerDAO;

// in the controller method
Collection<Partner> partners = partnerDAO.getPartners(...);
After reading around a bit, I found the maxWait, maxActive, and maxIdle properties of BasicDataSource (from GenericObjectPool). Here comes the problem: I'm not sure how I should set them, performance-wise. From what I know, Spring should be managing my connections, so I shouldn't have to worry about releasing them.
<bean id="dataSource" destroy-method="close"
class="org.apache.commons.dbcp.BasicDataSource">
<property name="driverClassName" value="com.microsoft.sqlserver.jdbc.SQLServerDriver"/>
<property name="url" value="[...]"/>
<property name="username" value="[...]" />
<property name="password" value="[...]" />
<property name="maxWait" value="30" />
<property name="maxIdle" value="-1" />
<property name="maxActive" value="-1" />
</bean>
First, I set maxWait so that it wouldn't hang but would instead throw an exception when no connection was available from the pool. The exception message was:
Could not get JDBC Connection; nested exception is org.apache.commons.dbcp.SQLNestedException: Cannot get a connection, pool error Timeout waiting for idle object
There are some long-running queries, but the exception was thrown regardless of the query complexity.
Then, I set maxActive and maxIdle so that it wouldn't throw the exceptions in the first place. The default value for both maxActive and maxIdle is 8 (I don't understand why); if I set them to -1, no more exceptions are thrown and everything seems to work fine.
Considering that this app should support a large number of concurrent requests, is it OK to leave these settings at infinite? Will Spring actually manage my connections, considering the errors I was receiving? Should I switch to C3P0, considering DBCP seems kind of dead?
The DBCP maxWait parameter is defined in milliseconds. 30 ms is a very low value; consider increasing it to 30000 ms and try again.
As you already found out, the default DBCP pool size is 8 connections, so if you want to run 9 simultaneous queries, one of them will be blocked. I suggest you connect to your database and run exec sp_who2, which will show you what is connected and active, and whether any queries are being blocked. You can then confirm whether the issue is in the db or in your code.
As long as you are using Spring's JdbcTemplate family of objects, your connections will be managed as you expect; if you want to use a raw DataSource, make sure you use DataSourceUtils to obtain a Connection (see the sketch below).
One other suggestion: prior to Spring 3, don't use JdbcTemplate directly; stick to SimpleJdbcTemplate. You can still access the same methods via SimpleJdbcTemplate.getJdbcOperations(), but you should find yourself writing much nicer code using generics, and it removes the need to ever create JdbcTemplate/NamedParameterJdbcTemplate instances.
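A sketch of the DataSourceUtils pattern for raw-connection work:

import java.sql.Connection;
import javax.sql.DataSource;
import org.springframework.jdbc.datasource.DataSourceUtils;

Connection con = DataSourceUtils.getConnection(dataSource); // honors Spring-managed transactions
try {
    // ... use the connection directly ...
} finally {
    // Returns the connection to the pool, or leaves it open if it is bound
    // to an ongoing Spring transaction; never just call con.close() here.
    DataSourceUtils.releaseConnection(con, dataSource);
}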
Let's change the perspective.
"but the exception was thrown regardless of the query complexity"
It could be that the table, or the records in the table you are querying, are locked by some other active transaction, and hence the wait times out.
Try running the same query from a SQL Server client; if it takes a long time, you can be sure that a table or record lock is causing this.