I understand that Hibernate's Session.flush() method writes the current in-memory state to the database. I use it and it works fine. But my question is: is using Session.flush() mandatory?
If I remove Session.flush() then nothing gets inserted/updated in the database. And I don't see any errors in my log file.
I am using Spring + Hibernate in my application. I am also using the OpenSessionInViewFilter which I have defined in my web.xml as follows:
<filter>
<filter-name>hibernateFilter</filter-name>
<filter-class>org.springframework.orm.hibernate3.support.OpenSessionInViewFilter</filter-class>
<init-param>
<param-name>flushMode</param-name>
<param-value>AUTO</param-value>
</init-param>
</filter>
<filter-mapping>
<filter-name>hibernateFilter</filter-name>
<url-pattern>/*</url-pattern>
</filter-mapping>
I have also tried setting flushMode to COMMIT, but it didn't help.
My Spring applicationContext.xml has the following lines for datasource:
<bean id="dataSource" class="org.springframework.jndi.JndiObjectFactoryBean">
<property name="jndiName" value="${jdbc.jndiName}" />
<property name="resourceRef" value="true" />
</bean>
<bean id="sessionFactory" class="org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean">
<property name="dataSource" ref="dataSource"></property>
<property name="hibernateProperties">
<props>
<prop key="hibernate.show_sql">${hibernate.show_sql}</prop>
<prop key="hibernate.dialect">${hibernate.dialect}</prop>
</props>
</property>
<property name="annotatedClasses">
<list>
<value>...</value>
</list>
</property>
</bean>
And finally my code snippet looks like this:
public String saveRegistration(final Registration registration) {
    getHibernateTemplate().saveOrUpdate(registration);
    getSession().flush(); // If I remove this, no records would be inserted/updated!!
    String id = registration.getId();
    return id;
}
As you can see, I am using a very basic configuration in Spring and Hibernate.
Can someone please help me understand why getSession().flush() needs to be called? Why, without it, are no records saved/updated in the database?
Are you certain that no records get created in the database? If you are saying that because the returned id is null then I think you've misunderstood. When you call saveOrUpdate, the object will be stored in the Hibernate session and other code that runs a query can retrieve it, but it won't necessarily be persisted to the database immediately. The current object instance won't have an id until you run a query that prompts Hibernate to persist the object to the database, or you manually call flush().
I'd suggest you check what the behaviour is if you code a second method to query for the object. I think you'll find that causes Hibernate to flush the object to the database.
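To make that experiment concrete, here is a minimal sketch reusing the names from the question's snippet; the email property and the query itself are assumptions, not part of the original code. With flush mode AUTO, running a query against the same session should force Hibernate to flush the pending insert first.

public Registration saveAndQuery(final Registration registration) {
    getHibernateTemplate().saveOrUpdate(registration); // queued in the session, not yet flushed
    // With AUTO flush mode, executing a query on the same session makes Hibernate
    // flush the pending insert so that the query can see the new row.
    return (Registration) getSession()
            .createQuery("from Registration r where r.email = :email") // "email" is a hypothetical property
            .setString("email", registration.getEmail())
            .uniqueResult();
}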
Probably the "registration" parameter has not been changed since it was loaded from the database or saved. Hibernate only saves/updates an entity when there is some difference between the cached object and the database data.
Try again after modifying one of the fields of the "registration" instance.
Also, it would be a good idea to turn on Hibernate/Spring debug logging and see what's going on under the hood.
The flush is just synchronization with the DB; you then need to commit (or clear the session) to see the changes.
If you just commit, it flushes automatically, and you can see your changes from another session too.
Commit will make the database commit.
Flushing is the process of synchronizing the underlying persistent store with persistable state held in memory, i.e. it will update or insert into your tables in the running transaction, but it may not commit those changes (this depends on your flush mode).
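As a minimal illustration of that distinction, using the plain Hibernate API outside the question's Spring setup (sessionFactory and registration stand in for the beans and objects from the question):

Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();

session.saveOrUpdate(registration);
session.flush();  // the INSERT/UPDATE statements are sent to the database now...
                  // ...but other sessions cannot see the changes yet
tx.commit();      // the running transaction is committed and the changes become visible
session.close();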
I have a Spring Boot RESTful CRUD service in the car rental domain.
From a high-level overview, it's a simple CRUD app with an SQL database and entities such as Car, Client, Lease, etc.
Now I have to introduce a report generation feature that processes lease data, calculates some statistics based on the data in the SQL DB, and persists the report into MongoDB.
I've already implemented it by creating a ReportGenerationService that depends on OriginDataService and MongoService.
ReportGenerationService generates the report based on the data returned by OriginDataService. In turn, OriginDataService has a method getData() that does a number of calls to the DAO layer and is thus annotated with @Transactional(isolation = Isolation.REPEATABLE_READ), because I want the returned data to be consistent. After getting the data, ReportGenerationService generates the report and persists it by invoking MongoService's persist(Report) method.
In my implementation I get data -> generate report -> persist report.
But what if the base data and report can't fit into RAM?
The solution is to select the data little by little, generate a part of the report, persist that part, and once all rows of data are processed, merge the report.
That means one method should read the data, process it, and persist the results.
I also want my method to read data with the Repeatable Read isolation level, so I have to annotate it with @Transactional(isolation = Isolation.REPEATABLE_READ). But since two DBs are used in the method, the @Transactional will apply to both of them, and I want only the SQL one to use it.
How can I gradually read from and write to different DBs?
Refer to the link and code sample below; they may help you resolve your issue.
https://www.javaworld.com/article/2077963/distributed-transactions-in-spring--with-and-without-xa.html?page=2
Transaction management for multiple database Using Spring & Hibernate
<bean id="transactionManager" class="com.springsource.open.db.ChainedTransactionManager">
<property name="transactionManagers">
<list>
<bean
class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
<property name="dataSource" ref="dataSource" />
</bean>
<bean
class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
<property name="dataSource" ref="otherDataSource" />
</bean>
</list>
</property>
</bean>
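If the goal is also to keep the REPEATABLE_READ transaction confined to the SQL side, one option is to annotate only OriginDataService.getData(...) and do the Mongo writes outside of it. A rough sketch of the chunked flow, where the paging parameters, CHUNK_SIZE, and the helper names (getData(page, size), buildReportPart, mergeParts) and LeaseData/ReportPart types are hypothetical:

public void generateReport(long reportId) {
    int page = 0;
    List<LeaseData> chunk;
    // only originDataService.getData(...) is @Transactional(isolation = REPEATABLE_READ)
    while (!(chunk = originDataService.getData(page++, CHUNK_SIZE)).isEmpty()) {
        ReportPart part = buildReportPart(chunk);  // in-memory processing, no transaction needed
        mongoService.persist(part);                // Mongo write happens outside the SQL transaction
    }
    mongoService.mergeParts(reportId);             // merge once all chunks have been processed
}

Note that with this approach each chunk is read in its own transaction, so repeatable-read consistency only holds within a chunk; if cross-chunk consistency is required, something like the chained transaction manager above (or reading from a stable snapshot) is still needed.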
I am using Spring + Hibernate in my JavaEE project.
In this project the user can upload an XLS file which I should import into my database. Before importing I have to validate the file, checking its integrity against the other entities in my database. So I have more or less the following:
// The importer
@Component("importer")
public class Importer {

    @Autowired
    FirstDao firstDao;
    @Autowired
    SecondDao secondDao;

    // Reads the file and opens it (65,000 lines, for example)
    public void validate() {
        foreach line in the file {            // pseudocode for iterating over the parsed lines
            firstDao.has(line[col1]);
            secondDao.has(line[col2]);
        }
        // It stores the valid objects in a List and persists them at the end
    }
}
// The DAO
@Repository
public class FirstDao {

    @PersistenceContext
    protected EntityManager entityManager;

    @Transactional(propagation = Propagation.NOT_SUPPORTED)
    public boolean has(String name) {
        List<Object> result = entityManager
                .createQuery("from FIRST_TABLE where name = :name")
                .setParameter("name", name)
                .getResultList();
        return result.size() > 0;
    }
}
// The PersistenceContext/Hibernate configuration
<!-- Data Source -->
<jee:jndi-lookup id="myDS" jndi-name="jdbc/my-DS" cache="true" proxy-interface="javax.sql.DataSource" />
<!-- Entity Manager -->
<bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
<property value="classpath:META-INF/my_persistence.xml" name="persistenceXmlLocation"/>
<property name="dataSource" ref="myDS"/>
<property name="persistenceUnitName" value="myPersistenceUnit" />
<!--
<property name="loadTimeWeaver">
<bean class="org.springframework.instrument.classloading.InstrumentationLoadTimeWeaver"/>
</property>
-->
<property name="jpaVendorAdapter">
<bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter">
<property name="database" value="ORACLE" />
<property name="showSql" value="false" />
</bean>
</property>
</bean>
After looking at the application logs, I have noticed:
For each query (the has method on my DAO) a connection is opened and closed with my database.
The memory on the server is being flooded (probably a memory leak).
After a lot of opening and closing of connections I get a connection reset from the database; I don't know why. And if I keep requesting connections, the DataSource is suspended.
I have read some things about the entityManager, but I still don't know if I am doing it right, so:
Is it right to execute the validation in a loop that way? (One connection for each item means 130,000 connections opened and closed for a 65,000-line file.)
I have read about a stateless persistence context for the entityManager. I suspect the memory leak may be there; maybe Hibernate is keeping a lot of objects in the PersistenceContext. How do I tell the EntityManager not to cache those objects while validating?
Thanks in advance.
First of all, you really shouldn't do that line by line unless you have a very good reason. Even if the data is bigger than your memory, process 1000 lines at a time or so, but definitely not one by one, because one of the most important optimizations for database usage is reducing the number of database hits.
Secondly, you should not retrieve the data just to check whether it exists.
Use a basic "select count" query instead. That way you avoid the I/O of reading the data, transferring it over the network to your server, and the memory spent just to learn how many objects are in that list.
If you follow the first piece of advice and check for the existence of records in batches of 1000 rather than one at a time, you can select just the names instead of entire rows, as in the sketch below.
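A rough sketch of such a batched check, assuming a hypothetical FirstEntity with a name field; only the idea of querying names in groups of 1000 matters:

@SuppressWarnings("unchecked")
public Set<String> findExistingNames(List<String> names) {
    Set<String> existing = new HashSet<String>();
    for (int i = 0; i < names.size(); i += 1000) {
        List<String> batch = names.subList(i, Math.min(i + 1000, names.size()));
        // one query per 1000 names instead of one query per line
        existing.addAll(entityManager
                .createQuery("select f.name from FirstEntity f where f.name in (:names)")
                .setParameter("names", batch)
                .getResultList());
    }
    return existing; // the importer then checks membership in memory instead of hitting the DB per line
}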
By the way, as far as I can see you are using a DataSource; if it is properly configured (maximum number of connections, etc.) you shouldn't have to worry about the number of database connections.
I am having an issue saving and retrieving objects from the database within a single request.
I want to clear the Hibernate session cache so that I get the updated entity from the database.
My code looks like this:
public class SampleController {

    protected ModelAndView onSubmit(HttpServletRequest request, HttpServletResponse response,
            Object command, BindException errors) throws Exception {

        myServiceOne.doAllotsOfSaving(parameters);

        // some code enhancement is needed here to clear the cache of the hibernate session
        // without affecting the sessions of other logged-in users

        // some fields of the returned MyEntity still contain the old values,
        // but the actual data in the database is already updated
        MyEntity entity = myServiceTwo.getMyEntityByOrderNo(orderNo);
    }
}
--configurations
<bean id="sessionFactory" class="org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean">
<property name="dataSource" ref="dataSource" />
<property name="configLocation" value="classpath:hibernate.cfg.xml" />
<property name="hibernateProperties">
<ref local="hibernateProperties"/>
</property>
<property name="entityInterceptor">
<ref bean="auditLogInterceptor" />
</property>
</bean>
<bean id="myServiceOne" class="com.test.service.impl.MyServiceOneImpl">
<property name="sessionFactory" ref="sessionFactory" />
</bean>
<bean id="myServiceTwo" class="com.test.service.impl.MyServiceTwoImpl">
<property name="sessionFactory" ref="sessionFactory" />
</bean>
--configurations
Does a Java web application contain only one Hibernate session, and how do I clear this Hibernate session?
No. Any Hibernate-based application uses multiple sessions, and each of these sessions must be closed once it has performed its task. Hibernate can manage sessions for you if you configure this in Hibernate's configuration file.
However, you should have only one instance of SessionFactory per application.
To clear the session you can call the session.clear() method; it clears the session-level cache.
without affecting the session of other user logged in
Since you have a web application, each user gets a different thread for database transactions. This means each user will have a different Hibernate session, so you don't have to worry about this. If by some means you're using the same session for all users, you're doing it wrong and the results can be catastrophic; after some time you'll get an OutOfMemoryError because of the session-level cache.
Note that you cannot disable the Hibernate session-level cache. If you need to bypass it entirely, you can use a StatelessSession.
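A minimal sketch of what that could look like in the controller from the question, assuming the SessionFactory is also injected there; the service calls are the question's own:

Session session = sessionFactory.getCurrentSession();
myServiceOne.doAllotsOfSaving(parameters);
session.flush();   // push any pending changes to the database
session.clear();   // evict everything from this session's first-level cache only
MyEntity entity = myServiceTwo.getMyEntityByOrderNo(orderNo); // re-read from the database
// or, to reload just one instance instead of clearing the whole session:
// session.refresh(entity);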
A SessionFactory is a long-lived, thread-safe object.
Usually one SessionFactory should be created per database.
A Session is used to get a physical connection with a database. The Session object is lightweight and designed to be instantiated each time an interaction is needed with the database.
The main function of the Session is to offer CRUD operations for instances of mapped entity classes. Instances may exist in one of the following three states at a given point in time:
transient: A new instance of a persistent class that is not associated with a Session, has no representation in the database, and has no identifier value is considered transient by Hibernate.
persistent: You can make a transient instance persistent by associating it with a Session. A persistent instance has a representation in the database, an identifier value and is associated with a Session.
detached: Once we close the Hibernate Session, the persistent instance will become a detached instance.
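A short sketch of those three states in code (the entity name is arbitrary):

MyEntity entity = new MyEntity();                 // transient: no Session, no row, no identifier
Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
session.saveOrUpdate(entity);                     // persistent: associated with the session, identifier assigned
tx.commit();
session.close();                                  // detached: the session is closed, the instance keeps its id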
I'm trying to use a JDBC Job Store in Quartz with the following code:
DateTime dt = new DateTime().plusHours(2);
JobDetail jobDetail = new JobDetail(identifier, "group", TestJob.class);
SimpleTrigger trigger = new SimpleTrigger(identifier, dt.toDate());
trigger.setJobName(identifier);
trigger.setJobGroup("group");
quartzScheduler.addJob(jobDetail, true);
quartzScheduler.scheduleJob(trigger);
And am configuring the scheduler as follows:
<bean id="scheduler" class="org.springframework.scheduling.quartz.SchedulerFactoryBean" lazy-init="false">
<property name="autoStartup" value="true" />
<property name="waitForJobsToCompleteOnShutdown" value="false" />
<property name="dataSource" ref="schedulerDataSource" />
<property name="nonTransactionalDataSource" ref="nonTXdataSource" />
<property name="quartzProperties">
<props>
<!--Job Store -->
<prop key="org.quartz.jobStore.driverDelegateClass">
org.quartz.impl.jdbcjobstore.StdJDBCDelegate
</prop>
<prop key="org.quartz.jobStore.class">
org.quartz.impl.jdbcjobstore.JobStoreCMT
</prop>
<prop key="org.quartz.jobStore.tablePrefix">QRTZ_</prop>
</props>
</property>
</bean>
The schedulerDataSource is a standard JNDI data source; the nonTXdataSource is configured via a simple org.springframework.jdbc.datasource.DriverManagerDataSource. I have specified the job store class to be org.quartz.impl.jdbcjobstore.JobStoreCMT and was hoping that the code:
quartzScheduler.addJob(jobDetail, true);
quartzScheduler.scheduleJob(trigger);
would not commit the job to the database as soon as each method is called. But when I call addJob, the job is immediately saved to the database, and scheduleJob causes the trigger information to be immediately saved as well; this already happens over two separate transactions.
There is a fair bit of subsequent logic in the code that needs to be committed to the database together with the scheduled jobs in one transaction, but no matter what I try, the jobs are committed by the scheduler to the database as soon as the methods are called. I have tried various environments (testing/Tomcat/Glassfish) and various data source configurations, but to no avail.
Can somebody point me into the direction of where I am going wrong?
Thank you.
Having thought this over a bit, I now believe you could achieve this by providing your own wrapping DataSource, but you should not do it. I think Quartz maintains some internal state in memory that must be kept in sync with the database (or at least it can do so). If you roll back a transaction or otherwise modify database state without notifying Quartz, it may not work as expected.
On the other hand, you can use Quartz's job pausing to achieve a similar effect: you simply create the new job and pause it before adding any triggers, then resume it only after you commit your transaction.
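A rough sketch of that pause/resume idea, using the Quartz 1.x API from the question; where exactly your own transaction commits is a placeholder, and you should verify that triggers added after the pause actually honour it in your Quartz version:

quartzScheduler.addJob(jobDetail, true);
quartzScheduler.pauseJob(identifier, "group");   // keep the job from firing yet
quartzScheduler.scheduleJob(trigger);

// ... perform the rest of the business logic and commit its transaction ...

quartzScheduler.resumeJob(identifier, "group");  // only now is the trigger allowed to fire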
---------------------- my original answer ----------------------
I think, though I'm not sure and haven't tried this, that you can try the following:
You need a transaction around the code that uses DataSource.getConnection internally. To achieve that you have to use a data source that is aware of the global transaction state. I suppose that the JBoss application server gives you just that (even with a plain data source).
JBoss comes with a transaction manager (Arjuna) and data source wrappers (JBoss app server internal) that are at least aware of the global transaction state.
Other options include Atomikos and an XA data source, but I have less experience there.
Edit: if Quartz uses an explicit COMMIT or setAutoCommit(true) internally, neither of my suggestions will work.
When you set a dataSource on SchedulerFactoryBean, Spring uses the class below as the JobStore (an extension of Quartz's JobStoreCMT):
LocalDataSourceJobStore
This supports both transactional and non-transactional DataSource access.
Please try the following:
Remove the property org.quartz.jobStore.class. [Edit: it's ignored anyway.]
Make sure the method that calls addJob / scheduleJob runs in a Spring-managed transaction, as in the sketch below.
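For the second point, a sketch of what that could look like; the service method, DAO, and BusinessData type are hypothetical, only the Quartz calls come from the question. With LocalDataSourceJobStore, the Quartz rows then participate in the surrounding transaction instead of being committed immediately.

@Transactional
public void scheduleWithBusinessData(JobDetail jobDetail, SimpleTrigger trigger,
                                     BusinessData data) throws SchedulerException {
    quartzScheduler.addJob(jobDetail, true);
    quartzScheduler.scheduleJob(trigger);
    businessDao.save(data); // committed, or rolled back, together with the Quartz rows
}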
I'm using Hibernate with OpenSessionInViewInterceptor so that a single Hibernate session will be used for the entire HTTP request (or so I wish). The problem is that Spring-configured transaction boundaries are causing a new session to be created, so I'm running into the following problem (pseudocode):
Start in a method marked @Transactional(propagation = Propagation.SUPPORTS, readOnly = false)
Hibernate session #1 starts
Call DAO method to update object foo; foo gets loaded into session cache for session #1
Call another method to update foo.bar; this one is marked @Transactional(propagation = Propagation.REQUIRED, readOnly = false)
Transaction demarcation causes suspension of current transaction synchronization, which temporarily unbinds the current Hibernate session
Hibernate session #2 starts since there's no currently-existing session
Update field bar on foo (loading foo into session cache #2); persist to DB
Transaction completes and method returns, session #1 resumes
Call yet another method to update another field on foo
Load foo from session cache #1, with old, incorrect value of bar
Update field foo.baz, persist foo to DB
foo.bar's old value overwrites the change we made in the previous step
Configuration looks like:
<bean name="openSessionInViewInterceptor" class="org.springframework.orm.hibernate3.support.OpenSessionInViewInterceptor" autowire="byName">
<property name="flushModeName">
<value>FLUSH_AUTO</value>
</property>
</bean>
<bean id="txManager"
class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
<property name="dataSource" ref="myDataSource" />
</bean>
<bean id="sessionFactory"
class="org.springframework.orm.hibernate3.LocalSessionFactoryBean">
<property name="useTransactionAwareDataSource" value="true" />
<property name="mappingLocations">
<list>
<value>/WEB-INF/xml/hibernate/content.hbm.xml</value>
</list>
</property>
<property name="lobHandler">
<ref local="oracleLobHandler" />
</property>
<!--property name="entityInterceptor" ref="auditLogInterceptor" /-->
<property name="hibernateProperties"
ref="HibernateProperties" />
<property name="dataSource" ref="myDataSource" />
</bean>
I've done some debugging and figured out exactly where this is happening, here is the stack trace:
Daemon Thread [http-8080-1] (Suspended (entry into method doUnbindResource in TransactionSynchronizationManager))
TransactionSynchronizationManager.doUnbindResource(Object) line: 222
TransactionSynchronizationManager.unbindResource(Object) line: 200
SpringSessionSynchronization.suspend() line: 115
DataSourceTransactionManager(AbstractPlatformTransactionManager).doSuspendSynchronization() line: 620
DataSourceTransactionManager(AbstractPlatformTransactionManager).suspend(Object) line: 549
DataSourceTransactionManager(AbstractPlatformTransactionManager).getTransaction(TransactionDefinition) line: 372
TransactionInterceptor(TransactionAspectSupport).createTransactionIfNecessary(TransactionAttribute, String) line: 263
TransactionInterceptor.invoke(MethodInvocation) line: 101
ReflectiveMethodInvocation.proceed() line: 171
JdkDynamicAopProxy.invoke(Object, Method, Object[]) line: 204
$Proxy14.changeVisibility(Long, ContentStatusVO, ContentAuditData) line: not available
I can't figure out why transaction boundaries (even "nested" ones - though here we're just moving from SUPPORTS to REQUIRED) would cause the Hibernate session to be suspended, even though OpenSessionInViewInterceptor is in use.
When the session is unbound, I see the following in my logs:
[2010-02-16 18:20:59,150] DEBUG org.springframework.transaction.support.TransactionSynchronizationManager Removed value [org.springframework.orm.hibernate3.SessionHolder#7def534e] for key [org.hibernate.impl.SessionFactoryImpl#693f23a2] from thread [http-8080-1]
First, your openSessionInViewInterceptor must have a sessionFactory injected, otherwise it can't do its job:
<property name="sessionFactory">
<ref bean="sessionFactory" />
</property>
Also, there is a property called singleSession - it is true by default, but debug its value just in case.
Then, if using Spring MVC, you have to configure the interceptor on the SimpleUrlHandlerMapping (or whichever handler mapping you are using) so that it is actually applied:
<property name="interceptors">
<list>
<ref bean="openSessionInViewInterceptor"/>
</list>
</property>
If using anything else, I think you have to define it using <aop> tags (what web framework are you using?)
I have this exact same problem. I had first thought that DB transaction boundaries drove the creation of Hibernate sessions. After a bit of debugging I realize now that I don't really understand them -- or how they are 'supposed' to be set up.
I'm using Spring and a @Transactional service with two associated DAOs. I'm also using the default propagation (REQUIRED) across the board.
public class MyService {

    public MyPersonDao personDao;   // injected by Spring
    public MyAddressDao addressDao; // injected by Spring

    @Transactional
    public void create(Person p) {
        Address a = addressDao.findOrCreate(p.getAddressData());
        boolean inSession = personDao.getHibernateTemplate().contains(a); // false
        p.setAddress(a);
        personDao.store(p); // fails because a is transient
    }
}
From what I see in my logs, it looks like method calls through transactional proxies open and close Hibernate sessions.