Here is my implementation in summary:
1) All DAOs are implemented using HibernateDaoSupport; the @Transactional annotation is only used in the service layer.
2) MySQL / HibernateTransactionManager is used.
3) Testing is done from a main(String[] args) method (do transactions work when called this way?).
Transactions do not get rolled back, and invalid entries can be seen in the database.
What am I doing wrong here?
Detailed information is given below.
1) I have configured transactions using the @Transactional annotation in the service layer as follows:
@Transactional(readOnly = false,
        rollbackFor = { DuplicateEmailException.class, DuplicateLoginIdException.class, IdentityException.class },
        propagation = Propagation.REQUIRES_NEW)
public void createUserProfile(UserProfile profile)
        throws DuplicateEmailException, DuplicateLoginIdException, IdentityException {
    // Accessing DAO (implemented using HibernateDaoSupport)
    identityDAO.createPrincipal(profile.getSecurityPrincipal());
    try {
        // Accessing DAO
        userProfileDAO.createUserProfile(profile);
    } catch (RuntimeException e) {
        throw new IdentityException("UserProfile create error", e);
    }
}
2) My transaction manager / datasource configuration is as follows:
<bean id="dataSource"
class="org.apache.commons.dbcp.BasicDataSource"
destroy-method="close">
<property name="driverClassName" value="com.mysql.jdbc.Driver" />
<property name="url"
value="jdbc:mysql://localhost:3306/ibmdusermgmt" />
<property name="username" value="root" />
<property name="password" value="root" />
<property name="defaultAutoCommit" value="false"/>
</bean>
<bean id="sessionFactory"
class="org.springframework.orm.hibernate3.LocalSessionFactoryBean">
<property name="dataSource" ref="dataSource" />
<property name="mappingLocations"
value="classpath*:hibernate/*.hbm.xml" />
<property name="hibernateProperties">
<prop key="hibernate.dialect">
org.hibernate.dialect.MySQLDialect
</prop>
<prop key="hibernate.query.substitutions">
true=1 false=0
</prop>
<prop key="hibernate.show_sql">true</prop>
<prop key="hibernate.use_outer_join">false</prop>
</props>
</property>
</bean>
<bean id="transactionManager"
class="org.springframework.orm.hibernate3.HibernateTransactionManager">
<property name="sessionFactory" ref="sessionFactory" />
</bean>
<tx:annotation-driven transaction-manager="transactionManager"/>
3) Test using a main() method as follows:
public static void main(String[] args) {
    ApplicationContext cnt = new ClassPathXmlApplicationContext("testAppContext.xml");
    IUserProfileService upServ = (IUserProfileService) cnt.getBean("ibmdUserProfileService");
    UserProfile up = UserManagementFactory.createProfile("testlogin");
    up.setAddress("address");
    up.setCompany("company");
    up.setTelephone("94963842");
    up.getSecurityPrincipal().setEmail("security@email.com");
    up.getSecurityPrincipal().setName("Full Name");
    up.getSecurityPrincipal().setPassword("password");
    up.getSecurityPrincipal().setSecretQuestion(new SecretQuestion("Question", "Answer"));
    try {
        upServ.createUserProfile(up);
    } catch (Exception e) {
        e.printStackTrace();
    }
}
As far as I know, MySQL's default storage engine MyISAM does not support transactions.
MySQL Server (version 3.23-max and all versions 4.0 and above) supports transactions with the InnoDB and BDB transactional storage engines. InnoDB provides full ACID compliance. ... The other nontransactional storage engines in MySQL Server (such as MyISAM) follow a different paradigm for data integrity called “atomic operations.” In transactional terms, MyISAM tables effectively always operate in autocommit = 1 mode
In order to get transactions working, you'll have to switch the storage engine for those tables to InnoDB. You may also want to use Hibernate's MySQLInnoDBDialect if you generate your tables (hbm2ddl) from your mappings:
<prop key="hibernate.dialect">org.hibernate.dialect.MySQLInnoDBDialect</prop>
As mentioned by @gid, it's not a requirement for transactions though.
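If you're unsure which engine your existing tables actually use, a quick JDBC check against information_schema makes it obvious. This is a hedged sketch: the schema name is taken from the JDBC URL in the question, and it assumes MySQL 5.0+ where information_schema exists.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class EngineCheck {
    public static void main(String[] args) throws SQLException {
        try (Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/ibmdusermgmt", "root", "root");
             PreparedStatement ps = con.prepareStatement(
                "SELECT table_name, engine FROM information_schema.tables WHERE table_schema = ?")) {
            ps.setString(1, "ibmdusermgmt");
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // Any table reported as MyISAM can be converted with:
                    //   ALTER TABLE <table_name> ENGINE=InnoDB
                    System.out.println(rs.getString("table_name") + " -> " + rs.getString("engine"));
                }
            }
        }
    }
}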
Yes, you can call transactional objects from main().
@sfussenegger is correct in that MyISAM doesn't support transactions. Use of the plain MySQLDialect doesn't exclude the use of transactions; it only means that hbm2ddl table generation will create tables with MySQL's default engine (typically MyISAM) rather than InnoDB, which of course could be your issue.
Can you confirm you are using a transactional table type in MySQL?
Assuming this is true, the next thing to do is get some debug output from org.springframework.transaction.support.AbstractPlatformTransactionManager. How to do this depends on which logging framework you are using, but it should be straightforward.
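For example, if Log4j 1.x happens to be the framework sitting behind commons-logging on your classpath (an assumption; adjust for whatever you actually use), something like this before building the ApplicationContext will surface the transaction manager's begin/commit/rollback decisions:

import org.apache.log4j.BasicConfigurator;
import org.apache.log4j.Level;
import org.apache.log4j.Logger;

public class TxDebugLogging {
    public static void enable() {
        // Console appender; skip this line if a log4j.properties is already configured.
        BasicConfigurator.configure();
        // DEBUG level surfaces messages such as "Creating new transaction" and
        // "Initiating transaction commit/rollback" from AbstractPlatformTransactionManager.
        Logger.getLogger("org.springframework.transaction").setLevel(Level.DEBUG);
        Logger.getLogger("org.springframework.orm.hibernate3").setLevel(Level.DEBUG);
    }
}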
Related
I have a legacy Spring Data JPA application that has a large number of entities and CrudRepositories. JPA is configured using the XML below. We have a new requirement that requires us to insert 10,000 - 50,000 entities into the database at once via a FileUpload. With the existing configuration the database CPU spikes. After enabling Hibernate statistics, it was apparent that these 10,000 insert operations were generating over 200,000 DB queries due to all the validation logic required for a single insert in the InvoiceService.
Original Configuration
<bean id="dataSource" destroy-method="close" class="org.apache.commons.dbcp2.BasicDataSource">
<property name="driverClassName" value="${db.driver}"/>
<property name="url" value="${db.jdbcurl}"/>
<property name="username" value="${db.username}"/>
<property name="password" value="${db.password}"/>
<property name="maxTotal" value="${db.maxTotal}"/>
<property name="maxIdle" value="${db.maxIdle}"/>
<property name="minIdle" value="${db.minIdle}"/>
<property name="initialSize" value="${db.initialSize}"/>
<property name="maxWaitMillis" value="${db.maxWaitMillis}"/>
<property name="minEvictableIdleTimeMillis" value="${db.minEvictableIdleTimeMillis}"/>
<property name="timeBetweenEvictionRunsMillis" value="${db.timeBetweenEvictionRunsMillis}"/>
<property name="testOnBorrow" value="${db.testOnBorrow}"/>
<property name="testOnReturn" value="${db.testOnReturn}"/>
<property name="testWhileIdle" value="${db.testWhileIdle}"/>
<property name="removeAbandonedOnBorrow" value="${db.removeAbandonedOnBorrow}"/>
<property name="removeAbandonedOnMaintenance" value="${db.removeAbandonedOnMaintenance}"/>
<property name="removeAbandonedTimeout" value="${db.removeAbandonedTimeout}"/>
<property name="logAbandoned" value="${db.logAbandoned}"/>
</bean>
<bean id="entityManagerFactory"
class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean" depends-on="flyway">
<property name="dataSource" ref="dataSource" />
<property name="packagesToScan" value="my.package.domain" />
<property name="loadTimeWeaver">
<bean class="org.springframework.instrument.classloading.InstrumentationLoadTimeWeaver" />
</property>
<property name="jpaVendorAdapter">
<bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter" />
</property>
<property name="jpaProperties">
<props>
<prop key="hibernate.hbm2ddl.auto">${hibernate.hbm2ddl.auto:validate}</prop>
<prop key="hibernate.dialect">org.hibernate.dialect.PostgreSQL9Dialect</prop>
<prop key="hibernate.show_sql">${hibernate.show_sql:false}</prop>
</props>
</property>
<property name="persistenceUnitName" value="entityManagerFactory" />
</bean>
<bean id="persistenceAnnotationBeanPostProcessor" class="org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor">
<property name="defaultPersistenceUnitName" value="entityManagerFactory"/>
</bean>
<bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager">
<property name="entityManagerFactory" ref="entityManagerFactory" />
</bean>
<tx:annotation-driven proxy-target-class="true" />
<bean id="persistenceExceptionTranslationPostProcessor"
class="org.springframework.dao.annotation.PersistenceExceptionTranslationPostProcessor" />
<jpa:repositories base-package="my.package.repository" entity-manager-factory-ref="entityManagerFactory"/>
The FileUploadService snippet looks like this...
EntityManager batchEntityManager = entityManagerFactory.createEntityManager();
EntityTransaction transaction = batchEntityManager.getTransaction();
transaction.begin();
try (BufferedReader buffer = new BufferedReader(new InputStreamReader(file.getInputStream()))) {
buffer.lines()
.filter(StringUtils::isNotBlank)
.forEach(csvLine -> {
invoiceService.createInvoice(csvLine);
if (counter.incrementAndGet() % (updateFrequency.equals(0) ? 1 : updateFrequency) == 0) {
FileUpload fileUpload1 = fileUploadRepository.findOne(fileUpload.getId());
fileUpload1.setNumberOfSentRecords(sentCount.get());
fileUploadRepository.save(fileUpload1);
transaction.commit();
transaction.begin();
batchEntityManager.clear();
}
});
transaction.commit();
} catch (IOException ex) {
systemException.incrementAndGet();
log.error("Unexpected error while performing send task.", ex);
transaction.rollback();
}
// Update FileUpload status.
FileUpload fileUpload1 = fileUploadRepository.findOne(fileUpload.getId());
fileUpload1.setNumberOfSentRecords(sentCount.get());
if (systemException.get() != 0) {
fileUpload1.setStatus(FileUploadStatus.SYSTEM_ERROR);
} else {
fileUpload1.setStatus(FileUploadStatus.SENT);
}
fileUploadRepository.save(fileUpload1);
batchEntityManager.close();
Most of the DB queries were select statements that return the same results for each record being inserted. It was obvious that enabling EhCache as a second-level cache would yield a significant performance improvement. However, this application has been running flawlessly in production for several years without EhCache enabled. I am hesitant to turn this on globally as I do not know how it will affect the large number of other repositories/queries.
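For what it's worth, the second-level cache is opt-in per entity, so one way to limit the blast radius is to annotate only the reference data the invoice validation keeps re-reading. A hedged sketch; TaxCode is a made-up entity name, not from the original application:

import javax.persistence.Cacheable;
import javax.persistence.Entity;
import javax.persistence.Id;
import org.hibernate.annotations.Cache;
import org.hibernate.annotations.CacheConcurrencyStrategy;

@Entity
@Cacheable
@Cache(usage = CacheConcurrencyStrategy.READ_ONLY) // reference data that does not change at runtime
public class TaxCode {

    @Id
    private Long id;

    private String code;

    // getters and setters omitted
}

With the default ENABLE_SELECTIVE shared-cache mode, entities without these annotations stay uncached even when hibernate.cache.use_second_level_cache is switched on, which may make a global enable less risky than it first appears.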
Question 1
Is there a way to configure an "alternate" EntityManagerFactory that uses the second-level cache for this batch process only? How can I choose to use this factory instead of the primary one for this batch process only?
I experimented with adding something like the configuration below to my Spring config. I can easily inject this additional EntityManager into my class and use it. However, the existing repositories (such as FileUploadRepository) don't seem to use it - they just return null. I am not sure if this approach is possible. The documentation for JpaTransactionManager says
This transaction manager is appropriate for applications that use a single JPA EntityManagerFactory for transactional data access
which is exactly what I am not doing. So what other options are there?
<bean id="batchEntityManagerFactory"
class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean" depends-on="flyway">
<property name="dataSource" ref="dataSource" />
<property name="packagesToScan" value="my.package.domain" />
<property name="loadTimeWeaver">
<bean class="org.springframework.instrument.classloading.InstrumentationLoadTimeWeaver" />
</property>
<property name="jpaVendorAdapter">
<bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter" />
</property>
<property name="jpaProperties">
<props>
<prop key="hibernate.hbm2ddl.auto">${hibernate.hbm2ddl.auto:validate}</prop>
<prop key="hibernate.dialect">org.hibernate.dialect.PostgreSQL9Dialect</prop>
<prop key="hibernate.show_sql">${hibernate.show_sql:false}</prop>
<prop key="hibernate.generate_statistics">${hibernate.generate_statistics:false}</prop>
<prop key="hibernate.ejb.interceptor">my.package.HibernateStatistics</prop>
<prop key="hibernate.cache.use_query_cache">true</prop>
<prop key="hibernate.cache.use_second_level_cache">true</prop>
<prop key="hibernate.cache.region.factory_class">org.hibernate.cache.ehcache.EhCacheRegionFactory</prop>
<prop key="hibernate.jdbc.batch_size">100</prop>
<prop key="hibernate.order_inserts">true</prop>
<prop key="hibernate.order_updates">true</prop>
</props>
</property>
<property name="persistenceUnitName" value="entityManagerFactory" />
</bean>
Question 2
Assuming there is no other option to "selectively" use EhCache, I tried enabling it on the primary EntityManagerFactory only. We can certainly do regression testing to make sure we don't introduce new issues. I assume this is fairly safe to do? However, another issue came up. I am trying to commit inserts in batches as described in this post and shown in my code above. I am getting a RollbackException when trying to commit the batch due to Connection org.postgresql.jdbc.PgConnection@1e7eb804 is closed. I presume this is due to the maxWaitMillis property on the dataSource.
I don't want to change this property for every other existing spring Service/Repository/query in the application. If I could use the "custom" EntityManagerFactory I could easily provide a different DataSource Bean with the properties I want. Again, is this possible?
Maybe I'm looking at this problem all wrong. Are there any other suggestions?
You can have another EntityManagerFactory bean with a different qualifier, so that's one option. I would still recommend you look into these select queries. I bet the problem is just a missing index which causes the database to do full table scans. If you add the proper indexes, you probably don't need to change a thing in your application.
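A hedged sketch of what that wiring could look like on the consuming side, assuming the batch factory is given its own persistence unit name ("batchPU" here is made up) instead of reusing "entityManagerFactory":

import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.PersistenceUnit;
import org.springframework.stereotype.Service;

@Service
public class BatchInvoiceLoader {

    // Resolved by persistence unit name, so it picks the batch-tuned factory, not the primary one.
    @PersistenceUnit(unitName = "batchPU")
    private EntityManagerFactory batchEntityManagerFactory;

    public void load(Iterable<String> csvLines) {
        EntityManager em = batchEntityManagerFactory.createEntityManager();
        try {
            // batch inserts against the cached, batch-sized unit go here
        } finally {
            em.close();
        }
    }
}

Spring Data repositories can likewise be bound to the batch factory by declaring a second <jpa:repositories> element with its own base-package and entity-manager-factory-ref.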
The problem
I have an application built on Spring 4, Hibernate 5 and Spring Data JPA 1.7. I use PostgreSQL as the database. I'd like to use Hibernate's support for multi-tenancy, but I'm having trouble correctly implementing MultiTenantConnectionProvider. I'd like to use the SCHEMA strategy for separating tenants, and I'd prefer to use a single DataSource.
My current implementation of MultiTenantConnectionProvider's method getConnection() looks like this:
@Override
public Connection getConnection(String tenantIdentifier) throws SQLException {
    final Connection connection = getAnyConnection();
    try {
        connection.createStatement().execute("SET SCHEMA '" + tenantIdentifier + "'");
    }
    catch (SQLException e) {
        throw new HibernateException("Could not alter JDBC connection to specified schema [" + tenantIdentifier + "]", e);
    }
    return connection;
}
I take the connection from my DataSource, which I inject by implementing the interface ServiceRegistryAwareService, but I'm not sure this is the right way.
The method gets called when it should, with the correct tenantIdentifier (coming from my implementation of CurrentTenantIdentifierResolver), and the statement is executed, but in practice it is useless. The problem is that the queries generated by Hibernate contain fully qualified table names including the default schema. Can I tell Hibernate to omit the default schema from queries? Or should I use a completely different approach?
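For context, a minimal sketch of the kind of CurrentTenantIdentifierResolver referred to above (the ThreadLocal holder and the "public" fallback are assumptions, not the original implementation):

import org.hibernate.context.spi.CurrentTenantIdentifierResolver;

public class ThreadLocalTenantResolver implements CurrentTenantIdentifierResolver {

    private static final ThreadLocal<String> CURRENT = new ThreadLocal<>();

    public static void setTenant(String tenant) {
        CURRENT.set(tenant);
    }

    @Override
    public String resolveCurrentTenantIdentifier() {
        String tenant = CURRENT.get();
        return tenant != null ? tenant : "public"; // assumed default schema
    }

    @Override
    public boolean validateExistingCurrentSessions() {
        return true;
    }
}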
My configuration
I don't want to clutter my question with too much configuration, so I'll paste here what I believe is relevant. If I missed something important, please let me know.
This is part of my application context XML:
<bean id="dataSource" class="org.springframework.jdbc.datasource.DriverManagerDataSource">
<property name="driverClassName" value="org.postgresql.Driver"/>
<property name="url" value="${database.url}"/>
<property name="username" value="${database.username}"/>
<property name="password" value="${database.password}"/>
</bean>
<bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
<property name="dataSource" ref="dataSource" />
<property name="persistenceUnitName" value="pu" />
<property name="jpaVendorAdapter">
<bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter">
<property name="showSql" value="false" />
<property name="databasePlatform" value="org.hibernate.dialect.PostgreSQL9Dialect" />
</bean>
</property>
</bean>
<bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager">
<property name="entityManagerFactory" ref="entityManagerFactory" />
<property name="dataSource" ref="dataSource" />
</bean>
Side question
Is there any difference between hibernate.multiTenancy values SCHEMA and DATABASE, from Hibernate's point of view? I understand the conceptual difference between these two, but from looking at the source code, it seems to me like these two are completely interchangeable and all the logic is hidden in the implementation of MultiTenantConnectionProvider. Am I missing something?
I have the following Spring/Hibernate configs:
<bean id="sessionFactory" class="org.springframework.orm.hibernate4.LocalSessionFactoryBean">
<property name="dataSource" ref="jndiDataSource" />
<property name="annotatedClasses">
<list>
...
</list>
</property>
<property name="hibernateProperties">
<props>
<prop key="hibernate.dialect">org.hibernate.dialect.Oracle10gDialect</prop>
<prop key="hibernate.show_sql">true</prop>
</props>
</property>
</bean>
<tx:annotation-driven transaction-manager="transactionManager"/>
<bean class="org.springframework.dao.annotation.PersistenceExceptionTranslationPostProcessor"/>
<bean id="transactionManager"
class="org.springframework.orm.hibernate4.HibernateTransactionManager"
p:sessionFactory-ref="sessionFactory" />
I am extensively using spring transaction annotations to control transaction propagation methods throughout my code base. It is all working great.
I have a need to perform an update that will affect many rows at once, and I want to blend it in with the rest of my transaction logic. The last thing I want to do is be a bull in a china shop, loading all the objects into memory and looping through them to change them one at a time.
I am trying something like:
@Autowired
private SessionFactory sessionFactory;
...
String hql = "UPDATE TABLE_NAME SET COLUMN_LIST WHERE ID IN :list";
// I see the transaction and get an error when I try to start a new one.
Transaction trans = sessionFactory.getCurrentSession().getTransaction();
Query query = sessionFactory.getCurrentSession().createSQLQuery(hql);
query.setParameterList("list", listOfIds);
success = (query.executeUpdate() == listOfIds.size());
I have tried dropping this into a method with its own transaction instructions, but am not seeing any evidence that it is being executed (other than a lack of errors).
What is the recommended way to include custom Hibernate SQL into Spring managed transactions?
Thanks a lot.
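For reference, a minimal sketch of how such a native bulk update can ride along in a Spring-managed transaction without any manual Transaction handling (the service class, method and column names are assumptions, not the asker's code):

import java.util.List;
import org.hibernate.SessionFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class BulkUpdateService {

    @Autowired
    private SessionFactory sessionFactory;

    @Transactional
    public int deactivate(List<Long> listOfIds) {
        // executeUpdate() runs inside the surrounding Spring-managed transaction;
        // there is no need to touch the Hibernate Transaction object directly.
        return sessionFactory.getCurrentSession()
                .createSQLQuery("UPDATE TABLE_NAME SET ACTIVE = 0 WHERE ID IN (:list)")
                .setParameterList("list", listOfIds)
                .executeUpdate();
    }
}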
I'm a newbie with Java and I need to create a console application that is going to connect to 4 databases (Access, VFP, MySQL and SQL Server).
I started with hibernate.cfg.xml files and managed to configure them, one for each database. Then I realized that JPA was a better solution, so I changed all my Hibernate files to a persistence.xml file with 4 persistence units.
The databases are working well, but to use them I have to write a lot of code. This is an example:
EntityManagerFactory dbPersistence =
        Persistence.createEntityManagerFactory("oneOfMyDatabases");
EntityManager em = dbPersistence.createEntityManager();
Query query = em.createQuery("from ProductEntity").setMaxResults(10);
for (Object o : query.getResultList()) {
    ProductEntity c = (ProductEntity) o;
    System.out.println("Product " + c.getName());
}
dbPersistence.close();
I need to update one of the databases with data from the other databases.
It's a pain to write all the code like this, so I was thinking about creating repositories, but I can't see how to create them with a different EntityManager for each database.
I tried to inject the managers with Google Guice without success; I couldn't work out how or where to close the persistence connection.
Finally, I've found Spring Data and it seems to be what I need, but I don't really understand how to use it.
I need some guidance to get it working, because I've read tons of tutorials and each of them appears to be different:
· Can I use the same persistence.xml or do I need another configuration file? I've seen that Spring Data has JPA compatibility but I'm not sure how it works.
· Is Spring Context the IoC container of the Spring Framework? Do I need it to work with Spring Data?
· What else do I need?
Thank you in advance
Each database is going to be represented by a different data source. For every data source you need a different session factory/entity manager factory.
If you want to save to more than one data source in a single transaction, you need XA transactions, and therefore a Java EE container or a stand-alone transaction manager like Bitronix or Atomikos.
You have 4 different entity manager factories, you also need specific repositories for each of those:
<jpa:repositories base-package="your.company.project.repository.access" entity-manager-factory-ref="accessEntityManagerFactory"/>
<jpa:repositories base-package="your.company.project.repository.sqlserver" entity-manager-factory-ref="sqlserverEntityManagerFactory"/>
Then your application doesn't have to care which repository it uses.
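A hedged sketch of one such repository, matching the first base-package above (the entity's package, the ID type and the interface name are assumptions):

package your.company.project.repository.access;

import org.springframework.data.jpa.repository.JpaRepository;

import your.company.project.domain.ProductEntity;

// Lives under the base-package bound to accessEntityManagerFactory, so every query it
// issues goes to the Access data source without the caller knowing or caring.
public interface AccessProductRepository extends JpaRepository<ProductEntity, Long> {
}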
The JpaTransactionManager requires one entityManagerFactory, but since you have 4 you may end up creating four of those, which I think is undesirable.
It's better to switch to JTA instead:
<!-- 1. You define the Bitronix config -->
<bean id="btmConfig" factory-method="getConfiguration" class="bitronix.tm.TransactionManagerServices">
<property name="serverId" value="spring-btm"/>
<property name="warnAboutZeroResourceTransaction" value="true"/>
<property name="logPart1Filename" value="${btm.config.logpart1filename}"/>
<property name="logPart2Filename" value="${btm.config.logpart2filename}"/>
<property name="journal" value="${btm.config.journal:disk}"/>
</bean>
<!-- 2. You define all your data sources -->
<bean id="dataSource" class="bitronix.tm.resource.jdbc.PoolingDataSource" init-method="init"
destroy-method="close">
<property name="className" value="${jdbc.driverClassName}"/>
<property name="uniqueName" value="dataSource"/>
<property name="minPoolSize" value="0"/>
<property name="maxPoolSize" value="5"/>
<property name="allowLocalTransactions" value="false"/>
<property name="driverProperties">
<props>
<prop key="user">${jdbc.username}</prop>
<prop key="password">${jdbc.password}</prop>
<prop key="url">${jdbc.url}</prop>
</props>
</property>
</bean>
<!-- 3. For each data source you create a new persistenceUnitManager and give it its own specific persistence.xml -->
<bean id="persistenceUnitManager" depends-on="transactionManager"
class="org.springframework.orm.jpa.persistenceunit.DefaultPersistenceUnitManager">
<property name="persistenceXmlLocation" value="classpath*:META-INF/persistence.xml"/>
<property name="defaultDataSource" ref="dataSource"/>
<property name="dataSourceLookup">
<bean class="org.springframework.jdbc.datasource.lookup.BeanFactoryDataSourceLookup"/>
</property>
</bean>
<!-- JpaDialect must be configured for transactionManager to make JPA and JDBC share transactions -->
<bean id="jpaDialect" class="org.springframework.orm.jpa.vendor.HibernateJpaDialect"/>
<!-- 4. For each data source you create a new entityManagerFactory -->
<bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
<property name="persistenceUnitName" value="persistenceUnit"/>
<property name="persistenceUnitManager" ref="persistenceUnitManager"/>
<property name="jpaDialect" ref="jpaDialect"/>
</bean>
<!-- 5. You have only one JTA transaction manager -->
<bean id="jtaTransactionManager" factory-method="getTransactionManager"
class="bitronix.tm.TransactionManagerServices" depends-on="btmConfig, dataSource"
destroy-method="shutdown"/>
<!-- 6. You have only one Spring transaction manager abstraction layer -->
<bean id="transactionManager" class="org.springframework.transaction.jta.JtaTransactionManager">
<property name="transactionManager" ref="jtaTransactionManager"/>
<property name="userTransaction" ref="jtaTransactionManager"/>
</bean>
If this proves too much work for your current problem, you can give having 4 JPA transaction managers a try and see how it works.
If your schema is the same across all the databases, then the best approach is to use Hibernate, because Hibernate provides portability between multiple databases. Once you create the hbm files and POJO classes, the only things you need to change are the dialect and your datasource.
Transaction manager support can be provided by Spring.
I have a method marked as @Transactional.
It calls several functions: the first uses JDBC, the second Hibernate, and the third JDBC again.
The problem is that the changes made by the Hibernate function are not visible to the last function, which works with JDBC.
@Transactional
void update() {
    jdbcUpdate1();
    hibernateupdate1();
    jdbcUpdate2(); // results of hibernateupdate1() are not visible here
}
All functions are configured to use the same datasource:
<bean id="myDataSource" class="org.springframework.jdbc.datasource.TransactionAwareDataSourceProxy">
<property name="targetDataSource" ref="targetDataSource"/>
</bean>
<bean id="targetDataSource" class="org.apache.commons.dbcp.BasicDataSource"
destroy-method="close" lazy-init="true" scope="singleton">
<!-- settings here -->
</bean>
The myDataSource bean is used in the code. myDataSource.getConnection() is used to work with connections in the JDBC functions, and
getHibernateTemplate().execute(new HibernateCallback() {
public Object doInHibernate(Session session) throws HibernateException, SQLException {
...
}
});
is used in the Hibernate function.
Thanks.
First, avoid using JDBC when using Hibernate.
Then, if you really need it, use Session.doWork(..). If your Hibernate version does not yet have this method, obtain the Connection from session.connection().
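A minimal sketch of Session.doWork, which has been available since Hibernate 3.2 (the SQL, table and method names are illustrative):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import org.hibernate.Session;
import org.hibernate.jdbc.Work;

public class JdbcInsideHibernate {

    public void touch(Session session, final long id) {
        session.doWork(new Work() {
            @Override
            public void execute(Connection connection) throws SQLException {
                // Runs on the same JDBC connection the Hibernate session is using.
                PreparedStatement ps = connection.prepareStatement(
                        "UPDATE some_table SET updated = 1 WHERE id = ?");
                try {
                    ps.setLong(1, id);
                    ps.executeUpdate();
                } finally {
                    ps.close();
                }
            }
        });
    }
}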
You can use JDBC and Hibernate in the same transaction if you use the right Spring setup:
<bean id="sessionFactory" class="org.springframework.orm.hibernate3.LocalSessionFactoryBean">
<property name="dataSource" ref="dataSource"/>
</bean>
<bean id="transactionManager" class="org.springframework.orm.hibernate3.HibernateTransactionManager">
<property name="sessionFactory" ref="sessionFactory"/>
</bean>
<bean id="myDao" class="org.springframework.transaction.interceptor.TransactionProxyFactoryBean">
<property name="transactionManager" ref="transactionManager"/>
<property name="target">
<bean class="MyDaoImpl">
<property name="dataSource" ref="dataSource"/>
<property name="sessionFactory" ref="sessionFactory"/>
</bean>
</property>
<property name="transactionAttributes">
<props>
<prop key="get*">PROPAGATION_SUPPORTS,readOnly</prop>
<prop key="*">PROPAGATION_REQUIRED</prop>
</props>
</property>
</bean>
This assumes that the JDBC portion of your DAO uses JdbcTemplate. If it doesn't, you have a few options:
Use DataSourceUtils.getConnection(javax.sql.DataSource) to get a connection
Wrap the DataSource you pass to your DAO (but not necessarily the one you pass to the SessionFactory) with a TransactionAwareDataSourceProxy
The latter is preferred since it hides the DataSourceUtils.getConnection inside the proxy datasource.
This is of course the XML path; it should be easy to convert this to an annotation-based setup.
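A hedged sketch of the first option, for a DAO that uses raw JDBC rather than JdbcTemplate (the table and column names are illustrative):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;
import org.springframework.jdbc.datasource.DataSourceUtils;

public class PlainJdbcPart {

    private final DataSource dataSource;

    public PlainJdbcPart(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public void update(long id) throws SQLException {
        // Joins the Spring-managed transaction bound to this DataSource, if one is active.
        Connection con = DataSourceUtils.getConnection(dataSource);
        try {
            PreparedStatement ps = con.prepareStatement("UPDATE some_table SET flag = 1 WHERE id = ?");
            try {
                ps.setLong(1, id);
                ps.executeUpdate();
            } finally {
                ps.close();
            }
        } finally {
            // Returns the connection without closing the ongoing transaction.
            DataSourceUtils.releaseConnection(con, dataSource);
        }
    }
}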
The problem is that operations on the Hibernate engine do not result in immediate SQL execution. You can trigger it manually by calling flush on the Hibernate session. Then the changes made in Hibernate will be visible to the SQL code within the same transaction, as long as you use DataSourceUtils.getConnection to get the SQL connection, because only then will they run in the same transaction.
In the opposite direction this is trickier, because you have the first-level cache (session cache), and possibly also a second-level cache. With a second-level cache, changes made directly to the database will be invisible to Hibernate, if the row is cached, until the cache expires.
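Putting that together with the earlier answer, a hedged sketch of the ordering inside the update() method from the question (the flush call and the DataSourceUtils handling are the suggested additions; the method names mirror the question, the rest is assumed):

import java.sql.Connection;
import javax.sql.DataSource;
import org.hibernate.SessionFactory;
import org.springframework.jdbc.datasource.DataSourceUtils;
import org.springframework.transaction.annotation.Transactional;

public class MixedUpdate {

    private SessionFactory sessionFactory;
    private DataSource myDataSource;

    @Transactional
    public void update() {
        hibernateupdate1();                          // Hibernate changes are queued in the session
        sessionFactory.getCurrentSession().flush();  // force them to SQL inside this transaction
        Connection con = DataSourceUtils.getConnection(myDataSource);
        try {
            jdbcUpdate2(con);                        // plain JDBC now sees the flushed rows
        } finally {
            DataSourceUtils.releaseConnection(con, myDataSource);
        }
    }

    private void hibernateupdate1() { /* ... */ }
    private void jdbcUpdate2(Connection con) { /* ... */ }
}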