Configure transactions in Spring 4.1.5 without XML (Java)

I'm writing an application which connects to an Oracle database. I call a function in the DB which inserts new records into a table, and after this call I can decide what I want to do: commit or rollback.
Unfortunately I'm new to Spring, so I have problems with the configuration. What's more, I want to do this configuration in a Java class, not in XML. And here I need your help.
UPDATED CODE:
ApplicationConfig code:
@Configuration
@EnableTransactionManagement
@ComponentScan("hr")
@PropertySource({"classpath:jdbc.properties", "classpath:functions.properties", "classpath:procedures.properties"})
public class ApplicationConfig {

    @Autowired
    private Environment env;

    @Bean(name = "dataSource")
    public DataSource dataSource() {
        BasicDataSource dataSource = new BasicDataSource();
        dataSource.setDriverClassName(env.getProperty("jdbc.driver"));
        dataSource.setUrl(env.getProperty("jdbc.url"));
        dataSource.setUsername(env.getProperty("jdbc.username"));
        dataSource.setPassword(env.getProperty("jdbc.password"));
        dataSource.setDefaultAutoCommit(false);
        return dataSource;
    }

    @Bean
    public JdbcTemplate jdbcTemplate(DataSource dataSource) {
        JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);
        jdbcTemplate.setResultsMapCaseInsensitive(true);
        return jdbcTemplate;
    }

    @Bean(name = "txName")
    public PlatformTransactionManager txManager() {
        DataSourceTransactionManager txManager = new DataSourceTransactionManager();
        txManager.setDataSource(dataSource());
        return txManager;
    }
}
I have a DAO and a service, both of which implement their respective interfaces.
Service implementation:
@Service
public class HumanResourcesServiceImpl implements HumanResourcesService {

    @Autowired
    private HumanResourcesDao hrDao;

    @Override
    public String generateData(int rowsNumber) {
        return hrDao.generateData(rowsNumber);
    }

    @Override
    @Transactional("txName")
    public void shouldCommit(boolean doCommit, Connection connection) throws SQLException {
        hrDao.shouldCommit(doCommit, connection);
    }
}
Dao implementation:
@Repository
public class HumanResourcesDaoImpl implements HumanResourcesDao {

    private JdbcTemplate jdbcTemplate;
    private SimpleJdbcCall generateData;

    @Autowired
    public HumanResourcesDaoImpl(JdbcTemplate jdbcTemplate, Environment env) {
        this.jdbcTemplate = jdbcTemplate;
        generateData = new SimpleJdbcCall(jdbcTemplate)
                .withProcedureName(env.getProperty("procedure.generateData"));
    }

    @Override
    public String generateData(int rowsNumber) {
        HashMap<String, Object> params = new HashMap<>();
        params.put("i_rowsNumber", rowsNumber);
        Map<String, Object> m = generateData.execute(params);
        return (String) m.get("o_execution_time");
    }

    @Override
    @Transactional("txName")
    public void shouldCommit(boolean doCommit, Connection connection) throws SQLException {
        if (doCommit) {
            connection.commit();
        } else {
            connection.rollback();
        }
    }
}
Main class code:
public class Main extends Application implements Initializable {

    @Override
    public void initialize(URL url, ResourceBundle resourceBundle) {
        ApplicationContext context = new AnnotationConfigApplicationContext(ApplicationConfig.class);
        hrService = context.getBean(HumanResourcesService.class);
        BasicDataSource ds = (BasicDataSource) context.getBean("dataSource");
        Connection connection = ds.getConnection();
        //do something and call
        //hrService.generateData
        //do something and call
        //hrService.shouldCommit(true, connection);
        //which commits or rolls back the data generated by the previous call
    }
}
UPDATE:
I think the problem is with the connection, because this statement:
this.jdbcTemplate.getDataSource().getConnection();
creates a new connection, so there is nothing to commit or roll back. But I still can't figure out why this doesn't work properly. No errors, no new records...
What is weird is that when I debugged connection.commit(); I found out that in DelegatingConnection.java the parameter this holds the proper connection, but there is something like:
_conn.commit();
and _conn holds a different connection. Why?
Should I somehow synchronize the connection between those two methods, or is this only one connection? To be honest, I'm not sure how it works exactly. Is there one connection that all calls to stored procedures go through, or is a new connection created for each call?
The real question is: how do I commit or roll back the data inserted into the table by the previous call?

One easy way to do this is to annotate the method with @Transactional:

@Transactional
public void myBeanMethod() {
    ...
    if (!doCommit)
        throw new IllegalStateException(); // any unchecked exception will do
}

and Spring will roll all database changes back.
Remember to add @EnableTransactionManagement to your Spring application/main class.

You can use @Transactional and @EnableTransactionManagement to set up transactions without XML configuration. In short, annotate the methods/classes you want to have transactions with @Transactional. To set up transaction management, use @EnableTransactionManagement inside your @Configuration class.
See Spring's docs for examples of how to use both. @EnableTransactionManagement is detailed in the JavaDocs and should match the XML configuration.
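For illustration, here is a minimal, hypothetical sketch of that pattern (the AccountService class, its table, and its SQL are invented for the example; only the annotations mirror the documented setup, and the DataSource/transaction manager beans would be the ones from the question's ApplicationConfig):

@Service
class AccountService {

    private final JdbcTemplate jdbcTemplate;

    AccountService(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    // Spring opens a transaction before the method and commits it when the method returns;
    // any unchecked exception thrown inside triggers a rollback instead.
    @Transactional
    public void transfer(long fromId, long toId, long amount) {
        jdbcTemplate.update("UPDATE accounts SET balance = balance - ? WHERE id = ?", amount, fromId);
        jdbcTemplate.update("UPDATE accounts SET balance = balance + ? WHERE id = ?", amount, toId);
    }
}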
UPDATE
The problem is that you are mixing raw JDBC calls (java.sql.Connection) with Spring JDBC. When you execute your SimpleJdbcCall, Spring creates a new Connection. This is not the same Connection as the one you later try to commit. Hence, nothing happens when you perform the commit. I tried to somehow get the connection that the SimpleJdbcCall uses, but could not find any easy way.
To test this I tried the following (I did not use params):
@Override
public String generateData(int rowsNumber) {
    //HashMap<String, Object> params = new HashMap<>();
    //params.put("i_rowsNumber", rowsNumber);
    //Map<String, Object> m = generateData.execute(params);
    Connection targetConnection = DataSourceUtils.getTargetConnection(generateData.getJdbcTemplate().getDataSource().getConnection());
    System.out.println(targetConnection.prepareCall(generateData.getCallString()).execute());
    targetConnection.commit();
    return null; // the execution time is not captured in this experiment
}
If I don't save the targetConnection, and instead try to get the connection again by calling DataSourceUtils.getTargetConnection() when committing, nothing happens. Thus, you must commit on the same connection that you perform the statement on. This does not seem to be easy, nor the proper way.
The solution is to drop the java.sql.Connection.commit() call and instead use Spring transactions completely. If you put @Transactional on the method that performs the database call, Spring will automatically commit when the method finishes. If the method body throws any unchecked exception (by default, even one raised outside the actual database call), it will automatically roll back. In other words, this should suffice for normal transaction management.
However, if you are doing batch processing and wish to have more control over your transactions, with explicit commits and rollbacks, you can still use Spring. To programmatically control transactions with Spring, you can use TransactionTemplate: it commits when the callback completes normally, and rolls back when the callback throws or marks the transaction as rollback-only. I don't have time to give you proper samples right now, but may do so in the coming days if you are still stuck ;)
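In the meantime, a rough, hypothetical sketch of the TransactionTemplate approach, assuming the DAO's SimpleJdbcCall runs against the same DataSource that the transaction manager wraps (the BatchService class and its method name are invented for the example):

@Service
public class BatchService {

    private final TransactionTemplate transactionTemplate;
    private final HumanResourcesDao hrDao;

    public BatchService(PlatformTransactionManager txManager, HumanResourcesDao hrDao) {
        this.transactionTemplate = new TransactionTemplate(txManager);
        this.hrDao = hrDao;
    }

    public void generateAndMaybeKeep(int rowsNumber, boolean doCommit) {
        transactionTemplate.execute(status -> {
            String executionTime = hrDao.generateData(rowsNumber);
            if (!doCommit) {
                // mark the transaction for rollback; Spring discards the inserts on exit
                status.setRollbackOnly();
            }
            return executionTime;
        });
        // the template commits automatically unless setRollbackOnly() was called
        // or the callback threw an exception
    }
}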

@Configuration
@EnableTransactionManagement
@ComponentScan(basePackages = "org.saat")
@PropertySource(value = "classpath:resources/db.properties", ignoreResourceNotFound = true)
public class AppConfig {

    @Autowired
    private Environment env;

    @Bean(name = "dataSource")
    public DataSource getDataSource() {
        DriverManagerDataSource dataSource = new DriverManagerDataSource();
        dataSource.setDriverClassName(env.getProperty("db.driver"));
        dataSource.setUrl(env.getProperty("db.url"));
        dataSource.setUsername(env.getProperty("db.username"));
        dataSource.setPassword(env.getProperty("db.password"));
        return dataSource;
    }

    @Bean(name = "entityManagerFactoryBean")
    public LocalContainerEntityManagerFactoryBean getSessionFactory() {
        LocalContainerEntityManagerFactoryBean factoryBean = new LocalContainerEntityManagerFactoryBean();
        factoryBean.setDataSource(getDataSource());
        factoryBean.setPackagesToScan("org.saat");
        factoryBean.setJpaVendorAdapter(getJpaVendorAdapter());
        Properties props = new Properties();
        props.put("hibernate.dialect", env.getProperty("hibernate.dialect"));
        props.put("hibernate.hbm2ddl.auto", env.getProperty("hibernate.hbm2ddl.auto"));
        props.put("hibernate.show_sql", env.getProperty("hibernate.show_sql"));
        factoryBean.setJpaProperties(props);
        return factoryBean;
    }

    @Bean(name = "transactionManager")
    public JpaTransactionManager getTransactionManager() {
        JpaTransactionManager jpaTransactionManager = new JpaTransactionManager();
        jpaTransactionManager.setEntityManagerFactory(getSessionFactory().getObject());
        return jpaTransactionManager;
    }

    @Bean
    public JpaVendorAdapter getJpaVendorAdapter() {
        return new HibernateJpaVendorAdapter();
    }
}

Related

How to use autowired repositories in Spring Batch integration test?

I am facing some issues while writing integration tests for Spring Batch jobs. The main problem is that an exception is thrown whenever a transaction is started inside the batch job.
Well, first things first. Imagine this is the step of a simple job. A Tasklet for the sake of simplicity. Of course, it is used in a proper batch config (MyBatchConfig) which I also omit for brevity.
@Component
public class SimpleTask implements Tasklet {

    private final MyRepository myRepository;

    public SimpleTask(MyRepository myRepository) {
        this.myRepository = myRepository;
    }

    @Override
    public RepeatStatus execute(StepContribution contribution, ChunkContext chunkContext) throws Exception {
        myRepository.deleteAll(); // or maybe saveAll() or some other @Transactional method
        return RepeatStatus.FINISHED;
    }
}
MyRepository is a very unspecial CrudRepository.
Now, to test that job I use the following test class.
@SpringBatchTest
@EnableAutoConfiguration
@SpringJUnitConfig(classes = {
        H2DataSourceConfig.class, // <-- this is a configuration bean for an in-memory testing database
        MyBatchConfig.class
})
public class MyBatchJobTest {

    @Autowired
    private JobLauncherTestUtils jobLauncherTestUtils;

    @Autowired
    private JobRepositoryTestUtils jobRepositoryTestUtils;

    @Autowired
    private MyRepository myRepository;

    @Test
    public void testJob() throws Exception {
        var testItems = List.of(
                new MyTestItem(1),
                new MyTestItem(2),
                new MyTestItem(3)
        );
        myRepository.saveAll(testItems); // <--- works perfectly well
        jobLauncherTestUtils.launchJob();
    }
}
When it comes to the tasklet execution and more precisely to the deleteAll() method call this exception is fired:
org.springframework.transaction.CannotCreateTransactionException: Could not open JPA EntityManager for transaction; nested exception is java.lang.IllegalStateException: Already value [org.springframework.jdbc.datasource.ConnectionHolder@68f48807] for key [org.springframework.jdbc.datasource.DriverManagerDataSource@49a6f486] bound to thread [SimpleAsyncTaskExecutor-1]
at org.springframework.orm.jpa.JpaTransactionManager.doBegin(JpaTransactionManager.java:448)
...
Do you have any ideas why this is happening?
As a workaround I currently mock the repository with @MockBean and back it with an ArrayList but this is not what the inventor intended, I guess.
Any advice?
Kind regards
Update 1.1 (includes solution)
The mentioned data source configuration class is
@Configuration
@EnableJpaRepositories(
        basePackages = {"my.project.persistence.repository"},
        entityManagerFactoryRef = "myTestEntityManagerFactory",
        transactionManagerRef = "myTestTransactionManager"
)
@EnableTransactionManagement
public class H2DataSourceConfig {

    @Bean
    public DataSource myTestDataSource() {
        var dataSource = new DriverManagerDataSource();
        dataSource.setDriverClassName("org.h2.Driver");
        dataSource.setUrl("jdbc:h2:mem:myDb;DB_CLOSE_DELAY=-1");
        return dataSource;
    }

    @Bean
    public LocalContainerEntityManagerFactoryBean myTestEntityManagerFactory() {
        var emFactory = new LocalContainerEntityManagerFactoryBean();
        var adapter = new HibernateJpaVendorAdapter();
        adapter.setDatabasePlatform("org.hibernate.dialect.H2Dialect");
        adapter.setGenerateDdl(true);
        emFactory.setDataSource(myTestDataSource());
        emFactory.setPackagesToScan("my.project.persistence.model");
        emFactory.setJpaVendorAdapter(adapter);
        return emFactory;
    }

    @Bean
    public PlatformTransactionManager myTestTransactionManager() {
        return new JpaTransactionManager(myTestEntityManagerFactory().getObject());
    }

    @Bean
    public BatchConfigurer testBatchConfigurer() {
        return new DefaultBatchConfigurer() {
            @Override
            public PlatformTransactionManager getTransactionManager() {
                return myTestTransactionManager();
            }
        };
    }
}
By default, when you declare a DataSource in your application context, Spring Batch will use a DataSourceTransactionManager to drive step transactions, but this transaction manager knows nothing about your JPA context.
If you want to use another transaction manager, you need to override BatchConfigurer#getTransactionManager and return the transaction manager you want to drive step transactions with. In your case, only declaring a transaction manager bean in the application context is not enough. Here is a quick example:
@Bean
public BatchConfigurer batchConfigurer() {
    return new DefaultBatchConfigurer() {
        @Override
        public PlatformTransactionManager getTransactionManager() {
            return new JpaTransactionManager(myTestEntityManagerFactory().getObject());
        }
    };
}
For more details, please refer to the reference documentation.

Why does SimpleJdbcCall ignore the @Transactional annotation?

I want to do some DB-related actions in a service method. Initially it looks like this:
@Override
@Transactional
public void addDirectory(Directory directory) {
    //some checks here
    directoryRepo.save(directory);
    rsdhUtilsService.createPhysTable(directory);
}
The first method, directoryRepo.save(directory);, is a simple JPA save action; the second, rsdhUtilsService.createPhysTable(directory);, is a JdbcTemplate stored procedure call from its own service. The problem is: if any exception occurs within the JPA or SimpleJdbcCall action, the transaction rolls back and nothing related to JPA is persisted, but if the exception occurs only within the JPA action, the result of the SimpleJdbcCall is not affected by the transaction rollback.
To illustrate this behaviour I've removed the JPA action, marked @Transactional as (readOnly = true), and moved all the JdbcTemplate-related logic from the other service into the current one.
@Service
public class DirectoriesServiceImpl implements DirectoriesService {

    private final DirectoryRepo directoryRepo;
    private final MapSQLParamUtils sqlParamUtils;
    private final JdbcTemplate jdbcTemplate;

    @Autowired
    public DirectoriesServiceImpl(DirectoryRepo directoryRepo, MapSQLParamUtils sqlParamUtils, JdbcTemplate jdbcTemplate) {
        this.directoryRepo = directoryRepo;
        this.sqlParamUtils = sqlParamUtils;
        this.jdbcTemplate = jdbcTemplate;
    }

    @Override
    @Transactional(readOnly = true)
    public void addDirectory(Directory directory) {
        directoryRepo.save(directory);
        new SimpleJdbcCall(jdbcTemplate).withSchemaName("RSDH_DICT").withCatalogName("UTL_DICT")
                .withFunctionName("create_dict")
                .executeFunction(String.class, sqlParamUtils.getMapSqlParamForCreatePhysTable(directory));
    }
}
As a result, the @Transactional annotation is ignored and I can see new records persisted in the DB.
I've got only one DataSource, configured via application.properties, and here is how the JdbcTemplate is configured:
@Component
class MapSQLParamUtils {

    private final DataSource dataSource;

    @Autowired
    MapSQLParamUtils(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    @Bean
    public JdbcTemplate jdbcTemplate() {
        return new JdbcTemplate(dataSource);
    }
}
So my questions are: why is @Transactional ignored by SimpleJdbcCall, and how do I configure JPA and JdbcTemplate to use the same transaction manager?
UPDATE:
This is how I use this service in the controller:
@RestController
@RequestMapping(value = "/api/v1/directories")
public class DirectoriesRESTControllerV1 {

    private final DirectoriesService directoriesService;

    @Autowired
    public DirectoriesRESTControllerV1(DirectoriesService directoriesService) {
        this.directoriesService = directoriesService;
    }

    @PostMapping
    @PreAuthorize("hasPermission('DIRECTORIES_USER', 'W')")
    public ResponseEntity createDirectory(@NotNull @RequestBody DirectoryRequestDTO createDirectoryRequestDTO) {
        Directory directoryFromRequest = ServiceUtils.convertDtoToEntity(createDirectoryRequestDTO);
        directoriesService.addDirectory(directoryFromRequest);
        return ResponseEntity.noContent().build();
    }
}
As mentioned earlier, the problem here is that JPA does not execute SQL statements as soon as the repository methods are called. To force it, you can call entityManager.flush() explicitly:
@Autowired
private javax.persistence.EntityManager entityManager;
...

@Override
@Transactional(readOnly = true)
public void addDirectory(Directory directory) {
    directoryRepo.save(directory);
    entityManager.flush();
    new SimpleJdbcCall(jdbcTemplate).withSchemaName("RSDH_DICT").withCatalogName("UTL_DICT")
            .withFunctionName("create_dict")
            .executeFunction(String.class, sqlParamUtils.getMapSqlParamForCreatePhysTable(directory));
}
To see the real SQL queries issued by Hibernate you can enable the show_sql option; if your application is a Spring Boot app, this configuration enables it:
spring.jpa:
  show-sql: true
  properties:
    hibernate:
      format_sql: true
logging.level:
  org.hibernate.SQL: DEBUG
Regarding the transaction manager: in case the entityManager flush is not enough, you may need a composite transaction manager that handles both JPA and the DataSource. Spring Data Commons has ChainedTransactionManager. Note: you should be careful with it. I used it this way in my project:
@Bean(BEAN_CONTROLLER_TX)
public PlatformTransactionManager controllerTransactionManager(EntityManagerFactory entityManagerFactory) {
    return new JpaTransactionManager(entityManagerFactory);
}

@Bean(BEAN_ANALYTICS_TX)
public PlatformTransactionManager analyticsTransactionManager(DataSource dataSource) {
    return new DataSourceTransactionManager(dataSource);
}

/**
 * Chains the two transaction managers.
 *
 * @return chained transaction manager for the controller datasource and the analytics datasource
 */
@Primary
@Bean
public PlatformTransactionManager transactionManager(
        @Qualifier(BEAN_CONTROLLER_TX) PlatformTransactionManager controllerTransactionManager,
        @Qualifier(BEAN_ANALYTICS_TX) PlatformTransactionManager analyticsTransactionManager) {
    return new ChainedTransactionManager(controllerTransactionManager, analyticsTransactionManager);
}
Please try this:

@Transactional(rollbackFor = Exception.class)
public void addDirectory(Directory directory) {

By default, @Transactional only rolls back transactions for unchecked exceptions. For checked exceptions and their subclasses, it commits the data. So although an exception is raised here, because it's a checked exception, Spring ignores it and commits the data to the database.
So if you throw an Exception or a subclass of it, always use the above with the @Transactional annotation to tell Spring to roll back transactions when a checked exception occurs.
It's very simple, just use the following with @Transactional:
@Transactional(rollbackFor = Exception.class)

Spring JPA: How to update 2 different tables in 2 different `DataSource` in the same request?

In our application, we have a common database called central and every customer will have their own database with exactly the same set of tables. Each customer's database might be hosted on our own server or the customer's server based on the requirement of the customer organization.
To handle this multi-tenant requirement, we're extending the AbstractRoutingDataSource from Spring JPA and overriding the determineTargetDataSource() method to create a new DataSource and establish a new connection on the fly based on the incoming customerCode. We also use a simple DatabaseContextHolder class to store the current datasource context in a ThreadLocal variable. Our solution is similar to what is describe in this article.
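For context, here is a minimal sketch of what such a ThreadLocal-based context holder typically looks like; the question does not show this class, so the exact shape is illustrative, but it matches the way it is used in the snippet below:

// Illustrative only: the question references a DatabaseContextHolder but does not show it.
public final class DatabaseContextHolder {

    private static final ThreadLocal<String> CONTEXT = new ThreadLocal<>();

    private DatabaseContextHolder() {
    }

    public static void setDatabaseContext(String customerCode) {
        CONTEXT.set(customerCode);
    }

    public static String getDatabaseContext() {
        return CONTEXT.get();
    }

    public static void clear() {
        CONTEXT.remove();
    }
}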
Let's say in a single request, we'll need to update some data in both the central database and the customer's database as following.
public void createNewEmployeeAccount(EmployeeData employee) {
    DatabaseContextHolder.setDatabaseContext("central");
    // Code to save a user account for logging in to the system in the central database
    DatabaseContextHolder.setDatabaseContext(employee.getCustomerCode());
    // Code to save user details like Name, Designation, etc. in the customer's database
}
This code would only work if determineTargetDataSource() is called every time just before any SQL query gets executed, so that we can switch the DataSource dynamically halfway through our method.
However, from this Stackoverflow question, it seems like determineTargetDataSource() is only called once for each HttpRequest when a DataSource is being retrieved for the very first time in that request.
I'd be very grateful if you can give me some insights into when AbstractRoutingDataSource.determineTargetDataSource() actually gets called. Besides, if you've dealt with a similar multi-tenant scenario before, I'd love to hear your opinion on how I should deal with the updating of multiple DataSource in a single request.
We found a working solution, which is a mix of static data source settings for our central database and dynamic data source settings for our customers' databases.
In essence, we know exactly which table comes from which database. Hence, we were able to separate our @Entity classes into 2 different packages as follows.
com.ft.model
  -- central
     -- UserAccount.java
     -- UserAccountRepo.java
  -- customer
     -- UserProfile.java
     -- UserProfileRepo.java
Subsequently, we created two @Configuration classes to set up the data source settings for each package. For our central database, we use static settings as follows.
@Configuration
@EnableTransactionManagement
@EnableJpaRepositories(
        entityManagerFactoryRef = "entityManagerFactory",
        transactionManagerRef = "transactionManager",
        basePackages = { "com.ft.model.central" }
)
public class CentralDatabaseConfiguration {

    @Primary
    @Bean(name = "dataSource")
    public DataSource dataSource() {
        return DataSourceBuilder.create(this.getClass().getClassLoader())
                .driverClassName("com.microsoft.sqlserver.jdbc.SQLServerDriver")
                .url("jdbc:sqlserver://localhost;databaseName=central")
                .username("sa")
                .password("mhsatuck")
                .build();
    }

    @Primary
    @Bean(name = "entityManagerFactory")
    public LocalContainerEntityManagerFactoryBean entityManagerFactory(EntityManagerFactoryBuilder builder, @Qualifier("dataSource") DataSource dataSource) {
        return builder
                .dataSource(dataSource)
                .packages("com.ft.model.central")
                .persistenceUnit("central")
                .build();
    }

    @Primary
    @Bean(name = "transactionManager")
    public PlatformTransactionManager transactionManager(@Qualifier("entityManagerFactory") EntityManagerFactory entityManagerFactory) {
        return new JpaTransactionManager(entityManagerFactory);
    }
}
For the @Entity classes in the customer package, we set up a dynamic data source resolver using the following @Configuration.
@Configuration
@EnableTransactionManagement
@EnableJpaRepositories(
        entityManagerFactoryRef = "customerEntityManagerFactory",
        transactionManagerRef = "customerTransactionManager",
        basePackages = { "com.ft.model.customer" }
)
public class CustomerDatabaseConfiguration {

    @Bean(name = "customerDataSource")
    public DataSource dataSource() {
        return new MultitenantDataSourceResolver();
    }

    @Bean(name = "customerEntityManagerFactory")
    public LocalContainerEntityManagerFactoryBean entityManagerFactory(EntityManagerFactoryBuilder builder, @Qualifier("customerDataSource") DataSource dataSource) {
        return builder
                .dataSource(dataSource)
                .packages("com.ft.model.customer")
                .persistenceUnit("customer")
                .build();
    }

    @Bean(name = "customerTransactionManager")
    public PlatformTransactionManager transactionManager(@Qualifier("customerEntityManagerFactory") EntityManagerFactory entityManagerFactory) {
        return new JpaTransactionManager(entityManagerFactory);
    }
}
In the MultitenantDataSourceResolver class, we plan to maintain a Map of the created DataSource using customerCode as key. From each incoming request, we will get the customerCode and inject it into our MultitenantDataSourceResolver to get the correct DataSource within the determineTargetDataSource() method.
public class MultitenantDataSourceResolver extends AbstractRoutingDataSource {

    @Autowired
    private Provider<CustomerWrapper> customerWrapper;

    private static final Map<String, DataSource> dsCache = new HashMap<String, DataSource>();

    @Override
    protected Object determineCurrentLookupKey() {
        try {
            return customerWrapper.get().getCustomerCode();
        } catch (Exception ex) {
            return null;
        }
    }

    @Override
    protected DataSource determineTargetDataSource() {
        String customerCode = (String) this.determineCurrentLookupKey();
        if (customerCode == null) {
            return MultitenantDataSourceResolver.getDefaultDataSource();
        } else {
            DataSource dataSource = dsCache.get(customerCode);
            if (dataSource == null) {
                dataSource = this.buildDataSourceForCustomer();
            }
            return dataSource;
        }
    }

    private synchronized DataSource buildDataSourceForCustomer() {
        CustomerWrapper wrapper = customerWrapper.get();
        if (dsCache.containsKey(wrapper.getCustomerCode())) {
            return dsCache.get(wrapper.getCustomerCode());
        } else {
            DataSource dataSource = DataSourceBuilder.create(MultitenantDataSourceResolver.class.getClassLoader())
                    .driverClassName("com.microsoft.sqlserver.jdbc.SQLServerDriver")
                    .url(wrapper.getJdbcUrl())
                    .username(wrapper.getDbUsername())
                    .password(wrapper.getDbPassword())
                    .build();
            dsCache.put(wrapper.getCustomerCode(), dataSource);
            return dataSource;
        }
    }

    private static DataSource getDefaultDataSource() {
        return DataSourceBuilder.create(CustomerDatabaseConfiguration.class.getClassLoader())
                .driverClassName("com.microsoft.sqlserver.jdbc.SQLServerDriver")
                .url("jdbc:sqlserver://localhost;databaseName=central")
                .username("sa")
                .password("mhsatuck")
                .build();
    }
}
The CustomerWrapper is a @RequestScope object whose values are populated on each request by the @Controller. We use javax.inject.Provider to inject it into our MultitenantDataSourceResolver.
Lastly, even though logically we will never save anything using the default DataSource (all requests always contain a customerCode), at startup time there is no customerCode available. Hence, we still need to provide a valid default DataSource; otherwise, the application will not be able to start.
If you have any comments or a better solution, please let me know.

jOOQ not working with Spring transactions

I tried setting up jOOQ with Spring JDBC; everything works properly except transactions.
This is my current setup:
@Configuration
public class DALConfig {

    @Value("${jdbcUrl}")
    String jdbcUrl;

    @Value("${username}")
    String username;

    @Value("${password}")
    String password;

    @Bean(destroyMethod = "close")
    DataSource getDataSource() {
        BasicDataSource dataSource = new BasicDataSource();
        dataSource.setUrl(jdbcUrl);
        dataSource.setUsername(username);
        dataSource.setPassword(password);
        dataSource.setDriverClassName("com.mysql.jdbc.Driver");
        return dataSource;
    }

    @Bean(name = "transactionManager")
    DataSourceTransactionManager getDataSourceTransactionManager() {
        return new DataSourceTransactionManager(getDataSource());
    }

    @Bean(name = "transactionAwareDataSource")
    TransactionAwareDataSourceProxy getTransactionAwareDataSourceProxy() {
        return new TransactionAwareDataSourceProxy(getDataSource());
    }

    @Bean(name = "connectionProvider")
    DataSourceConnectionProvider getDataSourceConnectionProvider() {
        return new DataSourceConnectionProvider(getTransactionAwareDataSourceProxy());
    }

    @Bean
    DefaultDSLContext getDefaultDSLContext() {
        return new DefaultDSLContext(getConfiguration());
    }

    @Bean
    DefaultConfiguration getConfiguration() {
        DefaultConfiguration config = new DefaultConfiguration();
        config.set(SQLDialect.MYSQL);
        config.setConnectionProvider(getDataSourceConnectionProvider());
        return config;
    }

    @Bean
    CourseDao getCourseDao() {
        return new CourseDao(getConfiguration());
    }
}
I am using the @Transactional(propagation = Propagation.MANDATORY) annotation on the method which inserts a new Course, but I am getting the following exception: org.springframework.transaction.IllegalTransactionStateException: No existing transaction found for transaction marked with propagation 'mandatory'.
I have read the docs for Spring and jOOQ but I have not been able to figure out what is missing or what to do to resolve this. Can someone point out what I am missing here?
OK, I found the problem: the exception which was supposed to roll back the transaction was being thrown outside the scope of the transaction. If I add @Transactional in the scope which includes the exception, the rollback works properly.
Also, the propagation should be changed from Propagation.MANDATORY to Propagation.REQUIRED (which is the default).
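For illustration, a minimal hypothetical sketch of the working arrangement; the CourseService class and its method names are invented, and the point is simply that the failing logic sits inside the @Transactional boundary and that the propagation is the default REQUIRED:

@Service
public class CourseService {

    private final CourseDao courseDao;

    public CourseService(CourseDao courseDao) {
        this.courseDao = courseDao;
    }

    // REQUIRED (the default) starts a transaction here if none exists yet;
    // an unchecked exception escaping this method rolls the insert back.
    @Transactional(propagation = Propagation.REQUIRED)
    public void addCourse(Course course) {
        courseDao.insert(course);
        // any validation that may throw must happen inside this method
        // for the rollback to cover the insert above
    }
}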

Will the controller block while making a Spring/Hibernate call?

This is the controller after creating a project from the activator template for the Play and Spring sample.
Controller Code:
@org.springframework.stereotype.Controller
public class Application {

    @Autowired
    private BarService barService;

    public Result addBar() {
        Form<Bar> form = Form.form(Bar.class).bindFromRequest();
        Bar bar = form.get();
        barService.addBar(bar);
        return play.mvc.Controller.redirect(controllers.routes.Application.index());
    }
}
Bar Service:
@Service
@Transactional
public class BarServiceImpl implements BarService {

    @PersistenceContext
    EntityManager em;

    @Override
    public void addBar(Bar bar) {
        em.persist(bar);
    }

    @Override
    public List<Bar> getAllBars() {
        CriteriaQuery<Bar> c = em.getCriteriaBuilder().createQuery(Bar.class);
        c.from(Bar.class);
        return em.createQuery(c).getResultList();
    }
}
Spring Hibernate configuration:
@Configuration
@EnableTransactionManagement
public class DataConfig {

    @Bean
    public EntityManagerFactory entityManagerFactory() {
        HibernateJpaVendorAdapter vendorAdapter = new HibernateJpaVendorAdapter();
        vendorAdapter.setShowSql(true);
        vendorAdapter.setGenerateDdl(true);
        LocalContainerEntityManagerFactoryBean entityManagerFactory = new LocalContainerEntityManagerFactoryBean();
        entityManagerFactory.setPackagesToScan("models");
        entityManagerFactory.setJpaVendorAdapter(vendorAdapter);
        entityManagerFactory.setDataSource(dataSource());
        entityManagerFactory.setJpaPropertyMap(new HashMap<String, String>() {{
            put("hibernate.hbm2ddl.auto", "create-drop");
        }});
        entityManagerFactory.afterPropertiesSet();
        return entityManagerFactory.getObject();
    }

    @Bean
    public PlatformTransactionManager transactionManager() {
        JpaTransactionManager transactionManager = new JpaTransactionManager(entityManagerFactory());
        return transactionManager;
    }

    @Bean
    public DataSource dataSource() {
        final DriverManagerDataSource dataSource = new DriverManagerDataSource();
        dataSource.setDriverClassName(Play.application().configuration().getString("db.default.driver"));
        dataSource.setUrl(Play.application().configuration().getString("db.default.url"));
        return dataSource;
    }
}
My question is: when the controller calls the addBar function in barService, is it a blocking call? If yes, what is the proper way of doing Spring/Hibernate integration in a Play application, considering this is sample code from the Typesafe Activator itself?
Yes, it blocks because JDBC does not have async/non-blocking support. And since Hibernate depends on JDBC, it inherits its blocking behavior. This is also documented here:
Common examples of such blocking operations are JDBC calls, streaming API, HTTP requests and long computations.
I highly recommend that you read the following documentation pages:
JavaAsync: Handling asynchronous results
Understanding Play thread pools
I also recommend that you take a look at other very similar discussions here:
https://stackoverflow.com/a/32784410/4600
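As a rough illustration of the general idea (not tied to a specific Play version; jdbcExecutor is an assumed, separately configured java.util.concurrent.Executor sized for blocking JDBC work), the blocking service call can be moved onto a dedicated thread pool so the action returns a future instead of blocking the default pool:

// Sketch only: BarService and the form binding come from the question's controller.
public CompletionStage<Result> addBarAsync() {
    Form<Bar> form = Form.form(Bar.class).bindFromRequest();
    Bar bar = form.get();
    return CompletableFuture.supplyAsync(() -> {
        barService.addBar(bar); // blocking JDBC call runs on the dedicated executor
        return play.mvc.Controller.redirect(controllers.routes.Application.index());
    }, jdbcExecutor);
}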
