How can I run an SQL query just after a transaction is opened? - java

Just like in the title. I need to call a specific function on the database which will define a few values assigned to the transaction. If possible, I want to make it a global configuration tied to a specific profile (there are a few requests for which I do not want to call that function).
The project is built on Java SE 1.7 and Spring Boot 1.1.7, and it connects to a PostgreSQL database.
Requests go through 3 layers: SomeClassController (Controller), SomeClassService (Service), SomeClassDB (Repository). SomeClassDB connects to the database using Spring's JdbcTemplate and performs CRUD operations. Before any of those operations I want to call the function. And, as I mentioned, I don't want a method that does the job at every call site; I need something like a global configuration, perhaps on the TransactionManager?
Maybe I should use TransactionSynchronization with its beforeCommit method? But I don't know how to register it globally.
EDIT1: What I can do but don't want to ;):
@Scope(value = "session", proxyMode = ScopedProxyMode.TARGET_CLASS)
@Service
public class SessionService {

    @Autowired
    JdbcTemplate jdbcTemplate;

    @Value("${appVersion}") String appVersion;
    @Value("${appArtifactId}") String appArtifactId;

    Boolean flag;

    public SessionService() {
        flag = false;
    }

    public void addSession() {
        if (!flag) {
            jdbcTemplate.execute("SELECT add_ses('" + appArtifactId + "','" + appVersion + "')");
            flag = true;
        }
    }

    public void deleteSession() {
        jdbcTemplate.execute("SELECT del_ses()");
    }
}
And now I can just call those two methods at the start and end of the 2nd-layer class by @Autowiring this service. But I really don't want to do it that way. Someone, someday, will forget about it when extending that 2nd-layer SomeClassService class, and I want to avoid that.
I hope that gets you closer to my problem.

This could be a typical use case for aspects. Have you tried to write an aspect for this? An aspect can be configured to execute some code before and after calls to methods in a specific package, or to methods decorated with a specific annotation. Spring has very nice support for aspects (see the sketch below).
Another way is to create a proxy (using java.lang.reflect.Proxy) object which will call your initialization code and then delegate to the proxied object.
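A minimal sketch of the aspect approach, assuming spring-boot-starter-aop (or @EnableAspectJAutoProxy) is available; the package name in the pointcut is a placeholder for the repository layer, and the add_ses call is taken from the question's EDIT1:

import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Component;

@Aspect
@Component
public class SessionInitAspect {

    @Autowired
    private JdbcTemplate jdbcTemplate;

    @Value("${appVersion}") private String appVersion;
    @Value("${appArtifactId}") private String appArtifactId;

    // Runs before every public method of the repository layer
    // (adjust the package in the pointcut so it matches SomeClassDB).
    @Before("execution(public * com.example.db..*(..))")
    public void initSession() {
        jdbcTemplate.execute("SELECT add_ses('" + appArtifactId + "','" + appVersion + "')");
    }
}

Note that this runs before every matched repository call rather than strictly once per transaction; guarding it with a flag or a TransactionSynchronization would be needed for once-per-transaction behaviour.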

Related

Spring Async for batch insertion

I'm using Spring Boot. I was new to Spring when I started this project, so I didn't know about the predefined repositories (JPA, CRUD) which can be implemented easily. In my case I wanted to save bulk data, so I used a for loop and saved the records one by one, which takes a lot of time. So I tried to use @Async, but it doesn't work either. Is my concept wrong?
@Async has two limitations (illustrated below):
it must be applied to public methods only
self-invocation – calling the async method from within the same class – won't work
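For illustration only (class and method names here are hypothetical), this is what the self-invocation limitation looks like: the inner call bypasses the Spring proxy, so @Async is ignored and the method runs synchronously on the caller's thread.

import java.util.List;
import org.springframework.scheduling.annotation.Async;
import org.springframework.stereotype.Service;

@Service
public class BulkSaveService {

    @Async
    public void saveAsync(GaugeCategory category) {
        // ... asynchronous work
    }

    public void saveAll(List<GaugeCategory> categories) {
        for (GaugeCategory category : categories) {
            // self-invocation: does NOT go through the Spring proxy,
            // so this call is executed synchronously
            saveAsync(category);
        }
    }
}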
1) Controller
for (int i = 0; i < array.length(); i++) {
    // Other code
    gaugeCategoryService.saveOrUpdate(getEditCategory);
}
2) Dao implementation
@Repository
public class GaugeCategoryDaoImpl implements GaugeCategoryDao {

    // Other code

    @Async
    @Override
    public void saveOrUpdate(GaugeCategory gaugeCategory) {
        sessionFactory.getCurrentSession().saveOrUpdate(gaugeCategory);
    }
}
After removing @Async it works normally, but with that annotation it doesn't. Is there an alternative way to handle this time-consuming operation? Thanks in advance.
The @Async annotation makes Spring run the method asynchronously (by default on a new thread) each time you call it, but you need to enable it by putting the @EnableAsync annotation on a configuration class.
You also need to configure the asyncExecutor bean.
You can find more details here: https://spring.io/guides/gs/async-method/
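A minimal configuration sketch along the lines of the linked guide; the bean name and pool sizes are arbitrary assumptions:

import java.util.concurrent.Executor;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.EnableAsync;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

@Configuration
@EnableAsync
public class AsyncConfig {

    // Executor used by @Async methods; the sizes below are illustrative only.
    @Bean(name = "asyncExecutor")
    public Executor asyncExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(4);
        executor.setMaxPoolSize(8);
        executor.setQueueCapacity(100);
        executor.setThreadNamePrefix("async-");
        executor.initialize();
        return executor;
    }
}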
In my opinion, there are several issues with your code:
You override the saveOrUpdate() method without any need to do so. A simple call to super() should have been enough to make @Async work.
I guess that you declare a transactional context somewhere (within your controller class?). That one usually applies to the current thread. By using @Async you might leave this transaction context because of the asynchronous DAO execution: the main thread may already be finished when saveOrUpdate() is called. And even though I don't know it for certain, there is a good chance that the declared transaction is only valid for the current thread.
One possible fix: create an additional component like AsyncGaugeCategoryService:
@Component
public class AsyncGaugeCategoryService {

    private final GaugeCategoryDao gaugeCategoryDao;

    @Autowired
    public AsyncGaugeCategoryService(GaugeCategoryDao gaugeCategoryDao) {
        this.gaugeCategoryDao = gaugeCategoryDao;
    }

    @Async
    @Transactional
    public void saveOrUpdate(GaugeCategory gaugeCategory) {
        gaugeCategoryDao.saveOrUpdate(gaugeCategory);
    }
}
Then inject the service instead of the DAO into your controller class. This way you don't need to override any methods, and you should have a valid transactional context within your async thread.
But be warned that your execution flow won't give you any hint if something goes wrong while storing into the database. You'll have to check the log files to detect any problems.

Switching the data source during a single transaction using a multi-tenant implementation

I've been struggling for a few days to get this working, but it seems that I cannot find a solution to it. That's why I'd like to ask it here.
Short version
I have a multi-tenant implementation which works with Spring Boot, Spring Data JPA and Hibernate. This works like a charm. But now I'd like to implement functionality where I switch the database (data source) during a single transaction. For example, I use code similar to the following in my service class:
@Autowired
private CustomRepository customRepository;

@Autowired
private CustomTenantIdentifierResolver customResolver;

@Transactional
public Custom getCustom(String name) {
    // Set the data source to "one"
    this.customResolver.setIdentifier("one");
    Custom result = this.customRepository.findOneByName(name);
    // If the result is null, switch the data source to "default" and try again
    if (result == null) {
        this.customResolver.setIdentifier("default");
        result = this.customRepository.findOneByName(name);
    }
    return result;
}
The problem is, my data source does not switch. It uses the same source for the second request. I guess I'm doing something terribly wrong here.
What is the correct way to switch the data source during a single transaction?
EDIT (07-06-2016)
Since I noticed that switching the data source within a single transaction is not going to work, I'll add a follow-up.
Would it be possible to switch the data source in between two transactions for a single user request? If so, what would be the correct way to do this?
Long Version
Before moving on, I'd like to mention that my multi-tenant implementation is based on the tutorial provided on this blog.
Now, my goal is to use the default data source as a fallback when the dynamic one (chosen by a custom identifier) fails to find a result. All of this needs to be done within a single user request. It doesn't make a difference whether the solution uses a single @Transactional-annotated method or multiple ones.
Until now I have tried a couple of things, one of which is described above; another involves the use of multiple transaction managers. That implementation uses a configuration class to create two transaction manager beans, each with a different data source.
@Configuration
@EnableTransactionManagement
public class TransactionConfig {

    @Autowired
    private EntityManagerFactory entityManagerFactory;

    @Autowired
    private DataSourceProvider dataSourceProvider;

    @Bean(name = "defaultTransactionManager")
    public PlatformTransactionManager defaultTransactionManager() {
        JpaTransactionManager jpaTransactionManager = new JpaTransactionManager();
        jpaTransactionManager.setEntityManagerFactory(entityManagerFactory);
        jpaTransactionManager.setDataSource(dataSourceProvider.getDefaultDataSource());
        jpaTransactionManager.afterPropertiesSet();
        return jpaTransactionManager;
    }

    @Bean(name = "dynamicTransactionManager")
    public PlatformTransactionManager dynamicTransactionManager() {
        JpaTransactionManager jpaTransactionManager = new JpaTransactionManager();
        jpaTransactionManager.setEntityManagerFactory(entityManagerFactory);
        jpaTransactionManager.afterPropertiesSet();
        return jpaTransactionManager;
    }
}
Next I split the service method into two separate ones and added the @Transactional annotation with the correct transaction manager bean name:

@Transactional("dynamicTransactionManager")
public Custom getDynamicCustom(String name) {
    ...stuff...
}

@Transactional("defaultTransactionManager")
public Custom getDefaultCustom(String name) {
    ...stuff...
}
But it didn't make any difference; the first data source was still used for the second method call (which should have used the default transaction manager).
I hope someone can help me find a solution to this.
Thanks in advance.
Spring provides a variation of DataSource called AbstractRoutingDataSource. It can be used in place of standard DataSource implementations and enables a mechanism to determine which concrete DataSource to use for each operation at runtime. All you need to do is extend it and provide an implementation of the abstract determineCurrentLookupKey method.
Keep in mind that the determineCurrentLookupKey method will be called whenever the TransactionManager requests a connection. So if you want to switch the DataSource, you just need to open a new transaction.
You can find an example here:
http://fedulov.website/2015/10/14/dynamic-datasource-routing-with-spring/
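A minimal sketch of that approach; the TenantContext holder, the injected bean parameters, and the tenant keys ("one", "default") are assumptions for illustration:

import java.util.HashMap;
import java.util.Map;
import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

public class TenantRoutingDataSource extends AbstractRoutingDataSource {

    // Called each time a connection is obtained; the returned key selects
    // one of the DataSources registered via setTargetDataSources().
    @Override
    protected Object determineCurrentLookupKey() {
        return TenantContext.getCurrentTenant(); // e.g. "one" or "default"
    }
}

@Configuration
class RoutingDataSourceConfig {

    @Bean
    public DataSource dataSource(DataSource oneDataSource, DataSource defaultDataSource) {
        TenantRoutingDataSource routing = new TenantRoutingDataSource();
        Map<Object, Object> targets = new HashMap<>();
        targets.put("one", oneDataSource);
        targets.put("default", defaultDataSource);
        routing.setTargetDataSources(targets);
        routing.setDefaultTargetDataSource(defaultDataSource);
        return routing;
    }
}

Because the lookup key is only consulted when a connection is handed out, switching tenants mid-transaction still won't work; each tenant switch needs its own transaction.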
You can't just move a transaction over to another datasource. While there is a concept of distributed (or XA) transactions, it consists of separate transactions (in separate data sources) that are treated as if they were part of a single (distributed) transaction.
I do not know if it is possible, but I think you should try to avoid switching the source during a transaction, for the following reason:
If an error occurs during the second request you will want to roll back the entire transaction, which means switching back to the old source. To be able to do that you will need to hold an open connection to that old source: when the transaction is complete you will need to confirm the transaction against that old source.
I would recommend rethinking whether you really want this, aside from the question of whether it is possible at all.

Create Spring @Service instance with @Transactional methods manually from Java

Let's say there are @Service and @Repository interfaces like the following:

@Repository
public interface OrderDao extends JpaRepository<Order, Integer> {
}

public interface OrderService {
    void saveOrder(Order order);
}

@Service
public class OrderServiceImpl implements OrderService {

    @Autowired
    private OrderDao orderDao;

    @Override
    @Transactional
    public void saveOrder(Order order) {
        orderDao.save(order);
    }
}
This is part of a working application; everything is configured to access a single database and everything works fine.
Now I would like to have the possibility to create a stand-alone working instance of OrderService with an auto-wired OrderDao using pure Java, with the jdbcUrl specified in Java code, something like this:
final int tenantId = 3578;
final String jdbcUrl = "jdbc:mysql://localhost:3306/database_" + tenantId;
OrderService orderService = someMethodWithSpringMagic(appContext, jdbcUrl);
As you can see, I would like to introduce a multi-tenant architecture with a tenant-per-database strategy into an existing Spring-based application.
Please note that I was able to achieve this quite easily before with self-implemented jdbcTemplate-like logic, with JDBC transactions working correctly, so this is a very valid task.
Please also note that I need quite simple transaction logic: start a transaction, do several requests in the service method within the scope of that transaction, and then commit it, or roll back on exception.
Most solutions on the web regarding multi-tenancy with Spring propose specifying concrete persistence units in the XML config and/or using annotation-based configuration, which is highly inflexible because in order to add a new database URL the whole application has to be stopped, the XML config/annotation code changed, and the application started again.
So basically I'm looking for a piece of code which is able to create the @Service just like Spring creates it internally after the properties are read from the XML configs / annotations. I'm also looking into using ProxyBeanFactory for that, because Spring uses AOP to create service instances (so I guess simple good old reusable OOP is not the way to go here).
Is Spring flexible enough to allow this relatively simple case of code reuse?
Any hints will be greatly appreciated, and if I find a complete answer to this question I'll post it here for future generations :)
Hibernate has out-of-the-box support for multi-tenancy; check that out before rolling your own. Hibernate requires a MultiTenantConnectionProvider and a CurrentTenantIdentifierResolver, for which there are default implementations out of the box, but you can always write your own. If it is only a schema change it is actually pretty simple to implement (execute a query before returning the connection); otherwise hold a map of data sources and get an instance from that, or create a new instance.
About 8 years ago we already wrote a generic solution, which is documented here and the code is here. It isn't specific to Hibernate and can be used with basically anything you need to switch around. We used it for DataSources and also some web-related things (theming, amongst others).
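A minimal sketch of a custom CurrentTenantIdentifierResolver, assuming a thread-local tenant holder like the MultitenancyContext shown in the next answer; the "default" fallback key is an assumption:

import org.hibernate.context.spi.CurrentTenantIdentifierResolver;

public class ThreadLocalTenantIdentifierResolver implements CurrentTenantIdentifierResolver {

    private static final String DEFAULT_TENANT = "default"; // fallback key, an assumption

    @Override
    public String resolveCurrentTenantIdentifier() {
        Integer tenant = MultitenancyContext.getTenantId();
        return tenant != null ? tenant.toString() : DEFAULT_TENANT;
    }

    @Override
    public boolean validateExistingCurrentSessions() {
        // Returning true lets Hibernate verify that an existing session
        // still belongs to the resolved tenant.
        return true;
    }
}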
Creating a transactional proxy for an annotated service is not a difficult task, but I'm not sure that you really need it. To choose a database for a tenantId I guess you only need to concentrate on the DataSource interface.
For example, with a simple driver-managed data source:
public class MultitenancyDriverManagerDataSource extends DriverManagerDataSource {

    @Override
    protected Connection getConnectionFromDriverManager(String url,
            Properties props) throws SQLException {
        Integer tenant = MultitenancyContext.getTenantId();
        if (tenant != null)
            url += "_" + tenant;
        return super.getConnectionFromDriverManager(url, props);
    }
}

public class MultitenancyContext {

    private static ThreadLocal<Integer> tenant = new ThreadLocal<Integer>();

    public static Integer getTenantId() {
        return tenant.get();
    }

    public static void setTenantId(Integer value) {
        tenant.set(value);
    }
}
Of course, if you want to use a connection pool, you need to elaborate on this a bit, for example by using a connection pool per tenant; a rough sketch follows.
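A rough sketch of the pool-per-tenant idea, assuming one lazily created pooled DataSource per tenant. The Tomcat JDBC pool, the URL pattern (taken from the question) and the credentials are placeholders:

import java.util.HashMap;
import java.util.Map;
import javax.sql.DataSource;

public class MultitenancyPooledDataSourceRegistry {

    private final Map<Integer, DataSource> pools = new HashMap<Integer, DataSource>();

    // Returns the pool for the given tenant, creating it on first use.
    public synchronized DataSource dataSourceForTenant(Integer tenantId) {
        DataSource existing = pools.get(tenantId);
        if (existing != null) {
            return existing;
        }
        org.apache.tomcat.jdbc.pool.DataSource ds = new org.apache.tomcat.jdbc.pool.DataSource();
        ds.setUrl("jdbc:mysql://localhost:3306/database_" + tenantId); // URL pattern from the question
        ds.setUsername("user");       // placeholder credentials
        ds.setPassword("secret");
        pools.put(tenantId, ds);
        return ds;
    }
}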

How to test database calls with testng in spring boot

I have created a small web application in Spring Boot. I am new to TestNG. I am trying to test my services, which call the DAO for database operations, using TestNG. I am trying to do it using the in-memory database HSQL.
Following is my UserService
@Service
class UserServiceImpl implements UserService {

    @Autowired
    private UserDao userDao;

    public void save(User user) {
        userDao.save(user);
    }

    public User update(User user) {
        return userDao.update(user);
    }
}
Following is my UserTest class
@Test
class UserTest {
    ?
}
What is a good way to use HSQL to test the save and update methods in UserService using TestNG with a DataProvider? Please let me know if you need more information regarding the question ;)
Your response will be greatly appreciated!!
If you just want to test whether the DAO is called correctly, mock it with a mocking framework (I'd choose Mockito) and just verify that the correct methods were called by your service. This is more "unit"-like, since your tests reflect the clear separation of DAO and service.
If you are interested in real DB communication to create/load instances and so on, you can use an in-memory database like H2 and let your DAO run against that database. This becomes more of an integration test, but it can be useful nonetheless.
Either way, you would set up a test application context that takes care of data sources and mocks.
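A minimal Mockito-based unit test sketch for the service from the question, assuming mockito-core is on the test classpath and that UserDao declares save() and update() methods:

import static org.mockito.Mockito.verify;

import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.MockitoAnnotations;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Test;

public class UserServiceImplTest {

    @Mock
    private UserDao userDao;                 // mocked DAO, no real database needed

    @InjectMocks
    private UserServiceImpl userService;     // Mockito injects the mock into the @Autowired field

    @BeforeMethod
    public void setUp() {
        MockitoAnnotations.initMocks(this);
    }

    @Test
    public void saveDelegatesToDao() {
        User user = new User();
        userService.save(user);
        verify(userDao).save(user);          // the service must delegate to the DAO
    }
}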
Acolyte provides a JDBC driver & tools, designed for such purposes (mock up, testing, ...): https://github.com/cchantep/acolyte
It's already used in several open source projects (Anorm, YouTube Vitess, ...), either in vanilla Java or using its Scala DSL:
handler = handleStatement.withQueryDetection(...).
    withQueryHandler(/* which result for which query */).
    withUpdateHandler(/* which result for which update */);

// Register prepared handler with expected ID 'my-unique-id'
acolyte.Driver.register("my-unique-id", handler);

// then ...
Connection con = DriverManager.getConnection(jdbcUrl);
// ... Connection |con| is managed through |handler|
Note: I am the author of Acolyte.

Commit transaction immediately after method finishes

I have the following problem:
I'm using Spring MVC 4.0.5 with Hibernate 4.3.5 and I'm trying to create a RESTful web application. The problem is that I want to exclude some fields from getting serialized as JSON, depending on the method called in the controller, using aspects.
My problem now is that Hibernate does not commit the transaction immediately after it returns from a method, but only just before serializing.
Controller.java

public class LoginController {

    /*
     * Autowire all the Services and stuff..
     */

    @RemoveAttribues({"fieldA","fieldB"})
    @RequestMapping(....)
    public ResponseEntity login(@RequestBody User user) {
        User updatedUser = userService.loginTheUser(user);
        return new ResponseEntity<>(updatedUser, HttpStatus.OK);
    }
}
Service.java

public class UserService {

    @Transactional
    public User loginUser(User user) {
        user.setLoginToken("some generated token");
        return userDao.update(user); // userDao just calls entityManager.merge(..)
    }
}
The advice of the aspect does the following:
for every given String it finds the corresponding setter and sets the field to null
This is done, like I said, to avoid serialization of that data (for which Jackson 2 is used).
The problem now is that the transaction is committed only after the advice has finished. Is there anything I can do to tell Hibernate to commit immediately, or do I have to dig deeper and start handling the transactions myself (which I would like to avoid)?
EDIT:
I also have autocommit turned on:
<prop key="hibernate.connection.autocommit">true</prop>
I think the problem lies in the fact that I use lazy loading (because each user may have a huge load of other entities attached to him), so the transaction is not committed until I try to serialize the object.
Don't set auto-commit to true. It's a terrible mistake.
I think you need a UserService interface and a UserServiceImpl for the interface implementation. Whatever you now have in the UserService class must be migrated to UserServiceImpl instead (see the sketch below).
This ensures that @Transactional is applied even with JDK dynamic proxies and not just with CGLIB runtime proxies.
If you are using the Open-Session-in-View anti-pattern, you need to let it go and use session-per-request instead. It's much more scalable, and it forces you to write optimal queries in your data layer.
Using JDBC transaction management and the default session-close-on-request pattern, you should be fine with this issue.
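A minimal sketch of the suggested interface/implementation split; the method name follows the controller snippet from the question, and the DAO wiring is an assumption:

public interface UserService {
    User loginTheUser(User user);
}

@Service
public class UserServiceImpl implements UserService {

    @Autowired
    private UserDao userDao;

    // @Transactional on the implementation is picked up whether Spring
    // proxies the UserService interface (JDK proxy) or the class (CGLIB).
    @Override
    @Transactional
    public User loginTheUser(User user) {
        user.setLoginToken("some generated token");
        return userDao.update(user); // userDao calls entityManager.merge(..)
    }
}

The controller then depends on the UserService interface only, so the transactional proxy is always in the call path.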
