Using same jdbcTemplate for two different schemas - java

I have two datasources, say dataSourceA and dataSourceB, but based on a few runtime calculations I need to execute the same query in one schema or the other; any given call runs against exactly one of them.
At the DAO layer I have a setDataSource() method that is @Autowired to dataSourceA, so the JdbcTemplate it returns is always bound to that DataSource. How can I make the same JdbcTemplate work against dataSourceB? Changing every DAO is not practical, as that would require changes across the entire application.

You could inject both datasources and select the one to use inside your method according to your logic:
public class SomeDaoImpl implements SomeDao {

    private final JdbcTemplate jdbcTemplateA;
    private final JdbcTemplate jdbcTemplateB;

    @Autowired
    public SomeDaoImpl(JdbcTemplate jdbcTemplateA, JdbcTemplate jdbcTemplateB) {
        // injecting both JdbcTemplate instances
        this.jdbcTemplateA = jdbcTemplateA;
        this.jdbcTemplateB = jdbcTemplateB;
    }

    public void businessLogicMethod(...) {
        // choosing the actual template to be used according to your logic
        JdbcTemplate jdbcTemplate = chooseTemplate(...);

        // now using the template to execute a query
        jdbcTemplate.execute(...);
    }
}
Another option would be to instantiate two SomeDaoImpl instances and inject one JdbcTemplate into each of them, and select the DAO instance in your service layer.
But both of these solutions have a flaw: a transaction is usually initiated in the service layer (by an interceptor, for example), which has no idea that you are going to route the request to another datasource; so it could happen that a transaction starts on one datasource while the query executes on the other.
So the cleanest solution would be to go one level up and instantiate two services, each with DAOs holding a different JdbcTemplate instance. Of course, two transaction managers will then have to be configured and carefully wired (for example, via @Transactional("transactionManagerA")). More information on this here: Spring - Is it possible to use multiple transaction managers in the same application?
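For illustration, here is a minimal sketch of such a setup, assuming Java config; the bean names dataSourceA/dataSourceB and transactionManagerA/transactionManagerB are illustrative:

import javax.sql.DataSource;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.datasource.DataSourceTransactionManager;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.annotation.EnableTransactionManagement;

@Configuration
@EnableTransactionManagement
public class TransactionConfig {

    // one transaction manager per datasource
    @Bean
    public PlatformTransactionManager transactionManagerA(@Qualifier("dataSourceA") DataSource ds) {
        return new DataSourceTransactionManager(ds);
    }

    @Bean
    public PlatformTransactionManager transactionManagerB(@Qualifier("dataSourceB") DataSource ds) {
        return new DataSourceTransactionManager(ds);
    }
}

A service bound to schema A would then declare @Transactional("transactionManagerA") on its methods, and its twin for schema B would declare @Transactional("transactionManagerB"), so the transaction always starts on the datasource that the underlying JdbcTemplate actually uses.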

Related

Custom StoredProcedure class as a Spring component extending jdbc StoredProcedure

I have a custom stored procedure class which extends jdbc.StoredProcedure, and I have annotated this class with Spring's @Component to bring its bean into the Spring context.
Why am I doing this?

I wanted to add spring-retry on the execute method, which only works on Spring components.
I wanted to reuse the compiled StoredProcedure instead of creating a new object and recompiling it every time.

Is there anything wrong with this kind of implementation? Are there any issues we may see with this Spring-component-based StoredProcedure?
Ex:
@Component
public class ExampleStoredProcedure extends StoredProcedure {

    @Autowired
    private DataSource dataSource;

    @PostConstruct
    public void init() {
        super.setDataSource(dataSource);
        setSql("stored_procedure_name");
        // TODO declare parameters
        compile();
    }

    public void execute() {
        // TODO set all parameters into a ParameterSource
        super.execute(parameterSource);
    }
}
Try implementing a layered application architecture where you annotate your services with Spring Retry, as in this example:
https://dzone.com/articles/spring-retry-way-to-handle-failures
These service methods can define transaction boundaries and call your persistence layer's methods, which in turn can be based on Spring Data's standardized ways of calling stored procedures, managing database connections, and so on.
For more on Spring and this kind of architecture, see this brief introduction:
https://www.petrikainulainen.net/software-development/design/understanding-spring-web-application-architecture-the-classic-way/
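To make that concrete, here is a minimal sketch of the layering, assuming spring-retry on the classpath and @EnableRetry on a configuration class; the repository, entity, and procedure names are hypothetical:

import org.springframework.dao.TransientDataAccessException;
import org.springframework.data.jpa.repository.query.Procedure;
import org.springframework.data.repository.Repository;
import org.springframework.retry.annotation.Backoff;
import org.springframework.retry.annotation.Retryable;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

// hypothetical Spring Data repository wrapping the stored procedure
interface ExampleRepository extends Repository<ExampleEntity, Long> {

    @Procedure("stored_procedure_name")
    void runStoredProcedure(Long id);
}

@Service
public class ExampleService {

    private final ExampleRepository repository;

    public ExampleService(ExampleRepository repository) {
        this.repository = repository;
    }

    // the whole transactional unit of work is retried on transient failures
    @Retryable(value = TransientDataAccessException.class, maxAttempts = 3, backoff = @Backoff(delay = 500))
    @Transactional
    public void callProcedure(Long id) {
        repository.runStoredProcedure(id);
    }
}

This keeps retry and transaction demarcation in the service layer and leaves compiling and caching of the procedure to the infrastructure, instead of managing it in a hand-rolled component.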

Create Spring #Service instance with #Transactional methods manually from Java

Let's say there are @Service and @Repository interfaces like the following:

@Repository
public interface OrderDao extends JpaRepository<Order, Integer> {
}

public interface OrderService {
    void saveOrder(Order order);
}

@Service
public class OrderServiceImpl implements OrderService {

    @Autowired
    private OrderDao orderDao;

    @Override
    @Transactional
    public void saveOrder(Order order) {
        orderDao.save(order);
    }
}
This is part of a working application; everything is configured to access a single database and everything works fine.
Now I would like to be able to create a stand-alone, working instance of OrderService with an auto-wired OrderDao using pure Java, with the jdbcUrl specified in code, something like this:
final int tenantId = 3578;
final String jdbcUrl = "jdbc:mysql://localhost:3306/database_" + tenantId;
OrderService orderService = someMethodWithSpringMagic(appContext, jdbcUrl);
As you can see, I would like to introduce a multi-tenant architecture with a tenant-per-database strategy into an existing Spring-based application.
Please note that I was able to achieve this quite easily before with self-implemented JdbcTemplate-like logic, with JDBC transactions working correctly, so this is a perfectly valid task.
Please also note that I need quite simple transaction logic: start a transaction, perform several requests in the service method within its scope, and then commit it, or roll back on exception.
Most solutions on the web regarding multi-tenancy with Spring propose specifying concrete persistence units in the XML config and/or using annotation-based configuration, which is highly inflexible: in order to add a new database URL, the whole application has to be stopped, the XML config or annotation code changed, and the application started again.
So, basically, I'm looking for a piece of code which is able to create a @Service just like Spring creates it internally after properties are read from XML configs / annotations. I'm also looking into using ProxyBeanFactory for this, because Spring uses AOP to create service instances (so I guess simple good old reusable OOP is not the way to go here).
Is Spring flexible enough to allow this relatively simple case of code reuse?
Any hints will be greatly appreciated, and if I find a complete answer to this question I'll post it here for future generations :)
Hibernate has out-of-the-box support for multi-tenancy; check that out before rolling your own. Hibernate requires a MultiTenantConnectionProvider and a CurrentTenantIdentifierResolver, for which there are default implementations out of the box, but you can always write your own. If it is only a schema change, it is actually pretty simple to implement (execute a query before returning the connection). Otherwise hold a map of datasources and get an instance from that, or create a new instance.
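As a rough sketch of the schema-change variant against the Hibernate 5 SPI (assuming one shared pool and MySQL, where a schema is selected with USE; in real code, validate the tenant id before interpolating it into SQL):

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
import javax.sql.DataSource;
import org.hibernate.engine.jdbc.connections.spi.MultiTenantConnectionProvider;

public class SchemaSwitchingConnectionProvider implements MultiTenantConnectionProvider {

    private final DataSource dataSource; // the single underlying pool

    public SchemaSwitchingConnectionProvider(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    @Override
    public Connection getConnection(String tenantIdentifier) throws SQLException {
        Connection connection = dataSource.getConnection();
        // "execute a query before returning the connection": switch the schema
        try (Statement statement = connection.createStatement()) {
            statement.execute("USE database_" + tenantIdentifier);
        }
        return connection;
    }

    @Override
    public void releaseConnection(String tenantIdentifier, Connection connection) throws SQLException {
        connection.close(); // returns it to the pool
    }

    @Override
    public Connection getAnyConnection() throws SQLException {
        return dataSource.getConnection();
    }

    @Override
    public void releaseAnyConnection(Connection connection) throws SQLException {
        connection.close();
    }

    @Override
    public boolean supportsAggressiveRelease() {
        return false;
    }

    @Override
    public boolean isUnwrappableAs(Class unwrapType) {
        return false;
    }

    @Override
    public <T> T unwrap(Class<T> unwrapType) {
        return null;
    }
}

The matching CurrentTenantIdentifierResolver would typically read the tenant id from a ThreadLocal populated per request, much like the MultitenancyContext shown in the next answer.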
About 8 years ago we already wrote a generic solution, which was documented here, and the code is here. It isn't specific to Hibernate and can be used with basically anything you need to switch around. We used it for DataSources and also for some web-related things (theming, amongst others).
Creating a transactional proxy for an annotated service is not a difficult task, but I'm not sure that you really need it. To choose a database by tenantId, I guess you only need to concentrate on the DataSource interface.
For example, with a simple driver managed datasource:
public class MultitenancyDriverManagerDataSource extends DriverManagerDataSource {

    @Override
    protected Connection getConnectionFromDriverManager(String url, Properties props)
            throws SQLException {
        Integer tenant = MultitenancyContext.getTenantId();
        if (tenant != null)
            url += "_" + tenant;
        return super.getConnectionFromDriverManager(url, props);
    }
}

public class MultitenancyContext {

    private static ThreadLocal<Integer> tenant = new ThreadLocal<Integer>();

    public static Integer getTenantId() {
        return tenant.get();
    }

    public static void setTenantId(Integer value) {
        tenant.set(value);
    }
}
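To complete the picture, the tenant has to be bound to the thread at the start of a request and cleared afterwards; a hypothetical servlet filter (the header name and its parsing are assumptions) could look like this:

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;

public class TenantFilter implements Filter {

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        // e.g. take the tenant from a request header
        String header = ((HttpServletRequest) request).getHeader("X-Tenant-Id");
        MultitenancyContext.setTenantId(header != null ? Integer.valueOf(header) : null);
        try {
            chain.doFilter(request, response);
        } finally {
            MultitenancyContext.setTenantId(null); // always clear: server threads are pooled
        }
    }

    @Override
    public void init(FilterConfig filterConfig) {
    }

    @Override
    public void destroy() {
    }
}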
Of course, if you want to use a connection pool, you will need to elaborate a bit, for example by using a connection pool per tenant.
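A sketch of the pool-per-tenant idea, assuming HikariCP as the pool implementation (credentials and sizing omitted):

import java.sql.Connection;
import java.sql.SQLException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.sql.DataSource;
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import org.springframework.jdbc.datasource.AbstractDataSource;

// keeps one pool per tenant, created lazily on first access
public class MultitenancyPooledDataSource extends AbstractDataSource {

    private final String baseUrl; // e.g. "jdbc:mysql://localhost:3306/database"
    private final Map<Integer, DataSource> pools = new ConcurrentHashMap<>();

    public MultitenancyPooledDataSource(String baseUrl) {
        this.baseUrl = baseUrl;
    }

    @Override
    public Connection getConnection() throws SQLException {
        Integer tenant = MultitenancyContext.getTenantId();
        if (tenant == null) {
            throw new IllegalStateException("No tenant bound to the current thread");
        }
        return pools.computeIfAbsent(tenant, this::createPool).getConnection();
    }

    @Override
    public Connection getConnection(String username, String password) throws SQLException {
        return getConnection(); // credentials are configured per pool in this sketch
    }

    private DataSource createPool(Integer tenant) {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl(baseUrl + "_" + tenant);
        // set username, password, pool size, etc. for your environment
        return new HikariDataSource(config);
    }
}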

How to connect to a database inside a BeanFactoryPostProcessor?

I am working on a project using Spring Data JPA (on Tomcat 7). I'm implementing a BeanFactoryPostProcessor to create my DataSources dynamically. The problem is that the DataSource information (name, URL, etc.) is itself stored in a database.
@Component
class DatasourceRegisteringBeanFactoryPostProcessor implements BeanFactoryPostProcessor {

    // This doesn't work
    @Autowired DatabaseService databaseService;

    public void postProcessBeanFactory(ConfigurableListableBeanFactory beanFactory) {
        // my code here ...
        // ...
    }
}
As you can see, I was trying to inject a service which can get me a list of all DataSources from my database, but it doesn't work. Is there any way to connect to the database and get that list within the BeanFactoryPostProcessor class? Any other workaround will be welcome. :)
BeanFactoryPostProcessors are a very special kind of concept in Spring. They are components that operate on BeanDefinition instances, which are a metamodel of the bean instances to be created.
That means, that at the point in time when the BFPPs are invoked, no bean instances have been created yet as the metamodel is about to be post processed (as the name suggests). Hence beans depended on by the BFPP will be initialized extremely early in the lifecycle of the container. Thus it's highly recommended to not depend on application components from BFPPs or - if really required - only on beans that don't necessarily trigger the creation of a lot of downstream components.
That said, you especially shouldn't depend on repositories from BFPPs, as they usually require the creation of a lot of infrastructure components. I'd recommend taking the configuration properties required to connect to the configuration database (JDBC URL, username, password, etc.) and just creating a throw-away DataSource that's only used to create a new BeanDefinition for the DataSource that's eventually going to be used by the application.
So here are the recommended steps (from the top of my head - might need some tweaking):
drop the autowiring of a DataSource
configure a @PropertySource pointing to the properties containing the coordinates to connect
inject the values of that PropertySource into the constructor of the BFPP:
class YourBeanFactoryPostProcessor implements BeanFactoryPostProcessor {

    public YourBeanFactoryPostProcessor(@Value("${properties.url}") String url, …) {
        // assign to fields
    }

    public void postProcessBeanFactory(ConfigurableListableBeanFactory beanFactory) {
        // 1. Create throw-away DataSource
        // 2. Create JdbcTemplate
        // 3. Use template to look up configuration properties
        // 4. Create BeanDefinition for DataSource using properties just read
        // 5. Register BeanDefinition with the BeanFactory
    }
}
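A rough fleshing-out of those five steps might look as follows (the datasource_config table is hypothetical, and a real application would register a pooled DataSource class rather than DriverManagerDataSource):

import java.util.Map;
import org.springframework.beans.factory.config.BeanFactoryPostProcessor;
import org.springframework.beans.factory.config.ConfigurableListableBeanFactory;
import org.springframework.beans.factory.support.BeanDefinitionBuilder;
import org.springframework.beans.factory.support.BeanDefinitionRegistry;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.datasource.DriverManagerDataSource;

class DataSourceRegisteringBeanFactoryPostProcessor implements BeanFactoryPostProcessor {

    private final String url;
    private final String username;
    private final String password;

    DataSourceRegisteringBeanFactoryPostProcessor(String url, String username, String password) {
        this.url = url;
        this.username = username;
        this.password = password;
    }

    @Override
    public void postProcessBeanFactory(ConfigurableListableBeanFactory beanFactory) {
        // 1. throw-away DataSource, deliberately unpooled
        DriverManagerDataSource bootstrap = new DriverManagerDataSource(url, username, password);

        // 2. + 3. plain JdbcTemplate to read the real coordinates
        Map<String, Object> config = new JdbcTemplate(bootstrap)
                .queryForMap("select url, username, password from datasource_config where id = 1");

        // 4. BeanDefinition for the application DataSource
        BeanDefinitionBuilder builder = BeanDefinitionBuilder
                .genericBeanDefinition(DriverManagerDataSource.class)
                .addPropertyValue("url", config.get("url"))
                .addPropertyValue("username", config.get("username"))
                .addPropertyValue("password", config.get("password"));

        // 5. register it; the bean factory is a BeanDefinitionRegistry in all standard containers
        ((BeanDefinitionRegistry) beanFactory)
                .registerBeanDefinition("dataSource", builder.getBeanDefinition());
    }
}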

Design Pattern that defines action and target

I have a GenericDAO which delegates its operations to a DataSource class:
public class BaseDAOImpl<T> implements BaseDAO<T> {

    DataSource ds;

    public T update(T entity) {
        return ds.update(entity);
    }
}
The issue I'm having right now is that we want it to work with multiple DataSources. This leaves me with two alternatives:
1) add a setter for the datasource in the DAO and call it before every operation;
2) create each child of BaseDAO once per datasource.
I would like to get the DataSource out of the DAO, but then how can the operations be delegated to it?
I guess you want to implement something like multitenancy: when a request comes from user A, all DAOs involved in processing that request should talk to user A's DataSource, and so on.
If so, the DataSource is part of the context of your request, and one possible option for storing this kind of contextual data is a ThreadLocal:
When a request comes in, put the appropriate DataSource into the ThreadLocal.
All DAOs obtain the DataSource from that ThreadLocal.
Obviously, for the sake of the Single Responsibility Principle, it would be better to hide this logic behind a factory and inject that factory into your DAOs, so that each DAO calls factory.getCurrentDataSource() for each operation (see the sketch after this list).
Clear the ThreadLocal when you have finished processing the request.
Note that this only works if each request is processed by a single thread.
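A minimal sketch of that factory (names are illustrative):

import javax.sql.DataSource;

public class CurrentDataSourceFactory {

    private static final ThreadLocal<DataSource> CURRENT = new ThreadLocal<>();

    // called once when the request arrives
    public void bind(DataSource dataSource) {
        CURRENT.set(dataSource);
    }

    // called by DAOs for each operation
    public DataSource getCurrentDataSource() {
        DataSource ds = CURRENT.get();
        if (ds == null) {
            throw new IllegalStateException("No DataSource bound to this thread");
        }
        return ds;
    }

    // called when processing of the request finishes
    public void clear() {
        CURRENT.remove();
    }
}

The DAO then receives the factory via its constructor and asks it for the current DataSource at the start of every operation.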
You can use a factory to create your datasource, creating the right one depending on your requirements, and then use dependency injection to have that datasource injected into your DAO.
To get the datasource out of the DAO you can use the Delegate pattern: inject a delegate into your DAO, and let the delegate hold the reference to the DataSource.
Also note that if you persist with just one generic DAO, it may eventually get bloated with methods that are not generic but specific to certain functionality of your application; IMHO you should consider breaking the DAO up to a more specific level, leaving the generic DAO to do the actual generic work.
I wouldn't use a setter for the data source; I would pass it in the constructor of the DAO. It doesn't seem right to be able to change the data source during the life of the DAO object.
Well, I think you should use dependency injection in this case. Your base class would then be abstracted from the type of datasource. Even when you add a new type of datasource, the only change you would end up making is in the factory method that creates the DataSource object for the current request, which increases the loose coupling of your application:
interface IDataSource<T> {
    T update(T entity);
}

class ConcreteDataSource<T> implements IDataSource<T> {
    public T update(T entity) {
        // concrete implementation
        return entity;
    }
}

public class BaseDAOImpl<T> implements BaseDAO<T> {

    private IDataSource<T> ds;

    public void setDs(IDataSource<T> ds) {
        this.ds = ds;
    }

    public T update(T entity) {
        return ds.update(entity);
    }
}

// where you instantiate a new instance of BaseDAO:
// a factory creates the datasource for this context
IDataSource<SomeType> contextualDs = getDataSourceForThisUser();
BaseDAOImpl<SomeType> dao = new BaseDAOImpl<SomeType>();
// inject the dependency
dao.setDs(contextualDs);

Spring: separate datasource for read-only transactions

Thanks for reading this.
I have 2 MySQL databases: a master for writes and a slave for reads. The perfect scenario I imagine is that my app uses the connection to the master for readOnly=false transactions and the slave for readOnly=true transactions.
To implement this I need to provide a valid connection depending on the type of the current transaction. My data service layer should not know what type of connection it uses and should just use the injected SqlMapClient (I use iBatis) directly. This means that (if I get it right) the injected SqlMapClients should be proxied, with the delegate chosen at runtime.
public class MyDataService {

    private SqlMapClient sqlMap;

    @Autowired
    public MyDataService(SqlMapClient sqlMap) {
        this.sqlMap = sqlMap;
    }

    @Transactional(readOnly = true)
    public MyData getSomeData() {
        // an instance of sqlMap connected to slave should be used
    }

    @Transactional(readOnly = false)
    public void saveMyData(MyData myData) {
        // an instance of sqlMap connected to master should be used
    }
}
So the question is - how can I do this?
Thanks a lot
It's an interesting idea, but you'd have a tough job on your hands. The readOnly attribute is intended as a hint to the transaction manager and isn't really consulted anywhere meaningful. You'd have to rewrite or extend multiple Spring infrastructure classes.
So unless you're hell-bent on getting this working the way you want, your best option is almost certainly to inject two separate SqlMapClient objects into your DAO and have the methods pick the appropriate one. The @Transactional annotations would also need to indicate which transaction manager to use (assuming you're using DataSourceTransactionManager rather than JpaTransactionManager), taking care to match the transaction manager to the DataSource used by the SqlMapClient.
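A minimal sketch of that two-client approach (bean qualifiers, statement ids, and transaction manager names are illustrative):

import java.sql.SQLException;
import com.ibatis.sqlmap.client.SqlMapClient;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.transaction.annotation.Transactional;

public class MyDataService {

    private final SqlMapClient masterSqlMap; // wired to the master DataSource
    private final SqlMapClient slaveSqlMap;  // wired to the slave DataSource

    @Autowired
    public MyDataService(@Qualifier("masterSqlMap") SqlMapClient masterSqlMap,
                         @Qualifier("slaveSqlMap") SqlMapClient slaveSqlMap) {
        this.masterSqlMap = masterSqlMap;
        this.slaveSqlMap = slaveSqlMap;
    }

    @Transactional(value = "slaveTxManager", readOnly = true)
    public MyData getSomeData() throws SQLException {
        return (MyData) slaveSqlMap.queryForObject("myData.select");
    }

    @Transactional("masterTxManager")
    public void saveMyData(MyData myData) throws SQLException {
        masterSqlMap.insert("myData.insert", myData);
    }
}

Each transaction manager is a DataSourceTransactionManager wired to the same DataSource as the corresponding SqlMapClient, which keeps the transaction and the statements on the same connection pool.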
