Changing the value of a @Bean at runtime in Java Spring Boot - java

In a Java Spring Boot application there is a @Bean named DataSource that is used to open connections to a database; it is consumed by another @Bean called JdbcTemplate, which performs actions on the database. The problem is that the DataSource @Bean requires the connection properties (url, username and password) to be preconfigured. I need the DataSource @Bean to start with an "empty" or "default" value at project startup and to have its value changed at runtime. To be more exact, I want a certain endpoint to execute the action of changing the @Bean's value. With the value of the DataSource @Bean changed, the JdbcTemplate will consequently be able to perform actions on several databases.
Some details:
I have already evaluated this issue of using multiple databases, and in my case, it will be necessary.
All databases to be connected have the same model.
I do not think I need to delete and create another DataSource @Bean at runtime; maybe it is enough to override the values of the @Bean that Spring Boot itself already creates automatically.
I have already made the DataSource @Bean start with an "empty" value by writing a method annotated with @Bean that returns a DataSource object, which is literally this code: DataSourceBuilder.create().build();.
My English is not very good so if it's not very understandable, sorry.
DataSource @Bean code:
@Bean
public DataSource dataSource() {
    return DataSourceBuilder.create().build();
}
Main class:
@SpringBootApplication(scanBasePackages = "br.com.b2code")
@RequiredArgsConstructor(onConstructor = @__(@Autowired))
public class RunAdm extends SpringBootServletInitializer implements
        CommandLineRunner {

    public static final String URL_FRONTEND = "*";

    /**
     * Main method of the management module.
     *
     * @param args startup arguments
     * @throws Exception a generic exception
     */
    public static void main(String[] args) throws Exception {
        SpringApplication.run(RunAdm.class, args);
    }

    @Override
    protected SpringApplicationBuilder configure(SpringApplicationBuilder application) {
        return application.sources(RunAdm.class);
    }

    @Override
    public void run(String... args) throws Exception {
    }
}
A class to exemplify how I use JdbcTemplate:
@Repository
@RequiredArgsConstructor(onConstructor = @__(@Autowired))
public class ClienteQueryRepositoryImpl implements ClienteQueryRepository {

    private final @NonNull JdbcTemplate jdbc;

    @Override
    public List<Cliente> findAll() {
        return jdbc.query(ClienteQuerySQL.SELECT_ALL_CLIENTE_SQL, new ClienteRowMapper());
    }
}

I think as a general approach you might consider the Proxy design pattern for the actual DataSource implementation.
Let's suppose DataSource is an interface that has a getConnection method taking a user and password (the other methods are not really important, because this answer is theoretical):
interface DataSource {
    Connection getConnection(String user, String password);
}
Now, in order to maintain many databases you might want to provide an implementation of the datasource which will act as a proxy for other datasources that will be created on the fly (upon the endpoint call as you say).
Here is an example:
public class MultiDBDatasource implements DataSource {

    private DataSourcesRegistry registry;

    public Connection getConnection(String user, String password) {
        UserAndPassword userAndPassword = new UserAndPassword(user, password);
        // delegate to the data source registered for these credentials
        return registry.get(userAndPassword).getConnection(user, password);
    }
}
@Component
class DataSourcesRegistry {

    private Map<UserAndPassword, DataSource> map = ...

    public DataSource get(UserAndPassword a) {
        return map.get(a);
    }

    public void addDataSource(UserAndPassword cred, DataSource ds) {
        // add to the map
        map.put(cred, ds);
    }
}
@Controller
class InvocationEndPoint {

    // injected by spring
    private DataSourcesRegistry registry;

    @PostMapping ...
    public void addNewDB(params) {
        DataSource ds = createDS(params); // not spring based
        UserAndPassword cred = createCred(params);
        registry.addDataSource(cred, ds);
    }
}
A couple of notes:
You should "override" the DataSource bean offered by Spring - this can be done by defining your own bean with the same name in your own configuration, which will take precedence over Spring's definition.
Spring won't create the dynamic data sources; they'll be created from the "invocation point" (a controller in this case for brevity, in real life probably some service). In any case only the registry will be managed by Spring, not the data sources.
Keep in mind that this approach is very high-level; in real life you'll have to think about:
Connection Pooling
Metering
Transaction Support
Multithreaded Access
and many other things
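Since the real javax.sql.DataSource interface is larger than the theoretical one above, a minimal sketch of the same idea against actual Spring APIs could use Spring's DelegatingDataSource as the overriding bean and swap its target at runtime; the endpoint path and the DbProperties request type here are assumptions for illustration, not the asker's actual code:
// Sketch only: overrides Spring Boot's auto-configured "dataSource" bean
// with a mutable delegating proxy.
@Configuration
class RoutingDataSourceConfig {

    @Bean
    public DelegatingDataSource dataSource() {
        // starts "empty": connection attempts fail until a target is set at runtime
        return new DelegatingDataSource();
    }
}

@RestController
class DataSourceSwitchController {

    private final DelegatingDataSource dataSource;

    DataSourceSwitchController(DelegatingDataSource dataSource) {
        this.dataSource = dataSource;
    }

    // hypothetical endpoint and request type (DbProperties); validation omitted
    @PostMapping("/datasource")
    public void switchDataSource(@RequestBody DbProperties props) {
        DataSource target = DataSourceBuilder.create()
                .url(props.getUrl())
                .username(props.getUsername())
                .password(props.getPassword())
                .build();
        // subsequent JdbcTemplate calls obtain connections from the new target
        dataSource.setTargetDataSource(target);
    }
}
Because JdbcTemplate asks the injected DataSource for a connection on every operation, swapping the target affects later calls without recreating the JdbcTemplate bean.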

Related

Spring JPA: How to update 2 different tables in 2 different `DataSource` in the same request?

In our application, we have a common database called central and every customer will have their own database with exactly the same set of tables. Each customer's database might be hosted on our own server or the customer's server based on the requirement of the customer organization.
To handle this multi-tenant requirement, we're extending AbstractRoutingDataSource from Spring JPA and overriding its determineTargetDataSource() method to create a new DataSource and establish a new connection on the fly based on the incoming customerCode. We also use a simple DatabaseContextHolder class to store the current datasource context in a ThreadLocal variable. Our solution is similar to what is described in this article.
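The DatabaseContextHolder class itself is not shown here; a minimal sketch of such a holder, assuming a plain ThreadLocal as in the linked article, might look like:
// Sketch of the ThreadLocal context holder referenced below; the real class may differ.
public class DatabaseContextHolder {

    private static final ThreadLocal<String> CONTEXT = new ThreadLocal<>();

    public static void setDatabaseContext(String customerCode) {
        CONTEXT.set(customerCode);
    }

    public static String getDatabaseContext() {
        return CONTEXT.get();
    }

    public static void clear() {
        CONTEXT.remove();
    }
}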
Let's say that in a single request we need to update some data in both the central database and the customer's database, as follows.
public void createNewEmployeeAccount(EmployeeData employee) {
    DatabaseContextHolder.setDatabaseContext("central");
    // Code to save a user account for logging in to the system in the central database
    DatabaseContextHolder.setDatabaseContext(employee.getCustomerCode());
    // Code to save user details like Name, Designation, etc. in the customer's database
}
This code would only work if determineTargetDataSource() is called every time just before any SQL query gets executed, so that we can switch the DataSource dynamically halfway through our method.
However, from this Stack Overflow question, it seems like determineTargetDataSource() is only called once per HttpRequest, when a DataSource is retrieved for the very first time in that request.
I'd be very grateful if you could give me some insight into when AbstractRoutingDataSource.determineTargetDataSource() actually gets called. Besides, if you've dealt with a similar multi-tenant scenario before, I'd love to hear your opinion on how I should deal with updating multiple DataSources in a single request.
We found a working solution, which is a mix of static data source settings for our central database and dynamic data source settings for our customer's database.
In essence, we know exactly which table comes from which database. Hence, we were able to separate our @Entity classes into two different packages as follows.
com.ft.model
-- central
   -- UserAccount.java
   -- UserAccountRepo.java
-- customer
   -- UserProfile.java
   -- UserProfileRepo.java
Subsequently, we created two @Configuration classes to set up the data source settings for each package. For our central database, we use static settings as follows.
@Configuration
@EnableTransactionManagement
@EnableJpaRepositories(
        entityManagerFactoryRef = "entityManagerFactory",
        transactionManagerRef = "transactionManager",
        basePackages = { "com.ft.model.central" }
)
public class CentralDatabaseConfiguration {

    @Primary
    @Bean(name = "dataSource")
    public DataSource dataSource() {
        return DataSourceBuilder.create(this.getClass().getClassLoader())
                .driverClassName("com.microsoft.sqlserver.jdbc.SQLServerDriver")
                .url("jdbc:sqlserver://localhost;databaseName=central")
                .username("sa")
                .password("mhsatuck")
                .build();
    }

    @Primary
    @Bean(name = "entityManagerFactory")
    public LocalContainerEntityManagerFactoryBean entityManagerFactory(EntityManagerFactoryBuilder builder, @Qualifier("dataSource") DataSource dataSource) {
        return builder
                .dataSource(dataSource)
                .packages("com.ft.model.central")
                .persistenceUnit("central")
                .build();
    }

    @Primary
    @Bean(name = "transactionManager")
    public PlatformTransactionManager transactionManager(@Qualifier("entityManagerFactory") EntityManagerFactory entityManagerFactory) {
        return new JpaTransactionManager(entityManagerFactory);
    }
}
For the @Entity classes in the customer package, we set up a dynamic data source resolver using the following @Configuration.
@Configuration
@EnableTransactionManagement
@EnableJpaRepositories(
        entityManagerFactoryRef = "customerEntityManagerFactory",
        transactionManagerRef = "customerTransactionManager",
        basePackages = { "com.ft.model.customer" }
)
public class CustomerDatabaseConfiguration {

    @Bean(name = "customerDataSource")
    public DataSource dataSource() {
        return new MultitenantDataSourceResolver();
    }

    @Bean(name = "customerEntityManagerFactory")
    public LocalContainerEntityManagerFactoryBean entityManagerFactory(EntityManagerFactoryBuilder builder, @Qualifier("customerDataSource") DataSource dataSource) {
        return builder
                .dataSource(dataSource)
                .packages("com.ft.model.customer")
                .persistenceUnit("customer")
                .build();
    }

    @Bean(name = "customerTransactionManager")
    public PlatformTransactionManager transactionManager(@Qualifier("customerEntityManagerFactory") EntityManagerFactory entityManagerFactory) {
        return new JpaTransactionManager(entityManagerFactory);
    }
}
In the MultitenantDataSourceResolver class, we maintain a Map of the created DataSources using customerCode as the key. From each incoming request, we get the customerCode and inject it into our MultitenantDataSourceResolver to resolve the correct DataSource within the determineTargetDataSource() method.
public class MultitenantDataSourceResolver extends AbstractRoutingDataSource {

    @Autowired
    private Provider<CustomerWrapper> customerWrapper;

    private static final Map<String, DataSource> dsCache = new HashMap<String, DataSource>();

    @Override
    protected Object determineCurrentLookupKey() {
        try {
            return customerWrapper.get().getCustomerCode();
        } catch (Exception ex) {
            return null;
        }
    }

    @Override
    protected DataSource determineTargetDataSource() {
        String customerCode = (String) this.determineCurrentLookupKey();
        if (customerCode == null)
            return MultitenantDataSourceResolver.getDefaultDataSource();
        else {
            DataSource dataSource = dsCache.get(customerCode);
            if (dataSource == null)
                dataSource = this.buildDataSourceForCustomer();
            return dataSource;
        }
    }

    private synchronized DataSource buildDataSourceForCustomer() {
        CustomerWrapper wrapper = customerWrapper.get();
        if (dsCache.containsKey(wrapper.getCustomerCode()))
            return dsCache.get(wrapper.getCustomerCode());
        else {
            DataSource dataSource = DataSourceBuilder.create(MultitenantDataSourceResolver.class.getClassLoader())
                    .driverClassName("com.microsoft.sqlserver.jdbc.SQLServerDriver")
                    .url(wrapper.getJdbcUrl())
                    .username(wrapper.getDbUsername())
                    .password(wrapper.getDbPassword())
                    .build();
            dsCache.put(wrapper.getCustomerCode(), dataSource);
            return dataSource;
        }
    }

    private static DataSource getDefaultDataSource() {
        return DataSourceBuilder.create(CustomerDatabaseConfiguration.class.getClassLoader())
                .driverClassName("com.microsoft.sqlserver.jdbc.SQLServerDriver")
                .url("jdbc:sqlserver://localhost;databaseName=central")
                .username("sa")
                .password("mhsatuck")
                .build();
    }
}
The CustomerWrapper is a @RequestScope object whose values are populated on each request by the @Controller. We use javax.inject.Provider to inject it into our MultitenantDataSourceResolver.
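A minimal sketch of such a request-scoped wrapper (the fields are assumptions based on the getters used above; the real class may differ) could be:
@Component
@RequestScope
public class CustomerWrapper {

    // populated by the @Controller on each request
    private String customerCode;
    private String jdbcUrl;
    private String dbUsername;
    private String dbPassword;

    public String getCustomerCode() { return customerCode; }
    public void setCustomerCode(String customerCode) { this.customerCode = customerCode; }
    public String getJdbcUrl() { return jdbcUrl; }
    public void setJdbcUrl(String jdbcUrl) { this.jdbcUrl = jdbcUrl; }
    public String getDbUsername() { return dbUsername; }
    public void setDbUsername(String dbUsername) { this.dbUsername = dbUsername; }
    public String getDbPassword() { return dbPassword; }
    public void setDbPassword(String dbPassword) { this.dbPassword = dbPassword; }
}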
Lastly, even though logically we will never save anything using the default DataSource (all requests will always contain a customerCode), there is no customerCode available at startup time, so we still need to provide a valid default DataSource. Otherwise, the application will not be able to start.
If you have any comments or a better solution, please let me know.

Different txManagers with the same DAO; want to inject the Connection of the ConnectionHolder into the persistence layer

I have an application that copies data from one DB to another DB. The same DB schema is installed on both DBs. A lot of business logic is needed when we copy the data.
We have a typical, traditional implementation with a service, a DAO and a custom persistence layer (no entity manager).
First I define two datasources used by two different transaction managers.
@Configuration
@Profile("test")
public class TestEnvironment {

    /**
     * Configuration for the database datasource 1
     *
     * @return
     */
    @Bean(name = { "DataSource1" })
    public DataSource basicDatasource1() {
        ...
    }

    @Bean
    public DataSourceTransactionManager txManagerDB1() {
        DataSourceTransactionManager dataSourceTransactionManager = new DataSourceTransactionManager();
        dataSourceTransactionManager.setDataSource(basicDatasource1());
        return dataSourceTransactionManager;
    }

    /**
     * Configuration for the database datasource 2
     *
     * @return
     */
    @Bean(name = { "DataSource2" })
    public DataSource basicDatasource2() {
        ...
    }

    @Bean
    public DataSourceTransactionManager txManagerDB2() {
        DataSourceTransactionManager dataSourceTransactionManager = new DataSourceTransactionManager();
        dataSourceTransactionManager.setDataSource(basicDatasource2());
        return dataSourceTransactionManager;
    }
}
Then I implement my Service and inject my test DAO via @Autowired.
I implement two methods using the same DAO (reentrant). These two methods in my Service are annotated with @Transactional, each specifying the txManager I want to use.
@Service
@Primary
public class MyTestServiceImpl extends AbstractHandler implements MyTestService {

    @Autowired
    private TxManagerTestDao txManagerTestDao;

    @Transactional("txManagerDB1")
    public String callToDB1() {
        txManagerTestDao.setTransactionName("txManagerDB1");
        return txManagerTestDao.loadTestBean().getTestString();
    }

    /*
     * Return DB Connection Type
     */
    @Transactional("txManagerDB2")
    public String callToDB2() {
        txManagerTestDao.setTransactionName("txManagerDB2");
        return txManagerTestDao.loadTestBean().getTestString();
    }
}
The line txManagerTestDao.setTransactionName("txManagerDB1") or txManagerTestDao.setTransactionName("txManagerDB2") is bad design that I want to remove. I only need it so that the DAO layer knows which transaction manager is in use, in order to pass the active connection on to my persistence layer.
My DAO looks like the following:
@Repository
public class TxManagerTestDaoImpl implements TxManagerTestDao {

    private String transactionName;

    @Autowired
    private AutowireCapableBeanFactory autowireBeanFactory;

    public TestBean loadTestBean() {
        DataSourceTransactionManager myTransaction = (DataSourceTransactionManager) autowireBeanFactory.getBean(transactionName);
        ConnectionHolder conHolder = (ConnectionHolder) TransactionSynchronizationManager.getResource(myTransaction.getDataSource());
        // conHolder.getConnection() now holds the correct connection I have to use, due to the @Transactional in my service layer.
        // HOW CAN I GET THE ACTIVE TRANSACTION MANAGER IN A GENERIC WAY?
        TestBean testBean = new TestBean();
        testBean.setTestString("myTransaction: '" + myTransaction + "' Connection: '" + conHolder.getConnection() + "'");
        return testBean;
    }

    @Override
    public void setTransactionName(String transactionName) {
        this.transactionName = transactionName;
    }
}
I want to select the active txManager via @Transactional("...") as I did in my service layer, and stop calling txManagerTestDao.setTransactionName("txManagerDB2") in a redundant way.
The solution works, but it is not a good fit for our special use case.
Is there a generic way to find out in the method TxManagerTestDaoImpl.loadTestBean() whether txManagerDB1 or txManagerDB2 is active?
As a result I need the ConnectionHolder object of my active transaction. Then I inject that Connection into my persistence layer, so that all classes run in the same transaction.
The DAO and the persistence layer should stay neutral. Our persistence is only a data mapper from/to SQL; I cannot (and do not want to) use JPA or a similar framework.
Thanks

Spring - dynamically switch between application contexts in the request handler

I would like to implement a RESTful web service using Jersey/Spring which switches between application contexts depending on the host name to which the request was made (similar to virtual hosting).
For example the requests made to site1.domain.com and site2.domain.com should have different sets of service beans working with different databases, users, etc.
What is the best way to do it?
EDIT: the application contexts are supposed to have identical bean classes. The difference is the database used. Also they have to be dynamic, i.e. defined/destroyed during application runtime.
It would be nice to know whether this is possible with Spring at all, and what the starting point to search for is. Most of the information I found relates to static configuration during webapp init.
You need to use a custom Spring bean scope that resolves beans depending on the request, in order to manage multiple EMFs/SessionFactories with multiple persistence units.
@Scope("dynamic")
@Bean(name = "erpEMF")
public LocalContainerEntityManagerFactoryBean erpManagerFactory() {
    LocalContainerEntityManagerFactoryBean emf = buildEmf();
    return emf;
}

@Scope("dynamic")
@Bean(name = "erpJPA")
public JpaTransactionManager erpTransactionManager() {
    JpaTransactionManager transactionManager = new JpaTransactionManager();
    transactionManager.setPersistenceUnitName("erpUnit");
    return transactionManager;
}

@Scope("dynamic")
@Bean(name = "erpDataSource", destroyMethod = EmfHolder.DataSourceCloseMethod)
public DataSource erpDataSource() {
    return dynamicDataSource(DB_NAMES.AGI_ERP);
}

public class DynamicScope implements Scope {

    public Object get(String name, ObjectFactory<?> objectFactory) {
        // return the service depending on your request
    }
}
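For a custom scope like this to take effect it must be registered with the bean factory; a minimal sketch, assuming Spring's CustomScopeConfigurer (and that DynamicScope also implements the remaining Scope methods such as remove and registerDestructionCallback), might be:
// Hypothetical registration of the "dynamic" scope; the real wiring may differ.
@Configuration
class DynamicScopeConfig {

    @Bean
    public static CustomScopeConfigurer customScopeConfigurer() {
        CustomScopeConfigurer configurer = new CustomScopeConfigurer();
        configurer.addScope("dynamic", new DynamicScope());
        return configurer;
    }
}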
@Scope("dynamic")
@Service
public class ActAccountService extends ErpGenericService<ActAccount> implements IActAccountService {

    @Transactional("erpJPA")
    public Account create(Account t) {
    }
}

Configure transaction in Spring 4.1.5 without XML

I'm writing an application which connects to an Oracle database. I call a function in the DB which inserts new records into a table, and after this call I can decide what I want to do: commit or rollback.
Unfortunately I'm new to Spring, so I have problems with the configuration. What's more, I want to put this configuration in a Java class, not in XML. And here I need your help.
UPDATED CODE:
ApplicationConfig code:
@Configuration
@EnableTransactionManagement
@ComponentScan("hr")
@PropertySource({"classpath:jdbc.properties", "classpath:functions.properties", "classpath:procedures.properties"})
public class ApplicationConfig {

    @Autowired
    private Environment env;

    @Bean(name = "dataSource")
    public DataSource dataSource() {
        BasicDataSource dataSource = new BasicDataSource();
        dataSource.setDriverClassName(env.getProperty("jdbc.driver"));
        dataSource.setUrl(env.getProperty("jdbc.url"));
        dataSource.setUsername(env.getProperty("jdbc.username"));
        dataSource.setPassword(env.getProperty("jdbc.password"));
        dataSource.setDefaultAutoCommit(false);
        return dataSource;
    }

    @Bean
    public JdbcTemplate jdbcTemplate(DataSource dataSource) {
        JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);
        jdbcTemplate.setResultsMapCaseInsensitive(true);
        return jdbcTemplate;
    }

    @Bean(name = "txName")
    public PlatformTransactionManager txManager() {
        DataSourceTransactionManager txManager = new DataSourceTransactionManager();
        txManager.setDataSource(dataSource());
        return txManager;
    }
}
I have a DAO and a Service, both of which implement a proper interface.
Service implementation:
@Service
public class HumanResourcesServiceImpl implements HumanResourcesService {

    @Autowired
    private HumanResourcesDao hrDao;

    @Override
    public String generateData(int rowsNumber) {
        return hrDao.generateData(rowsNumber);
    }

    @Override
    @Transactional("txName")
    public void shouldCommit(boolean doCommit, Connection connection) throws SQLException {
        hrDao.shouldCommit(doCommit, connection);
    }
}
DAO implementation:
@Repository
public class HumanResourcesDaoImpl implements HumanResourcesDao {

    private JdbcTemplate jdbcTemplate;
    private SimpleJdbcCall generateData;

    @Autowired
    public HumanResourcesDaoImpl(JdbcTemplate jdbcTemplate, Environment env) {
        this.jdbcTemplate = jdbcTemplate;
        generateData = new SimpleJdbcCall(jdbcTemplate)
                .withProcedureName(env.getProperty("procedure.generateData"));
    }

    @Override
    public String generateData(int rowsNumber) {
        HashMap<String, Object> params = new HashMap<>();
        params.put("i_rowsNumber", rowsNumber);
        Map<String, Object> m = generateData.execute(params);
        return (String) m.get("o_execution_time");
    }

    @Override
    @Transactional("txName")
    public void shouldCommit(boolean doCommit, Connection connection) throws SQLException {
        if (doCommit) {
            connection.commit();
        } else {
            connection.rollback();
        }
    }
}
Main class code:
public class Main extends Application implements Initializable {

    private HumanResourcesService hrService;

    @Override
    public void initialize(URL url, ResourceBundle resourceBundle) {
        ApplicationContext context = new AnnotationConfigApplicationContext(ApplicationConfig.class);
        hrService = context.getBean(HumanResourcesService.class);
        BasicDataSource ds = (BasicDataSource) context.getBean("dataSource");
        Connection connection = ds.getConnection();
        // do something and call
        // hrService.generateData
        // do something and call
        // hrService.shouldCommit(true, connection);
        // which commits or rolls back the data generated by the previous call
    }
}
UPDATE:
I think the problem is with the connection, because this statement:
this.jdbcTemplate.getDataSource().getConnection();
creates a new connection, so there is nothing to commit or roll back. But I still can't figure out why this doesn't work properly. No errors, no new records...
What is weird is that when I debugged connection.commit(); I found that in DelegatingConnection.java the parameter this holds the proper connection, but there is something like:
_conn.commit();
and _conn holds a different connection. Why?
Should I somehow synchronize the connection between those two methods, or is this only one connection? To be honest, I'm not sure how it works exactly. Are all calls to stored procedures made on one connection, or is a new connection created with each call?
The real question is: how do I commit or roll back the data inserted by a previous call?
One easy way to do this is to annotate the method with @Transactional:
@Transactional
public void myBeanMethod() {
    ...
    if (!doCommit)
        throw new IllegalStateException(); // any unchecked exception will do
}
and Spring will roll all database changes back.
Remember to add @EnableTransactionManagement to your Spring application/main class.
You can use @Transactional and @EnableTransactionManagement to set up transactions without using XML configuration. In short, annotate the methods/classes you want to have transactions with @Transactional. To set up the transaction management you use @EnableTransactionManagement inside your @Configuration.
See Spring's docs for examples of how to use both. @EnableTransactionManagement is detailed in the JavaDocs but should match the XML configuration.
UPDATE
The problem is that you are mixing raw JDBC calls (java.sql.Connection) with Spring JDBC. When you execute your SimpleJdbcCall, Spring creates a new Connection. This is not the same Connection as the one you later try to commit. Hence, nothing happens when you perform the commit. I tried to somehow get the connection that the SimpleJdbcCall uses, but could not find any easy way.
To test this I tried the following (I did not use params):
@Override
public String generateData(int rowsNumber) {
    //HashMap<String, Object> params = new HashMap<>();
    //params.put("i_rowsNumber", rowsNumber);
    //Map<String, Object> m = generateData.execute(params);
    Connection targetConnection = DataSourceUtils.getTargetConnection(generateData.getJdbcTemplate().getDataSource().getConnection());
    System.out.println(targetConnection.prepareCall((generateData.getCallString())).execute());
    targetConnection.commit();
    return (String) m.get("o_execution_time");
}
If I don't save the targetConnection, and instead try to get the connection again by calling DataSourceUtils.getTargetConnection() when committing, nothing happens. Thus, you must commit on the same connection that you perform the statement on. This does not seem to be easy, nor the proper way.
The solution is to drop the java.sql.Connection.commit() call and instead use Spring transactions completely. If you use @Transactional on the method that performs the database call, Spring will automatically commit when the method has finished. If the method body experiences any exception (even outside the actual database call) it will automatically roll back. In other words, this should suffice for normal transaction management.
However, if you are doing batch processing and wish to have more control over your transactions with commits and rollbacks, you can still use Spring. To programmatically control transactions with Spring, you can use TransactionTemplate, which gives you commit and rollback control. Don't have time to give you proper samples, but may do so in later days if you are still stuck ;)
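In the meantime, a minimal sketch of programmatic transaction control with TransactionTemplate (the service class here is hypothetical; it reuses the txManager bean defined in the configuration above) could look like:
@Service
public class ReportService {

    private final TransactionTemplate transactionTemplate;

    @Autowired
    public ReportService(PlatformTransactionManager txManager) {
        // wraps the same transaction manager defined in the @Configuration above
        this.transactionTemplate = new TransactionTemplate(txManager);
    }

    public void doInTransaction(boolean doCommit) {
        transactionTemplate.execute(status -> {
            // perform JdbcTemplate calls here; they all share one transaction
            if (!doCommit) {
                status.setRollbackOnly(); // rolled back when the callback returns
            }
            return null;
        });
        // the commit happens automatically here unless setRollbackOnly was called
    }
}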
@Configuration
@EnableTransactionManagement
@ComponentScan(basePackages = "org.saat")
@PropertySource(value = "classpath:resources/db.properties", ignoreResourceNotFound = true)
public class AppConfig {

    @Autowired
    private Environment env;

    @Bean(name = "dataSource")
    public DataSource getDataSource() {
        DriverManagerDataSource dataSource = new DriverManagerDataSource();
        dataSource.setDriverClassName(env.getProperty("db.driver"));
        dataSource.setUrl(env.getProperty("db.url"));
        dataSource.setUsername(env.getProperty("db.username"));
        dataSource.setPassword(env.getProperty("db.password"));
        return dataSource;
    }

    @Bean(name = "entityManagerFactoryBean")
    public LocalContainerEntityManagerFactoryBean getSessionFactory() {
        LocalContainerEntityManagerFactoryBean factoryBean = new LocalContainerEntityManagerFactoryBean();
        factoryBean.setDataSource(getDataSource());
        factoryBean.setPackagesToScan("org.saat");
        factoryBean.setJpaVendorAdapter(getJpaVendorAdapter());
        Properties props = new Properties();
        props.put("hibernate.dialect", env.getProperty("hibernate.dialect"));
        props.put("hibernate.hbm2ddl.auto", env.getProperty("hibernate.hbm2ddl.auto"));
        props.put("hibernate.show_sql", env.getProperty("hibernate.show_sql"));
        factoryBean.setJpaProperties(props);
        return factoryBean;
    }

    @Bean(name = "transactionManager")
    public JpaTransactionManager getTransactionManager() {
        JpaTransactionManager jpatransactionManager = new JpaTransactionManager();
        jpatransactionManager.setEntityManagerFactory(getSessionFactory().getObject());
        return jpatransactionManager;
    }

    @Bean
    public JpaVendorAdapter getJpaVendorAdapter() {
        HibernateJpaVendorAdapter hibernateJpaVendorAdapter = new HibernateJpaVendorAdapter();
        return hibernateJpaVendorAdapter;
    }
}

Multiple keyspace support for spring-data-cassandra repositories?

Does Spring Data Cassandra support multiple keyspace repositories in the same application context? I am setting up the Cassandra Spring Data configuration using the following JavaConfig class:
@Configuration
@EnableCassandraRepositories(basePackages = "com.blah.repository")
public class CassandraConfig extends AbstractCassandraConfiguration {

    @Override
    public String getKeyspaceName() {
        return "keyspace1";
    }
}
I tried creating a second configuration class after moving the repository classes to a different package.
@Configuration
@EnableCassandraRepositories(basePackages = "com.blah.secondrepository")
public class SecondCassandraConfig extends AbstractCassandraConfiguration {

    @Override
    public String getKeyspaceName() {
        return "keyspace2";
    }
}
However, in that case the first set of repositories fails, as the configured column family for the entities is not found in the keyspace. I think it is probably looking for the column family in the second keyspace.
Does spring-data-cassandra support multiple keyspace repositories? The only place where I found a reference to multiple keyspaces was here, but it does not explain whether this can be done with repositories.
Working app sample:
http://valchkou.com/spring-boot-cassandra.html#multikeyspace
The idea: you need to override the default beans, the session factory and the template.
Sample:
1) application.yml
spring:
  data:
    cassandra:
      test1:
        keyspace-name: test1_keyspace
        contact-points: localhost
      test2:
        keyspace-name: test2_keyspace
        contact-points: localhost
2) base config class
public abstract class CassandraBaseConfig extends AbstractCassandraConfiguration {

    protected String contactPoints;
    protected String keyspaceName;

    public String getContactPoints() {
        return contactPoints;
    }

    public void setContactPoints(String contactPoints) {
        this.contactPoints = contactPoints;
    }

    public void setKeyspaceName(String keyspaceName) {
        this.keyspaceName = keyspaceName;
    }

    @Override
    protected String getKeyspaceName() {
        return keyspaceName;
    }
}
3) Config implementation for test1
package com.sample.repo.test1;

@Configuration
@ConfigurationProperties("spring.data.cassandra.test1")
@EnableCassandraRepositories(
        basePackages = "com.sample.repo.test1",
        cassandraTemplateRef = "test1Template"
)
public class Test1Config extends CassandraBaseConfig {

    @Override
    @Primary
    @Bean(name = "test1Template")
    public CassandraAdminOperations cassandraTemplate() throws Exception {
        return new CassandraAdminTemplate(session().getObject(), cassandraConverter());
    }

    @Override
    @Bean(name = "test1Session")
    public CassandraSessionFactoryBean session() throws Exception {
        CassandraSessionFactoryBean session = new CassandraSessionFactoryBean();
        session.setCluster(cluster().getObject());
        session.setConverter(cassandraConverter());
        session.setKeyspaceName(getKeyspaceName());
        session.setSchemaAction(getSchemaAction());
        session.setStartupScripts(getStartupScripts());
        session.setShutdownScripts(getShutdownScripts());
        return session;
    }
}
4) The same for test2, just use a different package:
package com.sample.repo.test2;
5) Place the repo for each keyspace in a dedicated package,
i.e.
package com.sample.repo.test1;

@Repository
public interface RepositoryForTest1 extends CassandraRepository<MyEntity> {
    // ....
}

package com.sample.repo.test2;

@Repository
public interface RepositoryForTest2 extends CassandraRepository<MyEntity> {
    // ....
}
Try explicitly naming your CassandraTemplate beans for each keyspace and using those names in the @EnableCassandraRepositories annotation's cassandraTemplateRef attribute (see the lines marked /* CHANGED */ for the changes).
In your first configuration:
@Configuration
@EnableCassandraRepositories(basePackages = "com.blah.repository",
        /* CHANGED */ cassandraTemplateRef = "template1")
public class CassandraConfig extends AbstractCassandraConfiguration {

    @Override
    public String getKeyspaceName() {
        return "keyspace1";
    }

    /* CHANGED */
    @Override
    @Bean(name = "template1")
    public CassandraAdminOperations cassandraTemplate() throws Exception {
        return new CassandraAdminTemplate(session().getObject(), cassandraConverter());
    }
...and in your second configuration:
@Configuration
@EnableCassandraRepositories(basePackages = "com.blah.secondrepository",
        /* CHANGED */ cassandraTemplateRef = "template2")
public class SecondCassandraConfig extends AbstractCassandraConfiguration {

    @Override
    public String getKeyspaceName() {
        return "keyspace2";
    }

    /* CHANGED */
    @Override
    @Bean(name = "template2")
    public CassandraAdminOperations cassandraTemplate() throws Exception {
        return new CassandraAdminTemplate(session().getObject(), cassandraConverter());
    }
I think that might do the trick. Please post back if it doesn't.
It seems that it is recommended to use fully qualified keyspace names in queries managed by one session, as the session is not very lightweight.
Please see the reference here.
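For illustration, a sketch of a fully qualified query in a repository (the keyspace, table, and entity names here are hypothetical), using Spring Data Cassandra's @Query annotation:
// Hypothetical repository: the keyspace is qualified explicitly so a single
// shared session can query tables outside its default keyspace.
public interface UserProfileRepository extends CassandraRepository<UserProfile> {

    @Query("SELECT * FROM keyspace2.user_profile WHERE id = ?0")
    UserProfile findProfileById(String id);
}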
I tried this approach. However, I ran into exceptions while trying to access column family 2; operations on column family 1 seem to be fine. I am guessing it is because the underlying CassandraSessionFactoryBean bean is a singleton, and this causes:
unconfigured columnfamily columnfamily2
Here are some more logs to provide context
DEBUG org.springframework.beans.factory.support.DefaultListableBeanFactory - Returning cached instance of singleton bean 'entityManagerFactory'
DEBUG org.springframework.beans.factory.support.DefaultListableBeanFactory - Returning cached instance of singleton bean 'session'
DEBUG org.springframework.beans.factory.support.DefaultListableBeanFactory - Returning cached instance of singleton bean 'cluster'
org.springframework.cassandra.support.exception.CassandraInvalidQueryException: unconfigured columnfamily shardgroup; nested exception is com.datastax.driver.core.exceptions.InvalidQueryException: unconfigured columnfamily columnfamily2
at org.springframework.cassandra.support.CassandraExceptionTranslator.translateExceptionIfPossible(CassandraExceptionTranslator.java:116)
at org.springframework.cassandra.config.CassandraCqlSessionFactoryBean.translateExceptionIfPossible(CassandraCqlSessionFactoryBean.java:74)
Hmm. I can't comment on the answer by matthew-adams, but that will reuse the session object, as AbstractCassandraConfiguration is annotated with @Bean on all the relevant getters.
In a similar setup I initially had it working by overriding all the getters and specifically giving them different bean names. But because Spring still claimed to need beans with the original names, I have now had to make a copy of AbstractCassandraConfiguration with no annotations that I can inherit from.
Make sure to expose the CassandraTemplate so you can refer to it from @EnableCassandraRepositories if you use those.
I also have a separate implementation of AbstractClusterConfiguration to expose a common CassandraCqlClusterFactoryBean, so the underlying connections are reused.
Edit:
I guess, according to the email thread linked by bclarance, one should really attempt to reuse the Session object. It seems Spring Data Cassandra isn't really set up for that, though.
In my case, I had a Spring Boot app where the majority of repositories were in one keyspace, and just two were in a second. I kept the default Spring Boot configuration for the first keyspace, and manually configured the second keyspace using the same configuration approach Spring Boot uses for its autoconfiguration.
@Repository
@NoRepositoryBean // This uses a different keyspace than the default, so not auto-creating
public interface SecondKeyspaceTableARepository
        extends MapIdCassandraRepository<SecondKeyspaceTableA> {
}

@Repository
@NoRepositoryBean // This uses a different keyspace than the default, so not auto-creating
public interface SecondKeyspaceTableBRepository
        extends MapIdCassandraRepository<SecondKeyspaceTableB> {
}

@Configuration
public class SecondKeyspaceCassandraConfig {

    public static final String KEYSPACE_NAME = "second_keyspace";

    /**
     * @see org.springframework.boot.autoconfigure.data.cassandra.CassandraDataAutoConfiguration#cassandraSession(CassandraConverter)
     */
    @Bean(autowireCandidate = false)
    public CassandraSessionFactoryBean secondKeyspaceCassandraSession(
            Cluster cluster, Environment environment, CassandraConverter converter) {
        CassandraSessionFactoryBean session = new CassandraSessionFactoryBean();
        session.setCluster(cluster);
        session.setConverter(converter);
        session.setKeyspaceName(KEYSPACE_NAME);
        Binder binder = Binder.get(environment);
        binder.bind("spring.data.cassandra.schema-action", SchemaAction.class)
                .ifBound(session::setSchemaAction);
        return session;
    }

    /**
     * @see org.springframework.boot.autoconfigure.data.cassandra.CassandraDataAutoConfiguration#cassandraTemplate(com.datastax.driver.core.Session, CassandraConverter)
     */
    @Bean(autowireCandidate = false)
    public CassandraTemplate secondKeyspaceCassandraTemplate(
            Cluster cluster, Environment environment, CassandraConverter converter) {
        return new CassandraTemplate(secondKeyspaceCassandraSession(cluster, environment, converter)
                .getObject(), converter);
    }

    @Bean
    public SecondKeyspaceTableARepository secondKeyspaceTableARepository(
            Cluster cluster, Environment environment, CassandraConverter converter) {
        return createRepository(SecondKeyspaceTableARepository.class,
                secondKeyspaceCassandraTemplate(cluster, environment, converter));
    }

    @Bean
    public SecondKeyspaceTableBRepository secondKeyspaceTableBRepository(
            Cluster cluster, Environment environment, CassandraConverter converter) {
        return createRepository(SecondKeyspaceTableBRepository.class,
                secondKeyspaceCassandraTemplate(cluster, environment, converter));
    }

    private <T> T createRepository(Class<T> repositoryInterface, CassandraTemplate operations) {
        return new CassandraRepositoryFactory(operations).getRepository(repositoryInterface);
    }
}
