Multi-tenant Spring JPA: dynamic dialect resolution for dynamic datasources - java

I have an application with a base database (Oracle). It fetches the other tenants' database connection strings from a table in the base database. These tenants can be Oracle, Postgres, or MSSQL.
When the application starts, Hibernate sets the dialect once (for example org.hibernate.dialect.SQLServerDialect), based on the startup datasource. But when I try to insert data into a tenant of a different database type, it throws an error while inserting data: com.microsoft.sqlserver.jdbc.SQLServerException: DEFAULT or NULL are not allowed as explicit identity values
This is because the dialect chosen at startup is applied to every tenant, regardless of that tenant's actual database type.
[WARN ] 2020-01-21 09:16:22.504 [https-jsse-nio-22500-exec-5] [o.h.e.j.s.SqlExceptionHelper] -- SQL Error: 339, SQLState: S0001
[ERROR] 2020-01-21 09:16:22.504 [https-jsse-nio-22500-exec-5] [o.h.e.j.s.SqlExceptionHelper] -- DEFAULT or NULL are not allowed as explicit identity values.
[ERROR] 2020-01-21 09:16:22.535 [https-jsse-nio-22500-exec-5] [o.a.c.c.C.[.[.[.[dispatcherServlet]] -- Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is org.springframework.dao.InvalidDataAccessResourceUsageException: could not execute statement; SQL [n/a]; nested exception is org.hibernate.exception.SQLGrammarException: could not execute statement] with root cause
com.microsoft.sqlserver.jdbc.SQLServerException: DEFAULT or NULL are not allowed as explicit identity values.
at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:217)
at com.microsoft.sqlserver.jdbc.SQLServerStatement.getNextResult(SQLServerStatement.java:1655)
at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.doExecutePreparedStatement(SQLServerPreparedStatement.java:440)
at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement$PrepStmtExecCmd.doExecute(SQLServerPreparedStatement.java:385)
at com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:7505)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:2445)
at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeCommand(SQLServerStatement.java:191)
at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeStatement(SQLServerStatement.java:166)
at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.executeUpdate(SQLServerPreparedStatement.java:328)
at com.zaxxer.hikari.pool.ProxyPreparedStatement.executeUpdate(ProxyPreparedStatement.java:61)
at com.zaxxer.hikari.pool.HikariProxyPreparedStatement.executeUpdate(HikariProxyPreparedStatement.java)
at org.hibernate.engine.jdbc.internal.ResultSetReturnImpl.executeUpdate(ResultSetReturnImpl.java:197)
at org.hibernate.dialect.identity.GetGeneratedKeysDelegate.executeAndExtract(GetGeneratedKeysDelegate.java:57)
at org.hibernate.id.insert.AbstractReturningDelegate.performInsert(AbstractReturningDelegate.java:43)
at org.hibernate.persister.entity.AbstractEntityPersister.insert(AbstractEntityPersister.java:3106)
at org.hibernate.persister.entity.AbstractEntityPersister.insert(AbstractEntityPersister.java:3699)
at org.hibernate.action.internal.EntityIdentityInsertAction.execute(EntityIdentityInsertAction.java:84)
at org.hibernate.engine.spi.ActionQueue.execute(ActionQueue.java:645)
at org.hibernate.engine.spi.ActionQueue.addResolvedEntityInsertAction(ActionQueue.java:282)
at org.hibernate.engine.spi.ActionQueue.addInsertAction(ActionQueue.java:263)
at org.hibernate.engine.spi.ActionQueue.addAction(ActionQueue.java:317)
at org.hibernate.event.internal.AbstractSaveEventListener.addInsertAction(AbstractSaveEventListener.java:335)
at org.hibernate.event.internal.AbstractSaveEventListener.performSaveOrReplicate(AbstractSaveEventListener.java:292)
at org.hibernate.event.internal.AbstractSaveEventListener.performSave(AbstractSaveEventListener.java:198)
at org.hibernate.event.internal.AbstractSaveEventListener.saveWithGeneratedId(AbstractSaveEventListener.java:128)
at org.hibernate.event.internal.DefaultPersistEventListener.entityIsTransient(DefaultPersistEventListener.java:192)
at org.hibernate.event.internal.DefaultPersistEventListener.onPersist(DefaultPersistEventListener.java:135)
at org.hibernate.event.internal.DefaultPersistEventListener.onPersist(DefaultPersistEventListener.java:62)
at org.hibernate.event.service.internal.EventListenerGroupImpl.fireEventOnEachListener(EventListenerGroupImpl.java:108)
at org.hibernate.internal.SessionImpl.firePersist(SessionImpl.java:702)
at org.hibernate.internal.SessionImpl.persist(SessionImpl.java:688)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
I have a TenantIdentifierResolver which implements CurrentTenantIdentifierResolver
@Component
public class TenantIdentifierResolver implements CurrentTenantIdentifierResolver {

    @Autowired
    PropertyConfig propertyConfig;

    @Override
    public String resolveCurrentTenantIdentifier() {
        String tenantId = TenantContext.getCurrentTenant();
        if (tenantId != null) {
            return tenantId;
        }
        return propertyConfig.getDefaultTenant();
    }

    @Override
    public boolean validateExistingCurrentSessions() {
        return true;
    }
}
A component class MultiTenantConnectionProviderImpl which extends AbstractDataSourceBasedMultiTenantConnectionProviderImpl
@Component
public class MultiTenantConnectionProviderImpl extends AbstractDataSourceBasedMultiTenantConnectionProviderImpl {

    @Autowired
    private DataSource defaultDS;

    @Autowired
    PropertyConfig propertyConfig;

    @Autowired
    TenantDataSourceService tenantDBService;

    private Map<String, DataSource> map = new HashMap<>();
    boolean init = false;

    @PostConstruct
    public void load() {
        map.put(propertyConfig.getDefaultTenant(), defaultDS);
        ConcurrentMap<String, DataSource> tenantList = tenantDBService.getGlobalTenantDataSource(); // gets tenant datasources from service
        map.putAll(tenantList);
    }

    @Override
    protected DataSource selectAnyDataSource() {
        return map.get(propertyConfig.getDefaultTenant());
    }

    @Override
    protected DataSource selectDataSource(String tenantIdentifier) {
        return map.get(tenantIdentifier) != null ? map.get(tenantIdentifier) : map.get(propertyConfig.getDefaultTenant());
    }
}
And a configuration class HibernateConfig
@Configuration
public class HibernateConfig {

    @Autowired
    private JpaProperties jpaProperties;

    @Bean
    public JpaVendorAdapter jpaVendorAdapter() {
        return new HibernateJpaVendorAdapter();
    }

    @Bean
    LocalContainerEntityManagerFactoryBean entityManagerFactory(
            DataSource dataSource,
            MultiTenantConnectionProviderImpl multiTenantConnectionProviderImpl,
            TenantIdentifierResolver currentTenantIdentifierResolverImpl
    ) {
        Map<String, Object> jpaPropertiesMap = new HashMap<>(jpaProperties.getProperties());
        jpaPropertiesMap.put(Environment.MULTI_TENANT, MultiTenancyStrategy.SCHEMA);
        jpaPropertiesMap.put(Environment.MULTI_TENANT_CONNECTION_PROVIDER, multiTenantConnectionProviderImpl);
        jpaPropertiesMap.put(Environment.MULTI_TENANT_IDENTIFIER_RESOLVER, currentTenantIdentifierResolverImpl);
        // jpaPropertiesMap.put(Environment.DIALECT_RESOLVERS, "com.esq.cms.CashOrderMgmtService.multitenant.CustomDialectResolver");
        jpaPropertiesMap.put("hibernate.jdbc.batch_size", 500);
        jpaPropertiesMap.put("hibernate.order_inserts", true);
        jpaPropertiesMap.put("hibernate.order_updates", true);
        LocalContainerEntityManagerFactoryBean em = new LocalContainerEntityManagerFactoryBean();
        em.setDataSource(dataSource);
        em.setPackagesToScan("com.esq.cms.*");
        em.setJpaVendorAdapter(this.jpaVendorAdapter());
        em.setJpaPropertyMap(jpaPropertiesMap);
        return em;
    }
}
There are many examples of setting a dialect via the properties file, but they assume a fixed type and number of databases. In my case it can be any of these database types. I have also tried adding a custom Hibernate dialect resolver class, but it is still not working; I might be missing something. What should I do so that Hibernate resolves the dialect per database by itself? Any help will be appreciated. Thanks.

Try the multi-tenancy strategy DATABASE rather than SCHEMA or DISCRIMINATOR, since you are dealing with different types of databases (for example: Oracle, MySQL, and so on).
As per the Hibernate docs, these are the approaches you can take for separating data in multi-tenant systems:
Separate Database (MultiTenancyStrategy.DATABASE):
Each tenant's data is kept in a physically separate database instance. JDBC connections point to each separate database specifically, so connection pooling is per single tenant. The connection pool is selected based on the "tenant identifier" linked to a particular user.
Separate Schema (MultiTenancyStrategy.SCHEMA):
Each tenant's data is kept in a distinct database schema on a single database instance.
Partitioned data (MultiTenancyStrategy.DISCRIMINATOR):
All data is kept in a single database schema. The data for each tenant is partitioned by the use of a discriminator. A single JDBC connection pool is used for all tenants. For every SQL statement, the app needs to manage execution of the queries against the database based on the "tenant identifier" discriminator.
Decide which strategy you want to go for based on the requirements.
I'm providing my own working sample of multi-tenancy (with Spring Boot) that uses two different databases: one MySQL and one Postgres.
Github Repository : Working Multitenancy Code
Note: Create tables before doing any operations in the database.
I have configured all the tenants in the properties file (application.properties) with different databases.
server.servlet.context-path=/sample
spring.jpa.generate-ddl=false
spring.jpa.hibernate.ddl-auto=none
spring.jpa.show-sql=true
## Tenant 1 database ##
multitenant.datasources.tenant1.url=jdbc:postgresql://localhost:5432/tenant1
multitenant.datasources.tenant1.username=postgres
multitenant.datasources.tenant1.password=Anish#123
multitenant.datasources.tenant1.driverClassName=org.postgresql.Driver
## Tenant 2 database ##
multitenant.datasources.tenant2.url=jdbc:mysql://localhost:3306/tenant2
multitenant.datasources.tenant2.username=root
multitenant.datasources.tenant2.password=Anish#123
multitenant.datasources.tenant2.driverClassName=com.mysql.cj.jdbc.Driver
MultiTenantProperties : This class binds and validates the properties set for multiple tenants, keeping them as a map of tenant name to the required database information.
@Component
@ConfigurationProperties(value = "multitenant")
public class MultiTenantProperties {

    private Map<String, Map<String, String>> datasources = new LinkedHashMap<>();

    public Map<String, Map<String, String>> getDatasources() {
        return datasources;
    }

    public void setDatasources(Map<String, Map<String, String>> datasources) {
        this.datasources = datasources;
    }
}
ThreadLocalTenantStorage : This class holds the tenant name from the incoming request for the current thread, so CRUD operations are routed to the right tenant.
public class ThreadLocalTenantStorage {

    private static ThreadLocal<String> currentTenant = new ThreadLocal<>();

    public static void setTenantName(String tenantName) {
        currentTenant.set(tenantName);
    }

    public static String getTenantName() {
        return currentTenant.get();
    }

    public static void clear() {
        currentTenant.remove();
    }
}
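A quick plain-Java check (outside Spring, with a self-contained copy of the storage class) of the thread-local behavior: each thread sees only its own tenant name, which is what makes this safe for concurrent requests.

```java
public class ThreadLocalTenantStorageDemo {
    // Minimal copy of ThreadLocalTenantStorage so the demo is self-contained
    static class TenantStorage {
        private static final ThreadLocal<String> currentTenant = new ThreadLocal<>();
        static void setTenantName(String name) { currentTenant.set(name); }
        static String getTenantName() { return currentTenant.get(); }
        static void clear() { currentTenant.remove(); }
    }

    public static void main(String[] args) throws InterruptedException {
        TenantStorage.setTenantName("tenant1");
        Thread other = new Thread(() -> {
            // A different thread starts with no tenant set
            System.out.println("other thread sees: " + TenantStorage.getTenantName());
            TenantStorage.setTenantName("tenant2");
            System.out.println("other thread now: " + TenantStorage.getTenantName());
        });
        other.start();
        other.join();
        // The main thread's value is unaffected by the other thread
        System.out.println("main thread sees: " + TenantStorage.getTenantName());
        TenantStorage.clear();
        System.out.println("after clear: " + TenantStorage.getTenantName());
    }
}
```

Note that because servlet containers reuse threads from a pool, clearing the value after each request (as the interceptor below does) is essential; otherwise a later request could inherit a stale tenant.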
MultiTenantInterceptor : This class intercepts the incoming request and populates ThreadLocalTenantStorage with the current tenant so the right database is selected. After the request completes, the tenant is removed from ThreadLocalTenantStorage.
public class MultiTenantInterceptor extends HandlerInterceptorAdapter {

    private static final String TENANT_HEADER_NAME = "TENANT-NAME";

    @Override
    public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler)
            throws Exception {
        String tenantName = request.getHeader(TENANT_HEADER_NAME);
        ThreadLocalTenantStorage.setTenantName(tenantName);
        return true;
    }

    @Override
    public void postHandle(HttpServletRequest request, HttpServletResponse response, Object handler,
            ModelAndView modelAndView) throws Exception {
        ThreadLocalTenantStorage.clear();
    }
}
TenantIdentifierResolver : This class returns the current tenant from ThreadLocalTenantStorage so that the datasource can be selected.
public class TenantIdentifierResolver implements CurrentTenantIdentifierResolver {

    private static String DEFAULT_TENANT_NAME = "tenant1";

    @Override
    public String resolveCurrentTenantIdentifier() {
        String currentTenantName = ThreadLocalTenantStorage.getTenantName();
        return (currentTenantName != null) ? currentTenantName : DEFAULT_TENANT_NAME;
    }

    @Override
    public boolean validateExistingCurrentSessions() {
        return true;
    }
}
WebConfiguration : This configuration registers the MultiTenantInterceptor class as an interceptor.
@Configuration
public class WebConfiguration implements WebMvcConfigurer {

    @Override
    public void addInterceptors(InterceptorRegistry registry) {
        registry.addInterceptor(new MultiTenantInterceptor());
    }
}
DataSourceMultiTenantConnectionProvider : This class selects the datasource based on the tenant name.
public class DataSourceMultiTenantConnectionProvider extends AbstractDataSourceBasedMultiTenantConnectionProviderImpl {

    private static final long serialVersionUID = 1L;

    @Autowired
    private Map<String, DataSource> multipleDataSources;

    @Override
    protected DataSource selectAnyDataSource() {
        return multipleDataSources.values().iterator().next();
    }

    @Override
    protected DataSource selectDataSource(String tenantName) {
        return multipleDataSources.get(tenantName);
    }
}
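The selection logic above can be exercised in plain Java. In this sketch, strings stand in for the real DataSource objects (hypothetical names; the real map is built by the configuration class below), but the lookup behavior is identical:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class TenantSelectionDemo {
    // Strings stand in for DataSource objects; the lookup logic is the same
    private final Map<String, String> multipleDataSources = new LinkedHashMap<>();

    TenantSelectionDemo() {
        multipleDataSources.put("tenant1", "postgres-datasource");
        multipleDataSources.put("tenant2", "mysql-datasource");
    }

    // Mirrors selectAnyDataSource(): the first registered datasource wins
    String selectAnyDataSource() {
        return multipleDataSources.values().iterator().next();
    }

    // Mirrors selectDataSource(tenantName): direct lookup, null if unknown
    String selectDataSource(String tenantName) {
        return multipleDataSources.get(tenantName);
    }

    public static void main(String[] args) {
        TenantSelectionDemo demo = new TenantSelectionDemo();
        System.out.println(demo.selectAnyDataSource());       // postgres-datasource
        System.out.println(demo.selectDataSource("tenant2"));  // mysql-datasource
        System.out.println(demo.selectDataSource("unknown"));  // null
    }
}
```

One thing worth noticing: selectDataSource returns null for an unknown tenant, which surfaces later as a NullPointerException deep inside Hibernate. Falling back to a default tenant, as the question's MultiTenantConnectionProviderImpl does, is a reasonable guard.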
MultiTenantJPAConfiguration : This class configures the custom beans for database transactions and registers the tenant datasources for multi-tenancy.
@Configuration
@EnableJpaRepositories(basePackages = { "com.example.multitenancy.dao" }, transactionManagerRef = "multiTenantTxManager")
@EnableConfigurationProperties({ MultiTenantProperties.class, JpaProperties.class })
@EnableTransactionManagement
public class MultiTenantJPAConfiguration {

    @Autowired
    private JpaProperties jpaProperties;

    @Autowired
    private MultiTenantProperties multiTenantProperties;

    @Bean
    public MultiTenantConnectionProvider multiTenantConnectionProvider() {
        return new DataSourceMultiTenantConnectionProvider();
    }

    @Bean
    public CurrentTenantIdentifierResolver currentTenantIdentifierResolver() {
        return new TenantIdentifierResolver();
    }

    @Bean(name = "multipleDataSources")
    public Map<String, DataSource> repositoryDataSources() {
        Map<String, DataSource> datasources = new HashMap<>();
        multiTenantProperties.getDatasources().forEach((key, value) -> datasources.put(key, createDataSource(value)));
        return datasources;
    }

    private DataSource createDataSource(Map<String, String> source) {
        return DataSourceBuilder.create().url(source.get("url")).driverClassName(source.get("driverClassName"))
                .username(source.get("username")).password(source.get("password")).build();
    }

    @Bean
    public EntityManagerFactory entityManagerFactory(LocalContainerEntityManagerFactoryBean entityManagerFactoryBean) {
        return entityManagerFactoryBean.getObject();
    }

    @Bean
    public PlatformTransactionManager multiTenantTxManager(EntityManagerFactory entityManagerFactory) {
        JpaTransactionManager transactionManager = new JpaTransactionManager();
        transactionManager.setEntityManagerFactory(entityManagerFactory);
        return transactionManager;
    }

    @Bean
    public LocalContainerEntityManagerFactoryBean entityManagerFactoryBean(
            MultiTenantConnectionProvider multiTenantConnectionProvider,
            CurrentTenantIdentifierResolver currentTenantIdentifierResolver) {
        Map<String, Object> hibernateProperties = new LinkedHashMap<>();
        hibernateProperties.putAll(this.jpaProperties.getProperties());
        hibernateProperties.put(Environment.MULTI_TENANT, MultiTenancyStrategy.DATABASE);
        hibernateProperties.put(Environment.MULTI_TENANT_CONNECTION_PROVIDER, multiTenantConnectionProvider);
        hibernateProperties.put(Environment.MULTI_TENANT_IDENTIFIER_RESOLVER, currentTenantIdentifierResolver);
        LocalContainerEntityManagerFactoryBean entityManagerFactoryBean = new LocalContainerEntityManagerFactoryBean();
        entityManagerFactoryBean.setPackagesToScan("com.example.multitenancy.entity");
        entityManagerFactoryBean.setJpaVendorAdapter(new HibernateJpaVendorAdapter());
        entityManagerFactoryBean.setJpaPropertyMap(hibernateProperties);
        return entityManagerFactoryBean;
    }
}
Sample Entity class for testing :
@Entity
@Table(name = "user_details", schema = "public")
public class User {

    @Id
    @Column(name = "id")
    private Long id;

    @Column(name = "full_name", length = 30)
    private String name;

    public User() {
        super();
    }

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}
Sample Repository for testing :
public interface UserRepository extends JpaRepository<User, Long> {
}
Sample Controller :
@RestController
@Transactional
public class SampleController {

    @Autowired
    private UserRepository userRepository;

    @GetMapping(value = "/{id}")
    public ResponseEntity<User> getUser(@PathVariable("id") String id) {
        Optional<User> user = userRepository.findById(Long.valueOf(id));
        User userDemo = user.get();
        return ResponseEntity.ok(userDemo);
    }

    @PostMapping(value = "/create/user")
    public ResponseEntity<String> createUser(@RequestBody User user) {
        userRepository.save(user);
        return ResponseEntity.ok("User is saved");
    }
}

Based on our analysis, a Spring/JPA multi-tenancy implementation that connects to multiple database types (MSSQL, PostgreSQL) works only when you have an initial DataSource to connect to at startup. Spring/JPA/Hibernate requires a dialect to be set up during application startup and will throw an error if you don't set one. Our requirement was to obtain these connections lazily through another service that requires a tenant context. As a workaround, we connect to a lightweight empty/dummy in-memory SQLite DB at startup to satisfy the initial dialect and connection requirement. This is the path of least customization to the current framework code, and hopefully this will be added as a feature for multi-tenancy implementations down the road.
The key method that needs an initial connection, and that later lets you connect to multiple types of DBs as needed, is in the class that extends AbstractDataSourceBasedMultiTenantConnectionProviderImpl; override the following method:
@Override
protected DataSource selectAnyDataSource() {
    // TODO: This method is called more than once, so check whether the data source map
    // is empty. If it is, set the default tenant for now.
    // This is test code and still needs work for real application scenarios.
    if (tenantDataSources.isEmpty()) {
        tenantDataSources.put("default", dataSource.getDataSource(""));
        log.info("selectAnyDataSource() method call...Total tenants:" + tenantDataSources.size());
    }
    return this.tenantDataSources.values().iterator().next();
}
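The seed-on-first-call guard in that override can be sketched in plain Java. Here strings stand in for real DataSources (the "dummy-in-memory-db" entry is a hypothetical placeholder for the dummy startup database the answer describes):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LazyDefaultDataSourceDemo {
    // Strings stand in for real DataSources; "default" mirrors the dummy startup DB
    private final Map<String, String> tenantDataSources = new LinkedHashMap<>();

    // Mirrors the overridden selectAnyDataSource(): seed the map on the first call only
    String selectAnyDataSource() {
        if (tenantDataSources.isEmpty()) {
            tenantDataSources.put("default", "dummy-in-memory-db");
        }
        // After seeding, repeated calls keep returning the first registered entry
        return tenantDataSources.values().iterator().next();
    }

    public static void main(String[] args) {
        LazyDefaultDataSourceDemo demo = new LazyDefaultDataSourceDemo();
        System.out.println(demo.selectAnyDataSource()); // seeds, then returns the dummy entry
        System.out.println(demo.selectAnyDataSource()); // same entry on later calls
    }
}
```

In a real application this map would also need thread-safe population (e.g. a ConcurrentHashMap), since selectAnyDataSource can be hit from multiple request threads.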

Related

how to use both Cassandra and MYSQL in a single project?

I am trying to use both Cassandra and MySQL in my project. Some data will be saved in Cassandra and some in MySQL. I had been using MySQL in the same project for the last year, and now that I'm expanding it, I want to add Cassandra as well.
My Cassandra Configuration file is as follows.
@Configuration
@PropertySource(value = {"classpath:META-INF/application.properties"})
@EnableCassandraRepositories(basePackages = {"com.example.repository"})
public class CassandraConfig {

    @Autowired
    private Environment environment;

    private static final Logger LOGGER = LoggerFactory.getLogger(CassandraConfig.class);

    @Bean
    public CassandraClusterFactoryBean cluster() {
        CassandraClusterFactoryBean cluster = new CassandraClusterFactoryBean();
        cluster.setContactPoints(environment.getProperty("spring.cassandra.contactpoints"));
        cluster.setPort(Integer.parseInt(environment.getProperty("spring.cassandra.port")));
        return cluster;
    }

    @Bean
    public CassandraMappingContext mappingContext() {
        return new BasicCassandraMappingContext();
    }

    @Bean
    public CassandraConverter converter() {
        return new MappingCassandraConverter(mappingContext());
    }

    @Bean
    public CassandraSessionFactoryBean session() throws Exception {
        CassandraSessionFactoryBean session = new CassandraSessionFactoryBean();
        session.setCluster(cluster().getObject());
        session.setKeyspaceName(environment.getProperty("spring.cassandra.keyspace"));
        session.setConverter(converter());
        session.setSchemaAction(SchemaAction.NONE);
        return session;
    }

    @Bean
    public CassandraOperations cassandraTemplate() throws Exception {
        return new CassandraTemplate(session().getObject());
    }
}
My Repository is
public interface NewRepository extends CassandraRepository<ID>{
}
Now I'm trying to save an entity to it using the repository:
repo.save(entity);
where repo is the object for NewRepository.
But it throws InvalidDataAccessApiUsageException: unknown Type.
Where am I going wrong?
Thank you in advance.

SpringBoot direct MongoRepository to specific MongoTemplate

I have an app with multiple Mongo configurations. This is achieved through some @Configuration classes, like so:
public abstract class AbstractMongoConfig {

    private String database;
    private String uri;

    public void setUri(String uri) {
        this.uri = uri;
    }

    public void setDatabase(String database) {
        this.database = database;
    }

    public MongoDbFactory mongoDbFactory() throws Exception {
        return new SimpleMongoDbFactory(new MongoClient(new MongoClientURI(this.uri)), this.database);
    }

    public abstract MongoTemplate getMongoTemplate() throws Exception;
}
Config 1 -- app
@Configuration
@ConfigurationProperties(prefix = "app.mongodb")
public class AppMongoConfig extends AbstractMongoConfig {

    @Primary
    @Override
    @Bean(name = "appMongoTemplate")
    public MongoTemplate getMongoTemplate() throws Exception {
        return new MongoTemplate(mongoDbFactory());
    }
}
Config 2 -- test
@Configuration
@ConfigurationProperties(prefix = "test.mongodb")
public class TestMongoConfig extends AbstractMongoConfig {

    @Override
    @Bean(name = "testMongoTemplate")
    public MongoTemplate getMongoTemplate() throws Exception {
        return new MongoTemplate(mongoDbFactory());
    }
}
Then in my properties
test.mongodb.uri=mongodb://127.0.0.1/test
test.mongodb.database=test
app.mongodb.uri=mongodb://127.0.0.1/app
app.mongodb.database=app
So, two mongo configs wired up to an instance running locally but with different databases. I have tried it with different addresses also but it behaves the same.
Anyway, this then gets used via an Entity and MongoRepository
@Document(collection = "collname")
public class TestObj {

    @Id
    private String id;
    private String username;
    private int age;

    // getters & setters
}
Repo
@Repository
public interface TestObjRepository extends MongoRepository<TestObj, String> {
    public TestObj findByUsername(String username);
}
However when I use this in some class somewhere
@Service
public class ThingDoer {

    @Autowired
    TestObjRepository toRepo;

    public void doStuff() {
        TestObj to = new TestObj("name", 123);
        toRepo.save(to);
    }
}
This object gets written via the TestMongoConfig one, not the AppMongoConfig as I would expect, since that's the one annotated with @Primary. Further, if I add the @EnableMongoRepositories annotation on the ThingDoer like:
@EnableMongoRepositories(basePackages = {"com.whatever.package"}, mongoTemplateRef = "appMongoTemplate")
It still doesn't work. It still writes to the db referenced by "test".
If I @Autowired in the MongoTemplate directly and use that, it works as I expect. Things go to the "app" repo. How can I tell it which database the TestObjRepository should be writing to and reading from?
So, if anyone else still has this problem, the solution is this:
@EnableMongoRepositories(basePackages = {"com.whatever.package"}, mongoTemplateRef = "appMongoTemplate")
You have to put it on your custom Mongo configuration class.
Here basePackages is the package path to your repo. You need one package per Mongo database, so each annotation picks up the intended repository and model references.
And you also have to disable Spring's Mongo auto-configuration when using multiple DBs:
spring.autoconfigure.exclude:org.springframework.boot.autoconfigure.mongo.MongoAutoConfiguration
This is a great tutorial:
https://dzone.com/articles/multiple-mongodb-connectors-with-spring-boot

Hibernate entity updates existing row instead of creating new row using saveAndFlush

I am using Spring Data JPA, Hibernate, and SQL Server in a Spring REST application.
i) For the first request, a record is inserted into the database. Everything works fine up to here.
ii) When I make another request with new data, it updates the existing record instead of inserting a new record into the database.
iii) But when the application context reloads, I am able to insert a new record.
Here below is the code snippet.
1) Hibernate Configuration
public class HibernateConfiguration {

    @Autowired
    private Environment env;

    @Bean
    public DataSource dataSource() {
        DriverManagerDataSource dataSource = new DriverManagerDataSource();
        dataSource.setDriverClassName(env.getRequiredProperty("db.driverClassName"));
        dataSource.setUrl(env.getRequiredProperty("db.url"));
        dataSource.setUsername(env.getRequiredProperty("db.username"));
        dataSource.setPassword(env.getRequiredProperty("db.password"));
        return dataSource;
    }

    @Bean
    public LocalContainerEntityManagerFactoryBean entityManagerFactory(DataSource dataSource) {
        LocalContainerEntityManagerFactoryBean entityManagerFactoryBean = new LocalContainerEntityManagerFactoryBean();
        entityManagerFactoryBean.setDataSource(dataSource);
        entityManagerFactoryBean.setJpaVendorAdapter(new HibernateJpaVendorAdapter());
        entityManagerFactoryBean.setPackagesToScan(new String[] { "my.domains.package" });
        entityManagerFactoryBean.setJpaProperties(hibProperties());
        return entityManagerFactoryBean;
    }

    private Properties hibProperties() {
        Properties properties = new Properties();
        properties.put("hibernate.dialect", env.getRequiredProperty("hibernate.dialect"));
        properties.put("hibernate.show_sql", env.getRequiredProperty("hibernate.show_sql"));
        properties.put("hibernate.hbm2ddl.auto", env.getRequiredProperty("hibernate.hbm2ddl.auto"));
        return properties;
    }

    @Bean
    public JpaTransactionManager transactionManager(EntityManagerFactory entityManagerFactory) {
        JpaTransactionManager transactionManager = new JpaTransactionManager();
        transactionManager.setEntityManagerFactory(entityManagerFactory);
        return transactionManager;
    }
}
2) Domain
@Entity
@Table(name = "Emp_Detetail")
public class EmpDetail implements java.io.Serializable {

    private static final long serialVersionUID = 7342724430491939936L;

    @Column(name = "EmployeeId")
    @Id
    @GeneratedValue
    private int employeeId;
    .......
}
3) JPA Repository
public interface EmpDetailRepository extends JpaRepository<EmpDetail, Integer> {
}
4) DAO
@Repository("empDetailDao")
public class EmpDetailDaoImpl implements EmpDetailDao {

    @Resource
    private EmpDetailRepository empDetailRepository;

    @Override
    @Transactional
    public EmpDetail insertEmpDetails(EmpDetail empDetail) {
        return empDetailRepository.saveAndFlush(empDetail);
    }
}
5) Service Class
@Service
public class EmpDetailServiceImpl implements EmpDetailService {

    @Autowired
    private EmpDetailDao empDetailDao;

    @Autowired
    private EmpDetail empBO;

    public EmpDetail toInsertEmpDetails(int active, String empName) throws Exception {
        empBO.setName(empName);
        empBO.setActive(active);
        empBO = empDetailDao.insertEmpDetails(empBO);
        return empBO;
    }
}
6) Controller code is
@RestController
public class EmpDeatilController {

    @Resource
    private EmpDetailService empDetailService;

    @RequestMapping(value = "/insertEmpDetail", method = RequestMethod.GET)
    @ResponseBody
    public EmpDetialResponse insertEmpDetail(@RequestParam("empName") String empName,
            @RequestParam("active") int active) throws Exception {
        return empDetailService.toInsertEmpDetails(active, empName);
    }
}
Please help me.
Thanks in advance
When you insert the first entry, you save the inserted object in the field
@Autowired
private EmpDetail empBO;
in the EmpDetailServiceImpl bean. Since this is a singleton bean, further calls of the method toInsertEmpDetails reuse the saved object, update its name and active flag, and persist that. Because this object already has an id (from your first call), Hibernate updates the existing row in the database instead of creating a new one. To solve it, just remove the field empBO; there is usually no need for such a field in a service (which should be stateless).
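The effect can be reproduced without Spring or a database. This plain-Java sketch uses a hypothetical in-memory FakeRepository whose saveAndFlush assigns an id on first save, mimicking how JPA decides between insert and update:

```java
import java.util.HashMap;
import java.util.Map;

public class SingletonEntityBugDemo {
    // Minimal stand-in for a JPA entity with a generated id
    static class EmpDetail {
        Integer id;
        String name;
    }

    // Minimal stand-in for saveAndFlush(): inserts when id is null, updates otherwise
    static class FakeRepository {
        private final Map<Integer, String> table = new HashMap<>();
        private int nextId = 1;

        EmpDetail saveAndFlush(EmpDetail e) {
            if (e.id == null) {
                e.id = nextId++;     // insert: assign a fresh id
            }
            table.put(e.id, e.name); // insert or overwrite the row for that id
            return e;
        }

        int rowCount() { return table.size(); }
    }

    public static void main(String[] args) {
        FakeRepository repo = new FakeRepository();

        // Buggy pattern: one shared instance, like the autowired empBO field
        EmpDetail shared = new EmpDetail();
        shared.name = "Alice";
        repo.saveAndFlush(shared);   // insert, id = 1
        shared.name = "Bob";
        repo.saveAndFlush(shared);   // id already set, so this updates instead of inserting
        System.out.println("rows with shared instance: " + repo.rowCount()); // 1

        // Fixed pattern: a fresh instance per request
        EmpDetail fresh = new EmpDetail();
        fresh.name = "Carol";
        repo.saveAndFlush(fresh);    // insert, id = 2
        System.out.println("rows after fresh instance: " + repo.rowCount()); // 2
    }
}
```

The fix in the real service is the same as the second half of the demo: build a new EmpDetail inside toInsertEmpDetails instead of mutating a singleton-held field.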

Multiple keyspace support for spring-data-cassandra repositories?

Does Spring Data Cassandra support multiple keyspace repositories in the same application context? I am setting up the cassandra spring data configuration using the following JavaConfig class
@Configuration
@EnableCassandraRepositories(basePackages = "com.blah.repository")
public class CassandraConfig extends AbstractCassandraConfiguration {

    @Override
    public String getKeyspaceName() {
        return "keyspace1";
    }
}
I tried creating a second configuration class after moving the repository classes to a different package.
@Configuration
@EnableCassandraRepositories(basePackages = "com.blah.secondrepository")
public class SecondCassandraConfig extends AbstractCassandraConfiguration {

    @Override
    public String getKeyspaceName() {
        return "keyspace2";
    }
}
However, in that case the first set of repositories fails, as the configured column family for the entities is not found in the keyspace. I think it is probably looking for the column family in the second keyspace.
Does spring-data-cassandra support multiple keyspace repositories? The only place where I found a reference for multiple keyspaces was here, but it does not explain whether this can be done with repositories.
Working app sample:
http://valchkou.com/spring-boot-cassandra.html#multikeyspace
The idea is that you need to override the default beans: session factory and template.
Sample:
1) application.yml
spring:
data:
cassandra:
test1:
keyspace-name: test1_keyspace
contact-points: localhost
test2:
keyspace-name: test2_keyspace
contact-points: localhost
2) base config class
public abstract class CassandraBaseConfig extends AbstractCassandraConfiguration {

    protected String contactPoints;
    protected String keyspaceName;

    public String getContactPoints() {
        return contactPoints;
    }

    public void setContactPoints(String contactPoints) {
        this.contactPoints = contactPoints;
    }

    public void setKeyspaceName(String keyspaceName) {
        this.keyspaceName = keyspaceName;
    }

    @Override
    protected String getKeyspaceName() {
        return keyspaceName;
    }
}
3) Config implementation for test1
package com.sample.repo.test1;

@Configuration
@ConfigurationProperties("spring.data.cassandra.test1")
@EnableCassandraRepositories(
    basePackages = "com.sample.repo.test1",
    cassandraTemplateRef = "test1Template"
)
public class Test1Config extends CassandraBaseConfig {

    @Override
    @Primary
    @Bean(name = "test1Template")
    public CassandraAdminOperations cassandraTemplate() throws Exception {
        return new CassandraAdminTemplate(session().getObject(), cassandraConverter());
    }

    @Override
    @Bean(name = "test1Session")
    public CassandraSessionFactoryBean session() throws Exception {
        CassandraSessionFactoryBean session = new CassandraSessionFactoryBean();
        session.setCluster(cluster().getObject());
        session.setConverter(cassandraConverter());
        session.setKeyspaceName(getKeyspaceName());
        session.setSchemaAction(getSchemaAction());
        session.setStartupScripts(getStartupScripts());
        session.setShutdownScripts(getShutdownScripts());
        return session;
    }
}
4) same for test2, just use different package
package com.sample.repo.test2;
5) place repo for each keyspace in dedicated package
i.e.
package com.sample.repo.test1;
@Repository
public interface RepositoryForTest1 extends CassandraRepository<MyEntity> {
    // ....
}

package com.sample.repo.test2;

@Repository
public interface RepositoryForTest2 extends CassandraRepository<MyEntity> {
    // ....
}
Try explicitly naming your CassandraTemplate beans for each keyspace and using those names in the @EnableCassandraRepositories annotation's cassandraTemplateRef attribute (see the lines marked /* CHANGED */).
In your first configuration:
@Configuration
@EnableCassandraRepositories(basePackages = "com.blah.repository",
        /* CHANGED */ cassandraTemplateRef = "template1")
public class CassandraConfig extends AbstractCassandraConfiguration {

    @Override
    public String getKeyspaceName() {
        return "keyspace1";
    }

    /* CHANGED */
    @Override
    @Bean(name = "template1")
    public CassandraAdminOperations cassandraTemplate() throws Exception {
        return new CassandraAdminTemplate(session().getObject(), cassandraConverter());
    }
...and in your second configuration:
@Configuration
@EnableCassandraRepositories(basePackages = "com.blah.secondrepository",
        /* CHANGED */ cassandraTemplateRef = "template2")
public class SecondCassandraConfig extends AbstractCassandraConfiguration {

    @Override
    public String getKeyspaceName() {
        return "keyspace2";
    }

    /* CHANGED */
    @Override
    @Bean(name = "template2")
    public CassandraAdminOperations cassandraTemplate() throws Exception {
        return new CassandraAdminTemplate(session().getObject(), cassandraConverter());
    }
I think that might do the trick. Please post back if it doesn't.
It seems the recommendation is to use fully qualified keyspace names in queries and have them all managed by one session, since a session is not very lightweight.
Please see reference here
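As a sketch of that recommendation (assuming the DataStax 3.x driver, where Cluster and Session are the core types; the keyspace and table names below are illustrative, not from the question):

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class SharedSessionExample {
    public static void main(String[] args) {
        // One Cluster and one Session for the whole application.
        Cluster cluster = Cluster.builder().addContactPoint("localhost").build();
        Session session = cluster.connect(); // connect without a default keyspace

        // Qualify the keyspace in each statement instead of opening
        // a separate Session per keyspace.
        session.execute("SELECT * FROM test1_keyspace.my_table");
        session.execute("SELECT * FROM test2_keyspace.my_table");

        cluster.close();
    }
}
```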
I tried this approach. However, I ran into exceptions while trying to access the second column family; operations on the first column family seem to be fine.
I am guessing this is because the underlying CassandraSessionFactoryBean bean is a singleton, and that causes:
unconfigured columnfamily columnfamily2
Here are some more logs to provide context
DEBUG org.springframework.beans.factory.support.DefaultListableBeanFactory - Returning cached instance of singleton bean 'entityManagerFactory'
DEBUG org.springframework.beans.factory.support.DefaultListableBeanFactory - Returning cached instance of singleton bean 'session'
DEBUG org.springframework.beans.factory.support.DefaultListableBeanFactory - Returning cached instance of singleton bean 'cluster'
org.springframework.cassandra.support.exception.CassandraInvalidQueryException: unconfigured columnfamily shardgroup; nested exception is com.datastax.driver.core.exceptions.InvalidQueryException: unconfigured columnfamily columnfamily2
at org.springframework.cassandra.support.CassandraExceptionTranslator.translateExceptionIfPossible(CassandraExceptionTranslator.java:116)
at org.springframework.cassandra.config.CassandraCqlSessionFactoryBean.translateExceptionIfPossible(CassandraCqlSessionFactoryBean.java:74)
Hmm. I can't comment on the answer by matthew-adams, but that approach will reuse the session object, since AbstractCassandraConfiguration has @Bean on all the relevant getters.
In a similar setup I initially got this working by overriding all the getters and giving them distinct bean names. But because Spring still insisted on having beans with the original names, I eventually had to make an annotation-free copy of AbstractCassandraConfiguration to inherit from.
Make sure to expose the CassandraTemplate so you can refer to it from @EnableCassandraRepositories if you use those.
I also have a separate implementation of AbstractClusterConfiguration to expose a common CassandraCqlClusterFactoryBean so the underlying connections are being reused.
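A minimal sketch of that idea (class and bean names here are mine, and the exact AbstractClusterConfiguration hooks vary by spring-data-cassandra version):

```java
@Configuration
public class SharedClusterConfig extends AbstractClusterConfiguration {

    // Expose a single CassandraCqlClusterFactoryBean; both keyspace
    // configurations then inject the same Cluster instead of each
    // opening its own connections.
    @Override
    @Bean(name = "sharedCluster")
    public CassandraCqlClusterFactoryBean cluster() {
        CassandraCqlClusterFactoryBean cluster = new CassandraCqlClusterFactoryBean();
        cluster.setContactPoints("localhost");
        return cluster;
    }
}
```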
Edit:
I guess that, according to the email thread linked by bclarance, one should really attempt to reuse the Session object. It seems Spring Data Cassandra isn't really set up for that, though.
In my case, I had a Spring Boot app where the majority of repositories were in one keyspace and just two were in a second. I kept the default Spring Boot configuration for the first keyspace and manually configured the second keyspace using the same approach Spring Boot uses for its autoconfiguration.
@Repository
@NoRepositoryBean // This uses a different keyspace than the default, so not auto-creating
public interface SecondKeyspaceTableARepository
        extends MapIdCassandraRepository<SecondKeyspaceTableA> {
}

@Repository
@NoRepositoryBean // This uses a different keyspace than the default, so not auto-creating
public interface SecondKeyspaceTableBRepository
        extends MapIdCassandraRepository<SecondKeyspaceTableB> {
}

@Configuration
public class SecondKeyspaceCassandraConfig {

    public static final String KEYSPACE_NAME = "second_keyspace";

    /**
     * @see org.springframework.boot.autoconfigure.data.cassandra.CassandraDataAutoConfiguration#cassandraSession(CassandraConverter)
     */
    @Bean(autowireCandidate = false)
    public CassandraSessionFactoryBean secondKeyspaceCassandraSession(
            Cluster cluster, Environment environment, CassandraConverter converter) {
        CassandraSessionFactoryBean session = new CassandraSessionFactoryBean();
        session.setCluster(cluster);
        session.setConverter(converter);
        session.setKeyspaceName(KEYSPACE_NAME);
        Binder binder = Binder.get(environment);
        binder.bind("spring.data.cassandra.schema-action", SchemaAction.class)
                .ifBound(session::setSchemaAction);
        return session;
    }

    /**
     * @see org.springframework.boot.autoconfigure.data.cassandra.CassandraDataAutoConfiguration#cassandraTemplate(com.datastax.driver.core.Session, CassandraConverter)
     */
    @Bean(autowireCandidate = false)
    public CassandraTemplate secondKeyspaceCassandraTemplate(
            Cluster cluster, Environment environment, CassandraConverter converter) {
        return new CassandraTemplate(secondKeyspaceCassandraSession(cluster, environment, converter)
                .getObject(), converter);
    }
    @Bean
    public SecondKeyspaceTableARepository secondKeyspaceTableARepository(
            Cluster cluster, Environment environment, CassandraConverter converter) {
        return createRepository(SecondKeyspaceTableARepository.class,
                secondKeyspaceCassandraTemplate(cluster, environment, converter));
    }

    @Bean
    public SecondKeyspaceTableBRepository secondKeyspaceTableBRepository(
            Cluster cluster, Environment environment, CassandraConverter converter) {
        return createRepository(SecondKeyspaceTableBRepository.class,
                secondKeyspaceCassandraTemplate(cluster, environment, converter));
    }

    private <T> T createRepository(Class<T> repositoryInterface, CassandraTemplate operations) {
        return new CassandraRepositoryFactory(operations).getRepository(repositoryInterface);
    }
}

Spring data testing custom repository data doesn't update

I am trying to write a test for a custom Spring Data repository. I'm also using QueryDSL.
I am new to spring-data. I use Spring's support for an HSQL DB in testing, and MySQL for dev.
Problem: I do not see updated data in tests when I use the custom repository.
public interface AuctionRepository extends AuctionRepositoryCustom, CrudRepository<Auction, Long>, QueryDslPredicateExecutor<Auction> {
    // needed for spring data crud
}
.
public interface AuctionRepositoryCustom {
    long renameToBestName();
}
.
public class AuctionRepositoryImpl extends QueryDslRepositorySupport implements AuctionRepositoryCustom {

    private static final QAuction auction = QAuction.auction;

    public AuctionRepositoryImpl() {
        super(Auction.class);
    }

    @Override
    public long renameToBestName() {
        return update(auction)
                .set(auction.name, "BestName")
                .execute();
    }
}
My test somehow fails at the last line:
public class CustomAuctionRepositoryImplTest extends AbstractIntegrationTest {

    @Inject
    AuctionRepository auctionRepository;

    @Test
    public void testDoSomething() {
        Auction auction = auctionRepository.findOne(26L);
        assertEquals("EmptyName", auction.getName());

        // test save
        auction.setName("TestingSave");
        auctionRepository.save(auction);
        Auction saveResult = auctionRepository.findOne(26L);
        assertEquals("TestingSave", saveResult.getName());

        // test custom repository
        long updatedRows = auctionRepository.renameToBestName();
        assertTrue(updatedRows > 0);
        Auction resultAuction = auctionRepository.findOne(26L);
        assertEquals("BestName", resultAuction.getName()); // FAILS expected:<[BestNam]e> but was:<[TestingSav]e>
    }
}
I can't figure out why the data doesn't update when I go through the custom repository. If I start the application in dev mode and call renameToBestName() through a controller, everything works as expected and the name changes.
Below is the test configuration, if needed:
@RunWith(SpringJUnit4ClassRunner.class)
@Transactional
@ActiveProfiles("test")
@ContextConfiguration(classes = {TestBeans.class, JpaConfig.class, EmbeddedDataSourceConfig.class})
@ComponentScan(basePackageClasses = IntegrationTest.class, excludeFilters = @Filter({Configuration.class}))
public abstract class AbstractIntegrationTest {
}
.
@Configuration
@EnableTransactionManagement
@EnableJpaRepositories(basePackageClasses = Application.class)
class JpaConfig {

    @Value("${hibernate.dialect}")
    private String dialect;

    @Value("${hibernate.hbm2ddl.auto}")
    private String hbm2ddlAuto;

    @Value("${hibernate.isShowSQLOn}")
    private String isShowSQLOn;

    @Autowired
    private DataSource dataSource;

    @Bean
    public LocalContainerEntityManagerFactoryBean entityManagerFactory() {
        LocalContainerEntityManagerFactoryBean entityManagerFactory = new LocalContainerEntityManagerFactoryBean();
        entityManagerFactory.setDataSource(dataSource);
        entityManagerFactory.setPackagesToScan("auction");
        entityManagerFactory.setJpaVendorAdapter(new HibernateJpaVendorAdapter());
        Properties jpaProperties = new Properties();
        jpaProperties.put(org.hibernate.cfg.Environment.DIALECT, dialect);
        if (!hbm2ddlAuto.isEmpty()) {
            jpaProperties.put(org.hibernate.cfg.Environment.HBM2DDL_AUTO, hbm2ddlAuto);
        }
        jpaProperties.put(org.hibernate.cfg.Environment.SHOW_SQL, isShowSQLOn);
        jpaProperties.put(org.hibernate.cfg.Environment.HBM2DDL_IMPORT_FILES_SQL_EXTRACTOR, "org.hibernate.tool.hbm2ddl.MultipleLinesSqlCommandExtractor");
        entityManagerFactory.setJpaProperties(jpaProperties);
        return entityManagerFactory;
    }

    @Bean
    public PlatformTransactionManager transactionManager() {
        return new JpaTransactionManager();
    }
}
This is because the bulk update query issued through your code goes straight to the database and is not defined to evict the objects it potentially touches from the EntityManager, so the already-loaded entity stays stale in the persistence context. Read more on that in this answer.
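One way to work around this in the failing test is to clear the persistence context after the bulk update so the subsequent findOne() actually hits the database. This is a hedged sketch; it assumes the EntityManager is injectable into the test via @PersistenceContext:

```java
@PersistenceContext
private EntityManager entityManager;

@Test
public void testDoSomething() {
    // ... save and bulk-update as before ...
    long updatedRows = auctionRepository.renameToBestName();
    assertTrue(updatedRows > 0);

    // The QueryDSL update bypassed the persistence context, so detach
    // the stale first-level-cache copy before reloading.
    entityManager.clear();

    Auction resultAuction = auctionRepository.findOne(26L);
    assertEquals("BestName", resultAuction.getName());
}
```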
