How to instantiate an object (JdbcTemplate) inside a Hazelcast MapStore - java

I'm trying to autowire a JdbcTemplate inside a MapStore, but I'm getting a NullPointerException.
I have worked through many examples but am still not able to resolve this issue.
Here is my main class
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class TestCacheApplication {
    public static void main(String[] args) {
        SpringApplication.run(TestCacheApplication.class, args);
        System.err.println("......running successfully......");
    }
}
Here is my cache configuration code
@Component
public class CacheConfig {
    @Bean
    public static Config config() {
        System.err.println("config class");
        Config config = new Config();
        config.setInstanceName("hazelcast");

        MapConfig mapCfg = new MapConfig();
        mapCfg.setName("first-map");
        mapCfg.setBackupCount(2);
        mapCfg.setTimeToLiveSeconds(300);

        MapStoreConfig mapStoreCfg = new MapStoreConfig();
        mapStoreCfg.setClassName(DataMapStore.class.getName()).setEnabled(true);
        mapCfg.setMapStoreConfig(mapStoreCfg);
        config.addMapConfig(mapCfg);
        return config;
    }
}
and the TblRepo implementation
@Service
public class DataTblRepoImpl implements DataTblRepo {
    @Autowired
    private JdbcTemplate jdbcTemplate;

    @Override
    public void save(String id, String name) {
        Object[] params = new Object[] { id, name };
        int[] types = new int[] { Types.VARCHAR, Types.VARCHAR };
        String insertSql = "INSERT INTO public.person(id, name) VALUES(?, ?)";
        jdbcTemplate.update(insertSql, params, types);
    }
}
and the TblRepo interface I have annotated with the @Repository annotation.
And my MapStore class
@SpringAware
public class DataMapStore implements MapStore<String, ModelClass> {
    @Autowired
    DataTblRepo dataTblRepo;

    @Override
    public void store(String key, ModelClass value) {
        dataTblRepo.save(value.getId(), value.getName());
    }
    //remaining methods will come here
}
and Controller

@RestController
@CrossOrigin(origins = "*")
@RequestMapping("/api/v1")
public class DataController {
    @Autowired
    DataService dataService;

    HazelcastInstance hazelCast = Hazelcast.getHazelcastInstanceByName("hazelcast");

    @PostMapping("/{test}")
    public String saveDatafrom(@RequestBody ModelClass model) {
        hazelCast.getMap("first-map").put(model.getId(), model);
        return "stored";
    }
}
Here is the program flow: when I start the application, the CacheConfig class runs first.
In the controller, when I perform the map.put() operation, the data goes to the DataMapStore class, which calls the store method to save the data in the database. Since DataTblRepo is null, the operation fails at the store method itself.
I tried adding @Component on the DataMapStore class as well,
but in that case I'm getting this error:
"message": "Cannot invoke "com.example.demo.repo.DataTblRepository.save(String, String)" because "this.dataTableRepo" is null",
I have seen this same issue discussed on many platforms but am still not able to resolve it.
Any suggestions would be very helpful.

SpringAware is for Hazelcast distributed objects (cf. documentation).
The MapStore in your example is not a distributed object but a plain object. It should be managed by Spring itself. You should replace the @SpringAware annotation with a Spring @Component annotation.
The next issue is that your map store configuration makes Hazelcast responsible for instantiating the MapStore. If that happens, you won't benefit from Spring's dependency injection mechanism. You should instead directly set the instance created by Spring.
Replace @SpringAware with @Component
@Component
public class DataMapStore implements MapStore<String, ModelClass> {
    // ...
}
Use the Spring-configured MapStore instance
@Bean
public Config config(DataMapStore mapStore) { // Ask Spring to inject the instance
    // ...
    MapStoreConfig mapStoreCfg = new MapStoreConfig();
    mapStoreCfg.setImplementation(mapStore); // Use it
    mapCfg.setMapStoreConfig(mapStoreCfg);
    config.addMapConfig(mapCfg);
    return config;
}
I also removed the static keyword from the config() method.
Note that this way of using MapStore couples it with the "client" code. This means you need to use Hazelcast embedded. For more information about embedded mode vs. client/server, please check the documentation related to topology.
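Putting both changes together, the whole configuration looks something like this (a sketch assembled from the snippets above, reusing the names from the question):

import com.hazelcast.config.Config;
import com.hazelcast.config.MapConfig;
import com.hazelcast.config.MapStoreConfig;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class CacheConfig {

    @Bean
    public Config config(DataMapStore mapStore) { // the Spring-managed MapStore is injected here
        Config config = new Config();
        config.setInstanceName("hazelcast");

        MapConfig mapCfg = new MapConfig();
        mapCfg.setName("first-map");
        mapCfg.setBackupCount(2);
        mapCfg.setTimeToLiveSeconds(300);

        MapStoreConfig mapStoreCfg = new MapStoreConfig();
        mapStoreCfg.setImplementation(mapStore); // instead of setClassName(...)
        mapStoreCfg.setEnabled(true);
        mapCfg.setMapStoreConfig(mapStoreCfg);

        config.addMapConfig(mapCfg);
        return config;
    }
}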

Related

SpringBoot selecting the @Repository based on design pattern and configuration

Small question on Spring Boot: how to use a design pattern combined with Spring @Value configuration in order to select the appropriate @Repository, please.
Setup: a Spring Boot project which does nothing but save a pojo. The "difficulty" is the need to choose where to save the pojo, based on some info from inside the payload request.
I started with a first straightforward version, which looks like this:
@RestController
public class ControllerVersionOne {
    @Autowired private ElasticRepository elasticRepository;
    @Autowired private MongoDbRepository mongoRepository;
    @Autowired private RedisRepository redisRepository;
    //imagine many more other repositories

    @PostMapping(path = "/save")
    public String save(@RequestBody MyRequest myRequest) {
        String whereToSave = myRequest.getWhereToSave();
        MyPojo myPojo = new MyPojo(UUID.randomUUID().toString(), myRequest.getValue());
        if (whereToSave.equals("elastic")) {
            return elasticRepository.save(myPojo).toString();
        } else if (whereToSave.equals("mongo")) {
            return mongoRepository.save(myPojo).toString();
        } else if (whereToSave.equals("redis")) {
            return redisRepository.save(myPojo).toString();
        //imagine many more else-if branches
        } else {
            return "unknown destination";
        }
    }
}
With the appropriate @Configuration and @Repository for each and every database. I am showing 3 here, but imagine many. The project also has a way to inject future @Configuration and @Repository classes (the question is not about that part).
@Configuration
public class ElasticConfiguration extends ElasticsearchConfiguration {

@Repository
public interface ElasticRepository extends CrudRepository<MyPojo, String> {

@Configuration
public class MongoConfiguration extends AbstractMongoClientConfiguration {

@Repository
public interface MongoDbRepository extends MongoRepository<MyPojo, String> {

@Configuration
public class RedisConfiguration {

@Repository
public interface RedisRepository {
Please note, some of the repositories are not children of CrudRepository. There is no direct ___Repository which can cover everything.
And this first version is working fine. I am very happy, meaning I am able to save the pojo where it should be saved, getting the correct repository bean via this if-else structure.
In my opinion, this structure is not very elegant (it is ok if we have different opinions here) and, especially, not flexible at all (I need to hardcode each and every possible repository; again, imagine many).
This is why I went to refactor and change to this second version:
@RestController
public class ControllerVersionTwo {
    private ElasticRepository elasticRepository;
    private MongoDbRepository mongoRepository;
    private RedisRepository redisRepository;
    private Map<String, Function<MyPojo, MyPojo>> designPattern;

    @Autowired
    public ControllerVersionTwo(ElasticRepository elasticRepository, MongoDbRepository mongoRepository, RedisRepository redisRepository) {
        this.elasticRepository = elasticRepository;
        this.mongoRepository = mongoRepository;
        this.redisRepository = redisRepository;
        // many more repositories
        designPattern = new HashMap<>();
        designPattern.put("elastic", myPojo -> elasticRepository.save(myPojo));
        designPattern.put("mongo", myPojo -> mongoRepository.save(myPojo));
        designPattern.put("redis", myPojo -> redisRepository.save(myPojo));
        // many more puts
    }

    @PostMapping(path = "/save")
    public String save(@RequestBody MyRequest myRequest) {
        String whereToSave = myRequest.getWhereToSave();
        MyPojo myPojo = new MyPojo(UUID.randomUUID().toString(), myRequest.getValue());
        return designPattern.get(whereToSave).apply(myPojo).toString();
    }
}
As you can see, I am leveraging a design pattern by refactoring the if-else into a hashmap.
This post is not about if-else vs hashmap by the way.
Working fine, but please note the map is a Map<String, Function<MyPojo, MyPojo>>, as I cannot construct a Map<String, @Repository>.
With this second version, the if-else is being refactored, but again, we need to hardcode the hashmap.
This is why I have the idea to build a third version, where I can configure the map itself via a Spring Boot @Value property for a Map:
Here is what I tried:
@RestController
public class ControllerVersionThree {

    @Value("#{${configuration.design.pattern.map}}")
    Map<String, String> configurationDesignPatternMap;

    private Map<String, Function<MyPojo, MyPojo>> designPatternStrategy;

    public ControllerVersionThree() {
        convertConfigurationDesignPatternMapToDesignPatternStrategy(configurationDesignPatternMap, designPatternStrategy);
    }

    private void convertConfigurationDesignPatternMapToDesignPatternStrategy(Map<String, String> configurationDesignPatternMap, Map<String, Function<MyPojo, MyPojo>> designPatternStrategy) {
        // convert configurationDesignPatternMap
        // {elastic:ElasticRepository, mongo:MongoDbRepository, redis:RedisRepository, ...}
        // to a map where I can directly get the appropriate repository based on the key
    }

    @PostMapping(path = "/save")
    public String save(@RequestBody MyRequest myRequest) {
        String whereToSave = myRequest.getWhereToSave();
        MyPojo myPojo = new MyPojo(UUID.randomUUID().toString(), myRequest.getValue());
        return designPatternStrategy.get(whereToSave).apply(myPojo).toString();
    }
}
And I would configure it in the property file:
configuration.design.pattern.map={elastic:ElasticRepository, mongo:MongoDbRepository, saveToRedis:RedisRepository, redis:RedisRepository, ...}
And tomorrow, I would be able to add or remove future repository targets through configuration alone:
configuration.design.pattern.map={elastic:ElasticRepository, anotherElasticKey:ElasticRepository, redis:RedisRepository, postgre:PostGreRepository}
Unfortunately, I am stuck.
What is the correct code in order to leverage a configurable property for mapping a key to its "which @Repository to use" please?
Thank you for your help.
You can create a base repository to be extended by all your repositories:
public interface BaseRepository {
    MyPojo save(MyPojo onboarding);
}
so you will have a bunch of repositories like:
@Repository("repoA")
public interface ARepository extends JpaRepository<MyPojo, String>, BaseRepository {
}

@Repository("repoB")
public interface BRepository extends JpaRepository<MyPojo, String>, BaseRepository {
}
...
Those repositories will be provided by a factory:
public interface BaseRepositoryFactory {
    BaseRepository getBaseRepository(String whereToSave);
}
that you must configure in a ServiceLocatorFactoryBean:
@Bean
public ServiceLocatorFactoryBean baseRepositoryBean() {
    ServiceLocatorFactoryBean serviceLocatorFactoryBean = new ServiceLocatorFactoryBean();
    serviceLocatorFactoryBean.setServiceLocatorInterface(BaseRepositoryFactory.class);
    return serviceLocatorFactoryBean;
}
Now you can inject the factory wherever you need it and get the repo you want:

@Autowired
private BaseRepositoryFactory baseRepositoryFactory;
...
baseRepositoryFactory.getBaseRepository("repoA").save(myPojo);
...
Hope it helps.
Short answer:
- create a shared interface
- create multiple sub-classes of this interface (one per storage) using different Spring component names
- use a map to deal with aliases
- use the Spring context to retrieve the right bean by alias (instead of creating a custom factory)
Now adding a new storage is only a matter of adding a new repository class with a name.
Explanation:
As mentioned in the other answer, you first need to define a common interface, as you can't use CrudRepository.save(...) directly.
In my example I reuse the same signature as the save method, to avoid re-implementing it in the sub-classes of CrudRepository.
public interface MyInterface<T> {
    <S extends T> S save(S entity);
}
Redis Repository:
@Repository("redis") // Here is the name of the redis repo
public class RedisRepository implements MyInterface<MyPojo> {
    @Override
    public <S extends MyPojo> S save(S entity) {
        entity.setValue(entity.getValue() + " saved by redis");
        return entity;
    }
}
For the other CrudRepository-based repositories, there is no need to provide an implementation:

@Repository("elastic") // Here is the name of the elastic repo
public interface ElasticRepository extends CrudRepository<MyPojo, String>, MyInterface<MyPojo> {
}
Create a configuration for your aliases in application.yml
configuration:
  design:
    pattern:
      map:
        redis: redis
        saveToRedisPlease: redis
        elastic: elastic
Create a custom properties class to retrieve the map:
@Component
@ConfigurationProperties(prefix = "configuration.design.pattern")
public class PatternProperties {

    private Map<String, String> map;

    public String getRepoName(String alias) {
        return map.get(alias);
    }

    public Map<String, String> getMap() {
        return map;
    }

    public void setMap(Map<String, String> map) {
        this.map = map;
    }
}
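If you prefer the .properties format used in the question, the same map binds through Spring's relaxed binding (an equivalent form, not shown in the original answer):

configuration.design.pattern.map.redis=redis
configuration.design.pattern.map.saveToRedisPlease=redis
configuration.design.pattern.map.elastic=elastic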
Now create the version three of your controller with the Spring context injected:
@RestController
public class ControllerVersionThree {

    private final ApplicationContext context;
    private PatternProperties designPatternMap;

    public ControllerVersionThree(ApplicationContext context,
                                  PatternProperties designPatternMap) {
        this.context = context;
        this.designPatternMap = designPatternMap;
    }

    @PostMapping(path = "/save")
    public String save(@RequestBody MyRequest myRequest) {
        String whereToSave = myRequest.getWhereToSave();
        MyPojo myPojo = new MyPojo(UUID.randomUUID().toString(), myRequest.getValue());
        String repoName = designPatternMap.getRepoName(whereToSave);
        MyInterface<MyPojo> repo = context.getBean(repoName, MyInterface.class);
        return repo.save(myPojo).toString();
    }
}
You can check that this is working with a test:
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.test.web.client.TestRestTemplate;
import org.springframework.boot.test.web.server.LocalServerPort;
import org.springframework.http.HttpEntity;

import static org.junit.jupiter.api.Assertions.assertEquals;

@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
class ControllerVersionThreeTest {

    @LocalServerPort
    private int port;

    @Autowired
    private TestRestTemplate restTemplate;

    @Test
    void testSaveByRedis() {
        // Given: here 'redis' is the name of the spring bean
        HttpEntity<MyRequest> request = new HttpEntity<>(new MyRequest("redis", "aValue"));
        // When
        String response = restTemplate.postForObject("http://localhost:" + port + "/save", request, String.class);
        // Then
        assertEquals("MyPojo{value='aValue saved by redis'}", response);
    }

    @Test
    void testSaveByRedisAlias() {
        // Given: here 'saveToRedisPlease' is an alias for the spring bean
        HttpEntity<MyRequest> request = new HttpEntity<>(new MyRequest("saveToRedisPlease", "aValue"));
        // When
        String response = restTemplate.postForObject("http://localhost:" + port + "/save", request, String.class);
        // Then
        assertEquals("MyPojo{value='aValue saved by redis'}", response);
    }
}
Have you tried creating a configuration class to build your repository map?
@Configuration
public class MyConfiguration {

    @Bean
    public Map<String, BaseRepository> repositoryMap() {
        // assumes concrete classes implementing a shared interface such as
        // the BaseRepository from the first answer
        Map<String, BaseRepository> repositoryMap = new HashMap<>();
        repositoryMap.put("redis", new RedisRepository());
        repositoryMap.put("mongo", new MongoRepository());
        repositoryMap.put("elastic", new ElasticRepository());
        return Collections.unmodifiableMap(repositoryMap);
    }
}
Then you could have the following in your REST controller:

@RestController
public class ControllerVersionFour {

    @Autowired
    private Map<String, BaseRepository> repositoryMap;

    @PostMapping(path = "/save/{dbname}")
    public String save(@RequestBody MyRequest myRequest, @PathVariable("dbname") String dbname) {
        MyPojo myPojo = new MyPojo(UUID.randomUUID().toString(), myRequest.getValue());
        return repositoryMap.get(dbname).save(myPojo).toString();
    }
}
It might be better to have the db name as a path or query parameter instead of having it in the request body. That way, depending on your use case, you might be able to save the request body directly instead of creating another pojo.
This post may also be useful for autowiring a map
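As a side note, Spring can also inject a map of all beans of a given type, keyed by bean name, without building the map by hand. A minimal sketch, assuming the repositories all implement the BaseRepository interface from the first answer (ControllerVersionFive is a hypothetical name):

@RestController
public class ControllerVersionFive {

    // Spring fills this map with every BaseRepository bean, keyed by bean name ("repoA", "repoB", ...)
    private final Map<String, BaseRepository> repositories;

    public ControllerVersionFive(Map<String, BaseRepository> repositories) {
        this.repositories = repositories;
    }

    @PostMapping(path = "/save/{dbname}")
    public String save(@RequestBody MyRequest myRequest, @PathVariable("dbname") String dbname) {
        MyPojo myPojo = new MyPojo(UUID.randomUUID().toString(), myRequest.getValue());
        BaseRepository repo = repositories.get(dbname);
        return repo == null ? "unknown destination" : repo.save(myPojo).toString();
    }
}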

How to use the data stored in cache for another function in springboot

I have created a sample Spring Boot application which has a class CacheManagement with a function getAllValue that returns a Customer object whose values I need to cache.
This class also has a function checkCache which uses the data from the cache. How can I implement this? Somebody please help me with this. I am providing the class I have written. PFB the class:
@Service
public class CacheManagement {

    @Cacheable(value = "service", key = "list")
    public Customer getAllValue() {
        Customer customer = new Customer();
        System.out.println("Inside cache");
        customer.setId("1");
        customer.setName("Shilpa");
        return customer;
    }

    public String checkCache() {
        Customer cus = getAllValue();
        System.out.println(cus.toString());
        return "1";
    }
}
PFB the main class:
@SpringBootApplication
public class DemoApplication implements CommandLineRunner {

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }

    @Override
    public void run(String... arg0) {
        CacheManagement cacheService = new CacheManagement();
        cacheService.getAllValue();
        cacheService.checkCache();
    }
}
According to my need, the sysout statement should only print once, right? But my issue is that it is printing twice. This means the getAllValue() call from the checkCache() method is invoking the function itself, not fetching the data from the cache.
Please help me to resolve this issue.
There are multiple issues with your code.
CacheManagement cacheService = new CacheManagement(); creates an instance which is not a Spring-container-managed bean. The Spring magic will not work here. You should get a Spring-container-managed bean of CacheManagement for all the Spring framework support to kick in.
@SpringBootApplication
@EnableCaching
public class DemoApplication implements CommandLineRunner {

    @Autowired
    CacheManagement cacheService;

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }

    @Override
    public void run(String... arg0) {
        cacheService.getAllValue();
        cacheService.checkCache();
    }
}
2. This assumes you have enabled caching using @EnableCaching (refer here):
@SpringBootApplication
@EnableCaching
public class DemoApplication implements CommandLineRunner {
    ...
}
Spring caching works with the help of Spring AOP, and Spring AOP works on proxies. Calling getAllValue() from within a method of the same class (here checkCache()) is called a self-invocation. Spring AOP will not be able to advise the method call in this case, as it does not go through the proxy, and hence the caching logic fails to work.
To understand this concept in detail, refer to the official Spring reference documentation: Understanding AOP Proxies. Read through the section starting with "The key thing to understand here is that the client code inside the main(..)".
The resolution to your issue is to make sure the getAllValue() call goes through the proxy so that the cached result is returned. Since you need the method as part of the same bean, a self reference will do it.
You must read through the official documentation on the same topic, though, to understand the implications. See the section starting with "As of 4.3, @Autowired also considers self references for injection ...".
The following code should fix the issue for you:
@Service
public class CacheManagement {

    @Autowired
    CacheManagement self;

    @Cacheable(value = "service", key = "list")
    public Customer getAllValue() {
        Customer customer = new Customer();
        System.out.println("Inside cache");
        customer.setId("1");
        customer.setName("Shilpa");
        return customer;
    }

    public String checkCache() {
        Customer cus = self.getAllValue();
        System.out.println(cus.toString());
        return "1";
    }
}
Read more on caching here
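One caveat (my addition, not part of the original answer): on some Spring versions a plain self reference like this can trip circular-dependency checks. Marking the self reference @Lazy is a common workaround, assuming the rest of the class stays as above:

@Service
public class CacheManagement {

    @Lazy      // resolve the self reference lazily, through the caching proxy
    @Autowired
    CacheManagement self;

    // ... same methods as above, with checkCache() calling self.getAllValue()
}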

SpringBoot direct MongoRepository to specific MongoTemplate

I have an app with multiple mongo configurations. This is achieved through some @Configuration classes, like so:
public abstract class AbstractMongoConfig {

    private String database;
    private String uri;

    public void setUri(String uri) {
        this.uri = uri;
    }

    public void setDatabase(String database) {
        this.database = database;
    }

    public MongoDbFactory mongoDbFactory() throws Exception {
        return new SimpleMongoDbFactory(new MongoClient(new MongoClientURI(this.uri)), this.database);
    }

    abstract public MongoTemplate getMongoTemplate() throws Exception;
}
Config 1 -- app
@Configuration
@ConfigurationProperties(prefix="app.mongodb")
public class AppMongoConfig extends AbstractMongoConfig {

    @Primary
    @Override
    @Bean(name="appMongoTemplate")
    public MongoTemplate getMongoTemplate() throws Exception {
        return new MongoTemplate(mongoDbFactory());
    }
}
Config 2 -- test
@Configuration
@ConfigurationProperties(prefix="test.mongodb")
public class TestMongoConfig extends AbstractMongoConfig {

    @Override
    @Bean(name="testMongoTemplate")
    public MongoTemplate getMongoTemplate() throws Exception {
        return new MongoTemplate(mongoDbFactory());
    }
}
Then in my properties
test.mongodb.uri=mongodb://127.0.0.1/test
test.mongodb.database=test
app.mongodb.uri=mongodb://127.0.0.1/app
app.mongodb.database=app
So, two mongo configs wired up to an instance running locally but with different databases. I have tried it with different addresses as well, but it behaves the same.
Anyway, this then gets used via an entity and a MongoRepository.
@Document(collection="collname")
public class TestObj {

    @Id
    private String id;

    private String username;
    private int age;

    // getters & setters
}
Repo
@Repository
public interface TestObjRepository extends MongoRepository<TestObj, String> {
    public TestObj findByUsername(String username);
}
However, when I use this in some class somewhere:

@Service
public class ThingDoer {

    @Autowired
    TestObjRepository toRepo;

    public void doStuff() {
        TestObj to = new TestObj("name", 123);
        toRepo.save(to);
    }
}
This object gets written into the TestMongoConfig database, not the AppMongoConfig one as I would expect, since that's the one annotated with @Primary. Further, if I add the @EnableMongoRepositories annotation on the ThingDoer like:
@EnableMongoRepositories(basePackages={"com.whatever.package"}, mongoTemplateRef="appMongoTemplate")
it still doesn't work. It still writes to the db referenced by "test".
If I @Autowired the MongoTemplate in directly and use that, it works as I expect: things go to the "app" database. How can I tell the TestObjRepository which database it should be writing to and reading from?
So, if anyone else still has this problem, the solution is this:
@EnableMongoRepositories(basePackages={"com.whatever.package"}, mongoTemplateRef="appMongoTemplate")
You have to put it on your custom mongo configuration class,
where basePackages is the package path to your repo. You have to have one package for each mongo database, so it looks up the intended repository and model reference.
And you also have to disable Spring's mongo auto-configuration when using multiple DBs:
spring.autoconfigure.exclude=org.springframework.boot.autoconfigure.mongo.MongoAutoConfiguration
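For example, applied to the AppMongoConfig from the question, the whole configuration class would look something like this (a sketch combining the question's code with the annotation above):

@Configuration
@ConfigurationProperties(prefix="app.mongodb")
@EnableMongoRepositories(basePackages={"com.whatever.package"}, mongoTemplateRef="appMongoTemplate")
public class AppMongoConfig extends AbstractMongoConfig {

    @Primary
    @Override
    @Bean(name="appMongoTemplate")
    public MongoTemplate getMongoTemplate() throws Exception {
        return new MongoTemplate(mongoDbFactory());
    }
}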
This is a great tutorial:
https://dzone.com/articles/multiple-mongodb-connectors-with-spring-boot

Loading Beans based on hostname

I am writing services in Spring Boot that get their configurations from Spring Cloud. These services are multi-tenant, and the tenant is based on the host name.
What I have now is:
public class MyController {

    @Autowired
    public MyController(MyServiceFactory factory) {
        ...
    }

    @RequestMapping("some/path/{id}")
    ResponseEntity<SomeEntity> getSomeEntity(@RequestHeader header, @PathVariable id) {
        return factory.getMyService(header).handle(id);
    }
}
where MyServiceFactory looks something like...
public class MyServiceFactory {

    private final HashMap<String, MyService> serviceRegistry = new HashMap<>();

    public MyService getMyService(String key) {
        return serviceRegistry.get(key);
    }

    MyServiceFactory withService(String key, MyService service) {
        this.serviceRegistry.put(key, service);
        return this;
    }
}
then in a configuration file
@Configuration
public class ServiceFactoryConfiguration {

    @Bean
    public MyServiceFactory getMyServiceFactory() {
        return new MyServiceFactory()
                .withService("client1", new MyService1())
                .withService("client2", new MyService2());
    }
}
While what I have now works, I don't like that I need to create a factory for every dependency my controller may have. I'd like to have my code look something like this...
public class MyController {

    @Autowired
    public MyController(MyService service) {
        ...
    }

    @RequestMapping("some/path/{id}")
    ResponseEntity<SomeEntity> getSomeEntity(@PathVariable id) {
        return service.handle(id);
    }
}
with a configuration file like
@Configuration
public class MyServiceConfiguration {

    @Bean
    @Qualifier("Client1")
    public MyService getMyService1() {
        return new MyService1();
    }

    @Bean
    @Qualifier("Client2")
    public MyService getMyService2() {
        return new MyService2();
    }
}
I can get the code that I want to write if I use a profile at application startup. But I want to have lots of different DNS records pointing to the same (pool of) instance(s) and have an instance be able to handle requests for different clients. I want to be able to swap out profiles on a per-request basis.
Is this possible to do?
Spring profiles would not help here; you would need one application context per client, and that seems to be not what you want.
Instead, you could use scoped beans.
Create your client-dependent beans with scope 'client':
@Bean
@Scope(value="client", proxyMode = ScopedProxyMode.INTERFACES)
@Primary
MyService myService() {
    //it does not really matter which instance you create here,
    //the scope will create the real instance;
    //maybe you can even return null, I did not try that
    return new MyServiceDummy();
}
There will be at least 3 beans of type MyService: the scoped one, and one for each client. The annotation @Primary tells Spring to always use the scoped bean for injection.
Create a scope :
public class ClientScope implements Scope {

    @Autowired
    BeanFactory beanFactory;

    //cache of the beans already created for each client
    private final Map<String, Object> cache = new HashMap<>();

    @Override
    public Object get(String name, ObjectFactory<?> objectFactory) {
        //we do not use the objectFactory here, instead the beanFactory
        //you somehow have to know which client is the current one:
        //from the config, the current request, the session, or a ThreadLocal..
        String client = findCurrentClient();
        //client now is something like 'Client1'
        //easiest way is a naming convention: the bean for the current
        //client is registered under the name '<client>.<name>'
        String clientBeanName = client + '.' + name;
        //if the cache contains an instance for that name, return it;
        //if not, create a new instance of the bean and cache it
        return cache.computeIfAbsent(clientBeanName, beanFactory::getBean);
    }

    //the remaining Scope methods (remove, registerDestructionCallback, ...) are omitted here
}
And your client specific beans are configured like this :
@Bean("Client1.myService")
public MyService getMyService1() {
    return new MyService1();
}

@Bean("Client2.myService")
public MyService getMyService2() {
    return new MyService2();
}
I did not test it, but I have used this approach in my projects. It should work.
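The custom scope also has to be registered with the container, typically via a CustomScopeConfigurer. A sketch of the registration (my addition; it assumes ClientScope takes the BeanFactory through a constructor instead of the @Autowired field, since @Autowired does not run for an instance created with new):

@Configuration
public class ScopeConfig {

    @Bean
    public static CustomScopeConfigurer clientScopeConfigurer(BeanFactory beanFactory) {
        CustomScopeConfigurer configurer = new CustomScopeConfigurer();
        //the scope name must match @Scope(value = "client", ...) above
        configurer.addScope("client", new ClientScope(beanFactory));
        return configurer;
    }
}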
tutorial spring custom scope

Multiple keyspace support for spring-data-cassandra repositories?

Does Spring Data Cassandra support multiple keyspace repositories in the same application context? I am setting up the Cassandra Spring Data configuration using the following JavaConfig class:
@Configuration
@EnableCassandraRepositories(basePackages = "com.blah.repository")
public class CassandraConfig extends AbstractCassandraConfiguration {

    @Override
    public String getKeyspaceName() {
        return "keyspace1";
    }
}
I tried creating a second configuration class after moving the repository classes to a different package.
@Configuration
@EnableCassandraRepositories(basePackages = "com.blah.secondrepository")
public class SecondCassandraConfig extends AbstractCassandraConfiguration {

    @Override
    public String getKeyspaceName() {
        return "keyspace2";
    }
}
However, in that case the first set of repositories fails, as the configured column family for the entities is not found in the keyspace. I think it is probably looking for the column family in the second keyspace.
Does spring-data-cassandra support multiple keyspace repositories? The only place where I found a reference to multiple keyspaces was here, but it does not explain whether this can be done with repositories.
Working app sample:
http://valchkou.com/spring-boot-cassandra.html#multikeyspace
The idea is that you need to override the default beans: the session factory and the template.
Sample:
1) application.yml
spring:
  data:
    cassandra:
      test1:
        keyspace-name: test1_keyspace
        contact-points: localhost
      test2:
        keyspace-name: test2_keyspace
        contact-points: localhost
2) base config class
public abstract class CassandraBaseConfig extends AbstractCassandraConfiguration {

    protected String contactPoints;
    protected String keyspaceName;

    public String getContactPoints() {
        return contactPoints;
    }

    public void setContactPoints(String contactPoints) {
        this.contactPoints = contactPoints;
    }

    public void setKeyspaceName(String keyspaceName) {
        this.keyspaceName = keyspaceName;
    }

    @Override
    protected String getKeyspaceName() {
        return keyspaceName;
    }
}
3) Config implementation for test1
package com.sample.repo.test1;

@Configuration
@ConfigurationProperties("spring.data.cassandra.test1")
@EnableCassandraRepositories(
    basePackages = "com.sample.repo.test1",
    cassandraTemplateRef = "test1Template"
)
public class Test1Config extends CassandraBaseConfig {

    @Override
    @Primary
    @Bean(name = "test1Template")
    public CassandraAdminOperations cassandraTemplate() throws Exception {
        return new CassandraAdminTemplate(session().getObject(), cassandraConverter());
    }

    @Override
    @Bean(name = "test1Session")
    public CassandraSessionFactoryBean session() throws Exception {
        CassandraSessionFactoryBean session = new CassandraSessionFactoryBean();
        session.setCluster(cluster().getObject());
        session.setConverter(cassandraConverter());
        session.setKeyspaceName(getKeyspaceName());
        session.setSchemaAction(getSchemaAction());
        session.setStartupScripts(getStartupScripts());
        session.setShutdownScripts(getShutdownScripts());
        return session;
    }
}
4) same for test2, just use a different package
package com.sample.repo.test2;
5) place the repo for each keyspace in a dedicated package, i.e.:
package com.sample.repo.test1;

@Repository
public interface RepositoryForTest1 extends CassandraRepository<MyEntity> {
    // ....
}

package com.sample.repo.test2;

@Repository
public interface RepositoryForTest2 extends CassandraRepository<MyEntity> {
    // ....
}
Try explicitly naming your CassandraTemplate beans for each keyspace and using those names in the @EnableCassandraRepositories annotation's cassandraTemplateRef attribute (see the lines marked /* CHANGED */ below).
In your first configuration:
@Configuration
@EnableCassandraRepositories(basePackages = "com.blah.repository",
        /* CHANGED */ cassandraTemplateRef = "template1")
public class CassandraConfig extends AbstractCassandraConfiguration {

    @Override
    public String getKeyspaceName() {
        return "keyspace1";
    }

    /* CHANGED */
    @Override
    @Bean(name = "template1")
    public CassandraAdminOperations cassandraTemplate() throws Exception {
        return new CassandraAdminTemplate(session().getObject(), cassandraConverter());
    }
}
...and in your second configuration:
@Configuration
@EnableCassandraRepositories(basePackages = "com.blah.secondrepository",
        /* CHANGED */ cassandraTemplateRef = "template2")
public class SecondCassandraConfig extends AbstractCassandraConfiguration {

    @Override
    public String getKeyspaceName() {
        return "keyspace2";
    }

    /* CHANGED */
    @Override
    @Bean(name = "template2")
    public CassandraAdminOperations cassandraTemplate() throws Exception {
        return new CassandraAdminTemplate(session().getObject(), cassandraConverter());
    }
}
I think that might do the trick. Please post back if it doesn't.
It seems that it is recommended to use fully qualified keyspace names in queries managed by one session, as the session is not very lightweight.
Please see the reference here.
I tried this approach. However, I ran into exceptions while trying to access the second column family; operations on the first column family seem to be fine.
I am guessing it is because the underlying CassandraSessionFactoryBean bean is a singleton, and this causes
unconfigured columnfamily columnfamily2
Here are some more logs to provide context:
DEBUG org.springframework.beans.factory.support.DefaultListableBeanFactory - Returning cached instance of singleton bean 'entityManagerFactory'
DEBUG org.springframework.beans.factory.support.DefaultListableBeanFactory - Returning cached instance of singleton bean 'session'
DEBUG org.springframework.beans.factory.support.DefaultListableBeanFactory - Returning cached instance of singleton bean 'cluster'
org.springframework.cassandra.support.exception.CassandraInvalidQueryException: unconfigured columnfamily shardgroup; nested exception is com.datastax.driver.core.exceptions.InvalidQueryException: unconfigured columnfamily columnfamily2
at org.springframework.cassandra.support.CassandraExceptionTranslator.translateExceptionIfPossible(CassandraExceptionTranslator.java:116)
at org.springframework.cassandra.config.CassandraCqlSessionFactoryBean.translateExceptionIfPossible(CassandraCqlSessionFactoryBean.java:74)
Hmm. I can't comment on the answer by matthew-adams, but that will reuse the session object, as AbstractCassandraConfiguration is annotated with @Bean on all the relevant getters.
In a similar setup I initially had it working by overriding all the getters and specifically giving them different bean names. But due to Spring still claiming to need beans with the original names, I have now had to make a copy of AbstractCassandraConfiguration with no annotations that I can inherit.
Make sure to expose the CassandraTemplate so you can refer to it from @EnableCassandraRepositories if you use those.
I also have a separate implementation of AbstractClusterConfiguration to expose a common CassandraCqlClusterFactoryBean, so the underlying connections are reused.
Edit:
I guess according to the email thread linked by bclarance, one should really attempt to reuse the Session object. It seems Spring Data Cassandra isn't really set up for that, though.
In my case, I had a Spring Boot app where the majority of repositories were in one keyspace, and just two were in a second. I kept the default Spring Boot configuration for the first keyspace, and manually configured the second keyspace using the same configuration approach Spring Boot uses for its autoconfiguration.
@Repository
@NoRepositoryBean // This uses a different keyspace than the default, so not auto-creating
public interface SecondKeyspaceTableARepository
        extends MapIdCassandraRepository<SecondKeyspaceTableA> {
}

@Repository
@NoRepositoryBean // This uses a different keyspace than the default, so not auto-creating
public interface SecondKeyspaceTableBRepository
        extends MapIdCassandraRepository<SecondKeyspaceTableB> {
}

@Configuration
public class SecondKeyspaceCassandraConfig {

    public static final String KEYSPACE_NAME = "second_keyspace";

    /**
     * @see org.springframework.boot.autoconfigure.data.cassandra.CassandraDataAutoConfiguration#cassandraSession(CassandraConverter)
     */
    @Bean(autowireCandidate = false)
    public CassandraSessionFactoryBean secondKeyspaceCassandraSession(
            Cluster cluster, Environment environment, CassandraConverter converter) {
        CassandraSessionFactoryBean session = new CassandraSessionFactoryBean();
        session.setCluster(cluster);
        session.setConverter(converter);
        session.setKeyspaceName(KEYSPACE_NAME);
        Binder binder = Binder.get(environment);
        binder.bind("spring.data.cassandra.schema-action", SchemaAction.class)
                .ifBound(session::setSchemaAction);
        return session;
    }

    /**
     * @see org.springframework.boot.autoconfigure.data.cassandra.CassandraDataAutoConfiguration#cassandraTemplate(com.datastax.driver.core.Session, CassandraConverter)
     */
    @Bean(autowireCandidate = false)
    public CassandraTemplate secondKeyspaceCassandraTemplate(
            Cluster cluster, Environment environment, CassandraConverter converter) {
        return new CassandraTemplate(secondKeyspaceCassandraSession(cluster, environment, converter)
                .getObject(), converter);
    }

    @Bean
    public SecondKeyspaceTableARepository secondKeyspaceTableARepository(
            Cluster cluster, Environment environment, CassandraConverter converter) {
        return createRepository(SecondKeyspaceTableARepository.class,
                secondKeyspaceCassandraTemplate(cluster, environment, converter));
    }

    @Bean
    public SecondKeyspaceTableBRepository secondKeyspaceTableBRepository(
            Cluster cluster, Environment environment, CassandraConverter converter) {
        return createRepository(SecondKeyspaceTableBRepository.class,
                secondKeyspaceCassandraTemplate(cluster, environment, converter));
    }

    private <T> T createRepository(Class<T> repositoryInterface, CassandraTemplate operations) {
        return new CassandraRepositoryFactory(operations).getRepository(repositoryInterface);
    }
}
