Multiple keyspace support for spring-data-cassandra repositories? - java

Does Spring Data Cassandra support multiple keyspace repositories in the same application context? I am setting up the Cassandra Spring Data configuration using the following JavaConfig class:
@Configuration
@EnableCassandraRepositories(basePackages = "com.blah.repository")
public class CassandraConfig extends AbstractCassandraConfiguration {
    @Override
    public String getKeyspaceName() {
        return "keyspace1";
    }
}
I tried creating a second configuration class after moving the repository classes to a different package.
@Configuration
@EnableCassandraRepositories(basePackages = "com.blah.secondrepository")
public class SecondCassandraConfig extends AbstractCassandraConfiguration {
    @Override
    public String getKeyspaceName() {
        return "keyspace2";
    }
}
However, in that case the first set of repositories fails, as the configured column family for the entities is not found in the keyspace. I think it is probably looking for the column family in the second keyspace.
Does spring-data-cassandra support multiple keyspace repositories? The only place where I found a reference to multiple keyspaces was here, but it does not explain whether this can be done with repositories.

Working APP Sample:
http://valchkou.com/spring-boot-cassandra.html#multikeyspace
The idea is that you need to override the default beans: the session factory and the template.
Sample:
1) application.yml
spring:
  data:
    cassandra:
      test1:
        keyspace-name: test1_keyspace
        contact-points: localhost
      test2:
        keyspace-name: test2_keyspace
        contact-points: localhost
2) base config class
public abstract class CassandraBaseConfig extends AbstractCassandraConfiguration {

    protected String contactPoints;
    protected String keyspaceName;

    public String getContactPoints() {
        return contactPoints;
    }

    public void setContactPoints(String contactPoints) {
        this.contactPoints = contactPoints;
    }

    public void setKeyspaceName(String keyspaceName) {
        this.keyspaceName = keyspaceName;
    }

    @Override
    protected String getKeyspaceName() {
        return keyspaceName;
    }
}
3) Config implementation for test1
package com.sample.repo.test1;

@Configuration
@ConfigurationProperties("spring.data.cassandra.test1")
@EnableCassandraRepositories(
        basePackages = "com.sample.repo.test1",
        cassandraTemplateRef = "test1Template"
)
public class Test1Config extends CassandraBaseConfig {

    @Override
    @Primary
    @Bean(name = "test1Template")
    public CassandraAdminOperations cassandraTemplate() throws Exception {
        return new CassandraAdminTemplate(session().getObject(), cassandraConverter());
    }

    @Override
    @Bean(name = "test1Session")
    public CassandraSessionFactoryBean session() throws Exception {
        CassandraSessionFactoryBean session = new CassandraSessionFactoryBean();
        session.setCluster(cluster().getObject());
        session.setConverter(cassandraConverter());
        session.setKeyspaceName(getKeyspaceName());
        session.setSchemaAction(getSchemaAction());
        session.setStartupScripts(getStartupScripts());
        session.setShutdownScripts(getShutdownScripts());
        return session;
    }
}
4) same for test2, just use a different package (a mirrored sketch follows below)
package com.sample.repo.test2;
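For completeness, the mirrored configuration for test2 would be a sketch like this, simply swapping the names from Test1Config (note that only one of the two templates should carry @Primary):
@Configuration
@ConfigurationProperties("spring.data.cassandra.test2")
@EnableCassandraRepositories(
        basePackages = "com.sample.repo.test2",
        cassandraTemplateRef = "test2Template"
)
public class Test2Config extends CassandraBaseConfig {

    @Override
    @Bean(name = "test2Template")
    public CassandraAdminOperations cassandraTemplate() throws Exception {
        return new CassandraAdminTemplate(session().getObject(), cassandraConverter());
    }

    @Override
    @Bean(name = "test2Session")
    public CassandraSessionFactoryBean session() throws Exception {
        CassandraSessionFactoryBean session = new CassandraSessionFactoryBean();
        session.setCluster(cluster().getObject());
        session.setConverter(cassandraConverter());
        session.setKeyspaceName(getKeyspaceName());
        session.setSchemaAction(getSchemaAction());
        session.setStartupScripts(getStartupScripts());
        session.setShutdownScripts(getShutdownScripts());
        return session;
    }
}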
5) place the repositories for each keyspace in a dedicated package,
i.e.
package com.sample.repo.test1;

@Repository
public interface RepositoryForTest1 extends CassandraRepository<MyEntity> {
    // ....
}

package com.sample.repo.test2;

@Repository
public interface RepositoryForTest2 extends CassandraRepository<MyEntity> {
    // ....
}

Try explicitly naming your CassandraTemplate beans for each keyspace and using those names in the @EnableCassandraRepositories annotation's cassandraTemplateRef attribute (see lines with /* CHANGED */ for changes).
In your first configuration:
@Configuration
@EnableCassandraRepositories(basePackages = "com.blah.repository",
    /* CHANGED */ cassandraTemplateRef = "template1")
public class CassandraConfig extends AbstractCassandraConfiguration {

    @Override
    public String getKeyspaceName() {
        return "keyspace1";
    }

    /* CHANGED */
    @Override
    @Bean(name = "template1")
    public CassandraAdminOperations cassandraTemplate() throws Exception {
        return new CassandraAdminTemplate(session().getObject(), cassandraConverter());
    }
}
...and in your second configuration:
@Configuration
@EnableCassandraRepositories(basePackages = "com.blah.secondrepository",
    /* CHANGED */ cassandraTemplateRef = "template2")
public class SecondCassandraConfig extends AbstractCassandraConfiguration {

    @Override
    public String getKeyspaceName() {
        return "keyspace2";
    }

    /* CHANGED */
    @Override
    @Bean(name = "template2")
    public CassandraAdminOperations cassandraTemplate() throws Exception {
        return new CassandraAdminTemplate(session().getObject(), cassandraConverter());
    }
}
I think that might do the trick. Please post back if it doesn't.

It seems the recommendation is to use fully qualified keyspace names in queries, all managed by one session, as the Session object is not lightweight.
Please see reference here
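For illustration, a repository query with a fully qualified keyspace might look like this (the entity, table, and column names here are hypothetical):
import org.springframework.data.cassandra.repository.Query;
import org.springframework.data.repository.CrudRepository;

// Hypothetical repository: the keyspace is qualified directly in the CQL,
// so one shared Session can reach tables in other keyspaces.
public interface SecondKeyspaceRepository extends CrudRepository<MyEntity, String> {

    @Query("SELECT * FROM keyspace2.columnfamily2 WHERE id = ?0")
    MyEntity findInSecondKeyspace(String id);
}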

I tried this approach. However, I ran into exceptions while trying to access the second column family; operations on the first column family seem to be fine.
I am guessing this is because the underlying CassandraSessionFactoryBean bean is a singleton, and this causes
unconfigured columnfamily columnfamily2
Here are some more logs to provide context
DEBUG org.springframework.beans.factory.support.DefaultListableBeanFactory - Returning cached instance of singleton bean 'entityManagerFactory'
DEBUG org.springframework.beans.factory.support.DefaultListableBeanFactory - Returning cached instance of singleton bean 'session'
DEBUG org.springframework.beans.factory.support.DefaultListableBeanFactory - Returning cached instance of singleton bean 'cluster'
org.springframework.cassandra.support.exception.CassandraInvalidQueryException: unconfigured columnfamily shardgroup; nested exception is com.datastax.driver.core.exceptions.InvalidQueryException: unconfigured columnfamily columnfamily2
at org.springframework.cassandra.support.CassandraExceptionTranslator.translateExceptionIfPossible(CassandraExceptionTranslator.java:116)
at org.springframework.cassandra.config.CassandraCqlSessionFactoryBean.translateExceptionIfPossible(CassandraCqlSessionFactoryBean.java:74)

Hmm. I can't comment on the answer by matthew-adams, but that will reuse the session object, as AbstractCassandraConfiguration is annotated with @Bean on all the relevant getters.
In a similar setup I initially had it working by overriding all the getters and specifically giving them different bean names. But because Spring still claimed to need beans with the original names, I have now had to make a copy of AbstractCassandraConfiguration with no annotations that I can inherit.
Make sure to expose the CassandraTemplate so you can refer to it from @EnableCassandraRepositories if you use those.
I also have a separate implementation of AbstractClusterConfiguration to expose a common CassandraCqlClusterFactoryBean so the underlying connections are being reused.
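A minimal sketch of such a shared cluster configuration, assuming the spring-data-cassandra 1.x class names used elsewhere in this thread:
@Configuration
public class SharedClusterConfig extends AbstractClusterConfiguration {

    // AbstractClusterConfiguration already exposes a cluster() @Bean of type
    // CassandraCqlClusterFactoryBean; the keyspace-specific configurations can
    // inject that single bean so the underlying connections are shared.
    @Override
    protected String getContactPoints() {
        return "localhost"; // assumption: a local node
    }
}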
Edit:
I guess according to the email thread linked by bclarance one should really attempt to reuse the Session object. It seems Spring Data Cassandra isn't really set up for that, though.

In my case, I had a Spring Boot app where the majority of repositories were in one keyspace and just two were in a second. I kept the default Spring Boot configuration for the first keyspace, and manually configured the second keyspace using the same configuration approach Spring Boot uses for its autoconfiguration.
@Repository
@NoRepositoryBean // This uses a different keyspace than the default, so not auto-creating
public interface SecondKeyspaceTableARepository
        extends MapIdCassandraRepository<SecondKeyspaceTableA> {
}

@Repository
@NoRepositoryBean // This uses a different keyspace than the default, so not auto-creating
public interface SecondKeyspaceTableBRepository
        extends MapIdCassandraRepository<SecondKeyspaceTableB> {
}
@Configuration
public class SecondKeyspaceCassandraConfig {

    public static final String KEYSPACE_NAME = "second_keyspace";

    /**
     * @see org.springframework.boot.autoconfigure.data.cassandra.CassandraDataAutoConfiguration#cassandraSession(CassandraConverter)
     */
    @Bean(autowireCandidate = false)
    public CassandraSessionFactoryBean secondKeyspaceCassandraSession(
            Cluster cluster, Environment environment, CassandraConverter converter) {
        CassandraSessionFactoryBean session = new CassandraSessionFactoryBean();
        session.setCluster(cluster);
        session.setConverter(converter);
        session.setKeyspaceName(KEYSPACE_NAME);
        Binder binder = Binder.get(environment);
        binder.bind("spring.data.cassandra.schema-action", SchemaAction.class)
                .ifBound(session::setSchemaAction);
        return session;
    }

    /**
     * @see org.springframework.boot.autoconfigure.data.cassandra.CassandraDataAutoConfiguration#cassandraTemplate(com.datastax.driver.core.Session, CassandraConverter)
     */
    @Bean(autowireCandidate = false)
    public CassandraTemplate secondKeyspaceCassandraTemplate(
            Cluster cluster, Environment environment, CassandraConverter converter) {
        return new CassandraTemplate(secondKeyspaceCassandraSession(cluster, environment, converter)
                .getObject(), converter);
    }

    @Bean
    public SecondKeyspaceTableARepository secondKeyspaceTableARepository(
            Cluster cluster, Environment environment, CassandraConverter converter) {
        return createRepository(SecondKeyspaceTableARepository.class,
                secondKeyspaceCassandraTemplate(cluster, environment, converter));
    }

    @Bean
    public SecondKeyspaceTableBRepository secondKeyspaceTableBRepository(
            Cluster cluster, Environment environment, CassandraConverter converter) {
        return createRepository(SecondKeyspaceTableBRepository.class,
                secondKeyspaceCassandraTemplate(cluster, environment, converter));
    }

    private <T> T createRepository(Class<T> repositoryInterface, CassandraTemplate operations) {
        return new CassandraRepositoryFactory(operations).getRepository(repositoryInterface);
    }
}

Related

SimpUserRegistry does not contain any session objects

I am new to WebSockets. I have been trying to use SimpUserRegistry to find a session object by Principal. I wrote a custom handshake handler to convert anonymous users to authenticated users, and I am able to access the Principal name from the WebSocket session object.
The code for custom handshake handler is shown below
import java.security.Principal;

public class StompPrincipal implements Principal {

    private String name;

    public StompPrincipal(String name) {
        this.name = name;
    }

    @Override
    public String getName() {
        return name;
    }
}
Handler
class CustomHandshakeHandlerTwo extends DefaultHandshakeHandler {
    // Uses the custom class above for storing the principal
    @Override
    protected Principal determineUser(
            ServerHttpRequest request,
            WebSocketHandler wsHandler,
            Map<String, Object> attributes
    ) {
        // Generate principal with UUID as name
        return new StompPrincipal(UUID.randomUUID().toString());
    }
}
But as specified in many questions like this one, I am not able to inject the SimpUserRegistry directly.
It throws this error:
Field simpUserRegistry required a bean of type 'org.springframework.messaging.simp.user.SimpUserRegistry' that could not be found.
The injection point has the following annotations:
- #org.springframework.beans.factory.annotation.Autowired(required=true)
Action:
Consider defining a bean of type 'org.springframework.messaging.simp.user.SimpUserRegistry' in your configuration.
So I created a configuration class as shown below.
@Configuration
public class UsersConfig {

    private final SimpUserRegistry userRegistry = new DefaultSimpUserRegistry();

    @Bean
    @Primary
    public SimpUserRegistry userRegistry() {
        return userRegistry;
    }
}
Now I can autowire and use it, but every time I try to access the SimpUserRegistry it is empty.
What could be the cause of this problem?
EDIT:
Showing websocket config
@Configuration
@EnableWebSocket
@Controller
@Slf4j
public class WebSocketConfig implements WebSocketConfigurer {

    @Autowired
    EventTextHandler2 handler;

    @Override
    public void registerWebSocketHandlers(WebSocketHandlerRegistry registry) {
        log.info("Registering websocket handler SocketTextHandler");
        registry.addHandler(handler, "/event").setHandshakeHandler(new CustomHandshakeHandlerTwo());
    }
}
SimpUserRegistry is an "infrastructure bean" registered and provided by Spring WebSocket; you should not instantiate it directly.
Is your Spring WebSocket configuration correct?
Make sure your application is well configured (i.e. your configuration class is being scanned).
SimpUserRegistry is provided by the spring-messaging dependency: make sure your configuration class is annotated with @EnableWebSocketMessageBroker.
Official documentation: https://docs.spring.io/spring-framework/docs/5.3.6/reference/html/web.html#websocket-stomp-enable
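For reference, a minimal configuration of that kind might look like this (the endpoint path and destination prefixes are illustrative; the handshake handler is the one from the question). With @EnableWebSocketMessageBroker in place, Spring registers a DefaultSimpUserRegistry for you:
@Configuration
@EnableWebSocketMessageBroker
public class WebSocketBrokerConfig implements WebSocketMessageBrokerConfigurer {

    @Override
    public void registerStompEndpoints(StompEndpointRegistry registry) {
        // STOMP endpoint; the custom handshake handler plugs in here
        registry.addEndpoint("/event")
                .setHandshakeHandler(new CustomHandshakeHandlerTwo());
    }

    @Override
    public void configureMessageBroker(MessageBrokerRegistry registry) {
        registry.enableSimpleBroker("/topic");
        registry.setApplicationDestinationPrefixes("/app");
    }
}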
To back the connected users in Redis, you may want to create a new SimpUserRegistry implementation:
public class RedisSimpUserRegistry implements SimpUserRegistry, SmartApplicationListener {

    private final RedisTemplate redisTemplate;

    public RedisSimpUserRegistry(RedisTemplate redisTemplate) {
        this.redisTemplate = redisTemplate;
    }

    [...]

    @Override
    public void onApplicationEvent(ApplicationEvent event) {
        // Maintain Redis collection on event type
        // ie. SessionConnectedEvent / SessionDisconnectEvent
    }

    [...]
}
PS: The @Controller annotation on your config class is not necessary unless you have an endpoint defined in it.
Edit after new comments:
You can see the DefaultSimpUserRegistry implementation to get an idea of how to do it.
To intercept an application event, you have to implement the ApplicationListener interface (in this case SmartApplicationListener).
The supportsEventType method is important to define which event types you want to intercept:
@Override
public boolean supportsEventType(Class<? extends ApplicationEvent> eventType) {
    return AbstractSubProtocolEvent.class.isAssignableFrom(eventType);
}
AbstractSubProtocolEvent has multiple implementations; the most important ones are SessionConnectEvent and SessionDisconnectEvent.
Intercepting these event types (see the onApplicationEvent method) will allow your implementation to maintain the desired state in your Redis cache. You could then store users (ids, etc.), as sketched below.
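One possible shape for that method, as a sketch (the Redis key layout "ws:users" is hypothetical):
@Override
public void onApplicationEvent(ApplicationEvent event) {
    // Mirror STOMP lifecycle events into Redis
    if (event instanceof SessionConnectedEvent) {
        Principal user = ((SessionConnectedEvent) event).getUser();
        if (user != null) {
            redisTemplate.opsForSet().add("ws:users", user.getName());
        }
    } else if (event instanceof SessionDisconnectEvent) {
        Principal user = ((SessionDisconnectEvent) event).getUser();
        if (user != null) {
            redisTemplate.opsForSet().remove("ws:users", user.getName());
        }
    }
}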

How to instantiate an object (JdbcTemplate) inside a Hazelcast MapStore

I'm trying to autowire a JdbcTemplate inside a MapStore, but I'm getting a NullPointerException.
I have worked through many examples but am still not able to resolve this issue.
Here is my main class
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class TestCacheApplication {

    public static void main(String[] args) {
        SpringApplication.run(TestCacheApplication.class, args);
        System.err.println("......running successfully......");
    }
}
Here is my cache configuration code
@Component
public class CacheConfig {

    @Bean
    public static Config config() {
        System.err.println("config class");
        Config config = new Config();
        config.setInstanceName("hazelcast");

        MapConfig mapCfg = new MapConfig();
        mapCfg.setName("first-map");
        mapCfg.setBackupCount(2);
        mapCfg.setTimeToLiveSeconds(300);

        MapStoreConfig mapStoreCfg = new MapStoreConfig();
        mapStoreCfg.setClassName(DataMapStore.class.getName()).setEnabled(true);
        mapCfg.setMapStoreConfig(mapStoreCfg);

        config.addMapConfig(mapCfg);
        return config;
    }
}
and the TblRepo implementation
@Service
public class DataTblRepoImpl implements DataTblRepo {

    @Autowired
    private JdbcTemplate jdbcTemplate;

    @Override
    public void save(String id, String name) {
        Object[] params = new Object[] { id, name };
        int[] types = new int[] { Types.VARCHAR, Types.VARCHAR };
        String insertSql = "INSERT INTO public.person(id, name) VALUES(?, ?)";
        jdbcTemplate.update(insertSql, params, types);
    }
}
and the DataTblRepo interface I have annotated with the @Repository annotation.
And my MapStore class
@SpringAware
public class DataMapStore implements MapStore<String, ModelClass> {

    @Autowired
    DataTblRepo dataTblRepo;

    @Override
    public void store(String key, ModelClass value) {
        dataTblRepo.save(value.getId(), value.getName());
    }

    // remaining methods will come here
}
and Controller
@RestController
@CrossOrigin(origins = "*")
@RequestMapping("/api/v1")
public class DataController {

    @Autowired
    DataService dataService;

    HazelcastInstance hazelCast = Hazelcast.getHazelcastInstanceByName("hazelcast");

    @PostMapping("/{test}")
    public String saveDatafrom(@RequestBody ModelClass model) {
        hazelCast.getMap("first-map").put(model.getId(), model);
        return "stored";
    }
}
Here is the program flow: when I start the application, the CacheConfig class runs first.
In the controller, when I perform the map.put() operation, the data goes to the DataMapStore class, which calls the store method to save the data in the database. Since dataTblRepo is null, the operation fails in the store method itself.
I tried adding @Component on the DataMapStore class as well,
but in that case I get this error:
"message": "Cannot invoke "com.example.demo.repo.DataTblRepository.save(String, String)" because "this.dataTableRepo" is null",
I have seen this same issue on many platforms but am still not able to resolve it.
Any suggestions would be very helpful.
@SpringAware is for Hazelcast distributed objects (cf. documentation).
The MapStore in your example is not a distributed object but a simple plain object. It should be managed by Spring itself. You should replace the @SpringAware annotation with a Spring @Component annotation.
The next issue is that your map store configuration makes Hazelcast responsible for instantiating the MapStore. If this happens, you won't benefit from Spring's dependency injection mechanism. You should directly set the instance created by Spring.
Replace @SpringAware with @Component:
@Component
public class DataMapStore implements MapStore<String, ModelClass> {
    // ...
}
Use the Spring-configured MapStore instance
@Bean
public Config config(DataMapStore mapStore) { // Ask Spring to inject the instance
    // ...
    MapStoreConfig mapStoreCfg = new MapStoreConfig();
    mapStoreCfg.setImplementation(mapStore);  // Use it
    mapCfg.setMapStoreConfig(mapStoreCfg);
    config.addMapConfig(mapCfg);
    return config;
}
I also removed the static keyword on the config() method.
Note that this way of using MapStore couples it with the "client" code. This means you need to use Hazelcast embedded. For more information about embedded mode vs. client/server, please check the documentation related to topology.
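For example, in embedded mode the Spring-managed Config can be turned into a member like this (a sketch; exposing the HazelcastInstance as a bean also lets the controller inject it instead of calling Hazelcast.getHazelcastInstanceByName):
@Bean
public HazelcastInstance hazelcastInstance(Config config) {
    // Starts an embedded member from the Spring-managed Config,
    // so the MapStore instance wired above is actually used
    return Hazelcast.newHazelcastInstance(config);
}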

Changing the value of a @Bean at runtime in Java Spring Boot

In Java with the Spring Boot framework there is a @Bean named DataSource that is used to make connections to a database, and it is used by another @Bean called JdbcTemplate that performs actions on the database. The problem is that the DataSource @Bean requires the connection properties (url, username and password) to be preconfigured. I need the DataSource @Bean to start with an "empty" or "default" value at project startup and to change this value at runtime. To be more exact, I want a certain endpoint to execute the action of changing the value of the @Bean. With the change of the DataSource @Bean's value, the JdbcTemplate will consequently be able to perform actions on several databases.
Some details:
I have already evaluated this issue of using multiple databases, and in my case it will be necessary.
All databases to be connected have the same model.
I do not think I need to delete and create another DataSource @Bean at runtime; maybe just override the values of the @Bean that Spring Boot itself already creates automatically.
I have already made the DataSource @Bean start with an "empty" value by writing a method with the @Bean annotation that returns a DataSource object, which is literally this code: DataSourceBuilder.create().build();.
My English is not very good, so if it's not very understandable, sorry.
DataSource @Bean code:
@Bean
public DataSource dataSource() {
    return DataSourceBuilder.create().build();
}
Main class:
@SpringBootApplication(scanBasePackages = "br.com.b2code")
@RequiredArgsConstructor(onConstructor = @__(@Autowired))
public class RunAdm extends SpringBootServletInitializer implements
        CommandLineRunner {

    public static final String URL_FRONTEND = "*";

    /**
     * Main method of the management module.
     *
     * @param args startup arguments
     * @throws Exception a generic exception
     */
    public static void main(String[] args) throws Exception {
        SpringApplication.run(RunAdm.class, args);
    }

    @Override
    protected SpringApplicationBuilder configure(SpringApplicationBuilder application) {
        return application.sources(RunAdm.class);
    }

    @Override
    public void run(String... args) throws Exception {
    }
}
A class to exemplify how I use JdbcTemplate:
@Repository
@RequiredArgsConstructor(onConstructor = @__(@Autowired))
public class ClienteQueryRepositoryImpl implements ClienteQueryRepository {

    private final @NonNull JdbcTemplate jdbc;

    @Override
    public List<Cliente> findAll() {
        return jdbc.query(ClienteQuerySQL.SELECT_ALL_CLIENTE_SQL, new ClienteRowMapper());
    }
}
I think as a general approach you might consider a proxy design pattern for the actual DataSource implementation.
Let's suppose the DataSource is an interface that has a getConnection method taking user and password (other methods are not really important because this answer is theoretical):
interface DataSource {
    Connection getConnection(String user, String password);
}
Now, in order to maintain many databases, you might want to provide an implementation of the DataSource which will act as a proxy for other datasources that will be created on the fly (upon the endpoint call, as you say).
Here is an example:
public class MultiDBDatasource implements DataSource {

    private DataSourcesRegistry registry;

    public Connection getConnection(String user, String password) {
        UserAndPassword userAndPassword = new UserAndPassword(user, password);
        return registry.get(userAndPassword).getConnection(user, password);
    }
}

@Component
class DataSourcesRegistry {

    private Map<UserAndPassword, DataSource> map = ...

    public DataSource get(UserAndPassword a) {
        return map.get(a);
    }

    public void addDataSource(UserAndPassword cred, DataSource ds) {
        // add to Map
        map.put(...)
    }
}

@Controller
class InvocationEndPoint {

    // injected by spring
    private DataSourcesRegistry registry;

    @PostMapping ...
    public void addNewDB(params) {
        DataSource ds = createDS(params); // not spring based
        UserAndPassword cred = createCred(params);
        registry.addDataSource(cred, ds);
    }
}
A couple of notes:
You should "override" the DataSource bean offered by spring - this can be done by defining your own bean with the same name in your own configuration, which will take precedence over spring's definition.
Spring won't create the dynamic data sources; they'll be created from the "invocation point" (a controller in this case for brevity, in real life probably some service). In any case, only the registry will be managed by spring, not the data sources.
Keep in mind that this approach is very high-level; in real life you'll have to think about:
Connection Pooling
Metering
Transaction Support
Multithreaded Access
and many other things
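As a concrete starting point for such a proxy, Spring itself ships AbstractRoutingDataSource, which routes getConnection() calls by a lookup key. A minimal sketch of the same idea (the ThreadLocal holder and its names are illustrative, not part of the answer above):
import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

public class TenantRoutingDataSource extends AbstractRoutingDataSource {

    // Hypothetical holder: a filter or interceptor would set the tenant
    // before any JdbcTemplate call and clear it afterwards
    private static final ThreadLocal<String> CURRENT_TENANT = new ThreadLocal<>();

    public static void setCurrentTenant(String tenant) {
        CURRENT_TENANT.set(tenant);
    }

    public static void clearCurrentTenant() {
        CURRENT_TENANT.remove();
    }

    @Override
    protected Object determineCurrentLookupKey() {
        // The key selects one of the DataSources registered
        // via setTargetDataSources(...)
        return CURRENT_TENANT.get();
    }
}
When registering new databases at runtime, you would update the map passed to setTargetDataSources(...) and call afterPropertiesSet() again so the router re-resolves its targets.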

SpringBoot direct MongoRepository to specific MongoTemplate

I have an app with multiple mongo configurations. This is achieved through some @Configuration classes, like so:
public abstract class AbstractMongoConfig {

    private String database;
    private String uri;

    public void setUri(String uri) {
        this.uri = uri;
    }

    public void setDatabase(String database) {
        this.database = database;
    }

    public MongoDbFactory mongoDbFactory() throws Exception {
        return new SimpleMongoDbFactory(new MongoClient(new MongoClientURI(this.uri)), this.database);
    }

    abstract public MongoTemplate getMongoTemplate() throws Exception;
}
Config 1 -- app
@Configuration
@ConfigurationProperties(prefix="app.mongodb")
public class AppMongoConfig extends AbstractMongoConfig {

    @Primary
    @Override
    @Bean(name="appMongoTemplate")
    public MongoTemplate getMongoTemplate() throws Exception {
        return new MongoTemplate(mongoDbFactory());
    }
}
Config 2 -- test
@Configuration
@ConfigurationProperties(prefix="test.mongodb")
public class TestMongoConfig extends AbstractMongoConfig {

    @Override
    @Bean(name="testMongoTemplate")
    public MongoTemplate getMongoTemplate() throws Exception {
        return new MongoTemplate(mongoDbFactory());
    }
}
Then in my properties
test.mongodb.uri=mongodb://127.0.0.1/test
test.mongodb.database=test
app.mongodb.uri=mongodb://127.0.0.1/app
app.mongodb.database=app
So, two mongo configs wired up to an instance running locally but with different databases. I have tried it with different addresses also but it behaves the same.
Anyway, this then gets used via an Entity and MongoRepository
@Document(collection="collname")
public class TestObj {

    @Id
    private String id;
    private String username;
    private int age;

    // getters & setters
}
Repo
@Repository
public interface TestObjRepository extends MongoRepository<TestObj, String> {
    public TestObj findByUsername(String username);
}
However when I use this in some class somewhere
@Service
public class ThingDoer {

    @Autowired
    TestObjRepository toRepo;

    public void doStuff() {
        TestObj to = new TestObj("name", 123);
        toRepo.save(to);
    }
}
This object gets written into the TestMongoConfig database, not the AppMongoConfig one as I would expect, since that's the one annotated with @Primary. Further, if I add the @EnableMongoRepositories annotation on the ThingDoer like:
@EnableMongoRepositories(basePackages={"com.whatever.package"}, mongoTemplateRef="appMongoTemplate")
it still doesn't work. It still writes to the db referenced by "test".
If I @Autowired in the MongoTemplate directly and use that, it works as I expect: things go to the "app" database. How can I tell it which database the TestObjRepository should be writing to and reading from?
So, if anyone else still has this problem, the solution is this:
@EnableMongoRepositories(basePackages={"com.whatever.package"}, mongoTemplateRef="appMongoTemplate")
You have to put it on your custom mongo configuration class,
where basePackages is the package path to your repositories. You have to have one package for each mongo database, so it finds the intended repository and model reference.
And you also have to disable Spring's mongo auto-configuration when using multiple DBs:
spring.autoconfigure.exclude:org.springframework.boot.autoconfigure.mongo.MongoAutoConfiguration
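Put together with the question's classes, that might look like this (the package names here are illustrative):
@Configuration
@ConfigurationProperties(prefix="app.mongodb")
@EnableMongoRepositories(basePackages={"com.whatever.package.app"}, mongoTemplateRef="appMongoTemplate")
public class AppMongoConfig extends AbstractMongoConfig {

    @Primary
    @Override
    @Bean(name="appMongoTemplate")
    public MongoTemplate getMongoTemplate() throws Exception {
        return new MongoTemplate(mongoDbFactory());
    }
}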
This is a great tutorial:
https://dzone.com/articles/multiple-mongodb-connectors-with-spring-boot

Loading Beans based on hostname

I am writing services in Spring Boot that get their configurations from Spring Cloud. These services are multi-tenant, and the tenant is based on the hostname.
What I have now is:
public class MyController {

    @Autowired
    public MyController(MyServiceFactory factory) {
        ...
    }

    @RequestMapping("some/path/{id}")
    ResponseEntity<SomeEntity> getSomeEntity(@RequestHeader header, @PathVariable id) {
        return factory.getMyService(header).handle(id);
    }
}
where MyServiceFactory looks something like...
public class MyServiceFactory {

    private final HashMap<String, MyService> serviceRegistry = new HashMap<>();

    public MyService getMyService(String key) {
        return serviceRegistry.get(key);
    }

    MyServiceFactory withService(String key, MyService service) {
        this.serviceRegistry.put(key, service);
        return this;
    }
}
then in a configuration file
@Configuration
public class ServiceFactoryConfiguration {

    @Bean
    public MyServiceFactory getMyServiceFactory() {
        return new MyServiceFactory()
                .withService("client1", new MyService1())
                .withService("client2", new MyService2());
    }
}
While what I have now works, I don't like that I need to create a factory for every dependency my controller may have. I'd like to have my code look something like this...
public class MyController {

    @Autowired
    public MyController(MyService service) {
        ...
    }

    @RequestMapping("some/path/{id}")
    ResponseEntity<SomeEntity> getSomeEntity(@PathVariable id) {
        return service.handle(id);
    }
}
with a configuration file like
@Configuration
public class MyServiceConfiguration {

    @Bean
    @Qualifier("Client1")
    public MyService getMyService1() {
        return new MyService1();
    }

    @Bean
    @Qualifier("Client2")
    public MyService getMyService2() {
        return new MyService2();
    }
}
I can get the code that I want to write if I use a profile at application startup. But I want to have lots of different DNS records pointing to the same (pool of) instance(s) and have an instance be able to handle requests for different clients. I want to be able to swap out profiles on a per-request basis.
Is this possible to do?
Spring profiles would not help here; you would need one application context per client, and that seems not to be what you want.
Instead you could use scoped beans.
Create your client-dependent beans with scope 'client':
@Bean
@Scope(value = "client", proxyMode = ScopedProxyMode.INTERFACES)
@Primary
MyService myService() {
    // It does not really matter which instance you create here;
    // the scope will create the real instance.
    // Maybe you can even return null, did not try that.
    return new MyServiceDummy();
}
There will be at least 3 beans of type MyService: the scoped one, and one for each client. The annotation @Primary tells spring to always use the scoped bean for injection.
Create a scope:
public class ClientScope implements Scope {

    @Autowired
    BeanFactory beanFactory;

    public Object get(String name, ObjectFactory<?> objectFactory) {
        // We do not use the objectFactory here, instead the beanFactory.
        // You somehow have to know which client is the current one:
        // from the config, current request, session, or a ThreadLocal...
        String client = findCurrentClient(..);
        // client now is something like 'Client1'.
        // Check if your cache (HashMap) contains an instance with
        // beanName = name for the client; if true, return that.
        ..
        // If not, create a new instance of the bean with the given name
        // for the current client. Easiest way: using a naming convention.
        String clientBeanName = client + '.' + name;
        Object clientBean = beanFactory.getBean(clientBeanName);
        // put it in the cache ...
        return clientBean;
    }
}
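The custom scope also has to be registered with the bean factory; a minimal sketch using CustomScopeConfigurer (the scope name "client" matches the @Scope annotation above):
@Bean
public static CustomScopeConfigurer clientScopeConfigurer() {
    CustomScopeConfigurer configurer = new CustomScopeConfigurer();
    // Note: if ClientScope relies on @Autowired fields, wire the
    // BeanFactory into it manually here instead
    configurer.addScope("client", new ClientScope());
    return configurer;
}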
And your client-specific beans are configured like this:
@Bean("Client1.myService")
public MyService getMyService1() {
    return new MyService1();
}

@Bean("Client2.myService")
public MyService getMyService2() {
    return new MyService2();
}
I did not test this exact code, but I have used the approach in my projects; it should work.
Tutorial: spring custom scope
