I am using Cosmos DB in a multi-tenant application. There is a separate database for each tenant, and each tenant's collections live in its own database.
Given that my application has to handle multiple tenants, I cannot have a single repository configuration with a pre-defined database. The database has to be chosen dynamically based on the request context (the tenant). How can I achieve such a setup with Spring Data Cosmos DB?
Here is how the repository configuration is set up. As you can see, the database is fixed through the application properties. In a real-world scenario the application receives requests from different tenants, so it will have to use different databases:
@Configuration
@EnableCosmosRepositories
@Slf4j
public class UserRepositoryConfiguration extends AbstractCosmosConfiguration {

    @Autowired
    private CosmosDBProperties properties;

    private CosmosKeyCredential cosmosKeyCredential;

    @Bean
    @Primary
    public CosmosDBConfig cosmosDbConfig() {
        this.cosmosKeyCredential = new CosmosKeyCredential(properties.getKey());
        CosmosDBConfig cosmosDBConfig = CosmosDBConfig.builder(properties.getUri(), cosmosKeyCredential,
                properties.getDatabase()).build();
        cosmosDBConfig.setPopulateQueryMetrics(properties.isPopulateQueryMetrics());
        cosmosDBConfig.setResponseDiagnosticsProcessor(new ResponseDiagnosticsProcessorImplementation());
        return cosmosDBConfig;
    }

    public void switchToPrimaryKey() {
        this.cosmosKeyCredential.key(properties.getKey());
    }

    public void switchKey(String key) {
        this.cosmosKeyCredential.key(key);
    }

    private static class ResponseDiagnosticsProcessorImplementation implements ResponseDiagnosticsProcessor {

        @Override
        public void processResponseDiagnostics(@Nullable ResponseDiagnostics responseDiagnostics) {
            log.info("Response Diagnostics {}", responseDiagnostics);
        }
    }
}
I'm putting my .NET code here for reference; it should help you write the equivalent Spring code.
You have to create one database account, and you need its keys (DatabaseEndPoint and DatabaseKey). Then you can create everything dynamically, i.e. databases, collections, etc., based on your tenant.
In .NET I use dependency injection to inject IDocumentClient. Below is my configuration:
string databaseEndPoint = ConfigurationManager.AppSettings["DatabaseEndPoint"]; // Get from config file
string databaseKey = ConfigurationManager.AppSettings["DatabaseKey"];           // Get from config file

services.AddSingleton<IDocumentClient>(new DocumentClient(new System.Uri(databaseEndPoint), databaseKey,
    new ConnectionPolicy
    {
        ConnectionMode = ConnectionMode.Direct,
        ConnectionProtocol = Protocol.Tcp,
        RequestTimeout = TimeSpan.FromMinutes(5), // Group-asset sync has a timeout issue with large payloads
        // Customize retry options for throttled requests
        RetryOptions = new RetryOptions()
        {
            MaxRetryAttemptsOnThrottledRequests = 5,
            MaxRetryWaitTimeInSeconds = 60
        }
    }
));
BaseDAO/BaseRepository:

public abstract class BaseDao : IBaseDao
{
    protected readonly IDocumentClient client;

    protected BaseDao(IDocumentClient client)
    {
        this.client = client;
    }

    /// <summary>
    /// Create Document in Database
    /// </summary>
    /// <param name="databaseId">database name</param>
    /// <param name="collectionId">collection name</param>
    /// <param name="document">document object</param>
    /// <returns></returns>
    public virtual async Task<string> CreateAsync(string databaseId, string collectionId, JObject document)
    {
        Document response = await client.CreateDocumentAsync(UriFactory.CreateDocumentCollectionUri(databaseId, collectionId), document);
        return response.Id;
    }
}
Create a DAO/repository class and inherit from the base DAO.
In my scenario we create a database per tenant name, i.e. google, microsoft, etc. Based on the user (bill@microsoft.com), all queries execute against the corresponding (microsoft) database.
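The same idea can be sketched in Java: resolve the tenant from the request context and derive the database name from it, instead of reading a fixed name from properties. This is only a minimal sketch; the `TenantContext` class and the `databaseNameFor` naming convention are illustrative assumptions, not part of Spring Data Cosmos DB:

```java
// Minimal sketch of request-scoped tenant resolution, assuming the tenant id
// is extracted from each request (e.g. a JWT claim or subdomain) by a filter.
final class TenantContext {

    private static final ThreadLocal<String> CURRENT_TENANT = new ThreadLocal<>();

    private TenantContext() {
    }

    // A servlet filter (or interceptor) would call this at the start of each request.
    static void setTenant(String tenantId) {
        CURRENT_TENANT.set(tenantId);
    }

    static String getTenant() {
        return CURRENT_TENANT.get();
    }

    // Derive the per-tenant database name, e.g. tenant "microsoft" -> "microsoft-db".
    static String databaseNameFor(String tenantId) {
        return tenantId + "-db";
    }

    // Clear at the end of the request to avoid leaking tenants across pooled threads.
    static void clear() {
        CURRENT_TENANT.remove();
    }
}
```

A repository or template wrapper would then call `TenantContext.databaseNameFor(TenantContext.getTenant())` wherever the configuration currently uses the single fixed database name.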
I have a Spring Webflux application with the "org.springframework.boot:spring-boot-starter-data-r2dbc" dependency for the DB connection.
I also have a postgres cluster containing master and read-only replica. Both have separate URLs.
I am looking for an option to configure the app to use both these urls accordingly.
What is the best way to do this?
Following this PR from @mp911de, I created a custom AbstractRoutingConnectionFactory which can route to different data sources depending on a specific key in Reactor's context.
public class ClusterConnectionFactory extends AbstractRoutingConnectionFactory {

    @Override
    protected Mono<Object> determineCurrentLookupKey() {
        return Mono.deferContextual(Mono::just)
                .filter(it -> it.hasKey("CONNECTION_MODE"))
                .map(it -> it.get("CONNECTION_MODE"));
    }
}
@Configuration
public class ClusterConnectionFactoryConfiguration {

    @Bean
    public ConnectionFactory routingConnectionFactory() {
        var clusterConnFactory = new ClusterConnectionFactory();

        var connectionFactories = Map.of(
                ConnectionMode.READ_WRITE, getDefaultConnFactory(),
                ConnectionMode.READ_ONLY, getReadOnlyConnFactory()
        );

        clusterConnFactory.setTargetConnectionFactories(connectionFactories);
        clusterConnFactory.setDefaultTargetConnectionFactory(getDefaultConnFactory());
        return clusterConnFactory;
    }

    // In this example I used Postgres
    private ConnectionFactory getDefaultConnFactory() {
        return new PostgresqlConnectionFactory(
                PostgresqlConnectionConfiguration.builder()...build());
    }

    private ConnectionFactory getReadOnlyConnFactory() {
        // similar to the above, but pointing to the read-only replica
    }

    public enum ConnectionMode { // auxiliary enum as a key
        READ_WRITE,
        READ_ONLY
    }
}
Then I had to extend my repository methods with this contextual information, like:

public <S extends Entity> Mono<UUID> save(final S entity) {
    return repository.save(entity)
            .contextWrite(context -> context.put("CONNECTION_MODE", READ_WRITE));
}

This works, but unfortunately it doesn't look good: it is not declarative, and it interferes with the reactive chains.
I would be glad if someone suggests a better solution.
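The lookup mechanics behind AbstractRoutingConnectionFactory reduce to a simple keyed dispatch with a default fallback, which can be illustrated framework-free. The `KeyedRouter` name below is an illustrative assumption, not a Spring class:

```java
import java.util.HashMap;
import java.util.Map;

// Framework-free sketch of the routing idea behind AbstractRoutingConnectionFactory:
// pick a target by lookup key, falling back to a default target when no key matches.
class KeyedRouter<K, T> {

    private final Map<K, T> targets = new HashMap<>();
    private final T defaultTarget;

    KeyedRouter(Map<K, T> targets, T defaultTarget) {
        this.targets.putAll(targets);
        this.defaultTarget = defaultTarget;
    }

    // Mirrors determineTargetConnectionFactory(): the key may be absent (null).
    T route(K key) {
        return targets.getOrDefault(key, defaultTarget);
    }
}
```

In the real factory the key is pulled from the Reactor context by determineCurrentLookupKey(); here it is simply passed in, e.g. `router.route(ConnectionMode.READ_ONLY)`.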
I am trying to configure multiple Couchbase data sources using spring-data-couchbase.
This is the way I tried to attach two Couchbase sources with two repositories:
@Configuration
@EnableCouchbaseRepositories("com.xyz.abc")
public class AbcDatasource extends AbstractCouchbaseConfiguration {

    @Override
    protected List<String> getBootstrapHosts() {
        return Collections.singletonList("ip_address_of_couchbase");
    }

    // bucket_name
    @Override
    protected String getBucketName() {
        return "bucket_name";
    }

    // password
    @Override
    protected String getBucketPassword() {
        return "user_password";
    }

    @Override
    @Bean(destroyMethod = "disconnect", name = "COUCHBASE_CLUSTER_2")
    public Cluster couchbaseCluster() throws Exception {
        return CouchbaseCluster.create(couchbaseEnvironment(), "ip_address_of_couchbase");
    }

    @Bean(name = "BUCKET2")
    public Bucket bucket2() throws Exception {
        return this.couchbaseCluster().openBucket("bucket2", "somepassword");
    }

    @Bean(name = "BUCKET2_TEMPLATE")
    public CouchbaseTemplate newTemplateForBucket2() throws Exception {
        CouchbaseTemplate template = new CouchbaseTemplate(
                couchbaseClusterInfo(), // reuse the default bean
                bucket2(),              // the bucket is non-default
                mappingCouchbaseConverter(), translationService()
        );
        template.setDefaultConsistency(getDefaultConsistency());
        return template;
    }

    @Override
    public void configureRepositoryOperationsMapping(RepositoryOperationsMapping baseMapping) {
        baseMapping
                .mapEntity(SomeDAOUsedInSomeRepository.class, newTemplateForBucket2());
    }
}
similarly:
@Configuration
@EnableCouchbaseRepositories("com.xyz.mln")
public class MlnDatasource extends AbstractCouchbaseConfiguration {...}
Now the problem is that there is no straightforward way to specify a namespace-based data source by attaching different beans to these configurations, the way Spring Data JPA supports it through entity-manager-factory-ref and transaction-manager-ref.
Because of this, only one configuration is picked up, whichever comes first.
Any suggestion is greatly appreciated.
Related question: Use Spring Data Couchbase to connect to different Couchbase clusters
@anshul, you are almost there.
Make one of the data sources @Primary; it will be used as the default bucket.
Wherever you want to use the other bucket, just inject the specific bean into your service class with a qualifier. Below is an example:

@Qualifier(value = "BUCKET1_TEMPLATE")
@Autowired
CouchbaseTemplate couchbaseTemplate;

Now you can use this template to perform all Couchbase-related operations on the desired bucket.
We have a REST API application based on Spring Data REST. We have many types of data exposed as spring data repositories marked with the #RepositoryRestResource. We would like to control precisely which data types are exposed at runtime, as we will have several installations with slightly different requirements.
How can we achieve fine grained control at runtime over which repositories are exposed by Spring Data REST?
Our naive attempt was to use the exported attribute of @RepositoryRestResource with an expression, but we can't see how to make that work - the expression evaluates to a string, not a boolean.

@RepositoryRestResource(exported = "${app.exportStudy}")
public interface StudyRepository<Study> extends MongoRepository<Study, String> {
}
One way of solving this is to replace the repository detection strategy.
First, use an object to store your configuration:
@Component
@ConfigurationProperties("app.repository")
@Data
public class AppRepositoryConfig {
    private boolean exportStudy = true;
    private boolean exportSample = true;
    ...
}
Second, amend the behaviour of the stock RepositoryDetectionStrategy to take into account your configuration:
@Configuration
@RequiredArgsConstructor
public class AppRepositoryDetectionStrategyConfig extends RepositoryRestConfigurerAdapter {

    @NonNull private AppRepositoryConfig appRepositoryConfig;

    @Override
    public void configureRepositoryRestConfiguration(RepositoryRestConfiguration config) {
        RepositoryDetectionStrategy rds = config.getRepositoryDetectionStrategy();
        config.setRepositoryDetectionStrategy(
            repositoryDetectionStrategy(rds)
        );
    }

    private RepositoryDetectionStrategy repositoryDetectionStrategy(
            RepositoryDetectionStrategy repositoryDetectionStrategy) {
        RepositoryDetectionStrategy rds = metadata -> {
            boolean defaultExportSetting = repositoryDetectionStrategy.isExported(metadata);
            if (metadata.getDomainType().equals(Study.class)) {
                return (appRepositoryConfig.isExportStudy()) ? defaultExportSetting : false;
            }
            ...
            return defaultExportSetting;
        };
        return rds;
    }
}
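The per-repository decision inside the strategy lambda boils down to a simple rule: a repository stays exported only if the stock strategy would export it and its application-level flag is still enabled. A framework-free sketch of just that rule (the `ExportRules` name is an illustrative assumption):

```java
// Framework-free sketch of the export decision used in the strategy lambda:
// equivalent to "configFlagEnabled ? defaultExportSetting : false".
class ExportRules {

    static boolean exportDecision(boolean defaultExportSetting, boolean configFlagEnabled) {
        // Flag off -> force "not exported"; flag on -> defer to the default strategy.
        return configFlagEnabled && defaultExportSetting;
    }
}
```

This makes the semantics explicit: the config flags can only hide repositories, never expose one that the default detection strategy would not export.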
I have 4 databases with a similar schema on PostgreSQL.
My current code is like this:
resources
spring.datasource.url=jdbc:postgresql://localhost:5432/postgres
spring.datasource.username=postgres
spring.datasource.password=postgres
DAO
public interface AccountRepository extends JpaRepository<Account, Long>{}
Configuration
@Configuration
public class AccountServiceConfiguration {

    @Autowired
    private AccountRepository accountRepository;

    @Bean
    public AccountService accountService() {
        return new AccountService(accountRepository);
    }
}
Controller
@RestController
@RequestMapping("/accounts")
public class AccountController {

    @Autowired
    private AccountService accountService;

    @RequestMapping(name = "/", method = RequestMethod.GET)
    public Page<Account> getAccounts(Integer page, Integer size) {
        return accountService.getAll(page, size);
    }
}
Service
public class AccountService {

    private final AccountRepository accountRepository;

    public AccountService(AccountRepository accountRepository) {
        this.accountRepository = accountRepository;
    }

    public Page<Account> getAll(Integer page, Integer size) {
        PageRequest pageRequest = new PageRequest(page, size);
        return accountRepository.findAll(pageRequest);
    }
}
I want to change it to this:
resources
spring.db1.url=jdbc:postgresql://db1:5432/postgres
spring.db1.username=postgres1
spring.db1.password=postgres1
spring.db2.url=jdbc:postgresql://db2:5432/postgres
spring.db2.username=postgres2
spring.db2.password=postgres2
spring.db3.url=jdbc:postgresql://db3:5432/postgres
spring.db3.username=postgres3
spring.db3.password=postgres3
spring.db4.url=jdbc:postgresql://db4:5432/postgres
spring.db4.username=postgres4
spring.db4.password=postgres4
Controller
...
public Page<Account> getAccounts(Integer page, Integer size, String env) {
    return accountService.getAll(page, size, env);
}
...
Service
public class AccountService {

    private final Map<String, AccountRepository> mapAccountRepository;

    public AccountService(Map<String, AccountRepository> mapAccountRepository) {
        this.mapAccountRepository = mapAccountRepository;
    }

    public Page<Account> getAll(Integer page, Integer size, String env) {
        PageRequest pageRequest = new PageRequest(page, size);
        // look up the repository for the given env and search there
    }
}
How do I load the 4 data sources (maybe in a map) and search by environment?
If I send env=db1, I want to run my request on db1.
If you have another solution I'll take it, but it must use one repository and one entity to search across all the databases.
Thank you :)
According to your comments, you want a single repository instance to switch between different schemata.
That won't work.
What you can do is provide a facade over multiple repository instances that delegates each call to one of the many instances according to some parameter/field/property.
But one way or the other, you have to create a separate repository instance, with a different database connection, for each database.
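Such a facade can be sketched as follows. The `AccountFinder` interface stands in for the real Spring Data repository, and all names here are illustrative assumptions:

```java
import java.util.List;
import java.util.Map;

// Stand-in for the real Spring Data repository interface.
interface AccountFinder {
    List<String> findAll();
}

// Facade that delegates each call to one of several repository instances,
// chosen by an environment key such as "db1".."db4".
class AccountRepositoryFacade {

    private final Map<String, AccountFinder> repositoriesByEnv;

    AccountRepositoryFacade(Map<String, AccountFinder> repositoriesByEnv) {
        this.repositoriesByEnv = repositoriesByEnv;
    }

    List<String> findAll(String env) {
        AccountFinder repository = repositoriesByEnv.get(env);
        if (repository == null) {
            throw new IllegalArgumentException("Unknown environment: " + env);
        }
        return repository.findAll();
    }
}
```

Each map entry would be backed by its own DataSource/EntityManagerFactory, exactly as the answer says; the facade only chooses which one handles the call.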
What you are describing is called multi-tenancy with multiple databases.
To accomplish it you need to configure the persistence layer manually rather than relying completely on Spring Boot's auto-configuration capabilities.
The persistence layer configuration involves:
Hibernate, JPA and datasource properties
Datasource beans
An entity manager factory bean (in the case of Hibernate, with properties specifying that this is a multi-tenant entity manager factory, plus a tenant connection provider and a tenant resolver)
A transaction manager bean
Spring Data JPA and transaction support configuration
In a blog post I recently published, Multi-tenant applications using Spring Boot, JPA, Hibernate and Postgres, I cover this exact problem with a detailed implementation.
I'm currently having the issue that the @Transactional annotation doesn't seem to start a transaction for Neo4j (it doesn't work with any of my @Transactional-annotated methods, not just the following example).
Example:
I have this method (UserService.createUser), which creates a user node in the Neo4j graph first and then creates the user (with additional information) in MongoDB. (MongoDB doesn't support transactions, hence: create the user node first, then insert the entity into MongoDB, and commit the Neo4j transaction afterwards.)
The method is annotated with @Transactional, yet an org.neo4j.graphdb.NotInTransactionException is thrown when it comes to creating the user in Neo4j.
Here are my configuration and code, respectively:
Code-based SDN-Neo4j configuration:

@Configuration
@EnableTransactionManagement // mode = proxy
@EnableNeo4jRepositories(basePackages = "graph.repository")
public class Neo4jConfig extends Neo4jConfiguration {

    private static final String DB_PATH = "path_to.db";
    private static final String CONFIG_PATH = "path_to.properties";

    @Bean(destroyMethod = "shutdown")
    public GraphDatabaseService graphDatabaseService() {
        return new GraphDatabaseFactory().newEmbeddedDatabaseBuilder(DB_PATH)
                .loadPropertiesFromFile(CONFIG_PATH).newGraphDatabase();
    }
}
Service for creating the user in Neo4j and MongoDB:

@Service
public class UserService {

    @Inject
    private UserMdbRepository mdbUserRepository; // MongoRepository

    @Inject
    private Neo4jTemplate neo4jTemplate;

    @Transactional
    public User createUser(User user) {
        // Create the graph node first, because if this fails the user
        // shall not be created in MongoDB
        this.neo4jTemplate.save(user); // NotInTransactionException is thrown here

        // Then create the MongoDB user. This can't be rolled back, but
        // if it fails, the Neo4j modification shall be rolled back too
        return this.mdbUserRepository.save(user);
    }
    ...
}
Side-notes:
I'm using Spring version 3.2.3.RELEASE and spring-data-neo4j version 2.3.0.M1
UserService and Neo4jConfig are in separate Maven artifacts
Starting the server and SDN read operations work so far; I'm only having trouble with write operations
I'm currently migrating our project from the Tinkerpop framework to SDN-Neo4j. This user-creation process worked before (with Tinkerpop); I just have to make it work again with SDN-Neo4j
I'm running the application in Jetty
Does anyone have any clue why this is not working (yet)?
I hope this information is sufficient. If anything is missing, please let me know and I'll add it.
Edit:
I forgot to mention that manual transaction handling works, but of course I'd like to implement it the way it's meant to be:
public User createUser(User user) throws ServiceException {
    Transaction tx = this.graphDatabaseService.beginTx();
    try {
        this.neo4jTemplate.save(user);
        User persistantUser = this.mdbUserRepository.save(user);
        tx.success();
        return persistantUser;
    } catch (Exception e) {
        tx.failure();
        throw new ServiceException(e);
    } finally {
        tx.finish();
    }
}
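The manual workaround above follows the classic "transaction template" shape: run the work, mark success or failure, and always finish the transaction. That shape can be sketched framework-free (the `TxTemplate` and `Tx` names are illustrative assumptions, not Neo4j or Spring APIs):

```java
import java.util.function.Supplier;

// Framework-free sketch of the manual transaction pattern used above.
class TxTemplate {

    // Minimal transaction contract mirroring success()/failure()/finish().
    interface Tx {
        void success();
        void failure();
        void finish();
    }

    static <T> T execute(Tx tx, Supplier<T> work) {
        try {
            T result = work.get();
            tx.success();       // mark for commit
            return result;
        } catch (RuntimeException e) {
            tx.failure();       // mark for rollback
            throw e;
        } finally {
            tx.finish();        // always release the transaction
        }
    }
}
```

@Transactional exists precisely so this boilerplate is generated by a proxy instead of being written by hand, which is why getting the declarative setup working is worth the trouble.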
Thanks to m-deinum I finally found the issue. The problem was that I scanned for those components/services in a different Spring configuration file than the one where I configured SDN-Neo4j. I moved the component scan for the packages which might require transactions into my Neo4jConfig, and now it works:
@Configuration
@EnableTransactionManagement // mode = proxy
@EnableNeo4jRepositories(basePackages = "graph.repository")
@ComponentScan({
    "graph.component",
    "graph.service",
    "core.service"
})
public class Neo4jConfig extends Neo4jConfiguration {

    private static final String DB_PATH = "path_to.db";
    private static final String CONFIG_PATH = "path_to.properties";

    @Bean(destroyMethod = "shutdown")
    public GraphDatabaseService graphDatabaseService() {
        return new GraphDatabaseFactory().newEmbeddedDatabaseBuilder(DB_PATH)
                .loadPropertiesFromFile(CONFIG_PATH).newGraphDatabase();
    }
}
I will still have to separate the components/services which require transactions from those which don't, but this works for now.
I assume the issue was that the other Spring configuration file (the one with the component scan) was loaded before Neo4jConfig, since neo4j:repositories has to be declared before context:component-scan. (See the note in Example 20.26, "Composing repositories": http://static.springsource.org/spring-data/data-neo4j/docs/current/reference/html/programming-model.html#d0e2948)