I am trying to configure multiple Couchbase data sources using spring-data-couchbase.
This is how I tried to attach two Couchbase sources with two repositories:
@Configuration
@EnableCouchbaseRepositories("com.xyz.abc")
public class AbcDatasource extends AbstractCouchbaseConfiguration {

    @Override
    protected List<String> getBootstrapHosts() {
        return Collections.singletonList("ip_address_of_couchbase");
    }

    // bucket name
    @Override
    protected String getBucketName() {
        return "bucket_name";
    }

    // bucket password
    @Override
    protected String getBucketPassword() {
        return "user_password";
    }

    @Override
    @Bean(destroyMethod = "disconnect", name = "COUCHBASE_CLUSTER_2")
    public Cluster couchbaseCluster() throws Exception {
        return CouchbaseCluster.create(couchbaseEnvironment(), "ip_address_of_couchbase");
    }

    @Bean(name = "BUCKET2")
    public Bucket bucket2() throws Exception {
        return this.couchbaseCluster().openBucket("bucket2", "somepassword");
    }

    @Bean(name = "BUCKET2_TEMPLATE")
    public CouchbaseTemplate newTemplateForBucket2() throws Exception {
        CouchbaseTemplate template = new CouchbaseTemplate(
                couchbaseClusterInfo(), // reuse the default bean
                bucket2(),              // the bucket is non-default
                mappingCouchbaseConverter(), translationService()
        );
        template.setDefaultConsistency(getDefaultConsistency());
        return template;
    }

    @Override
    public void configureRepositoryOperationsMapping(RepositoryOperationsMapping baseMapping) {
        baseMapping
                .mapEntity(SomeDAOUsedInSomeRepository.class, newTemplateForBucket2());
    }
}
Similarly:
@Configuration
@EnableCouchbaseRepositories("com.xyz.mln")
public class MlnDatasource extends AbstractCouchbaseConfiguration {...}
Now the problem is that there is no straightforward way to tie each of these configurations to its own package by attaching different beans, the way Spring Data JPA supports through entity-manager-factory-ref and transaction-manager-ref.
Because of this, only one configuration is picked up, whichever comes first.
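For comparison, this is the Spring Data JPA mechanism we mean; a minimal sketch, where the bean names are placeholders:
@Configuration
@EnableJpaRepositories(
        basePackages = "com.xyz.abc",
        entityManagerFactoryRef = "abcEntityManagerFactory",
        transactionManagerRef = "abcTransactionManager")
public class AbcJpaConfig {
    // entityManagerFactory and transactionManager beans defined here
}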
Any suggestion is greatly appreciated.
Related question: Use Spring Data Couchbase to connect to different Couchbase clusters
@anshul, you are almost there.
Mark one of the data sources as @Primary; it will then be used for the default bucket.
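For example (a sketch; the bean just has to be wired like newTemplateForBucket2() above, but for the default bucket):
@Primary
@Bean(name = "BUCKET1_TEMPLATE")
public CouchbaseTemplate templateForBucket1() throws Exception {
    // same wiring as newTemplateForBucket2(), pointing at the default bucket
    ...
}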
Wherever you want to use the other bucket, just inject the specific bean into your service class with a qualifier. Below is an example:
@Qualifier(value = "BUCKET1_TEMPLATE")
@Autowired
CouchbaseTemplate couchbaseTemplate;
Now you can use this template to perform all Couchbase operations on the desired bucket.
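For instance, a minimal usage sketch, assuming SomeEntity is one of your mapped entities:
// both calls go against the bucket behind BUCKET1_TEMPLATE
couchbaseTemplate.save(someEntity);
SomeEntity found = couchbaseTemplate.findById("some-id", SomeEntity.class);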
I have a Spring WebFlux application with the "org.springframework.boot:spring-boot-starter-data-r2dbc" dependency for the DB connection.
I also have a Postgres cluster containing a master and a read-only replica, each with its own URL.
I am looking for a way to configure the app to use both of these URLs accordingly.
What is the best way to do this?
Following this PR from @mp911de, I created a custom AbstractRoutingConnectionFactory which can route to different data sources depending on a specific key in Reactor's context.
public class ClusterConnectionFactory extends AbstractRoutingConnectionFactory {

    @Override
    protected Mono<Object> determineCurrentLookupKey() {
        return Mono.deferContextual(Mono::just)
                .filter(it -> it.hasKey("CONNECTION_MODE"))
                .map(it -> it.get("CONNECTION_MODE"));
    }
}
@Configuration
public class ClusterConnectionFactoryConfiguration {

    @Bean
    public ConnectionFactory routingConnectionFactory() {
        var clusterConnFactory = new ClusterConnectionFactory();
        var connectionFactories = Map.of(
                ConnectionMode.READ_WRITE, getDefaultConnFactory(),
                ConnectionMode.READ_ONLY, getReadOnlyConnFactory()
        );
        clusterConnFactory.setTargetConnectionFactories(connectionFactories);
        clusterConnFactory.setDefaultTargetConnectionFactory(getDefaultConnFactory());
        return clusterConnFactory;
    }

    // In this example I used Postgres
    private ConnectionFactory getDefaultConnFactory() {
        return new PostgresqlConnectionFactory(
                PostgresqlConnectionConfiguration.builder()...build());
    }

    private ConnectionFactory getReadOnlyConnFactory() {
        // similar to the above but pointing to the read-only replica
    }

    public enum ConnectionMode { // auxiliary enum used as the routing key
        READ_WRITE,
        READ_ONLY
    }
}
Then I had to extend my repository methods with this contextual info, like:
public <S extends Entity> Mono<UUID> save(final S entity) {
    return repository.save(entity)
            .contextWrite(context -> context.put("CONNECTION_MODE", READ_WRITE));
}
This works, but unfortunately it doesn't look good: it is not declarative, and it interferes with the reactive chains.
I would be glad if someone suggests a better solution.
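In the meantime, a small helper can at least centralize the plumbing; a sketch, reusing the "CONNECTION_MODE" key and ConnectionMode enum from the configuration above:
public final class Routing {

    private Routing() {
    }

    // wraps any reactive call so it gets routed with the given mode
    public static <T> Mono<T> withMode(Mono<T> publisher, ConnectionMode mode) {
        return publisher.contextWrite(ctx -> ctx.put("CONNECTION_MODE", mode));
    }
}
Usage then shrinks to Routing.withMode(repository.save(entity), ConnectionMode.READ_WRITE), though it is still not declarative.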
I am using CosmosDB in a multi-tenant application. I have a separate database for each tenant, and each tenant's collections live in its own database.
Given that my application has to handle multiple tenants, I cannot have a single repository configuration with a pre-defined database. The database has to be chosen dynamically based on the request context (the tenant). How can such a setup be achieved with Spring Data CosmosDB?
Here is how the repository configuration is set up. As you can see, the database is fixed through the application properties. In a real-world scenario the application receives requests from different tenants, so it will have to use different databases:
@Configuration
@EnableCosmosRepositories
@Slf4j
public class UserRepositoryConfiguration extends AbstractCosmosConfiguration {

    @Autowired
    private CosmosDBProperties properties;

    private CosmosKeyCredential cosmosKeyCredential;

    @Bean
    @Primary
    public CosmosDBConfig cosmosDbConfig() {
        this.cosmosKeyCredential = new CosmosKeyCredential(properties.getKey());
        CosmosDBConfig cosmosDBConfig = CosmosDBConfig.builder(properties.getUri(), cosmosKeyCredential,
                properties.getDatabase()).build();
        cosmosDBConfig.setPopulateQueryMetrics(properties.isPopulateQueryMetrics());
        cosmosDBConfig.setResponseDiagnosticsProcessor(new ResponseDiagnosticsProcessorImplementation());
        return cosmosDBConfig;
    }

    public void switchToPrimaryKey() {
        this.cosmosKeyCredential.key(properties.getKey());
    }

    public void switchKey(String key) {
        this.cosmosKeyCredential.key(key);
    }

    private static class ResponseDiagnosticsProcessorImplementation implements ResponseDiagnosticsProcessor {

        @Override
        public void processResponseDiagnostics(@Nullable ResponseDiagnostics responseDiagnostics) {
            log.info("Response Diagnostics {}", responseDiagnostics);
        }
    }
}
I'm putting my .NET code here for reference; it should help you write the equivalent Spring code.
You have to create one database account and you need its keys (DatabaseEndPoint and DatabaseKey). Then everything else, i.e. databases, collections etc., can be created dynamically based on your tenant.
In .NET, I use dependency injection to inject IDocumentClient. Below is my configuration:
string databaseEndPoint = ConfigurationManager.AppSettings["DatabaseEndPoint"]; // get from config file
string databaseKey = ConfigurationManager.AppSettings["DatabaseKey"]; // get from config file

services.AddSingleton<IDocumentClient>(new DocumentClient(new System.Uri(databaseEndPoint), databaseKey,
    new ConnectionPolicy
    {
        ConnectionMode = ConnectionMode.Direct,
        ConnectionProtocol = Protocol.Tcp,
        RequestTimeout = TimeSpan.FromMinutes(5), // group-asset sync has a timeout issue with large payloads
        // Customize retry options for throttled requests
        RetryOptions = new RetryOptions()
        {
            MaxRetryAttemptsOnThrottledRequests = 5,
            MaxRetryWaitTimeInSeconds = 60
        }
    }
));
BaseDAO/BaseRepository:
public abstract class BaseDao : IBaseDao
{
    protected readonly IDocumentClient client;

    protected BaseDao(IDocumentClient client)
    {
        this.client = client;
    }

    /// <summary>
    /// Create a document in the database.
    /// </summary>
    /// <param name="databaseId">database name</param>
    /// <param name="collectionId">collection name</param>
    /// <param name="document">document object</param>
    /// <returns>the id of the created document</returns>
    public virtual async Task<string> CreateAsync(string databaseId, string collectionId, JObject document)
    {
        Document response = await client.CreateDocumentAsync(UriFactory.CreateDocumentCollectionUri(databaseId, collectionId), document);
        return response.Id;
    }
}
Create your DAO/repository classes and inherit from the base DAO.
In my scenario, we create a database per tenant name (google, microsoft, etc.), so for a user like bill@microsoft.com all queries execute against the one (microsoft) database.
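Translated to Java, the same idea could look roughly like this; a sketch using the plain Azure Cosmos Java SDK (v4) rather than the Spring Data layer, with TenantAwareDao as a hypothetical name:
import com.azure.cosmos.CosmosClient;
import com.azure.cosmos.CosmosClientBuilder;
import com.azure.cosmos.CosmosContainer;

public class TenantAwareDao {

    private final CosmosClient client; // one client per account, shared across tenants

    public TenantAwareDao(String databaseEndPoint, String databaseKey) {
        this.client = new CosmosClientBuilder()
                .endpoint(databaseEndPoint)
                .key(databaseKey)
                .buildClient();
    }

    // databaseId comes from the tenant, e.g. "microsoft" for bill@microsoft.com
    public <T> void create(String databaseId, String containerId, T document) {
        CosmosContainer container = client.getDatabase(databaseId).getContainer(containerId);
        container.createItem(document);
    }
}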
We have a REST API application based on Spring Data REST. We have many types of data exposed as Spring Data repositories marked with @RepositoryRestResource. We would like to control precisely which data types are exposed at runtime, as we will have several installations with slightly different requirements.
How can we achieve fine-grained control at runtime over which repositories are exposed by Spring Data REST?
Our naive attempt was to use the exported parameter of @RepositoryRestResource with an expression, but we can't see how to make that work: the expression evaluates to a string, not a boolean.
@RepositoryRestResource(exported = "${app.exportStudy}") // does not compile: exported expects a boolean
public interface StudyRepository<Study> extends MongoRepository<Study, String> {
}
One way of solving this is to replace the repository detection strategy.
First, use an object to store your configuration:
@Component
@ConfigurationProperties("app.repository")
@Data
public class AppRepositoryConfig {

    private boolean exportStudy = true;
    private boolean exportSample = true;
    ...
}
Second, amend the behaviour of the stock RepositoryDetectionStrategy to take into account your configuration:
@Configuration
@RequiredArgsConstructor
public class AppRepositoryDetectionStrategyConfig extends RepositoryRestConfigurerAdapter {

    @NonNull private AppRepositoryConfig appRepositoryConfig;

    @Override
    public void configureRepositoryRestConfiguration(RepositoryRestConfiguration config) {
        RepositoryDetectionStrategy rds = config.getRepositoryDetectionStrategy();
        config.setRepositoryDetectionStrategy(
                repositoryDetectionStrategy(rds)
        );
    }

    private RepositoryDetectionStrategy repositoryDetectionStrategy(
            RepositoryDetectionStrategy repositoryDetectionStrategy) {
        RepositoryDetectionStrategy rds = metadata -> {
            boolean defaultExportSetting = repositoryDetectionStrategy.isExported(metadata);
            if (metadata.getDomainType().equals(Study.class)) {
                return (appRepositoryConfig.isExportStudy()) ? defaultExportSetting : false;
            }
            ...
            return defaultExportSetting;
        };
        return rds;
    }
}
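With this in place, each installation can toggle individual repositories from its configuration; for example (assuming the property names above):
# application.yml
app:
  repository:
    export-study: false   # hides the Study repository in this installation
    export-sample: true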
I am using the config.yml below:
# AWS DynamoDB settings
dynamoDB:
  # Access key
  aws_access_key_id: "access-key"
  # Secret key
  aws_secret_access_key: "secret-key"
  aws_dynamodb_region: EU_WEST_1
And the DynamoDBConfig class below reads the above config values:
public class DynamoDBConfig {

    public DynamoDBConfig() {
    }

    @JsonProperty("aws_access_key_id")
    public String accessKey;

    @JsonProperty("aws_secret_access_key")
    public String secretKey;

    @JsonProperty("aws_dynamodb_region")
    public String region;

    // getters and setters
}
Finally, the application configuration class, which includes the DynamoDB config:
public class ReadApiConfiguration extends Configuration {

    @NotNull
    private DynamoDBConfig dynamoDBConfig = new DynamoDBConfig();

    @JsonProperty("dynamoDB")
    public DynamoDBConfig getDynamoDBConfig() {
        return dynamoDBConfig;
    }

    @JsonProperty("dynamoDB")
    public void setDynamoDBConfig(DynamoDBConfig dynamoDBConfig) {
        this.dynamoDBConfig = dynamoDBConfig;
    }
}
Now I want to read the aws_access_key_id and aws_secret_access_key values in my AWSclient.java class to create an AWS client:
BasicAWSCredentials awsCreds = new BasicAWSCredentials("access_key_id", "secret_key_id");
My problem is how to read/inject the config values in my AWSclient class. I am using the dropwizard-guice module for DI, and I couldn't figure out how to bind the configuration object created at Dropwizard startup time to its class.
P.S.: I've gone through this SO post, but it doesn't solve my issue, as it's not using Guice as the DI module.
Normally, you can inject your configuration object either into a class field or into a constructor, like:
public class AWSclient {

    @Inject
    public AWSclient(ReadApiConfiguration conf) {
        initConnection(conf.getDynamoDBConfig().getSecretKey(), ...);
    }
}
Additionally, annotate your ReadApiConfiguration class with the @Singleton annotation.
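If your dropwizard-guice setup does not bind the configuration instance automatically, a small module can do it explicitly; a sketch, where ConfigModule is a hypothetical name:
public class ConfigModule extends AbstractModule {

    private final ReadApiConfiguration configuration;

    public ConfigModule(ReadApiConfiguration configuration) {
        this.configuration = configuration;
    }

    @Override
    protected void configure() {
        // make both the whole configuration and the nested DynamoDB part injectable
        bind(ReadApiConfiguration.class).toInstance(configuration);
        bind(DynamoDBConfig.class).toInstance(configuration.getDynamoDBConfig());
    }
}
Depending on the dropwizard-guice version, the configuration class may already be bound automatically, in which case the @Inject constructor above works as-is.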
I have configured a Spring MongoDB project to load data into a database named "warehouse". Here is what my config class looks like:
@Configuration
public class SpringMongoConfig extends AbstractMongoConfiguration {

    @Override
    protected String getDatabaseName() {
        return "warehouse";
    }

    public @Bean Mongo mongo() throws Exception {
        return new Mongo("localhost");
    }

    public @Bean MongoTemplate mongoTemplate() throws Exception {
        return new MongoTemplate(mongo(), getDatabaseName());
    }
}
But Spring always uses the default database "test" to store and retrieve the collections. I have tried different approaches to point it to the "warehouse" db, but it doesn't seem to work. What am I doing wrong? Any leads are appreciated.
Assuming you have a standard Mongo install (e.g., the database is at a default location such as /data/db or C:\data\db), your configuration class looks correct. How are you using it? Can you try:
SpringMongoConfig config = new SpringMongoConfig();
MongoTemplate template = config.mongoTemplate();
template.createCollection("someCollection");
From a shell, if you then log into mongo and enter show dbs, do you not see warehouse?