Per-class configuration of Spring Data REST repository export - Java

We have a REST API application based on Spring Data REST. We have many types of data exposed as Spring Data repositories marked with @RepositoryRestResource. We would like to control precisely which data types are exposed at runtime, as we will have several installations with slightly different requirements.
How can we achieve fine-grained control at runtime over which repositories are exposed by Spring Data REST?
Our naive attempt was to use the exported parameter of @RepositoryRestResource with a property expression, but we can't see how to make that work - the attribute expects a boolean, while the expression evaluates to a string.
@RepositoryRestResource(exported = "${app.exportStudy}")
public interface StudyRepository extends MongoRepository<Study, String> {
}

One way of solving this is to replace the repository detection strategy.
First, use an object to store your configuration:
@Component
@ConfigurationProperties("app.repository")
@Data
public class AppRepositoryConfig {
    private boolean exportStudy = true;
    private boolean exportSample = true;
    ...
}
Second, amend the behaviour of the stock RepositoryDetectionStrategy to take into account your configuration:
@Configuration
@RequiredArgsConstructor
public class AppRepositoryDetectionStrategyConfig extends RepositoryRestConfigurerAdapter {
    @NonNull private AppRepositoryConfig appRepositoryConfig;
    @Override
    public void configureRepositoryRestConfiguration(RepositoryRestConfiguration config) {
        RepositoryDetectionStrategy rds = config.getRepositoryDetectionStrategy();
        config.setRepositoryDetectionStrategy(
            repositoryDetectionStrategy(rds)
        );
    }
    private RepositoryDetectionStrategy repositoryDetectionStrategy(
            RepositoryDetectionStrategy repositoryDetectionStrategy) {
        RepositoryDetectionStrategy rds = metadata -> {
            boolean defaultExportSetting = repositoryDetectionStrategy.isExported(metadata);
            if (metadata.getDomainType().equals(Study.class)) {
                return appRepositoryConfig.isExportStudy() ? defaultExportSetting : false;
            }
            ...
            return defaultExportSetting;
        };
        return rds;
    }
}
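Each installation can then switch repositories on or off through its own configuration. As a minimal sketch, assuming Spring Boot's relaxed binding of the fields above, an application.properties entry might look like:
app.repository.export-study=false
app.repository.export-sample=true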

Related

Spring Boot - Adding a read-only data mode

I am wondering what the best solution would be for the following case. Suppose we start with a CRUD app built with Spring Boot. I would like to add a read-only state to this application, which allows data reads only and blocks create, update, and delete operations for the admin role. I am thinking about adding an aspect (@Aspect) that checks the current app state (which is saved in the DB) and kicks in when create, update, or delete operations are invoked. If the app is in the read-only state, an exception is thrown (handled by @ControllerAdvice).
I wonder if adding an aspect is the best option - I don't want to modify existing code. What's your take on that? Moreover, how would you write integration tests for the @Aspect - testing whether the aspect kicks in properly? How could aspects be tested for this case? What are good practices for testing @Aspects (integration tests with @SpringBootTest)?
This honestly seems like an inconvenient way of doing it. Why not just add an interceptor that checks for this? I did something similar before:
@Component
@RequiredArgsConstructor
public class ReadOnlyModeInterceptor implements HandlerInterceptor {
    private final ServerProperties serverProperties;
    @Override
    public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) {
        if (serverProperties.isReadOnlyMode()) {
            String method = request.getMethod();
            boolean isReadOnlyMethod = "GET".equals(method) || "OPTIONS".equals(method);
            String servletPath = request.getServletPath();
            boolean isReadOnlyPath = isReadOnlyPath(servletPath);
            if (!isReadOnlyMethod && isReadOnlyPath) {
                throw new ServiceUnavailableException("Server is in read-only mode.");
            }
        }
        return true;
    }
    private boolean isReadOnlyPath(String servletPath) {
        if (serverProperties.isFullyReadOnly()) {
            return true; // wildcard option, entire server is read-only
        }
        return serverProperties.getReadOnlyPaths().stream().anyMatch(servletPath::contains);
    }
}
You also need to register it:
@RequiredArgsConstructor
@Configuration
public class WebMvcConfig implements WebMvcConfigurer {
    private final ReadOnlyModeInterceptor readOnlyModeInterceptor;
    @Override
    public void addInterceptors(InterceptorRegistry registry) {
        registry.addInterceptor(readOnlyModeInterceptor).order(0);
    }
}
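If you want that exception turned into a meaningful HTTP response (the @ControllerAdvice idea from the question), a minimal sketch could look like the following; note that ServiceUnavailableException is assumed to be an application-defined exception, as in the interceptor above:
@RestControllerAdvice
public class ReadOnlyModeExceptionHandler {
    // Translates the interceptor's exception into a 503 so clients see a clear read-only signal.
    @ExceptionHandler(ServiceUnavailableException.class)
    public ResponseEntity<String> handleReadOnlyMode(ServiceUnavailableException ex) {
        return ResponseEntity.status(HttpStatus.SERVICE_UNAVAILABLE).body(ex.getMessage());
    }
}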

How can I configure Spring R2DBC to use separate read-only and read-write DB URLs?

I have a Spring WebFlux application with the "org.springframework.boot:spring-boot-starter-data-r2dbc" dependency for the DB connection.
I also have a Postgres cluster containing a master and a read-only replica. Both have separate URLs.
I am looking for a way to configure the app to use both of these URLs appropriately.
What is the best way to do this?
Following this PR from @mp911de, I created a custom AbstractRoutingConnectionFactory which can route to different data sources depending on a specific key in Reactor's context.
public class ClusterConnectionFactory extends AbstractRoutingConnectionFactory {
    @Override
    protected Mono<Object> determineCurrentLookupKey() {
        return Mono.deferContextual(Mono::just)
            .filter(it -> it.hasKey("CONNECTION_MODE"))
            .map(it -> it.get("CONNECTION_MODE"));
    }
}
@Configuration
public class ClusterConnectionFactoryConfiguration {
    @Bean
    public ConnectionFactory routingConnectionFactory() {
        var clusterConnFactory = new ClusterConnectionFactory();
        var connectionFactories = Map.of(
            ConnectionMode.READ_WRITE, getDefaultConnFactory(),
            ConnectionMode.READ_ONLY, getReadOnlyConnFactory()
        );
        clusterConnFactory.setTargetConnectionFactories(connectionFactories);
        clusterConnFactory.setDefaultTargetConnectionFactory(getDefaultConnFactory());
        return clusterConnFactory;
    }
    // In this example I used Postgres
    private ConnectionFactory getDefaultConnFactory() {
        return new PostgresqlConnectionFactory(
            PostgresqlConnectionConfiguration.builder()...build());
    }
    private ConnectionFactory getReadOnlyConnFactory() {
        // similar to the above but pointing to the read-only replica
    }
    public enum ConnectionMode { // auxiliary enum as a key
        READ_WRITE,
        READ_ONLY
    }
}
Then I had to extend my repository methods with this contextual info, like:
public <S extends Entity> Mono<UUID> save(final S entity) {
    return repository.save(entity)
        .contextWrite(context -> context.put("CONNECTION_MODE", READ_WRITE));
}
This works, but unfortunately it doesn't look good, in the sense that it is not declarative and it interferes with the reactive chains.
I would be glad if someone suggested a better solution.
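One small improvement (a sketch of my own, not part of the original answer) is to centralize the context key in a helper so individual chains at least don't repeat the string literal:
public final class ConnectionModeSupport {
    // Must match the key read in ClusterConnectionFactory.determineCurrentLookupKey().
    public static final String CONNECTION_MODE = "CONNECTION_MODE";
    private ConnectionModeSupport() {
    }
    // Wraps any Mono so the routing key travels in the Reactor context.
    public static <T> Mono<T> withMode(Mono<T> publisher, ConnectionMode mode) {
        return publisher.contextWrite(ctx -> ctx.put(CONNECTION_MODE, mode));
    }
}
Usage: return ConnectionModeSupport.withMode(repository.save(entity), ConnectionMode.READ_WRITE); - the context write still happens, it is just no longer spelled out in every chain.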

How to trim Swagger docs based on current User Role in Java Spring?

I'm developing an application using Spring Boot, and I'm using Swagger to auto-generate the API docs; I also use swagger-ui.html to interact with those APIs.
I have Spring Security enabled too, and I have users with different roles. Different REST APIs are available to different roles.
Question: how do I configure Swagger to respect Spring's @Secured annotation and trim the operations displayed by swagger-ui.html so that only the operations available to the current user are shown?
I.e. imagine the following controller:
@RestController
@Secured(ROLE_USER)
public class SomeRestController {
    @GetMapping
    @Secured(ROLE_USER_TOP_MANAGER)
    public String getInfoForTopManager() { /*...*/ }
    @GetMapping
    @Secured(ROLE_USER_MIDDLE_MANAGER)
    public String getInfoForMiddleManager() { /*...*/ }
    @GetMapping
    public String getInfoForAnyUser() { /*...*/ }
}
Swagger will show both getInfoForTopManager and getInfoForMiddleManager regardless of the current user's role. If the currently authenticated user has the ROLE_USER_MIDDLE_MANAGER role, I want only the getInfoForMiddleManager and getInfoForAnyUser operations to be available in Swagger.
OK, I think I found a good solution to this question. It consists of two parts:
Extend the controller scanning logic through an OperationBuilderPlugin to retain the roles in Swagger's vendor extensions
Override the ServiceModelToSwagger2MapperImpl bean to filter out operations based on the current security context
In your project this might look a bit different (i.e. most likely you don't have something like securityContextResolver), but I believe you'll get the gist of the solution from the following code.
Part 1: Extend the controller scanning logic to retain roles in Swagger's vendor extensions
@Component
@Order(SwaggerPluginSupport.SWAGGER_PLUGIN_ORDER + 1000)
public class OperationBuilderPluginSecuredAware implements OperationBuilderPlugin {
    @Override
    public void apply(OperationContext context) {
        Set<String> roles = new HashSet<>();
        Secured controllerAnnotation = context.findControllerAnnotation(Secured.class).orNull();
        if (controllerAnnotation != null) {
            roles.addAll(List.of(controllerAnnotation.value()));
        }
        Secured methodAnnotation = context.findAnnotation(Secured.class).orNull();
        if (methodAnnotation != null) {
            roles.addAll(List.of(methodAnnotation.value()));
        }
        if (!roles.isEmpty()) {
            context.operationBuilder().extensions(List.of(new TrimToRoles(roles.toArray(new String[0]))));
        }
    }
    @Override
    public boolean supports(DocumentationType delimiter) {
        return SwaggerPluginSupport.pluginDoesApply(delimiter);
    }
}
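The TrimToRoles type used above is not shown in the answer; a minimal sketch of such a vendor extension (the extension name here is an assumption, any unique key works) might be:
public class TrimToRoles implements VendorExtension<String[]> {
    private final String[] value;
    public TrimToRoles(String[] value) {
        this.value = value;
    }
    @Override
    public String getName() {
        return "x-trim-to-roles"; // assumed custom extension name
    }
    @Override
    public String[] getValue() {
        return value;
    }
}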
Part 2: Filter out operations based on the current security context
@Primary
@Component
public class ServiceModelToSwagger2MapperImplEx extends ServiceModelToSwagger2MapperImpl {
    @Autowired
    private SecurityContextResolver<User> securityContextResolver;
    @Override
    protected io.swagger.models.Operation mapOperation(Operation from) {
        if (from == null) {
            return null;
        }
        if (!isPermittedForCurrentUser(findTrimToRolesExtension(from.getVendorExtensions()))) {
            return null;
        }
        return super.mapOperation(from);
    }
    private boolean isPermittedForCurrentUser(TrimToRoles trimToRoles) {
        if (trimToRoles == null) {
            return true;
        }
        if (securityContextResolver.hasAnyRole(trimToRoles.getValue())) {
            return true;
        }
        return false;
    }
    private TrimToRoles findTrimToRolesExtension(@SuppressWarnings("rawtypes") List<VendorExtension> list) {
        if (CollectionUtils.isEmpty(list)) {
            return null;
        }
        return list.stream().filter(x -> x instanceof TrimToRoles).map(TrimToRoles.class::cast).findFirst()
            .orElse(null);
    }
    @Override
    protected Map<String, Path> mapApiListings(Multimap<String, ApiListing> apiListings) {
        Map<String, Path> paths = super.mapApiListings(apiListings);
        return paths.entrySet().stream().filter(x -> !x.getValue().isEmpty())
            .collect(Collectors.toMap(x -> x.getKey(), v -> v.getValue()));
    }
    @Override
    public Swagger mapDocumentation(Documentation from) {
        Swagger ret = super.mapDocumentation(from);
        Predicate<? super Tag> hasAtLeastOneOperation = tag -> ret.getPaths().values().stream()
            .anyMatch(x -> x.getOperations().stream().anyMatch(y -> y.getTags().contains(tag.getName())));
        ret.setTags(ret.getTags().stream().filter(hasAtLeastOneOperation).collect(Collectors.toList()));
        return ret;
    }
}
P.S. These implementations are not particularly efficient, but given their usage scenario I preferred to keep them simple.

How to configure multiple Couchbase data sources using springboot-data-couchbase?

I am trying to configure multiple Couchbase data sources using springboot-data-couchbase.
This is the way I tried to attach two Couchbase sources with two repositories:
@Configuration
@EnableCouchbaseRepositories("com.xyz.abc")
public class AbcDatasource extends AbstractCouchbaseConfiguration {
    @Override
    protected List<String> getBootstrapHosts() {
        return Collections.singletonList("ip_address_of_couchbase");
    }
    // bucket_name
    @Override
    protected String getBucketName() {
        return "bucket_name";
    }
    // password
    @Override
    protected String getBucketPassword() {
        return "user_password";
    }
    @Override
    @Bean(destroyMethod = "disconnect", name = "COUCHBASE_CLUSTER_2")
    public Cluster couchbaseCluster() throws Exception {
        return CouchbaseCluster.create(couchbaseEnvironment(), "ip_address_of_couchbase");
    }
    @Bean(name = "BUCKET2")
    public Bucket bucket2() throws Exception {
        return this.couchbaseCluster().openBucket("bucket2", "somepassword");
    }
    @Bean(name = "BUCKET2_TEMPLATE")
    public CouchbaseTemplate newTemplateForBucket2() throws Exception {
        CouchbaseTemplate template = new CouchbaseTemplate(
            couchbaseClusterInfo(), // reuse the default bean
            bucket2(),              // the bucket is non-default
            mappingCouchbaseConverter(), translationService()
        );
        template.setDefaultConsistency(getDefaultConsistency());
        return template;
    }
    @Override
    public void configureRepositoryOperationsMapping(RepositoryOperationsMapping baseMapping) {
        baseMapping
            .mapEntity(SomeDAOUsedInSomeRepository.class, newTemplateForBucket2());
    }
}
Similarly:
@Configuration
@EnableCouchbaseRepositories("com.xyz.mln")
public class MlnDatasource extends AbstractCouchbaseConfiguration {...}
Now the problem is that there is no straightforward way to specify a namespace-based data source by attaching different beans to these configurations, as there is in Spring Data JPA, which supports this via entity-manager-factory-ref and transaction-manager-ref.
Because of this, only one configuration is picked up - whichever comes first.
Any suggestion is greatly appreciated.
Related question: Use Spring Data Couchbase to connect to different Couchbase clusters
@anshul you are almost there.
Make one of the data sources @Primary; it will be used for the default bucket.
Wherever you want to use the other bucket, just use the specific bean in your service class with a qualifier. Below is an example:
@Qualifier(value = "BUCKET1_TEMPLATE")
@Autowired
CouchbaseTemplate couchbaseTemplate;
Now you can use this template to perform all Couchbase-related operations on the desired bucket.
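For completeness, marking one template as the default could look like the sketch below, placed in whichever configuration class owns the default bucket. This assumes the spring-data-couchbase version from the question, where AbstractCouchbaseConfiguration exposes couchbaseClusterInfo(), mappingCouchbaseConverter(), translationService() and a couchbaseClient() bean for the default bucket; the bean name is hypothetical:
@Primary
@Bean(name = "DEFAULT_BUCKET_TEMPLATE") // hypothetical name; use the template that should win by default
public CouchbaseTemplate defaultBucketTemplate() throws Exception {
    // Built like newTemplateForBucket2() above, but for this configuration's default bucket.
    CouchbaseTemplate template = new CouchbaseTemplate(
        couchbaseClusterInfo(), couchbaseClient(),
        mappingCouchbaseConverter(), translationService()
    );
    template.setDefaultConsistency(getDefaultConsistency());
    return template;
}
Because this bean is @Primary, an unqualified @Autowired CouchbaseTemplate resolves to it, while the @Qualifier shown above selects the other bucket's template.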

What rules govern using @Modifying and @Query in a JPA repository?

Now that my project is successfully completed, we are trying to document lessons learned. One that still confuses me is the following:
We have a database of addresses and needed autocomplete when a user started typing a street name. Using a JPA repository, we implemented a PString class (simply a persistent wrapper for a String) and then implemented this interface:
@RepositoryRestResource(collectionResourceRel = "locations", path = "locations")
public interface LocationRepository extends JpaRepository<Location, Integer>, LocationRepositoryCustom {
    List<Location> findByStreetNameAndCommunity_ID(@Param("street") String streetName, @Param("commId") Integer commId);
    @Modifying
    @Query("select distinct x.streetName from Location x where x.streetName like :street%")
    List<PString> findStreetNameStartingWith(@Param("street") String streetName);
}
Trying to call locations/search/findStreetNameStartingWith?street=N%20College over the web resulted in:
{"cause":null,"message":"PersistentEntity must not be null!"}
However, we added a controller to call the method:
@RestController
@RequestMapping("/custom/locations")
public class LocationController {
    @Autowired
    private LocationRepository repo;
    @RequestMapping(value = "/findStreetNamesStartingWith", method = RequestMethod.GET)
    public List<PString> findStreetNameStartingWith(
            @Param("streetName") String streetName) {
        return repo.findStreetNameStartingWith(streetName);
    }
}
Calling /custom/locations/findStreetNamesStartingWith?streetName=N%20Coll returns the expected three results. Why does the method not work when called directly, but run like a greyhound when we pipe it through a controller?
Make sure you have configured Spring Data REST properly, for example by adding a RepositoryRestConfiguration customization:
@Configuration
public class CustomizedRestMvcConfiguration extends RepositoryRestMvcConfiguration {
    @Override
    public RepositoryRestConfiguration config() {
        RepositoryRestConfiguration config = super.config();
        config.setBasePath("/custom");
        return config;
    }
}
