Spring saveAndFlush() not working with secondary data source - java

I set up a small Spring Boot project that I use for certain monitoring tasks. It has various endpoints to receive data via web service calls and is supposed to store the data in two different databases.
While everything works fine for the primary database, I cannot save data to the secondary database. Reading the existing data works fine, but I cannot save anything. I always get an exception:
javax.persistence.TransactionRequiredException: no transaction is in progress
This is the code:
Everything related to Light, Motion and Temperature (LMT) is in one database. The information for Power (P) is stored in a second database (the one I have the issues with).
My package setup is as follows:
- [Configuration]
- [Controller]
--- [Light]
--- [Motion]
--- [Temperature]
--- [Power]
- [Model]
--- [LMT-Models]
--- [P-Models]
- [Repository]
--- [lmt]
--- [PowerMonitoring]
In my Configuration package I have two persistence classes, one for LMT and one for Power, both handling the database connections for the respective repositories. The LMT one is the primary one, the Power one the secondary (or rather the non-primary):
Primary data source:
@Configuration
@EnableJpaRepositories(
        basePackages = "com.delta.Monitoring.Repository.lmt",
        entityManagerFactoryRef = "entityManagerFactory",
        transactionManagerRef = "lmtTransactionManager"
)
public class LmtPersistenceConfiguration {

    @Bean(name = "lmtDataSourceProperties")
    @ConfigurationProperties("spring.datasource")
    @Primary
    public DataSourceProperties lmtDataSourceProperties() {
        return new DataSourceProperties();
    }

    @Bean(name = "lmtDataSource")
    @ConfigurationProperties("spring.datasource.configuration")
    @Primary
    public DataSource lmtDataSource() {
        return lmtDataSourceProperties().initializeDataSourceBuilder()
                .type(BasicDataSource.class).build();
    }

    @Bean(name = "entityManagerFactory")
    @Primary
    public LocalContainerEntityManagerFactoryBean entityManagerFactory(
            EntityManagerFactoryBuilder builder) {
        return builder
                .dataSource(lmtDataSource())
                .packages("com.delta.Monitoring.Model.lmt")
                .build();
    }

    @Bean(name = "lmtTransactionManager")
    @Primary
    public PlatformTransactionManager lmtTransactionManager(
            @Qualifier("entityManagerFactory") final LocalContainerEntityManagerFactoryBean lmtEntityManagerFactory) {
        return new JpaTransactionManager(lmtEntityManagerFactory.getObject());
    }
}
Secondary data source:
@Configuration
@EnableJpaRepositories(
        basePackages = "com.delta.Monitoring.Repository.PowerMonitoring",
        entityManagerFactoryRef = "entityManagerFactorySecondary",
        transactionManagerRef = "powerTransactionManager"
)
public class PowerPersistenceConfiguration {

    @Bean(name = "powerDataSourceProperties")
    @ConfigurationProperties("spring.powerdatasource")
    public DataSourceProperties powerDataSourceProperties() {
        return new DataSourceProperties();
    }

    @Bean(name = "powerDataSource")
    @ConfigurationProperties("spring.powerdatasource.configuration")
    public DataSource powerDataSource() {
        return powerDataSourceProperties().initializeDataSourceBuilder()
                .type(BasicDataSource.class).build();
    }

    @Bean(name = "entityManagerFactorySecondary")
    public LocalContainerEntityManagerFactoryBean entityManagerFactory(
            EntityManagerFactoryBuilder builder) {
        return builder
                .dataSource(powerDataSource())
                .packages("com.delta.Monitoring.Model.PowerMonitoring")
                .build();
    }

    @Bean(name = "powerTransactionManager")
    public PlatformTransactionManager powerTransactionManager(
            @Qualifier("entityManagerFactory") final LocalContainerEntityManagerFactoryBean powerEntityManagerFactory) {
        return new JpaTransactionManager(powerEntityManagerFactory.getObject());
    }
}
This is what one of the Power-Repositories looks like. Since I'm not doing anything fancy with it (yet), I just extended the JpaRepository:
@Transactional("powerTransactionManager")
public interface PowerDaysRepository extends JpaRepository<PowerDay, PowerDayId> {
}
And finally, the controller, where I always get the exception:
@RestController
@RequestMapping("power")
public class PowerController {

    private static final Logger myLog = LogManager.getLogger(PowerController.class);

    @Autowired
    PowerDayDetailsRepository powerDayDetailsRepository;

    @PostMapping("/powerdaydetails")
    public boolean insertPowerDayDetails(@RequestBody List<PowerDayDetails> powerDayDetails) {
        myLog.info("POST /power/powerdaydetails");
        powerDayDetailsRepository.deleteAll();
        //List<PowerDayDetails> lines = powerDayDetailsRepository.saveAllAndFlush(powerDayDetails);
        List<PowerDayDetails> lines = powerDayDetailsRepository.saveAll(powerDayDetails);
        myLog.info("Update size: " + lines.size());
        myLog.info("Received data: " + powerDayDetails.size());
        return lines.size() == powerDayDetails.size();
    }
}
When I call the /powerdaydetails endpoint, I want to save the received data in the database: first delete everything, then save the newly received data.
When I use the saveAll() method, I don't get the exception, but nothing is stored in the database either. This is the log:
2022-10-20 10:00:12.067 INFO 22842 --- [http-nio-8321-exec-88] c.m.H.Controller.Power.PowerController : POST /power/powerdaydetails
2022-10-20 10:00:12.639 INFO 22842 --- [http-nio-8321-exec-88] c.m.H.Controller.Power.PowerController : Update size: 582
2022-10-20 10:00:12.639 INFO 22842 --- [http-nio-8321-exec-88] c.m.H.Controller.Power.PowerController : Received data: 582
When I use the saveAllAndFlush() method, the above-mentioned exception occurs and, needless to say, nothing gets stored in the database either.
I've read a lot about @Transactional and also about @Modifying, but that never solved my problem. The primary data source and its repositories and controllers work perfectly and save all the data directly. It seems that I have made a mistake with the second data source, or haven't told Spring where/how to find a transaction for it.

It's quite simple and was in front of my eyes the whole time:
In the secondary data source configuration I named the entity manager factory bean "entityManagerFactorySecondary", but in the transaction manager method I referred to "entityManagerFactory". As a result, the "power" transaction manager was actually built on the primary entity manager factory, so no transaction was ever opened for the secondary persistence unit.
This is how the two beans must look:
@Bean(name = "entityManagerFactorySecondary")
public LocalContainerEntityManagerFactoryBean entityManagerFactory(
        EntityManagerFactoryBuilder builder) {
    return builder
            .dataSource(powerDataSource())
            .packages("com.delta.Monitoring.Model.PowerMonitoring")
            .build();
}

@Bean(name = "powerTransactionManager")
public PlatformTransactionManager powerTransactionManager(
        @Qualifier("entityManagerFactorySecondary") final LocalContainerEntityManagerFactoryBean powerEntityManagerFactory) {
    return new JpaTransactionManager(powerEntityManagerFactory.getObject());
}

Related

AbstractRoutingDataSource mappers only work with default datasource

I have 2 datasources: a main one (the default) and a secondary one. I have an AbstractRoutingDataSource implementation which chooses a datasource based on a ThreadLocal variable and falls back to the default datasource if the ThreadLocal is empty. A very vanilla, like-in-the-tutorial situation. In addition, I have a bit of custom JdbcTemplate configuration; it's needed to make String<->jsonb conversion work. The problem is that this configuration only takes effect when I fall back to the main datasource. As soon as the AbstractRoutingDataSource picks the non-main datasource, I get errors that the custom converters should prevent.
Why is this, and what can I do to make the custom configuration work on all datasources, no matter which one the AbstractRoutingDataSource ends up picking?
application.yaml:
ds1:
  username: smth1
  password: smth1
  host: smth1
ds2:
  username: smth2
  password: smth2
  host: smth2
RoutingDatasource.kt:
class RoutingDatasource(
    mainDs: DataSource,
    customDatasources: Map<String, DataSource>
) : AbstractRoutingDataSource() {

    companion object {
        private val MAIN_DS_NAME = "mainDs"
        val holder = ThreadLocal<String>()
    }

    init {
        setDefaultTargetDataSource(mainDs)
        setTargetDataSources(customDatasources + (MAIN_DS_NAME to mainDs))
    }

    override fun determineCurrentLookupKey() = holder.get() ?: MAIN_DS_NAME
}
CustomJdbcConfiguration.kt:
@Configuration
@EnableJdbcRepositories(
    basePackages = ["my.package"],
    jdbcOperationsRef = "mainJdbcTemplate",
    dataAccessStrategyRef = "mainDataAccessStrategy"
)
class CustomJdbcConfiguration(
    private val readingConverters: List<Converter<PGobject, *>>,
    private val writingConverters: List<Converter<*, JdbcValue>>
) {

    @Bean("ds1")
    fun ds1(): DataSource {
        // build ds1
    }

    @Bean("ds2")
    fun ds2(): DataSource {
        // build ds2
    }

    @Bean
    @Primary
    fun routingDataSource(
        @Qualifier("ds1") ds1: DataSource,
        @Qualifier("ds2") ds2: DataSource
    ): DataSource {
        return RoutingDatasource(ds1, mapOf("secondary" to ds2))
    }

    @Bean("mainJdbcConversions")
    @Primary
    fun jdbcCustomConversions(): JdbcCustomConversions {
        return JdbcCustomConversions(
            listOf(
                JsonbReadingConverter(),
                JsonbWritingConverter(),
                JsonReadingConverter()
            ) + readingConverters + writingConverters
        )
    }

    @Bean
    @Primary
    fun jdbcTemplate(dataSource: DataSource, properties: JdbcProperties): JdbcTemplate {
        val jdbcTemplate = JdbcTemplate(dataSource)
        val template = properties.template
        jdbcTemplate.fetchSize = template.fetchSize
        jdbcTemplate.maxRows = template.maxRows
        if (template.queryTimeout != null) {
            jdbcTemplate.queryTimeout = template.queryTimeout.seconds.toInt()
        }
        return jdbcTemplate
    }

    @Bean("mainJdbcTemplate")
    @Primary
    fun namedParameterJdbcTemplate(jdbcTemplate: JdbcTemplate): NamedParameterJdbcTemplate {
        return NamedParameterJdbcTemplate(jdbcTemplate)
    }

    @Bean("mainJdbcConverter")
    @Primary
    fun jdbcConverter(
        mappingContext: JdbcMappingContext,
        @Qualifier("mainJdbcTemplate") operations: NamedParameterJdbcOperations,
        @Lazy relationResolver: RelationResolver,
        @Qualifier("mainJdbcConversions") conversions: JdbcCustomConversions,
        dialect: Dialect
    ): JdbcConverter {
        val jdbcTypeFactory = DefaultJdbcTypeFactory(operations.jdbcOperations)
        return BasicJdbcConverter(
            mappingContext, relationResolver, conversions, jdbcTypeFactory, dialect.identifierProcessing
        )
    }

    @Primary
    @Bean("mainDataAccessStrategy")
    fun dataAccessStrategyBean(
        @Qualifier("mainJdbcTemplate") operations: NamedParameterJdbcOperations,
        @Qualifier("mainJdbcConverter") jdbcConverter: JdbcConverter,
        context: JdbcMappingContext,
        dialect: Dialect
    ): DataAccessStrategy {
        return DefaultDataAccessStrategy(
            SqlGeneratorSource(context, jdbcConverter, dialect),
            context,
            jdbcConverter,
            operations,
            SqlParametersFactory(context, jdbcConverter, dialect),
            InsertStrategyFactory(operations, BatchJdbcOperations(operations.jdbcOperations), dialect)
        )
    }
}
Now if I save an object whose Kotlin class has a String field while the corresponding DB column is jsonb, and the holder is not set, then everything works. If I set the holder to point to the non-main database, the converters don't work anymore and I get an error like this:
ERROR: column "your_column" is of type jsonb but expression is of type character varying
Hint: You will need to rewrite or cast the expression.
Position: 174
Even if I flip the datasources the effect is the same: it always works with the fallback one and never when it actually routes to a secondary one.
P.S. In my test case the actual database and the connection properties are exactly the same, so the problem is not on the database side.
This had nothing to do with multiple datasources or custom mappings directly.
The issue was that my actual datasources were Hikari datasources, and I had forgotten to specify the property data-source-properties: stringtype=unspecified for them. As soon as I did that, everything started working correctly.
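For reference, with Spring Boot's standard Hikari property binding that looks roughly like the sketch below; since the datasources in this question are built manually in beans, the equivalent there is to call addDataSourceProperty("stringtype", "unspecified") on each HikariConfig:
spring:
  datasource:
    hikari:
      data-source-properties:
        stringtype: unspecified # the PostgreSQL driver then sends strings untyped, so jsonb columns accept them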

Hibernate isolated integration tests

I'm a little bit new to Hibernate, so I started with simple things.
According to the F.I.R.S.T. test principles, unit tests must be I - isolated.
I'm trying to apply this to integration tests for the repository layer (Hibernate/JPA) using the @Transactional annotation:
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = RepositoryConfig.class)
@Transactional
public class EmployeeRepositoryTest extends AbstractRepositoryTest {

    @Autowired
    private IEmployeeRepository employeeRepository;

    @Test
    public void saveTest() {
        Employee expectedEmployee = buildEmployee(1, "Parker");
        employeeRepository.save(expectedEmployee);
        Employee actualEmployee = employeeRepository.findById(1);
        assertEquals(expectedEmployee, actualEmployee);
    }

    private Employee buildEmployee(long id, String name) {
        Employee employee = new Employee();
        employee.setId(id);
        employee.setName(name);
        return employee;
    }
}
However, since both methods run within one transaction, Hibernate does not actually execute the statements (as I understand it) - at least there's no insert line in the logs.
If I insert data by adding a script to the embedded datasource, like:
INSERT INTO employee (employee_id, employee_name) VALUES (1, 'name');
and then try to save an employee with the same id but a new name, the test succeeds. And that's the most confusing thing for me.
I saw a solution that autowires the EntityManager and calls its flush() method, but I don't like it, since I try to write tests without being tied to Hibernate/JPA.
I also tried different flush modes, but that didn't help either.
Q1: Is there a way to make Hibernate run the queries right after a repository method is called?
Q2: Is it good practice to call EntityManager#flush explicitly in save/update/delete repository methods?
My Employee:
@Entity
@Table(name = "employee")
public class Employee {

    @Id
    @Column(name = "employee_id")
    private long id;

    @Column(name = "employee_name")
    private String name;

    // the rest of the required things (constructor, getters/setters, etc.)
}
and RepositoryConfig:
@Configuration
@EnableTransactionManagement
@ComponentScan("org.my.package")
public class RepositoryConfig {

    @Bean
    public DataSource getDataSource() {
        return new EmbeddedDatabaseBuilder()
                .setType(EmbeddedDatabaseType.H2)
                .build();
    }

    @Bean
    public JpaTransactionManager transactionManager() {
        return new JpaTransactionManager();
    }

    @Bean
    @Autowired
    public HibernateTemplate getHibernateTemplate(SessionFactory sessionFactory) {
        return new HibernateTemplate(sessionFactory);
    }

    @Bean
    public LocalSessionFactoryBean getSessionFactory() {
        LocalSessionFactoryBean sessionFactory = new LocalSessionFactoryBean();
        sessionFactory.setDataSource(getDataSource());
        sessionFactory.setPackagesToScan("org.my.package.model");
        sessionFactory.setHibernateProperties(getHibernateProperties());
        return sessionFactory;
    }

    private Properties getHibernateProperties() {
        Properties properties = new Properties();
        properties.put("hibernate.dialect", "H2Dialect");
        properties.put("hibernate.show_sql", true);
        return properties;
    }
}
You have no option but to interact with the entity manager to get these tests working as you expect: not to trigger a flush (that can be done by calling the saveAndFlush(...) method on your repository rather than just save(...)), but to clear the first-level cache:
https://docs.spring.io/spring-data/jpa/docs/current/api/org/springframework/data/jpa/repository/JpaRepository.html#saveAndFlush-S-
@Test
public void saveTest() {
    Employee expectedEmployee = buildEmployee(1, "Parker");

    // call saveAndFlush for an immediate flush
    employeeRepository.saveAndFlush(expectedEmployee);

    // now you will need to clear the persistence context to actually
    // trigger a load from the database, as the employee with id 1 is
    // already in the persistence context.
    // without the line below you will not see a db select
    entityManager.clear();

    Employee actualEmployee = employeeRepository.findById(1);
    assertEquals(expectedEmployee, actualEmployee);
}
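For completeness, the entityManager used above still has to be injected into the test; a minimal sketch (the field name is an assumption):
// requires import javax.persistence.EntityManager and javax.persistence.PersistenceContext
@PersistenceContext
private EntityManager entityManager; // used only to clear Hibernate's first-level cache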
An alternative to clearing the persistence context is to fall back to using raw JDBC to read the updated row(s).
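A sketch of that JDBC route, assuming the employee table from the question and an autowired test DataSource:
// reads straight from the database, bypassing the persistence context entirely
JdbcTemplate jdbc = new JdbcTemplate(dataSource);
String actualName = jdbc.queryForObject(
        "SELECT employee_name FROM employee WHERE employee_id = ?",
        String.class, 1L);
assertEquals("Parker", actualName);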
"But I don't like it, since I try to write tests without being tied to Hibernate/JPA." You are testing a persistence mechanism implemented in Hibernate/JPA, and your repository is just an abstraction that lets you avoid direct calls to it, so this seems a slightly ridiculous statement.

How to have multiple cache manager configuration in multiple modules/projects spring cache java

I have two different modules, let's say Project A and Project B, where Project B is imported/used in Project A. Project B already has a CacheManager.
Project B
public class CacheConfig {

    @Bean
    public CacheManager cacheManager() {
        // using SimpleCacheManager()
    }
}
But now I plan to implement a CacheManager in Project A for some other purpose.
class SomeCacheConfig {

    @Bean
    public CacheManager someCacheManager() {
        // using SimpleCacheManager()
    }
}
While loading, the application throws the exception below.
java.lang.IllegalStateException: No CacheResolver specified, and no unique bean of type CacheManager found. Mark one as primary (or give it the name 'cacheManager') or declare a specific CacheManager to use, that serves as the default one.
Can you please help me with how to achieve multiple CacheManagers in multiple modules/projects?
OK then: put @Primary on the CacheManager bean that will be used as the default.
@Primary
@Bean(name = "primaryCacheManager")
public CacheManager primaryCacheManager() {
    return new SimpleCacheManager();
}

@Bean(name = "myCacheManager")
public CacheManager myCacheManager() {
    return new SimpleCacheManager();
}
And when you want to use the other one (i.e. not the default), explicitly specify the name of the CacheManager bean with the @Qualifier annotation.
@Autowired
@Qualifier("myCacheManager")
private CacheManager myCacheManager;
Or, if you use the annotation-based Spring Cache implementation, you can also specify a CacheManager name as a property of those annotations:
@Cacheable(value = "some", cacheManager = "myCacheManager")
public String getSome() {
    return "";
}
You can use the CompositeCacheManager implementation provided by Spring (https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/cache/support/CompositeCacheManager.html).
This allows you to compose a list of cache managers: the composite manager iterates through the list and returns the cache from the first manager it exists in. Note the caveat from the Javadoc: regular CacheManagers that this composite manager delegates to need to return null from getCache(String) if they are unaware of the specified cache name, allowing for iteration to the next delegate in line; however, most CacheManager implementations fall back to lazy creation of named caches once requested, so check out the specific configuration details for a 'static' mode with fixed cache names, if available.
What eventually worked for me, as Erik Ahlswede suggested:
@Bean
public CacheManager cacheManager() {
    return new CompositeCacheManager(
            new ConcurrentMapCacheManager("cacheA") {
                @Override
                protected Cache createConcurrentMapCache(final String name) {
                    return new ConcurrentMapCache(name,
                            CacheBuilder.newBuilder()
                                    .expireAfterWrite(CACHE_TTL_IN_SECONDS, TimeUnit.SECONDS)
                                    .maximumSize(MAX_ENTRIES_IN_CACHE)
                                    .build().asMap(), false);
                }
            },
            new ConcurrentMapCacheManager("cacheB") {
                @Override
                protected Cache createConcurrentMapCache(final String name) {
                    return new ConcurrentMapCache(name,
                            CacheBuilder.newBuilder()
                                    .expireAfterWrite(CACHE_TTL_IN_SECONDS, TimeUnit.SECONDS)
                                    .maximumSize(MAX_ENTRIES_IN_CACHE)
                                    .build().asMap(), false);
                }
            }
    );
}
And then use it with:
// the cache name must be one of the caches configured above ("cacheA" or "cacheB")
@Cacheable(cacheNames = "cacheA")
public String someComplicatedAction() {
    // ...
}
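As a side note, CompositeCacheManager can also be told to fall back to a no-op cache for cache names that no delegate knows, instead of returning null (a sketch; managerA/managerB stand for the two ConcurrentMapCacheManagers above):
CompositeCacheManager composite = new CompositeCacheManager(managerA, managerB);
// unknown cache names now resolve to a no-op cache instead of null,
// so @Cacheable against a missing cache silently skips caching rather than failing
composite.setFallbackToNoOpCache(true);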

ACL security in Spring Boot

I am having issues setting up ACL through Java config in a Spring Boot application. I have created a small project to reproduce the issues.
I have tried a few different approaches. The first issue I had was with EhCache, and after I fixed that (I assume I did) I couldn't log in any more, and it looks like all the data is gone.
There are 4 classes with different configurations:
ACLConfig1.class
ACLConfig2.class
ACLConfig3.class
ACLConfig4.class
All @PreAuthorize and @PostAuthorize annotations are working as expected, except hasPermission.
The controller holds 4 endpoints: one for User, one for Admin, one public, and the last one, which gives me a headache: @PostAuthorize("hasPermission(returnObject, 'administration')")
I am pretty sure that the inserts in the DB are correct. This class is one of four, the last one that I have tried:
@Configuration
@EnableGlobalMethodSecurity(prePostEnabled = true, securedEnabled = true)
public class ACLConfig4 {

    @Autowired
    DataSource dataSource;

    @Bean
    public EhCacheBasedAclCache aclCache() {
        return new EhCacheBasedAclCache(aclEhCacheFactoryBean().getObject(), permissionGrantingStrategy(), aclAuthorizationStrategy());
    }

    @Bean
    public EhCacheFactoryBean aclEhCacheFactoryBean() {
        EhCacheFactoryBean ehCacheFactoryBean = new EhCacheFactoryBean();
        ehCacheFactoryBean.setCacheManager(aclCacheManager().getObject());
        ehCacheFactoryBean.setCacheName("aclCache");
        return ehCacheFactoryBean;
    }

    @Bean
    public EhCacheManagerFactoryBean aclCacheManager() {
        return new EhCacheManagerFactoryBean();
    }

    @Bean
    public DefaultPermissionGrantingStrategy permissionGrantingStrategy() {
        ConsoleAuditLogger consoleAuditLogger = new ConsoleAuditLogger();
        return new DefaultPermissionGrantingStrategy(consoleAuditLogger);
    }

    @Bean
    public AclAuthorizationStrategy aclAuthorizationStrategy() {
        return new AclAuthorizationStrategyImpl(new SimpleGrantedAuthority("ROLE_ADMINISTRATOR"));
    }

    @Bean
    public LookupStrategy lookupStrategy() {
        return new BasicLookupStrategy(dataSource, aclCache(), aclAuthorizationStrategy(), new ConsoleAuditLogger());
    }

    @Bean
    public JdbcMutableAclService aclService() {
        JdbcMutableAclService service = new JdbcMutableAclService(dataSource, lookupStrategy(), aclCache());
        return service;
    }

    @Bean
    public DefaultMethodSecurityExpressionHandler defaultMethodSecurityExpressionHandler() {
        return new DefaultMethodSecurityExpressionHandler();
    }

    @Bean
    public MethodSecurityExpressionHandler createExpressionHandler() {
        DefaultMethodSecurityExpressionHandler expressionHandler = defaultMethodSecurityExpressionHandler();
        expressionHandler.setPermissionEvaluator(new AclPermissionEvaluator(aclService()));
        expressionHandler.setPermissionCacheOptimizer(new AclPermissionCacheOptimizer(aclService()));
        return expressionHandler;
    }
}
What am I missing here? Why do I have no data when I use ACLConfig3.class or ACLConfig4.class? Is there any example of how this should be configured programmatically in Spring Boot?
The reason why you have no data was a bit tricky to find out. As soon as you define a MethodSecurityExpressionHandler bean in your config, there is no data in the database tables, because your data.sql file isn't executed.
Before explaining why data.sql isn't executed, I'd first like to point out that you're not using the file as intended.
data.sql is executed by spring-boot after Hibernate has been initialized and normally contains only DML statements. Your data.sql contains both DDL (schema) statements and DML (data) statements. This isn't ideal, as some of your DDL statements clash with Hibernate's hibernate.hbm2ddl.auto behaviour (note that spring-boot uses 'create-drop' when an embedded DataSource is being used). You should put your DDL statements in schema.sql and your DML statements in data.sql. As you're manually defining all tables, you should disable hibernate.hbm2ddl.auto by adding spring.jpa.hibernate.ddl-auto=none to application.properties.
That being said, let's take a look at why data.sql isn't executed.
The execution of data.sql is triggered via an ApplicationEvent that's fired via a BeanPostProcessor. This BeanPostProcessor (DataSourceInitializedPublisher) is created as a part of spring-boot's Hibernate/JPA auto configuration (see org.springframework.boot.autoconfigure.orm.jpa.HibernateJpaAutoConfiguration, org.springframework.boot.autoconfigure.orm.jpa.DataSourceInitializedPublisher and org.springframework.boot.autoconfigure.jdbc.DataSourceInitializer).
Normally the DataSourceInitializedPublisher is created before the (embedded) DataSource and everything works as expected, but defining a custom MethodSecurityExpressionHandler alters the normal bean creation order.
As you've configured @EnableGlobalMethodSecurity, you're automatically importing GlobalMethodSecurityConfiguration.
spring-security related beans are created early on. As your MethodSecurityExpressionHandler requires a DataSource for the ACL stuff and the spring-security related beans require your custom MethodSecurityExpressionHandler, the DataSource is created earlier than usual; in fact it's created so early on that spring-boot's DataSourceInitializedPublisher isn't created yet.
The DataSourceInitializedPublisher is created later on but as it didn't notice the creation of a DataSource bean, it also doesn't trigger the execution of data.sql.
So long story short: the security configuration alters the normal bean creation order which results in data.sql not being loaded.
I guess that fixing the bean creation order would do the trick, but as I don't know how (without further experimentation), I propose the following solution: manually define your DataSource and take care of data initialization.
@Configuration
public class DataSourceConfig {

    @Bean
    public EmbeddedDatabase dataSource() {
        return new EmbeddedDatabaseBuilder().setType(EmbeddedDatabaseType.H2)
                // as your data.sql file contains both DDL & DML you might want to rename it (e.g. init.sql)
                .addScript("classpath:/data.sql")
                .build();
    }
}
As your data.sql file contains all DDL required by your application, you can disable hibernate.hbm2ddl.auto. Add spring.jpa.hibernate.ddl-auto=none to application.properties.
When you define your own DataSource, spring-boot's DataSourceAutoConfiguration normally backs out, but if you want to be sure you can also exclude it (optional).
@SpringBootConfiguration
@EnableAutoConfiguration(exclude = DataSourceAutoConfiguration.class)
@ComponentScan
@EnableCaching
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}
This should fix your 'no data' problem. But in order to get everything working as expected you need to make 2 more modifications.
First of all, you should only define one MethodSecurityExpressionHandler bean. Currently you're defining 2 MethodSecurityExpressionHandler beans; Spring Security won't know which one to use and will (silently) use its own internal MethodSecurityExpressionHandler instead. See org.springframework.security.config.annotation.method.configuration.GlobalMethodSecurityConfiguration#setMethodSecurityExpressionHandler.
@Configuration
@EnableGlobalMethodSecurity(prePostEnabled = true, securedEnabled = true)
public class MyACLConfig {

    //...

    @Bean
    public MethodSecurityExpressionHandler createExpressionHandler() {
        DefaultMethodSecurityExpressionHandler securityExpressionHandler = new DefaultMethodSecurityExpressionHandler();
        securityExpressionHandler.setPermissionEvaluator(new AclPermissionEvaluator(aclService()));
        securityExpressionHandler.setPermissionCacheOptimizer(new AclPermissionCacheOptimizer(aclService()));
        return securityExpressionHandler;
    }
}
The last thing you need to do is make the getId() method in Car public.
@Entity
public class Car {

    //...

    public long getId() {
        return id;
    }

    //...
}
The standard ObjectIdentityRetrievalStrategy will look for a public method 'getId()' when trying to determine an object's identity during ACL permission evaluation.
(Note that I've based my answer upon ACLConfig4.)

How do I get AbstractRoutingDataSource to work properly?

I am currently trying to get a Hibernate SessionFactory created when using an AbstractRoutingDataSource, but when Spring goes through the bean initialization process, it tries to determine a lookup key. Because I didn't set a default, it will be null. I don't want to have to set a default - I'd rather delay this until I need to create a session and make an actual query.
I did find other people having the same problem. Here is an old archived post from 2005 that describes the exact same problem I am having. Unfortunately, there wasn't really an answer to it:
http://forum.spring.io/forum/spring-projects/data/108464-abstractroutingdatasource-not-routing-when-used-with-hibernate-sample-attached
If I set a default value, everything loads "fine" - but then changing the thread-local value that the routing datasource depends on has zero effect on which database is used - it seems locked in at that point.
Any ideas?
AbstractRoutingDataSource implements DataSource, so it can stand in for a data source that would otherwise have to be configured at startup.
You can load the data source list into a DataSourceProperties component:
@Component
@ConfigurationProperties(prefix = "tenants")
public class DataSourceProperties {

    private Map<Object, Object> datasources = new LinkedHashMap<>();

    public Map<Object, Object> getDatasources() {
        return datasources;
    }

    public void setDatasources(Map<String, Map<String, String>> datasources) {
        datasources
                .forEach((key, value) -> this.datasources.put(key, convert(value)));
    }

    private DataSource convert(Map<String, String> source) {
        return DataSourceBuilder.create()
                .url(source.get("jdbcUrl"))
                .driverClassName(source.get("driverClassName"))
                .username(source.get("username"))
                .password(source.get("password"))
                .build();
    }
}
Create AbstractRoutingDataSource:
public class TenantAwareRoutingDataSource extends AbstractRoutingDataSource {

    @Override
    protected Object determineCurrentLookupKey() {
        return ThreadLocalStorage.getTenantName();
    }
}
And configure your datasource to be AbstractRoutingDataSource:
@Configuration
public class DataSourceConfig {

    private final DataSourceProperties dataSourceProperties;

    public DataSourceConfig(DataSourceProperties dataSourceProperties) {
        this.dataSourceProperties = dataSourceProperties;
    }

    @Bean
    public DataSource getDataSource() {
        TenantAwareRoutingDataSource tenantAwareRoutingDataSource = new TenantAwareRoutingDataSource();
        tenantAwareRoutingDataSource.setTargetDataSources(dataSourceProperties.getDatasources());
        tenantAwareRoutingDataSource.afterPropertiesSet();
        return tenantAwareRoutingDataSource;
    }
}
You should also implement the ThreadLocalStorage to store the tenant identifier and let the AbstractRoutingDataSource retrieve it to determine which data source to use.
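A minimal sketch of such a ThreadLocalStorage (the class and method names just have to match what TenantAwareRoutingDataSource calls):
public final class ThreadLocalStorage {

    private static final ThreadLocal<String> TENANT = new ThreadLocal<>();

    private ThreadLocalStorage() {
    }

    public static void setTenantName(String tenantName) {
        TENANT.set(tenantName);
    }

    public static String getTenantName() {
        return TENANT.get();
    }

    // clear after each request so pooled threads don't leak the tenant to the next caller
    public static void clear() {
        TENANT.remove();
    }
}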
To prevent Spring from auto-configuring the datasource at startup:
spring:
  jpa:
    open-in-view: false  # get rid of the OIV warning
    show-sql: true
    database: postgresql # do not auto-detect the database
    hibernate:
      ddl-auto: none     # prevent Hibernate from making automatic changes to the DDL schema
    properties:
      hibernate:
        dialect: org.hibernate.dialect.PostgreSQLDialect
  datasource:
    initialization-mode: never # prevent JPA from trying to initialize
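With that in place, a caller (typically a web filter) selects the tenant before any repository call; a sketch, assuming the ThreadLocalStorage above and a hypothetical employeeRepository:
try {
    // the key must match an entry under tenants.datasources in the configuration
    ThreadLocalStorage.setTenantName("tenantA");
    employeeRepository.findAll(); // routed to tenantA's DataSource
} finally {
    ThreadLocalStorage.clear(); // never leak the tenant to the next request on this thread
}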
