How do I get AbstractRoutingDataSource to work properly? - java

I am currently trying to get a Hibernate Session Factory created when using an AbstractRoutingDataSource, but when it goes through the bean initialization process, it tries to determine a lookup key. Because I didn't set a default, it'll be null. I don't want to have to set a default - I'd rather delay this until I need to create a session and make an actual query.
I did find other people having the same problem. Here is an old archived post from 2005 that describes the exact same problem I am having. Unfortunately, there wasn't really an answer to it:
http://forum.spring.io/forum/spring-projects/data/108464-abstractroutingdatasource-not-routing-when-used-with-hibernate-sample-attached
If I set a default value, everything loads "fine" - but then changing the thread-local value that the routing datasource depends on has zero effect on which database is used - it seems locked in at that point.
Any ideas?

AbstractRoutingDataSource implements DataSource, so it can stand in for any data source that has to be configured at startup.
You can load the data source list into a DataSourceProperties component:
@Component
@ConfigurationProperties(prefix = "tenants")
public class DataSourceProperties {

    private Map<Object, Object> datasources = new LinkedHashMap<>();

    public Map<Object, Object> getDatasources() {
        return datasources;
    }

    public void setDatasources(Map<String, Map<String, String>> datasources) {
        datasources.forEach((key, value) -> this.datasources.put(key, convert(value)));
    }

    // Build a DataSource from the properties of a single tenant entry
    private DataSource convert(Map<String, String> source) {
        return DataSourceBuilder.create()
                .url(source.get("jdbcUrl"))
                .driverClassName(source.get("driverClassName"))
                .username(source.get("username"))
                .password(source.get("password"))
                .build();
    }
}
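For reference, the @ConfigurationProperties(prefix = "tenants") binding above expects a structure along these lines in application.yml (a minimal sketch; the tenant names, URLs and credentials are placeholders):

tenants:
  datasources:
    tenant1:
      jdbcUrl: jdbc:postgresql://localhost:5432/tenant1
      driverClassName: org.postgresql.Driver
      username: tenant1
      password: secret
    tenant2:
      jdbcUrl: jdbc:postgresql://localhost:5432/tenant2
      driverClassName: org.postgresql.Driver
      username: tenant2
      password: secret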
Create AbstractRoutingDataSource:
public class TenantAwareRoutingDataSource extends AbstractRoutingDataSource {

    @Override
    protected Object determineCurrentLookupKey() {
        return ThreadLocalStorage.getTenantName();
    }
}
And configure your datasource to be AbstractRoutingDataSource:
@Configuration
public class DataSourceConfig {

    private final DataSourceProperties dataSourceProperties;

    public DataSourceConfig(DataSourceProperties dataSourceProperties) {
        this.dataSourceProperties = dataSourceProperties;
    }

    @Bean
    public DataSource getDataSource() {
        TenantAwareRoutingDataSource tenantAwareRoutingDataSource = new TenantAwareRoutingDataSource();
        tenantAwareRoutingDataSource.setTargetDataSources(dataSourceProperties.getDatasources());
        tenantAwareRoutingDataSource.afterPropertiesSet();
        return tenantAwareRoutingDataSource;
    }
}
You should also implement a ThreadLocalStorage to hold the tenant identifier, so the AbstractRoutingDataSource can retrieve it to determine which data source to use.
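A minimal sketch of such a ThreadLocalStorage, assuming the class and method names used above (whether you clear the value per request, or use an InheritableThreadLocal, depends on your request lifecycle):

public class ThreadLocalStorage {

    private static final ThreadLocal<String> tenant = new ThreadLocal<>();

    public static void setTenantName(String tenantName) {
        tenant.set(tenantName);
    }

    public static String getTenantName() {
        return tenant.get();
    }

    // Clear the value when the request is done so pooled threads
    // don't leak one tenant's key into another tenant's request.
    public static void clear() {
        tenant.remove();
    }
}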
To prevent Spring from auto-configuring the datasource at startup:
spring:
  jpa:
    open-in-view: false # Get Rid of OIV Warning
    show-sql: true
    database: postgresql # Do not Auto-Detect the Database
    hibernate:
      ddl-auto: none # Prevent Hibernate from Automatic Changes to the DDL Schema
    properties:
      hibernate:
        dialect: org.hibernate.dialect.PostgreSQLDialect
  datasource:
    initialization-mode: never # Prevent JPA from trying to Initialize

Related

AbstractRoutingDataSource mappers only work with default datasource

I have 2 datasources. A main one (default) and secondary one. I have an AbstractRoutingDatasource implementation which chooses a datasource based on ThreadLocal variable and falls back to the default datasource if the ThreadLocal is empty. Very vanilla "like-tutorial" situation. In addition to that I have a bit of custom JdbcTemplate configuration. It's needed to make some String<->jsonb conversion work. Problem is that this configuration only takes effect if I fall back to the main datasource. As soon as the AbstractRoutingDatasource picks the non-main datasource, I get errors which I should not get because of the custom converters.
Why is this, and what can I do to make the custom configuration work on all datasources, no matter which one the AbstractRoutingDatasource ends up picking?
application.yaml:
ds1:
  username: smth1
  password: smth1
  host: smth1
ds2:
  username: smth2
  password: smth2
  host: smth2
RoutingDatasource.kt:
class RoutingDatasource(
    mainDs: DataSource,
    customDatasources: Map<String, DataSource>
) : AbstractRoutingDataSource() {

    companion object {
        private const val MAIN_DS_NAME = "mainDs"
        val holder = ThreadLocal<String>()
    }

    init {
        setDefaultTargetDataSource(mainDs)
        setTargetDataSources(customDatasources + (MAIN_DS_NAME to mainDs))
    }

    override fun determineCurrentLookupKey() = holder.get() ?: MAIN_DS_NAME
}
CustomJdbcConfiguration.kt:
@Configuration
@EnableJdbcRepositories(
    basePackages = ["my.package"],
    jdbcOperationsRef = "mainJdbcTemplate",
    dataAccessStrategyRef = "mainDataAccessStrategy"
)
class CustomJdbcConfiguration(
    private val readingConverters: List<Converter<PGobject, *>>,
    private val writingConverters: List<Converter<*, JdbcValue>>
) {

    @Bean("ds1")
    fun ds1(): DataSource {
        // build ds1
    }

    @Bean("ds2")
    fun ds2(): DataSource {
        // build ds2
    }

    @Bean
    @Primary
    fun routingDataSource(
        @Qualifier("ds1") ds1: DataSource,
        @Qualifier("ds2") ds2: DataSource
    ): DataSource {
        return RoutingDatasource(ds1, mapOf("secondary" to ds2))
    }
#Bean("mainJdbcConversions")
#Primary
fun jdbcCustomConversions(): JdbcCustomConversions {
return JdbcCustomConversions(
listOf(
JsonbReadingConverter(),
JsonbWritingConverter(),
JsonReadingConverter()
) + readingConverters + writingConverters
)
}
#Bean
#Primary
fun jdbcTemplate(dataSource: DataSource, properties: JdbcProperties): JdbcTemplate {
val jdbcTemplate = JdbcTemplate(dataSource)
val template = properties.template
jdbcTemplate.fetchSize = template.fetchSize
jdbcTemplate.maxRows = template.maxRows
if (template.queryTimeout != null) {
jdbcTemplate.queryTimeout = template.queryTimeout.seconds.toInt()
}
return jdbcTemplate
}
#Bean("mainJdbcTemplate")
#Primary
fun namedParameterJdbcTemplate(jdbcTemplate: JdbcTemplate): NamedParameterJdbcTemplate {
return NamedParameterJdbcTemplate(jdbcTemplate)
}
#Bean("mainJdbcConverter")
#Primary
fun jdbcConverter(
mappingContext: JdbcMappingContext,
#Qualifier("mainJdbcTemplate") operations: NamedParameterJdbcOperations,
#Lazy relationResolver: RelationResolver,
#Qualifier("mainJdbcConversions") conversions: JdbcCustomConversions,
dialect: Dialect
): JdbcConverter {
val jdbcTypeFactory = DefaultJdbcTypeFactory(operations.jdbcOperations)
return BasicJdbcConverter(
mappingContext, relationResolver, conversions, jdbcTypeFactory, dialect.identifierProcessing
)
}
#Primary
#Bean("mainDataAccessStrategy")
fun dataAccessStrategyBean(
#Qualifier("mainJdbcTemplate") operations: NamedParameterJdbcOperations,
#Qualifier("mainJdbcConverter") jdbcConverter: JdbcConverter,
context: JdbcMappingContext,
dialect: Dialect
): DataAccessStrategy {
return DefaultDataAccessStrategy(
SqlGeneratorSource(context, jdbcConverter, dialect),
context,
jdbcConverter,
operations,
SqlParametersFactory(context, jdbcConverter, dialect),
InsertStrategyFactory(operations, BatchJdbcOperations(operations.jdbcOperations), dialect)
)
}
}
Now, if I save an object whose Kotlin class has a String field that maps to a jsonb column in the DB, and the holder is not set, everything works. If I set the holder to point to the non-main database, the converters stop working and I get an error like this:
ERROR: column "your_column" is of type jsonb but expression is of type character varying
Hint: You will need to rewrite or cast the expression.
Position: 174
Even if I flip the datasources, the effect is the same: it always works with the fallback one and never when it actually routes to the secondary one.
P.S. In my testcase the actual database and the connection properties are exactly the same. So the problem is not on the database side.
This had nothing to do with multiple datasources and custom mappings directly.
The issue was that my target data sources were Hikari data sources and I had forgotten to specify the property data-source-properties: stringtype=unspecified for them. As soon as I did that, everything started working correctly.
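For illustration, with a Spring Boot-managed Hikari pool the property can be set like this (a sketch; for manually built pools the equivalent is HikariConfig.addDataSourceProperty("stringtype", "unspecified")):

spring:
  datasource:
    hikari:
      data-source-properties:
        stringtype: unspecified

stringtype=unspecified is passed through to the PostgreSQL JDBC driver and makes it send String parameters with an unspecified type, so the server can cast them to jsonb.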

Spring saveAndFlush() not working with secondary data source

I set up a small Spring Boot project that I use for certain monitorings. It has various endpoints to receive data via webservice calls and is supposed to store them into two different databases.
While everything works fine for the primary database, I cannot save data to the secondary database. Reading the existing data works fine, but I cannot save anything; I always get an exception:
javax.persistence.TransactionRequiredException: no transaction is in progress
This is the code:
Everything related to Light, Motion and Temperature (LMT) is in one database. The information for Power (P) is stored in a second database (which is the one I have issues with).
My package setup is as follows:
- [Configuration]
- [Controller]
--- [Light]
--- [Motion]
--- [Temperature]
--- [Power]
- [Model]
--- [LMT-Models]
--- [P-Models]
- [Repository]
--- [lmt]
--- [PowerMonitoring]
In my Configuration package I have two persistence classes, one for LMT and one for Power, both handling the database connections for the respective repositories. The LMT one is the primary one, the Power one the secondary (or rather the non-primary):
Primary data source:
@Configuration
@EnableJpaRepositories(
    basePackages = "com.delta.Monitoring.Repository.lmt",
    entityManagerFactoryRef = "entityManagerFactory",
    transactionManagerRef = "lmtTransactionManager"
)
public class LmtPersistenceConfiguration {

    @Bean(name = "lmtDataSourceProperties")
    @ConfigurationProperties("spring.datasource")
    @Primary
    public DataSourceProperties lmtDataSourceProperties() {
        return new DataSourceProperties();
    }

    @Bean(name = "lmtDataSource")
    @ConfigurationProperties("spring.datasource.configuration")
    @Primary
    public DataSource lmtDataSource() {
        return lmtDataSourceProperties().initializeDataSourceBuilder()
                .type(BasicDataSource.class).build();
    }

    @Bean(name = "entityManagerFactory")
    @Primary
    public LocalContainerEntityManagerFactoryBean entityManagerFactory(
            EntityManagerFactoryBuilder builder) {
        return builder
                .dataSource(lmtDataSource())
                .packages(new String[]{"com.delta.Monitoring.Model.lmt"})
                .build();
    }

    @Bean(name = "lmtTransactionManager")
    @Primary
    public PlatformTransactionManager lmtTransactionManager(
            final @Qualifier("entityManagerFactory") LocalContainerEntityManagerFactoryBean lmtEntityManagerFactory) {
        return new JpaTransactionManager(lmtEntityManagerFactory.getObject());
    }
}
Secondary data source:
@Configuration
@EnableJpaRepositories(
    basePackages = "com.delta.Monitoring.Repository.PowerMonitoring",
    entityManagerFactoryRef = "entityManagerFactorySecondary",
    transactionManagerRef = "powerTransactionManager"
)
public class PowerPersistenceConfiguration {

    @Bean(name = "powerDataSourceProperties")
    @ConfigurationProperties("spring.powerdatasource")
    public DataSourceProperties powerDataSourceProperties() {
        return new DataSourceProperties();
    }

    @Bean(name = "powerDataSource")
    @ConfigurationProperties("spring.powerdatasource.configuration")
    public DataSource powerDataSource() {
        return powerDataSourceProperties().initializeDataSourceBuilder()
                .type(BasicDataSource.class).build();
    }

    @Bean(name = "entityManagerFactorySecondary")
    public LocalContainerEntityManagerFactoryBean entityManagerFactory(
            EntityManagerFactoryBuilder builder) {
        return builder
                .dataSource(powerDataSource())
                .packages(new String[]{"com.delta.Monitoring.Model.PowerMonitoring"})
                .build();
    }

    @Bean(name = "powerTransactionManager")
    public PlatformTransactionManager powerTransactionManager(
            final @Qualifier("entityManagerFactory") LocalContainerEntityManagerFactoryBean powerEntityManagerFactory) {
        return new JpaTransactionManager(powerEntityManagerFactory.getObject());
    }
}
This is what one of the Power-Repositories looks like. Since I'm not doing anything fancy with it (yet), I just extended the JpaRepository:
@Transactional("powerTransactionManager")
public interface PowerDaysRepository extends JpaRepository<PowerDay, PowerDayId> {
}
And finally, the controller, where I always get the exception:
@RestController
@RequestMapping("power")
public class PowerController {

    private static final Logger myLog = LogManager.getLogger(PowerController.class);

    @Autowired
    PowerDayDetailsRepository powerDayDetailsRepository;

    @PostMapping("/powerdaydetails")
    public boolean insertPowerDayDetails(@RequestBody List<PowerDayDetails> powerDayDetails) {
        myLog.info("POST /power/powerdaydetails");
        powerDayDetailsRepository.deleteAll();
        //List<PowerDayDetails> lines = powerDayDetailsRepository.saveAllAndFlush(powerDayDetails);
        List<PowerDayDetails> lines = powerDayDetailsRepository.saveAll(powerDayDetails);
        myLog.info("Update size: " + lines.size());
        myLog.info("Received data: " + powerDayDetails.size());
        return lines.size() == powerDayDetails.size();
    }
}
When I call the /powerdaydetails endpoint, I want to delete all existing rows first and then save the newly received data.
When I use the saveAll() method, I don't get the exception, but nothing is stored in the database either. This is the log:
2022-10-20 10:00:12.067 INFO 22842 --- [http-nio-8321-exec-88] c.m.H.Controller.Power.PowerController : POST /power/powerdaydetails
2022-10-20 10:00:12.639 INFO 22842 --- [http-nio-8321-exec-88] c.m.H.Controller.Power.PowerController : Update size: 582
2022-10-20 10:00:12.639 INFO 22842 --- [http-nio-8321-exec-88] c.m.H.Controller.Power.PowerController : Received data: 582
When I use the saveAllAndFlush() method, the above-mentioned exception occurs and, needless to say, nothing gets stored in the database either.
I've read a lot about @Transactional and also about @Modifying, but that never solved my problem. The primary data source and its repositories and controllers work perfectly and save all the data directly. It seems I have made a mistake with the second data source, or haven't told Spring where/how to find a transaction for it.
It's quite simple and was in front of my eyes the whole time:
In the secondary data source configuration I named the entity manager factory bean "entityManagerFactorySecondary" but in the transaction manager method I only referred to "entityManagerFactory".
This is how they must look:
@Bean(name = "entityManagerFactorySecondary")
public LocalContainerEntityManagerFactoryBean entityManagerFactory(
        EntityManagerFactoryBuilder builder) {
    return builder
            .dataSource(powerDataSource())
            .packages(new String[]{"com.delta.Monitoring.Model.PowerMonitoring"})
            .build();
}

@Bean(name = "powerTransactionManager")
public PlatformTransactionManager powerTransactionManager(
        final @Qualifier("entityManagerFactorySecondary") LocalContainerEntityManagerFactoryBean powerEntityManagerFactory) {
    return new JpaTransactionManager(powerEntityManagerFactory.getObject());
}

How can I configure spring r2dbc to use separate read-only and read-write DB urls?

I have a Spring Webflux application with the "org.springframework.boot:spring-boot-starter-data-r2dbc" dependency for the DB connection.
I also have a postgres cluster containing master and read-only replica. Both have separate URLs.
I am looking for an option to configure the app to use both these urls accordingly.
What is the best way to do this?
Following this PR from @mp911de, I created a custom AbstractRoutingConnectionFactory which can route to different connection factories depending on a specific key in the Reactor context.
public class ClusterConnectionFactory extends AbstractRoutingConnectionFactory {

    @Override
    protected Mono<Object> determineCurrentLookupKey() {
        return Mono.deferContextual(Mono::just)
                .filter(it -> it.hasKey("CONNECTION_MODE"))
                .map(it -> it.get("CONNECTION_MODE"));
    }
}
@Configuration
public class ClusterConnectionFactoryConfiguration {

    @Bean
    public ConnectionFactory routingConnectionFactory() {
        var clusterConnFactory = new ClusterConnectionFactory();
        var connectionFactories = Map.of(
                ConnectionMode.READ_WRITE, getDefaultConnFactory(),
                ConnectionMode.READ_ONLY, getReadOnlyConnFactory()
        );
        clusterConnFactory.setTargetConnectionFactories(connectionFactories);
        clusterConnFactory.setDefaultTargetConnectionFactory(getDefaultConnFactory());
        return clusterConnFactory;
    }

    // In this example I used Postgres
    private ConnectionFactory getDefaultConnFactory() {
        return new PostgresqlConnectionFactory(
                PostgresqlConnectionConfiguration.builder()...build());
    }

    private ConnectionFactory getReadOnlyConnFactory() {
        // similar to the above but pointing to the read-only replica
    }

    public enum ConnectionMode { // auxiliary enum used as the lookup key
        READ_WRITE,
        READ_ONLY
    }
}
Then I had to extend my repository calls with this contextual information, like:
public <S extends Entity> Mono<UUID> save(final S entity) {
    return repository.save(entity)
            .contextWrite(context -> context.put("CONNECTION_MODE", READ_WRITE));
}
This works, but unfortunately it doesn't look good: the routing is not declarative, and it interferes with the reactive chains.
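One way to at least contain the boilerplate (my own sketch, not an established API; the ConnectionModes class and its method names are made up) is to centralize the context write in a small helper and wrap repository calls with it:

public final class ConnectionModes {

    private ConnectionModes() {
    }

    // Subscribe the given publisher with the READ_ONLY routing key in its context.
    public static <T> Mono<T> readOnly(Mono<T> mono) {
        return mono.contextWrite(ctx -> ctx.put("CONNECTION_MODE", ConnectionMode.READ_ONLY));
    }

    // Subscribe the given publisher with the READ_WRITE routing key in its context.
    public static <T> Mono<T> readWrite(Mono<T> mono) {
        return mono.contextWrite(ctx -> ctx.put("CONNECTION_MODE", ConnectionMode.READ_WRITE));
    }
}

A call site then becomes ConnectionModes.readWrite(repository.save(entity)), which at least keeps the key name in one place.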
I would be glad if someone suggests a better solution.

Cannot connect non primary database in spring boot

I am new to Spring Boot. Following tutorials, I have built an application. But when I try to connect two MySQL databases, I succeed in connecting the first DB, while for the second the code always refers to the primary database and throws an error that the table doesn't exist.
There are multiple ways to achieve this, depending on your requirements.
One is to create two DataSource beans, with url, username and password for both databases defined in the property file. Read them through @Value and create a @Bean for each source:
@Value("${datasource.url}")
private String url;
@Value("${datasource.username}")
private String username;
@Value("${datasource.password}")
private String password;

// a second set of properties for the second database (the key names are up to you)
@Value("${datasource2.url}")
private String url2;
@Value("${datasource2.username}")
private String username2;
@Value("${datasource2.password}")
private String password2;

@Bean
@Primary
public DataSource dataSource1() {
    return DataSourceBuilder.create().username(username).password(password).url(url)
            .build();
}

@Bean
public DataSource dataSource2() {
    return DataSourceBuilder.create().username(username2).password(password2).url(url2)
            .build();
}
In case you have a requirement to keep operations on both databases in sync (distributed transactions), I would suggest using JTA.

ACL security in Spring Boot

I am having issues setting up ACL through Java config in a Spring Boot application. I have created one small project to reproduce the issues.
I have tried a few different approaches. The first issue I had was with EhCache, and after I fixed that (I assume I did) I couldn't log in any more, and it looks like all the data is gone.
There are 4 classes with different configurations:
ACLConfig1.class
ACLConfig2.class
ACLConfig3.class
ACLConfig4.class
All @PreAuthorize and @PostAuthorize annotations are working as expected, except hasPermission.
The controller holds 4 endpoints: one for User, one for Admin, one public, and the last one, which gives me a headache: @PostAuthorize("hasPermission(returnObject,'administration')")
I am pretty sure that the inserts in the DB are correct. This class is one of four, the last one that I tried:
@Configuration
@EnableGlobalMethodSecurity(prePostEnabled = true, securedEnabled = true)
public class ACLConfig4 {

    @Autowired
    DataSource dataSource;

    @Bean
    public EhCacheBasedAclCache aclCache() {
        return new EhCacheBasedAclCache(aclEhCacheFactoryBean().getObject(), permissionGrantingStrategy(), aclAuthorizationStrategy());
    }

    @Bean
    public EhCacheFactoryBean aclEhCacheFactoryBean() {
        EhCacheFactoryBean ehCacheFactoryBean = new EhCacheFactoryBean();
        ehCacheFactoryBean.setCacheManager(aclCacheManager().getObject());
        ehCacheFactoryBean.setCacheName("aclCache");
        return ehCacheFactoryBean;
    }

    @Bean
    public EhCacheManagerFactoryBean aclCacheManager() {
        return new EhCacheManagerFactoryBean();
    }

    @Bean
    public DefaultPermissionGrantingStrategy permissionGrantingStrategy() {
        ConsoleAuditLogger consoleAuditLogger = new ConsoleAuditLogger();
        return new DefaultPermissionGrantingStrategy(consoleAuditLogger);
    }

    @Bean
    public AclAuthorizationStrategy aclAuthorizationStrategy() {
        return new AclAuthorizationStrategyImpl(new SimpleGrantedAuthority("ROLE_ADMINISTRATOR"));
    }

    @Bean
    public LookupStrategy lookupStrategy() {
        return new BasicLookupStrategy(dataSource, aclCache(), aclAuthorizationStrategy(), new ConsoleAuditLogger());
    }

    @Bean
    public JdbcMutableAclService aclService() {
        return new JdbcMutableAclService(dataSource, lookupStrategy(), aclCache());
    }

    @Bean
    public DefaultMethodSecurityExpressionHandler defaultMethodSecurityExpressionHandler() {
        return new DefaultMethodSecurityExpressionHandler();
    }

    @Bean
    public MethodSecurityExpressionHandler createExpressionHandler() {
        DefaultMethodSecurityExpressionHandler expressionHandler = defaultMethodSecurityExpressionHandler();
        expressionHandler.setPermissionEvaluator(new AclPermissionEvaluator(aclService()));
        expressionHandler.setPermissionCacheOptimizer(new AclPermissionCacheOptimizer(aclService()));
        return expressionHandler;
    }
}
What am I missing here? Why do I have no data if I use ACLConfig3.class or ACLConfig4.class? Is there any example of how this should be configured programmatically in Spring Boot?
The reason why you have no data was a bit tricky to find out. As soon as you define a MethodSecurityExpressionHandler bean in your config, there is no data in the database tables. This is because your data.sql file isn't executed.
Before explaining why data.sql isn't executed I'd first like to point out that you're not using the file as intended.
data.sql is executed by spring-boot after hibernate has been initialized and normally only contains DML statements. Your data.sql contains both DDL (schema) statements and DML (data) statements. This isn't ideal, as some of your DDL statements clash with hibernate's hibernate.hbm2ddl.auto behaviour (note that spring-boot uses 'create-drop' when an embedded DataSource is being used). You should put your DDL statements in schema.sql and your DML statements in data.sql. As you're manually defining all tables, you should disable hibernate.hbm2ddl.auto (by adding spring.jpa.hibernate.ddl-auto=none to application.properties).
That being said, let's take a look at why data.sql isn't executed.
The execution of data.sql is triggered via an ApplicationEvent that's fired via a BeanPostProcessor. This BeanPostProcessor (DataSourceInitializedPublisher) is created as a part of spring-boot's Hibernate/JPA auto configuration (see org.springframework.boot.autoconfigure.orm.jpa.HibernateJpaAutoConfiguration, org.springframework.boot.autoconfigure.orm.jpa.DataSourceInitializedPublisher and org.springframework.boot.autoconfigure.jdbc.DataSourceInitializer).
Normally the DataSourceInitializedPublisher is created before the (embedded) DataSource and everything works as expected, but defining a custom MethodSecurityExpressionHandler alters the normal bean creation order.
As you've configured @EnableGlobalMethodSecurity, you're automatically importing GlobalMethodSecurityConfiguration.
spring-security related beans are created early on. As your MethodSecurityExpressionHandler requires a DataSource for the ACL stuff and the spring-security related beans require your custom MethodSecurityExpressionHandler, the DataSource is created earlier than usual; in fact it's created so early on that spring-boot's DataSourceInitializedPublisher isn't created yet.
The DataSourceInitializedPublisher is created later on but as it didn't notice the creation of a DataSource bean, it also doesn't trigger the execution of data.sql.
So long story short: the security configuration alters the normal bean creation order which results in data.sql not being loaded.
I guess that fixing the bean creation order would do the trick, but as I don't know how (without further experimentation), I propose the following solution: manually define your DataSource and take care of data initialization yourself.
@Configuration
public class DataSourceConfig {

    @Bean
    public EmbeddedDatabase dataSource() {
        return new EmbeddedDatabaseBuilder().setType(EmbeddedDatabaseType.H2)
                // as your data.sql file contains both DDL & DML you might want to rename it (e.g. init.sql)
                .addScript("classpath:/data.sql")
                .build();
    }
}
As your data.sql file contains all the DDL required by your application, you can disable hibernate.hbm2ddl.auto by adding spring.jpa.hibernate.ddl-auto=none to application.properties.
When you define your own DataSource, spring-boot's DataSourceAutoConfiguration normally backs out, but if you want to be sure you can also exclude it (optional):
@SpringBootConfiguration
@EnableAutoConfiguration(exclude = DataSourceAutoConfiguration.class)
@ComponentScan
@EnableCaching
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}
This should fix your 'no data' problem. But in order to get everything working as expected you need to make 2 more modifications.
First of all, you should only define one MethodSecurityExpressionHandler bean. Currently you're defining two MethodSecurityExpressionHandler beans. Spring-security won't know which one to use and will (silently) use its own internal MethodSecurityExpressionHandler instead. See org.springframework.security.config.annotation.method.configuration.GlobalMethodSecurityConfiguration#setMethodSecurityExpressionHandler.
@Configuration
@EnableGlobalMethodSecurity(prePostEnabled = true, securedEnabled = true)
public class MyACLConfig {
    //...
    @Bean
    public MethodSecurityExpressionHandler createExpressionHandler() {
        DefaultMethodSecurityExpressionHandler securityExpressionHandler = new DefaultMethodSecurityExpressionHandler();
        securityExpressionHandler.setPermissionEvaluator(new AclPermissionEvaluator(aclService()));
        securityExpressionHandler.setPermissionCacheOptimizer(new AclPermissionCacheOptimizer(aclService()));
        return securityExpressionHandler;
    }
}
The last thing you need to do is make the getId() method in Car public.
@Entity
public class Car {
    //...
    public long getId() {
        return id;
    }
    //...
}
The standard ObjectIdentityRetrievalStrategy will look for a public method 'getId()' when trying to determine an object's identity during ACL permission evaluation.
(Note that I've based my answer upon ACLConfig4.)
