@Cacheable key on multiple method arguments - java

From the Spring documentation:
@Cacheable(value="bookCache", key="#isbn")
public Book findBook(ISBN isbn, boolean checkWarehouse, boolean includeUsed)
How can I specify @Cacheable to use both isbn and checkWarehouse as the key?

Update: The current Spring cache implementation uses all method parameters as the cache key if not specified otherwise. If you want to use only selected parameters, refer to Arjan's answer, which uses a SpEL list such as {#isbn, #includeUsed}; that is the simplest way to create a unique compound key.
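Applied to the method from the question, selecting just the two parameters you care about would look something like this (a sketch; adjust the parameter list to your needs):
@Cacheable(value="bookCache", key="{ #isbn, #checkWarehouse }")
public Book findBook(ISBN isbn, boolean checkWarehouse, boolean includeUsed)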
From the Spring documentation:
The default key generation strategy changed with the release of Spring 4.0. Earlier versions of Spring used a key generation strategy that, for multiple key parameters, only considered the hashCode() of parameters and not equals(); this could cause unexpected key collisions (see SPR-10237 for background). The new SimpleKeyGenerator uses a compound key for such scenarios.
Before Spring 4.0
I suggest concatenating the values of the parameters in a SpEL expression, e.g. key="#checkWarehouse.toString() + #isbn.toString()". This should work because org.springframework.cache.interceptor.ExpressionEvaluator returns an Object that is then used as the key, so you don't have to produce an int from your SpEL expression.
As for a hash code with a high collision probability: you can't use it as the key.
Someone in this thread suggested using T(java.util.Objects).hash(#p0, #p1, #p2), but it WILL NOT WORK and the approach is easy to break. For example, using the data from SPR-9377:
System.out.println( Objects.hash("someisbn", new Integer(109), new Integer(434)));
System.out.println( Objects.hash("someisbn", new Integer(110), new Integer(403)));
Both lines print -636517714 on my environment.
P.S. Actually, the reference documentation contains
@Cacheable(value="books", key="T(someType).hash(#isbn)")
public Book findBook(ISBN isbn, boolean checkWarehouse, boolean includeUsed)
I think this example is WRONG and misleading and should be removed from the documentation, as cache keys should be unique.
P.P.S. Also see https://jira.springsource.org/browse/SPR-9036 for some interesting ideas regarding the default key generation.
I'd also like to add, for the sake of correctness and as an entertaining mathematical/computer-science aside, that unlike the built-in hash, a secure cryptographic hash function like MD5 or SHA-256 IS, due to the properties of such functions, suitable for this task; however, computing it on every call may be too expensive. See, for example, Dan Boneh's cryptography course to learn more.

After some limited testing with Spring 3.2, it seems one can use a SpEL list: {..., ..., ...}. This can also include null values. Spring passes the list as the key to the actual cache implementation. When using Ehcache, this will at some point invoke List#hashCode(), which takes all its items into account. (I am not sure whether Ehcache relies only on the hash code.)
I use this for a shared cache, in which I also include the method name in the key, which the Spring default key generator does not do. This way I can easily wipe the (single) cache without risking (too many) key collisions between different methods. Like:
#Cacheable(value="bookCache",
key="{ #root.methodName, #isbn?.id, #checkWarehouse }")
public Book findBook(ISBN isbn, boolean checkWarehouse)
...
#Cacheable(value="bookCache",
key="{ #root.methodName, #asin, #checkWarehouse }")
public Book findBookByAmazonId(String asin, boolean checkWarehouse)
...
Of course, if many methods need this and you're always using all parameters for your key, then one can also define a custom key generator that includes the class and method name:
<cache:annotation-driven mode="..." key-generator="cacheKeyGenerator" />
<bean id="cacheKeyGenerator" class="net.example.cache.CacheKeyGenerator" />
...with:
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

public class CacheKeyGenerator
        implements org.springframework.cache.interceptor.KeyGenerator {

    @Override
    public Object generate(final Object target, final Method method,
            final Object... params) {
        // compound key: declaring class, method name and all arguments
        final List<Object> key = new ArrayList<>();
        key.add(method.getDeclaringClass().getName());
        key.add(method.getName());
        for (final Object o : params) {
            key.add(o);
        }
        return key;
    }
}
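If you are on Spring 4+ and prefer Java configuration over the XML above, the same generator can presumably be registered like this (a sketch; CacheConfig is a hypothetical class name):

import org.springframework.cache.annotation.CachingConfigurerSupport;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.cache.interceptor.KeyGenerator;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableCaching
public class CacheConfig extends CachingConfigurerSupport {

    @Override
    public KeyGenerator keyGenerator() {
        // use the compound class/method/params key for all cacheable methods
        return new CacheKeyGenerator();
    }
}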

You can use a Spring EL expression, e.g. on JDK 1.7:
@Cacheable(value="bookCache", key="T(java.util.Objects).hash(#p0, #p1, #p2)")

You can use the Spring SimpleKey class:
@Cacheable(value = "bookCache", key = "new org.springframework.cache.interceptor.SimpleKey(#isbn, #checkWarehouse)")

This will work:
@Cacheable(value="bookCache", key="#checkWarehouse.toString().concat(#isbn.toString())")

Use this:
@Cacheable(value="bookCache", key="#isbn + '_' + #checkWarehouse + '_' + #includeUsed")

How can I cache by method parameter in Spring Boot?

I use Redis for caching and have the following service method:
#Cacheable(value = "productCache")
#Override
public List<ProductDTO> findAllByCategory(Category category) {
// code omitted
return productDTOList;
}
When I pass categoryA to this method, the result is cached and is kept during expiration period. If I pass categoryB to this method, it is retrieved from database and then kept in cache. Then if I pass categoryA again, it is retrieved from cache.
1. I am not sure if this is normal, because I only use the value parameter ("productCache") of the @Cacheable annotation and have no idea how it caches the categoryA and categoryB results separately. Could you please explain how this works?
2. As mentioned on this page, there is also a key parameter. But when I use it as shown below, it seems to make no difference and behaves exactly as above. Is that normal, or am I missing something?
#Cacheable(value = "productCache", key="#category")
#Override
public List<ProductDTO> findAllByCategory(Category category) {
// code omitted
return productDTOList;
}
3. Should I get cache via Cache cache = cacheManager.getCache("productCache#" + category); ?
Caches are essentially key-value stores, where – in Spring –
the key is generated from the method parameter(s)
the value is the result of the method invocation
The default key generation algorithm works like this (taken right from Spring docs):
If no params are given, return SimpleKey.EMPTY.
If only one param is given, return that instance.
If more than one param is given, return a SimpleKey that contains all parameters.
This approach works well for most use-cases, as long as parameters have natural keys and implement valid hashCode() and equals() methods. If that is not the case, you need to change the strategy.
So in your example, the category object acts as the key by default (for this to work, the Category class should implement hashCode() and equals() correctly). Writing key="#category" is hence redundant and has no effect. If your Category class has an id property, however, you could write key="#category.id".
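For example, a minimal sketch of that last variant, assuming Category exposes an id property with a getter:

@Cacheable(value = "productCache", key = "#category.id")
@Override
public List<ProductDTO> findAllByCategory(Category category) {
    // code omitted; entries are now cached per category id rather than per Category instance
    return productDTOList;
}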
Should I get cache via Cache cache = cacheManager.getCache("productCache#" + category); ?
No, since there is no such cache. You only have one single cache named productCache.

How can I handle a giant static configuration class used in multiple applications?

A related group of applications I've been assigned share a database table for configuration values, with columns 'application', 'config_name', 'config_type' (e.g. String, Integer), and 'config_value'. There's also a stored procedure that takes in a string (applicationName) and returns all config names, types, and values where applicationName == application.
In each application, a wrapper class is instantiated which contains a static ThreadLocal (hereafter 'static config'), and that static config pulls all values from the config table for the application.
When loading configuration values, the stored procedure returns a massive list of properties that is iterated over with a massive chain of if-else statements, each testing whether the 'config_name' column matches a string literal and, if so, loading the value into a differently named variable.
EX:
if (result.isBeforeFirst()) {
    while (result.next()) {
        if (result.getString("config_name").equals("myConfig1")) {
            myConfigurationValue1 = result.getString("config_value");
        } else if (result.getString("config_name").equals("myConfig2")) {
            myConfigurationValue2 = result.getString("config_value");
        }
    }
}
These cover between 60-100ish configs per app, and each application has an identical Configuration class save for the names of the properties they're trying to read.
So my questions are:
Having one gigantic configuration class is poor design, right? I'm not entirely sure how to break them down and I can't share them here, but I'm assuming best practice would be to have multiple configuration classes that each hold the settings needed to perform a particular operation, e.g. 'LocalDatabaseConfig' or '(ExternalSystemName)DatabaseConfig'?
Once broken down, what's the best way to get config values where needed without static access? If I have each class instantiate the configuration it needs I'll be doing a lot of redundant DB operations, but if I just pass them from the application entry point then many classes have to hold on to data they don't need just to feed it to later classes... Is this a time when static config classes are the best option?
Is there an elegant way to load properties from the DB (in core Java - the company is very particular about third-party libraries) without this massive if-else chain? I keep thinking that ideally we'd just dynamically load each property as it's referenced, but the only way I can think to do that is to use another stored procedure that takes a unique identifier for a property and loads it that way, and that would involve even more string literals...
(Might be invalidated by 3) Is there a better way for the comparison in the pseudo-code above to test for a property rather than using a string literal? Could this be resolved if we just agreed to name our configuration properties in the application the same way they're named in the DB?
Currently every application just copy-pastes this configuration class and replaces the string literals and variable names; many of the values are unique in name and value, some are unique in value but are named the same between applications (and vice versa), and some are the same name and value for each application, but because the stored procedure fetches values based on application, redundant db entries are necessary (despite that many such values are supposed to be the same at all times, and any change to one needs to be performed on the other versions as well). Would it make sense to create a core library class that can construct any of the proposed 'broken down' configuration classes? IE, every application needs some basic logging configurations that don't change across the applications. We already have a core library that's a dependency for each application, but I don't know whether it would make sense add all/some/none of the configuration classes to the core library...
Thanks for your help! Sorry for the abundance of questions!
The cascading if-then-else might be eliminated by using a while loop to copy the database-query results into two maps: a Map<String, String> for the string-based configuration variables, and a Map<String, Integer> for the integer configuration variables. Then the class could provide the following operations:
public String lookupStringVariable(String name, String defaultValue) {
    String value = stringMap.get(name);
    if (value == null) {
        return defaultValue;
    } else {
        return value;
    }
}

public int lookupIntVariable(String name, int defaultValue) {
    Integer value = intMap.get(name);
    if (value == null) {
        return defaultValue;
    } else {
        return value.intValue();
    }
}
If there is a requirement (perhaps for runtime performance) to have the configuration values stored in fields of the configuration class, then the configuration class could make the above two operations private and use them to initialize fields. For example:
logLevel = lookupIntVariable("log_level", 2);
logDir = lookupStringVariable("log_dir", "/tmp");
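For completeness, the loop that fills the two maps might look roughly like this (a sketch assuming the same ResultSet columns as in the question, and that 'config_type' distinguishes integer values from string values):

while (result.next()) {
    String name = result.getString("config_name");
    String type = result.getString("config_type");
    String value = result.getString("config_value");
    if ("Integer".equals(type)) {
        intMap.put(name, Integer.valueOf(value));
    } else {
        stringMap.put(name, value);
    }
}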
An alternative (but complementary) suggestion is to write a code generator application that will query the DB table and generate a separate Java class for each value in the application column of the DB table. The implementation of a generated Java class would use whatever coding approach you like to query the DB table and retrieve the application-specific configuration variables. Once you have written this generator application, you can rerun it whenever the DB table is updated to add/modify configuration variables. If you decide to write such a generator, you can use print() statements to generate the Java code. Alternatively, you might use a template engine to reduce some of the verbosity associated with print() statements. An example of a template engine is Velocity, but the Comparison of web template engines Wikipedia article lists dozens more.
You would be better off separating the database access from the application initialisation. A basic definition would be Map<String,String> returned by querying for one application's settings:
Map<String,String> config = dbOps.getConfig("myappname");
// which populates the map from the config_name/config_value columns, e.g.:
// config.put(result.getString("config_name"), result.getString("config_value"));
Your application code then can initialise from the single application settings:
void init(Map<String,String> config) {
    myConfigurationValue1 = config.get("myConfig1");
    myConfigurationValue2 = config.get("myConfig2");
}
A benefit of this decoupling is that you can define test cases for your application by hardwiring different permutations of Map settings, without needing a huge database of test configurations, and you can test the config loader independently of your application logic.
Once this is working, you might consider whether dbOps.getConfig("myappname") should cache the per-application settings to avoid excessive queries (if they don't change in the database), or whether to declare Config as a class backed by the Map but with get / getInt calls that take default values and throw a RuntimeException for missing keys:
void init(Config config) {
    myConfigurationValue1 = config.get("myConfig1", "aDefaultVal");
    myConfigurationInt2 = config.getInt("myConfig2", 100);
}
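A minimal sketch of such a Config wrapper, under the assumptions above (the class and method names are illustrative, not an existing API):

import java.util.Map;

public class Config {

    private final Map<String, String> values;

    public Config(Map<String, String> values) {
        this.values = values;
    }

    // returns the default when the key is absent
    public String get(String name, String defaultValue) {
        String value = values.get(name);
        return value != null ? value : defaultValue;
    }

    public int getInt(String name, int defaultValue) {
        String value = values.get(name);
        return value != null ? Integer.parseInt(value) : defaultValue;
    }

    // strict variant: fails fast on a missing key
    public String get(String name) {
        String value = values.get(name);
        if (value == null) {
            throw new RuntimeException("Missing config key: " + name);
        }
        return value;
    }
}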

How to implement Seed and Next when extending UserVersionType

I'm trying to implement a String-based UserVersionType. I have found enough examples to understand how to use the UserType methods to an extent, but I can't find anything that shows me exactly what to do with next() and seed().
So I have something like this
public class StringVersionType implements UserType, UserVersionType {
    ...
    public int compare(Object o1, Object o2) {
        String a = (String) o1;
        String b = (String) o2;
        return a.compareTo(b);
    }

    public Object next(Object arg0, SharedSessionContractImplementor arg1) {
        return "DUMMY SEED"; // + LocalTime.now().toString();
    }

    public Object seed(SharedSessionContractImplementor session) {
        return "DUMMY SEED"; // LocalTime.now().toString();
    }
}
I've tried adding simple code that returns a string that is always the same, and code that might change the version number, but I always get an error on update. Looking at the Hibernate console output, when I add almost anything to these UserVersionType methods Hibernate stops doing a select followed by an update and instead goes straight to a new insert query, which then fails because the primary key already exists.
Obviously I'm misunderstanding what seed and next should do, but I can't find any useful documentation.
Can anyone tell me more about how to use them?
Seed:
Generate an initial version.
Parameters:
session - The session from which this request originates. May be null; currently this only happens during startup when trying to determine the "unsaved value" of entities.
Returns:
an instance of the type
@Override
public Object seed(SharedSessionContractImplementor session) {
    return ( (UserVersionType) userType ).seed( session );
}
For properties mapped as either version or timestamp, the insert statement gives you two options. You can either specify the property in the properties_list, in which case its value is taken from the corresponding select expressions, or omit it from the properties_list, in which case the seed value defined by the org.hibernate.type.VersionType is used
next:
Increment the version.
Parameters:
session - The session from which this request originates.
current - the current version
Returns:
an instance of the type
public Object next(Object current, SessionImplementor session) {
    return ( (UserVersionType) userType ).next( current, session );
}
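Putting seed() and next() together for a String-based version, a minimal sketch (assuming the Hibernate 5 SharedSessionContractImplementor signatures from the question, and a version column wide enough to hold a numeric timestamp string) could look like this:

public Object seed(SharedSessionContractImplementor session) {
    // initial version value written when the entity is first inserted
    return Long.toString(System.currentTimeMillis());
}

public Object next(Object current, SharedSessionContractImplementor session) {
    // must return a value different from 'current' so the optimistic-lock
    // check can detect the update; nanoTime avoids duplicates on rapid updates
    return Long.toString(System.nanoTime());
}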
From the docs:
"UPDATE statements, by default, do not effect the version or the timestamp attribute values for the affected entities. However, you can force Hibernate to set the version or timestamp attribute values through the use of a versioned update. This is achieved by adding the VERSIONED keyword after the UPDATE keyword. Note, however, that this is a Hibernate specific feature and will not work in a portable manner. Custom version types, org.hibernate.usertype.UserVersionType, are not allowed in conjunction with a update versioned statement."
Other Docs:
Dedicated version number
The version number mechanism for optimistic locking is provided through the @Version annotation.
The @Version annotation
@Entity
public class Flight implements Serializable {
    ...
    @Version
    @Column(name="OPTLOCK")
    public Integer getVersion() { ... }
}
Here, the version property is mapped to the OPTLOCK column, and the entity manager uses it to detect conflicting updates, and prevent the loss of updates that would be overwritten by a last-commit-wins strategy.
The version column can be any kind of type, as long as you define and implement the appropriate UserVersionType.
Your application is forbidden from altering the version number set by Hibernate. To artificially increase the version number, check the documentation for LockModeType.OPTIMISTIC_FORCE_INCREMENT or LockModeType.PESSIMISTIC_FORCE_INCREMENT in the Hibernate Entity Manager reference documentation.
Database-generated version numbers
If the version number is generated by the database, for example by a trigger, use the annotation @org.hibernate.annotations.Generated(GenerationTime.ALWAYS).
Declaring a version property in hbm.xml
<version
    column="version_column"
    name="propertyName"
    type="typename"
    access="field|property|ClassName"
    unsaved-value="null|negative|undefined"
    generated="never|always"
    insert="true|false"
    node="element-name|@attribute-name|element/@attribute|."
/>
This is all I can find in the documentation to help you understand why and how to use those methods. Give me feedback about any parts that are irrelevant, due to my misunderstanding the question, and I'll remove them.

ZK MVVM Validation - Dependent Property Array content?

I'm playing around with the ZK 8 MVVM form validation system and generally it seems to do what I want, but I wonder what the definition of the dependent property index is...
Let's take a simple validator...
public class FormValidator extends AbstractValidator {

    @Override
    public void validate(final ValidationContext ctx) {
        Property[] properties = ctx.getProperties("firstName");
        Object value0 = properties[0].getValue();
        Object value1 = properties[1].getValue();
    }
}
So, when this is called before the save command, I get a Property[] array of length 2 for every property. But I have yet to find out what is stored in [0] and what is stored in [1]. Sometimes it seems that [0] stores the current value (which may or may not be valid according to the field validator there) and [1] the last valid entry... But sometimes it seems to be the other way round...
The examples in the documentation always seem to simply take the first element ([0]) for validation, but I would like to understand what both parts of this pair actually mean...
Anyone got an idea for that?
I might be off the mark with my answer, but if you are using ZK 8, you should look into using Form binding.
That way you do not have to handle Properties in your validator and can retrieve a proxy object matching the bean you use for your form.
Say you are using a User POJO with firstName and lastName attributes:
User myProxy = (User) ctx.getProperty().getValue();
You can then validate both fields by simply calling getFirstName() and getLastName() on myProxy.
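A sketch of a form-binding validator along those lines (assuming the hypothetical User POJO above and ZK's AbstractValidator.addInvalidMessage):

public class FormValidator extends AbstractValidator {

    @Override
    public void validate(ValidationContext ctx) {
        // with form binding, the whole form proxy arrives as a single property
        User user = (User) ctx.getProperty().getValue();
        if (user.getFirstName() == null || user.getFirstName().trim().isEmpty()) {
            addInvalidMessage(ctx, "firstName", "First name is required");
        }
        if (user.getLastName() == null || user.getLastName().trim().isEmpty()) {
            addInvalidMessage(ctx, "lastName", "Last name is required");
        }
    }
}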
Hope it helps.

Map as parameter for all fields for EJB backend calls

I am designing backend EJB calls to be invoked by a REST API.
Example EJB call:
Get all Systems
getSystems(String systemId)
Now I know that I would get a system id to get all systems.
There is also a possibility of retrieving them by some other unique id:
getSystemsByOtherId(String otherId)
There is a requirement that sort parameters could be passed in as well:
getSystems(String systemId, String sort_by, String sort_how)
Would it be better to have something like a Map as the parameter and pass all the information in it?
getSystems(Map criteria)
The key-value pairs of the Map would then hold systemId, otherId, sort_by, sort_how and more if needed in the future. Or is it better to follow another approach and have separate methods for different parameters? Or is there some other, better approach?
Thank you.
The first solution is a little cumbersome: if you want to add or remove parameters, you'd have to modify the signature of your EJB every time. The Map solution is a little dirty, since you'd have to keep track of your parameter names at runtime, and if you want to use parameters of a type other than String, you'd lose the typing information at runtime as well.
This is how I would do it, define a class that encapsulates your parameters:
public class Criteria {
    private String systemId;
    private String otherId;
    private String sortBy;
    private String sortHow;
    ...
}
and in your EJB:
getSystems(Criteria criteria)
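A hypothetical call site (assuming standard setters on Criteria, which are elided above) would then look like:

Criteria criteria = new Criteria();
criteria.setSystemId("sys-42");   // illustrative values
criteria.setSortBy("name");
criteria.setSortHow("asc");
getSystems(criteria);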
