I am using guava-libraries LoadingCache to cache classes in my app.
Here is the class I have come up with:
public class MethodMetricsHandlerCache {

    private Object targetClass;
    private Method method;
    private Configuration config;

    private LoadingCache<String, MethodMetricsHandler> handlers = CacheBuilder.newBuilder()
            .maximumSize(1000)
            .build(new CacheLoader<String, MethodMetricsHandler>() {
                public MethodMetricsHandler load(String identifier) {
                    return createMethodMetricsHandler(identifier);
                }
            });

    private MethodMetricsHandler createMethodMetricsHandler(String identifier) {
        return new MethodMetricsHandler(targetClass, method, config);
    }

    public void setTargetClass(Object targetClass) {
        this.targetClass = targetClass;
    }

    public void setMethod(Method method) {
        this.method = method;
    }

    public void setConfig(Configuration config) {
        this.config = config;
    }

    public MethodMetricsHandler getHandler(String identifier) throws ExecutionException {
        return handlers.get(identifier);
    }
}
I am using this class as follows to cache MethodMetricsHandler instances:
...
private static MethodMetricsHandlerCache methodMetricsHandlerCache = new MethodMetricsHandlerCache();
...
MethodMetricsHandler handler = getMethodMetricsHandler(targetClass, method, config);
private MethodMetricsHandler getMethodMetricsHandler(Object targetClass, Method method, Configuration config) throws ExecutionException {
    String identifier = targetClass.getClass().getCanonicalName() + "." + method.getName();
    methodMetricsHandlerCache.setTargetClass(targetClass);
    methodMetricsHandlerCache.setMethod(method);
    methodMetricsHandlerCache.setConfig(config);
    return methodMetricsHandlerCache.getHandler(identifier);
}
My question:
Is this creating a cache of MethodMetricsHandler objects keyed on the identifier? (I have not used this before, so this is just a sanity check.)
Also, is there a better approach, given that I will have multiple instances (hundreds) of the same MethodMetricsHandler for a given identifier if I do not cache?
Yes, it does create a cache of MethodMetricsHandler objects. This approach is generally not bad, but I might be able to say more if you described your use case, because this solution is quite unusual. You have partially reinvented the factory pattern.
Also, think about these suggestions:
It's very odd that you need to call three setters before calling getHandler.
Since Configuration is not part of the key, you'll get the same object from the cache for different configurations with the same targetClass and method.
Why is targetClass an Object? You may want to pass a Class<?> instead.
Are you planning to evict objects from the cache?
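To illustrate the first two points, here is a minimal sketch of a variant that folds everything the loader needs into a composite key, so the setters go away and the Configuration becomes part of the key. HandlerKey is a hypothetical name, and this assumes Configuration implements equals()/hashCode(); the Guava calls are the ones your class already uses:
public final class HandlerKey {
    final Object targetClass;
    final Method method;
    final Configuration config;

    HandlerKey(Object targetClass, Method method, Configuration config) {
        this.targetClass = targetClass;
        this.method = method;
        this.config = config;
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof HandlerKey)) return false;
        HandlerKey k = (HandlerKey) o;
        return targetClass.equals(k.targetClass) && method.equals(k.method) && config.equals(k.config);
    }

    @Override
    public int hashCode() {
        return java.util.Objects.hash(targetClass, method, config);
    }
}

private final LoadingCache<HandlerKey, MethodMetricsHandler> handlers = CacheBuilder.newBuilder()
        .maximumSize(1000)
        .build(new CacheLoader<HandlerKey, MethodMetricsHandler>() {
            @Override
            public MethodMetricsHandler load(HandlerKey key) {
                return new MethodMetricsHandler(key.targetClass, key.method, key.config);
            }
        });
Callers then construct a HandlerKey and call handlers.get(key), so there is no window in which concurrent callers can race on the setters.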
Related
Do we have custom serialization capability for EntryProcessor or ExecutorService? The Hazelcast documentation does not specify anything in this regard, and there are no samples covering custom serialization of an EntryProcessor. We are looking for Portable serialization of the EntryProcessor.
public class SampleEntryProcessor implements EntryProcessor<SampleDataKey, SampleDataValue, SampleDataValue>, Portable {

    private static final long serialVersionUID = 1L;

    private SampleDataValue sampleDataValue;

    public SampleDataValue process(Map.Entry<SampleDataKey, SampleDataValue> entry) {
        // Sample logic here
        return null;
    }

    @Override
    public int getFactoryId() {
        return 1;
    }

    @Override
    public int getClassId() {
        return 1;
    }

    @Override
    public void writePortable(PortableWriter writer) throws IOException {
        writer.writePortable("i", sampleDataValue);
    }

    @Override
    public void readPortable(PortableReader reader) throws IOException {
        sampleDataValue = reader.readPortable("i");
    }
}
UPDATE: When I try to call the processor, I get the following error:
Exception in thread "main" java.lang.ClassCastException: com.hazelcast.internal.serialization.impl.portable.DeserializedPortableGenericRecord cannot be cast to com.hazelcast.map.EntryProcessor
at com.hazelcast.client.impl.protocol.task.map.MapExecuteOnKeyMessageTask.prepareOperation(MapExecuteOnKeyMessageTask.java:42)
at com.hazelcast.client.impl.protocol.task.AbstractPartitionMessageTask.processInternal(AbstractPartitionMessageTask.java:45)
Yes, you can use different serialization mechanisms to serialize entry processors, provided they are correctly configured on both the sender and receiver sides. So, after making sure that the Portable factory for your class is registered on the members and on the instance you are sending the entry processor from (for example, your client), it should work.
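As a rough sketch of that registration (SamplePortableFactory is a made-up name here; factory id 1 simply matches the getFactoryId() above):
// Factory the serialization service uses to recreate Portable instances.
public class SamplePortableFactory implements PortableFactory {
    @Override
    public Portable create(int classId) {
        return classId == 1 ? new SampleEntryProcessor() : null;
    }
}

// On the members:
Config memberConfig = new Config();
memberConfig.getSerializationConfig().addPortableFactory(1, new SamplePortableFactory());

// On the client that submits the entry processor:
ClientConfig clientConfig = new ClientConfig();
clientConfig.getSerializationConfig().addPortableFactory(1, new SamplePortableFactory());
The ClassCastException in your update typically means the receiving side deserialized the entry processor as a generic record because no matching Portable factory was registered there.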
I am trying to get Guava Caching working for my app. Specifically, I'm basically looking for a cache that behaves like a map:
// Here the keys are the User.getId() and the values are the respective User.
Map<Long, User> userCache = new HashMap<Long, User>();
Here's what I have so far, pieced together from various online sources (docs, blogs, articles, etc.):
// My POJO.
public class User {
    Long id;
    String name;
    // Lots of other properties.
}

public class UserCache {
    LoadingCache _cache;
    UserCacheLoader loader;
    UserCacheRemovalListener listener;

    UserCache() {
        super();
        this._cache = CacheBuilder.newBuilder()
                .maximumSize(1000)
                .expireAfterAccess(30, TimeUnit.SECONDS)
                .removalListener(listener)
                .build(loader);
    }

    User load(Long id) {
        _cache.get(id);
    }
}

class UserCacheLoader extends CacheLoader {
    @Override
    public Object load(Object key) throws Exception {
        // ???
        return null;
    }
}

class UserCacheRemovalListener implements RemovalListener<String, String> {
    @Override
    public void onRemoval(RemovalNotification<String, String> notification) {
        System.out.println("User with ID of " + notification.getKey() + " was removed from the cache.");
    }
}
But I'm not sure how or where to specify that the keys should be Long and the cached values should be User instances. I'm also looking to implement a store(User) method (basically Map#put(K,V)) as well as a getKeys() method that returns all the Long keys in the cache. Any ideas as to where I'm going awry?
Use generics:
class UserCacheLoader extends CacheLoader<Long, User> {
    @Override
    public User load(Long key) throws Exception {
        // ???
    }
}
store(User) can be implemented with Cache.put, just like you'd expect.
getKeys() can be implemented with cache.asMap().keySet().
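Putting the pieces together, a typed version of the cache might look like this (a sketch; it assumes the generics fixes above and below, and how a User is actually looked up on a miss is still up to your UserCacheLoader):
public class UserCache {
    private final LoadingCache<Long, User> cache = CacheBuilder.newBuilder()
            .maximumSize(1000)
            .expireAfterAccess(30, TimeUnit.SECONDS)
            .removalListener(new UserCacheRemovalListener())
            .build(new UserCacheLoader());

    User load(Long id) throws ExecutionException {
        return cache.get(id);  // loads via UserCacheLoader on a miss
    }

    void store(User user) {
        cache.put(user.id, user);  // Map#put-style insert
    }

    Set<Long> getKeys() {
        return cache.asMap().keySet();  // live view of the cached keys
    }
}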
You can (and should!) not only specify the return type of the overridden load method of CacheLoader to be User, but also type the RemovalListener accordingly:
class UserCacheRemovalListener implements RemovalListener<Long, User> {
    @Override
    public void onRemoval(RemovalNotification<Long, User> notification) {
        // ...
    }
}
I have a collection of data parsers that implement a common DataSource interface. I want to have a parsing method with a following signature:
public static DataSource parseData(InputStream contents, String identifier)
It's supposed to take the data to be parsed and an identifier, and use the appropriate DataSource implementation. Each DataSource is responsible for one identifier. I bet there is a more elegant way to do this than the following:
public static DataSource parseData(InputStream contents, String identifier) {
    if (DataSource1.respondsTo(identifier)) {
        return new DataSource1(contents);
    }
    // more ifs. There will likely be about 20 of those.
}
But I can't really think of anything better. Is there an appropriate design pattern to use here? Some kind of a chained list of detectors?
I'm doing this in Groovy, but Java based responses are welcome.
Given the following DataSource classes:
interface DataSource {
    boolean respondsTo(String identifier)
}

class DataSource1 implements DataSource {
    DataSource1(InputStream is) { /* magic goes here */ }
    @Override boolean respondsTo(String identifier) { identifier in ["DS1 idX", "DS1 idY", "DS1 idZ"] }
}

class DataSource2 implements DataSource {
    DataSource2(InputStream is) { /* magic goes here */ }
    @Override boolean respondsTo(String identifier) { identifier in ["DS2 idX", "DS2 idY", "DS2 idZ"] }
}

// ...

class DataSource20 implements DataSource {
    DataSource20(InputStream is) { /* magic goes here */ }
    @Override boolean respondsTo(String identifier) { identifier in ["DS20 idX", "DS20 idY", "DS20 idZ"] }
}
This solution uses an enum to facilitate mapping each identifier string into a closure that generates the DataSource.
enum DataSourceEnum {
    ds1 (["DS1 idX", "DS1 idY", "DS1 idZ"], { is -> new DataSource1(is) }),
    ds2 (["DS2 idX", "DS2 idY", "DS2 idZ"], { is -> new DataSource2(is) }),
    // ...
    ds20 (["DS20 idX", "DS20 idY", "DS20 idZ"], { is -> new DataSource20(is) })

    private final static Map<String, DataSourceEnum> dsMapping = [:]

    final Closure<DataSource> buildDataSource

    private DataSourceEnum(List<String> identifiers, Closure<DataSource> ctor) {
        DataSourceEnum.dsMapping += identifiers.collectEntries { id -> [(id):this] }
        this.buildDataSource = ctor
    }

    static DataSourceEnum identify(String id) { dsMapping[id] }
}
Now it is almost trivially easy to write the desired parseData method:
DataSource parseData(InputStream contents, String identifier) {
    DataSourceEnum.identify(identifier)?.buildDataSource(contents)
}
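If you end up doing this from plain Java instead of Groovy, a map of factory functions gives you roughly the same shape without the enum; a sketch, with the identifier lists copied from the classes above:
import java.io.InputStream;
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public final class DataSources {
    private static final Map<String, Function<InputStream, DataSource>> REGISTRY = new HashMap<>();

    static {
        for (String id : new String[] {"DS1 idX", "DS1 idY", "DS1 idZ"}) {
            REGISTRY.put(id, DataSource1::new);
        }
        for (String id : new String[] {"DS2 idX", "DS2 idY", "DS2 idZ"}) {
            REGISTRY.put(id, DataSource2::new);
        }
        // ... and so on for the remaining data sources
    }

    public static DataSource parseData(InputStream contents, String identifier) {
        Function<InputStream, DataSource> factory = REGISTRY.get(identifier);
        return factory == null ? null : factory.apply(contents);
    }
}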
I realize BalRog's answer is what you want, but I couldn't resist throwing out a reflection answer.
If you're sure your identifiers correspond to class names, e.g., idX --> ParsersidX, you could do something like this (disclaimer: I didn't compile this; there are surely some try/catches necessary):
public static DataSource parseData(InputStream contents, String identifier) {
    Class<?> dataSourceClass = Class.forName("Parsers" + identifier);
    Constructor<?> dataSourceConstructor = dataSourceClass.getDeclaredConstructor(InputStream.class);
    return (DataSource) dataSourceConstructor.newInstance(contents);
}
Finally, I saw in the comments that it's not a one-to-one match, so this approach probably won't be adequate.
Is there any way of using wildcards in #CacheEvict?
I have a multi-tenant application that sometimes needs to evict all the cached data for one tenant, but not for all tenants in the system.
Consider the following method:
#Cacheable(value="users", key="T(Security).getTenant() + #user.key")
public List<User> getUsers(User user) {
...
}
So, I would like to do something like:
#CacheEvict(value="users", key="T(Security).getTenant() + *")
public void deleteOrganization(Organization organization) {
...
}
Is there any way to do it?
The answer is: no.
And there is no easy way to achieve what you want.
Spring Cache annotations must be simple so that they are easy for cache providers to implement.
Efficient caching must be simple. There is a key and a value. If the key is found in the cache, use the value; otherwise compute the value and put it in the cache. An efficient key must have a fast and honest equals() and hashCode(). Assume you cached many (key, value) pairs from one tenant. For efficiency, different keys should have different hash codes. Now suppose you decide to evict a whole tenant. It is not easy to find the tenant's elements in the cache: you have to iterate over all cached pairs and discard the pairs belonging to that tenant. That is not efficient. It is also not atomic, so it is complicated and needs some synchronization, and synchronization is not efficient either.
Therefore, no.
But if you find a solution, tell me, because the feature you want is really useful.
As with 99% of the questions in the universe, the answer is: it depends. If your cache manager implements something that deals with this, great. But that doesn't seem to be the case.
If you're using SimpleCacheManager, the basic in-memory cache manager provided by Spring, you're probably using ConcurrentMapCache, which also comes with Spring. Although it's not possible to extend ConcurrentMapCache to deal with wildcards in keys (because the cache store is private and you can't access it), you can use it as inspiration for your own implementation.
Below is a possible implementation (I didn't really test it much beyond checking that it works). It is a plain copy of ConcurrentMapCache with a modification to the evict() method: this version inspects the key to see whether it is a regex, and in that case it iterates through all the keys in the store and evicts the ones that match.
package com.sigraweb.cache;
import java.io.Serializable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import org.springframework.cache.Cache;
import org.springframework.cache.support.SimpleValueWrapper;
import org.springframework.util.Assert;
public class RegexKeyCache implements Cache {
private static final Object NULL_HOLDER = new NullHolder();
private final String name;
private final ConcurrentMap<Object, Object> store;
private final boolean allowNullValues;
public RegexKeyCache(String name) {
this(name, new ConcurrentHashMap<Object, Object>(256), true);
}
public RegexKeyCache(String name, boolean allowNullValues) {
this(name, new ConcurrentHashMap<Object, Object>(256), allowNullValues);
}
public RegexKeyCache(String name, ConcurrentMap<Object, Object> store, boolean allowNullValues) {
Assert.notNull(name, "Name must not be null");
Assert.notNull(store, "Store must not be null");
this.name = name;
this.store = store;
this.allowNullValues = allowNullValues;
}
@Override
public final String getName() {
return this.name;
}
@Override
public final ConcurrentMap<Object, Object> getNativeCache() {
return this.store;
}
public final boolean isAllowNullValues() {
return this.allowNullValues;
}
@Override
public ValueWrapper get(Object key) {
Object value = this.store.get(key);
return toWrapper(value);
}
@Override
@SuppressWarnings("unchecked")
public <T> T get(Object key, Class<T> type) {
Object value = fromStoreValue(this.store.get(key));
if (value != null && type != null && !type.isInstance(value)) {
throw new IllegalStateException("Cached value is not of required type [" + type.getName() + "]: " + value);
}
return (T) value;
}
@Override
public void put(Object key, Object value) {
this.store.put(key, toStoreValue(value));
}
@Override
public ValueWrapper putIfAbsent(Object key, Object value) {
Object existing = this.store.putIfAbsent(key, value);
return toWrapper(existing);
}
@Override
public void evict(Object key) {
this.store.remove(key);
if (key.toString().startsWith("regex:")) {
String r = key.toString().replace("regex:", "");
for (Object k : this.store.keySet()) {
if (k.toString().matches(r)) {
this.store.remove(k);
}
}
}
}
@Override
public void clear() {
this.store.clear();
}
protected Object fromStoreValue(Object storeValue) {
if (this.allowNullValues && storeValue == NULL_HOLDER) {
return null;
}
return storeValue;
}
protected Object toStoreValue(Object userValue) {
if (this.allowNullValues && userValue == null) {
return NULL_HOLDER;
}
return userValue;
}
private ValueWrapper toWrapper(Object value) {
return (value != null ? new SimpleValueWrapper(fromStoreValue(value)) : null);
}
@SuppressWarnings("serial")
private static class NullHolder implements Serializable {
}
}
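One minimal way to wire it in, as a sketch (the bean method and cache name are placeholders; SimpleCacheManager is Spring's stock in-memory cache manager):
@Bean
public CacheManager cacheManager() {
    // Register the regex-aware cache under the name used in the annotations.
    SimpleCacheManager cacheManager = new SimpleCacheManager();
    cacheManager.setCaches(Arrays.asList(new RegexKeyCache("cacheName")));
    return cacheManager;
}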
There's also plenty of documentation out there showing how to register a custom cache implementation with your cache manager of choice. After your project is properly configured, you can use the annotation normally, like so:
@CacheEvict(value = { "cacheName" }, key = "'regex:' + #tenant + '.*'")
public void myMethod(String tenant) {
    ...
}
Again, this is far from properly tested, but it gives you a way to do what you want. If you're using another cache manager, you can extend its cache implementation similarly.
The following worked for me with a Redis cache.
Suppose you want to delete all cache entries whose keys have the prefix 'cache-name:object-name:parentKey'. Call the method below with the key value cache-name:object-name:parentKey*.
import org.springframework.data.redis.core.RedisOperations;
...
private final RedisOperations<Object, Object> redisTemplate;
...
public void evict(Object key)
{
redisTemplate.delete(redisTemplate.keys(key));
}
From RedisOperations.java
/**
 * Delete given {@code keys}.
 *
 * @param keys must not be {@literal null}.
 * @return The number of keys that were removed.
 * @see Redis Documentation: DEL
 */
void delete(Collection<K> keys);

/**
 * Find all keys matching the given {@code pattern}.
 *
 * @param pattern must not be {@literal null}.
 * @return
 * @see Redis Documentation: KEYS
 */
Set<K> keys(K pattern);
Include the tenant as part of the cache name by implementing a custom CacheResolver: extend SimpleCacheResolver and override how the cache names are resolved.
Then evict all entries of the tenant's caches:
@CacheEvict(value = {CacheName.CACHE1, CacheName.CACHE2}, allEntries = true)
But note that if you are using Redis as your backing cache, then under the hood Spring uses the KEYS command, so the solution will not scale. Once you get a few hundred thousand keys in Redis, KEYS takes around 150 ms and the Redis server bottlenecks on CPU. Naughty Spring.
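A sketch of such a resolver (hedged: tenantId() is a placeholder for however you look up the current tenant; the override is Spring's AbstractCacheResolver.getCacheNames, which SimpleCacheResolver implements):
public class TenantCacheResolver extends SimpleCacheResolver {

    public TenantCacheResolver(CacheManager cacheManager) {
        super(cacheManager);
    }

    @Override
    protected Collection<String> getCacheNames(CacheOperationInvocationContext<?> context) {
        // Suffix each declared cache name with the current tenant so each
        // tenant gets its own cache, which can then be cleared wholesale.
        return super.getCacheNames(context).stream()
                .map(name -> name + ":" + tenantId())
                .collect(Collectors.toList());
    }
}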
I had a similar issue as well, and I solved it this way.
My config class:
@Bean
RedisTemplate redisTemplate() {
    RedisTemplate template = new RedisTemplate();
    template.setConnectionFactory(lettuceConnectionFactory());
    template.setKeySerializer(new StringRedisSerializer());
    template.setValueSerializer(new RedisSerializerGzip());
    return template;
}
My util class:
public class CacheService {
    final RedisTemplate redisTemplate;

    public void evictCachesByPrefix(String prefix) {
        Set<String> keys = redisTemplate.keys(prefix + "*");
        for (String key : keys) {
            redisTemplate.delete(key);
        }
    }
}
Warning: consider KEYS as a command that should only be used in
production environments with extreme care. It may ruin performance
when it is executed against large databases.
https://redis.io/commands/keys
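Given that warning, a SCAN-based variant of evictCachesByPrefix may be safer on large databases. A sketch, assuming a String-keyed RedisTemplate and a recent Spring Data Redis (which exposes RedisTemplate.scan):
public void evictCachesByPrefix(String prefix) {
    ScanOptions options = ScanOptions.scanOptions().match(prefix + "*").count(1000).build();
    Set<String> keys = new HashSet<>();
    // SCAN walks the keyspace incrementally instead of blocking like KEYS.
    try (Cursor<String> cursor = redisTemplate.scan(options)) {
        while (cursor.hasNext()) {
            keys.add(cursor.next());
        }
    }
    if (!keys.isEmpty()) {
        redisTemplate.delete(keys);
    }
}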
I wanted to remove all stored orders from the cache, and I accomplished it this way:
@CacheEvict(value = "List<Order>", allEntries = true)
As I understand it, this will remove all lists stored under this cache value. So you can create another structure, and that can also be a kind of solution.
I solved this by leaving the AOP pattern behind in this special case.
Reads remain annotation-driven:
#Cacheable(value = "imageCache", keyGenerator = "imageKeyGenerator", unless="#result == null")
public byte[] getImageData(int objectId, int imageType, int width, int height, boolean sizeAbsolute) {
// ...
}
public boolean deleteImage(int objId, int type) {
    removeFromCacheByPrefix("imageCache", ImageCacheKeyGenerator.generateKey(objId, type));
    int rc = jdbcTemplate.update(SQL_DELETE_IMAGE, new Object[] {objId, type});
    return rc > 0;
}
As you can see, deleteImage(...) has no annotation, but it calls removeFromCacheByPrefix(...).
That is a method in the superclass of the repository, and it looks like this:
protected void removeFromCacheByPrefix(String cacheName, String prefix) {
    var cache = this.cacheManager.getCache(cacheName);
    Set<String> keys = new HashSet<String>();
    cache.forEach(entry -> {
        var key = String.valueOf(entry.getKey());
        if (key.startsWith(prefix)) {
            keys.add(key);
        }
    });
    cache.removeAll(keys);
}
It works fine for me this way!
I have an ExecutorService that is used to handle a stream of tasks. The tasks are represented by my DaemonTask class, and each task builds a response object which is passed to a response call (outside the scope of this question). I am using a switch statement to spawn the appropriate task based on an int task id. It looks something like this:
// in my API listening thread
executorService.submit(DaemonTask.buildTask(taskID));
// daemon task class
public abstract class DaemonTask implements Runnable {

    public static DaemonTask buildTask(int taskID) {
        switch (taskID) {
            case TASK_A_ID: return new WiggleTask();
            case TASK_B_ID: return new WobbleTask();
            // ...very long list...
            case TASK_ZZZ_ID: return new WaggleTask();
            default: throw new IllegalArgumentException("Unknown task id: " + taskID);
        }
    }

    public void run() {
        respond(execute());
    }

    public abstract Response execute();
}
All of my task classes (such as WiggleTask) extend DaemonTask and provide an implementation of the execute() method.
My question is simply: is this pattern reasonable? Something feels wrong when I look at my huge switch case with all its return statements. I have tried to come up with a more elegant lookup-table solution using reflection in some way, but I can't seem to figure out an approach that would work.
Do you really need so many classes? You could have one method per taskId.
final ResponseHandler handler = ... // has many methods.
// use a map, array, or enum to translate task ids into method names.
final Method method = handler.getClass().getMethod(taskArray[taskID]);
executorService.submit(new Callable<Void>() {
    public Void call() throws Exception {
        method.invoke(handler);
        return null;
    }
});
If you have to have many classes, you can do
// use a map, array, or enum to translate task ids into class names.
final Runnable runs = (Runnable) Class.forName(taskClassArray[taskID]).newInstance();
executorService.submit(new Callable<Void>() {
    public Void call() throws Exception {
        runs.run();
        return null;
    }
});
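On Java 8+, a map of constructor references is another way to avoid both the giant switch and reflection; a sketch using the ids and classes from the question (needs java.util.function.Supplier):
private static final Map<Integer, Supplier<DaemonTask>> TASKS = new HashMap<>();
static {
    TASKS.put(TASK_A_ID, WiggleTask::new);
    TASKS.put(TASK_B_ID, WobbleTask::new);
    // ... one line per task instead of one case per task
    TASKS.put(TASK_ZZZ_ID, WaggleTask::new);
}

public static DaemonTask buildTask(int taskID) {
    Supplier<DaemonTask> factory = TASKS.get(taskID);
    if (factory == null) {
        throw new IllegalArgumentException("Unknown task id: " + taskID);
    }
    return factory.get();
}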
You can use an enum:
public enum TaskBuilder
{
    // Task definitions
    TASK_A_ID(1){
        @Override
        public DaemonTask newTask()
        {
            return new WiggleTask();
        }
    },
    // etc
    ;

    // Build lookup map
    private static final Map<Integer, TaskBuilder> LOOKUP_MAP
        = new HashMap<Integer, TaskBuilder>();

    static {
        for (final TaskBuilder builder: values())
            LOOKUP_MAP.put(builder.taskID, builder);
    }

    private final int taskID;

    public abstract DaemonTask newTask();

    TaskBuilder(final int taskID)
    {
        this.taskID = taskID;
    }

    // Note: null needs to be handled somehow
    public static TaskBuilder fromTaskID(final int taskID)
    {
        return LOOKUP_MAP.get(taskID);
    }
}
With such an enum, you can then do:
TaskBuilder.fromTaskID(taskID).newTask();
Another possibility is to use a constructor field instead of a method; that is, you use reflection. It is much easier to write and it works OK, but exception handling then becomes nothing short of a nightmare:
private enum TaskBuilder
{
    TASK_ID_A(1, WiggleTask.class),
    // others
    ;

    // Build lookup map
    private static final Map<Integer, TaskBuilder> LOOKUP_MAP
        = new HashMap<Integer, TaskBuilder>();

    static {
        for (final TaskBuilder builder: values())
            LOOKUP_MAP.put(builder.taskID, builder);
    }

    private final int taskID;
    private final Constructor<? extends DaemonTask> constructor;

    TaskBuilder(final int taskID, final Class<? extends DaemonTask> c)
    {
        this.taskID = taskID;
        // This can fail...
        try {
            constructor = c.getConstructor();
        } catch (NoSuchMethodException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    // Ewww, three exceptions :(
    public DaemonTask newTask()
        throws IllegalAccessException, InvocationTargetException,
               InstantiationException
    {
        return constructor.newInstance();
    }

    // Note: null needs to be handled somehow
    public static TaskBuilder fromTaskID(final int taskID)
    {
        return LOOKUP_MAP.get(taskID);
    }
}
This enum can be used the same way as the other one.
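For completeness, a usage sketch for the reflective version; the three reflective exceptions can be funneled through their common superclass ReflectiveOperationException:
DaemonTask task;
try {
    task = TaskBuilder.fromTaskID(taskID).newTask();
} catch (ReflectiveOperationException e) {
    throw new IllegalStateException("Could not build task " + taskID, e);
}
executorService.submit(task);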